This project demonstrates a full-stack AI solution with:
- Backend: FastAPI service providing an /infer endpoint for object detection using YOLOv7-tiny.
- Frontend: Streamlit app to upload images and visualize inference results.
- Deployment: Both services containerized with Docker and orchestrated via Docker Compose.

Tech Stack:
- Python 3
- FastAPI (backend REST API)
- Streamlit (frontend UI)
- YOLOv7-tiny (pretrained AI model)
- Docker & Docker Compose

Project Structure:
```
restapi/
├── backend-aiservice/
│   ├── app.py
│   ├── requirements.txt
│   ├── Dockerfile
│   └── yolov7-tiny.pt
│
├── frontend-uiservice/
│   ├── app.py
│   ├── requirements.txt
│   └── Dockerfile
│
└── docker-compose.yaml
```

Backend (FastAPI):
- Implements /infer endpoint for image inference.
- Loads YOLOv7-tiny model and returns detections.
- Run: uvicorn app:app --host 0.0.0.0 --port 8000
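
The actual implementation lives in backend-aiservice/app.py; the sketch below shows one way such an endpoint might look. The torch.hub call and the results.pandas() accessor assume the YOLOv5-style hub API that the yolov7 repo mirrors, so adapt them to however the model is really loaded (file uploads also require python-multipart to be installed).

```python
# Hypothetical minimal version of backend-aiservice/app.py.
import io

import torch
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()

# Load the pretrained YOLOv7-tiny weights once at startup.
# Assumption: the WongKinYiu/yolov7 repo's `custom` hub entrypoint is used.
model = torch.hub.load("WongKinYiu/yolov7", "custom", "yolov7-tiny.pt")


@app.post("/infer")
async def infer(file: UploadFile = File(...)):
    # Decode the uploaded image bytes into a PIL image.
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    # Run inference and return detections as a list of records
    # (xmin, ymin, xmax, ymax, confidence, class, name).
    results = model(image)
    return {"detections": results.pandas().xyxy[0].to_dict(orient="records")}
```
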
Frontend (Streamlit):
- UI with file uploader.
- Sends images to backend /infer endpoint.
- Displays results.
- Runs on port 8501.
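
A minimal sketch of such a Streamlit client, assuming the backend expects a multipart field named "file" and that AI_BASE_URL defaults to the local backend:

```python
# Hypothetical minimal version of frontend-uiservice/app.py.
import os

import requests
import streamlit as st

# Assumption: AI_BASE_URL points at the backend service.
AI_BASE_URL = os.getenv("AI_BASE_URL", "http://localhost:8000")

st.title("Object Detection Demo")

uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    st.image(uploaded, caption="Input image")
    # Forward the raw image bytes to the backend /infer endpoint.
    response = requests.post(
        f"{AI_BASE_URL}/infer",
        files={"file": (uploaded.name, uploaded.getvalue(), uploaded.type)},
    )
    # Show the returned detections as JSON.
    st.json(response.json())
```
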
Docker & Compose:
- Each service has its own Dockerfile.
- docker-compose.yaml defines services and network.
- Frontend connects to backend using AI_BASE_URL env var.
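
A sketch of what the docker-compose.yaml might contain; the service names, published ports, and AI_BASE_URL value are assumptions based on the layout described above:

```yaml
# Sketch of docker-compose.yaml (directory names follow the project tree).
services:
  backend:
    build: ./backend-aiservice
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend-uiservice
    ports:
      - "8501:8501"
    environment:
      # The frontend reaches the backend via the compose network.
      - AI_BASE_URL=http://backend:8000
    depends_on:
      - backend
```
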
Prerequisites:
- Install Docker and Docker Compose.

Steps:
- Open a terminal in the project directory.
- Run: docker-compose up --build

Access:
- Backend API (interactive docs): http://localhost:8000/docs
- Frontend UI: http://localhost:8501
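
Once the stack is up, one way to sanity-check the backend is to post an image straight to /infer; the multipart field name "file" and the sample filename are assumptions, so check http://localhost:8000/docs for the actual request schema.

```python
# Quick smoke test against the running backend.
import requests

# "sample.jpg" is a placeholder for any local test image.
with open("sample.jpg", "rb") as f:
    resp = requests.post("http://localhost:8000/infer", files={"file": f})

print(resp.status_code)
print(resp.json())
```
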
Future Improvements:
- Optimize the model with ONNX or TensorRT.
- Implement a custom YOLOv7 class instead of relying on the yolov7 GitHub repo.
- Extend the UI to handle multiple images.

References:
- FastAPI Docs: https://fastapi.tiangolo.com/
- Streamlit Docs: https://docs.streamlit.io/
- YOLOv7: https://github.com/WongKinYiu/yolov7
- Docker Docs: https://docs.docker.com/
- Docker Compose: https://docs.docker.com/compose/