Merge branch 'master' into 'main'
See merge request ul-dsri/sandbox/sachin-sharma-in/ml-inference-service!2
- README.md +233 -62
- app/core/config.py +0 -1
README.md
CHANGED

```diff
@@ -1,93 +1,264 @@
-- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
-- [ ] [Add files using the command line](https://docs.gitlab.com/topics/git/add_files/#add-files-to-a-git-repository) or push an existing Git repository with the following command:
-- [ ] [Set up project integrations](https://gitlab.com/ul-dsri/sandbox/sachin-sharma-in/ml-inference-service/-/settings/integrations)
-## Collaborate with your team
-- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
-- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
-- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
-- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
-- [ ] [Set auto-merge](https://docs.gitlab.com/user/project/merge_requests/auto_merge/)
-- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
-On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
-Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
-Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
-State if you are open to contributions and what your requirements are for accepting them.
-Show your appreciation to those who have contributed to the project.
-For open source projects, say how it is licensed.
```

# ML Inference Service (FastAPI)

A production-ready **FastAPI** web service that serves **image classification** models.
This repo ships with a working example using **ResNet-18** (downloaded from Hugging Face) under `models/resnet-18/` and exposes a simple **REST** endpoint.

---

## ✨ What you get

- FastAPI application with clean layering (routes → controller → service)
- Hot-loaded model on startup (single instance reused per request)
- Hugging Face–compatible local model folder (`config.json`, weights, preprocessor, etc.)
- Example endpoint: `POST /predict/resnet` that accepts a base64 image and returns:
  - `prediction` (class label)
  - `confidence` (softmax probability)
  - `predicted_label` (class index)
  - `model` (model id)
  - `mediaType` (echoed)

---

## 🧭 Project Layout

```
ml-inference-service/
├─ main.py
├─ app/
│  ├─ __init__.py
│  ├─ core/
│  │  ├─ app.py                # App factory & router wiring
│  │  ├─ config.py             # Settings (app name/version/debug)
│  │  ├─ dependencies.py       # DI for model services
│  │  ├─ lifespan.py           # Startup: load model & register service
│  │  └─ logging.py            # Logger setup
│  ├─ api/
│  │  ├─ models.py             # Pydantic request/response
│  │  ├─ controllers.py        # HTTP → service orchestration
│  │  └─ routes/
│  │     ├─ prediction.py      # `POST /predict/resnet`
│  │     └─ resnet_service_manager.py   # (legacy, unused)
│  └─ services/
│     └─ inference.py          # ResNetInferenceService (load/predict)
├─ models/
│  └─ resnet-18/               # Sample HF-style model folder
├─ scripts/
│  └─ model_download.bash      # One-liner to snapshot HF weights locally
├─ requirements.in / requirements.txt
└─ test_main.http              # Example request you can run from IDEs
```

---

## 🚀 Quickstart

### 1) Install dependencies (Python 3.9+)

```bash
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

### 2) Download the sample model (ResNet-18) locally

```bash
bash scripts/model_download.bash
```

This populates `models/resnet-18/` with Hugging Face artifacts (`config.json`, weights, `preprocessor_config.json`, etc.).

### 3) Run the server

```bash
uvicorn main:app --reload
```

The server listens on `http://127.0.0.1:8000`.

### 4) Call the API

- Use `test_main.http` from your IDE (VS Code/IntelliJ) **or** curl:

```bash
curl -X POST http://127.0.0.1:8000/predict/resnet -H "Content-Type: application/json" -d '{
  "image": { "mediaType": "image/jpeg", "data": "<base64-encoded-bytes>" }
}'
```

**Response (example):**

```json
{
  "prediction": "tiger cat",
  "confidence": 0.9971,
  "predicted_label": 282,
  "model": "microsoft/resnet-18",
  "mediaType": "image/jpeg"
}
```
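
If you prefer Python over curl, here is a minimal client sketch. It assumes the server above is running locally; it uses only the standard library (no `requests`), and `build_payload`/`predict` are illustrative helper names, not part of this repo:

```python
import base64
import json
from urllib import request as urllib_request

def build_payload(image_bytes: bytes, media_type: str = "image/jpeg") -> dict:
    """Base64-encode raw image bytes into the request shape the endpoint expects."""
    return {
        "image": {
            "mediaType": media_type,
            "data": base64.b64encode(image_bytes).decode("ascii"),
        }
    }

def predict(image_path: str, url: str = "http://127.0.0.1:8000/predict/resnet") -> dict:
    """POST an image file to the prediction endpoint and return the parsed JSON."""
    with open(image_path, "rb") as f:
        payload = build_payload(f.read())
    req = urllib_request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib_request.urlopen(req) as resp:
        return json.loads(resp.read())
```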

---

## 🧩 Bring Your Own Model (BYOM)

There are **two** ways to integrate your own model.

### Option A — *Drop-in replacement (zero code changes)*

If your model is a **Hugging Face image classification** model that works with `AutoImageProcessor` and `ResNetForImageClassification` **or** a compatible `*ForImageClassification` class from `transformers`, you can simply place the model folder alongside `resnet-18` and point the service at it.

1. Put your HF-style folder under `models/<your-model-name>/` containing at least:
   - `config.json`
   - weights (e.g., `pytorch_model.bin` or `model.safetensors`)
   - `preprocessor_config.json` / `image_processor` files

2. **Choose one** of these approaches:
   - **Simplest**: Replace the contents of `models/resnet-18/` with your model files *but keep the folder name*. The existing `/predict/resnet` endpoint will now serve your model.
   - **Preferred**: Change the model id used at startup:
     - Open `app/core/lifespan.py` and modify the service initialization:

       ```python
       resnet_service = ResNetInferenceService(
           model_name="your-org/your-model",  # used for local folder name
           use_local_model=True               # loads from models/your-model/
       )
       ```

     - Ensure your local folder is `models/your-model/`.

> How folder naming works: when `use_local_model=True`, the service derives the local directory as `models/<last-segment-of-model_name>`. For `"microsoft/resnet-18"` that becomes `models/resnet-18`. For `"your-org/awesome-vit-base"`, it becomes `models/awesome-vit-base`.
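
The naming rule above can be sketched in a few lines (`local_model_dir` is a hypothetical helper name; the real logic lives inside `ResNetInferenceService`):

```python
from pathlib import Path

def local_model_dir(model_name: str, models_root: str = "models") -> Path:
    """Derive the local folder from an HF model id: keep only the last
    '/'-separated segment, as described above."""
    return Path(models_root) / model_name.split("/")[-1]

print(local_model_dir("microsoft/resnet-18"))        # models/resnet-18
print(local_model_dir("your-org/awesome-vit-base"))  # models/awesome-vit-base
```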

That’s it. No code changes elsewhere if your model is a standard image classifier.

---

### Option B — *New task/model type (minimal code: new service + route)*

If you are **not** serving a Hugging Face image classifier (e.g., object detection, segmentation, text models), implement a small service class and a route mirroring the `ResNetInferenceService` flow.

1. **Create your service** (copy and adapt `ResNetInferenceService`):
   - File: `app/services/<your_model>_service.py`
   - Responsibilities you must implement:
     - `__init__(model_name: str, use_local_model: bool)` → set `self.model_path`
     - `load_model()` → load weights & preprocessor
     - `predict(image: PIL.Image.Image) -> Dict[str, Any]` → run inference and return a dict with:

       ```python
       {
           "prediction": "<your label or structured result>",
           "confidence": <float 0..1>,
           "predicted_label": <int or meaningful code>,
           "model": "<model id>"
       }
       ```

     *Feel free to extend the payload; just update the API schema accordingly.*
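
   As a sketch of that interface (class and field names are illustrative; a real service would load weights in `load_model()` and run actual inference in `predict()` instead of the stubs below):

   ```python
   from typing import Any, Dict

   class YourModelService:
       """Illustrative skeleton mirroring the ResNetInferenceService flow."""

       def __init__(self, model_name: str, use_local_model: bool = True):
           self.model_name = model_name
           # Same folder convention as the ResNet service: last segment of the model id.
           self.model_path = (
               f"models/{model_name.split('/')[-1]}" if use_local_model else model_name
           )
           self.model = None

       def load_model(self) -> None:
           # Real code: load weights & preprocessor from self.model_path here.
           self.model = object()  # stub standing in for the loaded model

       def predict(self, image) -> Dict[str, Any]:
           if self.model is None:
               raise RuntimeError("call load_model() first")
           # Real code: preprocess `image`, run inference, post-process the output.
           return {
               "prediction": "<label>",
               "confidence": 0.0,
               "predicted_label": 0,
               "model": self.model_name,
           }
   ```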

2. **Wire the dependency**:
   - Register your service at startup in `app/core/lifespan.py` similar to ResNet:

     ```python
     from app.core.dependencies import set_resnet_service  # or create your own set/get
     from app.services.your_model_service import YourModelService

     svc = YourModelService(model_name="your-org/your-model", use_local_model=True)
     svc.load_model()
     set_resnet_service(svc)  # or create set_your_model_service(...)
     ```

   - Optionally create **new getters/setters** in `app/core/dependencies.py` if you serve multiple models in parallel (one getter per model).
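
   If you end up with several models, one alternative to a getter per model is a small name-keyed registry. This is illustrative only, not part of this repo, but it follows the same set/get pattern as `app/core/dependencies.py`:

   ```python
   from typing import Any, Dict

   # Module-level registry, analogous to the set/get pair in app/core/dependencies.py.
   _services: Dict[str, Any] = {}

   def set_service(name: str, service: Any) -> None:
       """Register a loaded model service under a short name, e.g. 'resnet'."""
       _services[name] = service

   def get_service(name: str) -> Any:
       """Look up a registered service; fail loudly if startup never registered it."""
       try:
           return _services[name]
       except KeyError:
           raise RuntimeError(f"service {name!r} not registered") from None
   ```

   Each route would then call `get_service("resnet")` (or its own key) inside a small `Depends` wrapper.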

3. **Add a route**:
   - Create `app/api/routes/your_model.py` analogous to `prediction.py`:

     ```python
     from fastapi import APIRouter, Depends
     from app.api.controllers import PredictionController
     from app.api.models import ImageRequest, PredictionResponse
     from app.core.dependencies import get_resnet_service  # or your getter
     from app.services.your_model_service import YourModelService

     router = APIRouter()

     @router.post("/predict/your-model", response_model=PredictionResponse)
     async def predict_image(request: ImageRequest, service: YourModelService = Depends(get_resnet_service)):
         controller = PredictionController(service)  # reuse the controller
         return await controller.predict(request)
     ```

   - Register the router in `app/core/app.py`:

     ```python
     from app.api.routes import your_model as your_model_routes

     app.include_router(your_model_routes.router)
     ```

4. **Adjust schemas if needed**:
   - The default `PredictionResponse` in `app/api/models.py` is for single-label classification. For other tasks, either extend it or define a new response model and use it in your route’s `response_model=`.

> **Tip**: Keep your controller thin and push all model-specific logic into your service class. The server glue (DI + routes) stays identical across models.

---

## 🧪 Validating your setup

- **Startup logs** should include: `Initializing ResNet service with local model: models/<folder>` and `Model and processor loaded successfully`.
- Hitting your endpoint should return a **200** with a JSON body like the example above.
- If you see `Local model directory not found`, check your `models/<name>/` path and filenames.

---

## 🔌 Request & Response Shapes

### Request

```json
{
  "image": {
    "mediaType": "image/jpeg",
    "data": "<base64-encoded image bytes>"
  }
}
```

### Response

```json
{
  "prediction": "string label",
  "confidence": 0.0,
  "predicted_label": 0,
  "model": "your-org/your-model",
  "mediaType": "image/jpeg"
}
```
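
For reference, the two shapes above can be written down as plain Python types. This is a standalone sketch for readers; the app itself defines these as Pydantic models in `app/api/models.py`:

```python
from dataclasses import dataclass

# Standalone sketch of the wire shapes above (the real app uses Pydantic models).

@dataclass
class ImagePayload:
    mediaType: str   # e.g. "image/jpeg"
    data: str        # base64-encoded image bytes

@dataclass
class ImageRequest:
    image: ImagePayload

@dataclass
class PredictionResponse:
    prediction: str        # class label
    confidence: float      # softmax probability, 0..1
    predicted_label: int   # class index
    model: str             # model id
    mediaType: str         # echoed from the request
```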

---

## ⚙️ Configuration

Basic settings live in `app/core/config.py`. Out of the box we keep it simple:

- `app_name`, `app_version`, `debug`

If you want to make the **model** configurable without touching code, extend `Settings` with a `model_name` env var and consume it in `lifespan.py` when creating your service instance.

Example:

```python
# app/core/config.py
from pydantic_settings import BaseSettings
from pydantic import Field

class Settings(BaseSettings):
    app_name: str = Field("ML Inference Service")
    app_version: str = Field("0.1.0")
    debug: bool = Field(False)
    model_name: str = Field("microsoft/resnet-18", description="HF model id used at startup")

settings = Settings()

# app/core/lifespan.py
from app.core.config import settings

svc = ResNetInferenceService(model_name=settings.model_name, use_local_model=True)
```

Then set `MODEL_NAME=your-org/your-model` in your environment (Pydantic will map `model_name` from `MODEL_NAME`).

---

## 📦 Packaging & Deployment

- **Dev**: `uvicorn main:app --reload`
- **Prod**: Use a process manager (e.g., `gunicorn -k uvicorn.workers.UvicornWorker`) and add health checks.
- **Containerize**: Copy only `requirements.txt` and source, install wheels, and bake the `models/` folder into the image or mount it as a volume.
- **CPU vs GPU**: This example uses CPU by default. If you have CUDA, install a CUDA-enabled PyTorch build and set device placement in your service.
app/core/config.py
CHANGED

```diff
@@ -4,7 +4,6 @@ Basic configuration management.
 Starting simple - just app settings. We'll expand as needed.
 """

-from typing import Optional
 from pydantic import Field
 from pydantic_settings import BaseSettings  # Changed import
```