sachin sharma committed
Commit d136e15 · Parents: cf8dd73, 34cf378

Merge branch 'master' into 'main'


commit README.md

See merge request ul-dsri/sandbox/sachin-sharma-in/ml-inference-service!2

Files changed (2)
  1. README.md +233 -62
  2. app/core/config.py +0 -1
README.md CHANGED
@@ -1,93 +1,264 @@
- # ml-Inference-service
-
- ## Getting started
-
- To make it easy for you to get started with GitLab, here's a list of recommended next steps.
-
- Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
-
- ## Add your files
-
- - [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- - [ ] [Add files using the command line](https://docs.gitlab.com/topics/git/add_files/#add-files-to-a-git-repository) or push an existing Git repository with the following command:
-
  ```
- cd existing_repo
- git remote add origin https://gitlab.com/ul-dsri/sandbox/sachin-sharma-in/ml-inference-service.git
- git branch -M main
- git push -uf origin main
  ```
-
- ## Integrate with your tools
-
- - [ ] [Set up project integrations](https://gitlab.com/ul-dsri/sandbox/sachin-sharma-in/ml-inference-service/-/settings/integrations)
-
- ## Collaborate with your team
-
- - [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- - [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- - [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- - [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- - [ ] [Set auto-merge](https://docs.gitlab.com/user/project/merge_requests/auto_merge/)
-
- ## Test and Deploy
-
- Use the built-in continuous integration in GitLab.
-
- - [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/)
- - [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- - [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- - [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- - [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
-
- ***
-
- # Editing this README
 
 
 
 
48
 
49
- When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thanks to [makeareadme.com](https://www.makeareadme.com/) for this template.
 
50
 
51
- ## Suggestions for a good README
 
 
 
 
52
 
53
- Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
 
 
 
 
 
 
 
 
 
54
 
55
- ## Name
56
- Choose a self-explaining name for your project.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
57
 
58
- ## Description
59
- Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
 
 
 
 
 
 
 
 
60
 
61
- ## Badges
62
- On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
63
 
64
- ## Visuals
65
- Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
66
 
67
- ## Installation
68
- Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
69
 
70
- ## Usage
71
- Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
72
 
73
- ## Support
74
- Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
 
 
 
75
 
76
- ## Roadmap
77
- If you have ideas for releases in the future, it is a good idea to list them in the README.
 
 
 
78
 
79
- ## Contributing
80
- State if you are open to contributions and what your requirements are for accepting them.
81
 
82
- For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
 
 
 
83
 
84
- You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
85
 
86
- ## Authors and acknowledgment
87
- Show your appreciation to those who have contributed to the project.
88
 
89
- ## License
90
- For open source projects, say how it is licensed.
91
 
92
- ## Project status
93
- If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
 
 
 
+ # ML Inference Service (FastAPI)
+
+ A production-ready **FastAPI** web service that serves **image classification** models.
+ This repo ships with a working example using **ResNet-18** (downloaded from Hugging Face) under `models/resnet-18/` and exposes a simple **REST** endpoint.
+
+ ---
+
+ ## What you get
+
+ - FastAPI application with clean layering (routes → controllers → services)
+ - Model loaded once at startup (a single instance is reused across requests)
+ - Hugging Face–compatible local model folder (`config.json`, weights, preprocessor, etc.)
+ - Example endpoint: `POST /predict/resnet` that accepts a base64-encoded image and returns:
+   - `prediction` (class label)
+   - `confidence` (softmax probability)
+   - `predicted_label` (class index)
+   - `model` (model id)
+   - `mediaType` (echoed back from the request)
+
+ ---
+
+ ## 🧭 Project Layout
+
  ```
+ ml-inference-service/
+ ├─ main.py
+ ├─ app/
+ │  ├─ __init__.py
+ │  ├─ core/
+ │  │  ├─ app.py                 # App factory & router wiring
+ │  │  ├─ config.py              # Settings (app name/version/debug)
+ │  │  ├─ dependencies.py        # DI for model services
+ │  │  ├─ lifespan.py            # Startup: load model & register service
+ │  │  └─ logging.py             # Logger setup
+ │  ├─ api/
+ │  │  ├─ models.py              # Pydantic request/response
+ │  │  ├─ controllers.py         # HTTP → service orchestration
+ │  │  └─ routes/
+ │  │     ├─ prediction.py       # `POST /predict/resnet`
+ │  │     └─ resnet_service_manager.py  # (legacy, unused)
+ │  └─ services/
+ │     └─ inference.py           # ResNetInferenceService (load/predict)
+ ├─ models/
+ │  └─ resnet-18/                # Sample HF-style model folder
+ ├─ scripts/
+ │  └─ model_download.bash       # One-liner to snapshot HF weights locally
+ ├─ requirements.in / requirements.txt
+ └─ test_main.http               # Example request you can run from IDEs
  ```
+
+ ---
+
+ ## 🚀 Quickstart
+
+ ### 1) Install dependencies (Python 3.9+)
+ ```bash
+ python -m venv .venv
+ source .venv/bin/activate   # Windows: .venv\Scripts\activate
+ pip install -r requirements.txt
+ ```
+
+ ### 2) Download the sample model (ResNet-18) locally
+ ```bash
+ bash scripts/model_download.bash
+ ```
+ This populates `models/resnet-18/` with Hugging Face artifacts (`config.json`, weights, `preprocessor_config.json`, etc.).
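+
+ If you prefer Python over bash, an equivalent snapshot can be fetched with `huggingface_hub` (a sketch only; it assumes the `microsoft/resnet-18` repo id used elsewhere in this README, not the contents of the script itself):
+ ```python
+ # Hypothetical Python equivalent of scripts/model_download.bash:
+ # snapshot the Hugging Face repo into the local models/ folder.
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="microsoft/resnet-18",   # HF model id assumed by this README
+     local_dir="models/resnet-18",    # where the service expects local weights
+ )
+ ```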
+
+ ### 3) Run the server
+ ```bash
+ uvicorn main:app --reload
+ ```
+ The server listens on `http://127.0.0.1:8000`.
+
+ ### 4) Call the API
+ Use `test_main.http` from your IDE (VSCode/IntelliJ) **or** curl:
+
+ ```bash
+ curl -X POST http://127.0.0.1:8000/predict/resnet \
+   -H "Content-Type: application/json" \
+   -d '{
+     "image": { "mediaType": "image/jpeg", "data": "<base64-encoded-bytes>" }
+   }'
+ ```
+
+ **Response (example):**
+ ```json
+ {
+   "prediction": "tiger cat",
+   "confidence": 0.9971,
+   "predicted_label": 282,
+   "model": "microsoft/resnet-18",
+   "mediaType": "image/jpeg"
+ }
+ ```
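+
+ A minimal Python client (a sketch; it assumes the `requests` package is installed and a local `cat.jpg` exists — the endpoint and payload shape are the ones shown above):
+ ```python
+ # Sketch: encode an image as base64 and call the /predict/resnet endpoint.
+ import base64
+ import requests
+
+ with open("cat.jpg", "rb") as f:   # any local JPEG
+     encoded = base64.b64encode(f.read()).decode("utf-8")
+
+ payload = {"image": {"mediaType": "image/jpeg", "data": encoded}}
+ resp = requests.post("http://127.0.0.1:8000/predict/resnet", json=payload, timeout=60)
+ resp.raise_for_status()
+ print(resp.json())   # e.g. {"prediction": "tiger cat", "confidence": 0.99, ...}
+ ```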
+
+ ---
+
+ ## 🧩 Bring Your Own Model (BYOM)
+
+ There are **two** ways to integrate your own model.
+
+ ### Option A — *Drop-in replacement (zero code changes)*
+
+ If your model is a **Hugging Face image classification** model that works with `AutoImageProcessor` and `ResNetForImageClassification` **or** a compatible `*ForImageClassification` class from `transformers`, you can simply place the model folder alongside `resnet-18` and point the service at it.
+
+ 1. Put your HF-style folder under `models/<your-model-name>/` containing at least:
+    - `config.json`
+    - weights (e.g., `pytorch_model.bin` or `model.safetensors`)
+    - `preprocessor_config.json` / `image_processor` files
+
+ 2. **Choose one** of these approaches:
+    - **Simplest**: Replace the contents of `models/resnet-18/` with your model files *but keep the folder name*. The existing `/predict/resnet` endpoint will now serve your model.
+    - **Preferred**: Change the model id used at startup by opening `app/core/lifespan.py` and modifying the service initialization:
+      ```python
+      resnet_service = ResNetInferenceService(
+          model_name="your-org/your-model",  # used for the local folder name
+          use_local_model=True,              # loads from models/your-model/
+      )
+      ```
+      Ensure your local folder is `models/your-model/`.
+
+ > How folder naming works: when `use_local_model=True`, the service derives the local directory as `models/<last-segment-of-model_name>`. For `"microsoft/resnet-18"` that becomes `models/resnet-18`; for `"your-org/awesome-vit-base"`, it becomes `models/awesome-vit-base`.
+
+ That’s it. No code changes elsewhere if your model is a standard image classifier.
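+
+ For reference, the folder-name derivation described in the note above amounts to the following (a sketch; the actual attribute names inside `ResNetInferenceService` may differ):
+ ```python
+ # Sketch of the local-directory derivation described above.
+ from pathlib import Path
+
+ model_name = "your-org/awesome-vit-base"
+ local_dir = Path("models") / model_name.split("/")[-1]   # -> models/awesome-vit-base
+ ```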
+
+ ---
+
+ ### Option B — *New task/model type (minimal code: new service + route)*
+
+ If you are **not** serving a Hugging Face image classifier (e.g., object detection, segmentation, text models), implement a small service class and a route mirroring the `ResNetInferenceService` flow.
+
+ 1. **Create your service** (copy and adapt `ResNetInferenceService`; a skeletal sketch follows this step):
+    - File: `app/services/<your_model>_service.py`
+    - Responsibilities you must implement:
+      - `__init__(model_name: str, use_local_model: bool)` → set `self.model_path`
+      - `load_model()` → load weights & preprocessor
+      - `predict(image: PIL.Image.Image) -> Dict[str, Any]` → run inference and return a dict with:
+        ```python
+        {
+            "prediction": "<your label or structured result>",
+            "confidence": <float 0..1>,
+            "predicted_label": <int or meaningful code>,
+            "model": "<model id>",
+        }
+        ```
+    *Feel free to extend the payload; just update the API schema accordingly.*
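+
+ A skeletal example of such a service (a sketch only: the class name, attributes beyond the three methods listed above, and the loading logic are hypothetical, not copied from the repo):
+ ```python
+ # app/services/your_model_service.py (hypothetical) — mirrors the ResNet flow:
+ # resolve a local path, load the model once, run predict() per request.
+ from pathlib import Path
+ from typing import Any, Dict
+
+ from PIL import Image
+
+
+ class YourModelService:
+     def __init__(self, model_name: str, use_local_model: bool = True):
+         self.model_name = model_name
+         # Same folder convention as the ResNet service: models/<last segment>.
+         self.model_path = Path("models") / model_name.split("/")[-1] if use_local_model else None
+         self.model = None
+         self.processor = None
+
+     def load_model(self) -> None:
+         # Load your weights and preprocessor here (transformers, torch.load, ONNX, ...).
+         ...
+
+     def predict(self, image: Image.Image) -> Dict[str, Any]:
+         # Run inference and map the raw output to the response contract.
+         return {
+             "prediction": "<label>",
+             "confidence": 0.0,
+             "predicted_label": 0,
+             "model": self.model_name,
+         }
+ ```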
+
+ 2. **Wire the dependency**:
+    - Register your service at startup in `app/core/lifespan.py`, similar to ResNet:
+      ```python
+      from app.core.dependencies import set_resnet_service  # or create your own set/get
+      from app.services.your_model_service import YourModelService
+
+      svc = YourModelService(model_name="your-org/your-model", use_local_model=True)
+      svc.load_model()
+      set_resnet_service(svc)  # or create set_your_model_service(...)
+      ```
+    - Optionally create **new getters/setters** in `app/core/dependencies.py` if you serve multiple models in parallel (one getter per model); a sketch follows this step.
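+
+ A minimal getter/setter pair (a sketch only; the existing `dependencies.py` may use a different pattern, and the names here are hypothetical):
+ ```python
+ # app/core/dependencies.py (sketch): one module-level slot per model service.
+ from typing import Optional
+
+ from app.services.your_model_service import YourModelService
+
+ _your_model_service: Optional[YourModelService] = None
+
+
+ def set_your_model_service(service: YourModelService) -> None:
+     global _your_model_service
+     _your_model_service = service
+
+
+ def get_your_model_service() -> YourModelService:
+     if _your_model_service is None:
+         raise RuntimeError("YourModelService has not been initialized at startup")
+     return _your_model_service
+ ```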
+
+ 3. **Add a route**:
+    - Create `app/api/routes/your_model.py` analogous to `prediction.py`:
+      ```python
+      from fastapi import APIRouter, Depends
+      from app.api.controllers import PredictionController
+      from app.api.models import ImageRequest, PredictionResponse
+      from app.core.dependencies import get_resnet_service  # or your getter
+      from app.services.your_model_service import YourModelService
+
+      router = APIRouter()
+
+      @router.post("/predict/your-model", response_model=PredictionResponse)
+      async def predict_image(request: ImageRequest, service: YourModelService = Depends(get_resnet_service)):
+          controller = PredictionController(service)  # reuse the controller
+          return await controller.predict(request)
+      ```
+    - Register the router in `app/core/app.py`:
+      ```python
+      from app.api.routes import your_model as your_model_routes
+
+      app.include_router(your_model_routes.router)
+      ```
+
+ 4. **Adjust schemas if needed**:
+    - The default `PredictionResponse` in `app/api/models.py` is for single-label classification. For other tasks, either extend it or define a new response model and use it in your route’s `response_model=`.
+
+ > **Tip**: Keep your controller thin and push all model-specific logic into your service class. The server glue (DI + routes) stays identical across models.
+
+ ---
+
+ ## 🧪 Validating your setup
+
+ - **Startup logs** should include: `Initializing ResNet service with local model: models/<folder>` and `Model and processor loaded successfully`.
+ - Hitting your endpoint should return a **200** with a JSON body like the example above.
+ - If you see `Local model directory not found`, check your `models/<name>/` path and filenames.
+
+ ---
+
+ ## 🔌 Request & Response Shapes
+
+ ### Request
+ ```json
+ {
+   "image": {
+     "mediaType": "image/jpeg",
+     "data": "<base64-encoded image bytes>"
+   }
+ }
+ ```
+
+ ### Response
+ ```json
+ {
+   "prediction": "string label",
+   "confidence": 0.0,
+   "predicted_label": 0,
+   "model": "your-org/your-model",
+   "mediaType": "image/jpeg"
+ }
+ ```
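+
+ In code, these shapes correspond to the Pydantic models in `app/api/models.py`. A sketch of what they plausibly look like (field and class names other than `ImageRequest` and `PredictionResponse` are assumptions based on this README, not copied from the source):
+ ```python
+ # Sketch of request/response schemas matching the JSON shapes above.
+ from pydantic import BaseModel
+
+
+ class ImagePayload(BaseModel):        # hypothetical name for the nested object
+     mediaType: str                    # e.g. "image/jpeg"
+     data: str                         # base64-encoded image bytes
+
+
+ class ImageRequest(BaseModel):
+     image: ImagePayload
+
+
+ class PredictionResponse(BaseModel):
+     prediction: str
+     confidence: float
+     predicted_label: int
+     model: str
+     mediaType: str
+ ```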
+
+ ---
+
+ ## ⚙️ Configuration
+
+ Basic settings live in `app/core/config.py`. Out of the box we keep it simple:
+ - `app_name`, `app_version`, `debug`
+
+ If you want to make the **model** configurable without touching code, extend `Settings` with a `model_name` env var and consume it in `lifespan.py` when creating your service instance.
+
+ Example:
+ ```python
+ # app/core/config.py
+ from pydantic_settings import BaseSettings
+ from pydantic import Field
+
+ class Settings(BaseSettings):
+     app_name: str = Field("ML Inference Service")
+     app_version: str = Field("0.1.0")
+     debug: bool = Field(False)
+     model_name: str = Field("microsoft/resnet-18", description="HF model id used at startup")
+
+ settings = Settings()
+
+ # app/core/lifespan.py
+ from app.core.config import settings
+
+ svc = ResNetInferenceService(model_name=settings.model_name, use_local_model=True)
+ ```
+
+ Then set `MODEL_NAME=your-org/your-model` in your environment (pydantic-settings maps the `model_name` field to the `MODEL_NAME` environment variable).
+
+ ---
+
+ ## 📦 Packaging & Deployment
+
+ - **Dev**: `uvicorn main:app --reload`
+ - **Prod**: Use a process manager (e.g., `gunicorn -k uvicorn.workers.UvicornWorker`) and add health checks.
+ - **Containerize**: Copy only `requirements.txt` and source, install wheels, and bake the `models/` folder into the image or mount it as a volume.
+ - **CPU vs GPU**: This example uses CPU by default. If you have CUDA, install a CUDA-enabled PyTorch build and set device placement in your service (a sketch follows this list).
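+
+ Device-placement sketch (assuming a PyTorch/transformers-based classifier like the ResNet example and the local `models/resnet-18` folder from the Quickstart; where this lives in your service class is up to you):
+ ```python
+ # Sketch: pick GPU when available, otherwise CPU, and keep model and inputs together.
+ import torch
+ from transformers import AutoImageProcessor, AutoModelForImageClassification
+
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+ processor = AutoImageProcessor.from_pretrained("models/resnet-18")
+ model = AutoModelForImageClassification.from_pretrained("models/resnet-18").to(device).eval()
+
+ def classify(image):  # image: PIL.Image.Image
+     inputs = processor(images=image, return_tensors="pt").to(device)
+     with torch.no_grad():
+         logits = model(**inputs).logits
+     return logits.softmax(-1).max(-1)   # (confidence, class index)
+ ```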
app/core/config.py CHANGED
@@ -4,7 +4,6 @@ Basic configuration management.
  Starting simple - just app settings. We'll expand as needed.
  """

- from typing import Optional
  from pydantic import Field
  from pydantic_settings import BaseSettings # Changed import