Location
main.py -> Model Inference Route
Issue
Synchronous execution of heavy LSTM/scikit-learn models blocks the main ASGI event loop under high concurrent load.
Proposed Fix
Offload model inference to a dedicated Celery/RQ worker pool or use asynchronous batching with a message broker (RabbitMQ/Redis).
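For the worker side, here is a minimal sketch of what a Celery task could look like, assuming a local Redis broker/backend, a joblib-serialized scikit-learn-style model at `model.joblib`, and the task name `run_inference` (all illustrative assumptions, not taken from this repo):

```python
# tasks.py -- hedged sketch; broker/backend URLs, the model path,
# and the task name are illustrative assumptions.
import joblib
from celery import Celery

celery_app = Celery(
    "inference",
    broker="redis://localhost:6379/0",   # assumed Redis broker
    backend="redis://localhost:6379/1",  # result backend for task states
)

# Load the heavy model once per worker process, not once per request.
model = joblib.load("model.joblib")  # hypothetical model artifact

@celery_app.task(name="run_inference")
def run_inference(features: list[float]) -> dict:
    # CPU-bound prediction now runs in the worker pool,
    # never on the API server's event loop.
    prediction = model.predict([features])
    return {"prediction": prediction.tolist()}
```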
Why This Matters
FastAPI's async capabilities are wasted if a CPU-bound ML task blocks the event-loop thread: while one inference runs, every other endpoint stalls, leading to request timeouts.
Difficulty
Hard (requires setting up external message brokers, modifying deployment infrastructure, and handling asynchronous task states.)
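On the API side, the route would then only enqueue work and expose a polling endpoint for task state. A hedged sketch of that enqueue/poll pattern, reusing the hypothetical `celery_app` and `run_inference` from the sketch above (endpoint paths are also assumptions):

```python
# main.py -- sketch of the enqueue/poll pattern, not this repo's routes.
from celery.result import AsyncResult
from fastapi import FastAPI

from tasks import celery_app, run_inference

app = FastAPI()

@app.post("/predict")
async def predict(features: list[float]) -> dict:
    # .delay() only serializes the payload and enqueues it, so the
    # event loop is freed immediately; inference runs in a worker.
    task = run_inference.delay(features)
    return {"task_id": task.id}

@app.get("/predict/{task_id}")
async def predict_status(task_id: str) -> dict:
    # Clients poll the task state (PENDING / STARTED / SUCCESS / FAILURE).
    result = AsyncResult(task_id, app=celery_app)
    body = {"status": result.status}
    if result.successful():
        body["result"] = result.result
    return body
```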
I would like to work on this issue.
@Eshajha19 /assign