main.py: Synchronous execution of heavy LSTM/Sklearn models blocks the main ASGI event loop #463

@aniket866

Description

Location

main.py -> Model Inference Route


Issue

Heavy LSTM/Sklearn model inference runs synchronously inside the request handler, blocking the main ASGI event loop under high concurrent load.
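A minimal, self-contained repro of the failure mode using plain asyncio (the actual route and model in main.py are not shown in this issue, so `heavy_predict` below is a hypothetical stand-in for the LSTM/Sklearn call):

```python
import asyncio
import time

def heavy_predict(x):
    # Hypothetical stand-in for a CPU-bound LSTM/Sklearn inference call.
    time.sleep(0.2)  # simulates ~200 ms of model work
    return x * 2

async def handler(x):
    # BAD: calling the sync model directly inside a coroutine blocks
    # the event loop for the full duration of inference.
    return heavy_predict(x)

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(handler(i) for i in range(5)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))  # five "concurrent" requests run serially: ~1.0 s total
```

While any one of these handlers is inside `heavy_predict`, the loop cannot service other sockets, which is exactly the timeout behaviour described below.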


Proposed Fix

Offload model inference to a dedicated Celery/RQ worker pool or use asynchronous batching with a message broker (RabbitMQ/Redis).
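As an illustration of the principle (not the full Celery/RQ setup proposed above, which needs a running broker), the blocking call can be pushed off the event loop with `loop.run_in_executor`. `heavy_predict` is again a hypothetical stand-in; note that `time.sleep` releases the GIL, so a thread pool suffices in this sketch, whereas GIL-holding sklearn/LSTM inference would want a `ProcessPoolExecutor` or the external worker pool:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

# In production this in-process pool would be replaced by a Celery/RQ
# worker pool behind RabbitMQ/Redis, as proposed in this issue.
executor = ThreadPoolExecutor(max_workers=4)

def heavy_predict(x):
    # Hypothetical stand-in for the blocking model call.
    time.sleep(0.2)
    return x * 2

async def handler(x):
    loop = asyncio.get_running_loop()
    # Offload the blocking call so the event loop stays responsive.
    return await loop.run_in_executor(executor, heavy_predict, x)

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(handler(i) for i in range(4)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))  # four requests overlap: ~0.2 s instead of ~0.8 s
```

A broker-backed worker additionally survives process restarts and lets inference scale independently of the web tier, at the cost of managing task state.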


Why This Matters

FastAPI's async capabilities are wasted if a CPU-bound ML task blocks the event loop thread, leading to request timeouts on otherwise-unrelated endpoints.


Difficulty

Hard (requires setting up an external message broker, modifying deployment infrastructure, and handling asynchronous task states).


I would like to work on this issue.

@Eshajha19 /assign
