Use an ML model loaded in the lifespan function in the main file, in another file (i.e. an API router) #9234
-
Example Code

```python
from contextlib import asynccontextmanager

import whisper
from fastapi import FastAPI

from app.api import api

MODEL = 'base'
model = {}


@asynccontextmanager
async def lifespan(app: FastAPI):
    print(f'Loading {MODEL} model')
    model['whisper'] = whisper.load_model(MODEL)
    yield
    print('Shutting down the model...')


app = FastAPI(lifespan=lifespan)


@app.get('/')
async def root():
    return {'message': 'Server is running'}


app.include_router(api.router)
```

Description

I need to load an ML model. I understand the lifespan pattern explained in the documentation, but in that example the endpoints live in the same file where the model is loaded. My API routes are in different files, so how can I use the loaded model from them?

Operating System: Windows
Operating System Details: No response
FastAPI Version: 0.93.0
Python Version: 3.10
Additional Context: No response
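For context on what the replies below build toward: one common pattern is to move the dict that holds the model into its own module, so that `main.py` and any router can both import it without a circular import. A minimal sketch of that idea, where `app/ml.py` and the `/transcribe` route are hypothetical names, not from the original post:

```python
# app/ml.py -- hypothetical shared module; main.py's lifespan fills this dict
model: dict = {}
```

```python
# app/api/api.py -- a router in another file reading the shared dict
from fastapi import APIRouter

from app.ml import model

router = APIRouter()


@router.post('/transcribe')
async def transcribe(path: str):
    # lifespan() in main.py is assumed to have loaded the Whisper
    # model into model['whisper'] before any request arrives
    result = model['whisper'].transcribe(path)
    return {'text': result['text']}
```

`main.py` would then do `from app.ml import model` instead of defining the dict itself.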
-
For inspiration, here's what I do with my database connection. In main.py:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

from .my_database_module import db_connect, db_disconnect


@asynccontextmanager
async def lifespan(_: FastAPI):
    await db_connect()
    yield
    await db_disconnect()


app = FastAPI(lifespan=lifespan)
```

Then, in my_database_module.py:

```python
from databases import Database

DATABASE = Database("postgresql+asyncpg://my-username:hunter2@localhost:5432/db_name")


async def db_connect() -> None:
    await DATABASE.connect()


async def db_disconnect() -> None:
    if DATABASE.is_connected:
        await DATABASE.disconnect()
```

I could now import the `DATABASE` object from `my_database_module` in any other file that needs it.
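To tie this back to the original question, a router in another file can then use the same object directly, since the lifespan in main.py already connected it. A sketch, where `app/routers/items.py` and the `items` table are hypothetical:

```python
# app/routers/items.py -- hypothetical router reusing the shared connection
from fastapi import APIRouter

from ..my_database_module import DATABASE

router = APIRouter()


@router.get("/items")
async def list_items():
    # DATABASE was connected by the lifespan in main.py
    rows = await DATABASE.fetch_all("SELECT id, name FROM items")
    return [{"id": row["id"], "name": row["name"]} for row in rows]
```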
-
Hello @pelaezluis You can use a global context like this to share your model with other routes. This sample code can give you an idea.

main.py (it is not complete):

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI
from transformers import pipeline

from app.utils.fastapi_globals import g, GlobalsMiddleware


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: load a pre-trained sentiment analysis model
    sentiment_model = pipeline("sentiment-analysis")
    g.set_default("sentiment_model", sentiment_model)
    print("startup fastapi")
    yield
    # Shutdown
    del sentiment_model
    g.cleanup()


app = FastAPI(
    title="Fastapi",
    lifespan=lifespan,
)
app.add_middleware(GlobalsMiddleware)
```

natural_language.py:

```python
from fastapi import APIRouter

from app.schemas.response_schema import IPostResponseBase, create_response
from app.utils.fastapi_globals import g

router = APIRouter()


@router.post("/sentiment_analysis")
async def sentiment_analysis_prediction(
    prompt: str = "Fastapi is awesome",
) -> IPostResponseBase:
    """
    Gets a sentiment analysis prediction using an NLP model from the transformers library.
    """
    sentiment_model = g.sentiment_model
    prediction = sentiment_model(prompt)
    return create_response(message="Prediction got successfully", data=prediction)
```

A better example can be found in these two files: main.py and natural_language.py
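The `app.utils.fastapi_globals` module itself is not shown in the snippet above. Purely as a sketch of what such a utility might look like, here is a simplified version of the pattern; the implementation in the linked files may differ, and everything below is an assumption rather than the original code:

```python
# app/utils/fastapi_globals.py -- simplified sketch, not the linked repo's code
from typing import Any, Dict

from starlette.types import ASGIApp, Receive, Scope, Send


class Globals:
    """Process-wide store for objects created at startup (e.g. ML models)."""

    def __init__(self) -> None:
        self._store: Dict[str, Any] = {}

    def set_default(self, name: str, value: Any) -> None:
        # Register the value only if that name has not been set yet
        self._store.setdefault(name, value)

    def cleanup(self) -> None:
        self._store.clear()

    def __getattr__(self, name: str) -> Any:
        # Called only when normal attribute lookup fails, so g.sentiment_model
        # falls through to the store
        try:
            return self._store[name]
        except KeyError as exc:
            raise AttributeError(f"g has no attribute {name!r}") from exc


g = Globals()


class GlobalsMiddleware:
    """Pass-through here; a fuller version would reset per-request state
    (e.g. via a ContextVar) on each call."""

    def __init__(self, app: ASGIApp) -> None:
        self.app = app

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
        await self.app(scope, receive, send)
```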