A full-stack AI chat application featuring a FastAPI backend with OpenAI integration and a Next.js frontend using the Vercel AI SDK. The application demonstrates streaming chat responses, function calling with custom tools, and real-time AI interactions.
- Streaming Chat: Real-time streaming responses from OpenAI's GPT models
- Function Calling: Custom tool integration with weather lookup and calculator
- Multi-step Reasoning: Automatic tool execution and chained reasoning
- Modern UI: Clean, responsive chat interface built with Next.js
- Server-Sent Events: Efficient streaming using SSE protocol
- Type Safety: Pydantic models for request/response validation
- FastAPI server handling chat requests
- OpenAI API integration with streaming support
- Custom tool implementations (weather, calculator)
- Event-based streaming protocol
- CORS-enabled for local development
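The backend pieces above might fit together roughly like the sketch below. This is an orientation aid only: the `/chat` route name, payload shapes, and handler body are assumptions for illustration, not the project's actual code in `server.py`.

```python
# Hypothetical skeleton, for orientation only; see server.py for the real implementation.
import json

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse

app = FastAPI()

# CORS so the Next.js dev server (localhost:3000) can call the API during development.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)


@app.post("/chat")  # route name assumed for illustration
async def chat(body: dict):
    async def event_stream():
        # Each event is a JSON object sent as one SSE "data:" frame.
        yield f'data: {json.dumps({"type": "start"})}\n\n'
        # ... stream OpenAI text deltas and tool events here ...
        yield f'data: {json.dumps({"type": "finish"})}\n\n'

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```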
- Next.js 15 with React 19
- Vercel AI SDK (`@ai-sdk/react`) for chat management
- Real-time UI updates for tool execution
- Status indicators for tool states
- Responsive design with inline styles
- Python: 3.11 or higher
- Node.js: 18.0 or higher
- OpenAI API Key: Required for chat functionality
- uv (optional): For faster Python package management
```bash
git clone <repository-url>
cd python-open-ai-with-vercel-sdk
```

Using uv:

```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync
```

Using pip:

```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

Create a `.env` file in the project root:

```
OPENAI_API_KEY=your_openai_api_key_here
```

Install the frontend dependencies:

```bash
cd frontend
npm install
```

Start the backend:

```bash
# From project root
uv run server.py

# Or using uvicorn directly
uvicorn server:app --reload --host 0.0.0.0 --port 8000
```

The backend will be available at http://localhost:8000

Start the frontend:

```bash
# In a new terminal, from frontend directory
cd frontend
npm run dev
```

The frontend will be available at http://localhost:3000
```
python-open-ai-with-vercel-sdk/
├── server.py            # FastAPI backend server
├── partial_json.py      # JSON parsing utilities
├── pyproject.toml       # Python dependencies (uv)
├── uv.lock              # Lock file for uv
├── .env                 # Environment variables (not in repo)
├── .python-version      # Python version specification
├── README.md            # This file
├── STREAMING_FLOW.md    # Documentation on streaming flow
└── frontend/
    ├── app/
    │   ├── page.js      # Main chat interface
    │   └── layout.js    # Next.js layout
    ├── package.json     # Frontend dependencies
    └── next.config.js   # Next.js configuration
```
Streams chat responses with tool execution support.
Request Body:

```json
{
  "messages": [
    {
      "role": "user",
      "parts": [
        {
          "type": "text",
          "text": "What's the weather in San Francisco?"
        }
      ]
    }
  ]
}
```

Response: a Server-Sent Events stream with the following event types:

- `start`: Message initialization
- `text-start` / `text-delta` / `text-end`: Text content streaming
- `reasoning-start` / `reasoning-delta` / `reasoning-end`: Reasoning process (for supported models)
- `tool-input-start` / `tool-input-delta` / `tool-input-available`: Tool input streaming
- `tool-output-available`: Tool execution results
- `finish`: Message completion
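As an illustration of consuming this stream, here is a rough Python client sketch. It assumes the endpoint is `POST /chat` on `http://localhost:8000` (the path is not stated above) and uses the third-party `httpx` package, which is not a project dependency; the `delta` field name is also an assumption.

```python
# Rough client sketch; endpoint path and event field names are assumptions for illustration.
import json

import httpx

payload = {
    "messages": [
        {"role": "user", "parts": [{"type": "text", "text": "What's the weather in San Francisco?"}]}
    ]
}

with httpx.stream("POST", "http://localhost:8000/chat", json=payload, timeout=None) as response:
    for line in response.iter_lines():
        if not line.startswith("data: "):
            continue  # skip blank separator lines between SSE frames
        event = json.loads(line[len("data: "):])
        if event["type"] == "text-delta":
            print(event.get("delta", ""), end="", flush=True)  # "delta" field name assumed
        elif event["type"] == "finish":
            print()
```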
Health check endpoint.

Response:

```json
{
  "message": "Fast api server running"
}
```
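To confirm the server is up, you can call this endpoint from Python with only the standard library (assuming the health check is served at the root path, which the docs above don't state explicitly):

```python
# Quick smoke test against the health endpoint (root path assumed).
import json
from urllib.request import urlopen

with urlopen("http://localhost:8000/") as resp:
    print(json.load(resp))  # expected: {"message": "Fast api server running"}
```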
Example: "What's the weather in New York?"
Parameters:
location(string, required): City and stateunit(string, optional): "celsius" or "fahrenheit"
Add two numbers together.
Example: "Calculate 123 + 456"
Parameters:
- `a` (integer, required): First number
- `b` (integer, required): Second number
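For reference, implementations of these two tools might look roughly like the following sketch. The function names, signatures, and return shapes are assumptions; the real implementations live in `server.py` and may differ.

```python
# Illustrative sketches only; see server.py for the actual tool implementations.
async def get_weather(location: str, unit: str = "fahrenheit") -> dict:
    # A real implementation would call a weather API; here we return canned data.
    return {"location": location, "temperature": 72, "unit": unit, "conditions": "sunny"}


async def calculate(a: int, b: int) -> dict:
    # Adds two numbers and returns the result in a JSON-friendly shape.
    return {"a": a, "b": b, "result": a + b}
```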
- FastAPI: Modern Python web framework
- OpenAI Python SDK: Official OpenAI API client
- Pydantic: Data validation and settings management
- python-dotenv: Environment variable management
- uvicorn: ASGI server
- Next.js 15: React framework for production
- React 19: Latest React with concurrent features
- Vercel AI SDK: Purpose-built chat components
- @ai-sdk/react: React hooks for AI interactions
User: Hello! How are you?
Assistant: [Streams response in real-time]
User: What's the weather like in Boston?
Assistant: [Calls weather tool, displays result]
User: Can you calculate 456 * 789?
Assistant: [Calls calculator tool, shows computation]
User: What's the weather in Miami and also calculate 100 + 200?
Assistant: [Executes both tools, synthesizes response]
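Under the hood, turns like the last example are typically handled as a loop: call the model, run any requested tools, append the tool results to the conversation, and call the model again until it answers without requesting more tools. Below is a simplified, non-streaming sketch of that pattern; it is not the project's actual streaming implementation, and the helper name and message handling are assumptions.

```python
# Simplified tool-calling loop (non-streaming) to illustrate chained reasoning.
import json


async def run_turn(client, messages, tool_specs, tools):
    while True:
        response = await client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=tool_specs
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # final answer, no more tools requested

        # Record the assistant's tool requests, then append each tool result.
        messages.append(message)
        for call in message.tool_calls:
            args = json.loads(call.function.arguments)
            result = await tools[call.function.name](**args)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
```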
- Define the tool function in `server.py`:

```python
async def your_tool_function(param1: str, param2: int):
    # Your implementation
    return {"result": "value"}
```

- Add to the `TOOLS` dictionary:

```python
TOOLS = {
    "your_tool_name": your_tool_function,
    # ... other tools
}
```

- Add the tool specification to `tool_specs`:

```python
{
    "type": "function",
    "function": {
        "name": "your_tool_name",
        "description": "What your tool does",
        "parameters": {
            # JSON Schema for parameters
        }
    }
}
```
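For example, a hypothetical `get_time` tool (its name, parameter, and implementation are invented here purely to illustrate the three steps) could be wired up like this:

```python
# Hypothetical example tool, shown only to illustrate the three steps above.
from datetime import datetime, timedelta, timezone


async def get_time(tz_offset_hours: int = 0):
    # Current time in a fixed-offset timezone derived from the argument.
    tz = timezone(timedelta(hours=tz_offset_hours))
    return {"iso_time": datetime.now(tz).isoformat()}


TOOLS = {
    "get_time": get_time,
    # ... other tools
}

tool_specs = [
    {
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get the current time, optionally shifted by an offset in hours",
            "parameters": {
                "type": "object",
                "properties": {
                    "tz_offset_hours": {"type": "integer", "description": "Offset from UTC in hours"}
                },
                "required": []
            },
        },
    },
    # ... other tool specs
]
```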
The server uses OpenAI's gpt-4o model. To change the model, edit line 192 in `server.py`:

```python
response = await client.chat.completions.create(
    model="gpt-4o",  # Change this
    messages=messages,
    stream=True,
    tools=tool_specs
)
```

The application uses Server-Sent Events (SSE) for streaming. Each event is formatted as:
data: {"type": "event-type", ...}\n\n
See STREAMING_FLOW.md for detailed documentation on the streaming protocol.
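As a concrete illustration, a small helper along these lines (a sketch, not necessarily what `server.py` does) produces one such frame:

```python
# Sketch of serializing one event into an SSE frame.
import json


def format_sse_event(event: dict) -> str:
    # One "data:" line carrying the JSON payload, terminated by a blank line.
    return f"data: {json.dumps(event)}\n\n"


print(format_sse_event({"type": "text-delta", "delta": "Hello"}), end="")
# data: {"type": "text-delta", "delta": "Hello"}
```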
- Ensure your OpenAI API key is set in `.env`
- Check that all dependencies are installed
- Verify Python version is 3.11+
- Ensure backend is running on port 8000
- Check CORS settings if making requests from a different origin
- Verify the API URL in `frontend/app/page.js` (line 12)
- Check OpenAI API key has access to function calling
- Verify tool specifications match the function signatures
- Review browser console for error messages
This project is open source and available for use and modification.
Contributions are welcome! Please feel free to submit a Pull Request.
For questions or issues, please open an issue in the repository.
Built with ❤️ using FastAPI, OpenAI, and Vercel AI SDK