vivek12345/fast-api-with-next-ai-sdk


Python OpenAI with Vercel AI SDK

A full-stack AI chat application featuring a FastAPI backend with OpenAI integration and a Next.js frontend using the Vercel AI SDK. The application demonstrates streaming chat responses, function calling with custom tools, and real-time AI interactions.

🚀 Features

  • Streaming Chat: Real-time streaming responses from OpenAI's GPT models
  • Function Calling: Custom tool integration with weather lookup and calculator
  • Multi-step Reasoning: Automatic tool execution and chained reasoning
  • Modern UI: Clean, responsive chat interface built with Next.js
  • Server-Sent Events: Efficient streaming using SSE protocol
  • Type Safety: Pydantic models for request/response validation

πŸ—οΈ Architecture

Backend (Python + FastAPI)

  • FastAPI server handling chat requests
  • OpenAI API integration with streaming support
  • Custom tool implementations (weather, calculator)
  • Event-based streaming protocol
  • CORS-enabled for local development

Frontend (Next.js + React)

  • Next.js 15 with React 19
  • Vercel AI SDK (@ai-sdk/react) for chat management
  • Real-time UI updates for tool execution
  • Status indicators for tool states
  • Responsive design with inline styles

📋 Prerequisites

  • Python: 3.11 or higher
  • Node.js: 18.0 or higher
  • OpenAI API Key: Required for chat functionality
  • uv (optional): For faster Python package management

πŸ› οΈ Setup

1. Clone the Repository

git clone <repository-url>
cd python-open-ai-with-vercel-sdk

2. Backend Setup

Option A: Using uv (Recommended)

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync

Option B: Using pip

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

3. Environment Variables

Create a .env file in the project root:

OPENAI_API_KEY=your_openai_api_key_here
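The server presumably picks this key up via python-dotenv (listed in the technology stack). A minimal sketch of that loading logic, with a graceful fallback to plain environment variables:

```python
import os

# python-dotenv is a listed dependency; fall back to the plain
# environment if it is not installed in this interpreter
try:
    from dotenv import load_dotenv
    load_dotenv()  # reads OPENAI_API_KEY from .env in the working directory
except ImportError:
    pass

api_key = os.getenv("OPENAI_API_KEY")  # None if the key is not configured
```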

4. Frontend Setup

cd frontend
npm install

🚀 Running the Application

Start the Backend Server

# From project root
uv run server.py

# Or using uvicorn directly
uvicorn server:app --reload --host 0.0.0.0 --port 8000

The backend will be available at http://localhost:8000

Start the Frontend Development Server

# In a new terminal, from frontend directory
cd frontend
npm run dev

The frontend will be available at http://localhost:3000

πŸ“ Project Structure

python-open-ai-with-vercel-sdk/
β”œβ”€β”€ server.py              # FastAPI backend server
β”œβ”€β”€ partial_json.py        # JSON parsing utilities
β”œβ”€β”€ pyproject.toml         # Python dependencies (uv)
β”œβ”€β”€ uv.lock               # Lock file for uv
β”œβ”€β”€ .env                  # Environment variables (not in repo)
β”œβ”€β”€ .python-version       # Python version specification
β”œβ”€β”€ README.md             # This file
β”œβ”€β”€ STREAMING_FLOW.md     # Documentation on streaming flow
└── frontend/
    β”œβ”€β”€ app/
    β”‚   β”œβ”€β”€ page.js       # Main chat interface
    β”‚   └── layout.js     # Next.js layout
    β”œβ”€β”€ package.json      # Frontend dependencies
    └── next.config.js    # Next.js configuration

🔌 API Endpoints

POST /api/chat

Streams chat responses with tool execution support.

Request Body:

{
  "messages": [
    {
      "role": "user",
      "parts": [
        {
          "type": "text",
          "text": "What's the weather in San Francisco?"
        }
      ]
    }
  ]
}
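Since the README notes that Pydantic validates requests, the models behind this shape plausibly look like the following sketch (class names are guesses, not copied from server.py):

```python
from typing import List, Optional
from pydantic import BaseModel

class Part(BaseModel):
    type: str                   # e.g. "text"
    text: Optional[str] = None  # present for text parts

class Message(BaseModel):
    role: str                   # "user", "assistant", or "system"
    parts: List[Part]

class ChatRequest(BaseModel):
    messages: List[Message]

# Pydantic validates the nested dicts into typed models
req = ChatRequest(messages=[{"role": "user",
                             "parts": [{"type": "text", "text": "Hi"}]}])
```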

Response: Server-Sent Events stream with the following event types:

  • start: Message initialization
  • text-start / text-delta / text-end: Text content streaming
  • reasoning-start / reasoning-delta / reasoning-end: Reasoning process (for supported models)
  • tool-input-start / tool-input-delta / tool-input-available: Tool input streaming
  • tool-output-available: Tool execution results
  • finish: Message completion
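The event sequence above can be illustrated with a small serialization sketch (the `sse` helper and the `id` values are made up for illustration; only the event type names come from the list):

```python
import json

def sse(event: dict) -> str:
    # One SSE frame: "data: <json>\n\n"
    return f"data: {json.dumps(event)}\n\n"

# Illustrative frame sequence for a plain text reply
frames = [
    sse({"type": "start"}),
    sse({"type": "text-start", "id": "t1"}),
    sse({"type": "text-delta", "id": "t1", "delta": "Hello"}),
    sse({"type": "text-end", "id": "t1"}),
    sse({"type": "finish"}),
]
```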

GET /

Health check endpoint.

Response:

{
  "message": "Fast api server running"
}

πŸ› οΈ Available Tools

1. Weather Lookup

Get current weather for a location.

Example: "What's the weather in New York?"

Parameters:

  • location (string, required): City and state
  • unit (string, optional): "celsius" or "fahrenheit"

2. Calculator

Add two numbers together.

Example: "Calculate 123 + 456"

Parameters:

  • a (integer, required): First number
  • b (integer, required): Second number
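Based on the parameter lists above, the two tool implementations plausibly look like this sketch (function names and return shapes are assumptions; the bodies are stubs, not the real implementations):

```python
async def get_weather(location: str, unit: str = "celsius"):
    # Stub: a real implementation might call an external weather API
    return {"location": location, "temperature": 22, "unit": unit}

async def calculate(a: int, b: int):
    # Adds two numbers, matching the documented calculator tool
    return {"result": a + b}
```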

🔧 Technology Stack

Backend

  • FastAPI: Modern Python web framework
  • OpenAI Python SDK: Official OpenAI API client
  • Pydantic: Data validation and settings management
  • python-dotenv: Environment variable management
  • uvicorn: ASGI server

Frontend

  • Next.js 15: React framework for production
  • React 19: Latest React with concurrent features
  • Vercel AI SDK: Purpose-built chat components
  • @ai-sdk/react: React hooks for AI interactions

💡 Usage Examples

Basic Chat

User: Hello! How are you?
Assistant: [Streams response in real-time]

Weather Lookup

User: What's the weather like in Boston?
Assistant: [Calls weather tool, displays result]

Calculator

User: Can you calculate 456 * 789?
Assistant: [Calls calculator tool, shows computation]

Multi-step Reasoning

User: What's the weather in Miami and also calculate 100 + 200?
Assistant: [Executes both tools, synthesizes response]

πŸ” Development Notes

Adding New Tools

  1. Define the tool function in server.py:
async def your_tool_function(param1: str, param2: int):
    # Your implementation
    return {"result": "value"}
  2. Add to the TOOLS dictionary:
TOOLS = {
    "your_tool_name": your_tool_function,
    # ... other tools
}
  3. Add the tool specification to tool_specs:
{
    "type": "function",
    "function": {
        "name": "your_tool_name",
        "description": "What your tool does",
        "parameters": {
            # JSON Schema for parameters
        }
    }
}
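Putting the three steps together, a hypothetical string_length tool (the name and schema are invented for this example) would be wired up like this:

```python
# Step 1: the tool function
async def string_length(text: str):
    return {"length": len(text)}

# Step 2: register the callable under its tool name
TOOLS = {"string_length": string_length}

# Step 3: the JSON Schema specification the model sees
tool_specs = [{
    "type": "function",
    "function": {
        "name": "string_length",
        "description": "Count the characters in a string",
        "parameters": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
}]
```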

Model Configuration

The server uses OpenAI's gpt-4o model. To change the model, edit line 192 in server.py:

response = await client.chat.completions.create(
    model="gpt-4o",  # Change this
    messages=messages,
    stream=True,
    tools=tool_specs
)

Streaming Protocol

The application uses Server-Sent Events (SSE) for streaming. Each event is formatted as:

data: {"type": "event-type", ...}\n\n

See STREAMING_FLOW.md for detailed documentation on the streaming protocol.

πŸ› Troubleshooting

Backend won't start

  • Ensure your OpenAI API key is set in .env
  • Check that all dependencies are installed
  • Verify Python version is 3.11+

Frontend can't connect to backend

  • Ensure backend is running on port 8000
  • Check CORS settings if making requests from different origin
  • Verify the API URL in frontend/app/page.js line 12

Tool calls not working

  • Check OpenAI API key has access to function calling
  • Verify tool specifications match the function signatures
  • Review browser console for error messages

πŸ“ License

This project is open source and available for use and modification.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📧 Support

For questions or issues, please open an issue in the repository.


Built with ❤️ using FastAPI, OpenAI, and the Vercel AI SDK
