This guide helps you deploy NyaProxy using Docker with OpenAI API configuration.
- Why Choose Docker?
- Prerequisites
- Deployment Options
- Configuring OpenAI API
- Testing Your Deployment
- Troubleshooting
## Why Choose Docker?

Docker provides these benefits for running NyaProxy:
- Isolation: Runs in its own container without affecting your system
- Consistency: Works the same across all operating systems
- Easy Updates: Simple to upgrade to newer versions
- No Dependencies: No need to install Python or other software
## Prerequisites

Before starting, you'll need:

1. Docker installed on your computer
2. An OpenAI API key
   - Get one from OpenAI's platform
   - Or get a free Gemini API key from Google AI Studio (OpenAI-compatible)
> [!NOTE]
> If you're completely new to Docker, don't worry! This guide includes simple copy-paste commands.
## Deployment Options

### Option 1: Docker Run

This is the fastest way to get started:
1. Open a terminal or command prompt
2. Run this command to start NyaProxy:

   ```shell
   docker run -d -p 8080:8080 k3scat/nya-proxy:latest
   ```

3. Open http://localhost:8080 in your web browser
> [!TIP]
> The `-p 8080:8080` part maps the container's port to your computer. If port 8080 is already in use, you can change the first number (e.g., `-p 9000:8080` would make NyaProxy available at http://localhost:9000).
### Option 2: Docker Compose

Docker Compose gives you more control and makes future management easier:
1. Create a new folder on your computer for NyaProxy
2. Inside that folder, create a file named `docker-compose.yml` with this content:

   ```yaml
   services:
     nya-proxy:
       image: k3scat/nya-proxy:latest
       container_name: nya-proxy
       restart: unless-stopped
       ports:
         - "8080:8080"
       networks:
         - nya-proxy-network

   networks:
     nya-proxy-network:
       driver: bridge
   ```

3. In the same folder, open a terminal or command prompt and run:

   ```shell
   docker-compose up -d
   ```

4. Open http://localhost:8080 in your web browser
> [!NOTE]
> No need to supply any configuration files! NyaProxy will automatically create a basic configuration that works out of the box. You can customize it later through the web interface.
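Once the container is up, you can sanity-check it before opening the browser. The snippet below is a generic HTTP probe, not a NyaProxy-specific feature; adjust the URL if you changed the port mapping:

```python
import urllib.request
from urllib.error import HTTPError

def is_up(url="http://localhost:8080", timeout=2.0):
    """Return True if something answers HTTP at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except HTTPError as err:
        # Any HTTP answer below 500 counts as "up", even a 404.
        return err.code < 500
    except OSError:
        # Connection refused / timed out: nothing is listening.
        return False

print(is_up())
```

If this prints `False`, check `docker ps` and the container logs before continuing.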
## Configuring OpenAI API

Now let's configure NyaProxy to work with your OpenAI API key:
1. Go to http://localhost:8080/config in your web browser
2. The first time you access this page, you won't need a password, as no master API key is configured yet. Simply click "Authenticate".
> [!NOTE]
> When no master API key is configured, NyaProxy doesn't show a login page. This is convenient for initial setup but not secure for production use.
3. In the configuration editor, you'll see the automatically generated config
4. Find or add your OpenAI configuration in the `apis` section:

   ```yaml
   apis:
     openai:
       name: OpenAI API
       endpoint: https://api.openai.com/v1
       aliases:
         - /openai
       key_variable: api_keys
       headers:
         Authorization: 'Bearer ${{api_keys}}'
       variables:
         api_keys:
           - sk-your-openai-key-1
           - sk-your-openai-key-2 # Optional: add more keys if you have them
       load_balancing_strategy: round_robin
       rate_limit:
         endpoint_rate_limit: 500/m
         key_rate_limit: 250/m
   ```

5. Replace `sk-your-openai-key-1` with your actual OpenAI API key
> [!TIP]
> You can use Google AI Studio to get a free Gemini API key that works with OpenAI-compatible interfaces. Just make sure to use the Gemini endpoint (https://generativelanguage.googleapis.com/v1beta/openai) if you're using a Gemini key.
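Two fields in this config deserve a quick gloss: `load_balancing_strategy: round_robin` cycles requests through the listed keys in order, and rate limits like `500/m` mean 500 requests per minute. The sketch below only illustrates those two ideas; it is not NyaProxy's actual implementation:

```python
from itertools import cycle

# Illustrative only: rotate through configured keys the way a
# round_robin strategy would.
api_keys = ["sk-your-openai-key-1", "sk-your-openai-key-2"]
next_key = cycle(api_keys).__next__

# Illustrative only: interpret rate-limit strings like "500/m"
# (assumed units: seconds, minutes, hours, days).
UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_rate_limit(spec: str) -> tuple[int, int]:
    """Split '500/m' into (max requests, window length in seconds)."""
    count, unit = spec.split("/")
    return int(count), UNIT_SECONDS[unit]

print([next_key() for _ in range(3)])  # keys alternate: 1, 2, 1
print(parse_rate_limit("500/m"))       # (500, 60)
```

With multiple keys configured, round-robin spreads traffic so no single key burns through its `key_rate_limit` while others sit idle.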
6. Under the `server` section, add a secure API key to protect your instance:

   ```yaml
   server:
     # ...existing settings...
     api_key:
       - your-secure-master-key # Choose a strong password
   ```

7. Click "Save Configuration"
> [!IMPORTANT]
> After adding an API key, you'll need to use it the next time you access the dashboard or config UI.
## Testing Your Deployment

Let's verify everything is working correctly:
1. Visit http://localhost:8080/dashboard in your web browser
2. Enter your master API key when prompted
3. You should see the NyaProxy dashboard with metrics
Make a test request to OpenAI through your proxy:
- Using curl (from a terminal/command prompt):

  ```shell
  curl http://localhost:8080/api/openai/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer your-secure-master-key" \
    -d '{
      "model": "gpt-3.5-turbo",
      "messages": [{"role": "user", "content": "Say hello!"}]
    }'
  ```

- Or using Python:

  ```python
  import requests

  response = requests.post(
      "http://localhost:8080/api/openai/chat/completions",
      headers={
          "Content-Type": "application/json",
          "Authorization": "Bearer your-secure-master-key"
      },
      json={
          "model": "gpt-3.5-turbo",
          "messages": [{"role": "user", "content": "Say hello!"}]
      }
  )
  print(response.json())
  ```

You should receive a response from OpenAI and see the request in your dashboard.
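The JSON you get back follows the standard OpenAI chat-completion shape, so pulling out just the assistant's text looks like this (a trimmed-down sample body is shown instead of a live response):

```python
def first_message(completion: dict) -> str:
    """Return the assistant text from an OpenAI-style chat completion."""
    return completion["choices"][0]["message"]["content"]

# Minimal sample in the OpenAI response format; a real response also
# carries fields like "id", "model", and "usage".
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ]
}
print(first_message(sample))  # -> Hello! How can I help?
```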
## Troubleshooting

1. Port Already In Use
   - Error: `port is already allocated`
   - Solution: Change the port mapping in your `docker run` command or `docker-compose.yml` file
2. Container Stops Immediately
   - Check logs: `docker logs nya-proxy`
   - Look for configuration errors or crashes
3. Can't Access Dashboard or Config UI
   - Make sure your browser can connect to localhost:8080
   - Check that the container is running: `docker ps`
4. OpenAI API Errors
   - Verify your OpenAI API key is valid and active
   - Check if you've reached your OpenAI rate limits
> [!CAUTION]
> Always stop your container when not in use to prevent unauthorized access: `docker-compose down` or `docker stop nya-proxy`
Congratulations! You now have NyaProxy running in a Docker container, serving as a proxy for OpenAI's API with load balancing and rate limiting capabilities.