
Commit 6b69e90

Merge pull request #47 from a692570/feat/telnyx-integration

Add Telnyx AI services integration example

2 parents: ccda6be + 4929fd5

File tree

2 files changed: +205, -0 lines


examples/telnyx/README.md

Lines changed: 141 additions & 0 deletions
# Telnyx Integration for EchoKit Server

This guide explains how to configure EchoKit Server to use Telnyx AI services.

## Why Telnyx?

Telnyx provides a comprehensive AI platform that integrates seamlessly with EchoKit:

- **OpenAI-Compatible API**: All endpoints follow OpenAI specifications, enabling drop-in compatibility with EchoKit's architecture
- **53+ AI Models**: Access to GPT-4, Claude, Llama, Mistral, and many open-source models through a single API
- **Global Edge Network**: Low-latency inference from data centers worldwide
- **Unified Billing**: Single API key for ASR, TTS, and LLM services
- **Competitive Pricing**: Pay-per-use with transparent, per-token pricing

## Prerequisites

1. A Telnyx account ([sign up here](https://telnyx.com/sign-up))
2. An API key from the [Telnyx Portal](https://portal.telnyx.com)
## Quick Start

### 1. Set Your API Key

```bash
export TELNYX_API_KEY="your-api-key-here"
```
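Before moving on, you can confirm the variable is actually visible to the shell that will launch the server. A minimal optional sketch — the `check_telnyx_key` helper is made up for illustration and is not part of EchoKit:

```bash
# Optional pre-flight check (not part of EchoKit): report whether the key is set.
check_telnyx_key() {
  if [ -z "${TELNYX_API_KEY:-}" ]; then
    echo "missing"
  else
    echo "set (${#TELNYX_API_KEY} chars)"
  fi
}

check_telnyx_key
```

If it prints `missing`, re-run the `export` above in the same shell session you will start the server from.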
### 2. Use the Example Configuration

```bash
cp examples/telnyx/config.toml config.toml
```
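The example config references the key as `${TELNYX_API_KEY}`. Whether EchoKit expands environment variables inside `config.toml` may depend on your build; if yours does not, one workaround is to pre-render the file. The `render_config` helper below is a sketch, not part of EchoKit:

```bash
# Hypothetical helper (not part of EchoKit): replace the literal
# ${TELNYX_API_KEY} placeholder in a config file with the value from the
# environment. Naive: assumes the key contains no '|' or '&' characters.
render_config() {
  sed "s|\${TELNYX_API_KEY}|${TELNYX_API_KEY:-}|g" "$1"
}

# Usage: render_config config.toml > config.rendered.toml
```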
### 3. Build and Run

```bash
cargo build --release
./target/release/echokit_server
```
## Configuration Reference

### ASR (Speech Recognition)

```toml
[asr]
platform = "openai"
url = "https://api.telnyx.com/v2/ai/transcriptions"
api_key = "${TELNYX_API_KEY}"
model = "whisper-1"
lang = "en"
```

Available models:

- `whisper-1` - OpenAI Whisper (recommended)
### TTS (Text-to-Speech)

```toml
[tts]
platform = "openai"
url = "https://api.telnyx.com/v2/ai/speech"
model = "tts-1"
api_key = "${TELNYX_API_KEY}"
voice = "alloy"
```

Available voices:

- `alloy`, `echo`, `fable`, `onyx`, `nova`, `shimmer`

Available models:

- `tts-1` - Optimized for speed
- `tts-1-hd` - Higher quality audio
### LLM (Language Model)

```toml
[llm]
platform = "openai_chat"
url = "https://api.telnyx.com/v2/ai/chat/completions"
api_key = "${TELNYX_API_KEY}"
model = "gpt-4o-mini"
history = 5
```

Popular model options:

- `gpt-4o` - Latest optimized GPT-4
- `gpt-4o-mini` - Fast and cost-effective
- `claude-3-5-sonnet` - Anthropic Claude
- `llama-3.1-70b-instruct` - Open-source alternative
- `llama-3.1-8b-instruct` - Lightweight, fast inference

See the [Telnyx AI documentation](https://developers.telnyx.com/docs/ai/introduction) for the complete model list.
## Using the Telnyx LiteLLM Proxy

For advanced use cases, Telnyx offers a LiteLLM proxy that provides:

- Automatic fallback between models
- Load balancing across providers
- Unified rate limiting
- Custom model aliases

Configure the endpoint in your `config.toml` (this example keeps the standard chat completions URL):

```toml
[llm]
platform = "openai_chat"
url = "https://api.telnyx.com/v2/ai/chat/completions"
api_key = "${TELNYX_API_KEY}"
# Use any supported model
model = "claude-3-5-sonnet"
```
## Troubleshooting

### Authentication Errors

Verify your API key is set correctly:

```bash
echo $TELNYX_API_KEY
```
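If the key is set but authentication still fails, check that it is the key you expect. A hypothetical helper (plain bash, not part of EchoKit or any Telnyx tooling) that shows only the edges of the key, so you can compare it against the Portal without pasting the full secret into logs:

```bash
# Print only the first and last 3 characters of the key plus its length, so it
# can be compared safely. Assumes the key is longer than 6 characters.
mask_key() {
  key="${1:-}"
  if [ -z "$key" ]; then
    echo "empty"
    return
  fi
  len=${#key}
  first=$(printf '%s' "$key" | cut -c1-3)
  last=$(printf '%s' "$key" | cut -c$((len - 2))-)
  printf '%s...%s (%s chars)\n' "$first" "$last" "$len"
}

mask_key "${TELNYX_API_KEY:-}"
```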
### Model Not Found

Check the models available to your account in the Telnyx Portal, or consult the [API documentation](https://developers.telnyx.com/docs/ai/introduction).
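Telnyx's OpenAI-compatible surface suggests a model-listing endpoint, but the exact `/v2/ai/models` path here is an assumption — confirm it in the Telnyx docs before relying on it. The snippet only prints a `curl` command for you to copy; it does not touch the network:

```bash
# Prints (does not run) a curl command for listing models; the /v2/ai/models
# path is an assumption based on Telnyx's OpenAI-compatible API surface.
models_curl_cmd() {
  printf 'curl -s -H "Authorization: Bearer $TELNYX_API_KEY" https://api.telnyx.com/v2/ai/models\n'
}

models_curl_cmd
```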
### High Latency

Telnyx routes requests to the nearest edge location automatically. If you experience latency issues, verify your network connection or contact Telnyx support.

## Additional Resources

- [Telnyx AI Documentation](https://developers.telnyx.com/docs/ai/introduction)
- [Telnyx API Reference](https://developers.telnyx.com/docs/api/v2/overview)
- [EchoKit Documentation](https://echokit.dev/docs/quick-start/)
- [Telnyx Discord Community](https://discord.gg/telnyx)

## License

This integration example is provided under the same license as EchoKit Server.

examples/telnyx/config.toml

Lines changed: 64 additions & 0 deletions
# EchoKit Server Configuration with Telnyx
#
# This example demonstrates using Telnyx AI services with EchoKit Server.
# Telnyx offers 53+ AI models via an OpenAI-compatible inference API,
# making it a natural fit for EchoKit's OpenAI-spec architecture.
#
# Key benefits:
# - Single API key for LLM, ASR, and TTS
# - Global edge network for low-latency inference
# - 53+ AI models including GPT, Claude, Llama, and open-source options
# - Pay-per-use pricing with no minimum commitments
#
# Setup:
# 1. Create a Telnyx account at https://telnyx.com
# 2. Generate an API key from the Portal
# 3. Set your TELNYX_API_KEY environment variable
#
# API Documentation: https://developers.telnyx.com

addr = "0.0.0.0:8080"
hello_wav = "hello.wav"

[asr]
platform = "openai"
url = "https://api.telnyx.com/v2/ai/transcriptions"
api_key = "${TELNYX_API_KEY}"
model = "whisper-1"
lang = "en"

[tts]
platform = "openai"
url = "https://api.telnyx.com/v2/ai/speech"
model = "tts-1"
api_key = "${TELNYX_API_KEY}"
voice = "alloy"

[llm]
platform = "openai_chat"
url = "https://api.telnyx.com/v2/ai/chat/completions"
api_key = "${TELNYX_API_KEY}"
model = "gpt-4o-mini"
history = 5

[[llm.sys_prompts]]
role = "system"
content = """
You are a helpful assistant. Answer truthfully and concisely. Always answer in English.

- NEVER use bullet points
- NEVER use tables
- Answer in complete English sentences as if you are in a conversation.

"""

# Alternative: Use Telnyx's LiteLLM proxy for unified access to 53+ models.
# Uncomment the section below and comment out the [llm] section above.
#
# [llm]
# platform = "openai_chat"
# url = "https://api.telnyx.com/v2/ai/chat/completions"
# api_key = "${TELNYX_API_KEY}"
# # Available models include: gpt-4o, claude-3-5-sonnet, llama-3.1-70b, and more
# model = "llama-3.1-70b-instruct"
# history = 5
