Add Tensorix as a new LLM provider #1022
Add Tensorix (https://tensorix.ai) as a new OpenAI-compatible vendor, giving users access to models from DeepSeek, Meta Llama, Qwen, GLM, MiniMax and others through a single API key.

Changes:
- New vendor: src/modules/llms/vendors/tensorix/ (vendor + setup UI)
- Register 'tensorix' dialect in openai.access.ts with default host
- Add TENSORIX_API_KEY server env variable
- Add hasLlmTensorix backend capability
- Wire up vendor in registry, LLMVendorSetup, and backend router

Co-authored-by: openhands <openhands@all-hands.dev>

---

@shanemort1982 is attempting to deploy a commit to the Token Fabrics Pro Team on Vercel. A member of the Team first needs to authorize it.

---

The service seems to be similar to Avian (but maybe better)? #996 Could it be that people are trying to grab a slice of that OpenRouter use-case pie?

---

Hey — actually we're quite different from Avian/OpenRouter. Tensorix isn't an aggregator or proxy — we host all our models on our own hardware. Zero data retention, no third-party routing, your requests never leave our infrastructure. So it's more like running your own inference stack but without the ops headache. We offer popular open-source models (DeepSeek, Llama, GLM, MiniMax, etc.) through a standard OpenAI-compatible API, pay-as-you-go. But the key difference is everything runs on infra we own and operate — nothing gets forwarded to another provider. Happy to answer any other questions!

---

@shanemort1982 Ah, ok. So more like DeepInfra and others.

---

Considering merging this one if we see interest or requests from our users.
JiwaniZakir left a comment:
In openai.access.ts, the guard condition if (!tensorixKey || !tensorixHost) will never trigger for the host portion, since tensorixHost is derived from llmsFixupHost(access.oaiHost || DEFAULT_TENSORIX_HOST, apiPath) — the DEFAULT_TENSORIX_HOST fallback ensures it's always non-empty. The error message "Missing Tensorix API Key or Host" is therefore misleading; the !tensorixHost branch is dead code and should be removed to keep the guard honest (just if (!tensorixKey)), consistent with how similar dialects like xai handle it.
Additionally, looking at other dialect cases in the same switch block, the tensorix case introduces a let declaration (let tensorixKey) followed by a const on the next line, then reassigns tensorixKey. This is consistent with other cases but placing the llmsRandomKeyFromMultiKey call directly in the initializer (let tensorixKey = llmsRandomKeyFromMultiKey(access.oaiKey || env.TENSORIX_API_KEY || '')) would make the mutation unnecessary and cleaner.
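As a sketch of the suggested shape (the names `llmsRandomKeyFromMultiKey`, `llmsFixupHost`, and `DEFAULT_TENSORIX_HOST` come from the PR; the stub bodies below are simplified stand-ins, not the real implementations):

```typescript
// Simplified stand-ins so the sketch is self-contained; the real helpers
// in openai.access.ts do more (e.g. random pick from a multi-key string).
const DEFAULT_TENSORIX_HOST = 'https://api.tensorix.ai';

function llmsRandomKeyFromMultiKey(multiKey: string): string {
  // Stand-in: the real helper picks one key from a comma-separated list.
  return multiKey.split(',')[0].trim();
}

function llmsFixupHost(host: string, apiPath: string): string {
  // Stand-in: normalize trailing slashes and append the API path.
  return host.replace(/\/+$/, '') + apiPath;
}

// Suggested shape: key resolved directly in the initializer (no let +
// reassign), and the host guard removed, since llmsFixupHost with the
// default host fallback always yields a non-empty string.
function tensorixAccess(
  access: { oaiKey?: string; oaiHost?: string },
  env: { TENSORIX_API_KEY?: string },
  apiPath: string,
) {
  const tensorixKey = llmsRandomKeyFromMultiKey(access.oaiKey || env.TENSORIX_API_KEY || '');
  const tensorixHost = llmsFixupHost(access.oaiHost || DEFAULT_TENSORIX_HOST, apiPath);
  if (!tensorixKey)
    throw new Error('Missing Tensorix API Key.');
  return { headers: { Authorization: `Bearer ${tensorixKey}` }, url: tensorixHost };
}
```

This mirrors how dialects without a hard-coded host requirement (e.g. xai) keep the guard limited to the key.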
It's also worth confirming whether https://api.tensorix.ai is publicly documented as stable — the comment references https://docs.tensorix.ai but the linked dashboard URL in TensorixServiceSetup.tsx (https://app.tensorix.ai/dashboard) suggests this is a relatively new provider; if the base API URL is subject to change during early access, a fallback note or env-var override documentation would be worth adding to .env.example.
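For reference, the `.env.example` addition could be as small as the following; the variable name is from the PR, while the comment wording is only a suggestion:

```shell
# Tensorix (optional) - pre-configures the Tensorix vendor server-side.
# Keys are created in the Tensorix dashboard (https://app.tensorix.ai/dashboard).
# If the base API URL changes during early access, users can override the
# host per-vendor in the setup UI instead.
TENSORIX_API_KEY=
```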
Hey there 👋
This adds Tensorix as a new OpenAI-compatible vendor in big-AGI, letting users access models from DeepSeek, Meta Llama, Qwen, GLM, MiniMax and more through a single API key and endpoint.
Tensorix follows the same OpenAI-compatible pattern as DeepSeek/Together/etc., so the integration is straightforward — new vendor folder + wiring into the existing dialect system.
What's included
- `src/modules/llms/vendors/tensorix/` — vendor definition + setup UI component
- `tensorix` added to `openai.access.ts` with default host `https://api.tensorix.ai`
- `TENSORIX_API_KEY` for server-side deployment config
- `hasLlmTensorix` for the green checkmark when pre-configured
- Wired into `vendors.registry.ts`, `LLMVendorSetup.tsx`, and `backend.router.ts`

How it works
Users add their Tensorix API key in Models → Add → Tensorix, and models are fetched from the `/v1/models` endpoint. The setup UI includes an optional custom host field and a client-side fetch toggle, matching the pattern of other cloud vendors.

This also opens up cross-promotion opportunities — we'd be happy to feature big-AGI prominently in Tensorix's integration docs at docs.tensorix.ai.
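For illustration, the model-listing step could look roughly like this. The host and endpoint path are from the PR; the function name `listTensorixModels` and the assumed OpenAI-style `{ data: [{ id }] }` response shape are illustrative, not code from the PR:

```typescript
// Hedged sketch: list available models via the OpenAI-compatible
// /v1/models endpoint, assuming the standard OpenAI list response shape.
async function listTensorixModels(
  apiKey: string,
  host = 'https://api.tensorix.ai',
): Promise<string[]> {
  const res = await fetch(`${host}/v1/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok)
    throw new Error(`Tensorix /v1/models failed: HTTP ${res.status}`);
  const body = await res.json() as { data: { id: string }[] };
  return body.data.map(m => m.id);
}
```

In big-AGI itself this fetch goes through the shared OpenAI-dialect client rather than a bespoke call, which is why the vendor folder stays so small.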
Happy to adjust anything if needed!