Install the GonkaGate community node package for self-hosted n8n.
Use this package name in n8n Community Nodes: `@gonkagate/n8n-nodes-gonkagate`
This package is for self-hosted n8n.
If you already run self-hosted n8n, the preferred and fastest install path
is the Community Nodes UI inside n8n.
- Open your self-hosted n8n instance.
- Open **Settings**.
- Open **Community Nodes**.
- Click **Install**.
- Enter `@gonkagate/n8n-nodes-gonkagate`.
- Confirm the community-node prompt if n8n shows it.
- Wait for installation to finish.
- Restart n8n if your deployment model requires it.
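The restart step depends on how you run n8n. A hedged sketch for two common deployment models (the container name and service name `n8n` are assumptions; substitute your own):

```shell
# Pick a restart command based on how this n8n instance is deployed.
# "n8n" as the container/service name is an assumption, not a guarantee.
DEPLOYMENT="${N8N_DEPLOYMENT:-unknown}"
case "$DEPLOYMENT" in
  docker)  docker restart n8n ;;          # containerized deployment
  systemd) sudo systemctl restart n8n ;;  # service-managed deployment
  *)       echo "Set N8N_DEPLOYMENT=docker or N8N_DEPLOYMENT=systemd to pick a restart command" ;;
esac
```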
If you already know n8n, that is usually all you need.
If you use queue mode, want Docker or Docker Compose, need a tarball, or want
an unpublished build, use the Installation Guide.
From Community Nodes install to a working GonkaGate node in one short walkthrough:
@gonkagate/n8n-nodes-gonkagate is the GonkaGate community node package for
self-hosted n8n.
Use it if you want a GonkaGate-first integration instead of wiring stock OpenAI-compatible nodes by hand. The package gives you:
- the root node **GonkaGate**
- the additive AI-model node **GonkaGate Chat Model**
- the shared credential **GonkaGate API**
Today the package targets GonkaGate's OpenAI-compatible `GET /v1/models` and
`POST /v1/chat/completions` surface, with the canonical base URL fixed to
`https://api.gonkagate.com/v1`.
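That surface can be probed directly with curl, which is useful for separating API-key problems from node problems. A minimal sketch; the API key is a placeholder you must supply:

```shell
# The base URL below is the canonical one the package pins; the key is yours.
BASE_URL="https://api.gonkagate.com/v1"
if [ -n "${GONKA_API_KEY:-}" ]; then
  # Same call the root node's List Models operation relies on.
  curl -sS -H "Authorization: Bearer $GONKA_API_KEY" "$BASE_URL/models"
else
  echo "Set GONKA_API_KEY, then GET $BASE_URL/models lists the available models"
fi
```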
After installation:
- Open your n8n UI.
- Click **Start from scratch**.
- Add **Manual Trigger**.
- Click **+** to add the next node.
- Search for `gonka` or `GonkaGate`.
- If n8n opens the **AI Nodes** picker first, check **Results in other categories**.
- Choose the plain **GonkaGate** node.
- Ignore **GonkaGate Tool** and **GonkaGate Chat Model** for the first check.
- Set **Operation** to **List Models**.
- Create the **GonkaGate API** credential, paste your API key, and save.
- Click **Execute step** or **Execute workflow**.
If that works, change the same node to Chat Completion, choose a model, and
run one short test message.
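The same chat-completion test can be run outside n8n with curl. A hedged sketch; `GONKA_API_KEY` and the model ID are placeholders, and the payload mirrors the standard OpenAI-compatible request shape:

```shell
BASE_URL="https://api.gonkagate.com/v1"
MODEL="${GONKA_MODEL:-some-model-id}"  # placeholder; use an ID from List Models
# Minimal OpenAI-compatible chat payload with one short test message.
PAYLOAD=$(cat <<JSON
{
  "model": "$MODEL",
  "messages": [{"role": "user", "content": "Say hello in one sentence."}]
}
JSON
)
if [ -n "${GONKA_API_KEY:-}" ]; then
  curl -sS -X POST "$BASE_URL/chat/completions" \
    -H "Authorization: Bearer $GONKA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "Set GONKA_API_KEY to POST this payload to $BASE_URL/chat/completions:"
  echo "$PAYLOAD"
fi
```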
For the full click-by-click flow, continue with Quickstart.
Use the Installation Guide if you need one of these:
- local macOS smoke test with `npm install -g n8n`
- manual `npm install` on a server
- Docker or Docker Compose
- advanced Docker install with your own custom image
- queue mode or worker-based deployments
- tarball install for staging or unpublished builds
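For the manual server path, a hedged sketch of installing the package into n8n's documented location for manually installed community packages (`~/.n8n/nodes`, relocated by `N8N_USER_FOLDER` if set). The `RUN_INSTALL` guard is illustrative so the commands can be previewed without touching npm:

```shell
# Target directory per n8n's docs for manual community-package installs.
NODES_DIR="${N8N_USER_FOLDER:-$HOME/.n8n}/nodes"
mkdir -p "$NODES_DIR"
cd "$NODES_DIR"
if [ "${RUN_INSTALL:-no}" = "yes" ]; then
  npm install @gonkagate/n8n-nodes-gonkagate
else
  echo "Would run: npm install @gonkagate/n8n-nodes-gonkagate in $NODES_DIR"
fi
```

Restart n8n afterwards so the new package is picked up.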
| Start with... | Use it when... |
|---|---|
| **GonkaGate** | You want the fastest first request, List Models, or easier setup |
| **GonkaGate Chat Model** | You are building AI Agent or other AiLanguageModel workflows |
GonkaGate API is the shared credential for both node surfaces.
Users normally only enter an API key. The canonical GonkaGate base URL stays
hidden and defaults to https://api.gonkagate.com/v1.
If you still have credentials created before the canonical hidden default was added, recreate them before chasing other auth issues.
GonkaGate Chat Model is the additive AI-model surface inside the same package.
It reuses GonkaGate API, the same live model discovery path, and the same
manual Model ID fallback as the root node.
Use it when you are building AI Agent or other AiLanguageModel workflows.
For first-time validation, start with the plain root node GonkaGate and move
to the chat-model surface after auth and model selection are already proven.
- **GonkaGate** with **List Models**
- **GonkaGate** with non-streaming **Chat Completion**
- **GonkaGate Chat Model** for n8n AI workflows
- shared **GonkaGate API** credential across both node surfaces
- live model discovery from `GET /v1/models`
- manual **Model ID** fallback when the live list is empty or unavailable
- no `/v1/responses` support
- no blanket n8n version support claim
- self-hosted first only, with no n8n Cloud promise
- no verified-node approval yet; submission readiness does not equal Creator Portal approval
- root-node **Chat Completion** returns one final JSON response instead of visible live streaming
- **GonkaGate Chat Model** is the better fit for streaming-capable AI workflows, but not every workflow shape has been live-validated yet
For the exact support posture, see the Compatibility Matrix and Known Limitations.
