How can I bypass the token limit issue? #482
Replies: 2 comments
Hey @ArtemSultanov-PG, the question splits into two parts with different answers.

Part 1: Cursor / model session management. "Restart the request after failure, resume from the last operation, summarize and store in Supabase" is not really an n8n-mcp concern; it is how you orchestrate the LLM client. In Cursor / Claude Code, your options are:
If you specifically want Supabase as the durable store, you can build that loop in n8n itself with the same n8n-mcp exposed via HTTP: the agent calls Supabase tools to log progress, then calls n8n-mcp to apply the next step.

Part 2: keeping n8n-mcp from chewing through tokens in the first place. This is where I can actually help. Three things make a huge difference:
Try those three together and you should see your effective context budget for an n8n session roughly triple. If you hit a specific case where the model is burning context on something dumb, paste a transcript and I will tell you which knob it is missing.
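The restart-and-resume loop from Part 1 can be sketched in a few lines. This is only an illustration, not n8n-mcp or Supabase API: `store` is a plain dict standing in for a Supabase table, `run_with_checkpoints` is a hypothetical helper name, and each step is a placeholder for one n8n-mcp tool call.

```python
# Sketch of a checkpoint-and-resume loop (assumed names throughout).
# A real version would replace the dict with a Supabase upsert/select
# and each step with an actual n8n-mcp tool call.

def run_with_checkpoints(steps, store, run_id):
    """Apply steps in order, recording progress after each one.

    On a re-run with the same run_id, previously completed steps
    are skipped, so a crashed session resumes where it left off.
    """
    done = store.get(run_id, -1)   # index of last completed step
    for i, step in enumerate(steps):
        if i <= done:
            continue               # already applied in an earlier session
        step()                     # e.g. one n8n-mcp tool call
        store[run_id] = i          # durable checkpoint (Supabase upsert)

log = []
steps = [lambda: log.append("create_workflow"),
         lambda: log.append("add_node"),
         lambda: log.append("activate")]

store = {}
run_with_checkpoints(steps[:2], store, "run-1")  # session 1 dies after 2 steps
run_with_checkpoints(steps, store, "run-1")      # session 2 resumes at step 3
print(log)  # ['create_workflow', 'add_node', 'activate']
```

The point of keying the checkpoint on a run id is that a fresh model session only needs that one id (plus the stored summary) to pick up mid-plan, instead of replaying the whole transcript.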
Also tracking the broader code-execution-with-MCP exploration in #716 if you want to follow along.
Hello everyone. If the model is running out of tokens, how can I set up the workflow so that: