cursor-llama-mcp-bridge

Minimal MCP server that lets Cursor talk to a local llama.cpp llama-server (OpenAI-compatible).
Zero llama-cpp-python dependency. Built for reproducible debugging, clean automations, and concise docs.

Quickstart

# 0) Run llama-server (adjust paths/flags)
scripts/run-llama-server.sh

# 1) Put config/mcp.json into ~/.cursor/mcp.json (or project-local .cursor/mcp.json),
#    then open Cursor; it will spawn the MCP server via stdio. An illustrative
#    config is shown after this block.

# 2) Smoke test outside Cursor
scripts/smoke-chat.sh
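
The repo ships the real config in config/mcp.json; for orientation, a Cursor MCP config has roughly this shape. The command, script path, and env values below are illustrative assumptions (the actual entry point is whatever config/mcp.json specifies); LLAMA_BASE_URL and LLAMA_LOG_JSON are the env vars this README documents.

{
  "mcpServers": {
    "llama-mcp": {
      "command": "python3",
      "args": ["/path/to/cursor-llama-mcp-bridge/server.py"],
      "env": {
        "LLAMA_BASE_URL": "http://127.0.0.1:8080",
        "LLAMA_LOG_JSON": "1"
      }
    }
  }
}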

Cursor recipes

See examples/cursor-commands.md for copy-paste @llama-mcp calls.
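To give the flavor (phrasing here is illustrative, not copied from that file; the health and models diagnostics tools are named under "Why this project" below):

@llama-mcp run the health check and report the server status
@llama-mcp list the models the server exposes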

Troubleshooting

  • /v1/models empty → wrong --model path or the server started without a model.
  • HTTP 400/422 → schema mismatch; check your llama.cpp version and that OpenAI-compatible mode is on.
  • Timeouts → raise LLAMA_TIMEOUT_S or reduce max_tokens.
  • Connection errors → verify LLAMA_BASE_URL host/port/firewall; the curl checks below help isolate this.
  • Older GPUs / low VRAM (e.g., 1GB) → prefer Q4_K_M or smaller ctx-size; keep prompts short.
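
When in doubt, hit the server directly before debugging the bridge. These checks assume llama-server's default bind of 127.0.0.1:8080; substitute whatever LLAMA_BASE_URL points at:

# Liveness and model registration
curl -s http://127.0.0.1:8080/health
curl -s http://127.0.0.1:8080/v1/models

# Minimal OpenAI-compatible chat request
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"ping"}],"max_tokens":8}'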

Support bundle

scripts/support-bundle.sh support.zip

This zips env/version info, /v1/models, /health, and a minimal repro request for fast triage.
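
A minimal sketch of the kind of collection such a script performs; the real scripts/support-bundle.sh is authoritative, and the base URL and file names here are assumptions:

#!/usr/bin/env sh
# Sketch: gather env, /health, /v1/models, and a minimal repro request, then zip.
set -eu
OUT="${1:-support.zip}"
BASE="${LLAMA_BASE_URL:-http://127.0.0.1:8080}"
TMP="$(mktemp -d)"
env | grep -i llama > "$TMP/env.txt" || true
curl -s "$BASE/health"    > "$TMP/health.json" || true
curl -s "$BASE/v1/models" > "$TMP/models.json" || true
curl -s "$BASE/v1/chat/completions" \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"ping"}],"max_tokens":8}' \
  > "$TMP/repro.json" || true
zip -qj "$OUT" "$TMP"/*
rm -rf "$TMP"
echo "wrote $OUT"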

Why this project

The Cursor Technical Support Engineer role highlights debugging tricky issues, building automations/tools, and crisp docs. This repo shows:

  • health/models tools for instant diagnostics
  • retrying HTTP client with structured logs (LLAMA_LOG_JSON=1); the retry pattern is sketched after this list
  • one-command smoke tests + support bundle
  • small but clear docs and examples
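
The retry pattern itself is simple. A generic sketch in shell, not the repo's actual client; the endpoint and attempt counts are illustrative:

# Retry with exponential backoff until the server answers
attempt=1
max=4
delay=1
until curl -sf "${LLAMA_BASE_URL:-http://127.0.0.1:8080}/v1/models" -o /dev/null; do
  if [ "$attempt" -ge "$max" ]; then
    echo "giving up after $max attempts" >&2
    exit 1
  fi
  sleep "$delay"
  attempt=$((attempt + 1))
  delay=$((delay * 2))
done
echo "server reachable after $attempt attempt(s)"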

License

MIT
