| Documentation | Rust SDK | Python SDK | Discord |
- Gemma 4: Full multimodal support with text, image, video, and audio input. Guide | Video setup
- MXFP4 ISQ quantization: MXFP4 with optimized decode kernels for faster, smaller models. Quantization docs
- Qwen 3.5 model family: Support for the Qwen 3.5 series including vision. Guide
- Any Hugging Face model, zero config: Just `mistralrs run -m user/model`.
- True multimodality: Text, vision, video, and audio input, plus speech generation, image generation, and embeddings in one engine.
- Full quantization control: Choose the precise quantization you want, or make your own UQFF with `mistralrs quantize`.
- Built-in web UI: `mistralrs serve --ui` gives you a web interface instantly.
- Hardware-aware: `mistralrs tune` benchmarks your system and picks optimal quantization + device mapping.
- Flexible SDKs: Python package and Rust crate to build your projects.
- Agentic features: Tool calling, web search, and MCP client built in.
Linux/macOS:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/EricLBuehler/mistral.rs/master/install.sh | sh
```

Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/EricLBuehler/mistral.rs/master/install.ps1 | iex
```

Manual installation & other platforms
```bash
# Interactive chat
mistralrs run -m Qwen/Qwen3-4B

# One-shot prompt (no interactive session)
mistralrs run -m Qwen/Qwen3-4B -i "What is the capital of France?"

# One-shot with an image
mistralrs run -m google/gemma-4-E4B-it --image photo.jpg -i "Describe this image"

# Or start a server with a web UI
mistralrs serve --ui -m google/gemma-4-E4B-it
```

Then visit http://localhost:1234/ui for the web chat interface.
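The server also speaks the OpenAI-compatible HTTP API (see the HTTP API docs below), so any standard client can talk to it. Here is a minimal sketch using only the Python standard library; it assumes the default port 1234 and the usual OpenAI-style `/v1/chat/completions` route, so adjust both if your setup differs.

```python
import json
import urllib.request


def build_chat_payload(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build a standard OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat(prompt: str, base_url: str = "http://localhost:1234") -> str:
    # /v1/chat/completions is the conventional OpenAI-compatible route;
    # consult the HTTP API docs if your server is configured differently.
    payload = build_chat_payload("default", prompt)
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires a running server):
# print(chat("What is the capital of France?"))
```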
The CLI is designed to be zero-config: just point it at a model and go.
- Auto-detection: Automatically detects model architecture, quantization format, and chat template
- All-in-one: Single binary for chat, server, benchmarks, and web UI (`run`, `serve`, `bench`)
- Hardware tuning: Run `mistralrs tune` to automatically benchmark and configure optimal settings for your hardware
- Format-agnostic: Works with Hugging Face models, GGUF files, and UQFF quantizations seamlessly
```bash
# Auto-tune for your hardware and emit a config file
mistralrs tune -m Qwen/Qwen3-4B --emit-config config.toml

# Run using the generated config
mistralrs from-config -f config.toml

# Diagnose system issues (CUDA, Metal, Hugging Face connectivity)
mistralrs doctor
```

Performance
- Continuous batching by default on all devices
- CUDA with FlashAttention V2/V3, Metal, and multi-GPU tensor parallelism
- PagedAttention for high-throughput continuous batching on CUDA and Apple Silicon, plus prefix caching (including multimodal)
Quantization (full docs)
- In-situ quantization (ISQ) of any Hugging Face model
- GGUF (2-8 bit), GPTQ, AWQ, HQQ, FP8, BNB support
- ⭐ Per-layer topology: Fine-tune quantization per layer for optimal quality/speed
- ⭐ Auto-select fastest quant method for your hardware
Flexibility
- LoRA & X-LoRA with weight merging
- AnyMoE: Create mixture-of-experts on any base model
- Multiple models: Load/unload at runtime
Agentic Features
- Integrated tool calling with Python/Rust callbacks
- ⭐ Web search integration
- ⭐ MCP client: Connect to external tools automatically
Text Models
- Granite 4.0
- SmolLM 3
- DeepSeek V3
- GPT-OSS
- DeepSeek V2
- Qwen 3 Next
- Qwen 3 MoE
- Phi 3.5 MoE
- Qwen 3
- GLM 4
- GLM-4.7-Flash
- GLM-4.7 (MoE)
- Gemma 2
- Qwen 2
- Starcoder 2
- Phi 3
- Mixtral
- Phi 2
- Gemma
- Llama
- Mistral
Multimodal Models
- Qwen 3.5
- Qwen 3.5 MoE
- Qwen 3-VL
- Qwen 3-VL MoE
- Gemma 3n
- Llama 4
- Gemma 3
- Mistral 3
- Phi 4 multimodal
- Qwen 2.5-VL
- MiniCPM-O
- Llama 3.2 Vision
- Qwen 2-VL
- Idefics 3
- Idefics 2
- LLaVA Next
- LLaVA
- Phi 3V
Speech Models
- Voxtral (ASR/speech-to-text)
- Dia
Image Generation Models
- FLUX
Embedding Models
- Embedding Gemma
- Qwen 3 Embedding
Request a new model | Full compatibility tables
```bash
pip install mistralrs  # or mistralrs-cuda, mistralrs-metal, mistralrs-mkl, mistralrs-accelerate
```

```python
from mistralrs import Runner, Which, ChatCompletionRequest

runner = Runner(
    which=Which.Plain(model_id="Qwen/Qwen3-4B"),
    in_situ_quant="4",
)

res = runner.send_chat_completion_request(
    ChatCompletionRequest(
        model="default",
        messages=[{"role": "user", "content": "Hello!"}],
        max_tokens=256,
    )
)
print(res.choices[0].message.content)
```

Python SDK | Installation | Examples | Cookbook
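Each request is stateless, so multi-turn chat means resending the growing message list on every call. A small sketch of how the history accumulates; building the list is plain Python, and only the final (commented-out) call needs the model loaded:

```python
# Accumulate an OpenAI-style message history for multi-turn chat.
def append_turn(history: list, role: str, content: str) -> list:
    history.append({"role": role, "content": content})
    return history


history = []
append_turn(history, "user", "Hello!")
# ...after the model replies, record the assistant turn, then ask a follow-up:
append_turn(history, "assistant", "Hi! How can I help?")
append_turn(history, "user", "Summarize our chat so far.")

# Sent with the same Runner/ChatCompletionRequest API shown above:
# res = runner.send_chat_completion_request(
#     ChatCompletionRequest(model="default", messages=history, max_tokens=256)
# )
```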
```bash
cargo add mistralrs
```

```rust
use anyhow::Result;
use mistralrs::{IsqType, TextMessageRole, TextMessages, MultimodalModelBuilder};

#[tokio::main]
async fn main() -> Result<()> {
    let model = MultimodalModelBuilder::new("google/gemma-4-E4B-it")
        .with_isq(IsqType::Q4K)
        .with_logging()
        .build()
        .await?;

    let messages = TextMessages::new()
        .add_message(TextMessageRole::User, "Hello!");

    let response = model.send_chat_request(messages).await?;
    println!("{:?}", response.choices[0].message.content);
    Ok(())
}
```

For quick containerized deployment:
```bash
docker pull ghcr.io/ericlbuehler/mistral.rs:latest
docker run --gpus all -p 1234:1234 ghcr.io/ericlbuehler/mistral.rs:latest \
  serve -m Qwen/Qwen3-4B
```

For production use, we recommend installing the CLI directly for maximum flexibility.
For complete documentation, see the Documentation.
Quick Links:
- CLI Reference - All commands and options
- HTTP API - OpenAI-compatible endpoints
- Quantization - ISQ, GGUF, GPTQ, and more
- Device Mapping - Multi-GPU and CPU offloading
- MCP Integration - Connecting models to external tools via the Model Context Protocol
- Troubleshooting - Common issues and solutions
- Configuration - Environment variables for configuration
Contributions welcome! Please open an issue to discuss new features or report bugs. If you want to add a new model, please contact us via an issue and we can coordinate.
This project would not be possible without the excellent work at Candle. Thank you to all contributors!
mistral.rs is not affiliated with Mistral AI.

