dgx-spark
Here are 87 public repositories matching this topic...
One-command vLLM installation for NVIDIA DGX Spark with Blackwell GB10 GPUs (sm_121 architecture)
Updated Oct 28, 2025 - Shell
Headless remote desktop setup for NVIDIA DGX Spark using Sunshine streaming
Updated Oct 25, 2025 - Shell
vLLM + Qwen3.5-122B-A10B-NVFP4 on NVIDIA DGX Spark (GB10/SM121) — single-GPU NVFP4 W4A4 with MTP speculative decoding, self-contained Docker build
Updated Mar 12, 2026 - Python
Serve the home! An inference stack for your NVIDIA DGX Spark, aka the Grace Blackwell AI supercomputer on your desk. Mostly vLLM-based for now, and single-Spark. For the not-so-rich buddies
Updated Apr 15, 2026 - JavaScript
Headless remote desktop to your DGX Spark in crystal-clear 4K
Updated Apr 5, 2026 - Shell
The definitive Strix Halo LLM guide — 65 t/s on a $2,999 mini PC. Live benchmarks, tested optimizations, and everything that doesn't work.
Updated Mar 21, 2026 - Shell
LLM fine-tuning with LoRA + NVFP4/MXFP8 on NVIDIA DGX Spark (Blackwell GB10)
Updated Dec 22, 2025 - Python
GPU-accelerated WhisperX on NVIDIA Blackwell (SM_121), DGX Spark compatible
Updated Jan 25, 2026 - Python
Multi-model LLM serving for NVIDIA DGX Spark with vLLM, web UI, and tool calling
Updated Jan 24, 2026 - Python
Turn any NVIDIA GPU into a local AI platform. Inference + fine-tuning in your browser. One command to start, automatic clustering.
Updated Apr 17, 2026 - Python
(Experimental) A high-throughput and memory-efficient inference and serving engine for LLMs optimized for GB10 homelabs
Updated Apr 17, 2026 - Python