
Squish v9.0.0 – Cutting-Edge Attention Variants & Distributed Inference


@konjoinfinity released this 12 Mar 14:47
· 452 commits to main since this release


Release Summary

Squish v9.0.0 introduces 28 new modules across Wave 25 (Cutting-Edge Attention Variants & Compute Fusion) and Wave 26 (Distributed Inference & Production Reliability).

Total modules now: 222 | Total tests: 4,876 | Test coverage: 100%


Wave 25: Cutting-Edge Attention Variants & Compute Fusion (14 modules)

Production-ready attention patterns from DeepSeek-V2/V3, kernel fusions, and speculative decoding enhancements:

  • FlashMLA – DeepSeek-V2 multi-head latent attention; 4× KV compression; 0.55 µs append, 38.65 µs attend
  • NativeSparseAttn – DeepSeek-V3 block-sparse + sliding-window; ~87% attention sparsity; 646.6 µs forward
  • FusedSampler – Fused temperature/top-k/top-p/min-p/repetition-penalty sampling in a single pass; 1,767 µs at vocab=32k
  • KVDefrag – Online KV cache defragmentation; drives fragmentation ratio to zero; 349 µs defrag
  • DualChunkAttn – Intra- and inter-chunk attention for 1M+ contexts; O(chunk²) rather than O(seq²); 93.3 µs forward
  • ActivationOffload – Layer activation offload to CPU; reduces peak GPU memory; 6.34 µs fetch
  • MorphAttn – Per-layer pattern selection (full/sparse/linear); ~40% FLOP reduction at seq=2048
  • HydraSpec – Multi-draft-head speculation; n_heads tokens/step; 1,229 µs verify
  • SeqCompact – In-place KV compaction after token pruning; zero-copy repack; 141 µs
  • LatencyPredictor – OLS latency forecasting for scheduling; 0.82 µs (sub-microsecond) predict
  • ParallelSampler – Best-of-n sampling with diversity; quality improves with n candidates
  • ContextSummarizer – Inference-time context compression; keeps semantics, sheds tokens; 62.5 µs
  • TokenWatermark – Kirchenbauer statistical watermarking; detectable attribution
  • SchemaGen – FSM-accelerated constrained JSON generation; zero invalid tokens; 5.38 µs constrain
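
The single-pass filtering idea behind FusedSampler can be sketched in a few lines. This is a minimal NumPy sketch, not Squish's actual kernel: the function name `fused_sample` and the fused order (temperature → top-k → top-p → sample) are assumptions for illustration.

```python
import numpy as np

def fused_sample(logits, temperature=1.0, top_k=50, top_p=0.9, rng=None):
    """One pass over the logits: temperature scaling, top-k mask,
    top-p (nucleus) mask, then sample from the surviving tokens."""
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    # top-k: keep only the k largest logits, mask the rest to -inf
    if top_k and top_k < scaled.size:
        kth = np.partition(scaled, -top_k)[-top_k]
        scaled = np.where(scaled >= kth, scaled, -np.inf)
    # softmax over the survivors
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # top-p: drop the tail of the sorted cumulative distribution
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1  # always keep >= 1 token
    keep = order[:cutoff]
    mask = np.zeros_like(probs)
    mask[keep] = probs[keep]
    mask /= mask.sum()
    return rng.choice(probs.size, p=mask)
```

A fused kernel performs these stages in one traversal of the vocabulary rather than materialising intermediate tensors per stage, which is where the single-pass speedup comes from.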

Wave 26: Distributed Inference & Production Reliability (14 modules)

Tensor/sequence parallelism, request scheduling, safety, monitoring, and audit logging:

  • TensorParallel – Row/column tensor sharding + all-reduce; linear memory scaling
  • SequenceParallel – Ulysses-style sequence scatter/gather; attention FLOPs distributed
  • KVMigrate – Live KV migration + checksum; zero-recompute worker handoff
  • DisaggPrefill – Disaggregated prefill→decode; hardware specialisation
  • RequestPreempt – SRPT preemption scheduler; priority inversion elimination
  • InferGateway – Smart routing + health + load balancing; single ingress, N workers
  • ModelVersionSwap – Zero-downtime version swaps; canary → promote → rollback in-flight
  • ProductionProfiler – APM per-op tracking; p50/p99/p999 per operation; sub-200ns record
  • AdaptiveBatcher – Throughput/latency SLO-aware batching; 1.91 Β΅s next_batch
  • SafetyLayer – Inline safety classification; zero extra forward pass
  • SemanticResponseCache – Embedding-similarity dedup; exact + fuzzy cache hits
  • RateLimiter – Token-bucket per-tenant limiting; 0.92 Β΅s consume
  • SchemaValidator – JSON schema validation; 100% schema-compliant outputs
  • AuditLogger – SHA-256 chained audit log; tamper-evident request provenance; 1.92 Β΅s log
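
The tamper-evident chaining behind AuditLogger can be illustrated with a short sketch. This is a hypothetical stand-in, not Squish's API: it assumes a simple `sha256(prev_hash + record)` chain, and the class and method names are invented for illustration.

```python
import hashlib
import json

class ChainedAuditLog:
    """Append-only log where each entry's hash covers the previous
    entry's hash, so tampering with any record breaks every later link."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (record_json, hash_hex)

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append((payload, digest))
        return digest

    def verify(self) -> bool:
        prev = self.GENESIS
        for payload, digest in self.entries:
            if hashlib.sha256((prev + payload).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True
```

Because each digest depends on its predecessor, verifying the chain from the genesis value detects any edit, insertion, or deletion of earlier records.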

Highlights

✅ 222 modules total across 26 waves (v1–v9)
✅ 4,876 unit + integration tests – 100% coverage
✅ Micro-benchmarks for all modules (Wave 25+26 in dev/benchmarks/bench_wave25_26.py)
✅ Demo GIF (dev/demos/squish-v9-demo.gif) – 1.95 MB, 10+ scenes from Wave 25+26
✅ arXiv paper draft (docs/paper.md) – abstract, background, architecture, benchmarks, ethics
✅ HuggingFace integration (dev/publish_hf.py) – ready to publish pre-squished weights
✅ Production hardening – fault tolerance, observability, schema validation, audit logging


What's Next?

Phase 3: Hardware Validation

Run end-to-end benchmarks on M-series hardware:

squish serve --model qwen2.5:1.5b --port 11435 &
python3 dev/benchmarks/bench_eoe.py --runs 5 --output results/eoe_2026_03_12.json
# Results → README + paper Section 4.1 (TTFT/tok-s)

Phase 4: Community & Publication

  • MMLU evaluation: lm_eval --tasks mmlu --limit 14042 → docs/RESULTS.md + paper
  • HuggingFace weights: python3 dev/publish_hf.py --model-dir ~/.cache/squish/...
  • Community posts: Hacker News, r/LocalLLaMA, Twitter/X
  • arXiv submission: docs/paper.md → LaTeX, submit to arxiv.org

Installation

pip install squish

# Pull a model (auto-caches after first conversion)
squish pull qwen2.5:1.5b

# Run inference at sub-second load time
squish run qwen2.5:1.5b "What is machine learning?"

# Drop-in OpenAI-compatible server
squish serve qwen2.5:1.5b --port 11435
curl http://localhost:11435/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"squish","messages":[{"role":"user","content":"Hello!"}]}'
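
The same endpoint can be called from plain Python. This is a minimal stdlib sketch: the helper names are invented here, and the response shape (`choices[0].message.content`) is assumed from the standard OpenAI chat-completions format rather than verified against Squish.

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "squish") -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, base: str = "http://localhost:11435/v1") -> str:
    """POST a prompt to a running `squish serve` instance and
    return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Any OpenAI-compatible client should work the same way by pointing its base URL at the local server.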

Acknowledgments

Squish builds on work from MLX, HuggingFace, Meta (Llama), OpenAI, Anthropic, Stanford (SWEET), Microsoft (AWQ), QuIP#, VPTQ, and other research communities. See docs/paper.md Section 2 for full citations.