Skill v1.19.0 | Aligned with OpenClaw v2026.3.8 | CLI-first advisor
An AI skill that turns your OpenClaw setup from "it works, I think" into a tuned, cost-efficient, self-documenting system. It audits what you have, tells you exactly what to change, shows you the rollback command before you commit, and remembers everything it learns about your deployment -- permanently, across sessions, across tools.
Disclaimer: This skill is an AI-driven advisor. Its recommendations are generated by whichever AI model runs the skill, and results may vary between models, versions, and deployments. You are responsible for inspecting and confirming every action and recommendation before applying it. The authors assume no liability for any damage, data loss, misconfiguration, or unexpected costs resulting from the use of this skill. Use at your own risk.
Most OpenClaw users are running with default model routing (expensive), bloated bootstrap files (burning tokens every turn), cron jobs piling up on the same minute (thundering herd), and no idea that a stuck delivery queue is silently saturating their gateway's event loop.
This skill finds all of that. Then it fixes it -- but only when you say so.
Not every request needs Opus. The skill builds a 4-tier routing plan that sends heartbeats and cron to the cheapest models, daily tasks to mid-tier, and reserves premium models for coding and orchestration. Exact config and rollback commands included.
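A tiered plan might be sketched as a config fragment like the one below. This is illustrative only -- the key names and model slugs are assumptions, not OpenClaw's actual schema; the skill generates the exact config (and rollback) for your deployment:

```json5
// Illustrative shape only -- keys and model slugs are placeholders, not the real schema.
{
  routing: {
    t1: { model: "cheap-model",   use: ["heartbeat", "cron"] },   // cheapest tier
    t2: { model: "mid-model",     use: ["daily-tasks"] },         // mid-tier
    t3: { model: "premium-model", use: ["coding"] },              // premium
    t4: { model: "premium-model", use: ["orchestration"] },       // premium
  },
}
```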
Anthropic, OpenAI, OpenAI Codex (subscription-based OAuth with GPT-5.4), Google Gemini, Moonshot/Kimi, Kimi Coding, KiloCode, Groq, xAI, OpenRouter, Bedrock, Together AI, Cerebras, Hugging Face, Ollama, vLLM, MiniMax, Venice, Z.AI, Synthetic, and more. Each with the exact CLI commands to authenticate, configure, and add to your fallback chain. Includes provider ban warnings for Google (AntiGravity/Gemini CLI) and Anthropic (Claude Code tokens).
Your agent's personality is scattered across SOUL.md, IDENTITY.md, AGENTS.md, USER.md, TOOLS.md, HEARTBEAT.md, and MEMORY.md. Content ends up in the wrong file, directives conflict, and duplicated instructions burn tokens silently on every turn.
The identity audit catches all of it: structural issues, misplaced content, overlapping directives, bloat, best practice violations, and USER.md gaps. Then it walks you through each fix one at a time -- approve, modify, or skip -- with diffs and dated backups.
Every session, the skill reads a deployment profile before it starts working. That profile contains your topology, IPs, SSH access, provider config, cron inventory, past issues, and lessons learned. When you fix a problem, the fix gets logged. Next time -- even months later, even from a different AI tool -- the skill already knows the answer.
New in v1.16.0: Profiles now support a directory format that splits the monolith into topic files (topology.md, providers.md, routing.md, channels.md, cron.md, lessons.md, issues/). Only a lightweight INDEX.md (~1K tokens) loads at session start -- topic files load on-demand. This cuts session-start context cost by 90%+ for mature deployments. Legacy single-file profiles still work.
Profiles are stored in `~/.openclaw-optimizer/systems/`, outside git, shared across Claude Code, OpenClaw, and Gemini CLI on the same machine. Cross-machine sync via SCP.
Hidden Drain Detection
When all your providers timeout simultaneously, most people blame the providers. The skill knows better. It checks for:
- Stuck delivery queues that retry forever on every restart
- Stale node probes hanging on paired devices that aren't running the node service
- Gemini CLI OAuth cycling through expired accounts (90s per account, 6 accounts = 9 minutes of hung connections)
- Cron thundering herd -- multiple jobs firing at the same minute with no concurrency limit
- Proxy SPOFs in the fallback chain that take down all models behind them at once
- Context bloat cascades where unlimited `contextTokens` creates payloads too large for any provider to handle in time
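A quick triage pass over two of these drains can be scripted. The delivery-queue path comes from this README; the use of system `crontab` and the counting logic are illustrative (OpenClaw keeps its own cron store, which may differ):

```sh
# Count pending delivery-queue entries (a large, non-shrinking count suggests a stuck queue)
QUEUE="$HOME/.openclaw/delivery-queue"
QUEUED=$(find "$QUEUE" -maxdepth 1 -type f 2>/dev/null | wc -l | tr -d ' ')
echo "queued deliveries: $QUEUED"

# Surface same-minute pile-ups in system cron (sketch only; OpenClaw's cron store may differ):
# duplicate "minute hour" pairs printed here are candidates for staggering
crontab -l 2>/dev/null | awk '!/^#/ && NF {print $1, $2}' | sort | uniq -d
```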
When a CLI command doesn't work as documented, the skill corrects itself. When a troubleshooting step is missing, it gets added. When advice causes a failure, it gets fixed with a warning. Every session is a chance to get more accurate.
A GitHub Actions workflow checks for new OpenClaw releases daily and opens an issue on drift. A lightweight runtime check at session start tells you if you're behind. Updates are always manual and deliberate.
Pick your tool. Each command downloads and installs the skill in one step.
Claude Code:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools claude
```

OpenClaw:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools openclaw
```

Cursor:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools cursor
```

Gemini CLI:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools gemini
```

OpenAI Codex:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools codex
```

Anti-Gravity:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools antigravity
```

OpenCode:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools opencode
```

Roo Code:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools roo
```

block/goose:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools goose
```

Cline:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools cline
```

Comma-separate the tools you want:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh -s -- --tools claude,openclaw,cursor
```

Auto-detects every supported AI tool on your system and installs to all of them:

```sh
curl -fsSL https://raw.githubusercontent.com/jacob-bd/the-openclaw-optimizer/main/install.sh | sh
```

Or install manually from the repo:

```sh
git clone https://github.com/jacob-bd/the-openclaw-optimizer.git
cd the-openclaw-optimizer
cp -r openclaw-optimizer ~/.claude/skills/   # or your tool's skills path
```

The systems directory (`~/.openclaw-optimizer/systems/`) is created automatically on first run.
Full audit (safe, no changes):
Audit my OpenClaw setup for cost, reliability, and context bloat. Prioritized plan with rollback. Do NOT apply changes.
Troubleshoot a specific problem:
[Describe your symptom or paste the error message]. Diagnose it and give me the exact fix.
Add a provider:
Add [provider name] as a model provider. Walk me through the CLI steps and show me the exact config before applying.
Model routing optimization:
Propose a tiered routing plan: cheap for heartbeats/cron, mid for daily tasks, premium for coding/reasoning.
Silent cron job:
Create a cron job that runs [task] every [interval]. Isolated session, NO_REPLY on nothing-to-do.
Audit agent personality & identity:
Audit my agent's personality and identity files. Check for conflicts, bloat, and bad practices. Walk me through improvements.
The skill is advisory by default. It never touches your config, cron jobs, or persistent settings without explicit approval. Before every change: the exact CLI command, the expected impact, and the rollback command. If an optimization reduces monitoring coverage, it presents Options A/B/C and waits for you to choose.
The skill is 13 sections of operational knowledge, 4 reference files, and a self-update system.
| # | Section | What It Covers |
|---|---|---|
| 1 | Model Providers | 40+ providers, auth commands, slug lookup, OAuth walkthrough, provider removal checklist |
| 2 | Model Routing Strategy | 4-tier routing (T1-T4), thinking levels, model aliases, session pruning, caching rules |
| 3 | Context Management | Context Engine plugins, light bootstrap, compaction model override, MEMORY.md bloat prevention |
| 4 | Cron & Automation | Job schema, silent patterns, light-context, defer-while-active, restart staggering |
| 5 | Skills & Plugins | Plugin slots (context engine, memory), ClawHub, security warnings |
| 6 | Multi-Agent Architecture | Sub-agents, ACP dispatch, sandbox isolation, single-agent-with-skills pattern |
| 7 | High-ROI Optimization Levers | 16 levers ranked by impact with exact implementation steps |
| 8 | CLI Reference | Common commands, backup create/verify, in-chat commands, env vars, safe gateway restart |
| 9 | Ops Hygiene Checklist | Daily/weekly/quarterly checklists, mandatory system assessment protocol |
| 10 | Troubleshooting | Symptom lookup table, triage sequence, remote Ollama fix, event loop overload, context bloat cascade |
| 11 | System Learning | Deployment profiles (directory + legacy), on-demand topic loading, issue lifecycle, centralized storage |
| 12 | Continuous Improvement | Self-update triggers, versioning, self-audit checklist |
| 13 | Agent Identity Optimizer | 36-check audit, file role definitions, interactive walkthrough with diffs and backups |
Reference files:
- `references/providers.md` -- all 40+ providers, custom provider schema, failover config
- `references/troubleshooting.md` -- full error reference, 7 failure categories, GitHub issue workarounds
- `references/cli-reference.md` -- complete CLI command reference
- `references/identity-optimizer.md` -- 36-check audit checklist, file roles, walkthrough workflow
Visual workflows: `docs/workflows.md` -- architecture, session lifecycle, troubleshooting flow, failover chain, identity audit, system learning lifecycle
```
README.md                    # This file
install.sh                   # One-liner installer script
CHANGELOG.md                 # Version history
docs/
  workflows.md               # Workflow diagrams and architecture overview
openclaw-optimizer/          # The skill (copy this directory to install)
  SKILL.md                   # Main skill definition (13 sections)
  .gitignore                 # Excludes system profiles
  systems/
    TEMPLATE.md              # Template for new deployments (copied on first run)
  references/
    providers.md             # All 40+ providers, custom provider schema
    troubleshooting.md       # Full error reference, 7 failure categories
    cli-reference.md         # Complete CLI command reference
    identity-optimizer.md    # Agent identity/personality audit (36 checks)
  scripts/
    version-check.py         # Runtime version check (cached, once per session)
    update-skill.sh          # Drift report + changelog + version bump
  metadata/
    ...                      # Skill metadata (ClawHub)
```

```
~/.openclaw-optimizer/       # CENTRALIZED (not in git)
  systems/
    TEMPLATE.md              # Deployment profile template (directory format)
    <deployment-id>/         # Directory format (preferred)
      INDEX.md               # Always-loaded summary (~1K tokens)
      topology.md            # Machines, network, paired devices
      providers.md           # Active and removed providers
      routing.md             # Model routing, fallbacks, heartbeat
      channels.md            # Telegram, WhatsApp, delivery queue
      cron.md                # Full cron job inventory
      lessons.md             # Permanent lessons learned
      issues/
        YYYY-MM.md           # Monthly issue files
        archive.md           # Compressed summaries (14+ days old)
    <deployment-id>.md       # Legacy single-file format (still supported)
```
Everything below came from real production failures, not documentation. Each one caused downtime or cost overruns before being captured here.
Gateway overload: When ALL providers timeout simultaneously, the problem is the gateway, not the providers. The Node.js event loop gets saturated by stuck delivery-queue entries, skills-remote probes to offline nodes, Gemini CLI OAuth cycling, and concurrent cron pile-ups. The providers respond fine -- the gateway can't process the responses before its own timeout fires.
Stuck delivery queues: Files in `~/.openclaw/delivery-queue/` that will never succeed (wrong channel, message too long) create an infinite retry loop on every restart. Move them to `failed/` to stop the loop.
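A sketch of that quarantine step. The queue path comes from this README; the one-day age filter is an assumption -- inspect entries before moving them:

```sh
# Quarantine stale delivery-queue entries so the gateway stops retrying them.
QUEUE="$HOME/.openclaw/delivery-queue"
mkdir -p "$QUEUE/failed"
# Move entries older than 1 day (threshold is illustrative) out of the retry path
# instead of deleting them, so they can still be inspected:
find "$QUEUE" -maxdepth 1 -type f -mtime +1 -exec mv {} "$QUEUE/failed/" \;
```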
Gateway restart: Never use `openclaw gateway restart` on macOS -- it races with the LaunchAgent's `KeepAlive: true` and spawns duplicate processes. Use `launchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway` instead.
Remote Ollama on macOS: OpenClaw cannot connect to a remote Ollama over the LAN. Two bugs stack: discovery is hardcoded to `127.0.0.1:11434`, and the macOS LaunchAgent sandbox blocks private IPs. Fix: a reverse SSH tunnel from the Ollama machine.
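A minimal tunnel command, run from the Ollama machine, might look like the one below. The user and host are placeholders, and the flags beyond the port forward are conventional choices, not requirements from this README. It is built as a string here so you can review it before running:

```sh
# -R publishes the Ollama box's local 11434 as 127.0.0.1:11434 on the Mac,
# which is exactly where OpenClaw's hardcoded discovery looks.
# user@openclaw-mac is a placeholder for your actual SSH login on the Mac.
TUNNEL='ssh -f -N -o ExitOnForwardFailure=yes -R 11434:127.0.0.1:11434 user@openclaw-mac'
echo "$TUNNEL"
```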
Failover chain ordering: Direct-API providers (Anthropic, Google API key) go before proxy providers (KiloCode, OpenRouter). Proxies are SPOFs -- when the proxy degrades, ALL models through it fail simultaneously.
Cron thundering herd: Set `cron.maxConcurrentRuns: 1` and stagger schedules. Three jobs firing at :00 will saturate the event loop.
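A staggered layout might look like the sketch below. `cron.maxConcurrentRuns` is quoted from the lesson above; the surrounding structure, job names, and field names are assumptions about the config shape -- verify against your schema:

```json5
{
  cron: {
    maxConcurrentRuns: 1,                           // serialize job execution
    jobs: [
      { name: "digest",  schedule: "0 7 * * *"  },  // :00
      { name: "backup",  schedule: "10 7 * * *" },  // staggered to :10
      { name: "cleanup", schedule: "20 7 * * *" },  // staggered to :20
    ],
  },
}
```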
Context bloat cascade: When `contextTokens` is unset (defaults to unlimited), the main session accumulates conversation history until no provider can respond in time. Each provider in the fallback chain gets the same oversized payload and times out in sequence. Fix: `contextTokens: 100000`, `timeoutSeconds: 180`, `reserveTokensFloor: 32000`.
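The three values come straight from the lesson above; where they sit in your config file is an assumption -- confirm the section and nesting against your actual schema before applying:

```json5
{
  contextTokens: 100000,       // cap accumulated history (default: unlimited)
  timeoutSeconds: 180,         // give large payloads time to complete
  reserveTokensFloor: 32000,   // keep headroom for the model's response
}
```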
CLI gotchas: `cron edit` exists, `cron update` does not. `models fallbacks` has `add`/`remove`/`clear`/`list` but no `set`. KiloCode model IDs use dots, not dashes. Non-interactive SSH sessions miss `PATH`.
This skill gets better the more you use it. Fork the repo and maintain your own version. After every session where the skill helps you troubleshoot or configure something, ask your AI tool:
Did we learn anything this session that should be updated in the skill? Any CLI commands that didn't work as documented, troubleshooting steps that were missing, or advice that was wrong?
The skill already has a built-in continuous improvement workflow (Section 12 in SKILL.md). It tracks when to update, what to update, and how to version bump. But it only works if you actually let it update itself after real-world use.
Your deployment is unique. The providers you use, the cron jobs you run, the bugs you hit -- all of that turns into knowledge that makes the skill sharper for your specific setup. Over time, your fork becomes an expert on your system.
See CHANGELOG.md for the full release history.
Full transparency: this project was built by a non-developer using AI coding assistants. The skill works and has been tested against real production OpenClaw deployments, but if you're an experienced developer, you might find things to improve.
If you know better, teach us. PRs, issues, and architectural advice are all welcome. This is open source specifically because human expertise is irreplaceable.
MIT
