Answer Engine Optimization for the modern web. Make your site discoverable by ChatGPT, Claude, Perplexity & AI search engines. Generates llms.txt, robots.txt, sitemap, JSON-LD & more.
Curated GEO platforms, repositories, and verification scripts for AI-era discoverability.
Claude Code skill that generates and audits ai.txt, robots.txt (AI-crawler rules), llms.txt, and TDM opt-out files for websites.
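For context on the llms.txt files tools like the one above generate: llms.txt is a markdown manifest served at the site root that points LLMs at the site's key content. A minimal illustrative example (site name, summary, and links are hypothetical):

```markdown
# Example Site

> One-line summary of what this site offers, written for LLM consumption.

## Docs

- [Getting started](https://example.com/docs/start.md): installation and quickstart
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Blog](https://example.com/blog.md): release notes and announcements
```

Sections under `## Optional` signal content an LLM may skip when context is tight.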
EbenEzer v1.0 | Maternal Cognitive Anchor (MCA) for "Ghost Fleet" Liquidation. Resolving 96% Agentic Failure via Protocol-Native Logic Gates. Deterministic Reliability for Jagged Intelligence. The Stone of Help is permanent.
GEO SEO agent skill for AI search optimization, answer engine optimization, AI search visibility, technical SEO, structured data, citations, and AI crawler access.
🤖 Generate optimized robots.txt files for AI search engine crawlers (GPTBot, PerplexityBot, ClaudeBot, and more)
Complete database of AI search engine crawler user-agents (GPTBot, PerplexityBot, ClaudeBot) with robots.txt configuration examples
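As an illustration of the robots.txt configurations these databases document, a policy that admits answer-engine crawlers while restricting others might look like this (the allow/disallow choices are placeholders, not a recommendation):

```text
# Example policy for AI crawlers — adjust per-bot rules to your needs
User-agent: PerplexityBot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /private/

User-agent: *
Disallow:
```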
Open source, vendor-neutral telemetry standard + SDKs for LLM/AI features (traces, metrics, logs) via OpenTelemetry.
Smart prerendering for WordPress — make your content visible to search engines, social bots, and AI crawlers. Zero config. 29+ bots. ~200ms.
Define AI permissions once. Emit the rules today's AI web actually uses. One manifest → robots.txt, AIPREF headers, X-Robots-Tag, Google-Extended, TDMRep.
See your site the way search engines and AI see it.
Audit website robots.txt files. Validates AI crawler visibility and returns a compliance score.
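An audit of this kind can be sketched as a small parser that checks whether known AI crawlers are fully blocked and derives a score. This is a minimal sketch under assumed semantics (exact-agent match, full block only when `Disallow: /` applies), not the linked tool's implementation:

```typescript
// Bots to audit for; the linked tools track far larger lists.
const AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"];

type Rules = Map<string, string[]>; // user-agent -> disallowed path prefixes

// Parse robots.txt into per-agent disallow lists. Consecutive
// User-agent lines share the record that follows them.
function parseRobots(txt: string): Rules {
  const rules: Rules = new Map();
  let agents: string[] = [];
  let lastWasAgent = false;
  for (const raw of txt.split(/\r?\n/)) {
    const line = raw.split("#")[0].trim();
    const m = line.match(/^([A-Za-z-]+)\s*:\s*(.*)$/);
    if (!m) continue;
    const key = m[1].toLowerCase();
    const value = m[2].trim();
    if (key === "user-agent") {
      if (!lastWasAgent) agents = []; // new group starts
      agents.push(value);
      if (!rules.has(value)) rules.set(value, []);
      lastWasAgent = true;
    } else {
      if (key === "disallow" && value) {
        for (const a of agents) rules.get(a)!.push(value);
      }
      lastWasAgent = false;
    }
  }
  return rules;
}

// A bot is "blocked" here only if its effective rules disallow the root.
function isBlocked(rules: Rules, bot: string): boolean {
  const disallow = rules.get(bot) ?? rules.get("*") ?? [];
  return disallow.some((path) => path === "/");
}

// Crude visibility score: percentage of audited bots not fully blocked.
function visibilityScore(txt: string): number {
  const rules = parseRobots(txt);
  const visible = AI_BOTS.filter((b) => !isBlocked(rules, b)).length;
  return Math.round((100 * visible) / AI_BOTS.length);
}
```

A real auditor would also resolve path-prefix and wildcard rules per RFC 9309 rather than only checking for a root-level block.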
Open-source Cloudflare Worker for SeeLLM AI visibility monitoring.
Canonical product-definition, scope, claims, artefacts, and evidence repository for the Better Robots.txt WordPress plugin.
Cloudflare Worker that charges AI crawlers via HTTP 402 (Payment Required).
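A pay-per-crawl gate of this kind can be sketched as a Worker that answers matching AI user-agents with HTTP 402 unless a payment credential is presented. The bot list, header names, and price are illustrative assumptions, not the linked project's actual protocol:

```typescript
// Hypothetical pay-per-crawl gate: AI crawlers get HTTP 402 unless they
// send a payment token; all other traffic passes through to the origin.
const PAID_BOTS = /GPTBot|ClaudeBot|PerplexityBot|Bytespider/i;

// Pure decision function: which status should the Worker respond with?
function gate(userAgent: string, paymentToken: string | null): number {
  if (PAID_BOTS.test(userAgent) && !paymentToken) return 402;
  return 200;
}

export default {
  async fetch(request: Request): Promise<Response> {
    const status = gate(
      request.headers.get("user-agent") ?? "",
      request.headers.get("x-payment-token"), // illustrative header name
    );
    if (status === 402) {
      return new Response("Payment required for AI crawling", {
        status: 402,
        headers: { "x-price-per-request": "0.001 USD" }, // illustrative
      });
    }
    return fetch(request); // human and paid traffic reaches the origin
  },
};
```

Keeping the decision in a pure function like `gate` makes the pricing policy testable without a Workers runtime.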
To produce a rigorous, primary-source analytical paper that documents the gap between llms.txt’s design intent (inference-time content discovery), the infrastructure reality (WAF/CDN blocking), and actual AI system behavior (no confirmed inference-time usage).
Tracks which crawlers the web's top sites block.
MCP server for Googlebot vs AI crawler parity analysis — compare crawl behaviour from Nginx logs with GSC data to detect visibility gaps
Streaming Apache/Nginx log analyzer for SEO and crawl-budget analysis on multi-GB logs. 70+ bot families incl. modern AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Bytespider), reverse-DNS verification, 24 reports, React dashboard. Tested at 16 GB / 50.9M lines.
Provide an up-to-date database of AI search engine crawler user-agents with robots.txt examples to optimize site access and indexing.