See your network traffic clearly.
Real-time monitoring · Traffic auditing · Multi-gateway support
English | 中文
Important
Disclaimer
This project is a traffic analysis and visualization tool for local gateway environments.
It does not provide any network access service, proxy subscription, or cross-network connectivity. All data is collected from the user's own network environment.
This project is open-sourced under the MIT License. We assume no responsibility for any consequences resulting from the use of this software. Please use it in compliance with applicable laws and regulations.
Neko (ねこ) means cat in Japanese. Pronounced /ˈneɪkoʊ/ (NEH-ko).
Like a cat, Neko Master observes network traffic quietly and precisely. It is a lightweight analytics dashboard designed for modern gateway environments.
- β¨ Features
- π Quick Start
- π€ Agent Deployment
- π First Use
- π§ Port Conflict Resolution
- π³ Docker Configuration
- ποΈ ClickHouse (Optional)
- π Reverse Proxy & Tunnel
- π Authentication & Security
- β FAQ
- ποΈ Architecture Guide
- π€ Feedback & Issues
- π Project Structure
- π οΈ Tech Stack
- π License
| Feature | Description |
|---|---|
| π Real-time Monitoring | WebSocket real-time collection with millisecond latency |
| π Trend Analysis | Multi-dimensional traffic trends: 30min / 1h / 24h |
| π Domain Analysis | View traffic, associated IPs, and connection count per domain |
| πΊοΈ IP Analysis | ASN, geo-location, and associated domain display |
| π Proxy Statistics | Traffic distribution and connection count per proxy node |
| π± PWA Support | Install as desktop app for native experience |
| π Dark Mode | Light / Dark / System theme support |
| π i18n Support | English / Chinese seamless switching |
| π Multi-Backend | Monitor multiple OpenClash backend instances simultaneously |
The repository's built-in `docker-compose.yml` maps ports 3000/3001/3002 by default. Scenarios A/B below are minimal templates for common deployments.
services:
neko-master:
image: foru17/neko-master:latest
container_name: neko-master
restart: unless-stopped
ports:
- "3000:3000" # Web UI
volumes:
- ./data:/app/data
# Local MMDB (optional, files should be downloaded into ./geoip)
- ./geoip:/app/data/geoip:ro
environment:
- NODE_ENV=production
- DB_PATH=/app/data/stats.db
      - COOKIE_SECRET=${COOKIE_SECRET}

Recommended: in `.env` (same directory as `docker-compose.yml`), set `COOKIE_SECRET=<at least 32-byte random string>` (generate with `openssl rand -hex 32`).
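The secret-generation step can be scripted; a minimal sketch, assuming `openssl` is available on the host:

```shell
# Generate a fixed COOKIE_SECRET and write it to .env (keep this file out of
# version control). `openssl rand -hex 32` yields 32 random bytes as 64 hex chars.
COOKIE_SECRET="$(openssl rand -hex 32)"
printf 'COOKIE_SECRET=%s\n' "$COOKIE_SECRET" > .env
```

Keeping the secret in `.env` rather than inline in the Compose file means it survives `docker compose pull`/`up` cycles unchanged.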
This mode is fully upgrade-compatible and works out of the box. If WS is not routed, the app falls back to HTTP polling automatically.
services:
neko-master:
image: foru17/neko-master:latest
container_name: neko-master
restart: unless-stopped
ports:
- "3000:3000" # Web UI
- "3002:3002" # WebSocket (for Nginx / Tunnel forwarding)
volumes:
- ./data:/app/data
# Local MMDB (optional, files should be downloaded into ./geoip)
- ./geoip:/app/data/geoip:ro
environment:
- NODE_ENV=production
- DB_PATH=/app/data/stats.db
      - COOKIE_SECRET=${COOKIE_SECRET}

Then run:

docker compose up -d

Open http://localhost:3000 to get started.
If you use the repository's built-in Compose file (default 3000/3001/3002), run the same command.
# Generate a fixed cookie secret first (for session persistence)
export COOKIE_SECRET="$(openssl rand -hex 32)"

# Minimal (only 3000)
docker run -d \
--name neko-master \
-p 3000:3000 \
-v $(pwd)/data:/app/data \
-e COOKIE_SECRET="$COOKIE_SECRET" \
--restart unless-stopped \
foru17/neko-master:latest
# Real-time WS (with reverse proxy)
docker run -d \
--name neko-master \
-p 3000:3000 \
-p 3002:3002 \
-v $(pwd)/data:/app/data \
-e COOKIE_SECRET="$COOKIE_SECRET" \
--restart unless-stopped \
  foru17/neko-master:latest

Open http://localhost:3000 to get started.
The frontend uses same-origin `/api` by default, so port 3001 is usually not required externally. For real-time WS, your reverse proxy/tunnel must be able to reach port `3002`. If not, the app falls back to ~5s HTTP polling.
For `docker run`, change external ports using `-p` mappings directly. Only if you use direct WS access (no reverse proxy) and the external WS port is not `3002`, also pass `-e WS_EXTERNAL_PORT=<external-ws-port>`.

Local MMDB lookup mode (optional): mount `-v $(pwd)/geoip:/app/data/geoip:ro`, then switch the source to Local in Settings -> Preferences -> IP Lookup Source.
Automatically detects port conflicts and configures everything:
# Using curl
curl -fsSL https://raw.githubusercontent.com/foru17/neko-master/main/setup.sh | bash
# Or using wget
wget -qO- https://raw.githubusercontent.com/foru17/neko-master/main/setup.sh | bash

The script will automatically:
- ✅ Download `docker-compose.yml`
- ✅ Check if default ports (3000/3001/3002) are in use
- ✅ Suggest available alternative ports
- ✅ Create the configuration file and start the service
# 1. Clone the repository
git clone https://github.com/foru17/neko-master.git
cd neko-master
# 2. Install dependencies
pnpm install
# 3. Prepare collector env (source mode reads apps/collector/.env)
cp apps/collector/.env.example apps/collector/.env
# 4. Start development services
pnpm dev

Open http://localhost:3000 to configure.
In source mode, the collector listens on `3001`/`3002` and the web app on `3000` by default. If you changed `API_PORT` (not 3001), set `API_URL` accordingly (for example `API_URL=http://localhost:4001`) so the web `/api` rewrite targets the correct API. `apps/collector/.env.local` takes precedence over `apps/collector/.env`.
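For example, moving the collector API to port 4001 in source mode would pair the two settings like this (hypothetical values, env-file style):

```shell
# apps/collector/.env (or .env.local, which takes precedence):
# move the collector API off the default 3001 -- hypothetical port
API_PORT=4001

# Web-side env: point the Next.js /api rewrite at the new collector address
API_URL=http://localhost:4001
```

The two values must stay in sync; a mismatched `API_URL` leaves the frontend rewriting `/api` to a dead port.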
Use Agent mode when you want one centralized Neko Master service and multiple remote devices (OpenWrt, Linux, macOS) collecting local gateway data. The agent runs near the gateway, pulls data, and reports to the panel; the panel never connects to the gateway directly.
Supported gateway types: Clash / Mihomo (WebSocket real-time) and Surge v5+ (HTTP polling).
- In the dashboard, go to Settings → Backends, add an `Agent` backend, and select the gateway type
- Click "View Agent Script" and copy the one-line install command, then run it on the target host:
# Clash / Mihomo gateway example
curl -fsSL https://raw.githubusercontent.com/foru17/neko-master/main/apps/agent/install.sh \
| env NEKO_SERVER='http://your-panel:3000' \
NEKO_BACKEND_ID='1' \
NEKO_BACKEND_TOKEN='ag_xxx' \
NEKO_GATEWAY_TYPE='clash' \
NEKO_GATEWAY_URL='http://127.0.0.1:9090' \
sh
# Surge gateway example
curl -fsSL https://raw.githubusercontent.com/foru17/neko-master/main/apps/agent/install.sh \
| env NEKO_SERVER='http://your-panel:3000' \
NEKO_BACKEND_ID='2' \
NEKO_BACKEND_TOKEN='ag_yyy' \
NEKO_GATEWAY_TYPE='surge' \
NEKO_GATEWAY_URL='http://127.0.0.1:9091' \
sh

After install, manage instances with `nekoagent`:
nekoagent list # list all instances
nekoagent status <instance> # check running state
nekoagent logs <instance> # tail live logs
nekoagent restart <instance> # restart
nekoagent upgrade           # global upgrade (CLI + binary)

The script auto-detects an existing installation: if `neko-agent` is already present, it only adds the new instance without re-downloading. Multiple instances can run on the same host (different `NEKO_INSTANCE_NAME`), each pointing to a different gateway.
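The add-instance behavior can be pictured as a simple presence check. This is an illustrative sketch of the decision described above, not the installer's actual code:

```shell
# If the neko-agent binary is already on PATH, the installer only registers a
# new instance; otherwise it performs a fresh install (binary + CLI).
if command -v neko-agent >/dev/null 2>&1; then
  MODE=add-instance
else
  MODE=fresh-install
fi
echo "$MODE"
```

On a host without `neko-agent` installed this prints `fresh-install`; on a host that already runs an agent it prints `add-instance`.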
- Overview: architecture, Direct vs Agent comparison, security model
- Quick Start: end-to-end setup from UI to running agent
- Install Guide: install methods, systemd / launchd autostart
- Configuration: full flag and env variable reference
- Release Flow: versioning and compatibility policy
- Troubleshooting: common errors and fixes
- Open http://localhost:3000
- The Gateway Configuration dialog will appear on first visit
- Fill in your network gateway (e.g., OpenClash) connection info:
- Name: Custom name (e.g., "Home Gateway")
- Type: Select `Clash / Mihomo`
- Host: Gateway backend address (e.g., `192.168.101.1`)
- Port: Gateway backend port (e.g., `9090`)
- Token: Fill in if a Secret is configured; otherwise leave empty
- Click "Add Backend" to save
- The system will automatically start collecting and analyzing traffic data
💡 Get Gateway Address: Go to your gateway control panel (e.g., OpenClash) → Enable "External Control" → Copy the API address
Neko Master supports connecting to Surge gateways for complete rule chain visualization and traffic analysis.
Enable HTTP remote API in your Surge configuration:
[General]
http-api = 127.0.0.1:9091
http-api-tls = false
http-api-web-dashboard = true

Or configure via Surge's graphical interface:
- HTTP Remote API: Settings → General → HTTP Remote API
- Port: Default `9091`
- Authentication: Setting a password is recommended for enhanced security
- Open Neko Master settings dialog
- Click "Add Backend"
- Fill in the connection info:
- Name: Custom name (e.g., "Surge Home")
- Type: Select `Surge`
- Host: IP address where Surge is running (e.g., `192.168.1.1` or `127.0.0.1`)
- Port: HTTP API port (default `9091`)
- Token: HTTP API password (if configured)
- Click "Test Connection" to verify the configuration
- Save the configuration
💡 Note: Surge uses HTTP polling to fetch data (unlike Clash's real-time WebSocket stream), so data refresh is delayed by approximately 2 seconds.
If you see a "port already in use" error, here are the solutions:

Create a `.env` file in the same directory as `docker-compose.yml`:
WEB_EXTERNAL_PORT=8080 # Change Web UI port
API_EXTERNAL_PORT=8081 # Change API port
WS_EXTERNAL_PORT=8082 # Change WebSocket external port (only for direct access)
COOKIE_SECRET=your-long-random-secret # Strongly recommended to keep this fixed

Then restart:
docker compose down
docker compose up -d

Now access http://localhost:8080.
ports:
- "8080:3000" # External 8080 → Internal 3000
- "8082:3002" # External 8082 → Internal 3002 (for proxy/tunnel WS forwarding)

Note: if you use direct WS access (no reverse proxy) and the external WS port is not `3002`, set `WS_EXTERNAL_PORT=<external-ws-port>`.
curl -fsSL https://raw.githubusercontent.com/foru17/neko-master/main/setup.sh | bash

The script will automatically detect and suggest available ports.
| Port | Purpose | External Required | Description |
|---|---|---|---|
| 3000 | Web UI | ✅ | Frontend entry point |
| 3001 | API | Optional | Frontend uses same-origin /api by default; usually no public exposure needed (default Compose maps it) |
| 3002 | WebSocket | Optional | Real-time push endpoint; recommended for reverse proxy/tunnel forwarding only (default Compose maps it) |
| Variable | Default | Purpose | When to set |
|---|---|---|---|
| `WEB_PORT` | `3000` | Web listen port (inside container) | Usually unchanged |
| `API_PORT` | `3001` | API listen port (inside container) | Usually unchanged |
| `COLLECTOR_WS_PORT` | `3002` | WS listen port (inside container) | Usually unchanged |
| `DB_PATH` | `/app/data/stats.db` | SQLite data path | Custom data path |
| `WEB_EXTERNAL_PORT` | `3000` | External web port mapping in `docker-compose.yml` | External web port changed |
| `API_EXTERNAL_PORT` | `3001` | External API port mapping in `docker-compose.yml` | Direct external API access needed |
| `WS_EXTERNAL_PORT` | `3002` | External WS port mapping in `docker-compose.yml`; also used for direct WS port inference | Direct WS access without proxy and external WS port changed |
| `NEXT_PUBLIC_API_URL` | empty | Override frontend API base URL (e.g. `https://api.example.com`) | API is not same-origin `/api` |
| `NEXT_PUBLIC_WS_URL` | empty | Override frontend WS URL (absolute URL or `/custom_ws`) | Custom WS path/domain |
| `NEXT_PUBLIC_WS_PORT` | `3002` | WS direct-connection fallback port (build-time only; setting it at Docker runtime has no effect, use `WS_EXTERNAL_PORT` instead) | Only for custom source builds |
| `API_URL` | `http://localhost:3001` | Next.js `/api` rewrite target (mainly source/custom builds) | API listen address changed |
| `COOKIE_SECRET` | auto-generated | Cookie signing secret; if not fixed, sessions can be invalidated after restart when the data dir is not persisted | Strongly recommended in production |
| `GEOIP_LOOKUP_PROVIDER` | `online` | IP geolocation source (`online` / `local`) | Default to local MMDB lookup |
| `GEOIP_ONLINE_API_URL` | `https://api.ipinfo.es/ipinfo` | Online IP geolocation API endpoint (must be compatible with the ipinfo.my response schema) | Set only when you deploy a compatible endpoint |
| `FORCE_ACCESS_CONTROL_OFF` | `false` | Force-disable access control (emergency recovery) | Temporary use only when the token is lost |
| `SHOWCASE_SITE_MODE` | `false` | Read-only showcase mode (blocks sensitive write operations) | Public demo sites only |
| Variable | Default | Description |
|---|---|---|
| `FLUSH_INTERVAL_MS` | `30000` | Buffer flush interval for collector writes |
| `FLUSH_MAX_BUFFER_SIZE` | `5000` | Max buffer entries before early flush |
| `REALTIME_MAX_MINUTES` | `180` | Realtime in-memory window size (minutes) |
| `REALTIME_RANGE_END_TOLERANCE_MS` | `120000` | End-time tolerance for range queries |
| `SURGE_POLICY_SYNC_INTERVAL_MS` | `600000` | Surge policy sync interval |
| `DB_RANGE_QUERY_CACHE_TTL_MS` | `8000` | Range-query cache TTL |
| `DB_HISTORICAL_QUERY_CACHE_TTL_MS` | `300000` | Historical-query cache TTL |
| `DB_RANGE_QUERY_CACHE_MAX_ENTRIES` | `1024` | Max range-query cache entries |
| `DB_RANGE_QUERY_CACHE_DISABLED` | empty | Set `1` to disable the range-query cache |
| `DEBUG_SURGE` | `false` | Enable Surge collector debug logs (`true`) |
- API client base: `runtime-config.API_URL` → `NEXT_PUBLIC_API_URL` → same-origin `/api`
- `/api` server-side rewrite target: `API_URL` (default `http://localhost:3001`, applied in Next.js rewrites)
- WS URL: `runtime-config.WS_URL` → `NEXT_PUBLIC_WS_URL` → auto candidates (when `runtime-config.WS_PORT` is set, the direct port is preferred; otherwise `/_cm_ws` is tried first)
- WS port: `runtime-config.WS_PORT` (from `WS_EXTERNAL_PORT`) → `NEXT_PUBLIC_WS_PORT` → `3002`
- In normal deployments, `NEXT_PUBLIC_WS_URL` is usually unnecessary unless you use a custom WS path/domain
NODE_ENV=production
DB_PATH=/app/data/stats.db
COOKIE_SECRET=<at least 32-byte random string>
# Optional: default to local MMDB lookup
# GEOIP_LOOKUP_PROVIDER=local
# Keep false in normal operation
# FORCE_ACCESS_CONTROL_OFF=false

Use `openssl rand -hex 32` to generate `COOKIE_SECRET`.
Additional recommendations:
- Mount persistent storage (for example `./data:/app/data`) to avoid data and secret loss.
- If using direct WS access and the external WS port is not `3002`, set `WS_EXTERNAL_PORT` accordingly.
- If the API port/address changes in a source deployment, update `API_URL` as well.
- For local MMDB lookup, mount `./geoip:/app/data/geoip:ro` and switch the source in Settings -> Preferences -> IP Lookup Source.
- MMDB files are large and are not bundled in the image. Download and place them in `./geoip` with fixed names: `GeoLite2-City.mmdb`, `GeoLite2-ASN.mmdb` (required), and `GeoLite2-Country.mmdb` (optional). Recommended source: https://github.com/P3TERX/GeoLite.mmdb.
Advanced Agent details (install, config, release, compatibility) are maintained under `docs/agent/*`.
SQLite is Neko Master's default storage engine and works well for most users. Consider enabling ClickHouse if you need:
- Very large datasets (hundreds of thousands of domain/IP entries)
- Fast aggregation queries over long time ranges (β₯ 7 days)
- Separation of historical stats from configuration/metadata storage
ClickHouse is entirely optional. SQLite remains the configuration and metadata store regardless of whether ClickHouse is enabled.
When ClickHouse is enabled, the system enters dual-write mode:
BatchBuffer.flush()
        │
        ├──→ SQLite (config / metadata, always written)
        └──→ ClickHouse (stats traffic data, dual-write)
                └── Buffer tables → SummingMergeTree async merge
The read source is controlled by `STATS_QUERY_SOURCE` (default: `sqlite`).
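One way to picture the `auto` read routing (recent data from ClickHouse, historical from SQLite) is a time-window check. This is a hypothetical sketch; the real routing lives in the collector and may use a different boundary:

```shell
# Route a range query by its start time: ranges beginning inside the recent
# window go to ClickHouse, older ranges to SQLite. The 180-minute window below
# mirrors REALTIME_MAX_MINUTES but is only an illustrative assumption.
route_query() {
  now=$1           # current epoch seconds
  range_start=$2   # query range start, epoch seconds
  window_s=$((180 * 60))
  if [ $((now - range_start)) -le "$window_s" ]; then
    echo clickhouse
  else
    echo sqlite
  fi
}

route_query 100000 95000   # recent range  -> clickhouse
route_query 100000 1000    # historical    -> sqlite
```

The point of this split is that aggregation over long ranges is where ClickHouse pays off, while SQLite stays authoritative for config and metadata.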
The repository's built-in docker-compose.yml already includes a ClickHouse service, gated by
profiles: [clickhouse] so it does not start by default. From the repository root, run:
docker compose --profile clickhouse up -d

ClickHouse data is persisted to `./data/clickhouse`, separate from the main app data directory.
If you use a custom docker-compose.yml (such as Scenario A/B above), add the ClickHouse
service block manually:
services:
neko-master:
# ... your existing config ...
environment:
# append to existing environment section:
- CH_ENABLED=${CH_ENABLED:-0}
- CH_HOST=${CH_HOST:-clickhouse}
- CH_PORT=${CH_PORT:-8123}
- CH_DATABASE=${CH_DATABASE:-neko_master}
- CH_USER=${CH_USER:-neko}
- CH_PASSWORD=${CH_PASSWORD:-neko_master}
- CH_WRITE_ENABLED=${CH_WRITE_ENABLED:-0}
- STATS_QUERY_SOURCE=${STATS_QUERY_SOURCE:-sqlite}
networks:
- neko-master-network
clickhouse:
image: clickhouse/clickhouse-server:24.8
container_name: neko-master-clickhouse
restart: unless-stopped
profiles: ["clickhouse"]
ports:
- "${CH_EXTERNAL_HTTP_PORT:-8123}:8123"
- "${CH_EXTERNAL_NATIVE_PORT:-9000}:9000"
volumes:
- ./data/clickhouse:/var/lib/clickhouse
environment:
- CLICKHOUSE_DB=${CH_DATABASE:-neko_master}
- CLICKHOUSE_USER=${CH_USER:-neko}
- CLICKHOUSE_PASSWORD=${CH_PASSWORD:-neko_master}
- CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1
networks:
- neko-master-network
healthcheck:
test: ["CMD-SHELL", "wget -q --spider http://127.0.0.1:8123/ping || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
networks:
neko-master-network:
driver: bridge

Add to your `.env` (same directory as `docker-compose.yml`):
# Enable ClickHouse connection
CH_ENABLED=1
# Enable dual-write
CH_WRITE_ENABLED=1
# Read source: sqlite (default) / auto (smart routing) / clickhouse (force)
STATS_QUERY_SOURCE=auto
# ClickHouse connection (defaults match docker-compose.yml, no change needed)
CH_HOST=clickhouse
CH_PORT=8123
CH_DATABASE=neko_master
CH_USER=neko
CH_PASSWORD=neko_master

Restart:

docker compose --profile clickhouse up -d

| Variable | Default | Description |
|---|---|---|
| `CH_ENABLED` | `0` | Enable ClickHouse connection (`1` to enable) |
| `CH_WRITE_ENABLED` | `0` | Enable dual-write (requires `CH_ENABLED=1`) |
| `CH_ONLY_MODE` | `0` | When CH is healthy, skip SQLite stats writes (CH-only mode) |
| `CH_HOST` | `clickhouse` | ClickHouse host address |
| `CH_PORT` | `8123` | ClickHouse HTTP port |
| `CH_DATABASE` | `neko_master` | Database name |
| `CH_USER` | `neko` | Username |
| `CH_PASSWORD` | `neko_master` | Password |
| `CH_SECURE` | `0` | Use HTTPS connection |
| `CH_REQUIRED` | `0` | Refuse to start if CH is unavailable |
| `CH_AUTO_CREATE_TABLES` | `1` | Auto-create tables on first start |
| `CH_WRITE_MAX_PENDING_BATCHES` | `200` | Max pending write batches |
| `CH_UNHEALTHY_THRESHOLD` | `5` | Consecutive failures before marking unhealthy (auto-fallback to SQLite) |
| `STATS_QUERY_SOURCE` | `sqlite` | Read source: `sqlite` / `auto` / `clickhouse` |
| `CH_COMPARE_ENABLED` | `0` | Enable SQLite ↔ ClickHouse consistency check |
| `CH_EXTERNAL_HTTP_PORT` | `8123` | ClickHouse HTTP external port (Compose mapping) |
| `CH_EXTERNAL_NATIVE_PORT` | `9000` | ClickHouse native external port (Compose mapping) |
Health & Fallback: After `CH_UNHEALTHY_THRESHOLD` consecutive write failures, the system automatically marks ClickHouse as unhealthy and resumes SQLite writes, even when `CH_ONLY_MODE=1`. Once ClickHouse recovers, it is re-marked healthy and logged.
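The fallback behavior can be sketched as a failure counter; illustrative only, not the collector's actual implementation:

```shell
FAILS=0
THRESHOLD=5        # mirrors CH_UNHEALTHY_THRESHOLD
STATE=healthy

record_write() {   # $1 = ok | fail
  if [ "$1" = ok ]; then
    FAILS=0
    STATE=healthy        # recovery re-marks ClickHouse healthy
  else
    FAILS=$((FAILS + 1))
    if [ "$FAILS" -ge "$THRESHOLD" ]; then
      STATE=unhealthy    # fall back to SQLite writes
    fi
  fi
}

for r in fail fail fail fail fail; do record_write "$r"; done
echo "$STATE"   # unhealthy: 5 consecutive failures crossed the threshold
record_write ok
echo "$STATE"   # healthy: a single successful write resets the counter
```

Note that any successful write resets the counter, so intermittent single failures never trigger the fallback.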
Upgrading from a SQLite-only version? Your data is safe. The SQLite file (`./data/stats.db`) is fully preserved. Here is the recommended gradual migration path:
CH_ENABLED=1
CH_WRITE_ENABLED=1
STATS_QUERY_SOURCE=sqlite # Keep reading from SQLite while CH accumulates data

Start and watch [ClickHouse Writer] logs to confirm successful writes.
STATS_QUERY_SOURCE=auto # Smart routing: recent data from CH, historical from SQLite
# or
STATS_QUERY_SOURCE=clickhouse # Force all reads to ClickHouse

To move historical SQLite stats into ClickHouse:
# Standard migration (truncate CH then re-import, with consistency check)
./scripts/ch-migrate-docker.sh
# Append mode (keep existing CH data, incremental import)
./scripts/ch-migrate-docker.sh --append
# Specific time window
./scripts/ch-migrate-docker.sh --from 2026-02-01T00:00:00Z --to 2026-02-20T00:00:00Z

Once ClickHouse is running stably, stop SQLite stats writes:
CH_ONLY_MODE=1

Even with `CH_ONLY_MODE=1`, if ClickHouse becomes unhealthy the system automatically falls back to SQLite writes, with no data loss.
You can always roll back completely:
CH_ENABLED=0
CH_WRITE_ENABLED=0
CH_ONLY_MODE=0
STATS_QUERY_SOURCE=sqlite

Restart and everything returns to pure SQLite mode. Historical data remains intact.
Recommended approach: keep Web and WS under the same domain, with path routing:
/ β 3000, /_cm_ws β 3002.
server {
listen 443 ssl http2;
server_name neko.example.com;
location / {
proxy_pass http://<neko-master-host>:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ^~ /_cm_ws {
proxy_pass http://<neko-master-host>:3002;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 86400;
proxy_send_timeout 86400;
proxy_buffering off;
}
}

Optional env override:

# Not required by default (already /_cm_ws)
# NEXT_PUBLIC_WS_URL=/custom_ws

`~/.cloudflared/config.yml`:
tunnel: <your-tunnel-name-or-id>
credentials-file: /path/to/<credentials>.json
ingress:
- hostname: neko.example.com
path: /_cm_ws*
service: http://localhost:3002
- hostname: neko.example.com
path: /*
service: http://localhost:3000
- service: http_status:404

Run:

cloudflared tunnel --config ~/.cloudflared/config.yml run <your-tunnel-name-or-id>

For Zero Trust dashboard-managed routes (token mode), configure the same two routes and keep `/_cm_ws*` above `/*`.
- Do not use `ws` (without a leading slash) as the WS path; it can overmatch and cause `/_next/static/...` → `426 Upgrade Required`
- The WS route must be above the catch-all `/*`
- `NEXT_PUBLIC_WS_URL` is optional by default; if customized, restart the frontend/container after changes
- Mapping only `3000` still works, but falls back to HTTP polling (~5s), with less real-time responsiveness
- `beacon.min.js` failures (Cloudflare analytics script) are typically unrelated to app API/WS data flow
- No extra `/api` reverse-proxy rule is required in most setups; the frontend uses same-origin `/api` and the app handles internal forwarding to `3001`
Note: `/_next/static/... 426 Upgrade Required` is common in misconfigured reverse proxy / tunnel setups; it is uncommon in direct local access without a proxy.
Docker images support both linux/amd64 and linux/arm64.
Data is stored in /app/data inside the container. Mount it to host to prevent data loss:
volumes:
- ./data:/app/data

# Pull the latest image and restart
docker compose pull
docker compose up -d

Neko Master supports access authentication to protect dashboard data.
- Set a fixed `COOKIE_SECRET` (otherwise sessions may be invalidated after restart).
- Do not keep `FORCE_ACCESS_CONTROL_OFF=true` enabled in normal operation.
- Use `SHOWCASE_SITE_MODE=true` only for public demo environments (write operations are restricted).
Example:
COOKIE_SECRET=<at least 32-byte random string>
# FORCE_ACCESS_CONTROL_OFF=false
# SHOWCASE_SITE_MODE=false

- Open the dashboard and click "Settings" in the lower-left sidebar.
- Go to the "Security" tab.
- Enable/disable access control and set your token.
If you forget the token, temporarily set `FORCE_ACCESS_CONTROL_OFF=true` to enter emergency mode.
- Add to `docker-compose.yml`:

  environment:
    - FORCE_ACCESS_CONTROL_OFF=true
- Restart:

  docker compose up -d
- Open the dashboard and reset the token in "Settings -> Security".
- Remove this env var immediately after the reset, then restart again.
- Stop and remove the container:

  docker stop neko-master
  docker rm neko-master
- Re-run with the emergency flag:

  docker run -d \
    --name neko-master \
    -p 3000:3000 \
    -v $(pwd)/data:/app/data \
    -e FORCE_ACCESS_CONTROL_OFF=true \
    foru17/neko-master:latest

- Reset the token, then remove this flag and restart normally.
A: Yes. Core features still work. If WS is not routed, the app automatically falls back to HTTP polling. For the full realtime experience, route `/_cm_ws` to `3002`.
A: Create/update `.env` (same directory as `docker-compose.yml`):
WEB_EXTERNAL_PORT=8080
API_EXTERNAL_PORT=8081
WS_EXTERNAL_PORT=8082

Then restart:
docker compose down
docker compose up -d

A: Usually because `COOKIE_SECRET` is not fixed or the data directory is not persisted.
- Set a fixed `COOKIE_SECRET`
- Mount `./data:/app/data`
A: Create `./geoip` in your project directory (same level as `docker-compose.yml` is recommended), then place:

- `GeoLite2-City.mmdb` (required)
- `GeoLite2-ASN.mmdb` (required)
- `GeoLite2-Country.mmdb` (optional)
Recommended source: https://github.com/P3TERX/GeoLite.mmdb.
Inside the container, the fixed lookup path is `/app/data/geoip`, so keep the mount `./geoip:/app/data/geoip:ro`. To update later, just replace the files in the host `./geoip`.
A: Check:
- External control is enabled on gateway side
- Host/port is correct
- Token/Secret is correct (if configured)
- Container network can reach gateway
A: Backup first:
cp -r ./data ./data-backup-$(date +%Y%m%d)

Restore:
docker compose down
cp -r ./data-backup-YYYYMMDD/. ./data/
docker compose up -d

If you want to quickly understand the system design depth, read in this order:
- System Architecture Diagram: end-to-end layering and module responsibilities β docs/architecture.en.md
- Data Flow: Clash / Surge collection pipelines and aggregation
- Data Model & Storage: SQLite schema, ClickHouse Buffer tables, retention policy
- Realtime Channel Design: `RealtimeStore` merge strategy and WS push
Full documentation index: docs/README.md
This documentation covers the core design of collection, aggregation, caching, realtime push, and multi-backend management.
This project uses GitHub Issue Templates (Bug / Feature / Support).
Please include at least:
- Deployment method (Compose / Docker Run / Source)
- Version info (image tag or commit)
- Key env vars (masked, e.g. `COOKIE_SECRET=***`)
- Reproduction steps and expected vs actual behavior
- Key logs (`docker logs`, browser console, network errors)
neko-master/
├── docker-compose.yml          # Docker Compose config
├── Dockerfile                  # Docker image build
├── setup.sh                    # One-click setup script
├── docker-start.sh             # Docker container startup script
├── start.sh                    # Source code dev startup script
├── docs/                       # Documentation (see docs/README.md)
│   ├── README.md               # Documentation index (English default)
│   ├── README.zh.md            # Documentation index (Chinese)
│   ├── README.en.md            # Documentation index (English mirror)
│   ├── architecture.md         # System architecture (Chinese)
│   ├── architecture.en.md      # System architecture (English)
│   ├── release-checklist.md
│   ├── agent/                  # Agent docs (bilingual)
│   │   ├── overview.md / overview.en.md
│   │   ├── quick-start.md / quick-start.en.md
│   │   ├── install.md / install.en.md
│   │   ├── config.md / config.en.md
│   │   ├── release.md / release.en.md
│   │   └── troubleshooting.md / troubleshooting.en.md
│   ├── research/               # Research reports
│   └── dev/                    # Internal development docs
├── assets/                     # Screenshots and icons
├── apps/
│   ├── collector/              # Data collection service (Node.js + WebSocket)
│   ├── agent/                  # Agent daemon (Go)
│   └── web/                    # Next.js frontend app
└── packages/
    └── shared/                 # Shared types and utilities
- Frontend: Next.js 16 + React 19 + TypeScript
- Styling: Tailwind CSS + shadcn/ui
- Charts: Recharts
- i18n: next-intl
- Backend: Node.js + Fastify + WebSocket
- Database: SQLite (better-sqlite3) + ClickHouse (optional)
- Build: pnpm + Turborepo
Contributions are welcome!
- π Submit Bug
- π‘ Request Feature
- π§ Contribute Code
Made with ❤️ by @foru17

If this project helps you, please consider giving it a ⭐





