
af: layered config (global / project-shared / project-local) #207

Merged
jlaneve merged 2 commits into main from worktree-af-project-config
May 4, 2026
Conversation

@jlaneve (Contributor) commented May 4, 2026

summary

af now reads and writes config across three scopes — global, project-shared, project-local — colocated with the Astro CLI's existing ~/.astro/ directory. The model mirrors git config (system / global / local). This lets agents run af instance discover inside a specific astro project without polluting the user's global instance list, and lets a team commit their project's deployment list to .astro/config.yaml so a fresh clone + astro login is enough to be productive.

why

today, every af instance ... operation reads and writes ~/.af/config.yaml. That has two real problems:

  1. agents pollute the global list. An agent running inside one astro project that calls af instance discover will dump every deployment the user has access to (often dozens) into the user's global config, mixing with deployments from unrelated projects.
  2. no project boundary. There's no way to say "these are the deployments that belong to this project," so onboarding a teammate involves re-running discover and picking the right subset by hand.

the mental model

scope          | file                            | committed?      | use for
global         | ~/.astro/config.yaml            | n/a (per-user)  | personal default deployments, localhost
project-shared | <root>/.astro/config.yaml       | yes             | the team's deployment inventory for this project
project-local  | <root>/.astro/config.local.yaml | no (gitignored) | personal current-instance and per-developer overrides

<root> is found by walking up from cwd looking for a .astro/ directory (same marker astro-cli already uses). Once you're in a project, af instance discover populates the team's deployment list directly into the committable file. The instances are stored with astro_pat auth (only context + deployment_id, no token), so committing the file leaks nothing — every developer's astro login resolves their own PAT at request time.
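
a minimal sketch of that walk-up, in the spirit of the new discover_project_root() in config/scope.py (the PR text doesn't show its exact signature):

from pathlib import Path
from typing import Optional

def discover_project_root(start: Optional[Path] = None) -> Optional[Path]:
    # a deleted cwd raises FileNotFoundError from Path.cwd(); there is no
    # filesystem position to walk from, so layering is skipped gracefully
    # (see the second commit below)
    try:
        current = (start or Path.cwd()).resolve()
    except FileNotFoundError:
        return None
    for candidate in (current, *current.parents):
        # .astro must be a directory: a stray .astro *file* is not a
        # project marker (the smoke battery covers that case)
        if (candidate / ".astro").is_dir():
            return candidate
    return None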

read precedence (most-specific wins): current-instance is read from project-local, then global. Instance lookup: project-local → project-shared → global; same-named entries in narrower scopes shadow broader ones.
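
the lookup, as an illustrative sketch with hypothetical names (the real composition lives in LayeredConfig, described below):

def lookup_instance(name, layers):
    # layers ordered most-specific first: project-local, project-shared,
    # global; the first hit wins, so narrower scopes shadow broader ones
    for layer in layers:
        if name in layer.instances:
            return layer.instances[name]
    raise KeyError(f"no instance named {name!r} in any scope")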

write routing (default, no flags; a sketch in Python follows the list):

  • add → project-shared inside a project, else global
  • use → project-local (each developer can target a different deployment without touching the committed file)
  • delete → most-specific scope that has the name (rerun to peel further scopes)
  • discover → project-shared inside a project, else global
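
a minimal sketch of those defaults; the Scope values echo the new config/scope.py enum, but the function name here is hypothetical:

from enum import Enum

class Scope(Enum):
    GLOBAL = "global"
    PROJECT_SHARED = "project-shared"
    PROJECT_LOCAL = "project-local"

def default_write_scope(command: str, in_project: bool) -> Scope:
    # explicit --global / --project / --local flags override this default
    if command in ("add", "discover"):
        return Scope.PROJECT_SHARED if in_project else Scope.GLOBAL
    if command == "use":
        return Scope.PROJECT_LOCAL
    # delete has no fixed scope: it targets the most-specific scope
    # that currently holds the name
    raise ValueError(f"no default scope for {command!r}")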

how to use it

setting up a project (the main use case)

inside an astro project:

cd my-airflow-project
astro login                                  # if you haven't already
af instance discover astro --dry-run         # preview what'll get added
af instance discover astro                   # writes 30 deployments to .astro/config.yaml
git add .astro/config.yaml && git commit     # ship the deployment inventory
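
after discover, the committed file might look roughly like this (field names here are illustrative; the description above only pins down that astro_pat entries carry a context and deployment_id, never a token):

instances:
  prod:
    url: https://prod.example.com
    auth:
      type: astro_pat
      context: astronomer.io
      deployment_id: clXYZ123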

teammate clones the repo:

git clone <repo> && cd <repo>
astro login
af instance list                             # 30 deployments already there
af instance use prod                         # writes current-instance to .astro/config.local.yaml (gitignored)
af dags list                                 # works

scope flags (override the default)

mutually-exclusive --global / --project / --local flags on add / use / delete / discover:

af instance add localhost-dev --url http://localhost:8080 --global
af instance use staging --local              # implicit, but explicit also works
af instance delete prod --global             # only delete the global copy
af instance add scratch --url ... --token literal-creds --local  # gitignored, fine to use literals

af instance show <name> — answer "where is this defined?"

af instance show prod
# Instance: prod
# Scope: project (/Users/julian/proj/.astro/config.yaml)
# URL: https://prod.example.com
# Auth: astro pat (astronomer.io)
# Deployment ID: clXYZ123

mirrors git config --show-origin for cases where the same name lives in multiple scopes (most-specific wins; the rest are shadowed).

af migrate — move from ~/.af/config.yaml to ~/.astro/config.yaml

if you've used af before this PR, your config lives at ~/.af/config.yaml. On first run after upgrade, af reads from the legacy path with a one-time stderr deprecation note. Run af migrate to do the migration explicitly:

af migrate
# {"status": "migrated",
#  "from": "/Users/you/.af/config.yaml",
#  "to": "/Users/you/.astro/config.yaml",
#  "backup": "/Users/you/.af/config.yaml.bak"}

idempotent. The migration preserves any astro-cli content already in ~/.astro/config.yaml via the same merge logic save() uses. Your old file is renamed to .bak.
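
that merge, sketched with a hypothetical helper name (the real logic lives in ConfigManager.save(), see "what changed under the hood" below):

AF_OWNED_KEYS = ("instances", "current-instance")

def merge_af_keys(existing: dict, af_state: dict) -> dict:
    # overwrite only af-owned top-level keys; astro-cli keys such as
    # project:, cloud:, contexts: pass through untouched
    merged = dict(existing)
    for key in AF_OWNED_KEYS:
        if key in af_state:
            merged[key] = af_state[key]
    # telemetry merges per sub-key so astro-cli's notice_shown survives
    if "telemetry" in af_state:
        merged["telemetry"] = {**existing.get("telemetry", {}), **af_state["telemetry"]}
    return merged

# writing with yaml.safe_dump(..., sort_keys=True) keeps cross-tool
# writes from churning diffs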

single-file mode (escape hatch)

AF_CONFIG=<path> (or --config <path>) bypasses layering entirely. The astro otto wrapper's AF_CONFIG=/dev/null neutralize-config sentinel works exactly as before.

what changed under the hood

  • config/loader.py — ConfigManager.save() now reads existing file content and merges only af-owned top-level keys (instances, current-instance), preserving everything else (project:, cloud:, contexts: from astro-cli). Telemetry is sub-key merged so astro-cli's notice_shown survives. Output is sort_keys=True so cross-tool writes don't churn diffs. File mode is preserved on overwrites and tightened to 0600 on creation. New create_default_if_missing flag for project layers (default True preserves old behavior for the global file)
  • config/models.py — dropped validate_references (in layered world, current-instance can legitimately point to a sibling-scope instance), relaxed Telemetry to extra="ignore"
  • config/scope.py (new) — Scope enum + discover_project_root() walk-up logic
  • config/layered.py (new) — LayeredConfig composes the three managers; handles merged-view reads, scope-routed writes, dangling current-instance cleanup across scopes
  • cli/context.py — _load_from_config() switched to LayeredConfig
  • cli/instances.py — every command moved to LayeredConfig. New --global/--project/--local flags. New show command. New SCOPE column in list. Discover commands fail-fast on invalid scope before doing API/scan work
  • cli/main.py — new top-level migrate command

af no longer polices what credentials you put in which file. Convention is gitignore, not tool gating — same as git/terraform/kubectl/aws-cli/gcloud.

test plan

  • 555 unit + CLI tests passing (added 81 new ones across test_scope.py, test_layered.py, test_cli_instances.py, test_cli_migrate.py, plus expansions to test_config.py)
  • adversarial smoke battery: AF_CONFIG=/dev/null sentinel, broken project YAML, .astro-as-file, cwd=$HOME (no project layering), --project outside project (rejects upfront, no API/scan work done), mutex flag rejection, dangling current-instance cross-scope cleanup
  • live end-to-end against my real Astro account in a sandbox: discover astro --project --dry-run previews 30 deployments without writing, real discover astro --project writes them with correct schema, af config version against a HEALTHY discovered deployment returns the live response (proves PAT resolution from the merged ~/.astro/config.yaml works at request time)
  • cross-tool round-trip: foreign keys (project, contexts, future astro-cli keys) injected mid-stream survive subsequent af instance use / add writes
  • af migrate happy path, idempotent (already-migrated / nothing-to-migrate), .bak numbering when .bak already exists, malformed legacy YAML errors cleanly

known follow-ups (not blocking)

  • af instance use with no name in a non-tty raises a simple_term_menu traceback. Pre-existing; should detect sys.stdin.isatty() and error cleanly (a possible guard is sketched after this list)
  • "Warning: Failed to load config" fires on every af invocation when current-instance has a token referencing a missing env var, even for instance commands that don't need the adapter. cli/main.py:init_context() doesn't lazy-init
  • Optional: ship a .gitignore template addition for .astro/config.local.yaml (was deferred per discussion)
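
the guard for the first item could be as small as this sketch (function name and message are hypothetical):

import sys

def require_tty_for_menu() -> None:
    # fail cleanly instead of letting simple_term_menu traceback when
    # stdin is a pipe or /dev/null
    if not sys.stdin.isatty():
        raise SystemExit("af instance use: pass an instance name when stdin is not a TTY")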

🤖 Generated with Claude Code

jlaneve and others added 2 commits May 3, 2026 22:43
splits af config across three scopes co-located with astro-cli's tree
(~/.astro and project .astro/), so agents running `af instance discover`
inside a project no longer pollute the user's global instance list, and
team-shared deployment lists can be committed alongside .astro/config.yaml

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Path.cwd() raises FileNotFoundError when the cwd was deleted out from
under the process. The walk-up needs to catch that — there's no
filesystem position to walk from, so layering just gracefully skips.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
jlaneve marked this pull request as ready for review May 4, 2026 13:22
@schnie (Member) left a comment

Makes sense. Consolidates on the astro config location and adds project and project-local configs for better usage with Otto across many projects.

jlaneve merged commit 94a0c49 into main May 4, 2026
10 checks passed
jlaneve deleted the worktree-af-project-config branch May 4, 2026 14:12
jlaneve added a commit that referenced this pull request May 4, 2026
## Summary
- The layered-config PR (#207) moved the default global path from
`~/.af/config.yaml` to `~/.astro/config.yaml` but two user-facing
strings in `cli/main.py` still pointed at the old location.
- This brings `af --help` in line with the README and skill docs.

## Test plan
- [x] `af --help` shows `(default: ~/.astro/config.yaml)`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.7 (1M context) <[email protected]>