test-design-orchestrator is an AgentSkill for turning requirements into structured, traceable software test artifacts. It is built as a composite skill: the root skill chooses the best-fit black-box test design technique, routes execution to a technique-specific subskill, and optionally formats the result for downstream tooling.
This repository is intentionally split into two layers:
- the installable skill itself: `SKILL.md`, subskill folders, `references/`, `assets/templates/`, `agents/openai.yaml`
- repository-facing support material: `README.md`, `examples/`, `evals/`, and validation scripts
- Selects an appropriate test design technique for a given requirement shape.
- Generates test artifacts using boundary value analysis, equivalence partitioning, decision tables, classification trees with n-wise reduction, state transitions, acceptance criteria, or use case analysis.
- Preserves traceability, assumptions, and coverage notes.
- Formats output for review-ready markdown, BDD feature files, Xray-compatible Gherkin feature bundles, Zephyr Scale CSV, or TestLink-oriented import workflows.
- It does not replace domain requirements analysis. Missing business rules are surfaced, not invented.
- It does not provide shared-memory infrastructure. Cross-agent memory belongs in a separate shared-memory skill.
- It does not claim support for every ALM tool beyond the bundled targets and references.
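As a concrete illustration of one bundled technique, here is a minimal boundary value analysis sketch. The rule shape (an inclusive numeric range) and the function name are assumptions for illustration, not the subskill's actual input schema.

```python
# Hypothetical sketch: deriving classic two-value BVA test points for an
# inclusive numeric range, in the spirit of the boundary-value-analysis
# subskill. Field names and rule shape are illustrative assumptions.

def boundary_values(minimum: int, maximum: int) -> dict[str, list[int]]:
    """Return valid and invalid boundary test points for [minimum, maximum]."""
    return {
        "valid": [minimum, minimum + 1, maximum - 1, maximum],
        "invalid": [minimum - 1, maximum + 1],
    }

# Example: an "age must be 18-65" business rule.
points = boundary_values(18, 65)
print(points["valid"])    # [18, 19, 64, 65]
print(points["invalid"])  # [17, 66]
```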
.
|-- SKILL.md
|-- agents/
| `-- openai.yaml
|-- acceptance-criteria-to-test-cases/
|-- boundary-value-analysis/
|-- classification-tree-nwise/
|-- decision-table/
|-- equivalence-partitioning/
|-- state-transition/
|-- technique-selector/
|-- test-case-formatter/
|-- use-case-testing/
|-- references/
|-- assets/templates/
|-- examples/
|-- evals/
`-- scripts/
- Normalize the request into inputs, rules, actors, states, and desired outputs.
- Choose one primary test design technique with a defensible rationale.
- Generate the test artifacts using the matching subskill.
- Format only when the user asks for a specific target representation.
- Return assumptions, traceability, and residual risk with the artifact unless the user requested artifact-only output.
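The "choose one primary technique" step can be sketched as a keyword-based router. The heuristics below are invented for illustration; the technique names mirror the bundled subskill folders, but the real logic in `technique-selector/` may differ substantially.

```python
# Hypothetical routing sketch for technique selection. The keyword rules
# are assumptions; only the returned technique names come from the repo.

def select_technique(requirement: str) -> str:
    text = requirement.lower()
    if "state" in text or "transition" in text:
        return "state-transition"
    if "if " in text and " and " in text:   # interacting conditions
        return "decision-table"
    if any(w in text for w in ("range", "between", "limit", "maximum", "minimum")):
        return "boundary-value-analysis"
    if "given" in text and "then" in text:  # acceptance-criteria style
        return "acceptance-criteria-to-test-cases"
    return "equivalence-partitioning"       # defensible default

print(select_technique("Orders move from pending to shipped state"))
# state-transition
```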
Install from GitHub with npx skills:
npx skills add <owner>/<repo> --skill test-design-orchestrator

If your installer supports direct GitHub URLs, this form is also commonly used:

npx skills add https://github.com/<owner>/<repo> --skill test-design-orchestrator

After installation, restart Codex so it reloads the newly installed skill.
For a manual install, place this folder in your skill directory and publish it under a lowercase hyphen-case folder name such as test-design-orchestrator.
The required installable files are:
- `SKILL.md`
- subskill `SKILL.md` files
- `agents/openai.yaml`
Everything else is optional but strongly recommended for maintainability.
Useful inputs include:
- raw requirements
- business rules
- user stories and acceptance criteria
- use cases
- lifecycle or state descriptions
- a requested export target such as markdown, BDD, Xray Gherkin, Zephyr Scale, or TestLink
Typical outputs include:
- technique recommendation with rationale
- partition tables and representative values
- decision tables and optimized rules
- transition paths and invalid-transition tests
- scenario lists and detailed test cases
- import-oriented formatted artifacts
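To make the "decision tables and optimized rules" output concrete, here is a sketch that enumerates one rule column per combination of boolean conditions. The condition names are invented; real decision tables would also attach actions and collapse redundant rules.

```python
# Hypothetical sketch of decision-table rule enumeration. Condition names
# are illustrative assumptions, not part of the skill's schema.
from itertools import product

def decision_table(conditions: list[str]) -> list[dict[str, bool]]:
    """One rule (column) per combination of condition truth values."""
    return [dict(zip(conditions, combo))
            for combo in product([True, False], repeat=len(conditions))]

rules = decision_table(["is_member", "cart_over_100", "has_coupon"])
print(len(rules))   # 8 rules for 3 boolean conditions
print(rules[0])     # {'is_member': True, 'cart_over_100': True, 'has_coupon': True}
```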
Runtime memory:
- ephemeral only
- used for requirement normalization, assumptions, traceability IDs, and current output format
Project-local persistent memory:
- optional
- only appropriate when the user explicitly wants a reusable test-design brief saved in their project
Shared memory:
- intentionally excluded from this repository
- integrate an external shared-memory skill if cross-agent reuse is required
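The ephemeral fields listed above could be carried in a per-run record along these lines. The field names are illustrative assumptions, not the skill's actual memory schema.

```python
# Hypothetical in-run memory record matching the ephemeral fields above:
# normalized requirement, assumptions, traceability IDs, output format.
from dataclasses import dataclass, field

@dataclass
class RunMemory:
    normalized_requirement: str = ""
    assumptions: list[str] = field(default_factory=list)
    traceability_ids: list[str] = field(default_factory=list)
    output_format: str = "markdown"

mem = RunMemory(normalized_requirement="Age must be 18-65")
mem.assumptions.append("Ages are integers")
mem.traceability_ids.append("REQ-001")
```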
Run the repository checks:
python scripts/validate-skill-repo.py
python scripts/format-validator.py bdd examples/checkout-feature.feature
python scripts/package-xray-features.py examples/xray-checkout.feature --output dist/xray-features.zip

Use the evaluation fixtures:
- `evals/trigger-queries.json` for description triggering checks
- `evals/technique-selection-cases.json` for manual or agent-assisted forward testing
Use the prompt examples in examples/ to sanity-check the end-to-end workflow.
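A trigger check against the fixtures might look like the sketch below. The JSON layout (a list of objects with `query` and `should_trigger` keys) and the keyword list are assumptions; the real fixture format may differ.

```python
# Hypothetical trigger-check sketch. The fixture schema and the trigger
# keywords are assumptions, not the repository's actual contract.
import json

def load_trigger_cases(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)

def triggers(query: str,
             keywords: tuple[str, ...] = ("test", "boundary", "decision table")) -> bool:
    q = query.lower()
    return any(k in q for k in keywords)

# cases = load_trigger_cases("evals/trigger-queries.json")
# misses = [c for c in cases if triggers(c["query"]) != c["should_trigger"]]
```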
- `assets/templates/zephyr-scale.csv.j2` for Zephyr Scale CSV generation
- `assets/templates/xray-gherkin.feature.j2` plus `scripts/package-xray-features.py` for Xray Gherkin feature import bundles
- `references/testlink-import-file-formats.pdf` for TestLink import guidance
- `assets/templates/bdd-feature.j2` plus the BDD protocol references for feature-file output
These integrations are supported only to the extent that the bundled templates and references cover them. If a target tool requires fields that are not available in the input, the skill should stop and ask for them.
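For orientation, the Zephyr Scale CSV target boils down to rendering rows like the stdlib sketch below. The column names here are illustrative; `assets/templates/zephyr-scale.csv.j2` defines the authoritative contract.

```python
# Hypothetical sketch of the CSV shape a Zephyr Scale import expects.
# Column names are assumptions; the bundled .j2 template is authoritative.
import csv
import io

def to_zephyr_csv(cases: list[dict]) -> str:
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["Name", "Objective", "Test Script"])
    writer.writeheader()
    writer.writerows(cases)
    return buf.getvalue()

csv_text = to_zephyr_csv([
    {"Name": "Valid lower boundary", "Objective": "Accept age 18",
     "Test Script": "Given age 18, submission succeeds"},
])
print(csv_text.splitlines()[0])  # Name,Objective,Test Script
```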
Keep changes tightly scoped and auditable:
- improve trigger quality by updating `SKILL.md` and `evals/trigger-queries.json` together
- keep technique guidance in `references/` concise and technique-specific
- keep renderable output contracts in `assets/templates/`
- add or update example prompts whenever the workflow changes materially
This repository is released under the MIT License. See LICENSE.
Before publishing to GitHub or a skill registry:
- rename the repository folder to lowercase hyphen-case
- run `python scripts/validate-skill-repo.py`
- optionally run the upstream skill quick validator against the root folder