feat: Automatically recalculate averages after each grade entry
Teachers need up-to-date averages immediately after grades are published or modified, without waiting for a nightly batch. The system recalculates via synchronous Domain Events: assessment statistics (min/max/mean/median), weighted subject averages (normalized to /20), and each student's overall average. Results are stored in denormalized tables with a Redis cache (5-minute TTL). Three API endpoints expose the data with role-based access control. A console command backfills historical data at deployment.
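The synchronous recalculation described in the message can be sketched as a minimal domain-event handler. All names here (`GradePublished`, `AverageRecalculator`) are hypothetical illustrations, not classes from this codebase; persistence, Redis caching, and role-based access control are omitted.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class GradePublished:
    """Hypothetical domain event emitted when a grade is published or modified."""
    student_id: str
    subject: str
    score: float     # raw score
    out_of: float    # grading scale; normalized to /20 below
    weight: float = 1.0


class AverageRecalculator:
    """Synchronous event handler: recomputes the subject average on every event."""

    def __init__(self) -> None:
        self.grades: List[GradePublished] = []  # stand-in for the denormalized table

    def handle(self, event: GradePublished) -> float:
        self.grades.append(event)
        return self.subject_average(event.student_id, event.subject)

    def subject_average(self, student_id: str, subject: str) -> float:
        rows = [g for g in self.grades
                if g.student_id == student_id and g.subject == subject]
        total_weight = sum(g.weight for g in rows)
        # Normalize each score to /20 before applying coefficients
        return sum((g.score / g.out_of) * 20 * g.weight for g in rows) / total_weight
```

For example, a 15/20 grade with coefficient 2 followed by an 8/10 grade with coefficient 1 gives (30 + 16) / 3 ≈ 15.33.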
This commit is contained in:
.agents/skills/bmad-module-builder/SKILL.md — new file, 32 lines
@@ -0,0 +1,32 @@
---
name: bmad-module-builder
description: Plans, creates, and validates BMad modules. Use when the user requests to 'ideate module', 'plan a module', 'create module', 'build a module', or 'validate module'.
---

# BMad Module Builder

## Overview

This skill helps you bring BMad modules to life — from the first spark of an idea to a fully scaffolded, installable module. It offers three paths:

- **Ideate Module (IM)** — A creative brainstorming session that helps you imagine what your module could be, decide on the right architecture (agent vs. workflow vs. both), and produce a detailed plan document. The plan then guides you through building each piece with the Agent Builder and Workflow Builder.
- **Create Module (CM)** — Takes an existing folder of built skills and scaffolds the setup infrastructure (module.yaml, module-help.csv, setup skill) that makes it a proper installable BMad module. Supports `--headless` / `-H`.
- **Validate Module (VM)** — Checks that a module's setup skill is complete and correct — every skill has its capabilities registered, entries are accurate and well-crafted, and structural integrity is sound.

**Args:** Accepts `--headless` / `-H` for the CM path only, an initial description for IM, or a path to a skills folder for CM/VM.

## On Activation

Load available config from `{project-root}/_bmad/config.yaml` and `{project-root}/_bmad/config.user.yaml` (root level and `bmb` section). If config is missing, let the user know `bmad-builder-setup` can configure the module at any time. Use sensible defaults for anything not configured.

Detect the user's intent:

- **Ideate / Plan** keywords or no path argument → Load `./references/ideate-module.md`
- **Create / Scaffold** keywords or a folder path → Load `./references/create-module.md`
- **Validate / Check** keywords → Load `./references/validate-module.md`
- **Unclear** → Present options:
  - **Ideate Module (IM)** — "I have an idea for a module and want to brainstorm and plan it"
  - **Create Module (CM)** — "I've already built my skills and want to package them as a module"
  - **Validate Module (VM)** — "I want to check that my module's setup skill is complete and correct"

If `--headless` or `-H` is passed, route to CM with headless mode.
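As a rough illustration only, the activation routing above could be modeled as a keyword dispatcher. The function name, keyword sets, and return labels are hypothetical and not part of the skill itself:

```python
from typing import Optional


def route(user_input: str, path_arg: Optional[str] = None) -> str:
    """Sketch of the intent detection described above (not the skill's actual code)."""
    text = user_input.lower()
    words = text.split()
    if "--headless" in words or "-h" in words:
        return "create-module (headless)"
    if any(k in text for k in ("validate", "check")):
        return "validate-module"
    if any(k in text for k in ("create", "scaffold")) or path_arg is not None:
        return "create-module"
    # Ideate/plan keywords, or no path argument at all, fall through to ideation
    return "ideate-module"
```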
@@ -0,0 +1,95 @@
---
title: 'Module Plan'
status: 'ideation'
module_name: ''
module_code: ''
architecture: ''
standalone: true
expands_module: ''
skills_planned: []
config_variables: []
created: ''
updated: ''
---

# Module Plan

## Vision

<!-- What this module does, who it's for, and why it matters -->

## Architecture Decision

<!-- Agent-centric / workflow-centric / hybrid — and the reasoning behind the choice -->

## User Experience

<!-- Who uses this module and what their journey looks like -->

## Skills

<!-- For each planned skill, copy this block: -->

### {skill-name}

**Type:** {agent | workflow}
**Purpose:**

**Capabilities:**

| Display Name | Menu Code | Description | Action | Args | Phase | After | Before | Required | Output Location | Outputs |
| ------------ | --------- | ----------- | ------ | ---- | ----- | ----- | ------ | -------- | --------------- | ------- |
|              |           |             |        |      |       |       |        |          |                 |         |

**Design Notes:**

## Memory Architecture

<!-- For multi-agent modules: personal sidecars only, personal + shared module sidecar, or shared only? -->
<!-- What shared context should agents contribute to? (user style, content history, project assets, etc.) -->
<!-- If shared only — consider whether a single agent is the better design -->

## Configuration

| Variable | Prompt | Default | Result Template | User Setting |
| -------- | ------ | ------- | --------------- | ------------ |
|          |        |         |                 |              |

<!-- Reminder: skills should have sensible fallbacks if config hasn't been set, or ask at runtime for values they need -->

## External Dependencies

<!-- CLI tools, MCP servers, or other external software that skills depend on -->
<!-- For each: what it is, which skills need it, and how the setup skill should handle it -->

## UI and Visualization

<!-- Does the module include dashboards, progress views, interactive interfaces, or a web app? -->
<!-- If yes: what it shows, which skills feed into it, how it's served/installed -->

## Setup Extensions

<!-- Beyond config collection: web app installation, directory scaffolding, external service configuration, starter files, etc. -->
<!-- These will need to be manually added to the setup skill after scaffolding -->

## Integration

<!-- Standalone: how it provides independent value -->
<!-- Expansion: parent module, cross-module capability relationships, skills that may reference parent module ordering -->

## Creative Use Cases

<!-- Beyond the primary workflow — unexpected combinations, power-user scenarios, creative applications discovered during brainstorming -->

## Ideas Captured

<!-- Raw ideas from brainstorming — preserved for context even if not all made it into the plan -->

## Build Roadmap

<!-- Recommended build order for skills -->

**Next steps:**

1. Build each skill using **Build an Agent (BA)** or **Build a Workflow (BW)** — share this plan document as context
2. When all skills are built, return to **Create Module (CM)** to scaffold the module infrastructure
@@ -0,0 +1,76 @@
---
name: { setup-skill-name }
description: Sets up {module-name} module in a project. Use when the user requests to 'install {module-code} module', 'configure {module-name}', or 'setup {module-name}'.
---

# Module Setup

## Overview

Installs and configures a BMad module into a project. Module identity (name, code, version) comes from `./assets/module.yaml`. Collects user preferences and writes them to three files:

- **`{project-root}/_bmad/config.yaml`** — shared project config: core settings at root (e.g. `output_folder`, `document_output_language`) plus a section per module with metadata and module-specific values. User-only keys (`user_name`, `communication_language`) are **never** written here.
- **`{project-root}/_bmad/config.user.yaml`** — personal settings intended to be gitignored: `user_name`, `communication_language`, and any module variable marked `user_setting: true` in `./assets/module.yaml`. These values live exclusively here.
- **`{project-root}/_bmad/module-help.csv`** — registers module capabilities for the help system.

Both config scripts use an anti-zombie pattern — existing entries for this module are removed before writing fresh ones, so stale values never persist.

`{project-root}` is a **literal token** in config values — never substitute it with an actual path. It signals to the consuming LLM that the value is relative to the project root, not the skill root.

## On Activation

1. Read `./assets/module.yaml` for module metadata and variable definitions (the `code` field is the module identifier)
2. Check if `{project-root}/_bmad/config.yaml` exists — if a section matching the module's code is already present, inform the user this is an update
3. Check for per-module configuration at `{project-root}/_bmad/{module-code}/config.yaml` and `{project-root}/_bmad/core/config.yaml`. If either file exists:
   - If `{project-root}/_bmad/config.yaml` does **not** yet have a section for this module: this is a **fresh install**. Inform the user that installer config was detected and values will be consolidated into the new format.
   - If `{project-root}/_bmad/config.yaml` **already** has a section for this module: this is a **legacy migration**. Inform the user that legacy per-module config was found alongside existing config, and legacy values will be used as fallback defaults.
   - In both cases, per-module config files and directories will be cleaned up after setup.

If the user provides arguments (e.g. `accept all defaults`, `--headless`, or inline values like `user name is BMad, I speak Swahili`), map any provided values to config keys, use defaults for the rest, and skip interactive prompting. Still display the full confirmation summary at the end.

## Collect Configuration

Ask the user for values. Show defaults in brackets. Present all values together so the user can respond once with only the values they want to change (e.g. "change language to Swahili, rest are fine"). Never tell the user to "press enter" or "leave blank" — in a chat interface they must type something to respond.

**Default priority** (highest wins): existing new config values > legacy config values > `./assets/module.yaml` defaults. When legacy configs exist, read them and use matching values as defaults instead of `module.yaml` defaults. Only keys that match the current schema are carried forward — changed or removed keys are ignored.

**Core config** (only if no core keys exist yet): `user_name` (default: BMad), `communication_language` and `document_output_language` (default: English — ask as a single language question, both keys get the same answer), `output_folder` (default: `{project-root}/_bmad-output`). Of these, `user_name` and `communication_language` are written exclusively to `config.user.yaml`. The rest go to `config.yaml` at root and are shared across all modules.

**Module config**: Read each variable in `./assets/module.yaml` that has a `prompt` field. Ask using that prompt with its default value (or legacy value if available).

## Write Files

Write a temp JSON file with the collected answers structured as `{"core": {...}, "module": {...}}` (omit `core` if it already exists). Then run both scripts — they can run in parallel since they write to different files:

```bash
python3 ./scripts/merge-config.py --config-path "{project-root}/_bmad/config.yaml" --user-config-path "{project-root}/_bmad/config.user.yaml" --module-yaml ./assets/module.yaml --answers {temp-file} --legacy-dir "{project-root}/_bmad"
python3 ./scripts/merge-help-csv.py --target "{project-root}/_bmad/module-help.csv" --source ./assets/module-help.csv --legacy-dir "{project-root}/_bmad" --module-code {module-code}
```

Both scripts output JSON results to stdout. If either exits non-zero, surface the error and stop. The scripts automatically read legacy config values as fallback defaults, then delete the legacy files after a successful merge. Check `legacy_configs_deleted` and `legacy_csvs_deleted` in the output to confirm cleanup.

Run `./scripts/merge-config.py --help` or `./scripts/merge-help-csv.py --help` for full usage.

## Create Output Directories

After writing config, create any output directories that were configured. For filesystem operations only (such as creating directories), resolve the `{project-root}` token to the actual project root and create each path-type value from `config.yaml` that does not yet exist — this includes `output_folder` and any module variable whose value starts with `{project-root}/`. The paths stored in the config files must continue to use the literal `{project-root}` token; only the directories on disk should use the resolved paths. Use `mkdir -p` or equivalent to create the full path.

## Clean Up Legacy Directories

After both merge scripts complete successfully, remove the installer's package directories. Skills and agents in these directories are already installed at `.claude/skills/` — the `_bmad/` directory should only contain config files.

```bash
python3 ./scripts/cleanup-legacy.py --bmad-dir "{project-root}/_bmad" --module-code {module-code} --also-remove _config --skills-dir "{project-root}/.claude/skills"
```

The script verifies that every skill in the legacy directories exists at `.claude/skills/` before removing anything. Directories without skills (like `_config/`) are removed directly. If the script exits non-zero, surface the error and stop. Missing directories (already cleaned by a prior run) are not errors — the script is idempotent.

Check `directories_removed` and `files_removed_count` in the JSON output for the confirmation step. Run `./scripts/cleanup-legacy.py --help` for full usage.

## Confirm

Use the script JSON output to display what was written — config values set (core values at the `config.yaml` root, module values in the module section), user settings written to `config.user.yaml` (`user_keys` in the result), help entries added, and fresh install vs. update. If legacy files were deleted, mention the migration. If legacy directories were removed, report the count and list (e.g. "Cleaned up 106 installer package files from bmb/, core/, \_config/ — skills are installed at .claude/skills/"). Then display the `module_greeting` from `./assets/module.yaml` to the user.

## Outcome

Once the user's `user_name` and `communication_language` are known (from collected input, arguments, or existing config), use them consistently for the remainder of the session: address the user by their configured name and communicate in their configured `communication_language`.
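The default-priority chain described under "Collect Configuration" (existing config > legacy config > `module.yaml` defaults) amounts to a layered dict merge. A minimal sketch, assuming each layer is already loaded as a plain dict — the function name is illustrative, not one of the shipped scripts:

```python
def resolve_defaults(existing: dict, legacy: dict, schema_defaults: dict) -> dict:
    """Layer the three sources; later updates win, so existing config has top priority."""
    resolved = dict(schema_defaults)  # lowest priority: module.yaml defaults
    # Only legacy keys that still exist in the current schema are carried forward
    resolved.update({k: v for k, v in legacy.items() if k in schema_defaults})
    resolved.update(existing)         # highest priority: existing new-format config
    return resolved
```

A legacy-only value survives as a default, a renamed or removed legacy key is dropped, and anything already in the new config wins outright.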
@@ -0,0 +1 @@
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
@@ -0,0 +1,6 @@
code:
name: ""
description: ""
module_version: 1.0.0
default_selected: false
module_greeting: >
@@ -0,0 +1,259 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
"""Remove legacy module directories from _bmad/ after config migration.

After merge-config.py and merge-help-csv.py have migrated config data and
deleted individual legacy files, this script removes the now-redundant
directory trees. These directories contain skill files that are already
installed at .claude/skills/ (or equivalent) — only the config files at
_bmad/ root need to persist.

When --skills-dir is provided, the script verifies that every skill found
in the legacy directories exists at the installed location before removing
anything. Directories without skills (like _config/) are removed directly.

Exit codes: 0=success (including nothing to remove), 1=validation error, 2=runtime error
"""

import argparse
import json
import shutil
import sys
from pathlib import Path


def parse_args():
    parser = argparse.ArgumentParser(
        description="Remove legacy module directories from _bmad/ after config migration."
    )
    parser.add_argument(
        "--bmad-dir",
        required=True,
        help="Path to the _bmad/ directory",
    )
    parser.add_argument(
        "--module-code",
        required=True,
        help="Module code being cleaned up (e.g. 'bmb')",
    )
    parser.add_argument(
        "--also-remove",
        action="append",
        default=[],
        help="Additional directory names under _bmad/ to remove (repeatable)",
    )
    parser.add_argument(
        "--skills-dir",
        help="Path to .claude/skills/ — enables safety verification that skills "
        "are installed before removing legacy copies",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print detailed progress to stderr",
    )
    return parser.parse_args()


def find_skill_dirs(base_path: str) -> list:
    """Find directories that contain a SKILL.md file.

    Walks the directory tree and returns the leaf directory name for each
    directory containing a SKILL.md. These are considered skill directories.

    Returns:
        List of skill directory names (e.g. ['bmad-agent-builder', 'bmad-builder-setup'])
    """
    skills = []
    root = Path(base_path)
    if not root.exists():
        return skills
    for skill_md in root.rglob("SKILL.md"):
        skills.append(skill_md.parent.name)
    return sorted(set(skills))


def verify_skills_installed(
    bmad_dir: str, dirs_to_check: list, skills_dir: str, verbose: bool = False
) -> list:
    """Verify that skills in legacy directories exist at the installed location.

    Scans each directory in dirs_to_check for skill folders (containing SKILL.md),
    then checks that a matching directory exists under skills_dir. Directories
    that contain no skills (like _config/) are silently skipped.

    Returns:
        List of verified skill names.

    Raises SystemExit(1) if any skills are missing from skills_dir.
    """
    all_verified = []
    missing = []

    for dirname in dirs_to_check:
        legacy_path = Path(bmad_dir) / dirname
        if not legacy_path.exists():
            continue

        skill_names = find_skill_dirs(str(legacy_path))
        if not skill_names:
            if verbose:
                print(
                    f"No skills found in {dirname}/ — skipping verification",
                    file=sys.stderr,
                )
            continue

        for skill_name in skill_names:
            installed_path = Path(skills_dir) / skill_name
            if installed_path.is_dir():
                all_verified.append(skill_name)
                if verbose:
                    print(
                        f"Verified: {skill_name} exists at {installed_path}",
                        file=sys.stderr,
                    )
            else:
                missing.append(skill_name)
                if verbose:
                    print(
                        f"MISSING: {skill_name} not found at {installed_path}",
                        file=sys.stderr,
                    )

    if missing:
        error_result = {
            "status": "error",
            "error": "Skills not found at installed location",
            "missing_skills": missing,
            "skills_dir": str(Path(skills_dir).resolve()),
        }
        print(json.dumps(error_result, indent=2))
        sys.exit(1)

    return sorted(set(all_verified))


def count_files(path: Path) -> int:
    """Count all files recursively in a directory."""
    count = 0
    for item in path.rglob("*"):
        if item.is_file():
            count += 1
    return count


def cleanup_directories(
    bmad_dir: str, dirs_to_remove: list, verbose: bool = False
) -> tuple:
    """Remove specified directories under bmad_dir.

    Returns:
        (removed, not_found, total_files_removed) tuple
    """
    removed = []
    not_found = []
    total_files = 0

    for dirname in dirs_to_remove:
        target = Path(bmad_dir) / dirname
        if not target.exists():
            not_found.append(dirname)
            if verbose:
                print(f"Not found (skipping): {target}", file=sys.stderr)
            continue

        if not target.is_dir():
            if verbose:
                print(f"Not a directory (skipping): {target}", file=sys.stderr)
            not_found.append(dirname)
            continue

        file_count = count_files(target)
        if verbose:
            print(
                f"Removing {target} ({file_count} files)",
                file=sys.stderr,
            )

        try:
            shutil.rmtree(target)
        except OSError as e:
            error_result = {
                "status": "error",
                "error": f"Failed to remove {target}: {e}",
                "directories_removed": removed,
                "directories_failed": dirname,
            }
            print(json.dumps(error_result, indent=2))
            sys.exit(2)

        removed.append(dirname)
        total_files += file_count

    return removed, not_found, total_files


def main():
    args = parse_args()

    bmad_dir = args.bmad_dir
    module_code = args.module_code

    # Build the list of directories to remove
    dirs_to_remove = [module_code, "core"] + args.also_remove
    # Deduplicate while preserving order
    seen = set()
    unique_dirs = []
    for d in dirs_to_remove:
        if d not in seen:
            seen.add(d)
            unique_dirs.append(d)
    dirs_to_remove = unique_dirs

    if args.verbose:
        print(f"Directories to remove: {dirs_to_remove}", file=sys.stderr)

    # Safety check: verify skills are installed before removing
    verified_skills = None
    if args.skills_dir:
        if args.verbose:
            print(
                f"Verifying skills installed at {args.skills_dir}",
                file=sys.stderr,
            )
        verified_skills = verify_skills_installed(
            bmad_dir, dirs_to_remove, args.skills_dir, args.verbose
        )

    # Remove directories
    removed, not_found, total_files = cleanup_directories(
        bmad_dir, dirs_to_remove, args.verbose
    )

    # Build result
    result = {
        "status": "success",
        "bmad_dir": str(Path(bmad_dir).resolve()),
        "directories_removed": removed,
        "directories_not_found": not_found,
        "files_removed_count": total_files,
    }

    if args.skills_dir:
        result["safety_checks"] = {
            "skills_verified": True,
            "skills_dir": str(Path(args.skills_dir).resolve()),
            "verified_skills": verified_skills,
        }
    else:
        result["safety_checks"] = None

    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    main()
@@ -0,0 +1,408 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# dependencies = ["pyyaml"]
# ///
"""Merge module configuration into shared _bmad/config.yaml and config.user.yaml.

Reads a module.yaml definition and a JSON answers file, then writes or updates
the shared config.yaml (core values at root + module section) and config.user.yaml
(user_name, communication_language, plus any module variable with user_setting: true).
Uses an anti-zombie pattern for the module section in config.yaml.

Legacy migration: when --legacy-dir is provided, reads old per-module config files
from {legacy-dir}/{module-code}/config.yaml and {legacy-dir}/core/config.yaml.
Matching values serve as fallback defaults (answers override them). After a
successful merge, the legacy config.yaml files are deleted. Only the current
module and core directories are touched — other module directories are left alone.

Exit codes: 0=success, 1=validation error, 2=runtime error
"""

import argparse
import json
import sys
from pathlib import Path

try:
    import yaml
except ImportError:
    print("Error: pyyaml is required (PEP 723 dependency)", file=sys.stderr)
    sys.exit(2)


def parse_args():
    parser = argparse.ArgumentParser(
        description="Merge module config into shared _bmad/config.yaml with anti-zombie pattern."
    )
    parser.add_argument(
        "--config-path",
        required=True,
        help="Path to the target _bmad/config.yaml file",
    )
    parser.add_argument(
        "--module-yaml",
        required=True,
        help="Path to the module.yaml definition file",
    )
    parser.add_argument(
        "--answers",
        required=True,
        help="Path to JSON file with collected answers",
    )
    parser.add_argument(
        "--user-config-path",
        required=True,
        help="Path to the target _bmad/config.user.yaml file",
    )
    parser.add_argument(
        "--legacy-dir",
        help="Path to _bmad/ directory to check for legacy per-module config files. "
        "Matching values are used as fallback defaults, then legacy files are deleted.",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print detailed progress to stderr",
    )
    return parser.parse_args()


def load_yaml_file(path: str) -> dict:
    """Load a YAML file, returning empty dict if file doesn't exist."""
    file_path = Path(path)
    if not file_path.exists():
        return {}
    with open(file_path, "r", encoding="utf-8") as f:
        content = yaml.safe_load(f)
    return content if content else {}


def load_json_file(path: str) -> dict:
    """Load a JSON file."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


# Keys that live at config root (shared across all modules)
_CORE_KEYS = frozenset(
    {"user_name", "communication_language", "document_output_language", "output_folder"}
)

# Core keys that are always written to config.user.yaml
_CORE_USER_KEYS = ("user_name", "communication_language")


def load_legacy_values(
    legacy_dir: str, module_code: str, module_yaml: dict, verbose: bool = False
) -> tuple:
    """Read legacy per-module config files and return core/module value dicts.

    Reads {legacy_dir}/core/config.yaml and {legacy_dir}/{module_code}/config.yaml.
    Only returns values whose keys match the current schema (core keys or module.yaml
    variable definitions). Other modules' directories are not touched.

    Returns:
        (legacy_core, legacy_module, files_found) where files_found lists paths read.
    """
    legacy_core: dict = {}
    legacy_module: dict = {}
    files_found: list = []

    # Read core legacy config
    core_path = Path(legacy_dir) / "core" / "config.yaml"
    if core_path.exists():
        core_data = load_yaml_file(str(core_path))
        files_found.append(str(core_path))
        for k, v in core_data.items():
            if k in _CORE_KEYS:
                legacy_core[k] = v
        if verbose:
            print(f"Legacy core config: {list(legacy_core.keys())}", file=sys.stderr)

    # Read module legacy config
    mod_path = Path(legacy_dir) / module_code / "config.yaml"
    if mod_path.exists():
        mod_data = load_yaml_file(str(mod_path))
        files_found.append(str(mod_path))
        for k, v in mod_data.items():
            if k in _CORE_KEYS:
                # Core keys duplicated in module config — only use if not already set
                if k not in legacy_core:
                    legacy_core[k] = v
            elif k in module_yaml and isinstance(module_yaml[k], dict):
                # Module-specific key that matches a current variable definition
                legacy_module[k] = v
        if verbose:
            print(
                f"Legacy module config: {list(legacy_module.keys())}", file=sys.stderr
            )

    return legacy_core, legacy_module, files_found


def apply_legacy_defaults(answers: dict, legacy_core: dict, legacy_module: dict) -> dict:
    """Apply legacy values as fallback defaults under the answers.

    Legacy values fill in any key not already present in answers.
    Explicit answers always win.
    """
    merged = dict(answers)

    if legacy_core:
        core = merged.get("core", {})
        filled_core = dict(legacy_core)  # legacy as base
        filled_core.update(core)  # answers override
        merged["core"] = filled_core

    if legacy_module:
        mod = merged.get("module", {})
        filled_mod = dict(legacy_module)  # legacy as base
        filled_mod.update(mod)  # answers override
        merged["module"] = filled_mod

    return merged


def cleanup_legacy_configs(
    legacy_dir: str, module_code: str, verbose: bool = False
) -> list:
    """Delete legacy config.yaml files for this module and core only.

    Returns list of deleted file paths.
    """
    deleted = []
    for subdir in (module_code, "core"):
        legacy_path = Path(legacy_dir) / subdir / "config.yaml"
        if legacy_path.exists():
            if verbose:
                print(f"Deleting legacy config: {legacy_path}", file=sys.stderr)
            legacy_path.unlink()
            deleted.append(str(legacy_path))
    return deleted


def extract_module_metadata(module_yaml: dict) -> dict:
    """Extract non-variable metadata fields from module.yaml."""
    meta = {}
    for k in ("name", "description"):
        if k in module_yaml:
            meta[k] = module_yaml[k]
    meta["version"] = module_yaml.get("module_version")  # null if absent
    if "default_selected" in module_yaml:
        meta["default_selected"] = module_yaml["default_selected"]
    return meta


def apply_result_templates(
    module_yaml: dict, module_answers: dict, verbose: bool = False
) -> dict:
    """Apply result templates from module.yaml to transform raw answer values.

    For each answer, if the corresponding variable definition in module.yaml has
    a 'result' field, replaces {value} in that template with the answer. Skips
    the template if the answer already contains '{project-root}' to prevent
    double-prefixing.
    """
    transformed = {}
    for key, value in module_answers.items():
        var_def = module_yaml.get(key)
        if (
            isinstance(var_def, dict)
            and "result" in var_def
            and "{project-root}" not in str(value)
        ):
            template = var_def["result"]
            transformed[key] = template.replace("{value}", str(value))
            if verbose:
                print(
                    f"Applied result template for '{key}': {value} → {transformed[key]}",
                    file=sys.stderr,
                )
        else:
            transformed[key] = value
    return transformed


def merge_config(
    existing_config: dict,
    module_yaml: dict,
    answers: dict,
    verbose: bool = False,
) -> dict:
    """Merge answers into config, applying anti-zombie pattern.

    Args:
        existing_config: Current config.yaml contents (may be empty)
        module_yaml: The module definition
        answers: JSON with 'core' and/or 'module' keys
        verbose: Print progress to stderr

    Returns:
        Updated config dict ready to write
    """
    config = dict(existing_config)
    module_code = module_yaml.get("code")

    if not module_code:
        print("Error: module.yaml must have a 'code' field", file=sys.stderr)
        sys.exit(1)

    # Migrate legacy core: section to root
    if "core" in config and isinstance(config["core"], dict):
        if verbose:
            print("Migrating legacy 'core' section to root", file=sys.stderr)
        config.update(config.pop("core"))

    # Strip user-only keys from config — they belong exclusively in config.user.yaml
    for key in _CORE_USER_KEYS:
        if key in config:
            if verbose:
                print(
                    f"Removing user-only key '{key}' from config (belongs in config.user.yaml)",
                    file=sys.stderr,
                )
            del config[key]

    # Write core values at root (global properties, not nested under "core")
    # Exclude user-only keys — those belong exclusively in config.user.yaml
    core_answers = answers.get("core")
    if core_answers:
        shared_core = {k: v for k, v in core_answers.items() if k not in _CORE_USER_KEYS}
        if shared_core:
            if verbose:
                print(
                    f"Writing core config at root: {list(shared_core.keys())}",
                    file=sys.stderr,
                )
            config.update(shared_core)

    # Anti-zombie: remove existing module section
    if module_code in config:
        if verbose:
            print(
                f"Removing existing '{module_code}' section (anti-zombie)",
                file=sys.stderr,
            )
        del config[module_code]

    # Build module section: metadata + variable values
    module_section = extract_module_metadata(module_yaml)
    module_answers = apply_result_templates(
        module_yaml, answers.get("module", {}), verbose
    )
    module_section.update(module_answers)

    if verbose:
        print(
            f"Writing '{module_code}' section with keys: {list(module_section.keys())}",
            file=sys.stderr,
        )

    config[module_code] = module_section

    return config


def extract_user_settings(module_yaml: dict, answers: dict) -> dict:
|
||||
"""Collect settings that belong in config.user.yaml.
|
||||
|
||||
Includes user_name and communication_language from core answers, plus any
|
||||
module variable whose definition contains user_setting: true.
|
||||
"""
|
||||
user_settings = {}
|
||||
|
||||
core_answers = answers.get("core", {})
|
||||
for key in _CORE_USER_KEYS:
|
||||
if key in core_answers:
|
||||
user_settings[key] = core_answers[key]
|
||||
|
||||
module_answers = answers.get("module", {})
|
||||
for var_name, var_def in module_yaml.items():
|
||||
if isinstance(var_def, dict) and var_def.get("user_setting") is True:
|
||||
if var_name in module_answers:
|
||||
user_settings[var_name] = module_answers[var_name]
|
||||
|
||||
return user_settings
|
||||
|
||||
|
||||
def write_config(config: dict, config_path: str, verbose: bool = False) -> None:
|
||||
"""Write config dict to YAML file, creating parent dirs as needed."""
|
||||
path = Path(config_path)
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
if verbose:
|
||||
print(f"Writing config to {path}", file=sys.stderr)
|
||||
|
||||
with open(path, "w", encoding="utf-8") as f:
|
||||
yaml.dump(
|
||||
config,
|
||||
f,
|
||||
default_flow_style=False,
|
||||
allow_unicode=True,
|
||||
sort_keys=False,
|
||||
)
|
||||
|
||||
|
||||
def main():
|
||||
args = parse_args()
|
||||
|
||||
# Load inputs
|
||||
module_yaml = load_yaml_file(args.module_yaml)
|
||||
if not module_yaml:
|
||||
print(f"Error: Could not load module.yaml from {args.module_yaml}", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
answers = load_json_file(args.answers)
|
||||
existing_config = load_yaml_file(args.config_path)
|
||||
|
||||
if args.verbose:
|
||||
exists = Path(args.config_path).exists()
|
||||
print(f"Config file exists: {exists}", file=sys.stderr)
|
||||
if exists:
|
||||
print(f"Existing sections: {list(existing_config.keys())}", file=sys.stderr)
|
||||
|
||||
# Legacy migration: read old per-module configs as fallback defaults
|
||||
legacy_files_found = []
|
||||
if args.legacy_dir:
|
||||
module_code = module_yaml.get("code", "")
|
||||
legacy_core, legacy_module, legacy_files_found = load_legacy_values(
|
||||
args.legacy_dir, module_code, module_yaml, args.verbose
|
||||
)
|
||||
if legacy_core or legacy_module:
|
||||
answers = apply_legacy_defaults(answers, legacy_core, legacy_module)
|
||||
if args.verbose:
|
||||
print("Applied legacy values as fallback defaults", file=sys.stderr)
|
||||
|
||||
# Merge and write config.yaml
|
||||
updated_config = merge_config(existing_config, module_yaml, answers, args.verbose)
|
||||
write_config(updated_config, args.config_path, args.verbose)
|
||||
|
||||
# Merge and write config.user.yaml
|
||||
user_settings = extract_user_settings(module_yaml, answers)
|
||||
existing_user_config = load_yaml_file(args.user_config_path)
|
||||
updated_user_config = dict(existing_user_config)
|
||||
updated_user_config.update(user_settings)
|
||||
if user_settings:
|
||||
write_config(updated_user_config, args.user_config_path, args.verbose)
|
||||
|
||||
# Legacy cleanup: delete old per-module config files
|
||||
legacy_deleted = []
|
||||
if args.legacy_dir:
|
||||
legacy_deleted = cleanup_legacy_configs(
|
||||
args.legacy_dir, module_yaml["code"], args.verbose
|
||||
)
|
||||
|
||||
# Output result summary as JSON
|
||||
module_code = module_yaml["code"]
|
||||
result = {
|
||||
"status": "success",
|
||||
"config_path": str(Path(args.config_path).resolve()),
|
||||
"user_config_path": str(Path(args.user_config_path).resolve()),
|
||||
"module_code": module_code,
|
||||
"core_updated": bool(answers.get("core")),
|
||||
"module_keys": list(updated_config.get(module_code, {}).keys()),
|
||||
"user_keys": list(user_settings.keys()),
|
||||
"legacy_configs_found": legacy_files_found,
|
||||
"legacy_configs_deleted": legacy_deleted,
|
||||
}
|
||||
print(json.dumps(result, indent=2))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
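The anti-zombie pattern above can be sketched in isolation. This is a minimal standalone re-implementation with hypothetical data, not an import of the script itself:

```python
# Minimal sketch of the anti-zombie merge: the module's old config section is
# dropped wholesale before the fresh one is written, so stale keys cannot survive.
def anti_zombie_merge(config: dict, module_code: str, new_section: dict) -> dict:
    merged = dict(config)
    merged.pop(module_code, None)      # remove the old section entirely
    merged[module_code] = new_section  # write only the fresh keys
    return merged

old = {"user_name": "Ada", "cis": {"version": "1.0.0", "obsolete_key": "x"}}
new = anti_zombie_merge(old, "cis", {"version": "1.1.0"})
print(new["cis"])  # → {'version': '1.1.0'}; obsolete_key is gone
```

A plain `dict.update` on the nested section would instead preserve `obsolete_key` forever, which is exactly the "zombie" the pattern kills.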
@@ -0,0 +1,220 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
"""Merge module help entries into shared _bmad/module-help.csv.

Reads a source CSV with module help entries and merges them into a target CSV.
Uses an anti-zombie pattern: all existing rows matching the source module code
are removed before appending fresh rows.

Legacy cleanup: when --legacy-dir and --module-code are provided, deletes old
per-module module-help.csv files from {legacy-dir}/{module-code}/ and
{legacy-dir}/core/. Only the current module and core are touched.

Exit codes: 0=success, 1=validation error, 2=runtime error
"""

import argparse
import csv
import json
import sys
from io import StringIO
from pathlib import Path

# CSV header for module-help.csv
HEADER = [
    "module",
    "agent-name",
    "skill-name",
    "display-name",
    "menu-code",
    "capability",
    "args",
    "description",
    "phase",
    "after",
    "before",
    "required",
    "output-location",
    "outputs",
    "",  # trailing empty column from trailing comma
]


def parse_args():
    parser = argparse.ArgumentParser(
        description="Merge module help entries into shared _bmad/module-help.csv with anti-zombie pattern."
    )
    parser.add_argument(
        "--target",
        required=True,
        help="Path to the target _bmad/module-help.csv file",
    )
    parser.add_argument(
        "--source",
        required=True,
        help="Path to the source module-help.csv with entries to merge",
    )
    parser.add_argument(
        "--legacy-dir",
        help="Path to _bmad/ directory to check for legacy per-module CSV files.",
    )
    parser.add_argument(
        "--module-code",
        help="Module code (required with --legacy-dir for scoping cleanup).",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print detailed progress to stderr",
    )
    return parser.parse_args()


def read_csv_rows(path: str) -> tuple[list[str], list[list[str]]]:
    """Read CSV file returning (header, data_rows).

    Returns empty header and rows if file doesn't exist.
    """
    file_path = Path(path)
    if not file_path.exists():
        return [], []

    with open(file_path, "r", encoding="utf-8", newline="") as f:
        content = f.read()

    reader = csv.reader(StringIO(content))
    rows = list(reader)

    if not rows:
        return [], []

    return rows[0], rows[1:]


def extract_module_codes(rows: list[list[str]]) -> set[str]:
    """Extract unique module codes from data rows."""
    codes = set()
    for row in rows:
        if row and row[0].strip():
            codes.add(row[0].strip())
    return codes


def filter_rows(rows: list[list[str]], module_code: str) -> list[list[str]]:
    """Remove all rows matching the given module code."""
    return [row for row in rows if not row or row[0].strip() != module_code]


def write_csv(path: str, header: list[str], rows: list[list[str]], verbose: bool = False) -> None:
    """Write header + rows to CSV file, creating parent dirs as needed."""
    file_path = Path(path)
    file_path.parent.mkdir(parents=True, exist_ok=True)

    if verbose:
        print(f"Writing {len(rows)} data rows to {path}", file=sys.stderr)

    with open(file_path, "w", encoding="utf-8", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for row in rows:
            writer.writerow(row)


def cleanup_legacy_csvs(
    legacy_dir: str, module_code: str, verbose: bool = False
) -> list:
    """Delete legacy per-module module-help.csv files for this module and core only.

    Returns list of deleted file paths.
    """
    deleted = []
    for subdir in (module_code, "core"):
        legacy_path = Path(legacy_dir) / subdir / "module-help.csv"
        if legacy_path.exists():
            if verbose:
                print(f"Deleting legacy CSV: {legacy_path}", file=sys.stderr)
            legacy_path.unlink()
            deleted.append(str(legacy_path))
    return deleted


def main():
    args = parse_args()

    # Read source entries
    source_header, source_rows = read_csv_rows(args.source)
    if not source_rows:
        print(f"Error: No data rows found in source {args.source}", file=sys.stderr)
        sys.exit(1)

    # Determine module codes being merged
    source_codes = extract_module_codes(source_rows)
    if not source_codes:
        print("Error: Could not determine module code from source rows", file=sys.stderr)
        sys.exit(1)

    if args.verbose:
        print(f"Source module codes: {source_codes}", file=sys.stderr)
        print(f"Source rows: {len(source_rows)}", file=sys.stderr)

    # Read existing target (may not exist)
    target_header, target_rows = read_csv_rows(args.target)
    target_existed = Path(args.target).exists()

    if args.verbose:
        print(f"Target exists: {target_existed}", file=sys.stderr)
        if target_existed:
            print(f"Existing target rows: {len(target_rows)}", file=sys.stderr)

    # Use source header if target doesn't exist or has no header
    header = target_header if target_header else (source_header if source_header else HEADER)

    # Anti-zombie: remove all rows for each source module code
    filtered_rows = target_rows
    removed_count = 0
    for code in source_codes:
        before_count = len(filtered_rows)
        filtered_rows = filter_rows(filtered_rows, code)
        removed_count += before_count - len(filtered_rows)

    if args.verbose and removed_count > 0:
        print(f"Removed {removed_count} existing rows (anti-zombie)", file=sys.stderr)

    # Append source rows
    merged_rows = filtered_rows + source_rows

    # Write result
    write_csv(args.target, header, merged_rows, args.verbose)

    # Legacy cleanup: delete old per-module CSV files
    legacy_deleted = []
    if args.legacy_dir:
        if not args.module_code:
            print(
                "Error: --module-code is required when --legacy-dir is provided",
                file=sys.stderr,
            )
            sys.exit(1)
        legacy_deleted = cleanup_legacy_csvs(
            args.legacy_dir, args.module_code, args.verbose
        )

    # Output result summary as JSON
    result = {
        "status": "success",
        "target_path": str(Path(args.target).resolve()),
        "target_existed": target_existed,
        "module_codes": sorted(source_codes),
        "rows_removed": removed_count,
        "rows_added": len(source_rows),
        "total_rows": len(merged_rows),
        "legacy_csvs_deleted": legacy_deleted,
    }
    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    main()
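The CSV merge's anti-zombie step can be sketched on its own. The rows below are hypothetical two-column stand-ins for the real fourteen-column entries:

```python
# Minimal sketch of the CSV anti-zombie merge: every existing row for the
# incoming module code is filtered out, then the fresh rows are appended.
def filter_rows(rows, module_code):
    return [row for row in rows if not row or row[0].strip() != module_code]

target = [["cis", "Old Entry"], ["bmb", "Other Module"], ["cis", "Stale Entry"]]
source = [["cis", "New Entry"]]
merged = filter_rows(target, "cis") + source
print(merged)  # → [['bmb', 'Other Module'], ['cis', 'New Entry']]
```

Rows belonging to other modules pass through untouched, which is what lets many modules share one help CSV safely.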
127
.agents/skills/bmad-module-builder/references/create-module.md
Normal file
@@ -0,0 +1,127 @@
# Create Module

**Language:** Use `{communication_language}` for all output.

## Your Role

You are a module packaging specialist. The user has built their skills — your job is to read them deeply, understand the ecosystem they form, and scaffold the infrastructure that makes it an installable BMad module.

## Process

### 1. Discover the Skills

Ask the user for the folder path containing their built skills. Also ask: do they have a plan document from an Ideate Module (IM) session? If so, read it — it provides valuable context for ordering, relationships, and design intent.

**Read every SKILL.md in the folder thoroughly.** Understand each skill's:

- Name, purpose, and capabilities
- Arguments and interaction model
- What it produces and where
- Dependencies on other skills or external tools

### 2. Gather Module Identity

Collect through conversation (or extract from a plan document in headless mode):

- **Module name** — Human-friendly display name (e.g., "Creative Intelligence Suite")
- **Module code** — 2-4 letter abbreviation (e.g., "cis"). Used in skill naming, config sections, and folder conventions
- **Description** — One-line summary of what the module does
- **Version** — Starting version (default: 1.0.0)
- **Module greeting** — Message shown to the user after setup completes
- **Standalone or expansion?** If expansion: which module does it extend? This affects how help CSV entries may reference capabilities from the parent module

### 3. Define Capabilities

Build the help CSV entries for each skill. A single skill can have multiple capabilities (rows). For each capability:

| Field               | Description                                                            |
| ------------------- | ---------------------------------------------------------------------- |
| **display-name**    | What the user sees in help/menus                                       |
| **menu-code**       | 2-letter shortcut, unique across the module                            |
| **description**     | What this capability does (concise)                                    |
| **action**          | The capability/action name within the skill                            |
| **args**            | Supported arguments (e.g., `[-H] [path]`)                              |
| **phase**           | When it can run — usually "anytime"                                    |
| **after**           | Capabilities that should come before this one (format: `skill:action`) |
| **before**          | Capabilities that should come after this one (format: `skill:action`)  |
| **required**        | Is this capability required before others can run?                     |
| **output-location** | Where output goes (config variable name or path)                       |
| **outputs**         | What it produces                                                       |

Ask the user about:

- How capabilities should be ordered — are there natural sequences?
- Which capabilities are prerequisites for others?
- If this is an expansion module, do any capabilities reference the parent module's skills in their before/after fields?
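For orientation, one capability might map onto those columns like this. All field values here are hypothetical; only the column set mirrors the shared module-help.csv header:

```python
# One hypothetical capability entry, keyed by the module-help.csv columns.
capability_row = {
    "module": "cis",
    "agent-name": "bmad-cis-brainstormer",
    "skill-name": "bmad-cis-brainstormer",
    "display-name": "Brainstorm Ideas",
    "menu-code": "BI",
    "capability": "brainstorm",
    "args": "[-H] [topic]",
    "description": "Run a facilitated brainstorming session",
    "phase": "anytime",
    "after": "",
    "before": "bmad-cis-reporter:report",  # this capability feeds the reporter
    "required": "false",
    "output-location": "cis_output_folder",
    "outputs": "brainstorming report",
}
print(",".join(capability_row.values()))  # one CSV data row
```

The `before` value shows the `skill:action` reference format; leaving `after` empty means nothing must precede it.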
### 4. Define Configuration Variables

Does the module need custom installation questions? For each custom variable:

| Field               | Description                                                                  |
| ------------------- | ---------------------------------------------------------------------------- |
| **Key name**        | Used in config.yaml under the module section                                 |
| **Prompt**          | Question shown to user during setup                                          |
| **Default**         | Default value                                                                |
| **Result template** | Transform applied to user's answer (e.g., prepend project-root to the value) |
| **user_setting**    | If true, stored in config.user.yaml instead of config.yaml                   |

Remind the user: skills should always have sensible fallbacks if config hasn't been set. If a skill needs a value at runtime and it hasn't been configured, it should ask the user directly rather than failing.
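A result template behaves like a simple substitution at install time. This sketch (assuming the `{value}` placeholder and `{project-root}` guard used by the merge script) shows the effect:

```python
# Sketch of result-template expansion: {value} in the template is replaced with
# the raw answer, unless the answer already contains {project-root}.
def apply_template(template: str, value: str) -> str:
    if "{project-root}" in value:
        return value  # already prefixed; avoid double-prefixing
    return template.replace("{value}", value)

print(apply_template("{project-root}/{value}", "docs/output"))
# → {project-root}/docs/output
print(apply_template("{project-root}/{value}", "{project-root}/docs/output"))
# → {project-root}/docs/output (unchanged)
```

The guard matters on re-install: answers read back from an existing config are already expanded, and must not be prefixed a second time.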
### 5. External Dependencies and Setup Extensions

Ask the user about requirements beyond configuration:

- **CLI tools or MCP servers** — Do any skills depend on externally installed tools? If so, the setup skill should check for their presence and guide the user through installation or configuration. These checks would be custom additions to the cloned setup SKILL.md.
- **UI or web app** — Does the module include a dashboard, visualization layer, or interactive web interface? If the setup skill needs to install or configure a web app, scaffold UI files, or set up a dev server, capture those requirements.
- **Additional setup actions** — Beyond config collection: scaffolding project directories, generating starter files, configuring external services, setting up webhooks, etc.

If any of these apply, let the user know the scaffolded setup skill will need manual customization after creation to add these capabilities. Document what needs to be added so the user has a clear checklist.

### 6. Generate and Confirm

Present the complete module.yaml and module-help.csv content for the user to review. Show:

- Module identity and metadata
- All configuration variables with their prompts and defaults
- Complete help CSV entries with ordering and relationships
- Any external dependencies or setup extensions that need manual follow-up

Iterate until the user confirms everything is correct.

### 7. Scaffold

Write the confirmed module.yaml and module-help.csv content to temporary files. Run the scaffold script:

```bash
python3 ./scripts/scaffold-setup-skill.py \
  --target-dir "{skills-folder}" \
  --module-code "{code}" \
  --module-name "{name}" \
  --module-yaml "{temp-yaml-path}" \
  --module-csv "{temp-csv-path}"
```

This creates `bmad-{code}-setup/` in the user's skills folder containing:

- `./SKILL.md` — Generic setup skill with module-specific frontmatter
- `./scripts/` — merge-config.py, merge-help-csv.py, cleanup-legacy.py
- `./assets/module.yaml` — Generated module definition
- `./assets/module-help.csv` — Generated capability registry

### 8. Confirm and Next Steps

Show what was created — the setup skill folder structure and key file contents. Let the user know:

- To install this module in any project, run the setup skill
- The setup skill handles config collection, writing, and help CSV registration
- The module is now a complete, distributable BMad module

## Headless Mode

When `--headless` is set, the skill requires either:

- A **plan document path** — extract all module identity, capabilities, and config from it
- A **skills folder path** — read skills and infer sensible defaults for module identity

In headless mode: skip interactive questions, scaffold immediately, and present a summary of what was created at the end. If critical information is missing and cannot be inferred (like module code), exit with an error explaining what's needed.
128
.agents/skills/bmad-module-builder/references/ideate-module.md
Normal file
@@ -0,0 +1,128 @@
# Ideate Module

**Language:** Use `{communication_language}` for all conversation. Write plan document in `{document_output_language}`.

## Your Role

You are a creative collaborator and module architect — part brainstorming partner, part technical advisor. Your job is to help the user discover and articulate their vision for a BMad module. The user is the creative force. You draw out their ideas, build on them, and help them see possibilities they haven't considered yet. When the session is over, they should feel like every great idea was theirs.

## Facilitation Principles

These are non-negotiable — they define the experience:

- **The user is the genius.** Build on their ideas. When you see a connection they haven't made, ask a question that leads them there — don't just state it. When they land on something great, celebrate it genuinely.
- **"Yes, and..."** — Never dismiss. Every idea has a seed worth growing. Add to it, extend it, combine it with something else.
- **Stay generative longer than feels comfortable.** The best ideas come after the obvious ones are exhausted. Resist the urge to organize or converge early. When the user starts structuring prematurely, gently redirect: "Love that — let's capture it. Before we organize, what else comes to mind?"
- **Capture everything.** When the user says something in passing that's actually important, note it in the plan document and surface it at the right moment later.
- **Soft gates at transitions.** "Anything else on this, or shall we explore...?" Users almost always remember one more thing when given a graceful exit ramp.
- **Make it fun.** This should feel like the best brainstorming session the user has ever had — energizing, surprising, and productive. Match the user's energy. If they're excited, be excited with them. If they're thoughtful, go deep.

## Brainstorming Toolkit

Weave these into conversation naturally. Never name them or make the user feel like they're in a methodology. They're your internal playbook for keeping the conversation rich and multi-dimensional:

- **First Principles** — Strip away assumptions. "What problem is this actually solving at its core?" "If you could only do one thing for your users, what would it be?"
- **What If Scenarios** — Expand possibility space. "What if this could also..." "What if we flipped that and..." "What would change if there were no technical constraints?"
- **Reverse Brainstorming** — Find constraints through inversion. "What would make this terrible for users?" "What's the worst version of this module?" Then flip the answers.
- **Assumption Reversal** — Challenge architecture decisions. "Do these really need to be separate?" "What if a single agent could handle all of that?" "What assumption are we making that might not be true?"
- **Perspective Shifting** — Rotate viewpoints. Ask from the end-user angle, the developer maintaining it, someone extending it later, a complete beginner encountering it for the first time.
- **Question Storming** — Surface unknowns. "What questions will users have when they first see this?" "What would a skeptic ask?" "What's the thing we haven't thought of yet?"

## Process

### 1. Open the Session

Initialize the plan document immediately using `./assets/module-plan-template.md`. Write it to `{bmad_builder_reports}` with a descriptive filename. Set `created` and `updated` timestamps. This document is your cache — update it progressively as the conversation unfolds so work survives context compaction.

Start by understanding the spark. Let the user talk freely — this is where the richest context comes from:

- What's the idea? What problem space or domain?
- Who would use this and what would they get from it?
- Is there anything that inspired this — an existing tool, a frustration, a gap they've noticed?

Don't rush to structure. Just listen, ask follow-ups, and capture.

### 2. Explore Creatively

This is the heart of the session — spend real time here. Use the brainstorming toolkit to help the user explore:

- What capabilities would serve users in this domain?
- What would delight users? What would surprise them?
- What are the edge cases and hard problems?
- What would a power user want vs. a beginner?
- How might different capabilities work together in unexpected ways?
- What exists today that's close but not quite right?

Update the **Ideas Captured** section of the plan document as ideas emerge. Capture raw ideas generously — even ones that seem tangential. They're context for later.

Energy check: if the conversation plateaus, try a perspective shift or reverse brainstorming to open a new vein.

### 3. Shape the Architecture

When exploration feels genuinely complete (not just "we have enough"), shift to architecture.

**Guide toward agent-with-capabilities when appropriate.** Many users default to thinking they need multiple specialized agents. But a well-designed single agent with rich internal capabilities and routing:

- Provides a more seamless user experience
- Benefits from accumulated memory and context
- Is simpler to maintain and configure
- Can still have distinct modes or capabilities that feel like separate tools

However, **multiple agents make sense when:**

- The module spans genuinely different expertise domains that benefit from distinct personas
- Users may want to interact with one agent without loading the others
- Each agent needs its own memory context — personal history, learned preferences, domain-specific notes
- Some capabilities are optional add-ons the user might not install

**Multiple workflows make sense when:**

- Capabilities serve different user journeys or require different tools
- The workflow requires sequential phases with fundamentally different processes
- No persistent persona or memory is needed between invocations

Even with multiple agents, each should be self-contained with its own capabilities. Duplicating some common functionality across agents is fine — it keeps each agent coherent and independently useful. This is the user's decision, but guide them toward self-sufficiency per agent.

Present the trade-offs. Let the user decide. Document the reasoning either way — future-them will want to know why.

**Memory architecture for multi-agent modules.** If the module has multiple agents, explore how memory should work. Every agent has its own sidecar (personal memory at `{project-root}/_bmad/memory/{skillName}-sidecar/`), but modules may also benefit from shared memory:

| Pattern                              | When It Fits                                                              | Example                                                                                                                                                                           |
| ------------------------------------ | ------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Personal sidecars only**           | Agents have distinct domains with little overlap                          | A module with a code reviewer and a test writer — each tracks different things                                                                                                    |
| **Personal + shared module sidecar** | Agents have their own context but also learn shared things about the user | A social creative module — podcast, video, and blog experts each remember their domain specifics but share knowledge about the user's style, catchphrases, and content preferences |
| **Shared sidecar only**              | All agents serve the same domain and context                              | Probably a sign this should be a single agent                                                                                                                                     |

With shared memory, each agent writes to both its personal sidecar and a module-level sidecar (e.g., `{project-root}/_bmad/memory/{moduleCode}-shared/`) when it learns something relevant to the whole module. Shared content might include: user style preferences, project assets, recurring themes, content history, or any cross-cutting context.

If the memory architecture points entirely toward shared memory with no personal differentiation, gently surface whether a single agent with multiple capabilities might be the better design.

### 4. Define Module Context

- **Standalone or expansion?** If expansion: which module does it extend? How do the new capabilities relate? Even expansion modules should provide value independently — the parent module being absent shouldn't break this one.
- **Custom configuration?** Does the module need to ask users questions during setup? What variables would skills use? Important guidance to capture: skills should always have sensible fallbacks if config hasn't been set, or ask at runtime for specific values they need.
- **External dependencies?** Do any planned skills rely on externally installed CLI tools or MCP servers? If so, the setup skill may need to check for these, guide the user through installation, or configure connection details. Capture what's needed and why.
- **UI or visualization?** Could the module benefit from a user interface? This could be a shared progress dashboard, per-skill visualizations, an interactive view showing how skills relate and flow together, or even a cohesive module-level dashboard. Some modules might warrant a bespoke web app. Not every module needs this, but it's worth exploring — users often don't think of it until prompted.
- **Setup skill extensions?** Beyond config collection, does the setup process need to do anything special? Install a web app, scaffold project directories, configure external services, generate starter files? The setup skill is extensible — it can do more than just write config.

### 5. Define Each Skill

For each planned skill (whether agent or workflow), work through:

- **Name** — following the `bmad-{modulecode}-{skillname}` convention
- **Purpose** — the core outcome in one sentence
- **Capabilities** — each distinct action or mode. These become rows in the help CSV: display name, menu code, description, action name, args, phase, ordering (before/after), required flag, output location, outputs
- **Relationships** — how skills relate to each other. Does one need to run before another? Are there cross-skill dependencies?
- **Design notes** — non-obvious considerations the skill builders should know

Update the **Skills** section of the plan document with structured entries for each.

### 6. Finalize the Plan

Complete all sections of the plan document. Review with the user — walk through the plan and confirm it captures their vision. Update `status` to "complete" in the frontmatter.

**Close with next steps:**

- "Build each skill using **Build an Agent (BA)** or **Build a Workflow (BW)** — share this plan document as context so the builder understands the bigger picture."
- "When all skills are built, return to **Create Module (CM)** to scaffold the module infrastructure."
- Point them to the plan document location so they can reference it.
@@ -0,0 +1,54 @@
# Validate Module

**Language:** Use `{communication_language}` for all output.

## Your Role

You are a module quality reviewer. Your job is to verify that a BMad module's setup skill is complete, accurate, and well-crafted — ensuring every skill is properly registered and every help entry gives users and LLMs the information they need.

## Process

### 1. Locate the Module

Ask the user for the path to their module's skills folder. Identify the setup skill (`bmad-*-setup`) and all other skill folders.

### 2. Run Structural Validation

Run the validation script for deterministic checks:

```bash
python3 ./scripts/validate-module.py "{module-skills-folder}"
```

This checks: setup skill structure, module.yaml completeness, CSV integrity (missing entries, orphans, duplicate menu codes, broken before/after references, missing required fields).

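The script prints a JSON report; a passing run looks roughly like this (illustrative values, following the structure the script emits):

```json
{
  "status": "pass",
  "info": {
    "setup_skill": "bmad-tst-setup",
    "module_code": "tst",
    "module_name": "Test Module",
    "csv_entries": 4,
    "skill_folders": ["bmad-tst-foo"],
    "csv_skills": ["bmad-tst-foo"]
  },
  "findings": [],
  "summary": {
    "total_findings": 0,
    "by_severity": {"critical": 0, "high": 0, "medium": 0, "low": 0}
  }
}
```
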
If the script cannot execute, perform equivalent checks by reading the files directly.

### 3. Quality Assessment

This is where LLM judgment matters. Read every SKILL.md in the module thoroughly, then review each CSV entry against what you learned:

**Completeness** — Does every distinct capability of every skill have its own CSV row? A skill with multiple modes or actions should have multiple entries. Look for capabilities described in SKILL.md overviews that aren't registered.

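A quick way to eyeball completeness is to group the CSV rows by skill so that skills with suspiciously few entries stand out — a throwaway sketch, not part of the module's scripts:

```python
import csv
from collections import defaultdict


def capabilities_by_skill(csv_path):
    """Group capability display names by skill from a module-help.csv file."""
    groups = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            groups[row["skill"]].append(row["display-name"])
    return dict(groups)
```

Comparing the resulting counts against each SKILL.md's described modes makes unregistered capabilities easy to spot.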
**Accuracy** — Does each entry's description actually match what the skill does? Are the action names correct? Do the args match what the skill accepts?

**Description quality** — Each description should be:

- Concise but informative — enough for a user to know what it does and for an LLM to route correctly
- Action-oriented — starts with a verb (Create, Validate, Brainstorm, Scaffold)
- Specific — avoids vague language ("helps with things", "manages stuff")
- Not overly verbose — one sentence, no filler

**Ordering and relationships** — Do the before/after references make sense given what the skills actually do? Are required flags set appropriately?

**Menu codes** — Are they intuitive? Do they relate to the display name in a way users can remember?

### 4. Present Results

Combine script findings and quality assessment into a clear report:

- **Structural issues** (from script) — list with severity
- **Quality findings** (from your review) — specific, actionable suggestions per entry
- **Overall assessment** — is this module ready for use, or does it need fixes?

For each finding, explain what's wrong and suggest the fix. Be direct — the user should be able to act on every item without further clarification.

@@ -0,0 +1,124 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Scaffold a BMad module setup skill from template.

Copies the setup-skill-template into the target directory as bmad-{code}-setup/,
then writes the generated module.yaml and module-help.csv into the assets folder
and updates the SKILL.md frontmatter with the module's identity.
"""

import argparse
import json
import shutil
import sys
from pathlib import Path


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Scaffold a BMad module setup skill from template"
    )
    parser.add_argument(
        "--target-dir",
        required=True,
        help="Directory to create the setup skill in (the user's skills folder)",
    )
    parser.add_argument(
        "--module-code",
        required=True,
        help="Module code (2-4 letter abbreviation, e.g. 'cis')",
    )
    parser.add_argument(
        "--module-name",
        required=True,
        help="Module display name (e.g. 'Creative Intelligence Suite')",
    )
    parser.add_argument(
        "--module-yaml",
        required=True,
        help="Path to the generated module.yaml content file",
    )
    parser.add_argument(
        "--module-csv",
        required=True,
        help="Path to the generated module-help.csv content file",
    )
    parser.add_argument(
        "--verbose", action="store_true", help="Print progress to stderr"
    )
    args = parser.parse_args()

    template_dir = Path(__file__).resolve().parent.parent / "assets" / "setup-skill-template"
    setup_skill_name = f"bmad-{args.module_code}-setup"
    target = Path(args.target_dir) / setup_skill_name

    if not template_dir.is_dir():
        print(
            json.dumps({"status": "error", "message": f"Template not found: {template_dir}"}),
            file=sys.stdout,
        )
        return 2

    for source_path in [args.module_yaml, args.module_csv]:
        if not Path(source_path).is_file():
            print(
                json.dumps({"status": "error", "message": f"Source file not found: {source_path}"}),
                file=sys.stdout,
            )
            return 2

    target_dir = Path(args.target_dir)
    if not target_dir.is_dir():
        print(
            json.dumps({"status": "error", "message": f"Target directory not found: {target_dir}"}),
            file=sys.stdout,
        )
        return 2

    # Remove existing setup skill if present (anti-zombie)
    if target.exists():
        if args.verbose:
            print(f"Removing existing {setup_skill_name}/", file=sys.stderr)
        shutil.rmtree(target)

    # Copy template
    if args.verbose:
        print(f"Copying template to {target}", file=sys.stderr)
    shutil.copytree(template_dir, target)

    # Update SKILL.md frontmatter placeholders
    skill_md = target / "SKILL.md"
    content = skill_md.read_text(encoding="utf-8")
    content = content.replace("{setup-skill-name}", setup_skill_name)
    content = content.replace("{module-name}", args.module_name)
    content = content.replace("{module-code}", args.module_code)
    skill_md.write_text(content, encoding="utf-8")

    # Write generated module.yaml
    yaml_content = Path(args.module_yaml).read_text(encoding="utf-8")
    (target / "assets" / "module.yaml").write_text(yaml_content, encoding="utf-8")

    # Write generated module-help.csv
    csv_content = Path(args.module_csv).read_text(encoding="utf-8")
    (target / "assets" / "module-help.csv").write_text(csv_content, encoding="utf-8")

    # Collect file list
    files_created = sorted(
        str(p.relative_to(target)) for p in target.rglob("*") if p.is_file()
    )

    result = {
        "status": "success",
        "setup_skill": setup_skill_name,
        "location": str(target),
        "files_created": files_created,
        "files_count": len(files_created),
    }
    print(json.dumps(result, indent=2))
    return 0


if __name__ == "__main__":
    sys.exit(main())
@@ -0,0 +1,223 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Tests for scaffold-setup-skill.py"""

import json
import subprocess
import sys
import tempfile
from pathlib import Path

SCRIPT = Path(__file__).resolve().parent.parent / "scaffold-setup-skill.py"
TEMPLATE_DIR = Path(__file__).resolve().parent.parent.parent / "assets" / "setup-skill-template"


def run_scaffold(tmp: Path, **kwargs) -> tuple[int, dict]:
    """Run the scaffold script and return (exit_code, parsed_json)."""
    target_dir = kwargs.get("target_dir", str(tmp / "output"))
    Path(target_dir).mkdir(parents=True, exist_ok=True)

    module_code = kwargs.get("module_code", "tst")
    module_name = kwargs.get("module_name", "Test Module")

    yaml_path = tmp / "module.yaml"
    csv_path = tmp / "module-help.csv"
    yaml_path.write_text(kwargs.get("yaml_content", f'code: {module_code}\nname: "{module_name}"\n'))
    csv_path.write_text(
        kwargs.get(
            "csv_content",
            "module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs\n"
            f'{module_name},bmad-{module_code}-example,Example,EX,An example skill,do-thing,,anytime,,,false,output_folder,artifact\n',
        )
    )

    cmd = [
        sys.executable,
        str(SCRIPT),
        "--target-dir", target_dir,
        "--module-code", module_code,
        "--module-name", module_name,
        "--module-yaml", str(yaml_path),
        "--module-csv", str(csv_path),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    try:
        data = json.loads(result.stdout)
    except json.JSONDecodeError:
        data = {"raw_stdout": result.stdout, "raw_stderr": result.stderr}
    return result.returncode, data


def test_basic_scaffold():
    """Test that scaffolding creates the expected structure."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        code, data = run_scaffold(tmp, target_dir=str(target_dir))
        assert code == 0, f"Script failed: {data}"
        assert data["status"] == "success"
        assert data["setup_skill"] == "bmad-tst-setup"

        setup_dir = target_dir / "bmad-tst-setup"
        assert setup_dir.is_dir()
        assert (setup_dir / "SKILL.md").is_file()
        assert (setup_dir / "scripts" / "merge-config.py").is_file()
        assert (setup_dir / "scripts" / "merge-help-csv.py").is_file()
        assert (setup_dir / "scripts" / "cleanup-legacy.py").is_file()
        assert (setup_dir / "assets" / "module.yaml").is_file()
        assert (setup_dir / "assets" / "module-help.csv").is_file()


def test_skill_md_frontmatter_substitution():
    """Test that SKILL.md placeholders are replaced."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        code, data = run_scaffold(
            tmp,
            target_dir=str(target_dir),
            module_code="xyz",
            module_name="XYZ Studio",
        )
        assert code == 0

        skill_md = (target_dir / "bmad-xyz-setup" / "SKILL.md").read_text()
        assert "bmad-xyz-setup" in skill_md
        assert "XYZ Studio" in skill_md
        assert "{setup-skill-name}" not in skill_md
        assert "{module-name}" not in skill_md
        assert "{module-code}" not in skill_md


def test_generated_files_written():
    """Test that module.yaml and module-help.csv contain generated content."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        custom_yaml = 'code: abc\nname: "ABC Module"\ndescription: "Custom desc"\n'
        custom_csv = "module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs\nABC Module,bmad-abc-thing,Do Thing,DT,Does the thing,run,,anytime,,,false,output_folder,report\n"

        code, data = run_scaffold(
            tmp,
            target_dir=str(target_dir),
            module_code="abc",
            module_name="ABC Module",
            yaml_content=custom_yaml,
            csv_content=custom_csv,
        )
        assert code == 0

        yaml_content = (target_dir / "bmad-abc-setup" / "assets" / "module.yaml").read_text()
        assert "ABC Module" in yaml_content
        assert "Custom desc" in yaml_content

        csv_content = (target_dir / "bmad-abc-setup" / "assets" / "module-help.csv").read_text()
        assert "bmad-abc-thing" in csv_content
        assert "DT" in csv_content


def test_anti_zombie_replaces_existing():
    """Test that an existing setup skill is replaced cleanly."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        # First scaffold
        run_scaffold(tmp, target_dir=str(target_dir))
        stale_file = target_dir / "bmad-tst-setup" / "stale-marker.txt"
        stale_file.write_text("should be removed")

        # Second scaffold should remove stale file
        code, data = run_scaffold(tmp, target_dir=str(target_dir))
        assert code == 0
        assert not stale_file.exists()


def test_missing_target_dir():
    """Test error when target directory doesn't exist."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        nonexistent = tmp / "nonexistent"

        # Write valid source files
        yaml_path = tmp / "module.yaml"
        csv_path = tmp / "module-help.csv"
        yaml_path.write_text('code: tst\nname: "Test"\n')
        csv_path.write_text("header\n")

        cmd = [
            sys.executable,
            str(SCRIPT),
            "--target-dir", str(nonexistent),
            "--module-code", "tst",
            "--module-name", "Test",
            "--module-yaml", str(yaml_path),
            "--module-csv", str(csv_path),
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        assert result.returncode == 2
        data = json.loads(result.stdout)
        assert data["status"] == "error"


def test_missing_source_file():
    """Test error when module.yaml source doesn't exist."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        target_dir = tmp / "output"
        target_dir.mkdir()

        # yaml_path is intentionally never created, simulating a missing source file
        yaml_path = tmp / "module.yaml"
        csv_path = tmp / "module-help.csv"
        csv_path.write_text("header\n")

        cmd = [
            sys.executable,
            str(SCRIPT),
            "--target-dir", str(target_dir),
            "--module-code", "tst",
            "--module-name", "Test",
            "--module-yaml", str(yaml_path),
            "--module-csv", str(csv_path),
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        assert result.returncode == 2
        data = json.loads(result.stdout)
        assert data["status"] == "error"


if __name__ == "__main__":
    tests = [
        test_basic_scaffold,
        test_skill_md_frontmatter_substitution,
        test_generated_files_written,
        test_anti_zombie_replaces_existing,
        test_missing_target_dir,
        test_missing_source_file,
    ]
    passed = 0
    failed = 0
    for test in tests:
        try:
            test()
            print(f" PASS: {test.__name__}")
            passed += 1
        except AssertionError as e:
            print(f" FAIL: {test.__name__}: {e}")
            failed += 1
        except Exception as e:
            print(f" ERROR: {test.__name__}: {e}")
            failed += 1
    print(f"\n{passed} passed, {failed} failed")
    sys.exit(1 if failed else 0)
@@ -0,0 +1,202 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Tests for validate-module.py"""

import json
import subprocess
import sys
import tempfile
from pathlib import Path

SCRIPT = Path(__file__).resolve().parent.parent / "validate-module.py"

CSV_HEADER = "module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs\n"


def create_module(tmp: Path, skills: list[str] | None = None, csv_rows: str = "",
                  yaml_content: str = "", setup_name: str = "bmad-tst-setup") -> Path:
    """Create a minimal module structure for testing."""
    module_dir = tmp / "module"
    module_dir.mkdir()

    # Setup skill
    setup = module_dir / setup_name
    setup.mkdir()
    (setup / "SKILL.md").write_text("---\nname: " + setup_name + "\n---\n# Setup\n")
    (setup / "assets").mkdir()
    (setup / "assets" / "module.yaml").write_text(
        yaml_content or 'code: tst\nname: "Test Module"\ndescription: "A test module"\n'
    )
    (setup / "assets" / "module-help.csv").write_text(CSV_HEADER + csv_rows)

    # Other skills
    for skill in (skills or []):
        skill_dir = module_dir / skill
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text(f"---\nname: {skill}\n---\n# {skill}\n")

    return module_dir


def run_validate(module_dir: Path) -> tuple[int, dict]:
    """Run the validation script and return (exit_code, parsed_json)."""
    result = subprocess.run(
        [sys.executable, str(SCRIPT), str(module_dir)],
        capture_output=True, text=True,
    )
    try:
        data = json.loads(result.stdout)
    except json.JSONDecodeError:
        data = {"raw_stdout": result.stdout, "raw_stderr": result.stderr}
    return result.returncode, data


def test_valid_module():
    """A well-formed module should pass."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = 'Test Module,bmad-tst-foo,Do Foo,DF,Does the foo thing,run,,anytime,,,false,output_folder,report\n'
        module_dir = create_module(tmp, skills=["bmad-tst-foo"], csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        assert code == 0, f"Expected pass: {data}"
        assert data["status"] == "pass"
        assert data["summary"]["total_findings"] == 0


def test_missing_setup_skill():
    """Module with no setup skill should fail critically."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = tmp / "module"
        module_dir.mkdir()
        skill = module_dir / "bmad-tst-foo"
        skill.mkdir()
        (skill / "SKILL.md").write_text("---\nname: bmad-tst-foo\n---\n")

        code, data = run_validate(module_dir)
        assert code == 1
        assert any(f["category"] == "structure" for f in data["findings"])


def test_missing_csv_entry():
    """Skill without a CSV entry should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = create_module(tmp, skills=["bmad-tst-foo", "bmad-tst-bar"],
                                   csv_rows='Test Module,bmad-tst-foo,Do Foo,DF,Does foo,run,,anytime,,,false,output_folder,report\n')

        code, data = run_validate(module_dir)
        assert code == 1
        missing = [f for f in data["findings"] if f["category"] == "missing-entry"]
        assert len(missing) == 1
        assert "bmad-tst-bar" in missing[0]["message"]


def test_orphan_csv_entry():
    """CSV entry for nonexistent skill should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = 'Test Module,bmad-tst-ghost,Ghost,GH,Does not exist,run,,anytime,,,false,output_folder,report\n'
        module_dir = create_module(tmp, skills=[], csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        orphans = [f for f in data["findings"] if f["category"] == "orphan-entry"]
        assert len(orphans) == 1
        assert "bmad-tst-ghost" in orphans[0]["message"]


def test_duplicate_menu_codes():
    """Duplicate menu codes should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = (
            'Test Module,bmad-tst-foo,Do Foo,DF,Does foo,run,,anytime,,,false,output_folder,report\n'
            'Test Module,bmad-tst-foo,Also Foo,DF,Also does foo,other,,anytime,,,false,output_folder,report\n'
        )
        module_dir = create_module(tmp, skills=["bmad-tst-foo"], csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        dupes = [f for f in data["findings"] if f["category"] == "duplicate-menu-code"]
        assert len(dupes) == 1
        assert "DF" in dupes[0]["message"]


def test_invalid_before_after_ref():
    """Before/after references to nonexistent capabilities should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = 'Test Module,bmad-tst-foo,Do Foo,DF,Does foo,run,,anytime,bmad-tst-ghost:phantom,,false,output_folder,report\n'
        module_dir = create_module(tmp, skills=["bmad-tst-foo"], csv_rows=csv_rows)

        code, data = run_validate(module_dir)
        refs = [f for f in data["findings"] if f["category"] == "invalid-ref"]
        assert len(refs) == 1
        assert "bmad-tst-ghost:phantom" in refs[0]["message"]


def test_missing_yaml_fields():
    """module.yaml with missing required fields should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        csv_rows = 'Test Module,bmad-tst-foo,Do Foo,DF,Does foo,run,,anytime,,,false,output_folder,report\n'
        module_dir = create_module(tmp, skills=["bmad-tst-foo"], csv_rows=csv_rows,
                                   yaml_content='code: tst\n')

        code, data = run_validate(module_dir)
        yaml_findings = [f for f in data["findings"] if f["category"] == "yaml"]
        assert len(yaml_findings) >= 1  # at least name or description missing


def test_empty_csv():
    """CSV with header but no rows should be flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        module_dir = create_module(tmp, skills=["bmad-tst-foo"], csv_rows="")

        code, data = run_validate(module_dir)
        assert code == 1
        empty = [f for f in data["findings"] if f["category"] == "csv-empty"]
        assert len(empty) == 1


def test_nonexistent_directory():
    """Nonexistent path should return error."""
    result = subprocess.run(
        [sys.executable, str(SCRIPT), "/nonexistent/path"],
        capture_output=True, text=True,
    )
    assert result.returncode == 2
    data = json.loads(result.stdout)
    assert data["status"] == "error"


if __name__ == "__main__":
    tests = [
        test_valid_module,
        test_missing_setup_skill,
        test_missing_csv_entry,
        test_orphan_csv_entry,
        test_duplicate_menu_codes,
        test_invalid_before_after_ref,
        test_missing_yaml_fields,
        test_empty_csv,
        test_nonexistent_directory,
    ]
    passed = 0
    failed = 0
    for test in tests:
        try:
            test()
            print(f" PASS: {test.__name__}")
            passed += 1
        except AssertionError as e:
            print(f" FAIL: {test.__name__}: {e}")
            failed += 1
        except Exception as e:
            print(f" ERROR: {test.__name__}: {e}")
            failed += 1
    print(f"\n{passed} passed, {failed} failed")
    sys.exit(1 if failed else 0)
242
.agents/skills/bmad-module-builder/scripts/validate-module.py
Normal file
@@ -0,0 +1,242 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Validate a BMad module's setup skill structure and help CSV integrity.

Performs deterministic structural checks:
- Setup skill exists with required files (SKILL.md, assets/module.yaml, assets/module-help.csv)
- All skill folders have at least one capability entry in the CSV
- No orphan CSV entries pointing to nonexistent skills
- Menu codes are unique
- Before/after references point to real capability entries
- Required module.yaml fields are present
- CSV column count is consistent
"""

import argparse
import csv
import json
import sys
from io import StringIO
from pathlib import Path

REQUIRED_YAML_FIELDS = {"code", "name", "description"}
CSV_HEADER = [
    "module", "skill", "display-name", "menu-code", "description",
    "action", "args", "phase", "after", "before", "required",
    "output-location", "outputs",
]


def find_setup_skill(module_dir: Path) -> Path | None:
    """Find the setup skill folder (bmad-*-setup)."""
    for d in module_dir.iterdir():
        if d.is_dir() and d.name.startswith("bmad-") and d.name.endswith("-setup"):
            return d
    return None


def find_skill_folders(module_dir: Path, setup_name: str) -> list[str]:
    """Find all skill folders (directories with SKILL.md), excluding the setup skill."""
    skills = []
    for d in module_dir.iterdir():
        if d.is_dir() and d.name != setup_name and (d / "SKILL.md").is_file():
            skills.append(d.name)
    return sorted(skills)


def parse_yaml_minimal(text: str) -> dict[str, str]:
    """Parse top-level YAML key-value pairs (no nested structures)."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" in line and not line.startswith("#") and not line.startswith("-"):
            key, _, value = line.partition(":")
            key = key.strip()
            value = value.strip().strip('"').strip("'")
            if value and not value.startswith(">"):
                result[key] = value
    return result


def parse_csv_rows(csv_text: str) -> tuple[list[str], list[dict[str, str]]]:
    """Parse CSV text into header and list of row dicts."""
    reader = csv.DictReader(StringIO(csv_text))
    header = reader.fieldnames or []
    rows = list(reader)
    return header, rows


def validate(module_dir: Path, verbose: bool = False) -> dict:
    """Run all structural validations. Returns JSON-serializable result."""
    findings: list[dict] = []
    info: dict = {}

    def finding(severity: str, category: str, message: str, detail: str = ""):
        findings.append({
            "severity": severity,
            "category": category,
            "message": message,
            "detail": detail,
        })

    # 1. Find setup skill
    setup_dir = find_setup_skill(module_dir)
    if not setup_dir:
        finding("critical", "structure", "No setup skill found (bmad-*-setup directory)")
        return {"status": "fail", "findings": findings, "info": info}

    info["setup_skill"] = setup_dir.name

    # 2. Check required files in setup skill
    required_files = {
        "SKILL.md": setup_dir / "SKILL.md",
        "assets/module.yaml": setup_dir / "assets" / "module.yaml",
        "assets/module-help.csv": setup_dir / "assets" / "module-help.csv",
    }
    for label, path in required_files.items():
        if not path.is_file():
            finding("critical", "structure", f"Missing required file: {label}")

    if not all(p.is_file() for p in required_files.values()):
        return {"status": "fail", "findings": findings, "info": info}

    # 3. Validate module.yaml
    yaml_text = (setup_dir / "assets" / "module.yaml").read_text(encoding="utf-8")
    yaml_data = parse_yaml_minimal(yaml_text)
    info["module_code"] = yaml_data.get("code", "")
    info["module_name"] = yaml_data.get("name", "")

    for field in REQUIRED_YAML_FIELDS:
        if not yaml_data.get(field):
            finding("high", "yaml", f"module.yaml missing or empty required field: {field}")

    # 4. Parse and validate CSV
    csv_text = (setup_dir / "assets" / "module-help.csv").read_text(encoding="utf-8")
    header, rows = parse_csv_rows(csv_text)

    # Check header
    if header != CSV_HEADER:
        missing = set(CSV_HEADER) - set(header)
        extra = set(header) - set(CSV_HEADER)
        detail_parts = []
        if missing:
            detail_parts.append(f"missing: {', '.join(sorted(missing))}")
        if extra:
            detail_parts.append(f"extra: {', '.join(sorted(extra))}")
        finding("high", "csv-header", f"CSV header mismatch: {'; '.join(detail_parts)}")

    if not rows:
        finding("high", "csv-empty", "module-help.csv has no capability entries")
        return {"status": "fail", "findings": findings, "info": info}

    info["csv_entries"] = len(rows)

    # 5. Check column count consistency
    expected_cols = len(CSV_HEADER)
    for i, row in enumerate(rows):
        if len(row) != expected_cols:
            finding("medium", "csv-columns", f"Row {i + 2} has {len(row)} columns, expected {expected_cols}",
                    f"skill={row.get('skill', '?')}")

    # 6. Collect skills from CSV and filesystem
    csv_skills = {row.get("skill", "") for row in rows}
    skill_folders = find_skill_folders(module_dir, setup_dir.name)
    info["skill_folders"] = skill_folders
    info["csv_skills"] = sorted(csv_skills)

    # 7. Skills without CSV entries
    for skill in skill_folders:
        if skill not in csv_skills:
            finding("high", "missing-entry", f"Skill '{skill}' has no capability entries in the CSV")

    # 8. Orphan CSV entries
    for skill in csv_skills:
        if skill not in skill_folders and skill != setup_dir.name:
            # The setup skill itself is a valid reference; anything else must exist on disk
            if not (module_dir / skill / "SKILL.md").is_file():
                finding("high", "orphan-entry", f"CSV references skill '{skill}' which does not exist in the module folder")

    # 9. Unique menu codes
    menu_codes: dict[str, list[str]] = {}
    for row in rows:
        code = row.get("menu-code", "").strip()
        if code:
            menu_codes.setdefault(code, []).append(row.get("display-name", "?"))

    for code, names in menu_codes.items():
        if len(names) > 1:
            finding("high", "duplicate-menu-code", f"Menu code '{code}' used by multiple entries: {', '.join(names)}")

    # 10. Before/after reference validation
    # Build set of valid capability references (skill:action)
    valid_refs = set()
    for row in rows:
        skill = row.get("skill", "").strip()
        action = row.get("action", "").strip()
        if skill and action:
            valid_refs.add(f"{skill}:{action}")

    for row in rows:
        display = row.get("display-name", "?")
        for field in ("after", "before"):
            value = row.get(field, "").strip()
            if not value:
                continue
            # Can be comma-separated
            for ref in value.split(","):
                ref = ref.strip()
                if ref and ref not in valid_refs:
                    finding("medium", "invalid-ref",
                            f"'{display}' {field} references '{ref}' which is not a valid capability",
                            "Expected format: skill-name:action-name")

    # 11. Required fields in each row
    for row in rows:
        display = row.get("display-name", "?")
        for field in ("skill", "display-name", "menu-code", "description"):
            if not row.get(field, "").strip():
                finding("high", "missing-field", f"Entry '{display}' is missing required field: {field}")

    # Summary
    severity_counts = {"critical": 0, "high": 0, "medium": 0, "low": 0}
    for f in findings:
        severity_counts[f["severity"]] = severity_counts.get(f["severity"], 0) + 1

    status = "pass" if severity_counts["critical"] == 0 and severity_counts["high"] == 0 else "fail"

    return {
        "status": status,
        "info": info,
        "findings": findings,
        "summary": {
            "total_findings": len(findings),
            "by_severity": severity_counts,
        },
    }


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Validate a BMad module's setup skill structure and help CSV integrity"
    )
    parser.add_argument(
        "module_dir",
        help="Path to the module's skills folder (containing the setup skill and other skills)",
    )
    parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
    args = parser.parse_args()

    module_path = Path(args.module_dir)
    if not module_path.is_dir():
        print(json.dumps({"status": "error", "message": f"Not a directory: {module_path}"}))
        return 2

    result = validate(module_path, verbose=args.verbose)
    print(json.dumps(result, indent=2))
    return 0 if result["status"] == "pass" else 1


if __name__ == "__main__":
    sys.exit(main())