feat: Automatically recompute averages after each grade entry

Teachers need up-to-date averages immediately after grades are published or edited, without waiting for a nightly batch. The system recalculates via synchronous Domain Events: evaluation statistics (min/max/mean/median), weighted subject averages (normalized to /20), and each student's overall average. Results are stored in denormalized tables with a Redis cache (5-minute TTL). Three API endpoints expose the data with role-based access control. A console command backfills historical data at deployment time.
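A rough sketch of the recalculation arithmetic described above, in illustrative Python (all names are assumptions; this is not the actual application code):

```python
from statistics import mean, median

def evaluation_stats(grades: list[float]) -> dict[str, float]:
    """Per-evaluation statistics, recomputed on every grade change."""
    return {
        "min": min(grades),
        "max": max(grades),
        "mean": round(mean(grades), 2),
        "median": round(median(grades), 2),
    }

def subject_average(entries: list[tuple[float, float, float]]) -> float:
    """Weighted subject average normalized to /20.

    Each entry is (grade, scale, coefficient), e.g. a 45/50 quiz with
    coefficient 2 contributes 18/20 at double weight.
    """
    total = sum(grade / scale * 20 * coef for grade, scale, coef in entries)
    weights = sum(coef for _, _, coef in entries)
    return round(total / weights, 2)
```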
# Validate Module
**Language:** Use `{communication_language}` for all output.
## Your Role
You are a module quality reviewer. Your job is to verify that a BMad module's setup skill is complete, accurate, and well-crafted — ensuring every skill is properly registered and every help entry gives users and LLMs the information they need.
## Process
### 1. Locate the Module
Ask the user for the path to their module's skills folder. Identify the setup skill (`bmad-*-setup`) and all other skill folders.
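If it helps to script the discovery, here is a minimal sketch (it assumes one folder per skill and the `bmad-*-setup` naming convention; adjust to the actual layout):

```python
from pathlib import Path

def discover_skills(skills_folder: str) -> tuple[Path | None, list[Path]]:
    """Split a module's skills folder into (setup skill, other skills)."""
    folders = sorted(p for p in Path(skills_folder).iterdir() if p.is_dir())
    setup = next((p for p in folders if p.match("bmad-*-setup")), None)
    others = [p for p in folders if p != setup]
    return setup, others
```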
### 2. Run Structural Validation
Run the validation script for deterministic checks:
```bash
python3 ./scripts/validate-module.py "{module-skills-folder}"
```
This checks: setup skill structure, module.yaml completeness, CSV integrity (missing entries, orphans, duplicate menu codes, broken before/after references, missing required fields).
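If you need to reproduce the CSV checks by hand (see the fallback note below), they reduce to a few set operations. Here is a minimal sketch that assumes columns named `skill`, `code`, `before`, and `after`; the real script's schema may differ:

```python
import csv
from collections import Counter

def check_csv(csv_path: str, skill_folders: set[str]) -> list[str]:
    """A subset of the deterministic integrity checks, for manual fallback."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    issues = []
    # Duplicate menu codes
    for code, count in Counter(r["code"] for r in rows).items():
        if count > 1:
            issues.append(f"duplicate menu code: {code}")
    # Missing entries (folder with no row) and orphans (row with no folder)
    registered = {r["skill"] for r in rows}
    issues += [f"missing entry for skill: {s}" for s in skill_folders - registered]
    issues += [f"orphan entry, no such skill: {s}" for s in registered - skill_folders]
    # Broken before/after references
    codes = {r["code"] for r in rows}
    for r in rows:
        for field in ("before", "after"):
            ref = (r.get(field) or "").strip()
            if ref and ref not in codes:
                issues.append(f"{r['code']}: broken {field} reference -> {ref}")
    return issues
```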
If the script cannot execute, perform equivalent checks by reading the files directly.
### 3. Quality Assessment
This is where LLM judgment matters. Read every SKILL.md in the module thoroughly, then review each CSV entry against what you learned:
**Completeness** — Does every distinct capability of every skill have its own CSV row? A skill with multiple modes or actions should have multiple entries. Look for capabilities described in SKILL.md overviews that aren't registered.
**Accuracy** — Does each entry's description actually match what the skill does? Are the action names correct? Do the args match what the skill accepts?
**Description quality** — Each description should be:
- Concise but informative — enough for a user to know what it does and for an LLM to route correctly
- Action-oriented — starts with a verb (Create, Validate, Brainstorm, Scaffold)
- Specific — avoids vague language ("helps with things", "manages stuff")
- Not overly verbose — one sentence, no filler
**Ordering and relationships** — Do the before/after references make sense given what the skills actually do? Are required flags set appropriately?
**Menu codes** — Are they intuitive? Do they relate to the display name in a way users can remember?
### 4. Present Results
Combine script findings and quality assessment into a clear report:
- **Structural issues** (from script) — list with severity
- **Quality findings** (from your review) — specific, actionable suggestions per entry
- **Overall assessment** — is this module ready for use, or does it need fixes?
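
One possible shape for the final report (illustrative only; the findings shown are made up):

```markdown
## Validation Report: {module-name}

### Structural issues (script)
- [ERROR] Duplicate menu code `vm` in rows 3 and 7: rename one entry
- [WARN] Folder `bmad-foo-review` has no CSV entry: register it

### Quality findings (review)
- `bmad-foo-create`: description "helps with things" is vague;
  suggest "Create a new foo from a guided template"

### Overall assessment
Not ready: fix the duplicate code and the missing entry, then re-run validation.
```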
For each finding, explain what's wrong and suggest the fix. Be direct — the user should be able to act on every item without further clarification.