feat: Automatically recalculate averages after each grade entry
Teachers need up-to-date averages immediately after grades are published or modified, without waiting for a nightly batch. The system recalculates via synchronous Domain Events: per-assessment statistics (min/max/mean/median), weighted subject averages (normalized to /20), and each student's overall average. Results are stored in denormalized tables with a Redis cache (5-minute TTL). Three API endpoints expose the data with role-based access control. A console command backfills historical data at deployment.
.agents/skills/bmad-code-review/steps/step-03-triage.md (new file, 49 lines)
@@ -0,0 +1,49 @@
# Step 3: Triage

## RULES

- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- Be precise. When uncertain between categories, prefer the more conservative classification.
## INSTRUCTIONS

1. **Normalize** findings into a common format. Expected input formats:

   - Adversarial (Blind Hunter): markdown list of descriptions
   - Edge Case Hunter: JSON array with `location`, `trigger_condition`, `guard_snippet`, `potential_consequence` fields
   - Acceptance Auditor: markdown list with title, AC/constraint reference, and evidence

   If a layer's output does not match its expected format, attempt best-effort parsing. Note any parsing issues for the user.

   Convert all to a unified list where each finding has:

   - `id` -- sequential integer
   - `source` -- `blind`, `edge`, `auditor`, or merged sources (e.g., `blind+edge`)
   - `title` -- one-line summary
   - `detail` -- full description
   - `location` -- file and line reference (if available)
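The unified shape and the conversion of one input format could be sketched as follows. This is a minimal illustration, not part of the skill itself: the field names come from the list above, while the `Finding` dataclass and the `normalize_edge` helper are assumed names.

```python
from dataclasses import dataclass
from typing import Optional
import json

@dataclass
class Finding:
    id: int                          # sequential integer
    source: str                      # "blind", "edge", "auditor", or merged, e.g. "blind+edge"
    title: str                       # one-line summary
    detail: str                      # full description
    location: Optional[str] = None   # file and line reference, if available

def normalize_edge(raw_json: str) -> list[Finding]:
    """Best-effort conversion of Edge Case Hunter JSON into unified findings."""
    findings = []
    for i, item in enumerate(json.loads(raw_json), start=1):
        findings.append(Finding(
            id=i,
            source="edge",
            title=item.get("trigger_condition", "untitled"),
            detail=item.get("potential_consequence", ""),
            location=item.get("location"),
        ))
    return findings
```

The two markdown-list formats would be parsed analogously, filling `title` and `detail` from each list item.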
|
||||
|
||||
2. **Deduplicate.** If two or more findings describe the same issue, merge them into one:

   - Use the most specific finding as the base (prefer edge-case JSON with location over adversarial prose).
   - Append any unique detail, reasoning, or location references from the other finding(s) into the surviving `detail` field.
   - Set `source` to the merged sources (e.g., `blind+edge`).
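A minimal sketch of the merge rule, assuming findings are dicts with the fields from instruction 1 (deciding *whether* two findings are duplicates is the agent's semantic judgment and is not modeled here):

```python
def merge_findings(base: dict, dup: dict) -> dict:
    """Merge a duplicate finding into the more specific base finding.

    The base keeps its id, title, and location; unique detail and any
    extra location from the duplicate are appended, and sources combine.
    """
    merged = dict(base)
    if dup["detail"] and dup["detail"] not in base["detail"]:
        merged["detail"] = base["detail"] + "\n\nMerged from duplicate: " + dup["detail"]
    if dup.get("location") and dup["location"] != base.get("location"):
        merged["detail"] += f"\n\nAlso reported at: {dup['location']}"
    # Combine sources deterministically, e.g. "blind" + "edge" -> "blind+edge"
    sources = sorted(set(base["source"].split("+")) | set(dup["source"].split("+")))
    merged["source"] = "+".join(sources)
    return merged
```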
3. **Classify** each finding into exactly one bucket:

   - **decision_needed** -- There is an ambiguous choice that requires human input. The code cannot be correctly patched without knowing the user's intent. Only possible if `{review_mode}` = `"full"`.
   - **patch** -- Code issue that is fixable without human input. The correct fix is unambiguous.
   - **defer** -- Pre-existing issue not caused by the current change. Real but not actionable now.
   - **dismiss** -- Noise, false positive, or handled elsewhere.

   If `{review_mode}` = `"no-spec"` and a finding would otherwise be `decision_needed`, reclassify it as `patch` (if the fix is unambiguous) or `defer` (if not).
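The reclassification rule above can be expressed as a small guard. The function name and the `fix_is_unambiguous` flag are illustrative; in practice the agent makes that judgment per finding.

```python
def apply_review_mode(bucket: str, review_mode: str, fix_is_unambiguous: bool) -> str:
    """Constrain an initial bucket choice by the review mode.

    "decision_needed" is only valid in "full" mode; under "no-spec" it
    degrades to "patch" when the fix is unambiguous, otherwise "defer".
    """
    if bucket == "decision_needed" and review_mode == "no-spec":
        return "patch" if fix_is_unambiguous else "defer"
    return bucket
```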
4. **Drop** all `dismiss` findings. Record the dismiss count for the summary.

5. If `{failed_layers}` is non-empty, report which layers failed before announcing results. If zero findings remain after dropping dismissed findings AND `{failed_layers}` is non-empty, warn the user that the review may be incomplete rather than announcing a clean review.

6. If zero findings remain after triage (all rejected or none raised), state "✅ Clean review — all layers passed." (Instruction 5 already warned if any review layers failed via `{failed_layers}`.)
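Instruction 4's drop-and-count step amounts to a simple filter. A sketch, assuming each finding dict carries a hypothetical `bucket` field set in instruction 3:

```python
def drop_dismissed(findings: list[dict]) -> tuple[list[dict], int]:
    """Remove dismissed findings; return (kept, dismiss_count) for the summary."""
    kept = [f for f in findings if f["bucket"] != "dismiss"]
    return kept, len(findings) - len(kept)
```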
## NEXT

Read fully and follow `./step-04-present.md`