Teachers need up-to-date averages immediately after grades are published or modified, without waiting for a nightly batch. The system recalculates via synchronous Domain Events: assessment statistics (min/max/mean/median), weighted subject averages (normalized to /20), and each student's overall average. Results are stored in denormalized tables with a Redis cache (5-minute TTL). Three API endpoints expose the data with role-based access control. A console command backfills historical data at deployment.
# QA Generate E2E Tests Workflow
**Goal:** Generate automated API and E2E tests for implemented code.

**Your Role:** You are a QA automation engineer. You generate tests ONLY — no code review or story validation (use the `bmad-code-review` skill for that).

---
## INITIALIZATION
### Configuration Loading

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:

- `project_name`, `user_name`
- `communication_language`, `document_output_language`
- `implementation_artifacts`
- `date` as the system-generated current datetime
- You MUST always communicate in your agent style, using the configured `{communication_language}`
### Paths

- `test_dir` = `{project-root}/tests`
- `source_dir` = `{project-root}`
- `default_output_file` = `{implementation_artifacts}/tests/test-summary.md`

### Context

- `project_context` = `**/project-context.md` (load if it exists)

---
## EXECUTION
### Step 0: Detect Test Framework

Check the project for an existing test framework:

- Look for `package.json` dependencies (playwright, jest, vitest, cypress, etc.)
- Check existing test files to understand patterns
- Use whatever test framework the project already has
- If no framework exists:
  - Analyze the source code to determine the project type (React, Vue, Node API, etc.)
  - Search online for the currently recommended test framework for that stack
  - Suggest that framework and use it (or ask the user to confirm)
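The dependency check above can be sketched as a small pure function. The candidate framework names and their priority order here are illustrative assumptions, not part of the workflow:

```typescript
// Sketch: pick a test framework from package.json dependencies.
// The candidate list and its priority order are illustrative assumptions.
const KNOWN_FRAMEWORKS = ["playwright", "cypress", "vitest", "jest"];

function detectFramework(pkg: {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}): string | null {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  for (const name of KNOWN_FRAMEWORKS) {
    // Playwright is usually installed as "@playwright/test", hence the second lookup.
    if (deps[name] || deps[`@${name}/test`]) return name;
  }
  return null; // fall through to source-code analysis ("If no framework exists")
}

console.log(detectFramework({ devDependencies: { "@playwright/test": "^1.44.0" } })); // "playwright"
console.log(detectFramework({ dependencies: { express: "^4.19.0" } })); // null
```

A `null` result is the trigger for the "analyze the stack, then suggest" branch of Step 0.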
### Step 1: Identify Features

Ask user what to test:

- Specific feature/component name
- Directory to scan (e.g., `src/components/`)
- Or auto-discover features in the codebase
### Step 2: Generate API Tests (if applicable)

For API endpoints/services, generate tests that:

- Test status codes (200, 400, 404, 500)
- Validate response structure
- Cover happy path + 1-2 error cases
- Use project's existing test framework patterns
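Generated API tests follow this shape. In this framework-neutral sketch, the hypothetical `getUser` handler stands in for a real endpoint; a generated test would instead issue an HTTP request through the detected framework's request API:

```typescript
// Hypothetical in-memory handler standing in for GET /users/:id.
// A real generated test would hit the running API over HTTP.
type ApiResponse = { status: number; body: Record<string, unknown> };

function getUser(id: string): ApiResponse {
  if (!/^\d+$/.test(id)) return { status: 400, body: { error: "invalid id" } };
  if (id !== "1") return { status: 404, body: { error: "not found" } };
  return { status: 200, body: { id: "1", name: "Ada" } };
}

// Happy path: assert the status code and the response structure.
const ok = getUser("1");
if (ok.status !== 200) throw new Error("expected 200");
if (typeof ok.body.name !== "string") throw new Error("missing name field");

// 1-2 error cases, as the step above prescribes.
if (getUser("abc").status !== 400) throw new Error("expected 400");
if (getUser("999").status !== 404) throw new Error("expected 404");
console.log("api test shape ok");
```

The point is the assertion pattern (status code, then structure, then a couple of error paths), not the handler itself.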
### Step 3: Generate E2E Tests (if UI exists)

For UI features, generate tests that:

- Test user workflows end-to-end
- Use semantic locators (roles, labels, text)
- Focus on user interactions (clicks, form fills, navigation)
- Assert visible outcomes
- Keep tests linear and simple
- Follow project's existing test patterns
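A linear login-flow sketch in the role-based locator style Playwright popularized. Everything here is hypothetical (the field labels, the button name, the "Dashboard" outcome), and a tiny in-memory stand-in replaces the real `page` object so the idea runs without a browser; a generated test would use the real framework's locator API instead:

```typescript
// Minimal stand-in for a browser page: elements are addressed by ARIA role +
// accessible name, mirroring the semantic-locator idea. All names below are
// hypothetical examples, not part of the workflow.
class FakePage {
  private values = new Map<string, string>();
  private visible = ["heading:Sign in"];

  getByRole(role: string, name: string) {
    const key = `${role}:${name}`;
    return {
      fill: (v: string) => void this.values.set(key, v),
      click: () => {
        // Clicking "Log in" succeeds once both fields are filled.
        if (key === "button:Log in" &&
            this.values.has("textbox:Email") &&
            this.values.has("textbox:Password")) {
          this.visible.push("heading:Dashboard");
        }
      },
      isVisible: () => this.visible.includes(key),
    };
  }
}

// Linear workflow: fill the form, click, assert a visible outcome.
const page = new FakePage();
page.getByRole("textbox", "Email").fill("ada@example.com");
page.getByRole("textbox", "Password").fill("secret");
page.getByRole("button", "Log in").click();
if (!page.getByRole("heading", "Dashboard").isVisible()) {
  throw new Error("expected Dashboard heading after login");
}
console.log("e2e shape ok");
```

Note the test is a straight line of user actions ending in one visible-outcome assertion, matching the "keep tests linear and simple" rule above.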
### Step 4: Run Tests

Execute tests to verify they pass (use project's test command).

If failures occur, fix them immediately.
### Step 5: Create Summary

Output markdown summary:

```markdown
# Test Automation Summary

## Generated Tests

### API Tests

- [x] tests/api/endpoint.spec.ts - Endpoint validation

### E2E Tests

- [x] tests/e2e/feature.spec.ts - User workflow

## Coverage

- API endpoints: 5/10 covered
- UI features: 3/8 covered

## Next Steps

- Run tests in CI
- Add more edge cases as needed
```
## Keep It Simple

**Do:**

- Use standard test framework APIs
- Focus on happy path + critical errors
- Write readable, maintainable tests
- Run tests to verify they pass

**Avoid:**

- Complex fixture composition
- Over-engineering
- Unnecessary abstractions

**For Advanced Features:**

If the project needs:

- Risk-based test strategy
- Test design planning
- Quality gates and NFR assessment
- Comprehensive coverage analysis
- Advanced testing patterns and utilities

> **Install the Test Architect (TEA) module**: <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/>
## Output

Save summary to: `{default_output_file}`

**Done!** Tests generated and verified. Validate against `./checklist.md`.