# IA Gen Maturity Evaluator
Production-ready web app for evaluating the AI/GenAI maturity of candidates. Built for the Cars Front team.
## Tech Stack
- **Next.js 16** (App Router), **React 19**, **TypeScript**, **TailwindCSS**
- **Prisma** + **SQLite** (local) — switch to Postgres/Supabase for production
- **Recharts** (radar chart), **jsPDF** (PDF export)
## Setup
```bash
pnpm install
cp .env.example .env
pnpm db:generate
pnpm db:push # or pnpm db:migrate for an empty DB
pnpm db:seed
```
**Note**: If the DB already exists (created with `db push`), switch to migrations with:
`pnpm prisma migrate resolve --applied 20250220000000_init`
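For reference, the local SQLite setup boils down to a single Prisma connection string in `.env` (the `file:./dev.db` path is the common Prisma default, shown here as an assumption — check `.env.example` for the actual values):

```env
# Prisma connection string — SQLite file in the project root for local dev
DATABASE_URL="file:./dev.db"
```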
## Run
```bash
pnpm dev
```
Open [http://localhost:3000](http://localhost:3000).
## Seed Data
- **3 candidates** with sample evaluations (Alice Chen, Bob Martin, Carol White)
- **2 templates**: Full (15 dimensions), Short (8 dimensions)
- **Admin user**: `admin@cars-front.local` (mock auth)
## API Routes
| Route | Method | Description |
|-------|--------|-------------|
| `/api/evaluations` | GET, POST | List / create evaluations |
| `/api/evaluations/[id]` | GET, PUT | Get / update evaluation |
| `/api/templates` | GET | List templates |
| `/api/export/csv?id=` | GET | Export evaluation as CSV |
| `/api/export/pdf?id=` | GET | Export evaluation as PDF |
| `/api/auth` | GET, POST | Mock auth |
| `/api/ai/suggest-followups` | POST | AI follow-up suggestions (stub) |
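The export routes above serialize an evaluation into a file. As an illustration, CSV row generation with RFC 4180 quoting might look like the sketch below — the function names and field list are assumptions; the real logic lives in `src/lib/export-utils`:

```typescript
// Illustrative sketch of CSV generation for an evaluation export.
// Field names are hypothetical; the real shape lives in prisma/schema.prisma.
interface DimensionScore {
  dimension: string;
  score: number; // 1-5
  justification: string;
}

// Quote a field per RFC 4180: wrap in quotes, double any embedded quotes.
function csvField(value: string | number): string {
  const s = String(value);
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

function toCsv(candidate: string, scores: DimensionScore[]): string {
  const header = ["candidate", "dimension", "score", "justification"].join(",");
  const rows = scores.map((d) =>
    [candidate, d.dimension, d.score, d.justification].map(csvField).join(",")
  );
  return [header, ...rows].join("\n");
}
```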
## Export cURL Examples
```bash
# CSV export (replace EVAL_ID with actual evaluation id)
curl -o evaluation.csv "http://localhost:3000/api/export/csv?id=EVAL_ID"
# PDF export
curl -o evaluation.pdf "http://localhost:3000/api/export/pdf?id=EVAL_ID"
```
With auth header (when real auth is added):
```bash
curl -H "Authorization: Bearer YOUR_TOKEN" -o evaluation.csv "http://localhost:3000/api/export/csv?id=EVAL_ID"
```
## AI Assistant Stub
The AI assistant is a **client-side stub** that returns deterministic follow-up suggestions based on:
- Dimension name
- Candidate answer length
- Current score (low scores trigger probing questions)
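The three inputs above are enough to sketch the deterministic logic. The function name and thresholds below are assumptions for illustration, not the real implementation (which lives under `src/lib`):

```typescript
// Illustrative sketch of a deterministic follow-up generator.
function suggestFollowups(
  dimensionName: string,
  candidateAnswer: string,
  currentScore: number
): string[] {
  const suggestions: string[] = [];

  // Low scores trigger probing questions.
  if (currentScore <= 2) {
    suggestions.push(
      `Can you walk me through a concrete example of ${dimensionName} from your own work?`
    );
  }

  // Short answers suggest the candidate should be prompted for depth.
  if (candidateAnswer.trim().length < 80) {
    suggestions.push(`Could you elaborate on your approach to ${dimensionName}?`);
  }

  // Always include one generic deepening question.
  suggestions.push(`What trade-offs did you consider around ${dimensionName}?`);
  return suggestions;
}
```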
**To plug a real LLM:**
1. Create or update `/api/ai/suggest-followups` to call OpenAI/Anthropic/etc.
2. Pass `{ dimensionName, candidateAnswer, currentScore }` in the request body.
3. Use a prompt like: *"Given this dimension and candidate answer, suggest 2-3 probing interview questions."*
4. Return `{ suggestions: string[] }`.
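The steps above can be sketched as a route handler. Everything here is an assumption for illustration — the endpoint URL, model name, `OPENAI_API_KEY` env var, and response parsing are placeholders; swap in your provider's official SDK:

```typescript
// Sketch of a real /api/ai/suggest-followups handler (Next.js App Router).
interface FollowupRequest {
  dimensionName: string;
  candidateAnswer: string;
  currentScore: number;
}

// Step 3: build the prompt from the request body.
function buildPrompt(req: FollowupRequest): string {
  return [
    `Dimension: ${req.dimensionName}`,
    `Candidate answer: ${req.candidateAnswer}`,
    `Current score: ${req.currentScore}/5`,
    "Given this dimension and candidate answer, suggest 2-3 probing interview questions.",
    "Return one question per line.",
  ].join("\n");
}

export async function POST(request: Request): Promise<Response> {
  const body = (await request.json()) as FollowupRequest;
  const llmRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: buildPrompt(body) }],
    }),
  });
  const data = await llmRes.json();
  const text: string = data.choices[0].message.content;
  // Step 4: return { suggestions: string[] }.
  return Response.json({ suggestions: text.split("\n").filter(Boolean) });
}
```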
The client already calls this API when the user clicks "Get AI follow-up suggestions" in the dimension card.
## Tests
```bash
# Unit tests (Vitest)
pnpm test
# E2E tests (Playwright) — requires dev server
pnpm test:e2e
```
Run `pnpm exec playwright install` once to install browsers for E2E.
## Deploy
1. Set `DATABASE_URL` to Postgres (e.g. Supabase, Neon).
2. Run migrations: `pnpm db:migrate` (or `pnpm db:push` in dev)
3. Seed if needed: `pnpm db:seed`
4. Build: `pnpm build && pnpm start`
5. Or deploy to Vercel (set env, use Vercel Postgres or external DB).
## Acceptance Criteria
- [x] Auth: mock single-admin login
- [x] Dashboard: list evaluations and candidates
- [x] Create/Edit evaluation: candidate, role, date, evaluator, template
- [x] Templates: Full 15-dim, Short 8-dim
- [x] Interview guide: definition, rubric 1→5, signals, questions per dimension
- [x] AI assistant: stub suggests follow-ups
- [x] Scoring: 1-5, justification, examples, confidence
- [x] Probing questions when score ≤ 2
- [x] Radar chart + findings/recommendations
- [x] Export PDF and CSV
- [x] Admin: view templates
- [x] Warning when all scores = 5 without comments
- [x] Edit after submission (audit log)
- [x] Mobile responsive (Tailwind)
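For instance, the "all scores = 5 without comments" warning above boils down to a simple validation check. A sketch, where the shape of `scores` and the function name are assumptions:

```typescript
// Sketch of the "all scores = 5 without comments" warning check.
interface ScoredDimension {
  score: number;
  justification: string;
}

// True when every dimension scores a 5 and none carries a justification,
// which usually signals a rubber-stamped evaluation.
function needsAllFivesWarning(scores: ScoredDimension[]): boolean {
  return (
    scores.length > 0 &&
    scores.every((s) => s.score === 5 && s.justification.trim() === "")
  );
}
```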
## Manual Test Plan
1. **Dashboard**: Open `/`, verify evaluations table or empty state.
2. **New evaluation**: Click "New Evaluation", fill form, select template, submit.
3. **Interview guide**: On evaluation page, score dimensions, add notes, click "Get AI follow-up suggestions".
4. **Low score**: Set a dimension to 1 or 2, verify probing questions appear.
5. **All 5s**: Set all scores to 5 with no justification, submit — verify warning.
6. **Aggregate**: Click "Auto-generate findings", verify radar chart and text.
7. **Export**: Click Export, download CSV and PDF.
8. **Admin**: Open `/admin`, verify templates listed.
## File Structure
```
src/
├── app/
│   ├── api/           # API routes
│   ├── evaluations/   # Evaluation pages
│   ├── admin/         # Admin page
│   └── page.tsx       # Dashboard
├── components/        # UI components
└── lib/               # Utils, db, ai-stub, export-utils
prisma/
├── schema.prisma
└── seed.ts
tests/e2e/             # Playwright E2E
```