
IA Gen Maturity Evaluator

Production-ready web app for evaluating the AI/GenAI maturity of candidates. Built by Peaksys for Peaksys.

Tech Stack

  • Next.js 16 (App Router), React 19, TypeScript, TailwindCSS
  • Prisma + SQLite (local) — switch to Postgres/Supabase for production
  • Recharts (radar chart), jsPDF (PDF export)

Setup

pnpm install
cp .env.example .env
pnpm db:generate
pnpm db:push   # or pnpm db:migrate for an empty DB
pnpm db:seed

Note: If the DB already exists (created with db push), switch to migrations with: pnpm prisma migrate resolve --applied 20250220000000_init

Run

pnpm dev

Open http://localhost:3000.

Seed Data

  • 3 candidates with sample evaluations (Alice Chen, Bob Martin, Carol White)
  • 2 templates: Full (15 dimensions), Short (8 dimensions)
  • Admin user: admin@peaksys.local (mock auth)

API Routes

Route                       Methods     Description
/api/evaluations            GET, POST   List / create evaluations
/api/evaluations/[id]       GET, PUT    Get / update an evaluation
/api/templates              GET         List templates
/api/export/csv?id=         GET         Export an evaluation as CSV
/api/export/pdf?id=         GET         Export an evaluation as PDF
/api/auth                   GET, POST   Mock auth
/api/ai/suggest-followups   POST        AI follow-up suggestions (stub)
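A minimal typed client sketch for these routes (the Evaluation shape and helper names here are illustrative assumptions, not the actual Prisma model):

```typescript
// Illustrative shape only; the real model lives in prisma/schema.prisma.
type Evaluation = { id: string; candidate: string };

const BASE = "http://localhost:3000";

// Build an export URL for a given format and evaluation id.
function exportUrl(format: "csv" | "pdf", id: string): string {
  return `${BASE}/api/export/${format}?id=${encodeURIComponent(id)}`;
}

// List evaluations (requires the dev server to be running).
async function listEvaluations(): Promise<Evaluation[]> {
  const res = await fetch(`${BASE}/api/evaluations`);
  if (!res.ok) throw new Error(`GET /api/evaluations failed: ${res.status}`);
  return res.json();
}
```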

Export cURL Examples

# CSV export (replace EVAL_ID with actual evaluation id)
curl -o evaluation.csv "http://localhost:3000/api/export/csv?id=EVAL_ID"

# PDF export
curl -o evaluation.pdf "http://localhost:3000/api/export/pdf?id=EVAL_ID"

With auth header (when real auth is added):

curl -H "Authorization: Bearer YOUR_TOKEN" -o evaluation.csv "http://localhost:3000/api/export/csv?id=EVAL_ID"
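For reference, CSV generation boils down to quoting fields that contain commas, quotes, or newlines. A small sketch of what an export util (like the one in src/lib/export-utils) might do; function names are assumptions:

```typescript
// Quote a field only when it contains a comma, quote, or newline (RFC 4180 style).
function csvField(value: string): string {
  return /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;
}

// Join escaped fields into one CSV row.
function toCsvRow(fields: string[]): string {
  return fields.map(csvField).join(",");
}
```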

AI Assistant Stub

The AI assistant is a client-side stub that returns deterministic follow-up suggestions based on:

  • Dimension name
  • Candidate answer length
  • Current score (low scores trigger probing questions)
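The deterministic logic above can be sketched as a pure function; the thresholds and wording here are illustrative assumptions, not the actual stub:

```typescript
// Sketch of the deterministic stub: suggestions depend only on
// dimension name, answer length, and current score.
function suggestFollowups(
  dimensionName: string,
  candidateAnswer: string,
  currentScore: number,
): string[] {
  const suggestions: string[] = [];
  // Low scores trigger probing questions.
  if (currentScore <= 2) {
    suggestions.push(
      `Can you walk me through a concrete example of ${dimensionName} in practice?`,
    );
  }
  // Short answers prompt for more detail (threshold is an assumption).
  if (candidateAnswer.trim().length < 80) {
    suggestions.push(`Could you elaborate on your experience with ${dimensionName}?`);
  }
  // Always return at least one follow-up.
  if (suggestions.length === 0) {
    suggestions.push(`What trade-offs did you consider regarding ${dimensionName}?`);
  }
  return suggestions;
}
```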

To plug a real LLM:

  1. Create or update /api/ai/suggest-followups to call OpenAI/Anthropic/etc.
  2. Pass { dimensionName, candidateAnswer, currentScore } in the request body.
  3. Use a prompt like: "Given this dimension and candidate answer, suggest 2-3 probing interview questions."
  4. Return { suggestions: string[] }.

The client already calls this API when the user clicks "Get AI follow-up suggestions" in the dimension card.
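A hedged sketch of what that route handler could look like, using the standard Request/Response APIs that App Router handlers accept; callLLM is a placeholder to swap for a real OpenAI/Anthropic SDK call:

```typescript
// Request body shape from step 2 above.
type SuggestBody = {
  dimensionName: string;
  candidateAnswer: string;
  currentScore: number;
};

// Placeholder: replace with a real LLM SDK call.
async function callLLM(prompt: string): Promise<string[]> {
  return [`(stub) follow-up for: ${prompt.slice(0, 40)}...`];
}

// POST /api/ai/suggest-followups
export async function POST(req: Request): Promise<Response> {
  const { dimensionName, candidateAnswer, currentScore } =
    (await req.json()) as SuggestBody;
  const prompt =
    `Dimension: ${dimensionName}\nScore: ${currentScore}\nAnswer: ${candidateAnswer}\n` +
    `Suggest 2-3 probing interview questions.`;
  const suggestions = await callLLM(prompt);
  // Matches the { suggestions: string[] } contract from step 4.
  return Response.json({ suggestions });
}
```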

Tests

# Unit tests (Vitest)
pnpm test

# E2E tests (Playwright) — requires dev server
pnpm test:e2e

Run pnpm exec playwright install once to install browsers for E2E.

Deploy

  1. Set DATABASE_URL to Postgres (e.g. Supabase, Neon).
  2. Run migrations: pnpm db:migrate (or pnpm db:push in dev)
  3. Seed if needed: pnpm db:seed
  4. Build: pnpm build && pnpm start
  5. Or deploy to Vercel (set env, use Vercel Postgres or external DB).

Acceptance Criteria

  • Auth: mock single-admin login
  • Dashboard: list evaluations and candidates
  • Create/Edit evaluation: candidate, role, date, evaluator, template
  • Templates: Full 15-dim, Short 8-dim
  • Interview guide: definition, rubric 1→5, signals, questions per dimension
  • AI assistant: stub suggests follow-ups
  • Scoring: 1-5, justification, examples, confidence
  • Probing questions when score ≤ 2
  • Radar chart + findings/recommendations
  • Export PDF and CSV
  • Admin: view templates
  • Warning when all scores = 5 without comments
  • Edit after submission (audit log)
  • Mobile responsive (Tailwind)
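The "all scores = 5 without comments" warning above can be expressed as a pure check; the type and function names are assumptions for illustration:

```typescript
// One scored dimension with its written justification.
type DimensionScore = { score: number; justification: string };

// True when every dimension is a 5 and no justification was written,
// which should surface a warning before submission.
function needsAllFivesWarning(scores: DimensionScore[]): boolean {
  return (
    scores.length > 0 &&
    scores.every((s) => s.score === 5 && s.justification.trim() === "")
  );
}
```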

Manual Test Plan

  1. Dashboard: Open /, verify evaluations table or empty state.
  2. New evaluation: Click "New Evaluation", fill form, select template, submit.
  3. Interview guide: On evaluation page, score dimensions, add notes, click "Get AI follow-up suggestions".
  4. Low score: Set a dimension to 1 or 2, verify probing questions appear.
  5. All 5s: Set all scores to 5 with no justification, submit — verify warning.
  6. Aggregate: Click "Auto-generate findings", verify radar chart and text.
  7. Export: Click Export, download CSV and PDF.
  8. Admin: Open /admin, verify templates listed.

File Structure

src/
├── app/
│   ├── api/           # API routes
│   ├── evaluations/   # Evaluation pages
│   ├── admin/         # Admin page
│   └── page.tsx       # Dashboard
├── components/        # UI components
└── lib/               # Utils, db, ai-stub, export-utils
prisma/
├── schema.prisma
└── seed.ts
tests/e2e/             # Playwright E2E