# IA Gen Maturity Evaluator
Production-ready web app for evaluating IA/GenAI maturity of candidates. Built for the Cars Front team.
## Tech Stack

- **Next.js 16** (App Router), **React 19**, **TypeScript**, **TailwindCSS**
- **Prisma** + **SQLite** (local) — switch to Postgres/Supabase for production
- **Recharts** (radar chart), **jsPDF** (PDF export)

## Setup
```bash
pnpm install
cp .env.example .env
pnpm db:generate
pnpm db:push
pnpm db:seed
```
## Run

```bash
pnpm dev
```

Open [http://localhost:3000](http://localhost:3000).
## Seed Data
- **3 candidates** with sample evaluations (Alice Chen, Bob Martin, Carol White)
- **2 templates**: Full 15-dimensions, Short 8-dimensions
- **Admin user**: `admin@cars-front.local` (mock auth)

## API Routes
| Route | Method | Description |
|-------|--------|-------------|
| `/api/evaluations` | GET, POST | List / create evaluations |
| `/api/evaluations/[id]` | GET, PUT | Get / update evaluation |
| `/api/templates` | GET | List templates |
| `/api/export/csv?id=` | GET | Export evaluation as CSV |
| `/api/export/pdf?id=` | GET | Export evaluation as PDF |
| `/api/auth` | GET, POST | Mock auth |
| `/api/ai/suggest-followups` | POST | AI follow-up suggestions (stub) |
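
For instance, creating an evaluation is a plain JSON `POST` to `/api/evaluations`. The payload field names below (`candidateName`, `role`, `templateId`) are illustrative assumptions, not the actual schema:

```typescript
// Hypothetical request builder for POST /api/evaluations.
// Field names are illustrative; check the Prisma schema for the real shape.
type NewEvaluation = {
  candidateName: string; // assumed field name, for illustration only
  role: string;
  templateId: string;
};

function buildCreateEvaluationRequest(data: NewEvaluation) {
  return {
    url: "/api/evaluations",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(data),
    },
  };
}
```

On the client this would be passed to `fetch(req.url, req.init)`.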
## Export cURL Examples
```bash
# CSV export (replace EVAL_ID with actual evaluation id)
curl -o evaluation.csv "http://localhost:3000/api/export/csv?id=EVAL_ID"

# PDF export
curl -o evaluation.pdf "http://localhost:3000/api/export/pdf?id=EVAL_ID"
```
With auth header (when real auth is added):
```bash
curl -H "Authorization: Bearer YOUR_TOKEN" -o evaluation.csv "http://localhost:3000/api/export/csv?id=EVAL_ID"
```
## AI Assistant Stub
The AI assistant is a **client-side stub** that returns deterministic follow-up suggestions based on:

- Dimension name
- Candidate answer length
- Current score (low scores trigger probing questions)

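A minimal sketch of such a deterministic stub (the function name and thresholds here are assumptions, not the actual `lib` implementation):

```typescript
// Illustrative deterministic stub: no LLM call, output depends only on inputs.
// Name and thresholds are assumptions; the real logic lives in src/lib (ai-stub).
function suggestFollowupsStub(
  dimensionName: string,
  candidateAnswer: string,
  currentScore: number
): string[] {
  const suggestions: string[] = [];
  if (currentScore <= 2) {
    // Low scores trigger probing questions.
    suggestions.push(`Can you walk me through a concrete example of ${dimensionName}?`);
  }
  if (candidateAnswer.trim().length < 50) {
    // Short answers get a request for depth.
    suggestions.push(`Could you elaborate on your experience with ${dimensionName}?`);
  }
  suggestions.push(`What trade-offs have you faced around ${dimensionName}?`);
  return suggestions;
}
```

Because the output is a pure function of the inputs, the stub is trivial to unit-test.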
**To plug a real LLM:**
1. Create or update `/api/ai/suggest-followups` to call OpenAI/Anthropic/etc.
2. Pass `{ dimensionName, candidateAnswer, currentScore }` in the request body.
3. Use a prompt like: *"Given this dimension and candidate answer, suggest 2–3 probing interview questions."*
4. Return `{ suggestions: string[] }`.

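Sketched end to end, steps 1–4 might look like the function below; `callLLM` is a placeholder for a real provider SDK call, not an actual client:

```typescript
// Sketch of the server-side logic; callLLM stands in for a real
// OpenAI/Anthropic SDK call and just echoes a canned line here.
type FollowupRequest = {
  dimensionName: string;
  candidateAnswer: string;
  currentScore: number;
};

async function callLLM(prompt: string): Promise<string[]> {
  // Placeholder: replace with e.g. a chat-completion call whose
  // response is parsed into an array of questions.
  return [`(model output for prompt: ${prompt.slice(0, 30)}…)`];
}

async function suggestFollowups(req: FollowupRequest): Promise<{ suggestions: string[] }> {
  const prompt =
    `Given the dimension "${req.dimensionName}" and this candidate answer ` +
    `(current score ${req.currentScore}/5): "${req.candidateAnswer}", ` +
    `suggest 2-3 probing interview questions.`;
  return { suggestions: await callLLM(prompt) };
}
```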
The client already calls this API when the user clicks "Get AI follow-up suggestions" in the dimension card.
## Tests
```bash
# Unit tests (Vitest)
pnpm test

# E2E tests (Playwright) — requires dev server
pnpm test:e2e
```
Run `pnpm exec playwright install` once to install browsers for E2E.
## Deploy
1. Set `DATABASE_URL` to Postgres (e.g. Supabase, Neon).
2. Run migrations: `pnpm db:push`
3. Seed if needed: `pnpm db:seed`
4. Build: `pnpm build && pnpm start`
5. Or deploy to Vercel (set env, use Vercel Postgres or external DB).

## Acceptance Criteria
- [x] Auth: mock single-admin login
- [x] Dashboard: list evaluations and candidates
- [x] Create/Edit evaluation: candidate, role, date, evaluator, template
- [x] Templates: Full 15-dim, Short 8-dim
- [x] Interview guide: definition, rubric 1→5, signals, questions per dimension
- [x] AI assistant: stub suggests follow-ups
- [x] Scoring: 1–5, justification, examples, confidence
- [x] Probing questions when score ≤ 2
- [x] Radar chart + findings/recommendations
- [x] Export PDF and CSV
- [x] Admin: view templates
- [x] Warning when all scores = 5 without comments
- [x] Edit after submission (audit log)
- [x] Mobile responsive (Tailwind)

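The "all scores = 5 without comments" check above can be sketched as a pure predicate; the `DimensionScore` shape is an assumption for illustration, not the actual Prisma model:

```typescript
// Illustrative predicate for the "all 5s with no justification" warning.
// The DimensionScore shape is assumed for this sketch.
type DimensionScore = { score: number; justification: string };

function needsAllFivesWarning(scores: DimensionScore[]): boolean {
  return (
    scores.length > 0 &&
    scores.every((s) => s.score === 5 && s.justification.trim() === "")
  );
}
```

A single non-5 score or any non-empty justification suppresses the warning.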
## Manual Test Plan
1. **Dashboard**: Open `/`, verify evaluations table or empty state.
2. **New evaluation**: Click "New Evaluation", fill form, select template, submit.
3. **Interview guide**: On evaluation page, score dimensions, add notes, click "Get AI follow-up suggestions".
4. **Low score**: Set a dimension to 1 or 2, verify probing questions appear.
5. **All 5s**: Set all scores to 5 with no justification, submit — verify warning.
6. **Aggregate**: Click "Auto-generate findings", verify radar chart and text.
7. **Export**: Click Export, download CSV and PDF.
8. **Admin**: Open `/admin`, verify templates listed.

## File Structure
```
src/
├── app/
│   ├── api/          # API routes
│   ├── evaluations/  # Evaluation pages
│   ├── admin/        # Admin page
│   └── page.tsx      # Dashboard
├── components/       # UI components
└── lib/              # Utils, db, ai-stub, export-utils
prisma/
├── schema.prisma
└── seed.ts
tests/e2e/            # Playwright E2E
```