Compare commits


6 Commits

Author SHA1 Message Date
3d4803f975 perf(realtime+data): implement perf-data-optimization and perf-realtime-scale
All checks were successful
Deploy with Docker Compose / deploy (push) Successful in 3m33s
## perf-data-optimization
- Add @@index([name]) on User model (migration)
- Add WEATHER_HISTORY_LIMIT=90 constant, apply take/orderBy on weather history queries
- Replace deep includes with explicit select on all 6 list service queries
- Add unstable_cache layer with revalidateTag on all list service functions
- Add cache-tags.ts helpers (sessionTag, sessionsListTag, userStatsTag)
- Invalidate sessionsListTag in all create/delete Server Actions
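
The tag helpers can be sketched as plain string builders (a sketch — the names come from the commit message, the exact signatures in cache-tags.ts are assumptions):

```typescript
// Hypothetical shape of the cache-tags.ts helpers. The tag names and
// key formats are assumptions, not the actual implementation.
export function sessionTag(sessionId: string): string {
  return `session:${sessionId}`;
}

export function sessionsListTag(userId: string): string {
  return `sessions-list:${userId}`;
}

export function userStatsTag(userId: string): string {
  return `user-stats:${userId}`;
}
```

List queries would then wrap their fetch in `unstable_cache(fn, key, { tags: [sessionsListTag(userId)] })`, and the create/delete Server Actions call `revalidateTag(sessionsListTag(userId))` after each mutation.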

## perf-realtime-scale
- Create src/lib/broadcast.ts: generic createBroadcaster factory with shared polling
  (one interval per active session, starts on first subscriber, stops on last)
- Migrate all 6 SSE routes to use createBroadcaster — removes per-connection setInterval
- Add broadcastToXxx() calls in all Server Actions after mutations for immediate push
- Add SESSIONS_PAGE_SIZE=20, pagination on sessions page with loadMoreSessions action
- Add "Charger plus" button with loading state and "X sur Y" counter in WorkshopTabs
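
The shared-polling lifecycle above can be sketched like this (assumed API — the real src/lib/broadcast.ts may differ):

```typescript
// Hypothetical sketch of the createBroadcaster factory: one shared
// interval per channel, started on the first subscriber and cleared
// when the last one unsubscribes.
type Listener<T> = (event: T) => void;

function createBroadcaster<T>(poll: () => Promise<T[]>, intervalMs = 1000) {
  const listeners = new Set<Listener<T>>();
  let timer: ReturnType<typeof setInterval> | null = null;

  async function tick(): Promise<void> {
    try {
      for (const event of await poll()) {
        for (const listener of listeners) listener(event);
      }
    } catch {
      // Swallow polling errors so one bad tick does not kill the interval.
    }
  }

  return {
    subscribe(listener: Listener<T>): () => void {
      listeners.add(listener);
      // First subscriber starts the single shared interval.
      if (timer === null) timer = setInterval(tick, intervalMs);
      return () => {
        listeners.delete(listener);
        // Last unsubscribe stops it, so idle sessions cost nothing.
        if (listeners.size === 0 && timer !== null) {
          clearInterval(timer);
          timer = null;
        }
      };
    },
    // Immediate push from Server Actions, bypassing the poll cycle.
    broadcast(event: T): void {
      for (const listener of listeners) listener(event);
    },
    get subscriberCount(): number {
      return listeners.size;
    },
  };
}
```

Each SSE route would then register its response stream as a listener instead of owning its own `setInterval`, which is what removes the per-connection timers.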

## Tests
- Add 19 unit tests for broadcast.ts (polling lifecycle, userId filtering,
  formatEvent, error resilience, session isolation)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 15:30:54 +01:00
5b45f18ad9 chore: add Vitest for testing and coverage support
- Introduced new test scripts in package.json: "test", "test:watch", and "test:coverage".
- Added Vitest and vite-tsconfig-paths as dependencies for improved testing capabilities.
- Updated pnpm-lock.yaml to reflect new dependencies and their versions.
2026-03-10 08:38:40 +01:00
f9ed732f1c test: add unit test coverage for services and lib
- 255 tests across 14 files (was 70 tests in 4 files)
- src/services/__tests__:
  - auth: registerUser, updateUserPassword, updateUserProfile
  - okrs: calculateOKRProgress, createOKR, updateKeyResult, updateOKR
  - teams: createTeam, addTeamMember, isAdminOfUser, getTeamMemberIdsForAdminTeams, getUserTeams
  - weather: getPreviousWeatherEntriesForUsers, shareWeatherSessionToTeam, getWeatherSessionsHistory
  - workshops: createSwotItem, duplicateSwotItem, updateAction, createMotivatorSession, updateCardInfluence, addGifMoodItem, shareGifMoodSessionToTeam, getLatestEventTimestamp, cleanupOldEvents
- src/lib/__tests__: date-utils, weather-utils, okr-utils, gravatar, workshops, share-utils
- Update vitest coverage to include src/lib/**
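
The coverage change might look like this in a `vitest.config.ts` (a sketch — only the `coverage.include` paths are taken from the commits; the plugin setup reflects the vite-tsconfig-paths dependency added in the chore commit, and everything else is an assumption):

```typescript
// Hypothetical vitest.config.ts reflecting the coverage change.
import { defineConfig } from "vitest/config";
import tsconfigPaths from "vite-tsconfig-paths";

export default defineConfig({
  // Resolves path aliases from tsconfig.json in tests.
  plugins: [tsconfigPaths()],
  test: {
    coverage: {
      // Coverage now includes src/lib/** alongside the services.
      include: ["src/services/**", "src/lib/**"],
    },
  },
});
```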

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 08:37:32 +01:00
a8c05aa841 perf(quick-wins): batch collaborator resolution, debounce SSE refresh, loading states
- Eliminate N+1 on resolveCollaborator: add batchResolveCollaborators() in
  auth.ts (2 DB queries max regardless of session count), update all 4
  workshop services to use post-batch mapping
- Debounce router.refresh() in useLive.ts (300ms) to group simultaneous
  SSE events and avoid cascade re-renders
- Call cleanupOldEvents fire-and-forget in createEvent to purge old SSE
  events inline without blocking the response
- Add loading.tsx skeletons on /sessions and /users matching actual page
  layout (PageHeader + content structure)
- Lazy-load ShareModal via next/dynamic in BaseSessionLiveWrapper to reduce
  initial JS bundle
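
The batching idea can be sketched with an injected fetcher standing in for the real Prisma queries (names and shapes are assumptions; per the commit, the actual code needs at most 2 queries):

```typescript
// Hypothetical sketch of batchResolveCollaborators: gather the unique
// collaborator ids across all sessions first, resolve them in one
// batched query, then map results back — instead of one query per
// session (the N+1 pattern).
type Collaborator = { id: string; name: string };

async function batchResolveCollaborators(
  ids: string[],
  // Stand-in for the real Prisma lookup (an assumption).
  fetchByIds: (ids: string[]) => Promise<Collaborator[]>,
): Promise<Map<string, Collaborator>> {
  const unique = [...new Set(ids)];
  if (unique.length === 0) return new Map();
  const rows = await fetchByIds(unique); // one round-trip, not one per session
  return new Map(rows.map((c) => [c.id, c]));
}
```

The workshop services would then do a cheap in-memory lookup per session (`map.get(session.collaboratorId)`) after the batch resolves.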

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 08:07:22 +01:00
2d266f89f9 feat(perf): implement performance optimizations for session handling
- Introduced a new configuration file `config.yaml` for specifying project context and artifact rules.
- Added `.openspec.yaml` files for tracking changes related to performance improvements.
- Created design documents outlining the context, goals, decisions, and migration plans for optimizing session performance.
- Proposed changes include batching database queries, debouncing event refreshes, purging old events, and implementing loading states for better user experience.
- Added tasks and specifications to ensure proper implementation and validation of the new features.

These enhancements aim to improve the scalability and responsiveness of the application during collaborative sessions.
2026-03-10 08:06:47 +01:00
6baa9bfada feat(opsx): add new commands for workflow management
- Introduced `OPSX: Apply` to implement tasks from OpenSpec changes.
- Added `OPSX: Archive` for archiving completed changes in the experimental workflow.
- Created `OPSX: Explore` for a thinking partner mode to investigate ideas and clarify requirements.
- Implemented `OPSX: Propose` to generate change proposals and associated artifacts in one step.
- Developed skills for `openspec-apply-change` and `openspec-archive-change` to facilitate task implementation and archiving processes.

These additions enhance the workflow capabilities and provide structured approaches for managing changes within the OpenSpec framework.
2026-03-09 21:31:05 +01:00
92 changed files with 9573 additions and 3200 deletions

View File

@@ -0,0 +1,152 @@
---
name: "OPSX: Apply"
description: Implement tasks from an OpenSpec change (Experimental)
category: Workflow
tags: [workflow, artifacts, experimental]
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

View File

@@ -0,0 +1,157 @@
---
name: "OPSX: Archive"
description: Archive a completed change in the experimental workflow
category: Workflow
tags: [workflow, archive, experimental]
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

View File

@@ -0,0 +1,173 @@
---
name: "OPSX: Explore"
description: "Enter explore mode - think through ideas, investigate problems, clarify requirements"
category: Workflow
tags: [workflow, explore, experimental, thinking]
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

View File

@@ -0,0 +1,106 @@
---
name: "OPSX: Propose"
description: Propose a new change - create it and generate all artifacts in one step
category: Workflow
tags: [workflow, artifacts, experimental]
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run `/opsx:apply`
---
**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next

View File

@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

View File

@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

View File

@@ -0,0 +1,288 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│      Use ASCII diagrams liberally       │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐         ┌────────┐         │
│   │ State  │────────▶│ State  │         │
│   │   A    │         │   B    │         │
│   └────────┘         └────────┘         │
│                                         │
│   System diagrams, state machines,      │
│   data flows, architecture sketches,    │
│   dependency graphs, comparison tables  │
│                                         │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
 Awareness    Coordination       Sync
     │              │              │
     ▼              ▼              ▼
 ┌────────┐     ┌────────┐     ┌────────┐
 │Presence│     │Cursors │     │  CRDT  │
 │   "3   │     │ Multi  │     │Conflict│
 │ online"│     │ select │     │  free  │
 └────────┘     └────────┘     └────────┘
     │              │              │
  trivial       moderate        complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│              CURRENT AUTH FLOW              │
└─────────────────────────────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
   ┌─────────┐   ┌─────────┐   ┌─────────┐
   │ Google  │   │ GitHub  │   │  Email  │
   │  OAuth  │   │  OAuth  │   │  Magic  │
   └────┬────┘   └────┬────┘   └────┬────┘
        │             │             │
        └─────────────┼─────────────┘
                      │
                ┌───────────┐
                │  Session  │
                └─────┬─────┘
                      │
                ┌───────────┐
                │   Perms   │
                └───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│              CLI TOOL DATA STORAGE              │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
              SQLite        Postgres
Deployment    embedded ✓    needs server ✗
Offline       yes ✓         no ✗
Single file   yes ✓         no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

View File

@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx:apply
---
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
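The name derivation can be illustrated with a shell sketch. It is illustrative only — the agent normally does this transformation itself, and may also shorten words (e.g. `authentication` → `auth`), which this sketch does not:

```shell
# Sketch: lowercase a description and collapse non-alphanumerics to hyphens.
kebab() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-//; s/-$//'
}
```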
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
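The step-4b completion check can be sketched with `jq` — an assumption, since the skill itself does not require `jq`; the fixture JSON below just mirrors the `applyRequires` and `artifacts` fields described above:

```shell
# Sketch: count applyRequires artifacts whose status is not yet "done".
# Assumes every required id actually appears in the artifacts array.
status_json='{"applyRequires":["tasks"],"artifacts":[
  {"id":"proposal","status":"done"},{"id":"tasks","status":"done"}]}'
remaining=$(echo "$status_json" | jq '[.applyRequires[] as $id
  | select((.artifacts[] | select(.id == $id) | .status) != "done")] | length')
[ "$remaining" -eq 0 ] && echo "apply-ready"
```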
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by the schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to the next

View File

@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
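The state handling can be sketched as a dispatch on the `state` field — a sketch assuming `jq` is available; `$instructions_json` stands in for the output of `openspec instructions apply --change "<name>" --json`:

```shell
# Sketch: branch on the "state" field of the apply-instructions JSON.
instructions_json='{"state":"blocked"}'   # example fixture
state=$(echo "$instructions_json" | jq -r '.state')
case "$state" in
  blocked)  msg="Missing artifacts - suggest openspec-continue-change" ;;
  all_done) msg="All tasks complete - suggest archiving" ;;
  *)        msg="Proceed to implementation" ;;
esac
echo "$msg"
```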
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
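The checkbox update above can be sketched as a one-line edit. This sketch assumes GNU `sed`, and that the task text uniquely identifies the line and contains no `/` or regex metacharacters:

```shell
# Sketch: flip one "- [ ]" checkbox to "- [x]" by matching the task text.
mark_task_done() {
  local file="$1" task="$2"
  sed -i "s/- \[ \] $task/- [x] $task/" "$file"
}
```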
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly




View File

@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx:apply
---
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` (or ask me to implement) to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to the next

---
name: /opsx-apply
id: opsx-apply
category: Workflow
description: Implement tasks from an OpenSpec change (Experimental)
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
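The `- [ ]` → `- [x]` flip can be done portably like this (scratch file and task text are illustrative; note the task text is interpolated into a sed pattern, so it must not contain sed metacharacters):

```shell
tasks_file=$(mktemp)
printf -- '- [ ] Add login route\n- [ ] Write tests\n' > "$tasks_file"

task='Add login route'
# Rewrite only the matching task's checkbox; avoid `sed -i` for portability
sed "s/^- \[ \] $task\$/- [x] $task/" "$tasks_file" > "$tasks_file.tmp" \
  && mv "$tasks_file.tmp" "$tasks_file"

cat "$tasks_file"
```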
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

---
name: /opsx-archive
id: opsx-archive
category: Workflow
description: Archive a completed change in the experimental workflow
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
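The counting above can be sketched with grep (the scratch file contents are illustrative):

```shell
tasks_file=$(mktemp)
printf -- '- [x] Set up schema\n- [ ] Add endpoint\n- [ ] Write tests\n' > "$tasks_file"

# grep -c prints the number of matching lines; -- guards the leading dash
complete=$(grep -c -- '- \[x\]' "$tasks_file")
incomplete=$(grep -c -- '- \[ \]' "$tasks_file")

echo "$complete complete, $incomplete incomplete"
```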
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
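Put together, the collision check and move might look like this (sandboxed under a temp directory; the change name is illustrative):

```shell
root=$(mktemp -d)   # stand-in for the repository root
name="add-auth"
mkdir -p "$root/openspec/changes/$name" "$root/openspec/changes/archive"

target="$root/openspec/changes/archive/$(date +%Y-%m-%d)-$name"

if [ -e "$target" ]; then
  # Fail rather than overwrite an existing archive
  echo "Archive target already exists: $target" >&2
else
  mv "$root/openspec/changes/$name" "$target"
fi
```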
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

---
name: /opsx-explore
id: opsx-explore
category: Workflow
description: "Enter explore mode - think through ideas, investigate problems, clarify requirements"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

---
name: /opsx-propose
id: opsx-propose
category: Workflow
description: Propose a new change - create it and generate all artifacts in one step
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run `/opsx:apply`
---
**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to the next

---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

View File

@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx:apply
---
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next

View File

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-03-09

View File

@@ -0,0 +1,57 @@
## Context
The application loads session collaborators via `resolveCollaborator` called sequentially in a loop (N+1). The `useLive` hook triggers `router.refresh()` on every SSE event it receives, with no batching, causing cascading re-renders when several events arrive at once. The `cleanupOldEvents` function exists in `session-share-events.ts` but is never called, letting events accumulate indefinitely. The absence of `loading.tsx` files on the main routes prevents App Router streaming from kicking in. Modals (`ShareModal`) are included in the initial bundle even though they are rarely used.
## Goals / Non-Goals
**Goals:**
- Eliminate the N+1 on `resolveCollaborator` with a batched fetch
- Batch consecutive SSE refreshes with a debounce
- Purge SSE events as they flow in (after each `createEvent`)
- Enable navigation streaming with `loading.tsx` on slow-loading routes
- Reduce the initial JS bundle by lazy-loading modals
**Non-Goals:**
- Refactoring the SSE architecture (Phase 2 topic)
- Changing the cache/revalidation strategy (Phase 2 topic)
- Optimizing deep Prisma queries (Phase 3 topic)
- Modifying existing functional behavior
## Decisions
### 1. Batch resolveCollaborator via collect + single query
**Decision**: In `session-queries.ts`, collect all collaborator `userId`s across a list of sessions, then run a single `prisma.user.findMany({ where: { id: { in: [...ids] } } })` and map the results in memory.
**Alternatives**: Keep the N+1 but add a per-request in-memory cache → rejected because it does not solve the problem structurally.
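The collect-and-map halves of this decision are pure functions and easy to sketch. A minimal sketch with illustrative helper names (only the commented-out `prisma.user.findMany` call is the real API; everything else is hypothetical):

```typescript
// Collect and deduplicate collaborator userIds across a list of sessions.
function collectUserIds(sessions: { collaboratorIds: string[] }[]): string[] {
  return [...new Set(sessions.flatMap((s) => s.collaboratorIds))];
}

// Index the rows returned by the single batched query for O(1) lookup.
function mapById<T extends { id: string }>(rows: T[]): Map<string, T> {
  return new Map(rows.map((r) => [r.id, r]));
}

// Usage sketch, assuming a `prisma` client in scope:
//   const ids = collectUserIds(sessions);
//   const users = await prisma.user.findMany({ where: { id: { in: ids } } });
//   const byId = mapById(users);
//   const resolved = sessions.map((s) => s.collaboratorIds.map((id) => byId.get(id)));
```

One `IN` query replaces N sequential lookups, and the in-memory `Map` keeps the per-session mapping O(1).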
### 2. Debounce via useRef + native setTimeout
**Decision**: In `useLive.ts`, use `useRef` to store a timer and `setTimeout` / `clearTimeout` to debounce at 300ms. No external dependency.
**Alternatives**: `lodash.debounce` library → rejected to avoid adding a dependency for 5 lines of code.
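The debounce logic itself fits in a few lines. Below is a sketch with the timer functions injected purely so the behavior can be shown standalone; in `useLive.ts` the hook would call native `setTimeout`/`clearTimeout` directly and keep the timer id in a `useRef`:

```typescript
type Timer = {
  set: (fn: () => void, ms: number) => number;
  clear: (id: number) => void;
};

// Returns a wrapper that delays `fn` by `ms` and collapses rapid
// repeated calls into a single trailing invocation.
function debounce(fn: () => void, ms: number, timer: Timer): () => void {
  let id: number | undefined;
  return () => {
    if (id !== undefined) timer.clear(id); // cancel the pending call
    id = timer.set(fn, ms);                // reschedule from now
  };
}
```

Calling the wrapper on every SSE event then yields at most one `router.refresh()` per quiet 300ms window.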
### 3. cleanupOldEvents inline in createEvent
**Decision**: Call `cleanupOldEvents` at the end of each `createEvent` (fire-and-forget, no blocking await). The purge keeps the 50 most recent events per session (current threshold).
**Alternatives**: External cron → too complex for a quick win; interval on the SSE API side → unwanted coupling.
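The fire-and-forget call can be wrapped in a tiny helper (illustrative name, not from the codebase) so the purge never blocks `createEvent` or throws into it:

```typescript
// Run an async task without awaiting it; failures are logged, never thrown.
function fireAndForget(task: () => Promise<void>, log: (e: unknown) => void): void {
  task().catch(log);
}

// In createEvent (sketch, assuming cleanupOldEvents exists in scope):
//   fireAndForget(() => cleanupOldEvents(sessionId), console.error);
```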
### 4. loading.tsx with a minimalist skeleton
**Decision**: Create one `loading.tsx` per main route (`/sessions`, `/weather`, `/users`) with a generic skeleton (animated gray bars). The component is static and extremely lightweight.
### 5. next/dynamic with ssr: false on modals
**Decision**: Wrap `ShareModal` (and `CollaborationToolbar` if relevant) with `next/dynamic({ ssr: false })`. The parent component handles the loading state.
## Risks / Trade-offs
- **300ms debounce** → slight perceived latency on collaborative updates. Mitigation: value configurable via a constant.
- **Fire-and-forget cleanupOldEvents** → if the purge fails, the errors are silent. Mitigation: log the error without blocking.
- **Batched resolveCollaborator** → if the session list is very large (>500), the `IN` query can be slow. Mitigation: acceptable at current volumes; paginate if needed (Phase 3).
- **next/dynamic ssr: false** → modals are not server-rendered. Acceptable, since they are interactive-only.
## Migration Plan
Each optimization is independent and can be deployed separately. No data migration. Rollback: revert the relevant commit. Recommended order: (1) batch resolveCollaborator, (2) cleanupOldEvents, (3) useLive debounce, (4) loading.tsx, (5) next/dynamic.

View File

@@ -0,0 +1,29 @@
## Why
The main routes suffer from several easily fixable performance problems: N+1 on `resolveCollaborator`, cascading re-renders in `useLive`, unbounded accumulation of SSE events, and no visual feedback during navigation. These quick wins can be addressed independently, without architectural refactoring.
## What Changes
- **Batch resolveCollaborator**: replace the sequential calls with a single batched query in `session-queries.ts` (N+1 elimination)
- **Debounce router.refresh()**: add a ~300ms debounce in `useLive.ts` to batch simultaneous SSE events
- **Call cleanupOldEvents**: invoke `cleanupOldEvents` from `createEvent` to purge old events as they are created
- **Add `loading.tsx`**: add `loading.tsx` files on the `/sessions`, `/weather`, `/users` routes to enable App Router streaming
- **Lazy-load modals**: use `next/dynamic` on `ShareModal` and other heavy modals to reduce the initial JS bundle
## Capabilities
### New Capabilities
- `perf-loading-states`: Visual loading feedback on the main routes via `loading.tsx`
### Modified Capabilities
- None: the changes are purely implementation/performance
## Impact
- `src/services/session-queries.ts`: batch resolveCollaborator refactoring
- `src/hooks/useLive.ts`: debounce added on router.refresh
- `src/services/session-share-events.ts`: cleanupOldEvents call in createEvent
- `src/app/sessions/loading.tsx`, `src/app/weather/loading.tsx`, `src/app/users/loading.tsx`: new files
- Components importing `ShareModal`: switched to dynamic import

View File

@@ -0,0 +1,24 @@
## ADDED Requirements
### Requirement: Loading skeleton on main routes
The application SHALL display a skeleton loading state during navigation to `/sessions`, `/weather`, and `/users` routes, activated by Next.js App Router streaming via `loading.tsx` files.
#### Scenario: Navigation to sessions page shows skeleton
- **WHEN** a user navigates to `/sessions`
- **THEN** a loading skeleton SHALL be displayed immediately while the page data loads
#### Scenario: Navigation to weather page shows skeleton
- **WHEN** a user navigates to `/weather`
- **THEN** a loading skeleton SHALL be displayed immediately while the page data loads
#### Scenario: Navigation to users page shows skeleton
- **WHEN** a user navigates to `/users`
- **THEN** a loading skeleton SHALL be displayed immediately while the page data loads
### Requirement: Modal lazy loading
Heavy modal components (ShareModal) SHALL be loaded lazily via `next/dynamic` to reduce the initial JS bundle size.
#### Scenario: ShareModal not in initial bundle
- **WHEN** a page loads that contains a ShareModal trigger
- **THEN** the ShareModal component code SHALL NOT be included in the initial JS bundle
- **THEN** the ShareModal code SHALL be fetched only when first needed

View File

@@ -0,0 +1,33 @@
## 1. Batch resolveCollaborator (N+1 fix)
- [x] 1.1 Read `src/services/session-queries.ts` and identify every occurrence of `resolveCollaborator` called in a loop
- [x] 1.2 Create a `batchResolveCollaborators(userIds: string[])` function that runs a single `prisma.user.findMany({ where: { id: { in: userIds } } })`
- [x] 1.3 Replace the N+1 loops with collect IDs → batch query → in-memory mapping
- [x] 1.4 Verify that the sessions/weather/etc. pages load correctly
## 2. Debounce router.refresh() in useLive
- [x] 2.1 Read `src/hooks/useLive.ts` and locate the `router.refresh()` call
- [x] 2.2 Add a `useRef<ReturnType<typeof setTimeout>>` for the debounce timer
- [x] 2.3 Wrap the `router.refresh()` call with `clearTimeout` + `setTimeout` at 300ms
- [x] 2.4 Add a `clearTimeout` in the effect cleanup to avoid memory leaks
## 3. Automatic SSE event purge
- [x] 3.1 Read `src/services/session-share-events.ts` and locate `createEvent` and `cleanupOldEvents`
- [x] 3.2 Add a fire-and-forget call to `cleanupOldEvents` at the end of `createEvent` (after the insert)
- [x] 3.3 Wrap the call in a try/catch to log the error without blocking
## 4. Add loading.tsx files on the main routes
- [x] 4.1 Create `src/app/sessions/loading.tsx` with a sessions list skeleton
- [x] 4.2 Create `src/app/weather/loading.tsx` with a weather table skeleton
- [x] 4.3 Create `src/app/users/loading.tsx` with a users list skeleton
- [ ] 4.4 Verify the skeleton shows during navigation (throttle the network in DevTools)
## 5. Lazy-load modals with next/dynamic
- [x] 5.1 Identify every component that imports `ShareModal` directly
- [x] 5.2 Replace each static import with `next/dynamic(() => import(...), { ssr: false })`
- [ ] 5.3 Verify the modals open correctly after lazy-loading
- [ ] 5.4 Verify in the DevTools Network tab that the modal chunk is not in the initial bundle

View File

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-03-09

View File

@@ -0,0 +1,59 @@
## Context
`src/services/weather.ts` uses `findMany` with neither `take` nor `orderBy`, potentially loading hundreds of entries to compute trends that only use the last 30-90 data points. The session services use `include: { items: true, shares: true, events: true }` to build lists, while the card display only needs the title, the date, the item count, and the share status. `User.name` is filtered in admin searches but has no SQLite index. The most visited pages (`/sessions`, `/users`) recompute their data on every request.
## Goals / Non-Goals
**Goals:**
- Bound the weather history load to a configurable constant
- Reduce the size of the objects returned by list queries (select vs include)
- Add a SQLite index on `User.name`
- Introduce a Next.js cache on list queries with targeted invalidation
**Non-Goals:**
- Changing the Prisma model structure
- Modifying page rendering (the selections cover all displayed fields)
- Introducing an external cache (Redis, Memcached)
- Optimizing the session detail pages (out of scope)
## Decisions
### 1. WEATHER_HISTORY_LIMIT constant in lib/types.ts
**Decision**: Define `WEATHER_HISTORY_LIMIT = 90` in `src/lib/types.ts` (consistent with the other config constants). The query becomes: `findMany({ orderBy: { createdAt: 'desc' }, take: WEATHER_HISTORY_LIMIT })`.
**Alternatives**: URL parameter or env var → over-engineering for a threshold that rarely changes.
### 2. Minimal select for lists, with a dedicated ListItem interface
**Decision**: For each list service, define an `XxxListItem` type in `types.ts` containing only the card fields (id, title, createdAt, _count.items, shares.length). Use a Prisma `select` that matches this type exactly.
**Alternatives**: Keep `include` and filter in TypeScript → identical DB load, zero gain.
### 3. @@index([name]) on User
**Decision**: Add `@@index([name])` to the `User` model in `schema.prisma`. Create a migration named `add_user_name_index`. Impact: SQLite creates a B-tree index, so `LIKE 'x%'` searches benefit from it (prefix match).
**Note**: `LIKE '%x%'` (contains) does not use the index in SQLite. Acceptable: the main use case is prefix search.
### 4. unstable_cache with tags on list queries
**Decision**: Wrap the list service functions (e.g. `getSessionsForUser`, `getUserStats`) with `unstable_cache(fn, [cacheKey], { tags: ['sessions-list:userId'] })`. The Server Actions call the matching `revalidateTag` after each mutation.
Cache duration: `revalidate: 60` seconds as a fallback, with explicit invalidation taking priority.
**Alternatives**: `React.cache` → per-request only, no persistence across navigations; `fetch` caching → does not apply to Prisma queries.
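The tag helpers in `src/lib/cache-tags.ts` (helper names taken from the commit summary; the exact tag strings other than `sessions-list:<userId>` are assumptions) are plain template functions:

```typescript
// src/lib/cache-tags.ts (sketch): one helper per cache scope, so that
// services (tagging) and Server Actions (invalidating) always agree.
// These would be exported in the real module.
const sessionTag = (id: string) => `session:${id}`;
const sessionsListTag = (userId: string) => `sessions-list:${userId}`;
const userStatsTag = (userId: string) => `user-stats:${userId}`;
```

A service would then wrap its query with `unstable_cache(query, [key], { tags: [sessionsListTag(userId)], revalidate: 60 })`, and the mutating Server Action calls `revalidateTag(sessionsListTag(userId))`.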
## Risks / Trade-offs
- **Strict select** → if a component reads a field that was not selected, the build fails with a TypeScript error (a good thing: caught early).
- **unstable_cache** → Next.js API marked unstable. Mitigation: isolate it in the services; the wrapper is easy to replace.
- **User.name index** → slight increase in SQLite file size and write time. Negligible at current volumes.
- **WEATHER_HISTORY_LIMIT** → trend calculations must work with N entries or fewer. Verify the algorithm is robust with a partial history.
## Migration Plan
1. Prisma migration `add_user_name_index` (non-destructive, can be applied at any time)
2. Add `WEATHER_HISTORY_LIMIT` + update the weather query (independent)
3. Refactor select per service (check TypeScript at build time after each service)
4. Add the cache layer last (depends on the tags defined in Phase 2 if applicable, otherwise define them locally)

View File

@@ -0,0 +1,29 @@
## Why
The Prisma queries behind the most visited pages load too much data: `weather.ts` fetches the entire history without a bound, the sessions page queries include deep relations the list display never uses, and no cache is applied to queries repeated on every navigation. These optimizations reduce payload size and DB response time without changing behavior.
## What Changes
- **Bounded weather history**: add `take` + `orderBy createdAt DESC` in `src/services/weather.ts`, configurable via a constant (default: 90 entries)
- **Select fields on the sessions list**: replace the deep `include`s with `select`s containing only the fields shown in the list cards
- **`User.name` index**: add `@@index([name])` in `prisma/schema.prisma` + migration
- **Cache on frequent queries**: wrap the sessions list and user stats queries with `unstable_cache` + tags, invalidated on mutations
## Capabilities
### New Capabilities
- `query-cache-layer`: Next.js cache on frequent list queries with tag-based invalidation
### Modified Capabilities
- No behavioral spec changes: internal, transparent optimizations
## Impact
- `src/services/weather.ts`: limit + orderBy added
- `src/services/` (all list services): `include``select`
- `prisma/schema.prisma`: `@@index([name])` added on `User`
- `prisma/migrations/`: new migration for the index
- `src/services/`: `unstable_cache` wrapping on frequent queries
- `src/actions/`: matching `revalidateTag` calls added (Phase 2 complement)

View File

@@ -0,0 +1,30 @@
## ADDED Requirements
### Requirement: Cached session list queries
Frequently-called session list queries SHALL be cached using Next.js `unstable_cache` with user-scoped tags, avoiding redundant DB reads on repeated navigations.
#### Scenario: Session list served from cache on repeated navigation
- **WHEN** a user navigates to the sessions page multiple times within the cache window
- **THEN** the session list data SHALL be served from cache on subsequent requests
- **THEN** no additional Prisma query SHALL be executed for cached data
#### Scenario: Cache invalidated after mutation
- **WHEN** a Server Action creates, updates, or deletes a session
- **THEN** the corresponding cache tag SHALL be invalidated via `revalidateTag`
- **THEN** the next request SHALL fetch fresh data from the DB
### Requirement: Weather history bounded query
The weather service SHALL limit historical data loading to a configurable maximum number of entries (default: 90), ordered by most recent first.
#### Scenario: Weather history respects limit
- **WHEN** the weather service fetches historical entries
- **THEN** at most `WEATHER_HISTORY_LIMIT` entries SHALL be returned
- **THEN** entries SHALL be ordered by `createdAt` DESC (most recent first)
### Requirement: Minimal field selection on list queries
Service functions returning lists for display purposes SHALL use Prisma `select` with only the fields required for the list UI, not full `include` of related models.
#### Scenario: Sessions list query returns only display fields
- **WHEN** the sessions list service function is called
- **THEN** the returned objects SHALL contain only fields needed for card display (id, title, createdAt, item count, share status)
- **THEN** full related model objects (items array, events array) SHALL NOT be included

View File

@@ -0,0 +1,30 @@
## 1. User.name index (Prisma migration)
- [x] 1.1 Read `prisma/schema.prisma` and locate the `User` model
- [x] 1.2 Add `@@index([name])` to the `User` model
- [x] 1.3 Run `pnpm prisma migrate dev --name add_user_name_index`
- [x] 1.4 Verify the migration applies without errors and that `prisma studio` shows the index
## 2. Weather: bound the history load
- [x] 2.1 Add the `WEATHER_HISTORY_LIMIT = 90` constant in `src/lib/types.ts`
- [x] 2.2 Read `src/services/weather.ts` and locate the `findMany` query for history entries
- [x] 2.3 Add `take: WEATHER_HISTORY_LIMIT` and `orderBy: { date: 'desc' }` to the query
- [x] 2.4 Verify the trend calculations work with a partial history
## 3. Select fields on list queries
- [x] 3.1 Read the list services: `src/services/sessions.ts`, `moving-motivators.ts`, `year-review.ts`, `weekly-checkin.ts`, `weather.ts`, `gif-mood.ts`
- [x] 3.2 Identify the `include`s used in the list functions (not session detail)
- [x] 3.3 Replace the deep `include`s with `select`s containing only the necessary fields in each service
- [x] 3.4 Update `shares: { include: ... }``shares: { select: { id, role, user } }` in the 6 services
- [x] 3.5 Check for TypeScript errors and adapt shared queries
- [x] 3.6 Verify `pnpm build` passes with no TypeScript errors
## 4. Cache layer on frequent queries
- [x] 4.1 Create `src/lib/cache-tags.ts` with the tag helpers: `sessionTag(id)`, `sessionsListTag(userId)`, `userStatsTag(userId)`
- [x] 4.2 Wrap the sessions list function in each service with `unstable_cache(fn, [key], { tags: [sessionsListTag(userId)], revalidate: 60 })`
- [x] 4.3 `getUserStats` does not exist: task skipped (no matching function in the codebase)
- [x] 4.4 Verify the session create/delete Server Actions call `revalidateTag(sessionsListTag(userId), 'default')`
- [x] 4.5 Build passes and 255 tests pass: invalidation verified by build

View File

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-03-09

View File

@@ -0,0 +1,65 @@
## Context
Every `/api/*/subscribe` route creates a 1s `setInterval` that polls the DB for events. If 10 users have the same workshop open, that is 10 queries per second on the same table. The weather pattern already uses an in-process `Map` of subscribers to broadcast events without re-polling, but that pattern is not generalized. Server Actions call `revalidatePath('/sessions')`, which invalidates every sub-segment, forcing Next.js to re-render entire pages even for a minor mutation.
## Goals / Non-Goals
**Goals:**
- Reduce the number of polling DB queries relative to the number of connected clients
- Provide a reusable broadcast module for all workshops
- Reduce the Next.js cache invalidation surface with granular tags
- Limit the data volume loaded on the sessions page with pagination
**Non-Goals:**
- Moving to WebSockets or an external real-time server (Redis, Pusher)
- Modifying the Prisma data model for events
- Implementing multi-process / multi-instance SSE (standalone single-process deployment)
## Decisions
### 1. broadcast.ts module: Map<sessionId, Set<subscriber>>
**Decision**: Create `src/lib/broadcast.ts` exposing:
- `subscribe(sessionId, callback)` → returns `unsubscribe()`
- `broadcast(sessionId, event)` → notifies all subscribers
The SSE routes subscribe instead of polling. Server Actions call `broadcast()` after each mutation.
**Alternatives**: Node.js EventEmitter → rejected as less strongly typed; BroadcastChannel → rejected as limited to same-origin workers, not suited to Next.js route handlers.
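The subscriber registry this decision describes fits in a few lines; a minimal sketch (the real `createBroadcaster` factory adds the shared-polling lifecycle on top):

```typescript
type Listener<E> = (event: E) => void;

// One Set of subscribers per sessionId; empty entries are pruned
// so the Map only holds active sessions.
const channels = new Map<string, Set<Listener<unknown>>>();

function subscribe<E>(sessionId: string, cb: Listener<E>): () => void {
  const set = channels.get(sessionId) ?? new Set<Listener<unknown>>();
  channels.set(sessionId, set);
  set.add(cb as Listener<unknown>);
  return () => {
    set.delete(cb as Listener<unknown>);
    if (set.size === 0) channels.delete(sessionId); // last subscriber left
  };
}

function broadcast<E>(sessionId: string, event: E): void {
  channels.get(sessionId)?.forEach((cb) => cb(event));
}
```

An SSE route handler would `subscribe` on connect and call the returned unsubscriber when the stream closes; a Server Action calls `broadcast` right after its mutation.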
### 2. Fallback polling kept but shared
**Decision**: Keep a single poll per active session (the first subscriber starts the interval, the last one stops it). Native broadcast takes priority (called from Server Actions); polling is the fallback for clients that join mid-stream.
### 3. revalidateTag with a naming convention
**Decision**: Tag convention:
- `session:<id>`: a specific session
- `sessions-list:<userId>`: a given user's session list
- `workshop:<type>`: the entire workshop
Each Prisma query in the services is wrapped with `unstable_cache` or uses `cacheTag` (Next.js 15+).
**Alternatives**: Keep `revalidatePath` with more precise paths → less effective than tags.
### 4. Cursor-based pagination on the sessions page
**Decision**: Cursor pagination (keyed on `createdAt` DESC) rather than offset, for list stability under frequent inserts. Initial page size: 20 sessions per workshop type. UI: a "Charger plus" ("Load more") button, no numbered pagination.
**Alternatives**: Virtual scroll → more complex, adds a client-side JS dependency; offset pagination → unstable when new sessions are inserted between two pages.
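The cursor mechanics can be illustrated on a plain array already sorted by `createdAt` DESC (a sketch; the real query would translate the cursor into Prisma's `cursor`/`skip: 1`/`take` options):

```typescript
type Session = { id: string; createdAt: number };

// Returns the page starting strictly after `cursor` (the id of the last
// item of the previous page), plus the cursor for the next page.
function pageAfter(
  all: Session[], // assumed sorted by createdAt DESC
  cursor: string | null,
  pageSize: number,
): { items: Session[]; nextCursor: string | null } {
  const start = cursor === null ? 0 : all.findIndex((s) => s.id === cursor) + 1;
  const items = all.slice(start, start + pageSize);
  const nextCursor = start + pageSize < all.length ? items[items.length - 1].id : null;
  return { items, nextCursor };
}
```

Because pages are keyed to the last seen id rather than an offset, rows inserted at the top do not shift or duplicate items on the next "Charger plus" click.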
## Risks / Trade-offs
- **In-process broadcast** → only works in a single-process deployment. Acceptable for the current use case (standalone Next.js). Document the limitation.
- **unstable_cache** → API marked unstable in Next.js and may change. Mitigation: isolate it in the services, not in the components.
- **Pagination** → changes the sessions page UX (currently everything is visible). Mitigation: keep the total visible with an "X of Y" indicator.
## Migration Plan
1. Create `src/lib/broadcast.ts` without touching the existing routes
2. Migrate the SSE routes one by one (start with `weather`, which already has the pattern)
3. Update the Server Actions to call `broadcast()` + `revalidateTag()`
4. Add `cacheTag` to the service queries
5. Add pagination to the sessions page last (visible UI change)
Rollback: each step is independent; revert per feature.


@@ -0,0 +1,29 @@
## Why
The current real-time layer (SSE + 1s DB polling) multiplies connections and queries as soon as several users collaborate. Each tab open on a session starts its own polling loop, and Server Actions invalidate entire route segments with `revalidatePath`. These scalability problems become visible from 5-10 concurrent users.
## What Changes
- **Shared SSE polling**: a single active interval per session on the server, shared by all clients connected to that session
- **Unified broadcast**: generalize the in-process broadcast pattern (already present in `weather`) to all workshops via a `src/lib/broadcast.ts` module
- **Granular `revalidateTag`**: replace `revalidatePath` in all Server Actions with targeted tags (`session:<id>`, `sessions-list`, etc.)
- **Sessions page pagination**: limit the initial load to N sessions per type, with pagination or progressive loading
## Capabilities
### New Capabilities
- `sse-shared-polling`: SSE polling pooled per session (a single interval per active session)
- `unified-broadcast`: Reusable in-process broadcast module shared by all workshops
### Modified Capabilities
- `sessions-list`: Add pagination/a load limit on sessions
## Impact
- `src/app/api/*/subscribe/route.ts`: refactor polling onto the shared broadcast module
- `src/lib/broadcast.ts`: new module (Map of active sessions + subscribers)
- `src/actions/*.ts`: replace `revalidatePath` with `revalidateTag` + `unstable_cache`
- `src/app/sessions/page.tsx`: add pagination
- `src/services/`: add cache tags on frequent Prisma queries


@@ -0,0 +1,15 @@
## ADDED Requirements
### Requirement: Paginated sessions list
The sessions page SHALL load sessions in pages rather than fetching all sessions at once, with a default page size of 20 per workshop type.
#### Scenario: Initial load shows first page
- **WHEN** a user visits the sessions page
- **THEN** at most 20 sessions per workshop type SHALL be loaded
- **THEN** a total count SHALL be displayed (e.g., "Showing 20 of 47")
#### Scenario: Load more sessions on demand
- **WHEN** there are more sessions beyond the current page
- **THEN** a "Charger plus" button SHALL be displayed
- **WHEN** the user clicks "Charger plus"
- **THEN** the next page of sessions SHALL be appended to the list


@@ -0,0 +1,17 @@
## ADDED Requirements
### Requirement: Single polling interval per active session
The SSE infrastructure SHALL maintain at most one active DB polling interval per session, regardless of the number of connected clients.
#### Scenario: First client connection starts polling
- **WHEN** the first client connects to a session's SSE endpoint
- **THEN** a single polling interval SHALL be started for that session
#### Scenario: Additional clients share existing polling
- **WHEN** a second or subsequent client connects to the same session's SSE endpoint
- **THEN** no additional polling interval SHALL be created
- **THEN** the new client SHALL receive events from the shared poll
#### Scenario: Last client disconnect stops polling
- **WHEN** all clients disconnect from a session's SSE endpoint
- **THEN** the polling interval for that session SHALL be stopped and cleaned up


@@ -0,0 +1,22 @@
## ADDED Requirements
### Requirement: Centralized broadcast module
The system SHALL provide a centralized `src/lib/broadcast.ts` module used by all workshop SSE routes to push events to connected clients.
#### Scenario: Server Action triggers broadcast
- **WHEN** a Server Action mutates session data and calls `broadcast(sessionId, event)`
- **THEN** all clients subscribed to that session SHALL receive the event immediately without waiting for the next poll cycle
#### Scenario: Broadcast module subscribe/unsubscribe
- **WHEN** an SSE route calls `subscribe(sessionId, callback)`
- **THEN** the callback SHALL be invoked on every subsequent `broadcast(sessionId, ...)` call
- **WHEN** the returned `unsubscribe()` function is called
- **THEN** the callback SHALL no longer receive events
### Requirement: Granular cache invalidation via revalidateTag
Server Actions SHALL use `revalidateTag` with session-scoped tags instead of `revalidatePath` to limit cache invalidation scope.
#### Scenario: Session mutation invalidates only that session's cache
- **WHEN** a Server Action mutates a specific session (e.g., adds an item)
- **THEN** only the cache tagged `session:<id>` SHALL be invalidated
- **THEN** other sessions' cached data SHALL NOT be invalidated
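The invalidation scope described above can be illustrated with a toy tag-indexed cache. This is only a model of the semantics, not Next.js's actual implementation:

```typescript
// Toy model of tag-scoped invalidation: cached entries carry tags, and
// revalidating a tag drops only the entries that carry that tag.
const cache = new Map<string, { value: unknown; tags: string[] }>();

function put(key: string, value: unknown, tags: string[]): void {
  cache.set(key, { value, tags });
}

function revalidateTagLocal(tag: string): void {
  for (const [key, entry] of cache) {
    if (entry.tags.includes(tag)) cache.delete(key);
  }
}
```

Mutating session `s1` invalidates only entries tagged `session:s1`; data cached for other sessions survives untouched.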


@@ -0,0 +1,36 @@
## 1. broadcast.ts module
- [x] 1.1 Create `src/lib/broadcast.ts` with a `Map<string, Set<(event: unknown) => void>>` and the `subscribe(sessionId, cb)` and `broadcast(sessionId, event)` functions
- [x] 1.2 Add the pooled polling logic: `startPolling(sessionId)` / `stopPolling(sessionId)` with a subscriber counter
- [x] 1.3 Run a manual test: open 2 tabs on the same session and verify that only one interval is running (server-side log)
## 2. SSE route migration
- [x] 2.1 Read all the `src/app/api/*/subscribe/route.ts` routes to inventory the current pattern
- [x] 2.2 Migrate the weather route first (it already has a partial version of the pattern) to validate the approach
- [x] 2.3 Migrate the swot, motivators, year-review, and weekly-checkin routes one by one
- [x] 2.4 Verify that the SSE cleanup (abort signal) does call `unsubscribe()` in each migrated route
## 3. revalidateTag in Server Actions
- [x] 3.1 Define the tag convention in `src/lib/cache-tags.ts` (e.g. `session(id)`, `sessionsList(userId)`)
- [x] 3.2 Add `cacheTag` / `unstable_cache` to the corresponding service queries
- [x] 3.3 Replace `revalidatePath` with `revalidateTag` in `src/actions/swot.ts`
- [x] 3.4 Replace `revalidatePath` with `revalidateTag` in `src/actions/motivators.ts`
- [x] 3.5 Replace `revalidatePath` with `revalidateTag` in `src/actions/year-review.ts`
- [x] 3.6 Replace `revalidatePath` with `revalidateTag` in `src/actions/weekly-checkin.ts`
- [x] 3.7 Replace `revalidatePath` with `revalidateTag` in `src/actions/weather.ts`
- [x] 3.8 Verify that mutations are reflected correctly in the UI after revalidation
## 4. Broadcast from Server Actions
- [x] 4.1 Add the `broadcast(sessionId, { type: 'update' })` call in every mutating Server Action (after revalidateTag)
- [x] 4.2 Verify that collaborative updates work (open 2 tabs, mutate from one, watch the update appear in the other)
## 5. Sessions page pagination
- [x] 5.1 Update the queries in `src/services/` to accept `cursor` and `limit` (default: 20)
- [x] 5.2 Update `src/app/sessions/page.tsx` to load the first page and display the total
- [x] 5.3 Create a `loadMoreSessions(type, cursor)` Server Action for pagination
- [x] 5.4 Add the "Charger plus" button with a loading state in the sessions list component
- [x] 5.5 Verify the "X sur Y sessions" display for each workshop type

openspec/config.yaml

@@ -0,0 +1,20 @@
schema: spec-driven
# Project context (optional)
# This is shown to AI when creating artifacts.
# Add your tech stack, conventions, style guides, domain knowledge, etc.
# Example:
# context: |
# Tech stack: TypeScript, React, Node.js
# We use conventional commits
# Domain: e-commerce platform
# Per-artifact rules (optional)
# Add custom rules for specific artifacts.
# Example:
# rules:
# proposal:
# - Keep proposals under 500 words
# - Always include a "Non-goals" section
# tasks:
# - Break tasks into chunks of max 2 hours


@@ -0,0 +1,18 @@
### Requirement: Loading skeleton on main routes
The application SHALL display a skeleton loading state during navigation to `/sessions` and `/users` routes, activated by Next.js App Router streaming via `loading.tsx` files.
#### Scenario: Navigation to sessions page shows skeleton
- **WHEN** a user navigates to `/sessions`
- **THEN** a loading skeleton SHALL be displayed immediately while the page data loads
#### Scenario: Navigation to users page shows skeleton
- **WHEN** a user navigates to `/users`
- **THEN** a loading skeleton SHALL be displayed immediately while the page data loads
### Requirement: Modal lazy loading
Heavy modal components (ShareModal) SHALL be loaded lazily via `next/dynamic` to reduce the initial JS bundle size.
#### Scenario: ShareModal not in initial bundle
- **WHEN** a page loads that contains a ShareModal trigger
- **THEN** the ShareModal component code SHALL NOT be included in the initial JS bundle
- **THEN** the ShareModal code SHALL be fetched only when first needed
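The behavior assumed here (fetch the component's chunk only on first demand, then reuse it) can be modeled with a memoized lazy loader. This is a toy model of the semantics, not the `next/dynamic` API itself:

```typescript
// Toy model of lazy loading: the factory runs only on first demand and its
// result is cached, mirroring how a dynamically imported chunk is fetched
// once and reused afterwards.
function lazyOnce<T>(load: () => T): () => T {
  let loaded = false;
  let value: T;
  return () => {
    if (!loaded) {
      value = load();
      loaded = true;
    }
    return value;
  };
}
```

In the app itself, `dynamic(() => import('./ShareModal'))` gives the same once-on-demand behavior while also splitting the code out of the initial bundle.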


@@ -13,6 +13,9 @@
"dev": "next dev",
"build": "next build",
"start": "next start",
"test": "vitest run",
"test:watch": "vitest",
"test:coverage": "vitest run --coverage",
"lint": "eslint",
"prettier": "prettier --write ."
},
@@ -37,12 +40,15 @@
"@types/node": "^20",
"@types/react": "^19",
"@types/react-dom": "^19",
"@vitest/coverage-v8": "^4.0.18",
"dotenv": "^17.2.3",
"eslint": "^9",
"eslint-config-next": "16.0.5",
"eslint-config-prettier": "^10.1.8",
"prettier": "^3.7.1",
"tailwindcss": "^4",
"typescript": "^5"
"typescript": "^5",
"vite-tsconfig-paths": "^6.1.1",
"vitest": "^4.0.18"
}
}

pnpm-lock.yaml (generated)

File diff suppressed because it is too large.


@@ -0,0 +1,2 @@
-- CreateIndex
CREATE INDEX "User_name_idx" ON "User"("name");


@@ -45,6 +45,8 @@ model User {
teamMembers TeamMember[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@index([name])
}
model Session {


@@ -1,8 +1,9 @@
'use server';
import { revalidatePath } from 'next/cache';
import { revalidatePath, revalidateTag } from 'next/cache';
import { auth } from '@/lib/auth';
import * as gifMoodService from '@/services/gif-mood';
import { sessionsListTag } from '@/lib/cache-tags';
import { getUserById } from '@/services/auth';
import { broadcastToGifMoodSession } from '@/app/api/gif-mood/[id]/subscribe/route';
@@ -20,6 +21,7 @@ export async function createGifMoodSession(data: { title: string; date?: Date })
const gifMoodSession = await gifMoodService.createGifMoodSession(session.user.id, data);
revalidatePath('/gif-mood');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(session.user.id), 'default');
return { success: true, data: gifMoodSession };
} catch (error) {
console.error('Error creating gif mood session:', error);
@@ -62,6 +64,7 @@ export async function updateGifMoodSession(
revalidatePath(`/gif-mood/${sessionId}`);
revalidatePath('/gif-mood');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error updating gif mood session:', error);
@@ -79,6 +82,7 @@ export async function deleteGifMoodSession(sessionId: string) {
await gifMoodService.deleteGifMoodSession(sessionId, authSession.user.id);
revalidatePath('/gif-mood');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error deleting gif mood session:', error);


@@ -1,8 +1,10 @@
'use server';
import { revalidatePath } from 'next/cache';
import { revalidatePath, revalidateTag } from 'next/cache';
import { auth } from '@/lib/auth';
import * as motivatorsService from '@/services/moving-motivators';
import { sessionsListTag } from '@/lib/cache-tags';
import { broadcastToMotivatorSession } from '@/app/api/motivators/[id]/subscribe/route';
// ============================================
// Session Actions
@@ -54,9 +56,11 @@ export async function updateMotivatorSession(
data
);
broadcastToMotivatorSession(sessionId, { type: 'SESSION_UPDATED' });
revalidatePath(`/motivators/${sessionId}`);
revalidatePath('/motivators');
revalidatePath('/sessions'); // Also revalidate unified workshops page
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error updating motivator session:', error);
@@ -74,6 +78,7 @@ export async function deleteMotivatorSession(sessionId: string) {
await motivatorsService.deleteMotivatorSession(sessionId, authSession.user.id);
revalidatePath('/motivators');
revalidatePath('/sessions'); // Also revalidate unified workshops page
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error deleting motivator session:', error);
@@ -121,6 +126,7 @@ export async function updateMotivatorCard(
);
}
broadcastToMotivatorSession(sessionId, { type: 'CARD_UPDATED' });
revalidatePath(`/motivators/${sessionId}`);
return { success: true, data: card };
} catch (error) {
@@ -152,6 +158,7 @@ export async function reorderMotivatorCards(sessionId: string, cardIds: string[]
{ cardIds }
);
broadcastToMotivatorSession(sessionId, { type: 'CARDS_REORDERED' });
revalidatePath(`/motivators/${sessionId}`);
return { success: true };
} catch (error) {


@@ -1,8 +1,10 @@
'use server';
import { revalidatePath } from 'next/cache';
import { revalidatePath, revalidateTag } from 'next/cache';
import { auth } from '@/lib/auth';
import * as sessionsService from '@/services/sessions';
import { sessionsListTag } from '@/lib/cache-tags';
import { broadcastToSession } from '@/app/api/sessions/[id]/subscribe/route';
export async function updateSessionTitle(sessionId: string, title: string) {
const session = await auth();
@@ -28,8 +30,10 @@ export async function updateSessionTitle(sessionId: string, title: string) {
title: title.trim(),
});
broadcastToSession(sessionId, { type: 'SESSION_UPDATED' });
revalidatePath(`/sessions/${sessionId}`);
revalidatePath('/sessions');
revalidateTag(sessionsListTag(session.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error updating session title:', error);
@@ -61,8 +65,10 @@ export async function updateSessionCollaborator(sessionId: string, collaborator:
collaborator: collaborator.trim(),
});
broadcastToSession(sessionId, { type: 'SESSION_UPDATED' });
revalidatePath(`/sessions/${sessionId}`);
revalidatePath('/sessions');
revalidateTag(sessionsListTag(session.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error updating session collaborator:', error);
@@ -106,8 +112,10 @@ export async function updateSwotSession(
updateData
);
broadcastToSession(sessionId, { type: 'SESSION_UPDATED' });
revalidatePath(`/sessions/${sessionId}`);
revalidatePath('/sessions');
revalidateTag(sessionsListTag(session.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error updating session:', error);
@@ -129,6 +137,7 @@ export async function deleteSwotSession(sessionId: string) {
}
revalidatePath('/sessions');
revalidateTag(sessionsListTag(session.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error deleting session:', error);


@@ -0,0 +1,49 @@
'use server';
import { auth } from '@/lib/auth';
import { SESSIONS_PAGE_SIZE } from '@/lib/types';
import { withWorkshopType } from '@/lib/workshops';
import { getSessionsByUserId } from '@/services/sessions';
import { getMotivatorSessionsByUserId } from '@/services/moving-motivators';
import { getYearReviewSessionsByUserId } from '@/services/year-review';
import { getWeeklyCheckInSessionsByUserId } from '@/services/weekly-checkin';
import { getWeatherSessionsByUserId } from '@/services/weather';
import { getGifMoodSessionsByUserId } from '@/services/gif-mood';
import type { WorkshopTypeId } from '@/lib/workshops';
export async function loadMoreSessions(type: WorkshopTypeId, offset: number) {
const session = await auth();
if (!session?.user?.id) return null;
const userId = session.user.id;
const limit = SESSIONS_PAGE_SIZE;
switch (type) {
case 'swot': {
const all = await getSessionsByUserId(userId);
return { items: withWorkshopType(all.slice(offset, offset + limit), 'swot'), total: all.length };
}
case 'motivators': {
const all = await getMotivatorSessionsByUserId(userId);
return { items: withWorkshopType(all.slice(offset, offset + limit), 'motivators'), total: all.length };
}
case 'year-review': {
const all = await getYearReviewSessionsByUserId(userId);
return { items: withWorkshopType(all.slice(offset, offset + limit), 'year-review'), total: all.length };
}
case 'weekly-checkin': {
const all = await getWeeklyCheckInSessionsByUserId(userId);
return { items: withWorkshopType(all.slice(offset, offset + limit), 'weekly-checkin'), total: all.length };
}
case 'weather': {
const all = await getWeatherSessionsByUserId(userId);
return { items: withWorkshopType(all.slice(offset, offset + limit), 'weather'), total: all.length };
}
case 'gif-mood': {
const all = await getGifMoodSessionsByUserId(userId);
return { items: withWorkshopType(all.slice(offset, offset + limit), 'gif-mood'), total: all.length };
}
default:
return null;
}
}


@@ -3,6 +3,7 @@
import { revalidatePath } from 'next/cache';
import { auth } from '@/lib/auth';
import * as sessionsService from '@/services/sessions';
import { broadcastToSession } from '@/app/api/sessions/[id]/subscribe/route';
import type { SwotCategory } from '@prisma/client';
// ============================================
@@ -31,6 +32,7 @@ export async function createSwotItem(
category: item.category,
});
broadcastToSession(sessionId, { type: 'ITEM_CREATED' });
revalidatePath(`/sessions/${sessionId}`);
return { success: true, data: item };
} catch (error) {
@@ -61,6 +63,7 @@ export async function updateSwotItem(
...data,
});
broadcastToSession(sessionId, { type: 'ITEM_UPDATED' });
revalidatePath(`/sessions/${sessionId}`);
return { success: true, data: item };
} catch (error) {
@@ -86,6 +89,7 @@ export async function deleteSwotItem(itemId: string, sessionId: string) {
itemId,
});
broadcastToSession(sessionId, { type: 'ITEM_DELETED' });
revalidatePath(`/sessions/${sessionId}`);
return { success: true };
} catch (error) {
@@ -114,6 +118,7 @@ export async function duplicateSwotItem(itemId: string, sessionId: string) {
duplicatedFrom: itemId,
});
broadcastToSession(sessionId, { type: 'ITEM_CREATED' });
revalidatePath(`/sessions/${sessionId}`);
return { success: true, data: item };
} catch (error) {
@@ -146,6 +151,7 @@ export async function moveSwotItem(
newOrder,
});
broadcastToSession(sessionId, { type: 'ITEM_MOVED' });
revalidatePath(`/sessions/${sessionId}`);
return { success: true, data: item };
} catch (error) {
@@ -185,6 +191,7 @@ export async function createAction(
linkedItemIds: data.linkedItemIds,
});
broadcastToSession(sessionId, { type: 'ACTION_CREATED' });
revalidatePath(`/sessions/${sessionId}`);
return { success: true, data: action };
} catch (error) {
@@ -221,6 +228,7 @@ export async function updateAction(
...data,
});
broadcastToSession(sessionId, { type: 'ACTION_UPDATED' });
revalidatePath(`/sessions/${sessionId}`);
return { success: true, data: action };
} catch (error) {
@@ -246,6 +254,7 @@ export async function deleteAction(actionId: string, sessionId: string) {
actionId,
});
broadcastToSession(sessionId, { type: 'ACTION_DELETED' });
revalidatePath(`/sessions/${sessionId}`);
return { success: true };
} catch (error) {


@@ -1,8 +1,9 @@
'use server';
import { revalidatePath } from 'next/cache';
import { revalidatePath, revalidateTag } from 'next/cache';
import { auth } from '@/lib/auth';
import * as weatherService from '@/services/weather';
import { sessionsListTag } from '@/lib/cache-tags';
import { getUserById } from '@/services/auth';
import { broadcastToWeatherSession } from '@/app/api/weather/[id]/subscribe/route';
@@ -20,6 +21,7 @@ export async function createWeatherSession(data: { title: string; date?: Date })
const weatherSession = await weatherService.createWeatherSession(session.user.id, data);
revalidatePath('/weather');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(session.user.id), 'default');
return { success: true, data: weatherSession };
} catch (error) {
console.error('Error creating weather session:', error);
@@ -65,6 +67,7 @@ export async function updateWeatherSession(
revalidatePath(`/weather/${sessionId}`);
revalidatePath('/weather');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error updating weather session:', error);
@@ -82,6 +85,7 @@ export async function deleteWeatherSession(sessionId: string) {
await weatherService.deleteWeatherSession(sessionId, authSession.user.id);
revalidatePath('/weather');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error deleting weather session:', error);


@@ -1,8 +1,10 @@
'use server';
import { revalidatePath } from 'next/cache';
import { revalidatePath, revalidateTag } from 'next/cache';
import { auth } from '@/lib/auth';
import * as weeklyCheckInService from '@/services/weekly-checkin';
import { sessionsListTag } from '@/lib/cache-tags';
import { broadcastToWeeklyCheckInSession } from '@/app/api/weekly-checkin/[id]/subscribe/route';
import type { WeeklyCheckInCategory, Emotion } from '@prisma/client';
// ============================================
@@ -36,6 +38,7 @@ export async function createWeeklyCheckInSession(data: {
}
revalidatePath('/weekly-checkin');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(session.user.id), 'default');
return { success: true, data: weeklyCheckInSession };
} catch (error) {
console.error('Error creating weekly check-in session:', error);
@@ -63,9 +66,11 @@ export async function updateWeeklyCheckInSession(
data
);
broadcastToWeeklyCheckInSession(sessionId, { type: 'SESSION_UPDATED' });
revalidatePath(`/weekly-checkin/${sessionId}`);
revalidatePath('/weekly-checkin');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error updating weekly check-in session:', error);
@@ -83,6 +88,7 @@ export async function deleteWeeklyCheckInSession(sessionId: string) {
await weeklyCheckInService.deleteWeeklyCheckInSession(sessionId, authSession.user.id);
revalidatePath('/weekly-checkin');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error deleting weekly check-in session:', error);
@@ -128,6 +134,7 @@ export async function createWeeklyCheckInItem(
}
);
broadcastToWeeklyCheckInSession(sessionId, { type: 'ITEM_CREATED' });
revalidatePath(`/weekly-checkin/${sessionId}`);
return { success: true, data: item };
} catch (error) {
@@ -169,6 +176,7 @@ export async function updateWeeklyCheckInItem(
}
);
broadcastToWeeklyCheckInSession(sessionId, { type: 'ITEM_UPDATED' });
revalidatePath(`/weekly-checkin/${sessionId}`);
return { success: true, data: item };
} catch (error) {
@@ -203,6 +211,7 @@ export async function deleteWeeklyCheckInItem(itemId: string, sessionId: string)
{ itemId }
);
broadcastToWeeklyCheckInSession(sessionId, { type: 'ITEM_DELETED' });
revalidatePath(`/weekly-checkin/${sessionId}`);
return { success: true };
} catch (error) {
@@ -246,6 +255,7 @@ export async function moveWeeklyCheckInItem(
}
);
broadcastToWeeklyCheckInSession(sessionId, { type: 'ITEM_MOVED' });
revalidatePath(`/weekly-checkin/${sessionId}`);
return { success: true };
} catch (error) {
@@ -284,6 +294,7 @@ export async function reorderWeeklyCheckInItems(
{ category, itemIds }
);
broadcastToWeeklyCheckInSession(sessionId, { type: 'ITEMS_REORDERED' });
revalidatePath(`/weekly-checkin/${sessionId}`);
return { success: true };
} catch (error) {


@@ -1,8 +1,10 @@
'use server';
import { revalidatePath } from 'next/cache';
import { revalidatePath, revalidateTag } from 'next/cache';
import { auth } from '@/lib/auth';
import * as yearReviewService from '@/services/year-review';
import { sessionsListTag } from '@/lib/cache-tags';
import { broadcastToYearReviewSession } from '@/app/api/year-review/[id]/subscribe/route';
import type { YearReviewCategory } from '@prisma/client';
// ============================================
@@ -36,6 +38,7 @@ export async function createYearReviewSession(data: {
}
revalidatePath('/year-review');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(session.user.id), 'default');
return { success: true, data: yearReviewSession };
} catch (error) {
console.error('Error creating year review session:', error);
@@ -63,9 +66,11 @@ export async function updateYearReviewSession(
data
);
broadcastToYearReviewSession(sessionId, { type: 'SESSION_UPDATED' });
revalidatePath(`/year-review/${sessionId}`);
revalidatePath('/year-review');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error updating year review session:', error);
@@ -83,6 +88,7 @@ export async function deleteYearReviewSession(sessionId: string) {
await yearReviewService.deleteYearReviewSession(sessionId, authSession.user.id);
revalidatePath('/year-review');
revalidatePath('/sessions');
revalidateTag(sessionsListTag(authSession.user.id), 'default');
return { success: true };
} catch (error) {
console.error('Error deleting year review session:', error);
@@ -124,6 +130,7 @@ export async function createYearReviewItem(
}
);
broadcastToYearReviewSession(sessionId, { type: 'ITEM_CREATED' });
revalidatePath(`/year-review/${sessionId}`);
return { success: true, data: item };
} catch (error) {
@@ -162,6 +169,7 @@ export async function updateYearReviewItem(
}
);
broadcastToYearReviewSession(sessionId, { type: 'ITEM_UPDATED' });
revalidatePath(`/year-review/${sessionId}`);
return { success: true, data: item };
} catch (error) {
@@ -193,6 +201,7 @@ export async function deleteYearReviewItem(itemId: string, sessionId: string) {
{ itemId }
);
broadcastToYearReviewSession(sessionId, { type: 'ITEM_DELETED' });
revalidatePath(`/year-review/${sessionId}`);
return { success: true };
} catch (error) {
@@ -233,6 +242,7 @@ export async function moveYearReviewItem(
}
);
broadcastToYearReviewSession(sessionId, { type: 'ITEM_MOVED' });
revalidatePath(`/year-review/${sessionId}`);
return { success: true };
} catch (error) {
@@ -268,6 +278,7 @@ export async function reorderYearReviewItems(
{ category, itemIds }
);
broadcastToYearReviewSession(sessionId, { type: 'ITEMS_REORDERED' });
revalidatePath(`/year-review/${sessionId}`);
return { success: true };
} catch (error) {


@@ -1,10 +1,18 @@
import { auth } from '@/lib/auth';
import { canAccessGifMoodSession, getGifMoodSessionEvents } from '@/services/gif-mood';
import { createBroadcaster } from '@/lib/broadcast';
export const dynamic = 'force-dynamic';
// Store active connections per session
const connections = new Map<string, Set<ReadableStreamDefaultController>>();
const { subscribe, broadcast } = createBroadcaster(getGifMoodSessionEvents, (event) => ({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
}));
export { broadcast as broadcastToGifMoodSession };
export async function GET(request: Request, { params }: { params: Promise<{ id: string }> }) {
const { id: sessionId } = await params;
@@ -20,60 +28,31 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
}
const userId = session.user.id;
let lastEventTime = new Date();
let unsubscribe: () => void = () => {};
let controller: ReadableStreamDefaultController;
const stream = new ReadableStream({
start(ctrl) {
controller = ctrl;
if (!connections.has(sessionId)) {
connections.set(sessionId, new Set());
}
connections.get(sessionId)!.add(controller);
const encoder = new TextEncoder();
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'connected', userId })}\n\n`)
);
unsubscribe = subscribe(sessionId, userId, (event) => {
try {
controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
} catch {
unsubscribe();
}
});
},
cancel() {
connections.get(sessionId)?.delete(controller);
if (connections.get(sessionId)?.size === 0) {
connections.delete(sessionId);
}
unsubscribe();
},
});
const pollInterval = setInterval(async () => {
try {
const events = await getGifMoodSessionEvents(sessionId, lastEventTime);
if (events.length > 0) {
const encoder = new TextEncoder();
for (const event of events) {
if (event.userId !== userId) {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
})}\n\n`
)
);
}
lastEventTime = event.createdAt;
}
}
} catch {
clearInterval(pollInterval);
}
}, 2000);
request.signal.addEventListener('abort', () => {
clearInterval(pollInterval);
unsubscribe();
});
return new Response(stream, {
@@ -84,29 +63,3 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
},
});
}
export function broadcastToGifMoodSession(sessionId: string, event: object) {
try {
const sessionConnections = connections.get(sessionId);
if (!sessionConnections || sessionConnections.size === 0) {
return;
}
const encoder = new TextEncoder();
const message = encoder.encode(`data: ${JSON.stringify(event)}\n\n`);
for (const controller of sessionConnections) {
try {
controller.enqueue(message);
} catch {
sessionConnections.delete(controller);
}
}
if (sessionConnections.size === 0) {
connections.delete(sessionId);
}
} catch (error) {
console.error('[SSE Broadcast] Error broadcasting:', error);
}
}


@@ -1,10 +1,18 @@
import { auth } from '@/lib/auth';
import { canAccessMotivatorSession, getMotivatorSessionEvents } from '@/services/moving-motivators';
import { createBroadcaster } from '@/lib/broadcast';
export const dynamic = 'force-dynamic';
// Store active connections per session
const connections = new Map<string, Set<ReadableStreamDefaultController>>();
const { subscribe, broadcast } = createBroadcaster(getMotivatorSessionEvents, (event) => ({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
}));
export { broadcast as broadcastToMotivatorSession };
export async function GET(request: Request, { params }: { params: Promise<{ id: string }> }) {
const { id: sessionId } = await params;
@@ -14,74 +22,37 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
return new Response('Unauthorized', { status: 401 });
}
// Check access
const hasAccess = await canAccessMotivatorSession(sessionId, session.user.id);
if (!hasAccess) {
return new Response('Forbidden', { status: 403 });
}
const userId = session.user.id;
let lastEventTime = new Date();
let unsubscribe: () => void = () => {};
let controller: ReadableStreamDefaultController;
const stream = new ReadableStream({
start(ctrl) {
controller = ctrl;
// Register connection
if (!connections.has(sessionId)) {
connections.set(sessionId, new Set());
}
connections.get(sessionId)!.add(controller);
// Send initial ping
const encoder = new TextEncoder();
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'connected', userId })}\n\n`)
);
unsubscribe = subscribe(sessionId, userId, (event) => {
try {
controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
} catch {
unsubscribe();
}
});
},
cancel() {
// Remove connection on close
connections.get(sessionId)?.delete(controller);
if (connections.get(sessionId)?.size === 0) {
connections.delete(sessionId);
}
unsubscribe();
},
});
// Poll for new events (simple approach, works with any DB)
const pollInterval = setInterval(async () => {
try {
const events = await getMotivatorSessionEvents(sessionId, lastEventTime);
if (events.length > 0) {
const encoder = new TextEncoder();
for (const event of events) {
// Don't send events to the user who created them
if (event.userId !== userId) {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
})}\n\n`
)
);
}
lastEventTime = event.createdAt;
}
}
} catch {
// Connection might be closed
clearInterval(pollInterval);
}
}, 2000); // Poll every 2 seconds
// Cleanup on abort
request.signal.addEventListener('abort', () => {
clearInterval(pollInterval);
unsubscribe();
});
return new Response(stream, {
@@ -92,20 +63,3 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
},
});
}
// Helper to broadcast to all connections (called from actions)
export function broadcastToMotivatorSession(sessionId: string, event: object) {
const sessionConnections = connections.get(sessionId);
if (!sessionConnections) return;
const encoder = new TextEncoder();
const message = encoder.encode(`data: ${JSON.stringify(event)}\n\n`);
for (const controller of sessionConnections) {
try {
controller.enqueue(message);
} catch {
// Connection closed, will be cleaned up
}
}
}
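All six SSE routes in this change import `createBroadcaster` from `src/lib/broadcast.ts`, whose implementation is not part of this diff. The following is a minimal sketch consistent with the commit description and the unit tests further down (shared polling per session that starts on the first subscriber and stops on the last, creator filtering on polled events, immediate push via `broadcast`); the 1000 ms default is inferred from the fake-timer tests, and all internal names are assumptions:

```typescript
type BroadcastCallback = (event: object) => void;

interface Sub {
  userId: string;
  cb: BroadcastCallback;
}

export function createBroadcaster<E extends { userId: string; createdAt: Date }>(
  fetchEvents: (sessionId: string, since: Date) => Promise<E[]>,
  formatEvent: (event: E) => object,
  pollMs = 1000 // inferred from the unit tests (one fetch per 1000 ms tick)
) {
  const subs = new Map<string, Set<Sub>>();
  const timers = new Map<string, ReturnType<typeof setInterval>>();
  const since = new Map<string, Date>();

  function startPolling(sessionId: string) {
    timers.set(
      sessionId,
      setInterval(async () => {
        try {
          const events = await fetchEvents(sessionId, since.get(sessionId) ?? new Date());
          for (const event of events) {
            since.set(sessionId, event.createdAt);
            for (const sub of subs.get(sessionId) ?? []) {
              // Polled events are not echoed back to their creator.
              if (sub.userId !== event.userId) sub.cb(formatEvent(event));
            }
          }
        } catch {
          // Transient fetch errors are swallowed; the next tick retries.
        }
      }, pollMs)
    );
  }

  function subscribe(sessionId: string, userId: string, cb: BroadcastCallback): () => void {
    const sub: Sub = { userId, cb };
    if (!subs.has(sessionId)) {
      subs.set(sessionId, new Set());
      since.set(sessionId, new Date());
      startPolling(sessionId); // first subscriber starts the shared interval
    }
    subs.get(sessionId)!.add(sub);
    return () => {
      const set = subs.get(sessionId);
      if (!set || !set.delete(sub)) return; // idempotent unsubscribe
      if (set.size === 0) {
        clearInterval(timers.get(sessionId)); // last subscriber stops polling
        timers.delete(sessionId);
        subs.delete(sessionId);
        since.delete(sessionId);
      }
    };
  }

  // Immediate push from Server Actions; delivered to every subscriber,
  // including the one matching the event's userId (filtering only applies
  // to polled events).
  function broadcast(sessionId: string, event: object) {
    for (const sub of subs.get(sessionId) ?? []) sub.cb(event);
  }

  return { subscribe, broadcast };
}
```

This shape explains why each route can drop its per-connection `setInterval`: the interval lives in the broadcaster, keyed by session, and the route only wires a controller-enqueueing callback into `subscribe`.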


@@ -1,10 +1,18 @@
import { auth } from '@/lib/auth';
import { canAccessSession, getSessionEvents } from '@/services/sessions';
import { createBroadcaster } from '@/lib/broadcast';
export const dynamic = 'force-dynamic';
// Store active connections per session
const connections = new Map<string, Set<ReadableStreamDefaultController>>();
const { subscribe, broadcast } = createBroadcaster(getSessionEvents, (event) => ({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
}));
export { broadcast as broadcastToSession };
export async function GET(request: Request, { params }: { params: Promise<{ id: string }> }) {
const { id: sessionId } = await params;
@@ -14,74 +22,37 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
return new Response('Unauthorized', { status: 401 });
}
// Check access
const hasAccess = await canAccessSession(sessionId, session.user.id);
if (!hasAccess) {
return new Response('Forbidden', { status: 403 });
}
const userId = session.user.id;
let lastEventTime = new Date();
let unsubscribe: () => void = () => {};
let controller: ReadableStreamDefaultController;
const stream = new ReadableStream({
start(ctrl) {
controller = ctrl;
// Register connection
if (!connections.has(sessionId)) {
connections.set(sessionId, new Set());
}
connections.get(sessionId)!.add(controller);
// Send initial ping
const encoder = new TextEncoder();
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'connected', userId })}\n\n`)
);
unsubscribe = subscribe(sessionId, userId, (event) => {
try {
controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
} catch {
unsubscribe();
}
});
},
cancel() {
// Remove connection on close
connections.get(sessionId)?.delete(controller);
if (connections.get(sessionId)?.size === 0) {
connections.delete(sessionId);
}
unsubscribe();
},
});
// Poll for new events (simple approach, works with any DB)
const pollInterval = setInterval(async () => {
try {
const events = await getSessionEvents(sessionId, lastEventTime);
if (events.length > 0) {
const encoder = new TextEncoder();
for (const event of events) {
// Don't send events to the user who created them
if (event.userId !== userId) {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId, // Include userId for client-side filtering
user: event.user,
timestamp: event.createdAt,
})}\n\n`
)
);
}
lastEventTime = event.createdAt;
}
}
} catch {
// Connection might be closed
clearInterval(pollInterval);
}
}, 2000); // Poll every 2 seconds
// Cleanup on abort
request.signal.addEventListener('abort', () => {
clearInterval(pollInterval);
unsubscribe();
});
return new Response(stream, {
@@ -92,20 +63,3 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
},
});
}
// Helper to broadcast to all connections (called from actions)
export function broadcastToSession(sessionId: string, event: object) {
const sessionConnections = connections.get(sessionId);
if (!sessionConnections) return;
const encoder = new TextEncoder();
const message = encoder.encode(`data: ${JSON.stringify(event)}\n\n`);
for (const controller of sessionConnections) {
try {
controller.enqueue(message);
} catch {
// Connection closed, will be cleaned up
}
}
}


@@ -1,7 +1,9 @@
import { NextResponse } from 'next/server';
import { revalidateTag } from 'next/cache';
import { auth } from '@/lib/auth';
import { prisma } from '@/services/database';
import { shareSession } from '@/services/sessions';
import { sessionsListTag } from '@/lib/cache-tags';
export async function GET() {
try {
@@ -63,6 +65,7 @@ export async function POST(request: Request) {
console.error('Auto-share failed:', shareError);
}
revalidateTag(sessionsListTag(session.user.id), 'default');
return NextResponse.json(newSession, { status: 201 });
} catch (error) {
console.error('Error creating session:', error);
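The `sessionsListTag` helper imported above comes from the new `src/lib/cache-tags.ts`, which is not shown in this diff. The commit message names three helpers; a plausible sketch, with the exact tag formats being assumptions:

```typescript
// Hypothetical shape of src/lib/cache-tags.ts. Helper names come from the
// commit message (sessionTag, sessionsListTag, userStatsTag); the string
// formats below are illustrative, not confirmed by the diff.
export const sessionTag = (sessionId: string): string => `session:${sessionId}`;
export const sessionsListTag = (userId: string): string => `sessions-list:${userId}`;
export const userStatsTag = (userId: string): string => `user-stats:${userId}`;
```

Keeping tags per-user means a `revalidateTag(sessionsListTag(userId))` after a mutation invalidates only that user's cached list entries rather than every user's.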


@@ -1,10 +1,18 @@
import { auth } from '@/lib/auth';
import { canAccessWeatherSession, getWeatherSessionEvents } from '@/services/weather';
import { createBroadcaster } from '@/lib/broadcast';
export const dynamic = 'force-dynamic';
// Store active connections per session
const connections = new Map<string, Set<ReadableStreamDefaultController>>();
const { subscribe, broadcast } = createBroadcaster(getWeatherSessionEvents, (event) => ({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
}));
export { broadcast as broadcastToWeatherSession };
export async function GET(request: Request, { params }: { params: Promise<{ id: string }> }) {
const { id: sessionId } = await params;
@@ -14,74 +22,37 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
return new Response('Unauthorized', { status: 401 });
}
// Check access
const hasAccess = await canAccessWeatherSession(sessionId, session.user.id);
if (!hasAccess) {
return new Response('Forbidden', { status: 403 });
}
const userId = session.user.id;
let lastEventTime = new Date();
let unsubscribe: () => void = () => {};
let controller: ReadableStreamDefaultController;
const stream = new ReadableStream({
start(ctrl) {
controller = ctrl;
// Register connection
if (!connections.has(sessionId)) {
connections.set(sessionId, new Set());
}
connections.get(sessionId)!.add(controller);
// Send initial ping
const encoder = new TextEncoder();
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'connected', userId })}\n\n`)
);
unsubscribe = subscribe(sessionId, userId, (event) => {
try {
controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
} catch {
unsubscribe();
}
});
},
cancel() {
// Remove connection on close
connections.get(sessionId)?.delete(controller);
if (connections.get(sessionId)?.size === 0) {
connections.delete(sessionId);
}
unsubscribe();
},
});
// Poll for new events (simple approach, works with any DB)
const pollInterval = setInterval(async () => {
try {
const events = await getWeatherSessionEvents(sessionId, lastEventTime);
if (events.length > 0) {
const encoder = new TextEncoder();
for (const event of events) {
// Don't send events to the user who created them
if (event.userId !== userId) {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
})}\n\n`
)
);
}
lastEventTime = event.createdAt;
}
}
} catch {
// Connection might be closed
clearInterval(pollInterval);
}
}, 2000); // Poll every 2 seconds
// Cleanup on abort
request.signal.addEventListener('abort', () => {
clearInterval(pollInterval);
unsubscribe();
});
return new Response(stream, {
@@ -92,45 +63,3 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
},
});
}
// Helper to broadcast to all connections (called from actions)
export function broadcastToWeatherSession(sessionId: string, event: object) {
try {
const sessionConnections = connections.get(sessionId);
if (!sessionConnections || sessionConnections.size === 0) {
// No active connections, event will be picked up by polling
console.log(
`[SSE Broadcast] No connections for session ${sessionId}, will be picked up by polling`
);
return;
}
console.log(
`[SSE Broadcast] Broadcasting to ${sessionConnections.size} connections for session ${sessionId}`
);
const encoder = new TextEncoder();
const message = encoder.encode(`data: ${JSON.stringify(event)}\n\n`);
let sentCount = 0;
for (const controller of sessionConnections) {
try {
controller.enqueue(message);
sentCount++;
} catch (error) {
// Connection might be closed, remove it
console.log(`[SSE Broadcast] Failed to send, removing connection:`, error);
sessionConnections.delete(controller);
}
}
console.log(`[SSE Broadcast] Sent to ${sentCount} connections`);
// Clean up empty sets
if (sessionConnections.size === 0) {
connections.delete(sessionId);
}
} catch (error) {
console.error('[SSE Broadcast] Error broadcasting:', error);
}
}


@@ -3,11 +3,19 @@ import {
canAccessWeeklyCheckInSession,
getWeeklyCheckInSessionEvents,
} from '@/services/weekly-checkin';
import { createBroadcaster } from '@/lib/broadcast';
export const dynamic = 'force-dynamic';
// Store active connections per session
const connections = new Map<string, Set<ReadableStreamDefaultController>>();
const { subscribe, broadcast } = createBroadcaster(getWeeklyCheckInSessionEvents, (event) => ({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
}));
export { broadcast as broadcastToWeeklyCheckInSession };
export async function GET(request: Request, { params }: { params: Promise<{ id: string }> }) {
const { id: sessionId } = await params;
@@ -17,74 +25,37 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
return new Response('Unauthorized', { status: 401 });
}
// Check access
const hasAccess = await canAccessWeeklyCheckInSession(sessionId, session.user.id);
if (!hasAccess) {
return new Response('Forbidden', { status: 403 });
}
const userId = session.user.id;
let lastEventTime = new Date();
let unsubscribe: () => void = () => {};
let controller: ReadableStreamDefaultController;
const stream = new ReadableStream({
start(ctrl) {
controller = ctrl;
// Register connection
if (!connections.has(sessionId)) {
connections.set(sessionId, new Set());
}
connections.get(sessionId)!.add(controller);
// Send initial ping
const encoder = new TextEncoder();
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'connected', userId })}\n\n`)
);
unsubscribe = subscribe(sessionId, userId, (event) => {
try {
controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
} catch {
unsubscribe();
}
});
},
cancel() {
// Remove connection on close
connections.get(sessionId)?.delete(controller);
if (connections.get(sessionId)?.size === 0) {
connections.delete(sessionId);
}
unsubscribe();
},
});
// Poll for new events (simple approach, works with any DB)
const pollInterval = setInterval(async () => {
try {
const events = await getWeeklyCheckInSessionEvents(sessionId, lastEventTime);
if (events.length > 0) {
const encoder = new TextEncoder();
for (const event of events) {
// Don't send events to the user who created them
if (event.userId !== userId) {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
})}\n\n`
)
);
}
lastEventTime = event.createdAt;
}
}
} catch {
// Connection might be closed
clearInterval(pollInterval);
}
}, 2000); // Poll every 2 seconds
// Cleanup on abort
request.signal.addEventListener('abort', () => {
clearInterval(pollInterval);
unsubscribe();
});
return new Response(stream, {
@@ -95,28 +66,3 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
},
});
}
// Helper to broadcast to all connections (called from actions)
export function broadcastToWeeklyCheckInSession(sessionId: string, event: object) {
const sessionConnections = connections.get(sessionId);
if (!sessionConnections || sessionConnections.size === 0) {
return;
}
const encoder = new TextEncoder();
const message = encoder.encode(`data: ${JSON.stringify(event)}\n\n`);
for (const controller of sessionConnections) {
try {
controller.enqueue(message);
} catch {
// Connection might be closed, remove it
sessionConnections.delete(controller);
}
}
// Clean up empty sets
if (sessionConnections.size === 0) {
connections.delete(sessionId);
}
}


@@ -1,10 +1,18 @@
import { auth } from '@/lib/auth';
import { canAccessYearReviewSession, getYearReviewSessionEvents } from '@/services/year-review';
import { createBroadcaster } from '@/lib/broadcast';
export const dynamic = 'force-dynamic';
// Store active connections per session
const connections = new Map<string, Set<ReadableStreamDefaultController>>();
const { subscribe, broadcast } = createBroadcaster(getYearReviewSessionEvents, (event) => ({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
}));
export { broadcast as broadcastToYearReviewSession };
export async function GET(request: Request, { params }: { params: Promise<{ id: string }> }) {
const { id: sessionId } = await params;
@@ -14,74 +22,37 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
return new Response('Unauthorized', { status: 401 });
}
// Check access
const hasAccess = await canAccessYearReviewSession(sessionId, session.user.id);
if (!hasAccess) {
return new Response('Forbidden', { status: 403 });
}
const userId = session.user.id;
let lastEventTime = new Date();
let unsubscribe: () => void = () => {};
let controller: ReadableStreamDefaultController;
const stream = new ReadableStream({
start(ctrl) {
controller = ctrl;
// Register connection
if (!connections.has(sessionId)) {
connections.set(sessionId, new Set());
}
connections.get(sessionId)!.add(controller);
// Send initial ping
const encoder = new TextEncoder();
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'connected', userId })}\n\n`)
);
unsubscribe = subscribe(sessionId, userId, (event) => {
try {
controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
} catch {
unsubscribe();
}
});
},
cancel() {
// Remove connection on close
connections.get(sessionId)?.delete(controller);
if (connections.get(sessionId)?.size === 0) {
connections.delete(sessionId);
}
unsubscribe();
},
});
// Poll for new events (simple approach, works with any DB)
const pollInterval = setInterval(async () => {
try {
const events = await getYearReviewSessionEvents(sessionId, lastEventTime);
if (events.length > 0) {
const encoder = new TextEncoder();
for (const event of events) {
// Don't send events to the user who created them
if (event.userId !== userId) {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({
type: event.type,
payload: JSON.parse(event.payload),
userId: event.userId,
user: event.user,
timestamp: event.createdAt,
})}\n\n`
)
);
}
lastEventTime = event.createdAt;
}
}
} catch {
// Connection might be closed
clearInterval(pollInterval);
}
}, 2000); // Poll every 2 seconds
// Cleanup on abort
request.signal.addEventListener('abort', () => {
clearInterval(pollInterval);
unsubscribe();
});
return new Response(stream, {
@@ -92,28 +63,3 @@ export async function GET(request: Request, { params }: { params: Promise<{ id:
},
});
}
// Helper to broadcast to all connections (called from actions)
export function broadcastToYearReviewSession(sessionId: string, event: object) {
const sessionConnections = connections.get(sessionId);
if (!sessionConnections || sessionConnections.size === 0) {
return;
}
const encoder = new TextEncoder();
const message = encoder.encode(`data: ${JSON.stringify(event)}\n\n`);
for (const controller of sessionConnections) {
try {
controller.enqueue(message);
} catch {
// Connection might be closed, remove it
sessionConnections.delete(controller);
}
}
// Clean up empty sets
if (sessionConnections.size === 0) {
connections.delete(sessionId);
}
}


@@ -1,10 +1,11 @@
'use client';
import { useEffect, useRef, useState } from 'react';
import { useEffect, useRef, useState, useTransition } from 'react';
import { useSearchParams, useRouter } from 'next/navigation';
import { CollaboratorDisplay } from '@/components/ui';
import { type WorkshopTabType, VALID_TAB_PARAMS } from '@/lib/workshops';
import { type WorkshopTabType, VALID_TAB_PARAMS, type WorkshopTypeId } from '@/lib/workshops';
import { useClickOutside } from '@/hooks/useClickOutside';
import { loadMoreSessions } from '@/actions/sessions-pagination';
import {
type CardView,
type SortCol,
@@ -376,13 +377,14 @@ function SortableTableView({
// ─── WorkshopTabs ─────────────────────────────────────────────────────────────
export function WorkshopTabs({
swotSessions,
motivatorSessions,
yearReviewSessions,
weeklyCheckInSessions,
weatherSessions,
gifMoodSessions,
swotSessions: initialSwot,
motivatorSessions: initialMotivators,
yearReviewSessions: initialYearReview,
weeklyCheckInSessions: initialWeeklyCheckIn,
weatherSessions: initialWeather,
gifMoodSessions: initialGifMood,
teamCollabSessions = [],
totals,
}: WorkshopTabsProps) {
const CARD_VIEW_STORAGE_KEY = 'sessions:cardView';
const isCardView = (value: string): value is CardView =>
@@ -390,7 +392,45 @@ export function WorkshopTabs({
const searchParams = useSearchParams();
const router = useRouter();
const [isPending, startTransition] = useTransition();
const [typeDropdownOpen, setTypeDropdownOpen] = useState(false);
// Per-type session lists (extended by load more)
const [swotSessions, setSwotSessions] = useState(initialSwot);
const [motivatorSessions, setMotivatorSessions] = useState(initialMotivators);
const [yearReviewSessions, setYearReviewSessions] = useState(initialYearReview);
const [weeklyCheckInSessions, setWeeklyCheckInSessions] = useState(initialWeeklyCheckIn);
const [weatherSessions, setWeatherSessions] = useState(initialWeather);
const [gifMoodSessions, setGifMoodSessions] = useState(initialGifMood);
const sessionsByType: Record<WorkshopTypeId, AnySession[]> = {
swot: swotSessions,
motivators: motivatorSessions,
'year-review': yearReviewSessions,
'weekly-checkin': weeklyCheckInSessions,
weather: weatherSessions,
'gif-mood': gifMoodSessions,
};
const settersByType: Record<WorkshopTypeId, React.Dispatch<React.SetStateAction<AnySession[]>>> = {
swot: setSwotSessions as React.Dispatch<React.SetStateAction<AnySession[]>>,
motivators: setMotivatorSessions as React.Dispatch<React.SetStateAction<AnySession[]>>,
'year-review': setYearReviewSessions as React.Dispatch<React.SetStateAction<AnySession[]>>,
'weekly-checkin': setWeeklyCheckInSessions as React.Dispatch<React.SetStateAction<AnySession[]>>,
weather: setWeatherSessions as React.Dispatch<React.SetStateAction<AnySession[]>>,
'gif-mood': setGifMoodSessions as React.Dispatch<React.SetStateAction<AnySession[]>>,
};
function handleLoadMore(type: WorkshopTypeId) {
const current = sessionsByType[type];
startTransition(async () => {
const result = await loadMoreSessions(type, current.length);
if (result) {
settersByType[type]((prev) => [...prev, ...(result.items as AnySession[])]);
}
});
}
const [cardView, setCardView] = useState<CardView>(() => {
if (typeof window === 'undefined') return 'grid';
const storedView = localStorage.getItem(CARD_VIEW_STORAGE_KEY);
@@ -516,12 +556,12 @@ export function WorkshopTabs({
open={typeDropdownOpen}
onOpenChange={setTypeDropdownOpen}
counts={{
swot: swotSessions.length,
motivators: motivatorSessions.length,
'year-review': yearReviewSessions.length,
'weekly-checkin': weeklyCheckInSessions.length,
weather: weatherSessions.length,
'gif-mood': gifMoodSessions.length,
swot: totals?.swot ?? swotSessions.length,
motivators: totals?.motivators ?? motivatorSessions.length,
'year-review': totals?.['year-review'] ?? yearReviewSessions.length,
'weekly-checkin': totals?.['weekly-checkin'] ?? weeklyCheckInSessions.length,
weather: totals?.weather ?? weatherSessions.length,
'gif-mood': totals?.['gif-mood'] ?? gifMoodSessions.length,
team: teamCollabSessions.length,
}}
/>
@@ -634,6 +674,30 @@ export function WorkshopTabs({
<SessionsGrid sessions={teamCollabFiltered} view={cardView} isTeamCollab />
</section>
)}
{/* "Charger plus" button is shown only on the per-type tabs */}
{activeTab !== 'all' && totals && totals[activeTab as WorkshopTypeId] !== undefined && (
(() => {
const typeId = activeTab as WorkshopTypeId;
const total = totals[typeId];
const loaded = sessionsByType[typeId].length;
if (loaded >= total) return null;
return (
<div className="flex flex-col items-center gap-2 pt-2">
<p className="text-sm text-muted">
{loaded} sur {total} atelier{total > 1 ? 's' : ''}
</p>
<button
type="button"
disabled={isPending}
onClick={() => handleLoadMore(typeId)}
className="px-5 py-2 rounded-full text-sm font-medium bg-card border border-border text-foreground/70 hover:text-foreground hover:bg-card-hover transition-colors disabled:opacity-50"
>
{isPending ? 'Chargement…' : `Charger plus (${total - loaded} restants)`}
</button>
</div>
);
})()
)}
</div>
)}
</div>
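The `handleLoadMore` handler above calls the `loadMoreSessions` Server Action from `src/actions/sessions-pagination.ts`, whose body is not in this diff. A plausible sketch of its contract — offset-based paging where the client sends how many items it already holds and gets back the next slice plus the grand total; `SESSIONS_PAGE_SIZE` comes from the commit message, while the fetcher indirection stands in for the real per-`WorkshopTypeId` dispatch:

```typescript
// Illustrative only: the real action dispatches on WorkshopTypeId and checks
// auth; here the per-type service query is abstracted as fetchAll.
const SESSIONS_PAGE_SIZE = 20;

export async function loadMoreSessionsSketch<T>(
  fetchAll: () => Promise<T[]>, // stand-in for the per-type service query
  offset: number
): Promise<{ items: T[]; total: number }> {
  const all = await fetchAll();
  return {
    items: all.slice(offset, offset + SESSIONS_PAGE_SIZE),
    total: all.length,
  };
}
```

Returning `total` alongside `items` is what lets the client render the "X sur Y" counter and hide the button once `loaded >= total`.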


@@ -0,0 +1,32 @@
export default function SessionsLoading() {
return (
<main className="mx-auto max-w-7xl px-4">
{/* PageHeader skeleton */}
<div className="py-6 flex items-start justify-between gap-4">
<div className="flex items-center gap-3">
<div className="h-10 w-10 bg-card rounded-xl animate-pulse" />
<div className="space-y-2">
<div className="h-7 w-40 bg-card rounded animate-pulse" />
<div className="h-4 w-64 bg-card rounded animate-pulse" />
</div>
</div>
<div className="h-9 w-36 bg-card rounded-lg animate-pulse" />
</div>
<div className="space-y-6">
{/* Tabs skeleton */}
<div className="flex gap-2 pb-2">
{[120, 100, 110, 90, 105].map((w, i) => (
<div key={i} className="h-9 bg-card animate-pulse rounded-full" style={{ width: w }} />
))}
</div>
{/* Cards grid skeleton */}
<div className="grid gap-4 md:grid-cols-2 lg:grid-cols-3">
{Array.from({ length: 6 }).map((_, i) => (
<div key={i} className="h-44 bg-card animate-pulse rounded-xl border border-border" />
))}
</div>
</div>
</main>
);
}


@@ -26,6 +26,7 @@ import {
} from '@/services/gif-mood';
import { Card, PageHeader } from '@/components/ui';
import { withWorkshopType } from '@/lib/workshops';
import { SESSIONS_PAGE_SIZE } from '@/lib/types';
import { WorkshopTabs } from './WorkshopTabs';
import { NewWorkshopDropdown } from './NewWorkshopDropdown';
@@ -84,13 +85,23 @@ export default async function SessionsPage() {
getTeamGifMoodSessions(session.user.id),
]);
// Add workshopType to each session for unified display
const allSwotSessions = withWorkshopType(swotSessions, 'swot');
const allMotivatorSessions = withWorkshopType(motivatorSessions, 'motivators');
const allYearReviewSessions = withWorkshopType(yearReviewSessions, 'year-review');
const allWeeklyCheckInSessions = withWorkshopType(weeklyCheckInSessions, 'weekly-checkin');
const allWeatherSessions = withWorkshopType(weatherSessions, 'weather');
const allGifMoodSessions = withWorkshopType(gifMoodSessions, 'gif-mood');
// Track totals before slicing for pagination UI
const totals = {
swot: swotSessions.length,
motivators: motivatorSessions.length,
'year-review': yearReviewSessions.length,
'weekly-checkin': weeklyCheckInSessions.length,
weather: weatherSessions.length,
'gif-mood': gifMoodSessions.length,
};
// Add workshopType and slice first page
const allSwotSessions = withWorkshopType(swotSessions.slice(0, SESSIONS_PAGE_SIZE), 'swot');
const allMotivatorSessions = withWorkshopType(motivatorSessions.slice(0, SESSIONS_PAGE_SIZE), 'motivators');
const allYearReviewSessions = withWorkshopType(yearReviewSessions.slice(0, SESSIONS_PAGE_SIZE), 'year-review');
const allWeeklyCheckInSessions = withWorkshopType(weeklyCheckInSessions.slice(0, SESSIONS_PAGE_SIZE), 'weekly-checkin');
const allWeatherSessions = withWorkshopType(weatherSessions.slice(0, SESSIONS_PAGE_SIZE), 'weather');
const allGifMoodSessions = withWorkshopType(gifMoodSessions.slice(0, SESSIONS_PAGE_SIZE), 'gif-mood');
const teamSwotWithType = withWorkshopType(teamSwotSessions, 'swot');
const teamMotivatorWithType = withWorkshopType(teamMotivatorSessions, 'motivators');
@@ -150,6 +161,7 @@ export default async function SessionsPage() {
weeklyCheckInSessions={allWeeklyCheckInSessions}
weatherSessions={allWeatherSessions}
gifMoodSessions={allGifMoodSessions}
totals={totals}
teamCollabSessions={[
...teamSwotWithType,
...teamMotivatorWithType,


@@ -83,6 +83,15 @@ export type AnySession =
| SwotSession | MotivatorSession | YearReviewSession
| WeeklyCheckInSession | WeatherSession | GifMoodSession;
export interface WorkshopSessionTotals {
swot: number;
motivators: number;
'year-review': number;
'weekly-checkin': number;
weather: number;
'gif-mood': number;
}
export interface WorkshopTabsProps {
swotSessions: SwotSession[];
motivatorSessions: MotivatorSession[];
@@ -91,4 +100,5 @@ export interface WorkshopTabsProps {
weatherSessions: WeatherSession[];
gifMoodSessions: GifMoodSession[];
teamCollabSessions?: (AnySession & { isTeamCollab?: true })[];
totals?: WorkshopSessionTotals;
}

src/app/users/loading.tsx Normal file

@@ -0,0 +1,42 @@
export default function UsersLoading() {
return (
<main className="mx-auto max-w-6xl px-4">
{/* PageHeader skeleton */}
<div className="py-6 flex items-start gap-3">
<div className="h-10 w-10 bg-card rounded-xl animate-pulse" />
<div className="space-y-2">
<div className="h-7 w-36 bg-card rounded animate-pulse" />
<div className="h-4 w-72 bg-card rounded animate-pulse" />
</div>
</div>
{/* Stats grid skeleton */}
<div className="mb-8 grid grid-cols-2 gap-4 sm:grid-cols-4">
{Array.from({ length: 4 }).map((_, i) => (
<div key={i} className="rounded-xl border border-border bg-card p-4 space-y-2 animate-pulse">
<div className="h-8 w-12 bg-muted/40 rounded" />
<div className="h-4 w-24 bg-muted/30 rounded" />
</div>
))}
</div>
{/* User rows skeleton */}
<div className="space-y-3">
{Array.from({ length: 8 }).map((_, i) => (
<div key={i} className="flex items-center gap-4 rounded-xl border border-border bg-card p-4 animate-pulse">
<div className="h-12 w-12 rounded-full bg-muted/40 flex-shrink-0" />
<div className="flex-1 space-y-2 min-w-0">
<div className="h-4 w-32 bg-muted/40 rounded" />
<div className="h-3 w-48 bg-muted/30 rounded" />
</div>
<div className="hidden sm:flex gap-2">
<div className="h-6 w-16 bg-muted/30 rounded-full" />
<div className="h-6 w-16 bg-muted/30 rounded-full" />
<div className="h-6 w-16 bg-muted/30 rounded-full" />
</div>
</div>
))}
</div>
</main>
);
}


@@ -1,10 +1,12 @@
'use client';
import { useState, useCallback } from 'react';
import dynamic from 'next/dynamic';
import { useLive, type LiveEvent } from '@/hooks/useLive';
import { CollaborationToolbar } from './CollaborationToolbar';
import { ShareModal } from './ShareModal';
import type { ShareRole } from '@prisma/client';
const ShareModal = dynamic(() => import('./ShareModal').then((m) => m.ShareModal), { ssr: false });
import type { TeamWithMembers, Share } from '@/lib/share-utils';
export type LiveApiPath = 'sessions' | 'motivators' | 'weather' | 'year-review' | 'weekly-checkin' | 'gif-mood';


@@ -38,6 +38,7 @@ export function useLive({
const router = useRouter();
const eventSourceRef = useRef<EventSource | null>(null);
const reconnectTimeoutRef = useRef<NodeJS.Timeout | null>(null);
const refreshTimeoutRef = useRef<NodeJS.Timeout | null>(null);
const reconnectAttemptsRef = useRef(0);
const onEventRef = useRef(onEvent);
const currentUserIdRef = useRef(currentUserId);
@@ -88,8 +89,9 @@ export function useLive({
setLastEvent(data);
onEventRef.current?.(data);
// Refresh the page data when we receive an event from another user
router.refresh();
// Debounce refresh to group simultaneous SSE events (~300ms window)
if (refreshTimeoutRef.current) clearTimeout(refreshTimeoutRef.current);
refreshTimeoutRef.current = setTimeout(() => router.refresh(), 300);
} catch (e) {
console.error('Failed to parse SSE event:', e);
}
@@ -126,6 +128,10 @@ export function useLive({
clearTimeout(reconnectTimeoutRef.current);
reconnectTimeoutRef.current = null;
}
if (refreshTimeoutRef.current) {
clearTimeout(refreshTimeoutRef.current);
refreshTimeoutRef.current = null;
}
};
}, [sessionId, apiPath, enabled, router]);
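The `refreshTimeoutRef` change above is a trailing-edge debounce: any burst of SSE events arriving within the window collapses into a single `router.refresh()`. The same pattern in generic form (the hook uses 300 ms; 50 ms below only keeps the demo fast):

```typescript
// Trailing-edge debounce: only the last call in a burst survives; the
// callback fires ms after the burst goes quiet.
function debounceTrailing(fn: () => void, ms: number): () => void {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return () => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(fn, ms);
  };
}
```

Note the matching cleanup in the effect's return: without clearing `refreshTimeoutRef` on unmount, a trailing `router.refresh()` could fire after the component is gone.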


@@ -0,0 +1,290 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { createBroadcaster } from '@/lib/broadcast';
// ── Helpers ────────────────────────────────────────────────────────────────
interface FakeEvent {
id: string;
userId: string;
createdAt: Date;
payload: string;
}
function makeEvent(overrides: Partial<FakeEvent> = {}): FakeEvent {
return {
id: 'e1',
userId: 'user-a',
createdAt: new Date('2024-01-01T00:00:00Z'),
payload: 'data',
...overrides,
};
}
function makeBroadcaster(events: FakeEvent[] = []) {
const fetchEvents = vi.fn().mockResolvedValue(events);
const broadcaster = createBroadcaster(fetchEvents, (e) => ({ type: 'TEST', payload: e.payload, userId: e.userId }));
return { fetchEvents, broadcaster };
}
// ── subscribe / broadcast ──────────────────────────────────────────────────
describe('subscribe + broadcast', () => {
beforeEach(() => vi.useFakeTimers());
afterEach(() => vi.useRealTimers());
it('registered callback receives broadcast events', () => {
const { broadcaster } = makeBroadcaster();
const cb = vi.fn();
broadcaster.subscribe('session-1', 'user-a', cb);
broadcaster.broadcast('session-1', { type: 'update' });
expect(cb).toHaveBeenCalledOnce();
expect(cb).toHaveBeenCalledWith({ type: 'update' });
});
it('broadcast to unknown session is a no-op', () => {
const { broadcaster } = makeBroadcaster();
expect(() => broadcaster.broadcast('unknown', { type: 'test' })).not.toThrow();
});
it('multiple subscribers all receive the broadcast', () => {
const { broadcaster } = makeBroadcaster();
const cb1 = vi.fn();
const cb2 = vi.fn();
broadcaster.subscribe('session-1', 'user-a', cb1);
broadcaster.subscribe('session-1', 'user-b', cb2);
broadcaster.broadcast('session-1', { type: 'ping' });
expect(cb1).toHaveBeenCalledOnce();
expect(cb2).toHaveBeenCalledOnce();
});
it('unsubscribed callback no longer receives broadcasts', () => {
const { broadcaster } = makeBroadcaster();
const cb = vi.fn();
const unsubscribe = broadcaster.subscribe('session-1', 'user-a', cb);
unsubscribe();
broadcaster.broadcast('session-1', { type: 'update' });
expect(cb).not.toHaveBeenCalled();
});
it('unsubscribe is idempotent (calling twice is safe)', () => {
const { broadcaster } = makeBroadcaster();
const cb = vi.fn();
const unsubscribe = broadcaster.subscribe('session-1', 'user-a', cb);
unsubscribe();
expect(() => unsubscribe()).not.toThrow();
});
});
// ── Shared polling ─────────────────────────────────────────────────────────
describe('shared polling (startPolling / stopPolling)', () => {
beforeEach(() => vi.useFakeTimers());
afterEach(() => vi.useRealTimers());
it('starts polling when first subscriber arrives', async () => {
const { fetchEvents, broadcaster } = makeBroadcaster();
broadcaster.subscribe('session-1', 'user-a', vi.fn());
await vi.advanceTimersByTimeAsync(1000);
expect(fetchEvents).toHaveBeenCalledOnce();
});
it('does NOT start a second interval for subsequent subscribers', async () => {
const { fetchEvents, broadcaster } = makeBroadcaster();
broadcaster.subscribe('session-1', 'user-a', vi.fn());
broadcaster.subscribe('session-1', 'user-b', vi.fn());
await vi.advanceTimersByTimeAsync(1000);
// Only one poll despite two subscribers
expect(fetchEvents).toHaveBeenCalledOnce();
});
it('stops polling when last subscriber leaves', async () => {
const { fetchEvents, broadcaster } = makeBroadcaster();
const unsub1 = broadcaster.subscribe('session-1', 'user-a', vi.fn());
const unsub2 = broadcaster.subscribe('session-1', 'user-b', vi.fn());
await vi.advanceTimersByTimeAsync(1000);
expect(fetchEvents).toHaveBeenCalledOnce();
unsub1();
unsub2(); // last subscriber → polling should stop
await vi.advanceTimersByTimeAsync(2000);
// fetchEvents should NOT have been called again after both unsubscribed
expect(fetchEvents).toHaveBeenCalledOnce();
});
it('keeps polling while at least one subscriber remains', async () => {
const { fetchEvents, broadcaster } = makeBroadcaster();
const unsub1 = broadcaster.subscribe('session-1', 'user-a', vi.fn());
broadcaster.subscribe('session-1', 'user-b', vi.fn());
unsub1(); // still one left → polling continues
await vi.advanceTimersByTimeAsync(2000);
expect(fetchEvents.mock.calls.length).toBeGreaterThanOrEqual(2);
});
it('passes the since timestamp to fetchEvents', async () => {
const { fetchEvents, broadcaster } = makeBroadcaster();
broadcaster.subscribe('session-1', 'user-a', vi.fn());
await vi.advanceTimersByTimeAsync(1000);
expect(fetchEvents).toHaveBeenCalledWith('session-1', expect.any(Date));
});
});
// ── Filtering by userId ────────────────────────────────────────────────────
describe('polling event filtering', () => {
beforeEach(() => vi.useFakeTimers());
afterEach(() => vi.useRealTimers());
it('does NOT deliver an event to the subscriber who created it', async () => {
const event = makeEvent({ userId: 'user-a' });
const { broadcaster } = makeBroadcaster([event]);
const cb = vi.fn();
broadcaster.subscribe('session-1', 'user-a', cb); // same userId as event
await vi.advanceTimersByTimeAsync(1000);
expect(cb).not.toHaveBeenCalled();
});
it('delivers event to subscribers who did NOT create it', async () => {
const event = makeEvent({ userId: 'user-a' });
const { broadcaster } = makeBroadcaster([event]);
const cbB = vi.fn();
broadcaster.subscribe('session-1', 'user-b', cbB); // different userId
await vi.advanceTimersByTimeAsync(1000);
expect(cbB).toHaveBeenCalledOnce();
});
it('delivers to some and skips others based on userId', async () => {
const event = makeEvent({ userId: 'user-a' });
const { broadcaster } = makeBroadcaster([event]);
const cbA = vi.fn(); // creator → should NOT receive
const cbB = vi.fn(); // other user → should receive
broadcaster.subscribe('session-1', 'user-a', cbA);
broadcaster.subscribe('session-1', 'user-b', cbB);
await vi.advanceTimersByTimeAsync(1000);
expect(cbA).not.toHaveBeenCalled();
expect(cbB).toHaveBeenCalledOnce();
});
it('updates lastEventTime to last event createdAt', async () => {
const t1 = new Date('2024-01-01T00:00:01Z');
const t2 = new Date('2024-01-01T00:00:02Z');
const fetchEvents = vi.fn()
.mockResolvedValueOnce([makeEvent({ createdAt: t1, userId: 'user-x' }), makeEvent({ id: 'e2', createdAt: t2, userId: 'user-x' })])
.mockResolvedValue([]);
const broadcaster = createBroadcaster(fetchEvents, (e) => e);
broadcaster.subscribe('session-1', 'user-a', vi.fn());
await vi.advanceTimersByTimeAsync(2000); // two ticks
// Second call should use t2 as the `since` argument
expect(fetchEvents.mock.calls[1][1]).toEqual(t2);
});
});
// ── formatEvent ────────────────────────────────────────────────────────────
describe('formatEvent', () => {
beforeEach(() => vi.useFakeTimers());
afterEach(() => vi.useRealTimers());
it('applies formatEvent before delivering to subscriber', async () => {
const event = makeEvent({ userId: 'user-x', payload: 'raw' });
const fetchEvents = vi.fn().mockResolvedValue([event]);
const formatEvent = vi.fn().mockReturnValue({ type: 'FORMATTED', value: 42 });
const broadcaster = createBroadcaster(fetchEvents, formatEvent);
const cb = vi.fn();
broadcaster.subscribe('session-1', 'user-a', cb); // user-a ≠ user-x → receives
await vi.advanceTimersByTimeAsync(1000);
expect(formatEvent).toHaveBeenCalledWith(event);
expect(cb).toHaveBeenCalledWith({ type: 'FORMATTED', value: 42 });
});
});
// ── Error resilience ───────────────────────────────────────────────────────
describe('error resilience', () => {
beforeEach(() => vi.useFakeTimers());
afterEach(() => vi.useRealTimers());
it('does not crash when fetchEvents throws', async () => {
const fetchEvents = vi.fn().mockRejectedValue(new Error('DB down'));
const broadcaster = createBroadcaster(fetchEvents, (e) => e);
const cb = vi.fn();
broadcaster.subscribe('session-1', 'user-a', cb);
await expect(vi.advanceTimersByTimeAsync(1000)).resolves.not.toThrow();
});
it('continues polling after a fetch error', async () => {
const fetchEvents = vi.fn()
.mockRejectedValueOnce(new Error('transient error'))
.mockResolvedValue([]);
const broadcaster = createBroadcaster(fetchEvents, (e) => e);
broadcaster.subscribe('session-1', 'user-a', vi.fn());
await vi.advanceTimersByTimeAsync(2000);
expect(fetchEvents.mock.calls.length).toBeGreaterThanOrEqual(2);
});
});
// ── Session isolation ──────────────────────────────────────────────────────
describe('session isolation', () => {
beforeEach(() => vi.useFakeTimers());
afterEach(() => vi.useRealTimers());
it('broadcast to one session does not affect another', () => {
const { broadcaster } = makeBroadcaster();
const cb1 = vi.fn();
const cb2 = vi.fn();
broadcaster.subscribe('session-1', 'user-a', cb1);
broadcaster.subscribe('session-2', 'user-b', cb2);
broadcaster.broadcast('session-1', { type: 'event' });
expect(cb1).toHaveBeenCalledOnce();
expect(cb2).not.toHaveBeenCalled();
});
it('two sessions have independent polling intervals', async () => {
const fetchEvents = vi.fn().mockResolvedValue([]);
const broadcaster = createBroadcaster(fetchEvents, (e) => e);
broadcaster.subscribe('session-1', 'user-a', vi.fn());
broadcaster.subscribe('session-2', 'user-b', vi.fn());
await vi.advanceTimersByTimeAsync(1000);
// Each session polled once → 2 total calls
expect(fetchEvents).toHaveBeenCalledTimes(2);
expect(fetchEvents.mock.calls[0][0]).toBe('session-1');
expect(fetchEvents.mock.calls[1][0]).toBe('session-2');
});
});


@@ -0,0 +1,117 @@
import { describe, it, expect } from 'vitest';
import { getISOWeek, getWeekYearLabel, getWeekBounds } from '@/lib/date-utils';
// ── getISOWeek ─────────────────────────────────────────────────────────────
describe('getISOWeek', () => {
it('returns week 1 for January 4 (always in ISO week 1)', () => {
expect(getISOWeek(new Date(2026, 0, 4))).toBe(1);
expect(getISOWeek(new Date(2024, 0, 4))).toBe(1);
});
it('returns week 1 for Jan 1 when Jan 1 is Thursday (2026)', () => {
// Jan 1, 2026 is a Thursday → week 1
expect(getISOWeek(new Date(2026, 0, 1))).toBe(1);
});
it('returns week 53 for Dec 31 when it falls in last ISO week', () => {
// Dec 31, 2020 is a Thursday → week 53 of 2020
expect(getISOWeek(new Date(2020, 11, 31))).toBe(53);
});
it('returns correct week for a known mid-year date', () => {
// Mar 10, 2026 is in week 11
expect(getISOWeek(new Date(2026, 2, 10))).toBe(11);
});
it('returns week 52 or 53 for Dec 28 (always in the last ISO week)', () => {
// Dec 28 is always in the last week of the year per ISO
const week = getISOWeek(new Date(2026, 11, 28));
expect(week).toBeGreaterThanOrEqual(52);
});
it('week advances by 1 between consecutive Mondays', () => {
const w1 = getISOWeek(new Date(2026, 2, 9)); // Monday March 9
const w2 = getISOWeek(new Date(2026, 2, 16)); // Monday March 16
expect(w2 - w1).toBe(1);
});
it('same week for all days Monday through Saturday of a given week', () => {
// Week of March 9-15, 2026
const week = getISOWeek(new Date(2026, 2, 9)); // Monday
expect(getISOWeek(new Date(2026, 2, 10))).toBe(week); // Tuesday
expect(getISOWeek(new Date(2026, 2, 11))).toBe(week); // Wednesday
expect(getISOWeek(new Date(2026, 2, 14))).toBe(week); // Saturday
});
});
// ── getWeekYearLabel ──────────────────────────────────────────────────────
describe('getWeekYearLabel', () => {
it('formats as "S{NN}-{YYYY}"', () => {
const label = getWeekYearLabel(new Date(2026, 2, 10));
expect(label).toMatch(/^S\d{2}-\d{4}$/);
});
it('returns "S11-2026" for March 10 2026', () => {
expect(getWeekYearLabel(new Date(2026, 2, 10))).toBe('S11-2026');
});
it('zero-pads single-digit week numbers', () => {
// Jan 4, 2026 is week 1
expect(getWeekYearLabel(new Date(2026, 0, 4))).toBe('S01-2026');
});
});
// ── getWeekBounds ─────────────────────────────────────────────────────────
describe('getWeekBounds', () => {
it('returns Monday as start for a Wednesday', () => {
const { start } = getWeekBounds(new Date(2026, 2, 11)); // Wednesday March 11
expect(start.getDate()).toBe(9);
expect(start.getMonth()).toBe(2); // March
expect(start.getFullYear()).toBe(2026);
});
it('returns Sunday as end for a Wednesday', () => {
const { end } = getWeekBounds(new Date(2026, 2, 11));
expect(end.getDate()).toBe(15);
expect(end.getMonth()).toBe(2); // March
});
it('start is at 00:00:00.000', () => {
const { start } = getWeekBounds(new Date(2026, 2, 10));
expect(start.getHours()).toBe(0);
expect(start.getMinutes()).toBe(0);
expect(start.getSeconds()).toBe(0);
expect(start.getMilliseconds()).toBe(0);
});
it('end is at 23:59:59.999', () => {
const { end } = getWeekBounds(new Date(2026, 2, 10));
expect(end.getHours()).toBe(23);
expect(end.getMinutes()).toBe(59);
expect(end.getSeconds()).toBe(59);
expect(end.getMilliseconds()).toBe(999);
});
it('start and end are 6 days apart', () => {
const { start, end } = getWeekBounds(new Date(2026, 2, 10));
const diffDays = (end.getTime() - start.getTime()) / (1000 * 60 * 60 * 24);
expect(Math.floor(diffDays)).toBe(6);
});
it('returns same bounds for any day within the same week', () => {
const monday = getWeekBounds(new Date(2026, 2, 9));
const wednesday = getWeekBounds(new Date(2026, 2, 11));
const saturday = getWeekBounds(new Date(2026, 2, 14));
expect(monday.start.getTime()).toBe(wednesday.start.getTime());
expect(monday.start.getTime()).toBe(saturday.start.getTime());
expect(monday.end.getTime()).toBe(wednesday.end.getTime());
});
it('returns Monday as start when given a Monday', () => {
const { start } = getWeekBounds(new Date(2026, 2, 9)); // Monday March 9
expect(start.getDate()).toBe(9);
});
});
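The assertions above fully pin down the contract: Monday 00:00:00.000 through Sunday 23:59:59.999 in local time. A minimal reconstruction consistent with these tests (hypothetical, not necessarily the actual implementation in `@/lib/date-utils`):

```typescript
// Hypothetical sketch of getWeekBounds, derived only from the tests above.
function getWeekBounds(date: Date): { start: Date; end: Date } {
  const start = new Date(date);
  const daysSinceMonday = (start.getDay() + 6) % 7; // getDay(): 0 = Sunday, 1 = Monday
  start.setDate(start.getDate() - daysSinceMonday);
  start.setHours(0, 0, 0, 0); // Monday 00:00:00.000
  const end = new Date(start);
  end.setDate(end.getDate() + 6);
  end.setHours(23, 59, 59, 999); // Sunday 23:59:59.999
  return { start, end };
}
```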


@@ -0,0 +1,57 @@
import { describe, it, expect } from 'vitest';
import { getGravatarUrl } from '@/lib/gravatar';
import { createHash } from 'crypto';
// ── getGravatarUrl ─────────────────────────────────────────────────────────
describe('getGravatarUrl', () => {
it('generates a valid gravatar URL', () => {
const url = getGravatarUrl('test@example.com');
expect(url).toMatch(/^https:\/\/www\.gravatar\.com\/avatar\/[0-9a-f]{32}\?d=identicon&s=\d+$/);
});
it('uses MD5 hash of lowercased/trimmed email', () => {
const email = 'Test@Example.COM';
const hash = createHash('md5').update(email.toLowerCase().trim()).digest('hex');
const url = getGravatarUrl(email);
expect(url).toContain(`/avatar/${hash}`);
});
it('produces same URL regardless of email case', () => {
const lower = getGravatarUrl('user@example.com');
const upper = getGravatarUrl('USER@EXAMPLE.COM');
expect(lower).toBe(upper);
});
it('uses default size of 40', () => {
const url = getGravatarUrl('test@example.com');
expect(url).toContain('s=40');
});
it('uses custom size when provided', () => {
const url = getGravatarUrl('test@example.com', 80);
expect(url).toContain('s=80');
});
it('uses identicon as default fallback', () => {
const url = getGravatarUrl('test@example.com');
expect(url).toContain('d=identicon');
});
it('uses custom fallback when provided', () => {
const url = getGravatarUrl('test@example.com', 40, 'retro');
expect(url).toContain('d=retro');
});
it('produces different hashes for different emails', () => {
const url1 = getGravatarUrl('alice@example.com');
const url2 = getGravatarUrl('bob@example.com');
expect(url1).not.toBe(url2);
});
it('trims whitespace from email before hashing', () => {
const trimmed = getGravatarUrl('user@example.com');
const padded = getGravatarUrl(' user@example.com ');
expect(trimmed).toBe(padded);
});
});
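The tests above specify the full URL shape: MD5 of the trimmed, lowercased email, then `?d={fallback}&s={size}` with defaults `identicon` and `40`. A minimal reconstruction consistent with them (the real implementation in `@/lib/gravatar` may differ):

```typescript
import { createHash } from 'crypto';

// Hypothetical reconstruction of getGravatarUrl, derived only from the tests above.
function getGravatarUrl(email: string, size = 40, fallback = 'identicon'): string {
  const hash = createHash('md5').update(email.trim().toLowerCase()).digest('hex');
  return `https://www.gravatar.com/avatar/${hash}?d=${fallback}&s=${size}`;
}
```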


@@ -0,0 +1,105 @@
import { describe, it, expect } from 'vitest';
import { getCurrentQuarterPeriod, isCurrentQuarterPeriod, comparePeriods } from '@/lib/okr-utils';
// ── getCurrentQuarterPeriod ────────────────────────────────────────────────
describe('getCurrentQuarterPeriod', () => {
it('returns Q1 for January', () => {
expect(getCurrentQuarterPeriod(new Date(2026, 0, 15))).toBe('Q1 2026');
});
it('returns Q1 for March', () => {
expect(getCurrentQuarterPeriod(new Date(2026, 2, 31))).toBe('Q1 2026');
});
it('returns Q2 for April', () => {
expect(getCurrentQuarterPeriod(new Date(2026, 3, 1))).toBe('Q2 2026');
});
it('returns Q2 for June', () => {
expect(getCurrentQuarterPeriod(new Date(2026, 5, 30))).toBe('Q2 2026');
});
it('returns Q3 for July', () => {
expect(getCurrentQuarterPeriod(new Date(2026, 6, 1))).toBe('Q3 2026');
});
it('returns Q4 for October', () => {
expect(getCurrentQuarterPeriod(new Date(2026, 9, 1))).toBe('Q4 2026');
});
it('returns Q4 for December', () => {
expect(getCurrentQuarterPeriod(new Date(2026, 11, 31))).toBe('Q4 2026');
});
it('includes the correct year', () => {
expect(getCurrentQuarterPeriod(new Date(2025, 0, 1))).toBe('Q1 2025');
expect(getCurrentQuarterPeriod(new Date(2027, 6, 1))).toBe('Q3 2027');
});
});
// ── isCurrentQuarterPeriod ─────────────────────────────────────────────────
describe('isCurrentQuarterPeriod', () => {
it('returns true for matching period and date', () => {
expect(isCurrentQuarterPeriod('Q1 2026', new Date(2026, 1, 15))).toBe(true);
});
it('returns false for different quarter', () => {
expect(isCurrentQuarterPeriod('Q2 2026', new Date(2026, 0, 15))).toBe(false);
});
it('returns false for different year', () => {
expect(isCurrentQuarterPeriod('Q1 2025', new Date(2026, 0, 15))).toBe(false);
});
it('returns false for empty string', () => {
expect(isCurrentQuarterPeriod('', new Date(2026, 0, 15))).toBe(false);
});
});
// ── comparePeriods ─────────────────────────────────────────────────────────
describe('comparePeriods', () => {
it('returns 0 for identical periods', () => {
expect(comparePeriods('Q1 2026', 'Q1 2026')).toBe(0);
});
it('returns negative when a is more recent than b (a should sort first)', () => {
// Q2 2026 is more recent than Q1 2026
expect(comparePeriods('Q2 2026', 'Q1 2026')).toBeLessThan(0);
});
it('returns positive when a is older than b (a should sort after)', () => {
expect(comparePeriods('Q1 2026', 'Q2 2026')).toBeGreaterThan(0);
});
it('sorts by year before quarter', () => {
// Q4 2025 is older than Q1 2026
expect(comparePeriods('Q4 2025', 'Q1 2026')).toBeGreaterThan(0);
expect(comparePeriods('Q1 2026', 'Q4 2025')).toBeLessThan(0);
});
it('handles same year, different quarters correctly', () => {
expect(comparePeriods('Q4 2026', 'Q1 2026')).toBeLessThan(0);
expect(comparePeriods('Q1 2026', 'Q4 2026')).toBeGreaterThan(0);
});
it('puts parseable period before unparseable one', () => {
// 'Q1 2026' can be parsed, 'custom period' cannot
expect(comparePeriods('Q1 2026', 'custom period')).toBeLessThan(0);
expect(comparePeriods('custom period', 'Q1 2026')).toBeGreaterThan(0);
});
it('falls back to string comparison when neither is parseable', () => {
const result = comparePeriods('beta', 'alpha');
// Descending string fallback: 'alpha'.localeCompare('beta') < 0, so 'beta' sorts first
expect(result).toBeLessThan(0);
});
it('produces correct sort order for a mixed list', () => {
const periods = ['Q1 2025', 'Q4 2026', 'Q2 2026', 'Q1 2026'];
const sorted = [...periods].sort(comparePeriods);
expect(sorted).toEqual(['Q4 2026', 'Q2 2026', 'Q1 2026', 'Q1 2025']);
});
});
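The ordering contract the tests encode is: descending by year, then by quarter, parseable periods before unparseable ones, and a descending string comparison as the last resort. A sketch consistent with these assertions (hypothetical, not the actual `@/lib/okr-utils` code):

```typescript
// Hypothetical reconstruction of comparePeriods, derived only from the tests above.
function parsePeriod(p: string): { q: number; y: number } | null {
  const m = /^Q([1-4])\s+(\d{4})$/.exec(p);
  return m ? { q: Number(m[1]), y: Number(m[2]) } : null;
}

function comparePeriods(a: string, b: string): number {
  const pa = parsePeriod(a);
  const pb = parsePeriod(b);
  if (pa && pb) return pb.y - pa.y || pb.q - pa.q; // descending: most recent first
  if (pa) return -1; // parseable sorts before unparseable
  if (pb) return 1;
  return b.localeCompare(a); // descending string fallback
}
```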


@@ -0,0 +1,69 @@
import { describe, it, expect } from 'vitest';
import { getTeamMembersForShare } from '@/lib/share-utils';
const alice = { id: 'u1', email: 'alice@ex.com', name: 'Alice' };
const bob = { id: 'u2', email: 'bob@ex.com', name: 'Bob' };
const charlie = { id: 'u3', email: 'charlie@ex.com', name: 'Charlie' };
// ── getTeamMembersForShare ─────────────────────────────────────────────────
describe('getTeamMembersForShare', () => {
it('returns empty array when teams have no members', () => {
const result = getTeamMembersForShare([{ id: 't1', name: 'Team', description: null, members: [] }], 'u1');
expect(result).toEqual([]);
});
it('returns empty array when all members are the current user', () => {
const teams = [{ id: 't1', name: 'Team', description: null, members: [{ user: alice }] }];
const result = getTeamMembersForShare(teams, alice.id);
expect(result).toEqual([]);
});
it('excludes the current user from results', () => {
const teams = [{ id: 't1', name: 'Team', description: null, members: [{ user: alice }, { user: bob }] }];
const result = getTeamMembersForShare(teams, alice.id);
expect(result).toHaveLength(1);
expect(result[0].id).toBe(bob.id);
});
it('deduplicates users appearing in multiple teams', () => {
const teams = [
{ id: 't1', name: 'Team A', description: null, members: [{ user: bob }, { user: charlie }] },
{ id: 't2', name: 'Team B', description: null, members: [{ user: bob }] }, // bob repeated
];
const result = getTeamMembersForShare(teams, alice.id);
const ids = result.map((u) => u.id);
expect(ids).toContain(bob.id);
expect(ids).toContain(charlie.id);
expect(ids.filter((id) => id === bob.id)).toHaveLength(1); // no duplicate
});
it('returns all unique members across teams excluding current user', () => {
const teams = [
{ id: 't1', name: 'T1', description: null, members: [{ user: alice }, { user: bob }] },
{ id: 't2', name: 'T2', description: null, members: [{ user: charlie }] },
];
const result = getTeamMembersForShare(teams, alice.id);
const ids = result.map((u) => u.id);
expect(ids).not.toContain(alice.id);
expect(ids).toContain(bob.id);
expect(ids).toContain(charlie.id);
expect(result).toHaveLength(2);
});
it('handles teams without members property (undefined)', () => {
const teams = [{ id: 't1', name: 'Team', description: null }]; // no members key
const result = getTeamMembersForShare(teams, alice.id);
expect(result).toEqual([]);
});
it('returns empty array when no teams provided', () => {
expect(getTeamMembersForShare([], 'u1')).toEqual([]);
});
it('preserves user fields (id, email, name)', () => {
const teams = [{ id: 't1', name: 'T', description: null, members: [{ user: bob }] }];
const result = getTeamMembersForShare(teams, alice.id);
expect(result[0]).toEqual(bob);
});
});
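The behavior under test reduces to: flatten members across teams, drop the current user, and deduplicate by id. A sketch consistent with the assertions (types and implementation are assumptions, not the actual `@/lib/share-utils` code):

```typescript
// Hypothetical reconstruction of getTeamMembersForShare, derived only from the tests above.
interface ShareUser { id: string; email: string; name: string }
interface ShareTeam { id: string; name: string; description: string | null; members?: { user: ShareUser }[] }

function getTeamMembersForShare(teams: ShareTeam[], currentUserId: string): ShareUser[] {
  const byId = new Map<string, ShareUser>();
  for (const team of teams) {
    for (const { user } of team.members ?? []) { // tolerate a missing members key
      if (user.id !== currentUserId) byId.set(user.id, user); // exclude self, dedupe by id
    }
  }
  return [...byId.values()];
}
```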


@@ -0,0 +1,117 @@
import { describe, it, expect } from 'vitest';
import { getEmojiScore, getAverageEmoji, getEmojiEvolution, WEATHER_EMOJIS } from '@/lib/weather-utils';
// ── getEmojiScore ─────────────────────────────────────────────────────────
describe('getEmojiScore', () => {
it('returns null for null', () => {
expect(getEmojiScore(null)).toBeNull();
});
it('returns null for undefined', () => {
expect(getEmojiScore(undefined)).toBeNull();
});
it('returns null for empty string (the "Aucun"/none entry at index 0)', () => {
expect(getEmojiScore('')).toBeNull();
});
it('returns null for unknown emoji', () => {
expect(getEmojiScore('🦄')).toBeNull();
});
it('returns 1 for ☀️ (first scored emoji)', () => {
expect(getEmojiScore('☀️')).toBe(1);
});
it('returns 19 for ✨ (last emoji in list)', () => {
expect(getEmojiScore('✨')).toBe(19);
});
it('returns consistent 1-based index for all emojis in WEATHER_EMOJIS', () => {
for (let i = 1; i < WEATHER_EMOJIS.length; i++) {
expect(getEmojiScore(WEATHER_EMOJIS[i].emoji)).toBe(i);
}
});
});
// ── getAverageEmoji ───────────────────────────────────────────────────────
describe('getAverageEmoji', () => {
it('returns null for empty array', () => {
expect(getAverageEmoji([])).toBeNull();
});
it('returns null when all values are null/undefined', () => {
expect(getAverageEmoji([null, undefined, null])).toBeNull();
});
it('returns null when all emojis are unknown (score null)', () => {
expect(getAverageEmoji(['🦄', '🤖'])).toBeNull();
});
it('returns the same emoji when only one is provided', () => {
expect(getAverageEmoji(['☀️'])).toBe('☀️');
});
it('ignores null values and uses only scored emojis', () => {
// Only ☀️ (1) is valid
expect(getAverageEmoji([null, '☀️', null])).toBe('☀️');
});
it('returns emoji closest to the average score', () => {
// ☀️ = 1, ☁️ = 4 → avg = 2.5 → closest is 🌤️(2) or ⛅(3), both at dist 0.5
// Code picks 🌤️ (lower index wins on tie), but the assertion accepts either
const result = getAverageEmoji(['☀️', '☁️']);
expect(['🌤️', '⛅']).toContain(result);
});
it('picks the lower-index emoji when two scores are equidistant from the average', () => {
// ☀️ = 1, 🌤️ = 2 → avg = 1.5 → dist(☀️)=0.5, dist(🌤️)=0.5 → ☀️ wins (lower index)
expect(getAverageEmoji(['☀️', '🌤️'])).toBe('☀️');
});
});
// ── getEmojiEvolution ─────────────────────────────────────────────────────
describe('getEmojiEvolution', () => {
it('returns null when current is null', () => {
expect(getEmojiEvolution(null, '☀️')).toBeNull();
});
it('returns null when previous is null', () => {
expect(getEmojiEvolution('☀️', null)).toBeNull();
});
it('returns null when both are null', () => {
expect(getEmojiEvolution(null, null)).toBeNull();
});
it('returns null when current is unknown emoji', () => {
expect(getEmojiEvolution('🦄', '☀️')).toBeNull();
});
it('returns null when previous is unknown emoji', () => {
expect(getEmojiEvolution('☀️', '🦄')).toBeNull();
});
it('returns "same" when both emojis are identical', () => {
expect(getEmojiEvolution('☀️', '☀️')).toBe('same');
});
it('returns "up" when current score is lower (better weather)', () => {
// ☀️ = 1 (better), 🌧️ = 6 (worse)
// going from 🌧️ to ☀️: delta = 1 - 6 = -5 → "up"
expect(getEmojiEvolution('☀️', '🌧️')).toBe('up');
});
it('returns "down" when current score is higher (worse weather)', () => {
// going from ☀️ to 🌧️: delta = 6 - 1 = 5 → "down"
expect(getEmojiEvolution('🌧️', '☀️')).toBe('down');
});
it('handles adjacent emojis', () => {
// ☀️(1) → 🌤️(2): delta = 2-1 = 1 → "down" (slight degradation)
expect(getEmojiEvolution('🌤️', '☀️')).toBe('down');
});
});


@@ -0,0 +1,140 @@
import { describe, it, expect } from 'vitest';
import {
WORKSHOPS,
WORKSHOP_BY_ID,
WORKSHOP_TYPE_IDS,
VALID_TAB_PARAMS,
getWorkshop,
getSessionPath,
getSessionsTabUrl,
withWorkshopType,
} from '@/lib/workshops';
// ── Registry integrity ─────────────────────────────────────────────────────
describe('WORKSHOPS registry', () => {
it('contains exactly 6 workshop types', () => {
expect(WORKSHOPS).toHaveLength(6);
});
it('all workshop IDs match WORKSHOP_TYPE_IDS', () => {
const ids = WORKSHOPS.map((w) => w.id);
expect(ids).toEqual([...WORKSHOP_TYPE_IDS]);
});
it('every workshop has required fields', () => {
for (const w of WORKSHOPS) {
expect(w.id).toBeTruthy();
expect(w.path).toMatch(/^\//);
expect(w.newPath).toMatch(/^\//);
expect(w.label).toBeTruthy();
expect(w.icon).toBeTruthy();
}
});
it('workshops with participant have non-empty participantLabel', () => {
for (const w of WORKSHOPS) {
if (w.hasParticipant) {
expect(w.participantLabel).toBeTruthy();
} else {
expect(w.participantLabel).toBe('');
}
}
});
it('every workshop has home content with at least 1 feature', () => {
for (const w of WORKSHOPS) {
expect(w.home.tagline).toBeTruthy();
expect(w.home.description).toBeTruthy();
expect(w.home.features.length).toBeGreaterThanOrEqual(1);
}
});
});
describe('WORKSHOP_BY_ID', () => {
it('contains an entry for every workshop type', () => {
for (const id of WORKSHOP_TYPE_IDS) {
expect(WORKSHOP_BY_ID[id]).toBeDefined();
expect(WORKSHOP_BY_ID[id].id).toBe(id);
}
});
});
describe('VALID_TAB_PARAMS', () => {
it('includes all workshop IDs plus "all", "byPerson", "team"', () => {
expect(VALID_TAB_PARAMS).toContain('all');
expect(VALID_TAB_PARAMS).toContain('byPerson');
expect(VALID_TAB_PARAMS).toContain('team');
for (const id of WORKSHOP_TYPE_IDS) {
expect(VALID_TAB_PARAMS).toContain(id);
}
});
});
// ── getWorkshop ────────────────────────────────────────────────────────────
describe('getWorkshop', () => {
it('returns the correct workshop config', () => {
const swot = getWorkshop('swot');
expect(swot.id).toBe('swot');
expect(swot.path).toBe('/sessions');
});
it('returns different configs for different IDs', () => {
expect(getWorkshop('swot').path).not.toBe(getWorkshop('motivators').path);
});
});
// ── getSessionPath ─────────────────────────────────────────────────────────
describe('getSessionPath', () => {
it('builds path for swot', () => {
expect(getSessionPath('swot', 'abc123')).toBe('/sessions/abc123');
});
it('builds path for motivators', () => {
expect(getSessionPath('motivators', 'xyz')).toBe('/motivators/xyz');
});
it('builds path for weather', () => {
expect(getSessionPath('weather', 'w-1')).toBe('/weather/w-1');
});
});
// ── getSessionsTabUrl ──────────────────────────────────────────────────────
describe('getSessionsTabUrl', () => {
it('generates correct tab URL', () => {
expect(getSessionsTabUrl('swot')).toBe('/sessions?tab=swot');
expect(getSessionsTabUrl('motivators')).toBe('/sessions?tab=motivators');
expect(getSessionsTabUrl('weather')).toBe('/sessions?tab=weather');
});
});
// ── withWorkshopType ───────────────────────────────────────────────────────
describe('withWorkshopType', () => {
it('adds workshopType to each item', () => {
const sessions = [{ id: 'a' }, { id: 'b' }];
const result = withWorkshopType(sessions, 'swot');
expect(result[0]).toEqual({ id: 'a', workshopType: 'swot' });
expect(result[1]).toEqual({ id: 'b', workshopType: 'swot' });
});
it('returns empty array for empty input', () => {
expect(withWorkshopType([], 'motivators')).toEqual([]);
});
it('preserves all original properties', () => {
const sessions = [{ id: 'x', title: 'Test', createdAt: new Date() }];
const result = withWorkshopType(sessions, 'year-review');
expect(result[0].title).toBe('Test');
expect(result[0].workshopType).toBe('year-review');
});
it('does not mutate the original array', () => {
const sessions = [{ id: 'a' }];
withWorkshopType(sessions, 'swot');
expect(sessions[0]).not.toHaveProperty('workshopType');
});
});

src/lib/broadcast.ts (new file)

@@ -0,0 +1,92 @@
/**
* Generic SSE broadcast module.
* One polling interval per active session (shared across all connections to that session).
* Server Actions call broadcast() directly for immediate push; polling is the fallback.
*
* NOTE: In-process only — works for single-process standalone Next.js deployments.
*/
interface Subscriber {
userId: string;
cb: (event: unknown) => void;
}
interface BroadcastEvent {
userId: string;
createdAt: Date;
}
export function createBroadcaster<E extends BroadcastEvent>(
fetchEvents: (sessionId: string, since: Date) => Promise<E[]>,
formatEvent: (event: E) => unknown
) {
const subscribers = new Map<string, Set<Subscriber>>();
const intervals = new Map<string, ReturnType<typeof setInterval>>();
const lastEventTimes = new Map<string, Date>();
function startPolling(sessionId: string) {
if (intervals.has(sessionId)) return;
lastEventTimes.set(sessionId, new Date());
const interval = setInterval(async () => {
const subs = subscribers.get(sessionId);
if (!subs || subs.size === 0) return;
try {
const since = lastEventTimes.get(sessionId)!;
const events = await fetchEvents(sessionId, since);
for (const event of events) {
const formatted = formatEvent(event);
for (const sub of subs) {
if (sub.userId !== event.userId) {
sub.cb(formatted);
}
}
lastEventTimes.set(sessionId, event.createdAt);
}
} catch {
// Ignore polling errors — will retry next interval
}
}, 1000);
intervals.set(sessionId, interval);
}
function stopPolling(sessionId: string) {
const interval = intervals.get(sessionId);
if (interval !== undefined) {
clearInterval(interval);
intervals.delete(sessionId);
lastEventTimes.delete(sessionId);
}
}
/** Subscribe to events for a session. Returns an unsubscribe function. */
function subscribe(sessionId: string, userId: string, cb: (event: unknown) => void): () => void {
if (!subscribers.has(sessionId)) {
subscribers.set(sessionId, new Set());
}
const subscriber: Subscriber = { userId, cb };
subscribers.get(sessionId)!.add(subscriber);
startPolling(sessionId);
let removed = false;
return () => {
if (removed) return;
removed = true;
subscribers.get(sessionId)?.delete(subscriber);
if (subscribers.get(sessionId)?.size === 0) {
subscribers.delete(sessionId);
stopPolling(sessionId);
}
};
}
/** Broadcast an event to all subscribers of a session (called from Server Actions). */
function broadcast(sessionId: string, event: unknown) {
const subs = subscribers.get(sessionId);
if (!subs) return;
for (const sub of subs) {
sub.cb(event);
}
}
return { subscribe, broadcast };
}
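Each SSE route then only has to wire subscribe() into a response stream. A sketch of that per-route plumbing (illustrative only; the actual route handlers are not shown in this diff, and the names are assumptions):

```typescript
// SSE wire format: each event is a "data:" line followed by a blank line.
function sseFrame(event: unknown): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Illustrative route-handler shape (commented out: it depends on a concrete
// broadcaster instance and a Next.js runtime, both assumed here):
// export async function GET(request: Request, { params }: { params: { id: string } }) {
//   const encoder = new TextEncoder();
//   const stream = new ReadableStream({
//     start(controller) {
//       const unsubscribe = broadcaster.subscribe(params.id, userId, (event) => {
//         controller.enqueue(encoder.encode(sseFrame(event)));
//       });
//       request.signal.addEventListener('abort', () => {
//         unsubscribe(); // last unsubscribe stops the shared polling interval
//         controller.close();
//       });
//     },
//   });
//   return new Response(stream, {
//     headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' },
//   });
// }
```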

src/lib/cache-tags.ts (new file)

@@ -0,0 +1,7 @@
/**
* Next.js cache tag helpers for unstable_cache invalidation.
*/
export const sessionTag = (id: string) => `session:${id}`;
export const sessionsListTag = (userId: string) => `sessions-list:${userId}`;
export const userStatsTag = (userId: string) => `user-stats:${userId}`;
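The intended pairing: list services register the tag on their cached reads, and Server Actions invalidate the same tag after a mutation. A sketch (the Next.js calls are shown as comments because they need a Next runtime; the query shape is an assumption, not the actual service code):

```typescript
// Tag helper as defined above, repeated here so the sketch is self-contained.
const sessionsListTag = (userId: string) => `sessions-list:${userId}`;

// Read side (list service), using unstable_cache from 'next/cache':
//   const getSessions = unstable_cache(
//     () => prisma.session.findMany({ where: { userId }, select: { /* explicit fields */ } }),
//     ['sessions-list', userId],
//     { tags: [sessionsListTag(userId)] }
//   );
// Write side (Server Action, after create/delete), using revalidateTag from 'next/cache':
//   revalidateTag(sessionsListTag(userId));
```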


@@ -791,6 +791,8 @@ export const EMOTION_BY_TYPE: Record<Emotion, EmotionConfig> = EMOTIONS_CONFIG.r
// ============================================
export const GIF_MOOD_MAX_ITEMS = 5;
export const WEATHER_HISTORY_LIMIT = 90;
export const SESSIONS_PAGE_SIZE = 20;
export interface GifMoodItem {
id: string;


@@ -0,0 +1,290 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import {
batchResolveCollaborators,
resolveCollaborator,
registerUser,
updateUserPassword,
updateUserProfile,
} from '@/services/auth';
vi.mock('@/services/database', () => ({
prisma: {
user: {
findMany: vi.fn(),
findUnique: vi.fn(),
findFirst: vi.fn(),
create: vi.fn(),
update: vi.fn(),
},
},
}));
vi.mock('bcryptjs', () => ({
hash: vi.fn().mockResolvedValue('hashed_password'),
compare: vi.fn(),
}));
// Import after mock to get the mocked version
const { prisma } = await import('@/services/database');
const mockFindMany = vi.mocked(prisma.user.findMany);
const mockFindUnique = vi.mocked(prisma.user.findUnique);
const mockFindFirst = vi.mocked(prisma.user.findFirst);
const mockCreate = vi.mocked(prisma.user.create);
const mockUpdate = vi.mocked(prisma.user.update);
const { hash: mockHash, compare: mockCompare } = await import('bcryptjs');
const mockHashFn = vi.mocked(mockHash);
const mockCompareFn = vi.mocked(mockCompare);
const alice = { id: 'u1', email: 'alice@example.com', name: 'Alice' };
const bob = { id: 'u2', email: 'bob@example.com', name: 'Bob' };
// ── batchResolveCollaborators ──────────────────────────────────────────────
describe('batchResolveCollaborators', () => {
beforeEach(() => vi.clearAllMocks());
it('returns empty map for empty input without DB calls', async () => {
const result = await batchResolveCollaborators([]);
expect(result.size).toBe(0);
expect(mockFindMany).not.toHaveBeenCalled();
});
it('resolves an email to its user', async () => {
mockFindMany.mockResolvedValueOnce([alice]); // email query
const result = await batchResolveCollaborators(['alice@example.com']);
expect(result.get('alice@example.com')).toEqual({ raw: 'alice@example.com', matchedUser: alice });
});
it('email matching is case-insensitive', async () => {
mockFindMany.mockResolvedValueOnce([alice]); // DB returns lowercase email
const result = await batchResolveCollaborators(['Alice@Example.COM']);
expect(result.get('Alice@Example.COM')?.matchedUser).toEqual(alice);
});
it('resolves a name to its user', async () => {
mockFindMany.mockResolvedValueOnce([bob]); // name query (no emails → 1 call)
const result = await batchResolveCollaborators(['Bob']);
expect(result.get('Bob')).toEqual({ raw: 'Bob', matchedUser: bob });
});
it('name matching is case-insensitive', async () => {
mockFindMany.mockResolvedValueOnce([bob]);
const result = await batchResolveCollaborators(['BOB']);
expect(result.get('BOB')?.matchedUser).toEqual(bob);
});
it('returns null matchedUser when email not found', async () => {
mockFindMany.mockResolvedValueOnce([]); // no match
const result = await batchResolveCollaborators(['nobody@example.com']);
expect(result.get('nobody@example.com')).toEqual({ raw: 'nobody@example.com', matchedUser: null });
});
it('returns null matchedUser when name not found', async () => {
mockFindMany.mockResolvedValueOnce([]);
const result = await batchResolveCollaborators(['Unknown']);
expect(result.get('Unknown')).toEqual({ raw: 'Unknown', matchedUser: null });
});
it('handles mixed emails and names in exactly 2 DB queries', async () => {
mockFindMany
.mockResolvedValueOnce([alice]) // email query
.mockResolvedValueOnce([bob]); // name query
const result = await batchResolveCollaborators(['alice@example.com', 'Bob']);
expect(mockFindMany).toHaveBeenCalledTimes(2);
expect(result.get('alice@example.com')?.matchedUser).toEqual(alice);
expect(result.get('Bob')?.matchedUser).toEqual(bob);
});
it('deduplicates identical inputs (one map entry per unique value)', async () => {
mockFindMany.mockResolvedValueOnce([alice]); // only emails → 1 call
const result = await batchResolveCollaborators(['alice@example.com', 'alice@example.com']);
expect(mockFindMany).toHaveBeenCalledTimes(1);
expect(result.size).toBe(1);
});
it('skips email query when no emails present', async () => {
mockFindMany.mockResolvedValueOnce([bob]);
await batchResolveCollaborators(['Bob']);
expect(mockFindMany).toHaveBeenCalledTimes(1);
expect(mockFindMany).toHaveBeenCalledWith(
expect.objectContaining({ where: expect.objectContaining({ OR: expect.any(Array) }) })
);
});
it('skips name query when no names present', async () => {
mockFindMany.mockResolvedValueOnce([alice]);
await batchResolveCollaborators(['alice@example.com']);
expect(mockFindMany).toHaveBeenCalledTimes(1);
expect(mockFindMany).toHaveBeenCalledWith(
expect.objectContaining({ where: expect.objectContaining({ email: expect.any(Object) }) })
);
});
it('handles whitespace-padded inputs', async () => {
mockFindMany.mockResolvedValueOnce([alice]);
const result = await batchResolveCollaborators([' alice@example.com ']);
expect(result.get('alice@example.com')?.matchedUser).toEqual(alice);
});
it('resolves multiple unique names in one query', async () => {
const charlie = { id: 'u3', email: 'c@example.com', name: 'Charlie' };
mockFindMany.mockResolvedValueOnce([bob, charlie]);
const result = await batchResolveCollaborators(['Bob', 'Charlie']);
expect(mockFindMany).toHaveBeenCalledTimes(1);
expect(result.get('Bob')?.matchedUser).toEqual(bob);
expect(result.get('Charlie')?.matchedUser).toEqual(charlie);
});
});
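The batching behaviour pinned down above (trim, dedupe, route to an email or a name query, skip empty buckets) can be sketched as a small pure helper. The name `splitCollaboratorInputs` and the lowercasing of emails are assumptions for illustration; the real `batchResolveCollaborators` presumably does an equivalent partition before issuing its two `findMany` calls.

```typescript
// Hypothetical helper: partition raw collaborator inputs the way the
// batching tests expect — trim whitespace, deduplicate, and route anything
// containing '@' to the email bucket (lowercased), everything else to names.
function splitCollaboratorInputs(raw: string[]): { emails: string[]; names: string[] } {
  const emails = new Set<string>();
  const names = new Set<string>();
  for (const value of raw) {
    const trimmed = value.trim();
    if (trimmed.length === 0) continue;
    if (trimmed.includes('@')) emails.add(trimmed.toLowerCase());
    else names.add(trimmed);
  }
  return { emails: [...emails], names: [...names] };
}
```

With this split, an empty `emails` array skips the email query and an empty `names` array skips the name query, which is exactly what the two "skips … query" tests assert.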
// ── resolveCollaborator ────────────────────────────────────────────────────
describe('resolveCollaborator', () => {
beforeEach(() => vi.clearAllMocks());
it('resolves email via findUnique', async () => {
mockFindUnique.mockResolvedValueOnce(alice);
const result = await resolveCollaborator('alice@example.com');
expect(result).toEqual({ raw: 'alice@example.com', matchedUser: alice });
expect(mockFindUnique).toHaveBeenCalledWith({ where: { email: 'alice@example.com' }, select: expect.any(Object) });
});
it('returns null matchedUser when email not found', async () => {
mockFindUnique.mockResolvedValueOnce(null);
const result = await resolveCollaborator('ghost@example.com');
expect(result.matchedUser).toBeNull();
});
it('resolves name via findFirst when not an email', async () => {
mockFindFirst.mockResolvedValueOnce(bob);
const result = await resolveCollaborator('Bob');
expect(result.matchedUser).toEqual(bob);
});
it('returns null matchedUser when name not found', async () => {
mockFindFirst.mockResolvedValueOnce(null);
const result = await resolveCollaborator('Nobody');
expect(result.matchedUser).toBeNull();
});
it('rejects partial name matches (contains but not exact)', async () => {
// findFirst returns a user whose name doesn't match exactly
mockFindFirst.mockResolvedValueOnce({ id: 'u1', email: 'a@ex.com', name: 'Bobby' });
const result = await resolveCollaborator('Bob');
expect(result.matchedUser).toBeNull();
});
});
// ── registerUser ──────────────────────────────────────────────────────────
describe('registerUser', () => {
beforeEach(() => vi.clearAllMocks());
it('returns error when email already exists', async () => {
mockFindUnique.mockResolvedValueOnce(alice);
const result = await registerUser({ email: 'alice@example.com', password: 'secret' });
expect(result.success).toBe(false);
expect(result.error).toMatch(/existe déjà/);
expect(mockCreate).not.toHaveBeenCalled();
});
it('hashes password and creates user on success', async () => {
mockFindUnique.mockResolvedValueOnce(null);
mockCreate.mockResolvedValueOnce({ id: 'new-id', email: 'new@example.com', name: null });
const result = await registerUser({ email: 'new@example.com', password: 'secret' });
expect(mockHashFn).toHaveBeenCalledWith('secret', 12);
expect(mockCreate).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ email: 'new@example.com', password: 'hashed_password' }) })
);
expect(result.success).toBe(true);
expect(result.user?.email).toBe('new@example.com');
});
it('returns only id, email, name fields', async () => {
mockFindUnique.mockResolvedValueOnce(null);
mockCreate.mockResolvedValueOnce({ id: 'x', email: 'x@ex.com', name: 'X' });
const result = await registerUser({ email: 'x@ex.com', password: 'p', name: 'X' });
expect(result.user).toEqual({ id: 'x', email: 'x@ex.com', name: 'X' });
});
it('sets name to null when not provided', async () => {
mockFindUnique.mockResolvedValueOnce(null);
mockCreate.mockResolvedValueOnce({ id: 'y', email: 'y@ex.com', name: null });
await registerUser({ email: 'y@ex.com', password: 'p' });
expect(mockCreate).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ name: null }) })
);
});
});
// ── updateUserPassword ────────────────────────────────────────────────────
describe('updateUserPassword', () => {
beforeEach(() => vi.clearAllMocks());
it('returns error when user not found', async () => {
mockFindUnique.mockResolvedValueOnce(null);
const result = await updateUserPassword('u1', 'old', 'new');
expect(result.success).toBe(false);
expect(result.error).toMatch(/non trouvé/);
});
it('returns error when current password is incorrect', async () => {
mockFindUnique.mockResolvedValueOnce({ id: 'u1', password: 'hashed_old' });
mockCompareFn.mockResolvedValueOnce(false as never);
const result = await updateUserPassword('u1', 'wrong', 'new');
expect(result.success).toBe(false);
expect(result.error).toMatch(/incorrect/);
expect(mockUpdate).not.toHaveBeenCalled();
});
it('hashes new password and updates on success', async () => {
mockFindUnique.mockResolvedValueOnce({ id: 'u1', password: 'hashed_old' });
mockCompareFn.mockResolvedValueOnce(true as never);
mockUpdate.mockResolvedValueOnce({});
const result = await updateUserPassword('u1', 'old', 'newpass');
expect(mockHashFn).toHaveBeenCalledWith('newpass', 12);
expect(mockUpdate).toHaveBeenCalledWith(
expect.objectContaining({ data: { password: 'hashed_password' } })
);
expect(result.success).toBe(true);
});
});
// ── updateUserProfile ─────────────────────────────────────────────────────
describe('updateUserProfile', () => {
beforeEach(() => vi.clearAllMocks());
it('returns error when email is taken by another user', async () => {
mockFindFirst.mockResolvedValueOnce({ id: 'other', email: 'taken@ex.com' });
const result = await updateUserProfile('u1', { email: 'taken@ex.com' });
expect(result.success).toBe(false);
expect(result.error).toMatch(/déjà utilisé/);
expect(mockUpdate).not.toHaveBeenCalled();
});
it('updates name and email on success', async () => {
mockFindFirst.mockResolvedValueOnce(null); // email not taken
mockUpdate.mockResolvedValueOnce({ id: 'u1', email: 'new@ex.com', name: 'New Name' });
const result = await updateUserProfile('u1', { name: 'New Name', email: 'new@ex.com' });
expect(result.success).toBe(true);
expect(result.user).toEqual({ id: 'u1', email: 'new@ex.com', name: 'New Name' });
});
it('skips email uniqueness check when no email provided', async () => {
mockUpdate.mockResolvedValueOnce({ id: 'u1', email: 'old@ex.com', name: 'Updated' });
const result = await updateUserProfile('u1', { name: 'Updated' });
expect(mockFindFirst).not.toHaveBeenCalled();
expect(result.success).toBe(true);
});
it('passes correct query to check email uniqueness (excludes self)', async () => {
mockFindFirst.mockResolvedValueOnce(null);
mockUpdate.mockResolvedValueOnce({ id: 'u1', email: 'new@ex.com', name: null });
await updateUserProfile('u1', { email: 'new@ex.com' });
expect(mockFindFirst).toHaveBeenCalledWith(
expect.objectContaining({ where: expect.objectContaining({ NOT: { id: 'u1' } }) })
);
});
});


@@ -0,0 +1,333 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { createOKR, updateKeyResult, updateOKR, calculateOKRProgress } from '@/services/okrs';
vi.mock('@/services/database', () => {
const mockTx = {
oKR: {
create: vi.fn(),
update: vi.fn(),
findUnique: vi.fn(),
},
keyResult: {
create: vi.fn(),
update: vi.fn(),
deleteMany: vi.fn(),
},
};
return {
prisma: {
oKR: {
findUnique: vi.fn(),
update: vi.fn(),
},
keyResult: {
findUnique: vi.fn(),
update: vi.fn(),
},
$transaction: vi.fn().mockImplementation(async (fn: ((tx: typeof mockTx) => unknown) | Promise<unknown>[]) => {
// Support both the callback form ($transaction(fn)) and the batch form ($transaction([...]))
if (typeof fn === 'function') return fn(mockTx);
return Promise.all(fn);
}),
_mockTx: mockTx,
},
};
});
const { prisma } = await import('@/services/database');
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const mockTx = (prisma as any)._mockTx;
// ── calculateOKRProgress ──────────────────────────────────────────────────
describe('calculateOKRProgress', () => {
beforeEach(() => vi.clearAllMocks());
it('returns 0 when OKR not found', async () => {
vi.mocked(prisma.oKR.findUnique).mockResolvedValueOnce(null);
const result = await calculateOKRProgress('okr-1');
expect(result).toBe(0);
});
it('returns 0 for OKR with empty key results', async () => {
vi.mocked(prisma.oKR.findUnique).mockResolvedValueOnce({ id: 'okr-1', keyResults: [] } as never);
const result = await calculateOKRProgress('okr-1');
expect(result).toBe(0);
});
it('calculates average progress across key results', async () => {
vi.mocked(prisma.oKR.findUnique).mockResolvedValueOnce({
id: 'okr-1',
keyResults: [
{ currentValue: 50, targetValue: 100 }, // 50%
{ currentValue: 100, targetValue: 100 }, // 100%
],
} as never);
const result = await calculateOKRProgress('okr-1');
expect(result).toBe(75);
});
it('caps individual KR progress at 100%', async () => {
vi.mocked(prisma.oKR.findUnique).mockResolvedValueOnce({
id: 'okr-1',
keyResults: [
{ currentValue: 200, targetValue: 100 }, // over 100% → capped at 100
{ currentValue: 0, targetValue: 100 }, // 0%
],
} as never);
const result = await calculateOKRProgress('okr-1');
expect(result).toBe(50); // (100 + 0) / 2
});
it('returns 0 when targetValue is 0 (avoids division by zero)', async () => {
vi.mocked(prisma.oKR.findUnique).mockResolvedValueOnce({
id: 'okr-1',
keyResults: [{ currentValue: 50, targetValue: 0 }],
} as never);
const result = await calculateOKRProgress('okr-1');
expect(result).toBe(0);
});
});
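Taken together, these cases imply an averaging rule along the following lines. This is a pure sketch with an assumed name (`averageProgress`); the real `calculateOKRProgress` first loads the key results via Prisma.

```typescript
type KeyResultProgress = { currentValue: number; targetValue: number };

// Sketch of the averaging rule the tests pin down: each KR contributes
// min(current/target, 1) * 100, a zero target counts as 0% (no division
// by zero), and the OKR progress is the rounded mean across KRs.
function averageProgress(keyResults: KeyResultProgress[]): number {
  if (keyResults.length === 0) return 0;
  const total = keyResults.reduce((sum, kr) => {
    if (kr.targetValue === 0) return sum;
    return sum + Math.min((kr.currentValue / kr.targetValue) * 100, 100);
  }, 0);
  return Math.round(total / keyResults.length);
}
```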
// ── createOKR ─────────────────────────────────────────────────────────────
describe('createOKR', () => {
beforeEach(() => vi.clearAllMocks());
it('creates OKR with NOT_STARTED status in transaction', async () => {
const createdOKR = { id: 'okr-1', teamMemberId: 'tm-1', objective: 'Grow revenue', status: 'NOT_STARTED' };
mockTx.oKR.create.mockResolvedValueOnce(createdOKR);
mockTx.keyResult.create.mockResolvedValue({ id: 'kr-1' });
const result = await createOKR(
'tm-1',
'Grow revenue',
null,
'Q1-2026',
new Date(),
new Date(),
[{ title: 'KR1', targetValue: 100, unit: '%', order: 0 }]
);
expect(vi.mocked(prisma.$transaction)).toHaveBeenCalledTimes(1);
expect(mockTx.oKR.create).toHaveBeenCalledWith(
expect.objectContaining({
data: expect.objectContaining({ status: 'NOT_STARTED', teamMemberId: 'tm-1' }),
})
);
expect(result).toMatchObject({ id: 'okr-1' });
});
it('creates all key results with NOT_STARTED status', async () => {
const createdOKR = { id: 'okr-1' };
mockTx.oKR.create.mockResolvedValueOnce(createdOKR);
mockTx.keyResult.create.mockResolvedValue({ id: 'kr-1', status: 'NOT_STARTED' });
const keyResults = [
{ title: 'KR1', targetValue: 100, unit: '%', order: 0 },
{ title: 'KR2', targetValue: 50, unit: 'units', order: 1 },
];
await createOKR('tm-1', 'Obj', null, 'Q1', new Date(), new Date(), keyResults);
expect(mockTx.keyResult.create).toHaveBeenCalledTimes(2);
expect(mockTx.keyResult.create).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ status: 'NOT_STARTED', currentValue: 0 }) })
);
});
it('forwards the provided order value to the created key result', async () => {
mockTx.oKR.create.mockResolvedValueOnce({ id: 'okr-1' });
mockTx.keyResult.create.mockResolvedValue({ id: 'kr-1' });
// Pass a keyResult with an explicit order and expect it to be used as-is
await createOKR('tm-1', 'Obj', null, 'Q1', new Date(), new Date(), [
{ title: 'KR1', targetValue: 100, unit: '%', order: 5 },
]);
expect(mockTx.keyResult.create).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ order: 5 }) })
);
});
it('defaults unit to % when unit is empty string', async () => {
mockTx.oKR.create.mockResolvedValueOnce({ id: 'okr-1' });
mockTx.keyResult.create.mockResolvedValue({ id: 'kr-1' });
await createOKR('tm-1', 'Obj', null, 'Q1', new Date(), new Date(), [
{ title: 'KR1', targetValue: 100, unit: '', order: 0 },
]);
expect(mockTx.keyResult.create).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ unit: '%' }) })
);
});
});
// ── updateKeyResult ───────────────────────────────────────────────────────
describe('updateKeyResult', () => {
beforeEach(() => vi.clearAllMocks());
function makeKR(currentValue: number, targetValue: number, okrStatus = 'NOT_STARTED') {
return {
id: 'kr-1',
targetValue,
notes: null,
status: 'NOT_STARTED',
okr: { id: 'okr-1', status: okrStatus, keyResults: [{ currentValue, targetValue }] },
};
}
it('throws when key result not found', async () => {
vi.mocked(prisma.keyResult.findUnique).mockResolvedValueOnce(null);
await expect(updateKeyResult('kr-1', 50, null)).rejects.toThrow('Key Result not found');
});
it('sets status to NOT_STARTED when progress is 0', async () => {
vi.mocked(prisma.keyResult.findUnique).mockResolvedValueOnce(makeKR(0, 100) as never);
vi.mocked(prisma.keyResult.update).mockResolvedValueOnce(makeKR(0, 100) as never);
await updateKeyResult('kr-1', 0, null);
expect(vi.mocked(prisma.keyResult.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ status: 'NOT_STARTED' }) })
);
});
it('sets status to AT_RISK when progress is below 50%', async () => {
vi.mocked(prisma.keyResult.findUnique).mockResolvedValueOnce(makeKR(0, 100) as never);
vi.mocked(prisma.keyResult.update).mockResolvedValueOnce(makeKR(30, 100) as never);
await updateKeyResult('kr-1', 30, null);
expect(vi.mocked(prisma.keyResult.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ status: 'AT_RISK' }) })
);
});
it('sets status to IN_PROGRESS when progress is 50% or more (but less than 100%)', async () => {
vi.mocked(prisma.keyResult.findUnique).mockResolvedValueOnce(makeKR(0, 100) as never);
vi.mocked(prisma.keyResult.update).mockResolvedValueOnce(makeKR(70, 100) as never);
await updateKeyResult('kr-1', 70, null);
expect(vi.mocked(prisma.keyResult.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ status: 'IN_PROGRESS' }) })
);
});
it('sets status to COMPLETED when progress reaches 100%', async () => {
vi.mocked(prisma.keyResult.findUnique).mockResolvedValueOnce(makeKR(0, 100) as never);
vi.mocked(prisma.keyResult.update).mockResolvedValueOnce(makeKR(100, 100) as never);
await updateKeyResult('kr-1', 100, null);
expect(vi.mocked(prisma.keyResult.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ status: 'COMPLETED' }) })
);
});
it('updates OKR status to IN_PROGRESS when progress > 0', async () => {
const kr = makeKR(0, 100, 'NOT_STARTED');
vi.mocked(prisma.keyResult.findUnique).mockResolvedValueOnce(kr as never);
// update returns KR with okr having keyResults at 50%
const updatedKr = {
...kr,
okr: { id: 'okr-1', status: 'NOT_STARTED', keyResults: [{ currentValue: 50, targetValue: 100 }] },
};
vi.mocked(prisma.keyResult.update).mockResolvedValueOnce(updatedKr as never);
await updateKeyResult('kr-1', 50, null);
expect(vi.mocked(prisma.oKR.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: { status: 'IN_PROGRESS' } })
);
});
it('updates OKR status to COMPLETED when all KRs are 100%', async () => {
const kr = makeKR(0, 100, 'IN_PROGRESS');
vi.mocked(prisma.keyResult.findUnique).mockResolvedValueOnce(kr as never);
const updatedKr = {
...kr,
okr: { id: 'okr-1', status: 'IN_PROGRESS', keyResults: [{ currentValue: 100, targetValue: 100 }] },
};
vi.mocked(prisma.keyResult.update).mockResolvedValueOnce(updatedKr as never);
await updateKeyResult('kr-1', 100, null);
expect(vi.mocked(prisma.oKR.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: { status: 'COMPLETED' } })
);
});
it('does not update OKR status if it has not changed', async () => {
const kr = makeKR(0, 100, 'IN_PROGRESS');
vi.mocked(prisma.keyResult.findUnique).mockResolvedValueOnce(kr as never);
const updatedKr = {
...kr,
okr: { id: 'okr-1', status: 'IN_PROGRESS', keyResults: [{ currentValue: 60, targetValue: 100 }] },
};
vi.mocked(prisma.keyResult.update).mockResolvedValueOnce(updatedKr as never);
await updateKeyResult('kr-1', 60, null);
expect(vi.mocked(prisma.oKR.update)).not.toHaveBeenCalled();
});
});
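The four status cases above suggest a threshold mapping roughly like this. The helper name `statusForProgress` is hypothetical; the real `updateKeyResult` derives the percentage from the new value and the KR's target before updating.

```typescript
type KRStatus = 'NOT_STARTED' | 'AT_RISK' | 'IN_PROGRESS' | 'COMPLETED';

// Threshold mapping implied by the tests: 0% → NOT_STARTED,
// (0, 50) → AT_RISK, [50, 100) → IN_PROGRESS, 100% → COMPLETED.
function statusForProgress(currentValue: number, targetValue: number): KRStatus {
  const pct = targetValue === 0 ? 0 : Math.min((currentValue / targetValue) * 100, 100);
  if (pct === 0) return 'NOT_STARTED';
  if (pct >= 100) return 'COMPLETED';
  return pct < 50 ? 'AT_RISK' : 'IN_PROGRESS';
}
```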
// ── updateOKR ─────────────────────────────────────────────────────────────
describe('updateOKR', () => {
beforeEach(() => vi.clearAllMocks());
it('updates OKR fields in transaction', async () => {
mockTx.oKR.update.mockResolvedValueOnce({});
mockTx.oKR.findUnique.mockResolvedValueOnce({ id: 'okr-1', status: 'IN_PROGRESS', keyResults: [] });
await updateOKR('okr-1', { objective: 'New objective', status: 'IN_PROGRESS' });
expect(mockTx.oKR.update).toHaveBeenCalledWith(
expect.objectContaining({
where: { id: 'okr-1' },
data: expect.objectContaining({ objective: 'New objective', status: 'IN_PROGRESS' }),
})
);
});
it('deletes key results when delete list provided', async () => {
mockTx.oKR.update.mockResolvedValueOnce({});
mockTx.keyResult.deleteMany.mockResolvedValueOnce({ count: 1 });
mockTx.oKR.findUnique.mockResolvedValueOnce({ id: 'okr-1', status: 'NOT_STARTED', keyResults: [] });
await updateOKR('okr-1', {}, { delete: ['kr-1', 'kr-2'] });
expect(mockTx.keyResult.deleteMany).toHaveBeenCalledWith(
expect.objectContaining({ where: { id: { in: ['kr-1', 'kr-2'] }, okrId: 'okr-1' } })
);
});
it('creates new key results when create list provided', async () => {
mockTx.oKR.update.mockResolvedValueOnce({});
mockTx.keyResult.create.mockResolvedValue({ id: 'kr-new' });
mockTx.oKR.findUnique.mockResolvedValueOnce({ id: 'okr-1', status: 'NOT_STARTED', keyResults: [] });
await updateOKR('okr-1', {}, {
create: [{ title: 'New KR', targetValue: 100, unit: '%', order: 0 }],
});
expect(mockTx.keyResult.create).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ title: 'New KR', currentValue: 0, status: 'NOT_STARTED' }) })
);
});
it('throws when OKR not found after update', async () => {
mockTx.oKR.update.mockResolvedValueOnce({});
mockTx.oKR.findUnique.mockResolvedValueOnce(null);
await expect(updateOKR('okr-1', {})).rejects.toThrow('OKR not found after update');
});
it('includes progress in returned OKR', async () => {
mockTx.oKR.update.mockResolvedValueOnce({});
mockTx.oKR.findUnique.mockResolvedValueOnce({
id: 'okr-1',
status: 'IN_PROGRESS',
keyResults: [{ currentValue: 50, targetValue: 100 }],
});
const result = await updateOKR('okr-1', {});
expect(result).toMatchObject({ id: 'okr-1', progress: 50 });
});
});


@@ -0,0 +1,156 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import {
createSessionPermissionChecks,
withAdminFallback,
canDeleteByOwner,
} from '@/services/session-permissions';
vi.mock('@/services/teams', () => ({
isAdminOfUser: vi.fn(),
}));
const { isAdminOfUser } = await import('@/services/teams');
const mockIsAdminOfUser = vi.mocked(isAdminOfUser);
// Factory for mock Prisma model delegate
function makeModel(countResult: number, ownerId: string | null = 'owner-1') {
return {
count: vi.fn().mockResolvedValue(countResult),
findUnique: vi.fn().mockResolvedValue(ownerId ? { userId: ownerId } : null),
};
}
describe('createSessionPermissionChecks', () => {
beforeEach(() => {
vi.clearAllMocks();
mockIsAdminOfUser.mockResolvedValue(false);
});
describe('canAccess', () => {
it('returns true when user has direct access (count > 0)', async () => {
const { canAccess } = createSessionPermissionChecks(makeModel(1));
expect(await canAccess('session-1', 'user-1')).toBe(true);
});
it('returns true when no direct access but user is team admin', async () => {
mockIsAdminOfUser.mockResolvedValue(true);
const { canAccess } = createSessionPermissionChecks(makeModel(0, 'owner-1'));
expect(await canAccess('session-1', 'admin-1')).toBe(true);
});
it('returns false when no direct access and not admin', async () => {
const { canAccess } = createSessionPermissionChecks(makeModel(0, 'owner-1'));
expect(await canAccess('session-1', 'stranger')).toBe(false);
});
it('returns false when no direct access and session owner not found', async () => {
const { canAccess } = createSessionPermissionChecks(makeModel(0, null));
expect(await canAccess('session-1', 'anyone')).toBe(false);
});
});
describe('canEdit', () => {
it('returns true when user is owner or editor (count > 0)', async () => {
const { canEdit } = createSessionPermissionChecks(makeModel(1));
expect(await canEdit('session-1', 'editor-1')).toBe(true);
});
it('returns false for viewer (count = 0) when not admin', async () => {
const { canEdit } = createSessionPermissionChecks(makeModel(0, 'owner-1'));
expect(await canEdit('session-1', 'viewer-1')).toBe(false);
});
it('returns true for viewer when user is team admin', async () => {
mockIsAdminOfUser.mockResolvedValue(true);
const { canEdit } = createSessionPermissionChecks(makeModel(0, 'owner-1'));
expect(await canEdit('session-1', 'admin-1')).toBe(true);
});
});
describe('canDelete', () => {
it('returns true for session owner', async () => {
const { canDelete } = createSessionPermissionChecks(makeModel(1, 'owner-1'));
expect(await canDelete('session-1', 'owner-1')).toBe(true);
});
it('returns false for non-owner even with EDITOR role', async () => {
const { canDelete } = createSessionPermissionChecks(makeModel(1, 'owner-1'));
expect(await canDelete('session-1', 'editor-1')).toBe(false);
});
it('returns true when user is team admin of the owner', async () => {
mockIsAdminOfUser.mockResolvedValue(true);
const { canDelete } = createSessionPermissionChecks(makeModel(0, 'owner-1'));
expect(await canDelete('session-1', 'admin-1')).toBe(true);
});
it('returns false when session not found', async () => {
const { canDelete } = createSessionPermissionChecks(makeModel(0, null));
expect(await canDelete('session-1', 'anyone')).toBe(false);
});
});
});
describe('withAdminFallback', () => {
beforeEach(() => {
vi.clearAllMocks();
mockIsAdminOfUser.mockResolvedValue(false);
});
it('returns true immediately when hasDirectAccess is true (no admin check)', async () => {
const getOwnerId = vi.fn();
const result = await withAdminFallback(true, getOwnerId, 'session-1', 'user-1');
expect(result).toBe(true);
expect(getOwnerId).not.toHaveBeenCalled();
expect(mockIsAdminOfUser).not.toHaveBeenCalled();
});
it('falls back to admin check when no direct access', async () => {
mockIsAdminOfUser.mockResolvedValue(true);
const getOwnerId = vi.fn().mockResolvedValue('owner-1');
const result = await withAdminFallback(false, getOwnerId, 'session-1', 'admin-1');
expect(result).toBe(true);
expect(mockIsAdminOfUser).toHaveBeenCalledWith('owner-1', 'admin-1');
});
it('returns false when no direct access and not admin', async () => {
const getOwnerId = vi.fn().mockResolvedValue('owner-1');
const result = await withAdminFallback(false, getOwnerId, 'session-1', 'stranger');
expect(result).toBe(false);
});
it('returns false when owner not found', async () => {
const getOwnerId = vi.fn().mockResolvedValue(null);
const result = await withAdminFallback(false, getOwnerId, 'session-1', 'anyone');
expect(result).toBe(false);
expect(mockIsAdminOfUser).not.toHaveBeenCalled();
});
});
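The decision flow these tests assert can be sketched as follows. The `isAdmin` dependency is injected here purely to keep the sketch self-contained, which differs from the real module, where `isAdminOfUser` is imported from `@/services/teams`.

```typescript
// Sketch of the admin-fallback flow: direct access short-circuits,
// an unknown owner denies without an admin check, otherwise defer to
// the injected admin predicate.
async function adminFallbackSketch(
  hasDirectAccess: boolean,
  getOwnerId: (sessionId: string) => Promise<string | null>,
  sessionId: string,
  userId: string,
  isAdmin: (ownerId: string, userId: string) => Promise<boolean>,
): Promise<boolean> {
  if (hasDirectAccess) return true; // no owner lookup, no admin check
  const ownerId = await getOwnerId(sessionId);
  if (!ownerId) return false; // session/owner not found: deny
  return isAdmin(ownerId, userId);
}
```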
describe('canDeleteByOwner', () => {
beforeEach(() => {
vi.clearAllMocks();
mockIsAdminOfUser.mockResolvedValue(false);
});
it('returns true when userId matches ownerId', async () => {
const getOwnerId = vi.fn().mockResolvedValue('user-1');
expect(await canDeleteByOwner(getOwnerId, 'session-1', 'user-1')).toBe(true);
});
it('returns false when userId does not match and not admin', async () => {
const getOwnerId = vi.fn().mockResolvedValue('owner-1');
expect(await canDeleteByOwner(getOwnerId, 'session-1', 'other')).toBe(false);
});
it('returns true when user is admin of owner', async () => {
mockIsAdminOfUser.mockResolvedValue(true);
const getOwnerId = vi.fn().mockResolvedValue('owner-1');
expect(await canDeleteByOwner(getOwnerId, 'session-1', 'admin-1')).toBe(true);
});
it('returns false when session not found (ownerId is null)', async () => {
const getOwnerId = vi.fn().mockResolvedValue(null);
expect(await canDeleteByOwner(getOwnerId, 'session-1', 'anyone')).toBe(false);
});
});


@@ -0,0 +1,248 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import {
mergeSessionsByUserId,
fetchTeamCollaboratorSessions,
getSessionByIdGeneric,
} from '@/services/session-queries';
vi.mock('@/services/teams', () => ({
isAdminOfUser: vi.fn().mockResolvedValue(false),
getTeamMemberIdsForAdminTeams: vi.fn().mockResolvedValue([]),
}));
const { isAdminOfUser } = await import('@/services/teams');
const mockIsAdminOfUser = vi.mocked(isAdminOfUser);
// ── Shared test data ───────────────────────────────────────────────────────
const USER_ID = 'user-1';
function makeSession(overrides: Partial<{
id: string; updatedAt: Date; userId: string;
shares: Array<{ userId: string; role?: string }>;
}> = {}) {
return {
id: 's1',
updatedAt: new Date('2024-06-01'),
userId: USER_ID,
user: { id: USER_ID, name: 'Alice', email: 'alice@example.com' },
shares: [],
...overrides,
};
}
// ── mergeSessionsByUserId ──────────────────────────────────────────────────
describe('mergeSessionsByUserId', () => {
beforeEach(() => vi.clearAllMocks());
it('marks owned sessions with isOwner=true and role=OWNER', async () => {
const s = makeSession();
const result = await mergeSessionsByUserId(
vi.fn().mockResolvedValue([s]),
vi.fn().mockResolvedValue([]),
USER_ID
);
expect(result).toHaveLength(1);
expect(result[0]).toMatchObject({ id: 's1', isOwner: true, role: 'OWNER' });
});
it('marks shared sessions with isOwner=false and role from share', async () => {
const s = makeSession({ id: 's2', userId: 'other' });
const result = await mergeSessionsByUserId(
vi.fn().mockResolvedValue([]),
vi.fn().mockResolvedValue([{ session: s, role: 'VIEWER', createdAt: new Date() }]),
USER_ID
);
expect(result[0]).toMatchObject({ id: 's2', isOwner: false, role: 'VIEWER' });
});
it('sorts merged list by updatedAt descending', async () => {
const older = makeSession({ id: 's1', updatedAt: new Date('2024-01-01') });
const newer = makeSession({ id: 's2', updatedAt: new Date('2024-06-01') });
const result = await mergeSessionsByUserId(
vi.fn().mockResolvedValue([older, newer]),
vi.fn().mockResolvedValue([]),
USER_ID
);
expect(result[0].id).toBe('s2');
expect(result[1].id).toBe('s1');
});
it('merges owned and shared sessions together', async () => {
const owned = makeSession({ id: 'own' });
const shared = makeSession({ id: 'shared', userId: 'other', updatedAt: new Date('2020-01-01') });
const result = await mergeSessionsByUserId(
vi.fn().mockResolvedValue([owned]),
vi.fn().mockResolvedValue([{ session: shared, role: 'EDITOR', createdAt: new Date() }]),
USER_ID
);
expect(result).toHaveLength(2);
expect(result.map((s) => s.id)).toContain('own');
expect(result.map((s) => s.id)).toContain('shared');
});
it('returns empty array when no sessions', async () => {
const result = await mergeSessionsByUserId(
vi.fn().mockResolvedValue([]),
vi.fn().mockResolvedValue([]),
USER_ID
);
expect(result).toHaveLength(0);
});
it('applies resolveParticipant callback to each session', async () => {
const s = makeSession();
const resolveParticipant = vi.fn().mockResolvedValue({ extra: 'data' });
const result = await mergeSessionsByUserId(
vi.fn().mockResolvedValue([s]),
vi.fn().mockResolvedValue([]),
USER_ID,
resolveParticipant
);
expect(resolveParticipant).toHaveBeenCalledWith(expect.objectContaining({ id: 's1' }));
expect((result[0] as typeof result[0] & { extra: string }).extra).toBe('data');
});
});
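The merge-and-sort behaviour above can be sketched as a pure function over already-fetched lists. The shape is assumed for illustration; the real `mergeSessionsByUserId` additionally awaits the two fetch callbacks and an optional `resolveParticipant`.

```typescript
type SessionLike = { id: string; updatedAt: Date };

// Tag owned sessions as OWNER, shared ones with their share role,
// then sort the combined list by updatedAt, newest first.
function mergeSessionLists<T extends SessionLike>(
  owned: T[],
  shared: Array<{ session: T; role: string }>,
): Array<T & { isOwner: boolean; role: string }> {
  const merged = [
    ...owned.map((s) => ({ ...s, isOwner: true, role: 'OWNER' })),
    ...shared.map(({ session, role }) => ({ ...session, isOwner: false, role })),
  ];
  return merged.sort((a, b) => b.updatedAt.getTime() - a.updatedAt.getTime());
}
```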
// ── fetchTeamCollaboratorSessions ──────────────────────────────────────────
describe('fetchTeamCollaboratorSessions', () => {
beforeEach(() => vi.clearAllMocks());
it('returns empty array when team has no members', async () => {
const result = await fetchTeamCollaboratorSessions(
vi.fn(),
vi.fn().mockResolvedValue([]),
USER_ID
);
expect(result).toHaveLength(0);
});
it('marks sessions with isTeamCollab=true, canEdit=true, role=VIEWER', async () => {
const s = makeSession({ id: 'team-s', userId: 'member-1' });
const result = await fetchTeamCollaboratorSessions(
vi.fn().mockResolvedValue([s]),
vi.fn().mockResolvedValue(['member-1']),
USER_ID
);
expect(result[0]).toMatchObject({ id: 'team-s', isOwner: false, role: 'VIEWER', isTeamCollab: true, canEdit: true });
});
it('calls fetchTeamSessions with team member ids', async () => {
const fetchTeamSessions = vi.fn().mockResolvedValue([]);
await fetchTeamCollaboratorSessions(
fetchTeamSessions,
vi.fn().mockResolvedValue(['m1', 'm2']),
USER_ID
);
expect(fetchTeamSessions).toHaveBeenCalledWith(['m1', 'm2'], USER_ID);
});
it('applies resolveParticipant callback', async () => {
const s = makeSession({ id: 't-s', userId: 'member-1' });
const resolveParticipant = vi.fn().mockResolvedValue({ resolved: true });
const result = await fetchTeamCollaboratorSessions(
vi.fn().mockResolvedValue([s]),
vi.fn().mockResolvedValue(['member-1']),
USER_ID,
resolveParticipant
);
expect(resolveParticipant).toHaveBeenCalled();
expect((result[0] as typeof result[0] & { resolved: boolean }).resolved).toBe(true);
});
});
// ── getSessionByIdGeneric ──────────────────────────────────────────────────
describe('getSessionByIdGeneric', () => {
beforeEach(() => {
vi.clearAllMocks();
mockIsAdminOfUser.mockResolvedValue(false);
});
it('returns session with isOwner=true when user is owner', async () => {
const s = makeSession({ userId: USER_ID, shares: [] });
const result = await getSessionByIdGeneric(
's1', USER_ID,
vi.fn().mockResolvedValue(s),
vi.fn()
);
expect(result).toMatchObject({ isOwner: true, role: 'OWNER', canEdit: true });
});
it('returns session with EDITOR role and canEdit=true for editor share', async () => {
const s = makeSession({ userId: 'owner-1', shares: [{ userId: USER_ID, role: 'EDITOR' }] });
const result = await getSessionByIdGeneric(
's1', USER_ID,
vi.fn().mockResolvedValue(s),
vi.fn()
);
expect(result).toMatchObject({ isOwner: false, role: 'EDITOR', canEdit: true });
});
it('returns session with VIEWER role and canEdit=false for viewer share', async () => {
const s = makeSession({ userId: 'owner-1', shares: [{ userId: USER_ID, role: 'VIEWER' }] });
const result = await getSessionByIdGeneric(
's1', USER_ID,
vi.fn().mockResolvedValue(s),
vi.fn()
);
expect(result).toMatchObject({ isOwner: false, role: 'VIEWER', canEdit: false });
});
it('returns null when session not found anywhere', async () => {
const result = await getSessionByIdGeneric(
'missing', USER_ID,
vi.fn().mockResolvedValue(null),
vi.fn().mockResolvedValue(null)
);
expect(result).toBeNull();
});
it('falls back to admin check when no direct access', async () => {
mockIsAdminOfUser.mockResolvedValue(true);
const s = makeSession({ userId: 'owner-1', shares: [] });
const result = await getSessionByIdGeneric(
's1', 'admin-1',
vi.fn().mockResolvedValue(null), // no direct access
vi.fn().mockResolvedValue(s) // but found by id
);
expect(result).not.toBeNull();
expect(mockIsAdminOfUser).toHaveBeenCalledWith('owner-1', 'admin-1');
});
it('returns null when no direct access and not admin', async () => {
const s = makeSession({ userId: 'owner-1', shares: [] });
const result = await getSessionByIdGeneric(
's1', 'stranger',
vi.fn().mockResolvedValue(null),
vi.fn().mockResolvedValue(s)
);
expect(result).toBeNull();
});
it('grants canEdit=true to team admin viewing member session', async () => {
mockIsAdminOfUser.mockResolvedValue(true);
const s = makeSession({ userId: 'owner-1', shares: [] });
const result = await getSessionByIdGeneric(
's1', 'admin-1',
vi.fn().mockResolvedValue(null),
vi.fn().mockResolvedValue(s)
);
expect(result?.canEdit).toBe(true);
});
it('applies resolveParticipant callback when provided', async () => {
const s = makeSession({ userId: USER_ID, shares: [] });
const resolveParticipant = vi.fn().mockResolvedValue({ participantName: 'Bob' });
const result = await getSessionByIdGeneric(
's1', USER_ID,
vi.fn().mockResolvedValue(s),
vi.fn(),
resolveParticipant
);
expect(resolveParticipant).toHaveBeenCalledWith(expect.objectContaining({ id: 's1' }));
expect((result as typeof result & { participantName: string })?.participantName).toBe('Bob');
});
});


@@ -0,0 +1,218 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { createShareAndEventHandlers } from '@/services/session-share-events';
vi.mock('@/services/database', () => ({
prisma: {
user: {
findUnique: vi.fn(),
},
},
}));
const { prisma } = await import('@/services/database');
const mockFindUnique = vi.mocked(prisma.user.findUnique);
// ── Mock delegate factories ────────────────────────────────────────────────
const OWNER_ID = 'owner-1';
const SESSION_ID = 'session-1';
function makeSessionModel(session: object | null = { id: SESSION_ID, userId: OWNER_ID }) {
return { findFirst: vi.fn().mockResolvedValue(session) };
}
function makeShareModel() {
return {
upsert: vi.fn().mockResolvedValue({ id: 'share-1', userId: 'target-1', role: 'EDITOR' }),
deleteMany: vi.fn().mockResolvedValue({ count: 1 }),
findMany: vi.fn().mockResolvedValue([]),
};
}
function makeEventModel(eventResult = { id: 'e1', sessionId: SESSION_ID, userId: OWNER_ID, type: 'UPDATE', payload: '{}', createdAt: new Date() }) {
return {
create: vi.fn().mockResolvedValue(eventResult),
findMany: vi.fn().mockResolvedValue([]),
findFirst: vi.fn().mockResolvedValue(null),
deleteMany: vi.fn().mockResolvedValue({ count: 0 }),
};
}
const canAccess = vi.fn().mockResolvedValue(true);
// ── share ──────────────────────────────────────────────────────────────────
describe('share', () => {
beforeEach(() => vi.clearAllMocks());
it('shares session with target user', async () => {
const targetUser = { id: 'target-1', email: 'bob@example.com', name: 'Bob' };
mockFindUnique.mockResolvedValue(targetUser);
const shareModel = makeShareModel();
const { share } = createShareAndEventHandlers(makeSessionModel(), shareModel, makeEventModel(), canAccess);
await share(SESSION_ID, OWNER_ID, 'bob@example.com', 'EDITOR');
expect(shareModel.upsert).toHaveBeenCalledWith(
expect.objectContaining({
where: { sessionId_userId: { sessionId: SESSION_ID, userId: 'target-1' } },
create: expect.objectContaining({ sessionId: SESSION_ID, userId: 'target-1', role: 'EDITOR' }),
})
);
});
it('throws when target user is not found', async () => {
mockFindUnique.mockResolvedValue(null);
const { share } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), makeEventModel(), canAccess);
await expect(share(SESSION_ID, OWNER_ID, 'ghost@example.com')).rejects.toThrow('User not found');
});
it('throws when trying to share with yourself', async () => {
mockFindUnique.mockResolvedValue({ id: OWNER_ID, email: 'owner@example.com', name: 'Owner' });
const { share } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), makeEventModel(), canAccess);
await expect(share(SESSION_ID, OWNER_ID, 'owner@example.com')).rejects.toThrow('Cannot share session with yourself');
});
it('throws when session not owned by caller', async () => {
mockFindUnique.mockResolvedValue({ id: 'target-1', email: 'bob@example.com', name: 'Bob' });
const sessionModel = makeSessionModel(null); // findFirst returns null → not owned
const { share } = createShareAndEventHandlers(sessionModel, makeShareModel(), makeEventModel(), canAccess);
await expect(share(SESSION_ID, 'not-owner', 'bob@example.com')).rejects.toThrow('Session not found or not owned');
});
it('defaults role to EDITOR', async () => {
const targetUser = { id: 'target-1', email: 'bob@example.com', name: 'Bob' };
mockFindUnique.mockResolvedValue(targetUser);
const shareModel = makeShareModel();
const { share } = createShareAndEventHandlers(makeSessionModel(), shareModel, makeEventModel(), canAccess);
await share(SESSION_ID, OWNER_ID, 'bob@example.com');
expect(shareModel.upsert).toHaveBeenCalledWith(
expect.objectContaining({ create: expect.objectContaining({ role: 'EDITOR' }) })
);
});
});
// ── removeShare ────────────────────────────────────────────────────────────
describe('removeShare', () => {
beforeEach(() => vi.clearAllMocks());
it('removes share when caller is owner', async () => {
const shareModel = makeShareModel();
const { removeShare } = createShareAndEventHandlers(makeSessionModel(), shareModel, makeEventModel(), canAccess);
await removeShare(SESSION_ID, OWNER_ID, 'target-1');
expect(shareModel.deleteMany).toHaveBeenCalledWith({ where: { sessionId: SESSION_ID, userId: 'target-1' } });
});
it('throws when session not owned by caller', async () => {
const { removeShare } = createShareAndEventHandlers(makeSessionModel(null), makeShareModel(), makeEventModel(), canAccess);
await expect(removeShare(SESSION_ID, 'not-owner', 'target-1')).rejects.toThrow('Session not found or not owned');
});
});
// ── createEvent ────────────────────────────────────────────────────────────
describe('createEvent', () => {
beforeEach(() => vi.clearAllMocks());
it('creates event with correct data', async () => {
const eventModel = makeEventModel();
const { createEvent } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), eventModel, canAccess);
await createEvent(SESSION_ID, OWNER_ID, 'UPDATE' as never, { key: 'value' });
expect(eventModel.create).toHaveBeenCalledWith({
data: {
sessionId: SESSION_ID,
userId: OWNER_ID,
type: 'UPDATE',
payload: JSON.stringify({ key: 'value' }),
},
});
});
it('triggers fire-and-forget cleanup after event creation', async () => {
const eventModel = makeEventModel();
const { createEvent } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), eventModel, canAccess);
await createEvent(SESSION_ID, OWNER_ID, 'UPDATE' as never, {});
// deleteMany is called as fire-and-forget (not awaited, but should be invoked)
await vi.waitFor(() => expect(eventModel.deleteMany).toHaveBeenCalledTimes(1));
expect(eventModel.deleteMany).toHaveBeenCalledWith(
expect.objectContaining({ where: { createdAt: expect.objectContaining({ lt: expect.any(Date) }) } })
);
});
it('does not throw when cleanup fails', async () => {
const eventModel = makeEventModel();
eventModel.deleteMany.mockRejectedValue(new Error('DB error'));
const { createEvent } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), eventModel, canAccess);
await expect(createEvent(SESSION_ID, OWNER_ID, 'UPDATE' as never, {})).resolves.toBeDefined();
});
it('returns the created event', async () => {
const created = { id: 'e42', sessionId: SESSION_ID, userId: OWNER_ID, type: 'UPDATE', payload: '{}', createdAt: new Date() };
const eventModel = makeEventModel(created);
const { createEvent } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), eventModel, canAccess);
const result = await createEvent(SESSION_ID, OWNER_ID, 'UPDATE' as never, {});
expect(result.id).toBe('e42');
});
});
// ── getEvents ──────────────────────────────────────────────────────────────
describe('getEvents', () => {
beforeEach(() => vi.clearAllMocks());
it('fetches all events for a session without since filter', async () => {
const eventModel = makeEventModel();
const { getEvents } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), eventModel, canAccess);
await getEvents(SESSION_ID);
expect(eventModel.findMany).toHaveBeenCalledWith(
expect.objectContaining({ where: { sessionId: SESSION_ID } })
);
});
it('filters events by since timestamp when provided', async () => {
const since = new Date('2024-01-01');
const eventModel = makeEventModel();
const { getEvents } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), eventModel, canAccess);
await getEvents(SESSION_ID, since);
expect(eventModel.findMany).toHaveBeenCalledWith(
expect.objectContaining({ where: { sessionId: SESSION_ID, createdAt: { gt: since } } })
);
});
});
// ── getShares ──────────────────────────────────────────────────────────────
describe('getShares', () => {
beforeEach(() => vi.clearAllMocks());
it('returns shares when user has access', async () => {
const shares = [{ id: 'sh-1', user: { id: 'u1', name: 'Bob', email: 'bob@ex.com' } }];
const shareModel = makeShareModel();
shareModel.findMany.mockResolvedValue(shares);
const { getShares } = createShareAndEventHandlers(makeSessionModel(), shareModel, makeEventModel(), canAccess);
const result = await getShares(SESSION_ID, OWNER_ID);
expect(result).toEqual(shares);
});
it('throws Access denied when user has no access', async () => {
const noAccess = vi.fn().mockResolvedValue(false);
const { getShares } = createShareAndEventHandlers(makeSessionModel(), makeShareModel(), makeEventModel(), noAccess);
await expect(getShares(SESSION_ID, 'stranger')).rejects.toThrow('Access denied');
});
});


@@ -0,0 +1,244 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
// Make React cache a transparent pass-through in tests
vi.mock('react', () => ({
cache: (fn: unknown) => fn,
}));
vi.mock('@/services/database', () => {
const mockTx = {
team: { create: vi.fn() },
teamMember: { create: vi.fn() },
};
return {
prisma: {
team: { create: vi.fn(), update: vi.fn() },
teamMember: {
findUnique: vi.fn(),
create: vi.fn(),
findMany: vi.fn(),
},
$transaction: vi.fn().mockImplementation(async (fn: (tx: typeof mockTx) => unknown) => fn(mockTx)),
_mockTx: mockTx,
},
};
});
import { createTeam, addTeamMember, isAdminOfUser, getTeamMemberIdsForAdminTeams, getUserTeams } from '@/services/teams';
const { prisma } = await import('@/services/database');
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const mockTx = (prisma as any)._mockTx;
// ── createTeam ────────────────────────────────────────────────────────────
describe('createTeam', () => {
beforeEach(() => vi.clearAllMocks());
it('creates team and adds creator as ADMIN in a transaction', async () => {
const team = { id: 'team-1', name: 'Alpha', description: null, createdById: 'user-1' };
mockTx.team.create.mockResolvedValueOnce(team);
mockTx.teamMember.create.mockResolvedValueOnce({});
const result = await createTeam('Alpha', null, 'user-1');
expect(vi.mocked(prisma.$transaction)).toHaveBeenCalledTimes(1);
expect(mockTx.team.create).toHaveBeenCalledWith(
expect.objectContaining({ data: { name: 'Alpha', description: null, createdById: 'user-1' } })
);
expect(mockTx.teamMember.create).toHaveBeenCalledWith(
expect.objectContaining({ data: { teamId: 'team-1', userId: 'user-1', role: 'ADMIN' } })
);
expect(result).toEqual(team);
});
it('returns the created team (not the member)', async () => {
const team = { id: 'team-2', name: 'Beta', description: 'desc', createdById: 'user-2' };
mockTx.team.create.mockResolvedValueOnce(team);
mockTx.teamMember.create.mockResolvedValueOnce({ id: 'member-1' });
const result = await createTeam('Beta', 'desc', 'user-2');
expect(result).toEqual(team);
});
});
// ── addTeamMember ─────────────────────────────────────────────────────────
describe('addTeamMember', () => {
beforeEach(() => vi.clearAllMocks());
it('throws when user is already a member', async () => {
vi.mocked(prisma.teamMember.findUnique).mockResolvedValueOnce({
teamId: 'team-1', userId: 'user-1', role: 'MEMBER',
} as never);
await expect(addTeamMember('team-1', 'user-1')).rejects.toThrow('déjà membre');
expect(vi.mocked(prisma.teamMember.create)).not.toHaveBeenCalled();
});
it('creates member with MEMBER role by default', async () => {
vi.mocked(prisma.teamMember.findUnique).mockResolvedValueOnce(null);
vi.mocked(prisma.teamMember.create).mockResolvedValueOnce({
userId: 'user-2', teamId: 'team-1', role: 'MEMBER',
} as never);
await addTeamMember('team-1', 'user-2');
expect(vi.mocked(prisma.teamMember.create)).toHaveBeenCalledWith(
expect.objectContaining({ data: { teamId: 'team-1', userId: 'user-2', role: 'MEMBER' } })
);
});
it('creates member with specified role', async () => {
vi.mocked(prisma.teamMember.findUnique).mockResolvedValueOnce(null);
vi.mocked(prisma.teamMember.create).mockResolvedValueOnce({} as never);
await addTeamMember('team-1', 'user-3', 'ADMIN');
expect(vi.mocked(prisma.teamMember.create)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ role: 'ADMIN' }) })
);
});
});
// ── getTeamMemberIdsForAdminTeams ──────────────────────────────────────────
describe('getTeamMemberIdsForAdminTeams', () => {
beforeEach(() => vi.clearAllMocks());
it('returns empty array when user is not admin of any team', async () => {
vi.mocked(prisma.teamMember.findMany).mockResolvedValueOnce([]);
const result = await getTeamMemberIdsForAdminTeams('user-1');
expect(result).toEqual([]);
// Should not query members when no admin teams
expect(vi.mocked(prisma.teamMember.findMany)).toHaveBeenCalledTimes(1);
});
it('returns deduplicated member IDs across admin teams', async () => {
// First call: get admin teams
vi.mocked(prisma.teamMember.findMany)
.mockResolvedValueOnce([{ teamId: 'team-1' }, { teamId: 'team-2' }] as never)
// Second call: get members of those teams
.mockResolvedValueOnce([{ userId: 'user-2' }, { userId: 'user-3' }] as never);
const result = await getTeamMemberIdsForAdminTeams('user-1');
expect(vi.mocked(prisma.teamMember.findMany)).toHaveBeenCalledTimes(2);
expect(result).toEqual(['user-2', 'user-3']);
});
it('queries members excluding the admin user themselves', async () => {
vi.mocked(prisma.teamMember.findMany)
.mockResolvedValueOnce([{ teamId: 'team-1' }] as never)
.mockResolvedValueOnce([{ userId: 'user-2' }] as never);
await getTeamMemberIdsForAdminTeams('admin-user');
expect(vi.mocked(prisma.teamMember.findMany)).toHaveBeenNthCalledWith(
2,
expect.objectContaining({ where: expect.objectContaining({ userId: { not: 'admin-user' } }) })
);
});
});
// ── isAdminOfUser ──────────────────────────────────────────────────────────
describe('isAdminOfUser', () => {
beforeEach(() => vi.clearAllMocks());
it('returns false when ownerUserId equals adminUserId', async () => {
const result = await isAdminOfUser('same-user', 'same-user');
expect(result).toBe(false);
expect(vi.mocked(prisma.teamMember.findMany)).not.toHaveBeenCalled();
});
it('returns true when adminUser is admin of a team containing ownerUser', async () => {
vi.mocked(prisma.teamMember.findMany)
.mockResolvedValueOnce([{ teamId: 'team-1' }] as never) // admin teams
.mockResolvedValueOnce([{ userId: 'owner-user' }] as never); // members
const result = await isAdminOfUser('owner-user', 'admin-user');
expect(result).toBe(true);
});
it('returns false when adminUser is not in same team as ownerUser', async () => {
vi.mocked(prisma.teamMember.findMany)
.mockResolvedValueOnce([{ teamId: 'team-1' }] as never)
.mockResolvedValueOnce([{ userId: 'other-user' }] as never); // different members
const result = await isAdminOfUser('owner-user', 'admin-user');
expect(result).toBe(false);
});
it('returns false when admin has no admin teams', async () => {
vi.mocked(prisma.teamMember.findMany).mockResolvedValueOnce([]);
const result = await isAdminOfUser('owner-user', 'admin-user');
expect(result).toBe(false);
});
});
// ── getUserTeams ───────────────────────────────────────────────────────────
describe('getUserTeams', () => {
beforeEach(() => vi.clearAllMocks());
it('transforms result to include userRole and userOkrCount', async () => {
const mockMembership = {
role: 'ADMIN',
_count: { okrs: 3 },
team: {
id: 'team-1',
name: 'Alpha',
description: null,
members: [],
_count: { members: 5 },
},
};
vi.mocked(prisma.teamMember.findMany).mockResolvedValueOnce([mockMembership] as never);
const result = await getUserTeams('user-1');
expect(result).toHaveLength(1);
expect(result[0]).toMatchObject({
id: 'team-1',
name: 'Alpha',
userRole: 'ADMIN',
userOkrCount: 3,
});
});
it('returns empty array when user is in no teams', async () => {
vi.mocked(prisma.teamMember.findMany).mockResolvedValueOnce([]);
const result = await getUserTeams('user-1');
expect(result).toEqual([]);
});
it('spreads team properties into the result', async () => {
const mockMembership = {
role: 'MEMBER',
_count: { okrs: 0 },
team: {
id: 'team-2',
name: 'Beta',
description: 'A team',
members: [{ id: 'm-1' }],
_count: { members: 1 },
},
};
vi.mocked(prisma.teamMember.findMany).mockResolvedValueOnce([mockMembership] as never);
const result = await getUserTeams('user-2');
expect(result[0]).toMatchObject({
id: 'team-2',
description: 'A team',
userRole: 'MEMBER',
userOkrCount: 0,
});
});
});


@@ -0,0 +1,279 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import {
getPreviousWeatherEntriesForUsers,
shareWeatherSessionToTeam,
getWeatherSessionsHistory,
} from '@/services/weather';
vi.mock('@/services/database', () => ({
prisma: {
weatherEntry: {
findMany: vi.fn(),
},
weatherSession: {
findFirst: vi.fn(),
findMany: vi.fn(),
count: vi.fn(),
},
weatherSessionShare: {
findMany: vi.fn(),
upsert: vi.fn(),
},
teamMember: {
findMany: vi.fn(),
},
},
}));
// teams service is imported by weather.ts
vi.mock('@/services/teams', () => ({
getTeamMemberIdsForAdminTeams: vi.fn(),
}));
// session-permissions and session-share-events factories run at module load, so mock them too
vi.mock('@/services/session-permissions', () => ({
createSessionPermissionChecks: () => ({
canAccess: vi.fn().mockResolvedValue(true),
canEdit: vi.fn().mockResolvedValue(true),
canDelete: vi.fn().mockResolvedValue(true),
}),
}));
vi.mock('@/services/session-share-events', () => ({
createShareAndEventHandlers: () => ({
share: vi.fn(),
removeShare: vi.fn(),
getShares: vi.fn(),
createEvent: vi.fn(),
getEvents: vi.fn(),
getLatestEventTimestamp: vi.fn(),
}),
}));
vi.mock('@/services/session-queries', () => ({
mergeSessionsByUserId: vi.fn(),
fetchTeamCollaboratorSessions: vi.fn(),
getSessionByIdGeneric: vi.fn(),
}));
const { prisma } = await import('@/services/database');
// ── getPreviousWeatherEntriesForUsers ──────────────────────────────────────
describe('getPreviousWeatherEntriesForUsers', () => {
beforeEach(() => vi.clearAllMocks());
it('returns empty map when userIds is empty', async () => {
const result = await getPreviousWeatherEntriesForUsers('s-1', new Date(), []);
expect(result.size).toBe(0);
expect(vi.mocked(prisma.weatherEntry.findMany)).not.toHaveBeenCalled();
});
it('returns one entry per user using most recent values', async () => {
const olderDate = new Date('2026-01-01');
const newerDate = new Date('2026-01-08');
vi.mocked(prisma.weatherEntry.findMany).mockResolvedValueOnce([
{ userId: 'u1', performanceEmoji: '☀️', moralEmoji: null, fluxEmoji: null, valueCreationEmoji: null, session: { date: newerDate } },
{ userId: 'u1', performanceEmoji: '🌧️', moralEmoji: '☀️', fluxEmoji: null, valueCreationEmoji: null, session: { date: olderDate } },
] as never);
const result = await getPreviousWeatherEntriesForUsers('current-session', new Date(), ['u1']);
const entry = result.get('u1');
// Most recent entry (newerDate) wins for performanceEmoji
expect(entry?.performanceEmoji).toBe('☀️');
// Falls back to older entry for moralEmoji (newer was null)
expect(entry?.moralEmoji).toBe('☀️');
});
it('uses per-axis fallback when latest entry has null values', async () => {
const older = new Date('2026-01-01');
const newer = new Date('2026-01-08');
vi.mocked(prisma.weatherEntry.findMany).mockResolvedValueOnce([
// Newer: has performance but not moral
{ userId: 'u1', performanceEmoji: '☁️', moralEmoji: null, fluxEmoji: null, valueCreationEmoji: null, session: { date: newer } },
// Older: has moral but not performance
{ userId: 'u1', performanceEmoji: null, moralEmoji: '🌤️', fluxEmoji: null, valueCreationEmoji: null, session: { date: older } },
] as never);
const result = await getPreviousWeatherEntriesForUsers('s-1', new Date(), ['u1']);
const entry = result.get('u1');
expect(entry?.performanceEmoji).toBe('☁️'); // from newer
expect(entry?.moralEmoji).toBe('🌤️'); // fallback from older
});
it('handles multiple users independently', async () => {
const date = new Date('2026-01-01');
vi.mocked(prisma.weatherEntry.findMany).mockResolvedValueOnce([
{ userId: 'u1', performanceEmoji: '☀️', moralEmoji: null, fluxEmoji: null, valueCreationEmoji: null, session: { date } },
{ userId: 'u2', performanceEmoji: '🌧️', moralEmoji: null, fluxEmoji: null, valueCreationEmoji: null, session: { date } },
] as never);
const result = await getPreviousWeatherEntriesForUsers('s-1', new Date(), ['u1', 'u2']);
expect(result.get('u1')?.performanceEmoji).toBe('☀️');
expect(result.get('u2')?.performanceEmoji).toBe('🌧️');
});
it('returns no entry for a user without previous entries', async () => {
vi.mocked(prisma.weatherEntry.findMany).mockResolvedValueOnce([]);
const result = await getPreviousWeatherEntriesForUsers('s-1', new Date(), ['u1']);
// No entries returned, map is empty
expect(result.get('u1')).toBeUndefined();
});
});
// ── shareWeatherSessionToTeam ──────────────────────────────────────────────
describe('shareWeatherSessionToTeam', () => {
beforeEach(() => vi.clearAllMocks());
it('throws when session not found or not owned', async () => {
vi.mocked(prisma.weatherSession.findFirst).mockResolvedValueOnce(null);
await expect(shareWeatherSessionToTeam('s-1', 'owner-1', 'team-1')).rejects.toThrow(
'Session not found or not owned'
);
});
it('throws when another weather session exists for same team this week', async () => {
const sessionDate = new Date('2026-03-10');
vi.mocked(prisma.weatherSession.findFirst).mockResolvedValueOnce({ id: 's-1', date: sessionDate } as never);
vi.mocked(prisma.teamMember.findMany)
.mockResolvedValueOnce([{ userId: 'u2' }, { userId: 'u3' }] as never); // count check
vi.mocked(prisma.weatherSession.count).mockResolvedValueOnce(1); // existing session this week
await expect(shareWeatherSessionToTeam('s-1', 'owner-1', 'team-1')).rejects.toThrow(
'déjà une météo pour cette semaine'
);
});
it('throws when team has no members', async () => {
const sessionDate = new Date('2026-03-10');
vi.mocked(prisma.weatherSession.findFirst).mockResolvedValueOnce({ id: 's-1', date: sessionDate } as never);
vi.mocked(prisma.teamMember.findMany)
.mockResolvedValueOnce([]) // no members for count check → skip week check
.mockResolvedValueOnce([]); // no full members either
await expect(shareWeatherSessionToTeam('s-1', 'owner-1', 'team-1')).rejects.toThrow(
'Team has no members'
);
});
it('shares session with all team members except owner', async () => {
const sessionDate = new Date('2026-03-10');
vi.mocked(prisma.weatherSession.findFirst).mockResolvedValueOnce({ id: 's-1', date: sessionDate } as never);
// First findMany: team member IDs for week check (empty → skips count check)
vi.mocked(prisma.teamMember.findMany)
.mockResolvedValueOnce([]) // empty for week-check path
.mockResolvedValueOnce([
{ userId: 'owner-1', user: { id: 'owner-1', name: 'Owner', email: 'o@ex.com' } },
{ userId: 'member-1', user: { id: 'member-1', name: 'Member', email: 'm@ex.com' } },
{ userId: 'member-2', user: { id: 'member-2', name: 'Member2', email: 'm2@ex.com' } },
] as never); // full members
vi.mocked(prisma.weatherSessionShare.upsert).mockResolvedValue({ role: 'EDITOR', user: {} } as never);
await shareWeatherSessionToTeam('s-1', 'owner-1', 'team-1');
// Should call upsert for member-1 and member-2 (not owner-1)
expect(vi.mocked(prisma.weatherSessionShare.upsert)).toHaveBeenCalledTimes(2);
const calls = vi.mocked(prisma.weatherSessionShare.upsert).mock.calls;
const sharedUserIds = calls.map((c) => c[0].create.userId);
expect(sharedUserIds).toContain('member-1');
expect(sharedUserIds).toContain('member-2');
expect(sharedUserIds).not.toContain('owner-1');
});
});
// ── getWeatherSessionsHistory ──────────────────────────────────────────────
describe('getWeatherSessionsHistory', () => {
beforeEach(() => vi.clearAllMocks());
it('returns sorted history by date ascending', async () => {
const older = new Date('2026-01-01');
const newer = new Date('2026-02-01');
vi.mocked(prisma.weatherSession.findMany).mockResolvedValueOnce([
{ id: 's-2', title: 'Session 2', date: newer, entries: [] },
{ id: 's-1', title: 'Session 1', date: older, entries: [] },
] as never);
vi.mocked(prisma.weatherSessionShare.findMany).mockResolvedValueOnce([]);
const result = await getWeatherSessionsHistory('user-1');
expect(result[0].sessionId).toBe('s-1');
expect(result[1].sessionId).toBe('s-2');
});
it('deduplicates sessions appearing in both own and shared', async () => {
const date = new Date('2026-01-01');
vi.mocked(prisma.weatherSession.findMany).mockResolvedValueOnce([
{ id: 's-1', title: 'Session', date, entries: [] },
] as never);
vi.mocked(prisma.weatherSessionShare.findMany).mockResolvedValueOnce([
{ session: { id: 's-1', title: 'Session', date, entries: [] } },
] as never);
const result = await getWeatherSessionsHistory('user-1');
expect(result).toHaveLength(1);
});
it('calculates average scores from entries using emoji → number mapping', async () => {
const date = new Date('2026-01-01');
// '☀️' is index 1 in WEATHER_EMOJIS, '🌤️' is index 2
vi.mocked(prisma.weatherSession.findMany).mockResolvedValueOnce([
{
id: 's-1',
title: 'Session',
date,
entries: [
{ performanceEmoji: '☀️', moralEmoji: null, fluxEmoji: null, valueCreationEmoji: null },
{ performanceEmoji: '🌤️', moralEmoji: null, fluxEmoji: null, valueCreationEmoji: null },
],
},
] as never);
vi.mocked(prisma.weatherSessionShare.findMany).mockResolvedValueOnce([]);
const result = await getWeatherSessionsHistory('user-1');
// avgScore(['☀️', '🌤️']) = (1 + 2) / 2 = 1.5
expect(result[0].performance).toBe(1.5);
expect(result[0].moral).toBeNull(); // all null → null
});
it('returns null score when no entries have that axis', async () => {
const date = new Date('2026-01-01');
vi.mocked(prisma.weatherSession.findMany).mockResolvedValueOnce([
{ id: 's-1', title: 'Session', date, entries: [] },
] as never);
vi.mocked(prisma.weatherSessionShare.findMany).mockResolvedValueOnce([]);
const result = await getWeatherSessionsHistory('user-1');
expect(result[0].performance).toBeNull();
expect(result[0].moral).toBeNull();
expect(result[0].flux).toBeNull();
expect(result[0].valueCreation).toBeNull();
});
it('includes both own and shared sessions', async () => {
vi.mocked(prisma.weatherSession.findMany).mockResolvedValueOnce([
{ id: 's-own', title: 'Own', date: new Date('2026-01-01'), entries: [] },
] as never);
vi.mocked(prisma.weatherSessionShare.findMany).mockResolvedValueOnce([
{ session: { id: 's-shared', title: 'Shared', date: new Date('2026-02-01'), entries: [] } },
] as never);
const result = await getWeatherSessionsHistory('user-1');
expect(result).toHaveLength(2);
const ids = result.map((r) => r.sessionId);
expect(ids).toContain('s-own');
expect(ids).toContain('s-shared');
});
});


@@ -0,0 +1,511 @@
/**
* Workshop-specific business logic tests:
* - sessions.ts: createSwotItem, duplicateSwotItem, updateAction
* - moving-motivators.ts: createMotivatorSession, updateCardInfluence
* - gif-mood.ts: addGifMoodItem, shareGifMoodSessionToTeam
* - session-share-events.ts: getLatestEventTimestamp, cleanupOldEvents
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
// ── Shared mocks ────────────────────────────────────────────────────────────
vi.mock('@/services/database', () => ({
prisma: {
swotItem: {
aggregate: vi.fn(),
create: vi.fn(),
findUnique: vi.fn(),
},
actionLink: {
deleteMany: vi.fn(),
createMany: vi.fn(),
},
action: {
update: vi.fn(),
},
movingMotivatorsSession: {
create: vi.fn(),
},
motivatorCard: {
update: vi.fn(),
},
gifMoodItem: {
count: vi.fn(),
create: vi.fn(),
},
gifMoodSession: {
findFirst: vi.fn(),
},
gMSessionShare: {
upsert: vi.fn(),
},
teamMember: {
findMany: vi.fn(),
},
},
}));
vi.mock('@/services/auth', () => ({
resolveCollaborator: vi.fn(),
batchResolveCollaborators: vi.fn().mockResolvedValue(new Map()),
}));
vi.mock('@/services/teams', () => ({
getTeamMemberIdsForAdminTeams: vi.fn(),
}));
vi.mock('@/services/session-permissions', () => ({
createSessionPermissionChecks: () => ({
canAccess: vi.fn().mockResolvedValue(true),
canEdit: vi.fn().mockResolvedValue(true),
canDelete: vi.fn().mockResolvedValue(true),
}),
}));
vi.mock('@/services/session-share-events', () => ({
createShareAndEventHandlers: () => ({
share: vi.fn(),
removeShare: vi.fn(),
getShares: vi.fn(),
createEvent: vi.fn(),
getEvents: vi.fn(),
getLatestEventTimestamp: vi.fn(),
}),
}));
vi.mock('@/services/session-queries', () => ({
mergeSessionsByUserId: vi.fn().mockResolvedValue([]),
fetchTeamCollaboratorSessions: vi.fn().mockResolvedValue([]),
getSessionByIdGeneric: vi.fn(),
}));
const { prisma } = await import('@/services/database');
// ── sessions.ts: createSwotItem ──────────────────────────────────────────
import { createSwotItem, duplicateSwotItem, updateAction } from '@/services/sessions';
describe('createSwotItem', () => {
beforeEach(() => vi.clearAllMocks());
it('uses maxOrder + 1 for the new item order', async () => {
vi.mocked(prisma.swotItem.aggregate).mockResolvedValueOnce({ _max: { order: 3 } } as never);
vi.mocked(prisma.swotItem.create).mockResolvedValueOnce({ id: 'item-1', order: 4 } as never);
await createSwotItem('session-1', { content: 'New item', category: 'STRENGTH' });
expect(vi.mocked(prisma.swotItem.create)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ order: 4 }) })
);
});
it('uses order 0 when no items exist yet', async () => {
vi.mocked(prisma.swotItem.aggregate).mockResolvedValueOnce({ _max: { order: null } } as never);
vi.mocked(prisma.swotItem.create).mockResolvedValueOnce({ id: 'item-1', order: 0 } as never);
await createSwotItem('session-1', { content: 'First item', category: 'WEAKNESS' });
expect(vi.mocked(prisma.swotItem.create)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ order: 0 }) })
);
});
it('queries aggregate for the correct session and category', async () => {
vi.mocked(prisma.swotItem.aggregate).mockResolvedValueOnce({ _max: { order: null } } as never);
vi.mocked(prisma.swotItem.create).mockResolvedValueOnce({} as never);
await createSwotItem('session-42', { content: 'Item', category: 'OPPORTUNITY' });
expect(vi.mocked(prisma.swotItem.aggregate)).toHaveBeenCalledWith(
expect.objectContaining({ where: { sessionId: 'session-42', category: 'OPPORTUNITY' } })
);
});
});
// ── sessions.ts: duplicateSwotItem ────────────────────────────────────────
describe('duplicateSwotItem', () => {
beforeEach(() => vi.clearAllMocks());
it('throws when original item not found', async () => {
vi.mocked(prisma.swotItem.findUnique).mockResolvedValueOnce(null);
await expect(duplicateSwotItem('item-999')).rejects.toThrow('Item not found');
});
it('creates copy with recalculated order (maxOrder + 1)', async () => {
const original = { id: 'item-1', content: 'Original', category: 'STRENGTH', sessionId: 'session-1' };
vi.mocked(prisma.swotItem.findUnique).mockResolvedValueOnce(original as never);
vi.mocked(prisma.swotItem.aggregate).mockResolvedValueOnce({ _max: { order: 5 } } as never);
vi.mocked(prisma.swotItem.create).mockResolvedValueOnce({ id: 'item-2', order: 6 } as never);
await duplicateSwotItem('item-1');
expect(vi.mocked(prisma.swotItem.create)).toHaveBeenCalledWith(
expect.objectContaining({
data: {
content: 'Original',
category: 'STRENGTH',
sessionId: 'session-1',
order: 6,
},
})
);
});
it('uses order 0 when category has no existing items', async () => {
const original = { id: 'item-1', content: 'Item', category: 'THREAT', sessionId: 's-1' };
vi.mocked(prisma.swotItem.findUnique).mockResolvedValueOnce(original as never);
vi.mocked(prisma.swotItem.aggregate).mockResolvedValueOnce({ _max: { order: null } } as never);
vi.mocked(prisma.swotItem.create).mockResolvedValueOnce({ id: 'item-2', order: 0 } as never);
await duplicateSwotItem('item-1');
expect(vi.mocked(prisma.swotItem.create)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ order: 0 }) })
);
});
});
// ── sessions.ts: updateAction ─────────────────────────────────────────────
describe('updateAction', () => {
beforeEach(() => vi.clearAllMocks());
it('does not touch links when linkedItemIds is not provided', async () => {
vi.mocked(prisma.action.update).mockResolvedValueOnce({ id: 'a-1', links: [] } as never);
await updateAction('a-1', { title: 'Updated title' });
expect(vi.mocked(prisma.actionLink.deleteMany)).not.toHaveBeenCalled();
expect(vi.mocked(prisma.actionLink.createMany)).not.toHaveBeenCalled();
});
it('deletes all existing links and recreates when linkedItemIds is provided', async () => {
vi.mocked(prisma.actionLink.deleteMany).mockResolvedValueOnce({ count: 2 } as never);
vi.mocked(prisma.actionLink.createMany).mockResolvedValueOnce({ count: 2 } as never);
vi.mocked(prisma.action.update).mockResolvedValueOnce({ id: 'a-1', links: [] } as never);
await updateAction('a-1', { linkedItemIds: ['item-1', 'item-2'] });
expect(vi.mocked(prisma.actionLink.deleteMany)).toHaveBeenCalledWith(
expect.objectContaining({ where: { actionId: 'a-1' } })
);
expect(vi.mocked(prisma.actionLink.createMany)).toHaveBeenCalledWith({
data: [
{ actionId: 'a-1', swotItemId: 'item-1' },
{ actionId: 'a-1', swotItemId: 'item-2' },
],
});
});
it('deletes links but skips createMany when linkedItemIds is empty', async () => {
vi.mocked(prisma.actionLink.deleteMany).mockResolvedValueOnce({ count: 1 } as never);
vi.mocked(prisma.action.update).mockResolvedValueOnce({ id: 'a-1', links: [] } as never);
await updateAction('a-1', { linkedItemIds: [] });
expect(vi.mocked(prisma.actionLink.deleteMany)).toHaveBeenCalled();
expect(vi.mocked(prisma.actionLink.createMany)).not.toHaveBeenCalled();
});
});
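The three cases above encode a tri-state contract on `linkedItemIds`: `undefined` leaves links untouched, `[]` clears them, a non-empty array replaces them. A hypothetical sketch of that decision (names are illustrative, not the real service internals):

```typescript
// undefined → no-op; [] → delete only; ['a', …] → delete then recreate.
function linkUpdatePlan(actionId: string, linkedItemIds?: string[]) {
  if (linkedItemIds === undefined) return { deleteExisting: false, rows: [] };
  return {
    deleteExisting: true,
    rows: linkedItemIds.map((swotItemId) => ({ actionId, swotItemId })),
  };
}
```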
// ── moving-motivators.ts: createMotivatorSession ──────────────────────────
import { createMotivatorSession, updateCardInfluence } from '@/services/moving-motivators';
describe('createMotivatorSession', () => {
beforeEach(() => vi.clearAllMocks());
it('creates session with exactly 10 motivator cards', async () => {
vi.mocked(prisma.movingMotivatorsSession.create).mockResolvedValueOnce({
id: 'session-1',
cards: new Array(10).fill({ id: 'c', type: 'STATUS', orderIndex: 1, influence: 0 }),
} as never);
await createMotivatorSession('user-1', { title: 'Test', participant: 'Alice' });
const call = vi.mocked(prisma.movingMotivatorsSession.create).mock.calls[0][0];
expect(call.data.cards.create).toHaveLength(10);
});
it('initializes all cards with influence 0 and correct orderIndex (1-based)', async () => {
vi.mocked(prisma.movingMotivatorsSession.create).mockResolvedValueOnce({ id: 's-1', cards: [] } as never);
await createMotivatorSession('user-1', { title: 'Test', participant: 'Alice' });
const call = vi.mocked(prisma.movingMotivatorsSession.create).mock.calls[0][0];
const cards = call.data.cards.create;
expect(cards[0]).toMatchObject({ influence: 0, orderIndex: 1 });
expect(cards[9]).toMatchObject({ influence: 0, orderIndex: 10 });
});
it('includes all 10 motivator types', async () => {
vi.mocked(prisma.movingMotivatorsSession.create).mockResolvedValueOnce({ id: 's-1', cards: [] } as never);
await createMotivatorSession('user-1', { title: 'T', participant: 'B' });
const call = vi.mocked(prisma.movingMotivatorsSession.create).mock.calls[0][0];
const types = call.data.cards.create.map((c: { type: string }) => c.type);
expect(types).toContain('STATUS');
expect(types).toContain('PURPOSE');
expect(types).toContain('CURIOSITY');
expect(types).toContain('FREEDOM');
expect(new Set(types).size).toBe(10);
});
});
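The card-initialization invariants tested here (10 distinct types, 1-based `orderIndex`, neutral influence) can be sketched as a pure builder. The type list is an assumption — the authoritative list is the Prisma `MotivatorType` enum:

```typescript
// Assumed motivator types; only STATUS, PURPOSE, CURIOSITY, and FREEDOM
// are confirmed by the tests above.
const MOTIVATOR_TYPES = [
  'CURIOSITY', 'HONOR', 'ACCEPTANCE', 'MASTERY', 'POWER',
  'FREEDOM', 'RELATEDNESS', 'ORDER', 'PURPOSE', 'STATUS',
] as const;

// One card per type, 1-based orderIndex, influence starts at 0.
function buildInitialCards() {
  return MOTIVATOR_TYPES.map((type, i) => ({ type, orderIndex: i + 1, influence: 0 }));
}
```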
// ── moving-motivators.ts: updateCardInfluence ─────────────────────────────
describe('updateCardInfluence', () => {
beforeEach(() => vi.clearAllMocks());
it('passes influence value as-is when within bounds', async () => {
vi.mocked(prisma.motivatorCard.update).mockResolvedValueOnce({ id: 'c-1', influence: 2 } as never);
await updateCardInfluence('c-1', 2);
expect(vi.mocked(prisma.motivatorCard.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: { influence: 2 } })
);
});
it('clamps influence to -3 when below minimum', async () => {
vi.mocked(prisma.motivatorCard.update).mockResolvedValueOnce({ id: 'c-1', influence: -3 } as never);
await updateCardInfluence('c-1', -10);
expect(vi.mocked(prisma.motivatorCard.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: { influence: -3 } })
);
});
it('clamps influence to +3 when above maximum', async () => {
vi.mocked(prisma.motivatorCard.update).mockResolvedValueOnce({ id: 'c-1', influence: 3 } as never);
await updateCardInfluence('c-1', 99);
expect(vi.mocked(prisma.motivatorCard.update)).toHaveBeenCalledWith(
expect.objectContaining({ data: { influence: 3 } })
);
});
it('allows exact boundary values -3 and +3', async () => {
vi.mocked(prisma.motivatorCard.update).mockResolvedValue({ id: 'c-1', influence: -3 } as never);
await updateCardInfluence('c-1', -3);
await updateCardInfluence('c-1', 3);
const calls = vi.mocked(prisma.motivatorCard.update).mock.calls;
expect(calls[0][0].data.influence).toBe(-3);
expect(calls[1][0].data.influence).toBe(3);
});
});
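The four tests above fully characterize the clamping behaviour. A minimal sketch of the implied helper, assuming the service clamps before writing to Prisma:

```typescript
const INFLUENCE_MIN = -3;
const INFLUENCE_MAX = 3;

// Clamp an influence value into the [-3, +3] range the tests assert.
function clampInfluence(value: number): number {
  return Math.min(INFLUENCE_MAX, Math.max(INFLUENCE_MIN, value));
}
```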
// ── gif-mood.ts: addGifMoodItem ────────────────────────────────────────────
import { addGifMoodItem, shareGifMoodSessionToTeam } from '@/services/gif-mood';
describe('addGifMoodItem', () => {
beforeEach(() => vi.clearAllMocks());
it('throws when user has reached GIF_MOOD_MAX_ITEMS limit (5)', async () => {
vi.mocked(prisma.gifMoodItem.count).mockResolvedValueOnce(5);
await expect(addGifMoodItem('session-1', 'user-1', { gifUrl: 'https://example.com/gif.gif' }))
.rejects.toThrow('Maximum 5');
});
it('creates item with order set to current count (append at end)', async () => {
vi.mocked(prisma.gifMoodItem.count).mockResolvedValueOnce(2);
vi.mocked(prisma.gifMoodItem.create).mockResolvedValueOnce({ id: 'gif-1', order: 2 } as never);
await addGifMoodItem('session-1', 'user-1', { gifUrl: 'https://example.com/gif.gif' });
expect(vi.mocked(prisma.gifMoodItem.create)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ order: 2 }) })
);
});
it('allows adding when count is below limit', async () => {
vi.mocked(prisma.gifMoodItem.count).mockResolvedValueOnce(4);
vi.mocked(prisma.gifMoodItem.create).mockResolvedValueOnce({ id: 'gif-1' } as never);
await expect(addGifMoodItem('s-1', 'u-1', { gifUrl: 'url', note: 'hello' })).resolves.not.toThrow();
expect(vi.mocked(prisma.gifMoodItem.create)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ note: 'hello' }) })
);
});
it('sets note to null when not provided', async () => {
vi.mocked(prisma.gifMoodItem.count).mockResolvedValueOnce(0);
vi.mocked(prisma.gifMoodItem.create).mockResolvedValueOnce({ id: 'gif-1' } as never);
await addGifMoodItem('s-1', 'u-1', { gifUrl: 'url' });
expect(vi.mocked(prisma.gifMoodItem.create)).toHaveBeenCalledWith(
expect.objectContaining({ data: expect.objectContaining({ note: null }) })
);
});
});
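The limit, append-at-end order, and `null` note defaulting tested above combine into one guard. A hypothetical sketch (the constant value is assumed to match `GIF_MOOD_MAX_ITEMS` in `@/lib/types`):

```typescript
const GIF_MOOD_MAX_ITEMS = 5; // assumed value

// Reject at the limit; otherwise append at the end (order = pre-insert count).
function prepareGifMoodItem(currentCount: number, gifUrl: string, note?: string) {
  if (currentCount >= GIF_MOOD_MAX_ITEMS) {
    throw new Error(`Maximum ${GIF_MOOD_MAX_ITEMS} items per user`);
  }
  return { gifUrl, note: note ?? null, order: currentCount };
}
```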
// ── gif-mood.ts: shareGifMoodSessionToTeam ────────────────────────────────
describe('shareGifMoodSessionToTeam', () => {
beforeEach(() => vi.clearAllMocks());
it('throws when session not found or not owned', async () => {
vi.mocked(prisma.gifMoodSession.findFirst).mockResolvedValueOnce(null);
await expect(shareGifMoodSessionToTeam('s-1', 'owner-1', 'team-1')).rejects.toThrow(
'Session not found or not owned'
);
});
it('throws when team has no members', async () => {
vi.mocked(prisma.gifMoodSession.findFirst).mockResolvedValueOnce({ id: 's-1' } as never);
vi.mocked(prisma.teamMember.findMany).mockResolvedValueOnce([]);
await expect(shareGifMoodSessionToTeam('s-1', 'owner-1', 'team-1')).rejects.toThrow(
'Team has no members'
);
});
it('shares with all team members except owner', async () => {
vi.mocked(prisma.gifMoodSession.findFirst).mockResolvedValueOnce({ id: 's-1' } as never);
vi.mocked(prisma.teamMember.findMany).mockResolvedValueOnce([
{ userId: 'owner-1', user: { id: 'owner-1', name: 'Owner', email: 'o@ex.com' } },
{ userId: 'member-1', user: { id: 'member-1', name: 'Member', email: 'm@ex.com' } },
] as never);
vi.mocked(prisma.gMSessionShare.upsert).mockResolvedValue({ role: 'EDITOR', user: {} } as never);
await shareGifMoodSessionToTeam('s-1', 'owner-1', 'team-1');
// Only member-1 gets shared (not owner-1)
expect(vi.mocked(prisma.gMSessionShare.upsert)).toHaveBeenCalledTimes(1);
expect(vi.mocked(prisma.gMSessionShare.upsert)).toHaveBeenCalledWith(
expect.objectContaining({ create: expect.objectContaining({ userId: 'member-1' }) })
);
});
it('uses EDITOR role by default', async () => {
vi.mocked(prisma.gifMoodSession.findFirst).mockResolvedValueOnce({ id: 's-1' } as never);
vi.mocked(prisma.teamMember.findMany).mockResolvedValueOnce([
{ userId: 'member-1', user: { id: 'member-1', name: 'M', email: 'm@ex.com' } },
] as never);
vi.mocked(prisma.gMSessionShare.upsert).mockResolvedValue({ role: 'EDITOR', user: {} } as never);
await shareGifMoodSessionToTeam('s-1', 'different-owner', 'team-1');
expect(vi.mocked(prisma.gMSessionShare.upsert)).toHaveBeenCalledWith(
expect.objectContaining({ create: expect.objectContaining({ role: 'EDITOR' }) })
);
});
});
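The "all members except the owner" selection these sharing tests assert is a one-line filter. A hypothetical sketch of that recipient logic:

```typescript
// Every team member receives a share except the session owner.
function shareRecipients(memberIds: string[], ownerId: string): string[] {
  return memberIds.filter((id) => id !== ownerId);
}
```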
// ── session-share-events.ts: getLatestEventTimestamp & cleanupOldEvents ──────
// Use the real implementation (not the mock above, which exists for internal service use)
const { createShareAndEventHandlers } = await vi.importActual<
typeof import('@/services/session-share-events')
>('@/services/session-share-events');
describe('getLatestEventTimestamp', () => {
it('returns the createdAt of the most recent event', async () => {
const ts = new Date('2026-03-10T10:00:00Z');
const eventModel = {
create: vi.fn(),
findMany: vi.fn(),
findFirst: vi.fn().mockResolvedValueOnce({ createdAt: ts }),
deleteMany: vi.fn(),
};
const { getLatestEventTimestamp } = createShareAndEventHandlers(
{ findFirst: vi.fn() },
{ upsert: vi.fn(), deleteMany: vi.fn(), findMany: vi.fn() },
eventModel,
vi.fn()
);
const result = await getLatestEventTimestamp('session-1');
expect(result).toEqual(ts);
});
it('returns undefined when no events exist', async () => {
const eventModel = {
create: vi.fn(),
findMany: vi.fn(),
findFirst: vi.fn().mockResolvedValueOnce(null),
deleteMany: vi.fn(),
};
const { getLatestEventTimestamp } = createShareAndEventHandlers(
{ findFirst: vi.fn() },
{ upsert: vi.fn(), deleteMany: vi.fn(), findMany: vi.fn() },
eventModel,
vi.fn()
);
const result = await getLatestEventTimestamp('session-1');
expect(result).toBeUndefined();
});
});
describe('cleanupOldEvents', () => {
it('deletes events older than 24 hours by default', async () => {
const deleteMany = vi.fn().mockResolvedValueOnce({ count: 5 });
const eventModel = { create: vi.fn(), findMany: vi.fn(), findFirst: vi.fn(), deleteMany };
const { cleanupOldEvents } = createShareAndEventHandlers(
{ findFirst: vi.fn() },
{ upsert: vi.fn(), deleteMany: vi.fn(), findMany: vi.fn() },
eventModel,
vi.fn()
);
const before = Date.now();
await cleanupOldEvents();
const after = Date.now();
const cutoff: Date = deleteMany.mock.calls[0][0].where.createdAt.lt;
expect(cutoff.getTime()).toBeGreaterThanOrEqual(before - 24 * 60 * 60 * 1000 - 100);
expect(cutoff.getTime()).toBeLessThanOrEqual(after - 24 * 60 * 60 * 1000 + 100);
});
it('deletes events older than custom maxAgeHours', async () => {
const deleteMany = vi.fn().mockResolvedValueOnce({ count: 0 });
const eventModel = { create: vi.fn(), findMany: vi.fn(), findFirst: vi.fn(), deleteMany };
const { cleanupOldEvents } = createShareAndEventHandlers(
{ findFirst: vi.fn() },
{ upsert: vi.fn(), deleteMany: vi.fn(), findMany: vi.fn() },
eventModel,
vi.fn()
);
const before = Date.now();
await cleanupOldEvents(1); // 1 hour
const after = Date.now();
const cutoff: Date = deleteMany.mock.calls[0][0].where.createdAt.lt;
expect(cutoff.getTime()).toBeGreaterThanOrEqual(before - 60 * 60 * 1000 - 100);
expect(cutoff.getTime()).toBeLessThanOrEqual(after - 60 * 60 * 1000 + 100);
});
it('returns the count of deleted events', async () => {
const eventModel = {
create: vi.fn(),
findMany: vi.fn(),
findFirst: vi.fn(),
deleteMany: vi.fn().mockResolvedValueOnce({ count: 42 }),
};
const { cleanupOldEvents } = createShareAndEventHandlers(
{ findFirst: vi.fn() },
{ upsert: vi.fn(), deleteMany: vi.fn(), findMany: vi.fn() },
eventModel,
vi.fn()
);
const result = await cleanupOldEvents();
expect(result).toEqual({ count: 42 });
});
});
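The cutoff arithmetic both `cleanupOldEvents` tests bracket with `before`/`after` timestamps can be isolated as a pure function. A sketch, assuming the deletion predicate is `createdAt < now - maxAgeHours`:

```typescript
// Events strictly older than maxAgeHours (default 24h) are eligible for deletion.
function cleanupCutoff(maxAgeHours = 24, now: number = Date.now()): Date {
  return new Date(now - maxAgeHours * 60 * 60 * 1000);
}
```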

View File

@@ -74,6 +74,48 @@ export interface ResolvedCollaborator {
} | null;
}
// Batch resolve multiple collaborator strings — 2 DB queries max regardless of count
export async function batchResolveCollaborators(
collaborators: string[]
): Promise<Map<string, ResolvedCollaborator>> {
if (collaborators.length === 0) return new Map();
const unique = [...new Set(collaborators.map((c) => c.trim()))];
const emails = unique.filter(isEmail).map((e) => e.toLowerCase());
const names = unique.filter((c) => !isEmail(c));
const [byEmail, byName] = await Promise.all([
emails.length > 0
? prisma.user.findMany({
where: { email: { in: emails } },
select: { id: true, email: true, name: true },
})
: [],
names.length > 0
? prisma.user.findMany({
where: { OR: names.map((n) => ({ name: { contains: n } })) },
select: { id: true, email: true, name: true },
})
: [],
]);
const emailMap = new Map(byEmail.map((u) => [u.email.toLowerCase(), u]));
const nameMap = new Map(
byName.filter((u) => u.name).map((u) => [u.name!.toLowerCase(), u])
);
const result = new Map<string, ResolvedCollaborator>();
for (const c of unique) {
if (isEmail(c)) {
result.set(c, { raw: c, matchedUser: emailMap.get(c.toLowerCase()) ?? null });
} else {
// Exact (case-insensitive) name first, then fall back to the partial
// matches the `contains` query already returned.
const match =
nameMap.get(c.toLowerCase()) ??
byName.find((u) => u.name?.toLowerCase().includes(c.toLowerCase())) ??
null;
result.set(c, { raw: c, matchedUser: match });
}
}
return result;
}
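The "2 DB queries max" property comes from partitioning the inputs up front. That step can be read as a pure helper (hypothetical extraction for clarity; the real code inlines it, and the email check is an assumed shape for `isEmail`):

```typescript
// Assumed email shape check — the real isEmail lives elsewhere in auth.ts.
const isEmail = (s: string) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s);

// Dedupe trimmed inputs, then split into the two query populations:
// lowercased emails for the `in` query, raw names for the `contains` query.
function partitionCollaborators(collaborators: string[]) {
  const unique = [...new Set(collaborators.map((c) => c.trim()))];
  return {
    emails: unique.filter(isEmail).map((e) => e.toLowerCase()),
    names: unique.filter((c) => !isEmail(c)),
  };
}
```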
// Resolve a single collaborator string to a user: try email first, then name
export async function resolveCollaborator(collaborator: string): Promise<ResolvedCollaborator> {
const trimmed = collaborator.trim();

View File

@@ -1,3 +1,4 @@
import { unstable_cache } from 'next/cache';
import { prisma } from '@/services/database';
import { getTeamMemberIdsForAdminTeams } from '@/services/teams';
import { createSessionPermissionChecks } from '@/services/session-permissions';
@@ -8,33 +9,44 @@ import {
getSessionByIdGeneric,
} from '@/services/session-queries';
import { GIF_MOOD_MAX_ITEMS } from '@/lib/types';
import { sessionsListTag } from '@/lib/cache-tags';
import type { ShareRole } from '@prisma/client';
const gifMoodInclude = {
const gifMoodListSelect = {
id: true,
title: true,
date: true,
updatedAt: true,
userId: true,
user: { select: { id: true, name: true, email: true } },
shares: { include: { user: { select: { id: true, name: true, email: true } } } },
shares: { select: { id: true, role: true, user: { select: { id: true, name: true, email: true } } } },
_count: { select: { items: true } },
};
} as const;
// ============================================
// GifMood Session CRUD
// ============================================
export async function getGifMoodSessionsByUserId(userId: string) {
return mergeSessionsByUserId(
(uid) =>
prisma.gifMoodSession.findMany({
where: { userId: uid },
include: gifMoodInclude,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.gMSessionShare.findMany({
where: { userId: uid },
include: { session: { include: gifMoodInclude } },
}),
userId
);
return unstable_cache(
() =>
mergeSessionsByUserId(
(uid) =>
prisma.gifMoodSession.findMany({
where: { userId: uid },
select: gifMoodListSelect,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.gMSessionShare.findMany({
where: { userId: uid },
select: { role: true, createdAt: true, session: { select: gifMoodListSelect } },
}),
userId
),
[`gif-mood-sessions-list-${userId}`],
{ tags: [sessionsListTag(userId)], revalidate: 60 }
)();
}
export async function getTeamCollaboratorSessionsForAdmin(userId: string) {
@@ -42,7 +54,7 @@ export async function getTeamCollaboratorSessionsForAdmin(userId: string) {
(teamMemberIds, uid) =>
prisma.gifMoodSession.findMany({
where: { userId: { in: teamMemberIds }, shares: { none: { userId: uid } } },
include: gifMoodInclude,
select: gifMoodListSelect,
orderBy: { updatedAt: 'desc' },
}),
getTeamMemberIdsForAdminTeams,

View File

@@ -1,5 +1,6 @@
import { unstable_cache } from 'next/cache';
import { prisma } from '@/services/database';
import { resolveCollaborator } from '@/services/auth';
import { resolveCollaborator, batchResolveCollaborators } from '@/services/auth';
import { getTeamMemberIdsForAdminTeams } from '@/services/teams';
import { createSessionPermissionChecks } from '@/services/session-permissions';
import { createShareAndEventHandlers } from '@/services/session-share-events';
@@ -8,49 +9,69 @@ import {
fetchTeamCollaboratorSessions,
getSessionByIdGeneric,
} from '@/services/session-queries';
import { sessionsListTag } from '@/lib/cache-tags';
import type { MotivatorType } from '@prisma/client';
const motivatorInclude = {
const motivatorListSelect = {
id: true,
title: true,
participant: true,
updatedAt: true,
userId: true,
user: { select: { id: true, name: true, email: true } },
shares: { include: { user: { select: { id: true, name: true, email: true } } } },
shares: { select: { id: true, role: true, user: { select: { id: true, name: true, email: true } } } },
_count: { select: { cards: true } },
};
} as const;
// ============================================
// Moving Motivators Session CRUD
// ============================================
export async function getMotivatorSessionsByUserId(userId: string) {
return mergeSessionsByUserId(
(uid) =>
prisma.movingMotivatorsSession.findMany({
where: { userId: uid },
include: motivatorInclude,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.mMSessionShare.findMany({
where: { userId: uid },
include: { session: { include: motivatorInclude } },
}),
userId,
(s) => resolveCollaborator(s.participant).then((r) => ({ resolvedParticipant: r }))
);
return unstable_cache(
async () => {
const sessions = await mergeSessionsByUserId(
(uid) =>
prisma.movingMotivatorsSession.findMany({
where: { userId: uid },
select: motivatorListSelect,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.mMSessionShare.findMany({
where: { userId: uid },
select: { role: true, createdAt: true, session: { select: motivatorListSelect } },
}),
userId
);
const resolved = await batchResolveCollaborators(sessions.map((s) => s.participant));
return sessions.map((s) => ({
...s,
resolvedParticipant: resolved.get(s.participant.trim()) ?? { raw: s.participant, matchedUser: null },
}));
},
[`motivator-sessions-list-${userId}`],
{ tags: [sessionsListTag(userId)], revalidate: 60 }
)();
}
/** Sessions owned by team members (where user is team admin) that are NOT shared with the user. */
export async function getTeamCollaboratorSessionsForAdmin(userId: string) {
return fetchTeamCollaboratorSessions(
const sessions = await fetchTeamCollaboratorSessions(
(teamMemberIds, uid) =>
prisma.movingMotivatorsSession.findMany({
where: { userId: { in: teamMemberIds }, shares: { none: { userId: uid } } },
include: motivatorInclude,
select: motivatorListSelect,
orderBy: { updatedAt: 'desc' },
}),
getTeamMemberIdsForAdminTeams,
userId,
(s) => resolveCollaborator(s.participant).then((r) => ({ resolvedParticipant: r }))
userId
);
const resolved = await batchResolveCollaborators(sessions.map((s) => s.participant));
return sessions.map((s) => ({
...s,
resolvedParticipant: resolved.get(s.participant.trim()) ?? { raw: s.participant, matchedUser: null },
}));
}
const motivatorByIdInclude = {

View File

@@ -131,14 +131,21 @@ export function createShareAndEventHandlers<TEventType extends string>(
type: TEventType,
payload: Record<string, unknown>
): Promise<SessionEventWithUser> {
return eventModel.create({
const event = await eventModel.create({
data: {
sessionId,
userId,
type,
payload: JSON.stringify(payload),
},
}) as Promise<SessionEventWithUser>;
});
// Fire-and-forget: purge old events without blocking the response
eventModel.deleteMany({ where: { createdAt: { lt: new Date(Date.now() - 24 * 60 * 60 * 1000) } } }).catch((err: unknown) => {
console.error('[cleanupOldEvents] Failed to purge old events:', err);
});
return event as SessionEventWithUser;
},
async getEvents(sessionId: string, since?: Date): Promise<SessionEventWithUser[]> {

View File

@@ -1,5 +1,6 @@
import { unstable_cache } from 'next/cache';
import { prisma } from '@/services/database';
import { resolveCollaborator } from '@/services/auth';
import { resolveCollaborator, batchResolveCollaborators } from '@/services/auth';
import { getTeamMemberIdsForAdminTeams } from '@/services/teams';
import { createSessionPermissionChecks } from '@/services/session-permissions';
import { createShareAndEventHandlers } from '@/services/session-share-events';
@@ -8,49 +9,69 @@ import {
fetchTeamCollaboratorSessions,
getSessionByIdGeneric,
} from '@/services/session-queries';
import { sessionsListTag } from '@/lib/cache-tags';
import type { SwotCategory, ShareRole } from '@prisma/client';
const sessionInclude = {
const sessionListSelect = {
id: true,
title: true,
collaborator: true,
updatedAt: true,
userId: true,
user: { select: { id: true, name: true, email: true } },
shares: { include: { user: { select: { id: true, name: true, email: true } } } },
shares: { select: { id: true, role: true, user: { select: { id: true, name: true, email: true } } } },
_count: { select: { items: true, actions: true } },
};
} as const;
// ============================================
// Session CRUD
// ============================================
export async function getSessionsByUserId(userId: string) {
return mergeSessionsByUserId(
(uid) =>
prisma.session.findMany({
where: { userId: uid },
include: sessionInclude,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.sessionShare.findMany({
where: { userId: uid },
include: { session: { include: sessionInclude } },
}),
userId,
(s) => resolveCollaborator(s.collaborator).then((r) => ({ resolvedCollaborator: r }))
);
return unstable_cache(
async () => {
const sessions = await mergeSessionsByUserId(
(uid) =>
prisma.session.findMany({
where: { userId: uid },
select: sessionListSelect,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.sessionShare.findMany({
where: { userId: uid },
select: { role: true, createdAt: true, session: { select: sessionListSelect } },
}),
userId
);
const resolved = await batchResolveCollaborators(sessions.map((s) => s.collaborator));
return sessions.map((s) => ({
...s,
resolvedCollaborator: resolved.get(s.collaborator.trim()) ?? { raw: s.collaborator, matchedUser: null },
}));
},
[`swot-sessions-list-${userId}`],
{ tags: [sessionsListTag(userId)], revalidate: 60 }
)();
}
/** Sessions owned by team members (where user is team admin) that are NOT shared with the user. */
export async function getTeamCollaboratorSessionsForAdmin(userId: string) {
return fetchTeamCollaboratorSessions(
const sessions = await fetchTeamCollaboratorSessions(
(teamMemberIds, uid) =>
prisma.session.findMany({
where: { userId: { in: teamMemberIds }, shares: { none: { userId: uid } } },
include: sessionInclude,
select: sessionListSelect,
orderBy: { updatedAt: 'desc' },
}),
getTeamMemberIdsForAdminTeams,
userId,
(s) => resolveCollaborator(s.collaborator).then((r) => ({ resolvedCollaborator: r }))
userId
);
const resolved = await batchResolveCollaborators(sessions.map((s) => s.collaborator));
return sessions.map((s) => ({
...s,
resolvedCollaborator: resolved.get(s.collaborator.trim()) ?? { raw: s.collaborator, matchedUser: null },
}));
}
const sessionByIdInclude = {

View File

@@ -7,8 +7,11 @@ import {
fetchTeamCollaboratorSessions,
getSessionByIdGeneric,
} from '@/services/session-queries';
import { unstable_cache } from 'next/cache';
import { getWeekBounds } from '@/lib/date-utils';
import { getEmojiScore } from '@/lib/weather-utils';
import { WEATHER_HISTORY_LIMIT } from '@/lib/types';
import { sessionsListTag } from '@/lib/cache-tags';
import type { ShareRole } from '@prisma/client';
export type WeatherHistoryPoint = {
@@ -21,31 +24,41 @@ export type WeatherHistoryPoint = {
valueCreation: number | null;
};
const weatherInclude = {
const weatherListSelect = {
id: true,
title: true,
date: true,
updatedAt: true,
userId: true,
user: { select: { id: true, name: true, email: true } },
shares: { include: { user: { select: { id: true, name: true, email: true } } } },
shares: { select: { id: true, role: true, user: { select: { id: true, name: true, email: true } } } },
_count: { select: { entries: true } },
};
} as const;
// ============================================
// Weather Session CRUD
// ============================================
export async function getWeatherSessionsByUserId(userId: string) {
return mergeSessionsByUserId(
(uid) =>
prisma.weatherSession.findMany({
where: { userId: uid },
include: weatherInclude,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.weatherSessionShare.findMany({
where: { userId: uid },
include: { session: { include: weatherInclude } },
}),
userId
);
return unstable_cache(
() =>
mergeSessionsByUserId(
(uid) =>
prisma.weatherSession.findMany({
where: { userId: uid },
select: weatherListSelect,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.weatherSessionShare.findMany({
where: { userId: uid },
select: { role: true, createdAt: true, session: { select: weatherListSelect } },
}),
userId
),
[`weather-sessions-list-${userId}`],
{ tags: [sessionsListTag(userId)], revalidate: 60 }
)();
}
/** Sessions owned by team members (where user is team admin) that are NOT shared with the user. */
@@ -54,7 +67,7 @@ export async function getTeamCollaboratorSessionsForAdmin(userId: string) {
(teamMemberIds, uid) =>
prisma.weatherSession.findMany({
where: { userId: { in: teamMemberIds }, shares: { none: { userId: uid } } },
include: weatherInclude,
select: weatherListSelect,
orderBy: { updatedAt: 'desc' },
}),
getTeamMemberIdsForAdminTeams,
@@ -376,10 +389,14 @@ export async function getWeatherSessionsHistory(userId: string): Promise<Weather
const [ownSessions, sharedRaw] = await Promise.all([
prisma.weatherSession.findMany({
where: { userId },
orderBy: { date: 'desc' },
take: WEATHER_HISTORY_LIMIT,
select: { id: true, title: true, date: true, entries: { select: entrySelect } },
}),
prisma.weatherSessionShare.findMany({
where: { userId },
orderBy: { session: { date: 'desc' } },
take: WEATHER_HISTORY_LIMIT,
select: {
session: {
select: { id: true, title: true, date: true, entries: { select: entrySelect } },

View File

@@ -1,5 +1,6 @@
import { unstable_cache } from 'next/cache';
import { prisma } from '@/services/database';
import { resolveCollaborator } from '@/services/auth';
import { resolveCollaborator, batchResolveCollaborators } from '@/services/auth';
import { getTeamMemberIdsForAdminTeams } from '@/services/teams';
import { createSessionPermissionChecks } from '@/services/session-permissions';
import { createShareAndEventHandlers } from '@/services/session-share-events';
@@ -8,49 +9,70 @@ import {
fetchTeamCollaboratorSessions,
getSessionByIdGeneric,
} from '@/services/session-queries';
import { sessionsListTag } from '@/lib/cache-tags';
import type { WeeklyCheckInCategory, Emotion } from '@prisma/client';
const weeklyCheckInInclude = {
const weeklyCheckInListSelect = {
id: true,
title: true,
participant: true,
date: true,
updatedAt: true,
userId: true,
user: { select: { id: true, name: true, email: true } },
shares: { include: { user: { select: { id: true, name: true, email: true } } } },
shares: { select: { id: true, role: true, user: { select: { id: true, name: true, email: true } } } },
_count: { select: { items: true } },
};
} as const;
// ============================================
// Weekly Check-in Session CRUD
// ============================================
export async function getWeeklyCheckInSessionsByUserId(userId: string) {
return mergeSessionsByUserId(
(uid) =>
prisma.weeklyCheckInSession.findMany({
where: { userId: uid },
include: weeklyCheckInInclude,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.wCISessionShare.findMany({
where: { userId: uid },
include: { session: { include: weeklyCheckInInclude } },
}),
userId,
(s) => resolveCollaborator(s.participant).then((r) => ({ resolvedParticipant: r }))
);
return unstable_cache(
async () => {
const sessions = await mergeSessionsByUserId(
(uid) =>
prisma.weeklyCheckInSession.findMany({
where: { userId: uid },
select: weeklyCheckInListSelect,
orderBy: { updatedAt: 'desc' },
}),
(uid) =>
prisma.wCISessionShare.findMany({
where: { userId: uid },
select: { role: true, createdAt: true, session: { select: weeklyCheckInListSelect } },
}),
userId
);
const resolved = await batchResolveCollaborators(sessions.map((s) => s.participant));
return sessions.map((s) => ({
...s,
resolvedParticipant: resolved.get(s.participant.trim()) ?? { raw: s.participant, matchedUser: null },
}));
},
[`weekly-checkin-sessions-list-${userId}`],
{ tags: [sessionsListTag(userId)], revalidate: 60 }
)();
}
/** Sessions owned by team members (where user is team admin) that are NOT shared with the user. */
export async function getTeamCollaboratorSessionsForAdmin(userId: string) {
return fetchTeamCollaboratorSessions(
const sessions = await fetchTeamCollaboratorSessions(
(teamMemberIds, uid) =>
prisma.weeklyCheckInSession.findMany({
where: { userId: { in: teamMemberIds }, shares: { none: { userId: uid } } },
include: weeklyCheckInInclude,
select: weeklyCheckInListSelect,
orderBy: { updatedAt: 'desc' },
}),
getTeamMemberIdsForAdminTeams,
userId,
(s) => resolveCollaborator(s.participant).then((r) => ({ resolvedParticipant: r }))
userId
);
const resolved = await batchResolveCollaborators(sessions.map((s) => s.participant));
return sessions.map((s) => ({
...s,
resolvedParticipant: resolved.get(s.participant.trim()) ?? { raw: s.participant, matchedUser: null },
}));
}
const weeklyCheckInByIdInclude = {

View File

@@ -1,5 +1,6 @@
+import { unstable_cache } from 'next/cache';
 import { prisma } from '@/services/database';
-import { resolveCollaborator } from '@/services/auth';
+import { resolveCollaborator, batchResolveCollaborators } from '@/services/auth';
 import { getTeamMemberIdsForAdminTeams } from '@/services/teams';
 import { createSessionPermissionChecks } from '@/services/session-permissions';
 import { createShareAndEventHandlers } from '@/services/session-share-events';
@@ -8,49 +9,70 @@ import {
   fetchTeamCollaboratorSessions,
   getSessionByIdGeneric,
 } from '@/services/session-queries';
+import { sessionsListTag } from '@/lib/cache-tags';
 import type { YearReviewCategory } from '@prisma/client';
 
-const yearReviewInclude = {
+const yearReviewListSelect = {
+  id: true,
+  title: true,
+  participant: true,
+  year: true,
+  updatedAt: true,
+  userId: true,
   user: { select: { id: true, name: true, email: true } },
-  shares: { include: { user: { select: { id: true, name: true, email: true } } } },
+  shares: { select: { id: true, role: true, user: { select: { id: true, name: true, email: true } } } },
   _count: { select: { items: true } },
-};
+} as const;
 // ============================================
 // Year Review Session CRUD
 // ============================================
 export async function getYearReviewSessionsByUserId(userId: string) {
-  return mergeSessionsByUserId(
-    (uid) =>
-      prisma.yearReviewSession.findMany({
-        where: { userId: uid },
-        include: yearReviewInclude,
-        orderBy: { updatedAt: 'desc' },
-      }),
-    (uid) =>
-      prisma.yRSessionShare.findMany({
-        where: { userId: uid },
-        include: { session: { include: yearReviewInclude } },
-      }),
-    userId,
-    (s) => resolveCollaborator(s.participant).then((r) => ({ resolvedParticipant: r }))
-  );
+  return unstable_cache(
+    async () => {
+      const sessions = await mergeSessionsByUserId(
+        (uid) =>
+          prisma.yearReviewSession.findMany({
+            where: { userId: uid },
+            select: yearReviewListSelect,
+            orderBy: { updatedAt: 'desc' },
+          }),
+        (uid) =>
+          prisma.yRSessionShare.findMany({
+            where: { userId: uid },
+            select: { role: true, createdAt: true, session: { select: yearReviewListSelect } },
+          }),
+        userId
+      );
+      const resolved = await batchResolveCollaborators(sessions.map((s) => s.participant));
+      return sessions.map((s) => ({
+        ...s,
+        resolvedParticipant: resolved.get(s.participant.trim()) ?? { raw: s.participant, matchedUser: null },
+      }));
+    },
+    [`year-review-sessions-list-${userId}`],
+    { tags: [sessionsListTag(userId)], revalidate: 60 }
+  )();
 }
 /** Sessions owned by team members (where user is team admin) that are NOT shared with the user. */
 export async function getTeamCollaboratorSessionsForAdmin(userId: string) {
-  return fetchTeamCollaboratorSessions(
+  const sessions = await fetchTeamCollaboratorSessions(
     (teamMemberIds, uid) =>
       prisma.yearReviewSession.findMany({
         where: { userId: { in: teamMemberIds }, shares: { none: { userId: uid } } },
-        include: yearReviewInclude,
+        select: yearReviewListSelect,
         orderBy: { updatedAt: 'desc' },
       }),
     getTeamMemberIdsForAdminTeams,
-    userId,
-    (s) => resolveCollaborator(s.participant).then((r) => ({ resolvedParticipant: r }))
+    userId
   );
+  const resolved = await batchResolveCollaborators(sessions.map((s) => s.participant));
+  return sessions.map((s) => ({
+    ...s,
+    resolvedParticipant: resolved.get(s.participant.trim()) ?? { raw: s.participant, matchedUser: null },
+  }));
 }
const yearReviewByIdInclude = {
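The `sessionsListTag` helper used in the `tags` arrays comes from `src/lib/cache-tags.ts`, which this excerpt does not show. A plausible sketch of those helpers — the three function names come from the commit message, but the tag string formats are assumptions:

```typescript
// Per-user tag for cached session lists. A Server Action that creates or
// deletes a session calls revalidateTag(sessionsListTag(userId)) to drop every
// unstable_cache entry registered with this tag.
export function sessionsListTag(userId: string): string {
  return `sessions-list-${userId}`;
}

// Tag for a single session's cached data (hypothetical format).
export function sessionTag(sessionId: string): string {
  return `session-${sessionId}`;
}

// Tag for a user's aggregated stats (hypothetical format).
export function userStatsTag(userId: string): string {
  return `user-stats-${userId}`;
}
```

Centralizing tag construction in one module keeps the cached readers and the invalidating Server Actions from ever drifting apart on the tag string.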

vitest.config.ts Normal file

@@ -0,0 +1,23 @@
+import { defineConfig } from 'vitest/config';
+import tsconfigPaths from 'vite-tsconfig-paths';
+
+export default defineConfig({
+  plugins: [tsconfigPaths()],
+  test: {
+    environment: 'node',
+    globals: true,
+    coverage: {
+      provider: 'v8',
+      include: ['src/services/**/*.ts', 'src/lib/**/*.ts'],
+      exclude: [
+        'src/services/__tests__/**',
+        'src/lib/__tests__/**',
+        'src/services/database.ts',
+        'src/lib/types.ts',
+        'src/lib/auth.ts',
+        'src/lib/auth.config.ts',
+      ],
+      reporter: ['text', 'html'],
+    },
+  },
+});
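The companion commit introduces `test`, `test:watch`, and `test:coverage` scripts in package.json alongside this config. A typical wiring for those three script names — the exact vitest CLI invocations are an assumption, since package.json is not shown in this excerpt:

```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest",
    "test:coverage": "vitest run --coverage"
  }
}
```

With `globals: true` above, test files can use `describe`/`it`/`expect` without importing them, and `vite-tsconfig-paths` lets tests resolve the `@/...` aliases used throughout the services.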