# Compare commits: `feat/thumb` (26 commits, head `3ab5b223a8`)

Commits: 3ab5b223a8, 7cfb6cf001, d2fe7f12ab, 64347edabc, 8261050943, a2da5081ea, 648d86970f, 278f422206, ff59ac1eff, 7eb9e2dcad, c81f7ce1b7, 137e8ce11c, e0b80cae38, e8bb014874, 4c75e08056, f1b3aec94a, 473e849dfa, cfc896e92f, 36af34443e, 85cad1a7e7, 0f5094575a, 131c50b1a1, 6d4c400017, 539dc77d57, 9c7120c3dc, b1844a4f01
## .claude/commands/opsx/apply.md (new file, 152 lines)

---
name: "OPSX: Apply"
description: Implement tasks from an OpenSpec change (Experimental)
category: Workflow
tags: [workflow, artifacts, experimental]
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.

**Steps**

1. **Select the change**

   If a name is provided, use it. Otherwise:
   - Infer from conversation context if the user mentioned a change
   - Auto-select if only one active change exists
   - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

   Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).

2. **Check status to understand the schema**

   ```bash
   openspec status --change "<name>" --json
   ```
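The parse in this step can be sketched with `jq`. The payload below is a hypothetical stand-in for real `openspec status` output; only the `schemaName` and `artifacts` fields named in this document are assumed:

```shell
# Hypothetical example payload; in practice it comes from:
#   openspec status --change "<name>" --json
status_json='{"schemaName":"spec-driven","artifacts":[{"id":"tasks","status":"done"}]}'

# Extract the workflow schema name
schema=$(printf '%s' "$status_json" | jq -r '.schemaName')

# List the artifacts already marked done
done_ids=$(printf '%s' "$status_json" | jq -r '.artifacts[] | select(.status == "done") | .id')

echo "schema=$schema done=$done_ids"
```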
   Parse the JSON to understand:
   - `schemaName`: The workflow being used (e.g., "spec-driven")
   - Which artifact contains the tasks (typically "tasks" for spec-driven; check status for others)

3. **Get apply instructions**

   ```bash
   openspec instructions apply --change "<name>" --json
   ```

   This returns:
   - Context file paths (varies by schema)
   - Progress (total, complete, remaining)
   - Task list with status
   - Dynamic instruction based on current state

   **Handle states:**
   - If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
   - If `state: "all_done"`: congratulate, suggest archive
   - Otherwise: proceed to implementation

4. **Read context files**

   Read the files listed in `contextFiles` from the apply instructions output.
   The files depend on the schema being used:
   - **spec-driven**: proposal, specs, design, tasks
   - Other schemas: follow the `contextFiles` from CLI output

5. **Show current progress**

   Display:
   - Schema being used
   - Progress: "N/M tasks complete"
   - Remaining tasks overview
   - Dynamic instruction from CLI

6. **Implement tasks (loop until done or blocked)**

   For each pending task:
   - Show which task is being worked on
   - Make the code changes required
   - Keep changes minimal and focused
   - Mark task complete in the tasks file: `- [ ]` → `- [x]`
   - Continue to next task
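The checkbox flip above can be sketched as a targeted `sed` edit. The file contents and task text here are made-up examples, not real change data:

```shell
# Illustrative tasks file; real edits target the change's tasks file
tasks_file=$(mktemp)
printf -- '- [x] Set up schema\n- [ ] Add login endpoint\n' > "$tasks_file"

# Flip the completed task's checkbox from "- [ ]" to "- [x]"
sed -i.bak 's/^- \[ \] Add login endpoint$/- [x] Add login endpoint/' "$tasks_file"
```

Anchoring the pattern to the full line avoids flipping a different task whose text happens to contain the same words.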
   **Pause if:**
   - Task is unclear → ask for clarification
   - Implementation reveals a design issue → suggest updating artifacts
   - Error or blocker encountered → report and wait for guidance
   - User interrupts

7. **On completion or pause, show status**

   Display:
   - Tasks completed this session
   - Overall progress: "N/M tasks complete"
   - If all done: suggest archive
   - If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! You can archive this change with `/opsx:archive`.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If a task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update the task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use `contextFiles` from CLI output; don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: if implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
## .claude/commands/opsx/archive.md (new file, 157 lines)

---
name: "OPSX: Archive"
description: Archive a completed change in the experimental workflow
category: Workflow
tags: [workflow, archive, experimental]
---

Archive a completed change in the experimental workflow.

**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.

**Steps**

1. **If no change name provided, prompt for selection**

   Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.

   Show only active changes (not already archived).
   Include the schema used for each change if available.

   **IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.

2. **Check artifact completion status**

   Run `openspec status --change "<name>" --json` to check artifact completion.

   Parse the JSON to understand:
   - `schemaName`: The workflow being used
   - `artifacts`: List of artifacts with their status (`done` or other)

   **If any artifacts are not `done`:**
   - Display a warning listing incomplete artifacts
   - Prompt the user for confirmation to continue
   - Proceed if the user confirms

3. **Check task completion status**

   Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

   Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
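The count can be sketched with `grep`; the file contents below are an illustrative stand-in for a real tasks file:

```shell
# Illustrative tasks file with two complete and one incomplete task
tasks_file=$(mktemp)
printf -- '- [x] Write proposal\n- [x] Draft design\n- [ ] Implement endpoint\n' > "$tasks_file"

# -c counts matching lines; anchors keep the match on the checkbox marker itself
complete=$(grep -c '^- \[x\]' "$tasks_file")
incomplete=$(grep -c '^- \[ \]' "$tasks_file")
echo "$complete complete, $incomplete incomplete"
```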
   **If incomplete tasks found:**
   - Display a warning showing the count of incomplete tasks
   - Prompt the user for confirmation to continue
   - Proceed if the user confirms

   **If no tasks file exists:** Proceed without a task-related warning.

4. **Assess delta spec sync state**

   Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without a sync prompt.

   **If delta specs exist:**
   - Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
   - Determine what changes would be applied (adds, modifications, removals, renames)
   - Show a combined summary before prompting
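The per-capability comparison can be sketched with `diff` over a sandbox that mirrors the layout above. A raw `diff` is only a rough proxy for the real add/modify/remove/rename analysis, and the change name and spec contents here are invented:

```shell
# Sandbox mirroring openspec/specs and openspec/changes/<name>/specs
root=$(mktemp -d)
mkdir -p "$root/openspec/specs/auth" "$root/openspec/changes/add-auth/specs/auth"
printf 'Requirement A\n' > "$root/openspec/specs/auth/spec.md"
printf 'Requirement A\nRequirement B\n' > "$root/openspec/changes/add-auth/specs/auth/spec.md"

# Diff each delta spec against its corresponding main spec
for delta in "$root"/openspec/changes/add-auth/specs/*/spec.md; do
  cap=$(basename "$(dirname "$delta")")
  added=$(diff "$root/openspec/specs/$cap/spec.md" "$delta" | grep -c '^>')
  echo "$cap: $added added line(s)"
done
```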
   **Prompt options:**
   - If changes needed: "Sync now (recommended)", "Archive without syncing"
   - If already synced: "Archive now", "Sync anyway", "Cancel"

   If the user chooses sync, use the Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.

5. **Perform the archive**

   Create the archive directory if it doesn't exist:

   ```bash
   mkdir -p openspec/changes/archive
   ```

   Generate the target name using the current date: `YYYY-MM-DD-<change-name>`

   **Check if the target already exists:**
   - If yes: Fail with an error; suggest renaming the existing archive or using a different date
   - If no: Move the change directory to the archive

   ```bash
   mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
   ```
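The naming, collision check, and move above can be sketched together in a sandbox; the real command operates on `openspec/changes/`, and the change name is invented:

```shell
# Sandbox standing in for the repository's openspec/changes tree
root=$(mktemp -d)
mkdir -p "$root/openspec/changes/add-auth" "$root/openspec/changes/archive"

name="add-auth"
# date +%F emits YYYY-MM-DD, giving the YYYY-MM-DD-<name> target
target="$root/openspec/changes/archive/$(date +%F)-$name"

if [ -e "$target" ]; then
  echo "error: $target already exists" >&2
else
  mv "$root/openspec/changes/$name" "$target"
fi
```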
6. **Display summary**

   Show an archive completion summary including:
   - Change name
   - Schema that was used
   - Archive location
   - Spec sync status (synced / sync skipped / no delta specs)
   - Note about any warnings (incomplete artifacts/tasks)

**Output On Success**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs

All artifacts complete. All tasks complete.
```

**Output On Success (No Delta Specs)**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs

All artifacts complete. All tasks complete.
```

**Output On Success With Warnings**

```
## Archive Complete (with warnings)

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)

**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)

Review the archive if this was not intentional.
```

**Output On Error (Archive Exists)**

```
## Archive Failed

**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/

Target archive directory already exists.

**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```

**Guardrails**
- Always prompt for change selection if not provided
- Use the artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve `.openspec.yaml` when moving to archive (it moves with the directory)
- Show a clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
## .claude/commands/opsx/explore.md (new file, 173 lines)

---
name: "OPSX: Explore"
description: "Enter explore mode - think through ideas, investigate problems, clarify requirements"
category: Workflow
tags: [workflow, explore, experimental, thinking]
---

Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.

**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks - that's capturing thinking, not implementing.

**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.

**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)

---

## The Stance

- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize

---

## What You Might Do

Depending on what the user brings, you might:

**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies

**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity

**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)

**Visualize**
```
┌─────────────────────────────────────────┐
│  Use ASCII diagrams liberally           │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐         ┌────────┐         │
│   │ State  │────────▶│ State  │         │
│   │   A    │         │   B    │         │
│   └────────┘         └────────┘         │
│                                         │
│  System diagrams, state machines,       │
│  data flows, architecture sketches,     │
│  dependency graphs, comparison tables   │
│                                         │
└─────────────────────────────────────────┘
```

**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---

## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:
```bash
openspec list --json
```

This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on

If the user mentioned a specific change name, read its artifacts for context.

### When no change exists

Think freely. When insights crystallize, you might offer:

- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.

2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.

---

## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)

---

## Ending Discovery

There's no required ending. Discovery might:

- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"

When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.

---

## Guardrails

- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
## .claude/commands/opsx/propose.md (new file, 106 lines)

---
name: "OPSX: Propose"
description: Propose a new change - create it and generate all artifacts in one step
category: Workflow
tags: [workflow, artifacts, experimental]
---

Propose a new change - create the change and generate all artifacts in one step.

I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)

When ready to implement, run `/opsx:apply`

---

**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.

**Steps**

1. **If no input provided, ask what they want to build**

   Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
   > "What change do you want to work on? Describe what you want to build or fix."

   From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
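A mechanical first pass at kebab-casing (lowercase, runs of non-alphanumerics to hyphens) can be sketched as below; the `add-user-auth` example above also shortens words, which this sketch does not attempt:

```shell
describe="Add User Authentication"

# Lowercase, collapse non-alphanumeric runs to '-', trim edge hyphens
name=$(printf '%s' "$describe" \
  | tr '[:upper:]' '[:lower:]' \
  | sed -e 's/[^a-z0-9]\{1,\}/-/g' -e 's/^-//' -e 's/-$//')

echo "$name"
```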
   **IMPORTANT**: Do NOT proceed without understanding what the user wants to build.

2. **Create the change directory**

   ```bash
   openspec new change "<name>"
   ```

   This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.

3. **Get the artifact build order**

   ```bash
   openspec status --change "<name>" --json
   ```

   Parse the JSON to get:
   - `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
   - `artifacts`: list of all artifacts with their status and dependencies
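Checking whether every `applyRequires` artifact is done might look like this with `jq`; the payload is a hypothetical stand-in for real `openspec status` output:

```shell
# Hypothetical example payload; in practice it comes from:
#   openspec status --change "<name>" --json
status_json='{"applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"ready"}]}'

# Required artifact IDs minus those already done
pending=$(printf '%s' "$status_json" | jq -r '
  .applyRequires - [.artifacts[] | select(.status == "done") | .id] | .[]')

if [ -n "$pending" ]; then
  echo "still pending: $pending"
else
  echo "apply-ready"
fi
```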
4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done

   c. **If an artifact requires user input** (unclear context):
      - Use the **AskUserQuestion tool** to clarify
      - Then continue with creation

5. **Show final status**

   ```bash
   openspec status --change "<name>"
   ```

**Output**

After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."

**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
  - Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
  - These guide what you write, but should never appear in the output

**Guardrails**
- Create ALL artifacts needed for implementation (as defined by the schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if the user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to the next
## .claude/skills/openspec-apply-change/SKILL.md (new file, 156 lines)

---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.

**Steps**

1. **Select the change**

   If a name is provided, use it. Otherwise:
   - Infer from conversation context if the user mentioned a change
   - Auto-select if only one active change exists
   - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

   Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).

2. **Check status to understand the schema**

   ```bash
   openspec status --change "<name>" --json
   ```

   Parse the JSON to understand:
   - `schemaName`: The workflow being used (e.g., "spec-driven")
   - Which artifact contains the tasks (typically "tasks" for spec-driven; check status for others)

3. **Get apply instructions**

   ```bash
   openspec instructions apply --change "<name>" --json
   ```

   This returns:
   - Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
   - Progress (total, complete, remaining)
   - Task list with status
   - Dynamic instruction based on current state

   **Handle states:**
   - If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
   - If `state: "all_done"`: congratulate, suggest archive
   - Otherwise: proceed to implementation

4. **Read context files**

   Read the files listed in `contextFiles` from the apply instructions output.
   The files depend on the schema being used:
   - **spec-driven**: proposal, specs, design, tasks
   - Other schemas: follow the `contextFiles` from CLI output

5. **Show current progress**

   Display:
   - Schema being used
   - Progress: "N/M tasks complete"
   - Remaining tasks overview
   - Dynamic instruction from CLI

6. **Implement tasks (loop until done or blocked)**

   For each pending task:
   - Show which task is being worked on
   - Make the code changes required
   - Keep changes minimal and focused
   - Mark task complete in the tasks file: `- [ ]` → `- [x]`
   - Continue to next task

   **Pause if:**
   - Task is unclear → ask for clarification
   - Implementation reveals a design issue → suggest updating artifacts
   - Error or blocker encountered → report and wait for guidance
   - User interrupts

7. **On completion or pause, show status**

   Display:
   - Tasks completed this session
   - Overall progress: "N/M tasks complete"
   - If all done: suggest archive
   - If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! Ready to archive this change.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If a task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update the task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use `contextFiles` from CLI output; don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: if implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
114 .claude/skills/openspec-archive-change/SKILL.md Normal file
@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Archive a completed change in the experimental workflow.

**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.

**Steps**

1. **If no change name provided, prompt for selection**

   Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.

   Show only active changes (not already archived).
   Include the schema used for each change if available.

   **IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.

2. **Check artifact completion status**

   Run `openspec status --change "<name>" --json` to check artifact completion.

   Parse the JSON to understand:
   - `schemaName`: The workflow being used
   - `artifacts`: List of artifacts with their status (`done` or other)

   **If any artifacts are not `done`:**
   - Display warning listing incomplete artifacts
   - Use **AskUserQuestion tool** to confirm user wants to proceed
   - Proceed if user confirms

3. **Check task completion status**

   Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

   Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).

   **If incomplete tasks found:**
   - Display warning showing count of incomplete tasks
   - Use **AskUserQuestion tool** to confirm user wants to proceed
   - Proceed if user confirms

   **If no tasks file exists:** Proceed without task-related warning.
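The count in step 3 is a pair of line-anchored regex matches. A minimal sketch, assuming only the checkbox convention described above:

```python
import re

def task_progress(tasks_md: str) -> tuple[int, int]:
    """Return (complete, incomplete) counts of checkbox tasks in a tasks file."""
    complete = len(re.findall(r"^\s*- \[x\] ", tasks_md, re.MULTILINE))
    incomplete = len(re.findall(r"^\s*- \[ \] ", tasks_md, re.MULTILINE))
    return complete, incomplete

doc = "- [x] Task 1\n- [x] Task 2\n- [ ] Task 3\n"
print(task_progress(doc))  # → (2, 1)
```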

4. **Assess delta spec sync state**

   Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.

   **If delta specs exist:**
   - Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
   - Determine what changes would be applied (adds, modifications, removals, renames)
   - Show a combined summary before prompting

   **Prompt options:**
   - If changes needed: "Sync now (recommended)", "Archive without syncing"
   - If already synced: "Archive now", "Sync anyway", "Cancel"

   If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.

5. **Perform the archive**

   Create the archive directory if it doesn't exist:
   ```bash
   mkdir -p openspec/changes/archive
   ```

   Generate target name using current date: `YYYY-MM-DD-<change-name>`

   **Check if target already exists:**
   - If yes: Fail with error, suggest renaming existing archive or using different date
   - If no: Move the change directory to archive

   ```bash
   mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
   ```
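Step 5 end to end is a dated rename with an existence guard. A sketch under stated assumptions: `demo-change` and the temporary directory layout are illustrative, not part of the CLI.

```shell
#!/bin/sh
set -eu
# Illustrative sandbox standing in for a real openspec project root
root="$(mktemp -d)"
mkdir -p "$root/openspec/changes/demo-change"
mkdir -p "$root/openspec/changes/archive"
# Dated target name: YYYY-MM-DD-<change-name>
target="$root/openspec/changes/archive/$(date +%F)-demo-change"
if [ -e "$target" ]; then
  # Fail rather than overwrite an existing archive
  echo "error: archive target already exists: $target" >&2
  exit 1
fi
mv "$root/openspec/changes/demo-change" "$target"
echo "archived to $target"
```

Because `mv` moves the whole directory, `.openspec.yaml` and any delta specs travel with it, which is what the guardrails below rely on.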
6. **Display summary**

   Show archive completion summary including:
   - Change name
   - Schema that was used
   - Archive location
   - Whether specs were synced (if applicable)
   - Note about any warnings (incomplete artifacts/tasks)

**Output On Success**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")

All artifacts complete. All tasks complete.
```

**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve `.openspec.yaml` when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
288 .claude/skills/openspec-explore/SKILL.md Normal file
@@ -0,0 +1,288 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.

**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.

**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.

---

## The Stance

- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize

---

## What You Might Do

Depending on what the user brings, you might:

**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies

**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity

**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)

**Visualize**
```
┌─────────────────────────────────────────┐
│      Use ASCII diagrams liberally       │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐          ┌────────┐        │
│   │ State  │─────────▶│ State  │        │
│   │   A    │          │   B    │        │
│   └────────┘          └────────┘        │
│                                         │
│   System diagrams, state machines,      │
│   data flows, architecture sketches,    │
│   dependency graphs, comparison tables  │
│                                         │
└─────────────────────────────────────────┘
```

**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---

## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:
```bash
openspec list --json
```

This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on

### When no change exists

Think freely. When insights crystallize, you might offer:

- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.

2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.

---

## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)

---

## Handling Different Entry Points

**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration

You: Real-time collab is a big space. Let me think about this...

     COLLABORATION SPECTRUM
     ════════════════════════════════════════

     Awareness      Coordination          Sync
         │                │                 │
         ▼                ▼                 ▼
     ┌────────┐      ┌────────┐      ┌────────┐
     │Presence│      │Cursors │      │  CRDT  │
     │  "3    │      │ Multi  │      │Conflict│
     │online" │      │ select │      │  free  │
     └────────┘      └────────┘      └────────┘
         │                │                 │
      trivial         moderate          complex

Where's your head at?
```

**User brings a specific problem:**
```
User: The auth system is a mess

You: [reads codebase]

┌─────────────────────────────────────────────┐
│              CURRENT AUTH FLOW              │
└─────────────────────────────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
   ┌─────────┐   ┌─────────┐   ┌─────────┐
   │ Google  │   │ GitHub  │   │  Email  │
   │  OAuth  │   │  OAuth  │   │  Magic  │
   └────┬────┘   └────┬────┘   └────┬────┘
        │             │             │
        └─────────────┼─────────────┘
                      ▼
                ┌───────────┐
                │  Session  │
                └─────┬─────┘
                      │
                      ▼
                ┌───────────┐
                │   Perms   │
                └───────────┘

I see three tangles. Which one's burning?
```

**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
      The OAuth integration is more complex than expected

You: [reads change artifacts]

You're on task 4: "Implement OAuth flow"

Let me trace what's involved...

[draws diagram, explores options, suggests paths]

Want to update the design to reflect this?
Or add a spike task to investigate?
```

**User wants to compare options:**
```
User: Should we use Postgres or SQLite?

You: Generic answer is boring. What's the context?

User: A CLI tool that tracks local dev environments

You: That changes everything.

┌─────────────────────────────────────────────────┐
│              CLI TOOL DATA STORAGE              │
└─────────────────────────────────────────────────┘

Key constraints:
• No daemon running
• Must work offline
• Single user

              SQLite          Postgres
Deployment    embedded ✓      needs server ✗
Offline       yes ✓           no ✗
Single file   yes ✓           no ✗

SQLite. Not even close.

Unless... is there a sync component?
```

---

## Ending Discovery

There's no required ending. Discovery might:

- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"

When it feels like things are crystallizing, you might summarize:

```
## What We Figured Out

**The problem**: [crystallized understanding]

**The approach**: [if one emerged]

**Open questions**: [if any remain]

**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```

But this summary is optional. Sometimes the thinking IS the value.

---

## Guardrails

- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
110 .claude/skills/openspec-propose/SKILL.md Normal file
@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Propose a new change - create the change and generate all artifacts in one step.

I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)

When ready to implement, run /opsx:apply

---

**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.

**Steps**

1. **If no clear input provided, ask what they want to build**

   Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
   > "What change do you want to work on? Describe what you want to build or fix."

   From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).

   **IMPORTANT**: Do NOT proceed without understanding what the user wants to build.

2. **Create the change directory**
   ```bash
   openspec new change "<name>"
   ```
   This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
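The kebab-case derivation in step 1 can be sketched as a lowercase-and-join over word tokens. Note this sketch does not attempt the shortening shown in the example ("authentication" → "auth"); that abbreviation is a judgment call left to the agent. The helper name is illustrative:

```python
import re

def kebab_name(description: str) -> str:
    """Derive a kebab-case change name from a free-form description."""
    # Keep only alphanumeric runs, lowercase, join with hyphens
    words = re.findall(r"[a-z0-9]+", description.lower())
    return "-".join(words)

print(kebab_name("Add User Authentication!"))  # → add-user-authentication
```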

3. **Get the artifact build order**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to get:
   - `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
   - `artifacts`: list of all artifacts with their status and dependencies
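The "apply-ready" check described in steps 3-4 reduces to a set comparison over that JSON. A minimal sketch; the exact JSON shape is an assumption inferred from the fields this skill names (`applyRequires`, `artifacts[].id`, `artifacts[].status`):

```python
import json

def apply_ready(status_json: str) -> bool:
    """True when every artifact in applyRequires has status "done"."""
    status = json.loads(status_json)
    done = {a["id"] for a in status["artifacts"] if a["status"] == "done"}
    return all(aid in done for aid in status["applyRequires"])

sample = json.dumps({
    "applyRequires": ["tasks"],
    "artifacts": [
        {"id": "proposal", "status": "done"},
        {"id": "tasks", "status": "ready"},
    ],
})
print(apply_ready(sample))  # → False (tasks not yet done)
```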

4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done

   c. **If an artifact requires user input** (unclear context):
      - Use **AskUserQuestion tool** to clarify
      - Then continue with creation

5. **Show final status**
   ```bash
   openspec status --change "<name>"
   ```

**Output**

After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."

**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
  - Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
  - These guide what you write, but should never appear in the output

**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next
156 .codex/skills/openspec-apply-change/SKILL.md Normal file
@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.

**Steps**

1. **Select the change**

   If a name is provided, use it. Otherwise:
   - Infer from conversation context if the user mentioned a change
   - Auto-select if only one active change exists
   - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

   Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).

2. **Check status to understand the schema**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to understand:
   - `schemaName`: The workflow being used (e.g., "spec-driven")
   - Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)

3. **Get apply instructions**

   ```bash
   openspec instructions apply --change "<name>" --json
   ```

   This returns:
   - Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
   - Progress (total, complete, remaining)
   - Task list with status
   - Dynamic instruction based on current state

   **Handle states:**
   - If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
   - If `state: "all_done"`: congratulate, suggest archive
   - Otherwise: proceed to implementation
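The state handling in step 3 is a three-way dispatch on one field. A hedged sketch; the messages and the assumption that `state` is a top-level key come from the states this skill names, not from documented CLI output:

```python
import json

def apply_state_message(instructions_json: str) -> str:
    """Map the `state` field of the apply instructions to the next action."""
    state = json.loads(instructions_json).get("state")
    if state == "blocked":
        return "Missing artifacts - consider openspec-continue-change."
    if state == "all_done":
        return "All tasks complete! Ready to archive."
    # Any other state: proceed with implementation
    return "Proceeding to implementation."

print(apply_state_message('{"state": "all_done"}'))
```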
4. **Read context files**

   Read the files listed in `contextFiles` from the apply instructions output.
   The files depend on the schema being used:
   - **spec-driven**: proposal, specs, design, tasks
   - Other schemas: follow the contextFiles from CLI output

5. **Show current progress**

   Display:
   - Schema being used
   - Progress: "N/M tasks complete"
   - Remaining tasks overview
   - Dynamic instruction from CLI

6. **Implement tasks (loop until done or blocked)**

   For each pending task:
   - Show which task is being worked on
   - Make the code changes required
   - Keep changes minimal and focused
   - Mark task complete in the tasks file: `- [ ]` → `- [x]`
   - Continue to next task

   **Pause if:**
   - Task is unclear → ask for clarification
   - Implementation reveals a design issue → suggest updating artifacts
   - Error or blocker encountered → report and wait for guidance
   - User interrupts

7. **On completion or pause, show status**

   Display:
   - Tasks completed this session
   - Overall progress: "N/M tasks complete"
   - If all done: suggest archive
   - If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! Ready to archive this change.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
114 .codex/skills/openspec-archive-change/SKILL.md Normal file
@@ -0,0 +1,114 @@
|
|||||||
|
---
|
||||||
|
name: openspec-archive-change
|
||||||
|
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
|
||||||
|
license: MIT
|
||||||
|
compatibility: Requires openspec CLI.
|
||||||
|
metadata:
|
||||||
|
author: openspec
|
||||||
|
version: "1.0"
|
||||||
|
generatedBy: "1.2.0"
|
||||||
|
---
|
||||||
|
|
||||||
|
Archive a completed change in the experimental workflow.
|
||||||
|
|
||||||
|
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
|
||||||
|
|
||||||
|
**Steps**
|
||||||
|
|
||||||
|
1. **If no change name provided, prompt for selection**
|
||||||
|
|
||||||
|
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
|
||||||
|
|
||||||
|
Show only active changes (not already archived).
|
||||||
|
Include the schema used for each change if available.
|
||||||
|
|
||||||
|
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
|
||||||
|
|
||||||
|
2. **Check artifact completion status**
|
||||||
|
|
||||||
|
Run `openspec status --change "<name>" --json` to check artifact completion.
|
||||||
|
|
||||||
|
Parse the JSON to understand:
|
||||||
|
- `schemaName`: The workflow being used
|
||||||
|
- `artifacts`: List of artifacts with their status (`done` or other)
|
||||||
|
|
||||||
|
**If any artifacts are not `done`:**
|
||||||
|
- Display warning listing incomplete artifacts
|
||||||
|
- Use **AskUserQuestion tool** to confirm user wants to proceed
|
||||||
|
- Proceed if user confirms
|
||||||
|
|
||||||
|
3. **Check task completion status**

   Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

   Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).

   **If incomplete tasks found:**
   - Display warning showing count of incomplete tasks
   - Use **AskUserQuestion tool** to confirm user wants to proceed
   - Proceed if user confirms

   **If no tasks file exists:** Proceed without task-related warning.
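The checkbox counts can be taken with `grep`; a minimal sketch (the sample tasks file and task names are purely illustrative):

```shell
# Count incomplete vs complete tasks (sample tasks.md for illustration).
printf -- '- [x] Scaffold change\n- [ ] Implement OAuth flow\n- [ ] Write tests\n' > /tmp/tasks-demo.md
incomplete=$(grep -c '^- \[ \]' /tmp/tasks-demo.md)
complete=$(grep -c '^- \[x\]' /tmp/tasks-demo.md)
echo "$complete complete, $incomplete incomplete"   # 1 complete, 2 incomplete
```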
4. **Assess delta spec sync state**

   Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.

   **If delta specs exist:**
   - Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
   - Determine what changes would be applied (adds, modifications, removals, renames)
   - Show a combined summary before prompting

   **Prompt options:**
   - If changes needed: "Sync now (recommended)", "Archive without syncing"
   - If already synced: "Archive now", "Sync anyway", "Cancel"

   If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
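The per-capability comparison can be sketched as a `diff` loop over the delta specs (the sample tree below is fabricated for illustration; only the directory layout comes from this document):

```shell
# Diff each delta spec against its main-spec counterpart (sample tree for illustration).
root=$(mktemp -d); name=add-auth
mkdir -p "$root/openspec/changes/$name/specs/auth" "$root/openspec/specs/auth"
echo '## ADDED Requirements' > "$root/openspec/changes/$name/specs/auth/spec.md"
echo '## Requirements' > "$root/openspec/specs/auth/spec.md"
synced=yes
for delta in "$root/openspec/changes/$name"/specs/*/spec.md; do
  capability=$(basename "$(dirname "$delta")")
  if ! diff -q "$root/openspec/specs/$capability/spec.md" "$delta" >/dev/null; then
    echo "$capability: needs sync"
    synced=no
  fi
done
```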
5. **Perform the archive**

   Create the archive directory if it doesn't exist:
   ```bash
   mkdir -p openspec/changes/archive
   ```

   Generate target name using current date: `YYYY-MM-DD-<change-name>`

   **Check if target already exists:**
   - If yes: Fail with error, suggest renaming existing archive or using different date
   - If no: Move the change directory to archive

   ```bash
   mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
   ```
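End to end, the guarded move looks roughly like this (a temp directory stands in for the repo root, purely for illustration):

```shell
# Date-stamped archive move with an existence check.
root=$(mktemp -d); name=add-auth
mkdir -p "$root/openspec/changes/$name"
target="$root/openspec/changes/archive/$(date +%F)-$name"   # %F = YYYY-MM-DD
if [ -e "$target" ]; then
  echo "Archive target already exists: $target" >&2
else
  mkdir -p "$root/openspec/changes/archive"
  mv "$root/openspec/changes/$name" "$target"
  echo "Archived to $target"
fi
```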
6. **Display summary**

   Show archive completion summary including:
   - Change name
   - Schema that was used
   - Archive location
   - Whether specs were synced (if applicable)
   - Note about any warnings (incomplete artifacts/tasks)

**Output On Success**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")

All artifacts complete. All tasks complete.
```

**Guardrails**

- Always prompt for change selection if not provided
- Use artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve `.openspec.yaml` when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
288
.codex/skills/openspec-explore/SKILL.md
Normal file
@@ -0,0 +1,288 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.

**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.

**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.

---

## The Stance

- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize

---
## What You Might Do

Depending on what the user brings, you might:

**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies

**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity

**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)

**Visualize**
```
┌─────────────────────────────────────────┐
│     Use ASCII diagrams liberally        │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐         ┌────────┐         │
│   │ State  │────────▶│ State  │         │
│   │   A    │         │   B    │         │
│   └────────┘         └────────┘         │
│                                         │
│  System diagrams, state machines,       │
│  data flows, architecture sketches,     │
│  dependency graphs, comparison tables   │
│                                         │
└─────────────────────────────────────────┘
```

**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---
## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:
```bash
openspec list --json
```

This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on

### When no change exists

Think freely. When insights crystallize, you might offer:

- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.

2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.

---
## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)

---

## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration

You: Real-time collab is a big space. Let me think about this...

     COLLABORATION SPECTRUM
     ════════════════════════════════════════════

  Awareness       Coordination        Sync
      │                │                │
      ▼                ▼                ▼
  ┌────────┐      ┌────────┐      ┌────────┐
  │Presence│      │Cursors │      │  CRDT  │
  │  "3    │      │ Multi  │      │Conflict│
  │online" │      │ select │      │  free  │
  └────────┘      └────────┘      └────────┘
      │                │                │
   trivial         moderate         complex

Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess

You: [reads codebase]

┌─────────────────────────────────────────────┐
│              CURRENT AUTH FLOW              │
└─────────────────────────────────────────────┘
                     │
       ┌─────────────┼─────────────┐
       ▼             ▼             ▼
  ┌─────────┐   ┌─────────┐   ┌─────────┐
  │ Google  │   │ GitHub  │   │  Email  │
  │  OAuth  │   │  OAuth  │   │  Magic  │
  └────┬────┘   └────┬────┘   └────┬────┘
       │             │             │
       └─────────────┼─────────────┘
                     ▼
               ┌───────────┐
               │  Session  │
               └─────┬─────┘
                     │
                     ▼
               ┌───────────┐
               │   Perms   │
               └───────────┘

I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected

You: [reads change artifacts]

You're on task 4: "Implement OAuth flow"

Let me trace what's involved...

[draws diagram, explores options, suggests paths]

Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?

You: Generic answer is boring. What's the context?

User: A CLI tool that tracks local dev environments

You: That changes everything.

┌─────────────────────────────────────────────────┐
│              CLI TOOL DATA STORAGE              │
└─────────────────────────────────────────────────┘

Key constraints:
  • No daemon running
  • Must work offline
  • Single user

              SQLite          Postgres
Deployment    embedded ✓      needs server ✗
Offline       yes ✓           no ✗
Single file   yes ✓           no ✗

SQLite. Not even close.

Unless... is there a sync component?
```

---
## Ending Discovery

There's no required ending. Discovery might:

- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"

When it feels like things are crystallizing, you might summarize:

```
## What We Figured Out

**The problem**: [crystallized understanding]

**The approach**: [if one emerged]

**Open questions**: [if any remain]

**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```

But this summary is optional. Sometimes the thinking IS the value.

---

## Guardrails

- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
110
.codex/skills/openspec-propose/SKILL.md
Normal file
@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Propose a new change - create the change and generate all artifacts in one step.

I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)

When ready to implement, run /opsx:apply

---

**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.

**Steps**

1. **If no clear input provided, ask what they want to build**

   Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
   > "What change do you want to work on? Describe what you want to build or fix."

   From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).

   **IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
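Mechanically, the derivation is lowercase-and-hyphenate; a minimal sketch (abbreviating words like "authentication" to "auth" is a judgment call this one-liner does not make):

```shell
# Lowercase, replace non-alphanumeric runs with "-", trim stray hyphens.
description="Add user authentication"
name=$(printf '%s' "$description" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//')
echo "$name"   # add-user-authentication
```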
2. **Create the change directory**

   ```bash
   openspec new change "<name>"
   ```

   This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.

3. **Get the artifact build order**

   ```bash
   openspec status --change "<name>" --json
   ```

   Parse the JSON to get:
   - `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
   - `artifacts`: list of all artifacts with their status and dependencies
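With `jq` installed, both fields can be pulled out directly. The inline JSON below is a fabricated stand-in for real CLI output; the field names come from this document, not from a verified CLI contract:

```shell
# Extract applyRequires and per-artifact status from a status payload (sample JSON).
status='{"schemaName":"spec-driven","applyRequires":["tasks"],"artifacts":[{"id":"tasks","status":"pending"}]}'
echo "$status" | jq -r '.applyRequires[]'                      # tasks
echo "$status" | jq -r '.artifacts[] | "\(.id): \(.status)"'   # tasks: pending
```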
4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done

   c. **If an artifact requires user input** (unclear context):
      - Use **AskUserQuestion tool** to clarify
      - Then continue with creation

5. **Show final status**

   ```bash
   openspec status --change "<name>"
   ```

**Output**

After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."

**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

**Guardrails**

- Create ALL artifacts needed for implementation (as defined by the schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if the user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to the next
152
.cursor/commands/opsx-apply.md
Normal file
@@ -0,0 +1,152 @@
---
name: /opsx-apply
id: opsx-apply
category: Workflow
description: Implement tasks from an OpenSpec change (Experimental)
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.

**Steps**

1. **Select the change**

   If a name is provided, use it. Otherwise:
   - Infer from conversation context if the user mentioned a change
   - Auto-select if only one active change exists
   - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

   Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).

2. **Check status to understand the schema**

   ```bash
   openspec status --change "<name>" --json
   ```

   Parse the JSON to understand:
   - `schemaName`: The workflow being used (e.g., "spec-driven")
   - Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**

   ```bash
   openspec instructions apply --change "<name>" --json
   ```

   This returns:
   - Context file paths (varies by schema)
   - Progress (total, complete, remaining)
   - Task list with status
   - Dynamic instruction based on current state

   **Handle states:**
   - If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
   - If `state: "all_done"`: congratulate, suggest archive
   - Otherwise: proceed to implementation
4. **Read context files**

   Read the files listed in `contextFiles` from the apply instructions output.
   The files depend on the schema being used:
   - **spec-driven**: proposal, specs, design, tasks
   - Other schemas: follow the `contextFiles` from CLI output

5. **Show current progress**

   Display:
   - Schema being used
   - Progress: "N/M tasks complete"
   - Remaining tasks overview
   - Dynamic instruction from CLI

6. **Implement tasks (loop until done or blocked)**

   For each pending task:
   - Show which task is being worked on
   - Make the code changes required
   - Keep changes minimal and focused
   - Mark task complete in the tasks file: `- [ ]` → `- [x]`
   - Continue to next task

   **Pause if:**
   - Task is unclear → ask for clarification
   - Implementation reveals a design issue → suggest updating artifacts
   - Error or blocker encountered → report and wait for guidance
   - User interrupts
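Flipping a checkbox is a one-line edit to the tasks file; a minimal sketch (the sample file and task text are illustrative):

```shell
# Mark one task complete in tasks.md by rewriting its checkbox.
printf -- '- [x] Scaffold change\n- [ ] Implement OAuth flow\n' > /tmp/tasks-demo.md
sed -i 's/^- \[ \] Implement OAuth flow$/- [x] Implement OAuth flow/' /tmp/tasks-demo.md
grep -c '^- \[x\]' /tmp/tasks-demo.md   # 2
```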
7. **On completion or pause, show status**

   Display:
   - Tasks completed this session
   - Overall progress: "N/M tasks complete"
   - If all done: suggest archive
   - If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! You can archive this change with `/opsx:archive`.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**

- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use `contextFiles` from CLI output, don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
157
.cursor/commands/opsx-archive.md
Normal file
@@ -0,0 +1,157 @@
---
name: /opsx-archive
id: opsx-archive
category: Workflow
description: Archive a completed change in the experimental workflow
---

Archive a completed change in the experimental workflow.

**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.

**Steps**

1. **If no change name provided, prompt for selection**

   Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.

   Show only active changes (not already archived).
   Include the schema used for each change if available.

   **IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.

2. **Check artifact completion status**

   Run `openspec status --change "<name>" --json` to check artifact completion.

   Parse the JSON to understand:
   - `schemaName`: The workflow being used
   - `artifacts`: List of artifacts with their status (`done` or other)

   **If any artifacts are not `done`:**
   - Display warning listing incomplete artifacts
   - Prompt user for confirmation to continue
   - Proceed if user confirms

3. **Check task completion status**

   Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

   Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).

   **If incomplete tasks found:**
   - Display warning showing count of incomplete tasks
   - Prompt user for confirmation to continue
   - Proceed if user confirms

   **If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**

   Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.

   **If delta specs exist:**
   - Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
   - Determine what changes would be applied (adds, modifications, removals, renames)
   - Show a combined summary before prompting

   **Prompt options:**
   - If changes needed: "Sync now (recommended)", "Archive without syncing"
   - If already synced: "Archive now", "Sync anyway", "Cancel"

   If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.

5. **Perform the archive**

   Create the archive directory if it doesn't exist:
   ```bash
   mkdir -p openspec/changes/archive
   ```

   Generate target name using current date: `YYYY-MM-DD-<change-name>`

   **Check if target already exists:**
   - If yes: Fail with error, suggest renaming existing archive or using different date
   - If no: Move the change directory to archive

   ```bash
   mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
   ```
6. **Display summary**

   Show archive completion summary including:
   - Change name
   - Schema that was used
   - Archive location
   - Spec sync status (synced / sync skipped / no delta specs)
   - Note about any warnings (incomplete artifacts/tasks)

**Output On Success**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs

All artifacts complete. All tasks complete.
```

**Output On Success (No Delta Specs)**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs

All artifacts complete. All tasks complete.
```

**Output On Success With Warnings**

```
## Archive Complete (with warnings)

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)

**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)

Review the archive if this was not intentional.
```

**Output On Error (Archive Exists)**

```
## Archive Failed

**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/

Target archive directory already exists.

**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```

**Guardrails**

- Always prompt for change selection if not provided
- Use artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve `.openspec.yaml` when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
173 .cursor/commands/opsx-explore.md Normal file
@@ -0,0 +1,173 @@
---
name: /opsx-explore
id: opsx-explore
category: Workflow
description: "Enter explore mode - think through ideas, investigate problems, clarify requirements"
---

Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.

**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.

**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.

**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)

---

## The Stance

- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize

---

## What You Might Do

Depending on what the user brings, you might:

**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies

**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity

**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)

**Visualize**
```
┌─────────────────────────────────────────┐
│      Use ASCII diagrams liberally       │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐          ┌────────┐        │
│   │ State  │────────▶│ State  │        │
│   │   A    │          │   B    │        │
│   └────────┘          └────────┘        │
│                                         │
│   System diagrams, state machines,      │
│   data flows, architecture sketches,    │
│   dependency graphs, comparison tables  │
│                                         │
└─────────────────────────────────────────┘
```

**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---

## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:
```bash
openspec list --json
```

This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on

If the user mentioned a specific change name, read its artifacts for context.

### When no change exists

Think freely. When insights crystallize, you might offer:

- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.

2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.

---

## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)

---

## Ending Discovery

There's no required ending. Discovery might:

- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"

When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.

---

## Guardrails

- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
106 .cursor/commands/opsx-propose.md Normal file
@@ -0,0 +1,106 @@
---
name: /opsx-propose
id: opsx-propose
category: Workflow
description: Propose a new change - create it and generate all artifacts in one step
---

Propose a new change - create the change and generate all artifacts in one step.

I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)

When ready to implement, run /opsx:apply

---

**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.

**Steps**

1. **If no input provided, ask what they want to build**

   Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
   > "What change do you want to work on? Describe what you want to build or fix."

   From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
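A rough shell one-liner for that kebab-case derivation, as a sketch; note that an agent may also abbreviate words (e.g. "authentication" → `auth`), which this mechanical transform does not do:

```bash
# Lowercase, replace runs of non-alphanumerics with '-', trim edge dashes.
echo "Add User Authentication" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//'
# → add-user-authentication
```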
   **IMPORTANT**: Do NOT proceed without understanding what the user wants to build.

2. **Create the change directory**
   ```bash
   openspec new change "<name>"
   ```
   This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.

3. **Get the artifact build order**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to get:
   - `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
   - `artifacts`: list of all artifacts with their status and dependencies

4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done
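The apply-ready check in step 4b can be sketched with `jq`. The JSON sample here is made up to match the field names this document describes; it is not real openspec output.

```bash
# Made-up status JSON shaped like the fields described above.
status='{"applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"done"}]}'
# List any required artifact whose status is not "done".
pending=$(echo "$status" | jq -r '
  .applyRequires[] as $id
  | .artifacts[]
  | select(.id == $id and .status != "done")
  | .id')
if [ -z "$pending" ]; then
  echo "apply-ready"
else
  echo "still pending: $pending"
fi
# → apply-ready
```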
   c. **If an artifact requires user input** (unclear context):
      - Use the **AskUserQuestion tool** to clarify
      - Then continue with creation

5. **Show final status**
   ```bash
   openspec status --change "<name>"
   ```

**Output**

After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."

**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
  - Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
  - These guide what you write, but should never appear in the output

**Guardrails**
- Create ALL artifacts needed for implementation (as defined by the schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if the user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to the next
156 .cursor/skills/openspec-apply-change/SKILL.md Normal file
@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.

**Steps**

1. **Select the change**

   If a name is provided, use it. Otherwise:
   - Infer from conversation context if the user mentioned a change
   - Auto-select if only one active change exists
   - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

   Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).

2. **Check status to understand the schema**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to understand:
   - `schemaName`: The workflow being used (e.g., "spec-driven")
   - Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)

3. **Get apply instructions**

   ```bash
   openspec instructions apply --change "<name>" --json
   ```

   This returns:
   - Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
   - Progress (total, complete, remaining)
   - Task list with status
   - Dynamic instruction based on current state
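Pulling those pieces out of the instructions JSON can be sketched with `jq`; the `contextFiles` and `progress` field names follow this document's description, and the sample values are made up for the demo.

```bash
# Made-up sample of the apply-instructions JSON described above.
instructions='{"contextFiles":["proposal.md","design.md","tasks.md"],"progress":{"total":7,"complete":3}}'
echo "$instructions" | jq -r '.contextFiles[]'          # one path per line
echo "$instructions" | jq -r '"\(.progress.complete)/\(.progress.total) tasks complete"'
```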
   **Handle states:**
   - If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
   - If `state: "all_done"`: congratulate, suggest archive
   - Otherwise: proceed to implementation

4. **Read context files**

   Read the files listed in `contextFiles` from the apply instructions output.
   The files depend on the schema being used:
   - **spec-driven**: proposal, specs, design, tasks
   - Other schemas: follow the contextFiles from CLI output

5. **Show current progress**

   Display:
   - Schema being used
   - Progress: "N/M tasks complete"
   - Remaining tasks overview
   - Dynamic instruction from CLI

6. **Implement tasks (loop until done or blocked)**

   For each pending task:
   - Show which task is being worked on
   - Make the code changes required
   - Keep changes minimal and focused
   - Mark task complete in the tasks file: `- [ ]` → `- [x]`
   - Continue to next task

   **Pause if:**
   - Task is unclear → ask for clarification
   - Implementation reveals a design issue → suggest updating artifacts
   - Error or blocker encountered → report and wait for guidance
   - User interrupts
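The checkbox flip in the loop above is a one-line edit. A sketch, where the tasks and filename are demo assumptions and `sed -i.bak` is used for BSD/GNU portability:

```bash
# Demo fixture: a tiny tasks file (not real openspec output).
printf -- '- [ ] Add login route\n- [ ] Write tests\n' > tasks.md
task='Add login route'
# Flip the matching "- [ ]" checkbox to "- [x]".
sed -i.bak "s/^- \[ \] ${task}\$/- [x] ${task}/" tasks.md
grep '^- \[x\]' tasks.md
# → - [x] Add login route
```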
7. **On completion or pause, show status**

   Display:
   - Tasks completed this session
   - Overall progress: "N/M tasks complete"
   - If all done: suggest archive
   - If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! Ready to archive this change.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If a task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update the task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
114 .cursor/skills/openspec-archive-change/SKILL.md Normal file
@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Archive a completed change in the experimental workflow.

**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.

**Steps**

1. **If no change name provided, prompt for selection**

   Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.

   Show only active changes (not already archived).
   Include the schema used for each change if available.

   **IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.

2. **Check artifact completion status**

   Run `openspec status --change "<name>" --json` to check artifact completion.

   Parse the JSON to understand:
   - `schemaName`: The workflow being used
   - `artifacts`: List of artifacts with their status (`done` or other)

   **If any artifacts are not `done`:**
   - Display a warning listing the incomplete artifacts
   - Use the **AskUserQuestion tool** to confirm the user wants to proceed
   - Proceed if the user confirms

3. **Check task completion status**

   Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

   Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
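That count is a pair of greps. A sketch over a made-up tasks file:

```bash
# Demo fixture (not real openspec output).
printf -- '- [x] Done thing\n- [ ] Open thing\n- [ ] Another\n' > tasks.md
incomplete=$(grep -c '^- \[ \]' tasks.md)
complete=$(grep -c '^- \[x\]' tasks.md)
echo "incomplete: $incomplete, complete: $complete"
# → incomplete: 2, complete: 1
```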
   **If incomplete tasks found:**
   - Display a warning showing the count of incomplete tasks
   - Use the **AskUserQuestion tool** to confirm the user wants to proceed
   - Proceed if the user confirms

   **If no tasks file exists:** Proceed without a task-related warning.

4. **Assess delta spec sync state**

   Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without a sync prompt.

   **If delta specs exist:**
   - Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
   - Determine what changes would be applied (adds, modifications, removals, renames)
   - Show a combined summary before prompting

   **Prompt options:**
   - If changes needed: "Sync now (recommended)", "Archive without syncing"
   - If already synced: "Archive now", "Sync anyway", "Cancel"

   If the user chooses sync, use the Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.

5. **Perform the archive**

   Create the archive directory if it doesn't exist:
   ```bash
   mkdir -p openspec/changes/archive
   ```

   Generate the target name using the current date: `YYYY-MM-DD-<change-name>`

   **Check if the target already exists:**
   - If yes: Fail with an error; suggest renaming the existing archive or using a different date
   - If no: Move the change directory to the archive

   ```bash
   mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
   ```

6. **Display summary**

   Show an archive completion summary including:
   - Change name
   - Schema that was used
   - Archive location
   - Whether specs were synced (if applicable)
   - Note about any warnings (incomplete artifacts/tasks)

**Output On Success**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")

All artifacts complete. All tasks complete.
```

**Guardrails**
- Always prompt for change selection if not provided
- Use the artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve `.openspec.yaml` when moving to archive (it moves with the directory)
- Show a clear summary of what happened
- If sync is requested, use the openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
288 .cursor/skills/openspec-explore/SKILL.md Normal file
@@ -0,0 +1,288 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.

**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.

**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.

---

## The Stance

- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize

---

## What You Might Do

Depending on what the user brings, you might:

**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies

**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity

**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)

**Visualize**
```
┌─────────────────────────────────────────┐
│      Use ASCII diagrams liberally       │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐          ┌────────┐        │
│   │ State  │────────▶│ State  │        │
│   │   A    │          │   B    │        │
│   └────────┘          └────────┘        │
│                                         │
│   System diagrams, state machines,      │
│   data flows, architecture sketches,    │
│   dependency graphs, comparison tables  │
│                                         │
└─────────────────────────────────────────┘
```

**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---

## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:
```bash
openspec list --json
```

This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on

### When no change exists

Think freely. When insights crystallize, you might offer:

- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.

2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.

---

## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)

---

## Handling Different Entry Points

**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration

You: Real-time collab is a big space. Let me think about this...

COLLABORATION SPECTRUM
════════════════════════════════════════════

 Awareness      Coordination       Sync
     │               │               │
     ▼               ▼               ▼
 ┌────────┐     ┌────────┐     ┌────────┐
 │Presence│     │Cursors │     │  CRDT  │
 │  "3    │     │ Multi  │     │Conflict│
 │online" │     │ select │     │  free  │
 └────────┘     └────────┘     └────────┘
     │               │               │
  trivial        moderate        complex

Where's your head at?
```

**User brings a specific problem:**
```
User: The auth system is a mess

You: [reads codebase]

┌─────────────────────────────────────────────┐
│              CURRENT AUTH FLOW              │
└─────────────────────────────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
   ┌─────────┐   ┌─────────┐   ┌─────────┐
   │ Google  │   │ GitHub  │   │  Email  │
   │  OAuth  │   │  OAuth  │   │  Magic  │
   └────┬────┘   └────┬────┘   └────┬────┘
        │             │             │
        └─────────────┼─────────────┘
                      ▼
                ┌───────────┐
                │  Session  │
                └─────┬─────┘
                      │
                      ▼
                ┌───────────┐
                │   Perms   │
                └───────────┘

I see three tangles. Which one's burning?
```

**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
|
The OAuth integration is more complex than expected
|
||||||
|
|
||||||
|
You: [reads change artifacts]
|
||||||
|
|
||||||
|
You're on task 4: "Implement OAuth flow"
|
||||||
|
|
||||||
|
Let me trace what's involved...
|
||||||
|
|
||||||
|
[draws diagram, explores options, suggests paths]
|
||||||
|
|
||||||
|
Want to update the design to reflect this?
|
||||||
|
Or add a spike task to investigate?
|
||||||
|
```
|
||||||
|
|
||||||
|
**User wants to compare options:**
|
||||||
|
```
|
||||||
|
User: Should we use Postgres or SQLite?
|
||||||
|
|
||||||
|
You: Generic answer is boring. What's the context?
|
||||||
|
|
||||||
|
User: A CLI tool that tracks local dev environments
|
||||||
|
|
||||||
|
You: That changes everything.
|
||||||
|
|
||||||
|
┌─────────────────────────────────────────────────┐
|
||||||
|
│ CLI TOOL DATA STORAGE │
|
||||||
|
└─────────────────────────────────────────────────┘
|
||||||
|
|
||||||
|
Key constraints:
|
||||||
|
• No daemon running
|
||||||
|
• Must work offline
|
||||||
|
• Single user
|
||||||
|
|
||||||
|
SQLite Postgres
|
||||||
|
Deployment embedded ✓ needs server ✗
|
||||||
|
Offline yes ✓ no ✗
|
||||||
|
Single file yes ✓ no ✗
|
||||||
|
|
||||||
|
SQLite. Not even close.
|
||||||
|
|
||||||
|
Unless... is there a sync component?
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Ending Discovery
|
||||||
|
|
||||||
|
There's no required ending. Discovery might:
|
||||||
|
|
||||||
|
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
|
||||||
|
- **Result in artifact updates**: "Updated design.md with these decisions"
|
||||||
|
- **Just provide clarity**: User has what they need, moves on
|
||||||
|
- **Continue later**: "We can pick this up anytime"
|
||||||
|
|
||||||
|
When it feels like things are crystallizing, you might summarize:
|
||||||
|
|
||||||
|
```
|
||||||
|
## What We Figured Out
|
||||||
|
|
||||||
|
**The problem**: [crystallized understanding]
|
||||||
|
|
||||||
|
**The approach**: [if one emerged]
|
||||||
|
|
||||||
|
**Open questions**: [if any remain]
|
||||||
|
|
||||||
|
**Next steps** (if ready):
|
||||||
|
- Create a change proposal
|
||||||
|
- Keep exploring: just keep talking
|
||||||
|
```
|
||||||
|
|
||||||
|
But this summary is optional. Sometimes the thinking IS the value.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Guardrails
|
||||||
|
|
||||||
|
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
|
||||||
|
- **Don't fake understanding** - If something is unclear, dig deeper
|
||||||
|
- **Don't rush** - Discovery is thinking time, not task time
|
||||||
|
- **Don't force structure** - Let patterns emerge naturally
|
||||||
|
- **Don't auto-capture** - Offer to save insights, don't just do it
|
||||||
|
- **Do visualize** - A good diagram is worth many paragraphs
|
||||||
|
- **Do explore the codebase** - Ground discussions in reality
|
||||||
|
- **Do question assumptions** - Including the user's and your own
|
||||||
110 .cursor/skills/openspec-propose/SKILL.md Normal file

@@ -0,0 +1,110 @@
---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Propose a new change - create the change and generate all artifacts in one step.

I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)

When ready to implement, run /opsx:apply

---

**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.

**Steps**

1. **If no clear input provided, ask what they want to build**

   Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
   > "What change do you want to work on? Describe what you want to build or fix."

   From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
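The mechanical part of that derivation can be sketched in shell (a sketch only: shortening words, e.g. "authentication" → "auth", remains a judgment call, and the `slug` helper is illustrative, not part of the openspec CLI):

```shell
# Lowercase the description, collapse non-alphanumeric runs to hyphens,
# and trim stray hyphens from the ends.
slug() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//'
}
name=$(slug "Add User Authentication!")
echo "$name"   # -> add-user-authentication
```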
   **IMPORTANT**: Do NOT proceed without understanding what the user wants to build.

2. **Create the change directory**
   ```bash
   openspec new change "<name>"
   ```
   This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.

3. **Get the artifact build order**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to get:
   - `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
   - `artifacts`: list of all artifacts with their status and dependencies
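With `jq` on hand (an assumption, not a requirement of the openspec CLI), pulling those two fields out of the status JSON looks like this; the JSON here is an abbreviated stand-in for the real output:

```shell
# Abbreviated stand-in for `openspec status --change "<name>" --json`.
status='{"applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"ready"}]}'
echo "$status" | jq -r '.applyRequires[]'                      # -> tasks
echo "$status" | jq -r '.artifacts[] | "\(.id): \(.status)"'   # one line per artifact
```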
4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done
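That completion check can be expressed as a single `jq` predicate (again assuming `jq` is available; the JSON is an abbreviated stand-in): the set difference between `applyRequires` and the IDs of `done` artifacts must be empty.

```shell
status='{"applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"done"}]}'
# jq -e sets the exit code from the result: 0 when the expression is true.
echo "$status" | jq -e '.applyRequires - [.artifacts[] | select(.status == "done") | .id] == []' > /dev/null && echo "apply-ready"
```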
   c. **If an artifact requires user input** (unclear context):
      - Use the **AskUserQuestion tool** to clarify
      - Then continue with creation

5. **Show final status**
   ```bash
   openspec status --change "<name>"
   ```

**Output**

After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."

**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
  - Do NOT copy `<context>`, `<rules>`, or `<project_context>` blocks into the artifact
  - These guide what you write, but should never appear in the output

**Guardrails**
- Create ALL artifacts needed for implementation (as defined by the schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if the user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to the next
12 .env.example

```diff
@@ -21,11 +21,11 @@ API_BOOTSTRAP_TOKEN=change-me-in-production
 # =============================================================================
 
 # API Service
-API_LISTEN_ADDR=0.0.0.0:8080
+API_LISTEN_ADDR=0.0.0.0:7080
-API_BASE_URL=http://api:8080
+API_BASE_URL=http://api:7080
 
 # Indexer Service
-INDEXER_LISTEN_ADDR=0.0.0.0:8081
+INDEXER_LISTEN_ADDR=0.0.0.0:7081
 INDEXER_SCAN_INTERVAL_SECONDS=5
 
 # Meilisearch Search Engine
@@ -56,8 +56,8 @@ THUMBNAILS_HOST_PATH=../data/thumbnails
 # Port Configuration
 # =============================================================================
 # To change ports, edit docker-compose.yml directly:
-# - API: change "7080:8080" to "YOUR_PORT:8080"
+# - API: change "7080:7080" to "YOUR_PORT:7080"
-# - Indexer: change "7081:8081" to "YOUR_PORT:8081"
+# - Indexer: change "7081:7081" to "YOUR_PORT:7081"
-# - Backoffice: change "7082:8082" to "YOUR_PORT:8082"
+# - Backoffice: change "7082:7082" to "YOUR_PORT:7082"
 # - Meilisearch: change "7700:7700" to "YOUR_PORT:7700"
 # - PostgreSQL: change "6432:5432" to "YOUR_PORT:5432"
```
2 .gitignore vendored

```diff
@@ -2,7 +2,7 @@ target/
 .env
 .DS_Store
 tmp/
-libraries/
+/libraries/
 node_modules/
 .next/
 data/thumbnails
```
149 .opencode/command/opsx-apply.md Normal file

@@ -0,0 +1,149 @@

---
description: Implement tasks from an OpenSpec change (Experimental)
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name (e.g., `/opsx-apply add-auth`). If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.

**Steps**

1. **Select the change**

   If a name is provided, use it. Otherwise:
   - Infer from conversation context if the user mentioned a change
   - Auto-select if only one active change exists
   - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

   Always announce: "Using change: <name>" and how to override (e.g., `/opsx-apply <other>`).

2. **Check status to understand the schema**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to understand:
   - `schemaName`: The workflow being used (e.g., "spec-driven")
   - Which artifact contains the tasks (typically "tasks" for spec-driven; check status for others)

3. **Get apply instructions**
   ```bash
   openspec instructions apply --change "<name>" --json
   ```
   This returns:
   - Context file paths (varies by schema)
   - Progress (total, complete, remaining)
   - Task list with status
   - Dynamic instruction based on current state

   **Handle states:**
   - If `state: "blocked"` (missing artifacts): show a message, suggest using `/opsx-continue`
   - If `state: "all_done"`: congratulate, suggest archive
   - Otherwise: proceed to implementation

4. **Read context files**

   Read the files listed in `contextFiles` from the apply instructions output.
   The files depend on the schema being used:
   - **spec-driven**: proposal, specs, design, tasks
   - Other schemas: follow the contextFiles from the CLI output

5. **Show current progress**

   Display:
   - Schema being used
   - Progress: "N/M tasks complete"
   - Remaining tasks overview
   - Dynamic instruction from the CLI

6. **Implement tasks (loop until done or blocked)**

   For each pending task:
   - Show which task is being worked on
   - Make the code changes required
   - Keep changes minimal and focused
   - Mark the task complete in the tasks file: `- [ ]` → `- [x]`
   - Continue to the next task
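The checkbox flip itself is a one-line edit; as a sketch (assuming GNU `sed -i` and that the task text uniquely identifies its line):

```shell
# Toggle one task from pending to done in a checklist file.
printf -- '- [ ] Implement OAuth flow\n- [ ] Add tests\n' > /tmp/tasks.md
sed -i 's/^- \[ \] Implement OAuth flow$/- [x] Implement OAuth flow/' /tmp/tasks.md
head -n 1 /tmp/tasks.md   # -> - [x] Implement OAuth flow
```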
   **Pause if:**
   - Task is unclear → ask for clarification
   - Implementation reveals a design issue → suggest updating artifacts
   - Error or blocker encountered → report and wait for guidance
   - User interrupts

7. **On completion or pause, show status**

   Display:
   - Tasks completed this session
   - Overall progress: "N/M tasks complete"
   - If all done: suggest archive
   - If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! You can archive this change with `/opsx-archive`.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If a task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update the task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from the CLI output; don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
154 .opencode/command/opsx-archive.md Normal file

@@ -0,0 +1,154 @@

---
description: Archive a completed change in the experimental workflow
---

Archive a completed change in the experimental workflow.

**Input**: Optionally specify a change name after `/opsx-archive` (e.g., `/opsx-archive add-auth`). If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt with the available changes.

**Steps**

1. **If no change name provided, prompt for selection**

   Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.

   Show only active changes (not already archived).
   Include the schema used for each change if available.

   **IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.

2. **Check artifact completion status**

   Run `openspec status --change "<name>" --json` to check artifact completion.

   Parse the JSON to understand:
   - `schemaName`: The workflow being used
   - `artifacts`: List of artifacts with their status (`done` or other)

   **If any artifacts are not `done`:**
   - Display a warning listing the incomplete artifacts
   - Prompt the user for confirmation to continue
   - Proceed if the user confirms

3. **Check task completion status**

   Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

   Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
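A quick way to get those counts (a sketch; it assumes the checklist uses exactly the `- [ ]` / `- [x]` markers at the start of a line):

```shell
# Count complete vs incomplete tasks in a tasks file.
printf -- '- [x] Task 1\n- [ ] Task 2\n- [ ] Task 3\n' > /tmp/tasks-count.md
complete=$(grep -c '^- \[x\]' /tmp/tasks-count.md)
incomplete=$(grep -c '^- \[ \]' /tmp/tasks-count.md)
echo "$complete complete, $incomplete incomplete"   # -> 1 complete, 2 incomplete
```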
   **If incomplete tasks found:**
   - Display a warning showing the count of incomplete tasks
   - Prompt the user for confirmation to continue
   - Proceed if the user confirms

   **If no tasks file exists:** Proceed without a task-related warning.

4. **Assess delta spec sync state**

   Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without a sync prompt.

   **If delta specs exist:**
   - Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
   - Determine what changes would be applied (adds, modifications, removals, renames)
   - Show a combined summary before prompting

   **Prompt options:**
   - If changes are needed: "Sync now (recommended)", "Archive without syncing"
   - If already synced: "Archive now", "Sync anyway", "Cancel"

   If the user chooses sync, use the Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.

5. **Perform the archive**

   Create the archive directory if it doesn't exist:
   ```bash
   mkdir -p openspec/changes/archive
   ```

   Generate the target name using the current date: `YYYY-MM-DD-<change-name>`

   **Check if the target already exists:**
   - If yes: Fail with an error; suggest renaming the existing archive or using a different date
   - If no: Move the change directory to the archive

   ```bash
   mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
   ```
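Step 5's date-stamped move, including the existence check, can be sketched as one small script (illustrative only; it runs in a throwaway directory instead of the real `openspec/changes/` tree):

```shell
# Archive a change under YYYY-MM-DD-<name>, refusing to overwrite.
root=$(mktemp -d)
mkdir -p "$root/changes/add-auth" "$root/changes/archive"
target="$root/changes/archive/$(date +%F)-add-auth"   # date +%F is YYYY-MM-DD
if [ -e "$target" ]; then
  echo "Archive target already exists: $target" >&2
else
  mv "$root/changes/add-auth" "$target"
fi
```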
|
||||||
|
|
||||||
|
6. **Display summary**
|
||||||
|
|
||||||
|
Show archive completion summary including:
|
||||||
|
- Change name
|
||||||
|
- Schema that was used
|
||||||
|
- Archive location
|
||||||
|
- Spec sync status (synced / sync skipped / no delta specs)
|
||||||
|
- Note about any warnings (incomplete artifacts/tasks)
|
||||||
|
|
||||||
|
**Output On Success**
|
||||||
|
|
||||||
|
```
|
||||||
|
## Archive Complete
|
||||||
|
|
||||||
|
**Change:** <change-name>
|
||||||
|
**Schema:** <schema-name>
|
||||||
|
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
|
||||||
|
**Specs:** ✓ Synced to main specs
|
||||||
|
|
||||||
|
All artifacts complete. All tasks complete.
|
||||||
|
```
|
||||||
|
|
||||||
|
**Output On Success (No Delta Specs)**
|
||||||
|
|
||||||
|
```
|
||||||
|
## Archive Complete
|
||||||
|
|
||||||
|
**Change:** <change-name>
|
||||||
|
**Schema:** <schema-name>
|
||||||
|
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
|
||||||
|
**Specs:** No delta specs
|
||||||
|
|
||||||
|
All artifacts complete. All tasks complete.
|
||||||
|
```
|
||||||
|
|
||||||
|
**Output On Success With Warnings**
|
||||||
|
|
||||||
|
```
|
||||||
|
## Archive Complete (with warnings)
|
||||||
|
|
||||||
|
**Change:** <change-name>
|
||||||
|
**Schema:** <schema-name>
|
||||||
|
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
|
||||||
|
**Specs:** Sync skipped (user chose to skip)
|
||||||
|
|
||||||
|
**Warnings:**
|
||||||
|
- Archived with 2 incomplete artifacts
|
||||||
|
- Archived with 3 incomplete tasks
|
||||||
|
- Delta spec sync was skipped (user chose to skip)
|
||||||
|
|
||||||
|
Review the archive if this was not intentional.
|
||||||
|
```
|
||||||
|
|
||||||
|
**Output On Error (Archive Exists)**
|
||||||
|
|
||||||
|
```
|
||||||
|
## Archive Failed
|
||||||
|
|
||||||
|
**Change:** <change-name>
|
||||||
|
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
|
||||||
|
|
||||||
|
Target archive directory already exists.
|
||||||
|
|
||||||
|
**Options:**
|
||||||
|
1. Rename the existing archive
|
||||||
|
2. Delete the existing archive if it's a duplicate
|
||||||
|
3. Wait until a different date to archive
|
||||||
|
```
|
||||||
|
|
||||||
|
**Guardrails**
|
||||||
|
- Always prompt for change selection if not provided
|
||||||
|
- Use artifact graph (openspec status --json) for completion checking
|
||||||
|
- Don't block archive on warnings - just inform and confirm
|
||||||
|
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
|
||||||
|
- Show clear summary of what happened
|
||||||
|
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
|
||||||
|
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
|
||||||
170 .opencode/command/opsx-explore.md Normal file

@@ -0,0 +1,170 @@

---
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
---

Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.

**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.

**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.

**Input**: The argument after `/opsx-explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)

---

## The Stance

- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize

---

## What You Might Do

Depending on what the user brings, you might:

**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies

**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity

**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)

**Visualize**
```
┌─────────────────────────────────────────┐
│     Use ASCII diagrams liberally        │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐         ┌────────┐         │
│   │ State  │────────▶│ State  │         │
│   │   A    │         │   B    │         │
│   └────────┘         └────────┘         │
│                                         │
│   System diagrams, state machines,      │
│   data flows, architecture sketches,    │
│   dependency graphs, comparison tables  │
│                                         │
└─────────────────────────────────────────┘
```

**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---

## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:
```bash
openspec list --json
```

This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
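If `jq` is available (an assumption; the JSON shape below is an abbreviated stand-in for the real `openspec list --json` output), that glance can be a one-liner:

```shell
changes='[{"name":"add-auth","schema":"spec-driven","status":"in-progress"}]'
echo "$changes" | jq -r '.[] | "\(.name) (\(.schema), \(.status))"'   # -> add-auth (spec-driven, in-progress)
```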
|
||||||
|
|
||||||
|
If the user mentioned a specific change name, read its artifacts for context.
|
||||||
|
|
||||||
|
### When no change exists
|
||||||
|
|
||||||
|
Think freely. When insights crystallize, you might offer:
|
||||||
|
|
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.

2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.

---

## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)

---

## Ending Discovery

There's no required ending. Discovery might:

- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"

When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.

---

## Guardrails

- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
103
.opencode/command/opsx-propose.md
Normal file
@@ -0,0 +1,103 @@
---
description: Propose a new change - create it and generate all artifacts in one step
---

Propose a new change - create the change and generate all artifacts in one step.

I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)

When ready to implement, run /opsx-apply

---

**Input**: The argument after `/opsx-propose` is the change name (kebab-case), OR a description of what the user wants to build.

**Steps**

1. **If no input provided, ask what they want to build**

   Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
   > "What change do you want to work on? Describe what you want to build or fix."

   From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).

   **IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
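The kebab-case derivation is normally done by the agent itself, but the mechanical part of the transformation can be sketched in shell. This is a hedged illustration only: the helper name `to_kebab` is not part of the openspec CLI, and unlike the agent it won't abbreviate words (e.g. it yields `add-user-authentication`, not `add-user-auth`).

```shell
# Hypothetical sketch: lowercase, collapse runs of non-alphanumerics
# into single dashes, and trim dashes from both ends.
to_kebab() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-*//; s/-*$//'
}

to_kebab "Add User Authentication!"   # → add-user-authentication
```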
2. **Create the change directory**
   ```bash
   openspec new change "<name>"
   ```
   This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.

3. **Get the artifact build order**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to get:
   - `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
   - `artifacts`: list of all artifacts with their status and dependencies
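The apply-ready check can be sketched with `jq`, assuming it is available and assuming the JSON shape described above (`applyRequires` plus an `artifacts` array whose entries carry `id` and `status`). The sample payload here is illustrative; in practice it comes from `openspec status --change "<name>" --json`.

```shell
# Illustrative status payload (stand-in for the real CLI output).
status_json='{"applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"done"}]}'

# Prints "true" when every artifact listed in applyRequires has status "done".
printf '%s' "$status_json" | jq '[.applyRequires[] as $id
  | .artifacts[] | select(.id == $id) | .status == "done"] | all'   # → true
```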
4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done

   c. **If an artifact requires user input** (unclear context):
      - Use **AskUserQuestion tool** to clarify
      - Then continue with creation

5. **Show final status**
   ```bash
   openspec status --change "<name>"
   ```

**Output**

After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx-apply` to start implementing."

**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
  - Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
  - These guide what you write, but should never appear in the output

**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next
156
.opencode/skills/openspec-apply-change/SKILL.md
Normal file
@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Implement tasks from an OpenSpec change.

**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.

**Steps**

1. **Select the change**

   If a name is provided, use it. Otherwise:
   - Infer from conversation context if the user mentioned a change
   - Auto-select if only one active change exists
   - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select

   Always announce: "Using change: <name>" and how to override (e.g., `/opsx-apply <other>`).

2. **Check status to understand the schema**
   ```bash
   openspec status --change "<name>" --json
   ```
   Parse the JSON to understand:
   - `schemaName`: The workflow being used (e.g., "spec-driven")
   - Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)

3. **Get apply instructions**

   ```bash
   openspec instructions apply --change "<name>" --json
   ```

   This returns:
   - Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
   - Progress (total, complete, remaining)
   - Task list with status
   - Dynamic instruction based on current state

   **Handle states:**
   - If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
   - If `state: "all_done"`: congratulate, suggest archive
   - Otherwise: proceed to implementation

4. **Read context files**

   Read the files listed in `contextFiles` from the apply instructions output.
   The files depend on the schema being used:
   - **spec-driven**: proposal, specs, design, tasks
   - Other schemas: follow the contextFiles from CLI output

5. **Show current progress**

   Display:
   - Schema being used
   - Progress: "N/M tasks complete"
   - Remaining tasks overview
   - Dynamic instruction from CLI

6. **Implement tasks (loop until done or blocked)**

   For each pending task:
   - Show which task is being worked on
   - Make the code changes required
   - Keep changes minimal and focused
   - Mark task complete in the tasks file: `- [ ]` → `- [x]`
   - Continue to next task

   **Pause if:**
   - Task is unclear → ask for clarification
   - Implementation reveals a design issue → suggest updating artifacts
   - Error or blocker encountered → report and wait for guidance
   - User interrupts
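Flipping a checkbox is a one-line text edit. A hedged shell sketch of the `- [ ]` → `- [x]` update, matching the task by its exact text (the sample file and task names here are hypothetical; the agent usually edits the file directly):

```shell
# Create a sample tasks file (stand-in for the change's tasks.md).
cat > /tmp/sample-tasks.md <<'EOF'
- [ ] Implement OAuth flow
- [ ] Add tests
EOF

# Flip `- [ ]` to `- [x]` for one task, anchored to its exact text.
sed 's/^- \[ \] Implement OAuth flow$/- [x] Implement OAuth flow/' \
  /tmp/sample-tasks.md > /tmp/sample-tasks.updated.md

grep -c -- '- \[x\]' /tmp/sample-tasks.updated.md   # → 1
```

Writing to a separate output file keeps the sketch portable; in-place editing with `sed -i` behaves differently between GNU and BSD sed.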
7. **On completion or pause, show status**

   Display:
   - Tasks completed this session
   - Overall progress: "N/M tasks complete"
   - If all done: suggest archive
   - If paused: explain why and wait for guidance

**Output During Implementation**

```
## Implementing: <change-name> (schema: <schema-name>)

Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete

Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```

**Output On Completion**

```
## Implementation Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓

### Completed This Session
- [x] Task 1
- [x] Task 2
...

All tasks complete! Ready to archive this change.
```

**Output On Pause (Issue Encountered)**

```
## Implementation Paused

**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete

### Issue Encountered
<description of the issue>

**Options:**
1. <option 1>
2. <option 2>
3. Other approach

What would you like to do?
```

**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names

**Fluid Workflow Integration**

This skill supports the "actions on a change" model:

- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly
114
.opencode/skills/openspec-archive-change/SKILL.md
Normal file
@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Archive a completed change in the experimental workflow.

**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.

**Steps**

1. **If no change name provided, prompt for selection**

   Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.

   Show only active changes (not already archived).
   Include the schema used for each change if available.

   **IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.

2. **Check artifact completion status**

   Run `openspec status --change "<name>" --json` to check artifact completion.

   Parse the JSON to understand:
   - `schemaName`: The workflow being used
   - `artifacts`: List of artifacts with their status (`done` or other)

   **If any artifacts are not `done`:**
   - Display warning listing incomplete artifacts
   - Use **AskUserQuestion tool** to confirm user wants to proceed
   - Proceed if user confirms

3. **Check task completion status**

   Read the tasks file (typically `tasks.md`) to check for incomplete tasks.

   Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).

   **If incomplete tasks found:**
   - Display warning showing count of incomplete tasks
   - Use **AskUserQuestion tool** to confirm user wants to proceed
   - Proceed if user confirms

   **If no tasks file exists:** Proceed without task-related warning.
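The counting in step 3 can be sketched with `grep -c`. The sample file below is a hypothetical stand-in for the change's real tasks file:

```shell
# Sample tasks file for illustration.
cat > /tmp/archive-tasks.md <<'EOF'
- [x] Scaffold change
- [ ] Implement OAuth flow
- [ ] Add tests
EOF

complete=$(grep -c -- '- \[x\]' /tmp/archive-tasks.md)
incomplete=$(grep -c -- '- \[ \]' /tmp/archive-tasks.md)
echo "$complete complete, $incomplete incomplete"   # → 1 complete, 2 incomplete
```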
4. **Assess delta spec sync state**

   Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.

   **If delta specs exist:**
   - Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
   - Determine what changes would be applied (adds, modifications, removals, renames)
   - Show a combined summary before prompting

   **Prompt options:**
   - If changes needed: "Sync now (recommended)", "Archive without syncing"
   - If already synced: "Archive now", "Sync anyway", "Cancel"

   If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.

5. **Perform the archive**

   Create the archive directory if it doesn't exist:
   ```bash
   mkdir -p openspec/changes/archive
   ```

   Generate target name using current date: `YYYY-MM-DD-<change-name>`

   **Check if target already exists:**
   - If yes: Fail with error, suggest renaming existing archive or using different date
   - If no: Move the change directory to archive

   ```bash
   mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
   ```
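The dated target name and the existence check from step 5 can be sketched as follows (the change name is hypothetical, and the `mv` is only echoed rather than executed):

```shell
name="add-auth"                                       # hypothetical change name
target="openspec/changes/archive/$(date +%F)-$name"   # date +%F prints YYYY-MM-DD
echo "$target"

# Refuse to overwrite an existing archive entry.
if [ -e "$target" ]; then
  echo "error: $target already exists" >&2
else
  echo "ok to move: mv openspec/changes/$name $target"
fi
```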
6. **Display summary**

   Show archive completion summary including:
   - Change name
   - Schema that was used
   - Archive location
   - Whether specs were synced (if applicable)
   - Note about any warnings (incomplete artifacts/tasks)

**Output On Success**

```
## Archive Complete

**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")

All artifacts complete. All tasks complete.
```

**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (`openspec status --json`) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting
288
.opencode/skills/openspec-explore/SKILL.md
Normal file
@@ -0,0 +1,288 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---

Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.

**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.

**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.

---

## The Stance

- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize

---

## What You Might Do

Depending on what the user brings, you might:

**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies

**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity

**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)

**Visualize**
```
┌─────────────────────────────────────────┐
│      Use ASCII diagrams liberally       │
├─────────────────────────────────────────┤
│                                         │
│   ┌────────┐         ┌────────┐         │
│   │ State  │────────▶│ State  │         │
│   │   A    │         │   B    │         │
│   └────────┘         └────────┘         │
│                                         │
│   System diagrams, state machines,      │
│   data flows, architecture sketches,    │
│   dependency graphs, comparison tables  │
│                                         │
└─────────────────────────────────────────┘
```

**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations

---

## OpenSpec Awareness

You have full context of the OpenSpec system. Use it naturally, don't force it.

### Check for context

At the start, quickly check what exists:
```bash
openspec list --json
```

This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on

### When no change exists

Think freely. When insights crystallize, you might offer:

- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize

### When a change exists

If the user mentions a change or you detect one is relevant:

1. **Read existing artifacts for context**
   - `openspec/changes/<name>/proposal.md`
   - `openspec/changes/<name>/design.md`
   - `openspec/changes/<name>/tasks.md`
   - etc.

2. **Reference them naturally in conversation**
   - "Your design mentions using Redis, but we just realized SQLite fits better..."
   - "The proposal scopes this to premium users, but we're now thinking everyone..."

3. **Offer to capture when decisions are made**

   | Insight Type | Where to Capture |
   |--------------|------------------|
   | New requirement discovered | `specs/<capability>/spec.md` |
   | Requirement changed | `specs/<capability>/spec.md` |
   | Design decision made | `design.md` |
   | Scope changed | `proposal.md` |
   | New work identified | `tasks.md` |
   | Assumption invalidated | Relevant artifact |

   Example offers:
   - "That's a design decision. Capture it in design.md?"
   - "This is a new requirement. Add it to specs?"
   - "This changes scope. Update the proposal?"

4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.

---

## What You Don't Have To Do

- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)

---

## Handling Different Entry Points

**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration

You: Real-time collab is a big space. Let me think about this...

     COLLABORATION SPECTRUM
     ════════════════════════════════════════════

     Awareness        Coordination        Sync
         │                 │                │
         ▼                 ▼                ▼
     ┌────────┐       ┌────────┐       ┌────────┐
     │Presence│       │Cursors │       │  CRDT  │
     │  "3    │       │ Multi  │       │Conflict│
     │online" │       │ select │       │  free  │
     └────────┘       └────────┘       └────────┘
         │                 │                │
      trivial          moderate         complex

Where's your head at?
```

**User brings a specific problem:**
```
User: The auth system is a mess

You: [reads codebase]

┌─────────────────────────────────────────────┐
│              CURRENT AUTH FLOW              │
└─────────────────────────────────────────────┘
                      │
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
   ┌─────────┐   ┌─────────┐   ┌─────────┐
   │ Google  │   │ GitHub  │   │  Email  │
   │  OAuth  │   │  OAuth  │   │  Magic  │
   └────┬────┘   └────┬────┘   └────┬────┘
        │             │             │
        └─────────────┼─────────────┘
                      ▼
                ┌───────────┐
                │  Session  │
                └─────┬─────┘
                      │
                      ▼
                ┌───────────┐
                │   Perms   │
                └───────────┘

I see three tangles. Which one's burning?
```

**User is stuck mid-implementation:**
```
User: /opsx-explore add-auth-system
The OAuth integration is more complex than expected

You: [reads change artifacts]

You're on task 4: "Implement OAuth flow"

Let me trace what's involved...

[draws diagram, explores options, suggests paths]

Want to update the design to reflect this?
Or add a spike task to investigate?
```

**User wants to compare options:**
```
User: Should we use Postgres or SQLite?

You: Generic answer is boring. What's the context?

User: A CLI tool that tracks local dev environments

You: That changes everything.

┌─────────────────────────────────────────────────┐
│              CLI TOOL DATA STORAGE              │
└─────────────────────────────────────────────────┘

Key constraints:
  • No daemon running
  • Must work offline
  • Single user

              SQLite          Postgres
Deployment    embedded ✓      needs server ✗
Offline       yes ✓           no ✗
Single file   yes ✓           no ✗

SQLite. Not even close.

Unless... is there a sync component?
```

---

## Ending Discovery

There's no required ending. Discovery might:

- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"

When it feels like things are crystallizing, you might summarize:

```
## What We Figured Out

**The problem**: [crystallized understanding]

**The approach**: [if one emerged]

**Open questions**: [if any remain]

**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```

But this summary is optional. Sometimes the thinking IS the value.

---

## Guardrails

- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own
110
.opencode/skills/openspec-propose/SKILL.md
Normal file
@@ -0,0 +1,110 @@
|
|||||||
|
---
|
||||||
|
name: openspec-propose
|
||||||
|
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
|
||||||
|
license: MIT
|
||||||
|
compatibility: Requires openspec CLI.
|
||||||
|
metadata:
|
||||||
|
author: openspec
|
||||||
|
version: "1.0"
|
||||||
|
generatedBy: "1.2.0"
|
||||||
|
---
|
||||||
|
|
||||||
|
Propose a new change - create the change and generate all artifacts in one step.
|
||||||
|
|
||||||
|
I'll create a change with artifacts:
|
||||||
|
- proposal.md (what & why)
|
||||||
|
- design.md (how)
|
||||||
|
- tasks.md (implementation steps)
|
||||||
|
|
||||||
|
When ready to implement, run /opsx-apply
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
|
||||||
|
|
||||||
|
**Steps**
|
||||||
|
|
||||||
|
1. **If no clear input provided, ask what they want to build**
|
||||||
|
|
||||||
|
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
|
||||||
|
> "What change do you want to work on? Describe what you want to build or fix."
|
||||||
|
|
||||||
|
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
|
||||||
|
|
||||||
|
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
|
||||||
|
|
||||||
|
2. **Create the change directory**
|
||||||
|
```bash
|
||||||
|
openspec new change "<name>"
|
||||||
|
```
|
||||||
|
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
|
||||||
|
|
||||||
|
3. **Get the artifact build order**
|
||||||
|
```bash
|
||||||
|
openspec status --change "<name>" --json
|
||||||
|
```
|
||||||
|
Parse the JSON to get:
|
||||||
|
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
|
||||||
|
- `artifacts`: list of all artifacts with their status and dependencies
|
||||||
|
|
||||||
4. **Create artifacts in sequence until apply-ready**

   Use the **TodoWrite tool** to track progress through the artifacts.

   Loop through artifacts in dependency order (artifacts with no pending dependencies first):

   a. **For each artifact that is `ready` (dependencies satisfied)**:
      - Get instructions:
        ```bash
        openspec instructions <artifact-id> --change "<name>" --json
        ```
      - The instructions JSON includes:
        - `context`: Project background (constraints for you - do NOT include in output)
        - `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
        - `template`: The structure to use for your output file
        - `instruction`: Schema-specific guidance for this artifact type
        - `outputPath`: Where to write the artifact
        - `dependencies`: Completed artifacts to read for context
      - Read any completed dependency files for context
      - Create the artifact file using `template` as the structure
      - Apply `context` and `rules` as constraints - but do NOT copy them into the file
      - Show brief progress: "Created <artifact-id>"

   b. **Continue until all `applyRequires` artifacts are complete**
      - After creating each artifact, re-run `openspec status --change "<name>" --json`
      - Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
      - Stop when all `applyRequires` artifacts are done

   c. **If an artifact requires user input** (unclear context):
      - Use the **AskUserQuestion tool** to clarify
      - Then continue with creation

5. **Show final status**
   ```bash
   openspec status --change "<name>"
   ```
**Output**

After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."

**Artifact Creation Guidelines**

- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
  - Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
  - These guide what you write, but should never appear in the output

**Guardrails**

- Create ALL artifacts needed for implementation (as defined by the schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if the user wants to continue it or create a new one
- Verify each artifact file exists after writing, before proceeding to the next
AGENTS.md (46 lines changed)

@@ -73,12 +73,14 @@ sqlx migrate add -r migration_name
 
 ### Docker Development
 
+`docker-compose.yml` is at the **root** of the project (not in `infra/`).
+
 ```bash
 # Start infrastructure only
-cd infra && docker compose up -d postgres meilisearch
+docker compose up -d postgres meilisearch
 
 # Start full stack
-cd infra && docker compose up -d
+docker compose up -d
 
 # View logs
 docker compose logs -f api
@@ -226,24 +228,21 @@ pub struct BookItem {
 ```
 stripstream-librarian/
 ├── apps/
-│   ├── api/              # REST API (axum)
-│   │   └── src/
-│   │       ├── main.rs
-│   │       ├── books.rs
-│   │       ├── pages.rs
-│   │       └── ...
-│   ├── indexer/          # Background indexing service
-│   │   └── src/
-│   │       └── main.rs
-│   └── backoffice/       # Next.js admin UI
+│   ├── api/              # REST API (axum) — port 7080
+│   │   └── src/          # books.rs, pages.rs, thumbnails.rs, state.rs, auth.rs...
+│   ├── indexer/          # Background indexing service — port 7081
+│   │   └── src/          # worker.rs, scanner.rs, batch.rs, scheduler.rs, watcher.rs...
+│   └── backoffice/       # Next.js admin UI — port 7082
 ├── crates/
-│   ├── core/             # Shared config
+│   ├── core/             # Shared config (env vars)
 │   │   └── src/config.rs
 │   └── parsers/          # Book parsing (CBZ, CBR, PDF)
 ├── infra/
-│   ├── migrations/       # SQL migrations
-│   └── docker-compose.yml
-└── libraries/            # Book storage (mounted volume)
+│   └── migrations/       # SQL migrations (sqlx)
+├── data/
+│   └── thumbnails/       # Thumbnails generated by the API
+├── libraries/            # Book storage (mounted volume)
+└── docker-compose.yml    # At the root (not in infra/)
 ```
 
 ### Key Files
@@ -251,8 +250,13 @@ stripstream-librarian/
 | File | Purpose |
 |------|---------|
 | `apps/api/src/books.rs` | Book CRUD endpoints |
-| `apps/api/src/pages.rs` | Page rendering & caching |
-| `apps/indexer/src/main.rs` | Indexing logic, batch processing |
+| `apps/api/src/pages.rs` | Page rendering & caching (LRU + disk) |
+| `apps/api/src/thumbnails.rs` | Endpoints that create thumbnail jobs (rebuild/regenerate) |
+| `apps/api/src/state.rs` | AppState, Semaphore concurrent_renders |
+| `apps/indexer/src/scanner.rs` | Phase 1 discovery: fast scan without archive I/O, skips unchanged folders |
+| `apps/indexer/src/analyzer.rs` | Phase 2 analysis: `analyze_book` + WebP thumbnail generation |
+| `apps/indexer/src/batch.rs` | Bulk DB ops via UNNEST |
+| `apps/indexer/src/worker.rs` | Job loop, watcher, scheduler orchestration |
 | `crates/parsers/src/lib.rs` | Format detection, metadata parsing |
 | `crates/core/src/config.rs` | Configuration from environment |
 | `infra/migrations/*.sql` | Database schema |
@@ -269,7 +273,7 @@ impl IndexerConfig {
     pub fn from_env() -> Result<Self> {
         Ok(Self {
             listen_addr: std::env::var("INDEXER_LISTEN_ADDR")
-                .unwrap_or_else(|_| "0.0.0.0:8081".to_string()),
+                .unwrap_or_else(|_| "0.0.0.0:7081".to_string()),
             database_url: std::env::var("DATABASE_URL")
                 .context("DATABASE_URL is required")?,
             // ...
@@ -298,4 +302,6 @@ fn remap_libraries_path(path: &str) -> String {
 - **Workspace**: This is a Cargo workspace. Always specify the package when building specific apps.
 - **Dependencies**: External crates are defined in workspace `Cargo.toml`, not individual `Cargo.toml`.
 - **Database**: PostgreSQL is required. Run migrations before starting services.
-- **External Tools**: The indexer relies on `unar` (for CBR) and `pdftoppm` (for PDF) being installed on the system.
+- **External Tools**: 4 system tools required — `unrar` (CBR page count), `unar` (CBR extraction), `pdfinfo` (PDF page count), `pdftoppm` (PDF page render). Note: `unrar` and `unar` are distinct tools.
+- **Thumbnails**: generated by the **indexer** service (phase 2, `analyzer.rs`). The API only creates jobs in DB — it does not generate thumbnails directly.
+- **Sub-AGENTS.md**: module-specific guidelines in `apps/api/`, `apps/indexer/`, `apps/backoffice/`, `crates/parsers/`.
CLAUDE.md (new file, 72 lines)

# Stripstream Librarian

Comic book/ebook library manager. Multi-crate Cargo workspace with a Next.js backoffice.

## Architecture

| Service | Directory | Local port |
|---------|-----------|------------|
| REST API (axum) | `apps/api/` | 7080 |
| Indexer (background) | `apps/indexer/` | 7081 |
| Backoffice (Next.js) | `apps/backoffice/` | 7082 |
| PostgreSQL | infra | 6432 |
| Meilisearch | infra | 7700 |

Shared crates: `crates/core` (env config), `crates/parsers` (CBZ/CBR/PDF).

## Commands

```bash
# Build
cargo build              # whole workspace
cargo build -p api       # specific crate
cargo build --release    # optimized build

# Linting / formatting
cargo clippy
cargo fmt

# Tests
cargo test
cargo test -p parsers

# Infra (dependencies only) — docker-compose.yml is at the root
docker compose up -d postgres meilisearch

# Backoffice dev
cd apps/backoffice && npm install && npm run dev   # http://localhost:7082

# Migrations
sqlx migrate run         # DATABASE_URL must be set
```

## Environment

```bash
cp .env.example .env     # then edit the REQUIRED values
```

Variables **required** at startup: `DATABASE_URL`, `MEILI_URL`, `MEILI_MASTER_KEY`, `API_BOOTSTRAP_TOKEN`.

## Gotchas

- **System dependencies**: 4 tools required — `unrar` (CBR listing), `unar` (CBR extraction), `pdfinfo` (PDF page count), `pdftoppm` (PDF rendering). `unrar` ≠ `unar`.
- **Backoffice port**: `npm run dev` listens on **7082**, not 3000.
- **LIBRARIES_ROOT_PATH**: paths in the DB start with `/libraries/`; for local dev, set this variable to remap them to the real folder.
- **Thumbnails**: stored in `THUMBNAIL_DIRECTORY` (default `/data/thumbnails`), generated by **the API** (not the indexer) — the indexer triggers a checkup via `POST /index/jobs/:id/thumbnails/checkup`.
- **Cargo workspace**: external dependencies are defined in the root `Cargo.toml`, not in individual crates.
- **Migrations**: `infra/migrations/` directory, managed by sqlx. Always migrate before starting the services.

## Key files

| File | Role |
|------|------|
| `crates/core/src/config.rs` | Config from env (API, Indexer, AdminUI) |
| `crates/parsers/src/lib.rs` | Format detection, metadata extraction |
| `apps/api/src/books.rs` | Book CRUD endpoints |
| `apps/api/src/pages.rs` | Page rendering + LRU cache |
| `apps/indexer/src/scanner.rs` | Filesystem scan |
| `infra/migrations/*.sql` | DB schema |

> See `AGENTS.md` for detailed code conventions (error handling, sqlx patterns, async/tokio).
> Module-specific `AGENTS.md` files exist in `apps/api/`, `apps/indexer/`, `apps/backoffice/`, `crates/parsers/`.
Cargo.lock (generated, 3 lines changed)

@@ -1146,6 +1146,8 @@ dependencies = [
  "anyhow",
  "axum",
  "chrono",
+ "futures",
+ "image",
  "notify",
  "parsers",
  "rand 0.8.5",
@@ -1161,6 +1163,7 @@ dependencies = [
  "tracing-subscriber",
  "uuid",
  "walkdir",
+ "webp",
 ]
 
 [[package]]
@@ -33,5 +33,6 @@ tracing = "0.1"
 tracing-subscriber = { version = "0.3", features = ["env-filter", "fmt"] }
 uuid = { version = "1.12", features = ["serde", "v4"] }
 walkdir = "2.5"
+webp = "0.3"
 utoipa = "4.0"
 utoipa-swagger-ui = "6.0"
README.md (95 lines changed)

@@ -38,16 +38,16 @@ docker compose up -d
 ```
 
 This will start:
-- PostgreSQL (port 5432)
+- PostgreSQL (port 6432)
 - Meilisearch (port 7700)
-- API service (port 8080)
-- Indexer service (port 8081)
-- Backoffice web UI (port 8082)
+- API service (port 7080)
+- Indexer service (port 7081)
+- Backoffice web UI (port 7082)
 
 ### Accessing the Application
 
-- **Backoffice**: http://localhost:8082
-- **API**: http://localhost:8080
+- **Backoffice**: http://localhost:7082
+- **API**: http://localhost:7080
 - **Meilisearch**: http://localhost:7700
 
 ### Default Credentials
@@ -113,9 +113,9 @@ The backoffice will be available at http://localhost:3000
 
 | Variable | Description | Default |
 |----------|-------------|---------|
-| `API_LISTEN_ADDR` | API service bind address | `0.0.0.0:8080` |
-| `INDEXER_LISTEN_ADDR` | Indexer service bind address | `0.0.0.0:8081` |
-| `BACKOFFICE_PORT` | Backoffice web UI port | `8082` |
+| `API_LISTEN_ADDR` | API service bind address | `0.0.0.0:7080` |
+| `INDEXER_LISTEN_ADDR` | Indexer service bind address | `0.0.0.0:7081` |
+| `BACKOFFICE_PORT` | Backoffice web UI port | `7082` |
 | `DATABASE_URL` | PostgreSQL connection string | `postgres://stripstream:stripstream@postgres:5432/stripstream` |
 | `MEILI_URL` | Meilisearch connection URL | `http://meilisearch:7700` |
 | `MEILI_MASTER_KEY` | Meilisearch master key (required) | - |
@@ -128,7 +128,7 @@ The backoffice will be available at http://localhost:3000
 The API is documented with OpenAPI/Swagger. When running locally, access the docs at:
 
 ```
-http://localhost:8080/api-docs
+http://localhost:7080/swagger-ui
 ```
 
 ## Project Structure
@@ -146,6 +146,81 @@ stripstream-librarian/
 └── .env                  # Environment configuration
 ```
 
+## Docker Registry
+
+Images are built and pushed to Docker Hub with the naming convention `docker.io/{owner}/stripstream-{service}`.
+
+### Publishing Images (Maintainers)
+
+To build and push all service images to the registry:
+
+```bash
+# Login to Docker Hub first
+docker login -u julienfroidefond32
+
+# Build and push all images
+./scripts/docker-push.sh
+```
+
+This script will:
+- Build images for `api`, `indexer`, and `backoffice`
+- Tag them with the current version (from `Cargo.toml`) and `latest`
+- Push to the registry
+
+### Using Published Images
+
+To use the pre-built images in your own `docker-compose.yml`:
+
+```yaml
+services:
+  postgres:
+    image: postgres:16-alpine
+    environment:
+      POSTGRES_DB: stripstream
+      POSTGRES_USER: stripstream
+      POSTGRES_PASSWORD: stripstream
+    volumes:
+      - postgres_data:/var/lib/postgresql/data
+
+  meilisearch:
+    image: getmeili/meilisearch:v1.12
+    environment:
+      MEILI_MASTER_KEY: ${MEILI_MASTER_KEY}
+
+  api:
+    image: julienfroidefond32/stripstream-api:latest
+    env_file:
+      - .env
+    ports:
+      - "7080:7080"
+    volumes:
+      - ${LIBRARIES_HOST_PATH:-./libraries}:/libraries
+      - ${THUMBNAILS_HOST_PATH:-./data/thumbnails}:/data/thumbnails
+
+  indexer:
+    image: julienfroidefond32/stripstream-indexer:latest
+    env_file:
+      - .env
+    ports:
+      - "7081:7081"
+    volumes:
+      - ${LIBRARIES_HOST_PATH:-./libraries}:/libraries
+      - ${THUMBNAILS_HOST_PATH:-./data/thumbnails}:/data/thumbnails
+
+  backoffice:
+    image: julienfroidefond32/stripstream-backoffice:latest
+    env_file:
+      - .env
+    environment:
+      - PORT=7082
+      - HOST=0.0.0.0
+    ports:
+      - "7082:7082"
+
+volumes:
+  postgres_data:
+```
+
 ## License
 
 [Your License Here]
apps/api/AGENTS.md (new file, 73 lines)

# apps/api — REST API (axum)

HTTP service on port **7080**. See the root `AGENTS.md` for global conventions.

## File structure

| File | Role |
|------|------|
| `main.rs` | Routes, AppState initialization, concurrent_renders Semaphore |
| `state.rs` | `AppState` (pool, caches, metrics), `load_concurrent_renders` |
| `auth.rs` | `require_admin` / `require_read` middlewares, token authentication |
| `error.rs` | `ApiError` with `bad_request`, `not_found`, `internal`, etc. constructors |
| `books.rs` | Book CRUD, thumbnails |
| `pages.rs` | Page rendering + double cache (in-memory LRU + disk) |
| `libraries.rs` | Library CRUD, scan triggering |
| `index_jobs.rs` | Job tracking, SSE progress streaming |
| `thumbnails.rs` | Thumbnail rebuild/regeneration |
| `tokens.rs` | API token management (create/revoke) |
| `settings.rs` | Application settings (stored in DB, key `limits`) |
| `openapi.rs` | OpenAPI docs via utoipa, served at `/swagger-ui` |

## Key patterns

### Typical handler
```rust
async fn my_handler(
    State(state): State<AppState>,
    Path(id): Path<Uuid>,
) -> Result<Json<MyDto>, ApiError> {
    // ...
}
```

### API errors
```rust
// Constructors available in error.rs
ApiError::bad_request("message")
ApiError::not_found("resource not found")
ApiError::internal("unexpected error")
ApiError::unauthorized("missing token")
ApiError::forbidden("admin required")

// Automatic conversion from sqlx::Error and std::io::Error
```

### Authentication
- **Bootstrap token**: direct comparison (`API_BOOTSTRAP_TOKEN`), Admin scope
- **DB tokens**: format `stl_<prefix>_<secret>`, argon2 hash in DB, scope `admin` or `read`
- `require_admin` middleware → admin routes; `require_read` → read routes
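The prefix extraction implied by this token format can be sketched as follows (Python for illustration; the real implementation is the Rust `parse_prefix` in `auth.rs`). The fixed 8-character prefix width matches the format comment in `auth.rs`; note that the secret is base64 URL_SAFE and may itself contain `_`, so splitting on underscores would be wrong.

```python
def parse_prefix(token: str):
    """Return the 8-char prefix of a `stl_<prefix>_<secret>` token, or None.

    The secret may contain '_' (base64 URL_SAFE alphabet), so we take a
    fixed-width prefix instead of splitting on underscores.
    """
    if not token.startswith("stl_"):
        return None
    rest = token[len("stl_"):]
    # 8 (prefix) + 1 ('_') + at least 1 secret character
    if len(rest) < 10 or rest[8] != "_":
        return None
    return rest[:8]

print(parse_prefix("stl_abcdef12_s3cr3t_with_underscores"))  # → abcdef12
```

The prefix alone is enough to look up the token row; the secret part is then verified against the stored argon2 hash.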
### OpenAPI (utoipa)
```rust
#[utoipa::path(get, path = "/books/{id}", ...)]
async fn get_book(...) { }
// Add the handler to openapi.rs (ApiDoc)
```

### Page cache (`pages.rs`)
- **Memory cache**: 512-entry LRU (`AppState.page_cache`)
- **Disk cache**: `IMAGE_CACHE_DIR` (default `/tmp/stripstream-image-cache`), SHA256 keys
- Concurrency bounded by `AppState.page_render_limit` (Semaphore, configurable in DB)
- `spawn_blocking` for image rendering (CPU-bound)

### The concurrent_renders setting
Stored in DB: `SELECT value FROM app_settings WHERE key = 'limits'` → JSON `{"concurrent_renders": N}`.
Loaded at startup in `load_concurrent_renders`.
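The settings lookup above can be sketched like this (Python for illustration; the fallback value is a made-up placeholder, not the crate's actual default):

```python
import json

def load_concurrent_renders(value, default=4):
    """Parse {"concurrent_renders": N} from the `limits` settings row.

    `value` is the raw JSON text from app_settings (or None if the row
    is missing); `default` is a placeholder fallback for illustration.
    """
    if value is None:
        return default
    try:
        n = json.loads(value).get("concurrent_renders")
    except (ValueError, AttributeError):
        return default
    return n if isinstance(n, int) and n > 0 else default

print(load_concurrent_renders('{"concurrent_renders": 8}'))  # → 8
```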
## Gotchas

- **LIBRARIES_ROOT_PATH**: `abs_path` values in the DB start with `/libraries/`. Call `remap_libraries_path()` before any file access.
- **Read rate limit**: `read_rate_limit` middleware on read routes (100 req/5s by default).
- **Metrics**: `/metrics` exposes `requests_total`, `page_cache_hits`, `page_cache_misses` (atomics in `AppState.metrics`).
- **Swagger**: available at `/swagger-ui`, JSON spec at `/openapi.json`.
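The remap described in the first gotcha can be sketched as follows; this Python version is an illustrative guess at the behavior of `remap_libraries_path()` (prefix substitution driven by the env var), not the crate's actual code:

```python
import os

def remap_libraries_path(path: str) -> str:
    """Rewrite a DB path like /libraries/foo.cbz to the local folder.

    Illustrative sketch: assumes LIBRARIES_ROOT_PATH replaces the
    /libraries/ prefix when set, and paths pass through otherwise.
    """
    root = os.environ.get("LIBRARIES_ROOT_PATH")
    if root and path.startswith("/libraries/"):
        return root.rstrip("/") + "/" + path[len("/libraries/"):]
    return path

# e.g. with LIBRARIES_ROOT_PATH=/home/me/books:
#   /libraries/series/a.cbz -> /home/me/books/series/a.cbz
```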
@@ -31,5 +31,5 @@ uuid.workspace = true
 zip = { version = "2.2", default-features = false, features = ["deflate"] }
 utoipa.workspace = true
 utoipa-swagger-ui = { workspace = true, features = ["axum"] }
-webp = "0.3"
+webp.workspace = true
 walkdir = "2"
@@ -18,13 +18,20 @@ COPY crates/parsers/src crates/parsers/src
 
 # Build with sccache (cache persisted between builds via Docker cache mount)
 RUN --mount=type=cache,target=/sccache \
-    cargo build --release -p api
+    cargo build --release -p api && \
+    cargo install sqlx-cli --no-default-features --features postgres --locked
 
 FROM debian:bookworm-slim
-RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget unar poppler-utils locales && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    ca-certificates wget unar poppler-utils locales postgresql-client \
+    && rm -rf /var/lib/apt/lists/*
 RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen
 ENV LANG=en_US.UTF-8
 ENV LC_ALL=en_US.UTF-8
 COPY --from=builder /app/target/release/api /usr/local/bin/api
-EXPOSE 8080
-CMD ["/usr/local/bin/api"]
+COPY --from=builder /usr/local/cargo/bin/sqlx /usr/local/bin/sqlx
+COPY infra/migrations /app/migrations
+COPY apps/api/entrypoint.sh /usr/local/bin/entrypoint.sh
+RUN chmod +x /usr/local/bin/entrypoint.sh
+EXPOSE 7080
+CMD ["/usr/local/bin/entrypoint.sh"]
apps/api/entrypoint.sh (new file, 63 lines)

#!/bin/sh
set -e

# psql requires "postgresql://" but Rust/sqlx accepts both "postgres://" and "postgresql://"
PSQL_URL=$(echo "$DATABASE_URL" | sed 's|^postgres://|postgresql://|')

# Check 1: does the old schema exist (index_jobs table)?
HAS_OLD_TABLES=$(psql "$PSQL_URL" -tAc \
  "SELECT EXISTS(SELECT 1 FROM information_schema.tables WHERE table_name='index_jobs')::text" \
  2>/dev/null || echo "false")

# Check 2: is sqlx tracking present and non-empty?
HAS_SQLX_TABLE=$(psql "$PSQL_URL" -tAc \
  "SELECT EXISTS(SELECT 1 FROM information_schema.tables WHERE table_name='_sqlx_migrations')::text" \
  2>/dev/null || echo "false")

if [ "$HAS_SQLX_TABLE" = "true" ]; then
  HAS_SQLX_ROWS=$(psql "$PSQL_URL" -tAc \
    "SELECT EXISTS(SELECT 1 FROM _sqlx_migrations LIMIT 1)::text" \
    2>/dev/null || echo "false")
else
  HAS_SQLX_ROWS="false"
fi

echo "==> Migration check: old_tables=$HAS_OLD_TABLES sqlx_table=$HAS_SQLX_TABLE sqlx_rows=$HAS_SQLX_ROWS"

if [ "$HAS_OLD_TABLES" = "true" ] && [ "$HAS_SQLX_ROWS" = "false" ]; then
  echo "==> Upgrade from pre-sqlx migration system detected: creating baseline..."

  psql "$PSQL_URL" -c "
    CREATE TABLE IF NOT EXISTS _sqlx_migrations (
      version BIGINT PRIMARY KEY,
      description TEXT NOT NULL,
      installed_on TIMESTAMPTZ NOT NULL DEFAULT NOW(),
      success BOOLEAN NOT NULL,
      checksum BYTEA NOT NULL,
      execution_time BIGINT NOT NULL
    )
  "

  for f in /app/migrations/*.sql; do
    filename=$(basename "$f")
    # Strip leading zeros to get the integer version (e.g. "0005" -> "5")
    version=$(echo "$filename" | sed 's/^0*//' | cut -d'_' -f1)
    description=$(echo "$filename" | sed 's/^[0-9]*_//' | sed 's/\.sql$//')
    checksum=$(sha384sum "$f" | awk '{print $1}')

    psql "$PSQL_URL" -c "
      INSERT INTO _sqlx_migrations (version, description, installed_on, success, checksum, execution_time)
      VALUES ($version, '$description', NOW(), TRUE, decode('$checksum', 'hex'), 0)
      ON CONFLICT (version) DO NOTHING
    "
    echo "    baselined: $filename"
  done

  echo "==> Baseline complete."
fi

echo "==> Running migrations..."
sqlx migrate run --source /app/migrations

echo "==> Starting API..."
exec /usr/local/bin/api
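The baseline loop's filename parsing (version from the numeric prefix with leading zeros stripped, description from the rest, SHA-384 checksum) can be mirrored like this, in Python for illustration:

```python
import hashlib

def baseline_row(filename: str, contents: bytes):
    """Mirror the entrypoint's parsing: '0005_add_books.sql' ->
    (version 5, description 'add_books', sha384 checksum hex)."""
    stem = filename[:-len(".sql")]
    number, _, description = stem.partition("_")
    # int() drops the leading zeros, like the script's sed 's/^0*//'
    version = int(number)
    checksum = hashlib.sha384(contents).hexdigest()
    return version, description, checksum

print(baseline_row("0005_add_books.sql", b"CREATE TABLE books();")[:2])
# → (5, 'add_books')
```

This matches sqlx's own convention, where the migration version is the leading integer of the filename and the checksum is a SHA-384 of the file contents.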
apps/api/src/api_middleware.rs (new file, 43 lines)

use axum::{
    extract::State,
    middleware::Next,
    response::{IntoResponse, Response},
};
use std::time::Duration;
use std::sync::atomic::Ordering;

use crate::state::AppState;

pub async fn request_counter(
    State(state): State<AppState>,
    req: axum::extract::Request,
    next: Next,
) -> Response {
    state.metrics.requests_total.fetch_add(1, Ordering::Relaxed);
    next.run(req).await
}

pub async fn read_rate_limit(
    State(state): State<AppState>,
    req: axum::extract::Request,
    next: Next,
) -> Response {
    let mut limiter = state.read_rate_limit.lock().await;
    if limiter.window_started_at.elapsed() >= Duration::from_secs(1) {
        limiter.window_started_at = std::time::Instant::now();
        limiter.requests_in_window = 0;
    }

    let rate_limit = state.settings.read().await.rate_limit_per_second;
    if limiter.requests_in_window >= rate_limit {
        return (
            axum::http::StatusCode::TOO_MANY_REQUESTS,
            "rate limit exceeded",
        )
            .into_response();
    }

    limiter.requests_in_window += 1;
    drop(limiter);
    next.run(req).await
}
@@ -8,7 +8,7 @@ use axum::{
 use chrono::Utc;
 use sqlx::Row;
 
-use crate::{error::ApiError, AppState};
+use crate::{error::ApiError, state::AppState};
 
 #[derive(Clone, Debug)]
 pub enum Scope {
@@ -94,11 +94,15 @@ async fn authenticate(state: &AppState, token: &str) -> Result<Scope, ApiError>
 }
 
 fn parse_prefix(token: &str) -> Option<&str> {
-    let mut parts = token.split('_');
-    let namespace = parts.next()?;
-    let prefix = parts.next()?;
-    let secret = parts.next()?;
-    if namespace != "stl" || secret.is_empty() || prefix.len() < 6 {
+    // Format: stl_{8-char prefix}_{secret}
+    // Base64 URL_SAFE can contain '_', so we cannot split blindly
+    let rest = token.strip_prefix("stl_")?;
+    if rest.len() < 10 {
+        // 8 (prefix) + 1 ('_') + 1 (secret min)
+        return None;
+    }
+    let prefix = &rest[..8];
+    if rest.as_bytes().get(8) != Some(&b'_') {
         return None;
     }
     Some(prefix)
@@ -5,7 +5,7 @@ use sqlx::Row;
|
|||||||
use uuid::Uuid;
|
use uuid::Uuid;
|
||||||
use utoipa::ToSchema;
|
use utoipa::ToSchema;
|
||||||
|
|
||||||
use crate::{error::ApiError, AppState};
|
use crate::{error::ApiError, index_jobs::IndexJobResponse, state::AppState};
|
||||||
|
|
||||||
#[derive(Deserialize, ToSchema)]
|
#[derive(Deserialize, ToSchema)]
|
||||||
pub struct ListBooksQuery {
|
pub struct ListBooksQuery {
|
||||||
@@ -15,8 +15,10 @@ pub struct ListBooksQuery {
|
|||||||
pub kind: Option<String>,
|
pub kind: Option<String>,
|
||||||
#[schema(value_type = Option<String>)]
|
#[schema(value_type = Option<String>)]
|
||||||
pub series: Option<String>,
|
pub series: Option<String>,
|
||||||
#[schema(value_type = Option<String>)]
|
#[schema(value_type = Option<String>, example = "unread,reading")]
|
||||||
pub cursor: Option<Uuid>,
|
pub reading_status: Option<String>,
|
||||||
|
#[schema(value_type = Option<i64>, example = 1)]
|
||||||
|
pub page: Option<i64>,
|
||||||
#[schema(value_type = Option<i64>, example = 50)]
|
#[schema(value_type = Option<i64>, example = 50)]
|
||||||
pub limit: Option<i64>,
|
pub limit: Option<i64>,
|
||||||
}
|
}
|
||||||
@@ -37,13 +39,19 @@ pub struct BookItem {
     pub thumbnail_url: Option<String>,
     #[schema(value_type = String)]
     pub updated_at: DateTime<Utc>,
+    /// Reading status: "unread", "reading", or "read"
+    pub reading_status: String,
+    pub reading_current_page: Option<i32>,
+    #[schema(value_type = Option<String>)]
+    pub reading_last_read_at: Option<DateTime<Utc>>,
 }
 
 #[derive(Serialize, ToSchema)]
 pub struct BooksPage {
     pub items: Vec<BookItem>,
-    #[schema(value_type = Option<String>)]
-    pub next_cursor: Option<Uuid>,
+    pub total: i64,
+    pub page: i64,
+    pub limit: i64,
 }
 
 #[derive(Serialize, ToSchema)]
@@ -63,6 +71,11 @@ pub struct BookDetails {
     pub file_path: Option<String>,
     pub file_format: Option<String>,
     pub file_parse_status: Option<String>,
+    /// Reading status: "unread", "reading", or "read"
+    pub reading_status: String,
+    pub reading_current_page: Option<i32>,
+    #[schema(value_type = Option<String>)]
+    pub reading_last_read_at: Option<DateTime<Utc>>,
 }
 
 /// List books with optional filtering and pagination
@@ -74,8 +87,9 @@ pub struct BookDetails {
         ("library_id" = Option<String>, Query, description = "Filter by library ID"),
         ("kind" = Option<String>, Query, description = "Filter by book kind (cbz, cbr, pdf)"),
         ("series" = Option<String>, Query, description = "Filter by series name (use 'unclassified' for books without series)"),
-        ("cursor" = Option<String>, Query, description = "Cursor for pagination"),
-        ("limit" = Option<i64>, Query, description = "Max items to return (max 200)"),
+        ("reading_status" = Option<String>, Query, description = "Filter by reading status, comma-separated (e.g. 'unread,reading')"),
+        ("page" = Option<i64>, Query, description = "Page number (1-indexed, default 1)"),
+        ("limit" = Option<i64>, Query, description = "Items per page (max 200, default 50)"),
     ),
     responses(
         (status = 200, body = BooksPage),
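The comma-separated `reading_status` parameter documented above is parsed with the same `split`/`trim`/`filter` chain the handler uses inline; pulled out as a sketch:

```rust
// Turn "unread, reading," into ["unread", "reading"]: trim each piece and
// drop empties so trailing commas and stray spaces are harmless.
fn parse_statuses(s: &str) -> Vec<String> {
    s.split(',')
        .map(|v| v.trim().to_string())
        .filter(|v| !v.is_empty())
        .collect()
}
```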
@@ -88,55 +102,88 @@ pub async fn list_books(
     Query(query): Query<ListBooksQuery>,
 ) -> Result<Json<BooksPage>, ApiError> {
     let limit = query.limit.unwrap_or(50).clamp(1, 200);
+    let page = query.page.unwrap_or(1).max(1);
+    let offset = (page - 1) * limit;
 
-    // Build series filter condition
-    let series_condition = match query.series.as_deref() {
-        Some("unclassified") => "AND (series IS NULL OR series = '')",
-        Some(_series_name) => "AND series = $5",
-        None => "",
+    // Parse reading_status CSV → Vec<String>
+    let reading_statuses: Option<Vec<String>> = query.reading_status.as_deref().map(|s| {
+        s.split(',').map(|v| v.trim().to_string()).filter(|v| !v.is_empty()).collect()
+    });
+
+    // Conditions shared by COUNT and DATA: $1 = library_id, $2 = kind, then the optional ones
+    let mut p: usize = 2;
+    let series_cond = match query.series.as_deref() {
+        Some("unclassified") => "AND (b.series IS NULL OR b.series = '')".to_string(),
+        Some(_) => { p += 1; format!("AND b.series = ${p}") }
+        None => String::new(),
     };
+    let rs_cond = if reading_statuses.is_some() {
+        p += 1; format!("AND COALESCE(brp.status, 'unread') = ANY(${p})")
+    } else { String::new() };
 
-    let sql = format!(
-        r#"
-        SELECT id, library_id, kind, title, author, series, volume, language, page_count, thumbnail_path, updated_at
-        FROM books
-        WHERE ($1::uuid IS NULL OR library_id = $1)
-        AND ($2::text IS NULL OR kind = $2)
-        AND ($3::uuid IS NULL OR id > $3)
-        {}
-        ORDER BY
-        -- Extract text part before numbers (case insensitive)
-        REGEXP_REPLACE(LOWER(title), '[0-9]+', '', 'g'),
-        -- Extract first number group and convert to integer for numeric sort
-        COALESCE(
-            (REGEXP_MATCH(LOWER(title), '\d+'))[1]::int,
-            0
-        ),
-        -- Then by full title as fallback
-        title ASC
-        LIMIT $4
-        "#,
-        series_condition
+    let count_sql = format!(
+        r#"SELECT COUNT(*) FROM books b
+        LEFT JOIN book_reading_progress brp ON brp.book_id = b.id
+        WHERE ($1::uuid IS NULL OR b.library_id = $1)
+        AND ($2::text IS NULL OR b.kind = $2)
+        {series_cond}
+        {rs_cond}"#
     );
 
-    let mut query_builder = sqlx::query(&sql)
+    // DATA: same filter params, then $N+1 = limit, $N+2 = offset
+    let limit_p = p + 1;
+    let offset_p = p + 2;
+    let data_sql = format!(
+        r#"
+        SELECT b.id, b.library_id, b.kind, b.title, b.author, b.series, b.volume, b.language, b.page_count, b.thumbnail_path, b.updated_at,
+            COALESCE(brp.status, 'unread') AS reading_status,
+            brp.current_page AS reading_current_page,
+            brp.last_read_at AS reading_last_read_at
+        FROM books b
+        LEFT JOIN book_reading_progress brp ON brp.book_id = b.id
+        WHERE ($1::uuid IS NULL OR b.library_id = $1)
+        AND ($2::text IS NULL OR b.kind = $2)
+        {series_cond}
+        {rs_cond}
+        ORDER BY
+        REGEXP_REPLACE(LOWER(b.title), '[0-9]+', '', 'g'),
+        COALESCE(
+            (REGEXP_MATCH(LOWER(b.title), '\d+'))[1]::int,
+            0
+        ),
+        b.title ASC
+        LIMIT ${limit_p} OFFSET ${offset_p}
+        "#
+    );
+
+    let mut count_builder = sqlx::query(&count_sql)
         .bind(query.library_id)
-        .bind(query.kind.as_deref())
-        .bind(query.cursor)
-        .bind(limit + 1);
+        .bind(query.kind.as_deref());
+    let mut data_builder = sqlx::query(&data_sql)
+        .bind(query.library_id)
+        .bind(query.kind.as_deref());
 
-    // Bind series parameter if it's not unclassified
-    if let Some(series) = query.series.as_deref() {
-        if series != "unclassified" {
-            query_builder = query_builder.bind(series);
+    if let Some(s) = query.series.as_deref() {
+        if s != "unclassified" {
+            count_builder = count_builder.bind(s);
+            data_builder = data_builder.bind(s);
         }
     }
+    if let Some(ref statuses) = reading_statuses {
+        count_builder = count_builder.bind(statuses.clone());
+        data_builder = data_builder.bind(statuses.clone());
+    }
 
-    let rows = query_builder.fetch_all(&state.pool).await?;
+    data_builder = data_builder.bind(limit).bind(offset);
+
+    let (count_row, rows) = tokio::try_join!(
+        count_builder.fetch_one(&state.pool),
+        data_builder.fetch_all(&state.pool),
+    )?;
+    let total: i64 = count_row.get(0);
 
     let mut items: Vec<BookItem> = rows
         .iter()
-        .take(limit as usize)
         .map(|row| {
            let thumbnail_path: Option<String> = row.get("thumbnail_path");
            BookItem {
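The hunk above keeps the COUNT and DATA queries' placeholder numbers in sync by letting each optional condition claim the next `$n` from a shared counter. Isolated as a sketch (`optional_conditions` is an illustrative name; the handler does this inline):

```rust
// $1/$2 are fixed (library_id, kind); each optional filter bumps the counter
// so both the COUNT and DATA SQL strings agree on every placeholder index.
fn optional_conditions(has_series: bool, has_status: bool) -> (String, String, usize) {
    let mut p: usize = 2;
    let series_cond = if has_series {
        p += 1;
        format!("AND b.series = ${p}")
    } else {
        String::new()
    };
    let rs_cond = if has_status {
        p += 1;
        format!("AND COALESCE(brp.status, 'unread') = ANY(${p})")
    } else {
        String::new()
    };
    (series_cond, rs_cond, p)
}
```

Binding must then happen in the same order the conditions were numbered, which is why the handler binds series before reading statuses on both builders.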
@@ -151,19 +198,18 @@ pub async fn list_books(
                     page_count: row.get("page_count"),
                     thumbnail_url: thumbnail_path.map(|_p| format!("/books/{}/thumbnail", row.get::<Uuid, _>("id"))),
                     updated_at: row.get("updated_at"),
+                    reading_status: row.get("reading_status"),
+                    reading_current_page: row.get("reading_current_page"),
+                    reading_last_read_at: row.get("reading_last_read_at"),
                 }
             })
         .collect();
 
-    let next_cursor = if rows.len() > limit as usize {
-        items.last().map(|b| b.id)
-    } else {
-        None
-    };
-
     Ok(Json(BooksPage {
         items: std::mem::take(&mut items),
-        next_cursor,
+        total,
+        page,
+        limit,
     }))
 }
 
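The `total`/`page`/`limit` response replaces the cursor with plain offset pagination. The arithmetic, extracted as a sketch with the same defaults and clamps as the handler:

```rust
// limit clamped to 1..=200 (default 50), page 1-indexed (default 1, floored at 1),
// offset derived from the two; returns (page, limit, offset).
fn page_window(page: Option<i64>, limit: Option<i64>) -> (i64, i64, i64) {
    let limit = limit.unwrap_or(50).clamp(1, 200);
    let page = page.unwrap_or(1).max(1);
    let offset = (page - 1) * limit;
    (page, limit, offset)
}
```

A client computes the last page as `ceil(total / limit)` from the same three fields the endpoint now returns.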
@@ -189,7 +235,10 @@ pub async fn get_book(
     let row = sqlx::query(
         r#"
         SELECT b.id, b.library_id, b.kind, b.title, b.author, b.series, b.volume, b.language, b.page_count, b.thumbnail_path,
-            bf.abs_path, bf.format, bf.parse_status
+            bf.abs_path, bf.format, bf.parse_status,
+            COALESCE(brp.status, 'unread') AS reading_status,
+            brp.current_page AS reading_current_page,
+            brp.last_read_at AS reading_last_read_at
         FROM books b
         LEFT JOIN LATERAL (
             SELECT abs_path, format, parse_status
@@ -198,6 +247,7 @@ pub async fn get_book(
             ORDER BY updated_at DESC
             LIMIT 1
         ) bf ON TRUE
+        LEFT JOIN book_reading_progress brp ON brp.book_id = b.id
         WHERE b.id = $1
         "#,
     )
@@ -221,6 +271,9 @@ pub async fn get_book(
         file_path: row.get("abs_path"),
         file_format: row.get("format"),
         file_parse_status: row.get("parse_status"),
+        reading_status: row.get("reading_status"),
+        reading_current_page: row.get("reading_current_page"),
+        reading_last_read_at: row.get("reading_last_read_at"),
     }))
 }
 
@@ -228,6 +281,7 @@ pub async fn get_book(
 pub struct SeriesItem {
     pub name: String,
     pub book_count: i64,
+    pub books_read_count: i64,
     #[schema(value_type = String)]
     pub first_book_id: Uuid,
 }
@@ -235,14 +289,19 @@ pub struct SeriesItem {
 #[derive(Serialize, ToSchema)]
 pub struct SeriesPage {
     pub items: Vec<SeriesItem>,
-    #[schema(value_type = Option<String>)]
-    pub next_cursor: Option<String>,
+    pub total: i64,
+    pub page: i64,
+    pub limit: i64,
 }
 
 #[derive(Deserialize, ToSchema)]
 pub struct ListSeriesQuery {
-    #[schema(value_type = Option<String>)]
-    pub cursor: Option<String>,
+    #[schema(value_type = Option<String>, example = "dragon")]
+    pub q: Option<String>,
+    #[schema(value_type = Option<String>, example = "unread,reading")]
+    pub reading_status: Option<String>,
+    #[schema(value_type = Option<i64>, example = 1)]
+    pub page: Option<i64>,
     #[schema(value_type = Option<i64>, example = 50)]
     pub limit: Option<i64>,
 }
@@ -254,8 +313,10 @@ pub struct ListSeriesQuery {
     tag = "books",
     params(
         ("library_id" = String, Path, description = "Library UUID"),
-        ("cursor" = Option<String>, Query, description = "Cursor for pagination (series name)"),
-        ("limit" = Option<i64>, Query, description = "Max items to return (max 200)"),
+        ("q" = Option<String>, Query, description = "Filter by series name (case-insensitive, partial match)"),
+        ("reading_status" = Option<String>, Query, description = "Filter by reading status, comma-separated (e.g. 'unread,reading')"),
+        ("page" = Option<i64>, Query, description = "Page number (1-indexed, default 1)"),
+        ("limit" = Option<i64>, Query, description = "Items per page (max 200, default 50)"),
     ),
     responses(
         (status = 200, body = SeriesPage),
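The `q` parameter documented above is bound as a Postgres `ILIKE` pattern; wrapping the raw text in `%` on both sides gives the case-insensitive substring match (sketch):

```rust
// "dragon" becomes "%dragon%", matching anywhere inside the series name.
fn ilike_pattern(q: &str) -> String {
    format!("%{}%", q)
}
```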
@@ -269,14 +330,59 @@ pub async fn list_series(
     Query(query): Query<ListSeriesQuery>,
 ) -> Result<Json<SeriesPage>, ApiError> {
     let limit = query.limit.unwrap_or(50).clamp(1, 200);
+    let page = query.page.unwrap_or(1).max(1);
+    let offset = (page - 1) * limit;
 
-    let rows = sqlx::query(
+    let reading_statuses: Option<Vec<String>> = query.reading_status.as_deref().map(|s| {
+        s.split(',').map(|v| v.trim().to_string()).filter(|v| !v.is_empty()).collect()
+    });
+
+    let series_status_expr = r#"CASE
+        WHEN sc.books_read_count = sc.book_count THEN 'read'
+        WHEN sc.books_read_count = 0 THEN 'unread'
+        ELSE 'reading'
+    END"#;
+
+    // Dynamic parameters: $1 = library_id is fixed, then the optional ones in order
+    let mut p: usize = 1;
+
+    let q_cond = if query.q.is_some() {
+        p += 1; format!("AND sc.name ILIKE ${p}")
+    } else { String::new() };
+
+    let count_rs_cond = if reading_statuses.is_some() {
+        p += 1; format!("AND {series_status_expr} = ANY(${p})")
+    } else { String::new() };
+
+    // q_cond and count_rs_cond share the same p; count_sql reuses them directly
+    let count_sql = format!(
+        r#"
+        WITH sorted_books AS (
+            SELECT COALESCE(NULLIF(series, ''), 'unclassified') as name, id
+            FROM books WHERE library_id = $1
+        ),
+        series_counts AS (
+            SELECT sb.name,
+                COUNT(*) as book_count,
+                COUNT(brp.book_id) FILTER (WHERE brp.status = 'read') as books_read_count
+            FROM sorted_books sb
+            LEFT JOIN book_reading_progress brp ON brp.book_id = sb.id
+            GROUP BY sb.name
+        )
+        SELECT COUNT(*) FROM series_counts sc WHERE TRUE {q_cond} {count_rs_cond}
+        "#
+    );
+
+    // DATA: same params in the same order, then limit/offset at the end
+    let limit_p = p + 1;
+    let offset_p = p + 2;
+
+    let data_sql = format!(
         r#"
         WITH sorted_books AS (
             SELECT
                 COALESCE(NULLIF(series, ''), 'unclassified') as name,
                 id,
-                -- Natural sort order for books within series
                 ROW_NUMBER() OVER (
                     PARTITION BY COALESCE(NULLIF(series, ''), 'unclassified')
                     ORDER BY
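The `series_status_expr` CASE above rolls each series' per-book progress up to a single status. A Rust mirror of the same rule, for illustration only (the real filter is evaluated in Postgres, with the same branch order, so an empty series would report "read"):

```rust
// All books read -> "read"; none read -> "unread"; anything in between -> "reading".
fn series_status(book_count: i64, books_read_count: i64) -> &'static str {
    if books_read_count == book_count {
        "read"
    } else if books_read_count == 0 {
        "unread"
    } else {
        "reading"
    }
}
```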
@@ -289,64 +395,202 @@ pub async fn list_series(
         ),
         series_counts AS (
             SELECT
-                name,
-                COUNT(*) as book_count
-            FROM sorted_books
-            GROUP BY name
+                sb.name,
+                COUNT(*) as book_count,
+                COUNT(brp.book_id) FILTER (WHERE brp.status = 'read') as books_read_count
+            FROM sorted_books sb
+            LEFT JOIN book_reading_progress brp ON brp.book_id = sb.id
+            GROUP BY sb.name
         )
         SELECT
             sc.name,
             sc.book_count,
+            sc.books_read_count,
             sb.id as first_book_id
         FROM series_counts sc
         JOIN sorted_books sb ON sb.name = sc.name AND sb.rn = 1
-        WHERE ($2::text IS NULL OR sc.name > $2)
+        WHERE TRUE
+        {q_cond}
+        {count_rs_cond}
         ORDER BY
-        -- Natural sort: extract text part before numbers
         REGEXP_REPLACE(LOWER(sc.name), '[0-9]+', '', 'g'),
-        -- Extract first number group and convert to integer
         COALESCE(
             (REGEXP_MATCH(LOWER(sc.name), '\d+'))[1]::int,
             0
         ),
         sc.name ASC
-        LIMIT $3
-        "#,
-    )
-    .bind(library_id)
-    .bind(query.cursor.as_deref())
-    .bind(limit + 1)
-    .fetch_all(&state.pool)
-    .await?;
+        LIMIT ${limit_p} OFFSET ${offset_p}
+        "#
+    );
+
+    let q_pattern = query.q.as_deref().map(|q| format!("%{}%", q));
+
+    let mut count_builder = sqlx::query(&count_sql).bind(library_id);
+    let mut data_builder = sqlx::query(&data_sql).bind(library_id);
+
+    if let Some(ref pat) = q_pattern {
+        count_builder = count_builder.bind(pat);
+        data_builder = data_builder.bind(pat);
+    }
+    if let Some(ref statuses) = reading_statuses {
+        count_builder = count_builder.bind(statuses.clone());
+        data_builder = data_builder.bind(statuses.clone());
+    }
+
+    data_builder = data_builder.bind(limit).bind(offset);
+
+    let (count_row, rows) = tokio::try_join!(
+        count_builder.fetch_one(&state.pool),
+        data_builder.fetch_all(&state.pool),
+    )?;
+    let total: i64 = count_row.get(0);
 
     let mut items: Vec<SeriesItem> = rows
         .iter()
-        .take(limit as usize)
         .map(|row| SeriesItem {
             name: row.get("name"),
             book_count: row.get("book_count"),
+            books_read_count: row.get("books_read_count"),
             first_book_id: row.get("first_book_id"),
         })
         .collect();
 
-    let next_cursor = if rows.len() > limit as usize {
-        items.last().map(|s| s.name.clone())
-    } else {
-        None
-    };
-
     Ok(Json(SeriesPage {
         items: std::mem::take(&mut items),
-        next_cursor,
+        total,
+        page,
+        limit,
     }))
 }
+
+fn remap_libraries_path(path: &str) -> String {
+    if let Ok(root) = std::env::var("LIBRARIES_ROOT_PATH") {
+        if path.starts_with("/libraries/") {
+            return path.replacen("/libraries", &root, 1);
+        }
+    }
+    path.to_string()
+}
+
+fn unmap_libraries_path(path: &str) -> String {
+    if let Ok(root) = std::env::var("LIBRARIES_ROOT_PATH") {
+        if path.starts_with(&root) {
+            return path.replacen(&root, "/libraries", 1);
+        }
+    }
+    path.to_string()
+}
+
+/// Enqueue a CBR → CBZ conversion job for a single book
+#[utoipa::path(
+    post,
+    path = "/books/{id}/convert",
+    tag = "books",
+    params(
+        ("id" = String, Path, description = "Book UUID"),
+    ),
+    responses(
+        (status = 200, body = IndexJobResponse),
+        (status = 404, description = "Book not found"),
+        (status = 409, description = "Book is not CBR, or target CBZ already exists"),
+        (status = 401, description = "Unauthorized"),
+        (status = 403, description = "Forbidden - Admin scope required"),
+    ),
+    security(("Bearer" = []))
+)]
+pub async fn convert_book(
+    State(state): State<AppState>,
+    Path(book_id): Path<Uuid>,
+) -> Result<Json<IndexJobResponse>, ApiError> {
+    // Fetch book file info
+    let row = sqlx::query(
+        r#"
+        SELECT b.id, bf.abs_path, bf.format
+        FROM books b
+        LEFT JOIN LATERAL (
+            SELECT abs_path, format
+            FROM book_files
+            WHERE book_id = b.id
+            ORDER BY updated_at DESC
+            LIMIT 1
+        ) bf ON TRUE
+        WHERE b.id = $1
+        "#,
+    )
+    .bind(book_id)
+    .fetch_optional(&state.pool)
+    .await?;
+
+    let row = row.ok_or_else(|| ApiError::not_found("book not found"))?;
+    let abs_path: Option<String> = row.get("abs_path");
+    let format: Option<String> = row.get("format");
+
+    if format.as_deref() != Some("cbr") {
+        return Err(ApiError {
+            status: axum::http::StatusCode::CONFLICT,
+            message: "book is not in CBR format".to_string(),
+        });
+    }
+
+    let abs_path = abs_path.ok_or_else(|| ApiError::not_found("book file path not found"))?;
+
+    // Check for existing CBZ with same stem
+    let physical_path = remap_libraries_path(&abs_path);
+    let cbr_path = std::path::Path::new(&physical_path);
+    if let (Some(parent), Some(stem)) = (cbr_path.parent(), cbr_path.file_stem()) {
+        let cbz_path = parent.join(format!("{}.cbz", stem.to_string_lossy()));
+        if cbz_path.exists() {
+            return Err(ApiError {
+                status: axum::http::StatusCode::CONFLICT,
+                message: format!(
+                    "CBZ file already exists: {}",
+                    unmap_libraries_path(&cbz_path.to_string_lossy())
+                ),
+            });
+        }
+    }
+
+    // Create the conversion job
+    let job_id = Uuid::new_v4();
+    sqlx::query(
+        "INSERT INTO index_jobs (id, book_id, type, status) VALUES ($1, $2, 'cbr_to_cbz', 'pending')",
+    )
+    .bind(job_id)
+    .bind(book_id)
+    .execute(&state.pool)
+    .await?;
+
+    let job_row = sqlx::query(
+        "SELECT id, library_id, book_id, type, status, started_at, finished_at, stats_json, error_opt, created_at, progress_percent, processed_files, total_files FROM index_jobs WHERE id = $1",
+    )
+    .bind(job_id)
+    .fetch_one(&state.pool)
+    .await?;
+
+    Ok(Json(crate::index_jobs::map_row(job_row)))
+}
 
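`remap_libraries_path`/`unmap_libraries_path` above translate the canonical `/libraries` prefix to a host-specific root and back. A pure variant for illustration, with the root passed in instead of read from `LIBRARIES_ROOT_PATH`, which makes the round-trip testable:

```rust
// Canonical DB path -> physical path under `root` (only the first match is replaced).
fn remap(path: &str, root: &str) -> String {
    if path.starts_with("/libraries/") {
        path.replacen("/libraries", root, 1)
    } else {
        path.to_string()
    }
}

// Physical path under `root` -> canonical "/libraries" path.
fn unmap(path: &str, root: &str) -> String {
    if path.starts_with(root) {
        path.replacen(root, "/libraries", 1)
    } else {
        path.to_string()
    }
}
```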
 use axum::{
     body::Body,
     http::{header, HeaderMap, HeaderValue, StatusCode},
     response::IntoResponse,
 };
 
+/// Get book thumbnail image
+#[utoipa::path(
+    get,
+    path = "/books/{id}/thumbnail",
+    tag = "books",
+    params(
+        ("id" = String, Path, description = "Book UUID"),
+    ),
+    responses(
+        (status = 200, description = "WebP thumbnail image", content_type = "image/webp"),
+        (status = 404, description = "Book not found or thumbnail not available"),
+        (status = 401, description = "Unauthorized"),
+    ),
+    security(("Bearer" = []))
+)]
 pub async fn get_thumbnail(
     State(state): State<AppState>,
     Path(book_id): Path<Uuid>,
@@ -361,10 +605,15 @@ pub async fn get_thumbnail(
     let thumbnail_path: Option<String> = row.get("thumbnail_path");
 
     let data = if let Some(ref path) = thumbnail_path {
-        std::fs::read(path)
-            .map_err(|e| ApiError::internal(format!("cannot read thumbnail: {}", e)))?
+        match std::fs::read(path) {
+            Ok(bytes) => bytes,
+            Err(_) => {
+                // File missing on disk (e.g. different mount in dev) — fall back to live render
+                crate::pages::render_book_page_1(&state, book_id, 300, 80).await?
+            }
+        }
     } else {
-        // Fallback: render page 1 on the fly (same as pages logic)
+        // No stored thumbnail yet — render page 1 on the fly
         crate::pages::render_book_page_1(&state, book_id, 300, 80).await?
     };
 
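The hunk above changes a hard failure into a read-with-fallback: prefer the cached file, regenerate when the path is unset or unreadable. Reduced to a synchronous sketch (`read_or_render` is an illustrative name; the real fallback is the async page renderer):

```rust
use std::fs;

// Return the cached bytes when the path exists and reads cleanly,
// otherwise invoke the fallback renderer.
fn read_or_render<F: FnOnce() -> Vec<u8>>(path: Option<&str>, render: F) -> Vec<u8> {
    match path.and_then(|p| fs::read(p).ok()) {
        Some(bytes) => bytes,
        None => render(),
    }
}
```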
@@ -38,6 +38,13 @@ impl ApiError {
         }
     }
 
+    pub fn unprocessable_entity(message: impl Into<String>) -> Self {
+        Self {
+            status: StatusCode::UNPROCESSABLE_ENTITY,
+            message: message.into(),
+        }
+    }
+
     pub fn not_found(message: impl Into<String>) -> Self {
         Self {
             status: StatusCode::NOT_FOUND,
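The new constructor follows the existing pattern of pairing an HTTP status with a message. A standalone mirror for illustration, with a plain `u16` in place of axum's `StatusCode` so the sketch compiles on its own:

```rust
// Hypothetical stand-in for the real ApiError, which wraps axum's StatusCode.
#[derive(Debug, PartialEq)]
struct ApiError {
    status: u16,
    message: String,
}

impl ApiError {
    // 422 Unprocessable Entity, e.g. for a reading_status value outside the allowed set.
    fn unprocessable_entity(message: impl Into<String>) -> Self {
        Self { status: 422, message: message.into() }
    }
}
```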
apps/api/src/handlers.rs (new file, 26 lines)
@@ -0,0 +1,26 @@
+use axum::{extract::State, Json};
+use std::sync::atomic::Ordering;
+
+use crate::{error::ApiError, state::AppState};
+
+pub async fn health() -> &'static str {
+    "ok"
+}
+
+pub async fn docs_redirect() -> impl axum::response::IntoResponse {
+    axum::response::Redirect::to("/swagger-ui/")
+}
+
+pub async fn ready(State(state): State<AppState>) -> Result<Json<serde_json::Value>, ApiError> {
+    sqlx::query("SELECT 1").execute(&state.pool).await?;
+    Ok(Json(serde_json::json!({"status": "ready"})))
+}
+
+pub async fn metrics(State(state): State<AppState>) -> String {
+    format!(
+        "requests_total {}\npage_cache_hits {}\npage_cache_misses {}\n",
+        state.metrics.requests_total.load(Ordering::Relaxed),
+        state.metrics.page_cache_hits.load(Ordering::Relaxed),
+        state.metrics.page_cache_misses.load(Ordering::Relaxed),
+    )
+}
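The `/metrics` handler above emits a minimal plain-text body, one `name value` line per counter. Its shape, with the atomics replaced by plain numbers (sketch):

```rust
// Same format! template as the handler; inputs stand in for the atomic counters.
fn render_metrics(requests: u64, hits: u64, misses: u64) -> String {
    format!(
        "requests_total {}\npage_cache_hits {}\npage_cache_misses {}\n",
        requests, hits, misses
    )
}
```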
@@ -8,7 +8,7 @@ use tokio_stream::Stream;
 use uuid::Uuid;
 use utoipa::ToSchema;
 
-use crate::{error::ApiError, AppState};
+use crate::{error::ApiError, state::AppState};
 
 #[derive(Deserialize, ToSchema)]
 pub struct RebuildRequest {
@@ -24,6 +24,8 @@ pub struct IndexJobResponse {
     pub id: Uuid,
     #[schema(value_type = Option<String>)]
     pub library_id: Option<Uuid>,
+    #[schema(value_type = Option<String>)]
+    pub book_id: Option<Uuid>,
     pub r#type: String,
     pub status: String,
     #[schema(value_type = Option<String>)]
@@ -53,12 +55,16 @@ pub struct IndexJobDetailResponse {
     pub id: Uuid,
     #[schema(value_type = Option<String>)]
     pub library_id: Option<Uuid>,
+    #[schema(value_type = Option<String>)]
+    pub book_id: Option<Uuid>,
     pub r#type: String,
     pub status: String,
     #[schema(value_type = Option<String>)]
     pub started_at: Option<DateTime<Utc>>,
     #[schema(value_type = Option<String>)]
     pub finished_at: Option<DateTime<Utc>>,
+    #[schema(value_type = Option<String>)]
+    pub phase2_started_at: Option<DateTime<Utc>>,
     pub stats_json: Option<serde_json::Value>,
     pub error_opt: Option<String>,
     #[schema(value_type = String)]
@@ -122,7 +128,7 @@ pub async fn enqueue_rebuild(
     .await?;
 
     let row = sqlx::query(
-        "SELECT id, library_id, type, status, started_at, finished_at, stats_json, error_opt, created_at FROM index_jobs WHERE id = $1",
+        "SELECT id, library_id, book_id, type, status, started_at, finished_at, stats_json, error_opt, created_at FROM index_jobs WHERE id = $1",
     )
     .bind(id)
     .fetch_one(&state.pool)
@@ -145,7 +151,7 @@ pub async fn enqueue_rebuild(
 )]
 pub async fn list_index_jobs(State(state): State<AppState>) -> Result<Json<Vec<IndexJobResponse>>, ApiError> {
     let rows = sqlx::query(
-        "SELECT id, library_id, type, status, started_at, finished_at, stats_json, error_opt, created_at, progress_percent, processed_files, total_files FROM index_jobs ORDER BY created_at DESC LIMIT 100",
+        "SELECT id, library_id, book_id, type, status, started_at, finished_at, stats_json, error_opt, created_at, progress_percent, processed_files, total_files FROM index_jobs ORDER BY created_at DESC LIMIT 100",
     )
     .fetch_all(&state.pool)
     .await?;
@@ -185,7 +191,7 @@ pub async fn cancel_job(
     }
 
     let row = sqlx::query(
-        "SELECT id, library_id, type, status, started_at, finished_at, stats_json, error_opt, created_at, progress_percent, processed_files, total_files FROM index_jobs WHERE id = $1",
+        "SELECT id, library_id, book_id, type, status, started_at, finished_at, stats_json, error_opt, created_at, progress_percent, processed_files, total_files FROM index_jobs WHERE id = $1",
     )
     .bind(id.0)
     .fetch_one(&state.pool)
@@ -247,7 +253,7 @@ pub async fn list_folders(
     }
 
     let mut folders = Vec::new();
-    let depth = if params.get("path").is_some() {
+    let depth = if params.contains_key("path") {
         canonical_target.strip_prefix(&canonical_base)
             .map(|p| p.components().count())
             .unwrap_or(0)
@@ -294,6 +300,7 @@ pub fn map_row(row: sqlx::postgres::PgRow) -> IndexJobResponse {
     IndexJobResponse {
         id: row.get("id"),
         library_id: row.get("library_id"),
+        book_id: row.try_get("book_id").ok().flatten(),
         r#type: row.get("type"),
         status: row.get("status"),
         started_at: row.get("started_at"),
@@ -311,10 +318,12 @@ fn map_row_detail(row: sqlx::postgres::PgRow) -> IndexJobDetailResponse {
     IndexJobDetailResponse {
         id: row.get("id"),
         library_id: row.get("library_id"),
+        book_id: row.try_get("book_id").ok().flatten(),
         r#type: row.get("type"),
         status: row.get("status"),
         started_at: row.get("started_at"),
         finished_at: row.get("finished_at"),
+        phase2_started_at: row.try_get("phase2_started_at").ok().flatten(),
         stats_json: row.get("stats_json"),
         error_opt: row.get("error_opt"),
         created_at: row.get("created_at"),
@@ -339,7 +348,7 @@ fn map_row_detail(row: sqlx::postgres::PgRow) -> IndexJobDetailResponse {
 )]
 pub async fn get_active_jobs(State(state): State<AppState>) -> Result<Json<Vec<IndexJobResponse>>, ApiError> {
     let rows = sqlx::query(
-        "SELECT id, library_id, type, status, started_at, finished_at, stats_json, error_opt, created_at, progress_percent, processed_files, total_files
+        "SELECT id, library_id, book_id, type, status, started_at, finished_at, stats_json, error_opt, created_at, progress_percent, processed_files, total_files
|
||||||
FROM index_jobs
|
FROM index_jobs
|
||||||
WHERE status IN ('pending', 'running', 'generating_thumbnails')
|
WHERE status IN ('pending', 'running', 'generating_thumbnails')
|
||||||
ORDER BY created_at ASC"
|
ORDER BY created_at ASC"
|
||||||
@@ -371,8 +380,8 @@ pub async fn get_job_details(
|
|||||||
id: axum::extract::Path<Uuid>,
|
id: axum::extract::Path<Uuid>,
|
||||||
) -> Result<Json<IndexJobDetailResponse>, ApiError> {
|
) -> Result<Json<IndexJobDetailResponse>, ApiError> {
|
||||||
let row = sqlx::query(
|
let row = sqlx::query(
|
||||||
"SELECT id, library_id, type, status, started_at, finished_at, stats_json, error_opt, created_at,
|
"SELECT id, library_id, book_id, type, status, started_at, finished_at, phase2_started_at,
|
||||||
current_file, progress_percent, total_files, processed_files
|
stats_json, error_opt, created_at, current_file, progress_percent, total_files, processed_files
|
||||||
FROM index_jobs WHERE id = $1"
|
FROM index_jobs WHERE id = $1"
|
||||||
)
|
)
|
||||||
.bind(id.0)
|
.bind(id.0)
|
||||||
|
@@ -6,7 +6,7 @@ use sqlx::Row;
 use uuid::Uuid;
 use utoipa::ToSchema;
 
-use crate::{error::ApiError, AppState};
+use crate::{error::ApiError, state::AppState};
 
 #[derive(Serialize, ToSchema)]
 pub struct LibraryResponse {
@@ -18,6 +18,7 @@ pub struct LibraryResponse {
     pub book_count: i64,
     pub monitor_enabled: bool,
     pub scan_mode: String,
+    #[schema(value_type = Option<String>)]
     pub next_scan_at: Option<chrono::DateTime<chrono::Utc>>,
     pub watcher_enabled: bool,
 }
@@ -1,70 +1,37 @@
 mod auth;
 mod books;
 mod error;
+mod handlers;
 mod index_jobs;
 mod libraries;
+mod api_middleware;
 mod openapi;
 mod pages;
+mod reading_progress;
 mod search;
 mod settings;
+mod state;
 mod thumbnails;
 mod tokens;
 
-use std::{
-    num::NonZeroUsize,
-    sync::{
-        atomic::{AtomicU64, Ordering},
-        Arc,
-    },
-    time::{Duration, Instant},
-};
+use std::sync::Arc;
+use std::time::Instant;
 
 use axum::{
     middleware,
-    response::IntoResponse,
     routing::{delete, get},
-    Json, Router,
+    Router,
 };
 use utoipa::OpenApi;
 use utoipa_swagger_ui::SwaggerUi;
 use lru::LruCache;
+use std::num::NonZeroUsize;
 use stripstream_core::config::ApiConfig;
 use sqlx::postgres::PgPoolOptions;
-use tokio::sync::{Mutex, Semaphore};
+use tokio::sync::{Mutex, RwLock, Semaphore};
 use tracing::info;
 
-#[derive(Clone)]
-struct AppState {
-    pool: sqlx::PgPool,
-    bootstrap_token: Arc<str>,
-    meili_url: Arc<str>,
-    meili_master_key: Arc<str>,
-    page_cache: Arc<Mutex<LruCache<String, Arc<Vec<u8>>>>>,
-    page_render_limit: Arc<Semaphore>,
-    metrics: Arc<Metrics>,
-    read_rate_limit: Arc<Mutex<ReadRateLimit>>,
-}
-
-struct Metrics {
-    requests_total: AtomicU64,
-    page_cache_hits: AtomicU64,
-    page_cache_misses: AtomicU64,
-}
-
-struct ReadRateLimit {
-    window_started_at: Instant,
-    requests_in_window: u32,
-}
-
-impl Metrics {
-    fn new() -> Self {
-        Self {
-            requests_total: AtomicU64::new(0),
-            page_cache_hits: AtomicU64::new(0),
-            page_cache_misses: AtomicU64::new(0),
-        }
-    }
-}
+use crate::state::{load_concurrent_renders, load_dynamic_settings, AppState, Metrics, ReadRateLimit};
 
 #[tokio::main]
 async fn main() -> anyhow::Result<()> {
@@ -80,18 +47,35 @@ async fn main() -> anyhow::Result<()> {
         .connect(&config.database_url)
         .await?;
 
+    // Load concurrent_renders from settings, default to 8
+    let concurrent_renders = load_concurrent_renders(&pool).await;
+    info!("Using concurrent_renders limit: {}", concurrent_renders);
+
+    let dynamic_settings = load_dynamic_settings(&pool).await;
+    info!(
+        "Dynamic settings: rate_limit={}, timeout={}s, format={}, quality={}, filter={}, max_width={}, cache_dir={}",
+        dynamic_settings.rate_limit_per_second,
+        dynamic_settings.timeout_seconds,
+        dynamic_settings.image_format,
+        dynamic_settings.image_quality,
+        dynamic_settings.image_filter,
+        dynamic_settings.image_max_width,
+        dynamic_settings.cache_directory,
+    );
+
     let state = AppState {
         pool,
         bootstrap_token: Arc::from(config.api_bootstrap_token),
         meili_url: Arc::from(config.meili_url),
         meili_master_key: Arc::from(config.meili_master_key),
         page_cache: Arc::new(Mutex::new(LruCache::new(NonZeroUsize::new(512).expect("non-zero")))),
-        page_render_limit: Arc::new(Semaphore::new(8)),
+        page_render_limit: Arc::new(Semaphore::new(concurrent_renders)),
         metrics: Arc::new(Metrics::new()),
         read_rate_limit: Arc::new(Mutex::new(ReadRateLimit {
            window_started_at: Instant::now(),
            requests_in_window: 0,
        })),
+        settings: Arc::new(RwLock::new(dynamic_settings)),
     };
 
     let admin_routes = Router::new()
@@ -99,6 +83,7 @@ async fn main() -> anyhow::Result<()> {
         .route("/libraries/:id", delete(libraries::delete_library))
         .route("/libraries/:id/scan", axum::routing::post(libraries::scan_library))
         .route("/libraries/:id/monitoring", axum::routing::patch(libraries::update_monitoring))
+        .route("/books/:id/convert", axum::routing::post(books::convert_book))
         .route("/index/rebuild", axum::routing::post(index_jobs::enqueue_rebuild))
         .route("/index/thumbnails/rebuild", axum::routing::post(thumbnails::start_thumbnails_rebuild))
         .route("/index/thumbnails/regenerate", axum::routing::post(thumbnails::start_thumbnails_regenerate))
@@ -106,7 +91,6 @@ async fn main() -> anyhow::Result<()> {
         .route("/index/jobs/active", get(index_jobs::get_active_jobs))
         .route("/index/jobs/:id", get(index_jobs::get_job_details))
         .route("/index/jobs/:id/stream", get(index_jobs::stream_job_progress))
-        .route("/index/jobs/:id/thumbnails/checkup", axum::routing::post(thumbnails::start_checkup))
         .route("/index/jobs/:id/errors", get(index_jobs::get_job_errors))
         .route("/index/cancel/:id", axum::routing::post(index_jobs::cancel_job))
         .route("/folders", get(index_jobs::list_folders))
@@ -123,23 +107,24 @@ async fn main() -> anyhow::Result<()> {
         .route("/books/:id", get(books::get_book))
         .route("/books/:id/thumbnail", get(books::get_thumbnail))
         .route("/books/:id/pages/:n", get(pages::get_page))
+        .route("/books/:id/progress", get(reading_progress::get_reading_progress).patch(reading_progress::update_reading_progress))
         .route("/libraries/:library_id/series", get(books::list_series))
         .route("/search", get(search::search_books))
-        .route_layer(middleware::from_fn_with_state(state.clone(), read_rate_limit))
+        .route_layer(middleware::from_fn_with_state(state.clone(), api_middleware::read_rate_limit))
         .route_layer(middleware::from_fn_with_state(
             state.clone(),
             auth::require_read,
         ));
 
     let app = Router::new()
-        .route("/health", get(health))
-        .route("/ready", get(ready))
-        .route("/metrics", get(metrics))
-        .route("/docs", get(docs_redirect))
+        .route("/health", get(handlers::health))
+        .route("/ready", get(handlers::ready))
+        .route("/metrics", get(handlers::metrics))
+        .route("/docs", get(handlers::docs_redirect))
         .merge(SwaggerUi::new("/swagger-ui").url("/openapi.json", openapi::ApiDoc::openapi()))
         .merge(admin_routes)
         .merge(read_routes)
-        .layer(middleware::from_fn_with_state(state.clone(), request_counter))
+        .layer(middleware::from_fn_with_state(state.clone(), api_middleware::request_counter))
         .with_state(state);
 
     let listener = tokio::net::TcpListener::bind(&config.listen_addr).await?;
@@ -148,57 +133,3 @@ async fn main() -> anyhow::Result<()> {
     Ok(())
 }
-
-async fn health() -> &'static str {
-    "ok"
-}
-
-async fn docs_redirect() -> impl axum::response::IntoResponse {
-    axum::response::Redirect::to("/swagger-ui/")
-}
-
-async fn ready(axum::extract::State(state): axum::extract::State<AppState>) -> Result<Json<serde_json::Value>, error::ApiError> {
-    sqlx::query("SELECT 1").execute(&state.pool).await?;
-    Ok(Json(serde_json::json!({"status": "ready"})))
-}
-
-async fn metrics(axum::extract::State(state): axum::extract::State<AppState>) -> String {
-    format!(
-        "requests_total {}\npage_cache_hits {}\npage_cache_misses {}\n",
-        state.metrics.requests_total.load(Ordering::Relaxed),
-        state.metrics.page_cache_hits.load(Ordering::Relaxed),
-        state.metrics.page_cache_misses.load(Ordering::Relaxed),
-    )
-}
-
-async fn request_counter(
-    axum::extract::State(state): axum::extract::State<AppState>,
-    req: axum::extract::Request,
-    next: axum::middleware::Next,
-) -> axum::response::Response {
-    state.metrics.requests_total.fetch_add(1, Ordering::Relaxed);
-    next.run(req).await
-}
-
-async fn read_rate_limit(
-    axum::extract::State(state): axum::extract::State<AppState>,
-    req: axum::extract::Request,
-    next: axum::middleware::Next,
-) -> axum::response::Response {
-    let mut limiter = state.read_rate_limit.lock().await;
-    if limiter.window_started_at.elapsed() >= Duration::from_secs(1) {
-        limiter.window_started_at = Instant::now();
-        limiter.requests_in_window = 0;
-    }
-
-    if limiter.requests_in_window >= 120 {
-        return (
-            axum::http::StatusCode::TOO_MANY_REQUESTS,
-            "rate limit exceeded",
-        )
-            .into_response();
-    }
-
-    limiter.requests_in_window += 1;
-    drop(limiter);
-    next.run(req).await
-}
@@ -6,7 +6,11 @@ use utoipa::OpenApi;
     paths(
         crate::books::list_books,
         crate::books::get_book,
+        crate::reading_progress::get_reading_progress,
+        crate::reading_progress::update_reading_progress,
+        crate::books::get_thumbnail,
         crate::books::list_series,
+        crate::books::convert_book,
         crate::pages::get_page,
         crate::search::search_books,
         crate::index_jobs::enqueue_rebuild,
@@ -27,6 +31,12 @@ use utoipa::OpenApi;
         crate::tokens::list_tokens,
         crate::tokens::create_token,
         crate::tokens::revoke_token,
+        crate::settings::get_settings,
+        crate::settings::get_setting,
+        crate::settings::update_setting,
+        crate::settings::clear_cache,
+        crate::settings::get_cache_stats,
+        crate::settings::get_thumbnail_stats,
     ),
     components(
         schemas(
@@ -34,10 +44,14 @@ use utoipa::OpenApi;
             crate::books::BookItem,
            crate::books::BooksPage,
            crate::books::BookDetails,
+            crate::reading_progress::ReadingProgressResponse,
+            crate::reading_progress::UpdateReadingProgressRequest,
            crate::books::SeriesItem,
+            crate::books::SeriesPage,
            crate::pages::PageQuery,
            crate::search::SearchQuery,
            crate::search::SearchResponse,
+            crate::search::SeriesHit,
            crate::index_jobs::RebuildRequest,
            crate::thumbnails::ThumbnailsRebuildRequest,
            crate::index_jobs::IndexJobResponse,
@@ -51,6 +65,10 @@ use utoipa::OpenApi;
            crate::tokens::CreateTokenRequest,
            crate::tokens::TokenResponse,
            crate::tokens::CreatedTokenResponse,
+            crate::settings::UpdateSettingRequest,
+            crate::settings::ClearCacheResponse,
+            crate::settings::CacheStats,
+            crate::settings::ThumbnailStats,
            ErrorResponse,
         )
     ),
@@ -59,9 +77,11 @@ use utoipa::OpenApi;
     ),
     tags(
         (name = "books", description = "Read-only endpoints for browsing and searching books"),
+        (name = "reading-progress", description = "Reading progress tracking per book"),
         (name = "libraries", description = "Library management endpoints (Admin only)"),
         (name = "indexing", description = "Search index management and job control (Admin only)"),
         (name = "tokens", description = "API token management (Admin only)"),
+        (name = "settings", description = "Application settings and cache management (Admin only)"),
     ),
     modifiers(&SecurityAddon)
 )]
@@ -106,15 +126,24 @@ mod tests {
             .to_pretty_json()
             .expect("Failed to serialize OpenAPI");
 
-        // Check that there are no references to non-existent schemas
-        assert!(
-            !json.contains("\"/components/schemas/Uuid\""),
-            "Uuid schema should not be referenced"
-        );
-        assert!(
-            !json.contains("\"/components/schemas/DateTime\""),
-            "DateTime schema should not be referenced"
-        );
+        // Check that all $ref targets exist in components/schemas
+        let doc: serde_json::Value =
+            serde_json::from_str(&json).expect("OpenAPI JSON should be valid");
+        let empty = serde_json::Map::new();
+        let schemas = doc["components"]["schemas"]
+            .as_object()
+            .unwrap_or(&empty);
+        let prefix = "#/components/schemas/";
+        let mut broken: Vec<String> = Vec::new();
+        for part in json.split(prefix).skip(1) {
+            if let Some(name) = part.split('"').next() {
+                if !schemas.contains_key(name) {
+                    broken.push(name.to_string());
+                }
+            }
+        }
+        broken.dedup();
+        assert!(broken.is_empty(), "Unresolved schema refs: {:?}", broken);
 
         // Save to file for inspection
         std::fs::write("/tmp/openapi.json", &json).expect("Failed to write file");
@@ -20,7 +20,7 @@ use tracing::{debug, error, info, instrument, warn};
 use uuid::Uuid;
 use walkdir::WalkDir;
 
-use crate::{error::ApiError, AppState};
+use crate::{error::ApiError, state::AppState};
 
 fn remap_libraries_path(path: &str) -> String {
     if let Ok(root) = std::env::var("LIBRARIES_ROOT_PATH") {
@@ -31,10 +31,12 @@ fn remap_libraries_path(path: &str) -> String {
     path.to_string()
 }
 
-fn get_image_cache_dir() -> PathBuf {
-    std::env::var("IMAGE_CACHE_DIR")
-        .map(PathBuf::from)
-        .unwrap_or_else(|_| PathBuf::from("/tmp/stripstream-image-cache"))
+fn parse_filter(s: &str) -> image::imageops::FilterType {
+    match s {
+        "triangle" => image::imageops::FilterType::Triangle,
+        "nearest" => image::imageops::FilterType::Nearest,
+        _ => image::imageops::FilterType::Lanczos3,
+    }
 }
 
 fn get_cache_key(abs_path: &str, page: u32, format: &str, quality: u8, width: u32) -> String {
@@ -47,8 +49,7 @@ fn get_cache_key(abs_path: &str, page: u32, format: &str, quality: u8, width: u3
     format!("{:x}", hasher.finalize())
 }
 
-fn get_cache_path(cache_key: &str, format: &OutputFormat) -> PathBuf {
-    let cache_dir = get_image_cache_dir();
+fn get_cache_path(cache_key: &str, format: &OutputFormat, cache_dir: &Path) -> PathBuf {
     let prefix = &cache_key[..2];
     let ext = format.extension();
     cache_dir.join(prefix).join(format!("{}.{}", cache_key, ext))
@@ -145,13 +146,21 @@ pub async fn get_page(
         return Err(ApiError::bad_request("page index starts at 1"));
     }
 
-    let format = OutputFormat::parse(query.format.as_deref())?;
-    let quality = query.quality.unwrap_or(80).clamp(1, 100);
+    let (default_format, default_quality, max_width, filter_str, timeout_secs, cache_dir) = {
+        let s = state.settings.read().await;
+        (s.image_format.clone(), s.image_quality, s.image_max_width, s.image_filter.clone(), s.timeout_seconds, s.cache_directory.clone())
+    };
+
+    let format_str = query.format.as_deref().unwrap_or(default_format.as_str());
+    let format = OutputFormat::parse(Some(format_str))?;
+    let quality = query.quality.unwrap_or(default_quality).clamp(1, 100);
     let width = query.width.unwrap_or(0);
-    if width > 2160 {
+    if width > max_width {
         warn!("Invalid width: {}", width);
-        return Err(ApiError::bad_request("width must be <= 2160"));
+        return Err(ApiError::bad_request(format!("width must be <= {}", max_width)));
     }
+    let filter = parse_filter(&filter_str);
+    let cache_dir_path = std::path::PathBuf::from(&cache_dir);
 
     let memory_cache_key = format!("{book_id}:{n}:{}:{quality}:{width}", format.extension());
 
@@ -195,7 +204,7 @@ pub async fn get_page(
     info!("Processing book file: {} (format: {})", abs_path, input_format);
 
     let disk_cache_key = get_cache_key(&abs_path, n, format.extension(), quality, width);
-    let cache_path = get_cache_path(&disk_cache_key, &format);
+    let cache_path = get_cache_path(&disk_cache_key, &format, &cache_dir_path);
 
     if let Some(cached_bytes) = read_from_disk_cache(&cache_path) {
         info!("Disk cache hit for: {}", cache_path.display());
@@ -221,9 +230,9 @@ pub async fn get_page(
     let start_time = std::time::Instant::now();
 
     let bytes = tokio::time::timeout(
-        Duration::from_secs(60),
+        Duration::from_secs(timeout_secs),
         tokio::task::spawn_blocking(move || {
-            render_page(&abs_path_clone, &input_format, n, &format_clone, quality, width)
+            render_page(&abs_path_clone, &input_format, n, &format_clone, quality, width, filter)
         }),
     )
     .await
@@ -306,9 +315,15 @@ pub async fn render_book_page_1(
         .await
         .map_err(|_| ApiError::internal("render limiter unavailable"))?;
 
+    let (timeout_secs, filter_str) = {
+        let s = state.settings.read().await;
+        (s.timeout_seconds, s.image_filter.clone())
+    };
+    let filter = parse_filter(&filter_str);
 
     let abs_path_clone = abs_path.clone();
     let bytes = tokio::time::timeout(
-        Duration::from_secs(60),
+        Duration::from_secs(timeout_secs),
         tokio::task::spawn_blocking(move || {
             render_page(
                 &abs_path_clone,
@@ -317,6 +332,7 @@ pub async fn render_book_page_1(
                 &OutputFormat::Webp,
                 quality,
                 width,
+                filter,
             )
         }),
     )
@@ -334,6 +350,7 @@ fn render_page(
     out_format: &OutputFormat,
     quality: u8,
     width: u32,
+    filter: image::imageops::FilterType,
 ) -> Result<Vec<u8>, ApiError> {
     let page_bytes = match input_format {
         "cbz" => extract_cbz_page(abs_path, page_number)?,
@@ -342,14 +359,18 @@ fn render_page(
         _ => return Err(ApiError::bad_request("unsupported source format")),
     };
 
-    transcode_image(&page_bytes, out_format, quality, width)
+    transcode_image(&page_bytes, out_format, quality, width, filter)
 }
 
 fn extract_cbz_page(abs_path: &str, page_number: u32) -> Result<Vec<u8>, ApiError> {
     debug!("Opening CBZ archive: {}", abs_path);
     let file = std::fs::File::open(abs_path).map_err(|e| {
+        if e.kind() == std::io::ErrorKind::NotFound {
+            ApiError::not_found("book file not accessible")
+        } else {
             error!("Cannot open CBZ file {}: {}", abs_path, e);
             ApiError::internal(format!("cannot open cbz: {e}"))
+        }
     })?;
 
     let mut archive = zip::ZipArchive::new(file).map_err(|e| {
@@ -495,7 +516,7 @@ fn render_pdf_page(abs_path: &str, page_number: u32, width: u32) -> Result<Vec<u
     Ok(bytes)
 }
 
-fn transcode_image(input: &[u8], out_format: &OutputFormat, quality: u8, width: u32) -> Result<Vec<u8>, ApiError> {
+fn transcode_image(input: &[u8], out_format: &OutputFormat, quality: u8, width: u32, filter: image::imageops::FilterType) -> Result<Vec<u8>, ApiError> {
     debug!("Transcoding image: {} bytes, format: {:?}, quality: {}, width: {}", input.len(), out_format, quality, width);
     let source_format = image::guess_format(input).ok();
     debug!("Source format detected: {:?}", source_format);
@@ -514,7 +535,7 @@ fn transcode_image(input: &[u8], out_format: &OutputFormat, quality: u8, width:
 
     if width > 0 {
         debug!("Resizing image to width: {}", width);
-        image = image.resize(width, u32::MAX, image::imageops::FilterType::Lanczos3);
+        image = image.resize(width, u32::MAX, filter);
     }
 
     debug!("Converting to RGBA...");
@@ -550,12 +571,12 @@ fn transcode_image(input: &[u8], out_format: &OutputFormat, quality: u8, width:
 }
 
 fn format_matches(source: &ImageFormat, target: &OutputFormat) -> bool {
-    match (source, target) {
-        (ImageFormat::Jpeg, OutputFormat::Jpeg) => true,
-        (ImageFormat::Png, OutputFormat::Png) => true,
-        (ImageFormat::WebP, OutputFormat::Webp) => true,
-        _ => false,
-    }
+    matches!(
+        (source, target),
+        (ImageFormat::Jpeg, OutputFormat::Jpeg)
+            | (ImageFormat::Png, OutputFormat::Png)
+            | (ImageFormat::WebP, OutputFormat::Webp)
+    )
 }
 
 fn is_image_name(name: &str) -> bool {
|||||||
167
apps/api/src/reading_progress.rs
Normal file
167
apps/api/src/reading_progress.rs
Normal file
@@ -0,0 +1,167 @@
|
|||||||
use axum::{extract::{Path, State}, Json};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use sqlx::Row;
use uuid::Uuid;
use utoipa::ToSchema;

use crate::{error::ApiError, state::AppState};

#[derive(Serialize, ToSchema)]
pub struct ReadingProgressResponse {
    /// Reading status: "unread", "reading", or "read"
    pub status: String,
    /// Current page (only set when status is "reading")
    pub current_page: Option<i32>,
    #[schema(value_type = Option<String>)]
    pub last_read_at: Option<DateTime<Utc>>,
}

#[derive(Deserialize, ToSchema)]
pub struct UpdateReadingProgressRequest {
    /// Reading status: "unread", "reading", or "read"
    pub status: String,
    /// Required when status is "reading", must be > 0
    pub current_page: Option<i32>,
}

/// Get reading progress for a book
#[utoipa::path(
    get,
    path = "/books/{id}/progress",
    tag = "reading-progress",
    params(
        ("id" = String, Path, description = "Book UUID"),
    ),
    responses(
        (status = 200, body = ReadingProgressResponse),
        (status = 404, description = "Book not found"),
        (status = 401, description = "Unauthorized"),
    ),
    security(("Bearer" = []))
)]
pub async fn get_reading_progress(
    State(state): State<AppState>,
    Path(id): Path<Uuid>,
) -> Result<Json<ReadingProgressResponse>, ApiError> {
    // Verify book exists
    let exists: bool = sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM books WHERE id = $1)")
        .bind(id)
        .fetch_one(&state.pool)
        .await?;

    if !exists {
        return Err(ApiError::not_found("book not found"));
    }

    let row = sqlx::query(
        "SELECT status, current_page, last_read_at FROM book_reading_progress WHERE book_id = $1",
    )
    .bind(id)
    .fetch_optional(&state.pool)
    .await?;

    let response = match row {
        Some(r) => ReadingProgressResponse {
            status: r.get("status"),
            current_page: r.get("current_page"),
            last_read_at: r.get("last_read_at"),
        },
        None => ReadingProgressResponse {
            status: "unread".to_string(),
            current_page: None,
            last_read_at: None,
        },
    };

    Ok(Json(response))
}

/// Update reading progress for a book
#[utoipa::path(
    patch,
    path = "/books/{id}/progress",
    tag = "reading-progress",
    params(
        ("id" = String, Path, description = "Book UUID"),
    ),
    request_body = UpdateReadingProgressRequest,
    responses(
        (status = 200, body = ReadingProgressResponse),
        (status = 404, description = "Book not found"),
        (status = 422, description = "Validation error (missing or invalid current_page for status 'reading')"),
        (status = 401, description = "Unauthorized"),
    ),
    security(("Bearer" = []))
)]
pub async fn update_reading_progress(
    State(state): State<AppState>,
    Path(id): Path<Uuid>,
    Json(body): Json<UpdateReadingProgressRequest>,
) -> Result<Json<ReadingProgressResponse>, ApiError> {
    // Validate status value
    if !["unread", "reading", "read"].contains(&body.status.as_str()) {
        return Err(ApiError::bad_request(format!(
            "invalid status '{}': must be one of unread, reading, read",
            body.status
        )));
    }

    // Validate current_page for "reading" status
    if body.status == "reading" {
        match body.current_page {
            None => {
                return Err(ApiError::unprocessable_entity(
                    "current_page is required when status is 'reading'",
                ))
            }
            Some(p) if p <= 0 => {
                return Err(ApiError::unprocessable_entity(
                    "current_page must be greater than 0",
                ))
            }
            _ => {}
        }
    }

    // Verify book exists
    let exists: bool = sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM books WHERE id = $1)")
        .bind(id)
        .fetch_one(&state.pool)
        .await?;

    if !exists {
        return Err(ApiError::not_found("book not found"));
    }

    // current_page is only stored for "reading" status
    let current_page = if body.status == "reading" {
        body.current_page
    } else {
        None
    };

    let row = sqlx::query(
        r#"
        INSERT INTO book_reading_progress (book_id, status, current_page, last_read_at, updated_at)
        VALUES ($1, $2, $3, NOW(), NOW())
        ON CONFLICT (book_id) DO UPDATE
        SET status = EXCLUDED.status,
            current_page = EXCLUDED.current_page,
            last_read_at = NOW(),
            updated_at = NOW()
        RETURNING status, current_page, last_read_at
        "#,
    )
    .bind(id)
    .bind(&body.status)
    .bind(current_page)
    .fetch_one(&state.pool)
    .await?;

    Ok(Json(ReadingProgressResponse {
        status: row.get("status"),
        current_page: row.get("current_page"),
        last_read_at: row.get("last_read_at"),
    }))
}
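The validation at the top of `update_reading_progress` can be read in isolation; a minimal sketch with a hypothetical `ValidationError` enum standing in for the crate's `ApiError` constructors:

```rust
// Hypothetical error type standing in for the handler's ApiError variants.
#[derive(Debug, PartialEq)]
enum ValidationError {
    BadStatus(String),
    MissingPage,
    NonPositivePage,
}

// Mirrors the handler's checks: status must be one of three values, and
// current_page is required and positive only when status is "reading".
fn validate(status: &str, current_page: Option<i32>) -> Result<(), ValidationError> {
    if !["unread", "reading", "read"].contains(&status) {
        return Err(ValidationError::BadStatus(status.to_string()));
    }
    if status == "reading" {
        match current_page {
            None => return Err(ValidationError::MissingPage),
            Some(p) if p <= 0 => return Err(ValidationError::NonPositivePage),
            _ => {}
        }
    }
    Ok(())
}

fn main() {
    assert_eq!(validate("read", None), Ok(()));
    assert_eq!(validate("reading", Some(12)), Ok(()));
    assert_eq!(validate("reading", None), Err(ValidationError::MissingPage));
    assert_eq!(validate("reading", Some(0)), Err(ValidationError::NonPositivePage));
}
```

Note that a `current_page` sent with a non-"reading" status is not rejected; the handler silently discards it before the upsert.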
@@ -1,8 +1,10 @@
 use axum::{extract::{Query, State}, Json};
 use serde::{Deserialize, Serialize};
+use sqlx::Row;
 use utoipa::ToSchema;
+use uuid::Uuid;

-use crate::{error::ApiError, AppState};
+use crate::{error::ApiError, state::AppState};

 #[derive(Deserialize, ToSchema)]
 pub struct SearchQuery {
@@ -18,9 +20,21 @@ pub struct SearchQuery {
     pub limit: Option<usize>,
 }

+#[derive(Serialize, ToSchema)]
+pub struct SeriesHit {
+    #[schema(value_type = String)]
+    pub library_id: Uuid,
+    pub name: String,
+    pub book_count: i64,
+    pub books_read_count: i64,
+    #[schema(value_type = String)]
+    pub first_book_id: Uuid,
+}
+
 #[derive(Serialize, ToSchema)]
 pub struct SearchResponse {
     pub hits: serde_json::Value,
+    pub series_hits: Vec<SeriesHit>,
     pub estimated_total_hits: Option<u64>,
     pub processing_time_ms: Option<u64>,
 }
@@ -31,11 +45,11 @@ pub struct SearchResponse {
     path = "/search",
     tag = "books",
     params(
-        ("q" = String, Query, description = "Search query"),
+        ("q" = String, Query, description = "Search query (books via Meilisearch + series via ILIKE)"),
         ("library_id" = Option<String>, Query, description = "Filter by library ID"),
         ("type" = Option<String>, Query, description = "Filter by type (cbz, cbr, pdf)"),
         ("kind" = Option<String>, Query, description = "Filter by kind (alias for type)"),
-        ("limit" = Option<usize>, Query, description = "Max results (max 100)"),
+        ("limit" = Option<usize>, Query, description = "Max results per type (max 100)"),
     ),
     responses(
         (status = 200, body = SearchResponse),
@@ -66,36 +80,98 @@ pub async fn search_books(
         "filter": if filters.is_empty() { serde_json::Value::Null } else { serde_json::Value::String(filters.join(" AND ")) }
     });

+    let limit_val = query.limit.unwrap_or(20).clamp(1, 100);
+    let q_pattern = format!("%{}%", query.q);
+    let library_id_uuid: Option<uuid::Uuid> = query.library_id.as_deref()
+        .and_then(|s| s.parse().ok());
+
+    // Run the Meilisearch (books) and Postgres (series) searches in parallel
     let client = reqwest::Client::new();
     let url = format!("{}/indexes/books/search", state.meili_url.trim_end_matches('/'));
-    let response = client
-        .post(url)
+    let meili_fut = client
+        .post(&url)
         .header("Authorization", format!("Bearer {}", state.meili_master_key))
         .json(&body)
-        .send()
-        .await
-        .map_err(|e| ApiError::internal(format!("meili request failed: {e}")))?;
+        .send();

-    if !response.status().is_success() {
-        let body = response.text().await.unwrap_or_else(|_| "unknown meili error".to_string());
-        if body.contains("index_not_found") {
-            return Ok(Json(SearchResponse {
-                hits: serde_json::json!([]),
-                estimated_total_hits: Some(0),
-                processing_time_ms: Some(0),
-            }));
-        }
-        return Err(ApiError::internal(format!("meili error: {body}")));
-    }
-
-    let payload: serde_json::Value = response
-        .json()
-        .await
-        .map_err(|e| ApiError::internal(format!("invalid meili response: {e}")))?;
+    let series_sql = r#"
+        WITH sorted_books AS (
+            SELECT
+                library_id,
+                COALESCE(NULLIF(series, ''), 'unclassified') as name,
+                id,
+                ROW_NUMBER() OVER (
+                    PARTITION BY library_id, COALESCE(NULLIF(series, ''), 'unclassified')
+                    ORDER BY
+                        REGEXP_REPLACE(LOWER(title), '[0-9]+', '', 'g'),
+                        COALESCE((REGEXP_MATCH(LOWER(title), '\d+'))[1]::int, 0),
+                        title ASC
+                ) as rn
+            FROM books
+            WHERE ($1::uuid IS NULL OR library_id = $1)
+        ),
+        series_counts AS (
+            SELECT
+                sb.library_id,
+                sb.name,
+                COUNT(*) as book_count,
+                COUNT(brp.book_id) FILTER (WHERE brp.status = 'read') as books_read_count
+            FROM sorted_books sb
+            LEFT JOIN book_reading_progress brp ON brp.book_id = sb.id
+            GROUP BY sb.library_id, sb.name
+        )
+        SELECT sc.library_id, sc.name, sc.book_count, sc.books_read_count, sb.id as first_book_id
+        FROM series_counts sc
+        JOIN sorted_books sb ON sb.library_id = sc.library_id AND sb.name = sc.name AND sb.rn = 1
+        WHERE sc.name ILIKE $2
+        ORDER BY sc.name ASC
+        LIMIT $3
+    "#;
+
+    let series_fut = sqlx::query(series_sql)
+        .bind(library_id_uuid)
+        .bind(&q_pattern)
+        .bind(limit_val as i64)
+        .fetch_all(&state.pool);
+
+    let (meili_resp, series_rows) = tokio::join!(meili_fut, series_fut);
+
+    // Handle the Meilisearch response
+    let meili_resp = meili_resp.map_err(|e| ApiError::internal(format!("meili request failed: {e}")))?;
+    let (hits, estimated_total_hits, processing_time_ms) = if !meili_resp.status().is_success() {
+        let body = meili_resp.text().await.unwrap_or_default();
+        if body.contains("index_not_found") {
+            (serde_json::json!([]), Some(0u64), Some(0u64))
+        } else {
+            return Err(ApiError::internal(format!("meili error: {body}")));
+        }
+    } else {
+        let payload: serde_json::Value = meili_resp.json().await
+            .map_err(|e| ApiError::internal(format!("invalid meili response: {e}")))?;
+        (
+            payload.get("hits").cloned().unwrap_or_else(|| serde_json::json!([])),
+            payload.get("estimatedTotalHits").and_then(|v| v.as_u64()),
+            payload.get("processingTimeMs").and_then(|v| v.as_u64()),
+        )
+    };
+
+    // Map the series rows
+    let series_hits: Vec<SeriesHit> = series_rows
+        .unwrap_or_default()
+        .iter()
+        .map(|row| SeriesHit {
+            library_id: row.get("library_id"),
+            name: row.get("name"),
+            book_count: row.get("book_count"),
+            books_read_count: row.get("books_read_count"),
+            first_book_id: row.get("first_book_id"),
+        })
+        .collect();

     Ok(Json(SearchResponse {
-        hits: payload.get("hits").cloned().unwrap_or_else(|| serde_json::json!([])),
-        estimated_total_hits: payload.get("estimatedTotalHits").and_then(|v| v.as_u64()),
-        processing_time_ms: payload.get("processingTimeMs").and_then(|v| v.as_u64()),
+        hits,
+        series_hits,
+        estimated_total_hits,
+        processing_time_ms,
     }))
 }
@@ -6,28 +6,29 @@ use axum::{
 use serde::{Deserialize, Serialize};
 use serde_json::Value;
 use sqlx::Row;
+use utoipa::ToSchema;

-use crate::{error::ApiError, AppState};
+use crate::{error::ApiError, state::{AppState, load_dynamic_settings}};

-#[derive(Debug, Clone, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize, ToSchema)]
 pub struct UpdateSettingRequest {
     pub value: Value,
 }

-#[derive(Debug, Clone, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize, ToSchema)]
 pub struct ClearCacheResponse {
     pub success: bool,
     pub message: String,
 }

-#[derive(Debug, Clone, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize, ToSchema)]
 pub struct CacheStats {
     pub total_size_mb: f64,
     pub file_count: u64,
     pub directory: String,
 }

-#[derive(Debug, Clone, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize, ToSchema)]
 pub struct ThumbnailStats {
     pub total_size_mb: f64,
     pub file_count: u64,
@@ -43,7 +44,18 @@ pub fn settings_routes() -> Router<AppState> {
         .route("/settings/thumbnail/stats", get(get_thumbnail_stats))
 }

-async fn get_settings(State(state): State<AppState>) -> Result<Json<Value>, ApiError> {
+/// List all settings
+#[utoipa::path(
+    get,
+    path = "/settings",
+    tag = "settings",
+    responses(
+        (status = 200, description = "All settings as key/value object"),
+        (status = 401, description = "Unauthorized"),
+    ),
+    security(("Bearer" = []))
+)]
+pub async fn get_settings(State(state): State<AppState>) -> Result<Json<Value>, ApiError> {
     let rows = sqlx::query(r#"SELECT key, value FROM app_settings"#)
         .fetch_all(&state.pool)
         .await?;
@@ -58,7 +70,20 @@ async fn get_settings(State(state): State<AppState>) -> Result<Json<Value>, ApiE
     Ok(Json(Value::Object(settings)))
 }

-async fn get_setting(
+/// Get a single setting by key
+#[utoipa::path(
+    get,
+    path = "/settings/{key}",
+    tag = "settings",
+    params(("key" = String, Path, description = "Setting key")),
+    responses(
+        (status = 200, description = "Setting value"),
+        (status = 404, description = "Setting not found"),
+        (status = 401, description = "Unauthorized"),
+    ),
+    security(("Bearer" = []))
+)]
+pub async fn get_setting(
     State(state): State<AppState>,
     axum::extract::Path(key): axum::extract::Path<String>,
 ) -> Result<Json<Value>, ApiError> {
@@ -76,7 +101,20 @@ async fn get_setting(
     }
 }

-async fn update_setting(
+/// Create or update a setting
+#[utoipa::path(
+    post,
+    path = "/settings/{key}",
+    tag = "settings",
+    params(("key" = String, Path, description = "Setting key")),
+    request_body = UpdateSettingRequest,
+    responses(
+        (status = 200, description = "Updated setting value"),
+        (status = 401, description = "Unauthorized"),
+    ),
+    security(("Bearer" = []))
+)]
+pub async fn update_setting(
     State(state): State<AppState>,
     axum::extract::Path(key): axum::extract::Path<String>,
     Json(body): Json<UpdateSettingRequest>,
@@ -96,12 +134,29 @@ async fn update_setting(
     .await?;

     let value: Value = row.get("value");

+    // Reload the dynamic settings when the key affects runtime behavior
+    if key == "limits" || key == "image_processing" || key == "cache" {
+        let new_settings = load_dynamic_settings(&state.pool).await;
+        *state.settings.write().await = new_settings;
+    }
+
     Ok(Json(value))
 }

-async fn clear_cache(State(_state): State<AppState>) -> Result<Json<ClearCacheResponse>, ApiError> {
-    let cache_dir = std::env::var("IMAGE_CACHE_DIR")
-        .unwrap_or_else(|_| "/tmp/stripstream-image-cache".to_string());
+/// Clear the image page cache
+#[utoipa::path(
+    post,
+    path = "/settings/cache/clear",
+    tag = "settings",
+    responses(
+        (status = 200, body = ClearCacheResponse),
+        (status = 401, description = "Unauthorized"),
+    ),
+    security(("Bearer" = []))
+)]
+pub async fn clear_cache(State(state): State<AppState>) -> Result<Json<ClearCacheResponse>, ApiError> {
+    let cache_dir = state.settings.read().await.cache_directory.clone();
+
     let result = tokio::task::spawn_blocking(move || {
         if std::path::Path::new(&cache_dir).exists() {
@@ -128,9 +183,19 @@ async fn clear_cache(State(_state): State<AppState>) -> Result<Json<ClearCacheRe
     Ok(Json(result))
 }

-async fn get_cache_stats(State(_state): State<AppState>) -> Result<Json<CacheStats>, ApiError> {
-    let cache_dir = std::env::var("IMAGE_CACHE_DIR")
-        .unwrap_or_else(|_| "/tmp/stripstream-image-cache".to_string());
+/// Get image page cache statistics
+#[utoipa::path(
+    get,
+    path = "/settings/cache/stats",
+    tag = "settings",
+    responses(
+        (status = 200, body = CacheStats),
+        (status = 401, description = "Unauthorized"),
+    ),
+    security(("Bearer" = []))
+)]
+pub async fn get_cache_stats(State(state): State<AppState>) -> Result<Json<CacheStats>, ApiError> {
+    let cache_dir = state.settings.read().await.cache_directory.clone();
+
     let cache_dir_clone = cache_dir.clone();
     let stats = tokio::task::spawn_blocking(move || {
@@ -208,7 +273,18 @@ fn compute_dir_stats(path: &std::path::Path) -> (u64, u64) {
     (total_size, file_count)
 }

-async fn get_thumbnail_stats(State(_state): State<AppState>) -> Result<Json<ThumbnailStats>, ApiError> {
+/// Get thumbnail storage statistics
+#[utoipa::path(
+    get,
+    path = "/settings/thumbnail/stats",
+    tag = "settings",
+    responses(
+        (status = 200, body = ThumbnailStats),
+        (status = 401, description = "Unauthorized"),
+    ),
+    security(("Bearer" = []))
+)]
+pub async fn get_thumbnail_stats(State(_state): State<AppState>) -> Result<Json<ThumbnailStats>, ApiError> {
     let settings = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'thumbnail'"#)
         .fetch_optional(&_state.pool)
         .await?;
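The reload branch in `update_setting` swaps the whole `DynamicSettings` value behind the shared lock, so handlers pick up changes on their next read without restarting. The same swap pattern sketched with `std::sync::RwLock` (the real code uses `tokio::sync::RwLock` and a hypothetical `Settings` stand-in here):

```rust
use std::sync::{Arc, RwLock};

// Simplified stand-in for the app's DynamicSettings.
#[derive(Clone, Debug, PartialEq)]
struct Settings {
    image_quality: u8,
}

fn main() {
    let shared = Arc::new(RwLock::new(Settings { image_quality: 85 }));

    // A "handler" holds a clone of the same Arc.
    let handler_view = Arc::clone(&shared);

    // Equivalent of `*state.settings.write().await = new_settings;`
    *shared.write().unwrap() = Settings { image_quality: 60 };

    // Any reader acquiring the lock after the swap observes the new value.
    assert_eq!(handler_view.read().unwrap().image_quality, 60);
}
```

Replacing the whole struct (rather than mutating fields one by one) keeps the write-lock critical section short and guarantees readers never see a half-updated configuration.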
apps/api/src/state.rs · 136 lines · Normal file
@@ -0,0 +1,136 @@
use std::sync::{
    atomic::AtomicU64,
    Arc,
};
use std::time::Instant;

use lru::LruCache;
use sqlx::{Pool, Postgres, Row};
use tokio::sync::{Mutex, RwLock, Semaphore};

#[derive(Clone)]
pub struct AppState {
    pub pool: sqlx::PgPool,
    pub bootstrap_token: Arc<str>,
    pub meili_url: Arc<str>,
    pub meili_master_key: Arc<str>,
    pub page_cache: Arc<Mutex<LruCache<String, Arc<Vec<u8>>>>>,
    pub page_render_limit: Arc<Semaphore>,
    pub metrics: Arc<Metrics>,
    pub read_rate_limit: Arc<Mutex<ReadRateLimit>>,
    pub settings: Arc<RwLock<DynamicSettings>>,
}

#[derive(Clone)]
pub struct DynamicSettings {
    pub rate_limit_per_second: u32,
    pub timeout_seconds: u64,
    pub image_format: String,
    pub image_quality: u8,
    pub image_filter: String,
    pub image_max_width: u32,
    pub cache_directory: String,
}

impl Default for DynamicSettings {
    fn default() -> Self {
        Self {
            rate_limit_per_second: 120,
            timeout_seconds: 12,
            image_format: "webp".to_string(),
            image_quality: 85,
            image_filter: "lanczos3".to_string(),
            image_max_width: 2160,
            cache_directory: std::env::var("IMAGE_CACHE_DIR")
                .unwrap_or_else(|_| "/tmp/stripstream-image-cache".to_string()),
        }
    }
}

pub struct Metrics {
    pub requests_total: AtomicU64,
    pub page_cache_hits: AtomicU64,
    pub page_cache_misses: AtomicU64,
}

pub struct ReadRateLimit {
    pub window_started_at: Instant,
    pub requests_in_window: u32,
}

impl Metrics {
    pub fn new() -> Self {
        Self {
            requests_total: AtomicU64::new(0),
            page_cache_hits: AtomicU64::new(0),
            page_cache_misses: AtomicU64::new(0),
        }
    }
}

pub async fn load_concurrent_renders(pool: &Pool<Postgres>) -> usize {
    let default_concurrency = 8;
    let row = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'limits'"#)
        .fetch_optional(pool)
        .await;

    match row {
        Ok(Some(row)) => {
            let value: serde_json::Value = row.get("value");
            value
                .get("concurrent_renders")
                .and_then(|v: &serde_json::Value| v.as_u64())
                .map(|v| v as usize)
                .unwrap_or(default_concurrency)
        }
        _ => default_concurrency,
    }
}

pub async fn load_dynamic_settings(pool: &Pool<Postgres>) -> DynamicSettings {
    let mut s = DynamicSettings::default();

    if let Ok(Some(row)) = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'limits'"#)
        .fetch_optional(pool)
        .await
    {
        let v: serde_json::Value = row.get("value");
        if let Some(n) = v.get("rate_limit_per_second").and_then(|x| x.as_u64()) {
            s.rate_limit_per_second = n as u32;
        }
        if let Some(n) = v.get("timeout_seconds").and_then(|x| x.as_u64()) {
            s.timeout_seconds = n;
        }
    }

    if let Ok(Some(row)) = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'image_processing'"#)
        .fetch_optional(pool)
        .await
    {
        let v: serde_json::Value = row.get("value");
        if let Some(s2) = v.get("format").and_then(|x| x.as_str()) {
            s.image_format = s2.to_string();
        }
        if let Some(n) = v.get("quality").and_then(|x| x.as_u64()) {
            s.image_quality = n.clamp(1, 100) as u8;
        }
        if let Some(s2) = v.get("filter").and_then(|x| x.as_str()) {
            s.image_filter = s2.to_string();
        }
        if let Some(n) = v.get("max_width").and_then(|x| x.as_u64()) {
            s.image_max_width = n as u32;
        }
    }

    if let Ok(Some(row)) = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'cache'"#)
        .fetch_optional(pool)
        .await
    {
        let v: serde_json::Value = row.get("value");
        if let Some(dir) = v.get("directory").and_then(|x| x.as_str()) {
            s.cache_directory = dir.to_string();
        }
    }

    s
}
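`load_dynamic_settings` starts from `DynamicSettings::default()` and overlays only the fields present in each stored row, so missing keys silently keep their defaults. A reduced sketch of that defaults-plus-overrides pattern (`StoredOverrides` is a hypothetical stand-in for the JSON value read from `app_settings`):

```rust
// Simplified DynamicSettings with the same defaults as the real struct.
#[derive(Debug, PartialEq)]
struct DynamicSettings {
    image_quality: u8,
    image_max_width: u32,
}

impl Default for DynamicSettings {
    fn default() -> Self {
        Self { image_quality: 85, image_max_width: 2160 }
    }
}

// Hypothetical partial row, standing in for the serde_json::Value lookups.
struct StoredOverrides {
    quality: Option<u64>,
    max_width: Option<u64>,
}

fn apply(overrides: &StoredOverrides) -> DynamicSettings {
    let mut s = DynamicSettings::default();
    if let Some(n) = overrides.quality {
        // Same clamp as the real loader: quality stays in 1..=100.
        s.image_quality = n.clamp(1, 100) as u8;
    }
    if let Some(n) = overrides.max_width {
        s.image_max_width = n as u32;
    }
    s
}

fn main() {
    let partial = StoredOverrides { quality: Some(200), max_width: None };
    let s = apply(&partial);
    assert_eq!(s.image_quality, 100); // out-of-range value clamped
    assert_eq!(s.image_max_width, 2160); // absent key keeps the default
}
```

This makes every settings key optional in the database: an empty `app_settings` table yields a fully usable configuration.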
@@ -1,203 +1,12 @@
|
|||||||
use std::path::Path;
|
|
||||||
|
|
||||||
use anyhow::Context;
|
|
||||||
use axum::{
|
use axum::{
|
||||||
extract::{Path as AxumPath, State},
|
extract::State,
|
||||||
http::StatusCode,
|
|
||||||
Json,
|
Json,
|
||||||
};
|
};
|
||||||
use image::GenericImageView;
|
|
||||||
use serde::Deserialize;
|
use serde::Deserialize;
|
||||||
use sqlx::Row;
|
|
||||||
use tracing::{info, warn};
|
|
||||||
use uuid::Uuid;
|
use uuid::Uuid;
|
||||||
use utoipa::ToSchema;
|
use utoipa::ToSchema;
|
||||||
|
|
||||||
use crate::{error::ApiError, index_jobs, pages, AppState};
|
use crate::{error::ApiError, index_jobs, state::AppState};
|
||||||
|
|
||||||
#[derive(Clone)]
|
|
||||||
struct ThumbnailConfig {
|
|
||||||
enabled: bool,
|
|
||||||
width: u32,
|
|
||||||
height: u32,
|
|
||||||
quality: u8,
|
|
||||||
directory: String,
|
|
||||||
}
|
|
||||||
|
|
||||||
async fn load_thumbnail_config(pool: &sqlx::PgPool) -> ThumbnailConfig {
|
|
||||||
let fallback = ThumbnailConfig {
|
|
||||||
enabled: true,
|
|
||||||
width: 300,
|
|
||||||
height: 400,
|
|
||||||
quality: 80,
|
|
||||||
directory: "/data/thumbnails".to_string(),
|
|
||||||
};
|
|
||||||
let row = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'thumbnail'"#)
|
|
||||||
.fetch_optional(pool)
|
|
||||||
.await;
|
|
||||||
|
|
||||||
match row {
|
|
||||||
Ok(Some(row)) => {
|
|
||||||
let value: serde_json::Value = row.get("value");
|
|
||||||
ThumbnailConfig {
|
|
||||||
enabled: value
|
|
||||||
.get("enabled")
|
|
||||||
.and_then(|v| v.as_bool())
|
|
||||||
.unwrap_or(fallback.enabled),
|
|
||||||
width: value
|
|
||||||
.get("width")
|
|
||||||
.and_then(|v| v.as_u64())
|
|
||||||
.map(|v| v as u32)
|
|
||||||
.unwrap_or(fallback.width),
|
|
||||||
height: value
|
|
||||||
.get("height")
|
|
||||||
.and_then(|v| v.as_u64())
|
|
||||||
.map(|v| v as u32)
|
|
||||||
.unwrap_or(fallback.height),
|
|
||||||
quality: value
|
|
||||||
.get("quality")
|
|
||||||
.and_then(|v| v.as_u64())
|
|
||||||
.map(|v| v as u8)
|
|
||||||
.unwrap_or(fallback.quality),
|
|
||||||
directory: value
|
|
||||||
.get("directory")
|
|
||||||
.and_then(|v| v.as_str())
|
|
||||||
.map(|s| s.to_string())
|
|
||||||
.unwrap_or_else(|| fallback.directory.clone()),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
_ => fallback,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
fn generate_thumbnail(image_bytes: &[u8], config: &ThumbnailConfig) -> anyhow::Result<Vec<u8>> {
|
|
||||||
let img = image::load_from_memory(image_bytes).context("failed to load image")?;
|
|
||||||
let (orig_w, orig_h) = img.dimensions();
|
|
||||||
let ratio_w = config.width as f32 / orig_w as f32;
|
|
||||||
let ratio_h = config.height as f32 / orig_h as f32;
|
|
||||||
let ratio = ratio_w.min(ratio_h);
|
|
||||||
let new_w = (orig_w as f32 * ratio) as u32;
|
|
||||||
let new_h = (orig_h as f32 * ratio) as u32;
|
|
||||||
let resized = img.resize(new_w, new_h, image::imageops::FilterType::Lanczos3);
|
|
||||||
let rgba = resized.to_rgba8();
|
|
||||||
let (w, h) = rgba.dimensions();
|
|
||||||
let rgb_data: Vec<u8> = rgba.pixels().flat_map(|p| [p[0], p[1], p[2]]).collect();
|
|
||||||
let quality = f32::max(config.quality as f32, 85.0);
|
|
||||||
let webp_data =
|
|
||||||
webp::Encoder::new(&rgb_data, webp::PixelLayout::Rgb, w, h).encode(quality);
|
|
||||||
Ok(webp_data.to_vec())
|
|
||||||
}
|
|
||||||
|
|
||||||
fn save_thumbnail(book_id: Uuid, thumbnail_bytes: &[u8], config: &ThumbnailConfig) -> anyhow::Result<String> {
|
|
||||||
let dir = Path::new(&config.directory);
|
|
||||||
std::fs::create_dir_all(dir)?;
|
|
||||||
let filename = format!("{}.webp", book_id);
|
|
||||||
let path = dir.join(&filename);
|
|
||||||
std::fs::write(&path, thumbnail_bytes)?;
|
|
||||||
Ok(path.to_string_lossy().to_string())
|
|
||||||
}

async fn run_checkup(state: AppState, job_id: Uuid) {
    let pool = &state.pool;
    let row = sqlx::query("SELECT library_id, type FROM index_jobs WHERE id = $1")
        .bind(job_id)
        .fetch_optional(pool)
        .await;

    let (library_id, job_type) = match row {
        Ok(Some(r)) => (
            r.get::<Option<Uuid>, _>("library_id"),
            r.get::<String, _>("type"),
        ),
        _ => {
            warn!("thumbnails checkup: job {} not found", job_id);
            return;
        }
    };

    // Regenerate: clear existing thumbnails in scope so they get regenerated
    if job_type == "thumbnail_regenerate" {
        let cleared = sqlx::query(
            r#"UPDATE books SET thumbnail_path = NULL WHERE (library_id = $1 OR $1 IS NULL)"#,
        )
        .bind(library_id)
        .execute(pool)
        .await;
        if let Ok(res) = cleared {
            info!("thumbnails regenerate: cleared {} books", res.rows_affected());
        }
    }

    let book_ids: Vec<Uuid> = sqlx::query_scalar(
        r#"SELECT id FROM books WHERE (library_id = $1 OR $1 IS NULL) AND thumbnail_path IS NULL"#,
    )
    .bind(library_id)
    .fetch_all(pool)
    .await
    .unwrap_or_default();

    let config = load_thumbnail_config(pool).await;
    if !config.enabled || book_ids.is_empty() {
        let _ = sqlx::query(
            "UPDATE index_jobs SET status = 'success', finished_at = NOW(), progress_percent = 100, current_file = NULL WHERE id = $1",
        )
        .bind(job_id)
        .execute(pool)
        .await;
        return;
    }

    let total = book_ids.len() as i32;
    let _ = sqlx::query(
        "UPDATE index_jobs SET status = 'generating_thumbnails', total_files = $2, processed_files = 0, current_file = NULL WHERE id = $1",
    )
    .bind(job_id)
    .bind(total)
    .execute(pool)
    .await;

    for (i, &book_id) in book_ids.iter().enumerate() {
        match pages::render_book_page_1(&state, book_id, config.width, config.quality).await {
            Ok(page_bytes) => {
                match generate_thumbnail(&page_bytes, &config) {
                    Ok(thumb_bytes) => {
                        if let Ok(path) = save_thumbnail(book_id, &thumb_bytes, &config) {
                            if sqlx::query("UPDATE books SET thumbnail_path = $1 WHERE id = $2")
                                .bind(&path)
                                .bind(book_id)
                                .execute(pool)
                                .await
                                .is_ok()
                            {
                                let processed = (i + 1) as i32;
                                let percent = ((i + 1) as f64 / total as f64 * 100.0) as i32;
                                let _ = sqlx::query(
                                    "UPDATE index_jobs SET processed_files = $2, progress_percent = $3 WHERE id = $1",
                                )
                                .bind(job_id)
                                .bind(processed)
                                .bind(percent)
                                .execute(pool)
                                .await;
                            }
                        }
                    }
                    Err(e) => warn!("thumbnail generate failed for book {}: {:?}", book_id, e),
                }
            }
            Err(e) => warn!("render page 1 failed for book {}: {:?}", book_id, e),
        }
    }

    let _ = sqlx::query(
        "UPDATE index_jobs SET status = 'success', finished_at = NOW(), progress_percent = 100, current_file = NULL WHERE id = $1",
    )
    .bind(job_id)
    .execute(pool)
    .await;

    info!("thumbnails checkup finished for job {} ({} books)", job_id, total);
}
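The per-book progress update above reduces to simple integer percent math (processed count and a truncated fraction of the total). A minimal TypeScript sketch of the same bookkeeping, for illustration only; `progress` is a hypothetical helper name, and `Math.trunc` mirrors Rust's truncating `as i32` cast:

```typescript
// After the 0-based item `i` completes, processed = i + 1 and percent is the
// truncated integer fraction of `total`, exactly as run_checkup computes it.
function progress(i: number, total: number): { processed: number; percent: number } {
  const processed = i + 1;
  const percent = Math.trunc(((i + 1) / total) * 100); // truncates like Rust's `as i32`
  return { processed, percent };
}
```

Truncation means the bar only reaches 100 when the final item lands, which avoids showing 100% while work remains.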
 #[derive(Deserialize, ToSchema)]
 pub struct ThumbnailsRebuildRequest {
@@ -205,14 +14,14 @@ pub struct ThumbnailsRebuildRequest {
     pub library_id: Option<Uuid>,
 }
 
-/// POST /index/thumbnails/rebuild — create a job and generate thumbnails for books that don't have one (optional library scope).
+/// POST /index/thumbnails/rebuild — create a job to generate thumbnails for books that don't have one.
 #[utoipa::path(
     post,
     path = "/index/thumbnails/rebuild",
     tag = "indexing",
     request_body = Option<ThumbnailsRebuildRequest>,
     responses(
-        (status = 200, body = index_jobs::IndexJobResponse),
+        (status = 200, body = IndexJobResponse),
         (status = 401, description = "Unauthorized"),
         (status = 403, description = "Forbidden - Admin scope required"),
     ),
@@ -239,14 +48,14 @@ pub async fn start_thumbnails_rebuild(
     Ok(Json(index_jobs::map_row(row)))
 }
 
-/// POST /index/thumbnails/regenerate — create a job and regenerate all thumbnails in scope (clears then regenerates).
+/// POST /index/thumbnails/regenerate — create a job to regenerate all thumbnails (clears then regenerates).
 #[utoipa::path(
     post,
     path = "/index/thumbnails/regenerate",
     tag = "indexing",
     request_body = Option<ThumbnailsRebuildRequest>,
     responses(
-        (status = 200, body = index_jobs::IndexJobResponse),
+        (status = 200, body = IndexJobResponse),
         (status = 401, description = "Unauthorized"),
         (status = 403, description = "Forbidden - Admin scope required"),
     ),
@@ -272,13 +81,3 @@ pub async fn start_thumbnails_regenerate(
 
     Ok(Json(index_jobs::map_row(row)))
 }
-
-/// POST /index/jobs/:id/thumbnails/checkup — start thumbnail generation for books missing thumbnails (called by indexer at end of build).
-pub async fn start_checkup(
-    State(state): State<AppState>,
-    AxumPath(job_id): AxumPath<Uuid>,
-) -> Result<StatusCode, ApiError> {
-    let state = state.clone();
-    tokio::spawn(async move { run_checkup(state, job_id).await });
-    Ok(StatusCode::ACCEPTED)
-}
@@ -8,7 +8,7 @@ use sqlx::Row;
 use uuid::Uuid;
 use utoipa::ToSchema;
 
-use crate::{error::ApiError, AppState};
+use crate::{error::ApiError, state::AppState};
 
 #[derive(Deserialize, ToSchema)]
 pub struct CreateTokenRequest {
@@ -1,4 +1,4 @@
-API_BASE_URL=http://localhost:8080
+API_BASE_URL=http://localhost:7080
 API_BOOTSTRAP_TOKEN=stripstream-dev-bootstrap-token
-NEXT_PUBLIC_API_BASE_URL=http://localhost:8080
+NEXT_PUBLIC_API_BASE_URL=http://localhost:7080
 NEXT_PUBLIC_API_BOOTSTRAP_TOKEN=stripstream-dev-bootstrap-token
apps/backoffice/AGENTS.md (new file)
@@ -0,0 +1,66 @@
# apps/backoffice — Admin interface (Next.js)

Next.js 16 app with React 19, Tailwind CSS v4, TypeScript. Dev port: **7082** (`npm run dev`).

## Structure

```
app/
├── layout.tsx        # Global layout (sticky glassmorphism nav, ThemeProvider)
├── page.tsx          # Dashboard
├── books/            # Book list and detail
├── libraries/        # Library management
├── jobs/             # Job monitoring
├── tokens/           # API tokens
├── settings/         # Settings
├── components/       # Domain components
│   ├── ui/           # Generic components (Button, Card, Badge, Icon, Input, ProgressBar, StatBox...)
│   ├── BookCard.tsx
│   ├── JobProgress.tsx
│   ├── JobsList.tsx
│   ├── LibraryForm.tsx
│   ├── FolderBrowser.tsx / FolderPicker.tsx
│   └── ...
└── globals.css       # CSS variables, Tailwind base
lib/
└── api.ts            # API client: DTO types + fetch functions against the Rust API
```

## API client (lib/api.ts)

All calls to the Rust API go through `lib/api.ts`. The DTO types are defined there:
- `LibraryDto`, `IndexJobDto`, `BookDto`, `TokenDto`, `FolderItem`

Add new endpoints and types in this file.

## UI components

Generic components live in `app/components/ui/`. Use them rather than raw HTML elements:

```tsx
import { Button, Card, Badge, Icon, Input, ProgressBar, StatBox } from "@/app/components/ui";
```

## Conventions

- **App Router**: every page is a Server Component by default. Use `"use client"` only where interactivity is needed.
- **Tailwind v4**: config in `postcss.config.js` + `tailwind.config.js`. CSS variables in `globals.css`.
- **Theme**: `ThemeProvider` + `ThemeToggle` for dark/light mode via `next-themes`.
- **Icons**: the `<Icon name="..." size="sm|md|lg" />` component in `ui/Icon.tsx`; no external icon library.
- **Navigation**: typed routes in `layout.tsx` (`"/" | "/books" | "/libraries" | "/jobs" | "/tokens" | "/settings"`).

## Commands

```bash
npm install
npm run dev    # http://localhost:7082
npm run build
npm run start  # production on http://localhost:7082
```

## Gotchas

- **Port 7082**: not the default Next.js port (3000). Set in the `package.json` scripts (`-p 7082`).
- **API_BASE_URL**: configured via env in production. For local dev the API must run on `http://localhost:7080`.
- **React 19 + Next.js 16**: use the new APIs (server actions, the `use()` hook) where available.
- **No global state management**: fetch directly from Server Components, or `useState`/`useEffect` in Client Components.
@@ -12,11 +12,11 @@ RUN npm run build
 FROM node:22-alpine AS runner
 WORKDIR /app
 ENV NODE_ENV=production
-ENV PORT=8082
+ENV PORT=7082
 ENV HOST=0.0.0.0
 RUN apk add --no-cache wget
 COPY --from=builder /app/.next/standalone ./
 COPY --from=builder /app/.next/static ./.next/static
 COPY --from=builder /app/public ./public
-EXPOSE 8082
+EXPOSE 7082
 CMD ["node", "server.js"]
apps/backoffice/app/api/books/[bookId]/convert/route.ts (new file)
@@ -0,0 +1,17 @@
import { NextRequest, NextResponse } from "next/server";
import { convertBook } from "@/lib/api";

export async function POST(
  _request: NextRequest,
  { params }: { params: Promise<{ bookId: string }> }
) {
  const { bookId } = await params;
  try {
    const data = await convertBook(bookId);
    return NextResponse.json(data);
  } catch (error) {
    const message = error instanceof Error ? error.message : "Failed to start conversion";
    const status = message.includes("409") ? 409 : 500;
    return NextResponse.json({ error: message }, { status });
  }
}
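The catch block above folds upstream failures into an HTTP status by inspecting the error message text. A self-contained sketch of that mapping (the `statusFromMessage` name is hypothetical; the route inlines this expression):

```typescript
// A message mentioning 409 (e.g. a conversion already in progress upstream)
// is surfaced as 409 Conflict; anything else becomes a generic 500.
function statusFromMessage(message: string): number {
  return message.includes("409") ? 409 : 500;
}
```

Matching on message text is fragile compared to a structured error type, but it works here because the API client embeds the upstream status code in the thrown message.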
@@ -1,35 +1,25 @@
 import { NextRequest, NextResponse } from "next/server";
+import { config } from "@/lib/api";
 
 export async function GET(
   request: NextRequest,
   { params }: { params: Promise<{ bookId: string; pageNum: string }> }
 ) {
   const { bookId, pageNum } = await params;
-  // Get the query params (format, width, quality)
+  try {
+    const { baseUrl, token } = config();
   const { searchParams } = new URL(request.url);
   const format = searchParams.get("format") || "webp";
   const width = searchParams.get("width") || "";
   const quality = searchParams.get("quality") || "";
 
-  // Build the URL to the backend API
-  const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
-  const apiUrl = new URL(`${apiBaseUrl}/books/${bookId}/pages/${pageNum}`);
+    const apiUrl = new URL(`${baseUrl}/books/${bookId}/pages/${pageNum}`);
   apiUrl.searchParams.set("format", format);
   if (width) apiUrl.searchParams.set("width", width);
   if (quality) apiUrl.searchParams.set("quality", quality);
 
-  // Make the request to the API
-  const token = process.env.API_BOOTSTRAP_TOKEN;
-  if (!token) {
-    return new NextResponse("API token not configured", { status: 500 });
-  }
-
-  try {
   const response = await fetch(apiUrl.toString(), {
-      headers: {
-        Authorization: `Bearer ${token}`,
-      },
+      headers: { Authorization: `Bearer ${token}` },
     });
 
     if (!response.ok) {
@@ -1,4 +1,5 @@
 import { NextRequest, NextResponse } from "next/server";
+import { config } from "@/lib/api";
 
 export async function GET(
   request: NextRequest,
@@ -6,19 +7,10 @@ export async function GET(
 ) {
   const { bookId } = await params;
 
-  const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
-  const apiUrl = `${apiBaseUrl}/books/${bookId}/thumbnail`;
-
-  const token = process.env.API_BOOTSTRAP_TOKEN;
-  if (!token) {
-    return new NextResponse("API token not configured", { status: 500 });
-  }
-
   try {
-    const response = await fetch(apiUrl, {
-      headers: {
-        Authorization: `Bearer ${token}`,
-      },
+    const { baseUrl, token } = config();
+    const response = await fetch(`${baseUrl}/books/${bookId}/thumbnail`, {
+      headers: { Authorization: `Bearer ${token}` },
     });
 
     if (!response.ok) {
@@ -1,39 +1,13 @@
 import { NextRequest, NextResponse } from "next/server";
+import { listFolders } from "@/lib/api";
 
 export async function GET(request: NextRequest) {
-  const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
-  const apiToken = process.env.API_BOOTSTRAP_TOKEN;
-
-  if (!apiToken) {
-    return NextResponse.json({ error: "API token not configured" }, { status: 500 });
-  }
-
   try {
     const { searchParams } = new URL(request.url);
-    const path = searchParams.get("path");
-
-    let apiUrl = `${apiBaseUrl}/folders`;
-    if (path) {
-      apiUrl += `?path=${encodeURIComponent(path)}`;
-    }
-
-    const response = await fetch(apiUrl, {
-      headers: {
-        Authorization: `Bearer ${apiToken}`,
-      },
-    });
-
-    if (!response.ok) {
-      return NextResponse.json(
-        { error: `API error: ${response.status}` },
-        { status: response.status }
-      );
-    }
-
-    const data = await response.json();
+    const path = searchParams.get("path") || undefined;
+    const data = await listFolders(path);
     return NextResponse.json(data);
   } catch (error) {
-    console.error("Proxy error:", error);
     return NextResponse.json({ error: "Failed to fetch folders" }, { status: 500 });
   }
 }
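The removed inline code shows the pattern the client helper now encapsulates: append the optional `path` as a percent-encoded query parameter. A pure sketch of that URL construction (`buildFolderUrl` is a hypothetical name for illustration):

```typescript
// Build the /folders URL, encoding the optional path so that spaces, slashes
// and other reserved characters survive the round trip to the backend.
function buildFolderUrl(base: string, path?: string): string {
  let url = `${base}/folders`;
  if (path) {
    url += `?path=${encodeURIComponent(path)}`;
  }
  return url;
}
```

`encodeURIComponent` also escapes `/`, which matters here since filesystem paths are passed as a single query value rather than as URL segments.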
@@ -1,36 +1,15 @@
 import { NextRequest, NextResponse } from "next/server";
+import { cancelJob } from "@/lib/api";
 
 export async function POST(
-  request: NextRequest,
+  _request: NextRequest,
   { params }: { params: Promise<{ id: string }> }
 ) {
   const { id } = await params;
-  const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
-  const apiToken = process.env.API_BOOTSTRAP_TOKEN;
-
-  if (!apiToken) {
-    return NextResponse.json({ error: "API token not configured" }, { status: 500 });
-  }
-
   try {
-    const response = await fetch(`${apiBaseUrl}/index/cancel/${id}`, {
-      method: "POST",
-      headers: {
-        Authorization: `Bearer ${apiToken}`,
-      },
-    });
-
-    if (!response.ok) {
-      return NextResponse.json(
-        { error: `API error: ${response.status}` },
-        { status: response.status }
-      );
-    }
-
-    const data = await response.json();
+    const data = await cancelJob(id);
     return NextResponse.json(data);
   } catch (error) {
-    console.error("Proxy error:", error);
     return NextResponse.json({ error: "Failed to cancel job" }, { status: 500 });
   }
 }
@@ -1,35 +1,15 @@
 import { NextRequest, NextResponse } from "next/server";
+import { apiFetch, IndexJobDto } from "@/lib/api";
 
 export async function GET(
-  request: NextRequest,
+  _request: NextRequest,
   { params }: { params: Promise<{ id: string }> }
 ) {
   const { id } = await params;
-  const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
-  const apiToken = process.env.API_BOOTSTRAP_TOKEN;
-
-  if (!apiToken) {
-    return NextResponse.json({ error: "API token not configured" }, { status: 500 });
-  }
-
   try {
-    const response = await fetch(`${apiBaseUrl}/index/jobs/${id}`, {
-      headers: {
-        Authorization: `Bearer ${apiToken}`,
-      },
-    });
-
-    if (!response.ok) {
-      return NextResponse.json(
-        { error: `API error: ${response.status}` },
-        { status: response.status }
-      );
-    }
-
-    const data = await response.json();
+    const data = await apiFetch<IndexJobDto>(`/index/jobs/${id}`);
     return NextResponse.json(data);
   } catch (error) {
-    console.error("Proxy error:", error);
     return NextResponse.json({ error: "Failed to fetch job" }, { status: 500 });
   }
 }
@@ -1,19 +1,12 @@
 import { NextRequest } from "next/server";
+import { config } from "@/lib/api";
 
 export async function GET(
   request: NextRequest,
   { params }: { params: Promise<{ id: string }> }
 ) {
   const { id } = await params;
-  const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
-  const apiToken = process.env.API_BOOTSTRAP_TOKEN;
-
-  if (!apiToken) {
-    return new Response(
-      `data: ${JSON.stringify({ error: "API token not configured" })}\n\n`,
-      { status: 500, headers: { "Content-Type": "text/event-stream" } }
-    );
-  }
+  const { baseUrl, token } = config();
 
   const stream = new ReadableStream({
     async start(controller) {
@@ -27,10 +20,8 @@ export async function GET(
       if (!isActive) return;
 
       try {
-        const response = await fetch(`${apiBaseUrl}/index/jobs/${id}`, {
-          headers: {
-            Authorization: `Bearer ${apiToken}`,
-          },
+        const response = await fetch(`${baseUrl}/index/jobs/${id}`, {
+          headers: { Authorization: `Bearer ${token}` },
         });
 
         if (response.ok && isActive) {
apps/backoffice/app/api/jobs/active/route.ts (new file)
@@ -0,0 +1,11 @@
import { NextResponse } from "next/server";
import { apiFetch, IndexJobDto } from "@/lib/api";

export async function GET() {
  try {
    const data = await apiFetch<IndexJobDto[]>("/index/jobs/active");
    return NextResponse.json(data);
  } catch (error) {
    return NextResponse.json({ error: "Failed to fetch active jobs" }, { status: 500 });
  }
}
@@ -1,31 +0,0 @@
|
|||||||
import { NextRequest, NextResponse } from "next/server";
|
|
||||||
|
|
||||||
export async function GET(request: NextRequest) {
|
|
||||||
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
|
|
||||||
const apiToken = process.env.API_BOOTSTRAP_TOKEN;
|
|
||||||
|
|
||||||
if (!apiToken) {
|
|
||||||
return NextResponse.json({ error: "API token not configured" }, { status: 500 });
|
|
||||||
}
|
|
||||||
|
|
||||||
try {
|
|
||||||
const response = await fetch(`${apiBaseUrl}/index/status`, {
|
|
||||||
headers: {
|
|
||||||
Authorization: `Bearer ${apiToken}`,
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
if (!response.ok) {
|
|
||||||
return NextResponse.json(
|
|
||||||
{ error: `API error: ${response.status}` },
|
|
||||||
{ status: response.status }
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
const data = await response.json();
|
|
||||||
return NextResponse.json(data);
|
|
||||||
} catch (error) {
|
|
||||||
console.error("Proxy error:", error);
|
|
||||||
return NextResponse.json({ error: "Failed to fetch jobs" }, { status: 500 });
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -1,15 +1,8 @@
 import { NextRequest } from "next/server";
+import { config } from "@/lib/api";
 
 export async function GET(request: NextRequest) {
-  const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
-  const apiToken = process.env.API_BOOTSTRAP_TOKEN;
-
-  if (!apiToken) {
-    return new Response(
-      `data: ${JSON.stringify({ error: "API token not configured" })}\n\n`,
-      { status: 500, headers: { "Content-Type": "text/event-stream" } }
-    );
-  }
+  const { baseUrl, token } = config();
 
   const stream = new ReadableStream({
     async start(controller) {
@@ -22,10 +15,8 @@ export async function GET(request: NextRequest) {
       if (!isActive) return;
 
       try {
-        const response = await fetch(`${apiBaseUrl}/index/status`, {
-          headers: {
-            Authorization: `Bearer ${apiToken}`,
-          },
+        const response = await fetch(`${baseUrl}/index/status`, {
+          headers: { Authorization: `Bearer ${token}` },
        });
 
         if (response.ok && isActive) {
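Both stream routes emit Server-Sent Events frames by hand, the same `data: <json>` shape followed by a blank line that appears in the removed error path. A minimal sketch of that framing (`sseFrame` is a hypothetical helper name; the routes inline the template literal):

```typescript
// An SSE event is a `data:` field with a JSON payload, terminated by an empty
// line ("\n\n") so the browser's EventSource knows the event is complete.
function sseFrame(payload: unknown): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}
```

Each poll of the backend can then be pushed to the client as `controller.enqueue(encoder.encode(sseFrame(job)))`, keeping the wire format in one place.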
apps/backoffice/app/api/libraries/[id]/monitoring/route.ts (new file)
@@ -0,0 +1,18 @@
import { NextRequest, NextResponse } from "next/server";
import { updateLibraryMonitoring } from "@/lib/api";

export async function PATCH(
  request: NextRequest,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params;
  try {
    const { monitor_enabled, scan_mode, watcher_enabled } = await request.json();
    const data = await updateLibraryMonitoring(id, monitor_enabled, scan_mode, watcher_enabled);
    return NextResponse.json(data);
  } catch (error) {
    const message = error instanceof Error ? error.message : "Failed to update monitoring settings";
    console.error("[monitoring PATCH]", message);
    return NextResponse.json({ error: message }, { status: 500 });
  }
}
@@ -1,29 +1,16 @@
 import { NextRequest, NextResponse } from "next/server";
+import { apiFetch, updateSetting } from "@/lib/api";
 
 export async function GET(
-  request: NextRequest,
+  _request: NextRequest,
   { params }: { params: Promise<{ key: string }> }
 ) {
-  try {
-    const { key } = await params;
-    const baseUrl = process.env.API_BASE_URL || "http://api:8080";
-    const token = process.env.API_BOOTSTRAP_TOKEN;
-
-    const response = await fetch(`${baseUrl}/settings/${key}`, {
-      headers: {
-        Authorization: `Bearer ${token}`,
-      },
-      cache: "no-store"
-    });
-
-    if (!response.ok) {
-      return NextResponse.json({ error: "Failed to fetch setting" }, { status: response.status });
-    }
-
-    const data = await response.json();
+  const { key } = await params;
+  try {
+    const data = await apiFetch<unknown>(`/settings/${key}`);
     return NextResponse.json(data);
   } catch (error) {
-    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
+    return NextResponse.json({ error: "Failed to fetch setting" }, { status: 500 });
   }
 }
 
@@ -31,29 +18,12 @@ export async function POST(
   request: NextRequest,
   { params }: { params: Promise<{ key: string }> }
 ) {
-  try {
-    const { key } = await params;
-    const baseUrl = process.env.API_BASE_URL || "http://api:8080";
-    const token = process.env.API_BOOTSTRAP_TOKEN;
-    const body = await request.json();
-
-    const response = await fetch(`${baseUrl}/settings/${key}`, {
-      method: "POST",
-      headers: {
-        Authorization: `Bearer ${token}`,
-        "Content-Type": "application/json",
-      },
-      body: JSON.stringify(body),
-      cache: "no-store"
-    });
-
-    if (!response.ok) {
-      return NextResponse.json({ error: "Failed to update setting" }, { status: response.status });
-    }
-
-    const data = await response.json();
+  const { key } = await params;
+  try {
+    const { value } = await request.json();
+    const data = await updateSetting(key, value);
     return NextResponse.json(data);
   } catch (error) {
-    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
+    return NextResponse.json({ error: "Failed to update setting" }, { status: 500 });
   }
 }
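The refactor across these routes replaces per-file `process.env` reads with a shared `config()` helper in `lib/api.ts`. A pure sketch of that resolution, using the defaults visible in the removed code; `configFrom` is a hypothetical name that takes the env map as a parameter so the logic is testable, whereas the real helper reads `process.env` directly:

```typescript
// Resolve the backend base URL and bearer token, falling back to the
// docker-compose service default seen in the old inline code.
function configFrom(env: Record<string, string | undefined>): { baseUrl: string; token: string } {
  return {
    baseUrl: env.API_BASE_URL || "http://api:8080",
    token: env.API_BOOTSTRAP_TOKEN || "",
  };
}
```

Centralizing this removes the repeated "API token not configured" guard from every route; a missing token now simply produces an unauthorized upstream response that the generic error handling reports.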
@@ -1,25 +1,11 @@
-import { NextRequest, NextResponse } from "next/server";
+import { NextResponse } from "next/server";
+import { clearCache } from "@/lib/api";
 
-export async function POST(request: NextRequest) {
+export async function POST() {
   try {
-    const baseUrl = process.env.API_BASE_URL || "http://api:8080";
-    const token = process.env.API_BOOTSTRAP_TOKEN;
-
-    const response = await fetch(`${baseUrl}/settings/cache/clear`, {
-      method: "POST",
-      headers: {
-        Authorization: `Bearer ${token}`,
-      },
-      cache: "no-store"
-    });
-
-    if (!response.ok) {
-      return NextResponse.json({ error: "Failed to clear cache" }, { status: response.status });
-    }
-
-    const data = await response.json();
+    const data = await clearCache();
     return NextResponse.json(data);
   } catch (error) {
-    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
+    return NextResponse.json({ error: "Failed to clear cache" }, { status: 500 });
   }
 }
@@ -1,24 +1,11 @@
-import { NextRequest, NextResponse } from "next/server";
+import { NextResponse } from "next/server";
+import { getCacheStats } from "@/lib/api";
 
-export async function GET(request: NextRequest) {
+export async function GET() {
   try {
-    const baseUrl = process.env.API_BASE_URL || "http://api:8080";
-    const token = process.env.API_BOOTSTRAP_TOKEN;
-
-    const response = await fetch(`${baseUrl}/settings/cache/stats`, {
-      headers: {
-        Authorization: `Bearer ${token}`,
-      },
-      cache: "no-store"
-    });
-
-    if (!response.ok) {
-      return NextResponse.json({ error: "Failed to fetch cache stats" }, { status: response.status });
-    }
-
-    const data = await response.json();
+    const data = await getCacheStats();
     return NextResponse.json(data);
   } catch (error) {
-    return NextResponse.json({ error: "Internal server error" }, { status: 500 });
+    return NextResponse.json({ error: "Failed to fetch cache stats" }, { status: 500 });
   }
 }
|
|||||||
import { NextRequest, NextResponse } from "next/server";
|
|
||||||
|
|
||||||
export async function GET(request: NextRequest) {
|
|
||||||
try {
|
|
||||||
const baseUrl = process.env.API_BASE_URL || "http://api:8080";
|
|
||||||
const token = process.env.API_BOOTSTRAP_TOKEN;
|
|
||||||
|
|
||||||
const response = await fetch(`${baseUrl}/settings`, {
|
|
||||||
headers: {
|
|
||||||
Authorization: `Bearer ${token}`,
|
|
||||||
},
|
|
||||||
cache: "no-store"
|
|
||||||
});
|
|
||||||
|
|
||||||
if (!response.ok) {
|
|
||||||
return NextResponse.json({ error: "Failed to fetch settings" }, { status: response.status });
|
|
||||||
}
|
|
||||||
|
|
||||||
const data = await response.json();
|
|
||||||
return NextResponse.json(data);
|
|
||||||
} catch (error) {
|
|
||||||
return NextResponse.json({ error: "Internal server error" }, { status: 500 });
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```diff
@@ -1,10 +1,43 @@
-import { fetchLibraries, getBookCoverUrl, BookDto, apiFetch } from "../../../lib/api";
+import { fetchLibraries, getBookCoverUrl, BookDto, apiFetch, ReadingStatus } from "../../../lib/api";
+import { BookPreview } from "../../components/BookPreview";
+import { ConvertButton } from "../../components/ConvertButton";
 import Image from "next/image";
 import Link from "next/link";
 import { notFound } from "next/navigation";
 
 export const dynamic = "force-dynamic";
 
+const readingStatusConfig: Record<ReadingStatus, { label: string; className: string }> = {
+  unread: { label: "Non lu", className: "bg-muted/60 text-muted-foreground border border-border" },
+  reading: { label: "En cours", className: "bg-amber-500/15 text-amber-600 dark:text-amber-400 border border-amber-500/30" },
+  read: { label: "Lu", className: "bg-green-500/15 text-green-600 dark:text-green-400 border border-green-500/30" },
+};
+
+function ReadingStatusBadge({
+  status,
+  currentPage,
+  lastReadAt,
+}: {
+  status: ReadingStatus;
+  currentPage: number | null;
+  lastReadAt: string | null;
+}) {
+  const { label, className } = readingStatusConfig[status];
+  return (
+    <div className="flex items-center gap-2">
+      <span className={`inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-semibold ${className}`}>
+        {label}
+        {status === "reading" && currentPage != null && ` · p. ${currentPage}`}
+      </span>
+      {lastReadAt && (
+        <span className="text-xs text-muted-foreground">
+          {new Date(lastReadAt).toLocaleDateString()}
+        </span>
+      )}
+    </div>
+  );
+}
+
 async function fetchBook(bookId: string): Promise<BookDto | null> {
   try {
     return await apiFetch<BookDto>(`/books/${bookId}`);
@@ -69,6 +102,17 @@ export default async function BookDetailPage({
       )}
 
       <div className="space-y-3">
+        {book.reading_status && (
+          <div className="flex items-center justify-between py-2 border-b border-border">
+            <span className="text-sm text-muted-foreground">Lecture :</span>
+            <ReadingStatusBadge
+              status={book.reading_status}
+              currentPage={book.reading_current_page ?? null}
+              lastReadAt={book.reading_last_read_at ?? null}
+            />
+          </div>
+        )}
+
         <div className="flex items-center justify-between py-2 border-b border-border">
           <span className="text-sm text-muted-foreground">Format:</span>
           <span className={`inline-flex px-2.5 py-1 rounded-full text-xs font-semibold ${
@@ -114,7 +158,10 @@ export default async function BookDetailPage({
         {book.file_format && (
           <div className="flex items-center justify-between py-2 border-b border-border">
             <span className="text-sm text-muted-foreground">File Format:</span>
+            <div className="flex items-center gap-3">
             <span className="text-sm text-foreground">{book.file_format.toUpperCase()}</span>
+              {book.file_format === "cbr" && <ConvertButton bookId={book.id} />}
+            </div>
           </div>
         )}
 
@@ -157,6 +204,12 @@ export default async function BookDetailPage({
         </div>
       </div>
     </div>
+
+      {book.page_count && book.page_count > 0 && (
+        <div className="mt-8">
+          <BookPreview bookId={book.id} pageCount={book.page_count} />
+        </div>
+      )}
     </>
   );
 }
```
```diff
@@ -1,7 +1,8 @@
-import { fetchBooks, searchBooks, fetchLibraries, BookDto, LibraryDto, getBookCoverUrl } from "../../lib/api";
+import { fetchBooks, searchBooks, fetchLibraries, BookDto, LibraryDto, SeriesHitDto, getBookCoverUrl } from "../../lib/api";
 import { BooksGrid, EmptyState } from "../components/BookCard";
-import { Card, CardContent, Button, FormField, FormInput, FormSelect, FormRow, CursorPagination } from "../components/ui";
+import { Card, CardContent, Button, FormField, FormInput, FormSelect, FormRow, OffsetPagination } from "../components/ui";
 import Link from "next/link";
+import Image from "next/image";
 
 export const dynamic = "force-dynamic";
 
@@ -13,7 +14,7 @@ export default async function BooksPage({
   const searchParamsAwaited = await searchParams;
   const libraryId = typeof searchParamsAwaited.library === "string" ? searchParamsAwaited.library : undefined;
   const searchQuery = typeof searchParamsAwaited.q === "string" ? searchParamsAwaited.q : "";
-  const cursor = typeof searchParamsAwaited.cursor === "string" ? searchParamsAwaited.cursor : undefined;
+  const page = typeof searchParamsAwaited.page === "string" ? parseInt(searchParamsAwaited.page) : 1;
   const limit = typeof searchParamsAwaited.limit === "string" ? parseInt(searchParamsAwaited.limit) : 20;
 
   const [libraries] = await Promise.all([
@@ -21,13 +22,15 @@ export default async function BooksPage({
   ]);
 
   let books: BookDto[] = [];
-  let nextCursor: string | null = null;
+  let total = 0;
   let searchResults: BookDto[] | null = null;
+  let seriesHits: SeriesHitDto[] = [];
   let totalHits: number | null = null;
 
   if (searchQuery) {
     const searchResponse = await searchBooks(searchQuery, libraryId, limit).catch(() => null);
     if (searchResponse) {
+      seriesHits = searchResponse.series_hits ?? [];
       searchResults = searchResponse.hits.map(hit => ({
         id: hit.id,
         library_id: hit.library_id,
@@ -41,18 +44,22 @@ export default async function BooksPage({
         file_path: null,
         file_format: null,
         file_parse_status: null,
-        updated_at: ""
+        updated_at: "",
+        reading_status: "unread" as const,
+        reading_current_page: null,
+        reading_last_read_at: null,
       }));
       totalHits = searchResponse.estimated_total_hits;
     }
   } else {
-    const booksPage = await fetchBooks(libraryId, undefined, cursor, limit).catch(() => ({
+    const booksPage = await fetchBooks(libraryId, undefined, page, limit).catch(() => ({
       items: [] as BookDto[],
-      next_cursor: null,
-      prev_cursor: null
+      total: 0,
+      page: 1,
+      limit,
     }));
     books = booksPage.items;
-    nextCursor = booksPage.next_cursor;
+    total = booksPage.total;
   }
 
   const displayBooks = (searchResults || books).map(book => ({
@@ -60,8 +67,7 @@ export default async function BooksPage({
     coverUrl: getBookCoverUrl(book.id)
   }));
 
-  const hasNextPage = !!nextCursor;
-  const hasPrevPage = !!cursor;
+  const totalPages = Math.ceil(total / limit);
 
   return (
     <>
@@ -136,18 +142,54 @@ export default async function BooksPage({
         </p>
       )}
 
+      {/* Séries matchantes */}
+      {seriesHits.length > 0 && (
+        <div className="mb-8">
+          <h2 className="text-lg font-semibold text-foreground mb-3">Series</h2>
+          <div className="grid grid-cols-2 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 gap-4">
+            {seriesHits.map((s) => (
+              <Link
+                key={`${s.library_id}-${s.name}`}
+                href={`/libraries/${s.library_id}/books?series=${encodeURIComponent(s.name)}`}
+                className="group"
+              >
+                <div className="bg-card rounded-xl shadow-sm border border-border/60 overflow-hidden hover:shadow-md transition-shadow duration-200">
+                  <div className="aspect-[2/3] relative bg-muted/50">
+                    <Image
+                      src={getBookCoverUrl(s.first_book_id)}
+                      alt={`Cover of ${s.name}`}
+                      fill
+                      className="object-cover"
+                      unoptimized
+                    />
+                  </div>
+                  <div className="p-2">
+                    <h3 className="font-medium text-foreground truncate text-sm" title={s.name}>
+                      {s.name === "unclassified" ? "Unclassified" : s.name}
+                    </h3>
+                    <p className="text-xs text-muted-foreground mt-0.5">
+                      {s.book_count} book{s.book_count !== 1 ? 's' : ''}
+                    </p>
+                  </div>
+                </div>
+              </Link>
+            ))}
+          </div>
+        </div>
+      )}
+
       {/* Grille de livres */}
       {displayBooks.length > 0 ? (
         <>
+          {searchQuery && <h2 className="text-lg font-semibold text-foreground mb-3">Books</h2>}
           <BooksGrid books={displayBooks} />
 
           {!searchQuery && (
-            <CursorPagination
-              hasNextPage={hasNextPage}
-              hasPrevPage={hasPrevPage}
+            <OffsetPagination
+              currentPage={page}
+              totalPages={totalPages}
               pageSize={limit}
-              currentCount={displayBooks.length}
-              nextCursor={nextCursor}
+              totalItems={total}
             />
          )}
        </>
```
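The cursor-to-offset switch in the diff above reduces to simple arithmetic. A minimal sketch of that math, reusing the `total`, `page`, and `limit` names from the diff (the `paginate` helper itself is hypothetical, not part of the codebase):

```typescript
// Offset pagination: derive the page count and slice window from totals.
// Mirrors `const totalPages = Math.ceil(total / limit)` in the diff above.
function paginate(total: number, page: number, limit: number) {
  const totalPages = Math.ceil(total / limit);
  const offset = (page - 1) * limit; // rows to skip before this page
  return { totalPages, offset, hasPrev: page > 1, hasNext: page < totalPages };
}
```

Unlike cursor pagination, any page becomes directly addressable from the URL (`?page=3`), at the cost of the backend having to compute a `total` count.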
```diff
@@ -3,14 +3,32 @@
 import { useState } from "react";
 import Image from "next/image";
 import Link from "next/link";
-import { BookDto } from "../../lib/api";
+import { BookDto, ReadingStatus } from "../../lib/api";
 
+const readingStatusOverlay: Record<ReadingStatus, { label: string; className: string } | null> = {
+  unread: null,
+  reading: { label: "En cours", className: "bg-amber-500/90 text-white" },
+  read: { label: "Lu", className: "bg-green-600/90 text-white" },
+};
+
 interface BookCardProps {
   book: BookDto & { coverUrl?: string };
+  readingStatus?: ReadingStatus;
 }
 
 function BookImage({ src, alt }: { src: string; alt: string }) {
   const [isLoaded, setIsLoaded] = useState(false);
+  const [hasError, setHasError] = useState(false);
+
+  if (hasError) {
+    return (
+      <div className="relative aspect-[2/3] overflow-hidden bg-muted flex items-center justify-center">
+        <svg className="w-10 h-10 text-muted-foreground/30" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+          <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={1.5} d="M12 6.253v13m0-13C10.832 5.477 9.246 5 7.5 5S4.168 5.477 3 6.253v13C4.168 18.477 5.754 18 7.5 18s3.332.477 4.5 1.253m0-13C13.168 5.477 14.754 5 16.5 5c1.747 0 3.332.477 4.5 1.253v13C19.832 18.477 18.247 18 16.5 18c-1.746 0-3.332.477-4.5 1.253" />
+        </svg>
+      </div>
+    );
+  }
 
   return (
     <div className="relative aspect-[2/3] overflow-hidden bg-muted">
@@ -31,24 +49,34 @@ function BookImage({ src, alt }: { src: string; alt: string }) {
         }`}
         sizes="(max-width: 640px) 50vw, (max-width: 768px) 33vw, (max-width: 1024px) 25vw, 16vw"
         onLoad={() => setIsLoaded(true)}
+        onError={() => setHasError(true)}
         unoptimized
       />
     </div>
   );
 }
 
-export function BookCard({ book }: BookCardProps) {
+export function BookCard({ book, readingStatus }: BookCardProps) {
   const coverUrl = book.coverUrl || `/api/books/${book.id}/thumbnail`;
+  const status = readingStatus ?? book.reading_status;
+  const overlay = status ? readingStatusOverlay[status] : null;
 
   return (
     <Link
      href={`/books/${book.id}`}
      className="group block bg-card rounded-xl border border-border/60 shadow-sm hover:shadow-md hover:-translate-y-1 transition-all duration-200 overflow-hidden"
    >
+      <div className="relative">
       <BookImage
         src={coverUrl}
         alt={`Cover of ${book.title}`}
       />
+      {overlay && (
+        <span className={`absolute bottom-2 left-2 px-2 py-0.5 rounded-full text-[10px] font-bold tracking-wide ${overlay.className}`}>
+          {overlay.label}
+        </span>
+      )}
+      </div>
 
       {/* Book Info */}
       <div className="p-4">
```
apps/backoffice/app/components/BookPreview.tsx (new file, +60)

```diff
@@ -0,0 +1,60 @@
+"use client";
+
+import { useState } from "react";
+import Image from "next/image";
+
+const PAGE_SIZE = 5;
+
+export function BookPreview({ bookId, pageCount }: { bookId: string; pageCount: number }) {
+  const [offset, setOffset] = useState(0);
+
+  const pages = Array.from({ length: PAGE_SIZE }, (_, i) => offset + i + 1).filter(
+    (p) => p <= pageCount
+  );
+
+  return (
+    <div className="bg-card rounded-xl border border-border p-6">
+      <div className="flex items-center justify-between mb-4">
+        <h2 className="text-lg font-semibold text-foreground">
+          Preview
+          <span className="ml-2 text-sm font-normal text-muted-foreground">
+            pages {offset + 1}–{Math.min(offset + PAGE_SIZE, pageCount)} / {pageCount}
+          </span>
+        </h2>
+        <div className="flex gap-2">
+          <button
+            onClick={() => setOffset((o) => Math.max(0, o - PAGE_SIZE))}
+            disabled={offset === 0}
+            className="px-3 py-1.5 text-sm rounded-lg border border-border bg-muted/50 text-foreground hover:bg-muted disabled:opacity-40 disabled:cursor-not-allowed transition-colors"
+          >
+            ← Prev
+          </button>
+          <button
+            onClick={() => setOffset((o) => Math.min(o + PAGE_SIZE, pageCount - 1))}
+            disabled={offset + PAGE_SIZE >= pageCount}
+            className="px-3 py-1.5 text-sm rounded-lg border border-border bg-muted/50 text-foreground hover:bg-muted disabled:opacity-40 disabled:cursor-not-allowed transition-colors"
+          >
+            Next →
+          </button>
+        </div>
+      </div>
+
+      <div className="grid grid-cols-5 gap-3">
+        {pages.map((pageNum) => (
+          <div key={pageNum} className="flex flex-col items-center gap-1.5">
+            <div className="relative w-full aspect-[2/3] bg-muted rounded-lg overflow-hidden border border-border">
+              <Image
+                src={`/api/books/${bookId}/pages/${pageNum}?format=webp&width=600&quality=80`}
+                alt={`Page ${pageNum}`}
+                fill
+                className="object-contain"
+                unoptimized
+              />
+            </div>
+            <span className="text-xs text-muted-foreground">{pageNum}</span>
+          </div>
+        ))}
+      </div>
+    </div>
+  );
+}
```
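The heart of the new `BookPreview` component is the five-page window it derives from `offset` and `pageCount`. Isolated as a standalone function (hypothetical helper, same expressions as the component):

```typescript
const PAGE_SIZE = 5;

// 1-based page numbers for the current window, clamped to the book length.
// Same Array.from(...).filter(...) expression used in BookPreview above.
function visiblePages(offset: number, pageCount: number): number[] {
  return Array.from({ length: PAGE_SIZE }, (_, i) => offset + i + 1).filter(
    (p) => p <= pageCount
  );
}
```

The filter is what makes the last window shrink instead of showing blank slots when fewer than five pages remain.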
apps/backoffice/app/components/ConvertButton.tsx (new file, +71)

```diff
@@ -0,0 +1,71 @@
+"use client";
+
+import { useState } from "react";
+import Link from "next/link";
+import { Button } from "./ui";
+
+interface ConvertButtonProps {
+  bookId: string;
+}
+
+type ConvertState =
+  | { type: "idle" }
+  | { type: "loading" }
+  | { type: "success"; jobId: string }
+  | { type: "error"; message: string };
+
+export function ConvertButton({ bookId }: ConvertButtonProps) {
+  const [state, setState] = useState<ConvertState>({ type: "idle" });
+
+  const handleConvert = async () => {
+    setState({ type: "loading" });
+    try {
+      const res = await fetch(`/api/books/${bookId}/convert`, { method: "POST" });
+      if (!res.ok) {
+        const body = await res.json().catch(() => ({ error: res.statusText }));
+        setState({ type: "error", message: body.error || "Conversion failed" });
+        return;
+      }
+      const job = await res.json();
+      setState({ type: "success", jobId: job.id });
+    } catch (err) {
+      setState({ type: "error", message: err instanceof Error ? err.message : "Unknown error" });
+    }
+  };
+
+  if (state.type === "success") {
+    return (
+      <div className="flex items-center gap-2 text-sm text-success">
+        <span>Conversion started.</span>
+        <Link href={`/jobs/${state.jobId}`} className="text-primary hover:underline font-medium">
+          View job →
+        </Link>
+      </div>
+    );
+  }
+
+  if (state.type === "error") {
+    return (
+      <div className="flex flex-col gap-1">
+        <span className="text-sm text-destructive">{state.message}</span>
+        <button
+          className="text-xs text-muted-foreground hover:underline text-left"
+          onClick={() => setState({ type: "idle" })}
+        >
+          Dismiss
+        </button>
+      </div>
+    );
+  }
+
+  return (
+    <Button
+      variant="secondary"
+      size="sm"
+      onClick={handleConvert}
+      disabled={state.type === "loading"}
+    >
+      {state.type === "loading" ? "Converting…" : "Convert to CBZ"}
+    </Button>
+  );
+}
```
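`ConvertButton` models its lifecycle as a discriminated union, which lets TypeScript narrow the payload per state. The same pattern in isolation (the `describe` helper is illustrative, not from the file):

```typescript
type ConvertState =
  | { type: "idle" }
  | { type: "loading" }
  | { type: "success"; jobId: string }
  | { type: "error"; message: string };

// Switching on the `type` tag narrows the union, so `jobId` and
// `message` are only accessible in the branches that carry them.
function describe(state: ConvertState): string {
  switch (state.type) {
    case "idle":
      return "Convert to CBZ";
    case "loading":
      return "Converting…";
    case "success":
      return `Conversion started (job ${state.jobId})`;
    case "error":
      return `Error: ${state.message}`;
  }
}
```

Because every variant shares the literal `type` field, impossible states such as "success with an error message" simply cannot be constructed.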
```diff
@@ -3,7 +3,7 @@
 import { useState } from "react";
 import Link from "next/link";
 import { JobProgress } from "./JobProgress";
-import { StatusBadge, Button, MiniProgressBar } from "./ui";
+import { StatusBadge, JobTypeBadge, Button, MiniProgressBar } from "./ui";
 
 interface JobRowProps {
   job: {
@@ -93,7 +93,9 @@ export function JobRow({ job, libraryName, highlighted, onCancel, formatDate, fo
       <td className="px-4 py-3 text-sm text-foreground">
         {job.library_id ? libraryName || job.library_id.slice(0, 8) : "—"}
       </td>
-      <td className="px-4 py-3 text-sm text-foreground">{job.type}</td>
+      <td className="px-4 py-3">
+        <JobTypeBadge type={job.type} />
+      </td>
       <td className="px-4 py-3">
         <div className="flex items-center gap-2 flex-wrap">
           <StatusBadge status={job.status} />
```
```diff
@@ -146,10 +146,22 @@ export function JobsIndicator() {
         />
       </button>
 
+      {/* Backdrop mobile */}
+      {isOpen && (
+        <div
+          className="fixed inset-0 z-40 sm:hidden bg-background/60 backdrop-blur-sm"
+          onClick={() => setIsOpen(false)}
+          aria-hidden="true"
+        />
+      )}
+
       {/* Popin/Dropdown with glassmorphism */}
       {isOpen && (
         <div className="
-          absolute right-0 top-full mt-2 w-96
+          fixed sm:absolute
+          inset-x-3 sm:inset-x-auto
+          top-[4.5rem] sm:top-full sm:mt-2
+          sm:w-96
           bg-popover/95 backdrop-blur-md
           rounded-xl
           shadow-elevation-2
```
```diff
@@ -20,6 +20,7 @@ export function LibraryActions({
 }: LibraryActionsProps) {
   const [isOpen, setIsOpen] = useState(false);
   const [isPending, startTransition] = useTransition();
+  const [saveError, setSaveError] = useState<string | null>(null);
   const dropdownRef = useRef<HTMLDivElement>(null);
 
   useEffect(() => {
@@ -33,6 +34,7 @@ export function LibraryActions({
   }, []);
 
   const handleSubmit = (formData: FormData) => {
+    setSaveError(null);
     startTransition(async () => {
       const monitorEnabled = formData.get("monitor_enabled") === "true";
       const watcherEnabled = formData.get("watcher_enabled") === "true";
@@ -53,12 +55,15 @@ export function LibraryActions({
           setIsOpen(false);
           window.location.reload();
         } else {
-          console.error("Failed to save settings:", response.statusText);
-          alert("Failed to save settings. Please try again.");
+          const body = await response.json().catch(() => ({}));
+          const msg = body?.error || `HTTP ${response.status}`;
+          console.error("Failed to save settings:", msg);
+          setSaveError(msg);
         }
       } catch (error) {
-        console.error("Failed to save settings:", error);
-        alert("Failed to save settings. Please try again.");
+        const msg = error instanceof Error ? error.message : "Network error";
+        console.error("Failed to save settings:", msg);
+        setSaveError(msg);
       }
     });
   };
@@ -121,6 +126,12 @@ export function LibraryActions({
             </select>
           </div>
 
+          {saveError && (
+            <p className="text-xs text-destructive bg-destructive/10 px-2 py-1.5 rounded-lg break-all">
+              {saveError}
+            </p>
+          )}
+
           <Button
             type="submit"
             size="sm"
```
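The error-handling change in `LibraryActions` derives a message from the response body, with an HTTP-status fallback. That derivation as a sketch (synchronous form for clarity; the real code first awaits `response.json()`, and the function name is hypothetical):

```typescript
// Prefer a server-provided `error` field; fall back to the HTTP status.
function saveErrorMessage(status: number, body: unknown): string {
  const err = (body as { error?: string } | null)?.error;
  return err || `HTTP ${status}`;
}
```

Surfacing the message in state (`setSaveError`) instead of `alert()` keeps the error inside the dropdown and lets it clear on the next submit.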
apps/backoffice/app/components/MobileNav.tsx (new file, +93)

```diff
@@ -0,0 +1,93 @@
+"use client";
+
+import { useState, useEffect } from "react";
+import { createPortal } from "react-dom";
+import Link from "next/link";
+import { NavIcon } from "./ui";
+
+type NavItem = {
+  href: "/" | "/books" | "/libraries" | "/jobs" | "/tokens" | "/settings";
+  label: string;
+  icon: "dashboard" | "books" | "libraries" | "jobs" | "tokens" | "settings";
+};
+
+const HamburgerIcon = () => (
+  <svg viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth={2} className="w-5 h-5">
+    <path d="M3 6h18M3 12h18M3 18h18" strokeLinecap="round" />
+  </svg>
+);
+
+const XIcon = () => (
+  <svg viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth={2} className="w-5 h-5">
+    <path d="M18 6L6 18M6 6l12 12" strokeLinecap="round" />
+  </svg>
+);
+
+export function MobileNav({ navItems }: { navItems: NavItem[] }) {
+  const [isOpen, setIsOpen] = useState(false);
+  const [mounted, setMounted] = useState(false);
+
+  useEffect(() => {
+    setMounted(true);
+  }, []);
+
+  const overlay = (
+    <>
+      {/* Backdrop */}
+      <div
+        className={`fixed inset-0 z-[60] bg-background/80 backdrop-blur-sm md:hidden transition-opacity duration-300 ${isOpen ? "opacity-100" : "opacity-0 pointer-events-none"}`}
+        onClick={() => setIsOpen(false)}
+        aria-hidden="true"
+      />
+
+      {/* Drawer */}
+      <div
+        className={`
+          fixed inset-y-0 left-0 z-[70] w-64
+          bg-background/95 backdrop-blur-xl
+          border-r border-border/60
+          flex flex-col
+          transform transition-transform duration-300 ease-in-out
+          md:hidden
+          ${isOpen ? "translate-x-0" : "-translate-x-full"}
+        `}
+      >
+        <div className="h-16 border-b border-border/40 flex items-center px-4">
+          <span className="text-sm font-semibold text-muted-foreground tracking-wide uppercase">Navigation</span>
+        </div>
+
+        <nav className="flex flex-col gap-1 p-3 flex-1">
+          {navItems.map((item) => (
+            <Link
+              key={item.href}
+              href={item.href}
+              className="flex items-center gap-3 px-3 py-3 rounded-lg text-muted-foreground hover:text-foreground hover:bg-accent transition-colors duration-200 active:scale-[0.98]"
+              onClick={() => setIsOpen(false)}
+            >
+              <NavIcon name={item.icon} />
+              <span className="font-medium">{item.label}</span>
+            </Link>
+          ))}
+        </nav>
+      </div>
+    </>
+  );
+
+  return (
+    <>
+      {/* Hamburger button — reste dans le header */}
+      <button
+        className="md:hidden p-2 rounded-lg text-muted-foreground hover:text-foreground hover:bg-accent transition-colors"
+        onClick={() => setIsOpen(!isOpen)}
+        aria-label={isOpen ? "Close menu" : "Open menu"}
+        aria-expanded={isOpen}
+      >
+        {isOpen ? <XIcon /> : <HamburgerIcon />}
+      </button>
+
+      {/* Backdrop + Drawer portés directement sur document.body,
+         hors du header et de son backdrop-filter */}
+      {mounted && createPortal(overlay, document.body)}
+    </>
+  );
+}
```
```diff
@@ -94,8 +94,11 @@ const jobTypeVariants: Record<string, BadgeVariant> = {
 };
 
 const jobTypeLabels: Record<string, string> = {
+  rebuild: "Index",
+  full_rebuild: "Full Index",
   thumbnail_rebuild: "Thumbnails",
-  thumbnail_regenerate: "Regenerate",
+  thumbnail_regenerate: "Regen. Thumbnails",
+  cbr_to_cbz: "CBR → CBZ",
 };
 
 interface JobTypeBadgeProps {
```
```diff
@@ -248,6 +248,29 @@ body::after {
   overflow: hidden;
 }
 
+/* Reading progress badge variants */
+.badge-unread {
+  background: hsl(var(--color-muted) / 0.6);
+  color: hsl(var(--color-muted-foreground));
+  border-color: hsl(var(--color-border));
+}
+
+.badge-in-progress {
+  background: hsl(38 92% 50% / 0.15);
+  color: hsl(38 92% 40%);
+  border-color: hsl(38 92% 50% / 0.3);
+}
+
+.dark .badge-in-progress {
+  color: hsl(38 92% 65%);
+}
+
+.badge-completed {
+  background: hsl(var(--color-success) / 0.15);
+  color: hsl(var(--color-success));
+  border-color: hsl(var(--color-success) / 0.3);
+}
+
 /* Hide scrollbar */
 .scrollbar-hide {
   -ms-overflow-style: none;
```
|||||||
@@ -13,11 +13,13 @@ interface JobDetailPageProps {
 interface JobDetails {
   id: string;
   library_id: string | null;
+  book_id: string | null;
   type: string;
   status: string;
   created_at: string;
   started_at: string | null;
   finished_at: string | null;
+  phase2_started_at: string | null;
   current_file: string | null;
   progress_percent: number | null;
   processed_files: number | null;
@@ -38,6 +40,34 @@ interface JobError {
   created_at: string;
 }
 
+const JOB_TYPE_INFO: Record<string, { label: string; description: string; isThumbnailOnly: boolean }> = {
+  rebuild: {
+    label: "Incremental index",
+    description: "Scans for new/modified files, analyzes them and generates missing thumbnails.",
+    isThumbnailOnly: false,
+  },
+  full_rebuild: {
+    label: "Full re-index",
+    description: "Clears all existing data then performs a complete re-scan, re-analysis and thumbnail generation.",
+    isThumbnailOnly: false,
+  },
+  thumbnail_rebuild: {
+    label: "Thumbnail rebuild",
+    description: "Generates thumbnails only for books that are missing one. Existing thumbnails are preserved.",
+    isThumbnailOnly: true,
+  },
+  thumbnail_regenerate: {
+    label: "Thumbnail regeneration",
+    description: "Regenerates all thumbnails from scratch, replacing existing ones.",
+    isThumbnailOnly: true,
+  },
+  cbr_to_cbz: {
+    label: "CBR → CBZ conversion",
+    description: "Converts a CBR archive to the open CBZ format.",
+    isThumbnailOnly: false,
+  },
+};
+
 async function getJobDetails(jobId: string): Promise<JobDetails | null> {
   try {
     return await apiFetch<JobDetails>(`/index/jobs/${jobId}`);
@@ -64,10 +94,9 @@ function formatDuration(start: string, end: string | null): string {
   return `${Math.floor(diff / 3600000)}h ${Math.floor((diff % 3600000) / 60000)}m`;
 }
 
-function formatSpeed(stats: { scanned_files: number } | null, duration: number): string {
-  if (!stats || duration === 0) return "-";
-  const filesPerSecond = stats.scanned_files / (duration / 1000);
-  return `${filesPerSecond.toFixed(1)} f/s`;
+function formatSpeed(count: number, durationMs: number): string {
+  if (durationMs === 0 || count === 0) return "-";
+  return `${(count / (durationMs / 1000)).toFixed(1)}/s`;
 }
 
 export default async function JobDetailPage({ params }: JobDetailPageProps) {
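The rewritten `formatSpeed` above is now metric-agnostic: it takes a raw count and a duration in milliseconds instead of the whole stats object, so the same helper can report scan rate or thumbnail rate. A standalone sketch of that behavior (logic copied from the hunk, used here outside its page component):

```typescript
// Standalone copy of the post-hunk formatSpeed helper: a generic
// "items per second" formatter that returns "-" when no rate can be
// computed (zero duration or zero items).
function formatSpeed(count: number, durationMs: number): string {
  if (durationMs === 0 || count === 0) return "-";
  return `${(count / (durationMs / 1000)).toFixed(1)}/s`;
}

console.log(formatSpeed(120, 60000)); // 120 items in 60 s → "2.0/s"
console.log(formatSpeed(0, 1000));    // → "-"
console.log(formatSpeed(5, 0));       // → "-"
```

Dropping the unit from the return value (`/s` instead of `f/s`) lets each caller append its own label, as the statistics cards below do with "scan rate" and "thumbnails/s".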
@@ -81,10 +110,44 @@ export default async function JobDetailPage({ params }: JobDetailPageProps) {
     notFound();
   }
 
-  const duration = job.started_at
+  const typeInfo = JOB_TYPE_INFO[job.type] ?? {
+    label: job.type,
+    description: null,
+    isThumbnailOnly: false,
+  };
+
+  const durationMs = job.started_at
     ? new Date(job.finished_at || new Date()).getTime() - new Date(job.started_at).getTime()
     : 0;
 
+  const isCompleted = job.status === "success";
+  const isFailed = job.status === "failed";
+  const isCancelled = job.status === "cancelled";
+  const isThumbnailPhase = job.status === "generating_thumbnails";
+  const { isThumbnailOnly } = typeInfo;
+
+  // Which label to use for the progress card
+  const progressTitle = isThumbnailOnly
+    ? "Thumbnails"
+    : isThumbnailPhase
+      ? "Phase 2 — Thumbnails"
+      : "Phase 1 — Discovery";
+
+  const progressDescription = isThumbnailOnly
+    ? undefined
+    : isThumbnailPhase
+      ? "Generating thumbnails for the analyzed books"
+      : "Scanning and indexing files in the library";
+
+  // Speed metric: thumbnail count for thumbnail jobs, scanned files for index jobs
+  const speedCount = isThumbnailOnly
+    ? (job.processed_files ?? 0)
+    : (job.stats_json?.scanned_files ?? 0);
+
+  const showProgressCard =
+    (isCompleted || isFailed || job.status === "running" || isThumbnailPhase) &&
+    (job.total_files != null || !!job.current_file);
+
   return (
     <>
       <div className="mb-6">
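The `JOB_TYPE_INFO[job.type] ?? {...}` lookup in the hunk above makes unknown job types degrade gracefully: the raw type string becomes the label and the thumbnail-only flag defaults to false. A minimal sketch of that fallback pattern (only two map entries reproduced; `resolveTypeInfo` is a hypothetical wrapper added here for illustration, not a function in the diff):

```typescript
// Sketch of the nullish-coalescing fallback used for JOB_TYPE_INFO:
// unknown job types get a generic entry whose label is the raw type string.
interface TypeInfo {
  label: string;
  description: string | null;
  isThumbnailOnly: boolean;
}

const JOB_TYPE_INFO: Record<string, TypeInfo> = {
  rebuild: { label: "Incremental index", description: "Scans for new/modified files.", isThumbnailOnly: false },
  thumbnail_rebuild: { label: "Thumbnail rebuild", description: "Generates missing thumbnails.", isThumbnailOnly: true },
};

function resolveTypeInfo(type: string): TypeInfo {
  return JOB_TYPE_INFO[type] ?? { label: type, description: null, isThumbnailOnly: false };
}

console.log(resolveTypeInfo("rebuild").label);     // "Incremental index"
console.log(resolveTypeInfo("mystery_job").label); // "mystery_job"
```

Because the fallback is computed once at the top of the component, every later consumer (`progressTitle`, the Overview card, the timeline) can destructure `isThumbnailOnly` without re-checking whether the type was recognized.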
@@ -100,11 +163,72 @@ export default async function JobDetailPage({ params }: JobDetailPageProps) {
         <h1 className="text-3xl font-bold text-foreground mt-2">Job Details</h1>
       </div>
 
+      {/* Summary banner — completed */}
+      {isCompleted && job.started_at && (
+        <div className="mb-6 p-4 rounded-xl bg-success/10 border border-success/30 flex items-start gap-3">
+          <svg className="w-5 h-5 text-success mt-0.5 shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+            <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" />
+          </svg>
+          <div className="text-sm text-success">
+            <span className="font-semibold">Completed in {formatDuration(job.started_at, job.finished_at)}</span>
+            {job.stats_json && (
+              <span className="ml-2 text-success/80">
+                — {job.stats_json.scanned_files} scanned, {job.stats_json.indexed_files} indexed
+                {job.stats_json.removed_files > 0 && `, ${job.stats_json.removed_files} removed`}
+                {job.stats_json.errors > 0 && `, ${job.stats_json.errors} errors`}
+                {job.total_files != null && job.total_files > 0 && `, ${job.total_files} thumbnails`}
+              </span>
+            )}
+            {!job.stats_json && isThumbnailOnly && job.total_files != null && (
+              <span className="ml-2 text-success/80">
+                — {job.processed_files ?? job.total_files} thumbnails generated
+              </span>
+            )}
+          </div>
+        </div>
+      )}
+
+      {/* Summary banner — failed */}
+      {isFailed && (
+        <div className="mb-6 p-4 rounded-xl bg-destructive/10 border border-destructive/30 flex items-start gap-3">
+          <svg className="w-5 h-5 text-destructive mt-0.5 shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+            <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 8v4m0 4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
+          </svg>
+          <div className="text-sm text-destructive">
+            <span className="font-semibold">Job failed</span>
+            {job.started_at && (
+              <span className="ml-2 text-destructive/80">after {formatDuration(job.started_at, job.finished_at)}</span>
+            )}
+            {job.error_opt && (
+              <p className="mt-1 text-destructive/70 font-mono text-xs break-all">{job.error_opt}</p>
+            )}
+          </div>
+        </div>
+      )}
+
+      {/* Summary banner — cancelled */}
+      {isCancelled && (
+        <div className="mb-6 p-4 rounded-xl bg-muted border border-border flex items-start gap-3">
+          <svg className="w-5 h-5 text-muted-foreground mt-0.5 shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+            <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M18.364 18.364A9 9 0 005.636 5.636m12.728 12.728A9 9 0 015.636 5.636m12.728 12.728L5.636 5.636" />
+          </svg>
+          <span className="text-sm text-muted-foreground">
+            <span className="font-semibold">Cancelled</span>
+            {job.started_at && (
+              <span className="ml-2">after {formatDuration(job.started_at, job.finished_at)}</span>
+            )}
+          </span>
+        </div>
+      )}
+
       <div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
         {/* Overview Card */}
         <Card>
           <CardHeader>
             <CardTitle>Overview</CardTitle>
+            {typeInfo.description && (
+              <CardDescription>{typeInfo.description}</CardDescription>
+            )}
           </CardHeader>
           <CardContent className="space-y-3">
             <div className="flex items-center justify-between py-2 border-b border-border/60">
@@ -113,16 +237,38 @@ export default async function JobDetailPage({ params }: JobDetailPageProps) {
             </div>
             <div className="flex items-center justify-between py-2 border-b border-border/60">
               <span className="text-sm text-muted-foreground">Type</span>
+              <div className="flex items-center gap-2">
                 <JobTypeBadge type={job.type} />
+                <span className="text-sm text-muted-foreground">{typeInfo.label}</span>
+              </div>
             </div>
             <div className="flex items-center justify-between py-2 border-b border-border/60">
               <span className="text-sm text-muted-foreground">Status</span>
               <StatusBadge status={job.status} />
             </div>
-            <div className="flex items-center justify-between py-2">
+            <div className={`flex items-center justify-between py-2 ${(job.book_id || job.started_at) ? "border-b border-border/60" : ""}`}>
               <span className="text-sm text-muted-foreground">Library</span>
               <span className="text-sm text-foreground">{job.library_id || "All libraries"}</span>
             </div>
+            {job.book_id && (
+              <div className={`flex items-center justify-between py-2 ${job.started_at ? "border-b border-border/60" : ""}`}>
+                <span className="text-sm text-muted-foreground">Book</span>
+                <Link
+                  href={`/books/${job.book_id}`}
+                  className="text-sm text-primary hover:text-primary/80 font-mono hover:underline"
+                >
+                  {job.book_id.slice(0, 8)}…
+                </Link>
+              </div>
+            )}
+            {job.started_at && (
+              <div className="flex items-center justify-between py-2">
+                <span className="text-sm text-muted-foreground">Duration</span>
+                <span className="text-sm font-semibold text-foreground">
+                  {formatDuration(job.started_at, job.finished_at)}
+                </span>
+              </div>
+            )}
           </CardContent>
         </Card>
 
@@ -131,101 +277,194 @@ export default async function JobDetailPage({ params }: JobDetailPageProps) {
           <CardHeader>
             <CardTitle>Timeline</CardTitle>
           </CardHeader>
-          <CardContent className="space-y-4">
+          <CardContent>
+            <div className="relative">
+              {/* Vertical line */}
+              <div className="absolute left-[7px] top-2 bottom-2 w-px bg-border" />
+
+              <div className="space-y-5">
+                {/* Created */}
                 <div className="flex items-start gap-4">
-                  <div className={`w-2 h-2 rounded-full mt-2 ${job.created_at ? 'bg-success' : 'bg-muted'}`} />
-                  <div className="flex-1">
+                  <div className="w-3.5 h-3.5 rounded-full mt-0.5 bg-muted border-2 border-border shrink-0 z-10" />
+                  <div className="flex-1 min-w-0">
                     <span className="text-sm font-medium text-foreground">Created</span>
-                    <p className="text-sm text-muted-foreground">{new Date(job.created_at).toLocaleString()}</p>
+                    <p className="text-xs text-muted-foreground">{new Date(job.created_at).toLocaleString()}</p>
                   </div>
                 </div>
+
+                {/* Phase 1 start — for index jobs that have two phases */}
+                {job.started_at && job.phase2_started_at && (
                   <div className="flex items-start gap-4">
-                  <div className={`w-2 h-2 rounded-full mt-2 ${job.started_at ? 'bg-success' : job.created_at ? 'bg-warning' : 'bg-muted'}`} />
-                  <div className="flex-1">
-                    <span className="text-sm font-medium text-foreground">Started</span>
-                    <p className="text-sm text-muted-foreground">
-                      {job.started_at ? new Date(job.started_at).toLocaleString() : "Pending..."}
+                    <div className="w-3.5 h-3.5 rounded-full mt-0.5 bg-primary shrink-0 z-10" />
+                    <div className="flex-1 min-w-0">
+                      <span className="text-sm font-medium text-foreground">Phase 1 — Discovery</span>
+                      <p className="text-xs text-muted-foreground">{new Date(job.started_at).toLocaleString()}</p>
+                      <p className="text-xs text-primary/80 font-medium mt-0.5">
+                        Duration: {formatDuration(job.started_at, job.phase2_started_at)}
+                        {job.stats_json && (
+                          <span className="text-muted-foreground font-normal ml-1">
+                            · {job.stats_json.scanned_files} scanned, {job.stats_json.indexed_files} indexed
+                            {job.stats_json.removed_files > 0 && `, ${job.stats_json.removed_files} removed`}
+                          </span>
+                        )}
                       </p>
                     </div>
                   </div>
-                <div className="flex items-start gap-4">
-                  <div className={`w-2 h-2 rounded-full mt-2 ${job.finished_at ? 'bg-success' : job.started_at ? 'bg-primary animate-pulse' : 'bg-muted'}`} />
-                  <div className="flex-1">
-                    <span className="text-sm font-medium text-foreground">Finished</span>
-                    <p className="text-sm text-muted-foreground">
-                      {job.finished_at
-                        ? new Date(job.finished_at).toLocaleString()
-                        : job.started_at
-                          ? "Running..."
-                          : "Waiting..."
-                      }
-                    </p>
-                  </div>
-                </div>
-                {job.started_at && (
-                  <div className="mt-4 inline-flex items-center px-3 py-1.5 bg-primary/10 text-primary rounded-lg text-sm font-medium">
-                    Duration: {formatDuration(job.started_at, job.finished_at)}
-                  </div>
                 )}
+
+                {/* Phase 2 start — for index jobs that have two phases */}
+                {job.phase2_started_at && (
+                  <div className="flex items-start gap-4">
+                    <div className={`w-3.5 h-3.5 rounded-full mt-0.5 shrink-0 z-10 ${
+                      job.finished_at ? "bg-primary" : "bg-primary animate-pulse"
+                    }`} />
+                    <div className="flex-1 min-w-0">
+                      <span className="text-sm font-medium text-foreground">
+                        {isThumbnailOnly ? "Thumbnails" : "Phase 2 — Thumbnails"}
+                      </span>
+                      <p className="text-xs text-muted-foreground">{new Date(job.phase2_started_at).toLocaleString()}</p>
+                      {job.finished_at && (
+                        <p className="text-xs text-primary/80 font-medium mt-0.5">
+                          Duration: {formatDuration(job.phase2_started_at, job.finished_at)}
+                          {job.total_files != null && job.total_files > 0 && (
+                            <span className="text-muted-foreground font-normal ml-1">
+                              · {job.processed_files ?? job.total_files} thumbnails
+                            </span>
+                          )}
+                        </p>
+                      )}
+                    </div>
+                  </div>
+                )}
+
+                {/* Started — for jobs without phase2 (cbr_to_cbz, or no phase yet) */}
+                {job.started_at && !job.phase2_started_at && (
+                  <div className="flex items-start gap-4">
+                    <div className={`w-3.5 h-3.5 rounded-full mt-0.5 shrink-0 z-10 ${
+                      job.finished_at ? "bg-primary" : "bg-primary animate-pulse"
+                    }`} />
+                    <div className="flex-1 min-w-0">
+                      <span className="text-sm font-medium text-foreground">Started</span>
+                      <p className="text-xs text-muted-foreground">{new Date(job.started_at).toLocaleString()}</p>
+                    </div>
+                  </div>
+                )}
+
+                {/* Pending — not started yet */}
+                {!job.started_at && (
+                  <div className="flex items-start gap-4">
+                    <div className="w-3.5 h-3.5 rounded-full mt-0.5 bg-warning shrink-0 z-10" />
+                    <div className="flex-1 min-w-0">
+                      <span className="text-sm font-medium text-foreground">Waiting to start…</span>
+                    </div>
+                  </div>
+                )}
+
+                {/* Finished */}
+                {job.finished_at && (
+                  <div className="flex items-start gap-4">
+                    <div className={`w-3.5 h-3.5 rounded-full mt-0.5 shrink-0 z-10 ${
+                      isCompleted ? "bg-success" : isFailed ? "bg-destructive" : "bg-muted"
+                    }`} />
+                    <div className="flex-1 min-w-0">
+                      <span className="text-sm font-medium text-foreground">
+                        {isCompleted ? "Completed" : isFailed ? "Failed" : "Cancelled"}
+                      </span>
+                      <p className="text-xs text-muted-foreground">{new Date(job.finished_at).toLocaleString()}</p>
+                    </div>
+                  </div>
+                )}
+              </div>
+            </div>
           </CardContent>
         </Card>
 
         {/* Progress Card */}
-        {(job.status === "running" || job.status === "generating_thumbnails" || job.status === "success" || job.status === "failed") && (
+        {showProgressCard && (
           <Card>
             <CardHeader>
-              <CardTitle>{job.status === "generating_thumbnails" ? "Thumbnails" : "Progress"}</CardTitle>
+              <CardTitle>{progressTitle}</CardTitle>
+              {progressDescription && <CardDescription>{progressDescription}</CardDescription>}
             </CardHeader>
             <CardContent>
               {job.total_files != null && job.total_files > 0 && (
                 <>
                   <ProgressBar value={job.progress_percent || 0} showLabel size="lg" className="mb-4" />
                   <div className="grid grid-cols-3 gap-4">
-                    <StatBox value={job.processed_files ?? 0} label="Processed" variant="primary" />
-                    <StatBox value={job.total_files} label={job.status === "generating_thumbnails" ? "Total thumbnails" : "Total"} />
-                    <StatBox value={job.total_files - (job.processed_files ?? 0)} label="Remaining" variant="warning" />
+                    <StatBox
+                      value={job.processed_files ?? 0}
+                      label={isThumbnailOnly || isThumbnailPhase ? "Generated" : "Processed"}
+                      variant="primary"
+                    />
+                    <StatBox value={job.total_files} label="Total" />
+                    <StatBox
+                      value={Math.max(0, job.total_files - (job.processed_files ?? 0))}
+                      label="Remaining"
+                      variant={isCompleted ? "default" : "warning"}
+                    />
                   </div>
                 </>
               )}
               {job.current_file && (
                 <div className="mt-4 p-3 bg-muted/50 rounded-lg">
-                  <span className="text-sm text-muted-foreground">Current file:</span>
-                  <code className="block mt-1 text-xs font-mono text-foreground truncate">{job.current_file}</code>
+                  <span className="text-xs text-muted-foreground uppercase tracking-wide">Current file</span>
+                  <code className="block mt-1 text-xs font-mono text-foreground break-all">{job.current_file}</code>
                 </div>
               )}
             </CardContent>
           </Card>
         )}
 
-        {/* Statistics Card */}
-        {job.stats_json && (
+        {/* Index Statistics — index jobs only */}
+        {job.stats_json && !isThumbnailOnly && (
           <Card>
             <CardHeader>
-              <CardTitle>Statistics</CardTitle>
+              <CardTitle>Index statistics</CardTitle>
+              {job.started_at && (
+                <CardDescription>
+                  {formatDuration(job.started_at, job.finished_at)}
+                  {speedCount > 0 && ` · ${formatSpeed(speedCount, durationMs)} scan rate`}
+                </CardDescription>
+              )}
             </CardHeader>
             <CardContent>
-              <div className="grid grid-cols-2 sm:grid-cols-4 gap-4 mb-4">
+              <div className="grid grid-cols-2 sm:grid-cols-4 gap-4">
                 <StatBox value={job.stats_json.scanned_files} label="Scanned" variant="success" />
                 <StatBox value={job.stats_json.indexed_files} label="Indexed" variant="primary" />
                 <StatBox value={job.stats_json.removed_files} label="Removed" variant="warning" />
                 <StatBox value={job.stats_json.errors} label="Errors" variant={job.stats_json.errors > 0 ? "error" : "default"} />
               </div>
-              {job.started_at && (
-                <div className="flex items-center justify-between py-2 border-t border-border/60">
-                  <span className="text-sm text-muted-foreground">Speed:</span>
-                  <span className="text-sm font-medium text-foreground">{formatSpeed(job.stats_json, duration)}</span>
-                </div>
-              )}
             </CardContent>
           </Card>
         )}
 
-        {/* Errors Card */}
+        {/* Thumbnail statistics — thumbnail-only jobs, completed */}
+        {isThumbnailOnly && isCompleted && job.total_files != null && (
+          <Card>
+            <CardHeader>
+              <CardTitle>Thumbnail statistics</CardTitle>
+              {job.started_at && (
+                <CardDescription>
+                  {formatDuration(job.started_at, job.finished_at)}
+                  {speedCount > 0 && ` · ${formatSpeed(speedCount, durationMs)} thumbnails/s`}
+                </CardDescription>
+              )}
+            </CardHeader>
+            <CardContent>
+              <div className="grid grid-cols-2 gap-4">
+                <StatBox value={job.processed_files ?? job.total_files} label="Generated" variant="success" />
+                <StatBox value={job.total_files} label="Total" />
+              </div>
+            </CardContent>
+          </Card>
+        )}
+
+        {/* File errors */}
         {errors.length > 0 && (
           <Card className="lg:col-span-2">
             <CardHeader>
-              <CardTitle>Errors ({errors.length})</CardTitle>
-              <CardDescription>Errors encountered during job execution</CardDescription>
+              <CardTitle>File errors ({errors.length})</CardTitle>
+              <CardDescription>Errors encountered while processing individual files</CardDescription>
             </CardHeader>
             <CardContent className="space-y-2 max-h-80 overflow-y-auto">
               {errors.map((error) => (
@@ -238,19 +477,6 @@ export default async function JobDetailPage({ params }: JobDetailPageProps) {
             </CardContent>
           </Card>
         )}
-
-        {/* Error Message */}
-        {job.error_opt && (
-          <Card className="lg:col-span-2">
-            <CardHeader>
-              <CardTitle>Error</CardTitle>
-              <CardDescription>Job failed with error</CardDescription>
-            </CardHeader>
-            <CardContent>
-              <pre className="p-4 bg-destructive/10 rounded-lg text-sm text-destructive overflow-x-auto border border-destructive/20">{job.error_opt}</pre>
-            </CardContent>
-          </Card>
-        )}
       </div>
     </>
   );
@@ -7,6 +7,7 @@ import { ThemeProvider } from "./theme-provider";
 import { ThemeToggle } from "./theme-toggle";
 import { JobsIndicator } from "./components/JobsIndicator";
 import { NavIcon, Icon } from "./components/ui";
+import { MobileNav } from "./components/MobileNav";
 
 export const metadata: Metadata = {
   title: "StripStream Backoffice",
@@ -61,9 +62,9 @@ export default function RootLayout({ children }: { children: ReactNode }) {
           <div className="flex items-center gap-2">
             <div className="hidden md:flex items-center gap-1">
               {navItems.map((item) => (
-                <NavLink key={item.href} href={item.href}>
+                <NavLink key={item.href} href={item.href} title={item.label}>
                   <NavIcon name={item.icon} />
-                  <span className="ml-2">{item.label}</span>
+                  <span className="ml-2 hidden lg:inline">{item.label}</span>
                 </NavLink>
               ))}
             </div>
@@ -79,6 +80,7 @@ export default function RootLayout({ children }: { children: ReactNode }) {
               <Icon name="settings" size="md" />
             </Link>
             <ThemeToggle />
+            <MobileNav navItems={navItems} />
           </div>
         </div>
       </nav>
@@ -95,13 +97,14 @@ export default function RootLayout({ children }: { children: ReactNode }) {
 }
 
 // Navigation Link Component
-function NavLink({ href, children }: { href: NavItem["href"]; children: React.ReactNode }) {
+function NavLink({ href, title, children }: { href: NavItem["href"]; title?: string; children: React.ReactNode }) {
   return (
     <Link
       href={href}
+      title={title}
      className="
        flex items-center
-        px-3 py-2
+        px-2 lg:px-3 py-2
        rounded-lg
        text-sm font-medium
        text-muted-foreground
@@ -1,7 +1,7 @@
 import { fetchLibraries, fetchBooks, getBookCoverUrl, LibraryDto, BookDto } from "../../../../lib/api";
 import { BooksGrid, EmptyState } from "../../../components/BookCard";
 import { LibrarySubPageHeader } from "../../../components/LibrarySubPageHeader";
-import { CursorPagination } from "../../../components/ui";
+import { OffsetPagination } from "../../../components/ui";
 import { notFound } from "next/navigation";
 
 export const dynamic = "force-dynamic";
@@ -15,15 +15,17 @@ export default async function LibraryBooksPage({
 }) {
   const { id } = await params;
   const searchParamsAwaited = await searchParams;
-  const cursor = typeof searchParamsAwaited.cursor === "string" ? searchParamsAwaited.cursor : undefined;
+  const page = typeof searchParamsAwaited.page === "string" ? parseInt(searchParamsAwaited.page) : 1;
   const series = typeof searchParamsAwaited.series === "string" ? searchParamsAwaited.series : undefined;
   const limit = typeof searchParamsAwaited.limit === "string" ? parseInt(searchParamsAwaited.limit) : 20;
 
   const [library, booksPage] = await Promise.all([
     fetchLibraries().then(libs => libs.find(l => l.id === id)),
-    fetchBooks(id, series, cursor, limit).catch(() => ({
+    fetchBooks(id, series, page, limit).catch(() => ({
       items: [] as BookDto[],
-      next_cursor: null
+      total: 0,
+      page: 1,
+      limit,
     }))
   ]);
 
@@ -35,11 +37,9 @@ export default async function LibraryBooksPage({
     ...book,
     coverUrl: getBookCoverUrl(book.id)
   }));
-  const nextCursor = booksPage.next_cursor;
 
   const seriesDisplayName = series === "unclassified" ? "Unclassified" : series;
-  const hasNextPage = !!nextCursor;
-  const hasPrevPage = !!cursor;
+  const totalPages = Math.ceil(booksPage.total / limit);
 
   return (
     <div className="space-y-6">
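The hunk above swaps cursor pagination for offset pagination: instead of opaque `next_cursor` tokens, the page count is derived from the server-reported total with `Math.ceil`, which keeps a final partial page. A small sketch of that math (the `Math.max(1, ...)` clamp is an assumption added here so an empty result still renders "page 1 of 1"; the diff itself uses plain `Math.ceil`):

```typescript
// Offset-pagination page count: ceil(total / pageSize), clamped to >= 1
// so an empty collection still shows one (empty) page.
function totalPages(totalItems: number, pageSize: number): number {
  return Math.max(1, Math.ceil(totalItems / pageSize));
}

console.log(totalPages(45, 20)); // → 3 (two full pages + one partial)
console.log(totalPages(40, 20)); // → 2
console.log(totalPages(0, 20));  // → 1
```

Unlike the old `hasNextPage`/`hasPrevPage` booleans, a total-based count lets the `OffsetPagination` component render "page X of Y" and jump to arbitrary pages.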
@@ -63,12 +63,11 @@ export default async function LibraryBooksPage({
             <>
               <BooksGrid books={books} />
 
-              <CursorPagination
-                hasNextPage={hasNextPage}
-                hasPrevPage={hasPrevPage}
+              <OffsetPagination
+                currentPage={page}
+                totalPages={totalPages}
                 pageSize={limit}
-                currentCount={books.length}
-                nextCursor={nextCursor}
+                totalItems={booksPage.total}
               />
             </>
           ) : (
@@ -1,5 +1,5 @@
 import { fetchLibraries, fetchSeries, getBookCoverUrl, LibraryDto, SeriesDto, SeriesPageDto } from "../../../../lib/api";
-import { CursorPagination } from "../../../components/ui";
+import { OffsetPagination } from "../../../components/ui";
 import Image from "next/image";
 import Link from "next/link";
 import { notFound } from "next/navigation";
@@ -16,12 +16,12 @@ export default async function LibrarySeriesPage({
|
|||||||
}) {
|
}) {
|
||||||
const { id } = await params;
|
const { id } = await params;
|
||||||
const searchParamsAwaited = await searchParams;
|
const searchParamsAwaited = await searchParams;
|
||||||
const cursor = typeof searchParamsAwaited.cursor === "string" ? searchParamsAwaited.cursor : undefined;
|
const page = typeof searchParamsAwaited.page === "string" ? parseInt(searchParamsAwaited.page) : 1;
|
||||||
const limit = typeof searchParamsAwaited.limit === "string" ? parseInt(searchParamsAwaited.limit) : 20;
|
const limit = typeof searchParamsAwaited.limit === "string" ? parseInt(searchParamsAwaited.limit) : 20;
|
||||||
|
|
||||||
const [library, seriesPage] = await Promise.all([
|
const [library, seriesPage] = await Promise.all([
|
||||||
fetchLibraries().then(libs => libs.find(l => l.id === id)),
|
fetchLibraries().then(libs => libs.find(l => l.id === id)),
|
||||||
fetchSeries(id, cursor, limit).catch(() => ({ items: [] as SeriesDto[], next_cursor: null }) as SeriesPageDto)
|
fetchSeries(id, page, limit).catch(() => ({ items: [] as SeriesDto[], total: 0, page: 1, limit }) as SeriesPageDto)
|
||||||
]);
|
]);
|
||||||
|
|
||||||
if (!library) {
|
if (!library) {
|
||||||
@@ -29,9 +29,7 @@ export default async function LibrarySeriesPage({
|
|||||||
}
|
}
|
||||||
|
|
||||||
const series = seriesPage.items;
|
const series = seriesPage.items;
|
||||||
const nextCursor = seriesPage.next_cursor;
|
const totalPages = Math.ceil(seriesPage.total / limit);
|
||||||
const hasNextPage = !!nextCursor;
|
|
||||||
const hasPrevPage = !!cursor;
|
|
||||||
|
|
||||||
return (
|
return (
|
||||||
<div className="space-y-6">
|
<div className="space-y-6">
|
||||||
@@ -78,12 +76,11 @@ export default async function LibrarySeriesPage({
|
|||||||
))}
|
))}
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<CursorPagination
|
<OffsetPagination
|
||||||
hasNextPage={hasNextPage}
|
currentPage={page}
|
||||||
hasPrevPage={hasPrevPage}
|
totalPages={totalPages}
|
||||||
pageSize={limit}
|
pageSize={limit}
|
||||||
currentCount={series.length}
|
totalItems={seriesPage.total}
|
||||||
nextCursor={nextCursor}
|
|
||||||
/>
|
/>
|
||||||
</>
|
</>
|
||||||
) : (
|
) : (
|
||||||
|
|||||||
@@ -247,7 +247,7 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
             <Icon name="performance" size="md" />
             Performance Limits
           </CardTitle>
-          <CardDescription>Configure API performance and rate limiting</CardDescription>
+          <CardDescription>Configure API performance, rate limiting, and thumbnail generation concurrency</CardDescription>
         </CardHeader>
         <CardContent>
           <div className="space-y-4">
@@ -266,6 +266,9 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
                 }}
                 onBlur={() => handleUpdateSetting("limits", settings.limits)}
               />
+              <p className="text-xs text-muted-foreground mt-1">
+                Maximum number of page renders and thumbnail generations running in parallel
+              </p>
             </FormField>
             <FormField className="flex-1">
               <label className="text-sm font-medium text-muted-foreground mb-1 block">Timeout (seconds)</label>
@@ -299,7 +302,7 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
             </FormField>
           </FormRow>
           <p className="text-sm text-muted-foreground">
-            Note: Changes to limits require a server restart to take effect.
+            Note: Changes to limits require a server restart to take effect. The "Concurrent Renders" setting controls both page rendering and thumbnail generation parallelism.
           </p>
         </div>
       </CardContent>
@@ -424,7 +427,7 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
           </div>

           <p className="text-sm text-muted-foreground">
-            Note: Thumbnail settings are used during indexing. Existing thumbnails will not be regenerated automatically.
+            Note: Thumbnail settings are used during indexing. Existing thumbnails will not be regenerated automatically. The concurrency for thumbnail generation is controlled by the "Concurrent Renders" setting in Performance Limits above.
          </p>
         </div>
       </CardContent>
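The settings copy above states that one "Concurrent Renders" value bounds both page rendering and thumbnail generation. A sketch of that bounded-concurrency idea, as a generic worker-pool mapper (illustrative only; the real services implement this in Rust via `for_each_concurrent`):

```typescript
// Run fn over items with at most `limit` invocations in flight at once,
// preserving result order. Illustrative sketch, not code from this repo.
async function mapConcurrent<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0; // shared cursor; safe because JS is single-threaded between awaits
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, () => worker()),
  );
  return results;
}
```

Raising the setting trades memory and CPU for throughput, which is why the UI warns that a restart is required: the pool size is read once at startup.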
@@ -13,6 +13,7 @@ export type LibraryDto = {
 export type IndexJobDto = {
   id: string;
   library_id: string | null;
+  book_id: string | null;
   type: string;
   status: string;
   started_at: string | null;
@@ -45,6 +46,14 @@ export type FolderItem = {
   has_children: boolean;
 };

+export type ReadingStatus = "unread" | "reading" | "read";
+
+export type ReadingProgressDto = {
+  status: ReadingStatus;
+  current_page: number | null;
+  last_read_at: string | null;
+};
+
 export type BookDto = {
   id: string;
   library_id: string;
@@ -59,11 +68,16 @@ export type BookDto = {
   file_format: string | null;
   file_parse_status: string | null;
   updated_at: string;
+  reading_status: ReadingStatus;
+  reading_current_page: number | null;
+  reading_last_read_at: string | null;
 };

 export type BooksPageDto = {
   items: BookDto[];
-  next_cursor: string | null;
+  total: number;
+  page: number;
+  limit: number;
 };

 export type SearchHitDto = {
@@ -77,8 +91,17 @@ export type SearchHitDto = {
   language: string | null;
 };

+export type SeriesHitDto = {
+  library_id: string;
+  name: string;
+  book_count: number;
+  books_read_count: number;
+  first_book_id: string;
+};
+
 export type SearchResponseDto = {
   hits: SearchHitDto[];
+  series_hits: SeriesHitDto[];
   estimated_total_hits: number | null;
   processing_time_ms: number | null;
 };
@@ -86,11 +109,12 @@ export type SearchResponseDto = {
 export type SeriesDto = {
   name: string;
   book_count: number;
+  books_read_count: number;
   first_book_id: string;
 };

-function config() {
-  const baseUrl = process.env.API_BASE_URL || "http://api:8080";
+export function config() {
+  const baseUrl = process.env.API_BASE_URL || "http://api:7080";
   const token = process.env.API_BOOTSTRAP_TOKEN;
   if (!token) {
     throw new Error("API_BOOTSTRAP_TOKEN is required for backoffice");
@@ -232,13 +256,13 @@ export async function revokeToken(id: string) {
 export async function fetchBooks(
   libraryId?: string,
   series?: string,
-  cursor?: string,
+  page: number = 1,
   limit: number = 50,
 ): Promise<BooksPageDto> {
   const params = new URLSearchParams();
   if (libraryId) params.set("library_id", libraryId);
   if (series) params.set("series", series);
-  if (cursor) params.set("cursor", cursor);
+  params.set("page", page.toString());
   params.set("limit", limit.toString());

   return apiFetch<BooksPageDto>(`/books?${params.toString()}`);
@@ -246,16 +270,18 @@ export async function fetchBooks(

 export type SeriesPageDto = {
   items: SeriesDto[];
-  next_cursor: string | null;
+  total: number;
+  page: number;
+  limit: number;
 };

 export async function fetchSeries(
   libraryId: string,
-  cursor?: string,
+  page: number = 1,
   limit: number = 50,
 ): Promise<SeriesPageDto> {
   const params = new URLSearchParams();
-  if (cursor) params.set("cursor", cursor);
+  params.set("page", page.toString());
   params.set("limit", limit.toString());

   return apiFetch<SeriesPageDto>(
@@ -348,3 +374,22 @@ export async function clearCache() {
 export async function getThumbnailStats() {
   return apiFetch<ThumbnailStats>("/settings/thumbnail/stats");
 }
+
+export async function convertBook(bookId: string) {
+  return apiFetch<IndexJobDto>(`/books/${bookId}/convert`, { method: "POST" });
+}
+
+export async function fetchReadingProgress(bookId: string) {
+  return apiFetch<ReadingProgressDto>(`/books/${bookId}/progress`);
+}
+
+export async function updateReadingProgress(
+  bookId: string,
+  status: ReadingStatus,
+  currentPage?: number,
+) {
+  return apiFetch<ReadingProgressDto>(`/books/${bookId}/progress`, {
+    method: "PATCH",
+    body: JSON.stringify({ status, current_page: currentPage ?? null }),
+  });
+}
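The new `updateReadingProgress` helper above PATCHes a small JSON body, coalescing an omitted page number to `null`. A sketch of just that payload construction, pulled out for clarity (the standalone `progressBody` function is illustrative, not part of the codebase):

```typescript
// Shape of the PATCH /books/:id/progress body built by updateReadingProgress:
// snake_case keys on the wire, currentPage coalesced to null when omitted.
type ReadingStatus = "unread" | "reading" | "read";

function progressBody(status: ReadingStatus, currentPage?: number): string {
  return JSON.stringify({ status, current_page: currentPage ?? null });
}
```

Sending an explicit `null` rather than omitting the key lets the server distinguish "clear the page" from "leave the page unchanged".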
@@ -21,7 +21,10 @@
     {
       "name": "next"
     }
-  ]
+  ],
+  "paths": {
+    "@/*": ["./*"]
+  }
 },
 "include": [
   "next-env.d.ts",
File diff suppressed because one or more lines are too long

apps/indexer/AGENTS.md (new file, 104 lines)
@@ -0,0 +1,104 @@
# apps/indexer — Indexing service

Background service on port **7081**. See the root `AGENTS.md` for global conventions.

## File layout

| File | Role |
|------|------|
| `main.rs` | Entry point, initialization, worker startup |
| `lib.rs` | `AppState` (pool, meili_url, meili_master_key) |
| `worker.rs` | Main loop: claim job → process → cleanup stale |
| `job.rs` | `claim_next_job`, `process_job`, `fail_job`, `cleanup_stale_jobs` |
| `scanner.rs` | Phase 1 discovery: WalkDir + `parse_metadata_fast` (zero archive I/O), skips unchanged directories via mtime, DB batching |
| `analyzer.rs` | Phase 2 analysis: opens each archive once (`analyze_book`), produces page_count + WebP thumbnail |
| `batch.rs` | `flush_all_batches` with UNNEST, `BookInsert/Update/FileInsert/Update/ErrorInsert` structs |
| `scheduler.rs` | Auto-scan: checks every 60s for libraries to monitor |
| `watcher.rs` | Real-time file watcher |
| `meili.rs` | Meilisearch indexing/sync |
| `api.rs` | Indexer HTTP endpoints (/health, /ready) |
| `utils.rs` | `remap_libraries_path`, `unmap_libraries_path`, `compute_fingerprint`, `kind_from_format` |

## Job lifecycle

```
claim_next_job (UPDATE ... RETURNING, status pending→running)
└─ process_job
   ├─ Phase 1: scanner::scan_library_discovery
   │   ├─ WalkDir + parse_metadata_fast (zero archive I/O)
   │   ├─ skip directories via directory_mtimes (DB table)
   │   └─ INSERT books (page_count=NULL) → books visible immediately
   ├─ meili::sync_meili
   ├─ analyzer::cleanup_orphaned_thumbnails (full_rebuild only)
   └─ Phase 2: analyzer::analyze_library_books
       ├─ SELECT books WHERE page_count IS NULL
       ├─ parsers::analyze_book → (page_count, first_page_bytes)
       ├─ generate_thumbnail (WebP, Lanczos3)
       └─ UPDATE books SET page_count, thumbnail_path

Special jobs:
  thumbnail_rebuild     → analyze_library_books(thumbnail_only=true)
  thumbnail_regenerate  → regenerate_thumbnails (clear + re-analyze)
```

- Cancellation: `is_job_cancelled` is checked every 10 files or 1s — returns `Err("Job cancelled")`
- Stale jobs (still running at restart) → cleaned up by `cleanup_stale_jobs` at boot

## Batch pattern (batch.rs)

All bulk DB operations go through `flush_all_batches` with UNNEST:

```rust
// Accumulate into Vec<BookInsert>, Vec<FileInsert>, etc.
books_to_insert.push(BookInsert { ... });

// Flush when full or at the end of the scan
if books_to_insert.len() >= BATCH_SIZE {
    flush_all_batches(&pool, &mut books_update, &mut files_update,
        &mut books_insert, &mut files_insert, &mut errors_insert).await?;
}
```

All operations in a flush run inside a single transaction.

## Filesystem scan — two-phase architecture

### Phase 1: Discovery (`scanner.rs`)

Lightweight pipeline — **zero archive opens**:
1. Load `directory_mtimes` from the DB
2. WalkDir: for each directory, compare the filesystem mtime with the stored mtime → skip if unchanged
3. For each file: `parse_metadata_fast` (title/series/volume from the filename only)
4. INSERT/UPDATE with `page_count = NULL` — books become visible immediately
5. Upsert `directory_mtimes` at the end of the scan

Fingerprint = SHA256(size + mtime + filename) to detect changes without re-reading the file.

### Phase 2: Analysis (`analyzer.rs`)

Progressive background processing:
- Query `WHERE page_count IS NULL` (or `thumbnail_path IS NULL` for thumbnail jobs)
- Bounded concurrency (`futures::stream::for_each_concurrent`, default 4)
- Per book: `parsers::analyze_book(path, format)` → `(page_count, first_page_bytes)`
- Thumbnail generation: Lanczos3 resize + WebP encode
- UPDATE `books SET page_count, thumbnail_path`
- Config read from `app_settings` (keys `'thumbnail'` and `'limits'`)

## Path remapping

```rust
// abs_path in the DB = container path (/libraries/...)
// On the host: LIBRARIES_ROOT_PATH replaces /libraries
utils::remap_libraries_path(&abs_path)   // DB → local filesystem
utils::unmap_libraries_path(&local_path) // local filesystem → DB
```

## Gotchas

- **Thumbnails**: generated **directly by the indexer** (phase 2, `analyzer.rs`). The API no longer handles generation — it only creates the jobs in the DB.
- **page_count = NULL**: after the discovery phase, every new book has `page_count = NULL`. The analysis phase fills it in progressively. Do not mistake this for an error.
- **directory_mtimes**: DB table storing the mtime of every scanned directory. Cleared on full_rebuild, updated after each scan. Lets incremental scans skip unchanged directories.
- **full_rebuild**: deletes all data then re-inserts. Ignores fingerprints and directory_mtimes.
- **Cancellation**: check `is_job_cancelled` regularly to honor user cancellations.
- **Watcher + scheduler**: run as separate tokio tasks in `worker.rs`, alongside the main loop.
- **spawn_blocking**: opening an archive (`analyze_book`) and generating a thumbnail are blocking operations — always wrap them in `tokio::task::spawn_blocking`.
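The discovery-phase fingerprint described above hashes size, mtime, and filename so a change in any of the three invalidates the entry without reading file contents. A sketch of the idea in TypeScript (the real `compute_fingerprint` is Rust; the field separator and encoding here are assumptions, not the actual scheme):

```typescript
// Illustrative fingerprint in the spirit of compute_fingerprint:
// SHA-256 over size + mtime + filename. The ":"-joined encoding is an
// assumption for this sketch, not the encoding the Rust code uses.
import { createHash } from "node:crypto";

function fingerprint(size: number, mtimeSecs: number, filename: string): string {
  return createHash("sha256")
    .update(`${size}:${mtimeSecs}:${filename}`)
    .digest("hex");
}
```

Because the hash never touches file contents, a rewrite that preserves size, mtime, and name would go undetected; the trade-off buys a discovery phase with zero archive I/O.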
@@ -4,10 +4,14 @@ version.workspace = true
 edition.workspace = true
 license.workspace = true

+[lib]
+
 [dependencies]
 anyhow.workspace = true
 axum.workspace = true
 chrono.workspace = true
+futures = "0.3"
+image.workspace = true
 notify = "6.1"
 parsers = { path = "../../crates/parsers" }
 rand.workspace = true
@@ -23,3 +27,4 @@ tracing.workspace = true
 tracing-subscriber.workspace = true
 uuid.workspace = true
 walkdir.workspace = true
+webp.workspace = true
@@ -21,7 +21,11 @@ RUN --mount=type=cache,target=/sccache \
     cargo build --release -p indexer

 FROM debian:bookworm-slim
-RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget unrar-free && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    ca-certificates wget \
+    unrar-free unar \
+    poppler-utils \
+    && rm -rf /var/lib/apt/lists/*
 COPY --from=builder /app/target/release/indexer /usr/local/bin/indexer
-EXPOSE 8081
+EXPOSE 7081
 CMD ["/usr/local/bin/indexer"]
apps/indexer/src/analyzer.rs (new file, 470 lines)
@@ -0,0 +1,470 @@
use anyhow::Result;
use futures::stream::{self, StreamExt};
use image::GenericImageView;
use parsers::{analyze_book, BookFormat};
use sqlx::Row;
use std::path::Path;
use std::sync::atomic::{AtomicBool, AtomicI32, Ordering};
use std::sync::Arc;
use tracing::{info, warn};
use uuid::Uuid;

use crate::{job::is_job_cancelled, utils, AppState};

#[derive(Clone)]
struct ThumbnailConfig {
    enabled: bool,
    width: u32,
    height: u32,
    quality: u8,
    directory: String,
}

async fn load_thumbnail_config(pool: &sqlx::PgPool) -> ThumbnailConfig {
    let fallback = ThumbnailConfig {
        enabled: true,
        width: 300,
        height: 400,
        quality: 80,
        directory: "/data/thumbnails".to_string(),
    };
    let row = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'thumbnail'"#)
        .fetch_optional(pool)
        .await;

    match row {
        Ok(Some(row)) => {
            let value: serde_json::Value = row.get("value");
            ThumbnailConfig {
                enabled: value
                    .get("enabled")
                    .and_then(|v| v.as_bool())
                    .unwrap_or(fallback.enabled),
                width: value
                    .get("width")
                    .and_then(|v| v.as_u64())
                    .map(|v| v as u32)
                    .unwrap_or(fallback.width),
                height: value
                    .get("height")
                    .and_then(|v| v.as_u64())
                    .map(|v| v as u32)
                    .unwrap_or(fallback.height),
                quality: value
                    .get("quality")
                    .and_then(|v| v.as_u64())
                    .map(|v| v as u8)
                    .unwrap_or(fallback.quality),
                directory: value
                    .get("directory")
                    .and_then(|v| v.as_str())
                    .map(|s| s.to_string())
                    .unwrap_or_else(|| fallback.directory.clone()),
            }
        }
        _ => fallback,
    }
}

async fn load_thumbnail_concurrency(pool: &sqlx::PgPool) -> usize {
    let default_concurrency = 2;
    let row = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'limits'"#)
        .fetch_optional(pool)
        .await;

    match row {
        Ok(Some(row)) => {
            let value: serde_json::Value = row.get("value");
            value
                .get("concurrent_renders")
                .and_then(|v| v.as_u64())
                .map(|v| v as usize)
                .unwrap_or(default_concurrency)
        }
        _ => default_concurrency,
    }
}

fn generate_thumbnail(image_bytes: &[u8], config: &ThumbnailConfig) -> anyhow::Result<Vec<u8>> {
    let img = image::load_from_memory(image_bytes)
        .map_err(|e| anyhow::anyhow!("failed to load image: {}", e))?;
    let (orig_w, orig_h) = img.dimensions();
    let ratio_w = config.width as f32 / orig_w as f32;
    let ratio_h = config.height as f32 / orig_h as f32;
    let ratio = ratio_w.min(ratio_h);
    let new_w = (orig_w as f32 * ratio) as u32;
    let new_h = (orig_h as f32 * ratio) as u32;
    let resized = img.resize(new_w, new_h, image::imageops::FilterType::Triangle);
    let rgba = resized.to_rgba8();
    let (w, h) = rgba.dimensions();
    let rgb_data: Vec<u8> = rgba.pixels().flat_map(|p| [p[0], p[1], p[2]]).collect();
    let quality = config.quality as f32;
    let webp_data = webp::Encoder::new(&rgb_data, webp::PixelLayout::Rgb, w, h).encode(quality);
    Ok(webp_data.to_vec())
}

fn save_thumbnail(
    book_id: Uuid,
    thumbnail_bytes: &[u8],
    config: &ThumbnailConfig,
) -> anyhow::Result<String> {
    let dir = Path::new(&config.directory);
    std::fs::create_dir_all(dir)?;
    let filename = format!("{}.webp", book_id);
    let path = dir.join(&filename);
    std::fs::write(&path, thumbnail_bytes)?;
    Ok(path.to_string_lossy().to_string())
}

fn book_format_from_str(s: &str) -> Option<BookFormat> {
    match s {
        "cbz" => Some(BookFormat::Cbz),
        "cbr" => Some(BookFormat::Cbr),
        "pdf" => Some(BookFormat::Pdf),
        _ => None,
    }
}

/// Phase 2 — Analysis: open each unanalyzed archive once, extract page_count + thumbnail.
/// `thumbnail_only` = true: only process books missing thumbnail (page_count may already be set).
/// `thumbnail_only` = false: process books missing page_count.
pub async fn analyze_library_books(
    state: &AppState,
    job_id: Uuid,
    library_id: Option<Uuid>,
    thumbnail_only: bool,
) -> Result<()> {
    let config = load_thumbnail_config(&state.pool).await;

    if !config.enabled {
        info!("[ANALYZER] Thumbnails disabled, skipping analysis phase");
        return Ok(());
    }

    let concurrency = load_thumbnail_concurrency(&state.pool).await;

    // Query books that need analysis
    let query_filter = if thumbnail_only {
        "b.thumbnail_path IS NULL"
    } else {
        "b.page_count IS NULL"
    };

    let sql = format!(
        r#"
        SELECT b.id AS book_id, bf.abs_path, bf.format
        FROM books b
        JOIN book_files bf ON bf.book_id = b.id
        WHERE (b.library_id = $1 OR $1 IS NULL)
          AND {}
        "#,
        query_filter
    );

    let rows = sqlx::query(&sql)
        .bind(library_id)
        .fetch_all(&state.pool)
        .await?;

    if rows.is_empty() {
        info!("[ANALYZER] No books to analyze");
        return Ok(());
    }

    let total = rows.len() as i32;
    info!(
        "[ANALYZER] Analyzing {} books (thumbnail_only={}, concurrency={})",
        total, thumbnail_only, concurrency
    );

    // Update job status
    let _ = sqlx::query(
        "UPDATE index_jobs SET status = 'generating_thumbnails', total_files = $2, processed_files = 0, current_file = NULL WHERE id = $1",
    )
    .bind(job_id)
    .bind(total)
    .execute(&state.pool)
    .await;

    let processed_count = Arc::new(AtomicI32::new(0));
    let cancelled_flag = Arc::new(AtomicBool::new(false));

    // Background task: poll DB every 2s to detect cancellation
    let cancel_pool = state.pool.clone();
    let cancel_flag_for_poller = cancelled_flag.clone();
    let cancel_handle = tokio::spawn(async move {
        loop {
            tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
            match is_job_cancelled(&cancel_pool, job_id).await {
                Ok(true) => {
                    cancel_flag_for_poller.store(true, Ordering::Relaxed);
                    break;
                }
                Ok(false) => {}
                Err(_) => break,
            }
        }
    });

    struct BookTask {
        book_id: Uuid,
        abs_path: String,
        format: String,
    }

    let tasks: Vec<BookTask> = rows
        .into_iter()
        .map(|row| BookTask {
            book_id: row.get("book_id"),
            abs_path: row.get("abs_path"),
            format: row.get("format"),
        })
        .collect();

    stream::iter(tasks)
        .for_each_concurrent(concurrency, |task| {
            let processed_count = processed_count.clone();
            let pool = state.pool.clone();
            let config = config.clone();
            let cancelled = cancelled_flag.clone();

            async move {
                if cancelled.load(Ordering::Relaxed) {
                    return;
                }

                let local_path = utils::remap_libraries_path(&task.abs_path);
                let path = Path::new(&local_path);

                let format = match book_format_from_str(&task.format) {
                    Some(f) => f,
                    None => {
                        warn!("[ANALYZER] Unknown format '{}' for book {}", task.format, task.book_id);
                        return;
                    }
                };

                // Run blocking archive I/O on a thread pool
                let book_id = task.book_id;
                let path_owned = path.to_path_buf();
                let analyze_result = tokio::task::spawn_blocking(move || {
                    analyze_book(&path_owned, format)
                })
                .await;

                let (page_count, image_bytes) = match analyze_result {
                    Ok(Ok(result)) => result,
                    Ok(Err(e)) => {
                        warn!("[ANALYZER] analyze_book failed for book {}: {}", book_id, e);
                        // Mark parse_status = error in book_files
                        let _ = sqlx::query(
                            "UPDATE book_files SET parse_status = 'error', parse_error_opt = $2 WHERE book_id = $1",
                        )
                        .bind(book_id)
                        .bind(e.to_string())
                        .execute(&pool)
                        .await;
                        return;
                    }
                    Err(e) => {
                        warn!("[ANALYZER] spawn_blocking error for book {}: {}", book_id, e);
                        return;
                    }
                };

                // Generate thumbnail
                let thumb_result = tokio::task::spawn_blocking({
                    let config = config.clone();
                    move || generate_thumbnail(&image_bytes, &config)
                })
                .await;

                let thumb_bytes = match thumb_result {
                    Ok(Ok(b)) => b,
                    Ok(Err(e)) => {
                        warn!("[ANALYZER] thumbnail generation failed for book {}: {}", book_id, e);
                        // Still update page_count even if thumbnail fails
                        let _ = sqlx::query(
                            "UPDATE books SET page_count = $1 WHERE id = $2",
                        )
                        .bind(page_count)
                        .bind(book_id)
                        .execute(&pool)
                        .await;
                        return;
                    }
                    Err(e) => {
                        warn!("[ANALYZER] spawn_blocking thumbnail error for book {}: {}", book_id, e);
                        return;
                    }
                };

                // Save thumbnail file
                let save_result = {
                    let config = config.clone();
                    tokio::task::spawn_blocking(move || save_thumbnail(book_id, &thumb_bytes, &config))
                        .await
                };

                let thumb_path = match save_result {
                    Ok(Ok(p)) => p,
                    Ok(Err(e)) => {
                        warn!("[ANALYZER] save_thumbnail failed for book {}: {}", book_id, e);
                        let _ = sqlx::query("UPDATE books SET page_count = $1 WHERE id = $2")
                            .bind(page_count)
                            .bind(book_id)
                            .execute(&pool)
                            .await;
                        return;
                    }
                    Err(e) => {
                        warn!("[ANALYZER] spawn_blocking save error for book {}: {}", book_id, e);
                        return;
                    }
                };

                // Update DB
                if let Err(e) = sqlx::query(
                    "UPDATE books SET page_count = $1, thumbnail_path = $2 WHERE id = $3",
                )
                .bind(page_count)
                .bind(&thumb_path)
                .bind(book_id)
                .execute(&pool)
                .await
                {
                    warn!("[ANALYZER] DB update failed for book {}: {}", book_id, e);
                    return;
                }

                let processed = processed_count.fetch_add(1, Ordering::Relaxed) + 1;
                let percent = (processed as f64 / total as f64 * 100.0) as i32;
                let _ = sqlx::query(
                    "UPDATE index_jobs SET processed_files = $2, progress_percent = $3 WHERE id = $1",
                )
                .bind(job_id)
                .bind(processed)
                .bind(percent)
                .execute(&pool)
                .await;
            }
        })
        .await;

    cancel_handle.abort();

    if cancelled_flag.load(Ordering::Relaxed) {
        info!("[ANALYZER] Job {} cancelled by user, stopping analysis", job_id);
        return Err(anyhow::anyhow!("Job cancelled by user"));
    }

    let final_count = processed_count.load(Ordering::Relaxed);
    info!(
        "[ANALYZER] Analysis complete: {}/{} books processed",
        final_count, total
    );

    Ok(())
}

/// Clear thumbnail files and DB references for books in scope, then re-analyze.
pub async fn regenerate_thumbnails(
    state: &AppState,
    job_id: Uuid,
    library_id: Option<Uuid>,
) -> Result<()> {
    let config = load_thumbnail_config(&state.pool).await;

    // Delete thumbnail files for all books in scope
    let book_ids_to_clear: Vec<Uuid> = sqlx::query_scalar(
        r#"SELECT id FROM books WHERE (library_id = $1 OR $1 IS NULL) AND thumbnail_path IS NOT NULL"#,
    )
    .bind(library_id)
    .fetch_all(&state.pool)
    .await
    .unwrap_or_default();

    let mut deleted_count = 0usize;
    for book_id in &book_ids_to_clear {
        let filename = format!("{}.webp", book_id);
|
||||||
|
let thumbnail_path = Path::new(&config.directory).join(&filename);
|
||||||
|
if thumbnail_path.exists() {
|
||||||
|
if let Err(e) = std::fs::remove_file(&thumbnail_path) {
|
||||||
|
warn!(
|
||||||
|
"[ANALYZER] Failed to delete thumbnail {}: {}",
|
||||||
|
thumbnail_path.display(),
|
||||||
|
e
|
||||||
|
);
|
||||||
|
} else {
|
||||||
|
deleted_count += 1;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
info!(
|
||||||
|
"[ANALYZER] Deleted {} thumbnail files for regeneration",
|
||||||
|
deleted_count
|
||||||
|
);
|
||||||
|
|
||||||
|
// Clear thumbnail_path in DB
|
||||||
|
sqlx::query(
|
||||||
|
r#"UPDATE books SET thumbnail_path = NULL WHERE (library_id = $1 OR $1 IS NULL)"#,
|
||||||
|
)
|
||||||
|
.bind(library_id)
|
||||||
|
.execute(&state.pool)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Re-analyze all books (now thumbnail_path IS NULL for all)
|
||||||
|
analyze_library_books(state, job_id, library_id, true).await
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Delete orphaned thumbnail files (books deleted in full_rebuild get new UUIDs).
|
||||||
|
pub async fn cleanup_orphaned_thumbnails(state: &AppState) -> Result<()> {
|
||||||
|
let config = load_thumbnail_config(&state.pool).await;
|
||||||
|
|
||||||
|
// Load ALL book IDs across all libraries — we need the complete set to avoid
|
||||||
|
// deleting thumbnails that belong to other libraries during a per-library rebuild.
|
||||||
|
let existing_book_ids: std::collections::HashSet<Uuid> = sqlx::query_scalar(
|
||||||
|
r#"SELECT id FROM books"#,
|
||||||
|
)
|
||||||
|
.fetch_all(&state.pool)
|
||||||
|
.await
|
||||||
|
.unwrap_or_default()
|
||||||
|
.into_iter()
|
||||||
|
.collect();
|
||||||
|
|
||||||
|
let thumbnail_dir = Path::new(&config.directory);
|
||||||
|
if !thumbnail_dir.exists() {
|
||||||
|
return Ok(());
|
||||||
|
}
|
||||||
|
|
||||||
|
let mut deleted_count = 0usize;
|
||||||
|
if let Ok(entries) = std::fs::read_dir(thumbnail_dir) {
|
||||||
|
for entry in entries.flatten() {
|
||||||
|
if let Some(file_name) = entry.file_name().to_str() {
|
||||||
|
if file_name.ends_with(".webp") {
|
||||||
|
if let Some(book_id_str) = file_name.strip_suffix(".webp") {
|
||||||
|
if let Ok(book_id) = Uuid::parse_str(book_id_str) {
|
||||||
|
if !existing_book_ids.contains(&book_id) {
|
||||||
|
if let Err(e) = std::fs::remove_file(entry.path()) {
|
||||||
|
warn!(
|
||||||
|
"Failed to delete orphaned thumbnail {}: {}",
|
||||||
|
entry.path().display(),
|
||||||
|
e
|
||||||
|
);
|
||||||
|
} else {
|
||||||
|
deleted_count += 1;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
info!(
|
||||||
|
"[ANALYZER] Deleted {} orphaned thumbnail files",
|
||||||
|
deleted_count
|
||||||
|
);
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
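The orphan check above reduces to: strip the `.webp` suffix, parse the remainder as a book ID, and test membership in the full set of known IDs. A minimal, database-free sketch of that decision (plain `&str` IDs stand in for the real `Uuid` values; `is_orphaned` is a hypothetical helper, not part of the source):

```rust
use std::collections::HashSet;

// Return true when a thumbnail filename belongs to no known book and is
// therefore safe to delete. Mirrors the strip_suffix + membership check
// in cleanup_orphaned_thumbnails, with strings standing in for Uuids.
fn is_orphaned(file_name: &str, existing_book_ids: &HashSet<&str>) -> bool {
    match file_name.strip_suffix(".webp") {
        Some(book_id) => !existing_book_ids.contains(book_id),
        None => false, // not a thumbnail file; leave it alone
    }
}

fn main() {
    let existing: HashSet<&str> = HashSet::from(["a1", "b2"]);
    assert!(!is_orphaned("a1.webp", &existing)); // known book: keep
    assert!(is_orphaned("zz.webp", &existing)); // unknown id: orphan
    assert!(!is_orphaned("notes.txt", &existing)); // wrong extension: skip
}
```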
16
apps/indexer/src/api.rs
Normal file
16
apps/indexer/src/api.rs
Normal file
@@ -0,0 +1,16 @@
use axum::{extract::State, http::StatusCode, Json};
use serde_json;

use crate::AppState;

pub async fn health() -> &'static str {
    "ok"
}

pub async fn ready(State(state): State<AppState>) -> Result<Json<serde_json::Value>, StatusCode> {
    sqlx::query("SELECT 1")
        .execute(&state.pool)
        .await
        .map_err(|_| StatusCode::SERVICE_UNAVAILABLE)?;
    Ok(Json(serde_json::json!({"status": "ready"})))
}
233
apps/indexer/src/batch.rs
Normal file
233
apps/indexer/src/batch.rs
Normal file
@@ -0,0 +1,233 @@
use anyhow::Result;
use chrono::{DateTime, Utc};
use sqlx::PgPool;
use uuid::Uuid;

// Batched update data structures
pub struct BookUpdate {
    pub book_id: Uuid,
    pub title: String,
    pub kind: String,
    pub series: Option<String>,
    pub volume: Option<i32>,
    pub page_count: Option<i32>,
}

pub struct FileUpdate {
    pub file_id: Uuid,
    pub format: String,
    pub size_bytes: i64,
    pub mtime: DateTime<Utc>,
    pub fingerprint: String,
}

pub struct BookInsert {
    pub book_id: Uuid,
    pub library_id: Uuid,
    pub kind: String,
    pub title: String,
    pub series: Option<String>,
    pub volume: Option<i32>,
    pub page_count: Option<i32>,
    pub thumbnail_path: Option<String>,
}

pub struct FileInsert {
    pub file_id: Uuid,
    pub book_id: Uuid,
    pub format: String,
    pub abs_path: String,
    pub size_bytes: i64,
    pub mtime: DateTime<Utc>,
    pub fingerprint: String,
    pub parse_status: String,
    pub parse_error: Option<String>,
}

pub struct ErrorInsert {
    pub job_id: Uuid,
    pub file_path: String,
    pub error_message: String,
}

pub async fn flush_all_batches(
    pool: &PgPool,
    books_update: &mut Vec<BookUpdate>,
    files_update: &mut Vec<FileUpdate>,
    books_insert: &mut Vec<BookInsert>,
    files_insert: &mut Vec<FileInsert>,
    errors_insert: &mut Vec<ErrorInsert>,
) -> Result<()> {
    if books_update.is_empty()
        && files_update.is_empty()
        && books_insert.is_empty()
        && files_insert.is_empty()
        && errors_insert.is_empty()
    {
        return Ok(());
    }

    let start = std::time::Instant::now();
    let mut tx = pool.begin().await?;

    // Batch update books using UNNEST
    if !books_update.is_empty() {
        let book_ids: Vec<Uuid> = books_update.iter().map(|b| b.book_id).collect();
        let titles: Vec<String> = books_update.iter().map(|b| b.title.clone()).collect();
        let kinds: Vec<String> = books_update.iter().map(|b| b.kind.clone()).collect();
        let series: Vec<Option<String>> = books_update.iter().map(|b| b.series.clone()).collect();
        let volumes: Vec<Option<i32>> = books_update.iter().map(|b| b.volume).collect();
        let page_counts: Vec<Option<i32>> = books_update.iter().map(|b| b.page_count).collect();

        sqlx::query(
            r#"
            UPDATE books SET
                title = data.title,
                kind = data.kind,
                series = data.series,
                volume = data.volume,
                page_count = data.page_count,
                updated_at = NOW()
            FROM (
                SELECT * FROM UNNEST($1::uuid[], $2::text[], $3::text[], $4::text[], $5::int[], $6::int[])
                AS t(book_id, title, kind, series, volume, page_count)
            ) AS data
            WHERE books.id = data.book_id
            "#,
        )
        .bind(&book_ids)
        .bind(&titles)
        .bind(&kinds)
        .bind(&series)
        .bind(&volumes)
        .bind(&page_counts)
        .execute(&mut *tx)
        .await?;

        books_update.clear();
    }

    // Batch update files using UNNEST
    if !files_update.is_empty() {
        let file_ids: Vec<Uuid> = files_update.iter().map(|f| f.file_id).collect();
        let formats: Vec<String> = files_update.iter().map(|f| f.format.clone()).collect();
        let sizes: Vec<i64> = files_update.iter().map(|f| f.size_bytes).collect();
        let mtimes: Vec<DateTime<Utc>> = files_update.iter().map(|f| f.mtime).collect();
        let fingerprints: Vec<String> = files_update.iter().map(|f| f.fingerprint.clone()).collect();

        sqlx::query(
            r#"
            UPDATE book_files SET
                format = data.format,
                size_bytes = data.size,
                mtime = data.mtime,
                fingerprint = data.fp,
                parse_status = 'ok',
                parse_error_opt = NULL,
                updated_at = NOW()
            FROM (
                SELECT * FROM UNNEST($1::uuid[], $2::text[], $3::bigint[], $4::timestamptz[], $5::text[])
                AS t(file_id, format, size, mtime, fp)
            ) AS data
            WHERE book_files.id = data.file_id
            "#,
        )
        .bind(&file_ids)
        .bind(&formats)
        .bind(&sizes)
        .bind(&mtimes)
        .bind(&fingerprints)
        .execute(&mut *tx)
        .await?;

        files_update.clear();
    }

    // Batch insert books using UNNEST
    if !books_insert.is_empty() {
        let book_ids: Vec<Uuid> = books_insert.iter().map(|b| b.book_id).collect();
        let library_ids: Vec<Uuid> = books_insert.iter().map(|b| b.library_id).collect();
        let kinds: Vec<String> = books_insert.iter().map(|b| b.kind.clone()).collect();
        let titles: Vec<String> = books_insert.iter().map(|b| b.title.clone()).collect();
        let series: Vec<Option<String>> = books_insert.iter().map(|b| b.series.clone()).collect();
        let volumes: Vec<Option<i32>> = books_insert.iter().map(|b| b.volume).collect();
        let page_counts: Vec<Option<i32>> = books_insert.iter().map(|b| b.page_count).collect();
        let thumbnail_paths: Vec<Option<String>> =
            books_insert.iter().map(|b| b.thumbnail_path.clone()).collect();

        sqlx::query(
            r#"
            INSERT INTO books (id, library_id, kind, title, series, volume, page_count, thumbnail_path)
            SELECT * FROM UNNEST($1::uuid[], $2::uuid[], $3::text[], $4::text[], $5::text[], $6::int[], $7::int[], $8::text[])
            AS t(id, library_id, kind, title, series, volume, page_count, thumbnail_path)
            "#,
        )
        .bind(&book_ids)
        .bind(&library_ids)
        .bind(&kinds)
        .bind(&titles)
        .bind(&series)
        .bind(&volumes)
        .bind(&page_counts)
        .bind(&thumbnail_paths)
        .execute(&mut *tx)
        .await?;

        books_insert.clear();
    }

    // Batch insert files using UNNEST
    if !files_insert.is_empty() {
        let file_ids: Vec<Uuid> = files_insert.iter().map(|f| f.file_id).collect();
        let book_ids: Vec<Uuid> = files_insert.iter().map(|f| f.book_id).collect();
        let formats: Vec<String> = files_insert.iter().map(|f| f.format.clone()).collect();
        let abs_paths: Vec<String> = files_insert.iter().map(|f| f.abs_path.clone()).collect();
        let sizes: Vec<i64> = files_insert.iter().map(|f| f.size_bytes).collect();
        let mtimes: Vec<DateTime<Utc>> = files_insert.iter().map(|f| f.mtime).collect();
        let fingerprints: Vec<String> = files_insert.iter().map(|f| f.fingerprint.clone()).collect();
        let statuses: Vec<String> = files_insert.iter().map(|f| f.parse_status.clone()).collect();
        let errors: Vec<Option<String>> = files_insert.iter().map(|f| f.parse_error.clone()).collect();

        sqlx::query(
            r#"
            INSERT INTO book_files (id, book_id, format, abs_path, size_bytes, mtime, fingerprint, parse_status, parse_error_opt)
            SELECT * FROM UNNEST($1::uuid[], $2::uuid[], $3::text[], $4::text[], $5::bigint[], $6::timestamptz[], $7::text[], $8::text[], $9::text[])
            AS t(id, book_id, format, abs_path, size_bytes, mtime, fingerprint, parse_status, parse_error_opt)
            "#,
        )
        .bind(&file_ids)
        .bind(&book_ids)
        .bind(&formats)
        .bind(&abs_paths)
        .bind(&sizes)
        .bind(&mtimes)
        .bind(&fingerprints)
        .bind(&statuses)
        .bind(&errors)
        .execute(&mut *tx)
        .await?;

        files_insert.clear();
    }

    // Batch insert errors using UNNEST
    if !errors_insert.is_empty() {
        let job_ids: Vec<Uuid> = errors_insert.iter().map(|e| e.job_id).collect();
        let file_paths: Vec<String> = errors_insert.iter().map(|e| e.file_path.clone()).collect();
        let messages: Vec<String> = errors_insert.iter().map(|e| e.error_message.clone()).collect();

        sqlx::query(
            r#"
            INSERT INTO index_job_errors (job_id, file_path, error_message)
            SELECT * FROM UNNEST($1::uuid[], $2::text[], $3::text[])
            AS t(job_id, file_path, error_message)
            "#,
        )
        .bind(&job_ids)
        .bind(&file_paths)
        .bind(&messages)
        .execute(&mut *tx)
        .await?;

        errors_insert.clear();
    }

    tx.commit().await?;
    tracing::info!("[BATCH] Flushed all batches in {:?}", start.elapsed());

    Ok(())
}
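The UNNEST pattern above binds one Postgres array parameter per column, so each batch must first be decomposed from rows (`Vec<BookUpdate>`) into parallel column vectors. A reduced, runnable sketch of that struct-of-arrays step (plain `String` IDs replace `Uuid`, and `decompose` is an illustrative helper, not from the source):

```rust
// Reduced BookUpdate: one row destined for a batched UPDATE.
struct BookUpdate {
    book_id: String,
    title: String,
    page_count: Option<i32>,
}

// Split a batch of rows into parallel per-column Vecs. Each Vec is then
// bound as a single array parameter: $1::uuid[], $2::text[], $3::int[].
fn decompose(batch: &[BookUpdate]) -> (Vec<String>, Vec<String>, Vec<Option<i32>>) {
    let ids = batch.iter().map(|b| b.book_id.clone()).collect();
    let titles = batch.iter().map(|b| b.title.clone()).collect();
    let pages = batch.iter().map(|b| b.page_count).collect();
    (ids, titles, pages)
}

fn main() {
    let batch = vec![
        BookUpdate { book_id: "b1".into(), title: "Vol 1".into(), page_count: Some(180) },
        BookUpdate { book_id: "b2".into(), title: "Vol 2".into(), page_count: None },
    ];
    let (ids, titles, pages) = decompose(&batch);
    assert_eq!(ids, ["b1", "b2"]);
    assert_eq!(titles, ["Vol 1", "Vol 2"]);
    assert_eq!(pages, vec![Some(180), None]);
}
```

The payoff of this shape is one round-trip and one statement per table regardless of batch size, instead of one statement per row.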
108
apps/indexer/src/converter.rs
Normal file
108
apps/indexer/src/converter.rs
Normal file
@@ -0,0 +1,108 @@
use anyhow::Result;
use sqlx::Row;
use tracing::{info, warn};
use uuid::Uuid;

use crate::{utils, AppState};

/// Execute a `cbr_to_cbz` job for the given `book_id`.
///
/// Flow:
/// 1. Read book file info from DB
/// 2. Resolve physical path
/// 3. Convert CBR → CBZ via `parsers::convert_cbr_to_cbz`
/// 4. Update `book_files` and `books` in DB
/// 5. Delete the original CBR (failure here does not fail the job)
/// 6. Mark job as success
pub async fn convert_book(state: &AppState, job_id: Uuid, book_id: Uuid) -> Result<()> {
    info!("[CONVERTER] Starting CBR→CBZ conversion for book {} (job {})", book_id, job_id);

    // Fetch current file info
    let row = sqlx::query(
        r#"
        SELECT bf.id as file_id, bf.abs_path, bf.format
        FROM book_files bf
        WHERE bf.book_id = $1
        ORDER BY bf.updated_at DESC
        LIMIT 1
        "#,
    )
    .bind(book_id)
    .fetch_optional(&state.pool)
    .await?;

    let row = row.ok_or_else(|| anyhow::anyhow!("no book file found for book {}", book_id))?;

    let file_id: Uuid = row.get("file_id");
    let abs_path: String = row.get("abs_path");
    let format: String = row.get("format");

    if format != "cbr" {
        return Err(anyhow::anyhow!(
            "book {} is not CBR (format={}), skipping conversion",
            book_id,
            format
        ));
    }

    let physical_path = utils::remap_libraries_path(&abs_path);
    let cbr_path = std::path::Path::new(&physical_path);

    info!("[CONVERTER] Converting {} → CBZ", cbr_path.display());

    // Update job status to running (already set by claim_next_job; this updates current_file)
    sqlx::query("UPDATE index_jobs SET current_file = $2 WHERE id = $1")
        .bind(job_id)
        .bind(&abs_path)
        .execute(&state.pool)
        .await?;

    // Do the conversion
    let cbz_path = parsers::convert_cbr_to_cbz(cbr_path)?;

    info!("[CONVERTER] CBZ created at {}", cbz_path.display());

    // Remap physical path back to /libraries/ canonical form
    let new_abs_path = utils::unmap_libraries_path(&cbz_path.to_string_lossy());

    // Update book_files: abs_path + format
    sqlx::query("UPDATE book_files SET abs_path = $2, format = 'cbz', updated_at = NOW() WHERE id = $1")
        .bind(file_id)
        .bind(&new_abs_path)
        .execute(&state.pool)
        .await?;

    // Update books: kind stays 'comic', updated_at refreshed
    sqlx::query("UPDATE books SET updated_at = NOW() WHERE id = $1")
        .bind(book_id)
        .execute(&state.pool)
        .await?;

    info!("[CONVERTER] DB updated for book {}", book_id);

    // Delete the original CBR file (best-effort)
    if let Err(e) = std::fs::remove_file(cbr_path) {
        warn!(
            "[CONVERTER] Could not delete original CBR {}: {} (non-fatal)",
            cbr_path.display(),
            e
        );
    } else {
        info!("[CONVERTER] Deleted original CBR {}", cbr_path.display());
    }

    // Mark job success
    sqlx::query(
        "UPDATE index_jobs SET status = 'success', finished_at = NOW(), progress_percent = 100, current_file = NULL WHERE id = $1",
    )
    .bind(job_id)
    .execute(&state.pool)
    .await?;

    info!("[CONVERTER] Job {} completed successfully", job_id);
    Ok(())
}
341
apps/indexer/src/job.rs
Normal file
341
apps/indexer/src/job.rs
Normal file
@@ -0,0 +1,341 @@
use anyhow::Result;
use rayon::prelude::*;
use sqlx::{PgPool, Row};
use tracing::{error, info};
use uuid::Uuid;

use crate::{analyzer, converter, meili, scanner, AppState};

pub async fn cleanup_stale_jobs(pool: &PgPool) -> Result<()> {
    let result = sqlx::query(
        r#"
        UPDATE index_jobs
        SET status = 'failed',
            finished_at = NOW(),
            error_opt = 'Job interrupted by indexer restart'
        WHERE status = 'running'
          AND started_at < NOW() - INTERVAL '5 minutes'
        RETURNING id
        "#,
    )
    .fetch_all(pool)
    .await?;

    if !result.is_empty() {
        let count = result.len();
        let ids: Vec<String> = result
            .iter()
            .map(|row| row.get::<Uuid, _>("id").to_string())
            .collect();
        info!(
            "[CLEANUP] Marked {} stale job(s) as failed: {}",
            count,
            ids.join(", ")
        );
    }

    Ok(())
}

pub async fn claim_next_job(pool: &PgPool) -> Result<Option<(Uuid, Option<Uuid>)>> {
    let mut tx = pool.begin().await?;

    let row = sqlx::query(
        r#"
        SELECT j.id, j.type, j.library_id
        FROM index_jobs j
        WHERE j.status = 'pending'
          AND (
            (j.type IN ('rebuild', 'full_rebuild') AND NOT EXISTS (
                SELECT 1 FROM index_jobs
                WHERE status = 'running'
                  AND type IN ('rebuild', 'full_rebuild')
            ))
            OR
            j.type NOT IN ('rebuild', 'full_rebuild')
          )
        ORDER BY
            CASE j.type
                WHEN 'full_rebuild' THEN 1
                WHEN 'rebuild' THEN 2
                ELSE 3
            END,
            j.created_at ASC
        FOR UPDATE SKIP LOCKED
        LIMIT 1
        "#,
    )
    .fetch_optional(&mut *tx)
    .await?;

    let Some(row) = row else {
        tx.commit().await?;
        return Ok(None);
    };

    let id: Uuid = row.get("id");
    let job_type: String = row.get("type");
    let library_id: Option<Uuid> = row.get("library_id");

    if job_type == "rebuild" || job_type == "full_rebuild" {
        let has_running_rebuild: bool = sqlx::query_scalar(
            r#"
            SELECT EXISTS(
                SELECT 1 FROM index_jobs
                WHERE status = 'running'
                  AND type IN ('rebuild', 'full_rebuild')
                  AND id != $1
            )
            "#,
        )
        .bind(id)
        .fetch_one(&mut *tx)
        .await?;

        if has_running_rebuild {
            tx.rollback().await?;
            return Ok(None);
        }
    }

    sqlx::query(
        "UPDATE index_jobs SET status = 'running', started_at = NOW(), error_opt = NULL WHERE id = $1",
    )
    .bind(id)
    .execute(&mut *tx)
    .await?;

    tx.commit().await?;
    Ok(Some((id, library_id)))
}

pub async fn fail_job(pool: &PgPool, job_id: Uuid, error_message: &str) -> Result<()> {
    sqlx::query(
        "UPDATE index_jobs SET status = 'failed', finished_at = NOW(), error_opt = $2 WHERE id = $1",
    )
    .bind(job_id)
    .bind(error_message)
    .execute(pool)
    .await?;
    Ok(())
}

pub async fn is_job_cancelled(pool: &PgPool, job_id: Uuid) -> Result<bool> {
    let status: Option<String> =
        sqlx::query_scalar("SELECT status FROM index_jobs WHERE id = $1")
            .bind(job_id)
            .fetch_optional(pool)
            .await?;

    Ok(status.as_deref() == Some("cancelled"))
}

pub async fn process_job(
    state: &AppState,
    job_id: Uuid,
    target_library_id: Option<Uuid>,
) -> Result<()> {
    info!("[JOB] Processing {} library={:?}", job_id, target_library_id);

    let (job_type, book_id): (String, Option<Uuid>) = {
        let row = sqlx::query("SELECT type, book_id FROM index_jobs WHERE id = $1")
            .bind(job_id)
            .fetch_one(&state.pool)
            .await?;
        (row.get("type"), row.get("book_id"))
    };

    // CBR to CBZ conversion
    if job_type == "cbr_to_cbz" {
        let book_id = book_id
            .ok_or_else(|| anyhow::anyhow!("cbr_to_cbz job {} has no book_id", job_id))?;
        converter::convert_book(state, job_id, book_id).await?;
        return Ok(());
    }

    // Thumbnail rebuild: generate thumbnails for books missing them
    if job_type == "thumbnail_rebuild" {
        sqlx::query(
            "UPDATE index_jobs SET status = 'generating_thumbnails', started_at = NOW(), phase2_started_at = NOW() WHERE id = $1",
        )
        .bind(job_id)
        .execute(&state.pool)
        .await?;

        analyzer::analyze_library_books(state, job_id, target_library_id, true).await?;

        sqlx::query(
            "UPDATE index_jobs SET status = 'success', finished_at = NOW(), progress_percent = 100, current_file = NULL WHERE id = $1",
        )
        .bind(job_id)
        .execute(&state.pool)
        .await?;

        return Ok(());
    }

    // Thumbnail regenerate: clear all thumbnails then re-generate
    if job_type == "thumbnail_regenerate" {
        sqlx::query(
            "UPDATE index_jobs SET status = 'generating_thumbnails', started_at = NOW(), phase2_started_at = NOW() WHERE id = $1",
        )
        .bind(job_id)
        .execute(&state.pool)
        .await?;

        analyzer::regenerate_thumbnails(state, job_id, target_library_id).await?;

        sqlx::query(
            "UPDATE index_jobs SET status = 'success', finished_at = NOW(), progress_percent = 100, current_file = NULL WHERE id = $1",
        )
        .bind(job_id)
        .execute(&state.pool)
        .await?;

        return Ok(());
    }

    let is_full_rebuild = job_type == "full_rebuild";
    info!(
        "[JOB] {} type={} full_rebuild={}",
        job_id, job_type, is_full_rebuild
    );

    // Full rebuild: delete existing data first
    if is_full_rebuild {
        info!("[JOB] Full rebuild: deleting existing data");

        if let Some(library_id) = target_library_id {
            sqlx::query(
                "DELETE FROM book_files WHERE book_id IN (SELECT id FROM books WHERE library_id = $1)",
            )
            .bind(library_id)
            .execute(&state.pool)
            .await?;
            sqlx::query("DELETE FROM books WHERE library_id = $1")
                .bind(library_id)
                .execute(&state.pool)
                .await?;
            info!("[JOB] Deleted existing data for library {}", library_id);
        } else {
            sqlx::query("DELETE FROM book_files")
                .execute(&state.pool)
                .await?;
            sqlx::query("DELETE FROM books").execute(&state.pool).await?;
            info!("[JOB] Deleted all existing data");
        }
    }

    let libraries = if let Some(library_id) = target_library_id {
        sqlx::query("SELECT id, root_path FROM libraries WHERE id = $1 AND enabled = TRUE")
            .bind(library_id)
            .fetch_all(&state.pool)
            .await?
    } else {
        sqlx::query("SELECT id, root_path FROM libraries WHERE enabled = TRUE")
            .fetch_all(&state.pool)
            .await?
    };

    // Count total files for progress estimation
    let library_paths: Vec<String> = libraries
        .iter()
        .map(|library| {
            crate::utils::remap_libraries_path(&library.get::<String, _>("root_path"))
        })
        .collect();

    let total_files: usize = library_paths
        .par_iter()
        .map(|root_path| {
            walkdir::WalkDir::new(root_path)
                .into_iter()
                .filter_map(Result::ok)
                .filter(|entry| {
                    entry.file_type().is_file()
                        && parsers::detect_format(entry.path()).is_some()
                })
                .count()
        })
        .sum();

    info!(
        "[JOB] Found {} libraries, {} total files to index",
        libraries.len(),
        total_files
    );

    sqlx::query("UPDATE index_jobs SET total_files = $2 WHERE id = $1")
        .bind(job_id)
        .bind(total_files as i32)
        .execute(&state.pool)
        .await?;

    let mut stats = scanner::JobStats {
        scanned_files: 0,
        indexed_files: 0,
        removed_files: 0,
        errors: 0,
    };

    let mut total_processed_count = 0i32;

    // Phase 1: Discovery
    for library in &libraries {
        let library_id: Uuid = library.get("id");
        let root_path: String = library.get("root_path");
        let root_path = crate::utils::remap_libraries_path(&root_path);
        match scanner::scan_library_discovery(
            state,
            job_id,
            library_id,
            std::path::Path::new(&root_path),
            &mut stats,
            &mut total_processed_count,
            total_files,
            is_full_rebuild,
        )
        .await
        {
            Ok(()) => {}
            Err(err) => {
                let err_str = err.to_string();
                if err_str.contains("cancelled") || err_str.contains("Cancelled") {
                    return Err(err);
                }
                stats.errors += 1;
                error!(library_id = %library_id, error = %err, "library scan failed");
            }
        }
    }

    // Sync search index after discovery (books are visible immediately)
    meili::sync_meili(&state.pool, &state.meili_url, &state.meili_master_key).await?;

    // For full rebuild: clean up orphaned thumbnail files (old UUIDs)
    if is_full_rebuild {
        analyzer::cleanup_orphaned_thumbnails(state).await?;
    }

    // Phase 2: Analysis (extract page_count + thumbnails for new/updated books)
    sqlx::query(
        "UPDATE index_jobs SET status = 'generating_thumbnails', phase2_started_at = NOW(), stats_json = $2, current_file = NULL, processed_files = $3 WHERE id = $1",
    )
    .bind(job_id)
    .bind(serde_json::to_value(&stats)?)
    .bind(total_processed_count)
    .execute(&state.pool)
    .await?;

    analyzer::analyze_library_books(state, job_id, target_library_id, false).await?;

    sqlx::query(
        "UPDATE index_jobs SET status = 'success', finished_at = NOW(), progress_percent = 100, current_file = NULL WHERE id = $1",
    )
    .bind(job_id)
    .execute(&state.pool)
    .await?;

    Ok(())
}
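`claim_next_job` orders pending jobs with a `CASE` expression (full_rebuild first, then rebuild, then everything else) and `created_at` as the tie-breaker. The same ordering can be sketched in plain Rust (integer timestamps stand in for `created_at`; `priority` is an illustrative helper mirroring the SQL `CASE`, not part of the source):

```rust
// Map a job type to its claim priority, as in the SQL CASE expression.
fn priority(job_type: &str) -> u8 {
    match job_type {
        "full_rebuild" => 1,
        "rebuild" => 2,
        _ => 3,
    }
}

fn main() {
    // (type, created_at) pairs; smaller created_at means submitted earlier.
    let mut pending = vec![
        ("thumbnail_rebuild", 10),
        ("full_rebuild", 30),
        ("rebuild", 20),
    ];
    // Sort by (priority, created_at), matching ORDER BY CASE ..., created_at ASC.
    pending.sort_by_key(|(ty, created_at)| (priority(ty), *created_at));
    let order: Vec<&str> = pending.iter().map(|(ty, _)| *ty).collect();
    assert_eq!(order, ["full_rebuild", "rebuild", "thumbnail_rebuild"]);
}
```

In the real query, `FOR UPDATE SKIP LOCKED` additionally makes the claim safe when several indexer instances poll concurrently: a row already locked by another claimant is skipped rather than waited on.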
Some files were not shown because too many files have changed in this diff.