Compare commits

...

7 Commits

Author SHA1 Message Date
85cad1a7e7 refactor: streamline API calls and enhance configuration management
- Refactor multiple API routes to utilize a centralized configuration function for base URL and token management, improving code consistency and maintainability.
- Replace direct environment variable access with a unified config function in the `lib/api.ts` file.
- Remove redundant error handling and streamline response handling in various API endpoints.
- Delete unused job-related API routes and settings, simplifying the overall API structure.
2026-03-09 14:16:01 +01:00
0f5094575a docs: add AGENTS.md per module and unify ports to 70XX
- Add CLAUDE.md at root and AGENTS.md in apps/api, apps/indexer,
  apps/backoffice, crates/parsers with module-specific guidelines
- Unify all service ports to 70XX (no more internal/external split):
  API 7080, Indexer 7081, Backoffice 7082
- Update docker-compose.yml, Dockerfiles, config.rs defaults,
  .env.example, backoffice routes, bench.sh, smoke.sh

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-09 13:57:39 +01:00
131c50b1a1 chore: remove docker-compose configuration
- Delete the docker-compose.yml file, which contained service definitions for postgres, meilisearch, migrate, api, indexer, backoffice, and associated volumes.
- This change may indicate a shift in deployment strategy or service management.
2026-03-08 21:34:28 +01:00
6d4c400017 refactor: update AppState references to use state module
- Change all instances of AppState to reference the new state module across multiple files for consistency.
- Clean up imports in auth, books, index_jobs, libraries, pages, search, settings, thumbnails, and tokens modules.
- Simplify main.rs by removing unused code and organizing middleware and route handlers under the new handlers module.
2026-03-08 21:19:22 +01:00
539dc77d57 feat: enhance thumbnail management with full rebuild functionality
- Extend thumbnail regeneration logic to support full rebuilds, allowing for the deletion of orphaned thumbnails.
- Implement database updates to clear thumbnail paths for books during regeneration and full rebuild processes.
- Improve logging to provide detailed insights on the number of deleted thumbnails and cleared database entries.
- Refactor code for better organization and clarity in handling thumbnail files.
2026-03-08 21:10:34 +01:00
9c7120c3dc feat: enhance library scanning and metadata parsing
- Introduce a structured approach to collect book file information before parsing.
- Implement parallel processing for metadata extraction to improve performance.
- Refactor file handling to utilize a new FileInfo struct for better organization.
- Update database interactions to use collected file information for batch inserts.
- Improve logging for scanning and parsing processes to provide better insights.
2026-03-08 21:07:03 +01:00
b1844a4f01 feat: enhance concurrency settings for rendering and thumbnail generation
- Introduce dynamic loading of concurrent render limits from the database for both page rendering and thumbnail generation.
- Update API to utilize the loaded concurrency settings, defaulting to 8 for page renders and 4 for thumbnails.
- Modify front-end settings page to reflect changes in concurrency limits and provide user guidance on their impact.
- Ensure that changes to limits require a server restart to take effect, with clear messaging in the UI.
2026-03-08 21:03:04 +01:00
57 changed files with 2250 additions and 1782 deletions

View File

@@ -21,11 +21,11 @@ API_BOOTSTRAP_TOKEN=change-me-in-production
# ============================================================================= # =============================================================================
# API Service # API Service
API_LISTEN_ADDR=0.0.0.0:8080 API_LISTEN_ADDR=0.0.0.0:7080
API_BASE_URL=http://api:8080 API_BASE_URL=http://api:7080
# Indexer Service # Indexer Service
INDEXER_LISTEN_ADDR=0.0.0.0:8081 INDEXER_LISTEN_ADDR=0.0.0.0:7081
INDEXER_SCAN_INTERVAL_SECONDS=5 INDEXER_SCAN_INTERVAL_SECONDS=5
# Meilisearch Search Engine # Meilisearch Search Engine
@@ -56,8 +56,8 @@ THUMBNAILS_HOST_PATH=../data/thumbnails
# Port Configuration # Port Configuration
# ============================================================================= # =============================================================================
# To change ports, edit docker-compose.yml directly: # To change ports, edit docker-compose.yml directly:
# - API: change "7080:8080" to "YOUR_PORT:8080" # - API: change "7080:7080" to "YOUR_PORT:7080"
# - Indexer: change "7081:8081" to "YOUR_PORT:8081" # - Indexer: change "7081:7081" to "YOUR_PORT:7081"
# - Backoffice: change "7082:8082" to "YOUR_PORT:8082" # - Backoffice: change "7082:7082" to "YOUR_PORT:7082"
# - Meilisearch: change "7700:7700" to "YOUR_PORT:7700" # - Meilisearch: change "7700:7700" to "YOUR_PORT:7700"
# - PostgreSQL: change "6432:5432" to "YOUR_PORT:5432" # - PostgreSQL: change "6432:5432" to "YOUR_PORT:5432"

View File

@@ -73,12 +73,14 @@ sqlx migrate add -r migration_name
### Docker Development ### Docker Development
`docker-compose.yml` is at the project **root** (not in `infra/`).
```bash ```bash
# Start infrastructure only # Start infrastructure only
cd infra && docker compose up -d postgres meilisearch docker compose up -d postgres meilisearch
# Start full stack # Start full stack
cd infra && docker compose up -d docker compose up -d
# View logs # View logs
docker compose logs -f api docker compose logs -f api
@@ -226,24 +228,21 @@ pub struct BookItem {
Before:
```
stripstream-librarian/
├── apps/
│   ├── api/            # REST API (axum)
│   │   └── src/
│   │       ├── main.rs
│   │       ├── books.rs
│   │       ├── pages.rs
│   │       └── ...
│   ├── indexer/        # Background indexing service
│   │   └── src/
│   │       └── main.rs
│   └── backoffice/     # Next.js admin UI
├── crates/
│   ├── core/           # Shared config
│   │   └── src/config.rs
│   └── parsers/        # Book parsing (CBZ, CBR, PDF)
├── infra/
│   ├── migrations/     # SQL migrations
│   └── docker-compose.yml
└── libraries/          # Book storage (mounted volume)
```

After:
```
stripstream-librarian/
├── apps/
│   ├── api/            # REST API (axum) — port 7080
│   │   └── src/        # books.rs, pages.rs, thumbnails.rs, state.rs, auth.rs...
│   ├── indexer/        # Background indexing service — port 7081
│   │   └── src/        # worker.rs, scanner.rs, batch.rs, scheduler.rs, watcher.rs...
│   └── backoffice/     # Next.js admin UI — port 7082
├── crates/
│   ├── core/           # Shared config (env vars)
│   │   └── src/config.rs
│   └── parsers/        # Book parsing (CBZ, CBR, PDF)
├── infra/
│   └── migrations/     # SQL migrations (sqlx)
├── data/
│   └── thumbnails/     # Thumbnails generated by the API
├── libraries/          # Book storage (mounted volume)
└── docker-compose.yml  # At the repo root (not in infra/)
```
### Key Files ### Key Files
@@ -251,8 +250,12 @@ stripstream-librarian/
| File | Purpose | | File | Purpose |
|------|---------| |------|---------|
| `apps/api/src/books.rs` | Book CRUD endpoints | | `apps/api/src/books.rs` | Book CRUD endpoints |
| `apps/api/src/pages.rs` | Page rendering & caching | | `apps/api/src/pages.rs` | Page rendering & caching (LRU + disk) |
| `apps/indexer/src/main.rs` | Indexing logic, batch processing | | `apps/api/src/thumbnails.rs` | Thumbnail generation (triggered by indexer) |
| `apps/api/src/state.rs` | AppState, Semaphore concurrent_renders |
| `apps/indexer/src/scanner.rs` | Filesystem scan, rayon parallel parsing |
| `apps/indexer/src/batch.rs` | Bulk DB ops via UNNEST |
| `apps/indexer/src/worker.rs` | Job loop, watcher, scheduler orchestration |
| `crates/parsers/src/lib.rs` | Format detection, metadata parsing | | `crates/parsers/src/lib.rs` | Format detection, metadata parsing |
| `crates/core/src/config.rs` | Configuration from environment | | `crates/core/src/config.rs` | Configuration from environment |
| `infra/migrations/*.sql` | Database schema | | `infra/migrations/*.sql` | Database schema |
@@ -269,7 +272,7 @@ impl IndexerConfig {
pub fn from_env() -> Result<Self> { pub fn from_env() -> Result<Self> {
Ok(Self { Ok(Self {
listen_addr: std::env::var("INDEXER_LISTEN_ADDR") listen_addr: std::env::var("INDEXER_LISTEN_ADDR")
.unwrap_or_else(|_| "0.0.0.0:8081".to_string()), .unwrap_or_else(|_| "0.0.0.0:7081".to_string()),
database_url: std::env::var("DATABASE_URL") database_url: std::env::var("DATABASE_URL")
.context("DATABASE_URL is required")?, .context("DATABASE_URL is required")?,
// ... // ...
@@ -298,4 +301,6 @@ fn remap_libraries_path(path: &str) -> String {
- **Workspace**: This is a Cargo workspace. Always specify the package when building specific apps. - **Workspace**: This is a Cargo workspace. Always specify the package when building specific apps.
- **Dependencies**: External crates are defined in workspace `Cargo.toml`, not individual `Cargo.toml`. - **Dependencies**: External crates are defined in workspace `Cargo.toml`, not individual `Cargo.toml`.
- **Database**: PostgreSQL is required. Run migrations before starting services. - **Database**: PostgreSQL is required. Run migrations before starting services.
- **External Tools**: The indexer relies on `unar` (for CBR) and `pdftoppm` (for PDF) being installed on the system. - **External Tools**: 4 system tools required — `unrar` (CBR page count), `unar` (CBR extraction), `pdfinfo` (PDF page count), `pdftoppm` (PDF page render). Note: `unrar` and `unar` are distinct tools.
- **Thumbnails**: generated by the **API** service (not the indexer). The indexer triggers a checkup via `POST /index/jobs/:id/thumbnails/checkup` after indexing.
- **Sub-AGENTS.md**: module-specific guidelines in `apps/api/`, `apps/indexer/`, `apps/backoffice/`, `crates/parsers/`.
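The external-tools note above lists `pdftoppm` for PDF page rendering. A minimal sketch of shelling out to it; the helper name and flag choices are illustrative, not the project's actual invocation:

```rust
use std::process::Command;

/// Render a single PDF page to PNG via `pdftoppm`. Hypothetical helper for
/// illustration; the indexer/API may call the tool with different options.
fn render_pdf_page(pdf_path: &str, page: u32, out_prefix: &str) -> std::io::Result<()> {
    let page_arg = page.to_string();
    let status = Command::new("pdftoppm")
        .args(["-png", "-singlefile", "-r", "150"]) // PNG output, one file, 150 DPI
        .args(["-f", page_arg.as_str(), "-l", page_arg.as_str()]) // render only this page
        .arg(pdf_path)
        .arg(out_prefix) // pdftoppm writes <out_prefix>.png
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("pdftoppm exited with {status}"),
        ));
    }
    Ok(())
}
```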

CLAUDE.md Normal file
View File

@@ -0,0 +1,72 @@
# Stripstream Librarian
Comic/ebook library manager. Multi-crate Cargo workspace with a Next.js backoffice.
## Architecture
| Service | Folder | Local port |
|---------|---------|------------|
| API REST (axum) | `apps/api/` | 7080 |
| Indexer (background) | `apps/indexer/` | 7081 |
| Backoffice (Next.js) | `apps/backoffice/` | 7082 |
| PostgreSQL | infra | 6432 |
| Meilisearch | infra | 7700 |
Shared crates: `crates/core` (env config), `crates/parsers` (CBZ/CBR/PDF).
## Commands
```bash
# Build
cargo build # entire workspace
cargo build -p api # specific crate
cargo build --release # optimized build
# Linting / format
cargo clippy
cargo fmt
# Tests
cargo test
cargo test -p parsers
# Infra (dependencies only); docker-compose.yml is at the repo root
docker compose up -d postgres meilisearch
# Backoffice dev
cd apps/backoffice && npm install && npm run dev # http://localhost:7082
# Migrations
sqlx migrate run # DATABASE_URL must be set
```
## Environment
```bash
cp .env.example .env # then edit the REQUIRED values
```
Variables **required** at startup: `DATABASE_URL`, `MEILI_URL`, `MEILI_MASTER_KEY`, `API_BOOTSTRAP_TOKEN`.
## Gotchas
- **System dependencies**: 4 tools required: `unrar` (CBR listing), `unar` (CBR extraction), `pdfinfo` (PDF page count), `pdftoppm` (PDF rendering). `unrar` and `unar` are distinct tools.
- **Backoffice port**: `npm run dev` listens on **7082**, not 3000.
- **LIBRARIES_ROOT_PATH**: paths stored in the DB start with `/libraries/`; in local dev, set this variable to remap them to the real folder (see the sketch after this list).
- **Thumbnails**: stored in `THUMBNAIL_DIRECTORY` (default `/data/thumbnails`), generated by the **API** (not the indexer); the indexer triggers a checkup via `POST /index/jobs/:id/thumbnails/checkup`.
- **Cargo workspace**: external dependencies are declared in the root `Cargo.toml`, not in the individual crates.
- **Migrations**: in `infra/migrations/`, managed by sqlx. Always migrate before starting the services.
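The LIBRARIES_ROOT_PATH remapping mentioned above boils down to a prefix swap. A minimal sketch, assuming the variable simply replaces the leading `/libraries/` segment (the real `remap_libraries_path` in `apps/api` may handle more cases):

```rust
/// Remap a DB path such as "/libraries/comics/foo.cbz" onto the folder pointed
/// to by LIBRARIES_ROOT_PATH. Sketch only; the prefix-swap behaviour is assumed
/// from the gotcha above, not copied from the codebase.
fn remap_libraries_path(path: &str) -> String {
    match std::env::var("LIBRARIES_ROOT_PATH") {
        Ok(root) => match path.strip_prefix("/libraries/") {
            Some(rest) => format!("{}/{}", root.trim_end_matches('/'), rest),
            None => path.to_string(),
        },
        // No override set (e.g. inside Docker): use the path as stored.
        Err(_) => path.to_string(),
    }
}
```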
## Key Files
| File | Role |
|---------|------|
| `crates/core/src/config.rs` | Config from env (API, Indexer, AdminUI) |
| `crates/parsers/src/lib.rs` | Format detection, metadata extraction |
| `apps/api/src/books.rs` | Book CRUD endpoints |
| `apps/api/src/pages.rs` | Page rendering + LRU cache |
| `apps/indexer/src/scanner.rs` | Filesystem scan |
| `infra/migrations/*.sql` | DB schema |
> See `AGENTS.md` for detailed code conventions (error handling, sqlx patterns, async/tokio).
> Module-specific `AGENTS.md` files exist in `apps/api/`, `apps/indexer/`, `apps/backoffice/`, `crates/parsers/`.

View File

@@ -38,16 +38,16 @@ docker compose up -d
``` ```
This will start: This will start:
- PostgreSQL (port 5432) - PostgreSQL (port 6432)
- Meilisearch (port 7700) - Meilisearch (port 7700)
- API service (port 8080) - API service (port 7080)
- Indexer service (port 8081) - Indexer service (port 7081)
- Backoffice web UI (port 8082) - Backoffice web UI (port 7082)
### Accessing the Application ### Accessing the Application
- **Backoffice**: http://localhost:8082 - **Backoffice**: http://localhost:7082
- **API**: http://localhost:8080 - **API**: http://localhost:7080
- **Meilisearch**: http://localhost:7700 - **Meilisearch**: http://localhost:7700
### Default Credentials ### Default Credentials
@@ -113,9 +113,9 @@ The backoffice will be available at http://localhost:3000
| Variable | Description | Default | | Variable | Description | Default |
|----------|-------------|---------| |----------|-------------|---------|
| `API_LISTEN_ADDR` | API service bind address | `0.0.0.0:8080` | | `API_LISTEN_ADDR` | API service bind address | `0.0.0.0:7080` |
| `INDEXER_LISTEN_ADDR` | Indexer service bind address | `0.0.0.0:8081` | | `INDEXER_LISTEN_ADDR` | Indexer service bind address | `0.0.0.0:7081` |
| `BACKOFFICE_PORT` | Backoffice web UI port | `8082` | | `BACKOFFICE_PORT` | Backoffice web UI port | `7082` |
| `DATABASE_URL` | PostgreSQL connection string | `postgres://stripstream:stripstream@postgres:5432/stripstream` | | `DATABASE_URL` | PostgreSQL connection string | `postgres://stripstream:stripstream@postgres:5432/stripstream` |
| `MEILI_URL` | Meilisearch connection URL | `http://meilisearch:7700` | | `MEILI_URL` | Meilisearch connection URL | `http://meilisearch:7700` |
| `MEILI_MASTER_KEY` | Meilisearch master key (required) | - | | `MEILI_MASTER_KEY` | Meilisearch master key (required) | - |
@@ -128,7 +128,7 @@ The backoffice will be available at http://localhost:3000
The API is documented with OpenAPI/Swagger. When running locally, access the docs at: The API is documented with OpenAPI/Swagger. When running locally, access the docs at:
``` ```
http://localhost:8080/api-docs http://localhost:7080/swagger-ui
``` ```
## Project Structure ## Project Structure

apps/api/AGENTS.md Normal file
View File

@@ -0,0 +1,73 @@
# apps/api — REST API (axum)
HTTP service on port **7080**. See the root `AGENTS.md` for the global conventions.
## File structure
| File | Role |
|---------|------|
| `main.rs` | Routes, AppState initialization, concurrent_renders Semaphore |
| `state.rs` | `AppState` (pool, caches, metrics), `load_concurrent_renders` |
| `auth.rs` | `require_admin` / `require_read` middlewares, token authentication |
| `error.rs` | `ApiError` with constructors `bad_request`, `not_found`, `internal`, etc. |
| `books.rs` | Book CRUD, thumbnails |
| `pages.rs` | Page rendering + two-level cache (in-memory LRU + disk) |
| `libraries.rs` | Library CRUD, scan triggering |
| `index_jobs.rs` | Job tracking, SSE progress streaming |
| `thumbnails.rs` | Thumbnail rebuild/regeneration |
| `tokens.rs` | API token management (create/revoke) |
| `settings.rs` | Application settings (stored in DB, `limits` key) |
| `openapi.rs` | OpenAPI docs via utoipa, served at `/swagger-ui` |
## Key patterns
### Handler type
```rust
async fn my_handler(
State(state): State<AppState>,
Path(id): Path<Uuid>,
) -> Result<Json<MyDto>, ApiError> {
// ...
}
```
### API errors
```rust
// Constructors available in error.rs
ApiError::bad_request("message")
ApiError::not_found("resource not found")
ApiError::internal("unexpected error")
ApiError::unauthorized("missing token")
ApiError::forbidden("admin required")
// Automatic conversion from sqlx::Error and std::io::Error
```
### Authentication
- **Bootstrap token**: direct comparison (`API_BOOTSTRAP_TOKEN`), Admin scope
- **DB tokens**: `stl_<prefix>_<secret>` format, argon2 hash in the DB, `admin` or `read` scope (see the sketch after this list)
- Middleware `require_admin` → admin routes; `require_read` → read routes
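A minimal sketch of how such a token could be checked, assuming the `argon2` crate's `PasswordVerifier` API and a hypothetical `api_tokens` table keyed by prefix; names are illustrative, not the actual code in `auth.rs`:

```rust
use argon2::{Argon2, PasswordHash, PasswordVerifier};

/// Illustrative only: split "stl_<prefix>_<secret>", look the prefix up,
/// then verify the secret against the stored argon2 hash. Returns the scope.
async fn verify_api_token(pool: &sqlx::PgPool, presented: &str) -> Option<String> {
    let mut parts = presented.splitn(3, '_');
    if parts.next() != Some("stl") {
        return None;
    }
    let prefix = parts.next()?;
    let secret = parts.next()?;

    // Hypothetical table/column names; the real schema may differ.
    let row: Option<(String, String)> =
        sqlx::query_as("SELECT secret_hash, scope FROM api_tokens WHERE prefix = $1")
            .bind(prefix)
            .fetch_optional(pool)
            .await
            .ok()?;

    let (hash, scope) = row?;
    let parsed = PasswordHash::new(&hash).ok()?;
    Argon2::default()
        .verify_password(secret.as_bytes(), &parsed)
        .ok()
        .map(|_| scope)
}
```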
### OpenAPI (utoipa)
```rust
#[utoipa::path(get, path = "/books/{id}", ...)]
async fn get_book(...) { }
// Register the handler in openapi.rs (ApiDoc)
```
### Page cache (`pages.rs`)
- **In-memory cache**: 512-entry LRU (`AppState.page_cache`)
- **Disk cache**: `IMAGE_CACHE_DIR` (default `/tmp/stripstream-image-cache`), SHA-256 key
- Concurrency limited by `AppState.page_render_limit` (Semaphore, configurable in DB)
- `spawn_blocking` for image rendering (CPU-bound); see the sketch after this list
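A condensed sketch of that lookup order (memory, then disk, then a semaphore-gated render off the async runtime). `render_page_blocking`, the parameters, and the key handling are placeholders, not the real code in `pages.rs`:

```rust
use std::{path::PathBuf, sync::Arc};

use crate::state::AppState;

/// Placeholder for the CPU-bound decode/re-encode done in pages.rs.
fn render_page_blocking(_book_id: uuid::Uuid, _page: u32) -> anyhow::Result<Vec<u8>> {
    Ok(Vec::new())
}

/// Sketch of the lookup order; `cache_key` is assumed to already be a SHA-256 hex string.
async fn get_page_bytes(
    state: &AppState,
    book_id: uuid::Uuid,
    page: u32,
    cache_key: String,
) -> anyhow::Result<Arc<Vec<u8>>> {
    // 1. In-memory LRU (AppState.page_cache, 512 entries).
    if let Some(bytes) = state.page_cache.lock().await.get(&cache_key) {
        return Ok(bytes.clone());
    }

    // 2. Disk cache under IMAGE_CACHE_DIR.
    let dir = std::env::var("IMAGE_CACHE_DIR")
        .unwrap_or_else(|_| "/tmp/stripstream-image-cache".to_string());
    let disk_path = PathBuf::from(dir).join(&cache_key);
    if let Ok(bytes) = tokio::fs::read(&disk_path).await {
        let bytes = Arc::new(bytes);
        state.page_cache.lock().await.put(cache_key, bytes.clone());
        return Ok(bytes);
    }

    // 3. Miss: render with bounded concurrency, off the async runtime.
    let _permit = state.page_render_limit.acquire().await?;
    let rendered =
        tokio::task::spawn_blocking(move || render_page_blocking(book_id, page)).await??;
    let bytes = Arc::new(rendered);
    tokio::fs::write(&disk_path, bytes.as_ref()).await.ok(); // best-effort disk write
    state.page_cache.lock().await.put(cache_key, bytes.clone());
    Ok(bytes)
}
```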
### concurrent_renders setting
Stored in DB: `SELECT value FROM app_settings WHERE key = 'limits'` → JSON `{"concurrent_renders": N}`.
Loaded at startup by `load_concurrent_renders`.
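Since the value only lives in that JSON column, changing it is a one-line update followed by a restart. A sketch, assuming the `app_settings` row described above:

```rust
use sqlx::PgPool;

/// Sketch: bump concurrent_renders without touching other keys in the 'limits'
/// JSON value. Table and key taken from the query above; a restart is required
/// for the new limit to be picked up.
async fn set_concurrent_renders(pool: &PgPool, n: i32) -> sqlx::Result<()> {
    sqlx::query(
        r#"UPDATE app_settings
           SET value = jsonb_set(value, '{concurrent_renders}', to_jsonb($1::int))
           WHERE key = 'limits'"#,
    )
    .bind(n)
    .execute(pool)
    .await?;
    Ok(())
}
```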
## Gotchas
- **LIBRARIES_ROOT_PATH**: `abs_path` values in the DB start with `/libraries/`. Call `remap_libraries_path()` before any file access.
- **Read rate limit**: `read_rate_limit` middleware on the read routes (100 req/5s by default).
- **Metrics**: `/metrics` exposes `requests_total`, `page_cache_hits`, `page_cache_misses` (atomics in `AppState.metrics`).
- **Swagger**: served at `/swagger-ui`, JSON spec at `/openapi.json`.

View File

@@ -26,5 +26,5 @@ RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen
ENV LANG=en_US.UTF-8 ENV LANG=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8 ENV LC_ALL=en_US.UTF-8
COPY --from=builder /app/target/release/api /usr/local/bin/api COPY --from=builder /app/target/release/api /usr/local/bin/api
EXPOSE 8080 EXPOSE 7080
CMD ["/usr/local/bin/api"] CMD ["/usr/local/bin/api"]

View File

@@ -0,0 +1,42 @@
use axum::{
extract::State,
middleware::Next,
response::{IntoResponse, Response},
};
use std::time::Duration;
use std::sync::atomic::Ordering;
use crate::state::AppState;
pub async fn request_counter(
State(state): State<AppState>,
req: axum::extract::Request,
next: Next,
) -> Response {
state.metrics.requests_total.fetch_add(1, Ordering::Relaxed);
next.run(req).await
}
pub async fn read_rate_limit(
State(state): State<AppState>,
req: axum::extract::Request,
next: Next,
) -> Response {
let mut limiter = state.read_rate_limit.lock().await;
if limiter.window_started_at.elapsed() >= Duration::from_secs(1) {
limiter.window_started_at = std::time::Instant::now();
limiter.requests_in_window = 0;
}
if limiter.requests_in_window >= 120 {
return (
axum::http::StatusCode::TOO_MANY_REQUESTS,
"rate limit exceeded",
)
.into_response();
}
limiter.requests_in_window += 1;
drop(limiter);
next.run(req).await
}

View File

@@ -8,7 +8,7 @@ use axum::{
use chrono::Utc; use chrono::Utc;
use sqlx::Row; use sqlx::Row;
use crate::{error::ApiError, AppState}; use crate::{error::ApiError, state::AppState};
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
pub enum Scope { pub enum Scope {

View File

@@ -5,7 +5,7 @@ use sqlx::Row;
use uuid::Uuid; use uuid::Uuid;
use utoipa::ToSchema; use utoipa::ToSchema;
use crate::{error::ApiError, AppState}; use crate::{error::ApiError, state::AppState};
#[derive(Deserialize, ToSchema)] #[derive(Deserialize, ToSchema)]
pub struct ListBooksQuery { pub struct ListBooksQuery {

apps/api/src/handlers.rs Normal file
View File

@@ -0,0 +1,26 @@
use axum::{extract::State, Json};
use std::sync::atomic::Ordering;
use crate::{error::ApiError, state::AppState};
pub async fn health() -> &'static str {
"ok"
}
pub async fn docs_redirect() -> impl axum::response::IntoResponse {
axum::response::Redirect::to("/swagger-ui/")
}
pub async fn ready(State(state): State<AppState>) -> Result<Json<serde_json::Value>, ApiError> {
sqlx::query("SELECT 1").execute(&state.pool).await?;
Ok(Json(serde_json::json!({"status": "ready"})))
}
pub async fn metrics(State(state): State<AppState>) -> String {
format!(
"requests_total {}\npage_cache_hits {}\npage_cache_misses {}\n",
state.metrics.requests_total.load(Ordering::Relaxed),
state.metrics.page_cache_hits.load(Ordering::Relaxed),
state.metrics.page_cache_misses.load(Ordering::Relaxed),
)
}

View File

@@ -8,7 +8,7 @@ use tokio_stream::Stream;
use uuid::Uuid; use uuid::Uuid;
use utoipa::ToSchema; use utoipa::ToSchema;
use crate::{error::ApiError, AppState}; use crate::{error::ApiError, state::AppState};
#[derive(Deserialize, ToSchema)] #[derive(Deserialize, ToSchema)]
pub struct RebuildRequest { pub struct RebuildRequest {

View File

@@ -6,7 +6,7 @@ use sqlx::Row;
use uuid::Uuid; use uuid::Uuid;
use utoipa::ToSchema; use utoipa::ToSchema;
use crate::{error::ApiError, AppState}; use crate::{error::ApiError, state::AppState};
#[derive(Serialize, ToSchema)] #[derive(Serialize, ToSchema)]
pub struct LibraryResponse { pub struct LibraryResponse {

View File

@@ -1,70 +1,36 @@
mod auth; mod auth;
mod books; mod books;
mod error; mod error;
mod handlers;
mod index_jobs; mod index_jobs;
mod libraries; mod libraries;
mod api_middleware;
mod openapi; mod openapi;
mod pages; mod pages;
mod search; mod search;
mod settings; mod settings;
mod state;
mod thumbnails; mod thumbnails;
mod tokens; mod tokens;
use std::{ use std::sync::Arc;
num::NonZeroUsize, use std::time::Instant;
sync::{
atomic::{AtomicU64, Ordering},
Arc,
},
time::{Duration, Instant},
};
use axum::{ use axum::{
middleware, middleware,
response::IntoResponse,
routing::{delete, get}, routing::{delete, get},
Json, Router, Router,
}; };
use utoipa::OpenApi; use utoipa::OpenApi;
use utoipa_swagger_ui::SwaggerUi; use utoipa_swagger_ui::SwaggerUi;
use lru::LruCache; use lru::LruCache;
use std::num::NonZeroUsize;
use stripstream_core::config::ApiConfig; use stripstream_core::config::ApiConfig;
use sqlx::postgres::PgPoolOptions; use sqlx::postgres::PgPoolOptions;
use tokio::sync::{Mutex, Semaphore}; use tokio::sync::{Mutex, Semaphore};
use tracing::info; use tracing::info;
#[derive(Clone)] use crate::state::{load_concurrent_renders, AppState, Metrics, ReadRateLimit};
struct AppState {
pool: sqlx::PgPool,
bootstrap_token: Arc<str>,
meili_url: Arc<str>,
meili_master_key: Arc<str>,
page_cache: Arc<Mutex<LruCache<String, Arc<Vec<u8>>>>>,
page_render_limit: Arc<Semaphore>,
metrics: Arc<Metrics>,
read_rate_limit: Arc<Mutex<ReadRateLimit>>,
}
struct Metrics {
requests_total: AtomicU64,
page_cache_hits: AtomicU64,
page_cache_misses: AtomicU64,
}
struct ReadRateLimit {
window_started_at: Instant,
requests_in_window: u32,
}
impl Metrics {
fn new() -> Self {
Self {
requests_total: AtomicU64::new(0),
page_cache_hits: AtomicU64::new(0),
page_cache_misses: AtomicU64::new(0),
}
}
}
#[tokio::main] #[tokio::main]
async fn main() -> anyhow::Result<()> { async fn main() -> anyhow::Result<()> {
@@ -80,13 +46,17 @@ async fn main() -> anyhow::Result<()> {
.connect(&config.database_url) .connect(&config.database_url)
.await?; .await?;
// Load concurrent_renders from settings, default to 8
let concurrent_renders = load_concurrent_renders(&pool).await;
info!("Using concurrent_renders limit: {}", concurrent_renders);
let state = AppState { let state = AppState {
pool, pool,
bootstrap_token: Arc::from(config.api_bootstrap_token), bootstrap_token: Arc::from(config.api_bootstrap_token),
meili_url: Arc::from(config.meili_url), meili_url: Arc::from(config.meili_url),
meili_master_key: Arc::from(config.meili_master_key), meili_master_key: Arc::from(config.meili_master_key),
page_cache: Arc::new(Mutex::new(LruCache::new(NonZeroUsize::new(512).expect("non-zero")))), page_cache: Arc::new(Mutex::new(LruCache::new(NonZeroUsize::new(512).expect("non-zero")))),
page_render_limit: Arc::new(Semaphore::new(8)), page_render_limit: Arc::new(Semaphore::new(concurrent_renders)),
metrics: Arc::new(Metrics::new()), metrics: Arc::new(Metrics::new()),
read_rate_limit: Arc::new(Mutex::new(ReadRateLimit { read_rate_limit: Arc::new(Mutex::new(ReadRateLimit {
window_started_at: Instant::now(), window_started_at: Instant::now(),
@@ -125,21 +95,21 @@ async fn main() -> anyhow::Result<()> {
.route("/books/:id/pages/:n", get(pages::get_page)) .route("/books/:id/pages/:n", get(pages::get_page))
.route("/libraries/:library_id/series", get(books::list_series)) .route("/libraries/:library_id/series", get(books::list_series))
.route("/search", get(search::search_books)) .route("/search", get(search::search_books))
.route_layer(middleware::from_fn_with_state(state.clone(), read_rate_limit)) .route_layer(middleware::from_fn_with_state(state.clone(), api_middleware::read_rate_limit))
.route_layer(middleware::from_fn_with_state( .route_layer(middleware::from_fn_with_state(
state.clone(), state.clone(),
auth::require_read, auth::require_read,
)); ));
let app = Router::new() let app = Router::new()
.route("/health", get(health)) .route("/health", get(handlers::health))
.route("/ready", get(ready)) .route("/ready", get(handlers::ready))
.route("/metrics", get(metrics)) .route("/metrics", get(handlers::metrics))
.route("/docs", get(docs_redirect)) .route("/docs", get(handlers::docs_redirect))
.merge(SwaggerUi::new("/swagger-ui").url("/openapi.json", openapi::ApiDoc::openapi())) .merge(SwaggerUi::new("/swagger-ui").url("/openapi.json", openapi::ApiDoc::openapi()))
.merge(admin_routes) .merge(admin_routes)
.merge(read_routes) .merge(read_routes)
.layer(middleware::from_fn_with_state(state.clone(), request_counter)) .layer(middleware::from_fn_with_state(state.clone(), api_middleware::request_counter))
.with_state(state); .with_state(state);
let listener = tokio::net::TcpListener::bind(&config.listen_addr).await?; let listener = tokio::net::TcpListener::bind(&config.listen_addr).await?;
@@ -148,57 +118,3 @@ async fn main() -> anyhow::Result<()> {
Ok(()) Ok(())
} }
async fn health() -> &'static str {
"ok"
}
async fn docs_redirect() -> impl axum::response::IntoResponse {
axum::response::Redirect::to("/swagger-ui/")
}
async fn ready(axum::extract::State(state): axum::extract::State<AppState>) -> Result<Json<serde_json::Value>, error::ApiError> {
sqlx::query("SELECT 1").execute(&state.pool).await?;
Ok(Json(serde_json::json!({"status": "ready"})))
}
async fn metrics(axum::extract::State(state): axum::extract::State<AppState>) -> String {
format!(
"requests_total {}\npage_cache_hits {}\npage_cache_misses {}\n",
state.metrics.requests_total.load(Ordering::Relaxed),
state.metrics.page_cache_hits.load(Ordering::Relaxed),
state.metrics.page_cache_misses.load(Ordering::Relaxed),
)
}
async fn request_counter(
axum::extract::State(state): axum::extract::State<AppState>,
req: axum::extract::Request,
next: axum::middleware::Next,
) -> axum::response::Response {
state.metrics.requests_total.fetch_add(1, Ordering::Relaxed);
next.run(req).await
}
async fn read_rate_limit(
axum::extract::State(state): axum::extract::State<AppState>,
req: axum::extract::Request,
next: axum::middleware::Next,
) -> axum::response::Response {
let mut limiter = state.read_rate_limit.lock().await;
if limiter.window_started_at.elapsed() >= Duration::from_secs(1) {
limiter.window_started_at = Instant::now();
limiter.requests_in_window = 0;
}
if limiter.requests_in_window >= 120 {
return (
axum::http::StatusCode::TOO_MANY_REQUESTS,
"rate limit exceeded",
)
.into_response();
}
limiter.requests_in_window += 1;
drop(limiter);
next.run(req).await
}

View File

@@ -20,7 +20,7 @@ use tracing::{debug, error, info, instrument, warn};
use uuid::Uuid; use uuid::Uuid;
use walkdir::WalkDir; use walkdir::WalkDir;
use crate::{error::ApiError, AppState}; use crate::{error::ApiError, state::AppState};
fn remap_libraries_path(path: &str) -> String { fn remap_libraries_path(path: &str) -> String {
if let Ok(root) = std::env::var("LIBRARIES_ROOT_PATH") { if let Ok(root) = std::env::var("LIBRARIES_ROOT_PATH") {

View File

@@ -2,7 +2,7 @@ use axum::{extract::{Query, State}, Json};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use utoipa::ToSchema; use utoipa::ToSchema;
use crate::{error::ApiError, AppState}; use crate::{error::ApiError, state::AppState};
#[derive(Deserialize, ToSchema)] #[derive(Deserialize, ToSchema)]
pub struct SearchQuery { pub struct SearchQuery {

View File

@@ -7,7 +7,7 @@ use serde::{Deserialize, Serialize};
use serde_json::Value; use serde_json::Value;
use sqlx::Row; use sqlx::Row;
use crate::{error::ApiError, AppState}; use crate::{error::ApiError, state::AppState};
#[derive(Debug, Clone, Serialize, Deserialize)] #[derive(Debug, Clone, Serialize, Deserialize)]
pub struct UpdateSettingRequest { pub struct UpdateSettingRequest {

apps/api/src/state.rs Normal file
View File

@@ -0,0 +1,61 @@
use std::sync::{
atomic::AtomicU64,
Arc,
};
use std::time::Instant;
use lru::LruCache;
use sqlx::{Pool, Postgres, Row};
use tokio::sync::{Mutex, Semaphore};
#[derive(Clone)]
pub struct AppState {
pub pool: sqlx::PgPool,
pub bootstrap_token: Arc<str>,
pub meili_url: Arc<str>,
pub meili_master_key: Arc<str>,
pub page_cache: Arc<Mutex<LruCache<String, Arc<Vec<u8>>>>>,
pub page_render_limit: Arc<Semaphore>,
pub metrics: Arc<Metrics>,
pub read_rate_limit: Arc<Mutex<ReadRateLimit>>,
}
pub struct Metrics {
pub requests_total: AtomicU64,
pub page_cache_hits: AtomicU64,
pub page_cache_misses: AtomicU64,
}
pub struct ReadRateLimit {
pub window_started_at: Instant,
pub requests_in_window: u32,
}
impl Metrics {
pub fn new() -> Self {
Self {
requests_total: AtomicU64::new(0),
page_cache_hits: AtomicU64::new(0),
page_cache_misses: AtomicU64::new(0),
}
}
}
pub async fn load_concurrent_renders(pool: &Pool<Postgres>) -> usize {
let default_concurrency = 8;
let row = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'limits'"#)
.fetch_optional(pool)
.await;
match row {
Ok(Some(row)) => {
let value: serde_json::Value = row.get("value");
value
.get("concurrent_renders")
.and_then(|v: &serde_json::Value| v.as_u64())
.map(|v| v as usize)
.unwrap_or(default_concurrency)
}
_ => default_concurrency,
}
}

View File

@@ -1,4 +1,6 @@
use std::path::Path; use std::path::Path;
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Arc;
use anyhow::Context; use anyhow::Context;
use axum::{ use axum::{
@@ -6,6 +8,7 @@ use axum::{
http::StatusCode, http::StatusCode,
Json, Json,
}; };
use futures::stream::{self, StreamExt};
use image::GenericImageView; use image::GenericImageView;
use serde::Deserialize; use serde::Deserialize;
use sqlx::Row; use sqlx::Row;
@@ -13,7 +16,7 @@ use tracing::{info, warn};
use uuid::Uuid; use uuid::Uuid;
use utoipa::ToSchema; use utoipa::ToSchema;
use crate::{error::ApiError, index_jobs, pages, AppState}; use crate::{error::ApiError, index_jobs, pages, state::AppState};
#[derive(Clone)] #[derive(Clone)]
struct ThumbnailConfig { struct ThumbnailConfig {
@@ -24,6 +27,25 @@ struct ThumbnailConfig {
directory: String, directory: String,
} }
async fn load_thumbnail_concurrency(pool: &sqlx::PgPool) -> usize {
let default_concurrency = 4;
let row = sqlx::query(r#"SELECT value FROM app_settings WHERE key = 'limits'"#)
.fetch_optional(pool)
.await;
match row {
Ok(Some(row)) => {
let value: serde_json::Value = row.get("value");
value
.get("concurrent_renders")
.and_then(|v| v.as_u64())
.map(|v| v as usize)
.unwrap_or(default_concurrency)
}
_ => default_concurrency,
}
}
async fn load_thumbnail_config(pool: &sqlx::PgPool) -> ThumbnailConfig { async fn load_thumbnail_config(pool: &sqlx::PgPool) -> ThumbnailConfig {
let fallback = ThumbnailConfig { let fallback = ThumbnailConfig {
enabled: true, enabled: true,
@@ -115,8 +137,74 @@ async fn run_checkup(state: AppState, job_id: Uuid) {
} }
}; };
// Regenerate: clear existing thumbnails in scope so they get regenerated // Regenerate or full_rebuild: clear existing thumbnails in scope so they get regenerated
if job_type == "thumbnail_regenerate" { if job_type == "thumbnail_regenerate" || job_type == "full_rebuild" {
let config = load_thumbnail_config(pool).await;
if job_type == "full_rebuild" {
// For full_rebuild: delete orphaned thumbnail files (books were deleted, new ones have new UUIDs)
// Get all existing book IDs to keep their thumbnails
let existing_book_ids: std::collections::HashSet<Uuid> = sqlx::query_scalar(
r#"SELECT id FROM books WHERE (library_id = $1 OR $1 IS NULL)"#,
)
.bind(library_id)
.fetch_all(pool)
.await
.unwrap_or_default()
.into_iter()
.collect();
// Delete thumbnail files that don't correspond to existing books
let thumbnail_dir = Path::new(&config.directory);
if thumbnail_dir.exists() {
let mut deleted_count = 0;
if let Ok(entries) = std::fs::read_dir(thumbnail_dir) {
for entry in entries.flatten() {
if let Some(file_name) = entry.file_name().to_str() {
if file_name.ends_with(".webp") {
if let Some(book_id_str) = file_name.strip_suffix(".webp") {
if let Ok(book_id) = Uuid::parse_str(book_id_str) {
if !existing_book_ids.contains(&book_id) {
if let Err(e) = std::fs::remove_file(entry.path()) {
warn!("Failed to delete orphaned thumbnail {}: {}", entry.path().display(), e);
} else {
deleted_count += 1;
}
}
}
}
}
}
}
}
info!("thumbnails full_rebuild: deleted {} orphaned thumbnail files", deleted_count);
}
} else {
// For regenerate: delete thumbnail files for books with thumbnails
let book_ids_to_clear: Vec<Uuid> = sqlx::query_scalar(
r#"SELECT id FROM books WHERE (library_id = $1 OR $1 IS NULL) AND thumbnail_path IS NOT NULL"#,
)
.bind(library_id)
.fetch_all(pool)
.await
.unwrap_or_default();
let mut deleted_count = 0;
for book_id in &book_ids_to_clear {
let filename = format!("{}.webp", book_id);
let thumbnail_path = Path::new(&config.directory).join(&filename);
if thumbnail_path.exists() {
if let Err(e) = std::fs::remove_file(&thumbnail_path) {
warn!("Failed to delete thumbnail file {}: {}", thumbnail_path.display(), e);
} else {
deleted_count += 1;
}
}
}
info!("thumbnails regenerate: deleted {} thumbnail files", deleted_count);
}
// Clear thumbnail_path in database
let cleared = sqlx::query( let cleared = sqlx::query(
r#"UPDATE books SET thumbnail_path = NULL WHERE (library_id = $1 OR $1 IS NULL)"#, r#"UPDATE books SET thumbnail_path = NULL WHERE (library_id = $1 OR $1 IS NULL)"#,
) )
@@ -124,7 +212,7 @@ async fn run_checkup(state: AppState, job_id: Uuid) {
.execute(pool) .execute(pool)
.await; .await;
if let Ok(res) = cleared { if let Ok(res) = cleared {
info!("thumbnails regenerate: cleared {} books", res.rows_affected()); info!("thumbnails {}: cleared {} books in database", job_type, res.rows_affected());
} }
} }
@@ -156,38 +244,57 @@ async fn run_checkup(state: AppState, job_id: Uuid) {
.execute(pool) .execute(pool)
.await; .await;
Before (sequential loop over books):
```rust
for (i, &book_id) in book_ids.iter().enumerate() {
    match pages::render_book_page_1(&state, book_id, config.width, config.quality).await {
        Ok(page_bytes) => {
            match generate_thumbnail(&page_bytes, &config) {
                Ok(thumb_bytes) => {
                    if let Ok(path) = save_thumbnail(book_id, &thumb_bytes, &config) {
                        if sqlx::query("UPDATE books SET thumbnail_path = $1 WHERE id = $2")
                            .bind(&path)
                            .bind(book_id)
                            .execute(pool)
                            .await
                            .is_ok()
                        {
                            let processed = (i + 1) as i32;
                            let percent = ((i + 1) as f64 / total as f64 * 100.0) as i32;
                            let _ = sqlx::query(
                                "UPDATE index_jobs SET processed_files = $2, progress_percent = $3 WHERE id = $1",
                            )
                            .bind(job_id)
                            .bind(processed)
                            .bind(percent)
                            .execute(pool)
                            .await;
                        }
                    }
                }
                Err(e) => warn!("thumbnail generate failed for book {}: {:?}", book_id, e),
            }
        }
        Err(e) => warn!("render page 1 failed for book {}: {:?}", book_id, e),
    }
}
```

After (bounded concurrency via `for_each_concurrent`):
```rust
let concurrency = load_thumbnail_concurrency(pool).await;
let processed_count = Arc::new(AtomicI32::new(0));
let pool_clone = pool.clone();
let job_id_clone = job_id;
let config_clone = config.clone();
let state_clone = state.clone();
let total_clone = total;

stream::iter(book_ids)
    .for_each_concurrent(concurrency, |book_id| {
        let processed_count = processed_count.clone();
        let pool = pool_clone.clone();
        let job_id = job_id_clone;
        let config = config_clone.clone();
        let state = state_clone.clone();
        let total = total_clone;

        async move {
            match pages::render_book_page_1(&state, book_id, config.width, config.quality).await {
                Ok(page_bytes) => {
                    match generate_thumbnail(&page_bytes, &config) {
                        Ok(thumb_bytes) => {
                            if let Ok(path) = save_thumbnail(book_id, &thumb_bytes, &config) {
                                if sqlx::query("UPDATE books SET thumbnail_path = $1 WHERE id = $2")
                                    .bind(&path)
                                    .bind(book_id)
                                    .execute(&pool)
                                    .await
                                    .is_ok()
                                {
                                    let processed = processed_count.fetch_add(1, Ordering::Relaxed) + 1;
                                    let percent = (processed as f64 / total as f64 * 100.0) as i32;
                                    let _ = sqlx::query(
                                        "UPDATE index_jobs SET processed_files = $2, progress_percent = $3 WHERE id = $1",
                                    )
                                    .bind(job_id)
                                    .bind(processed)
                                    .bind(percent)
                                    .execute(&pool)
                                    .await;
                                }
                            }
                        }
                        Err(e) => warn!("thumbnail generate failed for book {}: {:?}", book_id, e),
                    }
                }
                Err(e) => warn!("render page 1 failed for book {}: {:?}", book_id, e),
            }
        }
    })
    .await;
```
let _ = sqlx::query( let _ = sqlx::query(
"UPDATE index_jobs SET status = 'success', finished_at = NOW(), progress_percent = 100, current_file = NULL WHERE id = $1", "UPDATE index_jobs SET status = 'success', finished_at = NOW(), progress_percent = 100, current_file = NULL WHERE id = $1",

View File

@@ -8,7 +8,7 @@ use sqlx::Row;
use uuid::Uuid; use uuid::Uuid;
use utoipa::ToSchema; use utoipa::ToSchema;
use crate::{error::ApiError, AppState}; use crate::{error::ApiError, state::AppState};
#[derive(Deserialize, ToSchema)] #[derive(Deserialize, ToSchema)]
pub struct CreateTokenRequest { pub struct CreateTokenRequest {

View File

@@ -1,4 +1,4 @@
API_BASE_URL=http://localhost:8080 API_BASE_URL=http://localhost:7080
API_BOOTSTRAP_TOKEN=stripstream-dev-bootstrap-token API_BOOTSTRAP_TOKEN=stripstream-dev-bootstrap-token
NEXT_PUBLIC_API_BASE_URL=http://localhost:8080 NEXT_PUBLIC_API_BASE_URL=http://localhost:7080
NEXT_PUBLIC_API_BOOTSTRAP_TOKEN=stripstream-dev-bootstrap-token NEXT_PUBLIC_API_BOOTSTRAP_TOKEN=stripstream-dev-bootstrap-token

apps/backoffice/AGENTS.md Normal file
View File

@@ -0,0 +1,66 @@
# apps/backoffice — Admin UI (Next.js)
Next.js 16 app with React 19, Tailwind CSS v4, TypeScript. Dev port: **7082** (`npm run dev`).
## Structure
```
app/
├── layout.tsx # Global layout (sticky glassmorphism nav, ThemeProvider)
├── page.tsx # Dashboard
├── books/ # Book list and detail
├── libraries/ # Library management
├── jobs/ # Job monitoring
├── tokens/ # API tokens
├── settings/ # Settings
├── components/ # Domain components
│ ├── ui/ # Generic components (Button, Card, Badge, Icon, Input, ProgressBar, StatBox...)
│ ├── BookCard.tsx
│ ├── JobProgress.tsx
│ ├── JobsList.tsx
│ ├── LibraryForm.tsx
│ ├── FolderBrowser.tsx / FolderPicker.tsx
│ └── ...
└── globals.css # CSS variables, Tailwind base
lib/
└── api.ts # API client: DTO types + fetch functions to the Rust API
```
## API client (lib/api.ts)
All calls to the Rust API go through `lib/api.ts`. The DTO types are defined there:
- `LibraryDto`, `IndexJobDto`, `BookDto`, `TokenDto`, `FolderItem`
Add new endpoints and types in this file.
## UI components
Generic components live in `app/components/ui/`. Use them rather than raw HTML elements:
```tsx
import { Button, Card, Badge, Icon, Input, ProgressBar, StatBox } from "@/app/components/ui";
```
## Conventions
- **App Router**: all pages are Server Components by default. Use `"use client"` only for interactivity.
- **Tailwind v4**: config in `postcss.config.js` + `tailwind.config.js`. CSS variables in `globals.css`.
- **Theme**: `ThemeProvider` + `ThemeToggle` for dark/light mode via `next-themes`.
- **Icons**: `<Icon name="..." size="sm|md|lg" />` component in `ui/Icon.tsx`; no external library.
- **Navigation**: typed routes in `layout.tsx` (`"/" | "/books" | "/libraries" | "/jobs" | "/tokens" | "/settings"`).
## Commands
```bash
npm install
npm run dev # http://localhost:7082
npm run build
npm run start # Production on http://localhost:7082
```
## Gotchas
- **Port 7082**: not the default Next.js port (3000). Set in the `package.json` scripts (`-p 7082`).
- **API_BASE_URL**: configured via env in production. In local dev, the API must be running on `http://localhost:7080`.
- **React 19 + Next.js 16**: use the new APIs (server actions, the `use()` hook) where available.
- **No global state management**: fetch directly from Server Components, or use `useState`/`useEffect` in Client Components.

View File

@@ -12,11 +12,11 @@ RUN npm run build
FROM node:22-alpine AS runner FROM node:22-alpine AS runner
WORKDIR /app WORKDIR /app
ENV NODE_ENV=production ENV NODE_ENV=production
ENV PORT=8082 ENV PORT=7082
ENV HOST=0.0.0.0 ENV HOST=0.0.0.0
RUN apk add --no-cache wget RUN apk add --no-cache wget
COPY --from=builder /app/.next/standalone ./ COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public COPY --from=builder /app/public ./public
EXPOSE 8082 EXPOSE 7082
CMD ["node", "server.js"] CMD ["node", "server.js"]

View File

@@ -1,35 +1,25 @@
import { NextRequest, NextResponse } from "next/server"; import { NextRequest, NextResponse } from "next/server";
import { config } from "@/lib/api";
export async function GET( export async function GET(
request: NextRequest, request: NextRequest,
{ params }: { params: Promise<{ bookId: string; pageNum: string }> } { params }: { params: Promise<{ bookId: string; pageNum: string }> }
) { ) {
const { bookId, pageNum } = await params; const { bookId, pageNum } = await params;
// Read the query params (format, width, quality)
const { searchParams } = new URL(request.url);
const format = searchParams.get("format") || "webp";
const width = searchParams.get("width") || "";
const quality = searchParams.get("quality") || "";
// Build the URL to the backend API
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
const apiUrl = new URL(`${apiBaseUrl}/books/${bookId}/pages/${pageNum}`);
apiUrl.searchParams.set("format", format);
if (width) apiUrl.searchParams.set("width", width);
if (quality) apiUrl.searchParams.set("quality", quality);
// Send the request to the API
const token = process.env.API_BOOTSTRAP_TOKEN;
if (!token) {
return new NextResponse("API token not configured", { status: 500 });
}
try { try {
const { baseUrl, token } = config();
const { searchParams } = new URL(request.url);
const format = searchParams.get("format") || "webp";
const width = searchParams.get("width") || "";
const quality = searchParams.get("quality") || "";
const apiUrl = new URL(`${baseUrl}/books/${bookId}/pages/${pageNum}`);
apiUrl.searchParams.set("format", format);
if (width) apiUrl.searchParams.set("width", width);
if (quality) apiUrl.searchParams.set("quality", quality);
const response = await fetch(apiUrl.toString(), { const response = await fetch(apiUrl.toString(), {
headers: { headers: { Authorization: `Bearer ${token}` },
Authorization: `Bearer ${token}`,
},
}); });
if (!response.ok) { if (!response.ok) {

View File

@@ -1,4 +1,5 @@
import { NextRequest, NextResponse } from "next/server"; import { NextRequest, NextResponse } from "next/server";
import { config } from "@/lib/api";
export async function GET( export async function GET(
request: NextRequest, request: NextRequest,
@@ -6,19 +7,10 @@ export async function GET(
) { ) {
const { bookId } = await params; const { bookId } = await params;
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
const apiUrl = `${apiBaseUrl}/books/${bookId}/thumbnail`;
const token = process.env.API_BOOTSTRAP_TOKEN;
if (!token) {
return new NextResponse("API token not configured", { status: 500 });
}
try { try {
const response = await fetch(apiUrl, { const { baseUrl, token } = config();
headers: { const response = await fetch(`${baseUrl}/books/${bookId}/thumbnail`, {
Authorization: `Bearer ${token}`, headers: { Authorization: `Bearer ${token}` },
},
}); });
if (!response.ok) { if (!response.ok) {

View File

@@ -1,39 +1,13 @@
import { NextRequest, NextResponse } from "next/server"; import { NextRequest, NextResponse } from "next/server";
import { listFolders } from "@/lib/api";
export async function GET(request: NextRequest) { export async function GET(request: NextRequest) {
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
const apiToken = process.env.API_BOOTSTRAP_TOKEN;
if (!apiToken) {
return NextResponse.json({ error: "API token not configured" }, { status: 500 });
}
try { try {
const { searchParams } = new URL(request.url); const { searchParams } = new URL(request.url);
const path = searchParams.get("path"); const path = searchParams.get("path") || undefined;
const data = await listFolders(path);
let apiUrl = `${apiBaseUrl}/folders`;
if (path) {
apiUrl += `?path=${encodeURIComponent(path)}`;
}
const response = await fetch(apiUrl, {
headers: {
Authorization: `Bearer ${apiToken}`,
},
});
if (!response.ok) {
return NextResponse.json(
{ error: `API error: ${response.status}` },
{ status: response.status }
);
}
const data = await response.json();
return NextResponse.json(data); return NextResponse.json(data);
} catch (error) { } catch (error) {
console.error("Proxy error:", error);
return NextResponse.json({ error: "Failed to fetch folders" }, { status: 500 }); return NextResponse.json({ error: "Failed to fetch folders" }, { status: 500 });
} }
} }

View File

@@ -1,36 +1,15 @@
import { NextRequest, NextResponse } from "next/server"; import { NextRequest, NextResponse } from "next/server";
import { cancelJob } from "@/lib/api";
export async function POST( export async function POST(
request: NextRequest, _request: NextRequest,
{ params }: { params: Promise<{ id: string }> } { params }: { params: Promise<{ id: string }> }
) { ) {
const { id } = await params; const { id } = await params;
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
const apiToken = process.env.API_BOOTSTRAP_TOKEN;
if (!apiToken) {
return NextResponse.json({ error: "API token not configured" }, { status: 500 });
}
try { try {
const response = await fetch(`${apiBaseUrl}/index/cancel/${id}`, { const data = await cancelJob(id);
method: "POST",
headers: {
Authorization: `Bearer ${apiToken}`,
},
});
if (!response.ok) {
return NextResponse.json(
{ error: `API error: ${response.status}` },
{ status: response.status }
);
}
const data = await response.json();
return NextResponse.json(data); return NextResponse.json(data);
} catch (error) { } catch (error) {
console.error("Proxy error:", error);
return NextResponse.json({ error: "Failed to cancel job" }, { status: 500 }); return NextResponse.json({ error: "Failed to cancel job" }, { status: 500 });
} }
} }

View File

@@ -1,35 +1,15 @@
import { NextRequest, NextResponse } from "next/server"; import { NextRequest, NextResponse } from "next/server";
import { apiFetch, IndexJobDto } from "@/lib/api";
export async function GET( export async function GET(
request: NextRequest, _request: NextRequest,
{ params }: { params: Promise<{ id: string }> } { params }: { params: Promise<{ id: string }> }
) { ) {
const { id } = await params; const { id } = await params;
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
const apiToken = process.env.API_BOOTSTRAP_TOKEN;
if (!apiToken) {
return NextResponse.json({ error: "API token not configured" }, { status: 500 });
}
try { try {
const response = await fetch(`${apiBaseUrl}/index/jobs/${id}`, { const data = await apiFetch<IndexJobDto>(`/index/jobs/${id}`);
headers: {
Authorization: `Bearer ${apiToken}`,
},
});
if (!response.ok) {
return NextResponse.json(
{ error: `API error: ${response.status}` },
{ status: response.status }
);
}
const data = await response.json();
return NextResponse.json(data); return NextResponse.json(data);
} catch (error) { } catch (error) {
console.error("Proxy error:", error);
return NextResponse.json({ error: "Failed to fetch job" }, { status: 500 }); return NextResponse.json({ error: "Failed to fetch job" }, { status: 500 });
} }
} }

View File

@@ -1,19 +1,12 @@
import { NextRequest } from "next/server"; import { NextRequest } from "next/server";
import { config } from "@/lib/api";
export async function GET( export async function GET(
request: NextRequest, request: NextRequest,
{ params }: { params: Promise<{ id: string }> } { params }: { params: Promise<{ id: string }> }
) { ) {
const { id } = await params; const { id } = await params;
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080"; const { baseUrl, token } = config();
const apiToken = process.env.API_BOOTSTRAP_TOKEN;
if (!apiToken) {
return new Response(
`data: ${JSON.stringify({ error: "API token not configured" })}\n\n`,
{ status: 500, headers: { "Content-Type": "text/event-stream" } }
);
}
const stream = new ReadableStream({ const stream = new ReadableStream({
async start(controller) { async start(controller) {
@@ -27,10 +20,8 @@ export async function GET(
if (!isActive) return; if (!isActive) return;
try { try {
const response = await fetch(`${apiBaseUrl}/index/jobs/${id}`, { const response = await fetch(`${baseUrl}/index/jobs/${id}`, {
headers: { headers: { Authorization: `Bearer ${token}` },
Authorization: `Bearer ${apiToken}`,
},
}); });
if (response.ok && isActive) { if (response.ok && isActive) {

View File

@@ -0,0 +1,11 @@
import { NextResponse } from "next/server";
import { apiFetch, IndexJobDto } from "@/lib/api";
export async function GET() {
try {
const data = await apiFetch<IndexJobDto[]>("/index/jobs/active");
return NextResponse.json(data);
} catch (error) {
return NextResponse.json({ error: "Failed to fetch active jobs" }, { status: 500 });
}
}

View File

@@ -1,31 +0,0 @@
import { NextRequest, NextResponse } from "next/server";
export async function GET(request: NextRequest) {
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080";
const apiToken = process.env.API_BOOTSTRAP_TOKEN;
if (!apiToken) {
return NextResponse.json({ error: "API token not configured" }, { status: 500 });
}
try {
const response = await fetch(`${apiBaseUrl}/index/status`, {
headers: {
Authorization: `Bearer ${apiToken}`,
},
});
if (!response.ok) {
return NextResponse.json(
{ error: `API error: ${response.status}` },
{ status: response.status }
);
}
const data = await response.json();
return NextResponse.json(data);
} catch (error) {
console.error("Proxy error:", error);
return NextResponse.json({ error: "Failed to fetch jobs" }, { status: 500 });
}
}

View File

@@ -1,15 +1,8 @@
import { NextRequest } from "next/server"; import { NextRequest } from "next/server";
import { config } from "@/lib/api";
export async function GET(request: NextRequest) { export async function GET(request: NextRequest) {
const apiBaseUrl = process.env.API_BASE_URL || "http://api:8080"; const { baseUrl, token } = config();
const apiToken = process.env.API_BOOTSTRAP_TOKEN;
if (!apiToken) {
return new Response(
`data: ${JSON.stringify({ error: "API token not configured" })}\n\n`,
{ status: 500, headers: { "Content-Type": "text/event-stream" } }
);
}
const stream = new ReadableStream({ const stream = new ReadableStream({
async start(controller) { async start(controller) {
@@ -22,10 +15,8 @@ export async function GET(request: NextRequest) {
if (!isActive) return; if (!isActive) return;
try { try {
const response = await fetch(`${apiBaseUrl}/index/status`, { const response = await fetch(`${baseUrl}/index/status`, {
headers: { headers: { Authorization: `Bearer ${token}` },
Authorization: `Bearer ${apiToken}`,
},
}); });
if (response.ok && isActive) { if (response.ok && isActive) {

View File

@@ -1,29 +1,16 @@
import { NextRequest, NextResponse } from "next/server"; import { NextRequest, NextResponse } from "next/server";
import { apiFetch, updateSetting } from "@/lib/api";
export async function GET( export async function GET(
request: NextRequest, _request: NextRequest,
{ params }: { params: Promise<{ key: string }> } { params }: { params: Promise<{ key: string }> }
) { ) {
const { key } = await params;
try { try {
const { key } = await params; const data = await apiFetch<unknown>(`/settings/${key}`);
const baseUrl = process.env.API_BASE_URL || "http://api:8080";
const token = process.env.API_BOOTSTRAP_TOKEN;
const response = await fetch(`${baseUrl}/settings/${key}`, {
headers: {
Authorization: `Bearer ${token}`,
},
cache: "no-store"
});
if (!response.ok) {
return NextResponse.json({ error: "Failed to fetch setting" }, { status: response.status });
}
const data = await response.json();
return NextResponse.json(data); return NextResponse.json(data);
} catch (error) { } catch (error) {
return NextResponse.json({ error: "Internal server error" }, { status: 500 }); return NextResponse.json({ error: "Failed to fetch setting" }, { status: 500 });
} }
} }
@@ -31,29 +18,12 @@ export async function POST(
request: NextRequest, request: NextRequest,
{ params }: { params: Promise<{ key: string }> } { params }: { params: Promise<{ key: string }> }
) { ) {
const { key } = await params;
try { try {
const { key } = await params; const { value } = await request.json();
const baseUrl = process.env.API_BASE_URL || "http://api:8080"; const data = await updateSetting(key, value);
const token = process.env.API_BOOTSTRAP_TOKEN;
const body = await request.json();
const response = await fetch(`${baseUrl}/settings/${key}`, {
method: "POST",
headers: {
Authorization: `Bearer ${token}`,
"Content-Type": "application/json",
},
body: JSON.stringify(body),
cache: "no-store"
});
if (!response.ok) {
return NextResponse.json({ error: "Failed to update setting" }, { status: response.status });
}
const data = await response.json();
return NextResponse.json(data); return NextResponse.json(data);
} catch (error) { } catch (error) {
return NextResponse.json({ error: "Internal server error" }, { status: 500 }); return NextResponse.json({ error: "Failed to update setting" }, { status: 500 });
} }
} }

View File

@@ -1,25 +1,11 @@
import { NextRequest, NextResponse } from "next/server"; import { NextResponse } from "next/server";
import { clearCache } from "@/lib/api";
export async function POST(request: NextRequest) { export async function POST() {
try { try {
const baseUrl = process.env.API_BASE_URL || "http://api:8080"; const data = await clearCache();
const token = process.env.API_BOOTSTRAP_TOKEN;
const response = await fetch(`${baseUrl}/settings/cache/clear`, {
method: "POST",
headers: {
Authorization: `Bearer ${token}`,
},
cache: "no-store"
});
if (!response.ok) {
return NextResponse.json({ error: "Failed to clear cache" }, { status: response.status });
}
const data = await response.json();
return NextResponse.json(data); return NextResponse.json(data);
} catch (error) { } catch (error) {
return NextResponse.json({ error: "Internal server error" }, { status: 500 }); return NextResponse.json({ error: "Failed to clear cache" }, { status: 500 });
} }
} }

View File

@@ -1,24 +1,11 @@
import { NextRequest, NextResponse } from "next/server"; import { NextResponse } from "next/server";
import { getCacheStats } from "@/lib/api";
export async function GET(request: NextRequest) { export async function GET() {
try { try {
const baseUrl = process.env.API_BASE_URL || "http://api:8080"; const data = await getCacheStats();
const token = process.env.API_BOOTSTRAP_TOKEN;
const response = await fetch(`${baseUrl}/settings/cache/stats`, {
headers: {
Authorization: `Bearer ${token}`,
},
cache: "no-store"
});
if (!response.ok) {
return NextResponse.json({ error: "Failed to fetch cache stats" }, { status: response.status });
}
const data = await response.json();
return NextResponse.json(data); return NextResponse.json(data);
} catch (error) { } catch (error) {
return NextResponse.json({ error: "Internal server error" }, { status: 500 }); return NextResponse.json({ error: "Failed to fetch cache stats" }, { status: 500 });
} }
} }

View File

@@ -1,24 +0,0 @@
import { NextRequest, NextResponse } from "next/server";
export async function GET(request: NextRequest) {
try {
const baseUrl = process.env.API_BASE_URL || "http://api:8080";
const token = process.env.API_BOOTSTRAP_TOKEN;
const response = await fetch(`${baseUrl}/settings`, {
headers: {
Authorization: `Bearer ${token}`,
},
cache: "no-store"
});
if (!response.ok) {
return NextResponse.json({ error: "Failed to fetch settings" }, { status: response.status });
}
const data = await response.json();
return NextResponse.json(data);
} catch (error) {
return NextResponse.json({ error: "Internal server error" }, { status: 500 });
}
}

View File

@@ -247,7 +247,7 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
        <Icon name="performance" size="md" />
        Performance Limits
      </CardTitle>
-     <CardDescription>Configure API performance and rate limiting</CardDescription>
+     <CardDescription>Configure API performance, rate limiting, and thumbnail generation concurrency</CardDescription>
    </CardHeader>
    <CardContent>
      <div className="space-y-4">
@@ -266,6 +266,9 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
          }}
          onBlur={() => handleUpdateSetting("limits", settings.limits)}
        />
+       <p className="text-xs text-muted-foreground mt-1">
+         Maximum number of page renders and thumbnail generations running in parallel
+       </p>
      </FormField>
      <FormField className="flex-1">
        <label className="text-sm font-medium text-muted-foreground mb-1 block">Timeout (seconds)</label>
@@ -299,7 +302,7 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
      </FormField>
    </FormRow>
    <p className="text-sm text-muted-foreground">
-     Note: Changes to limits require a server restart to take effect.
+     Note: Changes to limits require a server restart to take effect. The "Concurrent Renders" setting controls both page rendering and thumbnail generation parallelism.
    </p>
  </div>
</CardContent>
@@ -424,7 +427,7 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
  </div>
  <p className="text-sm text-muted-foreground">
-   Note: Thumbnail settings are used during indexing. Existing thumbnails will not be regenerated automatically.
+   Note: Thumbnail settings are used during indexing. Existing thumbnails will not be regenerated automatically. The concurrency for thumbnail generation is controlled by the "Concurrent Renders" setting in Performance Limits above.
  </p>
</div>
</CardContent>

View File

@@ -89,8 +89,8 @@ export type SeriesDto = {
  first_book_id: string;
};
- function config() {
+ export function config() {
-   const baseUrl = process.env.API_BASE_URL || "http://api:8080";
+   const baseUrl = process.env.API_BASE_URL || "http://api:7080";
    const token = process.env.API_BOOTSTRAP_TOKEN;
    if (!token) {
      throw new Error("API_BOOTSTRAP_TOKEN is required for backoffice");

View File

@@ -21,7 +21,10 @@
      {
        "name": "next"
      }
-   ]
+   ],
+   "paths": {
+     "@/*": ["./*"]
+   }
  },
  "include": [
    "next-env.d.ts",

75
apps/indexer/AGENTS.md Normal file
View File

@@ -0,0 +1,75 @@
# apps/indexer — Indexing service
Background service on port **7081**. See the root `AGENTS.md` for global conventions.
## File structure
| File | Role |
|------|------|
| `main.rs` | Entry point, initialization, worker startup |
| `lib.rs` | `AppState` (pool, meili, api_base_url) |
| `worker.rs` | Main loop: claim job → process → cleanup stale |
| `job.rs` | `claim_next_job`, `process_job`, `fail_job`, `cleanup_stale_jobs` |
| `scanner.rs` | Filesystem scan, parallel parsing (rayon), DB batching |
| `batch.rs` | `flush_all_batches` with UNNEST; `BookInsert/Update`, `FileInsert/Update`, `ErrorInsert` structs |
| `scheduler.rs` | Auto-scan: checks every 60s for libraries to monitor |
| `watcher.rs` | Real-time file watcher |
| `meili.rs` | Meilisearch indexing/sync |
| `api.rs` | HTTP calls to the API (for thumbnail checkup) |
| `utils.rs` | `remap_libraries_path`, `unmap_libraries_path`, `compute_fingerprint`, `kind_from_format` |
## Job lifecycle
```
claim_next_job (UPDATE ... RETURNING, status pending→running)
  └─ process_job
       ├─ scanner::scan_library (rayon par_iter for parsing)
       │    └─ flush_all_batches every BATCH_SIZE=100 iterations
       ├─ meili sync
       └─ api thumbnail checkup (POST /index/jobs/:id/thumbnails/checkup)
```
- Cancellation: `is_job_cancelled` is checked every 10 files or every second — returns `Err("Job cancelled")` (see the sketch below)
- Stale jobs (still running at restart) → cleaned up by `cleanup_stale_jobs` at boot
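A minimal sketch of that cancellation cadence, using the real `is_job_cancelled` helper from `job.rs`; the loop body is a placeholder, since the actual code interleaves the check with the progress UPDATE in `scanner.rs`:
```rust
use anyhow::Result;
use sqlx::PgPool;
use std::time::{Duration, Instant};
use uuid::Uuid;

use crate::job::is_job_cancelled;

// Sketch only: check for cancellation every 10 files or every second.
pub async fn process_with_cancellation(pool: &PgPool, job_id: Uuid, files: &[String]) -> Result<()> {
    let mut last_check = Instant::now();
    for (i, _file) in files.iter().enumerate() {
        if i % 10 == 0 || last_check.elapsed() > Duration::from_secs(1) {
            if is_job_cancelled(pool, job_id).await? {
                anyhow::bail!("Job cancelled");
            }
            last_check = Instant::now();
        }
        // ... per-file parsing and batched DB writes go here ...
    }
    Ok(())
}
```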
## Batch pattern (batch.rs)
All bulk DB operations go through `flush_all_batches` with UNNEST:
```rust
// Accumulate into Vec<BookInsert>, Vec<FileInsert>, etc.
books_to_insert.push(BookInsert { ... });
// Flush when the batch is full or at the end of the scan
if books_to_insert.len() >= BATCH_SIZE {
    flush_all_batches(&pool, &mut books_update, &mut files_update,
        &mut books_insert, &mut files_insert, &mut errors_insert).await?;
}
```
All operations in a flush run inside a single transaction.
## Filesystem scan (scanner.rs)
Three-step pipeline:
1. **Collect**: WalkDir → filter by format (CBZ/CBR/PDF)
2. **Parse**: `file_infos.into_par_iter().map(parse_metadata)` (rayon)
3. **Process**: sequential DB inserts/updates
Fingerprint = SHA256(size + mtime) to detect changes without re-reading the file (see the sketch below).
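A condensed sketch of that fingerprint, assuming the `sha2` crate (the real `utils::compute_fingerprint` also mixes in the file name):
```rust
use sha2::{Digest, Sha256};

// Sketch: cheap change-detection fingerprint from size + mtime + name,
// without reading the file content.
fn fingerprint(size: u64, mtime_unix: i64, file_name: &str) -> String {
    let mut hasher = Sha256::new();
    hasher.update(size.to_le_bytes());
    hasher.update(mtime_unix.to_le_bytes());
    hasher.update(file_name.as_bytes());
    format!("{:x}", hasher.finalize())
}
```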
## Path remapping
```rust
// abs_path in the DB = container path (/libraries/...)
// On the host: LIBRARIES_ROOT_PATH replaces /libraries
utils::remap_libraries_path(&abs_path)   // DB → local filesystem
utils::unmap_libraries_path(&local_path) // local filesystem → DB
```
## Gotchas
- **Thumbnails**: generated by the API after handoff, not by the indexer itself. The indexer calls `/index/jobs/:id/thumbnails/checkup` via `api.rs`.
- **full_rebuild**: if `true`, fingerprints are ignored → every file is reprocessed.
- **Cancellation**: check `is_job_cancelled` regularly to honor user cancellations.
- **Watcher + scheduler**: run as separate tokio tasks in `worker.rs`, in parallel with the main loop.

View File

@@ -4,6 +4,8 @@ version.workspace = true
edition.workspace = true
license.workspace = true
+ [lib]
[dependencies]
anyhow.workspace = true
axum.workspace = true

View File

@@ -23,5 +23,5 @@ RUN --mount=type=cache,target=/sccache \
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget unrar-free && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/indexer /usr/local/bin/indexer
- EXPOSE 8081
+ EXPOSE 7081
CMD ["/usr/local/bin/indexer"]

16
apps/indexer/src/api.rs Normal file
View File

@@ -0,0 +1,16 @@
use axum::{extract::State, http::StatusCode, Json};
use serde_json;
use crate::AppState;
pub async fn health() -> &'static str {
"ok"
}
pub async fn ready(State(state): State<AppState>) -> Result<Json<serde_json::Value>, StatusCode> {
sqlx::query("SELECT 1")
.execute(&state.pool)
.await
.map_err(|_| StatusCode::SERVICE_UNAVAILABLE)?;
Ok(Json(serde_json::json!({"status": "ready"})))
}

233
apps/indexer/src/batch.rs Normal file
View File

@@ -0,0 +1,233 @@
use anyhow::Result;
use chrono::{DateTime, Utc};
use sqlx::PgPool;
use uuid::Uuid;
// Batched update data structures
pub struct BookUpdate {
pub book_id: Uuid,
pub title: String,
pub kind: String,
pub series: Option<String>,
pub volume: Option<i32>,
pub page_count: Option<i32>,
}
pub struct FileUpdate {
pub file_id: Uuid,
pub format: String,
pub size_bytes: i64,
pub mtime: DateTime<Utc>,
pub fingerprint: String,
}
pub struct BookInsert {
pub book_id: Uuid,
pub library_id: Uuid,
pub kind: String,
pub title: String,
pub series: Option<String>,
pub volume: Option<i32>,
pub page_count: Option<i32>,
pub thumbnail_path: Option<String>,
}
pub struct FileInsert {
pub file_id: Uuid,
pub book_id: Uuid,
pub format: String,
pub abs_path: String,
pub size_bytes: i64,
pub mtime: DateTime<Utc>,
pub fingerprint: String,
pub parse_status: String,
pub parse_error: Option<String>,
}
pub struct ErrorInsert {
pub job_id: Uuid,
pub file_path: String,
pub error_message: String,
}
pub async fn flush_all_batches(
pool: &PgPool,
books_update: &mut Vec<BookUpdate>,
files_update: &mut Vec<FileUpdate>,
books_insert: &mut Vec<BookInsert>,
files_insert: &mut Vec<FileInsert>,
errors_insert: &mut Vec<ErrorInsert>,
) -> Result<()> {
if books_update.is_empty() && files_update.is_empty() && books_insert.is_empty() && files_insert.is_empty() && errors_insert.is_empty() {
return Ok(());
}
let start = std::time::Instant::now();
let mut tx = pool.begin().await?;
// Batch update books using UNNEST
if !books_update.is_empty() {
let book_ids: Vec<Uuid> = books_update.iter().map(|b| b.book_id).collect();
let titles: Vec<String> = books_update.iter().map(|b| b.title.clone()).collect();
let kinds: Vec<String> = books_update.iter().map(|b| b.kind.clone()).collect();
let series: Vec<Option<String>> = books_update.iter().map(|b| b.series.clone()).collect();
let volumes: Vec<Option<i32>> = books_update.iter().map(|b| b.volume).collect();
let page_counts: Vec<Option<i32>> = books_update.iter().map(|b| b.page_count).collect();
sqlx::query(
r#"
UPDATE books SET
title = data.title,
kind = data.kind,
series = data.series,
volume = data.volume,
page_count = data.page_count,
updated_at = NOW()
FROM (
SELECT * FROM UNNEST($1::uuid[], $2::text[], $3::text[], $4::text[], $5::int[], $6::int[])
AS t(book_id, title, kind, series, volume, page_count)
) AS data
WHERE books.id = data.book_id
"#
)
.bind(&book_ids)
.bind(&titles)
.bind(&kinds)
.bind(&series)
.bind(&volumes)
.bind(&page_counts)
.execute(&mut *tx)
.await?;
books_update.clear();
}
// Batch update files using UNNEST
if !files_update.is_empty() {
let file_ids: Vec<Uuid> = files_update.iter().map(|f| f.file_id).collect();
let formats: Vec<String> = files_update.iter().map(|f| f.format.clone()).collect();
let sizes: Vec<i64> = files_update.iter().map(|f| f.size_bytes).collect();
let mtimes: Vec<DateTime<Utc>> = files_update.iter().map(|f| f.mtime).collect();
let fingerprints: Vec<String> = files_update.iter().map(|f| f.fingerprint.clone()).collect();
sqlx::query(
r#"
UPDATE book_files SET
format = data.format,
size_bytes = data.size,
mtime = data.mtime,
fingerprint = data.fp,
parse_status = 'ok',
parse_error_opt = NULL,
updated_at = NOW()
FROM (
SELECT * FROM UNNEST($1::uuid[], $2::text[], $3::bigint[], $4::timestamptz[], $5::text[])
AS t(file_id, format, size, mtime, fp)
) AS data
WHERE book_files.id = data.file_id
"#
)
.bind(&file_ids)
.bind(&formats)
.bind(&sizes)
.bind(&mtimes)
.bind(&fingerprints)
.execute(&mut *tx)
.await?;
files_update.clear();
}
// Batch insert books using UNNEST
if !books_insert.is_empty() {
let book_ids: Vec<Uuid> = books_insert.iter().map(|b| b.book_id).collect();
let library_ids: Vec<Uuid> = books_insert.iter().map(|b| b.library_id).collect();
let kinds: Vec<String> = books_insert.iter().map(|b| b.kind.clone()).collect();
let titles: Vec<String> = books_insert.iter().map(|b| b.title.clone()).collect();
let series: Vec<Option<String>> = books_insert.iter().map(|b| b.series.clone()).collect();
let volumes: Vec<Option<i32>> = books_insert.iter().map(|b| b.volume).collect();
let page_counts: Vec<Option<i32>> = books_insert.iter().map(|b| b.page_count).collect();
let thumbnail_paths: Vec<Option<String>> = books_insert.iter().map(|b| b.thumbnail_path.clone()).collect();
sqlx::query(
r#"
INSERT INTO books (id, library_id, kind, title, series, volume, page_count, thumbnail_path)
SELECT * FROM UNNEST($1::uuid[], $2::uuid[], $3::text[], $4::text[], $5::text[], $6::int[], $7::int[], $8::text[])
AS t(id, library_id, kind, title, series, volume, page_count, thumbnail_path)
"#
)
.bind(&book_ids)
.bind(&library_ids)
.bind(&kinds)
.bind(&titles)
.bind(&series)
.bind(&volumes)
.bind(&page_counts)
.bind(&thumbnail_paths)
.execute(&mut *tx)
.await?;
books_insert.clear();
}
// Batch insert files using UNNEST
if !files_insert.is_empty() {
let file_ids: Vec<Uuid> = files_insert.iter().map(|f| f.file_id).collect();
let book_ids: Vec<Uuid> = files_insert.iter().map(|f| f.book_id).collect();
let formats: Vec<String> = files_insert.iter().map(|f| f.format.clone()).collect();
let abs_paths: Vec<String> = files_insert.iter().map(|f| f.abs_path.clone()).collect();
let sizes: Vec<i64> = files_insert.iter().map(|f| f.size_bytes).collect();
let mtimes: Vec<DateTime<Utc>> = files_insert.iter().map(|f| f.mtime).collect();
let fingerprints: Vec<String> = files_insert.iter().map(|f| f.fingerprint.clone()).collect();
let statuses: Vec<String> = files_insert.iter().map(|f| f.parse_status.clone()).collect();
let errors: Vec<Option<String>> = files_insert.iter().map(|f| f.parse_error.clone()).collect();
sqlx::query(
r#"
INSERT INTO book_files (id, book_id, format, abs_path, size_bytes, mtime, fingerprint, parse_status, parse_error_opt)
SELECT * FROM UNNEST($1::uuid[], $2::uuid[], $3::text[], $4::text[], $5::bigint[], $6::timestamptz[], $7::text[], $8::text[], $9::text[])
AS t(id, book_id, format, abs_path, size_bytes, mtime, fingerprint, parse_status, parse_error_opt)
"#
)
.bind(&file_ids)
.bind(&book_ids)
.bind(&formats)
.bind(&abs_paths)
.bind(&sizes)
.bind(&mtimes)
.bind(&fingerprints)
.bind(&statuses)
.bind(&errors)
.execute(&mut *tx)
.await?;
files_insert.clear();
}
// Batch insert errors using UNNEST
if !errors_insert.is_empty() {
let job_ids: Vec<Uuid> = errors_insert.iter().map(|e| e.job_id).collect();
let file_paths: Vec<String> = errors_insert.iter().map(|e| e.file_path.clone()).collect();
let messages: Vec<String> = errors_insert.iter().map(|e| e.error_message.clone()).collect();
sqlx::query(
r#"
INSERT INTO index_job_errors (job_id, file_path, error_message)
SELECT * FROM UNNEST($1::uuid[], $2::text[], $3::text[])
AS t(job_id, file_path, error_message)
"#
)
.bind(&job_ids)
.bind(&file_paths)
.bind(&messages)
.execute(&mut *tx)
.await?;
errors_insert.clear();
}
tx.commit().await?;
tracing::info!("[BATCH] Flushed all batches in {:?}", start.elapsed());
Ok(())
}

293
apps/indexer/src/job.rs Normal file
View File

@@ -0,0 +1,293 @@
use anyhow::Result;
use rayon::prelude::*;
use sqlx::{PgPool, Row};
use std::time::Duration;
use tracing::{error, info};
use uuid::Uuid;
use crate::{meili, scanner, AppState};
pub async fn cleanup_stale_jobs(pool: &PgPool) -> Result<()> {
// Mark jobs that have been running for more than 5 minutes as failed
// This handles cases where the indexer was restarted while jobs were running
let result = sqlx::query(
r#"
UPDATE index_jobs
SET status = 'failed',
finished_at = NOW(),
error_opt = 'Job interrupted by indexer restart'
WHERE status = 'running'
AND started_at < NOW() - INTERVAL '5 minutes'
RETURNING id
"#
)
.fetch_all(pool)
.await?;
if !result.is_empty() {
let count = result.len();
let ids: Vec<String> = result.iter()
.map(|row| row.get::<Uuid, _>("id").to_string())
.collect();
info!("[CLEANUP] Marked {} stale job(s) as failed: {}", count, ids.join(", "));
}
Ok(())
}
pub async fn claim_next_job(pool: &PgPool) -> Result<Option<(Uuid, Option<Uuid>)>> {
let mut tx = pool.begin().await?;
// Atomically select and lock the next job
// Exclude rebuild/full_rebuild if one is already running
// Prioritize: full_rebuild > rebuild > others
let row = sqlx::query(
r#"
SELECT j.id, j.type, j.library_id
FROM index_jobs j
WHERE j.status = 'pending'
AND (
-- Allow rebuilds only if no rebuild is running
(j.type IN ('rebuild', 'full_rebuild') AND NOT EXISTS (
SELECT 1 FROM index_jobs
WHERE status = 'running'
AND type IN ('rebuild', 'full_rebuild')
))
OR
-- Always allow non-rebuild jobs
j.type NOT IN ('rebuild', 'full_rebuild')
)
ORDER BY
CASE j.type
WHEN 'full_rebuild' THEN 1
WHEN 'rebuild' THEN 2
ELSE 3
END,
j.created_at ASC
FOR UPDATE SKIP LOCKED
LIMIT 1
"#
)
.fetch_optional(&mut *tx)
.await?;
let Some(row) = row else {
tx.commit().await?;
return Ok(None);
};
let id: Uuid = row.get("id");
let job_type: String = row.get("type");
let library_id: Option<Uuid> = row.get("library_id");
// Final check: if this is a rebuild, ensure no rebuild started between SELECT and UPDATE
if job_type == "rebuild" || job_type == "full_rebuild" {
let has_running_rebuild: bool = sqlx::query_scalar(
r#"
SELECT EXISTS(
SELECT 1 FROM index_jobs
WHERE status = 'running'
AND type IN ('rebuild', 'full_rebuild')
AND id != $1
)
"#
)
.bind(id)
.fetch_one(&mut *tx)
.await?;
if has_running_rebuild {
tx.rollback().await?;
return Ok(None);
}
}
sqlx::query("UPDATE index_jobs SET status = 'running', started_at = NOW(), error_opt = NULL WHERE id = $1")
.bind(id)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Ok(Some((id, library_id)))
}
pub async fn fail_job(pool: &PgPool, job_id: Uuid, error_message: &str) -> Result<()> {
sqlx::query("UPDATE index_jobs SET status = 'failed', finished_at = NOW(), error_opt = $2 WHERE id = $1")
.bind(job_id)
.bind(error_message)
.execute(pool)
.await?;
Ok(())
}
pub async fn is_job_cancelled(pool: &PgPool, job_id: Uuid) -> Result<bool> {
let status: Option<String> = sqlx::query_scalar(
"SELECT status FROM index_jobs WHERE id = $1"
)
.bind(job_id)
.fetch_optional(pool)
.await?;
Ok(status.as_deref() == Some("cancelled"))
}
pub async fn process_job(state: &AppState, job_id: Uuid, target_library_id: Option<Uuid>) -> Result<()> {
info!("[JOB] Processing {} library={:?}", job_id, target_library_id);
let job_type: String = sqlx::query_scalar("SELECT type FROM index_jobs WHERE id = $1")
.bind(job_id)
.fetch_one(&state.pool)
.await?;
// Thumbnail jobs: hand off to API and wait for completion (same queue as rebuilds)
if job_type == "thumbnail_rebuild" || job_type == "thumbnail_regenerate" {
sqlx::query(
"UPDATE index_jobs SET status = 'generating_thumbnails', started_at = NOW() WHERE id = $1",
)
.bind(job_id)
.execute(&state.pool)
.await?;
let api_base = state.api_base_url.trim_end_matches('/');
let url = format!("{}/index/jobs/{}/thumbnails/checkup", api_base, job_id);
let client = reqwest::Client::new();
let res = client
.post(&url)
.header("Authorization", format!("Bearer {}", state.api_bootstrap_token))
.send()
.await?;
if !res.status().is_success() {
anyhow::bail!("thumbnail checkup API returned {}", res.status());
}
// Poll until job is finished (API updates the same row)
let poll_interval = Duration::from_secs(1);
loop {
tokio::time::sleep(poll_interval).await;
let status: String = sqlx::query_scalar("SELECT status FROM index_jobs WHERE id = $1")
.bind(job_id)
.fetch_one(&state.pool)
.await?;
if status == "success" || status == "failed" {
info!("[JOB] Thumbnail job {} finished with status {}", job_id, status);
return Ok(());
}
}
}
let is_full_rebuild = job_type == "full_rebuild";
info!("[JOB] {} type={} full_rebuild={}", job_id, job_type, is_full_rebuild);
// For full rebuilds, delete existing data first
if is_full_rebuild {
info!("[JOB] Full rebuild: deleting existing data");
if let Some(library_id) = target_library_id {
// Delete books and files for specific library
sqlx::query("DELETE FROM book_files WHERE book_id IN (SELECT id FROM books WHERE library_id = $1)")
.bind(library_id)
.execute(&state.pool)
.await?;
sqlx::query("DELETE FROM books WHERE library_id = $1")
.bind(library_id)
.execute(&state.pool)
.await?;
info!("[JOB] Deleted existing data for library {}", library_id);
} else {
// Delete all books and files
sqlx::query("DELETE FROM book_files").execute(&state.pool).await?;
sqlx::query("DELETE FROM books").execute(&state.pool).await?;
info!("[JOB] Deleted all existing data");
}
}
let libraries = if let Some(library_id) = target_library_id {
sqlx::query("SELECT id, root_path FROM libraries WHERE id = $1 AND enabled = TRUE")
.bind(library_id)
.fetch_all(&state.pool)
.await?
} else {
sqlx::query("SELECT id, root_path FROM libraries WHERE enabled = TRUE")
.fetch_all(&state.pool)
.await?
};
// First pass: count total files for progress estimation (parallel)
let library_paths: Vec<String> = libraries.iter()
.map(|library| crate::utils::remap_libraries_path(&library.get::<String, _>("root_path")))
.collect();
let total_files: usize = library_paths.par_iter()
.map(|root_path| {
walkdir::WalkDir::new(root_path)
.into_iter()
.filter_map(Result::ok)
.filter(|entry| entry.file_type().is_file() && parsers::detect_format(entry.path()).is_some())
.count()
})
.sum();
info!("[JOB] Found {} libraries, {} total files to index", libraries.len(), total_files);
// Update job with total estimate
sqlx::query("UPDATE index_jobs SET total_files = $2 WHERE id = $1")
.bind(job_id)
.bind(total_files as i32)
.execute(&state.pool)
.await?;
let mut stats = scanner::JobStats {
scanned_files: 0,
indexed_files: 0,
removed_files: 0,
errors: 0,
};
// Track processed files across all libraries for accurate progress
let mut total_processed_count = 0i32;
for library in libraries {
let library_id: Uuid = library.get("id");
let root_path: String = library.get("root_path");
let root_path = crate::utils::remap_libraries_path(&root_path);
match scanner::scan_library(state, job_id, library_id, std::path::Path::new(&root_path), &mut stats, &mut total_processed_count, total_files, is_full_rebuild).await {
Ok(()) => {}
Err(err) => {
stats.errors += 1;
error!(library_id = %library_id, error = %err, "library scan failed");
}
}
}
meili::sync_meili(&state.pool, &state.meili_url, &state.meili_master_key).await?;
// Hand off to API for thumbnail checkup (API will set status = 'success' when done)
sqlx::query(
"UPDATE index_jobs SET status = 'generating_thumbnails', stats_json = $2, current_file = NULL, processed_files = $3 WHERE id = $1",
)
.bind(job_id)
.bind(serde_json::to_value(&stats)?)
.bind(total_processed_count)
.execute(&state.pool)
.await?;
let api_base = state.api_base_url.trim_end_matches('/');
let url = format!("{}/index/jobs/{}/thumbnails/checkup", api_base, job_id);
let client = reqwest::Client::new();
let res = client
.post(&url)
.header("Authorization", format!("Bearer {}", state.api_bootstrap_token))
.send()
.await;
if let Err(e) = res {
tracing::warn!("[JOB] Failed to trigger thumbnail checkup: {} — API will not generate thumbnails for this job", e);
} else if let Ok(r) = res {
if !r.status().is_success() {
tracing::warn!("[JOB] Thumbnail checkup returned {} — API may not generate thumbnails", r.status());
} else {
info!("[JOB] Thumbnail checkup started (job {}), API will complete the job", job_id);
}
}
Ok(())
}

20
apps/indexer/src/lib.rs Normal file
View File

@@ -0,0 +1,20 @@
pub mod api;
pub mod batch;
pub mod job;
pub mod meili;
pub mod scheduler;
pub mod scanner;
pub mod utils;
pub mod watcher;
pub mod worker;
use sqlx::PgPool;
#[derive(Clone)]
pub struct AppState {
pub pool: PgPool,
pub meili_url: String,
pub meili_master_key: String,
pub api_base_url: String,
pub api_bootstrap_token: String,
}

File diff suppressed because it is too large

180
apps/indexer/src/meili.rs Normal file
View File

@@ -0,0 +1,180 @@
use anyhow::{Context, Result};
use chrono::{DateTime, Utc};
use reqwest::Client;
use serde::Serialize;
use sqlx::{PgPool, Row};
use tracing::info;
use uuid::Uuid;
#[derive(Serialize)]
struct SearchDoc {
id: String,
library_id: String,
kind: String,
title: String,
author: Option<String>,
series: Option<String>,
volume: Option<i32>,
language: Option<String>,
}
pub async fn sync_meili(pool: &PgPool, meili_url: &str, meili_master_key: &str) -> Result<()> {
let client = Client::new();
let base = meili_url.trim_end_matches('/');
// Ensure index exists and has proper settings
let _ = client
.post(format!("{base}/indexes"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&serde_json::json!({"uid": "books", "primaryKey": "id"}))
.send()
.await;
let _ = client
.patch(format!("{base}/indexes/books/settings/filterable-attributes"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&serde_json::json!(["library_id", "kind"]))
.send()
.await;
// Get last sync timestamp
let last_sync: Option<DateTime<Utc>> = sqlx::query_scalar(
"SELECT last_meili_sync FROM sync_metadata WHERE id = 1 AND last_meili_sync IS NOT NULL"
)
.fetch_optional(pool)
.await?;
// If no previous sync, do a full sync
let is_full_sync = last_sync.is_none();
// Get books to sync: all if full sync, only modified since last sync otherwise
let rows = if is_full_sync {
info!("[MEILI] Performing full sync");
sqlx::query(
"SELECT id, library_id, kind, title, author, series, volume, language, updated_at FROM books",
)
.fetch_all(pool)
.await?
} else {
let since = last_sync.unwrap();
info!("[MEILI] Performing incremental sync since {}", since);
// Also get deleted book IDs to remove from MeiliSearch
// For now, we'll do a diff approach: get all book IDs from DB and compare with Meili
sqlx::query(
"SELECT id, library_id, kind, title, author, series, volume, language, updated_at FROM books WHERE updated_at > $1",
)
.bind(since)
.fetch_all(pool)
.await?
};
if rows.is_empty() && !is_full_sync {
info!("[MEILI] No changes to sync");
// Still update the timestamp
sqlx::query(
"INSERT INTO sync_metadata (id, last_meili_sync) VALUES (1, NOW()) ON CONFLICT (id) DO UPDATE SET last_meili_sync = NOW()"
)
.execute(pool)
.await?;
return Ok(());
}
let docs: Vec<SearchDoc> = rows
.into_iter()
.map(|row| SearchDoc {
id: row.get::<Uuid, _>("id").to_string(),
library_id: row.get::<Uuid, _>("library_id").to_string(),
kind: row.get("kind"),
title: row.get("title"),
author: row.get("author"),
series: row.get("series"),
volume: row.get("volume"),
language: row.get("language"),
})
.collect();
let doc_count = docs.len();
// Send documents to MeiliSearch in batches of 1000
const MEILI_BATCH_SIZE: usize = 1000;
for (i, chunk) in docs.chunks(MEILI_BATCH_SIZE).enumerate() {
let batch_num = i + 1;
info!("[MEILI] Sending batch {}/{} ({} docs)", batch_num, (doc_count + MEILI_BATCH_SIZE - 1) / MEILI_BATCH_SIZE, chunk.len());
let response = client
.post(format!("{base}/indexes/books/documents"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&chunk)
.send()
.await
.context("failed to send docs to meili")?;
if !response.status().is_success() {
let status = response.status();
let body = response.text().await.unwrap_or_default();
return Err(anyhow::anyhow!("MeiliSearch error {}: {}", status, body));
}
}
// Handle deletions: get all book IDs from DB and remove from MeiliSearch any that don't exist
    // This is expensive, so we only do it on full syncs or on a random ~10% of incremental syncs
if is_full_sync || rand::random::<u8>() < 26 { // ~10% chance
info!("[MEILI] Checking for documents to delete");
// Get all book IDs from database
let db_ids: Vec<String> = sqlx::query_scalar("SELECT id::text FROM books")
.fetch_all(pool)
.await?;
// Get all document IDs from MeiliSearch (this requires fetching all documents)
// For efficiency, we'll just delete by query for documents that might be stale
// A better approach would be to track deletions in a separate table
// For now, we'll do a simple approach: fetch all Meili docs and compare
// Note: This could be slow for large collections
let meili_response = client
.post(format!("{base}/indexes/books/documents/fetch"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&serde_json::json!({
"fields": ["id"],
"limit": 100000
}))
.send()
.await;
if let Ok(response) = meili_response {
if response.status().is_success() {
if let Ok(meili_docs) = response.json::<Vec<serde_json::Value>>().await {
let meili_ids: std::collections::HashSet<String> = meili_docs
.into_iter()
.filter_map(|doc| doc.get("id").and_then(|id| id.as_str()).map(|s| s.to_string()))
.collect();
let db_ids_set: std::collections::HashSet<String> = db_ids.into_iter().collect();
let to_delete: Vec<String> = meili_ids.difference(&db_ids_set).cloned().collect();
if !to_delete.is_empty() {
info!("[MEILI] Deleting {} stale documents", to_delete.len());
let _ = client
.post(format!("{base}/indexes/books/documents/delete-batch"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&to_delete)
.send()
.await;
}
}
}
}
}
// Update last sync timestamp
sqlx::query(
"INSERT INTO sync_metadata (id, last_meili_sync) VALUES (1, NOW()) ON CONFLICT (id) DO UPDATE SET last_meili_sync = NOW()"
)
.execute(pool)
.await?;
info!("[MEILI] Sync completed: {} documents indexed", doc_count);
Ok(())
}

360
apps/indexer/src/scanner.rs Normal file
View File

@@ -0,0 +1,360 @@
use anyhow::{Context, Result};
use chrono::{DateTime, Utc};
use parsers::{detect_format, parse_metadata, BookFormat, ParsedMetadata};
use rayon::prelude::*;
use serde::Serialize;
use sqlx::Row;
use std::{collections::HashMap, path::Path, time::Duration};
use tracing::{error, info, trace, warn};
use uuid::Uuid;
use walkdir::WalkDir;
use crate::{
batch::{flush_all_batches, BookInsert, BookUpdate, ErrorInsert, FileInsert, FileUpdate},
job::is_job_cancelled,
utils,
AppState,
};
#[derive(Serialize)]
pub struct JobStats {
pub scanned_files: usize,
pub indexed_files: usize,
pub removed_files: usize,
pub errors: usize,
}
const BATCH_SIZE: usize = 100;
pub async fn scan_library(
state: &AppState,
job_id: Uuid,
library_id: Uuid,
root: &Path,
stats: &mut JobStats,
total_processed_count: &mut i32,
total_files: usize,
is_full_rebuild: bool,
) -> Result<()> {
info!("[SCAN] Starting scan of library {} at path: {} (full_rebuild={})", library_id, root.display(), is_full_rebuild);
let existing_rows = sqlx::query(
r#"
SELECT bf.id AS file_id, bf.book_id, bf.abs_path, bf.fingerprint
FROM book_files bf
JOIN books b ON b.id = bf.book_id
WHERE b.library_id = $1
"#,
)
.bind(library_id)
.fetch_all(&state.pool)
.await?;
let mut existing: HashMap<String, (Uuid, Uuid, String)> = HashMap::new();
if !is_full_rebuild {
for row in existing_rows {
let abs_path: String = row.get("abs_path");
let remapped_path = utils::remap_libraries_path(&abs_path);
existing.insert(
remapped_path,
(row.get("file_id"), row.get("book_id"), row.get("fingerprint")),
);
}
info!("[SCAN] Found {} existing files in database for library {}", existing.len(), library_id);
} else {
info!("[SCAN] Full rebuild: skipping existing files lookup (all will be treated as new)");
}
let mut seen: HashMap<String, bool> = HashMap::new();
let mut library_processed_count = 0i32;
let mut last_progress_update = std::time::Instant::now();
// Batching buffers
let mut books_to_update: Vec<BookUpdate> = Vec::with_capacity(BATCH_SIZE);
let mut files_to_update: Vec<FileUpdate> = Vec::with_capacity(BATCH_SIZE);
let mut books_to_insert: Vec<BookInsert> = Vec::with_capacity(BATCH_SIZE);
let mut files_to_insert: Vec<FileInsert> = Vec::with_capacity(BATCH_SIZE);
let mut errors_to_insert: Vec<ErrorInsert> = Vec::with_capacity(BATCH_SIZE);
// Step 1: Collect all book files first
#[derive(Clone)]
struct FileInfo {
path: std::path::PathBuf,
format: BookFormat,
abs_path: String,
file_name: String,
metadata: std::fs::Metadata,
mtime: DateTime<Utc>,
fingerprint: String,
lookup_path: String,
}
let mut file_infos: Vec<FileInfo> = Vec::new();
for entry in WalkDir::new(root).into_iter().filter_map(Result::ok) {
if !entry.file_type().is_file() {
continue;
}
let path = entry.path().to_path_buf();
let Some(format) = detect_format(&path) else {
trace!("[SCAN] Skipping non-book file: {}", path.display());
continue;
};
info!("[SCAN] Found book file: {} (format: {:?})", path.display(), format);
stats.scanned_files += 1;
let abs_path_local = path.to_string_lossy().to_string();
let abs_path = utils::unmap_libraries_path(&abs_path_local);
let file_name = path.file_name()
.map(|s| s.to_string_lossy().to_string())
.unwrap_or_else(|| abs_path.clone());
let metadata = std::fs::metadata(&path)
.with_context(|| format!("cannot stat {}", path.display()))?;
let mtime: DateTime<Utc> = metadata
.modified()
.map(DateTime::<Utc>::from)
.unwrap_or_else(|_| Utc::now());
let fingerprint = utils::compute_fingerprint(&path, metadata.len(), &mtime)?;
let lookup_path = utils::remap_libraries_path(&abs_path);
file_infos.push(FileInfo {
path,
format,
abs_path,
file_name,
metadata,
mtime,
fingerprint,
lookup_path,
});
}
info!("[SCAN] Collected {} files, starting parallel parsing", file_infos.len());
// Step 2: Parse metadata in parallel
let parsed_results: Vec<(FileInfo, Result<ParsedMetadata>)> = file_infos
.into_par_iter()
.map(|file_info| {
let parse_result = parse_metadata(&file_info.path, file_info.format, root);
(file_info, parse_result)
})
.collect();
info!("[SCAN] Completed parallel parsing, processing {} results", parsed_results.len());
// Step 3: Process results sequentially for batch inserts
for (file_info, parse_result) in parsed_results {
library_processed_count += 1;
*total_processed_count += 1;
// Update progress in DB every 1 second or every 10 files
let should_update_progress = last_progress_update.elapsed() > Duration::from_secs(1) || library_processed_count % 10 == 0;
if should_update_progress {
let progress_percent = if total_files > 0 {
((*total_processed_count as f64 / total_files as f64) * 100.0) as i32
} else {
0
};
sqlx::query(
"UPDATE index_jobs SET current_file = $2, processed_files = $3, progress_percent = $4 WHERE id = $1"
)
.bind(job_id)
.bind(&file_info.file_name)
.bind(*total_processed_count)
.bind(progress_percent)
.execute(&state.pool)
.await
.map_err(|e| {
error!("[BDD] Failed to update progress for job {}: {}", job_id, e);
e
})?;
last_progress_update = std::time::Instant::now();
// Check if job has been cancelled
if is_job_cancelled(&state.pool, job_id).await? {
info!("[JOB] Job {} cancelled by user, stopping...", job_id);
// Flush any pending batches before exiting
flush_all_batches(&state.pool, &mut books_to_update, &mut files_to_update, &mut books_to_insert, &mut files_to_insert, &mut errors_to_insert).await?;
return Err(anyhow::anyhow!("Job cancelled by user"));
}
}
let seen_key = utils::remap_libraries_path(&file_info.abs_path);
seen.insert(seen_key.clone(), true);
if let Some((file_id, book_id, old_fingerprint)) = existing.get(&file_info.lookup_path).cloned() {
if !is_full_rebuild && old_fingerprint == file_info.fingerprint {
trace!("[PROCESS] Skipping unchanged file: {}", file_info.file_name);
continue;
}
info!("[PROCESS] Updating existing file: {} (full_rebuild={}, fingerprint_match={})", file_info.file_name, is_full_rebuild, old_fingerprint == file_info.fingerprint);
match parse_result {
Ok(parsed) => {
books_to_update.push(BookUpdate {
book_id,
title: parsed.title,
kind: utils::kind_from_format(file_info.format).to_string(),
series: parsed.series,
volume: parsed.volume,
page_count: parsed.page_count,
});
files_to_update.push(FileUpdate {
file_id,
format: file_info.format.as_str().to_string(),
size_bytes: file_info.metadata.len() as i64,
mtime: file_info.mtime,
fingerprint: file_info.fingerprint,
});
stats.indexed_files += 1;
}
Err(err) => {
warn!("[PARSER] Failed to parse {}: {}", file_info.file_name, err);
stats.errors += 1;
files_to_update.push(FileUpdate {
file_id,
format: file_info.format.as_str().to_string(),
size_bytes: file_info.metadata.len() as i64,
mtime: file_info.mtime,
fingerprint: file_info.fingerprint.clone(),
});
errors_to_insert.push(ErrorInsert {
job_id,
file_path: file_info.abs_path.clone(),
error_message: err.to_string(),
});
// Also need to mark file as error - we'll do this separately
sqlx::query(
"UPDATE book_files SET parse_status = 'error', parse_error_opt = $2 WHERE id = $1"
)
.bind(file_id)
.bind(err.to_string())
.execute(&state.pool)
.await?;
}
}
// Flush if batch is full
if books_to_update.len() >= BATCH_SIZE || files_to_update.len() >= BATCH_SIZE {
flush_all_batches(&state.pool, &mut books_to_update, &mut files_to_update, &mut books_to_insert, &mut files_to_insert, &mut errors_to_insert).await?;
}
continue;
}
// New file (thumbnails generated by API after job handoff)
info!("[PROCESS] Inserting new file: {}", file_info.file_name);
let book_id = Uuid::new_v4();
match parse_result {
Ok(parsed) => {
let file_id = Uuid::new_v4();
books_to_insert.push(BookInsert {
book_id,
library_id,
kind: utils::kind_from_format(file_info.format).to_string(),
title: parsed.title,
series: parsed.series,
volume: parsed.volume,
page_count: parsed.page_count,
thumbnail_path: None,
});
files_to_insert.push(FileInsert {
file_id,
book_id,
format: file_info.format.as_str().to_string(),
abs_path: file_info.abs_path.clone(),
size_bytes: file_info.metadata.len() as i64,
mtime: file_info.mtime,
fingerprint: file_info.fingerprint,
parse_status: "ok".to_string(),
parse_error: None,
});
stats.indexed_files += 1;
}
Err(err) => {
warn!("[PARSER] Failed to parse {}: {}", file_info.file_name, err);
stats.errors += 1;
let book_id = Uuid::new_v4();
let file_id = Uuid::new_v4();
books_to_insert.push(BookInsert {
book_id,
library_id,
kind: utils::kind_from_format(file_info.format).to_string(),
title: utils::file_display_name(&file_info.path),
series: None,
volume: None,
page_count: None,
thumbnail_path: None,
});
files_to_insert.push(FileInsert {
file_id,
book_id,
format: file_info.format.as_str().to_string(),
abs_path: file_info.abs_path.clone(),
size_bytes: file_info.metadata.len() as i64,
mtime: file_info.mtime,
fingerprint: file_info.fingerprint,
parse_status: "error".to_string(),
parse_error: Some(err.to_string()),
});
errors_to_insert.push(ErrorInsert {
job_id,
file_path: file_info.abs_path,
error_message: err.to_string(),
});
}
}
// Flush if batch is full
if books_to_insert.len() >= BATCH_SIZE || files_to_insert.len() >= BATCH_SIZE {
flush_all_batches(&state.pool, &mut books_to_update, &mut files_to_update, &mut books_to_insert, &mut files_to_insert, &mut errors_to_insert).await?;
}
}
// Final flush of any remaining items
flush_all_batches(&state.pool, &mut books_to_update, &mut files_to_update, &mut books_to_insert, &mut files_to_insert, &mut errors_to_insert).await?;
info!("[SCAN] Library {} scan complete: {} files scanned, {} indexed, {} errors",
library_id, library_processed_count, stats.indexed_files, stats.errors);
// Handle deletions
let mut removed_count = 0usize;
for (abs_path, (file_id, book_id, _)) in existing {
if seen.contains_key(&abs_path) {
continue;
}
sqlx::query("DELETE FROM book_files WHERE id = $1")
.bind(file_id)
.execute(&state.pool)
.await?;
sqlx::query("DELETE FROM books WHERE id = $1 AND NOT EXISTS (SELECT 1 FROM book_files WHERE book_id = $1)")
.bind(book_id)
.execute(&state.pool)
.await?;
stats.removed_files += 1;
removed_count += 1;
}
if removed_count > 0 {
info!("[SCAN] Removed {} stale files from database", removed_count);
}
Ok(())
}

View File

@@ -0,0 +1,67 @@
use anyhow::Result;
use sqlx::{PgPool, Row};
use tracing::info;
use uuid::Uuid;
pub async fn check_and_schedule_auto_scans(pool: &PgPool) -> Result<()> {
let libraries = sqlx::query(
r#"
SELECT id, scan_mode, last_scan_at
FROM libraries
WHERE monitor_enabled = TRUE
AND (
next_scan_at IS NULL
OR next_scan_at <= NOW()
)
AND NOT EXISTS (
SELECT 1 FROM index_jobs
WHERE library_id = libraries.id
AND status IN ('pending', 'running')
)
"#
)
.fetch_all(pool)
.await?;
for row in libraries {
let library_id: Uuid = row.get("id");
let scan_mode: String = row.get("scan_mode");
info!("[SCHEDULER] Auto-scanning library {} (mode: {})", library_id, scan_mode);
let job_id = Uuid::new_v4();
let job_type = match scan_mode.as_str() {
"full" => "full_rebuild",
_ => "rebuild",
};
sqlx::query(
"INSERT INTO index_jobs (id, library_id, type, status) VALUES ($1, $2, $3, 'pending')"
)
.bind(job_id)
.bind(library_id)
.bind(job_type)
.execute(pool)
.await?;
// Update next_scan_at
let interval_minutes = match scan_mode.as_str() {
"hourly" => 60,
"daily" => 1440,
"weekly" => 10080,
_ => 1440, // default daily
};
sqlx::query(
"UPDATE libraries SET last_scan_at = NOW(), next_scan_at = NOW() + INTERVAL '1 minute' * $2 WHERE id = $1"
)
.bind(library_id)
.bind(interval_minutes)
.execute(pool)
.await?;
info!("[SCHEDULER] Created job {} for library {}", job_id, library_id);
}
Ok(())
}

52
apps/indexer/src/utils.rs Normal file
View File

@@ -0,0 +1,52 @@
use anyhow::Result;
use chrono::DateTime;
use parsers::BookFormat;
use sha2::{Digest, Sha256};
use std::path::Path;
use chrono::Utc;
pub fn remap_libraries_path(path: &str) -> String {
if let Ok(root) = std::env::var("LIBRARIES_ROOT_PATH") {
if path.starts_with("/libraries/") {
return path.replacen("/libraries", &root, 1);
}
}
path.to_string()
}
pub fn unmap_libraries_path(path: &str) -> String {
if let Ok(root) = std::env::var("LIBRARIES_ROOT_PATH") {
if path.starts_with(&root) {
return path.replacen(&root, "/libraries", 1);
}
}
path.to_string()
}
pub fn compute_fingerprint(path: &Path, size: u64, mtime: &DateTime<Utc>) -> Result<String> {
// Optimized: only use size + mtime + first bytes of filename for fast fingerprinting
// This is 100x faster than reading file content while still being reliable for change detection
let mut hasher = Sha256::new();
hasher.update(size.to_le_bytes());
hasher.update(mtime.timestamp().to_le_bytes());
// Add filename for extra uniqueness (in case of rapid changes with same size+mtime)
if let Some(filename) = path.file_name() {
hasher.update(filename.as_encoded_bytes());
}
Ok(format!("{:x}", hasher.finalize()))
}
pub fn kind_from_format(format: BookFormat) -> &'static str {
match format {
BookFormat::Pdf => "ebook",
BookFormat::Cbz | BookFormat::Cbr => "comic",
}
}
pub fn file_display_name(path: &Path) -> String {
path.file_stem()
.map(|s| s.to_string_lossy().to_string())
.unwrap_or_else(|| "Untitled".to_string())
}

147
apps/indexer/src/watcher.rs Normal file
View File

@@ -0,0 +1,147 @@
use anyhow::Result;
use notify::{Event, RecommendedWatcher, RecursiveMode, Watcher};
use sqlx::Row;
use std::collections::HashMap;
use std::time::Duration;
use tokio::sync::mpsc;
use tracing::{error, info, trace};
use uuid::Uuid;
use crate::utils;
use crate::AppState;
pub async fn run_file_watcher(state: AppState) -> Result<()> {
let (tx, mut rx) = mpsc::channel::<(Uuid, String)>(100);
// Start watcher refresh loop
let refresh_interval = Duration::from_secs(30);
let pool = state.pool.clone();
tokio::spawn(async move {
let mut watched_libraries: HashMap<Uuid, String> = HashMap::new();
loop {
// Get libraries with watcher enabled
match sqlx::query(
"SELECT id, root_path FROM libraries WHERE watcher_enabled = TRUE AND enabled = TRUE"
)
.fetch_all(&pool)
.await
{
Ok(rows) => {
let current_libraries: HashMap<Uuid, String> = rows
.into_iter()
.map(|row| {
let id: Uuid = row.get("id");
let root_path: String = row.get("root_path");
let local_path = utils::remap_libraries_path(&root_path);
(id, local_path)
})
.collect();
// Check if we need to recreate watcher
let needs_restart = watched_libraries.len() != current_libraries.len()
|| watched_libraries.iter().any(|(id, path)| {
current_libraries.get(id) != Some(path)
});
if needs_restart {
info!("[WATCHER] Restarting watcher for {} libraries", current_libraries.len());
if !current_libraries.is_empty() {
let tx_clone = tx.clone();
let libraries_clone = current_libraries.clone();
match setup_watcher(libraries_clone, tx_clone) {
Ok(_new_watcher) => {
watched_libraries = current_libraries;
info!("[WATCHER] Watching {} libraries", watched_libraries.len());
}
Err(err) => {
error!("[WATCHER] Failed to setup watcher: {}", err);
}
}
}
}
}
Err(err) => {
error!("[WATCHER] Failed to fetch libraries: {}", err);
}
}
tokio::time::sleep(refresh_interval).await;
}
});
// Process watcher events
while let Some((library_id, file_path)) = rx.recv().await {
info!("[WATCHER] File changed in library {}: {}", library_id, file_path);
// Check if there's already a pending job for this library
match sqlx::query_scalar::<_, bool>(
"SELECT EXISTS(SELECT 1 FROM index_jobs WHERE library_id = $1 AND status IN ('pending', 'running'))"
)
.bind(library_id)
.fetch_one(&state.pool)
.await
{
Ok(exists) => {
if !exists {
// Create a quick scan job
let job_id = Uuid::new_v4();
match sqlx::query(
"INSERT INTO index_jobs (id, library_id, type, status) VALUES ($1, $2, 'rebuild', 'pending')"
)
.bind(job_id)
.bind(library_id)
.execute(&state.pool)
.await
{
Ok(_) => info!("[WATCHER] Created job {} for library {}", job_id, library_id),
Err(err) => error!("[WATCHER] Failed to create job: {}", err),
}
} else {
trace!("[WATCHER] Job already pending for library {}, skipping", library_id);
}
}
Err(err) => error!("[WATCHER] Failed to check existing jobs: {}", err),
}
}
Ok(())
}
fn setup_watcher(
libraries: HashMap<Uuid, String>,
tx: mpsc::Sender<(Uuid, String)>,
) -> Result<RecommendedWatcher> {
let libraries_for_closure = libraries.clone();
let mut watcher = notify::recommended_watcher(move |res: Result<Event, notify::Error>| {
match res {
Ok(event) => {
if event.kind.is_modify() || event.kind.is_create() || event.kind.is_remove() {
for path in event.paths {
if let Some((library_id, _)) = libraries_for_closure.iter().find(|(_, root)| {
path.starts_with(root)
}) {
let path_str = path.to_string_lossy().to_string();
if parsers::detect_format(&path).is_some() {
let _ = tx.try_send((*library_id, path_str));
}
}
}
}
}
Err(err) => error!("[WATCHER] Event error: {}", err),
}
})?;
// Actually watch the library directories
for (_, root_path) in &libraries {
info!("[WATCHER] Watching directory: {}", root_path);
watcher.watch(std::path::Path::new(root_path), RecursiveMode::Recursive)?;
}
Ok(watcher)
}

View File

@@ -0,0 +1,61 @@
use std::time::Duration;
use tracing::{error, info, trace};
use crate::{job, scheduler, watcher, AppState};
pub async fn run_worker(state: AppState, interval_seconds: u64) {
let wait = Duration::from_secs(interval_seconds.max(1));
// Cleanup stale jobs from previous runs
if let Err(err) = job::cleanup_stale_jobs(&state.pool).await {
error!("[CLEANUP] Failed to cleanup stale jobs: {}", err);
}
// Start file watcher task
let watcher_state = state.clone();
let _watcher_handle = tokio::spawn(async move {
info!("[WATCHER] Starting file watcher service");
if let Err(err) = watcher::run_file_watcher(watcher_state).await {
error!("[WATCHER] Error: {}", err);
}
});
// Start scheduler task for auto-monitoring
let scheduler_state = state.clone();
let _scheduler_handle = tokio::spawn(async move {
let scheduler_wait = Duration::from_secs(60); // Check every minute
loop {
if let Err(err) = scheduler::check_and_schedule_auto_scans(&scheduler_state.pool).await {
error!("[SCHEDULER] Error: {}", err);
}
tokio::time::sleep(scheduler_wait).await;
}
});
loop {
match job::claim_next_job(&state.pool).await {
Ok(Some((job_id, library_id))) => {
info!("[INDEXER] Starting job {} library={:?}", job_id, library_id);
if let Err(err) = job::process_job(&state, job_id, library_id).await {
let err_str = err.to_string();
if err_str.contains("cancelled") || err_str.contains("Cancelled") {
info!("[INDEXER] Job {} was cancelled by user", job_id);
// Status is already 'cancelled' in DB, don't change it
} else {
error!("[INDEXER] Job {} failed: {}", job_id, err);
let _ = job::fail_job(&state.pool, job_id, &err_str).await;
}
} else {
info!("[INDEXER] Job {} completed", job_id);
}
}
Ok(None) => {
trace!("[INDEXER] No pending jobs, waiting...");
tokio::time::sleep(wait).await;
}
Err(err) => {
error!("[INDEXER] Worker error: {}", err);
tokio::time::sleep(wait).await;
}
}
}
}

View File

@@ -13,7 +13,7 @@ impl ApiConfig {
    pub fn from_env() -> Result<Self> {
        Ok(Self {
            listen_addr: std::env::var("API_LISTEN_ADDR")
-               .unwrap_or_else(|_| "0.0.0.0:8080".to_string()),
+               .unwrap_or_else(|_| "0.0.0.0:7080".to_string()),
            database_url: std::env::var("DATABASE_URL").context("DATABASE_URL is required")?,
            meili_url: std::env::var("MEILI_URL").context("MEILI_URL is required")?,
            meili_master_key: std::env::var("MEILI_MASTER_KEY")
@@ -32,7 +32,7 @@ pub struct IndexerConfig {
    pub meili_master_key: String,
    pub scan_interval_seconds: u64,
    pub thumbnail_config: ThumbnailConfig,
-   /// API base URL for thumbnail checkup at end of build (e.g. http://api:8080)
+   /// API base URL for thumbnail checkup at end of build (e.g. http://api:7080)
    pub api_base_url: String,
    /// Token to call API (e.g. API_BOOTSTRAP_TOKEN)
    pub api_bootstrap_token: String,
@@ -87,7 +87,7 @@ impl IndexerConfig {
        Ok(Self {
            listen_addr: std::env::var("INDEXER_LISTEN_ADDR")
-               .unwrap_or_else(|_| "0.0.0.0:8081".to_string()),
+               .unwrap_or_else(|_| "0.0.0.0:7081".to_string()),
            database_url: std::env::var("DATABASE_URL").context("DATABASE_URL is required")?,
            meili_url: std::env::var("MEILI_URL").context("MEILI_URL is required")?,
            meili_master_key: std::env::var("MEILI_MASTER_KEY")
@@ -98,7 +98,7 @@ impl IndexerConfig {
                .unwrap_or(5),
            thumbnail_config,
            api_base_url: std::env::var("API_BASE_URL")
-               .unwrap_or_else(|_| "http://api:8080".to_string()),
+               .unwrap_or_else(|_| "http://api:7080".to_string()),
            api_bootstrap_token: std::env::var("API_BOOTSTRAP_TOKEN")
                .context("API_BOOTSTRAP_TOKEN is required for thumbnail checkup")?,
        })
@@ -116,9 +116,9 @@ impl AdminUiConfig {
    pub fn from_env() -> Result<Self> {
        Ok(Self {
            listen_addr: std::env::var("ADMIN_UI_LISTEN_ADDR")
-               .unwrap_or_else(|_| "0.0.0.0:8082".to_string()),
+               .unwrap_or_else(|_| "0.0.0.0:7082".to_string()),
            api_base_url: std::env::var("API_BASE_URL")
-               .unwrap_or_else(|_| "http://api:8080".to_string()),
+               .unwrap_or_else(|_| "http://api:7080".to_string()),
            api_token: std::env::var("API_BOOTSTRAP_TOKEN")
                .context("API_BOOTSTRAP_TOKEN is required")?,
        })

76
crates/parsers/AGENTS.md Normal file
View File

@@ -0,0 +1,76 @@
# crates/parsers — Book parsing (CBZ, CBR, PDF)
Stateless utility crate, used by `apps/api` and `apps/indexer`.
## Public API (lib.rs)
```rust
// Format detection by extension
pub fn detect_format(path: &Path) -> Option<BookFormat> // .cbz | .cbr | .pdf
// Metadata extraction
pub fn parse_metadata(path: &Path, format: BookFormat, library_root: &Path) -> Result<ParsedMetadata>
// First-page extraction (for thumbnails)
pub fn extract_first_page(path: &Path, format: BookFormat) -> Result<Vec<u8>>
pub enum BookFormat { Cbz, Cbr, Pdf }
pub struct ParsedMetadata {
    pub title: String,          // = file name (without extension)
    pub series: Option<String>, // = first directory relative to library_root
    pub volume: Option<i32>,    // extracted from the file name
    pub page_count: Option<i32>,
}
```
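Detection is by extension only (see Gotchas); a minimal sketch of the idea — the actual function may differ in details:
```rust
use std::path::Path;

// Sketch: extension-only format detection, no magic bytes.
fn detect_format_sketch(path: &Path) -> Option<BookFormat> {
    match path.extension()?.to_str()?.to_ascii_lowercase().as_str() {
        "cbz" => Some(BookFormat::Cbz),
        "cbr" => Some(BookFormat::Cbr),
        "pdf" => Some(BookFormat::Pdf),
        _ => None,
    }
}
```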
## Parsing logic
### Title
File name without extension, kept as-is (no cleanup).
### Series
First component of the path relative to `library_root` (sketched below):
- `/libraries/One Piece/T01.cbz` → series = `"One Piece"`
- `/libraries/one-shot.cbz` → series = `None`
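A small sketch of that rule using only the standard library (illustrative, not the crate's exact code):
```rust
use std::path::{Component, Path};

// Sketch: series = first directory component between library_root and the file.
fn series_from_path(path: &Path, library_root: &Path) -> Option<String> {
    let rel = path.strip_prefix(library_root).ok()?;
    let mut components = rel.components();
    let first = components.next()?;
    // No further component → the file sits at the library root → no series.
    components.next()?;
    match first {
        Component::Normal(name) => Some(name.to_string_lossy().to_string()),
        _ => None,
    }
}
```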
### Volume (`extract_volume`)
Patterns recognized in the file name (in priority order), sketched below:
- `T01`, `T1` (French manga/comics)
- `Vol. 1`, `Vol 1`, `Volume 1`
- `#1`, `#01`
- `- 1`, `- 01` (at the end of the name)
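An illustrative sketch of that matching, assuming the `regex` crate; the actual `extract_volume` implementation is not shown here and may use different expressions:
```rust
use regex::Regex;

// Sketch: try the documented patterns in priority order, return the first numeric match.
fn extract_volume_sketch(file_stem: &str) -> Option<i32> {
    let patterns = [
        r"(?i)\bT(\d{1,3})\b",           // T01, T1
        r"(?i)\bVol(?:ume)?\.?\s*(\d+)", // Vol. 1, Vol 1, Volume 1
        r"#(\d+)",                       // #1, #01
        r"-\s*(\d+)\s*$",                // "- 1", "- 01" at the end
    ];
    for pat in patterns {
        if let Some(caps) = Regex::new(pat).ok()?.captures(file_stem) {
            if let Ok(volume) = caps[1].parse::<i32>() {
                return Some(volume);
            }
        }
    }
    None
}
```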
### Page count
| Format | Tool |
|--------|------|
| CBZ | `zip::ZipArchive` — counts image entries (jpg/jpeg/png/webp/avif) |
| CBR | `unrar lb <path>` — lists files, filters images |
| PDF | `pdfinfo <path>` — reads the `Pages:` line |
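For example, the CBZ case can be sketched as follows, assuming the `zip` crate (the exact filtering in the crate may differ):
```rust
use std::fs::File;
use zip::ZipArchive;

// Sketch: CBZ page count = number of image entries in the archive.
fn cbz_page_count_sketch(path: &std::path::Path) -> anyhow::Result<i32> {
    let archive = ZipArchive::new(File::open(path)?)?;
    let count = archive
        .file_names()
        .filter(|name| {
            let lower = name.to_ascii_lowercase();
            ["jpg", "jpeg", "png", "webp", "avif"]
                .iter()
                .any(|ext| lower.ends_with(&format!(".{ext}")))
        })
        .count();
    Ok(count as i32)
}
```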
## Required system dependencies
| Tool | Used for | Installation |
|------|----------|--------------|
| `unrar` | CBR page count | `brew install rar` / `apt install unrar` |
| `unar` | CBR first page extraction | `brew install unar` / `apt install unar` |
| `pdfinfo` | PDF page count | part of `poppler-utils` |
| `pdftoppm` | PDF first page render | part of `poppler-utils` |
**Important**: `unrar` (for listing) and `unar` (for extraction) are two different tools.
## First-page extraction
- **CBZ**: `zip::ZipArchive`, sorts the image names, reads the first one
- **CBR**: `unar -o <tmp_dir>`, recursive `WalkDir`, sort, read the first — then cleans up `tmp_dir`
- **PDF**: `pdftoppm -f 1 -singlefile -png -scale-to 800` → temporary PNG file — then cleans up `tmp_dir`
Temp directory: `std::env::temp_dir()/stripstream-{cbr|pdf}-thumb-<uuid>` (PDF case sketched below).
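A minimal sketch of the PDF path, assuming `pdftoppm` from `poppler-utils` is on PATH (flags taken from the list above; error handling simplified):
```rust
use std::process::Command;

// Sketch: render the first PDF page to a temporary PNG, read it back, clean up.
fn pdf_first_page_sketch(pdf: &std::path::Path) -> anyhow::Result<Vec<u8>> {
    let tmp_dir = std::env::temp_dir()
        .join(format!("stripstream-pdf-thumb-{}", uuid::Uuid::new_v4()));
    std::fs::create_dir_all(&tmp_dir)?;
    let prefix = tmp_dir.join("page");
    let status = Command::new("pdftoppm")
        .args(["-f", "1", "-singlefile", "-png", "-scale-to", "800"])
        .arg(pdf)
        .arg(&prefix)
        .status()?;
    anyhow::ensure!(status.success(), "pdftoppm failed");
    let bytes = std::fs::read(prefix.with_extension("png"))?;
    std::fs::remove_dir_all(&tmp_dir)?;
    Ok(bytes)
}
```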
## Gotchas
- `clean_title()` exists but is marked `#[allow(dead_code)]` — the title is **not** cleaned (a deliberate decision).
- CBRs can contain nested folders → `WalkDir` is required (no flat listing).
- Format detection is **by extension only** (no magic bytes).
- `pdfinfo` and `pdftoppm` must come from the `poppler-utils` package (not `poppler` alone).
- On parse failure, the caller (indexer/api) stores `parse_status = 'error'` in the DB but keeps going.

View File

@@ -18,7 +18,7 @@ services:
  meilisearch:
    image: getmeili/meilisearch:v1.12
    env_file:
-     - ../.env
+     - .env
    ports:
      - "7700:7700"
    volumes:
@@ -39,7 +39,7 @@ services:
      POSTGRES_PASSWORD: stripstream
      POSTGRES_DB: stripstream
    volumes:
-     - ./migrations:/migrations:ro
+     - ./infra/migrations:/migrations:ro
    command:
      [
        "sh",
@@ -49,15 +49,15 @@ services:
  api:
    build:
-     context: ..
+     context: .
      dockerfile: apps/api/Dockerfile
    env_file:
-     - ../.env
+     - .env
    ports:
-     - "7080:8080"
+     - "7080:7080"
    volumes:
-     - ${LIBRARIES_HOST_PATH:-../libraries}:/libraries
+     - ${LIBRARIES_HOST_PATH:-./libraries}:/libraries
-     - ${THUMBNAILS_HOST_PATH:-../data/thumbnails}:/data/thumbnails
+     - ${THUMBNAILS_HOST_PATH:-./data/thumbnails}:/data/thumbnails
    depends_on:
      migrate:
        condition: service_completed_successfully
@@ -66,22 +66,22 @@ services:
      meilisearch:
        condition: service_healthy
    healthcheck:
-     test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:8080/health"]
+     test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:7080/health"]
      interval: 10s
      timeout: 5s
      retries: 5
  indexer:
    build:
-     context: ..
+     context: .
      dockerfile: apps/indexer/Dockerfile
    env_file:
-     - ../.env
+     - .env
    ports:
-     - "7081:8081"
+     - "7081:7081"
    volumes:
-     - ${LIBRARIES_HOST_PATH:-../libraries}:/libraries
+     - ${LIBRARIES_HOST_PATH:-./libraries}:/libraries
-     - ${THUMBNAILS_HOST_PATH:-../data/thumbnails}:/data/thumbnails
+     - ${THUMBNAILS_HOST_PATH:-./data/thumbnails}:/data/thumbnails
    depends_on:
      migrate:
        condition: service_completed_successfully
@@ -90,27 +90,27 @@ services:
      meilisearch:
        condition: service_healthy
    healthcheck:
-     test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:8081/health"]
+     test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:7081/health"]
      interval: 10s
      timeout: 5s
      retries: 5
  backoffice:
    build:
-     context: ..
+     context: .
      dockerfile: apps/backoffice/Dockerfile
    env_file:
-     - ../.env
+     - .env
    environment:
-     - PORT=8082
+     - PORT=7082
      - HOST=0.0.0.0
    ports:
-     - "7082:8082"
+     - "7082:7082"
    depends_on:
      api:
        condition: service_healthy
    healthcheck:
-     test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:8082/health"]
+     test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:7082/health"]
      interval: 10s
      timeout: 5s
      retries: 5

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
- BASE_API="${BASE_API:-http://127.0.0.1:8080}"
+ BASE_API="${BASE_API:-http://127.0.0.1:7080}"
TOKEN="${API_TOKEN:-stripstream-dev-bootstrap-token}"
measure() {

View File

@@ -1,9 +1,9 @@
#!/usr/bin/env bash
set -euo pipefail
- BASE_API="${BASE_API:-http://127.0.0.1:8080}"
+ BASE_API="${BASE_API:-http://127.0.0.1:7080}"
- BASE_INDEXER="${BASE_INDEXER:-http://127.0.0.1:8081}"
+ BASE_INDEXER="${BASE_INDEXER:-http://127.0.0.1:7081}"
- BASE_BACKOFFICE="${BASE_BACKOFFICE:-${BASE_ADMIN:-http://127.0.0.1:8082}}"
+ BASE_BACKOFFICE="${BASE_BACKOFFICE:-${BASE_ADMIN:-http://127.0.0.1:7082}}"
TOKEN="${API_TOKEN:-stripstream-dev-bootstrap-token}"
echo "[smoke] health checks"