refactor: replace Meilisearch with PostgreSQL full-text search

Remove Meilisearch dependency entirely. Search is now handled by
PostgreSQL ILIKE with pg_trgm indexes, joining series_metadata for
series-level authors. No external search engine needed.

- Replace search.rs Meilisearch HTTP calls with PostgreSQL queries
- Remove meili.rs from indexer, sync_meili call from job pipeline
- Remove MEILI_URL/MEILI_MASTER_KEY from config, state, env files
- Remove meilisearch service from docker-compose.yml
- Add migration 0027: drop sync_metadata, enable pg_trgm, add indexes
- Remove search resync button/endpoint (no longer needed)
- Update all documentation (CLAUDE.md, README.md, AGENTS.md, PLAN.md)

API contract unchanged — same SearchResponse shape returned.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
commit 389d71b42f (parent 2985ef5561)
Date: 2026-03-18 10:59:25 +01:00
20 changed files with 97 additions and 452 deletions
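Migration 0027 is summarized above but not shown in this view. A minimal sketch of what such a migration plausibly contains — the index names and indexed columns are assumptions, not the actual migration:

```sql
-- Hypothetical sketch of migration 0027; real names/columns may differ.
-- Trigram GIN indexes let PostgreSQL serve unanchored ILIKE '%term%'
-- patterns from an index instead of a sequential scan.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX IF NOT EXISTS idx_books_title_trgm
    ON books USING gin (title gin_trgm_ops);
CREATE INDEX IF NOT EXISTS idx_books_series_trgm
    ON books USING gin (series gin_trgm_ops);

-- Meilisearch sync bookkeeping is no longer needed.
DROP TABLE IF EXISTS sync_metadata;
```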


@@ -9,9 +9,6 @@
 # REQUIRED - Change these values in production!
 # =============================================================================
-# Master key for Meilisearch authentication (required)
-MEILI_MASTER_KEY=change-me-in-production
 # Bootstrap token for initial API admin access (required)
 # Use this token for the first API calls before creating proper API tokens
 API_BOOTSTRAP_TOKEN=change-me-in-production
@@ -28,9 +25,6 @@ API_BASE_URL=http://api:7080
 INDEXER_LISTEN_ADDR=0.0.0.0:7081
 INDEXER_SCAN_INTERVAL_SECONDS=5
-# Meilisearch Search Engine
-MEILI_URL=http://meilisearch:7700
 # PostgreSQL Database
 DATABASE_URL=postgres://stripstream:stripstream@postgres:5432/stripstream
@@ -77,5 +71,4 @@ THUMBNAILS_HOST_PATH=./data/thumbnails
 # - API: change "7080:7080" to "YOUR_PORT:7080"
 # - Indexer: change "7081:7081" to "YOUR_PORT:7081"
 # - Backoffice: change "7082:7082" to "YOUR_PORT:7082"
-# - Meilisearch: change "7700:7700" to "YOUR_PORT:7700"
 # - PostgreSQL: change "6432:5432" to "YOUR_PORT:5432"


@@ -77,7 +77,7 @@ sqlx migrate add -r migration_name
 ```bash
 # Start infrastructure only
-docker compose up -d postgres meilisearch
+docker compose up -d postgres
 # Start full stack
 docker compose up -d


@@ -10,7 +10,6 @@ Comic/ebook library manager. Multi-crate Cargo workspace
 | Indexer (background) | `apps/indexer/` | 7081 |
 | Backoffice (Next.js) | `apps/backoffice/` | 7082 |
 | PostgreSQL | infra | 6432 |
-| Meilisearch | infra | 7700 |
 Shared crates: `crates/core` (env config), `crates/parsers` (CBZ/CBR/PDF).
@@ -31,7 +30,7 @@ cargo test
 cargo test -p parsers
 # Infra (dependencies only); docker-compose.yml is at the repo root
-docker compose up -d postgres meilisearch
+docker compose up -d postgres
 # Backoffice dev
 cd apps/backoffice && npm install && npm run dev  # http://localhost:7082
@@ -46,7 +45,7 @@ sqlx migrate run  # DATABASE_URL must be set
 cp .env.example .env  # then edit the REQUIRED values
 ```
-Variables **required** at startup: `DATABASE_URL`, `MEILI_URL`, `MEILI_MASTER_KEY`, `API_BOOTSTRAP_TOKEN`.
+Variables **required** at startup: `DATABASE_URL`, `API_BOOTSTRAP_TOKEN`.
 ## Gotchas
@@ -56,6 +55,7 @@ Variables **required** at startup: `DATABASE_URL`, `MEILI_URL`, `MEILI_MASTE
 - **Thumbnails**: stored in `THUMBNAIL_DIRECTORY` (default `/data/thumbnails`), generated by **the API** (not the indexer); the indexer triggers a checkup via `POST /index/jobs/:id/thumbnails/checkup`.
 - **Cargo workspace**: external dependencies are declared in the root `Cargo.toml`, not in the individual crates.
 - **Migrations**: `infra/migrations/` directory, managed by sqlx. Always migrate before starting the services.
+- **Search**: full-text via PostgreSQL (`ILIKE` + `pg_trgm`), no external search engine.
 ## Key files
@@ -64,6 +64,7 @@ Variables **required** at startup: `DATABASE_URL`, `MEILI_URL`, `MEILI_MASTE
 | `crates/core/src/config.rs` | Config from env (API, Indexer, AdminUI) |
 | `crates/parsers/src/lib.rs` | Format detection, metadata extraction |
 | `apps/api/src/books.rs` | Book CRUD endpoints |
+| `apps/api/src/search.rs` | PostgreSQL full-text search |
 | `apps/api/src/pages.rs` | Page rendering + LRU cache |
 | `apps/indexer/src/scanner.rs` | Filesystem scan |
 | `infra/migrations/*.sql` | DB schema |

PLAN.md

@@ -12,7 +12,7 @@ Build an ultra-performant server to index and serve libraries
 - Backend/API: Rust (`axum`)
 - Indexing: dedicated Rust service (`indexer`)
 - DB: PostgreSQL
-- Search: Meilisearch
+- Search: PostgreSQL full-text (ILIKE + pg_trgm)
 - Deployment: Docker Compose
 - Auth: env bootstrap token + admin tokens in DB (creatable/revocable)
 - Admin token expiration: none by default (manual revocation)
@@ -33,7 +33,7 @@ Build an ultra-performant server to index and serve libraries
 **DoD:** Crates build OK.
 ### T2 - Docker Compose infra
-- [x] Define services `postgres`, `meilisearch`, `api`, `indexer`
+- [x] Define services `postgres`, `api`, `indexer`
 - [x] Persistent volumes
 - [x] Healthchecks
@@ -114,7 +114,7 @@ Build an ultra-performant server to index and serve libraries
 **DoD:** Pagination/filters working.
 ### T13 - Search
-- [x] Projection to Meilisearch
+- [x] PostgreSQL full-text search
 - [x] `GET /search?q=...&library_id=...&type=...`
 - [x] Fuzzy + filters
@@ -264,10 +264,10 @@ Build an ultra-performant server to index and serve libraries
 - Bootstrap token = break-glass (can be disabled later)
 ## Journal
-- 2026-03-05: `docker compose up -d --build` validated, full stack healthy (`postgres`, `meilisearch`, `api`, `indexer`, `admin-ui`).
+- 2026-03-05: `docker compose up -d --build` validated, full stack healthy (`postgres`, `api`, `indexer`, `admin-ui`).
 - 2026-03-05: infra tweaks applied for stable startup (`unrar` -> `unrar-free`, `rust:1-bookworm` image, `127.0.0.1` healthchecks).
 - 2026-03-05: added a `migrate` service to Compose to automatically run `infra/migrations/0001_init.sql` at startup.
-- 2026-03-05: Batch 2 done (jobs, incremental scan, `cbz/cbr/pdf` parsers, books API, Meilisearch sync + search).
+- 2026-03-05: Batch 2 done (jobs, incremental scan, `cbz/cbr/pdf` parsers, books API, PostgreSQL search).
 - 2026-03-05: end-to-end check OK on a test library (`/libraries/demo`) with indexing, `/books` listing and `/search` (1 CBZ detected).
 - 2026-03-05: Batch 3 progress: pages endpoint (`/books/:id/pages/:n`) active with LRU cache, ETag/Cache-Control, render concurrency limit and timeouts.
 - 2026-03-05: API hardening: readiness exposed without auth via `route_layer`, simple `/metrics`, read rate limiting (120 req/s).


@@ -9,7 +9,7 @@ The project consists of the following components:
 - **API** (`apps/api/`) - Rust-based REST API service
 - **Indexer** (`apps/indexer/`) - Rust-based background indexing service
 - **Backoffice** (`apps/backoffice/`) - Next.js web administration interface
-- **Infrastructure** (`infra/`) - Docker Compose setup with PostgreSQL and Meilisearch
+- **Infrastructure** (`infra/`) - Docker Compose setup with PostgreSQL
 ## Quick Start
@@ -27,19 +27,16 @@ The project consists of the following components:
 ```
 2. Edit `.env` and set secure values for:
-   - `MEILI_MASTER_KEY` - Master key for Meilisearch
    - `API_BOOTSTRAP_TOKEN` - Bootstrap token for initial API authentication
 ### Running with Docker
 ```bash
-cd infra
 docker compose up -d
 ```
 This will start:
 - PostgreSQL (port 6432)
-- Meilisearch (port 7700)
 - API service (port 7080)
 - Indexer service (port 7081)
 - Backoffice web UI (port 7082)
@@ -48,7 +45,6 @@ This will start:
 - **Backoffice**: http://localhost:7082
 - **API**: http://localhost:7080
-- **Meilisearch**: http://localhost:7700
 ### Default Credentials
@@ -62,8 +58,7 @@ The default bootstrap token is configured in your `.env` file. Use this for init
 ```bash
 # Start dependencies
-cd infra
-docker compose up -d postgres meilisearch
+docker compose up -d postgres
 # Run API
 cd apps/api
@@ -96,7 +91,7 @@ The backoffice will be available at http://localhost:7082
 - Support for CBZ, CBR, and PDF formats
 - Automatic metadata extraction
 - Series and volume detection
-- Full-text search with Meilisearch
+- Full-text search powered by PostgreSQL
 ### Jobs Monitoring
 - Real-time job progress tracking
@@ -118,8 +113,6 @@ Variables marked **required** must be set. The others have a default val
 | Variable | Description | Default |
 |----------|-------------|--------|
 | `DATABASE_URL` | **required** — PostgreSQL connection | — |
-| `MEILI_URL` | **required** — Meilisearch URL | — |
-| `MEILI_MASTER_KEY` | **required** — Meilisearch master key | — |
 ### API
@@ -165,7 +158,6 @@ stripstream-librarian/
 │   ├── indexer/       # Rust background indexer
 │   └── backoffice/    # Next.js web UI
 ├── infra/
-│   ├── docker-compose.yml
 │   └── migrations/    # SQL database migrations
 ├── libraries/         # Book storage (mounted volume)
 └── .env               # Environment configuration
@@ -207,11 +199,6 @@ services:
     volumes:
       - postgres_data:/var/lib/postgresql/data
-  meilisearch:
-    image: getmeili/meilisearch:v1.12
-    environment:
-      MEILI_MASTER_KEY: your_meili_master_key # required — change this
   api:
     image: julienfroidefond32/stripstream-api:latest
     ports:
@@ -222,8 +209,6 @@ services:
     environment:
       # --- Required ---
       DATABASE_URL: postgres://stripstream:stripstream@postgres:5432/stripstream
-      MEILI_URL: http://meilisearch:7700
-      MEILI_MASTER_KEY: your_meili_master_key # must match meilisearch above
       API_BOOTSTRAP_TOKEN: your_bootstrap_token # required — change this
       # --- Optional (defaults shown) ---
       # API_LISTEN_ADDR: 0.0.0.0:7080
@@ -238,8 +223,6 @@ services:
     environment:
       # --- Required ---
       DATABASE_URL: postgres://stripstream:stripstream@postgres:5432/stripstream
-      MEILI_URL: http://meilisearch:7700
-      MEILI_MASTER_KEY: your_meili_master_key # must match meilisearch above
       # --- Optional (defaults shown) ---
       # INDEXER_LISTEN_ADDR: 0.0.0.0:7081
       # INDEXER_SCAN_INTERVAL_SECONDS: 5


@@ -68,8 +68,6 @@ async fn main() -> anyhow::Result<()> {
     let state = AppState {
         pool,
         bootstrap_token: Arc::from(config.api_bootstrap_token),
-        meili_url: Arc::from(config.meili_url),
-        meili_master_key: Arc::from(config.meili_master_key),
         page_cache: Arc::new(Mutex::new(LruCache::new(NonZeroUsize::new(512).expect("non-zero")))),
         page_render_limit: Arc::new(Semaphore::new(concurrent_renders)),
         metrics: Arc::new(Metrics::new()),


@@ -39,13 +39,13 @@ pub struct SearchResponse {
     pub processing_time_ms: Option<u64>,
 }
-/// Search books across all libraries using Meilisearch
+/// Search books across all libraries
 #[utoipa::path(
     get,
     path = "/search",
     tag = "books",
     params(
-        ("q" = String, Query, description = "Search query (books via Meilisearch + series via ILIKE)"),
+        ("q" = String, Query, description = "Search query (books + series via PostgreSQL full-text)"),
         ("library_id" = Option<String>, Query, description = "Filter by library ID"),
         ("type" = Option<String>, Query, description = "Filter by type (cbz, cbr, pdf)"),
         ("kind" = Option<String>, Query, description = "Filter by kind (alias for type)"),
@@ -65,34 +65,38 @@ pub async fn search_books(
         return Err(ApiError::bad_request("q is required"));
     }
-    let mut filters: Vec<String> = Vec::new();
-    if let Some(library_id) = query.library_id.as_deref() {
-        filters.push(format!("library_id = '{}'", library_id.replace('"', "")));
-    }
-    let kind_filter = query.r#type.as_deref().or(query.kind.as_deref());
-    if let Some(kind) = kind_filter {
-        filters.push(format!("kind = '{}'", kind.replace('"', "")));
-    }
-    let body = serde_json::json!({
-        "q": query.q,
-        "limit": query.limit.unwrap_or(20).clamp(1, 100),
-        "filter": if filters.is_empty() { serde_json::Value::Null } else { serde_json::Value::String(filters.join(" AND ")) }
-    });
-    let limit_val = query.limit.unwrap_or(20).clamp(1, 100);
+    let limit_val = query.limit.unwrap_or(20).clamp(1, 100) as i64;
     let q_pattern = format!("%{}%", query.q);
-    let library_id_uuid: Option<uuid::Uuid> = query.library_id.as_deref()
+    let library_id_uuid: Option<Uuid> = query.library_id.as_deref()
         .and_then(|s| s.parse().ok());
+    let kind_filter: Option<&str> = query.r#type.as_deref().or(query.kind.as_deref());
-    // Meilisearch (books) + PG series search in parallel
-    let client = reqwest::Client::new();
-    let url = format!("{}/indexes/books/search", state.meili_url.trim_end_matches('/'));
-    let meili_fut = client
-        .post(&url)
-        .header("Authorization", format!("Bearer {}", state.meili_master_key))
-        .json(&body)
-        .send();
+    let start = std::time::Instant::now();
+    // Book search via PostgreSQL ILIKE on title, authors, series
+    let books_sql = r#"
+        SELECT b.id, b.library_id, b.kind, b.title,
+               COALESCE(b.authors, CASE WHEN b.author IS NOT NULL AND b.author != '' THEN ARRAY[b.author] ELSE ARRAY[]::text[] END) as authors,
+               b.series, b.volume, b.language
+        FROM books b
+        LEFT JOIN series_metadata sm
+          ON sm.library_id = b.library_id
+         AND sm.name = COALESCE(NULLIF(b.series, ''), 'unclassified')
+        WHERE (
+            b.title ILIKE $1
+            OR b.series ILIKE $1
+            OR EXISTS (SELECT 1 FROM unnest(
+                COALESCE(b.authors, CASE WHEN b.author IS NOT NULL AND b.author != '' THEN ARRAY[b.author] ELSE ARRAY[]::text[] END)
+                || COALESCE(sm.authors, ARRAY[]::text[])
+            ) AS a WHERE a ILIKE $1)
+        )
+        AND ($2::uuid IS NULL OR b.library_id = $2)
+        AND ($3::text IS NULL OR b.kind = $3)
+        ORDER BY
+            CASE WHEN b.title ILIKE $1 THEN 0 ELSE 1 END,
+            b.title ASC
+        LIMIT $4
+    "#;
     let series_sql = r#"
         WITH sorted_books AS (
@@ -108,7 +112,7 @@ pub async fn search_books(
                     title ASC
                 ) as rn
             FROM books
-            WHERE ($1::uuid IS NULL OR library_id = $1)
+            WHERE ($2::uuid IS NULL OR library_id = $2)
         ),
         series_counts AS (
             SELECT
@@ -123,39 +127,49 @@ pub async fn search_books(
         SELECT sc.library_id, sc.name, sc.book_count, sc.books_read_count, sb.id as first_book_id
         FROM series_counts sc
         JOIN sorted_books sb ON sb.library_id = sc.library_id AND sb.name = sc.name AND sb.rn = 1
-        WHERE sc.name ILIKE $2
+        WHERE sc.name ILIKE $1
         ORDER BY sc.name ASC
-        LIMIT $3
+        LIMIT $4
     "#;
-    let series_fut = sqlx::query(series_sql)
-        .bind(library_id_uuid)
-        .bind(&q_pattern)
-        .bind(limit_val as i64)
-        .fetch_all(&state.pool);
+    let (books_rows, series_rows) = tokio::join!(
+        sqlx::query(books_sql)
+            .bind(&q_pattern)
+            .bind(library_id_uuid)
+            .bind(kind_filter)
+            .bind(limit_val)
+            .fetch_all(&state.pool),
+        sqlx::query(series_sql)
+            .bind(&q_pattern)
+            .bind(library_id_uuid)
+            .bind(kind_filter) // unused in series query but keeps bind positions consistent
+            .bind(limit_val)
+            .fetch_all(&state.pool)
+    );
-    let (meili_resp, series_rows) = tokio::join!(meili_fut, series_fut);
+    let elapsed_ms = start.elapsed().as_millis() as u64;
-    // Meilisearch handling
-    let meili_resp = meili_resp.map_err(|e| ApiError::internal(format!("meili request failed: {e}")))?;
-    let (hits, estimated_total_hits, processing_time_ms) = if !meili_resp.status().is_success() {
-        let body = meili_resp.text().await.unwrap_or_default();
-        if body.contains("index_not_found") {
-            (serde_json::json!([]), Some(0u64), Some(0u64))
-        } else {
-            return Err(ApiError::internal(format!("meili error: {body}")));
-        }
-    } else {
-        let payload: serde_json::Value = meili_resp.json().await
-            .map_err(|e| ApiError::internal(format!("invalid meili response: {e}")))?;
-        (
-            payload.get("hits").cloned().unwrap_or_else(|| serde_json::json!([])),
-            payload.get("estimatedTotalHits").and_then(|v| v.as_u64()),
-            payload.get("processingTimeMs").and_then(|v| v.as_u64()),
-        )
-    };
+    // Build book hits as JSON array (same shape as before)
+    let books_rows = books_rows.map_err(|e| ApiError::internal(format!("book search failed: {e}")))?;
+    let hits: Vec<serde_json::Value> = books_rows
+        .iter()
+        .map(|row| {
+            serde_json::json!({
+                "id": row.get::<Uuid, _>("id").to_string(),
+                "library_id": row.get::<Uuid, _>("library_id").to_string(),
+                "kind": row.get::<String, _>("kind"),
+                "title": row.get::<String, _>("title"),
+                "authors": row.get::<Vec<String>, _>("authors"),
+                "series": row.get::<Option<String>, _>("series"),
+                "volume": row.get::<Option<i32>, _>("volume"),
+                "language": row.get::<Option<String>, _>("language"),
+            })
+        })
+        .collect();
+    let estimated_total_hits = hits.len() as u64;
-    // Series handling
+    // Series hits
     let series_hits: Vec<SeriesHit> = series_rows
         .unwrap_or_default()
         .iter()
@@ -169,9 +183,9 @@ pub async fn search_books(
         .collect();
     Ok(Json(SearchResponse {
-        hits,
+        hits: serde_json::Value::Array(hits),
         series_hits,
-        estimated_total_hits,
+        estimated_total_hits: Some(estimated_total_hits),
-        processing_time_ms,
+        processing_time_ms: Some(elapsed_ms),
     }))
 }
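A quick way to confirm the trigram indexes introduced by this commit are actually being used — the index name in the plan is an assumption, since the migration itself is not shown here:

```sql
-- Hypothetical sanity check; run against the live database.
EXPLAIN (COSTS OFF)
SELECT id FROM books WHERE title ILIKE '%tintin%';
-- A "Bitmap Index Scan" on the trigram index indicates pg_trgm is in use;
-- a "Seq Scan" means the index is missing or the planner judged the table
-- too small to bother.
```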


@@ -42,7 +42,6 @@ pub fn settings_routes() -> Router<AppState> {
         .route("/settings/cache/clear", post(clear_cache))
         .route("/settings/cache/stats", get(get_cache_stats))
         .route("/settings/thumbnail/stats", get(get_thumbnail_stats))
-        .route("/settings/search/resync", post(force_search_resync))
 }
 /// List all settings
@@ -325,27 +324,3 @@ pub async fn get_thumbnail_stats(State(_state): State<AppState>) -> Result<Json<
     Ok(Json(stats))
 }
-/// Force a full Meilisearch resync by resetting the sync timestamp
-#[utoipa::path(
-    post,
-    path = "/settings/search/resync",
-    tag = "settings",
-    responses(
-        (status = 200, description = "Resync scheduled"),
-        (status = 401, description = "Unauthorized"),
-    ),
-    security(("Bearer" = []))
-)]
-pub async fn force_search_resync(
-    State(state): State<AppState>,
-) -> Result<Json<Value>, ApiError> {
-    sqlx::query("UPDATE sync_metadata SET last_meili_sync = NULL WHERE id = 1")
-        .execute(&state.pool)
-        .await?;
-    Ok(Json(serde_json::json!({
-        "success": true,
-        "message": "Search resync scheduled. The indexer will perform a full sync on its next cycle."
-    })))
-}


@@ -12,8 +12,6 @@ use tokio::sync::{Mutex, RwLock, Semaphore};
 pub struct AppState {
     pub pool: sqlx::PgPool,
     pub bootstrap_token: Arc<str>,
-    pub meili_url: Arc<str>,
-    pub meili_master_key: Arc<str>,
     pub page_cache: Arc<Mutex<LruCache<String, Arc<Vec<u8>>>>>,
     pub page_render_limit: Arc<Semaphore>,
    pub metrics: Arc<Metrics>,


@@ -1,11 +0,0 @@
-import { NextResponse } from "next/server";
-import { forceSearchResync } from "@/lib/api";
-export async function POST() {
-  try {
-    const data = await forceSearchResync();
-    return NextResponse.json(data);
-  } catch (error) {
-    return NextResponse.json({ error: "Failed to trigger search resync" }, { status: 500 });
-  }
-}


@@ -21,9 +21,6 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
   const [clearResult, setClearResult] = useState<ClearCacheResponse | null>(null);
   const [isSaving, setIsSaving] = useState(false);
   const [saveMessage, setSaveMessage] = useState<string | null>(null);
-  const [isResyncing, setIsResyncing] = useState(false);
-  const [resyncResult, setResyncResult] = useState<{ success: boolean; message: string } | null>(null);
   // Komga sync state — URL and username are persisted in settings
   const [komgaUrl, setKomgaUrl] = useState("");
   const [komgaUsername, setKomgaUsername] = useState("");
@@ -89,20 +86,6 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
     }
   }
-  async function handleSearchResync() {
-    setIsResyncing(true);
-    setResyncResult(null);
-    try {
-      const response = await fetch("/api/settings/search/resync", { method: "POST" });
-      const result = await response.json();
-      setResyncResult(result);
-    } catch {
-      setResyncResult({ success: false, message: "Failed to trigger search resync" });
-    } finally {
-      setIsResyncing(false);
-    }
-  }
   const fetchReports = useCallback(async () => {
     try {
       const resp = await fetch("/api/komga/reports");
@@ -365,43 +348,6 @@ export default function SettingsPage({ initialSettings, initialCacheStats, initi
         </CardContent>
       </Card>
-      {/* Search Index */}
-      <Card className="mb-6">
-        <CardHeader>
-          <CardTitle className="flex items-center gap-2">
-            <Icon name="search" size="md" />
-            Search Index
-          </CardTitle>
-          <CardDescription>Force a full resync of the Meilisearch index. This will re-index all books on the next indexer cycle.</CardDescription>
-        </CardHeader>
-        <CardContent>
-          <div className="space-y-4">
-            {resyncResult && (
-              <div className={`p-3 rounded-lg ${resyncResult.success ? 'bg-success/10 text-success' : 'bg-destructive/10 text-destructive'}`}>
-                {resyncResult.message}
-              </div>
-            )}
-            <Button
-              onClick={handleSearchResync}
-              disabled={isResyncing}
-            >
-              {isResyncing ? (
-                <>
-                  <Icon name="spinner" size="sm" className="animate-spin -ml-1 mr-2" />
-                  Scheduling...
-                </>
-              ) : (
-                <>
-                  <Icon name="refresh" size="sm" className="mr-2" />
-                  Force Search Resync
-                </>
-              )}
-            </Button>
-          </div>
-        </CardContent>
-      </Card>
       {/* Limits Settings */}
       <Card className="mb-6">
         <CardHeader>


@@ -406,12 +406,6 @@ export async function getThumbnailStats() {
   return apiFetch<ThumbnailStats>("/settings/thumbnail/stats");
 }
-export async function forceSearchResync() {
-  return apiFetch<{ success: boolean; message: string }>("/settings/search/resync", {
-    method: "POST",
-  });
-}
 export async function convertBook(bookId: string) {
   return apiFetch<IndexJobDto>(`/books/${bookId}/convert`, { method: "POST" });
 }


@@ -7,7 +7,7 @@ Background service on port **7081**. See the root AGENTS.md for conven
 | File | Role |
 |---------|------|
 | `main.rs` | Entry point, initialization, worker launch |
-| `lib.rs` | `AppState` (pool, meili_url, meili_master_key) |
+| `lib.rs` | `AppState` (pool) |
 | `worker.rs` | Main loop: claim job → process → cleanup stale |
 | `job.rs` | `claim_next_job`, `process_job`, `fail_job`, `cleanup_stale_jobs` |
 | `scanner.rs` | Phase 1 discovery: WalkDir + `parse_metadata_fast` (zero archive I/O), skips unchanged directories via mtime, DB batching |
@@ -15,7 +15,6 @@ Background service on port **7081**. See the root AGENTS.md for conven
 | `batch.rs` | `flush_all_batches` with UNNEST, `BookInsert/Update/FileInsert/Update/ErrorInsert` structs |
 | `scheduler.rs` | Auto-scan: checks every 60s for libraries to monitor |
 | `watcher.rs` | Real-time file watcher |
-| `meili.rs` | Meilisearch indexing/sync |
 | `api.rs` | Indexer HTTP endpoints (/health, /ready) |
 | `utils.rs` | `remap_libraries_path`, `unmap_libraries_path`, `compute_fingerprint`, `kind_from_format` |
@@ -28,7 +27,6 @@ claim_next_job (UPDATE ... RETURNING, status pending→running)
 │   ├─ WalkDir + parse_metadata_fast (zero archive I/O)
 │   ├─ skip directories via directory_mtimes (DB table)
 │   └─ INSERT books (page_count=NULL) → books visible immediately
-├─ meili::sync_meili
 ├─ analyzer::cleanup_orphaned_thumbnails (full_rebuild only)
 └─ Phase 2: analyzer::analyze_library_books
     ├─ SELECT books WHERE page_count IS NULL


@@ -3,7 +3,7 @@ use sqlx::{PgPool, Row};
 use tracing::{error, info};
 use uuid::Uuid;
-use crate::{analyzer, converter, meili, scanner, AppState};
+use crate::{analyzer, converter, scanner, AppState};
 pub async fn cleanup_stale_jobs(pool: &PgPool) -> Result<()> {
     let result = sqlx::query(
@@ -337,9 +337,6 @@ pub async fn process_job(
         }
     }
-    // Sync search index after discovery (books are visible immediately)
-    meili::sync_meili(&state.pool, &state.meili_url, &state.meili_master_key).await?;
     // For full rebuild: clean up orphaned thumbnail files (old UUIDs)
     if is_full_rebuild {
         analyzer::cleanup_orphaned_thumbnails(state).await?;


@@ -3,7 +3,6 @@ pub mod api;
 pub mod batch;
 pub mod converter;
 pub mod job;
-pub mod meili;
 pub mod scheduler;
 pub mod scanner;
 pub mod utils;
@@ -15,6 +14,4 @@ use sqlx::PgPool;
 #[derive(Clone)]
 pub struct AppState {
     pub pool: PgPool,
-    pub meili_url: String,
-    pub meili_master_key: String,
 }

View File

@@ -30,11 +30,7 @@ async fn async_main() -> anyhow::Result<()> {
         .connect(&config.database_url)
         .await?;
 
-    let state = AppState {
-        pool,
-        meili_url: config.meili_url.clone(),
-        meili_master_key: config.meili_master_key.clone(),
-    };
+    let state = AppState { pool };
 
     tokio::spawn(indexer::worker::run_worker(state.clone(), config.scan_interval_seconds));

View File

@@ -1,214 +0,0 @@
use anyhow::{Context, Result};
use chrono::{DateTime, Utc};
use reqwest::Client;
use serde::Serialize;
use sqlx::{PgPool, Row};
use tracing::info;
use uuid::Uuid;
#[derive(Serialize)]
struct SearchDoc {
id: String,
library_id: String,
kind: String,
title: String,
authors: Vec<String>,
series: Option<String>,
volume: Option<i32>,
language: Option<String>,
}
pub async fn sync_meili(pool: &PgPool, meili_url: &str, meili_master_key: &str) -> Result<()> {
let client = Client::new();
let base = meili_url.trim_end_matches('/');
// Ensure index exists and has proper settings
let _ = client
.post(format!("{base}/indexes"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&serde_json::json!({"uid": "books", "primaryKey": "id"}))
.send()
.await;
let _ = client
.patch(format!("{base}/indexes/books/settings/filterable-attributes"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&serde_json::json!(["library_id", "kind"]))
.send()
.await;
let _ = client
.put(format!("{base}/indexes/books/settings/searchable-attributes"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&serde_json::json!(["title", "authors", "series"]))
.send()
.await;
// Get last sync timestamp
let last_sync: Option<DateTime<Utc>> = sqlx::query_scalar(
"SELECT last_meili_sync FROM sync_metadata WHERE id = 1 AND last_meili_sync IS NOT NULL"
)
.fetch_optional(pool)
.await?;
// If no previous sync, do a full sync
let is_full_sync = last_sync.is_none();
// Get books to sync: all if full sync, only modified since last sync otherwise.
// Join series_metadata to merge series-level authors into the search document.
let books_query = r#"
SELECT b.id, b.library_id, b.kind, b.title, b.series, b.volume, b.language, b.updated_at,
ARRAY(
SELECT DISTINCT unnest(
COALESCE(b.authors, CASE WHEN b.author IS NOT NULL AND b.author != '' THEN ARRAY[b.author] ELSE ARRAY[]::text[] END)
|| COALESCE(sm.authors, ARRAY[]::text[])
)
) as authors
FROM books b
LEFT JOIN series_metadata sm
ON sm.library_id = b.library_id
AND sm.name = COALESCE(NULLIF(b.series, ''), 'unclassified')
"#;
let rows = if is_full_sync {
info!("[MEILI] Performing full sync");
sqlx::query(books_query)
.fetch_all(pool)
.await?
} else {
let since = last_sync.unwrap();
info!("[MEILI] Performing incremental sync since {}", since);
// Include books that changed OR whose series_metadata changed
sqlx::query(&format!(
"{books_query} WHERE b.updated_at > $1 OR sm.updated_at > $1"
))
.bind(since)
.fetch_all(pool)
.await?
};
if rows.is_empty() && !is_full_sync {
info!("[MEILI] No changes to sync");
// Still update the timestamp
sqlx::query(
"INSERT INTO sync_metadata (id, last_meili_sync) VALUES (1, NOW()) ON CONFLICT (id) DO UPDATE SET last_meili_sync = NOW()"
)
.execute(pool)
.await?;
return Ok(());
}
let docs: Vec<SearchDoc> = rows
.into_iter()
.map(|row| SearchDoc {
id: row.get::<Uuid, _>("id").to_string(),
library_id: row.get::<Uuid, _>("library_id").to_string(),
kind: row.get("kind"),
title: row.get("title"),
authors: row.get::<Vec<String>, _>("authors"),
series: row.get("series"),
volume: row.get("volume"),
language: row.get("language"),
})
.collect();
let doc_count = docs.len();
// Send documents to MeiliSearch in batches of 1000
const MEILI_BATCH_SIZE: usize = 1000;
for (i, chunk) in docs.chunks(MEILI_BATCH_SIZE).enumerate() {
let batch_num = i + 1;
info!("[MEILI] Sending batch {}/{} ({} docs)", batch_num, doc_count.div_ceil(MEILI_BATCH_SIZE), chunk.len());
let response = client
.post(format!("{base}/indexes/books/documents"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&chunk)
.send()
.await
.context("failed to send docs to meili")?;
if !response.status().is_success() {
let status = response.status();
let body = response.text().await.unwrap_or_default();
return Err(anyhow::anyhow!("MeiliSearch error {}: {}", status, body));
}
}
// Clean up stale documents: remove from Meilisearch any IDs that no longer exist in DB.
// Runs on every sync — the cost is minimal (single fetch of IDs only).
{
let db_ids: Vec<String> = sqlx::query_scalar("SELECT id::text FROM books")
.fetch_all(pool)
.await?;
// Fetch all document IDs from Meilisearch (paginated to handle large collections)
let mut meili_ids: std::collections::HashSet<String> = std::collections::HashSet::new();
let mut offset: usize = 0;
const PAGE_SIZE: usize = 10000;
loop {
let response = client
.post(format!("{base}/indexes/books/documents/fetch"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&serde_json::json!({
"fields": ["id"],
"limit": PAGE_SIZE,
"offset": offset
}))
.send()
.await;
let response = match response {
Ok(r) if r.status().is_success() => r,
_ => break,
};
let payload: serde_json::Value = match response.json().await {
Ok(v) => v,
Err(_) => break,
};
let results = payload.get("results")
.and_then(|v| v.as_array())
.cloned()
.unwrap_or_default();
let page_count = results.len();
for doc in results {
if let Some(id) = doc.get("id").and_then(|v| v.as_str()) {
meili_ids.insert(id.to_string());
}
}
if page_count < PAGE_SIZE {
break; // Last page
}
offset += PAGE_SIZE;
}
let db_ids_set: std::collections::HashSet<String> = db_ids.into_iter().collect();
let to_delete: Vec<String> = meili_ids.difference(&db_ids_set).cloned().collect();
if !to_delete.is_empty() {
info!("[MEILI] Deleting {} stale documents", to_delete.len());
let _ = client
.post(format!("{base}/indexes/books/documents/delete-batch"))
.header("Authorization", format!("Bearer {meili_master_key}"))
.json(&to_delete)
.send()
.await;
}
}
// Update last sync timestamp
sqlx::query(
"INSERT INTO sync_metadata (id, last_meili_sync) VALUES (1, NOW()) ON CONFLICT (id) DO UPDATE SET last_meili_sync = NOW()"
)
.execute(pool)
.await?;
info!("[MEILI] Sync completed: {} documents indexed", doc_count);
Ok(())
}

View File

@@ -4,8 +4,6 @@ use anyhow::{Context, Result};
 pub struct ApiConfig {
     pub listen_addr: String,
     pub database_url: String,
-    pub meili_url: String,
-    pub meili_master_key: String,
     pub api_bootstrap_token: String,
 }
@@ -15,9 +13,6 @@ impl ApiConfig {
             listen_addr: std::env::var("API_LISTEN_ADDR")
                 .unwrap_or_else(|_| "0.0.0.0:7080".to_string()),
             database_url: std::env::var("DATABASE_URL").context("DATABASE_URL is required")?,
-            meili_url: std::env::var("MEILI_URL").context("MEILI_URL is required")?,
-            meili_master_key: std::env::var("MEILI_MASTER_KEY")
-                .context("MEILI_MASTER_KEY is required")?,
             api_bootstrap_token: std::env::var("API_BOOTSTRAP_TOKEN")
                 .context("API_BOOTSTRAP_TOKEN is required")?,
         })
@@ -28,8 +23,6 @@ impl ApiConfig {
 pub struct IndexerConfig {
     pub listen_addr: String,
     pub database_url: String,
-    pub meili_url: String,
-    pub meili_master_key: String,
     pub scan_interval_seconds: u64,
     pub thumbnail_config: ThumbnailConfig,
 }
@@ -85,9 +78,6 @@ impl IndexerConfig {
             listen_addr: std::env::var("INDEXER_LISTEN_ADDR")
                 .unwrap_or_else(|_| "0.0.0.0:7081".to_string()),
            database_url: std::env::var("DATABASE_URL").context("DATABASE_URL is required")?,
-            meili_url: std::env::var("MEILI_URL").context("MEILI_URL is required")?,
-            meili_master_key: std::env::var("MEILI_MASTER_KEY")
-                .context("MEILI_MASTER_KEY is required")?,
             scan_interval_seconds: std::env::var("INDEXER_SCAN_INTERVAL_SECONDS")
                 .ok()
                 .and_then(|v| v.parse::<u64>().ok())

View File

@@ -15,20 +15,6 @@ services:
       timeout: 5s
       retries: 5
 
-  meilisearch:
-    image: getmeili/meilisearch:v1.12
-    env_file:
-      - .env
-    ports:
-      - "7700:7700"
-    volumes:
-      - meili_data:/meili_data
-    healthcheck:
-      test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:7700/health"]
-      interval: 10s
-      timeout: 5s
-      retries: 5
-
   api:
     build:
       context: .
@@ -43,8 +29,6 @@ services:
     depends_on:
       postgres:
         condition: service_healthy
-      meilisearch:
-        condition: service_healthy
     healthcheck:
       test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:7080/health"]
       interval: 10s
@@ -68,8 +52,6 @@ services:
         condition: service_healthy
       postgres:
         condition: service_healthy
-      meilisearch:
-        condition: service_healthy
     healthcheck:
       test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:7081/health"]
       interval: 10s
@@ -98,4 +80,3 @@ services:
 volumes:
   postgres_data:
-  meili_data:

View File

@@ -0,0 +1,9 @@
+-- Remove Meilisearch sync tracking (search is now handled by PostgreSQL)
+DROP TABLE IF EXISTS sync_metadata;
+
+-- Enable pg_trgm for fuzzy search
+CREATE EXTENSION IF NOT EXISTS pg_trgm;
+
+-- Add trigram indexes for search performance
+CREATE INDEX IF NOT EXISTS idx_books_title_trgm ON books USING gin (title gin_trgm_ops);
+CREATE INDEX IF NOT EXISTS idx_books_series_trgm ON books USING gin (series gin_trgm_ops);
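The commit message says search is now served by PostgreSQL ILIKE backed by these trigram indexes, joining `series_metadata` for series-level authors. The actual query lives in `search.rs` and is not shown in this diff; a sketch of the shape it presumably takes (join condition copied from the old sync query, ranking and LIMIT are assumptions):

```sql
-- Hypothetical shape of the replacement search query: ILIKE matching that the
-- GIN gin_trgm_ops indexes above can accelerate, with pg_trgm similarity()
-- as a rough relevance score. $1 is the user's search term.
SELECT b.id, b.title, b.series
FROM books b
LEFT JOIN series_metadata sm
    ON sm.library_id = b.library_id
    AND sm.name = COALESCE(NULLIF(b.series, ''), 'unclassified')
WHERE b.title ILIKE '%' || $1 || '%'
   OR b.series ILIKE '%' || $1 || '%'
ORDER BY GREATEST(similarity(b.title, $1),
                  similarity(COALESCE(b.series, ''), $1)) DESC
LIMIT 50;
```

One consequence of this design worth noting: unlike the old `sync_meili` path, there is no separate index to drift out of sync with the database, which is why migration 0027 can drop `sync_metadata` and the resync endpoint entirely.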