To get going quickly, follow the Quickstart guide.
0

Setup

A log file is created in the DIAGNOSTIC_FOLDER, where all pipeline steps are logged along with raw prompt requests and responses. The pipeline begins by looking for PDFs in the data directory. Each PDF is validated to ensure its metadata is accessible.
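A minimal sketch of this setup, assuming DIAGNOSTIC_FOLDER and the data directory come from the environment (the variable names and validation details here are illustrative, not the project's exact code):

```python
import logging
import os
from pathlib import Path

import fitz  # PyMuPDF

DIAGNOSTIC_FOLDER = Path(os.environ.get("DIAGNOSTIC_FOLDER", "diagnostics"))
DATA_DIR = Path(os.environ.get("DATA_DIR", "data"))

# All pipeline steps, plus raw prompt requests/responses, go to one log file.
DIAGNOSTIC_FOLDER.mkdir(parents=True, exist_ok=True)
logging.basicConfig(
    filename=DIAGNOSTIC_FOLDER / "pipeline.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def find_valid_pdfs(data_dir: Path) -> list[Path]:
    """Collect PDFs and confirm their metadata is readable."""
    valid = []
    for pdf_path in sorted(data_dir.glob("*.pdf")):
        try:
            with fitz.open(pdf_path) as doc:
                _ = doc.metadata  # fails on corrupt or locked files
            valid.append(pdf_path)
        except Exception as exc:
            logging.warning("Skipping %s: %s", pdf_path, exc)
    return valid
```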
1

Preprocess pages

[Image: Pipeline, Step 1: Load and preprocess PDF. The front, body, and back matter pages of a book.]
For each PDF file, the script uses the PyMuPDF Python library to identify the front and back matter pages. (What are matter pages?) By default, the first 8 front matter pages are processed. Body matter is always ignored, and back matter inclusion can be configured with the MATTER_CONFIG.back settings. Processed page images are then enhanced for OCR with the PIL (Pillow) Python library. Original and enhanced images are written to the diagnostic folder when SAVE_DIAG_PER_PAGE_IMG=true.
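A sketch of the render-and-enhance loop, assuming PyMuPDF for rendering and a grayscale-plus-contrast recipe in Pillow (the exact enhancement settings are an assumption):

```python
import fitz  # PyMuPDF
from PIL import Image, ImageEnhance, ImageOps

FRONT_MATTER_PAGES = 8  # default front matter window

def preprocess_pages(pdf_path: str) -> list[Image.Image]:
    images = []
    with fitz.open(pdf_path) as doc:
        for page in doc.pages(0, min(FRONT_MATTER_PAGES, doc.page_count)):
            pix = page.get_pixmap(dpi=300)  # render at an OCR-friendly resolution
            img = Image.frombytes("RGB", (pix.width, pix.height), pix.samples)
            img = ImageOps.grayscale(img)                  # drop color noise
            img = ImageEnhance.Contrast(img).enhance(2.0)  # sharpen text edges
            images.append(img)
    return images
```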
2

Run traditional OCR

[Image: Pipeline, Step 2: Identify Page Types and Run Tesseract OCR. Yellow text regions overlaid on the front and back matter pages identify the metadata fields.]
Each page is identified by type (text, image, or mixed). Tesseract OCR runs on each page, and output quality is scored against the OCR_CONFIDENCE_THRESHOLD setting. This produces raw text content but doesn't preserve document structure or hierarchy. The extracted, unstructured text files are written to the diagnostic folder when SAVE_DIAG_TXT_PER_PG=true.
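A sketch of the scoring step, assuming pytesseract; reading OCR_CONFIDENCE_THRESHOLD from the environment is an illustrative stand-in for the project's config:

```python
import os

import pytesseract
from PIL import Image

OCR_CONFIDENCE_THRESHOLD = float(os.environ.get("OCR_CONFIDENCE_THRESHOLD", "60"))

def ocr_with_confidence(img: Image.Image) -> tuple[str, float]:
    """Run Tesseract and return the text plus its mean word confidence."""
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    confs = [float(c) for c in data["conf"] if float(c) >= 0]  # -1 marks non-word boxes
    mean_conf = sum(confs) / len(confs) if confs else 0.0
    return pytesseract.image_to_string(img), mean_conf
```

Pages scoring below the threshold can then be flagged or re-enhanced before moving on.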
3

Extract structured text

[Image: Pipeline, Step 3: Extract Structured Text with PyMuPDF4LLM. The extracted text is transformed into a hierarchical structure with title, subtitle, and other metadata elements properly identified.]
PyMuPDF4LLM extracts text to Markdown. Unlike Tesseract, it can differentiate between titles and subtitles by detecting font size and style nuances. Extracted Markdown files are written to the diagnostic folder when SAVE_DIAG_TXT_PER_PG=true.
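A minimal sketch using PyMuPDF4LLM's to_markdown entry point, restricted to the same front matter window (the page-window helper is illustrative):

```python
import fitz  # PyMuPDF
import pymupdf4llm

def extract_markdown(pdf_path: str, max_pages: int = 8) -> str:
    # pages takes 0-based page numbers; headings are inferred from font size/style
    with fitz.open(pdf_path) as doc:
        page_numbers = list(range(min(max_pages, doc.page_count)))
    return pymupdf4llm.to_markdown(pdf_path, pages=page_numbers)
```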
4

Analyze with a VLM

When ANALYZE_WITH_VISION=true, the page is analyzed by a Vision Language Model that holistically understands the page elements and their relationships. We use Meta's llama3.2-vision, hosted by Ollama locally, on your network, or in the cloud; see the setup and configuration instructions. Analysis text files are written to the diagnostic folder when SAVE_DIAG_TXT_PER_PG=true.
This step is in active development, and VLM results are not yet integrated into the pipeline.
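As a sketch of what the call looks like with the ollama Python client (the prompt wording is illustrative, and recall the results are not yet used downstream):

```python
import ollama

def analyze_page(image_path: str) -> str:
    response = ollama.chat(
        model="llama3.2-vision",
        messages=[{
            "role": "user",
            "content": "Describe the bibliographic elements on this page "
                       "(title, subtitle, author, publisher, edition, year).",
            "images": [image_path],  # page image rendered in the preprocess step
        }],
    )
    return response["message"]["content"]
```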
5

Compose metadata prompt

[Image: Pipeline, Step 4: Consolidate Data and Generate LLM Prompt. Arrows from the extracted text regions converge into one arrow leading to the LLM prompt and rules.]
All of the extracted data is consolidated into a single prompt and sent to the active provider defined by SELECTED_MODEL_NAME. The prompt includes detailed extraction rules, e.g., what to do when multiple editions are found.
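Conceptually, the consolidation looks like the sketch below; the section labels and rule text are assumptions standing in for the real prompt template:

```python
def compose_metadata_prompt(ocr_text: str, markdown_text: str, rules: str) -> str:
    return "\n\n".join([
        "Extract bibliographic metadata from the following book matter pages.",
        "Extraction rules:\n" + rules,  # e.g. how to choose among multiple editions
        "Raw OCR text:\n" + ocr_text,
        "Structured Markdown:\n" + markdown_text,
        "Respond with JSON matching the provided schema.",
    ])
```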
6

Process metadata response

[Image: Pipeline, Step 5: Process LLM Response and Validate Metadata. The structured LLM response is processed and validated.]
OpenAI returns the metadata fields with confidence scores and generation rationale. This structured JSON response adheres to our defined schema, reducing hallucinations, alignment warnings, and preambles/postambles.
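One way to enforce such a schema is a Pydantic model like the sketch below; the field names mirror the filename template and are illustrative, not the project's exact schema:

```python
import json

from pydantic import BaseModel, Field

class MetadataField(BaseModel):
    value: str
    confidence: float = Field(ge=0.0, le=1.0)  # model's confidence score
    rationale: str                             # generation rationale

class BookMetadata(BaseModel):
    title: MetadataField
    subtitle: MetadataField | None = None
    author: MetadataField
    publisher: MetadataField
    edition: MetadataField | None = None
    year: MetadataField

def parse_metadata_response(raw: str) -> BookMetadata:
    # Raises pydantic.ValidationError if the LLM strays from the schema.
    return BookMetadata.model_validate(json.loads(raw))
```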
7

Compose filename prompt

The prompt for filename generation is sent to OpenAI, with detailed format constraints:
[Title], [Subtitle], ([Author]), Publisher, ([Edition]), ([Year]).pdf
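A sketch of how the constraint might be embedded in the prompt (the wording is an assumption; only the template above comes from the pipeline):

```python
FILENAME_TEMPLATE = (
    "[Title], [Subtitle], ([Author]), Publisher, ([Edition]), ([Year]).pdf"
)

def compose_filename_prompt(metadata_json: str) -> str:
    return (
        "Generate a filename for this book using exactly this format:\n"
        f"{FILENAME_TEMPLATE}\n"
        "Omit bracketed parts whose fields are missing; return only the filename.\n\n"
        f"Metadata:\n{metadata_json}"
    )
```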
8

Process filename response

[Image: Pipeline, Step 6: Generate Filename with OpenAI. The final LLM response with the generated filename to be written to the file.]
The generated filename in the OpenAI response is cleaned of unsafe characters and trimmed to MAX_FILENAME_LENGTH. Finally, when WRITE_PDF_CHANGES=true, the generated metadata and filename are written to the file.
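A sketch of that final write, assuming a conventional unsafe-character set and PyMuPDF's incremental save (both are assumptions about the implementation):

```python
import os
import re

import fitz  # PyMuPDF

MAX_FILENAME_LENGTH = int(os.environ.get("MAX_FILENAME_LENGTH", "150"))
WRITE_PDF_CHANGES = os.environ.get("WRITE_PDF_CHANGES", "false") == "true"

def clean_filename(name: str) -> str:
    name = re.sub(r'[\\/:*?"<>|]', "", name)  # strip filesystem-unsafe characters
    stem, ext = os.path.splitext(name)
    return stem[: MAX_FILENAME_LENGTH - len(ext)] + ext  # trim, keep the .pdf suffix

def write_changes(pdf_path: str, new_name: str, metadata: dict) -> None:
    if not WRITE_PDF_CHANGES:
        return
    doc = fitz.open(pdf_path)
    doc.set_metadata({"title": metadata["title"], "author": metadata["author"]})
    doc.saveIncr()  # write metadata back into the same file
    doc.close()
    os.rename(pdf_path, os.path.join(os.path.dirname(pdf_path), clean_filename(new_name)))
```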

Annotation Export

When we pass --with-annot-export to the main CLI, the processor writes UTF-8 sidecar files beside every annotated PDF. The exporter names the markdown file after the cleaned destination filename, swaps .pdf for --ann.md, and overwrites on every run so repeated passes stay deterministic. Sidecars live next to the PDFs so downstream sync tooling can collect them without new path logic. We also generate <filename>--ann.json, which captures the geometry, colors, and metadata needed to reconstruct the annotations later.

Sometimes we want annotation text without running the full metadata flow. Run the standalone helper below to walk a directory and create sidecars that mirror the original filenames. The command accepts a single PDF path or a directory of PDFs, matching file names case-insensitively.
uv run src/scripts/export_annotations.py --verbose-term /Users/<username>/Desktop/path
Both paths rely on extract_annotations_complex so they capture highlights, strikeouts, and freeform notes exactly as PyMuPDF exposes them. The JSON sidecar keeps placement and styling data so we can replay annotations onto a matching PDF.
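A minimal sketch of what the sidecar writer does with PyMuPDF; the record layout is an assumption based on the description above, and extract_annotations_complex itself is the project's helper, not shown here:

```python
import json
from pathlib import Path

import fitz  # PyMuPDF

def export_annotation_sidecars(pdf_path: Path) -> None:
    records, lines = [], []
    with fitz.open(pdf_path) as doc:
        for page in doc:
            for annot in page.annots():
                text = annot.info.get("content", "")
                records.append({
                    "page": page.number,
                    "type": annot.type[1],     # e.g. "Highlight", "StrikeOut"
                    "rect": list(annot.rect),  # geometry for later replay
                    "colors": annot.colors,    # stroke/fill styling
                    "text": text,
                })
                if text:
                    lines.append(f"- p.{page.number + 1}: {text}")
    base = str(pdf_path.with_suffix(""))
    Path(base + "--ann.md").write_text("\n".join(lines), encoding="utf-8")
    Path(base + "--ann.json").write_text(json.dumps(records, indent=2), encoding="utf-8")
```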