To get going quickly, follow the Quickstart guide.
0. Setup

A log file is created in the DIAGNOSTIC_FOLDER, where all pipeline steps are logged along with raw prompt requests and responses. The pipeline begins by looking for PDFs in the data directory. Each PDF is validated to ensure its metadata is accessible.
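A minimal sketch of this setup step, assuming the folder names from the configuration; the helper setup_and_discover is hypothetical, not the project's actual entry point:

```python
import logging
from pathlib import Path

import fitz  # PyMuPDF

DIAGNOSTIC_FOLDER = Path("diagnostics")  # assumed value of DIAGNOSTIC_FOLDER
DATA_FOLDER = Path("data")

def setup_and_discover() -> list[Path]:
    """Create the log file and return the PDFs whose metadata is readable."""
    DIAGNOSTIC_FOLDER.mkdir(exist_ok=True)
    logging.basicConfig(
        filename=DIAGNOSTIC_FOLDER / "pipeline.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    valid_pdfs = []
    for pdf_path in sorted(DATA_FOLDER.glob("*.pdf")):
        try:
            with fitz.open(pdf_path) as doc:
                _ = doc.metadata  # fails here if metadata is inaccessible
            valid_pdfs.append(pdf_path)
            logging.info("Validated %s", pdf_path)
        except Exception as exc:
            logging.warning("Skipping %s: %s", pdf_path, exc)
    return valid_pdfs
```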
1. Preprocess pages

Pipeline, Step 1: Load and preprocess PDF. Shows the front, body, and back matter pages of a book.
For each PDF file, the script uses the PyMuPDF Python library to identify the front and back matter pages (front matter precedes the body, e.g. title and copyright pages; back matter follows it, e.g. the index and colophon). By default, the first 8 front matter pages are processed. Body matter is always ignored, and back matter inclusion can be configured with the MATTER_CONFIG.back settings. Processed page images are then enhanced for OCR with the PIL (Pillow) Python library. Original and enhanced images are written to the diagnostic folder when SAVE_DIAGNOSTIC_FILES=true.
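A sketch of how the rendering and enhancement might look with PyMuPDF and Pillow; the 300 dpi rendering and the grayscale/contrast/sharpen recipe are illustrative choices, not the project's exact settings:

```python
import fitz  # PyMuPDF
from PIL import Image, ImageEnhance, ImageFilter

FRONT_MATTER_PAGES = 8  # the documented default

def preprocess_pages(pdf_path: str) -> list[Image.Image]:
    """Render the first front matter pages and enhance them for OCR."""
    images = []
    with fitz.open(pdf_path) as doc:
        for page in doc.pages(0, min(FRONT_MATTER_PAGES, doc.page_count)):
            pix = page.get_pixmap(dpi=300)  # OCR-friendly resolution
            img = Image.frombytes("RGB", (pix.width, pix.height), pix.samples)
            # Enhance for OCR: grayscale, boost contrast, sharpen.
            img = img.convert("L")
            img = ImageEnhance.Contrast(img).enhance(2.0)
            img = img.filter(ImageFilter.SHARPEN)
            images.append(img)
    return images
```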
2. Run traditional OCR

Pipeline, Step 2: Identify Page Types and Run Tesseract OCR. Shows yellow text regions overlaid on the front and back matter pages, identifying the metadata fields.
Each page is identified by type (text, image, or mixed). Tesseract OCR runs on each page, and the result's quality is scored against the OCR_CONFIDENCE_THRESHOLD values. This produces raw text content but doesn't preserve document structure or hierarchy. The extracted, unstructured text files are written to the diagnostic folder when SAVE_DIAGNOSTIC_FILES=true.
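Scoring OCR quality with pytesseract could look roughly like this; the threshold value and the pass/fail return are assumptions for illustration:

```python
import pytesseract
from PIL import Image

OCR_CONFIDENCE_THRESHOLD = 60  # assumed value for illustration

def ocr_with_confidence(img: Image.Image) -> tuple[str, float, bool]:
    """Return extracted text, mean word confidence, and a pass/fail flag."""
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    words, confs = [], []
    for word, conf in zip(data["text"], data["conf"]):
        if word.strip() and float(conf) >= 0:  # Tesseract marks non-words as -1
            words.append(word)
            confs.append(float(conf))
    mean_conf = sum(confs) / len(confs) if confs else 0.0
    return " ".join(words), mean_conf, mean_conf >= OCR_CONFIDENCE_THRESHOLD
```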
3. Extract structured text

Pipeline, Step 3: Extract Structured Text with PyMuPDF4LLM. Shows the extracted text being transformed into a hierarchical structure with title, subtitle, and other metadata elements properly identified.
PyMuPDF4LLM extracts text to Markdown. Unlike Tesseract, it can differentiate between titles and subtitles by detecting font size and style nuances. Extracted Markdown files are written to the diagnostic folder when SAVE_DIAGNOSTIC_FILES=true.
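The extraction itself is essentially a one-liner with the library's to_markdown API; restricting it to the first front matter pages is an assumption based on the previous step:

```python
import pymupdf4llm

def extract_markdown(pdf_path: str, num_pages: int = 8) -> str:
    # pages= takes 0-based page numbers; headings are inferred from
    # font size and style, which is how titles and subtitles differ.
    return pymupdf4llm.to_markdown(pdf_path, pages=list(range(num_pages)))
```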
4. Analyze with a VLM

When ANALYZE_WITH_VISION=true, each page is analyzed by a Vision Language Model that holistically understands the page elements and their relationships. We use Meta's llama3.2-vision, hosted by Ollama locally, on your network, or in the cloud; see the setup and configuration instructions. Analysis text files are written to the diagnostic folder when SAVE_DIAGNOSTIC_FILES=true.
This step is in active development, and VLM results are not yet integrated into the pipeline.
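For reference, a page analysis call through the Ollama Python client might look like this; the prompt wording is an assumption, while the chat/images interface is Ollama's own:

```python
import ollama

def analyze_page(image_path: str) -> str:
    """Ask llama3.2-vision to describe the page holistically."""
    response = ollama.chat(
        model="llama3.2-vision",
        messages=[{
            "role": "user",
            "content": "Describe the bibliographic elements on this page "
                       "and how they relate to each other.",
            "images": [image_path],  # local path to the rendered page image
        }],
    )
    return response["message"]["content"]
```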
5. Compose metadata prompt

Pipeline, Step 4: Consolidate Data and Generate LLM Prompt. Shows the arrows from the extracted text regions converging into a single arrow leading to the LLM prompt and rules.
All of the extracted data is consolidated into a single prompt and sent to the OpenAI MODEL_NAME. The prompt includes detailed extraction rules, e.g. what to do when multiple editions are found.
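A hedged sketch of the consolidation; the section labels and rule text below are illustrative, not the project's actual prompt:

```python
def build_metadata_prompt(ocr_text: str, markdown_text: str) -> str:
    """Merge all extracted sources into one prompt with extraction rules."""
    return "\n\n".join([
        "Extract bibliographic metadata from the following book pages.",
        "Rules: if multiple editions are listed, prefer the most recent; "
        "return null for any field you cannot find.",
        "=== Tesseract OCR text ===",
        ocr_text,
        "=== PyMuPDF4LLM Markdown ===",
        markdown_text,
    ])
```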
6. Process metadata response

Pipeline, Step 5: Process LLM Response and Validate Metadata. Shows the structured LLM response being processed and validated.
OpenAI returns the metadata fields with confidence scores and generation rationale. This structured JSON response adheres to the schema defined in schemas.py, reducing hallucinations, alignment warnings, and preambles/postambles.
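One way to enforce such a schema is OpenAI's structured-outputs parsing with a Pydantic model; the field names below are illustrative stand-ins for whatever schemas.py actually defines:

```python
from openai import OpenAI
from pydantic import BaseModel

class BookMetadata(BaseModel):
    """Hypothetical schema; see schemas.py for the real field set."""
    title: str
    subtitle: str | None
    authors: list[str]
    publisher: str | None
    year: str | None
    confidence: float
    rationale: str

client = OpenAI()

def parse_metadata(prompt: str, model_name: str) -> BookMetadata:
    completion = client.beta.chat.completions.parse(
        model=model_name,
        messages=[{"role": "user", "content": prompt}],
        response_format=BookMetadata,  # constrains output to the schema
    )
    return completion.choices[0].message.parsed
```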
7. Compose filename prompt

The prompt for filename generation is sent to OpenAI, with detailed format constraints:
[Title], [Subtitle], ([Author]), Publisher, ([Edition]), ([Year]).pdf
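Composing that prompt might look like the following; the template string comes from above, while the surrounding instructions are assumptions:

```python
# The format template documented above; the rest of the prompt is illustrative.
FILENAME_TEMPLATE = (
    "[Title], [Subtitle], ([Author]), Publisher, ([Edition]), ([Year]).pdf"
)

def build_filename_prompt(metadata_json: str) -> str:
    return (
        "Generate a filename for this book using exactly this format:\n"
        f"{FILENAME_TEMPLATE}\n"
        "Omit any bracketed component that is missing from the metadata.\n\n"
        f"Metadata:\n{metadata_json}"
    )
```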
8. Process filename response

Pipeline, Step 6: Generate Filename with OpenAI. Shows the final LLM response with the generated filename that will be written to the file.

The generated filename in the OpenAI response is cleaned of unsafe characters and trimmed to MAX_FILENAME_LENGTH. Finally, when WRITE_PDF_CHANGES=true, the generated metadata and filename are written to the file.
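A sketch of this final step; the sanitization regex and the metadata field mapping are assumptions, while set_metadata and saveIncr are PyMuPDF's own API:

```python
import re
from pathlib import Path

import fitz  # PyMuPDF

MAX_FILENAME_LENGTH = 255  # assumed default
WRITE_PDF_CHANGES = True

def sanitize_filename(raw: str) -> str:
    """Strip filesystem-unsafe characters and trim to the length limit."""
    safe = re.sub(r'[<>:"/\\|?*\x00-\x1f]', "", raw).strip()
    if not safe.lower().endswith(".pdf"):
        safe += ".pdf"
    if len(safe) > MAX_FILENAME_LENGTH:
        safe = safe[: MAX_FILENAME_LENGTH - 4] + ".pdf"
    return safe

def write_changes(pdf_path: Path, filename: str, metadata: dict) -> None:
    """Write generated metadata into the PDF, then rename it."""
    if not WRITE_PDF_CHANGES:
        return
    doc = fitz.open(pdf_path)
    doc.set_metadata({
        "title": metadata.get("title", ""),
        "author": ", ".join(metadata.get("authors", [])),
    })
    doc.saveIncr()  # incremental save keeps the rest of the file intact
    doc.close()
    pdf_path.rename(pdf_path.with_name(sanitize_filename(filename)))
```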