Meta + Ollama
Open-source Vision-Language Models for holistic understanding of complex layouts
Work in Progress: The vision analysis pipeline feature is in active development and this guide is still in draft form.
Why VLMs?
A vision-language model (VLM) fuses a vision encoder with a language model so that it can holistically understand and interpret images. VLMs are emerging as a powerful tool for document analysis, particularly for complex layouts.
Cloud model providers like OpenAI charge roughly $0.50 per image, which is prohibitive for batch processing. Fortunately, open-source alternatives provide a free, increasingly viable option you can run locally.
How To Run a VLM
Two components are required:
- A backend application or service: we’ve tested with the excellent open-source Ollama. Another great option is LM Studio.
- An open-source vision-language model: we’ve tested with Meta’s Llama 3.2 Vision, a state-of-the-art open-source VLM. However, any Ollama-compatible VLM will work.
Set up Ollama Backend Locally
Running Ollama locally is the easiest way to get started, and it has the advantage of no API costs and full privacy. Ollama runs headless by default via a CLI, but it can also be managed from the browser through community web UIs.
Install Ollama on macOS with Homebrew:
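```bash
# Install the Ollama CLI and server
brew install ollama
```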
Start the Ollama server:
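```bash
# Start the Ollama server; it listens on http://localhost:11434 by default
ollama serve
```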
You can transition to a more powerful machine on your local network or in the cloud once you get up and running. Ollama has a wealth of community integrations.
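If Ollama runs on a different machine than the pipeline, it needs to listen on more than the loopback interface. Ollama reads its bind address from the OLLAMA_HOST environment variable:

```bash
# Default bind is 127.0.0.1; bind to all interfaces so other machines
# on the network can reach the server on port 11434
OLLAMA_HOST=0.0.0.0 ollama serve
```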
Verify Ollama Setup
Confirm Ollama is running correctly on whichever machine you’ve installed it on.
Test the endpoint from a machine on the same network:
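In the commands below, ollama-host is a placeholder for the machine running Ollama; use localhost if everything runs on one box:

```bash
# The root endpoint returns "Ollama is running" when the server is up
curl http://ollama-host:11434/

# /api/tags lists the models installed on that server
curl http://ollama-host:11434/api/tags
```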
Download the VLM
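First, pull the model on the machine running Ollama. The llama3.2-vision tag defaults to the 11B variant; a 90B variant is also available if your hardware can handle it:

```bash
# Download Llama 3.2 Vision (11B) from the Ollama model library
ollama pull llama3.2-vision
```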
To verify the model works across your network, run this test command from the machine that will execute the pipeline:
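A minimal sketch using Ollama’s /api/generate endpoint; ollama-host and page.png are placeholders for your Ollama machine and a sample page image:

```bash
# Base64-encode a sample page and ask the model to describe it.
# A successful, non-streamed JSON response confirms the VLM is working.
IMG=$(base64 < page.png | tr -d '\n')
curl http://ollama-host:11434/api/generate -d "{
  \"model\": \"llama3.2-vision\",
  \"prompt\": \"Describe the layout of this page.\",
  \"images\": [\"$IMG\"],
  \"stream\": false
}"
```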
Configuration
The following settings control the Ollama-backed vision analysis:
- Enable or disable Vision Language Model analysis for complex layouts and image-heavy pages.
- The Vision Language Model to use. Llama 3.2 Vision is recommended for its balance of performance and accuracy.
- The port for the local Ollama API (11434 by default).
- The timeout in seconds for Ollama requests. Vision processing is more compute-intensive than text processing.
- The maximum number of retry attempts for Vision Language Model requests.
- The delay between retry attempts, in seconds.
- The full endpoint URL for the Ollama API.
- A configuration object controlling when Vision Language Model analysis is triggered.
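As a sanity check before enabling the feature, confirm that the endpoint and model you have configured actually respond. The variable names below are illustrative placeholders, not real configuration keys:

```bash
# Illustrative values mirroring the settings above
OLLAMA_ENDPOINT="http://localhost:11434"   # endpoint URL
VLM_MODEL="llama3.2-vision"                # model name
TIMEOUT=120                                # request timeout in seconds

# Succeeds only if the endpoint is reachable and the model is installed there
curl --fail --max-time "$TIMEOUT" "$OLLAMA_ENDPOINT/api/tags" | grep -q "$VLM_MODEL" \
  && echo "Ollama reachable and $VLM_MODEL installed" \
  || echo "Check the endpoint and model settings"
```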
FAQ
What are the hardware requirements for running a VLM?
Vision Language Models are computationally intensive, so choose a model size appropriate for your hardware.
GPU
- Recommended: NVIDIA RTX 30-series or better, or Apple Silicon M-series
- Performance: GPU acceleration provides 10x or better performance compared to CPU-only inference
- VRAM: Model size determines VRAM requirements
Note that a model’s parameter count does not map directly to gigabytes of VRAM, which is a common source of confusion.
A rough guide:
- 8 GB minimum for 7B parameter models
- 16 GB for 13B parameter models
- 32 GB for 33B parameter models
Storage
- Disk Space: ~10GB for the Llama 3.2 Vision model
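Ollama can report both figures directly. These are standard Ollama CLI commands:

```bash
# Show installed models and their size on disk
ollama list

# Show loaded models, their memory footprint, and whether they are
# running on the GPU or falling back to CPU
ollama ps
```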
References
- Ollama: https://ollama.com
- Other VLMs specifically for OCR to try:
- CogVLM
- Lucid_Vision
- MiniGPT-v2 and MiniGPT-4
- MMOCR
- Open VLM Leaderboard
- MLX-VLM (for Apple Silicon Macs)
- Getting Started with State-of-the-Art VLM Using the Swarms API
- Excellent article on OCR with VLM and LM Studio