Chunkr | Open Source Document Intelligence API

Production-ready service for document layout analysis, OCR, and semantic chunking.
Convert PDFs, PPTs, Word docs & images into RAG/LLM-ready chunks.

Layout Analysis | OCR + Bounding Boxes | Structured HTML & Markdown | Vision-Language Model Processing

👉 Note: The open-source AGPL version is **different** from our fully managed Cloud API. The open-source release uses community/open-source models, while the Cloud API runs **proprietary in-house models** for higher accuracy, speed, and enterprise reliability.

Try it out · Report Bug · Contact · Discord · Ask DeepWiki

Open Source vs Cloud API vs Enterprise

| Feature | Open Source Repo (good) | Cloud API - chunkr.ai (best) | Enterprise |
|---|---|---|---|
| Perfect for | Development & testing | Production workloads | Large-scale / high-security |
| Layout Analysis | Open-source models | Proprietary in-house models | In-house + custom-tuned |
| OCR Accuracy | Community OCR engines | Optimized OCR stack | Optimized + domain-tuned |
| VLM Processing | Basic open VLMs | Enhanced proprietary VLMs | Custom fine-tunes |
| Excel Support | ❌ | ✅ Native parser | ✅ Native parser |
| Document Types | PDF, PPT, Word, Images | PDF, PPT, Word, Images, Excel | PDF, PPT, Word, Images, Excel |
| Infrastructure | Self-hosted | Fully managed cloud | Managed / on-prem |
| Support | Discord community | Dedicated support | Dedicated founding team |
| Migration Support | Community-driven | Docs + email | Dedicated migration team |

The open-source release is ideal if you want transparency, local hosting, or to experiment with Chunkr’s pipeline.
For best performance, production reliability, and access to in-house models, we recommend the Chunkr Cloud API.
For high-security or regulated industries, our Enterprise edition offers on-prem or VPC deployments.

Quick Start with Docker Compose

  1. Prerequisites: Docker and Docker Compose installed. For GPU deployment you also need an NVIDIA GPU with recent drivers and the NVIDIA Container Toolkit.

  2. Clone the repo:

git clone https://github.com/lumina-ai-inc/chunkr
cd chunkr

  3. Set up environment variables:
# Copy the example environment file
cp .env.example .env

# Configure your LLM models
cp models.example.yaml models.yaml

For more information on how to set up LLMs, see the LLM Configuration section below.

  4. Start the services:
# For GPU deployment:
docker compose up -d

# For CPU-only deployment:
docker compose -f compose.yaml -f compose.cpu.yaml up -d

# For Mac ARM architecture (M1, M2, M3, etc.):
docker compose -f compose.yaml -f compose.cpu.yaml -f compose.mac.yaml up -d
  5. Access the services:

    • Web UI: http://localhost:5173
    • API: http://localhost:8000
  6. Stop the services when done:

# For GPU deployment:
docker compose down

# For CPU-only deployment:
docker compose -f compose.yaml -f compose.cpu.yaml down

# For Mac ARM architecture (M1, M2, M3, etc.):
docker compose -f compose.yaml -f compose.cpu.yaml -f compose.mac.yaml down
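
Once the stack is up, you can exercise the API directly from code. The sketch below uses Python's requests library against the local API; the /api/v1/task endpoint path and the multipart "file" field name are assumptions based on common REST conventions, so check your instance's API docs before relying on them.

# Minimal sketch: create a document-parsing task on a local Chunkr instance.
# Assumption: the task endpoint is /api/v1/task and accepts a multipart
# "file" field -- verify against the API docs for your version.
import requests

API_URL = "http://localhost:8000/api/v1/task"

with open("document.pdf", "rb") as f:
    resp = requests.post(API_URL, files={"file": f})

resp.raise_for_status()
task = resp.json()
print(task)  # task id and status; poll until processing completes, then read the chunks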

LLM Configuration

Chunkr supports two ways to configure LLMs:

  1. models.yaml file: Advanced configuration for multiple LLMs with additional options
  2. Environment variables: Simple configuration for a single LLM

Using models.yaml (Recommended)

For more flexible configuration with multiple models, default/fallback options, and rate limits:

  1. Copy the example file to create your configuration:
cp models.example.yaml models.yaml
  2. Edit the models.yaml file with your configuration. Example:
models:
  - id: gpt-4o
    model: gpt-4o
    provider_url: https://api.openai.com/v1/chat/completions
    api_key: "your_openai_api_key_here"
    default: true
    rate-limit: 200 # requests per minute - optional

Benefits of using models.yaml:

  • Configure multiple LLM providers simultaneously
  • Set default and fallback models
  • Add distributed rate limits per model
  • Reference models by ID in API requests (see docs for more info)

Read the models.example.yaml file for more information on the available options.
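
As an illustration of referencing models by ID: once an entry exists in models.yaml, a request can name it by its id instead of a provider-specific model string. The payload shape below ("file_url", "llm_processing", "model_id") is a hypothetical sketch, not the confirmed schema; see the API docs for the real field names.

# Hypothetical sketch: pick a models.yaml entry by its id when creating a task.
# All JSON field names here are illustrative assumptions -- consult the docs.
import requests

payload = {
    "file_url": "https://example.com/report.pdf",  # assumed input field
    "llm_processing": {"model_id": "gpt-4o"},      # id defined in models.yaml
}
resp = requests.post("http://localhost:8000/api/v1/task", json=payload)
resp.raise_for_status()
print(resp.json())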

Using environment variables (Basic)

You can use any OpenAI API-compatible endpoint by setting the following variables in your .env file (example values shown; replace with your own):

LLM__KEY: your_api_key_here
LLM__MODEL: gpt-4o
LLM__URL: https://api.openai.com/v1/chat/completions

Common LLM API Providers

Below is a table of common LLM providers and their configuration details to get you started:

| Provider | API URL | Documentation |
|---|---|---|
| OpenAI | https://api.openai.com/v1/chat/completions | OpenAI Docs |
| Google AI Studio | https://generativelanguage.googleapis.com/v1beta/openai/chat/completions | Google AI Docs |
| OpenRouter | https://openrouter.ai/api/v1/chat/completions | OpenRouter Models |
| Self-Hosted | http://localhost:8000/v1 | vLLM or Ollama |
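
Before wiring a provider into Chunkr, it can help to confirm that the endpoint, key, and model name work together. The check below sends a one-message chat completion to the OpenAI URL from the table; swap in any other OpenAI-compatible URL and model from your provider.

# Connectivity check for an OpenAI-compatible chat completions endpoint.
# Replace the URL, key, and model with the provider you plan to configure.
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer your_api_key_here"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])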

Licensing

The core of this project is dual-licensed:

  1. GNU Affero General Public License v3.0 (AGPL-3.0)
  2. Commercial License

To use Chunkr without complying with the AGPL-3.0 license terms, you can contact us or visit our website.

Connect With Us