English-Document-OCR-Qwen3.5-2B

I built this model as part of my ongoing work in document digitization and archival OCR. My goal was to create a small, locally-runnable model that punches above its weight class, and I'm happy to say it does: despite being only 2B parameters, it outperforms several larger frontier models on text extraction from complex document layouts, including dense multi-column newsprint, historical serif typefaces, and degraded archival scans.

This is the first release. I'll be sharing updated versions with broader language coverage and improved layout handling soon. If you try it on your documents, I'd love to hear how it performs; feel free to leave feedback in the Community tab.

License: This model is intended for personal and research use only. If you want to use this model in a product or service, or need to process documents commercially, contact ocr@loay.net.


Model Details

  • Fine-tuned by: loay
  • Base Model: unsloth/Qwen3.5-2B
  • Task: Document OCR
  • Training Data: 8,000 synthetic English document images with ground-truth Markdown transcriptions, featuring faded ink, bleed-through artifacts, skewed layouts, and historical serif typefaces
  • Output Format: Markdown text preserving paragraph flow and layout structure
  • Language Support: Optimized for English and other left-to-right (LTR) scripts. See my other fine-tuned OCR models for right-to-left document OCR.

Usage

The model does not require a specific prompt; it will perform OCR on any document image by default. For best results, and to prevent conversational hallucinations, use the exact instruction the model was fine-tuned on:

Extract all text from this document image and output it in markdown format.
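For use outside of GGUF runtimes, the prompt above pairs with the image in a chat-template message. The sketch below shows the message structure assuming a Qwen-VL-style processor; the commented class and repo names are assumptions, not confirmed APIs for this release.

```python
# Build the chat message pairing one document image with the exact training
# prompt. This message structure follows the Qwen-VL family convention and is
# an assumption for this model.

OCR_PROMPT = "Extract all text from this document image and output it in markdown format."

def build_ocr_messages(image_path: str) -> list:
    """Return a chat-template message list: one user turn with image + prompt."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": OCR_PROMPT},
            ],
        }
    ]

messages = build_ocr_messages("your_document.jpg")

# With transformers (names assumed, verify against the model files):
#   processor = AutoProcessor.from_pretrained("loay/English-Document-OCR-Qwen3.5-2B")
#   text = processor.apply_chat_template(messages, add_generation_prompt=True)
```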


GGUF & Local Inference

Quantized GGUF files are available for use with llama.cpp, LM Studio, Ollama, and similar runtimes.

You must load mmproj-english-document-ocr-qwen3.5-2b-f16.gguf alongside your chosen weight file. Without the multimodal projector, the model cannot process images.

| File | Use Case |
|------|----------|
| english-document-ocr-qwen3.5-2b-f16.gguf | Full precision, maximum accuracy |
| english-document-ocr-qwen3.5-2b-q8_0.gguf | Best quality/size tradeoff for OCR precision |
| english-document-ocr-qwen3.5-2b-q6_k.gguf | High quality, lower VRAM |
| english-document-ocr-qwen3.5-2b-q5_k_m.gguf | Balanced quality and speed |
| english-document-ocr-qwen3.5-2b-q4_k_m.gguf | Fast, efficient local inference |
| mmproj-english-document-ocr-qwen3.5-2b-f16.gguf | Required multimodal projector (load with any weight above) |

Example with llama.cpp:

llama-mtmd-cli \
  --model english-document-ocr-qwen3.5-2b-q4_k_m.gguf \
  --mmproj mmproj-english-document-ocr-qwen3.5-2b-f16.gguf \
  --image your_document.jpg \
  -p "Extract all text from this document image and output it in markdown format."
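For multi-page documents, the same invocation can be repeated per page image. Below is a hypothetical batch helper that only builds the argv lists; the binary name (`llama-mtmd-cli`, the multimodal CLI in recent llama.cpp builds) is an assumption you should adjust to match your install.

```python
# Hypothetical batch helper: build one llama.cpp command per page image.
# Run each command with subprocess.run and collect stdout per page.
from pathlib import Path

PROMPT = "Extract all text from this document image and output it in markdown format."

def build_ocr_command(image: str,
                      model: str = "english-document-ocr-qwen3.5-2b-q4_k_m.gguf",
                      mmproj: str = "mmproj-english-document-ocr-qwen3.5-2b-f16.gguf") -> list:
    """Return the argv list for one OCR call; binary/flag names are assumptions."""
    return ["llama-mtmd-cli", "--model", model, "--mmproj", mmproj,
            "--image", image, "-p", PROMPT]

# One command per page, in page order:
commands = [build_ocr_command(str(p)) for p in sorted(Path("scans").glob("*.jpg"))]
```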

Limitations

  • Trained exclusively on synthetic data; accuracy may degrade on severe real-world scan artifacts outside the training distribution.
  • No dedicated handwriting support; cursive and marginalia fall back to the base model's zero-shot ability.
  • Does not extract mathematical formulas, charts, or scientific figures.
  • Optimized for LTR Latin scripts. For Arabic/RTL documents, see my OCR models.
  • May hallucinate or break on very long context from dense pages. If your document is text-heavy, consider splitting it into sections before inference.
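For the last point, a simple way to split a tall page is into overlapping horizontal bands so each OCR call sees a shorter context. This is a hypothetical pre-processing sketch; the band height and overlap values are illustrative, not tuned for this model.

```python
# Compute overlapping vertical crop ranges for a tall page image. Crop each
# (top, bottom) band with Pillow, OCR it separately, then join the markdown
# outputs in order. Band height and overlap are illustrative defaults.

def split_bands(page_height: int, band_height: int = 1200, overlap: int = 100):
    """Return (top, bottom) pixel ranges covering the page, with overlap between bands."""
    if page_height <= band_height:
        return [(0, page_height)]
    bands, top = [], 0
    step = band_height - overlap
    while top + band_height < page_height:
        bands.append((top, top + band_height))
        top += step
    bands.append((top, page_height))  # final band always reaches the page bottom
    return bands

bands = split_bands(3000)  # -> [(0, 1200), (1100, 2300), (2200, 3000)]
```

The overlap keeps lines that straddle a cut from being lost; duplicated lines in the overlap region need to be deduplicated when joining outputs.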