Jan-v3-4B-base-instruct - GGUF

This is a quantized GGUF version of janhq/Jan-v3-4B-base-instruct created using llama.cpp.

Available Quantizations

| Filename | Quant Type | Description |
|----------|------------|-------------|
| Jan-v3-4B-base-instruct.Q2_K.gguf | Q2_K | Smallest, significant quality loss |
| Jan-v3-4B-base-instruct.Q3_K_S.gguf | Q3_K_S | Very small, low quality |
| Jan-v3-4B-base-instruct.Q3_K_M.gguf | Q3_K_M | Very small, medium quality |
| Jan-v3-4B-base-instruct.Q3_K_L.gguf | Q3_K_L | Small, better quality than Q3_K_M |
| Jan-v3-4B-base-instruct.Q4_0.gguf | Q4_0 | Small, legacy format |
| Jan-v3-4B-base-instruct.Q4_1.gguf | Q4_1 | Small, legacy format with better accuracy |
| Jan-v3-4B-base-instruct.Q4_K_S.gguf | Q4_K_S | Small, good quality |
| Jan-v3-4B-base-instruct.Q4_K_M.gguf | Q4_K_M | Medium, balanced quality - recommended |
| Jan-v3-4B-base-instruct.Q5_0.gguf | Q5_0 | Medium, legacy format |
| Jan-v3-4B-base-instruct.Q5_1.gguf | Q5_1 | Medium, legacy format with better accuracy |
| Jan-v3-4B-base-instruct.Q5_K_S.gguf | Q5_K_S | Medium, good quality |
| Jan-v3-4B-base-instruct.Q5_K_M.gguf | Q5_K_M | Medium, high quality - recommended |
| Jan-v3-4B-base-instruct.Q6_K.gguf | Q6_K | Large, very high quality |
| Jan-v3-4B-base-instruct.Q8_0.gguf | Q8_0 | Large, near-lossless quality |
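To pick a quantization for your hardware, file size can be estimated from the quantization's approximate bits per weight. A minimal sketch; the bits-per-weight figures below are rough assumptions for llama.cpp k-quants, not exact values (real files also carry metadata and mixed-precision tensors):

```python
# Rough GGUF size estimate: params * bits-per-weight / 8.
# The bits-per-weight values are approximate assumptions, not
# figures from llama.cpp itself.
APPROX_BPW = {
    "Q2_K": 2.6,
    "Q3_K_M": 3.9,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
}

def approx_size_gib(n_params: float, quant: str) -> float:
    """Estimated file size in GiB for a model with n_params weights."""
    bits = n_params * APPROX_BPW[quant]
    return bits / 8 / (1024 ** 3)

for q in APPROX_BPW:
    print(f"{q}: ~{approx_size_gib(4e9, q):.1f} GiB")
```

For this 4B model, Q4_K_M lands around 2.2 GiB by this estimate, which is why it is the usual balanced recommendation.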

Usage

With llama.cpp

./llama-cli -m Jan-v3-4B-base-instruct.Q4_K_M.gguf -p "Your prompt here"

With Ollama

ollama run hf.co/aashish1904/Jan-v3-4B-base-instruct-GGUF

Original Model: janhq/Jan-v3-4B-base-instruct
Original Model Card

Jan-v3-4B-base-instruct: a 4B baseline model for fine-tuning


Overview

Jan-v3-4B-base-instruct is a 4B-parameter model obtained via post-training distillation from a larger teacher, transferring the teacher's capabilities while preserving general-purpose performance on standard benchmarks. The result is a compact, ownable base that is straightforward to fine-tune and broadly applicable, minimizing the usual capacity–capability trade-offs.

Building on this base, Jan-Code, a code-tuned variant, will be released soon.

Model Overview

This repo contains the BF16 version of Jan-v3-4B-base-instruct, which has the following features:

  • Type: Causal Language Model
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 4B in total
  • Number of Layers: 36
  • Number of Attention Heads (GQA): 32 for Q and 8 for KV
  • Context Length: 262,144 natively.
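The figures above let you estimate KV-cache memory, which dominates at long contexts. A minimal sketch; the head dimension of 128 is an assumption (typical for Qwen3-family models) and is not stated in the card:

```python
# KV-cache sizing from the specs above: 36 layers, 8 KV heads (GQA).
N_LAYERS = 36
N_KV_HEADS = 8
HEAD_DIM = 128          # assumed, not stated in the model card
BYTES_PER_VALUE = 2     # fp16/bf16 cache

# Both K and V are stored per layer, per token:
kv_bytes_per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE
print(kv_bytes_per_token)        # 147456 bytes = 144 KiB per token

# At the full native context of 262,144 tokens:
full_ctx_gib = kv_bytes_per_token * 262_144 / (1024 ** 3)
print(full_ctx_gib)              # 36.0 GiB
```

Under these assumptions, a full 262,144-token context needs about 36 GiB of KV cache at fp16, so long-context use typically calls for a shorter context window or a quantized cache.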

Intended Use

  • A better small base for downstream work: improved instruction following out of the box, strong starting point for fine-tuning, and effective lightweight coding assistance.

Performance


Quick Start

Integration with Jan Apps

A Jan-v3 demo is hosted on Jan Browser at chat.jan.ai. The model is also optimized for direct integration with Jan Desktop: select it in the app to start using it.

Local Deployment

Using vLLM:

vllm serve janhq/Jan-v3-4B-base-instruct \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes

Using llama.cpp:

llama-server --model Jan-v3-4B-base-instruct-Q8_0.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift

Recommended Parameters

For optimal performance in agentic and general tasks, we recommend the following inference parameters:

temperature: 0.7
top_p: 0.8
top_k: 20
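Both server commands above expose an OpenAI-compatible API on port 1234, so the recommended parameters can be passed in a standard chat-completion request. A minimal stdlib-only sketch; the endpoint path and the `top_k` extension field are how llama-server and vLLM commonly accept these options, but check your server version:

```python
import json
import urllib.request

# Chat-completion payload using the recommended sampling parameters.
payload = {
    "model": "janhq/Jan-v3-4B-base-instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,  # non-standard OpenAI field, accepted as an extension
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment once one of the servers above is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```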

🀝 Community & Support

πŸ“„ Citation

Updated Soon