---
base_model: janhq/Jan-v3-4B-base-instruct
library_name: gguf
pipeline_tag: text-generation
tags:
- gguf
- quantized
- llama-cpp
---

# Jan-v3-4B-base-instruct - GGUF

This is a quantized GGUF version of [janhq/Jan-v3-4B-base-instruct](https://huggingface.co/janhq/Jan-v3-4B-base-instruct) created using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Available Quantizations

| Filename | Quant Type | Description |
|----------|------------|-------------|
| Jan-v3-4B-base-instruct.Q2_K.gguf | Q2_K | Smallest, significant quality loss |
| Jan-v3-4B-base-instruct.Q3_K_S.gguf | Q3_K_S | Very small, low quality |
| Jan-v3-4B-base-instruct.Q3_K_M.gguf | Q3_K_M | Very small, medium quality |
| Jan-v3-4B-base-instruct.Q3_K_L.gguf | Q3_K_L | Small, better quality than Q3_K_M |
| Jan-v3-4B-base-instruct.Q4_0.gguf | Q4_0 | Small, legacy format |
| Jan-v3-4B-base-instruct.Q4_1.gguf | Q4_1 | Small, legacy format with better accuracy |
| Jan-v3-4B-base-instruct.Q4_K_S.gguf | Q4_K_S | Small, good quality |
| Jan-v3-4B-base-instruct.Q4_K_M.gguf | Q4_K_M | Medium, balanced quality - recommended |
| Jan-v3-4B-base-instruct.Q5_0.gguf | Q5_0 | Medium, legacy format |
| Jan-v3-4B-base-instruct.Q5_1.gguf | Q5_1 | Medium, legacy format with better accuracy |
| Jan-v3-4B-base-instruct.Q5_K_S.gguf | Q5_K_S | Medium, good quality |
| Jan-v3-4B-base-instruct.Q5_K_M.gguf | Q5_K_M | Medium, high quality - recommended |
| Jan-v3-4B-base-instruct.Q6_K.gguf | Q6_K | Large, very high quality |
| Jan-v3-4B-base-instruct.Q8_0.gguf | Q8_0 | Large, near-lossless quality |
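
If you only need one of these files, you can fetch it directly with the Hugging Face CLI rather than cloning the whole repository. This is a minimal sketch; it assumes the `huggingface_hub` CLI is installed and that this repo is published as `aashish1904/Jan-v3-4B-base-instruct-GGUF` (the same repo id used in the Ollama command below).

```bash
# Install the Hugging Face CLI if it is not already available
pip install -U "huggingface_hub[cli]"

# Download only the recommended Q4_K_M file into the current directory
huggingface-cli download aashish1904/Jan-v3-4B-base-instruct-GGUF \
    Jan-v3-4B-base-instruct.Q4_K_M.gguf --local-dir .
```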

## Usage

### With llama.cpp

```bash
./llama-cli -m Jan-v3-4B-base-instruct.Q4_K_M.gguf -p "Your prompt here"
```
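
For a back-and-forth chat that applies the model's built-in chat template instead of a one-shot prompt, llama-cli's conversation mode can be used. This is a sketch and assumes the same Q4_K_M file as above.

```bash
# Start an interactive chat session (-cnv enables conversation mode)
./llama-cli -m Jan-v3-4B-base-instruct.Q4_K_M.gguf -cnv
```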

### With Ollama

```bash
ollama run hf.co/aashish1904/Jan-v3-4B-base-instruct-GGUF
```
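
Ollama can also pin a specific quantization by appending its tag to the repo reference. The example below is a sketch that assumes the Q4_K_M file from the table above.

```bash
# Run a specific quantization by adding its tag to the hf.co reference
ollama run hf.co/aashish1904/Jan-v3-4B-base-instruct-GGUF:Q4_K_M
```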

## Original Model

- **Source**: [janhq/Jan-v3-4B-base-instruct](https://huggingface.co/janhq/Jan-v3-4B-base-instruct)
- **Quantized by**: GGUF Quantizer Space

---

## Original Model Card

# Jan-v3-4B-base-instruct: a 4B baseline model for fine-tuning

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/janhq/jan)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/)

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/A65FII_r3rAi9wZtK5P_v.png)

## Overview

**Jan-v3-4B-base-instruct** is a 4B-parameter model obtained via post-training distillation from a larger teacher, transferring the teacher's capabilities while preserving general-purpose performance on standard benchmarks. The result is a compact, ownable base that is straightforward to fine-tune, broadly applicable, and minimizes the usual capacity–capability trade-offs.

Building on this base, **Jan-Code**, a code-tuned variant, **will be released soon.**

## Model Overview

This repo contains the BF16 version of **Jan-v3-4B-base-instruct**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4B in total
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: **262,144 tokens natively**

**Intended Use**

* A better small base for downstream work: improved instruction following out of the box, a strong starting point for fine-tuning, and effective lightweight coding assistance.

## Performance

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/IGuQdKZ0_IGIwL0Wkcasi.png)

## Quick Start

### Integration with Jan Apps

A Jan-v3 demo is hosted on **Jan Browser** at **[chat.jan.ai](https://chat.jan.ai/)**. The model is also optimized for direct integration with [Jan Desktop](https://jan.ai/); select the model in the app to start using it.

### Local Deployment

**Using vLLM:**
```bash
vllm serve janhq/Jan-v3-4B-base-instruct \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
```
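
The vLLM command above exposes an OpenAI-compatible server on port 1234. As a sketch of the tool-calling path enabled by `--enable-auto-tool-choice` and `--tool-call-parser hermes`, the request below defines a hypothetical `get_weather` function and lets the model decide whether to call it (the function name and schema are illustrative, not part of the model card).

```bash
curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "janhq/Jan-v3-4B-base-instruct",
      "messages": [{"role": "user", "content": "What is the weather in Hanoi right now?"}],
      "tools": [{
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
          }
        }
      }]
    }'
```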

**Using llama.cpp:**
```bash
llama-server --model Jan-v3-4B-base-instruct-Q8_0.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift
```

### Recommended Parameters

For optimal performance in agentic and general tasks, we recommend the following inference parameters:

```yaml
temperature: 0.7
top_p: 0.8
top_k: 20
```
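
As a sketch of how these values can be applied per request against either server started above (assuming it is listening on port 1234), they can be passed directly in the request body; note that `top_k` is an extension accepted by vLLM and llama.cpp's server rather than a standard OpenAI field.

```bash
curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "janhq/Jan-v3-4B-base-instruct",
      "messages": [{"role": "user", "content": "Write a haiku about fine-tuning."}],
      "temperature": 0.7,
      "top_p": 0.8,
      "top_k": 20
    }'
```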

## 🤝 Community & Support

- **Discussions**: [Hugging Face Community](https://huggingface.co/janhq/Jan-v2-VL-8B/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation

```bibtex
Updated Soon
```