---
library_name: mlc-llm
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
- lightblue/DeepSeek-R1-Distill-Qwen-1.5B-Multilingual
tags:
- mlc-llm
- web-llm
- Multilingual
- DeepSeek-R1
---

# DeepSeek-R1-Distill-Qwen-1.5B-Multilingual-q4f16_1-MLC

This is the [DeepSeek-R1-Distill-Qwen-1.5B-Multilingual](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-1.5B-Multilingual) model converted to MLC format with quantization `q4f16_1`.
The model can be used with [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).

Thank you, mitulagr2, for creating the custom wasm for DeepSeek-R1-Distill-Qwen-1.5B-q4f16_1; the official MLC-LLM prebuilt libraries do not support Distill-Qwen-1.5B-q4f16_1.

WASM: https://huggingface.co/mitulagr2/DeepSeek-R1-Distill-Qwen-1.5B-q4f16_1-MLC/tree/main

## Example Usage

Here are some examples of using this model with MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).

### Chat

In the command line, run
```bash
mlc_llm chat HF://willopcbeta/DeepSeek-R1-Distill-Qwen-1.5B-Multilingual-q4f16_1-MLC
```

### REST Server

In the command line, run
```bash
mlc_llm serve HF://willopcbeta/DeepSeek-R1-Distill-Qwen-1.5B-Multilingual-q4f16_1-MLC
```

### Python API

```python
from mlc_llm import MLCEngine

# Create the engine
model = "HF://willopcbeta/DeepSeek-R1-Distill-Qwen-1.5B-Multilingual-q4f16_1-MLC"
engine = MLCEngine(model)

# Stream a chat completion through the OpenAI-compatible API
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")

engine.terminate()
```
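
### WebLLM

The card above points at a custom wasm because the official WebLLM prebuilt libraries do not cover this model. A minimal sketch of how such a model might be registered in WebLLM via a custom `appConfig` is shown below; the `model_id` is arbitrary, and the `model_lib` wasm filename is a placeholder (pick the actual `.wasm` file from the WASM repo linked above).

```javascript
// Sketch: wiring this model's weights and the custom wasm into a WebLLM appConfig.
// The model_lib filename is a placeholder, not the actual file name.
const appConfig = {
  model_list: [
    {
      // Weights in MLC format (this repo)
      model: "https://huggingface.co/willopcbeta/DeepSeek-R1-Distill-Qwen-1.5B-Multilingual-q4f16_1-MLC",
      model_id: "DeepSeek-R1-Distill-Qwen-1.5B-Multilingual-q4f16_1-MLC",
      // Custom wasm from mitulagr2's repo (replace <model-lib>.wasm with the real file)
      model_lib:
        "https://huggingface.co/mitulagr2/DeepSeek-R1-Distill-Qwen-1.5B-q4f16_1-MLC/resolve/main/<model-lib>.wasm",
    },
  ],
};

// In a browser app, the engine would then be created along these lines:
// import { CreateMLCEngine } from "@mlc-ai/web-llm";
// const engine = await CreateMLCEngine(appConfig.model_list[0].model_id, { appConfig });

console.log(appConfig.model_list[0].model_id);
```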

## Documentation

For more information on the MLC LLM project, please visit the [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).