How to use QuantFactory/diffullama-GGUF with Transformers:

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("QuantFactory/diffullama-GGUF", dtype="auto")
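Note that Transformers loads GGUF checkpoints by dequantizing them and typically needs the specific .gguf file named via gguf_file (with the gguf package installed). A minimal generation sketch under those assumptions, reusing the Q2_K filename from the llama-cpp-python example below; since diffullama is a diffusion language model, plain autoregressive generate() may not match the decoding described in the DiffuLLaMA repository:

# Sketch: load one GGUF file with Transformers (dequantized on load).
# Assumes the `gguf` package is installed and that the checkpoint can be
# driven as a causal LM for a quick smoke test.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "QuantFactory/diffullama-GGUF"
gguf_file = "diffullama.Q2_K.gguf"

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file, dtype="auto")

inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))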
How to use QuantFactory/diffullama-GGUF with llama-cpp-python:

# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/diffullama-GGUF",
    filename="diffullama.Q2_K.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
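The repository ships several quantization levels (listed at the end of this card); to trade memory for quality, point filename at a different .gguf file. A sketch, assuming the files follow the same diffullama.<QUANT>.gguf naming as the Q2_K file above (verify the exact filename in the repository's file list):

# Sketch: pick a larger quant for better quality at higher memory cost.
# The Q4_K_M filename is assumed from the naming pattern, not confirmed.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/diffullama-GGUF",
    filename="diffullama.Q4_K_M.gguf",
)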
How to use QuantFactory/diffullama-GGUF with llama.cpp:
# Install with Homebrew (macOS/Linux):
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/diffullama-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/diffullama-GGUF:Q4_K_M
# Install with winget (Windows):
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/diffullama-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/diffullama-GGUF:Q4_K_M
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/diffullama-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/diffullama-GGUF:Q4_K_M
# Build from source:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/diffullama-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/diffullama-GGUF:Q4_K_M
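However it is installed, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal sketch querying the completions endpoint from Python, assuming the default host/port and the requests package:

# Sketch: query a running llama-server via its OpenAI-compatible API.
# Assumes llama-server was started as shown above on the default port 8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={"prompt": "Once upon a time,", "max_tokens": 64},
)
print(resp.json()["choices"][0]["text"])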
How to use QuantFactory/diffullama-GGUF with Ollama:
ollama run hf.co/QuantFactory/diffullama-GGUF:Q4_K_M
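Ollama also serves a local HTTP API (port 11434 by default), so the model can be called programmatically once pulled. A minimal sketch, assuming a running Ollama daemon and the requests package:

# Sketch: call the model through Ollama's local REST API.
# Assumes `ollama run hf.co/QuantFactory/diffullama-GGUF:Q4_K_M` already pulled it.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/QuantFactory/diffullama-GGUF:Q4_K_M",
        "prompt": "Once upon a time,",
        "stream": False,
    },
)
print(resp.json()["response"])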
How to use QuantFactory/diffullama-GGUF with Unsloth Studio:
# Install Unsloth (Linux/macOS):
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/diffullama-GGUF to start chatting
# Install Unsloth (Windows PowerShell):
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/diffullama-GGUF to start chatting
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/diffullama-GGUF to start chatting
How to use QuantFactory/diffullama-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/diffullama-GGUF:Q4_K_M
How to use QuantFactory/diffullama-GGUF with Lemonade:
# Download Lemonade from https://lemonade-server.ai/, then pull the model:
lemonade pull QuantFactory/diffullama-GGUF:Q4_K_M

# Run the model:
lemonade run user.diffullama-GGUF-Q4_K_M

# List installed models:
lemonade list
This is a quantized version of diffusionfamily/diffullama, created using llama.cpp.
This model is a fine-tuned version of Llama 2 (meta-llama/Llama-2-7b-hf).
Details and model-loading instructions can be found at https://github.com/HKUNLP/DiffuLLaMA.
@misc{gong2024scalingdiffusionlanguagemodels,
title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
year={2024},
eprint={2410.17891},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.17891},
}
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Base model: meta-llama/Llama-2-7b-hf