Instructions to use vsevolodl/prometheus-8x7b-v2.0-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="vsevolodl/prometheus-8x7b-v2.0-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("vsevolodl/prometheus-8x7b-v2.0-GGUF", dtype="auto")
```

- llama-cpp-python
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="vsevolodl/prometheus-8x7b-v2.0-GGUF",
    filename="prometheus-8x7b-v2.0.Q2_K.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "vsevolodl/prometheus-8x7b-v2.0-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vsevolodl/prometheus-8x7b-v2.0-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker
```sh
docker model run hf.co/vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```
- SGLang
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "vsevolodl/prometheus-8x7b-v2.0-GGUF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vsevolodl/prometheus-8x7b-v2.0-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "vsevolodl/prometheus-8x7b-v2.0-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vsevolodl/prometheus-8x7b-v2.0-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Ollama
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with Ollama:
```sh
ollama run hf.co/vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```
- Unsloth Studio
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for vsevolodl/prometheus-8x7b-v2.0-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for vsevolodl/prometheus-8x7b-v2.0-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for vsevolodl/prometheus-8x7b-v2.0-GGUF to start chatting
```
- Docker Model Runner
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with Docker Model Runner:
```sh
docker model run hf.co/vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```
- Lemonade
How to use vsevolodl/prometheus-8x7b-v2.0-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull vsevolodl/prometheus-8x7b-v2.0-GGUF:Q4_K_M
```

Run and chat with the model

```sh
lemonade run user.prometheus-8x7b-v2.0-GGUF-Q4_K_M
```

List all available models

```sh
lemonade list
```
Links for Reference
- Quants for: https://huggingface.co/prometheus-eval/prometheus-8x7b-v2.0
- Homepage: In Progress
- Repository: https://github.com/prometheus-eval/prometheus-eval
- Paper: https://arxiv.org/abs/2405.01535
- Point of Contact: seungone@cmu.edu
TL;DR
Prometheus 2 is an alternative to GPT-4 for fine-grained evaluation of an underlying LLM, and serves as a reward model for Reinforcement Learning from Human Feedback (RLHF).
Prometheus 2 is a language model that uses Mistral-Instruct as its base model. It is fine-tuned on 100K feedback instances from the Feedback Collection and 200K from the Preference Collection. It is also built with weight merging to support both absolute grading (direct assessment) and relative grading (pairwise ranking). Surprisingly, we find that weight merging also improves performance on each format individually.
Model Details
Model Description
- Model type: Language model
- Language(s) (NLP): English
- License: Apache 2.0
- Related Models: All Prometheus Checkpoints
- Resources for more information:
Prometheus is trained at two different sizes (7B and 8x7B). You can find the 7B-sized LM on this page. Also check out our datasets on this page and this page.
Prompt Format
We provide wrapper functions and classes for conveniently using Prometheus 2 in our GitHub repository. We highly recommend using them!
However, if you just want to use the model directly for your use case, refer to the prompt format below. Note that absolute grading and relative grading require different prompt templates and system prompts.
Absolute Grading (Direct Assessment)
Prometheus requires four components in the input: an instruction, a response to evaluate, a score rubric, and a reference answer. Refer to the prompt format below. You should fill in the instruction, response, reference answer, criteria description, and a score description for each score in the range of 1 to 5.
Fill in the components marked with {text}.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{orig_instruction}

###Response to evaluate:
{orig_response}

###Reference Answer (Score 5):
{orig_reference_answer}

###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}

###Feedback:
```
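The placeholders above can be filled programmatically with ordinary `str.format` substitution. The helper below is a hypothetical sketch (it is not part of the official prometheus-eval package), and it abbreviates the template to its placeholder lines for brevity:

```python
# Hypothetical sketch: fill the absolute-grading placeholders with str.format.
# The field names mirror the {orig_*} slots in the template above; the task
# description and rubric score lines are shortened here for brevity.
ABS_PROMPT = (
    "###The instruction to evaluate:\n{orig_instruction}\n\n"
    "###Response to evaluate:\n{orig_response}\n\n"
    "###Reference Answer (Score 5):\n{orig_reference_answer}\n\n"
    "###Score Rubrics:\n[{orig_criteria}]\n\n"
    "###Feedback:"
)

def build_abs_prompt(instruction: str, response: str,
                     reference_answer: str, criteria: str) -> str:
    """Return the filled absolute-grading prompt body."""
    return ABS_PROMPT.format(
        orig_instruction=instruction,
        orig_response=response,
        orig_reference_answer=reference_answer,
        orig_criteria=criteria,
    )
```

In practice you would use the full template verbatim, including all five score descriptions.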
After this, you should apply the Mistral conversation template (not applying it might lead to unexpected behavior). You can find the conversation class at this link.
```python
# The conversation template comes from FastChat:
from fastchat.conversation import get_conv_template

conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.")
conv.append_message(conv.roles[0], dialogs['instruction'])
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

x = tokenizer(prompt, truncation=False)
```
As a result, a feedback and a score decision will be generated, separated by the phrase [RESULT].
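Since the completion is split by the [RESULT] marker, the score can be recovered by partitioning on it. A minimal sketch (the function name is ours, not from the official repository):

```python
def parse_absolute_judgment(output: str) -> tuple[str, int]:
    """Split a Prometheus completion into (feedback, score).

    Assumes the model followed the instructed format:
    "Feedback: ... [RESULT] <integer between 1 and 5>".
    """
    feedback, sep, result = output.rpartition("[RESULT]")
    if not sep:
        raise ValueError("no [RESULT] marker found in output")
    return feedback.strip(), int(result.strip())
```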
Relative Grading (Pairwise Ranking)
Prometheus requires four components in the input: an instruction, two responses to evaluate, a score rubric, and a reference answer. Refer to the prompt format below. You should fill in the instruction, the two responses, the reference answer, and the criteria description.
Fill in the components marked with {text}.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of two responses strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, choose a better response between Response A and Response B. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (A or B)"
4. Please do not generate any other opening, closing, and explanations.

###Instruction:
{orig_instruction}

###Response A:
{orig_response_A}

###Response B:
{orig_response_B}

###Reference Answer:
{orig_reference_answer}

###Score Rubric:
{orig_criteria}

###Feedback:
```
After this, you should apply the Mistral conversation template (not applying it might lead to unexpected behavior). You can find the conversation class at this link.
```python
# The conversation template comes from FastChat:
from fastchat.conversation import get_conv_template

conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant assigned to deliver insightful feedback that compares individual performances, highlighting how each stands relative to others within the same cohort.")
conv.append_message(conv.roles[0], dialogs['instruction'])
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

x = tokenizer(prompt, truncation=False)
```
As a result, a feedback and a verdict will be generated, separated by the phrase [RESULT].
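The verdict can also be recovered by partitioning on the [RESULT] marker; this hypothetical helper (not from the official repository) additionally validates that the verdict is A or B:

```python
def parse_relative_judgment(output: str) -> tuple[str, str]:
    """Split a Prometheus completion into (feedback, verdict).

    Assumes the instructed format "Feedback: ... [RESULT] (A or B)";
    raises ValueError if the marker or a valid verdict is missing.
    """
    feedback, sep, result = output.rpartition("[RESULT]")
    verdict = result.strip()
    if not sep or verdict not in ("A", "B"):
        raise ValueError(f"unexpected output format: {output!r}")
    return feedback.strip(), verdict
```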
License
Feedback Collection, Preference Collection, and Prometheus 2 are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
Citation
If you find the following model helpful, please consider citing our paper!
BibTeX:
```bibtex
@misc{kim2023prometheus,
    title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
    author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
    year={2023},
    eprint={2310.08491},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

@misc{kim2024prometheus,
    title={Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models},
    author={Seungone Kim and Juyoung Suk and Shayne Longpre and Bill Yuchen Lin and Jamin Shin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},
    year={2024},
    eprint={2405.01535},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.