Tags: Text Generation · Transformers · Safetensors · English · llama · text-generation-inference
How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="chargoddard/llama-2-34b-uncode")
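
The pipeline can then be called directly on a prompt. A minimal sketch; the prompt and generation settings below are illustrative, not taken from the model card:

# Illustrative call; prompt and generation settings are arbitrary examples
result = pipe("def fibonacci(n):", max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
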
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("chargoddard/llama-2-34b-uncode")
model = AutoModelForCausalLM.from_pretrained("chargoddard/llama-2-34b-uncode")
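
With the model and tokenizer loaded directly, text is generated through model.generate. A minimal sketch; the prompt and generation parameters are illustrative assumptions:

# Illustrative generation; prompt and max_new_tokens are arbitrary examples
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))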

A very work-in-progress experiment.

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric               Value
Avg.                 36.2
ARC (25-shot)        39.51
HellaSwag (10-shot)  33.9
MMLU (5-shot)        38.49
TruthfulQA (0-shot)  40.94
Winogrande (5-shot)  74.35
GSM8K (5-shot)       20.77
DROP (3-shot)        5.43
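
The reported average appears to be the unweighted mean of the seven benchmark scores; a quick check, assuming simple unweighted averaging over the listed metrics:

# Check that the listed scores average to the reported 36.2 (assumes an unweighted mean)
scores = [39.51, 33.9, 38.49, 40.94, 74.35, 20.77, 5.43]
print(round(sum(scores) / len(scores), 2))  # 36.2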
Downloads last month: 250 · Safetensors · Model size: 34B params · Tensor type: F16