ANAH: Analytical Annotation of Hallucinations in Large Language Models
Paper • 2405.20315 • Published
How to use opencompass/anah-7b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="opencompass/anah-7b", trust_remote_code=True)
```

```python
# Load the model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "opencompass/anah-7b", trust_remote_code=True, dtype="auto"
)
```

This page holds the InternLM2-7B model trained with the ANAH dataset. It is fine-tuned to annotate hallucinations in LLM responses.
For more information, please refer to our project page.
You must follow the prompt given in our paper to annotate hallucinations.
The model follows the conversation format of InternLM2-chat, with the template protocol:

```python
dict(role='user', begin='<|im_start|>user\n', end='<|im_end|>\n'),
dict(role='assistant', begin='<|im_start|>assistant\n', end='<|im_end|>\n'),
```
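As a concrete illustration, the template above can be applied by hand to build a generation-ready prompt. This is a minimal sketch: the `build_prompt` helper is not part of the released code, and the placeholder text below stands in for the actual annotation prompt, which must be taken from the paper.

```python
# InternLM2-chat turn markers, matching the template protocol above.
TEMPLATE = {
    "user": ("<|im_start|>user\n", "<|im_end|>\n"),
    "assistant": ("<|im_start|>assistant\n", "<|im_end|>\n"),
}

def build_prompt(user_text: str) -> str:
    """Wrap a single user turn and open the assistant turn for generation.

    Hypothetical helper for illustration; the real annotation instruction
    comes from the ANAH paper, not from this sketch.
    """
    u_begin, u_end = TEMPLATE["user"]
    a_begin, _ = TEMPLATE["assistant"]
    return f"{u_begin}{user_text}{u_end}{a_begin}"

# Placeholder: substitute the annotation prompt specified in the paper.
prompt = build_prompt("<ANAH annotation prompt from the paper goes here>")
```

The resulting string ends with the opened assistant turn, so the model's continuation is the annotation itself.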
If you find this project useful in your research, please consider citing:
```
@article{ji2024anah,
  title={ANAH: Analytical Annotation of Hallucinations in Large Language Models},
  author={Ji, Ziwei and Gu, Yuzhe and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
  journal={arXiv preprint arXiv:2405.20315},
  year={2024}
}
```
Code: The source code for training and evaluating this model can be found at https://github.com/open-compass/ANAH