# 🧠 Mistral-7B Customer Service Fine-tuned Model
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2, trained on the Lakshan2003 dataset of customer service client-agent conversations.
- Optimized for dialogue, polite responses, and multi-turn coherence.
- Trained using LoRA (rank=16, alpha=32, dropout=0.05) with H2O LLM Studio; see the configuration sketch after this list.
- Dataset size: ~5k samples (balanced customer ↔ agent turns).
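
For reference, an equivalent LoRA setup expressed with the PEFT library would look roughly like the sketch below. The hyperparameters come from the list above; `target_modules` is an assumption, since H2O LLM Studio selects the adapted layers internally.

```python
from peft import LoraConfig

# LoRA hyperparameters as reported above. target_modules is an
# assumption -- H2O LLM Studio manages layer selection itself.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```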
## Example Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "n0xgg04/mistral-customer-service-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Mistral-Instruct models expect the [INST] chat format; apply_chat_template adds it.
messages = [{"role": "user", "content": "Hi! Can you help me change my shipping address?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
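
For quick experimentation, the same model can also be loaded through the high-level `pipeline` API. This is a minimal sketch: passing a list of chat messages lets the pipeline apply the chat template automatically.

```python
from transformers import pipeline

# Sketch: quick-start via the text-generation pipeline.
chat = pipeline(
    "text-generation",
    model="n0xgg04/mistral-customer-service-full",
    torch_dtype="bfloat16",
    device_map="auto",
)
reply = chat([{"role": "user", "content": "Hi! Can you help me change my shipping address?"}],
             max_new_tokens=150)
print(reply[0]["generated_text"][-1]["content"])
```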
## Evaluation Results

| Metric | Dataset | Value (self-reported) |
|---|---|---|
| Validation BLEU | Lakshan2003 Customer Service Conversations | 4.220 |
| Validation Perplexity | Lakshan2003 Customer Service Conversations | 8.940 |
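
Validation perplexity can be approximated with a short script like the one below. This is a minimal sketch: the exact validation split and tokenization used during training are not published here, so treat it as illustrative rather than as the evaluation procedure behind the reported numbers.

```python
import math
import torch

# Sketch: perplexity = exp(mean token-level cross-entropy) on held-out text.
# `model` and `tokenizer` are assumed to be loaded as in the usage example above.
def perplexity(model, tokenizer, text: str) -> float:
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss  # mean cross-entropy
    return math.exp(loss.item())
```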