🧠 Mistral-7B Customer Service Fine-tuned Model

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2, trained on the Lakshan2003 dataset of customer service client-agent conversations.

  • Optimized for dialogue, polite responses, and multi-turn coherence.
  • Trained using LoRA (rank=16, alpha=32, dropout=0.05) with H2O LLM Studio; an equivalent peft configuration is sketched after this list.
  • Dataset size: ~5k samples (balanced customer ↔ agent turns).
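The reported adapter hyperparameters map directly onto a peft LoraConfig. A minimal sketch is below; the rank, alpha, and dropout values come from this card, but the target modules are an assumption (the card only states that H2O LLM Studio was used), so treat them as illustrative rather than the exact training setup.

from peft import LoraConfig

# r / lora_alpha / lora_dropout match the values reported on this card;
# target_modules are assumed (typical Mistral attention projections),
# not confirmed by the training setup.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)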

Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "n0xgg04/mistral-customer-service-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mistral-Instruct checkpoints expect the chat template rather than a raw prompt.
messages = [{"role": "user", "content": "Hi! Can you help me change my shipping address?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
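Recent transformers releases also accept chat messages directly in the text-generation pipeline. A minimal alternative sketch (the generation settings here are illustrative, not tuned values from this card):

from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="n0xgg04/mistral-customer-service-full",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hi! Can you help me change my shipping address?"}]
result = pipe(messages, max_new_tokens=150)
print(result[0]["generated_text"][-1]["content"])  # assistant reply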
Model details

  • Format: Safetensors
  • Model size: 7B params
  • Tensor type: BF16

Model tree for n0xgg04/mistral-customer-service-full

  • Base model: mistralai/Mistral-7B-Instruct-v0.2 → this model (fine-tuned)

Dataset used to train n0xgg04/mistral-customer-service-full

  • Lakshan2003 customer service client-agent conversations (~5k samples)
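The training data can presumably be pulled with the datasets library. The repository id below is a guess based on the attribution above; verify the exact name on the Hugging Face Hub before use.

from datasets import load_dataset

# Hypothetical repo id -- check the actual dataset name on the Hub.
ds = load_dataset("Lakshan2003/customer-service-conversations")
print(ds)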

Evaluation results

  • Validation BLEU on Lakshan2003 Customer Service Conversations: 4.220 (self-reported)
  • Validation perplexity on Lakshan2003 Customer Service Conversations: 8.940 (self-reported)
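Both figures are self-reported, and the validation split and decoding settings are not documented. A rough sketch of how comparable metrics could be computed with the evaluate library, using placeholder texts:

import evaluate

# Placeholder prediction/reference pairs; real ones would come from
# the (undocumented) validation split of the dataset.
bleu = evaluate.load("bleu")
predictions = ["Sure, I can update your shipping address right away."]
references = [["Of course, I can help you change your shipping address."]]
print(bleu.compute(predictions=predictions, references=references))

# Perplexity of the fine-tuned model on sample text.
perplexity = evaluate.load("perplexity", module_type="metric")
print(perplexity.compute(
    model_id="n0xgg04/mistral-customer-service-full",
    predictions=["Hi! Can you help me change my shipping address?"],
))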