---
license: apache-2.0
language:
- en
tags:
- mistral
- transformers
- text-generation
- causal-lm
- conversational
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-Instruct-v0.2
widget:
- text: Hello, how can I help you today?
- text: Please explain how to reset my password.
datasets:
- Lakshan2003/customer_service_client_agent_conversations
model-index:
- name: mistral-customer-service-full
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Lakshan2003 Customer Service Conversations
      type: Lakshan2003/customer_service_client_agent_conversations
    metrics:
    - type: bleu
      name: Validation BLEU
      value: 4.22
    - type: perplexity
      name: Validation Perplexity
      value: 8.94
---

# 🧠 Mistral-7B Customer Service Fine-tuned Model

This model is a fine-tuned version of `mistralai/Mistral-7B-Instruct-v0.2` on the **Lakshan2003 customer service client-agent conversations** dataset.

- Optimized for dialogue, polite responses, and multi-turn coherence.
- Trained with LoRA (rank=16, alpha=32, dropout=0.05) using H2O LLM Studio.
- Dataset size: ~5k samples (balanced customer ↔ agent turns).

## Example Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "n0xgg04/mistral-customer-service-full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Hi! Can you help me change my shipping address?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
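Because the base model is `mistralai/Mistral-7B-Instruct-v0.2`, multi-turn inputs should follow its `[INST] ... [/INST]` chat layout. The helper below is a minimal sketch of that convention for illustration only; in practice, prefer `tokenizer.apply_chat_template`, which applies the template and special tokens for you.

```python
def build_mistral_prompt(turns):
    """Format a list of (role, text) turns into the Mistral instruct
    chat layout: user turns are wrapped in [INST] ... [/INST], and
    assistant turns are closed with the </s> end-of-sequence token.

    A hand-rolled sketch for illustration; tokenizer.apply_chat_template
    is the supported way to do this with transformers.
    """
    parts = ["<s>"]
    for role, text in turns:
        if role == "user":
            parts.append(f"[INST] {text} [/INST]")
        else:  # assistant turn
            parts.append(f" {text}</s>")
    return "".join(parts)

print(build_mistral_prompt([
    ("user", "Hi! Can you help me change my shipping address?"),
    ("assistant", "Of course! Could you share your order number?"),
    ("user", "It's order 12345."),
]))
```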
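The training used H2O LLM Studio with LoRA (rank=16, alpha=32, dropout=0.05). For readers who want to reproduce a roughly comparable setup with Hugging Face PEFT, the equivalent adapter config would look something like the sketch below; note that `target_modules` is an assumption (typical attention projections for Mistral), not taken from the original training configuration.

```python
from peft import LoraConfig

# Approximate PEFT equivalent of the hyperparameters reported above.
lora_config = LoraConfig(
    r=16,                 # LoRA rank, as reported on this card
    lora_alpha=32,        # scaling factor, as reported
    lora_dropout=0.05,    # dropout, as reported
    # Assumed target modules -- common choice for Mistral attention layers:
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```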