# Adapter for Mistral 7B Instruct (v0.3) - Fine-tuned with PEFT & QLoRA
This repository contains adapter weights fine-tuned on top of the `unsloth/mistral-7b-instruct-v0.3-bnb-4bit` base model using PEFT (Parameter-Efficient Fine-Tuning) and QLoRA (Quantized Low-Rank Adaptation).
## What is this adapter?
This adapter provides a lightweight fine-tuning update that can be applied to the base Mistral 7B model. Instead of saving the full model, only the delta weights (adapter weights) are saved, drastically reducing storage size while still improving model performance for your specific task.
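For context, below is a sketch of the kind of PEFT configuration typically used for QLoRA fine-tuning. The hyperparameter values here are illustrative assumptions only; the values actually used for this adapter are recorded in `adapter_config.json`.

```python
from peft import LoraConfig

# Hypothetical LoRA hyperparameters, shown for illustration.
# The real configuration for this adapter lives in adapter_config.json.
lora_config = LoraConfig(
    r=16,               # rank of the low-rank update matrices
    lora_alpha=32,      # scaling factor applied to the update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```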
## Files included
- `adapter_config.json`: configuration file describing the adapter setup.
- `adapter_model.safetensors`: the fine-tuned adapter weights, stored in the safetensors format.
Other training files such as optimizer states, schedulers, and trainer states are excluded as they are not needed for inference.
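If you want to inspect the adapter setup before downloading the weights, `peft` can read `adapter_config.json` directly from the Hub. A minimal sketch (the repository ID is a placeholder):

```python
from peft import PeftConfig

# Fetch only the adapter configuration, not the weights
config = PeftConfig.from_pretrained("YOUR_HF_USERNAME/REPO_NAME")
print(config.peft_type)                 # e.g. LORA
print(config.base_model_name_or_path)   # the expected base model
```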
## How to use this adapter
You need to load the base model first, then apply the adapter on top of it as follows:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the 4-bit quantized base model from the Hugging Face Hub
base_model_id = "unsloth/mistral-7b-instruct-v0.3-bnb-4bit"
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")

# Load the adapter weights from this repository
adapter_path = "YOUR_HF_USERNAME/REPO_NAME"  # e.g. "alfadani2/adapter-mistral-v03-7b"
model = PeftModel.from_pretrained(base_model, adapter_path)

# `model` is now ready for inference or further fine-tuning
```
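Once the adapter is applied, generation works as with any other `transformers` model. A minimal inference sketch, assuming the base model's tokenizer and chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
messages = [{"role": "user", "content": "Explain what a LoRA adapter is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```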
## Model tree for alfadani2/adapter-mistral-v03-7b

- Base model: mistralai/Mistral-7B-v0.3
- Fine-tuned from: mistralai/Mistral-7B-Instruct-v0.3