Training dataset: lavita/ChatDoctor-HealthCareMagic-100k
How to use jb10231/MedLLaMA-3.2-3B-LabReport with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "jb10231/MedLLaMA-3.2-3B-LabReport")

This is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct, trained on medical Q&A data to answer patient queries about lab reports and health conditions.
This model is for educational and informational purposes only. It is NOT a substitute for professional medical advice, diagnosis, or treatment. Always consult a qualified healthcare provider for medical decisions.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the base model in 4-bit to reduce memory usage
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Attach the LoRA adapter and load the matching tokenizer
model = PeftModel.from_pretrained(base_model, "jb10231/MedLLaMA-3.2-3B-LabReport")
tokenizer = AutoTokenizer.from_pretrained("jb10231/MedLLaMA-3.2-3B-LabReport")
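Once the adapter and tokenizer are loaded, the model can be queried through the Llama 3.2 chat template. Below is a minimal inference sketch; the system prompt and the helper names `build_messages` and `ask_about_lab_report` are illustrative assumptions, not part of the model card.

```python
def build_messages(question):
    # Llama 3.2 Instruct expects a chat-message list; the tokenizer's
    # chat template turns it into the model's prompt string.
    # The system prompt here is an illustrative assumption.
    return [
        {"role": "system", "content": "You explain lab reports in plain language."},
        {"role": "user", "content": question},
    ]

def ask_about_lab_report(model, tokenizer, question, max_new_tokens=256):
    # Format the conversation, run greedy generation, and return
    # only the newly generated tokens as text.
    input_ids = tokenizer.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Example call, using the `model` and `tokenizer` objects from the snippet above: `ask_about_lab_report(model, tokenizer, "My ALT is 80 U/L. What does that mean?")`.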
Base model: meta-llama/Llama-3.2-3B-Instruct