Model Card for ShAIkespear/Phi-2_DPO_M3_Quantized_Alt_8bit
A quantized (8-bit) LoRA-finetuned variant of microsoft/phi-2 targeting STEM multiple-choice question answering (MCQA). The model was first trained with SFT on mixed STEM MCQA datasets, then aligned via DPO using human preference data (EPFL exam MCQAs). Finally, it was quantized to 8-bit to reduce memory requirements at inference time.
Model Details
Model Description
This model adapts Phi-2 (2.78B parameters, 2,048-token context) for MCQA, with a focus on STEM. Training used LoRA adapters (rank=16, α=16, dropout=0.05) and the TRL library for SFT and DPO; published checkpoints contain the LoRA adapter weights for compactness. An 8-bit quantized deployment configuration (BitsAndBytes) is provided.
- Developed by: ShAIkespear team
- Shared by: ShAIkespear team
- Model type: Causal decoder-only LM (Phi-2) with LoRA adapters; DPO-aligned MCQA assistant
- Language(s) (NLP): English (training/eval datasets are primarily EN)
- License: MIT (per repository)
- Finetuned from model: microsoft/phi-2
Model Sources
- Repository: 2.8B-Phi-2-LLM-QA
- Report: “ShAIkespear - How to replace TAs: A comprehensive study on letting LLMs answer your questions”
Uses
Direct Use
- MCQA answering for STEM and general knowledge benchmarks (e.g., MMLU, OpenBookQA).
- Educational assistants/tutors for multiple-choice reasoning with short chain-of-thought style explanations in prompts.
Out-of-Scope Use
- High-stakes domains (medical, legal, safety-critical) without human oversight.
- Generative tasks outside the MCQA prompt format (e.g., long-form proofs or open-ended reasoning), where the model may underperform.
- Any use that violates exam integrity or leaks copyrighted/confidential test content (see ethics notes).
Bias, Risks, and Limitations
- STEM difficulty: Performance on math/science MCQA hovers near random chance (~0.25 accuracy for four-option questions) on several test sets, indicating limited reliability for harder STEM reasoning.
- Alignment drift: applying DPO after SFT can degrade strict letter-only answer formatting; the model sometimes generates extra content or follow-up questions.
- Data risk: EPFL exam-derived prompts/answers may raise confidentiality and fairness concerns if reused exam content is included in the training data.
Recommendations
- Keep a human in the loop for grading/teaching.
- Prefer balanced MCQA data; include explicit “Question / Explanation / Answer” formatting to stabilize outputs.
- Apply filtering/guardrails to block harmful or exam-integrity-breaking prompts.
How to Get Started with the Model
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ShAIkespear/Phi-2_DPO_M3_Quantized_Alt_8bit"  # replace with your Hub ID

# Load the model in 8-bit via BitsAndBytes to keep memory usage low.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=bnb_config,
)

# Prompts follow the "Question / Explanation / Answer" header used in training.
prompt = "### Question: What is 2+2?\n### Explanation: Add the integers.\n### Answer:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
Training Details
Training Data
- SFT data: MathQA, OpenBookQA, ScienceQA, TAL-SCQ5K, plus balanced/shuffled merged MCQA sets.
- DPO data: HelpSteer and a student-curated EPFL preference dataset (~20–30k pairs; subsets used for SFT and DPO).
- Filtering: items longer than 512 tokens dropped; large datasets clipped to 20k samples.
- Splits: train 50%, test_overfit 25%, test_comparison 10%, test_quantization 15%.
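A minimal sketch of how this filtering and splitting could be reproduced with the Hugging Face datasets library; the data file name, the choice of measuring length on the question text, and the slicing logic are assumptions, not the project's code.

from datasets import load_dataset
from transformers import AutoTokenizer

# Hypothetical merged MCQA corpus; replace with the actual merged dataset.
ds = load_dataset("json", data_files="merged_mcqa.json", split="train")

# Drop items longer than 512 tokens (length measured here on the question text).
tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
ds = ds.filter(lambda ex: len(tok(ex["question"]).input_ids) <= 512)

# Clip large datasets to 20k samples.
ds = ds.select(range(min(len(ds), 20_000)))

# Splits: train 50%, test_overfit 25%, test_comparison 10%, test_quantization 15%.
n = len(ds)
splits = {
    "train":             ds.select(range(0, int(0.50 * n))),
    "test_overfit":      ds.select(range(int(0.50 * n), int(0.75 * n))),
    "test_comparison":   ds.select(range(int(0.75 * n), int(0.85 * n))),
    "test_quantization": ds.select(range(int(0.85 * n), n)),
}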
Training Procedure
Preprocessing
All data were normalized to a unified MCQA schema. SFT examples use the fields id, subject, question, answer/answer_text, and choices; DPO examples use prompt, chosen, and rejected. Prompts used a structured header:
### Question ... ### Explanation ... ### Answer
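A minimal sketch of rendering an SFT example into this header, assuming four lettered choices and an optional explanation field; the helper name is illustrative, not the project's code.

def format_sft_example(ex):
    # ex follows the SFT schema: id, subject, question, answer/answer_text, choices.
    options = "\n".join(f"{letter}. {text}"
                        for letter, text in zip("ABCD", ex["choices"]))
    return (
        f"### Question: {ex['question']}\n{options}\n"
        f"### Explanation: {ex.get('explanation', '')}\n"  # explanation field assumed optional
        f"### Answer: {ex['answer']}"
    )

# DPO examples instead pair one shared prompt with a preferred and a dispreferred completion:
# {"prompt": "### Question: ...", "chosen": "### Answer: B", "rejected": "### Answer: C"}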
Training Hyperparameters
- Regime: Mixed precision typical for TRL (not explicitly specified); LoRA rank 16, α 16, dropout 0.05.
- Batch sizes: SFT train/eval = 4; DPO = 1 (reduced to avoid out-of-memory errors).
- LR: 1e-5 for public datasets; 1e-4 for EPFL data; cosine schedule with warmup.
- Frameworks: Hugging Face TRL + PEFT LoRA (see the configuration sketch below).
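A minimal sketch of the corresponding PEFT/TRL setup, assuming a recent trl version that provides SFTConfig; the warmup ratio, output directory, and placeholder dataset are assumptions, not the project's code.

from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder training data in the "Question / Explanation / Answer" format.
train_ds = Dataset.from_dict(
    {"text": ["### Question: What is 2+2?\n### Explanation: Add the integers.\n### Answer: 4"]}
)

# LoRA hyperparameters as reported: rank 16, alpha 16, dropout 0.05.
peft_config = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

# SFT settings as reported: batch size 4, LR 1e-5 for public data (1e-4 for EPFL data),
# cosine schedule with warmup (warmup_ratio value assumed).
sft_config = SFTConfig(
    output_dir="phi2-sft-lora",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)

trainer = SFTTrainer(
    model="microsoft/phi-2",
    args=sft_config,
    train_dataset=train_ds,
    peft_config=peft_config,
)
trainer.train()

# DPO followed the same pattern with trl's DPOTrainer, batch size 1, and
# a dataset carrying "prompt", "chosen", and "rejected" columns.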
Evaluation
Testing Data, Factors & Metrics
Testing Data
Per-dataset held-out test sets (see splits), plus MMLU formatted to the SFT schema.
Factors
Task domain (math vs. general science vs. open-domain), data balancing, order of SFT/DPO phases.
Metrics
Accuracy for MCQA; DPO choice accuracy for preference pairs.
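A minimal sketch of how MCQA accuracy could be scored from generations, assuming the model is expected to emit a single letter after the "### Answer:" header; the regex and the A–D letter set are assumptions.

import re

def predicted_letter(generation):
    # Take the first standalone A-D letter appearing after the answer header.
    tail = generation.split("### Answer:")[-1]
    match = re.search(r"\b([ABCD])\b", tail)
    return match.group(1) if match else None

def mcqa_accuracy(generations, gold_letters):
    correct = sum(predicted_letter(g) == gold for g, gold in zip(generations, gold_letters))
    return correct / max(len(gold_letters), 1)

# Example: one of two generations matches its gold letter.
print(mcqa_accuracy(["### Answer: B", "### Answer: The answer is C."], ["B", "D"]))  # 0.5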
Results
Among several training recipes, the balanced-then-DPO configuration (model 8) performed best overall.
Summary
- Balanced MCQA SFT improved robustness.
- DPO on EPFL preferences improved alignment and EPFL-like accuracy.
- 8-bit quantization shrank memory (~11 GB → ~3 GB, per the report’s table) with mixed accuracy effects across tasks; a quick footprint check is sketched below.
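To verify the memory saving on your own hardware, a small comparison could use transformers' get_memory_footprint; the sketch below assumes both a full-precision and an 8-bit load fit on the machine and is not taken from the report.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Full-precision (fp16) load for comparison.
fp16 = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16)
print(f"fp16: {fp16.get_memory_footprint() / 1e9:.1f} GB")

# 8-bit load via BitsAndBytes.
int8 = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
print(f"int8: {int8.get_memory_footprint() / 1e9:.1f} GB")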
Technical Specifications
Model Architecture and Objective
Phi-2 transformer decoder LM (2.78B params) with next-token prediction objective; LoRA adapters for finetuning; DPO for preference alignment; 8-bit quantized runtime.
Software
Hugging Face TRL, PEFT/LoRA, Transformers; BitsAndBytes for quantization.
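Because the checkpoints focus on LoRA adapter weights, a hedged sketch of attaching an adapter to an 8-bit base model with PEFT is shown below; it assumes the Hub repository contains a PEFT adapter rather than fully merged weights (if the weights are merged, the quickstart snippet above is sufficient).

from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base Phi-2 model in 8-bit.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# Attach the LoRA adapter weights on top of the quantized base.
model = PeftModel.from_pretrained(base, "ShAIkespear/Phi-2_DPO_M3_Quantized_Alt_8bit")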
Glossary
- MCQA: Multiple-choice question answering.
- SFT: Supervised finetuning with gold answers.
- DPO: Direct Preference Optimization (pairwise preference alignment).
- LoRA: Low-Rank Adaptation for parameter-efficient finetuning.