# BERT-base uncased fine-tuned on SQuAD v2

## Model Details

### Model Description
This model is a fine-tuned version of BERT-base uncased on the SQuAD v2 dataset for extractive question answering.
It was trained for 3 epochs and extracts answers to questions from a given context passage, while also handling unanswerable questions (a key feature of SQuAD v2). A reproduction sketch of the training setup is given after the details below.
- Developed by: Peeyush
- Model type: Extractive Question Answering
- Language(s): English
- License: Apache-2.0
- Finetuned from: bert-base-uncased
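The exact training script is not included in this card. The sketch below is a minimal, assumed reproduction following the standard Hugging Face question-answering recipe for SQuAD v2; apart from the 3 epochs stated above, the hyperparameters (sequence length, batch size, learning rate) and the output path are placeholders, and evaluation is omitted for brevity.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

squad_v2 = load_dataset("squad_v2")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")


def preprocess(examples):
    # Tokenize question + context, truncating only the context when too long.
    tokenized = tokenizer(
        examples["question"],
        examples["context"],
        max_length=384,
        truncation="only_second",
        padding="max_length",
        return_offsets_mapping=True,
    )
    offset_mapping = tokenized.pop("offset_mapping")
    start_positions, end_positions = [], []

    for i, offsets in enumerate(offset_mapping):
        answers = examples["answers"][i]

        # Unanswerable question (SQuAD v2): point both labels at [CLS].
        if len(answers["answer_start"]) == 0:
            start_positions.append(0)
            end_positions.append(0)
            continue

        start_char = answers["answer_start"][0]
        end_char = start_char + len(answers["text"][0])
        sequence_ids = tokenized.sequence_ids(i)

        # First and last token of the context part of the input.
        ctx_start = sequence_ids.index(1)
        ctx_end = len(sequence_ids) - 1 - sequence_ids[::-1].index(1)

        # Answer truncated out of the context: treat as unanswerable.
        if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
            start_positions.append(0)
            end_positions.append(0)
            continue

        # Walk to the tokens that cover the answer characters.
        idx = ctx_start
        while idx <= ctx_end and offsets[idx][0] <= start_char:
            idx += 1
        start_positions.append(idx - 1)

        idx = ctx_end
        while idx >= ctx_start and offsets[idx][1] >= end_char:
            idx -= 1
        end_positions.append(idx + 1)

    tokenized["start_positions"] = start_positions
    tokenized["end_positions"] = end_positions
    return tokenized


train_dataset = squad_v2["train"].map(
    preprocess, batched=True, remove_columns=squad_v2["train"].column_names
)

training_args = TrainingArguments(
    output_dir="bert-base-uncased-squad-v2",  # assumed output path
    num_train_epochs=3,                       # the 3 epochs stated above
    per_device_train_batch_size=16,           # assumed
    learning_rate=3e-5,                       # assumed
)

Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=default_data_collator,
).train()
```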
### Model Sources
- Dataset: SQuAD v2
- Base model: bert-base-uncased
## Uses

### Direct Use
- Extractive Question Answering: Given a passage and a question, the model extracts the most likely span of text that answers the question.
- Handles unanswerable questions by predicting "no answer" when appropriate.
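A minimal sketch of both behaviours with the pipeline API. `handle_impossible_answer` is a standard option of the `question-answering` pipeline rather than something specific to this model, and the second question is an assumed example of one the context cannot answer.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="peeyush01/bert-squad-v2")
context = "Hugging Face is creating a tool that democratizes AI."

# Answerable question: the answer span is extracted from the context.
print(qa(question="What is Hugging Face creating?", context=context))

# Question the context does not answer: with handle_impossible_answer=True the
# pipeline is allowed to return an empty answer instead of forcing a span.
print(qa(
    question="When was Hugging Face founded?",
    context=context,
    handle_impossible_answer=True,
))
```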
### Downstream Use
- Can be integrated into chatbots, virtual assistants, or search systems that require question answering over text.
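As an illustration of the search-system case, the sketch below runs the model over a handful of candidate passages and keeps the highest-scoring answer. The passages are stand-ins for whatever a retriever or search index would return; the retrieval step itself is out of scope here.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="peeyush01/bert-squad-v2")

question = "What is Hugging Face creating?"
passages = [  # placeholders for passages returned by a retriever / search index
    "The Transformers library provides thousands of pretrained models.",
    "Hugging Face is creating a tool that democratizes AI.",
]

# Score an answer against each passage and keep the most confident one.
results = [qa(question=question, context=p) for p in passages]
best = max(results, key=lambda r: r["score"])
print(best["answer"], round(best["score"], 3))
```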
### Out-of-Scope Use
- Generative question answering (the model cannot generate new answers).
- Non-English tasks (the model was trained only on English data).
## Bias, Risks, and Limitations
- The model inherits biases from the SQuAD v2 dataset.
- Performance may degrade on domain-specific or noisy text not represented in SQuAD v2.
- Not designed for open-domain QA across large corpora — works best when the context passage is provided.
## How to Get Started with the Model
You can try the model with the following code:
```python
from transformers import pipeline

qa_pipeline = pipeline("question-answering", model="peeyush01/bert-squad-v2")

result = qa_pipeline({
    "context": "Hugging Face is creating a tool that democratizes AI.",
    "question": "What is Hugging Face creating?",
})
print(result)
```
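For users who prefer to skip the pipeline, the sketch below runs the same inference by hand and shows what "extractive" means here: the answer is decoded from the tokens between the most likely start and end positions. It is simplified to a plain argmax, whereas the pipeline additionally enforces valid spans and handles the SQuAD v2 null answer.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "peeyush01/bert-squad-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What is Hugging Face creating?"
context = "Hugging Face is creating a tool that democratizes AI."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token and decode the span between them.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer_tokens = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_tokens, skip_special_tokens=True))
```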
## Author Details
- Peeyush
- GitHub: Github