This model is a fine-tuned version of google-bert/bert-base-cased on the ontonotes5 dataset; its per-epoch results on the evaluation set are reported in the training table below.

How to use nickprock/bert-finetuned-ner-ontonotes with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="nickprock/bert-finetuned-ner-ontonotes")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nickprock/bert-finetuned-ner-ontonotes")
model = AutoModelForTokenClassification.from_pretrained("nickprock/bert-finetuned-ner-ontonotes")
```
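Once the pipeline is loaded, inference is a single call. A minimal sketch, assuming the `pipe` object from the snippet above; the example sentence is illustrative:

```python
text = "Acme Corp. announced a partnership with Microsoft in New York on Tuesday."

# Group word-piece predictions into whole entity spans
entities = pipe(text, aggregation_strategy="simple")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```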
Token-classification experiment (NER) on business topics. The model can be used for token classification, in particular NER; it was fine-tuned on business-topic text. The dataset used is ontonotes5 (tner/ontonotes5 on the Hugging Face Hub).
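To inspect the training data, the corpus can be loaded with the `datasets` library. A minimal sketch, assuming tner/ontonotes5 is the Hub copy of the dataset used for fine-tuning:

```python
from datasets import load_dataset

# Assumed Hub identifier for the OntoNotes 5 NER corpus
dataset = load_dataset("tner/ontonotes5")

# Each example is a list of tokens with integer NER tags
print(dataset["train"][0])
```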
The following results were obtained during training, one row per epoch:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.0842 | 1.0 | 7491 | 0.0950 | 0.8524 | 0.8715 | 0.8618 | 0.9745 |
| 0.0523 | 2.0 | 14982 | 0.1044 | 0.8449 | 0.8827 | 0.8634 | 0.9744 |
| 0.036 | 3.0 | 22473 | 0.1118 | 0.8529 | 0.8843 | 0.8683 | 0.9760 |
| 0.0231 | 4.0 | 29964 | 0.1240 | 0.8589 | 0.8805 | 0.8696 | 0.9752 |
| 0.0118 | 5.0 | 37455 | 0.1416 | 0.8570 | 0.8804 | 0.8685 | 0.9753 |
| 0.0077 | 6.0 | 44946 | 0.1503 | 0.8567 | 0.8842 | 0.8702 | 0.9755 |
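The label set the fine-tuned head predicts can be read off the model configuration. A small sketch, assuming the `model` object loaded above (OntoNotes NER models typically use IOB-style tags, but the exact inventory should be confirmed from the config itself):

```python
# Map from class index to tag name, e.g. {0: "O", 1: "B-PERSON", ...}
print(model.config.id2label)
```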
Base model: google-bert/bert-base-cased