Commit c476892
Parent(s): 668fd04

reverting to dataset sd-nlp
README.md CHANGED

@@ -7,7 +7,7 @@ tags:
  7       -
  8       license: agpl-3.0
  9       datasets:
 10 -     - EMBO/sd-
 10 +     - EMBO/sd-nlp
 11       metrics:
 12       -
 13       ---
@@ -16,7 +16,7 @@ metrics:
 16
 17       ## Model description
 18
 19 -     This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific texts from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-
 19 +     This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific texts from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `PANELIZATION` task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels.
 20
 21       Figures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels makes it possible to identify more coherent descriptions of individual scientific experiments.
 22
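To make the `PANELIZATION` description concrete, a minimal inference sketch with the `transformers` library follows. The excerpt does not name the model repository, so the id `EMBO/sd-panelization` is a placeholder, and the tokenizer is `roberta-base` as the card requires.

```python
# Minimal inference sketch. "EMBO/sd-panelization" is a placeholder id
# (the excerpt does not name the model repo); the card states the
# roberta-base tokenizer must be used.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForTokenClassification.from_pretrained("EMBO/sd-panelization")  # placeholder

panelizer = pipeline("token-classification", model=model, tokenizer=tokenizer)
legend = ("(A) Micrographs of cells expressing the GFP-tagged construct. "
          "(B) Quantification of fluorescence intensity.")
# Tokens the model tags mark where the legend splits into sub-panel fragments.
for tok in panelizer(legend):
    print(tok["word"], tok["entity"], tok["start"])
```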
@@ -44,7 +44,7 @@ The model must be used with the `roberta-base` tokenizer.
 44
 45       ## Training data
 46
 47 -     The model was trained for token classification using the [`EMBO/sd-
 47 +     The model was trained for token classification using the [`EMBO/sd-nlp PANELIZATION`](https://huggingface.co/datasets/EMBO/sd-nlp) dataset, which includes manually annotated examples.
 48
 49       ## Training procedure
 50
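The training-data paragraph now resolves to the `EMBO/sd-nlp` dataset with the `PANELIZATION` configuration. A loading sketch with the `datasets` library, assuming the configuration name is exposed as written on the card (split names may differ):

```python
# Loading sketch; "PANELIZATION" is the configuration name given on the card.
from datasets import load_dataset

ds = load_dataset("EMBO/sd-nlp", "PANELIZATION")
print(ds)  # inspect splits and features (word and label columns)
```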
@@ -54,7 +54,7 @@ Training code is available at https://github.com/source-data/soda-roberta
 54
 55       - Model fine-tuned: EMBO/bio-lm
 56       - Tokenizer vocab size: 50265
 57 -     - Training data: EMBO/sd-
 57 +     - Training data: EMBO/sd-nlp
 58       - Dataset configuration: PANELIZATION
 59       - Training with 2175 examples.
 60       - Evaluating on 622 examples.
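The procedure bullets above (fine-tune `EMBO/bio-lm`, `roberta-base` vocabulary of 50265, 2175 training and 622 evaluation examples) correspond to a standard token-classification fine-tuning loop. The condensed sketch below assumes the column names (`words`, `labels`), a two-label scheme, and the split names; the authoritative code is in the soda-roberta repository linked in the diff.

```python
# Condensed fine-tuning sketch matching the bullets above. Column names
# ("words", "labels"), the label count, split names, and hyperparameters
# are assumptions; see https://github.com/source-data/soda-roberta for
# the actual training code.
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

ds = load_dataset("EMBO/sd-nlp", "PANELIZATION")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # vocab size 50265

NUM_LABELS = 2  # assumption: panel-boundary vs. other

def tokenize_and_align(batch):
    # Assumes pre-split word lists in "words" and per-word labels in "labels".
    enc = tokenizer(batch["words"], is_split_into_words=True, truncation=True)
    aligned = []
    for i, labels in enumerate(batch["labels"]):
        word_ids = enc.word_ids(batch_index=i)
        # Label only the first sub-token of each word; mask the rest with -100.
        aligned.append([
            labels[w] if w is not None and (j == 0 or word_ids[j - 1] != w) else -100
            for j, w in enumerate(word_ids)
        ])
    enc["labels"] = aligned
    return enc

tokenized = ds.map(tokenize_and_align, batched=True,
                   remove_columns=ds["train"].column_names)

model = AutoModelForTokenClassification.from_pretrained("EMBO/bio-lm",
                                                        num_labels=NUM_LABELS)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sd-panelization"),
    train_dataset=tokenized["train"],       # 2175 examples per the card
    eval_dataset=tokenized["validation"],   # 622 examples per the card
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```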