# distilbart-12-6-hybrid-2048-in
This model is a fine-tuned version of broadfield-dev/distilbart-cnn-12-6-tuned-0104-0138 on the broadfield-dev/stacked-summaries_xsum-2048-in dataset.
## Training Details
- Task: SEQ_2_SEQ_LM
- Epochs: 2
- Learning Rate: 2e-05
- Gradient Accumulation Steps: 4
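
The hyperparameters above map directly onto a standard `Seq2SeqTrainer` setup. The following is only a minimal sketch of such a run, not the exact training script: the dataset column names, batch size, and any settings not listed in the card are assumptions.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "broadfield-dev/distilbart-cnn-12-6-tuned-0104-0138"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

dataset = load_dataset("broadfield-dev/stacked-summaries_xsum-2048-in")

def preprocess(batch):
    # Column names "document" and "summary" are assumptions; adjust to the dataset schema.
    model_inputs = tokenizer(batch["document"], max_length=2048, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="distilbart-12-6-hybrid-2048-in",
    num_train_epochs=2,                 # from the card
    learning_rate=2e-5,                 # from the card
    gradient_accumulation_steps=4,      # from the card
    per_device_train_batch_size=2,      # assumed; not stated in the card
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```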
## Entity Labels
['LABEL_0', 'LABEL_1', 'LABEL_2']
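
These generic labels typically come from the `id2label` mapping stored in the model configuration and are not used by the summarization pipeline itself. If needed, the mapping can be inspected directly:

```python
from transformers import AutoConfig

# Load only the configuration and print the stored label mapping
# (expected to contain LABEL_0, LABEL_1 and LABEL_2 as listed above).
config = AutoConfig.from_pretrained("broadfield-dev/distilbart-12-6-hybrid-2048-in")
print(config.id2label)
```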
## Usage
```python
from transformers import pipeline

# Load the summarization pipeline with this model.
summarizer = pipeline("summarization", model="broadfield-dev/distilbart-12-6-hybrid-2048-in")

text = "Your long text here..."
print(summarizer(text))
```
## Context
The model was trained with a maximum input length of 2048 tokens and a maximum output length of 256 tokens.
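
Given these limits, it can help to pass explicit truncation and generation-length settings when calling the pipeline. The values below mirror the training setup; `min_length` is an illustrative choice, not part of the card:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="broadfield-dev/distilbart-12-6-hybrid-2048-in",
)

text = "Your long text here..."

# Truncate over-long inputs to the model's maximum input length and cap
# generation at the 256-token output length used during training.
summary = summarizer(text, truncation=True, max_length=256, min_length=32)
print(summary[0]["summary_text"])
```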
## Base Model
sshleifer/distilbart-cnn-12-6