Instructions to use cgt/Roberta-wwm-ext-large-qa with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use cgt/Roberta-wwm-ext-large-qa with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="cgt/Roberta-wwm-ext-large-qa")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("cgt/Roberta-wwm-ext-large-qa")
model = AutoModelForQuestionAnswering.from_pretrained("cgt/Roberta-wwm-ext-large-qa")
```

- Notebooks
- Google Colab
- Kaggle
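Once the pipeline is loaded as shown above, it can be called with a question/context pair. The snippet below is an illustrative sketch: the question and context are made-up examples (not from the model card), chosen in Chinese because RoBERTa-wwm-ext-large is a Chinese pretrained model.

```python
from transformers import pipeline

pipe = pipeline("question-answering", model="cgt/Roberta-wwm-ext-large-qa")

# Example inputs (hypothetical, for illustration only)
result = pipe(
    question="北京是哪个国家的首都？",
    context="北京是中华人民共和国的首都，也是全国的政治和文化中心。",
)

# The pipeline returns a dict with "score", "start", "end", and "answer"
print(result["answer"])
```

The `question-answering` pipeline handles tokenization, span prediction, and decoding the answer text from the context in one call.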
- Xet hash: abb54a88edc7f553d6a668c6a55b9a9da08ef8c63a59a813b66b34d673b7d14a
- Size of remote file: 1.3 GB
- SHA256: 82bb1602ddc512f56a3ea28e873089d567d85c4d54028071aa06be906947dc3b
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.