Instructions for using nvidia/mit-b1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nvidia/mit-b1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="nvidia/mit-b1")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")

# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("nvidia/mit-b1")
model = AutoModelForImageClassification.from_pretrained("nvidia/mit-b1")
```
- Inference
- Notebooks
- Google Colab
- Kaggle
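The "load model directly" snippet above stops after loading; it does not show a forward pass. A minimal end-to-end sketch, assuming `torch`, `transformers`, and `Pillow` are installed (the solid-color test image here is a stand-in for a real photo, so the predicted label is not meaningful):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load the processor and model as shown above
processor = AutoImageProcessor.from_pretrained("nvidia/mit-b1")
model = AutoModelForImageClassification.from_pretrained("nvidia/mit-b1")

# Placeholder image; replace with Image.open(...) on a real file
image = Image.new("RGB", (224, 224), color="green")

# Preprocess and run inference without tracking gradients
with torch.no_grad():
    inputs = processor(images=image, return_tensors="pt")
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)

# Map the highest-scoring logit back to a class name
pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```

The same preprocessing and `id2label` lookup is what `pipeline("image-classification", ...)` does internally, so the two snippets should agree on any given image.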