## Quick start

How to use apple/aimv2-1B-patch14-336 with Transformers:

```python
# Use a pipeline as a high-level helper.
from transformers import pipeline

# The checkpoint ships custom modeling code, so trust_remote_code=True is required.
pipe = pipeline(
    "image-feature-extraction",
    model="apple/aimv2-1B-patch14-336",
    trust_remote_code=True,
)
```

```python
# Or load the model directly.
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained(
    "apple/aimv2-1B-patch14-336", trust_remote_code=True
)
model = AutoModel.from_pretrained(
    "apple/aimv2-1B-patch14-336", trust_remote_code=True
)
```
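The pipeline built in the first snippet is only constructed, never called. As a quick sanity check it can be invoked directly on an image URL or a `PIL.Image`; a minimal sketch (the COCO URL is just an example input, and the nested-list return shape is an assumption about the pipeline's default output):

```python
# Hypothetical quick check: the pipeline accepts an image URL (or a PIL
# image) and returns the extracted features as nested Python lists.
features = pipe("http://images.cocodataset.org/val2017/000000039769.jpg")
print(len(features[0]))  # number of feature tokens for the image
```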
How to use apple/aimv2-1B-patch14-336 with MLX:

```sh
# Download the model from the Hub.
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir aimv2-1B-patch14-336 apple/aimv2-1B-patch14-336
```
## Metadata

```yaml
library_name: transformers
license: apple-amlr
metrics:
- accuracy
pipeline_tag: image-feature-extraction
tags:
- vision
- image-feature-extraction
- mlx
- pytorch
model-index:
- name: aimv2-1B-patch14-336
  results:
  - task:
      type: classification
      name: Classification
    dataset:
      name: imagenet-1k
      type: imagenet-1k
    metrics:
    - type: accuracy
      value: 88.7
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: inaturalist-18
      type: inaturalist-18
    metrics:
    - type: accuracy
      value: 82.7
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: cifar10
      type: cifar10
    metrics:
    - type: accuracy
      value: 99.4
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: cifar100
      type: cifar100
    metrics:
    - type: accuracy
      value: 93.9
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: food101
      type: food101
    metrics:
    - type: accuracy
      value: 97.1
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: dtd
      type: dtd
    metrics:
    - type: accuracy
      value: 88.9
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: oxford-pets
      type: oxford-pets
    metrics:
    - type: accuracy
      value: 96.9
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: stanford-cars
      type: stanford-cars
    metrics:
    - type: accuracy
      value: 96.5
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: camelyon17
      type: camelyon17
    metrics:
    - type: accuracy
      value: 94.2
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: patch-camelyon
      type: patch-camelyon
    metrics:
    - type: accuracy
      value: 89.5
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: rxrx1
      type: rxrx1
    metrics:
    - type: accuracy
      value: 8.4
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: eurosat
      type: eurosat
    metrics:
    - type: accuracy
      value: 98.9
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: fmow
      type: fmow
    metrics:
    - type: accuracy
      value: 65.1
      name: Accuracy
      verified: false
  - task:
      type: classification
      name: Classification
    dataset:
      name: domainnet-infographic
      type: domainnet-infographic
    metrics:
    - type: accuracy
      value: 73.7
      name: Accuracy
      verified: false
```
## Introduction

[[AIMv2 Paper](https://arxiv.org/abs/2411.14402)] [[BibTeX](#citation)]
We introduce the AIMv2 family of vision models pre-trained with a multimodal autoregressive objective. AIMv2 pre-training is simple to implement, and the models train and scale effectively. Some AIMv2 highlights include:
- Outperforms OpenAI CLIP and SigLIP on the majority of multimodal understanding benchmarks.
- Outperforms DINOv2 on open-vocabulary object detection and referring expression comprehension.
- Exhibits strong recognition performance, with AIMv2-3B achieving 89.5% on ImageNet using a frozen trunk.
## Usage

### PyTorch
```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Load a sample image from COCO.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The revision pin keeps the remote code at a known commit.
processor = AutoImageProcessor.from_pretrained(
    "apple/aimv2-1B-patch14-336",
    revision="5a15f23f6757776fdd487283353a018677621939",
)
model = AutoModel.from_pretrained(
    "apple/aimv2-1B-patch14-336",
    revision="5a15f23f6757776fdd487283353a018677621939",
    trust_remote_code=True,
)

# Preprocess the image and extract features.
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
```
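For downstream use, the patch-level features can be pooled into a single image embedding. A minimal sketch, assuming the remote-code model returns a standard `BaseModelOutput` whose `last_hidden_state` holds one feature vector per patch (24 × 24 = 576 patches at 336×336 resolution with patch size 14; the pooling choice is ours, not the model's official head):

```python
import torch.nn.functional as F

# Patch-level features: (batch, num_patches, hidden_dim).
features = outputs.last_hidden_state

# Hypothetical pooling: average the patch tokens into one embedding per
# image, then L2-normalize, e.g. for retrieval or linear probing.
image_embedding = F.normalize(features.mean(dim=1), dim=-1)
print(image_embedding.shape)  # (1, hidden_dim)
```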
### JAX
```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, FlaxAutoModel

# Load a sample image from COCO.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained(
    "apple/aimv2-1B-patch14-336",
)
model = FlaxAutoModel.from_pretrained(
    "apple/aimv2-1B-patch14-336",
    trust_remote_code=True,
)

# Preprocess the image and extract features.
inputs = processor(images=image, return_tensors="jax")
outputs = model(**inputs)
```
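The same pooling applies on the JAX side; a minimal sketch, again assuming the Flax model exposes `last_hidden_state`:

```python
import jax.numpy as jnp

# Patch-level features: (batch, num_patches, hidden_dim).
features = outputs.last_hidden_state

# Hypothetical mean pooling over patch tokens, with L2 normalization.
image_embedding = features.mean(axis=1)
image_embedding = image_embedding / jnp.linalg.norm(
    image_embedding, axis=-1, keepdims=True
)
```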
## Citation
If you find our work useful, please consider citing us as:
```bibtex
@misc{fini2024multimodalautoregressivepretraininglarge,
  author      = {Fini, Enrico and Shukor, Mustafa and Li, Xiujun and Dufter, Philipp and Klein, Michal and Haldimann, David and Aitharaju, Sai and da Costa, Victor Guilherme Turrisi and Béthune, Louis and Gan, Zhe and Toshev, Alexander T and Eichner, Marcin and Nabi, Moin and Yang, Yinfei and Susskind, Joshua M. and El-Nouby, Alaaeldin},
  title       = {Multimodal Autoregressive Pre-training of Large Vision Encoders},
  year        = {2024},
  url         = {https://arxiv.org/abs/2411.14402},
  eprint      = {2411.14402},
  eprintclass = {cs.CV},
  eprinttype  = {arXiv},
}
```