A FocalNet image classification model. The model follows a two-stage training process: it first undergoes intermediate training on a large-scale dataset containing diverse bird species from around the world, and is then fine-tuned specifically on the eu-common dataset (all the relevant bird species found in the Arabian Peninsula, including rarities).
The species list is derived from the Collins bird guide [^1].
[^1]: Svensson, L., Mullarney, K., & Zetterström, D. (2022). Collins bird guide (3rd ed.). London, England: William Collins.
Model Type: Image classification and detection backbone
Model Stats:
Dataset: eu-common (707 classes)
Papers: Focal Modulation Networks (https://arxiv.org/abs/2203.11926)

The examples below show image classification, image embeddings, and detection feature extraction. To classify an image:
import birder
from birder.inference.classification import infer_image
(net, model_info) = birder.load_pretrained_model("focalnet_b_lrf_intermediate-eu-common", inference=True)
# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)
# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)
image = "path/to/image.jpeg" # or a PIL image, must be loaded in RGB format
(out, _) = infer_image(net, image, transform)
# out is a NumPy array of shape (1, 707), representing class probabilities.
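
Since out is a plain NumPy probability array, the top prediction can be read with NumPy alone. The following minimal sketch assumes the out variable from the block above:

import numpy as np
top_idx = int(np.argmax(out, axis=1)[0])  # index of the highest-probability class
top_prob = float(out[0, top_idx])  # probability assigned to that class
print(f"predicted class index: {top_idx}, probability: {top_prob:.3f}")

The model can also return an image embedding alongside the class probabilities:
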
import birder
from birder.inference.classification import infer_image
(net, model_info) = birder.load_pretrained_model("focalnet_b_lrf_intermediate-eu-common", inference=True)
# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)
# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)
image = "path/to/image.jpeg" # or a PIL image
(out, embedding) = infer_image(net, image, transform, return_embedding=True)
# embedding is a NumPy array of shape (1, 1024)
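
The 1024-dimensional embedding is useful for image retrieval and similarity search. As a minimal sketch, assuming two embeddings emb_a and emb_b obtained as above (the names are illustrative), cosine similarity can be computed with NumPy:

import numpy as np
# emb_a and emb_b are (1, 1024) embeddings returned by infer_image(..., return_embedding=True)
a = emb_a[0] / np.linalg.norm(emb_a[0])
b = emb_b[0] / np.linalg.norm(emb_b[0])
print(f"cosine similarity: {float(np.dot(a, b)):.3f}")  # 1.0 means the embeddings point in the same direction

The model can also serve as a detection backbone by exposing its intermediate feature maps:
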
from PIL import Image
import birder
(net, model_info) = birder.load_pretrained_model("focalnet_b_lrf_intermediate-eu-common", inference=True)
# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)
# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)
image = Image.open("path/to/image.jpeg")
features = net.detection_features(transform(image).unsqueeze(0))
# features is a dict (stage name -> torch.Tensor)
print([(k, v.size()) for k, v in features.items()])
# Output example:
# [('stage1', torch.Size([1, 128, 96, 96])),
# ('stage2', torch.Size([1, 256, 48, 48])),
# ('stage3', torch.Size([1, 512, 24, 24])),
# ('stage4', torch.Size([1, 1024, 12, 12]))]
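
These multi-scale feature maps are what a detection neck (for example an FPN) would consume. The snippet below is an illustrative sketch, not part of the birder API: it projects each stage to a common channel width with 1x1 convolutions, a typical first step when wiring the backbone into a detector.

import torch
with torch.no_grad():
    # hypothetical lateral 1x1 convolutions, one per backbone stage
    laterals = {name: torch.nn.Conv2d(feat.shape[1], 256, kernel_size=1) for name, feat in features.items()}
    projected = {name: laterals[name](feat) for name, feat in features.items()}
print([(k, tuple(v.size())) for k, v in projected.items()])
# Each stage keeps its spatial resolution but is projected to 256 channels,
# e.g. ('stage1', (1, 256, 96, 96)) ... ('stage4', (1, 256, 12, 12))
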
Citation:

@misc{yang2022focalmodulationnetworks,
    title={Focal Modulation Networks},
    author={Jianwei Yang and Chunyuan Li and Xiyang Dai and Lu Yuan and Jianfeng Gao},
    year={2022},
    eprint={2203.11926},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2203.11926},
}