Model:
facebook/levit-128S
The LeViT-128S model was pretrained on ImageNet-1k at a resolution of 224x224. It was introduced by Graham et al. in the paper LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference and first released in this repository.
Disclaimer: the team releasing LeViT did not write a model card for this model, so this model card has been written by the Hugging Face team.
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
from PIL import Image
import requests

# Load a sample image from the COCO 2017 validation set.
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-128S')
model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-128S')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 1000 ImageNet classes.
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
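The `logits` returned above are one unnormalized score per ImageNet class; taking the argmax picks the single most likely class, and a softmax turns the scores into probabilities if you want a ranked top-k instead. A minimal, dependency-free sketch of that post-processing step (the logit values below are made up for illustration, not real model output):

```python
import math

def softmax(scores):
    # Subtract the max score for numerical stability before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(scores, k=5):
    # Return (class_index, probability) pairs sorted by descending probability.
    probs = softmax(scores)
    ranked = sorted(enumerate(probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Toy logits standing in for outputs.logits[0] (hypothetical values).
toy_logits = [0.1, 2.3, -1.0, 4.2, 0.7]
for idx, prob in top_k(toy_logits, k=3):
    print(f"class {idx}: {prob:.3f}")
```

In the real pipeline you would apply the same idea to `outputs.logits[0].tolist()` and map each index through `model.config.id2label` to get human-readable class names.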