Model:
facebook/convnext-base-224-22k
ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository.
Disclaimer: The team releasing ConvNeXT did not write a model card for this model, so this model card has been written by the Hugging Face team.
ConvNeXT is a pure convolutional model (ConvNet) that takes inspiration from the design of Vision Transformers and claims to outperform them. The authors started from a ResNet and "modernized" its design step by step, using the Swin Transformer as a guide.
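To see the resulting stage-wise, ResNet-like layout, the short sketch below inspects the checkpoint's configuration. It is only an illustration: the attribute names (`depths`, `hidden_sizes`) come from the `transformers` `ConvNextConfig` API, and the values printed are whatever the checkpoint ships with, not figures quoted from the paper.

```python
from transformers import ConvNextConfig

# Load only the architecture configuration of the pretrained checkpoint (no weights).
config = ConvNextConfig.from_pretrained("facebook/convnext-base-224-22k")

# ConvNeXT is organized in four stages, much like a ResNet or Swin Transformer:
# `depths` is the number of blocks per stage, `hidden_sizes` the channel width per stage.
print(config.depths)        # e.g. [3, 3, 27, 3] for the "base" variant
print(config.hidden_sizes)  # e.g. [128, 256, 512, 1024] for the "base" variant
```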
You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you.
Here is how to use this model to classify an image into one of the 22k ImageNet-22k classes:
```python
from transformers import ConvNextImageProcessor, ConvNextForImageClassification
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-base-224-22k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k")

inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 22k ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, refer to the documentation.
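The same checkpoint can also be used as a feature extractor by loading `ConvNextModel` instead of the classification head. The following is a minimal sketch, assuming you want pooled image embeddings rather than class logits; the output fields (`last_hidden_state`, `pooler_output`) follow the `transformers` API for ConvNeXT.

```python
from transformers import ConvNextImageProcessor, ConvNextModel
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-base-224-22k")
model = ConvNextModel.from_pretrained("facebook/convnext-base-224-22k")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# `last_hidden_state` holds the final feature map; `pooler_output` is a pooled embedding.
print(outputs.last_hidden_state.shape)  # (batch, channels, height, width)
print(outputs.pooler_output.shape)      # (batch, channels)
```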
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
  author     = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title      = {A ConvNet for the 2020s},
  journal    = {CoRR},
  volume     = {abs/2201.03545},
  year       = {2022},
  url        = {https://arxiv.org/abs/2201.03545},
  eprinttype = {arXiv},
  eprint     = {2201.03545},
  timestamp  = {Thu, 20 Jan 2022 14:21:35 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```