Model:
dandelin/vilt-b32-finetuned-nlvr2
Vision-and-Language Transformer (ViLT) model fine-tuned on NLVR2. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository.
Disclaimer: The team releasing ViLT did not write a model card for this model, so this model card has been written by the Hugging Face team.
You can use the model to determine whether a sentence is true or false, given two images.
Here is how to use the model in PyTorch:
```python
from transformers import ViltProcessor, ViltForImagesAndTextClassification
import requests
from PIL import Image

# download the two NLVR2 example images
image1 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_0.jpg", stream=True).raw)
image2 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_1.jpg", stream=True).raw)
text = "The left image contains twice the number of dogs as the right image."

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")
model = ViltForImagesAndTextClassification.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")

# prepare inputs: the processor stacks the two images, so pixel_values
# has shape (num_images, 3, H, W)
encoding = processor([image1, image2], text, return_tensors="pt")

# forward pass: the model expects pixel_values of shape
# (batch_size, num_images, 3, H, W), hence the unsqueeze(0) to add a batch dimension
outputs = model(input_ids=encoding.input_ids, pixel_values=encoding.pixel_values.unsqueeze(0))
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
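If you want the scores the model assigns to both labels rather than just the top prediction, a minimal follow-up sketch (reusing `outputs` and `model` from the snippet above, and assuming the checkpoint's `id2label` maps the two class indices to the true/false labels) could look like this:

```python
import torch

# convert the raw logits to probabilities over the two NLVR2 labels
probs = torch.softmax(outputs.logits, dim=-1)[0]

# print the probability assigned to each label
for label_id, label in model.config.id2label.items():
    print(f"{label}: {probs[label_id].item():.3f}")
```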
(to be completed)
(to be completed)
(to be completed)
(to be completed)
```bibtex
@misc{kim2021vilt,
      title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
      author={Wonjae Kim and Bokyung Son and Ildoo Kim},
      year={2021},
      eprint={2102.03334},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}
```