Model:
dandelin/vilt-b32-finetuned-vqa
Vision-and-Language Transformer (ViLT) model fine-tuned on VQAv2. This model was introduced by Kim et al. in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision and first released in this repository.
Disclaimer: The team releasing ViLT did not write a model card for this model, so this model card has been written by the Hugging Face team.
You can use the raw model for visual question answering.
Here is how to use this model in PyTorch:
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image

# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# prepare inputs
encoding = processor(image, text, return_tensors="pt")

# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
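If you want more than the single best answer, the logits can be ranked to inspect the top-k candidate answers. The following is a minimal sketch that continues from the snippet above (it assumes `logits` and `model` are already defined); turning logits into per-answer scores with a sigmoid is an assumption made here for illustration, not something specified in this card.

import torch

# continues from the snippet above: `logits` has shape [1, num_labels]
# assumption: sigmoid scores per answer; the choice of scoring function is illustrative
scores = torch.sigmoid(logits)[0]
top = scores.topk(5)
for score, label_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{model.config.id2label[label_id]}: {score:.3f}")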
(TBD)
(TBD)
(TBD)
(TBD)
@misc{kim2021vilt,
  title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
  author={Wonjae Kim and Bokyung Son and Ildoo Kim},
  year={2021},
  eprint={2102.03334},
  archivePrefix={arXiv},
  primaryClass={stat.ML}
}