BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for BLIP trained on visual question answering - base architecture, with a ViT-base backbone.

Figure: BLIP model overview (pulled from the official BLIP repo).

TL;DR

The authors write in the paper's abstract:

Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models and datasets are released.
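
The captioner-plus-filter bootstrapping described above (CapFilt in the paper) is easy to picture in code. The sketch below is purely conceptual, not the authors' implementation; captioner, filter_model, match_score, and the 0.5 threshold are hypothetical stand-ins for the fine-tuned BLIP captioning and image-text matching heads.

# Conceptual sketch of CapFilt-style data bootstrapping (hypothetical API).
def bootstrap_captions(web_pairs, captioner, filter_model, threshold=0.5):
    """web_pairs: iterable of (image, noisy_web_caption) pairs."""
    clean_pairs = []
    for image, web_caption in web_pairs:
        # The captioner proposes a synthetic caption for each web image.
        synthetic_caption = captioner.generate_caption(image)
        # The filter keeps only captions it scores as matching the image,
        # whether they came from the web or from the captioner.
        for caption in (web_caption, synthetic_caption):
            if filter_model.match_score(image, caption) > threshold:
                clean_pairs.append((image, caption))
    # The cleaned pairs are then used to pre-train the next model.
    return clean_pairs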

Usage

You can use this model for visual question answering (VQA): given an image and a question, it generates a short answer.

Using the PyTorch model

Running the model on CPU
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

# Load the processor (image transforms + tokenizer) and the VQA model.
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# Fetch the demo image and make sure it is in RGB mode.
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Encode the image together with the question.
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

# Generate and decode the answer.
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> 1
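
The processor and generate also work on batches, so several questions can be answered in one forward pass. A minimal sketch, assuming the standard transformers batched-processor behavior (padding=True is forwarded to the underlying tokenizer):

import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# One image paired with several questions; padding aligns the question lengths.
questions = ["how many dogs are in the picture?", "what color is the dog?"]
inputs = processor([raw_image] * len(questions), questions, return_tensors="pt", padding=True)

out = model.generate(**inputs)
print(processor.batch_decode(out, skip_special_tokens=True))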
Running the model on GPU (full precision)
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> 1
Running the model on GPU in half precision (float16)
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("ybelkada/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("ybelkada/blip-vqa-base", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> 1
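
generate forwards standard transformers decoding arguments on to BLIP's text decoder, so answer decoding can be tuned. A small sketch continuing the snippet above (num_beams and max_new_tokens are generic transformers generation knobs, not BLIP-specific; whether they improve answers is model- and question-dependent):

# Continuing from the half-precision snippet above: beam search with a
# cap on the number of generated answer tokens.
out = model.generate(**inputs, num_beams=3, max_new_tokens=10)
print(processor.decode(out[0], skip_special_tokens=True))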

BibTeX entry and citation info

@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi       = {10.48550/ARXIV.2201.12086},
  url       = {https://arxiv.org/abs/2201.12086},
  author    = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title     = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}