Model:
OFA-Sys/ofa-large-caption
This is the large version of the OFA model, tuned for image captioning. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) in a simple sequence-to-sequence learning framework.
The directory includes four files: config.json, which holds the model configuration; vocab.json and merges.txt, used by the OFA tokenizer; and pytorch_model.bin, which holds the model weights. The mismatch between Fairseq and transformers has already been resolved, so there is no need to worry about it.
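As a quick way to confirm the download is complete, here is a minimal sketch (the `ckpt_dir` value is a placeholder for wherever the model repository was cloned):

```python
import os

ckpt_dir = "./OFA-large-caption"  # placeholder: path to the cloned model repo

# The four files described above.
expected = ["config.json", "vocab.json", "merges.txt", "pytorch_model.bin"]
missing = [name for name in expected
           if not os.path.exists(os.path.join(ckpt_dir, name))]
print("missing files:", missing or "none")
```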
To use it in transformers, refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers . Install transformers and download the model as shown below.
```bash
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-large-caption
```
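After the install, a quick sanity check is to import the OFA classes used in the example below; these only resolve with the feature/add_transformers build installed above, not with the stock PyPI release of transformers:

```python
# Succeeds only if the forked transformers build is installed.
from transformers import OFATokenizer, OFAModel
print(OFATokenizer.__name__, OFAModel.__name__)
```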
Next, point ckpt_dir to the path of the downloaded OFA-large model, and prepare an image for the test example below. Also make sure that Pillow and torchvision are installed in your environment.
```python
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator  # ships with the cloned OFA repo
>>> import torch  # needed for the patch_masks tensor below

>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 480
>>> patch_resize_transform = transforms.Compose([
        lambda image: image.convert("RGB"),
        transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
        transforms.ToTensor(),
        transforms.Normalize(mean=mean, std=std)
    ])

>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)

>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)

# using the generator of the fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
        tokenizer=tokenizer,
        beam_size=5,
        max_len_b=16,
        min_len=0,
        no_repeat_ngram_size=3,
    )
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, "patch_images": patch_img, "patch_masks": torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]

# using the generator of the huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)

>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
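For repeated use, the huggingface-version path above can be folded into a small helper. This is a sketch rather than part of the model card: `caption_image` is a hypothetical name, and it assumes the `tokenizer`, the `model` loaded with `use_cache=False`, and the `patch_resize_transform` defined above:

```python
import torch
from PIL import Image

def caption_image(image_path, tokenizer, model, transform,
                  prompt=" what does the image describe?"):
    """Hypothetical helper wrapping the huggingface-version generation above."""
    inputs = tokenizer([prompt], return_tensors="pt").input_ids
    patch_img = transform(Image.open(image_path)).unsqueeze(0)
    with torch.no_grad():  # inference only; no gradients needed
        gen = model.generate(inputs, patch_images=patch_img,
                             num_beams=5, no_repeat_ngram_size=3)
    return tokenizer.batch_decode(gen, skip_special_tokens=True)[0]

# Usage (path_to_image is the same placeholder as above):
# print(caption_image(path_to_image, tokenizer, model, patch_resize_transform))
```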