Model:

google/pix2struct-ai2d-large

Model card for Pix2Struct - Finetuned on AI2D - large version

Table of Contents

  • TL;DR
  • Using the model
  • Contributing
  • Citation

    TL;DR

    Pix2Struct is an image encoder - text decoder model that is trained on image-text pairs for various tasks, including image captioning and visual question answering. The full list of available models can be found in Table 1 of the paper.

    The abstract of the paper states the following:

    Visually-situated language is ubiquitous: sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, and image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.

    Using the model

    This model has been fine-tuned on a VQA task. You need to provide a question in a specific format, ideally in the format of a multiple-choice question with enumerated answer options, as sketched below.
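
    For illustration, such a question string can be assembled as follows (a minimal sketch; the format_choices_question helper is hypothetical, not part of transformers):

    # Hypothetical helper: builds a multiple-choice question string by
    # appending enumerated answer options to the question text.
    def format_choices_question(question, choices):
        options = " ".join(f"({i}) {choice}" for i, choice in enumerate(choices, start=1))
        return f"{question} {options}"

    question = format_choices_question(
        "What does the label 15 represent?",
        ["lava", "core", "tunnel", "ash cloud"],
    )
    # -> "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"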

    Converting from T5x to Hugging Face

    You can use the convert_pix2struct_checkpoint_to_pytorch.py script to convert the checkpoint, as follows:

    python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
    

    If you are converting a large model, run:

    python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
    

    Once saved, you can push your converted model with the following snippet:

    from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
    
    model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
    processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)
    
    model.push_to_hub("USERNAME/MODEL_NAME")
    processor.push_to_hub("USERNAME/MODEL_NAME")
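
    Once pushed, the converted model can be loaded back from the Hub by name (a minimal sketch, assuming the USERNAME/MODEL_NAME placeholder used above):

    from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

    # Load the converted checkpoint directly from the Hub
    model = Pix2StructForConditionalGeneration.from_pretrained("USERNAME/MODEL_NAME")
    processor = Pix2StructProcessor.from_pretrained("USERNAME/MODEL_NAME")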
    

    Running the model

    In full precision, on CPU:

    You can run the model in full precision, on CPU:

    import requests
    from PIL import Image
    from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
    
    image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
    image = Image.open(requests.get(image_url, stream=True).raw)
    
    model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-large")
    processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-large")
    
    question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
    
    inputs = processor(images=image, text=question, return_tensors="pt")
    
    predictions = model.generate(**inputs)
    print(processor.decode(predictions[0], skip_special_tokens=True))
    >>> ash cloud
    

    In full precision, on GPU:

    You can run the model in full precision, on GPU:

    import requests
    from PIL import Image
    from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
    
    image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
    image = Image.open(requests.get(image_url, stream=True).raw)
    
    model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-large").to("cuda")
    processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-large")
    
    question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
    
    inputs = processor(images=image, text=question, return_tensors="pt").to("cuda")
    
    predictions = model.generate(**inputs)
    print(processor.decode(predictions[0], skip_special_tokens=True))
    >>> ash cloud
    

    In half precision, on GPU:

    You can run the model in half precision, on GPU:

    import requests
    from PIL import Image
    
    import torch
    from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
    
    image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
    image = Image.open(requests.get(image_url, stream=True).raw)
    
    model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-large", torch_dtype=torch.bfloat16).to("cuda")
    processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-large")
    
    question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
    
    inputs = processor(images=image, text=question, return_tensors="pt").to("cuda", torch.bfloat16)
    
    predictions = model.generate(**inputs)
    print(processor.decode(predictions[0], skip_special_tokens=True))
    >>> ash cloud
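
    The processor also accepts lists of images and questions, so several questions can be answered in a single generate call. A minimal sketch reusing the demo image above, on GPU in half precision (the second question is purely illustrative; its answer depends on the diagram):

    import requests
    import torch
    from PIL import Image
    from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

    image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
    image = Image.open(requests.get(image_url, stream=True).raw)

    model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-large", torch_dtype=torch.bfloat16).to("cuda")
    processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-large")

    questions = [
        "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud",
        "What does the label 20 represent? (1) lava (2) core (3) tunnel (4) ash cloud",
    ]

    # One image per question; the processor batches image-question pairs
    inputs = processor(images=[image] * len(questions), text=questions, return_tensors="pt").to("cuda", torch.bfloat16)

    predictions = model.generate(**inputs)
    print(processor.batch_decode(predictions, skip_special_tokens=True))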
    

    Contributing

    This model was originally contributed by Kenton Lee, Mandar Joshi et al., and added to the Hugging Face ecosystem by Younes Belkada.

    Citation

    If you want to cite this work, please consider citing the original paper:

    @misc{https://doi.org/10.48550/arxiv.2210.03347,
      doi = {10.48550/ARXIV.2210.03347},
      url = {https://arxiv.org/abs/2210.03347},
      author = {Lee, Kenton and Joshi, Mandar and Turc, Iulia and Hu, Hexiang and Liu, Fangyu and Eisenschlos, Julian and Khandelwal, Urvashi and Shaw, Peter and Chang, Ming-Wei and Toutanova, Kristina},
      keywords = {Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
      title = {Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding},
      publisher = {arXiv},
      year = {2022},
      copyright = {Creative Commons Attribution 4.0 International}
    }