Model:

RUCAIBox/mtl-data-to-text

Language: English

MTL-data-to-text

The MTL-data-to-text model was proposed by Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen in MVP: Multi-task Supervised Pre-training for Natural Language Generation.

Detailed information and instructions can be found at https://github.com/RUCAIBox/MVP.

Model Description

MTL-data-to-text was pre-trained with supervision on a mixture of labeled data-to-text datasets. It is a variant of our main model, MVP, and follows a standard Transformer encoder-decoder architecture.

MTL-data-to-text is specially designed for data-to-text generation tasks, such as KG-to-text generation (WebNLG, DART), table-to-text generation (WikiBio, ToTTo), and MR-to-text generation (E2E).

Example

>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")

>>> inputs = tokenizer(
...     "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
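The input string above linearizes knowledge-graph triples: the elements of each triple are joined with " | ", and triples are separated by " [SEP] ". A minimal sketch of such a linearizer (the helper name `linearize_triples` is hypothetical, not part of the MVP or transformers API) might look like:

```python
# Hypothetical helper: flatten (subject, relation, object) triples into the
# input format expected above -- " | " inside a triple, " [SEP] " between triples.
def linearize_triples(triples, prefix="Describe the following data: "):
    """Turn an iterable of (subject, relation, object) triples into one input string."""
    body = " [SEP] ".join(" | ".join(triple) for triple in triples)
    return prefix + body

triples = [
    ("Iron Man", "instance of", "Superhero"),
    ("Stan Lee", "creator", "Iron Man"),
]
print(linearize_triples(triples))
# Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man
```

The resulting string can be passed directly to the tokenizer as in the example above.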

Related Models

MVP: https://huggingface.co/RUCAIBox/mvp

Prompt-based models:

Multi-task models:

Citation

@article{tang2022mvp,
  title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
  author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
  journal={arXiv preprint arXiv:2206.12131},
  year={2022},
  url={https://arxiv.org/abs/2206.12131},
}