Model:
akoksal/LongForm-T5-XL
The LongForm dataset is created by leveraging English corpus examples with augmented instructions. We select a diverse set of human-written documents from existing corpora such as C4 and Wikipedia and generate instructions for these documents via LLMs. We then extend these examples with structured corpus examples (such as Stack Exchange and WikiHow) and task examples (such as question answering, email writing, grammar error correction, story/poem generation, and text summarization).
Github Repo: https://github.com/akoksal/LongForm
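A minimal sketch for inspecting the dataset, assuming it is published on the Hugging Face Hub under `akoksal/LongForm` (the dataset path and split names are assumptions; see the GitHub repo above for the authoritative download instructions):

```python
# Minimal sketch: load and inspect the LongForm dataset.
# Assumption: the dataset is hosted on the Hugging Face Hub as "akoksal/LongForm".
from datasets import load_dataset

longform = load_dataset("akoksal/LongForm")
print(longform)                         # available splits and their sizes
print(longform["train"].column_names)   # instruction/output field names
print(longform["train"][0])             # one instruction-document pair
```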
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("akoksal/LongForm-T5-XL")
tokenizer = AutoTokenizer.from_pretrained("akoksal/LongForm-T5-XL")

instruction = "Write an essay about meditation."

torch.manual_seed(42)
input_ids = tokenizer(instruction, return_tensors="pt").input_ids
target_ids = model.generate(input_ids, do_sample=True, max_new_tokens=50, top_p=0.9)
tokenizer.decode(target_ids[0], skip_special_tokens=True)
# Output:
# > Meditation is an ancient, spiritual practice. Meditation was first
# practiced as early as 3000 BC by Indians. Meditation has been practiced
# by people for thousands of years. People meditate in order to become more
# present in their life. Meditation is
```
We provide an in-depth evaluation of the LongForm models and baselines in the paper. Below are METEOR scores of the models on out-of-domain datasets. The LongForm models outperform prior instruction-tuned models on all tasks: recipe generation (RGen), long-form question answering (ELI5), and short story generation (WritingPrompts/WP).
| Model | All | Recipe Generation | ELI5 | Writing Prompts |
|---|---|---|---|---|
| T0++ | 10.9 | 18.7 | 3.8 | 10.2 |
| Tk-Instruct | 6.3 | 12.9* | 3.6 | 2.4 |
| Flan-T5 | 10.6 | 20.9* | 3.5 | 7.4 |
| Alpaca-LLaMA-7B | 14.6 | 19.5 | 12.5 | 11.8 |
| OPT-30B | 11.1 | 18.6 | 12.2 | 2.6 |
| LongForm-T5-XL | 16.3 | 20.2 | 18.3 | 10.6 |
| LongForm-OPT-2.7B | 17.8 | 15.5 | 17.9 | 19.9 |
| LongForm-OPT-6.7B | 17.7 | 16.9 | 17.2 | 19.0 |
| LongForm-LLaMA-7B ‡ | 19.7 | 21.7 | 18.6 | 18.9 |
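As a rough illustration of the metric used in the table, the sketch below computes METEOR with the `evaluate` library on dummy strings; the paper's actual evaluation pipeline and preprocessing may differ, and the table presumably reports scores scaled by 100.

```python
# Minimal sketch: score a generated text against a reference with METEOR.
# The prediction/reference strings are placeholders, not data from the paper.
import evaluate

meteor = evaluate.load("meteor")
predictions = ["Preheat the oven to 180C, then mix the flour and eggs."]
references = ["Preheat the oven to 180 degrees and combine the flour with the eggs."]
result = meteor.compute(predictions=predictions, references=references)
print(result["meteor"])  # value in [0, 1]; multiply by 100 to compare with the table
```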
Smaller versions of the LongForm-OPT models are also available.
‡: Due to the restrictions on LLaMA models, we can only publicly release the difference (weight delta) between LongForm-LLaMA-7B and the pretrained LLaMA-7B.
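For the decoder-only LongForm-OPT variants mentioned above, loading differs from the T5 example in that a causal-LM head is used. A minimal sketch follows, assuming a checkpoint id such as `akoksal/LongForm-OPT-125M` and plain-text prompting (the exact released ids and any instruction-formatting conventions for the OPT models should be checked in the repo):

```python
# Minimal sketch: generate with a smaller LongForm-OPT checkpoint.
# Assumptions: the model id "akoksal/LongForm-OPT-125M" and plain-text prompting;
# the released OPT variants may expect a specific instruction format.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("akoksal/LongForm-OPT-125M")
tokenizer = AutoTokenizer.from_pretrained("akoksal/LongForm-OPT-125M")

instruction = "Write an essay about meditation."
torch.manual_seed(42)
input_ids = tokenizer(instruction, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, do_sample=True, max_new_tokens=50, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```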
The LongForm dataset and models focus primarily on long text generation and have limitations regarding structured prediction tasks in NLP. In addition, we observe that LongForm models may exhibit hallucination problems similar to those found in LLMs.
The LongForm project is subject to the MIT license, with custom limitations for restrictions imposed by OpenAI (for the instruction generation part), as well as the licenses of the underlying language models (OPT, LLaMA, and T5).
```
@misc{koksal2023longform,
      title={LongForm: Optimizing Instruction Tuning for Long Text Generation with Corpus Extraction},
      author={Abdullatif Köksal and Timo Schick and Anna Korhonen and Hinrich Schütze},
      year={2023},
      eprint={2304.08460},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```