
Model Card of lmqg/t5-large-squad-qg

This model is a version of t5-large fine-tuned for the question generation task on lmqg/qg_squad (dataset name: default) via lmqg.

Overview

  • Language model: t5-large
  • Language: en
  • Training data: lmqg/qg_squad (default)

Usage

  • With the lmqg library

from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/t5-large-squad-qg")

# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
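The list_* parameter names indicate that generate_q also accepts parallel lists for batch prediction; a minimal sketch under that assumption (verify against the lmqg documentation):

# batch prediction: one question per (context, answer) pair
# assumption: generate_q accepts parallel lists, as the list_* names suggest
questions = model.generate_q(
    list_context=[
        "William Turner was an English painter who specialised in watercolour landscapes",
        "Beyonce starred as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
    ],
    list_answer=["William Turner", "Etta James"]
)
print(questions)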
  • With the transformers library
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
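The input format is visible in the call above: the passage is prefixed with "generate question: " and the answer span is wrapped in <hl> tokens. A minimal sketch of a helper that builds this input; build_qg_input is a hypothetical name, not part of the library:

# build the model input from a passage and an answer span
# format inferred from the example above: prefix + <hl>-highlighted answer
def build_qg_input(paragraph: str, answer: str) -> str:
    highlighted = paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

output = pipe(build_qg_input(
    "Beyonce further expanded her acting career, starring as blues singer "
    "Etta James in the 2008 musical biopic, Cadillac Records.",
    "Etta James",
))
print(output[0]["generated_text"])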

Evaluation

  • Metric (Question Generation)

| Metric     | Score | Type    | Dataset       |
|------------|-------|---------|---------------|
| BERTScore  | 91    | default | lmqg/qg_squad |
| Bleu_1     | 59.54 | default | lmqg/qg_squad |
| Bleu_2     | 43.79 | default | lmqg/qg_squad |
| Bleu_3     | 34.14 | default | lmqg/qg_squad |
| Bleu_4     | 27.21 | default | lmqg/qg_squad |
| METEOR     | 27.7  | default | lmqg/qg_squad |
| MoverScore | 65.29 | default | lmqg/qg_squad |
| ROUGE_L    | 54.13 | default | lmqg/qg_squad |
  • Metric (Question & Answer Generation, Reference Answer): Each question is generated from the gold answer. raw metric file
| Metric                          | Score | Type    | Dataset       |
|---------------------------------|-------|---------|---------------|
| QAAlignedF1Score (BERTScore)    | 95.57 | default | lmqg/qg_squad |
| QAAlignedF1Score (MoverScore)   | 71.1  | default | lmqg/qg_squad |
| QAAlignedPrecision (BERTScore)  | 95.62 | default | lmqg/qg_squad |
| QAAlignedPrecision (MoverScore) | 71.41 | default | lmqg/qg_squad |
| QAAlignedRecall (BERTScore)     | 95.51 | default | lmqg/qg_squad |
| QAAlignedRecall (MoverScore)    | 70.8  | default | lmqg/qg_squad |
  • Metric (Question & Answer Generation, Pipeline Approach): Each question is generated from an answer predicted by a separate answer extraction model.

| Metric                          | Score | Type    | Dataset       |
|---------------------------------|-------|---------|---------------|
| QAAlignedF1Score (BERTScore)    | 92.97 | default | lmqg/qg_squad |
| QAAlignedF1Score (MoverScore)   | 64.72 | default | lmqg/qg_squad |
| QAAlignedPrecision (BERTScore)  | 92.83 | default | lmqg/qg_squad |
| QAAlignedPrecision (MoverScore) | 64.87 | default | lmqg/qg_squad |
| QAAlignedRecall (BERTScore)     | 93.14 | default | lmqg/qg_squad |
| QAAlignedRecall (MoverScore)    | 64.66 | default | lmqg/qg_squad |
  • Metric (Question Generation, Out-of-Domain)
| Dataset             | Type        | BERTScore | Bleu_4 | METEOR | MoverScore | ROUGE_L |
|---------------------|-------------|-----------|--------|--------|------------|---------|
| lmqg/qg_squadshifts | amazon      | 91.15     | 6.9    | 23.01  | 61.22      | 25.34   |
| lmqg/qg_squadshifts | new_wiki    | 93.17     | 11.18  | 27.92  | 66.31      | 30.06   |
| lmqg/qg_squadshifts | nyt         | 92.42     | 8.05   | 25.67  | 64.37      | 25.19   |
| lmqg/qg_squadshifts | reddit      | 90.95     | 5.95   | 21.85  | 60.64      | 21.99   |
| lmqg/qg_subjqa      | books       | 87.94     | 0.0    | 11.97  | 55.48      | 9.87    |
| lmqg/qg_subjqa      | electronics | 87.86     | 0.84   | 16.16  | 56.05      | 14.13   |
| lmqg/qg_subjqa      | grocery     | 87.5      | 0.76   | 15.4   | 56.76      | 10.5    |
| lmqg/qg_subjqa      | movies      | 87.34     | 0.0    | 13.03  | 55.36      | 12.27   |
| lmqg/qg_subjqa      | restaurants | 88.25     | 0.0    | 12.45  | 55.91      | 11.93   |
| lmqg/qg_subjqa      | tripadvisor | 89.29     | 0.78   | 16.3   | 56.81      | 14.59   |
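For orientation, a minimal sketch of how an n-gram score such as Bleu_4 can be computed with the Hugging Face evaluate library; the strings are toy examples, and the official numbers above come from lmqg's own evaluation toolkit:

import evaluate  # pip install evaluate

bleu = evaluate.load("bleu")
# one generated question against one (or more) reference questions
predictions = ["who was the english painter that specialised in watercolour landscapes?"]
references = [["who was the english painter known for watercolour landscapes?"]]
result = bleu.compute(predictions=predictions, references=references, max_order=4)
print(result["bleu"])  # corpus-level BLEU-4 on the toy pair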

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • Dataset path: lmqg/qg_squad
  • Dataset name: default
  • Input types: ['paragraph_answer']
  • Output types: ['question']
  • Prefix types: ['qg']
  • Model: t5-large
  • Max length: 512
  • Max output length: 32
  • Epochs: 6
  • Batch size: 16
  • Learning rate: 5e-05
  • fp16: False
  • Random seed: 1
  • Gradient accumulation steps: 4
  • Label smoothing: 0.15

The full configuration can be found in the fine-tuning config file.
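As a rough orientation, the hyperparameters above map onto transformers training arguments as sketched below; lmqg drives fine-tuning through its own wrapper, so this is an approximation, not the actual training script:

from transformers import Seq2SeqTrainingArguments

# approximate mapping of the listed hyperparameters; output_dir is hypothetical
args = Seq2SeqTrainingArguments(
    output_dir="t5-large-squad-qg",
    num_train_epochs=6,               # epochs: 6
    per_device_train_batch_size=16,   # batch size: 16
    learning_rate=5e-05,              # learning rate: 5e-05
    fp16=False,                       # fp16: False
    seed=1,                           # random seed: 1
    gradient_accumulation_steps=4,    # gradient accumulation steps: 4
    label_smoothing_factor=0.15,      # label smoothing: 0.15
)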

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}