
Model card for lmqg/t5-base-squad-qg-ae

This model is a fine-tuned version of t5-base, trained jointly for question generation and answer extraction on lmqg/qg_squad (dataset name: default).

Overview

Usage

from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/t5-base-squad-qg-ae")

# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
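
generate_qa returns the question-answer pairs generated from the passage. A minimal follow-up, assuming the result is a list of (question, answer) tuples:

# iterate over the generated pairs (assumes a list of (question, answer) tuples)
for question, answer in question_answer_pairs:
    print(f"Q: {question}")
    print(f"A: {answer}")
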
  • With transformers
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qg-ae")

# question generation
question = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")

# answer extraction
answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
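
Both task prefixes use the same highlight convention: the span of interest is wrapped in <hl> tokens inside the passage. A minimal sketch of building these inputs programmatically (the helpers make_qg_input and make_ae_input are illustrative, not part of the library):

from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qg-ae")

def make_qg_input(paragraph: str, answer: str) -> str:
    # wrap the answer span in <hl> tokens and prepend the question-generation prefix
    highlighted = paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
    return "generate question: " + highlighted

def make_ae_input(paragraph: str, sentence: str) -> str:
    # wrap the target sentence in <hl> tokens and prepend the answer-extraction prefix
    highlighted = paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)
    return "extract answers: " + highlighted

paragraph = ("Beyonce further expanded her acting career, starring as blues singer "
             "Etta James in the 2008 musical biopic, Cadillac Records.")
print(pipe(make_qg_input(paragraph, "Etta James")))
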

Evaluation

Question generation metrics (dataset: lmqg/qg_squad, type: default)

Metric       Score
BERTScore    90.58
Bleu_1       58.59
Bleu_2       42.6
Bleu_3       32.91
Bleu_4       26.01
METEOR       27
MoverScore   64.72
ROUGE_L      53.4

Question & answer pair generation metrics (dataset: lmqg/qg_squad, type: default)

Metric                           Score
QAAlignedF1Score (BERTScore)     92.53
QAAlignedF1Score (MoverScore)    64.23
QAAlignedPrecision (BERTScore)   92.35
QAAlignedPrecision (MoverScore)  64.33
QAAlignedRecall (BERTScore)      92.74
QAAlignedRecall (MoverScore)     64.23

Answer extraction metrics (dataset: lmqg/qg_squad, type: default)

Metric            Score
AnswerExactMatch  58.9
AnswerF1Score     70.18
BERTScore         91.57
Bleu_1            56.96
Bleu_2            52.57
Bleu_3            48.21
Bleu_4            44.33
METEOR            43.94
MoverScore        82.16
ROUGE_L           69.62
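
AnswerExactMatch and AnswerF1Score above are SQuAD-style string-match metrics over the extracted answers. A minimal sketch of how such scores are typically computed for one prediction/reference pair (the exact normalization used by lmqg's evaluation may differ):

import re
import string
from collections import Counter

def normalize(text: str) -> str:
    # lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Etta James", "etta james"))           # 1.0
print(f1_score("the singer Etta James", "Etta James"))   # token-overlap F1
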

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • Dataset path: lmqg/qg_squad
  • Dataset name: default
  • Input types: ['paragraph_answer', 'paragraph_sentence']
  • Output types: ['question', 'answer']
  • Prefix types: ['qg', 'ae']
  • Model: t5-base
  • Maximum input length: 512
  • Maximum output length: 32
  • Epochs: 6
  • Batch size: 32
  • Learning rate: 0.0001
  • Mixed precision (fp16): False
  • Random seed: 1
  • Gradient accumulation steps: 4
  • Label smoothing: 0.15

The full configuration can be found in the fine-tuning config file.
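
Two of the settings above combine: a batch size of 32 with 4 gradient accumulation steps gives an effective batch size of 128, and the 0.15 label smoothing softens the one-hot cross-entropy targets. A minimal PyTorch-style sketch of how such an update step could look (illustrative only; the actual training loop lives in the lmqg codebase, and model, optimizer, and micro_batches are assumed to be set up elsewhere):

import torch.nn.functional as F

ACCUM_STEPS = 4          # gradient_accumulation_steps from the config above
LABEL_SMOOTHING = 0.15   # label_smoothing from the config above

def accumulated_step(model, optimizer, micro_batches):
    # one optimizer step over ACCUM_STEPS micro-batches (effective batch = 32 * 4 = 128)
    optimizer.zero_grad()
    for batch in micro_batches:  # each batch: dict with input_ids, attention_mask, labels
        outputs = model(**batch)  # passing labels lets the seq2seq model build decoder inputs
        logits = outputs.logits
        loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)),
            batch["labels"].view(-1),
            ignore_index=-100,            # padded target positions are masked out
            label_smoothing=LABEL_SMOOTHING,
        )
        (loss / ACCUM_STEPS).backward()   # average gradients across micro-batches
    optimizer.step()
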

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}