
Model Card of lmqg/flan-t5-large-squad-qag

This model is a version of google/flan-t5-large fine-tuned for the question & answer pair generation task on lmqg/qag_squad (dataset name: default) via lmqg.

Overview

Usage

  • With lmqg

from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/flan-t5-large-squad-qag")

# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
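
generate_qa returns a list of (question, answer) tuples; an illustrative result (the exact wording depends on the decoding settings):

# [('Who was an English painter who specialised in watercolour landscapes?', 'William Turner')]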
  • With transformers
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/flan-t5-large-squad-qag")
output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
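
The pipeline returns a list of dicts whose generated_text field holds all pairs in one string. A minimal parsing sketch, assuming pairs are joined by " | " and formatted as "question: ..., answer: ..." (both delimiters are assumptions; inspect the raw output to confirm):

text = output[0]["generated_text"]

pairs = []
for chunk in text.split(" | "):
    if ", answer:" in chunk:
        q, a = chunk.split(", answer:", 1)
        pairs.append((q.removeprefix("question:").strip(), a.strip()))
print(pairs)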

Evaluation

| Metric                         | Score | Type    | Dataset        |
|:-------------------------------|------:|:--------|:---------------|
| QAAlignedF1Score (BERTScore)   | 93.49 | default | lmqg/qag_squad |
| QAAlignedF1Score (MoverScore)  | 66.06 | default | lmqg/qag_squad |
| QAAlignedPrecision (BERTScore) | 93.32 | default | lmqg/qag_squad |
| QAAlignedPrecision (MoverScore)| 66.15 | default | lmqg/qag_squad |
| QAAlignedRecall (BERTScore)    | 93.68 | default | lmqg/qag_squad |
| QAAlignedRecall (MoverScore)   | 66.06 | default | lmqg/qag_squad |
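
For intuition: QAAligned metrics serialise each QA pair into a single string, align every generated pair with its best-matching gold pair (precision) and every gold pair with its best-matching generated pair (recall), then combine the two into an F1. Below is a minimal sketch of that alignment idea using the bert_score package; it illustrates the concept only and is not the exact lmqg implementation:

from bert_score import score

def qa_aligned_f1(pred_pairs, gold_pairs):
    # Serialise each (question, answer) pair into one string, since the
    # metric compares whole pairs rather than questions or answers alone.
    preds = [f"question: {q}, answer: {a}" for q, a in pred_pairs]
    golds = [f"question: {q}, answer: {a}" for q, a in gold_pairs]
    # Precision: each generated pair against its best-matching gold pair
    # (bert_score keeps the best score when given multiple references).
    _, _, p = score(preds, [golds] * len(preds), lang="en")
    # Recall: each gold pair against its best-matching generated pair.
    _, _, r = score(golds, [preds] * len(golds), lang="en")
    p, r = p.mean().item(), r.mean().item()
    return 2 * p * r / (p + r)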

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qag_squad
  • dataset_name: default
  • input_types: ['paragraph']
  • output_types: ['questions_answers']
  • prefix_types: ['qag']
  • model: google/flan-t5-large
  • max_length: 512
  • max_length_output: 256
  • epoch: 15
  • batch: 2
  • lr: 0.0001
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 32
  • label_smoothing: 0.0

The full configuration can be found in the fine-tuning config file.
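
These values translate directly to a standard Hugging Face seq2seq fine-tuning setup; note the effective batch size is 2 × 32 = 64. A minimal sketch, assuming the transformers Seq2SeqTrainer and the paragraph / questions_answers columns listed above, rather than the actual lmqg training script:

from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tok = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
data = load_dataset("lmqg/qag_squad", "default")

def preprocess(ex):
    # Map paragraph -> questions_answers, with the qag task prefix used above.
    x = tok("generate question and answer: " + ex["paragraph"],
            max_length=512, truncation=True)
    x["labels"] = tok(text_target=ex["questions_answers"],
                      max_length=256, truncation=True)["input_ids"]
    return x

train = data["train"].map(preprocess, remove_columns=data["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-squad-qag",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=32,  # effective batch size: 2 * 32 = 64
    num_train_epochs=15,
    label_smoothing_factor=0.0,
    fp16=False,
    seed=1,
)

Seq2SeqTrainer(model=model, args=args, train_dataset=train,
               data_collator=DataCollatorForSeq2Seq(tok, model=model)).train()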

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}