---
language: en
license: apache-2.0
---

HF version of PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).

The original code, along with the scripts and notebooks used to train and evaluate the model, can be found in the original GitHub repository.

  • Note: due to differences between the implementations of the original Longformer and the Hugging Face LED model, the results of the converted models differ slightly. As a sanity check, we ran both the fine-tuned and non-fine-tuned models on the Multi-News dataset; the results are shown below:

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| PRIMERA | 42.0 | 13.6 | 20.8 |
| PRIMERA-hf | 41.7 | 13.6 | 20.5 |
| PRIMERA (fine-tuned) | 49.9 | 21.1 | 25.9 |
| PRIMERA-hf (fine-tuned) | 49.9 | 20.9 | 25.8 |

You can load the model as follows:

```python
from transformers import (
    AutoTokenizer,
    LEDConfig,
    LEDForConditionalGeneration,
)

tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config = LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
```
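PRIMERA concatenates its input documents with a `<doc-sep>` special token, and the underlying LED model uses a `global_attention_mask` so that those separators (and the leading token) attend globally. As a minimal sketch of how such a mask can be built (pure Python on token-id lists, no model download; the token ids below are illustrative placeholders, not the real vocabulary — look up the actual id with `tokenizer.convert_tokens_to_ids('<doc-sep>')`):

```python
# Sketch: build a global-attention mask for an LED-style model.
# Positions holding the <doc-sep> separator, plus the first token,
# get global attention (1); all other positions get local attention (0).

def build_global_attention_mask(input_ids, docsep_token_id):
    """Return a 0/1 mask with the same shape as input_ids (list of lists)."""
    mask = []
    for seq in input_ids:
        row = [1 if tok == docsep_token_id else 0 for tok in seq]
        if row:
            row[0] = 1  # leading token (e.g. <s>) also attends globally
        mask.append(row)
    return mask

# Placeholder ids: 0 = <s>, 50266 = <doc-sep>, 2 = </s>
batch = [[0, 11, 12, 50266, 13, 14, 2]]
print(build_global_attention_mask(batch, docsep_token_id=50266))
# → [[1, 0, 0, 1, 0, 0, 0]]
```

In practice you would convert the resulting mask to a tensor and pass it as `global_attention_mask` to `model.generate(...)` alongside `input_ids`.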
