Dataset:

copenlu/fever_gold_evidence

English

Dataset Card for fever_gold_evidence

Dataset Summary

This is a dataset for training classification-only fact checking, containing claims from the FEVER dataset. The dataset was used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims" (EMNLP 2020).

The evidence is the gold evidence for REFUTE and SUPPORT claims. For NEI claims, evidence sentences are extracted with the system from "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 109-113." More details can be found at https://github.com/copenlu/fever-adversarial-attacks.

Supported Tasks and Leaderboards

[More Information Needed]

Languages

[More Information Needed]

Dataset Structure

Data Instances

[More Information Needed]

Data Fields

[More Information Needed]

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@inproceedings{atanasova-etal-2020-generating,
    title = "Generating Label Cohesive and Well-Formed Adversarial Claims",
    author = "Atanasova, Pepa  and
      Wright, Dustin  and
      Augenstein, Isabelle",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.emnlp-main.256",
    doi = "10.18653/v1/2020.emnlp-main.256",
    pages = "3168--3177",
    abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.",
}