Dataset:

metaeval/equate

License:

apache-2.0

EQUATE (Evaluating Quantitative Understanding Aptitude in Textual Entailment) is a framework for evaluating quantitative reasoning ability in textual entailment. EQUATE consists of five NLI test sets featuring quantities. Three of these test sets feature language from real-world sources such as news articles and social media (RTE, NewsNLI, Reddit), and two are controlled synthetic tests that evaluate a model's ability to reason with quantifiers and perform simple arithmetic (AWP, Stress Test).
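To make the task concrete, here is a minimal sketch of what a quantitative NLI example looks like: a premise and hypothesis pair whose entailment label hinges on simple arithmetic. The field names and the sample text are illustrative assumptions, not the exact schema of the released files.

```python
# Hypothetical example illustrating quantitative entailment;
# field names do not necessarily match the dataset's actual schema.
example = {
    "premise": "The company laid off 1,200 of its 6,000 employees.",
    "hypothesis": "The company laid off fewer than a quarter of its employees.",
    "label": "entailment",  # because 1200 / 6000 = 0.2, which is < 0.25
}

def less_than_quarter(part: int, whole: int) -> bool:
    """Tiny arithmetic check mirroring the reasoning the label requires."""
    return part / whole < 0.25

# The hypothesis is entailed exactly when the arithmetic holds.
print(less_than_quarter(1200, 6000))
```

A model evaluated on EQUATE must perform this kind of numeric comparison implicitly from the text alone, rather than via an explicit calculation step.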

@article{ravichander2019equate,
  title={EQUATE: A Benchmark Evaluation Framework for Quantitative Reasoning in Natural Language Inference},
  author={Ravichander, Abhilasha and Naik, Aakanksha and Rose, Carolyn and Hovy, Eduard},
  journal={arXiv preprint arXiv:1901.03735},
  year={2019}
}