Model: mideind/IceBERT
This model was trained with fairseq using the RoBERTa-base architecture. It is one of several models we have trained for Icelandic; see the paper cited below for details. The table below shows the training data used.
| Dataset | Size | Tokens |
|---|---|---|
| Icelandic Gigaword Corpus v20.05 (IGC) | 8.2 GB | 1,388M |
| Icelandic Common Crawl Corpus (IC3) | 4.9 GB | 824M |
| Greynir News articles | 456 MB | 76M |
| Icelandic Sagas | 9 MB | 1.7M |
| Open Icelandic e-books (Rafbókavefurinn) | 14 MB | 2.6M |
| Data from the medical library of Landspitali | 33 MB | 5.2M |
| Student theses from Icelandic universities (Skemman) | 2.2 GB | 367M |
| Total | 15.8 GB | 2,664M |
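Because the model follows the standard RoBERTa-base architecture, it can be loaded with the Hugging Face `transformers` library like any other RoBERTa checkpoint. Below is a minimal usage sketch, not taken from the model card itself; the example sentence is an illustrative Icelandic phrase chosen for this sketch.

```python
# Minimal sketch: masked-token prediction with IceBERT via the
# transformers fill-mask pipeline. The checkpoint name matches the
# model card; the example sentence is illustrative, not from the card.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mideind/IceBERT")

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill_mask("Ísland er <mask> land."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```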
The model is described in the paper at https://arxiv.org/abs/2201.05601. Please cite the paper if you use this model.
```bibtex
@inproceedings{snaebjarnarson-etal-2022-warm,
    title = "A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models",
    author = "Sn{\ae}bjarnarson, V{\'e}steinn and
      S{\'\i}monarson, Haukur Barri and
      Ragnarsson, P{\'e}tur Orri and
      Ing{\'o}lfsd{\'o}ttir, Svanhv{\'\i}t Lilja and
      J{\'o}nsson, Haukur and
      Thorsteinsson, Vilhjalmur and
      Einarsson, Hafsteinn",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.464",
    pages = "4356--4366",
    abstract = "We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain .is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.",
}
```