Conference article

Multilingual ELMo and the Effects of Corpus Sampling

Vinit Ravishankar

Andrey Kutuzov

Lilja Øvrelid

Erik Velldal


In: Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), May 31-June 2, 2021.

Linköping Electronic Conference Proceedings 178:41, p. 378-384

NEALT Proceedings Series 45:41, p. 378-384


Published: 2021-05-21

ISBN: 978-91-7929-614-8

ISSN: 1650-3686 (print), 1650-3740 (online)

Abstract

Multilingual pretrained language models are rapidly gaining popularity in NLP systems for non-English languages. Most of these models feature an important corpus sampling step when accumulating training data across languages, to ensure that the signal from better-resourced languages does not drown out poorly resourced ones. In this study, we train multiple multilingual recurrent language models based on the ELMo architecture and analyse both the effect of varying corpus size ratios on downstream performance and the performance difference between monolingual models for each language and broader multilingual language models. As part of this effort, we also make these trained models available for public use.
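The corpus sampling step mentioned in the abstract is commonly implemented as exponentially smoothed sampling over raw corpus-size ratios, the scheme popularised by multilingual BERT and XLM. The Python sketch below illustrates that general scheme only; it is not necessarily the exact procedure used in this paper, and the function name, the example corpus sizes, and the smoothing exponent alpha are assumptions chosen for illustration.

import numpy as np

def sampling_probabilities(corpus_sizes, alpha=0.7):
    # corpus_sizes: dict mapping language code -> token count.
    # alpha < 1 flattens the distribution, up-sampling low-resource
    # languages relative to their raw share of the combined corpus.
    # (alpha=0.7 is an assumption here, not a value from the paper.)
    langs = list(corpus_sizes)
    sizes = np.array([corpus_sizes[l] for l in langs], dtype=float)
    p = sizes / sizes.sum()   # raw corpus-size ratios
    q = p ** alpha            # exponential smoothing
    q /= q.sum()              # renormalise to a probability distribution
    return dict(zip(langs, q))

# Hypothetical example: English dominates the raw counts, but smoothing
# raises the sampling probability of the smaller corpus.
print(sampling_probabilities({"en": 1_000_000_000, "fi": 50_000_000}))

With alpha = 1 the probabilities reduce to the raw corpus-size ratios; lowering alpha trades off fidelity to the natural data distribution against giving low-resource languages more training signal.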

Keywords

multilingual, pretrained, interpretability

