This paper describes our submission to SemEval-2021 Task 2. We compare XLM-RoBERTa Base and Large in the few-shot and zero-shot settings, and additionally test the effectiveness of a k-nearest neighbors classifier in the few-shot setting in place of the more traditional multilayer perceptron. Our experiments on both the multilingual and cross-lingual data show that XLM-RoBERTa Large, unlike the Base version, transfers learning more effectively in the few-shot setting, and that the k-nearest neighbors classifier is indeed a more powerful few-shot classifier than a multilayer perceptron.
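The classifier contrast in the abstract can be illustrated with a minimal k-nearest-neighbors sketch: instead of training an MLP head, each query embedding is labeled by majority vote among its nearest labeled training embeddings. The toy 2-D vectors and labels below are hypothetical stand-ins for XLM-RoBERTa sentence-pair embeddings, not data from the paper.

```python
import math
from collections import Counter

def knn_predict(train_embs, train_labels, query, k=3):
    """Label a query embedding by majority vote among its k nearest
    training embeddings (Euclidean distance)."""
    dists = sorted(
        (math.dist(emb, query), label)
        for emb, label in zip(train_embs, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-ins for pooled sentence-pair embeddings; in the paper's
# setup these would come from XLM-RoBERTa. Labels: 1 = same word
# sense, 0 = different sense.
train_embs = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
train_labels = [1, 1, 0, 0]

print(knn_predict(train_embs, train_labels, (0.85, 0.15)))  # → 1
```

Because kNN simply memorizes the few labeled examples rather than fitting weights, it avoids the overfitting risk an MLP faces when training data is scarce, which is one plausible reading of why it fares better in the few-shot setting.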
Title of host publication: Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Place of publication: Online
Publisher: Association for Computational Linguistics
Number of pages: 5
Publication status: Published - 1 Aug 2021