UoBUK at SemEval 2021 Task 2: Zero-Shot and Few-Shot Learning for Multi-lingual and Cross-lingual Word Sense Disambiguation.

Wei Li, Harish Tayyar Madabushi, Mark Lee

Research output: Chapter in a published conference proceeding

Abstract

This paper describes our submission to SemEval 2021 Task 2. We compare XLM-RoBERTa Base and Large in the few-shot and zero-shot settings, and additionally test the effectiveness of using a k-nearest neighbors classifier in the few-shot setting in place of the more traditional multi-layer perceptron. Our experiments on both the multi-lingual and cross-lingual data show that XLM-RoBERTa Large, unlike the Base version, appears to transfer learning more effectively in the few-shot setting, and that the k-nearest neighbors classifier is indeed more powerful than a multi-layer perceptron when used in few-shot learning.
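The task (MCL-WiC) asks whether a target word is used in the same sense in two given sentences. Below is a minimal sketch, not the authors' released code, of the kind of pipeline the abstract describes: embed the target word in each sentence with XLM-RoBERTa, concatenate the two vectors into a pair representation, and compare a k-nearest neighbors classifier against an MLP on a few-shot training set. The toy example pairs and all hyperparameters (n_neighbors=1, the MLP hidden size) are illustrative assumptions.

import torch
import numpy as np
from transformers import AutoTokenizer, AutoModel
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModel.from_pretrained("xlm-roberta-large").eval()

def embed_target(sentence, target):
    """Mean-pool the final-layer vectors of the target word's subword tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    sub = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the target's subword span in the sentence (first occurrence).
    for i in range(len(ids) - len(sub) + 1):
        if ids[i:i + len(sub)] == sub:
            return hidden[i:i + len(sub)].mean(dim=0)
    raise ValueError(f"{target!r} not found in tokenized sentence")

def pair_features(s1, s2, target):
    """Concatenate the target's embeddings from the two sentences."""
    return torch.cat([embed_target(s1, target), embed_target(s2, target)]).numpy()

# Toy few-shot training pairs: label 1 = same sense, 0 = different sense.
train = [
    ("He sat on the bank of the river.",
     "Fishermen lined the bank at dawn.", "bank", 1),
    ("He sat on the bank of the river.",
     "She opened an account at the bank.", "bank", 0),
]
X = np.stack([pair_features(a, b, w) for a, b, w, _ in train])
y = [label for *_, label in train]

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500).fit(X, y)

test = pair_features("The bank raised interest rates.",
                     "The bank approved her loan.", "bank")
print("kNN:", knn.predict([test]), "MLP:", mlp.predict([test]))

With only a handful of labelled pairs per word, a non-parametric classifier like kNN has no weights to fit, which is one intuition for why it can outperform an MLP in this few-shot regime.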
Original language: English
Title of host publication: Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Place of publication: Online
Publisher: Association for Computational Linguistics
Pages: 738-742
Number of pages: 5
Publication status: Published - 1 Aug 2021
