TY - GEN
T1 - AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models
AU - Tayyar Madabushi, Harish
AU - Gow-Smith, Edward
AU - Scarton, Carolina
AU - Villavicencio, Aline
N1 - This work was partially supported by the UK EPSRC grant EP/T02450X/1 and the CDT in Speech and Language Technologies and their Applications funded by UKRI (grant number EP/S023062/1).
PY - 2021/11/1
AB - Despite their success in a variety of NLP tasks, pre-trained language models, due to their heavy reliance on compositionality, fail in effectively capturing the meanings of multiword expressions (MWEs), especially idioms. Therefore, datasets and methods to improve the representation of MWEs are urgently needed. Existing datasets are limited to providing the degree of idiomaticity of expressions along with the literal and, where applicable, (a single) non-literal interpretation of MWEs. This work presents a novel dataset of naturally occurring sentences containing MWEs manually classified into a fine-grained set of meanings, spanning both English and Portuguese. We use this dataset in two tasks designed to test i) a language model’s ability to detect idiom usage, and ii) the effectiveness of a language model in generating representations of sentences containing idioms. Our experiments demonstrate that, on the task of detecting idiomatic usage, these models perform reasonably well in the one-shot and few-shot scenarios, but that there is significant scope for improvement in the zero-shot scenario. On the task of representing idiomaticity, we find that pre-training is not always effective, while fine-tuning could provide a sample efficient method of learning representations of sentences containing MWEs.
DO - 10.18653/v1/2021.findings-emnlp.294
M3 - Chapter in a published conference proceeding
SN - 9781955917100
SP - 3464
EP - 3477
BT - Findings of the Association for Computational Linguistics: EMNLP 2021
PB - Association for Computational Linguistics
CY - Punta Cana, Dominican Republic
ER -