CogNLP-Sheffield at CMCL 2021 Shared Task: Blending Cognitively Inspired Features with Transformer-based Language Models for Predicting Eye Tracking Patterns

Peter Vickers, Rosa Wainwright, Harish Tayyar Madabushi, Aline Villavicencio

Research output: Chapter in a published conference proceeding


Abstract

The CogNLP-Sheffield submissions to the CMCL 2021 Shared Task examine the value of a variety of cognitively and linguistically inspired features for predicting eye tracking patterns, as both standalone model inputs and as supplements to contextual word embeddings (XLNet). Surprisingly, the smaller pre-trained model (XLNet-base) outperforms the larger (XLNet-large), and despite evidence that multi-word expressions (MWEs) provide cognitive processing advantages, MWE features provide little benefit to either model.
Original language: English
Title of host publication: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Place of publication: Online
Publisher: Association for Computational Linguistics
Pages: 125-133
Number of pages: 9
Publication status: Published - 1 Jun 2021

