Abstract
Tokenisation is the first step in almost all NLP tasks, and state-of-the-art transformer-based language models all use subword tokenisation algorithms to process input text. Existing algorithms have problems, often producing tokenisations of limited linguistic validity, and representing equivalent strings differently depending on their position within a word. We hypothesise that these problems hinder the ability of transformer-based models to handle complex words, and suggest that these problems are a result of allowing tokens to include spaces. We thus experiment with an alternative tokenisation approach where spaces are always treated as individual tokens. Specifically, we apply this modification to the BPE and Unigram algorithms. We find that our modified algorithms lead to improved performance on downstream NLP tasks that involve handling complex words, whilst having no detrimental effect on performance in general natural language understanding tasks. Intrinsically, we find our modified algorithms give more morphologically correct tokenisations, in particular when handling prefixes. Given the results of our experiments, we advocate for always treating spaces as individual tokens as an improved tokenisation method.
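The core modification described in the abstract, never allowing a learned token to contain a space, can be approximated with the HuggingFace `tokenizers` library. The sketch below is illustrative only and is not the authors' released implementation: a `Split` pre-tokenizer with `behavior="isolated"` keeps every space as its own piece, so no BPE merge can attach a space to a subword. The toy corpus and vocabulary size are hypothetical.

```python
# Minimal sketch (not the paper's code): train a BPE tokenizer in which
# every space is always its own token, using HuggingFace `tokenizers`.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))

# "isolated" keeps each matched space as a separate piece, so learned
# merges can never cross a space or absorb one into a subword token.
tokenizer.pre_tokenizer = pre_tokenizers.Split(pattern=" ", behavior="isolated")

trainer = trainers.BpeTrainer(vocab_size=500, special_tokens=["[UNK]"])
corpus = ["unhappiness is unavoidable", "he was unhappy and unlucky"]  # toy data
tokenizer.train_from_iterator(corpus, trainer)

# With spaces isolated, a prefix such as "un" is tokenised identically
# whether the word is sentence-initial or preceded by a space.
print(tokenizer.encode("unhappiness is unlucky").tokens)
```

One consequence of this design, which the abstract hints at, is that there is no longer a leading-space variant of each subword (as in GPT-2-style BPE), so equivalent strings receive the same tokenisation regardless of position.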
Original language | English |
---|---|
Title of host publication | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing |
Pages | 11430–11443 |
Number of pages | 13 |
Publication status | Published - 11 Dec 2022 |
Event | 2022 Conference on Empirical Methods in Natural Language Processing - Abu Dhabi, United Arab Emirates. Duration: 7 Dec 2022 → 11 Dec 2022. https://2022.emnlp.org/ |
Conference
Conference | 2022 Conference on Empirical Methods in Natural Language Processing |
---|---|
Abbreviated title | EMNLP 2022 |
Country/Territory | United Arab Emirates |
City | Abu Dhabi |
Period | 7/12/22 → 11/12/22 |
Internet address | https://2022.emnlp.org/ |
Bibliographical note
Funding: This work was partially supported by the CDT in Speech and Language Technologies and their Applications, funded by UKRI (grant number EP/S023062/1), and by the UK EPSRC grant EP/T02450X/1.