Improving Tokenisation by Alternative Treatment of Spaces

Edward Gow-Smith, Harish Tayyar Madabushi, Carolina Scarton, Aline Villavicencio

Research output: Chapter in a published conference proceeding



Tokenisation is the first step in almost all NLP tasks, and state-of-the-art transformer-based language models all use subword tokenisation algorithms to process input text. Existing algorithms have problems, often producing tokenisations of limited linguistic validity, and representing equivalent strings differently depending on their position within a word. We hypothesise that these problems hinder the ability of transformer-based models to handle complex words, and suggest that these problems are a result of allowing tokens to include spaces. We thus experiment with an alternative tokenisation approach where spaces are always treated as individual tokens. Specifically, we apply this modification to the BPE and Unigram algorithms. We find that our modified algorithms lead to improved performance on downstream NLP tasks that involve handling complex words, whilst having no detrimental effect on performance in general natural language understanding tasks. Intrinsically, we find our modified algorithms give more morphologically correct tokenisations, in particular when handling prefixes. Given the results of our experiments, we advocate for always treating spaces as individual tokens as an improved tokenisation method.
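As a minimal illustration of the idea described in the abstract (a sketch, not the authors' implementation), the alternative treatment can be framed as a pre-tokenisation step in which every space becomes its own token, so the subword algorithm (e.g. BPE or Unigram) only ever sees space-free chunks. The function name below is hypothetical.

```python
import re

def pretokenise_spaces_separate(text: str) -> list[str]:
    # Split the input so each token is either a single space or a
    # maximal space-free chunk; spaces are never merged into words.
    return re.findall(r" |[^ ]+", text)

# Because spaces are separate tokens, a word such as "undesirable"
# reaches the subword algorithm in the same form whether or not it
# is preceded by a space, so equivalent strings are represented
# identically regardless of their position.
print(pretokenise_spaces_separate("undesirable and undesirable"))
# → ['undesirable', ' ', 'and', ' ', 'undesirable']
```

This contrasts with common pre-tokenisers (e.g. GPT-2 style byte-level BPE) that attach the leading space to the following word, which can cause the same string to be tokenised differently at the start of a sentence versus mid-sentence.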
Original language: English
Title of host publication: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Pages: 11430–11443
Number of pages: 13
Publication status: Published - 11 Dec 2022
Event: 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 – 11 Dec 2022


Conference: 2022 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2022
Country/Territory: United Arab Emirates
City: Abu Dhabi

Bibliographical note

This work was partially supported by the CDT in Speech and Language Technologies and their Applications funded by UKRI (grant number EP/S023062/1) and the UK EPSRC grant EP/T02450X/1.


