BasahaCorpus: An Expanded Linguistic Resource for Readability Assessment in Central Philippine Languages

Joseph Marvin Imperial, Ekaterina Kochmar

Research output: Working paper / Preprint


Abstract

Current research on automatic readability assessment (ARA) has focused on improving the performance of models in high-resource languages such as English. In this work, we introduce and release BasahaCorpus as part of an initiative aimed at expanding available corpora and baseline models for readability assessment in lower-resource languages in the Philippines. We compiled a corpus of short fictional narratives written in Hiligaynon, Minasbate, Karay-a, and Rinconada -- languages belonging to the Central Philippine family tree subgroup -- to train ARA models using surface-level, syllable-pattern, and n-gram overlap features. We also propose a new hierarchical cross-lingual modeling approach that takes advantage of a language's placement in the family tree to increase the amount of available training data. Our study yields encouraging results that support previous work showcasing the efficacy of cross-lingual models in low-resource settings, as well as similarities in highly informative linguistic features for mutually intelligible languages.
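To make the feature types named in the abstract concrete, the sketch below illustrates how surface-level and syllable-pattern features might be extracted from a short text. This is not the authors' released code; the feature names, the consonant/vowel skeleton heuristic, and the example sentence are assumptions for demonstration only.

```python
# Illustrative sketch only (not the BasahaCorpus implementation): extract simple
# surface-level and syllable-pattern-style features of the kind an ARA model
# could be trained on. Pattern inventory and feature names are hypothetical.
import re
from collections import Counter

VOWELS = set("aeiou")

def cv_skeleton(word: str) -> str:
    """Map a word to a consonant/vowel skeleton, e.g. 'bata' -> 'CVCV'."""
    return "".join("V" if ch in VOWELS else "C" for ch in word.lower() if ch.isalpha())

def extract_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z'\-]+", text)
    skeletons = Counter(cv_skeleton(w) for w in words)
    return {
        "num_sentences": len(sentences),
        "num_words": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        # most frequent consonant/vowel skeletons (rough stand-in for syllable patterns)
        "top_cv_skeletons": skeletons.most_common(3),
    }

# Hypothetical Hiligaynon-like example sentence, used only to exercise the function.
print(extract_features("Ang bata nagbasa sang libro. Nalipay gid sia."))
```

A feature vector like this could then be fed to any standard classifier; the paper's cross-lingual setup additionally pools training data across related Central Philippine languages, which this sketch does not attempt to reproduce.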
Original language: Undefined/Unknown
Publisher: arXiv
Publication status: Published - 17 Oct 2023

Bibliographical note

Final camera-ready paper for EMNLP 2023 (Main)

Keywords

  • cs.CL
