Large Language Model as a Teacher for Zero-shot Tagging at Extreme Scales

Jinbin Zhang, Nasib Ullah, Rohit Babbar

Research output: Chapter in a published conference proceeding

Abstract

Extreme Multi-label Text Classification (XMC) entails selecting the most relevant labels for an instance from a vast label set. Extreme Zero-shot XMC (EZ-XMC) extends this challenge by operating without annotated data, relying only on raw text instances and a predefined label set, making it particularly critical for addressing cold-start problems in large-scale recommendation and categorization systems. State-of-the-art methods, such as MACLR (Xiong et al., 2022) and RTS (Zhang et al., 2022), leverage lightweight bi-encoders but rely on suboptimal pseudo labels for training, such as document titles (MACLR) or document segments (RTS), which may not align well with the intended tagging or categorization tasks. On the other hand, LLM-based approaches, like ICXML (Zhu and Zamani, 2024), achieve better label-instance alignment but are computationally expensive and impractical for real-world EZ-XMC applications due to their heavy inference costs. In this paper, we introduce LMTX (Large language Model as Teacher for eXtreme classification), a novel framework that bridges the gap between these two approaches. LMTX utilizes an LLM to identify high-quality pseudo labels during training, while employing a lightweight bi-encoder for efficient inference. This design eliminates the need for LLMs at inference time, offering the benefits of improved label alignment without sacrificing computational efficiency. Our approach achieves superior performance and efficiency over both LLM and non-LLM based approaches, establishing a new state-of-the-art in EZ-XMC.
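The following is a minimal, hypothetical sketch of the teacher-student pattern the abstract describes: an LLM acts as a teacher that selects pseudo-positive labels during training, while a lightweight bi-encoder is trained on those labels and used alone at inference. This is not the authors' code; `llm_select_labels` is a stub standing in for a real LLM call, and the hashed bag-of-words encoders are toy stand-ins (a real system would use transformer bi-encoders).

```python
# Illustrative sketch of LMTX-style training (assumptions noted above).
import torch
import torch.nn.functional as F

VOCAB, DIM = 2 ** 16, 64

class HashedEncoder(torch.nn.Module):
    """Toy bi-encoder tower: mean of hashed token embeddings, L2-normalized."""
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.EmbeddingBag(VOCAB, DIM, mode="mean")

    def forward(self, texts):
        flat, offsets, pos = [], [], 0
        for t in texts:
            ids = torch.tensor([hash(tok) % VOCAB for tok in t.lower().split()])
            offsets.append(pos); flat.append(ids); pos += len(ids)
        return F.normalize(self.emb(torch.cat(flat), torch.tensor(offsets)), dim=-1)

def llm_select_labels(doc, candidates):
    """Stub for the LLM teacher. In an LMTX-style pipeline, an LLM would
    judge which candidate labels fit the document; here we fake that
    judgment with naive word overlap so the sketch runs offline."""
    doc_words = set(doc.lower().split())
    hits = [i for i, c in enumerate(candidates) if doc_words & set(c.lower().split())]
    return hits or [0]

docs = ["how to fine tune a language model",
        "best running shoes for marathon training"]
labels = ["machine learning", "language model", "running", "marathon", "shoes"]

doc_enc, lbl_enc = HashedEncoder(), HashedEncoder()
opt = torch.optim.Adam([*doc_enc.parameters(), *lbl_enc.parameters()], lr=1e-3)

for step in range(50):
    d = doc_enc(docs)                # (num_docs, DIM)
    l = lbl_enc(labels)              # (num_labels, DIM)
    logits = d @ l.T / 0.05          # temperature-scaled cosine similarities
    # Multi-label targets come from the (stubbed) LLM teacher.
    targets = torch.zeros_like(logits)
    for i, doc in enumerate(docs):
        for j in llm_select_labels(doc, labels):
            targets[i, j] = 1.0
    loss = F.binary_cross_entropy_with_logits(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()

# Inference uses only the bi-encoder: rank labels by similarity, no LLM calls.
scores = doc_enc(docs) @ lbl_enc(labels).T
print(scores.topk(2, dim=-1).indices)
```

The key design point illustrated here is that the LLM appears only inside the training loop (to produce the targets), so its cost is paid once; serving reduces to a dot product between precomputed label embeddings and the document embedding.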

Original language: English
Title of host publication: Proceedings - International Conference on Computational Linguistics, COLING
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Place of Publication: Texas, U.S.A.
Publisher: Association for Computational Linguistics (ACL)
Pages: 3465-3478
Number of pages: 14
ISBN (Electronic): 9798891761964
Publication status: Published - 24 Jan 2025
Event: 31st International Conference on Computational Linguistics, COLING 2025 - Abu Dhabi, United Arab Emirates
Duration: 19 Jan 2025 - 24 Jan 2025

Publication series

Name: Proceedings - International Conference on Computational Linguistics, COLING
Volume: Part F206484-1
ISSN (Print): 2951-2093

Conference

Conference: 31st International Conference on Computational Linguistics, COLING 2025
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 19/01/25 - 24/01/25

Acknowledgements

We thank the reviewers for their valuable comments and suggestions. We also sincerely thank Ansh Arora for his assistance in evaluating certain baselines of zero-shot XMC models.

Funding

We acknowledge the support of the Research Council of Finland (Academy of Finland) via grants 347707 and 348215. We also thank the Aalto Science-IT project and CSC - IT Center for Science, Finland, for the computational resources provided.

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Theoretical Computer Science
