Convex surrogates for unbiased loss functions in extreme classification with missing labels

Mohammadreza Qaraei, Erik Schultheis, Priyanshu Gupta, Rohit Babbar

Research output: Chapter in a published conference proceeding


Abstract

Extreme Classification (XC) refers to supervised learning where each training/test instance is labeled with a small subset of relevant labels chosen from a large set of possible target labels. The framework of XC has been widely employed in web applications such as automatic labeling of web-encyclopedias, prediction of related searches, and recommendation systems. While most state-of-the-art models in XC achieve high overall accuracy by performing well on the frequently occurring labels, they perform poorly on the large number of infrequent (tail) labels. This arises from two statistical challenges: (i) missing labels, as it is virtually impossible to manually assign every relevant label to an instance, and (ii) a highly imbalanced data distribution in which a large fraction of labels are tail labels. In this work, we consider common loss functions that decompose over labels and calculate unbiased estimates that compensate for missing labels according to Natarajan et al. [26]. This turns out to be disadvantageous from an optimization perspective, as important properties such as convexity and lower-boundedness are lost. To circumvent this problem, we use the fact that typical loss functions in XC are convex surrogates of the 0-1 loss, and thus propose to switch to convex surrogates of its unbiased version. These surrogates are further adapted to the label imbalance by combining them with label-frequency-based rebalancing. We show that the proposed loss functions can be easily incorporated into various frameworks for extreme classification, including (i) linear classifiers, such as DiSMEC, on sparse input representations, (ii) the attention-based deep architecture AttentionXML, trained on dense GloVe embeddings, and (iii) the XLNet-based transformer model for extreme classification, APLC-XLNet. Our results demonstrate consistent improvements over the respective vanilla baseline models on the propensity-scored metrics for precision and nDCG.
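
To illustrate the construction described in the abstract for a single binary label, the sketch below assumes a per-label propensity p (the probability that a relevant label is actually observed) and uses the logistic loss as the convex surrogate of the 0-1 loss. The function names, the choice of logistic loss, and the exact constants are illustrative assumptions, not the authors' code; they follow the Natarajan et al. style unbiased estimator under one-sided label noise and the idea of unbiasing the 0-1 loss before taking a surrogate.

```python
import numpy as np

def logistic_loss(t, y):
    """Convex surrogate of the 0-1 loss; y in {-1, +1}, t is the model score."""
    return np.log1p(np.exp(-y * t))

def unbiased_loss(t, y_obs, p):
    """Natarajan-style unbiased estimate under one-sided label noise:
    a relevant label is observed with probability p and never spuriously added.
    Unbiased in expectation, but the negative term for observed positives makes
    the objective non-convex and unbounded below (it tends to -inf as t grows)."""
    if y_obs == 1:
        return logistic_loss(t, +1) / p - (1.0 - p) / p * logistic_loss(t, -1)
    return logistic_loss(t, -1)

def convex_surrogate_of_unbiased(t, y_obs, p):
    """Sketch of the proposed alternative: unbias the 0-1 loss first, then take a
    convex surrogate. For observed positives the unbiased 0-1 loss reduces, up to
    an additive constant, to a re-weighted 0-1 loss, so its surrogate stays convex."""
    if y_obs == 1:
        return (2.0 / p - 1.0) * logistic_loss(t, +1) - (1.0 / p - 1.0)
    return logistic_loss(t, -1)

# Hypothetical usage: a tail label with low propensity.
p = 0.3
for t in (-2.0, 0.0, 2.0, 10.0):
    print(t, unbiased_loss(t, 1, p), convex_surrogate_of_unbiased(t, 1, p))
```

Running the hypothetical usage loop shows the issue the abstract points to: the unbiased estimate on an observed positive becomes increasingly negative for large scores, while the convex surrogate of the unbiased 0-1 loss remains convex and bounded below, and can then be combined with label-frequency-based rebalancing.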

Original language: English
Title of host publication: The Web Conference 2021 - Proceedings of the World Wide Web Conference, WWW 2021
Editors: J. Leskovec, M. Grobelnik
Place of Publication: New York, U.S.A.
Publisher: Association for Computing Machinery
Pages: 3711-3720
Number of pages: 10
ISBN (Electronic): 9781450383127
DOIs
Publication status: Published - 3 Jun 2021
Event: 2021 World Wide Web Conference, WWW 2021 - Ljubljana, Slovenia
Duration: 19 Apr 2021 - 23 Apr 2021

Publication series

Name: The Web Conference 2021 - Proceedings of the World Wide Web Conference, WWW 2021

Conference

Conference: 2021 World Wide Web Conference, WWW 2021
Country/Territory: Slovenia
City: Ljubljana
Period: 19/04/21 - 23/04/21

Keywords

  • Extreme classification
  • Imbalanced classification
  • Loss functions
  • Missing labels

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Software
