TY - JOUR

T1 - Speeding-up one-versus-all training for extreme classification via mean-separating initialization

AU - Schultheis, Erik

AU - Babbar, Rohit

N1 - Funding: Open Access funding provided by Aalto University; Academy of Finland (347707).
Data availability: http://manikvarma.org/downloads/XC/XMLRepository.html

PY - 2022/10/18

Y1 - 2022/10/18

N2 - In this paper, we show that a simple, data-dependent way of setting the initial vector can substantially speed up the training of linear one-versus-all classifiers in extreme multi-label classification (XMC). We discuss the problem of choosing the initial weights from the perspective of three goals. We want to start in a region of weight space (a) with a low loss value, (b) that is favourable for second-order optimization, and (c) where the conjugate-gradient (CG) calculations can be performed quickly. For margin losses, such an initialization is achieved by selecting the initial vector so that it separates the mean of all positive (relevant for a label) instances from the mean of all negatives, two quantities that can be calculated quickly for the highly imbalanced binary problems occurring in XMC. We demonstrate a training speedup of up to 5× on the Amazon-670K dataset with 670,000 labels. This comes in part from the reduced number of iterations needed because the optimization starts closer to the solution, and in part from an implicit negative-mining effect that allows easy negatives to be ignored in the CG step. Because of the convex nature of the optimization problem, the speedup is achieved without any degradation in classification accuracy. The implementation can be found at https://github.com/xmc-aalto/dismecpp.

KW - 2nd order optimization

KW - Class imbalance

KW - Large-scale multi-label classification

KW - Linear classification

KW - Weight initialization

UR - http://www.scopus.com/inward/record.url?scp=85140115267&partnerID=8YFLogxK

U2 - 10.1007/s10994-022-06228-2

DO - 10.1007/s10994-022-06228-2

M3 - Article

AN - SCOPUS:85140115267

SN - 0885-6125

VL - 111

SP - 3953

EP - 3976

JO - Machine Learning

JF - Machine Learning

IS - 11

ER -