TY - GEN
T1 - Majoration-Minimization for Sparse SVMs
AU - Benfenati, Alessandro
AU - Chouzenoux, Emilie
AU - Pesquet, Jean-Christophe
AU - Franchini, Giorgia
AU - Latva-Äijö, Salla
AU - Narnhofer, Dominik
AU - Scott, Seb
AU - Yousefi, Mahsa
PY - 2024/10/3
Y1 - 2024/10/3
N2 - Several decades ago, Support Vector Machines (SVMs) were introduced for performing binary classification tasks under a supervised framework. Nowadays, they often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena. In this work, we investigate the training of SVMs through a smooth, sparsity-promoting regularized squared hinge loss minimization. This choice paves the way to the application of fast training methods built on majorization-minimization approaches, benefiting from the Lipschitz differentiability of the loss function. Moreover, the proposed approach allows us to handle sparsity-preserving regularizers that promote the selection of the most significant features, thus enhancing performance. Numerical tests and comparisons conducted on three different datasets demonstrate the good performance of the proposed methodology in terms of qualitative metrics (accuracy, precision, recall, and F1 score) as well as computational cost.
AB - Several decades ago, Support Vector Machines (SVMs) were introduced for performing binary classification tasks under a supervised framework. Nowadays, they often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena. In this work, we investigate the training of SVMs through a smooth, sparsity-promoting regularized squared hinge loss minimization. This choice paves the way to the application of fast training methods built on majorization-minimization approaches, benefiting from the Lipschitz differentiability of the loss function. Moreover, the proposed approach allows us to handle sparsity-preserving regularizers that promote the selection of the most significant features, thus enhancing performance. Numerical tests and comparisons conducted on three different datasets demonstrate the good performance of the proposed methodology in terms of qualitative metrics (accuracy, precision, recall, and F1 score) as well as computational cost.
U2 - 10.1007/978-981-97-6769-4_3
DO - 10.1007/978-981-97-6769-4_3
M3 - Chapter in a published conference proceeding
SN - 978-981-97-6768-7
VL - 61
T3 - Springer INdAM Series
SP - 31
EP - 54
BT - Advanced Techniques in Optimization for Machine Learning and Imaging
PB - Springer, Singapore
CY - Singapore
ER -