Rest-Katyusha: Exploiting the solution's structure via scheduled restart schemes

Junqi Tang, Francis Bach, Mohammad Golbabaee, Mike Davies

Research output: Contribution to journal › Conference article

Abstract

We propose a structure-adaptive variant of Katyusha, a state-of-the-art stochastic variance-reduced gradient algorithm, for regularized empirical risk minimization. The proposed method is able to exploit the intrinsic low-dimensional structure of the solution, such as sparsity or low rank, which is enforced by non-smooth regularization, to achieve an even faster convergence rate. This provable algorithmic improvement is achieved by restarting the Katyusha algorithm according to restricted strong-convexity (RSC) constants. We also propose an adaptive-restart variant that estimates the RSC constant on the fly and adjusts the restart period automatically. We demonstrate the effectiveness of our approach via numerical experiments.
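The restart idea in the abstract can be illustrated generically. The sketch below is not the paper's Rest-Katyusha algorithm (which wraps the stochastic Katyusha solver and sets the restart period from RSC constants); it is a minimal hypothetical analogue using FISTA as the inner accelerated solver on a lasso problem: run the inner solver for a fixed period, reset its momentum, and warm-start the next period from the previous output. The function names, the choice of FISTA, and the fixed restart period are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (element-wise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, x0, n_iter, step):
    # Inner accelerated proximal-gradient (FISTA) pass for the lasso
    # objective 0.5*||Ax - b||^2 + lam*||x||_1, run for n_iter iterations.
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - step * grad, step * lam)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

def scheduled_restart(A, b, lam, x0, period, n_restarts, step):
    # Scheduled-restart wrapper (illustrative stand-in for restarting
    # Katyusha): every `period` iterations the momentum is reset and the
    # inner solver is warm-started from its previous output. In the paper,
    # the period is chosen from restricted strong-convexity constants;
    # here it is simply a fixed hyperparameter.
    x = x0
    for _ in range(n_restarts):
        x = fista(A, b, lam, x, period, step)
    return x
```

An adaptive variant, in the spirit of the paper's on-the-fly RSC estimation, would monitor the contraction of the objective across periods and lengthen or shorten `period` accordingly rather than fixing it in advance.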

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Rest-Katyusha: Exploiting the solution's structure via scheduled restart schemes. / Tang, Junqi; Bach, Francis; Golbabaee, Mohammad; Davies, Mike.

In: Advances in Neural Information Processing Systems, 01.01.2018, p. 429-440.

Research output: Contribution to journal › Conference article

@article{a096dc5823b64165b1d53d5d7d7d2bdf,
title = "Rest-Katyusha: Exploiting the solution's structure via scheduled restart schemes",
abstract = "We propose a structure-adaptive variant of Katyusha, a state-of-the-art stochastic variance-reduced gradient algorithm, for regularized empirical risk minimization. The proposed method is able to exploit the intrinsic low-dimensional structure of the solution, such as sparsity or low rank, which is enforced by non-smooth regularization, to achieve an even faster convergence rate. This provable algorithmic improvement is achieved by restarting the Katyusha algorithm according to restricted strong-convexity (RSC) constants. We also propose an adaptive-restart variant that estimates the RSC constant on the fly and adjusts the restart period automatically. We demonstrate the effectiveness of our approach via numerical experiments.",
author = "Junqi Tang and Francis Bach and Mohammad Golbabaee and Mike Davies",
year = "2018",
month = jan,
day = "1",
language = "English",
pages = "429--440",
journal = "Advances in Neural Information Processing Systems",
issn = "1049-5258",
publisher = "MIT Press",

}

TY - JOUR

T1 - Rest-Katyusha

T2 - Advances in Neural Information Processing Systems

AU - Tang, Junqi

AU - Bach, Francis

AU - Golbabaee, Mohammad

AU - Davies, Mike

PY - 2018/1/1

Y1 - 2018/1/1

N2 - We propose a structure-adaptive variant of Katyusha, a state-of-the-art stochastic variance-reduced gradient algorithm, for regularized empirical risk minimization. The proposed method is able to exploit the intrinsic low-dimensional structure of the solution, such as sparsity or low rank, which is enforced by non-smooth regularization, to achieve an even faster convergence rate. This provable algorithmic improvement is achieved by restarting the Katyusha algorithm according to restricted strong-convexity (RSC) constants. We also propose an adaptive-restart variant that estimates the RSC constant on the fly and adjusts the restart period automatically. We demonstrate the effectiveness of our approach via numerical experiments.

AB - We propose a structure-adaptive variant of Katyusha, a state-of-the-art stochastic variance-reduced gradient algorithm, for regularized empirical risk minimization. The proposed method is able to exploit the intrinsic low-dimensional structure of the solution, such as sparsity or low rank, which is enforced by non-smooth regularization, to achieve an even faster convergence rate. This provable algorithmic improvement is achieved by restarting the Katyusha algorithm according to restricted strong-convexity (RSC) constants. We also propose an adaptive-restart variant that estimates the RSC constant on the fly and adjusts the restart period automatically. We demonstrate the effectiveness of our approach via numerical experiments.

UR - http://www.scopus.com/inward/record.url?scp=85064836500&partnerID=8YFLogxK

M3 - Conference article

SP - 429

EP - 440

JO - Advances in Neural Information Processing Systems

JF - Advances in Neural Information Processing Systems

SN - 1049-5258

ER -