Abstract

Learning from small data sets is difficult in the absence of specific domain knowledge. We present a regularized linear model called STEW that benefits from a generic and prevalent form of prior knowledge: feature directions. STEW shrinks weights toward each other, converging to an equal-weights solution in the limit of infinite regularization. We provide theoretical results on the equal-weights solution that explain how STEW can productively trade off bias and variance. Across a wide range of learning problems, including Tetris, STEW outperformed existing linear models such as ridge regression, the Lasso, and the non-negative Lasso when feature directions were known. The model proved robust to unreliable (or absent) feature directions, still outperforming alternative models under diverse conditions. Our results in Tetris were obtained using a novel approach to learning in sequential decision environments based on multinomial logistic regression.
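
A minimal sketch of the shrink-toward-each-other idea, assuming a squared pairwise-difference penalty (the function name `fit_stew` and the closed-form solver are illustrative assumptions, not the paper's reference implementation; the paper may use a different penalty norm):

```python
import numpy as np

def fit_stew(X, y, lam):
    """Linear regression with a penalty that shrinks weights toward
    each other: lam * sum_{i<j} (w_i - w_j)^2.  Assumes features are
    sign-aligned using known feature directions, so all weights are
    expected to be positive.
    """
    n, k = X.shape
    # Algebraic identity: sum_{i<j} (w_i - w_j)^2 = w^T D w
    # with D = k*I - 1 1^T.
    D = k * np.eye(k) - np.ones((k, k))
    # Ridge-like closed form: argmin ||y - Xw||^2 + lam * w^T D w
    return np.linalg.solve(X.T @ X + lam * D, X.T @ y)

# As lam -> infinity the solution approaches the best equal-weights
# fit, i.e. all entries of w become (approximately) identical.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, 1.2, 0.8, 1.1, 0.9])
y = X @ w_true + rng.normal(scale=0.5, size=100)
print(fit_stew(X, y, lam=0.0))   # ordinary least squares
print(fit_stew(X, y, lam=1e6))   # near equal weights
```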
Original language: English
Pages: 3953-3962
Publication status: Published - 15 Jun 2019
Event: Thirty-sixth International Conference on Machine Learning (ICML), Long Beach Convention Center, Long Beach, United States
Duration: 9 Jun 2019 - 15 Jun 2019
Conference number: 36
Internet address: https://icml.cc/

Keywords

  • Machine learning
  • Reinforcement learning
  • Regularization
  • Equal weights
