Regularization in Directable Environments with Application to Tetris

Jan Lichtenberg, Özgür Şimşek

Research output: Chapter in a published conference proceeding


Abstract

Learning from small data sets is difficult in the absence of specific domain knowledge. We present a regularized linear model called STEW that benefits from a generic and prevalent form of prior knowledge: feature directions. STEW shrinks weights toward each other, converging to an equal-weights solution in the limit of infinite regularization. We provide theoretical results on the equal-weights solution that explain how STEW can productively trade off bias and variance. Across a wide range of learning problems, including Tetris, STEW outperformed existing linear models, including ridge regression, the Lasso, and the non-negative Lasso, when feature directions were known. The model proved to be robust to unreliable (or absent) feature directions, still outperforming alternative models under diverse conditions. Our results in Tetris were obtained using a novel approach to learning in sequential decision environments based on multinomial logistic regression.
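The abstract describes STEW as a linear model whose regularizer pulls the weights toward one another, reducing to an equal-weights solution as the regularization strength grows. As an illustrative sketch only (not the paper's exact formulation), the Python snippet below implements one natural penalty of this kind, the sum of squared pairwise weight differences, which admits a ridge-like closed-form solution; the feature matrix is assumed to be sign-adjusted using the known feature directions so that shrinking toward equal weights is meaningful.

```python
import numpy as np

def stew_like_fit(X, y, lam):
    """Closed-form fit with a pairwise shrinkage penalty (illustrative sketch).

    Minimizes ||y - X w||^2 + lam * sum_{i<j} (w_i - w_j)^2.
    The penalty equals w^T D w with D = p*I - 1 1^T, so the minimizer is
    (X^T X + lam * D)^{-1} X^T y. As lam -> infinity, all weights converge
    to a common value, i.e. an equal-weights solution.

    This is consistent with the abstract's description but is an assumed
    formulation, not necessarily the exact penalty used in the paper.
    X is assumed to be direction-adjusted (each feature multiplied by its
    known sign).
    """
    n, p = X.shape
    D = p * np.eye(p) - np.ones((p, p))  # pairwise-difference penalty matrix
    return np.linalg.solve(X.T @ X + lam * D, X.T @ y)

# Toy usage: the fitted weights move toward a common value as lam increases.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
w_true = np.array([1.0, 0.8, 1.2, 0.9])
y = X @ w_true + 0.1 * rng.normal(size=50)
for lam in (0.0, 10.0, 1e6):
    print(lam, np.round(stew_like_fit(X, y, lam), 3))
```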


This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665992.
Original language: English
Title of host publication: Proceedings of Machine Learning Research
Publisher: International Machine Learning Society (IMLS)
Pages: 3953-3962
Number of pages: 10
Volume: 97
Publication status: Published - 15 Jun 2019
Event: Thirty-sixth International Conference on Machine Learning - Long Beach Convention Center, Long Beach, United States
Duration: 9 Jun 2019 - 15 Jun 2019
Conference number: 36
https://icml.cc/

Publication series

Name: Proceedings of Machine Learning Research
Publisher: International Machine Learning Society (IMLS)
Volume: 97
ISSN (Electronic): 2640-3498

Conference

Conference: Thirty-sixth International Conference on Machine Learning
Abbreviated title: ICML
Country/Territory: United States
City: Long Beach
Period: 9/06/19 - 15/06/19
Internet address: https://icml.cc/

Keywords

  • Machine learning
  • Reinforcement learning
  • Regularization
  • Equal weights
