Iterative Policy-Space Expansion in Reinforcement Learning

Jan Lichtenberg, Özgür Şimşek

Research output: Contribution to conference › Poster › peer-review


Abstract

Humans and animals solve a difficult problem much more easily when they are presented with a sequence of problems that starts simple and slowly increases in difficulty. We explore this idea in the context of reinforcement learning. Rather than being given an external curriculum of progressively more difficult tasks, the agent solves a single task using a decreasingly constrained policy space. The algorithm we propose first learns to categorize features as positive or negative before gradually learning a more refined policy. Experimental results in Tetris demonstrate a superior learning rate for our approach compared with existing algorithms.
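The two-stage idea described above, first learning only the sign of each feature weight and then expanding to a richer policy space, can be sketched in a toy form. This is an illustrative sketch, not the authors' implementation: the proxy return, the feature count, and the exhaustive-then-local search procedures are assumptions standing in for policy evaluation in Tetris.

```python
import random

random.seed(0)

# Hypothetical ground-truth feature weights. In the Tetris setting, evaluate()
# would instead be the average return of the linear policy induced by w.
TRUE_W = [0.9, -0.6, 0.3, -0.8]

def evaluate(w):
    # Scale-invariant alignment with TRUE_W serves as a toy proxy return.
    dot = sum(a * b for a, b in zip(w, TRUE_W))
    norm = sum(a * a for a in w) ** 0.5 or 1.0
    return dot / norm

def learn_signs(n_features):
    # Phase 1: constrained policy space -- each weight is only +1 or -1.
    best, best_score = None, float("-inf")
    for mask in range(2 ** n_features):
        w = [1.0 if (mask >> i) & 1 else -1.0 for i in range(n_features)]
        score = evaluate(w)
        if score > best_score:
            best, best_score = w, score
    return best

def refine(w, iters=500, step=0.1):
    # Phase 2: expanded policy space -- real-valued weights refined by local
    # search, initialized at the sign solution from phase 1.
    w = list(w)
    for _ in range(iters):
        cand = [x + random.gauss(0.0, step) for x in w]
        if evaluate(cand) > evaluate(w):
            w = cand
    return w

signs = learn_signs(len(TRUE_W))
refined = refine(signs)
```

Because phase 2 only accepts improving candidates, the refined policy never scores worse than the sign-constrained one; the small constrained space is what makes phase 1 cheap to search exhaustively.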


This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 665992.
Original language: English
Publication status: Published - 2019
Event: NeurIPS 2019 Workshop on Biological and Artificial Reinforcement Learning - Vancouver, Canada
Duration: 13 Dec 2019 → …
https://sites.google.com/view/biologicalandartificialrl/


Keywords

  • Reinforcement learning
  • Human decision making
  • Tetris
