Humans and animals solve a difficult problem much more easily when they are presented with a sequence of problems that starts simple and slowly increases in difficulty. We explore this idea in the context of reinforcement learning. Rather than relying on an externally provided curriculum of progressively more difficult tasks, the agent solves a single task using a decreasingly constrained policy space. The algorithm we propose first learns to categorize features as positive or negative before gradually learning a more refined policy. Experimental results in Tetris demonstrate a superior learning rate for our approach compared with existing algorithms.
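The two-phase idea described above can be illustrated with a toy example. This is a minimal sketch, not the paper's implementation: it assumes a linear policy over hand-crafted features, uses a synthetic action-ranking objective as a stand-in for episodic return, and substitutes simple hill climbing for the actual learning algorithm; all names and parameters are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy proxy for an episodic task: a linear policy w·phi ranks two candidate
# actions per state; "return" is agreement with a fixed target ranking.
true_w = np.array([1.5, -2.0, 0.7, -0.3])   # illustrative target weights
d = true_w.size
feats = rng.standard_normal((500, 2, d))    # 500 states, 2 candidate actions each
target = (feats @ true_w).argmax(axis=1)

def policy_return(w):
    """Fraction of states where w picks the same action as the target weights."""
    return ((feats @ w).argmax(axis=1) == target).mean()

# Phase 1: constrained policy space -- each weight is restricted to {-1, +1},
# so the agent only learns whether each feature is "positive" or "negative".
signs = max((np.array(s) for s in itertools.product([-1.0, 1.0], repeat=d)),
            key=policy_return)

# Phase 2: expanded policy space -- refine continuous magnitudes, starting
# from the signs learned in phase 1 (hill climbing stands in for RL updates).
w = signs.copy()
for _ in range(300):
    cand = w + 0.1 * rng.standard_normal(d)
    if policy_return(cand) >= policy_return(w):
        w = cand
```

Because phase 2 starts from the sign vector found in the constrained search and only accepts non-worsening updates, the refined policy is never worse than the sign-only policy on this proxy objective, mirroring the intent of expanding the policy space only after the coarse structure is learned.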
|Publication status||Published - 2019|
|Event||NeurIPS 2019 Workshop on Biological and Artificial Reinforcement Learning - Vancouver, Canada|
|Duration||13 Dec 2019 → …|
- Reinforcement learning
- Human decision making
Lichtenberg, J., & Şimşek, Ö. (2019). Iterative Policy-Space Expansion in Reinforcement Learning. Poster session presented at NeurIPS 2019 Workshop on Biological and Artificial Reinforcement Learning, Vancouver, Canada.