Abstract
Humans and animals solve a difficult problem much more easily when they are presented with a sequence of problems that starts simple and slowly increases in difficulty. We explore this idea in the context of reinforcement learning. Rather than being given an external curriculum of progressively more difficult tasks, the agent solves a single task using a decreasingly constrained policy space. The algorithm we propose first learns to categorize features as positive or negative before gradually learning a more refined policy. Experimental results in Tetris show that our approach learns faster than existing algorithms.
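To make the idea of a decreasingly constrained policy space concrete, below is a minimal, hypothetical sketch of how it could look for a feature-based Tetris policy. The feature names, the sign/magnitude split, and the relaxation schedule are illustrative assumptions, not the authors' implementation: weights start constrained to signs in {-1, +1} (is a feature good or bad?) and are gradually allowed to take arbitrary magnitudes (how much does it matter?).

```python
import numpy as np

# Hypothetical Tetris-style board features; names are illustrative only.
FEATURES = ["holes", "landing_height", "row_transitions", "cleared_lines"]


def constrained_weights(signs, magnitudes, alpha):
    """Blend a sign-only policy with a fully parameterised one.

    alpha = 0.0 -> weights are just signs in {-1, +1} (most constrained)
    alpha = 1.0 -> weights are unconstrained real values (least constrained)
    """
    return (1.0 - alpha) * signs + alpha * magnitudes


def score_placement(feature_values, weights):
    """Linear evaluation of one candidate piece placement."""
    return float(np.dot(weights, feature_values))


# Gradually relax the constraint over training episodes (schedule is an assumption).
signs = np.array([-1.0, -1.0, -1.0, +1.0])   # learned first: sign of each feature
magnitudes = np.zeros(len(FEATURES))          # refined later: real-valued weights

for episode in range(1000):
    alpha = min(1.0, episode / 500)
    w = constrained_weights(signs, magnitudes, alpha)
    # ... run an episode, evaluate placements with score_placement(f, w),
    # and update `signs` / `magnitudes` with the learner of your choice ...
```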
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665992.
Original language | English |
---|---|
Publication status | Published - 2019 |
Event | NeurIPS 2019 Workshop on Biological and Artificial Reinforcement Learning, Vancouver, Canada. Duration: 13 Dec 2019 → … https://sites.google.com/view/biologicalandartificialrl/ |
Workshop
Workshop | NeurIPS 2019 Workshop on Biological and Artificial Reinforcement Learning |
---|---|
Country/Territory | Canada |
City | Vancouver |
Period | 13/12/19 → … |
Internet address | https://sites.google.com/view/biologicalandartificialrl/ |
Keywords
- Reinforcement learning
- Human decision making
- Tetris