Abstract
We present a new method for automatically creating useful temporally-extended actions in reinforcement learning. Our method identifies states that lie between two densely-connected regions of the state space and generates temporally-extended actions (e.g., options) that take the agent efficiently to these states. We search for these states using graph partitioning methods on local views of the transition graph. This local perspective is a key property of our algorithm that differentiates it from most of the earlier work in this area, and one that allows it to scale to problems with large state spaces.
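The abstract's core idea — finding states that sit between two densely-connected regions of a (local) transition graph — can be illustrated with a toy sketch. The paper itself uses graph partitioning (cut-based) methods; as a simple stand-in, the hypothetical code below scores each state by how often it lies strictly inside a shortest path between other states, which singles out the connector state bridging two dense clusters. All names and the scoring heuristic here are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """Enumerate all shortest paths from s to t by breadth-first search."""
    best, paths = None, []
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than a known shortest path; prune
        node = path[-1]
        if node == t:
            best = best or len(path)
            if len(path) == best:
                paths.append(path)
            continue
        for nb in graph[node]:
            if nb not in path:
                queue.append(path + [nb])
    return paths

def bottleneck_scores(graph):
    """Count, over all state pairs, how often each state lies strictly
    inside a shortest path -- a stand-in for 'between two dense regions'."""
    score = {v: 0 for v in graph}
    for s, t in combinations(graph, 2):
        for path in shortest_paths(graph, s, t):
            for v in path[1:-1]:
                score[v] += 1
    return score

# Toy transition graph: two densely-connected clusters {0,1,2} and
# {4,5,6}, joined only through state 3 (the candidate subgoal).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]
graph = {v: set() for v in range(7)}
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

scores = bottleneck_scores(graph)
subgoal = max(scores, key=scores.get)  # state 3: the bridge between clusters
```

In the paper's setting, an option would then be generated whose policy drives the agent efficiently to the identified state; the key point the abstract stresses is that such analysis is run on *local* views of the transition graph rather than the full state space.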
| Original language | English |
| --- | --- |
| Title of host publication | AAAI Workshop Proceedings, 2004 |
| Publisher | AAAI Press |
| Number of pages | 6 |
| ISBN (Print) | 978-0-262-51183-4 |
| Publication status | Published - 31 Jul 2004 |