Local graph partitioning as a basis for generating temporally-extended actions in reinforcement learning

Özgür Şimşek, Alicia P. Wolfe, Andrew G. Barto

Research output: Chapter or section in a book/report/conference proceeding › Chapter in a published conference proceeding

Abstract

We present a new method for automatically creating useful temporally-extended actions in reinforcement learning. Our method identifies states that lie between two densely-connected regions of the state space and generates temporally-extended actions (e.g., options) that take the agent efficiently to these states. We search for these states using graph partitioning methods on local views of the transition graph. This local perspective is a key property of our algorithm, one that differentiates it from most earlier work in this area and allows it to scale to problems with large state spaces.
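The sketch below illustrates the general idea described in the abstract, not the authors' actual algorithm: partition a small local view of a transition graph into two densely-connected regions and flag the states on the cut boundary as candidate subgoals for an option. The example graph, node labels, and the use of NetworkX's Kernighan-Lin bisection (in place of whatever partitioning criterion the paper uses) are all assumptions made for illustration.

```python
# Illustrative sketch only: find candidate subgoal states by partitioning
# a local transition graph; these are NOT the paper's exact methods.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Hypothetical local transition graph: two densely connected "rooms"
# (states 0-3 and 4-7) joined by a single doorway edge (3, 4).
edges = [
    (0, 1), (0, 2), (1, 2), (1, 3), (2, 3),  # room A
    (4, 5), (4, 6), (5, 6), (5, 7), (6, 7),  # room B
    (3, 4),                                  # doorway between the rooms
]
G = nx.Graph(edges)

# Bisect the local graph; a good cut separates the two dense regions.
part_a, part_b = kernighan_lin_bisection(G, seed=0)

# Candidate subgoal states: endpoints of edges that cross the cut.
cut_edges = [(u, v) for u, v in G.edges if (u in part_a) != (v in part_a)]
subgoals = {s for edge in cut_edges for s in edge}

print("partition:", sorted(part_a), sorted(part_b))
print("candidate subgoal states:", sorted(subgoals))

# In an options framework, one would then define a temporally-extended
# action whose policy drives the agent to a subgoal state and terminates.
```

In this toy example the bisection separates the two rooms, so the candidate subgoals are the doorway states 3 and 4, the kind of "between two densely-connected regions" states the abstract describes.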
Original language: English
Title of host publication: AAAI Workshop Proceedings, 2004
Publisher: AAAI Press
Number of pages: 6
ISBN (Print): 978-0-262-51183-4
Publication status: Published - 31 Jul 2004
