
Representation Discovery for Kernel-Based Reinforcement Learning

dc.date.accessioned: 2015-11-30T19:30:04Z
dc.date.accessioned: 2018-11-26T22:27:30Z
dc.date.available: 2015-11-30T19:30:04Z
dc.date.available: 2018-11-26T22:27:30Z
dc.date.issued: 2015-11-24
dc.identifier.uri: http://hdl.handle.net/1721.1/100053
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/100053
dc.description.abstract: Recent years have seen increased interest in non-parametric reinforcement learning. There are now practical kernel-based algorithms for approximating value functions; however, kernel regression requires that the underlying function being approximated be smooth on its domain. Few problems of interest satisfy this requirement in their natural representation. In this paper we define the Value-Consistent Pseudometric (VCPM), the distance function corresponding to a transformation of the domain into a space where the target function is maximally smooth and thus well approximated by kernel regression. We then present DKBRL, an iterative batch RL algorithm that interleaves steps of Kernel-Based Reinforcement Learning and distance-metric adjustment. We evaluate its performance on Acrobot and PinBall, continuous-space reinforcement learning domains with discontinuous value functions.
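The abstract refers to Kernel-Based Reinforcement Learning (KBRL), in which a value function over sampled transitions is estimated by a normalized kernel averager and improved by Bellman backups. As a minimal, single-action sketch of that idea (not the paper's DKBRL algorithm: the Gaussian kernel, the bandwidth, and all function names here are illustrative assumptions, and the VCPM metric-adjustment step is omitted):

```python
import numpy as np

def gaussian_kernel(x, centers, bandwidth):
    """Normalized Gaussian kernel weights of a query point against sample states."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w / w.sum()

def kbrl_value_iteration(S, R, S_next, gamma=0.95, bandwidth=0.5, iters=200):
    """KBRL-style value iteration over one-action sample transitions.

    S: (n, d) sampled states; R: (n,) observed rewards; S_next: (n, d) successors.
    Returns the value estimate at each sampled successor state.
    """
    n = len(S)
    # K[i, j] = kernel weight that successor state i places on sample transition j
    K = np.vstack([gaussian_kernel(s, S, bandwidth) for s in S_next])
    V = np.zeros(n)
    for _ in range(iters):
        # Kernel-averaged Bellman backup; a contraction since rows of K sum to 1
        V = K @ (R + gamma * V)
    return V
```

Kernel regression of this kind is only accurate where the target value function is smooth under the chosen distance; the paper's contribution is to iteratively reshape that distance (the VCPM) so smoothness holds, then rerun the KBRL step.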
dc.format.extent: 16 p.
dc.rights: Creative Commons Attribution-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-sa/4.0/
dc.subject: Metric learning
dc.title: Representation Discovery for Kernel-Based Reinforcement Learning


Files in this item

File: MIT-CSAIL-TR-2015-032.pdf
Size: 1.960Mb
Format: application/pdf

This item appears in the following Collection(s)

