
Policy Improvement for POMDPs Using Normalized Importance Sampling

dc.date.accessioned: 2004-10-20T20:50:06Z
dc.date.accessioned: 2018-11-24T10:23:25Z
dc.date.available: 2004-10-20T20:50:06Z
dc.date.available: 2018-11-24T10:23:25Z
dc.date.issued: 2001-03-20 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/7218
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7218
dc.description.abstract: We present a new method for estimating the expected return of a POMDP from experience. The estimator does not assume any knowledge of the POMDP and allows the experience to be gathered with an arbitrary set of policies. The return is estimated for any new policy of the POMDP. We motivate the estimator from function-approximation and importance-sampling points of view and derive its theoretical properties. Although the estimator is biased, it has low variance, and the bias is often irrelevant when the estimator is used for pair-wise comparisons. We conclude by extending the estimator to policies with memory and compare its performance in a greedy search algorithm to the REINFORCE algorithm, showing an order of magnitude reduction in the number of trials required. (en_US)
dc.format.extent: 4576001 bytes
dc.format.extent: 768071 bytes
dc.language.iso: en_US
dc.title: Policy Improvement for POMDPs Using Normalized Importance Sampling (en_US)
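As a rough illustration of the normalized importance-sampling idea described in the abstract, the sketch below estimates a new policy's expected return from trajectories gathered under arbitrary behavior policies. This is not the authors' code; the Trajectory fields and function name are hypothetical, and the sketch assumes each trajectory records its return together with the per-step action probabilities under the behavior policy and under the policy being evaluated.

from dataclasses import dataclass
from typing import List


@dataclass
class Trajectory:
    return_: float                 # observed return of the trajectory
    behavior_probs: List[float]    # pi_behavior(a_t | h_t) at each step
    target_probs: List[float]      # pi_new(a_t | h_t) at each step


def normalized_is_return(trajectories: List[Trajectory]) -> float:
    """Normalized (weighted) importance-sampling estimate of the new
    policy's expected return. Hypothetical sketch, not the paper's code."""
    weighted_sum = 0.0
    weight_total = 0.0
    for traj in trajectories:
        # Importance weight: product over steps of target / behavior probabilities.
        w = 1.0
        for p_new, p_beh in zip(traj.target_probs, traj.behavior_probs):
            w *= p_new / p_beh
        weighted_sum += w * traj.return_
        weight_total += w
    # Dividing by the sum of weights (rather than the number of trajectories)
    # makes the estimator biased but typically lowers its variance, matching
    # the bias/variance trade-off noted in the abstract.
    return weighted_sum / weight_total if weight_total > 0 else 0.0

Such an estimate can then be compared across candidate policies, for example inside a greedy search over policy parameters, since a consistent bias often cancels in pair-wise comparisons.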


Files in this item

File                Size       Format
AIM-2001-002.pdf    768.0 KB   application/pdf
AIM-2001-002.ps     4.576 MB   application/postscript

