Show simple item record

Reinforcement Learning by Policy Search

dc.date.accessioned: 2004-10-20T20:31:39Z
dc.date.accessioned: 2018-11-24T10:23:05Z
dc.date.available: 2004-10-20T20:31:39Z
dc.date.available: 2018-11-24T10:23:05Z
dc.date.issued: 2003-02-14
dc.identifier.uri: http://hdl.handle.net/1721.1/7101
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7101
dc.description.abstract: One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations can be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. This means learning a policy---a mapping of observations into actions---based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate controllers with memory, including controllers with external memory, finite state controllers and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience re-use. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
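The abstract's core idea---learning by ascending the gradient of expected cumulative reinforcement---can be illustrated with a minimal REINFORCE-style sketch. This is a standard instance of the policy-gradient family, not the dissertation's own code; the two-armed bandit environment, function names, and hyperparameters below are illustrative assumptions (a memoryless stand-in, so no controller memory is modeled).

```python
import math
import random

def softmax(prefs):
    """Turn action preferences into a probability distribution."""
    exps = [math.exp(p - max(prefs)) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_bandit(arm_means, episodes=5000, lr=0.1, seed=0):
    """Gradient ascent on expected reward for a two-armed bandit,
    using the score-function (REINFORCE) gradient estimate with a
    running reward baseline to reduce variance."""
    rng = random.Random(seed)
    prefs = [0.0] * len(arm_means)      # policy parameters
    baseline = 0.0                      # running average reward
    for t in range(1, episodes + 1):
        probs = softmax(prefs)
        # sample an action from the stochastic policy
        a = rng.choices(range(len(probs)), weights=probs)[0]
        r = rng.gauss(arm_means[a], 1.0)  # noisy reward from the arm
        baseline += (r - baseline) / t
        # d/d(pref_i) log pi(a) = 1[i == a] - pi(i); ascend the gradient
        for i in range(len(prefs)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * (r - baseline) * grad
    return softmax(prefs)

probs = reinforce_bandit([0.0, 1.0])
print(probs)  # probability mass should shift toward the better arm (index 1)
```

The update uses only sampled rewards and the policy's own log-probability gradient, which is what lets the same scheme extend to the memoryful controllers (external memory, finite state, distributed) studied in the dissertation.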
dc.format.extent: 144 p.
dc.format.extent: 26942112 bytes
dc.format.extent: 1735254 bytes
dc.language.iso: en_US
dc.subject: AI
dc.subject: POMDP
dc.subject: policy search
dc.subject: adaptive systems
dc.subject: reinforcement learning
dc.subject: adaptive behavior
dc.title: Reinforcement Learning by Policy Search


Files in this item

File                 Size      Format
AITR-2003-003.pdf    1.735 MB  application/pdf
AITR-2003-003.ps     26.94 MB  application/postscript

