
A Comparative Analysis of Reinforcement Learning Methods

dc.date.accessioned: 2004-10-04T14:25:16Z
dc.date.accessioned: 2018-11-24T10:11:22Z
dc.date.available: 2004-10-04T14:25:16Z
dc.date.available: 2018-11-24T10:11:22Z
dc.date.issued: 1991-10-01
dc.identifier.uri: http://hdl.handle.net/1721.1/5978
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/5978
dc.description.abstract: This paper analyzes the suitability of reinforcement learning (RL) for both programming and adapting situated agents. We discuss two RL algorithms: Q-learning and the Bucket Brigade. We introduce a special case of the Bucket Brigade, and analyze and compare its performance to Q in a number of experiments. Next we discuss the key problems of RL: time and space complexity, input generalization, sensitivity to parameter values, and selection of the reinforcement function. We address the tradeoffs between the built-in and learned knowledge and the number of training examples required by a learning algorithm. Finally, we suggest directions for future research.
dc.format.extent: 13 p.
dc.format.extent: 1444645 bytes
dc.format.extent: 1130480 bytes
dc.language.iso: en_US
dc.subject: reinforcement
dc.subject: learning
dc.subject: situated agents
dc.subject: input generalization
dc.subject: complexity
dc.subject: built-in knowledge
dc.title: A Comparative Analysis of Reinforcement Learning Methods
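The abstract compares Q-learning with the Bucket Brigade. As context, a minimal sketch of the standard one-step Q-learning update (the state names, action set, and parameter values below are illustrative assumptions, not taken from the paper's experiments):

```python
from collections import defaultdict

ACTIONS = ("left", "right")  # illustrative action set, not from the paper

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One-step Q-learning: move Q(s, a) toward r + gamma * max_b Q(s', b)."""
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Usage: a single update from an all-zero table after receiving reward 1.0.
Q = defaultdict(float)
q_update(Q, "s0", "right", 1.0, "s1")
# Q[("s0", "right")] is now 0.5 * (1.0 + 0.9 * 0.0 - 0.0) = 0.5
```

This temporal-difference form is what gives Q-learning the parameter-sensitivity (alpha, gamma) and complexity issues the abstract lists among the key problems of RL.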


Files in this item

AIM-1322.pdf   1.130 MB   application/pdf
AIM-1322.ps    1.444 MB   application/postscript
