
Learning with Deictic Representation

dc.date.accessioned: 2004-10-08T20:37:45Z
dc.date.accessioned: 2018-11-24T10:21:31Z
dc.date.available: 2004-10-08T20:37:45Z
dc.date.available: 2018-11-24T10:21:31Z
dc.date.issued: 2002-04-10
dc.identifier.uri: http://hdl.handle.net/1721.1/6685
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/6685
dc.description.abstract: Most reinforcement-learning methods operate on propositional representations of the world state. Such representations are often intractably large and generalize poorly. Deictic representations are believed to be a viable alternative: they promise generalization while allowing the use of existing reinforcement-learning methods. Yet few experiments on learning with deictic representations have been reported in the literature. In this paper we explore the effectiveness of two forms of deictic representation and a naive propositional representation in a simple blocks-world domain. We find, empirically, that the deictic representations actually worsen performance. We conclude with a discussion of possible causes of these results and strategies for more effective learning in domains with objects.
dc.format.extent: 41 p.
dc.format.extent: 5712208 bytes
dc.format.extent: 1294450 bytes
dc.language.iso: en_US
dc.subject: AI
dc.subject: Reinforcement Learning
dc.subject: Partial Observability
dc.subject: Representations
dc.title: Learning with Deictic Representation
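
The abstract contrasts a propositional encoding of the blocks-world state with a deictic one. A minimal sketch of that contrast is below; the function names, field names, and state layout are illustrative assumptions, not the paper's actual representations.

```python
# Hypothetical sketch of the two encodings discussed in the abstract.
# A propositional state enumerates a truth value for every on(a, b)
# pair, so its size grows quadratically with the number of blocks.
# A deictic state describes only what the agent's attentional marker
# points at, so its size is fixed regardless of how many blocks exist.

def propositional_state(blocks, on):
    """One on(a, b) proposition per ordered pair of distinct blocks."""
    return {(a, b): (on.get(a) == b)
            for a in blocks for b in blocks if a != b}

def deictic_state(focus, on):
    """Fixed-size, partial view relative to the focused block."""
    return {
        "focused-block": focus,
        "block-under-focus": on.get(focus),   # None means on the table
        "focus-is-clear": focus not in on.values(),
    }

blocks = ["A", "B", "C"]
on = {"A": "B"}                 # A sits on B; C rests on the table
prop = propositional_state(blocks, on)   # 6 propositions for 3 blocks
deic = deictic_state("A", on)            # 3 fields, however many blocks
```

With a deictic encoding the state is partially observable (blocks outside the focus are invisible), which is one candidate cause of the degraded learning performance the abstract reports.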


Files in this item

File               Size      Format
AIM-2002-006.pdf   1.294Mb   application/pdf
AIM-2002-006.ps    5.712Mb   application/postscript

