
Using Recurrent Networks for Dimensionality Reduction

dc.date.accessioned: 2004-10-20T20:23:37Z
dc.date.accessioned: 2018-11-24T10:22:51Z
dc.date.available: 2004-10-20T20:23:37Z
dc.date.available: 2018-11-24T10:22:51Z
dc.date.issued: 1992-09-01
dc.identifier.uri: http://hdl.handle.net/1721.1/7045
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7045
dc.description.abstract: This report explores how recurrent neural networks can be exploited for learning high-dimensional mappings. Since recurrent networks are as powerful as Turing machines, an interesting question is how recurrent networks can be used to simplify the problem of learning from examples. The main problem with learning high-dimensional functions is the curse of dimensionality, which roughly states that the number of examples needed to learn a function increases exponentially with input dimension. This thesis proposes a way of avoiding this problem by using a recurrent network to decompose a high-dimensional function into many lower-dimensional functions connected in a feedback loop.
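
The decomposition described in the abstract can be sketched in a few lines: a small recurrent cell consumes one input coordinate per time step, so each learned map stays low-dimensional while the unrolled feedback loop realizes a function of the full high-dimensional input. The sketch below is illustrative only, not code from the report; the dimensions, weights, and names (n_in, n_hidden, recurrent_map) are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the report): a 16-dimensional input is fed
# one coordinate per step to a 4-dimensional recurrent cell, so every
# learned weight matrix is small even though the composed mapping
# depends on all 16 inputs.
n_in, n_hidden = 16, 4

W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # feedback (recurrent) weights
W_x = rng.normal(scale=0.3, size=(n_hidden, 1))         # per-coordinate input weights
W_out = rng.normal(scale=0.3, size=(1, n_hidden))       # readout weights

def recurrent_map(x):
    # The hidden state h is the feedback loop: each step applies only a
    # low-dimensional map, but unrolling over n_in steps yields a
    # function of the entire input vector x.
    h = np.zeros((n_hidden, 1))
    for x_t in x:
        h = np.tanh(W_h @ h + W_x * x_t)
    return (W_out @ h).item()

x = rng.normal(size=n_in)
print(recurrent_map(x))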
dc.format.extent: 2167097 bytes
dc.format.extent: 1325986 bytes
dc.language.iso: en_US
dc.title: Using Recurrent Networks for Dimensionality Reduction


Files in this item

File            Size      Format
AITR-1396.pdf   1.325 MB  application/pdf
AITR-1396.ps    2.167 MB  application/postscript
