
Hierarchical Mixtures of Experts and the EM Algorithm

dc.date.accessioned: 2004-10-20T20:49:48Z
dc.date.accessioned: 2018-11-24T10:23:21Z
dc.date.available: 2004-10-20T20:49:48Z
dc.date.available: 2018-11-24T10:23:21Z
dc.date.issued: 1993-08-01
dc.identifier.uri: http://hdl.handle.net/1721.1/7206
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7206
dc.description.abstract: We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
dc.format.extent: 29 p.
dc.format.extent: 190144 bytes
dc.format.extent: 678911 bytes
dc.language.iso: en_US
dc.subject: supervised learning
dc.subject: statistics
dc.subject: decision trees
dc.subject: neural networks
dc.title: Hierarchical Mixtures of Experts and the EM Algorithm
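
The abstract above describes an EM algorithm for a mixture-of-experts architecture in which both the gating network and the experts are generalized linear models. The following is a minimal illustrative sketch, not the memo's own code, of one such EM loop for a single-level mixture of linear-Gaussian experts with a softmax gating network; all function names, hyperparameters, and the synthetic data are assumptions made for this example, and the memo's hierarchical (tree-structured) gating is omitted for brevity.

    # Minimal sketch of EM for a one-level mixture of linear experts with a
    # softmax gating network (illustrative only; not the authors' code).
    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def em_mixture_of_experts(X, y, n_experts=3, n_iters=50, sigma2=1.0, lr=0.1):
        n, d = X.shape
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(n_experts, d))   # expert regression weights
        V = rng.normal(scale=0.1, size=(n_experts, d))   # gating-network weights

        for _ in range(n_iters):
            # E-step: posterior probability that each expert generated each point.
            g = softmax(X @ V.T)                                     # gate priors, (n, k)
            means = X @ W.T                                          # expert means, (n, k)
            lik = np.exp(-0.5 * (y[:, None] - means) ** 2 / sigma2)  # Gaussian likelihoods
            h = g * lik
            h /= h.sum(axis=1, keepdims=True)

            # M-step (experts): weighted least squares for each expert,
            # with the posteriors h as the weights.
            for k in range(n_experts):
                Xw = X * h[:, k:k + 1]
                W[k] = np.linalg.solve(Xw.T @ X + 1e-6 * np.eye(d), Xw.T @ y)

            # M-step (gates): a few gradient steps on the weighted log-likelihood
            # of the softmax gating model, pushing the gates toward the posteriors.
            for _ in range(10):
                g = softmax(X @ V.T)
                V += lr * (h - g).T @ X / n

        return W, V

    # Usage: piecewise-linear data that a single linear model fits poorly.
    rng = np.random.default_rng(1)
    x = rng.uniform(-3, 3, size=200)
    X = np.column_stack([x, np.ones_like(x)])   # include a bias column
    y = np.where(x < 0, -2 * x, 0.5 * x) + 0.05 * rng.normal(size=200)
    W, V = em_mixture_of_experts(X, y)

The gating update here takes plain gradient steps toward the posterior responsibilities; the memo itself solves the corresponding weighted GLIM fits with iteratively reweighted least squares inner loops, and extends the scheme to a hierarchy of gates.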


Files in this item

File            Size      Format
AIM-1440.pdf    678.9 Kb  application/pdf
AIM-1440.ps.Z   190.1 Kb  application/octet-stream
