Hierarchical Mixtures of Experts and the EM Algorithm
Field | Value | Language |
---|---|---|
dc.date.accessioned | 2004-10-20T20:49:48Z | |
dc.date.accessioned | 2018-11-24T10:23:21Z | |
dc.date.available | 2004-10-20T20:49:48Z | |
dc.date.available | 2018-11-24T10:23:21Z | |
dc.date.issued | 1993-08-01 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/7206 | |
dc.identifier.uri | http://repository.aust.edu.ng/xmlui/handle/1721.1/7206 | |
dc.description.abstract | We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. | en_US |
dc.format.extent | 29 p. | en_US |
dc.format.extent | 190144 bytes | |
dc.format.extent | 678911 bytes | |
dc.language.iso | en_US | |
dc.subject | supervised learning | en_US |
dc.subject | statistics | en_US |
dc.subject | decision trees | en_US |
dc.subject | neural networks | en_US |
dc.title | Hierarchical Mixtures of Experts and the EM Algorithm | en_US |
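The abstract above describes fitting a mixture of expert models by EM: the E-step computes posterior responsibilities of each expert for each example, and the M-step solves responsibility-weighted fitting problems. The following is a minimal sketch of those two steps for a one-level (non-hierarchical) mixture with Gaussian linear experts and a softmax gating network; it is an illustration under assumptions, not the paper's full method. All function and variable names here are hypothetical, the noise variance is held fixed, and the gating-network update (which the paper fits as a GLIM, e.g. by IRLS) is omitted.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def e_step(X, y, V, W, sigma2):
    """Posterior responsibilities h[n, i] of expert i for example n.

    g[n, i]   -- gating prior: softmax of a linear model with weights V (D, K)
    lik[n, i] -- Gaussian likelihood of y[n] under expert i's linear model W[i]
    """
    g = softmax(X @ V)                                # (N, K) gating probabilities
    resid = y[:, None] - X @ W.T                      # (N, K) residuals per expert
    lik = np.exp(-0.5 * resid**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    joint = g * lik
    return joint / joint.sum(axis=1, keepdims=True)   # normalized posteriors

def m_step_experts(X, y, h):
    """Responsibility-weighted least-squares update for each expert's weights."""
    K, D = h.shape[1], X.shape[1]
    W = np.zeros((K, D))
    for i in range(K):
        Xw = X * h[:, i:i+1]                          # rows weighted by h[:, i]
        # solve (X^T H X) w = X^T H y, with a tiny ridge term for stability
        W[i] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(D), Xw.T @ y)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, D, K = 200, 3, 2
    X = rng.normal(size=(N, D))
    # piecewise-linear target: two regimes selected by the sign of X[:, 0]
    y = np.where(X[:, 0] > 0, X @ [1.0, 2.0, 0.0], X @ [-1.0, 0.0, 2.0])
    V = rng.normal(size=(D, K))
    W = rng.normal(size=(K, D))
    for _ in range(20):                               # a few EM sweeps
        h = e_step(X, y, V, W, sigma2=0.25)
        W = m_step_experts(X, y, h)                   # gating update omitted
```

In the paper's hierarchical version, the gating networks are nested, so the responsibilities factor over levels of the tree, but each M-step subproblem remains a weighted GLIM fit of the same form as the least-squares update sketched here.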
Files in this item
Files | Size | Format | View |
---|---|---|---|
AIM-1440.pdf | 678.9Kb | application/pdf | View/Open |
AIM-1440.ps.Z | 190.1Kb | application/octet-stream | View/Open |