
Convergence Results for the EM Approach to Mixtures of Experts Architectures

dc.date.accessioned: 2004-10-08T20:34:35Z
dc.date.accessioned: 2018-11-24T10:16:00Z
dc.date.available: 2004-10-08T20:34:35Z
dc.date.available: 2018-11-24T10:16:00Z
dc.date.issued: 1993-11-01
dc.identifier.uri: http://hdl.handle.net/1721.1/6620
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/6620
dc.description.abstract: The Expectation-Maximization (EM) algorithm is an iterative approach to maximum likelihood parameter estimation. Jordan and Jacobs (1993) recently proposed an EM algorithm for the mixture of experts architecture of Jacobs, Jordan, Nowlan and Hinton (1991) and the hierarchical mixture of experts architecture of Jordan and Jacobs (1992). They showed empirically that the EM algorithm for these architectures yields significantly faster convergence than gradient ascent. In the current paper we provide a theoretical analysis of this algorithm. We show that the algorithm can be regarded as a variable metric algorithm whose search direction has a positive projection on the gradient of the log likelihood. We also analyze the convergence of the algorithm and provide an explicit expression for the convergence rate. In addition, we describe an acceleration technique that yields a significant speedup in simulation experiments.
dc.format.extent: 245749 bytes
dc.format.extent: 829871 bytes
dc.language.iso: en_US
dc.title: Convergence Results for the EM Approach to Mixtures of Experts Architectures
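As context for the algorithm the abstract describes, the following is a minimal sketch of one EM iteration for a single-level mixture of experts, assuming linear-Gaussian experts with a shared noise variance and a softmax gating network. The function name, parameter shapes, and the gradient-based gating update are illustrative assumptions, not the paper's exact scheme; Jordan and Jacobs solve the gating M-step with an inner iteratively reweighted least squares loop rather than plain gradient ascent.

    # Minimal sketch of one EM pass for a mixture of experts.
    # Assumptions: linear-Gaussian experts, shared variance sigma2,
    # softmax gate; gating M-step approximated by gradient steps.
    import numpy as np

    def em_step(X, y, W_exp, W_gate, sigma2=1.0, gate_lr=0.1, gate_iters=10):
        """One EM iteration. X: (n, d) inputs; y: (n,) targets;
        W_exp: (m, d) expert weights; W_gate: (m, d) gating weights."""
        n, d = X.shape
        m = W_exp.shape[0]

        # E-step: posterior responsibility of each expert for each point.
        logits = X @ W_gate.T                          # (n, m) gating scores
        g = np.exp(logits - logits.max(axis=1, keepdims=True))
        g /= g.sum(axis=1, keepdims=True)              # softmax gate outputs
        mu = X @ W_exp.T                               # (n, m) expert predictions
        lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2)
        h = g * lik
        h /= h.sum(axis=1, keepdims=True)              # responsibilities (n, m)

        # M-step for the experts: responsibility-weighted least squares.
        for j in range(m):
            R = h[:, j]
            A = X.T @ (R[:, None] * X) + 1e-8 * np.eye(d)  # ridge for stability
            W_exp[j] = np.linalg.solve(A, X.T @ (R * y))

        # M-step for the gate: push gate outputs g toward responsibilities h
        # (gradient ascent stand-in for the paper's inner IRLS solve).
        for _ in range(gate_iters):
            logits = X @ W_gate.T
            g = np.exp(logits - logits.max(axis=1, keepdims=True))
            g /= g.sum(axis=1, keepdims=True)
            W_gate += gate_lr * (h - g).T @ X / n
        return W_exp, W_gate

The E-step responsibilities h are the posterior weights that appear in the paper's analysis; replacing the gradient loop with a full inner solve would recover the exact M-step whose convergence behavior the paper characterizes.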


Files in this item

File            Size      Format
AIM-1458.pdf    829.8Kb   application/pdf
AIM-1458.ps.Z   245.7Kb   application/octet-stream
