
On Convergence Properties of the EM Algorithm for Gaussian Mixtures

dc.date.accessioned: 2004-10-20T20:49:25Z
dc.date.accessioned: 2018-11-24T10:23:18Z
dc.date.available: 2004-10-20T20:49:25Z
dc.date.available: 2018-11-24T10:23:18Z
dc.date.issued: 1995-04-21
dc.identifier.uri: http://hdl.handle.net/1721.1/7195
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7195
dc.description.abstract: We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.
dc.format.extent: 9 p.
dc.format.extent: 291671 bytes
dc.format.extent: 476864 bytes
dc.language.iso: en_US
dc.subject: learning
dc.subject: neural networks
dc.subject: EM algorithm
dc.subject: clustering
dc.subject: mixture models
dc.subject: statistics
dc.title: On Convergence Properties of the EM Algorithm for Gaussian Mixtures
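The abstract's central claim, that the EM step is the gradient transformed by a projection matrix $P$, can be checked numerically for the simplest subproblem: learning only the mixing weights of a Gaussian mixture with fixed components. For that case the explicit form of the matrix is $P = \frac{1}{n}(\mathrm{diag}(\pi) - \pi\pi^T)$. The sketch below is an illustration, not code from the paper; the data, component means, and starting weights are arbitrary choices.

```python
# Numerical check of the EM-step-as-projected-gradient relation for the
# mixing weights of a Gaussian mixture (fixed components, 1-D data).
import numpy as np

rng = np.random.default_rng(0)

# Fixed 1-D Gaussian components; only the mixing weights pi are learned.
means, sigma = np.array([-2.0, 0.0, 3.0]), 1.0
x = rng.normal(0.0, 2.0, size=200)          # arbitrary sample data
pi = np.array([0.2, 0.5, 0.3])              # current mixing weights

# Component densities f_j(x_i), shape (n, m)
f = np.exp(-0.5 * ((x[:, None] - means) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# E-step: responsibilities h_ij = pi_j f_j(x_i) / sum_k pi_k f_k(x_i)
mix = (pi * f).sum(axis=1, keepdims=True)
h = (pi * f) / mix

# M-step for the weights: pi_new_j = (1/n) sum_i h_ij
pi_em = h.mean(axis=0)

# Gradient of the log-likelihood w.r.t. pi: g_j = sum_i f_j(x_i) / mix_i
g = (f / mix).sum(axis=0)

# Projection matrix P = (1/n)(diag(pi) - pi pi^T) and the projected step
n = len(x)
P = (np.diag(pi) - np.outer(pi, pi)) / n
pi_grad = pi + P @ g

print(np.allclose(pi_em, pi_grad))  # the two updates coincide
```

Expanding $P\nabla L$ termwise shows why: $\pi_j g_j = \sum_i h_{ij}$ and $\sum_k \pi_k g_k = n$, so $\pi_j + (P\nabla L)_j = \frac{1}{n}\sum_i h_{ij}$, which is exactly the M-step. $P$ also keeps the update on the simplex: each step sums to zero.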


Files in this item

File          Size      Format
AIM-1520.pdf  476.8Kb   application/pdf
AIM-1520.ps   291.6Kb   application/postscript
