On Convergence Properties of the EM Algorithm for Gaussian Mixtures
dc.date.accessioned | 2004-10-20T20:49:25Z | |
dc.date.accessioned | 2018-11-24T10:23:18Z | |
dc.date.available | 2004-10-20T20:49:25Z | |
dc.date.available | 2018-11-24T10:23:18Z | |
dc.date.issued | 1995-04-21 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/7195 | |
dc.identifier.uri | http://repository.aust.edu.ng/xmlui/handle/1721.1/7195 | |
dc.description.abstract | We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models. | en_US |
dc.format.extent | 9 p. | en_US |
dc.format.extent | 291671 bytes | |
dc.format.extent | 476864 bytes | |
dc.language.iso | en_US | |
dc.subject | learning | en_US |
dc.subject | neural networks | en_US |
dc.subject | EM algorithm | en_US |
dc.subject | clustering | en_US |
dc.subject | mixture models | en_US |
dc.subject | statistics | en_US |
dc.title | On Convergence Properties of the EM Algorithm for Gaussian Mixtures | en_US |
Files in this item
Files | Size | Format |
---|---|---|
AIM-1520.pdf | 476.8Kb | application/pdf |
AIM-1520.ps | 291.6Kb | application/postscript |
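The abstract concerns EM for maximum likelihood fitting of finite Gaussian mixtures. As a point of reference, the following is a minimal NumPy sketch of the standard EM iteration for a one-dimensional two-component mixture. This is illustrative code written for this record, not the paper's implementation; the function name, initialization scheme, and synthetic data are all assumptions.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Illustrative EM for a 1-D Gaussian mixture (not the paper's code)."""
    n = len(x)
    # Simple deterministic initialization (an assumption, chosen for stability):
    # equal weights, means at spread-out quantiles, common variance.
    pi = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))
    var = np.full(k, np.var(x))
    for _ in range(iters):
        # E-step: posterior responsibility of component j for point i.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Two well-separated components; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
pi, mu, var = em_gmm_1d(x, k=2)
```

Each EM update above can equivalently be written as a gradient step transformed by a projection matrix $P$, which is the connection the paper makes explicit and uses to analyze convergence.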