Factorial Hidden Markov Models
Field | Value | Language |
---|---|---|
dc.date.accessioned | 2004-10-20T20:49:14Z | |
dc.date.accessioned | 2018-11-24T10:23:16Z | |
dc.date.available | 2004-10-20T20:49:14Z | |
dc.date.available | 2018-11-24T10:23:16Z | |
dc.date.issued | 1996-02-09 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/7188 | |
dc.identifier.uri | http://repository.aust.edu.ng/xmlui/handle/1721.1/7188 | |
dc.description.abstract | We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation--Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm. | en_US |
dc.format.extent | 7 p. | en_US |
dc.format.extent | 198365 bytes | |
dc.format.extent | 244196 bytes | |
dc.language.iso | en_US | |
dc.subject | AI | en_US |
dc.subject | MIT | en_US |
dc.subject | Artificial Intelligence | en_US |
dc.subject | Hidden Markov Models | en_US |
dc.subject | Neural networks | en_US |
dc.subject | Time series | en_US |
dc.subject | Mean field theory | en_US |
dc.subject | Gibbs sampling | en_US |
dc.subject | Factorial | en_US |
dc.subject | Learning algorithms | en_US |
dc.subject | Machine learning | en_US |
dc.title | Factorial Hidden Markov Models | en_US |
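The abstract notes that the hidden state of a factorial HMM is distributed across several independent chains, so exact inference requires working over the Cartesian product of their state spaces. The toy sketch below (illustrative only, not the paper's code; all array names and sizes are invented for the example) makes this concrete: M chains with K states each are equivalent to a single HMM with K**M joint states, built here with Kronecker products, on which the standard scaled forward algorithm runs exactly. The exponential size of the joint transition matrix is what makes the exact E-step intractable and motivates the mean-field and Gibbs-sampling alternatives.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a factorial HMM with M
# independent chains of K states each is equivalent to one HMM whose
# state space is the Cartesian product, of size K**M.

rng = np.random.default_rng(0)
M, K, T = 3, 2, 5            # chains, states per chain, sequence length

# Per-chain transition matrices (each row sums to 1).
A = [rng.dirichlet(np.ones(K), size=K) for _ in range(M)]

# Joint transition matrix via Kronecker products: shape (K**M, K**M).
A_joint = A[0]
for m in range(1, M):
    A_joint = np.kron(A_joint, A[m])

S = K ** M                   # number of joint states (8 here)
pi = np.full(S, 1.0 / S)     # uniform initial distribution

# Toy emission likelihoods p(y_t | joint state), one column per step.
B = rng.uniform(0.1, 1.0, size=(S, T))

# Scaled forward algorithm on the joint chain: exact, but its cost
# grows with K**(2M) per time step, hence the need for approximations.
alpha = pi * B[:, 0]
c = alpha.sum()
loglik = np.log(c)
alpha /= c
for t in range(1, T):
    alpha = (A_joint.T @ alpha) * B[:, t]
    c = alpha.sum()
    loglik += np.log(c)
    alpha /= c

print(S)  # 8 joint states for M=3, K=2
```

In contrast, the mean-field approximation described in the abstract keeps only M separate K-state posteriors (M*K numbers per time step) instead of the K**M joint table, which is what restores tractability.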
Files in this item
Files | Size | Format | View |
---|---|---|---|
AIM-1561.pdf | 244.1Kb | application/pdf | View |
AIM-1561.ps | 198.3Kb | application/postscript | View |