
Factorial Hidden Markov Models

dc.date.accessioned: 2004-10-20T20:49:14Z
dc.date.accessioned: 2018-11-24T10:23:16Z
dc.date.available: 2004-10-20T20:49:14Z
dc.date.available: 2018-11-24T10:23:16Z
dc.date.issued: 1996-02-09
dc.identifier.uri: http://hdl.handle.net/1721.1/7188
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7188
dc.description.abstract: We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation--Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
dc.format.extent: 7 p.
dc.format.extent: 198365 bytes
dc.format.extent: 244196 bytes
dc.language.iso: en_US
dc.subject: AI
dc.subject: MIT
dc.subject: Artificial Intelligence
dc.subject: Hidden Markov Models
dc.subject: Neural networks
dc.subject: Time series
dc.subject: Mean field theory
dc.subject: Gibbs sampling
dc.subject: Factorial
dc.subject: Learning algorithms
dc.subject: Machine learning
dc.title: Factorial Hidden Markov Models
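The abstract describes a mean-field approximation to the intractable E-step of EM in a factorial HMM, where several independent hidden chains jointly generate each observation. The sketch below is a hypothetical illustration, not the paper's implementation: it assumes a fully factorized variational posterior, Gaussian observations formed additively from the chains, and fixed (untrained) parameters; the names `W`, `P`, `theta` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy factorial HMM: M independent chains with K states each,
# contributing additively to a 1-D Gaussian observation.
M, K, T = 3, 2, 20            # chains, states per chain, time steps
W = rng.normal(size=(M, K))   # contribution of each chain-state to the mean
P = np.full((M, K, K), 1/K)   # per-chain transition matrices (uniform here)
pi = np.full((M, K), 1/K)     # per-chain initial state priors
sigma2 = 1.0                  # observation noise variance
x = rng.normal(size=T)        # synthetic observations (stand-in data)

# Fully factorized mean-field posterior: q_m(s_t = k) = theta[t, m, k].
theta = np.full((T, M, K), 1/K)

def mean_field_sweep(theta):
    """One coordinate-ascent sweep over all (t, m) variational factors."""
    theta = theta.copy()
    for t in range(T):
        for m in range(M):
            # Expected contribution of the other chains to the observation mean.
            others = sum(theta[t, n] @ W[n] for n in range(M) if n != m)
            # Expected log-likelihood of x_t for each state of chain m.
            ll = -0.5 * (x[t] - others - W[m])**2 / sigma2
            # Expected log-transition terms from temporal neighbors of chain m.
            log_prev = np.log(pi[m]) if t == 0 else theta[t-1, m] @ np.log(P[m])
            log_next = np.log(P[m]) @ theta[t+1, m] if t < T - 1 else 0.0
            logits = ll + log_prev + log_next
            q = np.exp(logits - logits.max())   # normalize stably
            theta[t, m] = q / q.sum()
    return theta

for _ in range(10):                             # a few sweeps toward a fixed point
    theta = mean_field_sweep(theta)
```

In the full algorithm these fixed-point updates replace the exact forward-backward pass of the E-step; the resulting expected sufficient statistics then feed an exact, analytic M-step, as in standard Baum-Welch.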


Files in this item

File          Size     Format
AIM-1561.pdf  244.1Kb  application/pdf
AIM-1561.ps   198.3Kb  application/postscript

This item appears in the following Collection(s)
