Show simple item record

Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks

dc.date.accessioned: 2004-10-20T20:49:15Z
dc.date.accessioned: 2018-11-24T10:23:17Z
dc.date.available: 2004-10-20T20:49:15Z
dc.date.available: 2018-11-24T10:23:17Z
dc.date.issued: 1996-02-09 en_US
dc.identifier.uri: http://hdl.handle.net/1721.1/7189
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7189
dc.description.abstract: Sigmoid-type belief networks, a class of probabilistic neural networks, provide a natural framework for compactly representing probabilistic information in a variety of unsupervised and supervised learning problems. Often the parameters used in these networks need to be learned from examples. Unfortunately, estimating the parameters via exact probabilistic calculations (i.e., the EM algorithm) is intractable even for networks with fairly small numbers of hidden units. We propose to avoid the infeasibility of the E-step by bounding likelihoods instead of computing them exactly. We introduce extended and complementary representations for these networks and show that the estimation of the network parameters can be made fast (reduced to quadratic optimization) by performing the estimation in either of the alternative domains. The complementary networks can be used for continuous density estimation as well. en_US
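The abstract's key idea is replacing the intractable exact likelihood of a sigmoid belief network with a tractable lower bound. As a minimal sketch of that idea (not the paper's actual quadratic bound), the snippet below builds a tiny two-layer sigmoid belief net, computes log p(v) exactly by brute-force summation over hidden configurations, and compares it against a Jensen-inequality (mean-field) lower bound under a factorized posterior q(h). All parameter values and function names here are illustrative assumptions; the bound is evaluated by enumeration purely to demonstrate the inequality, whereas the paper's contribution is making such bounds computable without enumeration.

```python
import itertools
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def joint_log_prob(v, h, W, b, c):
    """log p(v, h) for a two-layer sigmoid belief net:
    p(h_j = 1) = sigmoid(b_j),  p(v_i = 1 | h) = sigmoid(sum_j W[i][j]*h_j + c_i)."""
    lp = 0.0
    for j, hj in enumerate(h):                      # prior over binary hidden units
        p = sigmoid(b[j])
        lp += math.log(p if hj else 1.0 - p)
    for i, vi in enumerate(v):                      # conditional over visible units
        a = sum(W[i][j] * h[j] for j in range(len(h))) + c[i]
        p = sigmoid(a)
        lp += math.log(p if vi else 1.0 - p)
    return lp

def exact_log_likelihood(v, W, b, c):
    """log p(v) by summing over all 2^|h| hidden configurations (the
    computation that becomes intractable as the number of hidden units grows)."""
    total = 0.0
    for h in itertools.product([0, 1], repeat=len(b)):
        total += math.exp(joint_log_prob(v, h, W, b, c))
    return math.log(total)

def mean_field_bound(v, W, b, c, mu):
    """Jensen lower bound  E_q[log p(v, h)] + H(q)  with a factorized
    q(h) = prod_j mu_j^{h_j} (1 - mu_j)^{1 - h_j}.  Enumerated here only
    to illustrate the inequality log p(v) >= bound."""
    bound = 0.0
    for h in itertools.product([0, 1], repeat=len(b)):
        q = 1.0
        for j, hj in enumerate(h):
            q *= mu[j] if hj else 1.0 - mu[j]
        if q > 0.0:
            bound += q * (joint_log_prob(v, h, W, b, c) - math.log(q))
    return bound
```

For any choice of mu the bound sits at or below the exact log-likelihood, with equality when q matches the true posterior p(h | v); variational learning tightens the bound over mu in place of the exact E-step.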
dc.format.extent: 7 p. en_US
dc.format.extent: 197474 bytes
dc.format.extent: 292170 bytes
dc.language.iso: en_US
dc.subject: AI en_US
dc.subject: MIT en_US
dc.subject: Artificial Intelligence en_US
dc.subject: Belief networks en_US
dc.subject: Probabilistic networks en_US
dc.subject: EM algorithm en_US
dc.subject: Density estimation en_US
dc.subject: Likelihood bounds en_US
dc.title: Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks en_US


Files in this item

Files         Size     Format                  View
AIM-1560.pdf  292.1Kb  application/pdf         View/Open
AIM-1560.ps   197.4Kb  application/postscript  View/Open

This item appears in the following Collection(s)
