Show simple item record

An Efficient Learning Procedure for Deep Boltzmann Machines

dc.date.accessioned      2010-08-04T15:15:39Z
dc.date.accessioned      2018-11-26T22:26:22Z
dc.date.available        2010-08-04T15:15:39Z
dc.date.available        2018-11-26T22:26:22Z
dc.date.issued           2010-08-04
dc.identifier.uri        http://hdl.handle.net/1721.1/57474
dc.identifier.uri        http://repository.aust.edu.ng/xmlui/handle/1721.1/57474
dc.description.abstract  We present a new learning algorithm for Boltzmann Machines that contain many layers of hidden variables. Data-dependent statistics are estimated using a variational approximation that tends to focus on a single mode, and data-independent statistics are estimated using persistent Markov chains. The use of two quite different techniques for estimating the two types of statistic that enter into the gradient of the log likelihood makes it practical to learn Boltzmann Machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer "pre-training" phase that initializes the weights sensibly. The pre-training also allows the variational inference to be initialized sensibly with a single bottom-up pass. We present results on the MNIST and NORB datasets showing that Deep Boltzmann Machines learn very good generative models of hand-written digits and 3-D objects. We also show that the features discovered by Deep Boltzmann Machines are a very effective way to initialize the hidden layers of feed-forward neural nets, which are then discriminatively fine-tuned.  en_US
dc.format.extent         32 p.  en_US
dc.subject               Deep learning  en_US
dc.subject               Graphical models  en_US
dc.subject               Boltzmann Machines  en_US
dc.title                 An Efficient Learning Procedure for Deep Boltzmann Machines  en_US
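The abstract describes a log-likelihood gradient built from two different estimators: a mean-field variational approximation for the data-dependent statistics and persistent Gibbs chains for the data-independent statistics. As a minimal illustrative sketch of that idea for a toy two-hidden-layer Boltzmann Machine (layer sizes, learning rate, and all variable names here are assumptions for illustration, not values from the report, and pre-training is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_v, n_h1, n_h2 = 6, 4, 3                      # toy two-hidden-layer DBM
W1 = 0.01 * rng.standard_normal((n_v, n_h1))   # visible  <-> hidden 1 weights
W2 = 0.01 * rng.standard_normal((n_h1, n_h2))  # hidden 1 <-> hidden 2 weights

def mean_field(v, n_iters=10):
    """Data-dependent statistics: fixed-point mean-field updates for h1, h2."""
    mu1 = sigmoid(v @ W1)
    mu2 = sigmoid(mu1 @ W2)
    for _ in range(n_iters):
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T)  # h1 receives input from v and h2
        mu2 = sigmoid(mu1 @ W2)
    return mu1, mu2

def gibbs_step(v, h2):
    """Data-independent statistics: one sweep of the persistent Markov chains."""
    n = v.shape[0]
    h1 = (rng.random((n, n_h1)) < sigmoid(v @ W1 + h2 @ W2.T)).astype(float)
    v_new = (rng.random((n, n_v)) < sigmoid(h1 @ W1.T)).astype(float)
    h2_new = (rng.random((n, n_h2)) < sigmoid(h1 @ W2)).astype(float)
    return v_new, h1, h2_new

data = (rng.random((20, n_v)) < 0.5).astype(float)     # stand-in binary dataset
chain_v = (rng.random((10, n_v)) < 0.5).astype(float)  # persistent chain states
chain_h2 = (rng.random((10, n_h2)) < 0.5).astype(float)
lr = 0.05
for _ in range(5):
    mu1, mu2 = mean_field(data)
    chain_v, chain_h1, chain_h2 = gibbs_step(chain_v, chain_h2)
    # Gradient estimate: data-dependent term minus data-independent term.
    W1 += lr * (data.T @ mu1 / len(data) - chain_v.T @ chain_h1 / len(chain_v))
    W2 += lr * (mu1.T @ mu2 / len(data) - chain_h1.T @ chain_h2 / len(chain_v))
```

In the procedure the abstract describes, the layer-by-layer pre-training phase would also supply sensible initial weights and let the mean-field pass start from a single bottom-up sweep; this sketch starts from small random weights instead.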


Files in this item

Files                      Size     Format           View
MIT-CSAIL-TR-2010-037.pdf  753.3Kb  application/pdf  View/Open
