An Efficient Learning Procedure for Deep Boltzmann Machines
Field | Value | Language
---|---|---
dc.date.accessioned | 2010-08-04T15:15:39Z |
dc.date.accessioned | 2018-11-26T22:26:22Z |
dc.date.available | 2010-08-04T15:15:39Z |
dc.date.available | 2018-11-26T22:26:22Z |
dc.date.issued | 2010-08-04 |
dc.identifier.uri | http://hdl.handle.net/1721.1/57474 |
dc.identifier.uri | http://repository.aust.edu.ng/xmlui/handle/1721.1/57474 |
dc.description.abstract | We present a new learning algorithm for Boltzmann Machines that contain many layers of hidden variables. Data-dependent statistics are estimated using a variational approximation that tends to focus on a single mode, and data-independent statistics are estimated using persistent Markov chains. The use of two quite different techniques for estimating the two types of statistic that enter into the gradient of the log likelihood makes it practical to learn Boltzmann Machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer "pre-training" phase that initializes the weights sensibly. The pre-training also allows the variational inference to be initialized sensibly with a single bottom-up pass. We present results on the MNIST and NORB datasets showing that Deep Boltzmann Machines learn very good generative models of hand-written digits and 3-D objects. We also show that the features discovered by Deep Boltzmann Machines are a very effective way to initialize the hidden layers of feed-forward neural nets which are then discriminatively fine-tuned. | en_US |
dc.format.extent | 32 p. | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Graphical models | en_US |
dc.subject | Boltzmann Machines | en_US |
dc.title | An Efficient Learning Procedure for Deep Boltzmann Machines | en_US |
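The two-phase recipe in the abstract can be made concrete. For each weight matrix of a Boltzmann Machine, the log-likelihood gradient is a difference of expectations, e.g. ∂log p(v)/∂W1 = E_data[v h1ᵀ] − E_model[v h1ᵀ]; the data-dependent term is estimated by mean-field variational inference and the data-independent term by persistent Gibbs chains. Below is a minimal NumPy sketch of one such update for a two-layer DBM. It is an illustration of the idea, not the authors' code: the layer sizes, step counts, chain count, and learning rate are assumptions, and bias terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed layer sizes for a visible layer v and two hidden layers h1, h2.
n_v, n_h1, n_h2 = 784, 500, 1000
W1 = 0.01 * rng.standard_normal((n_v, n_h1))   # v -- h1 weights
W2 = 0.01 * rng.standard_normal((n_h1, n_h2))  # h1 -- h2 weights

def mean_field(v, n_steps=10):
    """Data-dependent phase: iterate the mean-field fixed-point updates,
    initialized with a single bottom-up pass as the abstract describes."""
    mu1 = sigmoid(v @ W1)
    mu2 = sigmoid(mu1 @ W2)
    for _ in range(n_steps):
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T)  # h1 gets input from v and h2
        mu2 = sigmoid(mu1 @ W2)
    return mu1, mu2

def gibbs_step(v, h2):
    """Data-independent phase: one alternating Gibbs update of the
    persistent 'fantasy' chains. Given v and h2, the units of h1 are
    conditionally independent, so sample h1 first, then v and h2."""
    b = v.shape[0]
    h1 = (rng.random((b, n_h1)) < sigmoid(v @ W1 + h2 @ W2.T)).astype(float)
    v  = (rng.random((b, n_v))  < sigmoid(h1 @ W1.T)).astype(float)
    h2 = (rng.random((b, n_h2)) < sigmoid(h1 @ W2)).astype(float)
    return v, h1, h2

# Persistent chain state, kept alive across parameter updates.
n_chains = 100
v_f  = (rng.random((n_chains, n_v)) < 0.5).astype(float)
h2_f = (rng.random((n_chains, n_h2)) < 0.5).astype(float)

def update(v_data, lr=0.001):
    """One gradient step: positive statistics from mean field,
    negative statistics from the persistent chains."""
    global W1, W2, v_f, h2_f
    mu1, mu2 = mean_field(v_data)
    v_f, h1_f, h2_f = gibbs_step(v_f, h2_f)
    W1 += lr * (v_data.T @ mu1 / len(v_data) - v_f.T @ h1_f / n_chains)
    W2 += lr * (mu1.T @ mu2 / len(v_data) - h1_f.T @ h2_f / n_chains)

# One update on a fake batch of binary data standing in for MNIST digits.
update((rng.random((64, n_v)) < 0.5).astype(float))
```

Keeping the fantasy chains alive across parameter updates is what lets a handful of Gibbs steps per update suffice: the chains only need to track a model distribution that changes slowly as the weights change.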
Files in this item
Files | Size | Format
---|---|---
MIT-CSAIL-TR-2010-037.pdf | 753.3 KB | application/pdf