
On the Dirichlet Prior and Bayesian Regularization

dc.date.accessioned     2004-10-08T20:38:20Z
dc.date.accessioned     2018-11-24T10:21:35Z
dc.date.available       2004-10-08T20:38:20Z
dc.date.available       2018-11-24T10:21:35Z
dc.date.issued          2002-09-01   en_US
dc.identifier.uri       http://hdl.handle.net/1721.1/6702
dc.identifier.uri       http://repository.aust.edu.ng/xmlui/handle/1721.1/6702
dc.description.abstract (en_US):
A common objective in learning a model from data is to recover its network structure, while the model parameters are of minor interest. For example, we may wish to recover regulatory networks from high-throughput data sources. In this paper we examine how Bayesian regularization using a Dirichlet prior over the model parameters affects the learned model structure in a domain with discrete variables. Surprisingly, a weak prior in the sense of smaller equivalent sample size leads to a strong regularization of the model structure (sparse graph) given a sufficiently large data set. In particular, the empty graph is obtained in the limit of a vanishing strength of prior belief. This is diametrically opposite to what one may expect in this limit, namely the complete graph from an (unregularized) maximum likelihood estimate. Since the prior affects the parameters as expected, the prior strength balances a "trade-off" between regularizing the parameters or the structure of the model. We demonstrate the benefits of optimizing this trade-off in the sense of predictive accuracy.
dc.format.extent        11 p.   en_US
dc.format.extent        3152389 bytes
dc.format.extent        1414851 bytes
dc.language.iso         en_US
dc.subject              AI   en_US
dc.subject              Regularization   en_US
dc.subject              Dirichlet Prior   en_US
dc.title                On the Dirichlet Prior and Bayesian Regularization   en_US
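
The limit described in the abstract can be checked numerically: with a Dirichlet (BDeu) prior of equivalent sample size ESS, the marginal-likelihood score of a graph with an extra edge diverges to minus infinity faster than the empty graph's score as ESS approaches zero, once all cell counts in the data are positive. Below is a minimal Python sketch of this effect; it is not the authors' code, and the simulated data set and the helper functions (bdeu_family, score_empty, score_edge) are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): BDeu scoring of a
# two-variable discrete network, illustrating that a vanishing equivalent
# sample size (ESS) favors the empty graph even for dependent variables.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Two weakly dependent binary variables: Y copies X 60% of the time.
N = 2000
x = rng.integers(0, 2, size=N)
y = np.where(rng.random(N) < 0.6, x, 1 - x)

def bdeu_family(counts, ess):
    """BDeu log marginal likelihood of one variable given its parents.
    counts: shape (q, r) array of N_ijk over q parent configurations and
    r states; the Dirichlet hyperparameters are alpha_ijk = ess / (q * r)."""
    q, r = counts.shape
    a_ijk = ess / (q * r)
    a_ij = ess / q
    n_ij = counts.sum(axis=1)
    return ((gammaln(a_ij) - gammaln(a_ij + n_ij)).sum()
            + (gammaln(a_ijk + counts) - gammaln(a_ijk)).sum())

def score_empty(ess):
    """Empty graph: X and Y are parentless families scored independently."""
    cx = np.bincount(x, minlength=2)[None, :]
    cy = np.bincount(y, minlength=2)[None, :]
    return bdeu_family(cx, ess) + bdeu_family(cy, ess)

def score_edge(ess):
    """Graph X -> Y: Y's counts are split by the parent configuration."""
    cx = np.bincount(x, minlength=2)[None, :]
    cxy = np.zeros((2, 2))
    np.add.at(cxy, (x, y), 1)  # contingency table of (X, Y) pairs
    return bdeu_family(cx, ess) + bdeu_family(cxy, ess)

# As ESS -> 0 the edge model's score behaves like 2*ln(ESS) while the empty
# model's behaves like ln(ESS), so the log Bayes factor eventually turns
# negative and the empty graph is selected, as the abstract describes.
for ess in (1e-30, 1e-16, 1e-8, 1.0, 10.0):
    log_bf = score_edge(ess) - score_empty(ess)
    print(f"ESS = {ess:g}: log Bayes factor (X->Y vs. empty) = {log_bf:+.1f}")
```

For a typical draw, moderate ESS values give a positive log Bayes factor (the dependence of Y on X is detected), while sufficiently small ESS drives it negative and selects the empty graph; since the score difference shrinks like ln(ESS) per extra free parameter, the crossover ESS decreases exponentially with the strength of the dependence in the data.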


Files in this item

File               Size      Format
AIM-2002-014.pdf   1.414Mb   application/pdf
AIM-2002-014.ps    3.152Mb   application/postscript
