
Optimal Rates for Regularization Operators in Learning Theory

dc.date.accessioned: 2006-09-29T18:36:42Z
dc.date.accessioned: 2018-11-24T10:25:06Z
dc.date.available: 2006-09-29T18:36:42Z
dc.date.available: 2018-11-24T10:25:06Z
dc.date.issued: 2006-09-10
dc.identifier.uri: http://hdl.handle.net/1721.1/34216
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/34216
dc.description.abstract: We develop new error bounds for learning algorithms induced by regularization methods in the regression setting. The "hardness" of the problem is characterized in terms of the parameters r and s, the first related to the "complexity" of the target function, the second connected to the effective dimension of the marginal probability measure over the input space. We show, extending previous results, that by a suitable choice of the regularization parameter as a function of the number of available examples, it is possible to attain the optimal minimax rates of convergence for the expected squared loss of the estimators, over the family of priors fulfilling the constraint r + s > 1/2. The setting considers both labelled and unlabelled examples, the latter being crucial for the optimality results on the priors in the range r < 1/2.
dc.format.extent: 16 p.
dc.format.extent: 776374 bytes
dc.format.extent: 738421 bytes
dc.language.iso: en_US
dc.subject: optimal rates, regularized least-squares algorithm, regularization methods, adaptation
dc.title: Optimal Rates for Regularization Operators in Learning Theory
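
As an informal illustration of the "regularized least-squares algorithm" named in the subject field, the sketch below fits a kernel ridge estimator whose regularization parameter shrinks as a power of the sample size, mirroring the abstract's idea of choosing the parameter as a function of the number of available examples. The Gaussian kernel, the exponent alpha, and every function name here are illustrative assumptions, not the specific choices analyzed in the report.

# Minimal sketch of a regularized least-squares (kernel ridge) estimator.
# The kernel and the schedule lambda_n = n**(-alpha) are placeholders,
# not the parameterization studied in the report.
import numpy as np

def gaussian_kernel(A, B, width=1.0):
    # K[i, j] = exp(-||a_i - b_j||^2 / (2 * width^2))
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * width**2))

def fit_rls(X, y, alpha=0.5):
    # Solve min_f (1/n) sum_i (f(x_i) - y_i)^2 + lambda_n ||f||_K^2
    # with lambda_n = n**(-alpha); by the representer theorem the solution
    # is f(x) = sum_i c_i k(x, x_i) with c = (K + n * lambda_n * I)^{-1} y.
    n = len(y)
    lam = n ** (-alpha)
    K = gaussian_kernel(X, X)
    c = np.linalg.solve(K + n * lam * np.eye(n), y)
    return lambda X_new: gaussian_kernel(X_new, X) @ c

# Toy usage: regression on noisy samples of a smooth target function.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
f = fit_rls(X, y)
print(f(np.linspace(-1.0, 1.0, 5)[:, None]))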


Files in this item

Files                        Size     Format
MIT-CSAIL-TR-2006-062.pdf    738.4Kb  application/pdf
MIT-CSAIL-TR-2006-062.ps     776.3Kb  application/postscript
