
Notes on Regularized Least Squares

dc.date.accessioned    2007-05-01T16:01:50Z
dc.date.accessioned    2018-11-24T10:25:28Z
dc.date.available      2007-05-01T16:01:50Z
dc.date.available      2018-11-24T10:25:28Z
dc.date.issued         2007-05-01
dc.identifier.uri      http://hdl.handle.net/1721.1/37318
dc.identifier.uri      http://repository.aust.edu.ng/xmlui/handle/1721.1/37318
dc.description.abstract    This is a collection of information about regularized least squares (RLS). The facts here are not new results, but we have not seen them usefully collected together before. A key goal of this work is to demonstrate that with RLS, we get certain things for free: if we can solve a single supervised RLS problem, we can search for a good regularization parameter lambda at essentially no additional cost (see the sketch following this record). The discussion in this paper applies to dense regularized least squares, where we work with matrix factorizations of the data or kernel matrix. It is also possible to work with iterative methods such as conjugate gradient, and this is frequently the method of choice for large data sets in high dimensions with very few nonzero dimensions per point, such as text classification tasks. The results discussed here do not apply to iterative methods, which have different design tradeoffs. We present the results in greater detail than strictly necessary, erring on the side of showing our work. We hope that this will be useful to people trying to learn more about linear algebra manipulations in the machine learning context.
dc.format.extent       8 p.
dc.subject             machine learning, linear algebra
dc.title               Notes on Regularized Least Squares
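
The "lambda search for free" claim in the abstract can be illustrated with a small, self-contained sketch. This is not code from the report; it is a minimal Python/NumPy illustration, assuming the standard kernel RLS solution c = (K + lambda*I)^{-1} y, where a single eigendecomposition of the kernel matrix K is reused across many values of lambda. The data, kernel choice, and helper names below are made up for the example.

    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        # Pairwise squared distances, then the Gaussian (RBF) kernel.
        d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))              # synthetic inputs (hypothetical data)
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

    K = rbf_kernel(X, X)
    # One O(n^3) eigendecomposition of the symmetric kernel matrix.
    evals, Q = np.linalg.eigh(K)
    Qty = Q.T @ y

    for lam in [1e-3, 1e-2, 1e-1, 1.0]:
        # Coefficients c = (K + lam*I)^{-1} y, recovered in O(n^2) per lambda
        # from the precomputed eigendecomposition: c = Q (Lambda + lam*I)^{-1} Q^T y.
        c = Q @ (Qty / (evals + lam))
        yhat = K @ c
        print(lam, np.mean((yhat - y) ** 2))       # training error for this lambda

The point of the sketch is the cost profile: the factorization is paid once, and each additional lambda costs only a matrix-vector amount of work, which is why sweeping the regularization parameter is essentially free in the dense setting the abstract describes.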


Files in this item

File                         Size     Format
MIT-CSAIL-TR-2007-025.pdf    215.8Kb  application/pdf
MIT-CSAIL-TR-2007-025.ps     494.1Kb  application/postscript

