Combining Variable Selection with Dimensionality Reduction
Field | Value
---|---
dc.date.accessioned | 2005-12-22T02:25:27Z
dc.date.available | 2005-12-22T02:25:27Z
dc.date.issued | 2005-03-30
dc.identifier.uri | http://hdl.handle.net/1721.1/30531
dc.description.abstract | This paper bridges the gap between variable selection methods (e.g., Pearson coefficients, KS test) and dimensionality reduction algorithms (e.g., PCA, LDA). Variable selection algorithms encounter difficulties dealing with highly correlated data, since many features are similar in quality. Dimensionality reduction algorithms tend to combine all variables and cannot select a subset of significant variables. Our approach combines both methodologies by applying variable selection followed by dimensionality reduction. This combination makes sense only when using the same utility function in both stages, which we do. The resulting algorithm benefits from complex features as variable selection algorithms do, and at the same time enjoys the benefits of dimensionality reduction.
dc.format.extent | 10 p.
dc.format.extent | 14957523 bytes
dc.format.extent | 722450 bytes
dc.language.iso | en_US
dc.subject | AI
dc.subject | Computer Vision
dc.subject | Statistical Learning
dc.subject | Variable Selection
dc.title | Combining Variable Selection with Dimensionality Reduction
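
The abstract describes a two-stage pipeline: score each variable with a utility function, keep the top-scoring subset, then run dimensionality reduction on that subset. The following is a minimal sketch of that pipeline structure, assuming Python with NumPy and scikit-learn; the Pearson-based selector, the helper name `select_by_pearson`, and the constants `K_KEEP` and `N_COMPONENTS` are illustrative choices, and the sketch does not reproduce the paper's refinement of sharing the same utility function across both stages.

```python
# Sketch of the two-stage pipeline from the abstract: variable selection
# followed by dimensionality reduction. NOTE: the paper's contribution is
# using the SAME utility function in both stages; this sketch uses a simple
# Pearson-correlation score for selection and ordinary PCA for reduction,
# so it illustrates only the pipeline structure. All names here
# (select_by_pearson, K_KEEP, N_COMPONENTS) are hypothetical.

import numpy as np
from sklearn.decomposition import PCA

def select_by_pearson(X, y, k):
    """Return indices of the k features with the largest absolute
    Pearson correlation with the labels y."""
    Xc = X - X.mean(axis=0)                       # center each feature
    yc = y - y.mean()                             # center the labels
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    scores = np.abs(Xc.T @ yc) / np.maximum(denom, 1e-12)
    return np.argsort(scores)[::-1][:k]           # top-k by score

# Synthetic data: 200 samples, 50 features, labels driven by two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

K_KEEP, N_COMPONENTS = 10, 3
idx = select_by_pearson(X, y, K_KEEP)             # stage 1: selection
Z = PCA(n_components=N_COMPONENTS).fit_transform(X[:, idx])  # stage 2: reduction
print(Z.shape)                                    # (200, 3)
```

Selection first prunes redundant, highly correlated variables that would otherwise dominate the reduction step; reduction then combines the surviving variables into a compact representation, which is the complementarity the abstract argues for.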
Files in this item
Files | Size | Format
---|---|---
MIT-CSAIL-TR-2005-019.pdf | 722.4Kb | application/pdf
MIT-CSAIL-TR-2005-019.ps | 14.95Mb | application/postscript