Cascading Regularized Classifiers
dc.date.accessioned | 2005-12-22T01:27:18Z | |
dc.date.accessioned | 2018-11-24T10:24:06Z | |
dc.date.available | 2005-12-22T01:27:18Z | |
dc.date.available | 2018-11-24T10:24:06Z | |
dc.date.issued | 2004-04-21 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/30463 | |
dc.identifier.uri | http://repository.aust.edu.ng/xmlui/handle/1721.1/30463 | |
dc.description.abstract | Among the various methods for combining classifiers, Boosting was originally conceived as a stratagem to cascade pairs of classifiers through their disagreement. I recover the same idea from the work of Niyogi et al. to show how to loosen the requirement of weak learnability, central to Boosting, and introduce a new cascading stratagem. The paper concludes with an empirical study of an implementation of the cascade that, under assumptions mirroring the conditions imposed by Viola and Jones in [VJ01], preserves the generalization ability of Boosting. | |
dc.format.extent | 8 p. | |
dc.format.extent | 8847621 bytes | |
dc.format.extent | 505102 bytes | |
dc.language.iso | en_US | |
dc.subject | AI | |
dc.title | Cascading Regularized Classifiers |
Files in this item
Files | Size | Format | View
---|---|---|---
MIT-CSAIL-TR-2004-023.pdf | 505.1 KB | application/pdf | View/Open
MIT-CSAIL-TR-2004-023.ps | 8.847 MB | application/postscript | View/Open