
Cascading Regularized Classifiers

dc.date.accessioned: 2005-12-22T01:27:18Z
dc.date.accessioned: 2018-11-24T10:24:06Z
dc.date.available: 2005-12-22T01:27:18Z
dc.date.available: 2018-11-24T10:24:06Z
dc.date.issued: 2004-04-21
dc.identifier.uri: http://hdl.handle.net/1721.1/30463
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/30463
dc.description.abstract: Among the various methods for combining classifiers, Boosting was originally conceived as a stratagem for cascading pairs of classifiers through their disagreement. I recover the same idea from the work of Niyogi et al. to show how to loosen the requirement of weak learnability, which is central to Boosting, and introduce a new cascading stratagem. The paper concludes with an empirical study of an implementation of the cascade that, under assumptions mirroring the conditions imposed by Viola and Jones in [VJ01], preserves the generalization ability of boosting.
dc.format.extent: 8 p.
dc.format.extent: 8847621 bytes
dc.format.extent: 505102 bytes
dc.language.iso: en_US
dc.subject: AI
dc.title: Cascading Regularized Classifiers
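
The abstract above refers to a cascade in the style of Viola and Jones [VJ01]: cheap classifiers reject most inputs early, and only the survivors reach the later, costlier stages. Below is a minimal illustrative sketch of that general idea in Python; the stage functions, thresholds, and toy inputs are hypothetical, and this is not the regularized cascade developed in the report itself.

    # Illustrative sketch of a generic attentional cascade of classifiers.
    # The stages, thresholds, and data are made up for demonstration only.
    from typing import Callable, List, Sequence

    Stage = Callable[[Sequence[float]], float]  # a stage returns a real-valued score

    def cascade_predict(stages: List[Stage],
                        thresholds: List[float],
                        x: Sequence[float]) -> int:
        """Run x through each stage in order; reject (return 0) as soon as a
        stage's score falls below its threshold, accept (return 1) otherwise."""
        for stage, theta in zip(stages, thresholds):
            if stage(x) < theta:
                return 0  # early rejection: later, costlier stages never run
        return 1  # survived every stage

    if __name__ == "__main__":
        # Two toy stages: a cheap one, then a (notionally) more expensive one.
        stages = [lambda x: x[0], lambda x: x[0] + x[1]]
        thresholds = [0.2, 0.9]
        print(cascade_predict(stages, thresholds, [0.5, 0.6]))  # accepted -> 1
        print(cascade_predict(stages, thresholds, [0.1, 0.9]))  # rejected at stage 1 -> 0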


Files in this item

File                          Size       Format
MIT-CSAIL-TR-2004-023.pdf     505.1Kb    application/pdf
MIT-CSAIL-TR-2004-023.ps      8.847Mb    application/postscript
