A Trainable System for Object Detection in Images and Video Sequences

dc.date.accessioned: 2004-10-01T13:59:58Z
dc.date.accessioned: 2018-11-24T10:09:38Z
dc.date.available: 2004-10-01T13:59:58Z
dc.date.available: 2018-11-24T10:09:38Z
dc.date.issued: 2000-05-01
dc.identifier.uri: http://hdl.handle.net/1721.1/5566
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/5566
dc.description.abstract: This thesis presents a general, trainable system for object detection in static images and video sequences. The core system finds a certain class of objects in static images of completely unconstrained, cluttered scenes without using motion, tracking, or handcrafted models and without making any assumptions on the scene structure or the number of objects in the scene. The system uses a set of training data of positive and negative example images as input, transforms the pixel images to a Haar wavelet representation, and uses a support vector machine classifier to learn the difference between in-class and out-of-class patterns. To detect objects in out-of-sample images, we do a brute-force search over all the subwindows in the image. This system is applied to face, people, and car detection with excellent results. For our extensions to video sequences, we augment the core static detection system in several ways: 1) extending the representation to five frames, 2) implementing an approximation to a Kalman filter, and 3) modeling detections in an image as a density and propagating this density through time according to measured features. In addition, we present a real-time version of the system that is currently running in a DaimlerChrysler experimental vehicle. As part of this thesis, we also present a system that, instead of detecting full patterns, uses a component-based approach. We find it to be more robust to occlusions, rotations in depth, and severe lighting conditions for people detection than the full-body version. We also experiment with various other representations, including pixels and principal components, and show results that quantify how the number of features, color, and gray level affect performance.
dc.format.extent: 128 p.
dc.format.extent: 72537763 bytes
dc.format.extent: 15910731 bytes
dc.language.iso: en_US
dc.subject: AI
dc.subject: MIT
dc.subject: Artificial Intelligence
dc.subject: object detection
dc.subject: pattern recognition
dc.subject: people detection
dc.subject: face detection
dc.subject: car detection
dc.title: A Trainable System for Object Detection in Images and Video Sequences
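The pipeline described in the abstract (Haar-wavelet-style features scored by a trained classifier over every subwindow of the image) can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: the window size, stride, two-feature "Haar" responses, and hand-set classifier weights are all placeholders, and a real system would train a support vector machine on the labeled example images rather than fixing weights by hand.

```python
# Illustrative sketch of brute-force subwindow detection with
# Haar-like features and a linear decision function.
# All parameters below are hypothetical, not the thesis's actual values.

def haar_features(window):
    """Two toy Haar-like responses: a horizontal and a vertical
    edge filter (bright half minus dark half of the window)."""
    h = len(window)
    w = len(window[0])
    top = sum(sum(row) for row in window[: h // 2])
    bottom = sum(sum(row) for row in window[h // 2 :])
    left = sum(sum(row[: w // 2]) for row in window)
    right = sum(sum(row[w // 2 :]) for row in window)
    return [top - bottom, left - right]

def detect(image, weights, bias, win=4, stride=2):
    """Brute-force search over all win x win subwindows; a subwindow
    is a detection when the linear score w . f(x) + b is positive.
    In the thesis this score would come from a trained SVM."""
    hits = []
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - win + 1, stride):
        for c in range(0, cols - win + 1, stride):
            window = [row[c : c + win] for row in image[r : r + win]]
            feats = haar_features(window)
            score = sum(wi * fi for wi, fi in zip(weights, feats)) + bias
            if score > 0:
                hits.append((r, c))
    return hits
```

A usage example: scanning an 8x8 image whose top-left region is bright on top flags the window anchored at (0, 0), while a uniform image yields no detections.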


Files in this item

Files          Size      Format                  View
AITR-1685.pdf  15.91 MB  application/pdf         View/Open
AITR-1685.ps   72.53 MB  application/postscript  View/Open

