
An integrated model of visual attention using shape-based features

dc.date.accessioned: 2009-06-22T17:15:20Z
dc.date.accessioned: 2018-11-26T22:26:01Z
dc.date.available: 2009-06-22T17:15:20Z
dc.date.available: 2018-11-26T22:26:01Z
dc.date.issued: 2009-06-20
dc.identifier.uri: http://hdl.handle.net/1721.1/45598
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/45598
dc.description.abstract: Apart from helping shed light on human perceptual mechanisms, modeling visual attention has important applications in computer vision. It has been shown to be useful in priming object detection, pruning interest points, quantifying visual clutter, and predicting human eye movements. Prior work has relied either on purely bottom-up approaches or on top-down schemes using simple low-level features. In this paper, we outline a top-down visual attention model based on shape-based features. The same shape-based representation is used to represent both the objects and the scenes that contain them. The spatial priors imposed by the scene and the feature priors imposed by the target object are combined in a Bayesian framework to generate a task-dependent saliency map. We show that our approach can predict the locations of objects as well as match eye movements (92% overlap with human observers). We also show that the proposed approach outperforms existing bottom-up and top-down computational models.
dc.format.extent: 10 p.
dc.subject: attention
dc.subject: Bayesian network
dc.title: An integrated model of visual attention using shape-based features
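
The abstract describes combining a scene-imposed spatial prior with an object-imposed feature prior in a Bayesian framework to produce a task-dependent saliency map. The following is a minimal illustrative sketch of that multiplicative combination on a toy grid; the function name, the grid values, and the use of a simple pointwise product with renormalization are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def saliency_map(spatial_prior, feature_likelihood):
    # Task-dependent saliency as the pointwise product of the
    # scene's spatial prior and the target's feature likelihood,
    # renormalized to a valid probability map over locations.
    s = spatial_prior * feature_likelihood
    return s / s.sum()

# Toy 4x4 image grid (hypothetical values).
# Spatial prior from the scene: mass concentrated on the upper rows
# (e.g. where the target object class tends to appear).
spatial = np.array([[4, 4, 4, 4],
                    [3, 3, 3, 3],
                    [1, 1, 1, 1],
                    [1, 1, 1, 1]], dtype=float)
spatial /= spatial.sum()

# Feature likelihood from the target object's shape-based features:
# responds strongly at one candidate location.
feature = np.ones((4, 4))
feature[1, 2] = 10.0
feature /= feature.sum()

sal = saliency_map(spatial, feature)

# Predicted target location = argmax of the saliency map.
pred = np.unravel_index(np.argmax(sal), sal.shape)
```

Here the peak of the combined map falls where both priors agree, which is the sense in which the saliency map is task-dependent: the same scene yields different maps for different target objects.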


Files in this item

File                        Size      Format
MIT-CSAIL-TR-2009-029.pdf   16.22 MB  application/pdf
MIT-CSAIL-TR-2009-029.ps    5.802 MB  application/postscript

