
A Benchmark of Computational Models of Saliency to Predict Human Fixations

dc.date.accessioned: 2012-01-13T22:30:12Z
dc.date.accessioned: 2018-11-26T22:26:46Z
dc.date.available: 2012-01-13T22:30:12Z
dc.date.available: 2018-11-26T22:26:46Z
dc.date.issued: 2012-01-13
dc.identifier.uri: http://hdl.handle.net/1721.1/68590
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/68590
dc.description.abstract: Many computational models of visual attention have been created from a wide variety of different approaches to predict where people look in images. Each model is usually introduced by demonstrating performances on new images, and it is hard to make immediate comparisons between models. To alleviate this problem, we propose a benchmark data set containing 300 natural images with eye tracking data from 39 observers to compare model performances. We calculate the performance of 10 models at predicting ground truth fixations using three different metrics. We provide a way for people to submit new models for evaluation online. We find that the Judd et al. and Graph-based visual saliency models perform best. In general, models with blurrier maps and models that include a center bias perform well. We add and optimize a blur and center bias for each model and show improvements. We compare performances to baseline models of chance, center and human performance. We show that human performance increases with the number of humans to a limit. We analyze the similarity of different models using multidimensional scaling and explore the relationship between model performance and fixation consistency. Finally, we offer observations about how to improve saliency models in the future. [en_US]
dc.format.extent: 22 p. [en_US]
dc.rights: Creative Commons Attribution 3.0 Unported [en]
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/
dc.subject: fixation maps, saliency maps, vision [en_US]
dc.title: A Benchmark of Computational Models of Saliency to Predict Human Fixations [en_US]
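
The abstract describes scoring saliency models by how well their maps predict ground-truth fixations under several metrics. As a minimal illustrative sketch only (not the benchmark's actual evaluation code), one common criterion of this kind is ROC AUC: treat saliency values at fixated pixels as positives and all remaining pixel values as negatives, and measure how often a positive outranks a negative. The function name and array conventions below are assumptions for the example.

```python
import numpy as np

def auc_score(saliency, fixations):
    """Illustrative ROC-AUC-style score for a saliency map (assumed API).

    saliency:  2-D float array (higher = more salient).
    fixations: 2-D boolean array of the same shape, True at fixated pixels.
    Returns the probability that a randomly chosen fixated pixel has a
    higher saliency value than a randomly chosen non-fixated pixel
    (Mann-Whitney U formulation; ties are broken arbitrarily here).
    """
    s = saliency.ravel()
    f = fixations.ravel().astype(bool)
    pos = s[f]   # saliency at fixated pixels (positives)
    neg = s[~f]  # saliency at all other pixels (negatives)
    # 0-based ranks of all values in the pooled sample.
    ranks = np.argsort(np.argsort(np.concatenate([pos, neg])))
    r_pos = ranks[: len(pos)].sum()
    # Mann-Whitney U with 0-based ranks: U = R_pos - n_pos*(n_pos-1)/2.
    u = r_pos - len(pos) * (len(pos) - 1) / 2
    return u / (len(pos) * len(neg))
```

A map that ranks every fixated pixel above every non-fixated pixel scores 1.0; a map that inverts that ordering scores 0.0, and chance performance sits near 0.5, matching the chance baseline the abstract compares against.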


Files in this item

File                        Size      Format
MIT-CSAIL-TR-2012-001.pdf   50.57 MB  application/pdf
supplementalMaterial.pdf    8.721 MB  application/pdf



Creative Commons Attribution 3.0 Unported
Except where otherwise noted, this item's license is described as Creative Commons Attribution 3.0 Unported