
Examining high level neural representations of cluttered scenes

dc.date.accessioned: 2010-07-29T18:45:19Z
dc.date.accessioned: 2018-11-26T22:26:21Z
dc.date.available: 2010-07-29T18:45:19Z
dc.date.available: 2018-11-26T22:26:21Z
dc.date.issued: 2010-07-29
dc.identifier.uri: http://hdl.handle.net/1721.1/57463
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/57463
dc.description.abstract: Humans and other primates can rapidly categorize objects even when they are embedded in complex visual scenes (Thorpe et al., 1996; Fabre-Thorpe et al., 1998). Studies by Serre et al. (2007) have shown that the ability of humans to detect animals in brief presentations of natural images decreases as the size of the target animal decreases and the amount of clutter increases, and additionally, that a feedforward computational model of the ventral visual system, originally developed to account for physiological properties of neurons, shows a similar pattern of performance. Motivated by these studies, we recorded single- and multi-unit neural spiking activity from macaque superior temporal sulcus (STS) and anterior inferior temporal cortex (AIT) as a monkey passively viewed images of natural scenes. The stimuli consisted of 600 images of animals in natural scenes and 600 images of natural scenes without animals, captured at four different viewing distances; they were the same images used by Serre et al. to allow a direct comparison between human psychophysics, computational models, and neural data. To analyze the data, we applied population "readout" techniques (Hung et al., 2005; Meyers et al., 2008) to decode from the neural activity whether or not an image contained an animal. The decoding results showed a similar pattern of degraded performance with increasing clutter as was seen in the human psychophysics and computational model results. However, the overall decoding accuracies from the neural data were lower than those seen in the computational model, and the latencies of information in IT were long (~125 ms) relative to behavioral measures obtained from primates in other studies. Additional tests also showed that the responses of the model units did not capture several properties of the neural responses, and that detecting animals in cluttered scenes using simple model units based on V1 cells worked almost as well as using more complex model units designed to model the responses of IT neurons. While these results suggest that AIT might not be the primary brain region involved in this form of rapid categorization, additional studies are needed before drawing strong conclusions. [en_US]
dc.format.extent: 50 p. [en_US]
dc.subject: decoding [en_US]
dc.subject: readout [en_US]
dc.subject: rapid categorization [en_US]
dc.subject: inferior temporal cortex [en_US]
dc.subject: object recognition [en_US]
dc.subject: scene understanding [en_US]
dc.subject: neuroscience [en_US]
dc.subject: visual clutter [en_US]
dc.subject: electrophysiology [en_US]
dc.title: Examining high level neural representations of cluttered scenes [en_US]
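Illustration of the population "readout" analysis mentioned in the abstract: the sketch below shows one common form of such decoding, a cross-validated linear classifier trained on per-image firing-rate vectors to predict animal presence. This is an assumption-laden demonstration only, not the authors' code; the unit count, synthetic spike counts, and all variable names are made up for the example.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Assumed shapes: 600 animal + 600 non-animal images, 100 recorded units.
n_images, n_units = 1200, 100
# Synthetic spike counts stand in for the recorded STS/AIT responses.
firing_rates = rng.poisson(lam=5.0, size=(n_images, n_units)).astype(float)
contains_animal = np.repeat([1, 0], n_images // 2)

# z-score each unit so no single neuron dominates the population readout.
firing_rates = (firing_rates - firing_rates.mean(0)) / (firing_rates.std(0) + 1e-9)

# 10-fold cross-validated decoding accuracy of "animal present vs. absent".
scores = cross_val_score(LinearSVC(dual=False), firing_rates, contains_animal, cv=10)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")

On synthetic data this should hover near chance (0.50); the same readout applied to real population responses is what yields the decoding accuracies discussed in the abstract.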


Files in this item

Files                        Size      Format
MIT-CSAIL-TR-2010-034.pdf    2.152 MB  application/pdf
