
Tiny images

dc.date.accessioned: 2007-04-24T14:01:48Z
dc.date.accessioned: 2018-11-24T10:25:28Z
dc.date.available: 2007-04-24T14:01:48Z
dc.date.available: 2018-11-24T10:25:28Z
dc.date.issued: 2007-04-23
dc.identifier.uri: http://hdl.handle.net/1721.1/37291
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/37291
dc.description.abstract: The human visual system is remarkably tolerant to degradations in image resolution: in a scene recognition task, human performance is similar whether $32 \times 32$ color images or multi-megapixel images are used. With small images, even object recognition and segmentation are performed robustly by the visual system, despite the objects being unrecognizable in isolation. Motivated by these observations, we explore the space of $32 \times 32$ images using a database of $10^8$ $32 \times 32$ color images gathered from the Internet using image search engines. Each image is loosely labeled with one of the 70,399 non-abstract nouns in English, as listed in the WordNet lexical database. Hence the image database represents a dense sampling of all object categories and scenes. With this dataset, we use nearest neighbor methods to perform object recognition across the $10^8$ images.
dc.format.extent: 9 p.
dc.subject: Recognition
dc.subject: Nearest neighbors methods
dc.subject: Image databases
dc.title: Tiny images
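
The abstract above describes labeling by nearest-neighbor lookup over a database of $32 \times 32$ color images. The snippet below is a minimal sketch of that idea, not the authors' actual large-scale implementation: images are downsampled to $32 \times 32$, flattened into mean-subtracted unit-norm vectors, and a query takes the label of its closest neighbor under sum-of-squared-differences. The function names and the small in-memory gallery (gallery_paths, gallery_labels) are hypothetical placeholders for illustration.

    # Minimal sketch (assumed pipeline, not the paper's implementation):
    # label a query image with the label of its nearest 32x32 neighbor
    # under sum-of-squared-differences (SSD).
    import numpy as np
    from PIL import Image

    def to_tiny(path, size=32):
        """Downsample an image to size x size color pixels and return a unit-norm vector."""
        img = Image.open(path).convert("RGB").resize((size, size), Image.BILINEAR)
        v = np.asarray(img, dtype=np.float32).ravel()
        v -= v.mean()                      # remove mean intensity
        n = np.linalg.norm(v)
        return v / n if n > 0 else v       # normalize so SSD ignores overall contrast

    def nearest_label(query_path, gallery_paths, gallery_labels):
        """Return the label of the gallery image closest to the query in SSD."""
        gallery = np.stack([to_tiny(p) for p in gallery_paths])
        q = to_tiny(query_path)
        ssd = np.sum((gallery - q) ** 2, axis=1)
        return gallery_labels[int(np.argmin(ssd))]

At the scale the abstract mentions ($10^8$ images), the tiny-image vectors would have to be precomputed and stored rather than recomputed per query as in this sketch.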


Files in this item

File                         Size      Format
MIT-CSAIL-TR-2007-024.pdf    864.8Kb   application/pdf
MIT-CSAIL-TR-2007-024.ps     4.167Mb   application/postscript
