Context-based Visual Feedback Recognition

dc.date.accessioned: 2006-11-17T11:12:55Z
dc.date.accessioned: 2018-11-24T10:25:12Z
dc.date.available: 2006-11-17T11:12:55Z
dc.date.available: 2018-11-24T10:25:12Z
dc.date.issued: 2006-11-15
dc.identifier.uri: http://hdl.handle.net/1721.1/34893
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/34893
dc.description: PhD thesis
dc.description.abstract: During face-to-face conversation, people use visual feedback (e.g., head and eye gesture) to communicate relevant information and to synchronize rhythm between participants. When recognizing visual feedback, people often rely on more than their visual perception. For instance, knowledge about the current topic and from previous utterances helps guide the recognition of nonverbal cues. The goal of this thesis is to augment computer interfaces with the ability to perceive visual feedback gestures and to enable the exploitation of contextual information from the current interaction state to improve visual feedback recognition.

We introduce the concept of visual feedback anticipation, where contextual knowledge from an interactive system (e.g., the last spoken utterance from the robot or system events from the GUI interface) is analyzed online to anticipate visual feedback from a human participant and improve visual feedback recognition. Our multi-modal framework for context-based visual feedback recognition was successfully tested on conversational and non-embodied interfaces for head and eye gesture recognition.

We also introduce the Frame-based Hidden-state Conditional Random Field (FHCRF) model, a new discriminative model for visual gesture recognition which can model the sub-structure of a gesture sequence, learn the dynamics between gesture labels, and be applied directly to label unsegmented sequences. The FHCRF model outperforms previous approaches (i.e., HMM, SVM and CRF) for visual gesture recognition and can efficiently learn the relevant contextual information necessary for visual feedback anticipation.

A real-time visual feedback recognition library for interactive interfaces (called Watson) was developed to recognize head gaze, head gestures, and eye gaze using images from a monocular or stereo camera and context information from the interactive system. Watson was downloaded by more than 70 researchers around the world and was successfully used by MERL, USC, NTT, MIT Media Lab and many other research groups.
dc.format.extent: 195 p.
dc.format.extent: 5912220 bytes
dc.format.extent: 20231190 bytes
dc.language.iso: en_US
dc.title: Context-based Visual Feedback Recognition


Files in this item

File                       Size      Format
MIT-CSAIL-TR-2006-075.pdf  5.912Mb   application/pdf
MIT-CSAIL-TR-2006-075.ps   20.23Mb   application/postscript
