Segmentation and Alignment of Speech and Sketching in a Design Environment

dc.date.accessioned: 2004-10-20T20:31:48Z
dc.date.accessioned: 2018-11-24T10:23:06Z
dc.date.available: 2004-10-20T20:31:48Z
dc.date.available: 2018-11-24T10:23:06Z
dc.date.issued: 2003-02-01
dc.identifier.uri: http://hdl.handle.net/1721.1/7103
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7103
dc.description.abstract: Sketches are commonly used in the early stages of design. Our previous system allows users to sketch mechanical systems that the computer interprets. However, some parts of a mechanical system may be too difficult or too complex to express in a sketch. Adding speech recognition to create a multimodal system would move us toward our goal of a more natural user interface. This thesis examines the relationship between the verbal and sketch input, particularly how to segment and align the two inputs. Toward this end, subjects were recorded while they sketched and talked. These recordings were transcribed, and a set of rules to perform segmentation and alignment was created. These rules represent the knowledge the computer needs to perform segmentation and alignment. The rules successfully interpreted the 24 data sets they were given.
dc.format.extent: 193 p.
dc.format.extent: 34430522 bytes
dc.format.extent: 46149955 bytes
dc.language.iso: en_US
dc.subject: AI
dc.subject: sketch
dc.subject: design
dc.subject: multimodal
dc.subject: disambiguation
dc.subject: segmentation
dc.subject: alignment
dc.title: Segmentation and Alignment of Speech and Sketching in a Design Environment
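The abstract above describes segmenting and aligning spoken input with sketch strokes. As a rough, hypothetical illustration only (not the rule set developed in the thesis), the Python sketch below pairs timestamped speech phrases with sketch strokes by greatest temporal overlap; the Interval and align names and the example data are invented for this illustration.

from dataclasses import dataclass

@dataclass
class Interval:
    label: str
    start: float  # seconds
    end: float    # seconds

def overlap(a: Interval, b: Interval) -> float:
    """Length of the time overlap between two intervals (0 if disjoint)."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))

def align(phrases: list[Interval], strokes: list[Interval]) -> list[tuple[str, str]]:
    """Pair each speech phrase with the sketch stroke it overlaps most in time."""
    pairs = []
    for p in phrases:
        best = max(strokes, key=lambda s: overlap(p, s), default=None)
        if best is not None and overlap(p, best) > 0:
            pairs.append((p.label, best.label))
    return pairs

if __name__ == "__main__":
    speech = [Interval("this is a spring", 0.0, 1.8), Interval("and a block", 2.0, 3.1)]
    sketch = [Interval("stroke-1 (coil)", 0.2, 1.5), Interval("stroke-2 (rectangle)", 2.2, 3.0)]
    print(align(speech, sketch))
    # [('this is a spring', 'stroke-1 (coil)'), ('and a block', 'stroke-2 (rectangle)')]

A timing-only heuristic like this is just one cue; the thesis derives its segmentation and alignment rules from transcribed recordings of subjects sketching and talking.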


Files in this item

File                 Size      Format
AITR-2003-004.pdf    46.14 MB  application/pdf
AITR-2003-004.ps     34.43 MB  application/postscript
