Gestural Cues for Sentence Segmentation
In human-human dialogues, face-to-face meetings are often preferred over phone conversations. One explanation is that non-verbal modalities such as gesture provide additional information, making communication more efficient and accurate. If so, computer processing of natural language could improve by attending to non-verbal modalities as well. We consider the problem of sentence segmentation, using hand-annotated gesture features to improve recognition. We find that gesture features correlate well with sentence boundaries, but that these features improve the overall performance of a language-only system only marginally. This finding is in line with previous research on the topic. We provide a regression analysis, revealing that for sentence boundary detection, the gestural features are largely redundant with the language-model and pause features. Because this redundancy is with the language model, gestural features may still be useful when speech recognition is inaccurate.
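As a minimal sketch (not the authors' implementation), the setup described above can be pictured as a binary classifier over inter-word boundaries that combines language-model, pause, and gesture features; the feature names and values below are hypothetical illustrations only.

```python
# Sketch of a sentence-boundary classifier combining three feature types.
# All feature values here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per inter-word boundary:
# [language-model boundary posterior, pause duration (s), gesture-hold indicator]
X = np.array([
    [0.92, 0.60, 1.0],   # strong LM cue, long pause, gesture hold -> boundary
    [0.10, 0.05, 0.0],   # weak cues everywhere -> no boundary
    [0.75, 0.40, 1.0],
    [0.20, 0.10, 0.0],
])
y = np.array([1, 0, 1, 0])  # 1 = sentence boundary, 0 = no boundary

clf = LogisticRegression().fit(X, y)

# Probability that a new inter-word position is a sentence boundary.
print(clf.predict_proba([[0.80, 0.50, 1.0]])[:, 1])
```

If the gesture feature is largely redundant with the language-model and pause features, its learned weight adds little once those features are present, which is consistent with the marginal overall improvement reported above.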