Learning and recognition of hybrid manipulation tasks in variable environments using probabilistic flow tubes

dc.date.accessioned: 2018-01-30T23:46:40Z
dc.date.accessioned: 2018-11-26T22:27:48Z
dc.date.available: 2018-01-30T23:46:40Z
dc.date.available: 2018-11-26T22:27:48Z
dc.date.issued: 2012-08-23
dc.identifier.uri: http://hdl.handle.net/1721.1/113367
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/113367
dc.description: PhD thesis [en_US]
dc.description.abstract: Robots can act as proxies for human operators in environments where a human operator is not present or cannot directly perform a task, such as in dangerous or remote situations. Teleoperation is a common interface for controlling robots that are designed to be human proxies. Unfortunately, teleoperation may fail to preserve the natural fluidity of human motions due to interface limitations such as communication delays, non-immersive sensing, and controller uncertainty. I envision a robot that can learn a set of motions that a teleoperator commonly performs, so that it can autonomously execute routine tasks or recognize a user's motion in real time. Tasks can be either primitive activities or compound plans. During online operation, the robot can recognize a user's teleoperated motions on the fly and offer real-time assistance, for example, by autonomously executing the remainder of the task.

I realize this vision by addressing three main problems: (1) learning primitive activities by identifying significant features of the example motions and generalizing the behaviors from user demonstration trajectories; (2) recognizing activities in real time by determining the likelihood that a user is currently executing one of several learned activities; and (3) learning complex plans by generalizing a sequence of activities, through auto-segmentation and incremental learning of previously unknown activities.

To solve these problems, I first present an approach to learning activities from human demonstration that (1) provides flexibility and robustness when encoding a user's demonstrated motions by using a novel representation called a probabilistic flow tube, and (2) automatically determines the relevant features of a motion so that they can be preserved during autonomous execution in new situations. I next introduce an approach to real-time motion recognition that (1) uses temporal information to successfully model motions that may be non-Markovian, (2) provides fast real-time recognition of motions in progress by using an incremental temporal alignment approach, and (3) leverages the probabilistic flow tube representation to ensure robustness during recognition against varying environment states. Finally, I develop an approach to learn combinations of activities that (1) automatically determines where activities should be segmented in a sequence and (2) learns previously unknown activities on the fly.

I demonstrate the results of autonomously executing motions learned by my approach on two different robotic platforms supporting user-teleoperated manipulation tasks in a variety of environments. I also present the results of real-time recognition in different scenarios, including a robotic hardware platform. Systematic testing in a two-dimensional environment shows up to a 27% improvement in activity recognition rates over prior art, while maintaining average computing times for incremental recognition of less than half of human reaction time. [en_US]
dc.format.extent: 144 p. [en_US]
dc.rights: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International [en]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: Learning and recognition of hybrid manipulation tasks in variable environments using probabilistic flow tubes [en_US]
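
The abstract's first contribution, the probabilistic flow tube, encodes a set of demonstrated trajectories as a sequence of per-step Gaussians: a mean path, plus position covariances that widen wherever the demonstrations disagree. The sketch below illustrates that idea only, under simplifying assumptions that are not from the thesis: demonstrations are 2-D paths, uniform arc-length resampling stands in for the thesis's temporal alignment of demonstrations, and names such as learn_flow_tube are hypothetical.

import numpy as np

def resample(path, n_steps):
    """Resample a (T, 2) path to n_steps points, uniform in arc length."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_steps)
    return np.stack(
        [np.interp(targets, s, path[:, 0]), np.interp(targets, s, path[:, 1])],
        axis=1,
    )

def learn_flow_tube(demos, n_steps=50):
    """Return per-step means (n_steps, 2) and covariances (n_steps, 2, 2)."""
    aligned = np.stack([resample(d, n_steps) for d in demos])  # (N, n_steps, 2)
    means = aligned.mean(axis=0)
    covs = np.empty((n_steps, 2, 2))
    for t in range(n_steps):
        diff = aligned[:, t, :] - means[t]
        # A small ridge keeps covariances invertible with few demonstrations.
        covs[t] = diff.T @ diff / max(len(demos) - 1, 1) + 1e-6 * np.eye(2)
    return means, covs

# Example: three noisy demonstrations of the same reaching motion.
rng = np.random.default_rng(0)
base = np.stack([np.linspace(0, 1, 30), np.linspace(0, 1, 30) ** 2], axis=1)
demos = [base + 0.02 * rng.standard_normal(base.shape) for _ in range(3)]
means, covs = learn_flow_tube(demos)
print(means.shape, covs.shape)  # (50, 2) (50, 2, 2)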
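The second contribution, real-time recognition, scores a motion in progress against each learned flow tube and ranks activities by likelihood. The sketch below is a simplified stand-in, not the thesis's algorithm: a greedy monotone nearest-step alignment replaces its incremental temporal alignment, and recognize and score_prefix are hypothetical names.

import numpy as np

def gaussian_loglik(x, mean, cov):
    """Log-density of point x under a single Gaussian tube step."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(x) * np.log(2 * np.pi))

def score_prefix(observed, means, covs):
    """Average log-likelihood of a motion prefix under one flow tube,
    using a greedy monotone alignment of points to tube steps."""
    t, total = 0, 0.0
    for x in observed:
        # Advance along the tube while the next step is a better match.
        while t + 1 < len(means) and np.linalg.norm(x - means[t + 1]) < np.linalg.norm(x - means[t]):
            t += 1
        total += gaussian_loglik(x, means[t], covs[t])
    return total / len(observed)

def recognize(observed, tubes):
    """Rank activities by how well each flow tube explains the prefix."""
    scores = {name: score_prefix(observed, m, c) for name, (m, c) in tubes.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Example: a partial "reach" motion should outrank a straight "slide".
line = np.stack([np.linspace(0, 1, 50), np.zeros(50)], axis=1)
arc = np.stack([np.linspace(0, 1, 50), np.linspace(0, 1, 50) ** 2], axis=1)
flat_cov = np.tile(1e-2 * np.eye(2), (50, 1, 1))
tubes = {"slide": (line, flat_cov), "reach": (arc, flat_cov)}
partial = arc[:15] + 0.01 * np.random.default_rng(1).standard_normal((15, 2))
print(recognize(partial, tubes)[0][0])  # expected: reach

Because the score is recomputed as each new point arrives, an assistant could hand control to the robot once one activity's likelihood dominates, which is the kind of mid-motion assistance the abstract describes.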
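The third contribution, plan learning, requires segmenting a long teleoperated sequence into activities and flagging stretches that no known activity explains. The sketch below conveys only the flavor of that step, with assumptions not from the thesis: fixed-size windows, a plain distance-to-mean-path fit in place of likelihood-based segmentation, and hypothetical names throughout.

import numpy as np

def path_fit(segment, mean_path):
    """Average distance from each segment point to its nearest path point."""
    seg = np.asarray(segment, dtype=float)
    d = np.linalg.norm(seg[:, None, :] - mean_path[None, :, :], axis=2)
    return d.min(axis=1).mean()

def segment_sequence(traj, mean_paths, window=20, threshold=0.05):
    """Label fixed windows of a trajectory with the best-fitting known
    activity; windows nothing explains are flagged "unknown" -- the point
    where incremental learning of a new activity would take over."""
    labels = []
    for start in range(0, len(traj) - window + 1, window):
        seg = traj[start:start + window]
        fits = {name: path_fit(seg, p) for name, p in mean_paths.items()}
        best = min(fits, key=fits.get)
        labels.append(best if fits[best] < threshold else "unknown")
    return labels

# Example: a known "slide" followed by an unmodeled wiggle.
line = np.stack([np.linspace(0, 1, 20), np.zeros(20)], axis=1)
wiggle = np.stack([np.linspace(1, 2, 20), 0.3 * np.sin(np.linspace(0, 6, 20))], axis=1)
traj = np.concatenate([line, wiggle])
print(segment_sequence(traj, {"slide": line}))  # ['slide', 'unknown']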


Files in this item

File                        Size      Format
MIT-CSAIL-TR-2018-007.pdf   17.76 MB  application/pdf

