Variable structure Human Intention Estimator with mobility and vision constraints as model selection criteria
Affiliation:1. Machine Vision Lab, Digital System Group, CSIR-CEERI, Pilani, India;2. Department of Computer Science, Northwest Missouri State University, MO, USA;1. Mechanical Engineering Department, Imperial College London, SW7 2AZ, United Kingdom;2. Electrical and Electronic Engineering Department, Imperial College London, SW7 2AZ, United Kingdom;3. Dipartimento di Ingegneria Civile e Ingegneria Informatica, Università di Roma “Tor Vergata”, Via del Politecnico 1, 00133 Roma, Italy
Abstract: In this paper, a novel method for early estimation of a human’s action intention is presented. Human intention, modeled as a goal location associated with hand motion and eye-gaze dynamics, is inferred by fusing information from collected hand-motion and gaze-motion data. The algorithm, called the Human Intention Estimator with Variable Structure (HIEVS), runs two variable-structure Interacting Multiple Model (VS-IMM) filters in parallel to process the hand-motion and gaze data and to generate posterior model probabilities associated with a finite set of action models. The posterior model probabilities from the two filters are fused at the end of each iteration, and the current intention is estimated as the model with the highest fused posterior model probability. Two model set augmentation (MSA) algorithms are presented to select the active models for each VS-IMM at each iteration: for the hand-motion filter, an MSA algorithm that computes the human’s reachable workspace is used, while the MSA algorithm for the gaze filter uses the human’s visual span to determine the active models. This approach allows accurate early prediction of the human’s intention even when the total model set is large. A real-world experiment is performed to validate the proposed method.
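To make the fusion and model-selection steps described in the abstract concrete, the Python/NumPy sketch below illustrates one possible reading: gating the active model set by a reachability and a visual-span test, fusing the two filters' posterior model probabilities, and taking the arg-max as the intention estimate. It is an illustration under simplifying assumptions, not the paper's implementation: the product-and-renormalize fusion rule, the spherical reachable-workspace and conical visual-span approximations, and all function names, goal positions, and probability values are hypothetical.

```python
import numpy as np

def active_models_by_reach(goal_positions, shoulder_pos, arm_reach):
    # Keep only goal models inside a spherical approximation of the
    # reachable workspace (the paper computes the actual reachable
    # workspace; the sphere is a simplifying assumption here).
    dists = np.linalg.norm(goal_positions - shoulder_pos, axis=1)
    return np.flatnonzero(dists <= arm_reach)

def active_models_by_gaze(goal_positions, eye_pos, gaze_dir, half_span_rad):
    # Keep only goal models inside a cone approximating the visual span.
    vecs = goal_positions - eye_pos
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    angles = np.arccos(np.clip(vecs @ gaze_dir, -1.0, 1.0))
    return np.flatnonzero(angles <= half_span_rad)

def fuse_and_estimate(mu_hand, mu_gaze, goal_labels):
    # Product fusion of the two filters' posterior model probabilities,
    # renormalized; the intention estimate is the arg-max model.
    fused = np.asarray(mu_hand) * np.asarray(mu_gaze)
    fused = fused / fused.sum() if fused.sum() > 0 else np.full(len(goal_labels), 1.0 / len(goal_labels))
    return goal_labels[int(np.argmax(fused))], fused

# Hypothetical example: three candidate goal locations (metres, world frame).
goals = np.array([[0.4, 0.1, 0.9], [0.6, -0.2, 1.0], [1.5, 0.8, 1.1]])
hand_active = active_models_by_reach(goals, shoulder_pos=np.array([0.0, 0.0, 1.2]), arm_reach=0.9)
gaze_active = active_models_by_gaze(goals, eye_pos=np.array([0.0, 0.0, 1.6]),
                                    gaze_dir=np.array([1.0, 0.0, -0.3]),
                                    half_span_rad=np.deg2rad(45))
mu_hand = [0.2, 0.5, 0.3]   # posterior model probabilities from the hand VS-IMM (made up)
mu_gaze = [0.1, 0.7, 0.2]   # posterior model probabilities from the gaze VS-IMM (made up)
goal, fused = fuse_and_estimate(mu_hand, mu_gaze, ["goal_A", "goal_B", "goal_C"])
print("active (hand):", hand_active, "active (gaze):", gaze_active)
print("estimated intention:", goal, "fused probabilities:", fused)
```

The two gating helpers mirror the MSA idea: only models whose goal locations are physically reachable or currently visible stay active, which keeps each VS-IMM's model set small even when the total set of candidate goals is large.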
Keywords: Human Intention Estimation; Safe human–robot collaboration/interaction; Multiple model filtering; Fusion