Model-based segmentation and recognition of dynamic gestures in continuous video streams
Authors: Hong Li
Affiliations: a. Electrical and Computer Engineering, 19 Union Street, Walter Light Hall, Queen's University, Kingston, Ontario, Canada K7L 3N6
b. School of Computing, 557 Goodwin Hall, Queen's University, Kingston, Ontario, Canada K7L 3N6
Abstract: Segmentation and recognition of continuous gestures are challenging because of spatio-temporal variations and the difficulty of localizing gesture endpoints. A novel multi-scale Gesture Model is presented here as a set of 3D spatio-temporal surfaces of a time-varying contour. Three approaches, which differ mainly in how they localize endpoints, are proposed: the first uses a motion detection strategy and a multi-scale search to find the endpoints; the second uses Dynamic Time Warping to roughly locate the endpoints before a fine search is carried out; the last is based on Dynamic Programming. Experimental results on two-arm and single-hand gestures show that all three methods achieve high recognition rates, ranging from 88% to 96% on the two-arm test, with the last method performing best.
Keywords: Continuous gesture recognition; Gesture segmentation; Motion Signature; Gesture Model; Dynamic Programming; Dynamic Time Warping
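
The abstract's second approach uses Dynamic Time Warping to roughly locate gesture endpoints in the continuous stream before a finer search refines them. The sketch below is not the authors' implementation; it is a minimal, assumed illustration of the general idea, where `stream` and `template` are hypothetical sequences of per-frame feature vectors and the candidate-window lengths are placeholder parameters.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic DTW cost between two sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def rough_endpoints(stream, template, min_len, max_len):
    """Slide candidate windows over the stream and keep the window whose
    DTW distance to the gesture template is smallest; the returned start/end
    indices are only rough endpoints, to be refined by a finer search."""
    best = (None, None, np.inf)
    for start in range(len(stream) - min_len + 1):
        for length in range(min_len, max_len + 1):
            end = start + length
            if end > len(stream):
                break
            d = dtw_distance(stream[start:end], template)
            if d < best[2]:
                best = (start, end, d)
    return best  # (start index, end index, DTW score)
```

For example, with `stream` holding one feature vector per video frame and `template` a prototype gesture of similar length, `rough_endpoints(stream, template, 20, 60)` would return the window of 20 to 60 frames that best matches the template under DTW, serving as the coarse endpoint estimate that a fine search could then refine.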
This article is indexed in ScienceDirect and other databases.