17 similar documents retrieved (search time: 500 ms)
1.
2.
3.
4.
To address the hand-shape variation, occlusion, and illumination changes that arise during gesture motion against complex backgrounds, a gesture tracking and recognition method based on spatio-temporal context is proposed. A gesture-sample classifier is trained offline with machine learning to detect and locate the hand; a spatio-temporal context tracking algorithm then tracks the dynamic gesture, while the gesture detector recalibrates the hand position in real time to avoid drift and target loss during tracking; finally, gestures are recognized from the hand's motion trajectory. Experiments show that the method recognizes gesture motion quickly, accurately, and continuously, meeting the requirements of human-computer interaction.
5.
6.
To address gesture recognition in complex environments, a method that fuses depth and infrared information is proposed. The depth stream of a Kinect camera is first used for dynamic, real-time gesture segmentation, and the infrared image is then fused in to restore the hand region. This resolves the low recognition rates that occur when the segmented hand region is incomplete or a face interferes during real-time segmentation and recognition based on the gesture's spatial distribution features. Experiments verify that the method is unaffected by ambient lighting, can distinguish gestures with small inter-class differences, and is robust to rotation, scaling, and translation. For well-separated gestures, the recognition rate reaches 100%.
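The depth half of this fusion reduces to a band threshold on the depth map: keep pixels in a hand-sized range in front of the body. A minimal numpy sketch; the `near`/`far` limits and the synthetic frame are illustrative assumptions, not values from the paper:

```python
import numpy as np

def segment_hand_by_depth(depth_mm, near=500, far=900):
    """Keep pixels whose depth lies in a hand-sized band in front of the body.

    depth_mm: 2-D array of depth values in millimetres (0 = no reading).
    Returns a boolean mask of the candidate hand region.
    """
    valid = depth_mm > 0  # Kinect reports 0 where depth is unknown
    return valid & (depth_mm >= near) & (depth_mm <= far)

# Synthetic example: a "hand" patch at ~700 mm against a ~2000 mm background.
depth = np.full((120, 160), 2000, dtype=np.int32)
depth[40:80, 60:100] = 700
mask = segment_hand_by_depth(depth)
print(mask.sum())  # 1600 pixels classified as hand (the 40 x 40 patch)
```

In the paper this mask is then repaired with the infrared image; the sketch covers only the depth thresholding step.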
7.
A gesture recognition algorithm based on spatial distribution features in complex backgrounds (cited 3 times: 0 self-citations, 3 by others)
To recognize gestures against complex backgrounds, a recognition algorithm based on the spatial distribution features of the hand region's shape is proposed. Skin-color regions are segmented with a luminance-based Gaussian model suited to complex backgrounds, and a "search window" filters the candidate skin regions to locate the hand. Spatial distribution features, comprising relative spatial density and relative knuckle spacing, are then extracted, and the two feature vectors are combined into an overall similarity score for recognition. A random-sampling mechanism speeds up recognition, and the search-window mechanism enables recognition in the presence of skin-colored distractors. Experiments show that, under relatively stable ambient lighting, the algorithm achieves robust real-time recognition with good rotation, translation, and scale invariance; for clearly distinct gestures the recognition rate reaches 98%.
8.
《计算机应用与软件》 (Computer Applications and Software), 2013, (3)
Exploiting the skin-color characteristics of gesture images, this method combines threshold segmentation in RGB space with the clustering property of skin color in the YCbCr color space, together with a background model, to suppress skin-like background interference and to detect and segment the hand in complex backgrounds. The seven Hu invariant-moment descriptors then characterize the binarized gesture contours, and a BP neural network performs the final recognition. Experiments show good recognition rates and robustness.
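The YCbCr clustering step can be sketched in numpy: convert RGB to chroma with the standard BT.601 formulas and threshold Cb/Cr. The ranges below are commonly quoted defaults, not necessarily the thresholds tuned in the paper:

```python
import numpy as np

def skin_mask_ycbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify pixels as skin by their Cb/Cr chroma (BT.601 conversion).

    rgb: H x W x 3 uint8 array. The Cb/Cr ranges are widely used defaults
    for skin detection, given here as an illustrative assumption.
    """
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# A skin-like tone versus a blue background pixel.
img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = (220, 170, 140)   # skin-like: accepted
img[0, 1] = (30, 60, 200)     # blue: rejected
print(skin_mask_ycbcr(img)[0])
```

The paper additionally intersects such a mask with an RGB threshold and a background model before extracting Hu moments.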
9.
Gesture recognition based on Fourier descriptors in complex backgrounds (cited 6 times: 1 self-citation, 5 by others)
Hand gestures are among the most widely used modes of everyday communication. Owing to applications in human-computer interfaces and virtual-reality environments, gesture recognition has attracted growing attention. In existing monocular-vision approaches, however, gesture segmentation either requires a simple background or requires the user to wear a cumbersome data glove. This paper combines motion information with a KL-transform-based skin-color model to segment gestures against complex backgrounds, achieving much better segmentation in such environments than traditional RGB skin-color models. After preprocessing the segmented hand region, a normalized Fourier descriptor, more accurate than the traditional form, extracts gesture features; a conventional three-layer BP network serves as the classifier, yielding recognition rates of 95.9% on the gesture training set and 95% on the test set.
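One common normalization of Fourier descriptors (not necessarily the paper's exact variant) treats the contour as a complex sequence: zeroing the DC term removes translation, dividing by the first harmonic's magnitude removes scale, and taking magnitudes removes rotation and start point. A minimal sketch:

```python
import numpy as np

def fourier_descriptors(contour, k=8):
    """Normalized Fourier descriptors of a closed 2-D contour.

    contour: N x 2 array of ordered boundary points. One common
    normalization scheme; the paper's exact variant may differ.
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F[0] = 0.0                    # translation invariance
    F = F / np.abs(F[1])          # scale invariance
    return np.abs(F[1:k + 1])     # rotation / start-point invariance

# The same closed curve, and a scaled + rotated + shifted copy of it.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
curve = np.stack([np.cos(t) + 0.3 * np.cos(3 * t),
                  np.sin(t) + 0.3 * np.sin(3 * t)], axis=1)
rot = np.pi / 5
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
moved = 3.0 * curve @ R.T + np.array([7.0, -2.0])
d1, d2 = fourier_descriptors(curve), fourier_descriptors(moved)
print(np.allclose(d1, d2))  # True: the descriptors are invariant
```

The descriptor vector would then feed the BP network described in the abstract.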
10.
An application integrating dynamic gesture detection, tracking, and trajectory recognition from a live camera was built with the OpenCV computer vision library on the VS2008 platform. First, moving gestures are detected by background differencing against a continuously updated static background; a particle filter with color histograms then tracks the moving hand; finally, a hidden Markov model (HMM) recognizes the motion trajectory. In the detection stage, combining the background-difference image with the back-projection image obtained from the color histogram yields satisfactory real-time motion detection. In the tracking stage, the improved color-histogram particle tracker quickly reacquires the moving hand after interference from skin-colored faces, largely meeting the tracking requirements, although this interference does affect the collection of the trajectory sequences the HMM needs. In the recognition stage, HMM training meets the recognition requirements, but accuracy depends mainly on how the real-time trajectory sequences are collected and on optimizing that collection method.
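The background-difference stage of such a pipeline reduces to a running-average background model plus thresholding. A numpy sketch; the update rate `alpha` and `thresh` are illustrative values, not the application's settings:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update: bg <- (1-alpha)*bg + alpha*frame."""
    return (1 - alpha) * bg + alpha * frame

def motion_mask(bg, frame, thresh=25):
    """Foreground pixels differ from the background by more than `thresh`."""
    return np.abs(frame.astype(np.float64) - bg) > thresh

# A static background with a bright moving blob in the current frame.
bg = np.full((60, 80), 50.0)
frame = bg.copy()
frame[20:30, 30:50] += 100          # the "hand": a 10 x 20 bright patch
fg = motion_mask(bg, frame)
print(fg.sum())                     # 200 foreground pixels
bg = update_background(bg, frame)   # background slowly absorbs the change
```

In the application described above this difference image is further combined with the color-histogram back-projection before tracking.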
11.
Aiming at the use of hand gestures for human–computer interaction, this paper presents a real-time approach to the spotting, representation, and recognition of hand gestures from a video stream. The approach exploits multiple cues including skin color, hand motion, and shape. Skin color analysis and coarse image motion detection are joined to perform reliable hand gesture spotting. At a higher level, a compact spatiotemporal representation is proposed for modeling appearance changes in image sequences containing hand gestures. The representation is extracted by combining robust parameterized image motion regression and shape features of a segmented hand. For efficient recognition of gestures made at varying rates, a linear resampling technique for eliminating the temporal variation (time normalization) while maintaining the essential information of the original gesture representations is developed. The gesture is then classified according to a training set of gestures. In experiments with a library of 12 gestures, the recognition rate was over 90%. Through the development of a prototype gesture-controlled panoramic map browser, we demonstrate that a vocabulary of predefined hand gestures can be used to interact successfully with applications running on an off-the-shelf personal computer equipped with a home video camera.
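The linear-resampling time normalization described above can be sketched with `np.interp`: resample each feature track onto a fixed time grid so slow and fast performances of the same gesture align. The target length of 32 frames is an assumption:

```python
import numpy as np

def time_normalize(seq, length=32):
    """Linearly resample a T x D feature sequence to a fixed number of frames.

    seq: T x D array of per-frame gesture features. Removes variation in
    gesture speed while preserving the shape of each feature track.
    """
    seq = np.asarray(seq, dtype=np.float64)
    t_old = np.linspace(0.0, 1.0, len(seq))
    t_new = np.linspace(0.0, 1.0, length)
    return np.stack([np.interp(t_new, t_old, seq[:, d])
                     for d in range(seq.shape[1])], axis=1)

# A slow and a fast performance of the same linear motion normalize to
# (nearly) the same fixed-length sequence.
slow = np.linspace([0.0, 0.0], [10.0, 5.0], 40)
fast = np.linspace([0.0, 0.0], [10.0, 5.0], 12)
a, b = time_normalize(slow), time_normalize(fast)
print(np.allclose(a, b))  # True for a purely linear trajectory
```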
12.
Aditya Ramamoorthy, Subhashis Banerjee. Pattern Recognition, 2003, 36(9): 2069-2081
This paper is concerned with the problem of recognition of dynamic hand gestures. We have considered gestures which are sequences of distinct hand poses. In these gestures hand poses can undergo motion and discrete changes, but continuous deformations of the hand shapes are not permitted. We have developed a recognition engine which can reliably recognize these gestures despite individual variations. The engine also has the ability to detect the start and end of gesture sequences in an automated fashion. The recognition strategy uses a combination of static shape recognition (performed using contour discriminant analysis), Kalman-filter-based hand tracking, and an HMM-based temporal characterization scheme. The system is fairly robust to background clutter and uses skin color for static shape recognition and tracking. A real-time implementation on standard hardware is developed. Experimental results establish the effectiveness of the approach.
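A constant-velocity Kalman filter of the kind typically used for hand tracking can be sketched as one predict/update step over a 4-D state. The noise levels `q` and `r` and the noiseless synthetic track are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict+update step of a constant-velocity Kalman filter.

    x: state [px, py, vx, vy]; P: 4x4 covariance; z: measured [px, py].
    q, r are illustrative process/measurement noise levels.
    """
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # constant-velocity model
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # we observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a hand moving at a constant (1, 2) px/frame.
x, P = np.zeros(4), np.eye(4) * 10.0
for t in range(1, 30):
    x, P = kalman_step(x, P, np.array([t * 1.0, t * 2.0]))
print(np.round(x[2:], 2))  # estimated velocity approaches [1, 2]
```

In the paper's engine, the filtered track feeds the HMM-based temporal characterization.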
13.
Gesture recognition based on Fourier descriptors and HMMs (cited 1 time: 0 self-citations, 1 by others)
To support human-robot interaction on a home-service-robot platform, vision-based gesture recognition is proposed as the interaction modality. Fourier descriptors describe the hand shape, while a support vector machine and a hidden Markov model classify static and dynamic gestures respectively, realizing recognition of both. Built on the Kinect sensor, the system uses depth information during image segmentation to extract the hand region effectively, giving strong robustness within a certain range; the Fourier-descriptor features make recognition invariant to rotation, scaling, and translation. Tested on seven common static gestures and four dynamic gestures, average recognition rates reach 98.8% and 96.7% respectively, demonstrating high accuracy.
14.
15.
In this paper, we propose a new method for recognizing hand gestures in a continuous video stream using a dynamic Bayesian network (DBN) model. The proposed DBN-based inference is preceded by steps of skin extraction and modelling, and motion tracking. We then develop a gesture model for one- or two-hand gestures, which is used to define a cyclic gesture network for modeling a continuous gesture stream. We have also developed a DP-based real-time decoding algorithm for continuous gesture recognition. In our experiments with 10 isolated gestures, we obtained a recognition rate upwards of 99.59% with cross-validation. For a continuous stream of gestures, the system recorded 84%, with a precision of 80.77%, for the spotted gestures. The proposed DBN-based hand gesture model and the design of the gesture network model are believed to have strong potential for successful application to related problems such as sign language recognition, although that is more complicated, requiring analysis of hand shapes as well.
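The DP-based decoding the abstract mentions is in the spirit of Viterbi decoding. A minimal log-domain sketch for a plain discrete HMM, as a toy stand-in for the paper's DBN (the two-state model below is an illustrative assumption):

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely state path for a discrete HMM, in the log domain.

    log_pi: initial log-probs (S,); log_A: transition log-probs (S, S);
    log_B: emission log-probs (S, V); obs: observation indices (length T).
    """
    S, T = len(log_pi), len(obs)
    delta = log_pi + log_B[:, obs[0]]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A        # scores[i, j]: reach j via i
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):              # trace the backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two states emitting mostly-distinct symbols: the decoded path follows
# the switch in the observed symbols.
pi = np.log([0.5, 0.5])
A = np.log([[0.9, 0.1], [0.1, 0.9]])
B = np.log([[0.9, 0.1], [0.1, 0.9]])
print(viterbi(pi, A, B, [0, 0, 0, 1, 1, 1]))  # [0, 0, 0, 1, 1, 1]
```

The paper's cyclic gesture network extends this idea to spotting gestures in an unsegmented stream.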
16.
Vision-based multi-feature gesture recognition (cited 1 time: 0 self-citations, 1 by others)
Gestures are a natural and intuitive mode of interaction, and vision-based gesture recognition is a key technology for next-generation human-computer interaction. Building on existing gesture recognition techniques, this paper proposes a monocular-vision method addressing both gesture segmentation and gesture representation. Color features detect skin regions to segment the hand; fingertips are detected from the hand contour and its convexity defects; a gesture is then represented by the number and orientation of fingertips, and geometric features such as contour length and area complete the recognition. Traditional fingertip detection traverses and scans the whole palm contour at high computational cost; detecting fingertips via convexity defects reduces the computation and speeds up fingertip detection. Experiments show that the method has good robustness and real-time performance and adapts well to environmental change.
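Fingertip detection via convexity defects rests on two pieces: the contour's convex hull, and the depth of each contour arc below its hull edge (analogous to what `cv2.convexityDefects` reports). A numpy-only sketch; the W-shaped toy contour standing in for a two-finger silhouette is an illustrative assumption:

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; returns the strict convex hull vertices."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def defect_depth(arc, a, b):
    """Deepest perpendicular distance from a contour arc to its hull edge a-b,
    the quantity a convexity-defect test thresholds to find finger valleys."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pts = np.asarray(arc, float)
    ab = b - a
    d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0]))
    return d.max() / np.linalg.norm(ab)

# A 'W'-shaped toy contour: the notch at (2, 1) is a convexity defect,
# much like the valley between two extended fingers.
contour = [(0, 0), (4, 0), (4, 3), (3, 3), (2, 1), (1, 3), (0, 3)]
hull = convex_hull(contour)
print(len(hull))                                   # 4: the notch is cut off
print(defect_depth(contour[3:6], (3, 3), (1, 3)))  # 2.0: depth of the notch
```

Counting defects deeper than a threshold gives the number of finger valleys, from which the fingertip count follows.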
17.
Recognizing expressions from face and body gesture by temporal normalized motion and appearance features (cited 1 time: 0 self-citations, 1 by others)
Recently, recognizing affect from both face and body gestures has attracted increasing attention. However, efficient and effective features for describing the dynamics of face and gesture in real-time automatic affect recognition are still lacking. In this paper, we combine local motion and appearance features in a novel framework to model the temporal dynamics of face and body gesture. The proposed framework employs MHI-HOG and Image-HOG features, through temporal normalization or a bag of words, to capture motion and appearance information. MHI-HOG stands for the Histogram of Oriented Gradients (HOG) computed on the Motion History Image (MHI); it captures the motion direction and speed of a region of interest as an expression evolves over time. Image-HOG captures the appearance information of the corresponding region of interest. The temporal normalization method explicitly solves the time-resolution issue in video-based affect recognition. To implicitly model the local temporal dynamics of an expression, we further propose a bag-of-words (BOW) representation for both MHI-HOG and Image-HOG features. Experimental results demonstrate promising performance compared with the state of the art. A significant improvement in recognition accuracy is achieved compared with a frame-based approach that does not consider the underlying temporal dynamics.
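The MHI underlying MHI-HOG follows the classic update rule of Bobick and Davis: moving pixels take the current timestamp, and pixels older than a decay window fall to zero, so newer motion is brighter. A minimal sketch; the `duration` and the toy frames are illustrative:

```python
import numpy as np

def update_mhi(mhi, motion, timestamp, duration=1.0):
    """Motion History Image update: moving pixels get the current timestamp;
    pixels older than `duration` decay to zero.

    mhi: 2-D float array of timestamps; motion: boolean motion mask.
    """
    mhi = np.where(motion, timestamp, mhi)
    mhi[mhi < timestamp - duration] = 0.0
    return mhi

# A blob moving right over three frames leaves a fading trail.
mhi = np.zeros((5, 10))
for ts, col in zip((0.2, 0.4, 0.6), (2, 4, 6)):
    motion = np.zeros((5, 10), dtype=bool)
    motion[2, col] = True
    mhi = update_mhi(mhi, motion, ts)
print(mhi[2, 2], mhi[2, 4], mhi[2, 6])  # 0.2 0.4 0.6: newer motion is brighter
```

MHI-HOG then computes HOG over this timestamp image to encode motion direction and speed.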