Similar Literature
17 similar records found (search time: 296 ms)
1.
Gesture Recognition Based on Multi-Feature Fusion and Support Vector Machines
To address the problems that hand feature descriptions in gesture recognition are easily affected by environmental factors and that recognition rates are low, and considering the limitations of any single feature, a new SVM-based gesture recognition method fusing Hu moments and HOG features is proposed. The method first extracts local HOG features from the preprocessed gesture image, then extracts global Hu-moment features from the gesture contour, fuses the two into a hybrid feature, reduces its dimensionality with principal component analysis (PCA) to form the final classification feature, and feeds the new feature into a support vector machine for recognition. Experiments show that the method offers good robustness and a high recognition rate.
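The fuse-then-reduce step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the function name `fuse_and_reduce`, the toy feature dimensions, and the random data are all assumptions.

```python
import numpy as np

def fuse_and_reduce(hog_feats, hu_feats, k):
    # Concatenate local HOG and global Hu-moment features per sample,
    # then reduce the hybrid feature to k components with PCA (via SVD).
    X = np.hstack([hog_feats, hu_feats])          # (n_samples, d1 + d2)
    Xc = X - X.mean(axis=0)                       # center before PCA
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # (n_samples, k)

# toy data: 10 samples, 36-dim HOG + 7-dim Hu moments -> 5-dim fused feature
rng = np.random.default_rng(0)
Z = fuse_and_reduce(rng.normal(size=(10, 36)), rng.normal(size=(10, 7)), 5)
print(Z.shape)  # (10, 5)
```

The reduced matrix `Z` is what would then be fed to the SVM.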

2.
To address the difficulty that conventional gesture recognition is strongly disturbed by the environment and by the hand itself, for example when a large cuff or other obstruction sits at the wrist, a gesture recognition method based on depth data is proposed. First, the hand shape is extracted by preprocessing. Second, the proposed N-Iterate and C-Loop decision methods locate the maximum inscribed circle of the palm. Then the histogram of distances from every contour point to the palm centre is computed along with its peak indices, and, combined with angle information, yields the fingertip count. These three classes of features are fed into an improved SVM and mapped into a high-dimensional space to recognize gestures 0-5. Experiments show that under complex backgrounds and hand self-interference the method achieves high accuracy and real-time performance: average accuracy rises to 98.57% and recognition time drops to 37.923 ms, a substantial gain in efficiency.
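The palm-centre step can be illustrated with a brute-force stand-in: the centre of the maximum inscribed circle is the foreground pixel whose nearest background pixel is farthest away. This naive search is only a sketch of the geometric idea; the paper's N-Iterate/C-Loop methods are a faster substitute and are not reproduced here.

```python
import numpy as np

def palm_center(mask):
    # Brute-force maximum-inscribed-circle search on a binary hand mask:
    # for each foreground pixel, find its distance to the nearest
    # background pixel; the pixel maximizing that distance is the palm
    # centre and the distance is the circle radius.
    fg = np.argwhere(mask == 1)
    bg = np.argwhere(mask == 0)
    d = np.linalg.norm(fg[:, None, :] - bg[None, :, :], axis=2).min(axis=1)
    i = d.argmax()
    return tuple(fg[i]), d[i]                 # centre (row, col), radius

mask = np.zeros((7, 7), dtype=int)
mask[1:6, 1:6] = 1                            # a 5x5 square "palm"
centre, radius = palm_center(mask)
print(centre, radius)                         # geometric centre of the square
```

On real depth masks a distance transform would replace the O(n^2) pairwise distances.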

3.
Depth images are captured with a Kinect; effective gesture segmentation crops the hand region, and related algorithms extract the contour, convex hull, and minimum enclosing circle of the gesture. Four gesture characteristic parameters are then constructed, with computation methods given for each, and a classification decision tree built on the combined parameters performs the recognition. Tests on 9 common gestures against complex backgrounds show per-gesture recognition rates of 89%-100% and an overall rate of 96%.

4.
Stamped Character Recognition Fusing Contour Moments and Fourier Descriptor Features
Li Xueyong, Lu Changhou, Li Jianmei. Journal of Optoelectronics·Laser, 2007, 18(10): 1244-1247, 1259
A new stamped-character recognition method based on hybrid contour features is proposed. Moment invariants and Fourier descriptors are first extracted from the character contour, the two feature types are fused into a hybrid feature, and the new feature is fed into a BP network for recognition. Experiments show that the hybrid contour feature not only overcomes the drawbacks of either feature used alone and raises the recognition rate, but also improves invariance to translation, scaling, and rotation.
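The Fourier-descriptor half of the hybrid feature can be sketched as follows. Treating the closed contour as a complex signal and normalizing the spectrum gives translation and scale tolerance; the function name and normalization choices here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fourier_descriptors(contour, k):
    # First k Fourier descriptors of a closed contour: drop the DC term
    # (translation invariance) and divide by the first harmonic's
    # magnitude (scale invariance). These complement moment invariants
    # when the two are fused into one hybrid feature.
    z = contour[:, 0] + 1j * contour[:, 1]    # contour as a complex signal
    F = np.fft.fft(z)
    mags = np.abs(F[1:k + 1])
    return mags / (mags[0] + 1e-12)

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[3 * np.cos(theta), 3 * np.sin(theta)]  # radius-3 circle
fd = fourier_descriptors(circle, 8)
print(fd)  # a circle has all its energy in the first harmonic
```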

5.
Feature extraction is critical in gesture recognition: if the extracted features lack discriminability and representational invariance, reliable recognition is hard to achieve. The hand often bends, fingers are frequently occluded by other parts of the hand, and ambient lighting produces highlight and shadow regions in the gesture image. As a result, when the initial contour curve is set far from the true hand contour, segmentation can miss parts of the hand region, and concave sections of the contour are hard to capture, so the same gesture can yield different contour descriptions and recognition becomes unreliable. To solve these problems, this paper proposes a static gesture contour feature extraction algorithm that fuses active contours with skin-color statistics.

6.
To overcome the interaction drawbacks of any single input modality, the interaction characteristics of hand movement and facial expression are combined, and a hybrid "facial expression + hand" gesture interaction technique is proposed. The technique pairs 7 facial expressions with hand movement: hand movement drives the mouse cursor, while facial expression recognition replaces the mouse click to select target buttons, letting the user steer the computer through a series of target selection tasks. The recognition error rate and average recognition time of the hybrid technique are analyzed in detail over the range of designed target selection tasks. Results show that the "facial expression + hand" technique reaches a recognition accuracy of 93.81% and an average recognition time of 2921 ms, fully meeting everyday human-computer interaction needs.

7.
Sun Hong, Liao Lei. Electronic Science and Technology, 2015, 28(8): 145
Vision-based gesture recognition is a key technology for next-generation human-computer interaction. Starting from gesture segmentation and gesture representation, a multi-feature real-time gesture recognition method based on OpenCV is proposed. A skin-color segmentation algorithm in HSV color space extracts skin regions, the hand region is separated using geometric features of the gesture, and a convex-hull algorithm detects fingertips. Combining the fingertip count, inter-finger angle features, and the contour aspect ratio, a decision tree classifies the 12 gestures defined in this work. Experimental results show that the method has good robustness, real-time performance, and a high recognition rate.

8.
Geng Lei, Wu Xiaojuan, Peng Zhang. Journal of Signal Processing, 2005, 21(Z1): 339-342
This paper presents a new gesture recognition method that combines the orientation histogram vector (OHV) of an image with a neural network. Its distinguishing point is using the image's orientation histogram vector as the gesture feature vector; this feature is robust to changes in lighting and to translation of the hand, which are exactly the key problems gesture recognition must solve. In the training stage, a feature-vector library of gesture samples is built; in the recognition stage, a three-layer BP network serves as the classifier, achieving a recognition rate above 90%. Recognition of gestures rotated by certain angles is also discussed, with results meeting expectations.
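An orientation histogram of the kind described can be sketched with numpy gradients. This is a generic illustration under assumed conventions (8 unsigned-orientation bins, gradient-magnitude weighting, L1 normalization), not the paper's exact OHV.

```python
import numpy as np

def orientation_histogram(img, bins=8):
    # Histogram of gradient directions, magnitude-weighted and
    # normalized: insensitive to where the hand sits in the frame
    # (translation) and to global brightness scaling (lighting).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # fold to [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

img = np.tile(np.arange(8.0), (8, 1))         # pure horizontal ramp
h = orientation_histogram(img)
print(h.argmax())                             # all energy in the 0-degree bin
```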

9.
Most existing radar-based gesture recognition methods first estimate range, Doppler, angle, and other parameters from the radar echo to obtain various spectrograms, then classify those spectrograms with a convolutional neural network, a fairly complicated pipeline. This paper proposes a dynamic gesture recognition method for millimeter-wave radar based on a serially connected 1-D neural network (1D-ScNN). Raw echoes of dynamic gestures are acquired by the millimeter-wave radar; 1-D convolution and pooling extract gesture features, which are fed into a 1-D Inception v3 structure. A long short-term memory (LSTM) network at the end of the pipeline aggregates the 1-D features, fully exploiting the inter-frame correlation of dynamic gestures to improve recognition accuracy and training convergence speed. Experiments show that the method is simple to implement, converges quickly, and reaches over 96.0% accuracy, higher than existing spectrogram-based gesture classification methods.

10.
For bare-hand gestures captured with monocular vision, a classification method based on an improved shape-context descriptor is adopted. The hand region is first extracted using skin-color information and background modeling, fingers are detected with a single-finger template, and the overall contour of the hand region is described with the improved shape-context descriptor. A directed acyclic graph support vector machine (DAGSVM) then classifies the extracted features. To address weaknesses of the basic algorithm, the improved descriptor replaces the per-contour-point shape-context histograms with a single histogram about the centroid, raising computation speed and real-time performance. Offline and online experiments on 30 letter gestures, 3 control gestures, and 10 digit gestures show good classification accuracy (offline: 96%, online: 91%) and high real-time performance (recognition time 14-15 ms), making the method suitable for real-time letter-gesture-based human-computer interaction.
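The centroid-based simplification can be sketched as a single log-polar histogram of contour points about their centroid. The bin counts, log-radial normalization, and function name below are assumptions for illustration; the paper's exact improved descriptor may differ in detail.

```python
import numpy as np

def centroid_shape_context(points, r_bins=5, a_bins=12):
    # One shape-context-style histogram about the contour centroid
    # instead of one histogram per contour point: the descriptor is
    # coarser but costs O(n) instead of O(n^2), which is the speed
    # gain the centroid-based modification is after.
    c = points.mean(axis=0)
    d = points - c
    r = np.linalg.norm(d, axis=1)
    r = np.log1p(r / (r.mean() + 1e-9))       # log-radial, scale-tolerant
    a = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    hist, _, _ = np.histogram2d(r, a, bins=(r_bins, a_bins))
    return hist / len(points)

theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]  # a closed contour
H = centroid_shape_context(circle)
print(H.shape)  # (5, 12) log-radius x angle histogram
```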

11.
A vision-based static hand gesture recognition method consisting of preprocessing, feature extraction, feature selection and classification stages is presented in this work. The preprocessing stage involves image enhancement, segmentation, rotation and filtering. This work proposes an image rotation technique that makes the segmented image rotation invariant and explores a combined feature set, using localized contour sequences and block-based features, for better representation of static hand gestures. A genetic algorithm is used to select an optimized feature subset from the combined feature set. This work also proposes an improved version of the radial basis function (RBF) neural network to classify hand gesture images using the selected combined features. In the proposed RBF neural network, the centers are automatically selected using the k-means algorithm and the estimated weight matrix is recursively updated using the least-mean-square algorithm for better recognition of hand gesture images. Comparative performances are tested on two indigenously developed databases of the 24-letter American Sign Language hand alphabet.

12.
The accurate classification of hand gestures is crucial in the development of novel hand gesture-based systems designed for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC). A complete vision-based system, consisting of hand gesture acquisition, segmentation, filtering, representation and classification, is developed to robustly classify hand gestures. The algorithms in the subsystems are formulated or selected to optimally classify hand gestures. The gray-scale image of a hand gesture is segmented using a histogram thresholding algorithm, and a morphological filtering approach is designed to effectively remove background and object noise from the segmented image. The contour of a gesture is represented by a localized contour sequence whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centered on the contour pixels. Gesture similarity is determined by measuring the similarity between the localized contour sequences of the gestures, for which both linear alignment and nonlinear alignment are developed. Experiments and evaluations on a subset of American Sign Language (ASL) hand gestures show that, using nonlinear alignment, no gestures are misclassified by the system. It is also estimated that real-time gesture classification is possible through the use of a high-speed PC, high-speed digital signal processing chips and code optimization.
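The localized contour sequence defined above (perpendicular distance from each contour pixel to the chord joining the end-points of a window centered on it) can be written directly. This is a sketch of that definition on a synthetic contour; window width and tie-breaking details are assumptions.

```python
import numpy as np

def localized_contour_sequence(contour, w):
    # LCS sample at contour point i: perpendicular distance from the
    # point to the chord joining the end-points of a window of odd
    # width w centred on it (the contour is treated as closed).
    n = len(contour)
    h = w // 2
    out = np.empty(n)
    for i in range(n):
        p = contour[i]
        a, b = contour[(i - h) % n], contour[(i + h) % n]
        ab = b - a
        cross = ab[0] * (p - a)[1] - ab[1] * (p - a)[0]
        out[i] = abs(cross) / (np.linalg.norm(ab) + 1e-12)
    return out

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
lcs = localized_contour_sequence(circle, 9)
print(lcs.std())  # a circle yields a (numerically) constant sequence
```

Fingers and concavities show up as peaks and valleys in this sequence, which is what the alignment step then compares.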

13.
Accurately recognizing human hand gestures is a useful component in many modern intelligent systems, such as identity authentication, human-computer interaction, and sign language recognition. Conventional approaches are typically based on shallow visual features and relatively simple backgrounds, and cannot readily recognize partially occluded hand gestures against sophisticated backgrounds. In this work, we propose a unified hand gesture recognition framework that optimally fuses a set of shallow/deep finger-level image attributes, on top of which a weakly-supervised ranking algorithm selects semantically salient regions for gesture understanding. More specifically, given a rich collection of hand gesture images, we employ the well-known BING object proposal generator to extract hundreds of object patches that potentially draw human visual attention. Since hundreds of object patches are still too many for building an effective recognition system, a weakly-supervised metric is proposed to rank them using multiple shallow/deep features, and visual semantics are encoded at region level by transferring image-level semantic tags into the various gesture image regions through a weakly-supervised learning paradigm. The top-ranking, highly salient object patches are strongly indicative of human visual perception of hand gestures, so we extract their ImageNet-CNN features and concatenate them. Finally, the concatenated deep feature is fed into a multi-class SVM that classifies each hand gesture image into a particular type. Comprehensive experimental validation has demonstrated the effectiveness and robustness of the proposed hybrid-feature-based hand gesture categorization.

14.
Dynamic hand gesture recognition remains an interesting topic for the computer vision community. Any hand gesture can be represented by a set of feature vectors, and a Recurrent Neural Network (RNN) can recognize these feature vectors as a hand gesture by analyzing the temporal and contextual information of the gesture sequence. We therefore propose a hybrid deep learning framework to recognize dynamic hand gestures, in which GoogleNet is pipelined with a bidirectional GRU unit. Dynamic hand gestures consist of many frames, and features of each frame must be extracted to capture the temporal and dynamic information of the performed gesture. As an RNN takes a sequence of feature vectors as input, we extract features from videos using pretrained GoogleNet, create a feature vector corresponding to each video, and pass it to a bidirectional GRU (BGRU) network, the Gated Recurrent Unit being one of the RNN variants suited to classifying sequential data, to classify the gestures. We evaluate our model on four publicly available hand gesture datasets; the proposed method performs well and is comparable with existing methods. For instance, we achieved 98.6% accuracy on the Northwestern University Hand Gesture (NWUHG) dataset, 99.6% on SKIG, and 99.4% on the Cambridge Hand Gesture (CHG) dataset. On the DHG14/28 dataset we achieved 97.8% accuracy with 14 gesture classes and 92.1% with 28 gesture classes. DHG14/28 contains skeleton and depth data; our proposed model used the depth data and achieved comparable accuracy.

15.
To address the data preprocessing and feature utilization problems in existing gesture recognition with radio-frequency signals, this paper proposes a gesture recognition algorithm for frequency-modulated continuous-wave (FMCW) radar that learns a spatio-temporally compressed feature representation. Static interference is first removed from the range-Doppler (RD) maps of the millimeter-wave echoes reflected by the hand, and moving-target points are screened, reducing both clutter interference and the volume of data to process. A compressed spatio-temporal gesture feature representation is then proposed: the dominant velocity of the moving-target points represents the motion of the gesture, achieving a compressed mapping of the multi-dimensional features while retaining the key information of the gesture motion. Finally, a single-channel convolutional neural network (CNN) learns and classifies the multi-dimensional gesture features and is applied to multi-user and multi-position gesture recognition. Experiments show clear advantages in recognition accuracy, real-time performance, and generalization over existing gesture recognition algorithms.
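The RD map these radar methods start from is a pair of FFTs over one frame of beat-signal samples: fast time (samples within a chirp) gives range, slow time (chirp index) gives Doppler. Below is a generic textbook sketch on a simulated point target, not the paper's preprocessing chain; the simulated beat frequencies are arbitrary choices.

```python
import numpy as np

def range_doppler_map(frame):
    # frame: (n_chirps, n_samples) complex beat signal of one FMCW frame.
    # FFT along fast time -> range bins; FFT along slow time -> Doppler
    # bins (shifted so zero velocity sits in the middle row).
    rd = np.fft.fft(frame, axis=1)                        # range FFT
    rd = np.fft.fftshift(np.fft.fft(rd, axis=0), axes=0)  # Doppler FFT
    return np.abs(rd)

chirps, samples = 32, 64
t = np.arange(samples)
# one moving scatterer: beat frequency -> range bin 8,
# 0.25 cycles/chirp phase progression -> Doppler bin 8 (24 after shift)
frame = np.exp(2j * np.pi * (8 * t / samples + 0.25 * np.arange(chirps)[:, None]))
rd = range_doppler_map(frame)
i, j = np.unravel_index(rd.argmax(), rd.shape)
print(i, j)  # (24, 8): Doppler row 24, range bin 8
```

Static-clutter removal then amounts to suppressing the zero-Doppler row before picking moving-target points.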

16.
This paper proposes a micro-motion gesture recognition method based on multi-channel frequency-modulated continuous-wave (FMCW) millimeter-wave radar, and gives an optimal radar parameter design criterion for micro-motion gesture feature extraction. Time-frequency analysis of the radar echoes reflected by the hand estimates the target's range-Doppler spectrum, range spectrum, Doppler spectrum, and azimuth spectrum. Range-Doppler-time map features stitched over a fixed frame length are designed, together with range-time, Doppler-time, and azimuth-time features and the joint combination of the latter three, to characterize 7 classes of micro-motion gestures. Gesture features are captured and aligned according to amplitude and velocity differences during the motion. A lightweight convolutional neural network with only 5 layers classifies the micro-motion gesture features. Experiments show that, compared with the other features, the designed range-Doppler-time map feature characterizes micro-motion gestures more accurately and generalizes better to untrained test subjects.

17.
For the detection and tracking of letter gestures, this paper proposes a gesture recognition algorithm based on a maximum-likelihood Hausdorff distance. The letter-gesture image is first binarized, and the key points of the gesture (finger roots and fingertips) are extracted from its edge information. The gesture is then recognized with a Hausdorff distance under the maximum-likelihood criterion, using a multi-resolution search strategy similar to the one proposed by Rucklidge, which markedly shortens search time without affecting the success rate or target localization accuracy. Experiments show that the method recognizes letter gestures well and also handles partially deformed (rotated and scaled) gestures.
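The basic directed Hausdorff distance underlying such matchers can be sketched as follows. The rank-based `frac` parameter is the common partial-matching variant used to tolerate occlusion and deformation; the paper's maximum-likelihood weighting is a refinement of this idea and is not reproduced here.

```python
import numpy as np

def directed_hausdorff(A, B, frac=1.0):
    # Directed Hausdorff distance from point set A to point set B:
    # for each point of A take the distance to its nearest neighbour
    # in B, then take the frac-quantile of those distances (frac=1.0
    # gives the classical maximum; frac<1 ignores the worst outliers).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2).min(axis=1)
    k = max(1, int(np.ceil(frac * len(d)))) - 1
    return np.sort(d)[k]

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = A + 0.1                                   # B is A shifted by (0.1, 0.1)
print(round(directed_hausdorff(A, B), 4))     # 0.1414, i.e. sqrt(0.02)
```

A full matcher evaluates this over candidate translations/scales, which is where the multi-resolution search saves time.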
