Found 20 similar documents; search took 171 ms
1.
To improve the recognition speed and recognition rate of sign language recognition, a sparse-coding method based on HOG features is proposed. Sign language recognition is formulated as a sparse representation problem via a supervised, discriminative, event-oriented dictionary learned from weighted local features. HOG features are extracted for every class of sign samples, and the LC-KSVD algorithm is used to learn an event-oriented, discriminative dictionary that maps the sample data into a sparse space. Given the differences between sample classes, the HOG features capture the information unique to each sign class more precisely; dictionary learning yields an over-complete dictionary, and the discriminative sparse-coding term, the reconstruction error, and the classification error are combined into an objective function that strengthens the discriminative power of the sparse representation during dictionary learning. Tests on a self-built dataset of 24 sign language classes show that, compared with several existing recognition methods, the proposed method achieves a higher recognition rate and shorter recognition time.
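The LC-KSVD step above encodes each HOG feature over a learned dictionary. As a minimal illustration of the sparse-coding half of that pipeline (not the authors' code; the toy dictionary, signal, and the `omp` routine are all assumptions), a greedy orthogonal matching pursuit in plain NumPy:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x as a k-sparse
    combination of the columns (atoms) of dictionary D."""
    residual = x.astype(float)
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit a least-squares solution on the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef

# toy over-complete dictionary (8-dim space, 20 unit-norm atoms)
rng = np.random.default_rng(0)
D = rng.normal(size=(8, 20))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(20)
true[[3, 11]] = [1.5, -2.0]          # a 2-sparse ground-truth code
x = D @ true
code = omp(D, x, k=2)
print(np.count_nonzero(code) <= 2)   # True: the code is at most 2-sparse
```

In LC-KSVD the dictionary itself is also optimized jointly with label-consistency and classification terms; this sketch only shows how a fixed dictionary turns a feature vector into a sparse code.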
2.
To date there has been relatively little research on continuous sign language sentence recognition, because sign words are hard to segment reliably. This paper uses a convolutional neural network to extract the hand-shape features of sign words and a trajectory normalization algorithm to extract their trajectory features, and on this basis builds a long short-term memory (LSTM) network, yielding a sign-word classifier for sentence recognition. For a sentence to be recognized, a segmentation algorithm based on the trajectory of the right palm detects transition movements. The transitions split the sentence into segments; since some transitions may occur inside a sign word, several segments are concatenated into compound segments, and the sign-word classifier is applied to all compound segments in level-traversal order. Finally, a dynamic programming algorithm with cross-segment search finds the word sequence of maximum posterior probability, completing recognition of the sentence. Experimental results show that the algorithm can recognize sentences composed of 47 common sign words with good accuracy and real-time performance.
3.
4.
Most work on dynamic sign language recognition targets isolated sign words; research on continuous sign sentences (and corresponding results) is scarce, because effective segmentation is difficult. A sign sentence recognition algorithm based on weighted key frames is proposed. Key frames can be regarded as the basic building blocks of sign words: the relevant words can be obtained from the key frames and assembled into a continuous sentence, avoiding the difficulty of segmenting the sentence directly. Using a motion-sensing device, an adaptive key-frame extraction algorithm based on the sign trajectory is first proposed; the key frames are then weighted according to the semantics they carry; finally, a recognition algorithm based on the weighted key-frame sequence produces the continuous sentence. Experiments show that the algorithm achieves real-time recognition of continuous sign sentences.
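Key-frame extraction of this kind often relies on the observation that the hand slows down, so trajectory samples cluster, at meaningful hand shapes. A hedged sketch of density-based key-frame picking (the toy trajectory, `radius`, and `min_gap` are invented values, not the paper's algorithm):

```python
import numpy as np

def keyframes_by_density(traj, radius, min_gap):
    """Pick key frames where the hand pauses: frames with the most
    neighbours within `radius` (high local point density), kept at
    least `min_gap` frames apart."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    density = (d < radius).sum(axis=1)
    order = np.argsort(-density, kind="stable")  # densest frames first
    picked = []
    for i in order:
        if all(abs(i - j) >= min_gap for j in picked):
            picked.append(int(i))
    return sorted(picked)

# toy trajectory: pause at (0,0), sweep right, pause at (5,0)
pause1 = np.zeros((10, 2))
sweep = np.stack([np.linspace(0.5, 4.5, 9), np.zeros(9)], axis=1)
pause2 = np.full((10, 2), [5.0, 0.0])
traj = np.vstack([pause1, sweep, pause2])
print(keyframes_by_density(traj, radius=0.3, min_gap=10))  # [0, 19]
```

The two picked frames sit at the start of each pause, i.e. where the toy gesture holds a hand shape; a real system would tune the radius to the sampling rate and hand speed.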
5.
To remove the visual interference caused by background, lighting, and similar factors in sign language recognition, low-redundancy skeleton data are used to represent sign information, and an end-to-end continuous sign language recognition model is designed. First, hand-shape and trajectory features are extracted within frames and between frames respectively, which effectively reduces the dispersion of the raw samples. Second, a series of parallel two-branch residual networks refines and fuses the hand-shape and trajectory features into spatio-temporal feature sequences. Finally, an attention-based encoder-decoder network maps the spatio-temporal feature sequences to translated text. A sign language dataset, LMSLR, based on 3-D hand skeleton data was collected with a Leap Motion device. Experimental results on LMSLR and the public CSL dataset show that the model achieves higher accuracy and lower computational cost than most video-based models.
6.
7.
8.
A dynamic sign can be described by its trajectory and key hand shapes. Extensive experimental statistics show that most common signs can be recognized by matching their trajectory curves alone, so a hierarchical matching algorithm for dynamic sign recognition is proposed. First, gesture trajectories are captured with a motion-sensing device, and a key-frame detection algorithm based on the point-density distribution of the trajectory extracts the key hand shapes; combined with the curve features of the trajectory, this describes the dynamic sign precisely. An optimized dynamic time warping (DTW) algorithm then performs first-level matching, i.e., trajectory matching. If a result is obtained at this stage, recognition ends; otherwise second-level matching is applied to the key hand shapes to produce the final result. Experiments show that the algorithm is both real-time and highly accurate.
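The first-level trajectory matching above is DTW. A minimal NumPy implementation of the textbook recurrence (the paper's optimized variant is not specified, so this is only the plain form):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two trajectories of shape (N, 2)
    and (M, 2): cost of the best monotonic alignment of their points."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessors
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

# the same semicircular path sampled at two different signing speeds
t_fast = np.linspace(0, np.pi, 20)
t_slow = np.linspace(0, np.pi, 35)
curve_a = np.stack([np.cos(t_fast), np.sin(t_fast)], axis=1)
curve_b = np.stack([np.cos(t_slow), np.sin(t_slow)], axis=1)
curve_c = -curve_a  # a mirrored, i.e. different, gesture
print(dtw_distance(curve_a, curve_b) < dtw_distance(curve_a, curve_c))  # True
```

Because DTW aligns points monotonically rather than index-by-index, the slow and fast renditions of the same curve stay close, which is exactly the speed-invariance the trajectory-matching stage needs.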
9.
Computer-vision-based sign language recognition can greatly facilitate bilingual teaching in schools for the deaf. In recent years, with the rapid development of deep learning, the accuracy and speed of sign language recognition have improved dramatically. Unlike methods that use color markers or external technology (such as Kinect palm-center localization), an improved SSD (Single-Shot Multibox Detector) network is proposed to detect gestures and accomplish Chinese sign language recogni...
10.
To improve handwriting recognition accuracy across different writing styles, a deep convolutional neural network is combined with an autoencoder, and the number of layers of the convolutional autoencoder is designed to form a deep convolutional autoencoding network. First, bilinear interpolation is applied to preprocess both the MNIST dataset and 10,000 self-collected images of handwritten digits from Chinese university students. The network is first trained and tested on MNIST alone; it is then retrained on MNIST mixed with 5,000 images from the self-collected set and tested on the remaining 5,000. Experimental results show that the proposed network reaches 99.37% accuracy on the MNIST test set, an effective improvement, and 99.33% on the 5,000 self-collected test images, indicating that the algorithm is practical and achieves high recognition accuracy across different writing styles.
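The preprocessing step above resizes digit images with bilinear interpolation. A self-contained sketch of that operation (an align-corners mapping is chosen for simplicity; the paper does not specify its exact variant):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation,
    mapping output corners onto input corners (align-corners style)."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # blend the four surrounding pixels: first along x, then along y
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear_resize(img, 3, 3))
# [[ 0.  5. 10.]
#  [10. 15. 20.]
#  [20. 25. 30.]]
```

On this 2x2 example the upsampled midpoints are exact averages of their neighbours, which is the defining property of the bilinear blend.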
11.
12.
A Chinese sign language recognition system based on SOFM/SRN/HMM (total citations: 3; self-citations: 0; by others: 3)
In sign language recognition (SLR), the major challenge is developing methods that solve the signer-independent continuous-sign problem. In this paper, SOFM/HMM is first presented for modeling signer-independent isolated signs. The proposed method uses self-organizing feature maps (SOFM) as a feature extractor across different signers for continuous hidden Markov models (HMM), transforming input signs into significant, low-dimensional representations that can be well modeled by the emission probabilities of the HMM. Based on these isolated sign models, a SOFM/SRN/HMM model is then proposed for signer-independent continuous SLR. This model applies an improved simple recurrent network (SRN) to segment continuous sign language in terms of the transformed SOFM representations, and the outputs of the SRN are taken as the HMM states, in which the lattice Viterbi algorithm is employed to search for the best-matched word sequence. Experimental results demonstrate that the proposed system outperforms a conventional HMM system, obtaining a word recognition rate of 82.9% over a 5113-sign vocabulary and an accuracy of 86.3% for signer-independent continuous SLR.
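The lattice Viterbi search mentioned above is, at its core, standard Viterbi decoding of an HMM. A log-domain sketch on a toy discrete HMM (the model parameters are invented for illustration, not taken from the paper):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state path of a discrete HMM, in log domain.
    pi: (S,) initial probs, A: (S,S) transitions, B: (S,V) emissions,
    obs: (T,) observed symbol indices."""
    logpi, logA, logB = (np.log(m) for m in (pi, A, B))
    T, S = len(obs), len(pi)
    delta = logpi + logB[:, obs[0]]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA       # scores[i, j]: state i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # follow back-pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy 2-state model: state 0 prefers symbol 0, state 1 prefers symbol 1
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = np.array([0, 0, 1, 1, 0])
print(viterbi(pi, A, B, obs))  # [0, 0, 1, 1, 0]
```

In the SOFM/SRN/HMM system the "observations" are the SRN outputs rather than raw symbols, and the search runs over a word lattice, but the max-plus dynamic programming is the same.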
13.
This paper presents an automatic Australian sign language (Auslan) recognition system, which tracks multiple target objects (the face and hands) throughout an image sequence and extracts features for the recognition of sign phrases. Tracking is performed using correspondences of simple geometrical features between the target objects within the current and the previous frames. In signing, the face and a hand of a signer often overlap, thus the system needs to segment these for the purpose of feature extraction. Our system deals with the occlusion of the face and a hand by detecting the contour of the foreground moving object using a combination of motion cues and the snake algorithm. To represent signs, features that are invariant to scaling, 2D rotations and signing speed are used for recognition. The features represent the relative geometrical positioning and shapes of the target objects, as well as their directions of motion. These are used to recognise Auslan phrases using Hidden Markov Models. Experiments were conducted using 163 test sign phrases with varying grammatical formations. Using a known grammar, the system achieved over 97% recognition rate on a sentence level and 99% success rate at a word level.
14.
Gaolin Fang, Wen Gao, Debin Zhao. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 2004, 34(3): 305-314
The major difficulty for large-vocabulary sign recognition lies in the huge search space created by the variety of recognized classes. How to reduce recognition time without loss of accuracy is a challenging issue. In this paper, a fuzzy decision tree with heterogeneous classifiers is proposed for large-vocabulary sign language recognition. Since each sign feature discriminates gestures differently, corresponding classifiers are presented for hierarchical decisions on sign language attributes. A one-/two-handed classifier and a hand-shape classifier with little computational cost are first used to progressively eliminate many impossible candidates. Then a self-organizing feature maps/hidden Markov model (SOFM/HMM) classifier, in which the SOFM serves as an implicit feature extractor across different signers for the continuous HMM, is proposed as a special component of the fuzzy decision tree to produce the final results at the last non-leaf nodes, which contain only a few candidates. Experimental results on a large vocabulary of 5113 signs show that the proposed method reduces recognition time by a factor of 11 and also improves the recognition rate by about 0.95% over a single SOFM/HMM.
15.
Key-frame-based multi-level classification for sign language recognition* (total citations: 6; self-citations: 1; by others: 6)
A multi-level classification sign language recognition method based on key-frame recognition is proposed, using an HDR (hierarchical discriminant regression) / DTW (dynamic time warping) template-matching multi-level classification scheme. Since a sign is expressed over multiple frames, the SIFT (scale-invariant feature transform) algorithm is used to locate the key frames of a sign word and extract their feature vectors. HDR is applied to the key frames to narrow the search range, and DTW then matches the features of the sign word to be recognized against every sign word within that range, taking the one with the highest probability as the result. At the same recognition rate, this method is about 8.2% faster than HMM-based recognition, and it addresses the rapid drop in recognition rate that template matching suffers on large vocabularies.
16.
In this paper we focus on appearance features, particularly Local Binary Patterns, describing the manual component of sign language. We compare the performance of these features with geometric moments describing the trajectory and shape of the hands. Since the non-manual component is also very important for sign recognition, we localize facial landmarks via an Active Shape Model combined with a landmark detector that increases the robustness of model fitting. We test the recognition performance of individual features and their combinations on a database of 11 signers and 23 signs with several repetitions. Local Binary Patterns outperform the geometric moments. When the features are combined, we achieve a recognition rate of up to 99.75% for signer-dependent tests and 57.54% for signer-independent tests.
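The Local Binary Patterns used above can be computed with nothing but array slicing. A basic 3x3 LBP sketch (the uniform and rotation-invariant refinements the literature often uses are omitted):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Patterns: each interior pixel becomes an
    8-bit code, one bit per neighbour that is >= the centre value."""
    # neighbour offsets, clockwise from the top-left pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        # the same interior window, shifted by the neighbour offset
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= ((nb >= centre).astype(np.uint8) << bit)
    return code

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]])
print(lbp_codes(img))  # all 8 neighbours >= centre, so the code is 255
```

For recognition, the codes over a hand region are usually pooled into a 256-bin histogram, which is the actual appearance feature fed to the classifier.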
17.
18.
To recognize sign language alphabet letters accurately in sign videos, an algorithm based on DI_CamShift and SLVW is proposed. A Kinect serves as the capture device, providing depth information along with the color video. The principal-axis direction angle and centroid position of the signing hand are computed from the depth image, and the search window is adjusted to track the gesture accurately. An Otsu algorithm based on the depth integral image segments the gesture, and its SIFT features are extracted. A bag of SLVW (sign language visual words) is built as the sign feature and classified with an SVM. Experiments show a best recognition rate of 99.87% for a single sign letter and an average recognition rate of 96.21%.
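The SLVW bag-of-words step above quantizes local descriptors (SIFT in the paper) against a visual vocabulary and feeds the resulting histogram to the SVM. A toy sketch of the quantization step (2-D descriptors and a 3-word vocabulary stand in for 128-D SIFT and a learned codebook):

```python
import numpy as np

def bovw_histogram(descriptors, vocab):
    """Assign each local descriptor to its nearest visual word and
    return the normalised word histogram used as the image feature."""
    # squared distances between every descriptor and every visual word
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

vocab = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0]])   # 3 words
descs = np.array([[0.1, -0.2], [9.8, 10.1], [10.2, 9.9], [0.3, 0.1]])
print(bovw_histogram(descs, vocab))  # [0.5 0.5 0. ]
```

In the full pipeline the vocabulary comes from clustering (e.g. k-means) over training-set SIFT descriptors, and the histograms from many frames are what the SVM is trained on.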
19.
Multimedia Tools and Applications - Real-time sign language translation systems that convert continuous sign sequences to text/speech will facilitate communication between the deaf-mute community...
20.
Starner, T., Weaver, J., Pentland, A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(12): 1371-1375
We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk-mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.