Continuous Sign Language Recognition Based on CM-Transformer

Cite this article: Ye Kang, Zhang Shujun, Guo Qi, Li Hui, Cui Xuehong. Continuous Sign Language Recognition Based on CM-Transformer [J]. Journal of Beijing University of Posts and Telecommunications, 2022, 45(5): 49.
Authors: Ye Kang, Zhang Shujun, Guo Qi, Li Hui, Cui Xuehong
Affiliation: College of Information Science and Technology, Qingdao University of Science and Technology
Funding: Key Research and Development Program of Shandong Province
Abstract: To capture both the global and the local features of sign language actions while preserving the original image structure and contextual relations, an improved convolution-multilayer-perceptron self-attention (CM-Transformer) model is proposed for continuous sign language recognition. CM-Transformer combines the structural-consistency advantage of convolutional layers with the global modeling capability of the self-attention encoder to capture long-range sequence dependencies. The feed-forward layer of the self-attention model is replaced with a multilayer perceptron to exploit its translation invariance and locality. Random frame dropping and random gradient stopping are used to reduce the training computation in time and space and to prevent overfitting, yielding a computationally efficient, lightweight network. Finally, a connectionist temporal classification (CTC) decoder aligns the input and output sequences to produce the final recognition result. Experimental results on two large benchmark datasets demonstrate the effectiveness of the proposed method.
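The abstract describes the architecture only at a high level. Below is a minimal sketch, assuming PyTorch and pre-extracted per-frame features, of two of the ideas it names: an encoder block pairing self-attention (global sequence modeling) with an MLP in place of the usual Transformer feed-forward layer, plus a convolutional stem and random frame dropping. Every module name, dimension, and rate here is an illustrative assumption, not the authors' implementation; random gradient stopping is omitted.

```python
# Hypothetical CM-Transformer-style encoder sketch; sizes and the 0.5
# frame-drop rate are invented for illustration.
import torch
import torch.nn as nn

class CMTransformerBlock(nn.Module):
    """Self-attention for long-range dependence, plus an MLP (replacing the
    usual feed-forward layer) for translation invariance and locality."""
    def __init__(self, dim=512, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):                    # x: (batch, frames, dim)
        h = self.norm1(x)
        a, _ = self.attn(h, h, h)
        x = x + a                             # global sequence modeling
        return x + self.mlp(self.norm2(x))    # local, translation-invariant mixing

class CMTransformer(nn.Module):
    def __init__(self, dim=512, num_gloss=1000, depth=2, frame_drop=0.5):
        super().__init__()
        # Convolutional stem: structural consistency over the frame sequence.
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList(CMTransformerBlock(dim) for _ in range(depth))
        self.head = nn.Linear(dim, num_gloss)   # per-frame gloss logits for CTC
        self.frame_drop = frame_drop

    def forward(self, x):                     # x: (batch, frames, dim)
        if self.training and self.frame_drop > 0:
            # Random frame dropping: train on a random temporal subset to cut
            # computation and regularize against overfitting.
            t = x.size(1)
            keep = torch.randperm(t)[: int(t * (1 - self.frame_drop))].sort().values
            x = x[:, keep]
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)
        for blk in self.blocks:
            x = blk(x)
        return self.head(x)
```

The CTC alignment the abstract mentions can then be applied on top of the per-frame logits; a usage sketch with PyTorch's built-in nn.CTCLoss (again with invented batch shapes and gloss counts):

```python
model = CMTransformer()
feats = torch.randn(2, 64, 512)               # 2 clips, 64 frames, 512-d features
log_probs = model(feats).log_softmax(-1).transpose(0, 1)  # (T, N, C) for CTCLoss
glosses = torch.randint(1, 1000, (2, 10))     # dummy gloss targets, 0 = blank
loss = nn.CTCLoss(blank=0)(
    log_probs, glosses,
    input_lengths=torch.full((2,), log_probs.size(0), dtype=torch.long),
    target_lengths=torch.full((2,), 10, dtype=torch.long),
)
```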

Received: 2021-11-01
Revised: 2022-01-05
