Similar Documents
18 similar documents found.
1.
To address the problem that confusable speech sounds share similar articulation mechanisms, are easily confused by the ear, and are easily misrecognized by intelligent machines, this paper designs a bimodal confusable-speech database and extracts different features for classification studies. The database covers two modalities, speech signals and articulator-movement signals, with 6300 speech recordings and 1268 movement-signal recordings in total. Acoustic and kinematic features are extracted and fused at the feature level, and kernel principal component analysis (KPCA) is applied for dimensionality reduction to obtain bimodal fused features...
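A minimal sketch of the feature-level fusion plus kernel PCA step described above, using scikit-learn; the feature dimensions and the RBF kernel choice are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

n_samples = 200
acoustic = np.random.randn(n_samples, 39)    # stand-in acoustic features (e.g. MFCC-style)
kinematic = np.random.randn(n_samples, 12)   # stand-in articulator-movement features

fused = np.concatenate([acoustic, kinematic], axis=1)   # feature-level fusion
kpca = KernelPCA(n_components=20, kernel="rbf")         # kernel choice is an assumption
fused_low_dim = kpca.fit_transform(fused)               # bimodal fused features
print(fused_low_dim.shape)  # (200, 20)
```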

2.
徐亮, 王晶, 杨文镜, 罗逸雨. 《信号处理》, 2021, 37(10): 1799-1805
Audio-visual multimodal modeling has been shown to perform well on speech separation tasks. This paper proposes a speech separation model that improves existing time-domain audio-visual joint separation algorithms by strengthening the coupling between the audio and visual streams. To address the weak coupling of existing audio-visual separation models, the authors propose an end-to-end separation model that fuses the speech features with the additionally input visual features multiple times in the time domain and adds depth-wise weight sharing. Experimental results on the GRID dataset show that, compared with the audio-only time-domain convolutional separation network (Conv-TasNet) and the audio-visual Conv-TasNet, the proposed network achieves improvements of 1.2 dB and 0.4 dB, respectively.
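As a rough illustration of fusing visual features with time-domain speech features, the following PyTorch sketch upsamples video features to the audio frame rate and merges them with a 1x1 convolution; the layer widths and the single fusion step are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseBlock(nn.Module):
    def __init__(self, audio_ch=256, video_ch=128):
        super().__init__()
        # project the concatenated channels back to the audio feature width
        self.proj = nn.Conv1d(audio_ch + video_ch, audio_ch, kernel_size=1)

    def forward(self, audio_feat, video_feat):
        # audio_feat: (B, audio_ch, T_audio); video_feat: (B, video_ch, T_video)
        video_up = F.interpolate(video_feat, size=audio_feat.shape[-1],
                                 mode="nearest")          # align time axes
        fused = torch.cat([audio_feat, video_up], dim=1)  # channel-wise concat
        return self.proj(fused)

fuse = FuseBlock()
out = fuse(torch.randn(2, 256, 3000), torch.randn(2, 128, 75))
print(out.shape)  # torch.Size([2, 256, 3000])
```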

3.
肖易明, 张海剑, 孙洪, 丁昊. 《信号处理》, 2019, 35(12): 1969-1978
In daily life, visual events are usually accompanied by sound, which suggests a latent relationship between the video stream and the audio, referred to in this paper as the joint representation of audio-visual synchrony. This paper fuses the video stream with the audio and learns this joint representation by training the designed neural network to predict whether the video and audio are temporally synchronized. Unlike traditional audio-visual fusion methods, an attention mechanism is introduced: the Pearson correlation coefficient between video and audio features is used to weight the video stream along both the temporal and spatial dimensions, tying the video stream more closely to the audio. Based on the learned joint representation of audio-visual synchrony, the class activation map (CAM) method is further used to localize sound sources in the video. Experimental results show that the proposed attention-based audio-visual synchrony detection model better determines whether the audio and video of a given clip are synchronized, i.e., it learns a better joint representation, and can therefore also localize video sound sources effectively.
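A simplified stand-in for the correlation-based attention described above: each video frame's feature is weighted by its Pearson correlation with the audio feature at the same time step; both streams are assumed to be already projected to a common dimension.

```python
import numpy as np

def temporal_attention(video_feat, audio_feat, eps=1e-8):
    """video_feat, audio_feat: (T, D) arrays projected to a common dimension D."""
    v = video_feat - video_feat.mean(axis=1, keepdims=True)
    a = audio_feat - audio_feat.mean(axis=1, keepdims=True)
    # Pearson correlation per time step between the two D-dimensional vectors
    r = (v * a).sum(axis=1) / (np.linalg.norm(v, axis=1) *
                               np.linalg.norm(a, axis=1) + eps)
    w = np.exp(r) / np.exp(r).sum()     # normalize to temporal attention weights
    return video_feat * w[:, None]      # reweighted video stream

T, D = 50, 128
weighted = temporal_attention(np.random.randn(T, D), np.random.randn(T, D))
print(weighted.shape)  # (50, 128)
```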

4.
Building on traditional models of articulatory lip motion, this paper constructs a spatio-temporal model of lip motion during speech. It proposes measures of the correlation between speech and the temporal and spatial characteristics of lip motion, as well as a speech/lip-motion consistency detection method that fuses the temporal and spatial measures. The temporal consistency score between speech and lip motion is obtained from the relationship between lip width, lip height, and changes in audio amplitude; the initial correlation between speech and spatial lip features is obtained by co-inertia analysis, and a correction method is proposed for the natural delay between speech and lip motion; finally, the temporal and spatial scores are fused to decide whether the speech and lip motion are consistent. Preliminary experiments show that, on four types of inconsistent audio-visual data, the EER (Equal Error Rate) is reduced by about 8.2% on average compared with the commonly used co-inertia method.
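The temporal consistency idea can be illustrated by correlating a lip-height track with the audio amplitude envelope over a small range of lags to absorb the natural audio-visual delay; the lag range and score definition below are illustrative, not the paper's exact measure.

```python
import numpy as np

def consistency_score(lip_height, audio_amp, max_lag=5):
    """Maximum correlation between the lip-height track and the audio amplitude
    envelope over a small set of lags (delay compensation); illustrative only."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = lip_height[lag:], audio_amp[:len(audio_amp) - lag]
        else:
            x, y = lip_height[:lag], audio_amp[-lag:]
        if len(x) > 1:
            r = np.corrcoef(x, y)[0, 1]
            best = max(best, r)
    return best

score = consistency_score(np.random.rand(100), np.random.rand(100))
print(score)
```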

5.
林淑瑞, 张晓辉, 郭敏, 张卫强, 王贵锦. 《信号处理》, 2021, 37(10): 1889-1898
In recent years, affective computing has become key to breakthroughs in human-computer interaction, and emotion recognition, an important part of affective computing, has received wide attention. This paper implements a facial expression recognition system based on ResNet18 and a speech emotion recognition model based on the HGFM architecture, and trains well-performing models by tuning their parameters. On this basis, a multimodal emotion recognition system covering video and audio signals is built using two fusion strategies, feature-level fusion and decision-level fusion, demonstrating the performance advantage of multimodal emotion recognition. Under both fusion strategies, the audio-visual models improve accuracy over the video-only and audio-only modalities, confirming that a multimodal model usually recognizes emotion better than the best single-modality model. The fused audio-visual bimodal model reaches an accuracy of 76.84%, a 3.50% improvement over the existing best model, and compares favorably with existing audio-visual emotion recognition models.
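Decision-level fusion of the two unimodal emotion models can be sketched as a weighted average of their class posteriors; the weight value below is an assumption, not the paper's tuned setting.

```python
import numpy as np

def decision_fusion(video_probs, audio_probs, alpha=0.6):
    """Weighted average of per-class posteriors from the two unimodal models;
    alpha is an illustrative weight."""
    fused = alpha * video_probs + (1 - alpha) * audio_probs
    return int(np.argmax(fused))

video_probs = np.array([0.1, 0.7, 0.2])   # e.g. face-expression model posteriors
audio_probs = np.array([0.2, 0.5, 0.3])   # e.g. speech-emotion model posteriors
print(decision_fusion(video_probs, audio_probs))  # 1
```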

6.
This paper studies the relationship between the adiabatic transfer of electronic population and the uncorrelated field when a cascade three-level atom interacts non-resonantly with an uncorrelated two-mode coherent-state field in a high-Q cavity filled with a Kerr medium. Numerical calculations show that the phenomenon persists when the cascade three-level atom interacts with the uncorrelated two-mode coherent state, but it exhibits new features different from the interaction between the atom and a correlated field.

7.
In bimodal emotion recognition combining electroencephalogram (EEG) signals and facial images, two challenges commonly arise: (1) how to learn more salient emotional-semantic features from EEG signals in an end-to-end manner; and (2) how to fully exploit the bimodal information to capture the consistency and complementarity of emotional semantics across the two modalities. To this end, a bimodal emotion recognition model with adaptive integration of multi-level spatio-temporal features and specific-shared feature fusion is proposed. On one hand, to obtain more salient emotional-semantic features from EEG signals, a multi-level spatio-temporal feature adaptive integration module is designed: it first captures the spatio-temporal features of the EEG signal with a two-stream structure, then weights and integrates the features of each level by feature similarity, and finally uses a gating mechanism to adaptively learn the relatively important emotional features at each level. On the other hand, to mine the consistency and complementarity of emotional semantics between EEG signals and facial images, a specific-shared feature fusion module is designed, which jointly learns emotional-semantic features through specific-feature learning and shared-feature learning, combined with a loss function that enables automatic extraction of modality-specific semantic information and inter-modality shared semantic information. The model is evaluated on the DEAP and MAHNOB-HCI datasets using both cross-experiment validation and 5-fold cross-validation. Experimental results show that the model achieves competitive results and provides an effective solution for bimodal emotion recognition based on EEG signals and facial images.
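A hedged PyTorch sketch of the gated integration idea: features from several levels are stacked and a learned gate produces per-level weights; the dimensions and the gating form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedLevelFusion(nn.Module):
    """Learn a gate over multi-level features and return their weighted sum."""
    def __init__(self, dim=128, n_levels=3):
        super().__init__()
        self.gate = nn.Linear(dim * n_levels, n_levels)

    def forward(self, level_feats):                          # list of (B, dim) tensors
        stacked = torch.stack(level_feats, dim=1)            # (B, n_levels, dim)
        weights = torch.softmax(self.gate(stacked.flatten(1)), dim=-1)  # (B, n_levels)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (B, dim)

fusion = GatedLevelFusion()
out = fusion([torch.randn(4, 128) for _ in range(3)])
print(out.shape)  # torch.Size([4, 128])
```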

8.
Robust speech recognition: current status and future prospects   (Cited by: 12; self-citations: 0; other citations: 12)
姚文冰, 姚天任, 韩涛. 《信号处理》, 2001, 17(6): 484-493
After briefly describing the background of robust speech recognition, this paper focuses on its main techniques, the current state of research at home and abroad, and future directions. It first outlines the sources of interference that degrade speech quality and affect the robustness of speech recognition systems, then introduces the technical approaches and state of development of speech enhancement, robust speech feature extraction, feature- and model-based compensation, microphone arrays, auditory processing modeled on the human ear, and audio-visual bimodal speech recognition. Finally, future directions of robust speech recognition are discussed.

9.
To address the poor accuracy and robustness of single-biometric recognition, a multimodal biometric recognition method based on the total error rate (TER) and adaptive feature-correlation fusion is proposed. First, the TER is introduced into multimodal recognition as the discriminative feature, replacing the traditional matching score. Second, building on uncertainty measurement theory and considering the spatio-temporal correlation between facial and speech features, a multi-feature adaptive fusion strategy based on feature correlation is proposed, in which the feature-correlation coefficient adaptively adjusts the contribution of each feature to the recognition result. Simulation experiments show that, compared with several representative fusion algorithms, the proposed fusion scheme effectively improves the accuracy and robustness of the multi-biometric recognition system.
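One simple way to illustrate TER-driven fusion is to give each modality a weight inversely related to its total error rate; the paper additionally modulates the weights with a feature-correlation coefficient, which this sketch omits.

```python
import numpy as np

def ter_based_weights(face_ter, voice_ter, eps=1e-8):
    """Weight each modality inversely to its total error rate (TER); the
    correlation-based adjustment from the paper is not modeled here."""
    inv = np.array([1.0 / (face_ter + eps), 1.0 / (voice_ter + eps)])
    return inv / inv.sum()

w_face, w_voice = ter_based_weights(0.08, 0.15)
print(round(w_face, 3), round(w_voice, 3))
```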

10.
In recent years, emotion recognition has become a hot topic in human-computer interaction, and multimodal dimensional emotion recognition, which can detect subtle emotional changes, has attracted increasing attention. Multimodal dimensional emotion recognition must consider how to effectively fuse emotional information from different modalities. Feature-level fusion struggles with effective feature extraction and modality synchronization, while decision-level fusion struggles with associating feature information across modalities; this paper therefore adopts a model-level fusion strategy and proposes a multimodal dimensional emotion recognition method based on multi-head attention. An audio model, a video model, and a multimodal fusion model are built to learn deep features from the information streams, and the result is fed into a bidirectional long short-term memory network to obtain the final emotion predictions. Compared with various baselines, the proposed method achieves the best performance on both arousal and valence, capturing emotional information effectively at a high level and thus fusing the audio-visual information more effectively.
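A compact PyTorch sketch of model-level fusion with multi-head cross-attention followed by a bidirectional LSTM regressor for arousal and valence; module sizes and the single attention layer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AVFusionRegressor(nn.Module):
    """Cross-attend audio features to video features, then regress
    frame-level arousal/valence with a bidirectional LSTM."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.bilstm = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(dim, 2)               # arousal, valence per frame

    def forward(self, audio_feat, video_feat):      # both (B, T, dim)
        fused, _ = self.attn(audio_feat, video_feat, video_feat)
        out, _ = self.bilstm(fused)
        return self.head(out)                       # (B, T, 2)

model = AVFusionRegressor()
pred = model(torch.randn(2, 50, 128), torch.randn(2, 50, 128))
print(pred.shape)  # torch.Size([2, 50, 2])
```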

11.
Audio-visual integration in multimodal communication   (Cited by: 7; self-citations: 0; other citations: 7)
We review recent research that examines audio-visual integration in multimodal communication. The topics include bimodality in human speech, human and automated lip reading, facial animation, lip synchronization, joint audio-video coding, and bimodal speaker verification. We also study the enabling technologies for these research topics, including automatic facial-feature tracking and audio-to-visual mapping. Recent progress in audio-visual research shows that joint processing of audio and video provides advantages that are not available when the audio and video are processed independently.

12.
Recent advances in the automatic recognition of audiovisual speech   (Cited by: 11; self-citations: 0; other citations: 11)
Visual speech information from the speaker's mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human-computer interface. In this paper, we review the main components of audiovisual automatic speech recognition (ASR) and present novel contributions in two main areas: first, the visual front-end design, based on a cascade of linear image transforms of an appropriate video region of interest, and subsequently, audiovisual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audiovisual speech asynchrony, and incorporating modality reliability estimates into the bimodal recognition process. We also briefly touch upon the issue of audiovisual adaptation. We apply our algorithms to three multisubject bimodal databases, ranging from small- to large-vocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves ASR over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.
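Decision fusion with modality reliability estimates is often written as a stream-weighted combination of per-class log-likelihoods; the sketch below shows that generic form, not the paper's specific integration scheme.

```python
import numpy as np

def stream_weighted_score(logp_audio, logp_video, audio_reliability):
    """Combine per-class log-likelihoods with a stream weight derived from an
    audio reliability estimate in [0, 1]; a generic decision-fusion form."""
    lam = audio_reliability
    return lam * logp_audio + (1.0 - lam) * logp_video

logp_a = np.log(np.array([0.6, 0.3, 0.1]))
logp_v = np.log(np.array([0.4, 0.4, 0.2]))
fused = stream_weighted_score(logp_a, logp_v, audio_reliability=0.7)
print(int(np.argmax(fused)))  # index of the winning class
```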

13.
Emotion recognition is a hot research topic in modern intelligent systems. The technique is pervasively used in autonomous vehicles, remote medical services, and human–computer interaction (HCI). Traditional speech emotion recognition algorithms do not generalize well because they assume that training and testing data come from the same domain and share the same distribution. In practice, however, speech data is acquired from different devices and recording environments, and may therefore differ significantly in language, emotion types, and labels. To solve this problem, we propose a bimodal fusion algorithm for speech emotion recognition in which facial expression and speech information are optimally fused. We first combine a CNN and an RNN for facial emotion recognition. We then use MFCCs to convert the speech signal into images, so that an LSTM and a CNN can recognize speech emotion. Finally, we apply a weighted decision fusion method to fuse facial expression and speech signals for emotion recognition. Comprehensive experimental results demonstrate that, compared with uni-modal emotion recognition, bimodal feature-based emotion recognition achieves better performance.
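The MFCC-to-image step can be sketched with librosa: compute an MFCC matrix from the waveform and rescale it to [0, 1] so a CNN/LSTM pipeline can treat it as an image; the synthetic waveform and parameter values are placeholders.

```python
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 220 * t).astype(np.float32)      # stand-in waveform
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)             # (40, n_frames)
mfcc_img = (mfcc - mfcc.min()) / (mfcc.max() - mfcc.min() + 1e-8)   # scale to [0, 1] "image"
print(mfcc_img.shape)
```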

14.
Of great importance to the success of the articulatory approach to speech coding is the use of a good distortion measure between a given speech signal and the entries in a stored codebook of impulse responses and corresponding vocal-tract shapes (articulatory codebook). One promising distortion measure is the weighted cepstral distortion. Since the impulse responses in the articulatory codebook do not include glottal characteristics, the authors derive optimal weighting functions (cepstral lifters) to reduce the influence of a varying glottal source on the cepstral distortion measure. This is done by examining the ensemble of cepstral coefficients of speech produced by an articulatory speech synthesizer that also includes a vocal-cord model. The obtained cepstral lifters are optimal for the given ensemble of cepstral coefficients and for given constraints on the weighting function. They differ for cepstral coefficients derived from the power spectrum (FFT cepstra) and for those derived from LPC (linear predictive coding) coefficients (LPC cepstra). The performances of the obtained cepstral lifters are compared in an articulatory codebook search.
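The weighted cepstral distortion has the form d = Σ_k w_k (c1_k − c2_k)², where w_k is a lifter; the sketch below uses the standard raised-sine lifter as a stand-in for the paper's optimal lifters.

```python
import numpy as np

def weighted_cepstral_distortion(c1, c2, lifter):
    """Liftered (weighted) squared distance between two cepstral vectors."""
    return float(np.sum(lifter * (c1 - c2) ** 2))

K = 12
k = np.arange(1, K + 1)
lifter = 1.0 + (K / 2.0) * np.sin(np.pi * k / K)   # generic raised-sine lifter
d = weighted_cepstral_distortion(np.random.randn(K), np.random.randn(K), lifter)
print(d)
```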

15.
The asynchrony between speech and lip motion is a key problem in multimodal fusion speech recognition. This paper first introduces a multi-stream asynchronous dynamic Bayesian network (MS-ADBN) model, which describes the asynchrony between the audio and video streams at the word level; both streams use a word-phone hierarchy. The multi-stream multi-state asynchronous DBN (MM-ADBN) model extends MS-ADBN, with both streams using a word-phone-state hierarchy. In essence, MS-ADBN is a whole-word model, whereas MM-ADBN is a phone model suitable for large-vocabulary continuous speech recognition. Experimental results on a continuous audio-visual database show that, in a clean speech environment, MM-ADBN improves the recognition rate by 35.91% over MS-ADBN and by 9.97% over the multi-stream HMM.

16.
To achieve audio-visual speech recognition together with accurate phone segmentation of the audio and video streams, this paper proposes a new multi-stream asynchronous triphone dynamic Bayesian network (MM-ADBN-TRI) model, which describes the asynchrony between the audio and video streams at the word level. Both streams use a word-triphone-state-observation hierarchy, with the triphone as the recognition unit, capturing coarticulation in continuous speech. Experimental results show that the model performs well in audio-visual speech recognition, in phone segmentation of the audio and video streams, and in determining the asynchrony between them.

17.
A model of articulatory dynamics and control   (Cited by: 1; self-citations: 0; other citations: 1)
A model of human articulation is described whose spatial and dynamic characteristics closely match those of natural speech. The model includes a controller that embodies enough articulatory "motor skill" to produce, from discrete phonetic strings, properly timed sequences of articulatory movements. Together with programs for dictionary searching and rules for duration and other phonetic variables, the model can produce reasonably acceptable synthetic speech from ordinary English text.

18.
Speech-driven facial animation combines techniques from different disciplines such as image analysis, computer graphics, and speech analysis. Active shape models (ASM) used in image analysis are excellent tools for characterizing lip contour shapes and approximating their motion in image sequences. By controlling the coefficients for an ASM, such a model can also be used for animation. We design a mapping of the articulatory parameters used in phonetics into ASM coefficients that control nonrigid lip motion. The mapping is designed to minimize the approximation error when articulatory parameters measured on training lip contours are taken as input to synthesize the training lip movements. Since articulatory parameters can also be estimated from speech, the proposed technique can form an important component of a speech-driven facial animation system.
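The articulatory-to-ASM mapping can be illustrated as a least-squares linear map fitted on training frames so that measured articulatory parameters reproduce the training lip-shape coefficients; the purely linear form and the dimensions are assumptions.

```python
import numpy as np

n_frames, n_artic, n_asm = 500, 6, 10
A = np.random.randn(n_frames, n_artic)       # articulatory parameters per frame
C = np.random.randn(n_frames, n_asm)         # ASM coefficients measured on training lips

W, *_ = np.linalg.lstsq(A, C, rcond=None)    # (n_artic, n_asm) linear mapping
C_hat = A @ W                                # synthesized ASM coefficients
print(np.mean((C - C_hat) ** 2))             # training approximation error
```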
