Similar documents
18 similar documents retrieved (search time: 140 ms)
1.
Objective: Most current video emotion recognition methods rely only on facial expressions and ignore the affective information carried by the physiological signals hidden in facial video. This paper proposes a bimodal video emotion recognition method based on facial expressions and the blood volume pulse (BVP) physiological signal. Methods: The video is first preprocessed to obtain the facial video; two spatio-temporal expression features, LBP-TOP and HOG-TOP, are extracted from it, and video color magnification is used to recover the BVP signal, from which physiological emotion features are extracted. The two kinds of features are fed into BP (back-propagation) classifiers to train classification models, and fuzzy-integral decision-level fusion produces the final emotion recognition result. Results: In experiments on a facial-video emotion database built in our laboratory, the expression-only and physiological-only modalities achieve average recognition rates of 80% and 63.75%, respectively, while the fused result reaches 83.33%, higher than either single modality, demonstrating the effectiveness of the bimodal fusion. Conclusion: The proposed bimodal spatio-temporal feature fusion method makes fuller use of the affective information in video and effectively improves video emotion classification; comparative experiments with similar video emotion recognition algorithms confirm its superiority. In addition, the fuzzy-integral decision-level fusion algorithm effectively suppresses the interference of unreliable decision information, yielding better recognition accuracy.
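As a concrete illustration of fuzzy-integral decision-level fusion, the sketch below combines the per-class confidences of two unimodal classifiers with a Sugeno fuzzy integral over a lambda-fuzzy measure. The choice of the Sugeno form, the confidence values, and the fuzzy densities (set roughly to the unimodal accuracies) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: Sugeno fuzzy-integral fusion of two modality confidences.
import numpy as np
from scipy.optimize import brentq

def lambda_measure(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the lambda-fuzzy measure."""
    g = np.asarray(densities, dtype=float)
    if np.isclose(g.sum(), 1.0):
        return 0.0
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    lo, hi = (-0.9999, -1e-9) if g.sum() > 1 else (1e-9, 1e6)
    return brentq(f, lo, hi)

def sugeno_integral(scores, densities):
    """Sugeno fuzzy integral of per-source scores w.r.t. a lambda-measure."""
    scores = np.asarray(scores, float)
    order = np.argsort(scores)[::-1]          # sort sources by descending score
    g = np.asarray(densities, float)[order]
    lam = lambda_measure(densities)
    gA = g[0]                                  # measure of the nested sets A_1 ⊂ A_2 ⊂ ...
    value = min(scores[order][0], gA)
    for i in range(1, len(g)):
        gA = gA + g[i] + lam * gA * g[i]
        value = max(value, min(scores[order][i], gA))
    return value

# Example: per-class confidences from the two unimodal BP classifiers (illustrative).
expr_conf = np.array([0.70, 0.20, 0.10])      # expression modality
bvp_conf  = np.array([0.40, 0.45, 0.15])      # BVP modality
densities = [0.80, 0.64]                      # e.g. unimodal accuracies as fuzzy densities
fused = [sugeno_integral([e, b], densities) for e, b in zip(expr_conf, bvp_conf)]
print("fused class scores:", fused, "-> predicted class", int(np.argmax(fused)))
```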

2.
To fully extract deep emotion features from the text and speech modalities, address effective cross-modal interaction and fusion, and improve recognition accuracy, a bimodal emotion recognition model based on cascade two-channel and phased fusion (CTC-PF) is proposed. A cascaded sequential attention encoder (CSA-Encoder) computes long-range speech emotion sequences in parallel and extracts deep speech emotion features; an affective field cascade encoder (AFC-Encoder) improves the model's global and local text understanding and addresses the sparsity of key emotional features in text. After the two cascaded channels extract the speech and text features, a co-attention mechanism fuses their salient emotion features interactively, reducing the cost of alignment; a Hadamard product then performs a second fusion to capture the differential features. The phased fusion realizes information interaction between modality sequences of different time steps and alleviates insufficient bimodal interaction. Classification experiments on the IEMOCAP dataset show an emotion recognition accuracy of 79.4% and an F1 score of 79.0%, a clear improvement over existing mainstream methods, demonstrating the model's…
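The two-stage fusion described above (co-attention followed by a Hadamard product) can be sketched roughly as follows; the module shapes, pooling, and classifier are assumptions rather than the CTC-PF architecture itself.

```python
# A minimal sketch (not the paper's implementation) of co-attention between speech
# and text features followed by a Hadamard-product fusion.
import torch
import torch.nn as nn

class PhasedFusion(nn.Module):
    def __init__(self, dim=256, heads=4, num_classes=4):
        super().__init__()
        # Stage 1: cross-modal co-attention (each modality attends to the other)
        self.speech_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.text_to_speech = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Stage 2: classifier over the Hadamard-fused representation
        self.classifier = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                        nn.Linear(dim, num_classes))

    def forward(self, speech, text):
        # speech: (B, Ts, dim) deep speech features; text: (B, Tt, dim) text features
        s_ctx, _ = self.speech_to_text(query=speech, key=text, value=text)
        t_ctx, _ = self.text_to_speech(query=text, key=speech, value=speech)
        s_vec = s_ctx.mean(dim=1)          # pool over time
        t_vec = t_ctx.mean(dim=1)
        fused = s_vec * t_vec              # Hadamard product captures interaction/difference
        return self.classifier(fused)

# Toy usage with random features standing in for CSA/AFC encoder outputs.
model = PhasedFusion()
logits = model(torch.randn(2, 120, 256), torch.randn(2, 30, 256))
print(logits.shape)   # torch.Size([2, 4])
```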

3.
A partial least squares optimization method incorporating deep learning
Partial least squares (PLS) is widely used in multivariate analysis, but because it relies internally on principal component analysis it cannot adequately represent the nonlinear structure of data, and its prediction accuracy on nonlinear data is low. This paper proposes an optimized PLS method that incorporates deep learning: a sparse autoencoder extracts nonlinear structure from the feature space, and the extracted feature components replace the components used in PLS, yielding a model that can accommodate nonlinearity. Data from the traditional Chinese medicine formulas Da Cheng Qi Tang, Ma Xing Shi Gan Tang, and Ge Gen Qin Lian Tang, as well as UCI datasets, are analyzed; the experimental results show that the proposed method better reflects the characteristics of traditional Chinese medicine data.
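A rough sketch of the idea, on toy data and with assumed hyperparameters: a small L1-regularized autoencoder plays the role of the sparse autoencoder, and its codes replace the raw inputs fed to scikit-learn's PLSRegression.

```python
# Sketch: nonlinear codes from a (sparse) autoencoder fed to PLS regression.
import numpy as np
import torch, torch.nn as nn
from sklearn.cross_decomposition import PLSRegression

X = np.random.rand(200, 30).astype("float32")
y = (np.sin(X[:, :5]).sum(1) + 0.1 * np.random.randn(200)).astype("float32")

enc = nn.Sequential(nn.Linear(30, 10), nn.Sigmoid())
dec = nn.Sequential(nn.Linear(10, 30))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
Xt = torch.from_numpy(X)
for _ in range(500):                       # train autoencoder with a sparsity penalty
    code = enc(Xt)
    loss = ((dec(code) - Xt) ** 2).mean() + 1e-3 * code.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

Z = enc(Xt).detach().numpy()               # nonlinear feature components
pls = PLSRegression(n_components=5).fit(Z, y)
print("R^2 on training data:", pls.score(Z, y))
```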

4.
Emotion recognition is a popular area of human-computer interaction, with applications in medicine, education, safe driving, e-commerce, and other fields. Emotions are expressed mainly through facial expressions, voice, and speech content, and the facial muscles, tone, and intonation differ across emotions, so emotions determined from a single modality are comparatively unreliable. Since emotional expression is perceived mainly through vision and hearing, this paper proposes a multimodal expression recognition algorithm based on the audio-visual perception system. Emotion features are extracted separately from the speech and image modalities, and several classifiers are trained on individual features, producing multiple single-feature expression recognition models. In the audio-visual multimodal experiments, a late-fusion strategy is adopted; given the weak dependence among the models, weighted voting is used for model fusion, yielding a fused expression recognition model built from the multiple single-feature models. Experiments on the AFEW dataset show, by comparing the fused model with the single-feature models, that multimodal recognition based on the audio-visual perception system outperforms the unimodal approach.
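A minimal sketch of weighted (soft) voting over several single-feature models; the probabilities and weights below are illustrative, e.g. with weights taken from validation accuracies.

```python
# Sketch: weighted-voting late fusion of several single-feature classifiers.
import numpy as np

def weighted_vote(prob_list, weights):
    """prob_list: list of (n_samples, n_classes) probability arrays, one per model."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused.argmax(axis=1)

# Three hypothetical unimodal models (two audio features, one visual feature).
p_audio1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p_audio2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.6, 0.3]])
p_visual = np.array([[0.3, 0.3, 0.4], [0.2, 0.2, 0.6]])
print(weighted_vote([p_audio1, p_audio2, p_visual], weights=[0.50, 0.45, 0.62]))
```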

5.
Objective: When the volume local binary pattern (VLBP) is applied to video-frame feature extraction, the feature dimensionality is high and robustness to illumination and noise is poor. A new feature descriptor, the temporal-spatial local ternary pattern moment (TSLTPM), is therefore proposed. Since TSLTPM describes only texture, 3D histogram of oriented gradients (3DHOG) features are further fused to strengthen the description of emotional video. Methods: The emotional video is preprocessed to obtain expression and posture sequences; TSLTPM and 3DHOG features are extracted from each sequence; the minimum Euclidean distance between the test sequence and the labeled training features is computed and used as independent evidence to construct the basic probability assignments; finally, the Dempster-Shafer (D-S) combination rule yields the recognition result. Results: On the FABO database, the expression and posture modalities achieve average recognition rates of 83.06% and 94.78%, respectively. For expressions this is 9.27%, 12.89%, 1.87%, and 1.13% higher than VLBP, LBP-TOP (local binary patterns on three orthogonal planes), TSLTPM, and 3DHOG, respectively; for posture the gains are 24.61%, 27.55%, 1.18%, and 0.98%. After fusing the two modalities the average recognition rate reaches 96.86%, demonstrating the effectiveness of fusing expression and posture for emotion recognition. Conclusion: The proposed TSLTPM extends VLBP to a spatio-temporal ternary pattern, effectively reducing dimensionality and the influence of illumination and noise; combined with 3DHOG it forms a composite spatio-temporal feature that clearly improves classification of emotional video. Comparative experiments with typical feature extraction algorithms confirm the effectiveness of the algorithm, and comparisons with other methods also verify the superiority of the proposed fusion.
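The D-S combination step can be illustrated with a small sketch in which each modality contributes a mass function over singleton emotion hypotheses plus an "unknown" mass representing ignorance; the masses below are illustrative, not distances computed on FABO. Note the normalization by the total conflicting mass, which is what discounts contradictory evidence.

```python
# Sketch: Dempster-Shafer combination of two bodies of evidence (one per modality).
def dempster_combine(m1, m2):
    """Combine two mass functions over singleton hypotheses plus an 'unknown' frame."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + ma * mb
            elif a == "unknown":
                combined[b] = combined.get(b, 0.0) + ma * mb
            elif b == "unknown":
                combined[a] = combined.get(a, 0.0) + ma * mb
            else:
                conflict += ma * mb          # disjoint singletons -> conflicting mass
    # Normalize by 1 - K (the total conflicting mass)
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

m_expr = {"happy": 0.6, "angry": 0.2, "unknown": 0.2}   # evidence from expressions
m_pose = {"happy": 0.5, "sad": 0.3, "unknown": 0.2}     # evidence from posture
fused = dempster_combine(m_expr, m_pose)
print(max(fused, key=fused.get), fused)
```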

6.
To address the low recognition rate and poor reliability of bimodal emotion recognition frameworks, feature-level fusion is studied for the two most important modalities in emotion recognition, speech and facial expression. A prior-knowledge-based feature extraction method and a VGGNet-19 network extract features from the preprocessed audio and video signals, respectively; the features are fused by direct concatenation followed by PCA for dimensionality reduction, and a BLSTM network builds the model that completes emotion recognition. The framework is evaluated on the AViD-Corpus and SEMAINE databases and compared with a traditional feature-level fusion framework and with frameworks based on VGGNet-19 or BLSTM alone. The experimental results show a lower root mean square error (RMSE) and a higher Pearson correlation coefficient (PCC), validating the effectiveness of the proposed method.
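A rough sketch of the fusion pipeline under assumed feature dimensions: per-frame audio and VGG-style visual features are concatenated, reduced with PCA, and fed to a BLSTM regressor that predicts a continuous emotion value per frame (the RMSE/PCC metrics suggest a regression setting).

```python
# Sketch: concatenation + PCA feature-level fusion, then a BLSTM regressor.
import numpy as np
import torch, torch.nn as nn
from sklearn.decomposition import PCA

T, d_audio, d_visual, d_pca = 100, 88, 512, 64
audio = np.random.randn(T, d_audio)          # e.g. prior-knowledge acoustic features
visual = np.random.randn(T, d_visual)        # e.g. VGGNet-19 activations per frame
fused = PCA(n_components=d_pca).fit_transform(np.concatenate([audio, visual], axis=1))

class BLSTMRegressor(nn.Module):
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.blstm = nn.LSTM(d_in, d_hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * d_hidden, 1)   # continuous emotion value per frame

    def forward(self, x):                        # x: (batch, T, d_in)
        h, _ = self.blstm(x)
        return self.head(h).squeeze(-1)          # (batch, T)

model = BLSTMRegressor(d_pca)
pred = model(torch.from_numpy(fused).float().unsqueeze(0))
print(pred.shape)    # torch.Size([1, 100])
```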

7.
To deal with the multiple operating centers and clearly different mode variances of multimode batch-process data, this paper proposes a partial least squares method based on local neighbor standardization. First, the batch-process data are processed with the statistics pattern method, and local neighbor standardization is applied to Gaussianize the resulting training data; a PLS monitoring model is then built and the control limits are determined. The test data, after the same statistics-pattern processing, are likewise standardized by their local neighbors, and the Gaussian PLS monitoring statistics of the test data are computed for process monitoring and fault detection. Finally, the method is validated on a numerical example and on the penicillin fermentation process. The experimental results show that the proposed method resolves the problem of fault samples whose neighbor sets cross modes and achieves better fault detection on multimode data.
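A simplified sketch of the local-neighbor-standardization idea (not the paper's full statistics-pattern pipeline): each sample is standardized by the mean and standard deviation of its k nearest training neighbors before PLS monitoring with a T²-style statistic and an empirical control limit.

```python
# Simplified sketch: local neighbor standardization + PLS-based monitoring.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(5, 2, (100, 5))])  # two modes
y_train = X_train @ rng.normal(size=5) + rng.normal(0, 0.1, 200)

knn = NearestNeighbors(n_neighbors=10).fit(X_train)

def local_standardize(X):
    _, idx = knn.kneighbors(X)
    neigh = X_train[idx]                              # (n, k, d) nearest training neighbors
    return (X - neigh.mean(axis=1)) / (neigh.std(axis=1) + 1e-8)

Xs = local_standardize(X_train)
pls = PLSRegression(n_components=2).fit(Xs, y_train)
score_var = pls.transform(Xs).var(axis=0)

def t2(X_new):
    t = pls.transform(local_standardize(X_new))
    return ((t ** 2) / score_var).sum(axis=1)         # Hotelling-style T^2 statistic

limit = np.percentile(t2(X_train), 99)                # empirical 99% control limit
fault = rng.normal(10, 1, (5, 5))                     # samples far from both modes
print(t2(fault) > limit)
```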

8.
Chen Shizhe, Wang Shuai, Jin Qin. Journal of Software (软件学报), 2018, 29(4): 1060-1070
Automatic emotion recognition is a very challenging problem with broad application value. This paper investigates multimodal emotion recognition in multi-cultural settings. We extract different emotion features from the speech-acoustic and facial-expression modalities, including traditional hand-crafted features and deep-learning-based features, combine the modalities through multimodal fusion, and compare the recognition performance of individual unimodal features against fused multimodal features. Experiments are conducted on the CHEAVD Chinese multimodal emotion dataset and the AFEW English multimodal emotion dataset. Through this cross-cultural study we confirm the significant influence of cultural factors on emotion recognition and propose three training strategies to improve performance in multi-cultural settings: culture-specific model selection, multi-cultural joint training, and multi-cultural joint training based on a shared emotion space. The last strategy, which separates cultural influence from the emotion features, achieves the best results in both speech and multimodal emotion recognition.
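One common way to realize a shared, culture-invariant emotion space, offered here only as an illustration of the general idea rather than the paper's mechanism, is adversarial training with a gradient-reversal layer so that the shared features remain predictive of emotion but uninformative about culture.

```python
# Sketch: shared emotion encoder with a culture-adversarial branch (gradient reversal).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None      # reverse gradients flowing to the encoder

class SharedEmotionSpace(nn.Module):
    def __init__(self, d_in=128, d_hid=64, n_emotions=7, n_cultures=2, lam=0.1):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.emotion_head = nn.Linear(d_hid, n_emotions)
        self.culture_head = nn.Linear(d_hid, n_cultures)   # adversarial branch

    def forward(self, x):
        z = self.encoder(x)
        return self.emotion_head(z), self.culture_head(GradReverse.apply(z, self.lam))

model = SharedEmotionSpace()
emo_logits, cul_logits = model(torch.randn(16, 128))
loss = nn.functional.cross_entropy(emo_logits, torch.randint(0, 7, (16,))) \
     + nn.functional.cross_entropy(cul_logits, torch.randint(0, 2, (16,)))
loss.backward()
print(emo_logits.shape, cul_logits.shape)
```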

9.
Because partial least squares relies internally on principal component analysis, it cannot adequately represent the nonlinear characteristics of data, and its prediction accuracy on nonlinear data is low. An analysis and prediction method combining a restricted Boltzmann machine (RBM) with partial least squares is therefore proposed: the RBM extracts nonlinear structure from the feature space, and the extracted feature components replace the components used in PLS, producing a model that accommodates nonlinearity. Experimental results show that the combined RBM-PLS method better reflects the nonlinear characteristics of the data.
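A minimal sketch of the RBM-plus-PLS combination with scikit-learn on toy data: the BernoulliRBM's hidden representation stands in for the nonlinear components handed to PLS.

```python
# Sketch: RBM hidden features fed to PLS regression, compared with raw-feature PLS.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.random((300, 20))
y = np.sin(X[:, 0] * 3) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(300)

X01 = MinMaxScaler().fit_transform(X)          # BernoulliRBM expects values in [0, 1]
rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=50, random_state=0)
H = rbm.fit_transform(X01)                     # nonlinear hidden-unit features

pls = PLSRegression(n_components=4).fit(H, y)
print("R^2 with RBM features:", round(pls.score(H, y), 3))
print("R^2 with raw features:", round(PLSRegression(n_components=4).fit(X, y).score(X, y), 3))
```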

10.
To balance the uneven distribution of emotional information across modalities and obtain deeper multimodal emotion representations, a multimodal sentiment analysis method based on pairwise bimodal two-stage gated fusion is proposed. Text is fused with the visual modality and with the audio modality separately, fully exploiting the dominant role of the text modality among the three. A two-stage fusion is used to obtain deeper cross-modal interaction. In the first stage, a fusion gate decides how much knowledge from the complementary modality is added to the primary modality, yielding two bimodal mixed-knowledge matrices. In the second stage, since the two matrices contain redundant and repeated information, a selection gate picks out effective, concise emotional information as the final bimodal fused knowledge. On the public CMU-MOSEI dataset, binary sentiment classification reaches 86.2% accuracy and an F1 score of 86.1%, showing good robustness and competitiveness.
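The two-stage gating can be sketched as below; the sigmoid gate forms and dimensions are assumptions, not the paper's exact architecture.

```python
# Sketch: fusion gate per auxiliary modality, then a selection gate over the two mixes.
import torch
import torch.nn as nn

class TwoStageGatedFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.fuse_gate_v = nn.Linear(2 * dim, dim)   # text + visual -> gate
        self.fuse_gate_a = nn.Linear(2 * dim, dim)   # text + audio  -> gate
        self.select_gate = nn.Linear(2 * dim, dim)   # choose between the two bimodal mixes

    def forward(self, text, visual, audio):          # each: (batch, dim)
        g_v = torch.sigmoid(self.fuse_gate_v(torch.cat([text, visual], dim=-1)))
        g_a = torch.sigmoid(self.fuse_gate_a(torch.cat([text, audio], dim=-1)))
        tv = text + g_v * visual                      # first-stage bimodal knowledge
        ta = text + g_a * audio
        s = torch.sigmoid(self.select_gate(torch.cat([tv, ta], dim=-1)))
        return s * tv + (1 - s) * ta                  # second-stage selected fusion

fusion = TwoStageGatedFusion()
out = fusion(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128))
print(out.shape)    # torch.Size([4, 128])
```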

11.
To make human-computer interaction more natural and friendly, computers must be able to understand human affective states the way humans do. People express their feelings through many modalities, such as the face, body gestures, and speech. In this study, we simulate human perception of emotion by combining emotion-related information from facial expressions and speech. The speech emotion recognition system is based on prosody features and mel-frequency cepstral coefficients (a representation of the short-term power spectrum of a sound), while facial expression recognition is based on an integrated time motion image and a quantized image matrix, which can be seen as extensions of temporal templates. Experimental results show that using the hybrid features and decision-level fusion improves on the unimodal systems: the recognition rate increases by about 15% relative to the speech-only system and by about 30% relative to the facial expression system. The proposed multi-classifier system, an improved hybrid system, further raises the recognition rate by up to 7.5% over hybrid features with decision-level fusion using an RBF classifier, up to 22.7% over the speech-based system, and up to 38% over the facial-expression-based system.
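A small sketch of extracting the kinds of acoustic features mentioned above (MFCCs plus simple prosody statistics) with librosa; the file name and parameter choices are placeholders.

```python
# Sketch: utterance-level MFCC + prosody feature vector for a speech emotion classifier.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)        # hypothetical input file

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # (13, n_frames) MFCCs
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)           # frame-level pitch (prosody)
energy = librosa.feature.rms(y=y)[0]                    # frame-level energy (prosody)

# Utterance-level statistics commonly used as a fixed-length feature vector.
features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    [np.nanmean(f0), np.nanstd(f0), energy.mean(), energy.std()],
])
print(features.shape)
```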

12.
The focus of emotion recognition research is shifting from single-modality to multimodal approaches. Addressing the technical difficulties of multimodal emotion feature extraction and fusion, this paper lists the multimodal emotion recognition databases currently in wide use, introduces feature extraction techniques for the two modalities of facial expression and speech emotion, and focuses on multimodal emotion fusion recognition, surveying fusion strategies and fusion methods and comparing the recognition performance of different algorithms. Finally, open problems in multimodal emotion recognition research are discussed and future research directions are outlined.

13.
Partial least squares regression is applied to the simultaneous recognition of facial identity and expression. First, facial features are extracted from each face image and the corresponding semantic features are defined. For feature extraction, a number of facial landmarks are annotated in each image, and the Gabor wavelet coefficients at those landmarks (Gabor features) together with the landmark coordinates (geometric features) serve as the input features of the image; the semantic features are defined as the expression category and the identity of the face. Next, kernel principal component analysis (KPCA) fuses the Gabor and geometric features so that the input features discriminate better. Finally, partial least squares regression (PLSR) models the relationship between the facial features and the semantic features, and this model performs simultaneous expression and identity recognition on a test face image. Comparative experiments on the JAFFE expression database and the AR face database confirm the effectiveness of the proposed method.
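A condensed sketch of the described pipeline on toy arrays: KPCA fuses the stand-in Gabor and geometric features, and PLS regression maps them to one-hot identity and expression targets that are decoded jointly. Class counts and dimensions are illustrative.

```python
# Sketch: KPCA feature fusion + PLSR for joint identity/expression recognition.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, n_ident, n_expr = 120, 10, 7
gabor = rng.standard_normal((n, 200))            # Gabor coefficients at landmarks
geom = rng.standard_normal((n, 40))              # landmark coordinates
identity = rng.integers(0, n_ident, n)
expression = rng.integers(0, n_expr, n)

X = np.hstack([gabor, geom])
X_kpca = KernelPCA(n_components=30, kernel="rbf", gamma=1e-3).fit_transform(X)

# Semantic target: one-hot identity concatenated with one-hot expression.
Y = np.hstack([np.eye(n_ident)[identity], np.eye(n_expr)[expression]])
pls = PLSRegression(n_components=15).fit(X_kpca, Y)

pred = pls.predict(X_kpca)
id_pred = pred[:, :n_ident].argmax(axis=1)
ex_pred = pred[:, n_ident:].argmax(axis=1)
print("identity acc:", (id_pred == identity).mean(),
      "expression acc:", (ex_pred == expression).mean())
```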

14.
To synthesize real-time and realistic facial animation, we present an effective algorithm that combines image- and geometry-based methods for facial animation simulation. Considering the numerous motion units in the expression coding system, we present a novel simplified motion unit based on the basic facial expressions, and construct the corresponding basic action for a head model. As image features are difficult to obtain with the performance-driven method, we develop an automatic image feature recognition method based on statistical learning, and a semi-automatic expression-image labeling method with rotation-invariant face detection, which improve the accuracy and efficiency of expression feature identification and training. After facial animation retargeting, each basic action weight needs to be computed and mapped automatically. We apply the blend shape method to construct and train the corresponding expression database for each basic action, and adopt the least squares method to compute the corresponding control parameters for facial animation. Moreover, diffuse and specular light distributions are pre-integrated based on a physical model to improve the plausibility and efficiency of facial rendering. Our work provides a simplification of the facial motion unit, an optimization of the statistical training and recognition process for facial animation, solves for the expression parameters, and simulates the subsurface scattering effect in real time. Experimental results indicate that our method is effective and efficient, and suitable for computer animation and interactive applications.
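The least-squares step for the blend-shape control parameters can be sketched as follows; the mesh sizes and synthetic target are placeholders. In practice the weights are often additionally constrained (e.g. non-negativity via scipy.optimize.nnls), which this plain least-squares sketch omits.

```python
# Sketch: solve blend-shape weights w so that neutral + deltas @ w matches a target mesh.
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_shapes = 500, 8
neutral = rng.standard_normal(3 * n_vertices)                # flattened neutral mesh
deltas = rng.standard_normal((3 * n_vertices, n_shapes))     # per-blendshape vertex offsets

true_w = rng.random(n_shapes)
target = neutral + deltas @ true_w + 0.01 * rng.standard_normal(3 * n_vertices)

# Least squares: minimize || (neutral + deltas @ w) - target ||^2 over w.
w, *_ = np.linalg.lstsq(deltas, target - neutral, rcond=None)
print(np.round(w, 2), np.round(true_w, 2))
```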

15.
Multi-modal emotion recognition lacks an explicit mapping between emotion states and audio/image features, so extracting effective emotion information from audio-visual data remains a challenging issue. In addition, noise and data redundancy are not modeled well, so emotion recognition models often suffer from low efficiency. Deep neural networks (DNNs) excel at feature extraction and highly non-linear feature fusion, and cross-modal noise modeling has great potential for addressing data pollution and data redundancy. Inspired by this, our paper proposes a deep weighted fusion method for audio-visual emotion recognition. First, we perform cross-modal noise modeling on the audio and video data, which eliminates most of the data pollution in the audio channel and the data redundancy in the visual channel: the noise modeling is implemented by voice activity detection (VAD), and the visual redundancy is reduced by aligning the speech region in the audio and visual streams. We then extract audio emotion features and visual expression features with two feature extractors. The audio extractor, audio-net, is a 2D CNN that accepts image-based mel-spectrograms as input; the facial expression extractor, visual-net, is a 3D CNN fed with facial expression image sequences. To train the two convolutional networks efficiently on a small dataset, we adopt a transfer learning strategy. Next, we employ a deep belief network (DBN) for highly non-linear fusion of the multi-modal emotion features, training the feature extractors and the fusion network synchronously. Finally, emotion classification is performed by a support vector machine on the output of the fusion network. Taking cross-modal feature fusion, denoising, and redundancy removal into account, our fusion method shows excellent performance on the selected dataset.
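A small sketch of the preprocessing described above: librosa's energy-based silence splitting acts as a stand-in for the VAD, and a log-mel spectrogram provides the image-like input for the 2D audio CNN. The file name is a placeholder.

```python
# Sketch: simple VAD surrogate + log-mel spectrogram as the audio CNN input.
import numpy as np
import librosa

y, sr = librosa.load("clip_audio.wav", sr=16000)

# Keep only voiced/active intervals (energy-based stand-in for the paper's VAD).
intervals = librosa.effects.split(y, top_db=30)
y_voiced = np.concatenate([y[s:e] for s, e in intervals])

# Log-mel spectrogram as the "image" input of the audio CNN.
mel = librosa.feature.melspectrogram(y=y_voiced, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)
print(log_mel.shape)            # (64, n_frames), ready to be treated as an image
```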

16.
To address the low accuracy of single-modality emotion recognition, a Bi-LSTM-CNN based bimodal speech-text emotion recognition model is proposed. The model combines a bidirectional long short-term memory network (Bi-LSTM) with word embeddings and a convolutional neural network (CNN) to form a Bi-LSTM-CNN model that extracts text features; the result of fusing these text features with acoustic features is used as the input of a joint CNN model for speech emotion computation. Tests on the IEMOCAP multimodal emotion dataset show that the emotion recognition accuracy reaches 69.51%, at least 6 percentage points higher than the single-modality models.
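A rough sketch of a Bi-LSTM-CNN text branch of the kind described above; the vocabulary size, dimensions, and kernel width are illustrative assumptions.

```python
# Sketch: word embeddings -> Bi-LSTM -> 1D CNN -> max pooling -> emotion logits.
import torch
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    def __init__(self, vocab=5000, emb=100, hidden=64, n_filters=64, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, tokens):                         # tokens: (batch, seq_len) word ids
        h, _ = self.bilstm(self.emb(tokens))           # (batch, seq_len, 2*hidden)
        c = torch.relu(self.conv(h.transpose(1, 2)))   # (batch, n_filters, seq_len)
        pooled = c.max(dim=2).values                   # global max pooling over time
        return self.fc(pooled)

model = BiLSTMCNN()
logits = model(torch.randint(0, 5000, (8, 40)))
print(logits.shape)     # torch.Size([8, 4])
```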

17.
To overcome the communication barrier between people with speech impairments and those without, a neural-network-based method for converting sign language into emotional speech is proposed. First, a gesture corpus, a facial expression corpus, and an emotional speech corpus are built. Deep convolutional neural networks then perform gesture recognition and facial expression recognition, and, taking Mandarin initials and finals as the synthesis units, a speaker-adaptive deep neural network acoustic model and a speaker-adaptive hybrid long short-term memory acoustic model for emotional speech are trained. Finally, the context-dependent labels of the gesture semantics and the emotion labels corresponding to the facial expressions are fed into the emotional speech synthesis model to synthesize the corresponding emotional speech. Experimental results show gesture and facial expression recognition rates of 95.86% and 92.42%, respectively, and an EMOS score of 4.15 for the synthesized emotional speech, indicating a high degree of emotional expressiveness; the method can support normal communication between people with speech impairments and others.

18.
Psychological research findings suggest that humans rely on the combined visual channels of face and body more than any other channel when they make judgments about human communicative behavior. However, most existing systems that attempt to analyze human nonverbal behavior are mono-modal and focus only on the face. Research aiming to integrate gestures as a means of expression has only recently emerged. Accordingly, this paper presents an approach to automatic visual recognition of expressive face and upper-body gestures from video sequences suitable for use in a vision-based affective multi-modal framework. Face and body movements are captured simultaneously using two separate cameras. For each video sequence, single expressive frames from both the face and the body are selected manually for analysis and recognition of emotions. First, individual classifiers are trained from the individual modalities. Second, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the facial or bodily modality alone.
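The two fusion schemes compared above can be contrasted with a compact sketch on synthetic stand-ins for the face and body descriptors: feature-level fusion concatenates the modalities into one classifier, while decision-level fusion averages the class probabilities of per-modality classifiers.

```python
# Sketch: feature-level vs. decision-level fusion of face and body features with SVMs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d_face, d_body, n_classes = 300, 50, 30, 6
y = rng.integers(0, n_classes, n)
face = rng.standard_normal((n, d_face)) + y[:, None] * 0.3    # weakly class-dependent
body = rng.standard_normal((n, d_body)) + y[:, None] * 0.3

# Feature-level fusion: concatenate the modalities and train one classifier.
clf_feat = SVC(probability=True).fit(np.hstack([face, body]), y)

# Decision-level fusion: train one classifier per modality, then average probabilities.
clf_face = SVC(probability=True).fit(face, y)
clf_body = SVC(probability=True).fit(body, y)
p = 0.5 * clf_face.predict_proba(face) + 0.5 * clf_body.predict_proba(body)

print("feature-level acc:", clf_feat.score(np.hstack([face, body]), y))
print("decision-level acc:", (p.argmax(1) == y).mean())
```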
