Speech emotion recognition with embedded attention mechanism and hierarchical context
Cite this article: CHENG Yanfen, CHEN Yaoxin, CHEN Yiling, YANG Yi. Speech emotion recognition with embedded attention mechanism and hierarchical context[J]. Journal of Harbin Institute of Technology, 2019, 51(11): 100-107.
Authors: CHENG Yanfen  CHEN Yaoxin  CHEN Yiling  YANG Yi
Affiliation: School of Computer Science and Technology, Wuhan University of Technology, Wuhan 430063, China; School of Computer, Hubei University of Technology, Wuhan 430068, China
Funding: National Natural Science Foundation of China (51179146)
Abstract: Speech emotion recognition remains challenging because of difficulties with emotional corpora, the association between emotion and acoustic features, and the modeling of speech emotion recognition itself. Traditional context-based systems are confined to the feature level, so context details at the label level are lost and the differences between the two levels are ignored. To address this, this paper proposes a bidirectional long short-term memory (BLSTM) network model that embeds an attention mechanism and combines it with hierarchical context learning. The model performs speech emotion recognition in three stages. In the first stage, the full set of emotional speech features is extracted, the SVM-RFE feature-ranking algorithm reduces its dimensionality to obtain an optimal feature subset, and attention weights are applied to this subset. In the second stage, the weighted feature subset is fed into a BLSTM network that learns feature-level context and produces an initial emotion prediction. In the third stage, another independent BLSTM network is trained on the emotion label values to learn label-level context, and the final prediction is made on top of the second-stage output. Embedding the attention mechanism lets the model automatically learn how much attention to pay to the input feature subset; introducing label-level context and fusing it with feature-level context improves robustness and strengthens the model's ability to model emotional speech. Experiments on the SEMAINE and RECOLA datasets show that both RMSE and CCC improve noticeably over the baseline models.
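As an illustration of the first stage, the following is a minimal sketch of SVM-RFE feature ranking using scikit-learn; the feature dimensionality, subset size, and data are placeholders, not the configuration used in the paper.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.feature_selection import RFE

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 88))    # placeholder acoustic feature matrix (88 dims, hypothetical)
    y = rng.uniform(-1, 1, size=500)  # placeholder continuous emotion labels

    # Recursive feature elimination with a linear SVM ranks the features and
    # keeps the top-ranked ones as the "optimal feature subset".
    selector = RFE(SVR(kernel="linear"), n_features_to_select=32, step=4)
    selector.fit(X, y)
    X_subset = X[:, selector.support_]  # reduced subset later fed to the BLSTM
    print(X_subset.shape)               # (500, 32)

The attention weighting described in the abstract is learned inside the network, so this sketch stops at the selected subset.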

Keywords: speech emotion recognition  attention mechanism  context  bidirectional long short-term memory network
Received: 2019-05-27

Speech emotion recognition with embedded attention mechanism and hierarchical context
CHENG Yanfen, CHEN Yaoxin, CHEN Yiling, YANG Yi. Speech emotion recognition with embedded attention mechanism and hierarchical context[J]. Journal of Harbin Institute of Technology, 2019, 51(11): 100-107.
Authors: CHENG Yanfen  CHEN Yaoxin  CHEN Yiling  YANG Yi
Affiliation: School of Computer Science and Technology, Wuhan University of Technology, Wuhan 430063, China; School of Computer, Hubei University of Technology, Wuhan 430068, China
Abstract: Speech emotion recognition remains a challenging task due to issues such as emotional corpus limitations, the association between emotion and acoustic features, and speech emotion recognition modeling. Because conventional context-based speech emotion recognition systems are limited to the feature level, they risk losing the context details of the label level and neglecting the differences between the two levels. This paper proposes a Bidirectional Long Short-Term Memory (BLSTM) network model that embeds an attention mechanism and combines it with hierarchical context learning. The model completes the speech emotion recognition task in three phases. The first phase extracts the full feature set from the emotional speech, uses the SVM-RFE feature-ranking algorithm to reduce its dimensionality and obtain the optimal feature subset, and assigns attention weights to that subset. In the second phase, the weighted feature subset is fed into a BLSTM network that learns feature-level context and yields an initial emotion prediction. The third phase uses the emotion label values to train another independent BLSTM network that learns label-level context information, and the final prediction is completed on the basis of the second-phase output. Embedding the attention mechanism allows the model to automatically learn how to adjust its attention to the input feature subset; introducing the label-level context and combining it with the feature-level context achieves hierarchical context fusion, improves robustness, and strengthens the model's ability to model emotional speech. Experimental results on the SEMAINE and RECOLA datasets show that both RMSE and CCC are noticeably improved over the baseline model.
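To make the three-phase architecture concrete, the following is a minimal Keras sketch of the two BLSTM stages under simplified assumptions: the attention layer, layer sizes, and sequence length are illustrative, and the label-context network is wired end to end here for brevity, whereas the paper trains it separately on the emotion label values.

    from tensorflow.keras import layers, models

    T, D = 100, 32  # placeholder sequence length and feature-subset size

    # Phase 2: attention-weighted feature subset fed to a feature-level BLSTM
    # that outputs an initial per-frame emotion prediction.
    feat_in = layers.Input(shape=(T, D))
    attn = layers.Dense(D, activation="softmax")(feat_in)  # learned attention over the subset
    weighted = layers.Multiply()([feat_in, attn])
    h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(weighted)
    initial_pred = layers.TimeDistributed(layers.Dense(1))(h)

    # Phase 3: a second BLSTM models label-level context over the initial
    # predictions and produces the final prediction.
    g = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(initial_pred)
    final_pred = layers.TimeDistributed(layers.Dense(1))(g)

    model = models.Model(feat_in, final_pred)
    model.compile(optimizer="adam", loss="mse")  # MSE/RMSE-style objective; CCC is reported as a metric
    model.summary()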
Keywords: speech emotion recognition  attention mechanism  context  BLSTM