Multiple feature fusion for unimodal emotion recognition
Authors: Yang Lingzhi, Ban Xiaojuan, Michele Mukeshimana, Chen Zhe
Affiliation:1. School of Computer and Communication Engineering, Beijing Key Laboratory of Knowledge Engineering for Materials Science, University of Science and Technology Beijing, Beijing 10083, China
2. Citic Pacific Special Steel Holdings Qingdao Special Iron and SteelCompany Limited
3. Faculity of Engineering Sciences, University of Burundi, Bujumbura P. O. Box 1550 Bujumbura, Burundi
4. Qingdao Hisense GroupCompany Limited, Qingdao 266000, China
Abstract: A new semi-serial fusion method for multiple features, based on the learning using privileged information (LUPI) model, is proposed. The LUPI paradigm improves learning accuracy and its stability by exploiting additional information and optimization-based computation during training. Execution time is also reduced, because the testing features are sparse and of lower dimension. The improvement obtained by combining multiple feature types for emotion recognition (here, speech emotion recognition) is particularly useful when only one modality is available but recognition performance still needs to be improved. The results show that LUPI is effective in the unimodal case when the feature size is considerable. Compared with methods that use a single feature type or simply concatenate several types, the proposed method achieves better recognition accuracy, shorter execution time, and higher stability.
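Since the abstract only outlines how privileged information is exploited, the sketch below illustrates the general LUPI idea through a generalized-distillation approximation rather than the authors' actual semi-serial fusion algorithm: one feature type is treated as privileged and available only at training time, while a student model that sees only the test-time features is fit to a teacher trained on the fused representation. The function name semi_serial_fuse, the synthetic data, and the feature dimensions are all illustrative assumptions, not taken from the paper.

```python
# Minimal LUPI-style sketch (assumed approximation, not the authors' method).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-ins for two feature types extracted from the same speech signal,
# e.g. prosodic features (main, available at test time) and spectral features
# (privileged, available only at training time).
n, d_main, d_priv, n_classes = 600, 40, 120, 4
y = rng.integers(0, n_classes, size=n)
X_main = rng.normal(size=(n, d_main)) + y[:, None] * 0.3
X_priv = rng.normal(size=(n, d_priv)) + y[:, None] * 0.5

def semi_serial_fuse(a, b):
    """Illustrative 'semi-serial' fusion: concatenate the main features with a
    low-dimensional slice of the second feature type instead of the full vector."""
    summary = b[:, : b.shape[1] // 4]
    return np.hstack([a, summary])

X_tr_m, X_te_m, X_tr_p, _, y_tr, y_te = train_test_split(
    X_main, X_priv, y, test_size=0.3, random_state=0)

# Teacher: trained on the fused (main + privileged) representation.
teacher = LogisticRegression(max_iter=1000).fit(
    semi_serial_fuse(X_tr_m, X_tr_p), y_tr)
soft_targets = teacher.predict_proba(semi_serial_fuse(X_tr_m, X_tr_p))

# Student: sees only the main features but is fit to the teacher's predictions
# (argmax of the soft targets here for simplicity), approximating knowledge
# transfer from the privileged feature type.
student = LogisticRegression(max_iter=1000).fit(X_tr_m, soft_targets.argmax(1))

baseline = LogisticRegression(max_iter=1000).fit(X_tr_m, y_tr)
print("LUPI-style student accuracy:", (student.predict(X_te_m) == y_te).mean())
print("Main-features-only baseline:", (baseline.predict(X_te_m) == y_te).mean())
```

On synthetic data the two scores may be close; the sketch is only meant to show where privileged features enter training and why the student needs nothing beyond the main features at test time.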
Keywords: multiple feature; LUPI; emotion recognition; semi-serial fusion method
Journal: The Journal of China Universities of Posts and Telecommunications (English edition)