Multimodal Emotion Recognition Based on Facial Expression and ECG Signal
Authors: NIU Jian-wei, AN Yue-qi, NI Jie, JIANG Chang-hua
Affiliation: School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China; China Astronaut Research and Training Center, Beijing 404023, China
Foundation item: This research is supported by the Open Funding Project of the National Key Laboratory of Human Factors Engineering (Grant No. 6142222190309). The authors acknowledge the MAHNOB-HCI team for providing the emotion-inducing materials for this study.
Abstract: As a key link in human-computer interaction, emotion recognition enables robots to correctly perceive user emotions and to provide dynamic, adjustable services matched to the emotional needs of different users, which is key to improving the cognitive level of robot services. Emotion recognition based on facial expression and electrocardiogram (ECG) signals has numerous industrial applications. First, a three-dimensional convolutional neural network (3D CNN) deep learning architecture is used to extract spatial and temporal features from facial-expression video data and ECG data, and emotion classification is carried out on each modality. Then the two modalities are fused at the data level and at the decision level, respectively, and the corresponding emotion recognition results are reported. Finally, the single-modality and multi-modality recognition results are compared and analyzed. The comparison of experimental results under the two fusion methods shows that multi-modal emotion recognition achieves markedly higher accuracy than single-modal emotion recognition, and that decision-level fusion is easier to operate and more effective than data-level fusion.
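To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of the two-branch idea: a 3D CNN branch over facial-expression clips, a second branch over the ECG signal, and decision-level fusion by averaging the per-modality class probabilities. The layer sizes, the 1D-CNN ECG branch, and the number of emotion classes are illustrative assumptions, not the architecture reported in the paper (which applies 3D CNNs to both modalities).

```python
# Illustrative sketch only: hypothetical layer sizes and class count,
# not the authors' reported architecture.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # assumed number of emotion categories


class VideoBranch(nn.Module):
    """3D CNN over a clip of shape (batch, channels, frames, height, width)."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(clip).flatten(1))


class ECGBranch(nn.Module):
    """1D CNN over an ECG segment of shape (batch, 1, samples); a simplified
    stand-in for the paper's ECG feature extractor."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(ecg).flatten(1))


def decision_level_fusion(video_logits: torch.Tensor,
                          ecg_logits: torch.Tensor) -> torch.Tensor:
    """Fuse per-modality predictions by averaging their class probabilities."""
    probs = (torch.softmax(video_logits, dim=1) +
             torch.softmax(ecg_logits, dim=1)) / 2
    return probs.argmax(dim=1)


if __name__ == "__main__":
    video = torch.randn(2, 3, 16, 64, 64)  # 2 clips, 16 RGB frames of 64x64
    ecg = torch.randn(2, 1, 2048)          # 2 ECG segments of 2048 samples
    pred = decision_level_fusion(VideoBranch()(video), ECGBranch()(ecg))
    print(pred)  # fused emotion class index per sample
```

Data-level fusion would instead combine the two modalities' raw or low-level representations into a single input for one classifier; per the abstract's findings, the decision-level scheme above is both simpler to apply and more effective.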

Keywords: multi-modal emotion recognition; facial expression; ECG signal; three-dimensional convolutional neural network

Citation: NIU Jian-wei, AN Yue-qi, NI Jie, JIANG Chang-hua. Multimodal Emotion Recognition Based on Facial Expression and ECG Signal[J]. Packaging Engineering, 2022, 43(4): 71-79.