DEAR-MULSEMEDIA: Dataset for emotion analysis and recognition in response to multiple sensorial media
Abstract: Traditionally, emotion recognition is performed in response to stimuli that engage either one human sense (vision: image, or hearing: audio) or two (vision and hearing: video). An immersive environment can be created by engaging more than two human senses while interacting with multimedia content; such content is known as MULtiple SEnsorial media (mulsemedia). This study aims to create a new dataset of multimodal physiological signals for recognizing emotions in response to such content. To this end, four multimedia clips are selected and synchronized with a fan, a heater, an olfaction dispenser, and a haptic vest to add cold-air, hot-air, olfactory, and haptic effects, respectively. Furthermore, physiological responses, including electroencephalography (EEG), galvanic skin response (GSR), and photoplethysmography (PPG), are recorded to analyze human emotional responses while experiencing mulsemedia content. A t-test applied to the arousal and valence scores shows that engaging more than two human senses evokes significantly different emotions. Statistical tests on the EEG, GSR, and PPG responses also show a significant difference between multimedia and mulsemedia content. Classification accuracies of 85.18% and 76.54% are achieved for valence and arousal, respectively, using a K-nearest neighbor classifier and a feature-level fusion strategy.
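The two analyses named in the abstract follow a standard pattern: an independent-samples t-test comparing self-reported scores between the multimedia and mulsemedia conditions, and feature-level fusion (concatenating per-modality feature vectors) feeding a K-nearest neighbor classifier. The sketch below illustrates that pattern only; it is not the authors' released code, and every array, shape, and parameter (trial counts, feature dimensions, `n_neighbors=5`, the random placeholder data) is a hypothetical stand-in for the paper's actual features and labels.

```python
# Minimal sketch of the abstract's analysis pattern, under assumed data shapes.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical self-assessment ratings (e.g., a 1-9 valence scale) per trial.
valence_multimedia = rng.uniform(1, 9, size=40)
valence_mulsemedia = rng.uniform(1, 9, size=40)

# Independent-samples t-test: do ratings differ between the two conditions?
t_stat, p_value = ttest_ind(valence_multimedia, valence_mulsemedia)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Hypothetical per-trial feature matrices, one per physiological modality.
n_trials = 80
eeg_features = rng.normal(size=(n_trials, 32))  # e.g., band-power features
gsr_features = rng.normal(size=(n_trials, 8))   # e.g., skin-conductance stats
ppg_features = rng.normal(size=(n_trials, 8))   # e.g., heart-rate features
labels = rng.integers(0, 2, size=n_trials)      # e.g., low/high valence

# Feature-level fusion: concatenate modality features into one vector per trial.
fused = np.hstack([eeg_features, gsr_features, ppg_features])

# KNN on the fused features, evaluated with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```

Standardizing before KNN matters here because the concatenated modalities are on different scales; without it, the higher-variance modality would dominate the distance metric.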
Keywords: Emotion recognition; Multiple sensorial media; Physiological signals; Modality-level fusion; Classification