Emotion Classification of Spatiotemporal EEG Features Using Hybrid Neural Networks
Citation: CHEN Jing-Xia, HAO Wei, ZHANG Peng-Wei, MIN Chong-Dan, LI Yue-Chen. Emotion Classification of Spatiotemporal EEG Features Using Hybrid Neural Networks [J]. Journal of Software, 2021, 32(12): 3869-3883.
Authors: CHEN Jing-Xia  HAO Wei  ZHANG Peng-Wei  MIN Chong-Dan  LI Yue-Chen
Affiliation: School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China
Funding: National Natural Science Foundation of China (61806118); Scientific Research Start-up Fund of Shaanxi University of Science and Technology (2020BJ-30)
Abstract: This paper proposes an EEG (electroencephalograph) data representation that converts 1D chain-like EEG vector sequences into 2D mesh-like matrix sequences, where the matrix structure corresponds to the brain-region distribution of the EEG electrodes, so as to better represent the spatial correlation among EEG signals from physically adjacent electrodes. A sliding window then divides the 2D matrix sequence into segments of equal length, which serve as a new data representation fusing the temporal and spatial correlations of the EEG. Two hybrid deep learning models are further proposed: the cascaded convolutional-recurrent neural network (CASC_CNN_LSTM) and the cascaded convolutional-convolutional neural network (CASC_CNN_CNN). Both use a CNN to capture the spatial correlation among physically adjacent EEG signals from the converted 2D mesh-like representation; the former uses an LSTM recurrent network to learn the temporal dependency of the EEG stream, while the latter uses another CNN to mine deeper, locally correlated, discriminative spatiotemporal features, so as to accurately recognize the emotion categories contained in the EEG signals. Within-subject binary emotion classification experiments on the valence dimension are conducted on the large-scale EEG dataset DEAP. The results show that the proposed CASC_CNN_LSTM and CASC_CNN_CNN networks achieve average classification accuracies of 93.15% and 92.37%, respectively, on the 2D mesh-like spatiotemporal EEG features, both higher than the baseline models and the latest existing methods, indicating that the proposed models effectively improve the accuracy and robustness of EEG emotion recognition and can be applied to EEG-based emotion classification and recognition applications.

Keywords: EEG  emotion recognition  2D mesh-like  spatiotemporal features  convolutional recurrent neural network  hybrid model
Received: 2020-04-02
Revised: 2020-05-21
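For concreteness, the following is a minimal sketch of the 1D-to-2D data representation described in the abstract above: each time point of the multi-channel EEG vector is placed onto a 2D mesh that mirrors the electrode layout, and a sliding window then cuts the mesh sequence into equal-length segments. The 9x9 mesh size, the CHANNEL_TO_POS mapping, and the window parameters are illustrative assumptions, not the exact layout or settings used in the paper.

import numpy as np

# Sketch of the 1D-chain -> 2D-mesh conversion plus sliding-window
# segmentation. Mesh size and channel mapping are assumptions for
# demonstration, not the paper's exact electrode layout.
MESH_H, MESH_W = 9, 9

# Hypothetical mapping from channel index to (row, col) in the mesh;
# only a few of the 32 DEAP channels are shown, unmapped cells stay zero.
CHANNEL_TO_POS = {0: (0, 3), 1: (1, 3), 2: (2, 0), 3: (2, 2)}

def chain_to_mesh(sample_1d):
    """Map one time point (shape: [n_channels]) onto a 2D mesh."""
    mesh = np.zeros((MESH_H, MESH_W), dtype=sample_1d.dtype)
    for ch, (r, c) in CHANNEL_TO_POS.items():
        mesh[r, c] = sample_1d[ch]
    return mesh

def make_segments(eeg, win_len, step):
    """Turn a recording (shape: [n_times, n_channels]) into overlapping
    windows of meshes (shape: [n_windows, win_len, MESH_H, MESH_W])."""
    meshes = np.stack([chain_to_mesh(t) for t in eeg])
    starts = range(0, len(meshes) - win_len + 1, step)
    return np.stack([meshes[s:s + win_len] for s in starts])

# Example: 60 s of 32-channel EEG at 128 Hz, 1 s windows with 50% overlap.
segments = make_segments(np.random.randn(60 * 128, 32), win_len=128, step=64)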

Emotion Classification of Spatiotemporal EEG Features Using Hybrid Neural Networks
CHEN Jing-Xia, HAO Wei, ZHANG Peng-Wei, MIN Chong-Dan, LI Yue-Chen. Emotion Classification of Spatiotemporal EEG Features Using Hybrid Neural Networks [J]. Journal of Software, 2021, 32(12): 3869-3883.
Authors: CHEN Jing-Xia  HAO Wei  ZHANG Peng-Wei  MIN Chong-Dan  LI Yue-Chen
Affiliation: School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China
Abstract: This study proposes a data representation of electroencephalogram (EEG) signals, which transforms 1D chain-like EEG vector sequences into 2D mesh-like matrix sequences. The mesh structure of the matrix at each time point corresponds to the distribution of the EEG electrodes, which better represents the spatial correlation of EEG signals among multiple physically adjacent electrodes. Then, a sliding window is used to divide the 2D mesh sequence into segments of equal time length, and each segment is taken as an EEG sample integrating the temporal and spatial correlation of the raw EEG recordings. Two hybrid deep learning models are also proposed, i.e., the cascaded convolutional recurrent neural network (CASC_CNN_LSTM) and the cascaded double convolutional neural network (CASC_CNN_CNN). Both use a CNN model to capture the spatial correlation between physically adjacent EEG signals from the converted 2D EEG meshes. The former uses an LSTM model to learn the time dependency of the EEG sequence, and the latter uses another CNN model to extract deeper discriminative features of local time and space. Extensive binary emotion classification experiments on the valence dimension are carried out on the large-scale open DEAP dataset (32 subjects, 9,830,400 EEG recordings). The results show that the average classification accuracies of the proposed CASC_CNN_LSTM and CASC_CNN_CNN networks on the spatiotemporal 2D mesh-like EEG sequences reach 93.15% and 92.37%, respectively, which significantly outperforms the baseline models and the state-of-the-art methods. This demonstrates that the proposed method effectively improves the accuracy and robustness of EEG emotion classification owing to its ability to jointly learn deeper correlated spatiotemporal features with hybrid deep neural networks.
Keywords: EEG  emotion recognition  2D mesh-like  spatiotemporal feature  convolutional recurrent neural networks  hybrid model
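The cascaded CNN-LSTM idea described in the abstract can be sketched as follows: a small CNN encodes the spatial layout of each 2D mesh frame, and an LSTM models the temporal dependency across frames before classification (CASC_CNN_CNN would instead apply another CNN over the frame features along the time axis). This PyTorch sketch assumes a 9x9 mesh and illustrative layer widths and kernel sizes; it is not the authors' exact architecture.

import torch
import torch.nn as nn

class CascCnnLstm(nn.Module):
    """Minimal sketch of the CASC_CNN_LSTM idea: a CNN encodes each 2D EEG
    mesh frame, and an LSTM models the temporal dependency across frames.
    Layer counts, channel widths, and the 9x9 mesh are assumptions."""

    def __init__(self, n_classes=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # -> [B*T, 64, 1, 1]
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: [batch, time, H, W], a segment of 2D mesh frames
        b, t, h, w = x.shape
        frames = x.reshape(b * t, 1, h, w)
        feats = self.cnn(frames).flatten(1).reshape(b, t, -1)  # [B, T, 64]
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])          # classify from the last time step

# Usage: classify a batch of 8 segments, each with 128 mesh frames of size 9x9.
logits = CascCnnLstm()(torch.randn(8, 128, 9, 9))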