Similar Literature
Found 10 similar documents (search time: 46 ms)
1.
This study uses the DEAP and HCI datasets, obtained from processed electroencephalogram (EEG) signals, with differential entropy as the feature-extraction tool. Cross-subject EEG emotion recognition is carried out with traditional machine-learning algorithms; with the ensemble methods gradient-boosted trees, XGBoost, AdaBoost, and random forests; and with a deep neural network, a convolutional neural network, and GoogLeNet, and the methods' results on EEG emotion analysis are compared. In terms of average accuracy, the deep-learning methods performed well: across the two datasets, the three deep models achieved average valence accuracies between 0.5956 and 0.6307 and average arousal accuracies between 0.6062 and 0.6774, significantly outperforming the machine-learning algorithms and ensemble models.
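The differential-entropy feature mentioned above has a simple closed form when each band-passed EEG segment is modeled as zero-mean Gaussian. A minimal sketch (the function name and the surrogate signal are illustrative, not from the study):

```python
import numpy as np

def differential_entropy(segment):
    """Differential entropy of a band-passed EEG segment.

    Under a zero-mean Gaussian assumption, DE has the closed form
    0.5 * ln(2 * pi * e * sigma^2), where sigma^2 is the segment variance.
    """
    var = np.var(segment)
    return 0.5 * np.log(2 * np.pi * np.e * var)

# One DE value per channel and frequency band yields the feature vector
# that is fed to the classifiers being compared.
rng = np.random.default_rng(0)
segment = rng.standard_normal(10_000)  # unit-variance surrogate signal
de = differential_entropy(segment)
```

For a unit-variance signal the value approaches 0.5 · ln(2πe) ≈ 1.42, which makes a quick sanity check for a feature pipeline.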

2.
Emotion is a summary term for subjective cognition produced by the brain. Brain-signal decoding makes it possible to study human emotion and related cognitive behavior in a relatively objective way. This paper proposes an EEG emotion recognition method based on graph attention networks, multi-path graph attention networks (MPGAT). The method builds a graph over the EEG channels, uses convolutional layers to extract temporal features and per-frequency-band features, and applies graph attention networks to further capture the local features of emotional EEG signals and the intrinsic functional relationships between brain regions, yielding a better EEG representation. MPGAT achieves cross-subject average accuracies of 86.03% on SEED and 72.71% on SEED-IV, and cross-subject average accuracies of 76.35% and 75.46% on the valence and arousal dimensions of DREAMER, matching and in part exceeding the performance of state-of-the-art EEG emotion recognition methods. The proposed EEG processing approach may provide a new technical tool for research on emotion cognition and for emotional brain-computer interface systems.
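The core idea of attending over EEG channels can be illustrated with a toy single-head graph-attention pass. This is a sketch of the generic GAT update only, not the multi-path MPGAT architecture, and all names are illustrative:

```python
import numpy as np

def gat_layer(H, A, W, a):
    """One single-head graph-attention pass over channel features.

    H: (n_channels, d_in) node features, A: (n, n) adjacency (1 = connected,
    self-loops included), W: (d_in, d_out) shared projection,
    a: (2 * d_out,) attention vector.
    """
    Z = H @ W                                   # project node features
    n = Z.shape[0]
    # e[i, j] = LeakyReLU(a . [z_i || z_j]) for connected channel pairs
    e = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                s = a @ np.concatenate([Z[i], Z[j]])
                e[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over neighbours
    return alpha @ Z                            # attention-weighted aggregation

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 8))   # 4 channels, 8 features each
A = np.ones((4, 4))               # fully connected, incl. self-loops
W = rng.standard_normal((8, 8)) * 0.1
a = rng.standard_normal(16) * 0.1
H_out = gat_layer(H, A, W, a)
```

With only self-loops in the adjacency, the layer degenerates to the plain projection H @ W, a useful sanity check.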

3.
Motor-imagery EEG signals vary widely across individuals and are costly to collect, so building small-sample, cross-subject motor-imagery recognition models is a key problem for brain-computer interfaces. For small-sample cross-domain learning, this work proposes a weighted multi-source domain-adaptation method based on a pre-alignment strategy and adversarial transfer learning. It combines transfer learning with adversarial training, extends the domain-adversarial neural network to multiple source domains, weights each source domain by its Pearson correlation coefficient to achieve weighted feature alignment between the source domains and the target domain, and applies a pre-alignment strategy to improve the consistency of cross-domain data distributions. On the BCI Competition motor-imagery dataset, cross-subject task recognition accuracy reaches 84.43%, an improvement of 9.17% over the no-transfer baseline and of 5.0% over the domain-adversarial neural network. The results show that the method effectively reduces cross-subject differences in both the EEG data distribution and the feature distribution, achieving alignment at both levels and improving cross-subject motor-imagery classification.
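The Pearson-correlation weighting of source domains can be sketched as follows. This is an illustrative variant only (correlating mean feature vectors and clipping negative correlations); the paper's exact weighting scheme may differ:

```python
import numpy as np

def pearson_source_weights(source_feats, target_feats):
    """Weight each source domain by the Pearson correlation between its
    mean feature vector and the target domain's mean feature vector.

    source_feats: list of (n_i, d) arrays, target_feats: (m, d) array.
    Negative correlations are clipped to zero before normalization.
    """
    t = target_feats.mean(axis=0)
    weights = []
    for Xs in source_feats:
        s = Xs.mean(axis=0)
        r = np.corrcoef(s, t)[0, 1]
        weights.append(max(r, 0.0))
    w = np.asarray(weights)
    return w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))

# A source matching the target gets a high weight; an anti-correlated one gets 0.
target = np.tile(np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]), (10, 1))
src_good = target.copy()
src_bad = np.tile(np.array([6.0, 5.0, 4.0, 3.0, 2.0, 1.0]), (10, 1))
w = pearson_source_weights([src_good, src_bad], target)
```

The weights sum to one and can then scale each source domain's contribution to the alignment loss.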

4.
To overcome the limitations of ordinary machine-learning algorithms and single-source transfer learning, a multi-source transfer learning algorithm is applied to the problem of low accuracy in cross-subject emotion recognition. To improve the computational efficiency of transfer learning and avoid negative transfer, the transferred data are optimized along both the sample and the feature dimension: a multi-source domain-selection algorithm screens out the optimal set of source domains, a transfer feature-selection algorithm produces the optimal feature set, and multiple transfer-learning models are trained and then ensembled. Validation on the SEED dataset shows that the model has better cross-subject emotion recognition ability than other emotion recognition models.

5.
Starting from the similarity between the relevance vector machine (RVM) and the support vector machine (SVM), and from the RVM's sparsity, the RVM is applied to emotion recognition from EEG signals. Considering the respective strengths and weaknesses of the one-against-one (OAO) and one-against-all (OAA) multiclass schemes, a new two-layer multiclass model (OAA-OAO) is proposed that mitigates the phenomenon in the existing OAO algorithm whereby invalid votes influence the final decision. Comparative experiments on emotional EEG recognition are designed to verify the improved RVM-based multiclass algorithm. For emotional EEG signals collected in the laboratory, nonlinear features (power spectral entropy, sample entropy, and the Hurst exponent) are extracted and reduced in dimension with principal component analysis. OAA-OAO-RVM is compared with two recognition networks, OAO-SVM and OAO-RVM, to analyze the recognition performance of the RVM and the classification performance of the OAA-OAO scheme. The results show that using the dimension-reduced optimal feature set as the network's input vector yields higher recognition performance, and that the RVM outperforms the SVM. The improved OAA-OAO algorithm also raises the average recognition rate by 7.89% over the traditional OAO model, demonstrating that OAA-OAO effectively removes a portion of the invalid votes and thereby significantly improves classification accuracy, validating it as an effective multiclass model.
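The two-layer decision rule can be sketched like this: a one-against-all stage keeps only the most plausible classes, and the one-against-one vote is then restricted to those survivors, so pairwise wins by eliminated classes ("invalid votes") cannot sway the result. The candidate-selection rule below is a simplification of the paper's scheme:

```python
import numpy as np

def oaa_oao_predict(oaa_scores, oao_wins, top_k=2):
    """Two-stage OAA-OAO multiclass decision (simplified sketch).

    oaa_scores: (n_classes,) one-against-all decision scores.
    oao_wins[i, j]: 1 if class i beats class j in the pairwise classifier.
    Stage 1 keeps the top_k OAA classes; stage 2 votes only among them,
    breaking vote ties by the OAA score.
    """
    candidates = np.argsort(oaa_scores)[-top_k:]
    votes = {c: 0 for c in candidates}
    for i in candidates:
        for j in candidates:
            if i != j and oao_wins[i, j]:
                votes[i] += 1
    return max(votes, key=lambda c: (votes[c], oaa_scores[c]))

oaa = np.array([0.10, 0.90, 0.80, 0.05])
wins = np.zeros((4, 4), dtype=int)
wins[3, :] = 1          # class 3 wins every pairwise duel ("invalid" votes)
wins[3, 3] = 0
wins[2, 1] = 1          # among the strong OAA candidates, class 2 beats 1
pred = oaa_oao_predict(oaa, wins, top_k=2)
```

In this example a plain OAO vote would be dominated by the implausible class 3, while the two-stage rule discards it at stage 1.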

6.
Motor-imagery EEG (MI-EEG) signals have attracted wide attention for building noninvasive brain-computer interfaces (BCI) for clinically assisted rehabilitation. Because the sample distributions of MI-EEG signals differ across subjects, cross-subject MI-EEG feature learning has become a research focus. Existing methods, however, suffer from weak domain-invariant feature representations and high time complexity, and cannot be applied directly to online BCIs. To solve this, a Riemannian tangent-space feature transfer kernel learning (TKRTS) method is proposed, and an efficient cross-subject MI-EEG classification algorithm is built on it. TKRTS first projects the MI-EEG covariance matrices into Riemannian space, aligns the covariance matrices of different subjects there, and extracts Riemannian tangent space (RTS) features; it then learns a domain-invariant kernel matrix on the RTS feature set to obtain a complete cross-subject MI-EEG feature representation, and trains a kernel support vector machine (KSVM) on this matrix for classification. To verify the feasibility and effectiveness of TKRTS, multi-source-to-single-target and single-source-to-single-target experiments were conducted on three public datasets, improving average classification accuracy by 0.81 and 0.13 percentage points, respectively. The results show that, compared with mainstream methods, TKRTS improves average classification accuracy while keeping similar time complexity. Ablation experiments further verify the completeness of TKRTS's cross-subject feature representation and its insensitivity to parameters, making it suitable for building online brain-computer interfaces.
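The tangent-space step can be illustrated with plain eigendecompositions: each trial covariance matrix is log-mapped at a reference point and vectorized. This sketches only the generic RTS projection, not the full TKRTS pipeline, and uses an arbitrary SPD reference for brevity (in practice the Riemannian mean of the covariances is typical):

```python
import numpy as np

def _spd_fn(M, fn):
    # Apply a scalar function to a symmetric positive-definite matrix
    # via its eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * fn(w)) @ V.T

def tangent_vector(cov, ref):
    """Log-map of an SPD covariance matrix at reference point `ref`:
    log(ref^{-1/2} @ cov @ ref^{-1/2}), vectorized as the upper triangle."""
    iref = _spd_fn(ref, lambda w: 1.0 / np.sqrt(w))
    S = _spd_fn(iref @ cov @ iref, np.log)
    idx = np.triu_indices(S.shape[0])
    return S[idx]

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
ref = A @ A.T + 4.0 * np.eye(4)   # an arbitrary SPD reference matrix
v0 = tangent_vector(ref, ref)     # a trial equal to the reference maps to ~0
```

The reference point acts as the origin of the tangent space, so a covariance equal to the reference maps to the zero vector; for n channels the feature has n(n+1)/2 entries.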

7.
Continuous emotion recognition based on multimodal physiological data has important uses in many fields, but owing to the scarcity of subject data and the subjectivity of emotion, training emotion recognition models still requires more physiological modality data and depends on subject data from the same source. This paper proposes several continuous emotion recognition methods based on facial images and EEG. For the facial-image modality, to address the overfitting caused by small facial-image datasets, a multi-task convolutional neural network model trained with transfer learning is proposed...

8.
刘嘉敏, 苏远歧, 魏平, 刘跃虎. 《自动化学报》 (Acta Automatica Sinica), 2020, 46(10): 2137-2147
Emotion recognition through interactive video-EEG collaboration is an important and challenging problem in human-computer interaction. This paper proposes a video-EEG collaborative emotion recognition model based on long short-term memory (LSTM) networks and an attention mechanism. The model's input is the facial video and EEG recorded while participants watch emotion-eliciting videos; its output is the participants' emotion recognition result. At each time step the model extracts convolutional neural network (CNN) facial-video features and the corresponding EEG features, fuses them with the LSTM, and predicts the key emotional signal frame at the next time step, computing the recognition result at the final step. During this process, a spatial frequency-band attention mechanism computes the importance of the EEG $\alpha$, $\beta$, and $\theta$ bands, exploiting the key spatial information in the EEG more effectively; a temporal attention mechanism predicts the key signal frame at the next time step, exploiting the key temporal information in the emotion data more effectively. The proposed method and model are tested on two typical datasets, MAHNOB-HCI and DEAP, with good recognition results. The experiments show that this work provides an effective solution to video-EEG collaborative emotion recognition.

9.
The electroencephalogram (EEG), as a tool that can objectively record the brain's electrical signals during emotional expression, has been used extensively. Current technology relies heavily on datasets, and its performance is limited by dataset size and annotation accuracy. At the same time, unsupervised and contrastive learning methods depend largely on the feature distribution within a dataset and therefore require dataset-specific training for optimal results. However, EEG acquisition is influenced by equipment, settings, individuals, and experimental procedures, producing significant variability; consequently, model effectiveness depends heavily on data collected under stringent, controlled conditions. To address these challenges, we introduce a novel approach: a self-supervised pre-training model that processes data across different datasets and operates effectively on multiple datasets at once. The model is pre-trained without access to emotion category labels, enabling it to extract universally useful features without predefined downstream tasks. To tackle semantic confusion, we employ a masked-prediction model that guides the network to generate richer semantic information by learning bidirectional feature combinations within a sequence. To handle large differences in data distribution, we introduce adaptive clustering that generates pseudo-labels across multiple categories. During self-supervised training, the model strengthens the expression of hidden features in its intermediate layers, enabling it to learn hidden features shared across datasets.
By constructing a hybrid dataset and conducting extensive experiments, this study demonstrates two key findings: (1) our model performs best on multiple evaluation metrics; (2) the model can effectively integrate critical features from different datasets, significantly improving the accuracy of emotion recognition.
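The pseudo-labelling step can be sketched as a small k-means pass over pooled embeddings from several datasets, with cluster assignments serving as self-supervised targets. This is a plain-k-means simplification of the paper's adaptive clustering, with illustrative names throughout:

```python
import numpy as np

def kmeans_pseudo_labels(X, k, iters=50, seed=0):
    """Assign pseudo-labels by plain k-means over embedding vectors X (n, d).

    Cluster indices stand in for the unavailable emotion labels during
    self-supervised pre-training.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Squared distances of every point to every center, then assign.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Two well-separated groups of embeddings receive two distinct pseudo-labels.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1], [10.1, 10.1]])
labels = kmeans_pseudo_labels(X, k=2)
```

In the full method the pseudo-labels would be regenerated as the encoder improves, adapting the clusters to the shifting feature space.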

10.
Music is an important medium for emotional expression, and traditional manual composition requires solid knowledge of music theory, so a simple yet accurate way to express personal emotion in music creation is needed. In this paper, we propose and implement an EEG-signal-driven real-time emotional music generation system for producing exclusive emotional music. To achieve real-time emotion recognition, the proposed system can quickly obtain a model suited to a newcomer through a short calibration. Both the recognized emotional state and music structure features are then fed into the network as conditional inputs to generate exclusive music consistent with the user's real emotional expression. In the real-time emotion recognition module, we propose an optimized style transfer mapping algorithm based on simplified parameter optimization and introduce an instance-selection strategy into the method. The module can obtain and calibrate a suitable model for a new user in a short time, achieving real-time emotion recognition: accuracies are improved to 86.78% and 77.68%, with computing times of only 7 s and 10 s, on the public SEED dataset and a self-collected dataset, respectively. In the music generation module, we propose an emotional music generation network based on structure features and embed it into our system; this removes existing systems' reliance on calling third-party software and makes the emotional consistency between the generated music and the user's actual state controllable. Experimental results show that the proposed system can generate fluent, complete, and exclusive music consistent with the user's real-time emotion recognition results.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号