Similar Literature
19 similar records found
1.
Physiological signals currently used to classify fear-of-heights (acrophobia) emotion mainly include EEG, ECG, and skin conductance. Considering the limitations of EEG acquisition and processing and the difficulty of fusing multimodal signals, a dynamic weighted decision fusion algorithm based on six peripheral physiological signals is proposed. First, virtual reality is used to induce different degrees of acrophobia in subjects while six peripheral physiological signals are recorded synchronously: ECG, pulse, EMG, skin conductance, skin temperature, and respiration. Second, statistical and event-related features are extracted from the signals to build an acrophobia affective dataset. Third, a dynamic weighted decision fusion algorithm is proposed based on classification performance, modality information, and cross-modal information, integrating the multimodal signals effectively to improve recognition accuracy. Finally, the experimental results are compared with previous related studies and further validated on the open WESAD affective dataset. The results show that multimodal peripheral physiological signals help improve acrophobia classification, and the proposed dynamic weighted decision fusion algorithm significantly improves classification performance and model robustness.
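The dynamics of the weighting are not spelled out in this abstract, but decision-level fusion with static-plus-dynamic weights can be sketched roughly as follows; deriving the static weight from each modality's validation accuracy and the dynamic weight from per-sample confidence is an assumption for illustration, not the authors' exact rule.

```python
import numpy as np

def fuse_decisions(probs_per_modality, val_accuracies):
    """Decision-level fusion of per-modality class probabilities.

    probs_per_modality: list of (n_samples, n_classes) arrays, one per modality.
    val_accuracies:     per-modality validation accuracies, used as static weights.
    Each sample's dynamic weight is the modality's confidence (max probability).
    """
    probs = np.stack(probs_per_modality)            # (n_mod, n_samples, n_classes)
    static_w = np.asarray(val_accuracies)[:, None, None]
    dynamic_w = probs.max(axis=2, keepdims=True)    # per-sample confidence
    w = static_w * dynamic_w
    w = w / w.sum(axis=0, keepdims=True)            # normalize over modalities
    fused = (w * probs).sum(axis=0)                 # weighted average of posteriors
    return fused.argmax(axis=1)

# Toy example: two modalities, three samples, two classes.
p_ecg = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p_gsr = np.array([[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]])
print(fuse_decisions([p_ecg, p_gsr], val_accuracies=[0.85, 0.70]))
```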

2.
Given that sleep apnea syndrome (SAS) is currently diagnosed only through polysomnography (PSG) monitoring and the judgment of specialist physicians, an intelligent, convenient, and unobtrusive SAS monitoring system is developed. The system combines a microphone array, optical fiber, and flexible pressure-array sensors to collect multiple kinds of physiological information from the human body, providing a disturbance-free detection method for SAS. Based on physiological information such as snoring intensity, heart rate, respiration rate, and sleeping posture, a multi-information interactive system model is built to evaluate a sleep risk index, grade SAS quantitatively, and generate personalized reports based on big-data analysis. The system provides a rich visual interface displaying physiological information and analysis results; it does not disturb natural sleep and can track and record changes in an individual's snoring over long periods in a home environment; and it is simple and convenient to use, promoting home applications of smart healthcare.

3.
To address the problems of automation islands and poor information integration in railway power grid dispatch monitoring and management, a new information-agent integration method fusing multiple frameworks is proposed. Monitoring information agents are built with the JACK framework; a Common Information Model is used to implement the interactive business information flows of the status, analog, and control classes; and multi-agent technology, database connection pools, and Web Services are fused into an integrated dispatch monitoring information platform. Test results on measurement signal 4#ZBCBVVDD verify the effectiveness of the integration platform.

4.
This paper surveys the emerging field of multimodal emotion recognition. It first reviews the research foundations of emotion recognition from two aspects: emotion description models and emotion elicitation methods. It then addresses information fusion, the key difficulty in multimodal emotion recognition, introducing mainstream and efficient fusion strategies at four levels: data-level, feature-level, decision-level, and model-level fusion. Representative multimodal combinations are then listed from three angles: mixtures of behavioral modalities, mixtures of neurophysiological modalities, and mixtures of neurophysiological and behavioral modalities, comprehensively arguing that multimodal approaches have stronger emotion discrimination and representation ability than unimodal ones; some thoughts on turning multimodal emotion recognition methods into engineering applications are also offered. Finally, based on an analysis of the current state of emotion recognition research, ways and strategies to improve the performance of emotion recognition models are discussed in depth.
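To make the fusion-level distinction concrete, here is a minimal scikit-learn sketch (synthetic data, purely illustrative and not drawn from the survey) contrasting feature-level fusion, which concatenates modality features before a single classifier, with decision-level fusion, which averages per-modality posteriors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400)
X_a = rng.normal(y[:, None], 1.0, (400, 8))   # synthetic "modality A" features
X_b = rng.normal(y[:, None], 1.5, (400, 6))   # synthetic "modality B" features
idx_tr, idx_te = train_test_split(np.arange(400), random_state=0)

# Feature-level fusion: concatenate features, train one classifier.
clf = LogisticRegression().fit(np.hstack([X_a, X_b])[idx_tr], y[idx_tr])
acc_feat = clf.score(np.hstack([X_a, X_b])[idx_te], y[idx_te])

# Decision-level fusion: one classifier per modality, average posteriors.
clf_a = LogisticRegression().fit(X_a[idx_tr], y[idx_tr])
clf_b = LogisticRegression().fit(X_b[idx_tr], y[idx_tr])
p = (clf_a.predict_proba(X_a[idx_te]) + clf_b.predict_proba(X_b[idx_te])) / 2
acc_dec = (p.argmax(1) == y[idx_te]).mean()
print(f"feature-level: {acc_feat:.3f}  decision-level: {acc_dec:.3f}")
```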

5.
Building on the current structure of industrial production monitoring systems, a distributed control system based on embedded intelligent nodes is proposed. The application principles of multi-sensor information fusion and embedded systems are analyzed, the architecture of the intelligent node is given, and a software model for multi-level information fusion is established. Finally, simulation experiments on an intelligent-node prototype validate the information fusion model. The experimental results show that the model fuses information effectively, demonstrating its feasibility.

6.
Clarifying the degree of correlation between physiological signals and emotions plays an important role in improving emotion recognition accuracy, yet few studies have addressed this correlation so far. To study it, data from the physiological signal database of the University of Augsburg, Germany, are used, and grey relational analysis is applied to measure the correlation of joy, anger, and sadness with ECG, respiration, and skin conductance signals. On this basis, CHAID decision trees and SVM classifiers are used for emotion recognition and analysis according to the relational results. The results show that: (1) the three emotions correlate most strongly with the respiration signal, next with skin conductance, and least with ECG; (2) emotion recognition on the three physiological signals under the three emotions using CHAID decision trees and SVM verifies these correlation levels.
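Grey relational analysis follows a standard recipe; the sketch below uses the conventional resolution coefficient ρ = 0.5 and mean normalization as assumptions, since the paper's exact settings are not given in this abstract:

```python
import numpy as np

def grey_relational_degree(reference, comparisons, rho=0.5):
    """Grey relational degree of each comparison sequence to the reference.

    reference:   (n,) reference sequence, e.g. an emotion-intensity curve.
    comparisons: (m, n) candidate sequences, e.g. physiological features.
    """
    # Mean-normalize each sequence to remove scale differences.
    ref = reference / reference.mean()
    cmp_ = comparisons / comparisons.mean(axis=1, keepdims=True)
    delta = np.abs(cmp_ - ref)                          # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    return xi.mean(axis=1)                              # average over time points

ref = np.array([3.0, 3.5, 4.1, 4.4, 5.0])
signals = np.array([[2.9, 3.4, 4.0, 4.3, 4.9],          # tracks the reference
                    [5.0, 4.1, 3.2, 2.5, 1.9]])         # moves oppositely
print(grey_relational_degree(ref, signals))             # first degree is larger
```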

7.
Introducing multi-agent technology into information fusion systems
This paper briefly introduces multi-agent technology and information fusion systems, applies multi-agent technology to information fusion to improve the models and methods of such systems, proposes a multi-agent information fusion model, and studies distributed reinforcement learning in information fusion. By drawing on research results from multi-agent technology, it opens another avenue for developing information fusion theory and applications.

8.
To address the high feature dimensionality and low recognition rates in current emotion recognition research, an emotion recognition method is proposed based on the fusion of multiple physiological signals (ECG, EMG, respiration, and skin conductance) and FCA-ReliefF feature selection. Physiological signal features extracted in both the time and frequency domains are fused and fed to a classifier for emotion classification. To reduce feature dimensionality, feature correlation analysis (FCA) first removes highly correlated features, and ReliefF then eliminates features that contribute weakly to classification. The method is validated on a public dataset and compared with related studies. The results show advantages in both feature dimensionality and recognition rate: the proposed FCA-ReliefF reduction strategy effectively reduces the features from 108 to 60 dimensions and raises recognition accuracy to 98.40%, verifying the effectiveness of the method.
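A rough two-stage sketch under stated assumptions: Pearson correlation with a 0.9 cutoff stands in for the FCA step, and a simplified binary Relief replaces full multi-class ReliefF; all thresholds here are illustrative, not the paper's.

```python
import numpy as np

def drop_correlated(X, threshold=0.9):
    """FCA step: greedily drop one feature of each highly correlated pair."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

def relief_weights(X, y, n_iter=100, rng=np.random.default_rng(0)):
    """Simplified binary Relief: reward features separating hits from misses."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        same, diff = y == y[i], y != y[i]
        same[i] = False                               # exclude the sample itself
        dist = np.abs(X - X[i]).sum(axis=1)
        hit = X[same][np.argmin(dist[same])]          # nearest same-class sample
        miss = X[diff][np.argmin(dist[diff])]         # nearest other-class sample
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return w / n_iter

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 10))
X[:, 0] += y                                          # informative feature
X[:, 1] = X[:, 0] + 1e-3 * rng.normal(size=200)       # redundant copy, dropped by FCA
kept = drop_correlated(X)
w = relief_weights(X[:, kept], y)
top = np.array(kept)[np.argsort(w)[::-1][:5]]         # keep the 5 strongest features
print("kept after FCA:", kept, "top Relief features:", top)
```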

9.
Human stress is closely related to behavior; in intelligent driving in particular, driver stress sensing has great application potential for assisted driving. Existing stress sensing methods are mostly designed for static environments, and their detection processes lack convenience, making it difficult to meet the highly dynamic demands of intelligent driving. To achieve natural, accurate, and reliable stress detection in intelligent driving, a behavior-assisted stress sensing method based on a wearable system is proposed. The method detects stress alongside ongoing behavior and discriminates stress states using multiple indicators, effectively improving detection accuracy. Its basic principle is that each person's physiological characteristics and behavioral patterns differ across stress states, uniquely affecting stress-related PPG data and behavior-related IMU data. A wearable glove embedded with multiple sensors measures the driver's physiological and motion information; multi-signal fusion yields reliable physiological and behavioral indicators; and an SVM model with good generalization performance finally classifies the driver's stress state. Validation experiments deployed in a simulated driving environment show that stress classification accuracy reaches 95%.
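As a toy illustration of such a pipeline, the sketch below fabricates 10-second windows, uses a peak-rate feature as a crude PPG heart-rate proxy and IMU energy as the behavioral indicator, then trains an SVM; the paper's real indicators and fusion are richer, so treat every choice here as an assumption.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.svm import SVC

FS = 50  # assumed sampling rate in Hz

def window_features(ppg, imu):
    """One feature vector per window: PPG peak rate plus IMU motion energy."""
    peaks, _ = find_peaks(ppg, distance=FS // 2)   # crude beat detection
    bpm = len(peaks) / (len(ppg) / FS) * 60        # beats per minute
    energy = np.sqrt((imu ** 2).mean())            # RMS acceleration
    return [bpm, energy]

rng = np.random.default_rng(0)
X, y = [], []
for label, (beat_hz, motion) in enumerate([(1.1, 0.5), (1.6, 1.5)]):  # calm, stressed
    for _ in range(60):
        t = np.arange(10 * FS) / FS                # one 10-second window
        ppg = np.sin(2 * np.pi * beat_hz * t) + 0.2 * rng.normal(size=t.size)
        imu = motion * rng.normal(size=t.size)
        X.append(window_features(ppg, imu))
        y.append(label)
perm = rng.permutation(len(X))                     # shuffle before splitting
X, y = np.asarray(X)[perm], np.asarray(y)[perm]
clf = SVC(kernel="rbf").fit(X[:100], y[:100])
print("held-out accuracy:", clf.score(X[100:], y[100:]))
```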

10.
Human behavior analysis from video sequences requires detecting and discriminating human poses, but existing pose detection and discrimination methods often fall short of practical requirements. The use of BEMD (bidimensional empirical mode decomposition) to improve feature separability and discriminability for pose detection and discrimination is explored from two aspects: the multi-layer bidimensional intrinsic mode function (BIMF) images obtained by BEMD decomposition of the source image carry discriminative features and form high-contrast regions with strong edges, including human silhouette regions; and recursive computation from low-resolution-scale BIMF images to high-resolution-scale BIMF images builds a BEMD-based multiscale-tree-structured model that quickly extracts target regions and obtains human silhouette features. Experiments show that extracting pose silhouette features with this method and building a simplified human pose model enables fast detection and discrimination of human poses for real-time recognition.

11.
Human-computer interaction is inseparable from emotion recognition, and at present both unimodal emotion recognition and recognition fusing multiple physiological parameters suffer from low recognition rates and poor robustness. To overcome these problems, a fused emotion recognition system based on two different types of signals is proposed: a bimodal system combining the galvanic skin response (GSR) signal and text information. Feature parameters of the GSR signal and emotion-keyword features of the text are collected, analyzed, and optimized for each emotion; an artificial neural network and a Gaussian mixture model are designed as the respective unimodal emotion classifiers; and finally an improved Gaussian mixture model performs weighted fusion at the decision layer. Experimental results show that this fused system achieves higher recognition accuracy than both unimodal systems and multimodal systems fusing multiple physiological parameters. Thus, an emotion recognition system with a high recognition rate and good robustness can be built from these two different types of emotional features, the GSR signal and text information.

12.
Objective: Current video emotion recognition methods mostly rely on facial expressions alone and ignore the emotional information carried by the physiological signals hidden in facial video. This paper proposes a bimodal video emotion recognition method based on facial expressions and the blood volume pulse (BVP) physiological signal. Method: The video is first preprocessed to obtain the facial video; LBP-TOP and HOG-TOP spatiotemporal expression features are then extracted from the facial video, and video color magnification is used to recover the BVP signal, from which physiological emotion features are extracted; the two kinds of features are fed separately to BP classifiers to train classification models; finally, fuzzy-integral decision-level fusion produces the emotion recognition result. Results: In experiments on a self-built laboratory facial video emotion database, the average recognition rates of the expression-only and physiological-signal-only modalities are 80% and 63.75% respectively, while the fused result is 83.33%, higher than either single modality before fusion, demonstrating the effectiveness of the bimodal fusion. Conclusion: The proposed bimodal spatiotemporal feature fusion method makes fuller use of the emotional information in video and effectively improves video emotion classification; comparative experiments with similar video emotion recognition algorithms verify its superiority. In addition, the fuzzy-integral decision-level fusion algorithm effectively suppresses the interference of unreliable decision information, yielding better recognition accuracy.
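For readers unfamiliar with fuzzy-integral fusion, the following is a textbook Sugeno fuzzy integral over two classifier outputs with a λ-fuzzy measure; the densities (source reliabilities) and class supports are made-up numbers, not values from the paper.

```python
import numpy as np

def sugeno_fuse(h, g):
    """Sugeno fuzzy integral of two classifier confidences h with densities g.

    h: per-source support for one class, in [0, 1]  (e.g. classifier outputs)
    g: per-source fuzzy densities, in (0, 1)        (source reliabilities)
    For two sources, the lambda of the lambda-fuzzy measure has a closed form.
    """
    lam = (1 - g[0] - g[1]) / (g[0] * g[1])   # root of (1+l*g1)(1+l*g2) = 1+l
    order = np.argsort(h)[::-1]               # sort supports descending
    h_sorted, g_sorted = h[order], g[order]
    g_cum = g_sorted[0]                       # measure of the top-1 source set
    best = min(h_sorted[0], g_cum)
    for i in range(1, len(h)):
        g_cum = g_cum + g_sorted[i] + lam * g_sorted[i] * g_cum
        best = max(best, min(h_sorted[i], g_cum))
    return best

# Per-class supports from an expression model and a BVP model (made-up numbers).
h_expr = np.array([0.80, 0.15, 0.05])   # expression classifier, 3 emotions
h_bvp = np.array([0.55, 0.30, 0.15])    # physiological classifier
g = np.array([0.7, 0.4])                # assumed reliabilities of the two models
scores = [sugeno_fuse(np.array([e, b]), g) for e, b in zip(h_expr, h_bvp)]
print("fused class:", int(np.argmax(scores)), "scores:", np.round(scores, 3))
```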

13.

Emotion is considered a physiological state that appears whenever a transformation is observed by an individual in their environment or body. While studying the literature, it has been observed that combining the electrical activity of the brain with other physiological signals for the accurate analysis of human emotions is yet to be explored in greater depth. On the basis of physiological signals, this work has proposed a model using machine learning approaches for the calibration of music mood and human emotion. The proposed model consists of three phases: (a) prediction of the mood of the song based on audio signals, (b) prediction of the emotion of the human based on physiological signals using EEG, GSR, ECG, and a Pulse Detector, and finally (c) mapping between the music mood and the human emotion, classifying them in real-time. Extensive experiments have been conducted on different music mood and human emotion datasets for influential feature extraction, training, testing, and performance evaluation. An effort has been made to observe and measure human emotions up to a certain degree of accuracy and efficiency by recording a person's bio-signals in response to music. Further, to test the applicability of the proposed work, playlists are generated based on the user's real-time emotion, determined using features generated from different physiological sensors, and the mood depicted by musical excerpts. This work could prove to be helpful for improving mental and physical health by scientifically analyzing physiological signals.


14.
Affective computing conjoins the research topics of emotion recognition and sentiment analysis, and can be realized with unimodal or multimodal data, consisting primarily of physical information (e.g., text, audio, and visual) and physiological signals (e.g., EEG and ECG). Physical-based affect recognition attracts more researchers due to the availability of multiple public databases, but it is challenging to reveal one's inner emotion hidden purposefully from facial expressions, audio tones, body gestures, etc. Physiological signals can generate more precise and reliable emotional results; yet, the difficulty in acquiring these signals hinders their practical application. Besides, by fusing physical information and physiological signals, useful features of emotional states can be obtained to enhance the performance of affective computing models. While existing reviews focus on one specific aspect of affective computing, we provide a systematic survey of important components: emotion models, databases, and recent advances. Firstly, we introduce two typical emotion models followed by five kinds of commonly used databases for affective computing. Next, we survey and taxonomize state-of-the-art unimodal affect recognition and multimodal affective analysis in terms of their detailed architectures and performances. Finally, we discuss some critical aspects of affective computing and its applications and conclude this review by pointing out some of the most promising future directions, such as the establishment of benchmark databases and fusion strategies. The overarching goal of this systematic review is to help academic and industrial researchers understand the recent advances as well as new developments in this fast-paced, high-impact domain.

15.
As an essential approach to understanding human interactions, emotion classification is a vital component of behavioral studies as well as being important in the design of context-aware systems. Recent studies have shown that speech contains rich information about emotion, and numerous speech-based emotion classification methods have been proposed. However, the classification performance is still short of what is desired for the algorithms to be used in real systems. We present an emotion classification system using several one-against-all support vector machines with a thresholding fusion mechanism to combine the individual outputs, which provides the functionality to effectively increase the emotion classification accuracy at the expense of rejecting some samples as unclassified. Results show that the proposed system outperforms three state-of-the-art methods and that the thresholding fusion mechanism can effectively improve the emotion classification, which is important for applications that require very high accuracy but do not require that all samples be classified. We evaluate the system performance for several challenging scenarios including speaker-independent tests, tests on noisy speech signals, and tests using non-professional acted recordings, in order to demonstrate the performance of the system and the effectiveness of the thresholding fusion mechanism in real scenarios.
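The reject option described here can be sketched with scikit-learn's one-vs-rest SVM decision scores and a tunable acceptance threshold; the features below are synthetic stand-ins and the threshold value is an assumption to be tuned on validation data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Stand-in for speech emotion features: 4 classes, 40 features.
X, y = make_classification(n_samples=600, n_features=40, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LinearSVC(max_iter=5000).fit(X_tr, y_tr)  # one-vs-rest by default
scores = clf.decision_function(X_te)            # (n_samples, n_classes) margins

THRESHOLD = 0.25                                # assumed; tune on validation data
pred = scores.argmax(axis=1)
accepted = scores.max(axis=1) >= THRESHOLD      # reject low-margin samples

acc_on_accepted = (pred[accepted] == y_te[accepted]).mean()
print(f"accepted {accepted.mean():.0%} of samples, "
      f"accuracy on accepted: {acc_on_accepted:.3f}")
```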

16.
The Journal of Supercomputing - In this study, we present a fusion model for emotion recognition based on visual data. The proposed model uses video information as its input and generates emotion...

17.
To address the incomplete extraction of nonlinear features from emotion-related EEG signals, phase space reconstruction is introduced into emotional EEG recognition, and three nonlinear geometric features describing trajectory contours under phase space reconstruction are extracted as new emotional EEG features. Combining the power spectral entropy of the EEG with nonlinear attribute features (approximate entropy, largest Lyapunov exponent, and Hurst exponent), a fusion algorithm based on principal component analysis (PCA) is proposed for the nonlinear global features (nonlinear geometric features plus nonlinear attribute features) and power spectral entropy, with a support vector machine (SVM) as the classifier for emotion recognition. The results show that the nonlinear global features achieve emotion recognition more effectively, with binary emotion recognition rates around 90%. The PCA-fused emotional features achieve better recognition performance than any single feature, with an average recognition rate of 86.42% in four-class experiments. The results indicate that nonlinear global features improve the recognition rate over nonlinear attribute features alone, and that combining nonlinear global features with power spectral entropy constructs better emotional EEG feature parameters.
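The PCA-based fusion followed by an SVM admits a generic sketch like the one below; the stand-in features, component count, and kernel are assumptions, since the abstract does not fix them.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 4, 240)                      # four emotion classes
geom = rng.normal(y[:, None], 1.0, (240, 3))     # stand-in geometric features
attr = rng.normal(y[:, None], 2.0, (240, 3))     # approx. entropy, Lyapunov, Hurst
pse = rng.normal(y[:, None], 2.5, (240, 1))      # power spectral entropy

X = np.hstack([geom, attr, pse])                 # concatenate before PCA fusion
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```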

18.
Huan Ruo-Hong, Shu Jia, Bao Sheng-Lin, Liang Rong-Hua, Chen Peng, Chi Kai-Kai. Multimedia Tools and Applications, 2021, 80(6): 8213-8240

A video multimodal emotion recognition method based on Bi-GRU and attention fusion is proposed in this paper. A bidirectional gated recurrent unit (Bi-GRU) is applied to improve the accuracy of emotion recognition in temporal contexts. A new network initialization method is proposed and applied to the network model, which further improves the video emotion recognition accuracy of the time-contextual learning. To overcome the fixed, uniform weighting of each modality in multimodal fusion, a video multimodal emotion recognition method based on an attention fusion network is proposed. The attention fusion network calculates the attention distribution of each modality at each moment in real time, so that the network model can learn multimodal contextual information in real time. The experimental results show that the proposed method improves the accuracy of emotion recognition in the three single modalities of text, visual, and audio, while also improving the accuracy of video multimodal emotion recognition, and it outperforms existing state-of-the-art multimodal emotion recognition methods in both sentiment classification and sentiment regression.
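A compact PyTorch sketch of the two named mechanisms, a Bi-GRU encoder pooled by temporal attention per modality and an attention layer that weights modalities at fusion time; the dimensions and head structure are assumptions rather than the paper's exact network (which also includes a custom initialization not shown here):

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Bi-GRU over a modality's sequence, pooled by temporal attention."""
    def __init__(self, in_dim, hid=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hid, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hid, 1)
    def forward(self, x):                      # x: (batch, time, in_dim)
        h, _ = self.gru(x)                     # (batch, time, 2*hid)
        a = torch.softmax(self.att(h), dim=1)  # attention over time steps
        return (a * h).sum(dim=1)              # (batch, 2*hid)

class AttentionFusion(nn.Module):
    """Encode each modality, weight modalities by attention, then classify."""
    def __init__(self, in_dims, n_classes, hid=64):
        super().__init__()
        self.encoders = nn.ModuleList(ModalityEncoder(d, hid) for d in in_dims)
        self.mod_att = nn.Linear(2 * hid, 1)
        self.head = nn.Linear(2 * hid, n_classes)
    def forward(self, xs):                     # xs: one tensor per modality
        z = torch.stack([enc(x) for enc, x in zip(self.encoders, xs)], dim=1)
        w = torch.softmax(self.mod_att(z), dim=1)  # attention over modalities
        return self.head((w * z).sum(dim=1))

# Toy forward pass: text (300-d), visual (512-d), audio (40-d) sequences.
model = AttentionFusion(in_dims=[300, 512, 40], n_classes=6)
xs = [torch.randn(8, 20, d) for d in (300, 512, 40)]
print(model(xs).shape)                         # torch.Size([8, 6])
```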


19.
Continuous emotion recognition based on multimodal physiological data has important uses in many fields, but because subject data are scarce and emotion is subjective, training emotion recognition models still requires more physiological data and depends on same-source subject data. This paper proposes several continuous emotion recognition methods based on facial images and EEG. For the facial image modality, to mitigate the overfitting caused by small facial image datasets, a multi-task convolutional neural network trained with transfer learning is proposed. For the EEG modality, two emotion recognition models are proposed: the first is a subject-dependent model based on support vector machines, which achieves high accuracy when the test and training data come from the same source; the second is a cross-subject model designed to reduce the impact of individual differences and the non-stationarity of EEG on emotion recognition; based on long short-term memory networks, it maintains stable recognition performance even when the test and training data come from different sources. To improve recognition accuracy on same-source data, two methods for fusing multimodal emotional information at the decision level are proposed: an enumerated-weight method and an adaptive boosting method. Experiments show that when the test and training data are from the same source, the bimodal model reaches average accuracies of 74.23% and 80.30% in the best case on the arousal and valence dimensions respectively; when they are from different sources, the cross-subject LSTM model reaches 58.65% on arousal and 51.70% on valence.
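The enumerated-weight fusion mentioned last can be sketched as a grid search over a bimodal mixing weight on validation data; the step size and toy posteriors below are assumptions, and the adaptive boosting variant is not shown.

```python
import numpy as np

def enumerate_weight(p_face, p_eeg, y_val, step=0.01):
    """Pick the face/EEG mixing weight that maximizes validation accuracy."""
    best_w, best_acc = 0.0, -1.0
    for w in np.arange(0.0, 1.0 + step, step):
        pred = (w * p_face + (1 - w) * p_eeg).argmax(axis=1)
        acc = (pred == y_val).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Made-up validation posteriors for 4 samples, 2 classes (e.g. low/high arousal).
p_face = np.array([[0.7, 0.3], [0.6, 0.4], [0.3, 0.7], [0.55, 0.45]])
p_eeg = np.array([[0.4, 0.6], [0.8, 0.2], [0.2, 0.8], [0.3, 0.7]])
y_val = np.array([0, 0, 1, 1])
print(enumerate_weight(p_face, p_eeg, y_val))
```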

