Similar Literature
 20 similar documents found (search time: 265 ms)
1.
Research on Semi-Supervised Sentiment Classification Based on Ensemble Learning   Cited: 1 (self-citations: 0, by others: 1)
Sentiment classification is the task of categorizing the sentiment polarity expressed in a text. This paper studies semi-supervised sentiment classification, i.e., improving classification performance with the help of unlabeled samples when only a small number of labeled samples is available. To strengthen the semi-supervised learning, the paper proposes a label-consistency ensemble that fuses two mainstream semi-supervised sentiment classification methods: co-training over random feature subspaces and label propagation. First, the classifiers trained by the two semi-supervised methods label the unlabeled samples; next, the unlabeled samples whose two labels agree are selected; finally, the selected samples are used to update the training model. Experimental results show that the method effectively reduces the mislabeling rate on unlabeled samples and thus achieves better classification performance than either semi-supervised method alone.
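A minimal sketch of the consistency-based selection step described above, assuming scikit-learn is available; the classifier choices, subspace size, and all variable names are illustrative, not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import LabelPropagation

def consistency_select(X_lab, y_lab, X_unlab, rng=np.random.default_rng(0)):
    """Label unlabeled samples with two semi-supervised learners and
    keep only the samples on which their predictions agree."""
    # Learner 1: co-training-style classifier on a random feature subspace.
    subspace = rng.choice(X_lab.shape[1], size=X_lab.shape[1] // 2, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(X_lab[:, subspace], y_lab)
    pred1 = clf.predict(X_unlab[:, subspace])

    # Learner 2: label propagation over labeled + unlabeled data (-1 = unlabeled).
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, -np.ones(len(X_unlab), dtype=int)])
    lp = LabelPropagation(kernel="rbf").fit(X_all, y_all)
    pred2 = lp.transduction_[len(X_lab):]

    agree = pred1 == pred2            # keep only consistently labeled samples
    return X_unlab[agree], pred1[agree]
```

In a full pipeline the agreed samples and their labels would be appended to the labeled set and the procedure repeated until no new samples agree.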

2.
Automatic Construction and Optimization of a Sentiment Lexicon Based on Word2Vec   Cited: 1 (self-citations: 0, by others: 1)
Building sentiment lexicons is fundamental groundwork in text mining. In recent years, polarity annotation in sentiment lexicons has evolved from binary positive/negative labels toward multi-class emotion labels, and lexicons have become increasingly domain-specific. However, manual annotation of emotion categories is time-consuming and labor-intensive, emotion intensity is hard to quantify accurately, and excessive focus on domain specificity greatly limits a lexicon's applicability [1]. This work trains a neural network language model on a large-scale Chinese corpus and, on that basis, proposes an automatic construction method for a multi-dimensional sentiment lexicon based on a set of conversion constraints. It then studies a word-distribution-density-based method for disambiguating affective orientation, distinguishing and identifying the polarity of words that carry both positive and negative senses and computing the emotion category and intensity under each orientation separately. Finally, a global optimization scheme based on multiple semantic resources is proposed, yielding SentiRuc, a multi-dimensional Chinese sentiment lexicon with 10 emotion labels. Experiments confirm that the lexicon performs well in category annotation tests, intensity annotation tests, sentiment disambiguation, and sentiment classification tasks; the intensity test in particular shows that the lexicon has strong affective-semantic descriptive power.
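As a hedged sketch of the distributional-similarity idea only (not the paper's conversion constraint sets or the SentiRuc global optimization), the following assumes a gensim Word2Vec model trained on a tokenized Chinese corpus; the seed format, similarity threshold, and weighting rule are placeholders.

```python
from gensim.models import Word2Vec

# Assumed: `sentences` is a tokenized Chinese corpus (a list of token lists).
model = Word2Vec(sentences, vector_size=200, window=5, min_count=5)

def expand_lexicon(seeds, topn=20, threshold=0.6):
    """Grow a seed emotion word list with distributionally similar words.
    seeds: dict mapping word -> (emotion label, emotion weight)."""
    lexicon = dict(seeds)
    for word, (label, weight) in seeds.items():
        if word not in model.wv:
            continue
        for neighbor, sim in model.wv.most_similar(word, topn=topn):
            if sim >= threshold and neighbor not in lexicon:
                # Inherit the seed's label; scale its weight by the similarity.
                lexicon[neighbor] = (label, weight * sim)
    return lexicon
```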

3.
To address the lack of annotated corpora for entity-level sentiment analysis in the financial domain and the difficulty of applying general-purpose sentiment models to financial text, this paper builds a million-scale entity-level sentiment corpus for the financial domain and annotates more than 5,000 financial sentiment words as a domain sentiment lexicon. Based on this dataset, it proposes FinLexNet, a fine-grained sentiment analysis model for financial text that combines the domain sentiment lexicon with an attention mechanism. The model uses two LSTM networks to extract word-level semantic information and word-class-level information obtained by classifying words against the sentiment lexicon, effectively capturing the characteristics of financial-domain words. In addition, to give more attention to financial sentiment words in the text, an attention mechanism based on the financial sentiment lexicon is proposed to extract the sentiment information important to each entity. Experiments on the constructed entity-level financial corpus show better results than the comparison models.
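FinLexNet's exact architecture is not reproduced here; the snippet below is only a rough NumPy illustration of a lexicon-driven attention weighting over per-word hidden states, where words found in a financial sentiment lexicon receive an extra score bias. The shapes and the bias value are assumptions.

```python
import numpy as np

def lexicon_attention(hidden, query, lexicon_mask, bias=2.0):
    """hidden: (T, d) per-word hidden states (e.g., LSTM outputs).
    query: (d,) representation of the target entity.
    lexicon_mask: (T,) 1.0 where the word is in the financial sentiment lexicon.
    Returns an attention-pooled sentence vector that favors lexicon words."""
    scores = hidden @ query                    # (T,) relevance to the entity
    scores = scores + bias * lexicon_mask      # boost financial sentiment words
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the T words
    return weights @ hidden                    # (d,) weighted context vector
```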

4.
Emotion is the most important semantic information in music, and music emotion classification is widely used in music retrieval, music recommendation, music therapy, and related fields. Traditional music emotion classification is mostly audio-based, but with current technology it is difficult to extract semantically relevant features from audio. Lyrics also carry emotional information, so incorporating lyrics can further improve classification performance. This work focuses on Chinese lyrics. Building a suitable music sentiment lexicon is the prerequisite and foundation of lyric sentiment analysis, so a Chinese sentiment lexicon for the music domain is constructed with Word2Vec, and Chinese music sentiment analysis is performed based on sentiment-word weighting and part of speech. First, a sentiment word list is built on the VA (valence-arousal) emotion model and expanded using Word2Vec word-similarity computation, producing a Chinese music sentiment lexicon in which each word has an emotion category and an emotion weight. Then, sentiment-word weights are obtained from the lexicon, feature vectors for the lyric text are built from TF-IDF (Term Frequency-Inverse Document Frequency) and part of speech, and music emotion classification is finally performed. Experimental results show that the constructed lexicon is better suited to the music domain and that considering part of speech when constructing the feature vectors also improves accuracy.
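A minimal sketch of building lyric feature vectors from TF-IDF scaled by part of speech and sentiment-word weight, assuming jieba for segmentation and POS tagging; the POS weight table and lexicon format are illustrative, not the paper's values.

```python
import math
from collections import Counter
import jieba.posseg as pseg

# Assumed inputs: `lyrics` is a list of lyric strings, `lexicon` maps word -> emotion weight.
POS_WEIGHT = {"a": 1.5, "v": 1.2, "d": 1.1}   # adjectives/verbs/adverbs carry more emotion

def lyric_vectors(lyrics, lexicon):
    docs = []
    for text in lyrics:
        docs.append([(p.word, p.flag) for p in pseg.cut(text)])   # word + POS tag
    df = Counter(w for doc in docs for w in {w for w, _ in doc})  # document frequency
    n_docs = len(docs)
    vectors = []
    for doc in docs:
        tf = Counter(w for w, _ in doc)
        vec = {}
        for w, flag in doc:
            idf = math.log(n_docs / (1 + df[w]))
            weight = tf[w] * idf                     # TF-IDF
            weight *= POS_WEIGHT.get(flag[0], 1.0)   # part-of-speech weighting
            weight *= lexicon.get(w, 1.0)            # sentiment-word weight from the lexicon
            vec[w] = weight
        vectors.append(vec)
    return vectors
```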

5.
In semantics-based video retrieval systems, a temporal probabilistic hypergraph model is proposed to bridge the gap between low-level video features and high-level user needs; it incorporates the time-series factor into model construction. On this basis, a video multi-semantic annotation framework based on the temporal probabilistic hypergraph (TPH-VMLAF) is proposed. Exploiting the temporal correlation in video, the framework annotates video shots with multiple semantic labels through a multi-label semi-supervised classification algorithm for shots based on the temporal probabilistic hypergraph. The annotation process simultaneously addresses the shortage of labeled video data and the multi-semantic annotation problem. Experimental results show that the framework improves annotation precision and performs well.

6.
Video emotion recognition is a hot topic in computer vision. Since humans themselves are the source of emotion, recent work has turned to recognizing the emotion conveyed by a video from the viewer's own physiological responses, such as brain activity, i.e., implicit emotion recognition. However, the accuracy of recognizing the pleasure (valence) of music videos from EEG signals is still unsatisfactory, because effective classification features have not been extracted from the large volume of EEG data. To further improve accuracy, this work, on the DEAP database, does not use the traditional EEG time-domain and frequency-domain features; instead, it applies data standardization and feature selection to extract effective features directly from the EEG time series, obtaining features with high discriminative power, which are then used in music video classification experiments. The results show that, compared with traditional methods, this greatly improves the accuracy of recognizing music video pleasure from EEG signals.
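A hedged sketch of the general pipeline implied above, standardizing raw EEG time-series values and selecting the most discriminative ones before classification, using scikit-learn; the selector, the number of retained features, and the SVM classifier are assumptions rather than the paper's exact choices.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

# Assumed: X_train/X_test have one row per trial, columns are raw EEG time-series
# samples (channels concatenated); y_* holds high/low valence labels for the videos.
clf = make_pipeline(
    StandardScaler(),                 # per-feature standardization of the raw signal
    SelectKBest(f_classif, k=500),    # keep the 500 most class-discriminative samples
    SVC(kernel="rbf"),
)
clf.fit(X_train, y_train)
print("valence accuracy:", clf.score(X_test, y_test))
```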

7.
Feature selection aims to reduce a high-dimensional feature space, thereby simplifying the problem and improving the learning method. Previous studies have shown that feature extraction can effectively reduce feature dimensionality in supervised sentiment classification. Unlike previous work, this paper is the first to explore feature selection for semi-supervised sentiment classification and proposes a bipartite-graph-based feature selection method. The method first uses a bipartite graph to model the relationship between documents and words; then, combining the label information of a small labeled sample set with the bipartite graph, it computes a sentiment probability for each feature using the label propagation (LP) algorithm; finally, features are ranked by their sentiment probability to perform feature selection. Experimental results in multiple domains show that, in semi-supervised sentiment classification, bipartite-graph-based feature selection clearly outperforms random feature selection and effectively reduces feature dimensionality while keeping classification performance from degrading (and even improving it).
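A rough NumPy sketch of scoring word features by a sentiment probability propagated over the document-word bipartite graph; this is a simplified propagation loop in the spirit of LP, not the paper's exact formulation.

```python
import numpy as np

def rank_features(A, doc_labels, iters=20):
    """A: (n_docs, n_words) 0/1 document-word incidence matrix (the bipartite graph).
    doc_labels: 1 = positive, 0 = negative, -1 = unlabeled document.
    Returns word indices sorted by how strongly sentiment-bearing they appear."""
    doc_deg = np.maximum(A.sum(axis=1, keepdims=True), 1)    # words per document
    word_deg = np.maximum(A.sum(axis=0, keepdims=True), 1)   # documents per word
    P_dw = A / doc_deg             # document -> word transition matrix
    P_wd = (A / word_deg).T        # word -> document transition matrix
    doc_p = np.full(len(A), 0.5)   # P(positive) per document, 0.5 = unknown
    doc_p[doc_labels == 1], doc_p[doc_labels == 0] = 1.0, 0.0
    for _ in range(iters):
        word_p = P_wd @ doc_p      # propagate document sentiment onto words
        doc_p = P_dw @ word_p      # propagate word sentiment back onto documents
        doc_p[doc_labels == 1], doc_p[doc_labels == 0] = 1.0, 0.0   # clamp labels
    # Words whose probability is far from the neutral 0.5 are the most informative.
    return np.argsort(-np.abs(word_p - 0.5))
```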

8.
Existing sentiment analysis methods fail to take the information in short videos fully into account, which leads to inappropriate sentiment analysis results. An audio-visual multimodal sentiment analysis (AV-MSA) model is therefore proposed, which performs sentiment analysis of short videos using the visual features of the video frames together with the audio information. The model has two branches, visual and audio; the audio branch uses a convolutional neural network (CNN) architecture to extract emotional features from the audio spectrogram for sentiment analysis; ...

9.
《微型机与应用》, 2017(5): 38-41
One effective approach to fast retrieval over large audio databases is to build a suitable audio index, and audio segmentation and annotation are the foundation of index construction. This paper adopts a two-step audio segmentation algorithm based on short-time energy and an improved metric distance, so that the resulting audio segments have large inter-segment feature differences and small intra-segment feature variance. On top of the segmentation, the audio streams in the database are annotated: audio category annotation is performed with a BP neural network and audio content annotation with the Philips audio fingerprint algorithm, in preparation for the subsequent construction of the audio index table. Experimental results show that the two-step segmentation algorithm segments arbitrary audio streams well, that the annotation algorithms effectively annotate by audio category and audio content, and that the algorithms are robust.
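A small sketch of the short-time energy computation behind the first segmentation pass, assuming a mono PCM signal; the frame size, hop, and boundary rule are placeholders rather than the paper's tuned values, and the improved metric-distance refinement is omitted.

```python
import numpy as np

def short_time_energy(signal, frame_len=1024, hop=512):
    """Frame the signal and return the energy of each frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f.astype(np.float64) ** 2) for f in frames])

def candidate_boundaries(energy, ratio=3.0):
    """Mark frames where the energy jumps sharply relative to the previous frame
    as coarse segment boundaries (a second, distance-based pass would refine these)."""
    prev = np.maximum(energy[:-1], 1e-12)
    jumps = energy[1:] / prev
    return np.where((jumps > ratio) | (jumps < 1.0 / ratio))[0] + 1
```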

10.
刘嘉敏, 苏远歧, 魏平, 刘跃虎. 《自动化学报》, 2020, 46(10): 2137-2147
Emotion recognition based on the interactive collaboration of video and EEG signals is an important and challenging problem in human-computer interaction. This paper proposes a video-EEG collaborative emotion recognition model based on long short-term memory (LSTM) networks and an attention mechanism. The model takes as input the facial video and EEG signals recorded while participants watch emotion-eliciting videos, and outputs the participants' recognized emotion. At each time step, the model extracts convolutional neural network (CNN) features from the facial video together with the corresponding EEG features, fuses them with the LSTM, and predicts the key emotional frame at the next time step, computing the final recognition result at the last step. During this process, a spatial frequency-band attention mechanism estimates the importance of the EEG α, β, and θ bands, making more effective use of the key spatial information in the EEG signal, while a temporal attention mechanism predicts the key frame at the next time step, making more effective use of the key temporal information in the emotional data. The proposed method and model are evaluated on two standard datasets, MAHNOB-HCI and DEAP, with good recognition results. The experiments show that this work provides an effective solution to the problem of video-EEG collaborative emotion recognition.
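The full CNN-LSTM model is beyond a short example; the snippet below only illustrates, in NumPy, the spatial frequency-band attention idea of softmax-weighting the α, β, and θ band features. The shapes and the scoring vector are assumptions.

```python
import numpy as np

def band_attention(band_feats, score_vec):
    """band_feats: (3, d) EEG features for the alpha, beta and theta bands.
    score_vec: (d,) learned scoring vector (taken as given here).
    Returns the attention-weighted band representation and the band weights."""
    scores = band_feats @ score_vec              # one relevance score per band
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax: importance of each band
    fused = weights @ band_feats                 # (d,) weighted combination
    return fused, weights
```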

11.
Automatically recognizing human emotions from spontaneous and non-prototypical real-life data is currently one of the most challenging tasks in the field of affective computing. This article presents our recent advances in assessing dimensional representations of emotion, such as arousal, expectation, power, and valence, in an audiovisual human–computer interaction scenario. Building on previous studies which demonstrate that long-range context modeling tends to increase the accuracy of emotion recognition, we propose a fully automatic audiovisual recognition approach based on Long Short-Term Memory (LSTM) modeling of word-level audio and video features. LSTM networks are able to incorporate knowledge about how emotions typically evolve over time so that the inferred emotion estimates are produced under consideration of an optimal amount of context. Extensive evaluations on the Audiovisual Sub-Challenge of the 2011 Audio/Visual Emotion Challenge show how acoustic, linguistic, and visual features contribute to the recognition of different affective dimensions as annotated in the SEMAINE database. We apply the same acoustic features as used in the challenge baseline system, whereas visual features are computed via a novel facial movement feature extractor. Comparing our results with the recognition scores of all Audiovisual Sub-Challenge participants, we find that the proposed LSTM-based technique leads to the best average recognition performance that has been reported for this task so far.

12.
Affective computing conjoins the research topics of emotion recognition and sentiment analysis, and can be realized with unimodal or multimodal data, consisting primarily of physical information (e.g., text, audio, and visual) and physiological signals (e.g., EEG and ECG). Physical-based affect recognition attracts more researchers due to the availability of multiple public databases, but it is challenging to reveal one's inner emotion when it is purposefully hidden behind facial expressions, audio tones, body gestures, etc. Physiological signals can generate more precise and reliable emotional results; yet, the difficulty in acquiring these signals hinders their practical application. Besides, by fusing physical information and physiological signals, useful features of emotional states can be obtained to enhance the performance of affective computing models. While existing reviews focus on one specific aspect of affective computing, we provide a systematic survey of its important components: emotion models, databases, and recent advances. Firstly, we introduce two typical emotion models, followed by five kinds of commonly used databases for affective computing. Next, we survey and taxonomize state-of-the-art unimodal affect recognition and multimodal affective analysis in terms of their detailed architectures and performances. Finally, we discuss some critical aspects of affective computing and its applications and conclude this review by pointing out some of the most promising future directions, such as the establishment of benchmark databases and fusion strategies. The overarching goal of this systematic review is to help academic and industrial researchers understand the recent advances as well as new developments in this fast-paced, high-impact domain.

13.
14.
Emotional factors directly reflect audiences' attention, evaluation and memory. Recently, video affective content analysis has attracted more and more research effort. Most existing methods map low-level affective features directly to emotions by applying machine learning. Compared to the human perception process, there is actually a gap between low-level features and the high-level human perception of emotion. In order to bridge this gap, we propose a three-level affective content analysis framework that introduces mid-level representations to indicate dialog, audio emotional events (e.g., horror sounds and laughter) and textual concepts (e.g., informative keywords). The mid-level representation is obtained from machine learning on low-level features and used to infer high-level affective content. We further apply the proposed framework in a number of case studies. Audio emotional events, dialog and subtitles are studied to assist affective content detection in different video domains/genres. Multiple modalities are considered for affective analysis, since each modality has its own merit in evoking emotions. Experimental results show that the proposed framework is effective and efficient for affective content analysis, and that audio emotional events, dialog and subtitles are promising mid-level representations.

15.
In this paper, we present human emotion recognition systems based on audio and spatio-temporal visual features. The proposed system has been tested on an audiovisual emotion dataset with different subjects of both genders. Mel-frequency cepstral coefficient (MFCC) and prosodic features are first identified and then extracted from emotional speech. For facial expressions, spatio-temporal features are extracted from the visual streams. Principal component analysis (PCA) is applied for dimensionality reduction of the visual features, capturing 97% of the variance. Codebooks are constructed for both audio and visual features using Euclidean distance, and the histograms of codeword occurrences are fed to state-of-the-art SVM classifiers to obtain the judgment of each modality. The judgments from the classifiers are then combined using the Bayes sum rule (BSR) as a final decision step. The proposed system is tested on a public dataset to recognize human emotions. Experimental results show that using visual features alone yields an average accuracy of 74.15% and using audio features alone yields 67.39%, whereas combining both audio and visual features significantly improves the overall system accuracy to 80.27%.
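A brief sketch of the final decision step, combining the audio and visual classifiers' per-class posteriors with a sum rule; the two SVMs and the feature variables are stand-ins for the system described above, not its actual implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed: audio/visual histogram features and the shared emotion labels are available
# as X_audio_train, X_visual_train, y_train (and the corresponding test splits).
audio_clf = SVC(probability=True).fit(X_audio_train, y_train)
visual_clf = SVC(probability=True).fit(X_visual_train, y_train)

def bayes_sum_rule(p_audio, p_visual):
    """Sum-rule fusion: add the per-class posteriors, then pick the best class."""
    combined = p_audio + p_visual
    return np.argmax(combined, axis=1)

pred = bayes_sum_rule(audio_clf.predict_proba(X_audio_test),
                      visual_clf.predict_proba(X_visual_test))
```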

16.
This paper presents a probabilistic Bayesian belief network (BBN) method for automatic indexing of excitement clips in sports video sequences. The excitement clips are extracted from sports video sequences using audio features and are comprised of multiple subclips corresponding to events such as replays, field views, close-ups of players, close-ups of referees/umpires, spectators, and players' gatherings. The events are detected and classified using a hierarchical classification scheme. The BBN, based on the observed events, is used to assign semantic concept labels to the excitement clips, such as goal, save, and card in soccer videos, and wicket and hit in cricket videos. The BBN-based indexing results are compared with our previously proposed event-association based approach, and the BBN is found to outperform it. The proposed scheme provides a generalizable method for linking low-level video features with high-level semantic concepts; its generic nature within the sports domain is validated by successfully indexing both soccer and cricket excitement clips. The scheme thus offers a general approach to the automatic tagging of large-scale multimedia content with rich semantics, and the collection of labeled excitement clips provides a video summary for highlight browsing, video skimming, indexing and retrieval.

17.
18.
To address the problem of effective emotional annotation of images, a multi-feature fuzzy emotion annotation method is proposed. Based on a discussion of the relationship between the visual features of an image (color, texture and shape) and its emotion, suitable algorithms are selected and adapted to extract the color, texture and shape features, which serve as fuzzy inputs; an emotion-space representation is proposed to quantify emotion; and the approximate reasoning theory of fuzzy sets is used to annotate the images with emotions. Fuzzy emotional annotation of 100 natural images was compared with the emotional impressions reported by 20 volunteers. The experimental results show that the method effectively labels the emotional semantics of images and confirm that the adopted emotion-space representation is sound; the method has application value in decoration, e-learning, image retrieval and affective computing.

19.
In recent years, affective computing has become a research hotspot in natural language processing and artificial intelligence, and text sentiment analysis is an important part of it. A multi-label sentiment classification method is proposed that combines topic features with three-way decision theory. A topic-based sentiment recognition model first determines the multi-label emotion categories of each sentence; on this basis, three-way decision theory is applied to achieve multi-label sentiment classification of the whole document. Experimental results show that the method achieves satisfactory results in document-level multi-label emotion recognition.

20.
With the enormous amount of multimodal content available on the internet, multimodal sentiment classification and emotion detection have become heavily researched topics. Feature selection, context extraction, and multimodal fusion are the most important challenges in multimodal sentiment classification and affective computing. To address these challenges, this paper presents a multilevel feature optimization and multimodal contextual fusion technique. Evolutionary-computing-based feature selection models extract a subset of features from multiple modalities. Contextual information between neighboring utterances is extracted using bidirectional long short-term memory (BiLSTM) networks at multiple levels. Bimodal fusion is first performed on pairs of modalities, and trimodal fusion is then performed by fusing all three modalities. The proposed method is evaluated on two publicly available datasets: CMU-MOSI for sentiment classification and IEMOCAP for emotion recognition. By incorporating the selected feature subset and the contextual information, the proposed model outperforms the two standard baselines by over 3% and 6% in sentiment and emotion classification, respectively.
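A compact Keras sketch of the contextual step only: a bidirectional LSTM over the sequence of utterance-level feature vectors of one video, so that each utterance representation absorbs its neighbors' context. The dimensions and the classification head are placeholders; the evolutionary feature selection and the bimodal/trimodal fusion stages are omitted.

```python
import tensorflow as tf

MAX_UTTS, FEAT_DIM, N_CLASSES = 50, 300, 2   # assumed sizes

# One training example = the sequence of utterance feature vectors of a single video.
utterances = tf.keras.Input(shape=(MAX_UTTS, FEAT_DIM))
contextual = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(128, return_sequences=True))(utterances)   # neighbor-aware utterance vectors
logits = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"))(contextual)

model = tf.keras.Model(utterances, logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```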
