Similar Literature
19 similar documents found (search time: 171 ms)
1.
Classifying electronic music reasonably and effectively lets users quickly find the music they like and lets recommendation systems make more accurate recommendations. To improve classification accuracy, this paper proposes a multi-feature-fusion music classification method based on CGABC-SVM. For feature extraction, to address the incompleteness of any single audio feature, four audio features are extracted and combined into a multi-feature fusion matrix: pitch frequency, formants, Mel-frequency cepstral coefficients (MFCC), and relative spectral perceptual linear prediction (RASTA-PLP). For the classifier, to address the difficulty of selecting support vector machine (SVM) parameters, the cross global artificial bee colony (CGABC) algorithm is used to optimize the SVM parameters, yielding a CGABC-SVM music classification model. Experimental results show that the method effectively distinguishes various music signals, with classification accuracy significantly better than the compared methods.
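As a rough illustration of the classifier side of this approach, the sketch below tunes an RBF-SVM over a fused feature matrix. The CGABC bee-colony optimizer is not reproduced; a plain grid search over (C, gamma) stands in for it, and random synthetic data stands in for the fused pitch/formant/MFCC/RASTA-PLP matrix.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the fused feature matrix: each row would concatenate the four
# per-clip feature groups (pitch, formants, MFCC, RASTA-PLP).
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The paper tunes (C, gamma) with CGABC; a plain grid search stands in here.
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.001, 0.01, 0.1]},
    cv=3,
)
search.fit(X_tr, y_tr)
acc = search.score(X_te, y_te)
print(f"test accuracy: {acc:.3f}")
```

Any population-based optimizer (bee colony, PSO, genetic algorithm) could replace the grid search without changing the surrounding pipeline.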

2.
The main melody track of a piece of music carries much of its important melodic information; it is the basis for music feature recognition and a prerequisite for designing music-driven lighting shows. The work involves melody representation, melodic feature extraction, and classification techniques. For multi-track MIDI files, a method for identifying the main melody track is introduced. Features characterizing the musical melody are extracted, and the H-K classification algorithm is used to build a track classifier model that separates main melody tracks from accompaniment tracks, thereby extracting the main melody track of a MIDI file. Experimental results show good performance, laying the necessary groundwork for the automatic design of music lighting shows.

3.
The diversity and uncertainty of music make traditional classification methods slow and inaccurate in large-scale practical applications. To improve accuracy and precision, a neural-network-based music classification method is proposed. First, cepstral coefficients are used to extract music features, and the optimal feature signals are selected to speed up recognition; then a BP neural network is trained on the feature signals to build an optimal classifier model; finally, the test music is classified. In simulation experiments on four genres (folk songs, guzheng, rock, and pop), the neural network classifier achieved an average accuracy of 88.6%, 5% higher than the traditional method, while also running faster. The results show that the neural network approach is an effective method for music genre classification.
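A minimal sketch of the pipeline described above, with synthetic vectors standing in for the cepstral features and scikit-learn's MLPClassifier (a backpropagation-trained network) standing in for the BP model:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 13-dimensional cepstral feature vectors of four genres.
X, y = make_classification(n_samples=400, n_features=13, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# A one-hidden-layer MLP trained with backpropagation plays the role of the BP network.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                  random_state=1))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.3f}")
```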

4.
To achieve automatic stage lighting control driven by music emotion recognition, music files must be annotated with emotions. To address the low efficiency and speed of manual emotion annotation, a stage lighting control method based on music emotion recognition was studied, and an algorithm for extracting, classifying, and recognizing music emotion features based on support vector machines and particle swarm optimization was proposed. First, taking 231 MIDI music files as examples, seven basic musical features (including average pitch, average intensity, and melodic direction) were extracted and standardized. The resulting music emotion feature vectors were fed into a multi-class support vector machine (SVM) classifier whose parameters were optimized with an improved particle swarm optimization (PSO) algorithm, establishing a standard music classification model. Finally, a lighting action model was designed, and new music files were matched to lighting actions through a discrete emotion model to generate the stage lighting control scheme. Experimental results demonstrate the effectiveness of the emotion recognition model: compared with a traditional multi-class SVM model, it significantly improves the emotion recognition rate and reduces testing time, providing a useful reference for stage lighting designers.
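The PSO-over-SVM-parameters step can be sketched as follows. This is a plain global-best PSO, not the paper's improved variant, and random synthetic data stands in for the 231 MIDI feature vectors:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for the seven standardized MIDI features.
X, y = make_classification(n_samples=200, n_features=7, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=2)

def fitness(pos):
    C, gamma = 10.0 ** pos          # search (C, gamma) in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Plain global-best PSO; the paper's improved PSO is not reproduced here.
n, dims, iters = 8, 2, 10
pos = rng.uniform(-3, 3, (n, dims))
vel = np.zeros((n, dims))
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()
print(f"best CV accuracy: {pbest_f.max():.3f}")
```

The inertia and acceleration coefficients (0.7, 1.5, 1.5) are conventional defaults, not values from the paper.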

5.
To address feature redundancy in medical disease data, the XGBoost feature selection method is used to measure feature importance, remove redundant features, and select the best classification features. To address limited recognition accuracy, the Stacking method is used to ensemble multiple heterogeneous classifiers such as XGBoost and LightGBM, and the better-performing CatBoost classifier is introduced among them to improve the ensemble's accuracy. To avoid overfitting, the class probabilities output by the base classifiers are used as the input of the meta-classifier. Experimental results show that the proposed XLC-Stacking method based on XGBoost feature selection substantially outperforms mainstream classification algorithms as well as XGBoost and Stacking alone, achieving an accuracy of 97.73% and an F1-score of 98.21%, making it well suited to disease diagnosis.
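A hedged sketch of the stacking idea: scikit-learn's GradientBoostingClassifier and RandomForestClassifier stand in for XGBoost/LightGBM/CatBoost (which are separate packages), while `stack_method="predict_proba"` reproduces the paper's choice of feeding base-level class probabilities to the meta-classifier:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# A medical dataset stand-in: the breast cancer diagnostic set.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Gradient-boosting importances stand in for XGBoost's feature ranking.
selector = SelectFromModel(GradientBoostingClassifier(random_state=0))

# Base-level class probabilities become the meta-classifier's inputs.
stack = StackingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
)
model = make_pipeline(selector, stack)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"test accuracy: {acc:.3f}")
```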

6.
The small-sample, high-dimensional nature of Deep Web classification limits the choice of classification algorithms, affects classifier design and accuracy, weakens the classifier's generalization ability, and leads to overfitting, so feature selection is needed to reduce dimensionality and avoid the "curse of dimensionality". At present there is no research on automatic feature selection algorithms for the Deep Web. By studying feature selection for Deep Web classification, a feature selection algorithm based on a class-separability criterion and Tabu search is proposed, which obtains a suboptimal feature subset in O(N2) time, reducing the difficulty of classifier design and improving classification accuracy. Using the feature sets before and after selection, the KNN algorithm is applied to Deep Web classification; the results show improved classification accuracy and reduced time complexity.
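The Tabu-search feature selection loop can be sketched as below. KNN cross-validation accuracy stands in for the paper's class-separability criterion, the neighborhood is single-feature flips with a short tabu tenure, and all data is synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=150, n_features=12, n_informative=5,
                           random_state=3)
n_feats = X.shape[1]

def score(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()

# Tabu search over bit masks: each move flips one feature in or out; a recently
# flipped feature is tabu for a few iterations so the search cannot cycle.
mask = rng.random(n_feats) < 0.5
best_mask, best_f = mask.copy(), score(mask)
tabu = np.zeros(n_feats, dtype=int)
for it in range(20):
    cand_f, cand_j = -1.0, -1
    for j in range(n_feats):
        if tabu[j] > it:
            continue
        trial = mask.copy()
        trial[j] = ~trial[j]
        f = score(trial)
        if f > cand_f:
            cand_f, cand_j = f, j
    if cand_j < 0:
        break
    mask[cand_j] = ~mask[cand_j]
    tabu[cand_j] = it + 4          # tabu tenure of 4 iterations (arbitrary)
    if cand_f > best_f:
        best_f, best_mask = cand_f, mask.copy()
print(f"selected {best_mask.sum()} of {n_feats} features, CV accuracy {best_f:.3f}")
```

Each iteration evaluates all non-tabu single flips, so the cost per iteration is linear in the number of features, consistent with the quadratic bound the paper cites for the whole search.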

7.
To go beyond feature extraction at the functional-network level of a single brain region, this paper proposes a sparse group lasso-Granger causality method. First, causal relationships between different brain regions are extracted at the effective brain network level as EEG features, with Granger causality feature values computed separately for subjects' α, β, and γ EEG bands. The sparse group lasso algorithm is then applied to screen the Granger causality features, yielding a highly relevant feature subset as the emotion classification features. Finally, an SVM classifier performs the emotion classification. In addition, to reduce computational time complexity, the ReliefF filter feature selection algorithm is used to select effective EEG channels. Experiments show that the method achieves a high average emotion classification accuracy on the two-dimensional valence-arousal emotion model, outperforming the compared EEG features; the extracted emotional EEG features can effectively identify subjects' different emotional states.
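A minimal sketch of pairwise Granger causality between two toy "channels", using the standard restricted-vs-full autoregression comparison; the sparse group lasso screening and SVM stages are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def granger_strength(x, y, lag=2):
    """Log residual-variance ratio for 'x Granger-causes y' (larger = stronger)."""
    T = len(y)
    target = y[lag:]
    past_y = np.column_stack([y[lag - k - 1:T - k - 1] for k in range(lag)])
    past_x = np.column_stack([x[lag - k - 1:T - k - 1] for k in range(lag)])
    ones = np.ones((T - lag, 1))
    restricted = np.hstack([ones, past_y])          # y's own past only
    full = np.hstack([ones, past_y, past_x])        # plus x's past
    def rss(A):
        beta = np.linalg.lstsq(A, target, rcond=None)[0]
        return np.sum((target - A @ beta) ** 2)
    return np.log(rss(restricted) / rss(full))

# Toy "EEG channels": ch_b is partly driven by a delayed copy of ch_a.
ch_a = rng.standard_normal(500)
ch_b = np.zeros(500)
for t in range(2, 500):
    ch_b[t] = 0.6 * ch_a[t - 1] + 0.2 * ch_b[t - 1] + 0.3 * rng.standard_normal()

print(f"a->b strength: {granger_strength(ch_a, ch_b):.3f}")
print(f"b->a strength: {granger_strength(ch_b, ch_a):.3f}")
```

In the paper's setting these pairwise strengths, computed per band and per region pair, would form the feature vector that the sparse group lasso then screens.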

8.
Zhong Jiang, Cheng Yifeng. Computer Engineering, 2012, 38(8): 144-146
To better classify lyrics by emotion, an improved CHI feature selection method based on inter-class differences is proposed. The method can be used on its own to extract emotional features from lyrics; the selected features are fed into a support vector machine classifier, and audio features are fused with the lyric features selected by the improved CHI method to classify songs by emotion. Experimental results show that the fused features achieve better classification than any single type of feature.
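As a rough illustration, the sketch below uses scikit-learn's standard chi-square scoring (not the paper's improved inter-class-difference variant) to select lyric features before an SVM; the six-line "corpus" is invented for the example:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny toy "lyrics" corpus; real work would use labeled song lyrics.
lyrics = ["tears fall rain lonely night", "dance sun happy bright smile",
          "broken heart cry alone dark", "joy laugh shine summer light",
          "cold grief sorrow fading hope", "party glow love sweet warm"]
labels = ["sad", "happy", "sad", "happy", "sad", "happy"]

# Standard chi-square scoring of word counts; the paper's variant additionally
# weights inter-class differences, which is not reproduced here.
model = make_pipeline(CountVectorizer(),
                      SelectKBest(chi2, k=10),
                      LinearSVC())
model.fit(lyrics, labels)
print(model.predict(["lonely tears dark night"]))
```

Fusing audio features, as the paper does, would mean concatenating an acoustic feature vector to the selected lyric features before the SVM.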

9.
A Segmented Music Emotion Recognition Method
To capture how musical emotion rises and falls, a piece is divided into notes, bars, and sections, and a segmented music emotion recognition method is proposed. The method extracts note features from a MIDI file, derives bar features from the note features, and partitions the piece into independent sections based on the similarity of adjacent bars. After extracting section features, a BP neural network recognizes the emotion of each section, and finally the emotion of the whole piece is obtained. Experimental results show that the proposed method achieves good recognition performance.

10.
Music is an important vehicle for expressing emotion, and music emotion recognition is applied widely across many fields. Current research faces problems such as the scarcity of music emotion datasets, the difficulty of quantifying emotion, and limited recognition accuracy; how to use artificial intelligence to recognize the emotional tendency of music effectively and with high quality has become both a research hotspot and a challenge. This paper surveys the current state of music emotion recognition research from three aspects: music emotion datasets, music emotion models, and music emotion classification methods...

11.
Predicting emotion tags for music is important for music emotion analysis. This paper proposes a song emotion tag prediction algorithm based on an emotion vector space model. First, emotional feature words are extracted from lyrics to build the emotion vector space model. Then an SVM classifier is trained on music with known emotion tags, and classification is used to find the set of songs whose main emotion class matches that of the song to be predicted. Finally, the k nearest songs are found by computing the emotional similarity of the lyrics, and their tags are recommended to the target song. Experiments show that the proposed emotion vector space model and the "emotion word-emotion tag" co-occurrence dimensionality reduction method improve song emotion classification accuracy over the traditional text feature vector model. Moreover, predicting tags on top of classification effectively prevents "main-emotion drift" and achieves better tag prediction accuracy than the nearest-neighbor method.
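The classify-then-kNN pattern can be sketched as follows, with TF-IDF standing in for the emotion vector space model and an invented four-song corpus; `predict_tags` is a hypothetical helper name:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC

# Toy lyric corpus with known main-emotion classes and free-form tags.
train_lyrics = ["tears rain lonely night cry", "cry sorrow dark broken heart",
                "happy dance sun bright smile", "joy laugh summer shine light"]
main_class = ["sad", "sad", "happy", "happy"]
tags = [{"melancholy"}, {"heartbreak"}, {"cheerful"}, {"uplifting"}]

vec = TfidfVectorizer()
X = vec.fit_transform(train_lyrics)
clf = LinearSVC().fit(X, main_class)

def predict_tags(lyric, k=1):
    q = vec.transform([lyric])
    cls = clf.predict(q)[0]                       # step 1: main emotion class
    idx = [i for i, c in enumerate(main_class) if c == cls]
    sims = cosine_similarity(q, X[idx]).ravel()   # step 2: kNN inside the class
    nearest = [idx[i] for i in np.argsort(sims)[::-1][:k]]
    return cls, set().union(*(tags[i] for i in nearest))

print(predict_tags("lonely tears cry tonight"))
```

Restricting the kNN search to songs of the predicted main class is what guards against the "main-emotion drift" the abstract mentions.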

12.
For automatic music genre classification, a hot topic in music information retrieval, the concept of multimodal music genre classification is proposed. For the feature selection step in traditional genre classification based on low-level acoustic features, a new feature selection algorithm is implemented: forward feature selection based on the interaction between features (IBFFS). In a novel use of the LDA (latent Dirichlet allocation) model to process music tags, the probability that a tag belongs to each genre is converted into the probability that the corresponding music belongs to each genre.

13.
Music is the language of emotions. In recent years, music emotion recognition has attracted widespread attention in academia and industry, since it can be applied in fields such as recommendation systems, automatic music composition, psychotherapy, and music visualization. With the rapid development of artificial intelligence in particular, deep learning-based music emotion recognition is gradually becoming mainstream. This paper gives a detailed survey of music emotion recognition. Starting from preliminary knowledge of the field, it first introduces some commonly used evaluation metrics. A three-part research framework is then put forward, and the knowledge and algorithms involved in each part are analyzed in detail, including commonly used datasets, emotion models, feature extraction, and emotion recognition algorithms. Finally, the challenges and development trends of music emotion recognition technology are discussed, and the paper is summarized.

14.
In this paper, we propose a new genetic programming approach to music emotion classification. Our approach is based on Thayer's arousal-valence plane, one of the representative models of human emotion, in which an emotion is determined by psychological arousal and valence. We map music pieces onto the arousal-valence plane and classify music emotion in that space. We extract 85 acoustic features from music signals, rank them by information gain, and choose the top k features in the feature selection process. To map music pieces from the feature space onto the arousal-valence space, we apply genetic programming, which searches for an optimal formula mapping the given music pieces to the arousal-valence space so that music emotions are effectively classified. k-NN and SVM, two classifiers widely used in classification, are then applied to classify music emotions in the arousal-valence space. To validate our method, we compare it with six existing methods on the same music dataset; the experiment confirms that the proposed method is superior to the others.

15.
To improve the recognition accuracy of speech emotion recognition systems, this paper builds on the traditional support vector machine (SVM) method and proposes a PCA-based multi-level SVM emotion classification algorithm. Easily distinguished emotions are separated first; for highly confusable emotions that the multi-level strategy cannot separate directly, principal component analysis (PCA) is applied for dimensionality reduction, and the emotion type of the input speech is then determined level by level. Compared with traditional SVM-based speech emotion recognition, the proposed method raises the average recognition rate over seven emotions by 5.05% while reducing feature dimensionality by 58.3%, demonstrating its correctness and effectiveness.
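A simplified two-level version of the idea: a coarse SVM first splits easily separated super-classes, then a per-group PCA + SVM resolves the confusable classes within each group. The digits dataset stands in for speech-emotion feature vectors, and the binary grouping is arbitrary:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Digits stand in for speech-emotion feature vectors.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Level 1: coarse SVM over "easy" super-classes (here simply y < 5 vs y >= 5).
coarse = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr < 5)

# Level 2: per-group PCA + SVM resolves the remaining confusable classes.
fine = {}
for g in (True, False):
    m = (y_tr < 5) == g
    fine[g] = make_pipeline(StandardScaler(), PCA(n_components=20),
                            SVC()).fit(X_tr[m], y_tr[m])

pred = np.array([fine[g].predict(x.reshape(1, -1))[0]
                 for g, x in zip(coarse.predict(X_te), X_te)])
print(f"test accuracy: {(pred == y_te).mean():.3f}")
```

In the paper the grouping comes from which emotions are easy to separate, and PCA is applied only at the confusable levels; here every second-level group gets PCA for simplicity.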

16.
Genre is an abstract feature, but it is still considered one of the important characteristics of music, and genre recognition is an essential component of many commercial music applications. Most existing music genre recognition algorithms are based on manual feature extraction, with the extracted features used to build a classifier model that identifies the genre. However, it has often been observed that a feature set giving excellent accuracy fails to explain the underlying characteristics of music genres, and that features performing satisfactorily on one dataset fail to perform similarly on others. Hence, each dataset generally requires manual selection of appropriate acoustic features to reach an adequate level of performance. In this paper, we propose a genre recognition algorithm that uses almost no handcrafted features. The convolutional recurrent neural network-based model proposed in this study is trained on mel spectrograms extracted from 3-s audio clips taken from the GTZAN dataset, and provides an accuracy of 85.36% on 10-class genre classification. The same model, trained and tested on 10 genres of the MagnaTagATune dataset (18,476 clips of 29-s duration), yields an accuracy of 86.06%. The experimental results suggest that the proposed architecture with mel spectrogram input is capable of providing consistent performance across different datasets.

17.
For emotion recognition based on physiological signals, a genetic algorithm with a simulated annealing mechanism, a max-min ant colony algorithm, and particle swarm optimization are used for feature selection, and a Fisher classifier is used to classify six emotions: joy, surprise, disgust, sadness, anger, and fear. A high recognition rate was obtained, feature combinations that perform well for building the emotion recognition model were identified, and a recognition system capable of predicting the six emotion classes was established.

18.
First, to address the limited applicability of general-purpose sentiment lexicons to specialized domains, a domain sentiment lexicon is built using pointwise mutual information, with a general sentiment lexicon as the seed lexicon and adjectives in the review corpus that do not appear in the general lexicon as candidate sentiment words. Second, for sentiment classification of online reviews, a new feature selection algorithm based on complex network theory is proposed, remedying the tendency of traditional feature selection algorithms to ignore semantic relations between features and miss sentiment resources in the reviews. By building a relation network of candidate feature words and drawing on node-importance theory for complex networks, considering both the local and global importance of nodes, the NTFS (complex network feature selection) algorithm is proposed, which combines degree centrality, betweenness centrality, and closeness centrality to measure node importance and select sentiment classification features. Finally, using online reviews of the iPhone as experimental data, NTFS was compared with the traditional GI and CHI feature selection methods using SVM, NNET, and NB classifiers; the experiments show that NTFS outperforms GI and CHI in classification performance.
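The centrality combination behind NTFS can be sketched with networkx on an invented word co-occurrence network; the score here is an unweighted sum of the three centralities, which is an assumption rather than the paper's exact weighting:

```python
import itertools

import networkx as nx

# Toy co-occurrence data: each review contributes edges between its feature words.
reviews = [["screen", "bright", "battery"], ["battery", "life", "short"],
           ["screen", "crisp", "color"], ["battery", "charge", "slow"],
           ["camera", "color", "sharp"]]

G = nx.Graph()
for words in reviews:
    G.add_edges_from(itertools.combinations(words, 2))

# NTFS-style score: combine degree (local), betweenness and closeness (global).
deg = nx.degree_centrality(G)
bet = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)
score = {w: deg[w] + bet[w] + clo[w] for w in G}
top = sorted(score, key=score.get, reverse=True)[:3]
print(top)
```

Words that bridge many review contexts (here "battery") score high on all three measures, which is exactly why combining local and global importance is attractive for feature selection.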

19.
Feature fusion plays an important role in speech emotion recognition: it improves classification accuracy by combining the most popular acoustic features for the task, such as energy, pitch, and mel-frequency cepstral coefficients. However, system performance is not optimal, because the high-dimensional, correlated feature set produced by fusion makes the system computationally complex. In this paper, a two-stage feature selection method is proposed. In the first stage, appropriate features are selected and fused together for speech emotion recognition. In the second stage, optimal feature-subset selection techniques, sequential forward selection (SFS) and sequential floating forward selection (SFFS), are used to eliminate the curse-of-dimensionality problem caused by the high-dimensional feature vector after fusion. Finally, the emotions are classified using several classifiers: linear discriminant analysis (LDA), regularized discriminant analysis (RDA), support vector machine (SVM), and k-nearest neighbor (KNN). The performance of the overall emotion recognition system is validated on the Berlin and Spanish databases in terms of classification rate. An optimal uncorrelated feature set is obtained using SFS and SFFS individually. The results reveal that SFFS is the better choice of subset selection method, because SFS suffers from the nesting problem: once a feature is retained in the set, it is difficult to discard. SFFS eliminates this problem by letting the set float up and down during selection according to the objective function rather than being fixed at any stage. Experimental results show that the two-stage feature selection method improves classifier efficiency by 15-20% compared with feature fusion alone.
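scikit-learn ships plain SFS (SequentialFeatureSelector) but not SFFS, whose add-then-conditionally-remove backtracking is what avoids the nesting problem; the sketch below shows the greedy forward stage only, with the wine dataset standing in for the fused acoustic features:

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Wine features stand in for the fused acoustic feature vector.
X, y = load_wine(return_X_y=True)

knn = KNeighborsClassifier(5)
# Greedy forward selection: at each step, add the feature whose inclusion
# maximizes cross-validated accuracy; added features are never removed.
sfs = SequentialFeatureSelector(knn, n_features_to_select=5, direction="forward")
model = make_pipeline(StandardScaler(), sfs, knn)
score = cross_val_score(model, X, y, cv=5).mean()
print(f"CV accuracy with 5 selected features: {score:.3f}")
```

Implementing SFFS would wrap this forward step in a loop that, after each addition, tests whether dropping any already-selected feature improves the objective.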
