Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Human emotion recognition from brain signals is an active research topic in affective computing. Music is considered a powerful tool for arousing emotions in human beings. This study recognized happy, sad, love, and anger emotions in response to audio music tracks from the electronic, rap, metal, rock, and hip-hop genres. Participants were asked to listen to a 1-minute audio track for each genre in a noise-free environment. The main objectives of this study were to determine the effect of different genres of music on human emotions and to identify the age group that is most responsive to music. Thirty men and women from three age groups (15–25 years, 26–35 years, and 36–50 years) underwent the experiment, which also included a self-reported emotional state after listening to each type of music. Features from three domains, i.e., time, frequency, and wavelet, were extracted from the recorded EEG signals and then used by the classifier to recognize human emotions. The results show that a multilayer perceptron (MLP) gives the best accuracy for recognizing human emotion in response to audio music tracks using hybrid features of brain signals. It was also observed that the rock and rap genres generated happy and sad emotions, respectively, in the subjects under study. The brain signals of the 26–35-year age group gave the best emotion recognition accuracy in accordance with the self-reported emotions.
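As a rough illustration of the pipeline this abstract describes, the sketch below extracts simple time-, frequency-, and wavelet-domain features from EEG channels and feeds them to an MLP. The window statistics, frequency bands, wavelet family, and layer sizes are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
import pywt
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def hybrid_features(eeg, fs=128):
    """eeg: (channels, samples) array -> 1-D hybrid feature vector."""
    feats = []
    for ch in eeg:
        # Time domain: basic statistical descriptors.
        feats += [ch.mean(), ch.std(), np.ptp(ch)]
        # Frequency domain: mean band power from Welch's PSD (theta..gamma).
        f, pxx = welch(ch, fs=fs, nperseg=fs * 2)
        for lo, hi in [(4, 8), (8, 13), (13, 30), (30, 45)]:
            feats.append(pxx[(f >= lo) & (f < hi)].mean())
        # Wavelet domain: energy per level of a db4 decomposition.
        feats += [np.sum(c ** 2) for c in pywt.wavedec(ch, "db4", level=4)]
    return np.asarray(feats)

# X: stacked per-trial feature vectors; y: self-reported labels (happy/sad/love/anger).
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
```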

2.
3.
Speech signals play an essential role in communication and provide an efficient way to exchange information between humans and machines. Speech Emotion Recognition (SER) is one of the critical sources for human evaluation and is applicable in many real-world applications such as healthcare, call centers, robotics, safety, and virtual reality. This work developed a novel TCN-based emotion recognition system that uses speech signals and a spatial-temporal convolution network to recognize the speaker's emotional state. The authors designed a Temporal Convolutional Network (TCN) core block to capture long-term dependencies in speech signals and then feed these temporal cues to a dense network that fuses the spatial features and recognizes global information for the final classification. The proposed network automatically extracts valid sequential cues from speech signals and performed better than state-of-the-art (SOTA) and traditional machine learning algorithms. The results show a high recognition rate compared with SOTA methods: final unweighted accuracies of 80.84% and 92.31% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Berlin Emotional Database (EMO-DB) datasets indicate the robustness and efficiency of the designed model.
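The core idea, a dilated causal convolution block whose pooled output feeds a dense head, can be sketched as below. This is not the authors' published code; the 40-dim input frames, channel counts, and four-block stack are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad only, for causality
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                       # x: (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))
        return self.relu(self.conv(y)) + x      # residual connection

class SpeechEmotionTCN(nn.Module):
    def __init__(self, n_features=40, n_classes=4):
        super().__init__()
        self.proj = nn.Conv1d(n_features, 64, 1)
        self.tcn = nn.Sequential(*[TCNBlock(64, dilation=2 ** i) for i in range(4)])
        self.head = nn.Linear(64, n_classes)    # dense layer fusing temporal cues

    def forward(self, x):                       # x: (batch, n_features, time)
        h = self.tcn(self.proj(x))
        return self.head(h.mean(dim=-1))        # global average over time
```

Doubling the dilation per block grows the receptive field exponentially, which is what lets a TCN capture the long-term dependencies the abstract refers to.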

4.
To improve the ability of deep convolutional neural networks to extract genre features from music spectrograms, a music genre classification model based on spatial-domain spectral feature attention, DCNN-SSA, is proposed. DCNN-SSA effectively annotates the genre features of different Mel spectrograms in the spatial domain and modifies the network structure, improving feature extraction while keeping the model effective, and thereby raising genre classification accuracy. First, the raw audio signal is Mel-filtered to simulate the filtering of the human ear, effectively filtering the intensity and rhythm variations of the music; the resulting Mel spectrograms are sliced and fed into the network. Then, the model's genre feature extraction is strengthened by deepening the network, changing the convolution structure, and adding a spatial attention mechanism. Finally, multiple rounds of training and validation on the dataset extract and learn genre features, yielding a model that classifies music genres effectively. Experimental results on the GTZAN dataset show that, compared with other deep learning models, the spatial-attention genre classification algorithm improves both classification accuracy and model convergence, raising accuracy by 5.36 to 10.44 percentage points.
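The spatial-attention step named in this abstract can be illustrated with a CBAM-style module that re-weights each time-frequency location of the feature maps. DCNN-SSA's exact topology is not given here, so this is a generic sketch; the 7x7 attention kernel is an assumption. The Mel spectrograms themselves could come from librosa.feature.melspectrogram.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weights each time-frequency location of a Mel-spectrogram feature map."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (batch, C, mel, time)
        avg = x.mean(dim=1, keepdim=True)      # channel-average map
        mx, _ = x.max(dim=1, keepdim=True)     # channel-max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                        # emphasize genre-relevant regions
```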

5.
To address the weak ability of machine learning models to recognize music genre features, a deep convolutional neural network music genre recognition (DCNN-MGR) model is proposed. The model first extracts audio information via the fast Fourier transform, generating spectrograms that can be fed into a DCNN and slicing them into spectrogram segments. AlexNet is then enhanced by incorporating the Leaky ReLU and hyperbolic tangent (Tanh) activation functions and a Softplus classifier. The generated spectrogram slices are fed into the enhanced AlexNet for multiple rounds of training and validation to extract and learn music features, producing a network model that can distinguish them effectively. Finally, the trained model is used for music genre recognition tests. The experimental results show that the enhanced AlexNet clearly outperforms the original AlexNet and other common DCNNs in music feature recognition accuracy and network convergence, and that the DCNN-MGR model improves genre recognition accuracy by 4% to 20% over other machine learning models.
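The activation "fusion" this abstract describes could be approximated on torchvision's stock AlexNet as below. Which ReLU layers are replaced by Leaky ReLU and Tanh, and where Softplus sits, are guesses for illustration, since the paper's exact mapping is not given here.

```python
import torch.nn as nn
from torchvision.models import alexnet

model = alexnet(num_classes=10)              # 10 genre classes (assumed)
model.features[1] = nn.LeakyReLU(0.01)       # first ReLU -> Leaky ReLU (assumed)
model.features[4] = nn.Tanh()                # second ReLU -> Tanh (assumed)
# Append Softplus as a smooth output stage before the arg-max decision.
model.classifier = nn.Sequential(*model.classifier, nn.Softplus())
```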

6.
The genre is an abstract feature, but it is still considered one of the important characteristics of music. Genre recognition forms an essential component of a large number of commercial music applications. Most existing music genre recognition algorithms are based on manual feature extraction techniques. These extracted features are used to develop a classifier model to identify the genre. However, in many cases it has been observed that a set of features giving excellent accuracy fails to explain the underlying characteristics of music genres. It has also been observed that some features provide a satisfactory level of performance on a particular dataset but fail to do so on other datasets. Hence, each dataset mostly requires manual selection of appropriate acoustic features to achieve an adequate level of performance. In this paper, we propose a genre recognition algorithm that uses almost no handcrafted features. The convolutional recurrent neural network based model proposed in this study is trained on mel spectrograms extracted from 3-s audio clips taken from the GTZAN dataset. The proposed model achieves an accuracy of 85.36% on 10-class genre classification. The same model has been trained and tested on 10 genres of the MagnaTagATune dataset, which contains 18,476 clips of 29-s duration, yielding an accuracy of 86.06%. The experimental results suggest that the proposed architecture, with the mel spectrogram as input feature, is capable of delivering consistent performance across different datasets.
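A minimal sketch of such a convolutional recurrent architecture follows: conv layers summarize the mel spectrogram, a GRU models the resulting time series, and a linear layer emits genre logits. Filter counts and the single GRU layer are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=128, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(64 * (n_mels // 4), 128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (batch, 1, n_mels, time)
        h = self.conv(x)                        # (batch, 64, n_mels/4, time/4)
        h = h.permute(0, 3, 1, 2).flatten(2)    # (batch, time/4, 64 * n_mels/4)
        out, _ = self.gru(h)
        return self.fc(out[:, -1])              # last time step -> genre logits
```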

7.

To improve the efficiency and accuracy of collecting theater music data and recognizing genres, and to address the fact that a single feature in traditional algorithms cannot classify music genres efficiently, this study applies Internet of Things (IoT) data collection together with a Deep Belief Network (DBN) to music data collection and genre recognition. The study first introduces the IoT-based scheme for collecting music data and the theoretical basis for recognizing and classifying music genres with a DBN. It then focuses on constructing and improving the DBN-based genre recognition algorithm: the algorithm is optimized by adding Dropout and momentum to the network, and the best-performing trained model is implemented. Finally, the effectiveness of the algorithm is confirmed experimentally. The results show that the optimized algorithm identifies the genres in the music library with an efficiency of 75.8%, far better than traditional classic algorithms. It is concluded that music data collection and genre recognition based on IoT and DBN have strong advantages, which will greatly reduce the workload of manually collecting and identifying classification features and improve efficiency. The optimized algorithm model can also be extended to other fields.
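Of the two optimizations the abstract names, Dropout and momentum, a compressed sketch of the fine-tuning stage follows. The greedy layer-wise RBM pretraining that defines a DBN is omitted here, and the input width, layer sizes, and rates are all assumptions.

```python
import torch
import torch.nn as nn

# Sigmoid hidden layers stand in for the pretrained DBN stack (assumed sizes).
dbn_head = nn.Sequential(
    nn.Linear(513, 256), nn.Sigmoid(), nn.Dropout(p=0.5),  # Dropout regularization
    nn.Linear(256, 128), nn.Sigmoid(), nn.Dropout(p=0.5),
    nn.Linear(128, 10),                                    # 10 genre classes (assumed)
)
# Momentum enters through the weight-update rule of the optimizer.
optimizer = torch.optim.SGD(dbn_head.parameters(), lr=0.01, momentum=0.9)
```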


8.
Efficient and accurate instrument recognition can effectively advance research on source separation, automatic music transcription, and music genre classification, and has wide applications in playlist generation, acoustic environment classification, intelligent instrument teaching, interactive multimedia, and many other areas. In recent years, as instrument recognition research has progressed, the performance of recognition systems has improved substantially, yet many problems remain: some instruments are still hard to recognize, instrument audio features are difficult to extract, and recognition accuracy for polyphonic instruments is low. How to use artificial intelligence to recognize instruments efficiently and accurately has thus become a hot and difficult research topic. Given the current state of research, this paper surveys three aspects of instrument recognition: commonly used audio features, recognition models and methods, and commonly used datasets. It also summarizes the limitations of current research and future development trends, providing a reference for instrument recognition research.

9.
孙辉, 许洁萍, 刘彬彬. 《计算机应用》, 2015, 35(6): 1753-1756
To address the problem of selecting the optimal kernel function for different feature vectors, the multiple kernel learning support vector machine (MKL-SVM) is applied to automatic music genre classification, and a method is proposed that combines the optimal kernel functions, with learned weights, into a composite kernel for genre classification. Multiple kernel learning can adopt a different optimal kernel for each acoustic feature and learns each kernel's weight in the classification, thereby making the weight of each acoustic feature in genre classification explicit and providing a clear, interpretable basis for analyzing and selecting feature vectors in music genre classification. The proposed MKL-SVM classification method was validated on the ISMIR 2011 contest dataset and compared with the traditional single-kernel SVM approach. The experimental results show that the MKL-SVM method improves automatic genre classification accuracy by 6.58% over the traditional single-kernel SVM. Moreover, compared with traditional feature selection results, the method explains more clearly how much each selected feature vector influences genre classification, and classifying with a combination of the most influential features also yields a marked improvement.
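The composite-kernel idea can be sketched as below: one RBF kernel per acoustic feature set, summed with per-kernel weights and passed to an SVM with a precomputed kernel. In real MKL the weights are learned jointly with the SVM; the fixed example weights here are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feature_sets_a, feature_sets_b, weights):
    """Weighted sum of per-feature RBF kernels (one feature set per kernel)."""
    return sum(w * rbf_kernel(a, b)
               for w, a, b in zip(weights, feature_sets_a, feature_sets_b))

# Xs_train = [timbre_train, rhythm_train]   # hypothetical feature sets
# K = combined_kernel(Xs_train, Xs_train, weights=[0.7, 0.3])  # learned in true MKL
# clf = SVC(kernel="precomputed").fit(K, y_train)
```

The learned weights are exactly what the abstract points to as the interpretable output: they quantify each acoustic feature's contribution to the classification.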

10.
11.
12.
13.
Since the strength, tempo, and duration of musical beats are important semantic features reflecting different genre styles, and beats mostly lie in the low-frequency band produced by percussion instruments, a 6-level wavelet decomposition of the music signal is used to extract low-frequency beat features. For genres whose beat features differ only slightly, MFCC acoustic features describing the frequency-domain energy envelope are combined with the beat features, and 8th-order MFCCs, chosen from an analysis of music genre mechanisms, replace the commonly used 12th-order MFCCs. Simulation experiments on eight music genres show that the method combining semantic and acoustic features achieves an overall classification accuracy of 68.37%, while the increase in feature dimensionality has little effect on classification time.
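The feature recipe can be sketched in a few lines: a 6-level wavelet decomposition for per-band beat energy plus 8th-order MFCCs. The wavelet family (db4) and the energy/mean summary statistics are assumptions; the paper specifies only the decomposition depth and MFCC order.

```python
import numpy as np
import pywt
import librosa

y, sr = librosa.load("song.wav", sr=22050)               # hypothetical input file
coeffs = pywt.wavedec(y, "db4", level=6)                 # 6-level decomposition
beat_feats = [np.sum(c ** 2) / len(c) for c in coeffs]   # per-band beat energy
mfcc8 = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=8)       # 8th-order MFCC
features = np.concatenate([beat_feats, mfcc8.mean(axis=1)])
```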

14.
In online education, a student's real-time actions accurately reflect their current learning state; recognizing learning actions accurately without distracting the learner or compromising personal privacy is key to monitoring the quality of online education. This paper proposes LD-identify, a passive-RFID-based system for recognizing online learning actions. LD-identify recognizes student actions using only radio-frequency signals, so the system protects personal privacy well and avoids problems such as expensive equipment. By extracting effective features from the phase and signal strength and using deep learning algorithms, LD-identify achieves good recognition accuracy. Experiments show that with only two RF tags attached to the back of a cap, LD-identify can reliably recognize three actions: raising/lowering the head, shaking the head left and right, and leaning forward/backward. To further verify system performance, the recognition accuracy of six volunteers in different scenarios was studied; the results show that LD-identify recognizes all three actions of all users well across scenarios, and that building the classification model with a convolutional neural network achieves a recognition accuracy above 95.5%.
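A minimal sketch of such a pipeline follows: phase and RSSI streams from the two tags, windowed into fixed-length segments and classified by a small 1-D CNN. The four-stream layout, window handling, and network sizes are assumptions, not LD-identify's actual design.

```python
import torch
import torch.nn as nn

class ActionCNN(nn.Module):
    def __init__(self, n_actions=3):               # nod, shake, lean
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 16, 5, padding=2),         # 4 streams: phase + RSSI x 2 tags
            nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2),
            nn.ReLU(), nn.AdaptiveAvgPool1d(1),     # pool over the time window
            nn.Flatten(), nn.Linear(32, n_actions),
        )

    def forward(self, x):                           # x: (batch, 4, window_len)
        return self.net(x)
```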

15.
16.
Automatic emotion recognition from speech signals is an important research area that adds value to machine intelligence. Pitch, duration, energy, and Mel-frequency cepstral coefficients (MFCC) are the most widely used features in speech emotion recognition. A single classifier or a combination of classifiers is used to recognize emotions from the input features. The present work investigates the performance of autoregressive (AR) parameters, which include the gain and reflection coefficients in addition to the traditional linear prediction coefficients (LPC), in recognizing emotions from speech signals. The classification performance of the AR parameters is studied using discriminant, k-nearest neighbor (KNN), Gaussian mixture model (GMM), back-propagation artificial neural network (ANN), and support vector machine (SVM) classifiers, and we find that the reflection coefficients recognize emotions better than the LPC. To improve emotion recognition accuracy, we propose a class-specific multiple-classifier scheme built from multiple parallel classifiers, each optimized for one class. The classifier for each emotional class is built from a feature identified from a pool of features and a classifier identified from a pool of classifiers that together optimize recognition of that particular emotion. The outputs of the classifiers are combined by a decision-level fusion technique. The experimental results show that the proposed scheme improves emotion recognition accuracy, and a further improvement is obtained when MFCC features are included in the feature pool.
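The AR parameters in question all fall out of one Levinson-Durbin recursion over a speech frame's autocorrelation: the LPC, the reflection coefficients, and the prediction error (related to the gain). The sketch below is a textbook implementation, not the paper's code; frame length and model order are assumptions.

```python
import numpy as np

def levinson_durbin(frame, order=10):
    """Return LPC, reflection coefficients, and prediction error for one frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # autocorrelation
    a = np.zeros(order + 1)
    a[0] = 1.0                       # error-filter convention A(z) = 1 + a1 z^-1 + ...
    refl = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err               # i-th reflection coefficient
        refl[i - 1] = k
        a_prev = a[:i + 1].copy()
        for j in range(1, i + 1):    # update LPC with the new reflection coefficient
            a[j] = a_prev[j] + k * a_prev[i - j]
        err *= (1.0 - k * k)         # residual energy shrinks each order
    return a[1:], refl, err

# lpc, k, gain = levinson_durbin(speech_frame, order=10)
```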

17.
To address the weak feature-mining ability of spectrograms for music and the complexity and long training times of deep learning classification models, a music genre classification model based on spectrogram augmentation and convolutional broad learning (CNNBLS) is designed. The model first augments Mel spectrograms using the SpecAugment technique of randomly masking some frequency channels, then feeds the sliced Mel spectrograms into CNNBLS, whose convolutional layers incorporate the exponential linear unit (ELU) activation function to improve classification accuracy. Compared with other machine learning frameworks, CNNBLS achieves high classification accuracy with little training time, and it can also learn quickly from incremental data. Experimental results show that the non-incremental CNNBLS model achieves 90.06% classification accuracy when trained on 400 music tracks, and the Incremental-CNNBLS model reaches 91.53% after 400 more training tracks are added.
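The frequency-masking step from SpecAugment referenced here is simple to sketch: zero out one random band of Mel channels per training spectrogram. The maximum mask width below is an assumption.

```python
import numpy as np

def freq_mask(mel_spec, max_width=12):
    """mel_spec: (n_mels, time). Returns a copy with one frequency band masked."""
    out = mel_spec.copy()
    width = np.random.randint(1, max_width + 1)
    start = np.random.randint(0, out.shape[0] - width + 1)
    out[start:start + width, :] = 0.0    # silence the chosen Mel channels
    return out
```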

18.
Liu Caifeng, Feng Lin, Liu Guochao, Wang Huibing, Liu Shenglan. Multimedia Tools and Applications, 2021, 80(5): 7313-7331

Music genre classification based on visual representations has been explored successfully in recent years. Recently, there has been increasing interest in applying convolutional neural networks (CNNs) to the task. However, most existing methods employ mature CNN structures proposed for image recognition without any modification, which yields learned features that are not adequate for music genre classification. Facing this challenge, we fully exploit the low-level information in audio spectrograms and develop a novel CNN architecture in this paper. The proposed architecture takes multi-scale time-frequency information into consideration, transferring more suitable semantic features to the decision-making layer for discriminating the genre of an unknown music clip. The experiments are evaluated on the benchmark GTZAN, Ballroom, and Extended Ballroom datasets. The experimental results show that the proposed method achieves classification accuracies of 93.9%, 96.7%, and 97.2% respectively, which, to the best of our knowledge, are the best results on these public datasets so far. Notably, the trained model is tiny, only 0.18M, so it can be deployed on mobile phones and other devices with limited computational resources. Code and the model will be available at https://github.com/CaifengLiu/music-genre-classification.
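One generic way to realize multi-scale time-frequency processing is parallel convolution branches with different kernel sizes, concatenated along the channel axis. This sketch illustrates the idea only; the branch widths and kernel sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch=1, branch_ch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2)
            for k in (3, 5, 7)                  # three receptive-field scales
        ])

    def forward(self, x):                       # x: (batch, in_ch, freq, time)
        # Each branch sees a different time-frequency neighborhood size.
        return torch.cat([b(x) for b in self.branches], dim=1)
```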


19.
We present an algorithm that predicts musical genre and artist from an audio waveform. Our method uses the ensemble learner ADABOOST to select from a set of audio features that have been extracted from segmented audio and then aggregated. Our classifier proved to be the most effective method for genre classification at the MIREX 2005 international contests in music information extraction, and the second-best method for recognizing artists. This paper describes our method in detail, from feature extraction to song classification, and presents an evaluation on three genre databases and two artist-recognition databases. Furthermore, we present evidence, collected from a variety of popular features and classifiers, that classifying features aggregated over segments of audio outperforms classifying either entire songs or individual short-timescale features.
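The segment-aggregation strategy can be sketched as below: per-segment feature vectors are summarized into one song-level vector (here by mean and standard deviation, an assumed choice) and then boosted. This uses scikit-learn's AdaBoost rather than the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def aggregate(segment_feats):
    """segment_feats: (n_segments, n_feats) -> one song-level vector."""
    return np.concatenate([segment_feats.mean(axis=0), segment_feats.std(axis=0)])

# per_song_segment_features: hypothetical list of (n_segments, n_feats) arrays.
# X = np.stack([aggregate(f) for f in per_song_segment_features])
# clf = AdaBoostClassifier(n_estimators=200).fit(X, genres)
```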

20.
Toward intelligent music information retrieval
Efficient and intelligent music information retrieval is a very important topic of the 21st century. With the ultimate goal of building personal music information retrieval systems, this paper studies the problem of intelligent music information retrieval. Huron points out that since the preeminent functions of music are social and psychological, the most useful characterization would be based on four types of information: genre, emotion, style, and similarity. This paper introduces Daubechies Wavelet Coefficient Histograms (DWCH) for music feature extraction in music information retrieval. The histograms are computed from the coefficients of the db8 Daubechies wavelet filter applied to 3 s of music. A comparative study of sound features and classification algorithms on a dataset compiled by Tzanetakis shows that combining DWCH with timbral features (MFCC and FFT), using multiclass extensions of the support vector machine, achieves approximately 80% accuracy, a significant improvement over the previously known result on this dataset; on another dataset the combination achieves 75% accuracy. The paper also studies the detection of emotion in music, using ratings from two subjects on three bipolar adjective pairs; an accuracy of around 70% was achieved in predicting the emotional labeling of these pairs. The paper further studies the problem of identifying groups of artists from their lyrics and sound using a semi-supervised classification algorithm, attempting to identify artist groups based on the Similar Artist lists at All Music Guide; the semi-supervised learning algorithm yielded nontrivial accuracy increases to more than 70%. Finally, the paper conducts a proof-of-concept experiment on similarity search using the feature set.
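The DWCH feature can be sketched as a db8 wavelet decomposition of a 3-s excerpt with per-subband histogram moments plus subband energy. The bin count, decomposition depth, and exact moment set below are assumptions; the paper defines the feature more precisely.

```python
import numpy as np
import pywt

def dwch(signal, wavelet="db8", level=7, bins=32):
    """Daubechies Wavelet Coefficient Histogram features for one 3-s excerpt."""
    feats = []
    for c in pywt.wavedec(signal, wavelet, level=level):
        hist, _ = np.histogram(c, bins=bins, density=True)
        mu = hist.mean()
        # First moments of the subband histogram (assumed summary statistics).
        feats.extend([mu, hist.std(), ((hist - mu) ** 3).mean()])
        feats.append(np.mean(c ** 2))          # subband energy
    return np.asarray(feats)
```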
