Similar Documents
20 similar documents found (search time: 375 ms)
1.
Sun Hui, Xu Jieping, Liu Binbin. Journal of Computer Applications, 2015, 35(6): 1753-1756
To address the problem of selecting the optimal kernel function for different feature vectors, multiple-kernel learning support vector machines (MKL-SVM) are applied to automatic music genre classification, and a method is proposed that combines the optimal kernel functions into a weighted composite kernel for genre classification. Multiple-kernel learning can adopt a different optimal kernel for each acoustic feature and learns the weight of each kernel in the classification, thereby making explicit the weight of each acoustic feature in genre classification and providing a clear, well-defined basis for analyzing and selecting feature vectors. The proposed MKL-SVM classification method was validated on the ISMIR 2011 contest data set and compared with the traditional single-kernel SVM approach. The experimental results show that the MKL-SVM method improves genre classification accuracy by 6.58% over the traditional single-kernel SVM. Moreover, compared with traditional feature selection results, the method explains more clearly how strongly each selected feature vector influences genre classification; classifying with the combination of the most influential features also yields a marked improvement.
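The weighted kernel combination at the heart of MKL can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's implementation: the two feature views, their dimensions, and the fixed weights stand in for what MKL-SVM would learn from data.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gaussian RBF Gram matrix for one feature view."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def combined_kernel(views, weights):
    """Convex combination of per-view kernels, as in MKL:
    K = sum_m beta_m * K_m with beta_m >= 0 and sum_m beta_m = 1."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return sum(wm * rbf_gram(X) for wm, X in zip(w, views))

rng = np.random.default_rng(0)
mfcc_view = rng.normal(size=(8, 13))    # stand-in for MFCC-based features
chroma_view = rng.normal(size=(8, 12))  # stand-in for a second acoustic view
K = combined_kernel([mfcc_view, chroma_view], [0.7, 0.3])
```

A convex combination of valid kernels is itself a valid kernel, so K can be handed to any kernel SVM solver; the learned weights then double as an importance score for each acoustic feature set, which is the interpretability benefit the abstract highlights.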

2.
Music genre classification comprises two stages: feature extraction and classification. Building on wavelet transform theory, this paper extracts music feature parameters using continuous wavelet analysis. The support vector machine is a classification method designed specifically for finite-sample settings. Built on the VC-dimension theory and the structural risk minimization principle of statistical learning theory, it seeks the best trade-off between model complexity and learning capacity given limited sample information, so as to obtain the best generalization ability. Using an exponential radial basis function (ERBF) kernel, the classification accuracy reaches 85%, improving on the traditional Gaussian mixture model and k-nearest-neighbor classifiers by 21% and 23%, respectively. The experimental results show that combining wavelets with support vector machines is a highly effective approach to music genre classification.
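The two ingredients the abstract names, continuous-wavelet features and an exponential RBF kernel, can each be sketched in numpy. The choice of the Ricker wavelet and all parameter values here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def erbf_kernel(x, y, gamma=0.5):
    """Exponential RBF kernel: k(x, y) = exp(-gamma * ||x - y||).
    Uses the plain L2 norm, not its square as the Gaussian RBF does."""
    return float(np.exp(-gamma * np.linalg.norm(np.asarray(x) - np.asarray(y))))

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet, one common CWT analysis wavelet."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-(t ** 2) / (2.0 * a ** 2))

def cwt_row(signal, scale):
    """One row of a continuous wavelet transform: the signal correlated
    with the wavelet at a single scale."""
    w = ricker(min(10 * scale, len(signal)), scale)
    return np.convolve(signal, w, mode="same")
```

Stacking `cwt_row` over a range of scales gives a scalogram whose statistics can serve as the feature vector fed to the SVM.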

3.
4.
5.
A central problem in music information retrieval is audio-based music classification. Current music classification systems follow a frame-based analysis model. A whole song is split into frames, where a feature vector is extracted from each local frame. Each song can then be represented by a set of feature vectors. How to utilize the feature set for global song-level classification is an important problem in music classification. Previous studies have used summary features and probability models which are either overly restrictive in modeling power or numerically too difficult to solve. In this paper, we investigate the bag-of-features approach for music classification which can effectively aggregate the local features for song-level feature representation. Moreover, we have extended the standard bag-of-features approach by proposing a multiple codebook model to exploit the randomness in the generation of codebooks. Experimental results for genre classification and artist identification on benchmark data sets show that the proposed classification system is highly competitive against the standard methods.
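A bag-of-features song representation, with the multiple-codebook extension, reduces to vector quantization plus histogramming. Below is a minimal numpy sketch with random data; in practice the codebooks would come from, for example, k-means runs with different seeds — an assumption here, not a detail from the paper.

```python
import numpy as np

def bag_of_features(frames, codebook):
    """Quantize each frame-level feature vector to its nearest codeword
    and return the normalized histogram as the song-level representation."""
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)                      # nearest codeword per frame
    hist = np.bincount(idx, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def multi_codebook_bof(frames, codebooks):
    """Multiple-codebook extension: concatenate histograms from several
    independently generated codebooks to average out codebook randomness."""
    return np.concatenate([bag_of_features(frames, cb) for cb in codebooks])
```

The concatenated histogram is then fed to any standard classifier for genre or artist labels.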

6.
7.
To address automatic music genre classification, a hot topic in music information retrieval, the concept of multimodal music genre classification is proposed. For the feature selection step of traditional genre classification based on low-level acoustic features, a new feature selection algorithm is implemented: forward feature selection based on inter-feature interaction (IBFFS). An LDA (latent Dirichlet allocation) model is applied to music tags for the first time, converting the probability that a tag belongs to each genre into the probability that the corresponding piece of music belongs to each genre.

8.
To improve how well deep convolutional neural networks extract genre features from music spectrograms, this paper proposes DCNN-SSA, a music genre classification model based on spectral spatial-domain feature attention. DCNN-SSA effectively marks the genre features of different mel spectrograms in the spatial domain and modifies the network structure, improving feature extraction while keeping the model effective, and thereby raising genre classification accuracy. First, the raw audio signal is mel-filtered, mimicking the filtering of the human ear, to effectively capture changes in intensity and rhythm; the resulting mel spectrograms are segmented and fed into the network. Then, genre feature extraction is strengthened by deepening the network, modifying the convolutional structure, and adding a spatial attention mechanism. Finally, multiple rounds of training and validation on the data set extract and learn genre features, yielding a model that classifies music genres effectively. Experimental results on the GTZAN data set show that, compared with other deep learning models, the spatial-attention-based algorithm improves both classification accuracy and model convergence, raising accuracy by 5.36 to 10.44 percentage points.
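The mel filtering step described above can be sketched as a triangular filterbank applied to an FFT power spectrum. The formulas are the standard HTK-style mel scale; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel filterbank mapping an FFT power spectrum
    (n_fft // 2 + 1 bins) onto n_filters mel bands."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                    # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                    # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb
```

Applying `fb @ power_spectrum` and taking the log gives one column of the mel spectrogram that a model like DCNN-SSA consumes.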

9.
We argue that an evaluation of system behavior at the level of the music is required to usefully address the fundamental problems of music genre recognition (MGR), and indeed other tasks of music information retrieval, such as autotagging. A recent review of works in MGR since 1995 shows that most (82 %) measure the capacity of a system to recognize genre by its classification accuracy. After reviewing evaluation in MGR, we show that neither classification accuracy, nor recall and precision, nor confusion tables, necessarily reflect the capacity of a system to recognize genre in musical signals. Hence, such figures of merit cannot be used to reliably rank, promote or discount the genre recognition performance of MGR systems if genre recognition (rather than identification by irrelevant confounding factors) is the objective. This motivates the development of a richer experimental toolbox for evaluating any system designed to intelligently extract information from music signals.

10.
We present an algorithm that predicts musical genre and artist from an audio waveform. Our method uses the ensemble learner ADABOOST to select from a set of audio features that have been extracted from segmented audio and then aggregated. Our classifier proved to be the most effective method for genre classification at the recent MIREX 2005 international contests in music information extraction, and the second-best method for recognizing artists. This paper describes our method in detail, from feature extraction to song classification, and presents an evaluation of our method on three genre databases and two artist-recognition databases. Furthermore, we present evidence collected from a variety of popular features and classifiers that the technique of classifying features aggregated over segments of audio is better than classifying either entire songs or individual short-timescale features. Editor: Gerhard Widmer
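The segment-level strategy the authors found superior to whole-song or single-frame classification can be sketched as follows; the mean/std summary and the majority vote are illustrative stand-ins for the paper's aggregators and its AdaBoost classifier.

```python
import numpy as np

def segment_aggregate(frames, seg_len):
    """Aggregate frame-level features over fixed-length segments by
    concatenating the per-segment mean and standard deviation."""
    n = len(frames) // seg_len
    segs = frames[: n * seg_len].reshape(n, seg_len, -1)
    return np.concatenate([segs.mean(axis=1), segs.std(axis=1)], axis=1)

def song_label(segment_votes):
    """Song-level decision: majority vote over per-segment predictions."""
    vals, counts = np.unique(segment_votes, return_counts=True)
    return vals[counts.argmax()]
```

Each aggregated segment is classified independently, and the song inherits the most common segment label, which is what makes the representation robust to locally atypical passages.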

11.
Li  Juan  Luo  Jing  Ding  Jianhang  Zhao  Xi  Yang  Xinyu 《Multimedia Tools and Applications》2019,78(9):11563-11584

Music regional classification, an important branch of automatic music classification, aims at classifying folk songs according to their regional style. Chinese folk songs have developed various regional musical styles over the course of their evolution. Regional classification of Chinese folk songs can advance music recommendation systems, which recommend a suitable style of music to users, and improve the efficiency of music retrieval systems. However, the accuracy of existing music regional classification systems is not high enough, because most methods do not consider the temporal characteristics of music in either feature extraction or classification. In this paper, we propose an approach based on conditional random fields (CRF) that can take full advantage of the temporal characteristics of musical audio features for music regional classification. Considering the continuity, high dimensionality and large size of the audio feature data, we employ two ways to calculate the label sequence of musical audio features in the CRF: Gaussian mixture models (GMM) and restricted Boltzmann machines (RBM). The experimental results demonstrate that the proposed CRF-RBM method outperforms other existing music regional classifiers, with a best accuracy of 84.71% on a Chinese folk song dataset. When applied to a Greek folk song dataset, the CRF-RBM model also performed best.


12.
Note recognition is an important research topic in music signal analysis and processing, providing a technical basis for automatic music transcription, instrument tuning, music database retrieval, and electronic music synthesis. Traditional note recognition estimates a note's fundamental frequency and matches it one-to-one against standard frequencies. However, such one-to-one matching is difficult, the error grows as the fundamental frequency increases, and the range of recognizable fundamentals is limited. This paper therefore treats note recognition as a classification problem. First, an audio library of the notes to be recognized is built, and, given the importance of low-frequency information in music signals, Mel-frequency cepstral coefficients (MFCC) and the constant-Q transform (CQT) are chosen as the features extracted from note signals. The extracted MFCC and CQT features are then used both individually and fused together as input. Exploiting the strength of softmax regression in multi-class problems and the good nonlinear mapping and self-learning abilities of BP neural networks, a multi-class recognizer is built from a BP neural network with a softmax regression output. In a MATLAB R2016a simulation environment, the feature parameters are fed into the multi-class recognizer for learning and training, and the network parameters are tuned to find the optimum. Comparative experiments vary the number of training samples. The results show that with the fused features (MFCC+CQT) as input, 25 note classes from the great octave to the three-lined octave can be recognized, with an average recognition rate of 95.6%; CQT contributes more to recognition than MFCC. The experimental data demonstrate that classifying notes by their MFCC and CQT features achieves very good recognition results and is not limited by the range of note fundamental frequencies.
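Three small pieces of this pipeline can be made concrete: the constant-Q bin spacing that motivates CQT for low-frequency content, the early fusion of MFCC and CQT vectors, and the softmax output layer. The dimensions used below (13 MFCCs, 84 CQT bins) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cqt_center_freqs(f_min, bins_per_octave, n_bins):
    """Constant-Q bin centers: f_k = f_min * 2**(k / bins_per_octave),
    i.e. geometrically spaced, so resolution is dense at low frequencies."""
    return f_min * 2.0 ** (np.arange(n_bins) / bins_per_octave)

def fuse_features(mfcc, cqt):
    """Early fusion: concatenate the per-note MFCC and CQT vectors."""
    return np.concatenate([mfcc, cqt], axis=-1)

def softmax(z):
    """Numerically stable softmax, the multi-class output layer."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

The fused vector would be passed through the BP network's hidden layers, with `softmax` converting the final activations into per-note-class probabilities.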

13.
14.
Temporal feature integration is the process of combining all the feature vectors in a time window into a single feature vector in order to capture the relevant temporal information in the window. The mean and variance along the temporal dimension are often used for temporal feature integration, but they capture neither the temporal dynamics nor dependencies among the individual feature dimensions. Here, a multivariate autoregressive feature model is proposed to solve this problem for music genre classification. This model gives two different feature sets, the diagonal autoregressive (DAR) and multivariate autoregressive (MAR) features which are compared against the baseline mean-variance as well as two other temporal feature integration techniques. Reproducibility in performance ranking of temporal feature integration methods were demonstrated using two data sets with five and eleven music genres, and by using four different classification schemes. The methods were further compared to human performance. The proposed MAR features perform better than the other features at the cost of increased computational complexity.
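A least-squares version of MAR temporal integration can be sketched in numpy: fit x_t ≈ c + A_1 x_{t-1} + ... + A_p x_{t-p} over a window and use the flattened coefficients as that window's feature vector. This is a sketch of the idea, not the paper's estimator; the DAR variant would additionally restrict each A_p to be diagonal.

```python
import numpy as np

def mar_features(frames, order=1):
    """Multivariate autoregressive (MAR) integration of a window of
    frame-level feature vectors: solve x_t ~ c + sum_p A_p x_{t-p}
    by least squares and return the flattened coefficients."""
    X = np.asarray(frames, dtype=float)
    T, D = X.shape
    # Regressor rows: [1, x_{t-1}, ..., x_{t-order}] for each target x_t.
    Phi = np.array([np.concatenate([[1.0], X[t - order:t][::-1].ravel()])
                    for t in range(order, T)])
    Y = X[order:]
    coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return coef.T.ravel()          # length D * (1 + order * D)
```

Unlike the mean-variance summary, the A_p coefficients capture both temporal dynamics and cross-dimension dependencies, which is the advantage the abstract attributes to MAR.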

15.
The genre is an abstract feature, but it is still considered one of the important characteristics of music. Genre recognition forms an essential component of a large number of commercial music applications. Most existing music genre recognition algorithms are based on manual feature extraction techniques, with the extracted features used to develop a classifier model that identifies the genre. However, in many cases a set of features giving excellent accuracy fails to explain the underlying typical characteristics of music genres, and some features provide a satisfactory level of performance on a particular dataset but fail to do so on others. Hence, each dataset mostly requires manual selection of appropriate acoustic features to achieve an adequate level of performance. In this paper, we propose a genre recognition algorithm that uses almost no handcrafted features. The convolutional recurrent neural network-based model proposed in this study is trained on mel spectrograms extracted from 3-s audio clips taken from the GTZAN dataset. The proposed model achieves an accuracy of 85.36% on 10-class genre classification. The same model has been trained and tested on 10 genres of the MagnaTagATune dataset, comprising 18,476 clips of 29-s duration, and yielded an accuracy of 86.06%. The experimental results suggest that the proposed architecture, with the mel spectrogram as input feature, provides consistent performance across different datasets.

16.
17.
The automatic classification of musical genres plays a very important role in today's digital world, in which the creation, distribution, and enjoyment of musical works have undergone huge changes. As the number of music products increases daily and music genres are extremely rich, storing, classifying, and searching these works manually becomes difficult, if not impossible; automatic genre classification helps make it possible. The research presented in this paper proposes a suitable deep learning model, along with an effective data augmentation method, to achieve high classification accuracy on the Small Free Music Archive (FMA) data set. For Small FMA, it is more effective to augment the data by generating an echo than by pitch shifting. The results show that the DenseNet121 model, combined with data augmentation by noise addition and echo generation, reaches a classification accuracy of 98.97% on the Small FMA data set with the sampling frequency lowered to 16000 Hz. This classification accuracy outperforms the majority of previous results on the same Small FMA data set.
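The two augmentations the study combines, noise addition and echo generation, are simple to sketch on raw waveforms. The parameter values below (delay, decay, SNR) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def add_echo(signal, delay, decay=0.5):
    """Echo augmentation: mix in a delayed, attenuated copy of the signal.
    `delay` is in samples; `decay` scales the echoed copy."""
    out = np.array(signal, dtype=float, copy=True)
    out[delay:] += decay * signal[:-delay]
    return out

def add_noise(signal, snr_db, rng):
    """Additive white-noise augmentation at a target SNR in dB."""
    p_sig = np.mean(signal ** 2)
    p_noise = p_sig / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
```

Each augmented waveform is then converted to the model's input representation, multiplying the effective size of the training set.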

18.
This paper proposes a hierarchical, time-efficient method for audio classification and also presents an automatic procedure to select the best set of features for audio classification using the Kolmogorov-Smirnov test (KS-test). The main motivation for our study is to propose a framework for a general-genre (e.g., action, comedy, drama, documentary, musical, etc.) movie video abstraction scheme for embedded devices based only on the audio component. Accordingly, simple audio features are extracted to ensure the feasibility of real-time processing. Five audio classes are considered in this paper: pure speech, pure music or songs, speech with background music, environmental noise and silence. Audio classification proceeds in three stages: (i) silence or environmental noise detection, (ii) speech and non-speech classification and (iii) classification of pure music or songs versus speech with background music. The proposed system has been tested on various real-time audio sources extracted from movies and TV programs. Our experiments in the context of real-time processing have shown that the algorithms produce very satisfactory results.
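A cascade of this kind can be sketched with two cheap frame-level features, short-time energy and zero-crossing rate; the thresholds and the speech/music decision rule here are placeholder assumptions for illustration, not values from the paper.

```python
import numpy as np

def frame_energy(x):
    """Short-time energy of one audio frame."""
    return float(np.mean(x ** 2))

def zero_crossing_rate(x):
    """Fraction of adjacent sample pairs that change sign."""
    return float(np.mean(np.abs(np.diff(np.signbit(x).astype(int)))))

def classify_audio(frame, e_sil=1e-4, zcr_speech=0.1):
    """Illustrative three-stage cascade:
    (i) silence/noise gate on energy,
    (ii) speech vs non-speech via zero-crossing rate,
    (iii) the remaining music/speech-with-music split would need a
         music-presence detector, omitted here."""
    if frame_energy(frame) < e_sil:
        return "silence_or_noise"
    if zero_crossing_rate(frame) > zcr_speech:
        return "speech"
    return "music_or_song"
```

Because the cheapest test runs first and most frames exit early, this structure is what makes the method time-efficient on embedded devices.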

19.
Nowadays, it appears essential to design automatic indexing tools that provide meaningful and efficient means to describe musical audio content. There is in fact growing interest in music information retrieval (MIR) applications, the most popular of which are related to music similarity retrieval, artist identification, and musical genre or instrument recognition. Current MIR-related classification systems usually do not take into account the mid-term temporal properties of the signal (over several frames) and rely on the assumption that the observations of the features in different frames are statistically independent. The aim of this paper is to demonstrate the usefulness of the information carried by the evolution of these characteristics over time. To that purpose, we propose a number of methods for early and late temporal integration and provide an in-depth experimental study of their value for the task of musical instrument recognition on solo musical phrases. In particular, the impact of the time horizon over which temporal integration is performed is assessed for both fixed and variable frame-length analysis. A number of proposed alignment kernels are also used for late temporal integration. For all experiments, the results are compared to a state-of-the-art musical instrument recognition system.

20.
Timbre distance and similarity express the phenomenon that some music sounds similar while other songs sound very different to us. The notion of genre is often used to categorize music, but songs from a single genre do not necessarily sound similar, and vice versa. In this work, we analyze and compare a large number of different audio features, and psychoacoustic variants thereof, for the purpose of modeling timbre distance. The sound of polyphonic music is commonly described by extracting audio features on short time windows during which the sound is assumed to be stationary. The resulting downsampled time series are aggregated to form a high-level feature vector describing the music. We generated high-level features by systematically applying static and temporal statistics for aggregation; the temporal structure of features in particular has previously been largely neglected. A novel supervised feature selection method is applied to the huge set of possible features. The distances between the selected features correspond to timbre differences in music. The features show few redundancies and have high potential for explaining possible clusters. They outperform seven other previously proposed feature sets on several datasets with respect to the separation of known groups of timbrally different music.
