Similar Documents
20 similar documents found (search time: 609 ms)
1.
Recognition of speech and non-speech sounds plays an important role in the development of many systems, such as security surveillance, healthcare, and modern audiovisual conferencing. Although the vast majority of sound signals have distinctive production mechanisms, systematic and effective methods for extracting features from them are often lacking. Based on the fact that different audio signals have their own inherent characteristics, a class-specific feature selection method is used to extract audio features for classification. Simulation experiments were conducted with the proposed method on speech and two kinds of non-speech sounds (coughing and the sound of breaking cups and plates). The results show that, compared with conventional feature selection methods, the proposed method achieves better classification with fewer features.

2.
In this paper, we investigate the noise robustness of Wang and Shamma's early auditory (EA) model for the calculation of an auditory spectrum in audio classification applications. First, a stochastic analysis is conducted wherein an approximate expression of the auditory spectrum is derived to justify the noise-suppression property of the EA model. Second, we present an efficient fast Fourier transform (FFT)-based implementation for the calculation of a noise-robust auditory spectrum, which allows flexibility in the extraction of audio features. To evaluate the performance of the proposed FFT-based auditory spectrum, a set of speech/music/noise classification tasks is carried out wherein a support vector machine (SVM) algorithm and a decision tree learning algorithm (C4.5) are used as the classifiers. Features used for classification include conventional Mel-frequency cepstral coefficients (MFCCs), MFCC-like features obtained from the original auditory spectrum (i.e., based on the EA model) and the proposed FFT-based auditory spectrum, as well as spectral features (spectral centroid, bandwidth, etc.) computed from the latter. Compared to the conventional MFCC features, both the MFCC-like and spectral features derived from the proposed FFT-based auditory spectrum show more robust performance in noisy test cases. Test results also indicate that, using the new MFCC-like features, the performance of the proposed FFT-based auditory spectrum is slightly better than that of the original auditory spectrum, while its computational complexity is reduced by an order of magnitude.
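The FFT-based pipeline described in abstract 2 can be sketched as a generic MFCC-like computation (power spectrum → mel filterbank → log → DCT-II). The parameter values below (16 kHz sampling, 24 filters, 13 coefficients) are assumptions for illustration; the paper's EA-model-specific weighting and noise-suppression steps are not reproduced.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale (standard
    # construction, not the paper's EA-model weighting).
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc_like(frame, sr=16000, n_filters=24, n_ceps=13):
    # One pre-windowed frame -> power spectrum -> mel energies -> log -> DCT-II.
    spec = np.abs(np.fft.rfft(frame)) ** 2
    energies = mel_filterbank(n_filters, len(frame), sr) @ spec
    log_e = np.log(energies + 1e-10)  # guard against log(0)
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filters)
    return basis @ log_e

# A 440 Hz tone frame, Hamming-windowed.
frame = np.hamming(512) * np.sin(2 * np.pi * 440.0 * np.arange(512) / 16000.0)
print(mfcc_like(frame).shape)  # (13,)
```

Swapping the mel filterbank weights for the EA model's cochlear filter outputs, as the paper does, would leave the rest of this pipeline unchanged.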

3.
In this work, we have developed a speech mode classification (SMC) model for improving the performance of a phone recognition system (PRS). We explore vocal tract system, excitation source, and prosodic features for the development of the SMC model. These features are extracted from the voiced regions of a speech signal. In this study, conversation, extempore, and read speech are considered as three different modes of speech. The vocal tract component of speech is captured using Mel-frequency cepstral coefficients (MFCCs). The excitation source features are captured through Mel power differences of spectrum in sub-bands (MPDSS) and residual Mel-frequency cepstral coefficients (RMFCCs) of the speech signal. The prosodic information is extracted from pitch and intensity. SMC models are developed using the above features independently and in fusion. Experiments were carried out on a Bengali speech corpus to analyze the accuracy of the SMC model using artificial neural networks (ANNs), naive Bayes, support vector machines (SVMs), and k-nearest neighbors (KNN). We propose four classification models that are combined using a maximum voting approach for optimal performance. The results show that the SMC model developed using the fusion of vocal tract system, excitation source, and prosodic features yields the best performance, 98%. Finally, the proposed speech mode classifier is integrated into the PRS, and the accuracy of the phone recognition system improves by 11.08%.
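The maximum-voting combination of the four classifiers (ANN, naive Bayes, SVM, KNN) mentioned in abstract 3 amounts to taking the most frequent predicted label. The classifier outputs below are hypothetical, and the abstract does not specify a tie-break rule:

```python
from collections import Counter

def majority_vote(predictions):
    # Combine per-classifier speech-mode labels by maximum voting.
    # Ties fall to the label seen first (the paper's tie-break rule
    # is not specified in the abstract).
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs from ANN, naive Bayes, SVM, KNN for one utterance.
votes = ['read', 'read', 'extempore', 'read']
print(majority_vote(votes))  # read
```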

4.
5.
Word sense disambiguation (WSD) addresses how a computer can determine the specific meaning of a polysemous word in context, and it plays an important role in natural language processing tasks such as information retrieval, machine translation, text classification, and automatic summarization. This paper proposes a new WSD method that incorporates syntactic information. A syntactic tree is constructed for the context of the ambiguous word, and syntactic information, part-of-speech information, and word-form information are extracted as disambiguation features. A naive Bayes model is used to build the WSD classifier, which is then applied to the test data set. Experimental results show that the disambiguation accuracy improves, reaching 65%.
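A minimal naive Bayes disambiguation classifier in the spirit of abstract 5 can be sketched as follows. The sense labels and context features are invented for illustration, and the syntactic-tree feature extraction the paper describes is not reproduced:

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    # examples: (sense_label, feature_list) pairs, where the features stand
    # in for the syntactic / POS / word-form cues the paper extracts.
    sense_counts = Counter(sense for sense, _ in examples)
    feat_counts = defaultdict(Counter)
    vocab = set()
    for sense, feats in examples:
        feat_counts[sense].update(feats)
        vocab.update(feats)
    return sense_counts, feat_counts, vocab

def classify_nb(model, feats):
    # Pick the sense with the maximum posterior (log-space, Laplace smoothed).
    sense_counts, feat_counts, vocab = model
    total = sum(sense_counts.values())
    best, best_lp = None, -math.inf
    for sense, count in sense_counts.items():
        lp = math.log(count / total)
        denom = sum(feat_counts[sense].values()) + len(vocab)
        for f in feats:
            lp += math.log((feat_counts[sense][f] + 1) / denom)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

# Toy training data for the ambiguous word "bank".
train = [('bank/finance', ['NN', 'deposit', 'money']),
         ('bank/finance', ['NN', 'loan', 'money']),
         ('bank/river', ['NN', 'river', 'water'])]
model = train_nb(train)
print(classify_nb(model, ['money', 'loan']))  # bank/finance
```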

6.
To address the sharp performance degradation of most speech recognition systems in noisy environments, a new feature extraction method for speech recognition is proposed. The method is based on an auditory model: frequency information is obtained by combining the rising zero-crossing rates of the speech signal and its difference signal, and intensity information is obtained through peak detection and nonlinear amplitude weighting. The two are combined to form the output speech features, which are then trained and recognized with a BP neural network and an HMM, respectively. Speaker-independent recognition of 50 words was simulated at different signal-to-noise ratios, and the results show that the zero-crossing and peak-amplitude features combined with difference information are highly robust to noise.
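The frequency cue in abstract 6, the rising zero-crossing count of the signal combined with that of its difference signal, plus a simple peak-amplitude intensity cue, can be sketched as follows. The paper's exact peak detection and nonlinear amplitude weighting are not reproduced:

```python
import numpy as np

def rising_zero_crossings(x):
    # Count upward (negative -> nonnegative) zero crossings.
    neg = np.signbit(x)
    return int(np.sum(neg[:-1] & ~neg[1:]))

def frame_features(frame):
    # Frequency cue: rising zero-crossing counts of the signal and of its
    # first-difference signal. Intensity cue: peak amplitude of the frame
    # (a stand-in for the paper's peak detection with nonlinear weighting).
    zc_signal = rising_zero_crossings(frame)
    zc_diff = rising_zero_crossings(np.diff(frame))
    peak = float(np.max(np.abs(frame)))
    return zc_signal, zc_diff, peak

t = np.arange(256) / 8000.0               # 32 ms frame at 8 kHz
frame = np.sin(2 * np.pi * 1000.0 * t)    # 1 kHz tone
print(frame_features(frame))
```

The difference signal acts as a crude high-pass filter, so its zero-crossing rate emphasizes higher-frequency content that the raw signal's rate misses.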

7.
The main aim of this paper is to present a knowledge-based framework for automatically selecting, on a per-image basis, the best image enhancement algorithm from several available, in the context of X-ray images of airport luggage. The approach involves a system that learns to map image features representing an image's viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. For a new image, such a system predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.

8.
Chinese is a tonal language, so tone information plays a key role in Chinese speech recognition. How to use tone information effectively within the existing hidden Markov model (HMM) framework remains an open question. Existing Chinese speech recognition systems mainly use tone information in two ways: the embedded tone model, in which tone feature vectors and acoustic feature vectors are combined into a single stream to train the models, and the explicit tone model, in which tone information is modeled separately and the resulting model is used to optimize the original decoding network. This paper unifies the two approaches: an embedded tone model with two-stream (rather than single-stream) modeling first produces an N-best candidate list, and an explicit tone model with left-context tone modeling then rescores the N-best list to obtain the final recognition result, yielding a performance improvement. Compared with a traditional toneless model, the proposed method improves recognition accuracy by more than 3.0% absolute on average, and by 5.36% absolute on the third test set.

9.
This paper proposes a hierarchical, time-efficient method for audio classification and also presents an automatic procedure for selecting the best set of features for audio classification using the Kolmogorov-Smirnov test (KS-test). The main motivation for our study is to propose a framework for a general-genre (e.g., action, comedy, drama, documentary, musical) movie video abstraction scheme for embedded devices, based only on the audio component. Accordingly, simple audio features are extracted to ensure the feasibility of real-time processing. Five audio classes are considered in this paper: pure speech, pure music or songs, speech with background music, environmental noise, and silence. Audio classification is processed in three stages: (i) silence or environmental noise detection, (ii) speech and non-speech classification, and (iii) classification into pure music or songs versus speech with background music. The proposed system has been tested on various real-time audio sources extracted from movies and TV programs. Our experiments in the context of real-time processing have shown that the algorithms produce very satisfactory results.
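The KS-test feature selection of abstract 9 scores each candidate feature by the maximum gap between the empirical CDFs of its values in two classes; a larger statistic means the feature separates the classes better. A minimal sketch on synthetic data follows (the class names and feature dimensions are illustrative, not the paper's):

```python
import numpy as np

def ks_statistic(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: max gap between the
    # empirical CDFs of feature values drawn from two audio classes.
    data = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), data, side='right') / len(a)
    cdf_b = np.searchsorted(np.sort(b), data, side='right') / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def rank_features(class_x, class_y):
    # Rank feature columns by how well they separate the two classes
    # (the paper's selection threshold and stage hierarchy are omitted).
    scores = [ks_statistic(class_x[:, j], class_y[:, j])
              for j in range(class_x.shape[1])]
    return np.argsort(scores)[::-1], scores

rng = np.random.default_rng(0)
speech = rng.normal(0.0, 1.0, size=(200, 3))
music = rng.normal(0.0, 1.0, size=(200, 3))
music[:, 1] += 3.0  # feature 1 clearly separates the two classes
order, scores = rank_features(speech, music)
print(order[0])  # index of the most discriminative feature
```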

10.
Stress is an indispensable part of spoken communication and plays a very important role in it. To evaluate the effectiveness of auditory-model-based short-time spectral feature sets for Mandarin stress detection, the short-time spectral information of each speech segment is extracted with the MFCC (Mel-frequency cepstral coefficient) and RASTA-PLP (relative spectra perceptual linear prediction) algorithms, yielding an MFCC-based and a RASTA-PLP-based short-time spectral feature set. A naive Bayes classifier is used to model the two feature sets, assigning each object to the class with the maximum posterior probability; this classification method makes full use of the relevant acoustic properties of the current speech segment. On the ASCCD (Annotated Speech Corpus of Chinese Discourse), the MFCC-based and RASTA-PLP-based short-time spectral feature sets achieve Mandarin stress detection accuracies of 82.1% and 80.8%, respectively. The experimental results show that both kinds of short-time spectral features can be used in Mandarin stress detection research.

11.
12.
Improving the quality of separated speech at low signal-to-noise ratios (SNRs) is a key focus of speech separation research, yet most speech separation methods still train features only on the target speaker's speech at low SNRs. To address this shortcoming, a mixed-speech separation method based on a jointly trained generative adversarial network (GAN) is proposed. To avoid complex acoustic feature extraction, the generative model uses a fully convolutional neural network to extract high-dimensional features directly from the time-domain waveform of the mixed speech, and the discriminative model builds a binary-classification convolutional neural…

13.
An improved tone modeling method based on articulatory features is proposed and applied to one-pass decoding in a stochastic segment model. Based on the articulatory characteristics of Mandarin, seven articulatory features that distinguish Chinese vowel and consonant information are defined. Using these as target values, a hierarchical multilayer perceptron computes the posterior probabilities of the speech signal belonging to 35 articulatory-feature classes, and these probabilities are used as articulatory features together with traditional prosodic features for tone modeling. In accordance with the decoding characteristics of the stochastic segment model, the tone-model probability score of each path retained after two-level pruning is computed, weighted, and added to the path's total probability score. Experiments on the "863-test" set show that with the new articulatory feature set the tone model's recognition accuracy improves by 3.11%, and with tone information incorporated the character error rate of the stochastic segment model drops from 13.67% to 12.74%, demonstrating the feasibility of applying tone information to the stochastic segment model.

14.
This paper surveys the field of speech emotion recognition across four areas: emotion description models, emotional speech databases, feature extraction and dimensionality reduction, and emotion classification and regression algorithms. It summarizes discrete emotion models, dimensional emotion models, and emotion description methods based on one-way mappings between the two; outlines the criteria for selecting emotional speech databases; refines the taxonomy of speech emotion features and lists commonly used feature extraction tools; and finally distills the characteristics of commonly used algorithms for feature extraction and for emotion classification and regression, summarizes progress in deep learning research, raises open problems that the field needs to solve, and predicts development trends.

15.
Emphasis plays an important role in expressive speech synthesis by highlighting the focus of an utterance to draw the attention of the listener. We present a hidden Markov model (HMM)-based emphatic speech synthesis model. The ultimate objective is to synthesize corrective feedback in a computer-aided pronunciation training (CAPT) system. We first analyze contrastive (neutral versus emphatic) speech recordings. The changes in the acoustic features of emphasis at different prosodic locations, and the local prominences of emphasis, are analyzed. Based on this analysis, we develop a perturbation model that predicts the changes in acoustic features from neutral to emphatic speech with high accuracy. Building on the perturbation model, we then develop an HMM-based emphatic speech synthesis model. Unlike previous work, the HMM model is trained on a neutral corpus, but context features and additional acoustic-feature-related features are used when growing the decision tree. The output of the perturbation model can then be used to supervise the HMM model in synthesizing emphatic speech, instead of applying the perturbation model directly at the back end of a neutral speech synthesis model. In this way, the need for an emphasis corpus is reduced, and the quality degradation introduced by speech modification algorithms is avoided. The experiments indicate that the proposed emphatic speech synthesis model improves the emphasis quality of synthesized speech while maintaining a high degree of naturalness.

16.
张连海  陈斌  屈丹 《计算机科学》2012,39(9):211-214
A classification method for fricatives and affricates based on articulatory properties is proposed. The method first extracts, from the Seneff auditory spectrum, a set of feature parameters describing segment energy distribution and spectral statistics to characterize the articulatory differences between the two classes, and then uses a support vector machine to classify fricatives and affricates. Experimental results show that classification accuracy reaches 90.08% on clean speech and 80.4% at a signal-to-noise ratio of 5 dB; compared with traditional fricative/affricate classification methods based on time-frequency energy distribution features, performance at low SNR is greatly improved.

17.
Although the traditional conditional random field (CRF) approach can accommodate context of arbitrary length and allows flexible feature design, it has a high training cost and high model complexity; in sequence labeling tasks in particular, its drawbacks are exacerbated by the need to compute the joint probability distribution over the entire label sequence. To this end, combining a structured support vector machine (Structured Support Vecto…

18.
This paper deals with speech emotion analysis in the context of increasing awareness of the wide application potential of affective computing. Unlike most works in the literature, which mainly rely on classical frequency- and energy-based features along with a single global classifier for emotion recognition, we propose some new harmonic and Zipf-based features for better characterization of speech emotion in the valence dimension, and a multi-stage classification scheme driven by a dimensional emotion model for better discrimination between emotion classes. Tested on the Berlin dataset with 68 features and six emotion states, our approach proves effective, achieving a 68.60% classification rate, rising to 71.52% when gender classification is applied first. On the DES dataset with five emotion states, our approach achieves an 81% recognition rate, where the best performance in the literature known to us is 76.15% on the same dataset.

19.
Auditory attention is a complex mechanism that involves the processing of low-level acoustic cues together with higher level cognitive cues. In this paper, a novel method is proposed that combines biologically inspired auditory attention cues with higher level lexical and syntactic information to model task-dependent influences on a given spoken language processing task. A set of low-level multiscale features (intensity, frequency contrast, temporal contrast, orientation, and pitch) is extracted in parallel from the auditory spectrum of the sound based on the processing stages in the central auditory system to create feature maps that are converted to auditory gist features that capture the essence of a sound scene. The auditory attention model biases the gist features in a task-dependent way to maximize target detection in a given scene. Furthermore, the top-down task-dependent influence of lexical and syntactic information is incorporated into the model using a probabilistic approach. The lexical information is incorporated by using a probabilistic language model, and the syntactic knowledge is modeled using part-of-speech (POS) tags. The combined model is tested on automatically detecting prominent syllables in speech using the BU Radio News Corpus. The model achieves 88.33% prominence detection accuracy at the syllable level and 85.71% accuracy at the word level. These results compare well with reported human performance on this task.

20.
Optimal representation of acoustic features is an ongoing challenge in automatic speech recognition research. As an initial step toward this goal, some approaches propose optimizing the filterbanks for cepstral coefficients using evolutionary optimization methods. However, the large number of optimization parameters required by a filterbank makes it difficult to guarantee that a single optimized filterbank can provide the best representation for phoneme classification. Moreover, in many cases, a number of potential solutions are obtained, each discriminating between specific groups of phonemes; in other words, each filterbank has its own particular advantage. Therefore, aggregating the discriminative information provided by the filterbanks is a challenging task. In this study, a number of complementary filterbanks are optimized to provide different representations of speech signals for phoneme classification using hidden Markov models (HMMs). Fuzzy information fusion is used to aggregate the decisions provided by the HMMs; fuzzy theory can effectively handle the uncertainties of classifiers trained on different representations of the speech data. The output of each expert's HMM classifiers is fused using a fuzzy decision fusion scheme, which employs global and local confidence measurements to formulate the reliability of each classifier, based on both global and local context, when making overall decisions. Experiments were conducted on clean and noisy phonetic samples. The proposed method outperformed conventional Mel-frequency cepstral coefficients under both conditions in terms of overall phoneme classification accuracy. The fuzzy fusion scheme was shown to be capable of aggregating the complementary information provided by each filterbank.
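The decision fusion in abstract 20 can be approximated, under strong simplification, as a confidence-weighted soft vote over the per-filterbank classifier posteriors. The paper's global/local fuzzy confidence measures are replaced here by fixed illustrative weights, and all numbers are toy values:

```python
import numpy as np

def fused_decision(posteriors, confidences):
    # Aggregate per-filterbank HMM class posteriors with per-classifier
    # confidence weights (a generic weighted soft vote standing in for
    # the paper's global/local fuzzy confidence scheme).
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()  # normalize confidences to sum to 1
    fused = np.tensordot(w, np.asarray(posteriors, dtype=float), axes=1)
    return int(np.argmax(fused)), fused

# Three filterbank-specific classifiers, four phoneme classes (toy numbers).
posteriors = [[0.6, 0.2, 0.1, 0.1],
              [0.3, 0.4, 0.2, 0.1],
              [0.5, 0.1, 0.3, 0.1]]
label, fused = fused_decision(posteriors, confidences=[0.5, 0.2, 0.3])
print(label)  # 0
```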
