Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
The spectrogram is a time-frequency representation of the speech signal and carries rich information. Feeding the spectrogram into a pulse-coupled neural network (PCNN) yields a feature vector for the speech. The traditional speech feature uses the firing counts over 50 PCNN iterations. This paper proposes a new speech feature parameter based on the positions at which PCNN neurons fire. Speaker recognition experiments show that the new feature reflects the characteristics of a speaker's speech signal better than the traditional feature and achieves better recognition results.

2.
Application of PCNN-based spectrogram feature extraction to speaker recognition
This paper is the first to apply a biologically motivated artificial neural network, the pulse-coupled neural network (PCNN), to spectrogram feature extraction for speaker recognition. The method feeds the spectrogram into a PCNN and takes the time series of the output images, together with its entropy sequence, as the speaker's speech features, exploiting their invariance to perform speaker recognition. Experimental results show that the method performs speaker recognition quickly and effectively. By introducing the PCNN into speech recognition research, this work opens a new field combining two major branches of signal processing, speech and image processing, and is of practical significance for both the theoretical study and the application of PCNNs.
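For readers unfamiliar with the PCNN iteration these two entries rely on, the following is a minimal sketch of a simplified PCNN applied to a spectrogram, producing the firing-count time series and the entropy sequence used as features above. All parameter values (decay, threshold gain, linking strength, kernel) are illustrative assumptions, not taken from the papers.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_features(S, n_iter=50, alpha=0.2, v_theta=20.0, beta=0.1):
    """Simplified PCNN over a spectrogram S (freq x time).

    Returns the firing-count time series and the entropy sequence of
    the binary output images, the two feature types described above.
    """
    S = (S - S.min()) / (np.ptp(S) + 1e-12)           # stimulus in [0, 1]
    theta = np.ones_like(S)                           # dynamic thresholds
    Y = np.zeros_like(S)                              # binary firing map
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])              # 8-neighbour linking
    fire_counts, entropies = [], []
    for _ in range(n_iter):
        L = convolve2d(Y, kernel, mode="same")        # linking input
        U = S * (1.0 + beta * L)                      # internal activity
        Y = (U > theta).astype(float)                 # pulse generation
        theta = theta * np.exp(-alpha) + v_theta * Y  # decay + reset
        fire_counts.append(int(Y.sum()))              # firing-count series
        p = Y.mean()                                  # fraction firing
        h = 0.0 if p in (0.0, 1.0) else -(p * np.log2(p)
                                          + (1 - p) * np.log2(1 - p))
        entropies.append(h)                           # binary-image entropy
    return np.array(fire_counts), np.array(entropies)
```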

3.
In electromagnetic railgun testing, the intense muzzle flash and vibration noise severely interfere with identifying the armature's characteristic signal. To raise the automatic recognition rate of armature signals, this paper proposes a recognition method combining the wavelet transform with a convolutional neural network (CNN). The target-crossing signal is first denoised by wavelet thresholding and reconstructed; a CNN then extracts deep features of the signal, and its fully connected layer outputs the classification result. When the input is an armature signal, maximum-value detection locates its characteristic points. Experiments show that, compared with traditional wavelet threshold filtering, the proposed method improves the accuracy of automatic feature-point picking by 5.88%. The algorithm offers a useful reference for filtering and recognizing armature target-crossing signals of electromagnetic railguns.
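A minimal sketch of the wavelet-threshold denoising step this entry describes, using PyWavelets; the db4 wavelet, soft thresholding, and the universal threshold are common defaults assumed here, not the paper's exact choices.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Decompose, soft-threshold the detail coefficients, reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # noise level estimated from the finest detail band (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```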

4.
Automatic speech recognition systems often suffer from a mismatch between training and testing environments caused by complex environmental factors, which sharply degrades performance and greatly limits the applicability of speech recognition technology. In recent years many robust speech recognition techniques have been proposed, all sharing the same goal: improving system robustness and hence recognition accuracy. Among them, feature-based normalization is simple and effective and is often the first choice for robust speech recognition; it compensates for environmental mismatch by normalizing the statistical properties, cumulative density function, or power spectrum of the feature vectors. This paper surveys the mainstream normalization methods, including cepstral moment normalization, histogram equalization, and modulation spectrum normalization.
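Of the feature-normalization techniques surveyed here, normalizing the first two cepstral moments per utterance (cepstral mean and variance normalization) is the simplest; a minimal sketch, assuming features arrive as a frames-by-coefficients array:

```python
import numpy as np

def cmvn(features):
    """Cepstral mean and variance normalization per utterance.

    `features` is a (frames, coefficients) array; each coefficient
    track is shifted to zero mean and unit variance, compensating for
    stationary channel/environment effects.
    """
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-10   # guard against constant tracks
    return (features - mu) / sigma
```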

5.
This paper explains a new hybrid method for automatic speaker recognition (ASR) from speech signals based on the artificial neural network (ANN). ASR performance is regarded as a foremost challenge and needs to be improved; this work focuses on resolving ASR problems and improving the accuracy of speaker prediction. The Mel frequency cepstral coefficient (MFCC) is exploited for signal feature extraction. Input samples are created from these extracted features, and their dimensionality is reduced using a self-organizing feature map (SOFM). Finally, recognition is performed on the reduced samples using a multilayer perceptron (MLP) with Bayesian regularization. Training of the network is accomplished and verified on real speech data from the Multivariability speaker recognition database for 10 speakers. The proposed method is validated by performance estimation and classification accuracy against other models, giving a better recognition rate with 93.33% accuracy.
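A rough sketch of this front-end/back-end split, with librosa MFCCs, PCA standing in for the paper's SOFM dimensionality reduction, and scikit-learn's MLP with L2 weight decay standing in for the MLP with Bayesian regularization; all three substitutions are assumptions of this sketch, not the paper's method.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def utterance_features(path, n_mfcc=13):
    """Mean MFCC vector as a fixed-length summary of one utterance."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# X: (n_utterances, n_mfcc) feature matrix, y: speaker labels (assumed given)
reducer = PCA(n_components=8)                  # stand-in for the SOFM
clf = MLPClassifier(hidden_layer_sizes=(32,),  # L2 penalty as a rough
                    alpha=1e-3, max_iter=1000) # stand-in for Bayesian reg.
# clf.fit(reducer.fit_transform(X), y)
```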

6.
To address the slow convergence and low diagnostic accuracy caused by the time-series nature and strong noise of industrial fault-diagnosis data, a fault diagnosis method fusing an improved one-dimensional convolutional network with a bidirectional long short-term memory network (1DCNN-BiLSTM) is proposed. The method covers preprocessing of fault vibration signals, automatic feature extraction, and signal classification. First, the raw vibration signal is preprocessed with Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN). Next, a dual-channel 1DCNN-BiLSTM model is built: the processed signal is fed into a bidirectional long short-term memory (BiLSTM) channel and a one-dimensional convolutional (1DCNN) channel, fully extracting temporal correlations, locally uncorrelated spatial features, and weak periodic patterns. To handle the strong noise, an improved Squeeze-and-Excitation network (SENet) module is applied to the two channels. Finally, a fully connected layer fuses the features from both channels, and a Softmax classifier identifies the fault precisely. Experiments on the Case Western Reserve University bearing dataset show that, with the improved SENet module acting on both the 1DCNN channel and the stacked BiLSTM channel, the dual-channel model reaches the highest diagnostic accuracy of 96.87% while converging quickly, outperforming traditional single-channel models and effectively improving the efficiency of machinery fault diagnosis.
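A minimal Keras sketch of the dual-channel idea: a 1-D convolutional channel and a stacked BiLSTM channel, each gated by a squeeze-and-excitation block, fused by a dense softmax head. Layer sizes are illustrative, and the vanilla SE block stands in for the paper's improved SENet module.

```python
from tensorflow.keras import layers, Model

def se_block(x, ratio=8):
    """Vanilla squeeze-and-excitation channel attention."""
    ch = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)
    s = layers.Dense(ch // ratio, activation="relu")(s)
    s = layers.Dense(ch, activation="sigmoid")(s)
    return layers.Multiply()([x, layers.Reshape((1, ch))(s)])

def build_model(length=2048, n_classes=10):
    inp = layers.Input(shape=(length, 1))
    # channel 1: 1-D convolutions for local, weakly periodic structure
    c = layers.Conv1D(16, 64, strides=8, activation="relu")(inp)
    c = layers.MaxPooling1D(2)(c)
    c = se_block(c)
    c = layers.GlobalAveragePooling1D()(c)
    # channel 2: stacked BiLSTM for temporal correlations
    r = layers.MaxPooling1D(8)(inp)            # shorten sequence for RNN
    r = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(r)
    r = se_block(r)
    r = layers.Bidirectional(layers.LSTM(32))(r)
    # fuse both channels and classify
    out = layers.Dense(n_classes, activation="softmax")(
        layers.Concatenate()([c, r]))
    return Model(inp, out)
```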

7.
To counter the drop in speech recognition accuracy in noisy environments and the difficulty traditional beamforming algorithms have with spatial noise, an improved minimum variance distortionless response (MVDR) beamforming method based on a dual micro-array structure is proposed. First, diagonal loading raises the gain of the dual micro-array, and recursive matrix inversion lowers the computational complexity. Then, post-processing with modulation-domain spectral subtraction avoids the musical noise produced by ordinary spectral subtraction, effectively reducing speech distortion and achieving good noise suppression. Finally, a convolutional neural network (CNN) is used to train the speech model and extract deep speech features, effectively handling the variability of speech signals. Experiments show that the proposed method achieves good results in a CNN-trained speech recognition system, reaching a 92.3% recognition rate in F16 noise at a 10 dB signal-to-noise ratio, with good robustness.
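The core of such a beamformer is the diagonally loaded MVDR weight formula w = (R + λI)^{-1} d / (d^H (R + λI)^{-1} d); a minimal sketch follows. The recursive matrix-inverse update and the modulation-domain spectral subtraction post-filter are omitted, and the loading level is an assumption.

```python
import numpy as np

def mvdr_weights(R, d, loading=1e-2):
    """Diagonally loaded MVDR weights.

    R: (M, M) noise(+interference) covariance; d: (M,) steering vector.
    """
    M = R.shape[0]
    Rl = R + loading * (np.trace(R).real / M) * np.eye(M)  # diagonal loading
    Rinv_d = np.linalg.solve(Rl, d)                        # (R+λI)^{-1} d
    return Rinv_d / (d.conj() @ Rinv_d)                    # distortionless norm
```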

8.
A speech recognition model based on the Bark wavelet transform and a probabilistic neural network (PNN) is proposed. A Bark filter bank matched to human auditory characteristics reconstructs the signal and extracts speech features, which a trained PNN then recognizes. A recognition library is built by training a large number of speech samples, and an integrated recognition system is established. Experiments show that, compared with the traditional LPCC/DTW and MFCC/DWT methods, the recognition rate improves by 14.9% and 10.1% respectively, reaching 96.9%.

9.
Investigating new effective feature extraction methods applied to the speech signal is an important approach to improve the performance of automatic speech recognition (ASR) systems. Owing to the fact that the reconstructed phase space (RPS) is a proper field for true detection of signal dynamics, in this paper we propose a new method for feature extraction from the trajectory of the speech signal in the RPS. This method is based upon modeling the speech trajectory using the multivariate autoregressive (MVAR) method. Moreover, in the following, we benefit from linear discriminant analysis (LDA) for dimension reduction. The LDA technique is utilized to simultaneously decorrelate and reduce the dimension of the final feature set. Experimental results show that the MVAR of order 6 is appropriate for modeling the trajectory of speech signals in the RPS. In this study recognition experiments are conducted with an HMM-based continuous speech recognition system and a naive Bayes isolated phoneme classifier on the Persian FARSDAT and American English TIMIT corpora to compare the proposed features to some older RPS-based and traditional spectral-based MFCC features.
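The reconstructed phase space referred to here is a standard time-delay embedding; a minimal sketch, where the embedding dimension and delay are illustrative, and the order-6 MVAR fit and LDA step are not reproduced:

```python
import numpy as np

def reconstructed_phase_space(x, dim=3, delay=6):
    """Time-delay embedding of a 1-D signal into an RPS trajectory.

    Returns an (N - (dim-1)*delay, dim) matrix whose rows are the
    delay vectors [x(t), x(t+delay), ..., x(t+(dim-1)*delay)].
    """
    n = len(x) - (dim - 1) * delay
    return np.stack([x[i * delay : i * delay + n] for i in range(dim)],
                    axis=1)
```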

10.
This paper presents a new approach to speech enhancement based on a modified least-mean-square multi-notch adaptive digital filter (MNADF). This approach differs from traditional speech enhancement methods in that no a priori knowledge of the noise source statistics is required. Specifically, it targets the case where speech quality and intelligibility deteriorate in the presence of background noise. Speech coders and automatic speech recognition systems are designed to act on clean speech signals, so noise-corrupted speech must be enhanced before processing. The proposed method uses a primary input containing the corrupted speech signal and a reference input containing noise only. A new, computationally efficient algorithm is developed that tracks the significant frequencies of the noise and implements the MNADF at those frequencies; a time-frequency analysis method such as the short-time frequency transform is used for the tracking. Different types of noise from the Noisex-92 database are used to degrade real speech signals. Objective measures, study of the speech spectrograms, global signal-to-noise ratio (SNR) and segmental SNR (segSNR), as well as subjective listening tests, consistently demonstrate superior enhancement performance of the proposed method over traditional speech enhancement methods such as spectral subtraction.
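A minimal sketch of one stage of such a filter: a Widrow-style two-weight LMS notch adapting on quadrature references at one tracked noise frequency. The paper places notches at several tracked frequencies; the step size here is an assumption.

```python
import numpy as np

def lms_notch(primary, f0, fs, mu=0.01):
    """Single-frequency LMS adaptive notch (one stage of a multi-notch).

    primary: corrupted speech samples; f0: tracked noise frequency (Hz);
    fs: sampling rate. Returns the enhanced signal (the error sequence).
    """
    n = np.arange(len(primary))
    x1 = np.cos(2 * np.pi * f0 * n / fs)       # quadrature references
    x2 = np.sin(2 * np.pi * f0 * n / fs)
    w = np.zeros(2)
    out = np.empty(len(primary))
    for i in range(len(primary)):
        y = w[0] * x1[i] + w[1] * x2[i]        # estimated narrowband noise
        e = primary[i] - y                     # error = enhanced output
        w += 2 * mu * e * np.array([x1[i], x2[i]])  # LMS weight update
        out[i] = e
    return out
```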

11.
This work proposes a method for predicting the fundamental frequency and voicing of a frame of speech from its mel-frequency cepstral coefficient (MFCC) vector representation. This information is subsequently used to enable a speech signal to be reconstructed solely from a stream of MFCC vectors and has particular application in distributed speech recognition systems. Prediction is achieved by modeling the joint density of fundamental frequency and MFCCs. This is first modeled using a Gaussian mixture model (GMM) and then extended by using a set of hidden Markov models to link together a series of state-dependent GMMs. Prediction accuracy is measured on unconstrained speech input for both a speaker-dependent system and a speaker-independent system. A fundamental frequency prediction error of 3.06% is obtained on the speaker-dependent system in comparison to 8.27% on the speaker-independent system. On the speaker-dependent system 5.22% of frames have voicing errors compared to 8.82% on the speaker-independent system. Spectrogram analysis of reconstructed speech shows that highly intelligible speech is produced, with the quality of the speaker-dependent speech being slightly higher owing to the more accurate fundamental frequency and voicing predictions.
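The GMM stage of this prediction is the standard MMSE estimate from a joint density; a sketch assuming a full-covariance scikit-learn GaussianMixture trained on vectors [f0, mfcc_1..mfcc_D] with f0 in dimension 0 (the HMM extension and the voicing decision are omitted):

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def predict_f0(gmm: GaussianMixture, mfcc):
    """MMSE prediction E[f0 | mfcc] from a joint full-covariance GMM."""
    h, cond = [], []
    for w, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_f, mu_c = mu[0], mu[1:]
        S_fc, S_cc = S[0, 1:], S[1:, 1:]
        # responsibility of this component given the MFCCs alone
        h.append(w * multivariate_normal.pdf(mfcc, mu_c, S_cc))
        # per-component conditional mean of f0 given the MFCCs
        cond.append(mu_f + S_fc @ np.linalg.solve(S_cc, mfcc - mu_c))
    h = np.array(h)
    h /= h.sum()
    return float(h @ np.array(cond))
```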

12.

Automatic speech recognition systems are commonly developed and tested on the speech of normal speakers in various languages. This paper emphasizes the need for a more challenging speaker-independent speech recognition system for the hearing impaired, able to recognize speech uttered by any Hearing Impaired (HI) speaker. In this work, Gammatone energy features with filters spaced on the equivalent rectangular bandwidth (ERB), MEL, and BARK scales, together with MFPLPC features, are used at the front end, with vector quantization (VQ) and multivariate hidden Markov models (MHMM) at the back end. Performance is compared for three modeling techniques, VQ, FCM (fuzzy C-means) clustering, and MHMM, on the recognition of isolated digits and simple continuous sentences in Tamil. Recognition accuracy (RA) is 81.5% for the speaker-independent isolated-digit system when the speech of eight speakers is used for training and that of the remaining two for testing. Accuracy is 91% and 87.5% for the speaker-independent isolated-digit and continuous-speech systems respectively when 90% of the data is used for training and 10% for testing, and can be further enhanced with a more extensive database for creating models/templates. Receiver operating characteristics (ROC) between true positive rate and false positive rate are used to assess the performance of the system for HI speakers. The system can be used to understand the speech of any hearing-impaired speaker and facilitates providing them the necessary assistance, ultimately improving the social status and confidence of hearing-impaired people.

13.
To address effective speech feature extraction and the influence of noise in speaker recognition systems, a new feature extraction method is proposed: the S-transform-based Mel-frequency cepstral coefficient (SMFCC). Building on the traditional MFCC, it exploits the two-dimensional time-frequency multi-resolution of the S-transform and the effective denoising of the time-frequency matrix by singular value decomposition (SVD), combined with statistical analysis, to obtain the final speech features. On the TIMIT corpus, the proposed feature is compared experimentally with existing features: the equal error rate (EER) and minimum detection cost (MinDCF) of SMFCC are both lower than those of linear prediction cepstral coefficients (LPCC), MFCC, and their combination LMFCC, and relative to MFCC the EER and MinDCF08 decrease by 3.6% and 17.9% respectively. The results show that the method effectively removes noise from the speech signal and improves local resolution.

14.
In this research, a new speech recognition method based on improved feature extraction and an improved support vector machine (ISVM) is developed. A Gaussian filter is used to denoise the input speech signal. Feature extraction yields five feature types: peak values, Mel frequency cepstral coefficients (MFCC), tri-spectral features, the discrete wavelet transform (DWT), and difference values between the input and a standard signal. These features are scaled using the linear identical scaling (LIS) method, with the same scaling method and the same scaling factors for each feature set in both training and testing. Training is then accomplished with an ISVM with best-fitness validation, consisting of two stages: (i) a linear dual classifier that finds same-class and different-class attributes simultaneously, and (ii) a cross fitness validation (CFV) method to prevent overfitting. The proposed speech recognition method offers 98.2% accuracy.

15.
The fine spectral structure related to pitch information is conveyed in Mel cepstral features, with variations in pitch causing variations in the features. For speaker recognition systems this phenomenon, known as "pitch mismatch" between training and testing, can increase error rates. Likewise, pitch-related variability may increase error rates in speech recognition systems for languages such as English in which pitch does not carry phonetic information. In addition, for both speech and speaker recognition systems, the raw speech signal is traditionally parsed into frames with a constant frame size and a constant frame offset, without aligning the frames to the natural pitch cycles; as a result, the power spectral estimation done as part of the Mel cepstral computation may include artifacts. Pitch-synchronous methods have addressed this problem in the past, at the expense of added complexity from a variable frame size and/or offset. This paper introduces Pseudo Pitch Synchronous (PPS) signal processing procedures that attempt to align each frame to its natural cycle and avoid truncation of pitch cycles while still using a constant frame size and offset. Text-independent experiments on NIST speaker recognition tasks demonstrate a performance improvement when scores produced by PPS systems are fused with traditional speaker recognition scores. In addition, a better distribution of errors across trials can be obtained at similar error rates, and some insight is gained into the role of the fundamental frequency in speaker recognition. Speech recognition experiments on the Aurora-2 noisy digits task also show improved robustness and better accuracy for extremely low signal-to-noise ratio (SNR) data.

16.
Endpoint detection is an important step in speech recognition, so improving it has long been a key topic in the field. To raise the accuracy of speech endpoint detection under background noise, a spectral-entropy endpoint detection method based on the wavelet packet transform is proposed. The method applies the wavelet packet transform to the speech signal, decomposing each frame into multiple sub-bands; the sub-band energies of each frame are computed, the spectral entropy of each frame is derived from the energy proportions, and a new threshold is then determined. Simulations show that the method is more effective and robust than traditional approaches and detects speech signals more accurately.
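A minimal per-frame sketch of the sub-band spectral entropy described here, using PyWavelets; the wavelet choice and decomposition depth are assumptions, and the threshold/decision logic is a separate step.

```python
import numpy as np
import pywt

def frame_wp_entropy(frame, wavelet="db4", level=3):
    """Wavelet-packet spectral entropy of one frame.

    Decompose into 2**level sub-bands, form the sub-band energy
    distribution, and take its entropy; a peaky (speech-like) spectrum
    yields low entropy.
    """
    wp = pywt.WaveletPacket(frame, wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="freq")])
    p = energies / (energies.sum() + 1e-12)    # sub-band energy proportions
    return -np.sum(p * np.log2(p + 1e-12))     # spectral entropy
```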

17.
Ge Lei, Qiang Yan, Zhao Juanjuan. Journal of Software, 2016, 27(S2): 130-136
Speech emotion recognition is an important topic in human-computer interaction, and a speech emotion recognition system for intervention therapy can aid the rehabilitation of autistic children. However, the emotional features in speech signals are numerous and heterogeneous, and feature extraction is itself a challenging task, which hurts the recognition performance of the whole system. To address this, a speech emotion feature extraction algorithm is proposed that learns emotional features from the speech signal automatically with an unsupervised autoencoder network: a three-layer autoencoder network is constructed to extract the features, and the high-level features learned by the stacked encoding layers are fed to an extreme learning machine classifier. The recognition rate is 84.14%, an improvement over traditional methods based on hand-crafted features.

18.
Traditional voiceprint recognition methods are cumbersome and have low recognition rates, while the neural networks used by existing deep learning methods are not tailored to speech signals, limiting recognition accuracy. To address these problems, this paper proposes an end-to-end voiceprint recognition method based on nonlinearly stacked bidirectional LSTMs. First, Fbank features are extracted from the raw audio as input to the network model. Then, exploiting the continuity and strong temporal dependence of speech signals, a bidirectional long short-term memory network processes the speech data to extract deep features; to further strengthen the network's nonlinear expressiveness, multiple bidirectional LSTM layers and nonlinear layers are stacked to extract deeper abstract features of the speech signal. Finally, training uses the SGD optimizer. Experiments show that the proposed method fully exploits the sequential structure of speech signals, has strong temporal coverage and nonlinear expressiveness, forms a coherent overall model, and achieves better recognition results than GRU and LSTM models.
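A minimal Keras sketch of a nonlinearly stacked BiLSTM classifier over Fbank frames with SGD training, as the entry describes; depths, widths, and the softmax classification head are illustrative assumptions (end-to-end systems often train a speaker embedding instead).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_speaker_net(n_frames=200, n_fbank=64, n_speakers=100):
    """Stacked BiLSTM layers interleaved with extra nonlinear layers."""
    m = models.Sequential([
        layers.Input(shape=(n_frames, n_fbank)),      # Fbank frames
        layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
        layers.TimeDistributed(layers.Dense(128, activation="tanh")),
        layers.Bidirectional(layers.LSTM(128)),
        layers.Dense(128, activation="tanh"),
        layers.Dense(n_speakers, activation="softmax"),
    ])
    m.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m
```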

19.
In this work, spectral features extracted from sub-syllabic regions and pitch-synchronous analysis are proposed for speech emotion recognition. Linear prediction cepstral coefficients, Mel frequency cepstral coefficients, and features extracted from high-amplitude regions of the spectrum are used to represent emotion-specific spectral information. These features are extracted from the consonant, vowel, and transition regions of each syllable to study the contribution of these regions toward recognition of emotions; the consonant, vowel, and transition regions are determined using vowel onset points. Spectral features extracted from each pitch cycle are also used to recognize the emotions present in speech. The emotions used in this study are anger, fear, happiness, neutral, and sadness. Emotion recognition performance using sub-syllabic speech segments is compared with the results of the conventional block-processing approach, where the entire speech signal is processed frame by frame. The proposed emotion-specific features are evaluated on the simulated emotion speech corpus IITKGP-SESC (Indian Institute of Technology, KharaGPur - Simulated Emotion Speech Corpus), and the results are compared with those of the Berlin emotion speech corpus. Emotion recognition systems are developed using Gaussian mixture models and auto-associative neural networks. The purpose of this study is to explore sub-syllabic regions for identifying the emotions embedded in a speech signal and, if possible, to avoid processing the entire speech signal for emotion recognition without serious compromise in performance.

20.
For speech recognition in multiple noise environments, a hierarchical speech recognition model that treats environmental noise as recognition context is proposed. The model consists of two layers: a noisy-speech classification model and acoustic models for specific noise environments. The classification layer reduces the difference between training and testing data, removes the noise-stationarity restriction faced by feature-space approaches, and overcomes the low recognition accuracy of traditional multi-condition training in certain noise environments; acoustic modeling with a deep neural network (DNN) further strengthens the acoustic model's ability to distinguish noise, improving the noise robustness of model-space speech recognition. In experiments comparing the proposed model against a baseline obtained by multi-condition training, the hierarchical model reduces the word error rate (WER) by a relative 20.3%, showing that it enhances the noise robustness of speech recognition.
