Similar Literature
 18 similar documents retrieved.
1.
Many model-based computational auditory scene analysis systems for separating two-talker speech mixtures depend on the effectiveness of sample training and on prior knowledge of the speakers. To address this, a clustering-based monaural mixed-speech separation system is proposed. The system first analyzes the speech signal with a multipitch tracking algorithm to produce simultaneous streams, then searches for the best grouping of these streams by maximizing a criterion based on the traces of the within-class and between-class scatter matrices, and finally separates the two talkers' speech. The system requires no trained speech models, effectively improves the separation of two-talker speech mixtures, and offers a new approach to two-talker speech separation.
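To make the grouping step concrete, here is a minimal, hypothetical sketch of scoring a two-way grouping of simultaneous streams with a scatter-matrix trace criterion; the per-stream feature vectors and the exact trace-ratio form are assumptions made for illustration, not the paper's definition.

```python
# Hypothetical sketch: score a two-way grouping of simultaneous streams by the
# traces of the between-class and within-class scatter matrices (assumed criterion).
import numpy as np
from itertools import product

def scatter_trace_score(features, labels):
    """features: (n_streams, n_dims) per-stream feature vectors; labels: 0/1 per stream."""
    mean_all = features.mean(axis=0)
    s_within, s_between = 0.0, 0.0
    for c in (0, 1):
        cls = features[labels == c]
        mean_c = cls.mean(axis=0)
        s_within += ((cls - mean_c) ** 2).sum()                    # tr(S_w)
        s_between += len(cls) * ((mean_c - mean_all) ** 2).sum()   # tr(S_b)
    return s_between / (s_within + 1e-12)

def best_two_way_grouping(features):
    """Exhaustively search binary labelings of the streams (feasible for small n only)."""
    n = len(features)
    return max((np.array(lab) for lab in product((0, 1), repeat=n) if 0 < sum(lab) < n),
               key=lambda lab: scatter_trace_score(features, lab))
```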

2.
王珊  许刚 《计算机工程》2007,33(18):211-213
Based on the principles of computational auditory scene analysis, an algorithmic model is proposed for separating the overlapping speech of two talkers. The model applies different methods to the low-frequency and high-frequency regions, avoiding the poor high-frequency separation that results from treating both regions the same way. Experimental results show that the model performs well in practice.

3.
The human auditory system can pick out speech of interest even in strong noise. In computational auditory scene analysis (CASA), the key difficulty is finding suitable cues for separating the target speech signal from noise. For monaural voiced-speech separation, an algorithm is proposed that uses pitch as the grouping cue. Simulation experiments under six noise conditions, including white noise and cocktail-party noise, show that compared with conventional spectral subtraction the algorithm improves the output signal-to-noise ratio by 7.47 dB on average, effectively suppresses the interfering noise, and improves separation quality.
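The 7.47 dB figure is an output signal-to-noise ratio; a minimal sketch of one common way to compute output SNR from the clean reference and the resynthesized estimate is shown below (the paper's exact definition may differ).

```python
import numpy as np

def output_snr_db(reference, estimate):
    """Output SNR in dB: energy of the clean reference over energy of the residual error."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(noise ** 2) + 1e-12))
```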

4.
An auditory scene analysis (ASA) system decomposes a scene into speech streams corresponding to different sound sources. Segmentation, a major stage of ASA, breaks an auditory scene into segments. A speech segmentation system based on onset and offset analysis detects onsets and offsets, generates speech segments by matching corresponding onset and offset fronts, and then regroups these segments into speech streams.

5.
关勇  李鹏  刘文举  徐波 《自动化学报》2009,35(4):410-416
Conventional noise-robust algorithms cannot solve the robustness problem of automatic speech recognition (ASR) against competing speech. This paper proposes a mixed-speech separation system based on computational auditory scene analysis (CASA) and speaker model information. Within the CASA framework, the system uses speaker model information and factorial-max vector quantization (MAXVQ) to estimate real-valued masks, effectively separating the target speaker's speech from a two-talker mixture and thereby providing a robust front end for ASR. Evaluation on the Speech Separation Challenge (SSC) corpus shows that the proposed system improves speech recognition accuracy by 15.68% over the baseline. Related experiments also confirm the effectiveness of the proposed multi-speaker identification and real-valued mask estimation.
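MAXVQ itself is beyond a short sketch, but the real-valued mask it estimates plays the role of a soft (ratio) mask over time-frequency units. A minimal illustration, assuming per-unit power estimates for the target and the interfering speaker are already available:

```python
import numpy as np

def ratio_mask(target_power, interference_power):
    """Soft time-frequency mask: fraction of each T-F unit's energy attributed to the target.
    Inputs are (frames, channels) power estimates, e.g. derived from speaker models."""
    return target_power / (target_power + interference_power + 1e-12)

def apply_mask(mixture_tf, mask):
    """Weight the mixture time-frequency representation before resynthesis."""
    return mixture_tf * mask
```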

6.
Based on the theory of computational auditory scene analysis, mixed-speech separation is studied using onset/offset cues. After peripheral-model processing, onsets and offsets are detected and matched in the time and frequency domains, and segments are merged using onset/offset information on the time-frequency map to achieve speech separation. Experiments on three types of mixtures show that onset/offset cues can handle both unvoiced and voiced speech, impose no restriction on the type of sound mixture, and yield good separation results.

7.
Auditory scene analysis (ASA) separates overlapping sound signals by mimicking characteristics of human hearing. As basic research in ASA, this paper addresses the inability of existing ASA systems to effectively separate signals in nearby frequency bands, and proposes a new vowel separation system based on a dual-fundamental-frequency multi-band excitation separation model. Using the pitch-track characteristics of the two speech signals, the system extracts the harmonic parameters of the speech corresponding to each fundamental frequency in the multi-band excitation model, then feeds the two parameter sets into a multi-band excitation synthesis model to obtain two separated speech signals. The paper presents the principle and a detailed description of the algorithm. Simulation results show that the system can effectively separate vowel signals whose fundamental frequencies differ.
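The synthesis step can be pictured as summing harmonics at multiples of each estimated fundamental; the frame length, amplitude inputs, and phase handling below are illustrative assumptions rather than the paper's multi-band excitation model.

```python
import numpy as np

def synthesize_harmonics(f0_track, harmonic_amps, fs, frame_len):
    """Resynthesize one voice as a sum of harmonics.
    f0_track: F0 per frame (Hz); harmonic_amps: (n_frames, n_harmonics) amplitudes."""
    n_frames, n_harm = harmonic_amps.shape
    out = np.zeros(n_frames * frame_len)
    t = np.arange(frame_len) / fs
    phase = np.zeros(n_harm)
    for i in range(n_frames):
        frame = np.zeros(frame_len)
        for k in range(n_harm):
            w = 2 * np.pi * (k + 1) * f0_track[i]
            frame += harmonic_amps[i, k] * np.cos(w * t + phase[k])
            phase[k] += w * frame_len / fs   # keep phase continuous across frames
        out[i * frame_len:(i + 1) * frame_len] = frame
    return out
```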

8.
Inspired by acoustics research and by the way the human brain and ear process speech, a complete speech separation model simulating the central auditory system is established. A peripheral auditory model first performs multi-spectral analysis of the speech signal, a coincidence-neuron model then extracts features, and separation is finally completed in a neural model of the inferior colliculus. The model addresses the limitation that most existing speech recognition methods work only with a single source in low-noise conditions. Experimental results show that the model can separate speech in multi-source environments with high robustness. As research deepens, speech separation models based on human auditory characteristics promise broad applications.

9.
Besides perceiving loudness, pitch, timbre, and spatial location, the human auditory system can perceive and localize sounds of interest in noisy environments. Computational auditory scene analysis algorithms attempt to simulate this auditory processing with computer technology. This paper introduces the physiological principles of human hearing and explores and studies algorithms for auditory scene analysis.

10.
This paper analyzes the structure and working principles of Java3D and the Java Speech API (JSAPI), describes how to build a virtual scene with Java3D and provide speech interaction within it through JSAPI, and proposes a speech-driven spatial scene navigation model. Taking an automobile exhibition and query system as an example, it presents the model's main framework design and the key implementation techniques.

11.
Robustness is one of the most important topics for automatic speech recognition (ASR) in practical applications. Monaural speech separation based on computational auditory scene analysis (CASA) offers a solution to this problem. In this paper, a novel system is presented to separate the monaural speech of two talkers. Gaussian mixture models (GMMs) and vector quantizers (VQs) are used to learn the grouping cues on isolated clean data for each speaker. Given an utterance, speaker identification is first performed to identify the two speakers present in the utterance, then the factorial-max vector quantization model (MAXVQ) is used to infer the mask signals, and finally the utterance of the target speaker is resynthesized in the CASA framework. Recognition results on the 2006 speech separation challenge corpus show that the proposed system can improve the robustness of ASR significantly.
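The speaker identification stage can be pictured with off-the-shelf GMMs; the sketch below scores mixture frames directly against each speaker's clean-speech GMM and keeps the two best, which is a simplification of the paper's procedure (feature choice and model sizes are assumptions).

```python
from sklearn.mixture import GaussianMixture

def train_speaker_gmms(features_per_speaker, n_components=16):
    """Fit one GMM per speaker on that speaker's clean training features (e.g. log spectra)."""
    return {spk: GaussianMixture(n_components=n_components, covariance_type='diag').fit(feats)
            for spk, feats in features_per_speaker.items()}

def identify_two_speakers(gmms, mixture_features):
    """Rank speakers by average log-likelihood of the mixture frames and keep the top two."""
    scores = {spk: gmm.score(mixture_features) for spk, gmm in gmms.items()}
    return sorted(scores, key=scores.get, reverse=True)[:2]
```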

12.
Auditory Segmentation Based on Onset and Offset Analysis
A typical auditory scene in a natural environment contains multiple sources. Auditory scene analysis (ASA) is the process in which the auditory system segregates a scene into streams corresponding to different sources. Segmentation is a major stage of ASA by which an auditory scene is decomposed into segments, each containing signal mainly from one source. We propose a system for auditory segmentation by analyzing onsets and offsets of auditory events. The proposed system first detects onsets and offsets, and then generates segments by matching corresponding onset and offset fronts. This is achieved through a multiscale approach. A quantitative measure is suggested for segmentation evaluation. Systematic evaluation shows that most of the target speech, including unvoiced speech, is correctly segmented, and that target speech and interference are well separated into different segments.
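A rough sketch of the onset/offset detection idea for a single filterbank channel: peaks of the time derivative of a smoothed intensity envelope mark onsets, valleys mark offsets. The smoothing scale and threshold are assumptions; the actual system applies this at multiple scales and then matches fronts across channels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def detect_onsets_offsets(envelope, sigma=4, threshold=0.05):
    """Onsets/offsets as peaks and valleys of the derivative of a smoothed intensity envelope.
    envelope: intensity over time for one filterbank channel."""
    smoothed = gaussian_filter1d(np.log(envelope + 1e-12), sigma)
    deriv = np.gradient(smoothed)
    onsets, _ = find_peaks(deriv, height=threshold)     # sharp intensity increases
    offsets, _ = find_peaks(-deriv, height=threshold)   # sharp intensity decreases
    return onsets, offsets
```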

13.
A multistage neural model is proposed for an auditory scene analysis task: segregating speech from interfering sound sources. The core of the model is a two-layer oscillator network that performs stream segregation on the basis of oscillatory correlation. In the oscillatory correlation framework, a stream is represented by a population of synchronized relaxation oscillators, each of which corresponds to an auditory feature, and different streams are represented by desynchronized oscillator populations. Lateral connections between oscillators encode harmonicity and proximity in frequency and time. Prior to the oscillator network are a model of the auditory periphery and a stage in which mid-level auditory representations are formed. The model has been systematically evaluated using a corpus of voiced speech mixed with interfering sounds, and it produces improvements in terms of signal-to-noise ratio for every mixture. A number of issues, including biological plausibility and real-time implementation, are also discussed.
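The building block of such a network is a relaxation oscillator. A minimal sketch of a single Terman-Wang style oscillator integrated with Euler's method is given below; the parameter values and integration settings are illustrative assumptions, not the model's published configuration.

```python
import numpy as np

def relaxation_oscillator(I, steps=20000, dt=0.01, eps=0.02, gamma=6.0, beta=0.1):
    """Euler integration of a Terman-Wang style relaxation oscillator.
    I: external input; sufficiently positive input drives a periodic active/silent cycle."""
    x, y = -2.0, 0.0
    xs = np.empty(steps)
    for i in range(steps):
        dx = 3 * x - x ** 3 + 2 - y + I
        dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)
        x += dt * dx
        y += dt * dy
        xs[i] = x
    return xs
```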

14.
The representation of sound signals at the cochlear and auditory cortical levels has been studied as an alternative to classical analysis methods. In this work, we put forward a recently proposed feature extraction method called approximate auditory cortical representation, based on an approximation to the statistics of discharge patterns at the primary auditory cortex. The approach estimates a non-negative sparse coding with a combined dictionary of atoms. These atoms represent the spectro-temporal receptive fields of auditory cortical neurons and are calculated from the auditory spectrograms of the clean signal and the noise. Denoising is carried out by reconstructing the noisy signal while discarding the atoms corresponding to the noise. Experiments are presented using synthetic (chirps) and real (speech) data in the presence of additive noise. For the evaluation of the new method and its variants, we used two objective measures: the perceptual evaluation of speech quality and the segmental signal-to-noise ratio. Results show that the proposed method improves the quality of the signals, mainly under severe degradation.
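The denoising step can be illustrated with a simplified stand-in: decompose each spectrogram frame on a combined speech-plus-noise dictionary with non-negative least squares and rebuild it from the speech atoms only. This replaces the paper's sparse coding with plain NNLS, so treat it as a conceptual sketch rather than the method itself.

```python
import numpy as np
from scipy.optimize import nnls

def denoise_frame(noisy_frame, speech_atoms, noise_atoms):
    """Decompose one non-negative spectrogram frame on a combined dictionary,
    then rebuild it from the speech atoms only.
    speech_atoms / noise_atoms: (n_bins, n_atoms) columns of the combined dictionary."""
    dictionary = np.hstack([speech_atoms, noise_atoms])
    coeffs, _ = nnls(dictionary, noisy_frame)
    speech_coeffs = coeffs[:speech_atoms.shape[1]]
    return speech_atoms @ speech_coeffs
```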

15.
Direct word discovery from audio speech signals is a very difficult and challenging problem for a developmental robot. Human infants are able to discover words directly from speech signals, and, to understand this developmental capability using a constructive approach, it is very important to build a machine learning system that can acquire knowledge about words and phonemes, i.e. a language model and an acoustic model, autonomously in an unsupervised manner. To achieve this, the nonparametric Bayesian double articulation analyzer (NPB-DAA) with a deep sparse autoencoder (DSAE) is proposed in this paper. The NPB-DAA was originally proposed to achieve fully unsupervised direct word discovery from speech signals; however, its performance was still unsatisfactory, although it outperformed pre-existing unsupervised learning methods. In this paper, we integrate the NPB-DAA with the DSAE, a neural network model that can be trained in an unsupervised manner, and demonstrate its performance through an experiment on direct word discovery from auditory speech signals. The experiment shows that the combined method, the NPB-DAA with the DSAE, outperforms pre-existing unsupervised learning methods and achieves state-of-the-art performance. The proposed method also outperforms several standard speech recognizer-based methods with true word dictionaries.
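The DSAE component can be approximated by stacking simple sparse autoencoder layers. The sketch below uses an L1 activation penalty in place of the sparsity constraint typically used, so it is an assumption-laden simplification of the actual model, shown only to convey the idea of unsupervised sparse feature learning.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """One layer of a (deep) sparse autoencoder; layers can be stacked greedily."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))
        return self.decoder(h), h

def train_step(model, optimizer, batch, sparsity_weight=1e-3):
    """Reconstruction loss plus an L1 penalty on hidden activations to encourage sparsity."""
    optimizer.zero_grad()
    recon, hidden = model(batch)
    loss = nn.functional.mse_loss(recon, batch) + sparsity_weight * hidden.abs().mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```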

16.
A speech pre-processing algorithm is presented that improves speech intelligibility in noise for the near-end listener. The algorithm improves intelligibility by optimally redistributing the speech energy over time and frequency according to a perceptual distortion measure based on a spectro-temporal auditory model. Since this auditory model takes short-time information into account, transients receive more amplification than stationary vowels, which has been shown to be beneficial for the intelligibility of speech in noise. The proposed method is compared to unprocessed speech and two reference methods in an intelligibility listening test. Results show that the proposed method leads to significant intelligibility gains while still preserving quality. Although one of the reference methods obtained higher intelligibility gains, this came at the cost of decreased quality. Matlab code is provided.

17.
A conventional automatic speech recognizer does not perform well in the presence of multiple sound sources, while human listeners are able to segregate and recognize a signal of interest through auditory scene analysis. We present a computational auditory scene analysis system for separating and recognizing target speech in the presence of competing speech or noise. We estimate, in two stages, the ideal binary time–frequency (T–F) mask which retains the mixture in a local T–F unit if and only if the target is stronger than the interference within the unit. In the first stage, we use harmonicity to segregate the voiced portions of individual sources in each time frame based on multipitch tracking. Additionally, unvoiced portions are segmented based on an onset/offset analysis. In the second stage, speaker characteristics are used to group the T–F units across time frames. The resulting masks are used in an uncertainty decoding framework for automatic speech recognition. We evaluate our system on a speech separation challenge and show that our system yields substantial improvement over the baseline performance.
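The ideal binary mask described here has a direct formulation: keep a time-frequency unit when the target's local energy exceeds the interference's, i.e. when the local SNR is above a criterion (typically 0 dB). A minimal sketch:

```python
import numpy as np

def ideal_binary_mask(target_stft, interference_stft, lc_db=0.0):
    """Ideal binary T-F mask: 1 where the target is stronger than the interference
    (local SNR above the criterion lc_db), 0 elsewhere."""
    local_snr = 10 * np.log10((np.abs(target_stft) ** 2 + 1e-12) /
                              (np.abs(interference_stft) ** 2 + 1e-12))
    return (local_snr > lc_db).astype(float)

def apply_ibm(mixture_stft, mask):
    """Keep only the mixture T-F units where the mask is 1 before resynthesis."""
    return mixture_stft * mask
```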

18.
A method is described for estimating the fundamental frequencies of several concurrent sounds in polyphonic music and multiple-speaker speech signals. The method consists of a computational model of the human auditory periphery, followed by a periodicity analysis mechanism where fundamental frequencies are iteratively detected and canceled from the mixture signal. The auditory model needs to be computed only once, and a computationally efficient strategy is proposed for implementing it. Simulation experiments were made using mixtures of musical sounds and mixed speech utterances. The proposed method outperformed two reference methods in the evaluations and showed a high level of robustness in processing signals where important parts of the audible spectrum were deleted to simulate bandlimited interference. Different system configurations were studied to identify the conditions where pitch analysis using an auditory model is advantageous over conventional time or frequency domain approaches.
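The iterative detect-and-cancel loop can be sketched on a plain magnitude spectrum: pick the F0 candidate with the largest harmonic sum, attenuate its harmonics, and repeat. The candidate grid, harmonic count, and attenuation factor below are assumptions; the actual method operates on an auditory-model representation rather than a raw spectrum.

```python
import numpy as np

def harmonic_salience(spectrum, fs, n_fft, f0, n_harmonics=10):
    """Sum spectral magnitude at multiples of a candidate F0."""
    bins = [int(round(k * f0 * n_fft / fs)) for k in range(1, n_harmonics + 1)]
    return sum(spectrum[b] for b in bins if b < len(spectrum))

def iterative_f0_estimation(spectrum, fs, n_fft, n_sources=2, f0_range=(80, 400)):
    """Estimate several F0s by repeatedly picking the most salient candidate
    and cancelling (attenuating) its harmonics from the spectrum."""
    spectrum = spectrum.copy()
    candidates = np.arange(f0_range[0], f0_range[1], 1.0)
    f0s = []
    for _ in range(n_sources):
        saliences = [harmonic_salience(spectrum, fs, n_fft, f0) for f0 in candidates]
        best = candidates[int(np.argmax(saliences))]
        f0s.append(best)
        for k in range(1, 11):                       # cancel the detected harmonics
            b = int(round(k * best * n_fft / fs))
            if b < len(spectrum):
                spectrum[b] *= 0.2
    return f0s
```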
