Similar Documents
19 similar documents found (search time: 187 ms)
1.
This paper surveys models of speech production, discussing speech sound models and speech movement models in turn. Speech sound models study the acoustic principles of speech production and use audio signal processing techniques to reconstruct the speech waveform; differing views on the relationship between the voice source and resonance, and different methods for analyzing resonance, have given rise to three kinds of models: spectral analysis models, formant models, and physiological production models. Speech movement models study the motion of the articulators and use image signal processing techniques to reconstruct articulatory gestures; depending on the modeling approach, they fall into three classes: physiological models, geometric feature models, and statistical parametric models.

2.
An audio/video two-stream speech recognition model based on articulatory features   Total citations: 1 (self-citations: 0, cited by others: 1)
An audio/video two-stream dynamic Bayesian network (DBN) speech recognition model based on articulatory features is constructed; the conditional probability relations of its nodes and the asynchrony constraints between articulatory features are defined. Speech recognition experiments are then run on an audio-visual connected-digit speech database, and recognition performance at different signal-to-noise ratios (SNRs) is compared with audio-only and video-only single-stream DBN models. The results show that at low SNR the articulatory-feature-based audio/video two-stream model gives the best recognition performance, and its recognition rate degrades only gently as noise increases, indicating strong robustness to noise and suitability for speech recognition in low-SNR environments.
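The abstract above turns on combining audio and video evidence and re-weighting the streams as the SNR drops. The sketch below is only an illustration of that weighting idea with NumPy, not the paper's DBN implementation; the function name, the candidate scores, and the `audio_weight` value are all assumptions.

```python
import numpy as np

def fuse_stream_scores(audio_loglik, video_loglik, audio_weight):
    """Weighted log-likelihood fusion of audio and video streams.

    audio_loglik, video_loglik: arrays of shape (n_classes,), one
    log-likelihood per candidate word/digit from each single-stream model.
    audio_weight: in [0, 1]; lowered under noisy (low-SNR) conditions so
    the visual stream dominates.
    """
    fused = audio_weight * audio_loglik + (1.0 - audio_weight) * video_loglik
    return int(np.argmax(fused))  # index of the recognized word

# Illustrative use: three candidate digits, noisy audio -> trust video more.
audio_ll = np.array([-12.3, -10.8, -11.5])
video_ll = np.array([-9.1, -10.2, -8.7])
print(fuse_stream_scores(audio_ll, video_ll, audio_weight=0.3))
```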

3.
To improve the detection of pathological speech in hearing-impaired patients, a detection method that fuses articulatory movement features with acoustic features is proposed. The differences in articulatory movement between pathological and normal speech are analyzed; two movement features, displacement and velocity, are extracted along with three acoustic features: Mel-frequency cepstral coefficients, fundamental frequency, and formants. The two feature sets are normalized, reduced in dimensionality with kernel principal component analysis, and their detection performance is tested with support vector machines, random forests, and multilayer perceptrons. Experimental results show that the articulatory movement features are as effective as the acoustic features, and the fused feature set outperforms either single set, confirming that the method improves pathological speech detection.
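The pipeline described above (normalize the fused features, reduce them with kernel PCA, then test SVM, random forest, and MLP classifiers) maps directly onto scikit-learn. The sketch below follows that recipe on placeholder data; the feature dimensions, component count, and hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# X: fused articulatory (displacement, velocity) + acoustic (MFCC, F0,
# formant) features per utterance; y: 1 = pathological, 0 = normal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))          # placeholder feature matrix
y = rng.integers(0, 2, size=200)        # placeholder labels

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "RandomForest": RandomForestClassifier(n_estimators=200),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(),           # feature normalization
                         KernelPCA(n_components=20, kernel="rbf"),
                         clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```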

4.
This paper proposes a method for articulatory movement detection and command-word recognition based on Doppler microwave radar. The method uses the Doppler characteristics of microwave radar to detect the small movements of facial muscles during articulation, enabling command-word recognition that does not rely on the acoustic speech signal. A radar-based articulatory movement detection system is first designed and implemented, and a two-speaker command-word recognition database is built with it. Radar-data classification based on support vector machines and convolutional neural networks is then studied, and the command-word recognition performance of different models and feature combinations is compared under single-speaker and multi-speaker modeling. Experimental results show that the data acquisition system can effectively detect articulatory movements, and the convolutional neural network classifier achieves command-word recognition accuracy above 90%.
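The abstract states that a convolutional network over the radar signal reaches over 90% command-word accuracy. As a hedged illustration only, the small 1-D CNN below shows what such a classifier could look like in PyTorch; the input shape, layer sizes, and number of command words are assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

class RadarCommandCNN(nn.Module):
    """Toy 1-D CNN over a radar Doppler feature sequence."""
    def __init__(self, n_channels=1, n_commands=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> fixed-size vector
        )
        self.classifier = nn.Linear(32, n_commands)

    def forward(self, x):              # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)      # logits over command words

# Illustrative forward pass on a batch of 4 sequences, 256 frames each.
model = RadarCommandCNN()
logits = model(torch.randn(4, 1, 256))
print(logits.shape)                    # torch.Size([4, 10])
```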

5.
To reduce the impact of noisy environments on assessment performance, PNCC features are introduced into Mandarin pronunciation assessment; on a database of recordings from real Mandarin proficiency tests, the score correlation improves by 6.6% over traditional MFCC features. On this basis, ways of splitting Mandarin acoustic models are studied, and a split into initial-plus-glide and final models is applied to pronunciation assessment; the assessment system using this split lowers the overall error rate by 5.6% and raises the correlation with expert scores by 0.056. The choice of the optimal number of model states is also discussed, and two hybrid scoring schemes are proposed, mixing state numbers across models and combining scores from different configurations, which improve the correlation by 0.021 and 0.017, respectively, over 3-state models under the same conditions.
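One of the ideas above is combining scores from differently configured acoustic models and judging the result by its correlation with expert scores. The sketch below shows only that combination-and-correlation step with invented numbers; the weights and score values are placeholders.

```python
import numpy as np

def combine_scores(scores_a, scores_b, weight_a=0.5):
    """Weighted combination of machine scores from two model configurations."""
    return weight_a * scores_a + (1.0 - weight_a) * scores_b

# Placeholder machine scores from two configurations and expert scores.
scores_3state = np.array([72.0, 85.0, 64.0, 90.0, 78.0])
scores_mixed  = np.array([70.0, 88.0, 66.0, 93.0, 75.0])
expert        = np.array([71.0, 90.0, 60.0, 95.0, 74.0])

combined = combine_scores(scores_3state, scores_mixed, weight_a=0.4)
corr = np.corrcoef(combined, expert)[0, 1]   # Pearson correlation with experts
print(f"correlation with expert scores: {corr:.3f}")
```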

6.
This paper studies the 3D visualization of tongue movement during Mandarin articulation. A detailed 3D tongue model is built from magnetic resonance data; on this basis, EMA data from three points on the tongue dorsum surface are used as the driving source, and a spring-mesh technique reproduces the tongue movement of Mandarin articulation. To validate the modeling and motion synthesis methods, computer graphics techniques are used to simulate the detailed tongue motion, which is compared against the X-ray films "Articulatory Movements of the Speech Organs in Standard Chinese" (普通话发音器官动作特征) recorded by linguists. Experiments show that the synthesized 3D tongue motion agrees with real tongue movement and has broad application prospects.
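The tongue animation above drives a spring mesh from three EMA control points. The sketch below is a generic explicit mass-spring integration step, not the paper's implementation; node layout, stiffness, damping, and time step are assumptions.

```python
import numpy as np

def spring_mesh_step(pos, vel, edges, rest_len, anchors, anchor_pos,
                     k=50.0, damping=0.9, dt=0.01):
    """One explicit integration step of a mass-spring mesh.

    pos, vel: (N, 3) node positions and velocities.
    edges: list of (i, j) index pairs; rest_len: rest length per edge.
    anchors: node indices pinned to EMA-measured positions anchor_pos.
    """
    force = np.zeros_like(pos)
    for (i, j), l0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-9
        f = k * (length - l0) * d / length    # Hooke's law along the edge
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force)        # unit node masses assumed
    pos = pos + dt * vel
    pos[anchors] = anchor_pos                 # EMA points drive the mesh
    return pos, vel

# Minimal 2-node example: one spring, one node pinned to an EMA sample.
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = spring_mesh_step(pos, vel, edges=[(0, 1)], rest_len=[1.0],
                            anchors=[0], anchor_pos=np.array([[0.0, 0.0, 0.0]]))
print(pos[1])
```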

7.
Synthesis and dynamic simulation of 3D Mandarin articulatory movements   Total citations: 2 (self-citations: 0, cited by others: 2)
郑红娜, 朱云, 王岚, 陈辉. 《集成技术》 2013, 2(1): 23-28
Motivated by speech rehabilitation for hearing-impaired children, error-prone pronunciation texts and easily confused text pairs were obtained from audio recordings of hearing-impaired children's speech. A data-driven 3D talking-head articulation system was designed, driven by articulatory movement data collected with an EMA AG500 device, which realistically simulates Mandarin articulation; it lets hearing-impaired children observe the movements of the speaker's lips and tongue, assisting pronunciation training and correcting error-prone pronunciations. A manual evaluation shows that the 3D talking head can effectively simulate the movements of the articulators inside and outside the mouth during speech. In addition, a phoneme-based CM coarticulation model was used to synthesize the articulatory movements of the error-prone texts, and the error between synthesized and measured movements was measured by RMS, giving a mean RMS error of 1.25 mm.
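The abstract reports a mean RMS error of 1.25 mm between synthesized and measured articulatory movements. Computing that kind of error is straightforward, as in the sketch below; the trajectories here are random placeholders.

```python
import numpy as np

def rms_error(synth, real):
    """RMS error between synthesized and measured trajectories.

    synth, real: arrays of shape (n_frames, n_dims), e.g. EMA coil
    coordinates in millimetres.
    """
    return float(np.sqrt(np.mean((synth - real) ** 2)))

# Placeholder trajectories: 100 frames of 3-D coordinates for one coil.
rng = np.random.default_rng(0)
real = rng.normal(size=(100, 3))
synth = real + rng.normal(scale=1.0, size=(100, 3))
print(f"RMS error: {rms_error(synth, real):.2f} mm (illustrative data)")
```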

8.
To address the variability of the speech signal caused by coarticulation in Mandarin speech recognition, a syllable-based acoustic modeling method is proposed. Syllable-based acoustic models are first built to handle sound changes between initials and finals within a syllable, and syllable-internal biphone models are used to initialize the syllable model parameters so as to ease training data sparsity; transition models between syllables are then introduced to handle cross-syllable coarticulation. Mandarin continuous speech recognition experiments on the "863-test" set show a relative character error rate reduction of 12.13%, demonstrating the effectiveness of combining syllable-based acoustic models with inter-syllable transition models for handling coarticulation in Mandarin.

9.
To study how the clustering parameters in a pattern-movement-based description of system dynamics affect the regulation performance of a production process, indices describing the dynamic regulation performance of the system and the regulation performance of product quality are given, and the relationship between clustering parameters and regulation performance is analyzed and established. A pattern-movement-based modeling method for a class of complex production processes is introduced, and a state-feedback controller design based on LMIs is given. A maximum-entropy clustering algorithm based on particle swarm optimization is proposed, and the regulation performance indices are defined and extracted. Using a newly proposed covering classification neural network, a mapping from the parameters of the maximum-entropy clustering method to the regulation performance is built, and the generalization ability of the classification network is analyzed. Simulations with real sinter production data show that the proposed method can analyze and establish the relationship between regulation performance and clustering parameters, and can provide guidance for choosing clustering parameters in actual production.

10.
石祥滨, 李怡颖, 刘芳, 代钦. 《计算机应用研究》 2021, 38(4): 1235-1239, 1276
To address the problems that two-stream action recognition ignores the relationships between feature channels and that the features carry much redundant spatiotemporal information, an end-to-end action recognition model with a two-stream spatiotemporal attention mechanism, T-STAM, is proposed to make full use of the key spatiotemporal information in a video. First, a channel attention mechanism is introduced into the two base networks; channel information is recalibrated by modeling the dependencies between feature channels, improving the expressiveness of the features. Second, a CNN-based temporal attention model is proposed that learns an attention score for each frame with few parameters, focusing on frames with pronounced motion. A multi-spatial attention model is also proposed that computes attention scores for the positions in each frame from different viewpoints to extract several motion-salient regions, and the spatiotemporal features are fused to further strengthen the video representation. Finally, the fused features are fed into the classification network, and the outputs of the two streams are fused with different weights to obtain the action recognition result. Experimental results on the HMDB51 and UCF101 datasets show that T-STAM can effectively recognize actions in video.
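The channel attention step described above (modeling dependencies between feature channels to recalibrate them) is commonly realized as a squeeze-and-excitation style block. The PyTorch sketch below shows that generic pattern rather than T-STAM's exact module; the reduction ratio and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel recalibration."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.fc = nn.Sequential(                     # excitation: channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # recalibrated features

# Illustrative use on a batch of per-stream feature maps.
feat = torch.randn(8, 64, 14, 14)
print(ChannelAttention(64)(feat).shape)              # torch.Size([8, 64, 14, 14])
```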

11.
An HMM-based speech-to-video synthesizer   Total citations: 1 (self-citations: 0, cited by others: 1)
Emerging broadband communication systems promise a future of multimedia telephony, e.g. the addition of visual information to telephone conversations. It is useful to consider the problem of generating the critical information useful for speechreading, based on existing narrowband communications systems used for speech. This paper focuses on the problem of synthesizing visual articulatory movements given the acoustic speech signal. In this application, the acoustic speech signal is analyzed and the corresponding articulatory movements are synthesized for speechreading. This paper describes a hidden Markov model (HMM)-based visual speech synthesizer. The key elements in the application of HMMs to this problem are the decomposition of the overall modeling task into key stages and the judicious determination of the observation vector's components for each stage. The main contribution of this paper is a novel correlation HMM model that is able to integrate independently trained acoustic and visual HMMs for speech-to-visual synthesis. This model allows increased flexibility in choosing model topologies for the acoustic and visual HMMs. Moreover, the proposed model reduces the amount of training data compared to early integration modeling techniques. Results from objective experimental analysis show that the proposed approach can reduce time-alignment errors by 37.4% compared to the conventional temporal scaling method. Furthermore, subjective results indicated that the proposed model can increase speech understanding.

12.
This paper presents an investigation into ways of integrating articulatory features into hidden Markov model (HMM)-based parametric speech synthesis. In broad terms, this may be achieved by estimating the joint distribution of acoustic and articulatory features during training. This may in turn be used in conjunction with a maximum-likelihood criterion to produce acoustic synthesis parameters for generating speech. Within this broad approach, we explore several variations that are possible in the construction of an HMM-based synthesis system which allow articulatory features to influence acoustic modeling: model clustering, state synchrony and cross-stream feature dependency. Performance is evaluated using the RMS error of generated acoustic parameters as well as formal listening tests. Our results show that the accuracy of acoustic parameter prediction and the naturalness of synthesized speech can be improved when shared clustering and asynchronous-state model structures are adopted for combined acoustic and articulatory features. Most significantly, however, our experiments demonstrate that modeling the dependency between these two feature streams can make speech synthesis systems more flexible. The characteristics of synthetic speech can be easily controlled by modifying generated articulatory features as part of the process of producing acoustic synthesis parameters.
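The cross-stream dependency discussed above amounts to estimating a joint acoustic-articulatory distribution and then conditioning on (possibly modified) articulatory features. For a single joint Gaussian the conditional mean has a closed form; the sketch below illustrates it with invented dimensions and statistics, and is not the paper's HMM-based system.

```python
import numpy as np

def conditional_acoustic_mean(mu, sigma, articulatory, n_acoustic):
    """E[acoustic | articulatory] under a joint Gaussian N(mu, sigma).

    The joint vector is ordered [acoustic; articulatory].
    mu: (Da + Dr,), sigma: (Da + Dr, Da + Dr), articulatory: (Dr,).
    """
    mu_a, mu_r = mu[:n_acoustic], mu[n_acoustic:]
    s_ar = sigma[:n_acoustic, n_acoustic:]     # acoustic-articulatory covariance
    s_rr = sigma[n_acoustic:, n_acoustic:]     # articulatory covariance
    return mu_a + s_ar @ np.linalg.solve(s_rr, articulatory - mu_r)

# Toy example: 2 acoustic dims, 2 articulatory dims.
mu = np.zeros(4)
sigma = np.array([[1.0, 0.2, 0.5, 0.0],
                  [0.2, 1.0, 0.0, 0.4],
                  [0.5, 0.0, 1.0, 0.1],
                  [0.0, 0.4, 0.1, 1.0]])
print(conditional_acoustic_mean(mu, sigma,
                                articulatory=np.array([1.0, -0.5]),
                                n_acoustic=2))
```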

13.
Articulatory features describe how articulators are involved in making sounds. Speakers often use a more exaggerated way to pronounce accented phonemes, so articulatory features can be helpful in pitch accent detection. Instead of using the actual articulatory features obtained by direct measurement of articulators, we use the posterior probabilities produced by multi-layer perceptrons (MLPs) as articulatory features. The inputs of MLPs are frame-level acoustic features pre-processed using the split temporal context-2 (STC-2) approach. The outputs are the posterior probabilities of a set of articulatory attributes. These posterior probabilities are averaged piecewise within the range of syllables and eventually act as syllable-level articulatory features. This work is the first to introduce articulatory features into pitch accent detection. Using the articulatory features extracted in this way, together with other traditional acoustic features, can improve the accuracy of pitch accent detection by about 2%.
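The feature extraction step described above, averaging frame-level MLP posteriors within each syllable to obtain syllable-level articulatory features, is easy to express directly. The sketch below assumes syllable boundaries are given as frame indices; names and shapes are illustrative.

```python
import numpy as np

def syllable_level_features(frame_posteriors, syllable_bounds):
    """Average frame-level posteriors over each syllable.

    frame_posteriors: (n_frames, n_attributes) MLP outputs, one row per frame.
    syllable_bounds: list of (start_frame, end_frame) pairs, end exclusive.
    Returns an (n_syllables, n_attributes) array of syllable-level features.
    """
    return np.stack([frame_posteriors[s:e].mean(axis=0)
                     for s, e in syllable_bounds])

# Toy example: 10 frames, 5 articulatory attributes, 2 syllables.
post = np.random.default_rng(0).random((10, 5))
print(syllable_level_features(post, [(0, 6), (6, 10)]).shape)  # (2, 5)
```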

14.
In this paper, a new bidirectional neural network for better acoustic-articulatory inversion mapping is proposed. The model is motivated by the parallel structure of the human brain, which processes information through forward-reverse connections. In other words, there would be feedback from the articulatory system to the acoustic signals emitted by that organ. Inspired by this mechanism, a new bidirectional model is developed to map speech representations to articulatory features. Formation of attractor dynamics in such a bidirectional model is first carried out by training the reference speaker subspace as the continuous attractor. Then, it is used to recognize other speakers' speech. In fact, the structure and training of this bidirectional model is designed in such a way that the network learns to denoise the signal step by step, using properties of the attractors it has formed. In this work, the efficiency of a nonlinear feedforward network is compared to the same one with a bidirectional connection. The bidirectional model increases the accuracy by approximately 3% (from 62.09% to 64.91%) in the phone recognition process.

15.
This paper presents an articulatory modelling approach to convert acoustic speech into realistic mouth animation. We directly model the movements of articulators, such as lips, tongue, and teeth, using a dynamic Bayesian network (DBN)-based audio-visual articulatory model (AVAM). A multiple-stream structure with a shared articulator layer is adopted in the model to synchronously associate the two building blocks of speech, i.e., audio and video. This model not only describes the synchronization between visual articulatory movements and audio speech, but also reflects the linguistic fact that different articulators evolve asynchronously. We also present a Baum-Welch DBN inversion (DBNI) algorithm to generate optimal facial parameters from audio given the trained AVAM under a maximum likelihood (ML) criterion. Extensive objective and subjective evaluations on the JEWEL audio-visual dataset demonstrate that, compared with phonemic HMM approaches, facial parameters estimated by our approach follow the true parameters more accurately, and the synthesized facial animation sequences are so lively that 38% of them are indistinguishable from real ones.

16.
Traditional statistical models for speech recognition have mostly been based on a Bayesian framework using generative models such as hidden Markov models (HMMs). This paper focuses on a new framework for speech recognition using maximum entropy direct modeling, where the probability of a state or word sequence given an observation sequence is computed directly from the model. In contrast to HMMs, features can be asynchronous and overlapping. This model therefore allows for the potential combination of many different types of features, which need not be statistically independent of each other. In this paper, a specific kind of direct model, the maximum entropy Markov model (MEMM), is studied. Even with conventional acoustic features, the approach already shows promising results for phone-level decoding. The MEMM significantly outperforms traditional HMMs in word error rate when used as stand-alone acoustic models. Preliminary results combining the MEMM scores with HMM and language model scores show modest improvements over the best HMM speech recognizer.

17.
Speech-driven articulator movement synthesis based on deep neural networks   Total citations: 1 (self-citations: 0, cited by others: 1)
唐郅, 侯进. 《自动化学报》 2016, 42(6): 923-930
A speech-driven articulator motion synthesis method based on deep neural networks is implemented and applied to speech-driven talking-avatar animation. Deep neural networks (DNNs) learn the mapping between acoustic features and articulator position information; from the input speech the system estimates the articulator trajectories and renders them on a 3D virtual character. First, artificial neural networks (ANNs) and DNNs are compared under a range of parameter settings to obtain the best network; next, different lengths of contextual acoustic features are tried and the number of hidden units adjusted to find the best context length; finally, with the optimal network structure, the articulator trajectories output by the DNN drive the articulator motion synthesis to animate the virtual character. Experiments show that the animation synthesis method implemented here is efficient and lifelike.
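The abstract describes a DNN that maps a window of contextual acoustic frames to articulator positions. The PyTorch sketch below shows that kind of regression network with an assumed context length, acoustic feature dimension, and number of articulator outputs; none of these values come from the paper.

```python
import torch
import torch.nn as nn

CONTEXT = 5       # frames of acoustic context on each side (assumed)
N_ACOUSTIC = 40   # acoustic features per frame (assumed)
N_ARTIC = 12      # articulator position outputs, e.g. EMA coordinates (assumed)

class AcousticToArticulatoryDNN(nn.Module):
    """Feed-forward DNN: windowed acoustic features -> articulator positions."""
    def __init__(self):
        super().__init__()
        in_dim = (2 * CONTEXT + 1) * N_ACOUSTIC
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, N_ARTIC),          # regression output, no activation
        )

    def forward(self, x):                     # x: (batch, (2*CONTEXT+1)*N_ACOUSTIC)
        return self.net(x)

# Illustrative training step with random data and an MSE loss.
model = AcousticToArticulatoryDNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, (2 * CONTEXT + 1) * N_ACOUSTIC)
y = torch.randn(32, N_ARTIC)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optim.step()
print(float(loss))
```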

18.
Acoustic modeling based on hidden Markov models (HMMs) is employed by state-of-the-art stochastic speech recognition systems. Although HMMs are a natural choice to warp the time axis and model the temporal phenomena in the speech signal, their conditional independence properties limit their ability to model spectral phenomena well. In this paper, a new acoustic modeling paradigm based on augmented conditional random fields (ACRFs) is investigated and developed. This paradigm addresses some limitations of HMMs while maintaining many of the aspects which have made them successful. In particular, the acoustic modeling problem is reformulated in a data driven, sparse, augmented space to increase discrimination. Acoustic context modeling is explicitly integrated to handle the sequential phenomena of the speech signal. We present an efficient framework for estimating these models that ensures scalability and generality. In the TIMIT phone recognition task, a phone error rate of 23.0% was recorded on the full test set, a significant improvement over comparable HMM-based systems.

19.
Audio-visual speech modeling for continuous speech recognition   Total citations: 3 (self-citations: 0, cited by others: 3)
This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module, an acoustic module, and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally, the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to a 56% error rate, noise-robust RASTA-PLP (relative spectra) acoustic features to a 7.2% error rate, and combined noise-robust acoustic and visual features to a 2.5% error rate.

