19 similar documents found; search took 109 ms
1.
A Mandarin Speech Synthesis System Supporting Stress Synthesis  Total citations: 1, self-citations: 1, other citations: 1
For stress prediction and realization in a unit-selection Mandarin speech synthesis system, this paper adopts a knowledge-guided, data-driven modeling strategy. First, a stress detector optimized against perceptual results is used to annotate the speech database automatically. Next, a prosody prediction model that supports stress prediction is trained on the stress-annotated database. This stress-aware prosody model then replaces the corresponding model in the original synthesis system, yielding a synthesis system that supports stress. Analysis of the experimental results shows that the annotations produced by the perceptually optimized stress detector are reliable, that the stress-aware prosodic-acoustic prediction model is sound, and that the new system can synthesize speech with stress variation.
2.
3.
As an emerging direction in speech synthesis, emotional speech synthesis combines knowledge from physiology, psychology, linguistics, and information science. It can be applied to text reading, information query and delivery, and computer-assisted instruction, integrating spoken-language analysis and emotion analysis with computing technology and laying the groundwork for human-centered, personalized speech synthesis systems. Current work on emotional speech synthesis falls into two categories: rule-based synthesis and waveform-concatenation synthesis. The research divides into two parts, emotion analysis and speech synthesis. Emotion analysis mainly involves collecting speech data for different emotions, extracting acoustic features, and analyzing the relationship between acoustic features and emotion; speech synthesis mainly involves building an emotion conversion model and using it to synthesize emotional speech.
4.
5.
6.
The relative stress of syllables within a Mandarin prosodic word strongly affects the overall shape of the F0 contour. Drawing on mathematical optimization models for F0 generation, this paper proposes a chi-squared estimation method that jointly optimizes the continuity, smoothness, shape, and mean of the F0 contour of a Mandarin prosodic word formed by concatenating the tone contours of isolated monosyllables, realizing F0 optimization under the effect of stress. On a harmonic-plus-noise synthesis system, the optimization was evaluated on the F0 contours of Mandarin trisyllabic prosodic words covering 64 tone combinations without the neutral tone and 10 combinations ending in the neutral tone, demonstrating how three control parameters (the smoothing factor smooth, the syllable stress strength stress, and the syllable F0 shape distortion Distortion) control the overall F0 shape, and showing their effective value ranges. Informal listening tests indicate a clear improvement in the naturalness of the synthesized speech.
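The paper's chi-squared method is not reproduced here, but the trade-off it describes, keeping a concatenated F0 contour close to the original syllable contours while enforcing smoothness across syllable boundaries, can be sketched as a simple quadratic-penalty problem. This is a minimal stand-in, not the paper's actual formulation; the penalty weight `lam` plays roughly the role of the smooth parameter.

```python
import numpy as np

def smooth_f0(f0, lam=10.0):
    """Trade fidelity to the concatenated contour against smoothness by
    minimizing ||f - f0||^2 + lam * ||D2 f||^2, where D2 is the
    second-difference operator. Closed form: f = (I + lam*D2^T D2)^{-1} f0."""
    f0 = np.asarray(f0, dtype=float)
    n = len(f0)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]   # discrete second difference
    A = np.eye(n) + lam * (D2.T @ D2)
    return np.linalg.solve(A, f0)
```

With `lam = 0` the original contour is returned unchanged; increasing `lam` progressively irons out the discontinuity where two syllable tone contours meet.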
7.
Data-Driven Chinese Text-to-Visual-Speech Synthesis  Total citations: 7, self-citations: 0, other citations: 7
A text-to-visual-speech (TTVS) system can enhance speech intelligibility and make human-computer interfaces friendlier. This paper presents a data-driven (sample-based) Chinese text-to-visual-speech system that generates new visual speech by concatenating short video segments. An effective method is given for constructing a visual confusion tree over Chinese initials and finals, and a coarticulation model based on the visual confusion tree and a hardness factor is proposed; the model is used for corpus selection in the analysis stage and for unit selection in the synthesis stage. Obvious differences between the two frames at a concatenation boundary are smoothed by image morphing. Combined with an existing text-to-speech (TTS) system, a complete Chinese text-to-visual-speech synthesis system is implemented.
8.
9.
To improve the naturalness and stability of synthesized speech, a synthesis method combining HMMs with deep neural networks is proposed, with Uyghur as the experimental language. End-to-end deep-learning synthesis is slow to generate and lacks stability and controllability, but produces highly natural speech, whereas HMM-based methods are stable yet less natural than end-to-end methods. The front end therefore uses a hidden Markov model (HMM) to extract the linguistic features inherent to Uyghur, while the back end builds an autoregressive model in a deep neural network framework. Front-end text analysis obtains linguistic features with the HMM; back-end synthesis uses different neural network models, which are compared experimentally. Evaluation of the results shows that the HMM+BiLSTM synthesis method performs best.
10.
For context-dependent label generation in Mandarin statistical parametric speech synthesis, a six-layer context-dependent label format is designed, covering the initial/final, syllable, word, prosodic-word, prosodic-phrase, and utterance layers. Input Chinese text is normalized, and syntactic analysis yields sentence structure and word segmentation; grapheme-to-phoneme conversion yields each character's initial, final, and tone; a TBL (Transformation-Based error-driven Learning) algorithm predicts prosodic-word and prosodic-phrase boundaries. From these, the initial/final information and contextual structure of every character are obtained, producing the context-dependent labels required for statistical parametric synthesis. A hidden Markov model (HMM) based statistical parametric Mandarin synthesis system using initials and finals as synthesis units was built, and subjective and objective experiments evaluated the effect of different label information on synthesized speech quality. The results show that the richer the context-dependent label information, the better the quality of the synthesized speech.
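The exact six-layer label format is not given in this listing, so the sketch below assembles a hypothetical HTS-style full-context label from the kinds of fields the abstract names (initial/final context, tone, syllable position, prosodic-word and prosodic-phrase position). All field names and the layout are illustrative assumptions, not the paper's format.

```python
def context_label(prev_ph, cur_ph, next_ph, tone,
                  syl_pos, n_syl_in_word, pw_pos, pp_pos):
    """Assemble one full-context label line in the HTS style:
    phone triplet, then slash-separated context fields (layout is
    hypothetical)."""
    return (f"{prev_ph}-{cur_ph}+{next_ph}"
            f"/T:{tone}"                # lexical tone of the syllable
            f"/S:{syl_pos}_{n_syl_in_word}"  # syllable position in word
            f"/PW:{pw_pos}"             # position in prosodic word
            f"/PP:{pp_pos}")            # position in prosodic phrase
```

In an HMM synthesizer, one such line would be emitted per initial/final unit, and decision-tree clustering would query the individual fields.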
11.
Kasiprasad Mannepalli Panyam Narahari Sastry Maloji Suman 《International Journal of Speech Technology》2016,19(1):87-93
Speech processing is an important research area that includes speaker recognition, speech synthesis, speech codecs, and speech noise reduction. Many languages have different speaking styles called accents or dialects, and identifying the accent before speech recognition can improve the performance of speech recognition systems; the more accents a language has, the more crucial accent recognition becomes. Telugu is an Indian language widely spoken in the southern part of India, and its main accents are coastal Andhra, Telangana, and Rayalaseema. In the present work, speech samples for both training and testing were collected from native speakers of the different Telugu accents. Mel-frequency cepstral coefficient (MFCC) features are extracted from each training and test sample, and a Gaussian mixture model (GMM) is then used to classify the speech by accent. The overall accuracy of the proposed system in identifying a speaker's region from accent is 91%.
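The MFCC-plus-GMM pipeline can be sketched as follows. For brevity, the sketch fits a single diagonal-covariance Gaussian per accent (a 1-component GMM) on synthetic 13-dimensional feature vectors rather than real MFCCs; the accent names follow the abstract, everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_diag_gaussian(X):
    """One diagonal-covariance component per accent; a stand-in for a
    multi-component GMM trained with EM."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Synthetic 13-dim "MFCC" frames for the three accents in the abstract.
accents = ["coastal_andhra", "telangana", "rayalaseema"]
centers = {"coastal_andhra": 0.0, "telangana": 3.0, "rayalaseema": -3.0}
models = {a: fit_diag_gaussian(rng.normal(centers[a], 1.0, (200, 13)))
          for a in accents}

def classify(x):
    """Assign the accent whose model gives the highest log-likelihood."""
    return max(accents, key=lambda a: log_likelihood(x, *models[a]))
```

A real system would train multi-component GMMs with EM (e.g. scikit-learn's `GaussianMixture`) on MFCC frames extracted from audio, and classify an utterance by the total frame log-likelihood under each accent model.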
12.
It is well known that Mandarin Chinese is strongly influenced by many regional accents, yet accented Mandarin speech data are scarce. An important goal of Mandarin speech recognition is therefore to model the acoustic variation caused by accents appropriately. This paper presents a study of deep neural network acoustic modeling techniques that use accent information implicitly and explicitly. Multi-accent modeling approaches, including mixed-condition training, multi-accent decision-tree state tying, DNN tandem, and multi-level adaptive network (MLAN) tandem HMM modeling, are combined and compared. An improved MLAN tandem HMM system that explicitly exploits accent information is proposed and applied to a data-scarce accented Mandarin recognition task covering four regional accents. After sequence discriminative training and adaptation, the system significantly outperforms the baseline accent-independent DNN tandem system, with absolute character error rate reductions of 0.8% to 1.5% (6% to 9% relative).
13.
《Computer Speech and Language》2007,21(2):247-265
In this paper, an improved method of model complexity selection for nonnative speech recognition is proposed using maximum a posteriori (MAP) estimation of bias distributions. An algorithm is described for estimating hyper-parameters of the priors of the bias distributions, and an automatic accent classification algorithm is also proposed for integration with dynamic model selection and adaptation. Experiments were performed on the WSJ1 task with American English speech, British-accented speech, and Mandarin Chinese-accented speech. Results show that using prior knowledge of accents enabled more reliable estimation of bias distributions with very small amounts of adaptation speech, or with none at all. Recognition results show that the new approach is superior to the previous maximum expected likelihood (MEL) method, especially when adaptation data are very limited.
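The core idea behind MAP estimation with a conjugate prior can be illustrated for a single Gaussian mean: the accent prior supplies pseudo-observations, so the estimate degrades gracefully when adaptation data are scarce or absent. This is a textbook sketch, not the paper's full bias-distribution formulation.

```python
def map_mean(xbar, n, mu0, tau):
    """MAP estimate of a Gaussian mean under a conjugate Gaussian prior.
    The prior mean mu0 counts as tau pseudo-observations: with no
    adaptation data (n = 0) the estimate is the prior itself, and it
    moves toward the sample mean xbar as n grows."""
    return (tau * mu0 + n * xbar) / (tau + n)
```

This is exactly why the abstract reports reliable estimates "with very small amounts of adaptation speech, or with none at all": the prior dominates until enough data arrive.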
14.
This paper provides an introduction to the acoustic-phonetic structure of English regional accents and presents a signal processing method for modeling and transforming the acoustic correlates of English accents, for example from British English to American English. The focus is on modeling the intonation and duration correlates of accents, as the modeling of formants is described in previous papers (Yan et al., 2007; Vaseghi et al., 2009). The intonation correlates of accents are modeled with the statistics of a set of broad features of the pitch contour. Statistical models of phoneme durations and word speaking rates are obtained from automatic segmentation of word and phoneme boundaries in speech databases. A contribution of this paper is the use of accent synthesis for comparative evaluation of the causal effects of the acoustic correlates of accent. The differences between the acoustic-phonetic realizations of British Received Pronunciation (RP), Broad Australian (BAU), and General American (GenAm) English accents are modeled and used in an accent transformation and synthesis method to evaluate the influence of formants, pitch, and duration on conveying accents.
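A minimal version of transforming a pitch contour toward another accent's statistics is moment matching in the log-F0 domain. The paper models richer pitch-contour features; this is only the simplest instance of the idea, and the statistics used are illustrative.

```python
import numpy as np

def transform_pitch(f0_src, src_stats, tgt_stats):
    """Map a source-accent pitch contour (Hz) to target-accent statistics
    by matching log-F0 mean and standard deviation. src_stats/tgt_stats
    are (mean, std) pairs in the log-F0 domain."""
    mu_s, sd_s = src_stats
    mu_t, sd_t = tgt_stats
    lf0 = np.log(np.asarray(f0_src, dtype=float))
    return np.exp(mu_t + (lf0 - mu_s) * sd_t / sd_s)
```

Shape-level features of the contour (rises, falls, accent peaks) survive the transform; only the overall register and range are moved to the target accent's distribution.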
15.
Motivated by the fact that ethnic-minority speakers in Yunnan typically speak Mandarin with a clearly perceptible ethnic accent, this paper introduces a Yunnan ethnic-minority accented Mandarin speech database built for research on non-native continuous Mandarin speech recognition. On this basis, studies of pronunciation variation patterns, speaker adaptation, and non-native accent identification are carried out, forming an important complement to research on speaker diversity in Mandarin speech recognition.
16.
17.
To address multi-accent recognition in Mandarin speech recognition, a hybrid end-to-end model combining connectionist temporal classification (CTC) and multi-head attention is proposed, trained with a multi-objective criterion and decoded jointly. Experimental analysis shows that as the CTC weight in the hybrid architecture decreases and the encoder deepens, the hybrid model learns better on accented data. A network with an encoder-decoder architecture 48 layers deep was trained, and the resulting model surpasses all previous end-to-end models, reaching a 5.6% character error rate and a 26.2% sentence error rate on the 200 h accented dataset released by DataTang. The experiments demonstrate that the proposed end-to-end model exceeds the recognition rate of ordinary end-to-end models and offers a degree of advancement in accented Mandarin recognition.
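The CTC half of the hybrid objective can be made concrete with the standard forward recursion over the blank-extended label sequence. This is a generic CTC sketch, not the paper's implementation.

```python
import numpy as np

def ctc_forward(probs, labels, blank=0):
    """P(labels | per-frame posteriors), summed over all alignments.
    probs has shape (T, V); labels is the collapsed target sequence."""
    T = probs.shape[0]
    ext = [blank]                     # blank-extended sequence: _l1_l2_...
    for l in labels:
        ext += [l, blank]
    S = len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]                       # stay
            if s > 0:
                a += alpha[t - 1, s - 1]              # advance one
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]              # skip a blank
            alpha[t, s] = a * probs[t, ext[s]]
    return alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)
```

In the hybrid model, this CTC score is interpolated with the attention decoder's score during training and joint decoding, e.g. score = w * log P_ctc + (1 - w) * log P_att for a weight w.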
18.
Kamal Jambi Hassanin Al-Barhamtoshy Wajdi Al-Jedaibi Mohsen Rashwan Sherif Abdou 《Computer Systems Science and Engineering》2022,43(3):1155-1173
Any natural language may have dozens of accents. Even when a word has the same phonemic form, speakers of different accents produce acoustically distinct signals. Among the most common issues in speech processing are discrepancies in pronunciation, accent, and enunciation. This study examines the detection, correction, and summarization of accent defects in the English speech of average Arabic speakers, and then discusses the key approaches and structure used to address both accent flaws and pronunciation issues. The proposed SpeakCorrect computerized interface employs a state-of-the-art speech recognition system and analyzes pronunciation errors with a speech decoder. Some of the most important types of pronunciation changes significant for speech recognition are handled, and the accent defects defining such differences are presented; the suggested technique consequently increases the speaker's accuracy. SpeakCorrect uses 100 h of phonetically prepared recordings to construct a pronunciation instruction repository. These prerecorded sets are used to train hidden Markov models (HMMs) as well as weighted-graph systems, and the resulting speech is quite clear and can be considered natural. The interface is optimized for use with an integrated phonetic pronunciation dataset and for analyzing and identifying speech faults in Saudi and Egyptian dialects; it detects and analyzes utterance faults and assists English learners in correcting them, overcoming problems, and improving their pronunciation.
19.
Angkititrakul P. Hansen J.H.L. 《IEEE transactions on audio, speech, and language processing》2006,14(2):634-646
It is suggested that algorithms capable of estimating and characterizing accent knowledge would provide valuable information for developing more effective speech systems such as speech recognition, speaker identification, audio stream tagging in spoken document retrieval, channel monitoring, and voice conversion. Accent knowledge could be used to select alternative pronunciations in a lexicon, engage adaptation for acoustic modeling, or bias a language model in large-vocabulary speech recognition. In this paper, we propose a text-independent automatic accent classification system using phone-based models. Algorithm formulation begins with a series of experiments focused on capturing spectral evolution information as potential accent-sensitive cues. Alternative subspace representations using principal component analysis and linear discriminant analysis with projected trajectories are considered. Finally, an experimental study compares the spectral trajectory model framework to a traditional hidden Markov model recognition framework on an accent-sensitive word corpus. System evaluation uses a corpus representing five English speaker groups: native American English, and English spoken with Mandarin Chinese, French, Thai, and Turkish accents, for both male and female speakers.