Similar Documents
1.
Emotional speech synthesis is one of the hot topics in affective computing and speech signal processing, and accurate speech emotion analysis is a prerequisite for synthesizing high-quality emotional speech. This paper adopts the PAD emotion model as the quantitative model for emotion analysis: the speech in an emotional corpus is analyzed and clustered to obtain a PAD parameter model for each emotion. Emotional speech synthesized by an HMM-based synthesis system is then corrected through the PAD model, making the emotion parameters of the synthesized speech more accurate and thereby improving synthesis quality. Experiments show that the method noticeably improves the naturalness and emotional clarity of the synthesized speech, and that it also performs well across different speakers of the same gender.
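A minimal sketch of the PAD-correction idea, assuming per-emotion PAD centroids estimated from an annotated corpus; the function names and the interpolation weight alpha are illustrative, not the paper's:

```python
import numpy as np

# Hypothetical sketch: per-emotion PAD (Pleasure-Arousal-Dominance) centroids
# estimated from annotated utterances, then used to nudge the PAD vector of
# an HMM-synthesized utterance toward its target emotion.

def pad_centroids(pad_vectors, labels):
    """Mean PAD vector for each emotion label in the corpus."""
    return {e: pad_vectors[labels == e].mean(axis=0) for e in np.unique(labels)}

def correct_pad(predicted_pad, target_emotion, centroids, alpha=0.5):
    """Interpolate the predicted PAD vector toward the emotion centroid.

    alpha controls how strongly the synthesized parameters are pulled
    toward the corpus-derived model (alpha=1 replaces them entirely).
    """
    target = centroids[target_emotion]
    return (1 - alpha) * predicted_pad + alpha * target

# Example: a corpus of 100 utterances with 3-D PAD annotations
pads = np.random.rand(100, 3)
labels = np.random.choice(["happy", "sad", "angry"], size=100)
cents = pad_centroids(pads, labels)
print(correct_pad(np.array([0.4, 0.6, 0.5]), "happy", cents))
```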

2.
To address two problems with existing Tacotron-based Mongolian speech synthesis systems, namely (1) low synthesis efficiency and (2) low fidelity of the synthesized speech, this paper proposes MonTTS, a fully non-autoregressive, real-time, high-fidelity Mongolian speech synthesis model based on FastSpeech2. To improve the prosodic naturalness and fidelity of the synthesized Mongolian speech, three improvements tailored to the acoustic characteristics of Mongolian are proposed: (1) Mongolian phoneme sequences are used to represent pronunciation information; (2) a phoneme-level variance adaptor is introduced to learn long-term prosodic variation; (3) two duration-alignment methods are proposed, one based on Mongolian speech recognition and one on autoregressive speech synthesis. The paper also builds MonSpeech, currently the largest Mongolian speech synthesis corpus. Experimental results show that MonTTS reaches a Mean Opinion Score (MOS) of 4.53 for prosodic naturalness, significantly outperforming the state-of-the-art Tacotron-based Mongolian baseline and the FastSpeech2 baseline, and that it synthesizes at a real-time factor of 3.63×10⁻³, meeting real-time, high-fidelity requirements. All training scripts and pre-trained models are open-source (https://github.com/ttslr/MonTTS).
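The released MonTTS code is not reproduced here, but a hedged PyTorch sketch of a FastSpeech2-style phoneme-level variance adaptor (the layer sizes, bin count, and sigmoid quantization are assumptions) looks roughly like this:

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the released MonTTS code): a FastSpeech2-style
# phoneme-level variance adaptor that predicts pitch and energy per phoneme
# and adds their embeddings back into the hidden sequence.

class VarianceAdaptor(nn.Module):
    def __init__(self, d_model=256, n_bins=256):
        super().__init__()
        self.pitch_predictor = nn.Sequential(
            nn.Conv1d(d_model, d_model, 3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, 1, 1))
        self.energy_predictor = nn.Sequential(
            nn.Conv1d(d_model, d_model, 3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, 1, 1))
        self.pitch_embed = nn.Embedding(n_bins, d_model)
        self.energy_embed = nn.Embedding(n_bins, d_model)
        self.n_bins = n_bins

    def forward(self, h):                      # h: (batch, phonemes, d_model)
        x = h.transpose(1, 2)
        pitch = self.pitch_predictor(x).squeeze(1)    # (batch, phonemes)
        energy = self.energy_predictor(x).squeeze(1)
        # Quantize predicted values into bins, then embed and add back.
        p_idx = torch.clamp((pitch.sigmoid() * self.n_bins).long(), 0, self.n_bins - 1)
        e_idx = torch.clamp((energy.sigmoid() * self.n_bins).long(), 0, self.n_bins - 1)
        return h + self.pitch_embed(p_idx) + self.energy_embed(e_idx), pitch, energy

h = torch.randn(2, 17, 256)                    # 17 phonemes per utterance
out, pitch, energy = VarianceAdaptor()(h)
print(out.shape)                               # torch.Size([2, 17, 256])
```

Predicting variance at the phoneme level rather than per frame is what lets such an adaptor capture longer-span prosodic movement.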

3.
This paper proposes a unit-selection speech synthesis algorithm based on statistical acoustic models. In the training stage, spectral and F0 acoustic parameters are extracted from the speech data in the corpus and combined with its segmental and prosodic annotations to estimate context-dependent statistical acoustic models for each phone; the model structure used is the hidden Markov model. In the synthesis stage, the best units are selected under the criterion of maximizing the output likelihood of the acoustic models for the target sentence, and the final speech is generated by smoothly concatenating the waveforms of the selected units. On this basis, a Chinese speech synthesis system using initials and finals as the basic concatenation units is built, and listening tests demonstrate that the algorithm improves the naturalness of synthesized speech relative to the traditional algorithm.
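A toy dynamic-programming sketch of likelihood-driven unit selection; the target log-likelihoods would come from the context-dependent HMMs, and the join cost here is a stand-in:

```python
import numpy as np

# Toy sketch of likelihood-driven unit selection: for each target phone,
# candidate units are scored by an acoustic-model log-likelihood (stand-in
# values below), plus a join cost between consecutive units, and the best
# sequence is found by dynamic programming.

def select_units(target_loglik, join_cost):
    """target_loglik[t][i]: log-likelihood of candidate i for target slot t.
    join_cost[t][i][j]: cost of joining candidate i (slot t) to j (slot t+1).
    Returns the index of the chosen candidate for each slot."""
    T = len(target_loglik)
    best = [np.array(target_loglik[0], dtype=float)]
    back = []
    for t in range(1, T):
        scores = best[-1][:, None] - join_cost[t - 1] + np.array(target_loglik[t])[None, :]
        back.append(scores.argmax(axis=0))
        best.append(scores.max(axis=0))
    path = [int(best[-1].argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Two target slots, three candidate units each
loglik = [[-1.0, -0.2, -3.0], [-0.5, -2.0, -0.4]]
joins = [np.ones((3, 3)) * 0.1]
print(select_units(loglik, joins))  # -> [1, 2]
```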

4.
In speech synthesis research, emotional speech synthesis is a current hot topic. Among the many factors involved, building an appropriate prosody model and choosing good prosodic parameters are key: whether they are described correctly directly affects the output quality of emotional speech synthesis. To tackle the difficult problem of improving the naturalness of emotional speech, this paper analyzes the prosodic parameters that affect emotional speech synthesis and builds an association-rule-based F0 prosody model for emotional speech. By studying association rules and improving the Apriori data-mining algorithm, rules governing F0 variation are extracted from the prosodic parameters, providing guidance for unit selection in emotional speech synthesis.
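A minimal Apriori sketch over symbolic prosody features; the feature names and support threshold are invented for illustration:

```python
from itertools import combinations

# Minimal Apriori sketch over symbolic prosody features. Transactions are
# sets of discretized attributes (emotion label, F0-contour class, etc.);
# frequent itemsets suggest rules such as {angry, stressed} -> f0_rise.

def apriori(transactions, min_support=0.4):
    n = len(transactions)
    k_sets = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while k_sets:
        counts = {s: sum(s <= t for t in transactions) for s in k_sets}
        k_freq = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(k_freq)
        # Candidate generation: unions of frequent k-itemset pairs of size k+1
        keys = list(k_freq)
        k_sets = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    return frequent

data = [{"angry", "stressed", "f0_rise"},
        {"angry", "f0_rise"},
        {"sad", "f0_fall"},
        {"angry", "stressed", "f0_rise"}]
for itemset, support in apriori(data).items():
    print(set(itemset), round(support, 2))
```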

5.
An emotional speech synthesis system based on the time-domain pitch-synchronous overlap-add (TD-PSOLA) algorithm is proposed. Emotion rules are summarized from an analysis of an emotional speech corpus; on this basis, TD-PSOLA is used to modify the prosodic parameters of neutral speech, and a method for reshaping the tail of the F0 contour is proposed so that sentences can express rich emotion. Experiments show that the synthesized speech carries clear emotional coloring, demonstrating that the system can synthesize emotional speech in a simple and straightforward way and can help enrich and enliven speech-driven facial animation.
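A simplified TD-PSOLA sketch, assuming pitch marks are already available; real systems also handle unvoiced segments, duration scaling, and the tail-of-contour reshaping the paper proposes:

```python
import numpy as np

# Simplified TD-PSOLA: raise or lower F0 by taking two-period Hann-windowed
# grains at the analysis pitch marks and overlap-adding them at scaled
# intervals.

def td_psola(x, marks, f0_scale=1.2):
    """x: mono signal; marks: sorted sample indices of pitch marks."""
    marks = np.asarray(marks)
    out = np.zeros(len(x))
    t = float(marks[0])
    while t < marks[-1]:
        k = int(np.argmin(np.abs(marks - t)))          # nearest analysis mark
        left = marks[k] - marks[k - 1] if k > 0 else marks[k + 1] - marks[k]
        right = marks[k + 1] - marks[k] if k + 1 < len(marks) else left
        lo, hi = max(marks[k] - left, 0), min(marks[k] + right, len(x))
        grain = x[lo:hi] * np.hanning(hi - lo)
        start = int(t) - (marks[k] - lo)
        s0, s1 = max(start, 0), min(start + len(grain), len(out))
        out[s0:s1] += grain[s0 - start:s1 - start]
        t += right / f0_scale           # smaller spacing -> higher pitch
    return out

# 100 Hz synthetic vowel at 16 kHz, pitch-marked every 160 samples
sr = 16000
x = np.sin(2 * np.pi * 100 * np.arange(sr) / sr)
y = td_psola(x, np.arange(80, sr - 200, 160), f0_scale=1.25)  # ~125 Hz out
```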

6.
Emotional speech synthesis can enhance the expressiveness of speech. To make synthesized emotional speech more natural, a method combining time-domain pitch-synchronous overlap-add (PSOLA) and the discrete cosine transform (DCT) is proposed. Emotion rules are derived from a prosodic analysis of the happy, sad, and neutral speech in an emotional corpus, and the F0, energy, and duration of each syllable of the neutral speech are adjusted accordingly. The DCT is used to adjust the F0 of pitch-marked speech segments, and the PSOLA algorithm then modifies the F0 to approximate that of the target emotional speech. Experimental results show that speech synthesized with this method carries more emotion than speech synthesized with PSOLA alone, achieves a higher subjective emotion recognition rate, and has better overall quality.
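A hedged sketch of the DCT side of the idea: represent a syllable's F0 track with a few DCT coefficients and reshape it toward a target emotion. All scaling factors below are invented:

```python
import numpy as np
from scipy.fft import dct, idct

# Sketch of DCT-domain F0 contour shaping: a syllable's F0 track is reduced
# to a few DCT coefficients, then pushed toward a target emotion by scaling
# the mean (c0) and the slope/curvature terms before inverting.

def reshape_f0(f0, mean_scale=1.15, shape_scale=1.4, keep=4):
    c = dct(f0, norm="ortho")
    c[0] *= mean_scale                 # overall pitch level
    c[1:keep] *= shape_scale           # exaggerate contour movement
    c[keep:] = 0                       # drop fine detail / noise
    return idct(c, norm="ortho")

f0 = 180 + 20 * np.sin(np.linspace(0, np.pi, 50))   # toy syllable contour
print(reshape_f0(f0)[:5].round(1))
```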

7.
To address the communication barrier between people with speech disorders and healthy people, a neural-network-based method for converting sign language to emotional speech is proposed. First, a gesture corpus, a facial-expression corpus, and an emotional speech corpus are built. Deep convolutional neural networks then perform gesture recognition and facial-expression recognition, and, using Mandarin initials and finals as synthesis units, a speaker-adaptive deep-neural-network acoustic model and a speaker-adaptive hybrid long short-term memory acoustic model for emotional speech are trained. Finally, the context-dependent labels of the gesture semantics and the emotion labels corresponding to the facial expressions are fed to the emotional speech synthesis model, which synthesizes the corresponding emotional speech. Experimental results show gesture and facial-expression recognition rates of 95.86% and 92.42% respectively, and an EMOS score of 4.15 for the synthesized speech, indicating a high degree of emotional expressiveness; the method can support normal communication between people with speech disorders and healthy people.
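A stub-level sketch of the pipeline glue only; every model below is a placeholder, and the label format is invented:

```python
# Hedged sketch of the described pipeline (all models are stand-ins):
# a gesture recognizer and a facial-expression recognizer produce a
# semantic label and an emotion label, which drive an emotional TTS model.

def sign_to_emotional_speech(video_frames, gesture_cnn, face_cnn, tts):
    gesture = gesture_cnn(video_frames)       # e.g. "nihao"
    emotion = face_cnn(video_frames)          # e.g. "happy"
    labels = make_context_labels(gesture)     # initial/final context labels
    return tts(labels, emotion)               # emotion-conditioned synthesis

def make_context_labels(word):
    # toy stand-in for context-dependent initial/final labelling
    return [f"{word}:{i}" for i, _ in enumerate(word)]

wav = sign_to_emotional_speech(
    video_frames=None,
    gesture_cnn=lambda v: "nihao",
    face_cnn=lambda v: "happy",
    tts=lambda labels, emo: f"<speech emotion={emo} units={len(labels)}>")
print(wav)
```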

8.
A STRAIGHT-based speech focus synthesis method
杨金辉, 易中华, 王煦法. 《计算机工程》, 2005, 31(13): 46-47, 128
Targeting the characteristics of focus in Mandarin, experimental material close to natural speech style is designed. Based on an analysis of this material, a prosody model of focus is built using CART. At synthesis time, the prosody model generates the prosodic parameters of the utterance, and the STRAIGHT algorithm then synthesizes the focused speech. Evaluation of the synthesis results shows that the method produces speech focus with high naturalness.
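An illustrative CART prosody model using scikit-learn; the context features and F0 values below are fabricated for the example:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative CART prosody model: predict a syllable's mean F0 from
# contextual factors such as whether it carries the focus, its position
# in the phrase, and its tone category (all data invented).

X = np.array([[1, 0, 3], [0, 1, 2], [1, 2, 1], [0, 3, 4],
              [1, 1, 2], [0, 0, 1], [1, 3, 3], [0, 2, 4]])
y = np.array([240, 190, 250, 180, 235, 200, 245, 185])  # mean F0 in Hz

cart = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(cart.predict([[1, 1, 3]]))   # F0 for a focused, phrase-medial syllable
```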

9.
Research on an emotional speech synthesis algorithm based on prosodic feature parameters
To synthesize more natural emotional speech, an emotional speech synthesis system based on acoustic prosodic parameters and the time-domain pitch-synchronous overlap-add algorithm is proposed. The prosodic parameters of four emotions (anger, boredom, happiness, and sadness) in an emotional speech corpus are analyzed to build four emotion templates; waveform-concatenation synthesis with the time-domain pitch-synchronous overlap-add algorithm is then used to synthesize speech carrying the target emotion. Results show that adjusting the prosodic feature parameters of neutral speech in this waveform-concatenation framework yields fairly good emotional speech: the synthesized speech carries clear emotional coloring and achieves a high subjective emotion classification accuracy.
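The emotion templates can be pictured as per-emotion multiplicative offsets applied to neutral prosody before resynthesis; the factors below are invented placeholders, not the paper's measurements:

```python
# Invented per-emotion prosody templates: multiplicative factors applied to
# a neutral utterance's (F0, duration, energy) before PSOLA resynthesis.

TEMPLATES = {  # (f0_factor, duration_factor, energy_factor)
    "angry":  (1.25, 0.85, 1.30),
    "bored":  (0.90, 1.20, 0.85),
    "happy":  (1.20, 0.95, 1.15),
    "sad":    (0.85, 1.25, 0.80),
}

def apply_template(f0, dur, energy, emotion):
    kf, kd, ke = TEMPLATES[emotion]
    return f0 * kf, dur * kd, energy * ke

print(apply_template(200.0, 180.0, 1.0, "sad"))   # (170.0, 225.0, 0.8)
```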

10.
A prosodic boundary detection model based on prosodic features and syntactic information
Automatic detection of prosodic phrase boundaries matters both for the prosodic annotation of speech synthesis corpora and for automatic prosodic phrasing in speech recognition. By analyzing the acoustic and prosodic variables that influence prosodic phrase boundaries, this paper obtains a set of acoustic features, prosodic context parameters, and syntactic information strongly correlated with those boundaries. It also borrows the prosody-prediction idea from speech synthesis: assuming every syllable boundary is a non-boundary, the F0 of each syllable is predicted. Finally, a decision tree takes the prosodic context, syntactic information, and prediction results at each syllable boundary as input and decides whether that boundary is a prosodic phrase boundary. Experiments show that the method works well for text-dependent prosodic phrase boundary detection and can markedly improve the efficiency and consistency of corpus annotation for speech synthesis.
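A toy version of the decision-tree boundary detector; the features mirror the ones described (pause, lengthening, F0-prediction error, a syntactic flag), but all values are fabricated:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy decision-tree boundary detector: each syllable boundary is described
# by pause duration, final lengthening, the gap between observed F0 and the
# F0 predicted under the "no boundary" assumption, and a syntactic flag.

X = np.array([  # [pause_ms, lengthening, f0_pred_error, syntax_break]
    [120, 1.4, 18.0, 1], [5, 1.0, 2.0, 0], [90, 1.3, 12.0, 1],
    [10, 1.1, 4.0, 0], [150, 1.5, 22.0, 1], [0, 0.9, 1.0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = prosodic phrase boundary

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[80, 1.2, 15.0, 1]]))   # -> [1]
```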

11.
The Diplomat rapid-deployment speech-translation system is intended to allow naïve users to communicate across a language barrier, without strong domain restrictions, despite the error-prone nature of current speech and translation technologies. In addition, it should be deployable for new languages an order of magnitude more quickly than traditional technologies. Achieving this ambitious set of goals depends in large part on allowing the users to correct recognition and translation errors interactively. We present the Multi-Engine Machine Translation (MEMT) architecture, describing how it is well suited for such an application. We then discuss our approaches to rapid-deployment speech recognition and synthesis. Finally we describe our incorporation of interactive error correction throughout the system design. We have already developed working bidirectional Croatian-English and Spanish-English systems, and have Haitian Creole-English and Korean-English versions under development.

12.
Pruning a speech corpus, i.e. removing its redundancy, is an important problem in corpus-based speech synthesis. This paper proposes the concept of virtual variable-length substitution to compensate for the loss of variable-length units and, combining it with the usage frequency of unit variants during synthesis, constructs the corpus pruning algorithm StaRp-VPA. The algorithm can prune the corpus at any ratio. Experiments show that with pruning rates below 50% the naturalness of synthesized speech is almost unchanged, and that even above 50% it does not degrade severely.
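StaRp-VPA itself is not public; a sketch in the same spirit (rank units by selection frequency, prune the tail at a chosen rate, redirect pruned units to a same-phone substitute) might look like:

```python
# Sketch in the spirit of frequency-guided corpus pruning (not the paper's
# StaRp-VPA): keep the most-used units and map pruned units to their
# most-used same-phone substitute. Unit ids are "phone_index" strings here.

def prune_corpus(usage_counts, prune_rate=0.5):
    """usage_counts: {unit_id: times selected during synthesis trials}."""
    ranked = sorted(usage_counts, key=usage_counts.get, reverse=True)
    n_keep = max(1, int(len(ranked) * (1 - prune_rate)))
    kept = set(ranked[:n_keep])
    subst = {}
    for u in ranked[n_keep:]:
        phone = u.split("_")[0]
        cands = [k for k in ranked[:n_keep] if k.startswith(phone + "_")]
        subst[u] = cands[0] if cands else ranked[0]
    return kept, subst

counts = {"a_1": 40, "a_2": 3, "b_1": 25, "b_2": 9, "b_3": 1, "c_1": 12}
kept, subst = prune_corpus(counts, prune_rate=0.5)
print(kept, subst)   # {'a_1','b_1','c_1'} {'b_2':'b_1','a_2':'a_1','b_3':'b_1'}
```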

13.
In recent years, speech synthesis systems have allowed for the production of very high-quality voices, so research in this domain is now turning to the problem of integrating emotions into speech. However, the approach of constructing a separate speech synthesizer for each emotion has some limitations. First, it often requires an emotional-speech data set with many sentences; such data sets are very time-intensive and labor-intensive to complete. Second, training each of these models requires computers with large computational capabilities and considerable effort and time for model tuning. In addition, each per-emotion model fails to take advantage of the data sets of the other emotions. In this paper, we propose a new method to synthesize emotional speech in which the latent expressions of emotions are learned from a small data set of professional actors through a Flowtron model. We also provide a new method to build a speech corpus that is scalable and whose quality is easy to control. We used this data set to train a Tacotron 2 model, which then served as the pre-trained model for training the Flowtron model. We applied the method to synthesize Vietnamese speech with sadness and happiness. Mean opinion score (MOS) assessments yield 3.61 for sadness and 3.95 for happiness. In conclusion, the proposed method enables a high degree of automation and fast emotional sentence generation from a small emotional-speech data set.

14.
Highest quality synthetic voices remain scarce in both parametric synthesis systems and in concatenative ones. Much synthetic speech lacks naturalness, pleasantness and flexibility. While great strides have been made over the past few years in the quality of synthetic speech, there is still much work that needs to be done. Now the major challenges facing developers are how to provide optimal size, performance, extensibility, and flexibility, together with developing improved signal processing techniques. This paper focuses on issues of performance and flexibility against a background containing a brief evolution of speech synthesis; some acoustic, phonetic and linguistic issues; and the merits and demerits of two commonly used synthesis techniques: parametric and concatenative. Shortcomings of both techniques are reviewed. Methodological developments in the variable size, selection and specification of the speech units used in concatenative systems are explored and shown to provide a more positive outlook for more natural, bearable synthetic speech. Differentiating considerations in making and improving concatenative systems are explored and evaluated. Acoustic and sociophonetic criteria are reviewed for the improvement of variable synthetic voices, and a ranking of their relative importance is suggested. Future rewards are weighed against current technical and developmental challenges. The conclusion indicates some of the current and future applications of TTS.

15.
This paper presents a study on the importance of short-term speech parameterizations for expressive statistical parametric synthesis. Assuming a source-filter model of speech production, the analysis is conducted over spectral parameters, here defined as features which represent a minimum-phase synthesis filter, and some excitation parameters, which are features used to construct a signal that is fed to the minimum-phase synthesis filter to generate speech. In the first part, different spectral and excitation parameters that are applicable to statistical parametric synthesis are tested to determine which ones are the most emotion dependent. The analysis is performed through two methods proposed to measure the relative emotion dependency of each feature: one based on K-means clustering, and another based on Gaussian mixture modeling for emotion identification. Two commonly used forms of parameters for the short-term speech spectral envelope, the Mel cepstrum and the Mel line spectrum pairs are utilized. As excitation parameters, the anti-causal cepstrum, the time-smoothed group delay, and band-aperiodicity coefficients are considered. According to the analysis, the line spectral pairs are the most emotion dependent parameters. Among the excitation features, the band-aperiodicity coefficients present the highest correlation with the speaker's emotion. The most emotion dependent parameters according to this analysis were selected to train an expressive statistical parametric synthesizer using a speaker and language factorization framework. Subjective test results indicate that the considered spectral parameters have a bigger impact on the synthesized speech emotion when compared with the excitation ones.
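A compact sketch of the GMM-based emotion-dependency measure using scikit-learn; the data here are synthetic, and the paper's actual feature streams would replace them:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of the GMM-based emotion-dependency test: fit one Gaussian mixture
# per emotion on a candidate feature stream, classify held-out frames by
# maximum likelihood, and treat higher identification accuracy as stronger
# emotion dependency of that feature.

def emotion_dependency(train, test, n_components=2):
    """train/test: {emotion: (n_frames, dim) feature arrays}."""
    gmms = {e: GaussianMixture(n_components, covariance_type="diag",
                               random_state=0).fit(x)
            for e, x in train.items()}
    emotions = list(gmms)
    hits = total = 0
    for e, x in test.items():
        scores = np.stack([gmms[k].score_samples(x) for k in emotions])
        hits += (np.array(emotions)[scores.argmax(axis=0)] == e).sum()
        total += len(x)
    return hits / total    # higher -> feature is more emotion dependent

rng = np.random.default_rng(0)
train = {"neutral": rng.normal(0, 1, (200, 4)), "angry": rng.normal(2, 1, (200, 4))}
test = {"neutral": rng.normal(0, 1, (50, 4)), "angry": rng.normal(2, 1, (50, 4))}
print(round(emotion_dependency(train, test), 2))
```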

16.
Based on an analysis and review of existing speech coding schemes, a five-layer structural model for speech coding systems is proposed, together with the concept of "obtaining the excitation code at the receiving end using side information."

17.
Deep speech signal and information processing: research progress and prospects
This paper first gives a brief introduction to deep learning and then reviews in detail the research progress in its main application directions within speech signal and information processing: speech recognition, speech synthesis, and speech enhancement. For speech recognition, it covers deep-neural-network acoustic modeling, model training on big data, and speaker adaptation; for speech synthesis, several deep-learning-based synthesis methods; for speech enhancement, several typical deep-neural-network enhancement schemes. The paper closes with an outlook on likely future research topics for deep learning in this field.

18.
This paper describes the architecture and implementation techniques of the speech subsystem of the YH-ITM intelligent tool machine. The subsystem was developed on top of a TI speech card in an IBM-PC.

19.
Text-to-speech (TTS) conversion is a hot topic in Chinese information processing and a key technology for human-machine speech communication. This paper gives a preliminary analysis of the whole process of Chinese TTS conversion and presents a speech-database-based TTS method and its implementation. It specifically covers the construction of the speech corpus and analyzes the content and technical difficulties of each stage: text input, word segmentation, text normalization, phonetic annotation, prosody processing, and speech synthesis.
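A skeleton of the described pipeline with every stage reduced to a stub; the segmentation, normalization, annotation, and concatenation below are deliberately naive placeholders:

```python
# Skeleton of a speech-database-driven Chinese TTS pipeline; each stage is
# a minimal stub standing in for the processing the paper discusses.

def text_to_speech(text, unit_db):
    words = tokenize(text)            # word segmentation
    words = normalize(words)          # expand digits, symbols, abbreviations
    syllables = annotate(words)       # pinyin + tone for each syllable
    prosody = plan_prosody(syllables) # phrase breaks, F0, duration targets
    return concatenate(unit_db, syllables, prosody)

def tokenize(text):
    return list(text)                       # naive: one character per word

def normalize(words):
    digits = dict(zip("0123456789", "零一二三四五六七八九"))
    return [digits.get(w, w) for w in words]

def annotate(words):
    return [(w, "pinyin?") for w in words]  # placeholder G2P lookup

def plan_prosody(syllables):
    return [{"dur_ms": 200, "f0_hz": 220} for _ in syllables]

def concatenate(db, syllables, prosody):
    return b"".join(db.get(s, b"") for s, _ in syllables)

print(text_to_speech("你好2024", {"你": b"\x01", "好": b"\x02"}))
```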
