Similar Articles
18 similar articles found (search time: 250 ms)
1.
Speech synthesis technology is maturing. To improve the quality of synthesized emotional speech, a method that combines end-to-end emotional speech synthesis with prosody correction is proposed. Starting from emotional speech synthesized by a Tacotron model, prosodic parameters are modified to strengthen the emotional expressiveness of the synthesis system. The Tacotron model is first trained on a large neutral corpus and then fine-tuned on a small emotional corpus to synthesize emotional speech. The Praat acoustic analysis tool is then used to analyze the prosodic features of the emotional speech in the corpus and to summarize parameter patterns for different emotional states. Finally, guided by these patterns, the fundamental frequency (F0), duration, and energy of the corresponding Tacotron-synthesized emotional speech are adjusted to make the emotional expression more precise. Objective emotion recognition experiments and subjective evaluations show that the method can synthesize emotional speech that is fairly natural and more expressive.
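As a rough illustration of the final prosody-correction step (not the authors' implementation), the sketch below adjusts the pitch, duration, and energy of an already synthesized waveform with librosa; the scaling factors are hypothetical values standing in for the rules derived from the Praat analysis.

```python
import numpy as np
import librosa

def apply_prosody_rules(wav_path, pitch_semitones=2.0, rate=1.1, gain=1.2):
    """Shift pitch, stretch duration, and scale energy of a synthesized utterance.

    The three factors are placeholders for emotion-specific rules
    (e.g. happiness: higher F0, faster speaking rate, more energy).
    """
    y, sr = librosa.load(wav_path, sr=None)
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=pitch_semitones)  # F0 correction
    y = librosa.effects.time_stretch(y, rate=rate)                      # duration correction
    y = np.clip(y * gain, -1.0, 1.0)                                    # energy correction
    return y, sr
```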

2.
Emotional speech synthesis is a current focus of speech synthesis research. Among the many factors involved, building an appropriate prosody model and selecting good prosodic parameters are key: how accurately they are described directly affects the output quality of the synthesized emotional speech. To improve the naturalness of emotional speech, this work analyzes the prosodic parameters that affect emotional speech synthesis and builds an association-rule-based F0 prosody model for emotional speech. By studying association rules and improving the Apriori data-mining algorithm, rules governing F0 variation are extracted from the prosodic parameters, providing guidance for unit selection in emotional speech synthesis.
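A minimal sketch of the rule-mining idea, assuming the prosodic measurements have already been discretized into categorical labels. The label names and the simple one-to-one rule miner below are illustrative stand-ins for the paper's Apriori-based procedure.

```python
import pandas as pd

# One row per utterance: emotion label plus discretized prosodic labels
# (hypothetical values standing in for the paper's F0 parameters).
df = pd.DataFrame({
    "emotion":  ["angry", "angry", "angry", "sad",    "sad",    "sad"],
    "F0_mean":  ["high",  "high",  "high",  "low",    "low",    "mid"],
    "F0_range": ["wide",  "wide",  "narrow","narrow", "narrow", "narrow"],
})

def mine_rules(df, min_support=0.3, min_confidence=0.8):
    """Mine simple rules 'emotion -> prosodic label' (a toy stand-in for Apriori)."""
    n = len(df)
    rules = []
    for col in df.columns.drop("emotion"):
        counts = df.groupby(["emotion", col]).size()
        for (emo, value), cnt in counts.items():
            support = cnt / n
            confidence = cnt / (df["emotion"] == emo).sum()
            if support >= min_support and confidence >= min_confidence:
                rules.append((f"emotion={emo}", f"{col}={value}", support, confidence))
    return rules

for rule in mine_rules(df):
    print(rule)   # e.g. ('emotion=angry', 'F0_mean=high', 0.5, 1.0)
```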

3.
As an emerging direction in speech synthesis, emotional speech synthesis draws on physiology, psychology, linguistics, and information science. It can be applied to text reading, information query and broadcasting, and computer-assisted instruction, and it integrates spoken-language analysis, emotion analysis, and computing technology, laying the foundation for human-centered, personalized speech synthesis systems. Current work on emotional speech synthesis falls into two categories: rule-based synthesis and waveform-concatenation synthesis. The research divides into two parts, emotion analysis and speech synthesis. Emotion analysis mainly involves collecting speech data for different emotions, extracting acoustic features, and analyzing the relation between acoustic features and emotions; speech synthesis mainly involves building an emotion conversion model and using it to perform the synthesis.

4.
Research on an emotional speech synthesis algorithm based on prosodic feature parameters
To synthesize more natural emotional speech, an emotional speech synthesis system based on acoustic prosodic parameters of the speech signal and the time-domain pitch-synchronous overlap-add (TD-PSOLA) algorithm is proposed. Prosodic parameters of four emotions in an emotional speech database (anger, boredom, happiness, and sadness) are analyzed to build four emotion templates, and waveform-concatenation synthesis with TD-PSOLA is then used to synthesize speech carrying the target emotion. Experimental results show that adjusting the prosodic feature parameters of neutral speech with the waveform-concatenation approach can produce fairly good emotional speech: the synthesized target speech carries clearly perceptible emotion and achieves a high subjective emotion classification accuracy.
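For reference, a highly simplified pitch-synchronous overlap-add sketch. It assumes a constant analysis F0 and uniformly spaced pitch marks, which a real TD-PSOLA system does not; it is meant only to show the overlap-add mechanics.

```python
import numpy as np

def psola_pitch_shift(x, fs, f0=120.0, factor=1.2):
    """Raise (factor > 1) or lower (factor < 1) the pitch of a voiced segment.

    Simplification: pitch marks lie on a fixed grid of one analysis period;
    a real system would detect marks per period and handle unvoiced frames.
    """
    T0 = int(round(fs / f0))              # analysis pitch period (samples)
    Ts = int(round(T0 / factor))          # synthesis pitch period
    window = np.hanning(2 * T0)
    ana_marks = np.arange(T0, len(x) - T0, T0)
    syn_marks = np.arange(T0, len(x) - T0, Ts)
    y = np.zeros_like(x, dtype=float)
    for s in syn_marks:
        a = ana_marks[np.argmin(np.abs(ana_marks - s))]   # nearest analysis mark
        y[s - T0:s + T0] += x[a - T0:a + T0] * window     # two-period overlap-add
    return y

# Toy usage: raise the pitch of a 120 Hz test tone by 20%
fs = 16000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 120 * t)
shifted = psola_pitch_shift(tone, fs, f0=120.0, factor=1.2)
```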

5.
An emotional speech synthesis system based on the time-domain pitch-synchronous overlap-add (TD-PSOLA) algorithm is proposed. Emotion rules are summarized from an emotional speech corpus, and TD-PSOLA is then used to modify the prosodic parameters of neutral speech accordingly. A method for reshaping the tail of the F0 contour is also proposed so that sentences convey richer emotion. Experiments show that the synthesized speech carries clearly perceptible emotion, demonstrating that the system can synthesize emotional speech in a simple and straightforward way and can help make facial speech animation more expressive and lively.
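The tail-reshaping idea can be sketched directly on an F0 contour; the ramp shape and the boundary (last 30% of the contour) are illustrative assumptions, not the paper's rule.

```python
import numpy as np

def reshape_f0_tail(f0, tail_ratio=0.3, delta_hz=40.0):
    """Bend the final portion of an F0 contour, e.g. a rising tail for surprise or joy.

    f0         : 1-D array of F0 values (Hz), unvoiced frames set to 0
    tail_ratio : fraction of the contour that is reshaped
    delta_hz   : total rise (positive) or fall (negative) reached at the endpoint
    """
    f0 = np.asarray(f0, dtype=float).copy()
    n_tail = max(2, int(len(f0) * tail_ratio))
    ramp = np.linspace(0.0, delta_hz, n_tail)      # gradual offset toward delta_hz
    voiced = f0[-n_tail:] > 0                      # leave unvoiced frames untouched
    f0[-n_tail:][voiced] += ramp[voiced]
    return f0
```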

6.
Emotional speech synthesis can enhance the expressiveness of speech. To make synthesized emotional speech more natural, a method that combines time-domain pitch-synchronous overlap-add (PSOLA) with the discrete cosine transform (DCT) is proposed. Prosodic parameters of happy, sad, and neutral speech in an emotional speech database are analyzed to derive emotion rules, and the F0, energy, and duration of each syllable of neutral speech are adjusted accordingly. The DCT is used to adjust the F0 of pitch-marked speech segments, and the PSOLA algorithm then modifies the F0 so that it approaches that of the target emotional speech. Experimental results show that speech synthesized with this method carries more emotion than speech synthesized with PSOLA alone, achieves a higher subjective emotion recognition rate, and has better overall quality.
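A hedged sketch of the DCT step: represent the neutral and target F0 contours by their DCT coefficients, blend the low-order neutral coefficients toward the target, and reconstruct. The choice of 10 coefficients and full blending weight is illustrative, not the paper's setting.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_f0_morph(f0_neutral, f0_target, n_coef=10, alpha=1.0):
    """Move a neutral F0 contour toward a target emotional contour in DCT space."""
    n = len(f0_neutral)
    # Resample the target contour to the neutral contour's length
    target = np.interp(np.linspace(0, 1, n),
                       np.linspace(0, 1, len(f0_target)), f0_target)
    c_neu = dct(np.asarray(f0_neutral, dtype=float), norm="ortho")
    c_tgt = dct(target, norm="ortho")
    # Blend only the low-order coefficients, which carry the coarse contour shape
    c_neu[:n_coef] = (1 - alpha) * c_neu[:n_coef] + alpha * c_tgt[:n_coef]
    return idct(c_neu, norm="ortho")
```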

7.
This paper summarizes and analyzes key results and methods in emotional visual speech synthesis from recent years. According to the synthesis mechanism, emotional visual speech synthesis techniques are systematically categorized and reviewed from two perspectives, image-based methods and model-based methods, and their respective advantages, disadvantages, and performance differences are compared. The realism and emotional expressiveness achieved by the visual speech synthesized in the surveyed work are discussed in detail. Finally, issues that should be considered when synthesizing emotionally expressive visual speech are identified, pointing out directions for further research.

8.
Traditional English translation systems have difficulty accurately recognizing a speaker's voice and tone. An English translation system based on speech recognition and tone-aware speech synthesis is therefore designed; its terminal comprises modules for speech recognition, language translation, tone recognition, tone conversion, and tone-aware speech synthesis. A CVAE-based tone speech synthesis model synthesizes tone-bearing speech for the recognized and translated English sentences, enabling the design and implementation of a portable English translation terminal. Experiments show that the error between the F0 contour of speech synthesized by the CVAE model and that of the original speech is only 0.02, i.e., the two F0 contours are very close. In subjective evaluation, the model achieves a naturalness MOS of 3.84 with a variance of only 0.004, and an average emotional-tone consistency score of 3.72 with a variance of 0.002. Overall, the model produces speech that is both diverse and accurate, and in deployment it improves the speech recognition and tone-aware synthesis performance of the English translation terminal.
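A minimal conditional VAE sketch in PyTorch, conditioning an acoustic feature vector on a tone or emotion label. The feature dimension, label set, and network sizes are assumptions for illustration, not the system described in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Conditional VAE: encode/decode acoustic features x given an emotion/tone label y."""
    def __init__(self, x_dim=80, y_dim=4, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + y_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x, y):
        h = self.enc(torch.cat([x, y], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(torch.cat([z, y], dim=-1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

# Usage sketch: x is a batch of acoustic feature frames, y a one-hot tone/emotion label
x = torch.randn(8, 80)
y = F.one_hot(torch.randint(0, 4, (8,)), num_classes=4).float()
model = CVAE()
x_hat, mu, logvar = model(x, y)
loss = cvae_loss(x_hat, x, mu, logvar)
loss.backward()
```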

9.
To bridge the communication gap between people with speech impairments and unimpaired speakers, a neural-network-based method for converting sign language to emotional speech is proposed. First, a gesture corpus, a facial-expression corpus, and an emotional speech corpus are built. Deep convolutional neural networks are then used for gesture recognition and facial expression recognition, and, with Mandarin initials and finals as synthesis units, a speaker-adaptive deep neural network acoustic model and a speaker-adaptive hybrid long short-term memory network acoustic model for emotional speech are trained. Finally, context-dependent labels derived from the gesture semantics and the emotion labels corresponding to the facial expressions are fed into the emotional speech synthesis model to synthesize the corresponding emotional speech. Experimental results show gesture and facial-expression recognition rates of 95.86% and 92.42%, respectively, and an EMOS score of 4.15 for the synthesized emotional speech, indicating a high degree of emotional expressiveness; the method can support everyday communication between people with speech impairments and unimpaired speakers.
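A compact PyTorch CNN classifier of the kind used in the gesture and facial-expression recognition stages; the input size, number of classes, and layer sizes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy convolutional classifier, e.g. for 64x64 grayscale gesture/expression images."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage sketch
model = SmallCNN(n_classes=10)
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 10])
```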

10.
Prosodic features are one of the main carriers of emotional information in the speech signal. To support research on emotional speech synthesis, this paper extracts and analyzes prosodic features of Mandarin emotional speech and builds a prosodic-feature prediction model for emotional speech using a generalized regression neural network (GRNN). Prosodic features are then predicted from the textual context information of the extracted test-set data, and corresponding experimental results are obtained. The results show that the prosodic features of emotional speech are predicted well.
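A GRNN is essentially Nadaraya-Watson kernel regression with a Gaussian kernel, so it can be sketched in a few lines of numpy; the spread parameter and toy data below are assumptions.

```python
import numpy as np

def grnn_predict(X_train, Y_train, X_test, sigma=0.5):
    """Generalized regression neural network: kernel-weighted average of training targets."""
    X_train, Y_train, X_test = map(np.atleast_2d, (X_train, Y_train, X_test))
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to training patterns
        w = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian pattern-layer activations
        preds.append(w @ Y_train / np.sum(w))        # summation / division layers
    return np.array(preds)

# Toy usage: predict two prosodic targets (e.g. F0 mean, duration ratio) from context features
X_tr = np.random.rand(20, 5)
Y_tr = np.random.rand(20, 2)
print(grnn_predict(X_tr, Y_tr, np.random.rand(3, 5)).shape)  # (3, 2)
```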

11.
A prosody conversion method for emotional speech based on the PAD three-dimensional emotion model is proposed. Eleven typical emotions are selected, a text corpus is designed, and a speech corpus is recorded; the PAD values of the speech corpus are annotated using psychological methods, and the F0 contours of the syllables of the emotional speech are modeled with the five-level tone model. On this basis, a prosody conversion model is built with a generalized regression neural network (GRNN) that predicts the prosodic features of emotional speech from the PAD values of the emotion and the contextual parameters of the sentence, and the STRAIGHT algorithm is used to perform the conversion. Subjective evaluation shows that the eleven types of emotional speech produced by the proposed method achieve an average EMOS (Emotional Mean Opinion Score) of 3.6 and convey the intended emotions.
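STRAIGHT is not openly distributed, so the final analyze-modify-resynthesize step is sketched here with the WORLD vocoder (pyworld) as a stand-in; the F0 scaling factor is a placeholder for the prosodic features predicted by the GRNN.

```python
import numpy as np
import pyworld as pw
import soundfile as sf

def convert_prosody(wav_path, out_path, f0_scale=1.3):
    """Analyze neutral speech, rescale F0 toward the predicted emotional contour,
    and resynthesize. pyworld (WORLD) stands in for STRAIGHT in this sketch."""
    x, fs = sf.read(wav_path)                 # assumes a mono file
    x = np.ascontiguousarray(x, dtype=np.float64)
    f0, sp, ap = pw.wav2world(x, fs)          # F0, spectral envelope, aperiodicity
    y = pw.synthesize(f0 * f0_scale, sp, ap, fs)
    sf.write(out_path, y, fs)
```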

12.
To improve the prediction accuracy of PAD (pleasure, arousal, dominance) values, a clustering PSO-LSSVM model is proposed that combines a least squares support vector machine (LSSVM) optimized by particle swarm optimization (PSO) with emotion clustering analysis. Emotional features are extracted from three types of emotional speech in the TYUT2.0 and Berlin speech databases. Based on the features and the annotated P, A, and D values, per-dimension PSO-LSSVM models are built for each of the three single emotions, along with a mixed-emotion PSO-LSSVM model over all three. The mixed-emotion model is used to predict P, A, and D and to compute the distance between the prediction and the PAD values of the basic emotions; emotions whose distance exceeds a threshold are clustered as mixed emotions, while those below the threshold are clustered to the nearest emotion, whose regression model is then used to predict P, A, and D. Experiments show that the model yields smaller prediction errors for P, A, and D than the LSSVM and PSO-LSSVM models and that its predictions correlate more strongly with the annotations, indicating that the clustering PSO-LSSVM model predicts P, A, and D more reliably and accurately.
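A hedged sketch of the core regression model: an RBF-kernel least squares SVM, whose solution is a closed-form linear system, wrapped in a simple particle swarm search over (gamma, sigma). The data, swarm settings, and error measure are illustrative.

```python
import numpy as np

def lssvm_fit_predict(Xtr, ytr, Xte, gamma, sigma):
    """LSSVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    def rbf(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    n = len(Xtr)
    K = rbf(Xtr, Xtr)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], ytr]))
    b, alpha = sol[0], sol[1:]
    return rbf(Xte, Xtr) @ alpha + b

def pso_tune(Xtr, ytr, Xval, yval, n_particles=10, n_iter=20, seed=0):
    """Minimal PSO over log10(gamma), log10(sigma), minimizing validation MSE."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-1, -1], [3, 1], size=(n_particles, 2))   # search in log space
    vel = np.zeros_like(pos)
    def cost(p):
        pred = lssvm_fit_predict(Xtr, ytr, Xval, 10**p[0], 10**p[1])
        return np.mean((pred - yval) ** 2)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return 10**gbest[0], 10**gbest[1]

# Toy usage: predict one PAD dimension from acoustic features
X, y = np.random.rand(60, 8), np.random.rand(60)
g, s = pso_tune(X[:40], y[:40], X[40:], y[40:])
print(lssvm_fit_predict(X[:40], y[:40], X[40:], g, s)[:3])
```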

13.
In recent years, speech synthesis systems have allowed for the production of very high-quality voices. Therefore, research in this domain is now turning to the problem of integrating emotions into speech. However, the method of constructing a separate speech synthesizer for each emotion has some limitations. First, this method often requires an emotional-speech data set with many sentences; such data sets are very time-intensive and labor-intensive to produce. Second, training each of these models requires computers with large computational capabilities and considerable effort and time for model tuning. In addition, each per-emotion model fails to take advantage of the data sets of other emotions. In this paper, we propose a new method to synthesize emotional speech in which the latent expressions of emotions are learned from a small data set of professional actors through a Flowtron model. In addition, we provide a new method to build a speech corpus that is scalable and whose quality is easy to control. Next, to produce a high-quality speech synthesis model, we used this data set to train the Tacotron 2 model, which we then used as a pre-trained model to train the Flowtron model. We applied this method to synthesize Vietnamese speech with sadness and happiness. Mean opinion score (MOS) assessment results show that MOS is 3.61 for sadness and 3.95 for happiness. In conclusion, the proposed method proves to be more effective for a high degree of automation and fast emotional sentence generation, using a small emotional-speech data set.

14.
Recognizing speakers in emotional conditions remains a challenging issue, since speaker states such as emotion affect the acoustic parameters used in typical speaker recognition systems. Thus, it is believed that knowledge of the current speaker emotion can improve speaker recognition in real life conditions. Conversely, speech emotion recognition still has to overcome several barriers before it can be employed in realistic situations, as is already the case with speech and speaker recognition. One of these barriers is the lack of suitable training data, both in quantity and quality, especially data that allow recognizers to generalize across application scenarios (the 'cross-corpus' setting). In previous work, we have shown that in principle, the usage of synthesized emotional speech for model training can be beneficial for recognition of human emotions from speech. In this study, we aim at consolidating these first results in a large-scale cross-corpus evaluation on eight of the most frequently used human emotional speech corpora, namely ABC, AVIC, DES, EMO-DB, eNTERFACE, SAL, SUSAS and VAM, covering natural, induced and acted emotion as well as a variety of application scenarios and acoustic conditions. Synthesized speech is evaluated standalone as well as in joint training with human speech. Our results show that the usage of synthesized emotional speech in acoustic model training can significantly improve recognition of arousal from human speech in the challenging cross-corpus setting.
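The cross-corpus protocol itself can be sketched simply: train an acoustic-feature classifier on one corpus, optionally augmented with synthesized emotional speech, and test on another corpus. The summary features and the sklearn SVM here are illustrative stand-ins for the actual feature set and recognizer.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score

def cross_corpus_eval(X_human, y_human, X_test, y_test, X_synth=None, y_synth=None):
    """Train on corpus A (optionally + synthesized speech), evaluate on corpus B.
    Unweighted average recall (UAR) is the usual cross-corpus measure."""
    X_train, y_train = X_human, y_human
    if X_synth is not None:
        X_train = np.vstack([X_human, X_synth])
        y_train = np.concatenate([y_human, y_synth])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_train, y_train)
    return recall_score(y_test, clf.predict(X_test), average="macro")

# Toy usage with random "features"; labels: 0 = low arousal, 1 = high arousal
rng = np.random.default_rng(0)
Xa, ya = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)
Xb, yb = rng.normal(size=(50, 20)), rng.integers(0, 2, 50)
Xs, ys = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)
print(cross_corpus_eval(Xa, ya, Xb, yb))          # human-only training
print(cross_corpus_eval(Xa, ya, Xb, yb, Xs, ys))  # joint human + synthesized training
```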

15.
This paper presents a study on the importance of short-term speech parameterizations for expressive statistical parametric synthesis. Assuming a source-filter model of speech production, the analysis is conducted over spectral parameters, here defined as features which represent a minimum-phase synthesis filter, and some excitation parameters, which are features used to construct a signal that is fed to the minimum-phase synthesis filter to generate speech. In the first part, different spectral and excitation parameters that are applicable to statistical parametric synthesis are tested to determine which ones are the most emotion dependent. The analysis is performed through two methods proposed to measure the relative emotion dependency of each feature: one based on K-means clustering, and another based on Gaussian mixture modeling for emotion identification. Two commonly used forms of parameters for the short-term speech spectral envelope, the Mel cepstrum and the Mel line spectral pairs, are utilized. As excitation parameters, the anti-causal cepstrum, the time-smoothed group delay, and band-aperiodicity coefficients are considered. According to the analysis, the line spectral pairs are the most emotion dependent parameters. Among the excitation features, the band-aperiodicity coefficients present the highest correlation with the speaker's emotion. The most emotion dependent parameters according to this analysis were selected to train an expressive statistical parametric synthesizer using a speaker and language factorization framework. Subjective test results indicate that the considered spectral parameters have a bigger impact on the synthesized speech emotion when compared with the excitation ones.
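One of the two proposed measures, the GMM-based emotion-identification score, can be sketched as follows: fit one Gaussian mixture per emotion on a candidate parameter set and measure how well held-out frames are classified; higher accuracy suggests the parameter set is more emotion dependent. The data and mixture size below are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def emotion_dependency_score(feats_by_emotion, n_components=4, seed=0):
    """feats_by_emotion: dict emotion -> (n_frames, n_dims) array of one parameter type.
    Returns classification accuracy of per-emotion GMMs on held-out frames."""
    rng = np.random.default_rng(seed)
    models, test_sets = {}, {}
    for emo, X in feats_by_emotion.items():
        idx = rng.permutation(len(X))
        split = int(0.8 * len(X))
        models[emo] = GaussianMixture(n_components=n_components,
                                      random_state=seed).fit(X[idx[:split]])
        test_sets[emo] = X[idx[split:]]
    correct = total = 0
    emotions = list(models)
    for true_emo, X in test_sets.items():
        scores = np.stack([models[e].score_samples(X) for e in emotions])  # log-likelihoods
        correct += np.sum(np.array(emotions)[np.argmax(scores, axis=0)] == true_emo)
        total += len(X)
    return correct / total

# Toy usage: estimate how emotion dependent a hypothetical 10-dim parameter set is
data = {e: np.random.randn(200, 10) + i for i, e in enumerate(["neutral", "happy", "sad"])}
print(emotion_dependency_score(data))
```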

16.
Recently, increasing attention has been directed to the study of the emotional content of speech signals, and hence, many systems have been proposed to identify the emotional content of a spoken utterance. This paper is a survey of speech emotion classification addressing three important aspects of the design of a speech emotion recognition system. The first one is the choice of suitable features for speech representation. The second issue is the design of an appropriate classification scheme and the third issue is the proper preparation of an emotional speech database for evaluating system performance. Conclusions about the performance and limitations of current speech emotion recognition systems are discussed in the last section of this survey. This section also suggests possible ways of improving speech emotion recognition systems.

17.
Prosody conversion from neutral speech to emotional speech
Emotion is an important element in expressive speech synthesis. Unlike traditional discrete emotion simulations, this paper attempts to synthesize emotional speech by using "strong", "medium", and "weak" classifications. This paper tests different models, a linear modification model (LMM), a Gaussian mixture model (GMM), and a classification and regression tree (CART) model. The linear modification model makes direct modification of sentence F0 contours and syllabic durations from acoustic distributions of emotional speech, such as F0 topline, F0 baseline, durations, and intensities. Further analysis shows that emotional speech is also related to stress and linguistic information. Unlike the linear modification method, the GMM and CART models try to map the subtle prosody distributions between neutral and emotional speech. While the GMM just uses the features, the CART model integrates linguistic features into the mapping. A pitch target model which is optimized to describe Mandarin F0 contours is also introduced. For all conversion methods, a deviation of perceived expressiveness (DPE) measure is created to evaluate the expressiveness of the output speech. The results show that the LMM gives the worst results among the three methods. The GMM method is more suitable for a small training set, while the CART method gives the better emotional speech output if trained with a large context-balanced corpus. The methods discussed in this paper indicate ways to generate emotional speech in speech synthesis. The objective and subjective evaluation processes are also analyzed. These results support the use of a neutral semantic content text in databases for emotional speech synthesis.
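The linear modification model (LMM) variant is the easiest to sketch: shift and rescale the F0 contour so its statistics match those observed for the target emotion, and analogously rescale durations. The target statistics below are placeholders, not values from the paper.

```python
import numpy as np

def linear_modification(f0, target_mean, target_range):
    """Linearly map an F0 contour so its mean and range match the target emotion's
    acoustic distribution (a crude stand-in for topline/baseline adjustment)."""
    f0 = np.asarray(f0, dtype=float)
    voiced = f0 > 0
    src = f0[voiced]
    scale = target_range / (src.max() - src.min())
    out = f0.copy()
    out[voiced] = (src - src.mean()) * scale + target_mean
    return out

# Hypothetical target statistics for "happy": higher register, wider range
neutral_f0 = np.where(np.random.rand(200) > 0.2, 180 + 20 * np.random.randn(200), 0)
happy_f0 = linear_modification(neutral_f0, target_mean=230.0, target_range=150.0)
```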

18.
In interaction design, the user's emotional experience bears on whether a design succeeds, and user emotion has long been a research focus in the field. Emotions arise as people interact with mobile devices, but because these emotions are subjective they are difficult to measure objectively. This paper therefore proposes a method for predicting emotion from eye-movement metrics and the PAD emotion scale. The Chinese version of the PAD scale is used as the emotion measurement instrument, a mathematical model relating eye-movement metrics to PAD values is built using partial least squares regression, and an experiment is designed accordingly. The experimental results show that the model predicts emotion fairly accurately.
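A minimal sketch of the modeling step with scikit-learn's PLSRegression; the eye-movement metrics and the number of components are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical predictors per trial: fixation count, mean fixation duration,
# saccade amplitude, pupil diameter; targets: P, A, D scores from the scale.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))          # eye-movement metrics (standardized)
Y = rng.normal(size=(40, 3))          # annotated PAD values

pls = PLSRegression(n_components=2)
pls.fit(X, Y)
pad_pred = pls.predict(X[:5])         # predicted P, A, D for new trials
print(pad_pred.shape)                 # (5, 3)
```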
