Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper presents a technique to transform high-effort voices into breathy voices using adaptive pre-emphasis linear prediction (APLP). The primary benefit of this technique is that it estimates a spectral emphasis filter that can be used to manipulate the perceived vocal effort. The other benefit of APLP is that it estimates a formant filter that is more consistent across varying voice qualities. This paper describes how constant pre-emphasis linear prediction (LP) estimates a voice source with a constant spectral envelope even though the spectral envelope of the true voice source varies over time. A listening experiment demonstrates how differences in vocal effort and breathiness are audible in the formant filter estimated by constant pre-emphasis LP. APLP is presented as a technique to estimate a spectral emphasis filter that captures the combined influence of the glottal source and the vocal tract upon the spectral envelope of the voice. A final listening experiment demonstrates how APLP can be used to effectively transform high-effort voices into breathy voices. The techniques presented here are relevant to researchers in voice conversion, voice quality, singing, and emotion.
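To make the constant pre-emphasis LP baseline concrete, here is a minimal numpy sketch (not from the paper); the pre-emphasis coefficient, LP order and frame length are illustrative assumptions, and APLP itself would additionally adapt the pre-emphasis filter over time.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def preemphasis(x, alpha=0.97):
    """Constant first-order pre-emphasis: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def lp_coefficients(frame, order=18):
    """Autocorrelation-method LP: solve the Yule-Walker normal equations."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate(([1.0], -a))          # A(z) = 1 - sum_k a_k z^-k

fs = 16000
x = np.random.randn(int(0.025 * fs))            # stand-in for a 25 ms voiced frame
frame = np.hamming(len(x)) * preemphasis(x)
A = lp_coefficients(frame)                      # formant-filter estimate 1/A(z)
```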

2.
The great majority of current voice technology applications rely on acoustic features, such as the widely used MFCC or LP parameters, which characterize the vocal tract response. Nonetheless, the major source of excitation, namely the glottal flow, is expected to convey useful complementary information. The glottal flow is the airflow passing through the vocal folds at the glottis. Unfortunately, glottal flow analysis from speech recordings requires specific and complex processing operations, which explains why it has generally been avoided. This paper gives a comprehensive overview of techniques for glottal source processing. Starting from analysis tools for pitch tracking, glottal closure instant detection, and glottal flow estimation and modeling, it discusses how these tools and techniques can be properly integrated into various voice technology applications.
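As a minimal example of the first tool in that chain, a sketch of an autocorrelation-based pitch estimator for a single voiced frame; the search range and frame length are illustrative assumptions.

```python
import numpy as np

def autocorr_f0(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate F0 of a voiced frame from its strongest autocorrelation peak."""
    frame = frame - np.mean(frame)
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)     # admissible pitch-period lags
    lag = lo + np.argmax(r[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.04 * fs)) / fs
print(autocorr_f0(np.sin(2 * np.pi * 120 * t), fs))   # ~120 Hz for a 120 Hz tone
```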

3.
Vocal fry (also called creak, creaky voice, and pulse register phonation) is a voice quality that carries important linguistic or paralinguistic information, depending on the language. We propose a set of acoustic measures and a method for automatically detecting vocal fry segments in speech utterances. A glottal pulse-synchronized method is proposed to deal with the very low fundamental frequency of vocal fry segments, which causes problems for classic short-term analysis methods. The proposed acoustic measures characterize the power, aperiodicity, and similarity properties of vocal fry signals. The basic idea of the proposed method is to scan for local power peaks in a "very short-term" power contour to obtain glottal pulse candidates, check their periodicity properties, and evaluate a similarity measure between neighboring glottal pulse candidates to decide whether they are vocal fry pulses. In the periodicity analysis, autocorrelation peak properties are taken into account to avoid misdetecting periodicity in vocal fry segments. Evaluation of the proposed acoustic measures in automatic detection resulted in 74% correct detection, with an insertion error rate of 13%.
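A minimal sketch of the first stage of such a detector (the "very short-term" power contour and peak picking); the window, hop and threshold values are illustrative assumptions, and the subsequent periodicity and similarity checks are not shown.

```python
import numpy as np
from scipy.signal import argrelmax

def very_short_term_power_db(x, fs, win_ms=4.0, hop_ms=1.0):
    """Power contour with windows much shorter than a vocal-fry pulse interval."""
    win, hop = int(win_ms * fs / 1000), int(hop_ms * fs / 1000)
    frames = np.lib.stride_tricks.sliding_window_view(x, win)[::hop]
    return 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12), hop

def glottal_pulse_candidates(x, fs, floor_db=-40.0):
    """Local peaks of the contour above a floor become glottal pulse candidates."""
    power_db, hop = very_short_term_power_db(x, fs)
    peaks = argrelmax(power_db, order=3)[0]
    return peaks[power_db[peaks] > floor_db] * hop      # positions in samples
```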

4.
Vocal cord diseases can cause irregular vibration of the vocal cords, resulting in abnormal voice. It is therefore useful to study abnormal vocal cords with a vocal cord model. Research on vocal cord diseases has mainly combined acoustic parameters with pattern recognition; however, it is also important to study the causes of the vocal abnormalities themselves. In this paper, a bionic vocal system is modeled, and the influence of changes in pulmonary airflow on glottal vibration excitation is analyzed. The effects of asymmetric vocal polyps on the glottal airflow and the surrounding flow field are studied, showing that the proposed model can assist in the detection of abnormal voice.

5.
This paper describes a robust glottal source estimation method based on a joint source-filter separation technique. The Liljencrants-Fant (LF) model, which models the glottal flow derivative, is integrated into a time-varying ARX speech production model. The two models are estimated in a joint optimization procedure, in which a Kalman filtering process is embedded to adaptively identify the vocal tract parameters. Since the resulting joint estimation problem is a multiparameter nonlinear optimization, the procedure is separated into two passes. The first pass initializes the glottal source and vocal tract models by solving a quasi-convex approximate optimization problem. Given these robust initial values, the second pass refines the model estimates with a trust-region descent optimization algorithm. Experiments with synthetic and real voice signals show that the proposed method estimates glottal source parameters robustly and with a high degree of accuracy.

6.
The human larynx is an important organ for voice production and respiration: the vocal cords are approximated (adducted) for voice production and open for breathing. The videolaryngoscope is widely used for vocal cord examination. At present, physicians usually diagnose vocal cord diseases by manually selecting the frame in which the vocal cords open to the largest extent (maximal abduction), thus maximally exposing any lesion. On the other hand, the severity of diseases such as vocal cord palsy and vocal cord atrophy depends largely on the frame in which the vocal cords close to the smallest extent (maximal adduction). Diseases can therefore be assessed from the maximal-abduction image, while the severity of a breathy voice is closely correlated with the glottal gap in the maximal-adduction image. The aim of this study was to design an automatic vocal cord image selection system to improve on the conventional manual selection by physicians and enhance diagnostic efficiency. In addition, because the examination process yields blurred images caused by human factors as well as frames that do not show the vocal cords, texture analysis based on image entropy is added to screen out such frames and thereby improve the accuracy of selecting the maximal-adduction image.
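A minimal sketch of the entropy-based screening idea, assuming 8-bit grayscale frames; the threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale laryngoscopic frame."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

def keep_frame(gray, min_entropy=4.0):
    """Screen out blurred or non-vocal-cord frames whose entropy is too low."""
    return image_entropy(gray) >= min_entropy
```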

7.
This paper proposes a method for the automatic detection of breathy voiced vowels in continuous Gujarati speech. As breathy voice is a phonetic feature predominantly present in Gujarati among Indian languages, it can be used for identifying the Gujarati language. The objective is to differentiate breathy voiced vowels from modal voiced vowels using a loudness measure, which represents excitation source characteristics. In the proposed method, vowel regions in continuous speech are first determined using knowledge of vowel onset points and epochs; the hypothesized vowel segments are then classified using the loudness measure. Performance is evaluated on Gujarati utterances containing around 47 breathy and 192 modal vowels spoken by 5 male and 5 female speakers. Classification of vowels into breathy or modal voice is achieved with an accuracy of around 94%.

8.
Glottal stop sounds in Amharic are produced by an abrupt closure of the glottis without any significant gesture of the accompanying articulatory organs in the vocal tract system. It is difficult to observe the features of the glottal stop through spectral analysis, as spectral features mostly emphasize the characteristics of the vocal tract system. To spot glottal stop sounds in continuous speech, it is therefore necessary to also extract features of the excitation source, which may require non-spectral analysis methods. In this paper, the linear prediction (LP) residual is used as an approximation to the excitation source signal, and excitation features are extracted from the LP residual using zero frequency filtering (ZFF). The glottal closure instants (GCIs), or epochs, are identified from the ZFF signal. At each GCI, the cross-correlation coefficients of successive glottal cycles of the LP residual, the normalized jitter, and the logarithm of the peak normalized excitation strength (LPNES) are calculated. Gaussian approximation models are then derived from the distributions of these excitation parameters and used to identify the regions of glottal stop sounds in continuous speech. For the database used in this study, 92.89% of the glottal stop regions are identified correctly, with 8.50% false indications.
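A rough sketch of zero frequency filtering for epoch extraction, under standard assumptions (cascaded resonators at 0 Hz followed by repeated local-mean subtraction for trend removal); the window length and iteration counts are illustrative, not the paper's settings.

```python
import numpy as np

def zff_epochs(x, fs, avg_pitch_period_s=0.005):
    """Zero frequency filtering: difference the signal, pass it through two
    resonators at 0 Hz, remove the polynomial trend with repeated local-mean
    subtraction, and take positive-going zero crossings as epochs (GCIs)."""
    y = np.diff(x, prepend=x[0])
    for _ in range(2):                              # cascaded 0 Hz resonators
        out = np.zeros_like(y)
        for n in range(len(y)):
            out[n] = y[n]
            if n >= 1:
                out[n] += 2.0 * out[n - 1]
            if n >= 2:
                out[n] -= out[n - 2]
        y = out
    win = int(avg_pitch_period_s * fs) | 1          # odd trend-removal window
    for _ in range(3):
        y = y - np.convolve(y, np.ones(win) / win, mode="same")
    return np.where((y[:-1] < 0) & (y[1:] >= 0))[0]
```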

9.
This paper presents a new glottal inverse filtering (GIF) method that utilizes a Markov chain Monte Carlo (MCMC) algorithm. First, initial estimates of the vocal tract and glottal flow are evaluated by an existing GIF method, iterative adaptive inverse filtering (IAIF). Simultaneously, the initially estimated glottal flow is synthesized using the Rosenberg–Klatt (RK) model and filtered with the estimated vocal tract filter to create a synthetic speech frame. In the MCMC estimation process, the first few poles of the initial vocal tract model and the RK excitation parameter are refined in order to minimize the error between the synthetic and original speech signals in the time and frequency domain. MCMC approximates the posterior distribution of the parameters, and the final estimate of the vocal tract is found by averaging the parameter values of the Markov chain. Experiments with synthetic vowels produced by a physical modeling approach show that the MCMC-based GIF method gives more accurate results compared to two known reference methods.
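For reference, the Rosenberg–Klatt (KLGLOTT88) pulse used in the synthesis step can be written in closed form; a minimal sketch with illustrative parameter values (F0, open quotient, amplitude), not the paper's implementation.

```python
import numpy as np

def rk_glottal_pulse(fs=16000, f0=120.0, open_quotient=0.6, amplitude=1.0):
    """One period of the Rosenberg-Klatt (KLGLOTT88) glottal flow,
    U(t) = a*t^2 - b*t^3 over the open phase and zero over the closed phase,
    with a and b chosen so the flow peaks at `amplitude` and closes at Te."""
    T0 = 1.0 / f0
    Te = open_quotient * T0                                   # open-phase duration
    a = 27.0 * amplitude / (4.0 * open_quotient**2 * T0**2)
    b = 27.0 * amplitude / (4.0 * open_quotient**3 * T0**3)
    t = np.arange(int(round(T0 * fs))) / fs
    flow = np.where(t < Te, a * t**2 - b * t**3, 0.0)
    return flow, np.gradient(flow, 1.0 / fs)                  # flow and derivative

flow, dflow = rk_glottal_pulse()
```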

10.
We discuss the use of low-dimensional physical models of the voice source for speech coding and processing applications. A class of waveform-adaptive dynamic glottal models and parameter identification procedures are illustrated. The model and the identification procedures are assessed by addressing signal transformations on recorded speech, achievable by fitting the model to the data, and then acting on the physically oriented parameters of the voice source. The class of models proposed provides in principle a tool for both the estimation of glottal source signals and the encoding of the speech signal for transformation purposes. The application of this model to time stretching and to fundamental frequency control (pitch shifting) is also illustrated. The experiments show that copy synthesis is perceptually very similar to the target, and that time stretching and “pitch extrapolation” effects can be obtained by simple control strategies.

11.
12.
To provide a basis for feature-parameter selection in pathological voice recognition, an asymmetric mechanical model of the vocal folds is proposed to simulate and analyze diseased vocal folds. Based on the layered structure and tissue properties of the vocal folds, a mechanical vocal fold model is built and coupled with the glottal airflow to obtain the glottal source excitation waveform produced by the model. A combined genetic particle swarm optimization and quasi-Newton algorithm (GPSO-QN) is used to match the model's glottal source output to the measured target glottal waveform and to extract the optimized model parameters. Simulation results show that the vocal fold model can produce glottal waveforms consistent with real glottal sources, and also confirm that asymmetry between the physiological tissues of the left and right vocal folds is an important cause of pathological voice.
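The idea of combining a global population-based search with a quasi-Newton refinement can be sketched as follows; this is a toy version in the spirit of GPSO-QN (a plain global-best particle swarm, omitting the genetic operators, followed by scipy's L-BFGS-B), and `model` is a hypothetical callable that maps a parameter vector to a glottal waveform sampled like `target`.

```python
import numpy as np
from scipy.optimize import minimize

def waveform_error(params, target, model):
    """Mean squared error between the model's glottal waveform and the target."""
    return np.mean((model(params) - target) ** 2)

def gpso_qn_fit(target, model, bounds, n_particles=30, n_iter=50, seed=0):
    """Toy hybrid: global-best PSO for a coarse search, then quasi-Newton refinement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([waveform_error(p, target, model) for p in pos])
    gbest = pbest[np.argmin(pbest_err)]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        err = np.array([waveform_error(p, target, model) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        gbest = pbest[np.argmin(pbest_err)]
    # quasi-Newton (BFGS-family) refinement starting from the best swarm solution
    res = minimize(waveform_error, gbest, args=(target, model),
                   method="L-BFGS-B", bounds=bounds)
    return res.x
```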

13.
This paper investigates the temporal excitation patterns of creaky voice. Creaky voice is a voice quality frequently used as a phrase-boundary marker, but also as a means of portraying attitude, affective states and even social status. Consequently, the automatic detection and modelling of creaky voice may have implications for speech technology applications. The acoustic characteristics of creaky voice are, however, rather distinct from modal phonation, and several different acoustic patterns can bring about the perception of creak, which complicates the strategies used for its automatic detection, analysis and modelling. The present study is carried out on a variety of languages and speakers, on both read and conversational data, and involves a mutual information-based assessment of the various acoustic features proposed in the literature for detecting creaky voice. These features are then exploited in classification experiments, where we achieve an appreciable improvement in detection accuracy compared to the state of the art. Both experiments clearly highlight the presence of several creaky patterns. A subsequent qualitative and quantitative analysis of the identified patterns reveals considerable speaker-dependent variability in their usage. We also investigate how creaky voice detection systems perform across the different creaky patterns.
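A hedged sketch of the feature-assessment step: mutual information between candidate acoustic features and creaky/non-creaky labels, followed by a generic classifier. The data here are random stand-ins, and the classifier is a placeholder rather than the one used in the paper.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: per-frame acoustic features proposed for creak detection (rows = frames),
# y: binary creaky / non-creaky labels. Both are stand-ins here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = rng.integers(0, 2, size=1000)

# Rank each feature by its mutual information with the creaky label...
mi = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]

# ...then feed the features to a classifier and estimate detection accuracy.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```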

14.
This study deals with the numerical solution of a 2D unsteady flow of a compressible viscous fluid in a channel at low inlet airflow velocity. The unsteadiness of the flow is caused by a prescribed periodic motion of part of the channel wall with large amplitudes, nearly closing the channel during oscillations. The channel is a simplified model of the glottal space in the human vocal tract, and the flow represents airflow coming from the trachea, passing through the glottal region with periodically vibrating vocal folds, and entering the vocal tract.

15.
The glottal excitation signal is the source signal of speech and can be used for effective extraction of speech feature parameters. Two methods for obtaining the glottal excitation from observed speech are studied: linear prediction and cepstral analysis. Computer simulation experiments on recorded speech compare the performance and characteristics of the two methods. The results show that the cepstral method obtains the glottal excitation, and excitation features such as the pitch period derived from it, with high accuracy, but at a relatively high computational cost. The linear prediction method, thanks to its efficient algorithms, not only obtains the glottal excitation quickly but also simultaneously yields other important parameters such as the vocal tract model coefficients and the speech power spectrum, making it the commonly used method for obtaining the glottal excitation.
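A minimal sketch of the linear-prediction route described above: fit an all-pole vocal tract model and inverse filter to obtain the residual as an approximation of the glottal excitation. The LP order and windowing are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lp_residual(frame, order=12):
    """Fit 1/A(z) to the frame, then inverse filter with A(z) to get the residual."""
    w = frame * np.hamming(len(frame))
    r = np.correlate(w, w, mode="full")[len(w) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    A = np.concatenate(([1.0], -a))               # A(z) = 1 - sum_k a_k z^-k
    return lfilter(A, [1.0], frame)               # residual ~ glottal excitation

# The pitch period can then be read from the residual's autocorrelation peak.
```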

16.
Spasmodic dysphonia is a voice disorder caused by spasms of the involuntary muscles of the voice box. By interrupting the opening of the vocal folds, these spasms can lead to breathy or soundless voice breaks and a strangled voice. There is no specific test for the diagnosis of spasmodic dysphonia; its cause is unknown and there is no cure, but treatment can improve voice quality. The aims and objectives of the study are (i) to diagnose dysphonia and perform a comparative analysis on both continuous speech and the sustained phonation /a/ by extracting acoustic features; (ii) to extract the acoustic features both semi-automatically using PRAAT software and automatically using an FFT-based algorithm; and (iii) to classify normal speakers and spasmodic dysphonia patients using different classifiers, namely a Levenberg-Marquardt backpropagation network, K-Nearest Neighbor (KNN) and Support Vector Machine (SVM), based on sensitivity and accuracy. Thirty normal and thirty abnormal subjects were considered in the study. The performance of the three classifiers was compared: SVM and KNN were 100% accurate, whereas the Levenberg-Marquardt backpropagation network produced an accuracy of about 96.7%. The voice samples of dysphonia patients showed clear deviations from normal speech samples. The automated analysis method was able to detect dysphonia and provided better results than the semi-automated method.
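A minimal sketch of the classification stage with scikit-learn, using random stand-in features in place of the extracted acoustic measures; the feature dimensionality and classifier settings are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Stand-in acoustic feature matrix (e.g. jitter, shimmer, HNR per recording)
# and labels: 0 = normal, 1 = spasmodic dysphonia; 30 + 30 as in the study.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(1, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)

for name, clf in [("SVM", SVC(kernel="rbf")), ("KNN", KNeighborsClassifier(5))]:
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```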

17.
The objective of a voice conversion system is to formulate a mapping function that can transform the source speaker's characteristics into those of the target speaker. In this paper, we propose a General Regression Neural Network (GRNN) based model for voice conversion. The GRNN is a single-pass learning network, which makes the training procedure fast and comparatively less time consuming. The proposed system uses the shape of the vocal tract, the shape of the glottal pulse (excitation signal) and long-term prosodic features to carry out the voice conversion task. The shape of the vocal tract and the shape of the source excitation of a particular speaker are represented using Line Spectral Frequencies (LSFs) and the Linear Prediction (LP) residual, respectively, and GRNN is used to obtain the mapping function between the source and target speakers. Direct transformation of the time-domain residual using an Artificial Neural Network (ANN) causes phase changes and generates artifacts in consecutive frames; to alleviate this, wavelet packet decomposition coefficients are used to characterize the excitation of the speech signal. The long-term prosodic parameters, namely the pitch contour (intonation) and the energy profile of the test signal, are also modified in relation to those of the target (desired) speaker using the baseline method. The performance of the proposed model is compared to voice conversion systems based on the state-of-the-art RBF and GMM models using objective and subjective evaluation measures, which show that the proposed GRNN based voice conversion system performs slightly better than the state-of-the-art models.
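A GRNN is essentially a single-pass kernel regressor (Nadaraya-Watson form): training consists of storing the patterns, and prediction is a Gaussian-weighted average of the stored targets. A minimal numpy sketch, not the authors' implementation, with stand-in LSF data and an illustrative bandwidth.

```python
import numpy as np

class GRNN:
    """General Regression Neural Network: single-pass kernel regressor that
    predicts a Gaussian-weighted average of the stored training targets."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma
    def fit(self, X, Y):
        self.X, self.Y = np.asarray(X), np.asarray(Y)   # "training" = storing patterns
        return self
    def predict(self, Xq):
        d2 = ((np.asarray(Xq)[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return w @ self.Y / w.sum(axis=1, keepdims=True)

# e.g. mapping source-speaker LSF vectors to target-speaker LSF vectors
src = np.random.rand(200, 10)                 # stand-in source LSFs
tgt = src * 0.8 + 0.1                         # stand-in "target" LSFs
mapped = GRNN(sigma=0.3).fit(src, tgt).predict(src[:5])
```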

18.
We propose a pitch-synchronous approach to designing a voice conversion system that takes into account the correlation between the excitation signal and the vocal tract system characteristics of the speech production mechanism. The glottal closure instants (GCIs), also known as epochs, are used as anchor points for analysis and synthesis of the speech signal. The Gaussian mixture model (GMM) is considered the state-of-the-art method for vocal tract modification in a voice conversion framework; however, GMM-based models generate overly smooth utterances and need to be tuned according to the amount of available training data. In this paper, we propose a support vector machine multi-regressor (M-SVR) based model that requires fewer tuning parameters to capture a mapping function between the vocal tract characteristics of the source and target speakers. The prosodic features are modified using an epoch-based method and compared with the baseline pitch synchronous overlap and add (PSOLA) based method for pitch and time scale modification. The linear prediction (LP) residual signal corresponding to each frame of the converted vocal tract transfer function is selected from the target residual codebook using a modified cost function, calculated from the mapped vocal tract transfer function and its dynamics along with the minimum residual phase, pitch period and energy differences with the codebook entries. The LP residual signal corresponding to the target speaker is generated by concatenating the selected frame and its previous frame so as to retain the maximum information around the GCIs. The proposed system is also tested using a GMM-based model for vocal tract modification. The average mean opinion score (MOS) and ABX test results are 3.95 and 85 for the GMM-based system and 3.98 and 86 for the M-SVR-based system, respectively. The subjective and objective evaluation results suggest that the proposed M-SVR based model for vocal tract modification, combined with modified residual selection and an epoch-based model for prosody modification, can provide good-quality synthesized target speech, and that the proposed integrated system performs slightly better than the GMM-based baseline system designed using either the epoch-based or the PSOLA-based model for prosody modification.

19.
Primary voice production occurs in the larynx through the vibration of the vocal folds; however, many problems can affect this complex system, resulting in voice disorders. In this context, time–frequency–shape analysis based on embedded phase space plots and nonlinear dynamics methods has been used to evaluate vocal fold dynamics during phonation. In the present work, high-speed video was used to record the vocal fold movements of three subjects, and the glottal area time series was extracted using an image segmentation algorithm. This signal drives an optimization method that combines genetic algorithms and a quasi-Newton method to fit the parameters of a lumped-element biomechanical model of the vocal folds (masses, springs and dampers). After optimization, the model is capable of simulating the dynamics of the recorded vocal folds and their glottal pulse. Bifurcation diagrams and phase space analysis were used to evaluate the behavior of this deterministic system under different circumstances. The results show that the methodology can be used to extract some physiological parameters of the vocal folds and to reproduce some of the complex behaviors of these structures, contributing to the scientific and clinical evaluation of voice production.

20.
The intelligibility of speech transmitted through low-rate coders is severely degraded when high levels of acoustic noise are present in the acoustic environment. Recent advances in nonacoustic sensors, including microwave radar, skin vibration, and bone conduction sensors, provide the exciting possibility of both glottal excitation and, more generally, vocal tract measurements that are relatively immune to acoustic disturbances and can supplement the acoustic speech waveform. We are currently investigating methods of combining the output of these sensors for use in low-rate encoding according to their capability in representing specific speech characteristics in different frequency bands. Nonacoustic sensors have the ability to reveal certain speech attributes lost in the noisy acoustic signal; for example, low-energy consonant voice bars, nasality, and glottalized excitation. By fusing nonacoustic low-frequency and pitch content with acoustic-microphone content, we have achieved significant intelligibility gains, measured with the Diagnostic Rhyme Test (DRT) across a variety of environments, over the government-standard 2400-bps MELPe coder. By fusing quantized high-band 4-8 kHz speech, requiring only an additional 116 bps, we obtain further DRT performance gains by exploiting the ear's insensitivity to fine spectral detail in this frequency region.
