1.
The level of quality that can be achieved by modern concatenative text-to-speech synthesis heavily depends on a judicious composition of the unit inventory used in the unit selection process. Unit boundary optimization, in particular, can make a huge difference in the users' perception of the concatenated acoustic waveform. This paper considers the iterative refinement of unit boundaries based on a data-driven feature extraction framework separately optimized for each boundary region. This guarantees a globally optimal cut point between any two matching units in the underlying inventory. The associated boundary training procedure is objectively characterized, first in terms of convergence behavior, and then by comparing the distributions in inter-unit discontinuity obtained before and after training. Experimental results underscore the viability of this approach for unit boundary optimization. Listening evidence also qualitatively exemplifies a noticeable reduction in the perception of discontinuity between concatenated acoustic units.
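The iterative refinement described here can be pictured as alternating between estimating a per-boundary feature transform and re-cutting each unit in the resulting space. The sketch below is a minimal Python/NumPy illustration of that loop; the PCA transform, the small search window, and all names (`boundary_transform`, `refine_boundaries`) are my assumptions, not the paper's actual procedure.

```python
# Hedged sketch of iterative unit-boundary refinement: estimate a
# data-driven transform for one boundary region, then move each cut
# point to minimize inter-unit discontinuity in that space, and repeat.
import numpy as np

def boundary_transform(frames):
    """Estimate a transform (here: PCA) from the frames currently at
    the boundary, stacked as (n_units, dim)."""
    mu = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mu, full_matrices=False)
    return mu, vt[:5]            # keep a handful of leading modes

def refine_boundaries(units, n_iters=5, search=3):
    """units: list of (features, cut) pairs, where features is (T, dim)
    and cut is the current boundary frame index. Returns updated cuts."""
    cuts = [c for _, c in units]
    for _ in range(n_iters):
        # 1) Re-estimate the per-boundary transform from current cuts.
        boundary_frames = np.stack([f[c] for (f, _), c in zip(units, cuts)])
        mu, basis = boundary_transform(boundary_frames)
        proj = lambda x: (x - mu) @ basis.T
        center = proj(boundary_frames).mean(axis=0)
        # 2) Move each cut to the candidate frame closest, in the
        #    transformed space, to the mean boundary of its peers.
        new_cuts = []
        for (f, _), c in zip(units, cuts):
            lo, hi = max(0, c - search), min(len(f) - 1, c + search)
            d = [np.linalg.norm(proj(f[t]) - center)
                 for t in range(lo, hi + 1)]
            new_cuts.append(lo + int(np.argmin(d)))
        if new_cuts == cuts:     # converged: no cut point moved
            break
        cuts = new_cuts
    return cuts
```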
2.
The level of quality that can be attained in concatenative text-to-speech (TTS) synthesis is primarily governed by the inventory of units used in unit selection. This has led to the collection of ever larger corpora in the quest for ever more natural synthetic speech. As operational considerations limit the size of the unit inventory, however, pruning is critical to removing any instances that prove either spurious or superfluous. This paper proposes a novel pruning strategy based on a data-driven feature extraction framework separately optimized for each unit type in the inventory. A single distinctiveness/redundancy measure can then address, in a consistent manner, the two different problems of outliers and redundant units. Detailed analysis of an illustrative case study exemplifies the typical behavior of the resulting unit pruning procedure, and listening evidence suggests that both moderate and aggressive inventory pruning can be achieved with minimal degradation in perceived TTS quality. These experiments underscore the benefits of unit-centric feature mapping for database optimization in concatenative synthesis.
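As one concrete reading of a single measure handling both outliers and redundancy, the sketch below scores each instance of a unit type by its distance structure in a feature space: instances far from the bulk are pruned as spurious, and near-duplicates of already-kept instances are pruned as superfluous. The z-score threshold, distance threshold, and greedy scheme are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of per-unit-type pruning in the spirit of a joint
# distinctiveness/redundancy criterion.
import numpy as np

def prune_unit_type(feats, outlier_z=3.0, redund_dist=0.1):
    """feats: (n_instances, dim) feature vectors for one unit type.
    Returns indices of instances to keep."""
    # Outliers: instances whose distance to the type centroid is far
    # beyond the bulk are treated as spurious and dropped.
    center = feats.mean(axis=0)
    d = np.linalg.norm(feats - center, axis=1)
    z = (d - d.mean()) / (d.std() + 1e-12)
    candidates = np.flatnonzero(z < outlier_z)
    # Redundancy: greedily keep only instances that are not too close
    # to an instance already retained.
    kept = []
    for i in candidates:
        if all(np.linalg.norm(feats[i] - feats[j]) > redund_dist
               for j in kept):
            kept.append(i)
    return kept
```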
3.
The automatic recognition of online handwriting is considered from an information theoretic viewpoint. Emphasis is placed on the recognition of unconstrained handwriting, a general combination of cursively written word fragments and discretely written characters. Existing recognition algorithms, such as elastic matching, are severely challenged by the variability inherent in unconstrained handwriting. This motivates the development of a probabilistic framework suitable for the derivation of a fast statistical mixture algorithm. This algorithm exhibits about the same degree of complexity as elastic matching, while being more flexible and potentially more robust. The approach relies on a novel front-end processor that, unlike conventional character or stroke-based processing, articulates around a small elementary unit of handwriting called a frame. The algorithm is based on (1) producing feature vectors representing each frame in one (or several) feature spaces, (2) Gaussian K-means clustering in these spaces, and (3) mixture modeling, taking into account the contributions of all relevant clusters in each space. The approach is illustrated by a simple task involving an 81-character alphabet. Both writer-dependent and writer-independent recognition results are found to be competitive with their elastic matching counterparts.
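Steps (1)-(3) map naturally onto a small pipeline: a feature vector per frame, K-means clustering of those vectors, and a mixture score that sums the contributions of all clusters. The sketch below assumes isotropic Gaussians and independent frames for brevity; the names, the shared variance, and the per-character cluster priors are illustrative choices, not the paper's exact model.

```python
# Hedged sketch of the frame-based statistical mixture idea.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain K-means over frame feature vectors X of shape (n, dim)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[assign == j].mean(0) if np.any(assign == j)
                            else centers[j] for j in range(k)])
    return centers, assign

def gaussian(x, mu, var):
    """Isotropic Gaussian density, one shared variance per cluster."""
    d = len(mu)
    return np.exp(-((x - mu) ** 2).sum() / (2 * var)) \
        / (2 * np.pi * var) ** (d / 2)

def char_log_likelihood(frames, centers, var, priors):
    """priors[j] = P(cluster j | character); the score mixes the
    contributions of all clusters, mirroring step (3)."""
    ll = 0.0
    for x in frames:
        p = sum(priors[j] * gaussian(x, centers[j], var)
                for j in range(len(centers)))
        ll += np.log(p + 1e-300)   # guard against underflow
    return ll
```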
4.
Statistical language models used in large-vocabulary speech recognition must properly encapsulate the various constraints, both local and global, present in the language. While local constraints are readily captured through n-gram modeling, global constraints, such as long-term semantic dependencies, have been more difficult to handle within a data-driven formalism. This paper focuses on the use of latent semantic analysis, a paradigm that automatically uncovers the salient semantic relationships between words and documents in a given corpus. In this approach, (discrete) words and documents are mapped onto a (continuous) semantic vector space, in which familiar clustering techniques can be applied. This leads to the specification of a powerful framework for automatic semantic classification, as well as the derivation of several language model families with various smoothing properties. Because of their large-span nature, these language models are well suited to complement conventional n-grams. An integrative formulation is proposed for harnessing this synergy, in which the latent semantic information is used to adjust the standard n-gram probability. Such hybrid language modeling compares favorably with the corresponding n-gram baseline: experiments conducted on the Wall Street Journal domain show a reduction in average word error rate of over 20%. This paper concludes with a discussion of intrinsic tradeoffs, such as the influence of training data selection on the resulting performance.
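The core machinery here is standard latent semantic analysis: a truncated SVD of the word-by-document count matrix yields the continuous semantic space, and a long-span history is folded in as a pseudo-document. The sketch below uses one common hybrid formulation, rescaling the n-gram distribution by a semantic affinity term and renormalizing over the vocabulary; this particular integration rule is an assumption for illustration, not necessarily the paper's exact formula.

```python
# Hedged sketch of an LSA semantic space plus an n-gram/LSA hybrid.
import numpy as np

def lsa_space(counts, k=100):
    """counts: (V, D) word-by-document co-occurrence matrix.
    Returns rank-k word vectors (rows of U @ diag(S))."""
    u, s, _ = np.linalg.svd(counts, full_matrices=False)
    return u[:, :k] * s[:k]

def hybrid_prob(ngram_p, word_vecs, history_ids, temp=1.0):
    """ngram_p: (V,) n-gram distribution over the next word.
    history_ids: indices of the words in the long-span history.
    Builds a pseudo-document vector from the history and adjusts
    the n-gram probabilities by semantic affinity."""
    h = word_vecs[history_ids].mean(axis=0)          # pseudo-document
    sim = word_vecs @ h
    sim /= (np.linalg.norm(word_vecs, axis=1) * np.linalg.norm(h) + 1e-12)
    weight = np.exp(temp * sim)      # semantic adjustment term
    p = ngram_p * weight
    return p / p.sum()               # renormalize over the vocabulary
```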
5.
The level of quality that can be achieved by modern concatenative text-to-speech synthesis heavily depends on the optimization criteria used in the unit selection process. While effective cost functions arise naturally for prosody assessment, the criteria typically selected to quantify discontinuities in the speech signal do not closely reflect users' perception of the resulting acoustic waveform. This paper introduces an alternative feature extraction paradigm, which eschews general purpose Fourier analysis in favor of a modal decomposition separately optimized for each boundary region. The ensuing transform framework preserves, by construction, those properties of the waveform which are globally relevant to each concatenation considered. In addition, it leads to a novel discontinuity measure which jointly, albeit implicitly, accounts for both interframe incoherence and discrepancies in formant frequencies/bandwidths. Experimental evaluations are conducted to characterize the behavior of this new metric, first on a contiguity prediction task, and then via a systematic listening comparison using a conventional metric as baseline. The results underscore the viability of the proposed framework in quantifying the perception of discontinuity between acoustic units.
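One plausible rendering of a per-boundary modal decomposition is an SVD restricted to the frames around a candidate splice, with discontinuity measured as incoherence between the two sides in the shared modal coordinates. The sketch below implements exactly that reading and should be taken as speculative: the paper's measure also accounts for formant frequencies/bandwidths, which this toy score does not model explicitly.

```python
# Hedged sketch of a per-boundary modal decomposition and a simple
# discontinuity score: decompose only the frames around the splice
# (not a global Fourier analysis) and compare the two sides in the
# resulting modal space.
import numpy as np

def discontinuity(left_frames, right_frames, k=4):
    """left_frames, right_frames: (n, dim) frame-synchronous slices
    taken just before and just after the candidate concatenation point."""
    region = np.vstack([left_frames, right_frames])
    mu = region.mean(axis=0)
    # Modal decomposition of this boundary region alone.
    _, _, vt = np.linalg.svd(region - mu, full_matrices=False)
    modes = vt[:k]
    left = (left_frames - mu) @ modes.T     # modal coordinates, left side
    right = (right_frames - mu) @ modes.T   # modal coordinates, right side
    # Incoherence between the two sides in the shared modal space.
    return np.linalg.norm(left.mean(axis=0) - right.mean(axis=0))
```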