Similar Literature
20 similar records found
1.
The field of automatic speech recognition (ASR) is discussed from the viewpoint of pattern recognition (PR). This tutorial examines the problem area, its methods, successes and failures, focusing on the nature of the speech signal and techniques to accomplish useful data reduction. Comparison is made with other areas of PR. Suggestions are given for areas of future progress.

2.
An automatic speech recognition (ASR) system suffers from variation in the acoustic quality of its input speech. Speech may be produced in noisy environments, each speaker has an individual speaking style, and variation can be observed even within the same utterance or from the same speaker in different moods. All these uncertainties and variations should be normalized to obtain a robust ASR system. In this paper, we apply and evaluate different approaches to acoustic-quality normalization within an utterance for robust ASR. Several HMM (hidden Markov model)-based systems using utterance-level, word-level, and monophone-level normalization are compared with an HMM-SM (subspace method)-based system using monophone-level normalization. The SM can represent variations of fine structures in sub-words as a set of eigenvectors, and so performs better than the HMM at the monophone level. Experimental results show that word accuracy is significantly improved by the HMM-SM-based system with monophone-level normalization compared with the typical HMM-based system with utterance-level normalization, in both clean and noisy conditions. The results also suggest that monophone-level normalization using the SM outperforms that using the HMM.
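The abstract above does not give its normalization formulas, but a standard baseline for utterance-level normalization in HMM front ends is cepstral mean normalization (CMN). The sketch below is not from the paper; it is a minimal illustration using numpy, with a toy feature matrix standing in for real MFCC frames.

```python
import numpy as np

def cepstral_mean_normalization(features):
    """Subtract the per-utterance mean from each cepstral dimension.

    features: (num_frames, num_coeffs) array, e.g. MFCC frames.
    Removing the per-utterance mean suppresses stationary channel
    effects, a common utterance-level normalization baseline in ASR.
    """
    return features - features.mean(axis=0, keepdims=True)

# Toy example: 4 frames of 3 cepstral coefficients with constant offsets.
feats = np.array([[1.0, 2.0, 3.0],
                  [1.0, 4.0, 3.0],
                  [1.0, 2.0, 5.0],
                  [1.0, 4.0, 5.0]])
normalized = cepstral_mean_normalization(feats)
print(normalized.mean(axis=0))  # each column mean is now 0
```

Word-level or monophone-level normalization, as compared in the paper, would apply the same idea over shorter segments rather than the whole utterance.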

3.
《Ergonomics》2012,55(11):1543-1555
The optimal type and amount of secondary feedback for data entry with automatic speech recognition were investigated. Six feedback conditions, varying the information channel for feedback (visual or auditory), the delay prior to feedback, and the amount of feedback history, were compared to a no-feedback control. In addition, the presence of a dialogue requiring users to confirm a word choice when the speech recognizer could not distinguish between two words was studied. The word confirmation dialogue increased recognition accuracy by about 5% with no significant increase in the time to enter data. Type of feedback affected both accuracy and time to enter data. When no feedback was available, data entry time was minimal but there were many errors. Any type of feedback/error correction vastly improved accuracy, but auditory feedback provided after a string of data was spoken increased the time to enter data by a factor of three. Depending on task conditions, visual or auditory feedback following each word spoken is recommended.

4.
《Ergonomics》2012,55(11):1943-1957
Errors, whether created by the user, the recognizer, or inadequate system design, are an important consideration in the more widespread and successful use of automatic speech recognition (ASR). An experiment is described in which recognition errors are studied under different types of feedback. Subjects entered data verbally to a microcomputer under four experimental conditions: orthogonal combinations of spoken and visual feedback, presented concurrently or terminally after six items. Although no significant differences in error rates or speed of data entry were shown across the conditions, analysis of the time penalty for error correction indicated that, as a general rule, there is a small timing advantage for terminal feedback when the error rate is low. It was found that subjects do not monitor visual feedback with the same degree of accuracy as spoken feedback, as a larger number of incorrect data-entry strings was confirmed as correct. Further evidence for the use of ‘second best’ recognition data is given, since correct recognition on re-entry could be increased from 83.0% to 92.4% when the first-choice recognition was deleted from the second attempt. Finally, the implications for error-correction protocols in system design are discussed.

5.
6.
Lectures can be digitally recorded and replayed to provide multimedia revision material for students who attended the class and a substitute learning experience for students unable to attend. Deaf and hard of hearing people can find it difficult to follow speech through hearing alone or to take notes while they are lip-reading or watching a sign-language interpreter. Synchronising the speech with text captions can ensure deaf students are not disadvantaged and assist all learners to search for relevant specific parts of the multimedia recording by means of the synchronised text. Automatic speech recognition has been used to provide real-time captioning directly from lecturers’ speech in classrooms but it has proved difficult to obtain accuracy comparable to stenography. This paper describes the development, testing and evaluation of a system that enables editors to correct errors in the captions as they are created by automatic speech recognition and makes suggestions for future possible improvements.

7.
Automatic recognition of children's speech is a challenging topic in computer-based speech recognition systems. The conventional feature extraction method, the Mel-frequency cepstral coefficient (MFCC), is not effective for children's speech recognition. This paper proposes a novel fuzzy-based discriminative feature representation to address the recognition of Malay vowels uttered by children. Because of age-dependent variation in acoustical speech parameters, the performance of automatic speech recognition (ASR) systems degrades on children's speech. To solve this problem, this study addresses the representation of relevant and discriminative features for children's speech recognition. The methods include extraction of MFCCs with a narrower filter bank followed by a fuzzy-based feature selection method. The proposed feature selection provides relevant, discriminative, and complementary features. For this purpose, conflicting objective functions measuring the goodness of the features have to be fulfilled. To this end, a fuzzy formulation of the problem and fuzzy aggregation of the objectives are used to address the uncertainties involved. The proposed method can reduce the dimensionality without compromising the speech recognition rate. To assess its capability, the study analyzed six Malay vowels from recordings of 360 children, ages 7 to 12. Upon extracting the features, two well-known classification methods, MLP and HMM, were employed for the speech recognition task. Optimal parameter adjustment was performed for each classifier to adapt it to the experiments, which were conducted in a speaker-independent manner. The proposed method performed better than the conventional MFCC and a number of conventional feature selection methods in the children's speech recognition task. The fuzzy-based feature selection allowed flexible selection of the MFCCs with the best discriminative ability, enhancing the difference between the vowel classes.
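The paper's fuzzy aggregation of objectives is not reproduced here, but the underlying idea of ranking features by discriminative ability can be illustrated with a crisp single-criterion stand-in: the per-feature Fisher discriminant ratio. This is a hedged sketch with synthetic data, not the authors' method.

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-feature Fisher discriminant ratio: variance of the class
    means divided by the average within-class variance. Higher values
    indicate features that better separate the classes."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    between = means.var(axis=0)
    return between / (within + 1e-12)

def select_top_k(X, y, k):
    """Indices of the k most discriminative features."""
    return np.argsort(fisher_ratio(X, y))[::-1][:k]

rng = np.random.default_rng(0)
# Synthetic data: feature 0 separates the two classes; feature 1 is noise.
X = np.vstack([rng.normal([0.0, 0.0], 0.1, (50, 2)),
               rng.normal([3.0, 0.0], 0.1, (50, 2))])
X[:, 1] = rng.normal(0.0, 1.0, 100)
y = np.array([0] * 50 + [1] * 50)
print(select_top_k(X, y, 1))  # → [0]
```

A fuzzy formulation, as in the paper, would additionally aggregate several such conflicting criteria (e.g. relevance and complementarity) instead of ranking by one ratio.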

8.
This communication discusses how automatic speech recognition (ASR) can support universal access to communication and learning through the cost-effective production of text synchronised with speech, and describes achievements and planned developments of the Liberated Learning Consortium to: support preferred learning and teaching styles; assist those who for cognitive, physical or sensory reasons find notetaking difficult; assist learners to manage and search online digital multimedia resources; provide automatic captioning of speech for deaf learners or when audio is not available or suitable; assist blind, visually impaired or dyslexic people to read and search material; and assist speakers to improve their communication skills.

9.
Speech is the most natural form of communication for human beings. However, in situations where audio speech is not available because of disability or adverse environmental conditions, people may resort to alternative methods such as augmented speech, that is, audio speech supplemented or replaced by other modalities, such as audiovisual speech or Cued Speech. This article introduces augmented speech communication based on Electro-Magnetic Articulography (EMA). Movements of the tongue, lips, and jaw are tracked by EMA and are used as features to create hidden Markov models (HMMs). In addition, automatic phoneme recognition experiments are conducted to examine the possibility of recognizing speech from articulation alone, that is, without any audio information. The results are promising and confirm that phonetic features characterizing articulation are as discriminating as those characterizing acoustics (except for voicing). The article also describes experiments conducted in noisy environments using fused audio and EMA parameters. When EMA parameters are fused with noisy audio speech, the recognition rate increases significantly compared with using noisy audio speech alone.

10.
11.
We address the novel problem of jointly evaluating multiple speech patterns for automatic speech recognition and training. We propose solutions based on both the non-parametric dynamic time warping (DTW) algorithm and the parametric hidden Markov model (HMM), and show that a hybrid approach is quite effective for noisy speech recognition. We extend the concept to HMM training in which some patterns may be noisy or distorted. Utilizing the concept of a “virtual pattern” developed for joint evaluation, we propose selective iterative training of HMMs. When the algorithms are evaluated on burst/transient-noise speech and isolated-word recognition, they yield significant improvements in recognition accuracy over methods that do not use the joint evaluation strategy.
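The joint-evaluation and virtual-pattern machinery above builds on plain DTW, which can be stated compactly. The following is a textbook sketch of DTW on 1-D sequences, not the paper's multi-pattern extension; the toy sequences are invented for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    using the absolute difference as the local cost and the standard
    three-way recurrence (insertion, deletion, match)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

ref = [1, 2, 3, 4]
slow = [1, 1, 2, 2, 3, 3, 4, 4]   # same contour, uttered twice as slowly
print(dtw_distance(ref, slow))    # → 0.0: DTW absorbs the timing difference
print(dtw_distance(ref, [4, 3, 2, 1]))  # a reversed contour costs more
```

Because DTW makes no distributional assumptions, it pairs naturally with the parametric HMM in the hybrid approach the abstract describes.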

12.
In this paper, we present an on-line learning neural network model, the Dynamic Recognition Neural Network (DRNN), for real-time speech recognition. The accumulative-learning property of the DRNN makes it very suitable for real-time speech recognition with on-line learning. A comparison between the DRNN and the hidden Markov model (HMM) shows that the computational complexity of the former is lower than that of the latter in both training and recognition. Encouraging results are obtained when the DRNN is tested on a BUPT digit database (Mandarin) and on the on-line learning of twenty isolated English computer command words.

13.
Robustness is one of the most important topics for automatic speech recognition (ASR) in practical applications. Monaural speech separation based on computational auditory scene analysis (CASA) offers a solution to this problem. In this paper, a novel system is presented to separate the monaural speech of two talkers. Gaussian mixture models (GMMs) and vector quantizers (VQs) are used to learn the grouping cues on isolated clean data for each speaker. Given an utterance, speaker identification is first performed to identify the two speakers present in the utterance; the factorial-max vector quantization model (MAXVQ) is then used to infer the mask signals, and finally the utterance of the target speaker is resynthesized in the CASA framework. Recognition results on the 2006 speech separation challenge corpus show that the proposed system significantly improves the robustness of ASR.
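The speaker-identification step above scores an utterance against per-speaker generative models. As a deliberately simplified stand-in for the paper's GMMs, the sketch below models each speaker with a single diagonal-covariance Gaussian and picks the model with the highest log-likelihood; the speakers, features, and data are all synthetic.

```python
import numpy as np

def fit_gaussian(X):
    """Fit a diagonal-covariance Gaussian to a speaker's feature frames."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(X, mean, var):
    """Total log-likelihood of frames X under the diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mean) ** 2 / var)

def identify(utterance, models):
    """Return the speaker label whose model best explains the utterance."""
    return max(models, key=lambda s: log_likelihood(utterance, *models[s]))

rng = np.random.default_rng(1)
# Hypothetical 4-dimensional feature frames for two enrolled speakers.
train_a = rng.normal(0.0, 1.0, (200, 4))
train_b = rng.normal(3.0, 1.0, (200, 4))
models = {"A": fit_gaussian(train_a), "B": fit_gaussian(train_b)}

test_frames = rng.normal(3.0, 1.0, (50, 4))  # frames drawn near speaker B
print(identify(test_frames, models))  # → B
```

A real system would replace the single Gaussian with a multi-component GMM per speaker and, as in the paper, pick the two best-scoring speakers for the subsequent MAXVQ mask inference.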

14.
15.
After half a century of development, speech recognition technology has matured and is now widely applied in voice-dialing systems, digital remote control, industrial control, and other fields. Because of the limitations of the acoustic and language models in common use, computers can recognize only a limited set of words or sentences, and recognition systems often produce errors when the language changes. To address these problems, a Chinese-English specific-word speech recognition system was built on the HTK speech-processing toolkit, based on hidden-Markov-model principles. The entire build process is driven by code, so the system can quickly regenerate the corresponding recognition model after new training data and a new dictionary are supplied.

16.
In this study a new approach is presented for the recognition of everyday human actions with a fixed camera. The originality of the method lies in characterizing sequences by a temporal succession of semi-global features extracted from “space-time micro-volumes”. The advantage of this approach is the use of robust features (estimated over several frames) combined with the ability to handle actions of variable duration and to segment sequences easily with algorithms specific to time-varying data. Each action is characterized by a temporal sequence that constitutes the input of a hidden Markov model system for recognition. Results on 1,614 sequences performed by several persons validate the proposed approach.

17.
The classical hidden Markov model's implicit representation of state duration does not match the physical reality of speech. To address this shortcoming, state-duration parameters are introduced into the standard HMM, yielding a state-duration HMM (SDHMM) for speech recognition. In recognition experiments, the SDHMM achieves a higher recognition rate than the classical HMM.
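The mismatch the abstract refers to can be shown in a few lines: a standard HMM's self-transition implies a geometric duration distribution whose mode is always one frame, whereas real phone durations peak at some longer value. This is a generic illustration, not the SDHMM's specific duration model; the parameter values are invented.

```python
import numpy as np

def geometric_duration(self_loop, max_d):
    """Implicit state-duration pmf of a standard HMM with self-transition
    probability `self_loop`: P(d) = self_loop**(d-1) * (1 - self_loop)."""
    d = np.arange(1, max_d + 1)
    return self_loop ** (d - 1) * (1 - self_loop)

p = geometric_duration(0.8, 50)
print(np.argmax(p) + 1)        # → 1: the geometric mode is always d = 1

# An explicit duration model (here a discretized Gaussian centred on
# 8 frames, a hypothetical value) can place its mode at a realistic length,
# which is the kind of correction a duration-augmented HMM provides.
dur = np.exp(-0.5 * ((np.arange(1, 51) - 8) / 3.0) ** 2)
dur /= dur.sum()
print(np.argmax(dur) + 1)      # → 8
```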

18.
A new hidden Markov model (HMM) based feature generation scheme is proposed for face recognition (FR) in this paper. In this scheme, the HMM method is used to model classes of face images. A set of Fisher scores is calculated through partial-derivative analysis of the parameters estimated in each HMM. These Fisher scores are combined with traditional features, such as the log-likelihood and appearance-based features, to form feature vectors that exploit the strengths of both local and holistic features of the human face. Linear discriminant analysis (LDA) is then applied to analyze these feature vectors for FR. Performance improvements are observed over the stand-alone HMM method and the Fisherface method, which uses appearance-based feature vectors. A further study reveals that, by reducing the number of models involved in the training and testing stages of LDA, the proposed feature generation scheme can maintain very high discriminative power at much lower computational complexity compared with the traditional HMM-based FR system. Experimental results on a publicly available face database demonstrate the viability of this scheme.
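The LDA stage above projects feature vectors onto directions that maximize between-class over within-class scatter. For the two-class case this has a closed form, sketched below on synthetic data; the HMM-derived Fisher-score features of the paper are replaced here by made-up 2-D points.

```python
import numpy as np

def lda_direction(X0, X1):
    """Two-class Fisher LDA direction: w = Sw^{-1} (m1 - m0), where Sw is
    the pooled within-class scatter. Projecting onto w maximizes the ratio
    of between-class to within-class variance."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # Small ridge term for numerical stability when Sw is near-singular.
    return np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)

rng = np.random.default_rng(2)
# Classes differ along axis 0; axis 1 is high-variance but uninformative.
X0 = rng.normal([0.0, 0.0], [1.0, 5.0], (300, 2))
X1 = rng.normal([2.0, 0.0], [1.0, 5.0], (300, 2))
w = lda_direction(X0, X1)
w /= np.linalg.norm(w)
print(np.round(np.abs(w), 1))  # ≈ [1. 0.]: LDA picks the separating axis
```

Note how LDA down-weights the noisy second axis despite its large variance, which is why it suits the heterogeneous Fisher-score/appearance feature vectors the paper assembles.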

19.
Design and implementation of an automatic Mandarin pronunciation scoring system based on speech recognition
Advances in speech recognition technology have made interaction between humans and computers practical. To address shortcomings in pronunciation teaching in Chinese-as-a-foreign-language instruction, and drawing on the principles of speech recognition, this paper presents the design of an automatic Mandarin pronunciation assessment system, describing the system's structure, functions, and workflow in detail. The key techniques and steps of the implementation are introduced: the dynamic time warping algorithm, construction of the speech corpus, initial/final segmentation, and the grading criteria for evaluation. Small-scale trials show that the system has reference value for testing foreign students' Mandarin pronunciation.

20.
To achieve a higher speech recognition rate in complex conditions such as noise contamination, a new product hidden Markov model (HMM) is proposed for bimodal (audio-visual) speech recognition, and the relationship between the weighting coefficients and the instantaneous signal-to-noise ratio (SNR) is studied and determined. On the basis of independently trained audio and video HMMs, the model builds a two-dimensional training model and uses a re-estimation strategy to ensure higher accuracy. A generalized probabilistic descent (GPD) algorithm is introduced to adjust the weighting coefficients of the audio and visual features. Experimental results show that the proposed method achieves good, stable recognition performance in noisy environments.
