Similar Documents
20 similar documents found.
1.
2.
Many recent studies show that Augmented Reality (AR) and Automatic Speech Recognition (ASR) technologies can be used to help people with disabilities, but most of these studies have been confined to their own specialized fields. Audio-Visual Speech Recognition (AVSR) is an advance in ASR technology that combines audio, video, and facial expressions to capture a narrator's speech. In this paper, we combine AR and AVSR technologies into a new system to help deaf and hard-of-hearing people. The proposed system captures a narrator's speech in real time, converts it into readable text, and shows the text directly on an AR display, so deaf people can easily read the narrator's speech and hearing people do not need to learn sign language to communicate with them. The evaluation results show that the system has a lower word error rate than ASR and VSR under different noisy conditions, and that the AVSR techniques improve recognition accuracy in noisy places. A survey of 100 deaf people also shows that more than 80% of them are very interested in using the system on portable devices as a communication assistant.
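A minimal sketch of the capture-recognize-display loop such a system implies; `recognize_chunk` and `ARDisplay` are hypothetical placeholders standing in for an AVSR engine and an AR overlay, not the authors' implementation.

```python
# Sketch of a capture -> recognize -> display loop for an AR captioning
# system. `recognize_chunk` and `ARDisplay` are hypothetical placeholders.
import queue
import threading

audio_chunks = queue.Queue()          # filled by a microphone/camera callback

def recognize_chunk(chunk):
    """Hypothetical AVSR call: returns recognized text for one chunk."""
    return "..."                      # a real system calls its AVSR engine here

class ARDisplay:
    """Hypothetical AR overlay that renders caption text."""
    def show(self, text):
        print(f"[AR caption] {text}")

def caption_loop(display):
    while True:
        chunk = audio_chunks.get()
        if chunk is None:             # sentinel to stop the loop
            break
        text = recognize_chunk(chunk)
        if text:
            display.show(text)        # captions appear as the narrator speaks

threading.Thread(target=caption_loop, args=(ARDisplay(),), daemon=True).start()
```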

3.
A distributed rule-based system for automatic speech recognition is described. Acoustic property extraction and feature hypothesization are performed by applying sequences of operators. These sequences, called plans, are executed by cooperating expert programs. Experimental results on the automatic segmentation and recognition of phrases made of connected letters and digits are described and discussed.

4.
5.
This paper investigates a noise-robust technique for automatic speech recognition that exploits hidden Markov modeling of stereo speech features from clean and noisy channels. An HMM trained this way, referred to as a stereo HMM, has in each state a Gaussian mixture model (GMM) over the joint distribution of clean and noisy speech features. Given noisy speech input, the stereo HMM gives rise to a two-pass compensation and decoding process: MMSE denoising based on N-best hypotheses is performed first, followed by decoding of the denoised speech in a reduced search space on a lattice. Compared to feature-space GMM-based denoising approaches, the stereo HMM is advantageous in that it provides finer-grained noise compensation and uses information from the whole noisy feature sequence to predict each individual clean feature. Experiments on large-vocabulary spontaneous speech from speech-to-speech translation applications show that the proposed technique outperforms its feature-space counterpart in noisy conditions while maintaining decent performance in clean conditions.
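As a concrete illustration of the first pass, here is a minimal sketch of MMSE clean-feature estimation from a joint ("stereo") Gaussian mixture over concatenated clean and noisy features. The paper ties such mixtures to HMM states and N-best hypotheses, which this sketch omits; all parameter names are illustrative.

```python
# MMSE estimate of the clean feature x given noisy feature y under a joint
# GMM over [x; y]: x_hat = sum_k gamma_k(y) * E[x | y, k].
import numpy as np
from scipy.stats import multivariate_normal

def mmse_denoise(y, weights, mu_x, mu_y, S_xy, S_yy):
    """y: noisy feature (d,); weights: (K,); mu_x, mu_y: (K, d);
    S_xy, S_yy: (K, d, d) cross- and noisy-covariance blocks."""
    K = len(weights)
    # Posterior responsibility of each mixture component given y.
    lik = np.array([weights[k] * multivariate_normal.pdf(y, mu_y[k], S_yy[k])
                    for k in range(K)])
    gamma = lik / lik.sum()
    # Per-component conditional mean E[x | y, k], then mix.
    x_hat = np.zeros_like(mu_x[0])
    for k in range(K):
        cond = mu_x[k] + S_xy[k] @ np.linalg.solve(S_yy[k], y - mu_y[k])
        x_hat += gamma[k] * cond
    return x_hat
```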

6.
Audio-visual speech recognition (AVSR) has shown impressive improvements over audio-only speech recognition in the presence of acoustic noise. However, region-of-interest detection and feature extraction can limit recognition performance, because the visual speech information is typically obtained from planar video data. In this paper, we depart from traditional visual speech information and propose an AVSR system that integrates 3D lip information, using the Microsoft Kinect multi-sensory device for data collection. Different feature extraction and selection algorithms were applied to the planar images and the 3D lip information, so as to fuse them into a joint visual-3D lip feature. For automatic speech recognition (ASR), fusion methods were investigated and the audio-visual speech information was integrated into a state-synchronous two-stream hidden Markov model. The experimental results demonstrate that our AVSR system integrating 3D lip information improves on the recognition performance of traditional ASR and AVSR systems in acoustically noisy environments.
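A minimal sketch of how a state-synchronous two-stream HMM commonly combines its streams: per-state audio and visual log-likelihoods are weighted in the log domain. The paper's exact weighting is not specified; `lam` here is illustrative.

```python
# State-synchronous two-stream emission score: both streams share the HMM
# state sequence, and their log-likelihoods are combined per frame.
def two_stream_log_emission(log_b_audio, log_b_visual, lam=0.7):
    """Combine stream log-likelihoods for one state and one frame.
    lam weights the audio stream; (1 - lam) weights the visual stream."""
    return lam * log_b_audio + (1.0 - lam) * log_b_visual

# In acoustic noise, lowering `lam` shifts reliance toward the 3D lip stream.
```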

7.
8.
Recognition of emotion in speech has recently matured into one of the key disciplines in speech analysis, serving next-generation human-machine interaction and communication. However, unlike automatic speech recognition, emotion recognition from an isolated word or phrase is inappropriate for conversation, because a complete emotional expression may span several sentences and may end on any word in a dialogue. In this paper, we present a segment-based emotion recognition approach for continuous Mandarin Chinese speech, in which the unit of recognition is not a phrase or a sentence but an emotional expression in dialogue. To that end, we first evaluate the performance of several classifiers in short-sentence speech emotion recognition architectures; the experiments show that the WD-KNN classifier achieves the best accuracy among the five classification techniques for 5-class emotion recognition. We then implement a continuous Mandarin Chinese speech emotion recognition system based on WD-KNN, with an emotion radar chart that represents the intensity of each emotion component in speech. The proposed approach shows how emotions can be recognized from speech signals and, in turn, how emotional states can be visualized.
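For reference, a minimal sketch of a weighted-distance KNN vote, assuming inverse-distance weighting over acoustic feature vectors; the paper's exact WD-KNN weighting scheme may differ.

```python
# Weighted-distance KNN: nearer neighbours contribute more to the class vote.
import numpy as np

def wd_knn_predict(x, X_train, y_train, k=5, eps=1e-9):
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training samples
    idx = np.argsort(d)[:k]                   # indices of the k nearest
    votes = {}
    for i in idx:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + eps)
    return max(votes, key=votes.get)          # emotion label with largest weight
```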

9.
A maximum likelihood approach to continuous speech recognition
Speech recognition is formulated as a problem of maximum likelihood decoding. This formulation requires statistical models of the speech production process. In this paper, we describe a number of statistical models for use in speech recognition. We give special attention to determining the parameters for such models from sparse data. We also describe two decoding methods, one appropriate for constrained artificial languages and one appropriate for more realistic decoding tasks. To illustrate the usefulness of the methods described, we review a number of decoding results that have been obtained with them.
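The maximum likelihood decoding the abstract refers to is the standard noisy-channel decomposition over word sequences $W$ given acoustic evidence $A$:

```latex
\hat{W} = \arg\max_{W} P(W \mid A) = \arg\max_{W} P(A \mid W)\, P(W)
```

where $P(A \mid W)$ is the acoustic model and $P(W)$ the language model.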

10.
This paper presents a method for reconstructing unreliable spectral components of speech signals using the statistical distributions of the clean components. Our goal is to model the temporal patterns in the speech signal and to exploit correlations between speech features in the time and frequency domains simultaneously. In this approach, a hidden Markov model (HMM) is first trained on clean speech data to model the temporal patterns that appear in sequences of spectral components. Using this model, and according to the probability of a noisy spectral component occurring in each state, probability distributions for the noisy components are estimated. Then, by applying maximum a posteriori (MAP) estimation to these distributions, the final estimates of the unreliable spectral components are obtained. The proposed method is compared to a common missing-feature method based on probabilistic clustering of the feature vectors, and to a state-of-the-art method based on sparse reconstruction. The experimental results exhibit a significant improvement in recognition accuracy on a noise-polluted Persian corpus.
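A minimal sketch of one frame of such a reconstruction, simplified to a diagonal Gaussian per HMM state and using the common bounded-MAP rule in the log-spectral domain (the noisy observation upper-bounds the clean energy). Parameter names are illustrative, and the paper's full estimation along state sequences is omitted.

```python
# Bounded MAP reconstruction of unreliable log-spectral components: take the
# state-posterior-weighted clean mean, clipped at the noisy observation.
import numpy as np

def map_reconstruct(y, reliable_mask, mu, state_post):
    """y: observed log-spectrum (d,); reliable_mask: boolean (d,);
    mu: (S, d) per-state clean means; state_post: (S,) HMM state posteriors."""
    mu_mix = state_post @ mu                  # posterior-weighted clean mean
    x_hat = y.copy()                          # reliable components kept as-is
    unreliable = ~reliable_mask
    # MAP under the bound x <= y: the conditional mean, clipped at y.
    x_hat[unreliable] = np.minimum(mu_mix[unreliable], y[unreliable])
    return x_hat
```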

11.
Multimedia Tools and Applications - This paper investigates language modeling with topical and positional information for large vocabulary continuous speech recognition. We first compare among a...

12.
We present a system that can separate and recognize the simultaneous speech of two people recorded in a single channel. Applied to the monaural speech separation and recognition challenge, the system outperformed all other participants, including human listeners, with an overall recognition error rate of 21.6%, compared to the human error rate of 22.3%. The system consists of a speaker recognizer, a model-based speech separation module, and a speech recognizer. For the separation models, we explored a range of speech models that incorporate different levels of constraints on temporal dynamics to help infer the source speech signals. The system achieves its best performance when the model of temporal dynamics closely captures the grammatical constraints of the task. For inference, we compare a 2-D Viterbi algorithm and two loopy belief-propagation algorithms. We show how belief propagation reduces the complexity of temporal inference from exponential to linear in the number of sources and the size of the language model. The best belief-propagation method results in nearly the same recognition error rate as exact inference.
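To make the complexity claim concrete, here is a toy sketch of the exact 2-D Viterbi baseline: decoding two sources jointly means searching the product state space, so the cost grows exponentially with the number of sources, which is the blow-up belief propagation avoids. The probability tables are placeholders, not the paper's models.

```python
# Joint (2-D) Viterbi over two speakers: the state is a pair (i, j), so there
# are K**2 joint states per frame, and the transition max below costs O(K**4).
import numpy as np

def joint_viterbi(log_obs, log_trans1, log_trans2):
    """log_obs: (T, K, K) joint log-likelihood of each mixed frame given
    (state of speaker 1, state of speaker 2); log_transX: (K, K)."""
    T, K, _ = log_obs.shape
    delta = log_obs[0].copy()                        # (K, K) joint scores
    for t in range(1, T):
        # prev[i, j, k, l] = delta[i, j] + trans1[i, k] + trans2[j, l]
        prev = (delta[:, :, None, None]
                + log_trans1[:, None, :, None]
                + log_trans2[None, :, None, :])
        delta = prev.max(axis=(0, 1)) + log_obs[t]   # best predecessor pair
    return delta.max()                               # best joint path score
```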

13.
This paper proposes a method for integrating pronunciation information into a stochastic segment model. Exploiting the properties of the stochastic segment model, a hierarchical artificial neural network is built to obtain the posterior probabilities that a speech segment belongs to each phoneme class, and these probabilities are integrated into the stochastic segment model system through one-pass decoding. Mandarin continuous speech recognition experiments on the "863-test" set show a relative reduction of 5.93% in the Chinese character error rate. The results demonstrate the feasibility of applying pronunciation information to stochastic segment models.
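One plausible reading of the one-pass integration, sketched below: the segment model's acoustic score is combined log-linearly with the hierarchical network's phoneme posterior during decoding. The weight `alpha` is illustrative, not from the paper.

```python
# Log-linear combination of a segment model's acoustic score with a
# neural-network phoneme posterior, applied per hypothesized segment.
import math

def combined_segment_score(acoustic_logp, phoneme_posterior, alpha=0.3):
    """acoustic_logp: segment-model log-likelihood for a hypothesized phoneme;
    phoneme_posterior: ANN posterior P(phoneme | segment), in (0, 1]."""
    return (1.0 - alpha) * acoustic_logp + alpha * math.log(phoneme_posterior)
```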

14.
Speech prosody contains important structural information for performing speech analysis and for extracting syntactic nuclei from spoken sentences. This paper describes a procedure, based on a multichannel system of epoch filters, for recognizing the pulses of vocal cord vibrations by analyzing the speech waveform. Recognition is performed by a stochastic finite-state automaton automatically inferred from experimental data.
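A minimal sketch of recognition with a stochastic finite-state automaton of the kind described: the probability of an observed symbol sequence is accumulated over probabilistic transitions and thresholded. The toy automaton structure and threshold are assumptions, not the inferred model.

```python
# Forward probability through a stochastic FSA; accept a symbol sequence as a
# glottal-pulse pattern if the mass reaching a final state is high enough.
def sfa_accepts(symbols, trans, start, finals, threshold=1e-6):
    """trans: dict mapping (state, symbol) -> list of (next_state, prob)."""
    probs = {start: 1.0}
    for s in symbols:
        nxt = {}
        for state, p in probs.items():
            for n, tp in trans.get((state, s), []):
                nxt[n] = nxt.get(n, 0.0) + p * tp   # accumulate path mass
        probs = nxt
    return sum(probs.get(f, 0.0) for f in finals) >= threshold
```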

15.

16.
This paper describes a set of modeling techniques for detecting a small vocabulary of keywords in running conversational speech. The techniques are applied in the context of a hidden Markov model (HMM) based continuous speech recognition (CSR) approach to keyword spotting. The word-spotting task is derived from the Switchboard conversational speech corpus and involves unconstrained conversational utterances spoken over the public switched telephone network. The utterances in this task contain many of the artifacts characteristic of unconstrained speech as it appears in many telecommunications-based automatic speech recognition (ASR) applications. Results are presented for an experimental study performed on this task. Performance was measured as the percentage of correct keyword detections over a range of false-alarm rates, evaluated over 2.2 h of speech for a 20-keyword vocabulary. The results of the study demonstrate the importance of several techniques, including decision-tree-based allophone clustering for defining acoustic subword units, different representations for non-vocabulary words appearing in the input utterance, and simple language models for keyword detection. Decision-tree-based allophone clustering resulted in a significant increase in keyword detection performance over triphone-based subword units, while reducing the size of the inventory of subword acoustic models by 40%. More complex representations of non-vocabulary speech were also found to significantly improve keyword detection performance; however, they also significantly increased computational complexity.
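For context, a minimal sketch of the likelihood-ratio scoring commonly used in HMM/CSR-based word spotting: the keyword hypothesis's acoustic score is normalized by a filler (non-vocabulary) model and thresholded, with the threshold swept to trade detection rate against false-alarm rate. This is a generic formulation under stated assumptions, not the paper's exact scoring.

```python
# Per-frame normalized log-likelihood ratio for a putative keyword span.
def keyword_detected(logp_keyword, logp_filler, n_frames, threshold):
    """logp_keyword: log P(O | keyword HMM); logp_filler: log P(O | filler)."""
    score = (logp_keyword - logp_filler) / n_frames
    return score > threshold, score

# Sweeping `threshold` traces the detection-rate / false-alarm operating
# curve over which the study reports its results.
```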

17.
Two- and three-layered neural networks (2LNN, 3LNN), which originated from stereovision neural networks, are applied to speech recognition. To accommodate sequential data flow, we consider a window through which new acoustic data enter and from which the final neural activities are output. Inside the window, a recurrent neural network develops neural activity toward a stable point, a process called winner-take-all (WTA) with cooperation and competition. The resulting neural activities clearly showed recognition of the continuous speech of a word. The string of phonemes obtained is compared with reference words using a dynamic programming method. The resulting recognition rate was 96.7% for 100 words spoken by nine male speakers, compared with 97.9% for a hidden Markov model (HMM) with three states and a single Gaussian distribution. These results, which are close to those of the HMM, seem important because the architecture of the neural network is very simple, and the number of parameters in the neural-net equations is small and fixed. This work was presented in part at the Fifth International Symposium on Artificial Life and Robotics, Oita, Japan, January 26–28, 2000.
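A minimal sketch of the dynamic-programming comparison between the network's output phoneme string and reference word transcriptions, using standard edit distance; the paper's exact DP costs are not specified.

```python
# Standard edit distance between a hypothesized phoneme string and a
# reference transcription; the closest lexicon entry wins.
def edit_distance(hyp, ref):
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[m][n]

def recognize(phoneme_string, lexicon):
    """lexicon: dict mapping word -> reference phoneme sequence."""
    return min(lexicon, key=lambda w: edit_distance(phoneme_string, lexicon[w]))
```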

18.
Continuous speech recognition is a comprehensive speech processing technology that integrates speech processing, pattern recognition, and syntactic and semantic analysis. It can recognize arbitrary continuous speech, such as a sentence or a passage of talk, greatly improving the continuity and user experience of voice interaction, and it is one of the core areas of speech recognition technology. This paper reviews the current state of research on continuous speech recognition and several common technical methods, and analyzes and discusses its applications and development prospects.

19.
Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences and identify the recognised words from a trace-back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, that models the human capability of 'early recognition' by computing word activation scores (based on negative log-likelihood scores) during the speech recognition process.

Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly by the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words, the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point.

We investigated two types of predictor for deciding whether a word is considered recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM's performance increases as the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
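For reference, a minimal sketch of the lexical uniqueness point that the second type of predictor builds on: the first phone position at which a word's prefix matches no other lexicon entry. The toy lexicon is an assumption for illustration.

```python
# Lexical uniqueness point: the earliest prefix length at which a word is
# distinguishable from every other entry in the lexicon.
def uniqueness_point(word, lexicon):
    """Return the 1-based phone index where `word` becomes unique, or None."""
    phones = lexicon[word]
    others = [p for w, p in lexicon.items() if w != word]
    for i in range(1, len(phones) + 1):
        prefix = phones[:i]
        if not any(p[:i] == prefix for p in others):
            return i
    return None  # word is a prefix of (or identical to) another entry

lex = {"captain": tuple("kaeptIn"), "captive": tuple("kaeptIv"),
       "cap": tuple("kap")}
print(uniqueness_point("captain", lex))  # unique once its phones diverge
```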

20.
The main recognition procedure in modern HMM-based continuous speech recognition systems is the Viterbi algorithm, which uses dynamic programming to find the best state sequence for the input speech in the search space. In this paper, dynamic programming is replaced by a search method based on particle swarm optimization. The main idea is to generate an initial population of particles representing speech segmentation vectors; the particles then move toward the best segmentation through an update rule applied over iterations. We introduce a new particle representation and recognition process consistent with the nature of continuous speech recognition. The idea was tested on bi-phone recognition and continuous speech recognition workbenches, and the results show that the proposed search method reaches the performance of the Viterbi segmentation algorithm, with only a slight degradation in the accuracy rate.
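A minimal sketch of the particle update such an approach relies on, using the standard PSO velocity and position rules over segmentation-boundary vectors; the coefficients are conventional defaults, not the paper's settings.

```python
# One PSO iteration over a swarm of candidate segmentations: each particle is
# a vector of segment boundaries, pulled toward its personal best and the
# swarm's global best segmentation.
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """x, v: (n_particles, n_boundaries) positions and velocities."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    # Keep boundaries integer-valued and ordered so each particle remains a
    # valid segmentation of the utterance.
    x = np.sort(np.rint(x), axis=1)
    return x, v
```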
