Similar Documents
A total of 18 similar documents were found.
1.
The performance of spreadsheet users was compared for two modes of input to the computer, keyboard and continuous voice recognition (CVR), for subjects classified by their decision style. In addition, the data for this experiment were compared to the results of a similar experiment that used a discrete word recognition system. Dependent measures were task completion time, accuracy, keystroke count, correction count, and user confidence for spreadsheet tasks. Results, using a speaker-dependent continuous voice recognizer, showed that for both simple data input and more complex analytical problems, subjects did not perform more effectively using CVR than using the keyboard. In addition, a subject's decision style was found not to interact with CVR for an effect on performance. Compared to the earlier discrete word recognition results, CVR tended to shorten the time to complete a spreadsheet analysis task.

2.
This paper describes research that addresses the problem of dialog management from a strong, context-centric approach. We further present a quantitative method of measuring the importance of contextual cues when dealing with speech-based human–computer interactions. It is generally accepted that using context in conjunction with a human input, such as spoken speech, enhances a machine's understanding of the user's intent as a means to pinpoint an adequate reaction. For this work, however, we present a context-centric approach in which the use of context is the primary basis for understanding and not merely an auxiliary process. We employ an embodied conversation agent that facilitates the seamless engagement of a speech-based information-deployment entity by its human end user. This dialog manager emphasizes the use of context to drive its mixed-initiative discourse model. A typical, modern automatic speech recognizer (ASR) was incorporated to handle the speech-to-text translations. As is the nature of these ASR systems, the recognition rate is consistently less than perfect, thus emphasizing the need for contextual assistance. The dialog system was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. Experiments were performed to evaluate the robustness of its performance, namely through measures of naturalness and usefulness, with respect to the emphasized use of context. The contribution of this work is to provide empirical evidence of the importance of conversational context in speech-based human–computer interaction using a field-tested context-centric dialog manager.

3.
Since the 1970s, many improvements have been made in the technology available for automatic speech recognition (ASR). Changes in the methods of analysing the incoming speech have resulted in larger, more complex vocabularies being used with greater recognition accuracy. Despite this enhanced performance and substantial research activity, the introduction of voice input into the office is still largely unrealized. This paper reviews the state-of-the-art of office applications of ASR, dividing them into the areas of voice messaging and word processing activities, data entry and information retrieval systems, and environmental control. Within these areas, cartographic computer-aided-design systems are identified as an application with proven success. The slow growth of voice input in the office is discussed in the light of constraints imposed by existing speech technology, and the need for human factors evaluation of potential applications.

4.
In this paper, we derive an uncertainty decoding rule for automatic speech recognition (ASR), which accounts for both corrupted observations and inter-frame correlation. The conditional independence assumption, prevalent in hidden Markov model-based ASR, is relaxed to obtain a clean speech posterior that is conditioned on the complete observed feature vector sequence. This is a more informative posterior than one conditioned only on the current observation. The novel decoding is used to obtain a transmission-error robust remote ASR system, where the speech capturing unit is connected to the decoder via an error-prone communication network. We show how the clean speech posterior can be computed for communication links characterized by either bit errors or packet loss. Recognition results are presented for both distributed and network speech recognition, where in the latter case common voice-over-IP codecs are employed.
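For orientation, a hedged sketch of the general form such an uncertainty decoding rule takes (the notation is illustrative, not the paper's): the per-frame acoustic likelihood is replaced by a marginalization over the clean feature $x_t$, whose posterior is conditioned on the whole observed sequence $y_{1:T}$ rather than on $y_t$ alone,

$$p(y_{1:T} \mid s_{1:T}) \;\propto\; \prod_{t=1}^{T} \int p(x_t \mid s_t)\,\frac{p(x_t \mid y_{1:T})}{p(x_t)}\,\mathrm{d}x_t .$$

Under the usual conditional independence assumption, $p(x_t \mid y_{1:T})$ collapses to $p(x_t \mid y_t)$; retaining the full sequence is what allows inter-frame correlation to be exploited.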

5.
Automatic speech recognition (ASR) systems suffer from variation in the acoustic quality of input speech. Speech may be produced in noisy environments, and different speakers have their own speaking styles; variation can be observed even within the same utterance, and in the same speaker in different moods. All these uncertainties and variations should be normalized to obtain a robust ASR system. In this paper, we apply and evaluate different approaches to acoustic quality normalization within an utterance for robust ASR. Several HMM (hidden Markov model)-based systems using utterance-level, word-level, and monophone-level normalization are evaluated against an HMM-SM (subspace method)-based system using monophone-level normalization for normalizing variations and uncertainties in an utterance. The SM can represent variations of fine structures in sub-words as a set of eigenvectors, and so performs better at the monophone level than the HMM. Experimental results show that word accuracy is significantly improved by the HMM-SM-based system with monophone-level normalization compared to the typical HMM-based system with utterance-level normalization, in both clean and noisy conditions. The results also suggest that monophone-level normalization using the SM outperforms that using the HMM.

6.
Conditional random fields (CRFs) are a statistical framework that has recently gained popularity in both the automatic speech recognition (ASR) and natural language processing communities because of the different nature of the assumptions made in predicting label sequences compared to the more traditional hidden Markov model (HMM). In the ASR community, CRFs have been employed in a manner similar to HMMs, using the sufficient statistics of input data to compute the probability of label sequences given acoustic input. In this paper, we explore the application of CRFs to combine local posterior estimates provided by multilayer perceptrons (MLPs) corresponding to the frame-level prediction of phone classes and phonological attribute classes. We compare phonetic recognition using CRFs to an HMM system trained on the same input features and show that the monophone-label CRF achieves performance superior to a monophone-based HMM and comparable to a 16-Gaussian-mixture triphone-based HMM; in both cases, the CRF obtains these results with far fewer free parameters. The CRF is also better able to combine these posterior estimators, achieving a substantial increase in performance over an HMM-based triphone system by mixing the two highly correlated sets of phone class and phonetic attribute class posteriors.
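As background, the linear-chain CRF used for frame-level phone labeling scores a label sequence $\mathbf{y}$ against observations $\mathbf{x}$ in the standard form (here the feature functions $f_k$ would include the MLP posterior estimates described above):

$$p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})}\,\exp\!\Bigg(\sum_{t=1}^{T}\sum_{k}\lambda_k\, f_k\big(y_{t-1},\, y_t,\, \mathbf{x},\, t\big)\Bigg),$$

where $Z(\mathbf{x})$ normalizes over all label sequences and the weights $\lambda_k$ are trained discriminatively.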

7.
We are interested in the problem of robust understanding from noisy spontaneous speech input. With advances in automatic speech recognition (ASR), there has been increasing interest in spoken language understanding (SLU). A challenge in large-vocabulary spoken language understanding is robustness to ASR errors. State-of-the-art spoken language understanding relies on the best ASR hypotheses (ASR 1-best). In this paper, we propose methods for a tighter integration of ASR and SLU using word confusion networks (WCNs). WCNs obtained from ASR word graphs (lattices) provide a compact representation of multiple aligned ASR hypotheses along with word confidence scores, without compromising recognition accuracy. We present our work on exploiting WCNs instead of simply using ASR 1-best hypotheses. We focus on the tasks of named entity detection and extraction and call classification in a spoken dialog system, although the idea is more general and applicable to other spoken language processing tasks. For named entity detection, we improved the F-measure by 6–10% absolute using both word lattices and WCNs. Processing WCNs was 25 times faster than processing lattices, which is very important for real-life applications. For call classification, we show a 5–10% relative reduction in error rate using WCNs compared to ASR 1-best output.
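To make the data structure concrete, here is a minimal sketch of a WCN as a sequence of slots of competing (word, posterior) hypotheses; the structure and names are illustrative, not taken from the paper:

```python
# Hedged sketch: a word confusion network (WCN) represented as a list of
# "slots", each holding competing (word, posterior) hypotheses for one
# position. Structure and names are illustrative, not from the paper.

def wcn_best_path(wcn):
    """Pick the highest-posterior word in each slot (the WCN 1-best)."""
    return [max(slot, key=lambda h: h[1])[0] for slot in wcn]

def wcn_candidates(wcn, threshold=0.1):
    """Keep all hypotheses above a posterior threshold, e.g. as alternative
    inputs to named entity detection or call classification."""
    return [[w for w, p in slot if p >= threshold] for slot in wcn]

wcn = [
    [("flights", 0.7), ("lights", 0.3)],
    [("to", 0.9), ("two", 0.1)],
    [("boston", 0.6), ("austin", 0.4)],
]
print(wcn_best_path(wcn))         # ['flights', 'to', 'boston']
print(wcn_candidates(wcn, 0.25))  # [['flights', 'lights'], ['to'], ['boston', 'austin']]
```

Keeping the lower-posterior competitors around, rather than committing to the 1-best, is what gives the downstream SLU components their robustness to recognition errors.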

8.
Although automatic speech recognition (ASR) can provide a medium for controlling computers that is relatively easy to use, novice users often have problems with it during their initial practice. In this study, two methods for training subjects to use ASR are compared. One group of subjects received a short demonstration given by an experienced ASR user, and the other group received verbal instructions on how to use the device. The results show that subjects given a demonstration achieved better performance than those given instructions (p < 0.005). This is explained by the fact that the successful use of ASR requires procedural knowledge, which is better acquired through some form of practice than through instruction. It is concluded that a demonstration provides 'practice by proxy'. 'Task-like' forms of enrolment are discussed; it is suggested that although they can provide the possibility of practice, they are not applicable to all types of ASR use. A demonstration provides users with task familiarization and an appropriate style of speech.

9.
In a series of experiments, isolated-word automated speech recognition (ASR) was compared with keyboard and mouse interfaces for three data entry tasks: textual phrase entry, selection from a list, and numerical data entry. To effect fair comparisons, the tasks were designed to minimize the transaction cycle for each input mode and data type, and the main comparisons used times from only correct data entries. With the hardware and software employed, the results indicate that for inputting short phrases, ASR competes only if the typist's speed is below 45 words per minute. For selecting an item from a list, ASR offers an advantage only if the list length exceeds 15 items. For entering numerical data, ASR offers no advantage over keypad or mouse. An extrapolation to latency-free ASR suggests that even as hardware and software become faster, human factors will dominate and the results would shift only slightly in favor of ASR.

10.
The performance of current automatic speech recognition (ASR) systems often deteriorates radically when the input speech is corrupted by various kinds of noise. Several methods to improve ASR robustness have been proposed over the last few decades. The related literature can generally be classified into two categories, according to whether the methods operate directly in the feature domain or consider specific statistical characteristics of the features. In this paper, we present a polynomial regression approach whose merit is to directly characterize the relationship between speech features and their corresponding distribution characteristics in order to compensate for noise interference. The proposed approach and a variant were thoroughly investigated and compared with several existing noise robustness approaches. All experiments were conducted on the Aurora-2 database and task. The results show that our approaches achieve considerable word error rate reductions over the baseline system and are comparable to most of the conventional robustness approaches discussed in this paper.
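As one concrete, hedged reading of this idea, the sketch below fits, per feature dimension, a low-order polynomial that maps noisy feature values toward reference values drawn from a target distribution, in the spirit of polynomial-fit histogram equalization; the paper's exact formulation may differ, and every name here is illustrative:

```python
# Hedged sketch of polynomial-regression feature compensation (illustrative,
# not the paper's exact method): per dimension, regress a low-order polynomial
# from noisy cepstral values to standard-normal quantiles of their ranks.
import numpy as np
from scipy.stats import norm

def fit_compensator(feats, order=2):
    """feats: (T, D) noisy features. Returns per-dimension polynomial
    coefficients mapping noisy values toward a N(0, 1) target."""
    T, _ = feats.shape
    ranks = feats.argsort(axis=0).argsort(axis=0)   # rank of each frame, per dim
    targets = norm.ppf((ranks + 0.5) / T)           # reference quantiles
    return [np.polyfit(feats[:, d], targets[:, d], order)
            for d in range(feats.shape[1])]

def compensate(feats, coeffs):
    """Apply the fitted polynomials to each feature dimension."""
    return np.stack([np.polyval(c, feats[:, d])
                     for d, c in enumerate(coeffs)], axis=1)

feats = np.random.randn(200, 13) * 2.0 + 1.0        # stand-in noisy MFCCs
compensated = compensate(feats, fit_compensator(feats))
```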

11.
As mobile computing devices grow smaller and as in-car computing platforms become more common, we must augment traditional methods of human-computer interaction. Although speech interfaces have existed for years, the constrained system resources of pervasive devices, such as limited memory and processing capabilities, present new challenges. We provide an overview of embedded automatic speech recognition (ASR) on pervasive devices and discuss its ability to help us develop pervasive applications that meet today's marketplace needs. ASR recognizes spoken words and phrases. State-of-the-art ASR uses a phoneme-based approach for speech modeling: it gives each phoneme (or elementary speech sound) in the language under consideration a statistical representation expressing its acoustic properties.
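To make the phoneme-based view concrete, a minimal hedged sketch (the lexicon entries are hypothetical examples, not from the article): each word is expanded into the phoneme string whose per-phoneme statistical models the recognizer then scores.

```python
# Hedged sketch: a pronouncing lexicon maps words to phoneme sequences;
# each phoneme would have its own statistical acoustic model. Entries
# here are hypothetical examples, not from the article.
LEXICON = {
    "call": ["K", "AO", "L"],
    "home": ["HH", "OW", "M"],
}

def phones_for(words):
    """Expand a word sequence into the phoneme sequence to be scored."""
    return [p for w in words for p in LEXICON[w.lower()]]

print(phones_for(["call", "home"]))  # ['K', 'AO', 'L', 'HH', 'OW', 'M']
```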

12.
A degradation in the performance of automatic speech recognition (ASR) systems is observed under mismatched training and testing conditions. One reason for this degradation is the presence of emotion in the speech. The main objective of this work is to improve the performance of ASR under emotional conditions using prosody modification. The influence of different emotions on the prosody parameters is exploited in this work. Emotion conversion methods are employed to generate word-level, non-uniform prosody-modified speech, using modification factors for prosodic components such as pitch, duration, and energy. The prosody modification is done in two ways: first, emotion conversion is performed at the testing stage to generate neutral speech from the emotional speech; second, the ASR is trained with emotional speech generated from the neutral speech. In this work, the presence of emotion in speech is studied for Telugu ASR systems. A new database, the IIIT-H Telugu speech corpus, was collected to build the large-vocabulary neutral-speech Telugu ASR system, and emotional speech samples from the IITKGP-SESC Telugu corpus are used to test it. The emotions of anger, happiness, and compassion are considered during the evaluation. An improvement in the performance of ASR systems is observed with the prosody-modified speech.
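As a hedged illustration of the word-level modification-factor idea (the paper's emotion conversion method is more involved; all names and factor values below are made up for illustration):

```python
# Hedged sketch: scale each word's prosodic components by emotion-to-neutral
# modification factors. Factor values here are hypothetical.
from dataclasses import dataclass

@dataclass
class WordProsody:
    pitch_hz: float    # mean fundamental frequency of the word
    duration_s: float  # word duration
    energy: float      # relative energy

def to_neutral(w: WordProsody, pitch_f: float, dur_f: float, energy_f: float) -> WordProsody:
    """Apply per-word modification factors for pitch, duration, and energy."""
    return WordProsody(w.pitch_hz * pitch_f, w.duration_s * dur_f, w.energy * energy_f)

angry_word = WordProsody(pitch_hz=220.0, duration_s=0.28, energy=1.6)
print(to_neutral(angry_word, pitch_f=0.85, dur_f=1.10, energy_f=0.70))
```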

13.
Desktop interaction solutions are often inappropriate for mobile devices due to small screen size and portability needs. Speech recognition can improve interactions by providing a relatively hands-free solution that can be used in various situations. While mobile systems are designed to be transportable, few studies have examined the effects of motion on mobile interactions. This paper investigates the effect of motion on automatic speech recognition (ASR) input for mobile devices. Recognition error rates (RER) were examined with subjects walking or seated while performing text-input tasks, along with the effect of ASR enrollment conditions on RER. The obtained results suggest changes in user training of ASR systems for mobile and seated usage.

14.
In this paper we propose a new method for utilising phase information by complementing traditional magnitude-only spectral subtraction speech enhancement with complex spectrum subtraction (CSS). The proposed approach has the following advantages over traditional magnitude-only spectral subtraction: (a) it introduces complementary information into the enhancement algorithm; (b) it reduces the total number of algorithmic parameters; and (c) it is designed to improve clean speech magnitude spectra and is therefore suitable for both automatic speech recognition (ASR) and speech perception applications. Oracle-based ASR experiments verify this approach, showing an average of 20% relative word accuracy improvement when accurate estimates of the phase spectrum are available. Based on sinusoidal analysis and assuming stationarity between observations (an assumption shown to be better approximated as the frame rate is increased), this paper also proposes a novel method for acquiring the phase information, called Phase Estimation via Delay Projection (PEDEP). Further oracle ASR experiments validate the potential of the proposed PEDEP technique in ideal conditions. A realistic implementation of CSS with PEDEP shows performance comparable to state-of-the-art spectral subtraction techniques in 15–20 dB signal-to-noise ratio environments. These results clearly demonstrate the potential of using phase spectra in spectral subtractive enhancement applications, and at the same time highlight the need for more accurate phase estimates in a wider range of noise conditions.
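For contrast, here are the two subtraction schemes in hedged, illustrative notation, with noisy spectrum $Y(\omega)$ and noise estimate $\hat{N}(\omega)$:

$$|\hat{S}(\omega)| = \max\big(|Y(\omega)| - |\hat{N}(\omega)|,\; 0\big) \quad \text{(magnitude-only subtraction)},$$

$$\hat{S}(\omega) = Y(\omega) - \hat{N}(\omega) \quad \text{(complex spectrum subtraction)}.$$

The complex form requires a phase estimate for the noise term, which is where a method such as PEDEP comes in.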

15.
Recent developments in research on humanoid robots and interactive agents have highlighted the importance of, and expectations for, automatic speech recognition (ASR) as a means of endowing such an agent with the ability to communicate via speech. This article describes some of the approaches pursued at NTT Communication Science Laboratories (NTT-CSL) for dealing with such challenges in ASR. In particular, we focus on methods for fast search through finite-state machines, Bayesian solutions for the modeling and classification of speech, and a discriminative training approach for minimizing errors in large-vocabulary continuous speech recognition.

16.
This paper presents our work on automatic speech recognition (ASR) for under-resourced languages, with application to Vietnamese. Different techniques for bootstrapping acoustic models are presented. First, we present the use of acoustic-phonetic unit distances and the potential of crosslingual acoustic modeling for under-resourced languages. Experimental results on Vietnamese showed that with only a few hours of target-language speech data, crosslingual context-independent modeling worked better than crosslingual context-dependent modeling; when more speech data were available, however, the context-dependent models came out ahead. In both cases, the crosslingual systems were better than the monolingual baseline systems. Grapheme-based acoustic modeling, which avoids building a phonetic dictionary, is also investigated in our work. Finally, since the use of sub-word units (morphemes, syllables, characters, etc.) can reduce the high out-of-vocabulary rate and mitigate the lack of text resources in statistical language modeling for under-resourced languages, we propose several methods to decompose, normalize, and combine word and sub-word lattices generated from different ASR systems. The proposed lattice combination scheme yields a relative syllable error rate reduction of 6.6% over the sentence MAP baseline method on a Vietnamese ASR task.

17.
This paper describes the preparation, recording, analysis, and evaluation of a new speech corpus for Modern Standard Arabic (MSA). The corpus contains a total of 415 sentences recorded by 40 (20 male and 20 female) native Arabic speakers from 11 different Arab countries representing three major regions (Levant, Gulf, and Africa). Three hundred and sixty-seven sentences are considered phonetically rich and balanced and are used for training Arabic automatic speech recognition (ASR) systems; rich means that the set contains all the phonemes of the Arabic language, while balanced means that it preserves the phonetic distribution of the language. The remaining 48 sentences were created for testing purposes; they are mostly foreign to the training sentences and share hardly any words with them. To evaluate the corpus, Arabic ASR systems were developed using the Carnegie Mellon University (CMU) Sphinx 3 tools at both the training and testing/decoding levels. The speech engine uses 3-emitting-state hidden Markov models (HMM) for triphone-based acoustic models. Based on experimental analysis of about 8 hours of training speech data, the best acoustic model uses continuous observation probability densities with 16 Gaussian mixtures, with the state distributions tied to 500 senones. The language model contains unigrams, bigrams, and trigrams. For the same speakers with different sentences, the Arabic ASR systems obtained an average word error rate (WER) of 9.70%; for different speakers with the same sentences, an average WER of 4.58%; and for different speakers with different sentences, an average WER of 12.39%.
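The WER figures quoted above follow the standard definition (a general formula, not specific to this paper): with $S$ substitutions, $D$ deletions, $I$ insertions, and $N$ words in the reference transcript,

$$\mathrm{WER} = \frac{S + D + I}{N} \times 100\% .$$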

18.
Conventional hidden Markov model (HMM) based automatic speech recognition (ASR) systems generally use cepstral features as acoustic observations and phonemes as basic linguistic units. Some of the most powerful features currently used in ASR systems are Mel-frequency cepstral coefficients (MFCCs). Speech recognition is inherently complicated by the variability of the speech signal, which includes within- and across-speaker variability. This leads to several kinds of mismatch between acoustic features and acoustic models and hence degrades system performance. The sensitivity of MFCCs to speech signal variability motivates many researchers to investigate new sets of speech feature parameters in order to make the acoustic models more robust to this variability and thus improve system performance. The combination of diverse acoustic feature sets has great potential to enhance the performance of ASR systems. This paper is part of ongoing research efforts aspiring to build an accurate Arabic ASR system for teaching and learning purposes. It addresses the integration of complementary features into standard HMMs in order to make them more robust and thus improve their recognition accuracy. The complementary features investigated in this work are voiced formants and pitch, in combination with conventional MFCC features. A series of experiments under various combination strategies was performed to determine which of these integrated features significantly improve system performance. The Cambridge HTK tools were used as the development environment, and experimental results showed that the error rate was successfully decreased; the achieved results seem very promising, even without using language models.
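As background on the standard MFCC computation the abstract refers to (the textbook formulation, not anything specific to this paper): log mel filterbank energies $E_m$, $m = 1, \ldots, M$, are decorrelated with a discrete cosine transform,

$$c_n = \sum_{m=1}^{M} \log(E_m)\,\cos\!\left[\frac{\pi n}{M}\left(m - \tfrac{1}{2}\right)\right], \qquad n = 0, 1, \ldots, N-1 ,$$

yielding the first $N$ cepstral coefficients $c_n$ per frame.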
