Similar Articles
Found 20 similar articles (search time: 328 ms)
1.
This paper analyzes catastrophic fusion, a problem that occurs in multimodal recognition systems that integrate the output of several modules while operating in non-stationary environments. For concreteness we frame the analysis around automatic audio-visual speech recognition (AVSR), but the issues are very general and arise in multimodal recognition systems that must work in a wide variety of contexts. Catastrophic fusion is said to have occurred when the performance of a multimodal system is inferior to that of some isolated module, e.g., when the audio-visual speech recognition system performs worse than the audio system alone. Catastrophic fusion arises because recognition modules make implicit assumptions and thus operate correctly only within a certain context. Practice shows that when modules are tested in contexts inconsistent with their assumptions, their influence on the fused product tends to increase, with catastrophic results. We propose a principled solution to this problem based on Bayesian ideas of competitive models and inference robustification. We study the approach analytically on a classic Gaussian discrimination task and then apply it to a realistic AVSR problem with excellent results.
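The competitive-models idea can be illustrated with a toy sketch (not the paper's actual formulation; the prior and the background log-likelihoods below are illustrative): each module's likelihood is mixed with a broad background "competitor" model, so a module whose assumptions are violated contributes a bounded, nearly constant term instead of dominating the fused score.

```python
import math

def robust_log_likelihood(module_ll, background_ll, prior=0.9):
    """Mix a module's log-likelihood with a broad background model.
    When the module's assumptions hold, the mixture tracks module_ll;
    when they are badly violated, the score is floored near the
    background term, so the module cannot drag the fusion down
    without bound."""
    return math.log(prior * math.exp(module_ll)
                    + (1.0 - prior) * math.exp(background_ll))

def fused_score(audio_ll, video_ll, audio_bg=-5.0, video_bg=-5.0):
    """Per-class fused score: sum of robustified per-stream terms."""
    return (robust_log_likelihood(audio_ll, audio_bg)
            + robust_log_likelihood(video_ll, video_bg))
```

With this floor in place, a video module operating far outside its context (log-likelihood of, say, -100) contributes roughly the background level rather than vetoing a confident audio module.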

2.
Given that bimodal audio-visual speech recognition can effectively improve recognition rates in noisy environments, this paper designs an experimental in-vehicle voice-command recognition system. The system simulates an in-vehicle environment and incorporates the speaker's visual information into the speech recognition system. It consists of three parts: model training, offline recognition, and online recognition. Online recognition uses speech as the human-computer interaction channel throughout and supports user adaptation. The offline recognition part collects hierarchical statistics on the data produced by the system, making it well suited to research on bimodal speech recognition algorithms.

3.
秦伟  韦岗 《计算机应用研究》2007,24(11):100-102
A stream-weight optimization method is proposed, based on a likelihood-ratio maximization criterion and the N-best algorithm. Experiments show that the new method obtains suitable stream weights even with only a small amount of optimization data. Moreover, under different signal-to-noise ratios, a multi-stream hidden Markov model whose weights are optimized with this method fuses audio and visual speech effectively and reasonably, improving the recognition rate of the speech recognition system.
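The stream-weight idea can be sketched in a few lines (a simplified illustration, not the authors' implementation): per-class audio and video log-likelihoods are combined as w·L_audio + (1−w)·L_video, and w is chosen from a small candidate grid to maximize the margin of the correct class over its strongest rival, in the spirit of an N-best discriminative criterion.

```python
def fused_scores(audio_ll, video_ll, w):
    """Exponent-weighted multi-stream score per class:
    w * log P_audio + (1 - w) * log P_video."""
    return {c: w * audio_ll[c] + (1.0 - w) * video_ll[c] for c in audio_ll}

def best_stream_weight(audio_ll, video_ll, correct,
                       grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the weight maximizing the correct class's margin over its
    strongest competitor (an N-best-style discriminative criterion)."""
    def margin(w):
        s = fused_scores(audio_ll, video_ll, w)
        return s[correct] - max(v for c, v in s.items() if c != correct)
    return max(grid, key=margin)
```

In practice the weight would be tied to the acoustic conditions (e.g., lower audio weight at low SNR); the grid search above stands in for that estimation step.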

4.
Intelligent speech technology covers speech recognition, natural language processing, and speech synthesis, among which speech recognition is the key technology for human-computer interaction; a recognition system typically requires an acoustic model and a language model. The rise of neural networks has sharply increased the number of acoustic models, and combining neural-network-based acoustic models with traditional recognition models has greatly advanced speech recognition. As the front end of human-computer interaction, speech recognition spans many research directions. This paper surveys the state of acoustic-model research in three of them: text recognition, speaker recognition, and emotion recognition, tracing the evolution of speech recognition technology in as much detail as possible to provide a valuable reference for future work. It also summarizes and compares the current mainstream methods, introduces the advantages of end-to-end speech recognition models, analyzes development trends, and finally discusses the challenges facing current speech recognition tasks.

5.
周梁  高鹏  丁鹏  徐波 《中文信息学报》2006,20(3):101-106
Content-based retrieval over massive speech archives requires combining speech recognition with retrieval technology. By adjusting the language model, this paper studies how keyword retrieval differs over recognition transcripts of varying accuracy, and thereby examines the relationship between recognition performance and retrieval performance. Experiments on 114 hours of speech data show that recognition performance and retrieval performance are correlated, and that improved retrieval methods can compensate for part of the error introduced by speech recognition. The results provide a basis for targeted improvements to the recognition engine, the representation of recognizer output, and corresponding fast retrieval methods.

6.
Optimal representation of acoustic features is an ongoing challenge in automatic speech recognition research. As an initial step toward this goal, some approaches optimize the filterbank underlying the cepstral coefficients with evolutionary optimization methods. However, the large number of parameters in a filterbank makes it difficult to guarantee that a single optimized filterbank provides the best representation for phoneme classification. Moreover, the optimization often yields a number of potential solutions, each discriminating between specific groups of phonemes; in other words, each filterbank has its own particular advantage. Aggregating the discriminative information provided by these filterbanks is therefore a challenging task. In this study, a number of complementary filterbanks are optimized to provide different representations of the speech signal for phoneme classification using hidden Markov models (HMMs), and fuzzy information fusion is used to aggregate the decisions of the HMMs. Fuzzy theory can effectively handle the uncertainty of classifiers trained on different representations of the speech data. The outputs of the HMM classifiers are fused by a fuzzy decision fusion scheme that uses global and local confidence measurements to formulate the reliability of each classifier, based on both global and local context, when making overall decisions. Experiments were conducted on clean and noisy phonetic samples. The proposed method outperformed conventional Mel-frequency cepstral coefficients under both conditions in terms of overall phoneme classification accuracy, and the fuzzy fusion scheme was shown to be capable of aggregating the complementary information provided by each filterbank.

7.
Advances in computer processing power and emerging algorithms are allowing new ways of envisioning human-computer interaction. Although audio-visual fusion is expected to benefit affect recognition from both psychological and engineering perspectives, most existing approaches to automatic human affect analysis are unimodal: the information processed by the computer system is limited to either face images or speech signals. This paper focuses on the development of a computing algorithm that uses both audio and visual sensors to detect and track a user's affective state to aid computer decision making. Using our multistream fused hidden Markov model (MFHMM), we analyzed coupled audio and visual streams to detect four cognitive states (interest, boredom, frustration, and puzzlement) and seven prototypical emotions (neutral, happiness, sadness, anger, disgust, fear, and surprise). The MFHMM builds an optimal connection among multiple streams according to the maximum entropy principle and the maximum mutual information criterion. Person-independent experiments on 20 subjects and 660 sequences show that the MFHMM approach outperforms face-only HMM, pitch-only HMM, energy-only HMM, and independent HMM fusion under both clean and varying audio-channel noise conditions.

8.
Audio-visual speech recognition, which employs both acoustic and visual speech information, is a natural extension of acoustic speech recognition and significantly improves recognition accuracy in noisy environments. Although various audio-visual speech recognition systems have been developed, a rigorous and detailed comparison of the potential geometric visual features from speakers' faces is still needed. In this paper, therefore, the geometric visual features are compared and analyzed rigorously for their importance in audio-visual speech recognition. Experimental results show that among the geometric visual features analyzed, lip vertical aperture is the most relevant, and that the visual feature vector formed by the vertical and horizontal lip apertures and the first-order derivative of the lip corner angle leads to the best recognition results. Speech signals are modeled by hidden Markov models (HMMs), and using the optimized HMMs and geometric visual features, the accuracies of acoustic-only, visual-only, and audio-visual speech recognition are compared. The audio-visual scheme achieves a much improved recognition accuracy compared to acoustic-only and visual-only recognition, especially at high noise levels: a set of as few as three labial geometric features is sufficient to improve the recognition rate by as much as 20% (from 62% with acoustic-only information to 82% with audio-visual information at a signal-to-noise ratio of 0 dB).

9.
Audio-visual speech recognition (AVSR) has shown impressive improvements over audio-only speech recognition in the presence of acoustic noise. However, region-of-interest detection and feature extraction may limit recognition performance, because the visual speech information is typically obtained from planar video data. In this paper, we deviate from traditional visual speech information and propose an AVSR system integrating 3D lip information, adopting the Microsoft Kinect multi-sensory device for data collection. Different feature extraction and selection algorithms were applied to the planar images and the 3D lip information so as to fuse them into a joint visual-3D lip feature. For recognition, fusion methods were investigated and the audio-visual speech information was integrated into a state-synchronous two-stream hidden Markov model. The experimental results demonstrate that our AVSR system integrating 3D lip information improves on the recognition performance of traditional ASR and AVSR systems in acoustically noisy environments.

10.
Speech recognition is an important avenue for human-computer interaction and a foundational step of natural language processing. With the development of artificial intelligence, many application scenarios such as human-computer interaction demand streaming speech recognition, defined as producing output while speech is still being input, which greatly reduces recognition latency during interaction. End-to-end speech recognition has achieved rich results in academic research, but streaming recognition still faces challenges in both research and industrial application; consequently, end-to-end streaming speech recognition has become a research hotspot over the past two years. This paper comprehensively surveys recent work on end-to-end streaming recognition models and their performance optimization, covering: (1) a detailed analysis and categorization of end-to-end streaming methods and models, including CTC and RNN-T models that support streaming directly, as well as modified attention mechanisms, such as monotonic attention, that enable streaming; (2) techniques for improving accuracy and reducing latency, chiefly minimum-word-error-rate training and knowledge distillation for accuracy, and alignment and regularization for latency; (3) commonly used Chinese and English open-source datasets for streaming recognition, and performance evaluation criteria for streaming models; (4) a discussion of future directions for end-to-end streaming speech recognition.
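CTC's frame-synchronous output rule, which is what makes it naturally streamable, can be sketched in a few lines (the label indices and blank id below are illustrative): merge consecutive repeats of the same label, then drop blanks, processing each frame as it arrives.

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Frame-synchronous CTC collapse: merge runs of the same label,
    then remove blanks. Each incoming frame is handled immediately,
    which is why CTC models can emit output while audio streams in."""
    decoded, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded
```

RNN-T and monotonic attention achieve the same left-to-right, low-latency property by different means; the greedy collapse above is only the simplest member of the family.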

11.
Sudden vocal events (laughter, crying, sighs, etc., termed functional paralanguage) carry substantial emotional information, yet utterances containing such events suffer low overall emotion recognition rates because the bursts interfere with recognition. This paper proposes a speech emotion recognition method that fuses functional paralanguage. The method first automatically detects functional paralanguage in the utterance to be recognized, then separates it from the utterance according to the detection result, yielding two relatively clean signals: the functional-paralanguage signal and the conventional speech signal. The emotional information of the two signals is then fused with an adaptive weighting method, improving the emotion recognition rate for the utterance and the robustness of the system. Experiments on an emotion corpus containing six types of functional paralanguage and six typical emotions show a speaker-independent average recognition rate of 67.41%, which is 4.2%, 2.8%, and 2.4% higher than linear weighted fusion, Dempster-Shafer (DS) evidence theory, and Bayesian fusion, respectively, and 8.08% higher than before fusion. The method achieves good robustness and accuracy for speaker-independent speech emotion recognition.
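The adaptive-weight fusion step can be sketched as follows; the confidence measure used here (each stream's maximum posterior) is a hypothetical stand-in for whatever measure the paper estimates, and the class posteriors are illustrative. Posteriors from the speech stream and the paralanguage stream are combined with weights proportional to each stream's confidence.

```python
def adaptive_weight_fusion(p_speech, p_para):
    """Fuse two emotion posterior vectors with weights proportional to
    each stream's confidence. The max posterior is used here as an
    illustrative confidence measure; the fused vector is renormalized
    so it remains a probability distribution."""
    c_speech, c_para = max(p_speech), max(p_para)
    w = c_speech / (c_speech + c_para)
    fused = [w * a + (1.0 - w) * b for a, b in zip(p_speech, p_para)]
    total = sum(fused)
    return [x / total for x in fused]
```

A more confident stream thus pulls the fused decision toward its own ranking, unlike linear weighted fusion with fixed weights.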

12.
Deep speech signal and information processing: research progress and prospects
This paper first gives a brief introduction to deep learning and then reviews in detail its main research directions in speech signal and information processing: speech recognition, speech synthesis, and speech enhancement. For speech recognition, it covers deep-neural-network-based acoustic modeling, model training on big data, and speaker adaptation techniques; for speech synthesis, several synthesis methods based on deep learning models; for speech enhancement, several typical enhancement schemes based on deep neural networks. Finally, we look ahead to possible future research hotspots for deep learning in speech signal and information processing.

13.
李海峰  陈婧  马琳  薄洪健  徐聪  李洪伟 《软件学报》2020,31(8):2465-2491
Emotion recognition is an interdisciplinary research direction spanning cognitive science, psychology, signal processing, pattern recognition, and artificial intelligence, with the goal of enabling machines to understand human emotional states and thereby achieve natural human-computer interaction. This paper first reviews progress in speech emotion cognition from the perspectives of psychology and cognitive science, covering cognitive theories of emotion, dimensional theories, brain mechanisms, and computational models based on emotion theory, in order to provide a scientific theoretical foundation for speech emotion recognition. It then systematically summarizes, from an artificial intelligence perspective, the state of the art in dimensional emotion recognition, including dimensional speech emotion databases, feature extraction, and recognition algorithms. Finally, it analyzes the challenges currently facing dimensional emotion recognition and possible solutions, and looks ahead to future research directions.

14.
Exploiting the multimodal nature of human speech perception, this paper builds a continuous speech recognition system for noisy environments based on combined audio and visual features. For visual feature extraction, a method based on characteristic lip shapes is introduced. Recognition experiments show that this visual feature extraction method yields a higher recognition rate than traditional DCT and DWT methods, and that the audio-visual continuous speech recognition system based on characteristic lip shapes is highly robust to noise.

15.
A survey of noise-robust speech recognition
Addressing speech recognition in noisy environments, this paper reviews existing noise-robust speech recognition techniques and the main problems in this line of research. Following the structure of a speech recognition system, the techniques are categorized into signal-space, feature-space, and model-space approaches, and the characteristics, implementation, and application of each are analyzed. Directions for further research are outlined at the end.

16.
Unimodal analysis of the palmprint and the palm vein has been investigated for person recognition. One problem with unimodality is that a unimodal biometric is less accurate and more vulnerable to spoofing, since the data can be imitated or forged. In this paper, we present a multimodal personal identification system using palmprint and palm vein images, with fusion applied at the image level. The palmprint and palm vein images are fused by a new edge-preserving, contrast-enhancing wavelet fusion method in which the modified multiscale edges of the two images are combined, using a fusion rule we developed that enhances the discriminatory information in the images. A novel palm representation, called the "Laplacianpalm" feature, is then extracted from the fused images by locality preserving projections (LPP). Unlike the Eigenpalm approach, the "Laplacianpalm" finds an embedding that preserves local information and yields a palm space that best detects the essential manifold structure. We compare the proposed "Laplacianpalm" approach with the Fisherpalm and Eigenpalm methods on a large data set. Experimental results show that the proposed "Laplacianpalm" approach provides a better representation and achieves lower error rates in palm recognition. Furthermore, the proposed multimodal method outperforms either individual modality.

17.
A survey of research progress in speech emotion recognition
This paper summarizes the current state and progress of speech emotion recognition research and looks ahead to future trends. The survey proceeds from five angles: emotion description models, representative emotional speech corpora, speech emotion feature extraction, recognition algorithms, and applications of speech emotion recognition technology, aiming to give as comprehensive and detailed an account as possible and to provide a valuable academic reference for researchers. Finally, building on this analysis of the state of the art, it discusses the challenges and development trends facing the field, with emphasis on summarizing, comparing, and analyzing the mainstream methods and frontier advances of speech emotion recognition research.

18.
We propose a three-stage pixel-based visual front end for automatic speechreading (lipreading) that significantly improves recognition of spoken words and phonemes. The proposed algorithm is a cascade of three transforms applied to a three-dimensional video region of interest containing the speaker's mouth area. The first stage is a typical image-compression transform that achieves a high-energy, reduced-dimensionality representation of the video data. The second stage is a linear discriminant analysis-based projection, applied to a concatenation of a small number of consecutive transformed video frames. The third stage is a data rotation by means of a maximum likelihood linear transform that optimizes the likelihood of the observed data under the assumption of a class-conditional multivariate normal distribution with diagonal covariance. We applied the algorithm to visual-only 52-class phonetic and 27-class visemic classification on a 162-subject, 8-hour, large-vocabulary, continuous-speech audio-visual database, and demonstrated significant classification accuracy gains from each added stage, which combined can achieve up to a 27% improvement. Overall, we achieved 60% (49%) visual-only frame-level visemic classification accuracy with (without) use of test-set viseme boundaries. In addition, we report improved audio-visual phonetic classification over a single-stage image-transform visual front end. Finally, we discuss preliminary speech recognition results.
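The three-stage cascade can be sketched with placeholder matrices (the dimensions, frame counts, and random projections below are illustrative; the real LDA and MLLT matrices are estimated from labeled training data): an orthonormal DCT truncation, a projection of stacked consecutive frames, and a square feature-space rotation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis (stage 1: image-compression transform)."""
    k = np.arange(n)[:, None]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * np.arange(n)[None, :] + 1)
                                  * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 64))   # 100 frames of a 64-pixel mouth ROI

# Stage 1: keep the top 24 DCT coefficients of each frame.
stage1 = frames @ dct_matrix(64)[:24].T

# Stage 2: LDA-style projection of J = 3 stacked consecutive frames
# (a random matrix stands in for the trained LDA projection).
J = 3
stacked = np.hstack([stage1[j:len(stage1) - J + 1 + j] for j in range(J)])
lda = rng.normal(size=(24 * J, 16))
stage2 = stacked @ lda

# Stage 3: a square rotation standing in for the MLLT, which would be
# estimated under a diagonal-covariance Gaussian assumption.
mllt = np.linalg.qr(rng.normal(size=(16, 16)))[0]
features = stage2 @ mllt
```

Stacking J frames before the stage-2 projection is what lets the front end capture lip dynamics rather than single-frame appearance.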

19.
Robustness in speech recognition and the construction of a speech corpus
Chinese speech recognition has advanced greatly over the past decade or so; some systems have entered practical use and have been preliminarily commercialized. However, the robustness of some systems remains poor, and this will be a major task for future speech recognition research. To this end we built a Chinese speech corpus suited to robustness research, and we describe in detail its composition, characteristics, and experimental results.

20.
This paper presents a new data-coding technique and an associated set of homogeneous processing tools for the development of human-computer interactions (HCI). The proposed technique facilitates the fusion of different sensory modalities and simplifies implementation. The coding takes into account the spatio-temporal nature of the signals to be processed, in the framework of a sparse representation of the data. Neural networks adapted to such a representation are proposed to perform the recognition tasks, and their development is illustrated by two examples: on-line handwritten character recognition and visual speech recognition.
