Similar Documents
20 similar documents found (search time: 15 ms).
1.
A method of integrating Gibbs distributions (GDs) into hidden Markov models (HMMs) is presented. The probabilities of the hidden state sequences of HMMs are modeled by GDs in place of the transition probabilities. The GDs offer a general way of modeling the neighbor interactions of Markov random fields, of which the Markov chains in HMMs are a special case. An algorithm for estimating the model parameters is developed based on Baum reestimation, and an algorithm for computing the probability terms is developed using a lattice structure. The GD models were used for speech recognition experiments on the TI speaker-independent, isolated-digit database. The observation sequences of the speech signals were modeled by mixture Gaussian autoregressive densities. The energy functions of the GDs were developed using very few parameters and proved adequate for hidden-layer modeling. The results of the experiments showed that the GD models performed at least as well as the HMM models.
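For a standard HMM, the lattice computation referred to above is the forward algorithm. As a point of reference, here is a minimal pure-Python sketch; the two-state parameters are toy values for illustration, not taken from the paper:

```python
# Minimal forward-lattice evaluation of P(observations) for a discrete HMM.
# All parameters below are toy values, not from the paper.

def forward(obs, pi, A, B):
    """Sum over all hidden-state paths on the trellis (forward algorithm)."""
    n = len(pi)
    # Initialization: alpha_1(s) = pi(s) * B(s, o_1)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # Induction: alpha_t(s) = (sum over s' of alpha_{t-1}(s') A(s', s)) * B(s, o_t)
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)  # P(obs | model)

pi = [0.6, 0.4]                # initial state probabilities
A = [[0.7, 0.3], [0.4, 0.6]]   # transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]   # emission matrix over two symbols
```

The modification the abstract describes replaces the product of transition probabilities along a path with a Gibbs energy over state neighborhoods; the lattice recursion itself plays the same role.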

2.
李楠  姬光荣 《现代电子技术》2012,35(8):54-56,60
To examine the application of hidden Markov models to image recognition in more detail, this paper takes fingerprint recognition as an example and surveys several fingerprint image recognition algorithms based on hidden Markov models: the one-dimensional HMM, the pseudo two-dimensional HMM, the two-dimensional HMM, and groups of one-dimensional HMMs. The strengths and weaknesses of these four HMM variants for image recognition are summarized in terms of time complexity and recognition accuracy, leading to conclusions about which recognition model best suits a given type of image.

3.
The authors consider the application of hidden Markov models (HMMs) to the problem of multitarget tracking, specifically, to the problem of tracking multiple frequency lines. The idea of a mixed track is introduced, a multitrack Viterbi algorithm is described, and a detailed analysis of the underlying Markov model is presented. Simulations show that in some cases it is possible to avoid data association and directly compute the maximum a posteriori mixed track. Some practical aspects of the algorithm are discussed, and simulation results are presented.
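The single-track Viterbi recursion underlying the multitrack algorithm can be sketched as follows; the mixed-track bookkeeping of the paper is omitted, and the parameters are toy values, not from the paper:

```python
# Single-track Viterbi decoding for a discrete HMM (toy parameters).

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete HMM."""
    n = len(pi)
    delta = [pi[s] * B[s][obs[0]] for s in range(n)]
    back = []
    for o in obs[1:]:
        prev, delta, ptr = delta, [], []
        for s in range(n):
            # Best predecessor state for landing in state s.
            best = max(range(n), key=lambda sp: prev[sp] * A[sp][s])
            ptr.append(best)
            delta.append(prev[best] * A[best][s] * B[s][o])
        back.append(ptr)
    # Backtrack from the most probable final state.
    path = [max(range(n), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
```

The multitrack variant runs the same dynamic program over a product state space so that several frequency lines are decoded jointly.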

4.
李玉鑑 《电子学报》2004,32(11):1833-1838
The three basic problems of the two-dimensional hidden Markov model are studied: probability evaluation, optimal state decoding, and parameter estimation. By treating the state sequence along a row or column of the 2-D model as a Markov model, new algorithms for each of the three basic problems are derived theoretically; computer simulations further illustrate the implementation and behavior of the new algorithms.

5.
We present a discriminative training algorithm that uses support vector machines (SVMs) to improve the classification of discrete and continuous output probability hidden Markov models (HMMs). The algorithm uses a set of maximum-likelihood (ML) trained HMMs as a baseline system and an SVM training scheme to rescore the results of the baseline HMMs. It turns out that the rescoring model can be represented as an unnormalized HMM. We describe two algorithms for training the unnormalized HMM models, for both the discrete and continuous cases. One of the algorithms results in a single set of unnormalized HMMs that can be used in the standard recognition procedure (the Viterbi recognizer) as if they were plain HMMs. We use a toy problem and an isolated noisy-digit recognition task to compare our new method to standard ML training. Our experiments show that SVM rescoring of hidden Markov models typically reduces the error rate significantly compared to standard ML training.

6.
Image segmentation using hidden Markov Gauss mixture models.
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms, thereby simplifying subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and the hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMMs provide better classification, in terms of Bayes risk and spatial homogeneity of the classified objects, than several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of the HMGMM is similar to that of the causal HMM.
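The MAP rule at the heart of such segmentation, picking the class that maximizes prior times class-conditional density, reduces for a single Gaussian per class and a scalar feature to the sketch below; the class names and parameters are hypothetical, and the paper's Gauss mixtures and spatial (hidden Markov) coupling are omitted:

```python
import math

def gauss_logpdf(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def map_classify(x, classes):
    """classes maps label -> (prior, mean, var); return the MAP label for x."""
    return max(classes,
               key=lambda c: math.log(classes[c][0])
                             + gauss_logpdf(x, *classes[c][1:]))

# Hypothetical two-class model of a scalar pixel feature.
classes = {"water": (0.5, 0.0, 1.0), "land": (0.5, 5.0, 1.0)}
```

Replacing the single Gaussian with a mixture per class, and the independent per-pixel decision with a hidden Markov prior over neighboring labels, gives the structure of the HMGMM method.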

7.
In this correspondence, we consider a probability distance problem for a class of hidden Markov models (HMMs). The notion of conditional relative entropy between conditional probability measures is introduced as an a posteriori probability distance that can be used to measure the discrepancy between hidden Markov models when a realized observation sequence is observed. Using a measure-change technique, we derive a representation for conditional relative entropy in terms of the parameters of the HMMs and conditional expectations given the measurements. With this representation, we show that the distance can be calculated using an information-state approach.

8.
We investigate approximate smoothing schemes for a class of hidden Markov models (HMMs), namely, HMMs whose underlying Markov chains are nearly completely decomposable. The objective is to obtain substantial computational savings. Our algorithm can be used not only to obtain aggregate smoothed estimates but also to obtain, systematically, approximate full-order smoothed estimates with computational savings and rigorous performance guarantees, unlike many of the aggregation methods proposed earlier.

9.
This paper reports an upper bound for the Kullback–Leibler divergence (KLD) for a general family of transient hidden Markov models (HMMs). An upper-bound KLD (UBKLD) expression for Gaussian mixture models (GMMs) is presented and then generalized to the case of HMMs. Moreover, this formulation is extended to HMMs with nonemitting states, where, under some general assumptions, the UBKLD is proved to be well defined for a general family of transient models. In particular, the UBKLD has a computationally efficient closed form for HMMs with left-to-right topology and a final nonemitting state, which we refer to as left-to-right transient HMMs. Finally, the usefulness of the closed-form expression is experimentally evaluated for automatic speech recognition (ASR) applications, where left-to-right transient HMMs are used to model basic acoustic-phonetic units. Results show that the UBKLD is an accurate discrimination indicator for comparing acoustic HMMs used for ASR.
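For intuition, the KLD between two univariate Gaussians has an exact closed form, and a commonly used matched-component bound gives an upper bound on the KLD between two mixtures. This sketch shows that standard bound; it is assumed to be of the same flavor as the paper's UBKLD, not the paper's exact formulation:

```python
import math

def kl_gauss(m1, v1, m2, v2):
    """Exact KLD (in nats) between univariate Gaussians N(m1,v1) || N(m2,v2)."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def ub_kl_mixture(w1, g1, w2, g2):
    """Matched-component upper bound on KLD between two Gaussian mixtures.

    w1, w2: component weights; g1, g2: lists of (mean, var) pairs,
    paired component-by-component.
    """
    return sum(a * (math.log(a / b) + kl_gauss(*p, *q))
               for a, b, p, q in zip(w1, w2, g1, g2))
```

By convexity of relative entropy, the matched-pair sum can only overestimate the true mixture KLD, which is what makes it usable as a discrimination indicator.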

10.
Based on global optimization, a new genetic algorithm for training hidden Markov models (HMMs) is proposed. Speech recognition results are presented, and a comparison is made with the classic HMM training algorithm.
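A minimal genetic-algorithm loop of the kind used for such training can be sketched as follows. Everything here is illustrative: a toy one-dimensional fitness stands in for the HMM data log-likelihood the paper would optimize, and the selection/mutation scheme is one simple choice among many:

```python
import random

def genetic_maximize(fitness, pop_size=30, generations=60, sigma=0.05, seed=0):
    """Toy GA over [0, 1]: truncation selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 3]  # keep the fittest third (elitism)
        children = [min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0, sigma)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness; an HMM trainer would plug in the data log-likelihood
# of the candidate parameter vector instead.
best = genetic_maximize(lambda x: -(x - 0.7) ** 2)
```

Unlike Baum-Welch, which climbs to a local optimum, a population-based search of this kind is what gives the method its claim to global optimization.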

11.
Partially hidden Markov models (PHMMs) are introduced. They differ from ordinary HMMs in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration, they are applied to black-and-white image compression, where the hidden variables may be interpreted as representing noncausal pixels.

12.
Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are designed with the intrinsic properties of the wavelet transform in mind and provide powerful, yet tractable, probabilistic signal models. Efficient expectation-maximization algorithms are developed for fitting the HMMs to observational signal data. The new framework is suitable for a wide range of applications, including signal estimation, detection, classification, prediction, and even synthesis. To demonstrate the utility of wavelet-domain HMMs, we develop novel algorithms for signal denoising, classification, and detection.
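The wavelet-domain setting can be illustrated with a one-level Haar transform plus soft thresholding of the detail coefficients. This is a crude stand-in for the HMM-based shrinkage the paper develops, shown only to make the "transform, shrink, invert" pipeline concrete:

```python
import math

S = math.sqrt(2.0)

def haar(x):
    """One-level orthonormal Haar transform (len(x) must be even)."""
    approx = [(a + b) / S for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / S for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def ihaar(approx, detail):
    """Inverse of the one-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / S, (a - d) / S]
    return out

def soft(coeffs, t):
    """Soft-threshold shrinkage: shrink each coefficient toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```

A wavelet-domain HMM replaces the fixed threshold with a coefficient-wise shrinkage derived from the posterior over hidden "large/small" states, capturing the dependencies that independent thresholding ignores.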

13.
This paper is concerned with recursive algorithms for the estimation of hidden Markov models (HMMs) and autoregressive (AR) models under the Markov regime. Convergence and rate-of-convergence results are derived. Acceleration of convergence by averaging the iterates and the observations is treated. Finally, constant step-size tracking algorithms are presented and examined.

14.
In applying hidden Markov modeling to the recognition of speech signals, the matching of the energy contour of the signal to the energy contour of the model for that signal is normally achieved by appropriate normalization of each vector of the signal prior to both training and recognition. This approach, however, is not applicable when only noisy signals are available for recognition. A unified approach is developed for gain adaptation in the recognition of clean and noisy signals. In this approach, hidden Markov models (HMMs) for gain-normalized clean signals are designed using maximum-likelihood (ML) estimates of the gain contours of the clean training sequences. The models are combined with ML estimates of the gain contours of the clean test signals, obtained from the given clean or noisy signals, in performing recognition using the maximum a posteriori decision rule. The gain-adapted training and recognition algorithms are developed for HMMs with Gaussian subsources using the expectation-maximization (EM) approach.

15.
For the acoustic models of embedded speech recognition systems, hidden Markov models (HMMs) are usually quantized and the original full space distributions are represented by combinations of a few quantized distribution prototypes. We propose a maximum likelihood objective function to train the quantized distribution prototypes. The experimental results show that the new training algorithm and the link structure adaptation scheme for the quantized HMMs reduce the word recognition error rate by 20.0%.

16.
Discrete hidden Markov models (HMMs) were applied to classify pregnancy disorders. The observation sequence was generated by transforming RR and systolic blood pressure time series using symbolic dynamics. Time series were recorded from 15 women with pregnancy-induced hypertension, 34 with preeclampsia, and 41 controls, all beyond the 30th gestational week. HMMs with five to ten hidden states were found sufficient to characterize the different blood pressure variability, whereas significant classification with RR-based HMMs required fifteen hidden states. The pregnancy disorders preeclampsia and pregnancy-induced hypertension revealed different pathophysiological autonomic regulation, suggesting a different etiology for the two disorders.

17.
We consider quantization from the perspective of minimizing filtering error when quantized, instead of continuous, measurements are used as inputs to a nonlinear filter, specializing to discrete-time two-state hidden Markov models (HMMs) with continuous-range output. An explicit expression for the filtering error when continuous measurements are used is presented. We also propose a quantization scheme based on maximizing the mutual information between the quantized observations and the hidden states of the HMM.
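The mutual-information criterion can be evaluated directly from a joint probability table of (hidden state, quantized observation); the search over quantizer thresholds that the paper performs is omitted in this minimal sketch:

```python
import math

def mutual_information(joint):
    """I(X; Y) in nats, given a joint probability table joint[i][j] = P(X=i, Y=j)."""
    px = [sum(row) for row in joint]          # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]    # marginal of Y (columns)
    return sum(p * math.log(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)
```

A quantizer design loop would sweep candidate thresholds, form the induced joint table of state versus quantized output, and keep the thresholds that maximize this quantity.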

18.
Statistical modeling methods are becoming indispensable in today's large-scale image analysis. In this paper, we explore a computationally efficient parameter estimation algorithm for two-dimensional (2-D) and three-dimensional (3-D) hidden Markov models (HMMs) and show applications to satellite image segmentation. The proposed parameter estimation algorithm is compared with the first algorithm proposed for 2-D HMMs, which is based on the variable-state Viterbi method. We also propose a 3-D HMM for volume image modeling and apply it to volume image segmentation using a large number of synthetic images with ground truth. Experiments have demonstrated the computational efficiency of the proposed parameter estimation technique for 2-D HMMs and the potential of the 3-D HMM as a stochastic modeling tool for volume images.

19.
The expectation-maximization (EM) algorithm is popular for estimating the parameters of various statistical models. We consider applications of the EM algorithm to maximum a posteriori (MAP) sequence decoding, assuming that sources and channels are described by hidden Markov models (HMMs). HMMs can accurately approximate a large variety of communication channels with memory, in particular, wireless fading channels with noise. The direct maximization of the a posteriori probability (APP) is too complex. The EM algorithm allows us to obtain the MAP sequence estimate iteratively. Since each step of the EM algorithm increases the APP, the algorithm can improve the performance of any decoding procedure.

20.
The authors demonstrate the effectiveness of phonemic hidden Markov models with Gaussian mixture output densities (mixture HMMs) for speaker-dependent large-vocabulary word recognition. Speech recognition experiments show that, for almost any reasonable amount of training data, recognizers using mixture HMMs consistently outperform those employing unimodal Gaussian HMMs. With a sufficiently large training set (e.g., more than 2500 words), the use of HMMs with 25-component mixture distributions typically reduces recognition errors by about 40%. It is also found that the mixture HMMs outperform a set of unimodal generalized triphone models having the same number of parameters. Previous attempts to employ mixture HMMs for speech recognition proved discouraging because of the high complexity and computational cost of implementing the Baum-Welch training algorithm. It is shown how mixture HMMs can be implemented very simply in unimodal transition-based frameworks by allowing multiple transitions from one state to another.
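The mixture output density in question is simply a weighted sum of Gaussian densities per state. A one-dimensional sketch (the actual models use multivariate densities over acoustic feature vectors):

```python
import math

def gmm_pdf(x, weights, means, variances):
    """Density at x of a univariate Gaussian mixture (weights sum to 1)."""
    return sum(w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
               for w, m, v in zip(weights, means, variances))
```

The implementation trick the abstract mentions amounts to attaching one unimodal density to each of several parallel transitions between the same pair of states, so that their sum realizes the mixture without changing the training machinery.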


Copyright©北京勤云科技发展有限公司  京ICP备09084417号