Similar Documents (20 results)
1.
In this paper, we address the problem of reduced-complexity estimation of general large-scale hidden Markov models (HMMs) with underlying nearly completely decomposable discrete-time Markov chains and finite-state outputs. An algorithm is presented that computes O(ε) approximations (where ε is the related weak coupling parameter) to the aggregate and full-order filtered estimates with substantial computational savings. These savings are shown to be quite large when the chains have blocks with small individual dimensions. Some simulation studies are presented to demonstrate the performance of the algorithm.

2.
The expectation-maximization (EM) algorithm is popular in estimating the parameters of various statistical models. We consider applications of the EM algorithm to the maximum a posteriori (MAP) sequence decoding assuming that sources and channels are described by hidden Markov models (HMMs). The HMMs can accurately approximate a large variety of communication channels with memory and, in particular, wireless fading channels with noise. The direct maximization of the a posteriori probability (APP) is too complex. The EM algorithm allows us to obtain the MAP sequence estimation iteratively. Since each step of the EM algorithm increases the APP, the algorithm can improve the performance of any decoding procedure.

3.
Hidden Markov models (HMMs) have been used in the study of single-channel recordings of ion channel currents for restoration of idealized signals from noisy recordings and for estimation of kinetic parameters. A key to their effectiveness from a computational point of view is that the number of operations to evaluate the likelihood, posterior probabilities and the most likely state sequence is proportional to the product of the square of the dimension of the state space and the length of the series. However, when the state space is quite large, computations can become infeasible. This can happen when the record has been lowpass filtered and when the noise is colored. In this paper, we present an approximate method that can provide very substantial reductions in computational cost at the expense of only a very small error. We describe the method and illustrate through examples the gains that can be made in evaluating the likelihood.
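The cost the abstract cites (proportional to the square of the state-space dimension times the series length) is that of the standard forward recursion; a minimal sketch, with hypothetical toy parameters not taken from the paper, alongside a brute-force check over all state sequences:

```python
from itertools import product

def forward_likelihood(pi, A, B, obs):
    # Forward recursion: each time step costs O(N^2) for N states,
    # so a series of length T costs O(N^2 * T) overall.
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    return sum(alpha)

def brute_force_likelihood(pi, A, B, obs):
    # Enumerates all N^T state sequences; exponential cost, checking only.
    N, T = len(pi), len(obs)
    total = 0.0
    for seq in product(range(N), repeat=T):
        p = pi[seq[0]] * B[seq[0]][obs[0]]
        for t in range(1, T):
            p *= A[seq[t - 1]][seq[t]] * B[seq[t]][obs[t]]
        total += p
    return total
```

Both functions return the same likelihood; only the recursion scales to large state spaces, which is why approximations become attractive when the state space grows further.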

4.
5.
In this paper, we describe an automatic unsupervised texture segmentation scheme using hidden Markov models (HMMs). First, the feature map of the image is formed using Laws' micromasks and directional macromasks. Each pixel in the feature map is represented by a sequence of 4-D feature vectors. The feature sequences belonging to the same texture are modeled as an HMM. Thus, if there are M different textures present in an image, there are M distinct HMMs to be found and trained. Consequently, the unsupervised texture segmentation problem becomes an HMM-based problem, where the appropriate number of HMMs, the associated model parameters, and the discrimination among the HMMs become the foci of our scheme. A two-stage segmentation procedure is used. First, coarse segmentation is used to obtain the approximate number of HMMs and their associated model parameters. Then, fine segmentation is used to accurately estimate the number of HMMs and the model parameters. In these two stages, the critical task of merging the similar HMMs is accomplished by comparing the discrimination information (DI) between the two HMMs against a threshold computed from the distribution of all DIs. A postprocessing stage of multiscale majority filtering is used to further enhance the segmented result. The proposed scheme is highly suitable for pipeline/parallel implementation. Detailed experimental results are reported. These results indicate that the present scheme compares favorably with respect to other successful schemes reported in the literature.

6.
The authors demonstrate the effectiveness of phonemic hidden Markov models with Gaussian mixture output densities (mixture HMMs) for speaker-dependent large-vocabulary word recognition. Speech recognition experiments show that for almost any reasonable amount of training data, recognizers using mixture HMMs consistently outperform those employing unimodal Gaussian HMMs. With a sufficiently large training set (e.g. more than 2500 words), use of HMMs with 25-component mixture distributions typically reduces recognition errors by about 40%. It is also found that the mixture HMMs outperform a set of unimodal generalized triphone models having the same number of parameters. Previous attempts to employ mixture HMMs for speech recognition proved discouraging because of the high complexity and computational cost in implementing the Baum-Welch training algorithm. It is shown how mixture HMMs can be implemented very simply in unimodal transition-based frameworks by allowing multiple transitions from one state to another.
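The Gaussian mixture output densities discussed above are evaluated per state as a weighted sum of Gaussian components; a minimal sketch for the scalar case, computed stably via log-sum-exp (the parameters in the test are illustrative, not the paper's):

```python
import math

def log_mixture_density(x, weights, means, variances):
    # log sum_k w_k * N(x; mu_k, var_k), computed stably:
    # shift by the largest component log-density before exponentiating.
    logs = [math.log(w)
            - 0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
            for w, m, v in zip(weights, means, variances)]
    top = max(logs)
    return top + math.log(sum(math.exp(l - top) for l in logs))
```

A single component (weight 1) reduces to a plain Gaussian log-density, which is the unimodal baseline the abstract compares against.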

7.
Approximate maximum likelihood (ML) hidden Markov modeling using the most likely state sequence (MLSS) is examined and compared with the exact ML approach that considers all possible state sequences. It is shown that for any hidden Markov model (HMM), the difference between the approximate and the exact normalized likelihood functions cannot exceed the logarithm of the number of states divided by the dimension of the output vectors (frame length). Furthermore, for Gaussian HMMs and a given observation sequence, the MLSS is typically the sequence of nearest neighbor states in the Itakura-Saito sense, and the posterior probability of any state sequence which departs from the MLSS in a single time instant decays exponentially with the frame length. Hence, for a sufficiently large frame length the exact and approximate ML approaches provide similar model estimates and likelihood values.
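The two quantities compared above can be illustrated with a toy sketch: the exact likelihood sums over all state sequences (forward recursion), while the MLSS approximation keeps only the best sequence (Viterbi, i.e. the same recursion with max in place of sum). The crude bound asserted in the test, (log P_exact − log P_MLSS)/T ≤ log N, follows simply from counting the N^T sequences and is looser than the paper's result; the toy parameters are hypothetical.

```python
import math

def forward_prob(pi, A, B, obs):
    # Exact likelihood: sum over all state sequences.
    N = len(pi)
    a = [pi[i] * B[i][obs[0]] for i in range(N)]
    for o in obs[1:]:
        a = [sum(a[i] * A[i][j] for i in range(N)) * B[j][o]
             for j in range(N)]
    return sum(a)

def mlss_prob(pi, A, B, obs):
    # Probability of the most likely state sequence (Viterbi):
    # the forward recursion with max replacing the sum.
    N = len(pi)
    d = [pi[i] * B[i][obs[0]] for i in range(N)]
    for o in obs[1:]:
        d = [max(d[i] * A[i][j] for i in range(N)) * B[j][o]
             for j in range(N)]
    return max(d)
```

Since the exact likelihood is a sum of at most N^T terms each no larger than the MLSS probability, the normalized log-likelihood gap per time step is bounded, mirroring the flavor of the result in the abstract.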

8.
Classifier design often faces a lack of sufficient labeled data because the class labels are identified by experienced analysts, and collecting labeled data is therefore often costly. To mitigate this problem, several learning methods have been proposed to make effective use of unlabeled data, which can be collected inexpensively. These methods, however, consider only static data and cannot handle unlabeled time series data. Focusing on hidden Markov models (HMMs), in this paper we first present an extension of HMMs, named Extended Tied-Mixture HMMs (ETM-HMMs), in which labeled and unlabeled time series data can be utilized simultaneously. We also formally derive a learning algorithm for the ETM-HMMs based on the maximum likelihood framework. Experimental results using synthetic and real time series data show that adding unlabeled time series data to the labeled training data yields a clearly better classification accuracy than using labeled data alone.

9.
In this correspondence, we consider a probability distance problem for a class of hidden Markov models (HMMs). The notion of conditional relative entropy between conditional probability measures is introduced as an a posteriori probability distance which can be used to measure the discrepancy between hidden Markov models when a realized observation sequence is observed. Using a measure change technique, we derive a representation for conditional relative entropy in terms of the parameters of the HMMs and conditional expectations given measurements. With this representation, we show that this distance can be calculated using an information state approach.

10.
We present a discriminative training algorithm that uses support vector machines (SVMs) to improve the classification of discrete and continuous output probability hidden Markov models (HMMs). The algorithm uses a set of maximum-likelihood (ML) trained HMM models as a baseline system, and an SVM training scheme to rescore the results of the baseline HMMs. It turns out that the rescoring model can be represented as an unnormalized HMM. We describe two algorithms for training the unnormalized HMM models for both the discrete and continuous cases. One of the algorithms results in a single set of unnormalized HMMs that can be used in the standard recognition procedure (the Viterbi recognizer), as if they were plain HMMs. We use a toy problem and an isolated noisy digit recognition task to compare our new method to standard ML training. Our experiments show that SVM rescoring of hidden Markov models typically reduces the error rate significantly compared to standard ML training.

11.
An on-line state and parameter identification scheme for hidden Markov models (HMMs) with states in a finite-discrete set is developed using recursive prediction error (RPE) techniques. The parameters of interest are the transition probabilities and discrete state values of a Markov chain. The noise density associated with the observations can also be estimated. Implementation aspects of the proposed algorithms are discussed, and simulation studies are presented to show that the algorithms converge for a wide variety of initializations. In addition, an improved version of an earlier proposed scheme (the Recursive Kullback-Leibler (RKL) algorithm) is presented with a parameterization that ensures positivity of transition probability estimates.

12.
In applying hidden Markov modeling for recognition of speech signals, the matching of the energy contour of the signal to the energy contour of the model for that signal is normally achieved by appropriate normalization of each vector of the signal prior to both training and recognition. This approach, however, is not applicable when only noisy signals are available for recognition. A unified approach is developed for gain adaptation in recognition of clean and noisy signals. In this approach, hidden Markov models (HMMs) for gain-normalized clean signals are designed using maximum-likelihood (ML) estimates of the gain contours of the clean training sequences. The models are combined with ML estimates of the gain contours of the clean test signals, obtained from the given clean or noisy signals, in performing recognition using the maximum a posteriori decision rule. The gain-adapted training and recognition algorithms are developed for HMMs with Gaussian subsources using the expectation-maximization (EM) approach.

13.
Hidden Markov models (HMMs) are successfully applied in various fields of time series analysis. Colored noise, e.g., due to filtering, violates basic assumptions of the model. Although it is well known how to consider autoregressive (AR) filtering, there is no algorithm to take into account moving-average (MA) filtering in parameter estimation exactly. We present an approximate likelihood estimator for MA-filtered HMMs that is generalized to deal with an autoregressive moving-average (ARMA) filtered HMM. The approximation order of the likelihood calculation can be chosen. Therefore, we obtain a sequence of estimators for the HMM parameters as well as for the filter coefficients. The recursion equations for an efficient algorithm are derived from exact expressions for the forward iterations. By simulations, we show that the derived estimators are unbiased in filter situations where standard HMMs are not able to recover the true dynamics. Special implementation strategies together with small approximations yield further acceleration of the algorithm.

14.
Image segmentation using hidden Markov Gauss mixture models.
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

15.
Statistical modeling methods are becoming indispensable in today's large-scale image analysis. In this paper, we explore a computationally efficient parameter estimation algorithm for two-dimensional (2-D) and three-dimensional (3-D) hidden Markov models (HMMs) and show applications to satellite image segmentation. The proposed parameter estimation algorithm is compared with the first algorithm proposed for 2-D HMMs, which is based on the variable-state Viterbi algorithm. We also propose a 3-D HMM for volume image modeling and apply it to volume image segmentation using a large number of synthetic images with ground truth. Experiments have demonstrated the computational efficiency of the proposed parameter estimation technique for 2-D HMMs and the potential of the 3-D HMM as a stochastic modeling tool for volume images.

16.
17.
Vein recognition based on fusing multiple HMMs with Contourlet subband energy features
To identify individuals accurately, this paper proposes a dorsal hand vein recognition algorithm that builds and fuses multiple hidden Markov models (HMMs), using the subband energies at different scales of the Contourlet transform as features. The algorithm first uses an intensity-adjustable near-infrared array light source, gradually increasing the intensity to acquire a sequence of dorsal hand vein images. Each vein image is then Contourlet-transformed, the energy of every subband at each scale is computed, and the subband energies at three scales are used as observation features to build three HMMs. Finally, the observation probabilities computed by the three HMMs are fused, and the fused result is compared against a threshold to complete the recognition process. Experimental results show that the proposed algorithm maximizes the separation between genuine and impostor matches, and that the correct recognition rate is improved compared with recognition algorithms based on feature points or on fusion of vein information.

18.
This paper reports an upper bound for the Kullback–Leibler divergence (KLD) for a general family of transient hidden Markov models (HMMs). An upper bound KLD (UBKLD) expression for Gaussian mixtures models (GMMs) is presented which is generalized for the case of HMMs. Moreover, this formulation is extended to the case of HMMs with nonemitting states, where under some general assumptions, the UBKLD is proved to be well defined for a general family of transient models. In particular, the UBKLD has a computationally efficient closed-form for HMMs with left-to-right topology and a final nonemitting state, that we refer to as left-to-right transient HMMs. Finally, the usefulness of the closed-form expression is experimentally evaluated for automatic speech recognition (ASR) applications, where left-to-right transient HMMs are used to model basic acoustic-phonetic units. Results show that the UBKLD is an accurate discrimination indicator for comparing acoustic HMMs used for ASR.

19.
For purposes of simulating contemporary communication systems, it is, in many cases, useful to apply error models for specific levels of abstraction. Such models should approximate the packet error behavior of a given system at a specific protocol layer, thus incorporating the possible detrimental effects of lower protocol layers. Packet error models can efficiently be realized using finite-state models; for example, there exists a wide range of studies on using Markov models to simulate communication channels. In this paper, we consider aggregated Markov processes, which are a subclass of hidden Markov models (HMMs). Artificial limitations are set on the state transition probabilities of the models to find efficient methods of parameter estimation. We apply these models to the simulation of the performance of digital video broadcasting-handheld (DVB-H). The parameters of the packet error models are approximated as functions of the time-variant received signal strength and speed of a mobile vehicular DVB-H receiver, and it is shown that useful results may be achieved with the described packet error models, particularly when simulating mobile reception in field conditions.
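A finite-state packet error model of the kind described above can be sketched with a minimal two-state hidden good/bad channel (a Gilbert-style model, which is a simple aggregated Markov process); the transition and loss probabilities below are hypothetical illustrations, not DVB-H measurements:

```python
import random

def simulate_packet_errors(n, p_gb, p_bg, h_bad, seed=0):
    # Hidden two-state channel: from "good" we enter "bad" with
    # probability p_gb; from "bad" we return to "good" with p_bg.
    # A packet is lost with probability 0 in the good state and
    # h_bad in the bad state, producing bursty packet errors.
    rng = random.Random(seed)
    bad = False
    errors = 0
    for _ in range(n):
        bad = rng.random() < ((1 - p_bg) if bad else p_gb)
        if bad and rng.random() < h_bad:
            errors += 1
    return errors / n
```

Over a long run the observed error rate approaches the stationary probability of the bad state, p_gb / (p_gb + p_bg), multiplied by h_bad, while individual losses cluster in bursts, which is the behavior a memoryless error model cannot reproduce.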

20.
This paper provides a systematic method of obtaining reduced-complexity approximations to aggregate filters for a class of partially observed nearly completely decomposable Markov chains. It is also shown why an aggregate filter adapted from Courtois' (1977) aggregation scheme has the same order of approximation as achieved by the algorithm proposed in this paper. This algorithm can also be used systematically to obtain reduced-complexity approximations to the full-order filter, as opposed to algorithms adapted from other aggregation schemes. However, the computational savings in computing the full-order filters are substantial only when the large-scale Markov chain has a large number of weakly interacting blocks or "superstates" with small individual dimensions. Some simulations are carried out to compare the performance of our algorithm with algorithms adapted from various other aggregation schemes on the basis of an average approximation error criterion in aggregate (slow) filtering. These studies indicate that the algorithms adapted from other aggregation schemes may become ad hoc under certain circumstances. The algorithm proposed in this paper, however, always yields reduced-complexity filters with a guaranteed order of approximation by appropriately exploiting the special structures of the system matrices.
