Similar Documents
 20 similar documents retrieved (search time: 225 ms)
1.
Classifier design often faces a lack of sufficient labeled data, because class labels must be identified by experienced analysts and collecting labeled data is therefore costly. To mitigate this problem, several learning methods have been proposed that make effective use of unlabeled data, which can be collected inexpensively. These methods, however, consider only static data; they cannot handle unlabeled time series data. Focusing on Hidden Markov Models (HMMs), this paper first presents an extension of HMMs, named Extended Tied-Mixture HMMs (ETM-HMMs), in which labeled and unlabeled time series data can be utilized simultaneously. We also formally derive a learning algorithm for the ETM-HMMs based on the maximum likelihood framework. Experimental results on synthetic and real time series data show that classification accuracy clearly improves when unlabeled time series data are added to the labeled training data, compared with using labeled data alone.

2.
Hidden Markov models (HMMs) with bounded state durations (HMM/BSD) are proposed to explicitly model the state durations of HMMs and more accurately capture the temporal structures in speech signals in a simple, direct, but effective way. A series of experiments was conducted for speaker-dependent applications using 408 highly confusing first-tone Mandarin syllables as the example vocabulary. In the discrete case, the recognition rate of HMM/BSD (78.5%) is 9.0%, 6.3%, and 1.9% higher than those of conventional HMMs and of HMMs with Poisson and gamma distributed state durations, respectively. In the continuous case (partitioned Gaussian mixture modeling), the recognition rates of HMM/BSD (88.3% with 1 mixture, 88.8% with 3 mixtures, and 89.4% with 5 mixtures) are 6.3%, 5.0%, and 5.5% higher than those of conventional HMMs; they are also 5.9% (1 mixture) and 3.9% (3 mixtures) higher than HMMs with Poisson-distributed state durations, and 3.1% (1 mixture) and 1.8% (3 mixtures) higher than HMMs with gamma-distributed state durations.
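The contrast between the implicit geometric state duration of a standard HMM and a bounded-duration model can be sketched as follows (a minimal illustration, not the paper's exact HMM/BSD formulation; the truncation range and self-transition probability are hypothetical):

```python
import numpy as np

def implicit_duration_pmf(a_ii, d_max):
    """Implicit state-duration PMF of a standard HMM: with self-transition
    probability a_ii, the duration is geometric, P(d) = a_ii**(d-1) * (1 - a_ii)."""
    d = np.arange(1, d_max + 1)
    return a_ii ** (d - 1) * (1.0 - a_ii)

def bounded_duration_pmf(pmf, d_min, d_max):
    """Truncate and renormalize a duration PMF to [d_min, d_max] -- the basic
    idea behind bounding state durations (simplified sketch)."""
    out = np.zeros_like(pmf)
    out[d_min - 1:d_max] = pmf[d_min - 1:d_max]
    return out / out.sum()

geo = implicit_duration_pmf(0.9, 200)   # monotonically decreasing, mode at d = 1
bsd = bounded_duration_pmf(geo, 5, 50)  # durations outside [5, 50] forbidden
```

The geometric PMF always peaks at d = 1, which is a poor fit for phone durations; the bounded version rules out implausibly short or long stays.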

3.
Discrete hidden Markov models (HMMs) were applied to classify pregnancy disorders. The observation sequences were generated by transforming RR-interval and systolic blood pressure time series using symbolic dynamics. Time series were recorded from 15 women with pregnancy-induced hypertension, 34 with preeclampsia, and 41 controls, all beyond the 30th gestational week. HMMs with five to ten hidden states were found sufficient to characterize differences in blood pressure variability, whereas significant classification with RR-based HMMs required fifteen hidden states. The pregnancy disorders preeclampsia and pregnancy-induced hypertension exhibited different pathophysiological autonomic regulation, suggesting different etiologies of the two disorders.

4.
Hidden Markov models (HMMs) represent a very important tool for analysis of signals and systems. In the past two decades, HMMs have attracted the attention of various research communities, including the ones in statistics, engineering, and mathematics. Their extensive use in signal processing and, in particular, speech processing is well documented. A major weakness of conventional HMMs is their inflexibility in modeling state durations. This weakness can be avoided by adopting a more complicated class of HMMs known as nonstationary HMMs. We analyze nonstationary HMMs whose state transition probabilities are functions of time that indirectly model state durations by a given probability mass function and whose observation spaces are discrete. The objective of our work is to estimate all the unknowns of a nonstationary HMM, which include its parameters and the state sequence. To that end, we construct a Markov chain Monte Carlo (MCMC) sampling scheme, where sampling from all the posterior probability distributions is very easy. The proposed MCMC sampling scheme has been tested in extensive computer simulations on finite discrete-valued observed data, and some of the simulation results are presented.
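The evaluation side of such a model can be sketched as a forward recursion whose transition matrix depends on the time index (a minimal illustration; the transition schedule and all parameter values below are hypothetical, and the paper's MCMC estimation itself is not shown):

```python
import numpy as np

def forward_nonstationary(pi, A_t, B, obs):
    """Likelihood of a discrete-output HMM whose transition probabilities
    change over time: A_t[t] is the transition matrix applied between
    steps t-1 and t (A_t[0] is unused).
    pi: (N,) initial state probs, B: (N, M) emission probs, obs: symbol indices."""
    alpha = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha = (alpha @ A_t[t]) * B[:, obs[t]]
    return alpha.sum()

pi = np.array([0.5, 0.5])
# Hypothetical schedule: self-transitions weaken at the last step.
A_t = np.array([[[0.9, 0.1], [0.1, 0.9]],
                [[0.9, 0.1], [0.1, 0.9]],
                [[0.6, 0.4], [0.4, 0.6]]])
B = np.array([[0.7, 0.3], [0.2, 0.8]])
lik = forward_nonstationary(pi, A_t, B, [0, 1, 1])
```

With time-invariant A_t the recursion reduces to the ordinary forward algorithm.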

5.
In this paper, we describe an automatic unsupervised texture segmentation scheme using hidden Markov models (HMMs). First, the feature map of the image is formed using Laws' micromasks and directional macromasks. Each pixel in the feature map is represented by a sequence of 4-D feature vectors. The feature sequences belonging to the same texture are modeled as an HMM. Thus, if there are M different textures present in an image, there are M distinct HMMs to be found and trained. Consequently, the unsupervised texture segmentation problem becomes an HMM-based problem, where the appropriate number of HMMs, the associated model parameters, and the discrimination among the HMMs become the foci of our scheme. A two-stage segmentation procedure is used. First, coarse segmentation is used to obtain the approximate number of HMMs and their associated model parameters. Then, fine segmentation is used to accurately estimate the number of HMMs and the model parameters. In these two stages, the critical task of merging the similar HMMs is accomplished by comparing the discrimination information (DI) between the two HMMs against a threshold computed from the distribution of all DIs. A postprocessing stage of multiscale majority filtering is used to further enhance the segmented result. The proposed scheme is highly suitable for pipeline/parallel implementation. Detailed experimental results are reported. These results indicate that the present scheme compares favorably with respect to other successful schemes reported in the literature.

6.
For the acoustic models of embedded speech recognition systems, hidden Markov models (HMMs) are usually quantized and the original full space distributions are represented by combinations of a few quantized distribution prototypes. We propose a maximum likelihood objective function to train the quantized distribution prototypes. The experimental results show that the new training algorithm and the link structure adaptation scheme for the quantized HMMs reduce the word recognition error rate by 20.0%.

7.
Linear regression for Hidden Markov Model (HMM) parameters is widely used for the adaptive training of time series pattern analysis especially for speech processing. The regression parameters are usually shared among sets of Gaussians in HMMs where the Gaussian clusters are represented by a tree. This paper realizes a fully Bayesian treatment of linear regression for HMMs considering this regression tree structure by using variational techniques. This paper analytically derives the variational lower bound of the marginalized log-likelihood of the linear regression. By using the variational lower bound as an objective function, we can algorithmically optimize the tree structure and hyper-parameters of the linear regression rather than heuristically tweaking them as tuning parameters. Experiments on large vocabulary continuous speech recognition confirm the generalizability of the proposed approach, especially when the amount of adaptation data is limited.

8.
A combined RBF-Gamma-HMM model for continuous speech recognition
This paper presents a distinctive and easily extensible continuous speech recognition model that combines multi-module RBF-Gamma neural networks with HMMs: the RBF network represents the phonetic-unit space, the Gamma layer integrates temporal context information, and the HMM performs temporal integration and extension of phonetic units, so that the components complement one another. Based on this model, the improved classification learning algorithms proposed in this paper were applied to speaker-dependent continuous digit speech recognition, achieving a character accuracy of 98.9% and a string accuracy of 94.8%.

9.
This paper reports an upper bound for the Kullback–Leibler divergence (KLD) for a general family of transient hidden Markov models (HMMs). An upper bound KLD (UBKLD) expression for Gaussian mixtures models (GMMs) is presented which is generalized for the case of HMMs. Moreover, this formulation is extended to the case of HMMs with nonemitting states, where under some general assumptions, the UBKLD is proved to be well defined for a general family of transient models. In particular, the UBKLD has a computationally efficient closed-form for HMMs with left-to-right topology and a final nonemitting state, that we refer to as left-to-right transient HMMs. Finally, the usefulness of the closed-form expression is experimentally evaluated for automatic speech recognition (ASR) applications, where left-to-right transient HMMs are used to model basic acoustic-phonetic units. Results show that the UBKLD is an accurate discrimination indicator for comparing acoustic HMMs used for ASR.
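For the GMM case, a matched-component upper bound of this flavor follows from the convexity of the KL divergence; below is a sketch for 1-D Gaussian mixtures (the paper's exact UBKLD formulation may differ, and the component parameters in the usage example are hypothetical):

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL(N(m1, s1^2) || N(m2, s2^2))."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2.0 * s2**2) - 0.5

def kl_gmm_upper(w, comps_f, v, comps_g):
    """Matched-component upper bound on the KL divergence between two
    Gaussian mixtures, valid by convexity of the KL divergence:
        KL(f || g) <= KL(w || v) + sum_i w_i * KL(f_i || g_i).
    comps_f, comps_g: lists of (mean, std) pairs matched index-wise."""
    w, v = np.asarray(w), np.asarray(v)
    bound = np.sum(w * np.log(w / v))  # KL between the weight vectors
    bound += sum(wi * kl_gauss(*fi, *gi)
                 for wi, fi, gi in zip(w, comps_f, comps_g))
    return bound

ub = kl_gmm_upper([0.5, 0.5], [(0.0, 1.0), (2.0, 1.0)],
                  [0.3, 0.7], [(1.0, 1.0), (2.0, 2.0)])
```

Unlike the exact KLD between mixtures, which has no closed form, this bound is cheap to evaluate and vanishes for identical mixtures.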

10.
Kwong S., He Q.H., Man K.F., Electronics Letters, 1996, 32(17): 1554-1555
The authors propose a new training approach based on maximum model distance (MMD) for HMMs. MMD uses the entire training set to estimate the parameters of each HMM, while traditional maximum likelihood (ML) training only uses the data labelled for that model. Experimental results showed that significant error reduction can be achieved through the proposed approach. In addition, the relationship between MMD and corrective training is discussed, and we prove that corrective training is a special case of the MMD approach.

11.
In applying hidden Markov modeling for recognition of speech signals, the matching of the energy contour of the signal to the energy contour of the model for that signal is normally achieved by appropriate normalization of each vector of the signal prior to both training and recognition. This approach, however, is not applicable when only noisy signals are available for recognition. A unified approach is developed for gain adaptation in recognition of clean and noisy signals. In this approach, hidden Markov models (HMMs) for gain-normalized clean signals are designed using maximum-likelihood (ML) estimates of the gain contours of the clean training sequences. The models are combined with ML estimates of the gain contours of the clean test signals, obtained from the given clean or noisy signals, in performing recognition using the maximum a posteriori decision rule. The gain-adapted training and recognition algorithms are developed for HMMs with Gaussian subsources using the expectation-maximization (EM) approach.

12.
In this paper, the asymptotic smoothing error for hidden Markov models (HMMs) is investigated using hypothesis testing ideas. A family of HMMs is studied, parametrised by a positive constant ε, which is a measure of the frequency of change. Thus, as ε → 0, the HMM becomes increasingly slower moving. We show that the smoothing error is O(ε). These theoretical predictions are confirmed by a series of simulations.

13.
Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are designed with the intrinsic properties of the wavelet transform in mind and provide powerful, yet tractable, probabilistic signal models. Efficient expectation maximization algorithms are developed for fitting the HMMs to observational signal data. The new framework is suitable for a wide range of applications, including signal estimation, detection, classification, prediction, and even synthesis. To demonstrate the utility of wavelet-domain HMMs, we develop novel algorithms for signal denoising, classification, and detection.

14.
The authors consider the application of hidden Markov models (HMMs) to the problem of multitarget tracking, specifically, to the problem of tracking multiple frequency lines. The idea of a mixed track is introduced, a multitrack Viterbi algorithm is described, and a detailed analysis of the underlying Markov model is presented. Simulations show that in some cases, it is possible to avoid data association and directly compute the maximum a posteriori mixed track. Some practical aspects of the algorithm are discussed and simulation results are presented.

15.
Hidden Markov modeling of flat fading channels
Hidden Markov models (HMMs) are a powerful tool for modeling stochastic random processes. They are general enough to model with high accuracy a large variety of processes and are relatively simple, allowing us to compute analytically many important parameters of the process which are very difficult to calculate for other models (such as complex Gaussian processes). Another advantage of using HMMs is the existence of powerful algorithms for fitting them to experimental data and approximating other processes. In this paper, we demonstrate that communication channel fading can be accurately modeled by HMMs, and we find closed-form solutions for the probability distribution of fade duration and the number of level crossings.
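The closed-form fade-duration distribution can be sketched for the simplest case, a two-state Markov (Gilbert-Elliott style) channel, where the fade duration is geometric in the fade state's self-transition probability (a simplified illustration; the paper's HMMs may have more states and a different parameterization):

```python
import numpy as np

def fade_duration_pmf(p00, d_max):
    """Two-state Markov fading model with state 0 = 'fade'. With
    self-transition probability p00, the fade duration D is geometric:
    P(D = d) = p00**(d-1) * (1 - p00)."""
    d = np.arange(1, d_max + 1)
    return p00 ** (d - 1) * (1.0 - p00)

def mean_fade_duration(p00):
    """Mean of the geometric fade duration: 1 / (1 - p00)."""
    return 1.0 / (1.0 - p00)

pmf = fade_duration_pmf(0.9, 2000)  # p00 = 0.9 -> mean fade length 10 steps
```

The same self-transition probability also determines the level-crossing behavior, since each exit from the fade state is one upward level crossing.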

16.
This paper presents a novel approach for human activity recognition (HAR) using the joint angles from a 3D model of a human body. Unlike conventional approaches in which the joint angles are computed from inverse kinematic analysis of the optical marker positions captured with multiple cameras, our approach utilizes the body joint angles estimated directly from time‐series activity images acquired with a single stereo camera by co‐registering a 3D body model to the stereo information. The estimated joint‐angle features are then mapped into codewords to generate discrete symbols for a hidden Markov model (HMM) of each activity. With these symbols, each activity is trained through the HMM, and later, all the trained HMMs are used for activity recognition. The performance of our joint‐angle–based HAR has been compared to that of a conventional binary and depth silhouette‐based HAR, producing significantly better results in the recognition rate, especially for the activities that are not discernible with the conventional approaches.
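The mapping from joint-angle features to discrete HMM symbols can be sketched as nearest-codeword quantization (a minimal illustration; the codebook and feature values below are hypothetical, and in practice the codebook would be learned, e.g., by k-means):

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector (e.g., a frame of estimated joint angles)
    to the index of its nearest codeword, producing the discrete symbol
    sequence consumed by a discrete HMM.
    features: (T, D) array, codebook: (K, D) array -> (T,) symbol indices."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [90.0, 0.0], [90.0, 90.0]])  # hypothetical codewords
frames = np.array([[5.0, -2.0], [88.0, 3.0], [85.0, 92.0]])   # hypothetical joint angles
symbols = quantize(frames, codebook)
```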

17.
The expectation-maximization (EM) algorithm is popular in estimating the parameters of various statistical models. We consider applications of the EM algorithm to the maximum a posteriori (MAP) sequence decoding assuming that sources and channels are described by hidden Markov models (HMMs). The HMMs can accurately approximate a large variety of communication channels with memory and, in particular, wireless fading channels with noise. The direct maximization of the a posteriori probability (APP) is too complex. The EM algorithm allows us to obtain the MAP sequence estimation iteratively. Since each step of the EM algorithm increases the APP, the algorithm can improve the performance of any decoding procedure.

18.
李楠  姬光荣 《现代电子技术》2012,35(8):54-56,60
To examine the application of hidden Markov models to image recognition in more detail, this paper takes fingerprint recognition as an example and surveys several fingerprint image recognition algorithms based on hidden Markov models, including one-dimensional HMMs, pseudo two-dimensional HMMs, two-dimensional HMMs, and groups of one-dimensional HMMs. The strengths and weaknesses of these four HMM variants for image recognition are summarized in terms of time complexity and recognition accuracy, leading to conclusions about which recognition model is suitable for which kind of image.

19.
Hidden Markov models (HMMs) are successfully applied in various fields of time series analysis. Colored noise, e.g., due to filtering, violates basic assumptions of the model. Although it is well known how to consider autoregressive (AR) filtering, there is no algorithm to take into account moving-average (MA) filtering in parameter estimation exactly. We present an approximate likelihood estimator for MA-filtered HMMs, which is generalized to deal with autoregressive moving-average (ARMA) filtered HMMs. The approximation order of the likelihood calculation can be chosen. Therefore, we obtain a sequence of estimators for the HMM parameters as well as for the filter coefficients. The recursion equations for an efficient algorithm are derived from exact expressions for the forward iterations. By simulations, we show that the derived estimators are unbiased in filter situations where standard HMMs are not able to recover the true dynamics. Special implementation strategies together with small approximations yield further acceleration of the algorithm.

20.
Approximate maximum likelihood (ML) hidden Markov modeling using the most likely state sequence (MLSS) is examined and compared with the exact ML approach that considers all possible state sequences. It is shown that for any hidden Markov model (HMM), the difference between the approximate and the exact normalized likelihood functions cannot exceed the logarithm of the number of states divided by the dimension of the output vectors (frame length). Furthermore, for Gaussian HMMs and a given observation sequence, the MLSS is typically the sequence of nearest neighbor states in the Itakura-Saito sense, and the posterior probability of any state sequence which departs from the MLSS in a single time instant decays exponentially with the frame length. Hence, for a sufficiently large frame length, the exact and approximate ML approaches provide similar model estimates and likelihood values.
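The gap between the exact likelihood (forward algorithm, summing over all state sequences) and the MLSS approximation (Viterbi, keeping only the best sequence) can be checked numerically; a minimal sketch with hypothetical parameters, using the coarse fact that at most N^T state sequences exist, so the log-likelihoods differ by at most T·log N:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Exact log-likelihood: sum over all state sequences (forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

def viterbi_loglik(pi, A, B, obs):
    """Approximate log-likelihood using only the most likely state sequence."""
    delta = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        delta = (delta[:, None] + np.log(A)).max(axis=0) + np.log(B[:, o])
    return delta.max()

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # hypothetical 2-state HMM
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 1, 0, 0]
exact, approx = forward_loglik(pi, A, B, obs), viterbi_loglik(pi, A, B, obs)
```

The MLSS likelihood always lower-bounds the exact one, and the two coincide as one path dominates the posterior.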


Copyright©北京勤云科技发展有限公司  京ICP备09084417号