Similar Documents
20 similar documents retrieved.
1.
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and salt-and-pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with salt-and-pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure outperforms image restoration-based techniques and closely matches the performance of HMGMM on clean images in terms of both visual segmentation results and error rate.
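The MDI distortion mentioned above is, for Gaussian models, the Kullback-Leibler divergence between two Gaussians, and a crude stand-in for the covariance adjustment is to inflate each codebook covariance by an estimated additive-noise variance. A minimal sketch under those assumptions (the function names and the additive-white-noise model are illustrative, not the paper's exact MDI-minimizing procedure):

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """KL divergence D(N(m0,S0) || N(m1,S1)) -- the MDI between two Gaussian models."""
    k = len(m0)
    S1inv = np.linalg.inv(S1)
    dm = np.asarray(m1, float) - np.asarray(m0, float)
    _, ld1 = np.linalg.slogdet(S1)
    _, ld0 = np.linalg.slogdet(S0)
    return 0.5 * (np.trace(S1inv @ S0) + dm @ S1inv @ dm - k + ld1 - ld0)

def inflate_covariances(covs, noise_var):
    """Expand each codebook covariance for assumed additive white noise
    (a simple stand-in for the paper's MDI-minimizing adjustment)."""
    return [c + noise_var * np.eye(c.shape[0]) for c in covs]

clean = [np.array([[1.0, 0.3], [0.3, 2.0]])]
noisy = inflate_covariances(clean, noise_var=0.5)
```

The inflation direction matters: adding the noise covariance widens each component so that noisy observations are not assigned vanishing likelihood under the clean-image codebook.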

2.
Gauss mixtures have gained popularity in statistics and statistical signal processing applications for a variety of reasons, including their ability to approximate a large class of interesting densities well and the availability of algorithms such as the Baum–Welch or expectation-maximization (EM) algorithm for constructing the models from observed data. We here consider a quantization approach to Gauss mixture design based on the information-theoretic view of Gaussian sources as a "worst case" for robust signal compression. Results in high-rate quantization theory suggest distortion measures suitable for Lloyd clustering of Gaussian components based on a training set of data. The approach provides a Gauss mixture model and an associated Gauss mixture vector quantizer which is locally robust. We describe the quantizer mismatch distortion and its relation to other distortion measures, including the traditional squared error, the Kullback–Leibler (relative entropy) and minimum discrimination information, and the log-likelihood distortions. The resulting Lloyd clustering algorithm is demonstrated by applications to image vector quantization, texture classification, and North Atlantic pipeline image classification.
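The EM fitting step this abstract refers to can be sketched for a one-dimensional Gauss mixture in a few lines (a generic EM sketch, not the Lloyd-clustering variant the paper develops; the initialization and the two-component toy data are assumptions):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Fit a 1-D Gaussian mixture with plain EM."""
    mu = np.linspace(x.min(), x.max(), k)   # spread initial means over the data range
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        d2 = (x[:, None] - mu[None, :]) ** 2
        p = w * np.exp(-0.5 * d2 / var) / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: reweighted means, variances, and mixing weights
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n
        w = n / len(x)
    return mu, var, w

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
mu, var, w = em_gmm_1d(x)
```

On this well-separated toy data the estimated means land near the true component centers at -3 and +3.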

3.
Discrete hidden Markov models (HMMs) were applied to classify pregnancy disorders. The observation sequence was generated by transforming RR and systolic blood pressure time series using symbolic dynamics. Time series were recorded from 15 women with pregnancy-induced hypertension, 34 with preeclampsia, and 41 controls beyond the 30th gestational week. HMMs with five to ten hidden states were found to be sufficient to characterize the different blood pressure variability, whereas significant classification with RR-based HMMs required fifteen hidden states. The pregnancy disorders preeclampsia and pregnancy-induced hypertension revealed different patho-physiological autonomous regulation, suggesting a different etiology for the two disorders.

4.
5.
We consider quantization from the perspective of minimizing filtering error when quantized instead of continuous measurements are used as inputs to a nonlinear filter, specializing to discrete-time two-state hidden Markov models (HMMs) with continuous-range output. An explicit expression for the filtering error when continuous measurements are used is presented. We also propose a quantization scheme based on maximizing the mutual information between quantized observations and the hidden states of the HMM.
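A binary version of such a mutual-information-maximizing quantizer can be sketched directly: sweep a threshold over the observation range and keep the one maximizing I(Q; S) between the quantized output and the hidden state. The symmetric two-Gaussian emission setup below is an assumed toy example, not the paper's scheme:

```python
import numpy as np
from math import erf, sqrt

def gauss_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def mutual_info_binary(thr, priors, means, sigmas):
    """I(Q; S) when a Gaussian emission is quantized to {y < thr, y >= thr}."""
    # joint distribution P(S = s, Q = q)
    p = np.array([[w * gauss_cdf((thr - m) / sd),
                   w * (1.0 - gauss_cdf((thr - m) / sd))]
                  for w, m, sd in zip(priors, means, sigmas)])
    ps = p.sum(axis=1, keepdims=True)   # marginal over states
    pq = p.sum(axis=0, keepdims=True)   # marginal over quantizer outputs
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (ps @ pq)[nz])).sum())

# assumed toy setup: equiprobable hidden states emitting N(-1, 1) and N(+1, 1)
thrs = np.linspace(-4.0, 4.0, 161)
best = max(thrs, key=lambda t: mutual_info_binary(t, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]))
```

By symmetry of the toy emissions, the sweep settles on a threshold at the midpoint between the two means.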

6.
Statistical modeling methods are becoming indispensable in today's large-scale image analysis. In this paper, we explore a computationally efficient parameter estimation algorithm for two-dimensional (2-D) and three-dimensional (3-D) hidden Markov models (HMMs) and show applications to satellite image segmentation. The proposed parameter estimation algorithm is compared with the first algorithm proposed for 2-D HMMs, which is based on variable-state Viterbi. We also propose a 3-D HMM for volume image modeling and apply it to volume image segmentation using a large number of synthetic images with ground truth. Experiments have demonstrated the computational efficiency of the proposed parameter estimation technique for 2-D HMMs and the potential of the 3-D HMM as a stochastic modeling tool for volume images.

7.
This paper deals with unsupervised Bayesian classification of multidimensional data. We propose an extension of a previous method of generalized mixture estimation to the correlated sensors case. The method proposed is valid in the independent data case, as well as in the hidden Markov chain or field model case, with known applications in signal processing, particularly speech or image processing. The efficiency of the method proposed is shown via some simulations concerning hidden Markov fields, with application to unsupervised image segmentation.

8.
A method of integrating the Gibbs distributions (GDs) into hidden Markov models (HMMs) is presented. The probabilities of the hidden state sequences of HMMs are modeled by GDs in place of the transition probabilities. The GDs offer a general way of modeling neighbor interactions in Markov random fields, of which the Markov chains in HMMs are special cases. An algorithm for estimating the model parameters is developed based on Baum reestimation, and an algorithm for computing the probability terms is developed using a lattice structure. The GD models were used for experiments in speech recognition on the TI speaker-independent, isolated digit database. The observation sequences of the speech signals were modeled by mixture Gaussian autoregressive densities. The energy functions of the GDs were developed using very few parameters and proved adequate for hidden-layer modeling. The results of the experiments showed that the GD models performed at least as well as the HMM models.

9.
We present a discriminative training algorithm that uses support vector machines (SVMs) to improve the classification of discrete and continuous output probability hidden Markov models (HMMs). The algorithm uses a set of maximum-likelihood (ML) trained HMMs as a baseline system and an SVM training scheme to rescore the results of the baseline HMMs. It turns out that the rescoring model can be represented as an unnormalized HMM. We describe two algorithms for training the unnormalized HMM models for both the discrete and continuous cases. One of the algorithms results in a single set of unnormalized HMMs that can be used in the standard recognition procedure (the Viterbi recognizer), as if they were plain HMMs. We use a toy problem and an isolated noisy digit recognition task to compare our new method to standard ML training. Our experiments show that SVM rescoring of hidden Markov models typically reduces the error rate significantly compared to standard ML training.

10.
李楠  姬光荣 《现代电子技术》2012,35(8):54-56,60
To examine the application of hidden Markov models to image recognition in more detail, this paper takes fingerprint recognition as an example and surveys several fingerprint image recognition algorithms based on hidden Markov models, including the one-dimensional HMM, the pseudo two-dimensional HMM, the two-dimensional HMM, and groups of one-dimensional models. The strengths and weaknesses of these four HMM variants for image recognition are summarized in terms of time complexity and recognition accuracy, leading to conclusions about which recognition model is best suited to which kind of image.

11.
Linear predictive coding (LPC), vector quantization (VQ), and hidden Markov models (HMMs) are three popular techniques from speech recognition that are applied here to modeling and classifying nonspeech natural sounds. A new structure called the product-code HMM uses two independent HMMs per class, one for spectral shape and one for gain. Classification decisions are made by scoring shape and gain index sequences from a product-code VQ. In a series of classification experiments, the product-code structure outperformed the conventional structure, with an accuracy of over 96% for three classes.

12.
In this correspondence, we consider a probability distance problem for a class of hidden Markov models (HMMs). The notion of conditional relative entropy between conditional probability measures is introduced as an a posteriori probability distance which can be used to measure the discrepancy between hidden Markov models when a realized observation sequence is observed. Using a measure change technique, we derive a representation for conditional relative entropy in terms of the parameters of the HMMs and conditional expectations given measurements. With this representation, we show that this distance can be calculated using an information state approach.
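A crude empirical proxy for such a distance, given a realized observation sequence, is the per-sample log-likelihood gap of that sequence under the two models, computed with the scaled forward recursion. This is a generic discrete-output sketch, not the paper's measure-change derivation, and both toy models below are assumptions:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete-output HMM via the scaled forward recursion."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict through transitions, weight by emission
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()     # rescale to avoid underflow
    return ll

def sample_hmm(T, pi, A, B, seed=0):
    """Draw an observation sequence of length T from a discrete-output HMM."""
    rng = np.random.default_rng(seed)
    s = rng.choice(len(pi), p=pi)
    obs = []
    for _ in range(T):
        obs.append(rng.choice(B.shape[1], p=B[s]))
        s = rng.choice(len(pi), p=A[s])
    return obs

pi = np.array([0.5, 0.5])
A_p = np.array([[0.9, 0.1], [0.1, 0.9]]); B_p = np.array([[0.8, 0.2], [0.2, 0.8]])
A_q = np.array([[0.5, 0.5], [0.5, 0.5]]); B_q = np.array([[0.5, 0.5], [0.5, 0.5]])
obs = sample_hmm(2000, pi, A_p, B_p)
gap = (forward_loglik(obs, pi, A_p, B_p) - forward_loglik(obs, pi, A_q, B_q)) / len(obs)
```

For a long sequence drawn from the first model, the normalized gap is positive, reflecting that the generating model assigns the realized sequence higher likelihood than the mismatched one.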

13.
Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are designed with the intrinsic properties of the wavelet transform in mind and provide powerful, yet tractable, probabilistic signal models. Efficient expectation maximization algorithms are developed for fitting the HMMs to observational signal data. The new framework is suitable for a wide range of applications, including signal estimation, detection, classification, prediction, and even synthesis. To demonstrate the utility of wavelet-domain HMMs, we develop novel algorithms for signal denoising, classification, and detection.
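The HMM-based shrinkage itself is beyond a few lines, but the wavelet-domain setting can be illustrated by plain soft thresholding on a one-level Haar transform, i.e., exactly the kind of independent-coefficient baseline the paper improves on. The test signal and threshold below are assumptions:

```python
import numpy as np

def haar_denoise(y, thr):
    """One-level Haar DWT, soft-threshold the detail band, inverse transform."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2.0)              # approximation coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2.0)              # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)   # soft thresholding
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2.0)                  # inverse Haar step
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
clean = np.sin(2.0 * np.pi * t)
noisy = clean + 0.5 * rng.standard_normal(n)
thr = 0.5 * np.sqrt(2.0 * np.log(n // 2))   # universal threshold for noise sigma = 0.5
denoised = haar_denoise(noisy, thr)
```

Because the smooth signal's detail coefficients are tiny while the noise spreads evenly across bands, zeroing small details removes roughly the detail band's share of the noise energy.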

14.
1 Introduction Many real observed data are characterized by multiple coupled causes or factors. For instance, face images may be generated by combining eyebrows, eyes, nose and mouth. Similarly, speech signals may result from an interaction of motions of factors such as the jaw, tongue, velum, lip and mouth. Recently Zemel and Hinton proposed a factorial learning architecture [1~2] to deal with factorial data. The goal of factorial learning is to discover the multiple underlying causes or factors from the observed data and find a representation that will bo…

15.
Improved hidden Markov models in the wavelet-domain
Wavelet-domain hidden Markov models (HMMs), in particular the hidden Markov tree (HMT) model, have been introduced and applied to signal and image processing, e.g., signal denoising. We develop a simple initialization scheme for efficient HMT model training and then propose a new four-state HMT model called HMT-2. We find that the new initialization scheme fits the HMT-2 model well. Experimental results show that the performance of signal denoising using the HMT-2 model is often improved over the two-state HMT model developed by Crouse et al. (see ibid., vol.46, p.886-902, 1998).

16.
Partially hidden Markov models (PHMMs) are introduced. They differ from ordinary HMMs in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration, they are applied to black-and-white image compression, where the hidden variables may be interpreted as representing noncausal pixels.

17.
Multiscale image segmentation based on the contourlet-domain HMT model
Based on the statistical properties of contourlet coefficient distributions, a new image segmentation algorithm is proposed that combines the hidden Markov tree (HMT) model with the Bayesian criterion. To preserve information across contourlet scales more effectively, a new weighted neighborhood context model is proposed, and a pixel-level segmentation algorithm based on Gaussian mixture models and a multiscale fusion algorithm based on the new context model are given. Experiments on synthetic texture images, aerial images, and SAR images, with comparisons against the wavelet-domain HMTseg method, demonstrate the effectiveness of the algorithm. For the synthetic texture images, the misclassification probability is given as an evaluation metric. The results show that the proposed method not only clearly improves the preservation of edge and directional information but also markedly lowers the misclassification probability, and it achieves satisfactory segmentation of real images.

18.
In this paper, we describe an automatic unsupervised texture segmentation scheme using hidden Markov models (HMMs). First, the feature map of the image is formed using Laws' micromasks and directional macromasks. Each pixel in the feature map is represented by a sequence of 4-D feature vectors. The feature sequences belonging to the same texture are modeled as an HMM. Thus, if there are M different textures present in an image, there are M distinct HMMs to be found and trained. Consequently, the unsupervised texture segmentation problem becomes an HMM-based problem, where the appropriate number of HMMs, the associated model parameters, and the discrimination among the HMMs become the foci of our scheme. A two-stage segmentation procedure is used. First, coarse segmentation is used to obtain the approximate number of HMMs and their associated model parameters. Then, fine segmentation is used to accurately estimate the number of HMMs and the model parameters. In these two stages, the critical task of merging the similar HMMs is accomplished by comparing the discrimination information (DI) between the two HMMs against a threshold computed from the distribution of all DIs. A postprocessing stage of multiscale majority filtering is used to further enhance the segmented result. The proposed scheme is highly suitable for pipeline/parallel implementation. Detailed experimental results are reported. These results indicate that the present scheme compares favorably with respect to other successful schemes reported in the literature.

19.
Motion trajectories provide rich spatiotemporal information about an object's activity. This paper presents novel classification algorithms for recognizing object activity using object motion trajectory. In the proposed classification system, trajectories are segmented at points of change in curvature, and the subtrajectories are represented by their principal component analysis (PCA) coefficients. We first present a framework to robustly estimate the multivariate probability density function based on PCA coefficients of the subtrajectories using Gaussian mixture models (GMMs). We show that GMM-based modeling alone cannot capture the temporal relations and ordering between underlying entities. To address this issue, we use hidden Markov models (HMMs) with a data-driven design in terms of number of states and topology (e.g., left-right versus ergodic). Experiments using a database of over 5700 complex trajectories (obtained from UCI-KDD data archives and Columbia University Multimedia Group) subdivided into 85 different classes demonstrate the superiority of our proposed HMM-based scheme using PCA coefficients of subtrajectories in comparison with other techniques in the literature.
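The PCA-coefficient representation of subtrajectories can be sketched with a plain SVD; resampling each subtrajectory to a fixed length is assumed, and all names below are illustrative rather than the paper's:

```python
import numpy as np

def pca_coeffs(segments, k=2):
    """Project fixed-length subtrajectory vectors onto their top-k principal components."""
    X = np.asarray(segments, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # rows of Vt are principal directions, ordered by decreasing singular value
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mean

rng = np.random.default_rng(0)
# toy subtrajectories lying in a 2-D subspace of R^6, so two components suffice
basis = rng.standard_normal((2, 6))
segs = rng.standard_normal((100, 2)) @ basis
coeffs, comps, mean = pca_coeffs(segs, k=2)
```

The low-dimensional `coeffs` rows would then be the per-subtrajectory features handed to the GMM or HMM stage.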

20.
The authors demonstrate the effectiveness of phonemic hidden Markov models with Gaussian mixture output densities (mixture HMMs) for speaker-dependent large-vocabulary word recognition. Speech recognition experiments show that for almost any reasonable amount of training data, recognizers using mixture HMMs consistently outperform those employing unimodal Gaussian HMMs. With a sufficiently large training set (e.g. more than 2500 words), use of HMMs with 25-component mixture distributions typically reduces recognition errors by about 40%. It is also found that the mixture HMMs outperform a set of unimodal generalized triphone models having the same number of parameters. Previous attempts to employ mixture HMMs for speech recognition proved discouraging because of the high complexity and computational cost in implementing the Baum-Welch training algorithm. It is shown how mixture HMMs can be implemented very simply in unimodal transition-based frameworks by allowing multiple transitions from one state to another.
