Related Articles (20 results)
1.
This paper addresses the estimation of fuzzy Gaussian mixture distributions, with applications to unsupervised statistical fuzzy image segmentation. In a general way, the fuzzy approach enriches the current statistical models by adding a fuzzy class, which has several interpretations in signal processing. One such interpretation in image segmentation is the simultaneous appearance of several thematic classes on the same site. We introduce a new procedure for estimating fuzzy mixtures, which is an adaptation of the iterative conditional estimation (ICE) algorithm to the fuzzy framework. We first describe the blind estimation, i.e., without taking any spatial information into account, valid in any context of independent noisy observations. Then we introduce, in a manner analogous to classical hard segmentation, the spatial information by two different approaches: contextual segmentation and adaptive blind segmentation. In the first case, the spatial information is taken into account at the segmentation step, and in the second case at the parameter estimation step. The results obtained with the iterative conditional estimation algorithm are compared to those obtained with the expectation-maximization (EM) and stochastic EM algorithms, at both the parameter estimation and unsupervised segmentation levels, via simulations. The methods proposed appear complementary to the fuzzy C-means algorithms.
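As a rough illustrative sketch (not the paper's ICE procedure), the EM baseline that the abstract compares against can be written for blind estimation of a two-component 1-D Gaussian mixture; the function name, initialization, and toy data are assumptions:

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Plain (blind) EM for a two-component 1-D Gaussian mixture."""
    mu = np.percentile(x, [25.0, 75.0])          # deterministic initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior class probabilities for each observation
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# two well-separated classes of independent noisy observations
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 3000), rng.normal(5.0, 1.0, 3000)])
pi, mu, var = em_gmm_1d(x)
```

ICE differs in replacing the analytic M-step by conditional expectations of estimators computed from sampled realizations, but the alternating structure is the same.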

2.
A spatially variant finite mixture model is proposed for pixel labeling and image segmentation. For the case of spatially varying mixtures of Gaussian density functions with unknown means and variances, an expectation-maximization (EM) algorithm is derived for maximum likelihood estimation of the pixel labels and the parameters of the mixture densities. An a priori density function is formulated for the spatially variant mixture weights. A generalized EM algorithm for maximum a posteriori estimation of the pixel labels based upon these prior densities is derived. This algorithm incorporates a variation of gradient projection in the maximization step, and the resulting algorithm takes the form of grouped coordinate ascent. Gaussian densities have been used for simplicity, but the algorithm can easily be modified to incorporate other appropriate models for the mixture model component densities. The accuracy of the algorithm is quantitatively evaluated through Monte Carlo simulation, and its performance is qualitatively assessed via experimental images from computerized tomography (CT) and magnetic resonance imaging (MRI).
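The key ingredient above is that each pixel carries its own mixture weights. A minimal sketch of a single labeling step under such per-pixel priors (hypothetical names, 1-D gray values, uniform priors used only for the demo):

```python
import numpy as np

def map_labels(y, w, mu, var):
    """One E-step of a spatially variant mixture: per-pixel priors w[i, k]
    replace the single global weight vector, and each pixel gets its MAP label."""
    dens = np.exp(-(y[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    post = w * dens                               # prior varies with the pixel index i
    post /= post.sum(axis=1, keepdims=True)
    return post, post.argmax(axis=1)

y = np.array([0.1, 0.2, 4.9, 5.1])                # observed gray values
w = np.full((4, 2), 0.5)                          # uniform per-pixel priors for brevity
mu = np.array([0.0, 5.0])
var = np.array([1.0, 1.0])
post, labels = map_labels(y, w, mu, var)
```

In the paper's generalized EM, the weights `w` would themselves be re-estimated under the prior by gradient projection rather than held fixed.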

3.
This paper deals with the problem of unsupervised image segmentation, which consists of a first mixture-identification phase followed by a Bayesian decision phase. During the mixture identification phase, the conditional probability density function (pdf) and the a priori class probabilities must be estimated. The most difficult part is the estimation of the number of pixel classes, or, in other words, the estimation of the number of density mixture components. To resolve this problem, we propose a Stochastic and Nonparametric Expectation-Maximization (SNEM) algorithm. The algorithm finds the most likely number of classes and their associated model parameters, and generates a segmentation of the image by classifying the pixels into these classes. The nonparametric aspect comes from the use of an orthogonal series estimator. Experimental results are promising: we have obtained accurate results on a variety of real images.

4.
李磊, 董卓莉, 张德贤, 费选. 《电子学报》 2016, 44(6): 1349-1354
An unsupervised color image segmentation method based on region-constrained EM (Expectation Maximization) and graph cuts is proposed to solve the problem of automatically determining the number of segmentation classes. First, superpixels of the image are generated, and CIE Lab color features and multi-scale quaternion Gabor filter features are extracted. To determine the number of classes efficiently and automatically, while avoiding the singular-value problem caused by using the superpixels directly, each superpixel is sampled and represented by its sampled pixels. A Gaussian mixture model is then fitted to the set of sampled pixels, and a component-wise EM with region constraints automatically determines the number of mixture components and their parameters. Finally, graph cuts combined with the Gaussian mixture model optimize the segmentation to obtain the final result. Experimental results show that the method yields substantial improvements in both segmentation efficiency and segmentation quality.

5.
Statistical neural networks executing soft-decision algorithms have been shown to be very effective in many classification problems. A neural network architecture is developed here that can perform unsupervised joint segmentation and labeling of objects in images. We propose the semi-parametric hierarchical mixture density (HMD) model as a tool for capturing the diversity of real world images and pose the object recognition problem as a maximum likelihood (ML) estimation of the HMD parameters. We apply the expectation-maximization (EM) algorithm for this purpose and utilize ideas and techniques from statistical physics to cast the problem as the minimization of a free energy function. We then proceed to regularize the solution thus obtained by adding smoothing terms to the objective function. The resulting recursive scheme for estimating the posterior probabilities of an object's presence in an image corresponds to an unsupervised feedback neural network architecture. We present here the results of experiments involving recognition of traffic signs in natural scenes using this technique.

6.
This paper deals with the statistical segmentation of multisensor images. In a Bayesian context, the interest of using hidden Markov random fields, which allows one to take contextual information into account, has been well known for about 20 years. In other situations, the Bayesian framework is insufficient and one must make use of the theory of evidence. The aim of the authors' work is to propose evidential models that can take into account contextual information via Markovian fields. They define a general evidential Markovian model and show that it is usable in practice. Different simulation results presented show the interest of evidential Markovian field model-based segmentation algorithms. Furthermore, an original variant of generalized mixture estimation, making possible the unsupervised evidential fusion in a Markovian context, is described. It is applied to the unsupervised segmentation of real radar and SPOT images, showing the relevance of the proposed models and corresponding segmentation methods in real situations.
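The "theory of evidence" invoked above is Dempster-Shafer theory. As background (not taken from the paper), the basic fusion step, Dempster's rule of combination, can be sketched over frozenset focal elements:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions defined on frozenset
    focal elements, renormalizing by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb        # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({'A'}), frozenset({'B'})
AB = A | B                                 # ignorance: mass on the whole frame
m1 = {A: 0.6, AB: 0.4}                     # e.g. evidence from sensor 1
m2 = {B: 0.5, AB: 0.5}                     # e.g. evidence from sensor 2
m = dempster_combine(m1, m2)
```

The evidential Markovian model of the paper goes further by letting such masses interact with contextual (neighborhood) information.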

7.
This paper deals with unsupervised Bayesian classification of multidimensional data. We propose an extension of a previous method of generalized mixture estimation to the correlated sensors case. The method proposed is valid in the independent data case, as well as in the hidden Markov chain or field model case, with known applications in signal processing, particularly speech or image processing. The efficiency of the method proposed is shown via some simulations concerning hidden Markov fields, with application to unsupervised image segmentation.

8.
The work addresses Bayesian unsupervised satellite image segmentation using contextual methods. It is shown, via a simulation study, that the spatial or spectral context contribution is sensitive to image parameters such as homogeneity, means, variances, and spatial or spectral correlations of the noise. From this one may choose the best context contribution according to the estimated values of the above parameters. The parameter estimation is done by SEM, a density mixture estimator which is a stochastic variant of the EM (expectation-maximization) algorithm. Another simulation study shows good robustness of the SEM algorithm with respect to different image parameters. Thus, modification of the behavior of the contextual methods, when the SEM-based unsupervised approaches are considered, is limited, and the conclusions of the supervised simulation study stay valid. An adaptive unsupervised method using more relevant contextual features is proposed. Different SEM-based unsupervised contextual segmentation methods, applied to two real SPOT images, give consistently better results than a classical histogram-based method.
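A toy sketch of the SEM idea referenced above (hypothetical names, not the paper's implementation): the E-step draws a hard label for each observation from its posterior, and the M-step re-estimates the parameters from the sampled partition.

```python
import numpy as np

def sem_gmm_1d(x, n_iter=100, seed=0):
    """SEM for a two-component 1-D Gaussian mixture: stochastic E-step,
    then standard M-step on the sampled hard partition."""
    rng = np.random.default_rng(seed)
    mu = np.percentile(x, [25.0, 75.0])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        post = dens / dens.sum(axis=1, keepdims=True)
        z = (rng.random(len(x)) < post[:, 1]).astype(int)   # sample a label per pixel
        for k in (0, 1):
            sel = x[z == k]
            if len(sel) == 0:                               # guard against empty classes
                continue
            pi[k] = len(sel) / len(x)
            mu[k] = sel.mean()
            var[k] = sel.var() + 1e-9
    return pi, mu, var

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 2000), rng.normal(4.0, 1.0, 2000)])
pi, mu, var = sem_gmm_1d(x)
```

The sampling step keeps the iteration from getting trapped the way plain EM can, at the cost of parameter estimates that fluctuate around the solution.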

9.
It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. We have previously shown this to be true for atlas-based segmentation of biomedical images. The conventional method for combining individual classifiers weights each classifier equally (vote or sum rule fusion). In this paper, we propose two methods that estimate the performances of the individual classifiers and combine the individual classifiers by weighting them according to their estimated performance. The two methods are multiclass extensions of an expectation-maximization (EM) algorithm for ground truth estimation of binary classification based on decisions of multiple experts (Warfield et al., 2004). The first method performs parameter estimation independently for each class with a subsequent integration step. The second method considers all classes simultaneously. We demonstrate the efficacy of these performance-based fusion methods by applying them to atlas-based segmentations of three-dimensional confocal microscopy images of bee brains. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. In a second evaluation study, multiple actual atlas-based segmentations are combined and their accuracies computed by comparing them to a manual segmentation. We demonstrate in both evaluation studies that segmentations produced by combining multiple individual registration-based segmentations are more accurate for the two classifier fusion methods we propose, which weight the individual classifiers according to their EM-based performance estimates, than for simple sum rule fusion, which weights each classifier equally.
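The contrast between equal-weight voting and performance-weighted fusion can be sketched as a toy weighted vote (the EM-based estimation of the weights, the actual contribution of the paper, is omitted here; data and weights are invented):

```python
import numpy as np

def fuse(decisions, weights):
    """Weighted vote over per-classifier label maps (rows: classifiers)."""
    n_labels = decisions.max() + 1
    votes = np.zeros((decisions.shape[1], n_labels))
    for w, d in zip(weights, decisions):
        votes[np.arange(decisions.shape[1]), d] += w
    return votes.argmax(axis=1)

# three classifiers labeling five voxels; the third is assumed unreliable
decisions = np.array([[1, 0, 1, 1, 2],
                      [0, 0, 1, 1, 2],
                      [0, 1, 0, 0, 0]])
equal = fuse(decisions, np.array([1.0, 1.0, 1.0]))       # vote rule fusion
weighted = fuse(decisions, np.array([0.6, 0.3, 0.1]))    # performance-based weights
```

With equal weights, the unreliable third classifier can flip the first voxel; downweighting it changes that decision.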

10.
The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or intervention programs. However, the efficiency of such a stochastic model depends on the accuracy of estimation of the model's parameter set. We propose a new approach, heuristic optimization, to estimate the model parameter set more accurately than the conventional and popular expectation-maximization (EM) algorithm. After initial segmentation of a given mammogram, the finite generalized Gaussian mixture (FGGM) model is constructed by computing the statistics associated with different image regions. The model parameter set thus obtained is estimated by particle swarm optimization (PSO) and evolutionary programming (EP) techniques, where the objective function to be minimized is the relative entropy between the image histogram and the estimated density distributions. When our heuristic approach was applied to different categories of mammograms from the Mini-MIAS database, it yielded a lower floor of estimation error in 109 out of 112 cases (97.3%) and 101 out of 102 cases (99.0%), for the number of image regions being five and eight, respectively, with the added advantage of a faster convergence rate compared to the EM approach. Besides, the estimated density model preserves the number of regions specified by the information-theoretic criteria in all the test cases, and the assessment of the segmentation results by radiologists is promising.
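The objective the PSO/EP search minimizes, the relative entropy between the image histogram and the model density, can be sketched as follows (toy histograms; the FGGM model itself is not reproduced here):

```python
import numpy as np

def relative_entropy(hist, model):
    """KL divergence D(hist || model) between the normalized image histogram
    and the density predicted by the mixture model."""
    p = hist / hist.sum()
    q = model / model.sum()
    mask = p > 0                       # 0 * log 0 contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

hist = np.array([10.0, 20.0, 40.0, 20.0, 10.0])     # toy gray-level histogram
better = np.array([11.0, 19.0, 40.0, 19.0, 11.0])   # close model density
worse = np.array([20.0, 20.0, 20.0, 20.0, 20.0])    # poor (uniform) model density
```

A candidate parameter set whose predicted density tracks the histogram more closely scores a lower relative entropy, which is what the swarm or evolutionary search rewards.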

11.
Discrete data are an important component in many image processing and computer vision applications. In this work we propose an unsupervised statistical approach to learn structures of this kind of data. The central ingredient in our model is the introduction of the generalized Dirichlet distribution as a prior to the multinomial. An estimation algorithm for the parameters, based on leave-one-out likelihood and empirical Bayesian inference, is developed. This estimation algorithm can be viewed as a hybrid expectation-maximization (EM) which alternates EM iterations with Newton-Raphson iterations using the Hessian matrix. We then propose the use of our model as a parametric basis for support vector machines within a hybrid generative/discriminative framework. In a series of experiments involving scene modeling and classification using visual words, and color texture modeling, we show the efficiency of the proposed approach.

12.
In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected value of the number of misclassified pixels. We present new theoretical results in this paper which show that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values of the model parameters. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.

13.
Otsu's method (maximum between-class variance) is a classical nonparametric, unsupervised, optimal image segmentation method that automatically selects the best threshold. However, when applied to the segmentation of noisy images, Otsu's method does not achieve satisfactory results. To address this problem, this paper presents a new segmentation algorithm for noisy remote sensing images built on Otsu's method. The algorithm first denoises the noisy image by global thresholding with wavelet packets, then estimates each pixel in turn by locally weighted regression to obtain a denoised image, and finally segments the estimated image with Otsu's method. Simulation experiments show that the algorithm has low computational cost and good noise robustness, and achieves good segmentation results.

14.
This paper presents an unsupervised method for restoration of sparse spike trains. These signals are modeled as random Bernoulli-Gaussian processes, and their unsupervised restoration requires (i) estimation of the hyperparameters that control the stochastic models of the input and noise signals and (ii) deconvolution of the pulse process. Classically, the problem is solved iteratively using a maximum generalized likelihood approach despite questionable statistical properties. The contribution of the article is threefold. First, we present a new "core algorithm" for supervised deconvolution of spike trains, which exhibits enhanced numerical efficiency and reduced memory requirements. Second, we propose an original implementation of a hyperparameter estimation procedure that is based upon a stochastic version of the expectation-maximization (EM) algorithm. This procedure utilizes the same core algorithm as the supervised deconvolution method. Third, Monte Carlo simulations show that the proposed unsupervised restoration method exhibits satisfactory theoretical and practical behavior and that, in addition, good global numerical efficiency is achieved.
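The Bernoulli-Gaussian model named above is easy to simulate, which is useful for reproducing such Monte Carlo setups. A minimal sketch (the spike rate, amplitude and noise levels, and the impulse response are all assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
lam, sigma_x, sigma_n = 0.05, 1.0, 0.02    # spike rate, amplitude std, noise std

# Bernoulli-Gaussian input: a spike occurs with probability lam,
# and its amplitude is drawn from N(0, sigma_x^2)
q = rng.random(n) < lam
x = q * rng.normal(0.0, sigma_x, n)

h = np.array([0.5, 1.0, 0.5])              # example impulse response (assumed)
y = np.convolve(x, h)[:n] + rng.normal(0.0, sigma_n, n)
```

Unsupervised restoration then amounts to recovering `q`, `x`, and the hyperparameters `(lam, sigma_x, sigma_n)` from `y` alone.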

15.
Finite mixture models (FMMs) are an indispensable tool for unsupervised classification in brain imaging. Fitting an FMM to the data leads to a complex optimization problem. This optimization problem is difficult to solve by standard local optimization methods, such as the expectation-maximization (EM) algorithm, if a principled initialization is not available. In this paper, we propose a new global optimization algorithm for the FMM parameter estimation problem, which is based on real-coded genetic algorithms. Our specific contributions are two-fold: 1) we propose to use blended crossover in order to reduce the premature convergence problem to its minimum and 2) we introduce a completely new permutation operator specifically meant for FMM parameter estimation. In addition to improving the optimization results, the permutation operator allows for imposing biologically meaningful constraints on the FMM parameter values. We also introduce a hybrid of the genetic algorithm and the EM algorithm for efficient solution of multidimensional FMM fitting problems. We compare our algorithm to the self-annealing EM algorithm and a standard real-coded genetic algorithm on voxel classification tasks in brain imaging. The algorithms are tested on synthetic data as well as real three-dimensional image data from human magnetic resonance imaging, positron emission tomography, and mouse brain MRI. The tissue classification results by our method are shown to be consistently more reliable and accurate than with the competing parameter estimation methods.
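Blended crossover (BLX-alpha) as used above is a standard real-coded operator; a generic sketch (parent vectors and alpha are invented, and this is not the paper's specific variant):

```python
import numpy as np

def blended_crossover(p1, p2, alpha=0.5, rng=None):
    """BLX-alpha: each child gene is drawn uniformly from an interval extending
    alpha times the parents' gap beyond both parents, which counters premature
    convergence by allowing exploration outside the parents' hypercube."""
    if rng is None:
        rng = np.random.default_rng()
    lo = np.minimum(p1, p2)
    hi = np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

p1 = np.array([0.0, 2.0, 10.0])   # e.g. two candidate FMM parameter vectors
p2 = np.array([1.0, 4.0, 10.0])
child = blended_crossover(p1, p2, alpha=0.5, rng=np.random.default_rng(3))
```

Where the parents agree exactly (the third gene), the child inherits the value unchanged; elsewhere it may fall outside the parents' range, which is the operator's point.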

16.
Fuzzy random fields and unsupervised image segmentation
Statistical unsupervised image segmentation using fuzzy random fields is treated. A fuzzy model containing a hard component, which describes pure pixels, and a fuzzy component, which describes mixed pixels, is introduced. A procedure for simulating a fuzzy field, based on a Gibbs sampler step followed by a second step involving white or correlated Gaussian noise, is given. Then the different steps of unsupervised image segmentation are studied. Four different blind segmentation methods are performed: the conditional expectation, two variants of the maximum likelihood, and the least squares approach. The parameters required are estimated by the stochastic expectation-maximization (SEM) algorithm, a stochastic variant of the expectation-maximization (EM) algorithm. These fuzzy segmentation methods are compared with a classical hard segmentation method that does not take the fuzzy class into account. The study shows that the fuzzy SEM algorithm provides reliable estimators. Furthermore, fuzzy segmentation always improves upon the hard segmentation results.

17.
It has been shown that employing multiple atlas images improves segmentation accuracy in atlas-based medical image segmentation. Each atlas image is registered to the target image independently and the calculated transformation is applied to the segmentation of the atlas image to obtain a segmented version of the target image. Several independent candidate segmentations result from the process, which must be somehow combined into a single final segmentation. Majority voting is the generally used rule to fuse the segmentations, but more sophisticated methods have also been proposed. In this paper, we show that the use of global weights for the candidate segmentations has a major limitation. As a means to improve segmentation accuracy, we propose the generalized local weighting voting method; namely, the fusion weights adapt voxel-by-voxel according to a local estimation of segmentation performance. Using digital phantoms and MR images of the human brain, we demonstrate that the performance of each combination technique depends on the gray level contrast characteristics of the segmented region, and that no fusion method yields better results than the others for all the regions. In particular, we show that local combination strategies outperform global methods in segmenting high-contrast structures, while global techniques are less sensitive to noise when contrast between neighboring structures is low. We conclude that, in order to achieve the highest overall segmentation accuracy, the best combination method for each particular structure must be selected.

18.
One of the main problems related to unsupervised change detection methods based on the "difference image" lies in the lack of efficient automatic techniques for discriminating between changed and unchanged pixels in the difference image. Such discrimination is usually performed by using empirical strategies or manual trial-and-error procedures, which affect both the accuracy and the reliability of the change-detection process. To overcome such drawbacks, in this paper, the authors propose two automatic techniques (based on the Bayes theory) for the analysis of the difference image. One allows an automatic selection of the decision threshold that minimizes the overall change detection error probability under the assumption that pixels in the difference image are independent of one another. The other analyzes the difference image by considering the spatial-contextual information included in the neighborhood of each pixel. In particular, an approach based on Markov Random Fields (MRFs) that exploits interpixel class dependency contexts is presented. Both proposed techniques require the knowledge of the statistical distributions of the changed and unchanged pixels in the difference image. To perform an unsupervised estimation of the statistical terms that characterize these distributions, they propose an iterative method based on the Expectation-Maximization (EM) algorithm.  Experimental results confirm the effectiveness of both proposed techniques.
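Once EM has supplied the class priors, means, and variances for the "changed" and "unchanged" populations, the minimum-error decision threshold is where the two weighted Gaussians cross. A sketch under a two-class Gaussian assumption (function name and example numbers are invented):

```python
import numpy as np

def bayes_threshold(p0, m0, s0, p1, m1, s1):
    """Threshold t minimizing the error probability for two Gaussian classes:
    solve p0 * N(t; m0, s0) = p1 * N(t; m1, s1) (a quadratic in t)."""
    a = 1.0 / s1 ** 2 - 1.0 / s0 ** 2
    b = 2.0 * (m0 / s0 ** 2 - m1 / s1 ** 2)
    c = (m1 ** 2 / s1 ** 2 - m0 ** 2 / s0 ** 2
         + 2.0 * np.log((p0 * s1) / (p1 * s0)))
    if abs(a) < 1e-12:                         # equal variances: linear equation
        return -c / b
    roots = (-b + np.array([1.0, -1.0]) * np.sqrt(b * b - 4 * a * c)) / (2 * a)
    # keep the crossing point that lies between the two class means
    return float(roots[(roots > min(m0, m1)) & (roots < max(m0, m1))][0])

t_eq = bayes_threshold(0.5, 0.0, 1.0, 0.5, 4.0, 1.0)     # symmetric case
t_uneq = bayes_threshold(0.7, 0.0, 1.0, 0.3, 5.0, 2.0)   # unequal priors/variances
```

In the symmetric case the threshold is the midpoint of the means; the MRF-based technique of the paper then replaces this pixel-independent rule with a contextual one.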

19.
In this paper, we consider the problem of blind source separation in the wavelet domain. We propose a Bayesian estimation framework for the problem where different models of the wavelet coefficients are considered: the independent Gaussian mixture model, the hidden Markov tree model, and the contextual hidden Markov field model. For each of the three models, we give expressions of the posterior laws and propose appropriate Markov chain Monte Carlo algorithms in order to perform unsupervised joint blind separation of the sources and estimation of the mixing matrix and hyperparameters of the problem. Indeed, in order to achieve an efficient joint separation and denoising procedure in the case of high noise level in the data, a slight modification of the exposed models is presented: the Bernoulli-Gaussian mixture model, which is equivalent to a hard thresholding rule in denoising problems. A number of simulations are presented in order to highlight the performances of the aforementioned approach: 1) in both high and low signal-to-noise ratios and 2) comparing the results with respect to the choice of the wavelet basis decomposition.
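The hard thresholding rule that the Bernoulli-Gaussian prior reduces to is simple to state (toy coefficients; the threshold value is an assumption, not derived from the paper's hyperparameters):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Hard thresholding of wavelet coefficients: keep a coefficient
    unchanged if its magnitude exceeds t, otherwise set it to zero."""
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)

w = np.array([0.1, -0.05, 3.2, -2.7, 0.4])   # toy detail coefficients
d = hard_threshold(w, 1.0)
```

Under the Bernoulli-Gaussian view, small coefficients are attributed to the "no source activity" component and zeroed, while large ones are kept as-is rather than shrunk.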

20.
Estimating the number of components (the order) in a mixture model is often addressed using criteria such as the Bayesian information criterion (BIC) and minimum message length. However, when the feature space is very large, use of these criteria may grossly underestimate the order. Here, it is suggested that this failure is not mainly attributable to the criterion (e.g., BIC), but rather to the lack of "structure" in standard mixtures-these models trade off data fitness and model complexity only by varying the order. The authors of the present paper propose mixtures with a richer set of tradeoffs. The proposed model allows each component its own informative feature subset, with all other features explained by a common model (shared by all components). Parameter sharing greatly reduces complexity at a given order. Since the space of these parsimonious modeling solutions is vast, this space is searched in an efficient manner, integrating the component and feature selection within the generalized expectation-maximization (GEM) learning for the mixture parameters. The quality of the proposed (unsupervised) solutions is evaluated using both classification error and test set data likelihood. On text data, the proposed multinomial version-learned without labeled examples, without knowing the "true" number of topics, and without feature preprocessing-compares quite favorably with both alternative unsupervised methods and with a supervised naive Bayes classifier. A Gaussian version compares favorably with a recent method introducing "feature saliency" in mixtures.
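The baseline the abstract argues against, plain BIC order selection for a standard mixture, can be sketched on a toy 1-D problem (all names and data are assumptions; the paper's GEM with per-component feature selection is far richer than this):

```python
import numpy as np

def em_loglik(x, k, n_iter=200):
    """Fit a k-component 1-D Gaussian mixture by plain EM; return the final
    log-likelihood. A small additive variance regularizer avoids degeneracy."""
    mu = np.percentile(x, np.linspace(5, 95, k))
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    tot = None
    for _ in range(n_iter):
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        tot = dens.sum(axis=1)
        resp = dens / tot[:, None]
        nk = resp.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 0.05
    return np.log(tot).sum()

def bic(loglik, k, n):
    return -2.0 * loglik + (3 * k - 1) * np.log(n)   # 3k-1 free parameters in 1-D

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 1500), rng.normal(6.0, 1.0, 1500)])
scores = {k: bic(em_loglik(x, k), k, len(x)) for k in (1, 2, 3)}
best = min(scores, key=scores.get)
```

In low dimensions with well-separated components this trade-off behaves well; the paper's point is that in very high-dimensional feature spaces the same criterion badly underestimates the order unless the mixture is given extra structure.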
