Similar documents
20 similar documents retrieved (search time: 125 ms)
1.
Evolutionary image segmentation algorithms have several advantages, such as producing continuous contours, avoiding over-segmentation, and requiring no thresholds. However, most evolutionary image segmentation algorithms suffer from long computation times because the number of encoding parameters is large. In this paper, the design and analysis of an efficient evolutionary image segmentation algorithm, EISA, are presented. EISA first uses a K-means algorithm to split an image into many homogeneous regions, and then uses an intelligent genetic algorithm (IGA) with an effective chromosome encoding to merge the regions automatically, so that the desired segmentation objective can be effectively achieved; IGA is superior to conventional genetic algorithms in solving large parameter optimization problems. The high performance of EISA is illustrated in terms of both evaluation performance and computation time, compared with several existing segmentation methods. It is shown empirically that EISA is robust and efficient on natural images with various characteristics.
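As a rough illustration of the split stage described above, the sketch below runs a plain 1-D K-means (Lloyd's algorithm) on pixel intensities; the IGA-based merge stage is not reproduced, and `kmeans_1d` and its inputs are illustrative, not the paper's implementation.

```python
# Minimal sketch of the split stage: K-means clustering of pixel
# intensities into homogeneous groups (the paper then merges these
# regions with an intelligent genetic algorithm, not shown here).

def kmeans_1d(values, k, iters=20):
    """Cluster scalar intensities into k groups (plain Lloyd's algorithm)."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assignment step: each value goes to its nearest center
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # update step: each center moves to the mean of its members
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

pixels = [10, 12, 11, 200, 205, 198, 90, 95]
labels, centers = kmeans_1d(pixels, k=3)
```

With three well-separated intensity groups, the dark, mid, and bright pixels end up in three distinct clusters.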

2.
Many studies have shown that statistical model-based texture segmentation algorithms yield good results provided that the model parameters and the number of regions are known a priori. In this correspondence, we present an unsupervised texture segmentation method that requires no knowledge about the different texture regions, their parameters, or the number of available texture classes. The proposed algorithm relies on the analysis of local and global second- and higher-order spatial statistics of the original images. The segmentation map is modeled using an augmented-state Markov random field, including an outlier class that enables dynamic creation of new regions during the optimization process. A Bayesian estimate of this map is computed using a deterministic relaxation algorithm. Results on real-world textured images are presented.

3.
A wrapper-based approach to image segmentation and classification.
The traditional processing flow of segmentation followed by classification in computer vision assumes that the segmentation can successfully extract the object of interest from the background image. It is extremely difficult to obtain a reliable segmentation without any prior knowledge about the object being extracted from the scene. This is further complicated by the lack of clearly defined metrics for evaluating the quality of a segmentation or for comparing segmentation algorithms. We propose a method of segmentation that addresses both of these issues by using the object classification subsystem as an integral part of the segmentation. This provides contextual information about the objects to be segmented, and allows us to use the probability of correct classification as a metric for segmentation quality. We view traditional segmentation as a filter operating on the image, independent of the classifier, much like the filter methods for feature selection. We propose a new paradigm for segmentation and classification that follows the wrapper methods of feature selection. Our method wraps the segmentation and classification together and uses the classification accuracy as the metric to determine the best segmentation. By using shape as the classification feature, we are able to develop a segmentation algorithm that relaxes the requirement that the object of interest be homogeneous in some low-level image parameter, such as texture, color, or grayscale. This represents an improvement over other segmentation methods that have used classification information only to modify the segmenter parameters, since those algorithms still require an underlying homogeneity in some parameter space.

Rather than considering our method as yet another segmentation algorithm, we propose that our wrapper method be considered an image segmentation framework within which existing image segmentation algorithms may be executed. We show the performance of our proposed wrapper-based segmenter on real-world, complex images of automotive vehicle occupants for the purpose of recognizing infants on the passenger seat and disabling the vehicle airbag. This is an interesting application for testing the robustness of our approach due to the complexity of the images, and we believe the algorithm will consequently be suitable for many other real-world applications.
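The wrapper idea above can be caricatured in a few lines: candidate segmentations are scored by the downstream classifier, and the one maximizing classifier confidence wins. Everything here (`segment`, `classifier_score`, the size-based confidence) is a hypothetical stand-in, not the paper's shape-based classifier.

```python
# Toy wrapper-based segmentation: the classifier's confidence, not a
# low-level homogeneity criterion, selects the best segmentation.

def segment(image, threshold):
    """Filter-style segmentation: foreground = pixel indices above threshold."""
    return [i for i, v in enumerate(image) if v > threshold]

def classifier_score(obj_pixels, expected_size=3):
    """Hypothetical classifier confidence: peaks when the extracted object
    has the expected size (a crude proxy for shape-based classification)."""
    if not obj_pixels:
        return 0.0
    return 1.0 / (1.0 + abs(len(obj_pixels) - expected_size))

def wrapper_segment(image, candidate_thresholds):
    """Pick the threshold whose segmentation the classifier likes best."""
    return max(candidate_thresholds,
               key=lambda t: classifier_score(segment(image, t)))

image = [5, 6, 120, 130, 125, 7, 4]   # a 3-pixel bright "object"
best_t = wrapper_segment(image, candidate_thresholds=[50, 100, 124])
```

Here thresholds 50 and 100 both extract the full 3-pixel object, while 124 clips it; the classifier score resolves the choice.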

4.
Superpixel segmentation is widely used in image segmentation for its excellent performance, with accuracy and efficiency being the key criteria for evaluating segmentation quality. The simple linear iterative clustering (SLIC) method performs very well on optical images and is also widely applied to polarimetric synthetic aperture radar (SAR) images; however, SLIC's initialization step cannot locate cluster centers accurately, so many iterations are needed to correct the error. The spatially constrained watershed (SCoW) method is a simple and efficient segmentation method based on gradient thresholding, but it cannot be applied directly to polarimetric SAR images. Inspired by SCoW, this paper proposes a preprocessing method for SLIC: gradient information of the polarimetric SAR image is computed with a constant false alarm rate (CFAR) edge detector and used to initialize the segmentation. The proposed method is compared with three other methods on two measured polarimetric SAR images. Experiments show that the method reduces the number of iterations of the overall algorithm and yields segmentations that better reflect image content and adhere to image boundaries.
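A minimal sketch of the kind of gradient-aware seed initialization discussed above, assuming a plain intensity-difference gradient in place of the CFAR edge strength: seeds are placed on a regular grid and then moved to the lowest-gradient pixel in their 3x3 neighborhood, so no seed starts on an edge.

```python
# SLIC-style seed initialization with gradient-based perturbation.
# The gradient here is a crude forward difference standing in for the
# CFAR edge strength used in the paper; names are illustrative.

def gradient(img, y, x):
    """Crude gradient magnitude via forward differences."""
    h, w = len(img), len(img[0])
    gy = img[min(y + 1, h - 1)][x] - img[y][x]
    gx = img[y][min(x + 1, w - 1)] - img[y][x]
    return abs(gx) + abs(gy)

def init_seeds(img, step):
    """Grid seeds, each perturbed to the lowest-gradient 3x3 neighbor."""
    h, w = len(img), len(img[0])
    seeds = []
    for y in range(step // 2, h, step):
        for x in range(step // 2, w, step):
            best = min(
                ((yy, xx)
                 for yy in range(max(0, y - 1), min(h, y + 2))
                 for xx in range(max(0, x - 1), min(w, x + 2))),
                key=lambda p: gradient(img, p[0], p[1]))
            seeds.append(best)
    return seeds

# 4x6 toy image with a vertical edge between columns 2 and 3
img = [[0, 0, 0, 9, 9, 9] for _ in range(4)]
seeds = init_seeds(img, step=3)
```

Both seeds land on zero-gradient pixels, away from the vertical edge.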

5.
The two-dimensional (2-D) fractional Brownian motion (fBm) model is useful in describing natural scenes and textures. Most fractal estimation algorithms for 2-D isotropic fBm images are simple extensions of the one-dimensional (1-D) fBm estimation method. This method does not perform well when the image size is small (say, 32×32). We propose a new algorithm that estimates the fractal parameter from the decay of the variance of the wavelet coefficients across scales. Our method places no restriction on the wavelets. Also, it provides a robust parameter estimation for small noisy fractal images. For image denoising, a Wiener filter is constructed by our algorithm using the estimated parameters and is then applied to the noisy wavelet coefficients at each scale. We show that the averaged power spectrum of the denoised image is isotropic and is a nearly 1/f process. The performance of our algorithm is shown by numerical simulation for both the fractal parameter and the image estimation. Applications to coastline detection and texture segmentation in a noisy environment are also demonstrated.
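The estimation idea can be sketched in 1-D: compute Haar detail coefficients at successive scales and fit the decay of log2(variance) across scales by least squares (for fBm, the detail variance decays as roughly 2^{-(2H+1)j} with scale j). The Haar choice and function names are illustrative; the paper places no restriction on the wavelet.

```python
# Sketch of fractal-parameter estimation from the decay of wavelet
# detail variances across scales (1-D Haar stand-in).

from math import log2

def haar_step(signal):
    """One Haar analysis step: (approximation, detail) at the next scale."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2
              for i in range(len(signal) // 2)]
    return approx, detail

def detail_variances(signal, levels):
    """Variance of the detail coefficients at each of the first `levels` scales."""
    variances, current = [], list(signal)
    for _ in range(levels):
        current, detail = haar_step(current)
        mean = sum(detail) / len(detail)
        variances.append(sum((d - mean) ** 2 for d in detail) / len(detail))
    return variances

def fit_decay_slope(variances):
    """Least-squares slope of log2(variance) against scale index."""
    xs = list(range(len(variances)))
    ys = [log2(v) for v in variances]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
            sum((x - xbar) ** 2 for x in xs))

variances = detail_variances([1, 3, 2, 2, 5, 1, 4, 0], levels=2)
slope = fit_decay_slope([8.0, 4.0, 2.0, 1.0])  # exactly halving variances
```

A variance sequence that halves at every scale gives a slope of exactly -1; the estimated fractal parameter is then read off from that slope.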

6.
This paper proposes a new wavelet-based synthetic aperture radar (SAR) image despeckling algorithm using the sequential Monte Carlo method, within a model-based Bayesian approach. Two methods for SAR image despeckling are presented. The first, called WGGPF, models the prior with a generalized Gaussian (GG) probability density function (pdf); the second, called WGMPF, models the prior with a generalized Gaussian Markov random field (GGMRF). The likelihood pdf is modeled as Gaussian. The GGMRF model is used because it enables texture parameter estimation; the GG prior is used when texture parameters are not needed. A particle filter draws particles from the prior for different shape parameters of the GG pdf. When the GGMRF prior is used, particles are drawn from the prior to estimate noise-free wavelet coefficients, and for those coefficients the texture parameter is varied over a predefined set of GGMRF shape parameters to find the best textural fit. The particles with the highest weights represent the final noise-free estimate with the corresponding textural parameters. The despeckling algorithms are compared with state-of-the-art methods on synthetic and real SAR data. The experimental results show that the proposed algorithms remove noise efficiently and are comparable with state-of-the-art methods on objective measurements. The proposed WGMPF preserves the textures of real, high-resolution SAR images well.

7.
A statistical model is presented that represents the distributions of major tissue classes in single-channel magnetic resonance (MR) cerebral images. Using the model, cerebral images are segmented into gray matter, white matter, and cerebrospinal fluid (CSF). The model accounts for random noise, magnetic field inhomogeneities, and biological variations of the tissues. Intensity measurements are modeled by a finite Gaussian mixture. The smoothness and piecewise-contiguous nature of the tissue regions are modeled by a three-dimensional (3-D) Markov random field (MRF). A segmentation algorithm based on the statistical model approximately finds the maximum a posteriori (MAP) estimate of the segmentation and estimates the model parameters from the image data. The proposed scheme is based on the iterated conditional modes (ICM) algorithm, in which measurement model parameters are estimated using local information at each site, and the prior model parameters are estimated using the segmentation after each cycle of iterations. Application of the algorithm to a sample of clinical MR brain scans, comparisons with other statistical methods, and a validation study with a phantom are presented. The algorithm constitutes a significant step toward a complete data-driven unsupervised approach to segmentation of MR images in the presence of random noise and intensity inhomogeneities.
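A toy 1-D version of the ICM update described above, assuming fixed class means and a Potts smoothness prior; the paper additionally re-estimates model parameters between ICM cycles and works on 3-D MRFs.

```python
# Minimal 1-D ICM sketch: each site's label is updated to the class that
# minimizes a Gaussian data term plus a Potts penalty for disagreeing
# with its neighbors. Class means, sigma, and beta are fixed here.

def icm(intensities, means, sigma=10.0, beta=2.0, sweeps=5):
    # ML initialization: nearest class mean
    labels = [min(range(len(means)), key=lambda k: abs(v - means[k]))
              for v in intensities]
    for _ in range(sweeps):
        for i, v in enumerate(intensities):
            def energy(k):
                data = ((v - means[k]) ** 2) / (2 * sigma ** 2)
                neighbors = [labels[j] for j in (i - 1, i + 1)
                             if 0 <= j < len(labels)]
                prior = beta * sum(1 for n in neighbors if n != k)
                return data + prior   # negative log-posterior up to constants
            labels[i] = min(range(len(means)), key=energy)
    return labels

# two tissue classes (means 10 and 95) with ambiguous outliers at sites 2 and 7
noisy = [10, 12, 55, 11, 95, 98, 97, 50, 96]
labels = icm(noisy, means=[10.0, 95.0])
```

The maximum-likelihood initialization mislabels both ambiguous sites; the smoothness prior pulls each of them toward its surrounding region.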

8.
李政文, 王卫卫, 水鹏朗. 《电子学报》(Acta Electronica Sinica), 2006, 34(12): 2242-2245
The Mumford-Shah two-phase piecewise-constant model is an effective image segmentation model, but when applied to noisy images its level-set solution is sensitive to both the initial solution and the length parameter. This paper presents a two-stage segmentation method: a coarse segmentation is first obtained with a traditional simple segmentation method and then used as the initial solution of the variational model, so that the initial solution is selected automatically. An effective adaptive model for estimating the length parameter is also given, which determines the parameter from the noise variance of the image. Combining the two-stage method with adaptive parameter estimation greatly reduces the algorithm's sensitivity to parameters and enables correct, fast segmentation. Experimental results on computer-generated and real images verify the effectiveness of the algorithm.

9.
Quantitative comparison of the performance of SAR segmentation algorithms
Methods to evaluate the performance of segmentation algorithms for synthetic aperture radar (SAR) images are developed, based on known properties of coherent speckle and a scene model in which areas of constant backscatter coefficient are separated by abrupt edges. Local and global measures of segmentation homogeneity are derived and applied to the outputs of two segmentation algorithms developed for SAR data, one based on iterative edge detection and segment growing, the other based on global maximum a posteriori (MAP) estimation using simulated annealing. The quantitative statistically based measures appear consistent with visual impressions of the relative quality of the segmentations produced by the two algorithms. On simulated data meeting algorithm assumptions, both algorithms performed well, but MAP methods appeared visually and measurably better. On real data, MAP estimation was markedly the better method and retained performance comparable to that on simulated data, while the performance of the other algorithm deteriorated sharply. Improvements in the performance measures will require a more realistic scene model and techniques to recognize oversegmentation.

10.
To address the slow object segmentation and unclear saliency boundaries encountered in complex images, this paper proposes an image segmentation algorithm that combines improved frequency-tuned (FT) saliency detection with Grabcut. The algorithm first obtains the highly salient regions of the image with an improved FT saliency detection method, and preprocesses the saliency map with the SLIC (simple linear iterative clustering) algorithm to obtain a superpixel map, which effectively improves boundary segmentation. A Gaussian mixture model (GMM) is then built with a Grabcut algorithm improved on the basis of the graph-theoretic GraphCut algorithm. To improve efficiency, superpixels obtained by clustering replace the original pixels and the GMM parameters are iterated repeatedly; finally, the max-flow/min-cut algorithm yields the optimal object segmentation. Experimental results show that the proposed algorithm segments salient objects more accurately and efficiently, works well even on high-resolution images, and improves segmentation accuracy by about 10% over other algorithms while maintaining high efficiency.
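The frequency-tuned saliency step can be sketched on a grayscale toy image: saliency is the distance between the image's mean intensity and a slightly blurred version of each pixel. The original FT method works in Lab color with a Gaussian blur; a box blur on raw intensities stands in here, and the Grabcut stage is not reproduced.

```python
# Grayscale sketch of frequency-tuned (FT) saliency:
#   saliency(x, y) = |mean(image) - blur(image)(x, y)|

def box_blur(img, y, x):
    """3x3 box blur at (y, x), clipped at the image border."""
    h, w = len(img), len(img[0])
    vals = [img[yy][xx]
            for yy in range(max(0, y - 1), min(h, y + 2))
            for xx in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) / len(vals)

def ft_saliency(img):
    h, w = len(img), len(img[0])
    mean = sum(sum(row) for row in img) / (h * w)
    return [[abs(mean - box_blur(img, y, x)) for x in range(w)]
            for y in range(h)]

# bright object on a dark background: the object should be most salient
img = [[0, 0, 0, 0, 0],
       [0, 0, 9, 0, 0],
       [0, 0, 9, 0, 0],
       [0, 0, 0, 0, 0]]
sal = ft_saliency(img)
```

The object pixels score the highest saliency, while the background corners score low; thresholding this map would seed the foreground model.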

11.
For the problem of segmenting pulmonary nodules in multi-slice CT images, this paper proposes an interactive manual nodule segmentation method that replaces the Laplacian operator in the traditional Live-Wire algorithm with the Canny operator, eliminating the false edges and edge discontinuities of Laplacian edge detection. Experiments show that the method allows physicians to quickly segment nodule regions of interest, providing a basis for quantitative analysis and 3-D reconstruction; judging from the segmentation and 3-D visualization results, the segmentations produced by this method are quite satisfactory.

12.
The segmentation of the human airway tree from volumetric computed tomography (CT) images is an important step for many clinical applications and for physiological studies. Previously proposed algorithms suffer from one or more problems: leaking into the surrounding lung parenchyma, the need for the user to manually adjust parameters, and excessive runtime. Low-dose CT scans are increasingly utilized in lung screening studies, but segmenting them with traditional airway segmentation algorithms often yields unsatisfying results. In this paper, a new airway segmentation method based on fuzzy connectivity is presented. Small adaptive regions of interest follow the airway branches as they are segmented. This has several advantages: leaks can be detected early and avoided, the segmentation algorithm can automatically adapt to changing image parameters, and the computing time is kept moderate. The new method is robust in the sense that it works on various types of scans (low-dose and regular-dose, normal subjects and diseased subjects) without the user manually adjusting any parameters. Comparison with a commonly used region-growing segmentation algorithm shows that the newly proposed method retrieves a significantly higher count of airway branches. A method that conducts accurate cross-sectional measurements on airways is presented as an additional processing step; measurements are conducted in the original gray-level volume. Validation on a phantom shows that subvoxel accuracy is achieved for all airway sizes and orientations.
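For contrast with the fuzzy-connectivity approach, a minimal version of the region-growing baseline it is compared against can be sketched as a BFS from a seed voxel into neighbors whose intensity is close to the seed's; the paper's fuzzy affinity, adaptive ROIs, and leak detection are not reproduced here.

```python
# Toy threshold-based region growing on a 2-D slice: grow from a seed
# into 4-neighbors whose intensity is within `tol` of the seed's.

from collections import deque

def region_grow(img, seed, tol):
    h, w = len(img), len(img[0])
    sy, sx = seed
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - img[sy][sx]) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# dark airway lumen (values ~0) inside a bright wall (values 9)
img = [[9, 9, 9, 9],
       [9, 0, 1, 9],
       [9, 0, 9, 9],
       [9, 9, 9, 9]]
airway = region_grow(img, seed=(1, 1), tol=2)
```

The grow stops at the bright wall; the fixed tolerance is exactly the parameter the paper's adaptive scheme removes.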

13.
EM image segmentation algorithm based on an inhomogeneous hidden MRF model
This paper introduces a Bayesian image segmentation algorithm that accounts for the label scale variability of images. An inhomogeneous hidden Markov random field is adopted to model the label scale variability as prior probabilities, and an EM algorithm is developed to estimate the parameters of the prior and likelihood probabilities. The segmentation is obtained using a MAP estimator. Different images are tested to verify the algorithm, and comparisons with other segmentation algorithms are carried out. The segmentation results show that the proposed algorithm outperforms the others.

14.
15.
A parameter estimation method for the attributed scattering center model based on aspect-dependence characterization
陶勇, 胡卫东. 《信号处理》(Signal Processing), 2010, 26(5): 736-740
The attributed scattering center model describes the scattering behavior of a target in the high-frequency region with a set of physically meaningful feature parameters; the frequency- and aspect-dependent terms among these parameters provide important feature information for target recognition. However, the complexity of the model means the parameters can only be extracted in the image domain, where a key step is image segmentation. Because attributed scattering centers manifest in complex ways in the image domain, traditional segmentation algorithms often fail to capture the intrinsic nature of the scattering within the partitioned regions, leading to large parameter estimation errors. To address this shortcoming, a parameter estimation method based on aspect-dependence characterization is proposed: the aspect function of each scattering point is used to determine its scattering type, which guides the partitioning of scattering center regions and improves estimation accuracy. Simulation experiments verify the effectiveness of the method.

16.
Finite normal mixture (FNM) model-based image segmentation techniques adopt the following detection-estimation-classification paradigm: (1) detect the number of image regions using theoretical information criteria; (2) estimate model parameters using expectation-maximization (EM)/classification-maximization (CM) algorithms; and (3) classify pixels into regions using various classifiers. This paper presents a theoretical framework to evaluate the performance of this class of image segmentation techniques. For detection performance, probabilities of over-detection and under-detection of the number of image regions are defined, and the associated formulae in terms of model parameters and image quality are derived. For estimation performance, both EM and CM algorithms are shown to produce asymptotically unbiased ML estimates of the model parameters in the no-overlap case, and Cramér-Rao bounds on the variances of these estimates are derived. For classification performance, the misclassification probability of the Bayesian classifier is defined, and a simple formula based on parameter estimates and classified data is derived to evaluate segmentation errors. This evaluation method provides both theoretically approachable accuracy limits of the techniques and practically achievable performance on given images. Theoretical and experimental results are in good agreement and indicate that, for images of moderate quality, the detection operation is robust, the parameter estimates are accurate, and the segmentation errors are small.
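The estimation step (2) can be sketched with a compact 1-D EM for a two-component normal mixture, assuming fixed, equal variances for brevity; the model-order detection step via information criteria is not shown.

```python
# Compact 1-D EM for a two-component finite normal mixture
# (fixed, equal variances; only means and mixing weights are updated).

from math import exp, pi, sqrt

def em_two_gaussians(data, mu, sigma=1.0, weights=(0.5, 0.5), iters=50):
    mu, w = list(mu), list(weights)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [w[k] * exp(-(x - mu[k]) ** 2 / (2 * sigma ** 2)) /
                    (sqrt(2 * pi) * sigma) for k in range(2)]
            s = dens[0] + dens[1]
            resp.append([d / s for d in dens])
        # M-step: update means and mixing weights
        for k in range(2):
            total = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / total
            w[k] = total / len(data)
    return mu, w

data = [0.1, -0.2, 0.0, 5.1, 4.9, 5.0, 5.2]   # two well-separated clusters
mu, w = em_two_gaussians(data, mu=[0.5, 4.0])
```

With well-separated clusters the estimates converge close to the cluster sample means (about -0.03 and 5.05) and to mixing weights of 3/7 and 4/7, consistent with the asymptotic-unbiasedness result in the no-overlap case.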

17.
Image segmentation plays an important role in the quantitative and qualitative analysis of medical ultrasound images, directly affecting subsequent analysis and processing. For medical ultrasound images with complex characteristics, traditional segmentation algorithms struggle to obtain satisfactory results. This paper proposes a three-dimensional segmentation algorithm based on the one-dimensional Otsu method, which is both fast and accurate. Applied to B-mode ultrasound images, the method achieves automatic segmentation of image sequences and yields better segmentation than the ordinary one-dimensional Otsu method.
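The 1-D Otsu method that the proposed 3-D algorithm builds on can be sketched directly: choose the gray level that maximizes the between-class variance of the background/foreground split.

```python
# Classic 1-D Otsu thresholding from a gray-level histogram.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0      # background pixel count (levels <= t)
    sum_b = 0    # background intensity sum
    for t in range(levels):
        w_b += hist[t]
        sum_b += t * hist[t]
        if w_b == 0 or w_b == total:
            continue   # one class empty: between-class variance undefined
        m_b = sum_b / w_b
        m_f = (total_sum - sum_b) / (total - w_b)
        var_between = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

t = otsu_threshold([10, 10, 11, 12, 200, 201, 199, 200])
```

For this bimodal sample the threshold lands at the upper edge of the dark mode, cleanly separating the two intensity populations (pixels above `t` are foreground).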

18.
Effective Level Set Image Segmentation With a Kernel-Induced Data Term
This study investigates level set multiphase image segmentation by kernel mapping and piecewise constant modeling of the image data. A kernel function implicitly maps the original data into data of a higher dimension so that the piecewise constant model becomes applicable, providing a flexible and effective alternative to complex modeling of the image data. The method uses an active curve objective functional with two terms: an original term that evaluates the deviation of the mapped image data within each segmentation region from the piecewise constant model, and a classic length regularization term for smooth region boundaries. Functional minimization is carried out by iterating two consecutive steps: 1) minimization with respect to the segmentation by curve evolution via Euler-Lagrange descent equations, and 2) minimization with respect to the region parameters via fixed-point iterations. With a common kernel function, this second step amounts to a mean-shift parameter update. We verified the effectiveness of the method by a quantitative and comparative performance evaluation over a large number of experiments on synthetic images, and on a variety of real images, including medical, satellite, and natural images, as well as motion maps.

19.
This work addresses Bayesian unsupervised satellite image segmentation using contextual methods. It is shown, via a simulation study, that the spatial or spectral context contribution is sensitive to image parameters such as homogeneity, means, variances, and the spatial or spectral correlations of the noise; from this, one may choose the best context contribution according to the estimated values of these parameters. Parameter estimation is done by SEM, a mixture-density estimator that is a stochastic variant of the EM (expectation-maximization) algorithm. Another simulation study shows good robustness of the SEM algorithm with respect to different image parameters. Thus the behavior of the contextual methods changes little when the SEM-based unsupervised approaches are considered, and the conclusions of the supervised simulation study remain valid. An adaptive unsupervised method using more relevant contextual features is proposed. Different SEM-based unsupervised contextual segmentation methods, applied to two real SPOT images, give consistently better results than a classical histogram-based method.

20.
Object-based segmentation is an essential first step for image processing applications. Satellite image segmentation techniques have been developed recently, but not enough to preserve the significant information contained in the small regions of an image. The proposed method partitions the image into homogeneous regions using a fuzzy hit-or-miss operator with an inherent spatial transformation, which enables the preservation of small regions. In the algorithm proposed here, an iterative segmentation technique is formulated as sequential processes; at each iteration, hypothesis testing evaluates the quality of the segmented regions with a homogeneity index. The segmentation algorithm is unsupervised and employs few parameters, most of which can be calculated from the input data. A comparative study indicates that the new iterative segmentation algorithm provides acceptable results on the tested synthetic and satellite images.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号