Similar Literature
20 similar documents retrieved.
1.
Image segmentation by maximum between-class cross-entropy with oblique partition of the two-dimensional histogram   (Cited: 1; self-citations: 1; other citations: 0)
张新明, 刘斌, 李双, 张慧云. 《计算机应用》, 2010, 30(9): 2453-2457
Using the principle of oblique partitioning of the two-dimensional histogram, a fast image segmentation method based on maximum between-class cross-entropy is proposed. First, a threshold-selection formula for maximum between-class cross-entropy is constructed from the oblique partition of the two-dimensional histogram; then a fast recursive algorithm for this threshold selection is derived; finally, a defined array operation is combined with the fast algorithm to search for the optimal threshold vector, making the whole algorithm more concise and efficient. Experimental results show that, compared with current oblique-partition two-dimensional histogram thresholding methods, this algorithm is more efficient and more general.
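The method operates on a two-dimensional histogram built from each pixel's gray level and its local mean. Below is a minimal, hedged sketch (not the authors' code) of constructing such a histogram with NumPy/SciPy; the oblique partition, the cross-entropy criterion and the recursive speed-up described above are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_d_histogram(image, levels=256):
    """Build the 2D histogram of (gray level, 3x3 neighborhood mean).

    A simplified illustration of the data structure used by 2D-histogram
    thresholding methods; the oblique-partition cross-entropy criterion
    of the paper is not implemented here.
    """
    gray = image.astype(np.float64)
    local_mean = uniform_filter(gray, size=3)          # 3x3 neighborhood mean
    g = np.clip(gray, 0, levels - 1).astype(int).ravel()
    m = np.clip(local_mean, 0, levels - 1).astype(int).ravel()
    hist = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(hist, (g, m), 1)                          # accumulate joint counts
    return hist / hist.sum()                            # joint probability

# An oblique partition splits this histogram along lines g + m = const rather
# than by a rectangular (s, t) cut, which better separates object/background
# cells from edge and noise cells.
```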

2.
We propose a new approach to structuring image segmentation methods. This approach is based on a functional model composed of five elementary blocks called in an iterative process. Different segmentation methods can be decomposed with such a scheme, leading to elementary building blocks with unified functionality and interfaces. We present the decompositions of three segmentation methods and the implementation results, which illustrate the potential of the proposed model. This generic model is a common framework that makes segmentation techniques more readable and offers new perspectives for the development, comparison and implementation of segmentation methods.

3.
In this paper, a three-level thresholding method for image segmentation is presented, based on probability partition, fuzzy partition and entropy theory. A new fuzzy entropy is defined through probability analysis. The image is divided into three parts, namely the dark, gray and white parts, whose membership functions over the fuzzy regions are the Z-function, Π-function and S-function, respectively; the width and placement of each fuzzy region are determined by maximizing the fuzzy entropy. The optimal combination of all the fuzzy parameters is found by a genetic algorithm with an appropriate coding scheme that avoids useless chromosomes. Experimental results show that the proposed method performs well.
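The dark, gray and white regions are modeled with Z-, Π- and S-shaped membership functions whose parameters are tuned by maximizing a fuzzy entropy. The sketch below shows one common parameterization of these functions and a typical fuzzy-entropy term; the exact entropy definition and the genetic-algorithm search of the paper are not reproduced, and `fuzzy_entropy` is a hypothetical helper.

```python
import numpy as np

def s_func(x, a, c):
    """Standard S-shaped membership; the midpoint b is taken as (a + c) / 2."""
    b = (a + c) / 2.0
    y = np.zeros_like(x, dtype=float)
    left = (x > a) & (x <= b)
    right = (x > b) & (x <= c)
    y[left] = 2.0 * ((x[left] - a) / (c - a)) ** 2
    y[right] = 1.0 - 2.0 * ((x[right] - c) / (c - a)) ** 2
    y[x > c] = 1.0
    return y

def z_func(x, a, c):
    """Z-shaped membership: complement of the S-function."""
    return 1.0 - s_func(x, a, c)

def pi_func(x, a1, c1, a2, c2):
    """Pi-shaped membership: rises like an S on [a1, c1], falls like a Z on [a2, c2]."""
    return np.minimum(s_func(x, a1, c1), z_func(x, a2, c2))

def fuzzy_entropy(p, membership):
    """One common fuzzy-entropy term for a region (hypothetical helper).

    p is the gray-level probability vector; membership is the region's
    membership over gray levels. The paper's exact definition may differ.
    """
    q = np.sum(p * membership)            # probability mass of the fuzzy region
    h = p * membership / max(q, 1e-12)
    h = h[h > 0]
    return -np.sum(h * np.log(h))
```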

4.
Over the years, data clustering algorithms have been used for image segmentation. Because real-life datasets contain uncertainty, several uncertainty-based clustering algorithms have been developed; the c-means algorithms form one such family. Starting with fuzzy c-means (FCM), a subfamily comprises rough c-means (RCM), intuitionistic fuzzy c-means (IFCM) and their hybrids such as rough fuzzy c-means (RFCM) and rough intuitionistic fuzzy c-means (RIFCM). In this basic subfamily, the Euclidean distance was used to measure similarity; the algorithms obtained by replacing the Euclidean distance with kernel-based similarities produced better results, and were especially useful for clustering data points that are linearly inseparable in the original input space. Krishnapuram and Keller observed that the membership constraints in rudimentary uncertainty-based clustering techniques such as fuzzy c-means give them a probabilistic nature and therefore proposed a possibilistic version; the other members of the basic subfamily have since been extended to incorporate this notion. Meanwhile, image data is growing vigorously into big data, and since image segmentation is one of the most time-consuming processes, industry needs algorithms that solve it rapidly and accurately. In this paper, we combine the kernel and possibilistic notions in the distributed environment provided by Apache™ Hadoop. We integrate this combination with Hadoop's MapReduce paradigm and put forth three novel algorithms: Hadoop-based possibilistic kernelized rough c-means (HPKRCM), Hadoop-based possibilistic kernelized rough fuzzy c-means (HPKRFCM) and Hadoop-based possibilistic kernelized rough intuitionistic fuzzy c-means (HPKRIFCM), and study their efficiency in image segmentation. We compare their running times and analyze their efficiency against the corresponding algorithms from the other three subfamilies on four types of images, three kernels and six efficiency measures: the Davies-Bouldin index (DB), Dunn index (D), alpha index (α), rho index (ρ), alpha-star index (α*) and gamma index (γ). Our analysis shows that the hyper-tangent kernel with Hadoop-based possibilistic kernelized rough intuitionistic fuzzy c-means is the best of these clustering algorithms for image segmentation, and the times taken by the proposed algorithms to render segmented images are drastically lower than those of the other algorithms. The algorithms are implemented in Java, with the proposed algorithms running on the Hadoop framework installed on CentOS; statistical plots are produced with matplotlib (a Python library).
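Two ingredients recur in these algorithms: a kernel-induced distance (the hyper-tangent kernel performed best here) and a possibilistic typicality in place of probabilistic memberships. A minimal, hedged sketch of these two pieces is given below; the rough/intuitionistic machinery and the Hadoop MapReduce distribution are omitted, and the exact kernel and update formulas used by the authors may differ from this illustration.

```python
import numpy as np

def hyper_tangent_kernel(x, v, sigma=1.0):
    """One common form of the hyper-tangent kernel: K(x, v) = 1 - tanh(-||x - v||^2 / sigma^2)."""
    d2 = np.sum((x - v) ** 2)
    return 1.0 - np.tanh(-d2 / sigma ** 2)

def kernel_distance_sq(x, v, sigma=1.0):
    """Kernel-induced squared distance; since K(x, x) = 1, this equals 2 * (1 - K(x, v))."""
    return 2.0 * (1.0 - hyper_tangent_kernel(x, v, sigma))

def possibilistic_typicality(d2, eta, m=2.0):
    """Possibilistic typicality (Krishnapuram-Keller style), independent of other clusters."""
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))

# Example: typicality of a gray value with respect to a cluster center (values illustrative)
x, v = np.array([120.0]), np.array([100.0])
t = possibilistic_typicality(kernel_distance_sq(x, v, sigma=50.0), eta=0.5)
```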

5.
Object contour extraction based on the active contour (Snake) model is an important approach to image segmentation. To overcome the traditional Snake model's inability to converge into concave regions and its inaccurate convergence, an image segmentation algorithm combining particle swarm optimization with an improved Snake model is proposed. The improved Snake model adds a centripetal energy to the traditional model; this energy drives the initial curve toward the concave parts of the object. Because particle swarm optimization can find the global optimum, the curve converges more accurately to the object boundary. Experiments show that the method achieves good segmentation results.

6.
周晚辉, 刘文萍. 《计算机工程》, 2010, 36(24): 211-213
Fuzzy C-means (FCM) is a common method for image segmentation, but it is very sensitive to noise. A new algorithm is therefore proposed that introduces Type-2 fuzzy theory on top of FCM to improve segmentation accuracy and robustness. The algorithm applies a piecewise-linear stretching to the membership of every sample in FCM, takes the stretched result as a new membership function, and segments the image with it. Experimental results show that the algorithm is more accurate and has good noise resistance.
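The abstract only states that each sample's FCM membership is stretched piecewise-linearly and the stretched values serve as the new membership function. The sketch below shows a generic piecewise-linear stretch of a membership matrix; the breakpoint, slopes and function names are hypothetical, since the paper's exact mapping is not given.

```python
import numpy as np

def piecewise_linear_stretch(u, a=0.5, b=0.8):
    """Hypothetical piecewise-linear stretch of FCM memberships u in [0, 1].

    Memberships are mapped through the breakpoint (a, b): values below `a` are
    rescaled onto [0, b] and values above `a` onto [b, 1]. The actual breakpoints
    and slopes used in the paper are not specified in the abstract.
    """
    u = np.asarray(u, dtype=float)
    out = np.where(u <= a, u * (b / a), b + (u - a) * ((1.0 - b) / (1.0 - a)))
    return np.clip(out, 0.0, 1.0)

# Usage after a standard FCM run (membership matrix U of shape [clusters, pixels]):
#   U_new = piecewise_linear_stretch(U)
#   labels = U_new.argmax(axis=0)        # segment with the stretched memberships
```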

7.
曹建农. 《计算机应用》, 2011, 31(12): 3373-3377
For the threshold-selection problem in image segmentation, a dynamic parameter is used to split the histogram of the original image into two parts, from which two new related histograms are constructed; each corresponds to a virtual image of the same size as the original, in which equal-probability pixels are pixels similar to those of the original image. The cross-entropy between the probability distributions of the two constructed histograms is computed over the parameter, and the peak-valley relationships among the maxima of the resulting curve are analyzed to achieve optimal multi-threshold segmentation of the image. Experimental results demonstrate the effectiveness of the method.

8.
Evaluating segmentation quality is an important part of research on image segmentation techniques and algorithms, with important applications in image analysis and computer vision. Exploiting the unique advantage of type-2 fuzzy sets in describing imprecision, a type-2 fuzzy set representation of segmentation evaluation indices is proposed; two fuzziness measures of type-2 fuzzy sets are introduced as criteria for judging segmentation quality, and a segmentation quality evaluation model is constructed. Simulation experiments verify the effectiveness and practicality of the model.

9.
To address intensity inhomogeneity and sensitivity to the initial contour in image segmentation, an image segmentation model based on multi-scale local features is proposed. Unlike the traditional local neighborhood defined on a square region, the model uses circular regions to capture more local information. Considering that gray levels vary to different degrees across local regions, multi-scale local gray-level information is obtained by combining a multi-scale structure with mean filters. By transforming the intensity inhomogeneity model, an image approximating the true information is obtained and fused into the local Gaussian distribution fitting (LGDF) model, yielding an energy functional based on multi-scale local features. Theoretical analysis and experimental results show that, because the multi-scale structure weakens the influence of intensity inhomogeneity, the model segments inhomogeneous images quickly and accurately and is robust to the initial contour.

10.
For multiphase image segmentation on implicit surfaces, the Potts model used for planar image segmentation is generalized using the implicit representation of surfaces and the intrinsic gradient on implicit surfaces. Generalized forms of the Potts model are first given for implicit closed surfaces and implicit open surfaces. To address the low computational efficiency of traditional gradient descent, a Split Bregman algorithm and a dual method are designed for the Potts model on surfaces, and an improved fast algorithm is proposed on the basis of the dual method. Several numerical experiments show that the proposed Potts model on surfaces effectively segments piecewise-constant images on closed and open surfaces, and that the new improved dual method outperforms the other two methods in computational efficiency.

11.
One-dimensional thresholding methods can hardly achieve ideal segmentation on noise-corrupted images. Two-dimensional histogram methods combine gray-level and spatial information to improve accuracy, but their computational complexity increases sharply, and traditional 2D-histogram methods do not handle noise and edge pixels accurately enough. The construction of the 2D histogram is improved: an adaptive filter smooths noise while preserving image edges and high-frequency detail more effectively. An improved Hough transform performs a graphical statistical analysis of the 2D histogram and searches for the partition line in the histogram plane, dividing the 2D histogram into different segmentation regions. Experimental results show that the improved algorithm has better noise resistance on noise-corrupted images and segments more accurately.

12.
A novel image segmentation method based on a constraint satisfaction neural network (CSNN) is presented. The new method uses CSNN-based relaxation but with a modified scanning scheme of the image: pixels are visited at more distant intervals and with wider neighborhoods in the first level of the algorithm, and the intervals and neighborhoods are reduced in the following stages. This scheme contributes to the rapid and consistent formation of more regular segments. A cluster validity index to determine the number of segments is also added, completing the proposed method into a fully automatic unsupervised segmentation scheme. The results, compared quantitatively by means of a novel segmentation evaluation criterion, are promising.
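The key idea above is a coarse-to-fine scan: pixels are first visited on a sparse grid with large neighborhoods, then on progressively denser grids with smaller neighborhoods. A minimal sketch of such a schedule is shown below; the CSNN relaxation itself is not reproduced, and the specific strides and radii are illustrative assumptions.

```python
import numpy as np

def coarse_to_fine_schedule(height, width, strides=(8, 4, 2, 1)):
    """Yield (level, pixel coordinates, neighborhood radius) for a coarse-to-fine scan.

    Early levels visit pixels at distant intervals with wide neighborhoods;
    later levels shrink both. The strides and radii here are illustrative only.
    """
    for level, s in enumerate(strides):
        ys, xs = np.mgrid[0:height:s, 0:width:s]
        coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
        radius = s                       # neighborhood shrinks with the sampling interval
        yield level, coords, radius

# Usage: relax labels at the visited pixels, then move to the next (denser) level.
for level, coords, radius in coarse_to_fine_schedule(256, 256):
    pass  # run one CSNN relaxation pass over `coords` with the given radius
```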

13.
Fast image segmentation based on multi-resolution analysis and wavelets   (Cited: 8; self-citations: 0; other citations: 8)
An efficient algorithm for image segmentation based on a multi-resolution application of a wavelet transform and feature distribution is presented. The original feature space is transformed to a lower resolution with a wavelet transform to allow fast computation of the optimum threshold value in the feature space. Based on this lower-resolution version of the given feature space, a single feature value or multiple feature values are determined as the optimum threshold values. The optimum feature values, which lie in the lower resolution, are then projected back onto the original feature space; at this step a refinement procedure may be added to detect the optimum threshold value. Experimental results indicate the feasibility and reliability of the proposed algorithm for fast image segmentation.
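A minimal sketch of this coarse-to-fine threshold search is shown below. Otsu's between-class variance stands in for the paper's threshold criterion, and pairwise bin averaging stands in for the wavelet low-pass filter; the number of levels and the refinement window are assumed values, and a histogram length divisible by 2^levels (e.g. 256) is assumed.

```python
import numpy as np

def otsu_threshold(hist):
    """Standard Otsu: maximize between-class variance over a gray-level histogram."""
    p = hist / hist.sum()
    bins = np.arange(len(p))
    w0 = np.cumsum(p)
    m0 = np.cumsum(p * bins)
    mt = m0[-1]
    valid = (w0 > 0) & (w0 < 1)                 # guard against empty classes
    sigma_b = np.zeros_like(p)
    sigma_b[valid] = (mt * w0[valid] - m0[valid]) ** 2 / (w0[valid] * (1 - w0[valid]))
    return int(np.argmax(sigma_b))

def multires_threshold(hist, levels=3, refine=4):
    """Find a threshold on a coarsened histogram, project it back, then refine locally."""
    h = hist.astype(float)
    for _ in range(levels):
        h = h[0::2] + h[1::2]                   # halve the resolution of the feature space
    t = otsu_threshold(h) * (2 ** levels)       # project the coarse threshold to full resolution
    lo, hi = max(t - refine, 0), min(t + refine, len(hist) - 1)
    return lo + otsu_threshold(hist[lo:hi + 1]) # local refinement around the projection
```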

14.
Based on the tree-structured Markov random field (TS-MRF), a fuzzy multi-level logistic model (fuzzy MLL) is proposed, together with a new image segmentation algorithm, the fuzzy TS-MRF algorithm. Compared with the traditional MRF segmentation algorithm and the TS-MRF algorithm, the method improves segmentation accuracy considerably with only a small increase in computation time. More importantly, it provides a new way to describe MRF-based prior information more finely.

15.
Noise in an image directly affects segmentation quality. To identify targets in noisy images quickly and accurately, a segmentation method for noisy images based on histogram preprocessing and the BF algorithm is proposed. The method suppresses image noise with a wavelet transform, analyzes the histogram characteristics of the enhanced image to narrow the range in which segmentation thresholds lie, designs the segmentation objective function according to the two-dimensional maximum between-class variance criterion, and uses the BF algorithm to search quickly for the optimal threshold. Experimental results show that the method outperforms segmentation methods based on other swarm intelligence techniques, such as genetic algorithms and the artificial fish swarm algorithm, in convergence speed, stability and segmentation quality.

16.
To make image segmentation meet real-time requirements and to address the heavy computation of Otsu's method, the between-class variance is extended to a continuous domain and a faster golden-section search is derived. Segmentation with the gray-level difference histogram yields a candidate set from the multi-scale gray-level difference histogram, reducing the number of searches to the number of elements in the candidate set, with little computation and high speed. Combining the two achieves fast image segmentation. Simulation experiments and practical applications show that the method is not only efficient but also gives good segmentation results.
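A minimal sketch of the first idea, treating Otsu's between-class variance as a function of a continuous threshold and locating its maximum with a golden-section search, is given below. It assumes the variance curve is unimodal near the optimum, which is common for real histograms but not guaranteed; the gray-level difference histogram candidate set is not reproduced.

```python
import numpy as np

def between_class_variance(hist, t):
    """Otsu's between-class variance for a (possibly non-integer) threshold t."""
    p = hist / hist.sum()
    bins = np.arange(len(p))
    mask = bins <= t                      # continuous t: split the bins at floor(t)
    w0 = p[mask].sum()
    w1 = 1.0 - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    m0 = (p[mask] * bins[mask]).sum() / w0
    m1 = (p[~mask] * bins[~mask]).sum() / w1
    return w0 * w1 * (m0 - m1) ** 2

def golden_section_max(hist, lo=0.0, hi=255.0, tol=0.5):
    """Golden-section search for the threshold maximizing the between-class variance."""
    g = (np.sqrt(5.0) - 1.0) / 2.0        # golden ratio conjugate, ~0.618
    a, b = lo, hi
    x1, x2 = b - g * (b - a), a + g * (b - a)
    f1, f2 = between_class_variance(hist, x1), between_class_variance(hist, x2)
    while b - a > tol:
        if f1 < f2:                       # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + g * (b - a)
            f2 = between_class_variance(hist, x2)
        else:                             # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - g * (b - a)
            f1 = between_class_variance(hist, x1)
    return int(round((a + b) / 2.0))
```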

17.
A multi-resolution two-dimensional histogram thresholding method   (Cited: 1; self-citations: 0; other citations: 1)
Given that 2D-histogram thresholding exploits the spatial correlation of gray levels to achieve good segmentation, and that multi-resolution thresholding offers flexible and efficient threshold search, a combined scheme for multi-resolution adaptive 2D-histogram thresholding is proposed, which lowers the computational complexity of the 2D histogram and raises the search precision of the multi-resolution method. Experimental results show that the segmentation obtained is essentially consistent with that of the 2D-histogram method, while the computational complexity decreases exponentially with the number of resolution levels.

18.
杨涛, 管一弘. 《计算机应用》, 2010, 30(10): 2797-2801
To handle the uncertainty and fuzziness of human brain tissue structure, a segmentation method combining fuzzy Gibbs random field clustering with a two-dimensional histogram is proposed. The membership functions are first defined from the mean, variance and neighborhood attributes, and a fuzzy Gibbs random field is established. With the fuzzy Gibbs random field as prior knowledge and the maximum a posteriori probability as the decision criterion, the class of each pixel and its membership in that class are determined, while the centroids of the fuzzy classes update the class centers. Finally, the class centers are introduced into the 2D-histogram method to find the threshold points between classes and segment the image. Experiments show that the algorithm accurately segments the various brain tissues, with much better noise robustness, accuracy and smoothness than the fuzzy C-means (FCM) algorithm.

19.
To address the long iterative solving time and under-segmentation of the GrabCut algorithm, an improved GrabCut based on unnormalized histograms is proposed. While keeping GrabCut's first segmentation result, the iterative learning of the Gaussian mixture models is replaced by computing from unnormalized histograms the likelihood that a pixel belongs to the foreground or background; during graph construction, a new class of nodes, Bin nodes, is introduced to improve segmentation precision. Experiments on part of the MSRA1000 dataset show that the algorithm clearly improves both segmentation quality and efficiency, and its advantage is more pronounced when segmenting images with complex backgrounds.
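The core change is replacing the GMM color models with unnormalized foreground/background color histograms learned from the first cut. Below is a hedged sketch of such histogram-based data terms; the color quantization, the `eps` smoothing and the function names are assumptions, and the Bin-node graph construction of the paper is not reproduced.

```python
import numpy as np

def quantize(img, bins_per_channel=16):
    """Map each RGB pixel to a single color-bin index (the bin count is an assumption)."""
    q = (img.astype(int) * bins_per_channel) // 256
    return q[..., 0] * bins_per_channel ** 2 + q[..., 1] * bins_per_channel + q[..., 2]

def histogram_data_terms(img, init_mask, bins_per_channel=16, eps=1.0):
    """Replace GrabCut's GMM learning with unnormalized color histograms.

    `init_mask` is the foreground mask from the first segmentation pass; the
    data terms are negative log counts of each pixel's color bin in the
    foreground/background histograms (a simplified stand-in for the paper's
    Bin-node construction).
    """
    idx = quantize(img, bins_per_channel)
    n_bins = bins_per_channel ** 3
    fg_hist = np.bincount(idx[init_mask].ravel(), minlength=n_bins).astype(float)
    bg_hist = np.bincount(idx[~init_mask].ravel(), minlength=n_bins).astype(float)
    d_fg = -np.log(fg_hist[idx] + eps)    # low cost where the color is frequent in the foreground
    d_bg = -np.log(bg_hist[idx] + eps)
    return d_fg, d_bg                     # used as unary terms in the subsequent graph cut
```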

20.
Aiming to detect sea targets reliably and in a timely manner, a novel ship recognition method for optical remote sensing data based on a dynamic probability generative model is presented. First, a visual saliency detection method extracts prior shape information of target objects in input images, which is used to describe the initial curve adaptively, and an improved Chan-Vese (CV) model based on entropy and local neighborhood information is used for image segmentation. Second, based on rough set theory, the common discernibility degree is used to compute the significance weight of each candidate feature and to select valid recognition features automatically. Finally, for each node, its neighbor nodes are sorted by their ε-neighborhood distances to the node; using the classes of the nodes at the top of the sorted neighbor list, a dynamic probability generative model is built to recognize ships in optical remote sensing data. Experimental results on real data show that the proposed approach achieves better classification rates at higher speed than the k-nearest neighbor (KNN), support vector machine (SVM) and traditional hierarchical discriminant regression (HDR) methods.
