20 similar documents found; search took 15 ms.
1.
The basic idea of region-based active contour models is to let the contour deform so as to minimize a region energy functional. Because such models usually rely on the intensity homogeneity of each region to be segmented, they cannot correctly segment images with intensity inhomogeneity. In addition, the traditional level-set numerical solution of active contour models is slow and sensitive to initial conditions. This paper proposes an active contour model based on a scalable local region fitting energy, together with a globally convex segmentation method, in which the intensity inhomogeneity within local regions of the image...
2.
In chromosome image analysis and recognition, the key to separating touching or overlapping chromosomes is finding the correct split points. A boundary chain-code computation is used to accurately locate the concave points to which split points belong, i.e., the candidate split points; the correct split points are then selected from the candidates using a distance threshold and a boundary arc-length threshold between them. A method is also proposed for correctly separating two touching chromosomes that adhere at their ends or head-to-tail.
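A minimal stdlib-Python sketch of the concave-point idea in entry 2 (the function names, the counter-clockwise-orientation assumption, and the pairing rule are illustrative simplifications, not the paper's algorithm): on a CCW boundary, a vertex whose adjacent edge vectors make a right turn (negative cross product) is concave, and concave-point pairs closer than a distance threshold become candidate cut lines.

```python
def concave_points(boundary):
    """Return indices of concave vertices on a CCW-ordered boundary polygon."""
    n = len(boundary)
    out = []
    for i in range(n):
        x0, y0 = boundary[i - 1]
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross < 0:          # right turn on a CCW contour -> concavity
            out.append(i)
    return out

def split_pairs(boundary, candidates, max_dist):
    """Keep candidate index pairs closer than a distance threshold (cut lines)."""
    pairs = []
    for a in range(len(candidates)):
        for b in range(a + 1, len(candidates)):
            i, j = candidates[a], candidates[b]
            (xa, ya), (xb, yb) = boundary[i], boundary[j]
            if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= max_dist:
                pairs.append((i, j))
    return pairs
```

On a square with one notch pressed into its top edge, only the notch tip is reported as concave; the paper's additional arc-length filtering is omitted here.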
3.
Effective annotation and content-based search for videos in a digital library require a preprocessing step of detecting, locating and classifying scene transitions, i.e., temporal video segmentation. This paper proposes a novel approach, spatial-temporal joint probability image (ST-JPI) analysis, for temporal video segmentation. A joint probability image (JPI) is derived from the joint probabilities of intensity values of corresponding points in two images. The ST-JPI, which is a series of JPIs derived from consecutive video frames, presents the evolution of the intensity joint probabilities in a video. The evolution in an ST-JPI during various transitions falls into one of several well-defined linear patterns. Based on the patterns in an ST-JPI, our algorithm detects and classifies video transitions effectively. Our study shows that temporal video segmentation based on ST-JPIs is distinguished from previous methods in the following ways: (1) it is effective and relatively robust not only for video cuts but also for gradual transitions; (2) it classifies transitions on the basis of predefined evolution patterns of ST-JPIs during transitions; (3) it is efficient, scalable and suitable for real-time video segmentation. Theoretical analysis and experimental results of our method are presented to illustrate its efficacy and efficiency.
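The JPI construction in entry 3 can be illustrated with a toy stdlib-Python sketch (the paper works on full frames with 256 grey levels; this version uses flat pixel lists and a handful of levels, and the function name is mine): each cell (a, b) holds the probability that a pixel has intensity a in one frame and b in the next.

```python
def joint_probability_image(frame_a, frame_b, levels=4):
    """Joint probabilities of co-located intensities in two equal-size frames."""
    n = len(frame_a)
    jpi = [[0.0] * levels for _ in range(levels)]
    for a, b in zip(frame_a, frame_b):
        jpi[a][b] += 1.0 / n
    return jpi
```

For two identical frames all probability mass sits on the diagonal; during a cut the frames decorrelate and the mass spreads off-diagonal, which is the kind of pattern the paper's classifier keys on.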
4.
In this paper, we propose a new regular gridding and segmentation approach for microarray images. Initially, the microarray images are preprocessed using the Stationary Wavelet Transform (SWT), followed by a hard-thresholding filtering technique to obtain a de-noised microarray image. Then, we use autocorrelation to enhance the self-similarity of the image profile to obtain a regular gridding.
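The autocorrelation step of entry 4 can be sketched in stdlib Python (the SWT denoising is omitted, and both function names and the peak-picking rule are illustrative assumptions): the spot spacing of a regular grid shows up as the dominant non-trivial peak of the autocorrelation of a row or column intensity profile.

```python
def autocorr(profile):
    """Unnormalized autocorrelation of a 1-D, mean-removed profile."""
    n = len(profile)
    mu = sum(profile) / n
    x = [v - mu for v in profile]
    return [sum(x[i] * x[i + lag] for i in range(n - lag)) for lag in range(n)]

def grid_period(profile, min_lag=2):
    """Estimate spot spacing as the lag of the strongest non-trivial peak."""
    r = autocorr(profile)
    return max(range(min_lag, len(r) // 2), key=lambda lag: r[lag])
```

A profile with one bright spot every five pixels yields an estimated period of five.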
5.
Automatic multi-threshold segmentation of medical images
To address the automatic multi-threshold segmentation of medical images, fuzzy C-means (FCM) clustering is used to find the cluster centres of the different tissues and the background in a medical image, and a two-dimensional histogram method is then used to locate the individual threshold points for multi-threshold segmentation. The two-dimensional histogram approach preserves the detail of the target well and better suppresses noise.
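A stdlib-Python sketch of the FCM step in entry 5 (the paper's two-dimensional histogram refinement is replaced here by a naive midpoint-threshold rule, and all names are illustrative): memberships and centres are alternately updated, and thresholds are placed between neighbouring centres.

```python
def fcm_1d(values, k=2, m=2.0, iters=30):
    """Fuzzy C-means on 1-D intensities; returns sorted cluster centres."""
    lo, hi = min(values), max(values)
    centres = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        # membership of each value in each cluster
        u = [[0.0] * k for _ in values]
        for t, x in enumerate(values):
            d = [abs(x - c) + 1e-12 for c in centres]
            for i in range(k):
                u[t][i] = 1.0 / sum((d[i] / dj) ** (2 / (m - 1)) for dj in d)
        # centre update weighted by memberships^m
        for i in range(k):
            w = [u[t][i] ** m for t in range(len(values))]
            centres[i] = sum(wt * x for wt, x in zip(w, values)) / sum(w)
    return sorted(centres)

def thresholds(centres):
    """Naive rule: place a threshold midway between neighbouring centres."""
    return [(a + b) / 2 for a, b in zip(centres, centres[1:])]
```

On two well-separated intensity groups the centres converge near the group means and the single threshold lands between them.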
6.
In medical cell image processing, overlapping cells are frequently encountered. The traditional watershed algorithm suffers from severe over-segmentation, which a distance-transform watershed largely avoids. Because cells are roughly circular, ultimate erosion with a fixed structuring element easily distorts them; eroding the overlapping cells with alternating structuring elements instead marks the individual cell seeds accurately while avoiding erosion-induced distortion. Finally, dilation is used to recover the dividing line between the overlapping cells. Experimental results show that the algorithm effectively avoids over-segmentation and improves segmentation accuracy.
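Only the distance-transform ingredient of entry 6 is sketched below, in stdlib Python (the alternating-element erosion and the watershed itself are omitted, and the two-pass city-block formulation is my choice, not necessarily the paper's): each foreground pixel receives its city-block distance to the nearest background pixel, whose maxima serve as watershed seeds.

```python
def distance_transform(mask):
    """Two-pass city-block distance transform of a binary mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    INF = h + w
    d = [[INF if mask[y][x] else 0 for x in range(w)] for y in range(h)]
    for y in range(h):                      # forward pass: top-left neighbours
        for x in range(w):
            if mask[y][x]:
                if y > 0:
                    d[y][x] = min(d[y][x], d[y - 1][x] + 1)
                if x > 0:
                    d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass: bottom-right
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

On a one-row mask `[0,1,1,1,0]` the result `[0,1,2,1,0]` peaks at the centre, the kind of maximum that marks a cell seed.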
7.
Fingerprint image segmentation is an important step in the preprocessing stage of fingerprint recognition. Building on the grey-variance method, and to address the difficulty and inaccuracy of manually chosen thresholds as well as mis-segmentation caused by noise, a local thresholding method combining grey variance and grey gradient is proposed. In experiments, compared with other typical fingerprint segmentation methods, the proposed method segments fingerprint images efficiently and quickly, handles noisy, low-quality fingerprints well, and works particularly well on fingerprint images that are hard to segment with a global threshold.
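A stdlib-Python sketch of the variance-plus-gradient idea in entry 7 (block size, thresholds, the horizontal-difference gradient, and the AND combination rule are all illustrative assumptions): ridge blocks show both high grey variance and high mean gradient, while smooth background blocks show neither.

```python
def block_features(img, bs=4):
    """Per-block grey variance and mean absolute horizontal gradient."""
    h, w = len(img), len(img[0])
    feats = {}
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            pix = [img[y][x] for y in range(by, min(by + bs, h))
                             for x in range(bx, min(bx + bs, w))]
            mu = sum(pix) / len(pix)
            var = sum((p - mu) ** 2 for p in pix) / len(pix)
            grad = [abs(img[y][x] - img[y][x - 1])
                    for y in range(by, min(by + bs, h))
                    for x in range(max(bx, 1), min(bx + bs, w))]
            feats[(by, bx)] = (var, sum(grad) / max(len(grad), 1))
    return feats

def segment_blocks(img, bs=4, var_t=50.0, grad_t=5.0):
    """Mark a block as ridge (foreground) when variance and gradient agree."""
    return {k: (v >= var_t and g >= grad_t)
            for k, (v, g) in block_features(img, bs).items()}
```

A flat block is rejected even at moderate brightness, while a high-contrast striped block passes both tests; this combination is what suppresses the noise-driven mis-segmentation the abstract mentions.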
8.
Objective: To address the over-segmentation of the watershed algorithm and the limitations of existing RGB-only segmentation methods, a marker-controlled watershed algorithm based on RGB plus depth (RGB-D) images is proposed. Method: Surface geometry is used to assist segmentation; a depth-gradient operator and a normal-vector-gradient operator are defined to measure changes in surface geometry. The resulting depth-gradient and normal-gradient images are fused with the colour-gradient image to extract the marker image. On this basis, the colour-gradient image is corrected using minima imposition, and the watershed algorithm then performs the segmentation. Results: In experiments on the NYU2 dataset provided by New York University, the algorithm effectively suppresses over-segmentation, reducing the number of segmented regions from thousands to tens; it produces results closer to manually annotated segmentations, and segmentation accuracy improves by more than 10% over colour-only segmentation. Conclusion: The algorithm applies broadly to RGB-D image segmentation; incorporating surface geometry raises segmentation accuracy and yields good results in regions of similar colour and texture.
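The depth-gradient operator and the gradient fusion of entry 8 can be sketched in stdlib Python (the paper's exact operator definitions and fusion weights are not given in the abstract, so the central-difference form and the linear blend below are assumptions):

```python
def depth_gradient(depth):
    """Central-difference gradient magnitude of a depth map (2-D list)."""
    h, w = len(depth), len(depth[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
            gy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
            g[y][x] = (gx * gx + gy * gy) ** 0.5
    return g

def fuse(color_grad, depth_grad, alpha=0.5):
    """Linear fusion of colour and depth gradient maps (assumed blend rule)."""
    return [[(1 - alpha) * c + alpha * d for c, d in zip(crow, drow)]
            for crow, drow in zip(color_grad, depth_grad)]
```

A depth step between two planar regions produces a gradient ridge even where the colour gradient is flat, which is exactly the cue that lets RGB-D markers separate similarly coloured regions.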
9.
Küçükkülahlı, Enver; Erdoğmuş, Pakize; Polat, Kemal 《Neural computing & applications》2016,27(5):1445-1450
Neural Computing and Applications - The segmentation process is defined by separating the objects as clustering in the images. The most used method in the segmentation is k-means clustering...
10.
Pankaj K. Singh Nitesh Sinha Karan Sikka 《International journal of remote sensing》2013,34(15):4155-4173
Image segmentation is one of the crucial tasks in the postprocessing of synthetic aperture radar (SAR) images. However, SAR images are textural in nature, marked by the textural patterns of widely disparate mean intensity values. This renders conventional multi-resolution techniques inefficient for the segmentation of these images. This article proposes a novel technique of combining both intensity and textural information for effective region classification. To achieve this, two new approaches, called Neighbourhood-based Membership Ambiguity Correction (NMAC) and Dynamic Sliding Window Size Estimation (DSWSE), have been proposed. The results obtained from the two schemes are combined, segregating the image into well-defined regions of distinct textures as well as intensities. Promising results have been obtained over the SAR images of Nordlinger Ries in the Swabian Jura and flood regions near the river Kosi in Bihar, India.
11.
E.G.P. Bovenkamp, J. Dijkstra, J.H.C. Reiber 《Pattern recognition》2004,37(4):647-663
A novel multi-agent image interpretation system has been developed which is markedly different from previous approaches in especially its elaborate high-level knowledge-based control over low-level image segmentation algorithms. Agents dynamically adapt segmentation algorithms based on knowledge about global constraints, contextual knowledge, local image information and personal beliefs. Generally, agent control allows the underlying segmentation algorithms to be simpler and to be applied to a wider range of problems with higher reliability. The agent knowledge model is general and modular to support easy construction and addition of agents to any image processing task. Each agent in the system is further responsible for one type of high-level object and cooperates with other agents to come to a consistent overall image interpretation. Cooperation involves communicating hypotheses and resolving conflicts between the interpretations of individual agents. The system has been applied to IntraVascular UltraSound (IVUS) images, which are segmented by five agents specialized in lumen, vessel, calcified-plaque, shadow and sidebranch detection. IVUS image sequences from 7 patients were processed and vessel and lumen contours were detected fully automatically. These were compared with expert-corrected semiautomatically detected contours. Results show good correlations between agents and expert, with r=0.84 for the lumen and r=0.92 for the vessel cross-sectional areas, respectively.
12.
Piero Zamperoni 《Image and vision computing》1984,2(3):123-133
A model-based approach to grey-tone image segmentation is presented. A conceptual and computational frame is described, in which a variety of image models can be accommodated. Each model is defined by a feature pair and implies a uniformity criterion for ideal regions. Some particularly relevant models are described in detail and illustrated by means of experimental results obtained with real-world images.
13.
In this work we present a snake based approach for the segmentation of images of computerized tomography (CT) scans. We introduce a new term for the internal energy and another one for external energy which solve common problems associated with classical snakes in this type of images. A simplified minimizing method is also presented.
14.
Edge-region-based segmentation of range images
Wani M.A. Batchelor B.G. 《IEEE transactions on pattern analysis and machine intelligence》1994,16(3):314-319
In this correspondence, we present a new computationally efficient three-dimensional (3-D) object segmentation technique. The technique is based on the detection of edges in the image. The edges can be classified as belonging to one of three categories: fold edges, semistep edges (defined here), and secondary edges. The 3-D image is sliced to create equidepth contours (EDCs). Three types of critical points are extracted from the EDCs. A subset of the edge pixels is extracted first using these critical points. The edges are grown from these pixels through the application of some masks proposed in this correspondence. The constraints of the masks can be adjusted depending on the noise present in the image. The total computational effort is small since the masks are applied only over a small neighborhood of critical points (edge regions). Furthermore, the algorithm can be implemented in parallel, as edge growing from different regions can be carried out independently of each other.
15.
Image semantic segmentation is a research topic that has emerged recently. Although existing approaches have achieved satisfactory accuracy, they are limited to handling low-resolution images owing to their large memory consumption. In this paper, we present a semantic segmentation method for high-resolution images. First, we downsample the input image to a lower resolution and then obtain a low-resolution semantic segmentation image using state-of-the-art methods. Next, we use joint bilateral upsampling to upsample the low-resolution solution and obtain a high-resolution semantic segmentation image. To modify joint bilateral upsampling to handle discrete semantic segmentation data, we propose using voting instead of interpolation in filtering computation. Compared to state-of-the-art methods, our method significantly reduces memory cost without reducing result quality.
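The voting idea of entry 15 can be sketched in stdlib Python, with an important simplification: true joint bilateral upsampling also weights votes by similarity in the high-resolution guidance image, whereas this toy version (names and parameters mine) uses spatial Gaussian weights only. Each high-resolution pixel takes the label with the largest weighted vote among nearby low-resolution pixels, so labels stay discrete instead of being interpolated.

```python
import math
from collections import defaultdict

def vote_upsample(labels_lo, scale, sigma=1.0, radius=1):
    """Upsample a discrete label map by weighted voting (not interpolation)."""
    h, w = len(labels_lo), len(labels_lo[0])
    out = [[0] * (w * scale) for _ in range(h * scale)]
    for y in range(h * scale):
        for x in range(w * scale):
            cy, cx = y / scale, x / scale   # position in low-res coordinates
            votes = defaultdict(float)
            for ly in range(max(0, int(cy) - radius), min(h, int(cy) + radius + 1)):
                for lx in range(max(0, int(cx) - radius), min(w, int(cx) + radius + 1)):
                    wgt = math.exp(-((ly - cy) ** 2 + (lx - cx) ** 2)
                                   / (2 * sigma ** 2))
                    votes[labels_lo[ly][lx]] += wgt
            out[y][x] = max(votes, key=votes.get)
    return out
```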
16.
Feature encoding for unsupervised segmentation of color images
Li N. Li Y.F. 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2003,33(3):438-447
In this paper, an unsupervised segmentation method using clustering is presented for color images. We propose to use a neural network based approach to automatic feature selection to achieve adaptive segmentation of color images. With a self-organizing feature map (SOFM), multiple color features can be analyzed, and the useful feature sequence (feature vector) can then be determined. The encoded feature vector is used in the final segmentation using fuzzy clustering. The proposed method has been applied in segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. Our study shows that the feature encoding approach offers great promise in automating and optimizing the segmentation of color images.
17.
We consider the problem of semi-supervised segmentation of textured images. Existing model-based approaches model the intensity field of textured images as a Gauss-Markov random field to take into account the local spatial dependencies between the pixels. Classical Bayesian segmentation consists of also modeling the label field as a Markov random field to ensure that neighboring pixels correspond to the same texture class with high probability. Well-known relaxation techniques are available which find the optimal label field with respect to the maximum a posteriori or the maximum posterior mode criterion. But, these techniques are usually computationally intensive because they require a large number of iterations to converge. In this paper, we propose a new Bayesian framework by modeling two-dimensional textured images as the concatenation of two one-dimensional hidden Markov autoregressive models for the lines and the columns, respectively. A segmentation algorithm, which is similar to turbo decoding in the context of error-correcting codes, is obtained based on a factor graph approach. The proposed method estimates the unknown parameters using the Expectation-Maximization algorithm.
18.
19.
An embedded system is developed to segment stereo images using disparity. The recent developments in the embedded system architecture have allowed real time implementation of low-level vision tasks such as stereo disparity computation. At the same time, an intermediate level task such as segmentation is rarely attempted in an embedded system. To solve the planar surface segmentation problem, which is iterative in nature, our system implements a Segmentation–Estimation framework. In the segmentation phase, segmentation labels are assigned based on the underlying plane parameters. Connected component analysis is carried out on the segmentation result to select the largest spatially connected area for each plane. From the largest areas, the parameters for each plane are reestimated. This iterative process was implemented on a TMS320DM642 based embedded system that operates at 3–5 frames per second on images of size 320 × 240.
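The estimation phase of entry 19 refits plane parameters from the pixels currently assigned to each plane; a stdlib-Python sketch of one such refit is below (the paper does not spell out its estimator, so the least-squares model z = a*x + b*y + c solved by Cramer's rule, and the function name, are assumptions):

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through (x, y, z) samples."""
    sxx = sxy = syy = sx = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    # normal equations  [sxx sxy sx][a]   [sxz]
    #                   [sxy syy sy][b] = [syz]
    #                   [sx  sy  n ][c]   [sz ]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    d = det3(A)
    coeffs = []
    for col in range(3):                 # Cramer's rule, one column at a time
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = rhs[r]
        coeffs.append(det3(M) / d)
    return tuple(coeffs)                 # (a, b, c)
```

In the full loop, pixels would be relabelled to the plane that best predicts their disparity-derived depth, then each plane refitted from its largest connected component, and so on until the labels stabilize.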
20.
Accurate grading for hepatocellular carcinoma (HCC) biopsy images is important to prognosis and treatment planning. In this paper, we propose an automatic system for grading HCC biopsy images. In preprocessing, we use a dual morphological grayscale reconstruction method to remove noise and accentuate nuclear shapes. A marker-controlled watershed transform is applied to obtain the initial contours of nuclei and a snake model is used to segment the shapes of nuclei smoothly and precisely. Fourteen features are then extracted based on six types of characteristics for HCC classification. Finally, we propose a SVM-based decision-graph classifier to classify HCC biopsy images. Experimental results show that 94.54% of classification accuracy can be achieved by using our SVM-based decision-graph classifier while 90.07% and 92.88% of classification accuracy can be achieved by using k-NN and SVM classifiers, respectively.