Found 20 similar documents; search took 0 ms.
1.
The basic idea of region-based active contour models is to deform the contour so as to minimize a region energy functional. Because such models usually rely on the intensity homogeneity of each region to be segmented, they cannot correctly segment images with intensity inhomogeneity. In addition, the traditional level-set numerical solution of active contour models is slow and sensitive to initial conditions. This paper proposes an active contour model based on a scalable local region-fitting energy, together with a globally convex segmentation method, in which the intensity inhomogeneity within local image regions…
2.
Effective annotation and content-based search for videos in a digital library require a preprocessing step of detecting, locating and classifying scene transitions, i.e., temporal video segmentation. This paper proposes a novel approach, spatial-temporal joint probability image (ST-JPI) analysis, for temporal video segmentation. A joint probability image (JPI) is derived from the joint probabilities of intensity values of corresponding points in two images. The ST-JPI, which is a series of JPIs derived from consecutive video frames, presents the evolution of the intensity joint probabilities in a video. The evolution of a ST-JPI during various transitions falls into one of several well-defined linear patterns. Based on the patterns in a ST-JPI, our algorithm detects and classifies video transitions effectively. Our study shows that temporal video segmentation based on ST-JPIs is distinguished from previous methods in the following ways: (1) it is effective and relatively robust not only for video cuts but also for gradual transitions; (2) it classifies transitions on the basis of predefined evolution patterns of ST-JPIs during transitions; (3) it is efficient, scalable and suitable for real-time video segmentation. Theoretical analysis and experimental results are presented to illustrate the method's efficacy and efficiency.
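A JPI is essentially a normalized 2-D histogram of corresponding pixel intensities in two frames. A minimal numpy sketch (the function name and the bin count are my own choices, not from the paper):

```python
import numpy as np

def joint_probability_image(frame_a, frame_b, bins=64):
    """Estimate the joint probability image (JPI) of two frames.

    Cell (i, j) holds the probability that a pixel has quantized
    intensity i in frame_a and j in frame_b.
    """
    a = (frame_a.astype(np.int64) * bins // 256).clip(0, bins - 1)
    b = (frame_b.astype(np.int64) * bins // 256).clip(0, bins - 1)
    jpi = np.zeros((bins, bins))
    np.add.at(jpi, (a.ravel(), b.ravel()), 1.0)  # unbuffered accumulation
    return jpi / jpi.sum()

# For an identical frame pair (no transition), all mass lies on the diagonal.
frame = np.random.randint(0, 256, (32, 32))
jpi = joint_probability_image(frame, frame)
```

During a cut or gradual transition the mass migrates off the diagonal, which is the evolution the linear patterns describe.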
3.
In this paper, we propose a new regular gridding and segmentation approach for microarray images. Initially, the microarray images are preprocessed using the Stationary Wavelet Transform (SWT), followed by a hard-thresholding filtering technique, to obtain a de-noised microarray image. Then, we use autocorrelation to enhance the self-similarity of the image profile and obtain a regular gridding.
4.
Automatic multi-threshold segmentation of medical images
For the automatic multi-threshold segmentation of medical images, fuzzy C-means (FCM) clustering is used to find the cluster centers of the different tissues and the background of a medical image; a two-dimensional histogram method is then used to locate the threshold points for multi-threshold segmentation. The two-dimensional histogram method preserves the detail of the target well and better suppresses noise.
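The FCM step can be sketched in one dimension as follows. This is a generic fuzzy C-means on pixel intensities, with thresholds taken midway between adjacent cluster centers; it does not reproduce the paper's two-dimensional histogram refinement, and the quantile initialization is my own choice:

```python
import numpy as np

def fcm_centers(values, c=3, m=2.0, iters=50):
    """Plain 1-D fuzzy C-means; returns sorted cluster centers."""
    centers = np.quantile(values, (np.arange(c) + 0.5) / c)  # spread initial centers
    for _ in range(iters):
        d = np.abs(values[None, :] - centers[:, None]) + 1e-9  # avoid divide-by-zero
        u = 1.0 / d ** (2.0 / (m - 1.0))                       # membership degrees
        u /= u.sum(axis=0)
        w = u ** m
        centers = (w * values).sum(axis=1) / w.sum(axis=1)
    return np.sort(centers)

# Three synthetic "tissue" intensity groups.
pixels = np.concatenate([np.full(100, 20.0), np.full(100, 120.0), np.full(100, 220.0)])
centers = fcm_centers(pixels, c=3)
thresholds = (centers[:-1] + centers[1:]) / 2  # midpoints as threshold points
```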
5.
Enver Küçükkülahlı, Pakize Erdoğmuş, Kemal Polat 《Neural computing & applications》2016,27(5):1445-1450
The segmentation process separates the objects in an image by clustering. The most widely used segmentation method is k-means clustering...
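The k-means baseline mentioned in the abstract can be illustrated with a plain Lloyd's-algorithm sketch on pixel intensities (a generic illustration, not the authors' exact procedure; the quantile initialization is my own choice):

```python
import numpy as np

def kmeans_segment(image, k=2, iters=20):
    """Segment an image by k-means (Lloyd's algorithm) on pixel intensities."""
    x = image.astype(float).ravel()
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):                     # guard empty clusters
                centers[j] = x[labels == j].mean()
    return labels.reshape(image.shape), centers

# Tiny image with two clearly separated intensity populations.
img = np.array([[10, 12, 200], [11, 198, 202]], dtype=float)
labels, centers = kmeans_segment(img, k=2)
```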
6.
E.G.P. Bovenkamp, J. Dijkstra, J.H.C. Reiber 《Pattern recognition》2004,37(4):647-663
A novel multi-agent image interpretation system has been developed which differs markedly from previous approaches, particularly in its elaborate high-level knowledge-based control over low-level image segmentation algorithms. Agents dynamically adapt segmentation algorithms based on knowledge about global constraints, contextual knowledge, local image information and personal beliefs. In general, agent control allows the underlying segmentation algorithms to be simpler and to be applied to a wider range of problems with higher reliability. The agent knowledge model is general and modular to support easy construction and addition of agents for any image processing task. Each agent in the system is responsible for one type of high-level object and cooperates with other agents to arrive at a consistent overall image interpretation. Cooperation involves communicating hypotheses and resolving conflicts between the interpretations of individual agents. The system has been applied to IntraVascular UltraSound (IVUS) images, which are segmented by five agents specialized in lumen, vessel, calcified-plaque, shadow and sidebranch detection. IVUS image sequences from 7 patients were processed, and vessel and lumen contours were detected fully automatically. These were compared with expert-corrected semiautomatically detected contours. Results show good correlations between agents and expert, with r=0.84 for the lumen and r=0.92 for the vessel cross-sectional areas, respectively.
7.
Piero Zamperoni 《Image and vision computing》1984,2(3):123-133
A model-based approach to grey-tone image segmentation is presented. A conceptual and computational frame is described, in which a variety of image models can be accommodated. Each model is defined by a feature pair and implies a uniformity criterion for ideal regions. Some particularly relevant models are described in detail and illustrated by means of experimental results obtained with real-world images.
8.
Image semantic segmentation is a research topic that has emerged recently. Although existing approaches have achieved satisfactory accuracy, they are limited to handling low-resolution images owing to their large memory consumption. In this paper, we present a semantic segmentation method for high-resolution images. First, we downsample the input image to a lower resolution and then obtain a low-resolution semantic segmentation image using state-of-the-art methods. Next, we use joint bilateral upsampling to upsample the low-resolution solution and obtain a high-resolution semantic segmentation image. To modify joint bilateral upsampling to handle discrete semantic segmentation data, we propose using voting instead of interpolation in filtering computation. Compared to state-of-the-art methods, our method significantly reduces memory cost without reducing result quality.
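The voting modification can be sketched as follows: each low-resolution label casts a vote weighted by a spatial Gaussian and a range (guidance-image) Gaussian, and the high-resolution pixel takes the winning label. The neighborhood radius, the sigmas, and the subsampled low-resolution guidance image are assumptions of this sketch, not the paper's exact settings:

```python
import numpy as np

def vote_upsample(labels_lo, guide_hi, sigma_s=1.0, sigma_r=10.0, radius=1):
    """Joint bilateral *voting* upsampling of a discrete label map.

    Interpolating label values is meaningless for categories, so each
    low-res neighbor instead casts a bilaterally weighted vote for its label.
    """
    Hh, Wh = guide_hi.shape
    Hl, Wl = labels_lo.shape
    guide_lo = guide_hi[::Hh // Hl, ::Wh // Wl][:Hl, :Wl]  # subsampled guidance
    out = np.zeros((Hh, Wh), dtype=labels_lo.dtype)
    for y in range(Hh):
        for x in range(Wh):
            cy, cx = y * Hl / Hh, x * Wl / Wh  # position in low-res grid
            votes = {}
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly, lx = int(round(cy)) + dy, int(round(cx)) + dx
                    if 0 <= ly < Hl and 0 <= lx < Wl:
                        ws = np.exp(-((ly - cy) ** 2 + (lx - cx) ** 2) / (2 * sigma_s ** 2))
                        wr = np.exp(-(float(guide_hi[y, x]) - float(guide_lo[ly, lx])) ** 2
                                    / (2 * sigma_r ** 2))
                        lab = labels_lo[ly, lx]
                        votes[lab] = votes.get(lab, 0.0) + ws * wr
            out[y, x] = max(votes, key=votes.get)  # winning label, not an average
    return out

# A vertical intensity edge in the guidance image snaps the upsampled labels to it.
guide_hi = np.zeros((4, 4))
guide_hi[:, 2:] = 255.0
labels_lo = np.array([[0, 1], [0, 1]])
labels_hi = vote_upsample(labels_lo, guide_hi)
```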
9.
Edge-region-based segmentation of range images
M.A. Wani, B.G. Batchelor 《IEEE transactions on pattern analysis and machine intelligence》1994,16(3):314-319
In this correspondence, we present a new computationally efficient three-dimensional (3-D) object segmentation technique. The technique is based on the detection of edges in the image. The edges can be classified as belonging to one of three categories: fold edges, semistep edges (defined here), and secondary edges. The 3-D image is sliced to create equidepth contours (EDCs). Three types of critical points are extracted from the EDCs. A subset of the edge pixels is extracted first using these critical points. The edges are grown from these pixels through the application of some masks proposed in this correspondence. The constraints of the masks can be adjusted depending on the noise present in the image. The total computational effort is small since the masks are applied only over a small neighborhood of critical points (edge regions). Furthermore, the algorithm can be implemented in parallel, as edge growing from different regions can be carried out independently of each other.
10.
In this work we present a snake-based approach for the segmentation of images of computerized tomography (CT) scans. We introduce a new term for the internal energy and another for the external energy, which solve common problems associated with classical snakes in this type of image. A simplified minimizing method is also presented.
11.
Feature encoding for unsupervised segmentation of color images
N. Li, Y.F. Li 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2003,33(3):438-447
In this paper, an unsupervised segmentation method using clustering is presented for color images. We propose to use a neural network based approach to automatic feature selection to achieve adaptive segmentation of color images. With a self-organizing feature map (SOFM), multiple color features can be analyzed, and the useful feature sequence (feature vector) can then be determined. The encoded feature vector is used in the final segmentation using fuzzy clustering. The proposed method has been applied in segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. Our study shows that the feature encoding approach offers great promise in automating and optimizing the segmentation of color images.
12.
We consider the problem of semi-supervised segmentation of textured images. Existing model-based approaches model the intensity field of textured images as a Gauss-Markov random field to take into account the local spatial dependencies between the pixels. Classical Bayesian segmentation consists of also modeling the label field as a Markov random field to ensure that neighboring pixels correspond to the same texture class with high probability. Well-known relaxation techniques are available which find the optimal label field with respect to the maximum a posteriori or the maximum posterior mode criterion. However, these techniques are usually computationally intensive because they require a large number of iterations to converge. In this paper, we propose a new Bayesian framework by modeling two-dimensional textured images as the concatenation of two one-dimensional hidden Markov autoregressive models for the lines and the columns, respectively. A segmentation algorithm, which is similar to turbo decoding in the context of error-correcting codes, is obtained based on a factor graph approach. The proposed method estimates the unknown parameters using the Expectation-Maximization algorithm.
13.
Pankaj K. Singh, Nitesh Sinha, Karan Sikka 《International journal of remote sensing》2013,34(15):4155-4173
Image segmentation is one of the crucial tasks in the postprocessing of synthetic aperture radar (SAR) images. However, SAR images are textural in nature, marked by the textural patterns of widely disparate mean intensity values. This renders conventional multi-resolution techniques inefficient for the segmentation of these images. This article proposes a novel technique of combining both intensity and textural information for effective region classification. To achieve this, two new approaches, called Neighbourhood-based Membership Ambiguity Correction (NMAC) and Dynamic Sliding Window Size Estimation (DSWSE), have been proposed. The results obtained from the two schemes are combined, segregating the image into well-defined regions of distinct textures as well as intensities. Promising results have been obtained over the SAR images of Nordlinger Ries in the Swabian Jura and flood regions near the river Kosi in Bihar, India.
14.
15.
An embedded system is developed to segment stereo images using disparity. Recent developments in embedded system architecture have allowed real-time implementation of low-level vision tasks such as stereo disparity computation. At the same time, an intermediate-level task such as segmentation is rarely attempted in an embedded system. To solve the planar surface segmentation problem, which is iterative in nature, our system implements a Segmentation-Estimation framework. In the segmentation phase, segmentation labels are assigned based on the underlying plane parameters. Connected component analysis is carried out on the segmentation result to select the largest spatially connected area for each plane. From the largest areas, the parameters for each plane are reestimated. This iterative process was implemented on a TMS320DM642-based embedded system that operates at 3–5 frames per second on images of size 320 × 240.
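The Segmentation-Estimation loop can be sketched as alternating plane refits and label reassignments on a disparity map. The connected-component selection step described above is omitted here, and the initial column-wise split of labels is an assumption of this sketch:

```python
import numpy as np

def plane_seg_estimate(disparity, n_planes=2, iters=10):
    """Alternate between estimating plane parameters and reassigning labels.

    A planar surface predicts disparity as d = a*x + b*y + c.
    """
    H, W = disparity.shape
    ys, xs = np.mgrid[0:H, 0:W]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    d = disparity.ravel()
    labels = xs.ravel() * n_planes // W              # crude initial column split
    planes = np.zeros((n_planes, 3))
    for _ in range(iters):
        for j in range(n_planes):                    # estimation step: refit planes
            mask = labels == j
            if mask.sum() >= 3:
                planes[j], *_ = np.linalg.lstsq(A[mask], d[mask], rcond=None)
        residuals = np.abs(A @ planes.T - d[:, None])
        labels = residuals.argmin(axis=1)            # segmentation step: reassign
    return labels.reshape(H, W), planes

# Two exact planar regions: a fronto-parallel plane (d = 10) on the left
# and a slanted plane (d = x) on the right.
disparity = np.zeros((4, 8))
disparity[:, :4] = 10.0
disparity[:, 4:] = np.arange(4, 8)[None, :]
labels, planes = plane_seg_estimate(disparity, n_planes=2)
```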
16.
Accurate grading of hepatocellular carcinoma (HCC) biopsy images is important to prognosis and treatment planning. In this paper, we propose an automatic system for grading HCC biopsy images. In preprocessing, we use a dual morphological grayscale reconstruction method to remove noise and accentuate nuclear shapes. A marker-controlled watershed transform is applied to obtain the initial contours of nuclei, and a snake model is used to segment the shapes of nuclei smoothly and precisely. Fourteen features are then extracted based on six types of characteristics for HCC classification. Finally, we propose an SVM-based decision-graph classifier to classify HCC biopsy images. Experimental results show that 94.54% classification accuracy can be achieved by using our SVM-based decision-graph classifier, while 90.07% and 92.88% classification accuracy can be achieved by using k-NN and SVM classifiers, respectively.
17.
Fabio Bellavia, Antonino Cacioppo, Carmen Alina Lupaşcu, Pietro Messina, Giuseppe Scardina, Domenico Tegolo, Cesare Valenti 《Computer methods and programs in biomedicine》2014
We aim to describe a new non-parametric methodology to support the clinician during the diagnostic process of oral videocapillaroscopy to evaluate peripheral microcirculation. Our methodology, mainly based on wavelet analysis and mathematical morphology to preprocess the images, segments them by minimizing the within-class luminosity variance of both capillaries and background. Experiments were carried out on a set of real microphotographs to validate this approach versus handmade segmentations provided by physicians. By using a leave-one-patient-out approach, we pointed out that our methodology is robust, according to precision–recall criteria (average precision and recall are equal to 0.924 and 0.923, respectively) and it acts as a physician in terms of the Jaccard index (mean and standard deviation equal to 0.858 and 0.064, respectively).
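Minimizing the weighted within-class luminosity variance of the two classes corresponds to Otsu's criterion; an exhaustive-search sketch (the wavelet and morphological preprocessing stages of the methodology are not reproduced here):

```python
import numpy as np

def min_within_class_variance_threshold(values):
    """Exhaustively search the threshold minimizing the weighted
    within-class variance of foreground and background (Otsu's criterion)."""
    vals = np.sort(np.unique(values))
    best_t, best_score = vals[0], np.inf
    for t in vals[:-1]:                       # last value cannot be a split point
        lo, hi = values[values <= t], values[values > t]
        score = (lo.size * lo.var() + hi.size * hi.var()) / values.size
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Two well-separated luminosity populations: the split lands between them.
pixels = np.array([10, 11, 12, 13, 200, 201, 202], dtype=float)
t = min_within_class_variance_threshold(pixels)
```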
18.
19.
Robust adaptive segmentation of range images
Kil-Moo Lee, P. Meer, Rae-Hong Park 《IEEE transactions on pattern analysis and machine intelligence》1998,20(2):200-205
We propose a novel image segmentation technique using the robust, adaptive least kth order squares (ALKS) estimator, which minimizes the kth order statistic of the squares of residuals. The optimal value of k is determined from the data, and the procedure detects the homogeneous surface patch representing the relative majority of the pixels. The ALKS shows a better tolerance to structured outliers than other recently proposed similar techniques. The performance of the new, fully autonomous range image segmentation algorithm is compared to several other methods.
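The least kth order squares idea can be sketched for robust line fitting: candidate fits from random minimal samples are scored by the kth smallest squared residual, so a structured-outlier population cannot pull the fit as long as k points lie on the majority structure. The adaptive, data-driven selection of k that defines ALKS is omitted; k is fixed in this sketch:

```python
import numpy as np

def lks_line(x, y, k, trials=200, seed=0):
    """Least kth order squares line fit via random two-point sampling.

    Among candidate lines through random point pairs, pick the one
    minimizing the k-th smallest squared residual.
    """
    rng = np.random.default_rng(seed)
    best, best_score = None, np.inf
    for _ in range(trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                            # skip vertical candidates
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        r2 = np.sort((y - (a * x + b)) ** 2)
        score = r2[k - 1]                       # k-th order statistic of squared residuals
        if score < best_score:
            best, best_score = (a, b), score
    return best

# Relative majority of points on y = 2x + 1, plus structured outliers.
x = np.arange(10.0)
y = 2 * x + 1
y[7:] = 100.0                                   # three structured outliers
a, b = lks_line(x, y, k=6)
```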
20.
The scale parameter(s) of multi-scale hierarchical segmentation (MSHS), which groups pixels into objects of different sizes and organizes them hierarchically in multiple levels (for example, the multiresolution segmentation (MRS) embedded in the eCognition software), directly determine the average size of segmented objects and strongly influence subsequent geographic object-based image analysis. Recently, some studies have provided solutions for finding the optimal scale parameter(s) by supervised strategies (with reference data) or unsupervised strategies (without reference data). They focused on designing metrics that indicate better scale parameter(s) but neglected the influence of the linear sampling method of the scale parameter that they used by default. Indeed, the linear sampling method not only requires a proper increment and a proper range to balance accuracy and efficiency in supervised strategies, but also performs badly in selecting multiple key scales for the MSHS of complex landscapes in unsupervised strategies. To address these drawbacks, we propose an exponential sampling method. It is based on our finding that the logarithm of the segment count and the logarithm of the scale parameter are linearly dependent, which was extensively validated on different landscapes in this study. The scale parameters sampled by the exponential sampling method and by the linear sampling method with increments of 2, 5, 10, 25, and 100, as used in most former studies, were evaluated and compared by two supervised strategies and an unsupervised strategy.
Results indicated that, when searching with the supervised strategies, the exponential sampling method achieved both high accuracy and efficiency, whereas the linear sampling method had to balance them through expert experience; and when searching with the unsupervised strategy, multiple key scale parameters in the MSHS of complex landscapes could be identified among the exponential sampling results, which the linear sampling results hardly achieved. Considering these two merits, we recommend the exponential sampling method to replace the linear sampling method when searching for the optimal scale parameter(s) of MRS.
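The contrast between the two sampling schemes, and the reported log-log linearity, can be sketched as follows. The scale range, the number of samples, and the power-law exponent are illustrative assumptions, not values from the article:

```python
import numpy as np

# Linear sampling needs a hand-picked increment; exponential (geometric)
# sampling spaces the scale parameter evenly in log space.
linear_scales = np.arange(10, 1001, 25)        # fixed-increment sampling
exp_scales = np.geomspace(10, 1000, num=20)    # exponential sampling

# The article's empirical finding: log(segment count) is linear in
# log(scale parameter).  Simulated here with an assumed power law.
gamma, C = 1.5, 1.0e6
counts = C * exp_scales ** -gamma
slope, intercept = np.polyfit(np.log(exp_scales), np.log(counts), 1)
```

Under the assumed power law, the fitted log-log slope recovers the exponent exactly, which is why geometric spacing probes every level of the hierarchy with equal resolution while linear spacing oversamples the large scales.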