1.
This paper provides methodology for fully automated model-based image segmentation. All information necessary to perform image segmentation is automatically derived from a training set that is presented in the form of segmentation examples. The training set is used to construct two models representing the objects: a shape model and a border appearance model. A two-step approach to image segmentation is reported. In the first step, an approximate location of the object of interest is determined. In the second step, accurate border segmentation is performed. The shape-variant Hough transform method was developed to provide robust, automatic object localization. It finds objects of arbitrary shape, rotation, or scaling and can handle object variability. The border appearance model was developed to automatically design cost functions that can be used in the segmentation criteria of edge-based segmentation methods. Our method was tested in five different segmentation tasks that included 489 objects to be segmented. The final segmentation was compared to manually defined borders with good results [rms errors in pixels: 1.2 (cerebellum), 1.1 (corpus callosum), 1.5 (vertebrae), 1.4 (epicardial), and 1.6 (endocardial) borders]. Two major problems of state-of-the-art edge-based image segmentation algorithms were addressed: strong dependency on a close-to-target initialization, and the necessity of manually redesigning segmentation criteria whenever a new segmentation problem is encountered.
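The abstract does not spell out the shape-variant Hough transform; as a rough illustration of the underlying idea (votes cast for a reference point via an R-table learned from a training boundary), here is a minimal generalized Hough transform in Python. The function names and the simplified voting scheme are assumptions of this sketch; the published method additionally handles rotation, scaling, and shape variability.

```python
import numpy as np

def build_r_table(template_pts, ref_point):
    """Displacement from each template boundary point to the reference point."""
    return [np.array(ref_point) - np.array(p) for p in template_pts]

def hough_localize(edge_pts, r_table, shape):
    """Each detected edge point casts votes for candidate reference-point locations."""
    acc = np.zeros(shape, dtype=int)
    for p in edge_pts:
        for d in r_table:
            y, x = np.array(p) + d
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                acc[y, x] += 1
    # the accumulator peak is the most strongly supported object location
    return np.unravel_index(np.argmax(acc), acc.shape)
```

A template boundary trains the R-table once; at detection time, the accumulator peak recovers the object position even when only its edge points are available.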
2.
Li W., Aiken M. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 1998, 28(2): 288-294
Many real-world decision-making problems fall into the general category of classification. Algorithms for constructing knowledge by inductive inference from examples have been widely used for some decades. Although these learning algorithms frequently address the same problem of learning from preclassified examples, and much previous work in inductive learning has focused on the algorithms' predictive accuracy, little attention has been paid to the effect of data factors on the performance of a learning system. An experiment was conducted using five learning algorithms on two data sets to investigate how a change in labeling the class attribute can alter the behavior of learning algorithms. The results show that different preclassification rules applied to the training examples can affect either the classification accuracy or the classification structure.
3.
We describe a framework for multivalued segmentation and demonstrate that some of the problems affecting common region-based algorithms can be overcome by integrating statistical and topological methods in a nonlinear fashion. We address the sensitivity to parameter settings, the difficulty of handling global contextual information, and the dependence of results on analysis order and on initial conditions. We develop our method within a theoretical framework and resort to the definition of image segmentation as an estimation problem. We show that, thanks to an adaptive image scanning mechanism, there is no need for iterations to propagate a global context efficiently. The keyword multivalued refers to a property of the result, which spans a set of solutions. The advantage is twofold: first, there is no need to set a priori input thresholds; second, we are able to cope successfully with the problem of uncertainties in the signal model. To this end, we adopt a modified version of fuzzy connectedness, which proves particularly useful for accounting for densitometric and topological information simultaneously. The algorithm was tested on several synthetic and real images. The peculiarities of the method are assessed both qualitatively and quantitatively.
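Fuzzy connectedness assigns each pixel the strength of the best path from a seed, where a path is only as strong as its weakest link. A toy Dijkstra-style propagation in Python illustrates the idea; the intensity-based affinity function used here is an assumption chosen only for the sketch, not the modified affinity of the paper.

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed):
    """Connectedness map: max over paths from seed of the min edge affinity."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]            # max-heap via negated strengths
    while heap:
        negc, (y, x) = heapq.heappop(heap)
        c = -negc
        if c < conn[y, x]:
            continue                 # stale entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # affinity is high when neighbouring intensities are similar
                aff = 1.0 / (1.0 + abs(float(img[y, x]) - float(img[ny, nx])))
                cand = min(c, aff)   # path strength = weakest link
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn
```

Thresholding the resulting map (or comparing maps from competing seeds) yields a region; note that no iteration over the whole image is needed beyond the single best-first propagation.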
4.
The performance of a video object segmentation algorithm directly affects the quality of MPEG-4 coding products. Two successive frame differences are computed and adaptively processed; taking the intersection of the difference images yields the boundary of the moving object, and morphological processing then produces a binary segmentation mask from which the moving target is extracted. Video objects in subsequent frames are tracked with an improved Hausdorff distance measure. Experimental results show that the method extracts moving objects well from image sequences with a static background and exhibits strong robustness.
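A minimal numpy sketch of the double frame-difference idea just described. The adaptive threshold (mean + 2·std of each difference image) is an assumed stand-in for the paper's adaptive processing, and the morphological cleanup and Hausdorff-distance tracking stages are omitted.

```python
import numpy as np

def moving_object_mask(f_prev, f_curr, f_next, thresh=None):
    """Intersection of two consecutive absolute frame differences."""
    d1 = np.abs(f_curr.astype(int) - f_prev.astype(int))
    d2 = np.abs(f_next.astype(int) - f_curr.astype(int))
    if thresh is None:
        # simple adaptive threshold: mean + 2*std of each difference image
        t1 = d1.mean() + 2 * d1.std()
        t2 = d2.mean() + 2 * d2.std()
    else:
        t1 = t2 = thresh
    # only pixels that changed in BOTH differences belong to the moving object
    return (d1 > t1) & (d2 > t2)
```

In a full pipeline the binary mask would next be cleaned with morphological opening/closing before the object region is extracted and tracked.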
5.
Haris K., Efstratiadis S. N., Maglaveras N., Pappas C., Gourassas J., Louridas G. IEEE Transactions on Medical Imaging, 1999, 18(10): 1003-1015
6.
Niyogi P., Girosi F., Poggio T. Proceedings of the IEEE, 1998, 86(11): 2196-2209
One of the key problems in supervised learning is the insufficient size of the training set. The natural way for an intelligent learner to counter this problem and successfully generalize is to exploit prior information that may be available about the domain or that can be learned from prototypical examples. We discuss the notion of using prior knowledge by creating virtual examples and thereby expanding the effective training-set size. We show that in some contexts this idea is mathematically equivalent to incorporating the prior knowledge as a regularizer, suggesting that the strategy is well motivated. The process of creating virtual examples in real-world pattern recognition tasks is highly nontrivial. We provide demonstrative examples from object recognition and speech recognition to illustrate the idea.
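A schematic version of the virtual-example idea in Python. The prior knowledge assumed here is that labels are invariant under small perturbations of the input; the Gaussian jitter and its scale are illustrative assumptions, not the transformations used in the paper.

```python
import numpy as np

def virtual_examples(X, y, n_copies=2, sigma=0.1, rng=None):
    """Expand (X, y) with label-preserving perturbed copies of each example."""
    rng = np.random.default_rng(rng)
    Xv, yv = [X], [y]
    for _ in range(n_copies):
        # prior knowledge: the class label is unchanged by small perturbations
        Xv.append(X + rng.normal(0.0, sigma, X.shape))
        yv.append(y)
    return np.concatenate(Xv), np.concatenate(yv)
```

Training on the expanded set acts like a regularizer: the learner is pushed toward functions that are stable under the assumed invariance.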
7.
Image segmentation and labeling using the Polya urn model
We propose a segmentation method based on Polya's (1931) urn model for contagious phenomena. A preliminary segmentation yields the initial composition of an urn representing the pixel. The resulting urns are then subjected to a modified urn sampling scheme mimicking the development of an infection to yield a segmentation of the image into homogeneous regions. This process is implemented using contagion urn processes and generalizes Polya's scheme by allowing spatial interactions. The composition of the urns is iteratively updated by assuming a spatial Markovian relationship between neighboring pixel labels. The asymptotic behavior of this process is examined and comparisons with simulated annealing and relaxation labeling are presented. Examples of the application of this scheme to the segmentation of synthetic texture images, ultra-wideband synthetic aperture radar (UWB SAR) images and magnetic resonance images (MRI) are provided.
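A toy version of the spatially interacting urn scheme in Python. The pooling of neighbour urns and the single-ball reinforcement are simplifications assumed for this sketch, not the authors' exact sampling scheme; they are meant only to show how label "contagion" spreads through reinforced sampling.

```python
import numpy as np

def polya_urn_relabel(labels, n_classes, iters=5, rng=None):
    """Refine a preliminary label image via spatially coupled Polya urns."""
    rng = np.random.default_rng(rng)
    h, w = labels.shape
    # each pixel's urn starts with one ball of its preliminary label
    urns = np.zeros((h, w, n_classes), dtype=float)
    for c in range(n_classes):
        urns[..., c] = (labels == c)
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                # pool the urn with its 4-neighbours (spatial Markov interaction)
                pool = urns[y, x].copy()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        pool += urns[ny, nx]
                drawn = rng.choice(n_classes, p=pool / pool.sum())
                urns[y, x, drawn] += 1   # reinforcement: add a ball of the drawn colour
    return urns.argmax(axis=-1)
```

Because draws are reinforced, homogeneous neighbourhoods quickly dominate their pixels' urns, smoothing the preliminary segmentation into coherent regions.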
8.
9.
In medical image segmentation, tumors and other lesions demand the highest levels of accuracy but still call for the highest levels of manual delineation. One factor holding back automatic segmentation is the exemption of pathological regions from shape modelling techniques that rely on high-level shape information not offered by lesions. This paper introduces two new statistical shape models (SSMs) that combine radial shape parameterization with machine learning techniques from the field of nonlinear time series analysis. We then develop two dynamic contour models (DCMs) using the new SSMs as shape priors for tumor and lesion segmentation. From training data, the SSMs learn the lower level shape information of boundary fluctuations, which we prove to be nevertheless highly discriminant. One of the new DCMs also uses online learning to refine the shape prior for the lesion of interest based on user interactions. Classification experiments reveal superior sensitivity and specificity of the new shape priors over those previously used to constrain DCMs. User trials with the new interactive algorithms show that the shape priors are directly responsible for improvements in accuracy and reductions in user demand.
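The radial shape parameterization the paper builds on can be illustrated with a small Python helper that converts a closed boundary into a radius-versus-angle signature about its centroid. The binning choices below are assumptions of the sketch, not the paper's parameterization.

```python
import numpy as np

def radial_signature(boundary, n_angles=32):
    """Mean radius per angular bin, measured about the boundary centroid."""
    c = boundary.mean(axis=0)
    v = boundary - c
    ang = np.arctan2(v[:, 1], v[:, 0])
    rad = np.linalg.norm(v, axis=1)
    bins = np.linspace(-np.pi, np.pi, n_angles + 1)
    sig = np.zeros(n_angles)
    for i in range(n_angles):
        m = (ang >= bins[i]) & (ang < bins[i + 1])
        sig[i] = rad[m].mean() if m.any() else 0.0
    return sig
```

Fluctuations of this 1-D signature around its mean are exactly the kind of "lower level" boundary information that lends itself to time-series-style statistical modelling.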
10.
11.
12.
Rapid advances in artificial intelligence (AI) in the last decade have been largely built upon the wide applications of deep learning (DL). However, the high carbon footprint yielded by larger and larger DL networks has become a concern for sustainability. Furthermore, the DL decision mechanism is somewhat obscure in that it can only be verified by test data. Green learning (GL) is being proposed as an alternative paradigm to address these concerns. GL is characterized by low carbon footprints, lightweight models, low computational complexity, and logical transparency. It offers energy-efficient solutions in cloud centers as well as mobile/edge devices. GL also provides a more transparent, logical decision-making process, which is essential to gaining people's trust. Several statistical tools, such as unsupervised representation learning, supervised feature learning, and supervised decision learning, have been developed to achieve this goal in recent years. We have seen a few successful GL examples with performance comparable with state-of-the-art DL solutions. This paper introduces the key characteristics of GL, its demonstrated applications, and future outlook.
13.
Lei Ma, Xiao-Ping Zhang, Jennie Si, Glen P. Abousleman. IEEE Transactions on Image Processing, 2005, 14(12): 2073-2081
In this paper, we introduce a new image segmentation scheme that is based on bidirectional labeling and registration, and we prove that its segmentation performance is equivalent to that of the conventional watershed segmentation algorithm. The proposed scheme, which we refer to as the bidirectional labeling and registration scheme (BIDS), involves only linear scans of image pixels. It uses one-dimensional operations rather than the queues used in traditional segmentation algorithms, which operate on inherently two-dimensional problems. BIDS also provides unique labels for individual homogeneous regions. In addition to achieving the same segmentation results, BIDS is four times less computationally complex than the conventional watershed-by-immersion technique.
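BIDS itself is not reproduced in the abstract; as a generic illustration of assigning unique labels to homogeneous regions with linear scans of the pixels, here is a textbook two-pass connected-component labeler in Python (raster scan plus union-find resolution). This is a standard stand-in, not the published BIDS algorithm.

```python
import numpy as np

def label_regions(img):
    """Two-pass 4-connected labeling of constant-valued regions."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                     # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    nxt = 1
    # first pass: one linear raster scan assigning provisional labels
    for y in range(h):
        for x in range(w):
            neigh = []
            if y > 0 and img[y - 1, x] == img[y, x]:
                neigh.append(find(labels[y - 1, x]))
            if x > 0 and img[y, x - 1] == img[y, x]:
                neigh.append(find(labels[y, x - 1]))
            if not neigh:
                labels[y, x] = nxt
                parent.append(nxt)
                nxt += 1
            else:
                m = min(neigh)
                labels[y, x] = m
                for a in neigh:      # merge equivalent provisional labels
                    parent[a] = m
    # second pass: resolve equivalences to compact, unique region labels
    flat = {}
    out = np.zeros_like(labels)
    for y in range(h):
        for x in range(w):
            r = find(labels[y, x])
            out[y, x] = flat.setdefault(r, len(flat) + 1)
    return out
```

Both passes are plain raster scans, which is the same structural appeal the abstract claims for BIDS over queue-driven immersion watersheds.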
14.
Object segmentation based on an improved prior-shape CV model
Because space targets undergo large attitude changes and their gray levels differ little from the Earth background, the traditional CV (Chan and Vese) model has difficulty producing satisfactory segmentation results. To address the CV model's failure to recognize targets that are partially occluded or have partially missing information, Chan and Zhu introduced a prior energy term into the CV model, but the resulting prior shape model is invariant only to rotation, scaling, and translation. This paper proposes an improved variational level-set model with a prior shape constraint for segmenting space targets against starry-sky and complex Earth backgrounds. While preserving the rotation, scaling, and translation invariance of the prior shape model, the improved model adds constraint energy terms invariant to stretching in the X and Y directions and to shearing, enhancing the prior shape's adaptability to target variation. Experimental results show that the method achieves better segmentation of space targets with large attitude changes against complex backgrounds.
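In symbols, the kind of energy such a prior-shape CV model minimizes can be sketched as follows. This is a schematic reconstruction from the abstract; the notation (E_CV, H, the prior shape φ₀, and the transform T) is assumed rather than taken from the paper.

```latex
E(\phi) \;=\; E_{\mathrm{CV}}(\phi)
\;+\; \beta \int_{\Omega} \big( H(\phi(x)) - H(\phi_0(T(x))) \big)^2 \, dx
```

Here $E_{\mathrm{CV}}$ is the usual Chan–Vese fitting-plus-length energy, $H$ the Heaviside function, $\phi_0$ the prior shape, and $T$ a transform covering rotation, scaling, and translation, extended in the improved model to include stretching along the X and Y directions and shearing.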
15.
16.
Papin C., Bouthemy P., Rochard G. IEEE Transactions on Geoscience and Remote Sensing, 2002, 40(1): 104-114
The early and accurate segmentation of low clouds during the night-time is an important task for nowcasting. It requires that observations can be acquired at a sufficient time rate, as provided by the geostationary METEOSAT satellite over Europe. However, the information supplied by the single infrared METEOSAT channel available by night is not sufficient to discriminate between low clouds and ground from a single image. To tackle this issue, the authors consider several sources of information extracted from an infrared image sequence. Indeed, they exploit relevant local motion-based measurements, intensity images, and thermal parameters estimated over blocks, along with local contextual information. A statistical contextual labeling process into two classes, "low clouds" and "clear sky," is performed on the warmer pixels. It is formulated within a Bayesian estimation framework associated with Markov random field (MRF) models. This amounts to minimizing a global energy function comprising three terms: two data-driven terms (thermal and motion-based) and a regularization term expressing a priori knowledge of the label field (expected spatial contextual properties). The authors propose a progressive minimization procedure for this energy function, starting from reliably labeled initial pixels and involving only local computation.
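Schematically, the three-term global energy has the following form; the notation is assumed for illustration and does not reproduce the paper's exact definitions.

```latex
U(l) \;=\; \sum_{s} \Big[ U_{\mathrm{thermal}}(l_s, d_s) + U_{\mathrm{motion}}(l_s, m_s) \Big]
\;+\; \sum_{\langle s,t \rangle} U_{\mathrm{prior}}(l_s, l_t)
```

Here $s$ ranges over pixel sites, $d_s$ and $m_s$ denote the thermal and motion-based observations at site $s$, and $\langle s,t \rangle$ ranges over pairs of neighbouring sites, so the first bracket gathers the two data-driven terms and the last sum is the MRF regularization.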
17.
This paper intends to extend the minimization algorithm developed by Bae, Yuan and Tai [IJCV, 2011] in several directions. First, we propose a new primal-dual approach for global minimization of the continuous Potts model with applications to the piecewise constant Mumford-Shah model for multiphase image segmentation. Different from existing methods, we work directly with the binary setting without using convex relaxation, which is thereby termed a direct approach. Second, we provide the sufficient and necessary conditions to guarantee a global optimum. Moreover, we provide efficient algorithms based on a reduction in the intermediate unknowns of the augmented Lagrangian formulation. As a result, the underlying algorithms involve significantly fewer parameters and unknowns than the naive use of augmented Lagrangian-based methods; hence, they are fast and easy to implement. Furthermore, they can produce global optima under mild conditions.
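For reference, the continuous Potts / piecewise-constant Mumford-Shah problem being minimized can be written in its standard form; this is the textbook formulation, not copied from the paper.

```latex
\min_{\{\Omega_i\}} \; \sum_{i=1}^{n} \int_{\Omega_i} \big( I(x) - c_i \big)^2 \, dx
\;+\; \alpha \sum_{i=1}^{n} \left| \partial \Omega_i \right|,
\qquad \bigcup_{i=1}^{n} \Omega_i = \Omega, \quad \Omega_i \cap \Omega_j = \emptyset \;\; (i \neq j)
```

The regions $\Omega_i$ partition the image domain $\Omega$, $c_i$ is the mean intensity of region $i$, and $\alpha$ weights the total boundary length; the combinatorial nature of the partition constraint is what makes global minimization hard and motivates the direct binary approach.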
18.
19.
3D point cloud data processing plays an important role in fields such as object segmentation, medical image segmentation, and virtual reality. However, existing 3D point cloud learning networks extract global features over a limited range and struggle to describe local high-level semantic information, leading to incomplete point cloud feature representations. To address these problems, this paper proposes a point cloud classification and segmentation network that compensates global features with semantic information. First, the input point cloud is aligned to a canonical space as an input-transformation preprocessing step. Then, a dilated edge convolution module extracts features from each layer of the transformed data, and these are stacked to generate the global features. During local feature extraction, the extracted low-level semantic information is used to describe high-level semantic information and effective geometric features, compensating for point cloud features missing from the global features. Finally, the global features and local high-level semantic information are fused to obtain the overall features of the point cloud. Experimental results show that the method outperforms current classical and recent algorithms in classification and segmentation performance.
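A bare-bones Python sketch of the edge-convolution building block that dilated edge-convolution modules extend: a k-NN graph, an edge feature [x_i, x_j − x_i], and a channel-wise max over neighbours. The single linear map standing in for the edge MLP, and the brute-force k-NN, are assumptions of this sketch.

```python
import numpy as np

def knn(points, k):
    """Indices of the k nearest neighbours of each point (self excluded)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def edge_conv(points, feats, W, k=4):
    """Edge feature [x_i, x_j - x_i] -> linear map W -> max over neighbours."""
    idx = knn(points, k)
    n, c = feats.shape
    out = np.full((n, W.shape[1]), -np.inf)
    for i in range(n):
        for j in idx[i]:
            e = np.concatenate([feats[i], feats[j] - feats[i]])  # shape (2c,)
            out[i] = np.maximum(out[i], e @ W)                   # channel-wise max
    return out
```

Stacking such layers (with k-NN recomputed in feature space, and dilation applied to the neighbour selection) is what lets the network aggregate progressively higher-level semantic information.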