Similar Documents
 20 similar documents found (search time: 31 ms)
1.
3-D shape recovery using distributed aspect matching (cited by 2: 0 self-citations, 2 others)
An approach to the recovery of 3-D volumetric primitives from a single 2-D image is presented. The approach first takes a set of 3-D volumetric modeling primitives and generates a hierarchical aspect representation based on the projected surfaces of the primitives; conditional probabilities capture the ambiguity of mappings between levels of the hierarchy. From a region segmentation of the input image, the authors present a formulation of the recovery problem based on the grouping of the regions into aspects. No domain-independent heuristics are used; only the probabilities inherent in the aspect hierarchy are exploited. Once the aspects are recovered, the aspect hierarchy is used to infer a set of volumetric primitives and their connectivity. As a front end to an object recognition system, the approach provides the indexing power of complex 3-D object-centered primitives while exploiting the convenience of 2-D viewer-centered aspect matching; aspects are used to represent a finite vocabulary of 3-D parts from which objects can be constructed.

2.
The modeling and segmentation of images by MRF's (Markov random fields) is treated. These are two-dimensional noncausal Markovian stochastic processes. Two conceptually new algorithms are presented for segmenting textured images into regions in each of which the data are modeled as one of C MRF's. The algorithms are designed to operate in real time when implemented on new parallel computer architectures that can be built with present technology. A doubly stochastic representation is used in image modeling. Here, a Gaussian MRF is used to model textures in visible light and infrared images, and an autobinary (or autoternary, etc.) MRF to model a priori information about the local geometry of textured image regions. For image segmentation, the true texture class regions are treated either as a priori completely unknown or as a realization of a binary (or ternary, etc.) MRF. In the former case, image segmentation is realized as true maximum likelihood estimation. In the latter case, it is realized as true maximum a posteriori likelihood segmentation. In addition to providing a mathematically correct means for introducing geometric structure, the autobinary (or ternary, etc.) MRF can be used in a generative mode to generate image geometries and artificial images, and such simulations constitute a very powerful tool for studying the effects of these models and the appropriate choice of model parameters. The first segmentation algorithm is hierarchical and uses a pyramid-like structure in new ways that exploit the mutual dependencies among disjoint pieces of a textured region.

3.
Describes a probabilistic technique for the coupled reconstruction and restoration of underwater acoustic images. The technique is founded on the physics of the image-formation process. Beamforming, a method widely applied in acoustic imaging, is used to build a range image from backscattered echoes, associated point by point with another type of information representing the reliability (or confidence) of such an image. Unfortunately, images of this kind are plagued by problems due to the nature of the signal and of the related sensing system. In the proposed algorithm, the range and confidence images are modeled as Markov random fields whose associated probability distributions are specified by a single energy function. This function has been designed to fully embed the physics of the acoustic image-formation process by modeling a priori knowledge of the acoustic system, the considered scene, and the noise affecting measures, and also by integrating reliability information to allow the coupled and simultaneous reconstruction and restoration of both images. Optimal (in the maximum a posteriori probability sense) estimates of the reconstructed range image map and the restored confidence image are obtained by minimizing the energy function using simulated annealing. Experimental results show the improvement of the processed images over those obtained by other methods performing separate reconstruction and restoration processes that disregard reliability information.

4.
It sometimes happens (for instance in case control studies) that a classifier is trained on a data set that does not reflect the true a priori probabilities of the target classes on real-world data. This may have a negative effect on the classification accuracy obtained on the real-world data set, especially when the classifier's decisions are based on the a posteriori probabilities of class membership. Indeed, in this case, the trained classifier provides estimates of the a posteriori probabilities that are not valid for this real-world data set (they rely on the a priori probabilities of the training set). Applying the classifier as is (without correcting its outputs with respect to these new conditions) on this new data set may thus be suboptimal. In this note, we present a simple iterative procedure for adjusting the outputs of the trained classifier with respect to these new a priori probabilities without having to refit the model, even when these probabilities are not known in advance. As a by-product, estimates of the new a priori probabilities are also obtained. This iterative algorithm is a straightforward instance of the expectation-maximization (EM) algorithm and is shown to maximize the likelihood of the new data. Thereafter, we discuss a statistical test that can be applied to decide if the a priori class probabilities have changed from the training set to the real-world data. The procedure is illustrated on different classification problems involving a multilayer neural network, and comparisons with a standard procedure for a priori probability estimation are provided. Our original method, based on the EM algorithm, is shown to be superior to the standard one for a priori probability estimation. Experimental results also indicate that the classifier with adjusted outputs always performs better than the original one in terms of classification accuracy, when the a priori probability conditions differ from the training set to the real-world data. The gain in classification accuracy can be significant.
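The iterative adjustment described in this abstract is simple enough to sketch. Below is a minimal NumPy version of the EM update (function and parameter names are mine, not the paper's): the E-step reweights each stored posterior by the ratio of the current prior estimates to the training priors, and the M-step averages the adjusted posteriors to get new prior estimates.

```python
import numpy as np

def adjust_priors(posteriors, train_priors, n_iter=200, tol=1e-8):
    """EM adjustment of classifier posteriors to unknown new class priors.

    posteriors: (n_samples, n_classes) outputs of a classifier trained with
    class priors `train_priors`. Returns (new_priors, adjusted_posteriors).
    """
    posteriors = np.asarray(posteriors, dtype=float)
    train_priors = np.asarray(train_priors, dtype=float)
    new_priors = train_priors.copy()
    new_post = posteriors
    for _ in range(n_iter):
        # E-step: reweight each posterior by the ratio of new to old priors
        w = posteriors * (new_priors / train_priors)
        new_post = w / w.sum(axis=1, keepdims=True)
        # M-step: the new priors are the average adjusted posteriors
        updated = new_post.mean(axis=0)
        if np.abs(updated - new_priors).max() < tol:
            new_priors = updated
            break
        new_priors = updated
    return new_priors, new_post
```

As the abstract states, the fixed point maximizes the likelihood of the new (unlabeled) data under the shifted-prior model; no refitting of the classifier is needed.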

5.
Perceptual grouping organizes image parts in clusters based on psychophysically plausible similarity measures. We propose a novel grouping method in this paper, which stresses connectedness of image elements via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when objects are distributed on low-dimensional extended manifolds in a feature space, and not as local point clouds. In addition to extracting connected structures, objects are singled out as outliers when they are too far away from any cluster structure. The objective function for this perceptual organization principle is optimized by a fast agglomerative algorithm. We report on perceptual organization experiments where small edge elements are grouped to smooth curves. The generality of the method is emphasized by results from grouping textured images with texture gradients in an unsupervised fashion.

6.
Image segmentation by unifying region and boundary information (cited by 7: 0 self-citations, 7 others)
A two-stage method of image segmentation based on gray level cooccurrence matrices is described. An analysis of the distributions within a cooccurrence matrix defines an initial pixel classification into both region and interior or boundary designations. Local consistency of pixel classification is then implemented by minimizing the entropy of local information, where region information is expressed via conditional probabilities estimated from the cooccurrence matrices, and boundary information via conditional probabilities which are determined a priori. The method robustly segments an image into homogeneous areas and generates an edge map. The technique extends easily to general edge operators. An example is given for the Canny operator. Applications to synthetic and forward-looking infrared (FLIR) images are given.
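As background for the region information used above: a gray-level cooccurrence matrix for a single displacement can be computed in a few lines. This is a generic sketch (function and parameter names are mine, not the paper's implementation), counting how often gray level i occurs at offset (dx, dy) from gray level j:

```python
import numpy as np

def cooccurrence_matrix(image, dx=1, dy=0, levels=256):
    """Gray-level cooccurrence matrix for one displacement (dx, dy >= 0).

    Entry [i, j] counts pairs where image[y, x] == i and
    image[y + dy, x + dx] == j.
    """
    h, w = image.shape
    src = image[:h - dy, :w - dx]   # reference pixels
    dst = image[dy:, dx:]           # displaced neighbors
    glcm = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)
    return glcm
```

Conditional probabilities of the kind the abstract mentions can then be estimated by normalizing each row of the matrix.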

7.
Perceptual organization offers an elegant framework to group low-level features that are likely to come from a single object. We offer a novel strategy to adapt this grouping process to objects in a domain. Given a set of training images of objects in context, the associated learning process decides on the relative importance of the basic salient relationships such as proximity, parallelness, continuity, junctions, and common region toward segregating the objects from the background. The parameters of the grouping process are cast as probabilistic specifications of Bayesian networks that need to be learned. This learning is accomplished using a team of stochastic automata in an N-player cooperative game framework. The grouping process, which is based on graph partitioning, is able to form large groups from relationships defined over a small set of primitives and is fast. We statistically demonstrate the robust performance of the grouping and the learning frameworks on a variety of real images. Among the interesting conclusions is the significant role of photometric attributes in grouping and the ability to form large salient groups from a set of local relations, each defined over a small number of primitives.

8.
A MAP super-resolution processing algorithm with pre-estimated aliasing degree (cited by 15: 1 self-citation, 15 others)
孟庆武 (Meng Qingwu), 《软件学报》 (Journal of Software), 2004, 15(2): 207-214
A PEMAP (pre-estimated MAP, maximum a posteriori) algorithm with pre-estimated aliasing degree is proposed for ground super-resolution processing of satellite images. The algorithm determines the aliasing degree of a satellite image through frequency-domain analysis and uses it as prior information to control the iterations of MAP estimation in the spatial domain, jointly estimating inter-frame displacements and the high-resolution image. This overcomes the blindness and instability of the plain MAP algorithm and improves its adaptability. Processing of real satellite images shows good results.

9.
The theory and practice of Bayesian image labeling (cited by 10: 5 self-citations, 5 others)
Image analysis that produces an image-like array of symbolic or numerical elements (such as edge finding or depth map reconstruction) can be formulated as a labeling problem in which each element is to be assigned a label from a discrete or continuous label set. This formulation lends itself to algorithms, based on Bayesian probability theory, that support the combination of disparate sources of information, including prior knowledge. In the approach described here, local visual observations for each entity to be labeled (e.g., edge sites, pixels, elements in a depth array) yield label likelihoods. Likelihoods from several sources are combined consistently in abstraction-hierarchical label structures using a new, computationally simple procedure. The resulting label likelihoods are combined with a priori spatial knowledge encoded in a Markov random field (MRF). From the prior probabilities and the evidence-dependent combined likelihoods, the a posteriori distribution of the labelings is derived using Bayes' theorem. A new inference method, Highest Confidence First (HCF) estimation, is used to infer a unique labeling from the a posteriori distribution that is consistent with both prior knowledge and evidence. HCF compares favorably to previous techniques, all equivalent to some form of energy minimization or optimization, in finding a good MRF labeling. HCF is computationally efficient and predictable and produces better answers (lower energies) while exhibiting graceful degradation under noise and least commitment under inaccurate models. The technique generalizes to higher-level vision problems and other domains, and is demonstrated on problems of intensity edge detection and surface depth reconstruction.

10.
This paper proposes a segmentation algorithm based on probabilistic reasoning to segment moving vehicles ahead of a moving vehicle in a road traffic scene. According to the perceptually known facts of a target, we extract image primitives and update a probabilistic expectation for the target to be in an image. Since a noisy image produces unreliable features and degrades detection and localization, it is important to select image primitives that are less sensitive to noise and represent the facts well. The probabilistic reasoning overcomes this problem based on MAP (maximum a posteriori) probability, which combines the prior and likelihood probabilities of image features using Bayes' rule.

11.
We introduce a formalism for optimal sensor parameter selection for iterative state estimation in static systems. Our optimality criterion is the reduction of uncertainty in the state estimation process, rather than an estimator-specific metric (e.g., minimum mean squared estimate error). The claim is that state estimation becomes more reliable if the uncertainty and ambiguity in the estimation process can be reduced. We use Shannon's information theory to select information-gathering actions that maximize mutual information, thus optimizing the information that the data conveys about the true state of the system. The technique explicitly takes into account the a priori probabilities governing the computation of the mutual information. Thus, a sequential decision process can be formed by treating the a priori probability at a certain time step in the decision process as the a posteriori probability of the previous time step. We demonstrate the benefits of our approach in an object recognition application using an active camera for sequential gaze control and viewpoint selection. We describe experiments with discrete and continuous density representations that suggest the effectiveness of the approach.
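The action-selection criterion can be sketched concretely for the discrete case. Assuming a discrete state prior p(s) and, for each candidate sensing action, an observation model with rows p(o|s) (a generic illustration with my own names, not the authors' code), one computes the mutual information I(S;O) per action and picks the maximizer:

```python
import numpy as np

def mutual_information(prior, obs_model):
    """I(S;O) in nats for a discrete state prior p(s) and likelihoods p(o|s).

    prior: (n_states,); obs_model: (n_states, n_obs), rows sum to 1.
    """
    joint = prior[:, None] * obs_model      # p(s, o)
    p_o = joint.sum(axis=0)                 # marginal p(o)
    denom = prior[:, None] * p_o[None, :]   # p(s) p(o)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / denom, 1.0)  # 0 log 0 := 0
    return float((joint * np.log(ratio)).sum())

def best_action(prior, obs_models):
    """Choose the sensing action whose observation model maximizes I(S;O)."""
    scores = [mutual_information(prior, m) for m in obs_models]
    return int(np.argmax(scores)), scores
```

The sequential decision process the abstract describes then amounts to replacing `prior` at each step with the posterior obtained from the previous observation.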

12.
A thresholding segmentation algorithm based on maximum between-class posterior cross-entropy (cited by 16: 0 self-citations, 16 others)
Starting from the between-class difference between object and background, a new thresholding segmentation algorithm based on the maximum between-class cross-entropy criterion is proposed. The algorithm assumes that the conditional distributions of object and background pixels follow normal distributions, uses Bayes' formula to estimate the posterior probabilities of a pixel belonging to the object and background regions, and then searches for the maximum cross-entropy between the posterior probabilities of these two regions. The characteristics and segmentation performance of the new algorithm are compared with those of thresholding algorithms based on minimum cross-entropy and on the classical Shannon entropy.
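The abstract does not pin down the exact form of its cross-entropy computation, so rather than guess at it, here is a well-defined threshold-selection criterion from the same family (Gaussian class-conditional models plus Bayes-rule class weights): the Kittler-Illingworth minimum-error criterion. This is a sketch of that related method, not the paper's algorithm; all names are mine:

```python
import numpy as np

def min_error_threshold(image, eps=1e-12):
    """Kittler-Illingworth minimum-error thresholding (related criterion).

    For each candidate threshold t, fit Gaussians to the two classes and
    score t by the criterion J(t); the best t minimizes J.
    """
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    best_t, best_j = 1, np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 < eps or w1 < eps:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0
        m1 = (levels[t:] * p[t:]).sum() / w1
        v0 = ((levels[:t] - m0) ** 2 * p[:t]).sum() / w0
        v1 = ((levels[t:] - m1) ** 2 * p[t:]).sum() / w1
        if v0 < eps or v1 < eps:   # degenerate one-level class: skip
            continue
        j = (1 + 2 * (w0 * np.log(np.sqrt(v0)) + w1 * np.log(np.sqrt(v1)))
               - 2 * (w0 * np.log(w0) + w1 * np.log(w1)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```

As in the abstract, the class priors (w0, w1) and Gaussian likelihoods implicitly define Bayes posteriors for the two regions; here they enter through the closed-form criterion J(t).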

13.
A thresholding segmentation algorithm based on maximum between-class posterior cross-entropy (cited by 4: 1 self-citation, 3 others)
Starting from the between-class difference between object and background, a new thresholding segmentation algorithm based on the maximum between-class cross-entropy criterion is proposed. The algorithm assumes that the conditional distributions of object and background pixels follow normal distributions, uses Bayes' formula to estimate the posterior probabilities of a pixel belonging to the object and background regions, and then searches for the maximum cross-entropy between the posterior probabilities of these two regions. The segmentation performance of the new algorithm is compared with that of thresholding algorithms based on minimum cross-entropy and on the classical Shannon entropy.

14.
Object recognition is the process of distinguishing a particular target (or type of target) from other targets (or other types of targets). A region neighborhood system under a high-order Markov random field is defined. Through Bayesian analysis, prior and likelihood models are constructed for an image-region metric characterized by covariance-matrix descriptors. A stochastic algorithm yields the maximum a posteriori estimate, from which the target's position and orientation are obtained; multiple random rectangles centered at the target's position are then generated, and the one with the largest coverage is taken as the target region. Matlab simulation experiments on recognizing zebra crossings in road scenes show that the method can identify a given target within a large region.

15.
16.
This paper presents a novel level set method for complex image segmentation, where the local statistical analysis and global similarity measurement are both incorporated into the construction of energy functional. The intensity statistical analysis is performed on local circular regions centered in each pixel so that the local energy term is constructed in a piecewise constant way. Meanwhile, the Bhattacharyya coefficient is utilized to measure the similarity between probability distribution functions for intensities inside and outside the evolving contour. The global energy term can be formulated by minimizing the Bhattacharyya coefficient. To avoid the time-consuming re-initialization step, the penalty energy term associated with a new double-well potential is constructed to maintain the signed distance property of level set function. The experiments and comparisons with four popular models on synthetic and real images have demonstrated that our method is efficient and robust for segmenting noisy images, images with intensity inhomogeneity, texture images and multiphase images.
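The global term above rests on the Bhattacharyya coefficient, which is easy to state concretely for discrete intensity distributions (a generic sketch with my own names, not the authors' implementation): it is 1 for identical distributions and 0 for non-overlapping ones, so minimizing it drives the inside and outside intensity distributions apart.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient between two discrete distributions.

    Inputs are (possibly unnormalized) histograms over the same bins;
    returns a value in [0, 1]: 1 for identical distributions, 0 for
    distributions with disjoint support.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(p * q).sum())
```

In a segmentation energy of the kind described, `p` and `q` would be the intensity histograms inside and outside the evolving contour.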

17.
This paper deals with the problem of depth recovery and image restoration from sparse and noisy image data. The image is modeled as a Markov random field and a new energy function is developed to effectively detect discontinuities in highly sparse and noisy images. The model provides an alternative to the use of a line process. Interpolation over missing data sites is first done using local characteristics to obtain initial estimates and then simulated annealing is used to compute the maximum a posteriori (MAP) estimate. A threshold on energy reduction per iteration is used to speed up simulated annealing by avoiding computation that contributes little to the energy minimization. Moreover, a minor modification of the posterior energy function gives improved results for random as well as structured sparsing problems. Results of simulations carried out on real range and intensity images along with details of the simulations are presented.

18.
Variational functionals such as the Mumford-Shah and Chan-Vese methods have a major impact on various areas of image processing. After over 10 years of investigation, they are still in widespread use today. These formulations optimize contours by evolution through gradient descent, which is known for its overdependence on initialization and its tendency to produce undesirable local minima. In this paper, we propose an image segmentation model in a variational nonlocal means framework based on a weighted graph. The advantages of this model are twofold. First, global-minimum (optimum) information from the convex formulation is taken into account to achieve better segmentation results. Second, the proposed global convex energy functionals combine nonlocal regularization and local intensity fitting terms. The nonlocal total variation regularization term based on the graph is able to preserve the detailed structure of target objects. At the same time, the modified local binary fitting term introduced into the model as the local fitting term can efficiently deal with intensity inhomogeneity in images. Finally, we apply the Split Bregman method to minimize the proposed energy functional efficiently. The proposed model has been applied to segmentation of real medical and remote sensing images. Compared with other methods, the proposed model is superior in terms of both accuracy and efficiency.

19.
Display Design of Process Systems Based on Functional Modelling (cited by 1: 0 self-citations, 1 other)
The prevalent way to present information in industrial computer displays is by using piping and instrumentation diagrams. Such interfaces have sometimes resulted in difficulties for operators because they are not sufficient to fulfil their needs. A systematic way that supports interface design therefore has to be considered. In the new design framework, two questions must be answered. Firstly, a modelling method is required to describe a process system. Such a modelling method can define the information content that must be displayed in interfaces. Secondly, how to communicate this information to operators efficiently must be considered. This will provide a basis for determining the visual forms that the information should take. This study discusses interface design of human–machine systems from these two points of view. Based on other scholars’ work, a comprehensive set of functional primitives is summarised as a basis to build a functional model of process systems. A library of geometrical presentations for these primitives is then developed. To support effective interface design, the concept of ‘functional macro’ is introduced and a way to map functional model to interface display is illustrated by applying several principles. To make our ideas clear, a central heating system is taken as an example and its functional model is constructed. Based on the functional model, the information to be displayed is determined. Several functional macros are then found in the model and their corresponding displays are constructed. Finally, by using the library of geometrical presentations for functional primitives and functional macros, the display hierarchy of the central heating system is developed. Reusability of functional primitives makes it possible to use the methodology to support interface design of different process systems.

20.
A stereo matching search method based on simulated annealing (cited by 2: 0 self-citations, 2 others)
Edge feature points of the image are selected as matching primitives, and the magnitude and direction of the edge gradient, together with its Laplacian value, are computed as feature attributes. Subject to a set of stereo-matching constraints, a global energy function and a state space are established; simulated annealing, by randomly perturbing the state space, drives the energy function to its global minimum and thereby accomplishes the stereo matching.
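The annealing loop itself is generic and can be sketched independently of the stereo energy (the function names and the toy quadratic energy below are mine, purely for illustration): random perturbations of the state are accepted with the Metropolis rule, so the search can escape local minima on its way toward the global minimum.

```python
import math
import random

def simulated_annealing(energy, neighbor, state,
                        t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimize `energy` by simulated annealing.

    `neighbor(state, rng)` proposes a random perturbation; worse states
    are accepted with probability exp(-delta/T) under a geometric cooling
    schedule. Returns the best state and its energy.
    """
    rng = random.Random(seed)
    e = energy(state)
    best, best_e = state, e
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        ce = energy(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if ce <= e or rng.random() < math.exp((e - ce) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e

# toy demo: minimize (x - 3)^2 over the integers, starting far from the minimum
best, best_e = simulated_annealing(lambda x: (x - 3) ** 2,
                                   lambda x, rng: x + rng.choice([-1, 1]),
                                   state=50)
```

In the stereo setting of the abstract, `state` would be a candidate assignment of correspondences and `energy` the global matching energy built from the constraints.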


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号