Similar Documents
20 similar documents retrieved (search time: 93 ms).
1.
A method is introduced for recovering compact volumetric models for shape representation of single-part objects in computer vision. The models are superquadrics with parametric deformations (bending, tapering, and cavity deformation). The input for model recovery is a set of three-dimensional range points. Model recovery is formulated as a least-squares minimization of a cost function over all range points belonging to a single part. During an iterative gradient descent minimization, all model parameters are adjusted simultaneously, recovering the position, orientation, size, and shape of the model so that most of the given range points lie close to the model's surface. A specific solution among the several acceptable ones, all of which are minima in the parameter space, can be reached by constraining the search to a part of the parameter space. The many shallow local minima in the parameter space are avoided by using a stochastic technique during minimization. Results on real range data show that the recovered models are stable and that the recovery procedure is fast.
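As a hedged illustration of the fitting step described above, the sketch below fits an undeformed superquadric to range points by least squares. The inside-outside function and the size bias follow the standard superquadric formulation; the parameter names, the fixed model-centered frame, and the use of scipy's least_squares are our own assumptions, not the paper's implementation (which also adjusts pose and deformation parameters and uses a stochastic technique).

```python
# A minimal sketch (not the authors' code) of fitting an undeformed
# superquadric to 3-D range points by least squares, assuming points
# are already expressed in the model-centered frame. Parameter names
# (a1, a2, a3, eps1, eps2) are conventional, not taken from the paper.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts):
    a1, a2, a3, e1, e2 = params
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    f = ((np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
         + np.abs(z / a3) ** (2 / e1))
    # f**e1 == 1 on the model surface; the sqrt(a1*a2*a3) factor
    # biases the fit toward the smallest acceptable model.
    return np.sqrt(a1 * a2 * a3) * (f ** e1 - 1.0)

# Synthetic range points on a unit sphere (a superquadric with e1=e2=1).
rng = np.random.default_rng(0)
p = rng.normal(size=(500, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)

fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 1.0, 1.0],
                    bounds=([0.01] * 3 + [0.1] * 2, [10.0] * 3 + [2.0] * 2),
                    args=(p,))
print(fit.x)  # should approach (1, 1, 1, 1, 1)
```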

2.
The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when many small high-resolution 3-D images are assembled to model a large object. In some applications, such as 3-D modeling of Cultural Heritage, metric accuracy is a major issue, and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of a 3-D model obtained through iterative alignments of many range maps can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close-range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. These coordinates, set as reference points, allow the proper rigid motion of a few key range maps, each including a portion of the targets, into the global reference system defined by photogrammetry. The other 3-D images are then aligned around these locked images with the usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment methods, are reported.
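To make the "locking" step concrete, here is a minimal sketch, under our own assumptions, of estimating the rigid motion that places a key range map into the photogrammetric frame from matched target coordinates. It uses the classical SVD (Kabsch / absolute-orientation) solution; the function name and the synthetic data are illustrative, not the authors' code.

```python
# Given target coordinates measured in a range map (src) and the
# photogrammetric reference coordinates of the same targets (dst),
# estimate the rigid motion (R, t) with dst ~ R @ src + t.
import numpy as np

def rigid_motion(src, dst):
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Usage: apply (R, t) to every vertex of the key range map.
rng = np.random.default_rng(1)
src = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_motion(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1, 2, 3]))
```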

3.
4.
Cluster ZTP (zero-temperature process) is proposed as a method of image recovery and examined on 8-valued images. Cluster ZTP is an iterative algorithm that searches for an approximately optimal solution of an energy minimization problem.

5.
The gradient vector flow (GVF) deformable model was introduced by Xu and Prince as an effective approach to overcome the limited capture range of classical deformable models and their inability to progress into boundary concavities. It has found many important applications in medical image processing. The simple iterative method proposed in the original work on GVF, however, is slow to converge. A new multigrid method is proposed for GVF computation on 2D and 3D images. Experimental results show that the new implementation improves the computational speed by at least an order of magnitude, which facilitates the application of GVF deformable models to large medical images.
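For reference, the simple iterative scheme that the multigrid method accelerates looks roughly like the numpy sketch below: a Jacobi-style update following the Euler-Lagrange equations of the GVF energy. Parameter values (mu, dt, iteration count) and the periodic boundary handling are illustrative assumptions.

```python
# A minimal sketch of the original (slow) iterative GVF computation.
import numpy as np

def gvf(edge_map, mu=0.2, n_iter=500, dt=0.25):
    fy, fx = np.gradient(edge_map)          # edge-map gradient
    u, v = fx.copy(), fy.copy()             # initialize field with it
    mag2 = fx**2 + fy**2
    for _ in range(n_iter):
        for w, f in ((u, fx), (v, fy)):
            # 5-point Laplacian; periodic boundaries via roll, for brevity
            lap = (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                   np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4 * w)
            w += dt * (mu * lap - mag2 * (w - f))   # in-place update
    return u, v

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                     # toy binary shape
gy, gx = np.gradient(img)
u, v = gvf(gx**2 + gy**2)                   # field that drives a snake
```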

6.
Recovering the depth of a three-dimensional scene from defocused images is an ill-posed problem. A new depth-from-defocus algorithm based on total variation is proposed. Depth recovery from defocused images is first cast as the extremization of an energy functional with a total-variation regularization term; the minimization problem is then converted, via the variational principle, into the solution of a partial differential equation; finally, the optimal depth is obtained by iterating that equation. The algorithm avoids inverting the ill-posed problem and recovering the focused image. Experimental results on both simulated and real images show that the algorithm is effective and attains a smaller root-mean-square error than the least-squares method.
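To illustrate the iteration at the heart of such a scheme, here is a hedged sketch of gradient descent on a total-variation-regularized energy. The paper's data term couples depth to a defocus imaging model; we substitute a plain quadratic fidelity term against an initial depth estimate d0, so this is TV-regularized smoothing rather than the full algorithm.

```python
# Gradient descent on E(d) = 0.5*|d - d0|^2 + lam * TV(d),
# iterated as a PDE: d_t = lam * div(grad d / |grad d|) - (d - d0).
import numpy as np

def tv_depth(d0, lam=0.1, n_iter=200, dt=0.1, eps=1e-3):
    d = d0.copy()
    for _ in range(n_iter):
        dy, dx = np.gradient(d)
        mag = np.sqrt(dx**2 + dy**2 + eps**2)    # regularized |grad d|
        # divergence of the normalized gradient (curvature / TV term)
        div = (np.gradient(dx / mag, axis=1) +
               np.gradient(dy / mag, axis=0))
        d += dt * (lam * div - (d - d0))         # descend the energy
    return d

rng = np.random.default_rng(6)
d0 = np.sin(np.linspace(0, 3, 64))[:, None] + 0.05 * rng.normal(size=(64, 64))
d = tv_depth(d0)                                 # smoothed depth map
```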

7.
Segmentation through variable-order surface fitting
The solution of the segmentation problem requires a mechanism for partitioning the image array into low-level entities based on a model of the underlying image structure. A piecewise-smooth surface model for image data possessing surface coherence properties is used to develop an algorithm that simultaneously segments a large class of images into regions of arbitrary shape and approximates the image data with bivariate functions, so that a complete, noiseless image reconstruction can be computed from the extracted functions and regions. Surface curvature sign labeling provides an initial coarse segmentation, which is refined by an iterative region-growing method based on variable-order surface fitting. Experimental results show the algorithm's performance on six range images and three intensity images.
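A minimal sketch of the variable-order fitting idea, under our own simplifications: fit bivariate polynomials of increasing degree to a region's data and accept the lowest order whose RMS residual falls below a noise threshold. The curvature-sign seeding and region growing from the paper are omitted, and the threshold is an assumed constant.

```python
# Variable-order bivariate polynomial fitting by linear least squares.
import numpy as np

def design(x, y, order):
    cols = [np.ones_like(x)]
    for d in range(1, order + 1):
        for i in range(d + 1):
            cols.append(x ** (d - i) * y ** i)   # all monomials of degree d
    return np.stack(cols, axis=1)

def variable_order_fit(x, y, z, max_order=4, tol=0.05):
    for order in range(1, max_order + 1):
        A = design(x, y, order)
        coef, *_ = np.linalg.lstsq(A, z, rcond=None)
        rms = np.sqrt(np.mean((A @ coef - z) ** 2))
        if rms < tol:                            # lowest adequate order wins
            return order, coef, rms
    return max_order, coef, rms

rng = np.random.default_rng(7)
x, y = rng.uniform(-1, 1, (2, 400))
z = 0.5 + 0.3 * x - 0.2 * y + 0.3 * x * y + 0.01 * rng.normal(size=400)
order, coef, rms = variable_order_fit(x, y, z)
print(order, rms)    # expect order 2 for this quadratic surface
```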

8.
This paper presents three computationally efficient solutions to the image interpolation problem, developed in a general framework that treats interpolation as an inverse problem. Based on the observation model, the objective is to obtain a high-resolution image as close as possible to the original high-resolution image, subject to certain constraints. In the first solution, a linear minimum mean square error (LMMSE) approach is suggested. The assumptions required to reduce the computational complexity of the LMMSE solution are presented, and the sensitivity of the solution to these assumptions is studied. The second solution maximizes the a priori entropy of the required high-resolution image; its implementation as a single sparse matrix inversion is presented. Finally, the well-known regularization technique, normally used iteratively in image interpolation and restoration, is revisited. An efficient sectioned implementation of regularized image interpolation is presented that avoids the large number of iterations required by the iterative technique. In this regularized solution, the computational time is linearly proportional to the dimensions of the image to be interpolated, and only a single matrix inversion of moderate dimensions is required, which allows images of any dimensions to be interpolated, a serious limitation of iterative techniques. The effect of the choice of the regularization parameter on the regularized interpolation solution is studied. The performance of all the above solutions is compared to traditional polynomial-based interpolation techniques, such as cubic O-MOMS, and to iterative interpolation. The suitability of each solution for interpolating different images is also studied.
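As a toy illustration of the single-matrix-inversion viewpoint, the 1-D sketch below (our own construction, not the paper's sectioned implementation) solves x_hat = (H^T H + lam C^T C)^{-1} H^T y, with H a blur-and-decimate observation operator and C a second-difference roughness penalty.

```python
# Regularized interpolation of a 1-D signal as one linear solve.
import numpy as np

N, factor, lam = 64, 2, 0.1
M = N // factor

# Observation operator: average each pair of HR samples, then decimate.
H = np.zeros((M, N))
for i in range(M):
    H[i, factor * i:factor * (i + 1)] = 1.0 / factor

# Second-difference regularization matrix.
C = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))

x = np.sin(np.linspace(0, 4 * np.pi, N))        # "original" HR signal
y = H @ x                                       # LR observation
x_hat = np.linalg.solve(H.T @ H + lam * C.T @ C, H.T @ y)
print(np.sqrt(np.mean((x_hat - x) ** 2)))       # reconstruction RMSE
```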

9.
In this paper, we revisit the mean-variance model of Markowitz and the construction of the risk-return efficient frontier. A few other models that use alternative risk metrics are also introduced, such as the mean absolute deviation, the minimax and maximin, and models with diagonal quadratic objectives. We then present a neurodynamic model for solving these kinds of problems. By employing a Lyapunov function approach, it is shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated on several portfolio selection examples.
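For context, the underlying Markowitz problem that the neurodynamic model solves can be stated as a small quadratic program: minimize portfolio variance w^T S w subject to a target mean return and full investment. The sketch below uses a generic SLSQP solver in place of the paper's neural network, and the numbers are made up.

```python
# One point on the mean-variance efficient frontier.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.10, 0.12, 0.07])               # expected returns
S = np.array([[0.09, 0.02, 0.01],
              [0.02, 0.16, 0.03],
              [0.01, 0.03, 0.04]])              # covariance matrix
target = 0.10

res = minimize(lambda w: w @ S @ w, x0=np.ones(3) / 3,
               constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1},
                            {'type': 'eq', 'fun': lambda w: w @ mu - target}],
               bounds=[(0, 1)] * 3, method='SLSQP')
print(res.x, res.fun)   # efficient portfolio and its variance
```

Sweeping `target` over a range of returns and re-solving traces out the risk-return efficient frontier.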

10.
11.
Generic model abstraction from examples
The recognition community has typically avoided bridging the representational gap between traditional low-level image features and generic models. Instead, the gap has been artificially eliminated either by bringing the image closer to the models, using simple scenes containing idealized, textureless objects, or by bringing the models closer to the images, using 3D CAD model templates or 2D appearance model templates. In this paper, we attempt to bridge the representational gap for the domain of model acquisition. Specifically, we address the problem of automatically acquiring a generic 2D view-based class model from a set of images, each containing an exemplar object belonging to that class. We introduce a novel graph-theoretical formulation of the problem in which we search for the lowest common abstraction among a set of lattices, each representing the space of all possible region groupings in a region adjacency graph representation of an input image. The problem is intractable, so we present a shortest-path-based approximation algorithm that yields an efficient solution. We demonstrate the approach on real imagery.

12.
Three-dimensional appearance models consisting of spatially varying reflectance functions defined on a known shape can be used in analysis-by-synthesis approaches to a number of visual tasks. The construction of these models requires the measurement of reflectance, and the problem of recovering spatially varying reflectance from images of known shape has drawn considerable interest. To date, existing methods rely on either: 1) low-dimensional (e.g., parametric) reflectance models, or 2) large data sets involving thousands of images (or more) per object. Appearance models based on the former have limited accuracy and generality, since they require the selection of a specific reflectance model a priori, while approaches based on the latter, though suitable for certain applications, are generally too costly and cumbersome for image analysis. We present an alternative approach that seeks to combine the benefits of existing methods by enabling the estimation of a nonparametric spatially varying reflectance function from a small number of images. We frame the problem as scattered-data interpolation in a mixed spatial and angular domain, and we present a theory demonstrating that the angular accuracy of a recovered reflectance function can be increased in exchange for a decrease in its spatial resolution. We also present a practical solution to this interpolation problem using a new representation of reflectance based on radial basis functions. This representation is evaluated experimentally by testing its ability to predict appearance under novel view and lighting conditions. Our results suggest that since reflectance typically varies slowly from point to point over much of an object's surface, we can often obtain a nonparametric reflectance function from a sparse set of images. In fact, in some cases, we can obtain reasonable results in the limiting case of only a single input image.
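A hedged sketch of the core representation: reflectance samples treated as scattered data in a mixed spatial-angular domain and interpolated with radial basis functions. The coordinate parameterization (surface point plus one angular coordinate), the synthetic reflectance values, and the thin-plate-spline kernel are illustrative assumptions, not the paper's exact representation.

```python
# Scattered-data RBF interpolation of sparse reflectance samples.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
# Sparse samples: (u, v, half-angle) -> observed reflectance value.
pts = rng.uniform(0, 1, size=(200, 3))
vals = 0.3 + 0.5 * np.exp(-8 * (pts[:, 2] - 0.2) ** 2) * (1 - 0.2 * pts[:, 0])

rbf = RBFInterpolator(pts, vals, kernel='thin_plate_spline')

# Predict reflectance at a novel view/lighting configuration.
query = np.array([[0.5, 0.5, 0.25]])
print(rbf(query))
```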

13.
The dominance-based rough set approach is proposed as a methodology for plunge grinding process diagnosis. The process is analyzed, and its diagnosis is then treated as a multi-criteria decision-making problem based on modelling the relationships between different process states and their symptoms, using a set of rules induced from measured process data. The development of the diagnostic system proceeds in three phases. First, the experimental process data are prepared in the form of a decision table: using selected signal processing methods, each process run is described by 17 process-state features (condition attributes) and 5 criteria evaluating the process state and results (decision attributes), and the semantic correlation between all attributes is modelled. Next, the condition-attribute selection and knowledge-extraction phases are tightly integrated with the model-evaluation phase in an iterative approach. After each loop of the iterative feature selection procedure, rules are induced with the VC-DomLEM algorithm, and the classification capability of the induced rules is assessed using the leave-one-out method and a set of measures. The classification accuracy of the individual models ranges from 80.77% to 98.72%. The induced rule set constitutes a classifier for assessing new process runs.

14.
This paper deals with the problem of image retrieval from large image databases. A particularly interesting problem is retrieving all images similar to the one in the user's mind, taking into account feedback expressed as positive or negative preferences for the images the system progressively shows during the search. Here we present a novel algorithm for incorporating user preferences into an image retrieval system based exclusively on the visual content of the image, which is stored as a vector of low-level features. The algorithm considers the probability of an image belonging to the set of those sought by the user, and models the logit of this probability as the output of a generalized linear model whose inputs are the low-level image features. The image database is ranked by the output of the model and shown to the user, who selects a few positive and negative samples, repeating the process iteratively until satisfied. The problem of the small sample size relative to the number of features is solved by fitting several partial generalized linear models and combining their relevance probabilities by means of an ordered weighted averaging operator. Experiments with 40 users showed good performance in finding a target image (4 iterations on average) in a database of about 4700 images, with a mean of 4 positive and 6 negative examples per iteration. Clustering the users into sets also reveals consistent patterns of behavior.
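One feedback round might look like the sketch below, assuming scikit-learn: fit a logistic (logit) model on the user's positive and negative examples and re-rank the database by predicted relevance probability. The paper's combination of several partial models via an ordered weighted averaging operator is omitted here, and all data are synthetic.

```python
# One relevance-feedback iteration with a single logit model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
db = rng.uniform(size=(4700, 16))        # low-level feature vectors

pos_idx = [10, 42, 100, 7]               # images the user marked +
neg_idx = [3, 55, 200, 301, 400, 12]     # images the user marked -
X = db[pos_idx + neg_idx]
y = np.array([1] * len(pos_idx) + [0] * len(neg_idx))

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(db)[:, 1]   # P(image is relevant)
ranking = np.argsort(-scores)            # show the user the top hits
print(ranking[:10])
```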

15.
The development of optical tomographic imaging systems for biological tissue based on time-resolved near-infrared transillumination has received considerable interest recently. The reconstruction problem is ill-posed because photon propagation is scatter-dominated, and hence it requires both an accurate, fast transport model and a robust convergence scheme. The iterative image recovery algorithm described in this paper uses a numerical finite-element solution of the diffusion equation as the photon propagation model. The model itself is used to compare the influence of absorbing and scattering inhomogeneities embedded in a homogeneous tissue sample on boundary measurements, in order to estimate the possibility of separating absorption and scattering images. Images of absorbers and scatterers reconstructed from both mean-time-of-flight and logarithmic intensity data are presented. It is found that mean-time-of-flight data offer increased resolution for reconstructing the scattering coefficient, whereas intensity data are favorable for reconstructing absorption.

16.
Image denoising plays an important role in image processing; it aims to separate clean images from noisy ones. Many methods have been presented to deal with this practical problem over the past decades. In this paper, a sparse coding algorithm using eigenvectors of the graph Laplacian (EGL-SC) is proposed for image denoising that accounts for the global structure of images. To exploit the geometric attributes of images, the eigenvectors of the graph Laplacian, derived from a graph of the noisy patches, are incorporated into the sparse model as a set of basis functions. Subsequently, the corresponding sparse coding problem is presented and efficiently solved with a relaxed iterative method in the framework of the double-sparsity model. Meanwhile, since the denoising performance of EGL-SC significantly depends on the number of eigenvectors used, an optimal strategy for selecting this number is employed. A parameter called the out-of-control rate records the percentage of denoised patches that suffer from serious residual errors in the sparse coding procedure; as the number of eigenvectors increases, an appropriate number can be heuristically selected once the out-of-control rate falls below an empirical threshold. Experiments illustrate that EGL-SC achieves better performance than some other well-developed denoising methods, especially in the structural-similarity index under noise of large deviation.
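A hedged sketch of how such a basis could be obtained: build a similarity graph over vectorized noisy patches, form the (unnormalized) graph Laplacian, and keep the eigenvectors with the smallest eigenvalues as basis functions capturing global structure. The Gaussian weighting and the number of eigenvectors are our assumptions; the paper selects the number adaptively via the out-of-control rate.

```python
# Eigenvectors of the graph Laplacian of a patch-similarity graph.
import numpy as np

rng = np.random.default_rng(4)
patches = rng.normal(size=(100, 64))            # 100 vectorized patches

# Gaussian similarity graph over patches.
d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W                  # unnormalized Laplacian
eigvals, eigvecs = np.linalg.eigh(L)

k = 16
basis = eigvecs[:, :k]   # eigenvectors with the smallest eigenvalues
print(basis.shape)       # candidate basis functions for the sparse model
```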

17.
Encoding of a priori information in active contour models
The theory of active contours models contour recovery as an energy minimization process. Computational solutions based on dynamic programming require that the energy associated with a contour candidate decompose into an integral of local energy contributions. In this paper we propose a grammatical framework that can model different local energy models and a set of allowable transitions between them. The grammatical encodings are used to represent a priori knowledge about the shape of the object and its signatures in the underlying images. The variability encountered in numerical experiments is addressed with an energy minimization procedure embedded in the grammatical framework. We propose an algorithmic solution that combines a nondeterministic version of the Knuth-Morris-Pratt algorithm for string matching with a time-delayed discrete dynamic programming algorithm for energy minimization. The numerical experiments address practical problems encountered in contour recovery, such as noise robustness and occlusion.
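To make the dynamic-programming component concrete, here is a minimal Viterbi-style sketch of discrete contour energy minimization: each contour point chooses among m candidate positions, and a forward pass plus backtracking finds the sequence with minimal total internal-plus-external energy. The grammatical encoding and the Knuth-Morris-Pratt matching from the paper are omitted.

```python
# Discrete DP over per-point candidate positions of an open contour.
import numpy as np

def dp_contour(candidates, ext_energy, alpha=1.0):
    """candidates: (n_points, m, 2) candidate xy positions;
    ext_energy: (n_points, m) local image energies."""
    n, m, _ = candidates.shape
    cost = ext_energy[0].copy()
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        # pairwise internal energy: squared distance between neighbors
        d = ((candidates[i][None, :, :] -
              candidates[i - 1][:, None, :]) ** 2).sum(-1)
        total = cost[:, None] + alpha * d + ext_energy[i][None, :]
        back[i] = np.argmin(total, axis=0)       # best predecessor
        cost = total[back[i], np.arange(m)]
    # backtrack the minimal-energy path
    path = [int(np.argmin(cost))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(8)
cand = rng.uniform(0, 10, size=(12, 5, 2))       # 12 points, 5 candidates each
ext = rng.uniform(size=(12, 5))                  # external (image) energies
print(dp_contour(cand, ext))
```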

18.
Statistical appearance models have previously been used in computer face recognition, where an image patch is synthesized and morphed to match a target face image using an automated iterative fitting algorithm. Here we describe an alternative use for appearance models: producing facial composite images (sometimes referred to as E-FIT or PhotoFIT images). This application poses an interesting real-world optimization problem because the target face exists only in the mind of the witness, not in a tangible form such as a digital image. To solve this problem we employ an interactive evolutionary algorithm that allows the witness to evolve a likeness to the target face. A system based on our approach, called EFIT-V, is used frequently by three quarters of UK police constabularies.
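A toy sketch of the interactive evolutionary loop: candidate faces live as appearance-model parameter vectors, and the witness's choice plays the role of the fitness function, simulated here by distance to a hidden target vector. All names and operators are illustrative, not EFIT-V internals.

```python
# Interactive-style EA with a simulated witness.
import numpy as np

rng = np.random.default_rng(5)
dim, pop_size = 20, 9
target = rng.normal(size=dim)              # exists only in the witness's mind
pop = rng.normal(size=(pop_size, dim))     # initial candidate faces

for generation in range(50):
    # Witness feedback, simulated: pick the candidate closest to target.
    scores = np.linalg.norm(pop - target, axis=1)
    best = pop[np.argmin(scores)]
    # Next generation: mutations around the selected face.
    pop = best + 0.3 * rng.normal(size=(pop_size, dim))
    pop[0] = best                          # keep the chosen face (elitism)

print(np.linalg.norm(best - target))       # likeness improves over rounds
```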

19.
The problem of see-through cancellation in digital images of double-sided documents is addressed. We show that a nonlinear convolutional data model proposed elsewhere for moderate show-through can also be effective on strong back-to-front interference, provided that the pure recto and verso patterns are estimated jointly. To this end, we propose a restoration algorithm that needs no classification of the pixels. The see-through PSFs are estimated off-line, and an iterative procedure is then employed for joint estimation of the pure patterns. This simple and fast algorithm can be used on both grayscale and color images and has proved very effective in real-world cases. The experimental results reported in this paper demonstrate that our algorithm outperforms those based on linear models, with no need to tune free parameters, and remains computationally inexpensive despite the nonlinear model and the iterative solution adopted. Strategies to overcome some of the residual difficulties are also envisaged.

20.
The process of knowledge discovery in databases consists of several iterative and interactive steps. In each application, working through this process requires the user to apply different algorithms and settings, which usually yield multiple models. Model selection, that is, the selection of appropriate models or of algorithms to produce such models, requires meta-knowledge of algorithms, models, and model performance metrics, and is therefore usually a difficult task for the user. We believe that simplifying model selection for the user is crucial to the success of real-life knowledge discovery activities. As opposed to most related work, which aims to automate model selection, in our view model selection is a semiautomatic process requiring effective collaboration between the user and the discovery system. For such a collaboration, our solution is to give the user the ability to try various alternatives and to compare competing models quantitatively, by performance metrics, and qualitatively, by effective visualization. This paper presents our research on model selection and visualization in the development of a knowledge discovery system called D2MS. The paper addresses the motivation of model selection in knowledge discovery and related work, gives an overview of D2MS, and describes its solution to model selection and visualization. It then presents the usefulness of D2MS model selection in two case studies of discovering medical knowledge in hospital data, on meningitis and stomach cancer, using three data mining methods: decision trees, conceptual clustering, and rule induction.
