Similar Documents
20 similar documents retrieved.
1.
We propose a discrete regularization framework on weighted graphs of arbitrary topology, which unifies local and nonlocal processing of images, meshes, and more generally discrete data. The approach treats the problem as a variational one, consisting in minimizing a weighted sum of two energy terms: a regularization term based on the discrete p-Dirichlet form, and an approximation (fidelity) term. The proposed model is parametrized by the degree p of regularity, by the graph structure, and by the weight function. Minimizing this functional leads to a family of simple linear and nonlinear processing methods. In particular, this family includes the exact expression, or a discrete version, of several neighborhood filters, such as the bilateral filter and the nonlocal means filter. In the context of images, local and nonlocal regularization based on total variation models is the continuous analog of the proposed model, and the framework naturally provides a discrete extension of these regularization methods to any discrete data or functions.
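As a rough sketch of the kind of functional involved (the notation below is assumed here, not quoted from the paper), the discrete model minimizes a p-Dirichlet regularizer plus a fidelity term over the graph vertices:

```latex
\min_{f}\;\frac{1}{p}\sum_{v\in V}\Big(\sum_{u\sim v} w(u,v)\,\big(f(u)-f(v)\big)^{2}\Big)^{p/2}
\;+\;\frac{\lambda}{2}\sum_{v\in V}\big(f(v)-f^{0}(v)\big)^{2},
```

where f^0 is the observed data, w the edge weight function, and λ balances regularization against fidelity; p = 2 yields linear filtering while p = 1 gives a TV-like, edge-preserving behavior.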

2.
This paper presents a probabilistic framework based on Bayesian theory for the performance prediction and selection of an optimal segmentation algorithm. The framework models optimal algorithm selection as a process that accounts for the information content of an input image as well as the behavioral properties of a particular candidate segmentation algorithm. The input image information content is measured in terms of image features, while the candidate segmentation algorithm's behavioral characteristics are captured through segmentation quality features. Gaussian probability distribution models are used to learn the required relationships between the extracted image and algorithm features, and the framework is tested on the Berkeley Segmentation Dataset with four candidate segmentation algorithms.
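A minimal sketch of this kind of Gaussian-model-based selection rule, assuming one multivariate Gaussian per candidate algorithm over image features (the feature values, priors, and fitted parameters below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal

def select_algorithm(image_features, models, priors):
    """Pick the candidate segmentation algorithm with the highest posterior score.

    image_features : 1-D feature vector extracted from the input image.
    models         : dict  name -> (mean, covariance) of a Gaussian fitted on
                     training images for which that algorithm performed best.
    priors         : dict  name -> prior probability of the algorithm.
    """
    log_post = {}
    for name, (mean, cov) in models.items():
        # log p(features | algorithm) + log p(algorithm), up to a constant
        log_post[name] = (multivariate_normal.logpdf(image_features, mean, cov)
                          + np.log(priors[name]))
    return max(log_post, key=log_post.get), log_post

# Illustrative usage with two hypothetical candidates
models = {
    "graph_cut":  (np.array([0.4, 0.2]), np.eye(2) * 0.05),
    "mean_shift": (np.array([0.7, 0.5]), np.eye(2) * 0.08),
}
priors = {"graph_cut": 0.5, "mean_shift": 0.5}
best, scores = select_algorithm(np.array([0.45, 0.25]), models, priors)
```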

3.
In this paper, we present feature/detail preserving models for color image smoothing and segmentation using the Hamiltonian quaternion framework. First, we introduce a novel quaternionic Gabor filter (QGF) which can combine the color channels and the orientations in the image plane. We show that these filters are optimally localized both in the spatial and frequency domains and provide a good approximation to quaternionic quadrature filters. Using the QGFs, we extract the local orientation information in the color images. Second, in order to model this derived orientation information, we propose continuous mixtures of appropriate exponential basis functions and derive analytic expressions for these models. These analytic expressions take the form of spatially varying kernels which, when convolved with a color image or the signed distance function of an evolving contour (placed in the color image), yield a detail preserving smoothing and segmentation, respectively. Several examples on widely used image databases are shown to depict the performance of our algorithms.

4.
An Improved FoE Model for Image Deblurring
Image restoration from noisy and blurred images is one of the important tasks in image processing and computer vision systems. In this paper, an improved Fields of Experts model for deconvolution of isotropic Gaussian blur is developed, in which edges are preserved during deconvolution by introducing local prior information. Edges with different local backgrounds in a blurred image are retained because the local prior information is estimated adaptively. Experiments indicate that the proposed approach is capable of producing highly accurate solutions and preserves more edges and object boundaries than many other algorithms.

5.
Wavelet frame based models for image restoration have been extensively studied for the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the international congress of mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS lecture notes series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most wavelet frame based models designed in the past penalize the ℓ1 norm of the wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently, Zhang et al. (UCLA CAM Report, vol. 11-32, 2011) proposed to penalize the ℓ0 “norm” of the wavelet frame coefficients instead, and demonstrated significant improvements of their method over some commonly used ℓ1 minimization models in terms of the quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for ℓ0 minimization, based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method of Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), but can also generate recovered images of even higher quality. This study further supports the feasibility of using the ℓ0 “norm” for image restoration problems.
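In generic notation (assumed here; the paper's exact weighting and algorithmic details are not reproduced), the ℓ0 wavelet frame model penalizes the number of nonzero frame coefficients, and its proximal map, the basic building block inside splitting schemes such as DAL/MDAL, is a hard threshold:

```latex
\min_{u}\;\frac{1}{2}\,\|Au-f\|_{2}^{2}\;+\;\lambda\,\|Wu\|_{0},
\qquad
\operatorname{prox}_{\lambda\|\cdot\|_{0}}(c)_{i}=
\begin{cases}
c_{i}, & |c_{i}|>\sqrt{2\lambda},\\[2pt]
0, & \text{otherwise},
\end{cases}
```

where A is the blur or measurement operator, W the wavelet frame transform, and f the observed data.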

6.
Image Fusion for Enhanced Visualization: A Variational Approach
We present a variational model to perform the fusion of an arbitrary number of images while preserving the salient information and enhancing the contrast for visualization. We propose to use the structure tensor to simultaneously describe the geometry of all the inputs. The basic idea is that the fused image should have a structure tensor which approximates the structure tensor obtained from the multiple inputs. At the same time, the fused image should appear ‘natural’ and ‘sharp’ to a human interpreter. We therefore propose to combine the geometry merging of the inputs with perceptual enhancement and intensity correction. This is performed by minimizing a functional which implicitly takes into account a set of human vision characteristics.
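A minimal numpy sketch of merging the geometry of several inputs into a common structure tensor (the smoothing scale and the way inputs are stacked are assumptions, not the paper's exact construction):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fused_structure_tensor(images, sigma=1.5):
    """Sum the per-image structure tensors J = grad(I) grad(I)^T, then smooth.

    images : iterable of 2-D float arrays (the inputs to be fused).
    Returns the three independent tensor components (Jxx, Jxy, Jyy).
    """
    jxx = jxy = jyy = 0.0
    for img in images:
        gy, gx = np.gradient(img.astype(float))  # gradients along rows, columns
        jxx = jxx + gx * gx
        jxy = jxy + gx * gy
        jyy = jyy + gy * gy
    # Local averaging gives a robust, per-pixel description of orientation
    return (gaussian_filter(jxx, sigma),
            gaussian_filter(jxy, sigma),
            gaussian_filter(jyy, sigma))
```

The eigenvectors of this summed tensor describe the dominant local orientation across all inputs, which the fused image is then asked to reproduce.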

7.
An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear-time computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and/or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints in the distance definition make it possible to efficiently handle occlusions, such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature.
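A rough sketch of scribble-based labeling by weighted geodesic distances on a 4-connected pixel graph (the edge-weight formula and the use of scipy's Dijkstra routine are assumptions; the paper computes the distances with a dedicated linear-time algorithm):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_labels(image, scribbles):
    """image: 2-D float array; scribbles: same shape, 0 = unlabeled, 1 or 2 = user labels."""
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, weights = [], [], []
    eps = 1e-4
    # 4-connected edges weighted by intensity differences (gradient-like weights)
    for sl_a, sl_b in [((slice(None), slice(None, -1)), (slice(None), slice(1, None))),   # horizontal
                       ((slice(None, -1), slice(None)), (slice(1, None), slice(None)))]:  # vertical
        a, b = idx[sl_a].ravel(), idx[sl_b].ravel()
        wgt = np.abs(image[sl_a].ravel() - image[sl_b].ravel()) + eps
        rows += [a, b]; cols += [b, a]; weights += [wgt, wgt]
    graph = coo_matrix((np.concatenate(weights),
                        (np.concatenate(rows), np.concatenate(cols))),
                       shape=(h * w, h * w)).tocsr()
    # Geodesic distance from each label's scribble pixels to every pixel
    dists = []
    for label in (1, 2):
        seeds = idx[scribbles == label]
        dists.append(dijkstra(graph, directed=False, indices=seeds, min_only=True))
    # Assign each pixel to the label whose scribbles are geodesically closer
    return np.where(dists[0] <= dists[1], 1, 2).reshape(h, w)
```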

8.
9.
Multimedia Tools and Applications - Occlusion removal is a significant problem to be resolved in a remote traffic control system to enhance road safety. However, the conventional techniques do not...

10.
Spatially varying mixture models are characterized by the dependence of their mixing proportions on location (contextual mixing proportions) and they have been widely used in image segmentation. In this work, Gauss-Markov random field (MRF) priors are employed along with spatially varying mixture models to ensure the preservation of region boundaries in image segmentation. To preserve region boundaries, two distinct models for a line process involved in the MRF prior are proposed. The first model considers edge preservation by imposing a Bernoulli prior on the normally distributed local differences of the contextual mixing proportions. It is a discrete line process model whose parameters are computed by variational inference. The second model imposes a Gamma prior on the Student’s-t distributed local differences of the contextual mixing proportions. It is a continuous line process whose parameters are automatically estimated by the Expectation-Maximization (EM) algorithm. The proposed models are numerically evaluated, and two important issues in image segmentation by mixture models are also investigated and discussed: the constraints required for the contextual mixing proportions to be probability vectors, and the MRF optimization strategy within the standard and variational EM algorithms.

11.
Predicate logic based reasoning approaches provide a means of formally specifying domain knowledge and manipulating symbolic information to explicitly reason about different concepts of interest. Extension of traditional binary predicate logics with the bilattice formalism permits the handling of uncertainty in reasoning, thereby facilitating their application to computer vision problems. In this paper, we propose using first order predicate logics, extended with a bilattice based uncertainty handling formalism, as a means of formally encoding pattern grammars, to parse a set of image features, and detect the presence of different patterns of interest. Detections from low level feature detectors are treated as logical facts and, in conjunction with logical rules, used to drive the reasoning. Positive and negative information from different sources, as well as uncertainties from detections, are integrated within the bilattice framework. We show that this approach can also generate proofs or justifications (in the form of parse trees) for each hypothesis it proposes, thus permitting direct analysis of the final solution in linguistic form. Automated logical rule weight learning is an important aspect of the application of such systems in the computer vision domain. We propose a rule weight optimization method which casts the instantiated inference tree as a knowledge-based neural network, interprets rule uncertainties as link weights in the network, and applies a constrained back-propagation algorithm to converge upon a set of rule weights that give optimal performance within the bilattice framework. Finally, we evaluate the proposed predicate logic based pattern grammar formulation via application to the problems of (a) detecting the presence of humans under partial occlusions and (b) detecting large complex man-made structures as viewed in satellite imagery. We also evaluate the optimization approach on real as well as simulated data and show favorable results.

12.
In this paper, we propose a general framework for fusing bottom-up segmentation with top-down object behavior inference over an image sequence. This approach is beneficial for both tasks, since it enables them to cooperate so that knowledge relevant to each can aid in the resolution of the other, thus enhancing the final result. In particular, the behavior inference process offers dynamic probabilistic priors to guide segmentation. At the same time, segmentation supplies its results to the inference process, ensuring that they are consistent both with prior knowledge and with new image information. The prior models are learned from training data and they adapt dynamically, based on newly analyzed images. We demonstrate the effectiveness of our framework via particular implementations that we have employed in the resolution of two hand gesture recognition applications. Our experimental results illustrate the robustness of our joint approach to segmentation and behavior inference in challenging conditions involving complex backgrounds and occlusions of the target object.

13.
In this paper, we present a new version of the famous Rudin-Osher-Fatemi (ROF) model for image restoration. The key point of the model is that it can reconstruct images degraded by blur and non-uniformly distributed noise. We develop this approach by adding several statistical control parameters to the cost functional, and these parameters can be adaptively determined from the given observed image. In this way, we can adaptively balance the fit-to-data term and the regularization term. Numerical experiments demonstrate the effectiveness and robustness of our model in restoring blurred images with mixed Gaussian noise or salt-and-pepper noise.
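In standard notation (assumed here; the paper's specific statistical control parameters are not reproduced), an ROF-type model with a blur operator K and a spatially adaptive fidelity weight has the form

```latex
\min_{u}\;\int_{\Omega}|\nabla u|\,dx\;+\;\int_{\Omega}\frac{\lambda(x)}{2}\,\big(Ku-f\big)^{2}\,dx,
```

where f is the observed image and λ(x) is estimated from local statistics of f, so that the data-fit term is weakened in heavily corrupted regions and strengthened elsewhere.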

14.
Generalized rigid and generalized affine registration and interpolation obtained by finite displacements and by optical flow are here developed variationally and numerically as well as with respect to a geometric multigrid solution process. For high order optimality systems under natural boundary conditions, it is shown that the convergence criteria of Hackbusch (Iterative Solution of Large Sparse Systems of Equations. Springer, Berlin, 1993) are met. Specifically, the Galerkin formalism is used together with a multi-colored ordering of unknowns to permit vectorization of a symmetric successive over-relaxation on image processing systems. The geometric multigrid procedure is situated as an inner iteration within an outer Newton or lagged diffusivity iteration, which in turn is embedded within a pyramidal scheme that initializes each outer iteration from predictions obtained on coarser levels. Differences between results obtainable by finite displacements and by optical flows are elucidated. Specifically, independence of image order can be shown for optical flow but in general not for finite displacements. Also, while autonomous optical flows are used in practice, it is shown explicitly that finite displacements generate a broader class of registrations. This work is motivated by applications in histological reconstruction and in dynamic medical imaging, and results are shown for such realistic examples.

15.
We first present a method to rule out the existence of parameter non-increasing polynomial kernelizations of parameterized problems under the hypothesis P≠NP. This method is applicable, for example, to the problem Sat parameterized by the number of variables of the input formula. Then we obtain further improvements of corresponding results in (Bodlaender et al. in Lecture Notes in Computer Science, vol. 5125, pp. 563–574, Springer, Berlin, 2008; Fortnow and Santhanam in Proceedings of the 40th ACM Symposium on the Theory of Computing (STOC’08), ACM, New York, pp. 133–142, 2008) by refining the central lemma of their proof method, a lemma due to Fortnow and Santhanam. In particular, assuming that the polynomial hierarchy does not collapse to its third level, we show that every parameterized problem with a “linear OR” and with NP-hard underlying classical problem does not have polynomial self-reductions that assign to every instance x with parameter k an instance y with |y| = k^{O(1)} · |x|^{1−ε} (here ε is any given real number greater than zero). We give various applications of these results. On the structural side we prove several results clarifying the relationship between the different notions of preprocessing procedures, namely the various notions of kernelizations, self-reductions and compressions.

16.
We address the problem of depth and ego-motion estimation from omnidirectional images. We formulate a correspondence-free structure-from-motion problem for sequences of images mapped onto the 2-sphere. A novel graph-based variational framework is first proposed for depth estimation between pairs of images. The estimation is cast as a TV-L1 optimization problem that is solved by a fast graph-based algorithm. The ego-motion is then estimated directly from the depth information without explicit computation of the optical flow. Both problems are finally addressed together in an iterative algorithm that alternates between depth and ego-motion estimation for fast computation of 3D information from motion in image sequences. Experimental results demonstrate the effective performance of the proposed algorithm for 3D reconstruction from synthetic and natural omnidirectional images.
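In generic form (notation assumed; the discrete graph-based version and the spherical parametrization are not reproduced here), the per-pair depth estimation is a TV-L1 problem over the spherical image domain:

```latex
\min_{d}\;\int_{S^{2}}|\nabla d|\,d\omega\;+\;\lambda\int_{S^{2}}\big|\rho(d;\,I_{1},I_{2})\big|\,d\omega,
```

where ρ(d; I1, I2) is the photometric residual obtained by warping I2 toward I1 according to the depth hypothesis d, and the total variation term enforces piecewise-smooth depth.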

17.
The main aim of this paper is to accelerate the Chambolle gradient projection method for total variation image restoration. In the proposed method, we use the well-known Barzilai-Borwein stepsize instead of the constant stepsize in Chambolle’s method. Further, we adopt the adaptive nonmonotone line search scheme proposed by Dai and Fletcher to guarantee the global convergence of the proposed method. Numerical results illustrate the efficiency of this method and indicate that such a nonmonotone method is more suitable for solving some large-scale inverse problems.
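A minimal sketch of a projected gradient iteration with the Barzilai-Borwein stepsize (generic objective and projection; the actual method applies this idea inside Chambolle's dual total-variation formulation together with a nonmonotone line search, which is not reproduced here):

```python
import numpy as np

def projected_gradient_bb(grad, project, x0, iters=100, alpha0=1.0):
    """Generic projected gradient descent with a Barzilai-Borwein (BB1) stepsize.

    grad    : callable returning the gradient of the smooth objective at x.
    project : callable projecting a point back onto the feasible set.
    """
    x = project(x0)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = project(x - alpha * g)
        g_new = grad(x_new)
        s = (x_new - x).ravel()
        y = (g_new - g).ravel()
        sy = float(s @ y)
        # BB1 stepsize s's / s'y; keep the previous stepsize when s'y is not positive
        alpha = float(s @ s) / sy if sy > 1e-12 else alpha
        x, g = x_new, g_new
    return x

# Illustrative usage: minimize ||x - b||^2 / 2 over the box [-1, 1]^3
b = np.array([2.0, -0.3, 0.7])
x_hat = projected_gradient_bb(lambda x: x - b,
                              lambda x: np.clip(x, -1.0, 1.0),
                              x0=np.zeros(3))
```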

18.
Interface evolution problems are often solved elegantly by the level set method, which generally requires the time-consuming reinitialization process. In order to avoid reinitialization, we reformulate the variational model as a constrained optimization problem. Then we present an augmented Lagrangian method and a projection Lagrangian method to solve the constrained model and propose two gradient-type algorithms. For the augmented Lagrangian method, we employ the Uzawa scheme to update the Lagrange multiplier. For the projection Lagrangian method, we use the variable splitting technique and get an explicit expression for the Lagrange multiplier. We apply the two approaches to the Chan-Vese model and obtain two efficient alternating iterative algorithms based on the semi-implicit additive operator splitting scheme. Numerical results on various synthetic and real images are provided to compare our methods with two others, which demonstrates the effectiveness and efficiency of our algorithms.
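In generic form (notation assumed here; E_CV denotes the Chan-Vese energy and the exact constraint handling in the paper may differ), replacing reinitialization by a signed-distance constraint on the level set function φ leads to an augmented Lagrangian of the type

```latex
\min_{\phi}\;E_{\mathrm{CV}}(\phi)\quad\text{s.t. }|\nabla\phi|=1,
\qquad
L_{r}(\phi,\mu)=E_{\mathrm{CV}}(\phi)
+\int_{\Omega}\mu\,\big(|\nabla\phi|-1\big)\,dx
+\frac{r}{2}\int_{\Omega}\big(|\nabla\phi|-1\big)^{2}\,dx,
```

with a Uzawa-type multiplier update μ ← μ + r(|∇φ| − 1) alternating with the minimization over φ.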

19.
Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of interest by a finite number of particles, where the particles interact between time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand, is a global optimization method derived from statistical mechanics. A recent heuristic approach that fuses these two techniques for motion capture has become known as the annealed particle filter. In order to analyze these techniques, we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on performance in various experiments, including the tracking of articulated bodies from noisy measurements. Our results provide general guidance on suitable parameter choices for different applications.
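A minimal sketch of one annealed-particle-filter time step (the energy function, noise schedule, and annealing exponents below are illustrative; the paper's analysis concerns precisely how such parameters should be chosen):

```python
import numpy as np

def annealed_filter_step(particles, energy, betas, noise_scales, rng):
    """One time step of a basic annealed particle filter.

    particles    : (N, d) array of particle states.
    energy       : callable mapping an (N, d) array to N non-negative costs.
    betas        : increasing annealing exponents, e.g. [0.2, 0.5, 1.0].
    noise_scales : diffusion standard deviation per annealing layer (same length as betas).
    """
    n = particles.shape[0]
    for beta, scale in zip(betas, noise_scales):
        # Weight particles more and more sharply as beta grows toward 1
        weights = np.exp(-beta * energy(particles))
        weights /= weights.sum()
        # Resample according to the weights, then diffuse the survivors
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx] + rng.normal(scale=scale, size=particles.shape)
    return particles

# Illustrative usage: a 2-D state whose true value is (1, 2)
rng = np.random.default_rng(0)
target = np.array([1.0, 2.0])
parts = rng.normal(size=(200, 2))
parts = annealed_filter_step(parts, lambda x: np.sum((x - target) ** 2, axis=1),
                             betas=[0.2, 0.5, 1.0], noise_scales=[0.5, 0.2, 0.1], rng=rng)
```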

20.
The blur in target images caused by camera vibration due to robot motion or hand shaking, and by objects moving in the background scene, is difficult to deal with in a computer vision system. In this paper, the authors study the relation model between motion and blur in the case of object motion in a video image sequence, and develop a practical computational algorithm for both motion analysis and blurred image restoration. Combining general optical flow with a stochastic process, the paper presents an approach by which the motion velocity can be calculated from blurred images. Conversely, the blurred image can also be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. For restoring the blurred image, an iterative algorithm and the obtained motion velocity are used. Experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.
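As a rough illustration of the restoration half of the problem only (this is a standard Richardson-Lucy deconvolution with a horizontal motion-blur kernel, used here as a stand-in; the paper instead estimates the motion with a MAP multiresolution optical-flow algorithm and restores the image with its own iteration):

```python
import numpy as np
from scipy.signal import convolve2d

def motion_psf(length):
    """Horizontal linear-motion blur kernel of the given length (in pixels)."""
    return np.ones((1, length)) / length

def richardson_lucy(blurred, psf, iterations=30):
    """Iterative deconvolution of a non-negative image with a known PSF."""
    estimate = np.full_like(blurred, 0.5, dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = convolve2d(estimate, psf, mode='same', boundary='symm')
        ratio = blurred / (reblurred + 1e-12)
        estimate = estimate * convolve2d(ratio, psf_mirror, mode='same', boundary='symm')
    return estimate

# Illustrative usage: blur a synthetic image with a 7-pixel motion kernel, then restore it
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
psf = motion_psf(7)
blurred = convolve2d(sharp, psf, mode='same', boundary='symm')
restored = richardson_lucy(blurred, psf)
```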
