Similar Documents
20 similar documents found.
1.
Nonlocal Image and Movie Denoising (cited 3 times: 0 self-citations, 3 by others)
Neighborhood filters are nonlocal image and movie filters which reduce noise by averaging similar pixels. The first aim of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented, justifying the use of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and further discussing a recently introduced neighborhood filter, NL-means. In order to compare denoising methods, three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrary to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will first be shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. Particular attention will be paid to the application of the statistical optimality criterion to movie denoising methods. It will be pointed out that current movie denoising methods are motion-compensated neighborhood filters. This amounts to saying that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately, the aperture problem makes it impossible to estimate ground-truth trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes, and that space-time NL-means preserves more movie details.
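For concreteness, here is a minimal NumPy sketch of the NL-means averaging rule for a single pixel of a grayscale float image; the patch size, search window and filtering parameter h are illustrative choices of this example, not the paper's settings.

```python
import numpy as np

def nl_means_pixel(img, y, x, patch=3, search=10, h=0.1):
    """Denoise pixel (y, x) by averaging pixels with similar patches."""
    r = patch // 2
    pad = np.pad(img, r + search, mode='reflect')
    yc, xc = y + r + search, x + r + search
    ref = pad[yc - r:yc + r + 1, xc - r:xc + r + 1]         # reference patch
    num = den = 0.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = pad[yc + dy - r:yc + dy + r + 1,
                       xc + dx - r:xc + dx + r + 1]         # candidate patch
            w = np.exp(-np.mean((ref - cand) ** 2) / h**2)  # similarity weight
            num += w * pad[yc + dy, xc + dx]
            den += w
    return num / den
```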

2.
Three-dimensional computer animation often struggles to compete with the flexibility and expressiveness commonly found in traditional animation, particularly when rendered non-photorealistically. We present an animation tool that takes skeleton-driven 3D computer animations and generates expressive deformations to the character geometry. The technique is based upon the cartooning and animation concepts of “lines of action” and “lines of motion” and automatically infuses computer animations with some of the expressiveness displayed by traditional animation. Motion and pose-based expressive deformations are generated from the motion data and the character geometry is warped along each limb’s individual line of motion. The effect of this subtle, yet significant, warping is twofold: geometric inter-frame consistency is increased which helps create visually smoother animated sequences, and the warped geometry provides a novel solution to the problem of implied motion in non-photorealistic imagery. Object-space and image-space versions of the algorithm have been implemented and are presented.

3.
The Bayesian method is widely used in image processing and computer vision to solve ill-posed problems. This is commonly achieved by introducing a prior which, together with the data constraints, determines a unique and hopefully stable solution. Choosing a “correct” prior is, however, a well-known obstacle. This paper demonstrates that in a certain class of motion estimation problems, the Bayesian technique of integrating out the “nuisance parameters” yields stable solutions even if a flat prior on the motion parameters is used. The advantage of the suggested method is more noticeable when the domain points approach a degenerate configuration, and/or when the noise is relatively large with respect to the size of the point configuration.
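The marginalization the abstract describes can be written compactly. Assuming θ denotes the motion parameters and λ the nuisance parameters (e.g., the 3D point configuration), the marginal posterior under a flat prior on θ is:

```latex
p(\theta \mid \mathcal{D}) \;\propto\; \underbrace{p(\theta)}_{\text{const}}
  \int p(\mathcal{D} \mid \theta, \lambda)\, p(\lambda \mid \theta)\, d\lambda ,
```

so the stability comes from averaging the likelihood over λ rather than from an informative prior on θ.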

4.
This work is part of a project aimed at developing automotive real-time observers based on detailed nonlinear multibody models and the extended Kalman filter (EKF). In previous works, a four-bar mechanism was studied to gain insight into the problem. Regarding the formulation of the equations of motion, it was concluded that the state-space reduction method known as matrix-R is the most suitable one for this application. Regarding the sensors, it was shown that better stability, accuracy and efficiency are obtained when the sensed magnitude is a lower-order derivative and when it is a generalized coordinate of the problem. In the present work, the automotive problem has been addressed, with a Volkswagen Passat selected as a case study. A model of the car containing fifteen degrees of freedom has been developed. The observer algorithm that combines the equations of motion and the integrator has been reformulated so that duplication of the problem size is avoided, in order to improve efficiency. A maneuver of acceleration from rest followed by a double lane change has been defined, and tests have been run for the “prototype,” the “model” and the “observer,” all three of them computational, with the model having 100 kg more mass than the prototype. Results have shown that good convergence is obtained for position-level sensors, but the computational cost is high, still far from real-time performance.
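As a point of reference, a single step of a generic EKF observer looks as follows; this is a textbook sketch, not the paper's matrix-R multibody formulation, and all function arguments (motion model f, measurement model h and their Jacobians F, H) are assumptions of the example.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One generic extended-Kalman-filter step: predict, then correct."""
    x_pred = f(x, u)                          # nonlinear state propagation
    Fk = F(x, u)                              # Jacobian of f at (x, u)
    P_pred = Fk @ P @ Fk.T + Q                # covariance propagation
    Hk = H(x_pred)                            # Jacobian of h at x_pred
    S = Hk @ P_pred @ Hk.T + R                # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))      # measurement update
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```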

5.
6.
Stable rankings for different effort models (cited 1 time: 0 self-citations, 1 by others)
There exists a large and growing number of proposed estimation methods, but little conclusive evidence ranking one method over another. Prior effort estimation studies suffered from “conclusion instability”, where the rankings of different methods were not stable across (a) different evaluation criteria; (b) different data sources; or (c) different random selections of that data. This paper reports a study of 158 effort estimation methods on data sets based on COCOMO features. Four “best” methods were detected that were consistently better than the “rest” (the other 154 methods). These rankings of “best” and “rest” methods were stable across (a) three different evaluation criteria applied to (b) multiple data sets from two different sources that were (c) divided into hundreds of randomly selected subsets using four different random seeds. Hence, while there exists no single universal “best” effort estimation method, there appears to exist a small number (four) of most useful methods. This result both complicates and simplifies effort estimation research. The complication is that any future effort estimation analysis should be preceded by a “selection study” that finds the best local estimator. However, the simplification is that such a study need not be labor-intensive, at least for COCOMO-style data sets.
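The abstract does not name its three evaluation criteria; as an illustration of the kind of criterion used in this literature, the sketch below computes MMRE (mean magnitude of relative error), a common effort-estimation measure.

```python
def mmre(actual, predicted):
    """Mean magnitude of relative error over paired effort values."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

print(mmre([100, 200, 400], [120, 150, 410]))  # -> ~0.16
```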

7.
Motion of points and lines in the uncalibrated case (cited 4 times: 4 self-citations, 0 by others)
In the present paper we address the problem of computing structure and motion, given a set of point and/or line correspondences in a monocular image sequence, when the camera is not calibrated. Considering point correspondences first, we analyse how to parameterize the retinal correspondences as a function of the chosen geometry: Euclidean, affine or projective. The simplest of these parameterizations is called the FQs-representation and is a composite projective representation. The main result is that, considering N+1 views in such a monocular image sequence, the retinal correspondences are parameterized by 11N − 4 parameters in the general projective case. Moreover, 3 other parameters are required to work in the affine case and 5 additional parameters in the Euclidean case. These 8 parameters are calibration parameters and must be calculated from at least 8 external pieces of information or constraints. The method being constructive, all these representations are made explicit. Then, considering line correspondences, we show how the same parameterizations can be used to analyse the motion of lines in the uncalibrated case. The case of three views is studied extensively and a geometrical interpretation is proposed, introducing the notion of trifocal geometry, which generalizes the well-known epipolar geometry. It is also discussed how to introduce line correspondences, in a framework based on point correspondences, using the same equations. Finally, considering the FQs-representation, an implementation is proposed as a motion module, taking retinal correspondences as input and providing an estimate of the 11N − 4 retinal motion parameters. As discussed in this paper, this module can also estimate the 3D depth of the points up to an affine or projective transformation, defined by the 8 parameters identified in the first section. Experimental results are provided.
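The 11N − 4 count follows from a degrees-of-freedom argument: each uncalibrated projective camera has 11 degrees of freedom, and the reconstruction is defined only up to a 3D projective transformation (15 degrees of freedom), so for N + 1 views:

```latex
11(N+1) - 15 \;=\; 11N - 4 .
```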

8.
We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences or identifying salient patterns in images. The term “irregular” depends on the context in which “regular” or “valid” is defined. Yet it is not realistic to expect an explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: we try to compose a new observed image region or a new video segment (“the query”) using chunks of data (“pieces of puzzle”) extracted from previous visual examples (“the database”). Regions in the observed data which can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database (or can be composed, but only using small fragmented pieces) are regarded as unlikely/suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, to detecting suspicious behaviors, and to automatic visual inspection for quality assurance. Patent pending.
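A crude stand-in for the composition idea can be written as a nearest-chunk likelihood; the sketch below scores query patches against database patches and flags poorly explained ones, omitting the paper's contiguity reasoning and graphical-model inference (sigma is an illustrative bandwidth).

```python
import numpy as np

def patch_scores(query_patches, db_patches, sigma=0.1):
    """Score each query patch by its best database match; low => suspicious."""
    scores = []
    for q in query_patches:
        d2 = np.min(np.sum((db_patches - q) ** 2, axis=1))  # nearest chunk
        scores.append(np.exp(-d2 / (2 * sigma ** 2)))
    return np.array(scores)
```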

9.
3D Motion Recovery via Affine Epipolar Geometry (cited 10 times: 6 self-citations, 4 by others)
Algorithms to perform point-based motion estimation under orthographic and scaled orthographic projection abound in the literature. A key limitation of many existing algorithms is that they operate on the minimum amount of data required, often requiring the selection of a suitable minimal set from the available data to serve as a local coordinate frame. Such approaches are extremely sensitive to errors and noise in the minimal set, and forfeit the advantages of using the full data set. Furthermore, attention is seldom paid to the statistical performance of the algorithms. We present a new framework that allows all available features to be used in the motion computations, without the need to select a frame explicitly. This theory is derived in the context of the affine camera, which preserves parallelism and generalises the orthographic, scaled orthographic and para-perspective models. We define the affine epipolar geometry for two such cameras, giving the fundamental matrix in this case. The noise-resistant computation of the epipolar geometry is discussed, and a statistical noise model is constructed so that confidence in the results can be assessed. The rigid motion parameters are then determined directly from the epipolar geometry, using the novel rotation representation of Koenderink and van Doorn (1991). The two-view partial motion solution comprises the scale factor between views, the projection of the 3D axis of rotation and the cyclotorsion angle, while the addition of a third view allows the true 3D rotation axis to be computed (up to a Necker reversal). The computed uncertainties in these parameters permit optimal estimates to be obtained over time by means of a linear Kalman filter. Our theory extends the work of Huang and Lee (1989), Harris (1990), and Koenderink and van Doorn (1991), and results are given on both simulated and real data.
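For reference, the affine fundamental matrix mentioned above has the standard form, with the epipolar constraint reducing to a single linear equation in the two image positions:

```latex
F_A = \begin{pmatrix} 0 & 0 & a \\ 0 & 0 & b \\ c & d & e \end{pmatrix},
\qquad
(x', y', 1)\, F_A \,(x, y, 1)^{\top} = a x' + b y' + c x + d y + e = 0 .
```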

10.
This paper investigates the problem of inserting new rush orders into the current schedule of a real-world job shop floor. Effective rescheduling methods must achieve reasonable levels of performance, measured according to a certain cost function, while preserving the stability of the shop floor, i.e., introducing as few changes as possible to the current schedule. This paper proposes new and effective match-up strategies which modify only part of the schedule in order to accommodate the arriving jobs. The proposed strategies are compared with other rescheduling methods such as “right shift” and “insertion in the end”, which are optimal with respect to stability but poor with respect to performance, and with “total rescheduling”, which is optimal with respect to performance but poor with respect to stability. Our results and statistical analysis reveal that the match-up strategies are comparable to “right shift” and “insertion in the end” with respect to stability, and as good as “total rescheduling” with respect to performance.
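As a baseline for comparison, the “right shift” strategy is easy to state; the sketch below applies it to a single-machine schedule of (start, duration) pairs and ignores operations already in progress, both simplifying assumptions of this example.

```python
def right_shift(schedule, rush_op):
    """Insert rush_op and delay every operation starting at or after it."""
    start, dur = rush_op
    shifted = [(s + dur, d) if s >= start else (s, d) for s, d in schedule]
    return sorted(shifted + [rush_op])

print(right_shift([(0, 2), (2, 3), (5, 1)], (2, 2)))
# -> [(0, 2), (2, 2), (4, 3), (7, 1)]
```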

11.
Michael Serra’s high school classroom became a “geometry cathedral” when students created panels with a stained-glass effect for the windows in the classroom. Projects like this can convert “math atheists” into “geometry believers”.

12.
Noise in textual data, such as that introduced by multilinguality, misspellings, abbreviations, deletions, phonetic spellings and non-standard transliteration, poses considerable problems for text mining. Such corruptions are very common in instant-messenger and short-message-service data, and they adversely affect off-the-shelf text mining methods. Most techniques address this problem with supervised methods that make use of hand-labeled corrections. But these require human-generated labels and corrections that are very expensive and time-consuming to obtain because of the multilinguality and complexity of the corruptions. While we do not champion unsupervised methods over supervised ones when quality of results is the singular concern, we demonstrate that unsupervised methods can provide cost-effective results without the need for the expensive human intervention that is necessary to generate a parallel labeled corpus. A generative-model-based unsupervised technique is presented that maps non-standard words to their corresponding conventional frequent forms. A hidden Markov model (HMM) over a “subsequencized” representation of words is used, where a word is represented as a bag of weighted subsequences. The approximate maximum-likelihood inference algorithm used is such that the training phase involves clustering over vectors, rather than the customary and expensive dynamic programming (Baum–Welch algorithm) over sequences that is necessary for HMMs. A principled transformation of the maximum-likelihood-based “central clustering” cost function of Baum–Welch into a “pairwise similarity” based clustering is proposed. This transformation makes it possible to apply “subsequence kernel” based methods that model delete and insert corruptions well. The novelty of this approach lies in the fact that the expensive (Baum–Welch) iterations required for HMMs can be avoided through an approximation of the log-likelihood function and by establishing a connection between the log-likelihood and a pairwise distance. Anecdotal evidence of efficacy is provided on public and proprietary data.
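The bag-of-weighted-subsequences representation can be sketched directly; here each word becomes a multiset of length-k subsequences weighted by a gap-decay factor, as in subsequence (string) kernels, with k and the decay chosen purely for illustration.

```python
from itertools import combinations
from collections import Counter

def subsequence_bag(word, k=2, decay=0.5):
    """Bag of length-k subsequences, down-weighted by the span they cover."""
    bag = Counter()
    for idx in combinations(range(len(word)), k):
        span = idx[-1] - idx[0] + 1                 # characters spanned
        bag[''.join(word[i] for i in idx)] += decay ** span
    return bag

print(subsequence_bag("gr8"))   # "gr" gets mass shared with, e.g., "great"
```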

13.
Theory and Practice of Projective Rectification (cited 13 times: 0 self-citations, 13 by others)
This paper gives a new method for image rectification, the process of resampling pairs of stereo images taken from widely differing viewpoints in order to produce a pair of matched epipolar projections. These are projections in which the epipolar lines run parallel to the x-axis; consequently, disparities between the images are in the x-direction only. The method is based on an examination of the fundamental matrix of Longuet-Higgins, which describes the epipolar geometry of the image pair. The approach taken is consistent with that advocated by Faugeras (1992) of avoiding camera calibration. The paper uses methods of projective geometry to determine a pair of 2D projective transformations to be applied to the two images in order to match the epipolar lines. The advantages include the simplicity of the 2D projective transformation, which allows very fast resampling, as well as subsequent simplification in the identification of matched points and scene reconstruction.
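OpenCV ships an uncalibrated rectification based on Hartley's approach; the sketch below shows the typical call sequence, with random placeholder matches standing in for real feature correspondences.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts1 = (rng.random((20, 2)) * [640, 480]).astype(np.float32)          # placeholder matches
pts2 = (pts1 + rng.normal(0, 2, (20, 2)) + [8, 0]).astype(np.float32)  # placeholder matches

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (640, 480))
# Warping each image with its homography (cv2.warpPerspective) yields a pair
# whose epipolar lines are horizontal, so disparities are in x only.
```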

14.
Methods for reconstruction and camera estimation from minimal data are often used to bootstrap robust (RANSAC and LMS) and optimal (bundle adjustment) structure and motion estimates. Minimal methods are known for projective reconstruction from two or more uncalibrated images, and for “5 point” relative orientation and Euclidean reconstruction from two calibrated cameras, but we know of no efficient minimal method for three or more calibrated cameras except the uniqueness proof by Holt and Netravali. We reformulate the problem of Euclidean reconstruction from the minimal data of four points in three or more calibrated images, and develop a random rational simulation method to show some new results on this problem. In addition to an alternative proof of the uniqueness of the solutions in general cases, we further show that unknown coplanar configurations are not singular, but the true solution is a double root. The solution from a known coplanar configuration is also generally unique. Some especially symmetric point-camera configurations lead to multiple solutions, but symmetry of the points alone or of the cameras alone still gives a unique solution.

15.
This paper proposes a robust method for the recovery of motion and structure from two image sequences taken by stereo cameras undergoing planar motion. The feature correspondences between images are extracted and refined automatically using the relation between the stereo cameras and the properties of the motion. To improve robustness, an auto-scale random sample consensus (RANSAC) algorithm is adopted in the motion and structure estimation. Unlike other work on recovering epipolar geometry, here we use a random sampling algorithm to recover the 2D motion and to exclude the outliers which lie both on and off the epipolar lines. Furthermore, the idea of RANSAC is used in structure estimation to exclude outliers from the image sequence. The contribution of this work is the development of an approach that makes structure and motion estimation more robust and efficient, so as to be applicable in real applications. With the adoption of the auto-scale technique, the algorithm completely automates the estimation process without any prior information or user-specified parameters such as thresholds. Indoor and outdoor experiments have been carried out to verify the performance of the algorithm. The results demonstrate that the proposed algorithm is robust and efficient for applications involving planar motion.
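The RANSAC core is a short loop; the sketch below is generic, with a fixed inlier threshold where the paper's auto-scale variant selects the scale automatically, and `fit`/`residual` are caller-supplied model functions assumed by this example.

```python
import numpy as np

def ransac(data, fit, residual, n_min, n_iter=500, thresh=1.0):
    """Fit models to random minimal samples; keep the largest inlier set."""
    rng = np.random.default_rng(0)
    best = np.zeros(len(data), dtype=bool)
    for _ in range(n_iter):
        sample = data[rng.choice(len(data), n_min, replace=False)]
        inliers = residual(fit(sample), data) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    if best.sum() < n_min:
        return None, best                 # no usable consensus found
    return fit(data[best]), best          # final fit on all inliers
```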

16.
Much work on skewed, stochastic, high-dimensional, and biased datasets implicitly solves each problem separately. Recently, we were approached by the Texas Commission on Environmental Quality (TCEQ) to help them build highly accurate ozone-level alarm forecasting models for the Houston area, where these technical difficulties come together in one single problem. Key characteristics that make this problem challenging and interesting include: (1) the dataset is sparse (72 features, and 2% or 5% positives depending on the criterion for “ozone days”); (2) it evolves over time from year to year; (3) the collected data are limited in size (7 years, or around 2,500 data entries); (4) it contains a large number of irrelevant features; (5) it is biased in terms of “sample selection bias”; and (6) the true model is stochastic as a function of measurable factors. Besides solving a difficult application problem, this dataset offers a unique opportunity to explore new and existing data mining techniques, and to provide experience, guidance and solutions for similar problems. Our main technical focus addresses how to estimate reliable probabilities given both sample selection bias and a large number of irrelevant features, and how to choose the most reliable decision threshold to predict an unknown future with a different distribution. On the application side, the prediction accuracy of our chosen approach (bagging probabilistic decision trees and random decision trees) is 20% higher in recall (correctly detecting 1–3 more ozone days, depending on the year) and 10% higher in precision (15–30 fewer false-alarm days per year) than the state-of-the-art methods used by air quality control scientists, and these results are significant for TCEQ. On the technical side of data mining, extensive empirical results demonstrate that, at least for this problem, and probably for other problems with similar characteristics, these two straightforward non-parametric methods can provide significantly more accurate and reliable solutions than a number of sophisticated and well-known algorithms, such as SVM and AdaBoost, among many others.
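In the spirit of the chosen approach (bagged probabilistic decision trees), a scikit-learn sketch follows; the synthetic features and labels merely mimic the 72-feature, rare-positive shape of the data, and the 0.1 threshold is an illustrative stand-in for the paper's threshold selection.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2500, 72))                                      # synthetic features
y = (X[:, 0] + rng.normal(scale=2.0, size=2500) > 3.3).astype(int)   # rare "ozone days"

model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
model.fit(X, y)
proba = model.predict_proba(X)[:, 1]     # estimated P(ozone day | features)
alarm = proba > 0.1                      # threshold tuned for recall, not 0.5
```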

17.
We present a new approach to the tracking of very non-rigid patterns of motion, such as water flowing down a stream. The algorithm is based on a “disturbance map”, which is obtained by linearly subtracting the temporal average of the previous frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a “head” at the present position of the motion and a historical “tail” that indicates the previous locations of that motion. These disturbances serve as loci of attraction for “tracking particles” that are scattered throughout the image. The algorithm is very fast and can be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving cameras, showing a busy ant column, waterfalls, rapids and flowing streams, shoppers in a mall, and cars in a traffic intersection. Received: 24 June 1997 / Accepted: 30 July 1998
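A running-average version of the disturbance map takes only a few lines; the decay rate alpha is an illustrative choice of this sketch, and a strict mean of the previous frames could be used instead, as the abstract states.

```python
import numpy as np

def disturbance_maps(frames, alpha=0.1):
    """Subtract a running temporal average from each new frame; motion leaves
    a bright "head" and a fading historical "tail"."""
    avg = frames[0].astype(float)
    maps = []
    for f in frames[1:]:
        maps.append(np.abs(f.astype(float) - avg))   # deviation from history
        avg = (1 - alpha) * avg + alpha * f           # update temporal average
    return maps
```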

18.
19.
Image and Vision Computing, 2002, 20(5–6): 441–448
In this paper, we address the problem of recovering structure and motion from the apparent contours of a smooth surface. Fixed image features under circular motion and their relationships with the intrinsic parameters of the camera are exploited to provide a simple parameterization of the fundamental matrix relating any pair of views in the sequence. This parameterization allows a trivial initialization of the motion parameters, all of which bear physical meaning. It also greatly reduces the dimension of the search space for the optimization problem, which can now be solved using only two epipolar tangents. In contrast to previous methods, the motion estimation algorithm introduced here can cope with incomplete circular motion and more widely spaced images. Existing techniques for model reconstruction from apparent contours are then reviewed and compared. Experiments on real data have been carried out, and the 3D model reconstructed from the estimated motion is presented.

20.
This study continues the analysis, initiated in [1], of the observability of the problem of refining the motion parameters of an orbital group of navigation spacecraft using intersatellite range measurements on a given interval. The analysis is performed in the framework of linear models connecting the measured and refined parameters on a measurement interval. A number of theorems and corollaries concerning the properties of observability in the problem of refining motion parameters using intersatellite range measurements are proved, for the cases of three and of all navigation spacecraft of the orbital group. It is shown that the motion parameters of an orbital group cannot all be refined uniquely using intersatellite range measurements between navigation spacecraft: three parameters are always “unobservable”. Recommendations for the construction of an onboard algorithm for intersatellite measurement processing are given.
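A toy calculation illustrates why some directions are unobservable from range-only measurements: with pairwise ranges between nodes on a line, a common shift of all nodes changes no measurement, so the measurement Jacobian is rank-deficient by one (the paper's 3D orbital setting yields three such directions).

```python
import numpy as np

x = np.array([0.0, 1.0, 2.5, 4.0])               # toy node positions on a line
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
J = np.zeros((len(pairs), 4))
for k, (i, j) in enumerate(pairs):
    s = np.sign(x[i] - x[j])
    J[k, i], J[k, j] = s, -s                     # d|x_i - x_j| / dx
print(4 - np.linalg.matrix_rank(J))              # -> 1 unobservable direction
```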
