Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or on image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor, or to use image-based rendering approaches, we need to be able to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time-varying plenoptic function to different known and new camera designs, we can establish a hierarchy of cameras based upon the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera, for which the structure-from-motion problem is non-linear and ill-posed. At the high end is a camera, which we call the full-field-of-view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene, leading to fast and robust algorithms for 3D photography. In between are multiple-view cameras with a large field of view, which we have built, as well as omni-directional sensors.

2.
We present an information theoretic approach to define the problem of structure from motion (SfM) as a blind source separation one. Given that for almost all practical joint densities of shape points, the marginal densities are non-Gaussian, we show how higher-order statistics can be used to provide improvements in shape estimates over the methods of factorization via Singular Value Decomposition (SVD), bundle adjustment and Bayesian approaches. Previous techniques have either explicitly or implicitly used only second-order statistics in models of shape or noise. A further advantage of viewing SfM as a blind source problem is that it easily allows for the inclusion of noise and shape models, resulting in Maximum Likelihood (ML) or Maximum a Posteriori (MAP) shape and motion estimates. A key result is that the blind source separation approach has the ability to recover the motion and shape matrices without the need to explicitly know the motion or shape pdf. We demonstrate that it suffices to know whether the pdf is sub- or super-Gaussian (i.e., semi-parametric estimation) and derive a simple formulation to determine this from the data. We provide extensive experimental results on synthetic and real tracked points in order to quantify the improvement obtained from this technique.
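The semi-parametric step the abstract describes — deciding only whether a pdf is sub- or super-Gaussian, rather than estimating the pdf itself — reduces to checking the sign of the excess kurtosis. A minimal sketch in Python; the function name and the simple moment-based test are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gaussianity_class(x):
    """Classify a 1-D sample as sub- or super-Gaussian from the sign of
    its excess kurtosis; no explicit knowledge of the pdf is needed."""
    z = (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    excess_kurtosis = np.mean(z**4) - 3.0
    return "super-Gaussian" if excess_kurtosis > 0 else "sub-Gaussian"

rng = np.random.default_rng(0)
print(gaussianity_class(rng.uniform(-1, 1, 100_000)))  # uniform: sub-Gaussian
print(gaussianity_class(rng.laplace(0, 1, 100_000)))   # Laplacian: super-Gaussian
```

A uniform density has negative excess kurtosis (about −1.2) and a Laplacian positive (+3), so the sign test is very reliable even on modest samples.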

3.
Motion estimation is widely used in video coding schemes in order to reduce the inherent temporal redundancy among the frames of a video stream. In particular, low and very low bit rate video coding schemes need sophisticated motion models which usually require a large number of arithmetic operations. In this paper we present a parallel algorithm for the most practical of these models. Specifically we implement the affine motion model on a hypercube-based multiprocessor. This model covers the most usual kinds of motion and requires only a modest number of arithmetic operations. Also, the hypercube network can efficiently handle the non-regular data flow resulting from the parallel implementation of this model. In addition, we assume that our multiprocessor is fine grained, in contrast to most programmable architectures used in video coding, where processors usually have large local memory. Apart from its practicality, the constraint of limited local memory makes the algorithm design more challenging and thus more theoretically interesting. Finally, with regard to other proposals in the literature, our scheme is more general: whereas our scheme covers all kinds of motion supported by the affine motion model, the rest of the proposals deal only with a subset of these kinds. Copyright © 2000 John Wiley & Sons, Ltd.
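The six-parameter affine model referred to above assigns each pixel a displacement that is linear in its coordinates, which is how it covers translation, rotation, zoom and shear with only a modest number of arithmetic operations per pixel. A sketch (parameter naming is ours, not the paper's):

```python
import numpy as np

def affine_motion_field(params, xs, ys):
    """Evaluate the six-parameter affine motion model at pixel coordinates:
        u(x, y) = a1 + a2*x + a3*y
        v(x, y) = a4 + a5*x + a6*y
    covering translation (a1, a4), rotation, scaling and shear (a2, a3, a5, a6)."""
    a1, a2, a3, a4, a5, a6 = params
    u = a1 + a2 * xs + a3 * ys
    v = a4 + a5 * xs + a6 * ys
    return u, v

# Pure translation by (2, -1): the linear terms are zero.
xs, ys = np.meshgrid(np.arange(4), np.arange(4))
u, v = affine_motion_field((2, 0, 0, -1, 0, 0), xs, ys)
```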

4.
In this paper, we propose an affine parameter estimation algorithm from block motion vectors for extracting accurate motion information, under the assumption that the underlying motion can be characterized by an affine model. The motion may be caused either by a moving camera or a moving object. The proposed method first extracts motion vectors from a sequence of images by using size-variable block matching and then processes them by adaptive robust estimation to estimate affine parameters. Typically, robust estimation filters out outliers (velocity vectors that do not fit the model) by fitting velocity vectors to a predefined model. To filter out potential outliers, our adaptive robust estimation defines a continuous weight function based on a Sigmoid function. During the estimation process, we tune the Sigmoid function gradually toward its hard limit as the errors between the model and the input data decrease, so that we can effectively separate non-outliers from outliers with the help of the finally tuned hard-limit form of the weight function. Experimental results show that the suggested approach is very effective in estimating affine parameters reliably.
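The adaptive robust estimation step can be sketched as iteratively re-weighted least squares with a Sigmoid weight whose slope is hardened over the iterations. This is an illustrative reconstruction under our own naming, scale, and hardening choices, not the authors' exact algorithm:

```python
import numpy as np

def sigmoid_weight(residuals, scale, hardness):
    """Continuous outlier weight: falls smoothly from ~1 toward 0 as
    |residual| exceeds `scale`; larger `hardness` approaches a hard limit."""
    arg = np.clip(hardness * (np.abs(residuals) - scale), -50.0, 50.0)
    return 1.0 / (1.0 + np.exp(arg))

def robust_affine_fit(X, y, n_iter=20):
    """Weighted least-squares fit of y ~ X @ a, re-weighting residuals with
    the Sigmoid and gradually hardening it as the fit improves (a sketch of
    the adaptive scheme described in the abstract)."""
    a = np.linalg.lstsq(X, y, rcond=None)[0]
    for k in range(n_iter):
        r = y - X @ a
        scale = 3.0 * np.median(np.abs(r)) + 1e-9
        w = sigmoid_weight(r, scale, hardness=1.0 + k)  # harden over iterations
        sw = np.sqrt(w)
        a = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return a

# Example: recover a line despite 10% gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + rng.normal(0, 0.05, 200)
y[:20] += 50.0                       # gross outliers
a = robust_affine_fit(X, y)          # a is close to (2, 3)
```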

5.
If a visual observer moves through an environment, the patterns of light that impinge on its retina vary, leading to changes in sensed brightness. Spatial shifts of brightness patterns in the 2D image over time are called optic flow. In contrast to optic flow, visual motion fields denote the displacement of 3D scene points projected onto the camera's sensor surface. For translational and rotational movement through a rigid scene, parametric models of visual motion fields have been defined. Besides ego-motion, these models provide access to relative depth, and both ego-motion and depth information are useful for visual navigation. In the past 30 years, methods for ego-motion estimation based on models of visual motion fields have been developed. In this review we identify five core optimization constraints which are used by 13 methods, together with different optimization techniques. In the literature, methods for ego-motion estimation have typically been evaluated using an error measure which tests only a specific ego-motion; furthermore, most simulation studies used only a Gaussian noise model. In contrast, we test multiple types and instances of ego-motion: one type is a fixating ego-motion, another is a curvilinear ego-motion. Based on simulations, we study properties such as statistical bias, consistency, variability of depths, and the robustness of the methods with respect to a Gaussian or outlier noise model. To improve estimates for noisy visual motion fields, some of the 13 methods are combined with techniques for robust estimation such as M-estimators or RANSAC. Furthermore, a realistic scenario of a stereo image sequence has been generated and used to evaluate methods of ego-motion estimation provided with estimated optic flow and depth information.

6.
Recently, several model-based control designs have been proposed for motion systems and computerized numerical control (CNC) machines to improve motion accuracy. However, in real applications, their performance is seriously degraded when significant disturbances or cutting forces are applied. In this paper, we derive straightforward design procedures for a general-structured unknown input observer (UIO) which perfectly decouples the effect of the external disturbance from the state estimation. Furthermore, we derive the optimal UIO by minimizing the estimation errors for both the state and the disturbance via the Riccati equation. Experimental results show that the performance of all advanced motion controllers suffers when external loads are applied. By compensating for the disturbance of a servo motor using the proposed optimal UIO, the original contouring accuracy, which is degraded by the external loading, can be successfully recovered.

7.
Many wavelet-based algorithms have been proposed in recent years to solve the problem of function estimation from noisy samples. In particular it has been shown that threshold approaches lead to asymptotically optimal estimation and are extremely effective when dealing with real data. Working under a Bayesian perspective, in this paper we first study optimality of the hard and soft thresholding rules when the function is modelled as a stochastic process with known covariance function. Next, we consider the case where the covariance function is unknown, and propose a novel approach that models the covariance as a certain wavelet combination estimated from data by Bayesian model selection. Simulated data are used to show that the new method outperforms traditional threshold approaches as well as other wavelet-based Bayesian techniques proposed in the literature.
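The two thresholding rules under study have one-line definitions: hard thresholding keeps large coefficients unchanged, soft thresholding additionally shrinks them. A sketch with NumPy (the universal threshold mentioned in the comment is one common choice from the wavelet literature, not necessarily the paper's):

```python
import numpy as np

def hard_threshold(c, t):
    """Keep wavelet coefficients whose magnitude exceeds t; zero the rest."""
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t, zeroing those with |c| <= t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# A common data-driven choice is the universal threshold t = sigma*sqrt(2*ln n)
# applied to the detail coefficients of a noisy signal.
c = np.array([3.0, -0.5, 1.5, -2.0])
print(hard_threshold(c, 1.0))  # [ 3.   0.   1.5 -2. ]
print(soft_threshold(c, 1.0))  # [ 2.   0.   0.5 -1. ]
```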

8.
We present a solution to a particular curve (surface) fitting problem and demonstrate its application in modeling objects from monocular image sequences. The curve-fitting algorithm is based on a modified nonparametric regression method, which forms the core contribution of this work. This method is far more effective than standard estimation techniques, such as the maximum likelihood estimation method, and can take into account the discontinuities present in the curve. Next, the theoretical results of this 1D curve estimation technique are extended significantly for an object modeling problem. The input to the algorithm is a monocular image sequence of an object undergoing rigid motion. By using the affine camera projection geometry and a given choice of an image frame pair in the sequence, we adopt the KvD (Koenderink and van Doorn, 1991) model to express the depth at each point on the object as a function of the unknown out-of-plane rotation and some measurable quantities computed directly from the optical flow. This is repeated for multiple image pairs (keeping one image frame fixed, which we formally call the base image, and choosing another frame from the sequence). The depth map is then estimated from these equations using the modified nonparametric regression analysis. We conducted experiments on various image sequences to verify the effectiveness of the technique. The results obtained using our curve-fitting technique can be refined further by hierarchical techniques, as well as by nonlinear optimization techniques in structure from motion.

9.
Traditional optical flow algorithms assume local image translational motion and apply simple image filtering techniques. Recent studies have taken two separate approaches toward improving the accuracy of computed flow: the application of spatio-temporal filtering schemes and the use of advanced motion models such as the affine model. Each has achieved some improvement over traditional algorithms in specialized situations but the computation of accurate optical flow for general motion has been elusive. In this paper, we exploit the interdependency between these two approaches and propose a unified approach. The general motion model we adopt characterizes arbitrary 3-D steady motion. Under perspective projection, we derive an image motion equation that describes the spatio-temporal relation of gray-scale intensity in an image sequence, thus making the utilization of 3-D filtering possible. However, to accommodate this motion model, we need to extend the filter design to derive additional motion constraint equations. Using Hermite polynomials, we design differentiation filters, whose orthogonality and Gaussian derivative properties insure numerical stability; a recursive relation facilitates application of the general nonlinear motion model while separability promotes efficiency. The resulting algorithm produces accurate optical flow and other useful motion parameters. It is evaluated quantitatively using the scheme established by Barron et al. (1994) and qualitatively with real images.
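Differentiation filters of the family used here — Hermite polynomials times a Gaussian — can be generated in a few lines; the first-order member is the familiar derivative-of-Gaussian kernel, and separability means only 1-D taps are needed. A sketch (tap count and normalization are our own choices):

```python
import numpy as np

def gaussian_derivative_filters(sigma, radius):
    """1-D Gaussian smoothing taps and first-derivative-of-Gaussian taps.
    Higher-order members of this family are Hermite polynomials times the
    Gaussian; separability lets 1-D taps build the full 3-D filters."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    g /= g.sum()                      # unit-DC smoothing kernel
    dg = -x / sigma**2 * g            # derivative of the Gaussian kernel
    return g, dg

# Convolving a unit-slope ramp with dg recovers its derivative (close to 1).
g, dg = gaussian_derivative_filters(2.0, 8)
ramp = np.arange(50.0)
d = np.convolve(ramp, dg, mode="valid")
```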

10.
V. Ortenzi  R. Stolkin  J. Kuo  M. Mistry 《Advanced Robotics》2017,31(19-20):1102-1113

This paper reviews hybrid motion/force control, a control scheme which enables robots to perform tasks involving both motion, in free space, and interactive force, at the contacts. Motivated by the large amount of literature on this topic, we facilitate comparison and elucidate the key differences among different approaches. An emphasis is placed on the decoupling of motion control and force control. We conclude that it is indeed possible to achieve a complete decoupling; however, this feature can be relaxed or sacrificed to reduce the robot's joint torques while still completing the task.

11.
From experience in component-based software engineering, it is known that the integration of high-quality components may not yield high-quality software systems. It is difficult to evaluate all possible interactions between the components in the system to uncover inter-component malfunctions. The problem is even harder when the components are used without source code, specifications or formal models. Such components are called black boxes in the literature. This paper presents an iterative approach combining model learning and testing techniques for the formal analysis of a system of black-box components. In this approach, individual components in the system are learned as finite state machines that (partially) model the behavioural structure of the components. The learned models are then used to derive tests for refining the partial models and/or finding integration faults in the system. The approach has been applied to case studies that have produced encouraging results. Copyright © 2013 John Wiley & Sons, Ltd.

12.
Computing optical flow with physical models of brightness variation
Although most optical flow techniques presume brightness constancy, it is well known that this constraint is often violated, producing poor estimates of image motion. This paper describes a generalized formulation of optical flow estimation based on models of brightness variations that are caused by time-dependent physical processes. These include changing surface orientation with respect to a directional illuminant, motion of the illuminant, and physical models of heat transport in infrared images. With these models, we simultaneously estimate the 2D image motion and the relevant physical parameters of the brightness change model. The estimation problem is formulated using total least squares, with confidence bounds on the parameters. Experiments in four domains, with both synthetic and natural inputs, show how this formulation produces superior estimates of the 2D image motion.
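For the simplest physical model of brightness change — a multiplier m and offset c applied to the image — the generalized constraint reads Ix·u + Iy·v + It = m·I + c, and all four unknowns can be estimated jointly over a patch. A sketch using ordinary least squares for brevity (the paper itself uses total least squares with confidence bounds; the function name is ours):

```python
import numpy as np

def generalized_flow_patch(Ix, Iy, It, I):
    """Least-squares estimate of (u, v, m, c) in the generalized brightness
    change constraint  Ix*u + Iy*v + It = m*I + c  over one image patch,
    where Ix, Iy, It are image derivatives and I the brightness."""
    A = np.column_stack([Ix.ravel(), Iy.ravel(),
                         -I.ravel(), -np.ones(I.size)])
    b = -It.ravel()
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    u, v, m, c = params
    return u, v, m, c
```

Setting m = 1 plus rearrangement recovers ordinary brightness constancy; the extra columns let the solver explain illumination change instead of forcing it into the motion estimate.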

13.
Many variations exist of yield curve modeling based on the exponential components framework, but most do not consider the generating process of the error term. In this paper, we propose a method of yield curve estimation using an instantaneous error term generated with a standard Brownian motion. First, we add an instantaneous error term to Nelson and Siegel's instantaneous forward rate model [C.R. Nelson, A.F. Siegel, Parsimonious modeling of yield curves, Journal of Business 60 (1987) 473–489]. Second, after differencing multiperiod spot rate models transformed using Nelson and Siegel's instantaneous forward rate model [C.R. Nelson, A.F. Siegel, Parsimonious modeling of yield curves, Journal of Business 60 (1987) 473–489], we obtain a model with serially uncorrelated error terms because of the independent increment property of Brownian motion. As the error term in this model is heteroskedastic but not serially correlated, we can apply weighted least squares estimation techniques. That is, this specification of the error term does not lead to incorrect estimation methods. In an empirical analysis, we compare the instantaneous forward rate curves estimated by the proposed method and an existing method. We find that the curve shapes produced by the proposed estimation equation differ from those of the existing method when the interest rate data used for the estimation are volatile.
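The Nelson-Siegel instantaneous forward rate the paper builds on is f(t) = β0 + β1·exp(−t/τ) + β2·(t/τ)·exp(−t/τ), so f(0) = β0 + β1 and f(t) → β0 as t → ∞. A sketch of this deterministic part, before the Brownian-motion error term is added:

```python
import numpy as np

def nelson_siegel_forward(t, beta0, beta1, beta2, tau):
    """Nelson-Siegel instantaneous forward rate:
        f(t) = b0 + b1*exp(-t/tau) + b2*(t/tau)*exp(-t/tau)
    b0 is the long-run level, b1 the short-end deviation, b2 the hump,
    and tau the decay scale."""
    x = t / tau
    return beta0 + beta1 * np.exp(-x) + beta2 * x * np.exp(-x)
```

Because the model is linear in (β0, β1, β2) for fixed τ, fitting observed forward rates reduces to (weighted) least squares over a grid of τ values, which is what makes the weighted-least-squares treatment of the differenced spot rates practical.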

14.
The objective of this paper is to study the problem of continuous-time blind deconvolution of a pulse amplitude modulated signal propagated over an unknown channel and perturbed by additive noise. The main idea is to use so-called Laguerre filters to estimate a continuous-time model of the channel. Laguerre-filter-based models can be viewed as an extension of finite-impulse-response (FIR) models to the continuous-time case, and lead to compact and parsimonious linear-in-the-parameters models. Given an estimate of the channel, different symbol estimation techniques are possible. Here, the shift property of Laguerre filters is used to derive a minimum mean square error estimator to recover the transmitted symbols. This is done in a way that closely resembles recent FIR-based schemes for the corresponding discrete-time case. The advantage of this concept is that physical a priori information, such as the transmitter pulse shape, can be incorporated in the model structure. Date received: July 23, 1998. Date revised: August 26, 1999.

15.
The problem of structure from motion is often decomposed into two steps: feature correspondence and three-dimensional reconstruction. This separation often causes gross errors when establishing correspondence fails. Therefore, we advocate the necessity to integrate visual information not only in time (i.e. across different views), but also in space, by matching regions – rather than points – using explicit photometric deformation models. We present an algorithm that integrates image-feature tracking and three-dimensional motion estimation into a closed loop, while detecting and rejecting outlier regions that do not fit the model. Due to occlusions and the causal nature of our algorithm, a drift in the estimates accumulates over time. We describe a method to perform global registration of local estimates of motion and structure by matching the appearance of feature regions stored over long time periods. We use image intensities to construct a score function that takes into account changes in brightness and contrast. Our algorithm is recursive and suitable for real-time implementation.
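A score function invariant to changes in brightness and contrast, as needed for matching feature regions stored over long periods, is commonly realized as normalized cross-correlation; a sketch under that assumption (the paper's exact score may differ):

```python
import numpy as np

def ncc_score(patch_a, patch_b):
    """Normalized cross-correlation between two image patches. The score is
    invariant to affine brightness changes b -> alpha*b + beta (alpha > 0),
    so a region still matches itself after illumination or contrast shifts."""
    a = patch_a.ravel() - patch_a.mean()
    b = patch_b.ravel() - patch_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Mean subtraction cancels the offset and the norm division cancels the gain, which is exactly the invariance the abstract asks of its score function.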

16.
The classic approach to structure from motion entails a clear separation between motion estimation and structure estimation and between two-dimensional (2D) and three-dimensional (3D) information. For the recovery of the rigid transformation between different views only 2D image measurements are used. To have available enough information, most existing techniques are based on the intermediate computation of optical flow which, however, poses a problem at the locations of depth discontinuities. If we knew where depth discontinuities were, we could (using a multitude of approaches based on smoothness constraints) accurately estimate flow values for image patches corresponding to smooth scene patches; but to know the discontinuities requires solving the structure from motion problem first. This paper introduces a novel approach to structure from motion which addresses the processes of smoothing, 3D motion and structure estimation in a synergistic manner. It provides an algorithm for estimating the transformation between two views obtained by either a calibrated or uncalibrated camera. The results of the estimation are then utilized to perform a reconstruction of the scene from a short sequence of images.The technique is based on constraints on image derivatives which involve the 3D motion and shape of the scene, leading to a geometric and statistical estimation problem. The interaction between 3D motion and shape allows us to estimate the 3D motion while at the same time segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion, however, is such that the worse the motion estimate, the more likely we are to obtain depth estimates that vary locally more than the correct ones. 
Since local variability of depth is due either to the existence of a discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases provides the correct motion, which yields the least varying estimated depth as well as the image locations of scene discontinuities. We analyze the new constraints, show their relationship to the minimization of the epipolar constraint, and present experimental results using real image sequences that indicate the robustness of the method.

17.
To overcome the inability of temporal error concealment at the video decoder to accurately estimate the motion vectors of lost blocks, we propose an error concealment method that combines decoder-side motion estimation with a hidden motion model. First, the motion vector of a lost block is estimated from the correctly received pixels surrounding it, and the accuracy of this estimate is computed. Then, according to this accuracy, the hidden motion model fully exploits the spatial and temporal correlations of the lost block to conceal the erroneous block. Simulation results show that, compared with existing algorithms, the proposed method improves the quality of the reconstructed video images by nearly 1 dB on average.

18.
We present a novel variational approach for segmenting the image plane into a set of regions of parametric motion on the basis of two consecutive frames from an image sequence. Our model is based on a conditional probability for the spatio-temporal image gradient, given a particular velocity model, and on a geometric prior on the estimated motion field favoring motion boundaries of minimal length. Exploiting the Bayesian framework, we derive a cost functional which depends on parametric motion models for each of a set of regions and on the boundary separating these regions. The resulting functional can be interpreted as an extension of the Mumford-Shah functional from intensity segmentation to motion segmentation. In contrast to most alternative approaches, the problems of segmentation and motion estimation are jointly solved by continuous minimization of a single functional. Minimizing this functional with respect to its dynamic variables results in an eigenvalue problem for the motion parameters and in a gradient descent evolution for the motion discontinuity set. We propose two different representations of this motion boundary: an explicit spline-based implementation, which can be applied to the motion-based tracking of a single moving object, and an implicit multiphase level set implementation, which allows for the segmentation of an arbitrary number of multiply connected moving objects. Numerical results for both simulated ground truth experiments and real-world sequences demonstrate the capacity of our approach to segment objects based exclusively on their relative motion.

19.
We propose a motion segmentation algorithm based on robust estimation with a multi-model structure. First, quadtree-based split-and-merge is applied to the video content to obtain the initial number of motions for robust estimation and the initial parameters of the corresponding motion models. The model parameters are then updated continually through parameter estimation, after which multiple moving regions are estimated simultaneously by associating each motion region with several motion models. Finally, small moving objects are detected with a small-object motion detection method, completing the segmentation. Experiments show that the algorithm achieves clearly visible results.

20.
For pt. I see ibid., p. 1386-93 (1995). An approach applying artificial neural net techniques to 3D nonrigid motion analysis is proposed. The 3D nonrigid motion of the left ventricle of a human heart is examined using biplanar cineangiography data, consisting of the 3D coordinates of 30 coronary artery bifurcation points of the left ventricle and the correspondences of these points taken over 10 time instants during the cardiac cycle. The motion is decomposed into global rigid motion and a set of local nonrigid deformations which are coupled with the global motion. The global rigid motion can be estimated precisely as a translation vector and a rotation matrix. Local nonrigid deformation estimation is discussed. A set of neural nets, similar in structure and dynamics but different in physical size, is proposed to tackle the problem of nonrigidity. These neural networks are interconnected through feedbacks. The activation function of the output layer is selected so that a feedback is involved in the output updating. The constraints are specified to ensure stable and globally consistent estimation. The objective is to find the optimal deformation matrices that satisfy the constraints for all coronary artery bifurcation points of the left ventricle. The proposed neural networks differ from other existing neural network models in their unique structure and dynamics.

