Similar Documents
20 similar documents found (search time: 31 ms)
1.
Existing algorithms for camera calibration and metric reconstruction are not appropriate for image sets containing geometrically transformed images, for which we cannot apply camera constraints such as square or zero-skew pixels. In this paper, we propose a framework that uses scene constraints in the form of camera constraints. Our approach is based on image warping using images of parallelograms. We show that an image warped using parallelograms constrains the camera both intrinsically and extrinsically. Image warping converts the calibration problem for transformed images into a calibration problem with highly constrained cameras. In addition, it is possible to determine affine projection matrices from the images without explicit projective reconstruction. We introduce camera motion constraints for the warped image and a new parameterization of the infinite homography using the warping matrix. Combining the calibration and the affine reconstruction yields a fully metric reconstruction of scenes with geometrically transformed images. The feasibility of the proposed algorithm is tested on synthetic and real data. Finally, examples of metric reconstructions are shown for geometrically transformed images obtained from the Internet.

2.
Self-calibration of an affine camera from multiple views
A key limitation of all existing algorithms for shape and motion from image sequences under orthographic, weak-perspective and para-perspective projection is that they require the calibration parameters of the camera. We present in this paper a new approach that allows the shape and motion to be computed from image sequences without knowing the calibration parameters. This approach is derived for the affine camera model, introduced by Mundy and Zisserman (1992), which is a more general class of projections including the orthographic, weak-perspective and para-perspective projection models. The concept of self-calibration, introduced by Maybank and Faugeras (1992) for the perspective camera and by Hartley (1994) for the rotating camera, is then applied to the affine camera. This paper introduces the three intrinsic parameters that the affine camera can have at most. The intrinsic parameters of the affine camera are closely related to the usual intrinsic parameters of the pin-hole perspective camera, but are different in the general case. Based on the invariance of the intrinsic parameters, methods for self-calibration of the affine camera are proposed. It is shown that with at least four views, an affine camera can be self-calibrated up to a scaling factor, leading to Euclidean (similarity) shape reconstruction up to a global scaling factor. Another consequence of introducing intrinsic and extrinsic parameters for the affine camera is that all existing algorithms using calibrated affine cameras can be assembled into the same framework, and some of them can be easily extended to a batch solution. Experimental results are presented and compared with other methods using calibrated affine cameras.
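As a concrete illustration of the affine camera model discussed in this abstract: an image point is x = M X + t, with a 2×3 matrix M and a 2-vector t, so there is no perspective division. The sketch below uses made-up numbers (orthographic projection along Z), not anything from the paper.

```python
# Minimal sketch of the affine camera model x = M X + t (2x3 M, 2-vector t).
# The matrix below (orthographic projection along Z) is illustrative,
# not taken from the paper.

def affine_project(M, t, X):
    """Project a 3D point X with an affine camera: no perspective division."""
    return tuple(sum(M[i][j] * X[j] for j in range(3)) + t[i]
                 for i in range(2))

M = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
t = [5.0, -2.0]
print(affine_project(M, t, (3.0, 4.0, 7.0)))  # (8.0, 2.0)
```

Because depth never enters the projection, scaling M by a common factor rescales every image point uniformly, which is why the model is self-calibratable only up to a global scale.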

3.
Broadcast soccer video is usually recorded by one main camera that constantly gazes at the part of the playfield where a highlight event is happening. The camera parameters and their variation therefore have a close relationship with the semantic content of soccer video, and camera calibration for soccer video has attracted much interest. Previous calibration methods either deal only with goal scenes, or impose strict calibration conditions and high complexity, so they cannot properly handle non-goal scenes such as midfield or center-forward scenes. In this paper, based on a new soccer field model, a field symbol extraction algorithm is proposed to extract the calibration information. A two-stage calibration approach is then developed that can calibrate the camera not only for goal scenes but also for non-goal scenes. Preliminary experimental results demonstrate its robustness and accuracy.
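Field-model-based calibration of this kind ultimately rests on fitting a plane-to-image homography from matched field landmarks. A minimal four-point DLT-style sketch follows (illustrative only; the paper's two-stage algorithm is more involved, and all point coordinates here are invented):

```python
# DLT-style homography estimation from four point correspondences,
# solved with plain Gauss-Jordan elimination.

def solve(A, b):
    """Solve A h = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def homography_dlt(src, dst):
    """Estimate a 3x3 homography (h33 fixed to 1) from 4 point pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Map an image point through the homography H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Unit square mapped by (x, y) -> (2x + 2, 2y + 1):
H = homography_dlt([(0, 0), (1, 0), (1, 1), (0, 1)],
                   [(2, 1), (4, 1), (4, 3), (2, 3)])
print(apply_h(H, 0.5, 0.5))  # ~ (3.0, 2.0)
```

In practice more than four landmark correspondences are used with a least-squares solve, but the constraint rows have exactly this form.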

4.
Modeling camera lens distortion is crucial to obtaining the best-performing camera model. A variety of techniques exist that try to minimize the calibration error using different lens distortion models or computing them in different ways. Some compute the lens distortion parameters in the camera calibration process together with the intrinsic and extrinsic ones. Others isolate the lens distortion calibration without using any template, basing the calibration on the deformation, in the image, of certain features of objects in the scene, such as straight lines or circles. These lens distortion techniques, which do not use any calibration template, can be unstable if a complete camera lens distortion model is computed; they are called non-metric calibration or self-calibration methods. Traditionally, a camera is best calibrated with metric calibration rather than self-calibration. This paper proposes a metric calibration technique which computes the camera lens distortion in isolation from the camera calibration process under stable conditions, independently of the lens distortion model used or the number of parameters. To make the problem easier to solve, this metric technique uses the same calibration template that will be used afterwards in the calibration process. The best performance of the camera lens distortion calibration process is thus achieved, and is transferred directly to the camera calibration process.
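One of the simplest lens distortion models of the family this abstract discusses is the two-coefficient radial polynomial. A minimal sketch (coefficients and sample point are illustrative, not from the paper):

```python
# Two-coefficient radial distortion: points are displaced along the ray
# from the distortion center by a polynomial in the squared radius.

def distort(x, y, k1, k2):
    """Apply radial distortion to normalized image coordinates (x, y)
    about a distortion center at the origin."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel distortion (k1 < 0) pulls off-center points toward the center:
xd, yd = distort(0.5, 0.0, k1=-0.2, k2=0.05)
print(xd, yd)  # ~ 0.4765625 0.0
```

Template-free methods of the kind the abstract mentions estimate k1, k2 by requiring that the images of straight scene lines become straight after undistortion.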

5.
Camera models and their calibration are required in many applications to convert coordinates between the two-dimensional image and the real three-dimensional world. Self-calibration is usually chosen for camera calibration in uncontrolled environments because the scene geometry may be unknown. However, when no reliable feature correspondences can be established, or when the camera is static in relation to the majority of the scene, self-calibration fails to work. Object-based calibration methods, on the other hand, are more reliable than self-calibration methods because an object with known geometry is available. However, most object-based calibration methods are unable to work in uncontrolled environments because they require geometric knowledge of the calibration objects. Although in the past few years the simplest geometry required for a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirements of the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. It is therefore worthwhile to investigate an object-based method using such a simple object, so that it is still possible to calibrate a camera when both self-calibration and existing object-based calibration fail. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane which is fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated using the proposed method. The proposed method has been tested on simulated data and real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration objects are available, e.g. a human walking sequence. Very accurate camera calibration results have been achieved using the proposed method.

6.
We consider the stratified self-calibration (affine and metric reconstruction) problem from images acquired by a camera with unchanging internal parameters undergoing circular motion. The general stratified method (modulus constraints) is known to fail for this motion. In this paper we give a novel constraint on the plane at infinity in projective reconstruction for circular motion: the constant inter-frame motion constraint on the plane at infinity between every two adjacent views and a fixed view of the motion sequence, which exploits the fact that rotation angles are constant in many commercial systems. An initial solution can be obtained using the first three views of the sequence, and Stratified Iterative Particle Swarm Optimization (SIPSO) is proposed to obtain an accurate and robust solution when more views are at hand. Instead of using a traditional optimization algorithm as the last step to refine the solution, the information of the whole motion sequence is exploited before computing the camera calibration matrix, which results in a more accurate and robust solution. Once the plane at infinity is identified, the calibration matrices of the camera and a metric reconstruction can be readily obtained. Experiments on both synthetic and real image sequences are given, showing the accuracy and robustness of the new algorithm.

7.
The Cayley framework here is meant to tackle vision problems under the infinite Cayley transformation (ICT); its main advantage lies in its numerical stability. In this work, stratified self-calibration under the Cayley framework is investigated. It is well known that the main difficulty of stratified self-calibration in multiple view geometry is upgrading a projective reconstruction to an affine one, in other words, estimating the unknown 3-vector of the plane at infinity, called the normal vector. To our knowledge, without any prior knowledge about the scene or the camera motion, the only available constraint on a moving camera with constant intrinsic parameters is the well-known modulus constraint. Do other kinds of constraints exist? If so, what are they, and how can they be used? In this work, such questions are systematically investigated under the Cayley framework. Our key contributions are as follows. 1. The original projective expression of the ICT is simplified, and a new projective expression is derived to ease the upgrade from a projective reconstruction to a metric reconstruction. 2. The constraints on the normal vector are systematically investigated. For two views, two constraints on the normal vector are derived; one is the well-known modulus constraint, while the other is a new inequality constraint, and these are the only constraints for two views. For three views, besides the two-view constraints, two groups of new constraints are derived, each containing three constraints; in other words, there are 12 constraints in total for three views. 3. Based on our projective expression and these constraints, a stratified Cayley algorithm and a total Cayley algorithm are proposed for metric reconstruction from images. It is experimentally shown that both significantly improve the numerical stability of the classical algorithms. Compared with the globally optimal algorithm under the infinite homography framework, the Cayley algorithms have comparable calibration accuracy but substantially reduce the computational load.

8.
We present a batch method for recovering Euclidean camera motion from sparse image data. The main purpose of the algorithm is to recover the motion parameters using as much of the available information and as few computational steps as possible. The algorithm thus places itself in the gap between factorisation schemes, which make use of all available information in the initial recovery step, and sequential approaches, which are able to handle sparseness in the image data. Euclidean camera matrices are approximated via the affine camera model, making the recovery direct in the sense that no intermediate projective reconstruction is made. Using a little-known closure constraint, the FA-closure, we are able to formulate the camera coefficients linearly in the entries of the affine fundamental matrices. The novelty of the presented work is twofold: firstly, the formulation allows for particularly good conditioning of the estimation of the initial motion parameters as well as an unprecedented diversity in the choice of possible regularisation terms; secondly, the new autocalibration scheme presented here is in practice guaranteed to yield a least-squares estimate of the calibration parameters. As a by-product, the affine camera model is rehabilitated as a useful model for most cameras and scene configurations, e.g. wide-angle lenses observing a scene at close range. Experiments on real and synthetic data demonstrate the ability to reconstruct scenes which are very problematic for previous structure-from-motion techniques due to local ambiguities and error accumulation.

9.
Research on a camera self-calibration method based on an active vision system
A new camera self-calibration method based on an active vision system is proposed. Its main feature is that all five intrinsic parameters of the camera can be solved for linearly. The basic principle is to use information about a planar scene in the image: by controlling the camera to perform several groups of orthogonal translational motions parallel to the plane, a system of linear constraint equations on the intrinsic parameters is established from the homography matrices of the planar-scene images, from which the intrinsic parameters are solved. The relationship between the uniqueness of the solution of the constraint equations and the groups of translational motions is also given.

10.
Recently, various techniques of shape reconstruction using cast shadows have been proposed. These techniques have the advantage that they can be applied to various scenes, including outdoor scenes, without using special devices. Previously proposed techniques usually require calibration of camera parameters and light source positions, and such calibration processes limit the range of application of these techniques. In this paper, we propose a method to reconstruct 3D scenes even when the camera parameters or light source positions are unknown. The technique first recovers the shape with 4-DOF indeterminacy using coplanarities obtained by cast shadows of straight edges or visible planes in a scene, and then upgrades the shape using metric constraints obtained from the geometrical constraints in the scene. In order to circumvent the need for calibrations and special devices, we propose both linear and nonlinear methods in this paper. Experiments using simulated and real images verified the effectiveness of this technique.

11.
Objective: Pan-tilt-zoom (PTZ) cameras play an important role in highway surveillance systems because of their wide field of view and high flexibility. However, since the focal length and viewing angles of a PTZ camera change irregularly with surveillance demands, it is difficult to recover accurate physical information about the real world from its images, so research on automatic off-site calibration of PTZ cameras is of great value to highway surveillance applications. Method: This paper proposes an automatic PTZ camera calibration method based on vanishing-point constraints and a lane-line model constraint, in order to establish an accurate mapping between the image information of a highway surveillance system and the physical information of the real world. First, the longitudinal vanishing point is accurately estimated by cascaded Hough transform voting on vehicle trajectories. Then, with the physical measurements of the lane-line model as constraints, an enumeration strategy is adopted to accurately estimate the lateral vanishing point. Finally, given the camera height, the calibration parameters of the highway PTZ camera are computed. Results: Experiments in different scenes yield average errors at different distances of 4.63%, 4.74%, 4.81%, and 4.65%, all below 5%. Conclusion: Tests on multiple highway surveillance scenes show that the proposed automatic PTZ camera calibration method keeps the physical measurement error within application requirements, has clear advantages over the reference methods, and yields intrinsic and extrinsic parameters that can be used to compute vehicle speed and spatial position.
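The two-vanishing-point calibration idea in this entry can be sketched in a few lines: intersect the images of parallel scene lines via homogeneous cross products, then recover the focal length from two orthogonal vanishing points under the usual zero-skew, unit-aspect, centered-principal-point assumptions. All coordinates below are made up, not taken from the paper's experiments.

```python
# Vanishing-point geometry in homogeneous coordinates.
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def vanishing_point(l1, l2):
    """Intersection of two image lines (images of parallel scene lines)."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)

def focal_from_orthogonal_vps(vp1, vp2):
    """f^2 = -(u1*u2 + v1*v2) for orthogonal directions, assuming zero
    skew, unit aspect ratio and principal point at the origin."""
    return math.sqrt(-(vp1[0] * vp2[0] + vp1[1] * vp2[1]))

# Two images of parallel lane edges meeting at (1000, 0):
vp = vanishing_point(line_through((0, 100), (500, 50)),
                     line_through((0, -200), (500, -100)))
print(vp)                                             # (1000.0, 0.0)
print(focal_from_orthogonal_vps(vp, (-1000.0, 0.0)))  # 1000.0
```

With the focal length and camera height known, image-to-ground distances follow, which is how such calibrations support speed estimation.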

12.
We consider the self-calibration problem for a generic imaging model that assigns projection rays to pixels without a parametric mapping. We consider the central variant of this model, which encompasses all camera models with a single effective viewpoint. Self-calibration refers to calibrating a camera’s projection rays purely from matches between images, i.e. without knowledge about the scene such as a calibration grid. In order to do this we consider specific camera motions, concretely pure translations and rotations, although without knowledge of the rotation and translation parameters (rotation angles, axis of rotation, translation vector). Knowledge of the type of motion, together with image matches, gives geometric constraints on the projection rays. We show for example that with translational motions alone, self-calibration can already be performed, but only up to an affine transformation of the set of projection rays. We then propose algorithms for full metric self-calibration that use rotational and translational motions or just rotational motions.

13.
We first present the homography matrix of the plane at infinity and an affine reconstruction algorithm, and then rigorously prove the following proposition: under a varying-parameter camera model, if the scene contains a plane and a pair of parallel lines, or two parallel planes, then an affine reconstruction of the scene can be obtained linearly from images taken at two translated viewpoints. The paper also points out that if the scene contains a pair of parallel planes and a pair of parallel lines, the affine geometry of the scene can likewise be linearly reconstructed from two viewpoints under general motion. Extensive experiments on simulated and real images show that the linear affine reconstruction algorithm is correct and achieves high reconstruction accuracy and robustness.

14.
A recursive structure-from-motion algorithm based on optical flow measurements taken from an image sequence is described. It provides estimates of surface normal in addition to 3D motion and depth. The measurements are affine motion parameters which approximate the local flow fields associated with near-planar surface patches in the scene. These are integrated over time to give estimates of the 3D parameters using an extended Kalman filter. The filter also estimates the camera focal length, so the 3D estimates are metric. The use of parametric measurements means that the algorithm is computationally less demanding than previous optical flow approaches, and the recursive filter builds in a degree of noise robustness. Results of experiments on synthetic and real image sequences demonstrate that the algorithm performs well.
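The recursive integration step this entry describes follows the standard Kalman predict/update pattern. A scalar sketch of that pattern (the paper uses a full extended Kalman filter over 3D motion, depth, surface normal and focal length, which this toy example does not reproduce; all numbers are illustrative):

```python
# Scalar Kalman filter with a static state model: each new measurement
# refines the state estimate and shrinks its variance.

def kalman_step(x, p, z, q, r):
    """One predict/update cycle: state estimate x, variance p,
    measurement z, process noise q, measurement noise r."""
    p = p + q               # predict: uncertainty grows
    k = p / (p + r)         # Kalman gain
    x = x + k * (z - x)     # update with the innovation z - x
    p = (1.0 - k) * p       # uncertainty shrinks after the update
    return x, p

x, p = 0.0, 1.0  # vague initial guess
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.1)
# x has moved close to the measurement level ~1.0, and p has shrunk.
print(x, p)
```

In the recursive structure-from-motion setting, x and p become the state vector and covariance matrix, and the measurement model is linearized at each step, but the predict/update cycle is the same.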

15.
Plane-based self-calibration aims at the computation of camera intrinsic parameters from homographies relating multiple views of the same unknown planar scene. This paper proposes a straightforward geometric statement of plane-based self-calibration, through the concept of metric rectification of images. A set of constraints is derived from a decomposition of metric rectification in terms of intrinsic parameters and planar scene orientation. These constraints are then solved using an optimization framework based on the minimization of a geometrically motivated cost function. The link with previous approaches is demonstrated, and our method appears to be theoretically equivalent but conceptually simpler. Moreover, a solution dealing with radial distortion is introduced. Experimentally, the method is compared with plane-based calibration and very satisfactory results are obtained. Markerless self-calibration is demonstrated using an intensity-based estimation of the inter-image homographies.

16.
17.
This paper proposes a new method for self-calibrating a set of stationary non-rotating zooming cameras. This is a realistic configuration, usually encountered in surveillance systems, in which each zooming camera is physically attached to a static structure (wall, ceiling, robot, or tripod). In particular, a linear yet effective method to recover the affine structure of the observed scene from two or more such stationary zooming cameras is presented. The proposed method relies solely on point correspondences across images; no knowledge about the scene is required. Our method exploits the mostly translational displacement of the so-called principal plane of each zooming camera to estimate the location of the plane at infinity. The principal plane of a camera, at any given setting of its zoom, is encoded in its corresponding perspective projection matrix, from which it can be easily extracted. As a displacement of the principal plane of a camera under the effect of zooming allows the identification of a pair of parallel planes, each zooming camera can be used to locate a line on the plane at infinity. Hence, two or more such zooming cameras in general position allow an estimate of the plane at infinity to be obtained, making it possible, under the assumption of zero skew and/or known aspect ratio, to linearly calculate the cameras' parameters. Finally, the parameters of the cameras and the coordinates of the plane at infinity are refined through a nonlinear least-squares optimization procedure. The results of our extensive experiments using both simulated and real data are also reported in this paper.

18.
Lu  Ali  Huo  Ying  Zhou  Jingbo 《Multimedia Tools and Applications》2019,78(24):34673-34687

In order to measure and reconstruct accurate three-dimensional (3D) data for vision-aided navigation of autonomous land vehicles (ALVs), a multimedia stereo calibration algorithm is proposed which is suitable for normal scenes and especially for low-illumination scenes. First, an expression for object-point re-projection errors is derived from the collinearity equation model, and the non-linear least-squares (NLS) algorithm is introduced to iteratively optimize the external parameters of each individual camera. A rectangular pyramidal method enforcing the rectangular geometric constraint is presented to produce more stable initial parameter values. Then, according to the imaging-point correspondences between the left and right cameras, a re-projection error model is constructed for the stereo calibration system, all of whose parameters are further optimized using the calibrated results of the two separate cameras. Experimental results show that the proposed algorithm achieves re-projection errors of no more than 0.5 pixels and converges quickly, usually within 10 iterations, under both normal and low illumination, so it delivers better performance and enables rapid re-calibration.
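The re-projection error that drives such an optimization can be sketched as follows: project each 3D point with a pinhole model and measure the RMS image residual. The intrinsics, pose and points below are illustrative; the paper's collinearity-equation formulation and NLS optimizer are not reproduced here.

```python
# RMS re-projection error under a pinhole camera model.
import math

def project(K, R, t, X):
    """Pinhole projection x ~ K (R X + t), returning pixel coordinates."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)

def rms_reprojection_error(K, R, t, points3d, points2d):
    """Root-mean-square residual between observed and projected points."""
    s = 0.0
    for X, (u, v) in zip(points3d, points2d):
        up, vp = project(K, R, t, X)
        s += (up - u) ** 2 + (vp - v) ** 2
    return math.sqrt(s / len(points3d))

K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 5.0]
pts3d = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
pts2d = [project(K, R, t, X) for X in pts3d]  # perfect observations
print(rms_reprojection_error(K, R, t, pts3d, pts2d))  # 0.0
```

A calibration optimizer varies K, R and t (and, for stereo, the relative pose between the two cameras) to drive this residual toward the sub-pixel levels the entry reports.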


19.
In this paper, we describe a reconstruction method for multiple-motion scenes, i.e. scenes containing multiple moving objects, from uncalibrated views. Assuming that the objects move with constant velocities, the method recovers the scene structure, the trajectories of the moving objects, the camera motion, and the camera intrinsic parameters (except skew) simultaneously. We focus on the case where the cameras have unknown and varying focal lengths while the other intrinsic parameters are known. The number of moving objects is automatically detected without prior motion segmentation. The method is based on a unified geometrical representation of the static scene and the moving objects. It first performs a projective reconstruction using a bilinear factorization algorithm and then converts the projective solution to a Euclidean one by enforcing metric constraints. Experimental results on synthetic and real images are presented.

20.
We present an algorithm for identifying and tracking independently moving rigid objects from optical flow. Some previous attempts at segmentation via optical flow have focused on finding discontinuities in the flow field. While discontinuities do indicate a change in scene depth, they do not in general signal a boundary between two separate objects. The proposed method uses the fact that each independently moving object has a unique epipolar constraint associated with its motion. Thus motion discontinuities due to self-occlusion can be distinguished from those due to separate objects. The use of epipolar geometry allows the determination of individual motion parameters for each object as well as the recovery of relative depth for each point on the object. The algorithm assumes an affine camera where perspective effects are limited to changes in overall scale; no camera calibration parameters are required. A Kalman-filter-based approach is used for tracking the motion parameters over time.
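The per-object epipolar test that such segmentation relies on can be checked algebraically. A tiny sketch with an illustrative affine-form fundamental matrix for a pure sideways translation (not the paper's actual formulation; all values are invented):

```python
# Epipolar consistency check: x2^T F x1 is near zero for point pairs
# consistent with the motion encoded in F.

def epipolar_residual(F, x1, x2):
    """Algebraic residual x2^T F x1 for homogeneous image points."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

# Affine-form fundamental matrix (zero upper-left 2x2 block) for a pure
# sideways translation: matching points must share the same image row.
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]
print(epipolar_residual(F, (0.3, 0.5, 1.0), (0.7, 0.5, 1.0)))  # 0.0
```

A point pair whose residual is large under one object's F is evidence that it belongs to a different independently moving object, which is how epipolar constraints separate objects where flow discontinuities alone cannot.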
