Similar Literature (20 results)
1.
Segment Based Camera Calibration
The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints is derived on camera parameters in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of camera location. Since line segments in an image can be located easily and more accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.
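As an illustration of the point-based linear computation this result generalizes, here is a minimal direct linear transformation (DLT) sketch that recovers a 3×4 perspective projection matrix from 3D–2D point correspondences. The camera matrix and point set below are made-up test data, not values from the paper.

```python
import numpy as np

def dlt_projection_matrix(X3d, x2d):
    """Estimate a 3x4 projection matrix P from n >= 6 3D-2D point
    correspondences by direct linear transformation (DLT)."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Synthetic example: known camera, random 3D points (illustrative values).
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0, 320, 10],
                   [0, 800.0, 240, 20],
                   [0, 0, 1, 50]])
X3d = rng.uniform(-1, 1, size=(10, 3))
xh = (P_true @ np.c_[X3d, np.ones(10)].T).T
x2d = xh[:, :2] / xh[:, 2:]

P_est = dlt_projection_matrix(X3d, x2d)
P_est /= P_est[2, 3]            # fix scale and sign for comparison
P_ref = P_true / P_true[2, 3]
print(np.allclose(P_est, P_ref, atol=1e-6))  # True
```

The projection matrix is recovered only up to scale, which is why both matrices are normalized before comparison.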

2.
Self-Calibration of a Purely Rotating Camera Based on Conics
This paper investigates a self-calibration method for a purely rotating camera based on planar conics. Three or more images are taken from different orientations, each containing the projections of at least two planar conics, two quadric surfaces, or one planar conic and one quadric surface. From the conic correspondences between images, the camera's intrinsic parameter matrix can be determined, and the rotation matrices between the different camera orientations can be recovered at the same time. Since the calibration primitives are conics, which carry more information than points or lines, matching between primitives is easy to automate, which helps improve the robustness and on-line real-time performance of the calibration algorithm. Experiments on both simulated and real images show that the proposed method is feasible.

3.
In this study, we present a calibration technique that is valid for all single-viewpoint catadioptric cameras. We are able to represent the projection of 3D points on a catadioptric image linearly with a 6×10 projection matrix, which uses lifted coordinates for image and 3D points. This projection matrix can be computed from 3D–2D correspondences (minimum 20 points distributed in three different planes). We show how to decompose it to obtain intrinsic and extrinsic parameters. Moreover, we use this parameter estimation followed by a non-linear optimization to calibrate various types of cameras. Our results are based on the sphere camera model which considers that every central catadioptric system can be modeled using two projections, one from 3D points to a unitary sphere and then a perspective projection from the sphere to the image plane. We test our method both with simulations and real images, and we analyze the results performing a 3D reconstruction from two omnidirectional images.
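The 6×10 dimensions follow from lifting points to second-order (Veronese) coordinates: a homogeneous image point (3-vector) has 6 degree-2 monomials, and a homogeneous 3D point (4-vector) has 10. A minimal sketch, in which the monomial ordering is one arbitrary convention rather than the one used in the paper:

```python
import numpy as np
from itertools import combinations_with_replacement

def lift(v):
    """Veronese lifting of a homogeneous vector to all degree-2 monomials.
    A 3-vector (image point) lifts to 6 coordinates; a 4-vector (3D point)
    lifts to 10."""
    return np.array([v[i] * v[j] for i, j in
                     combinations_with_replacement(range(len(v)), 2)])

x_img = np.array([2.0, 3.0, 1.0])        # homogeneous image point
X_3d = np.array([1.0, 2.0, 3.0, 1.0])    # homogeneous 3D point
print(lift(x_img).shape)  # (6,)
print(lift(X_3d).shape)   # (10,)
```

A 6×10 matrix then maps the 10-dimensional lifted 3D coordinates linearly to the 6-dimensional lifted image coordinates, which is why the catadioptric projection becomes linear in this representation.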

4.
Accurate Camera Calibration from Multi-View Stereo and Bundle Adjustment
The advent of high-resolution digital cameras and sophisticated multi-view stereo algorithms offers the promise of unprecedented geometric fidelity in image-based modeling tasks, but it also puts unprecedented demands on camera calibration to fulfill these promises. This paper presents a novel approach to camera calibration where top-down information from rough camera parameter estimates and the output of a multi-view-stereo system on scaled-down input images is used to effectively guide the search for additional image correspondences and significantly improve camera calibration parameters using a standard bundle adjustment algorithm (Lourakis and Argyros 2008). The proposed method has been tested on six real datasets including objects without salient features for which image correspondences cannot be found in a purely bottom-up fashion, and objects with high curvature and thin structures that are lost in visual hull construction even with small errors in camera parameters. Three different methods have been used to qualitatively assess the improvements of the camera parameters. The implementation of the proposed algorithm is publicly available at Furukawa and Ponce (2008b).

5.
By using mirror reflections of a scene, stereo images can be captured with a single camera (catadioptric stereo). In addition to simplifying data acquisition, single-camera stereo provides both geometric and radiometric advantages over traditional two-camera stereo. In this paper, we discuss the geometry and calibration of catadioptric stereo with two planar mirrors. In particular, we show that the relative orientation of a catadioptric stereo rig is restricted to the class of planar motions, thus reducing the number of external calibration parameters from 6 to 5. Next, we derive the epipolar geometry for catadioptric stereo and show that it has 6 degrees of freedom rather than 7 as in traditional stereo. Furthermore, we show how focal length can be recovered from a single catadioptric image solely from a set of stereo correspondences. To test the accuracy of the calibration, we present a comparison to Tsai camera calibration and measure the quality of Euclidean reconstruction. In addition, we describe a real-time system which demonstrates the viability of stereo with mirrors as an alternative to traditional two-camera stereo.
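The virtual view in planar catadioptric stereo is obtained by reflecting the real camera across the mirror plane. A minimal sketch of that reflection, with illustrative plane parameters not taken from the paper:

```python
import numpy as np

def reflect_point(X, n, d):
    """Reflect a 3D point across the plane n . X + d = 0 (n need not be
    pre-normalized). In planar catadioptric stereo, the virtual camera
    centre is the reflection of the real camera centre in the mirror."""
    n = n / np.linalg.norm(n)
    return X - 2.0 * (n @ X + d) * n

C = np.array([0.0, 0.0, 0.0])    # real camera centre at the origin
n = np.array([0.0, 0.0, 1.0])    # mirror normal
d = -1.0                         # mirror is the plane z = 1
C_virtual = reflect_point(C, n, d)
print(C_virtual)  # [0. 0. 2.]
```

Because the real and virtual centres are mirror images of each other, their relative pose is a planar motion, which is the source of the reduced parameter count discussed in the abstract.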

6.
We address the problem of estimating three-dimensional motion and structure from motion with an uncalibrated moving camera. We show that point correspondences between three images, and the fundamental matrices computed from these point correspondences, are sufficient to recover the internal orientation of the camera (its calibration), the motion parameters, and to compute coherent perspective projection matrices which enable us to reconstruct 3-D structure up to a similarity. In contrast with other methods, no calibration object with a known 3-D shape is needed, and no limitations are put upon the unknown motions to be performed or the parameters to be recovered, as long as they define a projective camera. The theory of the method, which is based on the constraint that the observed points are part of a static scene, thus allowing us to link the intrinsic parameters and the fundamental matrix via the absolute conic, is first detailed. Several algorithms are then presented, and their performances compared by means of extensive simulations and illustrated by several experiments with real images.

7.
To enable continuous scanning with a structured-light vision system built from a projector and a camera, the spatial relationship between an arbitrary light plane projected by the projector and the camera's image plane must be computed, which in turn requires the relative pose between the optical centers of the camera and the projector. The camera's intrinsic parameters are first obtained; four corner points on the calibration board are selected as feature points, and their extrinsic parameters are computed from the camera intrinsics, giving the coordinates of the four feature points in the camera coordinate system. The projector's own parameters are then used to solve for the feature points' coordinates in the projector coordinate system, from which the relative pose between the two optical centers is computed, completing the structured-light vision calibration. Using the calibrated system to measure the distances between corner points on the calibration board yields a maximum relative error of 0.277%, showing that the calibration algorithm is applicable to projector-camera structured-light vision systems.

8.
Camera model and its calibration are required in many applications for coordinate conversions between the two-dimensional image and the real three-dimensional world. Self-calibration method is usually chosen for camera calibration in uncontrolled environments because the scene geometry could be unknown. However when no reliable feature correspondences can be established or when the camera is static in relation to the majority of the scene, self-calibration method fails to work. On the other hand, object-based calibration methods are more reliable than self-calibration methods due to the existence of the object with known geometry. However, most object-based calibration methods are unable to work in uncontrolled environments because they require the geometric knowledge on calibration objects. Though in the past few years the simplest geometry required for a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirement in the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. Thus, it would be very worthwhile to investigate an object-based method based on such a simple object so that it would still be possible to calibrate a camera when both self-calibration and existing object-based calibration fail to work. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane which is fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated using the proposed method. 
The proposed method has been tested on simulated data and real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration objects are available, e.g. from a human walking sequence. Very accurate camera calibration results have been achieved using the proposed method.

9.
Silhouette coherence for camera calibration under circular motion
We present a new approach to camera calibration as a part of a complete and practical system to recover digital copies of sculpture from uncalibrated image sequences taken under turntable motion. In this paper, we introduce the concept of the silhouette coherence of a set of silhouettes generated by a 3D object. We show how the maximization of the silhouette coherence can be exploited to recover the camera poses and focal length. Silhouette coherence can be considered as a generalization of the well-known epipolar tangency constraint for calculating motion from silhouettes or outlines alone. Further, silhouette coherence exploits all the geometric information encoded in the silhouette (not just at epipolar tangency points) and can be used in many practical situations where point correspondences or outer epipolar tangents are unavailable. We present an algorithm for exploiting silhouette coherence to efficiently and reliably estimate camera motion. We use this algorithm to reconstruct very high quality 3D models from uncalibrated circular motion sequences, even when epipolar tangency points are not available or the silhouettes are truncated. The algorithm has been integrated into a practical system and has been tested on more than 50 uncalibrated sequences to produce high quality photo-realistic models. Three illustrative examples are included in this paper. The algorithm is also evaluated quantitatively by comparing it to a state-of-the-art system that exploits only epipolar tangents.

10.
Standard camera and projector calibration techniques use a checkerboard that is manually shown at different poses to determine the calibration parameters. Furthermore, when image geometric correction must be performed on a three-dimensional (3D) surface, such as projection mapping, the surface geometry must be determined. Camera calibration and 3D surface estimation can be costly, error prone, and time-consuming when performed manually. To address this issue, we use an auto-calibration technique that projects a series of Gray code structured light patterns. These patterns are captured by the camera to build a dense pixel correspondence between the projector and camera, which are used to calibrate the stereo system using an objective function, which embeds the calibration parameters together with the undistorted points. Minimization is carried out by a greedy algorithm that minimizes the cost at each iteration with respect to both calibration parameters and noisy image points. We test the auto-calibration on different scenes and show that the results closely match a manual calibration of the system. We show that this technique can be used to build a 3D model of the scene, which in turn with the dense pixel correspondence can be used for geometric screen correction on any arbitrary surface.
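Gray code structured light encodes each projector column so that adjacent columns differ in exactly one pattern, making decoding robust at stripe boundaries. A generic sketch of pattern generation and per-pixel decoding (this is the standard binary-reflected Gray code construction, not the authors' specific implementation):

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """One vertical stripe pattern per bit: column c is encoded by the
    binary-reflected Gray code of c, so neighbouring columns differ in
    exactly one pattern."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)
    return [((gray >> b) & 1).astype(np.uint8) for b in range(n_bits - 1, -1, -1)]

def gray_to_binary(g):
    """Convert a Gray-coded integer back to a plain binary index."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

patterns = gray_code_patterns(width=8, n_bits=3)
# Decode column 5 from the bits observed at that pixel in each pattern.
bits = [int(p[5]) for p in patterns]
decoded = gray_to_binary(int("".join(map(str, bits)), 2))
print(decoded)  # 5
```

In a real projector-camera system the same decoding is applied per camera pixel, yielding the dense projector-camera correspondence the abstract refers to.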

11.
In this paper we present a method for the calibration of multiple cameras based on the extraction and use of the physical characteristics of a one-dimensional invariant pattern defined by four collinear markers. The advantages of this kind of pattern stand out in two key steps of the calibration process. In the initial step, sample-point capture, the proposed method exploits a new technique for capturing and recognizing a robust sample of projective invariant patterns, which makes it possible to capture more than one invariant pattern in the tracking area simultaneously and to recognize each pattern individually, as well as each marker that composes it. This process is executed in real time while the sample of calibration points is captured by the cameras of the system, and it allows a larger and more robust set of sample points to be captured than with other patterns used in multi-camera calibration methods. In the last step, camera parameter optimization, we exploit the collinearity of the invariant pattern and add it to the optimization model, which yields better estimates of the camera parameters. We present the results obtained by calibrating two multi-camera systems with the proposed method and compare them with other methods from the literature.

12.
Research on a Visible-Hand Reconstruction Algorithm for a Monocular System
A system consisting of a single camera and a single planar mirror is first established, and the basic theory and methods for 3D reconstruction under this configuration are then studied. Four key problems are examined: (1) extraction of the hand silhouette; (2) acquisition of correspondences; (3) the basic method of 3D reconstruction; (4) the calibration algorithm. By revealing the relationship among four entities, namely the projection of a space point onto the image plane, the projection of that point's mirror image onto the same image plane, the mirror plane, and the space point itself, a new 3D reconstruction method is obtained. The method is convenient for both theoretical analysis and implementation; it keeps the calibration procedure simple while preserving the accuracy of the 3D reconstruction.

13.
Research and Progress on Camera Self-Calibration Methods
This paper reviews the development of camera self-calibration techniques in recent years and surveys the main methods by category. In contrast to traditional calibration methods, self-calibration requires no calibration object: the camera intrinsic parameters are estimated solely from correspondences between image points across views. The paper focuses on several important self-calibration methods under the perspective model, covering both the constant-intrinsics and the varying-intrinsics cases, and concludes with a brief introduction to several camera self-calibration methods for non-perspective models.

14.
This paper presents a study, based on conic correspondences, on the relationship between two perspective images acquired by an uncalibrated camera. We show that for a pair of corresponding conics, the parameters representing the conics satisfy a linear constraint. To be more specific, the parameters that represent a conic in one image are transformed by a five-dimensional projective transformation to the parameters that represent the corresponding conic in another image. We also show that this transformation is expressed as the symmetric component of the tensor product of the transformation based on point/line correspondences and itself. In addition, we present a linear algorithm for uniquely determining the corresponding point-based transformation from a given conic-based transformation up to a scale factor. Accordingly, conic correspondences enable us to easily handle both points and lines in uncalibrated images of a planar object.

15.
We propose a new algorithm for model-based extrinsic camera calibration that allows one to separate the recovery of the relative orientation of the camera from the recovery of its relative position, given a set of at least three correspondences between model and image points. The key idea is to replace each (real) model point whose correspondence is known by two (virtual) model edges, and then to use the fact that these edges have pairwise intersections in 3D space to derive a set of alignment constraints. We provide a proof that the resulting technique is essentially more powerful than any of the traditional methods for decoupled orientation and position recovery based uniquely on line correspondences. We also present a detailed example of a real-life application that benefits from our work, namely autonomous navigation using distant visual landmarks. We use simulation to show that, for this specific application, our algorithm, when compared to similar techniques, is either significantly more accurate at the same computational cost, or significantly faster with roughly the same average-case accuracy.

16.
Inspired by Zhang's work on flexible calibration technique, a new easy technique for calibrating a camera based on circular points is proposed. The proposed technique only requires the camera to observe a newly designed planar calibration pattern (referred to as the model plane hereinafter) which includes a circle and a pencil of lines passing through the circle's center, at a few (at least three) different unknown orientations, then all the five intrinsic parameters can be determined linearly. The main advantage of our new technique is that it needs to know neither any metric measurement on the model plane, nor the correspondences between points on the model plane and image ones, hence the whole calibration process becomes extremely simple. The proposed technique is particularly useful for those people who are not familiar with computer vision. Experiments with simulated data as well as with real images show that our new technique is robust and accurate.

17.
《Graphical Models》2001,63(5):277-303
Camera calibration is the estimation of parameters (both intrinsic and extrinsic) associated with a camera being used for imaging. Given the world coordinates of a number of precisely placed points in a 3D space, camera calibration requires the measurement of the 2D projection of those scene points on the image plane. While the coordinates of the points in space can be known precisely, the image coordinates that are determined from the digital image are often inaccurate and hence noisy. In this paper, we look at the statistics of the behavior of the camera calibration parameters, which are important for stereo matching, when the image plane measurements are corrupted by noise. We derive analytically the behavior of the camera calibration matrix under noisy conditions and further show that the elements of the camera calibration matrix have a Gaussian distribution if the noise introduced into the measurement system is Gaussian. Under certain approximations we derive relationships between the camera calibration parameters and the noisy camera calibration matrix and compare it with Monte Carlo simulations.
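A Monte Carlo experiment in the spirit of the analysis above can be sketched as follows: Gaussian pixel noise is injected into synthetic image measurements, the calibration matrix is re-estimated linearly each time, and the empirical spread of one matrix element is examined. All numbers here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def calib_matrix_lls(X3d, x2d):
    """Linear least-squares estimate of the calibration matrix P with the
    scale fixed by P[2,3] = 1, so the 11 remaining entries solve A p = b."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z]); b.append(v)
    p, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return np.append(p, 1.0).reshape(3, 4)

# Synthetic camera and scene (illustrative values only).
P_true = np.array([[800.0, 0, 320, 10],
                   [0, 800.0, 240, 20],
                   [0, 0, 1, 50]])
X3d = rng.uniform(-1, 1, size=(20, 3))
xh = (P_true @ np.c_[X3d, np.ones(20)].T).T
x2d = xh[:, :2] / xh[:, 2:]

# Gaussian pixel noise in -> empirical distribution of one element of P out.
samples = np.array([
    calib_matrix_lls(X3d, x2d + rng.normal(0.0, 0.5, x2d.shape))[0, 0]
    for _ in range(500)])
# With the normalization P[2,3] = 1, the true value of P[0,0] is 800/50 = 16.
print(samples.mean(), samples.std())
```

Plotting a histogram of `samples` would show the approximately Gaussian shape that the paper derives analytically.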

18.
One reason virtual reality applications have been slow to penetrate industrial production chains is the significant investment cost of adequate supporting hardware. As a consequence, such applications are only available to major companies and fail to benefit the production processes of small and medium enterprises. In this article, we introduce PTrack, a real-time, low-cost, marker-based multiple-camera tracking solution for virtual reality that provides the accuracy and scalability usually found in much more expensive tracking systems. PTrack is composed of single-camera tracking PTrack Units. Each unit is connected to a video camera equipped with infrared strobes and features a novel iterative geometric pose estimation algorithm that performs marker-based single-camera tracking and is therefore completely autonomous. Multiple PTrack Units successively extend the tracking range of the system. For a smooth transition of tracked labels from one camera to another, camera range areas must overlap to form a contiguous tracking space. A PTrack Sensor Fusion Module then computes the pose of a given label within the tracking space and forwards it to interested applications. A universal test setup for optical tracking systems has been built that allows the translational and rotational accuracy of PTrack, as well as of competing systems, to be measured.

19.
A geometric method is proposed for establishing the camera pose from four coplanar points and their images under the radial constraint of lens distortion. Random sample consensus (RANSAC) and the consistency constraint on the camera intrinsic parameters across multiple viewpoints are used to improve the stability and accuracy of the computation. It is pointed out that verifying the correctness of a camera pose using only the camera's forward and back projection relations is insufficient, and the motion transformation between viewpoints is proposed as an important criterion for evaluating the accuracy of the corresponding camera poses.

20.
Calibration-free augmented reality
Camera calibration and the acquisition of Euclidean 3D measurements have so far been considered necessary requirements for overlaying three-dimensional graphical objects with live video. We describe a new approach to video-based augmented reality that avoids both requirements: it does not use any metric information about the calibration parameters of the camera or the 3D locations and dimensions of the environment's objects. The only requirement is the ability to track across frames at least four fiducial points that are specified by the user during system initialization and whose world coordinates are unknown. Our approach is based on the following observation: given a set of four or more noncoplanar 3D points, the projection of all points in the set can be computed as a linear combination of the projections of just four of the points. We exploit this observation by: tracking regions and color fiducial points at frame rate; and representing virtual objects in a non-Euclidean, affine frame of reference that allows their projection to be computed as a linear combination of the projection of the fiducial points. Experimental results on two augmented reality systems, one monitor-based and one head-mounted, demonstrate that the approach is readily implementable, imposes minimal computational and hardware requirements, and generates real-time and accurate video overlays even when the camera parameters vary dynamically.
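The linear-combination property above can be illustrated with a simplified affine camera model: affine combinations of 3D points (coefficients summing to 1) project to the same combination of the projected points. This is a sketch of the underlying property only; the paper's full formulation uses an affine frame of reference built from the tracked fiducials.

```python
import numpy as np

rng = np.random.default_rng(2)

# Affine camera: x = A X + b. Affine combinations of 3D points project
# to the same affine combinations of their images.
A = rng.normal(size=(2, 3))
b = rng.normal(size=2)
project = lambda X: A @ X + b

basis = rng.normal(size=(4, 3))           # four noncoplanar fiducial points
coeffs = np.array([0.3, -0.2, 0.5, 0.4])  # coefficients summing to 1
X = coeffs @ basis                        # a new 3D point in the affine frame

direct = project(X)                                       # project, then done
combined = coeffs @ np.array([project(p) for p in basis]) # combine projections
print(np.allclose(direct, combined))  # True
```

Since the combination coefficients are independent of the camera, a virtual point expressed in this affine frame can be re-projected in every new frame from the tracked fiducial projections alone, with no calibration.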

Copyright © 北京勤云科技发展有限公司 · 京ICP备09084417号