Similar Literature
Found 20 similar records (search time: 375 ms)
1.
Using vanishing points for camera calibration   (Total citations: 43; self-citations: 1; citations by others: 42)
In this article a new method is presented for calibrating a vision system that consists of two (or more) cameras. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector describing the rigid motion between the coordinate systems fixed in the two cameras, are estimated from a stereo image pair of a suitable planar pattern. First, the rotation matrix is computed by matching the corresponding vanishing points in the two images; the translation vector is then estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision achievable with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, such as depth from stereo and motion from image sequences.
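The core constraint behind the first step above (a sketch, not the authors' full algorithm) is that the vanishing points of two orthogonal scene directions, such as a pair of cube edges, determine the focal length once the principal point is known. All function and variable names below are illustrative assumptions:

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    # For vanishing points of two orthogonal scene directions, a pinhole
    # camera with zero skew and square pixels satisfies
    #   (v1 - p) . (v2 - p) + f^2 = 0,
    # so the focal length follows directly from a single cube image.
    d1 = np.asarray(v1, dtype=float) - principal_point
    d2 = np.asarray(v2, dtype=float) - principal_point
    s = -np.dot(d1, d2)
    if s <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(s))

# Synthetic check: directions (1, 0, 1) and (-1, 0, 1) are orthogonal;
# with f = 800 and principal point (320, 240) their vanishing points are
# (320 + 800, 240) and (320 - 800, 240).
p = np.array([320.0, 240.0])
f = focal_from_vanishing_points((1120.0, 240.0), (-480.0, 240.0), p)
```

With noisy vanishing points one would average this constraint over all three orthogonal edge-direction pairs of the cube.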

2.
This paper presents a framework and the associated algorithms for performing 3D scene analysis from a single image with lens distortion. Previous work focuses either on making 3D measurements under the assumption of one or more ideal pinhole cameras or on correcting the lens distortion up to a projective transformation with no additional metric analysis. In this work, we bridge the gap between these two lines of work by incorporating metric constraints into lens distortion correction to achieve metric calibration. Lens distortion parameters, especially the lens distortion center, can be precisely recovered with this approach. Subsequent 3D measurements can be made from the corrected image to recover scene structure. In addition, we propose an algorithm based on hybrid backward and forward covariance propagation to yield a quantitative analysis of the confidence of the results. Experimental results show that our approach simultaneously performs image correction and 3D scene analysis.

3.
We present algorithms for plane-based calibration of general radially distorted cameras. By this, we understand cameras that have a distortion center and an optical axis such that the projection rays of pixels lying on a circle centered on the distortion center form a right viewing cone centered on the optical axis. The camera is said to have a single viewpoint (SVP) if all such viewing cones have the same apex (the optical center); otherwise, we speak of NSVP cases. This model encompasses the classical radial distortion model [5], fisheyes, and most central or noncentral catadioptric cameras. Calibration consists of estimating the distortion center, the opening angles of all viewing cones, and their optical centers. We present two approaches for computing a full calibration from dense correspondences of a single plane or multiple planes with known Euclidean structure. The first is based on a geometric constraint linking viewing cones and their intersections with the calibration plane (conic sections). The second is a homography-based method. Experiments using simulated data and a broad variety of real cameras show great stability. Furthermore, we provide a comparison with Hartley-Kang's algorithm [12], which shows similar performance but cannot handle such a broad variety of camera configurations.

4.
To calibrate cameras accurately, lens distortion models have to be included in the calibration procedure. Usually, the lens distortion models used in camera calibration depend on radial functions of image pixel coordinates. Such models are well known, simple, and can be estimated using image information alone. However, these models do not take into account an important physical constraint of the lens distortion phenomenon, namely, that the amount of lens distortion induced at an image point depends on the depth of the scene point with respect to the camera projection plane. In this paper we propose a new, accurate, depth-dependent lens distortion model. To validate this approach, we apply the new lens distortion model to camera calibration in planar view scenarios (that is, 3D scenarios where the objects of interest lie on a plane). We present promising experimental results on planar pattern images and on sports-event scenarios. Although we emphasize the feasibility of the method for planar view scenarios, the proposed model is valid in general and can be used in any scenario where the point depth can be estimated.

5.
In this paper we show how to carry out an automatic alignment of a pan-tilt camera platform with its natural coordinate frame, using only images obtained from the cameras during controlled motion of the unit. An active camera in aligned orientation represents the zero position for each axis and allows axis odometry to be referred to a fixed reference frame; such referral is otherwise possible only by mechanical means, such as end-stops, which cannot take account of the unknown relationship between the camera coordinate frame and its mounting. The algorithms presented involve the calculation of two-view transformations (homographies or epipolar geometry) between pairs of images related by controlled rotation about individual head axes. From these relationships, which can be calculated linearly or optimised iteratively, a line invariant to the motion can be extracted which represents an aligned viewing direction. We present methods for general and degenerate motion (translating or non-translating) and for general and degenerate scenes (non-planar and planar, but otherwise unknown), which do not require knowledge of the camera calibration and are resistant to lens distortion non-linearity. Detailed experimentation in simulation and in real scenes demonstrates the speed, accuracy, and robustness of the methods, with the advantages of applicability to a wide range of circumstances and no need for calibration objects or complex motions. Accuracy within half a degree can be achieved with a single motion, and we also show how to improve on this by incorporating images from further motions, using a natural extension of the basic algorithm.
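One way to see how an invariant viewing direction falls out of a two-view relationship: a homography induced by pure rotation is conjugate to the rotation matrix, so its real eigenvector is the image of the rotation-axis direction. A hedged numpy sketch (the `rodrigues` helper and all names are illustrative, not the paper's implementation):

```python
import numpy as np

def rodrigues(axis, theta):
    # Rotation matrix about a unit axis by angle theta (Rodrigues formula).
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    A = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * A + (1 - np.cos(theta)) * (A @ A)

def axis_image_from_homography(H):
    # A homography from pure rotation is conjugate to R (H ~ K R K^-1), so
    # its eigenvalues are {1, exp(+it), exp(-it)} up to scale.  The
    # eigenvector of the real eigenvalue is K a: the image of the rotation
    # axis, i.e. the invariant viewing direction.  Assumes the axis is not
    # parallel to the image plane (otherwise the point lies at infinity).
    w, V = np.linalg.eig(H)
    k = int(np.argmin(np.abs(w.imag)))
    v = V[:, k].real
    return v / v[2]

# Synthetic check: build H = K R K^-1 and recover the image of the axis.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
a = np.array([0.0, 1.0, 1.0])
H = K @ rodrigues(a, 0.3) @ np.linalg.inv(K)
vp = axis_image_from_homography(H)
```

For the axis above, K a normalises to (320, 1040, 1), which the eigenvector recovers without knowing K or the rotation angle.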

6.
Image registration spatially aligns two or more images of the same scene; it is widely used in image analysis, for example in medicine, remote-sensing image analysis, image fusion, image retrieval, and object recognition. The main problem in image registration is matching difficulty, and one of its causes is distortion in the images to be matched: relative to the reference image, the image to be registered may share large identical regions or exhibit translation, rotation, scaling, and other distortions. To handle these distortions, this paper proposes the following solution: camera calibration information is used to solve for the translation matrix, rotation matrix, and scale factor of the distortion, and the distortion is corrected by applying the inverse transformation. However, the calibration information often changes when the camera moves; for this problem, the paper determines the camera calibration information by solving for the pan-tilt rotation angles at multiple viewing angles. Experiments show that the proposed scheme resolves several distortion problems in the registration stage and yields high registration accuracy.

7.
Generalized Camera Calibration Including Fish-Eye Lenses   (Total citations: 1; self-citations: 0; citations by others: 1)
A method is described for accurately calibrating cameras including radial lens distortion, by using known points such as those measured from a calibration fixture. Both the intrinsic and extrinsic parameters are calibrated in a single least-squares adjustment, but provision is made for including old values of the intrinsic parameters in the adjustment. The distortion terms are relative to the optical axis, which is included in the model so that it does not have to be orthogonal to the image sensor plane. These distortion terms represent corrections to the basic lens model, which is a generalization that includes the perspective projection and the ideal fish-eye lens as special cases. The position of the entrance pupil point as a function of off-axis angle also is included in the model. (The complete camera model including all of these effects often is called CAHVORE.) A way of adding decentering distortion also is described. A priori standard deviations can be used to apply weight to given initial approximations (which can be zero) for the distortion terms, for the difference between the optical axis and the perpendicular to the sensor plane, and for the terms representing movement of the entrance pupil, so that the solution for these is well determined when there is insufficient information in the calibration data. For the other parameters, initial approximations needed for the nonlinear least-squares adjustment are obtained in a simple manner from the calibration data and other known information. (Weight can be given to these also, if desired.) Outliers among the calibration points that disagree excessively with the other data are removed by means of automatic editing based on analysis of the residuals. The use of the camera model also is described, including partial derivatives for propagating both from object space to image space and vice versa. These methods were used to calibrate the cameras on the Mars Exploration Rovers.

8.
To address the change of extrinsic camera parameters caused by camera rotation in free binocular stereo vision, a dynamic extrinsic-parameter acquisition method based on rotation-axis calibration is proposed. Stereo calibration at several different positions yields multiple rotation-translation matrix pairs, from which the rotation-axis parameters are solved by least squares; combining the intrinsic and extrinsic parameters of the left and right cameras at the initial position with the rotation angle, the extrinsic parameters of both cameras are then obtained in real time. Using the proposed method to acquire the dynamic extrinsic parameters and reconstructing checkerboard corners in 3D gives a mean error of 0.241 mm and a standard deviation of 0.156 mm; compared with a calibration method based on multi-plane targets, it is more accurate and simpler to operate. The proposed method requires no on-line calibration and can acquire dynamic extrinsic parameters while the cameras rotate.
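The rotation-axis least-squares step described above can be sketched with the standard identity (R - I) a = 0, which holds for any rotation R about axis a: stacking this constraint over all measured rotations gives an ordinary null-space problem. Function names are illustrative, not the paper's implementation:

```python
import numpy as np

def rodrigues(axis, theta):
    # Rotation matrix about a unit axis by angle theta (Rodrigues formula).
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    A = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * A + (1 - np.cos(theta)) * (A @ A)

def common_rotation_axis(rotations):
    # Every rotation about axis a satisfies (R - I) a = 0.  Stack (R_i - I)
    # for all measured rotations; the right singular vector of the smallest
    # singular value is the least-squares estimate of the shared axis.
    A = np.vstack([R - np.eye(3) for R in rotations])
    axis = np.linalg.svd(A)[2][-1]
    return axis / np.linalg.norm(axis)

# Synthetic check: several rotations about one axis, recovered up to sign.
true_axis = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
Rs = [rodrigues(true_axis, t) for t in (0.2, 0.5, -0.4)]
est_axis = common_rotation_axis(Rs)
```

With noisy calibration data the SVD averages the constraint over all position pairs, which is the role the least-squares fit plays in the method above.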

9.
Many recent applications of computer graphics and human-computer interaction have adopted both colour cameras and depth cameras as input devices. An effective calibration of both types of hardware, taking different colour and depth inputs, is therefore required. Our approach removes the numerical difficulties of the non-linear optimization used in previous methods, which explicitly solve for the camera intrinsics as well as the transformation between the depth and colour cameras. A matrix of hybrid parameters is introduced to linearize our optimization. The hybrid parameters offer a transformation from a depth parametric space (depth camera image) to a colour parametric space (colour camera image) by combining the intrinsic parameters of the depth camera with a rotation transformation from the depth camera to the colour camera. Both the rotation transformation and the intrinsic parameters can be explicitly calculated from our hybrid parameters with the help of a standard QR factorisation. We test our algorithm with both synthesized data and real-world data, where ground-truth depth information is captured by a Microsoft Kinect. The experiments show that our approach provides calibration accuracy comparable with the state-of-the-art algorithms while taking much less computation time (1/50 of Herrera's method and 1/10 of Raposo's method), thanks to the hybrid parameters.
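The QR-based separation mentioned above can be illustrated with a standard RQ factorisation, which splits a product of an upper-triangular intrinsic matrix and a rotation back into its factors. This is a generic sketch of that decomposition, not the paper's hybrid-parameter construction:

```python
import numpy as np

def rq(M):
    # RQ factorisation M = K R with K upper triangular (positive diagonal)
    # and R orthonormal, built from numpy's QR via a row/column reversal.
    P = np.fliplr(np.eye(3))
    Q, U = np.linalg.qr((P @ M).T)   # (P M)^T = Q U  =>  M = (P U^T P)(P Q^T)
    K = P @ U.T @ P                  # upper triangular
    R = P @ Q.T                      # orthonormal
    S = np.diag(np.sign(np.diag(K))) # force a positive diagonal on K
    return K @ S, S @ R

# Synthetic check: an intrinsic-like matrix times a rotation is recovered.
K_true = np.array([[800.0, 1.5, 320.0], [0.0, 810.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.2), np.sin(0.2)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
K_est, R_est = rq(K_true @ R_true)
```

The sign fix is needed because numpy's QR leaves the diagonal signs of the triangular factor arbitrary; multiplying both factors by the same diagonal of ±1 leaves their product unchanged.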

10.
Automatic Radial Distortion Estimation from a Single Image   (Total citations: 1; self-citations: 0; citations by others: 1)
Many computer vision algorithms rely on the assumptions of the pinhole camera model, but lens distortion with off-the-shelf cameras is usually significant enough to violate this assumption. Many methods for radial distortion estimation have been proposed, but they all have limitations. Robust automatic radial distortion estimation from a single natural image would be extremely useful for many applications, particularly those in human-made environments containing abundant lines. For example, it could be used in place of an extensive calibration procedure to get a mobile robot or quadrotor experiment up and running quickly in an indoor environment. We propose a new method for automatic radial distortion estimation based on the plumb-line approach. The method works from a single image and does not require a special calibration pattern. It is based on Fitzgibbon’s division model, robust estimation of circular arcs, and robust estimation of distortion parameters. We perform an extensive empirical study of the method on synthetic images. We include a comparative statistical analysis of how different circle fitting methods contribute to accurate distortion parameter estimation. We finally provide qualitative results on a wide variety of challenging real images. The experiments demonstrate the method’s ability to accurately identify distortion parameters and remove distortion from images.
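Fitzgibbon's division model used by the method maps a distorted point back to an undistorted one with a single parameter. A minimal sketch (names are illustrative):

```python
import numpy as np

def undistort_division(points, center, lam):
    # Fitzgibbon's one-parameter division model: an undistorted point is
    #   p_u = c + (p_d - c) / (1 + lam * ||p_d - c||^2),
    # with c the distortion center and lam the distortion parameter
    # (lam < 0 for typical barrel distortion).
    d = np.asarray(points, dtype=float) - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d / (1.0 + lam * r2)

center = np.array([0.0, 0.0])
pts = np.array([[100.0, 0.0], [0.0, 50.0]])
out = undistort_division(pts, center, -1e-6)
```

Under the plumb-line idea, lam is chosen so that the undistorted projections of known straight scene lines become straight; the model makes this attractive because distorted straight lines are circular arcs.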

11.
Implicit and explicit camera calibration: theory and experiments   (Total citations: 22; self-citations: 0; citations by others: 22)
By implicit camera calibration, we mean the process of calibrating a camera without explicitly computing its physical parameters. Implicit calibration can be used for both three-dimensional (3-D) measurement and generation of image coordinates. In this paper, we present a new implicit model based on the generalized projective mappings between the image plane and two calibration planes. The back-projection and projection processes are modelled separately to ease the computation of distorted image coordinates from known world points. A set of perspectivity constraints is derived to relate the transformation parameters of the two calibration planes. Under the assumption of the radial distortion model, we present a computationally efficient method for explicitly correcting the distortion of image coordinates in the frame buffer without computing the camera position and orientation. Combined with any linear calibration technique, this method makes the physical camera parameters explicit. An extensive experimental comparison of our methods with the classic photogrammetric method and Tsai's (1986) method, in terms of 3-D measurement (both absolute and relative errors), prediction of image coordinates, and the effect of the number of calibration points, is made using real images from 15 different depth values.

12.
Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. Lines and spheres in space all project to conics in the central catadioptric image plane; such conics are called line images and sphere images, respectively. We discovered that there exists an imaginary conic in the central catadioptric image plane, defined as the modified image of the absolute conic (MIAC), and that by utilizing the MIAC, novel identical projective geometric properties of line images and sphere images may be exploited: each line image and each sphere image is in double contact with the MIAC, analogous to the result for pinhole cameras that the image of the absolute conic (IAC) is in double contact with sphere images. Note that the IAC also exists in the central catadioptric image plane, but it does not have the double-contact property with line images or sphere images; this is the main reason to propose the MIAC. From these geometric properties of the MIAC, two linear calibration methods for central catadioptric cameras, one using sphere images and one using line images, are proposed in the same framework. There are many linear approaches to central catadioptric camera calibration using line images, and using the tangency of line images to the MIAC may seem to lead only to an alternative geometric construction for calibration. For sphere images, however, only nonlinear calibration methods exist in the literature, so proposing linear methods for sphere images may be the main contribution of this paper. Our new algorithms have been tested in extensive experiments with respect to noise sensitivity.

13.
Hybrid central catadioptric and perspective camera systems are desirable in practice, because they can capture a large field of view as well as high-resolution images. However, calibrating such a system is challenging due to the heavy distortions of catadioptric cameras. In addition, previous calibration methods are only suitable for systems consisting of perspective cameras and catadioptric cameras with parabolic mirrors, and they require priors about the intrinsic parameters of the perspective cameras. In this work, we provide a new approach to handle these problems. We show that if the hybrid camera system consists of at least two central catadioptric cameras and one perspective camera, both the intrinsic and extrinsic parameters of the system can be calibrated linearly without priors about the intrinsic parameters of the perspective cameras, and our method supports more generic central catadioptric cameras. An approximated polynomial model is derived and used for rectification of the catadioptric images. First, with the epipolar geometry between the perspective and rectified catadioptric images, the distortion parameters of the polynomial model can be estimated linearly. Then a new method is proposed to estimate the intrinsic parameters of a central catadioptric camera from the parameters of the polynomial model, so that the catadioptric cameras can be calibrated. Finally, a linear self-calibration method for the hybrid system is given using the calibrated catadioptric cameras. The main advantage of our method is that it not only calibrates both the intrinsic and extrinsic parameters of the hybrid camera system, but also reduces the traditionally nonlinear self-calibration of perspective cameras to a linear process. Experiments show that our proposed method is robust and reliable.

14.
The main challenges of image steganography are imperceptibility in the cover image and recoverability of the secret data. To deal with these challenges, a modified digital image steganography technique based on the Discrete Wavelet Transform (DWT) is proposed. In the proposed approach, two new concepts are introduced to minimize the distortion of the cover image. The first, a secret-key computation concept, makes the scheme more robust and resistant to steganalysis. The second, a blocking concept, ensures the least variation in the cover image. The proposed approach is tested over ten different cover images and two secret images. Its performance is compared with six well-known steganography techniques. The experimental results reveal that the proposed approach outperforms the existing techniques in terms of imperceptibility, security, and quality measures. Six image-processing attacks are also applied to the stego-image to test the robustness of the proposed approach, and the effects of compression, rotation, and the application of different wavelets have also been investigated. The results demonstrate the robustness of the proposed approach under different image-processing attacks, and both the stego-image and the extracted secret images possess good visual quality.
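To illustrate the general DWT-domain embedding idea (not the paper's secret-key or blocking concepts), here is a self-contained sketch using a hand-rolled one-level Haar transform and parity quantisation of the HH sub-band; all names and the quantisation scheme are assumptions for illustration:

```python
import numpy as np

def haar2d(x):
    # One-level 2D Haar transform (x must have even height and width).
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,   # LL: approximation
            (a[:, 0::2] - a[:, 1::2]) / 2.0,   # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,   # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0)   # HH: diagonal detail

def ihaar2d(LL, LH, HL, HH):
    # Exact inverse of haar2d.
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((2 * h, 2 * w))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def embed_bits(HH, bits, q=4.0):
    # Move each used HH coefficient to the centre of a quantisation bin
    # whose parity encodes one secret bit.
    out = HH.copy().ravel()
    k = np.floor(out[:len(bits)] / q)
    k = np.where(k % 2 == bits, k, k + 1)
    out[:len(bits)] = (k + 0.5) * q
    return out.reshape(HH.shape)

def extract_bits(HH, n, q=4.0):
    # Recover bits from the parity of the quantisation-bin index.
    return (np.floor(HH.ravel()[:n] / q) % 2).astype(int)

# Demo: embed into the HH band of a cover, rebuild the stego image,
# re-transform, and extract.
rng = np.random.default_rng(0)
cover = rng.uniform(0.0, 255.0, (8, 8))
LL, LH, HL, HH = haar2d(cover)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
stego = ihaar2d(LL, LH, HL, embed_bits(HH, bits))
recovered = extract_bits(haar2d(stego)[3], len(bits))
```

Embedding in detail coefficients rather than pixels is what keeps the visible change to the cover small; the paper's key and blocking steps would sit on top of a scheme like this.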

15.
Extrinsic calibration of heterogeneous cameras by line images   (Total citations: 1; self-citations: 0; citations by others: 1)
Extrinsic calibration refers to determining the relative pose of cameras. Most approaches for cameras with non-overlapping fields of view (FOV) are based on mirror reflection, object tracking, or the rigidity constraint of stereo systems, whereas cameras with overlapping FOV can be calibrated using structure-from-motion solutions. We propose an extrinsic calibration method within a structure-from-motion framework for cameras with overlapping FOV, and its extension to cameras with partially non-overlapping FOV. Recently, omnidirectional vision has become a popular topic in computer vision, as an omnidirectional camera can cover a large FOV in one image. Combining the good resolution of perspective cameras with the wide observation angle of omnidirectional cameras has been an attractive trend in multi-camera systems. For this reason, we present an approach that is applicable to heterogeneous types of vision sensors. Moreover, this method utilizes images of lines, as these features possess several advantageous characteristics over point features, especially in urban environments. The calibration consists of a linear estimation of the orientation and position of the cameras and, optionally, bundle adjustment to refine the extrinsic parameters.

16.
Three-dimensional reconstruction of colored targets with a Time-of-Flight (TOF) camera requires calibrating the geometric parameters of a joint CCD-TOF camera system. Building on existing calibration algorithms based on color images and TOF depth images, a calibration method based on a planar checkerboard template is proposed. Color images and amplitude images of a color checkerboard pattern fixed on a planar calibration template are captured at different angles, and Harris corner extraction is improved. From the conjugate relation between the checkerboard corners and the virtual image points, a camera-calibration system model is established and solved with the Levenberg-Marquardt algorithm, and calibration experiments are carried out. The intrinsic parameters of the TOF and CCD cameras are obtained, the relative pose of the two camera coordinate systems is estimated from the pose relation between the image planes, and a final joint optimization yields the rotation matrix and translation vector between the cameras. Experimental results show that the proposed algorithm streamlines the solution process, improves calibration efficiency, and achieves high accuracy.

17.
We propose a method of simultaneously calibrating the radial distortion function of a camera and the other internal calibration parameters. The method relies on the use of a planar (or, alternatively, nonplanar) calibration grid which is captured in several images. In this way, the determination of the radial distortion is an easy add-on to the popular calibration method proposed by Zhang [24]. The method is entirely noniterative and, hence, is extremely rapid and immune to the problem of local minima. Our method determines the radial distortion in a parameter-free way, not relying on any particular radial distortion model. This makes it applicable to a large range of cameras from narrow-angle to fish-eye lenses. The method also computes the center of radial distortion, which, we argue, is important in obtaining optimal results. Experiments show that this point may be significantly displaced from the center of the image or the principal point of the camera.
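The plane-based stage this method builds on (Zhang's calibration) starts from plane-to-image homographies. A minimal direct-linear-transform (DLT) homography estimator, sketched with illustrative names:

```python
import numpy as np

def homography_dlt(src, dst):
    # Estimate the 3x3 homography H with dst ~ H @ src (homogeneous),
    # from >= 4 point correspondences, via the direct linear transform:
    # each pair contributes two rows of A, and h is the null vector of A.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: map points through a known homography and recover it.
H_true = np.array([[1.0, 0.1, 5.0], [0.05, 1.2, -3.0], [0.001, 0.002, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 3.0]])
hom = np.c_[src, np.ones(len(src))] @ H_true.T
dst = hom[:, :2] / hom[:, 2:]
H_est = homography_dlt(src, dst)
```

For real pixel data, Hartley-style coordinate normalisation before the SVD markedly improves conditioning; it is omitted here for brevity.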

18.
Plenoptic cameras are a new type of sensor that captures the four-dimensional lightfield of a scene. By processing the recorded lightfield, they extend the capabilities of current commercial cameras. Conventional cameras obtain photographs focused at a fixed depth. Such a photograph can be described as a projection of the four-dimensional lightfield onto two spatial dimensions, and the collection of such images is the focal stack of the scene. The focal stack can be used to select an image refocused at a certain depth, to recover 3D information, or to obtain all-in-focus images. There are several approaches to the computation of the focal stack. In this paper we propose a new technique to compute the focal stack by means of its frequency decomposition, which can be seen as an extension of the Discrete Focal Stack Transform (DFST). This new approach decreases the computational complexity of the DFST while maintaining efficient memory use. Experimental results are provided to show the validity of the technique, and its extension to 3D processing and all-in-focus image computation is also studied.
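A focal-stack slice at one depth can be approximated in the spatial domain by shift-and-add over the sub-aperture views of the lightfield; the paper works in the frequency domain instead, so this is only a baseline sketch with assumed names:

```python
import numpy as np

def refocus(subviews, positions, alpha):
    # Shift-and-add refocusing: each sub-aperture view is translated in
    # proportion to its aperture coordinate (u, v) and the refocus
    # parameter alpha, then the views are averaged.  Integer shifts via
    # np.roll keep the sketch short; real pipelines interpolate, or work
    # in the frequency domain as DFST-style methods do.
    out = np.zeros(np.shape(subviews[0]), dtype=float)
    for img, (u, v) in zip(subviews, positions):
        shift = (int(round(alpha * u)), int(round(alpha * v)))
        out += np.roll(img, shift, axis=(0, 1))
    return out / len(subviews)

# Demo: a view and its row-shifted copy realign at alpha = 1, so the
# refocused image equals the original pattern.
views = [np.eye(4), np.roll(np.eye(4), 1, axis=0)]
stack_slice = refocus(views, [(0, 0), (-1, 0)], 1.0)
```

Sweeping alpha over a range of values produces the whole focal stack, from which all-in-focus composites or depth estimates can be derived.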

19.
Minimal Aspect Distortion (MAD) Mosaicing of Long Scenes   (Total citations: 2; self-citations: 0; citations by others: 2)
Long scenes can be imaged by mosaicing multiple images from cameras scanning the scene. We address the case of a video camera scanning a scene while moving along a long path, e.g. scanning a city street from a driving car, or scanning terrain from a low-flying aircraft. A robust approach to this task is presented, which has been applied successfully to sequences of thousands of frames, even when using a hand-held camera. Examples are given on a few challenging sequences. The proposed system consists of two components: (i) motion and depth computation, and (ii) mosaic rendering. In the first part a “direct” method is presented for computing motion and dense depth. Robustness of the motion computation is increased by limiting the motion model of the scanning camera. An iterative graph-cuts approach, with planar labels and a flexible similarity measure, allows the computation of dense depth for the entire sequence. In the second part a new minimal aspect distortion (MAD) mosaicing uses depth to minimize the geometrical distortions of long panoramic images. In addition to MAD mosaicing, interactive visualization using X-Slits is also demonstrated. This research was supported by the Israel Science Foundation.

20.
In this paper, we discuss the problem of estimating the parameters of a calibration model for active pan–tilt–zoom cameras. The variation of the intrinsic parameters of each camera over its full range of zoom settings is estimated through a two-step procedure. We first determine the intrinsic parameters at the camera’s lowest zoom setting very accurately by capturing an extended panorama. The camera intrinsics and radial distortion parameters are then determined at discrete steps of a monotonically increasing zoom sequence that spans the full zoom range of the camera. Our model incorporates the variation of radial distortion with camera zoom. Both calibration phases are fully automatic and do not assume any knowledge of the scene structure. High-resolution calibrated panoramic mosaics are also computed during this process. These fully calibrated panoramas are represented as multi-resolution pyramids of cube-maps. We describe a hierarchical approach for building multiple levels of detail in panoramas by aligning hundreds of images captured within a 1–12× zoom range. Results are shown on datasets captured from two types of pan–tilt–zoom cameras placed in an uncontrolled outdoor environment. The estimated camera intrinsics model, along with the cube-maps, provides a calibration reference for images captured on the fly by the active pan–tilt–zoom camera under operation, making our approach promising for active camera network calibration.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号