Similar Documents
20 similar documents were found (search time: 46 ms).
1.
A new approach to camera calibration using vanishing line information for three-dimensional computer vision is proposed. Calibrated parameters include the orientation, the position, the focal length, and the image plane center of a camera. A rectangular parallelepiped is employed as the calibration target to generate three principal vanishing points and then three vanishing lines from the projected image of the parallelepiped. Only a monocular image is required for solving these camera parameters. It is shown that the image plane center is the orthocenter of a triangle formed by the three vanishing lines. From the slopes of the vanishing lines the camera orientation parameters can be determined. The focal length can be computed by the area of the triangle. The camera position parameters can then be calibrated by using related geometric projective relationships. The derived results show the geometric meanings of these camera parameters. The calibration formulas are analytic and simple to compute. Experimental results show the feasibility of the proposed approach for a practical application: autonomous land vehicle guidance. This work was supported by National Science Council, Republic of China under Grant NSC-77-0404-E-009-31.
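As a rough numerical sketch of the geometry described above, the code below assumes square pixels, zero skew, and three finite vanishing points of mutually orthogonal directions; it recovers the principal point as the orthocenter of the vanishing-point triangle and the focal length from the equivalent pairwise orthogonality constraint rather than the triangle-area formula in the abstract. The coordinates in the last call are made up for illustration.

    import numpy as np

    def calibrate_from_orthogonal_vps(v1, v2, v3):
        # principal point = orthocenter of the triangle formed by the three
        # vanishing points; intersect two altitudes (perpendiculars dropped
        # from a vertex onto the opposite side)
        v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
        A = np.stack([v3 - v2, v3 - v1])
        b = np.array([(v3 - v2) @ v1, (v3 - v1) @ v2])
        p = np.linalg.solve(A, b)
        # focal length from the orthogonality constraint (v1 - p).(v2 - p) = -f^2
        f_sq = -(v1 - p) @ (v2 - p)
        if f_sq <= 0:
            raise ValueError("vanishing points are not consistent with orthogonal directions")
        return p, float(np.sqrt(f_sq))

    # hypothetical pixel coordinates, not taken from the paper
    principal_point, focal = calibrate_from_orthogonal_vps(
        (1200.0, 380.0), (-450.0, 410.0), (640.0, 4800.0))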

2.
This paper presents a method for computing the extrinsic camera parameters from measurements of spatial invariants. Spatial perspective invariants are shape descriptions that remain unchanged under geometric transformations such as projection or a change of viewpoint. Because they provide a feature description of an object or scene that is independent of external conditions, they are widely applicable in computer vision. Camera calibration determines the transformation between the 2D image information captured by a camera and the corresponding 3D scene, and comprises intrinsic and extrinsic parameters. The intrinsic parameters characterize the camera's internal and optical properties, including the image center coordinates (Cx, Cy), the image scale factor Sx, the effective focal length f, and the lens distortion coefficient K. The extrinsic parameters describe the position and orientation of the camera in world coordinates, consisting of a translation vector T and a 3×3 rotation matrix R, usually written as an augmented 3×4 matrix [R T]. Based on data computed from spatial perspective invariants, this paper derives a method for calibrating the extrinsic camera parameters; experimental results show that the method is highly robust.
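As a small, self-contained illustration of the kind of perspective invariant this abstract builds on (not the paper's actual calibration procedure), the sketch below checks numerically that the cross-ratio of four collinear points is unchanged by an arbitrary 1D projective map; all numbers are made up.

    import numpy as np

    def cross_ratio(a, b, c, d):
        # cross-ratio of four collinear points given by scalar coordinates on the line
        return ((a - c) * (b - d)) / ((a - d) * (b - c))

    def projective_map_1d(x, m):
        # 1D projective (Moebius) map x -> (m0*x + m1) / (m2*x + m3)
        return (m[0] * x + m[1]) / (m[2] * x + m[3])

    pts = np.array([0.0, 1.0, 2.5, 4.0])      # arbitrary collinear points
    m = (2.0, -1.0, 0.3, 1.5)                 # arbitrary non-degenerate projective map
    assert np.isclose(cross_ratio(*pts), cross_ratio(*projective_map_1d(pts, m)))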

3.
Objective: Pan-tilt-zoom (PTZ) cameras play an important role in highway surveillance systems because of their wide field of view and high flexibility. However, because the focal length and viewing angle of a PTZ camera change with surveillance needs at irregular intervals, it is difficult to recover accurate real-world physical information from the camera's images, so research on off-site automatic calibration of PTZ cameras is of practical value for highway surveillance. Method: This paper proposes an automatic PTZ camera calibration method constrained by vanishing points and a lane-marking model, which establishes an accurate mapping between the image information of a highway surveillance system and real-world physical quantities. First, the longitudinal vanishing point is accurately estimated by cascaded Hough transform voting over vehicle trajectories. Next, with the physical dimensions of the lane-marking model as a constraint, an enumeration strategy yields an accurate estimate of the lateral vanishing point. Finally, given the camera height, the calibration parameters of the highway PTZ camera are computed. Results: Experiments in different scenes give average errors at different distances of 4.63%, 4.74%, 4.81%, and 4.65%, all below 5%. Conclusion: Tests on several highway surveillance scenes show that the proposed automatic PTZ camera calibration method keeps physical measurement errors within application requirements, offers clear advantages over the reference methods, and yields intrinsic and extrinsic parameters that can be used to compute vehicle speed, spatial position, and similar quantities.

4.
A geometric method for determining camera parameters from a single perspective image
A geometric method for solving the intrinsic and extrinsic camera parameters is proposed. Ignoring lens distortion, the method first takes a single rectangle on a spatial plane and, from its two-vanishing-point perspective image, uses the geometric relations among vanishing points to recover the coordinates of a third vanishing point. The relative position of the camera is then computed from the vanishing point theory of perspective projection; if the actual side lengths of the rectangle are known, the camera's true position in 3D space and the spatial coordinates of the rectangle's vertices can be recovered. Finally, the effective focal length of the camera is computed from the perspective projection relations. Extensive simulated and real experiments show that the method is simple, easy to apply, highly accurate, and robust.

5.
A vanishing-line based pose estimation method for unmanned helicopters
This paper applies vanishing-line based camera calibration principles to pose estimation for unmanned helicopters. A planar landing pattern containing four groups of parallel lines is designed, and the helicopter's position and attitude relative to the pattern are estimated from images of the pattern captured by the onboard camera. The paper first reviews the relevant theory of vanishing lines and derives the corresponding relational equations, then describes the image processing methods used and the measures taken to improve computational accuracy; finally, experimental results are given and the errors are briefly analyzed. The key technical issue is how to accurately measure the image-plane parameters of the vanishing points and vanishing lines, and the most notable feature of the algorithm is that its results are independent of changes in focal length. Experimental results show that the method has low algorithmic complexity, good real-time performance, and relatively high accuracy, making it suitable for pose estimation of unmanned helicopters.

6.
储珺  肖旭  梁辰 《图学学报》2016,37(6):783
Traditional methods can only compute the orthogonal vanishing points of calibrated images, and they ignore how errors in line detection, line lengths, and the positional relation between candidate vanishing points and their constraining lines affect detection accuracy. To address these problems, an orthogonal vanishing point detection method for a single uncalibrated image is proposed. First, J-Linkage is used to initialize the vanishing point estimates and obtain a set of hypothesis vanishing points. Then, based on the consistency constraint between hypothesis vanishing points and image lines and on the line lengths, a voting scheme yields an accurate vertical vanishing point. Next, the camera parameters are computed from the definitions and properties of vanishing points and vanishing lines, and the orthogonality of vanishing points is used to obtain accurate horizontal and depth-direction vanishing points. Because a new consistency measure between hypothesis vanishing points and image lines is introduced, the detection accuracy is unaffected by line detection errors, line lengths, or the positional relation between candidate vanishing points and constraining lines, and all three orthogonal vanishing points can be recovered accurately without knowing the camera parameters. The method gives particularly accurate detection results in indoor scenes.

7.
To meet the need for vehicle speed measurement in traffic surveillance scenes, a camera calibration method and a vehicle speed measurement scheme are proposed. First, vehicles in the image are detected and tracked with the deep-learning YOLO detector and optical-flow tracking; a cascaded Hough transform is applied to the resulting trajectories to compute the vanishing point along the road direction, from which the road markings are detected. The camera is then calibrated from the vanishing point and the markings using a trial focal length strategy. Finally, vehicle speed is measured as the average of the instantaneous speeds computed over multiple frames. Experiments on real traffic surveillance scenes show that this vanishing point based automatic calibration method is stable and accurate, and meets the requirements of vehicle speed measurement and practical engineering applications.
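Independently of how the calibration is obtained, the final speed measurement step reduces to mapping the tracked pixel positions onto the road plane and differencing over time. A minimal sketch, assuming a precomputed image-to-road homography H_road expressed in metres and a fixed frame rate (both hypothetical inputs, not values from the paper):

    import numpy as np

    def to_road_plane(H_road, uv):
        # map a pixel (u, v) to road-plane coordinates (in metres) via the homography
        x = H_road @ np.array([uv[0], uv[1], 1.0])
        return x[:2] / x[2]

    def mean_speed_kmh(H_road, track, fps):
        # track: per-frame pixel positions (u, v) of one tracked vehicle
        pts = [to_road_plane(H_road, p) for p in track]
        steps = [np.linalg.norm(b - a) for a, b in zip(pts, pts[1:])]  # metres per frame
        return float(np.mean(steps)) * fps * 3.6                       # m/s -> km/h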

8.
This paper presents a novel method for 3D camera calibration. Calculating the focal length and the optical center of the camera is the main objective of this research work. The proposed technique requires a single image having two vanishing points. A rectangular prism is employed as the calibration target to generate the vanishing points. The special arrangement of the calibration object improves the accuracy with which the intrinsic parameters are found. The vanishing points are found from the geometry of the perspective distortion of the prism edges in the image. Thereafter, the picture plane and then the station point are fixed according to the relations that are formulated. Experimental results of our method are compared with Zhang's method, and the results are tabulated to show the accuracy of the proposed approach.

9.
We describe a method to compute the internal parameters (focal length and principal point) of a camera with known position and orientation, based on the observation of two or more conics on a known plane. The conics can even be degenerate (e.g., pairs of lines). The proposed method can be used to re-estimate the internal parameters of a fully calibrated camera after zooming to a new, unknown focal length. It also allows estimating the internal parameters when a second, fully calibrated camera observes the same conics. The parameters estimated through the proposed method are coherent with the output of more traditional procedures that require a higher number of calibration images. A detailed analysis of the geometrical configurations that influence the proposed method is also reported.

10.
Using vanishing points for camera calibration
In this article a new method for the calibration of a vision system which consists of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector which describe the rigid motion between the coordinate systems fixed in the two cameras, are estimated from an image stereo pair of a suitable planar pattern. First, the rotation matrix is computed by matching the corresponding vanishing points in the two images; the translation vector is then estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequences.
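The rotation step summarized above has a compact generic form: a vanishing point v back-projects to the viewing direction K⁻¹v, and matched vanishing points in the two views give direction pairs related only by the inter-camera rotation. The sketch below assumes both intrinsic matrices are known and that the direction signs are consistent between views, and uses an orthogonal Procrustes fit as a stand-in for whatever estimator the authors used.

    import numpy as np

    def vp_direction(K, vp):
        # back-project an image vanishing point to a unit viewing direction
        d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])
        return d / np.linalg.norm(d)

    def rotation_from_matched_vps(K1, K2, vps1, vps2):
        # directions in the two camera frames satisfy d2 = R d1 for each matched pair
        D1 = np.stack([vp_direction(K1, v) for v in vps1], axis=1)
        D2 = np.stack([vp_direction(K2, v) for v in vps2], axis=1)
        U, _, Vt = np.linalg.svd(D2 @ D1.T)            # orthogonal Procrustes solution
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        return U @ S @ Vt                              # closest rotation, det = +1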

11.
Using points at infinity for parameter decoupling in camera calibration
The majority of camera calibration methods, including the gold standard algorithm, use point-based information and simultaneously estimate all calibration parameters. In contrast, we propose a novel calibration method that exploits line orientation information and decouples the problem into two simpler stages. We formulate the problem as minimization of the lateral displacement between single projected image lines and their vanishing points. Unlike previous vanishing point methods, parallel line pairs are not required. Additionally, the invariance properties of vanishing points mean that multiple images related by pure translation can be used to increase the calibration data set size without increasing the number of estimated parameters. We compare this method with vanishing point methods and the gold standard algorithm and demonstrate that it has comparable performance.

12.
Calculation of the camera projection matrix, also called camera calibration, is an essential task in many computer vision and 3D data processing applications. Calculation of the projection matrix using vanishing points and vanishing lines has been well studied in the literature: the image of the intersection of parallel lines (in 3D Euclidean space) under perspective projection onto the camera image plane is called a vanishing point, and the line through two vanishing points (in the image plane) is called a vanishing line. The aim of this paper is to propose a new formulation for easily computing the projection matrix based on three orthogonal vanishing points. It can also be used to calculate the intrinsic and extrinsic camera parameters. The proposed method reaches a closed-form solution by considering only two feasible constraints: zero skew in the internal camera matrix and two corresponding points between the world and the image. A nonlinear optimization procedure is proposed to refine the computed camera parameters, especially when the measurement error of the input parameters or the skew factor is not negligible. The proposed method has been run on real and synthetic data for more precise evaluation. The provided experimental results demonstrate the superiority of the proposed method.
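One way to picture the closed-form step referred to above: once K has been fixed from the three orthogonal vanishing points and R from their back-projected directions, the two world-to-image correspondences pin down the translation (and the two projective depths) through a small linear system. The sketch below is a generic least-squares reconstruction of that idea under those assumptions, not the authors' exact formulation.

    import numpy as np

    def translation_from_two_points(K, R, X_world, x_image):
        # solve K @ (R @ X_i + t) = lam_i * [u_i, v_i, 1] for t and the depths lam_i
        n = len(X_world)
        A = np.zeros((3 * n, 3 + n))
        b = np.zeros(3 * n)
        for i, (X, x) in enumerate(zip(X_world, x_image)):
            A[3 * i:3 * i + 3, 0:3] = K
            A[3 * i:3 * i + 3, 3 + i] = -np.array([x[0], x[1], 1.0])
            b[3 * i:3 * i + 3] = -K @ (R @ np.asarray(X, dtype=float))
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        t = sol[:3]
        return t        # the projection matrix is then K @ np.hstack([R, t[:, None]])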

13.
Camera calibration is a very important problem in computer vision. This paper studies camera calibration for a CCD/INS integrated terminal guidance system and introduces a calibration method based on planar vanishing points. Starting from the basic properties of vanishing points and the geometric features of a square planar template, a constraint equation between orthogonal vanishing points is derived and proved, from which the effective focal length of the camera can be solved analytically. The vanishing points are computed from 3D-2D point correspondences, which is more robust to disturbances than image-processing approaches. The method is simple in principle and easy to implement, and both simulation and real-image experiments verify its effectiveness.
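A plausible reconstruction of the orthogonality constraint mentioned above, assuming square pixels, zero skew, and a known principal point $(u_0, v_0)$; the notation is mine. A vanishing point $\mathbf{p}_i \simeq K R \mathbf{d}_i$ back-projects to the direction $K^{-1}\mathbf{p}_i$, and orthogonality of the two template directions gives

    $(K^{-1}\mathbf{p}_1)^\top (K^{-1}\mathbf{p}_2) = \mathbf{d}_1^\top \mathbf{d}_2 = 0,$

which, with $K = \mathrm{diag}(f, f, 1)$ shifted to the principal point, expands to

    $(u_1 - u_0)(u_2 - u_0) + (v_1 - v_0)(v_2 - v_0) + f^2 = 0,
    \qquad
    f = \sqrt{-\big[(u_1 - u_0)(u_2 - u_0) + (v_1 - v_0)(v_2 - v_0)\big]}.$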

14.
In this paper, we show how an active binocular head, the IIS head, can be easily calibrated with very high accuracy. Our calibration method can also be applied to many other binocular heads. In addition to the proposal and demonstration of a four-stage calibration process, there are three major contributions in this paper. First, we propose a motorized-focus lens (MFL) camera model which assumes constant nominal extrinsic parameters. The advantage of having constant extrinsic parameters is a simple head/eye relation. Second, a calibration method for the MFL camera model is proposed which separates estimation of the image center and effective focal length from estimation of the camera orientation and position. This separation has been proved to be crucial; otherwise, estimates of the camera parameters would be very noise-sensitive. Third, we show that, once the parameters of the MFL camera model are calibrated, a nonlinear recursive least-squares estimator can be used to refine all 35 kinematic parameters. Real experiments have shown that the proposed method can achieve an accuracy of one pixel prediction error and 0.2 pixel epipolar error, even when all the joints, including the left and right focus motors, are moved simultaneously. This accuracy is good enough for many 3D vision applications, such as navigation, object tracking and reconstruction.

15.
To correct lens distortion without knowing the camera parameters, a method is proposed that first calibrates the distortion center and then the distortion coefficient. The target is imaged twice at different focal lengths of the lens, and the distortion center is solved from the relative positions of the same target points in the two images; the distortion coefficient is then found by a variable-step optimization search, exploiting the fact that straight lines remain straight under perspective projection. Simulations show that with 25 target points and a noise level of 0.2 pixels, the average error of the distortion center is (0.2243, 0.1636) pixels and the error of the distortion coefficient is 0.28%. Real-image experiments show that the distortion center and coefficient obtained by this method correct the images well. The method requires neither calibration of the intrinsic and extrinsic camera parameters nor the world coordinates of the straight-line grid, and is simple to apply.
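A rough sketch of the coefficient-search stage, assuming the distortion center has already been found: for each candidate coefficient, undistort the detected points of each physically straight line and score how far they deviate from a fitted straight line. The single-coefficient radial model, the plain grid search (in place of the variable-step optimization described above), and the parameter range here are all assumptions for illustration.

    import numpy as np

    def undistort(points, k, center):
        # single-coefficient radial model: x_u = c + (x_d - c) * (1 + k * r^2)
        d = points - center
        r2 = np.sum(d * d, axis=1, keepdims=True)
        return center + d * (1.0 + k * r2)

    def straightness_error(lines, k, center):
        # sum, over lines, of the deviation of the undistorted points from a straight line
        err = 0.0
        for pts in lines:
            q = undistort(np.asarray(pts, dtype=float), k, center)
            q = q - q.mean(axis=0)
            # smallest singular value of the centred points measures off-line spread
            err += np.linalg.svd(q, compute_uv=False)[-1]
        return err

    def search_k(lines, center, ks=np.linspace(-1e-6, 1e-6, 201)):
        # pick the coefficient that makes the lines straightest
        return min(ks, key=lambda k: straightness_error(lines, k, center))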

16.
By using mirror reflections of a scene, stereo images can be captured with a single camera (catadioptric stereo). In addition to simplifying data acquisition, single camera stereo provides both geometric and radiometric advantages over traditional two camera stereo. In this paper, we discuss the geometry and calibration of catadioptric stereo with two planar mirrors. In particular, we show that the relative orientation of a catadioptric stereo rig is restricted to the class of planar motions, thus reducing the number of external calibration parameters from 6 to 5. Next we derive the epipolar geometry for catadioptric stereo and show that it has 6 degrees of freedom rather than 7 as for traditional stereo. Furthermore, we show how the focal length can be recovered from a single catadioptric image solely from a set of stereo correspondences. To test the accuracy of the calibration we present a comparison with Tsai camera calibration and we measure the quality of Euclidean reconstruction. In addition, we describe a real-time system which demonstrates the viability of stereo with mirrors as an alternative to traditional two camera stereo.

17.
As the main observed illuminant outdoors, the sky is a rich source of information about the scene. However, it is yet to be fully explored in computer vision because its appearance in an image depends on the sun position, weather conditions, photometric and geometric parameters of the camera, and the location of capture. In this paper, we analyze two sources of information available within the visible portion of the sky region: the sun position and the sky appearance. By fitting a model of the predicted sun position to an image sequence, we show how to extract camera parameters such as the focal length, and the zenith and azimuth angles. Similarly, we show how we can extract the same parameters by fitting a physically-based sky model to the sky appearance. In short, the sun and the sky serve as geometric calibration targets, which can be used to annotate a large database of image sequences. We test our methods on a high-quality image sequence with known camera parameters, and obtain errors of less than 1% for the focal length, 1° for the azimuth angle and 3° for the zenith angle. We also use our methods to calibrate 22 real, low-quality webcam sequences scattered throughout the continental US, and show deviations below 4% for focal length, and 3° for the zenith and azimuth angles. Finally, we demonstrate that by combining the information available within the sun position and the sky appearance, we can also estimate the camera geolocation, as well as its geometric parameters. Our method achieves a mean localization error of 110 km on real, low-quality Internet webcams. The estimated viewing and illumination geometry of the scene can be useful for a variety of vision and graphics tasks such as relighting, appearance analysis and scene recovery.

18.
We present a novel technique for calibrating a zooming camera based on the invariance properties of the normalised image of the absolute conic (NIAC). We show that the camera parameters independent of position, orientation and zooming are determined uniquely by the NIAC, and we exploit these invariance properties to develop a stratified calibration method that decouples the calibration parameters. The method is organised in three steps: (i) computation of the NIAC, (ii) computation of the focal length for each image and (iii) computation of the orientation and the position of the camera. The method requires a minimum of three views of a single planar grid. Experiments with synthetic and real data suggest that the method is competitive with other state-of-the-art plane-based zooming calibration methods in the scenarios considered.

19.
Single View Metrology
We describe how 3D affine measurements may be computed from a single perspective view of a scene given only minimal geometric information determined from the image. This minimal information is typically the vanishing line of a reference plane, and a vanishing point for a direction not parallel to the plane. It is shown that affine scene structure may then be determined from the image, without knowledge of the camera's internal calibration (e.g. focal length), nor of the explicit relation between camera and world (pose). In particular, we show how to (i) compute the distance between planes parallel to the reference plane (up to a common scale factor); (ii) compute area and length ratios on any plane parallel to the reference plane; (iii) determine the camera's location. Simple geometric derivations are given for these results. We also develop an algebraic representation which unifies the three types of measurement and, amongst other advantages, permits a first order error propagation analysis to be performed, associating an uncertainty with each measurement. We demonstrate the technique for a variety of applications, including height measurements in forensic images and 3D graphical modelling from single images.
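A minimal sketch of the distance-from-the-reference-plane computation described above, using the closed form commonly associated with this approach, αZ = −‖b×t‖ / ((l·b)‖v×t‖), with homogeneous image points b (base, on the reference plane) and t (top), the vanishing line l of the reference plane, and the vanishing point v of the reference direction; taking a ratio of two measurements removes the unknown scale α. The function names are mine.

    import numpy as np

    def alpha_height(l, v, b, t):
        # alpha * Z = -||b x t|| / ((l . b) * ||v x t||), all vectors homogeneous (3,)
        return -np.linalg.norm(np.cross(b, t)) / (np.dot(l, b) * np.linalg.norm(np.cross(v, t)))

    def height_ratio(l, v, base1, top1, base2, top2):
        # ratio of two heights measured from the same reference plane (alpha cancels);
        # if one height is known in metres, the other follows by scaling
        return alpha_height(l, v, base1, top1) / alpha_height(l, v, base2, top2)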

20.
A single-view 3D reconstruction method is proposed that exploits geometric information supplied by the user about image points and their corresponding 3D points. Since structured scenes are composed largely of planes and contain many parallelism and orthogonality constraints, the method is mainly applied to the 3D reconstruction of structured scenes. First, the camera is calibrated and metric information is computed for each plane: a square-pixel camera is calibrated from the vanishing points of three mutually orthogonal directions, and each plane is then metrically rectified using its vanishing line and the images of the circular points. Next, the scale factor of each rectified plane and the relative orientation between non-orthogonal planes are taken into account in order to stitch all the rectified planes together. Experiments on real images show that the method is simple and easy to use.
