Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Objective: Camera extrinsic calibration is a key step in applications such as ADAS (advanced driver-assistance systems). Traditional extrinsic calibration methods usually depend on specific scenes and dedicated calibration targets and cannot perform dynamic calibration in real time in the field. Some extrinsic calibration methods that incorporate SLAM (simultaneous localization and mapping) or VIO (visual inertial odometry) rely on point-feature matching and often achieve limited accuracy. For ADAS applications, this paper proposes an extrinsic self-correction method based on matching the camera view against a map. Method: Lane lines are first detected and extracted from the images by deep learning and, after data filtering and post-processing, serve as the input to an optimization problem. Lane-line points are then associated by nearest-neighbor search, and a reprojection error is defined in the image plane. Finally, gradient descent iteratively solves for the optimal camera extrinsic matrix that minimizes the reprojection matching error between the detected lane lines and the ground-truth map lane lines in the image plane. Results: Tests with a vehicle on open roads show that the method converges to the correct extrinsic parameters after several iterations, with a rotation accuracy better than 0.2° and a translation accuracy better than 0.2 m; compared with calibration based on vanishing points or VIO (2.2° and 0.3 m), it has a clear accuracy advantage. When the camera extrinsics change dynamically, the method quickly converges to the new extrinsic parameters. Conclusion: The method does not depend on specific scenes, supports real-time iterative optimization of the extrinsic parameters, and effectively improves extrinsic accuracy to a level that meets ADAS requirements.
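The pipeline summarized above (lane detection, nearest-neighbor association, gradient descent on an image-plane reprojection error) can be illustrated with a minimal numerical sketch. The function names, the axis-angle parameterization, the SciPy k-d tree association, and the numerical gradient are assumptions made for illustration, not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree

def rodrigues(w):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(K, R, t, pts_world):
    """Pinhole projection of N x 3 world points into the image plane."""
    pc = (R @ pts_world.T).T + t              # world -> camera
    uv = (K @ pc.T).T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_cost(params, K, map_pts, lane_tree):
    """Mean distance between projected map lane points and detected lane pixels."""
    R, t = rodrigues(params[:3]), params[3:]
    uv = project(K, R, t, map_pts)
    d, _ = lane_tree.query(uv)                # nearest detected lane pixel
    return d.mean()

def refine_extrinsics(K, map_pts, lane_pixels, params0, lr=1e-3, iters=200):
    """Numerical gradient descent on the 6-DoF extrinsic parameters."""
    tree = cKDTree(lane_pixels)
    p = params0.astype(float)
    eps = 1e-5
    for _ in range(iters):
        grad = np.zeros(6)
        c0 = reprojection_cost(p, K, map_pts, tree)
        for i in range(6):
            dp = np.zeros(6); dp[i] = eps
            grad[i] = (reprojection_cost(p + dp, K, map_pts, tree) - c0) / eps
        p -= lr * grad
    return rodrigues(p[:3]), p[3:]

In practice an analytic Jacobian and a robust loss would replace the numerical gradient, but the structure of the optimization is the same.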

2.
A stereo camera calibration algorithm based on rectification error (cited by: 1; self-citations: 0; citations by others: 1)
Stereo camera calibration is the process of accurately solving for the intrinsic parameters of each camera and the parameters relating the cameras. It is the basis of 3D reconstruction, and its accuracy directly affects the reconstruction result. This paper therefore proposes a stereo camera calibration algorithm that uses the rectification error as the cost function. The algorithm first calibrates the intrinsic parameters of each individual camera with the traditional reprojection-error-based method, and then uses the rectification error to solve for the parameters relating the two cameras. Because the rectification error depends only on the camera intrinsics and the inter-camera parameters, the calibration avoids the camera extrinsic parameters, which are difficult to determine accurately. Experimental results show that the algorithm effectively improves the accuracy of stereo camera calibration.
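As a rough illustration of what a rectification-error cost can look like (the paper's exact definition may differ), the sketch below measures the vertical disparity of corresponding points after stereo rectification with OpenCV; the inter-camera rotation R and translation t would then be refined by minimizing this value.

import numpy as np
import cv2

def rectification_error(K1, d1, K2, d2, R, t, pts1, pts2, image_size):
    """Mean vertical disparity of corresponding points after stereo rectification.
    pts1, pts2: N x 2 float arrays of matched pixels in the left/right images."""
    R1, R2, P1, P2, _, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, t)
    # Undistort and rectify both point sets into the rectified camera frames.
    r1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K1, d1, R=R1, P=P1).reshape(-1, 2)
    r2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K2, d2, R=R2, P=P2).reshape(-1, 2)
    return np.abs(r1[:, 1] - r2[:, 1]).mean()   # row difference = rectification error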

3.
Adaptive visual servo stabilization control of mobile robots (cited by: 2; self-citations: 0; citations by others: 2)
An adaptive visual servo stabilization control algorithm is proposed for a mobile robot system equipped with a monocular camera. Without a depth sensor and with unknown camera extrinsic parameters, the algorithm uses visual feedback to achieve asymptotic stabilization of the mobile robot's position and orientation. Since the translational extrinsic (hand-eye) parameters between the robot frame and the camera frame are unknown, the kinematic model of the mobile robot in the camera frame is established from the pose-variation properties of static feature points. A measurable angular error signal is then obtained by homography decomposition and, combined with a 2D image error signal through a set of coordinate transformations, yields the open-loop error equation of the system. On this basis, an adaptive stabilization controller is designed using Lyapunov stability theory. Theoretical analysis, simulations, and experiments all show that the designed monocular vision controller asymptotically stabilizes the mobile robot to the desired pose even when the camera extrinsic parameters are unknown.

4.
In this paper, we show how an active binocular head, the IIS head, can be easily calibrated with very high accuracy. Our calibration method can also be applied to many other binocular heads. In addition to the proposal and demonstration of a four-stage calibration process, this paper makes three major contributions. First, we propose a motorized-focus lens (MFL) camera model which assumes constant nominal extrinsic parameters. The advantage of having constant extrinsic parameters is a simple head/eye relation. Second, we propose a calibration method for the MFL camera model which separates estimation of the image center and effective focal length from estimation of the camera orientation and position. This separation proves to be crucial; otherwise, the estimates of the camera parameters would be very noise-sensitive. Third, we show that, once the parameters of the MFL camera model are calibrated, a nonlinear recursive least-squares estimator can be used to refine all 35 kinematic parameters. Real experiments have shown that the proposed method achieves an accuracy of one pixel in prediction error and 0.2 pixel in epipolar error, even when all the joints, including the left and right focus motors, are moved simultaneously. This accuracy is good enough for many 3D vision applications, such as navigation, object tracking, and reconstruction.

5.
This paper presents a novel approach for image-based visual servoing (IBVS) of a robotic system that takes constraints into account when the camera intrinsic and extrinsic parameters are uncalibrated and the 3D position parameters of the features are unknown. Based on model predictive control, the robotic system's input and output constraints, such as visibility constraints and actuator limitations, can be handled explicitly. Whereas most constrained IBVS controllers use the traditional image Jacobian matrix, the proposed IBVS scheme is developed with a depth-independent interaction matrix. The unknown parameters appear linearly in the prediction model and can be estimated effectively by the identification algorithm. In addition, the model predictive controller determines the optimal control input and updates the estimated parameters together with the prediction model. The proposed approach can simultaneously handle system constraints, unknown camera parameters, and unknown depth parameters, and both visual positioning and tracking tasks achieve the desired performance. Simulation results on a 2-DOF planar robot manipulator, for both eye-in-hand and eye-to-hand camera configurations, demonstrate the effectiveness of the proposed method.

6.
Objective: The extrinsic parameters of an RGB-D camera are used to transform point clouds from the camera coordinate system to the world coordinate system, with applications in 3D scene reconstruction, 3D measurement, robotics, and object detection. Conventional methods calibrate the extrinsics of the RGB-D color camera with a calibration object (e.g., a checkerboard) but do not use the depth information, which makes it hard to simplify the calibration procedure; fully exploiting the depth information greatly simplifies extrinsic calibration. Moreover, most RGB-D applications rely on the depth sensor rather than the color camera, and a depth-based method can calibrate the pose of the depth sensor directly. Method: The depth map is first converted into a 3D point cloud in the camera coordinate system, and planes in the point cloud are detected automatically with the MLESAC method. Using the constraint between the ground plane and the world coordinate system, the candidate planes are traversed and filtered until the ground plane is found. From the spatial relation between the ground plane and the camera coordinate system, the extrinsic parameters, i.e., the transformation matrix from camera coordinates to world coordinates, are computed. Results: Taking checkerboard-based extrinsic calibration as the reference and processing RGB-D video streams captured by a PrimeSense camera, the method yields an average roll error of -1.14°, an average pitch error of 4.57°, and an average camera-height error of 3.96 cm. Conclusion: By automatically detecting the ground plane, the method accurately estimates the camera extrinsics and is highly automated; it is also highly parallelizable, runs in real time after parallel optimization, and can be applied to automatic robot pose estimation.
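A minimal sketch of the core step described above — detecting the ground plane in the depth point cloud and deriving the camera's roll, pitch, and height from it — is given below. A plain RANSAC plane fit stands in for MLESAC, and the axis conventions (camera z forward, x right, y down; world z up) are assumptions for illustration.

import numpy as np

def fit_ground_plane(points, iters=500, thresh=0.02, rng=np.random.default_rng(0)):
    """RANSAC plane fit on an N x 3 point cloud; returns unit normal n and offset d (n·p + d = 0)."""
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:
            continue                           # degenerate sample, skip
        n = n / np.linalg.norm(n)
        d = -n @ p[0]
        inliers = np.sum(np.abs(points @ n + d) < thresh)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

def extrinsics_from_ground(n, d):
    """Roll, pitch (rad) and camera height (m) from the ground plane in camera coordinates."""
    if d < 0:                      # orient the normal from the ground towards the camera
        n, d = -n, -d
    height = d                     # distance from the camera origin to the plane
    pitch = np.arcsin(n[2])        # assumes camera z forward, x right, y down
    roll = np.arctan2(n[0], -n[1])
    return roll, pitch, height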

7.
《Graphical Models》2008,70(4):57-75
This paper studies inside-looking-out camera pose estimation for the virtual studio. The camera pose estimation process, i.e., the estimation of a camera's extrinsic parameters, is based on closed-form geometric approaches that exploit simple corner detection of 3D cubic-like virtual-studio landmarks. We first examine the parameters that affect camera pose estimation in the virtual studio. Our study covers all characteristic landmark parameters, such as landmark lengths, landmark corner angles, and their installation position errors, as well as camera parameters such as lens focal length and CCD resolution. Through computer simulation we investigate and analyze how these parameters affect the camera extrinsic parameters, including the camera rotation and position matrices. Based on this work, we find that the camera translation vector is affected by noise in these parameters more than the other extrinsic parameters. Therefore, we present a novel iterative geometric noise-cancellation method for the closed-form camera pose estimation process. It is based on collinearity theory and reduces the estimation error of the camera translation vector, which plays the major role in the estimation error of the camera extrinsic parameters. To validate the method, we test it in a complete virtual-studio simulation. Our simulation results are of the same order as those of some commercial systems, such as the BBC and InterSense IS-1200 VisTracker.

8.
To enable continuous scanning with a structured-light vision system composed of a projector and a camera, the spatial relation between an arbitrary light plane projected by the projector and the camera image plane must be computed, which in turn requires the relative pose between the camera's optical center and the projector's optical center. The camera intrinsic parameters are obtained first; four corner points on the calibration board are then selected as feature points and their extrinsic parameters are solved with the camera intrinsics, giving the coordinates of the four feature points in the camera coordinate system. The coordinates of the feature points in the projector coordinate system are solved from the projector's own parameters, from which the relative pose between the camera and projector optical centers is computed, completing the structured-light vision calibration. Using the calibrated vision system to measure the distances between corner points on the calibration board, the maximum relative error is 0.277%, showing that the calibration algorithm is applicable to projector-camera structured-light vision systems.
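The step that places the four board corners in the camera frame can be sketched with OpenCV's PnP solver as below. The intrinsics, board geometry, and the ground-truth pose used to synthesize the pixel measurements are all made up for illustration and are not values from the paper.

import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                  # assumed camera intrinsics
dist = np.zeros(5)                               # assume negligible lens distortion

# Four corner points on the board, in board coordinates (metres).
obj_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                    [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]])

rvec_true = np.array([[0.1], [-0.2], [0.05]])    # ground-truth pose, used only to
tvec_true = np.array([[0.02], [-0.03], [0.8]])   # synthesise pixel measurements
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist, flags=cv2.SOLVEPNP_IPPE)
R, _ = cv2.Rodrigues(rvec)
corners_cam = (R @ obj_pts.T + tvec).T           # the four corners in the camera frame
print(corners_cam)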

9.
Because of accumulated error and errors in the calibrated camera intrinsic parameters, vision-based camera calibration often reconstructs a spatial structure with some distortion even though the reprojection error in the images is small. A constrained camera calibration algorithm is proposed that uses geometric conditions present in the scene (e.g., the camera path lying on a straight line) as constraints to re-optimize the calibration result, so that the reconstructed space is more consistent with the Euclidean space of the real world. During optimization, the errors in Euclidean space and in image space are constrained to the same range, and the best constraint weight is obtained automatically. Experimental results demonstrate the effectiveness of the algorithm.

10.
This paper presents a method for computing the extrinsic camera parameters from data on spatial invariants. Projective invariants are shape descriptions that remain unchanged under geometric transformations such as projection or a change of viewpoint. Because they provide a description of an object or scene that is independent of external factors, they are widely used in computer vision. Camera calibration determines the transformation between the 2D image information captured by the camera and the 3D information of the real scene; it comprises intrinsic and extrinsic parameters. The intrinsic parameters characterize the camera's internal and optical properties, including the image center coordinates (Cx, Cy), the image scale factor Sx, the effective focal length f, and the lens distortion coefficient K. The extrinsic parameters describe the camera's position and orientation in world coordinates, including the translation matrix T and the rotation matrix R3×3, which can generally be written as an augmented matrix [R T]3×4. Based on data computed from projective invariants, this paper gives a method for calibrating the extrinsic camera parameters; experimental results show that the method is highly robust.

11.
For the extrinsic calibration of a pinhole camera and a 3D LiDAR, a new method is proposed: using the correspondence of the checkerboard plane in the camera frame and the LiDAR frame, the extrinsic calibration is converted into solving for a rotation and scaling matrix in 3D space. Each frame of data provides four constraints, so a minimum of three frames suffices to solve for the extrinsic matrix. The method is simple in principle and gives an intuitive explanation of the degenerate cases in which it fails. Simulations and experiments with real data show that the proposed method yields a high-accuracy extrinsic matrix and still calibrates well when only a few frames are sampled.
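A common way to solve this kind of plane-correspondence problem — not necessarily the exact formulation of the paper — is to recover the rotation from the plane normals by orthogonal Procrustes and the translation from the plane offsets by linear least squares, as sketched below.

import numpy as np

def extrinsics_from_planes(n_cam, d_cam, n_lidar, d_lidar):
    """Planes are n·x = d with unit normals; n_cam, n_lidar are K x 3 and d_cam, d_lidar are
    length-K arrays (K >= 3). Returns R, t mapping LiDAR points into the camera frame:
    x_cam = R x_lidar + t."""
    # Rotation aligning LiDAR normals with camera normals (orthogonal Procrustes / Kabsch).
    H = n_lidar.T @ n_cam
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    # Offsets satisfy d_cam = n_cam · t + d_lidar  =>  solve n_cam t = d_cam - d_lidar.
    t, *_ = np.linalg.lstsq(n_cam, d_cam - d_lidar, rcond=None)
    return R, t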

12.
A linear camera calibration method based on an improved SUSAN operator (cited by: 3; self-citations: 0; citations by others: 3)
张颖 《计算机应用》2006,26(10):2516-2518
A stepwise linear camera calibration method that accounts for radial distortion is designed and implemented. Combined with edge detection, the search range of SUSAN corner detection is narrowed, and pseudo-corners are rejected using the neighborhood features of the corners, which improves the speed and accuracy of SUSAN corner detection. The improved SUSAN operator is used to extract the sub-pixel coordinates of the corners of a square-grid template, and the principal point is obtained by pre-calibration; a camera model that includes radial distortion is then established and the intrinsic and extrinsic parameters are solved. Experiments and error analysis show that the calibration method achieves high accuracy and good real-time performance.
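For reference, a minimal SUSAN corner response (without the edge-detection pre-filter, pseudo-corner rejection, or sub-pixel refinement described above) can be written as follows; the mask radius and brightness threshold are illustrative defaults, not the paper's settings.

import numpy as np

def susan_response(img, radius=3, t=27.0):
    """Return the SUSAN corner response map of a grayscale float image."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (xs ** 2 + ys ** 2) <= radius ** 2        # circular USAN mask
    offsets = np.argwhere(mask) - radius             # pixel offsets inside the mask
    n_max = len(offsets)
    g = 0.5 * n_max                                  # geometric threshold for corners
    resp = np.zeros_like(img)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = img[y, x]
            patch = img[y + offsets[:, 0], x + offsets[:, 1]]
            # smooth similarity measure from the original SUSAN formulation
            usan = np.sum(np.exp(-((patch - nucleus) / t) ** 6))
            resp[y, x] = max(g - usan, 0.0)
    return resp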

13.
Self-calibration of an affine camera from multiple views (cited by: 6; self-citations: 2; citations by others: 4)
A key limitation of all existing algorithms for shape and motion from image sequences under orthographic, weak-perspective and para-perspective projection is that they require the calibration parameters of the camera. We present in this paper a new approach that allows the shape and motion to be computed from image sequences without knowing the calibration parameters. This approach is derived with the affine camera model, introduced by Mundy and Zisserman (1992), which is a more general class of projections including the orthographic, weak-perspective and para-perspective projection models. The concept of self-calibration, introduced by Maybank and Faugeras (1992) for the perspective camera and by Hartley (1994) for the rotating camera, is then applied to the affine camera. This paper introduces the three intrinsic parameters that the affine camera can have at most. The intrinsic parameters of the affine camera are closely related to the usual intrinsic parameters of the pin-hole perspective camera, but are different in the general case. Based on the invariance of the intrinsic parameters, methods for self-calibration of the affine camera are proposed. It is shown that with at least four views, an affine camera may be self-calibrated up to a scaling factor, leading to Euclidean (similarity) shape reconstruction up to a global scaling factor. Another consequence of introducing intrinsic and extrinsic parameters for the affine camera is that all existing algorithms using calibrated affine cameras can be assembled into the same framework, and some of them can be easily extended to a batch solution. Experimental results are presented and compared with other methods using calibrated affine cameras.

14.
The mathematical model of a structured-light 3D measurement system is improved by incorporating the intrinsic and extrinsic parameters of the projector, and a new projector calibration method based on this model is adopted. To improve the contrast of the black-and-white camera in dark regions, a red/blue checkerboard is used instead of a black/white one. Treating the projector as an inverted camera unifies the calibration of the camera and the projector. The camera and the projector were calibrated separately in a 3ds Max simulation environment, with a relative calibration error below 0.32%. Reconstruction simulations with the calibrated structured-light 3D measurement system show a measurement error below 0.136 mm.

15.
Research on a linear three-step camera calibration method (cited by: 1; self-citations: 1; citations by others: 1)
In computer vision, camera calibration, i.e., accurately determining the intrinsic and extrinsic camera parameters, is required for quantitative scene analysis or precise object localization. Because the optical model of a real camera exhibits several types of distortion, the perspective projection relation is nonlinear. To avoid nonlinear optimization in the calibration process, a new linear three-step calibration method is proposed for the commonly used camera model with first-order radial distortion: first, the rotation matrix and the x- and y-axis translation components are computed from the radial alignment constraint; second, the first-order radial distortion coefficient is solved using the cross-ratio invariance of perspective projection; finally, linear equations for the effective focal length and the z-axis translation are formed from the obtained parameters and solved by least squares. Experiments show that the method is simple and fast, achieves high calibration accuracy, and avoids the unstable solutions that nonlinear search-based optimization may produce.
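The first linear step, the radial alignment constraint for coplanar points, follows the classic Tsai-style formulation that this kind of three-step method builds on; a sketch is given below. Image points are assumed to be centered on the principal point, and the subsequent closed-form recovery of Ty, the full rotation, the focal length and Tz is omitted. The function name and interface are illustrative assumptions.

import numpy as np

def rac_linear_step(world_xy, img_xy):
    """world_xy: N x 2 coplanar points (z = 0); img_xy: N x 2 centred, distorted pixels.
    Returns u = [r1/Ty, r2/Ty, Tx/Ty, r4/Ty, r5/Ty] solved by linear least squares."""
    xw, yw = world_xy[:, 0], world_xy[:, 1]
    Xd, Yd = img_xy[:, 0], img_xy[:, 1]
    # Radial alignment constraint: Xd*(r4*xw + r5*yw + Ty) = Yd*(r1*xw + r2*yw + Tx);
    # dividing both sides by Ty gives a linear system in the five unknowns.
    A = np.column_stack([Yd * xw, Yd * yw, Yd, -Xd * xw, -Xd * yw])
    b = Xd
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u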

16.
A panoramic image has a 360° horizontal field of view and can give the viewer the impression of being immersed in the scene. A panorama is created by first taking a sequence of images while rotating the camera about a vertical axis. These images are then projected onto a cylindrical surface before being seamlessly composited. The cross-sectional circumference of the cylindrical panorama is called the compositing length. This work characterizes the error in compositing panoramic images due to errors in some of the intrinsic parameters. The intrinsic camera parameters considered are the camera focal length and the radial distortion coefficient. We show that the error in the compositing length is more sensitive to the error in the camera focal length. Especially important is the discovery that the relative error in the compositing length is always smaller than the relative error in the focal length, which means that the error in the focal length can be corrected by iteratively using the composited length to compute a new and more correct focal length. This compositing approach to camera calibration has the advantage of requiring neither feature detection nor separate prior calibration.

17.
Hybrid systems of central catadioptric and perspective cameras are desirable in practice, because such a hybrid camera system can capture both a large field of view and high-resolution images. However, calibrating the system is challenging due to the heavy distortion in catadioptric cameras. In addition, previous calibration methods are only suitable for systems consisting of perspective cameras and catadioptric cameras with parabolic mirrors, and they require priors about the intrinsic parameters of the perspective cameras. In this work, we provide a new approach to these problems. We show that if the hybrid camera system consists of at least two central catadioptric cameras and one perspective camera, both the intrinsic and extrinsic parameters of the system can be calibrated linearly without priors on the intrinsic parameters of the perspective cameras, and the central catadioptric cameras supported by our method can be more generic. An approximated polynomial model is derived and used to rectify the catadioptric images. First, using the epipolar geometry between the perspective and rectified catadioptric images, the distortion parameters of the polynomial model are estimated linearly. A new method is then proposed to estimate the intrinsic parameters of a central catadioptric camera from the parameters of the polynomial model, so that the catadioptric cameras can be calibrated. Finally, a linear self-calibration method for the hybrid system is given using the calibrated catadioptric cameras. The main advantage of our method is that it not only calibrates both the intrinsic and extrinsic parameters of the hybrid camera system, but also simplifies the traditionally nonlinear self-calibration of perspective cameras to a linear process. Experiments show that the proposed method is robust and reliable.

18.
Extrinsic calibration of heterogeneous cameras by line images (cited by: 1; self-citations: 0; citations by others: 1)
Extrinsic calibration refers to determining the relative pose of cameras. Most approaches for cameras with non-overlapping fields of view (FOV) are based on mirror reflection, object tracking, or the rigidity constraint of stereo systems, whereas cameras with overlapping FOV can be calibrated with structure-from-motion solutions. We propose an extrinsic calibration method within a structure-from-motion framework for cameras with overlapping FOV and extend it to cameras with partially non-overlapping FOV. Omnidirectional vision has recently become a popular topic in computer vision, as an omnidirectional camera covers a large FOV in a single image. Combining the good resolution of perspective cameras with the wide observation angle of omnidirectional cameras is therefore an attractive trend in multi-camera systems. For this reason, we present an approach that is applicable to heterogeneous types of vision sensors. Moreover, the method utilizes images of lines, since these features possess several advantageous characteristics over point features, especially in urban environments. The calibration consists of a linear estimation of the orientation and position of the cameras, optionally followed by bundle adjustment to refine the extrinsic parameters.

19.
Objective: Because of their wide field of view and flexibility, pan-tilt-zoom (PTZ) cameras play an important role in highway surveillance systems, but since their focal length and orientation change irregularly with surveillance needs, recovering accurate physical information about the real world from their images is difficult. Research on automatic off-site calibration of PTZ cameras is therefore of great value for highway surveillance applications. Method: This paper proposes an automatic PTZ camera calibration method constrained by vanishing points and a lane-line model, which establishes an accurate relation between the image information of the highway surveillance system and physical information in the real world. First, the longitudinal vanishing point is estimated accurately by cascaded Hough-transform voting on vehicle trajectories; then, with the physical measurements of the lane-line model as constraints, an enumeration strategy yields an accurate estimate of the lateral vanishing point; finally, given the camera height, the calibration parameters of the highway PTZ camera are computed. Results: Experiments in different scenes give average errors at different distances of 4.63%, 4.74%, 4.81%, and 4.65%, all below 5%. Conclusion: Tests on several highway surveillance scenes show that the proposed automatic PTZ calibration method meets the physical-measurement accuracy required by highway surveillance applications, has clear advantages over the reference methods, and yields intrinsic and extrinsic parameters that can be used to compute vehicle speed and spatial position.
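As an illustration of the vanishing-point step, the sketch below estimates a vanishing point from a set of image line segments (e.g., vehicle trajectories or lane markings) by least-squares intersection of homogeneous lines, rather than the cascaded Hough-transform voting used in the paper.

import numpy as np

def vanishing_point(segments):
    """segments: N x 4 array of (x1, y1, x2, y2); returns the image point minimising
    the algebraic distance to all of the lines."""
    p1 = np.column_stack([segments[:, 0], segments[:, 1], np.ones(len(segments))])
    p2 = np.column_stack([segments[:, 2], segments[:, 3], np.ones(len(segments))])
    lines = np.cross(p1, p2)                       # homogeneous line through each segment
    lines /= np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    # The vanishing point v satisfies lines @ v ≈ 0; take the smallest right singular vector.
    _, _, Vt = np.linalg.svd(lines)
    v = Vt[-1]
    return v[:2] / v[2]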

20.
We present a method for active self-calibration of multi-camera systems consisting of pan-tilt-zoom cameras. The main focus of this work is extrinsic self-calibration using active camera control. Our novel probabilistic approach avoids multi-image point correspondences as far as possible, which allows an implicit treatment of ambiguities. The relative poses are optimized by actively rotating and zooming each camera pair in a way that significantly simplifies the problem of extracting correct point correspondences. In a final step we calibrate the entire system using a minimal number of relative poses, selected according to their uncertainty. We exploit active camera control to estimate consistent translation scales for triplets of cameras, which allows us to estimate missing relative poses within the camera triplets. In addition to this active extrinsic self-calibration, we present an extended method for the rotational intrinsic self-calibration of a camera that exploits the rotation knowledge provided by the camera's pan-tilt unit to robustly estimate the intrinsic camera parameters for different zoom steps, as well as the rotation between the pan-tilt unit and the camera. Quantitative experiments on real data demonstrate the robustness and high accuracy of our approach: we achieve a median reprojection error of 0.95 pixel.
