Similar Documents
18 similar documents were retrieved.
1.
Multi-sensor fusion is widely used for environment perception in intelligent vehicles, and the spatial calibration of radar and camera is the foundation of road-detection techniques built on real-time information fusion. Targeting practical intelligent-vehicle applications, this paper proposes a spatial calibration method for a LiDAR and a camera. Radar and image data are acquired with a purpose-built calibration board; the LiDAR coordinate frame is chosen as the world frame, and the transformation between the image frame and the LiDAR frame is obtained by parameter fitting, thereby spatially registering the two sensors. The method requires only the calibration board, achieves high calibration accuracy, unifies the world coordinate frames of multiple sensors, and avoids ambiguity in subsequent data interpretation. Experimental results show that the method is simple, accurate, and meets system requirements.

2.
Based on camera calibration principles, a camera calibration system built on OpenCV was implemented in the VC2010 environment. Taking chessboard calibration-board images as input, the system computes the camera's intrinsic and extrinsic parameters as well as its distortion coefficients. Image-rectification experiments demonstrate the system's effectiveness.
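As a minimal illustration of the distortion coefficients such a system estimates (not the paper's own code), the sketch below applies the standard two-term radial distortion model to a normalized image point; all numeric values are illustrative assumptions.

```python
def distort(x, y, k1, k2, fx, fy, cx, cy):
    """Apply the two-term radial distortion model to normalized image
    coordinates (x, y), then map the result into pixel coordinates."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xd, yd = x * scale, y * scale
    return fx * xd + cx, fy * yd + cy

# Example: a point slightly off-axis under mild barrel distortion.
u, v = distort(0.1, 0.2, k1=-0.2, k2=0.05,
               fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```

Rectification (as in the experiment the abstract mentions) inverts this mapping, typically by iterating the model.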

3.
On-board sensing in intelligent driving systems typically fuses LiDAR and camera information, and accurate, stable extrinsic calibration is the foundation of effective multi-source fusion. To improve the robustness of the perception system, this paper proposes a feature-matching-based LiDAR-camera calibration method. First, a sphere-center algorithm for point-cloud data and an ellipse algorithm for image data extract the 3D point-cloud coordinates and 2D pixel coordinates of feature points. Next, point-pair constraints between the LiDAR frame and the camera frame are established for the feature points, yielding a nonlinear optimization problem. Finally, a nonlinear optimization algorithm refines the LiDAR-camera extrinsics. Projecting the LiDAR point cloud onto the image with the optimized extrinsics gives average lateral and vertical joint-calibration errors of 3.06 and 1.19 pixels, respectively. Compared with the livox_camera_lidar_calibration method, the proposed method reduces the mean projection error by 40.8% and the error variance by 56.4%, clearly surpassing it in accuracy and robustness.
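The lateral/vertical pixel errors quoted above come from projecting LiDAR points through the extrinsics and intrinsics and comparing against image observations. A minimal sketch of that check, with an identity extrinsic and made-up observation purely for illustration:

```python
def project(pt, R, t, fx, fy, cx, cy):
    """Project a 3D LiDAR point into the image through extrinsics (R, t)
    and a pinhole intrinsic model."""
    x = sum(R[0][i] * pt[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * pt[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * pt[i] for i in range(3)) + t[2]
    return fx * x / z + cx, fy * y / z + cy

# Identity extrinsics and a point 5 m ahead, 0.5 m to the right.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
u, v = project([0.5, 0.0, 5.0], R, t, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
observed = (731.2, 360.4)  # hypothetical detected feature point
err_lateral = abs(u - observed[0])
err_vertical = abs(v - observed[1])
```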

4.
Based on camera calibration principles, a camera calibration system was designed and implemented with OpenCV in the VC++ environment; the system's functionality and implementation workflow are described in detail. With chessboard calibration-board images as input, the system not only computes the camera's intrinsic and extrinsic parameters but also gathers statistics on the calibration error. Experiments demonstrate the system's effectiveness. Compared with traditional calibration methods, the system is more convenient and faster to use, and its calibration results are more accurate.

5.
The camera and the calibration board are the key pieces of equipment in camera calibration. To study how the two affect calibration accuracy, a series of grouped comparative calibration experiments was carried out. First, three planar calibration boards (A2, A3, and A4) with chessboard squares of different sizes were produced, four digital cameras of different types were selected, and 240 chessboard calibration-board images were captured in groups. Then, based on the pinhole camera model, calibration experiments were run with the MATLAB camera calibration tool. The results show that, for any given camera, calibration accuracy varies greatly with the square size of the board; the A3 board gave the best accuracy for the lowest-resolution CCD camera, with mean reprojection errors below 0.1 pixel in both image coordinates. These results provide a reference for choosing cameras and calibration boards in computer-vision research.

6.
Building on a survey of camera-calibration research at home and abroad, and addressing the shortcomings, in calibration range, error, portability, and timeliness, of the 3D rigid frames used for camera calibration in sports science, a new method that performs 3D spatial calibration with a planar chessboard calibration board is proposed. At shooting distances of 5 m, 10 m, and 30 m, a traditional 3D radial rigid frame and a self-made 2D planar chessboard board were used as calibration references, a standard 1 m scale bar was reconstructed in 3D, and the accuracy of the two calibration methods was compared. The error distribution of the standard 1 m bar at different positions in the measurement volume (center and edge of the frame) was also analyzed. The results show that camera spatial calibration based on a planar chessboard board has clear advantages, overcoming the drawbacks of 3D rigid frames in both convenience and measurement accuracy, and in principle can meet the needs of sports-science research.

7.
A Camera Calibration Method Based on Cross-Ratio Invariance
To address the complex fabrication and tight precision requirements of traditional chessboard templates, a dynamic camera calibration method based on cross-ratio invariance is proposed. Line features are extracted from two images of the same scene, and the uniqueness of the vertex cross-ratio sequence is used to match lines across the two images, achieving accurate dynamic point-to-point calibration. Experimental results show that the method is robust, works in natural environments even for cameras at fixed positions, performs well, and has broad applicability.
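The invariance the method relies on is easy to demonstrate: the cross-ratio of four collinear points is unchanged by any 1D projective map. A minimal numeric check (illustrative values, not from the paper):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by 1D coordinates:
    (AC/BC) / (AD/BD)."""
    return ((a - c) / (b - c)) / ((a - d) / (b - d))

def projective(x, p, q, r, s):
    """An arbitrary 1D projective map x -> (p*x + q) / (r*x + s)."""
    return (p * x + q) / (r * x + s)

pts = [0.0, 1.0, 3.0, 6.0]
before = cross_ratio(*pts)
after = cross_ratio(*(projective(x, 2.0, 1.0, 0.5, 3.0) for x in pts))
# before and after agree to floating-point precision
```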

8.
To address the low calibration accuracy of projector-camera structured-light systems, a calibration algorithm combining a pseudo-random array with sinusoidal fringes is proposed. The method encodes the calibration-board corners directly and solves for the system parameters from the point-pair relations of the corners across coordinate frames, improving both the traditional coding accuracy and the calibration accuracy of the structured-light system. A pseudo-random array is projected at the centers of the sinusoidal fringes, and the true phase of the fringes is recovered from the uniqueness of pseudo-random-array windows together with the phase-shift method. Sinusoidal fringes are projected in the horizontal and vertical directions, and the true phase values in the two directions form a unique code for each calibration-board corner. From these codes, the corresponding point sets of the board corners in the projector image plane, the camera image plane, and the world frame are obtained, and a camera-calibration algorithm then yields the intrinsic and extrinsic parameters of the projector and camera. The method encodes the board corners directly with high precision, solves the correspondence between world points and pixel points, and achieves high-accuracy calibration with nothing more than an ordinary chessboard calibration board. Experiments show a maximum back-projection residual of 0.7 pixel and an RMS error of 0.08 mm in chessboard-corner reconstruction.
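The phase-shift step the abstract mentions is commonly the four-step variant; the sketch below (a standard textbook form, not the paper's code) recovers the wrapped phase from four samples shifted by 90 degrees each:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four fringe intensity samples
    I_k = A + B*cos(phi + k*pi/2), k = 0..3."""
    return math.atan2(i4 - i2, i1 - i3)

# Simulate the four samples for a known phase and check recovery.
A, B, phi = 0.5, 0.4, 0.8
samples = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*samples)
```

Unwrapping (here done via pseudo-random-array windows) then turns this wrapped phase into the "true phase" used for corner coding.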

9.
To remove human intervention from camera calibration, a fully automatic camera calibration method using an improved chessboard target is proposed. Four marker circles are recognized in each calibration image, and a 2D projective transformation matrix is computed from the image and physical coordinates of the four circle centers. Initial image positions of the chessboard corners are predicted from this projective transformation, after which corner positions are extracted to subpixel accuracy. The camera parameters to be calibrated are then solved iteratively. Experiments show that the corner-recognition capability and calibration accuracy of this fully automatic method are comparable to Bouguet's camera calibration toolbox, while significantly reducing calibration time and effort: calibrating a camera from 20 target images at 640×480 resolution takes only 16 s.
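The 2D projective transformation from four circle centers is an eight-unknown homography; a minimal self-contained sketch of solving it by direct linear transform (illustrative coordinates, not the paper's data):

```python
def homography_from_4(pts_src, pts_dst):
    """Solve the 8x8 DLT system for a plane projective transform H
    (h33 fixed to 1) from four point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    # Gaussian elimination with partial pivoting on the 8x9 augmented matrix.
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_h(H, x, y):
    """Map a board point through H to predict its image position."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Physical coordinates of the four marker circles and hypothetical image
# positions; H then predicts the initial position of any grid corner.
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(100.0, 120.0), (300.0, 115.0), (310.0, 330.0), (95.0, 340.0)]
H = homography_from_4(src, dst)
```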

10.
When detecting corners, the SUSAN operator uses only the area of the USAN region as its criterion and ignores the influence of the region's shape, so it has difficulty distinguishing the interior corners of a chessboard calibration board from edge points. To address this, this paper applies the SUSAN operator a second time inside the SUSAN circular template, achieving effective detection of chessboard-board corners. In addition, within a local neighborhood of each initially located corner, quadratic surface fitting is used to obtain subpixel corner coordinates. Experiments show that the proposed algorithm is accurate, effective, and adaptable, and can provide subpixel-accuracy corner information for camera calibration.
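The paper fits a 2D quadratic surface; a simplified 1D version of the same idea (fit a parabola through three response samples and take its vertex) conveys the subpixel step. This is a sketch of the principle, not the paper's 2D fit:

```python
def parabola_offset(s_m, s_0, s_p):
    """Vertex offset of the parabola through response values sampled at
    x = -1, 0, +1; the subpixel shift falls in (-0.5, 0.5)."""
    denom = s_m - 2.0 * s_0 + s_p
    return 0.5 * (s_m - s_p) / denom

# Corner response f(x) = 1 - (x - 0.25)**2 peaks at x = 0.25;
# samples at x = -1, 0, +1 recover that offset exactly.
offset = parabola_offset(1 - 1.5625, 1 - 0.0625, 1 - 0.5625)
```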

11.
LiDAR point clouds and camera images are fused in many application areas, and accurate extrinsic calibration is the prerequisite for fusing the two. Point-cloud feature extraction is the key step of extrinsic calibration, but the low resolution and low quality of point clouds degrade calibration accuracy. To address this, an extrinsic calibration method for LiDAR and camera based on edge-associated point clouds is proposed. First, dual-return data are used to extract the edge-associated point cloud of the calibration board; then an optimization procedure extracts, from that point cloud, board corners compatible with the board's actual dimensions; finally, the point-cloud corners are matched to the image corners, and the LiDAR-camera extrinsics are solved with the perspective-n-point method. Experiments show a reprojection error of 1.602 px, lower than comparable methods, verifying the method's effectiveness and accuracy.

12.
Liu, Huafeng; Han, Xiaofeng; Li, Xiangrui; Yao, Yazhou; Huang, Pu; Tang, Zhenmin. Multimedia Tools and Applications (2019) 78(17): 24269-24283

Robust road detection is a key challenge in safe autonomous driving. Recently, with the rapid development of 3D sensors, more and more researchers are trying to fuse information across different sensors to improve the performance of road detection. Although much successful work has been done in this field, data fusion under a deep-learning framework remains an open problem. In this paper, we propose a Siamese deep neural network based on FCN-8s to detect the road region. Our method uses data collected from a monocular color camera and a Velodyne-64 LiDAR sensor. We project the LiDAR point clouds onto the image plane to generate LiDAR images and feed them into one branch of the network; the RGB images are fed into the other branch. The feature maps that the two branches extract at multiple scales are fused before each pooling layer via additional fusion layers. Extensive experimental results on the public KITTI ROAD dataset demonstrate the effectiveness of the proposed approach.
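The "LiDAR image" generation step is a pinhole projection of each point into pixel coordinates, keeping the nearest depth per pixel. A minimal sketch with made-up intrinsics (not the paper's KITTI calibration):

```python
def lidar_to_image(points, fx, fy, cx, cy, w, h):
    """Rasterize LiDAR points given in the camera frame (z forward) into
    a sparse depth image by pinhole projection; nearer points win."""
    img = [[0.0] * w for _ in range(h)]
    for x, y, z in points:
        if z <= 0:
            continue  # behind the image plane
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < w and 0 <= v < h and (img[v][u] == 0.0 or z < img[v][u]):
            img[v][u] = z
    return img

cloud = [(0.0, 0.0, 10.0), (1.0, 0.5, 5.0), (0.0, 0.0, -2.0)]
depth = lidar_to_image(cloud, fx=100.0, fy=100.0, cx=32.0, cy=24.0, w=64, h=48)
```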


13.
Automatic Recognition and Localization of Chessboard Template Corners
Recognizing and locating the corners of a chessboard template is a key step in camera calibration, and doing so automatically is a precondition for automating the calibration process. To this end, an effective method is proposed that exploits the local gray-level features of interior corners in chessboard-template images together with the structural features formed by the grid lines, achieving automatic recognition and localization of the interior corners. Experiments show the method to be effective and practical: it noticeably speeds up calibration and shortens calibration time, paving the way for automating camera calibration from multiple chessboard-template images.

14.
A novel algorithm for detecting average vehicle velocity through automatic, dynamic camera calibration based on the dark channel in homogeneous fog is presented in this paper. A camera fixed over the middle of the road is calibrated in homogeneous fog and can then be used in any weather condition. Unlike other work on velocity estimation, our traffic model includes only the road plane and vehicles in motion. Painted lines in the scene image are ignored because traffic lanes are sometimes absent, especially in unstructured traffic scenes. Once the camera is calibrated, scene distances can be obtained and used to compute average vehicle velocities. The algorithm has three major steps. First, the current video frame is analyzed to identify the weather condition using an area search method (ASM): in homogeneous fog, the average pixel value from top to bottom of the selected area varies in the form of an edge spread function (ESF). Second, the road surface plane is found from an activity map created by computing the expected absolute intensity difference between two adjacent frames. Finally, the scene transmission image is obtained by the dark channel prior, and the camera's intrinsic and extrinsic parameters are computed from a calibration formula derived from the monocular model and the transmission image; several key points on the road surface with particular transmission values are selected to generate the necessary calibration equations. Vehicle pixel coordinates are transformed to camera coordinates, distances between vehicles and the camera are computed, and the average velocity of each vehicle is obtained. The paper concludes with calibration results and velocity data for nine vehicles under different weather conditions; comparison with other algorithms verifies the effectiveness of our approach.
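The dark channel prior used in the final step is the per-pixel channel minimum followed by a local minimum filter. A minimal sketch on a tiny hand-made image (illustrative data, not the paper's frames):

```python
def dark_channel(img, patch):
    """Per-pixel minimum over the RGB channels, then a minimum filter
    over a patch x patch neighborhood (the dark channel prior)."""
    h, w = len(img), len(img[0])
    mins = [[min(px) for px in row] for row in img]
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = min(
                mins[a][b]
                for a in range(max(0, i - r), min(h, i + r + 1))
                for b in range(max(0, j - r), min(w, j + r + 1)))
    return out

# A tiny 2x2 RGB image; haze-free regions have a near-zero dark channel,
# while fog lifts it, which is what makes transmission recoverable.
tiny = [[(0.9, 0.2, 0.7), (0.8, 0.8, 0.8)],
        [(0.1, 0.5, 0.6), (0.4, 0.3, 0.9)]]
dc = dark_channel(tiny, patch=3)
```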

15.
To overcome the limitations of traditional manual measurement of panel dimensions, namely low accuracy, heavy workload, and the risk of damaging the panel surface, a vision-based panel-dimension measurement system was designed using binocular stereo vision. Chessboard images are captured with a binocular camera, MATLAB is used for camera calibration and image rectification, and left and right images are matched at feature points with the semi-global matching (SGM) stereo algorithm to reconstruct a 3D point-cloud model of the target. To improve the accuracy of feature-point coordinates, a HARRIS-based subpixel detection method is proposed. A region-growing algorithm combined with dilation and erosion extracts the panel's surface contour, the 3D coordinates of points on the contour are computed by the triangulation principle to measure the panel's dimensions, and point-cloud reconstruction enhances the 3D visualization. Practical results show that the subpixel detection method has an advantage in corner extraction and that the system achieves high-accuracy dimension measurement in real panel-measurement applications, meeting industrial requirements.
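The triangulation principle for a rectified stereo pair reduces to depth from disparity, Z = f·B/d, followed by back-projection. A minimal sketch with illustrative numbers (square pixels assumed):

```python
def triangulate(u_left, u_right, v, fx, baseline, cx, cy):
    """Depth from disparity for a rectified stereo pair: Z = f*B/d,
    then back-project the pixel into left-camera coordinates."""
    d = u_left - u_right
    z = fx * baseline / d
    x = (u_left - cx) * z / fx
    y = (v - cy) * z / fx  # square pixels assumed (fy == fx)
    return x, y, z

# Illustrative rectified pair: 20 px disparity, 700 px focal length,
# 0.12 m baseline.
x, y, z = triangulate(420.0, 400.0, 260.0,
                      fx=700.0, baseline=0.12, cx=320.0, cy=240.0)
```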

16.
Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and, rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel, effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.

17.
In this paper, we show how an active binocular head, the IIS head, can be calibrated easily and with very high accuracy. Our calibration method can also be applied to many other binocular heads. In addition to the proposal and demonstration of a four-stage calibration process, this paper makes three major contributions. First, we propose a motorized-focus lens (MFL) camera model which assumes constant nominal extrinsic parameters; the advantage of constant extrinsic parameters is a simple head/eye relation. Second, a calibration method for the MFL camera model is proposed which separates estimation of the image center and effective focal length from estimation of the camera orientation and position. This separation has proved to be crucial; otherwise, estimates of the camera parameters are very noise-sensitive. Third, we show that, once the parameters of the MFL camera model are calibrated, a nonlinear recursive least-squares estimator can be used to refine all 35 kinematic parameters. Real experiments have shown that the proposed method achieves an accuracy of one pixel prediction error and 0.2 pixel epipolar error, even when all the joints, including the left and right focus motors, are moved simultaneously. This accuracy is good enough for many 3D vision applications, such as navigation, object tracking and reconstruction.

18.
Objective: The extrinsic parameters of an RGB-D camera transform point clouds from the camera frame to the world frame and are used in 3D scene reconstruction, 3D measurement, robotics, object detection, and other areas. Typical methods calibrate the extrinsics of the RGB-D color camera with a calibration object (such as a chessboard) but do not exploit the depth information, which makes the procedure hard to simplify; making full use of depth greatly simplifies extrinsic calibration. Color-image-based calibration targets the color sensor, yet most RGB-D applications rely on the depth sensor, and a depth-based method can calibrate the depth sensor's pose directly. Method: The depth map is first converted to a 3D point cloud in the camera frame; planes in the cloud are detected automatically with the MELSAC method; using the constraint between the ground plane and the world frame, candidate planes are traversed and filtered until the ground plane is found; and from the spatial relation between the ground plane and the camera frame, the camera extrinsics, i.e. the transformation matrix from camera-frame points to world-frame points, are computed. Result: With chessboard-based extrinsic calibration as the baseline, processing RGB-D video streams captured by a PrimeSense camera gives a mean roll-angle error of -1.14°, a mean pitch-angle error of 4.57°, and a mean camera-height error of 3.96 cm. Conclusion: By detecting the ground plane automatically, the method accurately estimates the camera extrinsics and is highly automated; moreover, the algorithm is highly parallelizable and, after parallel optimization, runs in real time, making it applicable to automatic robot pose estimation.
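The roll, pitch, and height errors above come from the relation between the detected ground plane and the camera frame. A minimal sketch of that recovery under an assumed axis convention (x right, y down, z forward; the plane normal points up toward the camera), not the paper's implementation:

```python
import math

def pose_from_ground_plane(n, d):
    """Given the ground plane n.p + d = 0 in the camera frame (n a unit
    normal pointing up toward the camera), recover the camera height and
    the roll/pitch angles relative to a level camera.
    Assumed camera axes: x right, y down, z forward."""
    nx, ny, nz = n
    height = abs(d)                           # origin-to-plane distance
    pitch = math.degrees(math.asin(nz))       # tilt about the x axis
    roll = math.degrees(math.atan2(nx, -ny))  # tilt about the z axis
    return roll, pitch, height

# Camera pitched down 30 degrees at 1.5 m above the ground: the upward
# plane normal in camera coordinates is (0, -cos30, sin30).
n_up = (0.0, -math.cos(math.radians(30.0)), 0.5)
roll, pitch, height = pose_from_ground_plane(n_up, d=1.5)
```

The sign conventions for roll and pitch depend on the chosen world frame; they are assumptions here.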
