Similar Literature
20 similar documents found
1.
This paper addresses the problem of geometry determination of a stereo rig that undergoes general rigid motions. Neither known reference objects nor stereo correspondence is required. With almost no exception, all existing online solutions attempt to recover stereo geometry by first establishing stereo correspondences. We first describe a mathematical framework that allows us to solve for stereo geometry, i.e., the rotation and translation between the two cameras, using only motion correspondence, which is far easier to acquire than stereo correspondence. Second, we show how to recover the rotation and present two linear methods, as well as a nonlinear one, to solve for the translation. Third, we perform a stability study for the developed methods in the presence of image noise, camera parameter noise, and ego-motion noise. We also address accuracy issues. Experiments with real image data are presented. The work allows the concept of online calibration to be broadened, as it is no longer true that only single cameras can exploit structure-from-motion strategies; even the extrinsic parameters of a stereo rig of cameras can be recovered without solving stereo correspondence. The developed framework is applicable for estimating the relative three-dimensional (3D) geometry associated with a wide variety of mounted devices used in vision and robotics, by exploiting their scaled ego-motion streams.
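The rotation step can be sketched as follows. For a rigid rig, each per-frame rotation of one camera is the conjugate of the other camera's rotation by the fixed rig rotation, so corresponding rotation axes are related by that rig rotation and can be aligned in a least-squares sense (Kabsch/SVD). This is an illustrative sketch under that conjugation convention with synthetic motions, not the paper's exact algorithm:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from a unit axis and an angle (Rodrigues' formula)."""
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rotation_axis(R):
    """Unit rotation axis of R (valid for angles strictly between 0 and pi)."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def rig_rotation(right_motions, left_motions):
    """Solve R with axis(R_right_i) = R @ axis(R_left_i) for all i (Kabsch)."""
    A = np.array([rotation_axis(R) for R in right_motions])  # rows a_i
    B = np.array([rotation_axis(R) for R in left_motions])   # rows b_i
    U, _, Vt = np.linalg.svd(B.T @ A)     # H = sum_i b_i a_i^T
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:              # keep a proper rotation
        U[:, -1] *= -1
        R = (U @ Vt).T
    return R

# synthetic check: a known rig rotation conjugates the per-frame motions
rig_true = rodrigues(np.array([1.0, 2.0, 2.0]) / 3.0, 0.7)
axes = (np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([1.0, 1, 1]))
left = [rodrigues(a / np.linalg.norm(a), 0.5) for a in axes]
right = [rig_true @ Rl @ rig_true.T for Rl in left]
rig_est = rig_rotation(right, left)
```

At least two motions with non-parallel rotation axes are needed; with three, the estimate is overdetermined and noise averages out.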

2.
Three-dimensional reconstruction of colored targets with a Time-of-Flight (TOF) camera requires calibrating the geometric parameters of a joint CCD-TOF camera system. Building on existing calibration algorithms based on color images and TOF depth images, a calibration method based on a planar checkerboard template is proposed. Color images and amplitude images of a color checkerboard pattern fixed on a planar calibration template were captured at different angles, and Harris corner extraction was improved. Based on the conjugate relationship between the checkerboard corners and virtual image points, a camera calibration system model was established and solved with the Levenberg-Marquardt algorithm, and calibration experiments were carried out. The intrinsic parameters of the TOF and CCD cameras were obtained, the relative pose of the two camera coordinate systems was estimated from the pose relationship between the image planes, and a final joint optimization yielded the rotation matrix and translation vector between the cameras. Experimental results show that the proposed algorithm streamlines the solution process, improves calibration efficiency, and achieves high accuracy.
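At the heart of such plane-based calibration is the homography between the checkerboard plane and each image. A minimal direct-linear-transform (DLT) sketch with synthetic point arrays (the paper's improved Harris extraction and Levenberg-Marquardt refinement are not modelled here):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (Nx2 arrays, N >= 4) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)        # null vector of the stacked constraints
    return H / H[2, 2]              # fix the projective scale

# synthetic check: recover a known homography from eight plane points
H_true = np.array([[1.2, 0.1, 3.0], [0.05, 0.9, -2.0], [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3], [4, 1], [3, 5], [5, 2]], float)
ph = np.column_stack([src, np.ones(len(src))]) @ H_true.T
H_est = estimate_homography(src, ph[:, :2] / ph[:, 2:3])
```

In a full pipeline the homographies from several views constrain the intrinsics, after which nonlinear refinement minimizes reprojection error.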

3.
In this paper, a technique for calibrating a camera using a planar calibration object with known metric structure, when the camera (or the calibration plane) undergoes pure translational motion, is presented. The study is an extension of the standard formulation of plane-based camera calibration, in which the translational case is considered degenerate. We derive a flexible and straightforward way of using different amounts of knowledge of the translational motion for the calibration task. The theory is mainly applicable in a robot vision setting, and the calculation of the hand-eye orientation and the special case of stereo head calibration are also addressed. Results of experiments on both computer-generated and real image data are presented. The paper covers the most useful instances of applying the technique to a real system and discusses the degenerate cases that need to be considered. The paper also presents a method for calculating the infinite homography between the two image planes in a stereo head, using the homographies estimated between the calibration plane and the image planes. Its possible usage and usefulness for simultaneous calibration of the two cameras in the stereo head are discussed and illustrated using experiments.

4.
In this paper we address the problem of recovering 3D non-rigid structure from a sequence of images taken with a stereo pair. We have extended existing non-rigid factorization algorithms to the stereo camera case and present an algorithm to decompose the measurement matrix into the motion of the left and right cameras and the 3D shape, represented as a linear combination of basis-shapes. The added constraints in the stereo camera case are that both cameras are viewing the same structure and that the relative orientation between both cameras is fixed. Our focus in this paper is on the recovery of flexible 3D shape rather than on the correspondence problem. We propose a method to compute reliable 3D models of deformable structure from stereo images. Our experiments with real data show that improved reconstructions can be achieved using this method. The algorithm includes a non-linear optimization step that minimizes image reprojection error and imposes the correct structure on the motion matrix by choosing an appropriate parameterization. We show that 3D shape and motion estimates can be successfully disambiguated after bundle adjustment and demonstrate this on synthetic and real image sequences. While this optimization step is proposed for the stereo camera case, it can be readily applied to the case of non-rigid structure recovery using a monocular video sequence. Electronic supplementary material is available for this article and accessible for authorised users.

5.
In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to enable walkthroughs and augmented reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment in the image (for determining the translation vector) is known, (3) the principal point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R_o. After the focal length has been computed, the rotation matrix and the translation vector are evaluated in turn to describe the rigid motion between R_o and the camera coordinate system R_c. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box so that it best fits the potential objects within the scene as seen through the single image. A texture, which may contain holes due to invisible parts of certain objects, is assigned to each face of the box. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.
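The focal-length step behind assumptions (1) and (3) can be sketched: with the principal point at the image centre and unit aspect ratio, the vanishing points of two orthogonal 3D directions satisfy (v1 - c) · (v2 - c) = -f^2. A hedged sketch with synthetic values (not the paper's full calibration):

```python
import numpy as np

def focal_from_vps(v1, v2, c):
    """Focal length from vanishing points v1, v2 (pixels) of two orthogonal
    3D directions, assuming principal point c and unit aspect ratio."""
    d = (v1 - c) @ (v2 - c)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(-d))

# synthetic check: f = 1000; directions (1, 0, 1) and (-1, 0, 1) are orthogonal,
# and their vanishing points are K*d dehomogenized
f, c = 1000.0, np.array([320.0, 240.0])
v1 = np.array([c[0] + f, c[1]])   # image of direction (1, 0, 1)
v2 = np.array([c[0] - f, c[1]])   # image of direction (-1, 0, 1)
f_est = focal_from_vps(v1, v2, c)
```

The dot product being negative is the geometric condition that the two directions can actually be orthogonal for some positive focal length.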

6.
Extrinsic calibration of heterogeneous cameras by line images
Extrinsic calibration refers to determining the relative pose of cameras. Most approaches for cameras with non-overlapping fields of view (FOV) are based on mirror reflection, object tracking, or the rigidity constraint of stereo systems, whereas cameras with overlapping FOV can be calibrated using structure-from-motion solutions. We propose an extrinsic calibration method within a structure-from-motion framework for cameras with overlapping FOV and its extension to cameras with partially non-overlapping FOV. Recently, omnidirectional vision has become a popular topic in computer vision, as an omnidirectional camera can cover a large FOV in one image. Combining the good resolution of perspective cameras with the wide observation angle of omnidirectional cameras has been an attractive trend in multi-camera systems. For this reason, we present an approach that is applicable to heterogeneous types of vision sensors. Moreover, this method utilizes images of lines, as these features possess several advantageous characteristics over point features, especially in urban environments. The calibration consists of a linear estimation of the orientation and position of the cameras and, optionally, bundle adjustment to refine the extrinsic parameters.

7.
This paper addresses the problem of recovering both the intrinsic and extrinsic parameters of a camera from the silhouettes of an object in a turntable sequence. Previous silhouette-based approaches have exploited correspondences induced by epipolar tangents to estimate the image invariants under turntable motion and achieved a weak calibration of the cameras. It is known that the fundamental matrix relating any two views in a turntable sequence can be expressed explicitly in terms of the image invariants, the rotation angle, and a fixed scalar. It will be shown that the imaged circular points for the turntable plane can also be formulated in terms of the same image invariants and fixed scalar. This allows the imaged circular points to be recovered directly from the estimated image invariants, and provides constraints for the estimation of the imaged absolute conic. The camera calibration matrix can thus be recovered. A robust method for estimating the fixed scalar from image triplets is introduced, and a method for recovering the rotation angles using the estimated imaged circular points and epipoles is presented. Using the estimated camera intrinsics and extrinsics, a Euclidean reconstruction can be obtained. Experimental results on real data sequences are presented, which demonstrate the high precision achieved by the proposed method.

8.
In computer vision, camera calibration is a necessary process whenever information such as angles and distances must be retrieved. This paper addresses the multi-camera calibration problem with a one-dimensional calibration pattern under general motions. Currently, the known algorithms for this problem are based on the estimation of vanishing points. However, this estimate is very susceptible to noise, making those methods unsuitable for practical applications. Instead, this paper presents a new calibration algorithm in which the cameras are divided into binocular sets. The fundamental matrix of each binocular set is then estimated, allowing a projective calibration of each camera. The calibration is then upgraded to Euclidean space, completing the process. The calibration is possible without imposing any restrictions on the movement of the pattern and without any prior information about the cameras or the motion. Experiments on synthetic and real images validate the new method and show that its accuracy also makes it suitable for practical applications.

9.
This paper presents a new approach to combining stereo vision and dynamic vision, with the objective of retaining their advantages and removing their disadvantages. It is shown that, by assuming affine cameras, the stereo correspondences and motion correspondences, if organized in a particular way in a matrix, can be decomposed into: the 3D structure of the scene, the camera parameters, the motion parameters, and the stereo geometry. With this, the approach can infer stereo correspondences from motion correspondences, requiring only time linear in the size of the available image data. The approach offers the advantages of simpler correspondence, as in dynamic vision, and accurate reconstruction, as in stereo vision, even with short image sequences.
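The matrix decomposition this approach builds on can be illustrated with the classic affine factorization: under an affine camera, the centred measurement matrix has rank at most 3 and splits by SVD into motion and shape factors, up to an invertible 3x3 ambiguity. A minimal sketch with synthetic rank-3 data (not the paper's joint stereo-plus-motion decomposition):

```python
import numpy as np

def affine_factorize(W):
    """Split a 2F x N measurement matrix into motion M (2F x 3) and shape X (3 x N).
    Returns the centred matrix W0 as well; M @ X reconstructs W0 exactly for
    noise-free affine data. The split is defined only up to a 3x3 ambiguity."""
    W0 = W - W.mean(axis=1, keepdims=True)       # remove per-view translations
    U, S, Vt = np.linalg.svd(W0, full_matrices=False)
    s = np.sqrt(S[:3])                           # share the singular values
    return U[:, :3] * s, s[:, None] * Vt[:3], W0

# synthetic check: exact rank-3 measurements from 4 views (8 rows) of 12 points
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 12))
M_est, X_est, W0 = affine_factorize(W)
```

With noisy data the rank-3 truncation becomes a least-squares fit, and metric constraints on the motion rows resolve the affine ambiguity.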

10.
Generation of stereo image pairs
Acquiring a stereo image pair of the same scene is a key problem in binocular stereo imaging. A method is proposed for generating stereo image pairs when the 3D scene has already been built. Based on the principles of binocular stereo vision, the method uses camera objects in 3DS MAX to apply coordinate transformations and perspective projections to the objects in the scene, generating the left-eye and right-eye views respectively. Experimental results show that the positions of the two target cameras relative to the 3D model and the baseline length are the main factors affecting the stereo effect; by changing the positions of the target cameras relative to the 3D model, stereo pairs with positive or negative parallax can be generated, and the stereo effect is best when the ratio parameter of AB to CO is 0.05.
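The parallel-camera geometry behind such stereo pairs can be sketched in a few lines: shifting the camera along the baseline produces a horizontal disparity of f·b/Z for a point at depth Z. The values below are hypothetical, and the paper's AB/CO ratio parameter and toe-in (converged) configurations are not modelled:

```python
import numpy as np

def project(points, f, cam_x):
    """Pinhole projection of 3D points (Nx3) for a camera shifted by cam_x
    along the x (baseline) axis; f is the focal length in pixels."""
    x = f * (points[:, 0] - cam_x) / points[:, 2]
    y = f * points[:, 1] / points[:, 2]
    return np.column_stack([x, y])

f, b = 800.0, 0.1                       # focal length (px), baseline (m)
P = np.array([[0.2, 0.1, 2.0], [0.5, -0.3, 5.0]])
left = project(P, f, -b / 2)            # left-eye view
right = project(P, f, +b / 2)           # right-eye view
disparity = left[:, 0] - right[:, 0]    # equals f*b/Z for parallel cameras
```

Increasing the baseline b scales the disparity proportionally, which matches the abstract's observation that baseline length governs the strength of the stereo effect.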

11.
The stereo vision system on an autonomous vehicle typically consists of two fixed-focus cameras mounted on a platform, so the intrinsic parameters remain unchanged after a single calibration and only the extrinsic parameters need to be re-calibrated. For this particular application, a weak calibration algorithm for the camera pair based on multiscale geometric analysis is proposed: corner points in the left and right images are detected with the Contourlet transform, and the fundamental matrix of the camera pair is estimated with Hartley's normalized eight-point algorithm. Relying on existing intrinsic-parameter calibration toolboxes, the extrinsic parameter matrix between the cameras can then be obtained quickly on top of the weak calibration. Experimental results show that the method achieves good accuracy.
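The normalized eight-point step can be sketched as follows; synthetic correspondences stand in for the Contourlet-detected corners, and no outlier handling is modelled:

```python
import numpy as np

def normalize_pts(pts):
    """Translate to the centroid and scale to mean norm sqrt(2) (Hartley)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return (pts - c) * s, T

def eight_point(x1, x2):
    """Fundamental matrix from >= 8 pixel correspondences (rows of x1, x2)."""
    n1, T1 = normalize_pts(x1)
    n2, T2 = normalize_pts(x2)
    A = np.column_stack([n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
                         n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
                         n1[:, 0], n1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt2 = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2
    F = T2.T @ F @ T1                            # undo the normalization
    return F / np.linalg.norm(F)

# synthetic check: two views of a random rigid scene
rng = np.random.default_rng(1)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.5, 0.05, 0.02])
p1 = (K @ X.T).T
p2 = (K @ (X @ R.T + t).T).T
x1, x2 = p1[:, :2] / p1[:, 2:], p2[:, :2] / p2[:, 2:]
F = eight_point(x1, x2)
h1 = np.column_stack([x1, np.ones(20)])
h2 = np.column_stack([x2, np.ones(20)])
residual = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1)).max()   # epipolar residual
```

The normalization step is what makes the linear estimate numerically stable; without it, the pixel-scale monomials in A dominate and the solution degrades badly under noise.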

12.
Intelligent transportation applications in single-camera scenes are well developed, but cross-region (multi-camera) research is still in its infancy. This paper proposes a cross-camera scene stitching method based on camera calibration. First, vanishing-point calibration is used to establish, for each of the two camera scenes, the mapping from physical information in a local world coordinate system to the 2D image. Second, the projective transformation between the cameras is computed from the information shared by the two local world coordinate systems. Finally, road scenes are stitched using the proposed inverse-projection idea and the translation-vector relationship. Experimental results show that the method achieves good road scene stitching and cross-region physical road measurement, laying a foundation for related practical applications and research.

13.
In the global calibration of multiple cameras for intelligent unmanned vehicles, surveillance, and similar settings, the cameras often have no overlapping fields of view or cannot all be observed from one direction. This paper therefore proposes a global multi-camera calibration method based on a half-mirror planar target. First, the intrinsic parameters of each camera are calibrated. A point laser is then fixed and projected into the field of view of the first camera c1, where the laser spot is observed; the half-mirror planar target next reflects the beam into the field of view of the second camera c2, where the image of the spot is observed. Since the intrinsic and extrinsic parameters of each individual camera have already been calibrated, the line equation of the reflected beam can be computed, and the rotation matrix and translation vector of the global coordinate transformation are obtained from the direction vectors of the lines. Simulation and experimental results show that the calibration error of the proposed method is 0.36 mm at distances of 500 mm-1200 mm within the cameras' field of view. Compared with existing methods, the proposed method is more widely applicable and can calibrate multi-camera systems with complex viewing configurations, while requiring only simple equipment, convenient operation, and straightforward computation.

14.
《Real》1999,5(3):189-202
Real-time computation of exact depth is not feasible in an active vision setup. Instead, reliable relative depth information which can be rapidly computed is preferred. In this paper, a stereo cue for computing relative depth obtained from an active stereo vision system is proposed. The proposed stereo cue can be computed purely from the coordinates of points in the stereo pair. The computational cost required is very low. No camera calibration or prior knowledge of the parameters of the stereo vision system is required. We show that the relationship between the relative depth cue and the actual depth in the three-dimensional (3D) space is monotonic. Such a relation is maintained even when the focal length and the vergence angle are changed, so long as the focal lengths of the two cameras are similar. Therefore, real-time implementation in an active vision setup can be realized. Stability analysis shows that the proposed method will be stable in practical situations, unless the stereo camera diverges. Experimental results are presented to highlight the properties and advantages of the proposed method.

15.
Camera calibration for large-field-of-view binocular stereo vision
For large-field-of-view vision measurement applications, a freely rotatable cross-shaped target was designed and built on the basis of an analysis of the camera imaging model, enabling accurate calibration of a large-field-of-view binocular camera pair. The cross target is placed uniformly at multiple positions in the measurement volume while the two cameras synchronously capture multiple target images. Initial values of the camera parameters are obtained from the essential matrix, and the optimal solution is obtained by self-calibrating bundle adjustment. The method does not require the feature points to be coplanar; only the physical distances between feature points need to be known, which lowers the difficulty of target fabrication. Measurements were performed with TN3DOMS.S: testing a standard reference bar over a 1500 mm x 1500 mm measurement range gave an RMS error of 0.06 mm.
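Obtaining initial camera parameters from the essential matrix usually means extracting the four candidate relative poses (R, t); the correct one is then selected by a cheirality (points-in-front) test and refined by bundle adjustment. A standard SVD-based sketch with a synthetic pose (not the paper's full pipeline):

```python
import numpy as np

def skew(v):
    """Cross-product matrix of a 3-vector."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def decompose_essential(E):
    """Four (R, t) candidates encoded by an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
    t = U[:, 2]                        # translation direction (up to sign/scale)
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

# synthetic check: one candidate must match the known relative pose
th = 0.2
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th), np.cos(th), 0],
                   [0, 0, 1.0]])
t_true = np.array([1.0, 0.2, 0.1])
t_true /= np.linalg.norm(t_true)
E = skew(t_true) @ R_true
best = min(np.linalg.norm(R - R_true)
           + min(np.linalg.norm(t - t_true), np.linalg.norm(t + t_true))
           for R, t in decompose_essential(E))
```

Note that the translation is recovered only up to scale; in the paper's setting the known inter-point distances on the target fix the metric scale.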

16.
To address the change of extrinsic parameters caused by camera rotation in free binocular stereo vision, a dynamic extrinsic-parameter acquisition method based on rotation-axis calibration is proposed. Stereo calibration at several different positions yields multiple pairs of rotation and translation matrices, from which the rotation-axis parameters are solved by least squares; combining the intrinsic and extrinsic parameters of the left and right cameras at the initial position with the rotation angle, the extrinsic parameters of the cameras are then obtained in real time. Using the proposed method to acquire the dynamic extrinsic parameters and reconstructing checkerboard corners in 3D gave a mean error of 0.241 mm and a standard deviation of 0.156 mm. Compared with calibration based on multi-plane targets, the method is more accurate and simpler to operate. It requires no real-time calibration and can acquire dynamic extrinsic parameters while the cameras rotate.
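The least-squares rotation-axis step can be sketched: every rotation about the physical axis leaves that axis fixed, so (R_i - I) a ≈ 0 for each calibrated rotation R_i, and the axis is the right singular vector of the stacked system with smallest singular value. Illustrative only, with synthetic rotations about a known axis:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from a unit axis and an angle (Rodrigues' formula)."""
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def common_axis(rotations):
    """Least-squares axis shared by several rotations: solve (R_i - I) a ~ 0."""
    A = np.vstack([R - np.eye(3) for R in rotations])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                      # unit vector, sign arbitrary

# synthetic check: three rotations about the same axis
axis_true = np.array([1.0, 2.0, 2.0]) / 3.0
Rs = [rodrigues(axis_true, a) for a in (0.3, 0.7, 1.1)]
axis_est = common_axis(Rs)
if axis_est @ axis_true < 0:           # fix the SVD sign ambiguity
    axis_est = -axis_est
```

With noisy calibrated rotations the same stacked SVD gives the total-least-squares axis, which is what makes the multi-position measurements useful.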

17.
Objective: In computer vision and photogrammetry, multi-view images are frequently used for high-precision 3D reconstruction of scenes. Accurate calibration of the camera intrinsic parameters and of the fixed relative poses between cameras is the key step, and this paper proposes a method for fast camera calibration under strong constraints. Method: Six mutually independent constraints between the cameras are used to exploit the geometric conditions of the system fully and fix its inherent relations; an adjustment model under strong constraints is then derived from the collinearity equations and applied in self-calibrating bundle adjustment, matching adjacent stereo cameras to achieve fast calibration of the multi-camera system. Results: Experiments verify that adding the strong constraints increases the number of redundant observations in the adjustment and improves calibration accuracy and robustness. Conclusion: A camera calibration system was built, a method for fast camera calibration under strong constraints was proposed, and 3D human-body reconstruction was investigated; the method can be extended to the calibration of multi-camera stereo measurement systems composed of several cameras.

18.
We propose camera models for cameras that are equipped with lenses that can be tilted in an arbitrary direction (often called Scheimpflug optics). The proposed models are comprehensive: they can handle all tilt lens types that are in common use for machine vision and consumer cameras and correctly describe the imaging geometry of lenses for which the ray angles in object and image space differ, which is true for many lenses. Furthermore, they are versatile since they can also be used to describe the rectification geometry of a stereo image pair in which one camera is perspective and the other camera is telecentric. We also examine the degeneracies of the models and propose methods to handle the degeneracies. Furthermore, we examine the relation of the proposed camera models to different classes of projective camera matrices and show that all classes of projective cameras can be interpreted as cameras with tilt lenses in a natural manner. In addition, we propose an algorithm that can calibrate an arbitrary combination of perspective and telecentric cameras (no matter whether they are tilted or untilted). The calibration algorithm uses a planar calibration object with circular control points. It is well known that circular control points may lead to biased calibration results. We propose two efficient algorithms to remove the bias and thus obtain accurate calibration results. Finally, we perform an extensive evaluation of the proposed camera models and calibration algorithms that establishes the validity and accuracy of the proposed models.

19.
A new camera self-calibration method based on an active vision system
雷成, 吴福朝, 胡占义. 《计算机学报》, 2000, 23(11): 1130-1139
Camera calibration is an indispensable step in acquiring 3D information from 2D images. This paper proposes a new camera self-calibration method based on an active vision system: by controlling the camera platform to perform four translational motions, any three of which are non-coplanar, the intrinsic parameters of the camera and the rotation matrix between the camera coordinate system and the platform coordinate system can be calibrated linearly. In addition, necessary and sufficient conditions are given for uniquely solving the translation vector between the camera and platform coordinate systems, using a stereo vision method and a pure-epipole method, respectively.

20.
Metric scale estimation in 3D space is an important task in 3D reconstruction, and there is practical demand for estimating 3D scale from a single image. Scale estimation usually requires the camera to be calibrated first. Exploiting the fact that a monocular image obeys the laws of perspective, a method based on two vanishing points and local scale information is proposed to calibrate the camera and thereby estimate the 3D metric scale of objects in a monocular image. First, two mutually orthogonal groups of parallel lines are selected from the image to obtain the coordinates of the two corresponding vanishing points. Then, the rotation matrix between the world and camera coordinate systems is obtained from the vanishing-point coordinates and the focal length, and the translation vector is obtained from the properties of the vanishing points and the known local scale information, completing the monocular calibration. Finally, the 3D world coordinates corresponding to pixels in the 2D image are recovered, and the metric distance between two pixels in 3D space is computed. Experimental results show that the method effectively estimates the scale of buildings in a single image.
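The rotation-from-vanishing-points step can be sketched: the vanishing point of world axis i images as K r_i, so the rotation columns are the normalized back-projections K^{-1} v_i, with the third column completing the right-handed frame. In this synthetic sketch the homogeneous signs are taken directly from the construction; in practice each column's sign must be disambiguated:

```python
import numpy as np

def rotation_from_vps(v1, v2, K):
    """World-to-camera rotation from homogeneous vanishing points of two
    orthogonal world axes, given the intrinsic matrix K."""
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ v1
    r1 /= np.linalg.norm(r1)
    r2 = Kinv @ v2
    r2 -= r1 * (r1 @ r2)              # re-orthogonalize against noise
    r2 /= np.linalg.norm(r2)
    return np.column_stack([r1, r2, np.cross(r1, r2)])

# synthetic check: the vanishing point of world axis i is K @ R[:, i]
K = np.array([[900, 0, 320], [0, 900, 240], [0, 0, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]) \
         @ np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
R_est = rotation_from_vps(K @ R_true[:, 0], K @ R_true[:, 1], K)
```

The translation then follows from one known length in the scene, which fixes the metric scale that a single image cannot supply on its own.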


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号