Similar Documents
 20 similar documents found (search time: 62 ms)
1.
This paper presents a method for computing the extrinsic parameters of a camera from measurements of spatial invariants. Spatial perspective invariants are shape descriptions that remain unchanged under geometric transformations such as projection or a change of viewpoint. Because they provide a feature description of an object or scene that is independent of external viewing conditions, they are widely applicable in computer vision and related areas. Camera calibration determines the transformation between the 2D image information captured by a camera and the corresponding 3D scene, and comprises intrinsic and extrinsic parameters. The intrinsic parameters characterize the camera's internal and optical properties, including the image centre coordinates (Cx, Cy), the image scale factor Sx, the effective focal length f, and the lens distortion coefficient K. The extrinsic parameters describe the camera's position and orientation in world coordinates, namely a translation vector T and a 3×3 rotation matrix R, which are generally combined into an extended 3×4 matrix [R T]. Based on measured spatial perspective invariants, this paper presents a method for calibrating the extrinsic camera parameters; experimental results show that the method is highly robust.
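For illustration, a minimal sketch in Python (with made-up intrinsic values and pose, not data from the paper) of how the extended matrix [R T]3×4, together with the intrinsic parameters, maps a world point to pixel coordinates:

```python
import numpy as np

# Hypothetical intrinsics: focal length f, scale factor Sx, image centre (Cx, Cy).
f, Sx, Cx, Cy = 800.0, 1.0, 320.0, 240.0
K = np.array([[f * Sx, 0.0, Cx],
              [0.0,    f,   Cy],
              [0.0,    0.0, 1.0]])

# Hypothetical extrinsics: rotation R (world -> camera) and translation T,
# stacked as the extended 3x4 matrix [R T] described in the abstract.
R = np.eye(3)
T = np.array([[0.1], [0.0], [2.0]])
RT = np.hstack([R, T])                      # 3x4

Xw = np.array([0.5, 0.2, 0.0, 1.0])         # homogeneous world point
x = K @ RT @ Xw                             # project into the image
u, v = x[0] / x[2], x[1] / x[2]
print(u, v)                                 # pixel coordinates
```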

2.
Many computer vision applications can benefit from omnidirectional vision sensing, rather than depending solely on conventional cameras that have constrained fields of view. For example, mobile robots often require a full 360° view of their environment in order to perform navigational tasks such as identifying landmarks, localizing within the environment, and determining free paths in which to move. There has been much research interest in omnidirectional vision in the past decade and many techniques have been developed. These techniques include: (i) catadioptric methods, which can provide rapid image acquisition but lack image resolution; and (ii) mosaicing and linear scanning techniques, which have high image resolution but typically have slow image acquisition speed. In this paper, we introduce a novel linear scanning panoramic vision system that can acquire panoramic images quickly with little loss of image resolution. The system makes use of a fast line-scan camera, instead of a slower, conventional area-scan camera. In addition, a unique coarse-to-fine panoramic imaging technique has been developed that is based on smart sensing principles. Using the active vision paradigm, we control the motion of the rotating camera using feedback from the images. This results in high acquisition speeds and proportionally low storage requirements. Experimentation has been carried out, and results are given. Correspondence to: M.J. Barth (e-mail: barth@ee.ucr.edu)

3.
To achieve continuous scanning with a structured-light vision system based on a projector and a camera, the spatial relationship between any light plane projected by the projector and the camera's image plane must be computed, which in turn requires the relative position between the camera's optical centre and the projector's optical centre. The camera's intrinsic parameters are first determined; four corner points on the calibration board are then selected as feature points, and their extrinsic parameters are computed from the intrinsic parameters, giving the coordinates of the four feature points in the camera coordinate system. The coordinates of the feature points in the projector coordinate system are obtained from the projector's own parameters, from which the relative position between the camera and projector optical centres is computed, completing the structured-light vision calibration. Using the calibrated vision system, distances between corner points on the calibration board were measured with a maximum relative error of 0.277%, showing that the calibration algorithm is applicable to structured-light vision systems based on a projector and a camera.
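As a rough illustration of the extrinsic step described above, the following sketch (hypothetical intrinsics, board geometry, and image points, not the authors' implementation) recovers the pose of four coplanar board corners in the camera frame with OpenCV's solvePnP:

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics and four coplanar corner points on the
# calibration board (board coordinates in mm); the image points are assumed
# to have been extracted from the camera image beforehand.
K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 480.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
board_pts = np.array([[0, 0, 0], [60, 0, 0], [60, 60, 0], [0, 60, 0]], dtype=np.float64)
img_pts = np.array([[512.3, 388.1], [804.7, 390.5], [806.2, 683.0], [510.9, 680.4]])

ok, rvec, tvec = cv2.solvePnP(board_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
# Coordinates of the four feature points expressed in the camera frame:
pts_cam = (R @ board_pts.T + tvec).T
print(pts_cam)
```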

4.
A new approach to camera calibration using vanishing line information for three-dimensional computer vision is proposed. Calibrated parameters include the orientation, the position, the focal length, and the image plane center of a camera. A rectangular parallelepiped is employed as the calibration target to generate three principal vanishing points and then three vanishing lines from the projected image of the parallelepiped. Only a monocular image is required for solving these camera parameters. It is shown that the image plane center is the orthocenter of a triangle formed by the three vanishing lines. From the slopes of the vanishing lines the camera orientation parameters can be determined. The focal length can be computed by the area of the triangle. The camera position parameters can then be calibrated by using related geometric projective relationships. The derived results show the geometric meanings of these camera parameters. The calibration formulas are analytic and simple to compute. Experimental results show the feasibility of the proposed approach for a practical application: autonomous land vehicle guidance. This work was supported by National Science Council, Republic of China under Grant NSC-77-0404-E-009-31.
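A minimal sketch of the geometric relations stated in the abstract, using a synthetic camera (assumed focal length 1000 px and centre (640, 480)) rather than real data: the image plane centre is recovered as the orthocenter of the vanishing-point triangle, and the focal length from one orthogonal pair of vanishing points:

```python
import numpy as np
import cv2

# Synthesise the three principal vanishing points of a box seen by a
# hypothetical camera, then recover the image plane centre and focal length.
f_true, cx, cy = 1000.0, 640.0, 480.0
K = np.array([[f_true, 0, cx], [0, f_true, cy], [0, 0, 1.0]])
R, _ = cv2.Rodrigues(np.array([0.3, 0.5, 0.1]))   # arbitrary box orientation
vps = []
for i in range(3):
    h = K @ R[:, i]                                # vanishing point of i-th box edge direction
    vps.append(h[:2] / h[2])
v1, v2, v3 = vps

def orthocenter(a, b, c):
    # Altitude from a is perpendicular to (c - b); intersect two altitudes.
    n1, n2 = c - b, c - a
    A = np.array([n1, n2])
    rhs = np.array([n1 @ a, n2 @ b])
    return np.linalg.solve(A, rhs)

p = orthocenter(v1, v2, v3)                 # recovered image plane centre
f = np.sqrt(-np.dot(v1 - p, v2 - p))        # focal length from one orthogonal pair
print(p, f)                                 # ~ (640, 480) and ~1000
```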

5.
Camera calibration is a fundamental process in both photogrammetry and computer vision. Since the arrival of the direct linear transformation method and its later revisions, new methods have been developed by several authors, such as Tsai, Heikkilä and Zhang. Most of these have been based on the pinhole model, including distortion correction. Some of these methods, such as Tsai's method, allow the use of two different techniques for determining calibration parameters: a non-coplanar calibration technique using three-dimensional (3D) calibration objects, and a coplanar technique that uses two-dimensional (2D) calibration objects. The calibration performed by observing a 3D calibration object has good accuracy, and produces very efficient results; however, the calibration object must be accurate enough and requires an elaborate configuration. In contrast, the use of 2D calibration objects yields less accurate results, is much more flexible, and does not require complex calibration objects that are costly to produce. This article compares these two calibration procedures from the perspective of stereo measurement. Particular attention was focused on the accuracy of the calculated camera parameters, the reconstruction error in the computer image coordinates and in the world coordinate system, and advanced image-processing techniques for subpixel detection during the comparison. The purpose of this work is to establish a basis and selection criteria for choosing one of these techniques for camera calibration, according to the accuracy required in each of the many applications using photogrammetric vision: robot calibration methods, trajectory generation algorithms, articulated measuring arm calibration, and photogrammetric systems.

6.
Techniques are described for calibrating certain intrinsic camera parameters for machine vision. The parameters to be calibrated are the horizontal scale factor and the image center. The scale factor calibration uses a one-dimensional fast Fourier transform and is accurate and efficient. It also permits the use of only one coplanar set of calibration points for general camera calibration. Three groups of techniques for center calibration are presented: Group I requires using a laser and a four-degree-of-freedom adjustment of its orientation, but is simplest in concept and is accurate and reproducible; Group II is simple to perform, but is less accurate than the other two; and the most general, Group III, is accurate and efficient, but requires a good calibration plate and accurate image feature extraction of calibration points. Group III is recommended most highly for machine vision applications. Results of experiments are presented and compared with theoretical predictions. Accuracy and reproducibility of the calibrated parameters are reported, as well as the improvement in actual 3-D measurement due to center calibration.

7.
Standard camera and projector calibration techniques use a checkerboard that is manually shown at different poses to determine the calibration parameters. Furthermore, when image geometric correction must be performed on a three‐dimensional (3D) surface, such as projection mapping, the surface geometry must be determined. Camera calibration and 3D surface estimation can be costly, error prone, and time‐consuming when performed manually. To address this issue, we use an auto‐calibration technique that projects a series of Gray code structured light patterns. These patterns are captured by the camera to build a dense pixel correspondence between the projector and camera, which are used to calibrate the stereo system using an objective function, which embeds the calibration parameters together with the undistorted points. Minimization is carried out by a greedy algorithm that minimizes the cost at each iteration with respect to both calibration parameters and noisy image points. We test the auto‐calibration on different scenes and show that the results closely match a manual calibration of the system. We show that this technique can be used to build a 3D model of the scene, which in turn with the dense pixel correspondence can be used for geometric screen correction on any arbitrary surface.
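For illustration, a short sketch (assumed projector resolution and pattern layout, not the authors' code) of generating the column-coding Gray code stripe patterns used to build the dense projector-camera correspondence:

```python
import numpy as np

# Hypothetical projector resolution; one stripe image per Gray code bit.
proj_w, proj_h = 1024, 768
n_bits = int(np.ceil(np.log2(proj_w)))

cols = np.arange(proj_w)
gray = cols ^ (cols >> 1)                       # binary-reflected Gray code per column

patterns = []
for b in range(n_bits - 1, -1, -1):             # MSB first
    stripe = ((gray >> b) & 1).astype(np.uint8) * 255
    patterns.append(np.tile(stripe, (proj_h, 1)))

# After projecting and capturing these patterns (typically with their inverses
# for robust thresholding), each camera pixel decodes to the projector column
# whose Gray code it observed, giving the dense pixel correspondence.
```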

8.
Modelling camera lens distortion is crucial to obtaining the best-performing camera model. Up to now, different techniques exist that try to minimize the calibration error using different lens distortion models or computing them in different ways. Some compute the lens distortion parameters within the camera calibration process, together with the intrinsic and extrinsic ones. Others isolate the lens distortion calibration without using any template, basing the calibration on the deformation, in the image, of features of objects in the scene, such as straight lines or circles. These lens distortion techniques that do not use any calibration template can be unstable if a complete camera lens distortion model is computed. They are named non-metric calibration or self-calibration methods. Traditionally, a camera has always been best calibrated with metric calibration rather than self-calibration. This paper proposes a metric calibration technique which computes the camera lens distortion in isolation from the camera calibration process under stable conditions, independently of the computed lens distortion model or the number of parameters. To make it easier to solve, this metric technique uses the same calibration template that will be used afterwards for the calibration process. Therefore, the best performance of the camera lens distortion calibration process is achieved, which is transferred directly to the camera calibration process.

9.
3-D position sensing using a passive monocular vision system
Passive monocular 3-D position sensing is made possible by a new calibration scheme that relates depth to focus blur through a composite lens and aperture model. The calibration technique enables the recovery of absolute 3-D position coordinates from image coordinates and measured focus blur. A geometric model of the camera's position and orientation in space is used to transform the camera's imaging coordinates into world coordinates. The relationship between the world coordinate system and the screen coordinate system, which includes the amount of focus blur, is developed by modeling the camera imaging arrangement. The modeling proceeds first through the perspective view from a pinhole camera located anywhere in space. The camera's lens and aperture system is investigated to find the relationship between depth and focus blur. The aspect ratio of the frame image is considered. Position accuracies comparable to those in stereo-based vision systems are possible without the need for solving the difficult point-correspondence problem.
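A minimal sketch of the thin-lens relation the calibration exploits, with illustrative lens values rather than the paper's calibration data: the blur-circle diameter grows with deviation from the focused depth, so a measured blur can be inverted back to depth:

```python
import numpy as np

# Illustrative lens parameters (assumed, not from the paper).
F = 0.050          # lens focal length [m]
D = 0.025          # aperture diameter [m]
u0 = 1.0           # depth the lens is focused on [m]
v0 = u0 * F / (u0 - F)              # sensor distance for that depth (thin lens)

def blur_diameter(u):
    """Blur-circle diameter on the sensor for a point at depth u."""
    v = u * F / (u - F)             # where a point at depth u would focus
    return D * abs(v0 - v) / v      # similar triangles through the aperture

# Invert numerically: given a measured blur (and knowing which side of the
# focused plane the object lies on), search for the matching depth.
depths = np.linspace(1.05, 3.0, 2000)
measured = blur_diameter(2.0)
blurs = np.array([blur_diameter(u) for u in depths])
recovered = depths[np.argmin(np.abs(blurs - measured))]
print(recovered)   # ~2.0 m
```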

10.
An active-vision self-calibration algorithm accounting for second-order radial distortion
Camera self-calibration based on active vision is an important branch of camera calibration. Images captured by ordinary CCD cameras exhibit various types of geometric distortion, of which radial distortion is the most severe, so studying self-calibration techniques that account for radial distortion is of practical significance. To make the calibration more accurate, a self-calibration method for the intrinsic parameters that considers second-order radial distortion is proposed. By deriving the epipolar constraint with second-order radial distortion taken into account, it is shown that if the camera can be controlled to perform four translational motions that do not lie in the same plane, the intrinsic parameters and the second-order radial distortion coefficient can be calibrated. Simulation results show that the algorithm is highly accurate and reasonably robust, and can be used for camera calibration.
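For reference, a small sketch (hypothetical coefficients, not the calibrated values) of the radial distortion model referred to above, keeping two polynomial terms in r²:

```python
import numpy as np

# Hypothetical intrinsics and radial distortion coefficients.
k1, k2 = -0.25, 0.08
fx, fy, cx, cy = 900.0, 900.0, 320.0, 240.0

def distort(u, v):
    """Apply the radial distortion model to an ideal pixel location."""
    x, y = (u - cx) / fx, (v - cy) / fy        # normalised image coordinates
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2      # radial displacement polynomial
    return cx + fx * x * factor, cy + fy * y * factor

print(distort(600.0, 100.0))
```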

11.
Pose refinement is an essential task for computer vision systems that require the calibration and verification of model and camera parameters. Typical domains include the real-time tracking of objects and verification in model-based recognition systems. A technique is presented for recovering model and camera parameters of 3D objects from a single two-dimensional image. This basic problem is further complicated by the incorporation of simple bounds on the model and camera parameters and linear constraints restricting some subset of object parameters to a specific relationship. It is demonstrated in this paper that this constrained pose refinement formulation is no more difficult than the original problem, based on numerical analysis techniques including active set methods and Lagrange multiplier analysis. A number of bounded and linearly constrained parametric models are tested, and convergence to proper values occurs from a wide range of initial error, utilizing minimal matching information (relative to the number of parameters and components). The ability to recover model parameters in a constrained search space will thus simplify associated object recognition problems.

12.
Using vanishing points for camera calibration
In this article a new method for the calibration of a vision system which consists of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector which describe the rigid motion between the coordinate systems fixed in the two cameras, are estimated from a stereo pair of images of a suitable planar pattern. First, by matching the corresponding vanishing points in the two images, the rotation matrix is computed; then the translation vector is estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequences.
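A minimal sketch (made-up matched vanishing points and a shared assumed intrinsic matrix, not the paper's data) of the rotation step: matched vanishing points are back-projected to viewing directions in each camera, and the rotation best aligning them is recovered by an SVD-based orthogonal Procrustes fit:

```python
import numpy as np

# Assumed common intrinsic matrix for both cameras.
K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])

def directions(K, vps):
    d = np.linalg.inv(K) @ np.array(vps).T      # back-project vanishing points
    return d / np.linalg.norm(d, axis=0)        # unit viewing directions, one per column

# Matched vanishing points (homogeneous pixel coordinates), made up for illustration.
vps_left = [[900.0, 300.0, 1.0], [-100.0, 260.0, 1.0], [350.0, -700.0, 1.0]]
vps_right = [[820.0, 310.0, 1.0], [-60.0, 250.0, 1.0], [330.0, -650.0, 1.0]]
A, B = directions(K, vps_left), directions(K, vps_right)

# Best-fit rotation R with R @ A ~ B (orthogonal Procrustes).
U, _, Vt = np.linalg.svd(B @ A.T)
R = U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt
print(R)
```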

13.
Inspired by Zhang's work on flexible calibration techniques, a new easy technique for calibrating a camera based on circular points is proposed. The proposed technique only requires the camera to observe a newly designed planar calibration pattern (referred to as the model plane hereinafter), which includes a circle and a pencil of lines passing through the circle's center, at a few (at least three) different unknown orientations; all five intrinsic parameters can then be determined linearly. The main advantage of our new technique is that it needs to know neither any metric measurement on the model plane, nor the correspondences between points on the model plane and image ones, hence the whole calibration process becomes extremely simple. The proposed technique is particularly useful for those people who are not familiar with computer vision. Experiments with simulated data as well as with real images show that our new technique is robust and accurate.

14.
In contrast with current mainstream CCD camera calibration techniques, a camera calibration method based on vanishing points is adopted. The method rests on the dual-space geometry concept from projective geometry and makes full use of the inherent geometric relationships and properties of the points, lines, and planes in images of a 2D checkerboard calibration pattern. With improved corner extraction and computation, the horizontal and vertical focal lengths among the intrinsic parameters are computed first, and the parameters are then refined with a nonlinear optimization method, yielding all intrinsic parameters of the camera to be calibrated. The method is simple to carry out and exhibits small experimental error.

15.
In this paper, we show how an active binocular head, the IIS head, can be easily calibrated with very high accuracy. Our calibration method can also be applied to many other binocular heads. In addition to the proposal and demonstration of a four-stage calibration process, there are three major contributions in this paper. First, we propose a motorized-focus lens (MFL) camera model which assumes constant nominal extrinsic parameters. The advantage of having constant extrinsic parameters is a simple head/eye relation. Second, a calibration method for the MFL camera model is proposed, which separates estimation of the image center and effective focal length from estimation of the camera orientation and position. This separation has been proved to be crucial; otherwise, estimates of camera parameters would be very noise-sensitive. Third, we show that, once the parameters of the MFL camera model are calibrated, a nonlinear recursive least-square estimator can be used to refine all 35 kinematic parameters. Real experiments have shown that the proposed method can achieve accuracy of one pixel prediction error and 0.2 pixel epipolar error, even when all the joints, including the left and right focus motors, are moved simultaneously. This accuracy is good enough for many 3D vision applications, such as navigation, object tracking and reconstruction.

16.
A simple coplanar-based camera parameter calibration method
Coplanar camera calibration is the process of determining the intrinsic and extrinsic parameters of a camera using a planar template, with the image pixels and the 2D feature points known. A simple and effective method is proposed for coplanar calibration of the camera parameters, calibrating directly from the computer array image in the frame buffer. The image centre is first determined by a pre-calibration step; the remaining parameters are then solved from the correspondence between frame-buffer image coordinates and world coordinates using the orthonormality constraints of the rotation matrix. The algorithm assumes a scale factor of 1 and does not consider lens distortion. The proposed algorithm was verified with both synthetic and real images; the results show good accuracy, making it a simple and effective calibration method.
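As a rough sketch of coplanar calibration of the extrinsic parameters (assuming known intrinsics, unit scale factor, no lens distortion, and synthetic point data, so not the exact algorithm of the paper), the plane-to-image homography can be decomposed using the orthonormality constraints of the rotation columns:

```python
import numpy as np
import cv2

# Assumed intrinsics and synthetic correspondences between points on the
# planar template (Z = 0) and their image projections.
K = np.array([[1000.0, 0, 320.0], [0, 1000.0, 240.0], [0, 0, 1.0]])
world = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=np.float64)
image = np.array([[400.1, 300.2], [650.3, 305.7], [655.8, 560.4],
                  [398.2, 555.1], [526.0, 430.0]])

H, _ = cv2.findHomography(world, image)
h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
Kinv = np.linalg.inv(K)
lam = 1.0 / np.linalg.norm(Kinv @ h1)       # scale fixed by the unit-norm constraint
r1, r2 = lam * Kinv @ h1, lam * Kinv @ h2   # first two rotation columns
r3 = np.cross(r1, r2)                       # orthonormality gives the third column
t = lam * Kinv @ h3
R = np.column_stack([r1, r2, r3])
print(R, t)
```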

17.
Camera calibration is the first step of three-dimensional machine vision. A fundamental parameter to be calibrated is the position of the camera projection center with respect to the image plane. This paper presents a method for the computation of the projection center position using images of a translating rigid object, taken by the camera itself.

Many works have been proposed in the literature to solve the calibration problem, but this method has several desirable features. The projection center position is computed directly, independently of all other camera parameters. The dimensions and position of the object used for calibration can be completely unknown.

This method is based on a geometric relation between the projection center and the focus of expansion. The use of this property enables the problem to be split into two parts. First a suitable number of foci of expansion are computed from the images of the translating object. Then the foci of expansion are taken as landmarks to build a spatial back triangulation problem, the solution of which gives the projection center position.


18.
赵为民  唐俊 《微机发展》2003,13(1):16-17,20
Vanishing points and vanishing lines play an extremely important role in 3D computer vision. Exploiting the parallel and orthogonal lines commonly found in a scene, the relationships among other geometric structures in the scene are measured via the image of the absolute conic and the computation of vanishing points; the method requires no prior camera calibration. Experiments show that the results obtained from a single image serve as a good estimate, while those obtained from two images are more accurate, so the method has practical value for image-based measurement.

19.
Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed such that both central and non-central cameras can be calibrated within the same framework. Consequently, existing parametric calibration techniques cannot be applied for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes improvements to the standard generic calibration method for central cameras that reduce its complexity, and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection in order to enable the application of established pinhole calibration techniques. Input data for the algorithm is acquired using active grids, the performance of which is characterised. A novel linear estimation stage is proposed that enables a well established pinhole calibration technique to be used to estimate the camera centre and initial grid poses. The proposed solution is shown to be more accurate than the linear estimation stage of the standard method. A linear alternative to the existing polynomial method for estimating the pose of additional grids used in the calibration is demonstrated and evaluated. Distortion correction experiments are conducted with real data for both an omnidirectional camera and a fisheye camera using the standard and proposed methods. Motion reconstruction experiments are also undertaken for the omnidirectional camera. Results show the accuracy and robustness of the proposed method to be improved over those of the standard method.

20.
Objective: Traditional monocular depth-measurement methods have the advantages of simple equipment, low cost, and fast computation, but they require complex camera calibration and are only applicable in specific scenes. A depth-measurement method based on the motion-parallax cue is therefore proposed: feature points are extracted from the images, and the measurement is obtained from the relationship between the feature points and the image depth. Method: The two images are first segmented to obtain the region containing the object to be measured. An improved scale-invariant feature transform (SIFT) algorithm proposed in this paper is then used to match the two images, and the matching result for the object is obtained by combining the image-matching and segmentation results. The convex hull of the matched feature points is computed with the Graham scan, and the length of the longest segment on the hull is extracted. Finally, the image depth is computed from the basic principles of camera imaging and triangle geometry. Results: Experiments show that the method improves both measurement accuracy and speed. When the object in the image is not occluded, the error between the actual and measured distances is 2.60% and the measurement takes 1.577 s; when the object is partially occluded, the method still performs well, with an error of 3.19% and a measurement time of 1.689 s. Conclusion: Estimating image depth from feature points in two images is robust to partial occlusion of the object and avoids a complex camera calibration procedure, giving the method practical value.
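For illustration only, a tiny sketch of the pinhole similar-triangles relation that ties a segment's physical length, its projected pixel length, and its depth; the numbers are made up and this is not necessarily the exact formula used in the paper:

```python
# Assumed quantities, for illustration of the pinhole relation Z = f * L / l.
f_pixels = 1200.0        # focal length expressed in pixels
L_metres = 0.30          # physical length of the longest convex-hull segment on the object
l_pixels = 180.0         # length of that segment measured in the image

Z = f_pixels * L_metres / l_pixels
print(Z)                 # ~2.0 m
```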
