Similar Documents
20 similar documents found.
1.
由晓龙  全厚德 《微计算机信息》2004,20(2):104-105,101
A simple camera parameter calibration method is implemented in MATLAB. The perspective projection matrix is first computed from the coordinates of four calibration points, from which the image coordinates of the remaining nodes on the calibration template are calculated; the intrinsic and extrinsic parameters are then computed. The method is easy to implement, fast, and fairly accurate, making it a simple and practical approach.
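As a rough illustration of the first step described above, the following Python sketch estimates a 3×3 projective mapping (homography) from four or more template-to-image point pairs and then predicts the image coordinates of other template nodes. The function names are hypothetical; this is not the paper's MATLAB code.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 projective mapping from >= 4 template points `src`
    to their image points `dst` via the direct linear transform (DLT).
    Both inputs are (N, 2) arrays."""
    A = []
    for (X, Y), (u, v) in zip(src, dst):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # Null-space solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Map template points through H to predicted image coordinates."""
    pts_h = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    q = pts_h @ H.T
    return q[:, :2] / q[:, 2:3]
```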

2.
This paper takes the image flows produced by a parallel stereo camera pair as input and recovers the 3-D motion parameters of an object by solving a linear system whose coefficients are formed from image velocity moments and the intra-image coordinates of the two images, without requiring point-to-point feature correspondences between the left and right image flow fields. Tests on synthetic image flow fields have demonstrated the feasibility of the algorithm as well as its power and speed.

3.
袁立行  郑南宁 《机器人》1997,19(3):197-201
This paper presents a method for computing the extrinsic camera parameters from spatial-invariant data. Spatial perspective invariants are shape descriptions that remain unchanged under geometric transformations such as projection or a change of viewpoint.

4.
The Rainbow 3D camera is a fast 3D-information acquisition method based on spectral analysis. The scene is illuminated with a continuously varying color spectrum, so the scene image captured by a color CCD camera exhibits a regular color variation, with different colors forming different spatial color planes. By calibrating these color planes and the camera imaging model, the 3D coordinates of every point in the image can be computed. This paper focuses on the calibration and color-classification techniques needed to implement the method, and concludes with experimental results.

5.
In the 8-point linear algorithm that recovers rigid 3D motion and structure from two perspective projections using 2D point correspondences, the essential parameter matrix E can be decomposed into the product of a skew-symmetric matrix S and a rotation matrix R; a decomposition of this form is called a rigid decomposition. Huang, T.S. and Faugeras, O.D. derived a necessary and sufficient condition for a 3×3 matrix to admit a rigid decomposition. This paper proves that the rigid decomposition of a 3×3 matrix has a duality property: if E = SR ≠ 0 with S = (s_ij)_{3×3}, then E has exactly one dual rigid decomposition E = (−S)R′, where r_i and r_i′ (i = 1, 2, 3) denote the three column vectors of R and R′ respectively. Simple formulas are given for computing the two dual rigid decompositions of a 3×3 matrix. A necessary and sufficient condition for an N×N (N ≥ 2) matrix to admit a rigid decomposition is obtained, and the duality of the rigid decomposition is shown to be peculiar to N = 2 or 3. Finally, applications of the results to motion analysis are given.
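Typeset for clarity, the duality statement of the abstract (in its own notation) reads:

```latex
E = S R \neq 0,\qquad S^{\mathsf{T}} = -S,\qquad R \in SO(3)
\;\Longrightarrow\;
\exists!\, R' \in SO(3)\ \text{such that}\ E = (-S)\,R' .
```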

6.
Correspondence-free estimation of 3-D motion parameters from binocular image flows (cited by 2: 0 self-citations, 2 by others)
杨敬安 《计算机学报》1995,18(11):849-857
This paper takes the image flows produced by a parallel stereo camera pair as input and recovers the 3-D motion parameters by solving a linear system whose coefficients are formed from image velocities and the intra-image coordinates of the two images, without requiring feature-point correspondences between the left and right image flow fields. Tests on synthetic image flow fields have demonstrated the feasibility of the algorithm as well as its power and speed.

7.
Camera calibration in optical motion capture (cited by 2: 0 self-citations, 2 by others)
张金剑  陈福民 《计算机应用》2004,24(Z1):178-179
Camera calibration is an indispensable part of optical motion capture; its role is to recover spatial 3D points from the 2D points captured by the cameras. Several cameras image an object carrying tracking markers, and calibration equations are set up from the correspondence between 3D coordinates in space and 2D coordinates in the camera images. These equations are transformed into linear equations that are relatively simple and fast to solve, finally yielding the six intrinsic camera parameters and the extrinsic parameters (rotation matrix and translation) used to recover 3D points from 2D points.
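A minimal sketch of the linear step described in this abstract, assuming at least six 3D marker positions and their 2D image projections; the recovery of the individual intrinsic and extrinsic parameters from the estimated matrix is only indicated in a comment. Names are illustrative, not the authors' code.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Linear (DLT) estimate of the 3x4 projection matrix P from
    3D marker positions and their 2D image projections (>= 6 points)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    # P = K [R | t]: intrinsics and rotation would follow from an RQ
    # decomposition of P[:, :3]; translation from K^-1 P[:, 3].
    return P
```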

8.
王岩  刘晓铭 《现代计算机》1999,(1):53-54,59
The fast 3D-information acquisition method based on spectral analysis illuminates the scene with a continuously varying color spectrum; the scene image captured by a color CCD camera then exhibits a regular color variation, with different colors forming different spatial color planes. By calibrating these color planes and using the camera imaging model, the 3D coordinates of every point in the image can be computed. This paper focuses on the calibration and color-classification techniques needed to implement the method.

9.
Conversion between the HVC color space and other color spaces for color images (cited by 2: 0 self-citations, 2 by others)
This paper describes in detail how the original R,G,B color space is converted, via the X,Y,Z and L,a,b spaces, into the H,V,C color space best suited to color image processing. Experiments show that the forward and inverse transforms of a 512×512×8 color image can be completed within 10 s with a mean-square error below 2%, and applying the conversion to color image processing and segmentation has produced very good results.
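The first two stages of the conversion chain described above can be sketched as follows, using the standard sRGB/D65 matrix and the CIE Lab formulas; the final Lab-to-HVC (Munsell-style) step is specific to the paper and is not reproduced here.

```python
import numpy as np

# Linear RGB (sRGB primaries, D65 white) -> CIE XYZ.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
WHITE_D65 = RGB_TO_XYZ @ np.ones(3)  # XYZ of the reference white

def rgb_to_lab(rgb):
    """Convert linear RGB values in [0, 1], shape (..., 3), to CIE L*a*b*."""
    xyz = np.asarray(rgb, dtype=float) @ RGB_TO_XYZ.T
    t = xyz / WHITE_D65
    eps = (6.0 / 29.0) ** 3
    f = np.where(t > eps, np.cbrt(t), t / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```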

10.
It is pointed out that the observation errors in the camera calibration problem are well described by the unknown-but-bounded-error (UBBE) model, so the calibration problem can be solved with set-membership uncertainty (SMU) estimation. For an intrinsic camera parameter calibration problem studied in the literature, a new algorithm that calibrates the parameters with SMU estimation is proposed. Simulation results show that the new algorithm not only provides, together with the calibration result, a guaranteed upper bound on the calibration error, but also achieves good calibration accuracy, which makes it of practical value.

11.
Camera model and its calibration are required in many applications for coordinate conversions between the two-dimensional image and the real three-dimensional world. The self-calibration method is usually chosen for camera calibration in uncontrolled environments because the scene geometry could be unknown. However, when no reliable feature correspondences can be established or when the camera is static in relation to the majority of the scene, self-calibration fails to work. On the other hand, object-based calibration methods are more reliable than self-calibration methods due to the existence of the object with known geometry. However, most object-based calibration methods are unable to work in uncontrolled environments because they require geometric knowledge of the calibration objects. Though in the past few years the simplest geometry required for a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirement in the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. Thus, it would be very worthwhile to investigate an object-based method based on such a simple object so that it would still be possible to calibrate a camera when both self-calibration and existing object-based calibration fail to work. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane which is fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated using the proposed method. The proposed method has been tested on simulated data and real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration objects are available, e.g. from a human walking sequence. Very accurate camera calibration results have been achieved using the proposed method.

12.
This paper addresses the problem of recovering both the intrinsic and extrinsic parameters of a camera from the silhouettes of an object in a turntable sequence. Previous silhouette-based approaches have exploited correspondences induced by epipolar tangents to estimate the image invariants under turntable motion and achieved a weak calibration of the cameras. It is known that the fundamental matrix relating any two views in a turntable sequence can be expressed explicitly in terms of the image invariants, the rotation angle, and a fixed scalar. It will be shown that the imaged circular points for the turntable plane can also be formulated in terms of the same image invariants and fixed scalar. This allows the imaged circular points to be recovered directly from the estimated image invariants, and provide constraints for the estimation of the imaged absolute conic. The camera calibration matrix can thus be recovered. A robust method for estimating the fixed scalar from image triplets is introduced, and a method for recovering the rotation angles using the estimated imaged circular points and epipoles is presented. Using the estimated camera intrinsics and extrinsics, a Euclidean reconstruction can be obtained. Experimental results on real data sequences are presented, which demonstrate the high precision achieved by the proposed method.

13.
A camera calibration method based on non-metric distortion correction (cited by 4: 0 self-citations, 4 by others)
A camera calibration method based on non-metric distortion correction is designed. Lens distortion is corrected with a single-parameter division model: exploiting the fact that perspective projection preserves collinearity of straight lines, the distortion-model coefficient and the principal point are calibrated by Levenberg-Marquardt (LM) optimization, and the image points are then corrected so that they obey the pinhole mapping. The remaining parameters are solved linearly from the two basic equations of the intrinsic parameters. Experiments show that the method is robust in the non-metric calibration stage and, compared with Zhang's calibration method, it can calibrate from a single image of the target, avoids coupling the intrinsic and extrinsic model parameters, and improves calibration efficiency.
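For illustration, a sketch of the single-parameter division model used in the distortion-correction step, with a hypothetical helper name; the paper additionally estimates the coefficient and the principal point by LM optimization so that imaged straight lines remain straight, which is not shown here.

```python
import numpy as np

def undistort_division_model(pts, center, lam):
    """Correct lens distortion with the single-parameter division model:
    p_u = c + (p_d - c) / (1 + lam * r_d^2), where r_d = ||p_d - c||.
    `pts` is an (N, 2) array of distorted points, `center` the distortion
    centre (here taken as the principal point), `lam` the model coefficient."""
    center = np.asarray(center, dtype=float)
    d = np.asarray(pts, dtype=float) - center
    r2 = np.sum(d * d, axis=1, keepdims=True)
    return center + d / (1.0 + lam * r2)
```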

14.
Stereovision is an effective technique that uses CCD video cameras to determine the 3D position of a target object from two or more simultaneous views of the scene. Camera calibration is a central issue in finding the position of objects in a stereovision system. This is usually carried out by calibrating each camera independently, and then applying a geometric transformation of the external parameters to find the geometry of the stereo setting. After calibration, the distance of various target objects in the scene can be calculated with CCD video cameras, and recovering the 3D structure from 2D images becomes simpler. However, the process of camera calibration is complicated. Based on the ideal pinhole model of a camera, we describe formulas to calculate intrinsic parameters that specify the correct camera characteristics, and extrinsic parameters that describe the spatial relationship between the camera and the world coordinate system. A simple camera calibration method for our CCD video cameras and corresponding experiment results are also given. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
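For reference, a textbook formulation of the ideal pinhole model mentioned above, with intrinsic matrix K (focal lengths, skew, principal point) and extrinsic pose (R, t); this is the standard form and not necessarily the exact parametrization used by the authors.

```latex
s\begin{pmatrix}u\\ v\\ 1\end{pmatrix}
= K\,[\,R \mid t\,]\begin{pmatrix}X\\ Y\\ Z\\ 1\end{pmatrix},
\qquad
K=\begin{pmatrix}f_x & \gamma & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{pmatrix}.
```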

15.
Linear or 1D cameras are used in several areas such as industrial inspection and satellite imagery. Since 1D cameras consist of a linear sensor, a motion (usually perpendicular to the sensor orientation) is performed in order to acquire a full image. In this paper, we present a novel linear method to estimate the intrinsic and extrinsic parameters of a 1D camera using a planar object. As opposed to traditional calibration schemes based on 3D-2D correspondences of landmarks, our method uses homographies induced by the images of a planar object. The proposed algorithm is linear, simple and produces good results as shown by our experiments.

16.
A new visual measurement method is proposed to estimate the three-dimensional (3D) position of an object on the floor based on a single camera. The camera fixed on a robot is in an inclined position with respect to the floor. A measurement model with the camera's extrinsic parameters such as the height and pitch angle is described. A single image of a chessboard pattern placed on the floor is enough to calibrate the camera's extrinsic parameters after the camera's intrinsic parameters are calibrated. Then the position of an object on the floor can be computed with the measurement model. Furthermore, the height of an object can be calculated from paired points on a vertical line sharing the same position on the floor. Compared to conventional methods used to estimate positions on the plane, this method can obtain 3D positions. An indoor experiment verifies the accuracy and validity of the proposed method.
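A minimal Python sketch of such a measurement model, assuming a calibrated camera mounted at a known height above the floor and pitched down by a known angle; the coordinate conventions and the function name are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def pixel_to_floor(u, v, K, height, pitch):
    """Back-project pixel (u, v) onto the floor plane z = 0.

    Assumes the camera sits at (0, 0, height), looks forward along world +y,
    and is pitched down by `pitch` radians; K is the 3x3 intrinsic matrix."""
    # Viewing ray in camera coordinates (x right, y down, z forward).
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])

    # Camera-to-world rotation for a camera pitched down by `pitch`:
    # columns are the camera x, y, z axes expressed in world coordinates.
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,  -s,   c],
                  [0.0,  -c,  -s]])
    d_world = R @ d_cam

    # Intersect the ray from the camera centre with the floor plane z = 0.
    t = -height / d_world[2]
    if t <= 0:
        raise ValueError("pixel ray does not hit the floor in front of the camera")
    point = np.array([0.0, 0.0, height]) + t * d_world
    return point[:2]  # (x, y) position on the floor
```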

17.
Traditional camera calibration methods usually require a complex 3D calibration block or a high-precision 3D control field, which limits their practical use. This paper uses a planar control grid as the calibration target. The interior orientation elements are determined from the ideal camera model, and initial values of the exterior orientation elements are decomposed from a 2D direct linear transformation and the collinearity equations. An improved Hough transform algorithm detects the grid lines in the calibration images, the best-fitting lines are obtained by least squares, and the image coordinates of the grid points are found as the line intersections. Finally, the camera is precisely calibrated with a self-calibrating bundle adjustment. Experiments on real image data show that the calibration accuracy of the principal point and the focal length reaches about 0.2 pixel and 0.3 pixel respectively, satisfying the requirements of high-precision close-range 3D measurement.
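A generic sketch of the line-detection and intersection step, using plain OpenCV HoughLines rather than the paper's improved Hough variant, and omitting the least-squares line refinement and the bundle adjustment.

```python
import cv2
import numpy as np

def grid_points(gray):
    """Detect grid lines and return their pairwise intersections as
    candidate image coordinates of the calibration grid points."""
    edges = cv2.Canny(gray, 50, 150)
    found = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    lines = [l[0] for l in found] if found is not None else []

    points = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i], lines[j]
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) < 1e-6:  # nearly parallel lines
                continue
            points.append(np.linalg.solve(A, [r1, r2]))
    return np.array(points)
```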

18.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms aiming at reconstructing reliable representations of 3D scenes, the camera distortion effects will lead to inaccurate 3D reconstructions and geometrical measurements if not accounted for. A second problem is the color calibration problem caused by variations in camera responses that result in different color measurements and affects the algorithms that depend on these measurements. We also address the extrinsic camera calibration that estimates relative poses and orientations of multiple cameras in the system and the intrinsic camera calibration that estimates focal lengths and the skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered as a coordinated refinement of camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge on the calibration object, making a piecewise smooth surface assumption, and evolve the pose, orientation, and scale parameters of such a 3D model object without requiring a 2D feature extraction from camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, the extrinsic and intrinsic parameters of the cameras, and present experimental results.

19.
A simple calibration method for catadioptric cameras (cited by 3: 0 self-citations, 3 by others)
Central catadioptric cameras are widely used in virtual reality and robot navigation, and the camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras with a 2D calibration pattern. Firstly, the bounding ellipse of the catadioptric image and field of view (FOV) are used to obtain the initial estimation of the intrinsic parameters. Then, the explicit relationship between the central catadioptric and the pinhole model is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not need any fitting of partial visible conic, and the projected images of 2D calibration pattern can easily cover the whole image, so our method is easy and robust. Experiments with simulated data as well as real images show the satisfactory performance of our proposed calibration method.

20.
Central catadioptric cameras are widely used in virtual reality and robot navigation, and the camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras with a 2D calibration pattern. Firstly, the bounding ellipse of the catadioptric image and field of view (FOV) are used to obtain the initial estimation of the intrinsic parameters. Then, the explicit relationship between the central catadioptric and the pinhole model is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not need any fitting of partial visible conic, and the projected images of 2D calibration pattern can easily cover the whole image, so our method is easy and robust. Experiments with simulated data as well as real images show the satisfactory performance of our proposed calibration method.
