Similar Documents
20 similar documents found
1.
Calibration of a binocular stereo vision sensor based on a coplanar target   Cited by: 1 (self-citations: 0, others: 1)
Based on the characteristics of binocular stereo vision sensors, a calibration method using a coplanar target is proposed. Without any external measuring equipment, the target is moved freely in a plane and the image coordinates of its calibration points are acquired. Using an intrinsic-parameter calibration model similar to that proposed by Dr. Janne Heikkilä and Dr. Olli Silvén, the intrinsic parameters of the two cameras are calibrated with a combination of linear and nonlinear methods. Experimental results show that although the calibrated parameters differ somewhat from the factory values, they are consistent with the actual situation, and the calibration procedure is simple and practical.

2.
A camera parameter calibration method based on binocular disparity theory   Cited by: 3 (self-citations: 1, others: 2)
To address target acquisition for longan- and lychee-picking manipulators, this paper presents an experimental camera calibration system based on binocular stereo vision. The system uses a calibration model that obtains depth information from binocular disparity. By capturing and analysing images of known feature points on a calibration board placed at different distances, the optimal baseline between the two cameras was determined. On this basis, the theory that the depth of the picking target is related to the focal length of the cameras was further verified and a reasonable focal length was obtained, providing a theoretical basis for positioning the picking manipulator.
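The abstract above recovers depth from binocular disparity given the focal length and baseline. As a minimal sketch of that relation, assuming an ideal rectified pinhole stereo pair (all names here are illustrative, not from the paper):

```python
import numpy as np

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px:     focal length in pixels (assumed identical for both cameras)
    baseline_m:   distance between the two optical centres, in metres
    disparity_px: horizontal pixel offset of the point between the two views
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# A point with larger disparity lies closer to the cameras.
z_near = depth_from_disparity(800.0, 0.12, 48.0)  # 2.0 m
z_far = depth_from_disparity(800.0, 0.12, 12.0)   # 8.0 m
```

The inverse dependence on disparity also explains why depth accuracy degrades for distant targets, which is why the paper tunes the baseline and focal length.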

3.
A mathematical model of a binocular stereo camera is established using the principle of perspective transformation, with full consideration of radial and tangential lens distortion, and a linear method for solving the camera parameters is proposed. This removes the traditional dependence of camera calibration on nonlinear optimization and avoids its instability. Building on the single-camera model, the method adds the determination of the relative position of the two cameras and, through the transformations between coordinate systems in the imaging process, achieves binocular stereo camera measurement sys...

4.
In order for a binocular head to perform optimal 3D tracking, it should be able to verge its cameras actively, while maintaining geometric calibration. In this work we introduce a calibration update procedure, which allows a robotic head to simultaneously fixate, track, and reconstruct a moving object in real-time. The update method is based on a mapping from motor-based to image-based estimates of the camera orientations, estimated in an offline stage. Following this, a fast online procedure is presented to update the calibration of an active binocular camera pair. The proposed approach is ideal for active vision applications because no image processing is needed at runtime to calibrate the system or to maintain the calibration parameters during camera vergence. We show that this homography-based technique allows an active binocular robot to fixate and track an object, whilst performing 3D reconstruction concurrently in real-time.

5.
From depth sensors to thermal cameras, the increased availability of camera sensors beyond the visible spectrum has created many exciting applications. Most of these applications require combining information from these hyperspectral cameras with a regular RGB camera. Information fusion from multiple heterogeneous cameras can be a very complex problem. The data can be fused at different levels from pixel to voxel or even semantic objects, with large variations in accuracy, communication, and computation costs. In this paper, we propose a system for robust segmentation of human figures in video sequences by fusing visible-light and thermal imagery. Our system focuses on the geometric transformation between visual blobs corresponding to human figures observed at both cameras. This approach provides the most reliable fusion at the expense of high computation and communication costs. To reduce the computational complexity of the geometric fusion, an efficient calibration procedure is first applied to rectify the two camera views without the complex procedure of estimating the intrinsic parameters of the cameras. To geometrically register different blobs at the pixel level, a blob-to-blob homography in the rectified domain is then computed in real-time by estimating the disparity for each blob-pair. Precise segmentation is finally achieved using a two-tier tracking algorithm and a unified background model. Our experimental results show that our proposed system provides significant improvements over existing schemes under various conditions.

6.
Research on camera calibration methods in flight testing   Cited by: 2 (self-citations: 0, others: 2)
胡丙华, 晏晖, 陈贝. 《测控技术》, 2013, 32(5): 134-137
With the development of digital camera and photogrammetry technology, more and more digital cameras are being used in flight testing, and camera calibration is one of the keys to their successful application in flight tests. To move beyond the current practice in flight testing of using only point features as control, make full use of existing equipment, and better solve the problem of real-time calibration of cameras mounted on an aircraft during flight, camera calibration is performed in two steps: interior calibration and real-time exterior calibration. An interior calibration method based on parallel straight lines is examined in detail, and the calibration model based on vanishing-point constraints and geometric line constraints is described; without control points, this method yields the interior orientation elements, distortion-correction coefficients, and exterior angular orientation elements of each camera. A real-time exterior calibration method based on single-image resection is also briefly introduced. Experimental results on real data show that the method is practical and yields accurate, stable parameters, effectively reducing the number of control points that must be laid out and thereby improving the feasibility of flight-test tasks such as precise measurement of missile trajectories and wing deformation.

7.
In this paper we present a method for the calibration of multiple cameras based on the extraction and use of the physical characteristics of a one-dimensional invariant pattern which is defined by four collinear markers. The advantages of this kind of pattern stand out in two key steps of the calibration process. In the initial step of camera calibration methods, related to sample-point capture, the proposed method takes advantage of a new technique for the capture and recognition of a robust sample of projective invariant patterns, which makes it possible to capture more than one invariant pattern simultaneously in the tracking area and to recognize each pattern individually, as well as each marker that composes it. This process is executed in real time while capturing our sample of calibration points in the cameras of our system. This new feature allows a larger and more robust set of sample points to be captured than with other patterns used in multi-camera calibration methods. In the last step of the calibration process, related to camera-parameter optimization, we exploit the collinearity of the invariant pattern and add this feature to the camera-parameter optimization model. This approach obtains better results in the computation of camera parameters. We present the results obtained with the calibration of two multi-camera systems using the proposed method and compare them with other methods from the literature.

8.
9.
To address camera calibration in a binocular stereo vision measurement system, an extrinsic-parameter calibration method based on a standard length is discussed. A perspective projection camera model is adopted, and the two cameras simultaneously capture multiple images of a cross-shaped target placed in the field of view, yielding a camera calibration method developed in LabVIEW. The method uses the LabVIEW development environment and its mathematical toolkit, combining a genetic algorithm with the LM algorithm to obtain the extrinsic camera parameters through iterative optimization, greatly improving both speed and accuracy. The developed module can be used for high-precision on-site dimensional measurement in engineering software built on LabVIEW. Based on the calibration results of the binocular stereo vision measurement system, a standard target was measured, and the standard deviation of the measurements reached 0.1.

10.
Central catadioptric cameras are widely used in virtual reality and robot navigation, and camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras with a 2D calibration pattern. Firstly, the bounding ellipse of the catadioptric image and the field of view (FOV) are used to obtain an initial estimate of the intrinsic parameters. Then, the explicit relationship between the central catadioptric and pinhole models is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not need any fitting of partially visible conics, and the projected images of the 2D calibration pattern can easily cover the whole image, so our method is easy and robust. Experiments with simulated data as well as real images show the satisfactory performance of our proposed calibration method.

11.
A simple calibration method for catadioptric cameras   Cited by: 3 (self-citations: 0, others: 3)

12.
Light field cameras are becoming popular in computer vision and graphics, with many research and commercial applications already having been proposed. Various types of cameras have been developed, the camera array being one way of acquiring a 4D light field image using multiple cameras. Camera calibration is essential, since each application requires the correct projection and ray geometry of the light field. The calibrated parameters are used when the light field image is rectified from the images captured by the multiple cameras. Various camera calibration approaches have been proposed for a single camera, multiple cameras, and a moving camera. However, although these approaches can be applied to calibrating camera arrays, they are not effective in terms of accuracy and computational cost. Moreover, less attention has been paid to camera calibration of a light field camera. In this paper, we propose a calibration method for a camera array and a rectification method for generating a light field image from the captured images. We propose a two-step algorithm consisting of closed-form initialization and nonlinear refinement, which extends Zhang's well-known method to the camera array. More importantly, we introduce a rigid camera constraint whereby the array of cameras is rigidly aligned in the camera array and utilize this constraint in our calibration. Using this constraint, we obtained much faster and more accurate calibration results in the experiments.
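The closed-form initialization mentioned above extends Zhang's method, whose basic building block is a homography estimated between a planar pattern and its image. A minimal direct-linear-transform (DLT) sketch of that building block, not the paper's implementation (all names are illustrative):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H @ src.

    src, dst: (N, 2) arrays of corresponding points, N >= 4,
              e.g. planar pattern corners and their image projections.
    Returns H scaled so that H[2, 2] == 1.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h is the null vector of A: the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

In a Zhang-style pipeline, one such homography per pattern view yields linear constraints on the intrinsics, and all parameters are then polished by nonlinear refinement, the second step the abstract describes.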

13.
A calibrated camera is essential for computer vision systems: the prime reason being that such a camera acts as an angle measuring device. Once the camera is calibrated, applications like three-dimensional reconstruction or metrology or other applications requiring real-world information from the video sequences can be envisioned. Motivated by this, we address the problem of calibrating multiple cameras, with an overlapping field of view, observing pedestrians walking on an uneven terrain. This problem of calibration on an uneven terrain has so far not been addressed in the vision community. We automatically estimate vertical and horizontal vanishing points by observing pedestrians in each camera and use the corresponding vanishing points to estimate the infinite homography existing between the different cameras. This homography provides constraints on intrinsic (or interior) camera parameters while also enabling us to estimate the extrinsic (or exterior) camera parameters. We test the proposed method on real as well as synthetic data, in addition to a motion-capture dataset, and compare our results with the state of the art.

14.
In three-dimensional face recognition, the binocular imaging system is a crucial part of the initial work. This paper introduces the PnP problem to discuss the camera calibration algorithm for a binocular imaging system, and proposes a unique solution to the P4P problem, i.e., determining the camera calibration; the method is simple and fast. Finally, the concrete calibration procedure is given.

15.
Inter-camera registration in multi-view systems with overlapped views has a particularly long and sophisticated research history within the computer vision community. Moreover, when applied to Distributed Video Coding in systems with at least one moving camera, it represents a real challenge, because the decoder needs data for generating the side information without any a priori knowledge of each camera's instantaneous position. This paper proposes a solution to this problem based on successive multi-view registration and motion-compensated extrapolation for on-the-fly re-correlation of two views at the decoder. This novel technique for side information generation is codec-independent, robust, and flexible with regard to any free motion of the cameras. Furthermore, it requires no additional information from the encoders, no communication between cameras, and no offline training stage. We also propose a metric for an objective assessment of the multi-view correlation performance.

16.
Camera networks are complex vision systems that become difficult to control as the number of sensors grows. With classic approaches, each camera has to be calibrated and synchronized individually. These tasks are often troublesome because of spatial constraints, and mostly due to the amount of information that needs to be processed. Cameras generally observe overlapping areas, leading to redundant information that is acquired, transmitted, stored, and processed. We propose in this paper a method to segment, cluster, and codify images acquired by the cameras of a network. The images are decomposed sequentially into layers where redundant information is discarded. Without the need for any calibration operation, each sensor contributes to building a global representation of the entire network environment. The information sent by the network is then represented by a reduced and compact amount of data using a codification process. This framework allows both scene structure and the topology of the network to be retrieved. It can also provide the localization and trajectories of mobile objects. Experiments present practical results for a network of 20 cameras observing a common scene.

17.
A camera mounted on an aerial vehicle provides an excellent means for monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras, to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a likelihood function for evaluating a hypothesized association between observations in multiple cameras that is geometrically motivated. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g. that transitive closure is maintained between more than two cameras. To ensure such coherency we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal assignment of association. Using the proposed error function, canonical trajectories of each object and optimal estimates of inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models, and through simulation quantitative performance is also reported.

18.
Camera calibration from surfaces of revolution   Cited by: 9 (self-citations: 0, others: 9)
This paper addresses the problem of calibrating a pinhole camera from images of a surface of revolution. Camera calibration is the process of determining the intrinsic or internal parameters (i.e., aspect ratio, focal length, and principal point) of a camera, and it is important for both motion estimation and metric reconstruction of 3D models. In this paper, a novel and simple calibration technique is introduced, which is based on exploiting the symmetry of images of surfaces of revolution. Traditional techniques for camera calibration involve taking images of some precisely machined calibration pattern (such as a calibration grid). The use of surfaces of revolution, which are commonly found in daily life (e.g., bowls and vases), makes the process easier as a result of the reduced cost and increased accessibility of the calibration objects. In this paper, it is shown that two images of a surface of revolution will provide enough information for determining the aspect ratio, focal length, and principal point of a camera with fixed intrinsic parameters. The algorithms presented in this paper have been implemented and tested with both synthetic and real data. Experimental results show that the camera calibration method presented is both practical and accurate.

19.
Some aspects of zoom lens camera calibration   Cited by: 3 (self-citations: 0, others: 3)
Zoom lens camera calibration is an important and difficult problem for at least two reasons. First, the intrinsic parameters of such a camera change over time, making it difficult to calibrate them on-line. Secondly, the pinhole model for a single-lens system cannot be applied directly to a zoom lens system. In this paper, we address some aspects of this problem, such as determining the principal point by zooming, modeling and calibration of lens distortion and focal length, as well as some practical aspects. Experimental results on calibrating cameras with computer-controlled zoom, focus, and aperture are presented.

20.
Stereo-pair images obtained from two cameras can be used to compute three-dimensional (3D) world coordinates of a point using triangulation. However, to apply this method, camera calibration parameters for each camera need to be experimentally obtained. Camera calibration is a rigorous experimental procedure in which typically 12 parameters are to be evaluated for each camera. The general camera model is often such that the system becomes nonlinear and requires good initial estimates to converge to a solution. We propose that, for stereo vision applications in which real-world coordinates are to be evaluated, artificial neural networks be used to train the system such that the need for camera calibration is eliminated. The training set for our neural network consists of a variety of stereo-pair images and corresponding 3D world coordinates. We present the results obtained on our prototype mobile robot that employs two cameras as its sole sensors and navigates through simple regular obstacles in a high-contrast environment. We observe that the percentage errors obtained from our set-up are comparable with those obtained through standard camera calibration techniques and that the system is accurate enough for most machine-vision applications.
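The triangulation this abstract refers to, i.e. the calibrated baseline the neural-network approach replaces, can be sketched with a standard linear (DLT) two-view method. This is a generic illustration, not the paper's method, and all names are hypothetical:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2:   3x4 camera projection matrices (intrinsics times pose)
    pt1, pt2: (x, y) pixel observations of the same world point
    Returns the 3D point in world coordinates.
    """
    # Each observation gives two linear constraints x*(p3.X) - p1.X = 0, etc.
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

This makes the paper's trade-off concrete: the DLT needs the 3x4 projection matrices from an explicit calibration step, whereas the network learns the pixel-to-world mapping directly from training pairs.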

