Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Simple and efficient method of calibrating a motorized zoom lens
In this work, three servo motors independently control the aperture, zoom, and focus of our zoom lens. Our goal is to calibrate efficiently the camera parameters for all possible configurations of lens settings. We use a calibration object suited to zoom-lens calibration to deal with the defocusing problem. Instead of calibrating the zoom lens with respect to all three lens settings simultaneously, we perform monofocal camera calibration adaptively over the ranges of the zoom and focus settings while fixing the aperture at a preset value. Bilinear interpolation provides the values of the camera parameters for those lens settings where no observations are taken. The adaptive strategy requires monofocal camera calibration only for the lens settings where the interpolated camera parameters are not accurate enough, and is hence referred to as the calibration-on-demand method. Our experiments show that the proposed calibration-on-demand method provides accurate camera parameters for all lens settings of a motorized zoom lens, even though camera calibration is performed only for a few sampled lens settings.
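The bilinear interpolation step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the grid of pre-calibrated (zoom, focus) settings and the parameter table are hypothetical, and a real system would hold one such table per camera parameter.

```python
import numpy as np

def interpolate_parameter(grid_zoom, grid_focus, param_table, zoom, focus):
    """Bilinearly interpolate a camera parameter (e.g. focal length)
    between the four nearest calibrated (zoom, focus) grid settings.
    grid_zoom and grid_focus are sorted 1-D arrays of sampled settings;
    param_table[i, j] is the calibrated value at (grid_zoom[i], grid_focus[j])."""
    # locate the grid cell containing (zoom, focus)
    i = int(np.clip(np.searchsorted(grid_zoom, zoom) - 1, 0, len(grid_zoom) - 2))
    j = int(np.clip(np.searchsorted(grid_focus, focus) - 1, 0, len(grid_focus) - 2))
    # normalized coordinates inside the cell
    tz = (zoom - grid_zoom[i]) / (grid_zoom[i + 1] - grid_zoom[i])
    tf = (focus - grid_focus[j]) / (grid_focus[j + 1] - grid_focus[j])
    # standard bilinear blend of the four corner values
    return ((1 - tz) * (1 - tf) * param_table[i, j]
            + tz * (1 - tf) * param_table[i + 1, j]
            + (1 - tz) * tf * param_table[i, j + 1]
            + tz * tf * param_table[i + 1, j + 1])
```

In the calibration-on-demand idea, a new monofocal calibration is requested only where this interpolated value disagrees with a measured check beyond tolerance.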

2.
Active vision sensors are increasingly being employed in vision systems for their greater flexibility. For example, vision sensors in hand-eye configurations with computer-controllable lenses (e.g., zoom lenses) can be set to values which satisfy the sensing situation at hand. For such applications, it is essential to determine the mapping between the parameters that can actually be controlled in a reconfigurable vision system (e.g., the robot arm pose, the zoom setting of the lens) and the higher-level viewpoint parameters that must be set to desired values (e.g., the viewpoint location, focal length). In this paper we present calibration techniques to determine this mapping. In addition, we discuss how to use these relationships to achieve the desired values of the viewpoint parameters by setting the controllable parameters to the appropriate values. The sensor setup considered consists of a camera in a hand-eye arrangement equipped with a lens that has zoom, focus, and aperture control. The calibration techniques are applied to the H6 × 12.5R Fujinon zoom lens, and the experimental results are shown.

3.
In this paper, we discuss the problem of estimating parameters of a calibration model for active pan–tilt–zoom cameras. The variation of the intrinsic parameters of each camera over its full range of zoom settings is estimated through a two-step procedure. We first determine the intrinsic parameters at the camera's lowest zoom setting very accurately by capturing an extended panorama. The camera intrinsics and radial distortion parameters are then determined at discrete steps in a monotonically increasing zoom sequence that spans the full zoom range of the camera. Our model incorporates the variation of radial distortion with camera zoom. Both calibration phases are fully automatic and do not assume any knowledge of the scene structure. High-resolution calibrated panoramic mosaics are also computed during this process. These fully calibrated panoramas are represented as multi-resolution pyramids of cube-maps. We describe a hierarchical approach for building multiple levels of detail in panoramas, by aligning hundreds of images captured within a 1–12× zoom range. Results are shown from datasets captured with two types of pan–tilt–zoom cameras placed in an uncontrolled outdoor environment. The estimated camera intrinsics model along with the cube-maps provides a calibration reference for images captured on the fly by the active pan–tilt–zoom camera under operation, making our approach promising for active camera network calibration.

4.
An approach to integrating stereo disparity, camera vergence, and lens focus to exploit their complementary strengths and weaknesses through active control of camera focus and orientations is presented. In addition, the aperture and zoom settings of the cameras are controlled. The result is an active vision system that dynamically and cooperatively interleaves image acquisition with surface estimation. A dense composite map of a single contiguous surface is synthesized by automatically scanning the surface and combining estimates of adjacent, local surface patches. This problem is formulated as one of minimizing a pair of objective functions. The first such function is concerned with the selection of a target for fixation. The second objective function guides the surface estimation process in the vicinity of the fixation point. Calibration parameters of the cameras are treated as variables during optimization, thus making camera calibration an integral, flexible component of surface estimation. An implementation of this method is described, and a performance evaluation of the system is presented. An average absolute error of less than 0.15% in estimated depth was achieved for a large surface having a depth of approximately 2 m.

5.
Since a PTZ (pan–tilt–zoom) camera can obtain multi-view-angle and multi-resolution information, a PTZ-stereo system using two PTZ cameras has much higher capability and flexibility than a traditional stereo system. In this paper, we propose a self-calibration framework to deal with the calibration of spherical rectification, which can be regarded as a kind of relative pose estimation, for a PTZ-stereo system. The goal of this calibration is to guarantee high performance of stereo rectification, so that stereo matching can be achieved more efficiently and accurately. In this framework, we assume the two PTZ cameras are fully calibrated, i.e., the focal length and the local camera orientation can be computed from given pan–tilt–zoom values. This approach, which is based on point matches, aims at finding uniformly distributed point matches in an iterative way. At each iteration, according to the distribution of previously used point matches, the system automatically guides the two cameras to move and collect a new match. Point matching is first performed at the lowest zoom setting (widest field of view). Once a candidate match is chosen, each camera is then controlled to zoom in on the corresponding point to get a refined match with high spatial resolution. The final match is added into the estimation to update the calibration parameters. Compared with previous research, the proposed framework has the following advantages: (1) Neither manual interaction nor a calibration object is needed; calibration samples (point matches) are added and removed in each stage automatically. (2) The distribution of calibration samples is as uniform as possible, so that biased estimation can be avoided to some extent. (3) The accuracy of calibration can be controlled and improved as the iteration proceeds. These advantages make the proposed framework more practicable in applications. Experimental results illustrate its accuracy.

6.
This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms aiming at reconstructing reliable representations of 3D scenes, the camera distortion effects will lead to inaccurate 3D reconstructions and geometrical measurements if not accounted for. A second problem is color calibration, caused by variations in camera responses that result in different color measurements and affect the algorithms that depend on these measurements. We also address the extrinsic camera calibration that estimates relative poses and orientations of multiple cameras in the system, and the intrinsic camera calibration that estimates focal lengths and the skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered a coordinated refinement of camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge of the calibration object, making a piecewise-smooth surface assumption, and evolve the pose, orientation, and scale parameters of such a 3D model object without requiring 2D feature extraction from camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.

7.
《Real》2000,6(6):433-448
In this paper, we present an overall algorithm for real-time camera parameter extraction, one of the key elements in implementing a virtual studio, together with a new method for calculating the lens distortion parameter in real time. In a virtual studio, the motion of the virtual camera generating the graphic studio must follow the motion of the real camera in order to produce a realistic video product. This requires calculating camera parameters in real time by analyzing the positions of feature points in the input video. Towards this goal, we first design a special calibration pattern utilizing the concept of the cross-ratio, which makes it easy to extract and identify feature points, so that we can calculate the camera parameters from the visible portion of the pattern in real time. It is important to consider lens distortion when zoom lenses are used, because it causes non-negligible errors in the computation of the camera parameters. However, the Tsai algorithm, adopted for camera calibration, calculates the lens distortion through nonlinear optimization in a triple parameter space, which is inappropriate for our real-time system. Thus, we propose a new linear method that calculates the lens distortion parameter independently and can be computed fast enough for our real-time application. We implement the whole algorithm using a Pentium PC and Matrox Genesis boards with five processing nodes in order to obtain a processing rate of 30 frames per second, the minimum requirement for TV broadcasting. Experimental results show this system can be used practically for realizing a virtual studio.
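The cross-ratio underlying such a calibration pattern is simple to state. A minimal sketch (scalar positions along a line are used for illustration; this is not the paper's actual pattern or code):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC) / (AD/BD) of four collinear points, given as
    scalar positions along their common line. It is preserved under any
    perspective projection, which is what lets pattern features be
    identified even when only part of the pattern is visible."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))
```

Because the value survives projection, four marks with a known cross-ratio can be recognized in the image regardless of the camera pose, which is the property the pattern design exploits.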

8.
Camera lens distortion modeling is crucial to obtaining the best-performing camera model. Various techniques exist that try to minimize the calibration error using different lens distortion models or by computing them in different ways. Some compute the lens distortion parameters in the camera calibration process together with the intrinsic and extrinsic ones. Others isolate the lens distortion calibration, without using any template, by basing the calibration on the deformation in the image of features of objects in the scene, such as straight lines or circles. These lens distortion techniques, which use no calibration template, can be unstable if a complete camera lens distortion model is computed; they are known as non-metric calibration or self-calibration methods. Traditionally, a camera has always been best calibrated by metric calibration rather than self-calibration. This paper proposes a metric calibration technique which computes the camera lens distortion in isolation from the camera calibration process, under stable conditions, independently of the chosen lens distortion model or the number of parameters. To make it easier to solve, this metric technique uses the same calibration template that will afterwards be used for the calibration process. The best performance of the camera lens distortion calibration is therefore achieved, and it is transferred directly to the camera calibration process.
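For reference, a common two-parameter radial distortion model — one standard polynomial form such techniques may compute, not necessarily the exact model used in the paper — looks like this:

```python
def distort_radial(x, y, k1, k2):
    """Apply a polynomial radial distortion model to normalized image
    coordinates, with the principal point assumed at the origin.
    k1 and k2 are the radial distortion coefficients being calibrated."""
    r2 = x * x + y * y                  # squared distance from the center
    s = 1.0 + k1 * r2 + k2 * r2 * r2    # radial scaling factor
    return x * s, y * s
```

Calibration in this setting amounts to estimating k1 and k2 (and possibly higher-order terms) so that the model reproduces the observed deformation of the template.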

9.
3-D position sensing using a passive monocular vision system
Passive monocular 3-D position sensing is made possible by a new calibration scheme that relates depth to focus blur through a composite lens and aperture model. The calibration technique enables the recovery of absolute 3-D position coordinates from image coordinates and measured focus blur. A geometric model of the camera's position and orientation in space is used to transform the camera's imaging coordinates into world coordinates. The relationship between the world coordinate system and the screen coordinate system, which includes the amount of focus blur, is developed by modeling the camera imaging arrangement. The modeling proceeds first through the perspective view from a pinhole camera located anywhere in space. The camera's lens and aperture system is then investigated to find the relationship between depth and focus blur. The aspect ratio of the frame image is considered. Position accuracies comparable to those of stereo-based vision systems are possible without the need to solve the difficult point-correspondence problem.
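The depth-to-blur relationship in such a lens-and-aperture model can be sketched with the thin-lens equation. This is an illustrative simplification with made-up numbers; the paper's composite model is more detailed:

```python
def blur_diameter(u, f, aperture, sensor_dist):
    """Thin-lens blur-circle diameter for a scene point at depth u, given
    focal length f, aperture diameter, and lens-to-sensor distance (all in
    the same length units). Zero when the point is exactly in focus."""
    v = u * f / (u - f)                 # thin lens: 1/f = 1/u + 1/v
    return aperture * abs(sensor_dist - v) / v
```

Inverting this relationship (measured blur to depth) is what allows a single passive camera to recover absolute 3-D position once the mapping is calibrated.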

10.
This paper presents a method for computing the extrinsic camera parameters from measured spatial invariants. Perspective invariants are shape descriptors that remain unchanged under geometric transformations such as projection or a change of viewpoint. Because they provide a feature description of objects and scenes that is independent of external conditions, they are widely applicable in computer vision. Camera calibration determines the transformation between the 2D image information captured by a camera and the corresponding 3D scene; it comprises intrinsic and extrinsic parameters. The intrinsic parameters characterize the camera's internal and optical properties: the image-center coordinates (Cx, Cy), the image scale factor Sx, the effective focal length f, and the lens distortion coefficient K. The extrinsic parameters give the camera's position and orientation in world coordinates: the translation matrix T and the rotation matrix R3×3, generally written together as an extended matrix [R T]3×4. Based on measured perspective-invariant data, this paper derives a method for calibrating the extrinsic camera parameters; experimental results show that the method is highly robust.

11.
王春阳  张宇  金丽漫  李茂忠  陈骥  喻刚 《软件》2020,(4):178-182
With the continuous development of infrared technology, infrared continuous-zoom lenses are finding ever wider application. To prevent changes in ambient temperature from degrading the imaging quality of an infrared continuous-zoom lens, an opto-mechanical-thermal analysis was carried out on an infrared lens driven through a cam. A finite-element model of the continuous-zoom infrared lens was built and subjected to thermal analysis, yielding displacement maps of the lens barrel for different cam-slot conditions, of the lens elements under different conditions, and of the cam and lens elements at different angles. The results show that thermo-optical analysis can simulate the actual operating conditions of an infrared optical system and predict its behavior in service, providing important guidance for optical system design.

12.
High-precision calibration and correction of a photogrammetric system
This paper reviews the camera-lens calibration and correction methods in common use at home and abroad and analyzes their strengths and weaknesses. Based on the imaging mechanism of the lens, a new error-correction and calibration model is then proposed and solved with a decoupled three-step method. The method calibrates and corrects the camera system comprehensively and offers good stability and accuracy.

13.
We propose new mathematical models to study the variation of lens distortion models when modifying the zoom setting of zoom lenses. The new models are based on a polynomial approximation accounting for the variation of the radial distortion parameters through the range of zoom-lens settings, and on the minimization of a global error energy measuring the distance between sequences of distorted aligned points and straight lines after lens distortion correction. To validate the performance of the method, we present experimental results on calibration pattern images and on sport-event scenarios using broadcast video cameras. We find experimentally that using just a second-order polynomial approximation of the lens distortion parameter's zoom variation, the quality of lens distortion correction is as good as that obtained frame by frame using an independent lens distortion model for each frame.
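The polynomial approximation idea can be illustrated with a plain least-squares fit. This is a sketch with synthetic k1 samples, not the authors' energy-minimization formulation:

```python
import numpy as np

def fit_k1_of_zoom(zooms, k1_samples, degree=2):
    """Least-squares polynomial model of the radial distortion
    coefficient k1 as a function of zoom setting. Returns a callable
    polynomial k1(z)."""
    return np.poly1d(np.polyfit(zooms, k1_samples, degree))
```

A second-order fit reproduces a quadratic dependence exactly, consistent with the paper's finding that degree two already matches per-frame calibration quality.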

14.
This paper presents a new method for inferring 3D information using a static camera equipped with a zoom lens. The modeling algorithm does not require any explicit calibration model, and the calculations involved are straightforward. The calibration process uses several images of accurate regular grids placed on a micrometric table. The basic idea is to compute a local transformation that establishes a relationship between a distorted grid detected on the CCD matrix and the real one located in front of the camera. This relationship automatically takes into account all distortion phenomena and yields reconstruction results much more accurate than those of previous works in the same field. A complete experiment on real data is provided and shows that it is possible to compute 3D information from a zooming image set even if the data are close to the optical axis.

15.
This paper surveys zoom-lens calibration approaches, such as pattern-based calibration, self-calibration, and hybrid (or semiautomatic) calibration. We describe the characteristics and applications of the various calibration methods employed in zoom-lens calibration and offer a novel classification model for zoom-lens calibration approaches in both single and stereo cameras. We elaborate on these calibration techniques to discuss their common characteristics and attributes. Finally, we present a comparative analysis of zoom-lens calibration approaches, highlighting the advantages and disadvantages of each. Furthermore, we compare the linear and nonlinear camera models proposed for zoom-lens calibration and list the different techniques used to model the camera's parameters over zoom (or focus) settings.

16.
Design of a camera calibration system for intersection measurement
A simple stereo camera calibration system is described, intended mainly for multi-CCD intersection measurement. It supports calibration of a single camera, stereo calibration between multiple cameras, and evaluation of calibration accuracy. A camera imaging model incorporating both radial and tangential lens distortion is established, and the camera parameters are solved with a two-step method. Calibration accuracy is finally assessed from the error between the side lengths of several checkerboards obtained by intersection measurement and their actual side lengths. The system does not require the user to have specialized knowledge of 3D geometry; it is fast, inexpensive, and can achieve high accuracy.

17.
Pan–tilt–zoom (PTZ) cameras are well suited to object identification and recognition in far-field scenes. However, their effective use is complicated by the fact that continuous online camera calibration is needed, and the absolute pan, tilt, and zoom values provided by the camera actuators cannot be used because they are not synchronized with the video stream. Accurate calibration must therefore be extracted directly from the visual content of the frames. Moreover, the large and abrupt scale changes, the scene background changes due to camera operation, and the need for camera motion compensation make target tracking with these cameras extremely challenging. In this paper, we present a solution that provides continuous online calibration of PTZ cameras and is robust to rapid camera motion and to changes of the environment due to varying illumination or moving objects. The approach also scales beyond thousands of scene landmarks extracted with the SURF keypoint detector. The method directly derives the relationship between the position of a target in the ground plane and the corresponding scale and position in the image, and allows real-time tracking of multiple targets with a high and stable degree of accuracy even at far distances and any zoom level.
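For a planar scene, the image-to-ground-plane relationship such a calibration yields is a homography, and applying it is straightforward. The matrix H below is a made-up example, not one estimated by the paper's method:

```python
import numpy as np

def image_to_ground(H, pt):
    """Map an image point (x, y) to the ground plane through a 3x3
    homography H, using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w                 # dehomogenize
```

Continuous calibration amounts to keeping an estimate of H (and the zoom-dependent intrinsics behind it) current as the camera pans, tilts, and zooms.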

18.
In this study, we developed a small 3× zoom lens barrel for a 5M camera module to be mounted in a mobile phone. Its dimensions are 16 mm × 9 mm × 28 mm, and it is constructed from eight lens elements, two step motors, gears, a cam plate, and other components.

19.
Conventional iris recognition requires a high-resolution camera equipped with a zoom lens and a near-infrared illuminator to observe iris patterns. Moreover, with a zoom lens the viewing angle is small, restricting the user's head movement. To address these limitations, periocular recognition has recently been studied as a biometric. Because the larger area surrounding the eye is used instead of the iris region alone, a camera with a high-resolution sensor and zoom lens is not necessary for periocular recognition. In addition, the user's eye can be captured with a wide-viewing-angle camera, which relaxes the constraints on head movement during image acquisition. Previous periocular recognition methods extract features in Cartesian coordinates, which are sensitive to rotation (roll) of the eye region caused by in-plane rotation of the head, degrading matching accuracy. We therefore propose a novel periocular recognition method, based on polar coordinates, that is robust to eye rotation (roll). Experimental results on the open CASIA-Iris-Distance database (CASIA-IrisV4) show that the proposed method outperformed the others.
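The polar-coordinate idea can be sketched as a resampling step: after the transform, an in-plane rotation of the eye becomes a circular shift along the angle axis, which is easy to normalize away. Nearest-neighbor sampling and the grid sizes here are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np

def to_polar(img, cx, cy, n_r, n_theta):
    """Resample a grayscale image into a (radius, angle) grid around the
    eye center (cx, cy), using nearest-neighbor lookup."""
    h, w = img.shape
    rmax = min(cx, cy, h - 1 - cy, w - 1 - cx)   # largest radius fully inside
    rs = np.linspace(0.0, rmax, n_r)
    ts = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    xs = np.rint(cx + rs[:, None] * np.cos(ts)[None, :]).astype(int)
    ys = np.rint(cy + rs[:, None] * np.sin(ts)[None, :]).astype(int)
    return img[ys, xs]
```

Features extracted from this representation can then be matched under circular shifts of the angle axis, giving the rotation robustness the paper targets.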

20.
石础  谌海云  宋展 《集成技术》2019,8(4):32-41
Structured-light 3D reconstruction is widely used in industrial inspection, where requirements are moving toward ever smaller targets and higher precision. Telecentric lenses, with their small perspective error, low lens distortion, and low imaging distortion, are attracting increasing attention. In this work, the projector retains a conventional lens while the camera lens of a traditional structured-light system is replaced with a telecentric lens. The affine camera model is calibrated with an improved version of the traditional two-step calibration method, and the calibrated camera is then used to calibrate the projector parameters. Experimental results show that the developed telecentric structured-light system achieves high-precision, high-resolution 3D reconstruction over a small field of view with an enlarged measurement depth of field, and can be used for high-precision 3D inspection of targets such as semiconductor devices and miniature parts.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号