Similar Documents
 Retrieved 20 similar documents (search time: 156 ms)
1.
Catadioptric Projective Geometry   (total citations: 9; self-citations: 0; citations by others: 9)
Catadioptric sensors are devices that use mirrors and lenses to form a projection onto the image plane of a camera. Central catadioptric sensors are the class of these devices having a single effective viewpoint. In this paper, we propose a unifying model for the projective geometry induced by these devices and study its properties as well as its practical implications. We show that a central catadioptric projection is equivalent to a two-step mapping via the sphere. The second step is equivalent to a stereographic projection in the case of parabolic mirrors. Conventional lens-based perspective cameras are also central catadioptric devices with a virtual planar mirror and are thus covered by the unifying model. We prove that for each catadioptric projection there exists a dual catadioptric projection based on the duality between points and line images (conics). It turns out that planar and parabolic mirrors form a dual catadioptric projection pair. As a practical example we describe a procedure to estimate the focal length and image center from a single view of lines in arbitrary position for a parabolic catadioptric system.
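A minimal numerical sketch of the two-step sphere mapping mentioned in the abstract, assuming the usual unified model parameterization; the mirror parameter `xi` and the pinhole intrinsics below are illustrative assumptions, not values from the paper. Setting `xi = 1` gives the stereographic (parabolic-mirror) case and `xi = 0` reduces to an ordinary perspective camera.

```python
import numpy as np

def central_catadioptric_project(X, xi=1.0, f=300.0, cx=320.0, cy=240.0):
    """Two-step unified projection: 3D point -> unit viewing sphere -> image plane.

    xi = 1 corresponds to a parabolic mirror (stereographic projection of the sphere);
    xi = 0 degenerates to a conventional perspective camera.
    The intrinsics (f, cx, cy) are illustrative values, not calibrated ones.
    """
    X = np.asarray(X, dtype=float)
    Xs = X / np.linalg.norm(X)          # step 1: project onto the unit viewing sphere
    x, y, z = Xs
    # step 2: perspective projection from the point (0, 0, -xi) onto the image plane
    u = f * x / (z + xi) + cx
    v = f * y / (z + xi) + cy
    return np.array([u, v])

# Example: the same scene point seen through a parabolic mirror and a pinhole camera.
P = np.array([0.5, -0.2, 2.0])
print(central_catadioptric_project(P, xi=1.0))   # parabolic (stereographic) case
print(central_catadioptric_project(P, xi=0.0))   # perspective case
```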

2.
In central catadioptric systems 3D lines are projected into conics. In this paper we present a new approach to extract, in the raw catadioptric image, the conics that correspond to projected straight lines in the scene. Using the internal calibration and two image points we are able to compute these conics analytically; we name them hypercatadioptric line images. We derive the error propagation from the image points to the 3D line projection as a function of the calibration parameters, and we perform an exhaustive analysis of the elements that can affect the accuracy of the conic extraction. In addition, we exploit the presence of parallel lines in man-made environments to compute the dominant vanishing points (VPs) in the omnidirectional image. To obtain the intersection of two of these conics we analyze the self-polar triangle common to the pair. With the information contained in the vanishing points we are able to obtain the 3D orientation of the catadioptric system. This method can be used either in a vertical stabilization system required for autonomous navigation or to rectify images in applications where a vertical orientation of the catadioptric system is assumed. We test the proposed method on synthetic and real images. We evaluate the 3D orientation accuracy against ground truth given by a goniometer and by an inertial measurement unit (IMU). We also test our approach by performing vertical and full rectifications in sequences of real images.
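The geometric step underlying such a construction can be sketched as follows: lift two image points of the projected line to the unit viewing sphere and take the normal of the plane they span with the viewpoint; the great circle cut by that plane re-projects to the line-image conic. This is only an illustration of the idea, assuming the unified sphere model; the calibration values are placeholders and the paper's own formulation is not reproduced.

```python
import numpy as np

def lift_to_sphere(u, v, xi=1.0, f=300.0, cx=320.0, cy=240.0):
    """Back-project an image point onto the unit viewing sphere (unified model).

    The intrinsics and mirror parameter xi are assumed, illustrative values.
    """
    x, y = (u - cx) / f, (v - cy) / f
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

def line_plane_normal(p1, p2, **calib):
    """Normal of the 3D line's interpretation plane, from two image points on its conic.

    The great circle cut by this plane on the viewing sphere re-projects to the
    line image (a conic) in the catadioptric image.
    """
    s1, s2 = lift_to_sphere(*p1, **calib), lift_to_sphere(*p2, **calib)
    n = np.cross(s1, s2)
    return n / np.linalg.norm(n)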

3.
Catadioptric camera calibration using geometric invariants   (total citations: 5; self-citations: 0; citations by others: 5)
Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. In this paper, we propose a novel method for the calibration of central catadioptric cameras using geometric invariants. Lines and spheres in space are all projected into conics in the catadioptric image plane. We prove that the projection of a line provides three invariants whereas the projection of a sphere provides only two. From these invariants, constraint equations on the intrinsic parameters of the catadioptric camera are derived; this yields two variants of the method, the first using projections of lines and the second using projections of spheres. In general, the projections of two lines or of three spheres are sufficient to calibrate the catadioptric camera. One important conclusion of this paper is that the method based on projections of spheres is more robust and more accurate than the one based on projections of lines. The performance of our method is demonstrated by the results of both simulations and experiments with real images.

4.
Structure from motion with wide circular field of view cameras   (total citations: 2; self-citations: 0; citations by others: 2)
This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with a wide circular field of view. We focus on cameras with more than a 180° field of view, for which the standard perspective camera model is not sufficient, e.g., cameras equipped with circular fish-eye lenses such as the Nikon FC-E8 (183°) or Sigma 8 mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and an axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by a central projection followed by a nonlinear image mapping; examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that the epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to obtain correct point correspondences, which can then be used with accurate noncentral models in a bundle adjustment to obtain an accurate 3D scene reconstruction. Noncentral camera models are dealt with, and results are shown for catadioptric cameras with parabolic and spherical mirrors.
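As a generic illustration of the algebraic tool mentioned above, the sketch below reduces a quadratic (polynomial) eigenvalue problem to a generalized eigenvalue problem by companion linearization. The matrices that the paper builds from point correspondences are not reproduced here, so `A0`, `A1`, `A2` are stand-ins; this is the standard linearization trick, not the authors' solver.

```python
import numpy as np
from scipy.linalg import eig

def solve_quadratic_eigenproblem(A0, A1, A2):
    """Solve (A0 + lam*A1 + lam^2*A2) v = 0 by companion linearization.

    Returns candidate eigenvalues lam and the corresponding vectors v;
    in an epipolar-geometry setting the matrices would be assembled from
    point correspondences (not shown here).
    """
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # Companion pencil A z = lam * B z with z = [v; lam*v]
    A = np.block([[Z, I], [-A0, -A1]])
    B = np.block([[I, Z], [Z, A2]])
    lams, zs = eig(A, B)
    return lams, zs[:n, :]
```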

5.
In this paper, a novel linear calibration algorithm based on lines is presented for central catadioptric cameras. We first derive the relationship between the projection of a space point on the viewing sphere and its catadioptric image. Using this relationship, we then establish a group of linear constraints on the catadioptric parameters from the catadioptric projections of spatial lines. With these linear constraints, any central catadioptric camera can be fully calibrated from a single view of three or more lines, without prior knowledge of the camera. Extensive experiments show that this algorithm improves the robustness of the calibration.

6.
Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. Lines and spheres in space are all projected into conics in the central catadioptric image plane; such conics are called line images and sphere images, respectively. We discovered that there exists an imaginary conic in the central catadioptric image plane, defined as the modified image of the absolute conic (MIAC), and that by utilizing the MIAC novel, identical projective geometric properties of line images and sphere images can be exploited: each line image and each sphere image has double contact with the MIAC. This is analogous to the result for pinhole cameras that the image of the absolute conic (IAC) has double contact with sphere images. Note that the IAC also exists in the central catadioptric image plane, but it does not have these double-contact properties with line images or sphere images, which is the main reason to introduce the MIAC. From these geometric properties of the MIAC, two linear calibration methods for central catadioptric cameras, one using sphere images and one using line images, are proposed within the same framework. There are already many linear approaches to central catadioptric camera calibration using line images, so using the tangency of line images to the MIAC mainly yields an alternative geometric construction for calibration. For sphere images, however, only nonlinear calibration methods exist in the literature; proposing linear methods based on sphere images is therefore the main contribution of this paper. Our new algorithms have been tested in extensive experiments with respect to noise sensitivity.

7.
Epipolar Geometry for Central Catadioptric Cameras   (total citations: 11; self-citations: 0; citations by others: 11)
Central catadioptric cameras combine lenses and mirrors to capture a very wide field of view with a central projection. In this paper we extend the classical epipolar geometry of perspective cameras to all central catadioptric cameras. Epipolar geometry is formulated as the geometry of corresponding rays in three-dimensional space. Using the image formation model of central catadioptric cameras, the constraint on corresponding image points is then derived. It is shown that corresponding points lie on epipolar conics. In addition, the shape of the conics is classified for all types of central catadioptric cameras. Finally, the theory is verified by experiments with real central catadioptric cameras.
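Formulated on corresponding rays, the constraint has the familiar essential-matrix form; the epipolar conic in the second image is the projection of the great circle whose plane normal is given by that constraint. The snippet below is a minimal sketch of this idea, assuming image points have already been lifted to unit rays (e.g., with a lifting like the one sketched under item 2); it is not the paper's derivation.

```python
import numpy as np

def epipolar_constraint_on_sphere(s1, s2, E):
    """Epipolar constraint for central catadioptric cameras, written on
    viewing-sphere rays: s2^T E s1 = 0 for corresponding points.

    s1, s2: unit 3-vectors obtained by lifting the image points to the sphere.
    E: an essential matrix relating the two viewpoints (assumed known here).
    The epipolar conic in the second image is the projection of the great
    circle whose plane normal is E @ s1.
    """
    return float(s2 @ E @ s1)
```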

8.
For paracatadioptric cameras, the estimation of intrinsic parameters from sphere images is still an open and challenging problem. In this paper, we propose a calibration method for paracatadioptric cameras based on sphere images, which only requires that the projected contour of the parabolic mirror be visible on the image plane in one view. We have found that, under a central catadioptric camera, a sphere is projected to two conics on the image plane, which we define as a pair of antipodal sphere images. The conic that is visible on the image plane is called the sphere image, while the other, invisible conic is called the antipodal sphere image. From another point of view, according to the image formation of a central catadioptric camera, these two conics can also be considered as the projections of two parallel circles on the viewing sphere by a virtual camera. That is to say, if three pairs of antipodal sphere images are known, the central catadioptric camera can be directly calibrated by the calibration method based on two parallel circles. Therefore, the problem of calibrating a central catadioptric camera is reduced to estimating sphere images and their antipodal sphere images. Based on this idea, we first initialize the intrinsic parameters of the camera from the projected contour of the parabolic mirror and use them to initialize the antipodal sphere images. Next, we study the properties of several pairs of antipodal sphere images under a paracatadioptric camera. These properties are then used to optimize the sphere images and their antipodal sphere images, and thereby calibrate the paracatadioptric camera. Experimental results on both simulated and real image data demonstrate the effectiveness of our method.

9.
We present a planarity constraint and a novel three-dimensional (3D) point reconstruction algorithm for a multiview laser range slit scanner. The constraint is based on the fact that all observed points on a projected laser line lie on the same plane of laser light in 3D. The parameters of the plane of laser light linearly parametrize a homography between a pair of images of the laser points. This homography can be recovered from point correspondences derived from epipolar geometry. Using the planar constraint reduces outliers in the reconstruction and allows the reconstruction of points seen in only one view. We derive an optimal reconstruction of points subject to the planar constraint and compare its accuracy to the suboptimal approach of prior work. We also construct a catadioptric stereo rig with high-quality optical components to remove error due to camera synchronization and non-uniform laser projection. The reconstruction results are compared to prior work that uses inexpensive optics and two cameras.
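Once the laser plane is known, a point seen in a single view can be reconstructed by intersecting its viewing ray with that plane. The sketch below shows only this final intersection step, assuming the plane parameters have already been estimated (the homography-based estimation described in the abstract is not reproduced).

```python
import numpy as np

def intersect_ray_with_laser_plane(ray_dir, plane_n, plane_d):
    """Reconstruct a 3D point seen in one view by intersecting its viewing ray
    (through the camera centre, direction ray_dir) with the laser plane
    {X : n . X + d = 0}.

    plane_n, plane_d are assumed known from a prior plane estimation step.
    """
    ray_dir = np.asarray(ray_dir, dtype=float)
    t = -plane_d / float(np.dot(plane_n, ray_dir))   # ray parameter at the plane
    return t * ray_dir
```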

10.
《Advanced Robotics》2013,27(8-9):947-967

A wide field of view is required for many robotic vision tasks. Such an aperture may be provided by a fisheye camera, which offers a full image compared to catadioptric visual sensors and does not increase the size or fragility of the imaging system with respect to perspective cameras. While a unified model exists for all central catadioptric systems, many different models approximating the radial distortions exist for fisheye cameras. This paper shows that the unified projection model proposed for central catadioptric cameras is also valid for fisheye cameras in the context of robotic applications. The model consists of a projection onto a virtual unitary sphere followed by a perspective projection onto an image plane, and it is shown to be equivalent to almost all existing fisheye models. Calibration with four cameras and partial Euclidean reconstruction are carried out using this model and lead to convincing results. Finally, an application to a mobile robot navigation task is proposed and correctly executed along a 200-m trajectory.

11.
In order to improve robot capabilities related to playing with a flying ball, reliable methods to localize a sphere in 3D space are needed. When the radius of the sphere is known, it can be localized by analyzing a single perspective image of it; when the radius is not known, a single perspective image is not sufficient. In this paper we consider axial-symmetric catadioptric cameras, i.e., devices consisting of an axial-symmetric mirror plus a perspective camera whose viewpoint is on the symmetry axis. If the viewing rays are not all concurrent at a single point, the camera is said to be non-central. We show that, using a non-central axial-symmetric catadioptric camera, a single image is sufficient to determine both the position of a sphere and its radius. Some preliminary experimental results are also presented.

12.
In this paper, we consider the problem of controlling a 6-DOF holonomic robot and a nonholonomic mobile robot from the projection of 3D straight lines in the image plane of central catadioptric systems. A generic central catadioptric interaction matrix for the projection of 3D straight lines is derived using a unifying imaging model valid for an entire class of cameras. This result is exploited to design an image-based control law that allows us to control the 6 DOF of a robotic arm. The projected lines are then used to control a nonholonomic robot. We show that, as for the robotic arm, the control objectives are based mainly on catadioptric image features and that local asymptotic convergence is guaranteed. Simulation results and real experiments with a 6-DOF eye-to-hand system and a mobile robot illustrate the control strategy.

13.
What can two images tell us about a third one?   (total citations: 4; self-citations: 0; citations by others: 4)
This paper discusses the problem of predicting image features in one image from image features in two other images and the epipolar geometry between the three images. We adopt the most general camera model of perspective projection and show that a point can be predicted in the third image as a bilinear function of its images in the first two cameras, that the tangents to three corresponding curves are related by a trilinear function, and that the curvature of a curve in the third image is a linear function of the curvatures at the corresponding points in the other two images. Our analysis relies heavily on the fundamental matrix, which was recently introduced (Faugeras et al., 1992), and on the properties of a special plane which we call the trifocal plane. Although the trinocular geometry of points and lines has been addressed very recently, our use of the differential properties of curves for prediction is unique. We thus completely solve the following problem: given two views of an object, predict what a third view would look like. The problem and its solution bear upon several areas of computer vision: stereo, motion analysis, and model-based object recognition. Our answer is quite general, since it assumes the general perspective projection model for image formation and requires only knowledge of the epipolar geometry for the triple of views. We show that in the special case of orthographic projection our results for points reduce to those of Ullman and Basri (1991). We demonstrate the applicability of our theory on synthetic as well as real data.
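One standard way to realize the point prediction discussed in the abstract is epipolar transfer: the point in the third image lies on both epipolar lines induced by its images in the first two views, so it is their intersection. The sketch below assumes the convention that F13 and F23 map points in images 1 and 2 to their epipolar lines in image 3; it degenerates when the 3D point lies on or near the trifocal plane.

```python
import numpy as np

def transfer_point(x1, x2, F13, F23):
    """Predict a point in the third image from its positions in the first two.

    x1, x2: homogeneous image points (3-vectors) in images 1 and 2.
    F13, F23: fundamental matrices mapping those points to epipolar lines in image 3.
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    l1, l2 = F13 @ x1, F23 @ x2
    x3 = np.cross(l1, l2)          # intersection of the two epipolar lines
    return x3 / x3[2]              # normalize homogeneous coordinates
```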

14.
In this paper, we present a new technique for 3D face reconstruction from a sequence of images taken with cameras having varying parameters, without the need for a calibration grid. The method is based on estimating the projection matrices of the cameras from a symmetry property that characterizes the face; these projection matrices are then used together with point matches in each pair of images to determine the 3D point cloud. Subsequently, the 3D mesh of the face is constructed with the 3D Crust algorithm. Lastly, the 2D image is projected onto the 3D model to generate the texture mapping. The strong point of the proposed approach is that it minimizes the constraints on the calibration system: we calibrate the cameras from a symmetry property that characterizes the face. This property makes some points of the 3D face known in a specific, well-chosen global reference frame, which allows us to formulate a system of linear and nonlinear equations in these 3D points, their projections in the image plane, and the elements of the projection matrices. To solve these equations, we use a genetic algorithm that finds the global optimum without requiring an initial estimate and avoids the local minima of the formulated cost function. Our study is conducted on real data to demonstrate the validity and performance of the proposed approach in terms of robustness, simplicity, stability, and convergence.

15.
Hybrid systems of central catadioptric and perspective cameras are desirable in practice, because such a hybrid camera system can capture a large field of view as well as high-resolution images. However, calibrating the system is challenging due to the heavy distortions in catadioptric cameras. In addition, previous calibration methods are only suitable for camera systems consisting of perspective cameras and catadioptric cameras with parabolic mirrors, and they require priors about the intrinsic parameters of the perspective cameras. In this work, we provide a new approach to handle these problems. We show that if the hybrid camera system consists of at least two central catadioptric cameras and one perspective camera, both the intrinsic and extrinsic parameters of the system can be calibrated linearly without priors about the intrinsic parameters of the perspective cameras, and the central catadioptric cameras supported by our method can be more generic. An approximated polynomial model is derived and used for rectification of the catadioptric images. First, using the epipolar geometry between the perspective and rectified catadioptric images, the distortion parameters of the polynomial model are estimated linearly. Then a new method is proposed to estimate the intrinsic parameters of a central catadioptric camera from the parameters of the polynomial model, so that the catadioptric cameras can be calibrated. Finally, a linear self-calibration method for the hybrid system is given using the calibrated catadioptric cameras. The main advantage of our method is that it not only calibrates both the intrinsic and extrinsic parameters of the hybrid camera system, but also simplifies the traditionally nonlinear self-calibration of perspective cameras to a linear process. Experiments show that our proposed method is robust and reliable.
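For orientation, the general form of a polynomial radial mapping used for rectification looks like the sketch below; the exact approximated model, its order, and its coefficients in the paper are not reproduced here, so the coefficient list `coeffs` is purely illustrative.

```python
import numpy as np

def apply_radial_polynomial(u, v, coeffs, cx, cy):
    """Map an image point through a polynomial radial model
    r_out = r_in * (1 + k1*r_in^2 + k2*r_in^4 + ...).

    coeffs = [k1, k2, ...] and the distortion center (cx, cy) are assumptions
    standing in for the paper's approximated polynomial model.
    """
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    scale = 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(coeffs))
    return cx + dx * scale, cy + dy * scale
```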

16.
In this paper, we study projection systems with a single effective viewpoint, including combinations of mirrors and lenses (catadioptric systems) as well as lenses alone, with or without radial distortion (dioptric systems). First, we extend a well-known unifying model for central catadioptric systems to incorporate a class of dioptric systems with radial distortion. Second, we provide a new representation for the image plane of central systems: the lifting of the original image plane, through a Veronese map, to the 5D projective space. We study how a collineation in the original image plane can be transferred to a collineation in the lifted space, and we prove that in the case of central parabolic systems and cameras with lens distortion the locus of the lifted points representing projections of world lines is a plane. The similarities between paracatadioptric systems and lenses with radial distortion are emphasized by extending to the latter algorithms initially established for the former.
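The second-order Veronese lifting referred to above maps a homogeneous image point to the vector of its degree-2 monomials, so that a conic constraint becomes linear in the lifted coordinates. The monomial ordering in this small sketch is one common choice and may differ from the paper's.

```python
import numpy as np

def veronese_lift(x, y, w=1.0):
    """Second-order Veronese map of a homogeneous image point (x, y, w) into P^5.

    Under this lifting, a conic x^T C x = 0 becomes a linear constraint on the
    lifted vector, which is why loci such as families of line images can be
    described by planes in the lifted space.
    """
    return np.array([x * x, x * y, y * y, x * w, y * w, w * w])
```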

17.
An imaging system with a single effective viewpoint is called a central projection system; the conventional perspective camera is one example. A catadioptric realization of omnidirectional vision combines reflective surfaces with lenses, and catadioptric systems with a unique projection center are also central projection systems. Whenever an image is acquired, points in 3D space are mapped into points in the 2D image plane. The image formation process thus represents a transformation from 3-space to 2-space, and mathematical models can be used to describe it. This paper discusses the definition of world coordinate systems that simplify the modeling of general central projection imaging. We show that an adequate choice of the world coordinate reference system can be highly advantageous. Such a choice does not make new information available in the images; instead, the geometric transformations are represented in a common and more compact framework, while simultaneously enabling new insights. The first part of the paper focuses on static imaging systems, including both perspective cameras and catadioptric systems. A systematic approach to selecting the world reference frame is presented. In particular, we derive coordinate systems that satisfy two differential constraints (the compactness and the decoupling constraints); these coordinate systems have several advantages for representing the transformations between the 3D world and the image plane. The second part of the paper applies the derived mathematical framework to active tracking of moving targets. In applications of visual control of motion, the relationship between motion in the scene and image motion must be established; in the case of active tracking of moving targets these relationships become more complex due to the camera motion. Suitable world coordinate reference systems are defined for three distinct situations: a perspective camera with planar translational motion, a perspective camera with pan-and-tilt rotational motion, and a catadioptric imaging system rotating around an axis passing through the effective viewpoint and the camera center. Position and velocity equations relating image motion, camera motion, and target 3D motion are derived and discussed. Control laws to perform active tracking of moving targets using visual information are established.

18.
A Simple Calibration Method for Catadioptric Cameras   (total citations: 3; self-citations: 0; citations by others: 3)
Central catadioptric cameras are widely used in virtual reality and robot navigation, and camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras using a 2D calibration pattern. First, the bounding ellipse of the catadioptric image and the field of view (FOV) are used to obtain an initial estimate of the intrinsic parameters. Then, the explicit relationship between the central catadioptric model and the pinhole model is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not require fitting of any partially visible conic, and the projected images of the 2D calibration pattern can easily cover the whole image, so our method is easy to use and robust. Experiments with simulated data as well as real images show the satisfactory performance of the proposed calibration method.
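One plausible way such an initialization from the bounding ellipse and the FOV can work is sketched below, assuming the unified sphere model, where a ray at angle theta from the optical axis images at radius r = f·sin(theta)/(cos(theta) + xi). The boundary ray (theta = FOV/2) then fixes the focal length, and the ellipse center approximates the principal point. This is only an illustrative sketch; the paper's exact initialization formula is not reproduced.

```python
import numpy as np

def init_intrinsics_from_fov(ellipse_center, ellipse_radius, fov_deg, xi=1.0):
    """Rough initialization of the principal point and focal length from the
    bounding ellipse of the catadioptric image and the known field of view.

    Assumes the unified sphere model with mirror parameter xi; all values
    here are illustrative, not the paper's.
    """
    theta = np.deg2rad(fov_deg) / 2.0                 # half field of view
    cx, cy = ellipse_center                           # principal point ~ ellipse center
    f = ellipse_radius * (np.cos(theta) + xi) / np.sin(theta)
    return cx, cy, f
```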

19.
Central catadioptric cameras are widely used in virtual reality and robot navigation, and camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras using a 2D calibration pattern. First, the bounding ellipse of the catadioptric image and the field of view (FOV) are used to obtain an initial estimate of the intrinsic parameters. Then, the explicit relationship between the central catadioptric model and the pinhole model is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not require fitting of any partially visible conic, and the projected images of the 2D calibration pattern can easily cover the whole image, so our method is easy to use and robust. Experiments with simulated data as well as real images show the satisfactory performance of the proposed calibration method.

20.
This work proposes a method for camera self-calibration with varying intrinsic parameters from a sequence of images of an unknown 3D object. The projections of two points of the 3D scene in the image planes are used together with the fundamental matrices to determine the projection matrices. The approach is based on formulating a nonlinear cost function from a relationship between two points of the scene and their projections in the image planes; minimizing this function enables us to estimate the intrinsic parameters of the different cameras. The strength of the present approach lies in relaxing three constraints of a self-calibration system (a pair of images, a 3D scene, any camera): the use of a single pair of images yields fewer equations, which reduces the execution time of the program; the use of a 3D scene removes planarity constraints; and the use of any camera eliminates the restriction to cameras with constant parameters. Experimental results on synthetic and real data are presented to demonstrate the performance of the present approach in terms of accuracy, simplicity, stability, and convergence.
