Similar Documents
1.
In this paper, we describe a method for recovering camera parameters from perspective views of daylight shadows in a scene, given only minimal geometric information determined from the images. This minimal information consists of two 3D stationary points and their cast shadows on the ground plane. We show that this information captured in two views is sufficient to determine the focal length, the aspect ratio, and the principal point of a pinhole camera with fixed intrinsic parameters. In addition, we are also able to compute the orientation of the light source. Our method is based on exploiting novel inter-image constraints on the image of the absolute conic and the physical properties of solar shadows. Compared to traditional methods that require images of precisely machined calibration patterns, our method uses shadows cast by the sun, which are common in natural environments, and requires no measurements of any distance or angle in the 3D world. To demonstrate the accuracy of the proposed algorithm and its utility, we present results on both synthetic and real images, and apply the method to an image-based rendering problem.

2.
This paper proposes a novel method for robustly recovering the camera geometry of an uncalibrated image sequence taken under circular motion. Under circular motion, all the camera centers lie on a circle and the mapping from the plane containing this circle to the horizon line observed in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points of the motion plane and the rotation angle between the two views can be derived directly from such a homography. This way of recovering the imaged circular points and rotation angles is intrinsically a multiple view approach, as all the sequence geometry embedded in the epipoles is exploited in the estimation of the homography for each view pair. This results in a more robust method compared to those computing the rotation angles using adjacent views only. The proposed method has been applied to self-calibrate turntable sequences using either point features or silhouettes, and highly accurate results have been achieved.
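A hedged illustration of the extraction step described above: under circular motion the 2×2 homography between two 1D views is conjugate to a planar rotation, so (up to scale) its eigenvalues are exp(±i*theta) and its eigenvectors are the two imaged circular points of the motion plane. The sketch below assumes the homography has already been estimated and only shows how the circular points and rotation angle might be read off; it is not the paper's estimation procedure.

```python
import numpy as np

def circular_points_and_angle(H):
    """Extract the imaged circular points and the rotation angle from a
    2x2 inter-view homography of 1D views under circular motion.

    Assumes H is conjugate to a scaled planar rotation, so its eigenvalues
    are proportional to exp(+i*theta) and exp(-i*theta)."""
    w, V = np.linalg.eig(H.astype(complex))
    # Eigenvectors (homogeneous 1D points) are the two imaged circular points.
    c1, c2 = V[:, 0], V[:, 1]
    # The eigenvalue ratio equals exp(2*i*theta), up to conjugation order.
    theta = 0.5 * np.angle(w[0] / w[1])
    return c1, c2, abs(theta)

# Toy usage: a homography conjugate to a 30-degree rotation.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = np.array([[2.0, 0.3], [0.1, 1.0]])   # arbitrary invertible conjugating matrix
H = T @ R @ np.linalg.inv(T)
print(circular_points_and_angle(H)[2])   # ~0.5236 rad
```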

3.
A new approach to camera calibration using vanishing line information for three-dimensional computer vision is proposed. Calibrated parameters include the orientation, the position, the focal length, and the image plane center of a camera. A rectangular parallelepiped is employed as the calibration target to generate three principal vanishing points and then three vanishing lines from the projected image of the parallelepiped. Only a monocular image is required for solving these camera parameters. It is shown that the image plane center is the orthocenter of the triangle formed by the three vanishing lines. From the slopes of the vanishing lines the camera orientation parameters can be determined, and the focal length can be computed from the area of the triangle. The camera position parameters can then be calibrated using related geometric projective relationships. The derived results show the geometric meanings of these camera parameters. The calibration formulas are analytic and simple to compute. Experimental results show the feasibility of the proposed approach for a practical application—autonomous land vehicle guidance. This work was supported by the National Science Council, Republic of China, under Grant NSC-77-0404-E-009-31.
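As a concrete illustration of the orthocenter relationship, the following sketch uses the equivalent formulation in terms of the three vanishing points (the vertices of the triangle bounded by the three vanishing lines): under zero skew and unit aspect ratio, the principal point is the orthocenter of that triangle, and the focal length follows from the orthogonality constraint f^2 = -(v1 - p)·(v2 - p). This is a minimal sketch of that standard construction, not the paper's slope- and area-based formulas.

```python
import numpy as np

def principal_point_and_focal(v1, v2, v3):
    """Principal point (orthocenter of the vanishing-point triangle) and focal
    length from three vanishing points of mutually orthogonal directions.
    Assumes zero skew and unit aspect ratio; v1, v2, v3 are 2D pixel coords."""
    v1, v2, v3 = map(np.asarray, (v1, v2, v3))
    # Altitude through v1 is perpendicular to edge (v2 - v3); likewise for v2.
    A = np.vstack([v2 - v3, v1 - v3])
    b = np.array([np.dot(v2 - v3, v1), np.dot(v1 - v3, v2)])
    p = np.linalg.solve(A, b)                    # orthocenter = principal point
    f = np.sqrt(-np.dot(v1 - p, v2 - p))         # focal length in pixels
    return p, f

# Toy usage: synthetic vanishing points of three orthogonal box edges seen by
# a camera with f = 800 and principal point (320, 240).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
cx, sx = np.cos(0.3), np.sin(0.3)
cy, sy = np.cos(-0.5), np.sin(-0.5)
R = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @ \
    np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])     # fixed rotation
vps = (K @ R).T                                           # columns of K R
v1, v2, v3 = [v[:2] / v[2] for v in vps]
print(principal_point_and_focal(v1, v2, v3))              # ~((320, 240), 800)
```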

4.
This paper addresses the problem of recovering both the intrinsic and extrinsic parameters of a camera from the silhouettes of an object in a turntable sequence. Previous silhouette-based approaches have exploited correspondences induced by epipolar tangents to estimate the image invariants under turntable motion and achieved a weak calibration of the cameras. It is known that the fundamental matrix relating any two views in a turntable sequence can be expressed explicitly in terms of the image invariants, the rotation angle, and a fixed scalar. It is shown that the imaged circular points for the turntable plane can also be formulated in terms of the same image invariants and fixed scalar. This allows the imaged circular points to be recovered directly from the estimated image invariants, and provides constraints for the estimation of the imaged absolute conic. The camera calibration matrix can thus be recovered. A robust method for estimating the fixed scalar from image triplets is introduced, and a method for recovering the rotation angles using the estimated imaged circular points and epipoles is presented. Using the estimated camera intrinsics and extrinsics, a Euclidean reconstruction can be obtained. Experimental results on real data sequences are presented, which demonstrate the high precision achieved by the proposed method.

5.
The image motion of a planar surface between two camera views is captured by a homography (a 2D projective transformation). The homography depends on the intrinsic and extrinsic camera parameters, as well as on the 3D plane parameters. While camera parameters vary across different views, the plane geometry remains the same. Based on this fact, we derive linear subspace constraints on the relative homographies of multiple (⩾ 2) planes across multiple views. The paper has three main contributions: 1) We show that the collection of all relative homographies (homologies) of a pair of planes across multiple views spans a 4-dimensional linear subspace. 2) We show how this constraint can be extended to the case of multiple planes across multiple views. 3) We show that, for some restricted cases of camera motion, linear subspace constraints apply also to the set of homographies of a single plane across multiple views. All the results derived are true for uncalibrated cameras. The possible utility of these multiview constraints for improving homography estimation and for detecting nonrigid motions is also discussed.
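A minimal numpy sketch of how contribution 1) might be exploited in practice: stack the vectorized homologies of a plane pair estimated across many views into an N×9 matrix and project it onto its best rank-4 approximation, which denoises the individual estimates. The input data below are synthetic stand-ins, not the paper's experiments.

```python
import numpy as np

def enforce_rank4(homologies):
    """Project a set of 3x3 relative homographies (homologies) of a plane
    pair across views onto the 4-dimensional linear subspace they should span.
    homologies: list of 3x3 arrays, assumed to be consistently scaled."""
    M = np.array([H.ravel() for H in homologies])        # N x 9
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[4:] = 0.0                                          # keep a rank-4 basis
    M_r4 = (U * s) @ Vt
    return [row.reshape(3, 3) for row in M_r4]

# Synthetic stand-in: noisy combinations of a random 4-dimensional basis.
rng = np.random.default_rng(0)
basis = rng.standard_normal((4, 9))
noisy = [(rng.standard_normal(4) @ basis
          + 0.01 * rng.standard_normal(9)).reshape(3, 3) for _ in range(12)]
cleaned = enforce_rank4(noisy)
print(np.linalg.matrix_rank(np.array([H.ravel() for H in cleaned]), tol=1e-6))  # 4
```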

6.
Geometry of single axis motions using conic fitting
Previous algorithms for recovering 3D geometry from an uncalibrated image sequence of a single axis motion of unknown rotation angles are mainly based on the computation of two-view fundamental matrices and three-view trifocal tensors. We propose three new methods that are based on fitting a conic locus to corresponding image points over multiple views. The main advantage is that determining only five parameters of a conic from one corresponding point over at least five views is simpler and more robust than determining a fundamental matrix from two views or a trifocal tensor from three views. It is shown that the geometry of single axis motion can be recovered either by computing one conic locus and one fundamental matrix or by computing at least two conic loci. A maximum likelihood solution based on this parametrization of the single axis motion is also described for optimal estimation using three or more loci. The experiments on real image sequences demonstrate the simplicity, accuracy, and robustness of the new methods.
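The core fitting step referred to above, determining the five parameters of a conic from the track of a single point over several views, can be sketched with a standard direct least-squares conic fit (a generic estimator, not necessarily the one used in the paper):

```python
import numpy as np

def fit_conic(pts):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2D points
    (at least five) by taking the null vector of the design matrix."""
    x, y = np.asarray(pts, float).T
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]                     # conic coefficients, up to scale

# Toy usage: the track of one point over the views of a turntable sequence
# would trace an ellipse; here we use noisy samples of a synthetic ellipse.
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
pts = np.column_stack([320 + 120 * np.cos(t), 240 + 45 * np.sin(t)])
pts += np.random.default_rng(1).normal(scale=0.3, size=pts.shape)
conic = fit_conic(pts)
print(conic / conic[-1])              # normalized conic coefficients
```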

7.
We quantify the observation by Kender and Freudenstein (1987) that degenerate views occupy a significant fraction of the viewing sphere surrounding an object. For a perspective camera geometry, we introduce a computational model that can be used to estimate the probability that a view degeneracy will occur in a random view of a polyhedral object. For a typical recognition system parameterization, view degeneracies typically occur with probabilities of 20 percent and, depending on the parameterization, as high as 50 percent. We discuss the impact of view degeneracy on the problem of object recognition and, for a particular recognition framework, relate the cost of object disambiguation to the probability of view degeneracy. To reduce this cost, we incorporate our model of view degeneracy in an active focal length control paradigm that balances the probability of view degeneracy with the camera field of view. In order to validate both our view degeneracy model and our active focal length control model, a set of experiments is reported using a real recognition system operating on real images.

8.
To address the pose estimation problem for uncalibrated cameras, a high-accuracy algorithm that iterates on focal length and pose simultaneously is proposed. Existing pose estimation algorithms for uncalibrated cameras solve for the focal length and the camera pose separately, which yields poor focal length accuracy. The proposed algorithm first obtains initial values of the camera focal length and pose from an existing algorithm; it then derives a joint minimization function for focal length and pose on the basis of orthogonal iteration and iterates with the focal length and pose together as initial values; finally, high-accuracy focal length and pose parameters are obtained. Simulation experiments show that, with 10 points and a noise standard deviation of 2, the proposed algorithm achieves a relative rotation-angle error below 1%, a relative translation error below 4%, and a relative focal length error below 3%. Real experiments show that its accuracy is comparable to that of checkerboard calibration. Compared with existing algorithms, it provides high-accuracy focal length and pose estimation for uncalibrated cameras.
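A minimal sketch of the joint-refinement idea, implemented here as a generic reprojection-error least-squares refinement over focal length and pose rather than the orthogonal-iteration formulation derived in the paper; the initial values are assumed to come from an existing pose solver, and all data below are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, X, uv, pp):
    """Reprojection residuals; params = [f, rx, ry, rz, tx, ty, tz]."""
    f, rvec, t = params[0], params[1:4], params[4:7]
    Xc = Rotation.from_rotvec(rvec).apply(X) + t          # camera-frame points
    proj = f * Xc[:, :2] / Xc[:, 2:3] + pp                # pinhole projection
    return (proj - uv).ravel()

def refine_focal_and_pose(f0, rvec0, t0, X, uv, pp):
    """Jointly refine focal length and pose from initial estimates.
    X: Nx3 object points, uv: Nx2 image points, pp: principal point."""
    sol = least_squares(residuals, np.hstack([f0, rvec0, t0]), args=(X, uv, pp))
    return sol.x[0], sol.x[1:4], sol.x[4:7]

# Toy usage with noise-free synthetic data and perturbed initial values.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(10, 3)) + [0, 0, 6]
pp = np.array([320.0, 240.0])
f_true, rvec_true, t_true = 900.0, np.array([0.1, -0.2, 0.05]), np.array([0.2, -0.1, 0.5])
# With uv = 0 the residual function simply returns the projected points.
uv = residuals(np.hstack([f_true, rvec_true, t_true]),
               X, np.zeros((10, 2)), pp).reshape(-1, 2)
print(refine_focal_and_pose(800.0, rvec_true + 0.05, t_true + 0.2, X, uv, pp)[0])  # ~900
```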

9.
In this paper, we show how an active binocular head, the IIS head, can be easily calibrated with very high accuracy. Our calibration method can also be applied to many other binocular heads. In addition to the proposal and demonstration of a four-stage calibration process, there are three major contributions in this paper. First, we propose a motorized-focus lens (MFL) camera model which assumes constant nominal extrinsic parameters. The advantage of having constant extrinsic parameters is a simple head/eye relation. Second, a calibration method for the MFL camera model is proposed, which separates estimation of the image center and effective focal length from estimation of the camera orientation and position. This separation has been proved to be crucial; otherwise, estimates of the camera parameters would be very noise-sensitive. Third, we show that, once the parameters of the MFL camera model are calibrated, a nonlinear recursive least-squares estimator can be used to refine all 35 kinematic parameters. Real experiments have shown that the proposed method can achieve an accuracy of one-pixel prediction error and 0.2-pixel epipolar error, even when all the joints, including the left and right focus motors, are moved simultaneously. This accuracy is good enough for many 3D vision applications, such as navigation, object tracking and reconstruction.

10.
Error analysis of pure rotation-based self-calibration
Self-calibration using pure rotation is a well-known technique and has been shown to be a reliable means for recovering intrinsic camera parameters. However, in practice, it is virtually impossible to ensure that the camera motion for this type of self-calibration is a pure rotation. In this paper, we present an error analysis of the recovered intrinsic camera parameters due to the presence of translation. We derive closed-form error expressions for a single pair of images with nondegenerate motion; for multiple rotations, for which there are no closed-form solutions, the analysis is carried out through repeated experiments. Among other results, we show that translation-independent solutions do exist under certain practical conditions. Our analysis can be used to help choose the least error-prone approach (if multiple approaches exist) for a given set of conditions.

11.
In order to improve robot capabilities related to playing with a flying ball, reliable methods to localize a sphere in 3D space are needed. When the radius of the sphere is known, it can be localized by analyzing a single perspective image of it. When the sphere radius is not known, a single perspective image is not sufficient. In this paper we consider axial-symmetric catadioptric cameras, i.e. devices consisting of an axial-symmetric mirror plus a perspective camera whose viewpoint is on the symmetry axis. If the viewing rays are not all concurrent at a single point, this camera is said to be non-central. We show that, using a non-central axial-symmetric catadioptric camera, a single image is sufficient to determine both the position of a sphere and its radius. Some preliminary experimental results are also presented.

12.
This paper deals with the problem of recovering the dimensions of an object and its pose from a single image acquired with a camera of unknown focal length. It is assumed that the object in question can be modeled as a polyhedron where the coordinates of the vertices can be expressed as a linear function of a dimension vector. The reconstruction program takes as input a set of correspondences between features in the model and features in the image. From this information, the program determines an appropriate projection model for the camera, the dimensions of the object, its pose relative to the camera and, in the case of perspective projection, the focal length of the camera. This paper describes how the reconstruction problem can be framed as an optimization over a compact set with low dimension (no more than four). This optimization problem can be solved efficiently by coupling standard nonlinear optimization techniques with a multistart method. The result is an efficient, reliable solution system that does not require initial estimates for any of the parameters being estimated.
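The optimization strategy described above, coupling a standard local nonlinear optimizer with a multistart scheme over a compact low-dimensional box, can be sketched generically as follows; the objective used here is only a placeholder for the actual reprojection cost.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(objective, bounds, n_starts=20, seed=0):
    """Minimize `objective` over a compact box (list of (lo, hi) pairs, here
    of dimension <= 4) by running a local optimizer from several random
    starting points and keeping the best result."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(objective, x0, bounds=bounds, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Placeholder objective standing in for a reprojection cost over the
# low-dimensional parameterization (e.g. unknown dimensions and focal length).
obj = lambda x: np.sum((x - [0.3, 1.2, -0.5]) ** 2) + 0.1 * np.sin(5 * x).sum()
result = multistart_minimize(obj, bounds=[(-2, 2)] * 3)
print(result.x, result.fun)
```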

13.
For focal length self-calibration of a binocular camera, existing methods all assume that the position of the camera principal point is known, whereas in practice the principal point is usually unknown. A new method is proposed for camera focal length self-calibration under the condition that both the principal point and the scene are unknown. A rigorous mathematical analysis shows that the scale of the camera coordinate system has a significant influence on focal length calibration, and, building on a quantitative analysis of this influence, an algorithm for selecting an appropriate coordinate-system scale is proposed. Both simulated and real experiments confirm the importance of the coordinate-system scale, and the proposed method achieves better calibration accuracy than traditional methods.

14.
We present a novel technique for calibrating a zooming camera based on the invariance properties of the normalised image of the absolute conic (NIAC). We show that the camera parameters independent of position, orientation and zooming are determined uniquely by the NIAC, and we exploit these invariance properties to develop a stratified calibration method that decouples the calibration parameters. The method is organised in three steps: (i) computation of the NIAC, (ii) computation of the focal length for each image, and (iii) computation of the orientation and the position of the camera. The method requires a minimum of three views of a single planar grid. Experiments with synthetic and real data suggest that the method is competitive with other state-of-the-art plane-based zooming calibration methods in the scenarios considered.

15.
In camera calibration, the focal length is the most important parameter to be estimated, while the other parameters can be obtained from prior information about the scene or system configuration. In this paper, we present a polynomial constraint on the effective focal length under the condition that all the other parameters are known. The polynomial degree is 4 for paracatadioptric cameras and 16 for other catadioptric cameras. However, if the skew is 0 or the ratio between the skew and the effective focal length is known, the constraint becomes a linear one or a degree-4 polynomial in the square of the effective focal length, for paracatadioptric cameras and other catadioptric cameras, respectively. Based on this constraint, we propose a simple method for estimating the effective focal length of central catadioptric cameras. Unlike many line-based approaches in the literature, the proposed method needs no conic fitting of line images, which is error-prone and strongly affects the calibration accuracy. It is easy to implement, and a single view of one space line is enough, with no other spatial information needed. Experiments on simulated and real data show that this method is robust and effective.
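A minimal sketch of the final root-solving step implied by the constraint: once the polynomial coefficients in the squared effective focal length have been assembled (their construction is specific to the paper and not reproduced here), candidate focal lengths are read off from the admissible real positive roots. The coefficients below are hypothetical.

```python
import numpy as np

def focal_from_poly(coeffs):
    """Solve a polynomial constraint on the squared effective focal length.
    coeffs: polynomial coefficients in f^2, highest degree first (e.g. the
    degree-4 case for non-paracatadioptric cameras with known skew ratio)."""
    roots = np.roots(coeffs)
    real_pos = [r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0]
    return sorted(np.sqrt(real_pos))          # candidate focal lengths

# Hypothetical coefficients with f^2 = 640000 (f = 800) among the roots.
print(focal_from_poly(np.poly([6.4e5, -3.0, 1.5e5, 2.0e6])))
```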

16.
Camera calibration from surfaces of revolution
This paper addresses the problem of calibrating a pinhole camera from images of a surface of revolution. Camera calibration is the process of determining the intrinsic or internal parameters (i.e., aspect ratio, focal length, and principal point) of a camera, and it is important for both motion estimation and metric reconstruction of 3D models. In this paper, a novel and simple calibration technique is introduced, which is based on exploiting the symmetry of images of surfaces of revolution. Traditional techniques for camera calibration involve taking images of some precisely machined calibration pattern (such as a calibration grid). The use of surfaces of revolution, which are commonly found in daily life (e.g., bowls and vases), makes the process easier as a result of the reduced cost and increased accessibility of the calibration objects. In this paper, it is shown that two images of a surface of revolution will provide enough information for determining the aspect ratio, focal length, and principal point of a camera with fixed intrinsic parameters. The algorithms presented in this paper have been implemented and tested with both synthetic and real data. Experimental results show that the camera calibration method presented is both practical and accurate.

17.
Detecting objects in complex scenes while recovering the scene layout is a critical functionality in many vision-based applications. In this work, we advocate the importance of geometric contextual reasoning for object recognition. We start from the intuition that objects' location and pose in 3D space are not arbitrarily distributed but rather constrained by the fact that objects must lie on one or multiple supporting surfaces. We model such supporting surfaces by means of hidden parameters (i.e. not explicitly observed) and formulate the problem of joint scene reconstruction and object recognition as that of finding the set of parameters that maximizes the joint probability of having a number of detected objects on K supporting planes given the observations. As a key ingredient for solving this optimization problem, we demonstrate a novel relationship between object location and pose in the image and the scene layout parameters (i.e. the normal of one or more supporting planes in 3D and the camera pose, location and focal length). Using a novel probabilistic formulation and the above relationship, our method has the unique ability to jointly: i) reduce the false alarm and false negative object detection rates; ii) recover object locations and supporting planes within the 3D camera reference system; iii) infer camera parameters (viewpoint and focal length) from just one single uncalibrated image. Quantitative and qualitative experimental evaluation on two datasets (desk-top dataset [1] and LabelMe [2]) demonstrates our theoretical claims.

18.
Documents may be captured at any orientation when viewed with a hand-held camera. Here, a method of recovering fronto-parallel views of perspectively skewed text documents in single images is presented, useful for ‘point-and-click’ scanning or when generally seeking regions of text in a scene. We introduce a novel extension to the commonly used 2D projection profiles in document recognition to locate the horizontal vanishing point of the text plane. Following further analysis, we segment the lines of text to determine the style of justification of the paragraphs. The change in line spacings exhibited due to perspective is then used to locate the document's vertical vanishing point. No knowledge of the camera focal length is assumed. Using the vanishing points, a fronto-parallel view is recovered which is then suitable for OCR or other high-level recognition. We provide results demonstrating the algorithm's performance on documents over a wide range of orientations.

19.
Pose estimation is an important operation for many vision tasks. In this paper, the authors propose an algorithm for pose estimation based on the volume measurement of tetrahedra composed of feature-point triplets extracted from an arbitrary quadrangular target and the lens center of the vision system. The inputs to this algorithm are the six distances joining all feature pairs and the image coordinates of the quadrangular target. The outputs of this algorithm are the effective focal length of the vision system, the interior orientation parameters of the target, the exterior orientation parameters of the camera with respect to an arbitrary coordinate system if the target coordinates are known in this frame, and the final pose of the camera. The authors have also developed a shape restoration technique which is applied prior to pose recovery in order to reduce the effects of inaccuracies caused by image projection. An evaluation of the method has shown that this pose estimation technique is accurate and robust. Because it is based on a unique, closed-form solution, its speed makes it a potential candidate for solving a variety of landmark-based tracking problems.
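The basic measurement the algorithm builds on, the volume of the tetrahedron spanned by a feature-point triplet and the lens center, reduces to a scalar triple product; a minimal numpy sketch with a made-up triplet follows.

```python
import numpy as np

def tetra_volume(lens_center, p1, p2, p3):
    """Volume of the tetrahedron spanned by the lens center and a triplet of
    feature points, via the scalar triple product."""
    c, p1, p2, p3 = map(np.asarray, (lens_center, p1, p2, p3))
    return abs(np.linalg.det(np.column_stack([p1 - c, p2 - c, p3 - c]))) / 6.0

# Made-up example: camera at the origin, three target corners in front of it.
print(tetra_volume([0, 0, 0], [1.0, 0.0, 5.0], [0.0, 1.0, 5.0], [1.0, 1.0, 5.5]))
```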

20.
A self-calibration method for the intrinsic parameters of a digital camera, suitable for IBR (image-based rendering) systems, is described. The method is based on tracking matched feature points across an image sequence obtained by rotating the camera, and requires no calibration object. It assumes that the optical center of the camera remains fixed during the rotation, i.e. the image center is fixed and can be calibrated in advance, while the focal length is allowed to vary between images. Experiments on real image sequences verify that the method can robustly estimate the camera intrinsic parameters.
