27 query results (search time: 15 ms)
21.
Davide Scaramuzza, Friedrich Fraundorfer, Marc Pollefeys. Robotics and Autonomous Systems, 2010, 58(6): 820-827
In this paper, we present a method that accurately recovers the trajectory of a vehicle purely from monocular omnidirectional images. The method combines appearance-guided structure from motion with loop closing. The appearance-guided monocular structure-from-motion scheme provides the initial motion estimate; appearance information is used to correct the rotation estimates computed from feature points alone. A visual-word-based place recognition scheme is employed for loop detection, and loop closing is performed by bundle adjustment minimizing the reprojection error of feature matches. The proposed method is successfully demonstrated on videos from an automotive platform. The experiments show that the use of appearance information leads to superior motion estimates compared to a purely feature-based approach, and the loop-closing method eliminates the residual drift of the motion estimation.
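The loop-closing idea in the abstract above can be illustrated with a toy sketch: dead-reckoned relative motions accumulate drift, and detecting a loop lets us correct it. This is not the authors' bundle-adjustment implementation; it is a minimal 2D stand-in that distributes the end-point drift linearly along the trajectory, with all function names invented here.

```python
import math

def integrate_trajectory(rel_motions):
    """Dead-reckon 2D poses (x, y, heading) from relative (d_heading, d_dist) steps."""
    poses = [(0.0, 0.0, 0.0)]
    for d_heading, d_dist in rel_motions:
        x, y, th = poses[-1]
        th += d_heading
        poses.append((x + d_dist * math.cos(th),
                      y + d_dist * math.sin(th),
                      th))
    return poses

def close_loop(poses):
    """Distribute the end-point drift linearly along the trajectory so the
    loop closes exactly; a crude stand-in for bundle-adjustment loop closing."""
    dx = poses[-1][0] - poses[0][0]
    dy = poses[-1][1] - poses[0][1]
    n = len(poses) - 1
    return [(x - dx * i / n, y - dy * i / n, th)
            for i, (x, y, th) in enumerate(poses)]
```

Feeding in a slightly noisy square loop, the corrected trajectory ends exactly where it started, which is the qualitative effect the paper's loop closing achieves for its residual drift.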
22.
In this paper, we discuss the problem of estimating parameters of a calibration model for active pan–tilt–zoom cameras. The variation of the intrinsic parameters of each camera over its full range of zoom settings is estimated through a two-step procedure. We first determine the intrinsic parameters at the camera’s lowest zoom setting very accurately by capturing an extended panorama. The camera intrinsics and radial distortion parameters are then determined at discrete steps in a monotonically increasing zoom sequence that spans the full zoom range of the camera. Our model incorporates the variation of radial distortion with camera zoom. Both calibration phases are fully automatic and do not assume any knowledge of the scene structure. High-resolution calibrated panoramic mosaics are also computed during this process. These fully calibrated panoramas are represented as multi-resolution pyramids of cube-maps. We describe a hierarchical approach for building multiple levels of detail in panoramas, by aligning hundreds of images captured within a 1–12× zoom range. Results are shown from datasets captured by two types of pan–tilt–zoom cameras placed in an uncontrolled outdoor environment. The estimated camera intrinsics model, along with the cube-maps, provides a calibration reference for images captured on the fly by the active pan–tilt–zoom camera under operation, making our approach promising for active camera network calibration.
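The kind of zoom-dependent intrinsics model described above can be sketched minimally: calibrate at discrete zoom steps, then interpolate between them at query time. The class name, sample values, and linear interpolation are all assumptions made here for illustration, not the model the paper actually fits.

```python
import bisect

class ZoomIntrinsics:
    """Piecewise-linear lookup of intrinsics calibrated at discrete zoom steps.
    Each sample is (zoom, focal_length_px, k1); the values used below are made up."""

    def __init__(self, samples):
        self.samples = sorted(samples)

    def at(self, zoom):
        """Interpolate (focal_length_px, k1) at an arbitrary zoom setting."""
        zooms = [s[0] for s in self.samples]
        i = bisect.bisect_left(zooms, zoom)
        if i == 0:
            return self.samples[0][1:]          # clamp below the first sample
        if i == len(zooms):
            return self.samples[-1][1:]         # clamp above the last sample
        z0, f0, k0 = self.samples[i - 1]
        z1, f1, k1 = self.samples[i]
        t = (zoom - z0) / (z1 - z0)
        return (f0 + t * (f1 - f0), k0 + t * (k1 - k0))
```

A denser sampling of the zoom range, as in the paper's monotonically increasing zoom sequence, would make whatever interpolation scheme is used correspondingly more accurate.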
23.
Self-Calibration and Metric Reconstruction Inspite of Varying and Unknown Intrinsic Camera Parameters (cited 4 times: 0 self-citations, 4 by others)
In this paper the theoretical and practical feasibility of self-calibration in the presence of varying intrinsic camera parameters is investigated. The paper's main contribution is a self-calibration method which efficiently deals with all kinds of constraints on the intrinsic camera parameters. Within this framework a practical method is proposed which can retrieve metric reconstruction from image sequences obtained with uncalibrated zooming/focusing cameras. The feasibility of the approach is illustrated on real and synthetic examples. Besides this, a theoretical proof is given which shows that the absence of skew in the image plane is sufficient to allow for self-calibration. A counting argument is developed which, depending on the set of constraints, gives the minimum sequence length for self-calibration, and a method to detect critical motion sequences is proposed.
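The counting argument can be sketched using the standard rule from the self-calibration literature: the projective-to-metric ambiguity has 8 degrees of freedom (3 for the plane at infinity plus 5 for the absolute conic), each known intrinsic parameter contributes one constraint per view, and each constant-but-unknown parameter contributes one constraint per view after the first. Assuming that rule (the paper's exact accounting may differ), the minimum sequence length is:

```python
def min_views(n_known, n_fixed, dof=8):
    """Smallest number of views n satisfying the counting rule
    n * n_known + (n - 1) * n_fixed >= dof, where dof = 8 is the
    projective-to-metric ambiguity. Returns None when the constraints
    can never accumulate to dof."""
    if n_known == 0 and n_fixed == 0:
        return None
    n = 2  # self-calibration needs at least two views
    while n * n_known + (n - 1) * n_fixed < dof:
        n += 1
    return n
```

Under this rule, knowing only that skew is zero (one known parameter per view) requires 8 views, while constant-but-unknown intrinsics (five fixed parameters) require 3 views, consistent with classical results.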
24.
25.
Kim SJ, Pollefeys M. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(4): 562-576
In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to the nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D models, where colors look inconsistent and noticeable boundaries appear. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm with both synthetic and real data, showing significant improvement over existing methods. We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics which are more representative of the scene than normal mosaics.
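The forward imaging model discussed above (scene radiance scaled by exposure and vignetting, then mapped through a nonlinear response) and its inversion can be sketched as follows. The gamma curve and the polynomial vignetting coefficients are toy assumptions made here, not the functions the paper estimates:

```python
def vignette(r):
    """Even-polynomial radial falloff 1 + a1*r^2 + a2*r^4 at normalized
    radius r; the coefficients are illustrative values, not estimated ones."""
    a1, a2 = -0.3, 0.05
    return 1.0 + a1 * r * r + a2 * r ** 4

def response(irradiance, gamma=2.2):
    """Toy gamma curve standing in for the estimated camera response function."""
    return irradiance ** (1.0 / gamma)

def pixel_value(radiance, r, exposure, gamma=2.2):
    """Forward model: radiance -> exposure and vignetting -> response -> pixel."""
    return response(radiance * vignette(r) * exposure, gamma)

def radiance_from_pixel(value, r, exposure, gamma=2.2):
    """Undo response, vignetting, and exposure to recover scene radiance,
    which is what radiometric alignment of a mosaic requires per pixel."""
    return (value ** gamma) / (vignette(r) * exposure)
```

Once every pixel is mapped back to radiance this way, images taken at different exposures agree in overlapping regions, which is the basis both for seamless mosaics and for HDR compositing.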
26.
In this paper we present an automatic method for calibrating a network of cameras that works by analyzing only the motion of silhouettes in the multiple video streams. This is particularly useful for automatic reconstruction of a dynamic event using a camera network in a situation where pre-calibration of the cameras is impractical or even impossible. The key contribution of this work is a RANSAC-based algorithm that simultaneously computes the epipolar geometry and synchronization of a pair of cameras only from the motion of silhouettes in video.
Our approach involves first independently computing the fundamental matrix and synchronization for multiple pairs of cameras in the network. In the next stage the calibration and synchronization for the complete network is recovered from the pairwise information. Finally, a visual-hull algorithm is used to reconstruct the shape of the dynamic object from its silhouettes in video. For unsynchronized video streams with sub-frame temporal offsets, we interpolate silhouettes between successive frames to get more accurate visual hulls. We show the effectiveness of our method by remotely calibrating several different indoor camera networks from archived video streams.
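The RANSAC machinery at the heart of the abstract above generalizes beyond silhouettes. A generic hypothesize-and-verify skeleton looks like the following; the line-fitting model used to exercise it is a stand-in chosen here, whereas the paper instead hypothesizes a fundamental matrix plus temporal offset from silhouette motion:

```python
import random

def ransac(data, fit, error, n_sample, threshold, iters=500, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample
    and keep the hypothesis that explains the most data within threshold."""
    rng = random.Random(seed)  # seeded for reproducibility
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit(rng.sample(data, n_sample))
        if model is None:          # degenerate sample
            continue
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

With a minimal sample of two points and a point-to-line residual, this recovers a line despite gross outliers; the same loop structure supports any model for which a minimal-sample fit and a residual can be defined.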
27.
Camera networks have gained increased importance in recent years. Existing approaches mostly use point correspondences between different camera views to calibrate such systems. However, it is often difficult or even impossible to establish such correspondences. But even without feature point correspondences between different camera views, if the cameras are temporally synchronized then the data from the cameras are strongly linked together by the motion correspondence: all the cameras observe the same motion. The present article therefore develops the necessary theory to use this motion correspondence for general rigid as well as planar rigid motions. Given multiple static affine cameras which observe a rigidly moving object and track feature points located on this object, what can be said about the resulting point trajectories? Are there any useful algebraic constraints hidden in the data? Is a 3D reconstruction of the scene possible even if there are no point correspondences between the different cameras? And if so, how many points are sufficient? Is there an algorithm which guarantees finding the correct solution to this highly non-convex problem? This article addresses these questions and thereby introduces the concept of low-dimensional motion subspaces. The constraints provided by these motion subspaces enable an algorithm which ensures finding the correct solution to this non-convex reconstruction problem. The algorithm is based on multilinear analysis, matrix and tensor factorizations. Our new approach can handle extreme configurations, e.g. a camera in a camera network tracking only one single point. Results on synthetic as well as on real data sequences act as a proof of concept for the presented insights.
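The low-rank structure behind the motion-subspace idea can be illustrated with a toy case: for a translating object seen by an affine camera, each entry of the frames-by-points measurement matrix is a per-point term plus a per-frame term, so the matrix has rank at most 2 however large it is. A small Gaussian-elimination rank check (a stand-in for the SVD-based factorizations used in the article) makes this visible:

```python
def matrix_rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting;
    a toy substitute for SVD-based rank estimation."""
    M = [row[:] for row in M]           # work on a copy
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        if rank == rows:
            break
        pivot = max(range(rank, rows), key=lambda r: abs(M[r][c]))
        if abs(M[pivot][c]) < tol:
            continue                    # no usable pivot in this column
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank:
                f = M[r][c] / M[rank][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Frames-by-points matrix W[t][j] = u[j] + v[t]: a per-point term plus a
# per-frame term, as arises for pure translation under an affine camera.
u = [1.0, 2.0, 4.0, 7.0]   # arbitrary per-point values
v = [0.0, 3.0, 5.0]        # arbitrary per-frame values
W = [[uj + vt for uj in u] for vt in v]
```

The 3×4 matrix W built this way has rank 2, and the same bound holds for any number of frames and points; it is this kind of low-dimensional subspace constraint that the article exploits to reconstruct without cross-camera point correspondences.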