Related Articles
Found 20 related articles (search time: 609 ms)
1.
Structure from motion with wide circular field of view cameras   (cited 2 times: 0 self-citations, 2 by others)
This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180° field of view and for which the standard perspective camera model is not sufficient, e.g., cameras equipped with circular fish-eye lenses Nikon FC-E8 (183°), Sigma 8 mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by the central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that the epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences, which can be used with accurate noncentral models in a bundle adjustment to obtain accurate 3D scene reconstruction. Noncentral camera models are dealt with, and results are shown for catadioptric cameras with parabolic and spherical mirrors.
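The RANSAC loop this abstract relies on can be sketched generically. The toy below fits a 2D line with a 2-point minimal solver; the paper's minimal solver instead estimates epipolar geometry by solving a polynomial eigenvalue problem. The function name and thresholds here are illustrative, not from the paper:

```python
import numpy as np

def ransac_line(points, n_iters=200, thresh=0.05, rng=None):
    """Generic RANSAC skeleton: fit y = a*x + b to points with outliers.

    Stands in for the paper's minimal-solver + RANSAC scheme; the real
    method replaces the 2-point line fit with a polynomial-eigenvalue
    solver for epipolar geometry.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-12:
            continue  # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the consensus set
    A = np.c_[points[best_inliers, 0], np.ones(best_inliers.sum())]
    a, b = np.linalg.lstsq(A, points[best_inliers, 1], rcond=None)[0]
    return (a, b), best_inliers
```

The consensus set returned here plays the role of the "true point correspondences" selected from the contaminated tentative matches.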

2.
Three-dimensional (3-D) models of outdoor scenes are widely used for object recognition, navigation, mixed reality, and so on. Because such models are often made manually at high cost, automatic 3-D reconstruction has been widely investigated. In related work, a dense 3-D model is generated by using a stereo method. However, such approaches cannot use several hundred images together for dense depth estimation, because it is difficult to accurately calibrate a large number of cameras. In this paper, we propose a dense 3-D reconstruction method that first estimates the extrinsic camera parameters of a hand-held video camera, and then reconstructs a dense 3-D model of the scene. In the first process, extrinsic camera parameters are estimated by automatically tracking a small number of predefined markers of known 3-D position together with natural features. Then, several hundred dense depth maps obtained by multi-baseline stereo are combined in a voxel space. In this way, a dense 3-D model of the outdoor scene can be acquired accurately from several hundred input images captured by a hand-held video camera.
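A minimal sketch of combining many per-frame depth maps by robust voting, in the spirit of the voxel-space merging described above. A per-pixel median over co-registered maps stands in for the paper's voxel-space integration; the function name and vote threshold are assumptions:

```python
import numpy as np

def fuse_depth_maps(depth_maps, vote_thresh=3):
    """Fuse per-frame depth maps by per-pixel median, keeping only pixels
    observed (finite depth) in at least `vote_thresh` maps.

    Toy stand-in for combining several hundred multi-baseline stereo
    depth maps in a voxel space; NaN marks "no measurement".
    """
    stack = np.stack(depth_maps)            # (n_frames, H, W)
    votes = np.isfinite(stack).sum(axis=0)  # observation count per pixel
    fused = np.nanmedian(stack, axis=0)     # robust to outlier depths
    fused[votes < vote_thresh] = np.nan     # too few observations: reject
    return fused
```

The median makes a single bad depth estimate in one frame harmless, which is the same motivation as voting in a voxel grid.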

3.
Objective: In computer vision and photogrammetry, multi-view images are widely used for high-accuracy 3D reconstruction of scenes. High-accuracy calibration of the camera intrinsics and of the fixed relative poses between cameras is the key step, and this paper proposes a method for fast camera calibration under strong constraints. Method: Six mutually independent constraints between the cameras are used to fully exploit the geometric conditions of the system and fix its inherent relations; an adjustment model under strong constraints is then derived from the collinearity equations and applied in self-calibrating bundle adjustment, with matching performed between adjacent stereo cameras, achieving fast calibration of the multi-camera system. Results: Experiments verify that adding the strong constraints increases the number of redundant observations in the adjustment and improves calibration accuracy and robustness. Conclusion: A camera calibration system was built, a method for fast camera calibration under strong constraints was proposed, and 3D human-body reconstruction was carried out; the method can be extended to the calibration of multi-camera stereo measurement systems composed of many cameras.

4.
Image-based hair modeling methods enable artists to produce abundant 3D hair models. However, the reconstructed models often fail to preserve structural details such as uniformly distributed hair roots, interior strands that grow in line with the real distribution, and exterior strands that match the images. In this paper, we propose a novel approach that constructs a realistic 3D hair model from a hybrid orientation field generated from four component fields. The first field makes the surface structure of the hairstyle match the input images as closely as possible. The second keeps the hair roots and interior strands consistent with the actual distribution. The third confines traced hair strands to the hair volume. The fourth keeps the growth direction at each point of a strand compatible with its predecessor. To generate these fields, we construct high-confidence 3D strand segments from the orientation field of the point cloud and from 2D traced strands. Hair strands grow automatically from uniformly distributed hair roots according to the hybrid orientation field, and an energy minimization strategy optimizes the entire 3D hair model. We demonstrate that our approach preserves the structural details of 3D hair models.

5.
The majority of methods for the automatic surface reconstruction of an environment from an image sequence have two steps: Structure-from-Motion and dense stereo. From the computational standpoint, it would be interesting to avoid dense stereo and to generate a surface directly from the sparse cloud of 3D points and their visibility information provided by Structure-from-Motion. Previous attempts to solve this problem are quite limited: the surface is non-manifold or has zero genus, and the experiments are done on small scenes or objects using a few dozen images. Our solution does not have these limitations. Furthermore, we experiment with hand-held or helmet-held catadioptric cameras moving in a city and generate 3D models for camera trajectories longer than one kilometer.

6.
A high-accuracy 3D reconstruction algorithm for uncalibrated images   (cited 1 time: 1 self-citation, 0 by others)
Reconstructing 3D scenes from uncalibrated images has wide applications. This paper presents a 3D reconstruction algorithm for multiple uncalibrated views, based mainly on factorization and bundle adjustment. First, factorization yields the camera projection matrices and object-point coordinates in projective space; then, using the orthogonality of rotation matrices and the rank-3 constraint on the dual absolute quadric, the projective space is upgraded to Euclidean space; finally, bundle adjustment is applied for optimization. The method simultaneously recovers the camera intrinsic and extrinsic parameters, the distortion coefficients, and the 3D scene coordinates. Simulation experiments show that, within a 1000 mm × 1000 mm × 400 mm volume, when the image-point detection error is within 0–1 pixel and 0–2 pixels, the error of the reconstructed 3D points is 0.1530 mm and 0.6712 mm, respectively. Within a 500 mm × 500 mm × 200 mm volume, the 3D point error in real experiments is within 0.3 mm. The proposed algorithm is stable and reliable and can guide practical engineering.
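The factorization step described above can be illustrated with the classical affine (Tomasi–Kanade-style) simplification: center the measurement matrix and take a rank-3 SVD. This is only a sketch of the idea; the paper works with projective factorization and a Euclidean upgrade via the dual absolute quadric, which this toy omits:

```python
import numpy as np

def affine_factorize(W):
    """Affine factorization of a 2F x N measurement matrix W.

    Returns motion M (2F x 3), shape S (3 x N), and per-row image
    centroids t, such that W ≈ M @ S + t. The recovered M and S are
    defined only up to an invertible 3x3 affine transform; the upgrade
    to a metric frame (as in the abstract) is a separate step.
    """
    t = W.mean(axis=1, keepdims=True)   # image centroids act as translations
    Wc = W - t                          # centered matrix has rank <= 3
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * s[:3]                # camera (motion) part
    S = Vt[:3]                          # structure (shape) part
    return M, S, t
```

With noise-free affine projections the rank-3 factorization reproduces the measurement matrix exactly, which is the property the projective version exploits as well.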

7.
The notion of a virtual camera for optimal 3D reconstruction is introduced. Instead of planar perspective images that collect many rays at a fixed viewpoint, omnivergent cameras collect a small number of rays at many different viewpoints. The resulting 2D manifold of rays is arranged into two multiple-perspective images for stereo reconstruction. We call such images omnivergent images, and the process of reconstructing the scene from such images omnivergent stereo. This procedure is shown to produce 3D scene models with minimal reconstruction error, due to the fact that for any point in the 3D scene, two rays with maximum vergence angle can be found in the omnivergent images. Furthermore, omnivergent images are shown to have horizontal epipolar lines, enabling the application of traditional stereo matching algorithms without modification. Three types of omnivergent virtual cameras are presented: spherical omnivergent cameras, center-strip cameras, and dual-strip cameras.

8.
In this paper we address the problem of recovering 3D non-rigid structure from a sequence of images taken with a stereo pair. We extend existing non-rigid factorization algorithms to the stereo camera case and present an algorithm to decompose the measurement matrix into the motion of the left and right cameras and the 3D shape, represented as a linear combination of basis shapes. The added constraints in the stereo case are that both cameras view the same structure and that the relative orientation between the cameras is fixed. Our focus is on the recovery of flexible 3D shape rather than on the correspondence problem. We propose a method to compute reliable 3D models of deformable structure from stereo images; our experiments with real data show that improved reconstructions can be achieved with it. The algorithm includes a non-linear optimization step that minimizes image reprojection error and imposes the correct structure on the motion matrix by choosing an appropriate parameterization. We show that 3D shape and motion estimates can be successfully disambiguated after bundle adjustment and demonstrate this on synthetic and real image sequences. While this optimization step is proposed for the stereo camera case, it can be readily applied to non-rigid structure recovery from a monocular video sequence. Electronic supplementary material is available for this article and accessible to authorised users.

9.
Modeling the energy performance of existing buildings enables quick identification and reporting of potential areas for building retrofit. However, current modeling practices using energy simulation tools do not model the energy performance of buildings at the element level. As a result, potential retrofit candidates caused by construction defects and degradations are not represented. Furthermore, due to manual modeling and calibration processes, their application is often time-consuming. Current application of 2D thermography for building diagnostics also faces several challenges due to the large number of unordered, non-geo-tagged images. To address these limitations, this paper presents a new computer vision-based method for automated 3D energy performance modeling of existing buildings using thermal and digital imagery captured by a single thermal camera. First, using a new image-based 3D reconstruction pipeline consisting of Graphics Processing Unit (GPU)-based Structure-from-Motion (SfM) and Multi-View Stereo (MVS) algorithms, the geometrical conditions of an existing building are reconstructed in 3D. Next, a 3D thermal point cloud model of the building is generated by a new 3D thermal modeling algorithm. This algorithm involves a one-time thermal camera calibration, derivation of the relative transformation by forming the epipolar geometry between thermal and digital images, and the MVS algorithm for dense reconstruction. By automatically superimposing the 3D building and thermal point cloud models, 3D spatio-thermal models are formed, which enable users to visualize, query, and analyze temperatures at the level of 3D points. The underlying algorithms for generating and visualizing the 3D spatio-thermal models and the 3D-registered digital and thermal images are presented in detail. The proposed method is validated for several interior and exterior locations of a typical residential building and an instructional facility. The experimental results show that inexpensive digital and thermal imagery can be converted into ubiquitous reporters of the actual energy performance of existing buildings. The proposed method expedites the modeling process and has the potential to be used as a rapid and robust building diagnostic tool.

10.
A stereo-vision system for support of planetary surface exploration   (cited 2 times: 0 self-citations, 2 by others)
In this paper, we present a system that was developed for the European Space Agency (ESA) for the support of planetary exploration. The system that is sent to the planetary surface consists of a rover and a lander. The lander contains a stereo head equipped with a pan-tilt mechanism. This vision system is used both for modeling the terrain and for localization of the rover. Both tasks are necessary for the navigation of the rover. Due to the stress that occurs during the flight, a recalibration of the stereo-vision system is required once it is deployed on the planet. Practical limitations make it unfeasible to use a known calibration pattern for this purpose; therefore, a new calibration procedure had to be developed that could work on images of the planetary environment. This automatic procedure recovers the relative orientation of the cameras and the pan and tilt axes, as well as the exterior orientation for all the images. The same images are subsequently used to reconstruct the 3-D structure of the terrain. For this purpose, a dense stereo-matching algorithm is used that (after rectification) computes a disparity map. Finally, all the disparity maps are merged into a single digital terrain model. In this paper, a simple and elegant procedure is proposed that achieves that goal. The fact that the same images can be used for both calibration and 3-D reconstruction is important, since, in general, the communication bandwidth is very limited. In addition to navigation and path planning, the 3-D model of the terrain is also used for virtual-reality simulations of the mission, wherein the model is texture mapped with the original images. The system has been implemented, and the first tests on the ESA planetary terrain testbed were successful.

11.
杨军  石传奎  党建武 《计算机应用》2011,31(6):1566-1568
A robust 3D reconstruction method from image sequences is proposed. First, optimal parameters are estimated from two images; new images are then added and sparse bundle adjustment is applied to minimize the geometric error of the measured image coordinates. The 3D structure and camera parameters are then globally optimized to improve the robustness of the reconstruction. Experimental results show that the method improves reconstruction accuracy and robustness and faithfully reproduces the 3D model of the object.

12.
To achieve efficient, high-accuracy, low-cost full-view 3D reconstruction of objects, a method is proposed that fuses shading constraints with depth-camera data. For single-frame reconstruction, RGB-D depth images are fused with shape from shading (SFS): an additional shading constraint on top of the raw depth data refines the depth values. For registration of adjacent frames, fast point feature histograms (FPFH) are matched and random sample consensus (RANSAC) filters wrong correspondences to solve a coarse registration matrix, which is then refined with the iterative closest point (ICP) algorithm to obtain the inter-frame registration. For the full-view reconstruction, bundle adjustment optimizes the camera poses, eliminating accumulated error so that the first and last frames coincide exactly, and the frames are fused into a complete model. Because shading information from the object surface is incorporated, the resulting 3D models are smoother and contain more surface detail, improving reconstruction accuracy. Moreover, the method reconstructs 3D objects with multiple albedos under natural light from a single photograph per frame, broadening its applicability. The whole experiment can be performed with a hand-held depth camera and requires no turntable, making operation more convenient.
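The coarse-to-fine registration described above ends with ICP. A minimal point-to-point ICP with an SVD (Kabsch) transform solver might look as follows; this uses brute-force nearest neighbours on toy-sized clouds and assumes the FPFH + RANSAC coarse alignment has already been applied (all names are illustrative):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q
    (Kabsch algorithm via SVD), with a reflection guard."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(src, dst, n_iters=30):
    """Point-to-point ICP: refine the alignment of src onto dst.

    Returns accumulated (R, t) such that src @ R.T + t ≈ dst.
    """
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        # brute-force nearest neighbours (fine for toy clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

This is why the coarse FPFH + RANSAC step matters: plain ICP only converges from a nearby initial pose.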

13.
Geometric fusion for a hand-held 3D sensor   (cited 2 times: 0 self-citations, 2 by others)
This article presents a geometric fusion algorithm developed for the reconstruction of 3D surface models from hand-held sensor data. Hand-held systems allow full 3D movement of the sensor to capture the shape of complex objects. Techniques previously developed for reconstruction from conventional 2.5D range image data cannot be applied to hand-held sensor data. A geometric fusion algorithm is introduced to integrate the measured 3D points from a hand-held sensor into a single continuous surface. The new geometric fusion algorithm is based on the normal-volume representation of a triangle, which enables incremental transformation of an arbitrary mesh into an implicit volumetric field function. The system is demonstrated for reconstruction of surface models from both hand-held sensor data and conventional 2.5D range images. Received: 30 August 1999 / Accepted: 21 January 2000

14.
3D face reconstruction based on binocular stereo vision   (cited 2 times: 0 self-citations, 2 by others)
Building realistic 3D face models remains a highly challenging problem. With the wide use of 3D face models in virtual reality, video surveillance, 3D animation, face recognition, and other fields, 3D face reconstruction has become a research focus in computer graphics and computer vision. This paper proposes a 3D face reconstruction method based on binocular stereo vision that requires neither a 3D laser scanner nor a generic face model. First, two calibrated cameras capture a frontal image pair of the face; image rectification aligns the epipolar lines of the pair and compensates for lens distortion. For stereo matching, facial edge feature points with accurate, reliable disparities are selected as seed pixels, and their disparities seed a region-growing process along horizontal scanlines under epipolar, monotonicity, and matched-edge-feature constraints, yielding a disparity map of the whole face region and improving both the speed and the accuracy of correspondence matching. Finally, the 3D coordinates of scattered face points are computed from the camera calibration results and the disparity map, and the face point cloud is triangulated, subdivided, and smoothed. Experimental results show that the method generates smooth, realistic 3D face models, demonstrating its effectiveness.
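The final back-projection step above, computing 3D coordinates from the calibration and the disparity map, follows the standard rectified-stereo relations Z = fB/d, X = (u - cx)Z/f, Y = (v - cy)Z/f. A sketch under assumed parameter names (f, B, cx, cy come from the calibration; none of these values are from the paper):

```python
import numpy as np

def disparity_to_points(disp, f, B, cx, cy):
    """Back-project a disparity map from a rectified stereo pair to 3D.

    disp: (H, W) disparity in pixels; non-positive entries mean "no match".
    f: focal length in pixels, B: baseline in metres, (cx, cy): principal point.
    Returns an (H, W, 3) array of (X, Y, Z) points, NaN where invalid.
    """
    v, u = np.indices(disp.shape)               # pixel coordinates
    valid = disp > 0
    Z = np.where(valid, f * B / np.where(valid, disp, 1.0), np.nan)
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)
```

The resulting scattered points are what the triangulation, subdivision, and smoothing stages then operate on.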

15.
An effective approach is proposed for 3D urban scene reconstruction in the form of a point cloud with semantic labeling. Starting from high-resolution oblique aerial images, our approach proceeds through three main stages: geographic reconstruction, geometrical reconstruction, and semantic reconstruction. The absolute position and orientation of all cameras relative to the real world are recovered in the geographic reconstruction stage. Then, in the geometrical reconstruction stage, an improved multi-view stereo matching method produces 3D dense points with color and normal information by taking into account prior knowledge of aerial imagery. Finally, the point cloud is classified into three classes (building, vegetation, and ground) by a rule-based hierarchical approach in the semantic reconstruction step. Experiments on a complex urban scene show that the proposed three-stage approach generates reasonable reconstruction results robustly and efficiently. Comparing our final semantic reconstruction with the manually labeled ground truth yields classification accuracies from 86.75% to 93.02%.

16.
Developable surfaces have been extensively studied in computer graphics because they are involved in a large body of applications. This type of surface has also been used in computer vision and document processing in the context of three-dimensional (3D) reconstruction for book digitization and augmented reality. Indeed, the shape of a smoothly deformed piece of paper can be very well modeled by a developable surface. Most existing developable surface parameterizations do not handle boundaries or are driven by overly large parameter sets; both characteristics become issues in the context of developable surface reconstruction from real observations. Our main contribution is a generative model of bounded developable surfaces that solves these two issues. Our model is governed by intuitive parameters whose number depends on the actual deformation, including the "flat shape boundary". A vast majority of the existing image-based paper 3D reconstruction methods either require a tightly controlled environment or restrict the set of possible deformations. We propose an algorithm for reconstructing our model's parameters from a general smooth 3D surface interpolating a sparse cloud of 3D points. The latter is assumed to be reconstructed from images of a static piece of paper or any other developable surface. Our 3D reconstruction method is well adapted to the use of keypoint matches over multiple images. In this context, the initial 3D point cloud is reconstructed by structure-from-motion, for which mature and reliable algorithms now exist, and the thin-plate spline is used as a general smooth surface model. After initialization, our model's parameters are refined with model-based bundle adjustment. We experimentally validated our model and 3D reconstruction algorithm for shape capture and augmented reality on seven real datasets. The first six datasets consist of multiple images or videos and a sparse set of 3D points obtained by structure-from-motion.
The last dataset is a dense 3D point cloud acquired by structured light. Our implementation has been made publicly available on the authors' web home pages. Copyright © 2012 John Wiley & Sons, Ltd.

17.
The problem of projective reconstruction by minimization of the 2D reprojection error in multiple images is considered. Although bundle adjustment techniques can be used to minimize the 2D reprojection error, these methods, being based on nonlinear optimization algorithms, require a good starting point. Quasi-linear algorithms with better global convergence properties can be used to generate an initial solution before submitting it to bundle adjustment for refinement. In this paper, we propose a factorization-based method that integrates the initial search and the bundle adjustment into a single algorithm consisting of a sequence of weighted least-squares problems, in which a control parameter is initially set to a relaxed state to allow the search for a good initial solution, and subsequently tightened to force the final solution toward a minimum of the 2D reprojection error. The proposed algorithm is guaranteed to converge. Our method readily handles images with missing points.

18.
Because only a few images of a hypervelocity flying object can be captured during flight, its contour cannot be fully recovered; a 3D reconstruction optimization method based on multi-granularity matching is therefore proposed. First, multiple images of the hypervelocity object are acquired by an orthogonal shadowgraph station with two high-resolution CCD cameras, and the 2D contour of the object is extracted by image segmentation. An improved SFS algorithm then solves for the relative height and surface normal of each surface point to recover the 3D shape. Finally, image-stitching optimization reconstructs the object surface in 3D. Reconstruction of a physical model verifies the feasibility and effectiveness of the proposed method.

19.
In this paper, we present a generic, modular bundle adjustment method for pose estimation, simultaneous self-calibration, and reconstruction for multi-camera systems. In contrast to other approaches that use bearing vectors (camera rays) as observations, we extend the common collinearity equations with a general camera model and include the relative orientation of each camera w.r.t. the fixed multi-camera system frame, yielding extended collinearity equations that directly express all image observations as functions of all unknowns. Hence, we can calibrate the camera system and the cameras, reconstruct the observed scene, and/or simply estimate the pose of the system by including the corresponding parameter block in the Jacobian matrix. Apart from evaluating the implementation with comprehensive simulations, we benchmark our method against recently published methods for pose estimation and bundle adjustment for multi-camera systems. Finally, all methods are evaluated on a 6-degree-of-freedom ground-truth data set recorded with a laser tracker.
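The chain of transformations behind such extended collinearity equations, world frame to rig frame to rigidly mounted camera to pixels, can be sketched as follows (a plain pinhole model without distortion; all symbols and values are illustrative, not the paper's formulation):

```python
import numpy as np

def project_multicam(X, Rs, ts, Rc, tc, K):
    """Project world points through one camera of a rigidly mounted rig.

    X:  (N, 3) world points.
    Rs, ts: pose of the multi-camera system (world -> rig frame).
    Rc, tc: fixed relative pose of this camera w.r.t. the rig frame.
    K:  3x3 intrinsic matrix.
    Observations are thus a direct function of rig pose, relative
    pose, intrinsics, and structure -- the unknowns of the adjustment.
    """
    Xr = X @ Rs.T + ts      # world -> rig frame
    Xc = Xr @ Rc.T + tc     # rig frame -> camera frame (fixed mount)
    x = Xc @ K.T            # pinhole projection
    return x[:, :2] / x[:, 2:3]
```

In a bundle adjustment, the Jacobian of this function w.r.t. any chosen subset of (Rs, ts), (Rc, tc), K, and X is what selects which parameter blocks are estimated.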

20.
A quasi-dense approach to surface reconstruction from uncalibrated images   (cited 4 times: 0 self-citations, 4 by others)
This paper proposes a quasi-dense approach to 3D surface model acquisition from uncalibrated images. First, correspondence information and geometry are computed based on new quasi-dense point features, which are resampled subpixel points from a disparity map. The quasi-dense approach gives more robust and accurate geometry estimates than the standard sparse approach. Robustness is measured as the success rate of fully automatic geometry estimation with all involved parameters fixed; accuracy is measured by a fast gauge-free uncertainty estimation algorithm. The quasi-dense approach also works for more widely separated images than the sparse approach and therefore requires fewer images for modeling. More importantly, it delivers a high density of reconstructed 3D points on which a surface representation can be built. This fills the gap left by the sparse approach, whose output is insufficient for surface reconstruction, which is essential for modeling and visualization applications. Second, surface reconstruction methods from the given quasi-dense geometry are developed. The algorithm optimizes new unified functionals integrating both 3D quasi-dense points and 2D image information, including silhouettes. Combining 3D data and 2D images is more robust than existing methods that use only 2D information or only 3D data. An efficient bounded regularization method is proposed to implement the surface evolution by level-set methods; its properties are discussed and proven for some cases. As a whole, a complete, automatic, and practical system of 3D modeling from raw images captured by hand-held cameras to a surface representation is proposed. Extensive experiments demonstrate the superior performance of the quasi-dense approach with respect to the standard sparse approach in robustness, accuracy, and applicability.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号