Similar Documents (11 results)
1.
In this paper, we present methods for 3D volumetric reconstruction of visual scenes photographed by multiple calibrated cameras placed at arbitrary viewpoints. Our goal is to generate a 3D model that can be rendered to synthesize new photo-realistic views of the scene. We improve upon existing voxel coloring/space carving approaches by introducing new ways to compute visibility and photo-consistency, as well as new ways to model infinitely large scenes. In particular, we describe a visibility approach that uses all possible color information from the photographs during reconstruction, photo-consistency measures that are more robust and/or require less manual intervention, and a volumetric warping method that lets these reconstruction methods be applied to large-scale scenes.
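A minimal sketch of the kind of photo-consistency test used in voxel coloring/space carving: project a voxel into the cameras that currently see it and threshold the variance of the sampled colors. The projection helper, the `visibility` flags, and the threshold `tau` are illustrative assumptions, not the authors' actual measure.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (3,) with a 3x4 camera matrix P; returns pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def photo_consistent(voxel_center, cameras, images, visibility, tau=15.0):
    """Illustrative photo-consistency test: keep a voxel if the colors of its
    projections into all unoccluded views have a small standard deviation.
    `visibility[i]` is assumed to say whether camera i currently sees the voxel."""
    colors = []
    for i, (P, img) in enumerate(zip(cameras, images)):
        if not visibility[i]:
            continue
        u, v = project(P, voxel_center)
        if 0 <= int(v) < img.shape[0] and 0 <= int(u) < img.shape[1]:
            colors.append(img[int(v), int(u)].astype(float))
    if len(colors) < 2:            # too few observations to decide; keep conservatively
        return True
    colors = np.stack(colors)
    return colors.std(axis=0).mean() < tau    # hypothetical threshold tau
```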

2.
This paper introduces a novel volumetric methodology, "appearance cloning," as a viable solution for achieving more photo-consistent scene recovery, including greatly improved geometric recovery, from a set of photographs taken at arbitrarily distributed camera viewpoints, while avoiding many of the problems associated with previous stereo-based and volumetric methodologies. We recast the photo-consistency decision for an individual voxel in volumetric space as a photo-consistent shape search in image space, generalizing the point-correspondence search between two images used in stereo-based approaches to a volumetric framework. Specifically, we introduce a self-constrained, greedy-style optimization methodology that iteratively searches for a more photo-consistent shape using a probabilistic shape photo-consistency measure and a probabilistic competition between candidate shapes. The measure evaluates the probabilistic photo-consistency of a shape by comparing the appearances captured by the cameras with those rendered from the shape using a per-pixel Maxwell model in image space. Through experiments on a variety of scene recoveries, including specular and dynamic scenes, we demonstrate that if enough appearances are available to reflect the scene's characteristics, our appearance-cloning approach can recover both the geometry and the photometry of a scene without any scene-dependent algorithm tuning.
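As a rough illustration of comparing captured appearances with appearances rendered from a candidate shape, the sketch below scores a shape by a per-pixel Gaussian likelihood of the rendering residual and lets two candidates compete greedily. The rendering callback, the noise parameter `sigma`, and the pairwise competition are assumptions for illustration, not the paper's per-pixel Maxwell model.

```python
import numpy as np

def shape_photo_consistency(captured, rendered, sigma=10.0):
    """Illustrative probabilistic photo-consistency score for one candidate shape.
    `captured` and `rendered` are lists of same-sized images, one per camera;
    the score is the mean per-pixel Gaussian likelihood of the intensity residual."""
    scores = []
    for cap, ren in zip(captured, rendered):
        residual = cap.astype(float) - ren.astype(float)
        scores.append(np.exp(-(residual ** 2) / (2.0 * sigma ** 2)).mean())
    return float(np.mean(scores))

def pick_better_shape(captured, render_fn, shape_a, shape_b):
    """Greedy competition between two candidate shapes: keep whichever renders
    into the cameras with the higher photo-consistency score."""
    score_a = shape_photo_consistency(captured, render_fn(shape_a))
    score_b = shape_photo_consistency(captured, render_fn(shape_b))
    return shape_a if score_a >= score_b else shape_b
```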

3.
To address the difficulty that multi-view stereo 3D reconstruction algorithms have with weakly textured, textureless, and specular (highlight) regions, a hole-repair algorithm for multi-view stereo reconstruction is proposed that tightly fuses the visual hull with the multi-view 3D point cloud. Taking the visual hull and the multi-view point cloud as input, the algorithm first extracts the leaf nodes inside the visual hull that satisfy a point-cloud sparsity constraint, then removes the leaf nodes wrapped around the outside of the point cloud using a ray constraint along the visual hull normals, and finally adds a surface-curvature constraint on the point cloud to suppress the influence of concave regions. Experimental results show that the algorithm effectively repairs holes on object surfaces that lack texture, yielding complete and smooth final 3D mesh models; it has adjustable parameters, is easy to implement, and is highly robust across different models.
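A toy sketch of the first filtering step described above: keep only the visual-hull leaf cells whose neighbourhood in the multi-view point cloud is sparse enough to indicate a hole. The KD-tree radius query, the array of leaf centres, and the sparsity thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def sparse_leaf_cells(leaf_centers, point_cloud, radius=0.02, max_points=5):
    """Return the leaf-cell centres (inside the visual hull) whose surrounding
    point-cloud density is below a sparsity threshold, i.e. candidate hole regions.
    `radius` and `max_points` are hypothetical tuning parameters."""
    leaf_centers = np.asarray(leaf_centers, dtype=float)
    tree = cKDTree(np.asarray(point_cloud, dtype=float))
    counts = np.array([len(tree.query_ball_point(c, radius)) for c in leaf_centers])
    return leaf_centers[counts <= max_points]
```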

4.
In this paper we study the problem of recovering the 3D shape, reflectance, and non-rigid motion properties of a dynamic 3D scene. Because these properties are completely unknown and because the scene's shape and motion may be non-smooth, our approach uses multiple views to build a piecewise-continuous geometric and radiometric representation of the scene's trace in space-time. A basic primitive of this representation is the dynamic surfel, which (1) encodes the instantaneous local shape, reflectance, and motion of a small and bounded region in the scene, and (2) enables accurate prediction of the region's dynamic appearance under known illumination conditions. We show that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called Surfel Sampling that combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time. Experimental results with the Phong reflectance model and complex real scenes (clothing, shiny objects, skin) illustrate our method's ability to explain pixels and pixel variations in terms of their underlying causes: shape, reflectance, motion, illumination, and visibility.
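Since the experiments use the Phong reflectance model, the sketch below shows the standard Phong prediction of a surfel's appearance under a single known point light. The parameter names, the single-light setup, and the coefficient values are assumptions for illustration.

```python
import numpy as np

def phong_radiance(normal, view_dir, light_dir, light_intensity,
                   k_d=0.7, k_s=0.3, shininess=20.0):
    """Standard Phong model: diffuse + specular radiance of a small surface patch.
    All direction vectors are 3-vectors; k_d, k_s, shininess are illustrative."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = k_d * max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l                 # mirror reflection of the light direction
    specular = k_s * max(np.dot(r, v), 0.0) ** shininess
    return light_intensity * (diffuse + specular)
```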

5.
Cast shadows are an informative cue to the shape of objects. They are particularly valuable for discovering an object's concavities, which are not available from other cues such as occluding boundaries. We propose a new method for recovering shape from shadows, which we call shadow carving. Given a conservative estimate of the volume occupied by an object, it is possible to identify and carve away regions of this volume that are inconsistent with the observed pattern of shadows. We prove a theorem guaranteeing that when these regions are carved away, the shape remains conservative. Shadow carving overcomes limitations of previous work on shape from shadows because it is robust to errors in shadow detection and it allows the reconstruction of objects in the round, rather than just bas-reliefs. We propose a reconstruction system that recovers shape from silhouettes and shadow carving: the silhouettes are used to build the initial conservative estimate of the object's shape, and shadow carving is used to carve out the concavities. We have simulated our reconstruction system with a commercial rendering package to explore the design parameters and assess the accuracy of the reconstruction, and we have also implemented the scheme in a table-top system and present results of scanning several objects.
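The carving rule can be illustrated roughly as follows: a surface voxel of the current conservative shape is removed if it projects, in some view, into a detected shadow region even though the current estimate says it should be directly lit by the known light. The occlusion test, shadow-mask format, and projection callback below are simplifications, not the paper's exact construction.

```python
import numpy as np

def shadow_carve_pass(voxels, occupied, project, shadow_mask, lit_by_light):
    """One illustrative carving pass over the surface voxels of a conservative volume.
    `voxels` yields (grid index, 3D centre) pairs, `project(X)` maps a 3D point to
    pixel coordinates in the current view, `shadow_mask[v, u]` is True where a cast
    shadow was detected, and `lit_by_light(X)` is a placeholder visibility test
    toward the known light source."""
    carved = occupied.copy()
    for idx, X in voxels:
        if not occupied[idx]:
            continue
        u, v = np.round(project(X)).astype(int)
        if 0 <= v < shadow_mask.shape[0] and 0 <= u < shadow_mask.shape[1]:
            # Inconsistent: the estimate says the voxel is lit, the image says shadow.
            if shadow_mask[v, u] and lit_by_light(X):
                carved[idx] = False
    return carved
```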

6.
陈坤 (Chen Kun), 刘新国 (Liu Xinguo). 《计算机工程》 (Computer Engineering), 2013(11): 235-239.
Based on the principle of ray tracing, a globally optimized multi-view 3D reconstruction method is proposed. The object's bounding box is obtained from the image silhouettes, and the geometric space containing the object is discretized into voxels. A ray is cast from each camera center through every image pixel; to determine which voxel each ray reaches, the normalized cross-correlation (NCC) is used to measure ray-voxel consistency, and the normals of surface patches in the sampled space are estimated to improve the reliability of the NCC scores. A factor-graph-based global optimization model is designed to obtain the object voxels, and, exploiting the special structure of the ray factors, an efficient belief propagation algorithm is designed that reduces the time complexity of the reconstruction from exponential to linear. Experimental results show that, compared with Markov-random-field-based reconstruction methods, the proposed method is more robust and improves the accuracy and completeness of the reconstructed model.
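The ray-voxel consistency measure above is based on normalized cross-correlation between image patches; a minimal NCC implementation is sketched below. Extracting the patches around each voxel's projections is assumed to be done elsewhere.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation between two equally sized image patches.
    Returns a value in [-1, 1]; higher means the two projections agree better."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.dot(a, b) / a.size)
```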

7.
We address the problem of estimating the three-dimensional shape and complex appearance of a scene from a calibrated set of views under fixed illumination. Our approach relies on a rank condition that must be satisfied when the scene exhibits specular + diffuse reflectance characteristics. This constraint is used to define a cost functional measuring the discrepancy between the measured images and those generated by the estimate of the scene, rather than attempting to match images to each other directly. Minimizing this functional yields the optimal estimate of the shape of the scene, represented by a dense surface, as well as its radiance, represented by four functions defined on that surface. These can be used to generate novel views that capture the non-Lambertian appearance of the scene. This research was performed while Hailin Jin was with the Computer Science Department, University of California, Los Angeles.
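As a loose illustration of using a rank condition as a consistency score, the sketch below measures how far a matrix of intensity measurements is from a given low rank via its singular values. The measurement layout and the target rank are assumptions for illustration, not the paper's exact functional.

```python
import numpy as np

def rank_residual(M, target_rank):
    """Fraction of the energy of M lying outside its best rank-`target_rank`
    approximation. Near zero means the measurements are consistent with the rank
    condition; large values signal a discrepancy to be minimized over shape/radiance."""
    s = np.linalg.svd(np.asarray(M, dtype=float), compute_uv=False)
    total = np.sum(s ** 2)
    if total == 0.0:
        return 0.0
    return float(np.sum(s[target_rank:] ** 2) / total)
```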

8.
We investigate the feasibility of reconstructing an arbitrarily-shaped specular scene (refractive or mirror-like) from one or more viewpoints. By reducing shape recovery to the problem of reconstructing individual 3D light paths that cross the image plane, we obtain three key results. First, we show how to compute the depth map of a specular scene from a single viewpoint, when the scene redirects incoming light just once. Second, for scenes where incoming light undergoes two refractions or reflections, we show that three viewpoints are sufficient to enable reconstruction in the general case. Third, we show that it is impossible to reconstruct individual light paths when light is redirected more than twice. Our analysis assumes that, for every point on the image plane, we know at least one 3D point on its light path. This leads to reconstruction algorithms that rely on an “environment matting” procedure to establish pixel-to-point correspondences along a light path. Preliminary results for a variety of scenes (mirror, glass, etc.) are also presented. Part of this research was conducted while K. Kutulakos was serving as a Visiting Scholar at Microsoft Research Asia.
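The analysis assumes that, for every pixel, at least one 3D point on its light path is known from an environment-matting step. The sketch below shows only the simple geometric step of recovering a pixel's exit ray once two such reference points (for example, matte points on two backdrop planes) are available; it is a generic construction, not the reconstruction algorithm itself.

```python
import numpy as np

def exit_ray_from_references(p_near, p_far):
    """Given two known 3D points that a pixel's light path passes through after
    leaving the specular scene (e.g. matte points on two reference planes),
    return the exit ray as (origin, unit direction)."""
    p_near = np.asarray(p_near, dtype=float)
    d = np.asarray(p_far, dtype=float) - p_near
    return p_near, d / np.linalg.norm(d)
```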

9.
Visual Modeling with a Hand-Held Camera
In this paper a complete system to build visual models from camera images is presented. The system can deal with uncalibrated image sequences acquired with a hand-held camera. Based on tracked or matched features, the relations between multiple views are computed, from which both the structure of the scene and the motion of the camera are retrieved. The ambiguity of the reconstruction is reduced from projective to metric through self-calibration. A flexible multi-view stereo matching scheme is used to obtain a dense estimate of the surface geometry. From the computed data, different types of visual models are constructed. Besides the traditional geometry- and image-based approaches, a combined approach with view-dependent geometry and texture is presented. As an application, the fusion of real and virtual scenes is also shown.
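A small sketch of the first stage of such a pipeline, matching features between two frames of a hand-held sequence and robustly estimating the fundamental matrix, using OpenCV. This covers only the two-view relation, not the self-calibration or dense matching stages; the SIFT detector and the ratio-test threshold are illustrative choices.

```python
import cv2
import numpy as np

def two_view_relation(img1, img2):
    """Match SIFT features between two frames and robustly estimate the fundamental
    matrix: the first step toward the multi-view relations used for structure and
    motion recovery (sketch only)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    mask = inliers.ravel() == 1
    return F, pts1[mask], pts2[mask]
```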

10.
In Part I of this paper we developed the theory and algorithms for performing Shape-From-Silhouette (SFS) across time. In this second part, we show how our temporal SFS algorithms can be used for human modeling and markerless motion tracking. First we build a system to acquire human kinematic models consisting of precise shape (constructed using the temporal SFS algorithm for rigid objects), joint locations, and body-part segmentation (estimated using the temporal SFS algorithm for articulated objects). Once the kinematic models have been built, we show how they can be used to track the motion of the person in new video sequences. This markerless tracking algorithm is based on the Visual Hull alignment algorithm used in both temporal SFS algorithms and utilizes both geometric (silhouette) and photometric (color) information. Electronic supplementary material is available for this article and accessible to authorised users.
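Both temporal SFS algorithms build on the Visual Hull. A minimal silhouette-carving sketch is shown below, in which a voxel survives only if it projects inside the foreground silhouette in every view; the projection helper and the binary silhouette format are illustrative assumptions.

```python
import numpy as np

def visual_hull(voxel_centers, cameras, silhouettes, project):
    """Keep the voxels whose projections fall inside the foreground silhouette in
    every calibrated view (a basic Shape-From-Silhouette / Visual Hull step).
    `project(P, X)` is assumed to return pixel coordinates (u, v)."""
    voxel_centers = np.asarray(voxel_centers, dtype=float)
    keep = np.ones(len(voxel_centers), dtype=bool)
    for P, sil in zip(cameras, silhouettes):
        for i, X in enumerate(voxel_centers):
            if not keep[i]:
                continue
            u, v = np.round(project(P, X)).astype(int)
            inside = 0 <= v < sil.shape[0] and 0 <= u < sil.shape[1] and sil[v, u]
            if not inside:
                keep[i] = False      # outside the silhouette cone of this view
    return voxel_centers[keep]
```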

11.
In this paper, we propose a new method that processes multiple synchronized video sequences and generates 3D renderings of the dynamic objects in the video. It exploits an efficient image-based reconstruction scheme that constructs and shades 3D models of objects from silhouette images by combining image-based visual hulls and view morphing. The proposed hybrid method improves the speed and quality of previous visual hull sampling methods. We designed and implemented a system based on this method which is relatively low cost and does not require any special hardware or specific environment. Copyright © 2003 John Wiley & Sons, Ltd.
