Similar Documents
10 similar documents found (search time: 171 ms)
1.
This paper presents a new class of interactive image editing operations designed to maintain consistency between multiple images of a physical 3D scene. The distinguishing feature of these operations is that edits to any one image propagate automatically to all other images, as if the (unknown) 3D scene had itself been modified. The modified scene can then be viewed interactively from any other camera viewpoint and under different scene illuminations. The approach is useful first as a power-assist that enables a user to quickly modify many images by editing just a few, and second as a means for constructing and editing image-based scene representations by manipulating a set of photographs. The approach works by extending operations like image painting, scissoring, and morphing so that they alter a scene's plenoptic function in a physically-consistent way, thereby affecting scene appearance from all viewpoints simultaneously. A key element in realizing these operations is a new volumetric decomposition technique for reconstructing a scene's plenoptic function from an incomplete set of camera viewpoints.
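Below is a minimal Python sketch of the core propagation idea, assuming the scene has already been reconstructed as a set of colored voxels seen by calibrated cameras; the helper names (`project`, `paint_voxel`) are illustrative, not from the paper, and visibility ordering is omitted.

```python
# A minimal sketch of edit propagation through a shared volumetric scene model.
import numpy as np

def project(P, X):
    """Project a 3D point X with a 3x4 camera matrix P; return pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def paint_voxel(voxels, colors, P_edit, pixel, new_color, tol=1.0):
    """Recolor every voxel projecting within `tol` pixels of the edited pixel
    in the edit view. A real system would use visibility ordering so only the
    frontmost voxel along the viewing ray is changed."""
    for i, X in enumerate(voxels):
        if np.linalg.norm(project(P_edit, X) - pixel) < tol:
            colors[i] = new_color
    return colors

# Toy scene: two voxels, one camera at the origin looking down +Z.
voxels = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
colors = np.array([[255, 0, 0], [0, 255, 0]])
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# Paint the pixel where the first voxel projects; every other view of the
# shared voxel array now reflects the edit automatically.
colors = paint_voxel(voxels, colors, P, project(P, voxels[0]), [0, 0, 255])
print(colors)
```

Because every view renders from the same shared voxel array, a single paint operation is immediately visible from all viewpoints, which is the consistency property the paper formalizes via the plenoptic function.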

2.
We investigate the feasibility of reconstructing an arbitrarily shaped specular scene (refractive or mirror-like) from one or more viewpoints. By reducing shape recovery to the problem of reconstructing individual 3D light paths that cross the image plane, we obtain three key results. First, we show how to compute the depth map of a specular scene from a single viewpoint, when the scene redirects incoming light just once. Second, for scenes where incoming light undergoes two refractions or reflections, we show that three viewpoints are sufficient to enable reconstruction in the general case. Third, we show that it is impossible to reconstruct individual light paths when light is redirected more than twice. Our analysis assumes that, for every point on the image plane, we know at least one 3D point on its light path. This leads to reconstruction algorithms that rely on an “environment matting” procedure to establish pixel-to-point correspondences along a light path. Preliminary results for a variety of scenes (mirror, glass, etc.) are also presented. Part of this research was conducted while K. Kutulakos was serving as a Visiting Scholar at Microsoft Research Asia.
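The single-bounce case can be illustrated with a small sketch: assuming environment matting supplies two known 3D pattern points on the exit ray of a pixel's light path, the path vertex is triangulated as the least-squares intersection of the camera ray and the exit ray. All names and numbers here are illustrative.

```python
import numpy as np

def closest_point_between_rays(o1, d1, o2, d2):
    """Least-squares intersection of two (possibly skew) 3D rays."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] [t s]^T = o2 - o1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    t, s = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + t * d1) + (o2 + s * d2))

# Camera ray for one pixel (camera at origin), and an exit ray recovered by
# environment matting from two known pattern points Q1, Q2 on the light path.
cam_origin = np.zeros(3)
cam_dir = np.array([0.0, 0.0, 1.0])          # pixel's viewing ray
Q1 = np.array([0.0, 1.0, 6.0])               # pattern point, near position
Q2 = np.array([0.0, 2.0, 7.0])               # pattern point, far position
exit_dir = Q2 - Q1

vertex = closest_point_between_rays(cam_origin, cam_dir, Q1, exit_dir)
# For a mirror, the surface normal at the vertex bisects the reversed
# incoming ray and the outgoing ray; for refraction, Snell's law applies.
n = -cam_dir + exit_dir / np.linalg.norm(exit_dir)
print(vertex, n / np.linalg.norm(n))
```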

3.
This paper introduces a novel volumetric methodology, “appearance-cloning,” as a viable route to improved photo-consistent scene recovery, including greatly enhanced geometric recovery, from a set of photographs taken at arbitrarily distributed camera viewpoints, while avoiding many of the problems associated with previous stereo-based and volumetric methodologies. We recast the photo-consistency decision problem for individual voxels in volumetric space as a photo-consistent shape search problem in image space, generalizing the point-correspondence search between two images used in stereo-based approaches to a volumetric framework. Specifically, we introduce a self-constrained, greedy-style optimization method that iteratively searches for a more photo-consistent shape based on a probabilistic shape photo-consistency measure, driven by probabilistic competition between candidate shapes. The measure scores the photo-consistency of a shape by comparing the appearances captured by multiple cameras with those rendered from that shape using a per-pixel Maxwell model in image space. Through scene recovery experiments, including specular and dynamic scenes, we demonstrate that when enough appearances are available to reflect the scene's characteristics, our appearance-cloning approach successfully recovers both the geometry and the photometry of a scene without any scene-dependent algorithm tuning.
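As a rough illustration (not the paper's actual measure), the sketch below scores candidate shapes with a Gaussian per-pixel likelihood standing in for the per-pixel Maxwell model, and normalizes the scores into the kind of probabilistic competition described above.

```python
# A minimal sketch of probabilistic competition between candidate shapes,
# assuming each candidate can be rendered into every camera view.
import numpy as np

def shape_consistency(captured, rendered, sigma=10.0):
    """Mean per-pixel likelihood that `rendered` explains `captured`.
    Both are (num_cameras, H, W, 3) float arrays."""
    err = np.linalg.norm(captured - rendered, axis=-1)
    return float(np.mean(np.exp(-err**2 / (2 * sigma**2))))

def compete(captured, candidate_renderings):
    """Normalize candidate scores into selection probabilities; a greedy
    search would keep the most probable shape and iterate."""
    scores = np.array([shape_consistency(captured, r) for r in candidate_renderings])
    return scores / scores.sum()

# Toy example: 2 cameras, 4x4 images, two candidate shapes.
rng = np.random.default_rng(0)
captured = rng.uniform(0, 255, (2, 4, 4, 3))
good = captured + rng.normal(0, 2, captured.shape)    # close to observed
bad = rng.uniform(0, 255, captured.shape)             # unrelated
print(compete(captured, [good, bad]))                 # good shape wins
```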

4.
Confocal Stereo     
We present confocal stereo, a new method for computing 3D shape by controlling the focus and aperture of a lens. The method is specifically designed for reconstructing scenes with high geometric complexity or fine-scale texture. To achieve this, we introduce the confocal constancy property, which states that as the lens aperture varies, the pixel intensity of a visible in-focus scene point will vary in a scene-independent way that can be predicted by prior radiometric lens calibration. The only requirement is that incoming radiance within the cone subtended by the largest aperture is nearly constant. First, we develop a detailed lens model that factors out the distortions in high-resolution SLR cameras (12 MP or more) with large-aperture lenses (e.g., f/1.2). This allows us to assemble an A×F aperture-focus image (AFI) for each pixel, which collects the undistorted measurements over all A apertures and F focus settings. In the AFI representation, confocal constancy reduces to color comparisons within regions of the AFI and leads to focus metrics that can be evaluated separately for each pixel. We propose two such metrics and present initial reconstruction results for complex scenes, as well as for a scene with known ground-truth shape. Part of this work was done while the authors were visiting Microsoft Research Asia, in the roles of research intern and Visiting Scholar, respectively.
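The following sketch shows a confocal-constancy-style focus metric for a single pixel's AFI, assuming radiometric calibration has already normalized intensities across apertures; the array contents are toy values, and the metric is a simplified stand-in for the two metrics the paper proposes.

```python
# A minimal sketch of a focus metric on one pixel's aperture-focus image (AFI).
import numpy as np

def confocal_focus(afi):
    """afi: (A, F) array of calibrated intensities for one pixel over
    A apertures and F focus settings. Under confocal constancy, the
    in-focus setting is the one whose column varies least across apertures."""
    variation = afi.std(axis=0)       # per-focus-setting spread over apertures
    return int(np.argmin(variation))

# Toy AFI: 3 apertures, 5 focus settings; focus index 2 is in focus
# (constant across apertures), the others mix in defocused neighbors.
afi = np.array([[0.9, 0.7, 0.5, 0.6, 0.8],
                [0.6, 0.6, 0.5, 0.7, 0.9],
                [0.4, 0.5, 0.5, 0.8, 1.0]])
print(confocal_focus(afi))  # -> 2
```

Because the metric touches only one pixel's AFI, it can be evaluated independently per pixel, which is what makes the approach suitable for fine-scale, geometrically complex scenes.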

5.
6.
Real-time streaming of shape deformations in a shared distributed virtual environment is a challenging task due to the difficulty of transmitting large amounts of 3D animation data to multiple receiving parties at a high frame rate. In this paper, we present a framework for streaming 3D shape deformations that allows shapes at multiple resolutions to share the same deformations simultaneously in real time. The geometry and motion of deforming mesh or point-sampled surfaces are compactly encoded, transmitted, and reconstructed using the spectra of the manifold harmonics. A receiver-based multi-resolution surface reconstruction approach is introduced, which allows deforming shapes to switch smoothly between continuous multi-resolutions. On the basis of this dynamic reconstruction scheme, a frame-rate control algorithm is further proposed to achieve rendering at interactive rates. We also demonstrate an efficient interpolation-based strategy to reduce the computation of deformations. Experiments conducted on both mesh and point-sampled surfaces show that our approach performs efficiently even when deformations of complex 3D surfaces are streamed.
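A minimal sketch of the spectral encode/transmit/decode cycle, approximating the manifold harmonics with graph-Laplacian eigenvectors and assuming sender and receiver share the mesh connectivity; the function names are illustrative.

```python
# A minimal sketch of spectral encoding of a deforming shape.
import numpy as np

def graph_laplacian(num_vertices, edges):
    L = np.zeros((num_vertices, num_vertices))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# Shared basis, computed once on both ends from the fixed connectivity.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
basis = np.linalg.eigh(graph_laplacian(4, edges))[1]   # columns = harmonics

def encode(vertices, k):
    """Sender: project a frame's vertex positions onto the first k harmonics."""
    return basis[:, :k].T @ vertices                   # (k, 3) coefficients

def decode(coeffs):
    """Receiver: reconstruct positions; evaluating the coefficients against a
    coarser or finer basis gives the multi-resolution behavior described above."""
    k = coeffs.shape[0]
    return basis[:, :k] @ coeffs

frame = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0.2]])  # one frame
coeffs = encode(frame, k=3)            # transmit k*3 floats instead of n*3
print(decode(coeffs))                  # approximate reconstruction
```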

7.
Reconstructing the World’s Museums
Virtual exploration tools for large indoor environments (e.g., museums) have so far been limited to either blueprint-style 2D maps, which lack photo-realistic views of scenes, or ground-level image-to-image transitions, which are immersive but ill-suited for navigation. Photo-realistic aerial maps would be a useful navigational guide for large indoor environments, but it is impossible to directly acquire photographs covering a large indoor environment from aerial viewpoints. This paper presents a 3D reconstruction and visualization system for automatically producing clean, well-regularized, texture-mapped 3D models of large indoor scenes from ground-level photographs and 3D laser points. The key component is a new algorithm called “inverse constructive solid geometry (CSG)” for reconstructing a scene with a CSG representation consisting of volumetric primitives, which imposes powerful regularization constraints. We also propose several novel techniques to adjust the 3D model to make it suitable for rendering 3D maps from aerial viewpoints. The visualization system enables users to easily browse a large-scale indoor environment from a bird’s-eye view, locate specific room interiors, fly into a place of interest, view immersive ground-level panoramas, and zoom out again, all with seamless 3D transitions. We demonstrate our system on various museums, including the Metropolitan Museum of Art in New York City, one of the largest art galleries in the world.
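To make the CSG representation concrete, the sketch below evaluates a hypothetical CSG tree of box primitives as an occupancy function. Note that the paper's contribution is the inverse problem, recovering such a tree from photographs and laser points, which this sketch does not attempt.

```python
# A minimal sketch of evaluating a CSG tree of volumetric primitives.
import numpy as np

def box(lo, hi):
    """Axis-aligned box primitive as an occupancy predicate."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    return lambda p: bool(np.all(p >= lo) and np.all(p <= hi))

def union(a, b):        return lambda p: a(p) or b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)

# Two rooms joined side by side, minus a column: union/difference of boxes.
rooms = union(box([0, 0, 0], [5, 4, 3]), box([5, 1, 0], [9, 3, 3]))
scene = difference(rooms, box([2, 2, 0], [2.5, 2.5, 3]))

print(scene(np.array([1.0, 1.0, 1.0])))   # True: inside first room
print(scene(np.array([2.2, 2.2, 1.0])))   # False: carved out by the column
print(scene(np.array([7.0, 2.0, 1.0])))   # True: inside second room
```

The appeal of the representation is visible even at this scale: a handful of primitives describes a watertight, well-regularized volume, which is what makes the recovered models clean enough for aerial rendering.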

8.
We present an approach to compute the perceived complexity of a given 3D shape using the similarity between its views. Previous studies of 3D shape complexity relied on geometric and/or topological properties of the shape and are not well suited to incorporating results from human shape perception, which hold that humans perceive 3D shapes as organizations of 2D views. We therefore base our approach on the (dis)similarity matrix of the shape's 2D views. Simple shapes lead to similar views, whereas complex ones result in different, dissimilar views. This is reflected in the View Similarity Graph (VSG) of a shape as tight clusters of points when the shape is simple and as increasingly dispersed points as it gets more complex. To gain a visual intuition of the VSG, we project it to 2D using Multi-Dimensional Scaling (MDS) and introduce measures that compute shape complexity from point dispersion in the resulting MDS plot. Experiments show that results obtained with our measures alleviate some of the drawbacks of previous approaches.
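A minimal sketch of this pipeline, with toy view descriptors standing in for rendered 2D views: classical MDS embeds the dissimilarity matrix, and dispersion about the centroid serves as the complexity score. All data here is synthetic.

```python
# A minimal sketch: view-dissimilarity matrix -> MDS embedding -> dispersion.
import numpy as np

def classical_mds(D, dim=2):
    """Embed an n x n dissimilarity matrix D into `dim` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D**2) @ J                    # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

def dispersion(points):
    """Mean distance to the centroid: low for simple shapes, high for complex."""
    return float(np.mean(np.linalg.norm(points - points.mean(axis=0), axis=1)))

rng = np.random.default_rng(1)
def toy_dissimilarity(spread, n=8):
    views = rng.normal(0, spread, (n, 3))        # stand-in view descriptors
    return np.linalg.norm(views[:, None] - views[None, :], axis=-1)

simple, complex_ = toy_dissimilarity(0.1), toy_dissimilarity(2.0)
print(dispersion(classical_mds(simple)), dispersion(classical_mds(complex_)))
```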

9.
10.
In this paper, we present methods for 3D volumetric reconstruction of visual scenes photographed by multiple calibrated cameras placed at arbitrary viewpoints. Our goal is to generate a 3D model that can be rendered to synthesize new photo-realistic views of the scene. We improve upon existing voxel coloring/space carving approaches by introducing new ways to compute visibility and photo-consistency, as well as model infinitely large scenes. In particular, we describe a visibility approach that uses all possible color information from the photographs during reconstruction, photo-consistency measures that are more robust and/or require less manual intervention, and a volumetric warping method for application of these reconstruction methods to large-scale scenes.
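The sketch below shows the basic carving loop that such methods build on: a voxel survives only if its projected colors agree across the cameras that see it. The visibility computation and the more robust consistency measures that are the paper's contributions are omitted.

```python
# A minimal sketch of the voxel coloring / space carving consistency check.
import numpy as np

def project(P, X):
    x = P @ np.append(X, 1.0)
    return (x[:2] / x[2]).astype(int)

def carve(voxels, images, cameras, threshold=20.0):
    """Keep only photo-consistent voxels; consistency here is the color
    standard deviation across the views into which the voxel projects."""
    kept = []
    for X in voxels:
        samples = []
        for img, P in zip(images, cameras):
            u, v = project(P, X)
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                samples.append(img[v, u].astype(float))
        if samples and np.mean(np.std(samples, axis=0)) < threshold:
            kept.append(X)
    return np.array(kept)
```

With per-voxel visibility masks and an occlusion-aware sweep order, this loop becomes the full space-carving algorithm that the paper extends.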
