Similar Documents
20 similar documents found (search time: 9 ms)
1.
We present a framework for the holographic representation and display of graphics objects. As opposed to traditional graphics representations, our approach reconstructs the light wave reflected or emitted by the original object directly from the underlying digital hologram. Our novel holographic graphics pipeline consists of several stages, including the digital recording of a full-parallax hologram, the reconstruction and propagation of its wavefront, and rendering of the final image onto conventional, framebuffer-based displays. The required view-dependent depth image is computed from the phase information inherently represented in the complex-valued wavefront. Our model also comprises a physically correct model of the camera, taking into account optical elements such as the lens and aperture. It thus allows for a variety of effects, including depth of field, diffraction, and interference, and features built-in anti-aliasing. A central feature of our framework is its seamless integration into conventional rendering and display technology, which enables us to elegantly combine traditional 3D object or scene representations with holograms. The presented work includes the theoretical foundations and allows for high-quality rendering of objects consisting of large numbers of elementary waves while keeping the hologram at a reasonable size.

2.
Light fields were introduced a decade ago as a new high-dimensional graphics rendering model. However, they have not been used widely because their applications are very specific and their storage requirements are too high. Recently, spatial imaging devices have been related to light fields. These devices allow several users to see three-dimensional (3D) images without using glasses or other intrusive elements. This paper presents a light-field model that can be rendered on an autostereoscopic spatial device. The model is viewpoint-independent and supports continuous multiresolution, foveal rendering, and integration of multiple light fields and geometric models in the same scene. We also show that it is possible to interactively examine a scene composed of several light fields and geometric models. Visibility is handled by the algorithm. Our goal is to apply our models to 3D TV and spatial imaging.

3.
In this paper, we introduce a new representation, radiance transfer fields (RTFs), for rendering interreflections in dynamic scenes under low-frequency illumination. An RTF describes the radiance transferred by an individual object to its surrounding space as a function of the incident radiance. First, the RTF is independent of the scene configuration, enabling interreflection computation in dynamic scenes. Second, RTFs fit naturally into the rendering framework of precomputed shadow fields, incurring negligible cost to add interreflection effects. Third, RTFs can be used to compute interreflections for both diffuse and glossy objects. We also show that RTF data can be highly compressed by clustered principal component analysis (CPCA), which not only reduces the memory cost but also accelerates rendering. Finally, we present experimental results demonstrating our techniques.
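The CPCA compression mentioned in the abstract can be illustrated with a minimal numpy sketch. The data here are hypothetical stand-ins (real RTF data would hold sampled transfer coefficients): each cluster of transfer vectors is stored as a mean plus a few principal components and low-dimensional coefficients.

```python
import numpy as np

def cpca_compress(X, labels, n_components=2):
    """Compress the rows of X cluster by cluster: store each cluster's
    mean, its top principal directions, and per-row coefficients
    (a CPCA-style sketch, not the paper's implementation)."""
    model = {}
    for c in np.unique(labels):
        Xc = X[labels == c]
        mu = Xc.mean(axis=0)
        # PCA of the centered cluster via SVD
        U, S, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        basis = Vt[:n_components]        # principal directions
        coeffs = (Xc - mu) @ basis.T     # low-dimensional coefficients
        model[c] = (mu, basis, coeffs)
    return model

def cpca_reconstruct(model, labels):
    """Rebuild the rows in their original order from the cluster models."""
    rows, counters = [], {c: 0 for c in model}
    for c in labels:
        mu, basis, coeffs = model[c]
        rows.append(mu + coeffs[counters[c]] @ basis)
        counters[c] += 1
    return np.array(rows)
```

When the per-cluster data is genuinely low-rank, the reconstruction is exact; for real transfer data the component count trades memory for error.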

4.
We present the 3D Video Recorder, a system capable of recording, processing, and playing three-dimensional video from multiple points of view. We first record 2D video streams from several synchronized digital video cameras and store pre-processed images to disk. An off-line processing stage converts these images into a time-varying 3D hierarchical point-based data structure and stores this 3D video to disk. We show how we can trade off 3D video quality against processing performance, and devise efficient compression and coding schemes for our novel 3D video representation. A typical sequence is encoded at less than 7 Mbps at a frame rate of 8.5 frames per second. The 3D video player decodes and renders 3D videos from hard disk in real time, providing interaction features known from common video cassette recorders, such as variable-speed forward and reverse and slow motion. 3D video playback can be enhanced with novel 3D video effects such as freeze-and-rotate and arbitrary scaling. The player builds upon point-based rendering techniques and is thus capable of rendering high-quality images in real time. Finally, we demonstrate the 3D Video Recorder on multiple real-life video sequences. ACM CSS: I.3.2 Computer Graphics—Graphics Systems, I.3.5 Computer Graphics—Computational Geometry and Object Modelling, I.3.7 Computer Graphics—Three-Dimensional Graphics and Realism

5.
A novel approach is presented to efficiently render local subsurface scattering effects. We introduce an importance sampling scheme for a practical subsurface scattering model. It leads to a simple and efficient rendering algorithm, which operates in image space, and which is even amenable for implementation on graphics hardware. We demonstrate the applicability of our technique to the problem of skin rendering, for which the subsurface transport of light typically remains local. Our implementation shows that plausible images can be rendered interactively using hardware acceleration.
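The importance-sampling idea can be illustrated with a minimal, generic sketch (not the paper's exact scattering model): image-space sample offsets are drawn from a 2D Gaussian diffusion profile via Box-Muller, so every sample carries uniform weight and the scattered value is a plain average.

```python
import math
import random

def sample_gaussian_offsets(sigma, n, rng=None):
    """Draw n image-space offsets distributed as a 2D Gaussian profile
    of standard deviation sigma (Box-Muller in polar form)."""
    rng = rng or random.Random(0)
    offsets = []
    for _ in range(n):
        u1, u2 = rng.random(), rng.random()
        r = sigma * math.sqrt(-2.0 * math.log(1.0 - u1))
        phi = 2.0 * math.pi * u2
        offsets.append((r * math.cos(phi), r * math.sin(phi)))
    return offsets

def scatter(irradiance, x, y, offsets):
    """Estimate subsurface-scattered radiance at (x, y) by averaging the
    irradiance at importance-sampled neighbor positions; because the
    samples follow the profile, the weights are uniform."""
    total = sum(irradiance(x + dx, y + dy) for dx, dy in offsets)
    return total / len(offsets)
```

On a GPU the same scheme becomes a fixed table of offsets and a short texture-gather loop per pixel.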

6.
We present a novel algorithm, IlluminationCut, for rendering images using the many-lights framework. It handles any light source that can be approximated with virtual point lights (VPLs), as well as highly glossy materials. The algorithm extends the Multidimensional Lightcuts technique by effectively creating an illumination-aware clustering of the product space of the set of points to be shaded and the set of VPLs. Additionally, the number of visibility queries for each product-space cluster is reduced by using an adaptive sampling technique. Our framework is flexible and achieves a 3–6× speedup over previous state-of-the-art methods.

7.
We present novel methods to enhance Computer Generated Holography (CGH) by introducing a complex-valued, wave-based occlusion handling method. This offers a very intuitive and efficient interface to introduce optical elements featuring physically based light interaction, exhibiting depth-of-field, diffraction, and glare effects. Furthermore, an efficient and flexible evaluation of lit objects on a full-parallax hologram leads to more convincing images. Previous illumination methods for CGH are not able to change the illumination settings of rendered holograms. In this paper we propose a novel method for real-time lighting of rendered holograms in order to change the appearance of a previously captured holographic scene. These functionalities are features of a larger wave-based rendering framework which can be combined with 2D framebuffer graphics. We present an algorithm which uses graphics hardware to accelerate the rendering.

8.
We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light fields). We show that our method outperforms state-of-the-art algorithms in the computer graphics and image processing literature.
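The 2D-to-1D conversion the authors mention rests on the standard vectorization identity vec(D1 X D2^T) = (D2 ⊗ D1) vec(X), which turns a separable 2D dictionary into one large 1D dictionary usable by any 1D sparse solver. A minimal numpy check with random stand-in dictionaries (illustrative only, not the paper's solver):

```python
import numpy as np

rng = np.random.default_rng(1)
D1 = rng.normal(size=(8, 5))   # dictionary acting on patch rows
D2 = rng.normal(size=(8, 5))   # dictionary acting on patch columns
X = rng.normal(size=(5, 5))    # 2D coefficient matrix (would be sparse)

# 2D synthesis of an 8x8 patch
P = D1 @ X @ D2.T

# Equivalent 1D synthesis with the Kronecker dictionary:
# column-major vec(D1 @ X @ D2.T) == kron(D2, D1) @ vec(X)
P_1d = np.kron(D2, D1) @ X.flatten(order="F")

assert np.allclose(P.flatten(order="F"), P_1d)
```

In practice the Kronecker product is never formed explicitly; solvers apply D1 and D2 separately for efficiency.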

9.
We introduce an image-based representation, called volumetric billboards, allowing for the real-time rendering of semi-transparent and visually complex objects arbitrarily distributed in a 3D scene. Our representation offers a full parallax effect from any viewing direction and improved anti-aliasing of distant objects. It correctly handles transparency between multiple and possibly overlapping objects without requiring any primitive sorting. Furthermore, volumetric billboards can be easily integrated into common rasterization-based renderers, which allows for their concurrent use with polygonal models and standard rendering techniques such as shadow mapping. The representation is based on volumetric images of the objects and on a dedicated real-time volume rendering algorithm that takes advantage of the GPU geometry shader. Our examples demonstrate the applicability of the method in many cases, including level-of-detail representation for multiple intersecting complex objects, volumetric textures, animated objects, and construction of high-resolution objects by assembling instances of low-resolution volumetric billboards.

10.
We introduce a GPU-friendly technique that efficiently exploits the highly structured nature of urban environments to ensure rendering quality and interactive performance for city exploration tasks. Central to our approach is a novel discrete representation, called the BlockMap, for the efficient encoding and rendering of a small set of textured buildings far from the viewer. A BlockMap compactly represents a set of textured vertical prisms with a bounded on-screen footprint. BlockMaps are stored in small fixed-size texture chunks and efficiently rendered through GPU raycasting. BlockMaps can be seamlessly integrated into hierarchical data structures for interactive rendering of large textured urban models. We illustrate an efficient output-sensitive framework in which a visibility-aware traversal of the hierarchy renders components close to the viewer with textured polygons and employs BlockMaps for faraway geometry. Our approach provides a bounded-size far-distance representation of cities, naturally scales with improving shader technology, and outperforms current state-of-the-art approaches. Its efficiency and generality are demonstrated with the interactive exploration of a large textured model of the city of Paris on a commodity graphics platform.

11.
In this paper, we propose a stereo method specifically designed for image-based rendering. For effective image-based rendering, the interpolated views need only be visually plausible. The implication is that the extracted depths do not need to be correct, as long as the recovered views appear to be correct. Our stereo algorithm relies on over-segmenting the source images. Computing match values over entire segments rather than single pixels provides robustness to noise and intensity bias. Color-based segmentation also helps to more precisely delineate object boundaries, which is important for reducing boundary artifacts in synthesized views. The depths of the segments for each image are computed using loopy belief propagation within a Markov Random Field framework. Neighboring MRFs are used for occlusion reasoning and ensuring that neighboring depth maps are consistent. We tested our stereo algorithm on several stereo pairs from the Middlebury data set, and show rendering results based on two of these data sets. We also show results for video-based rendering.

12.
We present a generic and versatile framework for interactive editing of 3D video footage. Our framework combines the advantages of conventional 2D video editing with the power of more advanced, depth-enhanced 3D video streams. Our editor takes 3D video as input and writes both 2D and 3D video formats as output. Its underlying core data structure is a novel 4D spatio-temporal representation which we call the video hypervolume. Conceptually, the processing loop comprises three fundamental operators: slicing, selection, and editing. The slicing operator allows users to visualize arbitrary hyperslices of the 4D data set. The selection operator labels subsets of the footage for spatio-temporal editing; it includes a 4D graph-cut based algorithm for object selection. The actual editing operators include cut & paste, affine transformations, and compositing with other media, such as images and 2D video. For high-quality rendering, we employ EWA splatting with view-dependent texturing and boundary matting. We demonstrate the applicability of our methods to post-production of 3D video.

13.
Hardware-Accelerated Rendering of Photo Hulls

14.
We present a new motion-compensated hierarchical compression scheme (HMLFC) for encoding light field images (LFI) that is suitable for interactive rendering. Our method combines two different approaches, motion compensation schemes and hierarchical compression methods, to exploit redundancies in LFI. The motion compensation schemes efficiently capture the redundancies in local regions of the LFI (local coherence), and the hierarchical schemes capture the redundancies present across the entire LFI (global coherence). Our hybrid approach combines the two schemes, effectively capturing both local and global coherence to improve the overall compression rate. We compute a tree from the LFI using a hierarchical scheme and use phase-shifted motion compensation techniques at each level of the hierarchy. Our representation provides random access to the pixel values of the light field, which makes it suitable for interactive rendering applications with a small run-time memory footprint. Our approach is GPU-friendly and allows parallel decoding of LF pixel values. We highlight the performance on two-plane parameterized light fields and obtain a compression ratio of 30–800× with a PSNR of 40–45 dB. Overall, we observe a ~2–5× improvement in compression rates using HMLFC over prior light field compression schemes that provide random access capability. In practice, our algorithm can render new views of resolution 512 × 512 on an NVIDIA GTX-980 at ~200 fps.
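The PSNR figures quoted above follow the standard definition, 10·log10(peak² / MSE); a minimal generic sketch (not the paper's evaluation code):

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB for images whose values lie
    in [0, peak]. Identical images yield +inf."""
    mse = np.mean((np.asarray(reference, dtype=float) -
                   np.asarray(reconstruction, dtype=float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.1 against a peak of 1.0 gives an MSE of 0.01 and hence 20 dB; the 40–45 dB reported above corresponds to far smaller reconstruction errors.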

15.
Light Field Imaging and Its Applications in Computer Vision
Objective: Light field imaging has only recently seen initial application in computer vision research; the related work remains scattered and unsystematic. This paper systematically surveys the development of light field imaging and representative work applying it to computer vision. Method: From the perspective of solving computer vision problems, we discuss the past decade of light field imaging research at four levels: 1) mainstream light field imaging devices and their strengths and weaknesses as computer vision sensors; 2) calibration, decoding, and preprocessing methods for light field cameras used as vision sensors; 3) image rendering and reconstruction techniques based on the 4D light field, and how they advance computer vision research; 4) feature representation methods built on 4D light field data. Results: We distill, level by level, the advantages and limitations of light field imaging for solving vision problems, analyze the underlying principles and bottlenecks, and summarize the key open problems and future trends. Conclusion: As a promising new computer vision sensor technology, light field imaging will be studied ever more widely and deeply; research on light field imaging for computer vision will strongly guide and promote the co-development of the two fields.

16.
We present a technique for approximating isotropic BRDFs and precomputed self-occlusion that enables accurate and efficient prefiltered environment map rendering. Our approach uses a nonlinear approximation of the BRDF as a weighted sum of isotropic Gaussian functions. Our representation requires a minimal amount of storage, can accurately represent BRDFs of arbitrary sharpness, and is, above all, efficient to render. We precompute visibility due to self-occlusion and store a low-frequency approximation suitable for glossy reflections. We demonstrate our method by fitting our representation to measured BRDF data, yielding high visual quality at real-time frame rates.
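A weighted sum of isotropic spherical Gaussian lobes, as used here, is cheap to evaluate; a minimal sketch with hypothetical lobe parameters (a real fit would come from measured BRDF data, and the fitting itself is nonlinear):

```python
import numpy as np

def spherical_gaussian(v, axis, sharpness, amplitude):
    """Isotropic spherical Gaussian lobe on the unit sphere:
    amplitude * exp(sharpness * (dot(v, axis) - 1)), peaking at axis."""
    return amplitude * np.exp(sharpness * (np.dot(v, axis) - 1.0))

def sg_sum(v, lobes):
    """Evaluate a BRDF slice approximated as a weighted sum of lobes;
    each lobe is a (axis, sharpness, amplitude) triple."""
    return sum(spherical_gaussian(v, p, lam, mu) for p, lam, mu in lobes)
```

At the lobe axis the sum equals the total amplitude, and it falls off with angular distance at a rate set by each lobe's sharpness, which is what makes prefiltering against an environment map tractable.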

17.
Glossy-to-glossy reflections are light bounced between glossy surfaces. Such directional light transport is important for humans to perceive glossy materials, but it is difficult to simulate. This paper proposes a new method for rendering screen-space glossy-to-glossy reflections in real time. We use spherical von Mises-Fisher (vMF) distributions to model glossy BRDFs at surfaces, and employ the screen-space directional occlusion (SSDO) rendering framework to trace indirect light transport bounced in screen space. As our main contributions, we derive a new parameterization of the vMF distribution that converts the nonlinear fit of multiple vMF distributions into a linear sum in the new space. We then present a new linear filtering technique to build MIP maps on glossy BRDFs, which allows us to create filtered radiance transfer functions at runtime and efficiently estimate indirect glossy-to-glossy reflections. We demonstrate our method in a real-time application for rendering scenes with dynamic glossy objects. Compared with screen-space directional occlusion, our approach requires only one extra texture and has negligible overhead (a 3–6% frame-rate loss), but enables glossy-to-glossy reflections.
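The vMF distribution at the heart of this method has a closed-form density on the sphere; a minimal sketch of the standard 3D form (the paper's contribution, the linearizing reparameterization, is not reproduced here):

```python
import math

def vmf_pdf(x, mu, kappa):
    """Density of the 3D von Mises-Fisher distribution on the unit
    sphere: f(x) = kappa / (4*pi*sinh(kappa)) * exp(kappa * dot(mu, x)),
    with mean direction mu and concentration kappa > 0."""
    dot = sum(a * b for a, b in zip(mu, x))
    return kappa / (4.0 * math.pi * math.sinh(kappa)) * math.exp(kappa * dot)
```

As kappa grows the lobe sharpens around mu (modeling glossier BRDFs), and as kappa approaches zero the density tends to the uniform value 1/(4π).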

18.
Image-based rendering has advanced greatly in recent years. Among these methods, the light field provides an effective 4D parameterized model, but this 4D function is restricted to occlusion-free space. To operate in environments with occlusion, the 4D light field has been extended to a 5D light field, yet that approach still cannot reach real-time display speeds. We adopt a slit-image-based 4D light field model which, at the cost of removing the viewpoint's vertical degree of freedom, greatly accelerates walkthroughs in occluded environments. The paper describes the light field representation, adaptive non-uniform sampling, and resampling, and addresses memory management, path prediction, and collision detection so that transitions between frames remain smooth during a walkthrough. Using the student activity center of the Macau University of Science and Technology as an example, we implemented a 3D walkthrough system that achieves near-real-time speeds.
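The two-plane parameterization underlying such light-field walkthroughs can be sketched as a lookup into a discretized 4D radiance array (a toy nearest-neighbor stand-in; the slit-image model and caching described above are considerably more involved):

```python
import numpy as np

def lightfield_lookup(L, u, v, s, t):
    """Nearest-neighbor radiance lookup in a discretized two-plane light
    field L[u, v, s, t], with (u, v) on the camera plane and (s, t) on
    the focal plane; all coordinates are normalized to [0, 1]."""
    def idx(x, n):
        # Map [0, 1] onto {0, ..., n-1}, clamping at the boundary.
        return min(int(round(x * (n - 1))), n - 1)
    ui, vi = idx(u, L.shape[0]), idx(v, L.shape[1])
    si, ti = idx(s, L.shape[2]), idx(t, L.shape[3])
    return L[ui, vi, si, ti]
```

A production renderer would interpolate quadrilinearly across the sixteen neighboring samples instead of snapping to the nearest one.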

19.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactive rendering of depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts, such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering with small circles of confusion. We validate the image quality of our algorithm by side-by-side comparisons with results obtained by distributed ray tracing.
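The circle of confusion that drives such depth-of-field methods follows from the thin-lens model; a minimal sketch of the standard formula (generic, not the paper's implementation):

```python
def circle_of_confusion(focal_len, aperture_diam, focus_dist, obj_dist):
    """Thin-lens circle-of-confusion diameter on the sensor for an
    object at obj_dist when the lens is focused at focus_dist
    (all distances in the same units, e.g. meters):
        c = A * f * |S2 - S1| / (S2 * (S1 - f))
    with aperture diameter A, focal length f, focus distance S1,
    and object distance S2."""
    return abs(aperture_diam * focal_len * (obj_dist - focus_dist) /
               (obj_dist * (focus_dist - focal_len)))
```

Objects in the focal plane map to a point (c = 0), and c grows as the object moves away from it, which is the per-pixel blur radius an image-space method feeds to its blurring pyramid.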

20.
This paper proposes a novel technique for converting a given animated mesh into a series of displaced subdivision surfaces. Instead of independently converting each mesh frame in the animated mesh, our technique produces displaced subdivision surfaces that share the same topology of the control mesh and a single displacement map. We first propose a conversion framework that enables sharing the same control mesh topology and a single displacement map among frames, and then present the details of the components in the framework. Each component is specifically designed to minimize the shape conversion errors that can be caused by enforcing a single displacement map. The resulting displaced subdivision surfaces have a compact representation, while reproducing the details of the original animated mesh. The representation can also be used for efficient rendering on modern graphics hardware that supports accelerated rendering of subdivision surfaces.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号