Similar Documents (20 results found)
1.
Soft Shadow Maps: Efficient Sampling of Light Source Visibility (cited 4 times: 0 self-citations, 4 by others)
Shadows, particularly soft shadows, play an important role in the visual perception of a scene by providing visual cues about the shape and position of objects. Several recent algorithms produce soft shadows at interactive rates, but they either do not scale well with the number of polygons in the scene or compute only the outer penumbra. In this paper, we present a new algorithm for computing interactive soft shadows on the GPU. Our approach provides both the inner and the outer penumbra at a modest computational cost, achieving interactive frame rates for models with hundreds of thousands of polygons. Our technique is based on a sampled image of the occluders, as in shadow-map techniques. These shadow samples are used in a novel manner: their effect is computed on a second projective shadow texture using fragment programs. In essence, the fraction of the light-source area hidden by each sample is accumulated at each texel position of this Soft Shadow Map. We include an extensive study of the approximations introduced by our algorithm, as well as of its computational costs.
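As a point of reference for the quantity such a Soft Shadow Map stores (the fraction of the light-source area hidden from a receiver point), the sketch below computes that fraction by brute force on the CPU. It is not the paper's GPU splatting algorithm; the spherical occluders, the regular light sampling, and all parameters are illustrative assumptions.

```python
# Brute-force CPU reference for the quantity a soft shadow map texel stores:
# the fraction of the light-source area hidden from a receiver point.
# This is NOT the paper's GPU algorithm, only a sketch of the target quantity;
# the spherical 'occluders' stand in for the sampled occluder geometry.
import numpy as np

def light_samples(center, u, v, n=8):
    """Regular n x n sampling of a rectangular area light spanned by u and v."""
    s = (np.arange(n) + 0.5) / n - 0.5
    return np.array([center + a * u + b * v for a in s for b in s])

def segment_blocked(p, q, occluders):
    """True if the segment p -> q intersects any occluding sphere (center, radius)."""
    d = q - p
    length = np.linalg.norm(d)
    d = d / length
    for c, r in occluders:
        t = np.clip(np.dot(c - p, d), 0.0, length)
        if np.linalg.norm(p + t * d - c) < r:
            return True
    return False

def hidden_fraction(receiver, light_center, light_u, light_v, occluders):
    """Fraction of light samples blocked from the receiver point."""
    samples = light_samples(light_center, light_u, light_v)
    blocked = sum(segment_blocked(receiver, s, occluders) for s in samples)
    return blocked / len(samples)   # value accumulated in the soft shadow map texel
```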

2.
Depth and visual hulls are useful for quick reconstruction and rendering of a 3D object from a number of reference views. However, for many scenes, especially multi-object ones, these hulls may contain significant artifacts known as phantom geometry. In depth hulls the phantom geometry appears behind the scene objects, in regions occluded from all the reference views. In visual hulls the phantom geometry may also appear in front of the objects, because there is not enough information to determine the object positions unambiguously. In this work we identify which parts of the depth and visual hulls may constitute phantom geometry. We define the reduced depth hull and the reduced visual hull as the parts of the corresponding hulls that are phantom-free. We analyze the role of depth information in identifying the phantom geometry. Based on this, we provide an algorithm for rendering the reduced depth hull at interactive frame rates and suggest an approach for rendering the reduced visual hull. The rendering algorithms take advantage of modern GPU programming techniques and bypass explicit reconstruction of the hulls, rendering the reduced depth or visual hull directly from the reference views.

3.
We present a framework for interactive sketching that allows users to create three-dimensional (3D) architectural models quickly and easily from a source drawing. The sketching process has four steps. (1) The user calibrates a viewing camera by specifying the origin and vanishing points of the drawing. (2) The user outlines surface polygons in the drawing. (3) A 3D reconstruction algorithm uses perceptual constraints to determine the closest visual fit for the polygon. (4) The user can then adjust aesthetic controls to produce several stylistic effects in the scene: a smooth transition between day and night rendering, a horizon knockout effect, and entourage figures. The major advantage of our approach lies in the combination of perception-based techniques, which allow us to minimize unnecessary interactions, and a hinging-angle scheme, which shows significant improvement in numerical stability over previous optimization-based 3D reconstruction algorithms. We also demonstrate how our reconstruction algorithm can be extended to work with perspective images, a feature unavailable in previous approaches.
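For context, one textbook relation often used when calibrating a camera from vanishing points (assuming zero skew, unit aspect ratio, and a known principal point p); the paper's actual calibration step from the specified origin and vanishing points may differ:

```latex
% Two vanishing points v_1, v_2 of orthogonal scene directions, expressed in
% image coordinates, constrain the focal length f (standard projective geometry,
% not necessarily the paper's exact procedure):
\[
  (v_1 - p)\cdot(v_2 - p) + f^2 = 0
  \qquad\Longrightarrow\qquad
  f = \sqrt{-\,(v_1 - p)\cdot(v_2 - p)} ,
\]
% which recovers f once the vanishing points and the principal point p are known.
```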

4.
We present a novel and effective method for modeling a developable surface to simulate paper bending in interactive and animation applications. The method exploits the representation of a developable surface as the envelope of rectifying planes of a curve in 3D, which is therefore necessarily a geodesic on the surface. We manipulate the geodesic to provide intuitive shape control for modeling paper bending. Our method ensures a natural continuous isometric deformation from a piece of bent paper to its flat state without any stretching. Test examples show that the new scheme is fast, accurate, and easy to use, thus providing an effective approach to interactive paper bending. We also show how to handle non-convex piecewise smooth developable surfaces.
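For reference, the classical construction the abstract alludes to can be written down directly; this is standard differential geometry rather than a formula quoted from the paper:

```latex
% Rectifying developable of a curve c(s) with tangent T, binormal B, curvature
% \kappa and torsion \tau: the envelope of its rectifying planes, ruled along the
% Darboux direction.
\[
  X(s, u) \;=\; c(s) \;+\; u\,\bigl(\tau(s)\,\mathbf{T}(s) + \kappa(s)\,\mathbf{B}(s)\bigr).
\]
% The surface X is developable, contains c, and has c as a geodesic, which is
% what makes manipulating the geodesic a natural handle for paper bending.
```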

5.
6.
Defocus Magnification (cited once: 0 self-citations, 1 by others)
A blurry background due to shallow depth of field is often desired for photographs such as portraits, but, unfortunately, small point-and-shoot cameras do not permit enough defocus because of the small diameter of their lenses. We present an image-processing technique that increases the defocus in an image to simulate the shallow depth of field of a lens with a larger aperture. Our technique estimates the spatially varying amount of blur over the image, and then uses a simple image-based technique to increase defocus. We first estimate the size of the blur kernel at edges and then propagate this defocus measure over the image. Using our defocus map, we magnify the existing blurriness, which means that we blur blurry regions and keep sharp regions sharp. In contrast to more difficult problems such as depth from defocus, we do not require precise depth estimation and do not need to disambiguate textureless regions.
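A minimal sketch of the final magnification step, assuming the per-pixel defocus map has already been estimated at edges and propagated (the estimation is the substantive part of the paper and is not reproduced here); the level count, SciPy-based blurring, and blending scheme are illustrative assumptions:

```python
# Minimal sketch: magnify defocus given an already-estimated per-pixel blur map.
# 'image' is H x W x 3 float, 'blur_map' is H x W (sigma in pixels); the blend
# across a small stack of uniformly blurred copies is a naive stand-in for the
# paper's image-based blurring.
import numpy as np
from scipy.ndimage import gaussian_filter

def magnify_defocus(image, blur_map, magnification=2.0, levels=8):
    """Blur blurry regions more while leaving sharp regions sharp."""
    target = blur_map * magnification            # desired per-pixel sigma
    sigmas = np.linspace(0.0, target.max(), levels)
    out = image.copy()
    for lo, hi in zip(sigmas[:-1], sigmas[1:]):
        blurred = gaussian_filter(image, sigma=(hi, hi, 0.0))
        w = np.clip((target - lo) / (hi - lo + 1e-6), 0.0, 1.0)[..., None]
        out = (1.0 - w) * out + w * blurred      # pixels above 'hi' take the blurrier copy
    return out
```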

7.
Adaptive Space Deformations Based on Rigid Cells (cited 5 times: 0 self-citations, 5 by others)
We propose a new adaptive space deformation method for interactive shape modeling. A novel energy formulation based on elastically coupled volumetric cells yields intuitive detail preservation even under large deformations. By enforcing rigidity of the cells, we obtain an extremely robust numerical solver for the resulting nonlinear optimization problem. Scalability is achieved using an adaptive spatial discretization that is decoupled from the resolution of the embedded object. Our approach is versatile and easy to implement, supports thin-shell and solid deformations of 2D and 3D objects, and is applicable to arbitrary sample-based representations, such as meshes, triangle soups, or point clouds.
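The abstract does not spell out the energy, so the following is only a generic sketch of an elastically coupled rigid-cell energy of this flavor, with illustrative notation; it should not be read as the paper's exact formulation:

```latex
% Generic elastically coupled rigid-cell energy (sketch, not the paper's exact
% formulation): each cell i moves by a rigid transform (R_i, t_i), and adjacent
% cells are penalized for the mismatch of their shared interface F_ij.
\[
  E \;=\; \sum_{(i,j)\ \text{adjacent}} w_{ij}
          \int_{F_{ij}} \bigl\| (R_i\,x + t_i) - (R_j\,x + t_j) \bigr\|^{2}\,\mathrm{d}A ,
  \qquad R_i \in SO(3).
\]
% Enforcing R_i to be rotations keeps each cell rigid, which is what makes the
% resulting nonlinear optimization robust under large deformations.
```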

8.
Image-based rendering (IBR) techniques allow users to create interactive 3D visualizations of scenes by taking a few snapshots. However, despite substantial progress in the field, the main barrier to better quality and more efficient IBR visualizations is a set of common, visually objectionable artifacts. These occur when scene geometry is approximate or viewpoints differ from the original shots, leading to parallax distortions, blurring, ghosting and popping errors that detract from the appearance of the scene. We argue that a better understanding of the causes and perceptual impact of these artifacts is the key to improving IBR methods. In this study we present a series of psychophysical experiments in which we systematically map out the perception of artifacts in IBR visualizations of façades as a function of their most common causes. We separate artifacts into different classes and measure how they impact visual appearance as a function of the number of images available, the geometry of the scene and the viewpoint. The results reveal a number of counter-intuitive effects in the perception of artifacts. We summarize our results in terms of practical guidelines for improving existing and future IBR techniques.

9.
We introduce a new appearance-modeling paradigm for synthesizing the internal structure of a 3D model from photographs of a few cross-sections of a real object. When the internal surfaces of the 3D model are revealed as it is cut, carved, or simply clipped, we synthesize their texture from the input photographs. Our texture synthesis algorithm is best classified as a morphing technique, which efficiently outputs the texture attributes of each surface point on demand. For determining source points and their weights in the morphing algorithm, we propose an interpolation domain based on BSP trees that naturally resembles the planar splitting of real objects. In the context of this interpolation domain, we define efficient warping and morphing operations that allow for real-time synthesis of textures. Overall, our modeling paradigm, together with its realization through our texture morphing algorithm, allows users to author 3D models that reveal highly realistic internal surfaces in a variety of artistic flavors.

10.
In this paper we present a pipeline for rendering dynamic 2D/3D line drawings efficiently. Our main goal is to create efficient static renditions and coherent animations of line drawings in a setting where lines can be added, deleted and arbitrarily transformed on-the-fly. Such a dynamic setting enables us to handle interactively sketched 2D line data, as well as arbitrarily transformed 3D line data, in a unified manner. We evaluate the proximity of screen-projected strokes to simplify them while preserving their continuity. We achieve this by using a special data structure that facilitates efficient proximity calculations in a dynamic setting. This on-the-fly proximity evaluation also facilitates the generation of appropriate visibility cues to mitigate depth ambiguities and visual clutter for 3D line data. As we perform all these operations using only line data, we can create line drawings from 3D models without any surface information. We demonstrate the effectiveness and applicability of our approach by showing several examples with initial line representations obtained from a variety of sources: 2D and 3D hand-drawn sketches and 3D salient geometry lines obtained from 3D surface representations.
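The proximity data structure itself is not described in the abstract; as a stand-in, the sketch below shows how a simple uniform screen-space grid supports the kind of dynamic insertions and neighborhood queries such a pipeline needs. The class name, cell size, and query interface are assumptions for illustration, not the paper's structure:

```python
# Illustrative stand-in for efficient 2D proximity queries on projected stroke
# points: a uniform screen-space hash grid supporting dynamic insertion.
from collections import defaultdict

class ScreenGrid:
    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)   # (ix, iy) -> list of (x, y, stroke_id)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y, stroke_id):
        self.buckets[self._key(x, y)].append((x, y, stroke_id))

    def neighbors(self, x, y, radius):
        """Return points within 'radius' of (x, y), scanning only nearby buckets."""
        r = int(radius // self.cell) + 1
        ix, iy = self._key(x, y)
        found = []
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                for px, py, sid in self.buckets.get((ix + dx, iy + dy), []):
                    if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                        found.append((px, py, sid))
        return found
```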

11.
Most 3D vector field visualization techniques suffer from visual clutter, and it remains a challenging task to effectively convey both directional and structural information of 3D vector fields. In this paper, we present a novel visualization framework that combines the advantages of clustering methods and illustrative rendering techniques to generate a concise and informative depiction of complex flow structures. Given a 3D vector field, we first generate a number of streamlines covering the important regions, based on an entropy measurement. Then we decompose the streamlines into different groups based on a categorization of vector information, such that the streamline pattern within each group is coherent or nearly coherent. For each group, we select a set of representative streamlines and render them in an illustrative fashion to enhance depth cues and succinctly show local flow characteristics. The results demonstrate that our approach can generate a visualization that is relatively free of visual clutter while facilitating the perception of salient information in complex vector fields.
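As one small, self-contained ingredient, the sketch below computes the Shannon entropy of locally sampled flow directions, which is one plausible reading of the entropy measurement used to find important regions; the binning scheme and parameters are assumptions, and the clustering and representative-selection stages are not reproduced:

```python
# Sketch of an entropy measure on local flow directions: regions whose direction
# histogram has high Shannon entropy would be treated as important and seeded
# more densely. The spherical binning here is an assumption, not the paper's.
import numpy as np

def direction_entropy(vectors, n_theta=8, n_phi=4):
    """Shannon entropy (bits) of a spherical histogram of normalized 3D directions."""
    v = vectors / (np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-12)
    theta = np.arctan2(v[:, 1], v[:, 0])             # azimuth in [-pi, pi]
    phi = np.arccos(np.clip(v[:, 2], -1.0, 1.0))     # inclination in [0, pi]
    ti = np.minimum(((theta + np.pi) / (2 * np.pi) * n_theta).astype(int), n_theta - 1)
    pi_ = np.minimum((phi / np.pi * n_phi).astype(int), n_phi - 1)
    hist = np.bincount(ti * n_phi + pi_, minlength=n_theta * n_phi).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```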

12.
Shape-aware Volume Illustration (cited once: 0 self-citations, 1 by others)
We introduce a novel volume illustration technique for regularly sampled volume datasets. The fundamental difference between previous volume illustration algorithms and ours is that our results are shape-aware, as they depend not only on the rendering styles but also on the shape styles. We propose a new data structure that is derived from the input volume and consists of a distance volume and a segmentation volume. The distance volume is used to reconstruct a continuous field around the object boundary, facilitating smooth illustrations of boundaries and silhouettes. The segmentation volume allows us to abstract or remove distracting details and noise, and to apply different rendering styles to different objects and components. We also demonstrate how to modify the shape of illustrated objects using a new 2D curve analogy technique. This provides an interactive method for learning shape variations from 2D hand-painted illustrations by drawing several lines. Our experiments on several volume datasets demonstrate that the proposed approach can achieve visually appealing and shape-aware illustrations. The feedback from medical illustrators is quite encouraging.

13.
Image-based rendering (IBR) techniques allow the capture and display of 3D environments using photographs. Modern IBR pipelines reconstruct proxy geometry using multi-view stereo, reproject the photographs onto the proxy and blend them to create novel views. The success of these methods depends on accurate 3D proxies, which are difficult to obtain for complex objects such as trees and cars. A larger number of input images does not improve reconstruction proportionally; surface extraction is challenging even from dense range scans for scenes containing such objects. Our approach does not depend on dense, accurate geometric reconstruction; instead we compensate for sparse 3D information by variational image warping. In particular, we formulate silhouette-aware warps that preserve salient depth discontinuities. This improves the rendering of difficult foreground objects, even when deviating from view interpolation. We use a semi-automatic step to identify depth discontinuities and extract a sparse set of depth constraints used to guide the warp. Our framework is lightweight and results in good-quality IBR for previously challenging environments.

14.
Reviewing the literature of a research field is an important task for academics. One could use Google-like information-seeking tools, but would often end up with too many possibly related papers, as well as the papers in the associated citation network. During such a process, a user can easily get lost after following a few links for searching or cross-referencing. It is also difficult to identify the relevant or important papers in the resulting huge collection. Our work, called PaperVis, endeavors to provide a user-friendly interface that helps users quickly grasp the intrinsically complex citation-reference structures among a specific group of papers. We modify the existing Radial Space Filling (RSF) and Bullseye View techniques to arrange the involved papers as a node-link graph that better depicts the relationships among them while saving screen space. PaperVis applies visual cues to present node attributes and their transitions across interactions, and it categorizes papers into semantically meaningful hierarchies to facilitate subsequent literature exploration. We conduct experiments on the InfoVis 2004 Contest Dataset to demonstrate the effectiveness of PaperVis.

15.
Cerebral aneurysms result from a congenital or acquired weakness of stabilizing parts of the vessel wall and can lead to rupture and life-threatening bleeding. Current medical research concentrates on the integration of blood flow simulation results for the risk assessment of cerebral aneurysms. Scalar flow characteristics close to the aneurysm surface, such as wall shear stress, form an important part of the simulation results. Aneurysms exhibit variable surface shapes with only a few landmarks; the exploration and mental correlation of different surface regions is therefore a difficult task. In this paper, we present an approach for the intuitive and interactive overview visualization of near-wall flow data that is mapped onto the surface of a 3D model of a cerebral aneurysm. We combine a multi-perspective 2D projection map with a standard 3D visualization and present techniques to facilitate the correlation between the 3D model and the related 2D map. An informal evaluation with four experienced radiologists showed that the map-based overview indeed improves surface exploration. Furthermore, different color schemes were discussed and, as a result, an appropriate color scheme for the visual analysis of wall shear stress is presented.

16.
The curve-skeleton of a 3D object is an abstract geometrical and topological representation of its 3D shape. It maps the spatial relation of geometrically meaningful parts to a graph structure. Each arc of this graph represents a part of the object with roughly constant diameter or thickness, and approximates its centerline. This makes the curve-skeleton suitable for describing and handling articulated objects such as characters for animation. We present an algorithm to extract such a skeleton on-the-fly, both from point clouds and from polygonal meshes. The algorithm is based on a deformable model evolution that captures the object's volumetric shape. The deformable model involves multiple competing fronts, which evolve inside the object in a coarse-to-fine manner. We first track these fronts' centers, and then merge and filter the resulting arcs to obtain a curve-skeleton of the object. The process inherits the robustness of the reconstruction technique, being able to cope with noisy input, intricate geometry and complex topology. It creates a natural segmentation of the object and computes a center curve for each segment while maintaining a full correspondence between the skeleton and the boundary of the object.

17.
Fast contact handling of soft articulated characters is a computationally challenging problem, in part due to the complex interplay between skeletal and surface deformation. We present a fast, novel algorithm based on a layered representation for articulated bodies that enables physically plausible simulation of animated characters with a high-resolution deformable skin in real time. Our algorithm gracefully captures the dynamic skeleton-skin interplay through a novel formulation of elastic deformation in the pose space of the skinned surface. The algorithm also overcomes the computational challenges by robustly decoupling skeleton and skin computations using careful approximations of Schur complements, and by efficiently performing collision queries by exploiting the layered representation. With this approach, we can simultaneously handle large contact areas, produce rich surface deformations, and capture the collision response of a character's skeleton.
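The decoupling rests on the standard Schur complement of a block linear system; with generic block labels (a for skeleton, s for skin, which are illustrative rather than the paper's notation), eliminating the skin block gives:

```latex
% Generic block elimination via a Schur complement; the block labels are
% illustrative, not the paper's notation.
\[
  \begin{pmatrix} A_{aa} & A_{as} \\ A_{sa} & A_{ss} \end{pmatrix}
  \begin{pmatrix} x_a \\ x_s \end{pmatrix}
  =
  \begin{pmatrix} b_a \\ b_s \end{pmatrix}
  \quad\Longrightarrow\quad
  \bigl(A_{aa} - A_{as} A_{ss}^{-1} A_{sa}\bigr)\, x_a
  \;=\; b_a - A_{as} A_{ss}^{-1} b_s ,
\]
% after which the skin follows as x_s = A_{ss}^{-1} (b_s - A_{sa} x_a).
% Approximating the Schur complement term is what allows the skeleton and skin
% solves to be decoupled and carried out efficiently.
```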

18.
In contrast to 2D scatterplots, the existing 3D variants have the advantage of showing one additional data dimension, but they suffer from inadequate spatial and shape perception and are therefore not well suited to displaying structures in the underlying data. We improve shape perception by applying a new illumination technique to the point-cloud representation of 3D scatterplots. Points are classified as locally linear, planar, or volumetric structures according to the eigenvalues of the inverse distance-weighted covariance matrix at each data element. Based on this classification, different lighting models are applied: codimension-2 illumination, surface illumination, and emissive volumetric illumination. Our technique lends itself to efficient GPU point rendering and can be combined with existing methods such as semi-transparent rendering, halos, and depth- or attribute-based color coding. The user can interactively navigate in the dataset and manipulate the classification and other visualization parameters. We demonstrate our visualization technique by showing examples of multi-dimensional data and of generic point-cloud data.
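A compact sketch of the classification step described above; the neighbor gathering, the eigenvalue-ratio thresholds, and the small regularization constant are illustrative assumptions rather than the paper's exact parameters:

```python
# Sketch: classify a point as locally linear, planar, or volumetric from the
# eigenvalues of an inverse distance-weighted covariance of its neighbors.
import numpy as np

def classify_point(p, neighbors, ratio=4.0):
    """p: (3,) query point; neighbors: (k, 3) nearby data points; 'ratio' is a
    placeholder threshold on eigenvalue dominance."""
    d = np.linalg.norm(neighbors - p, axis=1)
    w = 1.0 / (d + 1e-9)                                  # inverse distance weights
    mean = (w[:, None] * neighbors).sum(0) / w.sum()
    c = neighbors - mean
    cov = (w[:, None, None] * c[:, :, None] * c[:, None, :]).sum(0) / w.sum()
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]          # lam[0] >= lam[1] >= lam[2]
    if lam[0] > ratio * lam[1]:
        return "linear"        # rendered with codimension-2 (line) illumination
    if lam[1] > ratio * lam[2]:
        return "planar"        # rendered with surface illumination
    return "volumetric"        # rendered with emissive volumetric illumination
```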

19.
Many processing operations, such as compression, watermarking, and remeshing, are nowadays applied to 3D meshes. These processes are mostly driven and/or evaluated using simple distortion measures such as the Hausdorff distance and the root mean square error; however, these measures do not correlate with human visual perception, while the visual quality of the processed meshes is a crucial issue. In that context we introduce a full-reference 3D mesh quality metric. This metric can compare two meshes with arbitrary connectivity or sampling density and produces a score that predicts the visibility of the distortion between them; a visual distortion map is also created. Our metric outperforms its counterparts from the state of the art in terms of correlation with mean opinion scores from subjective experiments on three existing databases. Additionally, we present an application of this new metric to improving the rate-distortion evaluation of recent progressive compression algorithms.

20.
Depth-of-Field Rendering by Pyramidal Image Processing (cited once: 0 self-citations, 1 by others)
We present an image-based algorithm for interactive rendering of depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering with small circles of confusion. We validate the image quality provided by our algorithm by side-by-side comparisons with results obtained by distributed ray tracing.
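One piece of such a pipeline that can be written down safely is the per-pixel circle of confusion derived from the depth map under a thin-lens model, which determines how strongly each pixel should be blurred; the camera parameters below are illustrative, and the pyramid-based blurring and disocclusion passes themselves are not reproduced:

```python
# Per-pixel circle of confusion from a depth map via the thin-lens model; the
# parameter values and the conversion to pixels are illustrative assumptions.
import numpy as np

def circle_of_confusion(depth, focus_dist, focal_len, f_number, pixels_per_mm):
    """depth, focus_dist, focal_len in mm; returns the CoC diameter in pixels."""
    depth = np.maximum(depth, 1e-6)                      # guard against zero depth
    aperture = focal_len / f_number                      # aperture diameter (mm)
    coc_mm = aperture * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))
    return coc_mm * pixels_per_mm
```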
