Similar Articles
20 similar articles found (search time: 31 ms)
1.
Depth and visual hulls are useful for quick reconstruction and rendering of a 3D object from a number of reference views. However, for many scenes, especially multi-object ones, these hulls may contain significant artifacts known as phantom geometry. In depth hulls the phantom geometry appears behind the scene objects, in regions occluded from all the reference views. In visual hulls the phantom geometry may also appear in front of the objects, because there is not enough information to unambiguously determine the object positions. In this work we identify which parts of the depth and visual hull might constitute phantom geometry. We define the notions of the reduced depth hull and the reduced visual hull as the parts of the corresponding hull that are phantom-free. We analyze the role of depth information in identifying the phantom geometry. Based on this, we provide an algorithm for rendering the reduced depth hull at interactive frame rates and suggest an approach for rendering the reduced visual hull. The rendering algorithms take advantage of modern GPU programming techniques and bypass explicit reconstruction of the hulls, rendering the reduced depth or visual hull directly from the reference views.
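As background for where phantom geometry comes from, here is a minimal sketch of the classic visual-hull membership test; the projection functions and silhouette masks are hypothetical stand-ins, not the paper's interface.

    import numpy as np

    def in_visual_hull(point, projections, silhouettes):
        # A 3D point lies in the visual hull iff it projects inside the
        # object silhouette in every reference view. The test can accept
        # points that lie on no real object; that surplus is the phantom
        # geometry the paper characterizes and removes.
        for project, mask in zip(projections, silhouettes):
            u, v = project(point)              # hypothetical pixel projection
            h, w = mask.shape
            if not (0 <= u < w and 0 <= v < h) or not mask[int(v), int(u)]:
                return False
        return True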

2.
We present a new solution for temporal coherence in non-photorealistic rendering (NPR) of animations. Given the conflicting goals of preserving the 2D aspect of the style and conveying the 3D scene motion, any such solution is a trade-off. We observe that primitive-based methods in NPR can be seen as texture-based methods when using large numbers of primitives; our key insight is that this process is similar to sparse convolution noise in procedural texturing. Consequently, we present a new primitive for NPR based on Gabor noise that preserves the 2D aspect of noise, conveys the 3D motion of the scene, and is temporally continuous. We can thus use standard techniques from procedural texturing to create various styles, which we demonstrate in interactive NPR applications. We also present a user study to evaluate this and existing solutions, and to provide more insight into the trade-off implied by temporal coherence. The results of the study indicate that maintaining coherent motion is important, and that our new solution provides a good compromise between the 2D aspect of the style and the 3D motion.
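To make the sparse-convolution connection concrete, a minimal sketch of 2D Gabor noise follows; the impulse list and parameter defaults are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def gabor_kernel(dx, dy, a, f0, omega):
        # Gabor kernel: an isotropic Gaussian envelope modulated by an
        # oriented cosine wave with frequency f0 and orientation omega.
        envelope = np.exp(-np.pi * a * a * (dx * dx + dy * dy))
        carrier = np.cos(2.0 * np.pi * f0 * (dx * np.cos(omega) + dy * np.sin(omega)))
        return envelope * carrier

    def gabor_noise(x, y, impulses, a=0.05, f0=0.0625):
        # Sparse convolution: sum weighted kernels centred at random impulse
        # positions (e.g. drawn from a Poisson process per grid cell).
        return sum(w * gabor_kernel(x - xi, y - yi, a, f0, om)
                   for xi, yi, w, om in impulses)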

3.
Variable bit rate compression can achieve better quality and compression rates than fixed bit rate methods. Nonetheless, GPU texturing uses lossy fixed bit rate methods like DXT to allow random access and on-the-fly decompression during rendering. Changes in games and GPUs since DXT was developed make its compression artifacts less acceptable and texture bandwidth less of an issue, but texture size is a serious and growing problem. Games use a large total volume of texture data but have a much smaller active set. We present a new paradigm that separates GPU decompression from rendering: rendering reads uncompressed data, avoiding the need for random-access decompression. We demonstrate this paradigm with a new variable bit rate lossy texture compression algorithm that is well suited to the GPU, including a new GPU-friendly formulation of range decoding and a new texture compression scheme averaging a 12.4:1 lossy compression ratio on 471 real game textures at a quality level similar to traditional DXT compression. The total game texture set is stored on the GPU in compressed form and decompressed for use in a fraction of a second per scene.
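For orientation, a minimal conventional byte-wise range decoder is sketched below. It assumes a matching encoder of the same style; the paper's contribution is a GPU-friendly reformulation, which this CPU-style sketch does not reproduce.

    class RangeDecoder:
        TOP = 1 << 24
        MASK = 0xFFFFFFFF

        def __init__(self, data):
            self.data, self.pos = data, 0
            self.range, self.code = self.MASK, 0
            for _ in range(4):                  # prime with the first 4 bytes
                self.code = ((self.code << 8) | self._byte()) & self.MASK

        def _byte(self):
            b = self.data[self.pos] if self.pos < len(self.data) else 0
            self.pos += 1
            return b

        def decode_freq(self, total):
            # Locate the next symbol: a value in [0, total).
            self.range //= total
            return min(self.code // self.range, total - 1)

        def consume(self, cum_freq, freq):
            # Remove the decoded symbol's interval, then renormalize.
            self.code -= cum_freq * self.range
            self.range *= freq
            while self.range < self.TOP:
                self.code = ((self.code << 8) | self._byte()) & self.MASK
                self.range <<= 8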

4.
In this paper, we present an efficient approach for the interactive rendering of large-scale urban models, which can be integrated seamlessly with virtual globe applications. Our scheme fills the gap between standard approaches for distant views of digital terrains and the polygonal models required for close-up views. Our work is oriented towards city models with real photographic textures of the building facades. At the heart of our approach is a multi-resolution tree of the scene defining multi-level relief impostors. Key ingredients of our approach include the pre-computation of a small set of zenithal and oblique relief maps that capture the geometry and appearance of the buildings inside each node, a rendering algorithm combining relief mapping with projective texture mapping which uses only a small subset of the pre-computed relief maps, and the use of wavelet compression to simulate two additional levels of the tree. Our scheme runs considerably faster than polygon-based approaches while producing images with higher quality than competing relief-mapping techniques. We show both analytically and empirically that multi-level relief impostors are suitable for interactive navigation through large urban models.
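As a sketch of the relief-mapping core such impostors build on, the standard linear-plus-binary height-field search is shown below; the lookup convention and step counts are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def relief_intersect(height_map, entry_uv, ray_dir_uvh, n_linear=32, n_binary=6):
        # March a ray through a normalized height field in texture space
        # (u, v, height); ray_dir_uvh must have a negative height component
        # so the ray descends from the top surface toward the geometry.
        def height_at(uv):
            h, w = height_map.shape
            return height_map[int(uv[1] * (h - 1)), int(uv[0] * (w - 1))]

        p = np.array([entry_uv[0], entry_uv[1], 1.0])   # enter at the top
        step = np.asarray(ray_dir_uvh, dtype=float) / n_linear
        for _ in range(n_linear):                        # linear search
            if p[2] <= height_at(p[:2]):
                break
            p = p + step
        for _ in range(n_binary):                        # binary refinement
            step *= 0.5
            p = p - step if p[2] <= height_at(p[:2]) else p + step
        return p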

5.
This paper presents a method for the accurate rendering of path-based surface details such as grooves, scratches and similar features. The method is based on a continuous representation of the features in texture space, and the rendering is performed by means of two approaches: one for isolated or non-intersecting grooves and another for special situations such as intersections or groove ends. The proposed solutions perform correct antialiasing and take visibility and inter-reflections into account with little computational effort and low memory requirements. Compared to anisotropic BRDFs and scratch models, we have no limitations on the distribution of grooves over the surface or on their geometry, thus allowing more general patterns. Compared to displacement mapping techniques, we can efficiently simulate features of all sizes without requiring additional geometry or multiple representations.

6.
Realistic animation and rendering of the ocean is an important aspect of simulators, movies and video games. By nature, the ocean is a difficult problem for computer graphics: it is a dynamic system that combines wave trains at all scales, from kilometric to millimetric. Worse, the ocean is usually viewed at several distances at once, from very close to the viewpoint out to the horizon, compounding the multi-scale issue and resulting in aliasing problems. The illumination comes from natural light sources (the Sun and the sky dome), is also dynamic, and often accentuates the aliasing issues. In this paper, we present a new algorithm for modelling, animating, illuminating and rendering the ocean, in real time, at all scales and for all viewing distances. Our algorithm is based on a hierarchical representation combining geometry, normals and BRDF. For each viewing distance, we compute a simplified version of the geometry and encode the missing details into the normals and the BRDF, depending on the level of detail required. We then use this hierarchical representation for illumination and rendering. Our algorithm runs in real time and produces highly realistic pictures and animations.
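As background on the wave trains being combined, a minimal sketch of superposing sinusoidal trains under the deep-water dispersion relation follows; it illustrates the multi-scale input, not the paper's hierarchical representation.

    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def dispersion(k):
        # Deep-water dispersion relation: omega = sqrt(g * |k|),
        # tying each train's temporal frequency to its wavelength.
        return np.sqrt(G * k)

    def ocean_height(x, y, t, trains):
        # Superpose wave trains; each train is (amplitude, kx, ky, phase),
        # with wavelengths anywhere from kilometric to millimetric.
        h = 0.0
        for amp, kx, ky, phase in trains:
            omega = dispersion(np.hypot(kx, ky))
            h += amp * np.cos(kx * x + ky * y - omega * t + phase)
        return h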

7.
We propose an extension to the state-of-the-art text rendering technique based on sampling a 2D signed distance field from a texture. This extension significantly improves the visual quality of sharp corners, the most problematic feature for the original technique to reproduce. We achieve this by using multiple distance fields in conjunction, which together provide a more thorough representation of the given glyph's (or any other 2D shape's) geometry. This multi-channel distance field representation is described along with its application in shader-based rendering. The rendering process itself remains very simple and efficient, and is fully compatible with previous monochrome distance fields. The introduced method of multi-channel distance field construction requires a vector representation of the input shape. A comparative measurement of rendering quality shows that the error in the output image can be reduced by up to several orders of magnitude.
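A minimal sketch of how a multi-channel distance field is typically evaluated at render time: take the per-texel median of the channels, then map the signed distance to coverage. The [0, 1] encoding with 0.5 at the edge and the screen-space range parameter are assumptions for illustration.

    import numpy as np

    def msdf_coverage(rgb_sample, px_range=4.0):
        # The median of the three channels reconstructs sharp corners that a
        # single-channel distance field would round off.
        sd = sorted(rgb_sample)[1]                        # median of R, G, B
        return float(np.clip((sd - 0.5) * px_range + 0.5, 0.0, 1.0))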

8.
In the context of 3D reconstruction, we present a static multi-texturing system that yields a seamless texture atlas, calculated by combining the colour information from several photos of the same subject covering most of its surface. These pictures can be provided by taking several shots with a single camera when reconstructing a static object, or by a set of synchronized cameras when dealing with a human or any other moving object. We suppress the colour seams due to image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by colour blending techniques. Our system is robust enough to compensate for the almost inevitable inaccuracies of 3D meshes obtained with visual hull-based techniques: errors in silhouette segmentation, inherently poor handling of concavities, and so on.

9.
We present an optimized pruning algorithm that allows for considerable geometry reduction in large botanical scenes while maintaining high and coherent rendering quality. We improve upon previous techniques by applying model-specific geometry reduction functions and optimized scaling functions. For this we introduce Precision and Recall (PR) as a measure of rendering quality and show how PR scores can be used to predict better scaling values. We conducted a user study in which subjects adjusted the scaling value, showing that the predicted scalings match the preferred ones. Finally, we extend the originally purely stochastic geometry prioritization for pruning to account for view-optimized geometry selection, which allows global scene information, such as occlusion, to be taken into consideration. We demonstrate our method on the real-time rendering of scenes with thousands of complex tree models.
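One plausible reading of Precision and Recall as a rendering-quality measure, computed on boolean coverage masks of a pruned rendering versus a full-geometry reference; the mask-based formulation is an assumption for illustration, not necessarily the paper's exact definition.

    import numpy as np

    def precision_recall(rendered_mask, reference_mask):
        # Both inputs are boolean pixel masks. Precision penalizes pixels
        # the pruned rendering adds; recall penalizes pixels it loses.
        tp = np.logical_and(rendered_mask, reference_mask).sum()
        fp = np.logical_and(rendered_mask, ~reference_mask).sum()
        fn = np.logical_and(~rendered_mask, reference_mask).sum()
        precision = tp / max(tp + fp, 1)
        recall = tp / max(tp + fn, 1)
        return precision, recall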

10.
We propose a novel rendering method that supports interactive BRDF editing as well as relighting of a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At rendering time, the illumination of a point is computed by multiplying the radiance transfer vectors of the basis BRDFs by the incoming radiance from gather samples and then linearly combining the results, weighted by user-controlled parameters. To improve accuracy, a set of sub-area samples associated with each gather sample refines the glossy reflection of geometric details without increasing the precomputation time. We demonstrate the approach with a number of examples that verify the real-time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
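Per shaded point (and per colour channel), the linear combination the abstract describes reduces to the sketch below; array shapes and names are illustrative assumptions.

    import numpy as np

    def shade_point(transfer_vectors, incoming_radiance, weights):
        # transfer_vectors: (n_basis, n_samples) precomputed radiance
        # transfer, one row per basis BRDF; incoming_radiance: (n_samples,)
        # radiance from the gather samples; weights: user-controlled
        # BRDF-editing parameters.
        per_basis = transfer_vectors @ incoming_radiance  # dot product per basis
        return np.dot(weights, per_basis)                 # linear BRDF combination

Because editing the BRDF only changes the weights, the precomputed transfer never has to be rebuilt, which is what makes the editing interactive.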

11.
Texture bombing is a texture synthesis approach that saves memory by stopping short of assembling the output texture from the arrangement of input texture patches; instead, the arrangement is used directly at run time to texture surfaces. However, several problems remain in need of better solutions. One problem is improving texture diversification. A second problem is that mipmapping cannot be used because texel data is not stored explicitly. The lack of an appropriate level-of-detail (LoD) scheme results in severe minification artefacts. We present a just-in-time texturing method that addresses these two problems. Texture diversification is achieved by modelling a texture patch as an umbrella, a versatile hybrid 3-D geometry and texture structure with parameterized appearance. The LoD is adapted continuously with a hierarchical algorithm that acts directly on the arrangement map. Results show that our method can model and render the diversity present in nature with only small texture memory requirements.

12.
In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view, in real time, directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.
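To illustrate the derivation process that is interleaved with rendering, a toy shape-grammar expansion follows; the rule encoding is a deliberately simplified assumption (real grammars rewrite 3D scopes, and the paper evaluates them on the GPU).

    def derive(axiom, rules, max_depth=8):
        # Repeatedly rewrite non-terminal shapes until only terminal geometry
        # remains. 'rules' maps a symbol to a function returning successor
        # (symbol, params) tuples; names here are illustrative only.
        frontier, terminals = [axiom], []
        for _ in range(max_depth):
            if not frontier:
                break
            next_frontier = []
            for symbol, params in frontier:
                if symbol in rules:
                    next_frontier.extend(rules[symbol](params))
                else:
                    terminals.append((symbol, params))
            frontier = next_frontier
        return terminals + frontier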

13.
Common practice in brain research and brain surgery involves the multi-modal acquisition of brain anatomy and brain activation data. These highly complex three-dimensional data have to be displayed simultaneously in order to convey spatial relationships. Unique challenges in information and interaction design have to be solved in order to keep the visualization both sufficiently complete and uncluttered. The visualization method presented in this paper addresses these issues with a hybrid combination of polygonal rendering of brain structures and direct volume rendering of activation data. Advanced rendering techniques, including illustrative display styles and ambient occlusion calculations, enhance the clarity of the visual output. The presented rendering pipeline produces real-time frame rates and offers a high degree of configurability. Newly designed interaction and measurement tools enable the user to explore the data at large, but also to inspect specific features closely. We demonstrate the system in the context of a cognitive neurosciences dataset. An initial informal evaluation shows that our visualization method is deemed useful for clinical research.

14.
This paper describes a model for example-based, photo-realistic rendering of eye movements in 3D facial animation. Based on 3D scans of a face with different gaze directions, the model captures the motion of the eyeball along with the deformation of the eyelids and the surrounding skin. These deformations are represented in a 3D morphable model.
Unlike the standard procedure in facial animation, the eyeball is not modeled as a rotating 3D sphere located behind the skin surface. Instead, the visible region of the eyeball is part of a continuous face mesh, and displacements of the iris as well as occlusions by the lids are modeled in a texture mapping approach. The algorithm avoids artifacts that are widely encountered in 3D facial animation, and it presents a new concept for handling occlusions and discontinuities in morphing algorithms.
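A minimal sketch of the linear combination underlying a 3D morphable model, stated under the usual stacked-vertex convention; the array layout is an assumption for illustration, not the paper's data format.

    import numpy as np

    def morph(mean_shape, basis, coeffs):
        # A morphable model expresses a shape (here, the eye region) as the
        # mean plus a linear combination of example deformation vectors.
        # mean_shape: (3n,) stacked vertex coordinates; basis: (k, 3n);
        # coeffs: (k,) blending weights, e.g. driven by gaze direction.
        return mean_shape + np.asarray(coeffs) @ basis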

15.
Dynamic Sampling and Rendering of Algebraic Point Set Surfaces
Algebraic Point Set Surfaces (APSS) define a smooth surface from a set of points using local moving least-squares (MLS) fitting of algebraic spheres. In this paper we first revisit the spherical fitting problem and provide a new, more generic solution that includes intuitive parameters for controlling the curvature of the fitted spheres. As a second contribution we present a novel real-time rendering system for such surfaces, using a dynamic up-sampling strategy combined with a conventional splatting algorithm for high-quality rendering. Our approach also includes a new view-dependent geometric error tailored to efficient and adaptive up-sampling of the surface. One of the key features of our system is its high degree of flexibility, which enables us to achieve high performance even for highly dynamic data or complex models by exploiting temporal coherence at the primitive level. We also address the issue of efficient spatial search data structures with respect to construction, access and GPU friendliness. Finally, we present an efficient parallel GPU implementation of the algorithms and search structures.
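For reference, an algebraic sphere in scalar-field form, plus the recovery of its explicit center and radius; this is a minimal sketch of the representation being fitted, with names chosen for illustration.

    import numpy as np

    def algebraic_sphere_field(x, u0, u_lin, u_quad):
        # Scalar field s(x) = u0 + u_lin . x + u_quad * |x|^2; the fitted
        # surface is the zero set s(x) = 0. With u_quad = 0 the sphere
        # degenerates to a plane, so flat regions are handled gracefully.
        x = np.asarray(x)
        return u0 + np.dot(u_lin, x) + u_quad * np.dot(x, x)

    def sphere_center_radius(u0, u_lin, u_quad):
        # Recover the explicit center and radius when u_quad != 0.
        center = -np.asarray(u_lin) / (2.0 * u_quad)
        radius = np.sqrt(max(np.dot(center, center) - u0 / u_quad, 0.0))
        return center, radius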

16.
Prostate cancer is one of the most prevalent cancers among males, and the use of magnetic resonance imaging (MRI) has been suggested for its detection. A framework is presented for scoring and visualizing various MR data in an efficient and intuitive manner. A classification method is introduced in which a cumulative score volume is created that takes each of the three acquisition types into account. This score volume is integrated into a volume rendering framework that allows the user to view the prostate gland, the multi-modal score values, and the surrounding anatomy. A visibility persistence mode is introduced to automatically avoid full occlusion of a selected score and to indicate overlaps. The use of GPU-accelerated multi-modal single-pass ray casting provides an interactive experience. User-driven importance rendering allows the user to gain insight into the data and can assist in localization of the disease and treatment planning. We evaluate our results against pathology and radiologists' determinations.

17.
Image-based rendering (IBR) techniques allow the capture and display of 3D environments using photographs. Modern IBR pipelines reconstruct proxy geometry using multi-view stereo, reproject the photographs onto the proxy and blend them to create novel views. The success of these methods depends on accurate 3D proxies, which are difficult to obtain for complex objects such as trees and cars. Large numbers of input images do not improve reconstruction proportionally; surface extraction is challenging even from dense range scans for scenes containing such objects. Our approach does not depend on dense, accurate geometric reconstruction; instead we compensate for sparse 3D information by variational image warping. In particular, we formulate silhouette-aware warps that preserve salient depth discontinuities. This improves the rendering of difficult foreground objects, even when deviating from view interpolation. We use a semi-automatic step to identify depth discontinuities and extract a sparse set of depth constraints used to guide the warp. Our framework is lightweight and produces good-quality IBR for previously challenging environments.

18.
Existing real-time volume rendering techniques that support global illumination are limited in modeling distinct realistic appearances for classified volume data, which is a desired capability in many fields of study for illustration and education. Directly extending the emission-absorption volume integral with heterogeneous material shading becomes unaffordable for real-time applications because the high-frequency, view-dependent global lighting needs to be evaluated per sample along the volume integral. In this paper, we present a decoupled shading algorithm for multi-material volume rendering that separates global incident lighting evaluation from per-sample material shading under multiple light sources. We show how the incident lighting calculation can be optimized through a sparse volume integration method. The quality, performance and usefulness of our new multi-material volume rendering method are demonstrated through several examples.
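For context, the discrete front-to-back form of the emission-absorption integral the paper starts from is sketched below; this is the standard compositing loop, not the paper's decoupled-shading algorithm.

    import numpy as np

    def composite_ray(samples):
        # Front-to-back emission-absorption compositing along one ray; each
        # sample is an (rgb, alpha) pair after classification and shading.
        color = np.zeros(3)
        transmittance = 1.0
        for rgb, alpha in samples:
            color += transmittance * alpha * np.asarray(rgb)
            transmittance *= (1.0 - alpha)
            if transmittance < 1e-3:      # early ray termination
                break
        return color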

19.
We present the design of an interactive image-based modeling tool that enables a user to quickly generate detailed 3D models with texture from a set of calibrated input images. Our main contribution is an intuitive user interface that is entirely based on simple 2D painting operations and does not require any technical expertise from the user or difficult pre-processing of the input images. One central component of our tool is a GPU-based multi-view stereo reconstruction scheme, implemented as an incremental algorithm that runs in the background during user interaction so that the user does not notice any significant response delay.

20.
Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique that performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program that extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement over previous approaches. We explain two different rendering implementations that make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. One rendering algorithm implements an iterative refinement approach for rendering light fields with per-pixel depth correction, while the other employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU-based per-fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to an unnoticeable level and yields a rendering technique that requires neither exhaustive pre-processing for 3D object reconstruction nor real-time ray-object intersection calculations at rendering time.
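The core of per-pixel depth correction can be sketched as follows: reconstruct the surface point from the stored depth, then reproject it into an adjacent camera before fetching; the camera object and its projection method are hypothetical stand-ins for illustration.

    import numpy as np

    def depth_corrected_fetch(ray_origin, ray_dir, depth_along_ray, camera):
        # Instead of sampling the adjacent light-field image along the
        # uncorrected ray, reconstruct the 3D surface point from the stored
        # depth and reproject it, which suppresses ghosting.
        surface_point = np.asarray(ray_origin) + depth_along_ray * np.asarray(ray_dir)
        u, v = camera.project(surface_point)   # illustrative projection API
        return camera.image[int(v), int(u)]    # depth-corrected texel fetch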
