20 similar documents found.
1.
In this paper we present a method for automatic interpolation between adjacent discrete levels of detail to achieve smooth LOD changes in image space. We achieve this by breaking the problem into two passes: we render the two LOD levels individually and combine them in a separate pass afterwards. The interpolation is formulated so that only one level has to be updated per frame and the other can be reused from the previous frame, thereby causing roughly the same render cost as simple non-interpolated discrete LOD rendering, incurring only the slight overhead of the final combination pass. Additionally, we describe customized interpolation schemes using visibility textures. The method was designed with ease of integration into existing engines in mind. It requires neither sorting nor blending of objects, nor does it introduce any constraints on the LODs used. The LODs can be coplanar, alpha-masked, animated, impostors, and intersecting, while still interpolating smoothly.
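As a rough illustration of the image-space combination pass described above, the following Python sketch blends two pre-rendered LOD images using per-pixel coverage and a global transition parameter. The function name, the RGBA layout and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def combine_lod_levels(lod_fine, lod_coarse, alpha):
    """Blend two pre-rendered LOD images in image space.

    lod_fine, lod_coarse: (H, W, 4) float arrays rendered from the two
    discrete LOD levels (RGB plus coverage/alpha from the visibility pass).
    alpha: scalar transition parameter in [0, 1] advancing over the LOD
    switch; 0 shows only the coarse level, 1 only the fine level.
    """
    # Per-pixel weights: where only one level covers the pixel, use it
    # directly; where both cover it, cross-fade with the global alpha.
    cov_f = lod_fine[..., 3:4]
    cov_c = lod_coarse[..., 3:4]
    w_fine = alpha * cov_f
    w_coarse = (1.0 - alpha) * cov_c
    total = np.clip(w_fine + w_coarse, 1e-6, None)
    rgb = (w_fine * lod_fine[..., :3] + w_coarse * lod_coarse[..., :3]) / total
    coverage = np.clip(cov_f + cov_c, 0.0, 1.0)
    return np.concatenate([rgb, coverage], axis=-1)
```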
2.
We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered onto one of the three layers of a destination pixel. In addition, the missing information behind foreground objects is filled in with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated by the GPU, enabling real-time post-processing for both off-line and interactive applications.
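For orientation, the sketch below shows the standard thin-lens circle-of-confusion formula and a naive depth-based split into three layers, which is the kind of per-pixel layer assignment the splatting builds on. The layer thresholds and function names are illustrative; the paper's actual assignment and occlusion handling are more involved.

```python
import numpy as np

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Signed circle-of-confusion diameter from the thin-lens model.

    depth, focus_dist and focal_len share the same world units; aperture is
    the lens diameter. Positive: behind the focal plane, negative: in front.
    """
    return aperture * focal_len * (depth - focus_dist) / (
        depth * (focus_dist - focal_len))

def assign_layer(depth, focus_dist, near_margin=0.1):
    """Naive split of a depth buffer into foreground / in-focus / background
    layers relative to the focal plane (thresholds are illustrative only)."""
    layer = np.full(depth.shape, 1, dtype=np.int8)        # in-focus layer
    layer[depth < focus_dist * (1.0 - near_margin)] = 0   # foreground
    layer[depth > focus_dist * (1.0 + near_margin)] = 2   # background
    return layer
```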
3.
Systems projecting a continuous n-dimensional parameter space to a continuous m-dimensional target space play an important role in science and engineering. If evaluating the system is expensive, however, an analysis is often limited to a small number of sample points. The main contribution of this paper is an interactive approach to enable a continuous analysis of a sampled parameter space with respect to multiple target values. We employ methods from statistical learning to predict results in real-time at any user-defined point and its neighborhood. In particular, we describe techniques to guide the user to potentially interesting parameter regions, and we visualize the inherent uncertainty of predictions in 2D scatterplots and parallel coordinates. An evaluation describes a real-world scenario in the application context of car engine design and reports feedback of domain experts. The results indicate that our approach is suitable to accelerate a local sensitivity analysis of multiple target dimensions, and to determine a sufficient local sampling density for interesting parameter regions.
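The abstract does not name a specific learning model; a Gaussian process regressor is one common choice that provides both a prediction and an uncertainty estimate per target dimension. The sketch below, using scikit-learn and synthetic data, is therefore only an assumed stand-in for the paper's surrogate model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Sampled simulation runs: n-dimensional parameters X -> m target values Y.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 3))            # 40 expensive samples, n = 3
Y = np.column_stack([np.sin(4 * X[:, 0]) + X[:, 1],
                     X[:, 2] ** 2])                # m = 2 synthetic targets

# One surrogate model per target dimension, giving a mean and an uncertainty.
models = []
for j in range(Y.shape[1]):
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-6,
                                  normalize_y=True)
    gp.fit(X, Y[:, j])
    models.append(gp)

# Prediction at a user-defined point: the mean drives the plot, the standard
# deviation drives the uncertainty encoding (e.g. point size or saturation).
query = np.array([[0.3, 0.7, 0.5]])
for j, gp in enumerate(models):
    mean, std = gp.predict(query, return_std=True)
    print(f"target {j}: {mean[0]:.3f} +/- {std[0]:.3f}")
```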
4.
Kishore Mosaliganti, Raghu Machiraju, Kun Huang, Gustavo Leone. Computer Graphics Forum, 2008, 27(3): 871-878
At a microscopic resolution, biological structures are composed of cells, red blood corpuscles (RBCs), cytoplasm and other microstructural components. There is a natural pattern in terms of distribution, arrangement and packing density of these components in biological organization. In this work, we propose to use N-point correlation functions to guide the analysis and exploration process in microscopic datasets. These functions provide useful feature spaces to aid segmentation and visualization tasks. We show 3D visualizations of mouse placenta tissue layers and mouse mammary ducts as well as 2D segmentation/tracking of clonal populations. Further confidence in our results stems from validation studies that were performed with manual ground-truth for segmentation.
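As a minimal, assumed example of the feature spaces involved, the sketch below estimates the two-point correlation function (the simplest member of the N-point family) of a synthetic 3D point set by distance binning; the bin edges and data are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist

def two_point_correlation(points, r_edges, volume):
    """Estimate the two-point correlation of a point set by histogramming
    pairwise distances and dividing by the expectation for a uniform
    (Poisson) distribution of the same density."""
    n = len(points)
    dists = pdist(points)                      # all pairwise distances
    counts, _ = np.histogram(dists, bins=r_edges)
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges ** 3)
    density = n / volume
    expected = 0.5 * n * density * shell_vol   # pairs expected if uniform
    return counts / expected                   # ~1 everywhere for random data

# Example: synthetic 3D cell-centroid coordinates in a unit cube.
pts = np.random.default_rng(1).uniform(0, 1, size=(500, 3))
g_r = two_point_correlation(pts, r_edges=np.linspace(0.01, 0.3, 15), volume=1.0)
```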
5.
Interactive Simulation of the Human Eye Depth of Field and Its Correction by Spectacle Lenses
Masanori Kakimoto, Tomoaki Tatsukawa, Yukiteru Mukai, Tomoyuki Nishita. Computer Graphics Forum, 2007, 26(3): 627-636
This paper describes a fast rendering algorithm for verification of spectacle lens design. Our method simulates refraction corrections of astigmatism as well as myopia or presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses per-vertex ray tracing, which warps the environment map and produces a real-time refracted image that is subjectively as good as ray tracing. Defocus was conventionally simulated with distribution ray tracing, for which a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed over voxels that are formed by evenly subdividing the perspective-projected space. The field values can be determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina, taking the best human accommodation effort into account. The blur field is stored as texture data and referenced by the vertex shader that displaces each vertex. Blending the multiple rendering results at an interactive frame rate produces a blurred image comparable to distribution ray-tracing output.
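A small sketch of the blur-field lookup step, assuming the precomputed per-voxel blur values are available as a 3D array: the trilinear interpolation below mimics the 3D-texture fetch a vertex shader would perform. The voxel grid in the paper is defined in perspective-projected space; this generic version ignores that detail, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_blur_field(field, positions, grid_min, grid_max):
    """Look up precomputed blur magnitudes at arbitrary 3D positions by
    trilinear interpolation over an evenly subdivided voxel grid.

    field: (Nx, Ny, Nz) array of blur values; positions: (N, 3) coordinates
    inside the axis-aligned box [grid_min, grid_max].
    """
    grid_min = np.asarray(grid_min, float)
    grid_max = np.asarray(grid_max, float)
    shape = np.asarray(field.shape, float)
    # Map world positions to fractional voxel indices.
    coords = (np.asarray(positions, float) - grid_min) / (grid_max - grid_min)
    coords = coords * (shape - 1.0)
    return map_coordinates(field, coords.T, order=1, mode="nearest")
```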
6.
T. Schultz, N. Sauber, A. Anwander, H. Theisel, H.-P. Seidel. Computer Graphics Forum, 2008, 27(3): 1063-1070
Fiber tracking is a standard tool to estimate the course of major white matter tracts from diffusion tensor magnetic resonance imaging (DT-MRI) data. In this work, we aim at supporting the visual analysis of classical streamlines from fiber tracking by integrating context from anatomical data, acquired by a T1-weighted MRI measurement. To this end, we suggest a novel visualization metaphor, which is based on data-driven deformation of geometry and has been inspired by a technique for anatomical fiber preparation known as Klingler dissection. We demonstrate that our method conveys the relation between streamlines and surrounding anatomical features more effectively than standard techniques like slice images and direct volume rendering. The method works automatically, but its GPU-based implementation allows for additional, intuitive interaction.
7.
Recent soft shadow mapping techniques based on back-projection can render high-quality soft shadows in real time. However, real-time high-quality rendering of large penumbrae is still challenging, especially when multilayer shadow maps are used to reduce single-light-sample silhouette artifacts. In this paper, we present an efficient algorithm to address this problem. We first present a GPU-friendly packet-based approach that renders a packet of neighboring pixels together to amortize the cost of computing visibility factors. Then, we propose a hierarchical technique to quickly locate the contour edges, further reducing the computation cost. Finally, we suggest a multi-view shadow map approach to reduce the single-light-sample artifact. We also demonstrate its higher image quality and higher efficiency compared to existing depth peeling approaches.
8.
Precomputed Radiance Transfer Field for Rendering Interreflections in Dynamic Scenes
Minghao Pan, Rui Wang, Xinguo Liu, Qunsheng Peng, Hujun Bao. Computer Graphics Forum, 2007, 26(3): 485-493
In this paper, we introduce a new representation – radiance transfer fields (RTF) – for rendering interreflections in dynamic scenes under low-frequency illumination. The RTF describes the radiance transferred by an individual object to its surrounding space as a function of the incident radiance. An important property of the RTF is its independence from the scene configuration, which enables interreflection computation in dynamic scenes. Furthermore, RTFs naturally fit into the rendering framework of precomputed shadow fields, incurring negligible cost to add interreflection effects. In addition, RTFs can be used to compute interreflections for both diffuse and glossy objects. We also show that RTF data can be highly compressed by clustered principal component analysis (CPCA), which not only reduces the memory cost but also accelerates rendering. Finally, we present experimental results demonstrating our techniques.
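CPCA, as commonly used for precomputed radiance transfer data, amounts to k-means clustering followed by a low-rank PCA per cluster. The sketch below illustrates that idea with scikit-learn on a random stand-in matrix; the data layout (rows as spatial samples, columns as SH coefficients) is an assumption for illustration, not the paper's exact storage scheme.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cpca_compress(data, n_clusters=8, n_components=4):
    """Clustered PCA: cluster the rows, then keep a low-rank PCA basis per
    cluster. Returns per-cluster (mean, basis, coefficients, row indices)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(data)
    compressed = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        pca = PCA(n_components=min(n_components, len(idx), data.shape[1]))
        coeffs = pca.fit_transform(data[idx])
        compressed.append((pca.mean_, pca.components_, coeffs, idx))
    return compressed

def cpca_reconstruct_row(compressed, cluster, local_row):
    # Reconstruct one sample from its cluster mean and low-rank coefficients.
    mean, basis, coeffs, _ = compressed[cluster]
    return mean + coeffs[local_row] @ basis

# Stand-in for RTF samples: rows = spatial samples, columns = SH coefficients.
rtf = np.random.default_rng(2).normal(size=(1024, 25))
blocks = cpca_compress(rtf)
```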
9.
Marios Papas, Wojciech Jarosz, Wenzel Jakob, Szymon Rusinkiewicz, Wojciech Matusik, Tim Weyrich. Computer Graphics Forum, 2011, 30(2): 503-511
We propose a novel system for designing and manufacturing surfaces that produce desired caustic images when illuminated by a light source. Our system is based on a nonnegative image decomposition using a set of possibly overlapping anisotropic Gaussian kernels. We utilize this decomposition to construct an array of continuous surface patches, each of which focuses light onto one of the Gaussian kernels, either through refraction or reflection. We show how to derive the shape of each continuous patch and arrange them by performing a discrete assignment of patches to kernels in the desired caustic. Our decomposition provides high-fidelity reconstruction of natural images using a small collection of patches. We demonstrate our approach on a wide variety of caustic images by manufacturing physical surfaces with a small number of patches.
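A much-simplified sketch of the decomposition step: fitting a grayscale target as a nonnegative combination of Gaussian kernels via non-negative least squares. Unlike the paper, the kernels here are isotropic, fixed in size and placed on a regular grid; the grid step and sigma are placeholders.

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_kernel_image(h, w, cy, cx, sigma):
    # Isotropic Gaussian footprint centered at (cy, cx) on an h x w raster.
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))

def decompose_into_gaussians(target, grid_step=8, sigma=6.0):
    """Nonnegative least-squares fit of a grayscale float image as a sum of
    isotropic Gaussian kernels placed on a coarse grid (a simplification of
    the anisotropic, overlapping decomposition described in the abstract)."""
    h, w = target.shape
    centers = [(cy, cx) for cy in range(grid_step // 2, h, grid_step)
                        for cx in range(grid_step // 2, w, grid_step)]
    A = np.column_stack([gaussian_kernel_image(h, w, cy, cx, sigma).ravel()
                         for cy, cx in centers])
    weights, _ = nnls(A, target.ravel())
    reconstruction = (A @ weights).reshape(h, w)
    return centers, weights, reconstruction

# Toy target: a bright rectangle on a dark background.
target = np.zeros((64, 64)); target[20:40, 24:44] = 1.0
centers, w, recon = decompose_into_gaussians(target)
```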
10.
Resizing to a lower resolution can alter the appearance of an image. In particular, downsampling an image causes blurred regions to appear sharper. It is useful at times to create a downsampled version of the image that gives the same impression as the original, such as for digital camera viewfinders. To understand the effect of blur on image appearance at different image sizes, we conduct a perceptual study examining how much blur must be present in a downsampled image to be perceived the same as the original. We find a complex, but mostly image-independent relationship between matching blur levels in images at different resolutions. The relationship can be explained by a model of the blur magnitude analyzed as a function of spatial frequency. We incorporate this model in a new appearance-preserving downsampling algorithm, which alters blur magnitude locally to create a smaller image that gives the best reproduction of the original image appearance.
11.
We present a design technique for colors with the purpose of lowering the energy consumption of the display device. Our approach is based on a screen-space variant energy model. The result of our design is a set of distinguishable iso-lightness colors guided by perceptual principles. We present two variations of our approach. One is based on a set of discrete user-named (categorical) colors, which are analyzed according to their energy consumption. The second is based on the constrained continuous optimization of color energy in the perceptually uniform CIELAB color space. We quantitatively compare our two approaches with a traditional choice of colors, demonstrating that we typically save approximately 40 percent of the energy. The color sets are applied to examples from the 2D visualization of nominal data and volume rendering of 3D scalar fields.
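The following sketch illustrates the general idea of ranking palettes by display energy under a simple additive per-channel model for emissive displays. The channel weights are placeholders, not the paper's measured screen-space variant energy model.

```python
import numpy as np

# Illustrative per-channel power weights for an emissive (e.g. OLED) display;
# these numbers are placeholders, not measured values from the paper.
CHANNEL_WEIGHTS = np.array([0.6, 0.8, 1.0])   # R, G, B

def srgb_to_linear(c):
    # Standard sRGB decoding to linear light.
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def palette_energy(palette_srgb):
    """Relative energy of a color palette under an additive per-channel model:
    sum over colors of the weighted linear-light channel values."""
    lin = srgb_to_linear(palette_srgb)
    return float(np.sum(lin @ CHANNEL_WEIGHTS))

bright_set = [(1.0, 0.2, 0.2), (0.2, 1.0, 0.2), (0.2, 0.2, 1.0)]
dark_set   = [(0.7, 0.1, 0.1), (0.1, 0.7, 0.1), (0.1, 0.1, 0.7)]
print(palette_energy(bright_set), palette_energy(dark_set))
```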
12.
Romain Vergne, David Vanderhaeghe, Jiazhou Chen, Pascal Barla, Xavier Granier, Christophe Schlick. Computer Graphics Forum, 2011, 30(2): 513-522
We introduce a new technique called Implicit Brushes to render animated 3D scenes with stylized lines in real time with temporal coherence. An Implicit Brush is defined at a given pixel by the convolution of a brush footprint along a feature skeleton; the skeleton itself is obtained by locating surface features in the pixel neighborhood. Features are identified via image-space fitting techniques that extract not only their location but also their profile, which makes it possible to distinguish between sharp and smooth features. Profile parameters are then mapped to stylistic parameters such as brush orientation, size or opacity to give rise to a wide range of line-based styles.
13.
Y. Tokuyoshi. Computer Graphics Forum, 2015, 34(6): 135-147
Although geometry-aware filtering and upsampling have often been used for interactive or real-time rendering, they are unsuitable for glossy surfaces because shading results strongly depend on the bidirectional reflectance distribution functions. This paper proposes a novel weighting function for cross bilateral filtering and upsampling that measures the similarity of specular lobes. The difficulty is that a specular lobe is represented by a distribution function in directional space, whereas conventional cross bilateral filtering evaluates similarities using the distance between two points in a Euclidean space. Therefore, this paper first generalizes cross bilateral filtering to the similarity of distribution functions in a non-Euclidean space. Then, the weighting function is specialized for specular lobes. Our key insight is that the weighting function of bilateral filtering can be represented by the product integral of two distribution functions corresponding to two pixels. In addition, we propose spherical Gaussian-based approximations to calculate this weighting function analytically. Our weighting function detects the edges of glossiness and adapts to all-frequency materials using only the camera position and a G-buffer. These features make it suitable not only for path tracing but also for deferred shading and non-ray-tracing-based methods such as voxel cone tracing.
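The closed-form product integral of two spherical Gaussians, which the weighting function builds on, can be sketched as follows; how the paper normalizes and embeds this quantity in the cross bilateral weight is not reproduced here.

```python
import numpy as np

def sg_product_integral(p1, lam1, mu1, p2, lam2, mu2):
    """Closed-form integral over the unit sphere of the product of two
    spherical Gaussians G_i(v) = mu_i * exp(lam_i * (dot(v, p_i) - 1)).

    Larger values mean the two lobes overlap strongly, which is the kind of
    similarity measure the weighting function above is built on.
    """
    m = lam1 * np.asarray(p1, float) + lam2 * np.asarray(p2, float)
    d = np.linalg.norm(m)
    if d < 1e-8:                      # degenerate case: both sharpness ~0
        return 4.0 * np.pi * mu1 * mu2 * np.exp(-(lam1 + lam2))
    return (2.0 * np.pi * mu1 * mu2 / d
            * (np.exp(d - lam1 - lam2) - np.exp(-d - lam1 - lam2)))

# Two glossy lobes pointing in slightly different directions.
w = sg_product_integral([0.0, 0.0, 1.0], 40.0, 1.0,
                        [0.1, 0.0, 0.995], 40.0, 1.0)
```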
14.
We present a novel algorithm for the efficient extraction and visualization of high-quality ridge and valley surfaces from numerical datasets. Despite their rapidly increasing popularity in visualization, these so-called crease surfaces remain challenging to compute owing to their strongly nonlinear and non-orientable nature, and their complex boundaries. In this context, existing meshing techniques require an extremely dense sampling that is computationally prohibitive. Our proposed solution intertwines sampling and meshing steps to yield an accurate approximation of the underlying surfaces while ensuring the geometric quality of the resulting mesh. Using the computation power of the GPU, we propose a fast, parallel method for sampling. Additionally, we present a new front propagation meshing strategy that leverages CPU multiprocessing. Results are shown for synthetic, medical and fluid dynamics datasets.
15.
C. Pagot, D. Osmari, F. Sadlo, D. Weiskopf, T. Ertl, J. Comba. Computer Graphics Forum, 2011, 30(3): 751-760
The parallel vectors (PV) operator is a feature extraction approach for defining line-type features such as creases (ridges and valleys) in scalar fields, as well as separation, attachment, and vortex core lines in vector fields. In this work, we extend PV feature extraction to higher-order data represented by piecewise analytical functions defined over grid cells. The extraction uses PV in two distinct stages. First, seed points on the feature lines are placed by evaluating the inclusion form of the PV criterion with reduced affine arithmetic. Second, a feature flow field is derived from the higher-order PV expression where the features can be extracted as streamlines starting at the seeds. Our approach allows for guaranteed bounds regarding accuracy with respect to existence, position, and topology of the features obtained. The method is suitable for parallel implementation and we present results obtained with our GPU-based prototype. We apply our method to higher-order data obtained from discontinuous Galerkin fluid simulations.
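As a reminder of what the PV criterion tests, the sketch below evaluates the residual |v x w| on a sampled grid and collects near-zero points as seed candidates; the paper's actual seeding uses the inclusion form with reduced affine arithmetic, which this naive version does not attempt. The threshold and data are placeholders.

```python
import numpy as np

def parallel_vectors_residual(v, w):
    """Residual of the parallel-vectors criterion |v x w| on a grid of
    3D vectors; the PV feature set is where this residual vanishes.

    v, w: arrays of shape (..., 3). In the crease-line case, v is the
    gradient and w the Hessian applied to the gradient.
    """
    cross = np.cross(v, w)
    return np.linalg.norm(cross, axis=-1)

# Synthetic example: grid points with a small residual become seed candidates;
# a real extractor refines them inside each cell and then integrates the
# feature flow field, as described above.
rng = np.random.default_rng(3)
v = rng.normal(size=(32, 32, 32, 3))
w = rng.normal(size=(32, 32, 32, 3))
seeds = np.argwhere(parallel_vectors_residual(v, w) < 0.05)
```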
16.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo-realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques. However, the computational burden of this technique prohibits high-quality interactive rendering on standard commodity hardware. Solving the Monte Carlo integration with fewer samples results in characteristic noisy images. Global illumination filtering methods take advantage of the fact that the integral for neighbouring pixels may be very similar. Averaging samples with similar characteristics in screen space may approximate the correct integral, but can produce visible outliers. In this paper, we present a novel path tracing pipeline based on an edge-aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path-traced image but to use it as guidance to filter a second image composed from characteristic scene attributes that do not contain noise by default. We show that our approach better approximates the Monte Carlo integral compared to previous methods. Because the computation is carried out entirely in screen space, the method is applicable to fully dynamic scenes and arbitrary lighting, and allows for high-quality path tracing at interactive frame rates on commodity hardware.
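A generic cross (joint) bilateral filter is sketched below: spatial weights come from pixel distance and range weights from a separate guidance image. It only approximates the pipeline's key idea described above; the brute-force loop, single-channel assumption and parameter values are for clarity, not performance, and are not the paper's implementation.

```python
import numpy as np

def cross_bilateral_filter(image, guide, radius=4, sigma_s=2.0, sigma_r=0.1):
    """Generic cross (joint) bilateral filter for single-channel float arrays:
    spatial weights from pixel distance, range weights from differences in a
    separate guidance image. Brute force, for illustration only."""
    h, w = image.shape
    out = np.zeros_like(image)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    pad_i = np.pad(image, radius, mode="edge")
    pad_g = np.pad(guide, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            win_i = pad_i[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            win_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-((win_g - guide[y, x]) ** 2) / (2.0 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[y, x] = np.sum(wgt * win_i) / np.sum(wgt)
    return out
```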
17.
Minh Hoai Nguyen, Jean-Francois Lalonde, Alexei A. Efros, Fernando De la Torre. Computer Graphics Forum, 2008, 27(2): 627-635
Many categories of objects, such as human faces, can be naturally viewed as a composition of several different layers. For example, a bearded face with glasses can be decomposed into three layers: a layer for the glasses, a layer for the beard and a layer for the other permanent facial features. While modeling such a face with a linear subspace model could be very difficult, layer separation allows for easy modeling and modification of certain structures while leaving others unchanged. In this paper, we present a method for automatic layer extraction and its applications to face synthesis and editing. Layers are automatically extracted by utilizing the differences between subspaces and are modeled separately. We show that our method can be used for tasks such as beard removal (virtual shaving), beard synthesis, and beard transfer, among others.
18.
Motion based Painterly Rendering
Previous painterly rendering techniques normally use image gradients to decide stroke orientations. Image gradients are good at expressing object shapes, but poorly suited to expressing the flow or movement of objects. In real painting, using brush strokes that correspond to the actual movement of objects allows viewers to recognize the objects' motion better and thus to get an impression of the dynamics. In this paper, we propose a novel painterly rendering algorithm to express dynamic objects based on their motion information. We first extract motion information (magnitude, direction, standard deviation) of a scene from a set of consecutive images taken from the same view. The motion directions are then used to determine stroke orientations in regions with significant motion, while image gradients determine stroke orientations where little motion is observed. Our algorithm is useful for realistically and dynamically representing moving objects. We have applied our algorithm to rendering landscapes, where we segment a scene into dynamic and static regions and express the actual movement of dynamic objects using motion-based strokes.
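A minimal sketch of the orientation choice described above: use the motion direction where the flow is significant and fall back to a gradient-based orientation elsewhere. The threshold and the convention that strokes run perpendicular to the gradient are illustrative assumptions.

```python
import numpy as np

def stroke_orientations(flow, gradient, motion_threshold=0.5):
    """Choose per-pixel stroke orientations: follow the motion direction
    where the optical flow is significant, otherwise fall back to the
    orientation perpendicular to the image gradient (the usual choice).

    flow, gradient: (H, W, 2) arrays of per-pixel vectors; returns angles
    in radians.
    """
    motion_mag = np.linalg.norm(flow, axis=-1)
    motion_dir = np.arctan2(flow[..., 1], flow[..., 0])
    # Strokes normally run along edges, i.e. perpendicular to the gradient.
    grad_dir = np.arctan2(gradient[..., 1], gradient[..., 0]) + np.pi / 2.0
    return np.where(motion_mag > motion_threshold, motion_dir, grad_dir)
```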
19.
Daniela Oelke, Halldor Janetzko, Svenja Simon, Klaus Neuhaus, Daniel A. Keim. Computer Graphics Forum, 2011, 30(3): 871-880
Pixel-based visualizations have become popular because they are capable of displaying large amounts of data while providing many details. However, pixel-based visualizations are only effective if the data set is not sparse and the data distribution is not random. Single pixels – whether they lie in an empty area or in the middle of a large area of differently colored pixels – are perceptually difficult to discern and may therefore easily be missed. Furthermore, trends and interesting passages may be camouflaged in the sea of details. In this paper we compare different approaches for visual boosting in pixel-based visualizations. Several boosting techniques such as halos, background coloring, distortion, and hatching are discussed and assessed with respect to their effectiveness in boosting single pixels, trends, and interesting passages. Application examples from three different domains (document analysis, genome analysis, and geospatial analysis) show the general applicability of the techniques and the derived guidelines.
20.
Christian Rieder, Stephan Palmer, Florian Link, Horst K. Hahn. Computer Graphics Forum, 2011, 30(3): 1031-1040
In this paper, we present a rapid prototyping framework for GPU-based volume rendering. To this end, we propose a dynamic shader pipeline based on the SuperShader concept and illustrate the design decisions; we also present important requirements for the development of our system. In our approach, we break down the rendering shader into areas containing code for different computations, which are defined as freely combinable, modularized shader blocks. Hence, high-level changes of the rendering configuration result in the implicit modification of the underlying shader pipeline. Furthermore, the prototyping system allows custom shader code to be inserted between shader blocks of the pipeline at run time. A suitable user interface is available within the prototyping environment to allow intuitive modification of the shader pipeline. Thus, appropriate solutions for visualization problems can be developed interactively. We demonstrate the usage and usefulness of our framework with implementations of dynamic rendering effects for medical applications.
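As a toy illustration of the modular shader-block idea, the Python sketch below concatenates named GLSL snippets into one shader body and lets custom code be spliced in after a chosen block at run time. All block names and snippets are invented for illustration; they are not the framework's actual blocks.

```python
# Invented, minimal stand-ins for modular shader blocks of a volume renderer.
SHADER_BLOCKS = {
    "ray_setup":   "vec3 dir = normalize(rayDir(uv));",
    "sampling":    "vec4 s = sampleVolume(pos);",
    "classify":    "vec4 c = transferFunction(s.a);",
    "shading":     "c.rgb *= phong(normalAt(pos), lightDir);",
    "compositing": "color = blendFrontToBack(color, c);",
}

def build_shader(block_order, custom_code=None):
    """Concatenate the selected blocks into one shader body; custom_code maps
    a block name to extra code inserted right after that block at run time."""
    custom_code = custom_code or {}
    lines = ["void main() {"]
    for name in block_order:
        lines.append("    // block: " + name)
        lines.append("    " + SHADER_BLOCKS[name])
        if name in custom_code:
            lines.append("    " + custom_code[name])
    lines.append("}")
    return "\n".join(lines)

print(build_shader(["ray_setup", "sampling", "classify", "shading",
                    "compositing"],
                   custom_code={"classify": "c.a *= clipMask(pos);"}))
```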