20 similar publications found
1.
This paper presents a hybrid approach to multiple fluid simulation that can handle miscible and immiscible fluids simultaneously. We combine distance functions and volume fractions to capture not only the discontinuous interface between immiscible fluids but also the smooth transition between miscible fluids. Our approach consists of four steps: velocity field computation, volume fraction advection, miscible fluid diffusion, and visualization. By providing a scheme for combining volume fractions and level set functions, we are able to take advantage of both fluid representations. From the system point of view, our work is the first Eulerian grid-based approach to multiple fluid simulation that includes both miscible and immiscible fluids. From the technical point of view, our approach addresses the issues arising from variable density and viscosity together with material diffusion. We show the effectiveness of our approach in handling multiple miscible and immiscible fluids through experiments.
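As an illustration of the volume-fraction side of such a hybrid scheme, the sketch below advects and diffuses a miscible volume fraction on a 1D periodic grid. The upwind advection, explicit diffusion, and all constants are assumptions for illustration, not the paper's discretization.

```python
import numpy as np

def advect_upwind(f, u, dx, dt):
    """First-order upwind advection of a volume-fraction field f
    by a constant, positive velocity u on a periodic 1D grid."""
    return f - u * dt / dx * (f - np.roll(f, 1))

def diffuse(f, kappa, dx, dt):
    """Explicit diffusion step modelling miscible mixing."""
    lap = (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2
    return f + kappa * dt * lap

# Two miscible fluids: fractions must stay in [0, 1] and sum to 1.
n, dx, dt = 64, 1.0 / 64, 0.002
f1 = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # fluid A occupies the left half
for _ in range(100):
    f1 = advect_upwind(f1, u=0.5, dx=dx, dt=dt)
    f1 = diffuse(f1, kappa=0.01, dx=dx, dt=dt)
    f1 = np.clip(f1, 0.0, 1.0)
f2 = 1.0 - f1                                     # fraction of fluid B
```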
2.
Christian Eisenacher, Gregory Nichols, Andrew Selle, Brent Burley. Computer Graphics Forum, 2013, 32(4):125-132
Ray-traced global illumination (GI) is becoming widespread in production rendering, but incoherent secondary ray traversal limits practical rendering to scenes that fit in memory. Incoherent shading also leads to intractable performance with production-scale textures, forcing renderers to resort to caching of irradiance, radiosity, and other values to amortize expensive shading. Unfortunately, such caching strategies complicate artist workflow, are difficult to parallelize effectively, and contend for precious memory. Worse, these caches involve approximations that compromise quality. In this paper, we introduce a novel path-tracing framework that avoids these tradeoffs. We sort large, potentially out-of-core ray batches to ensure coherence of ray traversal. We then defer shading of ray hits until we have sorted them, achieving perfectly coherent shading and avoiding the need for shading caches.
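The batching idea can be sketched as follows: quantize each ray's direction into a coarse key, sort the batch by that key, and traverse rays in sorted order while deferring shading. The key construction and placeholder traversal below are illustrative assumptions, not the renderer's actual implementation.

```python
import numpy as np

def direction_key(d, bits=4):
    """Quantize unit directions into coarse integer keys so that rays with
    similar directions become adjacent after sorting."""
    q = np.clip(((d + 1.0) * 0.5 * (1 << bits)).astype(np.int64), 0, (1 << bits) - 1)
    return (q[:, 0] << (2 * bits)) | (q[:, 1] << bits) | q[:, 2]

def trace_sorted(origins, directions):
    """Sort a (potentially huge, out-of-core) ray batch by direction key and
    traverse it in that order; shading of the hits is deferred until afterwards."""
    order = np.argsort(direction_key(directions), kind="stable")
    hits = [None] * len(origins)
    for i in order:                                  # placeholder per-ray traversal
        hits[int(i)] = ("hit record for ray", int(i))
    return hits                                      # shade these coherently later

rng = np.random.default_rng(0)
d = rng.normal(size=(1024, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
o = rng.uniform(size=(1024, 3))
hit_records = trace_sorted(o, d)
```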
3.
Inyong Jeon, Kwang-Jin Choi, Tae-Yong Kim, Bong-Ouk Choi, Hyeong-Seok Ko. Computer Graphics Forum, 2013, 32(7):31-39
We present a new technique which can handle both point and sliding constraints in the multigrid (MG) framework. Although the MG method can theoretically perform as fast as O(N), developing a clothing simulator based on the MG method calls for solving an important technical challenge: handling the constraints. Resolving constraints has been difficult in MG because there has been no clear way to transfer the constraints existing in the finest-level mesh to the coarser-level meshes. This paper presents a new formulation based on soft constraints, which can coarsen the constraints defined at the finest level to the coarser levels. Experiments show that the proposed method can solve the linear system up to 4–9 times faster than the modified preconditioned conjugate gradient (MPCG) method without quality degradation. The proposed method is easy to implement and can be applied straightforwardly to existing clothing simulators based on implicit time integration.
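A soft point constraint can be pictured as a stiff penalty added to the implicit-step linear system, which keeps the matrix symmetric positive definite and therefore easy to restrict to coarser levels. The sketch below shows only that penalty assembly; the penalty weight, 3-DOF block layout, and toy system are assumptions, not the paper's multigrid formulation.

```python
import numpy as np

def apply_soft_point_constraints(A, b, constrained, targets, x, k=1e6):
    """Add a quadratic penalty k/2 * |x_i - target_i|^2 for each constrained
    vertex i to the implicit-step system A dx = b (3 DOFs per vertex).
    The penalty keeps the system symmetric positive definite."""
    A = A.copy(); b = b.copy()
    for i, t in zip(constrained, targets):
        s = slice(3 * i, 3 * i + 3)
        A[s, s] += k * np.eye(3)          # stiffness toward the target position
        b[s] += k * (t - x[s])            # force pulling vertex i to its target
    return A, b

# Tiny 2-vertex example: softly pin vertex 0 at the origin.
n = 2
A = np.eye(3 * n)
b = np.zeros(3 * n)
x = np.array([0.1, 0.0, 0.0, 1.0, 0.0, 0.0])
A2, b2 = apply_soft_point_constraints(A, b, [0], [np.zeros(3)], x)
dx = np.linalg.solve(A2, b2)              # vertex 0 is pulled back toward the origin
```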
4.
We propose a new adaptive algorithm for determining virtual point lights (VPLs) in the scope of real-time instant radiosity methods, which use a limited number of VPLs. The proposed method is based on Metropolis-Hastings sampling and exhibits better temporal coherence of VPLs, which is particularly important for real-time applications dealing with dynamic scenes. We evaluate the properties of the proposed method in the context of an algorithm based on imperfect shadow maps and compare it with the commonly used inverse transform method. The results indicate that the proposed technique can significantly reduce temporal flickering artifacts even for scenes with complex materials and textures. Further, we propose a novel splatting scheme for imperfect shadow maps using hardware tessellation. This scheme significantly improves the rendering performance, particularly for complex and deformable scenes. We thoroughly analyze the performance of the proposed techniques on test scenes with detailed materials, a moving camera, and deforming geometry.
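A minimal sketch of Metropolis-Hastings sampling of VPL positions against an importance function is shown below; the 2D domain, Gaussian proposal, and bright-spot importance are invented for illustration. Warm-starting the chain from the previous frame's state is what would give the temporal coherence mentioned above.

```python
import numpy as np

def metropolis_vpl_samples(importance, n_vpls, x0, step=0.05, rng=None):
    """Draw VPL candidate positions (here 2D points in [0,1]^2) with a
    Metropolis-Hastings random walk whose stationary density is proportional
    to `importance`."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, float)
    fx = importance(x)
    samples = []
    while len(samples) < n_vpls:
        y = x + rng.normal(scale=step, size=2)          # symmetric proposal
        if np.all((y >= 0.0) & (y <= 1.0)):
            fy = importance(y)
            if rng.uniform() < min(1.0, fy / max(fx, 1e-12)):
                x, fx = y, fy                            # accept the move
        samples.append(x.copy())                         # mutate or repeat current state
    return np.array(samples)

# Hypothetical importance: concentrate VPLs near a bright spot at (0.7, 0.3).
bright = lambda p: np.exp(-20.0 * np.sum((p - np.array([0.7, 0.3])) ** 2))
vpls = metropolis_vpl_samples(bright, n_vpls=256, x0=[0.5, 0.5])
```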
5.
This paper presents a digital storytelling approach that automatically generates animations for time-varying data visualization. Our approach simulates the composition and transition of storytelling techniques and synthesizes animations to describe various event features. Specifically, we analyze information related to a given event and abstract it as an event graph, which represents data features as nodes and event relationships as links. This graph embeds a tree-like hierarchical structure which encodes data features at different scales. Next, narrative structures are built by exploring starting nodes and suitable search strategies in this graph. The different stages of these narrative structures are considered in our automatic rendering parameter decision process to generate animations as digital stories. We integrate this animation generation approach into an interactive exploration process of time-varying data, so that more comprehensive information can be provided in a timely fashion. We demonstrate with a storm surge application that our approach allows semantic visualization of time-varying data and easy animation generation for users without special knowledge of the underlying visualization techniques.
6.
There has been considerable recent progress in hair simulation, driven by the high demands of computer-animated movies. However, capturing the complex interactions between hair and water is still in its infancy. Such interactions are best modeled as those between water and an anisotropic permeable medium, since water can flow into and out of the hair volume, biased along the hair fiber direction. Modeling the interaction is further complicated when the hair is allowed to move. In this paper, we introduce a simulation model that reproduces interactions between water and hair as a dynamic anisotropic permeable material. We utilize an Eulerian approach for capturing the microscopic porosity of hair and handle the wetting effects using a Cartesian bounding grid. A Lagrangian approach is used to simulate every single hair strand, including interactions between strands, yielding finely detailed dynamic hair simulation. Our model and simulation generate many interesting effects of interactions between finely detailed dynamic hair and water, such as water absorption and diffusion, cohesion of wet hair strands, water flow within the hair volume, water dripping from wet hair strands, and morphological shape transformations of wet hair.
7.
D. F. Keefe, T. M. O'Brien, D. B. Baier, S. M. Gatesy, E. L. Brainerd, D. H. Laidlaw. Computer Graphics Forum, 2008, 27(3):863-870
We present novel visual and interactive techniques for exploratory visualization of animal kinematics using instantaneous helical axes (IHAs). The helical axis has been used in orthopedics, biomechanics, and structural mechanics as a construct for describing rigid body motion. Within biomechanics, recent imaging advances have made possible accurate high‐speed measurements of individual bone positions and orientations during experiments. From this high‐speed data, instantaneous helical axes of motion may be calculated. We address questions of effective interactive, exploratory visualization of this high‐speed 3D motion data. A 3D glyph that encodes all parameters of the IHA in visual form is presented. Interactive controls are used to examine the change in the IHA over time and relate the IHA to anatomical features of interest selected by a user. The techniques developed are applied to a stereoscopic, interactive visualization of the mechanics of pig mastication and assessed by a team of evolutionary biologists who found interactive IHA‐based analysis a useful addition to more traditional motion analysis techniques.
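The IHA itself follows from standard screw theory: given the body's angular velocity and the velocity of one tracked point, the axis direction, a point on the axis, and the rotation and sliding speeds can be computed directly, as in the sketch below (the bone example is made up).

```python
import numpy as np

def instantaneous_helical_axis(omega, v_p, p):
    """Compute the IHA of a rigid body from its angular velocity `omega`
    and the linear velocity `v_p` of a tracked point currently at `p`.
    Returns (axis direction, a point on the axis, rotation speed,
    translation speed along the axis) -- the quantities a glyph would encode."""
    w2 = np.dot(omega, omega)
    if w2 < 1e-12:
        raise ValueError("pure translation: the helical axis is undefined")
    n = omega / np.sqrt(w2)                       # unit axis direction
    c = p + np.cross(omega, v_p) / w2             # point on the axis closest to p
    rot_speed = np.sqrt(w2)                       # rad/s about the axis
    trans_speed = np.dot(omega, v_p) / rot_speed  # sliding velocity along the axis
    return n, c, rot_speed, trans_speed

# Example: a bone spinning about z through (1, 0, 0) while sliding along z.
omega = np.array([0.0, 0.0, 2.0])
p = np.array([0.0, 0.0, 0.0])
v_p = np.cross(omega, p - np.array([1.0, 0.0, 0.0])) + np.array([0.0, 0.0, 0.5])
axis, point, w, s = instantaneous_helical_axis(omega, v_p, p)   # axis ~ +z, point ~ (1, 0, 0)
```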
8.
Generating plausible deformations of a character skin within the standard production pipeline is a challenge. This paper presents a volume preservation method dedicated to skinned characters. As usual, the character is defined by a skin mesh at some rest pose and an animation skeleton. At each animation step, skin deformations are first computed using standard SSD. Our method corrects the result using a set of local deformations which model the fold-over-free, constant-volume behavior of soft tissues. This is done geometrically, without the need for any physically-based simulation. To make the method easily applicable, we also provide automatic ways to extract the local regions where volume is to be preserved and to compute adequate skinning weights, both based on the character's morphology.
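For context, standard SSD (commonly, linear blend skinning) blends bone transforms per vertex, which is what introduces the volume loss the paper corrects. A minimal sketch, with made-up weights and bone matrices:

```python
import numpy as np

def ssd_skinning(rest_positions, weights, bone_matrices):
    """Standard linear blend skinning: each skin vertex is transformed by a
    convex combination of bone transforms.  rest_positions: (V, 3),
    weights: (V, B) rows summing to 1, bone_matrices: (B, 4, 4)."""
    V = rest_positions.shape[0]
    homo = np.hstack([rest_positions, np.ones((V, 1))])          # homogeneous coords
    blended = np.einsum("vb,bij->vij", weights, bone_matrices)   # per-vertex blended matrix
    skinned = np.einsum("vij,vj->vi", blended, homo)
    return skinned[:, :3]

# Two bones: identity and a 90-degree rotation about z; one vertex weighted half-half.
rot_z = np.eye(4)
rot_z[:2, :2] = [[0.0, -1.0], [1.0, 0.0]]
verts = np.array([[1.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
deformed = ssd_skinning(verts, w, np.stack([np.eye(4), rot_z]))
# The result collapses toward the joint (the classic SSD artifact); the paper's
# geometric correction would then rescale local regions to restore rest-pose volume.
```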
9.
We present a real‐time rendering algorithm for inhomogeneous, single scattering media, where all‐frequency shading effects such as glows, light shafts, and volumetric shadows can all be captured. The algorithm first computes source radiance at a small number of sample points in the medium, then interpolates these values at other points in the volume using a gradient‐based scheme that is efficiently applied by sample splatting. The sample points are dynamically determined based on a recursive sample splitting procedure that adapts the number and locations of sample points for accurate and efficient reproduction of shading variations in the medium. The entire pipeline can be easily implemented on the GPU to achieve real‐time performance for dynamic lighting and scenes. Rendering results of our method are shown to be comparable to those from ray tracing.
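The recursive sample splitting can be illustrated with a small 2D sketch: place a sample per region and subdivide wherever the sampled quantity varies too much. The variation test, thresholds, and stand-in radiance function below are assumptions, not the paper's criteria.

```python
import numpy as np

def adaptive_samples(center, half, source_radiance, eps=0.05, min_half=0.01, out=None):
    """Recursive sample splitting: keep one sample for a square region unless
    the sampled quantity varies too much across it, in which case split the
    region into four children.  `source_radiance` is any callable over 2D
    points (a stand-in for the per-point single-scattering computation)."""
    out = [] if out is None else out
    offsets = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]]) * half
    vals = [source_radiance(center)] + [source_radiance(center + o) for o in offsets]
    if half > min_half and max(vals) - min(vals) > eps:
        for o in offsets:                          # refine: four child samples
            adaptive_samples(center + o / 2, half / 2, source_radiance, eps, min_half, out)
    else:
        out.append((center.copy(), half))          # keep a single sample for this region
    return out

# Hypothetical radiance with a glow around (0.3, 0.7): samples cluster there.
glow = lambda p: float(np.exp(-10.0 * np.sum((p - np.array([0.3, 0.7])) ** 2)))
samples = adaptive_samples(np.array([0.5, 0.5]), 0.5, glow)
```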
10.
Miloš Hašan, Edgar Velázquez-Armendáriz, Fabio Pellacini, Kavita Bala. Computer Graphics Forum, 2008, 27(4):1105-1114
Rendering animations of scenes with deformable objects, camera motion, and complex illumination, including indirect lighting and arbitrary shading, is a long-standing challenge. Prior work has shown that complex lighting can be accurately approximated by a large collection of point lights. In this formulation, rendering of animation sequences becomes the problem of efficiently shading many surface samples from many lights across several frames. This paper presents a tensor formulation of the animated many-light problem, where each element of the tensor expresses the contribution of one light to one pixel in one frame. We sparsely sample rows and columns of the tensor, and introduce a clustering algorithm to select a small number of representative lights to efficiently approximate the animation. Our algorithm achieves efficiency by reusing representatives across frames, while minimizing temporal flicker. We demonstrate our algorithm in a variety of scenes that include deformable objects, complex illumination, and arbitrary shading, and show that a surprisingly small number of representative lights is sufficient for high-quality rendering. We believe our algorithm will find practical use in applications that require fast previews of complex animation.
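A single-frame, much-simplified sketch of the column-clustering idea is given below: sample a reduced pixels-by-lights matrix, cluster the light columns, and keep one scaled representative per cluster. The clustering metric, Lloyd iterations, and scaling are illustrative choices, not the paper's tensor algorithm.

```python
import numpy as np

def choose_representative_lights(R, n_clusters, rng=None):
    """Given a reduced matrix R (sampled pixels x lights) of per-light
    contributions, cluster the light columns and return one representative
    light per cluster with a scale factor, so the sum over all lights can be
    approximated by the representatives alone."""
    rng = np.random.default_rng(0) if rng is None else rng
    norms = np.linalg.norm(R, axis=0)
    cols = R / np.maximum(norms, 1e-12)                  # direction of each column
    centers = cols[:, rng.choice(R.shape[1], n_clusters, replace=False)].T
    for _ in range(10):                                  # a few Lloyd iterations
        d = ((cols.T[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = cols[:, labels == k].mean(axis=1)
    reps, scales = [], []
    for k in range(n_clusters):
        members = np.flatnonzero(labels == k)
        if members.size == 0:
            continue
        rep = members[np.argmax(norms[members])]         # strongest member represents the cluster
        reps.append(rep)
        scales.append(norms[members].sum() / max(norms[rep], 1e-12))
    return np.array(reps), np.array(scales)

R = np.abs(np.random.default_rng(1).normal(size=(200, 1000)))    # toy reduced matrix
lights, weights = choose_representative_lights(R, n_clusters=30)
approx = R[:, lights] @ weights                                   # approximates R.sum(axis=1)
```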
11.
Interactive rendering with dynamic natural lighting and changing view is a long-standing goal in computer graphics. Recently, precomputation-based methods for all-frequency relighting have made substantial progress in this direction. Many of the most successful algorithms are based on a factorization of the BRDF into incident and outgoing directions, enabling each term to be precomputed independently of the viewing direction and recombined at run time. However, there has so far been no theoretical understanding of the accuracy of this factorization, nor of the number of terms needed. In this paper, we conduct a theoretical and empirical analysis of the BRDF in-out factorization. For Phong BRDFs, we obtain analytic results, showing that the number of terms needed grows linearly with the Phong exponent, while the factors correspond closely to spherical harmonic basis functions. More generally, the number of terms is quadratic in the frequency content of the BRDF along the reflected or half-angle direction. This analysis gives clear practical guidance on the number of factors needed for a given material. Different objects in a scene can each be represented with the correct number of terms needed for that particular BRDF, enabling both accuracy and interactivity.
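The flavor of the analysis can be reproduced numerically: tabulate a Phong-like lobe over incident and outgoing angles, take an SVD, and count the singular values above a tolerance. The 1D angular stand-in below is a toy model, not the paper's derivation, but it shows the term count growing with the exponent.

```python
import numpy as np

def phong_inout_matrix(exponent, n_theta=64):
    """Tabulate a 1D Phong-like lobe as a matrix over incident and outgoing
    angles (a simplified stand-in for the full in-out BRDF factorization)."""
    theta = np.linspace(0.0, np.pi / 2, n_theta)
    wi, wo = np.meshgrid(theta, theta, indexing="ij")
    # Lobe peaked where the outgoing angle matches the mirror of the incident angle.
    return np.cos(np.abs(wi - wo)) ** exponent

def num_terms(matrix, rel_tol=1e-3):
    """Number of factored terms needed: singular values above a relative tolerance."""
    s = np.linalg.svd(matrix, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

for n in (8, 32, 128):
    print(n, num_terms(phong_inout_matrix(n)))   # term count grows with the exponent
```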
12.
In this paper we present a technique for computing translational gradients of indirect surface reflectance in scenes containing participating media and significant occlusions. These gradients describe how the incident radiance field changes with respect to translation on surfaces. Previous techniques for computing gradients ignore the effects of volume scattering and attenuation and assume that radiance is constant along rays connecting surfaces. We present a novel gradient formulation that correctly captures the influence of participating media. Our formulation accurately accounts for changes of occlusion, including the effect of surfaces occluding scattering media. We show how the proposed gradients can be used within an irradiance caching framework to more accurately handle scenes with participating media, providing significant improvements in interpolation quality.
13.
Veronika Šoltészová, Daniel Patel, Stefan Bruckner, Ivan Viola. Computer Graphics Forum, 2010, 29(3):883-891
In this paper, we present a novel technique which simulates directional light scattering for more realistic interactive visualization of volume data. Our method extends the recent directional occlusion shading model by enabling light source positioning with practically no performance penalty. Light transport is approximated using a tilted cone‐shaped function which leaves elliptic footprints in the opacity buffer during slice‐based volume rendering. We perform an incremental blurring operation on the opacity buffer for each slice in front‐to‐back order. This buffer is then used to define the degree of occlusion for the subsequent slice. Our method is capable of generating high‐quality soft shadowing effects, allows interactive modification of all illumination and rendering parameters, and requires no pre‐computation.
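A toy CPU version of the front-to-back opacity-buffer update is sketched below: before compositing each slice, the buffer is averaged over a few taps displaced toward the light, standing in for the tilted-cone, elliptic footprint. The kernel size, offsets, and homogeneous test volume are assumptions.

```python
import numpy as np

def shift(img, dy, dx):
    """Shift a 2D image by integer offsets, padding with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = slice(max(dy, 0), h + min(dy, 0)), slice(max(dx, 0), w + min(dx, 0))
    yd, xd = slice(max(-dy, 0), h + min(-dy, 0)), slice(max(-dx, 0), w + min(-dx, 0))
    out[ys, xs] = img[yd, xd]
    return out

def render_with_directional_occlusion(slices, light_offset=(1, 0)):
    """Front-to-back slice compositing with a small directional-occlusion
    stand-in: before each slice, the opacity buffer is blurred by averaging a
    few taps displaced toward the light, approximating a tilted cone footprint."""
    h, w = slices[0].shape
    color = np.zeros((h, w))
    trans = np.ones((h, w))
    occlusion = np.zeros((h, w))                     # the opacity buffer
    dy, dx = light_offset
    for alpha in slices:                             # per-slice opacity in [0, 1]
        taps = [occlusion, shift(occlusion, dy, dx),
                shift(occlusion, dy + 1, dx), shift(occlusion, dy, dx + 1)]
        occlusion = np.clip(np.mean(taps, axis=0), 0.0, 1.0)
        shading = 1.0 - occlusion                    # darker where occluded toward the light
        color += trans * alpha * shading
        trans *= (1.0 - alpha)
        occlusion = occlusion + (1.0 - occlusion) * alpha   # accumulate slice opacity
    return color

vol = [np.full((64, 64), 0.05) for _ in range(32)]   # toy homogeneous volume
image = render_with_directional_occlusion(vol)
```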
14.
We present a new algorithm for efficient occlusion culling using hardware occlusion queries. The algorithm significantly improves on previous techniques by making better use of temporal and spatial coherence of visibility. This is achieved by using adaptive visibility prediction and query batching. As a result of the new optimizations, the number of issued occlusion queries and the number of rendering state changes are significantly reduced. We also propose a simple method for determining tighter bounding volumes for occlusion queries and a method which further reduces pipeline stalls. The proposed method provides up to an order of magnitude speedup over the previous state of the art. The new technique is simple to implement, does not rely on hardware calibration, and integrates well with modern game engines.
15.
We describe a global illumination method combining two well-known techniques: photon mapping and irradiance caching. The photon mapping method has the advantage of being view-independent but requires a costly additional rendering pass, called final gathering. Irradiance caching, by contrast, is view-dependent: irradiance is computed and cached only on surfaces of the scene as viewed by a single camera. To compute records covering the entire scene, the irradiance caching method has to be run for many cameras, which takes a long time and is a tedious task since the user has to place the needed cameras manually. Our method exploits the advantages of these two methods and avoids any user intervention. It computes a refined, view-independent irradiance cache from a photon map. The global illumination solution is then rendered interactively using radiance cache splatting.
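The quantity cached from the photon map is, in essence, the classic k-nearest-photon irradiance estimate; a sketch with made-up photon data is shown below (the paper's cache refinement and splatting steps are not shown).

```python
import numpy as np

def irradiance_estimate(x, n, photon_pos, photon_dir, photon_power, k=64):
    """Classic k-nearest-photon density estimate of irradiance at surface
    point x with normal n -- the view-independent quantity such a method
    would store in an irradiance cache record."""
    d = np.linalg.norm(photon_pos - x, axis=1)
    nearest = np.argsort(d)[:k]
    r = d[nearest].max()                               # radius of the gathering disc
    incoming = photon_dir[nearest] @ n < 0.0           # photons arriving at the front side
    return photon_power[nearest][incoming].sum(axis=0) / (np.pi * r * r)

rng = np.random.default_rng(4)
pos = rng.uniform(-1, 1, size=(5000, 3))
pos[:, 2] = 0.0                                        # toy photons stored on the z = 0 plane
dirs = -np.abs(rng.normal(size=(5000, 3)))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
power = np.full((5000, 3), 1e-3)                       # made-up RGB photon power
E = irradiance_estimate(np.zeros(3), np.array([0.0, 0.0, 1.0]), pos, dirs, power)
```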
16.
We present a new and accurate method to render the atmosphere in real time from any viewpoint, from ground level to outer space, while taking Rayleigh and Mie multiple scattering into account. Our method reproduces many effects of the scattering of light, such as the daylight and twilight sky color and aerial perspective for all view and light directions, or the Earth and mountain shadows (light shafts) inside the atmosphere. Our method is based on a formulation of the light transport equation that is precomputable for all viewpoints, view directions, and sun directions. We show how to store this data compactly and propose a GPU-compliant algorithm to precompute it in a few seconds. This precomputed data allows us to evaluate the light transport equation at runtime in constant time, without any sampling, while taking into account the ground for shadows and light shafts.
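A small piece of such a precomputation can be sketched directly: the Rayleigh phase function plus a transmittance table over altitude and view-zenith cosine, filled by ray marching an exponential density. The constants and the upward-rays-only geometry below are simplifying assumptions, not the paper's full parameterization.

```python
import numpy as np

# Illustrative constants (not the paper's): Rayleigh scattering coefficient at
# sea level (per meter, RGB), scale height, and planet/atmosphere radii in meters.
BETA_R = np.array([5.8e-6, 13.5e-6, 33.1e-6])
H_R = 8000.0
R_GROUND, R_TOP = 6360e3, 6420e3

def rayleigh_phase(cos_theta):
    """Rayleigh phase function."""
    return 3.0 / (16.0 * np.pi) * (1.0 + cos_theta ** 2)

def transmittance(altitude, mu, n_steps=64):
    """Transmittance from a point at `altitude` toward the top of the atmosphere
    along a direction with cosine `mu` to the zenith (simple ray march;
    upward-pointing rays only)."""
    r0 = R_GROUND + altitude
    # Distance to the top sphere along the ray: solves |r0*up + t*dir| = R_TOP.
    t_top = -r0 * mu + np.sqrt((r0 * mu) ** 2 + R_TOP ** 2 - r0 ** 2)
    ts = np.linspace(0.0, t_top, n_steps)
    heights = np.sqrt(r0 ** 2 + ts ** 2 + 2.0 * r0 * ts * mu) - R_GROUND
    optical_depth = np.trapz(np.exp(-heights / H_R), ts)
    return np.exp(-BETA_R * optical_depth)

# Precompute a small 2D transmittance table indexed by (altitude, mu); at run
# time the renderer would only look up and interpolate this table.
alts = np.linspace(0.0, 60e3, 32)
mus = np.linspace(0.05, 1.0, 32)
table = np.array([[transmittance(a, m) for m in mus] for a in alts])
```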
17.
The selection of an appropriate global transfer function is essential for visualizing time‐varying simulation data. This is especially challenging when the global data range is not known in advance, as is often the case in remote and in‐situ visualization settings. Since the data range may vary dramatically as the simulation progresses, volume rendering using local transfer functions may not be coherent for all time steps. We present an exploratory technique that enables coherent classification of time‐varying volume data. Unlike previous approaches, which require pre‐processing of all time steps, our approach lets the user explore the transfer function space without accessing the original 3D data. This is useful for interactive visualization, and absolutely essential for in‐situ visualization, where the entire simulation data range is not known in advance. Our approach generates a compact representation of each time step at rendering time in the form of ray attenuation functions, which are used for subsequent operations on the opacity and color mappings. The presented approach offers interactive exploration of time‐varying simulation data that alleviates the cost associated with reloading and caching large data sets.
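One hedged reading of such a compact per-ray representation is a histogram of the scalar values each ray encounters, from which a new opacity transfer function can be applied without revisiting the 3D data. The sketch below uses that histogram proxy, which may differ from the paper's exact ray attenuation functions.

```python
import numpy as np

def per_ray_histograms(samples_along_rays, n_bins=64, vmin=0.0, vmax=1.0):
    """Compact per-ray representation: a histogram of the scalar values that
    each viewing ray encounters.  samples_along_rays: (n_rays, n_samples)."""
    edges = np.linspace(vmin, vmax, n_bins + 1)
    hists = np.stack([np.histogram(r, bins=edges)[0] for r in samples_along_rays])
    return hists, edges

def reclassify(histograms, edges, opacity_tf):
    """Re-apply a new opacity transfer function without touching the 3D data:
    approximate each ray's transmittance from its value histogram alone."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    alpha = opacity_tf(centers)                      # opacity per histogram bin
    # Transmittance ~ product over samples of (1 - alpha), grouped by bin counts.
    log_t = histograms @ np.log1p(-np.clip(alpha, 0.0, 0.999))
    return 1.0 - np.exp(log_t)                       # per-ray opacity for display

rays = np.random.default_rng(2).uniform(size=(10, 200))   # toy sampled scalar values
H, edges = per_ray_histograms(rays)
opacity = reclassify(H, edges, lambda v: np.where(v > 0.8, 0.05, 0.0))
```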
18.
Witawat Rungjiratananon, Zoltan Szego, Yoshihiro Kanamori, Tomoyuki Nishita. Computer Graphics Forum, 2008, 27(7):1887-1893
Recent advances in physically-based simulation have made it possible to generate realistic animations. However, in the case of solid-fluid coupling, wetting effects have rarely been addressed despite their visual importance, especially in interactions between fluids and granular materials. This paper presents a simple particle-based method to model the physical mechanism of wetness propagating through granular materials: fluid particles are absorbed in the spaces between the granular particles, and these wetted granular particles then stick together due to liquid bridges caused by surface tension, which subsequently disappear when over-wetting occurs. Our method handles these phenomena by introducing a wetness value for each granular particle and by integrating the aspects of behavior that depend on wetness into the simulation framework. Using this method, a GPU-based simulator can achieve highly dynamic animations that include wetting effects in real time.
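The wetness bookkeeping can be sketched directly from the description: absorb wetness from nearby fluid particles, bond pairs whose wetness lies in a cohesive range, and drop bonds when over-wet. All radii, rates, and thresholds below are made up.

```python
import numpy as np

def update_wetness(gran_pos, wetness, fluid_pos, absorb_radius=0.1,
                   absorb_rate=0.2, bridge_min=0.05, bridge_max=0.9):
    """Per-particle wetness bookkeeping following the idea in the abstract:
    granular particles absorb wetness from nearby fluid particles; pairs of
    wetted particles form cohesive 'liquid bridge' bonds, which are dropped
    again when a particle becomes over-wet."""
    # Absorption: count nearby fluid particles and increase wetness.
    for i, p in enumerate(gran_pos):
        near = np.sum(np.linalg.norm(fluid_pos - p, axis=1) < absorb_radius)
        wetness[i] = min(1.0, wetness[i] + absorb_rate * near)
    # Liquid bridges: bond neighbouring granular particles whose wetness lies
    # in the cohesive range (too dry -> no bridge, over-wet -> bridge breaks).
    bonds = []
    for i in range(len(gran_pos)):
        for j in range(i + 1, len(gran_pos)):
            close = np.linalg.norm(gran_pos[i] - gran_pos[j]) < 2 * absorb_radius
            cohesive = all(bridge_min < wetness[k] < bridge_max for k in (i, j))
            if close and cohesive:
                bonds.append((i, j))
    return wetness, bonds

rng = np.random.default_rng(3)
grains = rng.uniform(size=(20, 3)) * 0.5
fluid = rng.uniform(size=(50, 3)) * 0.5
w, bridges = update_wetness(grains, np.zeros(20), fluid)
```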
19.
The paper describes a technique to generate high-quality light field representations from volumetric data. We show how light field galleries can be created to give inexperienced audiences access to interactive high-quality volume renditions. The proposed light field representation is lightweight with respect to storage and bandwidth and is thus ideal as an exchange format for visualization results, especially for web galleries. The approach extends an existing sphere-hemisphere parameterization of the light field with per-pixel depth. High-quality paraboloid maps are generated from volumetric data using GPU-based ray casting or slicing approaches. Different layers, such as (but not restricted to) isosurfaces, can be generated independently and composited in real time. This allows the user to interactively explore the model and to change visibility parameters at run time.
20.
In this paper we introduce the constrained tetrahedralization as a new acceleration structure for ray tracing. A constrained tetrahedralization of a scene is a tetrahedralization that respects the faces of the scene geometry. The closest intersection of a ray with a scene is found by traversing this tetrahedralization along the ray, one tetrahedron at a time. We show that constrained tetrahedralizations are a viable alternative to current acceleration structures, and that they have a number of unique properties that set them apart from other acceleration structures: constrained tetrahedralizations are not hierarchical yet adaptive; the complexity of traversing them is a function of local geometric complexity rather than global geometric complexity; constrained tetrahedralizations support deforming geometry without any effort; and they have the potential to unify several data structures currently used in global illumination.
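Traversing the tetrahedralization one cell at a time amounts to repeatedly finding the face through which the ray exits the current tetrahedron and stepping to the neighbor across it. The sketch below shows that walk on a toy two-tetrahedron mesh; the data layout and epsilons are assumptions, and the stop-at-geometry test is only indicated in a comment.

```python
import numpy as np

def walk_tetrahedra(origin, direction, vertices, tets, neighbors, start, max_steps=1000):
    """Traverse a tetrahedralization along a ray, one tetrahedron at a time.
    vertices: (V, 3); tets: (T, 4) vertex indices; neighbors[t][k] is the index
    of the tetrahedron sharing the face opposite vertex k of tet t (or -1).
    Returns the list of visited tetrahedra; in a constrained tetrahedralization
    the walk would stop as soon as a crossed face is a real scene triangle."""
    visited, t_cur, tet = [], 0.0, start
    for _ in range(max_steps):
        visited.append(tet)
        vids = tets[tet]
        best_t, best_face = np.inf, -1
        for k in range(4):                                    # face opposite vertex k
            a, b, c = vertices[np.delete(vids, k)]
            apex = vertices[vids[k]]
            n = np.cross(b - a, c - a)
            if np.dot(n, apex - a) > 0:                       # orient n away from the apex
                n = -n
            denom = np.dot(direction, n)
            if denom > 1e-12:                                 # ray leaves through this face?
                t = np.dot(a - origin, n) / denom
                if t_cur - 1e-9 <= t < best_t:
                    best_t, best_face = t, k
        if best_face < 0 or neighbors[tet][best_face] < 0:
            break                                             # left the mesh (or hit geometry)
        t_cur, tet = best_t, neighbors[tet][best_face]
    return visited

# Two unit tetrahedra glued along the face x + y + z = 1 (a toy two-cell mesh).
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
T = np.array([[0, 1, 2, 3], [4, 1, 2, 3]])
N = np.array([[1, -1, -1, -1], [0, -1, -1, -1]])              # shared face is opposite vertex 0
order = walk_tetrahedra(np.array([0.1, 0.1, 0.1]),
                        np.array([1.0, 1.0, 1.0]) / np.sqrt(3), V, T, N, start=0)
```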