Similar Literature
20 similar documents retrieved.
1.
Environment-mapped rendering of Lambertian isotropic surfaces is common, and a popular technique is to use a quadratic spherical harmonic expansion. This compact irradiance map representation is widely adopted in interactive applications like video games. However, many materials are anisotropic, and shading is determined by the local tangent direction, rather than the surface normal. Even for visualization and illustration, it is increasingly common to define a tangent vector field, and use anisotropic shading. In this paper, we extend spherical harmonic irradiance maps to anisotropic surfaces, replacing Lambertian reflectance with the diffuse term of the popular Kajiya-Kay model. We show that there is a direct analogy, with the surface normal replaced by the tangent. Our main contribution is an analytic formula for the diffuse Kajiya-Kay BRDF in terms of spherical harmonics; this derivation is more complicated than for the standard diffuse lobe. We show that the terms decay even more rapidly than for Lambertian reflectance, going as l^-3, where l is the spherical harmonic order, and with only 6 terms (l = 0 and l = 2) capturing 99.8% of the energy. Existing code for irradiance environment maps can be trivially adapted for real-time rendering with tangent irradiance maps. We also demonstrate an application to offline rendering of the diffuse component of fibers, using our formula as a control variate for Monte Carlo sampling.
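The abstract states that existing irradiance environment map code can be adapted by substituting the tangent for the normal. Below is a minimal sketch of that adaptation, using the standard quadratic-form evaluation of a 9-coefficient SH irradiance map; the matrix constants are the classic Lambertian ones (Ramamoorthi and Hanrahan) used as placeholders, since the Kajiya-Kay-specific band factors derived in the paper are not reproduced here.

```python
import numpy as np

# Hedged sketch: evaluate a quadratic SH irradiance map with the surface *tangent*
# in place of the normal. The constants below are the well-known Lambertian ones;
# the paper's Kajiya-Kay diffuse term would replace them with its own band factors
# (which decay as l^-3), not reproduced here.

def irradiance_matrix(L):
    """Build the 4x4 quadratic-form matrix M from 9 SH lighting coefficients.

    L is ordered [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22]."""
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    return np.array([
        [c1 * L[8],  c1 * L[4],  c1 * L[7], c2 * L[3]],
        [c1 * L[4], -c1 * L[8],  c1 * L[5], c2 * L[1]],
        [c1 * L[7],  c1 * L[5],  c3 * L[6], c2 * L[2]],
        [c2 * L[3],  c2 * L[1],  c2 * L[2], c4 * L[0] - c5 * L[6]],
    ])

def tangent_irradiance(M, t):
    """Evaluate E(t) = t_h^T M t_h, exactly as irradiance maps do for normals."""
    t = np.asarray(t, dtype=float)
    t_h = np.append(t / np.linalg.norm(t), 1.0)   # homogeneous coordinates
    return float(t_h @ M @ t_h)

L = np.random.rand(9)                  # hypothetical SH lighting coefficients
M = irradiance_matrix(L)
print(tangent_irradiance(M, (1.0, 0.0, 0.0)))
```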

2.
Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement compared to previous approaches. We explain two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per-pixel depth correction, the other approach employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU-based per-fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to an unnoticeable level and provides a rendering technique that performs without exhaustive pre-processing for 3D object reconstruction and without real-time ray-object intersection calculations at rendering time.

3.
Glossy-to-glossy reflections are lights bounced between glossy surfaces. Such directional light transports are important for humans to perceive glossy materials, but difficult to simulate. This paper proposes a new method for rendering screen-space glossy-to-glossy reflections in real time. We use spherical von Mises-Fisher (vMF) distributions to model glossy BRDFs at surfaces, and employ the screen-space directional occlusion (SSDO) rendering framework to trace indirect light transports bounced in screen space. As our main contributions, we derive a new parameterization of the vMF distribution so as to convert the non-linear fit of multiple vMF distributions into a linear sum in the new space. Then, we present a new linear filtering technique to build MIP-maps on glossy BRDFs, which allows us to create filtered radiance transfer functions at runtime, and efficiently estimate indirect glossy-to-glossy reflections. We demonstrate our method in a real-time application for rendering scenes with dynamic glossy objects. Compared with screen-space directional occlusion, our approach requires only one extra texture and has negligible overhead (a 3%-6% drop in frame rate), but enables glossy-to-glossy reflections.
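Since the key idea above is a parameterization in which vMF lobes can be combined linearly (e.g. when building MIP-maps), here is a hedged sketch of a commonly used linearization of vMF filtering: average the unnormalized mean vectors of the lobes and recover the concentration with a standard approximation. This illustrates the concept only; it is not the specific parameterization derived in the paper.

```python
import numpy as np

# Hedged sketch: linear filtering of von Mises-Fisher (vMF) lobes by averaging
# their unnormalized mean vectors, then recovering (mu, kappa) with the standard
# approximation of Banerjee et al. The paper's own parameterization may differ.

def vmf_to_vector(mu, kappa):
    """Map a vMF lobe (unit mean direction mu, concentration kappa) to r = A(kappa) * mu."""
    a = 1.0 / np.tanh(kappa) - 1.0 / kappa       # mean resultant length A(kappa)
    return a * np.asarray(mu, dtype=float)

def vector_to_vmf(r):
    """Recover (mu, kappa) from an averaged vector r with |r| < 1."""
    rbar = np.linalg.norm(r)
    mu = r / rbar
    kappa = rbar * (3.0 - rbar**2) / (1.0 - rbar**2)
    return mu, kappa

# MIP-style filtering: average the r-vectors of four texels, then convert back.
lobes = [((0.0, 0.0, 1.0), 80.0), ((0.1, 0.0, 0.99), 60.0),
         ((0.0, 0.1, 0.99), 60.0), ((-0.1, 0.0, 0.99), 70.0)]
r_avg = np.mean([vmf_to_vector(np.array(m) / np.linalg.norm(m), k) for m, k in lobes], axis=0)
mu_filtered, kappa_filtered = vector_to_vmf(r_avg)
print(mu_filtered, kappa_filtered)
```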

4.
Visualization of natural phenomena is an important research topic in computer graphics and virtual reality. Building on an analysis of the traditional ray-casting algorithm, we propose an improved ray-casting algorithm based on spherical shells. The GPU is applied to volume rendering of spherical-shell data fields, and vertex and pixel shader programs tailored to such data fields are designed. In addition, the typhoon source data format is parsed to generate volume data for typhoon visualization, and the proposed algorithm is used to visualize typhoon cloud layers and related factors. Experimental results show that the proposed GPU-based spherical-shell ray-casting algorithm achieves good real-time typhoon visualization on the spherical surface.
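As a companion to the translated abstract above, here is a hedged sketch of the core geometric step of spherical-shell ray casting: clipping a view ray against the inner and outer spheres and marching the resulting segment under an emission-absorption model. The `sample` density callback is a hypothetical stand-in for the typhoon volume data; the actual GPU vertex/pixel shader implementation is not reproduced.

```python
import numpy as np

# Hedged sketch: ray marching through a spherical shell (radii r_in < r_out,
# centred at the origin). The ray direction d is assumed normalized and the
# inner sphere is treated as opaque (the globe).

def ray_sphere(o, d, r):
    """Return the two parametric hits of ray o + t*d with a sphere of radius r, or None."""
    b = np.dot(o, d)
    c = np.dot(o, o) - r * r
    disc = b * b - c
    if disc < 0.0:
        return None
    s = np.sqrt(disc)
    return -b - s, -b + s

def march_shell(o, d, r_in, r_out, sample, n_steps=64):
    """Accumulate emission-absorption along the ray segment inside the shell."""
    hit_out = ray_sphere(o, d, r_out)
    if hit_out is None:
        return 0.0
    t0, t1 = max(hit_out[0], 0.0), hit_out[1]
    hit_in = ray_sphere(o, d, r_in)
    if hit_in is not None and hit_in[0] > 0.0:
        t1 = min(t1, hit_in[0])          # stop at the opaque inner sphere
    if t1 <= t0:
        return 0.0
    dt = (t1 - t0) / n_steps
    radiance, transmittance = 0.0, 1.0
    for i in range(n_steps):
        p = o + (t0 + (i + 0.5) * dt) * d
        sigma = sample(p)                # density/extinction at the sample point
        alpha = 1.0 - np.exp(-sigma * dt)
        radiance += transmittance * alpha
        transmittance *= 1.0 - alpha
    return radiance

eye = np.array([0.0, 0.0, 3.0])
ray = np.array([0.0, 0.0, -1.0])
print(march_shell(eye, ray, 1.0, 1.2, lambda p: 0.5))   # constant test density
```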

5.
Object surfaces are approximated with disks, and the occlusion produced by each disk is represented by an elliptical occlusion field. During precomputation, each elliptical occlusion field is sampled according to its parameters and represented with spherical harmonic coefficients, which are then converted to the logarithmic domain; the sampled data are compressed with principal component analysis. At rendering time, the value of a disk's elliptical occlusion field is determined from its normal, radius, and position relative to the shadow receiver, and the contributions are accumulated with the spherical harmonic exponentiation algorithm. By combining the disk approximation with spherical harmonic exponentiation, the method can flexibly describe the soft shadows cast by deformable and thin objects.
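A hedged sketch of the accumulation idea behind the spherical harmonic exponentiation mentioned above: per-blocker visibility functions multiply, so their logarithms can be summed and a single exponentiation applied at the end. For brevity the sketch works on directional samples rather than on SH coefficient vectors, and the disk visibility model is a crude placeholder.

```python
import numpy as np

# Hedged sketch: accumulate blocker visibility in log space, then exponentiate once.
# The paper does this on SH coefficient vectors (log of the occlusion field,
# PCA-compressed); here the same algebra is shown on plain directional samples.

def disk_visibility(dirs, blocker_dir, cos_half_angle, floor=0.05):
    """Crude visibility of a disk blocker: `floor` inside its cone, 1 outside.

    The small floor keeps the logarithm finite; a real occlusion field is smooth."""
    cos_theta = dirs @ blocker_dir
    return np.where(cos_theta > cos_half_angle, floor, 1.0)

rng = np.random.default_rng(1)
dirs = rng.normal(size=(1024, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # roughly uniform sphere samples

log_vis = np.zeros(len(dirs))
for blocker_dir, cos_half in [((0.0, 0.0, 1.0), 0.9), ((0.0, 1.0, 0.0), 0.95)]:
    b = np.asarray(blocker_dir, dtype=float)
    log_vis += np.log(disk_visibility(dirs, b / np.linalg.norm(b), cos_half))

total_visibility = np.exp(log_vis)   # product of all blockers, recovered once
print(total_visibility.mean())       # approximate unoccluded fraction of the sphere
```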

6.
Soft shadows play an important role in photo-realistic rendering. Although there are many efficient soft shadow algorithms, most of them focus on the one-sided light source situation, where a planar light source is on the outside of the scene. In fact, in many situations, such as games, light sources are omnidirectional. They may be surrounded by a number of 3D objects. This paper proposes a soft shadow algorithm for the omnidirectional situation. We develop a concentric spherical representation to model the behaviour of omnidirectional light sources. To provide better rendering results, a novel summed-area-table-based filtering scheme for spherical functions is proposed. In addition, we utilize unicube mapping, which samples the spherical space more uniformly, to further improve the filtering quality.
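For reference, a hedged sketch of the summed-area-table filtering that the abstract extends to spherical functions: build a 2D prefix sum once, then any axis-aligned box average costs four lookups. The spherical/unicube-specific solid-angle weighting is not reproduced here.

```python
import numpy as np

# Hedged sketch of summed-area-table (SAT) box filtering on a plain 2D map;
# the paper applies the same idea to spherical functions in a unicube layout.

def build_sat(img):
    """Inclusive 2D prefix sum: sat[y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def box_average(sat, x0, y0, x1, y1):
    """Average of img over the inclusive rectangle [x0, x1] x [y0, y1] in O(1)."""
    total = sat[y1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total / ((x1 - x0 + 1) * (y1 - y0 + 1))

img = np.random.rand(256, 256)
sat = build_sat(img)
print(box_average(sat, 10, 20, 41, 51))   # 32x32 box filter with four lookups
```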

7.
We present a camera lens simulation model capable of producing advanced photographic phenomena in a general spectral Monte Carlo image rendering system. Our approach incorporates insights from geometrical diffraction theory, from optical engineering and from glass science. We show how to efficiently simulate all five monochromatic aberrations: spherical aberration, coma, astigmatism, field curvature and distortion. We also consider chromatic aberration, lateral colour and aperture diffraction. The inclusion of Fresnel reflection generates correct lens flares, and we present an optimized sampling method for path generation.

8.
We design and implement a real-time walkthrough system for virtual scenes based on spherical panoramas. Real-time navigation of the virtual scene is achieved by reprojecting the spherical panorama: by reprojecting the visible portion of the panorama onto the view plane, perspective views of the scene can be generated for arbitrary view directions. The reprojection algorithm simulates camera rotation, and changing the camera's field of view simulates zooming. Since direct reprojection cannot render spherical panoramas in real time, we carefully analyze the reprojection algorithm and propose an acceleration strategy based on lookup tables and incremental computation. Experimental results show that the optimized system supports real-time walkthroughs of panorama-based virtual scenes.
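To make the translated abstract above concrete, here is a hedged sketch of the basic reprojection it accelerates: each view-plane pixel is turned into a world-space ray and looked up in an equirectangular spherical panorama. Lookup tables and incremental computation would replace the per-pixel trigonometry; they are not shown here, and the texture-coordinate convention is an assumption.

```python
import numpy as np

# Hedged sketch: reproject an equirectangular spherical panorama onto a pinhole
# view plane. Camera rotation R simulates rotation; the field of view simulates zoom.

def render_view(pano, R, fov, width, height):
    """pano: HxWx3 equirectangular image; R: 3x3 rotation; fov: horizontal FOV in radians."""
    ph, pw, _ = pano.shape
    focal = 0.5 * width / np.tan(0.5 * fov)
    xs, ys = np.meshgrid(np.arange(width) - 0.5 * width,
                         np.arange(height) - 0.5 * height)
    dirs = np.stack([xs, ys, np.full_like(xs, focal)], axis=-1)
    dirs = dirs @ R.T                                     # rotate into world space
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))     # [-pi/2, pi/2]
    u = ((lon / (2.0 * np.pi) + 0.5) * (pw - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (ph - 1)).astype(int)
    return pano[v, u]                                     # nearest-neighbour lookup

pano = np.random.rand(512, 1024, 3)                       # hypothetical panorama data
view = render_view(pano, np.eye(3), np.radians(90.0), 320, 240)
print(view.shape)
```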

9.
Recent advances have made interactive ray tracing (IRT) possible on consumer desktop machines. These advances have brought about the potential for interactive global illumination (IGI) with enhanced realism through physically based lighting. IGI, unlike IRT, has a much higher computational complexity. Furthermore, since non-primary rays constitute the majority of the computation, the rays are predominantly incoherent, making impractical many of the methods that have made IRT possible. Two methods that have already shown promise in decreasing the computational time of the GI solution are interleaved sampling and adaptive rendering. Interleaved sampling is a generalized sampling scheme that smoothly blends between regular and irregular sampling while maintaining coherence. Adaptive rendering algorithms adjust rendering quality non-uniformly, using a guidance scheme. While adaptive rendering has been shown to provide speed-ups for off-line rendering, it has not been utilized in IRT due to its inherently incoherent nature. In this paper, we combine adaptive rendering and interleaved sampling within a component-based solution into a new approach we term adaptive interleaved sampling. This allows us to tailor new adaptive heuristics for interleaved sampling of the individual components of the GI solution, significantly improving overall performance. We present a novel component-based IGI framework for which we achieve interactive frame rates for a range of effects such as indirect diffuse lighting, soft shadows and single-scatter homogeneous participating media.
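A hedged sketch of plain interleaved sampling, the building block the abstract combines with adaptive heuristics: a small tile of pre-generated sample sets is repeated over the image so neighbouring pixels use different sample sets while pixels one tile apart share one. The adaptive guidance scheme of the paper is not reproduced.

```python
import numpy as np

# Hedged sketch of interleaved sampling with a 3x3 tile of sample sets.

rng = np.random.default_rng(0)
TILE = 3                                          # 3x3 interleaving tile
sample_sets = rng.random((TILE * TILE, 16, 2))    # 9 sets of 16 2D samples each

def samples_for_pixel(x, y):
    """Pick the sample set assigned to pixel (x, y) by its position in the tile."""
    set_index = (y % TILE) * TILE + (x % TILE)
    return sample_sets[set_index]

# Neighbouring pixels draw from different sets; pixels TILE apart share a set.
assert not np.array_equal(samples_for_pixel(0, 0), samples_for_pixel(1, 0))
assert np.array_equal(samples_for_pixel(0, 0), samples_for_pixel(3, 0))
```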

10.
We present a real-time framework which allows interactive visualization of relativistic effects for time-resolved light transport. We leverage data from two different sources: real-world data acquired with an effective exposure time of less than 2 picoseconds, using an ultra-fast imaging technique termed femto-photography, and a transient renderer based on ray-tracing. We explore the effects of time dilation, light aberration, frequency shift and radiance accumulation by modifying existing models of these relativistic effects to take into account the time-resolved nature of light propagation. Unlike previous works, we do not impose limiting constraints in the visualization, allowing the virtual camera to freely explore a reconstructed 3D scene depicting dynamic illumination. Moreover, we consider not only linear motion, but also acceleration and rotation of the camera. We further introduce, for the first time, a pinhole camera model into our relativistic rendering framework, and account for subsequent changes in focal length and field of view as the camera moves through the scene.
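For two of the effects named above, light aberration and frequency shift, the textbook special-relativity formulas are sketched below for a camera moving with speed beta (in units of c) along its view axis. These are the standard formulas only, under one common sign convention; the time-resolved modifications described in the paper are not reproduced.

```python
import numpy as np

# Hedged sketch of relativistic aberration and the Doppler factor, textbook forms.
# theta is measured in the rest frame between the direction of motion and the
# direction to the incoming light.

def aberrate(cos_theta, beta):
    """Relativistic aberration: cosine of the apparent angle in the moving camera frame."""
    return (cos_theta + beta) / (1.0 + beta * cos_theta)

def doppler_factor(cos_theta, beta):
    """Frequency ratio nu_observed / nu_emitted for a ray arriving at angle theta."""
    gamma = 1.0 / np.sqrt(1.0 - beta * beta)
    return 1.0 / (gamma * (1.0 - beta * cos_theta))

beta = 0.5
print(np.degrees(np.arccos(aberrate(np.cos(np.radians(90.0)), beta))))  # forward beaming: < 90 deg
print(doppler_factor(np.cos(np.radians(0.0)), beta))                    # head-on: blueshift sqrt(3)
```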

11.
GPU Shape Grammars provide a solution for interactive procedural generation, tuning and visualization of massive environment elements for both video games and production rendering. Our technique generates detailed models without explicit geometry storage. To this end we reformulate the grammar expansion for generation of detailed models at the tessellation control and geometry shader stages. Using the geometry generation capabilities of modern graphics hardware, our technique generates massive, highly detailed models. GPU Shape Grammars integrate within a scalable framework by introducing automatic generation of levels of detail at reduced cost. We apply our solution to interactive generation and rendering of scenes containing thousands of buildings and trees.

12.
We propose an efficient approach for interactive visualization of massive models with CPU ray tracing. A voxel-based hierarchical level-of-detail (LOD) framework is employed to minimize rendering time and required system memory. In a pre-processing phase, a compressed out-of-core data structure is constructed, which contains the original primitives of the model and the LOD voxels, organized into a kd-tree. During rendering, data is loaded asynchronously to ensure a smooth inspection of the model regardless of the available I/O bandwidth. With our technique, we are able to explore data sets consisting of hundreds of millions of triangles in real-time on a desktop PC with a quad-core CPU.

13.
Point-Based Global Illumination (PBGI) is a popular rendering method in special effects and motion picture productions. This algorithm provides a diffuse global illumination solution by caching radiance in a mesh-less hierarchical data structure during a preprocess, while solving for visibility over this cache, at rendering time and for each receiver, using microbuffers, which are localized depth and color buffers inspired by real-time rendering environments. As a result, noise-free ambient occlusion, indirect soft shadows and color bleeding effects are computed efficiently for high-resolution image output and in a temporally coherent fashion. We propose an evolution of this method to address the case of non-diffuse inter-reflections and refractions. While the original PBGI algorithm models radiance using spherical harmonics, we propose to use wavelets parameterized on the direction space to better localize the radiance representation in the presence of highly directional reflectance. We also propose a new importance-driven adaptive microbuffer model to capture accurately the incoming radiance at a point. Furthermore, we evaluate outgoing radiance using a fast wavelet radiance product and contain the induced larger memory footprint by encoding the wavelets hierarchically in the PBGI tree. As a result, our algorithm can handle non-Lambertian BSDF in the light transport simulation, reproducing caustics and multiple reflection/refraction bounces with a similar quality to bidirectional path tracing in a large number of cases and for only a fraction of its computation time. Our approach is simple to implement and easy to integrate into any existing PBGI framework, with an intuitive control on the approximation error. We evaluate it on a collection of example scenes.
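A hedged sketch of the 1D Haar decomposition underlying the wavelet representation mentioned above (the paper uses it in a direction-space, hierarchically encoded form that is not reproduced): highly directional radiance concentrates its energy in a few coefficients.

```python
import numpy as np

# Hedged sketch: non-normalized recursive Haar decomposition of a length-2^k signal.
# Averages go to the front, details to the back, recursing on the averages.

def haar_forward(signal):
    coeffs = np.asarray(signal, dtype=float).copy()
    n = len(coeffs)
    while n > 1:
        half = n // 2
        avg = 0.5 * (coeffs[0:n:2] + coeffs[1:n:2])
        diff = 0.5 * (coeffs[0:n:2] - coeffs[1:n:2])
        coeffs[:half], coeffs[half:n] = avg, diff
        n = half
    return coeffs

radiance_samples = np.array([0.1, 0.1, 0.2, 0.9, 3.0, 2.8, 0.3, 0.2])
print(haar_forward(radiance_samples))   # a few large coefficients capture the lobe
```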

14.
Objective: In real-time rendering, instant radiosity is one of the algorithms used to simulate indirect glossy reflections in real time. The instant-radiosity-based GGX SLC (stochastic light culling) algorithm evaluates indirect glossy reflection with the physically based GGX BRDF (bidirectional reflectance distribution function) lighting model; this is computationally expensive, and the cost grows roughly linearly with the number of virtual point lights. To address this, we propose a more efficient real-time algorithm for rendering indirect glossy reflections. Method: Using linearly transformed spherical distributions, the expensive GGX BRDF spherical distribution is approximated by a spherical distribution that is much cheaper to evaluate, and from it we derive fast, physically based lighting models for single-point-light and multi-point-light settings, both with lower cost than the GGX BRDF model. Building on this lighting model, we propose a real-time indirect glossy reflection algorithm that computes the radiance contributed by each virtual point light to a shading point and shades the point with the multi-light model, rendering indirect glossy reflections efficiently. Results: Experiments show that the improved algorithm matches the rendering quality of GGX SLC at higher efficiency, improving rendering performance by 20%-40%, and the scene...
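A hedged sketch of evaluating a linearly transformed spherical distribution, the mathematical tool named in the translated abstract: a cheap base lobe (here a clamped cosine) is warped by a 3x3 matrix M so that it can approximate a costlier lobe such as the GGX BRDF. The fitted M matrices, which depend on roughness and view angle, are the substance of such methods and are not reproduced here.

```python
import numpy as np

# Hedged sketch: evaluate D(w) = D0(wo) * |det(M^-1)| / ||M^-1 w||^3,
# where wo = normalize(M^-1 w) and D0 is a clamped cosine lobe around +Z.

def clamped_cosine(w):
    """Normalized cosine lobe around +Z: max(w_z, 0) / pi."""
    return max(w[2], 0.0) / np.pi

def ltc_eval(M, w):
    Minv = np.linalg.inv(M)
    wo = Minv @ w
    length = np.linalg.norm(wo)
    jacobian = abs(np.linalg.det(Minv)) / length**3
    return clamped_cosine(wo / length) * jacobian

# Example: an anisotropic stretch of the cosine lobe, evaluated for one direction.
M = np.diag([0.3, 0.8, 1.0])                 # hypothetical transform (not a fitted one)
w = np.array([0.2, 0.1, 0.97])
w /= np.linalg.norm(w)
print(ltc_eval(M, w))
```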

15.
熊元, 刘世光. 《软件学报》 (Journal of Software), 2014, 25(S2): 247-257
To address the low efficiency and complicated collision detection in current large-scale complex water surface simulation, we propose an ocean-scale solution for simulating complex water surfaces. First, a spherical projected grid method is proposed to simulate wave motion on large-scale dynamic water surfaces. Compared with the traditional projected grid method, it does not need to reconstruct a projection volume that directly intersects the sphere, offers higher rendering efficiency, and is well suited to graphics hardware acceleration. Second, an efficient method for simulating interactive complex water surfaces is designed, including the interaction between the water surface and rigid bodies and fast collision simulation between rigid bodies and terrain. In addition, general methods for rendering foam and coastlines are given. Experimental results show that the simulation is fairly realistic, reaches high rendering speed (FPS > 60), and is suitable for real-time environments such as computer games and virtual reality.

16.
Plants are important objects in virtual environments. High complexity of shape structure is found in plant communities. Level of detail (LOD) of plant geometric models becomes important for interactive forest rendering. We emphasize three major problems in current research: the time consumption in LOD model construction and extraction, the balance between visual effect and data compression, and the time consumption in the communication between the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU). We present a new foliage simplification framework for LOD modelling and forest rendering. By an uneven subdivision of the tree crown volume, the cost of LOD model construction is drastically reduced. With a GPU-oriented design of the LOD storage structure for foliage, the costly hierarchical traversal of a binary tree is replaced by a sequential lookup of an array. The structure also decreases the communication between the CPU and the GPU during rendering. In addition, leaf density is introduced to adapt compression to the local distribution of leaves, so that more visually relevant details are kept. According to foliage nature (broad leaves or needles), higher compression is finally reached using mixed polygon/line models. This framework is implemented on virtual scenes of simulated trees with high detail.

17.
The visual fidelity of bleeding simulation in a surgical simulator is critical, since it affects not only the degree of visual realism, but also the user's medical judgment and treatment in real-life settings. The conventional marching cubes surface rendering algorithm provides excellent visual results when rendering gushing blood; however, it is insufficient for blood flow, which is very common in surgical procedures, since in this case the rendered surface and depth textures of blood are rough. In this paper, we propose a new method called mixed depth rendering for rendering blood flow in surgical simulation. A smooth height field is created to minimize the height difference between neighboring particles on the bleeding surface. The color and transparency of each bleeding area are determined by the number of bleeding particles, which is consistent with the real visual effect. In addition, there is little extra computational cost. The rendering of blood flow in a variety of surgical scenarios shows that visual feedback is much improved. The proposed mixed depth rendering method is also used in a neurosurgery simulator that we developed.

18.
Particle-based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real-time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting is not sufficient, as several important features like complex shapes, holes, rifts or filaments cannot be perceived well. To address both problems we present a new technique that jointly supports transparency and ambient occlusion in a consistent illumination model. Our approach is based on the emission-absorption model of volume rendering. We provide analytic solutions to the volume rendering integral for several density distributions within a spherical glyph. Compared to constant transparency our approach preserves the three-dimensional impression of the glyphs much better. We approximate ambient illumination with a fast hierarchical voxel cone-tracing approach, which builds on a new real-time voxelization of the particle data. Our implementation achieves interactive frame rates for millions of static or dynamic particles without any preprocessing. We illustrate the merits of our method on real-world data sets, gaining several new insights.
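The simplest instance of the analytic glyph integrals described above is sketched below: for a constant density inside a sphere, the emission-absorption integral along a ray reduces to the chord length through the sphere. The paper derives closed forms for several non-constant radial densities as well, which are not reproduced here.

```python
import numpy as np

# Hedged sketch: opacity of a constant-density spherical glyph along a ray,
# parameterized by the impact parameter b (closest distance to the sphere centre).

def sphere_chord_length(b, R):
    """Chord length of a ray passing the sphere centre at distance b."""
    return 2.0 * np.sqrt(max(R * R - b * b, 0.0))

def glyph_alpha(b, R, sigma):
    """Opacity along that ray under the emission-absorption model: 1 - exp(-sigma * chord)."""
    return 1.0 - np.exp(-sigma * sphere_chord_length(b, R))

print(glyph_alpha(0.0, 1.0, 1.5))   # through the centre: densest, most opaque
print(glyph_alpha(0.9, 1.0, 1.5))   # near the silhouette: nearly transparent
```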

19.
Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real-time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics.
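A hedged sketch of the relighting step that all PRT variants share: once a per-point transport vector is available in some basis (spherical harmonics, Haar wavelets, or coefficients predicted by a network as above), relighting reduces to a dot product with the environment lighting projected into the same basis. The MLP and feature field that produce the transport coefficients in the paper are not reproduced.

```python
import numpy as np

# Hedged sketch: PRT relighting as a dot product in a shared orthonormal basis.

def relight(transport, lighting):
    """Outgoing radiance = <transport, lighting>."""
    return float(np.dot(transport, lighting))

n_coeffs = 64                                   # e.g. a truncated wavelet basis
transport = np.random.rand(n_coeffs) * 0.05     # hypothetical per-pixel transport vector
lighting = np.random.rand(n_coeffs)             # environment map in the same basis
print(relight(transport, lighting))
```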

20.
Hypertexturing can be a powerful way of adding rich geometric details to surfaces at low memory cost by using a procedural three-dimensional (3D) space distortion. However, this special kind of texturing technique still raises a major problem: the efficient control of the visual result. In this paper, we introduce a framework for interactive hypertexture modelling. This framework is based on two contributions. First, we propose a reformulation of the density modulation function. Our density modulation is based on the notion of a shape transfer function. This function, which can be easily edited by users, allows us to control in an intuitive way the visual appearance of the geometric details resulting from the space distortion. Second, we propose to use a hybrid surface and volume-point-based representation in order to be able to dynamically hypertexture arbitrary objects at interactive frame rates. The rendering consists of a combined splat- and raycasting-based direct volume rendering technique. The splats are used to model the volumetric object, while raycasting allows us to add the details. An experimental study on users shows that our approach improves the design of hypertextures and yet preserves their procedural nature.
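A hedged sketch of hypertexture-style density modulation with a user-editable transfer curve, in the spirit of the shape transfer function described above: a soft object density is perturbed by a procedural field, then reshaped by a 1D curve. The noise below is a cheap sum-of-sines stand-in, not a proper gradient noise, and the particular transfer curve is an illustrative assumption.

```python
import numpy as np

# Hedged sketch: hypertexture density = shape_transfer(base_density + noise).

def base_density(p, radius=1.0):
    """Soft sphere: 1 at the centre, 0 at `radius`, clamped outside."""
    return float(np.clip(1.0 - np.linalg.norm(p) / radius, 0.0, 1.0))

def pseudo_noise(p):
    """Cheap stand-in for 3D noise, roughly in [-1, 1]."""
    x, y, z = p
    return (np.sin(12.9898 * x + 4.1414 * y) * np.cos(7.23 * z + 2.0 * x) * 0.5
            + np.sin(3.7 * y - 5.1 * z) * 0.5)

def shape_transfer(d, threshold=0.3, sharpness=8.0):
    """Editable 1D curve mapping modulated density to final density (here a smooth step)."""
    return 1.0 / (1.0 + np.exp(-sharpness * (d - threshold)))

def hypertexture_density(p, amplitude=0.25):
    p = np.asarray(p, dtype=float)
    return shape_transfer(base_density(p) + amplitude * pseudo_noise(p))

print(hypertexture_density((0.2, 0.1, 0.4)))
```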
