Similar Literature
20 similar documents found (search time: 31 ms)
1.
The Bidirectional Texture Function (BTF) is a data-driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While the format faithfully records complex light interactions in the material, its main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artefacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking…). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline. We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artefacts.
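A minimal sketch of how such a decoder can be queried continuously, assuming a hypothetical one-hidden-layer MLP with random weights; the latent size, layer widths and activation are illustrative stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8
IN_DIM = LATENT_DIM + 3 + 3                     # latent code + light dir + view dir
W1, b1 = rng.normal(size=(64, IN_DIM)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(3, 64)) * 0.1, np.zeros(3)

def decode(latent, wi, wo):
    """Continuously query reflectance for any light/view pair (no interpolation)."""
    x = np.concatenate([latent, wi, wo])
    h = np.maximum(W1 @ x + b1, 0.0)            # ReLU hidden layer
    return np.maximum(W2 @ h + b2, 0.0)         # non-negative RGB reflectance

latent = rng.normal(size=LATENT_DIM)            # would come from the encoder
wi = np.array([0.0, 0.3, 0.954])                # light direction (unit length)
wo = np.array([0.2, 0.0, 0.98])                 # view direction (unit length)
print(decode(latent, wi, wo))                   # one RGB value per query
```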

2.
Photorealistic rendering of real world environments is important in a range of different areas, including visual special effects, interior/exterior modelling, architectural modelling, cultural heritage, computer games and automotive design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, viewer perception of an object depends on the lighting, on the object/material/surface characteristics, on the way the surface interacts with light (how light is reflected, scattered or absorbed by it), and on the impact these characteristics have on material appearance. To reproduce this, it is necessary to understand how materials interact with light, which is why the representation and acquisition of material models has become such an active research area. This survey of the state-of-the-art of BRDF Representation and Acquisition presents an overview of BRDF (Bidirectional Reflectance Distribution Function) models used to represent surface/material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials.

3.
Interactive Global Illumination Rendering of Spatially Dynamic Materials (cited by 1: 1 self-citation, 0 by others)
孙鑫, 周昆, 石教英. 《软件学报》 (Journal of Software), 2008, 19(7): 1783-1793
This paper proposes an interactive global illumination rendering algorithm for spatially dynamic materials. Materials are called spatially dynamic if the user may modify them during rendering and may apply different modifications to different parts of a single object. Because the final outgoing radiance depends nonlinearly on the materials, many existing interactive global illumination algorithms do not allow the user to modify materials at all; and when different parts of one object may carry different materials, the influence of the materials on the outgoing radiance becomes even more complex, so no existing interactive global illumination algorithm supports editing different parts of an object differently during rendering. The proposed method approximates a spatially dynamic material region by partitioning it into many subregions, each with a spatially uniform material. As light propagates through the scene it may be reflected successively by different subregions, and on this basis the final outgoing radiance is split into many parts. All materials are represented linearly in a set of basis materials; assigning these basis materials to the subregions of the scene yields different basis-material distributions, and the outgoing radiance of every part is precomputed under all of these distributions. At render time, combining the corresponding precomputed data according to each subregion material's basis coefficients yields global illumination effects at interactive rates.
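A minimal sketch of the runtime combination for the one-bounce parts, assuming hypothetical array shapes; `precomputed[b, s]` stands for the precomputed radiance of light last reflected by subregion `s` carrying basis material `b` (multi-bounce parts generalize to products of coefficients, as sketched for item 7 below):

```python
import numpy as np

N_BASIS, N_SUBREGIONS, N_PIXELS = 4, 2, 8
rng = np.random.default_rng(1)

# Hypothetical precomputed one-bounce radiance parts.
precomputed = rng.random((N_BASIS, N_SUBREGIONS, N_PIXELS))

def render(coeffs):
    """coeffs[s, b]: basis coefficients of subregion s's edited material."""
    image = np.zeros(N_PIXELS)
    for s in range(N_SUBREGIONS):
        for b in range(N_BASIS):
            image += coeffs[s, b] * precomputed[b, s]
    return image

coeffs = rng.random((N_SUBREGIONS, N_BASIS))    # material edits change only these
print(render(coeffs))                           # interactive recombination
```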

4.
In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance, enabling effects like ambient occlusion, indirect light, scattering and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images.
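A toy stand-in for the idea, assuming one random 3x3 convolution over stacked per-pixel attribute buffers; the actual Deep Shading network is a much deeper encoder-decoder CNN trained on example images:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, C_IN, C_OUT = 16, 16, 4, 3                # e.g. 3 normal channels + depth
attrs = rng.random((H, W, C_IN))                # screen-space attribute buffers
kernel = rng.normal(size=(3, 3, C_IN, C_OUT)) * 0.1   # untrained toy weights

def conv3x3(x, k):
    """Same-size 3x3 convolution with zero padding."""
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((x.shape[0], x.shape[1], k.shape[3]))
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]] @ k[dy, dx]
    return out

rgb = np.maximum(conv3x3(attrs, kernel), 0.0)   # ReLU -> toy "shaded" image
print(rgb.shape)                                # (16, 16, 3)
```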

5.
Physically based rendering is a well-understood technique to produce realistic-looking images. However, different algorithms exist for efficiency reasons, which work well in certain cases but fail or produce rendering artefacts in others. Few tools allow a user to gain insight into the algorithmic processes. In this work, we present such a tool, which combines techniques from information visualization and visual analytics with physically based rendering. It consists of an interactive parallel coordinates plot, with a built-in sampling-based data reduction technique to visualize the attributes associated with each light sample. Two-dimensional (2D) and three-dimensional (3D) heat maps depict any desired property of the rendering process. An interactively rendered 3D view of the scene displays animated light paths based on the user's selection to gain further insight into the rendering process. The provided interactivity enables the user to guide the rendering process for more efficiency. To show its usefulness, we present several applications based on our tool. This includes differential light transport visualization to optimize light setup in a scene, finding the causes of and resolving rendering artefacts, such as fireflies, as well as a path length contribution histogram to evaluate the efficiency of different Monte Carlo estimators.

6.
Soft shadows play an important role in photo-realistic rendering. Although there are many efficient soft shadow algorithms, most of them focus on the one-side light source situation, where a planar light source is on the outside of the scene. In fact, in many situations, such as games, light sources are omnidirectional. They may be surrounded by a number of 3D objects. This paper proposes a soft shadow algorithm for the omnidirectional situation. We develop a concentric spherical representation to model the behaviour of omnidirectional light sources. To provide better rendering results, a novel summed-area-table-based filtering scheme for spherical functions is proposed. In addition, we utilize unicube mapping, which samples the spherical space more uniformly, to further improve the filtering quality.
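The planar analogue of that filtering scheme is the classic 2D summed-area table, which returns any axis-aligned box average with four lookups; a minimal sketch (the paper's contribution, extending this to spherical functions, is not shown here):

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64))
sat = img.cumsum(0).cumsum(1)           # inclusive 2D prefix sums

def box_mean(y0, x0, y1, x1):
    """Mean over img[y0:y1+1, x0:x1+1] using four SAT taps."""
    total = sat[y1, x1]
    if y0 > 0: total -= sat[y0 - 1, x1]
    if x0 > 0: total -= sat[y1, x0 - 1]
    if y0 > 0 and x0 > 0: total += sat[y0 - 1, x0 - 1]
    return total / ((y1 - y0 + 1) * (x1 - x0 + 1))

print(np.isclose(box_mean(4, 4, 11, 11), img[4:12, 4:12].mean()))  # True
```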

7.
孙鑫, 周昆, 石教英. 《软件学报》 (Journal of Software), 2008, 19(4): 1004-1015
Existing precomputation-based global illumination rendering algorithms assume that the materials of the objects in the scene are fixed, so that the transport from incident lighting to outgoing radiance is a linear transformation; by precomputing this transformation, global illumination can be rendered in real time under dynamic lighting. Once materials may change, however, the transformation is no longer linear, so existing algorithms cannot be applied directly to scenes with dynamic materials. This paper proposes a method that shows the scene under direct and indirect illumination in real time while the user edits the materials of its objects. The radiance finally reaching the viewpoint is divided into multiple parts according to the number of preceding bounces and the materials reflected at each bounce; each part is proportional to the product of the materials encountered in turn, which converts the nonlinear problem into a linear one. All candidate materials are in turn represented as linear combinations of a set of basis materials. Assigning these bases to the objects in the scene yields various combinations, and the outgoing radiance of every part is precomputed for each combination. At render time, linearly combining the precomputed data with the coefficients of each object's material projected onto the bases yields the final global illumination result in real time. The method applies to scenes whose geometry, lighting and viewpoint are all fixed; materials are represented with bidirectional reflectance distribution functions, and refraction and translucency are not considered. The implementation handles up to two bounces and renders some very interesting global illumination effects, such as color bleeding and caustics, in real time.
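A minimal sketch of this bounce decomposition, assuming two objects, up to two bounces and hypothetical precomputed arrays `T1` (one bounce) and `T2` (two bounces); each two-bounce part is weighted by a product of basis coefficients, which is what restores linearity:

```python
import numpy as np

N_BASIS, N_OBJ, N_PIX = 3, 2, 4
rng = np.random.default_rng(4)
T1 = rng.random((N_BASIS, N_OBJ, N_PIX))                  # one-bounce parts
T2 = rng.random((N_BASIS, N_OBJ, N_BASIS, N_OBJ, N_PIX))  # two-bounce parts

def render(c):
    """c[o, b]: coefficient of basis b in object o's edited BRDF."""
    img = np.einsum('ob,bop->p', c, T1)            # linear in the materials
    img += np.einsum('ob,qd,bodqp->p', c, c, T2)   # product over both bounces
    return img

c = rng.random((N_OBJ, N_BASIS))                   # edited materials, projected
print(render(c))                                   # per-pixel radiance, real time
```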

8.
Differentiable rendering is a current research hotspot in virtual reality, computer graphics and computer vision. Its goal is to rework the photorealistic rendering pipeline of computer graphics, dominated by rasterization and ray tracing algorithms, so that gradient information can be back-propagated: changes in the output image are translated into changes in the input geometry and material attributes. Combined with optimization and deep learning techniques, it supports learning rendering models from data and performing inverse inference, and is a concrete embodiment of differentiable learning techniques applied to rendering in computer graphics, with broad application prospects in AR/VR content generation, 3D reconstruction, appearance capture and modeling, and inverse optical design. This paper surveys the current state of differentiable rendering, focusing on its research and applications in photorealistic rendering, 3D reconstruction, and appearance capture and modeling, and offers an outlook on future trends, in the hope of promoting the further development of differentiable techniques in academia and industry.

9.
This paper introduces a real-time rendering method for single-bounce glossy caustics created by GGX microsurfaces. Our method is based on stochastic light culling of virtual point lights (VPLs), which is an unbiased culling method that randomly determines the range of influence of light for each VPL. While the original stochastic light culling method uses a bounding sphere defined by that light range for coarse culling (e.g., tiled culling), we have further extended the method by calculating a tighter bounding ellipsoid for glossy VPLs. Such bounding ellipsoids can be calculated analytically under the classic Phong reflection model which cannot be applied to physically plausible materials used in modern computer graphics productions. In order to use stochastic light culling for such modern materials, this paper derives a simple analytical solution to generate a tighter bounding ellipsoid for VPLs on GGX microsurfaces. This paper also presents an efficient implementation for culling bounding ellipsoids in the context of tiled culling. When stochastic light culling is combined with interleaved sampling for a scene with tens of thousands of VPLs, this tiled culling is faster than conservative rasterization-based clustered shading which is a state-of-the-art culling technique that supports bounding ellipsoids. Using these techniques, VPLs are culled efficiently for completely dynamic single-bounce glossy caustics reflected from GGX microsurfaces.
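A minimal sketch of the underlying stochastic light culling step for a single VPL, assuming the common inverse-square acceptance probability p(d) = min(1, (r0/d)^2); the paper's actual contribution (tighter bounding ellipsoids for glossy GGX VPLs) is not reproduced here:

```python
import math, random

def shade(dist, intensity, r0, xi):
    """Contribution of one VPL at distance dist, after stochastic culling."""
    radius = r0 / math.sqrt(xi)                 # random influence range
    if dist > radius:                           # culled: no shading cost
        return 0.0
    p = min(1.0, (r0 / dist) ** 2)              # acceptance probability at dist
    return (intensity / dist ** 2) / p          # divide by p -> unbiased

random.seed(0)
d, I, r0, n = 5.0, 1.0, 1.0, 200000
# Monte Carlo check: averaged over xi in (0, 1], the culled estimator
# matches the unculled inverse-square falloff.
est = sum(shade(d, I, r0, 1.0 - random.random()) for _ in range(n)) / n
print(est, I / d ** 2)                          # both approximately 0.04
```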

10.
Many-light rendering is becoming more common and important as rendering goes into the next level of complexity. However, to calculate the illumination under many lights, state-of-the-art algorithms are still far from efficient, due to the separate consideration of light sampling and BRDF sampling. To deal with the inefficiency of many-light rendering, we present a novel light sampling method named BRDF-oriented light sampling, which selects lights based on importance values estimated using the BRDF's contributions. Our BRDF-oriented light sampling method works naturally with MIS, and allows us to dynamically determine the number of samples allocated for different sampling techniques. With our method, we can achieve a significantly faster convergence to the ground truth results, both perceptually and numerically, as compared to previous many-light rendering algorithms.
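A minimal sketch of contribution-proportional light selection, assuming a placeholder importance (BRDF value times inverse-square falloff); in this idealized case the estimator is exact, while in practice the estimate ignores visibility and other factors, so variance remains and MIS with BRDF sampling still helps:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_light(brdf_vals, dists, intensities):
    """Pick one light with probability proportional to its estimated contribution."""
    weights = brdf_vals * intensities / dists ** 2       # importance estimate
    pmf = weights / weights.sum()
    i = rng.choice(len(pmf), p=pmf)
    contribution = brdf_vals[i] * intensities[i] / dists[i] ** 2
    return contribution / pmf[i]                         # unbiased estimator

brdf_vals = rng.random(1000)              # BRDF evaluated toward each light
dists = rng.uniform(1.0, 10.0, 1000)
intensities = rng.uniform(0.5, 2.0, 1000)
est = sample_light(brdf_vals, dists, intensities)
ref = np.sum(brdf_vals * intensities / dists ** 2)
print(est, ref)   # identical here because the estimate is exact (no visibility)
```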

11.
Iridescence is a natural phenomenon that is perceived as gradual color changes, depending on the view and illumination direction. Prominent examples are the colors seen in oil films and soap bubbles. Unfortunately, iridescent effects are particularly difficult to recreate in real-time computer graphics. We present a high-quality real-time method for rendering iridescent effects under image-based lighting. Previous methods model dielectric thin-films of varying thickness on top of an arbitrary micro-facet model with a conducting or dielectric base material, and evaluate the resulting reflectance term, responsible for the iridescent effects, only for a single direction when using real-time image-based lighting. This leads to bright halos at grazing angles and over-saturated colors on rough surfaces, which causes an unnatural appearance that is not observed in ground truth data. We address this problem by taking the distribution of light directions, given by the environment map and surface roughness, into account when evaluating the reflectance term. In particular, our approach prefilters the first and second moments of the light direction, which are used to evaluate a filtered version of the reflectance term. We show that the visual quality of our approach is superior to the ones previously achieved, while having only a small negative impact on performance.
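A minimal sketch of the moment idea, assuming hypothetical environment samples and a placeholder spectral modulation in place of the paper's thin-film reflectance; only the prefiltered mean and variance of the light directions enter the filtered term, which is what suppresses the over-saturated oscillations:

```python
import numpy as np

rng = np.random.default_rng(6)
dirs = rng.normal(size=(256, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
dirs[:, 2] = np.abs(dirs[:, 2])                 # keep to the upper hemisphere

m1 = dirs.mean(axis=0)                          # first moment: mean direction
m2 = (dirs ** 2).mean(axis=0)                   # second moment, per axis
var = (m2 - m1 ** 2).sum()                      # scalar directional spread

def filtered_term(cos_theta, spread, freq=20.0):
    """Placeholder spectral oscillation, damped by the directional spread."""
    return 0.5 + 0.5 * np.cos(freq * cos_theta) * np.exp(-0.5 * freq ** 2 * spread)

cos_theta = m1[2] / np.linalg.norm(m1)
print(filtered_term(cos_theta, var))            # wide spread -> washed-out color
```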

12.
Robust and efficient rendering of complex lighting effects, such as caustics, remains a challenging task. While algorithms like vertex connection and merging can render such effects robustly, their significant overhead over a simple path tracer is not always justified and, as we show in this paper, also not necessary. In current rendering solutions, caustics often require the user to enable a specialized algorithm, usually a photon mapper, and hand-tune its parameters. But even with carefully chosen parameters, photon mapping may still trace many photons that the path tracer could sample well enough, or, even worse, that are not visible at all. Our goal is robust, yet lightweight, caustics rendering. To that end, we propose a technique to identify and focus computation on the photon paths that offer significant variance reduction over samples from a path tracer. We apply this technique in a rendering solution combining path tracing and photon mapping. The photon emission is automatically guided towards regions where the photons are useful, i.e., provide substantial variance reduction for the currently rendered image. Our method achieves better photon densities with fewer light paths (and thus photons) than emission guiding approaches based on visual importance. In addition, we automatically determine an appropriate number of photons for a given scene, and the algorithm gracefully degenerates to pure path tracing for scenes that do not benefit from photon mapping.

13.
Of the geometric polygons that make up a large-scale group of buildings, only a small fraction is visible in any given frame; real-time culling is a simplification technique aimed at exactly this kind of walkthrough scene. This paper applies the view-frustum culling method of real-time culling to reduce the number of polygons in the walkthrough scene and thereby accelerate rendering, and puts it into practice in a campus walkthrough system.
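A minimal sketch of view-frustum culling for bounding boxes, assuming frustum planes stored as inward-pointing (normal, d) pairs; the "positive vertex" test below is the standard AABB-plane rejection:

```python
import numpy as np

def aabb_outside_plane(bmin, bmax, normal, d):
    """True if the whole box lies on the outside of one frustum plane."""
    p = np.where(np.asarray(normal) >= 0, bmax, bmin)   # positive vertex
    return np.dot(normal, p) + d < 0.0                  # even it is outside

def visible(bmin, bmax, planes):
    return not any(aabb_outside_plane(bmin, bmax, n, d) for n, d in planes)

# Toy frustum: a single plane keeping the half-space x >= 0.
planes = [(np.array([1.0, 0.0, 0.0]), 0.0)]
print(visible(np.array([-2.0, 0, 0]), np.array([-1.0, 1, 1]), planes))  # False
print(visible(np.array([-1.0, 0, 0]), np.array([1.0, 1, 1]), planes))   # True
```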

14.
This paper explores techniques for simulating realistic cloud scenes and designs a real-time cloud rendering method. A cloud density map is generated by Perlin-noise modelling, and a density-aware Phong illumination model and a single-scattering illumination model are used to compute the reflected and transmitted light respectively, remedying the inability of traditional methods to realistically reflect sunlight from different angles and enabling the rendering of dynamic cloud scenes at different times of day. Introducing a GPU-based bump-texture algorithm and render-to-texture greatly raises the rendering speed of the cloud scene. Experimental results further show that the method meets the requirements of both realism and real-time performance.
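A minimal sketch of the density-map step, assuming smoothly interpolated value noise summed over octaves as a stand-in for true gradient-based Perlin noise; thresholding the fractal sum carves cloud shapes out of clear sky:

```python
import numpy as np

rng = np.random.default_rng(7)

def value_noise(size, cells):
    """One octave: bilinear interpolation of a random lattice with a fade curve."""
    grid = rng.random((cells + 1, cells + 1))
    t = np.linspace(0, cells, size, endpoint=False)
    i = t.astype(int); f = t - i
    f = f * f * (3 - 2 * f)                      # smoothstep fade
    x0, fx = i[None, :], f[None, :]
    y0, fy = i[:, None], f[:, None]
    top = grid[y0, x0] * (1 - fx) + grid[y0, x0 + 1] * fx
    bot = grid[y0 + 1, x0] * (1 - fx) + grid[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def cloud_density(size=128, octaves=4):
    d = np.zeros((size, size))
    for o in range(octaves):
        d += value_noise(size, 4 * 2 ** o) / 2 ** o   # fBm: halve each amplitude
    d /= d.max()
    return np.clip((d - 0.5) * 2.0, 0.0, 1.0)          # 0 = clear sky

print(cloud_density().shape)   # (128, 128) cloud density map
```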

15.
Rendering materials such as metallic paints, scratched metals and rough plastics requires glint integrators that can capture all micro-specular highlights falling into a pixel footprint, faithfully replicating surface appearance. Specular normal maps can be used to represent a wide range of arbitrary micro-structures. The use of normal maps comes with important drawbacks though: the appearance is dark overall due to back-facing normals and importance sampling is suboptimal, especially when the micro-surface is very rough. We propose a new glint integrator relying on a multiple-scattering patch-based BRDF addressing these issues. To do so, our method uses a modified version of microfacet-based normal mapping [SHHD17] designed for glint rendering, leveraging symmetric microfacets. To model multiple-scattering, we re-introduce the lost energy caused by a perfectly specular, single-scattering formulation instead of using expensive random walks. This reflectance model is the basis of our patch-based BRDF, enabling robust sampling and artifact-free rendering with a natural appearance. Additional calculation costs amount to about 40% in the worst cases compared to previous methods [YHMR16, CCM18].

16.
17.
In computer cinematography, artists routinely use non-physical lighting models to achieve desired appearances. This paper presents BendyLights, a non-physical lighting model where light travels nonlinearly along splines, allowing artists to control light direction and shadow position at different points in the scene independently. Since the light deformation is smoothly defined at all world-space positions, the resulting non-physical lighting effects remain spatially consistent, avoiding the frequent incongruences of many non-physical models. BendyLights are controlled simply by reshaping splines, using familiar interfaces, and require very few parameters. BendyLight control points can be keyframed to support animated lighting effects. We demonstrate BendyLights both in a real-time rendering system for editing and a production renderer for final rendering, where we show that BendyLights can also be used with global illumination.
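A minimal sketch of the core idea, assuming a Catmull-Rom spline and a brute-force nearest-sample lookup: the effective light direction at a shading point is taken from the tangent of the artist-reshaped spline near that point:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Standard Catmull-Rom evaluation between p1 and p2 for t in [0, 1]."""
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

ctrl = [np.array(p, float) for p in
        [(-1, 0, 0), (0, 0, 0), (1, 1, 0), (2, 1, 0)]]   # reshaped by the artist

ts = np.linspace(0.0, 1.0, 64)
curve = np.array([catmull_rom(*ctrl, t) for t in ts])

def light_dir_at(x):
    """Light direction at world point x: tangent of the nearest curve segment."""
    i = int(np.argmin(np.linalg.norm(curve - x, axis=1)))
    tangent = curve[min(i + 1, len(curve) - 1)] - curve[max(i - 1, 0)]
    return tangent / np.linalg.norm(tangent)

print(light_dir_at(np.array([0.5, 0.2, 0.0])))
```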

18.
A new technique is proposed for scene analysis, called “appearance clustering.” The key result of this approach is that the scene points can be clustered according to their surface normals, even when the geometry, material, and lighting are all unknown. This is achieved by analyzing an image sequence of a scene as it is illuminated by a smoothly moving distant light source. In such a scenario, the brightness measurements at each pixel form a “continuous appearance profile.” When the source path follows an unstructured trajectory (obtained, say, by smoothly hand-waving a light source), the locations of the extrema of the appearance profile provide a strong cue for the scene point's surface normal. Based on this observation, a simple transformation of the appearance profiles and a distance metric are introduced that, together, can be used with any unsupervised clustering algorithm to obtain isonormal clusters of a scene. We support our algorithm empirically with comprehensive simulations of the Torrance-Sparrow and Oren-Nayar analytic BRDFs, as well as experiments with 25 materials obtained from the MERL database of measured BRDFs. The method is also demonstrated on 45 examples from the CURET database, obtaining clusters on scenes with real textures such as artificial grass and ceramic tile, as well as anisotropic materials such as satin and velvet. The results of applying our algorithm to indoor and outdoor scenes containing a variety of complex geometry and materials are shown. As an example application, isonormal clusters are used for lighting-consistent texture transfer. Our algorithm is simple and does not require any complex lighting setup for data collection.
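A minimal sketch of the clustering stage, assuming synthetic appearance profiles and simplifying the paper's profile transform and distance metric to per-pixel normalization plus k-means; points that share a normal share extrema locations, so they land in the same cluster:

```python
import numpy as np

rng = np.random.default_rng(8)
F, N = 50, 300                                   # frames, scene points
true_phase = rng.choice([5, 20, 40], size=N)     # 3 underlying normals
t = np.arange(F)
profiles = np.cos(0.2 * (t[None, :] - true_phase[:, None]))  # extrema differ
profiles *= rng.uniform(0.2, 2.0, (N, 1))        # per-point albedo variation
profiles += rng.normal(0.0, 0.02, (N, F))        # measurement noise

# Normalize away albedo so only the extrema structure remains.
z = (profiles - profiles.mean(1, keepdims=True)) / profiles.std(1, keepdims=True)

def kmeans(x, k, iters=20):
    centers = [x[0]]                             # farthest-point initialization
    for _ in range(k - 1):
        d2 = np.min([((x - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(x[d2.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                       # Lloyd iterations
        labels = ((x[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean(0)
    return labels

labels = kmeans(z, 3)
print(np.bincount(labels))   # three isonormal clusters, roughly N/3 points each
```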

19.
Light field display (LFD) is considered a promising technology for reconstructing the distribution of light rays of a real 3D scene: it approximates the original light field of the displayed objects with all depth cues of human vision, including binocular disparity, motion parallax, color hints and correct occlusion relationships. Computer-generated content is currently widely used in LFD systems, so rich 3D content can be provided. This paper first introduces applications of light field technologies in display systems. It then thoroughly reviews virtual stereo content rendering techniques and their application scenarios, pointing out their pros and cons, and surveys the coding and correction algorithms used in these techniques according to the different characteristics of light field systems. The discussion shows that many problems remain in existing rendering techniques for LFD; new rendering algorithms are needed to solve the real-time light-field rendering problem for large-scale virtual scenes.

20.
Selecting informative and visually appealing views for 3D indoor scenes is beneficial for the housing, decoration, and entertainment industries. A set of views that exhibits the comfort, aesthetics, and functionality of a particular scene can attract customers and facilitate business transactions. However, selecting views for an indoor scene is challenging because the system has to consider not only the need to reveal as much information as possible, but also object arrangements, occlusions, and characteristics. Since many principles can guide view selection, and different principles apply under different circumstances, we achieve the goal by imitating popular photos on the Internet. Specifically, we select the view that optimizes the contour similarity of corresponding objects to the photo. Because the selected view can be inadequate when the object arrangements in the 3D scene and the photo differ, our system imitates many popular photos and selects a certain number of views. It then clusters the selected views and determines the view/cluster centers by a weighted average to finally exhibit the scene. Experimental results demonstrate that the views selected by our method are visually appealing.
