Similar articles
 20 similar articles found (search time: 15 ms)
1.
Dense dynamic aggregates of similar elements are frequent in natural phenomena and challenging to render under full real-time constraints. The optimal representation to render them changes drastically depending on the distance at which they are observed, ranging from sets of detailed textured meshes for near views to point clouds for distant ones. Our multiscale representation uses impostors to achieve the mid-range transition from mesh-based to point-based scales. To ensure a visual continuum, the impostor model should match the mesh as closely as possible on one side, and reduce to a single-pixel response that equals point rendering on the other. In this paper, we propose a model based on rich spherical impostors, able to combine precomputed as well as dynamic procedural data, and offering seamless transitions from close instanced meshes to distant points. Our approach is architected around an on-the-fly discrimination mechanism and intensively exploits the rough spherical geometry of the impostor proxy. In particular, we propose a new sampling mechanism to reconstruct novel views from the precomputed ones, together with a new conservative occlusion culling method, coupled with a two-pass rendering pipeline leveraging early-Z rejection. As a result, our system scales well and is even able to render sand, while supporting completely dynamic stackings.

2.
We present a physically based method to render the atmosphere of a planet from ground to space views. Our method is cheap to compute and, compared to previous successful methods, does not require any high-dimensional lookup tables (LUTs) and thus does not suffer from the visual artifacts associated with them. We also propose a new approximation to evaluate multiple light scattering within the atmosphere in real time. We take a new look at what it means to render natural atmospheric effects, and propose a set of simple lookup tables and parameterizations to render a sky and its aerial perspective. The atmosphere composition can change dynamically to match artistic visions and weather changes without requiring heavy LUT updates. The complete technique can be used in real-time applications such as games, simulators, or architecture pre-visualizations. The technique scales from power-efficient mobile platforms up to PCs with high-end GPUs, and is also useful for accelerating path tracing.

3.
Research in rendering large point clouds has traditionally focused on the generation and use of hierarchical acceleration structures that allow systems to load and render the smallest fraction of the data with the largest impact on the output. The generation of these structures is slow and time-consuming, however, and therefore ill-suited for tasks such as quickly looking at scan data stored in widely used unstructured file formats, or immediately displaying the results of point-cloud processing tasks. We propose a progressive method that is capable of rendering any point cloud that fits in GPU memory in real time, without the need to generate hierarchical acceleration structures in advance. Our method supports data sets with a large number of attributes per point, achieves a load performance of up to 100 million points per second, displays already loaded data in real time while the remaining data is still being loaded, and is capable of rendering up to one billion points using an on-the-fly generated shuffled vertex buffer as its data structure, instead of slow-to-generate hierarchical structures. Shuffling is done during loading so that holes can be efficiently filled with random subsets, which leads to higher-quality convergence behavior.
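A minimal CPU-side sketch of the shuffled-buffer idea described above (not the paper's GPU pipeline): if the points are shuffled while loading, any prefix of the buffer is a uniform random subset of the whole cloud, so partially loaded data can already be displayed without holes clustering in one region. The function names and numpy usage are illustrative.

```python
# Sketch: a shuffled vertex buffer makes every prefix a uniform subsample.
import numpy as np

def load_shuffled(points: np.ndarray, seed: int = 0) -> np.ndarray:
    """Return a shuffled copy of an (N, 3) point array (stand-in for the
    shuffled vertex buffer built while streaming data from disk)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(points))
    return points[order]

def visible_prefix(buffer: np.ndarray, loaded_fraction: float) -> np.ndarray:
    """Any prefix of the shuffled buffer is a uniform random subset, so
    rendering the first `loaded_fraction` of points already covers the model."""
    n = int(len(buffer) * loaded_fraction)
    return buffer[:n]

if __name__ == "__main__":
    pts = np.random.rand(1_000_000, 3)      # placeholder point cloud
    buf = load_shuffled(pts)
    preview = visible_prefix(buf, 0.05)     # 5% loaded -> uniform preview
    print(preview.shape)
```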

4.
Several fast global illumination algorithms rely on the Virtual Point Lights framework. This framework separates illumination into two steps: first, propagate radiance in the scene and store it in virtual lights, then gather illumination from these virtual lights. To accelerate the second step, virtual lights and receiving points are grouped hierarchically, for example using Multi-Dimensional Lightcuts. Computing visibility between clusters of virtual lights and receiving points is a bottleneck. Separately, matrix completion algorithms reconstruct a complete low-rank matrix from an incomplete set of sampled elements. In this paper, we use adaptive matrix completion to approximate visibility information after an initial clustering step. We reconstruct visibility information using as few as 10% to 20% of the samples for most scenes, and combine it with shading information computed separately, in parallel on the GPU. Overall, our method computes global illumination three or more times faster than previous state-of-the-art methods.
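To make the matrix-completion step concrete, here is a generic low-rank completion sketch (iterative SVD truncation with the observed entries held fixed), standing in for the adaptive completion the paper applies to cluster-to-cluster visibility; the function names, the rank, and the 15% sampling rate are illustrative assumptions.

```python
# Generic low-rank matrix completion by iterative SVD truncation, a stand-in
# for the adaptive completion used on sparsely sampled visibility matrices.
import numpy as np

def complete_low_rank(sampled, mask, rank=8, iters=100):
    """sampled: matrix with known entries (zeros elsewhere), mask: 1 where an
    entry was sampled. Returns a low-rank estimate consistent with samples."""
    X = sampled * mask
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                            # keep only the leading rank
        low_rank = (U * s) @ Vt
        # keep observed entries fixed, fill unobserved ones from the estimate
        X = mask * sampled + (1 - mask) * low_rank
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # synthetic "visibility" matrix with approximately low-rank structure
    true_vis = (rng.random((200, 5)) @ rng.random((5, 200)) > 1.2).astype(float)
    mask = (rng.random(true_vis.shape) < 0.15).astype(float)   # ~15% samples
    est = complete_low_rank(true_vis * mask, mask)
    print("mean abs error:", np.abs(est - true_vis).mean())
```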

5.
We present a method for estimating the main properties of human skin, leveraging a hyperspectral dataset of skin tones synthetically generated through a biophysical layered skin model and Monte Carlo light transport simulations. Our approach learns the mapping between the skin parameters and diffuse skin reflectance in this space through an encoder-decoder network. We assess performance on both RGB and spectral reflectance up to 1 μm, allowing the model to recover visible and near-infrared data. Instead of restricting the parameters to values in the ranges reported in the medical literature, we allow the model to exceed such ranges to gain the expressiveness needed to recover outliers such as beards, eyebrows, rashes, and other imperfections. The continuity of our albedo space makes it possible to recover smooth textures of skin properties, enabling reflectance manipulations through meaningful edits of the skin properties. The space is robust under different illumination conditions, and shows high spectral similarity with the largest current datasets of spectral measurements of real human skin while expanding their gamut.

6.
Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting smoothness in image space. These methods generate image gradients and solve an image reconstruction problem from the rendered image and the gradient images. Recently, prior work proposed gradient-domain volumetric photon density estimation for homogeneous participating media. However, its image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep-learning-based reconstruction methods have been explored for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction in gradient-domain volumetric photon density estimation, more specifically volumetric photon mapping, using a variant of GradNet with an encoded shift connection and a separate auxiliary feature branch, which includes volume-based auxiliary features such as transmittance and photon density. Our network smooths the images at a global scale and preserves high-frequency details at a small scale. We demonstrate that our network produces higher-quality results than previous work. Although we only considered volumetric photon mapping, it is straightforward to extend our method to other forms, such as beam radiance estimation.

7.
We present a novel approach for rendering volumetric data that includes the Doppler effect of light. Similar to the acoustic Doppler effect, which is caused by relative motion between a sound emitter and an observer, light waves also experience compression or expansion when emitter and observer exhibit relative motion. We account for this by employing spectral volume rendering in an emission–absorption model, with the volumetric matter moving according to an accompanying vector field, and emitting and attenuating light at wavelengths subject to the Doppler effect. By introducing a novel piecewise linear representation of the involved light spectra, we achieve accurate volume rendering at interactive frame rates. We compare our technique to rendering with a traditional point-based spectral representation, and demonstrate its utility using a simulation of galaxy formation.
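A small sketch of the piecewise linear spectrum idea under a simple non-relativistic Doppler model, lambda' = lambda * (1 + v_r / c) with v_r the radial velocity (positive when receding): the spectrum is stored as wavelength knots with linear interpolation between them, and the shift only moves the knots. This is an illustrative assumption, not the paper's exact formulation.

```python
# Sketch: Doppler-shifting an emission spectrum stored as a piecewise linear
# function (wavelength knots + values), evaluated by linear interpolation.
import numpy as np

C = 2.998e8  # speed of light, m/s

def doppler_shift_spectrum(wavelengths_nm, values, radial_velocity):
    """Shift the knot positions of a piecewise linear spectrum; the values at
    the knots are unchanged, only the wavelengths move."""
    factor = 1.0 + radial_velocity / C
    return wavelengths_nm * factor, values

def sample_spectrum(wavelengths_nm, values, query_nm):
    """Evaluate the piecewise linear spectrum at arbitrary wavelengths."""
    return np.interp(query_nm, wavelengths_nm, values)

if __name__ == "__main__":
    knots = np.array([400.0, 500.0, 600.0, 700.0])    # nm
    emission = np.array([0.1, 0.8, 0.4, 0.05])
    # emitter receding at 3000 km/s -> spectrum shifted toward the red
    shifted, emission = doppler_shift_spectrum(knots, emission, 3.0e6)
    print(sample_spectrum(shifted, emission, np.array([500.0, 650.0])))
```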

8.
Efficient visibility computation is a prominent requirement when designing automated camera control techniques for dynamic 3D environments; computer games, interactive storytelling, and 3D media applications all need to track 3D entities while ensuring their visibility and delivering a smooth cinematic experience. Addressing this problem requires sampling a large set of potential camera positions and estimating visibility for each of them, which in practice is intractable despite the efficiency of ray-casting techniques on recent platforms. In this work, we introduce a novel GPU rendering technique to efficiently compute occlusions of tracked targets in Toric Space coordinates – a parametric space designed for cinematic camera control. We then rely on this occlusion evaluation to derive an anticipation map predicting occlusions for a continuous set of cameras over a user-defined time window. Finally, we design a camera motion strategy exploiting this anticipation map to minimize the occlusions of tracked entities over time. The key features of our approach are demonstrated through comparison with traditionally used ray-casting on benchmark scenes, and through integration in multiple game-like 3D scenes with heavy, sparse, and dense occluders.

9.
We analyze the joint efforts made by the geometry processing and the numerical analysis communities in the last decades to define and measure the concept of “mesh quality”. Researchers have been striving to determine how, and how much, the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depends on the particular mesh adopted to model the problem, and which geometrical features of the mesh most influence the result. The goal was to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results in a target range of accuracy. We overview the most common quality indicators, measures, or metrics that are currently used to evaluate the goodness of a discretization and drive mesh generation or mesh coarsening/refinement processes. We analyze a number of local and global indicators, defined over two- and three-dimensional meshes with any type of elements, distinguishing between simplicial, quadrangular/hexahedral, and generic polytopal elements. We also discuss mesh optimization algorithms based on the above indicators and report common libraries for mesh analysis and quality-driven mesh optimization.

10.
By-example aperiodic tilings are popular texture synthesis techniques that allow fast, on-the-fly generation of unbounded, non-periodic textures whose appearance matches an arbitrary input sample called the “exemplar”. But by relying on uniform random sampling, these algorithms fail to preserve the autocovariance function, resulting in correlations that do not match those in the exemplar. The output can then be perceived as excessively random. In this work, we present a new method that preserves the autocovariance function of the exemplar well. It consists of fetching content with an importance sampler that takes the explicit autocovariance function as its probability density function (pdf). Our method can be controlled to increase or decrease the randomness of the texture. Besides significantly improving synthesis quality for classes of textures characterized by pronounced autocovariance functions, we also propose a real-time tiling and blending scheme that generates high-quality textures faster than previous algorithms, with minimal downsides, by reducing the number of texture fetches.
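The following sketch illustrates the core mechanism on a grayscale exemplar, assuming the autocovariance is estimated via the Wiener-Khinchin theorem and clamped to a non-negative pdf; the paper's real-time tiling and blending scheme is not reproduced, and all names are illustrative.

```python
# Sketch: turn the exemplar's autocovariance into a pdf and importance-sample
# content offsets from it instead of sampling them uniformly.
import numpy as np

def autocovariance_pdf(exemplar):
    """Autocovariance of a 2D array (via the power spectrum), shifted to be
    non-negative and normalized so it can serve as a sampling pdf over offsets."""
    x = exemplar - exemplar.mean()
    spectrum = np.abs(np.fft.fft2(x)) ** 2
    acov = np.real(np.fft.ifft2(spectrum)) / x.size
    pdf = acov - acov.min()        # crude clamp; negative lobes are discarded
    return pdf / pdf.sum()

def sample_offsets(pdf, count, rng):
    """Draw (row, col) offsets with probability proportional to the pdf."""
    flat = rng.choice(pdf.size, size=count, p=pdf.ravel())
    return np.unravel_index(flat, pdf.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    exemplar = rng.random((64, 64))           # stand-in for a texture exemplar
    pdf = autocovariance_pdf(exemplar)
    rows, cols = sample_offsets(pdf, 10, rng)
    print(list(zip(rows.tolist(), cols.tolist())))
```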

11.
Oriented bounding box (OBB) hierarchies can be used instead of hierarchies based on axis-aligned bounding boxes (AABBs), providing tighter fits to the underlying geometric structures and resulting in improved interference tests, such as ray-geometry intersections. In this paper, we present a method for the fast, parallel transformation of an existing bounding volume hierarchy (BVH) based on AABBs into a hierarchy based on oriented bounding boxes. To this end, we parallelise a high-quality OBB extraction algorithm from the literature to operate as a standalone OBB estimator, and further extend it to efficiently build an OBB hierarchy in a bottom-up manner. This agglomerative approach allows for fast parallel execution and the formation of arbitrary, high-quality OBBs in bounding volume hierarchies. The method is fully implemented on the GPU and extensively evaluated with ray intersections.
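As an illustration of what a standalone OBB estimator produces, here is a simple PCA-based fit (principal axes of the vertex covariance). The paper parallelises a higher-quality extraction algorithm on the GPU, so this is only a stand-in showing the inputs and outputs; in a bottom-up pass the same routine could be applied to the merged corner points of two child boxes.

```python
# Sketch: fit an oriented bounding box to points via PCA of their covariance.
import numpy as np

def fit_obb(points):
    """points: (N, 3). Returns (center, axes as 3x3 rows, half_extents)."""
    mean = points.mean(axis=0)
    cov = np.cov((points - mean).T)
    _, eigvecs = np.linalg.eigh(cov)       # columns = principal directions
    axes = eigvecs.T                       # rows are the OBB axes
    local = (points - mean) @ axes.T       # coordinates in the OBB frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    center = mean + 0.5 * (lo + hi) @ axes
    return center, axes, 0.5 * (hi - lo)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # elongated, anisotropic cluster: an OBB fits much tighter than an AABB
    pts = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.2])
    c, R, h = fit_obb(pts)
    print("half extents:", h)
```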

12.
Superior human pose and shape reconstruction from monocular images depends on removing the ambiguities caused by occlusions and shape variance. Recent works succeed with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth. However, 3D ground truth is neither abundant nor efficient to obtain. In this paper, we introduce body part segmentation as critical supervision. Part segmentation not only indicates the shape of each body part but also helps to infer the occlusions among parts. To improve reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation in neural networks or optimization loops. We also introduce a general parametric model, engaged in the rendering pipeline as an intermediate representation between skeletons and detailed shapes, which consists of primitive geometries for better interpretability. The proposed approach combines parameter regression, body model optimization, and detailed model registration. Experimental results demonstrate that the proposed method achieves a balanced evaluation of pose and shape, and outperforms state-of-the-art approaches on the Human3.6M, UP-3D, and LSP datasets.

13.
Optimal transport is a long-standing theory that has been studied in depth from both theoretical and numerical points of view. Starting in the 1950s, this theory has also found many applications in operational research. Over the last 30 years it has spread to computer vision and computer graphics and is now becoming hard to ignore. Still, its mathematical complexity can make it difficult to comprehend, and as such, computer vision and computer graphics researchers may find it hard to follow recent developments in their field related to optimal transport. This survey first briefly introduces the theory of optimal transport in layman's terms, as well as the most common numerical techniques to solve it. More importantly, it presents applications of these numerical techniques to various computer graphics and vision problems, ranging from image processing, geometry processing, rendering, and fluid simulation to computational optics, and many more. It is aimed at computer graphics researchers wishing to follow optimal transport research in their field, as well as optimal transport researchers willing to find applications for their numerical algorithms.

14.
The outset of realistic rendering is a desire to reproduce the appearance of the real world. Rendering techniques therefore operate at a scale corresponding to the size of objects that we observe with our naked eyes. At the same time, rendering techniques must be able to deal with objects of nearly arbitrary shapes and materials. These requirements lead to techniques that oftentimes leave the task of setting the optical properties of the materials to the user. Matching the appearance of real objects by manually adjusting optical properties is, however, nearly impossible. We can render objects with a plausible appearance in this way, but cannot compare the appearance of a manufactured item to that of its digital twin. This is especially true in the case of translucent objects, where we need more than a goniometric measurement of the optical properties. In this survey, we provide an overview of forward and inverse models for acquiring the optical properties of translucent materials. We map out the efforts in graphics research in this area and describe techniques available in related fields. Our objective is to provide a better understanding of the tools currently available for appearance specification when it comes to digital representations of real translucent objects.

15.
Effective compression of densely sampled BRDF measurements is critical for many graphics and vision applications. In this paper, we present DeepBRDF, a deep-learning-based representation that can significantly reduce the dimensionality of measured BRDFs while retaining high recovery quality. We treat each measured BRDF as a sequence of image slices and design a deep autoencoder with a masked L2 loss to discover a nonlinear low-dimensional latent space of the high-dimensional input data. Thorough experiments verify that the proposed method clearly outperforms PCA-based strategies in BRDF data compression and is more robust. We demonstrate the effectiveness of DeepBRDF with two applications. For BRDF editing, we can easily create a new BRDF by navigating the low-dimensional manifold of DeepBRDF, guaranteeing smooth transitions and high physical plausibility. For BRDF recovery, we design another deep neural network to automatically generate the full BRDF data from a single input image. Aided by our DeepBRDF learned from real-world materials, a wide range of reflectance behaviors can be recovered with high accuracy.
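A tiny sketch of a masked L2 loss as described above, assuming a binary validity mask over one BRDF image slice so unmeasured or invalid directions do not contribute to the reconstruction error; the autoencoder itself is omitted and the slice layout is illustrative.

```python
# Sketch: mean squared reconstruction error restricted to valid entries.
import numpy as np

def masked_l2(pred, target, mask):
    """Mean squared error computed only over entries where mask == 1."""
    mask = mask.astype(float)
    diff2 = mask * (pred - target) ** 2
    return diff2.sum() / np.maximum(mask.sum(), 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    target = rng.random((90, 90))           # stand-in for one BRDF image slice
    pred = target + 0.01 * rng.normal(size=target.shape)
    mask = rng.random(target.shape) < 0.8   # ~20% of entries treated as unmeasured
    print(masked_l2(pred, target, mask))
```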

16.
In this paper we develop an in-depth theoretical investigation of the discrete Hamiltonian eigenbasis, which remains largely unexplored in the geometry processing community. This choice is supported by the fact that Dirichlet eigenfunctions can be equivalently computed by defining a Hamiltonian operator, whose potential energy and localization region can be controlled with ease. We vary the potential energy continuously and study the relationship between the Dirichlet Laplacian and the Hamiltonian eigenbases using the functional map formalism. We develop a global analysis to capture the asymptotic behavior of the eigenpairs. We then focus on their local interactions, namely the veering patterns that arise between proximal eigenvalues. Armed with this knowledge, we are able to track the eigenfunctions in all possible configurations, shedding light on the nature of the functional maps. We exploit the Hamiltonian-Dirichlet connection in a partial shape matching problem, obtaining state-of-the-art results, and provide directions where our theoretical findings could be applied in future research.
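A minimal numerical illustration of the Hamiltonian construction, assuming a 1D chain Laplacian as a crude stand-in for a mesh Laplacian: adding a diagonal potential that is high outside a chosen region localizes the low-frequency eigenfunctions inside it, which is the control mechanism the abstract refers to. All sizes and the potential value are illustrative.

```python
# Sketch: Hamiltonian H = L + diag(V) on a 1D chain; the low potential region
# is where the first eigenfunctions concentrate.
import numpy as np

def chain_laplacian(n):
    L = 2.0 * np.eye(n)
    idx = np.arange(n - 1)
    L[idx, idx + 1] = -1.0
    L[idx + 1, idx] = -1.0
    L[0, 0] = L[-1, -1] = 1.0          # Neumann-like boundary rows
    return L

def hamiltonian_eigenbasis(n=200, region=slice(50, 120), potential=50.0, k=6):
    V = np.full(n, potential)
    V[region] = 0.0                     # low potential = localization region
    H = chain_laplacian(n) + np.diag(V)
    eigvals, eigvecs = np.linalg.eigh(H)
    return eigvals[:k], eigvecs[:, :k]

if __name__ == "__main__":
    vals, vecs = hamiltonian_eigenbasis()
    inside = np.sum(vecs[50:120, 0] ** 2)   # energy of the 1st eigenfunction
    print("eigenvalues:", np.round(vals, 3), " mass inside region:", round(inside, 3))
```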

17.
We present an efficient new method for computing Mandelbrot-like fractals (Julia sets) that approximate a user-defined shape. Our algorithm is orders of magnitude faster than previous methods, as it entirely sidesteps the need for a time-consuming numerical optimization. It is also more robust, succeeding on shapes where previous approaches failed. The key to our approach is a versor-modulus analysis of fractals that allows us to formulate a novel shape modulus function that directly controls the broad shape of a Julia set, while keeping fine-grained fractal details intact. Our formulation contains flexible artistic controls that allow users to seamlessly add fractal detail to desired spatial regions, while transitioning back to the original shape in others. No previous approach allows Mandelbrot-like details to be “painted” onto meshes.
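For reference, the standard escape-time construction of a quadratic Julia set that this kind of shape control builds on; the versor-modulus analysis and shape modulus function themselves are not reproduced here, and the constant c and resolution are arbitrary choices.

```python
# Baseline escape-time rendering of a quadratic Julia set z <- z^2 + c.
import numpy as np

def julia_escape_time(c, resolution=512, extent=1.6, max_iter=200):
    """Return an array of iteration counts before escape (|z| > 2)."""
    xs = np.linspace(-extent, extent, resolution)
    ys = np.linspace(-extent, extent, resolution)
    z = xs[None, :] + 1j * ys[:, None]
    counts = np.zeros(z.shape, dtype=np.int32)
    alive = np.ones(z.shape, dtype=bool)
    for i in range(max_iter):
        z[alive] = z[alive] ** 2 + c        # only iterate points still inside
        escaped = alive & (np.abs(z) > 2.0)
        counts[escaped] = i
        alive &= ~escaped
    return counts

if __name__ == "__main__":
    img = julia_escape_time(c=-0.8 + 0.156j)
    print(img.min(), img.max())
```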

18.
In this paper, we articulate a novel plastic phase-field (PPF) method that tightly couples the phase field with plastic treatment to efficiently simulate ductile fracture with GPU optimization. At the theoretical level of physically based modeling and simulation, our PPF approach assumes that the fracture sensitivity of the material increases with plastic strain accumulation. As a result, we first develop a hardening-related fracture toughness function for phase-field evolution. Second, we follow the associative flow rule and adopt a novel degraded von Mises yield criterion. In this way, we establish the tight coupling of the phase field and plastic treatment, with which our PPF method can exhibit distinct elastoplasticity, necking, and fracture characteristics during ductile fracture simulation. At the numerical level, towards GPU optimization, we further devise an advanced parallel framework that takes full advantage of the hierarchical architecture. Our strategy dramatically enhances the computational efficiency of preprocessing and phase-field evolution for our PPF method with the material point method (MPM). In our extensive experiments on a variety of benchmarks, the performance gain of our method reaches a 1.56× speedup over the primary GPU MPM. Finally, our comprehensive simulation results confirm that this new PPF method can efficiently and realistically simulate complex ductile fracture phenomena in 3D interactive graphics and animation.

19.
Physically based rendering of glittering surfaces is a challenging problem in computer graphics. Several methods have proposed off-line solutions, but none is dedicated to high-performance graphics. In this work, we propose a novel physically based BRDF for real-time rendering of glints. Our model can reproduce the appearance of sparkling materials (rocks, rough plastics, glitter fabrics, etc.). Compared to the previous real-time method [ZK16], which is not physically based, our BRDF uses normalized NDFs and converges to the standard microfacet BRDF [CT82] for a large number of microfacets. Our method procedurally computes NDFs with hundreds of sharp lobes. It relies on a dictionary of 1D marginal distributions: at each location two of them are randomly picked and multiplied (to obtain an NDF), rotated (to increase variety), and scaled (to control standard deviation/roughness). The dictionary is multiscale, does not depend on roughness, and has a low memory footprint (less than 1 MiB).
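A sketch of the dictionary mechanism, assuming the 1D marginals are tabulated multi-lobe densities: two marginals are picked, the slope frame is rotated and scaled by the roughness, and the two are multiplied to evaluate a sharp NDF. The Gaussian-mixture marginals and all parameter values are illustrative, not the paper's actual dictionary.

```python
# Sketch: evaluate a many-lobed NDF as the product of two rotated, scaled
# 1D marginal distributions over slope space.
import numpy as np

def make_marginal(rng, lobes=64, domain=4.0, resolution=1024):
    """A tabulated 1D multi-lobe marginal on [-domain, domain], normalized."""
    xs = np.linspace(-domain, domain, resolution)
    centers = rng.uniform(-domain, domain, lobes)
    values = np.exp(-0.5 * ((xs[:, None] - centers[None, :]) / 0.02) ** 2).sum(axis=1)
    values /= values.sum() * (xs[1] - xs[0])          # unit integral
    return xs, values

def eval_ndf(slope_x, slope_y, marg_a, marg_b, theta, roughness):
    """Rotate and scale the slope, then multiply the two 1D marginals."""
    xs_a, pa = marg_a
    xs_b, pb = marg_b
    c, s = np.cos(theta), np.sin(theta)
    u = (c * slope_x - s * slope_y) / roughness
    v = (s * slope_x + c * slope_y) / roughness
    return np.interp(u, xs_a, pa) * np.interp(v, xs_b, pb) / (roughness ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    a, b = make_marginal(rng), make_marginal(rng)     # two dictionary entries
    print(eval_ndf(0.1, -0.2, a, b, theta=0.7, roughness=0.3))
```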

20.
Over the last decades, producing convincing facial animation has garnered great interest, which has only accelerated with the recent explosion of 3D content in both entertainment and professional activities. The use of motion capture and retargeting has arguably become the dominant solution to address this demand. Yet, despite their high level of quality and automation, performance-based animation pipelines still require manual cleaning and editing to refine raw results, a time- and skill-demanding process. In this paper, we look to leverage machine learning to make facial animation editing faster and more accessible to non-experts. Inspired by recent image inpainting methods, we design a generative recurrent neural network that generates realistic motion into designated segments of an existing facial animation, optionally following user-provided guiding constraints. Our system handles different supervised or unsupervised editing scenarios such as motion filling during occlusions, expression corrections, semantic content modifications, and noise filtering. We demonstrate the usability of our system on several animation editing use cases.
