Similar Literature
20 similar documents retrieved.
1.
Computing direct illumination efficiently is still a problem of major significance in computer graphics. The evaluation involves an integral over the surface areas of the light sources in the scene. Because this integral typically features many discontinuities, introduced by the visibility term and complex material functions, Monte Carlo integration is one of the few general techniques that can be used to compute it. In this paper, we propose to evaluate the direct illumination using line samples instead of point samples. A direct consequence of line sampling is that the two-dimensional integral over the area of the light source is reduced to a one-dimensional integral. We exploit this dimensional reduction by relying on the property that commonly used sampling patterns, such as stratified sampling and low-discrepancy sequences, converge faster when the dimension of the integration domain is reduced. We show that, while line sampling is generally more computationally intensive than point sampling, the variance of a line sample is smaller than that of a point sample, resulting in a higher order of convergence.
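As a rough, self-contained illustration of the dimensional-reduction argument (a toy integrand stands in for the emitter and geometry terms; this is not the paper's estimator), the sketch below compares stratified 2D point sampling with a stratified 1D "line" estimator in which the integral along one axis is taken in closed form. With stratification, the error of the 1D estimator falls off at a higher rate in the number of strata.

```python
import random, math

def f(u, v):                      # toy integrand on the unit square (stand-in for the emitter term)
    return 1.0 / (1.0 + u + v)

def line_integral(u):             # closed-form integral of f along v for a fixed u (one "line sample")
    return math.log(2.0 + u) - math.log(1.0 + u)

def point_estimate(n):            # stratified 2D point sampling with n x n strata
    total = 0.0
    for i in range(n):
        for j in range(n):
            u = (i + random.random()) / n
            v = (j + random.random()) / n
            total += f(u, v)
    return total / (n * n)

def line_estimate(n):             # stratified 1D line sampling with n strata
    total = 0.0
    for i in range(n):
        u = (i + random.random()) / n
        total += line_integral(u)
    return total / n

exact = 3.0 * math.log(3.0) - 4.0 * math.log(2.0)   # closed form of the double integral
print(abs(point_estimate(16) - exact))               # absolute error of the 2D point estimator
print(abs(line_estimate(16) - exact))                # absolute error of the 1D line estimator
```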

2.
Progressive light transport simulations aspire to physically based, consistent rendering in order to obtain visually appealing illumination effects, depth and realism. Handling large scenes is difficult in this setting, as typical scene-subdivision approaches require frequent synchronization during parallel processing because light bounces throughout the whole scene. In practice, however, only a few object parts noticeably contribute to the radiance observable in the image, whereas large areas play only a minor role. In fact, a mesh simplification of the latter can go unnoticed by the human eye. This particular importance to the visible radiance in the image calls for an output-sensitive mesh reduction that allows originally out-of-core scenes to be rendered on a single machine without memory swapping. Thus, in this paper, we present a preprocessing step that reduces the scene size under the constraint of radiance preservation, with a focus on high-frequency effects such as caustics. For this, we perform a small number of preliminary light transport simulation iterations, through which we identify mesh parts that contribute significantly to the visible radiance in the scene and which we therefore preserve during mesh reduction.
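A minimal sketch of the bookkeeping idea (hypothetical data and names, not the paper's pipeline): accumulate per-triangle contributions recorded during a few preliminary light transport iterations, then split triangles into a preserved set and candidates for aggressive simplification.

```python
from collections import defaultdict

# Hypothetical records from a few preliminary iterations: (triangle_id, contribution) pairs.
# In a real renderer these would come from the light transport simulation itself.
path_vertex_records = [(3, 0.8), (3, 1.1), (7, 0.002), (12, 0.4), (7, 0.001), (5, 0.0)]

def classify_triangles(records, all_triangle_ids, keep_fraction=0.01):
    contribution = defaultdict(float)
    for tri_id, c in records:
        contribution[tri_id] += c
    total = sum(contribution.values()) or 1.0
    preserve, simplify = set(), set()
    for tri_id in all_triangle_ids:
        if contribution[tri_id] / total >= keep_fraction:
            preserve.add(tri_id)       # keep geometry that visibly contributes
        else:
            simplify.add(tri_id)       # candidate for aggressive mesh reduction
    return preserve, simplify

print(classify_triangles(path_vertex_records, range(16)))
```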

3.
Crowded motions refer to multiple objects moving around and interacting, such as crowds and pedestrians. We capture crowded scenes using a depth scanner at video frame rates; thus, our input is a set of depth frames that sample the scene over time. Processing such data is challenging, as it is highly unorganized, with large spatio-temporal holes due to many occlusions. Since no correspondence is given, locally tracking 3D points across frames is hard due to noise and missing regions. Furthermore, global segmentation and motion completion in the presence of large occlusions is ambiguous and hard to predict. Our algorithm utilizes the Gestalt principles of common fate and good continuity to compute motion tracking and completion, respectively. Our technique does not assume any pre-given markers or motion template priors. Our key idea is to reduce the motion completion problem to a 1D curve fitting and matching problem, which can be solved efficiently using a global optimization scheme. We demonstrate our segmentation and completion method on a variety of synthetic and real-world crowded scanned scenes.
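A minimal sketch of the "good continuity" idea in one dimension (synthetic data, not the paper's global optimization): fit a low-order curve to observed trajectory samples and evaluate it inside an occlusion gap to complete the motion.

```python
import numpy as np

# Hypothetical 1D coordinate of a tracked point over time, with an occlusion gap.
t_obs = np.array([0, 1, 2, 3, 8, 9, 10], dtype=float)     # frames 4..7 are missing
x_obs = 0.5 * t_obs + 0.05 * t_obs**2                      # synthetic ground-truth motion

# Good-continuity prior as a simple sketch: fit a low-order polynomial to the
# observed samples and evaluate it inside the gap to complete the motion.
coeffs = np.polyfit(t_obs, x_obs, deg=2)
t_gap = np.arange(4, 8, dtype=float)
x_completed = np.polyval(coeffs, t_gap)
print(x_completed)       # matches the ground-truth curve 0.5*t + 0.05*t^2 at t = 4..7
```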

4.
5.
Recalculating the subspace basis of a deformable body is a mandatory procedure for subspace simulation after the body gets modified by interactive applications. However, using linear modal analysis to calculate the basis from scratch is known to be computationally expensive. In this paper, we show that the subspace of a modified body can be efficiently obtained from the subspace of its original version if the mesh changes are small. Our basic idea is to approximate the stiffness matrix by its low-frequency component, so we can calculate new linear deformation modes by solving an incremental eigenvalue decomposition problem. To further handle nonlinear deformations in the subspace, we present a hybrid approach to calculate modal derivatives from both new and original linear modes. Finally, we demonstrate that the cubature samples trained for the original mesh can be reused in fast reduced force and stiffness matrix evaluation, and we explore the use of our techniques in various simulation problems. Our experiments show that the updated subspace basis still allows a simulator to generate visually plausible deformation effects. The whole system is efficient, and it is compatible with other subspace construction approaches.
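A hedged sketch of reusing the old subspace (a Rayleigh-Ritz style projection, not the paper's exact incremental eigenvalue scheme): project the modified stiffness and mass matrices onto the span of the original modes and solve a small generalized eigenproblem there.

```python
import numpy as np
from scipy.linalg import eigh

def update_modes(U_old, K_new, M_new, r=None):
    # Rayleigh-Ritz style sketch: project the modified stiffness/mass matrices
    # onto the span of the original modes and solve a small eigenproblem there.
    Kr = U_old.T @ K_new @ U_old
    Mr = U_old.T @ M_new @ U_old
    vals, Q = eigh(Kr, Mr)                 # small generalized eigenproblem
    U_new = U_old @ Q                      # updated (approximate) linear modes
    if r is not None:
        U_new, vals = U_new[:, :r], vals[:r]
    return U_new, vals

# Tiny example: a 6-DOF "body", 3 original modes, slightly perturbed stiffness.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); K_old = A @ A.T + 6 * np.eye(6)
M = np.eye(6)
_, U_old = eigh(K_old, M); U_old = U_old[:, :3]
K_new = K_old + 0.05 * np.eye(6)           # stand-in for a small mesh edit
print(update_modes(U_old, K_new, M)[1])    # approximate updated eigenvalues
```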

6.
Robust statistical methods are employed to reduce the noise in Monte Carlo ray tracing. Through the use of resampling, the sample mean distribution is determined for each pixel. Because this distribution is uni-modal and normal for a large sample size, robust estimates converge to the true mean of the pixel values. Compared to existing methods, less additional storage is required at each pixel because the sample mean distribution can be distilled down to a compact size, and fewer computations are necessary because the robust estimation process is sampling independent and needs a small input size to compute pixel values. The robust statistical pixel estimators are not only resistant to impulse noise, but they also remove general noise from fat-tailed distributions. A substantial speedup in rendering can therefore be achieved by reducing the number of samples required for a desired image quality. The effectiveness of the proposed approach is demonstrated for path tracing simulations.
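A simple robust-statistics sketch in the same spirit (median of block means on a contaminated pixel; this is not the paper's estimator): a single firefly outlier can corrupt at most one block, so it no longer dominates the pixel value.

```python
import random, statistics

def median_of_means(samples, n_blocks=4):
    # Split the per-pixel radiance samples into blocks, average each block,
    # and take the median of the block means. An impulse outlier corrupts at
    # most one block, so the final pixel estimate stays near the clean level.
    shuffled = samples[:]
    random.shuffle(shuffled)
    blocks = [shuffled[i::n_blocks] for i in range(n_blocks)]
    return statistics.median(statistics.fmean(b) for b in blocks)

samples = [0.31, 0.28, 0.30, 0.33, 0.29, 45.0, 0.32, 0.30]   # one "firefly" outlier
print(statistics.fmean(samples))     # plain mean is pulled to ~5.9 by the outlier
print(median_of_means(samples))      # robust estimate stays near ~0.3
```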

7.
Due to recent advancements in computer graphics hardware and software algorithms, deformable characters have become more and more popular in real-time applications such as computer games. While there are mature techniques to generate primary deformation from skeletal movement, simulating realistic and stable secondary deformation, such as the jiggling of fat, remains challenging. On one hand, traditional volumetric approaches such as the finite element method have a higher computational cost and are infeasible on limited hardware such as game consoles. On the other hand, while shape-matching-based simulations can produce plausible deformation in real time, they suffer from a stiffness problem in which particles either show unrealistic deformation due to high gains or cannot catch up with the body movement. In this paper, we propose a unified multi-layer lattice model to simulate the primary and secondary deformation of skeleton-driven characters. The core idea is to voxelize the input character mesh into multiple anatomical layers including bone, muscle, fat and skin. Primary deformation is applied to the bone voxels with lattice-based skinning. The movement of these voxels is propagated to the other voxel layers using lattice shape matching, creating natural secondary deformation. Our multi-layer lattice framework can produce simulation quality comparable to that of other volumetric approaches at a significantly smaller computational cost. It is best applied in real-time applications such as console games or interactive animation creation.
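A minimal sketch of one lattice shape matching step (Mueller-style shape matching on a handful of particles; the multi-layer voxel structure and skinning are omitted): compute the best rigid transform of the rest shape onto the current positions and pull particles toward the resulting goal positions.

```python
import numpy as np

def shape_matching_step(rest, current, alpha=0.5):
    # Find the best rigid transform of the rest shape onto the current particle
    # positions and pull particles toward the goal positions with stiffness alpha.
    c_rest, c_cur = rest.mean(axis=0), current.mean(axis=0)
    P, Q = rest - c_rest, current - c_cur
    A = Q.T @ P                                  # covariance of the two point sets
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                                   # optimal rotation (polar part)
    if np.linalg.det(R) < 0:                     # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    goals = (R @ P.T).T + c_cur
    return current + alpha * (goals - current)   # goal positions pull the voxels back

rest = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
deformed = rest + np.array([[0.05, 0, 0], [0, -0.1, 0], [0.02, 0, 0], [0, 0, 0.08]])
print(shape_matching_step(rest, deformed, alpha=0.8))
```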

8.
Bidirectional path tracing is known to perform poorly when rendering highly occluded scenes. Indeed, the connection strategy between light and eye subpaths does not take the visibility factor into account, so many sampled paths yield no contribution. To improve the efficiency of bidirectional path tracing, we propose a new method for adaptive resampling of connections between light and eye subpaths. To this end, we build discrete probability distributions of light subpaths based on a skeleton of the empty space of the scene. To demonstrate the efficiency of our algorithm, we compare our method to both standard bidirectional path tracing and a recent importance caching method.
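A minimal sketch of resampling a connection from a discrete distribution (hypothetical per-subpath contribution estimates; the skeleton-based construction of these estimates is the paper's actual contribution and is not shown): pick a light subpath proportionally to its estimated, visibility-aware contribution and return the selection probability for unbiased weighting.

```python
import random

def resample_connection(candidates):
    # 'candidates' maps a light-subpath id to an estimated contribution.
    # Pick one proportionally and return it with its probability, so the
    # estimator can divide by this pdf and remain unbiased.
    total = sum(candidates.values())
    if total == 0.0:
        return None, 0.0
    r = random.random() * total
    acc = 0.0
    for path_id, w in candidates.items():
        acc += w
        if r <= acc:
            return path_id, w / total
    return path_id, w / total        # numerical safety fallback

estimated_contributions = {"light_path_0": 0.02, "light_path_1": 1.3, "light_path_2": 0.4}
path_id, pdf = resample_connection(estimated_contributions)
print(path_id, pdf)   # the chosen connection's contribution is divided by pdf
```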

9.
We present a novel approach for analyzing the quality of multi-agent crowd simulation algorithms. Our approach is data-driven, taking as input a set of user-defined metrics and reference training data, either synthetic or from video footage of real crowds. Given a simulation, we formulate the crowd analysis problem as an anomaly detection problem and exploit state-of-the-art outlier detection algorithms to address it. To that end, we introduce a new framework for the visual analysis of crowd simulations. Our framework allows us to capture potentially erroneous behaviors on a per-agent basis, either by automatically detecting outliers based on individual evaluation metrics or by accounting for multiple evaluation criteria in a principled fashion using Principal Component Analysis and the notion of Pareto optimality. We discuss optimizations necessary to allow real-time performance on large datasets and demonstrate the applicability of our framework through the analysis of simulations created by several widely used methods, including a simulation from a commercial game.
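A minimal sketch of the PCA-based part of such an analysis (synthetic per-agent metrics; the Pareto-optimality treatment and the visual framework are not shown): standardize the metric matrix, project onto the leading principal components, and score each agent by its distance in the whitened subspace.

```python
import numpy as np

def pca_outlier_scores(metrics, n_components=2):
    # metrics: (agents x metrics) matrix of per-agent evaluation values.
    X = (metrics - metrics.mean(axis=0)) / (metrics.std(axis=0) + 1e-9)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:n_components].T                        # projection onto leading PCs
    Z /= (S[:n_components] / np.sqrt(len(X)) + 1e-9)   # whiten each component
    return np.linalg.norm(Z, axis=1)                   # larger = more anomalous agent

rng = np.random.default_rng(1)
metrics = rng.normal(size=(50, 4))                      # hypothetical per-agent metrics
metrics[7] += 6.0                                       # one clearly deviating agent
scores = pca_outlier_scores(metrics)
print(int(np.argmax(scores)))                           # -> 7
```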

10.
In this paper, we present a method to model hyperelasticity that is well suited for representing the nonlinearity of real-world objects, as well as for estimating it from deformation examples. Previous approaches suffer from several limitations, such as lack of integrability of elastic forces, failure to enforce energy convexity, lack of robustness in parameter estimation, or difficulty in modeling cross-modal effects. Our method avoids these problems by relying on a general energy-based definition of elastic properties. The accuracy of the resulting elastic model is maximized by defining an additive model of separable energy terms, which allows progressive parameter estimation. In addition, our method supports efficient modeling of extreme nonlinearities thanks to energy-limiting constraints. We combine our energy-based model with an optimization method to estimate model parameters from force-deformation examples, and we show successful modeling of diverse deformable objects, including cloth, human finger skin, and internal human anatomy in a medical imaging application.
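A hedged one-dimensional sketch of an additive, separable energy model and its estimation from force-deformation examples (a toy two-term energy, not the paper's model): non-negative coefficients keep each term, and hence the total energy, convex.

```python
import numpy as np
from scipy.optimize import nnls

# Toy additive energy: E(x) = theta_1 * x^2 + theta_2 * x^4,
# so the force is f(x) = -dE/dx = -(2*theta_1*x + 4*theta_2*x^3).
def force_basis(x):
    return np.column_stack([2.0 * x, 4.0 * x**3])

# Hypothetical force-deformation examples (could come from measurements).
x = np.linspace(-0.5, 0.5, 21)
theta_true = np.array([3.0, 40.0])
f_measured = -(force_basis(x) @ theta_true) + np.random.default_rng(2).normal(0, 0.05, x.size)

# Non-negative least squares keeps each energy term convex.
theta_fit, _ = nnls(force_basis(x), -f_measured)
print(theta_fit)                                    # close to [3, 40]
```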

11.
In this paper, we present improvements to half vector space light transport (HSLT) [KHD14] that make this approach more practical, robust to difficult input geometry, and faster. Our first contribution is the computation of half vector space ray differentials in a different domain than the original work, which enables a more uniform stratification over the image plane during Markov chain exploration. Furthermore, we introduce a new multi-chain perturbation in half vector space which, if combined appropriately with the half vector perturbation, makes the mutation strategy both more robust to geometric configurations with fine displacements and faster due to a reduced number of ray casts. We provide and analyze the results of improved HSLT and discuss possible applications of our new half vector ray differentials.

12.
We propose an efficient method to model paper tearing in the context of interactive modeling. The method uses geometric information to automatically detect potential starting points of tears. We further introduce a new hybrid geometric and physically based method to compute the trajectory of tears, while procedurally synthesizing high-resolution details of the tearing path using a texture-based approach. The results obtained are compared with real paper and with previous studies on the expected geometric paths of tearing paper.

13.
A new unbiased sampling approach is presented, which allows the direct illumination from disk and cylinder light sources to be sampled with a uniform probability distribution within their solid angles, as seen from each illuminated point. This approach applies to any form of global illumination path tracing algorithm (forward or bidirectional), where the direct illumination integral from light sources needs to be estimated. We show that taking samples based on the solid angle of these two light sources leads to improved estimates and reduced variance of the Monte Carlo integral for direct illumination. This work follows from previously known unbiased methods for the solid angle sampling of triangular and rectangular light sources and extends the class of lights that can be rendered with these improved sampling algorithms.
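A minimal sketch of uniform solid-angle sampling for the simpler spherical-cap case (the closed-form constructions for disks and cylinders are the paper's contribution and are not reproduced here): the returned pdf is constant over the subtended solid angle, and the direct-lighting estimator divides the sampled contribution by it.

```python
import math, random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def sample_solid_angle_cone(axis, cos_theta_max):
    # Uniform direction within the cone subtended by a spherical light,
    # a simpler stand-in than the disk/cylinder case treated in the paper.
    u, v = random.random(), random.random()
    cos_t = 1.0 - u * (1.0 - cos_theta_max)
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * math.pi * v
    a = (1.0, 0.0, 0.0) if abs(axis[0]) < 0.9 else (0.0, 1.0, 0.0)
    t1 = normalize(cross(a, axis))
    t2 = cross(axis, t1)
    d = tuple(sin_t * math.cos(phi) * t1[i] + sin_t * math.sin(phi) * t2[i] + cos_t * axis[i]
              for i in range(3))
    pdf = 1.0 / (2.0 * math.pi * (1.0 - cos_theta_max))   # constant over the solid angle
    return d, pdf

d, pdf = sample_solid_angle_cone(normalize((0.2, 0.8, 0.5)), cos_theta_max=0.95)
# Estimator skeleton: average Le * brdf * cos(theta) * visibility / pdf over such samples;
# Le, brdf, cosine and visibility are scene-specific callbacks not shown here.
print(d, pdf)
```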

14.
We present a new technique called Multiple Vertex Next Event Estimation, which outperforms current direct lighting techniques in forward-scattering, optically dense media with the Henyey-Greenstein phase function. Instead of a one-segment connection from a vertex within the medium to the light source, an entire subpath of arbitrary length can be created, and we show experimentally that 4-10 segments work best in practice. This is done by perturbing a seed path within the Monte Carlo context. Our technique was integrated into a Monte Carlo renderer, combining random-walk path tracing with multiple vertex next event estimation via multiple importance sampling for an unbiased result. We evaluate this new technique against standard next event estimation and show that it significantly reduces noise and increases the performance of multiple-scattering renderings in highly anisotropic, optically dense media. Additionally, we discuss multiple light sources and the performance implications of memory-heavy heterogeneous media.
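A minimal sketch of two standard ingredients mentioned above (not the multi-vertex connection itself): sampling the Henyey-Greenstein phase function by inversion, and the balance heuristic used to combine next event estimation strategies via multiple importance sampling.

```python
import math, random

def sample_henyey_greenstein(g):
    # Sample cos(theta) from the Henyey-Greenstein phase function (standard inversion).
    u = random.random()
    if abs(g) < 1e-3:
        return 1.0 - 2.0 * u
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

def hg_pdf(cos_t, g):
    # Value of the Henyey-Greenstein phase function for a given scattering cosine.
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_t) ** 1.5)

def balance_heuristic(pdf_a, pdf_b):
    # Multiple importance sampling weight for combining two sampling strategies
    # (e.g. standard next event estimation and a multi-vertex connection) without bias.
    return pdf_a / (pdf_a + pdf_b)

cos_t = sample_henyey_greenstein(0.9)      # strongly forward-scattering medium
print(cos_t, hg_pdf(cos_t, 0.9), balance_heuristic(0.4, 0.1))
```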

15.
16.
Rendering with accurate camera models greatly increases realism and improves the match of synthetic imagery to real-life footage. Photographic lenses can be simulated by ray tracing, but the performance depends on the complexity of the lens system, and some operations required for modern algorithms, such as deterministic connections, can be difficult to achieve. We generalise the approach of polynomial optics, i.e. expressing the light field transformation from the sensor to the outer pupil using a polynomial, to work with extreme wide-angle (fisheye) lenses and aspherical elements. We also show how sparse polynomials can be constructed from the large space of high-degree terms (we tested up to degree 15). We achieve this using a variant of orthogonal matching pursuit instead of a Taylor series when computing the polynomials. We show two applications: photorealistic rendering using Monte Carlo methods, where we introduce a new aperture sampling technique that is suitable for light tracing, and an interactive preview method suitable for rendering with deep images.
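A minimal sketch of orthogonal matching pursuit on a toy monomial dictionary (stand-in variables and a hypothetical target mapping, not an actual lens transform): greedily select the term most correlated with the residual, then re-fit all selected terms by least squares.

```python
import numpy as np

def omp_fit(Phi, y, n_terms):
    # Orthogonal matching pursuit: pick the dictionary column most correlated
    # with the residual, then re-solve least squares on the selected columns.
    # Phi: (samples x candidate terms), y: target values.
    selected, residual = [], y.copy()
    norms = np.linalg.norm(Phi, axis=0) + 1e-12
    for _ in range(n_terms):
        j = int(np.argmax(np.abs(Phi.T @ residual) / norms))
        if j not in selected:
            selected.append(j)
        coeffs, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
        residual = y - Phi[:, selected] @ coeffs
    return selected, coeffs

# Toy dictionary: monomials u^a * v^b up to degree 5 in two stand-in variables.
rng = np.random.default_rng(3)
u, v = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)
exponents = [(i, d - i) for d in range(6) for i in range(d + 1)]
Phi = np.column_stack([u**a * v**b for a, b in exponents])
y = 0.7 * u - 0.2 * u * v**2 + 0.05 * v**5         # hypothetical lens-like mapping
terms, coeffs = omp_fit(Phi, y, n_terms=3)
print([exponents[j] for j in terms], coeffs)        # typically recovers the three active terms
```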

17.
Combining high-resolution level set surface tracking with lower resolution physics is an inexpensive method for achieving highly detailed liquid animations. Unfortunately, the inherent resolution mismatch introduces several types of disturbing visual artifacts. We identify the primary sources of these artifacts and present simple, efficient, and practical solutions to address them. First, we propose an unconditionally stable filtering method that selectively removes sub-grid surface artifacts not seen by the fluid physics, while preserving fine detail in dynamic splashing regions. It provides comparable results to recent error-correction techniques at lower cost, without substepping, and with better scaling behavior. Second, we show how a modified narrow-band scheme can ensure accurate free surface boundary conditions in the presence of large resolution mismatches. Our scheme preserves the efficiency of the narrow-band methodology, while eliminating objectionable stairstep artifacts observed in prior work. Third, we demonstrate that the use of linear interpolation of velocity during advection of the high-resolution level set surface is responsible for visible grid-aligned kinks; we therefore advocate higher-order velocity interpolation, and show that it dramatically reduces this artifact. While these three contributions are orthogonal, our results demonstrate that taken together they efficiently address the dominant sources of visual artifacts arising with high-resolution embedded liquid surfaces; the proposed approach offers improved visual quality, a straightforward implementation, and substantially greater scalability than competing methods.
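A one-dimensional sketch of the third point (illustrative grid values, not the paper's solver): cubic Catmull-Rom interpolation of velocity samples compared with piecewise linear interpolation, which is only C0 and produces kinks at grid nodes.

```python
def lerp(a, b, t):
    return (1.0 - t) * a + t * b

def catmull_rom(p0, p1, p2, p3, t):
    # Cubic (Catmull-Rom) interpolation between p1 and p2; the extra neighbors
    # p0 and p3 give a C1 result, avoiding kinks at grid nodes.
    return 0.5 * (2.0 * p1 + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)

# 1D illustration on coarse-grid velocity samples (hypothetical values):
v = [0.0, 0.2, 0.9, 1.0, 0.8]
x = 2.35                                   # query point in grid coordinates
i, t = int(x), x - int(x)
print(lerp(v[i], v[i + 1], t))                               # piecewise linear estimate
print(catmull_rom(v[i - 1], v[i], v[i + 1], v[i + 2], t))    # smoother cubic estimate
```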

18.
We address several limitations of the sampling-based motion control method of Liu et al. [LYvdP*10]. The key insight is to learn from past control reconstruction trials through sample distribution adaptation. Coupled with a sliding window scheme for better performance and an averaging method for noise reduction, the improved algorithm can efficiently construct open-loop controls of good quality for long and challenging reference motions. Our ideas are intuitive and the implementations are simple. We compare the improved algorithm with the original algorithm both qualitatively and quantitatively, and demonstrate its effectiveness with a variety of motions ranging from stylized walking and dancing to gymnastics and martial arts routines.
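A minimal sketch of sample distribution adaptation in cross-entropy style (a toy objective standing in for the control reconstruction cost; the sliding window and averaging steps are not shown): refit the sampling Gaussian to the best past trials so later samples concentrate where earlier trials succeeded.

```python
import numpy as np

def adapt_sample_distribution(objective, dim, iters=30, pop=64, elite=8, seed=0):
    # Draw control samples from a Gaussian, keep the best trials, and refit the
    # Gaussian to them so subsequent sampling focuses on promising controls.
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        scores = np.array([objective(s) for s in samples])
        best = samples[np.argsort(scores)[:elite]]       # lower objective = better trial
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mean

# Toy "control reconstruction" objective: squared distance to an unknown target control.
target = np.array([0.4, -1.2, 0.7])
print(adapt_sample_distribution(lambda c: np.sum((c - target) ** 2), dim=3))
```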

19.
We propose a new boundary handling method for smoothed particle hydrodynamics (SPH). Previous approaches required the use of boundary particles to prevent particles from sticking to the boundary. We address this issue by correcting the fundamental equations of SPH with the integration of a kernel function. Our approach is able to directly handle triangle mesh boundaries without the need for boundary particles. We also show how our approach can be integrated into a position-based fluid framework.
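A hedged sketch of the kernel-integration idea for the simplest case of a planar wall (the paper handles general triangle meshes and the full set of SPH equations): estimate how much of the kernel's integral lies on the fluid side and renormalize the density accordingly instead of adding boundary particles.

```python
import math, random

def cubic_spline_W(r, h):
    # Standard 3D cubic spline SPH kernel (normalized to integrate to 1).
    q = r / h
    sigma = 8.0 / (math.pi * h ** 3)
    if q <= 0.5:
        return sigma * (6.0 * (q ** 3 - q ** 2) + 1.0)
    if q <= 1.0:
        return sigma * 2.0 * (1.0 - q) ** 3
    return 0.0

def wall_correction(d, h, n=100_000, seed=4):
    # Monte Carlo estimate of the fraction of the kernel integral lying on the
    # fluid side of a planar wall at distance d from the particle. The density
    # of a near-wall particle can then be renormalized by 1/gamma instead of
    # summing over boundary particles (planar-wall sketch of the general idea).
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x, y, z = (rng.uniform(-h, h) for _ in range(3))
        if x * x + y * y + z * z <= h * h and z > -d:
            acc += cubic_spline_W(math.sqrt(x * x + y * y + z * z), h)
    return acc * (2.0 * h) ** 3 / n      # estimate of the clipped kernel integral

print(wall_correction(d=0.25, h=1.0))    # < 1 near the wall, approaches 1 far from it
```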

20.
We present a novel method to generate a virtual character's multi-contact poses adaptive to the various shapes of the environment. Given the user-specified center of mass (CoM) position and direction as inputs, our method finds the potential contacts for the character in the surrounding geometry of the environment and generates a set of stable poses that are contact-rich. The major contributions of this work are efficiently finding admissible support points in the target environment by precomputing candidate support points from a human pose database, and automatically generating interactive poses that maintain stable equilibrium. We develop the concept of support complexity to scale the set of precomputed support points by the geometric complexity of the environment. We demonstrate the effectiveness of our method by creating contact poses for various test cases of environments.
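A minimal sketch of a flat-ground static-stability test (illustrative contact points; the paper's equilibrium analysis for general multi-contact poses is more involved): the pose is stable if the center of mass projects inside the convex hull of the contacts.

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Monotone-chain convex hull of 2D contact points, returned in CCW order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def com_inside_support(com_xy, contact_xy):
    # Stable (statically) if the CoM ground projection lies inside the
    # convex hull of the contact points.
    hull = convex_hull(contact_xy)
    if len(hull) < 3:
        return False
    return all(cross(hull[i], hull[(i + 1) % len(hull)], com_xy) >= 0 for i in range(len(hull)))

contacts = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.6), (0.0, 0.6)]   # hypothetical foot/hand contacts
print(com_inside_support((0.2, 0.3), contacts))   # True: CoM over the support region
print(com_inside_support((0.9, 0.3), contacts))   # False: the pose would tip over
```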

