Similar Documents
20 similar documents found.
1.
BRDFs are commonly used to represent given materials’ appearance in computer graphics and related fields. Although BRDFs have recently been extensively measured, compressed, and fitted by a variety of analytical models, most research has focused primarily on simplified isotropic BRDFs. In this paper, we present a unique database of 150 BRDFs representing a wide range of materials, the majority of which exhibit anisotropic behavior. Since time‐consuming BRDF measurement represents a major obstacle in the digital material appearance reproduction pipeline, we tested several approaches for estimating a very limited set of samples capable of high‐quality appearance reconstruction. Initially, we aligned all measured BRDFs according to the location of the anisotropic highlights. We then propose an adaptive sampling method based on analysis of the measured BRDFs: for each BRDF, a unique sampling pattern is computed for a predefined count of samples. Further, we introduce template‐based methods that reuse the precomputed sampling patterns. This approach enables a more efficient measurement of unknown BRDFs while preserving visual fidelity for the majority of tested materials. Our method exhibits better performance and stability than competing sparse sampling approaches, especially for higher numbers of samples.

2.
Discrete conformal mappings of planar triangle meshes, also known as the As‐Similar‐As‐Possible (ASAP) mapping, involve the minimization of a quadratic energy function, thus are very easy to generate and are popular in image warping scenarios. We generalize this classical mapping to the case of quad meshes, taking into account the mapping of the interior of the quad, and analyze in detail the most common case ‐ the unit grid mesh. We show that the generalization, when combined with barycentric coordinate mappings between the source and target polygons, spawns an entire family of new mappings governed by quadratic energy functions, which allow quite precise control over various effects of the mapping. This approach is quite general and also applies to arbitrary planar polygon meshes. As an application of generalized ASAP mappings of the unit grid mesh, we demonstrate how they can be used to warp digital photographs to achieve a variety of effects. One such effect is modifying the perspective of the camera that took a given photograph (without moving the camera). A related, but more challenging, effect is re‐photography ‐ warping a contemporary photograph in order to reproduce the camera view present in a vintage photograph of the same scene, taken many years before with a different camera from a different viewpoint. We apply the generalized ASAP mapping to these images, discretized to a unit grid. Using a quad mesh (as opposed to a triangle mesh) permits biasing towards affine maps of the unit squares. This allows the introduction of an As‐Affine‐As‐Possible (AAAP) mapping that closely approximates the homographies present in these warps, achieving quite accurate results. We demonstrate the advantages of the AAAP mapping on a variety of synthetic and real‐world examples.
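To make the quadratic-energy structure concrete, below is a minimal sketch of the classical triangle-mesh ASAP (LSCM-style) energy set up on a unit grid and solved as sparse linear least squares with two pinned corners. The function name asap_unit_grid, the soft-pin weight, and the grid construction are illustrative assumptions; the paper's quad-based ASAP and AAAP energies generalize this formulation and are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch of the classical triangle-based ASAP energy on an nx-by-ny unit grid.
# Target positions are complex numbers z; for a triangle with source corners
# w0, w1, w2 the conformality residual
#     (z1 - z0)*(w2 - w0) - (z2 - z0)*(w1 - w0)
# is linear in z, so minimizing the sum of squared residuals with a couple of
# softly pinned vertices is a sparse linear least-squares problem.

def asap_unit_grid(nx, ny, pinned, w=1e3):
    """pinned: dict mapping grid vertex (i, j) -> complex target position."""
    idx = lambda i, j: i * ny + j
    src = lambda i, j: i + 1j * j
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for i in range(nx - 1):
        for j in range(ny - 1):
            # split each grid cell into two triangles
            tris = [((i, j), (i + 1, j), (i, j + 1)),
                    ((i + 1, j + 1), (i, j + 1), (i + 1, j))]
            for (a, b, c) in tris:
                wa, wb, wc = src(*a), src(*b), src(*c)
                coeff = {a: -(wc - wa) + (wb - wa), b: (wc - wa), c: -(wb - wa)}
                for v, cf in coeff.items():
                    rows.append(r); cols.append(idx(*v)); vals.append(cf)
                rhs.append(0.0)
                r += 1
    for (i, j), target in pinned.items():
        rows.append(r); cols.append(idx(i, j)); vals.append(w)
        rhs.append(w * target)          # soft positional constraint
        r += 1
    A = sp.csr_matrix((np.asarray(vals, dtype=complex), (rows, cols)),
                      shape=(r, nx * ny))
    b = np.asarray(rhs, dtype=complex)
    # normal equations keep the solve simple for this sketch
    z = spla.spsolve((A.conj().T @ A).tocsc(), A.conj().T @ b)
    return z.reshape(nx, ny)

# warp a 10x10 grid by pinning two opposite corners
grid = asap_unit_grid(10, 10, {(0, 0): 0 + 0j, (9, 9): 12 + 9j})
```

Because the energy is quadratic, richer variants (per-quad affinity terms, barycentric interior mappings) only change the rows of A; the solve remains a single sparse linear system.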

3.
Rigging is a core element in the process of bringing a 3D character to life. The rig defines and delimits the motions of the character and provides an interface through which an animator interacts with the 3D character. The quality of the rig has a key impact on the expressiveness of the character. Creating a usable, rich, production‐ready rig is a laborious task requiring direct intervention by a trained professional, because the goal is difficult to achieve with fully automatic methods. We propose a semi‐automatic rig editing framework which reduces the need for manual intervention while maintaining an important degree of control over the final rig. Starting from an automatically generated base rig, we provide interactive operations which efficiently configure the skeleton structure and mesh skinning.

4.
We present a real‐time multi‐view facial capture system facilitated by synthetic training imagery. Our method achieves high‐quality markerless facial performance capture in real time from multi‐view helmet camera data, employing an actor‐specific regressor. The regressor training is tailored to the specified actor's appearance, and we further condition it for the expected illumination conditions and the physical capture rig by generating the training data synthetically. In order to leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi‐view regression algorithm that uses multi‐dimensional random ferns. We show that regressing on multiple video streams achieves higher quality than previous approaches that were designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that allows cameras to be mounted outside the field of view of the actor, which is very beneficial as the cameras are then less of a distraction for the actor and allow an unobstructed line of sight to the director and other actors. Our new real‐time facial capture approach has immediate application in on‐set virtual production, in particular with the ever‐growing demand for motion‐captured facial animation in visual effects and video games.
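For readers unfamiliar with fern regressors, the sketch below shows the basic single-view random fern that such systems build on: a handful of thresholded pixel-difference features index a bin table of regression outputs. The class name, feature ranges, and shrinkage constant are illustrative assumptions; the paper's multi-view, multi-dimensional ferns extend this idea and are not reproduced here.

```python
import numpy as np

# Sketch of a single random fern used for regression. F thresholded
# pixel-difference features index one of 2**F bins; each bin stores an
# output vector learned as the (shrunken) mean of training targets that
# fall into that bin.

class RandomFern:
    def __init__(self, n_features, n_pixels, output_dim, rng, beta=1.0):
        self.pairs = rng.integers(0, n_pixels, size=(n_features, 2))
        self.thresholds = rng.uniform(-0.2, 0.2, size=n_features)
        self.bins = np.zeros((2 ** n_features, output_dim))
        self.beta = beta                      # shrinkage against sparse bins

    def bin_index(self, pixels):
        diffs = pixels[self.pairs[:, 0]] - pixels[self.pairs[:, 1]]
        bits = (diffs > self.thresholds).astype(int)
        return int(bits @ (1 << np.arange(len(bits))))

    def train(self, samples, targets):
        counts = np.zeros(len(self.bins))
        for pixels, t in zip(samples, targets):
            b = self.bin_index(pixels)
            self.bins[b] += t
            counts[b] += 1
        self.bins /= (counts + self.beta)[:, None]

    def predict(self, pixels):
        return self.bins[self.bin_index(pixels)]
```

In practice many such ferns are trained in a cascade and their predictions summed; training on synthetic renders, as in the paper, simply supplies the (pixels, target) pairs.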

5.
Image vectorization is an important yet challenging problem, especially when the input image has rich content. In this paper, we develop a novel method for automatically vectorizing natural images with feature‐aligned quad‐dominant meshes. Inspired by the quadrangulation methods in 3D geometry processing, we propose a new directional field optimization technique that encodes the color gradients, sidestepping the explicit computation of salient image features. We further compute the anisotropic scales of the directional field according to the distance between image features. Our method is fully automatic and efficient, taking only a few seconds for a 400×400 image on a normal laptop. We demonstrate the effectiveness of the proposed method on various image editing applications.

6.
Due to the recent advancement of computer graphics hardware and software algorithms, deformable characters have become more and more popular in real‐time applications such as computer games. While there are mature techniques to generate primary deformation from skeletal movement, simulating realistic and stable secondary deformation, such as the jiggling of fat, remains challenging. On one hand, traditional volumetric approaches such as the finite element method require a higher computational cost and are infeasible for limited hardware such as game consoles. On the other hand, while shape matching based simulations can produce plausible deformation in real time, they suffer from a stiffness problem in which particles either show unrealistic deformation due to high gains, or cannot catch up with the body movement. In this paper, we propose a unified multi‐layer lattice model to simulate the primary and secondary deformation of skeleton‐driven characters. The core idea is to voxelize the input character mesh into multiple anatomical layers including bone, muscle, fat and skin. Primary deformation is applied to the bone voxels with lattice‐based skinning. The movement of these voxels is propagated to the other voxel layers using lattice shape matching simulation, creating natural secondary deformation. Our multi‐layer lattice framework can produce simulation quality comparable to that of other volumetric approaches at a significantly smaller computational cost. It is best applied in real‐time applications such as console games or interactive animation creation.
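The propagation between voxel layers relies on shape matching. As a point of reference, here is a minimal sketch of one rigid shape-matching step in the spirit of Müller et al. 2005, the building block behind lattice shape matching; variable names and the per-frame blending factor are illustrative, and the paper's multi-layer lattice variant adds structure on top of this.

```python
import numpy as np

# One rigid shape-matching step: find the rotation that best aligns the rest
# shape to the current particle positions and return goal positions toward
# which particles are pulled each frame.
# x0: rest positions (n, 3); x: current positions (n, 3); m: masses (n,).

def shape_matching_goals(x0, x, m):
    cm0 = np.average(x0, axis=0, weights=m)   # rest center of mass
    cm = np.average(x, axis=0, weights=m)     # current center of mass
    p = (x - cm) * m[:, None]
    q = x0 - cm0
    A = p.T @ q                               # moment (covariance) matrix
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                                # closest rotation (polar part)
    if np.linalg.det(R) < 0:                  # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    return (R @ q.T).T + cm                   # goal positions

# Particles are then moved a fraction alpha toward their goals each step:
#   v += alpha * (g - x) / dt;  x += v * dt
```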

7.
Rendering with accurate camera models greatly increases realism and improves the match of synthetic imagery to real‐life footage. Photographic lenses can be simulated by ray tracing, but the performance depends on the complexity of the lens system, and some operations required by modern algorithms, such as deterministic connections, can be difficult to achieve. We generalise the approach of polynomial optics, i.e. expressing the light field transformation from the sensor to the outer pupil using a polynomial, to work with extreme wide‐angle (fisheye) lenses and aspherical elements. We also show how sparse polynomials can be constructed from the large space of high‐degree terms (we tested up to degree 15). We achieve this by using a variant of orthogonal matching pursuit instead of a Taylor series when computing the polynomials. We show two applications: photorealistic rendering using Monte Carlo methods, where we introduce a new aperture sampling technique that is suitable for light tracing, and an interactive preview method suitable for rendering with deep images.
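For context, the sketch below shows a generic orthogonal matching pursuit (OMP) loop of the kind used to pick a sparse set of polynomial terms; the dictionary construction and the small example at the end are illustrative assumptions, not the paper's exact variant.

```python
import numpy as np

# Generic orthogonal matching pursuit. Columns of D hold candidate monomials
# evaluated at traced sample rays; y holds the quantity to fit (e.g. one
# outgoing ray coordinate). At each step the term most correlated with the
# residual is added, and all selected coefficients are re-fitted jointly.

def omp(D, y, n_terms):
    residual = y.copy()
    selected = []
    coeffs = np.zeros(0)
    for _ in range(n_terms):
        corr = np.abs(D.T @ residual)
        corr[selected] = -np.inf              # never reselect a term
        selected.append(int(np.argmax(corr)))
        # joint least-squares refit over the selected terms ("orthogonal" step)
        coeffs, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ coeffs
    return selected, coeffs

# Toy example: recover 3 active terms out of 200 candidates with 5 picks.
rng = np.random.default_rng(0)
D = rng.normal(size=(1000, 200))
y = D[:, [3, 40, 77]] @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=1000)
terms, c = omp(D, y, 5)
```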

8.
Many‐light methods approximate the light transport in a scene by computing the direct illumination from many virtual point light sources (VPLs), and render low‐noise images covering a wide range of performance and quality goals. However, they are very inefficient at representing glossy light transport. This is because a VPL on a glossy surface illuminates only a small fraction of the scene, and a tremendous number of VPLs might be necessary to render acceptable images. In this paper, we introduce Rich‐VPLs which, in contrast to standard VPLs, represent a multitude of light paths and thus have a more widespread emission profile on glossy surfaces and in scenes with multiple primary light sources. As a result, a single Rich‐VPL contributes to larger portions of a scene with negligible additional shading cost. Our second contribution is a placement strategy for (Rich‐)VPLs proportional to sensor importance times radiance. Although both Rich‐VPLs and the improved placement can be used individually, they complement each other ideally and share interim computation. Furthermore, both complement existing many‐light methods, e.g. Lightcuts or the Virtual Spherical Lights method, and can improve their efficiency as well as their applicability to scenes with glossy materials and many primary light sources.

9.
Physical simulation has long been the approach of choice for generating realistic hair animations in CG. A constant drawback of simulation, however, is the necessity to manually set the physical parameters of the simulation model in order to get the desired dynamic behavior. To alleviate this, researchers have begun to explore methods for reconstructing hair from the real world and even for estimating the corresponding simulation parameters through the process of inversion. So far, however, these methods have had limited applicability, because dynamic hair capture can only be played back without the ability to edit, and solving for simulation parameters has only been accomplished for static hairstyles, ignoring the dynamic behavior. We present the first method for capturing dynamic hair and automatically determining the physical properties needed to simulate the observed hairstyle in motion. Since our dynamic inversion is agnostic to the simulation model, the proposed method applies to virtually any hair simulation technique, which we demonstrate using two state‐of‐the‐art hair simulation models. The output of our method is a fully simulation‐ready hairstyle, consisting of both the static hair geometry and its physical properties. The hairstyle can be easily edited by adding additional external forces, changing the head motion, or re‐simulating in completely different environments, all while remaining faithful to the captured hairstyle.

10.
This paper generalizes the well‐known Diffusion Curve Images (DCI), which are composed of a set of Bezier curves with colors specified on either side. These colors are diffused as Laplace functions over the image domain, which results in smooth color gradients interrupted by the Bezier curves. Our new formulation allows for more color control away from the boundary, providing an expressive power similar to that of recent Bilaplace image models without introducing their associated issues and computational costs. The new model is based on a special Laplace function blending and a new edge blur formulation. We demonstrate that, given some user‐defined boundary curves over an input raster image, fitting colors and edge blur from the image to the new model, as well as subsequent editing and animation, is just as convenient as with DCIs. Numerous examples and comparisons to DCIs are presented.
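As background on the diffusion step itself, here is a minimal sketch of spreading curve colors as a harmonic (Laplace) function on a pixel grid with plain Jacobi iterations; the function name and iteration count are illustrative, and a production solver would use multigrid rather than Jacobi.

```python
import numpy as np

# Diffuse colors fixed on curve pixels over the rest of the image as a
# harmonic (Laplace) function. 'colors' is (h, w, 3); 'fixed' is a boolean
# mask of curve pixels whose colors must not change. Image borders wrap
# here (np.roll) purely for brevity.

def diffuse_colors(colors, fixed, iters=2000):
    img = colors.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                      np.roll(img, 1, 1) + np.roll(img, -1, 1))
        img = np.where(fixed[..., None], colors, avg)  # keep curve pixels fixed
    return img
```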

11.
We present a technique to efficiently importance sample distant, all‐frequency illumination in indoor scenes. Standard environment sampling is inefficient in such cases since the distant lighting is typically only visible through small openings (e.g. windows). This visibility is often addressed by manually placing a portal around each window to direct samples towards the openings; however, uniformly sampling the portal (its area or solid angle) disregards the possibly high‐frequency environment map. We propose a new portal importance sampling technique which takes into account both the environment map and its visibility through the portal, drawing samples proportional to the product of the two. To make this practical, we propose a novel, portal‐rectified reparametrization of the environment map with the key property that the visible region induced by a rectangular portal projects to an axis‐aligned rectangle. This allows us to sample according to the desired product distribution at an arbitrary shading location using a single (precomputed) summed‐area table per portal. Our technique is unbiased, relevant to many renderers, and can also be applied to rectangular light sources with directional emission profiles, enabling efficient rendering of non‐diffuse light sources with soft shadows.
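The core primitive the axis-aligned property enables is drawing a texel proportionally to tabulated values inside a rectangle using only a summed-area table. Below is a minimal sketch of that primitive under illustrative names; the portal-rectified parameterization and continuous pdf bookkeeping of the paper are not reproduced.

```python
import numpy as np

# Sample a texel of a 2D weight table proportionally to its value inside an
# axis-aligned rectangle, using one summed-area table (SAT) and two binary
# searches. 'f' is a 2D array of nonnegative weights; (r0, r1, c0, c1) is the
# rectangle (half-open texel ranges); (u, v) are uniform numbers in [0, 1).

def build_sat(f):
    s = np.zeros((f.shape[0] + 1, f.shape[1] + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(f, axis=0), axis=1)
    return s

def rect_sum(sat, r0, r1, c0, c1):
    return sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]

def sample_rect(sat, r0, r1, c0, c1, u, v):
    total = rect_sum(sat, r0, r1, c0, c1)
    # binary search for the row where the cumulative row sums cross u*total
    lo, hi = r0, r1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if rect_sum(sat, r0, mid, c0, c1) <= u * total:
            lo = mid
        else:
            hi = mid
    row = lo
    # same search along the chosen row for the column
    row_total = rect_sum(sat, row, row + 1, c0, c1)
    lo, hi = c0, c1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if rect_sum(sat, row, row + 1, c0, mid) <= v * row_total:
            lo = mid
        else:
            hi = mid
    return row, lo   # chosen texel; its probability is f[row, lo] / total
```

Because the SAT is built once per portal, the per-shading-point cost is just the rectangle lookup plus two logarithmic searches, which is what makes product sampling at arbitrary shading locations cheap.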

12.
In this paper, we present a method to model hyperelasticity that is well suited both for representing the nonlinearity of real‐world objects and for estimating it from deformation examples. Previous approaches suffer from several limitations, such as lack of integrability of elastic forces, failure to enforce energy convexity, lack of robustness in parameter estimation, or difficulty in modeling cross‐modal effects. Our method avoids these problems by relying on a general energy‐based definition of elastic properties. The accuracy of the resulting elastic model is maximized by defining an additive model of separable energy terms, which allows progressive parameter estimation. In addition, our method supports efficient modeling of extreme nonlinearities thanks to energy‐limiting constraints. We combine our energy‐based model with an optimization method to estimate model parameters from force‐deformation examples, and we show successful modeling of diverse deformable objects, including cloth, human finger skin, and internal human anatomy in a medical imaging application.

13.
In this paper, we present an on‐line, real‐time, physics‐based approach to motion control with contact repositioning, based on a low‐dimensional dynamics model and example motion data. Our approach first generates a reference motion at run time according to an on‐line user request by transforming an example motion extracted from a motion library. Guided by the reference motion, it repeatedly generates an optimal control policy for one small time window at a time over a sequence of partially overlapping windows, each covering a couple of footsteps of the reference motion, which supports on‐line performance. On top of this, our system dynamics and problem formulation allow us to derive closed‐form derivative functions by exploiting the low‐dimensional dynamics model together with the example motion data. These derivative functions and their sparse structures facilitate real‐time performance. Our approach also allows contact foot repositioning so as to respond robustly to an external perturbation or an environmental change, as well as to perform locomotion tasks such as stepping on stones effectively.

14.
The most common solutions to the light transport problem rely on either Monte Carlo (MC) integration or density estimation methods, such as uni‐ & bi‐directional path tracing or photon mapping. Recent gradient‐domain extensions of MC approaches show great promise; here, gradients of the final image are estimated numerically (instead of the image intensities themselves) with coherent paths generated from a deterministic shift mapping. We extend gradient‐domain approaches to light transport simulation based on density estimation. As with previous gradient‐domain methods, we detail the important considerations that arise when moving from a primal‐ to a gradient‐domain estimator, and we provide an efficient and straightforward solution to these problems. Our solution supports stochastic progressive density estimation, so it is robust to complex transport effects. We show that gradient‐domain photon density estimation converges faster than its primal‐domain counterpart, and that it is generally more robust than gradient‐domain uni‐ & bi‐directional path tracing for scenes dominated by complex transport.
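Gradient-domain estimators ultimately need a reconstruction that turns estimated finite-difference images back into intensities. The sketch below shows the common L2 (screened Poisson) variant of that step under illustrative names and array conventions; gradient-domain renderers also use L1 and other reconstructions, which are not shown.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# L2 reconstruction of an image from a noisy primal estimate and estimated
# finite-difference images, solving in the least-squares sense
#   min_I  alpha^2 ||I - primal||^2 + ||Dx I - gx||^2 + ||Dy I - gy||^2.
# primal: (h, w); gx: (h, w-1) horizontal diffs; gy: (h-1, w) vertical diffs.

def reconstruct(primal, gx, gy, alpha=0.2):
    h, w = primal.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    def diff_matrix(pairs):
        rows, cols, vals = [], [], []
        for r, (a, b) in enumerate(pairs):          # residual = I[b] - I[a]
            rows += [r, r]; cols += [b, a]; vals += [1.0, -1.0]
        return sp.csr_matrix((vals, (rows, cols)), shape=(len(pairs), n))

    Dx = diff_matrix([(idx[i, j], idx[i, j + 1])
                      for i in range(h) for j in range(w - 1)])
    Dy = diff_matrix([(idx[i, j], idx[i + 1, j])
                      for i in range(h - 1) for j in range(w)])

    A = sp.vstack([alpha * sp.identity(n, format="csr"), Dx, Dy]).tocsr()
    b = np.concatenate([alpha * primal.ravel(), gx.ravel(), gy.ravel()])
    sol = spla.lsqr(A, b)[0]
    return sol.reshape(h, w)
```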

15.
We address the problem of constructing appearance‐preserving levels of detail (LoDs) of complex 3D models such as trees. We propose a hybrid method that combines the strengths of mesh and volume representations. Our main idea is to separate macroscopic (i.e. larger than the target spatial resolution) and microscopic (sub‐resolution) surfaces at each scale and to treat them differently, because meshes are very efficient at representing macroscopic surfaces while sub‐resolution geometry benefits from volumetric approximations. We introduce a new algorithm that detects the macroscopic surfaces of a mesh for a given resolution. We simplify these surfaces with edge collapses and we provide a method for pre‐filtering their normal distributions and albedos. To approximate microscopic details, we use a heterogeneous microflake participating medium, and we introduce a new artifact‐free voxelization algorithm that preserves local occlusion. Thanks to our macroscopic surface analysis, our algorithm is fully automatic and generates seamless LoDs at arbitrarily coarse resolutions for a wide range of 3D models.

16.
BRDFs are commonly used for material appearance representation in applications ranging from gaming and the movie industry to product design and specification. Most applications rely on isotropic BRDFs due to their better availability, a result of their easier acquisition process. Anisotropic BRDFs, on the other hand, are more challenging to measure and process due to their structure‐dependent anisotropic highlights. This paper therefore simplifies the measurement of anisotropic BRDFs by representing them as a collection of isotropic BRDFs. Our method relies on decomposing an anisotropic BRDF database into training isotropic slices forming a linear basis, where appropriate sparse samples are identified using numerical optimization. When an unknown anisotropic BRDF is measured, these samples are repeatedly captured in a small set of azimuthal directions. All collected samples are then used to reconstruct the entire measured BRDF from the linear isotropic basis. Typically, fewer than 100 samples are sufficient to capture the main visual features of complex anisotropic materials, and we provide a minimal set of directional samples to be regularly measured at each sample rotation. We conclude that even simple setups relying on five bidirectional samples (a maximum of five stationary sensors/lights) in combination with eight rotations (a rotation stage for the specimen) can yield a promising reconstruction of anisotropic behavior. Next, we outline an extension of the proposed approach to adaptive sampling of anisotropic BRDFs to gain even better performance. Finally, we show that our method allows using standard geometries, including industrial multi‐angle reflectometers, for the fast measurement of anisotropic BRDFs.
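The reconstruction from a linear basis amounts to a small regression problem. Here is a minimal sketch using ridge-regularized least squares under illustrative names; the paper's actual fitting, sample placement, and rotation handling are not reproduced.

```python
import numpy as np

# Reconstruct a full BRDF from sparse measurements using a linear basis of
# isotropic slices. 'basis_full' holds the training slices tabulated over the
# full direction set (n_dirs x n_basis); 'sample_idx' are the indices of the
# sparsely measured directions; 'measured' are the values of the unknown BRDF
# at those directions.

def reconstruct_brdf(basis_full, sample_idx, measured, reg=1e-3):
    B = basis_full[sample_idx, :]                 # basis restricted to samples
    n = B.shape[1]
    # solve (B^T B + reg I) c = B^T y for the basis coefficients
    c = np.linalg.solve(B.T @ B + reg * np.eye(n), B.T @ measured)
    return basis_full @ c                         # full reconstructed BRDF
```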

17.
Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing and display an off‐line process, i.e., the time between initial capture and final display is far from real‐time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm converting the video streams from a sparse array of off‐the‐shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi‐resolution Lucas‐Kanade correspondence algorithm from a pair of images to an entire array. A special inter‐image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently in massively parallel hardware, allowing for interactive computations. The resulting depth quality as well as the computation performance compares favorably to other state‐of‐the‐art light‐field‐to‐depth approaches, as well as stereo matching techniques. Another outcome of this work is a data set of light field videos that are captured with multiple variants of sparse camera arrays.
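For reference, the classic pairwise building block being generalized is a single Lucas-Kanade step: solve a 2x2 system built from image gradients over a window. The sketch below is that basic step only, with illustrative names and roll-based gradients; the paper's multi-resolution, array-wide scheme and confidence consolidation are not reproduced.

```python
import numpy as np

# One translational Lucas-Kanade step between two grayscale image windows:
# solve the 2x2 normal equations built from spatial gradients and the
# temporal difference. The smallest eigenvalue of the structure tensor is a
# common reliability (confidence) indicator.

def lucas_kanade_step(img0, img1):
    # central-difference spatial gradients of the first image (borders wrap)
    gx = 0.5 * (np.roll(img0, -1, axis=1) - np.roll(img0, 1, axis=1))
    gy = 0.5 * (np.roll(img0, -1, axis=0) - np.roll(img0, 1, axis=0))
    gt = img1 - img0                              # temporal difference
    # accumulate the structure tensor and right-hand side over the window
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    flow = np.linalg.solve(A, b)                  # (dx, dy) for this window
    confidence = np.linalg.eigvalsh(A)[0]         # small eigenvalue ~ reliability
    return flow, confidence
```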

18.
We present an energy‐conserving fiber shading model for hair and fur that is efficient enough for path tracing. Our model adopts a near‐field formulation to avoid the expensive integral across the fiber, accounts for all higher‐order internal reflection events with a single lobe, and introduces a novel, closed‐form distribution for azimuthal roughness based on the logistic distribution. Additionally, we derive, through simulation, a parameterization that relates intuitive user controls such as multiple‐scattering albedo and isotropic cylinder roughness to the underlying physical parameters.
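To illustrate why a logistic lobe is attractive here, the sketch below shows a logistic distribution trimmed to [-pi, pi] with a closed-form pdf and exact inverse-CDF sampling; the parameter names and the mapping from roughness to the scale s are illustrative, not the paper's derived parameterization.

```python
import numpy as np

# Logistic lobe trimmed (renormalized) to the azimuthal interval [-pi, pi].
# 's' controls azimuthal roughness; both the pdf and the sampling routine
# are closed form, which is what makes this family convenient for path
# tracing.

def logistic_cdf(x, s):
    return 1.0 / (1.0 + np.exp(-x / s))

def trimmed_logistic_pdf(phi, s, a=-np.pi, b=np.pi):
    p = np.exp(-np.abs(phi) / s) / (s * (1.0 + np.exp(-np.abs(phi) / s)) ** 2)
    return p / (logistic_cdf(b, s) - logistic_cdf(a, s))   # renormalize to [a, b]

def sample_trimmed_logistic(u, s, a=-np.pi, b=np.pi):
    # invert the CDF restricted to [a, b]; u is uniform in [0, 1)
    k = logistic_cdf(a, s) + u * (logistic_cdf(b, s) - logistic_cdf(a, s))
    return -s * np.log(1.0 / k - 1.0)
```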

19.
We address several limitations of the sampling‐based motion control method of Liu et al. [LYvdP*10]. The key insight is to learn from past control reconstruction trials through sample distribution adaptation. Coupled with a sliding window scheme for better performance and an averaging method for noise reduction, the improved algorithm can efficiently construct open‐loop controls of good quality for long and challenging reference motions. Our ideas are intuitive and the implementations are simple. We compare the improved algorithm with the original algorithm both qualitatively and quantitatively, and demonstrate the effectiveness of the improved algorithm with a variety of motions ranging from stylized walking and dancing to gymnastics and martial arts routines.

20.
We propose a stable and efficient particle‐based method for simulating highly viscous fluids that can generate coiling and buckling phenomena and handle variable viscosity. In contrast to previous methods that use explicit integration, our method uses an implicit formulation to improve the robustness of viscosity integration, thereby enabling the use of larger time steps and higher viscosities. We use Smoothed Particle Hydrodynamics to solve the full form of viscosity, constructing a sparse linear system with a symmetric positive definite matrix, while exploiting the variational principle that automatically enforces the boundary condition on free surfaces. We also propose a new method for extracting the matrix coefficients contributed by second‐ring neighbor particles in order to solve the linear system efficiently with a conjugate gradient solver. Several examples demonstrate the robustness and efficiency of our implicit formulation over previous methods and illustrate the versatility of our method.
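To make the implicit structure concrete, here is a minimal sketch of an implicit particle viscosity update that assembles a symmetric positive definite system from generic symmetric neighbor weights and solves it with conjugate gradients. The weight construction, function names, and graph-Laplacian form are illustrative assumptions; the paper's full SPH viscosity discretization and its second-ring coefficient extraction are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Implicit viscosity sketch: instead of adding viscous acceleration
# explicitly, solve (I + dt * nu * Lap) v_new = v_old, where Lap is a
# graph-Laplacian-style operator built from nonnegative symmetric neighbor
# weights w_ij (e.g. SPH-kernel based). The matrix is symmetric positive
# definite, so conjugate gradients apply.

def implicit_viscosity(v_old, pairs, weights, n, dt, nu):
    """v_old: (n, 3) velocities; pairs: list of (i, j); weights: w_ij >= 0."""
    i = np.array([p[0] for p in pairs] + [p[1] for p in pairs])
    j = np.array([p[1] for p in pairs] + [p[0] for p in pairs])
    w = np.concatenate([weights, weights])
    W = sp.csr_matrix((w, (i, j)), shape=(n, n))          # symmetric weights
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())       # weight row sums
    A = sp.identity(n) + dt * nu * (D - W)                # SPD system matrix
    v_new = np.empty_like(v_old)
    for k in range(3):                                    # per velocity component
        v_new[:, k], info = spla.cg(A, v_old[:, k])
        assert info == 0
    return v_new
```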
