Similar Articles
20 similar articles found (search time: 15 ms)
1.
Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. We explore the inverse rendering problem of procedural material parameter estimation from photographs, presenting a unified view of the problem in a Bayesian framework. In addition to computing point estimates of the parameters by optimization, our framework uses a Markov Chain Monte Carlo approach to sample the space of plausible material parameters, providing a collection of plausible matches that a user can choose from, and efficiently handling both discrete and continuous model parameters. To demonstrate the effectiveness of our framework, we fit procedural models of a range of materials—wall plaster, leather, wood, anisotropic brushed metals and layered metallic paints—to both synthetic and real target images.
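As a rough illustration of the MCMC sampling described in this abstract, the sketch below runs a random-walk Metropolis-Hastings chain over a continuous parameter vector; the toy `render_procedural` forward model and the Gaussian image likelihood are hypothetical stand-ins, not the paper's actual procedural models or posterior.

```python
import numpy as np

def render_procedural(theta):
    """Hypothetical forward model mapping material parameters to an image."""
    base = np.linspace(0.0, 1.0, 8 * 8 * 3).reshape(8, 8, 3)
    return np.clip(base * theta[0] + theta[1], 0.0, 1.0)

def log_likelihood(theta, target, sigma=0.05):
    """Gaussian image likelihood around the target photograph."""
    diff = render_procedural(theta) - target
    return -0.5 * np.sum(diff ** 2) / sigma ** 2

def metropolis_hastings(target, theta0, n_steps=5000, step=0.02, rng=None):
    """Sample plausible parameter vectors with a random-walk MH chain."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta, logp = theta0.copy(), log_likelihood(theta0, target)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        logp_new = log_likelihood(proposal, target)
        if np.log(rng.uniform()) < logp_new - logp:  # accept or reject
            theta, logp = proposal, logp_new
        samples.append(theta.copy())
    return np.array(samples)

# Fit a 2-parameter toy material to a synthetic target image.
target = render_procedural(np.array([0.7, 0.1]))
chain = metropolis_hastings(target, theta0=np.array([0.5, 0.0]))
print(chain[-3:])  # a few plausible parameter settings near the target
```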

2.
Physically based rendering of glittering surfaces is a challenging problem in computer graphics. Several methods have proposed off-line solutions, but none is dedicated to high-performance graphics. In this work, we propose a novel physically based BRDF for real-time rendering of glints. Our model can reproduce the appearance of sparkling materials (rocks, rough plastics, glitter fabrics, etc.). Compared to the previous real-time method [ZK16], which is not physically based, our BRDF uses normalized NDFs and converges to the standard microfacet BRDF [CT82] for a large number of microfacets. Our method procedurally computes NDFs with hundreds of sharp lobes. It relies on a dictionary of 1D marginal distributions: at each location two of them are randomly picked and multiplied (to obtain an NDF), rotated (to increase the variety), and scaled (to control standard deviation/roughness). The dictionary is multiscale, does not depend on roughness, and has a low memory footprint (less than 1 MiB).
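The NDF construction can be sketched in a few lines: two 1D marginals are drawn from a dictionary of sharp-lobed distributions, multiplied into a 2D NDF, then rotated and scaled. The dictionary contents, resolution, and nearest-neighbor resampling below are illustrative choices, not the paper's.

```python
import numpy as np

def make_dictionary(n_dists=64, n_bins=32, n_lobes=8, rng=None):
    """Dictionary of 1D marginal distributions, each a sum of sharp lobes."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.linspace(-1.0, 1.0, n_bins)
    dists = []
    for _ in range(n_dists):
        centers = rng.uniform(-1.0, 1.0, n_lobes)
        lobes = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.002)
        d = lobes.sum(axis=1)
        dists.append(d / d.sum())            # normalize each marginal
    return x, np.array(dists)

def glint_ndf(dictionary, theta, scale, rng):
    """Multiply two random marginals, then rotate and scale to get a 2D NDF."""
    x, dists = dictionary
    i, j = rng.integers(0, len(dists), size=2)
    ndf = np.outer(dists[i], dists[j])       # product of the two marginals
    ndf /= ndf.sum()                          # keep the NDF normalized
    # Rotate/scale by resampling on a transformed grid (nearest neighbor for brevity).
    c, s = np.cos(theta), np.sin(theta)
    xx, yy = np.meshgrid(x, x, indexing="ij")
    xr, yr = c * xx - s * yy, s * xx + c * yy
    ii = np.clip(np.searchsorted(x, xr / scale), 0, len(x) - 1)
    jj = np.clip(np.searchsorted(x, yr / scale), 0, len(x) - 1)
    out = ndf[ii, jj]
    return out / (out.sum() + 1e-12)

rng = np.random.default_rng(1)
ndf = glint_ndf(make_dictionary(), theta=0.4, scale=0.8, rng=rng)
print(ndf.shape, ndf.sum())                   # (32, 32), sums to ~1
```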

3.
Efficient visibility computation is a prominent requirement when designing automated camera control techniques for dynamic 3D environments; computer games, interactive storytelling or 3D media applications all need to track 3D entities while ensuring their visibility and delivering a smooth cinematic experience. Addressing this problem requires sampling a large set of potential camera positions and estimating visibility for each of them, which in practice is intractable despite the efficiency of ray-casting techniques on recent platforms. In this work, we introduce a novel GPU-rendering technique to efficiently compute occlusions of tracked targets in Toric Space coordinates – a parametric space designed for cinematic camera control. We then rely on this occlusion evaluation to derive an anticipation map predicting occlusions for a continuous set of cameras over a user-defined time window. We finally design a camera motion strategy exploiting this anticipation map to minimize the occlusions of tracked entities over time. The key features of our approach are demonstrated through comparison with traditionally used ray-casting on benchmark scenes, and through an integration in multiple game-like 3D scenes with heavy, sparse and dense occluders.
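As a toy illustration of the anticipation-map idea (not the GPU Toric-space evaluation used in the paper), the sketch below counts, for each candidate camera, how many frames of a predicted target track are blocked by simple sphere occluders, and picks the camera with the fewest anticipated occlusions.

```python
import numpy as np

def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment p0->p1 intersects the sphere (center, radius)."""
    d = p1 - p0
    t = np.clip(np.dot(center - p0, d) / np.dot(d, d), 0.0, 1.0)
    closest = p0 + t * d
    return np.linalg.norm(closest - center) <= radius

def anticipation_map(cameras, target_track, occluders):
    """Per-camera count of frames in the window where the target is occluded."""
    occ = np.zeros(len(cameras))
    for ci, cam in enumerate(cameras):
        for target in target_track:            # predicted target positions
            if any(segment_hits_sphere(cam, target, c, r) for c, r in occluders):
                occ[ci] += 1
    return occ

# Candidate cameras on a circle around the scene, a target moving on a line,
# and one spherical occluder (all values are illustrative).
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
cameras = np.stack([4 * np.cos(angles), 4 * np.sin(angles), np.full(32, 1.5)], axis=1)
track = np.stack([np.linspace(-1, 1, 10), np.zeros(10), np.zeros(10)], axis=1)
occluders = [(np.array([2.0, 0.0, 0.5]), 0.8)]

occ = anticipation_map(cameras, track, occluders)
best = cameras[np.argmin(occ)]   # camera minimizing anticipated occlusion
print(occ.astype(int), best)
```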

4.
In avatar-mediated telepresence systems, the involved spaces are assumed to be similar so that the avatar in a remote space can imitate the user's motion, performed in a local space, with the proper semantic intention. For example, the user touching the desk should be reproduced by the avatar in the remote space to correctly convey the intended meaning. It is unlikely, however, that the two involved physical spaces are exactly the same in terms of room size or the locations of the placed objects. Therefore, a naive mapping of the user's joint motion to the avatar will not create semantically correct motion of the avatar in relation to the remote environment. Existing studies have addressed the problem of retargeting human motions to an avatar for telepresence applications. Few studies, however, have focused on retargeting continuous full-body motions such as locomotion and object interaction motions in a unified manner. In this paper, we propose a novel motion adaptation method that generates the full-body motions of a human-like avatar on the fly in the remote space. The proposed method handles locomotion and object interaction motions, as well as smooth transitions between them according to given user actions, under the condition of a bijective environment mapping between morphologically similar spaces. Our experiments show the effectiveness of the proposed method in generating plausible and semantically correct full-body motions of an avatar in room-scale space.

5.
6.
Recently, deep learning-based denoising approaches have led to dramatic improvements in low sample-count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high-quality reconstructions. In this paper, we develop the first deep learning-based method for particle-based rendering, and specifically focus on photon density estimation, the core of all particle-based methods. We train a novel deep neural network to predict a kernel function to aggregate photon contributions at shading points. Our network encodes individual photons into per-photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per-photon and photon local context features. This network is easy to incorporate into many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons compared to previous photon mapping methods. Our approach greatly reduces the required number of photons, significantly advancing the computational efficiency of photon mapping.
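A stripped-down picture of kernel-predicting photon density estimation: per-photon features are pooled into a local context vector, and a weight function plays the role of the learned kernel in the density estimate. The hand-written `kernel_weights` below is a hypothetical stand-in for the trained network.

```python
import numpy as np

def photon_features(photons, shading_point):
    """Simple per-photon features: offset, squared distance, incident direction."""
    offsets = photons["pos"] - shading_point
    d2 = np.sum(offsets ** 2, axis=1, keepdims=True)
    return np.concatenate([offsets, d2, photons["dir"]], axis=1)

def kernel_weights(features, context):
    """Hypothetical stand-in for the learned kernel network: a smooth falloff
    modulated by the local context norm (the trained model replaces this)."""
    d2 = features[:, 3]
    bandwidth = 0.1 * (1.0 + np.linalg.norm(context))
    w = np.exp(-d2 / bandwidth)
    return w / (w.sum() + 1e-8)

def estimate_density(photons, shading_point, radius):
    """Weighted photon density estimate at a shading point."""
    feats = photon_features(photons, shading_point)
    context = feats.mean(axis=0)              # photon local context vector
    w = kernel_weights(feats, context)
    flux = photons["power"]
    return np.sum(w * flux) / (np.pi * radius ** 2)

rng = np.random.default_rng(0)
photons = {
    "pos": rng.normal(0.0, 0.2, (256, 3)),
    "dir": rng.normal(0.0, 1.0, (256, 3)),
    "power": np.full(256, 1.0 / 256),
}
print(estimate_density(photons, np.zeros(3), radius=0.3))
```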

7.
An application may have to load an unknown 3D model and, for enhanced realistic rendering, precompute values over the surface domain, such as light maps, ambient occlusion, or other global-illumination parameters. High-quality uv-unwrapping has several problems, such as seams, distortions, and wasted texture space. Additionally, procedurally generated scene content, perhaps created on the fly, can make manual uv unwrapping impossible. Even when artist manipulation is feasible, good uv layouts can require expertise and be highly labor intensive. This paper investigates how to use Sparse Voxel DAGs (or DAGs for short) as one alternative to avoid uv mapping. The result is an algorithm enabling high compression ratios of both voxel structure and colors, which can be important for a baked scene to fit in GPU memory. Specifically, we enable practical usage for an automatic system by targeting efficient real-time mipmap filtering using compressed textures and by adding support for individual mesh voxelizations and resolutions in the same DAG. Furthermore, the latter increases the texture-compression ratios by up to 32% compared to using one global voxelization, improves DAG compression by 10–15% compared to using a DAG per mesh, and reduces color-bleeding problems for large mipmap filter sizes. The voxel filtering is more costly than standard hardware 2D-texture filtering. However, for full HD with deferred shading, it is optimized down to 2.5 ± 0.5 ms for a custom multisampling filtering (e.g., targeted for minification of low-frequency textures) and 5 ± 2 ms for quad-linear mipmap filtering (e.g., for high-frequency textures). Multiple textures sharing a voxelization can amortize the majority of this cost. Hence, these numbers involve 1–3 textures per pixel (Fig. 1c).
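The key ingredient of a Sparse Voxel DAG is merging identical subtrees. The minimal sketch below deduplicates a toy sparse voxel octree bottom-up with a hash map, which is the idea the compressed structure builds on; colors, mipmaps, and per-mesh resolutions are omitted.

```python
def build_dag(node, cache):
    """Recursively deduplicate identical subtrees of a sparse voxel octree.

    A node is either None (empty), True (full leaf), or a tuple of 8 children.
    Identical subtrees collapse to a single shared object via the cache.
    """
    if node is None or node is True:
        return node
    children = tuple(build_dag(c, cache) for c in node)
    key = tuple(id(c) if isinstance(c, tuple) else c for c in children)
    if key not in cache:
        cache[key] = children
    return cache[key]

def count_unique(node, seen):
    """Collect the distinct interior nodes reachable in the DAG."""
    if not isinstance(node, tuple) or id(node) in seen:
        return
    seen.add(id(node))
    for c in node:
        count_unique(c, seen)

# A toy octree where several subtrees are identical and should be merged.
leaf_block = (True, None, True, None, True, None, True, None)
root = (leaf_block, leaf_block, None, None, (True,) * 8, (True,) * 8, None, None)

cache, seen = {}, set()
dag = build_dag(root, cache)
count_unique(dag, seen)
print(len(seen))   # fewer unique interior nodes than in the original tree
```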

8.
Layered materials capture subtle, realistic reflection behaviors that traditional single-layer models lack. Much of this is due to the complex subsurface light transport at the interfaces of – and in the media between – layers. Rendering with these materials can be costly, since we must simulate these transport effects at every evaluation of the underlying reflectance model. Rendering an image requires thousands of such evaluations per pixel. Recent work treats this complexity by introducing significant approximations, requiring large precomputed datasets per material, or simplifying the light transport simulations within the materials. Even the most effective of these methods struggle with the complexity induced by high-frequency variation in reflectance parameters and micro-surface normal variation, as well as anisotropic volumetric scattering between the layer interfaces. We present a more efficient, unbiased estimator for light transport in such general, complex layered appearance models. By conducting an analysis of the types of transport paths that contribute most to the aggregate reflectance dynamics, we propose an effective and unbiased path sampling method that reduces variance in the reflectance evaluations. Our method additionally supports reflectance importance sampling, does not rely on any precomputation, and so integrates readily into existing renderers. We consistently outperform the state-of-the-art by ~2–6× in equal-quality (i.e., equal error) comparisons.

9.
We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: a ray-aligned occupancy map array (ROMA), generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast and low-divergence tracing method that computes visibilities in constant time, without constructing and traversing traditional intersection acceleration data structures such as BVHs. To further improve accuracy and alleviate aliasing, we use a spatiotemporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance compared to existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and is about 2.5×–10× faster than generating and tracing DFs at the same resolution and equal storage.
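The constant-time visibility query can be pictured as a bitmask test on a per-ray occupancy record: two depth slices along a ray see each other if no occupied slice lies strictly between them. The single-integer bitmask below is a simplified stand-in for the ROMA layout.

```python
def visible(occupancy_bits, d0, d1):
    """Constant-time visibility test along one ray of an occupancy map.

    occupancy_bits: integer bitmask, bit k set if depth slice k is occupied.
    d0, d1: depth-slice indices of the two query points on this ray.
    """
    lo, hi = sorted((d0, d1))
    # Bits strictly between the two endpoints (slices lo+1 .. hi-1).
    between = ((1 << hi) - 1) & ~((1 << (lo + 1)) - 1)
    return (occupancy_bits & between) == 0

# A ray whose depth slices 10-12 are occupied by geometry.
bits = 0
for k in (10, 11, 12):
    bits |= 1 << k

print(visible(bits, 2, 8))   # True: nothing between slices 2 and 8
print(visible(bits, 2, 20))  # False: occupied slices 10-12 block the segment
```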

10.
High-quality human pose and shape reconstruction from monocular images depends on resolving the ambiguities caused by occlusions and shape variation. Recent work has succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth. However, 3D ground truth is neither abundant nor efficient to obtain. In this paper, we introduce body part segmentation as critical supervision. Part segmentation not only indicates the shape of each body part but also helps to infer occlusions among parts. To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation in neural networks or optimization loops. We also introduce a general parametric model engaged in the rendering pipeline as an intermediate representation between skeletons and detailed shapes, which consists of primitive geometries for better interpretability. The proposed approach combines parameter regression, body model optimization, and detailed model registration into a single pipeline. Experimental results demonstrate that the proposed method achieves a balanced evaluation of pose and shape, and outperforms state-of-the-art approaches on the Human3.6M, UP-3D and LSP datasets.

11.
Logos are one of the most important graphic design forms that use an abstracted shape to clearly represent the spirit of a community. Among various styles of abstraction, a particular golden-ratio design is frequently employed by designers to create a concise and regular logo. In this context, designers use a set of circular arcs with golden ratios (i.e., all arcs are taken from circles whose radii form a geometric series based on the golden ratio) as the design elements to manually approximate a target shape. This error-prone process requires a large amount of time and effort, posing a significant challenge for design space exploration. In this work, we present a novel computational framework that can automatically generate golden-ratio logo abstractions from an input image. Our framework is based on a set of carefully identified design principles and a constrained optimization formulation respecting these principles. We also propose a progressive approach that can efficiently solve the optimization problem, resulting in a sequence of abstractions that approximate the input at decreasing levels of detail. We evaluate our work by testing on images in different formats, including real photos, clip art, and line drawings. We also extensively validate the key components and compare our results with manual results by designers to demonstrate the effectiveness of our framework. Moreover, our framework greatly benefits design space exploration by allowing easy specification of design parameters such as abstraction levels and golden circle sizes.
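For reference, the set of allowed "golden circles" is simply a geometric series of radii with ratio φ; the short sketch below generates such a series and snaps a fitted arc radius to its nearest element (the base radius and series length are arbitrary illustrative values).

```python
import numpy as np

PHI = (1.0 + np.sqrt(5.0)) / 2.0   # golden ratio

def golden_radii(base_radius, n_levels=12):
    """Geometric series of allowed circle radii based on the golden ratio."""
    half = n_levels // 2
    return base_radius * PHI ** np.arange(-half, n_levels - half)

def snap_radius(r, radii):
    """Snap a fitted arc radius to the nearest golden-circle radius."""
    return radii[np.argmin(np.abs(np.log(radii) - np.log(r)))]  # compare in log space

radii = golden_radii(base_radius=100.0)
print(np.round(radii, 1))
print(snap_radius(137.0, radii))   # snaps to the closest golden radius
```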

12.
We present RGB2AO, a novel task to generate ambient occlusion (AO) from a single RGB image instead of screen-space buffers such as depth and normal. RGB2AO produces a new image filter that creates a non-directional shading effect that darkens enclosed and sheltered areas. RGB2AO aims to enhance two 2D image editing applications: image composition and geometry-aware contrast enhancement. We first collect a synthetic dataset consisting of pairs of RGB images and AO maps. Subsequently, we propose a model for RGB2AO by supervised learning of a convolutional neural network (CNN), considering the 3D geometry of the input image. Experimental results quantitatively and qualitatively demonstrate the effectiveness of our model.

13.
3D scanned point cloud data of teeth is widely used in digital orthodontics. Classifying and semantically labelling the point cloud of each tooth is a key and challenging task for planning dental treatment. Utilizing the prior ordered position information of the tooth arrangement, we propose an effective network for tooth model classification in this paper. The relative position and adjacency similarity feature vectors are calculated for each tooth 3D model, and these geometric features are combined into the fully connected layers of the classification training task. For the classification of dental anomalies, we present a dental anomaly processing method to improve the classification accuracy. We also use FocalLoss as the loss function to address the sample imbalance of wisdom teeth. Extensive evaluations, ablation studies and comparisons demonstrate that the proposed network classifies tooth models accurately and automatically and outperforms state-of-the-art point cloud classification methods.
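The focal loss mentioned above down-weights well-classified examples so that rare classes such as wisdom teeth are not drowned out. A standard multi-class formulation is sketched below with the commonly used default γ and α, which are not necessarily the values used in the paper.

```python
import numpy as np

def focal_loss(logits, labels, gamma=2.0, alpha=0.25):
    """Multi-class focal loss: -alpha * (1 - p_t)^gamma * log(p_t), averaged."""
    logits = logits - logits.max(axis=1, keepdims=True)   # numerically stable softmax
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    p_t = probs[np.arange(len(labels)), labels]            # probability of the true class
    return np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + 1e-8))

logits = np.array([[4.0, 0.5, 0.1],    # confident and correct -> tiny loss
                   [0.2, 0.3, 0.1]])   # uncertain -> larger loss
labels = np.array([0, 1])
print(focal_loss(logits, labels))
```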

14.
We present a novel technique to efficiently render complex direct illumination in real time. It is based on a spatio-temporal randomized mixture model of von Mises-Fisher (vMF) distributions in screen space. For every pixel, we determine the vMF distribution to sample from using a Markov chain process that is targeted to capture important features of the integrand. This avoids the storage overhead of finite-component deterministic mixture models, for which, in addition, determining the optimal component count is challenging. We use stochastic multiple importance sampling (SMIS) to be independent of the equilibrium distribution of our Markov chain process, since it cancels out in the estimator. Further, we use the same sample to advance the Markov chain and to construct the SMIS estimator; local Markov chain state permutations avoid the resulting bias due to dependent sampling. As a consequence, we require only one ray per sample and pixel. We evaluate our technique using implementations in a research renderer as well as a classic game engine with highly dynamic content. Our results show that it is efficient and quickly readapts to dynamic conditions. We compare to spatio-temporal resampling (ReSTIR), which can suffer from correlation artifacts due to its non-adapting candidate distributions that can deviate strongly from the integrand. While we focus on direct illumination, our approach is more widely applicable, and we exemplarily show the rendering of caustics.
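The building block of the screen-space mixture is sampling a single von Mises-Fisher lobe. The helper below draws a direction on the sphere from a vMF distribution via the standard inversion of the cosine around the mean direction; it is a generic routine, not the paper's full SMIS/Markov-chain estimator.

```python
import numpy as np

def sample_vmf(mu, kappa, rng):
    """Draw one direction from a vMF distribution on the unit sphere with mean mu."""
    xi = rng.uniform()
    # Inverse-CDF sampling of the cosine w around the mean direction.
    w = 1.0 + np.log(xi + (1.0 - xi) * np.exp(-2.0 * kappa)) / kappa
    phi = rng.uniform(0.0, 2.0 * np.pi)
    r = np.sqrt(max(0.0, 1.0 - w * w))
    local = np.array([r * np.cos(phi), r * np.sin(phi), w])
    # Build an orthonormal frame whose third axis is mu, then rotate into it.
    mu = mu / np.linalg.norm(mu)
    a = np.array([1.0, 0.0, 0.0]) if abs(mu[2]) > 0.9 else np.array([0.0, 0.0, 1.0])
    t = np.cross(a, mu); t /= np.linalg.norm(t)
    b = np.cross(mu, t)
    return local[0] * t + local[1] * b + local[2] * mu

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0, 1.0])
dirs = np.array([sample_vmf(mu, kappa=50.0, rng=rng) for _ in range(1000)])
print(dirs.mean(axis=0))   # concentrated around +Z for large kappa
```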

15.
It is notoriously difficult for artists to control liquids while generating plausible animations. We introduce a new liquid control tool that allows users to load, transform, and apply precomputed liquid simulation templates in a scene in order to control a particle-based simulation. Each template instance generates control forces that drive the global simulated liquid to locally reproduce the templated liquid behavior. Our system is augmented with a variable proportion of temporary particles to help efficiently reproduce the templated liquid density, with fewer requirements on the surrounding environment. The resulting control strategy adds only a small computational overhead, leading to quick visual feedback for resolutions allowing interactive simulation. We demonstrate the robustness and ease of use of our method on various examples in 2D and 3D.

16.
We present a simple, efficient and low-memory technique targeting fast construction of bounding volume hierarchies (BVHs) for broad-phase collision detection. To achieve this, we devise a novel representation of BVH trees in memory. We develop a mapping of the implicit index representation to compact memory locations, based on simple bit shifts, to then construct and evaluate bounding volume test trees (BVTTs) during collision detection with real-time performance. We model the topology of the BVH tree implicitly as binary encodings, which allows us to determine the nodes missing from a complete binary tree using the binary representation of the number of missing nodes. The simplicity of our technique allows for fast hierarchy construction, achieving a substantial speedup over the state-of-the-art. Making use of these characteristics, we show not only that it is feasible to rebuild the BVH at every frame, but that, using our technique, doing so is actually faster than refitting and more memory efficient.
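The bit-shift index mapping for an implicit (pointer-free) complete binary tree is standard; the sketch below shows the parent/child/level arithmetic such a BVH layout relies on. It does not reproduce the paper's exact encoding of missing nodes.

```python
def parent(i):
    """Parent of node i in an implicit (pointer-free) complete binary tree."""
    return (i - 1) >> 1

def children(i):
    """Left and right children of node i."""
    return (i << 1) + 1, (i << 1) + 2

def level_and_offset(i):
    """Depth of node i and its position within that depth (root is node 0)."""
    lvl = (i + 1).bit_length() - 1
    return lvl, i - ((1 << lvl) - 1)

# Walk from a node back to the root using only shifts.
i = 11
path = [i]
while i:
    i = parent(i)
    path.append(i)
print(path)                                 # [11, 5, 2, 0]
print(children(2), level_and_offset(11))    # (5, 6) (3, 4)
```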

17.
Voxels are a popular choice to encode complex geometry. Their regularity makes updates easy and enables random retrieval of values. The main limitation lies in the poor scaling with respect to resolution. Sparse voxel DAGs (Directed Acyclic Graphs) overcome this hurdle and offer high-resolution representations for real-time rendering but only handle static data. We introduce a novel data structure to enable interactive modifications of such compressed voxel geometry without requiring de- and recompression. Besides binary data to encode geometry, it also supports compressed attributes (e.g., color). We illustrate the usefulness of our representation via an interactive large-scale voxel editor (supporting carving, filling, copying, and painting).

18.
We present a novel method to construct short cuts for parameterizations with low isometric distortion. The algorithm consists of two steps: (i) detect feature points, where the distortion is usually concentrated; and (ii) construct a cut by connecting the detected feature points. Central to each step is a greedy method. After generating a redundant feature point set, a greedy filtering process is performed to identify the feature points required for low isometric distortion parameterizations. This filtering process discards the feature points that are useless for distortion reduction while still enabling us to obtain low isometric distortion. Next, we formulate the process of connecting the detected feature points as a Steiner tree problem. To find an approximate solution, we first successively and greedily produce a collection of auxiliary points. Then, a cut is constructed by connecting the feature points and auxiliary points. In the 26,299 test cases in which an exact solution to the Steiner tree problem is available, the length of the cut obtained by our method is on average 0.17% longer than optimal. Compared to state-of-the-art cut construction methods, our method is one order of magnitude faster and generates shorter cuts while achieving similar isometric distortion.
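To give a flavor of the greedy connection step, the sketch below links detected feature points with Prim's minimum spanning tree, using Euclidean distance as a stand-in for on-surface path length and omitting the paper's auxiliary Steiner points.

```python
import numpy as np

def prim_mst(points):
    """Greedy (Prim) minimum spanning tree over a point set.

    Returns the tree edges as index pairs. Distances here are Euclidean;
    the actual method would use on-surface path lengths instead.
    """
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    edges = []
    for _ in range(n - 1):
        # Cheapest edge connecting the tree to a vertex outside it.
        best = (np.inf, -1, -1)
        for u in np.flatnonzero(in_tree):
            for v in np.flatnonzero(~in_tree):
                if dist[u, v] < best[0]:
                    best = (dist[u, v], u, v)
        _, u, v = best
        in_tree[v] = True
        edges.append((u, v))
    return edges

feature_points = np.array([[0.0, 0.0], [1.0, 0.1], [2.1, 0.0], [1.0, 1.5]])
print(prim_mst(feature_points))   # [(0, 1), (1, 2), (1, 3)]
```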

19.
20.
Personalized hand models can be used to synthesize high-quality hand datasets, provide additional training data for deep learning, and improve the accuracy of hand pose estimation. In recent years, parameterized hand models, e.g., MANO, have been widely used to obtain personalized hand models. However, due to the low resolution of existing parameterized hand models, it is still hard to obtain high-fidelity personalized hand models. In this paper, we propose a new method to estimate personalized hand models from multiple hand postures with multi-view color images. The personalized hand model is represented by a personalized neutral hand and multiple hand postures. We propose a novel optimization strategy to estimate the neutral hand from multiple hand postures. To demonstrate the performance of our method, we built a multi-view system and captured more than 35 people, each with 30 hand postures. We hope the estimated hand models can boost research on high-fidelity parameterized hand modeling in the future. All hand models are publicly available at www.yangangwang.com.
