Similar Articles
20 similar articles found (search time: 15 ms).
1.
Many-light rendering is becoming increasingly common and important as rendering moves to the next level of scene complexity. However, state-of-the-art algorithms are still far from efficient at computing illumination under many lights, because they consider light sampling and BRDF sampling separately. To address this inefficiency, we present a novel light sampling method, BRDF-oriented light sampling, which selects lights according to importance values estimated from the BRDF's contributions. Our BRDF-oriented light sampling works naturally with multiple importance sampling (MIS), and allows us to dynamically determine the number of samples allocated to the different sampling techniques. With our method, we achieve significantly faster convergence to the ground-truth result, both perceptually and numerically, compared to previous many-light rendering algorithms.
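A minimal sketch of the light-selection step, assuming hypothetical point-light records with position and intensity fields and a caller-supplied eval_brdf callback; it illustrates only importance-proportional selection, not the paper's full estimator or its MIS sample allocation:

```python
import math
import random

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def select_light(lights, p, n, eval_brdf):
    """Pick one light with probability proportional to an estimate of its
    BRDF-weighted contribution at shading point p with normal n.
    Returns (light, pdf) so the caller can divide by the pdf."""
    weights = []
    for light in lights:
        d = [lx - px for lx, px in zip(light["position"], p)]
        dist2 = max(dot(d, d), 1e-8)
        wi = normalize(d)
        cos_theta = max(0.0, dot(n, wi))
        # Importance estimate: emitted power * cosine term * BRDF value,
        # attenuated by squared distance.
        weights.append(light["intensity"] * cos_theta * eval_brdf(wi) / dist2)
    total = sum(weights)
    if total == 0.0:
        return None, 0.0          # no light can contribute here
    u, acc = random.random() * total, 0.0
    for light, w in zip(lights, weights):
        acc += w
        if u <= acc:
            return light, w / total
    return lights[-1], weights[-1] / total
```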

2.
With the widespread use of 3D acquisition devices, there is an increasing need to consolidate captured noisy and sparse point cloud data into accurate representations of the underlying structures. Numerous algorithms tackle this ill-posed problem by relying on priors such as local smoothness; however, such priors lead to the loss of important features and geometric detail. Instead, we propose a novel data-driven approach to point cloud consolidation based on a convolutional neural network. Our method takes a sparse, noisy point cloud as input and produces a dense point cloud that accurately represents the underlying surface by resolving ambiguities in the geometry. The resulting point set can then be used to reconstruct accurate manifold surfaces and estimate surface properties. To achieve this, we propose a generative neural network architecture that can take point clouds as both input and output, unlocking a powerful set of tools from the deep-learning literature. We use this architecture to apply convolutional neural networks to local patches of geometry for high-quality, efficient point cloud consolidation. This results in significantly more accurate surfaces, as we illustrate with a diversity of examples and comparisons to the state of the art.
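A toy sketch (PyTorch, hypothetical layer sizes) of the abstract's key architectural point, a network whose input and output are both point sets; it illustrates the sparse-patch-in, dense-patch-out contract only, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class PatchConsolidator(nn.Module):
    """Maps a sparse, noisy patch of k_in points to a denser set of
    m_out points (sizes are illustrative placeholders)."""
    def __init__(self, k_in=64, m_out=256):
        super().__init__()
        self.encoder = nn.Sequential(      # per-point feature extraction
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU())
        self.decoder = nn.Sequential(      # global code -> dense point set
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, m_out * 3))
        self.m_out = m_out

    def forward(self, pts):                # pts: (B, 3, k_in)
        feat = self.encoder(pts)           # (B, 128, k_in)
        code = feat.max(dim=2).values      # order-invariant pooling: (B, 128)
        out = self.decoder(code)           # (B, m_out * 3)
        return out.view(-1, 3, self.m_out)

# dense = PatchConsolidator()(torch.randn(8, 3, 64))  # -> (8, 3, 256)
```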

3.
In this paper, we propose PCPNET, a deep-learning based approach for estimating local 3D shape properties in point clouds. In contrast to the majority of prior techniques, which concentrate on global or mid-level attributes (e.g., for shape classification or semantic labeling), we suggest a patch-based learning method in which a series of local patches at multiple scales around each point is encoded in a structured manner. Our approach is especially well adapted to estimating local shape properties such as normals (both unoriented and oriented) and curvature from raw point clouds in the presence of strong noise and multi-scale features. Our main contributions include a novel multi-scale variant of the recently proposed PointNet architecture with an emphasis on local shape information, and a series of novel applications demonstrating that training on data derived from well-structured triangle meshes and applying the trained model to noisy point clouds can produce results superior to specialized state-of-the-art techniques. Finally, we demonstrate the utility of our approach for shape reconstruction by showing how it can be used to extract normal orientation information from point clouds.
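The multi-scale patch encoding can be illustrated with a short NumPy sketch; the radii and patch size k below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def multiscale_patches(points, center_idx, radii=(0.03, 0.07, 0.15), k=128):
    """Extract fixed-size local patches at several scales around one point,
    normalized to the unit ball, as network input for local property
    estimation. points: (N, 3) array."""
    c = points[center_idx]
    patches = []
    for r in radii:
        d = np.linalg.norm(points - c, axis=1)
        idx = np.where(d <= r)[0]          # always contains the center itself
        # Subsample (or pad by resampling) to exactly k points.
        choice = np.random.choice(idx, k, replace=len(idx) < k)
        patch = (points[choice] - c) / r   # translate + scale normalize
        patches.append(patch)
    return np.stack(patches)               # (num_scales, k, 3)
```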

4.
Monte Carlo methods for physically based light transport simulation are broadly adopted in the feature-film production, animation, and visual-effects industries. These methods, however, often produce noisy images and converge slowly. As such, improving the convergence of Monte Carlo rendering remains an important open problem. Gradient-domain light transport is a recent family of techniques that can accelerate Monte Carlo rendering by up to an order of magnitude, leveraging gradient-based estimation and a reformulation of the rendering problem as an image reconstruction. This state-of-the-art report comprehensively frames the fundamentals of gradient-domain rendering, as well as the pragmatic details behind practical gradient-domain uni- and bidirectional path tracing and photon density estimation algorithms. Moreover, we discuss the various image reconstruction schemes that are crucial to accurate and stable gradient-domain rendering. Finally, we benchmark various gradient-domain techniques against the state of the art in denoising methods before discussing open problems.
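The image-reconstruction step at the core of gradient-domain rendering can be sketched as an L2 screened Poisson solve; the sketch below uses plain gradient descent, with alpha, iters, and step as illustrative values:

```python
import numpy as np

def fwd_diff(img, axis):
    """Forward difference along axis, zero at the far border."""
    d = np.zeros_like(img)
    if axis == 0:
        d[:-1, :] = img[1:, :] - img[:-1, :]
    else:
        d[:, :-1] = img[:, 1:] - img[:, :-1]
    return d

def fwd_diff_T(y, axis):
    """Adjoint of fwd_diff (a negated backward difference)."""
    d = np.zeros_like(y)
    if axis == 0:
        d[:-1, :] -= y[:-1, :]
        d[1:, :] += y[:-1, :]
    else:
        d[:, :-1] -= y[:, :-1]
        d[:, 1:] += y[:, :-1]
    return d

def reconstruct(primal, dx, dy, alpha=0.2, iters=400, step=0.1):
    """L2 screened Poisson reconstruction: find the image I minimizing
        alpha*|I - primal|^2 + |Dx I - dx|^2 + |Dy I - dy|^2
    given a noisy primal image and horizontal/vertical gradient estimates."""
    I = primal.astype(np.float64).copy()
    for _ in range(iters):
        g = alpha * (I - primal)
        g += fwd_diff_T(fwd_diff(I, 1) - dx, 1)   # horizontal gradient term
        g += fwd_diff_T(fwd_diff(I, 0) - dy, 0)   # vertical gradient term
        I -= step * g                             # step < 2/(8+alpha): stable
    return I
```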

5.
Information visualization is a rapidly evolving field, with a growing volume of scientific literature and texts continually published. To keep abreast of the latest developments in the domain, survey papers and state-of-the-art reviews are valuable tools for managing the large quantity of scientific literature. Recently, a survey of survey papers was published to keep track of the refereed survey papers appearing in information visualization conferences and journals. However, no such resource exists to inform readers of the large volume of books being published on the subject, leaving open the possibility of valuable knowledge being overlooked. We present the first literature survey of information visualization books, addressing this challenge by surveying the large volume of books on the topics of information visualization and visual analytics. This unique survey addresses some special challenges associated with collections of books (as opposed to research papers), including searching, browsing, and cost. The paper features a novel two-level classification based on both the books and the chapter topics examined in each book, enabling the reader to quickly identify to what depth a topic of interest is covered within a particular book. Readers can use this survey to identify the most relevant book for their needs among a quickly expanding collection. In indexing the landscape of information visualization books, this survey provides a valuable resource to both experienced researchers and newcomers in the data visualization discipline.

6.
Image- and data-parallel rendering across multiple nodes of high-performance computing systems is widely used in visualization to provide higher frame rates, support large data sets, and render data in situ. For in situ visualization specifically, reducing the bottlenecks incurred by visualization and compositing is of key concern in reducing the overall simulation runtime. Moreover, prior algorithms were designed to support either image- or data-parallel rendering and impose restrictions on the data distribution, requiring a different implementation for each configuration. In this paper, we introduce the Distributed FrameBuffer, an asynchronous image-processing framework for multi-node rendering. We demonstrate that our approach achieves performance superior to the state of the art for common use cases, while providing the flexibility to support a wide range of parallel rendering algorithms and data distributions. Building on this framework, we extend the open-source ray tracing library OSPRay with a data-distributed API, enabling its use in data-distributed and in situ visualization applications.
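As background, the sort-last compositing that such frameworks accelerate reduces, per pixel, to a nearest-fragment merge; the toy NumPy sketch below shows only that merge, none of the paper's asynchronous, tile-based machinery:

```python
import numpy as np

def composite_depth(partials):
    """Sort-last compositing: each node renders the full image of its data
    partition; keep the nearest fragment per pixel. partials is a list of
    (rgb, depth) pairs with rgb: (H, W, 3) and depth: (H, W)."""
    rgb = partials[0][0].copy()
    depth = partials[0][1].copy()
    for p_rgb, p_depth in partials[1:]:
        closer = p_depth < depth           # boolean mask of winning pixels
        rgb[closer] = p_rgb[closer]
        depth[closer] = p_depth[closer]
    return rgb
```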

7.
Harmonious color combinations can stimulate positive emotional responses in users. However, a wide-open research question remains: how can we establish a robust and accurate color harmony measure that helps both the general public and professional designers identify the harmony level of a color theme or color set? Building upon the key observation that color pairs play an important role in harmony estimation, in this paper we present a novel color-pair based estimation model to accurately measure color harmony. It first uses a two-layer maximum likelihood estimation (MLE) based method to compute an initial prediction of color harmony by statistically modeling pair-wise color preferences from existing datasets. The initial scores are then refined by a back-propagation neural network (BPNN) using a variety of color features extracted in different color spaces, yielding an accurate harmony estimate. Our extensive experiments, including performance comparisons on harmony estimation applications, show the advantages of our method over state-of-the-art methods.
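The first stage reduces to aggregating a learned pair-wise preference model over all color pairs in a theme; a minimal sketch, with pair_score standing in for the MLE-fitted pairwise model and the BPNN refinement omitted:

```python
import itertools

def initial_harmony(theme, pair_score):
    """First-stage harmony estimate: average a learned pairwise preference
    score over every color pair in the theme. theme is a sequence of
    colors; pair_score(a, b) is a caller-supplied stand-in for the
    statistically fitted pairwise preference model."""
    pairs = list(itertools.combinations(theme, 2))
    return sum(pair_score(a, b) for a, b in pairs) / len(pairs)
```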

8.
This paper proposes a new methodology for measuring the error of unbiased physically based rendering algorithms. The current state of the art includes metrics based on mean squared error (MSE) and visual comparisons of equal-time renderings produced by competing algorithms. Neither is satisfying: MSE does not describe an algorithm's behavior and can exhibit significant variance, and visual comparisons are inherently subjective. Our contribution is two-fold. First, we propose computing many short renderings instead of a single long run, and using the short renderings to estimate the expectation and variance of the MSE, as well as the per-pixel standard deviation. An algorithm that achieves good results in most runs but produces occasional outliers is essentially unreliable, which we wish to quantify numerically; we use the per-pixel standard deviation to identify problematic lighting effects of rendering algorithms. The second contribution is the error spectrum ensemble (ESE), a tool for measuring the distribution of error over frequencies. The ESE serves two purposes: it reveals correlation between pixels, and it can be used to detect outliers, which can shift the measured error substantially.
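The first contribution is straightforward to sketch: given many short, independent renders and a converged reference, estimate the MSE's expectation and variance plus a per-pixel standard-deviation map (NumPy; requires at least two runs):

```python
import numpy as np

def error_statistics(renders, reference):
    """renders: stack of independent short renders of the same scene,
    shape (runs, H, W) or (runs, H, W, 3); reference: converged image.
    Returns MSE expectation/variance across runs and a per-pixel
    standard-deviation map that highlights unreliable lighting effects."""
    renders = np.asarray(renders, dtype=np.float64)
    per_run_mse = ((renders - reference) ** 2).reshape(len(renders), -1).mean(axis=1)
    return {
        "mse_mean": per_run_mse.mean(),
        "mse_var": per_run_mse.var(ddof=1),          # unbiased across runs
        "pixel_std": renders.std(axis=0, ddof=1),    # per-pixel spread map
    }
```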

9.
We present a convolutional neural network architecture for the joint design of color filter array (CFA) patterns and demosaicing. Our generic model allows the training of CFAs of arbitrary size, optimizing each color filter over the entire RGB color space. The patterns and algorithms produced by our method yield high-quality color reconstructions. We demonstrate the effectiveness of our approach by showing that its results achieve higher PSNR than state-of-the-art techniques on all standard demosaicing datasets, in both noise-free and noisy scenarios. Our method can also be used to obtain demosaicing strategies for pre-defined CFAs, such as the Bayer pattern, for which our results surpass even demosaicing algorithms specifically designed for that pattern.
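For context, a classic fixed-pattern baseline that learned joint CFA/demosaicing aims to beat is bilinear demosaicing of the Bayer mosaic; a sketch assuming an RGGB tiling:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic via per-channel
    interpolation kernels. raw: (H, W) sensor image."""
    raw = np.asarray(raw, dtype=np.float64)
    H, W = raw.shape
    mask = np.zeros((H, W, 3))
    mask[0::2, 0::2, 0] = 1                      # R sites
    mask[0::2, 1::2, 1] = 1                      # G sites (even rows)
    mask[1::2, 0::2, 1] = 1                      # G sites (odd rows)
    mask[1::2, 1::2, 2] = 1                      # B sites
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # G: 1/2 density
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # R/B: 1/4 density
    out = np.zeros((H, W, 3))
    for c, k in ((0, k_rb), (1, k_g), (2, k_rb)):
        out[..., c] = convolve(raw * mask[..., c], k, mode='mirror')
    return out
```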

10.
The wide adoption of path-tracing algorithms in high-end realistic rendering has stimulated many diverse research initiatives. In this paper we present a coherent survey of methods that utilize Monte Carlo integration for estimating light transport in scenes containing participating media. Our work complements the volume-rendering state-of-the-art report by Cerezo et al. [CPP*05]; we review publications accumulated since its publication over a decade ago, and include earlier methods that are key to building light transport paths in a stochastic manner. We begin by describing analog and non-analog procedures for free-path sampling, and discuss various expected-value, collision, and track-length estimators for computing transmittance. We then review the rendering algorithms that employ these as building blocks for path sampling. Special attention is devoted to null-collision methods, which use fictitious matter to handle spatially varying densities; we also import two "next-flight" estimators originally developed in the nuclear sciences. Wherever possible, we draw connections between image-synthesis techniques and methods from particle physics and neutron transport to provide the reader with a broader context.
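The null-collision idea is well illustrated by Woodcock (delta) tracking, a standard free-path sampler for heterogeneous media; a minimal sketch, with sigma_t a caller-supplied extinction function bounded by the majorant sigma_max:

```python
import math
import random

def delta_tracking_distance(sigma_t, sigma_max, t_max):
    """Null-collision (Woodcock / delta) tracking: sample a free-flight
    distance through a heterogeneous medium whose extinction sigma_t(t)
    along the ray is bounded above by the majorant sigma_max. Returns the
    sampled collision distance, or None if the particle escapes t_max."""
    t = 0.0
    while True:
        # Tentative flight through the homogenized (majorant) medium.
        t -= math.log(1.0 - random.random()) / sigma_max
        if t >= t_max:
            return None                           # escaped the medium
        if random.random() < sigma_t(t) / sigma_max:
            return t                              # real collision
        # Otherwise: null collision with fictitious matter -> keep flying.
```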

11.
Procedural textile models are compact, easy to edit, and can achieve state-of-the-art realism with fiber-level details. However, these complex models generally need to be fully instantiated (i.e., realized) into 3D volumes or fiber meshes and stored in memory. We introduce a novel realization-minimizing technique that enables physically based rendering of procedural textiles without the need for full model realization. The key ingredients of our technique are new data structures and search algorithms that look up regular and flyaway fibers on the fly, efficiently and consistently. Our technique works with compact fiber-level procedural yarn models in their exact form, with no approximation imposed. In practice, our method can render very large models that are practically unrenderable with existing methods, while using considerably less memory (60–200× less) and achieving good performance.
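One way to look up procedural detail on the fly without storing a realization is to derive each fiber's parameters deterministically from its index; the sketch below illustrates that idea with placeholder parameters, not the paper's actual data structures or yarn model:

```python
import hashlib
import random

def fiber_params(yarn_id, fiber_id):
    """Regenerate a procedural fiber on demand: seed a PRNG from the
    (yarn, fiber) index so the same fiber comes back bit-identically on
    every query, with no full realization kept in memory. The returned
    fields are hypothetical placeholders."""
    digest = hashlib.blake2b(f"{yarn_id}:{fiber_id}".encode(),
                             digest_size=8).digest()
    rng = random.Random(int.from_bytes(digest, "little"))
    return {
        "phase": rng.uniform(0.0, 6.283185),   # helix phase around the yarn
        "radius": rng.uniform(0.8, 1.2),       # relative radial offset
        "is_flyaway": rng.random() < 0.05,     # small chance of flyaway fiber
    }
```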

12.
The bounding volume hierarchy (BVH) has been widely adopted as the acceleration structure for broad-phase collision detection. Previous state-of-the-art BVH-based collision detection approaches exploit the spatio-temporal coherence of simulations by maintaining a bounding volume test tree (BVTT) front. A major drawback of these algorithms is that large deformations in the scene decrease culling efficiency and slow down collision queries. Moreover, for front-based methods, inefficient GPU caching caused by the arbitrary layout of BVH and BVTT front nodes becomes a critical performance issue. We present a fast and robust BVH-based collision detection scheme on the GPU that addresses these problems by ordering and restructuring the BVHs and BVTT fronts. Our techniques are based on a histogram sort and an auxiliary structure, the BVTT front log, through which we analyze the dynamic status of the BVTT front and the BVH quality. Our approach efficiently handles inter- and intra-object collisions and performs especially well in simulations with considerable spatio-temporal coherence. Benchmark results demonstrate that our approach is significantly faster than the previous BVH-based method, and also outperforms other state-of-the-art spatial subdivision schemes in terms of speed.
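A BVTT front caches the node pairs where culling last succeeded or leaf tests occurred, so each query resumes mid-tree instead of from the roots; a toy serial update pass follows (the callbacks are placeholders, and the paper's GPU ordering and restructuring are omitted):

```python
def update_bvtt_front(front, overlaps, is_leaf, children, test_leaf_pair):
    """One BVTT-front update: for every cached node pair, either keep it
    culled, run a leaf-leaf test, or descend one level. Returns the new
    front and the list of leaf-level contact results."""
    new_front, contacts = [], []
    for a, b in front:
        if not overlaps(a, b):
            new_front.append((a, b))             # still culled: keep cached
        elif is_leaf(a) and is_leaf(b):
            contacts.append(test_leaf_pair(a, b))
            new_front.append((a, b))             # stay at the leaf level
        else:
            # Descend whichever node is internal (a simple choice).
            src, other, swap = (a, b, False) if not is_leaf(a) else (b, a, True)
            for c in children(src):
                new_front.append((other, c) if swap else (c, other))
    return new_front, contacts
```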

13.
Dissection puzzles require assembling a common set of pieces into multiple distinct forms. Existing works focus on creating 2D dissection puzzles that form primitive or naturalistic shapes. Unlike 2D dissection puzzles, which can rest on a tabletop surface, 3D dissection puzzles should preferably be steady by themselves in each assembly form. In this work, we aim to computationally design steady 3D dissection puzzles. We address this challenging problem with three key contributions. First, we take two voxelized shapes as input and dissect them into a common set of puzzle pieces, during which we allow slight modification of the input shapes, preferably in their internal volume, to preserve their external appearance. Second, we formulate a formal model of generalized interlocking for connecting pieces into a steady assembly using both their geometric arrangements and friction. Third, we modify the geometry of each dissected puzzle piece based on this formal model so that each assembly form is steady. We demonstrate the effectiveness of our approach on a wide variety of shapes, compare it with the state of the art on 2D and 3D examples, and fabricate some of our designed puzzles to validate their steadiness.

14.
A mandatory component of many point set algorithms is the availability of consistently oriented vertex normals (e.g., for surface reconstruction, feature detection, or visualization). Previous orientation methods for meshes or raw point clouds do not consider a global context, are often based on unrealistic assumptions, or have extremely long computation times, making them unusable on real-world data. We present a novel, massively parallel method to compute globally consistent oriented point normals for raw, unsorted point clouds. Built on the idea of graph-based energy optimization, we create a complete kNN graph over the entire point cloud, with a new weighted similarity criterion encoding the graph energy. To orient the normals in a globally consistent way, we perform a highly parallel greedy edge collapse, which merges similar parts of the graph and orients them consistently. We compare our method to current state-of-the-art approaches and achieve speedups of up to two orders of magnitude. The quality of the resulting normal orientation is on par with or better than existing solutions, especially for real-world, noisy 3D-scanned data.
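For reference, the classic serial baseline that such parallel methods improve upon is Hoppe-style greedy propagation: build a kNN graph weighted by normal disagreement, take its minimum spanning tree, and flip normals along a traversal. A SciPy sketch, assuming unit normals and a connected graph:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order

def orient_normals(points, normals, k=8):
    """Greedy MST-based orientation propagation over a kNN graph.
    points, normals: (N, 3) arrays, normals assumed unit length."""
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)
    rows, cols, w = [], [], []
    for i in range(n):
        for j in idx[i, 1:]:                     # skip the point itself
            rows.append(i); cols.append(j)
            # Low weight when normals (up to sign) agree; kept > 0.
            w.append(1.001 - abs(normals[i] @ normals[j]))
    g = coo_matrix((w, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(g)
    order, pred = depth_first_order(mst + mst.T, 0, directed=False)
    out = normals.copy()
    for v in order[1:]:                          # flip to match the parent
        if out[pred[v]] @ out[v] < 0:
            out[v] = -out[v]
    return out
```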

15.
Microfacet theory is commonly used to build reflectance models for surfaces. While traditional microfacet-based models assume that the distribution of a surface's microstructure is continuous, recent studies indicate that some surfaces with tiny, discrete, and stochastic facets exhibit glittering visual effects, while some surfaces with structured features exhibit anisotropic specular reflection. Accordingly, this paper proposes an efficient and stationary method of surface material modeling that processes both glittery and non-glittery surfaces in a consistent way. Our method comprises two steps. In the preprocessing step, we take a fixed-size sample normal map as input, organize 4D microfacet trees in position and normal space for arbitrarily sized surfaces, and cluster microfacets into 4D K-lobes via an adaptive k-means method. In the rendering step, surface normals can then be efficiently evaluated using the pre-clustered microfacets. Our method can efficiently render any structured, discrete, or continuous micro-surface using a precisely reconstructed surface NDF, and it is both faster and uses less memory than state-of-the-art glittery-surface modeling methods.

16.
The stochastic nature of Monte Carlo rendering algorithms inherently produces noisy images. Essentially, three approaches have been developed to address this issue: improving the ray-tracing strategies to reduce pixel variance, adaptive sampling that increases the number of rays in regions that need it, and filtering the noisy image as a post-process. Although algorithms in the latter category introduce bias, they remain highly attractive, as they quickly improve the visual quality of the images, are compatible with all sorts of rendering effects, have a low computational cost, and, in some cases, avoid deep modifications to the rendering engine. In this paper, we build upon recent advances in both non-local and collaborative filtering methods to propose a new, efficient denoising operator for Monte Carlo rendering. Starting from the local statistics that emanate from each pixel's sample distribution, we enrich the image with local covariance measures and introduce a non-local Bayesian filter specifically designed to address the noise stemming from Monte Carlo rendering. The resulting algorithm only requires the rendering engine to provide, for each pixel, a histogram and a covariance matrix of its color samples. Compared to state-of-the-art sample-based methods, we obtain improved denoising results, especially in dark areas, with a large increase in speed and more robustness with respect to the main parameter of the algorithm. We provide a detailed mathematical exposition of our Bayesian approach, discuss extensions to multiscale execution, adaptive sampling, and animated scenes, and experimentally validate it on a collection of scenes.
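The flavor of filtering on per-pixel sample statistics can be sketched with a toy non-local filter whose weights come from a chi-square distance between per-pixel sample histograms rather than from the noisy colors themselves; this is an illustration only, not the paper's Bayesian filter:

```python
import numpy as np

def histogram_nlm(color, hist, radius=5, h=0.5):
    """Toy statistics-driven non-local filter. color: (H, W, 3) noisy
    image; hist: (H, W, B) per-pixel sample histograms supplied by the
    renderer; h controls weight falloff. Slow reference loop."""
    H, W, _ = color.shape
    hist = np.asarray(hist, dtype=np.float64)
    out = np.zeros_like(color, dtype=np.float64)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            hw = hist[y0:y1, x0:x1]                       # window histograms
            d = ((hw - hist[y, x]) ** 2 /
                 (hw + hist[y, x] + 1e-8)).sum(axis=2)    # chi-square distance
            w = np.exp(-d / (h * h))                      # similarity weights
            out[y, x] = (w[..., None] * color[y0:y1, x0:x1]).sum(axis=(0, 1)) / w.sum()
    return out
```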

17.
In this paper, we propose a unified neural network for panoptic segmentation, a task aiming at more fine-grained segmentation. Following existing methods that combine semantic and instance segmentation, our method relies on a triple-branch neural network to tackle this unified task. In the first stage, we adopt a ResNet-50 with a feature pyramid network (FPN) as a shared backbone to extract features. Each branch then leverages the shared feature maps, serving as the stuff, things, or mask branch. Lastly, the outputs are fused following a well-designed strategy. Extensive experimental results on the MS-COCO dataset demonstrate that our approach achieves a Panoptic Quality (PQ) score competitive with the state of the art.
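A skeleton of the triple-branch layout (PyTorch, assuming a recent torchvision for the ResNet-50 + FPN backbone; the head shapes are illustrative placeholders, not the paper's exact layers):

```python
import torch.nn as nn
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

class TriBranchPanoptic(nn.Module):
    """Shared ResNet-50 + FPN backbone feeding stuff (semantic), things
    (instance-class), and mask heads; fusion of the three outputs into a
    panoptic result is left out of this sketch."""
    def __init__(self, num_stuff=54, num_things=80):
        super().__init__()
        self.backbone = resnet_fpn_backbone(backbone_name="resnet50",
                                            weights=None)
        c = self.backbone.out_channels               # 256 for this FPN
        self.stuff_head = nn.Conv2d(c, num_stuff, 1)     # semantic logits
        self.things_head = nn.Conv2d(c, num_things, 1)   # instance classes
        self.mask_head = nn.Conv2d(c, 1, 1)              # instance masks

    def forward(self, images):                       # images: (B, 3, H, W)
        feats = self.backbone(images)                # dict of FPN levels
        p2 = feats["0"]                              # highest-resolution level
        return (self.stuff_head(p2),
                self.things_head(p2),
                self.mask_head(p2))
```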

18.
Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework able to dynamically select the rendering configuration with the best quality within a given power budget. Unlike the current state of the art, our method requires neither precomputation over the whole camera-view space nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: a novel power prediction model, and a runtime quality-error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime, transparently to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer and a mobile device. In both cases, we produce results close to the maximum quality while achieving significant power savings.
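At runtime the search reduces to picking the best-quality configuration whose predicted power fits the budget; a minimal sketch, with the two prediction models passed in as callables since their construction is the paper's actual contribution:

```python
def select_config(configs, predict_power, predict_error, budget):
    """Return the rendering configuration with the lowest predicted
    quality error among those whose predicted power draw fits the budget;
    fall back to the cheapest configuration if none fits."""
    feasible = [c for c in configs if predict_power(c) <= budget]
    if not feasible:
        return min(configs, key=predict_power)   # graceful degradation
    return min(feasible, key=predict_error)      # best quality within budget
```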

19.
We propose a novel approach for shape matching between triangular meshes that, in contrast to existing methods, can match crease features. Our approach is based on a hybrid optimization scheme that solves simultaneously for an elastic deformation of the source and its projection onto the target. The elastic energy we minimize is invariant to rigid-body motions, and its non-linear membrane energy component favors locally injective maps. Symmetrizing this model enables feature-aligned correspondences even for non-isometric meshes. We demonstrate the advantage of our approach over state-of-the-art methods on isometric and non-isometric datasets, where we improve the geodesic distance to the ground truth, the conformal and area distortions, and the mismatch of the mean-curvature functions. Finally, we show that our computed maps are applicable to surface interpolation, consistent cross-field computation, and consistent quadrangular remeshing of a set of shapes.

20.
We propose ClustMe, a new visual quality measure for ranking monochrome scatterplots based on cluster patterns. ClustMe is based on data collected from a human-subjects study in which 34 participants judged synthetically generated cluster patterns in 1,000 scatterplots. We generated these patterns by carefully varying the free parameters of a simple two-component Gaussian mixture model, and asked the participants to count the number of clusters they could see (1, or more than 1). Based on the results, we form ClustMe by selecting the model that best predicts these human judgments among 7 different state-of-the-art component-merging techniques (Demp). To quantitatively evaluate ClustMe, we conducted a second study in which 31 human subjects ranked 435 pairs of scatterplots of real and synthetic data in terms of cluster-pattern complexity. We use this data to compare ClustMe's performance to 4 other state-of-the-art clustering measures, including the well-known Clumpiness scagnostics. We found that, of all the measures, ClustMe agrees most strongly with the human rankings.
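The stimulus generation can be sketched as sampling a two-component Gaussian mixture with randomized separation, mixing weight, and spreads; the parameter ranges below are illustrative, not the study's exact design:

```python
import numpy as np

def gmm_scatterplot(rng, n=500):
    """Sample one synthetic scatterplot from a two-component 2D Gaussian
    mixture whose free parameters are drawn at random."""
    sep = rng.uniform(0.0, 4.0)                    # distance between centers
    w = rng.uniform(0.2, 0.8)                      # mixing weight
    s1, s2 = rng.uniform(0.3, 1.0, size=2)         # per-component spreads
    labels = rng.random(n) < w                     # component assignment
    pts = np.where(labels[:, None],
                   rng.normal([0.0, 0.0], s1, (n, 2)),
                   rng.normal([sep, 0.0], s2, (n, 2)))
    return pts

rng = np.random.default_rng(7)
plot = gmm_scatterplot(rng)                        # (500, 2) point array
```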
