Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper we study the comprehensive effects on volume-rendered images of numerical errors caused by the use of finite precision for data representation and processing. To estimate actual error behavior we conduct a thorough study using a volume renderer implemented with arbitrary floating-point precision. Based on the experimental data we then model the impact of floating-point pipeline precision, sampling frequency and fixed-point input data quantization on the fidelity of rendered images. We introduce three models: an average model, which adapts to neither different data sets nor varying transfer functions, and two adaptive models that take the intricacies of a new data set and transfer function into account by adapting themselves given a few rendered example images. We also test and validate our models on new data that was not used during model building.
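The core effect the paper quantifies can be reproduced in a toy experiment. The sketch below (hypothetical parameters, not the paper's renderer) composites a long ray twice, once in double precision and once with every intermediate result rounded to half precision, and reports the resulting error:

```python
import numpy as np

def composite(samples, alphas, round_fn=lambda x: x):
    """Front-to-back alpha compositing; round_fn emulates the rounding of a
    reduced-precision floating-point pipeline after every operation."""
    c, a = 0.0, 0.0
    for s, al in zip(samples, alphas):
        c = round_fn(c + round_fn((1.0 - a) * al * s))
        a = round_fn(a + round_fn((1.0 - a) * al))
    return c

rng = np.random.default_rng(0)
samples = rng.random(4096)           # scalar emission per sample
alphas = rng.random(4096) * 0.01     # low per-sample opacity, long ray

ref = composite(samples, alphas)                  # double-precision reference
low = composite(samples, alphas, np.float16)      # ~half-precision pipeline
print(abs(ref - low))                             # precision-induced error
```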

2.
In this paper we address the problem of optimal centre placement for scattered data approximation using radial basis functions (RBFs) by introducing the concept of floating centres. Given an initial least-squares solution, we optimize the positions and the weights of the RBF centres by minimizing a non-linear error function. By optimizing the centre positions, we obtain better approximations with a lower number of centres, which improves the numerical stability of the fitting procedure. We combine the non-linear RBF fitting with a hierarchical domain decomposition technique. This provides a powerful tool for surface reconstruction from oriented point samples. By directly incorporating point normal vectors into the optimization process, we avoid the use of off-surface points, which results in less computational overhead and fewer undesired surface artefacts. We demonstrate that the proposed surface reconstruction technique is as robust as recent methods that compute the indicator function of the solid described by the point samples. In contrast to indicator-function-based methods, our method computes a global distance field that can directly be used for shape registration.
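The floating-centres idea, jointly optimizing weights and centre positions, can be illustrated in one dimension. A minimal sketch, assuming Gaussian basis functions and SciPy's generic non-linear least-squares solver rather than the paper's hierarchical 3D fitting:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_floating_rbf(x, y, k=6, eps=4.0):
    """Fit y ~ sum_j w_j * exp(-(eps*(x - c_j))^2), optimizing BOTH the
    weights w and the centre positions c ('floating centres')."""
    rng = np.random.default_rng(0)
    c0 = rng.choice(x, k)                # initial centres picked from the data
    w0 = np.zeros(k)

    def residuals(params):
        c, w = params[:k], params[k:]
        phi = np.exp(-(eps * (x[:, None] - c[None, :])) ** 2)
        return phi @ w - y

    sol = least_squares(residuals, np.concatenate([c0, w0]))
    return sol.x[:k], sol.x[k:]

x = np.linspace(0.0, 1.0, 200)
centres, weights = fit_floating_rbf(x, np.sin(6.0 * x))
```

Because the centre positions are free variables of the optimization, far fewer centres are needed than with a fixed-grid least-squares fit of the same accuracy.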

3.
This paper presents a new method for estimating normals on unorganized point clouds that preserves sharp features. It is based on a robust version of the Randomized Hough Transform (RHT). We consider the filled Hough transform accumulator as an image of the discrete probability distribution of possible normals. The normals we estimate correspond to the maximum of this distribution. We use a fixed-size accumulator for speed, statistical exploration bounds for robustness, and randomized accumulators to prevent discretization effects. We also propose various sampling strategies to deal with anisotropy, such as that produced by laser scanners due to differing angles of incidence. Our experiments show that our approach offers an ideal compromise between precision, speed, and robustness: it is at least as precise and noise-resistant as state-of-the-art methods that preserve sharp features, while being almost an order of magnitude faster. Moreover, it can handle anisotropy with minor speed and precision losses.
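The basic RHT voting loop is simple to sketch. The version below uses a plain cubic binning of the normal vector instead of the paper's fixed-size spherical accumulator, and omits the statistical exploration bounds; names and parameters are illustrative:

```python
import numpy as np

def rht_normal(neighbors, n_triplets=500, n_bins=16, rng=None):
    """Randomized Hough Transform normal estimate: vote the plane normals
    of random point triplets into a discretized accumulator and return
    the mean direction of the strongest bin."""
    rng = rng or np.random.default_rng()
    votes = {}
    for _ in range(n_triplets):
        a, b, c = neighbors[rng.choice(len(neighbors), 3, replace=False)]
        n = np.cross(b - a, c - a)
        length = np.linalg.norm(n)
        if length < 1e-12:                 # skip degenerate triplets
            continue
        n /= length
        if n[2] < 0.0:                     # fold antipodal normals together
            n = -n
        key = tuple(np.minimum((n * 0.5 + 0.5) * n_bins,
                               n_bins - 1).astype(int))
        acc, cnt = votes.get(key, (np.zeros(3), 0))
        votes[key] = (acc + n, cnt + 1)
    acc, _ = max(votes.values(), key=lambda v: v[1])
    return acc / np.linalg.norm(acc)
```

Taking the accumulator maximum rather than an average is what makes the estimate robust near sharp features: votes from the two sides of an edge fall into different bins instead of being blended.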

4.
One of the most elementary applications of a lattice is the quantization of real-valued s-dimensional vectors into finite bit precision to make them representable by a digital computer. Most often, the simple s-dimensional regular grid is used for this task, where each component of the vector is quantized individually. However, it is known that other lattices perform better with regard to the average quantization error. A rank-1 lattice is a special type of lattice where the lattice points can be described by a single s-dimensional generator vector. Further, the number of points inside the unit cube [0, 1)^s is arbitrary and can be directly enumerated by a single one-dimensional integer value. By choosing a suitable generator vector, the minimum distance between the lattice points can be maximized, which, as we show, leads to a nearly optimal mean quantization error. We present methods for finding parameters for s-dimensional maximized minimum distance rank-1 lattices and further show their practical use in computer graphics applications.
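Rank-1 lattice points are trivial to enumerate from the generator vector. A minimal sketch, using the classic 2D Fibonacci lattice as the generator choice (the paper's search for maximized-minimum-distance generators in higher dimensions is not reproduced here):

```python
import numpy as np

def rank1_lattice(n, g):
    """All n points of the rank-1 lattice with generator vector g inside
    the unit cube [0, 1)^s:  x_i = frac(i * g / n),  i = 0 .. n-1."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(g) / n) % 1.0

def quantize(v, pts):
    """Quantize v to the nearest lattice point (brute force, torus metric)."""
    d = pts - np.asarray(v)
    d -= np.round(d)                      # wrap differences around the torus
    return pts[np.argmin((d * d).sum(axis=1))]

# 2D Fibonacci lattice: n = 144, g = (1, 89) is a classic
# large-minimum-distance generator choice.
pts = rank1_lattice(144, (1, 89))
print(quantize((0.37, 0.62), pts))
```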

5.
6.
Material interface reconstruction (MIR) is the task of constructing boundary interfaces between regions of homogeneous material, while satisfying volume constraints, over a structured or unstructured spatial domain. In this paper, we present a discrete approach to MIR based on optimizing the labeling of fractional volume elements within a discretization of the problem's original domain. We detail how to construct and initially label a discretization, and introduce a volume-conservative swap move for optimization. Furthermore, we discuss methods for extracting and visualizing material interfaces from the discretization. Our technique has significant advantages over previous methods: we produce interfaces between multiple materials that are continuous across cell boundaries, for time-varying and static data, in arbitrary dimension, with bounded error.
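The key property of a swap move is that exchanging the labels of two cells leaves every material's cell count, and hence its volume, unchanged. A 2D toy sketch of such an optimization; the greedy acceptance rule and simple boundary-length energy are illustrative choices, not the paper's:

```python
import numpy as np

def boundary_energy(labels):
    """Total interface length: label disagreements between 4-neighbors."""
    return (np.count_nonzero(labels[1:, :] != labels[:-1, :]) +
            np.count_nonzero(labels[:, 1:] != labels[:, :-1]))

def volume_conservative_swap(labels, rng, n_iters=20000):
    """Greedy optimization with swap moves: exchanging the labels of two
    cells keeps each material's cell count (its volume) exactly constant."""
    h, w = labels.shape
    for _ in range(n_iters):
        (i, j), (k, l) = rng.integers(0, (h, w), (2, 2))
        if labels[i, j] == labels[k, l]:
            continue
        before = boundary_energy(labels)
        labels[i, j], labels[k, l] = labels[k, l], labels[i, j]
        if boundary_energy(labels) > before:    # revert swaps that roughen
            labels[i, j], labels[k, l] = labels[k, l], labels[i, j]
    return labels

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, (32, 32))           # three materials
labels = volume_conservative_swap(labels, rng)
```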

7.
Associating normal vectors to surfaces is essential for many rendering algorithms. We introduce a new method to compute normals on discrete surfaces in object space. Assuming that the surface locally separates space into two disjoint subsets, each of these subsets implicitly contains information about the surface inclination. Considering one of these subsets in a small neighbourhood of a surface point enables us to derive the surface normal from this set. We show that this leads to exact results for C¹-continuous surfaces in ℝ³. Furthermore, we show that good approximations can be obtained numerically by sampling the considered area. Finally, we derive a method for normal computation on surfaces in discrete space.
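The numerical variant can be sketched directly: sample a small ball around the surface point, keep the samples falling in one of the two subsets, and read the normal off their centroid. A minimal sketch assuming an inside/outside predicate is available (function names are illustrative):

```python
import numpy as np

def occupancy_normal(p, inside, radius=0.05, n_samples=4096, rng=None):
    """Estimate the normal at surface point p from one of the two subsets:
    the centroid of the 'outside' samples of a small ball around p is
    offset from p along the surface normal for a locally smooth surface."""
    rng = rng or np.random.default_rng(0)
    d = rng.normal(size=(n_samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)          # uniform directions
    d *= radius * rng.random((n_samples, 1)) ** (1.0 / 3)  # uniform in the ball
    q = p + d
    outside = ~np.array([inside(x) for x in q])
    n = q[outside].mean(axis=0) - p
    return n / np.linalg.norm(n)

inside_sphere = lambda x: x @ x < 1.0                 # the unit ball
print(occupancy_normal(np.array([1.0, 0.0, 0.0]), inside_sphere))  # ~ (1, 0, 0)
```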

8.
Enforcing fluid incompressibility is one of the time-consuming aspects of SPH. In this paper, we present a local Poisson SPH (LPSPH) method to solve incompressibility for particle-based fluid simulation. Considering the pressure Poisson equation, we first convert it into an integral form, and then apply a discretization to convert the continuous integral equation into a discretized summation over all the particles in the local pressure integration domain determined by the local geometry. To control the approximation error, we further integrate our local pressure solver into the predictive-corrective framework to avoid the computational cost of solving a pressure Poisson equation globally. Our method can effectively eliminate the large density deviations mainly caused by the solid boundary treatment and free-surface topological changes, and it shows the advantage of a higher convergence rate over predictive-corrective incompressible SPH (PCISPH).

9.
We present an algorithm for the restoration of noisy point cloud data, termed Moving Robust Principal Components Analysis (MRPCA). We model the point cloud as a collection of overlapping two-dimensional subspaces, and propose a model that encourages collaboration between overlapping neighbourhoods. Similar to state-of-the-art sparse-modelling-based image denoising, the estimated point positions are computed by local averaging. In addition, the proposed approach models grossly corrupted observations explicitly, does not require oriented normals, and takes into account both local and global structure. Sharp features are preserved via a weighted ℓ1 minimization, where the weights measure the similarity between normal vectors in a local neighbourhood. The proposed algorithm is compared against existing point cloud denoising methods, obtaining competitive results.
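A generic weighted robust-PCA objective of the kind the abstract describes has the following form; the notation is illustrative rather than the paper's exact formulation, with X a local patch matrix, L its low-rank part, S the sparse outliers, and weights w comparing neighbouring normals:

\[
\min_{L,S}\; \lVert L\rVert_{*} + \lambda \sum_{i,j} w_{ij}\,\lvert S_{ij}\rvert
\quad \text{s.t.} \quad X = L + S,
\qquad
w_{ij} = \exp\!\bigl(-\lVert n_i - n_j\rVert^{2}/\sigma^{2}\bigr).
\]

Large weights on pairs with similar normals penalize outliers there strongly, while small weights across a sharp edge let the sparse term absorb the discrepancy; this is how feature preservation enters the objective.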

10.
In this paper, we introduce a novel coordinate-free method for manipulating and analyzing vector fields on discrete surfaces. Unlike the commonly used representations of a vector field as an assignment of vectors to the faces of the mesh, or as real values on edges, we argue that vector fields can also be naturally viewed as operators whose domain and range are functions defined on the mesh. Although this point of view is common in differential geometry, it has so far not been adopted in geometry processing applications. We recall the theoretical properties of vector fields represented as operators, and show that composition of vector fields with other functional operators is natural in this setup. This leads to the characterization of vector field properties through commutativity with other operators, such as the Laplace-Beltrami and symmetry operators, as well as to a straightforward definition of differential properties such as the Lie derivative. Finally, we demonstrate a range of applications, such as Killing vector field design, symmetric vector field estimation and joint design on multiple surfaces.
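In this operator view, a vector field V acts on scalar functions by directional derivative, and field properties become algebraic statements about the operator. A representative instance (notation illustrative):

\[
D_V f = \langle \nabla f,\, V \rangle,
\qquad
V \text{ is a Killing field} \iff D_V \Delta = \Delta D_V,
\]

where Δ is the Laplace-Beltrami operator. Commutativity with Δ expresses that the flow of V is an isometry, which is the kind of characterization the abstract refers to.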

11.
We propose an efficient and robust image-space denoising method for noisy images generated by Monte Carlo ray tracing methods. Our method is based on two new concepts: virtual flash images and homogeneous pixels. Inspired by recent developments in flash photography, virtual flash images emulate photographs taken with a flash, capturing various features of rendered images without taking additional samples. Using a virtual flash image as an edge-stopping function, our method can preserve image features that are not captured well by existing edge-stopping functions such as normals and depth values. While denoising each pixel, we consider only homogeneous pixels, i.e., pixels that are statistically equivalent to each other. This makes it possible to define a stochastic error bound for our method, and this bound goes to zero as the number of ray samples goes to infinity, irrespective of the denoising parameters. To highlight the benefits of our method, we apply it to two Monte Carlo ray tracing methods, photon mapping and path tracing, with various input scenes. We demonstrate that using virtual flash images and homogeneous pixels with a standard denoising method outperforms state-of-the-art image-space denoising methods.
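The way a guide image serves as an edge-stopping function is the cross-bilateral (joint bilateral) filter: range weights are computed from the clean guide, so its edges stop the smoothing of the noisy input. A minimal grayscale sketch; the paper's full method additionally selects homogeneous pixels, which is omitted here:

```python
import numpy as np

def joint_bilateral(noisy, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Cross-bilateral filter on a grayscale float image: spatial weights
    are a fixed Gaussian, range weights come from the guide (virtual
    flash) image, so the guide's clean edges stop the smoothing."""
    h, w = noisy.shape
    out = np.empty_like(noisy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad_n = np.pad(noisy, radius, mode='reflect')
    pad_g = np.pad(guide, radius, mode='reflect')
    for y in range(h):
        for x in range(w):
            n_win = pad_n[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng_w = np.exp(-(g_win - guide[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * n_win).sum() / wgt.sum()
    return out
```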

12.
Optimized Sub-Sampling of Point Sets for Surface Splatting (cited by 9: 0 self-citations, 9 by others)
Using surface splats as a rendering primitive has gained increasing attention recently due to its potential for high-performance and high-quality rendering of complex geometric models. However, as with any other rendering primitive, the processing costs are still proportional to the number of primitives used to represent a given object. This is why complexity reduction for point-sampled geometry is as important as it is, e.g., for triangle meshes. In this paper we present a new sub-sampling technique for dense point clouds which is specifically adjusted to the particular geometric properties of circular or elliptical surface splats. A global optimization scheme computes an approximately minimal set of splats that covers the entire surface while staying below a globally prescribed maximum error tolerance ε. Since our algorithm converts pure point sample data into surface splats with normal vectors and spatial extent, it can also be considered a surface reconstruction technique which generates a hole-free, piecewise linear, C⁻¹-continuous approximation of the input data. Here we can exploit the higher flexibility of surface splats compared to triangle meshes. Compared to previous work in this area we are able to obtain significantly lower splat numbers for a given error tolerance.
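The covering objective can be illustrated with a greedy toy: repeatedly keep the point whose splat covers the most still-uncovered points. This only sketches the set-cover flavor of the problem; the paper uses a global optimization with error-bounded elliptical splats rather than fixed-radius balls:

```python
import numpy as np

def greedy_splat_cover(pts, radius):
    """Greedy cover: repeatedly keep the point whose splat (here a ball of
    fixed radius) covers the most still-uncovered input points."""
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # O(n^2): toy only
    within = d2 <= radius * radius
    uncovered = np.ones(len(pts), bool)
    keep = []
    while uncovered.any():
        gain = (within & uncovered[None, :]).sum(axis=1)
        i = int(np.argmax(gain))
        keep.append(i)
        uncovered &= ~within[i]
    return pts[keep]

pts = np.random.default_rng(0).random((500, 3))
print(len(greedy_splat_cover(pts, 0.15)))       # far fewer splats than points
```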

13.
We propose a new strategy to estimate surface normal information from highly noisy, sparse data. Our approach is based on a tensor field morphologically adapted to infer normals; it acts as a three-dimensional structuring element for smooth surfaces. Robust orientation inference for all input elements is performed by morphological operations using the tensor field. A general normal estimator is defined by combining the inferred normals, their confidences and the tensor field. This estimator can be used to directly reconstruct the surface or to provide input normals to other reconstruction methods. We present qualitative and quantitative results showing the behavior of existing methods and of ours. A comparative discussion of these results shows the effectiveness of our proposals.

14.
Voxel-based approaches are today's standard for encoding volume data. Recently, directed acyclic graphs (DAGs) have also been used successfully for compressing sparse voxel scenes, but they are restricted to a single bit of (geometry) information per voxel. We present a method to compress arbitrary data, such as colors, normals, or reflectance information. By decoupling geometry and voxel data via a novel mapping scheme, we are able to apply the DAG principle to encode the topology, while using a palette-based compression for the voxel attributes, leading to a drastic memory reduction. Our method outperforms existing state-of-the-art techniques and is well suited for GPU architectures. We achieve real-time performance on commodity hardware for colored scenes with up to 17 hierarchical levels (a 128K³ voxel resolution), which are stored fully in core.
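The decoupling idea can be shown schematically: geometry becomes a bit array (the part an SVO/DAG compresses), while attributes are palette indices ordered by each set voxel's rank within the geometry. The layout below illustrates the principle only; it is not the paper's actual data structure:

```python
import numpy as np

# Geometry: one bit per voxel (this is what an SVO/DAG would compress).
occupancy = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
# Attributes: a small palette plus one palette index per *set* voxel,
# ordered by the voxel's rank within the geometry.
palette = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)
attr_idx = np.array([0, 2, 2, 1], dtype=np.uint8)

rank = np.cumsum(occupancy) - 1        # voxel index -> rank among set voxels

def voxel_color(v):
    """Resolve a voxel's color through the geometry-to-attribute mapping."""
    if not occupancy[v]:
        return None                    # empty voxel: geometry bit is 0
    return palette[attr_idx[rank[v]]]

print(voxel_color(3))                  # -> [  0   0 255]
```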

15.
This paper presents a method for compressing measured datasets of the near-field emission of physical light sources (represented by raysets). We create a mesh on the bounding surface of the light source that stores illumination information. The mesh is augmented with information about the directional distribution and energy density. We have developed a new approach to smoothly generate random samples from the illumination distribution represented by the mesh, and to efficiently handle importance sampling of points and directions. We show that our representation can compress a rayset of 10 million particles into a mesh of a few hundred triangles. We also show that the error of this representation is low, even for objects very close to the light source.

16.
In this paper, we investigate the efficiency of ray queries on the CPU in the context of path tracing, where ray distributions are mostly random. We show that existing schemes that exploit data locality to improve ray tracing efficiency fail to do so beyond the first diffuse bounce, and we analyze the cause of this. We then present an alternative scheme, inspired by the work of Pharr et al., in which we improve data locality by using a data-centric, breadth-first approach. We show that our scheme improves on state-of-the-art performance for the ray distributions encountered in a path tracer.
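A data-centric approach amounts to reordering work by scene locality rather than by ray index: rays in the same bucket tend to traverse the same acceleration-structure nodes, improving cache behavior. A minimal bucketing sketch, assuming ray origins normalized to the unit cube (the grid resolution is an illustrative parameter):

```python
import numpy as np

def bucket_rays(origins, dirs, grid_res=8):
    """Bucket rays by the coarse grid cell of their origin (assumes origins
    normalized to [0, 1)^3) and return them in bucket order."""
    cells = np.clip((origins * grid_res).astype(int), 0, grid_res - 1)
    keys = (cells[:, 0] * grid_res + cells[:, 1]) * grid_res + cells[:, 2]
    order = np.argsort(keys, kind='stable')
    return origins[order], dirs[order], keys[order]

rng = np.random.default_rng(0)
o, d, k = bucket_rays(rng.random((10000, 3)), rng.normal(size=(10000, 3)))
```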

17.
Project page: http://gamma.cs.unc.edu/BSC/ We present a real-time and reliable continuous collision detection (CCD) algorithm between triangulated models that exploits the floating-point hardware capability of current CPUs and GPUs. Our formulation is based on Bernstein Sign Classification, which takes advantage of the geometric properties of the Bernstein basis and Bézier curves to perform Boolean collision queries. We derive tight numerical error bounds on the computations and employ those bounds to design an accurate algorithm using finite-precision arithmetic. Compared with prior floating-point CCD algorithms, our approach eliminates all the false negatives and 90–95% of the false positives. We integrated our algorithm (TightCCD) with a physically-based simulation system and observe speedups in collision queries of 5–15X compared with prior reliable CCD algorithms. Furthermore, we demonstrate its benefits in terms of improving the performance and robustness of cloth simulation systems.
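The property BSC builds on is the convex-hull property of the Bernstein basis: if all Bernstein coefficients of a polynomial on [0, 1] share one strict sign, the polynomial has no root there, so the corresponding elementary collision test can be answered without root finding. A minimal sketch for a cubic; the paper's contribution, tight floating-point error bounds on such classifications, is not reproduced:

```python
def to_bernstein(a, b, c, d):
    """Bernstein coefficients on [0, 1] of p(t) = a*t^3 + b*t^2 + c*t + d."""
    return (d,
            d + c / 3.0,
            d + 2.0 * c / 3.0 + b / 3.0,
            a + b + c + d)

def may_have_root_in_unit_interval(a, b, c, d):
    """Conservative Boolean query via the convex-hull property: if every
    Bernstein coefficient has the same strict sign, p has no root in [0, 1]."""
    coeffs = to_bernstein(a, b, c, d)
    return not (all(x > 0.0 for x in coeffs) or all(x < 0.0 for x in coeffs))

# p(t) = t^3 - 3t^2 + 2t has roots at t = 0 and t = 1 (and t = 2).
print(may_have_root_in_unit_interval(1.0, -3.0, 2.0, 0.0))   # True
print(may_have_root_in_unit_interval(0.0, 0.0, 0.0, 1.0))    # False: p == 1
```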

18.
This paper presents a new approach to the transformation and normal vector calculation algorithms of parametrically defined surfaces via dual vectors and line transformations. The surface is defined via dual points; the transformation is performed by rotations and translations based on screw theory, while normal vector calculation is utilized for shading based on Phong's illumination model. The main benefit of this approach lies in the compactness of the surface's representation, since geometrical characteristics, such as tangent vectors, that are necessary for shading algorithms are included within its definition. An extensive comparison is performed between the proposed approach and the traditional homogeneous model, presenting the merits of our approach. Analytical and experimental determination of the computational cost via computer implementation of 3D surface transformation and shading is presented. Point-based methods for the representation, transformation and shading of parametrically defined surfaces are compared to the introduced line-based methods (dual quaternions and dual orthogonal matrices). It is shown that the simplified rendering procedure for 3D objects is considerably faster using screw theory than with the traditional point-based structures.
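A rigid motion encoded as a unit dual quaternion, the screw-theory representation the abstract compares against homogeneous matrices, can be sketched compactly. This is a minimal textbook construction, not the paper's implementation:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dq_from_screw(axis, angle, t):
    """Unit dual quaternion (qr, qd) for a rotation about an axis through
    the origin followed by translation t:  qd = 0.5 * t_quat * qr."""
    axis = np.asarray(axis) / np.linalg.norm(axis)
    qr = np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])
    qd = 0.5 * qmul(np.concatenate([[0.0], t]), qr)
    return qr, qd

def dq_transform(qr, qd, p):
    """Apply the motion to point p: rotate by qr, translate by
    2 * vec(qd * conj(qr))."""
    pq = np.concatenate([[0.0], p])
    rotated = qmul(qmul(qr, pq), qconj(qr))[1:]
    return rotated + 2.0 * qmul(qd, qconj(qr))[1:]

qr, qd = dq_from_screw([0, 0, 1], np.pi / 2.0, np.array([1.0, 0.0, 0.0]))
print(dq_transform(qr, qd, np.array([1.0, 0.0, 0.0])))   # ~ [0, 1, 0] + t = [1, 1, 0]
```

Eight numbers encode the full rigid motion, which is the compactness argument line-based representations make against 4x4 homogeneous matrices.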

19.
We present a new approach to microfacet-based BSDF importance sampling. Previously proposed sampling schemes for popular analytic BSDFs typically begin by choosing a microfacet normal at random, in a way that is independent of the direction of the incident light. Sampling the full BSDF with these normals requires arbitrarily large sample weights, leading to possible fireflies. Additionally, at grazing angles nearly half of the sampled normals face away from the incident ray and must be rejected, making the sampling scheme inefficient. Instead, we show how to use the distribution of visible normals directly to generate samples, where normals are weighted by their projection factor toward the incident direction. In this way, no backfacing normals are sampled and the sample weights contain only the shadowing factor of outgoing rays (and, additionally, a Fresnel term for conductors). Arbitrarily large sample weights are avoided and variance is reduced. Since the BSDF depends on the microsurface model, we describe our sampling algorithm for two models: the V-cavity and the Smith models. We demonstrate results for both isotropic and anisotropic rough conductors and dielectrics with Beckmann and GGX distributions.
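For the GGX distribution under the Smith model, sampling the visible normal distribution (VNDF) admits a compact routine. The sketch below follows the later, simplified hemisphere formulation commonly attributed to Heitz, not necessarily the exact slope-space algorithm of this paper; wi is the view direction in the local shading frame and ax, ay are the roughness parameters:

```python
import numpy as np

def sample_ggx_vndf(wi, ax, ay, u1, u2):
    """Sample a microfacet normal from the GGX distribution of visible
    normals; wi is the (normalized) view direction in the local frame."""
    # Stretch the view vector so the problem becomes isotropic unit-GGX.
    vh = np.array([ax * wi[0], ay * wi[1], wi[2]])
    vh /= np.linalg.norm(vh)
    # Build an orthonormal basis around vh.
    lensq = vh[0] ** 2 + vh[1] ** 2
    t1 = (np.array([-vh[1], vh[0], 0.0]) / np.sqrt(lensq)
          if lensq > 0.0 else np.array([1.0, 0.0, 0.0]))
    t2 = np.cross(vh, t1)
    # Sample a disk, warping one half proportionally to visibility.
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    p1 = r * np.cos(phi)
    p2 = r * np.sin(phi)
    s = 0.5 * (1.0 + vh[2])
    p2 = (1.0 - s) * np.sqrt(max(0.0, 1.0 - p1 * p1)) + s * p2
    # Project onto the hemisphere, then undo the stretch.
    nh = p1 * t1 + p2 * t2 + np.sqrt(max(0.0, 1.0 - p1 * p1 - p2 * p2)) * vh
    m = np.array([ax * nh[0], ay * nh[1], max(0.0, nh[2])])
    return m / np.linalg.norm(m)
```

By construction the sampled normal always faces the view direction, so no samples are rejected as backfacing.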

20.
Traversing the voxels along a three-dimensional (3D) line is one of the most fundamental algorithms for voxel-based applications. This paper presents a new 6-connectivity integer algorithm for this task. The proposed algorithm accepts voxels having different sizes in the x, y and z directions. To explain the idea of the proposed approach, a 2D algorithm is first considered and then extended to 3D. The algorithm is multi-step, as up to three voxels may be added in one iteration. It accepts both integer and floating-point input. The new algorithm was compared to other popular voxel traversal algorithms. Counting the number of arithmetic operations showed that the proposed algorithm requires the fewest operations per traversed voxel. A comparison of CPU time spent using either integer or floating-point arithmetic confirms that the proposed algorithm is the most efficient. The algorithm is simple and compact, which also makes it attractive for hardware implementation.
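For reference, the classic 6-connected traversal the paper benchmarks against (in the style of Amanatides and Woo) steps through one voxel face at a time using incremental crossing parameters; per-axis cell sizes are supported here, matching the paper's setting. A minimal sketch:

```python
import math

def traverse_voxels(p0, p1, cell=(1.0, 1.0, 1.0)):
    """6-connected grid traversal from p0 to p1 (Amanatides & Woo style)
    with per-axis cell sizes; returns the visited voxel coordinates."""
    d = [b - a for a, b in zip(p0, p1)]
    v = [int(math.floor(a / c)) for a, c in zip(p0, cell)]
    step = [1 if di > 0 else -1 for di in d]
    t_max, t_delta = [], []
    for a, di, vi, ci, si in zip(p0, d, v, cell, step):
        if di == 0:
            t_max.append(math.inf)
            t_delta.append(math.inf)
        else:
            nxt = (vi + (si > 0)) * ci          # next cell boundary on this axis
            t_max.append((nxt - a) / di)
            t_delta.append(abs(ci / di))
    voxels = [tuple(v)]
    while min(t_max) < 1.0:                     # parametric t in [0, 1]
        axis = t_max.index(min(t_max))
        v[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        voxels.append(tuple(v))
    return voxels

print(traverse_voxels((0.5, 0.5, 0.5), (3.5, 1.5, 0.5)))
# -> [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0), (3, 1, 0)]
```

Because exactly one axis advances per iteration, this variant emits one voxel per step; the paper's multi-step algorithm can add up to three voxels per iteration instead.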
