Similar Literature (20 results)
1.
A vast number of applications require distance field computation over triangular meshes. State‐of‐the‐art algorithms have quadratic or sub‐quadratic worst‐case complexity, making them impractical for interactive applications. While most research on this subject has focused on reducing the computational complexity of the algorithms, in this work we propose an approximate algorithm that achieves similar results by working on lower‐resolution versions of the input meshes. The creation of these lower‐resolution meshes is the essence of our proposal. The idea is to identify regions on the input mesh that can be unfolded into planar regions with minimal area distortion (i.e. quasi‐developable charts). Once the charts are computed, their interiors are re‐triangulated to reduce the number of triangles, which results in a collection of simplified charts that we call a base mesh. Due to the properties of quasi‐developable regions, we are able to compute distance fields over the base mesh instead of over the input mesh. This reduces the memory footprint and the amount of data processed for distance computations, which is the bottleneck of these algorithms. We present results that are one order of magnitude faster than current exact solutions, with low approximation errors.
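A minimal sketch of the reduced-mesh idea, assuming a simplified base mesh is already available as `vertices` and `faces` arrays: the distance field is approximated by shortest paths along the edge graph (Dijkstra via SciPy). The chart-based simplification and the exact propagation scheme of the paper are not reproduced here; the function names are illustrative.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def edge_graph(vertices, faces):
    """Sparse graph over the base mesh whose weights are edge lengths."""
    edges = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)          # dedupe undirected edges
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    n = len(vertices)
    return coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n)).tocsr()

def approx_distance_field(vertices, faces, source_ids):
    """Approximate geodesic distances from the source vertices (a list of
    indices) to every vertex of the simplified mesh, via Dijkstra on edges."""
    d = dijkstra(edge_graph(vertices, faces), directed=False, indices=source_ids)
    return d.min(axis=0)                                       # nearest source per vertex
```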

2.
Producing traditional animation is a laborious task in which the key drawings are first drawn by artists and the inbetween drawings are then created, whether by hand or computer‐assisted. Automatic inbetweening of these 2D key drawings is a non‐trivial task, as 3D depth information is missing. An alternative approach is to generate all the drawings by extracting lines directly from animated 3D models frame by frame, and then concatenating and rendering them into an animation. However, animations generated with this straightforward method suffer from two problems. Firstly, the animation contains unsatisfactory visual artifacts such as line flickering and popping. This is especially pronounced when the lines are extracted using high‐order derivatives, such as ridges and valleys, from 3D models represented as triangle meshes. Secondly, there is a lack of temporal continuity, as each drawing is generated without taking its neighboring drawings into consideration. In this paper, we propose an improved approach over the straightforward method by converting the extracted 3D line drawing of each frame into individual 3D lines and processing them along the time domain. Our objective is to minimize the visual artifacts and incorporate the temporal relationships of individual lines throughout the entire animation sequence. This is achieved by building a corresponding trajectory for each line across frames and applying a global optimization to each trajectory. To this end, we present a novel, fully automatic approach, which consists of (1) a line matching algorithm, (2) an optimization algorithm that takes into account both the varying number and lengths of 3D lines in each frame, and (3) a robust tracing method for converting collections of line segments extracted from the 3D models into individual lines. We evaluate our approach on several animated model sequences to demonstrate its effectiveness in producing line drawing animations with temporal coherence.
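A minimal sketch of the trajectory-processing idea, assuming each extracted line has already been matched across frames so that a representative point per frame forms its trajectory. The sketch smooths one trajectory with a global least-squares fit penalizing second differences over time; the paper's actual optimization, line matching and tracing steps are more involved.

```python
import numpy as np

def smooth_trajectory(positions, lam=10.0):
    """Globally smooth one line's trajectory over time.

    positions : (F, 3) array, the line's representative point per frame.
    lam       : smoothness weight; larger values damp flicker more.
    Solves  min_x ||x - positions||^2 + lam * ||D2 x||^2  in closed form.
    """
    f = len(positions)
    d2 = np.zeros((f - 2, f))
    for k in range(f - 2):
        d2[k, k:k + 3] = (1.0, -2.0, 1.0)      # discrete second derivative
    a = np.eye(f) + lam * d2.T @ d2            # data term + temporal smoothness
    return np.linalg.solve(a, positions)       # x, y, z smoothed independently
```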

3.
Lossy compression of motion capture data can alleviate the problems of efficient storage and transmission by exploiting the redundancy and the superfluous precision of the data. When considering the acceptable amount of distortion, perceptual issues have to be taken into account. Current state‐of‐the‐art methods reduce the data rate required for high quality storage of motion capture data using various techniques. Most of them, however, do not use the common tools of general data compression, such as the method of Lagrange multipliers, and thus they obtain sub‐optimal results, making it difficult to compare their performance fairly. In this paper, we present a general preprocessing step based on Lagrange multipliers, which allows the precision in each degree of freedom of the input data to be rigorously adjusted according to the amount of influence that degree of freedom has on the overall distortion. We then present a simple compression method based on Principal Component Analysis, which in combination with the proposed preprocessing achieves significantly better results than current state‐of‐the‐art methods. It allows optimization with respect to various distortion metrics, and we discuss the choice of the metric in two common but distinct scenarios, proposing a perceptually oriented comparison metric based on the relation of the problem at hand to the problem of compression of dynamic meshes.
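A minimal sketch of PCA-based compression of a motion capture clip, assuming the per-degree-of-freedom precision weights produced by the preprocessing are given as a `weights` vector. Scaling each column by its weight before PCA is a stand-in for the Lagrange-multiplier step described above, not the paper's exact formulation.

```python
import numpy as np

def compress(poses, weights, k=10):
    """poses: (frames, dofs) matrix; weights: (dofs,) importance of each DOF.
    Returns everything needed to reconstruct a k-component approximation."""
    x = poses * weights                             # emphasize influential DOFs
    mean = x.mean(axis=0)
    u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    coeffs = u[:, :k] * s[:k]                       # per-frame PCA coefficients
    return coeffs, vt[:k], mean

def decompress(coeffs, basis, mean, weights):
    return (coeffs @ basis + mean) / weights        # undo the DOF scaling
```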

4.
Recently, approaches have been put forward that focus on recognizing the semantic meaning of meshes. These methods usually need prior knowledge learned from a training dataset, but when the training dataset is small, or the meshes are too complex, segmentation performance is greatly affected. This paper introduces an approach to semantic mesh segmentation and labeling which incorporates knowledge imparted by both segmented, labeled meshes and unsegmented, unlabeled meshes. A Conditional Random Field (CRF) based objective function is proposed that measures the consistency between labels and faces and between the labels of neighbouring faces. To incorporate the information from the unlabeled meshes, we add an unlabeled conditional entropy term to the objective function. With this entropy term the objective function is no longer convex and is hard to optimize, so we modify Virtual Evidence Boosting (VEB) to solve the semi‐supervised problem efficiently. Our approach yields better results than methods which only use limited labeled meshes, especially when many unlabeled meshes exist. The approach reduces the overall system cost as well as the human labelling cost required during training. We also show that combining knowledge from labeled and unlabeled meshes outperforms using either type of mesh alone.
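A minimal sketch of the semi-supervised idea of adding an entropy term over unlabeled data to a labeled loss. It uses plain per-face classifier scores and omits the pairwise CRF terms and the modified Virtual Evidence Boosting optimizer; the function names and the weighting parameter `alpha` are illustrative.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def semi_supervised_objective(scores_l, labels, scores_u, alpha=0.5):
    """Labeled negative log-likelihood plus an entropy term on the
    unlabeled predictions (lower entropy = more confident labeling)."""
    p_l = softmax(scores_l)
    nll = -np.log(p_l[np.arange(len(labels)), labels] + 1e-12).mean()
    p_u = softmax(scores_u)
    entropy = -(p_u * np.log(p_u + 1e-12)).sum(axis=1).mean()
    return nll + alpha * entropy
```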

5.
6.
Molecular dynamics simulations are a principal tool for studying molecular systems. Such simulations are used to investigate molecular structure, dynamics, and thermodynamic properties, as well as serving as a replacement for, or complement to, costly and dangerous experiments. With the increasing availability of computational power, the resulting data sets are becoming ever larger, and benchmarks indicate that interactive visualization on desktop computers becomes challenging when substantially more than a few million glyphs have to be rendered. Trading visual quality for rendering performance is a common approach when interactivity has to be guaranteed. In this paper we address both problems and present a method for high‐quality visualization of massive molecular dynamics data sets. We employ several optimization strategies at different levels of granularity, such as data quantization, data caching in video memory, and a two‐level occlusion culling strategy: coarse culling via hardware occlusion queries and vertex‐level culling using maximum depth mipmaps. To ensure optimal image quality we employ GPU raycasting and deferred shading with smooth normal vector generation. We demonstrate that our method allows us to interactively render data sets containing tens of millions of high‐quality glyphs.
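A minimal sketch of one of the optimizations mentioned above, data quantization: particle positions are stored as 16-bit integers relative to their bounding box, shrinking the buffers that have to be cached in video memory. The exact bit depth and layout used by the paper may differ.

```python
import numpy as np

def quantize_positions(positions):
    """Quantize (N, 3) float positions to 16-bit integers inside their
    bounding box; the small integer buffer plus the box would be uploaded
    to the GPU and dequantized in the vertex shader."""
    lo = positions.min(axis=0)
    hi = positions.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)          # avoid division by zero
    q = np.round((positions - lo) / scale * 65535.0).astype(np.uint16)
    return q, lo, scale

def dequantize_positions(q, lo, scale):
    return q.astype(np.float32) / 65535.0 * scale + lo
```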

7.
Laplacian mesh compression, also known as high‐pass mesh coding, is a popular technique for efficiently storing both static and dynamic triangle meshes that gained further recognition with the advent of perceptual mesh distortion evaluation metrics. Currently, the usual rule of thumb that drives the choice of a mesh compression algorithm is whether or not accuracy at absolute scale is required: Laplacian mesh encoding is chosen when perceptual quality is the main objective, while other techniques provide better results in terms of mechanistic error measures such as mean squared error. In this work, we present a modification of the Laplacian mesh encoding algorithm that preserves its benefits while substantially reducing the resulting absolute error. Our approach is based on analyzing the reconstruction stage and modifying the quantization of the differential coordinates, so that the decoded result stays close to the input even in areas that are distant from anchor points. In our approach, we avoid solving an overdetermined system of linear equations and thus reduce data redundancy, improve conditioning and achieve faster processing. Our approach can be directly applied to both static and dynamic mesh compression, and we provide quantitative results comparing our approach with state‐of‐the‐art methods.
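For context, a minimal sketch of the conventional Laplacian (high-pass) decoder that the paper improves on: quantized differential coordinates plus a few anchor vertices are turned back into positions by a least-squares solve of an overdetermined system. The uniform Laplacian and SciPy's LSQR solver here are illustrative; the paper's contribution is precisely to avoid this overdetermined solve.

```python
import numpy as np
from scipy.sparse import lil_matrix, vstack
from scipy.sparse.linalg import lsqr

def uniform_laplacian(n, neighbors):
    """Uniform graph Laplacian built from a vertex adjacency list."""
    L = lil_matrix((n, n))
    for i, nbrs in enumerate(neighbors):
        L[i, i] = 1.0
        for j in nbrs:
            L[i, j] = -1.0 / len(nbrs)
    return L.tocsr()

def decode(delta_q, anchor_ids, anchor_pos, neighbors):
    """Reconstruct positions from quantized differential coordinates and
    anchor positions by solving the usual overdetermined system per axis."""
    n = len(delta_q)
    L = uniform_laplacian(n, neighbors)
    A = lil_matrix((len(anchor_ids), n))
    for row, v in enumerate(anchor_ids):
        A[row, v] = 1.0                       # one positional constraint per anchor
    M = vstack([L, A.tocsr()]).tocsr()
    x = np.zeros((n, 3))
    for c in range(3):
        rhs = np.concatenate([delta_q[:, c], anchor_pos[:, c]])
        x[:, c] = lsqr(M, rhs)[0]
    return x
```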

8.
In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time‐deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3‐D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in ℝ⁴ that approximates the time‐space surface of the time‐varying point cloud. We then use the four‐dimensional implicit function to reconstruct a polygonal model for the first time‐step. By sliding this template mesh along the time‐space surface in an as‐rigid‐as‐possible manner, we obtain reconstructions for further time‐steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real‐time 3‐D scanners.
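A minimal sketch of fitting a smooth implicit function to space-time samples, assuming 4D sample points with prescribed values (zero on the surface, small positive and negative values at points offset along the normals). A Gaussian radial basis function interpolant stands in for the paper's implicit; the as-rigid-as-possible template sliding is not shown, and the kernel width `eps` is illustrative.

```python
import numpy as np

def fit_rbf_implicit(samples, values, eps=1.0):
    """samples: (N, 4) space-time points; values: (N,) target implicit values.
    Returns a callable f(x) for a single 4D query point x."""
    d2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps ** 2)                         # Gaussian kernel matrix
    w = np.linalg.solve(K + 1e-8 * np.eye(len(samples)), values)

    def f(x):
        return np.exp(-((samples - x) ** 2).sum(-1) / eps ** 2) @ w
    return f
```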

9.
Most state‐of‐the‐art compression algorithms use complex connectivity traversal and prediction schemes, which are not efficient enough for online compression of large meshes. In this paper we propose a scalable, massively parallel approach for compression and decompression of large triangle meshes using the GPU. Our method traverses the input mesh in a parallel breadth‐first manner and encodes the connectivity data similarly to the well‐known cut‐border machine. Geometry data is compressed using a local prediction strategy. In contrast to the original cut‐border machine, we can additionally handle triangle meshes with inconsistently oriented faces. Our approach is more than one order of magnitude faster than currently used methods and achieves competitive compression rates.
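A minimal, sequential sketch of a breadth-first traversal over face adjacency, which is the order in which a connectivity coder of this kind visits triangles; the GPU parallelization, the cut-border symbols and the geometry prediction are omitted.

```python
from collections import deque

def bfs_face_order(faces):
    """Breadth-first traversal over faces sharing an edge, for a connected
    mesh given as a list of vertex-index triples."""
    edge_to_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_faces.setdefault(frozenset(e), []).append(f)
    order, seen, queue = [], {0}, deque([0])
    while queue:
        f = queue.popleft()
        order.append(f)
        a, b, c = faces[f]
        for e in ((a, b), (b, c), (c, a)):
            for g in edge_to_faces[frozenset(e)]:
                if g not in seen:
                    seen.add(g)
                    queue.append(g)
    return order
```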

10.
Development of geometry data compression techniques in recent years has been limited by the lack of a metric with proven correlation with human perception of mesh distortion. Many algorithms have been proposed, but usually the aim has been to minimise mean squared error, or some of its derivatives. In the field of dynamic mesh compression, the situation has changed with the recent proposal of the STED metric, which has been shown to capture the human perception of mesh distortion much better than previous metrics. In this paper we show how existing algorithms can be steered to provide optimal results with respect to this metric, and we propose a novel dynamic mesh compression algorithm, based on trajectory space PCA and Laplacian coordinates, specifically designed to minimise the newly proposed STED error. Our experiments show that using the proposed algorithm, we were able to reduce the required data rate by up to 50% while keeping the introduced STED error unchanged.
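A minimal sketch of the trajectory-space PCA component, assuming the animation is given as an array of per-frame vertex positions: each vertex becomes one sample whose features are its concatenated positions over all frames. The Laplacian-coordinate step and the STED-driven rate allocation are not shown.

```python
import numpy as np

def trajectory_pca(anim, k=20):
    """anim: (F, V, 3) animation. Returns per-vertex coefficients, the
    trajectory basis and the mean trajectory for a k-component model."""
    f, v, _ = anim.shape
    traj = anim.transpose(1, 0, 2).reshape(v, 3 * f)   # one row per vertex
    mean = traj.mean(axis=0)
    u, s, vt = np.linalg.svd(traj - mean, full_matrices=False)
    return u[:, :k] * s[:k], vt[:k], mean

def reconstruct(coeffs, basis, mean, f):
    traj = coeffs @ basis + mean
    return traj.reshape(-1, f, 3).transpose(1, 0, 2)   # back to (F, V, 3)
```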

11.
We present an adaptive slicing scheme for reducing the manufacturing time for 3D printing systems. Based on a new saliency‐based metric, our method optimizes the thicknesses of slicing layers to save printing time and preserve the visual quality of the printing results. We formulate the problem as a constrained ℓ0 optimization and compute the slicing result via a two‐step optimization scheme. To further reduce printing time, we develop a saliency‐based segmentation scheme to partition an object into subparts and then optimize the slicing of each subpart separately. We validate our method with a large set of 3D shapes ranging from CAD models to scanned objects. Results show that our method saves printing time by 30–40% and generates 3D objects that are visually similar to the ones printed with the finest resolution possible.
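A minimal sketch of saliency-driven adaptive slicing, assuming a callable that returns a saliency value in [0, 1] for a given height. This greedy layer-by-layer rule only conveys the intuition of thinner layers in salient regions; the paper instead solves a constrained ℓ0 optimization, and the thickness bounds here are illustrative.

```python
import numpy as np

def adaptive_slices(saliency_z, z_min, z_max, t_min=0.1, t_max=0.4):
    """Return a list of (layer start height, layer thickness) pairs."""
    layers, z = [], z_min
    while z < z_max:
        s = float(np.clip(saliency_z(z), 0.0, 1.0))
        t = t_max - s * (t_max - t_min)       # high saliency -> thin layer
        t = min(t, z_max - z)                 # do not overshoot the top
        layers.append((z, t))
        z += t
    return layers
```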

12.
Large 3D asset databases are critical for designing virtual worlds, and using them effectively requires techniques for efficient querying and navigation. One important form of query is search by style compatibility: given a query object, find others that would be visually compatible if used in the same scene. In this paper, we present a scalable, learning‐based approach for solving this problem which is designed for use with real‐world 3D asset databases; we conduct experiments on 121 3D asset packages containing around 4000 3D objects from the Unity Asset Store. By leveraging the structure of the object packages, we introduce a technique to synthesize training labels for metric learning that work as well as human labels. These labels can grow exponentially with the number of objects, allowing our approach to scale to large real‐world 3D asset databases without the need for expensive human training labels. We use these synthetic training labels in a metric learning model that analyzes the in‐engine rendered appearance of an object—combining geometry, material, and texture—whereas prior work considers only object geometry, or disjoint geometry and texture features. Through an ablation experiment, we find that using this representation yields better results than using renders which lack texture, materiality, or both.
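A minimal sketch of the label-synthesis step, assuming only a mapping from object id to the asset package it ships in: objects from the same package are treated as style-compatible positives and objects from different packages as negatives. The triplet format and sample count are illustrative; the learned embedding itself is not shown.

```python
import random

def synth_triplets(package_of, n=10000, seed=0):
    """package_of: dict mapping object id -> package id (at least two
    packages, one of which has two or more objects). Returns n triplets
    (anchor, positive, negative) for metric learning."""
    rng = random.Random(seed)
    by_pkg = {}
    for obj, pkg in package_of.items():
        by_pkg.setdefault(pkg, []).append(obj)
    pos_pkgs = [p for p, objs in by_pkg.items() if len(objs) >= 2]
    triplets = []
    while len(triplets) < n:
        p = rng.choice(pos_pkgs)
        anchor, positive = rng.sample(by_pkg[p], 2)     # same package: compatible
        q = rng.choice([x for x in by_pkg if x != p])   # other package: incompatible
        triplets.append((anchor, positive, rng.choice(by_pkg[q])))
    return triplets
```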

13.
This paper addresses the problem of representing dynamic 3D meshes in a compact way, so that they can be stored and transmitted efficiently. We focus on sequences of triangle meshes with shared connectivity, avoiding the necessity of having a skinning structure. Our method first computes an average mesh of the whole sequence in edge shape space. A discrete geometric Laplacian of this average surface is then used to encode the coefficients that describe the trajectories of the mesh vertices. Optionally, a novel spatio‐temporal predictor may be applied to the trajectories to further improve the compression rate. We demonstrate that our approach outperforms the current state of the art in terms of low data rate at a given perceived distortion, as measured by the STED and KG error metrics.
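A minimal, encoder-side sketch of a simple spatio-temporal predictor of the kind mentioned as the optional final step: each vertex at frame t is predicted from its own previous position plus the average motion of its 1-ring neighbours, and only the residual would be encoded. A real codec would use already-decoded values for the prediction; this sketch uses the originals for brevity, and it is not the paper's specific predictor.

```python
import numpy as np

def st_residuals(anim, neighbors):
    """anim: (F, V, 3) animation; neighbors: adjacency list with at least
    one neighbour per vertex. Returns prediction residuals per frame."""
    f, v, _ = anim.shape
    res = np.empty_like(anim)
    res[0] = anim[0]                           # first frame stored as-is
    for t in range(1, f):
        motion = anim[t] - anim[t - 1]
        for i in range(v):
            pred = anim[t - 1, i] + motion[neighbors[i]].mean(axis=0)
            res[t, i] = anim[t, i] - pred
    return res
```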

14.
Variable bit rate compression can achieve better quality and compression rates than fixed bit rate methods. Nonetheless, GPU texturing uses lossy fixed bit rate methods like DXT to allow random access and on‐the‐fly decompression during rendering. Changes in games and GPUs since DXT was developed make its compression artifacts less acceptable, and texture bandwidth less of an issue, but texture size is a serious and growing problem. Games use a large total volume of texture data, but have a much smaller active set. We present a new paradigm that separates GPU decompression from rendering. Rendering is done from uncompressed data, avoiding the need for random access decompression. We demonstrate this paradigm with a new variable bit rate lossy texture compression algorithm that is well suited to the GPU, including a new GPU‐friendly formulation of range decoding, and a new texture compression scheme averaging a 12.4:1 lossy compression ratio on 471 real game textures with a quality level similar to traditional DXT compression. The total game texture set is stored on the GPU in compressed form, and decompressed for use in a fraction of a second per scene.
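A minimal sketch of the decompress-then-render paradigm, with zlib standing in for the paper's GPU range coder and plain byte buffers standing in for GPU textures: the full set stays compressed, and only the scene's active set is decompressed, once, before rendering. The class and method names are illustrative.

```python
import zlib

class TextureCache:
    """Hold all textures compressed; expand only the active set per scene."""
    def __init__(self, compressed):
        self.compressed = compressed          # name -> compressed bytes
        self.active = {}                      # name -> raw texel bytes

    def prepare_scene(self, names):
        """Decompress the scene's working set once, before rendering."""
        self.active = {n: zlib.decompress(self.compressed[n]) for n in names}

    def texels(self, name):
        return self.active[name]              # rendering reads raw data only
```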

15.
Lighting design plays a crucial role in indoor illumination, computer cinematography and many other applications. Computer‐assisted lighting design aims to find a lighting configuration that best approximates the illumination effect specified by designers. In this paper, we present an automatic approach to lighting design in which both discrete and continuous optimization of the lighting configuration, including the number, intensity and position of lights, is achieved. Our lighting design algorithm consists of two major steps. The first step estimates an initial lighting configuration by light sampling and clustering. The initial light clusters are then recursively merged to form a light hierarchy. The second step optimizes the lighting configuration by alternately selecting a light cut on the light hierarchy to determine the number of representative lights and optimizing the lighting parameters using the simplex method. To speed up the optimization, only the illumination at scene vertices that are important to the rendering is sampled and taken into account. Using the proposed approach, we develop a lighting design system that can compute appropriate lighting configurations to reproduce the illumination effects interactively painted and modified by a designer.
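A minimal sketch of the continuous part of the optimization, assuming the per-light contribution to each sampled scene vertex has been precomputed into a matrix: light intensities are fitted to the designer's target illumination with the Nelder-Mead simplex method from SciPy. Light-cut selection and position optimization are not shown, and the clamping of intensities is an illustrative simplification.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_intensities(contrib, target, x0=None):
    """contrib[i, j]: illumination that light j at unit intensity contributes
    to sample vertex i; target[i]: desired illumination at vertex i."""
    n_lights = contrib.shape[1]
    if x0 is None:
        x0 = np.ones(n_lights)

    def cost(x):
        x = np.maximum(x, 0.0)                # keep intensities non-negative
        return np.sum((contrib @ x - target) ** 2)

    result = minimize(cost, x0, method="Nelder-Mead")
    return np.maximum(result.x, 0.0)
```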

16.
When simulating fluids, tetrahedral methods provide flexibility and ease of adaptivity that Cartesian grids find difficult to match. However, this approach has so far been limited by two conflicting requirements. First, accurate simulation requires quality Delaunay meshes and the use of circumcentric pressures. Second, meshes must align with potentially complex moving surfaces and boundaries, necessitating continuous remeshing. Unfortunately, sacrificing mesh quality in favour of speed yields inaccurate velocities and simulation artifacts. We describe how to eliminate the boundary‐matching constraint by adapting recent embedded boundary techniques to tetrahedra, so that neither air nor solid boundaries need to align with mesh geometry. This enables the use of high quality, arbitrarily graded, non‐conforming Delaunay meshes, which are simpler and faster to generate. Temporal coherence can also be exploited by reusing meshes over adjacent timesteps to further reduce meshing costs. Lastly, our free surface boundary condition eliminates the spurious currents that previous methods exhibited for slow or static scenarios. We provide several examples demonstrating that our efficient tetrahedral embedded boundary method can substantially increase the flexibility and accuracy of adaptive Eulerian fluid simulation.

17.
We present a new technique which can handle both point and sliding constraints in the multigrid (MG) framework. Although the MG method can theoretically perform as fast as O(N), the development of a clothing simulator based on the MG method calls for solving an important technical challenge: handling the constraints. Resolving constraints has been difficult in MG because there has been no clear way to transfer the constraints existing in the finest level mesh to the coarser level meshes. This paper presents a new formulation based on soft constraints, which can coarsen the constraints defined at the finest level to the coarser levels. Experiments show that the proposed method can solve the linear system up to 4–9 times faster in comparison with the modified preconditioned conjugate gradient method (MPCG) without quality degradation. The proposed method is easy to implement and can be straightforwardly applied to existing clothing simulators which are based on implicit time integration.
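A minimal sketch of the soft-constraint idea and of carrying a constraint matrix to a coarser level through the interpolation operator; the dense NumPy matrices, penalty weight `k` and helper names are illustrative stand-ins, not the paper's formulation.

```python
import numpy as np

def constrained_solve(A, b, C, d, k=1e4):
    """Soft constraints: add a stiff penalty k * ||C x - d||^2 to the
    quadratic energy, so constraints become extra terms in the linear
    system instead of hard equations."""
    return np.linalg.solve(A + k * C.T @ C, b + k * C.T @ d)

def coarsen_constraints(C, d, P):
    """Transfer constraint rows to a coarser level using the interpolation
    matrix P (coarse -> fine): a fine-level constraint row c becomes c @ P."""
    return C @ P, d
```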

18.
This paper presents a digital storytelling approach that automatically generates animations for time‐varying data visualization. Our approach simulates the composition and transition of storytelling techniques and synthesizes animations to describe various event features. Specifically, we analyze information related to a given event and abstract it as an event graph, which represents data features as nodes and event relationships as links. This graph embeds a tree‐like hierarchical structure which encodes data features at different scales. Next, narrative structures are built by exploring starting nodes and suitable search strategies in this graph. The different stages of the narrative structures are considered in our automatic rendering parameter decision process to generate animations as digital stories. We integrate this animation generation approach into an interactive exploration process for time‐varying data, so that more comprehensive information can be provided in a timely fashion. We demonstrate with a storm surge application that our approach allows semantic visualization of time‐varying data and easy animation generation for users without special knowledge of the underlying visualization techniques.
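A minimal sketch of building a narrative sequence by searching an event graph from a chosen starting node; the graph is a plain adjacency dictionary, and the depth-first strategy and length cap are illustrative stand-ins for the paper's search strategies.

```python
def narrative_path(event_graph, start, max_len=8):
    """event_graph: dict mapping a node to the list of related nodes.
    Returns the ordered sequence of events the animation will narrate."""
    path, stack, seen = [], [start], set()
    while stack and len(path) < max_len:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        path.append(node)
        stack.extend(reversed(event_graph.get(node, [])))   # keep left-to-right order
    return path
```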

19.
Bidirectional Texture Functions (BTFs) are among the highest quality material representations available today and are thus well suited whenever an exact reproduction of the appearance of a material or complete object is required. In recent years, BTFs have started to find application in various industrial settings and there is also a growing interest in the cultural heritage domain. BTFs are usually measured from real‐world samples and easily consist of tens or hundreds of gigabytes. By using data‐driven compression schemes, such as matrix or tensor factorization, a more compact but still faithful representation can be derived. This way, BTFs can be employed for real‐time rendering of photo‐realistic materials on the GPU. However, scenes containing multiple BTFs or even single objects with high‐resolution BTFs easily exceed the available GPU memory on today's consumer graphics cards unless quality is drastically reduced by the compression. In this paper, we propose the Bidirectional Sparse Virtual Texture Function, a hierarchical level‐of‐detail approach for the real‐time rendering of large BTFs that requires only a small amount of GPU memory. More importantly, for larger numbers or higher resolutions, the GPU and CPU memory demand grows only marginally and the GPU workload remains constant. For this, we extend the concept of sparse virtual textures by choosing an appropriate prioritization, finding a trade‐off between factorization components and spatial resolution. Besides GPU memory, the high demand on bandwidth poses a serious limitation for the deployment of conventional BTFs. We show that our proposed representation can be combined with an additional transmission compression and then be employed for streaming the BTF data to the GPU from local storage media or over the Internet. In combination with the introduced prioritization, this allows for fast visualization of relevant content in the user's field of view and subsequent progressive refinement.
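A minimal sketch of the matrix-factorization compression that such level-of-detail schemes build on, assuming a small BTF arranged as a texels x view-light-directions matrix: a truncated SVD yields a spatial and an angular factor, and keeping several truncation ranks gives coarser and finer detail levels. Real BTFs are far too large for a dense SVD; the rank choices and function names are illustrative.

```python
import numpy as np

def factorize_btf(btf, ranks=(64, 16, 4)):
    """btf: (texels, directions) matrix. Returns one (spatial, angular)
    factor pair per level of detail, from finest to coarsest rank."""
    u, s, vt = np.linalg.svd(btf, full_matrices=False)
    return [(u[:, :k] * s[:k], vt[:k]) for k in ranks]

def evaluate(level, texel, direction):
    """Reconstruct one BTF sample from a factorized level of detail."""
    spatial, angular = level
    return spatial[texel] @ angular[:, direction]
```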

20.