Similar Documents
 20 similar documents retrieved.
1.
The signed distance field of a polygonal model is a useful representation that facilitates efficient computation in many visualization and geometric processing tasks. It is often more effective to build a local distance field only within a narrow band around the surface, which holds the local geometric information of the model. In this paper, we present a novel technique to construct a volumetric local signed distance field of a polygonal model. To compute the local field efficiently, the cells that cross the polygonal surface are first found through a new voxelization method, which builds a list of intersecting triangles for each boundary cell. After the neighboring cells are classified, the triangle lists are exploited to compute the local signed distance field with a minimal number of voxel‐to‐triangle distance computations. While several efficient methods for computing the distance field, particularly those harnessing the processing power of the graphics processing unit (GPU), have recently been proposed, we focus on a CPU‐based technique intended to deal flexibly with large polygonal models and high‐resolution grids that are often too bulky for GPU computation.
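
As an illustration of the boundary-cell idea described above, the sketch below assigns each triangle to every grid cell whose axis-aligned box overlaps the triangle's bounding box, producing per-cell triangle lists. This is a hedged, conservative approximation (an overlap test based only on bounding boxes), not the paper's exact voxelization; the function names and data layout are assumptions.

```python
# A minimal sketch of building per-cell triangle lists on a voxel grid.
# Cells whose boxes overlap a triangle's bounding box are treated as candidate
# boundary cells; a precise triangle-box test could refine this further.
import numpy as np
from collections import defaultdict

def build_triangle_lists(vertices, faces, origin, cell_size, dims):
    """vertices: (n,3) float, faces: (m,3) int, origin: (3,), dims: (nx,ny,nz)."""
    cell_tris = defaultdict(list)
    for t, f in enumerate(faces):
        tri = vertices[f]                                        # (3,3) triangle corners
        lo = np.floor((tri.min(axis=0) - origin) / cell_size).astype(int)
        hi = np.floor((tri.max(axis=0) - origin) / cell_size).astype(int)
        lo = np.clip(lo, 0, np.array(dims) - 1)
        hi = np.clip(hi, 0, np.array(dims) - 1)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    cell_tris[(i, j, k)].append(t)               # candidate boundary cell
    return cell_tris
```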

2.
Two‐parameter families of straight lines (line congruences) are implicitly present in graphics and geometry processing in several important ways, including lighting and shape analysis. In this paper we make them accessible to optimization and geometric computing by introducing a general discrete version of congruences based on piecewise‐linear correspondences between triangle meshes. Our applications of congruences are based on the extraction of a so‐called torsion‐free support structure, a procedure analogous to remeshing a surface along its principal curvature lines. One particular application of such structures is freeform shading and lighting systems for architecture. We combine interactive design of such systems with global optimization in order to satisfy geometric constraints. In this way we explore a new area where architecture can greatly benefit from graphics.

3.
In this report we review and structure the branch of molecular visualization that is concerned with the visual analysis of cavities in macromolecular protein structures. First, the necessary background, the domain terminology, and the goals of analytical reasoning are introduced. Based on a comprehensive collection of relevant research works, we present a novel classification of cavity detection approaches and structure them into four distinct classes: grid‐based, Voronoi‐based, surface‐based, and probe‐based methods; subclasses are formed by their combinations. We match these approaches with corresponding visualization technologies, starting with direct 3D visualization and followed by non‐spatial visualization techniques that, for example, abstract the interactions between structures into a relational graph, straighten the cavity of interest to show its profile in a single view, or aggregate the time sequence into a single contour plot. We also discuss the current state of methods for the visual analysis of cavities in dynamic data such as molecular dynamics simulations. Finally, we give an overview of the most common tools that are actively developed and used in structural biology and biochemistry research. The report concludes with an outlook on future challenges in the field.

4.
Light fields were introduced a decade ago as a new high‐dimensional graphics rendering model. However, they have not been widely used because their applications are very specific and their storage requirements are too high. Recently, spatial imaging devices have been related to light fields. These devices allow several users to see three‐dimensional (3D) images without using glasses or other intrusive elements. This paper presents a light‐field model that can be rendered on an autostereoscopic spatial device. The model is viewpoint‐independent and supports continuous multiresolution, foveal rendering, and the integration of multiple light fields and geometric models in the same scene. We also show that a scene composed of several light fields and geometric models can be examined interactively, with visibility handled by the algorithm. Our goal is to apply our models to 3D TV and spatial imaging.

5.
This paper presents a survey of ocean simulation and rendering methods in computer graphics. To model and animate the ocean's surface, these methods rely on two main approaches. On the one hand, some approximate ocean dynamics with parametric, spectral or hybrid models and use empirical laws from oceanographic research; we will see that this type of method essentially allows the simulation of ocean scenes in the deep‐water domain, without breaking waves. On the other hand, physically‐based methods use the Navier–Stokes equations to represent breaking waves and, more generally, the ocean surface near the shore. We also describe ocean rendering methods in computer graphics, with particular attention to the simulation of phenomena such as foam and spray, and to the interaction of light with the ocean surface.

6.
We propose a method for mapping polynomial volumes. Given a closed surface and an initial template volume grid, our method deforms the template grid by fitting its boundary to the input surface while minimizing a volume distortion criterion. The result is a point‐to‐point map that distorts linear cells into curved ones. Our method is based on several extensions of Voronoi Squared Distance Minimization (VSDM), combined with a higher‐order finite element formulation of the deformation energy. This allows us to globally optimize the mapping without prior parameterization. The anisotropic VSDM formulation allows sharp and semi‐sharp features to be preserved implicitly, without tagging. We use a hierarchical finite element function basis that selectively adapts to geometric detail, which makes the method more efficient and the representation more compact. We apply our method to geometric modeling applications in computer‐aided design and computer graphics, including mixed‐element meshing, mesh optimization, subdivision volume fitting, and shell meshing.

7.
Photo‐realistic rendering of inhomogeneous participating media with light scattering taken into account is important in computer graphics, and is typically computed using Monte Carlo based methods. The key technique in such methods is free path sampling, which is used to determine the distance (free path) between successive scattering events. Recently, it has been shown that efficient and unbiased free path sampling methods can be constructed based on Woodcock tracking. The key to improving efficiency is to utilize space partitioning (e.g., a kd‐tree or a uniform grid), and a better space partitioning scheme is important for better sampling efficiency. Thus, an estimation framework for investigating the gain in sampling efficiency is needed to determine how to partition the space; however, no such estimation framework currently works in 3D space. In this paper, we propose a new estimation framework to overcome this problem. Using our framework, we can analytically estimate the sampling efficiency for any typical partitioned space. Conversely, we can also use the framework to determine the optimal space partitioning. As an application, we show that new space partitioning schemes can be constructed using our estimation framework. Moreover, we show that the performance differences between schemes can be predicted fairly well with it.
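
For context, the sketch below shows Woodcock (delta) tracking, the free-path sampling scheme the abstract builds on: tentative collisions are sampled from a homogenized majorant medium and accepted or rejected against the true extinction. The spatially varying extinction `sigma_t` and its majorant `sigma_maj` are illustrative assumptions; the paper's space-partitioning and efficiency-estimation machinery is not shown.

```python
# A minimal sketch of Woodcock (delta) tracking for free-path sampling in a
# heterogeneous medium. Requires sigma_t(x) <= sigma_maj everywhere.
import math
import random

def sample_free_path(origin, direction, sigma_t, sigma_maj, t_max):
    """Return the sampled scattering distance, or None if the ray leaves the medium."""
    t = 0.0
    while True:
        # Tentative step drawn from the homogenized (majorant) medium.
        t -= math.log(1.0 - random.random()) / sigma_maj
        if t >= t_max:
            return None                      # no interaction before leaving the medium
        x = [o + t * d for o, d in zip(origin, direction)]
        # Accept the tentative collision with probability sigma_t(x) / sigma_maj;
        # otherwise it is a null (fictitious) collision and tracking continues.
        if random.random() < sigma_t(x) / sigma_maj:
            return t
```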

8.
Digital fabrication devices are powerful tools for creating tangible reproductions of 3D digital models. Most available printing technologies aim at producing an accurate copy of a three‐dimensional shape. However, fabrication technologies can also be used to create a stylistic representation of a digital shape; we refer to this class of methods as ‘stylized fabrication methods’. These methods abstract geometric and physical features of a given shape to create an unconventional representation, to produce an optical illusion, or to devise a particular interaction with the fabricated model. In this state‐of‐the‐art report, we classify and overview this broad and emerging class of approaches and propose possible directions for future research.

9.
In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time‐deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3‐D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in ℝ⁴ that approximates the time‐space surface of the time‐varying point cloud. We then use this four‐dimensional implicit function to reconstruct a polygonal model for the first time‐step. By sliding this template mesh along the time‐space surface in an as‐rigid‐as‐possible manner, we obtain reconstructions for further time‐steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real‐time 3‐D scanners.

10.
This paper surveys mesh segmentation techniques and algorithms, with a focus on part‐based segmentation, that is, segmentation that divides a mesh (representing a 3D object) into meaningful parts. Part‐based segmentation applies to a single object and also to a family of objects (i.e. co‐segmentation). We do not address chart‐based segmentation here, although some mesh co‐segmentation methods employ it in the initial step of their pipeline. The taxonomy proposed in this paper is new in that it classifies each segmentation algorithm according to the dimension (1D, 2D or 3D) of the representation of object parts. The leading idea behind this survey is to identify the properties and limitations of state‐of‐the‐art algorithms in order to shed light on the challenges for future work.

11.
Sharp edges are important shape features, and their extraction has been extensively studied on both point clouds and surfaces. We consider the problem of extracting sharp edges from a sparse set of colour‐and‐depth (RGB‐D) images. The noise‐ridden depth measurements are challenging for existing feature extraction methods that work solely in the geometric domain (e.g. points or meshes). By utilizing both colour and depth information, we propose a novel feature extraction method that produces much cleaner and more coherent feature lines. We make two technical contributions. First, we show that intensity edges can augment the depth map to improve normal estimation and feature localization from a single RGB‐D image. Second, we design a novel algorithm for consolidating feature points obtained from multiple RGB‐D images. By utilizing the normals and ridge/valley types associated with the feature points, our algorithm is effective in suppressing noise without smearing nearby features.
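
The sketch below only illustrates the general intuition that intensity edges can support noisy depth data: it keeps edge candidates where an intensity gradient and a depth discontinuity coincide. It is a loose illustration under assumed thresholds, not the paper's normal-estimation or consolidation algorithm.

```python
# A minimal sketch: combine intensity edges and depth discontinuities from a
# single RGB-D frame into a set of edge candidates.
import numpy as np

def rgbd_edge_candidates(intensity, depth, int_thresh=0.1, depth_thresh=0.02):
    """intensity, depth: (h,w) float arrays; returns a boolean candidate mask."""
    gy, gx = np.gradient(intensity)
    int_edges = np.hypot(gx, gy) > int_thresh          # intensity (colour) edges
    dy, dx = np.gradient(depth)
    depth_edges = np.hypot(dx, dy) > depth_thresh      # depth discontinuities
    return int_edges & depth_edges                     # keep edges supported by both cues
```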

12.
In this paper, a novel concept, Affective Modelling, is introduced to encapsulate the idea of creating 3D models based on the emotional responses that they may evoke. Research on perceptually related issues in Computer Graphics focuses mostly on rendering. Low‐level perceptual criteria taken from established psychology theories, or identified by purposefully designed experiments, are utilised to reduce rendering effort or to derive quality evaluation schemes. For modelling, similar ideas have been applied to optimise the level of geometric detail. High‐level cognitive responses such as emotions and feelings are less often addressed in the graphics literature. This paper investigates the possibility of incorporating emotional/affective factors into 3D model creation. Using a glasses frame model as our test case, we demonstrate a methodological framework for building links between human emotional responses and geometric features. We design and carry out a factorial experiment to systematically analyse how certain shape factors, individually and interactively, influence the viewer's impression of the shape of glasses frames. The findings serve as a basis for establishing computational models that facilitate emotionally guided 3D modelling.

13.
The efficient evaluation of visibility in a three‐dimensional scene is a longstanding problem in computer graphics. Visibility evaluations come in many different forms: figuring out which object is visible in a pixel, determining whether a point is visible to a light source, or evaluating the mutual visibility between two surface points. This paper provides a new, experimental view on visibility, based on a probabilistic evaluation of the visibility function. Instead of checking visibility against all possible intervening geometry, the visibility between two points is evaluated by testing only a random subset of objects. The result is not a Boolean value that is either 0 or 1, but a numerical value that can even be negative. Because we use the visibility evaluation as part of the integrand in illumination computations, the probabilistic evaluation of visibility becomes part of the Monte Carlo procedure for estimating the illumination integral, and results in an unbiased computation of illumination values in the scene. Moreover, the number of intersection tests for any given ray decreases, since only a random selection of geometric primitives is tested. Although probabilistic visibility is a new and experimental idea, we present a practical algorithm for direct illumination that uses the probabilistic nature of visibility evaluations.
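
A minimal sketch of a probabilistic visibility query in this spirit is shown below. It is not the paper's exact estimator: it tests a random subset of m out of n candidate occluders and returns 1 - (n/m) * (blockers found), which is unbiased for 1 - B (B being the total number of blockers along the ray), coincides with true visibility whenever at most one object blocks the ray, and can indeed become negative.

```python
# A minimal sketch of a randomized visibility estimate that tests only a
# random subset of the candidate occluders.
import random

def probabilistic_visibility(ray, occluders, intersects, m):
    """ray: any ray description; occluders: list of candidate primitives;
    intersects(ray, prim) -> bool; m: how many primitives are actually tested."""
    n = len(occluders)
    if n == 0:
        return 1.0                                   # nothing can block the ray
    subset = random.sample(occluders, min(m, n))     # random subset of the geometry
    hits = sum(1 for prim in subset if intersects(ray, prim))
    return 1.0 - (n / len(subset)) * hits            # may be negative with several blockers
```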

14.
In this paper, a new method for deformable 3D shape registration is proposed. The algorithm computes shape transitions based on local similarity transforms, which makes it possible to model not only as‐rigid‐as‐possible deformations but also local and global scale. We formulate an ordinary differential equation (ODE) which describes the transition of a source shape towards a target shape. We assume that both shapes are roughly pre‐aligned (e.g., frames of a motion sequence). The ODE consists of two terms. The first causes the deformation by pulling the source shape points towards corresponding points on the target shape. Initial correspondences are estimated by closest‐point search and then refined by an efficient smoothing scheme. The second term regularizes the deformation by drawing the points towards locally defined rest positions. These are given by the optimal similarity transform which matches the initial (undeformed) neighborhood of a source point to its current (deformed) neighborhood. The proposed ODE allows for very efficient explicit numerical integration, which avoids the repeated solution of large linear systems that is usually required when the registration problem is solved within general‐purpose non‐linear optimization frameworks. We experimentally validate the proposed method on a variety of real data and compare it with several state‐of‐the‐art approaches.
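
A minimal sketch of one explicit integration step of such a deformation ODE is given below: each source point is pulled towards its current closest point on the target and towards a locally defined rest position. The rest positions, weights, and step size are placeholders (assumptions), not the paper's similarity-transform construction.

```python
# A minimal numpy/scipy sketch of one explicit Euler step of a two-term
# deformation ODE: data term (closest target point) plus regularizer (rest position).
import numpy as np
from scipy.spatial import cKDTree

def integrate_step(src, target, rest, dt=0.1, alpha=1.0, beta=1.0):
    """src: (n,3) source points, target: (m,3) target points, rest: (n,3) rest positions."""
    tree = cKDTree(target)
    _, idx = tree.query(src)                 # closest-point correspondences
    pull_data = target[idx] - src            # data term: move towards the target
    pull_reg = rest - src                    # regularizer: move towards rest positions
    return src + dt * (alpha * pull_data + beta * pull_reg)
```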

15.
We developed an interactive system to design a customized cover for a given three‐dimensional (3D) object such as a camera, teapot, or car. The system first computes the convex hull of the input geometry. The user segments it into several cloth patches by drawing on the 3D surface. This paper provides two technical contributions. First, it introduces a specialized flattening algorithm for cover patches. It makes each two‐dimensional edge in the flattened pattern equal to or longer than the original 3D edge; a smaller patch would fail to cover the object, and a larger patch would result in extra wrinkles. Second, it introduces a mechanism to verify that the user‐specified opening would be large enough for the object to be removed. Starting with the initial configuration, the system virtually “pulls” the object out of the cover while avoiding excessive stretching of cloth patches. We used the system to design real covers and confirmed that it functions as intended.
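
The first contribution can be stated as a simple geometric condition, sketched below under an assumed data layout (vertex arrays plus an edge index list): every edge of the flattened 2D pattern must be at least as long as its 3D counterpart, otherwise the patch cannot cover the object.

```python
# A minimal check of the edge-length condition for a flattened cover patch.
import numpy as np

def pattern_covers_patch(verts3d, verts2d, edges, tol=1e-6):
    """verts3d: (n,3), verts2d: (n,2), edges: (m,2) vertex-index pairs."""
    for a, b in edges:
        len3d = np.linalg.norm(verts3d[a] - verts3d[b])
        len2d = np.linalg.norm(verts2d[a] - verts2d[b])
        if len2d + tol < len3d:          # a 2D edge shorter than its 3D edge cannot cover
            return False
    return True
```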

16.
In this paper we address the question of how to quickly model glyph‐based Geographic Information System visualizations. Our solution uses shape grammars to set up the different aspects of a visualization, including its geometric content, methods for resolving layout conflicts, and interaction methods. Our approach significantly increases modelling efficiency over similarly flexible systems currently in use.
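
As a loose illustration of the shape-grammar idea, the toy grammar below rewrites non-terminal symbols into drawing primitives for a glyph. The symbols and rules are purely hypothetical; the paper's grammars additionally cover layout-conflict resolution and interaction, which are not shown.

```python
# A minimal sketch of grammar-driven glyph construction via rule expansion.
rules = {
    "GLYPH": ["FRAME", "ICON", "LABEL"],
    "FRAME": ["circle"],
    "ICON": ["symbol"],
    "LABEL": ["text"],
}

def expand(symbol):
    """Recursively expand a non-terminal into a list of terminal drawing primitives."""
    if symbol not in rules:
        return [symbol]                  # terminal: an actual drawing primitive
    out = []
    for child in rules[symbol]:
        out.extend(expand(child))
    return out

print(expand("GLYPH"))                   # ['circle', 'symbol', 'text']
```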

17.
Interactive graphics are commonly used in games and have proven successful in attracting a general audience. Computer games, animations, cartoons and videos are no longer confined to entertainment: there is now interest in using interactive graphics for ‘innovative testing’. Rather than traditional pen‐and‐paper tests, audio, video and graphics are being conceived as alternative means for more effective testing in the future. In this paper, we review some examples of graphics item types for testing. We also outline how games can be used to test concepts interactively; discuss the design of chemistry item types with interactive 3D graphics; suggest approaches for automatically adjusting the difficulty level of interactive graphics‐based questions; and propose strategies for awarding partial marks for incorrect answers. We study how to test different cognitive skills, such as music, using multimedia interfaces, and evaluate the effectiveness of our model. Methods for estimating the difficulty level of a mathematical item type using Item Response Theory (IRT) and of a molecule construction item type using Graph Edit Distance are discussed. An evaluation of the graphics item types through extensive testing with students is described. We also outline the use of interactive graphics on cell phones. All of the graphics item types used in this paper were developed by members of our research group.

18.
Signed distance functions (SDFs) to explicit or implicit surface representations are intensively used in various computer graphics and visualization algorithms. Among other uses, they are applied to optimize collision detection, to reconstruct data fields or surfaces, and, in particular, they are an obligatory ingredient of most level set methods. Level set methods are common in scientific visualization for extracting surfaces from scalar or vector fields. Common approaches for constructing an SDF to a surface are based either on iteratively solving a special partial differential equation or on marching algorithms that involve a polygonization of the surface. We propose a novel method for a non‐iterative approximation of an SDF and its derivatives in the vicinity of a manifold. We use a second‐order algebraic fitting scheme to ensure high accuracy of the approximation. The manifold is defined (explicitly or implicitly) as an isosurface of a given volumetric scalar field, which may be given at a set of irregular and unstructured samples. Stability and reliability of the SDF generation are achieved by proper scaling of the weights for the Moving Least Squares approximation, a careful choice of neighbors, and appropriate handling of degenerate cases. We obtain the solution in explicit form, so that no iterative solving is necessary, which makes our approach fast.
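
A minimal sketch of a non-iterative, moving-least-squares style distance estimate near an isosurface follows. For brevity it fits a first-order (plane) model to nearby samples of the scalar field rather than the second-order algebraic fit described above; the Gaussian weighting, the explicit (non-iterative) solve, and the normalization at the query point are the points being illustrated. The names and the weight scale h are assumptions.

```python
# A minimal sketch of a weighted local fit of a scalar field around a query
# point, from which an approximate signed distance to the isosurface is read off.
import numpy as np

def approx_signed_distance(x, sample_pts, sample_vals, iso=0.0, h=1.0):
    """x: (3,) query point; sample_pts: (n,3) irregular samples; sample_vals: (n,)."""
    d = sample_pts - x
    w = np.exp(-np.sum(d * d, axis=1) / (h * h))          # Gaussian MLS weights
    A = np.column_stack([np.ones(len(sample_pts)), d])    # local linear model: f ~ c + g.d
    W = np.sqrt(w)[:, None]
    coeff, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * (sample_vals - iso), rcond=None)
    c, g = coeff[0], coeff[1:]
    return c / (np.linalg.norm(g) + 1e-12)                # distance to the zero set of the fit
```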

19.
We present a novel approach to parameterize a mesh with disk topology to the plane in a shape‐preserving manner. Our key contribution is a local/global algorithm which combines a local mapping of each 3D triangle to the plane, using transformations taken from a restricted set, with a global “stitch” operation on all triangles, involving a sparse linear system. The local transformations can be taken from a variety of families, e.g. similarities or rotations, generating different types of parameterizations. In the first case, the parameterization tries to force each 2D triangle to be an as‐similar‐as‐possible version of its 3D counterpart; this is shown to yield results identical to those of the LSCM algorithm. In the second case, the parameterization tries to force each 2D triangle to be an as‐rigid‐as‐possible version of its 3D counterpart, which preserves shape as much as possible. The approach is simple, effective, and fast, owing to pre‐factoring of the linear system involved in the global phase. Experimental results show that our approach provides almost isometric parameterizations and obtains more shape‐preserving results than other state‐of‐the‐art approaches. We also present a more general “hybrid” parameterization model which provides a continuous spectrum of possibilities, controlled by a single parameter; the two cases described above lie at the two ends of the spectrum. We generalize our local/global algorithm to compute these parameterizations. The local phase may also be accelerated by parallelizing the independent computations per triangle.
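
The sketch below illustrates only the local phase for the rotations (as-rigid-as-possible) case: for each triangle, the best 2D rotation from its isometrically flattened reference shape to its current planar shape is found by a small Procrustes/SVD fit. The global stitching solve is omitted, and variable names are assumptions.

```python
# A minimal sketch of the per-triangle rotation fit used in a local/global
# parameterization's local phase (2D orthogonal Procrustes).
import numpy as np

def best_rotation_2d(ref_tri, cur_tri):
    """ref_tri, cur_tri: (3,2) triangle corners; returns R (2x2) such that
    (ref_tri - centroid) @ R best matches (cur_tri - centroid)."""
    P = ref_tri - ref_tri.mean(axis=0)
    Q = cur_tri - cur_tri.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)        # 2x2 cross-covariance
    if np.linalg.det(U @ Vt) < 0:            # keep a pure rotation, no reflection
        U[:, -1] *= -1
    return U @ Vt
```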

20.
Point cloud data is one of the most common types of input for geometric processing applications. In this paper, we study the point cloud density adaptation problem that underlies many pre‐processing tasks for point data. Specifically, given a (sparse) set of points Q sampling an unknown surface and a target density function, the goal is to adapt Q to match the target distribution. We propose a simple and robust framework that is effective at achieving both local uniformity and precise global density distribution control. Our approach relies on the Gaussian‐weighted graph Laplacian and works purely in the point setting. While it is well known that the graph Laplacian is related to mean‐curvature flow and thus has a denoising ability, our algorithm uses information encoded in the graph Laplacian that is orthogonal to the mean‐curvature flow. Furthermore, by leveraging the natural scale parameter contained in the Gaussian kernel and combining it with a simulated annealing idea, our algorithm moves points in a multi‐scale manner. The resulting algorithm relies far less than many previous refinement‐based methods on the input points having a good initial distribution; it works even when the input is neither uniform nor close to the target density distribution. We demonstrate the simplicity and effectiveness of our algorithm with point clouds sampled from different underlying surfaces with various geometric and topological properties.
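
A minimal sketch of one smoothing step driven by a Gaussian-weighted graph Laplacian on a point set is shown below. It only illustrates the Laplacian construction and the multi-scale role of the kernel width h; the target-density control and the simulated-annealing schedule described above are not included, and the radius, step size, and names are assumptions.

```python
# A minimal sketch of moving points along a Gaussian-weighted graph Laplacian,
# which locally uniformizes the point distribution.
import numpy as np
from scipy.spatial import cKDTree

def laplacian_step(points, h=0.1, step=0.5):
    """points: (n,3) array; returns points after one Laplacian smoothing step."""
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, 3.0 * h)    # indices within the kernel support
    new_pts = points.copy()
    for i, idx in enumerate(neighbors):
        d = points[idx] - points[i]
        w = np.exp(-np.sum(d * d, axis=1) / (h * h))       # Gaussian weights
        new_pts[i] += step * (w[:, None] * d).sum(axis=0) / w.sum()
    return new_pts
```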

