Similar Documents (20 results)
1.
We present a method for the generation of coarse and fine finite element meshes on multiply connected surfaces. Our method is based on the medial axis transform (MAT), which is employed to decompose a complex shape into topologically simple subdomains. One important property of our approach is that the MAT is effectively employed to automatically extract important shape characteristics and their length scales. Using this technique, we can create a coarse subdivision of a complex surface and select local element size to generate fine triangular meshes within individual subregions. The MAT allows us to carry out these processes in an automated manner. Thus, our approach can lead to integration of automated finite element (FE) mesh generation schemes into existing FE preprocessing systems. We also briefly discuss several design and analysis applications, which include adaptive surface approximations and adaptive h- and p-version finite element analysis (FEA) processes, in order to demonstrate our method.
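As a rough illustration of how a medial axis exposes local feature size, the sketch below shows the 2D raster analogue using scikit-image's `medial_axis`. This is only background for the idea; the authors' MAT operates on parametric surfaces, and the tooling here is an assumption, not their code.

```python
import numpy as np
from skimage.morphology import medial_axis

def local_feature_size(mask):
    """Sample the local feature size of a 2D binary shape from its medial axis.

    The distance-transform value at a medial-axis pixel is the local
    half-thickness of the shape, which could serve as a target element
    size when meshing that part of the domain.
    """
    skeleton, dist = medial_axis(mask, return_distance=True)
    ys, xs = np.nonzero(skeleton)
    return np.column_stack([xs, ys, dist[ys, xs]])   # rows of (x, y, local radius)
```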

2.
Streaming simplification of tetrahedral meshes
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
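The abstract does not detail the quadric-based simplification; the following sketch shows the classic surface-mesh flavour of a quadric error metric (plane quadrics, collapse cost, optimal placement) purely as background for the term. It is a generic illustration under the assumption that a related error metric is applied per edge collapse in-core, not the authors' tetrahedral implementation.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental quadric of the supporting plane of triangle (p0, p1, p2)."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    plane = np.append(n, -np.dot(n, p0))    # (a, b, c, d) with ax + by + cz + d = 0
    return np.outer(plane, plane)           # 4x4 symmetric quadric

def collapse_cost(Q, v):
    """Quadric error v^T Q v of placing the merged vertex at position v."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

def optimal_placement(Q, fallback):
    """Vertex position minimizing the quadric error, with a fallback
    (e.g. the edge midpoint) when the system is near-singular."""
    A = Q.copy()
    A[3, :] = [0.0, 0.0, 0.0, 1.0]          # constrain the homogeneous coordinate
    if abs(np.linalg.det(A)) < 1e-12:
        return np.asarray(fallback, dtype=float)
    return np.linalg.solve(A, [0.0, 0.0, 0.0, 1.0])[:3]
```

For an edge collapse, the quadrics of the two endpoints are summed and the new vertex is placed at `optimal_placement` of that sum.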

3.
We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature and a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain.

4.
A layered surface reconstruction algorithm for massive data
吕晟珉, 杨勋年, 汪国昭. 《软件学报》 (Journal of Software), 2003, 14(8): 1448-1455
Surface reconstruction from sequences of 2D images is a long-standing problem. Traditional methods typically reconstruct the surface or extract an isosurface first, and only then reduce the amount of data. As the volume of data to be processed grows, the intermediate stages of such algorithms can no longer be carried out because of storage limitations. How to process very large datasets within limited storage and still complete the surface reconstruction is the problem studied here. For large volumes of already-segmented medical slice images, we give an algorithm that is easy to implement and keeps the data volume under control, based on the idea of reconstructing layer by layer and simplifying immediately. This makes it possible to perform surface reconstruction of large medical image datasets on computers with modest hardware, such as personal computers with limited memory.
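A minimal sketch of the "reconstruct one layer, simplify immediately" pattern, assuming a hypothetical `slice_reader` iterator over segmented slices and using scikit-image contour routines; it only illustrates how peak memory stays bounded, not the paper's actual reconstruction.

```python
from skimage.measure import find_contours, approximate_polygon

def reconstruct_layer_by_layer(slice_reader, tolerance=1.0):
    """Process one segmented slice at a time: extract its contours, simplify
    them immediately, and release the raw slice before reading the next one,
    so only the reduced data accumulates in memory."""
    simplified_layers = []
    for z, mask in enumerate(slice_reader):
        contours = find_contours(mask.astype(float), 0.5)
        layer = [approximate_polygon(c, tolerance) for c in contours]
        simplified_layers.append((z, layer))   # keep only the simplified contours
        del mask                               # full-resolution slice is released here
    return simplified_layers
```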

5.
In this paper, we present efficient algorithms for generating hierarchical molecular skin meshes with decreasing size and guaranteed quality. Our algorithms generate a sequence of coarse meshes for both the surfaces and the bounded volumes. Each coarser surface mesh is adaptive to the surface curvature and maintains the topology of the skin surface with guaranteed mesh quality. The corresponding tetrahedral mesh is conforming to the interface surface mesh and contains high quality tetrahedra that decompose both the interior of the molecule and the surrounding region (enclosed in a sphere). Our hierarchical tetrahedral meshes have a number of advantages that will facilitate fast and accurate multigrid PDE solvers. Firstly, the quality of both the surface triangulations and tetrahedral meshes is guaranteed. Secondly, the interface in the tetrahedral mesh is an accurate approximation of the molecular boundary. In particular, all the boundary points lie on the skin surface. Thirdly, our meshes are Delaunay meshes. Finally, the meshes are adaptive to the geometry.

6.
Computational simulations frequently generate solutions defined over very large tetrahedral volume meshes containing many millions of elements. Furthermore, such solutions may often be expressed using non-linear basis functions. Certain solution techniques, such as discontinuous Galerkin methods, may even produce non-conforming meshes. Such data is difficult to visualize interactively, as it is far too large to fit in memory and many common data reduction techniques, such as mesh simplification, cannot be applied to non-conforming meshes. We introduce a point-based visualization system for interactive rendering of large, potentially non-conforming, tetrahedral meshes. We propose methods for adaptively sampling points from non-linear solution data and for decimating points at run time to fit GPU memory limits. Because these are streaming processes, memory consumption is independent of the input size. We also present an order-independent point rendering method that can efficiently render volumes on the order of 20 million tetrahedra at interactive rates.
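The paper's sampling is adaptive to the non-linear solution data; as a much simpler stand-in, the sketch below shows one budget-bounded streaming decimation (reservoir sampling) whose memory use is independent of the input size. The uniform selection here is an assumption for illustration, not the authors' decimation strategy.

```python
import random

def decimate_stream(points, budget, seed=0):
    """Keep at most `budget` points from a point stream using reservoir
    sampling, so memory use depends only on the budget, not the stream size."""
    rng = random.Random(seed)
    reservoir = []
    for i, p in enumerate(points):
        if len(reservoir) < budget:
            reservoir.append(p)
        else:
            j = rng.randint(0, i)      # uniform index over everything seen so far
            if j < budget:
                reservoir[j] = p       # replace a previously kept point
    return reservoir
```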

7.
Ray tracing a volume scene graph composed of multiple point-based volume objects (PBVO) can produce high quality images with effects such as shadows and constructive operations. A naive approach, however, would demand an overwhelming amount of memory to accommodate all point datasets and their associated control structures such as octrees. This paper describes an out-of-core approach for rendering such a scene graph in a scalable manner. In order to address the difficulty in pre-determining the order of data caching, we introduce a technique based on a dynamic, in-core working set. We present a ray-driven algorithm for predicting the working set automatically. This allows both the data and the control structures required for ray tracing to be dynamically pre-fetched according to access patterns determined based on captured knowledge of ray-data intersection. We have conducted a series of experiments on the scalability of the technique using working sets and datasets of different sizes. With the aid of both qualitative and quantitative analysis, we demonstrate that this approach allows the rendering of multiple large PBVOs in a volume scene graph to be performed on desktop computers.
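A minimal sketch of a ray-driven, in-core working set with least-recently-used eviction; `load_block`, the block ids, and the prediction source are hypothetical placeholders for the out-of-core loader and the access pattern inferred from ray-data intersections, not the paper's implementation.

```python
from collections import OrderedDict

class WorkingSet:
    """A small in-core working set of data blocks with LRU eviction."""

    def __init__(self, load_block, max_blocks):
        self.load_block = load_block      # hypothetical out-of-core loader, keyed by block id
        self.max_blocks = max_blocks
        self.cache = OrderedDict()

    def fetch(self, block_id):
        """Return a block, loading it from disk and evicting the oldest if needed."""
        if block_id in self.cache:
            self.cache.move_to_end(block_id)        # mark as recently used
        else:
            self.cache[block_id] = self.load_block(block_id)
            if len(self.cache) > self.max_blocks:
                self.cache.popitem(last=False)      # evict the least-recently-used block
        return self.cache[block_id]

    def prefetch(self, predicted_ids):
        """Warm the cache with blocks predicted from upcoming ray traversals."""
        for bid in predicted_ids:
            self.fetch(bid)
```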

8.
Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.
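As background for the brick-tensor decomposition, here is a small truncated higher-order SVD (Tucker decomposition) of a single brick in NumPy. This is a generic HOSVD sketch under the assumption that something of this flavour is computed per brick; the paper's GPU/CUDA reconstruction and its quantization strategy are not shown.

```python
import numpy as np

def truncated_hosvd(brick, ranks):
    """Truncated HOSVD (Tucker) of a 3D brick: per-mode bases plus a small core."""
    factors = []
    for mode, r in enumerate(ranks):
        # unfold the brick along `mode` and take the leading left singular vectors
        unfolded = np.moveaxis(brick, mode, 0).reshape(brick.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(u[:, :r])
    core = brick
    for mode, u in enumerate(factors):
        # project onto the truncated basis: core = brick x_mode U_mode^T
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Approximate the original brick from the core and the mode bases."""
    out = core
    for mode, u in enumerate(factors):
        out = np.moveaxis(np.tensordot(u, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out
```

For example, `reconstruct(*truncated_hosvd(brick, (8, 8, 8)))` yields a rank-(8, 8, 8) approximation of the brick.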

9.
In order to deal with the common trend in size increase of volumetric datasets, in the past few years research in isosurface extraction has focused on related aspects such as surface simplification and load-balanced parallel algorithms. We present a parallel, block-wise extension of the tandem algorithm [Attali D, Cohen-Steiner D, Edelsbrunner H. Extraction and simplification of iso-surfaces in tandem. In: SGP ’05: Proceedings of the third Eurographics symposium on Geometry processing. Aire-la-Ville, Switzerland: Eurographics Association; 2005. p. 139-148], which simplifies an isosurface on the fly as it is being extracted. Our approach minimizes the overall memory consumption using an adequate block splitting and merging strategy, along with a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets such as those encountered in geophysics. As soon as detected, surface components are migrated to disk along with a meta-data index (oriented bounding box, volume, etc.) that permits improved exploration scenarios (small component removal or particularly oriented component selection, for instance). For ease of implementation, we carefully describe a master and worker algorithm architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied to a geophysical dataset of size 7000×1600×2000.

10.
This paper describes a novel algorithm to extract surface meshes directly from implicitly represented heterogeneous models made of different constituent materials. Our approach can directly convert implicitly represented heterogeneous objects into a surface model separating homogeneous material regions, where every homogeneous region in a heterogeneous structure is enclosed by a set of two-manifold surface meshes. Unlike other discretization techniques of implicitly represented heterogeneous objects, the intermediate surfaces between two constituent materials can be directly extracted by our algorithm. Therefore, it is more convenient to adopt the surface meshes from our approach in the boundary element method (BEM) or as a starting model to generate volumetric meshes preserving intermediate surfaces for the finite element method (FEM). The algorithm consists of three major steps: firstly, a set of assembled two-manifold surface patches coarsely approximating the interfaces between homogeneous regions is extracted and segmented; secondly, signed distance fields are constructed such that each field expresses the Euclidean distance from points to the surface of one homogeneous material region; and finally, the coarse patches generated in the first step are dynamically optimized to give adaptive and high-quality surface meshes. The manifold topology is preserved on each surface patch.

11.
Isosurfaces are ubiquitous in many fields, including visualization, graphics, and vision. They are often the main computational component of important processing pipelines (e.g., surface reconstruction), and are heavily used in practice. The classical approach to compute isosurfaces is to apply the Marching Cubes algorithm, which, although robust and simple to implement, generates surfaces that require additional processing steps to improve triangle quality and mesh size. An important issue is that in some cases, the surfaces generated by Marching Cubes are irreparably damaged, and important details are lost that cannot be recovered by subsequent processing. The main motivation of this work is to develop a technique capable of constructing high-quality and high-fidelity isosurfaces. We propose a new advancing front technique that is capable of creating high-quality isosurfaces from regular and irregular volumetric datasets. Our work extends the guidance field framework of Schreiner et al. to implicit surfaces, and improves it in significant ways. In particular, we describe a set of sampling conditions that guarantee that surface features will be captured by the algorithm. We also describe an efficient technique to compute a minimal guidance field, which greatly improves performance. Our experimental results show that our technique can generate high-quality meshes from complex datasets.

12.
We present a technique for steganography in polygonal meshes. Our method hides a message in the indexed representation of a mesh by permuting the order in which faces and vertices are stored. The permutation is relative to a reference ordering that encoder and decoder derive from the mesh connectivity in a consistent manner. Our method is distortion-free because it does not modify the geometry of the mesh. Compared to previous steganographic methods for polygonal meshes, our capacity is up to an order of magnitude better. Our steganography algorithm is universal and can be used instead of the standard permutation steganography algorithm on arbitrary datasets. The standard algorithm runs in Ω(n² log² n log log n) time and achieves the optimal O(n log n) bit capacity on datasets with n elements. In contrast, our algorithm runs in O(n) time, achieves a capacity that is only one bit per element less than optimal, and is extremely simple to implement.
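For reference, the standard permutation steganography algorithm that the abstract compares against encodes a message as the index of a permutation relative to a shared reference ordering (the factorial number system). The deliberately simple sketch below illustrates that baseline, not the authors' O(n) method.

```python
def encode_permutation(message, items):
    """Embed the integer `message` (0 <= message < n!) by permuting `items`,
    which encoder and decoder agree on in a fixed reference order."""
    items = list(items)
    out = []
    for radix in range(len(items), 0, -1):
        message, digit = divmod(message, radix)   # next factorial-base digit
        out.append(items.pop(digit))
    return out

def decode_permutation(permuted, reference):
    """Recover the integer message from the permuted order."""
    remaining = list(reference)
    digits = []
    for x in permuted:
        digit = remaining.index(x)
        digits.append((digit, len(remaining)))    # (digit, radix) at this step
        remaining.pop(digit)
    message = 0
    for digit, radix in reversed(digits):
        message = message * radix + digit
    return message
```

The repeated `index`/`pop` operations on Python lists make this baseline superlinear in n, which is exactly the kind of overhead the authors' linear-time algorithm avoids.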

13.
Based on mesh deformation, we present a unified mesh parametrization algorithm for both planar and spherical domains. Our approach can produce intermediate frames from the original meshes to the targets. We derive and define a novel geometric flow, the ‘unit normal flow’ (UNF), and prove that if UNF converges, it deforms a surface to a constant mean curvature (CMC) surface, such as a plane or a sphere. Our method works by deforming meshes of disk topology to planes, and spherical meshes to spheres. Our algorithm is robust, efficient, and simple to implement. To demonstrate the robustness and effectiveness of our method, we apply it to hundreds of models of varying complexity. Our experiments show that our algorithm is a competitive alternative to other state-of-the-art mesh parametrization methods. The unit normal flow also suggests a potential direction for creating CMC surfaces.

14.
In medical imaging, the generation of surface representations of anatomical objects, obtained by labeling images from various modalities, is a critical component for visualization, simulation, and analysis. The interfaces between labeled regions can meet at arbitrary angles and with complex topologies, causing most automatic meshing algorithms to fail. We apply a recent Delaunay refinement algorithm to generate high quality triangular meshes that approximate the interface surfaces. This algorithm has proven guarantees for meshing piecewise-smooth shapes, and its implementation overhead is low. Consequently, the approach is applicable to labeled datasets generated from binary segmentations as well as from probabilistic segmentation algorithms. We show the effectiveness of this technique on data from a variety of medical fields and discuss its ability to control the quality and size of the output meshes. The same algorithm can be used to generate tetrahedral meshes of the segmentation space.

15.
Important engineering applications use unstructured hexahedral meshes for numerical simulations. Hexahedral cells, when compared to tetrahedral ones, tend to be more numerically stable and to require less mesh refinement. However, volume visualization of unstructured hexahedral meshes is challenging due to the trilinear variation of scalar fields inside the cells. The conventional solution consists of subdividing each hexahedral cell into five or six tetrahedra, approximating a trilinear variation by a nonadaptive piecewise linear function. This results in inaccurate images and increases memory consumption. In this paper, we present an accurate ray-casting volume rendering algorithm for unstructured hexahedral meshes. In order to capture the trilinear variation along the ray, we propose the use of quadrature integration. A set of computational experiments demonstrates that our proposal produces accurate results with a reduced memory footprint. The entire algorithm is implemented on graphics cards, ensuring competitive performance. We also propose a faster approach that, like the tetrahedron subdivision scheme, approximates the trilinear variation by a piecewise linear function, but in an adaptive and more accurate way, considering the points of minimum and maximum of the scalar function along the ray.
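A sketch of the quadrature idea: integrating a trilinear scalar field along the ray segment inside one cell with Gauss–Legendre nodes. It assumes the entry and exit points are already expressed in the cell's local [0, 1]³ coordinates (i.e., an axis-aligned or pre-inverted cell); the paper's GPU implementation and optical model are not shown.

```python
import numpy as np

def trilinear(c, u, v, w):
    """Trilinear interpolation of 8 corner values c[i, j, k] at (u, v, w) in [0, 1]^3."""
    c = np.asarray(c, dtype=float).reshape(2, 2, 2)
    return (c[0, 0, 0] * (1 - u) * (1 - v) * (1 - w) + c[1, 0, 0] * u * (1 - v) * (1 - w)
          + c[0, 1, 0] * (1 - u) * v * (1 - w)       + c[0, 0, 1] * (1 - u) * (1 - v) * w
          + c[1, 1, 0] * u * v * (1 - w)             + c[1, 0, 1] * u * (1 - v) * w
          + c[0, 1, 1] * (1 - u) * v * w             + c[1, 1, 1] * u * v * w)

def integrate_along_ray(corner_values, p_in, p_out, order=4):
    """Gauss-Legendre quadrature of the trilinear field along the segment p_in -> p_out."""
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    length = np.linalg.norm(p_out - p_in)
    nodes, weights = np.polynomial.legendre.leggauss(order)   # nodes/weights on [-1, 1]
    t = 0.5 * (nodes + 1.0)                                    # map nodes to [0, 1]
    total = 0.0
    for ti, wi in zip(t, weights):
        u, v, w = p_in + ti * (p_out - p_in)
        total += wi * trilinear(corner_values, u, v, w)
    return 0.5 * length * total       # Jacobian of mapping [-1,1] -> [0,1] -> [0,length]
```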

16.
We introduce a reliable method to generate offset meshes from input triangle meshes or triangle soups. Our method proceeds in two steps. The first step performs a Dual Contouring method on the offset surface, operating on an adaptive octree that is refined in areas where the offset topology is complex. Our approach substantially reduces memory consumption and runtime compared to isosurfacing methods operating on uniform grids. The second step improves the output Dual Contouring mesh with an offset-aware remeshing algorithm to reduce the normal deviation between the mesh facets and the exact offset. This remeshing process reconstructs concave sharp features and approximates smooth shapes in convex areas up to a user-defined precision. We show the effectiveness and versatility of our method by applying it to a wide range of input meshes. We also benchmark our method on the Thingi10k dataset: watertight and topologically 2-manifold offset meshes are obtained for 100% of the cases.

17.
This paper describes an algorithm to extract adaptive, high-quality 3D meshes directly from volumetric imaging data. The extracted tetrahedral and hexahedral meshes are extensively used in the finite element method (FEM). A top-down octree subdivision coupled with a dual contouring method is used to rapidly extract adaptive 3D finite element meshes with correct topology from volumetric imaging data. Edge contraction and smoothing methods are used to improve mesh quality. The main contribution is extending the dual contouring method to crack-free interval-volume 3D meshing with boundary-feature-sensitive adaptation. Compared to other tetrahedral extraction methods from imaging data, our method generates adaptive, high-quality 3D meshes without introducing any hanging nodes. The algorithm has been successfully applied to constructing quality meshes for finite element calculations.
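A minimal sketch of the top-down octree subdivision step only, with a hypothetical `needs_refinement` callback standing in for the data-driven criterion; the dual contouring, edge contraction, and smoothing stages described in the abstract are not shown.

```python
import numpy as np

class OctreeNode:
    """A node covering the axis-aligned box [lo, hi] of the volume."""
    def __init__(self, lo, hi, depth):
        self.lo = np.asarray(lo, dtype=float)
        self.hi = np.asarray(hi, dtype=float)
        self.depth = depth
        self.children = []

def build_octree(node, needs_refinement, max_depth):
    """Subdivide top-down while the criterion asks for it, e.g. while the
    imaging data inside the box still varies too much or crosses a boundary."""
    if node.depth >= max_depth or not needs_refinement(node.lo, node.hi):
        return node                              # keep this node as a leaf
    mid = 0.5 * (node.lo + node.hi)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                lo = np.where([dx, dy, dz], mid, node.lo)
                hi = np.where([dx, dy, dz], node.hi, mid)
                child = OctreeNode(lo, hi, node.depth + 1)
                node.children.append(build_octree(child, needs_refinement, max_depth))
    return node
```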

18.
Vector field visualization techniques have evolved very rapidly over the last two decades; however, visualizing vector fields on complex boundary surfaces from computational fluid dynamics (CFD) remains a challenging task. In part, this is due to the large, unstructured, adaptive resolution characteristics of the meshes used in the modeling and simulation process. Out of the wide variety of existing flow field visualization techniques, vector field clustering algorithms offer the advantage of capturing a detailed picture of important areas of the domain while presenting a simplified view of areas of less importance. This paper presents a novel, robust, automatic vector field clustering algorithm that produces intuitive and insightful images of vector fields on large, unstructured, adaptive resolution boundary meshes from CFD. Our bottom-up, hierarchical approach is the first to combine the properties of the underlying vector field and mesh into a unified error-driven representation. The motivation behind the approach is the fact that CFD engineers may increase the resolution of model meshes according to importance. The algorithm has several advantages. Clusters are generated automatically, no surface parameterization is required, and large meshes are processed efficiently. The most salient and important information contained in the meshes and vector fields is preserved while less important areas are simplified in the visualization. Users can interactively control the level of detail by adjusting a range of clustering distance measure parameters. We describe two data structures to accelerate the clustering process. We also introduce novel visualizations of clusters inspired by statistical methods. We apply our method to a series of synthetic and complex, real-world CFD meshes to demonstrate the clustering algorithm results.

19.
We present an algorithm to compute an approximation of the generalized Voronoi diagram (GVD) on arbitrary collections of 2D or 3D geometric objects. In particular, we focus on datasets with closely spaced objects; GVD approximation is expensive and sometimes intractable on these datasets using previous algorithms. With our approach, the GVD can be computed using commodity hardware even on datasets with many, extremely tightly packed objects. Our approach is to subdivide the space with an octree that is represented with an adjacency structure. We then use a novel adaptive distance transform to compute the distance function on octree vertices. The computed distance field is sampled more densely in areas of close object spacing, enabling robust and parallelizable GVD surface generation. We demonstrate our method on a variety of data and show example applications of the GVD in 2D and 3D.
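The paper's adaptive distance transform runs on octree vertices; as a brute-force uniform-grid stand-in, the sketch below computes one Euclidean distance field per labelled object with SciPy and marks grid cells where the two nearest objects are roughly equidistant, which approximates the GVD boundary. The grid resolution and the one-cell threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def gvd_boundary(labels):
    """Approximate the generalized Voronoi diagram on a label grid.

    `labels` is an integer array where each object has a distinct positive id
    and 0 is free space (at least two objects are assumed).  Returns the owning
    object per cell and a mask of cells lying near the GVD boundary.
    """
    ids = [i for i in np.unique(labels) if i != 0]
    # distance from every cell to the nearest cell of object i
    fields = np.stack([distance_transform_edt(labels != i) for i in ids])
    nearest = np.array(ids)[np.argmin(fields, axis=0)]   # closest object per cell
    ordered = np.sort(fields, axis=0)
    boundary = (ordered[1] - ordered[0]) <= 1.0           # two objects about equally close
    return nearest, boundary
```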

20.
We present an isosurface meshing algorithm, DelIso, based on the Delaunay refinement paradigm. This paradigm has been successfully applied to mesh a variety of domains with guarantees for topology, geometry, mesh gradedness, and triangle shape. A restricted Delaunay triangulation, dual of the intersection between the surface and the three-dimensional Voronoi diagram, is often the main ingredient in Delaunay refinement. Computing and storing three-dimensional Voronoi/Delaunay diagrams become bottlenecks for Delaunay refinement techniques since isosurface computations generally have large input datasets and output meshes. A highlight of our algorithm is that we find a simple way to recover the restricted Delaunay triangulation of the surface without computing the full 3D structure. We employ techniques for efficient ray tracing of isosurfaces to generate surface sample points, and demonstrate the effectiveness of our implementation using a variety of volume datasets.
