Similar Literature: 20 similar documents found.
1.
This paper describes a novel approach to the parameterization of triangle meshes representing 2-manifolds of arbitrary genus. A topology-based decomposition of the shape is computed and used to segment the shape into primitives, which define a chart decomposition of the mesh. Each chart is then parameterized using an extension of the barycentric coordinates method. The charts all have genus 0 and can be of only three types, depending on the number of boundary components. The chart decomposition and the parameterization are used to define a shape graph in which each node represents one primitive and the arcs encode the adjacency relationships between primitives. Conical and cylindrical primitives are encoded together with their skeletal lines, which are computed from and aligned with their parameterization. Applying the parameterization approach to remeshing guarantees that extraordinary vertices appear only where two patches share a boundary and are not scattered over the whole surface.

2.
Fast GPU-based Adaptive Tessellation with CUDA

3.
The technique of Delaunay refinement has been recognized as a versatile tool to generate Delaunay meshes of a variety of geometries. Despite its usefulness, it suffers from one lacuna that limits its application: it does not scale well with the mesh size. As the sample point set grows, the Delaunay triangulation starts stressing the available memory, which ultimately stalls any effective progress. A natural solution to the problem is to maintain the point set in clusters and run the refinement on each individual cluster. However, this needs a careful point insertion strategy and a balanced coordination among the neighboring clusters to ensure consistency across individual meshes. We design an octree-based localized Delaunay refinement method for meshing surfaces in three dimensions which meets these goals. We prove that the algorithm terminates and provide guarantees about structural properties of the output mesh. Experimental results show that the method can avoid memory thrashing while computing large meshes and thus scales much better than the standard Delaunay refinement method.

4.
Very large scale networks have become common in distributed systems. To efficiently manage these networks, various techniques are being developed in the distributed and networking research community. In this paper, we focus on one of those techniques, network clustering, i.e., the partitioning of a system into connected subsystems. The clustering we compute is size-oriented: given a parameter K of the algorithm, we compute, as far as possible, clusters of size K. We present an algorithm to compute a binary hierarchy of nested disjoint clusters. A token browses the network and recruits nodes to its cluster. When a cluster reaches a maximal size defined by a parameter of the algorithm, it is divided when possible, and tokens are created in both of the new clusters. The new clusters are then built and divided in the same fashion. The token browsing scheme chosen is a random walk, in order to ensure local load balancing. To allow the division of clusters, a spanning tree is built for each cluster. At each division, information on how to route messages between the clusters is stored. The naming process used for the clusters, along with the information stored during each division, allows routing between any two clusters.
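A minimal, centralized sketch of the token-recruitment idea described above: a token performs a random walk over an undirected graph and recruits the unclustered nodes it steps on until its cluster reaches size K, then a new token is dropped elsewhere. This is only an illustration under assumed data structures (a plain adjacency dict); the paper's distributed protocol, cluster division, spanning trees and routing tables are not modeled.

```python
import random

def token_clustering(adj, K, seed=0):
    """Centralized simulation of random-walk token recruitment.

    adj : dict mapping node -> list of neighbouring nodes (undirected graph)
    K   : target cluster size
    Returns a dict mapping node -> cluster id.
    """
    rng = random.Random(seed)
    cluster_of = {}
    unclustered = set(adj)
    next_id = 0

    while unclustered:
        # Drop a token on an arbitrary unclustered node and open a new cluster.
        current = next(iter(unclustered))
        cid, next_id = next_id, next_id + 1
        members = {current}
        cluster_of[current] = cid
        unclustered.discard(current)

        # The token walks randomly and recruits unclustered nodes it steps on,
        # until the cluster reaches size K or no recruitable neighbour is left.
        while len(members) < K and any(n in unclustered
                                       for m in members for n in adj[m]):
            current = rng.choice(adj[current])
            if current in unclustered:
                unclustered.discard(current)
                cluster_of[current] = cid
                members.add(current)
    return cluster_of
```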

5.
This paper proposes a symmetric-edge doubly circular linked list structure for triangle meshes. A symmetric edge means that each edge is represented by two directed edges; the doubly circular linked lists link the directed edges that share a source vertex, so that around each endpoint the edges can be traversed in both clockwise and counter-clockwise order. The structure makes it easy to maintain the topology of a triangle mesh. Pseudocode implementations of several basic operations on triangle meshes are given. Compared with general polygon mesh structures, it is easier to understand, more convenient to operate, and uses less memory.
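A minimal Python sketch of the symmetric-edge idea: every undirected edge is stored as two directed (symmetric) edges, and around each vertex the outgoing directed edges can be visited in rotational order. The class and field names are illustrative, not the paper's, and the circular linked lists are realized here through standard half-edge-style twin/next/prev pointers rather than explicit list nodes.

```python
class HalfEdge:
    """One of the two directed (symmetric) edges representing an undirected edge."""
    __slots__ = ("origin", "twin", "next", "prev")
    def __init__(self, origin):
        self.origin = origin   # source vertex index
        self.twin = None       # the oppositely directed copy of the same edge
        self.next = None       # next directed edge inside the same triangle
        self.prev = None       # previous directed edge inside the same triangle

def build_half_edges(triangles):
    """Build symmetric directed edges from (a, b, c) vertex-index triples.

    Assumes a manifold triangle mesh with consistent winding.
    Returns a dict mapping (origin, dest) -> HalfEdge.
    """
    edges = {}
    for a, b, c in triangles:
        tri = [HalfEdge(a), HalfEdge(b), HalfEdge(c)]
        verts = (a, b, c)
        for i in range(3):
            tri[i].next = tri[(i + 1) % 3]
            tri[i].prev = tri[(i + 2) % 3]
            edges[(verts[i], verts[(i + 1) % 3])] = tri[i]
    # Link each directed edge with its oppositely directed twin.
    for (u, v), he in edges.items():
        he.twin = edges.get((v, u))    # None on a boundary edge
    return edges

def around_vertex(he):
    """Yield the outgoing directed edges around he.origin in one rotational
    order (clockwise vs. counter-clockwise depends on the triangle winding)."""
    start = he
    while True:
        yield he
        if he.prev.twin is None:       # hit a boundary edge: stop
            return
        he = he.prev.twin              # next outgoing edge with the same origin
        if he is start:
            return
```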

6.
A fast and exact method for computing the volume of arbitrary triangle mesh models
Triangle meshes are the most widely used mesh model. This paper proposes a method that computes the volume of a model using only the set of triangular faces of its triangle mesh. Given a specified projection plane, the method computes the signed volume of the convex pentahedron (prism) bounded by each triangular face and its projection onto that plane (subdividing the triangle when necessary); the volume of the whole model is the algebraic sum of the signed volumes of all these pentahedra. The proposed method computes the volume of a triangle mesh model quickly and accurately. Tests on the quantity computation of construction units in the construction simulation of a large hydropower project, and on triangle mesh models of various sizes, confirm the effectiveness of the method.
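A minimal sketch of the signed-volume idea in the abstract, taking z = 0 as the projection plane: the prism between a triangle and its projection contributes a signed volume equal to the signed area of the projected triangle times the mean height of its three vertices, and summing over all faces of a closed, consistently oriented mesh gives the enclosed volume. Array names are illustrative assumptions.

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh.

    vertices : (n, 3) float array of xyz coordinates
    faces    : (m, 3) int array of vertex indices per triangle

    Each triangle and its projection onto the z = 0 plane bound a prism whose
    signed volume is (signed projected area) * (mean z of the three vertices);
    the algebraic sum of these signed volumes is the enclosed volume.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    p0, p1, p2 = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]

    # Signed area of each triangle projected onto the xy-plane.
    proj_area = 0.5 * ((p1[:, 0] - p0[:, 0]) * (p2[:, 1] - p0[:, 1])
                       - (p2[:, 0] - p0[:, 0]) * (p1[:, 1] - p0[:, 1]))
    mean_z = (p0[:, 2] + p1[:, 2] + p2[:, 2]) / 3.0

    # abs() makes the result independent of whether the faces point inward
    # or outward; the sum itself is the algebraic sum of prism volumes.
    return abs(np.sum(proj_area * mean_z))
```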

7.
We present an acceleration structure to efficiently query the Signed Distance Field (SDF) of volumes represented by triangle meshes. The method is based on a discretization of space. In each node, we store the triangles defining the SDF behaviour in that region. Consequently, we reduce the cost of the nearest triangle search, prioritizing query performance, while avoiding approximations of the field. We propose a method to conservatively compute the set of triangles influencing each node. Given a node, each triangle defines a region of space such that all points inside it are closer to a point in the node than the triangle is. This property is used to build the SDF acceleration structure. We do not need to explicitly compute these regions, which is crucial to the performance of our approach. We prove the correctness of the proposed method and compare it to similar approaches, confirming that our method produces faster query times than other exact methods.

8.
Negative Samples Analysis in Relevance Feedback
Recently, relevance feedback (RF) in content-based image retrieval (CBIR) has been implemented as an online binary classifier that separates the positive samples from the negative samples, where both sets of samples are labeled by the user. In many applications, it is reasonable to assume that all the positive samples are alike, so that the region of the feature space occupied by the positive samples can be described by a single hypersurface. For the negative samples, however, previous RF methods either treat each negative sample as an isolated point or assume the whole negative set can be described by a single convex hypersurface. In this paper, we argue that these treatments of the negative samples are not sound. Our belief is that all positive samples belong to a single set, while the negative samples split into a small number of subsets, each with a simple distribution. Therefore, we first cluster the negative samples into several groups; for each negative group, we build a marginal convex machine (MCM) subclassifier between it and the single positive group, which yields a series of subclassifiers. These subclassifiers are then incorporated into a biased MCM (BMCM) for RF. Experiments demonstrate the advantages of BMCM-based RF over previous RF methods.
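A hedged sketch of the overall flow described above: cluster the negative samples into groups and train one positive-vs-group subclassifier per group. Scikit-learn's KMeans and SVC are used here only as stand-ins for the paper's marginal convex machine, and the combination step is reduced to a simple min-over-margins rule, which is an illustrative simplification rather than the BMCM formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def fit_subclassifiers(X_pos, X_neg, n_groups=3, seed=0):
    """Cluster negatives into groups; train one positive-vs-group classifier each."""
    labels = KMeans(n_clusters=n_groups, random_state=seed,
                    n_init=10).fit_predict(X_neg)
    clfs = []
    for g in range(n_groups):
        X_g = X_neg[labels == g]
        X = np.vstack([X_pos, X_g])
        y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_g))])
        clfs.append(SVC(kernel="rbf").fit(X, y))      # stand-in for an MCM
    return clfs

def relevance_score(clfs, X_query):
    """Combine subclassifiers: a query scores high only if every subclassifier
    places it on the positive side (illustrative min-over-margins rule)."""
    margins = np.column_stack([c.decision_function(X_query) for c in clfs])
    return margins.min(axis=1)
```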

9.
Optimized Sub-Sampling of Point Sets for Surface Splatting
Using surface splats as a rendering primitive has gained increasing attention recently due to its potential for high-performance and high-quality rendering of complex geometric models. However, as with any other rendering primitive, the processing costs are still proportional to the number of primitives used to represent a given object. This is why complexity reduction for point-sampled geometry is as important as it is, e.g., for triangle meshes. In this paper we present a new sub-sampling technique for dense point clouds which is specifically adjusted to the particular geometric properties of circular or elliptical surface splats. A global optimization scheme computes an approximately minimal set of splats that covers the entire surface while staying below a globally prescribed maximum error tolerance ε. Since our algorithm converts pure point sample data into surface splats with normal vectors and spatial extent, it can also be considered a surface reconstruction technique which generates a hole-free, piecewise linear, C^{-1} continuous approximation of the input data. Here we can exploit the higher flexibility of surface splats compared to triangle meshes. Compared to previous work in this area, we obtain significantly lower splat numbers for a given error tolerance.

10.
In this paper we present a novel method to reconstruct watertight quad meshes on scanned 3D geometry. There exist many different approaches to acquire 3D information from real-world objects and scenes. The resulting point clouds represent scanned surfaces as sparse sets of positional information. A common downside is the lack of normals, connectivity or topological adjacency data, which makes it difficult to recover a meaningful surface. The concept described in this paper is designed to reconstruct a surface mesh despite all this missing information. Even when facing varying sample density, our algorithm is guaranteed to produce watertight manifold meshes featuring quad faces only. The topology can be set up to follow superimposed regular structures or to align naturally to the point cloud's shape. Our proposed approach is based on an initial divide-and-conquer subsampling procedure: surface samples are clustered into meaningful neighborhoods as leaves of a kd-tree. A representative sample of the surface neighborhood is determined for each leaf using a spherical surface approximation. The hierarchical structure of the binary tree is utilized to construct a basic set of loose tiles and to interconnect them. As a final step, missing parts of the now coherent tile structure are filled with an incremental algorithm for locally optimal gap closure. Disfigured or concave faces in the resulting mesh can be removed with a constrained smoothing operator.

11.
Constructing hierarchies for triangle meshes
We present a method to produce a hierarchy of triangle meshes that can be used to blend different levels of detail in a smooth fashion. The algorithm produces a sequence of meshes M0, M1, M2, ..., Mn, where each mesh Mi can be transformed to mesh Mi+1 through a set of triangle-collapse operations. For each triangle, a function is generated that approximates the underlying surface in the area of the triangle; this function serves as a basis for assigning a weight to the triangle in the ordering operation and for supplying the points to which the triangles are collapsed. The algorithm produces a limited number of intermediate meshes by selecting, at each step, a number of triangles that can be collapsed simultaneously. This technique allows us to view a triangulated surface model at varying levels of detail while ensuring that the simplified mesh approximates the original surface well.
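A minimal sketch of the selection step described above: each triangle gets a weight (here, how far its one-ring neighbourhood deviates from the triangle's plane, used as a stand-in for the paper's per-triangle approximating function), and a batch of mutually vertex-disjoint triangles with the smallest weights is picked for simultaneous collapse. The data layout and the flatness-based weight are illustrative assumptions.

```python
import numpy as np

def collapse_batch(vertices, faces):
    """Pick one batch of triangles that can be collapsed simultaneously.

    Weight = maximum deviation of the triangle's one-ring vertices from its
    plane (flat neighbourhoods collapse first). Triangles in the batch share
    no vertices, so their collapses are independent of one another.
    """
    V = np.asarray(vertices, float)
    F = np.asarray(faces, int)

    # Vertex -> incident triangle indices.
    incident = [[] for _ in range(len(V))]
    for t, (a, b, c) in enumerate(F):
        for v in (a, b, c):
            incident[v].append(t)

    weights = []
    for t, (a, b, c) in enumerate(F):
        p0, p1, p2 = V[a], V[b], V[c]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm == 0.0:                        # degenerate triangle: collapse first
            weights.append((0.0, t))
            continue
        n /= norm
        ring = {v for corner in (a, b, c) for s in incident[corner] for v in F[s]}
        dev = max(abs(np.dot(V[v] - p0, n)) for v in ring)
        weights.append((dev, t))

    used_vertices, batch = set(), []
    for _, t in sorted(weights):               # flattest triangles first
        if not used_vertices.intersection(F[t]):
            batch.append(t)
            used_vertices.update(F[t])
    return batch
```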

12.
13.
Simple algorithms for executing a Breadth First Search (BFS) on large graphs, when run on clusters of GPUs, suffer from load imbalance among threads and uncoalesced memory accesses, resulting in poor performance. To obtain a significant improvement on a single GPU and to scale across multiple GPUs, we resort to a suitable combination of operations to rearrange data before processing them. We propose a novel technique for mapping threads to data that achieves a perfect load balance by leveraging prefix-sum and binary search operations. To reduce the communication overhead, we perform a pruning operation on the set of edges that needs to be exchanged at each BFS level. The result is an algorithm that exploits the parallelism available on a single GPU at its best and minimizes communication among GPUs. We show that a cluster of GPUs can efficiently perform a distributed BFS on graphs with billions of nodes.
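A minimal NumPy sketch of the thread-to-data mapping trick described above: an exclusive prefix sum over the frontier vertices' degrees assigns each vertex a starting slot in a flat edge-workload array, and a binary search of each slot index against the prefix sums tells a "thread" which frontier vertex its edge belongs to, so the work splits evenly regardless of degree skew. The CSR array names are standard conventions, not taken from the paper, and the GPU kernel is emulated with vectorized NumPy; visited-vertex filtering is omitted.

```python
import numpy as np

def expand_frontier(row_ptr, col_idx, frontier):
    """Gather all neighbours of the frontier with balanced work splitting.

    row_ptr, col_idx : CSR adjacency arrays of the graph
    frontier         : 1-D array of frontier vertex ids
    """
    frontier = np.asarray(frontier)
    degrees = row_ptr[frontier + 1] - row_ptr[frontier]

    # Exclusive prefix sum: the starting output slot of each frontier vertex.
    starts = np.concatenate(([0], np.cumsum(degrees)))
    total = starts[-1]

    # Each of the `total` work items finds its owning frontier vertex by a
    # binary search (np.searchsorted) into the prefix sums: this is the
    # load-balanced thread-to-data mapping.
    slots = np.arange(total)
    owner = np.searchsorted(starts, slots, side="right") - 1

    # Offset of each work item inside its owner's adjacency list.
    local = slots - starts[owner]
    neighbours = col_idx[row_ptr[frontier[owner]] + local]
    return np.unique(neighbours)
```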

14.
In this paper we present a hybrid algorithm for building the bounding volume hierarchy (BVH) used to accelerate ray tracing of animated models. The algorithm precomputes densely packed clusters of triangles on surfaces. Following that, the set of clusters is used to rebuild the BVH in every frame. Our approach utilizes the assumption that groups of connected triangles remain connected throughout the course of the animation. We introduce a novel heuristic to create triangle clusters that are designed for high-performance ray tracing. This heuristic combines the density of connectivity, the geometric size and the shape of the cluster.
Our approach accelerates the BVH builder by an order of magnitude by rebuilding only the set of clusters, which is much smaller than the original set of triangles. The speed-up is achieved against a 'brute-force' BVH builder that repartitions all triangles in every frame of the animation without using any pre-clustering. The rendering performance is not affected when a cluster contains a few dozen triangles. We demonstrate real-time/interactive ray tracing performance for highly dynamic complex models.

15.
Applying K-means clustering in recommender systems can effectively reduce dimensionality, but the clustering quality often depends on the chosen initial centers, and once a target cluster is selected the recommendation process is restricted to that cluster and ignores the others. To address these two problems, this paper proposes a parallel recommendation algorithm based on bisecting K-means clustering organized as a full binary tree. The algorithm first applies bisecting K-means iteratively, using intra-cluster cohesion as the splitting threshold, which yields a full binary tree; users are then assigned to the K leaf nodes (clusters) by a level-order traversal; finally, parallel recommendation prediction over the K clusters is carried out with the MapReduce framework. Experimental results on MovieLens show that the algorithm substantially improves recommendation accuracy while enhancing system scalability.
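A minimal sketch of the bisecting step described above (the MapReduce prediction stage is omitted): leaves are repeatedly split in two with 2-means, and a leaf is split further only while it is not yet tight enough. The tightness measure used here (mean distance to the centroid) and scikit-learn's KMeans are illustrative stand-ins for the paper's intra-cluster cohesion threshold.

```python
import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, threshold, seed=0):
    """Recursively bisect clusters until every leaf is tight enough.

    A leaf is split with 2-means while its dispersion (mean distance of its
    points to their centroid, a stand-in for the paper's intra-cluster
    cohesion measure) exceeds `threshold`. Returns a list of index arrays,
    one per leaf cluster of the resulting binary tree.
    """
    def dispersion(idx):
        pts = X[idx]
        return np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean()

    leaves, queue = [], [np.arange(len(X))]
    while queue:
        idx = queue.pop()
        if len(idx) < 2 or dispersion(idx) <= threshold:
            leaves.append(idx)                 # tight enough: keep as a leaf
            continue
        labels = KMeans(n_clusters=2, random_state=seed,
                        n_init=10).fit_predict(X[idx])
        left, right = idx[labels == 0], idx[labels == 1]
        if len(left) == 0 or len(right) == 0:  # degenerate split: stop here
            leaves.append(idx)
        else:
            queue.append(left)                 # bisect and recurse on both halves
            queue.append(right)
    return leaves
```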

16.
In this paper, we present a modified filtering algorithm (MFA) that makes use of center variations to speed up the clustering process. Our method first divides clusters into static and active groups. We use the information on cluster displacements to reject unlikely cluster centers for all nodes in the kd-tree. We reduce the computational complexity of the filtering algorithm (FA) by finding candidates for each node mainly from the set of active cluster centers. Two conditions for determining the set of candidate cluster centers for each node from the active clusters are developed. Our approach differs from the major available algorithm, which passes no information from one stage of iteration to the next. Theoretical analysis shows that our method can reduce the computational complexity of the FA, in terms of the number of distance calculations, at each stage of iteration by a factor of FC/AC, where FC and AC are the numbers of total clusters and active clusters, respectively. Compared with the FA, our algorithm effectively reduces the computing time and the number of distance calculations. Note that our proposed algorithm generates the same clusters as those produced by hard k-means clustering. The superiority of our method is more remarkable when a larger data set with higher dimensionality is used.
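A hedged sketch of the general idea of passing center-displacement information between k-means iterations to skip distance computations. The triangle-inequality pruning below is a generic Elkan-style bound, shown only to illustrate the principle; it is not the paper's two node-level conditions and no kd-tree is built.

```python
import numpy as np

def kmeans_displacement_pruning(X, centers, n_iter=20):
    """Lloyd iterations that use per-center displacements to skip distance work.

    lower[i, j] is a lower bound on dist(X[i], C[j]); upper[i] is an upper
    bound on dist(X[i], C[assign[i]]). When center j moves by delta[j], the
    bounds stay valid after widening them by delta (triangle inequality), so
    points whose bounds still separate their assigned center from all others
    need no new distance computations.
    """
    n, k = len(X), len(centers)
    C = np.asarray(centers, float).copy()
    lower = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)   # one full pass
    assign = lower.argmin(axis=1)
    upper = lower[np.arange(n), assign]

    for _ in range(n_iter):
        new_C = np.vstack([X[assign == j].mean(axis=0) if np.any(assign == j)
                           else C[j] for j in range(k)])
        delta = np.linalg.norm(new_C - C, axis=1)        # per-center displacement
        C = new_C
        lower = np.maximum(lower - delta[None, :], 0.0)  # bounds remain valid
        upper = upper + delta[assign]

        # Re-examine a point only if some *other* center's lower bound has
        # dropped below the upper bound on its assigned center's distance.
        other = lower.copy()
        other[np.arange(n), assign] = np.inf
        recheck = np.where(other.min(axis=1) < upper)[0]

        if len(recheck):
            d = np.linalg.norm(X[recheck, None, :] - C[None, :, :], axis=2)
            lower[recheck] = d                           # bounds become exact again
            assign[recheck] = d.argmin(axis=1)
            upper[recheck] = d[np.arange(len(recheck)), assign[recheck]]
    return assign, C
```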

17.
Online Clustering with Variable Sized Clusters
Online clustering problems are problems where the classification of points into sets (called clusters) is performed in an online fashion. Points arrive at arbitrary locations, one by one, to be assigned to clusters at the time of arrival. A point can be either assigned to an existing cluster or a new cluster can be opened for it. Here, we study a one-dimensional variant on a line. Each cluster is a closed interval, and there is no restriction on the length of a cluster. The cost of a cluster is the sum of a fixed set-up cost and its diameter (or length). The goal is to minimize the sum of costs of the clusters used by the algorithm. We study several variants, each having the two essential properties that a point which has been assigned to a given cluster must remain assigned to that cluster and no pair of clusters can be merged. In the strict variant, the diameter and the exact location of the cluster must be fixed when it is initialized. In the flexible variant, the algorithm can shift the cluster or expand it, as long as it contains all points assigned to it. In an intermediate model, the diameter is fixed in advance but the exact location can be modified. Here we give tight bounds on the competitive ratio of any online algorithm in each of these variants. In addition, for each model we also consider the semi-online case where points are presented ordered by their location.
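An illustrative greedy baseline for the flexible variant described above (not the paper's competitive algorithm): each cluster is kept as an expandable interval, an arriving point goes to the cluster whose expansion cost is smallest, and a new cluster is opened whenever that cost would exceed the fixed set-up cost. The cost model (set-up cost plus diameter) follows the abstract; the greedy rule is an assumption for illustration.

```python
def online_line_clustering(points, setup_cost):
    """Greedy online clustering of 1-D points (flexible variant, illustrative only).

    Clusters are intervals [lo, hi]; the cost of a cluster is setup_cost + (hi - lo).
    Each arriving point either expands the cheapest-to-expand existing cluster
    or opens a new one when expansion would cost more than a new set-up.
    """
    clusters = []                          # list of [lo, hi] intervals
    assignment = []                        # cluster index chosen for each point
    for x in points:
        best, best_cost = None, setup_cost # opening a new cluster costs setup_cost
        for i, (lo, hi) in enumerate(clusters):
            expand = max(lo - x, 0.0) + max(x - hi, 0.0)   # added diameter
            if expand < best_cost:
                best, best_cost = i, expand
        if best is None:
            clusters.append([x, x])
            assignment.append(len(clusters) - 1)
        else:
            lo, hi = clusters[best]
            clusters[best] = [min(lo, x), max(hi, x)]
            assignment.append(best)
    return clusters, assignment
```

Note that the two essential properties from the abstract are respected: a point never leaves the cluster it was assigned to, and clusters are never merged.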

18.
We introduce a reliable intersection algorithm for manifold surface meshes. The proposed algorithm builds conforming surface meshes from a set of intersecting triangulated surfaces. This algorithm effectively handles all degenerate triangle–triangle intersection cases. The key idea of the algorithm is based on an extensive set of triangle–edge intersection cases, combined with an intersection curve tracking method. The intersection operations do not rely on global spatial search operations and no remeshing steps are needed. The intersection curves are introduced into each surface mesh using a unique curve imprinting algorithm. The imprinting algorithm naturally handles degenerate intersection cases of many surfaces at an edge or at a point. The algorithm produces a consistent mesh data structure for subsequent mesh optimization operations. The mesh intersection algorithm is used within a general framework for modelling and meshing of geological formations, which are essential for reliable mathematical modelling of oil reservoirs.

19.
In this paper, we present an adaptive coding method for generic triangular meshes, including both regular and irregular meshes. Although it is also based on iterative octree decomposition of the object space of the original mesh, as in some prior work, it has novelties in the following two aspects. First, it mathematically models the occupancy codes containing only a single "1" bit for accurate initialization of the arithmetic coder at each octree level. Second, it adaptively prioritizes the bits in an occupancy code using a local surface smoothness measure that is based on triangle areas and therefore mitigates the effect of non-uniform vertex sampling over the surface. As a result, the proposed 3D mesh coder yields outstanding coding performance for both regular and irregular meshes, and especially for the latter, as demonstrated by the experiments.
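A minimal sketch of the octree occupancy codes that such a coder operates on: at each level, every occupied cell subdivides into eight children and emits an 8-bit code whose bits flag the non-empty children; the single-"1" codes are the ones the paper models separately. The bit-prioritization by surface smoothness is not reproduced; the point-cloud normalization and bit ordering are assumptions for illustration.

```python
import numpy as np

def octree_occupancy_codes(points, levels):
    """Breadth-first octree decomposition of a point set into occupancy codes.

    Returns a list with one dict per level, mapping an occupied cell (its
    integer grid coordinates at that level) to its 8-bit child-occupancy code.
    Points are assumed to be normalized to the unit cube [0, 1)^3.
    """
    pts = np.asarray(points, dtype=float)
    codes_per_level = []
    for level in range(levels):
        child_res = 2 ** (level + 1)
        child_cells = np.floor(pts * child_res).astype(int)        # child-level cell
        parent_cells = child_cells // 2                            # parent cell
        octant = child_cells - 2 * parent_cells                    # 0/1 per axis
        bit = octant[:, 0] * 4 + octant[:, 1] * 2 + octant[:, 2]   # child index 0..7

        codes = {}
        for cell, b in zip(map(tuple, parent_cells), bit):
            codes[cell] = codes.get(cell, 0) | (1 << int(b))
        codes_per_level.append(codes)
    return codes_per_level

def is_single_one(code):
    """True for the single-'1' occupancy codes that the coder models separately."""
    return code != 0 and (code & (code - 1)) == 0
```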

20.
Robust Blind Watermarking of Point-Sampled Geometry
Digital watermarking for copyright protection of 3-D meshes cannot be directly applied to point clouds, since we need to derive consistent connectivity information, which might change due to attacks such as noise addition and cropping. Schemes for point clouds operate only on the geometric data and, hence, are generic and applicable to mesh-based representations of 3-D models. For building generic copyright schemes for 3-D models, this paper presents a robust blind watermarking mechanism for 3-D point-sampled geometry. The basic idea is to find a cluster tree from clusters of 3-D points. Using the cluster tree, watermarks can be embedded and extracted by deriving an order among points at global (intracluster) and local (intercluster) levels. The multiple-bit watermarks are encoded/decoded inside each cluster based on an extension of the cluster-structure-based 3-D quantization index modulation. The encoding mechanism makes the technique robust against uniform affine transformations (rotation, scaling, and translation), reordering, cropping, simplification, and noise-addition attacks. When applied to 3-D meshes, the technique also achieves robustness against retriangulation and progressive compression techniques. Customization of the bit-encoding scheme achieves high hiding capacity, with embedding rates of 4 bits per point, while maintaining the imperceptibility of the watermark with low distortion. The estimated time complexity is $O(n \log n)$, where $n$ is the number of 3-D points.
