Similar Documents
 20 similar documents found.
1.
The goal of a multilevel simplification method is to produce different levels of refinement of a mesh, reducing the resolution (total number of faces), while preserving the original topology and a good approximation to the original geometry. A new approach to simplification based on the evolution of surfaces under p-Laplacian flow is presented. Such an evolution provides a natural geometric clustering process where the spatial effect of the p-Laplacian allows for identifying suitable regions that need to be simplified. The concrete scheme is a multiresolution framework composed, at each simplification level, of a spatial clustering diffusion flow to determine the potential candidates for deletion, followed by an incremental decimation process to update the mesh vertex locations in order to decrease the overall resolution. Numerical results show the effectiveness of our strategy in multilevel simplification of different models with different complexities, in particular for models characterized by sharp features and flat parts.
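As a rough illustration of the diffusion step described above, the following numpy sketch performs one explicit graph p-Laplacian step on vertex positions and flags nearly stationary vertices as decimation candidates; the edge weighting, step size, and function names are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def p_laplacian_step(verts, edges, p=1.5, dt=0.1, eps=1e-8):
    """One explicit step of a graph p-Laplacian flow on vertex positions.

    verts : (n, 3) float array of vertex positions
    edges : (m, 2) int array of mesh edges
    Returns the updated positions and the per-vertex displacement magnitude,
    which can be thresholded to pick decimation candidates (small displacement
    suggests a locally flat or already smooth region).
    """
    n = len(verts)
    disp = np.zeros_like(verts)
    deg = np.zeros(n)
    for i, j in edges:
        d = verts[j] - verts[i]
        w = (np.linalg.norm(d) + eps) ** (p - 2)   # p-Laplacian edge weight
        disp[i] += w * d
        disp[j] -= w * d
        deg[i] += w
        deg[j] += w
    disp /= np.maximum(deg, eps)[:, None]
    return verts + dt * disp, np.linalg.norm(disp, axis=1)

# Tiny example: a square with a diagonal edge and one slightly lifted vertex.
verts = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0.2]])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])
new_verts, displacement = p_laplacian_step(verts, edges)
candidates = np.where(displacement < 0.05)[0]   # nearly stationary vertices
```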

2.
Controlled topology simplification (cited 3 times: 0 self-citations, 3 by others)
We present a simple, robust, and practical method for object simplification for applications where gradual elimination of high frequency details is desired. This is accomplished by converting an object into multiresolution volume rasters using a controlled filtering and sampling technique. A multiresolution triangle mesh hierarchy can then be generated by applying the Marching Cubes algorithm. We further propose an adaptive surface generation algorithm to reduce the number of triangles generated by the standard Marching Cubes. Our method simplifies the topology of objects in a controlled fashion. In addition, at each level of detail, multilayered meshes can be used for efficient antialiased rendering.
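A minimal sketch of the rasterize-filter-remesh pipeline using SciPy and scikit-image: an implicit sphere stands in for the input object, Gaussian filtering plus subsampling plays the role of the controlled filtering and sampling, and Marching Cubes extracts one mesh per level. The parameters are illustrative, and the adaptive surface generation step is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# Rasterize an implicit object (here a sphere) into a binary volume, then build
# coarser levels by low-pass filtering and subsampling before running Marching
# Cubes on each level.  Stronger filtering removes high-frequency detail (and
# can close small topological features) before the surface is extracted.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = (np.sqrt(x**2 + y**2 + z**2) < 0.8).astype(float)

levels_of_detail = []
for stride, sigma in ((1, 0.5), (2, 1.0), (4, 2.0)):      # coarser each level
    filtered = gaussian_filter(volume, sigma=sigma)        # controlled filtering
    coarse = filtered[::stride, ::stride, ::stride]        # controlled sampling
    verts, faces, _, _ = measure.marching_cubes(coarse, level=0.5)
    levels_of_detail.append((verts, faces))
    print(f"stride={stride}, sigma={sigma}: {len(faces)} triangles")
```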

3.
4.
Superfaces: polygonal mesh simplification with bounded error (cited 19 times: 0 self-citations, 19 by others)
The algorithm presented simplifies polyhedral meshes within prespecified tolerances based on a bounded approximation criterion. The vertices in the simplified mesh are a proper subset of the original vertices. The algorithm, called Superfaces, makes two major contributions to the research in this area: it uses a bounded approximation approach, which guarantees that a simplified mesh approximates the original mesh to within a prespecified tolerance (that is, every vertex v in the original mesh will lie within a user-specified distance ϵ of the simplified mesh); and its face merging procedure is efficient and greedy, that is, it does not backtrack or undo any merging once completed, and thus the algorithm is practical for simplifying very large meshes.
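The sketch below illustrates the bounded-error face-merging idea in simplified form: triangles are greedily grown into near-planar clusters, and a face joins a cluster only if all of its vertices lie within ε of the cluster's seed plane. This is an assumption-laden stand-in (seed-plane test, no superface re-triangulation), not the Superfaces algorithm itself.

```python
import numpy as np

def greedy_superfaces(verts, faces, eps):
    """Greedy, non-backtracking merging of adjacent triangles into near-planar
    clusters.  A face joins a cluster only if all of its vertices lie within
    eps of the cluster's seed plane, which keeps every original vertex within
    eps of the simplified patch (a crude version of a bounded-error test)."""
    # Build face adjacency via shared edges.
    edge_to_faces = {}
    for fi, f in enumerate(faces):
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_to_faces.setdefault(frozenset((a, b)), []).append(fi)

    def plane(f):
        p0, p1, p2 = verts[f]
        n = np.cross(p1 - p0, p2 - p0)
        return n / np.linalg.norm(n), p0

    labels = [-1] * len(faces)
    for seed in range(len(faces)):
        if labels[seed] != -1:
            continue
        n, p0 = plane(faces[seed])
        labels[seed] = seed
        stack = [seed]
        while stack:
            fi = stack.pop()
            f = faces[fi]
            for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
                for nb in edge_to_faces[frozenset((a, b))]:
                    if labels[nb] != -1:
                        continue
                    dists = np.abs((verts[faces[nb]] - p0) @ n)
                    if dists.max() <= eps:           # bounded-error test
                        labels[nb] = seed
                        stack.append(nb)
    return labels                                     # cluster id per face

# Two coplanar triangles merge; a tilted one stays separate.
verts = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [2, 0, 1]])
faces = np.array([[0, 1, 2], [0, 2, 3], [1, 4, 2]])
print(greedy_superfaces(verts, faces, eps=1e-3))   # [0, 0, 2]
```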

5.
A fast algorithm for performing simplification and matching is described. The algorithm gives an improvement of up to an order of magnitude on suitable problems. It makes use of a dag data structure and tag fields to avoid redundant matches. Performance studies were done to determine the relative importance of various features in improving the running time.

6.
We study the problem of simplifying a given directed graph by keeping a small subset of its arcs. Our goal is to maintain the connectivity required to explain a set of observed traces of information propagation across the graph. Unlike previous work, we do not make any assumption about an underlying model of information propagation. Instead, we approach the task as a combinatorial problem. We prove that the resulting optimization problem is NP-hard. We show that a standard greedy algorithm performs very well in practice, even though it does not have theoretical guarantees. Additionally, if the activity traces have a tree structure, we show that the objective function is supermodular, and experimentally verify that the approach for size-constrained submodular minimization recently proposed by Nagano et al. (28th International Conference on Machine Learning, 2011) produces very good results. Moreover, when applied to the task of reconstructing an unobserved graph, our methods perform comparably to a state-of-the-art algorithm devised specifically for this task.
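A hedged sketch of a greedy baseline in this spirit: traces are time-ordered node lists, an arc "explains" an activation if it runs from an earlier node of the same trace, and arcs are picked set-cover style until every non-initial activation is explained. The data layout and scoring below are assumptions for illustration, not the paper's exact formulation.

```python
from collections import defaultdict

def greedy_arc_selection(arcs, traces):
    """Keep a small arc subset so that, in every trace, each non-initial node
    has at least one kept arc from a node activated earlier in that trace.
    'arcs' is a set of (u, v) pairs; 'traces' are time-ordered node lists."""
    # Which (trace index, node) activations can each arc explain?
    covers = defaultdict(set)
    for t, trace in enumerate(traces):
        seen = set()
        for node in trace:
            for prev in seen:
                if (prev, node) in arcs:
                    covers[(prev, node)].add((t, node))
            seen.add(node)

    uncovered = set().union(*covers.values()) if covers else set()
    kept = set()
    while uncovered:
        # Pick the arc explaining the most still-uncovered activations.
        best = max(covers, key=lambda a: len(covers[a] & uncovered))
        gain = covers[best] & uncovered
        if not gain:
            break
        kept.add(best)
        uncovered -= gain
    return kept

arcs = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("b", "d")}
traces = [["a", "b", "c", "d"], ["a", "c", "d"]]
print(greedy_arc_selection(arcs, traces))   # a small explaining subset of arcs
```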

7.
In this paper we present an innovative approach to incremental quad mesh simplification, i.e. the task of producing a low complexity quad mesh starting from a high complexity one. The process is based on a novel set of strictly local operations which preserve quad structure. We show how good tessellation quality (e.g. in terms of vertex valencies) can be achieved by pursuing uniform length and canonical proportions of edges and diagonals. The decimation process is interleaved with smoothing in tangent space. The latter strongly contributes to identify a suitable sequence of local modification operations. The method is naturally extended to manage preservation of feature lines (e.g. creases) and varying (e.g. adaptive) tessellation densities. We also present an original Triangle‐to‐Quad conversion algorithm that behaves well in terms of geometrical complexity and tessellation quality, which we use to obtain the initial quad mesh from a given triangle mesh.
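The tangent-space smoothing ingredient can be sketched as follows: each vertex moves toward the centroid of its neighbours, with the displacement projected onto the plane orthogonal to the vertex normal, so lengths even out while the surface drifts as little as possible. This is a generic version of the idea, not the paper's exact operator.

```python
import numpy as np

def tangent_space_smooth(verts, normals, neighbors, lam=0.5):
    """One pass of tangent-space smoothing: the Laplacian displacement toward
    the neighbour centroid is projected onto the tangent plane of the vertex
    normal before being applied (a generic sketch of the idea)."""
    out = verts.copy()
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        d = verts[nbrs].mean(axis=0) - verts[i]        # Laplacian displacement
        n = normals[i] / np.linalg.norm(normals[i])
        d -= np.dot(d, n) * n                          # drop the normal component
        out[i] = verts[i] + lam * d
    return out

verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]])
normals = np.tile([0., 0, 1], (5, 1))
neighbors = [[1, 2, 3, 4], [0, 2, 4], [0, 1, 3], [0, 2, 4], [0, 1, 3]]
print(tangent_space_smooth(verts, normals, neighbors))
```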

8.
Locally toleranced surface simplification (cited 5 times: 0 self-citations, 5 by others)
We present a technique for simplifying a triangulated surface. Simplifying consists of approximating the surface with another surface of lower triangle count. Our algorithm can preserve the volume of a solid to within machine accuracy; it favors the creation of near-equilateral triangles. We develop novel methods for reporting and representing a bound to the approximation error between a simplified surface and the original, and respecting a variable tolerance across the surface. A different positive error value is reported at each vertex. By linearly blending the error values in between vertices, we define a volume of space, called the error volume, as the union of balls of linearly varying radii. The error volume is built dynamically as the simplification progresses, on top of preexisting error volumes that it contains. We also build a tolerance volume to forbid simplification errors exceeding a local tolerance. The information necessary to compute error values is local to the star of a vertex; accordingly, the complexity of the algorithm is either linear or in O(n log n) in the original number of surface edges, depending on the variant. We extend the mechanisms of error and tolerance volumes to preserve, during simplification, scalar and vector attributes associated with surface vertices. Assuming a linear variation across triangles, error and tolerance volumes are defined in the same fashion as for positional error. For normals, a corrective term is applied to the error measured at the vertices to compensate for nonlinearities.
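A small sketch of the error-ball containment idea: when vertices are merged, the new vertex gets the smallest radius whose ball contains the error balls of the vertices it replaces, and the operation is rejected if that radius exceeds the local tolerance. The function names and the midpoint placement are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def collapse_error_radius(p_new, old_points, old_radii):
    """Smallest radius of a ball at p_new that contains every old error ball,
    so accumulated error never shrinks as simplification proceeds."""
    old_points = np.asarray(old_points, float)
    old_radii = np.asarray(old_radii, float)
    return float(np.max(np.linalg.norm(old_points - p_new, axis=1) + old_radii))

def collapse_allowed(p_new, old_points, old_radii, local_tolerance):
    """Forbid a collapse whose new error ball would exceed the local tolerance."""
    return collapse_error_radius(p_new, old_points, old_radii) <= local_tolerance

# Collapsing an edge (u, v) to its midpoint; each endpoint already carries
# some accumulated error from earlier operations.
u, v = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
mid = 0.5 * (u + v)
print(collapse_error_radius(mid, [u, v], [0.02, 0.05]))    # 0.55
print(collapse_allowed(mid, [u, v], [0.02, 0.05], 0.6))    # True
```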

9.
Evaluation of memoryless simplification (cited 4 times: 0 self-citations, 4 by others)
We investigate the effectiveness of the memoryless simplification approach described by Lindstrom and Turk (1998). Like many polygon simplification methods, this approach reduces the number of triangles in a model by performing a sequence of edge collapses. It differs from most recent methods, however, in that it does not retain a history of the geometry of the original model during simplification. We present numerical comparisons showing that the memoryless method results in smaller mean distance measures than many published techniques that retain geometric history. We compare a number of different vertex placement schemes for an edge collapse in order to identify the aspects of the memoryless simplification that are responsible for its high level of fidelity. We also evaluate simplification of models with boundaries, and we show how the memoryless method may be tuned to trade between manifold and boundary fidelity. We found that the memoryless approach yields consistently low mean errors when measured by the Metro mesh comparison tool. In addition to using complex models for the evaluations, we also perform comparisons using a sphere and portions of a sphere. These simple surfaces turn out to match the simplification behaviors for the more complex models that we used.
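A rough sketch of the kind of vertex placement comparison discussed above: candidate positions for an edge collapse are scored by how much they change a divergence-theorem volume proxy over the incident triangles. Note that Lindstrom and Turk solve for an optimal volume-preserving point rather than testing a fixed candidate set, so this is only an illustrative stand-in.

```python
import numpy as np

def signed_volume(verts, faces):
    """Divergence-theorem volume proxy: sum of signed tetrahedra spanned by
    each triangle and the origin."""
    return sum(np.dot(verts[a], np.cross(verts[b], verts[c])) / 6.0
               for a, b, c in faces)

def best_collapse_position(verts, faces, u, v):
    """Score candidate positions for collapsing edge (u, v) by how little they
    change the volume proxy, and return the best candidate and its error."""
    base = signed_volume(verts, faces)
    candidates = [verts[u], verts[v], 0.5 * (verts[u] + verts[v])]
    best, best_err = None, np.inf
    for p in candidates:
        trial = verts.copy()
        trial[u] = trial[v] = p          # both endpoints move to the new point;
        err = abs(signed_volume(trial, faces) - base)  # degenerate faces add 0
        if err < best_err:
            best, best_err = p.copy(), err
    return best, best_err

# A small open patch around edge (0, 1).
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.3], [0.5, 1.0, 0.0],
                  [0.5, -1.0, 0.0], [2.0, 0.0, 0.0]])
faces = [(0, 1, 2), (0, 3, 1), (1, 2, 4), (1, 4, 3)]
print(best_collapse_position(verts, faces, 0, 1))
```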

10.
We present a simplification algorithm for manifold polygonal meshes of plane-dominant models. Models of this type are likely to appear in man-made environments. While traditional simplification algorithms focus on generality and smooth meshes, the approach presented here considers a specific class of man-made models. By detecting and classifying edge loops on the mesh and providing a guided series of binary mesh partitions, our approach generates a series of simplified models, each of which better respects the semantics of these kinds of models than conventional approaches do. A guiding principle is to eliminate simplifications that do not make sense in constructed environments. This, coupled with the concept of “punctuated simplification”, leads to an approach that is both efficient and delivers high visual quality. Comparative results are given.

11.
12.
Curvature-aware simplification for point-sampled geometry (cited 1 time: 0 self-citations, 1 by others)
We propose a novel curvature-aware simplification technique for point-sampled geometry based on the locally optimal projection (LOP) operator. Our algorithm includes two new developments. First, a weight term related to surface variation at each point is introduced to the classic LOP operator. It produces output points with a spatially adaptive distribution. Second, for speeding up the convergence of our method, an initialization process is proposed based on geometry-aware stochastic sampling. Owing to the initialization, the relaxation process achieves a faster convergence rate than those initialized by uniform sampling. Our simplification method possesses a number of distinguishing features. In particular, it provides resilience to noise and outliers, and an intuitively controllable distribution of simplification. Finally, we show the results of our approach with publicly available point cloud data, and compare the results with those obtained using previous methods. Our method outperforms these methods on raw scanned data.
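One way to obtain a curvature-dependent weight of the sort described above is the classical surface-variation measure from local PCA, sketched below with numpy; whether and how this matches the paper's weight term is an assumption, and the neighbourhood size k is arbitrary.

```python
import numpy as np

def surface_variation(points, k=8):
    """Per-point surface variation lambda_0 / (lambda_0 + lambda_1 + lambda_2),
    with lambda_0 the smallest eigenvalue of the covariance of the k nearest
    neighbours.  Values near 0 indicate flat regions; larger values indicate
    curved or feature regions, so the quantity can drive an adaptive weight."""
    points = np.asarray(points, float)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    variation = np.empty(len(points))
    for i in range(len(points)):
        nbrs = points[np.argsort(d2[i])[:k]]
        cov = np.cov(nbrs.T)
        eigvals = np.sort(np.linalg.eigvalsh(cov))
        variation[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return variation

rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(0, 1, 200), rng.uniform(0, 1, 200), np.zeros(200)]
bumpy = flat.copy()
bumpy[:, 2] = 0.2 * np.sin(6 * bumpy[:, 0])          # add curvature
print(surface_variation(flat).mean(), surface_variation(bumpy).mean())
```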

13.
14.
We present an algorithm for simplifying terrain data that preserves topology. We use a decimation algorithm that simplifies the given data set using hierarchical clustering. Topology constraints, along with local error metrics, are used to ensure topology-preserving simplification and to compute precise error bounds in the simplified data. The earth mover's distance is used as a global metric to compute the degradation in topology as the simplification proceeds. Experiments with both analytic and real terrain data are presented. Results indicate that one can obtain significant simplification with low errors without losing topology information.
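The topology-monitoring idea can be illustrated crudely by counting critical points of a height field before and after decimation, as in the sketch below; the paper's actual constraints and its use of the earth mover's distance are not reproduced, and the strided subsampling merely stands in for hierarchical clustering.

```python
import numpy as np

def critical_point_count(height):
    """Count strict local maxima and minima of a height field over the 3x3
    neighbourhood (boundary rows/columns are ignored)."""
    h = np.asarray(height, float)
    core = h[1:-1, 1:-1]
    shifts = [h[1 + di:h.shape[0] - 1 + di, 1 + dj:h.shape[1] - 1 + dj]
              for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    neighbours = np.stack(shifts)
    maxima = (core > neighbours.max(axis=0)).sum()
    minima = (core < neighbours.min(axis=0)).sum()
    return int(maxima), int(minima)

# Simplify by keeping every 4th sample and compare critical points.
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 129), np.linspace(0, 4 * np.pi, 129))
terrain = np.sin(x) * np.cos(y)
coarse = terrain[::4, ::4]
print(critical_point_count(terrain), critical_point_count(coarse))
```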

15.
16.
Streaming simplification of tetrahedral meshes (cited 1 time: 0 self-citations, 1 by others)
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
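A minimal sketch of the streaming pattern only (not the paper's quadric-based step): the vertex stream is consumed one in-core chunk at a time, each chunk is coarsened with simple grid clustering as a stand-in, and results are emitted immediately so memory use stays bounded.

```python
import numpy as np

def stream_simplify(vertex_chunks, grid=0.1):
    """Out-of-core-style pass over mesh vertices delivered as a stream of
    chunks: each chunk is handled fully in-core, its vertices snapped to a
    coarse grid (a simple stand-in for a quadric-based step), and the result is
    yielded immediately so only one chunk is ever resident in memory."""
    for chunk in vertex_chunks:                       # each chunk fits in memory
        snapped = np.round(np.asarray(chunk) / grid) * grid
        # Deduplicate clustered vertices within the chunk and keep a remap table.
        unique, inverse = np.unique(snapped, axis=0, return_inverse=True)
        yield unique, inverse

# Simulate a streaming read: 3 chunks of 1000 random vertices each.
rng = np.random.default_rng(1)
chunks = (rng.random((1000, 3)) for _ in range(3))
for verts, remap in stream_simplify(chunks):
    print(len(remap), "->", len(verts), "vertices")
```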

17.
Forcing a simpler topology than the theoretical optimum by additional constraints may have several advantages, such as ease of manufacturing, mesh independence and checkerboard control. It is shown, however, that topology simplification may result in considerable weight increases. In examining various numerical anomalies such as checkerboard patterns and diagonal element chains, it is shown analytically that their correct stiffness tends to zero.

18.
Two flow network simplification algorithms (cited 1 time: 0 self-citations, 1 by others)
Flow network simplification can reduce the size of the flow network and hence the amount of computation performed by flow algorithms. We present the first linear time algorithm for the undirected network case. We also give an O(|E|·(|V|+|E|)) time algorithm for the directed case, an improvement over the previous best O(|V|+2|E|log|V|) time solution. Both of our algorithms are quite simple.
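For illustration, the sketch below applies one common notion of flow-network simplification: pruning every node and arc that cannot lie on any directed source-to-sink path, found with two breadth-first searches. This is not necessarily the exact criterion used by the cited algorithms.

```python
from collections import defaultdict, deque

def prune_flow_network(arcs, source, sink):
    """Keep only arcs whose endpoints are both reachable from the source and
    able to reach the sink; everything else can carry no source-to-sink flow."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        fwd[u].append(v)
        rev[v].append(u)

    def reachable(start, adj):
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return seen

    useful = reachable(source, fwd) & reachable(sink, rev)
    return [(u, v) for u, v in arcs if u in useful and v in useful]

arcs = [("s", "a"), ("a", "t"), ("s", "b"), ("c", "t"), ("b", "b2")]
print(prune_flow_network(arcs, "s", "t"))   # only ('s','a') and ('a','t') remain
```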

19.
In this paper, a method for computing the closure of a set of attributes according to a specification of functional dependencies of the relational model is described. The main feature of this method is that it computes the closure using solely the inference system of the SL_FD logic. For the first time, logic is used in the design of automated deduction methods to solve the closure problem. The strong link between the SL_FD logic and the closure algorithm is presented, and an SL_FD simplification paradigm emerges as the key element of our method. In addition, the soundness and completeness of the closure algorithm are shown. Our method has linear complexity, as the classical closure algorithms do, and it has all the advantages provided by the use of logic. We have empirically compared our algorithm with the classical algorithm of Diederich and Milton. This experiment reveals the better behaviour of our method, which shows a significant improvement in average speed.
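To make the closure problem concrete, here is the textbook fixpoint algorithm for attribute closure; this is a simple quadratic version, not the linear-time variant and not the SL_FD inference-based method of the paper.

```python
def closure(attributes, fds):
    """Closure of an attribute set under functional dependencies, by fixpoint
    iteration.  'fds' is a list of (lhs, rhs) pairs of attribute sets."""
    result = set(attributes)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)       # fire the dependency lhs -> rhs
                changed = True
    return result

# A+ under {A -> B, B -> C, CD -> E} is {A, B, C}; adding D also yields E.
fds = [({"A"}, {"B"}), ({"B"}, {"C"}), ({"C", "D"}, {"E"})]
print(sorted(closure({"A"}, fds)))          # ['A', 'B', 'C']
print(sorted(closure({"A", "D"}, fds)))     # ['A', 'B', 'C', 'D', 'E']
```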

20.
This paper describes a genetic programming (GP) approach to medical data classification problems. In this approach, the evolved genetic programs are simplified online during the evolutionary process using algebraic simplification rules, algebraic equivalence and prime techniques. The new simplification GP approach is examined and compared to the standard GP approach on two medical data classification problems. The results suggest that the new simplification GP approach can not only be more efficient with slightly better classification performance than the basic GP system on these problems, but also significantly reduce the sizes of evolved programs. Comparison with other methods including decision trees, naive Bayes, nearest neighbour, nearest centroid, and neural networks suggests that the new GP approach achieved superior results to almost all of these methods on these problems. The evolved genetic programs are also easier to interpret than the “hidden patterns” discovered by the other methods.
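As an illustration of online algebraic simplification of evolved programs, the sketch below applies a few generic rewrite rules (identity and zero elements, constant folding, a simple algebraic equivalence) bottom-up to expression trees; the representation and rule set are assumptions for illustration, not the paper's.

```python
def simplify(expr):
    """Bottom-up algebraic simplification of an expression tree.  Expressions
    are nested tuples like ('+', x, y), with strings as variables."""
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr[0], simplify(expr[1]), simplify(expr[2])
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return {"+": a + b, "-": a - b, "*": a * b}[op]    # constant folding
    if op == "+" and a == 0: return b
    if op == "+" and b == 0: return a
    if op == "*" and (a == 0 or b == 0): return 0
    if op == "*" and a == 1: return b
    if op == "*" and b == 1: return a
    if op == "-" and b == 0: return a
    if op == "-" and a == b: return 0                       # algebraic equivalence
    return (op, a, b)

# (x * 1) + (0 * y) + (2 + 3) simplifies to ('+', 'x', 5).
expr = ("+", ("+", ("*", "x", 1), ("*", 0, "y")), ("+", 2, 3))
print(simplify(expr))
```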
