Similar Documents
20 similar documents retrieved (search time: 11 ms)
1.
This paper presents a new, volumetric subdivision scheme for interpolation of arbitrary hexahedral meshes. To date, nearly every existing volumetric subdivision scheme is approximating, i.e., with each application of the subdivision algorithm, the geometry shrinks away from its control mesh. Often, an approximating algorithm is undesirable and inappropriate, producing unsatisfactory results for certain applications in solid modeling and engineering design (e.g., finite element meshing). We address this lack of smooth, interpolatory subdivision algorithms by devising a new scheme founded upon the concept of tri-cubic Lagrange interpolating polynomials. We show that our algorithm is a natural generalization of the butterfly subdivision surface scheme to a tri-variate, volumetric setting.
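The univariate analogue of Lagrange-based interpolatory subdivision is easy to make concrete. The sketch below (not the paper's tri-cubic scheme, only its 1D building block) derives the classic four-point midpoint weights (-1/16, 9/16, 9/16, -1/16) by evaluating the cubic Lagrange interpolant at the parameter midpoint, then applies them on a closed polygon; the volumetric scheme would use the tensor product of such stencils.

```python
# 1D building block of Lagrange-based interpolatory subdivision: each new
# midpoint is the value at t = 1/2 of the cubic interpolating four
# consecutive control points.

def lagrange_midpoint_weights():
    # Evaluate the cubic Lagrange basis for nodes -1, 0, 1, 2 at t = 0.5.
    nodes = [-1.0, 0.0, 1.0, 2.0]
    t = 0.5
    weights = []
    for i, xi in enumerate(nodes):
        w = 1.0
        for j, xj in enumerate(nodes):
            if i != j:
                w *= (t - xj) / (xi - xj)
        weights.append(w)
    return weights  # [-1/16, 9/16, 9/16, -1/16]

def subdivide(points):
    """One round of the interpolatory 4-point scheme on a closed polygon."""
    w = lagrange_midpoint_weights()
    n = len(points)
    out = []
    for k in range(n):
        out.append(points[k])  # old vertices are kept: the scheme interpolates
        stencil = [points[(k - 1) % n], points[k],
                   points[(k + 1) % n], points[(k + 2) % n]]
        out.append(sum(wi * p for wi, p in zip(w, stencil)))
    return out
```

Because the old vertices survive every round unchanged, the limit shape never shrinks away from the control mesh, which is exactly the property the abstract contrasts with approximating schemes.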

2.
We address in this paper the problem of the data structures used for representing and manipulating multiresolution subdivision surfaces. The classically used data structures are based on quadtrees, straightforwardly derived from the nested hierarchy of faces generated by the subdivision schemes. Nevertheless, these structures have some drawbacks: they are specific to one kind of mesh (triangle or quad); the time complexity of neighborhood queries is not optimal; and topological cracks are created in the mesh in the adaptive subdivision case. We present in this paper a new topological model for encoding multiresolution subdivision surfaces. This model is an extension of the well-known half-edge data structure. It allows instant and efficient navigation at any resolution level of the mesh. Its generality allows the support of many subdivision schemes, including primal and dual schemes. Moreover, subdividing the mesh adaptively does not create topological cracks in the mesh. The extension proposed here is formalized in the combinatorial maps framework, which allows us to give a very general formulation of our extension.
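A minimal version of the base structure the paper extends can be sketched as follows (the multiresolution and combinatorial-map layers are the paper's contribution and are not reproduced here): `next` walks around a face, `twin` crosses to the opposite half-edge, and both queries are constant time per step regardless of face degree.

```python
# Minimal half-edge structure: 'next' walks around a face, 'twin' crosses
# an edge to the adjacent face (None on the boundary).

class HalfEdge:
    __slots__ = ("origin", "next", "twin")
    def __init__(self, origin):
        self.origin = origin
        self.next = None
        self.twin = None

def build_half_edges(faces):
    """faces: list of vertex-index loops; any polygon degree is allowed."""
    edges = {}
    for face in faces:
        n = len(face)
        hes = [HalfEdge(v) for v in face]
        for i in range(n):
            hes[i].next = hes[(i + 1) % n]
            edges[(face[i], face[(i + 1) % n])] = hes[i]
    for (a, b), he in edges.items():  # link opposite half-edges
        he.twin = edges.get((b, a))
    return edges

def face_vertices(he):
    """Walk one face loop; each step is O(1)."""
    verts, cur = [], he
    while True:
        verts.append(cur.origin)
        cur = cur.next
        if cur is he:
            return verts
```

Note how the construction is agnostic to triangle versus quad faces, which is the genericity drawback of quadtree-based hierarchies that the abstract points out.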

3.
A deformation technique is a method for deforming part or all of an object into a desired shape. Existing deformation methods incur high computational cost to represent smoothness correctly, owing to the constraints imposed by high-degree differential coefficients, so a general solution is very difficult to find. In this paper we propose an LSM (layered subdivision method) that integrates a controlling mechanism, surface deformation, and mesh refinement for 3D modeling and free-form deformable object matching. The proposed method is considerably more efficient and robust than existing free-form surface methods, because the reference points of the deformation edge are computed from the geometry of the free-form surface. This approach can be applied to automatic inspection of NURBS models and to object recognition.
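For context on the free-form machinery the LSM is compared against, here is a hedged one-dimensional sketch of Bernstein-basis free-form deformation (a generic textbook construction, not the paper's method): a lattice of control values warps the parameter space, and an undistorted lattice reproduces the identity map.

```python
# Illustrative 1D free-form deformation via the Bernstein basis.
from math import comb

def bernstein(n, i, t):
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd_1d(control, t):
    """Map parameter t in [0, 1] through a 1D lattice of control values."""
    n = len(control) - 1
    return sum(bernstein(n, i, t) * control[i] for i in range(n + 1))
```

By the linear precision of the Bernstein basis, uniformly spaced control values `[0, 1/3, 2/3, 1]` leave every `t` unchanged; displacing a control value bends the map smoothly around it.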

4.
A cascadic geometric filtering approach to subdivision
A new approach to subdivision based on the evolution of surfaces under curvature motion is presented. Such an evolution can be understood as a natural geometric filter process where time corresponds to the filter width. Thus, subdivision can be interpreted as the application of a geometric filter on an initial surface. The concrete scheme is a model of such a filtering based on a successively improved spatial approximation starting with some initial coarse mesh and leading to a smooth limit surface.

In every subdivision step the underlying grid is refined by some regular refinement rule, and a linear finite element problem is either solved exactly or, especially on fine grid levels, one restricts oneself to a small number of smoothing steps within the corresponding iterative linear solver. The approach closely connects subdivision to surface fairing with respect to the geometric smoothing, and to cascadic multigrid methods with respect to the actual numerical procedure. The derived method distinguishes neither between different valences of nodes nor between different mesh refinement types. Furthermore, the method brings with it a new approach to the theoretical treatment of subdivision.
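The refine-then-smooth loop can be caricatured in 1D (a deliberately simplified stand-in for the paper's finite element solve, not its actual scheme): midpoint insertion plays the role of the regular refinement rule, and a few umbrella (Laplacian) smoothing sweeps stand in for the inexact iterative solve on fine levels.

```python
# Cascadic caricature on a closed polygon of scalar values:
# refine by midpoint insertion, then smooth a few steps.

def refine(points):
    out = []
    n = len(points)
    for i in range(n):
        out.append(points[i])
        out.append(0.5 * (points[i] + points[(i + 1) % n]))
    return out

def smooth(points, steps=3, lam=0.5):
    """Damped umbrella smoothing: each value moves toward its neighbor mean."""
    for _ in range(steps):
        n = len(points)
        points = [(1 - lam) * points[i]
                  + 0.5 * lam * (points[(i - 1) % n] + points[(i + 1) % n])
                  for i in range(n)]
    return points
```

Iterating `smooth(refine(...))` from a coarse polygon yields successively finer, fairer approximations, mirroring the cascadic idea of doing only cheap smoothing on fine levels.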


5.
A feature-based approach for individualized human head modeling
Published online: 18 September 2001

6.
This article presents a new and direct approach for fitting a subdivision surface from an irregular and dense triangle mesh of arbitrary topological type. All feature edges and feature vertices of the original mesh model are first identified. A topology- and feature-preserving mesh simplification algorithm is developed to further simplify the dense triangle mesh into a coarse mesh. A subdivision surface with exactly the same topological structure and sharp features as that of the simplified mesh is finally fitted from a subset of vertices of the original dense mesh. During the fitting process, both the position masks and subdivision rules are used for setting up the fitting equation. Some examples are provided to demonstrate the proposed approach.
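What "using position masks to set up the fitting equation" amounts to can be shown on a curve analogue (an illustration, not the paper's surface formulation): with the cubic B-spline limit-position mask (1, 4, 1)/6, asking the limit positions of a closed control polygon to hit given data points is a cyclic linear system in the control points.

```python
# Fit control points of a closed cubic B-spline curve so that its limit
# positions (mask (1, 4, 1)/6) interpolate given targets.

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting; small systems only."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_controls(targets):
    n = len(targets)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][(i - 1) % n] += 1 / 6
        A[i][i] += 4 / 6
        A[i][(i + 1) % n] += 1 / 6
    return solve(A, targets)
```

The surface case replaces this cyclic tridiagonal system with one row per constrained vertex, built from the scheme's position masks, but the structure of the fitting equation is the same.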

7.
In urban scenes, many of the surfaces are planar and bounded by simple shapes. In a laser scan of such a scene, these simple shapes can still be identified. We present a one-parameter algorithm that can identify point sets on a plane for which a rectangle is a fitting boundary. These rectangles have a guaranteed density: no large part of the rectangle is empty of points. We prove that our algorithm identifies all angles for which a rectangle fits the point set of size n in O(n log n) time. We evaluate our method experimentally on 13 urban data sets and compare the rectangles found by our algorithm to the α-shape as a surface boundary.
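The geometric primitive underneath can be sketched directly: for a candidate angle, rotate the points and take the axis-aligned bounding rectangle. The brute-force angle sampling below is only a stand-in for the paper's exact O(n log n) sweep over all fitting angles, and it ignores the density guarantee entirely.

```python
# For each candidate angle, the bounding rectangle of the rotated points;
# scanning angles finds the best-fitting orientation.
from math import cos, sin, pi

def bbox_area_at_angle(points, theta):
    xs = [x * cos(theta) + y * sin(theta) for x, y in points]
    ys = [-x * sin(theta) + y * cos(theta) for x, y in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def best_angle(points, samples=360):
    # Brute-force sampling over [0, pi/2); rectangles repeat with period pi/2.
    angles = [i * (pi / 2) / samples for i in range(samples)]
    return min(angles, key=lambda a: bbox_area_at_angle(points, a))
```

On a rotated square of scan points, the recovered angle matches the rotation up to the sampling resolution; the paper's contribution is doing this exactly, for all valid angles, with the density certificate.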

8.
9.
This paper presents an algorithm for the data reduction and approximation of 3D polygonal curves. Our method efficiently approximates a set of straight 3D segments or points with a piecewise smooth subdivision curve that is near-optimal in terms of the number of control points. Our algorithm generalizes, to subdivision rules including sharp vertex processing, the Active B-Spline Curve developed by Pottmann et al. We have also developed a theoretically grounded approach which, by analysing the curvature properties of B-splines, computes a near-optimal initial number and placement of control points. Moreover, our original Active Footpoint Parameterization method prevents the mismatching problems that occur particularly with self-intersecting curves, so the stability of the algorithm is greatly increased. Our method was tested on different sets of curves and gives satisfying results with respect to approximation error, convergence speed, and compression rate. This method is part of a larger 3D CAD object compression scheme based on piecewise subdivision surface approximation: the objective is to fit a subdivision surface to a target patch by first fitting its boundary with a subdivision curve whose control polygon represents the boundary of the surface control polyhedron.
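The evaluation primitive behind any active B-spline fitting is the uniform cubic B-spline segment; a minimal version of that basis (standard textbook formulas, not the paper's fitting machinery) is:

```python
# One uniform cubic B-spline segment over four control values, t in [0, 1].

def cubic_bspline_point(c0, c1, c2, c3, t):
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    b3 = t**3 / 6
    return b0 * c0 + b1 * c1 + b2 * c2 + b3 * c3
```

The basis forms a partition of unity and reproduces linear data exactly, which is what makes curvature-based reasoning about control-point placement tractable.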

10.
A divide-and-conquer approach for automatic polycube map construction
A polycube map is a global cross-surface parameterization technique in which the polycube shape roughly approximates the geometry of the modeled object while retaining the same topology. The large variation in shape geometry and topology found in real-world applications makes it difficult to construct effectively a high-quality polycube that can serve as a good global parametric domain for a given object. In practice, existing polycube map construction algorithms typically require a large amount of user interaction, either to pre-construct the polycubes with great care or to interactively specify the geometric constraints needed to arrive at user-satisfactory maps. Hence, it is tedious and labor-intensive to construct polycube maps for surfaces of complicated geometry and topology. This paper aims to develop an effective method for constructing polycube maps for surfaces with complicated topology and geometry. Using our method, users can simply specify, in a quantitative way, how closely the target polycube should mimic a given shape. Our algorithm can both construct a similar polycube of high geometric fidelity and compute a high-quality polycube map in an automatic fashion. In addition, our method is theoretically guaranteed to output a one-to-one map. To demonstrate the efficacy of our method, we apply the automatically constructed polycube maps in a number of computer graphics applications, such as seamless texture tiling, T-spline construction, and quadrilateral mesh generation.

11.
Unified modeling language (UML) is the standard modeling language for object-oriented system development. Despite its status as a standard, UML has a fuzzy formal specification and a weak theoretical foundation. Semiotics, the study of signs, provides a good theoretical foundation for UML research because graphical notations (or visual signs) of UML are subjected to the principles of signs. In our research, we use semiotics to study the effectiveness of graphical notations in UML. We hypothesized that the use of iconic signs as UML graphical notations leads to representation that is more accurately interpreted and that arouses fewer connotations than the use of symbolic signs. An open-ended survey was used to test these hypotheses. The results support our propositions that iconic UML graphical notations are more accurately interpreted by subjects and that the number of connotations is lower for iconic UML graphical notations than for symbolic UML graphical notations. The results have both theoretical and practical significance. This study illustrates the usefulness of using semiotics as a theoretical underpinning in analyzing, evaluating, and comparing graphical notations for modeling constructs. The results of this research also suggest ways and means of enhancing the graphical notations of UML modeling constructs.

12.
13.
The notion of parts in a shape plays an important role in many geometry problems, including segmentation, correspondence, recognition, editing, and animation. As the fundamental geometric representation of 3D objects in computer graphics is surface-based, solutions of many such problems utilize a surface metric, a distance function defined over pairs of points on the surface, to assist shape analysis and understanding. The main contribution of our work is to bring together these two fundamental concepts: shape parts and surface metric. Specifically, we develop a surface metric that is part-aware. To encode part information at a point on a shape, we model its volumetric context – called the volumetric shape image (VSI) – inside the shape's enclosed volume, to capture relevant visibility information. We then define the part-aware metric by combining an appropriate VSI distance with geodesic distance and normal variation. We show how the volumetric view on part separation addresses certain limitations of the surface view, which relies on concavity measures over a surface as implied by the well-known minima rule. We demonstrate how the new metric can be effectively utilized in various applications including mesh segmentation, shape registration, part-aware sampling and shape retrieval.
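Combining distance terms into one metric and then taking shortest paths can be sketched on a mesh graph. In the sketch below, the per-edge penalty is only a generic stand-in for the VSI and normal-variation components (the paper's actual terms are not reproduced); the point is that penalizing edges that cross part boundaries makes distances route around parts.

```python
# Dijkstra over a graph whose edge weights combine a geodesic length with
# a penalty term (stand-in for VSI / normal-variation components).
import heapq

def part_aware_distances(edges, n, source, w_geo=1.0, w_pen=1.0):
    """edges: list of (u, v, geodesic_len, penalty); returns distances from source."""
    adj = [[] for _ in range(n)]
    for u, v, g, p in edges:
        w = w_geo * g + w_pen * p
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

With a heavy penalty on one edge, the shortest path detours around it, so points on opposite sides of a "part boundary" end up metrically far apart even when geodesically close.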

14.
Spectral mesh analysis and processing methods, namely ones that utilize eigenvalues and eigenfunctions of linear operators on meshes, have been applied to numerous geometric processing applications. The operator used predominantly in these methods is the Laplace-Beltrami operator, which has the often-cited property that it is intrinsic, namely invariant to isometric deformation of the underlying geometry, including rigid transformations. Depending on the application, this can be either an advantage or a drawback. Recent work has proposed the alternative of using the Dirac operator on surfaces for spectral processing. The available versions of the Dirac operator either only focus on the extrinsic version, or introduce a range of mixed operators on a spectrum between fully extrinsic Dirac operator and intrinsic Laplace operator. In this work, we introduce a unified discretization scheme that describes both an extrinsic and intrinsic Dirac operator on meshes, based on their continuous counterparts on smooth manifolds. In this discretization, both operators are very closely related, and preserve their key properties from the smooth case. We showcase various applications of our operators, with improved numerics over prior work.

15.
Direct display algorithms display a CSG model without first converting the model into a boundary representation. Three such algorithms are described. All three are based on the scanline display algorithm and are able to handle both polygonal and quadratic faces. The first algorithm is based on Atherton's recursive subdivision scanline algorithm, the second is a combination of a scanline and a ray casting algorithm, and the third is a scanline version of the Trickle algorithm. A multiprocessor system in which these algorithms can be incorporated is also described. The performances of the algorithms are compared. It turns out that the algorithms efficiently display CSG models on general-purpose architectures. A comparison is also made between the performances for polygon-approximated models and exact models of objects bounded by quadratic faces, such as spherical, cylindrical and conical faces, to get an indication of how many polygons can at most be used to approximate quadratic faces while still obtaining better performance.
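The core of the ray-casting variant is interval arithmetic along the ray: each primitive contributes an entry/exit interval, and the CSG set operations combine those intervals. The sketch below handles only the simplified case of a single interval per operand (real primitives can contribute several, and `difference` here keeps only the near-side piece), so it is an illustration of the principle rather than a display algorithm.

```python
# CSG set operations on single (t_in, t_out) ray intervals; None = no hit.

def combine(int_a, int_b, op):
    if op == "union":
        if int_a is None: return int_b
        if int_b is None: return int_a
        if int_a[1] < int_b[0] or int_b[1] < int_a[0]:
            return min(int_a, int_b)  # disjoint: keep the nearer hit
        return (min(int_a[0], int_b[0]), max(int_a[1], int_b[1]))
    if op == "intersection":
        if int_a is None or int_b is None: return None
        lo, hi = max(int_a[0], int_b[0]), min(int_a[1], int_b[1])
        return (lo, hi) if lo <= hi else None
    if op == "difference":  # A minus B, near-side piece only
        if int_a is None: return None
        if int_b is None or int_b[0] > int_a[1] or int_b[1] < int_a[0]:
            return int_a
        if int_b[0] <= int_a[0]:
            return (int_b[1], int_a[1]) if int_b[1] < int_a[1] else None
        return (int_a[0], int_b[0])
```

Evaluating these combinations bottom-up over the CSG tree yields the visible interval, whose near endpoint is the surface the pixel displays.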

16.
Computational science and engineering are dominated by field problems. Traditionally, engineering practice involves repeated iterations of shape design (i.e., shaping and modeling of material properties), simulation of the physical field, evaluation of the result, and re-design. In this paper, we propose a specific interpretation of the algebraic-topological formulation of field problems that is conceptually simple, physically sound, computationally effective, and comprehensive. In the proposed approach, physical information is attached to an adaptive, full-dimensional decomposition of the domain of interest. Giving preeminence to the cells of highest dimension allows us to generate the geometry and to simulate the physics simultaneously. We also demonstrate that our formulation removes artificial constraints on the shape of discrete elements and unifies commonly unrelated methods in a single computational framework. This framework, by using an efficient graph representation of the domain of interest, unifies several geometric and physical finite formulations, and supports local progressive refinement (and coarsening) effected only where and when required.
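The structural fact every algebraic-topological field formulation leans on is that the boundary of a boundary vanishes. This can be checked concretely on a single triangle (a generic illustration, not the paper's framework): with signed incidence matrices mapping faces to edges and edges to vertices, the composition is the zero matrix.

```python
# Signed incidence matrices for one triangle with vertices 0, 1, 2 and
# oriented edges e0 = (0,1), e1 = (1,2), e2 = (0,2).

# d1: edges -> vertices (rows: vertices, cols: edges)
d1 = [[-1,  0, -1],
      [ 1, -1,  0],
      [ 0,  1,  1]]

# d2: faces -> edges; the face boundary, traversed 0 -> 1 -> 2 -> 0,
# is e0 + e1 - e2 (e2 is crossed against its orientation).
d2 = [[1], [1], [-1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]
```

Because `d1 @ d2 = 0`, discrete balance laws stated on cells of one dimension are automatically consistent with those on the next, which is what lets the approach attach physics directly to the cell decomposition.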

17.
Hybrid fuzzy-first-principles models can be attractive when a complete physical model is difficult to derive. These hybrid models consist of a framework of dynamic mass and energy balances, supplemented with fuzzy submodels describing additional relations such as mass transformation and transfer rates. In this paper, a structured approach for designing this type of model is presented. The modeling problem is reduced to several simpler problems, which are solved independently: determination of the hybrid model structure and its subprocesses, estimation of subprocess behavior, and identification and integration of the submodels to form the hybrid model. The hybrid model is interpretable and transparent. The approach is illustrated using data from a (simulated) fed-batch bioreactor.
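The hybrid idea can be sketched in miniature: a mechanistic balance integrated with Euler steps, whose unknown rate term comes from a tiny two-rule fuzzy submodel. All numbers below (growth rates, yield, feed rate, membership shapes) are illustrative placeholders, not values from the paper.

```python
# Mechanistic fed-batch balances with a fuzzy rate submodel (illustrative).

def fuzzy_rate(s):
    """Two rules: 'S low -> slow growth', 'S high -> fast growth'."""
    low = max(0.0, 1.0 - s / 5.0)   # triangular membership on substrate S
    high = 1.0 - low
    return (low * 0.05 + high * 0.4) / (low + high)

def simulate(x0=0.1, s0=5.0, feed=0.02, Y=0.5, dt=0.01, steps=1000):
    """Euler-integrate biomass x and substrate s; mu comes from the fuzzy submodel."""
    x, s = x0, s0
    for _ in range(steps):
        mu = fuzzy_rate(s)
        x += dt * mu * x               # biomass balance: dx/dt = mu * x
        s += dt * (feed - mu * x / Y)  # substrate balance with constant feed
        s = max(s, 0.0)
    return x, s
```

The balances stay interpretable (every term is a physical flux), while the fuzzy submodel absorbs the part that is hard to derive from first principles, which is the division of labor the abstract describes.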

18.
19.
The problem of clustering probability density functions is emerging in different scientific domains. The methods proposed for clustering probability density functions are mainly focused on univariate settings and are based on heuristic clustering solutions. New aspects of the problem associated with the multivariate setting and a model-based perspective are investigated. The novel approach relies on a hierarchical mixture modeling of the data. The method is introduced in the univariate context and then extended to multivariate densities by means of a factorial model performing dimension reduction. Model fitting is carried out using an EM-algorithm. The proposed method is illustrated through simulated experiments and applied to two real data sets in order to compare its performance with alternative clustering strategies.
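Model-based clustering ultimately runs EM on a mixture; a bare-bones EM loop for a two-component 1D Gaussian mixture (a generic textbook version with fixed shared variance, far simpler than the paper's hierarchical factorial model) shows the E-step/M-step alternation.

```python
# EM for a two-component 1D Gaussian mixture with fixed unit variance.
from math import exp, pi, sqrt

def normal_pdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def em_two_gaussians(data, iters=50):
    mu1, mu2, var, w = min(data), max(data), 1.0, 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = [w * normal_pdf(x, mu1, var)
             / (w * normal_pdf(x, mu1, var) + (1 - w) * normal_pdf(x, mu2, var))
             for x in data]
        # M-step: update the mixing weight and the two means
        w = sum(r) / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu2 = (sum((1 - ri) * x for ri, x in zip(r, data))
               / sum(1 - ri for ri in r))
    return mu1, mu2, w
```

On well-separated data the responsibilities saturate quickly and the means converge to the cluster centers in a few iterations.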

20.
This paper presents a systematic approach to designing first-order Takagi-Sugeno-Kang (TSK) fuzzy systems. The approach attempts to obtain the fuzzy rules without any assumption about the structure of the data. The structure identification and parameter optimization steps are carried out automatically and are capable of finding the optimal number of rules with acceptable accuracy. Starting from an initial structure, the system first tries to improve the structure and then, as soon as an improved structure is found, fine-tunes its rules' parameters. It then goes back to improve the structure again and re-fine-tunes the parameters, and this loop continues until a satisfactory solution (TSK model) is found. The proposed approach has been applied successfully to well-known benchmark datasets and real-world problems, and the results are compared with those obtained by other methods from the literature. Experimental studies demonstrate that the predicted properties are in good agreement with the measured data, using an elicited fuzzy model with a small number of rules. Finally, as a case study, the proposed approach is applied to the desulfurization process of a real steel plant. Compared with several other fuzzy systems and neural networks, the developed TSK fuzzy system exhibits better results, with higher accuracy and a smaller architecture.
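The inference step of a first-order TSK system is compact enough to show whole (the standard weighted-average formulation; the rule parameters below are illustrative, and the paper's contribution is how to learn them, not this evaluation):

```python
# First-order TSK inference: Gaussian firing strengths weight linear
# consequents y = a * x + b; the output is their normalized average.
from math import exp

def tsk_output(x, rules):
    """rules: list of (center, sigma, a, b) tuples."""
    num = den = 0.0
    for c, sigma, a, b in rules:
        firing = exp(-((x - c) / sigma) ** 2)
        num += firing * (a * x + b)
        den += firing
    return num / den
```

Structure identification decides how many `(center, sigma, a, b)` tuples to keep, and parameter optimization tunes them; the alternation between those two steps is the loop the abstract describes.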


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号