Similar Documents (20 results)
1.
We present a new algorithm for the efficient and reliable generation of offset surfaces for polygonal meshes. The algorithm is robust with respect to degenerate configurations and computes (self‐)intersection free offsets that do not miss small and thin components. The results are correct within a prescribed ε‐tolerance. This is achieved by using a volumetric approach where the offset surface is defined as the union of a set of spheres, cylinders, and prisms, instead of surface‐based approaches that generally construct an offset surface by shifting the input mesh in the normal direction. Since we are using the unsigned distance field, we can handle any type of topological inconsistency, including non‐manifold configurations and degenerate triangles. A simple but effective mesh operation allows us to detect and include sharp features (shocks) in the output mesh and to preserve them during post‐processing (decimation and smoothing). We discretize the distance function by an efficient multi‐level scheme on an adaptive octree data structure. The problem of limited voxel resolution inherent to every volumetric approach is avoided by breaking the bounding volume into smaller tiles and processing them independently. This allows for almost arbitrarily high voxel resolutions on a commodity PC while keeping the output mesh complexity low. The quality and performance of our algorithm are demonstrated for a number of challenging examples.
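The union-of-primitives idea behind this volumetric approach is easy to see in 2D, where the offset of a polyline is the union of capsules (the 2D analog of the spheres, cylinders, and prisms). The following Python snippet is an illustrative sketch, not the paper's implementation; the grid resolution, segment data, and offset radius are made up for the example.

```python
import numpy as np

def dist_point_segment(p, a, b):
    # Unsigned distance from point p to the segment from a to b.
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def unsigned_distance_field(segments, xs, ys):
    # Distance to the union of segments = min over per-segment distances,
    # mirroring the union-of-primitives definition of the offset volume.
    d = np.full((len(ys), len(xs)), np.inf)
    for a, b in segments:
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                d[j, i] = min(d[j, i], dist_point_segment(np.array([x, y]), a, b))
    return d

# A single horizontal segment; its 0.5-offset is a capsule (stadium shape).
segments = [(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))]
xs = np.linspace(-2, 2, 81)
ys = np.linspace(-1, 1, 41)
d = unsigned_distance_field(segments, xs, ys)
inside_offset = d <= 0.5   # samples inside the offset volume
```

Because the field is unsigned, the same construction works for open polylines and other "inconsistent" inputs that have no well-defined inside.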

2.
In this paper, we introduce a novel coordinate‐free method for manipulating and analyzing vector fields on discrete surfaces. Unlike the commonly used representations of a vector field as an assignment of vectors to the faces of the mesh, or as real values on edges, we argue that vector fields can also be naturally viewed as operators whose domain and range are functions defined on the mesh. Although this point of view is common in differential geometry, it has so far not been adopted in geometry processing applications. We recall the theoretical properties of vector fields represented as operators, and show that composition of vector fields with other functional operators is natural in this setup. This leads to the characterization of vector field properties through commutativity with other operators such as the Laplace‐Beltrami and symmetry operators, as well as to a straightforward definition of differential properties such as the Lie derivative. Finally, we demonstrate a range of applications, such as Killing vector field design, symmetric vector field estimation and joint design on multiple surfaces.
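The operator point of view is easy to prototype on a flat periodic grid (a toy stand-in for a surface): a vector field V becomes the matrix D_V = v_x D_x + v_y D_y acting on flattened grid functions, and differential properties appear as operator commutators. The sketch below is an illustration under these discretization assumptions, not the paper's method; it checks that two constant vector fields, whose Lie bracket vanishes, yield exactly commuting operators.

```python
import numpy as np

def periodic_central_diff(n, h):
    # Central-difference derivative matrix on a periodic 1D grid.
    D = np.zeros((n, n))
    for i in range(n):
        D[i, (i + 1) % n] = 1.0 / (2 * h)
        D[i, (i - 1) % n] = -1.0 / (2 * h)
    return D

def vector_field_operator(vx, vy, n, h):
    # Operator view of a constant vector field V = (vx, vy): it acts on a
    # grid function f (flattened to a length n*n vector) as
    #   D_V f = vx * df/dx + vy * df/dy.
    D = periodic_central_diff(n, h)
    I = np.eye(n)
    Dx = np.kron(I, D)   # derivative along x (fast index)
    Dy = np.kron(D, I)   # derivative along y (slow index)
    return vx * Dx + vy * Dy

n, h = 8, 1.0
DU = vector_field_operator(1.0, 2.0, n, h)
DV = vector_field_operator(-3.0, 0.5, n, h)
# Constant fields have zero Lie bracket, so their operators must commute:
commutator = DU @ DV - DV @ DU
```

For non-constant fields the commutator [D_U, D_V] would instead represent the field [U, V], i.e. the Lie derivative, which is exactly the characterization the abstract describes.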

3.
Numerical simulations and experimental observations are inherently imprecise. Therefore, most vector fields of interest in scientific visualization are known only up to an error. In such cases, some topological features, especially those not stable enough, may be artifacts of the imprecision of the input. This paper introduces a technique to compute topological features of user‐prescribed stability with respect to perturbation of the input vector field. In order to make our approach simple and efficient, we develop our algorithms for the case of piecewise constant (PC) vector fields. Our approach is based on a super‐transition graph, a common graph representation of all PC vector fields whose vector value in a mesh triangle is contained in a convex set of vectors associated with that triangle. The graph is used to compute a Morse decomposition that is coarse enough to be correct for all vector fields satisfying the constraint. Apart from computing stable Morse decompositions, our technique can also be used to estimate the stability of Morse sets with respect to perturbation of the vector field or to compute topological features of continuous vector fields using the PC framework.

4.
We introduce a new variational formulation for the problem of reconstructing a watertight surface defined by an implicit equation, from a finite set of oriented points; a problem which has attracted a lot of attention for more than two decades. As in the Poisson Surface Reconstruction approach, discretizations of the continuous formulation reduce to the solution of sparse linear systems of equations. But rather than forcing the implicit function to approximate the indicator function of the volume bounded by the implicit surface, in our formulation the implicit function is forced to be a smooth approximation of the signed distance function to the surface. Since an indicator function is discontinuous, its gradient does not exist exactly where it needs to be compared with the normal vector data. The smooth signed distance has approximate unit slope in the neighborhood of the data points. As a result, the normal vector data can be incorporated directly into the energy function without implicit function smoothing. In addition, rather than first extending the oriented points to a vector field within the bounding volume, and then approximating the vector field by a gradient field in the least squares sense, here the vector field is constrained to be the gradient of the implicit function, and a single variational problem is solved directly in one step. The formulation allows for a number of different efficient discretizations, reduces to a finite least squares problem for all linearly parameterized families of functions, and does not require boundary conditions. The resulting algorithms are significantly simpler and easier to implement, and produce results of quality comparable with state‐of‐the‐art algorithms. An efficient implementation based on a primal‐graph octree‐based hybrid finite element‐finite difference discretization, and the Dual Marching Cubes isosurface extraction algorithm, is shown to produce high quality crack‐free adaptive manifold polygon meshes.  

5.
The problem of finding the set of all continuous straight lines which lead to the same digitization is quite well known and has been studied in [1, 2, 3]. In this paper, we propose a new method of analysis which is simple and straightforward but is shown to be as powerful as the other techniques. Moreover, our scheme, being algebraic in nature, can be generalized to more complex shapes and figures and thus has a wider applicability.

6.
A vector field with a relatively fixed structure, including clockwise vortices, anticlockwise vortices, convergence, divergence, and saddles, is defined as an axisymmetric vector field (AVF) in this study. A method for characterizing and classifying flow patterns in real two‐dimensional vector fields for meteorological applications is proposed. First, the collected AVF samples are transformed into directional pseudo‐color images by mapping them to the hue component of the HSL color model. Second, the directional hue difference and the degree of similarity to a canonical AVF are extracted as two features using image‐processing techniques. Third, two physical properties of an AVF, vorticity and divergence, are introduced and adapted for this study. Finally, in the experiments, the probability density distribution of each type of AVF sample is used to analyze the strengths and weaknesses of the four features for classifying the five AVF patterns, and the correlation of the features is examined with PCA. The statistical results show that the features describe AVF patterns effectively. By training a decision‐tree classifier, the classification ability of the features on AVF samples of different scales and resolutions is demonstrated in comparison with traditional methods.
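The two physical features can be prototyped directly with finite differences; for a rigid counter-clockwise vortex (u, v) = (−y, x), the vorticity is the constant 2 and the divergence is 0. A minimal sketch, with the grid and field chosen purely for illustration:

```python
import numpy as np

def vorticity_divergence(u, v, dx=1.0, dy=1.0):
    # Finite-difference vorticity (dv/dx - du/dy) and divergence
    # (du/dx + dv/dy) of a 2D vector field sampled on a regular grid.
    # Arrays are indexed [row = y, col = x].
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    return dvdx - dudy, dudx + dvdy

# Rigid counter-clockwise vortex (u, v) = (-y, x):
# vorticity = 2 and divergence = 0 everywhere.
ys, xs = np.mgrid[-2:2:41j, -2:2:41j]
u, v = -ys, xs
vort, div = vorticity_divergence(u, v, dx=0.1, dy=0.1)
```

Swapping in (u, v) = (x, y) would give the opposite signature (divergence 2, vorticity 0), which is what lets these two scalars separate vortex patterns from source/sink patterns.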

7.
Multi‐core systems equipped with micro processing units and accelerators such as digital signal processors (DSPs) and graphics processing units (GPUs) have become a major trend in processor design in recent years in attempts to meet ever‐increasing application performance requirements. Open Computing Language (OpenCL) is one of the programming languages that include new extensions proposed to exploit the computing power of these kinds of processors. Among the newly extended language features, single‐instruction multiple‐data (SIMD) linguistics and vector types were added to OpenCL to exploit hardware features of the accelerators. The addition makes it necessary to consider how traditional compiler data flow analysis can be adapted to meet the optimization requirements of vector linguistics. In this paper, we propose a calculus framework to support the data flow analysis of vector constructs for OpenCL programs, which compilers can use to perform SIMD optimizations. We model OpenCL vector operations as data access functions in the style of mathematical functions. We then show that data flow analysis for OpenCL vector linguistics can be performed based on the data access functions. Based on the information gathered from data flow analysis, we illustrate a set of SIMD optimizations on OpenCL programs. The experimental results incorporating our calculus and our proposed compiler optimizations show that the proposed SIMD optimizations provide average performance improvements of 22% on x86 CPUs and 4% on AMD GPUs. Of the 15 selected benchmarks, 11 are improved on x86 CPUs and six on AMD GPUs. The proposed framework has the potential to be used to construct other SIMD optimizations on OpenCL programs. Copyright © 2015 John Wiley & Sons, Ltd.

8.
The purpose of this paper is to propose a new method for optimizing the output transition in the case of set‐point reset for LTI, non‐minimum phase, possibly non‐hyperbolic plants. Assuming that the plant is stabilized by a proper feedback controller, the problem consists of finding a feedforward linear filter yielding a suitable reference trajectory for the closed‐loop system. The approach is situated in the framework of model pseudo‐inversion, because the external reference trajectory is computed starting from desired features of the transient output between the two set points. A significant aspect of the new method is that the transition trajectory is not exactly prespecified ad hoc by the designer; rather, it is implicitly defined by the minimization of a suitable multi‐objective quadratic cost functional. As no pre‐actuation is required, the method can be implemented online in practice and also works for the critical class of non‐hyperbolic systems. Copyright © 2015 John Wiley & Sons, Ltd.

9.
In digital image editing, environment matting and compositing are fundamental operations that capture and simulate the refraction and reflection of light from an environment. The state‐of‐the‐art real‐time environment matting and compositing method lacks flexibility, since it must repeat the entire complex matte acquisition process whenever the distance between the object and the background differs from that in the acquisition stage, and it also lacks accuracy, since it can only remove noise but not errors. In this paper, we introduce the concept of the refractive vector and propose a refractive vector field as a new representation for the environment matte. Such a refractive vector field provides great flexibility for transparent‐object environment matting and compositing. In particular, with only one pass of matte acquisition and refractive vector field extraction, we are able to composite the transparent object onto an arbitrary background at any distance. Furthermore, we introduce a piecewise vector field fitting algorithm to simultaneously remove both the noise and the errors contained in the extracted matte data. Experimental results show that our method is less sensitive to artefacts and generates perceptually good composition results for more general scenarios.

10.
Two commonly sought characteristics of a digital controller are conveniently combined in a so-called comb filter [1], [2]. Such filters are frequently used for bandpass filtering in digital communications systems but are not widely used in digital control systems. These characteristics are phase lead, such as obtained by digitization of a conventional lead network, and high-frequency notching to remove some unwanted signal and its harmonics.
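As a minimal illustration of the notching behavior (not the controller design discussed in the abstract), the FIR comb y[n] = x[n] − x[n−N] has transfer function H(z) = 1 − z^(−N); its zeros at the N-th roots of unity put notches at DC and at every harmonic of f_s/N, so any disturbance with period N samples is cancelled together with all of its harmonics:

```python
import numpy as np

def comb_filter(x, N):
    # FIR comb: y[n] = x[n] - x[n-N], i.e. H(z) = 1 - z^-N.
    # The first N outputs keep x[n] unchanged (zero initial history).
    y = x.copy()
    y[N:] = x[N:] - x[:-N]
    return y

N = 20
n = np.arange(400)
# Disturbance with period N samples, fundamental plus third harmonic:
disturbance = np.sin(2 * np.pi * n / N) + 0.3 * np.sin(2 * np.pi * 3 * n / N)
y = comb_filter(disturbance, N)
# After the first N samples, the periodic disturbance is cancelled exactly.
```

A controller would typically use a recursive variant with poles to shape the phase lead as well; the FIR form above only shows the harmonic-notching half of the combination.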

11.
Data sets coming from simulations or sampling of real‐world phenomena often contain noise that hinders their processing and analysis. Automatic filtering and denoising can be challenging: when the nature of the noise is unknown, it is difficult to distinguish between noise and actual data features; in addition, the filtering process itself may introduce “artificial” features into the data set that were not originally present. In this paper, we propose a smoothing method for 2D scalar fields that gives the user explicit control over the data features. We define features as critical points of the given scalar function, and the topological structure they induce (i.e., the Morse‐Smale complex). Feature significance is rated according to topological persistence. Our method allows filtering out spurious features that arise due to noise by means of topological simplification, providing the user with a simple interface that defines the significance threshold, coupled with immediate visual feedback of the remaining data features. In contrast to previous work, our smoothing method guarantees a C1‐continuous output scalar field with the exact specified features and topological structures.
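Topological persistence is easiest to see in 1D: sweeping the sublevel sets of a sequence, a component is born at each local minimum and dies when it merges with an older component, and the death − birth gap rates how significant that minimum is. The union-find sketch below illustrates this 0-dimensional case only; it is not the paper's 2D Morse-Smale machinery.

```python
import numpy as np

def persistence_pairs(f):
    # 0-dimensional sublevel-set persistence of a 1D sequence: process the
    # samples in increasing order of value, creating a component at each new
    # sample and merging with already-processed neighbors. By the elder rule,
    # the younger (higher-birth) component dies at the merge value.
    order = np.argsort(f)
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    pairs.append((birth[young], f[i]))   # (birth, death)
                    parent[young] = old
    # Drop zero-persistence pairs (regular points); the global minimum's
    # component never dies. Sort the rest by significance.
    return sorted((p for p in pairs if p[1] > p[0]),
                  key=lambda p: p[1] - p[0], reverse=True)

f = np.array([5.0, 1.0, 4.0, 3.0, 6.0, 0.0, 5.0])
pairs = persistence_pairs(f)
# The shallow minimum 3 pairs with the saddle 4 (persistence 1); the deep
# minimum 1 pairs with the maximum 6 (persistence 5); the global minimum
# 0 is essential and never dies.
```

Filtering then amounts to discarding features whose persistence falls below the user's significance threshold, which is the interface the abstract describes for the 2D case.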

12.
Digital acquisition and graphical output of 1:500 topographic maps is one of the basic tasks of an urban GIS. Building on ARC/INFO, this paper presents a workflow from digital acquisition to graphical output for 1:500 topographic maps, together with its implementation; experiments show that the method is feasible. In addition, generation algorithms for certain special cartographic feature symbols are studied in depth.

13.
We present a discrete‐time mathematical formulation for applying recursive digital filters to non‐uniformly sampled signals. Our solution presents several desirable features: it preserves the stability of the original filters; is well‐conditioned for low‐pass, high‐pass, and band‐pass filters alike; its cost is linear in the number of samples and is not affected by the size of the filter support. Our method is general and works with any non‐uniformly sampled signal and any recursive digital filter defined by a difference equation. Since our formulation directly uses the filter coefficients, it works out‐of‐the‐box with existing methodologies for digital filter design. We demonstrate the effectiveness of our approach by filtering non‐uniformly sampled signals in various image and video processing tasks including edge‐preserving color filtering, noise reduction, stylization, and detail enhancement. Our formulation enables, for the first time, edge‐aware evaluation of any recursive infinite impulse response digital filter (not only low‐pass), producing high‐quality filtering results in real time.
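For a first-order low-pass, the core idea can be sketched by raising the feedback coefficient to the local sample spacing, so the impulse response decays per unit time rather than per sample: widely spaced samples (large gaps, e.g. across edges in the edge-aware setting) receive little feedback, densely spaced ones receive a lot. This is an illustrative sketch under that assumption, not the paper's general difference-equation formulation:

```python
import numpy as np

def nonuniform_lowpass(x, t, a):
    # First-order recursive low-pass adapted to non-uniform sampling:
    #   y[i] = (1 - a_i) * x[i] + a_i * y[i-1],  with  a_i = a ** (t[i] - t[i-1]).
    # As the gap t[i] - t[i-1] grows, a_i -> 0 and the output snaps to the
    # input; as it shrinks, a_i -> 1 and the smoothing strengthens.
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        ai = a ** (t[i] - t[i - 1])
        y[i] = (1 - ai) * x[i] + ai * y[i - 1]
    return y

t = np.sort(np.random.default_rng(0).uniform(0, 10, 50))  # irregular sample times
x = np.ones(50)                 # unit DC gain: a constant passes through unchanged
y = nonuniform_lowpass(x, t, a=0.5)

xs = np.zeros(50); xs[25:] = 1.0   # step input: output stays within [0, 1]
ys = nonuniform_lowpass(xs, t, a=0.5)
```

Since each output is a convex combination of the current input and the previous output with |a_i| < 1, stability of the original filter is preserved, which is one of the properties the abstract claims for the general case.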

14.
With the advance of digitization and digital processing techniques, digital images are now easy to create and manipulate, often leaving no visible traces of tampering. Known types of digital fakery include computer graphics (CG) and digital forgeries. As a result, natural images, i.e., photographic images, can no longer be trusted as faithful records of the natural world. In this paper, a scheme for distinguishing natural images from fake images is proposed. Features are first extracted using multiresolution decomposition and higher‐order local autocorrelations (HLACs). Support vector machines (SVMs) are then used to differentiate natural and fake images. Because the inner product between feature vectors can be obtained directly, without explicitly computing the features, it can be integrated into the SVM, which reduces the computational complexity. Experiments show that the proposed detection scheme is effective, demonstrating that the proposed statistical features can model the differences between natural and fake images.

15.
Distinguishing holes from components is an important issue in digital topology. In a recent paper, Lee, Poston, and Rosenfeld proposed a method to distinguish external and internal boundaries in 2D and 3D images relying on properties of the normal vector and the winding number. Their method uses a smoothing function in place of the digital lattice to calculate normal vectors on the image boundary. In this paper, we show that the normal vector and the winding number can be defined directly in 2D digital images and used for hole detection without resorting to any smoothing operation. We first analyze the discontinuity of the Freeman codes of a contour and prove its properties. We then define the outward normal vector in 2D images and demonstrate its discontinuity properties as well. The difficulty of counting the transitions of the normal vector in a given direction is analyzed and a solution is proposed. Based on the theoretical properties of edge codes and normal vectors, we propose a first algorithm that counts the transitions of the normal vector in a given direction, so that holes and external contours can be distinguished easily. We further define the winding number directly in digital images, show its properties, and propose a second algorithm implementing the winding-number idea, which is conceptually simpler and easier than the first. A proof of correctness of both algorithms is given and computational results are presented.
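The winding-number idea for Freeman chain codes can be sketched directly: each consecutive pair of 8-direction codes contributes a signed turn in units of 45°, and summing the turns along a closed contour gives ±360°, distinguishing external contours from hole boundaries. A minimal sketch of this idea only, not the paper's transition-counting algorithms:

```python
def winding_number(chain):
    # Freeman 8-direction chain code: code k points at angle k * 45 degrees.
    # Each step turns by (d_next - d_prev) mod 8, remapped to the signed
    # range [-3, 4]; the turns along a closed contour sum to a multiple of 8
    # (i.e. of 360 degrees). Winding number +1 means an external contour
    # traversed counter-clockwise, -1 a hole boundary traversed clockwise.
    total = 0
    for i in range(len(chain)):
        d = (chain[(i + 1) % len(chain)] - chain[i]) % 8
        if d > 4:
            d -= 8
        total += d
    return total // 8

ccw_square = [0, 0, 2, 2, 4, 4, 6, 6]   # right, up, left, down: external contour
cw_square  = [0, 0, 6, 6, 4, 4, 2, 2]   # clockwise traversal: a hole boundary
```

Note the asymmetric remapping range [-3, 4]: a 180° reversal is conventionally counted as +4 here, one of the degenerate cases a full algorithm must treat carefully.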

16.
Archive digitization is a new demand that the information age places on archival work, and advancing it is also a foundational task in the informatization of universities. Digitization brings archivists new missions and opportunities, but in practice the work also faces many problems and difficulties. This paper discusses several problems encountered in the digitization practice of university archives, along with our reflections and explorations.

17.
Extracting features in the smooth regions of mesh models is difficult, and existing feature-recognition methods cannot detect feature points that are distributed only along one particular direction. To address these problems, a direction-aware feature-recognition method for mesh models is proposed. First, the variation of the normal vectors of the faces adjacent to each vertex is probed separately along the x, y, and z directions. With a suitable threshold, a vertex is recognized as a feature point as soon as the variation of its adjacent face normals along any one direction exceeds the threshold. Second, to address the inability of existing feature-recognition algorithms to detect the terrace-like structures common in 3D medical models, which are distributed only along the z axis, the variation of adjacent face normals along the z direction is probed separately; vertices whose variation exceeds the threshold are recognized as terrace-structure vertices, correctly separating abnormal terrace structures from the normal structural features of human-body models. Comparative experiments against the dihedral-angle method show that, under the same threshold settings, the proposed method recognizes mesh features better, overcoming the dihedral-angle method's inability to detect feature points in smooth regions without distinct creases. It also solves the problem that existing feature-detection algorithms, lacking directional probing, cannot separate the abnormal terrace structures of medical models from normal anatomical structures, providing a basis for subsequent digital geometry processing of medical models.

18.
We present a method for accelerating the convergence of continuous non‐linear shape optimization algorithms. We start with a general method for constructing gradient vector fields on a manifold, and we analyse this method from a signal processing viewpoint. This analysis reveals that we can construct various filters using the Laplace–Beltrami operator of the shape that can effectively separate the components of the gradient at different scales. We use this idea to adaptively change the scale of features being optimized to arrive at a solution that is optimal across multiple scales. This is in contrast to traditional descent‐based methods, for which the rate of convergence often stalls early once the high frequency components have been optimized. We demonstrate how our method can be easily integrated into existing non‐linear optimization frameworks such as gradient descent, Broyden–Fletcher–Goldfarb–Shanno (BFGS) and the non‐linear conjugate gradient method. We show significant performance improvement for shape optimization in variational shape modelling and parameterization, and we also demonstrate the use of our method for efficient physical simulation.
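The scale-separation idea can be sketched on a path graph, with the graph Laplacian standing in for the Laplace-Beltrami operator of the shape: the implicit filter (I + tL)^(-1) damps small-scale components of a gradient while keeping large-scale ones, so descent moves the coarse shape first. The grid size, filter parameter, and test signal below are illustrative assumptions.

```python
import numpy as np

def path_laplacian(n):
    # Combinatorial graph Laplacian of a path with n vertices, a simple
    # stand-in for the Laplace-Beltrami operator of a shape.
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0
    return L

def lowpass_gradient(g, t):
    # Implicit smoothing filter (I + t L)^-1: eigenmodes with Laplacian
    # eigenvalue lam are scaled by 1 / (1 + t * lam), so high-frequency
    # (small-scale) components of the gradient are strongly attenuated
    # while low-frequency (large-scale) components pass nearly unchanged.
    n = len(g)
    return np.linalg.solve(np.eye(n) + t * path_laplacian(n), g)

n = 64
coarse = np.sin(np.linspace(0, np.pi, n))              # large-scale component
noise = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)     # highest-frequency mode
g = coarse + noise
gs = lowpass_gradient(g, t=5.0)                        # gs is close to coarse
```

Varying t sweeps the cutoff scale, which is the knob the method uses to schedule optimization from coarse features to fine ones.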

19.
Signed distance functions (SDF) to explicit or implicit surface representations are intensively used in various computer graphics and visualization algorithms. Among others, they are applied to optimize collision detection, are used to reconstruct data fields or surfaces, and, in particular, are an obligatory ingredient for most level set methods. Level set methods are common in scientific visualization to extract surfaces from scalar or vector fields. Usual approaches for the construction of an SDF to a surface are either based on iterative solutions of a special partial differential equation or on marching algorithms involving a polygonization of the surface. We propose a novel method for a non‐iterative approximation of an SDF and its derivatives in a vicinity of a manifold. We use a second‐order algebraic fitting scheme to ensure high accuracy of the approximation. The manifold is defined (explicitly or implicitly) as an isosurface of a given volumetric scalar field. The field may be given at a set of irregular and unstructured samples. Stability and reliability of the SDF generation is achieved by a proper scaling of weights for the Moving Least Squares approximation, accurate choice of neighbors, and appropriate handling of degenerate cases. We obtain the solution in an explicit form, such that no iterative solving is necessary, which makes our approach fast.

20.
Rendering materials such as metallic paints, scratched metals and rough plastics requires glint integrators that can capture all micro‐specular highlights falling into a pixel footprint, faithfully replicating surface appearance. Specular normal maps can be used to represent a wide range of arbitrary micro‐structures. The use of normal maps comes with important drawbacks though: the appearance is dark overall due to back‐facing normals and importance sampling is suboptimal, especially when the micro‐surface is very rough. We propose a new glint integrator relying on a multiple‐scattering patch‐based BRDF addressing these issues. To do so, our method uses a modified version of microfacet‐based normal mapping [SHHD17] designed for glint rendering, leveraging symmetric microfacets. To model multiple‐scattering, we re‐introduce the lost energy caused by a perfectly specular, single‐scattering formulation instead of using expensive random walks. This reflectance model is the basis of our patch‐based BRDF, enabling robust sampling and artifact‐free rendering with a natural appearance. Additional calculation costs amount to about 40% in the worst cases compared to previous methods [YHMR16, CCM18].
