Similar literature
20 similar documents found (search time: 0 ms)
1.
The problems of defining convexity and circularity of a digital region are considered. A new definition of digital convexity, called DL- (digital line) convexity, is proposed. A region is DL-convex if, for any two pixels belonging to it, there exists a digital straight line between them all of whose pixels belong to the region. DL-convexity is shown to be stronger than two other definitions, T- (triangle) convexity and L- (line) convexity: a digital region is T-convex if it is DL-convex, but the converse is not generally true. This is because a DL-convex region must be connected, whereas T- and L-convex regions can be disconnected. An algorithm to compute the DL-convex hull of a digital region is described. A related problem, the computation of the circular hull and its application to testing the circularity of a digital region, is also considered, and an algorithm is given that is computationally cheaper than a previous algorithm for testing circularity.
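The DL-convexity definition above lends itself to a direct, if brute-force, membership test: generate a digital straight line between every pixel pair and check that its pixels stay inside the region. A minimal sketch, assuming Bresenham lines as the digital straight lines (the paper's construction may differ); `bresenham` and `is_dl_convex` are illustrative names, not from the paper:

```python
from itertools import combinations

def bresenham(p, q):
    """Pixels of a digital straight line from p to q (integer Bresenham)."""
    (x0, y0), (x1, y1) = p, q
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    line = []
    while True:
        line.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return line

def is_dl_convex(region):
    """Brute-force DL-convexity: every pixel of a digital line between any
    two region pixels must itself belong to the region."""
    region = set(region)
    return all(set(bresenham(p, q)) <= region
               for p, q in combinations(region, 2))
```

The check costs O(n^2 · d) for n pixels and lines of length d; the hull algorithm described in the abstract avoids this pairwise enumeration.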

2.
We present an efficient real-time algorithm for computing the precise convex hulls of planar freeform curves. For this purpose, the planar curves are initially approximated with G1-biarcs within a given error bound ε in a preprocessing step. The algorithm is based on an efficient construction of approximate convex hulls using circular arcs. The majority of redundant curve segments can be eliminated using simple geometric tests on circular arcs. Several experimental results demonstrate the effectiveness of the proposed approach, which yields speedups of 200-300 times over the previous results (Elber et al., 2001) [8].

3.
This paper is concerned with the digital line recognition problem for lines of fixed thickness in the naive and general cases. Previous incremental algorithms from Debled-Rennesson and Reveillès [A linear algorithm for segmentation of digital curves, Int. J. Pattern Recognition Artif. Intell. 9(6) (1995)] and from Buzer [A linear incremental algorithm for naive and standard digital lines and planes recognition, Graphical Models 65(1-3) (2003) 61-76] deal only with the 8-connected case or rely on sophisticated machinery from linear programming. We present the first elementary method that works with any set of points (not necessarily 8-connected), and we propose a linear-time algorithm under some restrictions. The paper also covers implementation details, giving pseudo-code for our method, and emphasizes the link between the recognition problem and the intrinsic properties of convex hulls.

4.
Many contour-based applications, such as pattern recognition or classification methods, rely on estimating the geometry of a shape. This paper proposes a comprehensive evaluation of the problem of tangent estimation on digital curves. The methods taken into account use different paradigms: approximation and digital geometry. In the former paradigm, methods based on polynomial fitting, smoothing and filtering are reviewed. In the latter case of digital geometry, we consider two methods that mainly rely on digital straight line recognition [J.-O. Lachaud, A. Vialard, F. de Vieilleville, Fast, accurate and convergent tangent estimation on digital contours, Image and Vision Computing 25(10) (2007) 1572-1587] and optimization [B. Kerautret, J.-O. Lachaud, Robust estimation of curvature along digital contours with global optimization, in: Proceedings of Discrete Geometry for Computer Imagery, Lyon, France, Lecture Notes in Computer Science, vol. 4992, Springer, Berlin, 2008]. The comparison takes into account objective criteria such as multi-grid convergence, average error, maximum error, isotropy and length estimation. The experiments underline that adaptive methods based on digital straight line recognition often offer a good trade-off between time and precision, and that when precision is sought, non-adaptive methods can easily be transformed into adaptive methods to obtain more accurate estimations.

5.
This paper presents a new algorithm that detects a set of dominant points on the boundary of an eight-connected shape to obtain a polygonal approximation of the shape itself. The set of dominant points is obtained from the original break points of the initial boundary, where the integral square error is zero. To this end, most of the original break points are deleted by suppressing those whose perpendicular distance to an approximating straight line is lower than a variable threshold value. The proposed algorithm iteratively deletes redundant break points until the required approximation, which relies on a decrease in the length of the contour and on the highest error, is achieved. A comparative experiment with another commonly used algorithm showed that the proposed method produces efficient and effective polygonal approximations for digital planar curves with features of several sizes.
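The suppression step described above can be illustrated with a single pass that deletes break points whose perpendicular distance to the chord joining their neighbours falls below a threshold. This is a simplified sketch of the idea, not the paper's iterative variable-threshold procedure; `perp_dist` and `suppress` are illustrative names:

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def suppress(points, threshold):
    """One suppression pass over an open polyline: drop each interior point
    whose distance to the chord joining its surviving left neighbour and
    its right neighbour is below the threshold."""
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        if perp_dist(points[i], kept[-1], points[i + 1]) >= threshold:
            kept.append(points[i])
    kept.append(points[-1])
    return kept
```

Repeating such passes with an increasing threshold, as the abstract describes, trades contour length against the highest approximation error.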

6.
Based on a classical convex hull algorithm called gift-wrapping, the purpose of this paper is to provide a new algorithm for computing the vertices of a polytope called the preimage—roughly, the set of naive digital planes containing a finite subset S of Z3. The vertices of the upper hemisphere, those of the lower hemisphere, and finally the equatorial vertices are computed independently. The principle of the algorithm is based on duality, and especially on the fact that the vertices of the preimage correspond to faces of the input set S or of its chord set S ⊖ S ∪ {(0,0,1)}. This allows the algorithm to go from one vertex to another by gift-wrapping until the whole region of interest has been explored.

7.
Computing the convex hull of a set of points is a fundamental operation in many research fields, including geometric computing, computer graphics, computer vision, robotics, and so forth. This problem is particularly challenging when the number of points goes beyond some millions. In this article, we describe a very fast algorithm that copes with millions of points in a short period of time without using any kind of parallel computing. This has been made possible because the algorithm reduces the problem to sorting the input point set, which dramatically reduces the geometric computations (e.g., angles, distances, and so forth) that are typical of other algorithms. When compared with popular convex hull algorithms (namely Graham’s scan, Andrew’s monotone chain, Jarvis’ gift wrapping, Chan’s, and Quickhull), our algorithm generates the convex hull of a point set in the plane much faster than those five algorithms, without penalties in memory space.
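The idea of reducing hull construction to sorting is also the core of Andrew's monotone chain, one of the baselines mentioned above: after a single sort, the hull follows from a linear sweep that uses only cross-product sign tests, with no angle or distance computations. A compact sketch of that baseline (not the article's own algorithm):

```python
def convex_hull(points):
    """Andrew's monotone chain: sort once, then build the lower and upper
    hulls in one linear sweep using cross-product orientation tests."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # > 0 for a counter-clockwise turn o -> a -> b.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    # Each chain's last point duplicates the other chain's first point.
    return lower[:-1] + upper[:-1]
```

The sort dominates at O(n log n); the two sweeps together touch each point a constant number of times.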

8.
This paper presents a novel method for assessing the accuracy of unsupervised polygonal approximation algorithms. The measurement relies on a polygonal approximation called the “reference approximation”, obtained with the method of Perez and Vidal [11] by iteratively optimizing an objective function. The proposed measurement is then calculated by comparing the reference approximation with the approximation to be evaluated, taking into account the similarity between the polygonal approximation and the original contour and penalizing polygonal approximations with an excessive number of points. A comparative experiment using polygonal approximations obtained with commonly used algorithms showed that the proposed measurement is more effective than previously proposed measurements at comparing polygonal approximations with different numbers of points.

9.
After analyzing existing methods, this paper develops a recognition method, based on the holo-extraction of information, for digital curves scanned from paper drawings, in support of subsequent pattern recognition and 3D reconstruction. The method first constructs networks of single closed regions (SCRs) of black pixels carrying all the information about both segments and their linking points. It then classifies the digital contours represented by SCRs into three types—straight-line segments, circular arcs, and combined lines—decomposes the combined lines into the fewest basic sub-lines (straight-line segments or circular arcs) with the least fitting error using a genetic algorithm with adaptive crossover and mutation probabilities, and determines their relationships (intersecting or tangential to each other). The recognition method based on SCR networks and the genetic algorithm is verified to be feasible and efficient. The method and its software prototype can serve as a basis for further work on engineering drawing understanding and 3D reconstruction.

10.
This paper proposes a principal-curve learning algorithm for feature extraction from closed curves that combines convex hull techniques with self-organizing topological mapping, solving the problem that general principal-curve algorithms cannot effectively model closed or relatively complex data distributions. The algorithm takes the convex hull of the data set as its starting step and, by analyzing the global and local features of the data set, progressively approximates the data distribution to obtain a closed principal curve. The key to the algorithm lies in its concave-point mining procedure. Experimental results show that, for general closed-curve point sets, the method approximates the source data set well within a small number of time steps. The algorithm is simple in structure, with worst-case complexity no higher than O(n^2), and it is also of considerable practical value for extracting the outer boundary features of bounded connected regions in images and for pattern recognition.

11.
An enhanced convex-hull edge method for flatness tolerance evaluation
Flatness is one of the elementary geometric forms to be tested for ensuring the acceptable quality of machined components. The minimum-zone evaluation problem for flatness is to determine the minimum tolerance zone enclosing all the points measured from a surface of the machined component. Although a variety of attempts have been made over the past two decades to solve the problem, methods guaranteeing an exact minimum-zone solution are very limited. One category of exact methods comprises computational-geometry-based approaches such as the so-called convex-hull edge method. This paper presents a modified convex-hull edge method that significantly enhances computational efficiency, so that large data sets with thousands of points can be solved in negligible computation time on a personal computer. To test the validity of the method, a number of test data sets, including those published in the literature and new large data sets generated in this paper, were solved. The obtained solutions were compared with those produced by existing methods. The results show the effectiveness, efficiency, and robustness of the method: an exact minimum tolerance is always obtained, and each test data set is solved in less than tens of CPU milliseconds.

12.
New algorithms for computing the convex hull of a simple polygon and of a planar point set
Liu Guanghui, Chen Chuanbo. Computer Science, 2007, 34(12): 222-226
When the edges of a convex polygon are traversed in a fixed direction, its interior always lies on the same side of each edge. Based on this property, this paper proposes a new algorithm for computing the convex hull of a simple polygon, and then extends it to a new algorithm for the convex hull of a planar point set. The point-set algorithm first finds the extreme points of the set and the candidate subsets between them, then computes the ordered sequence of hull vertices between each pair of adjacent extreme points, and finally connects these ordered sequences to obtain the convex hull. The new algorithm attains the time complexity O(n log h) of the asymptotically best known algorithms for the planar convex hull problem, where n is the number of points and h is the number of hull vertices. Compared with other hull algorithms of the same complexity, the new algorithm is simple and easy to implement. Moreover, since the hull vertices are produced in order, it also lends itself more readily to space-efficient implementations built on top of it.

13.
This paper gives hypercube algorithms for some simple problems involving geometric properties of sets of points. The properties considered emphasize aspects of convexity and domination. Efficient algorithms are given for both fine- and medium-grain hypercube computers, including a discussion of implementation, running times and results on an Intel iPSC hypercube, as well as theoretical results. For both serial and parallel computers, sorting plays an important role in geometric algorithms for determining simple properties, often being the dominant component of the running time. Since the time required to sort data on a hypercube computer is still not fully understood, the running times of some of our algorithms for unsorted data are not completely determined. For both the fine- and medium-grain models, we show that faster expected-case running time algorithms are possible for point sets generated randomly. Our algorithms are developed for sets of planar points, with several of them extending to sets of points in spaces of higher dimension. The research of E. Cohen, R. Miller, and E. M. Sarraf was partially supported by National Science Foundation Grant ASC-8705104. R. Miller was also partially supported by National Science Foundation Grants DCR-8608640 and IRI-8800514. Q. F. Stout's research was partially supported by National Science Foundation Grant DCR-85-07851, and an Incentives for Excellence Grant from the Digital Equipment Corporation.

14.
A fast convex hull algorithm for massive planar point sets
This paper proposes and proves a "castle theorem" for convex hulls, and designs and implements a fast search algorithm for the "castle wall". The algorithm can serve as a data-preprocessing step for convex hull computation on massive planar point sets: it first filters a small subset of candidate points out of the full set, after which any convex hull algorithm can quickly compute the hull of the entire point set from the candidates alone.
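The pretreatment idea above—filter a small candidate subset before running any hull algorithm—can be illustrated with the classical quadrilateral (Akl-Toussaint style) heuristic: points strictly inside the polygon spanned by the axis-extreme points can never be hull vertices and may be discarded. This is a sketch of the same preprocessing idea under that assumption, not the paper's "castle wall" search; `filter_candidates` is an illustrative name:

```python
import math

def filter_candidates(points):
    """Discard every point strictly inside the polygon spanned by the four
    axis-extreme points; convex hull vertices are never discarded."""
    ex = [min(points, key=lambda p: p[0]), max(points, key=lambda p: p[0]),
          min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])]
    cx = sum(p[0] for p in ex) / len(ex)
    cy = sum(p[1] for p in ex) / len(ex)
    # Order the distinct extreme points counter-clockwise around their mean.
    poly = sorted(set(ex), key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

    def strictly_inside(p):
        if len(poly) < 3:          # degenerate polygon: keep everything
            return False
        for i in range(len(poly)):
            a, b = poly[i], poly[(i + 1) % len(poly)]
            if (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) <= 0:
                return False       # on or outside this edge
        return True

    return [p for p in points if not strictly_inside(p)]
```

For uniformly distributed inputs this removes the bulk of the points in one linear pass, which is exactly the role the abstract assigns to its pretreatment.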

15.
Point-based geometric models are gaining popularity in both the computer graphics and CAD fields. A related design/modelling problem is the focus of the reported research: drawing curves onto digital surfaces represented by clouds of points. The problem is analyzed and solved, and a set of ‘design tools’ are proposed which allow the user/designer to efficiently perform ‘product development’ (alternative name: ‘detail design’) tasks which require efficient processing of a ‘digital surface’. The primary tool is a robust and efficient point projection algorithm combined with a smoothing technique for producing smooth ‘digital curves’ lying onto the cloud surface. The new design tools are tested on a real-life industrial example with very satisfactory results, which are thoroughly presented in the paper.

16.
This paper focuses on the design of an effective method that computes the measure of circularity of a part of a digital boundary. An existing circularity measure of a set of discrete points, used in computational metrology, is extended to the case of parts of digital boundaries. From a single digital boundary, two sets of points are extracted so that the circularity measure computed from these sets is representative of the circularity of the digital boundary. The computation therefore consists of two steps. First, the inner and outer sets of points are extracted from the input part of a digital boundary using digital geometry tools. Next, the circularity measure of these sets is computed using classical tools of computational geometry. It is proved that the algorithm runs in linear time for convex parts, thanks to the specificity of digital data, and in O(n log n) time otherwise. Experiments on synthetic and real images illustrate the interest of the properties of the circularity measure.

17.
We present a blind watermarking scheme for rational Bézier and B-spline curves and surfaces which is shape-preserving and robust against the affine transformations and Möbius reparameterization that are commonly used in geometric modeling operations in CAD systems. We construct a watermark polynomial with real coefficients of degree four which has the watermark as the cross-ratio of its complex roots. We then multiply the numerator and denominator of the original curve or surface by this polynomial, increasing its degree by four but preserving its shape. Subsequent affine transformations and Möbius reparameterization leave the cross-ratio of these roots unchanged. The watermark can be extracted by finding all the roots of the numerator and denominator of the curve or surface: the cross-ratio of the four common roots will be the watermark. Experimental results confirm both the shape-preserving property and its robustness against attacks by affine transformations and Möbius reparameterization.
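The invariance the scheme relies on is easy to verify numerically: the cross-ratio of four values is unchanged by any Möbius map. A small sketch with exact rational arithmetic (illustrative root values and map coefficients, not the paper's watermark data):

```python
from fractions import Fraction

def cross_ratio(z1, z2, z3, z4):
    """Cross-ratio (z1, z2; z3, z4) = ((z1-z3)(z2-z4)) / ((z1-z4)(z2-z3))."""
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

def mobius(z, a, b, c, d):
    """Moebius map z -> (a*z + b) / (c*z + d), requiring a*d - b*c != 0."""
    return (a * z + b) / (c * z + d)

# Four sample 'roots' and a sample map: the cross-ratio survives the
# reparameterization unchanged, which is what makes the watermark robust.
roots = [Fraction(1), Fraction(2), Fraction(5), Fraction(7)]
a, b, c, d = Fraction(2), Fraction(1), Fraction(1), Fraction(3)
before = cross_ratio(*roots)
after = cross_ratio(*(mobius(z, a, b, c, d) for z in roots))
```

Using `Fraction` keeps the comparison exact; with floating point one would compare up to a tolerance.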

18.
In recent years, the digital twin has attracted widespread attention as an important means of digitalization and intelligence. However, digital twins are becoming increasingly complex as the need to simulate multi-scale, multi-scenario realities grows. Instances of digital twins in the literature mostly target a particular application, and a method is still lacking for constructing a complex digital twin that covers all elements, variable-scale working environments, changeable processes, and even their coupling effects. In this paper, a novel modeling method for such a complex digital twin is proposed, based on standardized model division and assembly. First, the complex digital twin model is divided into several simple models according to composition, context, component, and code in a 4C architecture. Composition and context focus the digital twin on the effective elements of a specific scale and scenario; component and code develop the digital twin through standards-based modularization. Second, the simple digital twin models are assembled into the complex model through information fusion, multi-scale association, and multi-scenario iteration. An ontology establishes a complete information library of the entities in the different digital twins; a knowledge graph bridges the structural relationships between digital twins at different scales; and scenario iteration realizes behavior interaction and accurate calculation results. The method provides an implementable way to construct a complex digital twin model, and the reuse of components and code also enables rapid development of digital twins.

19.
In this paper we give parallel algorithms for a number of problems defined on point sets and polygons. All our algorithms have optimal T(n) · P(n) products, where T(n) is the time complexity and P(n) is the number of processors used, and are for the EREW PRAM or CREW PRAM models. Our algorithms provide parallel analogues to well-known phenomena from sequential computational geometry, such as the fact that problems for polygons can oftentimes be solved more efficiently than point-set problems, and that nearest-neighbor problems can be solved without explicitly constructing a Voronoi diagram. The research of R. Cole was supported in part by NSF Grants CCR-8702271, CCR-8902221, and CCR-8906949, by ONR Grant N00014-85-K-0046, and by a John Simon Guggenheim Memorial Foundation fellowship. M. T. Goodrich's research was supported by the National Science Foundation under Grant CCR-8810568 and by the National Science Foundation and DARPA under Grant CCR-8908092.

20.
A digital straight line segment is defined as the grid-intersect quantization of a straight line segment in the plane. Let S be a set of pixels on a square grid. Rosenfeld [8] showed that S is a digital straight line segment if and only if it is a digital arc having the chord property. Kim and Rosenfeld [3,6] then showed that S has the chord property if and only if, for every p, q ∈ S, there is a digital straight line segment C ⊆ S such that p and q are the extremities of C. We give a simple proof of these two results based on the Transversal Theorem of Santaló. We show how the underlying methodology can be generalized to the case of (infinite) digital straight lines and to the quantization of hyperplanes in n-dimensional space for n ≥ 3.
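The chord property itself can be checked approximately by sampling each chord and testing that every sampled point lies within L-infinity distance < 1 of some pixel of S. A numeric sketch (sampling makes it approximate rather than exact; `has_chord_property` is an illustrative name):

```python
from itertools import combinations

def has_chord_property(S, samples=50):
    """Approximate test of Rosenfeld's chord property: every real point on
    the chord between two pixels of S must lie within L-infinity distance
    < 1 of some pixel of S."""
    for p, q in combinations(S, 2):
        for k in range(samples + 1):
            t = k / samples          # sample the chord at samples+1 points
            x = p[0] + t * (q[0] - p[0])
            y = p[1] + t * (q[1] - p[1])
            if not any(max(abs(x - i), abs(y - j)) < 1 for (i, j) in S):
                return False
    return True
```

An exact test would intersect each chord with the unit squares around the pixels of S instead of sampling; the sketch only conveys the geometric condition.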


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号