18 similar documents found (search time: 187 ms)
1.
Point-in-polygon detection is a fundamental problem in computational geometry with a wide range of applications. Many methods have been proposed to accelerate the test by reducing the number of polygon edges that must be examined. Among them, the uniform grid method works well, because each grid cell contains few edges and a query point can be located in a cell immediately. We previously proposed a uniform grid method that precomputes, for each cell, whether its center lies inside or outside the polygon; the query point is then connected to the center of its cell, and only the intersections of that segment with the polygon's edges need to be examined. Its preprocessing and query complexities are O(N) and O(√N), respectively, where N is the number of polygon edges. This paper improves on that method by precomputing the inside/outside attribute of the grid intersection points and by replacing the segment from the query point to a nearby grid intersection with two connected axis-parallel segments, which simplifies the intersection computation with the polygon's edges. Experimental results show that the query speed can be more than doubled.
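The core idea above can be illustrated with a minimal Python sketch (the helper names are hypothetical, and the sketch tests all polygon edges rather than only the few stored in the query point's grid cell): precompute the inside/outside attribute of a reference point once, then classify a query point by flipping that attribute once per polygon edge crossed by the segment joining the two points.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly cross."""
    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return d1*d2 < 0 and d3*d4 < 0

def point_in_polygon(pt, poly):
    """Classic even-odd ray casting; used offline to classify reference points."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i+1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1)*(x2 - x1)/(y2 - y1):
            inside = not inside
    return inside

def classify_via_center(pt, poly, center, center_inside):
    """Flip the precomputed attribute of `center` once per polygon edge
    crossed by the short segment pt -> center.  In the grid method only
    the edges bucketed in the traversed cells would be tested."""
    inside = center_inside
    n = len(poly)
    for i in range(n):
        if segments_intersect(pt, center, poly[i], poly[(i+1) % n]):
            inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
center_inside = point_in_polygon((2, 2), square)      # precomputed once
assert classify_via_center((1, 1), square, (2, 2), center_inside) is True
assert classify_via_center((5, 2), square, (2, 2), center_inside) is False
```

Replacing the oblique segment with two axis-parallel segments, as the paper proposes, makes each edge-crossing test a cheaper comparison against a horizontal or vertical line.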
2.
To address the three main problems of line-clipping algorithms when large-scale vector lines meet a large number of clipping windows — reducing the number of segment intersection computations, simplifying the computation of the in/out attribute of intersection points, and deciding whether to keep vector lines with no intersections — this paper proposes an algorithm for clipping large-scale line maps against arbitrary polygons based on dual spatial indexes. The algorithm builds both an R-tree index and a uniform cell index on the edges of the clipping polygon and exploits the respective strengths of the two indexes to greatly reduce the number of intersection computations between the clipped segments and the edges of the clipping polygon. On this basis, a local ray method built on the uniform grid index is proposed to simplify the in/out attribute computation and the retention test for intersection-free vector lines. The paper improves on traditional algorithms in three respects. First, segment intersection is computed with two spatial index models, which guarantees a low theoretical time complexity. Second, the local ray method, which combines ray casting with the grid index, determines the in/out attribute of each intersection point in O(1) to O(n^(1/2)) time; compared with the algorithms in the references, it avoids testing the orientation of polygon vertices. Finally, the clipping polygon may be any simple polygon containing an arbitrary number of holes, removing the constraints that traditional algorithms place on the clipping polygon.
3.
Detecting the edges of a polygon visible from a point is a fundamental computation in computational geometry, and this paper proposes an accelerated algorithm for it. The polygon is first decomposed into convex fragments so that the fast computation of visible edges in a convex polygon can be exploited; a grid structure is then used to process the fragments from near to far, avoiding convex fragments that are occluded. The algorithm parallelizes conveniently on the grid structure and handles polygons with and without holes uniformly. Its preprocessing time complexity is O(n), its space complexity is also a low O(n), and its query time complexity adapts between O(log n) and O(n), where n is the number of polygon edges.
4.
A line clipping algorithm against polygonal windows based on convex decomposition
The polygon is decomposed into convex polygons without introducing new vertices, and a binary tree is built on the edges of these polygons to manage them. During clipping, the binary tree is used to quickly locate the convex polygons that intersect the clipped line, and an efficient convex-polygon clipping algorithm then performs the line clipping. The method adaptively reduces the complexity of the clipping computation, which varies between O(log n) and O(n) and is below O(n) in most cases, where n is the number of polygon edges. Although the method requires preprocessing, in many applications (such as clipping a polygon against a polygonal window) its total running time, including both preprocessing and clipping, is much lower than that of existing algorithms that need no preprocessing.
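The efficient convex-polygon clipping step the abstract relies on can be realized with standard Cyrus-Beck parametric clipping; the sketch below is an illustration of that standard technique under the assumption of a counter-clockwise convex window, not the paper's exact routine.

```python
def clip_segment_convex(p0, p1, poly_ccw):
    """Cyrus-Beck clipping of segment p0->p1 against a convex polygon
    given in counter-clockwise order.  Returns the clipped endpoints,
    or None when the segment lies entirely outside."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    t_in, t_out = 0.0, 1.0
    n = len(poly_ccw)
    for i in range(n):
        (ex1, ey1), (ex2, ey2) = poly_ccw[i], poly_ccw[(i + 1) % n]
        nx, ny = -(ey2 - ey1), ex2 - ex1            # inward edge normal
        denom = nx*dx + ny*dy                       # n . d
        num = nx*(p0[0] - ex1) + ny*(p0[1] - ey1)   # n . (p0 - E)
        if denom == 0:
            if num < 0:                 # parallel to the edge and outside
                return None
            continue
        t = -num / denom
        if denom > 0:                   # segment entering this half-plane
            t_in = max(t_in, t)
        else:                           # segment leaving this half-plane
            t_out = min(t_out, t)
        if t_in > t_out:
            return None
    return ((p0[0] + t_in*dx,  p0[1] + t_in*dy),
            (p0[0] + t_out*dx, p0[1] + t_out*dy))

window = [(0, 0), (4, 0), (4, 4), (0, 4)]           # CCW square window
assert clip_segment_convex((-2, 2), (6, 2), window) == ((0.0, 2.0), (4.0, 2.0))
assert clip_segment_convex((5, 5), (6, 6), window) is None
```

In the paper's scheme, the binary tree over the convex pieces would first select which convex polygons can intersect the line, and a routine like this would then be run only on those pieces.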
5.
An energy-hole mitigation mechanism for wireless sensor networks (WSNs) with non-uniformly distributed nodes is designed on a grid scheduling feedback structure with dual cluster heads. A primary/secondary cluster-head grid clustering algorithm forms the grid cells, and identification rules defined on node IDs and grid IDs determine which WSN nodes belong to each cell. A mathematical model computes the center point of each grid cell, whose coordinates determine the cluster head of the cell and schedule the nodes within it. A primary-secondary-neighbor cluster-head data scheduling and transmission structure effectively spreads the load carried by the nodes, and the performance of the mechanism is analyzed theoretically. Simulation results show that, compared with other mechanisms, the algorithm avoids network energy holes more effectively under non-uniform node deployment, achieves the longest node lifetime, and significantly eliminates the "funnel effect".
6.
7.
8.
Applications of tree structures to the N-body problem
Numerical simulation of the N-body problem requires computing the interaction between every pair of particles at each time step, with complexity O(N²). Tree-structured codes not only reduce the storage overhead but also lend themselves to fast computation and parallel partitioning. Both the Barnes-Hut algorithm (BHA) and the fast multipole method (FMM) are tree-based fast algorithms. The BHA quickly computes the field force on each point with complexity O(N log N), but its accuracy is typically only about 1%; the FMM computes the potential at each point through hierarchical partitioning and multipole expansion of the potential function, achieving O(N) complexity at arbitrary accuracy. Numerical results show that the tree structures also parallelize well.
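A compact sketch of the Barnes-Hut idea (illustrative only; the node layout, the opening criterion, and parameters such as theta = 0.5 are my choices, and distinct particle positions are assumed): particles are inserted into a quadtree whose nodes accumulate total mass and center of mass, and the acceleration at a point treats any node with size/distance below theta as a single pseudo-particle.

```python
import math

class Node:
    """Quadtree node over a square cell centered at (cx, cy) with half-width `half`."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half
        self.mass = 0.0
        self.mx = self.my = 0.0        # mass-weighted coordinate sums
        self.body = None               # (x, y, m) when the node is a leaf
        self.children = None

    def insert(self, x, y, m):
        if self.children is None and self.body is None:
            self.body = (x, y, m)      # empty leaf: store the particle
        else:
            if self.children is None:  # occupied leaf: split and push down
                h = self.half / 2
                self.children = [Node(self.cx + dx*h, self.cy + dy*h, h)
                                 for dx in (-1, 1) for dy in (-1, 1)]
                bx, by, bm = self.body
                self.body = None
                self._child(bx, by).insert(bx, by, bm)
            self._child(x, y).insert(x, y, m)
        self.mass += m
        self.mx += m * x
        self.my += m * y

    def _child(self, x, y):
        return self.children[2*(x >= self.cx) + (y >= self.cy)]

    def accel(self, x, y, theta=0.5, G=1.0, eps=1e-12):
        """Acceleration at (x, y); sufficiently distant nodes act as pseudo-particles."""
        if self.mass == 0.0:
            return 0.0, 0.0
        dx = self.mx / self.mass - x
        dy = self.my / self.mass - y
        r = math.hypot(dx, dy)
        if self.children is None or (2*self.half) / (r + eps) < theta:
            if r < 1e-9:               # ignore self-interaction
                return 0.0, 0.0
            a = G * self.mass / r**3
            return a*dx, a*dy
        ax = ay = 0.0
        for c in self.children:
            cax, cay = c.accel(x, y, theta, G, eps)
            ax += cax
            ay += cay
        return ax, ay

root = Node(0.0, 0.0, 4.0)
root.insert(2.0, 0.0, 1.0)
root.insert(-2.0, 0.0, 1.0)
ax, ay = root.accel(0.0, 0.0)          # symmetric masses: net zero
assert abs(ax) < 1e-12 and abs(ay) < 1e-12
```

The FMM differs in that it expands the potential itself in multipoles and translates the expansions between levels, which is how it reaches O(N) at controllable accuracy.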
9.
10.
Boolean operations on polygonal models involve complex intersection computation and polygon reconstruction, in which precision control and processing efficiency are the key issues. To reduce the complexity of Boolean operations, a hardware-acceleration-friendly method is proposed that generates multi-level-of-detail mesh models through progressive Boolean operations. The method approximates the closed boundary of a polygonal solid with layered depth images, reducing the polygon intersection computation to inside/outside solid classification of axis-aligned sample points. To avoid resampling at every level of detail, boundary sample points are progressively merged into lower-resolution cubes; a feature-preserving polygon reconstruction algorithm then converts the boundary sample points within each cube into polygon vertices, from which the mesh model is generated according to adjacency. The algorithm is implemented in parallel with CUDA on graphics hardware, and experimental results demonstrate its feasibility.
11.
12.
The efficiency of algorithms that automatically generate topological relations directly affects operations such as building and querying the spatial relations of geographic data. During practical software development, the authors found that a double-neighbor-point test greatly reduces the computation in the two most critical steps of the algorithm — determining the region ownership of polygons and testing the containment relation between a point and a polygon — and thereby significantly improves the algorithm's efficiency.
13.
The point-in-polygon query in geographical information systems under certainty and uncertainty is formally analyzed in this paper. It is argued that points and polygons can be precise, fuzzy (imprecise), and random (with error) under different representation schemes. Under certainty, points and polygons can generally be represented by their characteristic functions. Under imprecision-induced uncertainty, they can be represented by fuzzy sets characterized by membership functions. If uncertainty is induced by randomness, points and polygons can be described by locational error models in which probability arguments are employed. Points and polygons under certainty turn out to be a special case of those under imprecision- and randomness-induced uncertainty.

Since points and polygons may be precisely, imprecisely, or randomly captured or recognized within a spatial information system, the point-in-polygon query is rather complicated, and answering it is not straightforward. In general, the query can be entertained under nine basic situations: whether a precise, fuzzy, or random point lies in a precise, fuzzy, or random polygon. As a consequence, the answer to the query may take on various forms with certain types of uncertainty arguments attached, and deriving the results involves the integrative use of fuzzy set and probability theories.

The present analysis clarifies some unresolved issues of the point-in-polygon query and provides a generalization of its entertainment. Furthermore, it sheds light on the way certainty and uncertainty can be addressed and implemented in spatial information systems.
14.
The point-in-polygon or containment test is fundamental in computational geometry and is applied intensively in geographical information systems. When this test is repeated several times with the same polygon, a data structure is necessary to reduce the linear time needed to obtain an inclusion result. Different approaches, such as grids or quadtrees, have been proposed in the literature for reducing the complexity of these algorithms. We propose a new data structure based on hierarchical subdivisions by means of tri-cones, which reduces the time necessary for this inclusion test. This data structure, named the tri-tree, decomposes the whole space by means of tri-cones and classifies the edges of the polygon; for the inclusion test, only the edges classified in one tri-cone are considered. The tri-tree has a set of properties that makes it especially suited to this aim, with a complexity similar to other data structures but well suited to polygons that represent regions. We include a theoretical and practical study in which the new data structure is compared with classical ones, showing a significant improvement.
15.
16.
This paper presents a scheme for decomposing polygons called the multi-L-REP. The scheme can be considered a generalization of the L-REP decomposition, which associates the edges of a polygon with a set of layered triangles; in the multi-L-REP these layered triangles are grouped into the regions of a plane division. The paper also shows some alternative algorithms for its construction and one of its applications: the point-in-polygon inclusion test. Finally, a special case of the multi-L-REP is presented that has several interesting properties and yields a very fast point-in-polygon inclusion test.
17.
The paper describes a new algorithm for solving the point-in-polygon problem. It is especially suitable when it is necessary to check whether many points are placed inside or outside a polygon. The algorithm works in two steps. First, a grid of cells equal in size is generated, and the polygon is laid on that grid; a heuristic approach is proposed for cell dimensioning. The cells of the grid are marked as being inside, outside, or on the polygon border, and a modified flood-fill algorithm is applied for cell classification. In the second step, points are tested individually. If the tested point falls into an inner or an outer cell, the result is returned without any additional calculations; if the cell contains the polygon border, it is still possible to determine the local point position. The analysis of time complexity shows that the initialization is finished in … time, while the expected time complexity for checking an individual point is …, where n represents the number of polygon edges. The algorithm works with O(n) space complexity. The paper also gives practical results using artificial and real polygons from a GIS environment.
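The two-step scheme can be sketched as follows (my reconstruction under simple assumptions, not the paper's code: border cells are found by dense sampling of the edges rather than a robust supercover rasterization, and the flood fill starts from the bounding-box rim):

```python
from collections import deque

OUT, BORDER, IN = 0, 1, 2

def build_grid(poly, nx, ny):
    """Classify nx*ny cells over the polygon's bounding box as OUT/BORDER/IN."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    x0, y0 = min(xs), min(ys)
    w, h = (max(xs) - x0) / nx, (max(ys) - y0) / ny
    grid = [[IN] * ny for _ in range(nx)]          # provisional labels
    # Mark cells touched by the polygon border (dense edge sampling keeps
    # the marked cells at least 8-connected, which blocks the 4-connected
    # flood fill below).
    n = len(poly)
    for i in range(n):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % n]
        steps = 4 * (nx + ny)
        for s in range(steps + 1):
            t = s / steps
            px, py = ax + t * (bx - ax), ay + t * (by - ay)
            grid[min(int((px - x0) / w), nx - 1)][min(int((py - y0) / h), ny - 1)] = BORDER
    # Flood fill from the rim: every non-border cell reachable from the
    # bounding-box boundary without crossing a BORDER cell is outside.
    dq = deque()
    for cx in range(nx):
        for cy in range(ny):
            if (cx in (0, nx - 1) or cy in (0, ny - 1)) and grid[cx][cy] == IN:
                grid[cx][cy] = OUT
                dq.append((cx, cy))
    while dq:
        cx, cy = dq.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ux, uy = cx + dx, cy + dy
            if 0 <= ux < nx and 0 <= uy < ny and grid[ux][uy] == IN:
                grid[ux][uy] = OUT
                dq.append((ux, uy))
    return grid, (x0, y0, w, h)

def query(pt, grid, meta, poly):
    """O(1) lookup for IN/OUT cells; ray casting only inside BORDER cells."""
    x0, y0, w, h = meta
    nx, ny = len(grid), len(grid[0])
    cx, cy = int((pt[0] - x0) / w), int((pt[1] - y0) / h)
    if not (0 <= cx < nx and 0 <= cy < ny):
        return False
    if grid[cx][cy] != BORDER:
        return grid[cx][cy] == IN
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

diamond = [(5, 0), (10, 5), (5, 10), (0, 5)]
grid, meta = build_grid(diamond, 10, 10)
assert query((5.0, 5.0), grid, meta, diamond) is True     # inner cell: lookup only
assert query((0.5, 0.5), grid, meta, diamond) is False    # outer cell: lookup only
assert query((5.0, 0.5), grid, meta, diamond) is True     # border cell: ray cast
```

Only queries that land in border cells pay more than constant time, which is what makes the scheme attractive when many points are tested against one polygon.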