Similar Literature
20 similar records found (search time: 468 ms)
1.
The convex differences tree (CDT) representation of a simple polygon is useful in computer graphics, computer vision, computer-aided design, and robotics. The root of the tree contains the convex hull of the polygon, and there is a child node recursively representing every connectivity component of the set difference between the convex hull and the polygon. We give an O(n log K + K log² n) time algorithm for constructing the CDT, where n is the number of polygon vertices and K is the number of nodes in the CDT. The algorithm is adaptive to a complexity measure defined on its output while still being worst-case efficient. For simply shaped polygons, where K is a constant, the algorithm is linear. In the worst case K = O(n) and the complexity is O(n log² n). We also give an O(n log n) algorithm which is an application of the recently introduced compact interval tree data structure.
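As an illustration of the recursive definition only, here is a minimal Python sketch that builds a CDT using the shapely library (an assumed dependency); it does not reproduce the paper's adaptive O(n log K + K log² n) algorithm.

```python
# Sketch of convex differences tree (CDT) construction using shapely.
# This illustrates the recursive definition only, not the paper's
# O(n log K + K log^2 n) adaptive algorithm.
from shapely.geometry import Polygon, MultiPolygon

def build_cdt(poly: Polygon) -> dict:
    """Return a CDT node: the convex hull plus one child per pocket."""
    hull = poly.convex_hull
    pockets = hull.difference(poly)          # set difference: hull \ polygon
    if pockets.is_empty:
        components = []
    elif isinstance(pockets, MultiPolygon):
        components = list(pockets.geoms)     # one pocket per connectivity component
    else:
        components = [pockets]
    return {
        "hull": hull,
        "children": [build_cdt(p) for p in components if p.area > 1e-12],
    }

# Example: an L-shaped (non-convex) polygon has a single pocket.
l_shape = Polygon([(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)])
tree = build_cdt(l_shape)
print(len(tree["children"]))   # -> 1
```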

2.
A coloring of a tree is convex if the vertices that pertain to any color induce a connected subtree; a partial coloring (which assigns colors to only some of the vertices) is convex if it can be completed to a convex (total) coloring. Convex colorings of trees arise in areas such as phylogenetics and linguistics: for example, a perfect phylogenetic tree is one in which the states of each character induce a convex coloring of the tree. When a coloring of a tree is not convex, it is desirable to know "how far" it is from a convex one, and which convex colorings are "closest" to it. In this paper we study a natural definition of this distance, the recoloring distance, which is the minimal number of color changes at the vertices needed to make the coloring convex. We show that finding this distance is NP-hard even for a colored string (a path), and for some other interesting variants of the problem. On the positive side, we present algorithms for computing the recoloring distance under some natural generalizations of this concept: the first generalization is the uniform weighted model, where each vertex has a weight which is the cost of changing its color; the other is the non-uniform model, in which the cost of coloring a vertex v by a color d is an arbitrary non-negative number cost(v,d). Our first algorithms find optimal convex recolorings of strings and bounded-degree trees under the non-uniform model in time which, for any fixed number of colors, is linear in the input size. Next we improve these algorithms for the uniform model to run in time which is linear in the input size for a fixed number of bad colors, i.e., colors which violate convexity in some natural sense. Finally, we generalize the above result to trees of unbounded degree.
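A small Python sketch of the underlying notions, under the simplifying assumption of uniform unit costs: a convexity test (every color class must induce a connected subtree) and a brute-force recoloring distance usable only on tiny trees; the paper's efficient algorithms are not reproduced.

```python
# Sketch: convexity test for a tree coloring, plus a brute-force (uniform,
# unit-cost) recoloring distance for very small trees. Illustrative only --
# the paper's algorithms are far more efficient.
from itertools import product

def is_convex(adj, coloring):
    """A coloring is convex if every color class induces a connected subtree."""
    for color in set(coloring.values()):
        nodes = {v for v, c in coloring.items() if c == color}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:                      # DFS restricted to this color class
            v = stack.pop()
            for u in adj[v]:
                if u in nodes and u not in seen:
                    seen.add(u)
                    stack.append(u)
        if seen != nodes:
            return False
    return True

def recoloring_distance(adj, coloring, colors):
    """Minimal number of vertices to recolor so the coloring becomes convex."""
    vertices = list(coloring)
    best = len(vertices)
    for assignment in product(colors, repeat=len(vertices)):  # c^n candidates
        cand = dict(zip(vertices, assignment))
        if is_convex(adj, cand):
            changed = sum(coloring[v] != cand[v] for v in vertices)
            best = min(best, changed)
    return best

# A path a-b-c colored red, blue, red is not convex; one change fixes it.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
coloring = {"a": "red", "b": "blue", "c": "red"}
print(is_convex(adj, coloring))                              # -> False
print(recoloring_distance(adj, coloring, ["red", "blue"]))   # -> 1
```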

3.
We present the first exact and robust implementation of the 3D Minkowski sum of two non-convex polyhedra. Our implementation decomposes the two polyhedra into convex pieces, performs pairwise Minkowski sums on the convex pieces, and constructs their union. We achieve exactness and the handling of all degeneracies by building upon 3D Nef polyhedra as provided by Cgal. The implementation also supports open and closed polyhedra. This allows the handling of degenerate scenarios like the tight-passage problem in robot motion planning. The bottleneck of our approach is the union step. We address efficiency by optimizing this step in two ways: we implement an efficient decomposition that yields a small number of convex pieces, and we develop, test, and optimize multiple strategies for uniting the partial sums by consecutive binary union operations. The decomposition that we implemented as part of the Minkowski sum is interesting in its own right. It is the first robust implementation of a decomposition of polyhedra into convex pieces that yields at most O(r²) pieces, where r is the number of edges whose adjacent facets form an angle of more than 180 degrees with respect to the interior of the polyhedron. This work was partially supported by the Netherlands' Organisation for Scientific Research (NWO) under project no. 639.023.301.
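For intuition, a minimal 2D sketch of the convex/convex building block (pairwise vertex sums followed by a convex hull, here via scipy, an assumed dependency); the paper's 3D pipeline based on Cgal Nef polyhedra is not reproduced.

```python
# Sketch of the convex/convex building block in 2D: the Minkowski sum of two
# convex polygons is the convex hull of all pairwise vertex sums. The paper's
# pipeline (convex decomposition, pairwise sums, exact union via CGAL Nef
# polyhedra) works in 3D and is not reproduced here.
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum_convex(P, Q):
    """P, Q: vertex arrays of convex polygons, shapes (n, 2) and (m, 2)."""
    sums = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)  # all n*m vertex sums
    hull = ConvexHull(sums)
    return sums[hull.vertices]          # hull vertices in counter-clockwise order

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
triangle = np.array([[0, 0], [2, 0], [0, 2]], dtype=float)
print(minkowski_sum_convex(square, triangle))
```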

4.
Mining association rules in relational databases is a significant computational task with many applications. A fundamental ingredient of this task is the discovery of sets of attributes (itemsets) whose frequency in the data exceeds some threshold value. In this paper we describe two algorithms for completing the calculation of frequent sets using a tree structure for storing partial supports, called the interim-support (IS) tree. The first of our algorithms, T-Tree-First (TTF), uses a novel tree pruning technique, based on the notion of (fixed-prefix) potential inclusion, which is specially designed for trees that are implemented using only two pointers per node. This allows the IS tree to be implemented in a space-efficient manner. The second algorithm, P-Tree-First (PTF), explores the idea of storing the frequent itemsets in a second tree structure, called the total support tree (T-tree); the main innovation lies in the use of multiple pointers per node, which provides rapid access to the nodes of the T-tree and makes it possible to design a new, usually faster, method for updating them. Experimental comparison shows that these techniques result in considerable speedup for both algorithms compared with earlier approaches that also use IS trees (Principles of Data Mining and Knowledge Discovery, Proceedings of the 5th European Conference, PKDD 2001, Freiburg, September 2001, Lecture Notes in Artificial Intelligence, vol. 2168, Springer: Berlin, Heidelberg, 54–66; Knowledge-Based Systems 2000; 13:141–149). Further comparison between the two new algorithms shows that PTF is generally faster on instances with a large number of frequent itemsets, provided that they are relatively short, whereas TTF is more appropriate whenever there are few or quite long frequent itemsets; in addition, TTF behaves well on instances in which the densities of the items of the database have a high variance. Copyright © 2008 John Wiley & Sons, Ltd.
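A minimal sketch of plain level-wise (Apriori-style) frequent itemset counting, for orientation only; the IS-tree and T-tree structures and the pruning techniques described above are not reproduced.

```python
# Sketch of level-wise frequent itemset mining (plain Apriori-style counting).
# The paper's IS-tree / T-tree structures and pruning are not reproduced here.
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support count} for every itemset meeting min_support."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    current = [frozenset([i]) for i in items]
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Candidate (k+1)-itemsets: unions of frequent k-itemsets all of whose
        # k-subsets are themselves frequent (the Apriori pruning rule).
        candidates = set()
        for a, b in combinations(level, 2):
            u = a | b
            if len(u) == len(a) + 1 and all(frozenset(s) in level
                                            for s in combinations(u, len(a))):
                candidates.add(u)
        current = list(candidates)
    return frequent

data = [frozenset(t) for t in ({"a", "b", "c"}, {"a", "b"}, {"a", "c"},
                               {"b", "c"}, {"a", "b", "c"})]
for itemset, count in frequent_itemsets(data, min_support=3).items():
    print(sorted(itemset), count)
```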

5.
Constraint satisfaction problems are ubiquitous in artificial intelligence, and many algorithms have been developed for their solution. This paper provides a unified survey of some of these, in terms of three classes: (i) tree search, (ii) arc consistency (AC), and (iii) hybrid tree search/arc consistency algorithms. It is shown that several important algorithms, when slightly rearranged, are of the latter hybrid form, but with arc consistency components that do not necessarily achieve full arc consistency at the tree nodes. Accordingly, we define several new partial AC procedures, AC1/5, AC1/4, AC1/3, and AC1/2, analogous to the well-known full AC algorithms which Mackworth has called AC1, AC2, and AC3. The fractional suffixes on our AC algorithms are roughly proportional to the degree of partial arc consistency they achieve. Unlike traditional versions, our AC algorithms (full and partial) are presented in a parameterized form to allow them to be embedded efficiently at the nodes of a tree search process. Algorithm complexities are compared empirically, using the n-queens problem and a new variant called confused n-queens. Gaschnig's Backmarking (a tree search algorithm) and Haralick's Forward Checking (a hybrid algorithm) are found to be the most efficient. For the hybrid algorithms, we find that it pays to do little arc consistency processing at the nodes: more nodes are expanded, but the work per node is reduced enough that less work is done over the whole tree. The unified view taken here suggests several new algorithms; preliminary results show one of these to be the best algorithm so far.
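A small sketch of the hybrid idea on n-queens, assuming plain backtracking with forward checking; the paper's parameterized partial-AC procedures (AC1/5 through AC1/2) are not reproduced.

```python
# Sketch: backtracking tree search with forward checking on n-queens.
# This only illustrates the hybrid "search + filtering" idea; the paper's
# partial-AC procedures are not reproduced.
def solve_n_queens_fc(n):
    domains = [set(range(n)) for _ in range(n)]     # domain of column i = candidate rows

    def search(col, assignment, domains):
        if col == n:
            return assignment
        for row in sorted(domains[col]):
            pruned = []                             # (future column, removed row) pairs
            ok = True
            for future in range(col + 1, n):        # forward check: filter future domains
                for r in (row, row + (future - col), row - (future - col)):
                    if r in domains[future]:
                        domains[future].discard(r)
                        pruned.append((future, r))
                if not domains[future]:             # a future column wiped out: dead end
                    ok = False
                    break
            if ok:
                result = search(col + 1, assignment + [row], domains)
                if result is not None:
                    return result
            for future, r in pruned:                # undo the filtering on backtrack
                domains[future].add(r)
        return None

    return search(0, [], domains)

print(solve_n_queens_fc(8))   # one valid placement, e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```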

6.
For multimodal optimization problems, an improved differential evolution algorithm based on a generalized convex lower-bound estimation model is proposed. First, a model transformation converts the original optimization problem into an increasing-along-rays convex optimization problem constrained to the unit simplex. Second, drawing on generalized convexity theory and the fitness information of individuals updated during differential evolution, a generalized convex lower-bound estimation model of the original problem is built, together with a fast N-ary-tree-based method for evaluating the model. Furthermore, by weighing the difference between the true objective value and its estimate, a niching indicator based on biased sampling is proposed, and a region evolution tree update strategy is designed to preserve the algorithm's local search ability. Numerical experiments show that the proposed algorithm effectively discovers and maintains a number of satisfactory solution modes, and achieves an adaptive, smooth transition from global mode search to local refinement within modes. On the given test problems it finds all global optima as well as several good local optima.
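A minimal sketch of the underlying DE/rand/1/bin differential evolution loop, shown on a standard multimodal test function; the paper's generalized convex lower-bound estimation model, N-ary-tree evaluation, biased-sampling niching indicator, and region evolution tree are omitted.

```python
# Sketch of a basic DE/rand/1/bin differential evolution loop. The paper's
# additions (convex lower-bound estimation model, N-ary-tree evaluation,
# biased-sampling niching, region evolution tree) are omitted.
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds).T
    dim = len(low)
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), low, high)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                # greedy selection
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmin(fitness)], fitness.min()

# Minimize a simple multimodal test function (Rastrigin, 2-D).
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
best_x, best_f = de_rand_1_bin(rastrigin, bounds=[(-5.12, 5.12)] * 2)
print(best_x, best_f)
```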

7.
A pair (T,C) of a tree T and a coloring C is called a colored tree. Given a colored tree (T,C), any coloring C′ of T is called a recoloring of T. Given a weight function on the vertices of the tree, the recoloring distance of a recoloring is the total weight of recolored vertices. A coloring of a tree is convex if, for any two vertices u and v that are colored by the same color c, every vertex on the path from u to v is also colored by c. In the minimum convex recoloring problem we are given a colored tree and a weight function, and our goal is to find a convex recoloring of minimum recoloring distance. The minimum convex recoloring problem arises naturally in the context of phylogenetic trees. Given a set of related species, the goal of phylogenetic reconstruction is to construct a tree that best describes the evolution of this set of species. In this context a convex coloring corresponds to a perfect phylogeny. Since a perfect phylogeny is not always possible, the next best thing is to find a tree which is as close to convex as possible, or, in other words, a tree with minimum recoloring distance. We present a (2+ε)-approximation algorithm for the minimum convex recoloring problem, whose running time is O(n² + n(1/ε)²·4^(1/ε)). This result improves the previously known 3-approximation algorithm for this NP-hard problem. We also present an algorithm for computing an optimal convex recoloring whose running time depends on n*, the number of colors that violate convexity in the input tree, and on Δ, the maximum degree of vertices in the tree. The parameterized complexity of this algorithm is O(n² + nk·2^k).

8.
We consider the following problem: given ordered labeled trees S and T, can S be obtained from T by deleting nodes? Deletion of the root node u of a subtree means replacing that subtree by the subtrees rooted at the children of u. For the tree inclusion problem, there can in general be exponentially many ways to obtain the included tree. P. Kilpeläinen and H. Mannila [5,7] gave an algorithm based on dynamic programming requiring time and space in the worst case, and also on the average, for solving this problem. We give an algorithm whose idea is similar to that of [5,7] but which improves the previous one and, on the average, breaks the barrier. Received: 4 November 1996 / 2 March 2001

9.
A line clipping algorithm for polygonal windows based on convex decomposition
The polygon is decomposed into convex polygons without introducing new vertices, and a binary tree built over the edges of these convex polygons is used to manage them. During clipping, the binary tree is used to quickly locate the convex polygons intersected by the line being clipped, and an efficient convex-polygon clipping algorithm then performs the line clipping. The method adaptively reduces the complexity of the clipping computation, which varies between O(log n) and O(n) and is below O(n) in most cases, where n is the number of polygon edges. Although the method requires preprocessing, in many applications (such as clipping a polygon against a polygonal window) its total execution time (preprocessing plus clipping) is much lower than that of existing clipping algorithms that need no preprocessing.
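A minimal sketch of the per-piece step, assuming Cyrus–Beck parametric clipping of a segment against a single convex polygon; the convex decomposition and the binary-tree lookup described above are not reproduced.

```python
# Sketch of the per-piece step: clipping a segment against one convex polygon
# with the Cyrus-Beck parametric test. The convex decomposition and the
# binary-tree lookup over the pieces are not reproduced here.
def clip_segment_convex(p0, p1, polygon):
    """polygon: convex vertices in counter-clockwise order. Returns clipped segment or None."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    t_enter, t_exit = 0.0, 1.0
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        nx, ny = (by - ay), -(bx - ax)   # outward normal of edge a->b for a CCW polygon
        denom = nx * dx + ny * dy
        num = nx * (ax - p0[0]) + ny * (ay - p0[1])
        if denom == 0:
            if num < 0:                  # segment parallel to edge and fully outside
                return None
            continue
        t = num / denom
        if denom < 0:                    # entering the edge's half-plane
            t_enter = max(t_enter, t)
        else:                            # leaving the edge's half-plane
            t_exit = min(t_exit, t)
        if t_enter > t_exit:
            return None
    lerp = lambda t: (p0[0] + t * dx, p0[1] + t * dy)
    return lerp(t_enter), lerp(t_exit)

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(clip_segment_convex((-2, 2), (6, 2), square))   # -> ((0.0, 2.0), (4.0, 2.0))
```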

10.
In the connected dominating set problem we are given an n-node undirected graph and are asked to find a minimum-cardinality connected subset S of nodes such that each node not in S is adjacent to some node in S. This problem is also equivalent to finding a spanning tree with the maximum number of leaves. Despite its relevance in applications, the best known exact algorithm for the problem is the trivial Ω(2^n) algorithm that enumerates all subsets of nodes. This is not the case for the general (unconnected) version of the problem, for which much faster algorithms are available. Such a difference is not surprising, since connectivity is a global property, and non-local problems are typically much harder to solve exactly. In this paper we break the 2^n barrier by presenting a simple O(1.9407^n) algorithm for the connected dominating set problem. The algorithm makes use of new domination rules, and its analysis is based on the Measure and Conquer technique. An extended abstract of this paper appeared in the proceedings of FSTTCS'06. Fedor V. Fomin was additionally supported by the Research Council of Norway.
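For reference, a sketch of the trivial exponential enumeration mentioned above (subsets checked by increasing size for domination and connectivity); the paper's O(1.9407^n) branching rules and Measure and Conquer analysis are not reproduced.

```python
# Sketch of the trivial exponential algorithm: enumerate node subsets by
# increasing size and return the first one that is both dominating and
# connected. The paper's O(1.9407^n) branching rules are not reproduced.
from itertools import combinations

def min_connected_dominating_set(adj):
    nodes = list(adj)
    for size in range(1, len(nodes) + 1):
        for subset in combinations(nodes, size):
            s = set(subset)
            dominating = all(v in s or s & set(adj[v]) for v in nodes)
            if not dominating:
                continue
            seen, stack = {subset[0]}, [subset[0]]   # connectivity check inside s
            while stack:
                v = stack.pop()
                for u in adj[v]:
                    if u in s and u not in seen:
                        seen.add(u)
                        stack.append(u)
            if seen == s:
                return s
    return set(nodes)

# A 5-cycle needs 3 nodes in a connected dominating set.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(min_connected_dominating_set(cycle5))   # e.g. {0, 1, 2}
```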

11.
We present a polynomial-time 1.5h-approximation algorithm for the problem of finding the largest common subtree between two rooted, labeled, and unordered trees of height at most h, where a tree S is called a subtree of a tree T if S is obtained from T by deletion of some nodes in T. This result improves the previous 2h-approximation algorithm.

12.
Yu Lasheng, Li Jie, Liu Renjie. Computing, 2013, 95(9): 723–738
This paper proposes an effective subtree-merging-based data collection algorithm for wireless sensor networks (WSNs), named the SMDC algorithm, which targets a new kind of WSN application: area queries. The SMDC algorithm avoids unnecessary routing energy consumption at ancestor nodes by taking the union of disjoint sets of different subtrees in the network. The SMDC algorithm has four phases: first, cluster trees are constructed in the target area; second, the disjoint node sets of the subtrees are identified; third, disjoint subtrees are connected via the closest pair of nodes between two subtrees; and finally, subtrees that have been connected to a new tree branch are disconnected from their previous tree structure. The paper also presents simulations comparing the SMDC algorithm with related work, including the conventional minimum spanning tree algorithm. Simulation results show that the SMDC algorithm reduces redundant energy consumption and the number of hops, which lowers total energy consumption; it becomes more efficient as the number of sensor nodes in a target area increases.
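A minimal sketch of the union-find (disjoint-set) bookkeeping one might use to track which subtrees have already been merged; the SMDC phases themselves are not reproduced, and the class below is an illustrative assumption rather than the paper's data structure.

```python
# Sketch of union-find bookkeeping for merging disjoint subtrees into one
# collection tree; the SMDC phases (cluster-tree setup, closest-node
# selection, branch re-parenting) are not reproduced.
class DisjointSet:
    def __init__(self, nodes):
        self.parent = {v: v for v in nodes}

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[ru] = rv
            return True          # two subtrees merged
        return False             # already in the same subtree

# Two cluster subtrees {A, B} and {C, D}; connecting B-C merges them.
ds = DisjointSet(["A", "B", "C", "D"])
ds.union("A", "B"); ds.union("C", "D")
print(ds.union("B", "C"))                   # -> True (subtrees merged)
print(ds.find("A") == ds.find("D"))         # -> True
```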

13.
While spectral clustering can produce high-quality clusterings on small data sets, its computational cost makes it infeasible for large data sets. Affinity Propagation (AP) has the limitation that it is hard to determine the value of the 'preference' parameter that leads to an optimal clustering solution. These problems limit the scope of application of the two methods. In this paper, we develop a novel fast two-stage spectral clustering framework with local and global consistency. Under this framework, we propose a Fast density-Weighted low-rank Approximation Spectral Clustering (FWASC) algorithm to address the above issues. The proposed algorithm is a high-quality graph partitioning method that simultaneously considers both the local and the global structure information contained in the data sets. Specifically, we first present a new Fast Two-Stage AP (FTSAP) algorithm to coarsen the input sparse graph and produce a small number of final representative exemplars, which is a simple and efficient sampling scheme. We then apply a density-weighted low-rank approximation spectral clustering algorithm to those representative exemplars to capture the global underlying structure of the data manifold. Experimental results show that our algorithm outperforms state-of-the-art spectral clustering and the original AP algorithm in terms of speed, memory usage, and quality.
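For context, a minimal sketch of standard normalized spectral clustering with numpy and scikit-learn (assumed dependencies); the FTSAP exemplar sampling and the density-weighted low-rank approximation of FWASC are not reproduced.

```python
# Sketch of standard normalized spectral clustering (Ng-Jordan-Weiss style)
# with numpy + scikit-learn; the FTSAP exemplar sampling and density-weighted
# low-rank approximation of the paper are not reproduced.
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    # RBF affinity matrix with zero diagonal
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # k eigenvectors of the smallest eigenvalues, row-normalized, then k-means
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

# Two well-separated Gaussian blobs should come out as two clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
print(spectral_clustering(X, k=2))
```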

14.
Objective: Compressed sensing signal reconstruction amounts to finding sparse solutions of underdetermined linear systems. Three existing approaches are insufficiently robust: l0-norm minimization is NP-hard, l1-norm minimization may have no solution, and lp-norm minimization is non-convex. A method based on smooth, regularized convex optimization is therefore proposed. Method: To obtain the global optimum and guarantee robustness, a convex surrogate of the signal's l0-norm over the whole space is first designed as the objective function; the n-variable optimization problem is then decomposed into n univariate problems; finally, a fast shrinkage algorithm is used during the solution process, achieving second-order convergence. Results: The algorithm outperforms the other three classes of algorithms on both simulated and real data sets. In simulations, when the signal dimension exceeds 150, its reconstruction time is about 50% of that of the other algorithms, showing its speed; on real data, the F-norm of the difference between the reconstructed and original signals is about 70% of that of the other algorithms, showing good robustness. Conclusion: The proposed algorithm is a convex optimization method with second-order convergence that reliably and quickly reaches the global optimum, is suited to large data sets, and has considerable application potential in information retrieval, dictionary learning, and image compression.
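A minimal sketch of a related (but simpler) convex surrogate: ISTA, i.e. proximal gradient descent with soft thresholding for l1-regularized least squares; the paper's smoothed l0 surrogate and its second-order fast-shrinkage scheme are not reproduced.

```python
# Sketch of ISTA (proximal gradient with soft thresholding) for the
# l1-regularized least-squares surrogate: min_x 0.5*||Ax - y||^2 + lam*||x||_1.
# The paper's smoothed l0 convex surrogate and accelerated shrinkage scheme
# are not reproduced here.
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Recover a sparse vector from a few random measurements.
rng = np.random.default_rng(1)
n, m, k = 100, 40, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1, (m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y, lam=0.01, iters=2000)
print(np.linalg.norm(x_hat - x_true))       # small reconstruction error
```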

15.
A k-core C_k of a tree T is a subtree with exactly k leaves, for k ≤ n_l where n_l is the number of leaves in T, that minimizes the sum of the distances of all nodes from C_k. In this paper we first propose a distributed algorithm for constructing a rooted spanning tree of a dynamic graph such that the root of the tree is located near the center of the graph. We then provide a distributed algorithm for finding the k-core of that spanning tree. The spanning tree is constructed in two stages. In the first stage, a forest of trees is generated. In the next stage these trees are connected to form a single rooted tree. An interesting aspect of the first stage of the proposed spanning tree algorithm is that it implicitly constructs the (convex) hull of those nodes which are not already included in the spanning forest. The process is repeated until all non-root nodes of the graph have chosen a unique parent. We implemented the algorithms for finding the spanning tree and its k-core. A core can be quite useful for routing messages in a dynamic network consisting of a set of mobile devices.
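A small centralized sketch of the first stage's goal, assuming a static graph: pick a root of minimum eccentricity and build a BFS spanning tree from it; the distributed protocol and the k-core computation are not reproduced.

```python
# Sketch of a centralized analogue of the first stage: pick a root near the
# graph center (minimum eccentricity, found by BFS from every node) and build
# a BFS spanning tree from it. The distributed protocol and the k-core
# computation on the resulting tree are not reproduced.
from collections import deque

def bfs_levels(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def center_rooted_spanning_tree(adj):
    ecc = {v: max(bfs_levels(adj, v).values()) for v in adj}
    root = min(ecc, key=ecc.get)                     # a node of minimum eccentricity
    parent, q = {root: None}, deque([root])
    while q:                                         # BFS tree rooted at the center
        v = q.popleft()
        for u in adj[v]:
            if u not in parent:
                parent[u] = v
                q.append(u)
    return root, parent

path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
root, parent = center_rooted_spanning_tree(path)
print(root)        # -> 3, the middle of the path
```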

16.
We show that several classes of tree patterns observe the independence of containing patterns property: if a pattern is contained in the union of several patterns, then it is contained in one of them. We apply this property to two related problems on tree pattern rewriting using views. First, given a view V and a query Q, is it possible for Q to have an equivalent rewriting using V which is the union of two or more tree patterns, but no equivalent rewriting which is a single pattern? This problem is of both theoretical and practical importance because, if the answer is no, then, to find an equivalent rewriting of a tree pattern using a view, we should use more efficient methods, such as the polynomial-time algorithm of Xu and Özsoyoglu (2005), rather than try to find the union of all contained rewritings (which takes exponential time in the worst case) and test its equivalence to Q. Second, given a set S of views, we want to know under what conditions a subset S′ of S is redundant in the sense that, for any query Q, the contained rewritings of Q using the views in S′ are contained in those using the views in S∖S′. Solving this problem can help us to, for example, choose the minimum number of views to be cached, better design the virtual schema in a mediated data integration system, or avoid repeated calculation in query optimization. For the first problem, we identify several classes of tree patterns for which the equivalent rewriting can be expressed as a single tree pattern. For the second problem, we present necessary and sufficient conditions for S′ to be redundant with respect to some classes of tree patterns. For both problems we consider extensions to cases where there are rewritings using the intersection of multiple views and/or where a schema graph is present.

17.
Building on a survey of many existing algorithms for computing the convex hull of a planar point set, a new convex hull construction algorithm is proposed. The algorithm first finds the four extreme points and forms a quadrilateral from them. Each point outside the quadrilateral is then assigned, by binary search, to the edge region it belongs to; within an edge region only the points lying outside the edge need to be kept, and connecting each such point to the two endpoints of the edge yields a new polygonal chain. Processing every point in this way produces four simple monotone polygonal chains; the convex vertices of each monotone chain are then extracted in O(n) time, and these convex vertices form the convex hull of the point set. The total time complexity of the algorithm does not exceed O(n log n).
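A minimal sketch of the same overall idea, assuming an Akl–Toussaint-style pre-filter on the extreme-point quadrilateral followed by a standard O(n log n) monotone-chain hull; the binary-search region assignment and per-side chain construction described above differ in detail.

```python
# Sketch: discard points strictly inside the quadrilateral of the four extreme
# points (an Akl-Toussaint style pre-filter), then run a standard O(n log n)
# monotone-chain hull on the survivors.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull_chain(pts):
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                                    # lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                          # upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]                   # counter-clockwise hull

def convex_hull(points):
    # pre-filter: keep only points not strictly inside the extreme-point quadrilateral
    quad = [min(points), max(points),
            min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])]
    quad = convex_hull_chain(sorted(set(quad)))
    def inside(p):
        return all(cross(quad[i], quad[(i + 1) % len(quad)], p) > 0 for i in range(len(quad)))
    survivors = [p for p in points if not inside(p)]
    return convex_hull_chain(sorted(set(survivors)))

pts = [(0, 2), (2, 0), (4, 2), (2, 4), (2, 2), (1, 2), (3, 2), (2, 1)]
print(convex_hull(pts))   # -> [(0, 2), (2, 0), (4, 2), (2, 4)]
```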

18.
19.
Image-based collision detection algorithms make efficient use of graphics rendering hardware and reduce the computational load on the CPU. In this article, a fast collision detection algorithm based on image space is presented, which combines graphics hardware capabilities with a simplified geometric representation (oriented bounding boxes) in order to rapidly detect collisions between complex objects. The method can deal with arbitrary polyhedra while preserving the merits of image-based collision detection algorithms. This is achieved by decomposing the surfaces of an object into a list of convex pieces. High efficiency is obtained by organizing the convex pieces into a binary tree, with each node representing a convex piece, and by adopting triangle strip compression. The algorithm has been implemented and compared with related algorithms. Copyright © 2003 John Wiley & Sons, Ltd.
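For comparison, a minimal CPU-side sketch of a narrow-phase test between two convex pieces in 2D via the separating axis theorem; the paper's image-space (GPU) test, binary tree of convex pieces, and triangle-strip compression are not reproduced.

```python
# Sketch of a CPU-side narrow-phase test between two convex pieces in 2D
# using the separating axis theorem. The image-space (GPU) method, the binary
# tree of convex pieces, and triangle-strip compression are not reproduced.
def project(poly, axis):
    dots = [p[0] * axis[0] + p[1] * axis[1] for p in poly]
    return min(dots), max(dots)

def convex_overlap(P, Q):
    """P, Q: convex polygons as vertex lists (any consistent winding)."""
    for poly in (P, Q):
        n = len(poly)
        for i in range(n):
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            axis = (-ey, ex)                      # edge normal = candidate separating axis
            min_p, max_p = project(P, axis)
            min_q, max_q = project(Q, axis)
            if max_p < min_q or max_q < min_p:    # a separating axis exists
                return False
    return True                                   # no separating axis -> overlap

A = [(0, 0), (2, 0), (2, 2), (0, 2)]
B = [(1, 1), (3, 1), (3, 3), (1, 3)]
C = [(5, 5), (6, 5), (6, 6), (5, 6)]
print(convex_overlap(A, B), convex_overlap(A, C))   # -> True False
```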

20.
On-board radiometric calibration is the most efficient way to improve the measurement accuracy of satellite-borne sensors. The Chinese Medium Resolution Spectral Imager (MERSI), carried on the Fengyun-3C (FY-3C) satellite, uses one on-board fixed-temperature blackbody (BB) for the radiometric calibration of its thermal emission bands (TEB), but the optimal calibration algorithm has not been determined. Using MERSI's on-board calibration data taken in November 2013, this article investigates two calibration algorithms, linear and semi-nonlinear, for MERSI TEB on-board radiometric calibration, and finds that linear calibration is more reasonable than semi-nonlinear calibration because the variation tendency of the linear coefficients better reflects MERSI's inherent systematic properties. The relative differences between the linear properties and the inherent properties (pixel variability 9.5%, mirror-side variability 21.5%, scan variability 17.8%) are all smaller than those between the semi-nonlinear case and the inherent case. All of this suggests that linear calibration is consistent with the inherent systematic properties. Using MERSI's calibration data from June 2014, the performance of the two algorithms is validated by comparing the difference between the inferred BB radiance L_I and the standard BB radiance L_S (0.01%), and between the inferred BB brightness temperature T_I and the standard BB temperature T_S (0.25 K).
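A minimal sketch of what a linear (two-point) calibration fit looks like numerically; all variable names and numbers below are illustrative assumptions, not the MERSI processing chain.

```python
# Sketch of a two-point / least-squares linear calibration: fit
# radiance = gain * counts + offset from blackbody and space-view samples,
# then apply it to Earth-view counts. Numbers are illustrative only.
import numpy as np

# reference counts and radiances (e.g. ~0 radiance for space view, Planck
# radiance of the blackbody for the BB view) -- hypothetical values
ref_counts   = np.array([820.0, 2950.0])      # space view, blackbody counts
ref_radiance = np.array([0.0, 9.6])           # corresponding radiances (W m^-2 sr^-1 um^-1)

gain, offset = np.polyfit(ref_counts, ref_radiance, deg=1)   # linear calibration fit

earth_counts = np.array([1500.0, 2100.0, 2600.0])
earth_radiance = gain * earth_counts + offset
print(gain, offset)
print(earth_radiance)
```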
