1.
Farshad Rostamabadi Iman Sadeghi Ramtin Khosravi 《Information Processing Letters》2008,105(3):108-113
An optimal labeling in which the labels are disjoint, axis-parallel, equal-size squares of maximum side length, each attached to its corresponding point at the midpoint of one of its horizontal edges, is called a 2PM labeling. In a closed-2PM labeling, no two label edges containing points may intersect. Removing a point and its label frees room for the adjacent labels and may cause a global label expansion. In this paper, we construct several data structures in a preprocessing stage so that any point-removal event is handled efficiently. We present an algorithm that decides in O(log n) amortized time whether a label removal leads to a label expansion, in which case a new optimal labeling for the remaining points is generated in O(n) amortized time.
2.
3.
Minghui Jiang 《Information Processing Letters》2006,99(4):125-129
We study the NP-hard problem of labeling points with maximum-radius circle pairs: given n point sites in the plane, find a placement of 2n interior-disjoint uniform circles such that each site touches two circles and the circle radius is maximized. We present a new approximation algorithm for this problem that uses O(n) space and achieves an approximation factor of approximately 1.486+ε, which improves the previous best bound of 1.491+ε.
4.
Joo-Won Jung 《Information Processing Letters》2004,89(3):115-121
In this paper, we consider the following problem: given n pairs of a point and an axis-parallel rectangle in the plane, place each rectangle at its point so that the point lies on a corner of the rectangle and the rectangles do not intersect. All rectangles may be scaled by a common factor, and the goal is to maximize that factor. This paper generalizes the results of Formann and Wagner [Proc. 7th Annual ACM Symp. on Comput. Geometry (SoCG'91), 1991, pp. 281-288], who considered the case of uniform squares and showed that no polynomial-time algorithm can achieve an approximation factor better than 2. We present a 2-approximation algorithm for the non-uniform rectangle case that runs in O(n² log n) time and O(n²) space. We also show that the decision problem can be solved in O(n log n) time and space in the RAM model by transforming it into a simpler geometric problem.
5.
Minghui Jiang Zhongping Qin Binhai Zhu Robert Cimikowski 《Information Processing Letters》2003,87(2):101-105
In this paper we present a simple factor-(3+ε), 0<ε<1, approximation algorithm, running in O(n log n + n·(1/ε)^O(1/ε²)·log(D3/(εD2))) time, for the problem of labeling a set P of n distinct points with uniform circles, where D2 is the closest-pair distance of P and D3 is the minimum diameter over all three-point subsets of P. This problem is known to be NP-hard. Our bound improves the previous factor of 3.6+ε.
6.
Semantic labeling of Chinese has long been an important research topic in Chinese natural language processing. Most previous work obtained labeling templates manually. This paper adopts a method of automatically extracted templates to label Chinese semantics, addressing both the labeling of word categories and the labeling of semantic relations in compound structures. Experimental results show that word-category labeling achieves a precision of 81.6406% when the dimensionality is reduced to 363, and the labeling of semantic relations between compound structures also improves on previous work.
7.
In this paper we propose a new strategy for designing algorithms, called the searching over separators strategy. Suppose that we have a problem to which the divide-and-conquer strategy cannot be applied directly, yet in an optimal solution there exists a separator dividing the input points into two parts, A_d and C_d, such that after solving the two subproblems with A_d and C_d as inputs, respectively, the subsolutions can be merged into an optimal solution. Let us further assume that the problem is an optimization problem. In this case our searching over separators strategy uses a separator generator to generate all possible separators; for each separator, the problem is solved by the divide-and-conquer strategy. If the separator generator is guaranteed to generate the desired separator existing in an optimal solution, the searching over separators strategy always produces an optimal solution. The performance of this approach depends critically on the performance of the separator generator: it performs well if the total number of separators generated is relatively small. We apply the approach to the discrete Euclidean P-median problem (DEPM), the discrete Euclidean P-center problem (DEPC), the Euclidean P-center problem (EPC), and the Euclidean traveling salesperson problem (ETSP). We propose algorithms for the DEPM, DEPC, and EPC problems, and an algorithm for the ETSP problem, where n is the number of input points. This research work was partially supported by the National Science Council of the Republic of China under Grant NSC 79-0408-E007-04.
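A minimal sketch of the strategy as described in this abstract: enumerate candidate separators, solve the two induced subproblems recursively, merge, and keep the cheapest result. All parameter names and callbacks are hypothetical placeholders; the actual separator generators and merge steps are problem-specific and not reproduced here.

```python
def search_over_separators(points, gen_separators, solve_small, merge, cost, small=3):
    # Generic template only; gen_separators/solve_small/merge/cost are assumed callbacks.
    if len(points) <= small:
        return solve_small(points)          # base case: tiny instances solved directly
    best = None
    for sep in gen_separators(points):
        part_a, part_c = sep.split(points)  # the two parts A_d and C_d
        if not part_a or not part_c:
            continue
        sol_a = search_over_separators(part_a, gen_separators, solve_small, merge, cost, small)
        sol_c = search_over_separators(part_c, gen_separators, solve_small, merge, cost, small)
        cand = merge(sol_a, sol_c, sep)     # combine subsolutions across the separator
        if best is None or cost(cand) < cost(best):
            best = cand
    return best
```

If the generator is guaranteed to emit the separator present in some optimal solution, the minimum over all candidates is optimal; the running time is governed by how many separators the generator produces.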
8.
Emo Welzl 《Information Processing Letters》1985,20(4):167-171
Given a set S of line segments in the plane, its visibility graph G_S is the undirected graph whose nodes are the endpoints of the line segments in S and in which two nodes (points) are adjacent whenever they 'see' each other (the line segments in S are regarded as nontransparent obstacles). It is shown that G_S can be constructed in O(n²) time and space for a set S of n nonintersecting line segments. As an immediate implication, the shortest path between two points in the plane avoiding a set of n nonintersecting line segments can be computed in O(n²) time and space.
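As a concrete illustration of the visibility-graph definition (not of the paper's O(n²) construction), the naive sketch below tests every endpoint pair against every segment in roughly cubic time, ignoring degenerate collinear cases; the helper names are hypothetical.

```python
from itertools import combinations

def ccw(a, b, c):
    # Positive if a, b, c make a counter-clockwise turn.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def blocks(seg, p, q):
    # True if segment seg properly crosses the open sight line p-q.
    a, b = seg
    if {a, b} & {p, q}:                 # a segment never blocks sight from its own endpoint
        return False
    return ccw(p, q, a) * ccw(p, q, b) < 0 and ccw(a, b, p) * ccw(a, b, q) < 0

def visibility_graph(segments):
    # Nodes are segment endpoints; edge (p, q) iff no obstacle segment blocks the view.
    nodes = {pt for seg in segments for pt in seg}
    edges = [(p, q) for p, q in combinations(sorted(nodes), 2)
             if not any(blocks(s, p, q) for s in segments)]
    return nodes, edges

segs = [((0, 0), (1, 0)), ((0, 2), (1, 2))]
print(visibility_graph(segs)[1])        # all four endpoints see each other here
```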
9.
Lucanus J. Simonson 《Computer aided design》2010,42(12):1189-1196
We present an algorithm to compute the topology and geometry of an arbitrary number of polygon sets in the plane, also known as the map overlay. This algorithm can perform polygon clipping and related operations of interest in VLSI CAD. The algorithm requires no preconditions on the input polygons and satisfies a strict set of postconditions suitable for immediate processing of the output polygons by downstream tools. The algorithm uses a sweepline to compute a Riemann–Stieltjes integral over polygon overlaps in O((n+s) log n) time, given n polygon edges with s intersections. The algorithm is efficient and general, handling degenerate inputs implicitly. Particular care was taken in implementing the algorithm to ensure numerical robustness without sacrificing efficiency. We present performance comparisons with other polygon-clipping algorithms and give examples of real-world applications of our algorithm in an industrial software setting.
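The Boolean operations that such an overlay engine exposes can be illustrated with the Shapely library; this is only a stand-in to show the operations on two overlapping rectangles, not the paper's sweepline implementation.

```python
from shapely.geometry import Polygon

# Two overlapping axis-parallel rectangles standing in for layout polygons.
a = Polygon([(0, 0), (4, 0), (4, 2), (0, 2)])
b = Polygon([(2, 1), (6, 1), (6, 3), (2, 3)])

print(a.union(b).area)                  # OR      -> 14.0
print(a.intersection(b).area)           # AND     -> 2.0
print(a.difference(b).area)             # ANDNOT  -> 6.0
print(a.symmetric_difference(b).area)   # XOR     -> 12.0
```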
10.
Ugur Dogrusoz Konstantinos G. Kakoulis Ioannis G. Tollis 《Information Sciences》2007,177(12):2459-2472
When visualizing graphs, it is essential to communicate the meaning of each graph object via textual or graphical labels. Automatic placement of labels in a graph is an NP-hard problem, for which efficient heuristic solutions have recently been developed. In this paper, we describe a general framework for modeling, drawing, editing, and automatically placing labels while respecting user constraints. In addition, we present the interface and the basic engine of the Graph Editor Toolkit, a family of portable graph visualization libraries designed for integration into graphical user interface application programs. This toolkit produces high-quality automated placement of labels in a graph using our framework. A brief survey of automatic label placement algorithms is also presented. Finally, we describe extensions to certain existing automatic label placement algorithms, allowing their integration into this visualization tool.
11.
C.B. Chittineni 《Pattern recognition》1982,15(3):201-216
This paper presents several approaches to the problem of labeling clusters using information from a given set of labeled and unlabeled patterns. Assigning class labels to the clusters is formulated as finding the best label assignment over all possible label assignments with respect to a criterion. Labeling clusters is also viewed as obtaining the probabilities of class labels for the clusters, with the maximization of the likelihood function and the probability of correct labeling as criteria. Closed-form solutions are obtained for the probabilities of class labels of the clusters by maximizing a lower bound on the likelihood criterion. Fixed-point iteration equations are developed for obtaining the probabilities of class labels of the clusters. The problem of obtaining class labels for the clusters is further formulated as that of minimizing the variance of the proportion estimates of the classes, using both the given labeled and unlabeled patterns. Imperfections in the labels of the given labeled set are incorporated into the criteria. Furthermore, the results of applying these techniques to the processing of remotely sensed multispectral scanner imagery data are presented.
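A minimal sketch of the simplest form of the idea above: assign to each cluster the class label best supported by the labeled patterns falling in it, together with the resulting label-probability estimates. This is a majority-vote stand-in under assumed inputs, not the paper's closed-form or fixed-point solutions.

```python
import numpy as np

def label_clusters(cluster_ids, labeled_idx, labels, n_clusters, n_classes):
    """cluster_ids[i]: cluster of pattern i; labels[j]: class of labeled pattern labeled_idx[j]."""
    counts = np.zeros((n_clusters, n_classes))
    for i, y in zip(labeled_idx, labels):
        counts[cluster_ids[i], y] += 1
    # Row-normalize to estimate per-cluster class-label probabilities;
    # the hard assignment is the most frequent class among labeled members.
    probs = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    return counts.argmax(axis=1), probs

cluster_ids = np.array([0, 0, 1, 1, 1, 2])
assignment, probs = label_clusters(cluster_ids, [0, 2, 3], [1, 0, 0], 3, 2)
print(assignment)   # -> [1 0 0]; cluster 2 has no labeled members and defaults to class 0
```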
12.
In this paper we study a cell of the subdivision induced by a union of n half-lines (or rays) in the plane. We present two results. The first is a novel proof of the O(n) bound on the number of edges of the boundary of such a cell, which is essentially of methodological interest. The second is an algorithm for constructing the boundary of any cell, which runs in optimal O(n log n) time. By-products of our results are the notions of skeleton and of skeletal order, which may be of interest in their own right. This work was partly supported by CEE ESPRIT Project P-940, by the Ecole Normale Supérieure, Paris, and by NSF Grant ECS-84-10902. This work was done in part while this author was visiting the Ecole Normale Supérieure, Paris, France.
13.
J. Peethambaran A.D. Parakkat A. Tagliasacchi R. Wang R. Muthuganapathy 《Computer Graphics Forum》2019,38(1):521-536
We present an incremental Voronoi vertex labelling algorithm for approximating contours, medial axes and dominant points (high-curvature points) from 2D point sets. Although many algorithms exist for reconstructing curves, medial axes or dominant points, the literature lacks a unified framework capable of approximating all three from points. Our algorithm estimates the normal at each sample point through its poles (the farthest Voronoi vertices of the sample point) and uses the estimated normals and the corresponding tangents to determine the spatial locations (inner or outer) of the Voronoi vertices with respect to the original curve. This vertex classification helps to construct a piece-wise linear approximation to the object boundary. We provide a theoretical analysis of the algorithm for points non-uniformly sampled (ε-sampling) from simple, closed, concave and smooth curves. The proposed framework has been thoroughly evaluated for its usefulness using various test data. Results indicate that even sparsely and non-uniformly sampled curves with outliers, or collections of curves, are faithfully reconstructed by the proposed algorithm.
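The pole-based normal estimation mentioned above can be sketched with SciPy's Voronoi diagram. This is a simplified stand-in: open Voronoi cells and the subsequent inner/outer vertex classification are not handled, and the function name is hypothetical.

```python
import numpy as np
from scipy.spatial import Voronoi

def estimate_normals_via_poles(points):
    """For each sample, take the farthest finite vertex of its Voronoi cell
    as its pole and use the direction to that pole as an (unoriented) normal."""
    vor = Voronoi(points)
    normals = np.zeros_like(points, dtype=float)
    for i, region_idx in enumerate(vor.point_region):
        verts = [v for v in vor.regions[region_idx] if v != -1]   # skip the vertex at infinity
        if not verts:
            continue
        cell = vor.vertices[verts]
        pole = cell[np.argmax(np.linalg.norm(cell - points[i], axis=1))]
        d = pole - points[i]
        if np.linalg.norm(d) > 0:
            normals[i] = d / np.linalg.norm(d)
    return normals

# Samples from a circle; the estimated normals come out roughly radial.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)]
print(estimate_normals_via_poles(pts)[:3])
```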
14.
On the density and discrepancy of a 2D point set with applications to thermal analysis of VLSI chips
Subhashis Majumder 《Information Processing Letters》2008,107(5):177-182
In this era of giga-scale integration, thermal analysis has become one of the hot topics in VLSI chip design. Active thermal sources may be abstracted as a set of weighted points on a 2D chip floor. The conventional notion of discrepancy, which deals with the congestion properties of a set of scattered points, may not properly capture all real-life instances in this context. In this paper, we introduce a new concept, called the density of a region, to study some properties of the distribution of these weighted points. We prove several counter-intuitive results concerning the properties of the regions that have maximum or minimum density, and we outline algorithms for recognizing these regions. We also compare the attributes of density with the existing concept of discrepancy.
15.
Connected component labeling on a 2D grid using CUDA
Oleksandr Kalentev Abha Rai Stefan Kemnitz Ralf Schneider 《Journal of Parallel and Distributed Computing》2011,71(4):615-620
Connected component labeling is an important but computationally expensive operation required in many fields of research. The goal of the present work is to label connected components on a 2D binary map. Two different iterative algorithms for this task are presented. The first algorithm (Row-Col Unify) is based on directional propagation labeling, whereas the second algorithm uses the label-equivalence technique. The Row-Col Unify algorithm uses a local array of references and the reduction technique intrinsically; extensive use of shared memory makes the code efficient. The Label Equivalence algorithm is an extended version of the one presented by Hawick et al. (2010) [3]. Finally, a performance comparison of the two algorithms is presented.
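For orientation, here is a minimal CPU sketch of the iterative label-propagation idea behind such algorithms: every foreground cell starts with a unique label and repeatedly takes the minimum label in its 4-neighbourhood until nothing changes. It is not the paper's CUDA implementation and converges more slowly than the label-equivalence scheme.

```python
import numpy as np

def label_components(binary):
    """Iterative min-label propagation on a 2D binary map (4-connectivity)."""
    h, w = binary.shape
    # Give every foreground cell a unique positive label; background stays 0.
    labels = np.where(binary, np.arange(1, h * w + 1).reshape(h, w), 0)
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if labels[y, x] == 0:
                    continue
                best = labels[y, x]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != 0:
                        best = min(best, labels[ny, nx])
                if best < labels[y, x]:
                    labels[y, x] = best
                    changed = True
    return labels

grid = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]])
print(label_components(grid))   # two components, labeled 1 and 8
```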
16.
B. Chazelle H. Edelsbrunner M. Grigni L. Guibas J. Hershberger M. Sharir J. Snoeyink 《Algorithmica》1994,12(1):54-68
Let P be a simple polygon with n vertices. We present a simple decomposition scheme that partitions the interior of P into O(n) so-called geodesic triangles, so that any line segment interior to P crosses at most 2 log n of these triangles. This decomposition can be used to preprocess P in a very simple manner, so that any ray-shooting query can be answered in O(log n) time. The data structure requires O(n) storage and O(n log n) preprocessing time. By using more sophisticated techniques, we can reduce the preprocessing time to O(n). We also extend our general technique to the case of ray shooting amidst k polygonal obstacles with a total of n edges, so that a query can be answered in O(√k log n) time. Work by Bernard Chazelle has been supported by NSF Grant CCR-87-00917. Work by Herbert Edelsbrunner has been supported by NSF Grant CCR-89-21421. Work by Micha Sharir has been supported by ONR Grants N00014-89-J-3042 and N00014-90-J-1284, by NSF Grant CCR-89-01484, and by grants from the U.S.-Israeli Binational Science Foundation, the Fund for Basic Research administered by the Israeli Academy of Sciences, and the G.I.F., the German-Israeli Foundation for Scientific Research and Development.
17.
Annalisa Franco Dario Maio Davide Maltoni 《Pattern recognition》2010,43(8):2891-2903
The extreme variability of faces in smart-environment applications, due to continuous changes in pose, illumination and subject appearance (hairstyle, make-up, etc.), requires the relevant modes of variation of the subjects' faces to be encoded in the templates and continuously updated from new inputs. This work proposes a new video-based template-updating approach suitable for home environments, where the image acquisition process is totally unconstrained but a large amount of face data is available for continuous learning. A small set of labeled images is initially used to create the templates, and the updating is then totally unsupervised. Although the method is presented here in conjunction with a subspace-based face recognition approach, it can easily be adapted to different kinds of face representations. A thorough performance evaluation is carried out to show the efficacy and reliability of the proposed technique.
18.
Jan Vahrenhold 《Information Processing Letters》2007,102(4):169-174
We present the first in-place algorithm for solving Klee's measure problem for a set of n axis-parallel rectangles in the plane. Our algorithm runs in O(n^(3/2) log n) time and uses O(1) extra words in addition to the space needed for representing the input. The algorithm is surprisingly simple and thus very likely to yield an implementation of practical interest. As a byproduct, we develop an optimal algorithm for solving Klee's measure problem for a set of n intervals; this algorithm runs in optimal O(n log n) time and uses O(1) extra space.
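The one-dimensional case mentioned at the end, the total length covered by a union of n intervals, can be sketched as follows. This version sorts a copy of the input, so it illustrates the O(n log n) bound but not the in-place, O(1)-extra-space aspect of the paper.

```python
def union_length(intervals):
    """Total length covered by a union of intervals given as (lo, hi) pairs."""
    total, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in sorted(intervals):          # sweep by left endpoint
        if cur_hi is None or lo > cur_hi:     # start a new maximal run
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:                                 # overlap: extend the current run
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo
    return total

print(union_length([(0, 3), (2, 5), (7, 8)]))   # -> 6.0
```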
19.
Ioannis Koutis 《Information Processing Letters》2006,100(1):8-13
We consider the problem of computing the squared volume of the largest j-simplex contained in an n-dimensional polytope presented by its vertices (a V-polytope). We show that the related decision problem is W[1]-complete with respect to the parameter j. We also improve the constant inapproximability factor given in [A. Packer, Polynomial-time approximation of largest simplices in V-polytopes, Discrete Appl. Math. 134 (1-3) (2004) 213-237] by showing that there are constants μ<1, c>1 such that it is NP-hard to approximate within a factor of c^(μn) the volume of the largest ⌊μn⌋-simplex contained in an n-dimensional polytope with O(n) vertices.
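For reference, the squared volume of a j-simplex with vertices v_0,…,v_j in R^n is det(MᵀM)/(j!)², where M has columns v_i − v_0 (the Gram-determinant formula). A minimal sketch of this quantity, unrelated to the paper's hardness results:

```python
import math
import numpy as np

def squared_simplex_volume(vertices):
    """vertices: (j+1) x n array of simplex vertices; returns the squared j-volume."""
    v = np.asarray(vertices, dtype=float)
    m = (v[1:] - v[0]).T                 # n x j matrix of edge vectors from v_0
    j = m.shape[1]
    gram = m.T @ m                       # j x j Gram matrix
    return np.linalg.det(gram) / math.factorial(j) ** 2

# Unit right triangle embedded in R^3: area 1/2, so squared area 1/4.
print(squared_simplex_volume([[0, 0, 0], [1, 0, 0], [0, 1, 0]]))
```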
20.
Accurate segmentation and labeling are prerequisites for the semantic retrieval and recomposition of 3D character models. For the 3D character models widely used in games and film, this paper proposes a method for segmenting and labeling character parts based on prior structural knowledge. It summarizes general structural rules of animated character models and formalizes them into a hierarchical knowledge representation that can guide segmentation; a region-growing method then peels off model parts layer by layer, performing end detection and boundary judgment under these rules while semantically labeling the parts. Tests on biped and quadruped character models show that, compared with other labeling-capable character-model segmentation methods, this method yields better part decomposition and extends well to animated characters.
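As a loose illustration of the region-growing step described above, with no attempt to encode the paper's structural rules, the sketch below grows a part from a seed over a mesh face-adjacency graph until a stopping predicate (standing in for the end-detection and boundary tests) rejects further faces; all names are hypothetical.

```python
from collections import deque

def grow_region(adjacency, seed, accept):
    """adjacency: dict face -> neighbouring faces; accept(face): boundary/end test."""
    part, frontier = {seed}, deque([seed])
    while frontier:
        face = frontier.popleft()
        for nb in adjacency[face]:
            if nb not in part and accept(nb):
                part.add(nb)
                frontier.append(nb)
    return part

# Toy adjacency graph of six faces; faces 4 and 5 lie "beyond the boundary".
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(grow_region(adj, 0, accept=lambda f: f < 4))   # -> {0, 1, 2, 3}
```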