Similar Documents
20 similar documents found (search time: 31 ms)
1.
It is shown that the order-k Voronoi diagram of n sites with additive weights in the plane has at most (4k−2)(n−k) vertices, (6k−3)(n−k) edges, and (2k−1)(n−k) + 1 regions. These bounds are approximately the same as the ones known for unweighted order-k Voronoi diagrams. Furthermore, tight upper bounds on the number of edges and vertices are given for the case that every weighted site has a nonempty region in the order-1 diagram. The proof is based on a new algorithm for the construction of these diagrams which generalizes a plane-sweep algorithm for order-1 diagrams developed by Steven Fortune. The new algorithm has time complexity O(k^2 n log n) and space complexity O(kn). It is the only nontrivial algorithm known for constructing order-k Voronoi diagrams of sites with additive weights. It is fairly simple and of practical interest also in the special case of unweighted sites. Work on this paper has been supported by Amoco Fnd. Fac. Dev. Comput. Sci. 1-6-44862.
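To make the underlying distance concrete, the following Python sketch (not the paper's plane-sweep construction) finds, for a query point, the site minimizing the additively weighted distance d(p, s) − w_s; that site is the one whose order-1 region contains the query. The site coordinates and weights are hypothetical.

```python
import math

def weighted_nearest(query, sites):
    """Return the index of the site minimizing d(query, site) - weight;
    this site's region in the order-1 additively weighted Voronoi diagram
    contains the query point (for order-k, take the k smallest values)."""
    best_idx, best_val = None, math.inf
    for i, (x, y, w) in enumerate(sites):
        d = math.hypot(query[0] - x, query[1] - y) - w
        if d < best_val:
            best_idx, best_val = i, d
    return best_idx

# Hypothetical sites given as (x, y, additive weight).
sites = [(0.0, 0.0, 1.0), (4.0, 0.0, 0.5), (2.0, 3.0, 2.0)]
print(weighted_nearest((1.0, 1.0), sites))
```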

2.
In this paper, we present lower and upper bounds on the size of limited width, bounded and unbounded fan-out parallel prefix circuits. The lower bounds on the sizes of such circuits are a function of the depth, width, and number of inputs. The size requirement of an N-input bounded fan-out parallel prefix circuit having limited width W and extra depth k (the difference between allowed and minimum possible depth) is shown to be Θ(N log_2 W / 2^k + N) for k ≤ log_2 W. This implies that insisting on minimum depth causes the circuit size to be nonlinear, while as little as log_2 log_2 W of extra depth can possibly reduce the size to linear. Also, we show that there is a clear difference between the two cases of bounded and unbounded fan-out by proving that the size of a limited width, unbounded fan-out parallel prefix circuit lies between a lower bound of Ω((2 + 2^(1−k)/3)N) and an upper bound of O((2 + 2^(1−k))N). Uniform, systolic constructions of limited width parallel prefix circuits are provided here and shown to be asymptotically optimal. By associating the width of the circuit with the number of processors and the fan-out capabilities of the circuit with the interconnection structure of a multiprocessor, time- and processor-efficient algorithms may be developed.
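As a minimal illustration of the prefix-computation problem and its depth/size trade-off (not of the limited-width circuit constructions analyzed above), the following Python sketch compares the obvious size-(N−1), depth-(N−1) chain of additions with a recursive-doubling (Kogge–Stone-style) scheme that has minimum depth ceil(log2 N) but roughly N·log2 N additions.

```python
def prefix_chain(xs):
    """Serial prefix sums: N-1 additions, depth N-1."""
    out = list(xs)
    for i in range(1, len(out)):
        out[i] = out[i - 1] + out[i]
    return out

def prefix_recursive_doubling(xs):
    """Kogge-Stone-style scan: ceil(log2 N) levels of independent additions
    (minimum depth), but about N*log2(N) operations in total."""
    out = list(xs)
    n = len(out)
    dist = 1
    while dist < n:
        # One circuit level: all additions below are independent of each other.
        out = [out[i] + out[i - dist] if i >= dist else out[i]
               for i in range(n)]
        dist *= 2
    return out

xs = [3, 1, 4, 1, 5, 9, 2, 6]
assert prefix_chain(xs) == prefix_recursive_doubling(xs)
print(prefix_recursive_doubling(xs))
```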

3.
Let N_max(q) denote the maximum number of points of an elliptic curve over F_q. Given a prime power q = p^f and an integer n satisfying q/2 + 1 < n ≤ (N_max(q) − 2)/2, we present an algorithm which on input q and n produces an optimal bilinear algorithm of length 2n for multiplication in F_(q^n) over F_q. The algorithm takes roughly O(q^4 + n^4 log q) F_q-operations, or equivalently O((q^4 + n^4 log q) f^2 log^2 p) bit operations, to compute the output data.

4.
This paper is composed of two parts. In the first part, an improved algorithm is presented for the problem of finding length-bounded two vertex-disjoint paths in an undirected planar graph. The presented algorithm requires O(n^3 b_min) time and O(n^2 b_min) space, where b_min is the smaller of the two given length bounds. In the second part of this paper, we consider the minmax k vertex-disjoint paths problem on a directed acyclic graph, where k ≥ 2 is a constant. An improved algorithm and a faster approximation scheme are presented. The presented algorithm requires O(n^(k+1) M^(k−1)) time and O(n^k M^(k−1)) space, and the presented approximation scheme requires O((1/ε)^(k−1) n^(2k) log^(k−1) M) time and O((1/ε)^(k−1) n^(2k−1) log^(k−1) M) space, where ε is the given approximation parameter and M is the length of the longest path in an optimal solution.

5.
As is well known, a finite field F_(q^n) = GF(q^n) can be described in terms of n × n matrices A over the field F_q = GF(q) such that their powers A^i, i = 1, 2, ..., q^n − 1, correspond to all nonzero elements of the field. It is proved that, for fields F_(q^n) of characteristic 2, such a matrix A can be chosen to be symmetric. Several constructions of field-representing symmetric matrices are given. These matrices A^i, together with the all-zero matrix, can be considered as an F_(q^n)-linear matrix code in the rank metric with maximum rank distance d = n and maximum possible cardinality q^n. These codes are called symmetric rank codes. In the vector representation, such codes are maximum rank distance (MRD) linear [n, 1, n] codes, which allows one to use known rank-error-correcting algorithms. For symmetric codes, an algorithm of erasure symmetrization is proposed, which considerably reduces the decoding complexity as compared with standard algorithms. It is also shown that a linear [n, k, d = n − k + 1] MRD code containing the above-mentioned one-dimensional symmetric code as a subcode has the following property: the corresponding transposed code is also F_(q^n)-linear. Such codes have an extended capability of correcting symmetric errors and erasures.
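The following Python sketch illustrates only the matrix representation of a finite field that this construction starts from: the powers of the companion matrix of a primitive polynomial over GF(2) run through all nonzero field elements (here GF(2^4) with the primitive polynomial x^4 + x + 1). It does not implement the symmetrization or the rank-metric decoding described in the abstract.

```python
def companion_matrix_gf2(poly_coeffs):
    """Companion matrix over GF(2) of a monic polynomial, given the
    coefficients of x^0..x^(n-1) (the x^n coefficient is implicitly 1)."""
    n = len(poly_coeffs)
    A = [[0] * n for _ in range(n)]
    for i in range(1, n):
        A[i][i - 1] = 1               # subdiagonal of ones
    for i in range(n):
        A[i][n - 1] = poly_coeffs[i]  # last column holds the coefficients
    return A

def mat_mult_gf2(A, B):
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) & 1 for j in range(n)]
            for i in range(n)]

# x^4 + x + 1 is primitive over GF(2); coefficients of x^0, x^1, x^2, x^3.
A = companion_matrix_gf2([1, 1, 0, 0])
powers, P = [], A
for _ in range(2 ** 4 - 1):
    powers.append(tuple(map(tuple, P)))
    P = mat_mult_gf2(P, A)
# A is a primitive element: its 2^4 - 1 = 15 powers are pairwise distinct
# and represent every nonzero element of GF(2^4).
print(len(set(powers)))  # 15
```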

6.
The minimum k-terminal cut problem is of considerable theoretical interest and arises in several applied areas such as parallel and distributed computing, VLSI circuit design, and networking. In this paper we present two new approximation and exact algorithms for this problem on an n-vertex undirected weighted planar graph G. For the case when the k terminals are covered by the boundaries of m > 1 faces of G, we give a min{O(n^2 log n log m), O(m^2 n^1.5 log^2 n + kn)} time algorithm with a (2 − 2/k)-approximation ratio (clearly, m ≤ k). For the case when all k terminals are covered by the boundary of one face of G, we give an O(nk^3 + (n log n)k^2) time exact algorithm, or a linear time exact algorithm if k = 3, for computing an optimal k-terminal cut. Our algorithms are based on interesting observations and improve the previous algorithms when they are applied to planar graphs. To our best knowledge, no previous approximation algorithms specifically for solving the k-terminal cut problem on planar graphs were known before. The (2 − 2/k)-approximation algorithm of Dahlhaus et al. (for general graphs) takes O(kn^2 log n) time when applied to planar graphs. Our approximation algorithm for planar graphs runs faster than that of Dahlhaus et al. by at least an O(k/log m) factor (m ≤ k).

7.
We determine the functions on GF(2)^n which satisfy the propagation criterion of degree n − 2, PC(n − 2). We subsequently study the propagation criterion of degree ℓ and order k and its extended version EPC. We determine those Boolean functions on GF(2)^n which satisfy PC(ℓ) of order k ≤ n − ℓ − 2. We show that none of them satisfies EPC(ℓ) of the same order. We finally give a general construction of nonquadratic functions satisfying EPC(ℓ) of order k. This construction uses the existence of nonlinear, systematic codes with good minimum distances and dual distances (e.g., Kerdock codes and Preparata codes).
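As a small, hedged illustration of the basic propagation criterion only (order 0; the order-k and EPC variants studied above add further conditions), the following Python sketch checks PC(ℓ) for a Boolean function given as a truth table: every derivative f(x) ⊕ f(x ⊕ a) with 1 ≤ wt(a) ≤ ℓ must be balanced.

```python
def satisfies_pc(truth_table, n, ell):
    """Check PC(ell): for every nonzero a of Hamming weight <= ell,
    the derivative x -> f(x) XOR f(x XOR a) takes the value 1 on
    exactly half of the 2^n inputs (i.e. is balanced)."""
    size = 1 << n
    for a in range(1, size):
        if bin(a).count("1") > ell:
            continue
        ones = sum(truth_table[x] ^ truth_table[x ^ a] for x in range(size))
        if ones != size // 2:
            return False
    return True

# Example: the bent function f(x1,x2,x3,x4) = x1*x2 XOR x3*x4 on GF(2)^4;
# bent functions have balanced derivatives for every nonzero a, so they
# satisfy PC(ell) for every ell <= n.
n = 4
f = [((x & 1) & ((x >> 1) & 1)) ^ (((x >> 2) & 1) & ((x >> 3) & 1))
     for x in range(1 << n)]
print(satisfies_pc(f, n, 2))  # True
```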

8.
We obtain faster algorithms for problems such as r-dimensional matching and r-set packing when the size k of the solution is considered a parameter. We first establish a general framework for finding and exploiting small problem kernels (of size polynomial in k). This technique lets us combine Alon, Yuster and Zwick's color-coding technique with dynamic programming to obtain faster fixed-parameter algorithms for these problems. Our algorithms run in time O(n + 2^O(k)), an improvement over previous algorithms for some of these problems running in time O(n + k^O(k)). The flexibility of our approach allows tuning of algorithms to obtain smaller constants in the exponent. Research initiated at the International Workshop on Fixed Parameter Tractability in Computational Geometry and Games, Bellairs Research Institute of McGill University, Holetown, Barbados, Feb. 7–13, 2004, organized by S. Whitesides. D.M. Thilikos supported by the EU within the 6th Framework Programme under contract 001907 (DELIS) and by the Spanish CICYT project TIC-2002-04498-C05-03 (TRACER).
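The following Python sketch illustrates the color-coding idea credited to Alon, Yuster and Zwick, here applied to 3-set packing: randomly color the universe with 3k colors, keep only "colorful" sets (all three element colors distinct), and run a dynamic program over subsets of colors; repeating the random coloring boosts the success probability. It omits the kernelization and the derandomization that give the bounds quoted above, and the instance is hypothetical.

```python
import random

def colorful_packing_once(sets, k, rng):
    """One color-coding trial: return True if k pairwise disjoint 3-sets
    were found under a random coloring with 3k colors."""
    universe = set().union(*sets)
    colors = {e: rng.randrange(3 * k) for e in universe}
    # Represent each colorful set by the bitmask of its element colors.
    masks = set()
    for s in sets:
        cs = {colors[e] for e in s}
        if len(cs) == 3:                       # all three colors distinct
            masks.add(sum(1 << c for c in cs))
    # DP over color subsets: color-disjoint sets are element-disjoint.
    reachable = {0}
    for _ in range(k):
        reachable = {r | m for r in reachable for m in masks if r & m == 0}
        if not reachable:
            return False
    return True

def has_k_packing(sets, k, trials=1000, seed=0):
    rng = random.Random(seed)
    return any(colorful_packing_once(sets, k, rng) for _ in range(trials))

# Hypothetical instance: four 3-element sets, some of them disjoint.
sets = [{1, 2, 3}, {3, 4, 5}, {4, 5, 6}, {7, 8, 9}]
print(has_k_packing(sets, 2))  # True (with overwhelming probability)
```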

9.
A general lower bound on the number of examples needed for learning
We prove a lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)/ε) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters. This improves the previous best lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)) and comes close to the known general upper bound of O((1/ε) ln(1/δ) + (VCdim(C)/ε) ln(1/ε)) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor.

10.
This work concerns the trade-offs between the dimension and the time and space complexity of computations on nondeterministic cellular automata. We assume that the space complexity is the diameter of the area in space involved in the computation. It is proved that (1) every nondeterministic cellular automaton (NCA) of dimension r computing a predicate P with time complexity T(n) and space complexity S(n) can be simulated by an r-dimensional NCA with time and space complexity O(T^(1/(r+1)) S^(r/(r+1))) and by an (r+1)-dimensional NCA with time and space complexity O(T^(1/2) + S), where T and S are functions constructible in time; (2) for any predicate P and integer r > 1, if a fastest r-dimensional NCA computing P has time complexity T(n) and space complexity S(n), then T = O(S); and (3) if T_(r,P) is the time complexity of a fastest r-dimensional NCA computing a predicate P, then T_(r+1,P) = O((T_(r,P))^(1−r/(r+1)^2)) and T_(r+1,P) = O((T_(r,P))^(1+2/r)). Similar problems for deterministic cellular automata (CA) are discussed.

11.
Set multi-covering is a generalization of the set covering problem in which each element may need to be covered more than once, so a subset from the given family may be picked several times while minimizing the number of sets needed to satisfy the coverage requirement. In this paper, we propose a family of exact algorithms for the set multi-covering problem based on the inclusion–exclusion principle. The presented ESMC (Exact Set Multi-Covering) algorithm takes O*((2t)^n) time and O*((t+1)^n) space, where t is the maximum value in the coverage requirement set (the O*(f(n)) notation omits a polylog(f(n)) factor). We also propose three other exact algorithms through different trade-offs of the time and space complexities. To the best of our knowledge, this paper is the first to give exact algorithms for the set multi-covering problem with nontrivial time and space complexities. This paper can also be regarded as a generalization of the exact algorithm for the set covering problem given in [A. Björklund, T. Husfeldt, M. Koivisto, Set partitioning via inclusion–exclusion, SIAM Journal on Computing, in: FOCS 2006 (in press, special issue)].
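To make the inclusion–exclusion principle behind this family of algorithms concrete, the following Python sketch shows the basic single-coverage counting idea that the multi-covering algorithms above generalize: the number of ordered t-tuples of subsets whose union is the whole universe equals the sum over X ⊆ U of (−1)^|X| · a(X)^t, where a(X) counts the given subsets avoiding X; the smallest t with a positive count is the optimum. The instance is hypothetical.

```python
def min_set_cover_size(universe, subsets):
    """Inclusion-exclusion set cover: smallest t such that some choice of
    t subsets (repetition allowed) covers the universe.
    Runs in O(2^n * poly) time and polynomial space for |universe| = n."""
    elems = sorted(universe)
    n = len(elems)
    index = {e: i for i, e in enumerate(elems)}
    masks = [sum(1 << index[e] for e in s) for s in subsets]

    def covers_with(t):
        # Number of ordered t-tuples of subsets whose union is the universe:
        #   sum over X subset of U of (-1)^|X| * (# subsets disjoint from X)^t
        total = 0
        for X in range(1 << n):
            a = sum(1 for m in masks if m & X == 0)   # subsets avoiding X
            sign = -1 if bin(X).count("1") % 2 else 1
            total += sign * a ** t
        return total > 0

    for t in range(1, len(subsets) + 1):
        if covers_with(t):
            return t
    return None  # universe not coverable by the given subsets

# Hypothetical instance: the optimum cover uses 2 subsets, e.g. {1,2,3} and {3,4,5}.
print(min_set_cover_size({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {3, 4}, {4, 5}, {3, 4, 5}]))  # 2
```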

12.
Based on the method of (n,k)-universal sets, we present a deterministic parameterized algorithm for the weighted r-d matching problem with time complexity O(4^((r−1)k+o(k))), improving the previous best upper bound O(4^(rk+o(k))). In particular, the algorithm applied to the unweighted 3-d matching problem results in a deterministic algorithm with time O(16^(k+o(k))), improving the previous best result O(21.26^k). For the weighted r-set packing problem, we present a deterministic parameterized algorithm with time complexity O(2^((2r−1)k+o(k))), improving the previous best result O(2^(2rk+o(k))). The algorithm, when applied to the unweighted 3-set packing problem, has running time O(32^(k+o(k))), improving the previous best result O(43.62^(k+o(k))). Moreover, for the weighted r-set packing and weighted r-d matching problems, we give a kernel of size O(k^r), which is the first kernelization algorithm for the weighted versions of these problems.

13.
We present a new method of solving graph problems related to Vertex Cover by enumerating and expanding appropriate sets of nodes. As an application, we obtain dramatically improved runtime bounds for two variants of the Vertex Cover problem. In the case of Connected Vertex Cover, we take the upper bound from O*(6^k) to O*(2.7606^k) without large hidden factors. For Tree Cover, we show a complexity of O*(3.2361^k), improving over the previous bound of O*((2k)^k). In the process, faster algorithms for solving subclasses of the Steiner tree problem on graphs are investigated. Supported by the DFG under grant RO 927/6-1 (TAPI).
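For context, the following Python sketch shows the classic O*(2^k) bounded search tree for plain Vertex Cover (pick an uncovered edge and branch on which endpoint joins the cover). The paper's contribution is a much more refined enumerate-and-expand scheme for the Connected Vertex Cover and Tree Cover variants, which this sketch does not implement; the example graph is hypothetical.

```python
def has_vertex_cover(edges, k):
    """Classic branching: some endpoint of any remaining edge must be in
    every vertex cover, so branch on both choices.  The search tree has
    depth <= k, giving O(2^k * |E|) time."""
    if not edges:          # all edges covered
        return True
    if k == 0:             # budget exhausted but edges remain
        return False
    u, v = edges[0]        # this edge forces u or v into the cover
    without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
    without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

# Hypothetical example: a 4-cycle has a vertex cover of size 2 but not of size 1.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(has_vertex_cover(cycle, 1), has_vertex_cover(cycle, 2))  # False True
```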

14.
Consider any known sequential algorithm for matrix multiplication over an arbitrary ring with time complexity O(N^α), where 2 < α ≤ 3. We show that such an algorithm can be parallelized on a distributed memory parallel computer (DMPC) in O(log N) time by using N^α/log N processors. Such a parallel computation is cost optimal and matches the performance of PRAM. Furthermore, our parallelization on a DMPC can be made fully scalable, that is, for all 1 ≤ p ≤ N^α/log N, multiplying two N×N matrices can be performed by a DMPC with p processors in O(N^α/p) time, i.e., linear speedup and cost optimality can be achieved in the range [1..N^α/log N]. This unifies all known algorithms for matrix multiplication on DMPC, standard or non-standard, sequential or parallel. Extensions of our methods and results to other parallel systems are also presented. For instance, for all 1 ≤ p ≤ N^α/log N, multiplying two N×N matrices can be performed by p processors connected by a hypercubic network in O(N^α/p + (N^2/p^(2/α))(log p)^(2(α−1)/α)) time, which implies that if p = O(N^α/(log N)^(2(α−1)/(α−2))), linear speedup can be achieved. Such a parallelization is highly scalable. The above claims result in significant progress in scalable parallel matrix multiplication (as well as solving many other important problems) on distributed memory systems, both theoretically and practically.
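The following Python sketch shows only the data decomposition behind such parallelizations: the rows of A are split into p blocks, each block-times-B product is an independent task that a DMPC would assign to one processor, and the partial results concatenate into the full product. The O(log N) communication schedule and the scalability analysis of the paper are not modeled; the "processors" here run sequentially and the toy p is illustrative.

```python
def matmul(A, B):
    n, m, r = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(r)]
            for i in range(n)]

def row_blocks(A, p):
    """Split the rows of A into p nearly equal contiguous blocks."""
    n = len(A)
    bounds = [round(i * n / p) for i in range(p + 1)]
    return [A[bounds[i]:bounds[i + 1]] for i in range(p)]

def parallel_matmul_sketch(A, B, p):
    # Each block product is independent: on a DMPC, block i would be
    # computed by processor i; here we simply compute them one by one.
    blocks = row_blocks(A, p)
    partial = [matmul(block, B) for block in blocks]
    return [row for part in partial for row in part]

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
B = [[1, 0], [0, 1]]                       # identity, so A * B == A
assert parallel_matmul_sketch(A, B, p=2) == A
print(parallel_matmul_sketch(A, B, p=2))
```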

15.
In this paper we consider general simulations of algorithms designed for fully operational BSP and CGM machines on machines with faulty processors. The BSP (or CGM) machine is a parallel multicomputer consisting of p processors, over which a memory of n words is evenly distributed, and each processor can send and receive at most h messages in a superstep. The faults are deterministic (i.e., worst-case distributions of faults are considered) and static (i.e., they do not change in the course of computation). We assume that a constant fraction of the processors are faulty. We present two fault-tolerant simulation techniques for BSP and CGM: (1) a deterministic simulation that achieves O(1) slowdown for local computations and O((log_h p)^2) slowdown for communications per superstep, provided that a preprocessing is done that requires O((log_h p)^2) supersteps and linear (in h) computation per processor in each superstep; and (2) a randomized simulation that achieves O(1) slowdown for local computations and O(log_h p) slowdown for communications per superstep with high probability, after the same (deterministic) preprocessing as above. Our results are fully scalable over all values of p from Θ(1) to Θ(n). Furthermore, our results imply that if p ≤ n^ε for 0 < ε < 1 and h = Θ((n/p)^δ) for 0 < δ ≤ 1 (which holds in almost all practical BSP and CGM computations), algorithms can be made resilient to a constant fraction of processor faults without any asymptotic slowdown.

16.
Let [n, k, d]_q codes be linear codes of length n, dimension k, and minimum Hamming distance d over GF(q). In this paper we consider codes over GF(3), GF(5), GF(7), and GF(8). Over GF(3), three new linear codes are constructed. Over GF(5), eight new linear codes are constructed and the nonexistence of six codes is proved. Over GF(7), the existence of 33 new codes is proved. Over GF(8), the existence of ten new codes and the nonexistence of six codes are proved. All of these results improve the corresponding lower and upper bounds in Brouwer's table [www.win.tue.nl/aeb/voorlincod.html].

17.
The notion of distance constrained graph labelings, motivated by the Frequency Assignment Problem, reads as follows: a mapping from the vertex set of a graph G = (V,E) into an interval of integers {0,…,k} is an L(2,1)-labeling of G of span k if any two adjacent vertices are mapped onto integers that are at least 2 apart, and every two vertices with a common neighbor are mapped onto distinct integers. It is known that for any fixed k ≥ 4, deciding the existence of such a labeling is an NP-complete problem. We present exact exponential time algorithms that are faster than the naive O*((k+1)^n) algorithm that would try all possible mappings. The improvement is best seen in the first NP-complete case of k = 4, where the running time of our algorithm is O(1.3006^n). Furthermore we show that dynamic programming can be used to establish an O(3.8730^n) algorithm to compute an optimal L(2,1)-labeling.
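For reference, the following Python sketch implements the naive O*((k+1)^n) search mentioned above: try every mapping of the vertices into {0,…,k} and test the two distance constraints. The paper's algorithms beat this with running times such as O(1.3006^n) for k = 4; the example graph is hypothetical.

```python
from itertools import product

def is_l21_labeling(adj, labeling):
    """Check the L(2,1) conditions: adjacent vertices differ by >= 2, and
    vertices sharing a common neighbor get distinct labels."""
    n = len(adj)
    for u in range(n):
        for v in adj[u]:
            if abs(labeling[u] - labeling[v]) < 2:
                return False
            for w in adj[v]:                 # u and w share the neighbor v
                if w != u and labeling[u] == labeling[w]:
                    return False
    return True

def has_l21_labeling_of_span(adj, k):
    """Naive O*((k+1)^n) search over all labelings with labels 0..k."""
    n = len(adj)
    return any(is_l21_labeling(adj, lab)
               for lab in product(range(k + 1), repeat=n))

# Hypothetical example: the 4-cycle 0-1-2-3-0 admits an L(2,1)-labeling
# of span 4 (e.g. labels 0, 3, 1, 4) but none of span 3.
cycle4 = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(has_l21_labeling_of_span(cycle4, 3), has_l21_labeling_of_span(cycle4, 4))
```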

18.
Constructing normal bases of GF(q^n) over GF(q) can be done by probabilistic methods as well as deterministic ones. In the following paper we consider only deterministic constructions. As far as we know, the best complexity for probabilistic algorithms is O(n^2 log^4 n log^2(log n) + n log n log(log n) log q) (see von zur Gathen and Shoup, 1992). For deterministic constructions, some prior ones, e.g. Lueneburg (1986), do not use the factorization of X^n − 1 over GF(q). As analysed by Bach, Driscoll and Shallit (1993), the best complexity (from Lueneburg, 1986) is O(n^3 log n log(log n) + n^2 log n log(log n) log q). For other deterministic constructions, which need such a factorization, the best complexities are O(n^3.376 + n^2 log n log(log n) log q) (von zur Gathen and Giesbrecht, 1990), or O(n^3 log q); see Augot and Camion (1993). Here we propose a new deterministic construction that does not require the factorization of X^n − 1. Its complexity is reduced to O(n^3 + n log n log(log n) log q).

19.
Summary. Given a file of N records each of which has k keys, the worst-case analysis for the region and partial region queries in multidimensional binary search trees and balanced quad trees is presented. It is shown that the search algorithms proposed in [1, 3] run in time O(k·N^(1−1/k)) for region queries in both tree structures. For partial region queries with s keys specified, the search algorithms run at most in time O(s·N^(1−1/k)) in both structures. This author's research was supported in part by the National Science Foundation under grant MCS-76-17321 and in part by the Joint Service Electronics Program under contract DAAB-07-72-C-0259.
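The following Python sketch builds a standard k-d (multidimensional binary search) tree and answers an orthogonal region query by pruning subtrees whose splitting coordinate lies outside the query range; this is the structure whose worst case is analyzed above. The data points and query box are hypothetical.

```python
def build_kdtree(points, depth=0):
    """Build a k-d tree: split on coordinate (depth mod k) at the median."""
    if not points:
        return None
    k = len(points[0])
    axis = depth % k
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def region_query(node, lo, hi, out):
    """Report all points p with lo[i] <= p[i] <= hi[i] for every i."""
    if node is None:
        return
    p, axis = node["point"], node["axis"]
    if all(lo[i] <= p[i] <= hi[i] for i in range(len(p))):
        out.append(p)
    if lo[axis] <= p[axis]:        # query box may reach into the left subtree
        region_query(node["left"], lo, hi, out)
    if p[axis] <= hi[axis]:        # query box may reach into the right subtree
        region_query(node["right"], lo, hi, out)

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
found = []
region_query(tree, lo=(3, 1), hi=(8, 5), out=found)
print(found)  # the points inside the box [3,8] x [1,5]: (5,4), (8,1), (7,2)
```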

20.
We study the problem of finding a minimum weight complete matching in the complete graph on a set V of n points in k-dimensional space. The points are the vertices of the graph and the weight of an edge between any two points is the distance between the points under some L_q-metric. We give an O((2c_q)^(1.5k) ε^(−1.5k) (α(n, n))^0.5 n^1.5 (log n)^2.5) algorithm for finding an almost minimum weight complete matching in such a graph, where c_q = 6k^(1/q) for the L_q-metric, α is the inverse Ackermann function, and ε ≤ 1. The weight of the complete matching obtained by our algorithm is guaranteed to be at most (1 + ε) times the weight of a minimum weight complete matching. This research was supported by a fellowship from the Shell Foundation.
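As a simple, hedged baseline (not the (1 + ε)-approximation algorithm of the abstract), the following Python sketch computes L_q distances and a greedy complete matching by repeatedly pairing the two closest unmatched points; it only illustrates the objective being approximated, and the point set is hypothetical.

```python
import itertools

def lq_distance(p, q, order=2):
    """L_q distance between two points in k-dimensional space."""
    return sum(abs(a - b) ** order for a, b in zip(p, q)) ** (1.0 / order)

def greedy_matching(points, order=2):
    """Pair up points greedily by increasing L_q distance (a crude
    heuristic; its weight can exceed the optimum by a large factor)."""
    pairs = sorted(itertools.combinations(range(len(points)), 2),
                   key=lambda ij: lq_distance(points[ij[0]], points[ij[1]], order))
    matched, matching, weight = set(), [], 0.0
    for i, j in pairs:
        if i not in matched and j not in matched:
            matched |= {i, j}
            matching.append((i, j))
            weight += lq_distance(points[i], points[j], order)
    return matching, weight

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 5.0), (1.0, 5.0)]
matching, weight = greedy_matching(pts, order=2)
print(matching, weight)  # [(0, 1), (2, 3)] with total weight 2.0
```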
