Similar Articles
 20 similar articles found (search time: 9 ms)
1.
E. Bach, following an idea of T. Itoh, has shown how to build a small set of numbers modulo a prime p such that at least one element of this set is a generator of Z/pZ. E. Bach also suggests that at least half of his set should consist of generators. We show here that a slight variant of this set can indeed be made to contain a ratio of primitive roots as close to 1 as necessary. In particular, we present an asymptotically optimal algorithm providing primitive roots of p with probability of correctness greater than 1−ε, and several O(log^α(p)), α ≤ 5.23, algorithms computing "industrial-strength" primitive roots.
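For concreteness, here is a minimal Python sketch of the classical primitive-root test, which assumes the factorization of p−1 is available; the algorithms of this abstract are designed precisely to avoid that assumption, so this is only an illustrative baseline and the function names are our own.

def prime_factors(n):
    # trial-division factorization; fine for small demonstration values
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive_root(g, p):
    # g generates (Z/pZ)* iff g^((p-1)/q) != 1 (mod p) for every prime q dividing p-1
    return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

# Example: is_primitive_root(3, 7) -> True, is_primitive_root(2, 7) -> False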

2.
We show that pairwise equality comparisons are necessary and sufficient (in the worst case) to find a plurality in n colored balls.
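As an illustration of the model (only pairwise "same color?" tests are allowed), the following naive Python sketch finds a plurality ball with O(n^2) equality comparisons; the abstract concerns the exact worst-case number of comparisons, which this baseline does not attempt to match, and the helper name same_color is our own.

def plurality(balls, same_color):
    # counts[i] = size of the color class of ball i, measured only via equality tests
    n = len(balls)
    counts = [sum(1 for j in range(n) if same_color(balls[i], balls[j]))
              for i in range(n)]
    best = max(counts)
    leaders = [i for i in range(n) if counts[i] == best]
    # a plurality exists iff all balls attaining the maximum count share one color
    if all(same_color(balls[leaders[0]], balls[i]) for i in leaders[1:]):
        return balls[leaders[0]]
    return None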

3.
Matrix domination is the NP-complete problem of determining whether a given {0,1} matrix contains a set of k non-zero entries that are in the same row or same column as all other non-zero entries. Using a kernelization and search-tree approach, we show the problem to be fixed-parameter tractable.
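To make the problem statement concrete, here is a brute-force Python sketch that tries every size-k subset of non-zero entries; it is exponential and is not the kernelization/search-tree algorithm of the abstract, and the function name is illustrative.

from itertools import combinations

def has_dominating_set(matrix, k):
    # (assumes k does not exceed the number of non-zero entries)
    ones = [(i, j) for i, row in enumerate(matrix)
                   for j, v in enumerate(row) if v]
    for chosen in combinations(ones, k):
        # every non-zero entry must share a row or a column with a chosen entry
        if all(any(i == a or j == b for a, b in chosen) for i, j in ones):
            return True
    return False

# Example: the 3x3 identity matrix needs all 3 of its non-zero entries:
# has_dominating_set([[1,0,0],[0,1,0],[0,0,1]], 2) -> False
# has_dominating_set([[1,0,0],[0,1,0],[0,0,1]], 3) -> True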

4.
We consider the conjectured O(N^(2+ε)) time complexity of multiplying any two N×N matrices A and B. Our main result is a deterministic Compressed Sensing (CS) algorithm that both rapidly and accurately computes AB provided that the resulting matrix product is sparse/compressible. As a consequence of our main result, we enlarge the class of matrices A, for any given N×N matrix B, for which the exact computation of AB can be carried out using the conjectured O(N^(2+ε)) operations. Additionally, in the process of developing our matrix multiplication procedure, we present a modified version of Indyk's recently proposed extractor-based CS algorithm [P. Indyk, Explicit constructions for compressed sensing of sparse signals, in: SODA, 2008] which is resilient to noise.
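The following small Python sketch is not the compressed-sensing procedure of the abstract, but it illustrates why sparsity of the product helps: if the support of AB is known (in the abstract it is recovered rather than given), its s non-zero entries cost O(sN) operations in total rather than O(N^3).

def product_on_support(A, B, support):
    # compute only the entries of A @ B listed in `support` (a set of (i, j) pairs)
    N = len(A)
    return {(i, j): sum(A[i][t] * B[t][j] for t in range(N))
            for (i, j) in support}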

5.
In this note, we outline a very simple algorithm for the following problem: given a set S of n points p1, p2, p3, …, pn in the plane, we have O(n²) segments implicitly defined by pairs of these n points; for each point pi, find a segment from this set of implicitly defined segments that is farthest from pi. The time complexity of our algorithm is O(nh + n log n), where n is the number of input points and h is the number of vertices on the convex hull of S.
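A cubic brute-force Python sketch of the problem (for each point, scan all O(n^2) implicitly defined segments) may help fix the definitions; the abstract's algorithm achieves O(nh + n log n), which this baseline does not, and the names are illustrative. Points are given as (x, y) tuples.

from itertools import combinations
from math import hypot

def point_segment_distance(p, a, b):
    # Euclidean distance from point p to the segment with endpoints a and b
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def farthest_segments(points):
    # for every point, the implicitly defined segment that lies farthest from it
    segments = list(combinations(points, 2))
    return {p: max(segments, key=lambda s: point_segment_distance(p, *s))
            for p in points}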

6.
In this note, we show, through the use of examples, how generic results for proving fixed-parameter tractability which apply to restricted classes of structures can sometimes be more widely applied.

7.
8.
9.
10.
11.
In his seminal 1986 paper, Bryant introduced Ordered Binary Decision Diagrams (OBDDs) as a dynamic data structure for the representation and manipulation of Boolean functions, which allows efficient algorithms for important operations like synthesis (the combination of two Boolean functions by a Boolean operator) and equivalence checking. Based on his empirical evaluation, he conjectured that his apply algorithm for synthesis works in linear time with respect to the input and output size. In 2012, Yoshinaka et al. presented a counterexample which contradicts this conjecture, but their example has the drawback that the chosen variable ordering for the OBDD representation of the input and output is bad. They therefore raised the question whether Bryant's conjecture may still stand for reasonable variable orderings. Here, a negative answer is given by presenting a simple counterexample which works for such variable orderings.
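For readers unfamiliar with the apply operation whose complexity is at issue, here is a minimal reduced-OBDD sketch in Python with a memoized apply; the node representation, unique table, and variable-order convention are our own simplifications, not Bryant's original data structures.

# Internal nodes are hash-consed triples (var, low, high); the terminals are the
# Python booleans False and True. Variables are tested in increasing index order.
_unique = {}

def mk(var, low, high):
    # reduction rules: skip redundant tests, share isomorphic subgraphs
    if low == high:
        return low
    return _unique.setdefault((var, low, high), (var, low, high))

def var_of(node):
    return node[0] if isinstance(node, tuple) else float("inf")

def apply_op(op, f, g, memo=None):
    # synthesis: combine OBDDs f and g with the Boolean operator `op`
    if memo is None:
        memo = {}
    if (f, g) in memo:
        return memo[(f, g)]
    if not isinstance(f, tuple) and not isinstance(g, tuple):
        res = op(f, g)                     # both arguments are terminals
    else:
        v = min(var_of(f), var_of(g))      # top variable of the pair
        f0, f1 = (f[1], f[2]) if var_of(f) == v else (f, f)
        g0, g1 = (g[1], g[2]) if var_of(g) == v else (g, g)
        res = mk(v, apply_op(op, f0, g0, memo), apply_op(op, f1, g1, memo))
    memo[(f, g)] = res
    return res

# Example: the OBDD for x1 AND x2.
x1 = mk(1, False, True)
x2 = mk(2, False, True)
conjunction = apply_op(lambda a, b: a and b, x1, x2)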

12.
We substantially improve the known algorithms for approximating all the complex zeros of an nth-degree polynomial p(x). Our new algorithms save both Boolean and arithmetic sequential time, versus the previous best algorithms of Schönhage [1], Pan [2], and Neff and Reif [3]. In parallel (NC) implementation, we dramatically decrease the number of processors, versus the parallel algorithm of Neff [4], which was the only NC algorithm known for this problem so far. Specifically, under the simple normalization assumption that the variable x has been scaled so as to confine the zeros of p(x) to the unit disc {x : |x| ≤ 1}, our algorithms (which promise to be practically effective) approximate all the zeros of p(x) within the absolute error bound 2^(−b), by using order of n arithmetic operations and order of (b + n)n² Boolean (bitwise) operations (in both cases up to within polylogarithmic factors). The algorithms allow their optimal (work-preserving) NC parallelization, so that they can be implemented using polylogarithmic time and the order of n arithmetic processors or (b + n)n² Boolean processors. All the cited bounds on the computational complexity are within polylogarithmic factors from the optimum (in terms of n and b) under both arithmetic and Boolean models of computation (in the Boolean case, under the additional (realistic) assumption that n = O(b)).
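As a practical point of reference only, here is a classical, easy-to-implement all-roots iteration (Durand–Kerner / Weierstrass) in Python; it is not the near-optimal algorithm of the abstract and carries no comparable worst-case guarantee. The tolerance, iteration cap, and starting values are conventional choices.

def durand_kerner(coeffs, tol=1e-12, max_iter=500):
    # approximate all complex zeros of p(x) = coeffs[0]*x^n + ... + coeffs[n], n >= 1
    n = len(coeffs) - 1
    if n < 1:
        return []
    lead = coeffs[0]
    def p(z):
        return sum(c * z ** (n - i) for i, c in enumerate(coeffs))
    roots = [(0.4 + 0.9j) ** k for k in range(1, n + 1)]   # distinct start values
    for _ in range(max_iter):
        new = []
        for i, r in enumerate(roots):
            denom = lead
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            new.append(r - p(r) / denom)
        done = max(abs(a - b) for a, b in zip(new, roots)) < tol
        roots = new
        if done:
            break
    return roots

# Example: durand_kerner([1, 0, -1]) approximates the zeros +1 and -1 of x^2 - 1.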

13.
This paper considers a weak simulation preorder for Markov chains that allows for stuttering. Despite the second-order quantification in its definition, we present a polynomial-time algorithm to compute the weak simulation preorder of a finite Markov chain.

14.
Ordered Binary Decision Diagrams (OBDDs) are a data structure for Boolean functions which supports many useful operations. Among others it finds applications in CAD, model checking, and symbolic graph algorithms. Nevertheless, many simple functions are known to have exponential OBDD size with respect to their number of variables. In order to investigate the limits of symbolic graph algorithms which work on OBDD-represented graph instances, it is useful to have simply-structured graphs whose OBDD representation has exponential size. Therefore, we consider two fundamental functions with exponential lower bounds on their OBDD size and transfer these results to their corresponding graphs. Concretely, we consider the Indirect Storage Access function and the Hidden Weighted Bit function.
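For orientation, here are Python definitions of the two functions; the Hidden Weighted Bit definition is the standard textbook one, while the Indirect Storage Access sketch follows one common block convention that may not match the cited construction exactly, so treat its indexing as an assumption.

def hidden_weighted_bit(x):
    # HWB(x_1,...,x_n) = x_k where k is the number of ones in x, and 0 if k = 0
    k = sum(x)
    return 0 if k == 0 else x[k - 1]

def indirect_storage_access(x, y):
    # ISA (one common reading, assumed here): y addresses a length-|y| block of x,
    # that block is read as a second address, and the addressed bit of x is the
    # output; out-of-range accesses return 0.
    k = len(y)
    a = int("".join(map(str, y)), 2)            # first-level address
    block = x[a * k : a * k + k]                # addressed block of x
    if len(block) < k:
        return 0
    b = int("".join(map(str, block)), 2)        # second-level address
    return x[b] if b < len(x) else 0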

15.
It is well known that the average case deterministic communication complexity is bounded below by an entropic quantity, which one would now call deterministic information complexity. In this paper we show a corresponding upper bound. We also improve known lower bounds for the public coin Las Vegas communication complexity by a constant factor.

16.
Rotation distance between trees measures the number of simple operations it takes to transform one tree into another. There are no known polynomial-time algorithms for computing rotation distance. In the case of ordered rooted trees, we show that computing the rotation distance between two ordered trees is fixed-parameter tractable in the parameter k, the rotation distance. The proof relies on kernelizing the initial trees to trees of size bounded by 5k.
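A minimal Python sketch of the local move being counted may be useful: a single rotation on an ordered rooted binary tree, here represented as nested (left, right) tuples with None leaves; the representation and function names are illustrative choices, not part of the abstract.

def rotate_right(node):
    # rotate the edge between `node` and its left child
    left, right = node
    if left is None:
        raise ValueError("right rotation needs a left child")
    ll, lr = left
    return (ll, (lr, right))

def rotate_left(node):
    # inverse move: rotate the edge between `node` and its right child
    left, right = node
    if right is None:
        raise ValueError("left rotation needs a right child")
    rl, rr = right
    return ((left, rl), rr)

# Rotation distance between two trees is the minimum number of such moves
# needed to turn one into the other.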

17.
An algorithm for learning a subclass of erasing regular pattern languages is presented. On extended regular pattern languages generated by patterns π of the form x0 α1 x1 ⋯ αm xm, where x0, …, xm are variables and α1, …, αm are strings of terminals of length c each, it runs with arbitrarily high probability of success using a number of examples polynomial in m (and exponential in c). It is assumed that m is unknown but c is known, and that samples are randomly drawn according to some distribution, for which we only require that it has certain natural and plausible properties.
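As a small aside (not the learning algorithm of the abstract), membership in such a pattern language is easy to check under the assumed reading that each variable x0, …, xm occurs once and may be erased, i.e. replaced by any string including the empty one: a word belongs to the language exactly when α1, …, αm occur in it in order as disjoint substrings, which a greedy left-to-right scan detects. A Python sketch:

def in_pattern_language(w, alphas):
    # greedy leftmost matching of the terminal blocks alpha_1, ..., alpha_m in order
    pos = 0
    for alpha in alphas:
        hit = w.find(alpha, pos)
        if hit < 0:
            return False
        pos = hit + len(alpha)
    return True

# Example for the pattern x0 "ab" x1 "ba" x2:
# in_pattern_language("xxabyyba", ["ab", "ba"]) -> True
# in_pattern_language("aba",      ["ab", "ba"]) -> False (the occurrences overlap)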

18.
We give a simple O(n log n) algorithm to compute the convex hull of the (possibly Θ(n²)) intersection points in an arrangement of n line segments in the plane. We also exhibit an arrangement of dn hyperplanes in d dimensions that has Θ(n^(d−1)) intersection points on the convex hull.
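For comparison with the O(n log n) result, here is a naive Python baseline that enumerates all pairwise intersection points explicitly (quadratically many) and then runs a standard monotone-chain convex hull; parallel and non-crossing pairs are simply skipped, and all names are illustrative.

def seg_intersection(p1, p2, p3, p4):
    # intersection point of segments p1p2 and p3p4, or None
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_of_intersections(segments):
    # segments are pairs of (x, y) endpoints
    pts = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            q = seg_intersection(*segments[i], *segments[j])
            if q is not None:
                pts.append(q)
    return convex_hull(pts)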

19.
We consider the scheduling of orders in an environment with m uniform machines in parallel. Each order requests certain amounts of k different product types, and each product type can be produced by any one of the m machines. No setup is required when a machine switches from one product type to another. Different product types intended for the same order can be produced at the same time (concurrently) on different machines. Each order is released at time zero and has a positive weight. Preemptions are allowed. The completion time of an order is the finish time of the product type that is completed last for that order. The objective is to minimize the total weighted completion time. We propose heuristics for the non-preemptive as well as the preemptive case and obtain worst-case bounds that are a function of the number of machines as well as the differences in the speeds of the machines. Even though the worst-case bounds we show for the two heuristics are not very tight, our experimental results show that they yield solutions that are very close to optimal.

20.