Similar Literature
20 similar documents found.
1.
Karchmer, Raz, and Wigderson (1995) discuss the circuit depth complexity of n-bit Boolean functions constructed by composing up to d = log n/log log n levels of k = log n-bit Boolean functions. Any such function is in AC^1. They conjecture that circuit depth is additive under composition, which would imply that any (bounded fan-in) circuit for this problem requires depth close to d·k = log^2 n/log log n. This would separate AC^1 from NC^1. They recommend using the communication game characterization of circuit depth. In order to develop techniques for using communication complexity to prove circuit depth lower bounds, they suggest an intermediate communication complexity problem which they call the Universal Composition Relation. We give an almost optimal lower bound of dk − O(d^2 (k log k)^{1/2}) for this problem. In addition, we present a proof, directly in terms of communication complexity, that there is a function on k bits requiring circuit depth close to k. Although this fact can be easily established using a counting argument, we hope that the ideas in our proof will be incorporated more easily into subsequent arguments which use communication complexity to prove circuit depth bounds. Received: July 30, 1999.
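For reference, the communication-game characterization of circuit depth invoked above is the Karchmer–Wigderson relation; a standard statement of it (not quoted from the paper) is:

```latex
% Karchmer–Wigderson game for a Boolean function f on k bits:
% Alice holds x with f(x) = 1, Bob holds y with f(y) = 0, and they must
% agree on a coordinate i with x_i \neq y_i.  The minimum depth of a
% fan-in-2 circuit for f equals the communication complexity of this relation:
\[
  \operatorname{depth}(f) \;=\; \operatorname{CC}(\mathrm{KW}_f),
  \qquad
  \mathrm{KW}_f \;=\; \{\,(x,y,i) : f(x)=1,\ f(y)=0,\ x_i \neq y_i\,\}.
\]
```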

2.
In contrast to machine models like Turing machines or random access machines, circuits are a static computational model. The internal information flow of a computation is fixed in advance, independent of the actual input. Therefore, size and depth are natural and simple measures for circuits and provide a worst-case analysis. We consider a new model in which an internal gate is evaluated as soon as its result has been determined by a partial assignment of its inputs. This way, a dynamic notion of delay is obtained which gives rise to an average case measure for the time complexity of circuits. In a previous paper we have obtained tight upper and lower bounds for the average case complexity of several basic Boolean functions. This paper examines the asymptotic average case complexity for the set of all n-ary Boolean functions. In contrast to worst case analysis, a simple counting argument does not work. We prove that with respect to the uniform probability distribution almost all Boolean functions require at least n − log n − log log n expected time. On the other hand, there is a significantly large subset of functions that can be computed with a constant average delay. Finally, for an arbitrary Boolean function we compare its worst case and average case complexity. It is shown that for each function that requires circuit depth d, i.e. of worst-case complexity d, the expected time complexity will be at least d − log n − log d with respect to an explicitly defined probability distribution. In addition, a nontrivial upper bound on the complexity of such a distribution will be obtained.
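A minimal round-based sketch of the delay measure described above (my own illustration, not a construction from the paper): a gate is evaluated in the first round in which its value is forced by the wires already known, so, for example, an AND gate with a known 0 input fires without waiting for its other input.

```python
# Toy simulation of "dynamic delay": all input bits are available at time 0, and a
# gate is evaluated in the first round in which its value is forced by the wires
# known so far.  circuit: dict  wire -> ('AND'|'OR'|'NOT', input wire names...)

def dynamic_delay(circuit, inputs):
    value = dict(inputs)                      # known wire values
    time = {w: 0 for w in inputs}             # round in which each wire became known
    remaining = dict(circuit)
    t = 0
    while remaining:
        t += 1
        snapshot = dict(value)                # values known at the start of this round
        fired = []
        for w, (op, *args) in remaining.items():
            known = [snapshot[a] for a in args if a in snapshot]
            if op == 'NOT' and known:
                out = 1 - known[0]
            elif op == 'AND' and (0 in known or len(known) == len(args)):
                out = 0 if 0 in known else 1
            elif op == 'OR' and (1 in known or len(known) == len(args)):
                out = 1 if 1 in known else 0
            else:
                continue                      # value not yet forced
            value[w], time[w] = out, t
            fired.append(w)
        if not fired:
            raise ValueError("circuit cannot be evaluated from the given inputs")
        for w in fired:
            del remaining[w]
    return value, time

# Example: g = AND(x, OR(y, z)); with x = 0 the AND gate fires in round 1,
# before the value of the OR gate is known.
circ = {'u': ('OR', 'y', 'z'), 'g': ('AND', 'x', 'u')}
print(dynamic_delay(circ, {'x': 0, 'y': 1, 'z': 0}))
```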

3.
In this paper we consider general simulations of algorithms designed for fully operational BSP and CGM machines on machines with faulty processors. The BSP (or CGM) machine is a parallel multicomputer consisting of p processors among which a memory of n words is evenly distributed, and each processor can send and receive at most h messages in a superstep. The faults are deterministic (i.e., worst-case distributions of faults are considered) and static (i.e., they do not change in the course of computation). We assume that a constant fraction of the processors are faulty. We present two fault-tolerant simulation techniques for BSP and CGM: 1. A deterministic simulation that achieves O(1) slowdown for local computations and O((log_h p)^2) slowdown for communications per superstep, provided that a preprocessing is done that requires O((log_h p)^2) supersteps and linear (in h) computation per processor in each superstep. 2. A randomized simulation that achieves O(1) slowdown for local computations and O(log_h p) slowdown for communications per superstep with high probability, after the same (deterministic) preprocessing as above. Our results are fully scalable over all values of p from Θ(1) to Θ(n). Furthermore, our results imply that if p ≤ n^ε for some 0 < ε < 1 and h = Θ((n/p)^δ) for 0 < δ ≤ 1 (which hold in almost all practical BSP and CGM computations), algorithms can be made resilient to a constant fraction of processor faults without any asymptotic slowdown.

4.
We consider one-dimensional and multidimensional vector covering with variable-sized bins. In the one-dimensional case, we consider variable-sized bin covering with bounded item sizes. For every finite set of bins B and upper bound 1/m on the size of items (for some integer m), we define a ratio r(B, m). We prove this is the best possible competitive ratio for the set of bins B and the parameter m by giving both an algorithm with competitive ratio r(B, m) and an upper bound of r(B, m) on the competitive ratio of any online deterministic or randomized algorithm. The ratio satisfies r(B, m) ≥ m/(m+1) and equals this number if all bins are of size 1. For multidimensional vector covering we consider the case where each bin is a binary d-dimensional vector. It was shown by N. Alon, Y. Azar, J. Csirik, L. Epstein, S. V. Sevastianov, A. P. A. Vestjens, and G. J. Woeginger (1998, Algorithmica 21, 104-118) that if B contains a single bin which is all 1's, then the best competitive ratio is Θ(1/d). We show an upper bound of (1/(2d))(1 − o(1)) for the general problem, and consider four special-case variants. We show an algorithm with optimal competitive ratio 1/2 for the model where each bin in B is a standard basis vector. We consider the model where B consists of all unit prefix vectors. A unit prefix vector has i leftmost components equal to 1, and all other components equal to 0. We show that this model is harder than the case of standard basis vector bins by giving an upper bound of O(1/log d) on the competitive ratio of any deterministic or randomized algorithm. Next, we discuss the model where B contains all binary vectors. We show this model is easier than the model of one bin type which is all 1's by giving an algorithm with competitive ratio Ω(1/log d). The most interesting multidimensional case is d = 2. The results of N. Alon et al. give a 0.25-competitive algorithm for B = {(1, 1)} and an upper bound of 0.4 on the competitive ratio of any algorithm. In this paper we consider all other models for d = 2. For standard basis vectors, we give an algorithm with optimal competitive ratio 1/2. For unit prefix vectors we give an upper bound of 4/9 on the competitive ratio of any deterministic or randomized algorithm. For the model where B consists of all binary vectors, we design an algorithm with ratio larger than 0.4. These results show that all the above relations between models hold for d = 2 as well.
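A minimal next-fit-style sketch of online bin covering with unit-size bins (my own illustration with my own function names, not the algorithm achieving r(B, m) from the paper): items are accumulated into the current bin until it is covered. When all item sizes are at most 1/m, each covered bin overshoots 1 by at most 1/m, which is the intuition behind ratios of the form m/(m+1).

```python
# Next-fit online bin covering with bins of size 1 (illustrative sketch only).
# Each arriving item is added to the current bin; the bin counts as "covered"
# (and is closed) once its total reaches 1.  With item sizes <= 1/m, a closed
# bin holds at most 1 + 1/m worth of items.

def next_fit_cover(items):
    covered, current = 0, 0.0
    for size in items:            # items arrive online, one by one
        current += size
        if current >= 1.0:
            covered += 1          # close the covered bin and start a new one
            current = 0.0
    return covered

print(next_fit_cover([0.4, 0.3, 0.5, 0.2, 0.6, 0.3]))   # -> 2
```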

5.
We prove lower bounds on the randomized two-party communication complexity of functions that arise from read-once Boolean formulae. A read-once Boolean formula is a formula in propositional logic with the property that every variable appears exactly once. Such a formula can be represented by a tree, where the leaves correspond to variables and the internal nodes are labeled by binary connectives. Under certain assumptions, this representation is unique. Thus, one can define the depth of a formula as the depth of the tree that represents it. The complexity of the evaluation of general read-once formulae has attracted interest mainly in the decision tree model. In the communication complexity model many interesting results deal with specific read-once formulae, such as DISJOINTNESS and TRIBES. In this paper we use information theory methods to prove lower bounds that hold for any read-once formula. Our lower bounds are of the form n(f)/c^{d(f)}, where n(f) is the number of variables and d(f) is the depth of the formula, and they are optimal up to the constant c in the base of the denominator.

6.
Damaschke, Peter. Machine Learning 41(2):197-215, 2000.
We study the complexity of learning arbitrary Boolean functions of n variables by membership queries, if at most r variables are relevant. Problems of this type have important applications in fault searching, e.g. logical circuit testing and generalized group testing. Previous literature concentrates on special classes of such Boolean functions and considers only adaptive strategies. First we give a straightforward adaptive algorithm using O(r 2^r log n) queries, but actually, most queries are asked nonadaptively. This leads to the problem of purely nonadaptive learning. We give a graph-theoretic characterization of nonadaptive learning families, called r-wise bipartite connected families. By the probabilistic method we show the existence of such families of size O(r 2^r log n + r^2 2^r). This implies that nonadaptive attribute-efficient learning is not essentially more expensive than adaptive learning. We also sketch an explicit pseudopolynomial construction, though with a slightly worse bound. It uses the common derandomization technique of small-biased k-independent sample spaces. For the special case r = 2, we get roughly 2.275 log n adaptive queries, which is fairly close to the obvious lower bound of 2 log n. For the class of monotone functions, we prove that the optimal query number O(2^r + r log n) can already be achieved in O(r) stages. On the other hand, Ω(2^r log n) is a lower bound on nonadaptive queries.
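A small sketch of the standard binary-search step used by adaptive membership-query learners (an illustrative routine under my own naming, not Damaschke's algorithm): given two assignments on which the target disagrees, halving the set of differing coordinates isolates one relevant variable with O(log n) queries.

```python
# Given a membership oracle f and two assignments a, b (lists/tuples of 0/1 bits)
# with f(a) != f(b), binary-search over the coordinates where a and b differ
# to locate one relevant variable using O(log n) queries.

def find_relevant_variable(f, a, b):
    a, b = list(a), list(b)
    assert f(a) != f(b)
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    while len(diff) > 1:
        half = diff[:len(diff) // 2]
        mid = a[:]                       # move a halfway towards b
        for i in half:
            mid[i] = b[i]
        if f(mid) != f(a):               # the value flips inside `half`
            b, diff = mid, half
        else:                            # otherwise it flips in the other half
            a, diff = mid, diff[len(diff) // 2:]
    return diff[0]                       # index of a relevant variable

# Example: the target depends only on x3 and x7 (out of n = 10 variables).
g = lambda x: x[3] & x[7]
print(find_relevant_variable(g, [0] * 10, [1] * 10))   # -> 7 (or 3)
```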

7.
Every Boolean function on n variables can be expressed as a unique multilinear polynomial modulo p for every prime p. In this work, we study how the degree of a function in one characteristic affects its complexity in other characteristics. We establish the following general principle: functions with low degree modulo p must have high complexity in every other characteristic q. More precisely, we show the following results about Boolean functions f : {0, 1}^n → {0, 1} which depend on all n variables, and distinct primes p ≠ q:
- If f has degree o(log n) modulo p, then it must have degree Ω(n^{1−o(1)}) modulo q. Thus a Boolean function has degree o(log n) in at most one characteristic. This result is essentially tight, as there exist functions that have degree log n in every characteristic.
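A small brute-force illustration of degrees in different characteristics (function and variable names are mine): the coefficients of the multilinear representation over GF(p) can be read off by Möbius inversion over the subset lattice, and parity shows the theme in miniature, having degree 1 mod 2 but degree n mod any other prime.

```python
from itertools import combinations

def degree_mod_p(f, n, p):
    """Degree of the unique multilinear polynomial representing f over GF(p).
    Coefficients via Moebius inversion on the subset lattice:
        c_S = sum_{T subset of S} (-1)^{|S|-|T|} f(1_T)   (mod p)."""
    deg = 0
    variables = range(n)
    for d in range(n + 1):
        for S in combinations(variables, d):
            c = 0
            for r in range(d + 1):
                for T in combinations(S, r):
                    x = [1 if i in T else 0 for i in variables]
                    c += (-1) ** (d - r) * f(x)
            if c % p != 0:
                deg = d                  # a monomial of size d has nonzero coefficient
    return deg

# Parity on n bits: degree 1 mod 2, but degree n mod 3, illustrating how low degree
# in one characteristic forces high degree in another.
parity = lambda x: sum(x) % 2
n = 4
print(degree_mod_p(parity, n, 2), degree_mod_p(parity, n, 3))   # -> 1 4
```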

8.
We present deterministic upper and lower bounds on the slowdown required to simulate an (n, m)-PRAM on a variety of networks. The upper bounds are based on a novel scheme that exploits the splitting and combining of messages. This scheme can be implemented on an n-node d-dimensional mesh (for constant d) and on an n-leaf pruned butterfly, and attains the smallest worst-case slowdown to date for such interconnections, namely, O(n^{1/d}(log(m/n))^{1−1/d}) for the d-dimensional mesh (with constant d) and O( ) for the pruned butterfly. In fact, the simulation on the pruned butterfly is the first PRAM simulation scheme on an area-universal network. Finally, we prove restricted and unrestricted lower bounds on the slowdown of any deterministic PRAM simulation on an arbitrary network, formulated in terms of the bandwidth properties of the interconnection as expressed by its decomposition tree.

9.
We provide a uniform framework for the study of index data structures for a two-dimensional matrix TEXT[1:n, 1:n] whose entries are drawn from an ordered alphabet Σ. An index for TEXT can be informally seen as the two-dimensional analog of the suffix tree for a string. It allows on-line searches and statistics to be performed on TEXT by representing compactly the Θ(n^3) square submatrices of TEXT in optimal O(n^2) space. We identify 4^{n−1} families of indices for TEXT, each containing ∏_{i=1}^{n} (2i−1)! isomorphic data structures. We also develop techniques leading to a single algorithm that efficiently builds any index in any family in O(n^2 log n) time and O(n^2) space. Such an algorithm improves in various respects the algorithms for the construction of the PAT tree and the Lsuffix tree. The framework and the algorithm easily generalize to d > 2 dimensions. Moreover, as part of our algorithm, we provide new algorithmic tools that yield a space-efficient implementation of the "naming scheme" of R. Karp et al. (in "Proceedings, Fourth Symposium on Theory of Computing," pp. 125-136) for strings and matrices.
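The Karp–Miller–Rosenberg "naming scheme" mentioned above assigns consistent integer names to substrings of doubling lengths; a compact sketch for one-dimensional strings (illustrative only, not the space-efficient matrix version developed in the paper) is:

```python
# Karp-Miller-Rosenberg naming for a string: names[k][i] is an integer name for the
# substring s[i : i + 2**k], and two substrings of the same power-of-two length are
# equal iff their names are equal.  Each level is obtained by ranking pairs of names
# from the previous level (the usual "doubling" step).

def kmr_names(s):
    n = len(s)
    alphabet = sorted(set(s))
    names = [{i: alphabet.index(c) for i, c in enumerate(s)}]
    length = 1
    while 2 * length <= n:
        pairs = {i: (names[-1][i], names[-1][i + length])
                 for i in range(n - 2 * length + 1)}
        rank = {p: r for r, p in enumerate(sorted(set(pairs.values())))}
        names.append({i: rank[p] for i, p in pairs.items()})
        length *= 2
    return names

s = "abababab"
names = kmr_names(s)
print(names[2][0] == names[2][2], s[0:4] == s[2:6])   # -> True True
```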

10.
We present new efficient deterministic and randomized distributed algorithms for decomposing a graph with n nodes into a disjoint set of connected clusters with radius at most k−1 and having O(n^{1+1/k}) intercluster edges. We show how to implement our algorithms in the distributed CONGEST model of computation, i.e., with limited message size, which improves the time complexity of previous algorithms (Moran and Snir in Theor. Comput. Sci. 243(1-2):217-241, 2000; Awerbuch in J. ACM 32:804-823, 1985; Peleg in Distributed Computing: A Locality-Sensitive Approach, 2000) from O(n) to O(n^{1−1/k}). We apply our algorithms for constructing low stretch graph spanners and network synchronizers in sublinear deterministic time in the CONGEST model.
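A sequential ball-growing sketch of this kind of low-diameter decomposition (the classical centralized idea, stated under my own naming, not the distributed CONGEST algorithm of the paper): repeatedly grow a BFS ball from an unclustered vertex while each new layer multiplies the ball size by more than n^{1/k}; the growth condition caps the cluster radius at k−1.

```python
# Sequential ball-growing sketch: grow a BFS ball from an unclustered vertex as long
# as the next layer multiplies the ball size by more than n**(1/k).  The growth test
# can succeed at most k-1 times before the ball would exceed n vertices, so every
# cluster has radius at most k-1.

def ball_growing_decomposition(adj, k):
    n = len(adj)
    growth = n ** (1.0 / k)
    cluster = {}                          # vertex -> cluster id
    for cid, start in enumerate(adj):
        if start in cluster:
            continue
        ball = {start}
        while True:
            frontier = {u for v in ball for u in adj[v]
                        if u not in ball and u not in cluster}
            if not frontier or len(ball) + len(frontier) <= growth * len(ball):
                break                     # next layer does not grow the ball enough
            ball |= frontier
        for v in ball:
            cluster[v] = cid
    return cluster

# Star on 6 vertices: with k = 2 the whole star becomes one cluster of radius 1.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
print(ball_growing_decomposition(adj, 2))
```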

11.
Nisan showed that any randomized logarithmic space algorithm (running in polynomial time and with two-sided error) can be simulated by a deterministic algorithm that runs simultaneously in polynomial time and Θ(log^2 n) space. Subsequently Saks and Zhou improved the space complexity and showed that a deterministic simulation can be carried out in space Θ(log^{1.5} n). However, their simulation runs in time n^{Θ(log^{0.5} n)}. We prove a time-space tradeoff that interpolates between these two simulations. Specifically, we prove that, for any 0 ≤ α ≤ 0.5, any randomized logarithmic space algorithm (running in polynomial time and with two-sided error) can be simulated deterministically in time n^{O(log^{0.5−α} n)} and space O(log^{1.5+α} n). That is, we prove that BPL ⊆ DTISP[n^{O(log^{0.5−α} n)}, O(log^{1.5+α} n)].

12.
The paper describes several algorithms related to the problem of computing the local dimension of a semialgebraic set. Let a semialgebraic set V be defined by a system of k inequalities of the form f ≥ 0 with f ∈ R[X_1, …, X_n], deg(f) < d, and let x ∈ V. An algorithm is constructed for computing the dimension of the Zariski tangent space to V at x in time (kd)^{O(n)}. Let x belong to a stratum of codimension l_x in V with respect to a smooth stratification of V. Another algorithm computes the local dimension dim_x(V) with complexity (k(l_x + 1)d)^{O(l_x^2 n)}. If l = max_{x ∈ V} l_x, and for every connected component the local dimension is the same at each point, then the algorithm computes the dimension of every connected component with complexity (k(l + 1)d)^{O(l^2 n)}. If V is a real algebraic variety defined by a system of equations, then the complexity of the algorithm is less than kd^{O(l^2 n)}, and the algorithm also finds the dimension of the tangent space to V at x in time kd^{O(n)}. When l is fixed, as in the case of a smooth V, the complexity bounds for computing the local dimension are (kd)^{O(n)} and kd^{O(n)}, respectively. A third algorithm finds the singular locus of V in time (kd)^{O(n^2)}.
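As a small illustration of the first quantity mentioned above, the Zariski tangent space (shown here for a variety defined by equations rather than inequalities, and computed symbolically rather than with the paper's (kd)^{O(n)} algorithm): its dimension at a point x is n minus the rank of the Jacobian of the defining equations at x.

```python
import sympy as sp

# Zariski tangent space of the variety V = {f_1 = ... = f_k = 0} at a point x:
# its dimension equals n - rank(J(x)), where J is the Jacobian matrix of the f_i.

def tangent_space_dim(polys, variables, point):
    J = sp.Matrix([[sp.diff(f, v) for v in variables] for f in polys])
    J_at_x = J.subs(dict(zip(variables, point)))
    return len(variables) - J_at_x.rank()

x, y, z = sp.symbols('x y z')

# The cone x^2 + y^2 - z^2 = 0 in R^3: smooth points have a 2-dimensional tangent
# space, while at the singular apex the tangent space is all of R^3.
cone = [x**2 + y**2 - z**2]
print(tangent_space_dim(cone, (x, y, z), (3, 4, 5)))   # -> 2
print(tangent_space_dim(cone, (x, y, z), (0, 0, 0)))   # -> 3
```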

13.
D. Eppstein. Algorithmica 13(5):462-471, 1995.
We convert constructive solid geometry input to explicit representations of polygons, polyhedra, or more generally d-dimensional polyhedra, in time and space O(n^d), improving a previous O(n^d log n) time bound. We then show that any Boolean formula can be preprocessed in time O(n log n/log log n) and linear space so that the value of the formula can be maintained, as variables are changed one by one, in time O(log n/log log n) per change; this speeds up certain output-sensitive algorithms for constructive solid geometry.
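A minimal sketch of the maintenance task described above (my own naive version, assuming a formula stored as a tree): after a variable changes, re-evaluate only the nodes on the leaf-to-root path, which costs O(depth) per change; the paper's structure brings this down to O(log n/log log n) per change by other means.

```python
# Maintaining the value of a Boolean formula as variables change one at a time.
# This sketch re-evaluates only the nodes on the leaf-to-root path after a change.

class Node:
    def __init__(self, op, children=(), var=None):
        self.op, self.children, self.var, self.parent = op, list(children), var, None
        for c in self.children:
            c.parent = self

def evaluate(node, values):
    if node.op == 'VAR':
        node.value = values[node.var]
    else:
        kids = [c.value for c in node.children]
        node.value = all(kids) if node.op == 'AND' else any(kids)
    return node.value

def build(node, values):
    for c in node.children:                  # full bottom-up evaluation once
        build(c, values)
    return evaluate(node, values)

def update(leaf, values, bit):
    values[leaf.var] = bit
    node = leaf
    while node is not None:                  # re-evaluate only along the path to the root
        evaluate(node, values)
        node = node.parent

# Formula (x AND y) OR z
x, y, z = Node('VAR', var='x'), Node('VAR', var='y'), Node('VAR', var='z')
root = Node('OR', [Node('AND', [x, y]), z])
values = {'x': 1, 'y': 0, 'z': 0}
print(build(root, values))    # -> False
update(y, values, 1)
print(root.value)             # -> True
```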

14.
A graph-theoretic approach to studying the complexity of Boolean functions was initiated by Pudlák, Rödl, and Savický [PRS] by defining models of computation on graphs. These models generalize well-known models of Boolean complexity such as circuits, branching programs, and two-party communication complexity. A Boolean function f is called a 2-slice function if it evaluates to zero on inputs with fewer than two 1's and evaluates to one on inputs with more than two 1's. On inputs with exactly two 1's, f may be nontrivially defined. There is a natural correspondence between 2-slice functions and graphs. Using the framework of graph complexity, we show that sufficiently strong superlinear monotone lower bounds for the very special class of 2-slice functions would imply superpolynomial lower bounds over a complete basis for certain functions derived from them. We prove, for instance, that a lower bound of n^{1+Ω(1)} on the (monotone) formula size of an explicit 2-slice function f on n variables would imply a 2^{Ω(ℓ)} lower bound on the formula size over a complete basis of another explicit function g on ℓ variables, where ℓ = Θ(log n). We also consider lower bound questions for depth-3 bipartite graph complexity. We prove a weak lower bound on this measure using algebraic methods. For instance, our result gives a lower bound of Ω((log n)^3 / (log log n)^5) for bipartite graphs arising from Hadamard matrices, such as the Paley-type bipartite graphs. Lower bounds for depth-3 bipartite graph complexity are motivated by two significant applications: (i) a lower bound of n^{Ω(1)} on the depth-3 complexity of an explicit n-vertex bipartite graph would yield superlinear size lower bounds on log-depth Boolean circuits for an explicit function, and (ii) a sufficiently strong lower bound on this measure would give an explicit language outside the class Σ_2^cc of two-party communication complexity as defined by Babai, Frankl, and Simon [BFS]. Our lower bound proof is based on sign-representing polynomials for DNFs and on lower bounds on ranks of ±1 matrices even after being subjected to sign-preserving changes to their entries. For the former, we use a result of Nisan and Szegedy [NS] and an idea from a recent result of Klivans and Servedio [KS]. For the latter, we use a recent remarkable lower bound due to Forster [F1].

15.
We give a lower bound of Ω(n^{(d−1)/2}) on the quantum query complexity of finding a fixed point of a discrete Brouwer function over the grid [n]^d. Our lower bound is nearly tight, as Grover Search can be used to find a fixed point with O(n^{d/2}) quantum queries. Our result establishes a nearly tight bound for the computation of d-dimensional approximate Brouwer fixed points defined by Scarf and by Hirsch, Papadimitriou, and Vavasis. It can be extended to the quantum model for Sperner's Lemma in any dimension: the quantum query complexity of finding a panchromatic cell in a Sperner coloring of a triangulation of a d-dimensional simplex with n^d cells is Ω(n^{(d−1)/2}). For d = 2, this result improves the bound of Ω(n^{1/4}) of Friedl, Ivanyos, Santha, and Verhoeven. More significantly, our result provides a quantum separation of local search and fixed point computation over [n]^d for d ≥ 4. Aaronson's local search algorithm for the grid [n]^d, using Aldous Sampling and Grover Search, makes O(n^{d/3}) quantum queries. Thus, the quantum query model over [n]^d for d ≥ 4 strictly separates these two fundamental search problems.

16.
This paper considers the problem of performing tasks in asynchronous distributed settings. This problem, called Do-All, has been substantially studied in synchronous models, but there is a dearth of efficient algorithms for asynchronous message-passing processors. Do-All can be trivially solved without any communication by an algorithm in which each processor performs all tasks. Assuming p processors and t tasks, this requires work Θ(p·t). Thus, it is important to develop subquadratic solutions (when p and t are comparable) by trading computation for communication. Following the observation that it is not possible to obtain subquadratic work when the message delay d is substantial, e.g., d = Θ(t), this work pursues a message-delay-sensitive approach. Here, the upper bounds on work and communication are given as functions of p, t, and d, the upper bound on message delays; however, the algorithms have no knowledge of d and they cannot rely on the existence of an upper bound on d. This paper presents two families of asynchronous algorithms achieving, for the first time, subquadratic work as long as d = o(t). The first family uses as its basis a shared-memory algorithm, without having to emulate the atomic registers assumed by that algorithm. These deterministic algorithms have work O(t p^ε + p d (t/d)^ε) for any ε > 0. The second family uses specific permutations of tasks, with certain combinatorial properties, to sequence the work of the processors. These randomized (deterministic) algorithms have expected (worst-case) work O(t log p + pd log(2 + t/d)). Another important contribution of this work is the first delay-sensitive lower bound for this problem, which helps explain the behavior of our algorithms: any randomized (deterministic) algorithm has expected (worst-case) work Ω(t + pd log_{d+1} t).

17.
We determine the functions on GF(2)^n which satisfy the propagation criterion of degree n−2, PC(n−2). We subsequently study the propagation criterion of degree ℓ and order k and its extended version EPC. We determine those Boolean functions on GF(2)^n which satisfy PC(ℓ) of order k ≥ n−ℓ−2. We show that none of them satisfies EPC(ℓ) of the same order. We finally give a general construction of nonquadratic functions satisfying EPC(ℓ) of order k. This construction uses the existence of nonlinear, systematic codes with good minimum distances and dual distances (e.g., Kerdock codes and Preparata codes).
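A small brute-force check of the propagation criterion, transcribed directly from its definition (for illustration only; the paper's results are structural, not computational): f satisfies PC(ℓ) if, for every nonzero direction a of Hamming weight at most ℓ, the derivative f(x) ⊕ f(x ⊕ a) takes the value 1 on exactly half of the inputs.

```python
# Brute-force check of the propagation criterion PC(l): for every nonzero a with
# Hamming weight <= l, the derivative x -> f(x) XOR f(x XOR a) must be balanced.

def satisfies_pc(f, n, l):
    for a in range(1, 2 ** n):
        if bin(a).count('1') > l:
            continue
        ones = sum(f(x) ^ f(x ^ a) for x in range(2 ** n))
        if ones != 2 ** (n - 1):
            return False
    return True

# The bent function f(x1,x2,x3,x4) = x1*x2 XOR x3*x4 has balanced derivatives in
# every nonzero direction, so it satisfies PC(l) for every l <= n = 4.
def f(x):
    x1, x2, x3, x4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (x1 & x2) ^ (x3 & x4)

print(satisfies_pc(f, 4, 4))                 # -> True
print(satisfies_pc(lambda x: x & 1, 4, 1))   # a linear function fails PC(1) -> False
```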

18.
For all d ≥ 1 and all e > d, every deterministic multihead e-dimensional Turing machine of time complexity T(n) can be simulated on-line by a deterministic multihead d-dimensional Turing machine in time O(T(n)^{1+1/d−1/e}(log T(n))^{O(1)}). This simulation almost achieves the known lower bound Ω(T(n)^{1+1/d−1/e}) on the time required. The simulation is interpreted in terms of dynamic embeddings among arrays with local access.

19.
Boolean networks provide a simple and intuitive model for gene regulatory networks, but a critical defect is the time required to learn the networks. In recent years, efficient network search algorithms have been developed for the noise-free case and for a limited function class. In general, the conventional algorithm has the high time complexity of O(2^{2^k} m n^{k+1}), where m is the number of measurements, n is the number of nodes (genes), and k is the number of input parents. Here, we suggest a simple and new approach to Boolean networks, and provide a randomized network search algorithm with average time complexity O(m n^{k+1}/(log m)^{k−1}). We show the efficiency of our algorithm via computational experiments, and present optimal parameters. Additionally, we provide tests on yeast expression data. Editor: David Page
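A direct sketch of the exhaustive search whose cost is quoted above (illustrative and under my own naming, not the randomized algorithm proposed in the paper): for one target gene, enumerate candidate parent sets of size k and all 2^{2^k} Boolean functions on them, keeping those consistent with every (state, next-state) measurement.

```python
from itertools import combinations, product

# Exhaustive consistency search for one node of a Boolean network: try every parent
# set of size k and every Boolean function on those parents, and keep the
# (parents, truth table) pairs that agree with all m measurements.  This is the
# O(2^{2^k} * m * n^{k+1}) baseline that the paper improves on.

def consistent_rules(measurements, target, k):
    """measurements: list of (state, next_state) pairs, each a tuple of 0/1 bits."""
    n = len(measurements[0][0])
    rules = []
    for parents in combinations(range(n), k):
        for table in product([0, 1], repeat=2 ** k):          # all 2^(2^k) functions
            ok = all(
                table[int("".join(str(s[p]) for p in parents), 2)] == nxt[target]
                for s, nxt in measurements)
            if ok:
                rules.append((parents, table))
    return rules

# Toy data generated by the rule  x0' = x1 AND x2  (n = 3 genes, k = 2).
states = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 1, 1)]
data = [(s, (s[1] & s[2], 0, 0)) for s in states]
print(consistent_rules(data, target=0, k=2))   # -> [((1, 2), (0, 0, 0, 1))]
```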

20.