Similar Documents
20 similar documents found.
1.
In this paper, we study two interprocedural program-analysis problems—interprocedural slicing and interprocedural dataflow analysis—and present the following results:
  •  Interprocedural slicing is log-space complete for P.
  •  The problem of obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems is P-hard.
  •  Obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems that involve finite sets of dataflow facts (such as the classical “gen/kill” problems) is log-space complete for P.
  These results provide evidence that there do not exist fast (NC-class) parallel algorithms for interprocedural slicing and precise interprocedural dataflow analysis (unless P = NC). That is, it is unlikely that there are algorithms for interprocedural slicing and precise interprocedural dataflow analysis for which the number of processors is bounded by a polynomial in the size of the input, and whose running time is bounded by a polynomial in the logarithm of the size of the input. This suggests that there are limitations on the ability to use parallelism to overcome compiler bottlenecks due to expensive interprocedural-analysis computations.

    2.
    We present a uniform approach to problems involving lines in 3-space. This approach is based on mapping lines in R^3 into points and hyperplanes in five-dimensional projective space (Plücker space); a small sketch of this mapping appears after the list below. We obtain new results on the following problems:
    1. Preprocess n triangles so as to answer efficiently the query: “Given a ray, which is the first triangle hit?” (Ray-shooting problem). We discuss the ray-shooting problem for both disjoint and nondisjoint triangles.
    2. Construct the intersection of two nonconvex polyhedra in an output-sensitive way, with a subquadratic overhead term.
    3. Construct the arrangement of n intersecting triangles in 3-space in an output-sensitive way, with a subquadratic overhead term.
    4. Efficiently detect the first face hit by any ray in a set of axis-oriented polyhedra.
    5. Preprocess n lines (segments) so as to answer efficiently the query “Given two lines, is it possible to move one into the other without crossing any of the initial lines (segments)?” (Isotopy problem). If the movement is possible, produce an explicit representation of it.
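    The Plücker-space mapping underlying this approach can be made concrete in a few lines. The sketch below is our own illustration in Python (the names plucker and side are not from the paper): it maps the line through two points of R^3 to its six Plücker coordinates and evaluates the Plücker inner product, whose sign is the "on which side does one line pass the other" predicate that the point/hyperplane correspondence encodes. It illustrates only the mapping, none of the data structures behind the results above.
```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def plucker(p, q):
    """Plücker coordinates (direction d, moment m) of the line through points p and q in R^3."""
    d = tuple(qi - pi for pi, qi in zip(p, q))
    m = cross(p, q)
    return d + m          # a point in Plücker space (d : m)

def side(l1, l2):
    """Plücker inner product d1·m2 + d2·m1.  It vanishes exactly when the two lines
    are coplanar (they meet or are parallel); otherwise its sign orients line l2
    relative to line l1 -- the hyperplane test used in the line-to-hyperplane mapping."""
    d1, m1, d2, m2 = l1[:3], l1[3:], l2[:3], l2[3:]
    return sum(a*b for a, b in zip(d1, m2)) + sum(a*b for a, b in zip(d2, m1))

if __name__ == "__main__":
    a = plucker((0, 0, 0), (1, 0, 0))   # the x-axis
    b = plucker((0, 1, 0), (0, 1, 1))   # a line parallel to the z-axis, offset in y
    c = plucker((0, 0, 0), (0, 0, 1))   # the z-axis, which meets the x-axis at the origin
    print(side(a, b))   # nonzero: the lines are skew
    print(side(a, c))   # 0: the lines intersect
```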

    3.
    Computing the Cauchy index of a rational fraction Q/P between a and b is important in most of the basic algorithms of real algebraic geometry: real root counting, exact sign determination, the Routh–Hurwitz problem, the signature of a Hankel matrix. One way to compute the Cauchy index of Q/P is to compute some variant of the Euclidean remainder sequence of P and Q and to compute the signs of the successive remainders evaluated at a and b. So, we want to compute efficiently the signs of the polynomials in the Euclidean remainder sequence evaluated at a given point. In the first part of the paper, we are only interested in counting arithmetic operations and comparisons. For input polynomials of degrees bounded by d we design an algorithm with complexity $O(\mathcal{M}(d)\log (d + 1))$ for computing the Cauchy index, based on the strategy of a known efficient algorithm for computing a GCD; here $\mathcal{M}(d)$ denotes the cost of multiplying polynomials of degree at most d. One has $\mathcal{M}(d) = O(d\log (d + 1))$ if the coefficient field allows Fourier transform, and $\mathcal{M}(d) = O(d\log (d + 1)\log \log (d + 2))$ otherwise. In the second part of the paper we are interested in the bit complexity when the coefficients of P and Q are integers of size τ. A better way to compute the Cauchy index is then to evaluate the values of some variant of the subresultants rather than the values of the remainders. For polynomials of bit size τ we design an algorithm with bit complexity $O(\mathcal{M}(d,\sigma )\log (d + 1))$ with σ = O(dτ), where $\mathcal{M}(d,\sigma ) = O(d\sigma \cdot \log (d\sigma ) \cdot \log \log (d\sigma ))$ is Schönhage’s bound for multiplication of integer polynomials of degrees bounded by d and bit sizes bounded by σ in the multitape Turing machine model. So our bound is $O(d^{2}\tau \cdot \log(d\tau) \cdot \log\log(d\tau) \cdot \log(d+1))$. The same bound holds for computing the signature of a regular Hankel matrix. Our analysis shows a new and natural exact divisibility for subresultants.
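    For orientation, the classical route described above can be sketched naively: build the signed (Sturm-type) remainder sequence of P and Q, evaluate it at a and b, and take the difference of sign-variation counts, which by the Sturm–Tarski theorem equals the Cauchy index Ind_a^b(Q/P). The Python sketch below (function names ours; quadratic time, floating-point arithmetic) only illustrates this definition — it is neither the $O(\mathcal{M}(d)\log(d+1))$ algorithm nor the subresultant-based bit-complexity algorithm of the paper.
```python
def poly_eval(p, x):
    """Evaluate a polynomial given as a coefficient list, highest degree first."""
    v = 0.0
    for c in p:
        v = v * x + c
    return v

def poly_rem(a, b):
    """Remainder of a divided by b (coefficient lists, highest degree first)."""
    a = list(a)
    while len(a) >= len(b) and any(abs(c) > 1e-9 for c in a):
        if abs(a[0]) < 1e-9:
            a.pop(0)
            continue
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)
    return a

def sign_variations(values):
    signs = [v for v in values if abs(v) > 1e-9]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def cauchy_index(P, Q, a, b):
    """Ind_a^b(Q/P) via the signed remainder sequence P, Q, -rem(P, Q), ...
    (Sturm-Tarski); an O(d^2) illustration, not the paper's algorithm."""
    seq = [list(P), list(Q)]
    while True:
        r = poly_rem(seq[-2], seq[-1])
        while r and abs(r[0]) <= 1e-9:   # drop (numerically) vanishing leading coefficients
            r = r[1:]
        if not r:                        # zero remainder: the sequence ends
            break
        seq.append([-c for c in r])
    return (sign_variations([poly_eval(p, a) for p in seq])
            - sign_variations([poly_eval(p, b) for p in seq]))

if __name__ == "__main__":
    # P = x^2 - 1, Q = P' = 2x: the Cauchy index of P'/P on (a, b) counts
    # the real roots of P in (a, b), here 2.
    print(cauchy_index([1, 0, -1], [2, 0], -2, 2))   # 2
```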

    4.
    Trial and Error     
    A pac-learning algorithm is -space bounded if it stores at most  examples from the sample at any time. We characterize the -space learnable concept classes. For this purpose we introduce the compression parameter of a concept class and design our Trial and Error learning algorithm. We show: a concept class is -space learnable if and only if its compression parameter is at most . This learning algorithm does not produce a hypothesis consistent with the whole sample, as previous approaches do, e.g. that of Floyd, who presents consistent space-bounded learning algorithms but has to restrict herself to very special concept classes. On the other hand, our algorithm needs large samples; the compression parameter appears as an exponent in the sample size. We present several examples of polynomial-time space-bounded learnable concept classes: all intersection-closed concept classes with finite VC-dimension, convex -gons, halfspaces, and unions of triangles. We further relate the compression parameter to the VC-dimension, and discuss variants of this parameter.

    5.
    We give unified and simplified algorithms and proofs for three results on channel routing in knock-knee mode. Let P be a channel routing problem with density d_max.
    1. [Rivest/Baratz/Miller, Preparata/Lipski]. If all nets in P are two-terminal nets, then d_max tracks suffice.
    2. [Preparata/Sarrafzadeh]. If all nets in P are two- or three-terminal nets, then [3d_max/2] tracks suffice.
    3. [Sarrafzadeh/Preparata]. 2d_max − 1 tracks always suffice.
    In all three cases a solution can be found in linear time; this is an improvement in case (2). (A short sketch of computing the density d_max follows.)
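    To make the parameter d_max concrete: the density of a channel routing problem is the largest number of nets whose terminal spans cross a common vertical cut of the channel. The sweep sketch below is our own illustration in Python (nets are encoded simply as lists of terminal columns, which is not the paper's notation).
```python
from collections import defaultdict

def channel_density(nets):
    """Channel density d_max: the maximum, over all vertical cuts, of the number of
    nets whose span (leftmost to rightmost terminal) crosses that cut.
    `nets` is a list of lists of terminal columns (two or more terminals per net)."""
    events = defaultdict(int)
    for terminals in nets:
        lo, hi = min(terminals), max(terminals)
        events[lo] += 1      # the net becomes active at its leftmost terminal
        events[hi] -= 1      # ...and stops crossing cuts after its rightmost terminal
    active, best = 0, 0
    for col in sorted(events):
        active += events[col]
        best = max(best, active)
    return best

if __name__ == "__main__":
    # three two-terminal nets spanning [1,4], [2,6] and [3,5]:
    # density 3 at the cut between columns 3 and 4
    print(channel_density([[1, 4], [2, 6], [3, 5]]))   # 3
```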

    6.
    In the Parameterized Connected Dominating Set problem the input consists of a graph G and a positive integer k, and the question is whether there is a set S of at most k vertices in G—a connected dominating set of G—such that (i) S is a dominating set of G, and (ii) the subgraph G[S] induced by S is connected; the parameter is k. The underlying decision problem is a basic connectivity problem which is long known to be NP-complete, and it has been extensively studied using several algorithmic approaches. Parameterized Connected Dominating Set is W[2]-hard, and therefore it is unlikely (Downey and Fellows, Parameterized Complexity, Springer, 1999) that the problem has fixed-parameter tractable (FPT) algorithms or polynomial kernels in graphs in general. We investigate the effect of excluding short cycles, as subgraphs, on the kernelization complexity of Parameterized Connected Dominating Set. The girth of a graph G is the length of a shortest cycle in G. It turns out that the Parameterized Connected Dominating Set problem is hard on graphs with small cycles, and becomes progressively easier as the girth increases. More precisely, we obtain the following kernelization landscape: Parameterized Connected Dominating Set
    • does not have a kernel of any size on graphs of girth three or four (since the problem is W[2]-hard);
  • admits a kernel of size $2^{k}k^{3k}$ on graphs of girth at least five;
    • has no polynomial kernel (unless the Polynomial Hierarchy collapses to the third level) on graphs of girth at most six, and,
    • has a cubic ( $\mathcal {O}(k^{3})$ ) vertex kernel on graphs of girth at least seven.
    While there is a large and growing collection of parameterized complexity results available for problems on graph classes characterized by excluded minors, our results add to the very few known in the field for graph classes characterized by excluded subgraphs. (A brute-force check of the two defining conditions of a connected dominating set is sketched below.)
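    As referenced above, a brute-force check of the two defining conditions makes the problem statement concrete. The sketch below is our own Python illustration (a graph is a dict mapping each vertex to its set of neighbours; all names are ours): it tests domination and the connectivity of G[S], then tries all candidate sets of size at most k. It is exponential and entirely unrelated to the kernelization results.
```python
from itertools import combinations
from collections import deque

def is_connected_dominating_set(adj, S):
    """Conditions (i) and (ii): S dominates every vertex and G[S] is connected."""
    S = set(S)
    if not S:
        return len(adj) == 0
    # (i) domination: every vertex is in S or has a neighbour in S
    if any(v not in S and not (adj[v] & S) for v in adj):
        return False
    # (ii) connectivity of the induced subgraph G[S], by BFS restricted to S
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] & S:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == S

def has_cds_of_size_at_most(adj, k):
    """Brute force over all vertex sets of size <= k (exponential; illustration only)."""
    vertices = list(adj)
    return any(is_connected_dominating_set(adj, C)
               for r in range(1, k + 1)
               for C in combinations(vertices, r))

if __name__ == "__main__":
    # a path on 5 vertices (no cycles at all, so every girth condition holds)
    path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    print(has_cds_of_size_at_most(path, 2))   # False
    print(has_cds_of_size_at_most(path, 3))   # True: {2, 3, 4}
```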

    7.
    We report progress on the NL versus UL problem.
  • We show that counting the number of s-t paths in graphs where the number of s-v paths for any v is bounded by a polynomial can be done in FUL, the unambiguous log-space function class. Several new upper bounds follow from this, including $\mathrm{ReachFewL} \subseteq \mathrm{UL}$ and $\mathrm{LFew} \subseteq \mathrm{UL}^{\mathrm{FewL}}$.
  • We investigate the complexity of min-uniqueness—a central notion in studying the NL versus UL problem. In this regard we revisit the class OptL[log n] and introduce UOptL[log n], an unambiguous version of OptL[log n]. We investigate the relation between UOptL[log n] and other existing complexity classes.
  • We consider the unambiguous hierarchies over UL and UOptL[log n]. We show that the hierarchy over UOptL[log n] collapses. This implies that $\mathrm{ULH} \subseteq \mathrm{L}^{\mathrm{promiseUL}}$, thus collapsing the UL hierarchy.
  • We show that the reachability problem over graphs embedded on 3 pages is complete for NL. This contrasts with the reachability problem over graphs embedded on 2 pages, which is log-space equivalent to the reachability problem in planar graphs and hence is in UL.

    8.
    Efficiently extendible mappings for balanced data distribution
    In data storage applications, a large collection of consecutively numbered data “buckets” is often mapped to a relatively small collection of consecutively numbered storage “bins.” For example, in parallel database applications, buckets correspond to hash buckets of data and bins correspond to database nodes. In disk array applications, buckets correspond to logical tracks and bins correspond to physical disks in an array. Measures of the “goodness” of a mapping method include:
    1. The time (number of operations) needed to compute the mapping.
    2. The storage needed to store a representation of the mapping.
    3. The balance of the mapping, i.e., the extent to which all bins receive the same number of buckets.
    4. The cost of relocation, that is, the number of buckets that must be relocated to a new bin if a new mapping is needed due to an expansion of the number of bins or the number of buckets.
    One contribution of this paper is to give a new mapping method, the Interval-Round-Robin (IRR) method. The IRR method has optimal balance and relocation cost, and its time complexity and storage requirements compare favorably with known methods. Specifically, if m is the number of times that the number of bins and/or buckets has increased, then the time complexity is O(log m) and the storage is O(m^2). Another contribution of the paper is to identify the concept of a history-independent mapping, meaning informally that the mapping does not “remember” the past history of expansions to the number of buckets and bins, but only the current number of buckets and bins. Thus, such mappings require very little information to be stored. Assuming that balance and relocation are optimal, we prove that history-independent mappings are possible if the number of buckets is fixed (so only the number of bins can increase), but not possible if the number of bins and buckets can both increase.
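    To illustrate the relocation measure, the sketch below uses the naive history-independent mapping bucket i ↦ i mod (number of bins). It is perfectly balanced and needs O(1) time and storage, but when the number of bins grows almost every bucket changes bins, whereas an optimal mapping would relocate only about (number of buckets)/(new number of bins) of them. This Python sketch is our own baseline for comparison and is not the paper's IRR method.
```python
def round_robin(bucket, bins):
    """Naive history-independent mapping: bucket i -> bin (i mod bins)."""
    return bucket % bins

def relocation_cost(num_buckets, old_bins, new_bins):
    """Number of buckets whose bin changes when the number of bins is increased."""
    return sum(1 for b in range(num_buckets)
               if round_robin(b, old_bins) != round_robin(b, new_bins))

if __name__ == "__main__":
    n = 1000
    # Growing from 4 to 5 bins: an optimal rebalancing would move only n/5 = 200 buckets,
    # but the naive modulo mapping moves 800 of them.
    print(relocation_cost(n, 4, 5))   # 800
```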

    9.
    Stable semantics for disjunctive programs
    We introduce the stable model semantics for disjunctive logic programs and deductive databases, which generalizes the stable model semantics defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used, we obtain the disjunctive stable semantics or the partial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties (a brute-force check of total stable models is sketched after this list):
  •  For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with the stable (respectively, partial stable) semantics.
  •  For normal programs, the partial disjunctive stable semantics also coincides with the well-founded semantics.
  •  For locally stratified disjunctive programs both (total and partial) disjunctive stable semantics coincide with the perfect model semantics.
  •  The partial disjunctive stable semantics can be generalized to the class of all disjunctive logic programs.
  •  Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use of classical negation.
  •  After translation of the program P into a suitable autoepistemic theory \( \hat P \), the disjunctive (respectively, partial disjunctive) stable semantics of P coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of \( \hat P \).
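    As referenced above, the total (2-valued) disjunctive stable semantics can be checked by brute force on tiny propositional programs: M is a stable model exactly when M is a minimal model of the Gelfond–Lifschitz reduct P^M. The Python sketch below uses our own encoding of a rule as a triple (head atoms, positive body atoms, negative body atoms); it only illustrates the definition.
```python
from itertools import chain, combinations

def powerset(atoms):
    atoms = list(atoms)
    return chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))

def satisfies(model, reduct_rules):
    # a reduct rule is (head, positive_body); the rule is satisfied unless the body
    # holds in the model while no head atom does
    return all(not pos <= model or (head & model) for head, pos in reduct_rules)

def reduct(program, model):
    """Gelfond-Lifschitz reduct: drop rules whose negative body meets the model,
    then delete the negative bodies of the remaining rules."""
    return [(head, pos) for head, pos, neg in program if not (neg & model)]

def stable_models(program, atoms):
    """All total disjunctive stable models, by brute force over candidate models."""
    result = []
    for cand in powerset(atoms):
        M = frozenset(cand)
        red = reduct(program, M)
        if not satisfies(M, red):
            continue
        # M must be a *minimal* model of the reduct
        if any(satisfies(frozenset(sub), red)
               for r in range(len(M))
               for sub in combinations(M, r)):
            continue
        result.append(set(M))
    return result

if __name__ == "__main__":
    # the program  { a ∨ b.   c :- a, not b. }
    prog = [({"a", "b"}, set(), set()),
            ({"c"}, {"a"}, {"b"})]
    print(stable_models(prog, {"a", "b", "c"}))   # the two stable models: {'b'} and {'a', 'c'}
```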

    10.
    We consider nonlinear boundary value problems with arbitrarily many solutions u ∈ C^2[a, b]. In this paper an algorithm will be established for a priori bounds \(\bar u,\bar d \in C[a,b]\) with the following properties:
    1. For every solution u of the nonlinear problem we obtain $$-\bar u(x) \leqslant u(x) \leqslant \bar u(x), \qquad -\bar d(x) \leqslant u'(x) \leqslant \bar d(x)$$ for any x ∈ [a, b].
    2. The bounds \(\bar u\) and \(\bar d\) are defined by the use of the functions exp, sin and cos.
    3. We use neither the knowledge of solutions nor the number of solutions.

    11.
    The concept of a translation is fundamental to any theory of compiling. Formally, a translation is any set of pairs of words. Classes of finitely describable translations are considered in general, from the point of view of balloon automata [17, 18, 19]. A translation can be defined by a transducer, a device with an input tape and an output terminal. If, with input x, the string y appears at the output terminal, then (x, y) is in the translation defined by the transducer. One can also define a translation by a two-input-tape recognizer. If x and y are placed on the two tapes, the recognizer tells if (x, y) is in the defined translation. One can define closed classes of transducers and recognizers by:
    1. restricting the way in which infinite storage may be used (pushdown structure, stack structure, etc.),
    2. allowing the finite control to be nondeterministic or deterministic,
    3. allowing one way or two way motion on the input tapes.
    We have some results on classes of translations which can be categorized roughly into three types.
    1. Translations defined by certain classes of transducers and recognizers are equivalent.
    2. Translations of a given class are sometimes closed under composition and decomposition with a finite memory translation (gsm mapping).
    3. A nondeterministically defined translation can be expressed as the composition of a finitely defined translation and a related deterministically defined translation in many cases.
    In addition, if C is a class of translations, then one can write a compiler-compiler to render any translation T in C if and only if the following question is solvable: For any translation T in C and string x, does there exist a y such that (x, y) is in T? We shall show that, in general, the decidability of this question is equivalent to the decidability of one or more questions from automata theory, depending upon the type of devices defining the class C.
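    A finite-memory translation (gsm mapping) and its composition with another translation — the closure property mentioned in point 2 — fit in a few lines. The encoding below (a deterministic gsm as a start state plus a transition/output table; all names are ours) is only an illustration and makes no claim about the constructions discussed above.
```python
def run_gsm(gsm, word):
    """Run a deterministic gsm given as (start_state, delta) with
    delta[(state, symbol)] = (next_state, output_string).
    Returns the output word, or None if the translation is undefined on this input."""
    start, delta = gsm
    state, out = start, []
    for ch in word:
        if (state, ch) not in delta:
            return None
        state, piece = delta[(state, ch)]
        out.append(piece)
    return "".join(out)

def compose(g1, g2):
    """The composed translation {(x, z) : (x, y) in T1 and (y, z) in T2},
    realised by running the two machines in sequence."""
    def translate(word):
        y = run_gsm(g1, word)
        return None if y is None else run_gsm(g2, y)
    return translate

if __name__ == "__main__":
    # g1 doubles every 'a' and erases every 'b'; g2 replaces 'a' by '01'
    g1 = (0, {(0, 'a'): (0, 'aa'), (0, 'b'): (0, '')})
    g2 = (0, {(0, 'a'): (0, '01')})
    print(run_gsm(g1, 'aba'))       # 'aaaa'
    print(compose(g1, g2)('aba'))   # '01010101'
```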

    12.
    An atomic snapshot memory object in shared memory systems enables a set of processes, called scanners, to obtain a consistent picture of the shared memory while other processes, called updaters, keep updating memory locations concurrently. In this paper we present two conversion methods of snapshot implementations. Using the first conversion method we obtain a new snapshot implementation in which the scan operation has linear time complexity and the time complexity of the update operation becomes the sum of the time complexities of the original implementation. Applying the second conversion method yields similar results, where in this case the time complexity of the update protocol becomes linear. Although our conversion methods use unbounded space, their space complexity can be bounded using known techniques. One of the most intriguing open problems in distributed wait-free computing is the existence of a linear-time implementation of this object. Using our conversion methods and known constructions we obtain the following results:
  •  Consider a system of n processes, each an updater and a scanner. We present an implementation in which the time complexity of either the update or the scan operation is linear, while the time complexity of the second operation is O(n log n).
  •  We present an implementation with linear time complexity when the number of either updaters or scanners is O(n/log n), where n is the total number of processes.
  •  We present an implementation with amortized linear time complexity when one of the protocols (either update or scan) is executed significantly more often than the other protocol.

    13.
    In this paper we study parallel algorithms for the Mesh-of-Processors architecture to solve visibility and related separability problems for sets of simple polygons in the plane. In particular, we present the following algorithms:
  •  An O(\(\sqrt N\)) time algorithm for computing, on a Mesh-of-Processors of size N, the visibility polygon from a point located in an N-vertex polygon, possibly with holes.
  •  O(\(\sqrt N\)) time algorithms for computing, on a Mesh-of-Processors of size N, the set of all points on the boundary of an N-vertex polygon P which are visible in a given direction d, as well as the visibility hull of P for a given direction d.
  •  An O(\(\sqrt N\)) time algorithm for detecting, on a Mesh-of-Processors of size 2N, whether two N-vertex polygons are separable in a given direction, and an O(\(\sqrt {MN}\)) time algorithm for detecting, on a Mesh-of-Processors of size MN, whether M N-vertex polygons are sequentially separable in a given direction.
  All proposed algorithms are asymptotically optimal (for the Mesh-of-Processors) with respect to time and number of processors.

    14.
    This paper presents a kernel language, KLND, on the basis of analysing the kernel-language requirements of new generation computer systems. These requirements are: the ability of knowledge processing, the parallelism, the elegant mathematical properties of the computation model which is appropriate for working as the basis of the novel architecture design, and the suitability for writing large-scale software. The main features of KLND are as follows: 1. several new language concepts; 2. the modularity; 3. the unification of logical and functional programming styles; 4. the exploitation of the parallelism; 5. the introduction of the type concept; 6. the introduction of the storage concept.

    15.
    We give a self-reduction for the Circuit Evaluation problem (CircEval) and prove the following consequences.
    1. Amplifying size–depth lower bounds. If CircEval has Boolean circuits of $n^{k}$ size and $n^{1-\delta}$ depth for some k and δ, then for every ${\epsilon > 0}$, there is a δ′ > 0 such that CircEval has circuits of ${n^{1 + \epsilon}}$ size and ${n^{1- \delta^{\prime}}}$ depth. Moreover, the resulting circuits require only ${\tilde{O}(n^{\epsilon})}$ bits of non-uniformity to construct. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
    2. Lower bounds for quantified Boolean formulas. Let c, d > 1 and e < 1 satisfy c < (1 − e d)/d. Either the problem of recognizing valid quantified Boolean formulas (QBF) is not solvable in TIME[$n^{c}$], or the Circuit Evaluation problem cannot be solved with circuits of $n^{d}$ size and $n^{e}$ depth. This implies unconditional polynomial-time uniform circuit lower bounds for solving QBF. We also prove that QBF does not have $n^{c}$-time uniform NC circuits, for all c < 2.

    16.
    The general specifications and design for a High-Speed General Information Management System, HSGIMS, to serve as the basis for a Global Information Network are given. Some of the key specifications that have been confirmed in experiments with a prototype of the HSGIMS are:
    1. Information (or data) and Question-type (or logical data) independence.
    2. Very small bounded search times that are independent of the amount of information that is managed and can be computed exactly.
    3. A fool-proof security system that can be used to protect databases against viruses and can also be easily invoked to deny unauthorized access by users.
    4. Efficient use of all storage and communications resources.

    17.
    We strengthen a previously known connection between the size complexity of two-way finite automata (2FAs) and the space complexity of Turing machines (TMs). Specifically, we prove that
  •  every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ s if and only if NL ⊆ L/poly, and
  •  every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ 2^s if and only if NLL ⊆ LL/polylog.
  Here, 2DFAs and 2NFAs are the deterministic and nondeterministic 2FAs, NL and L/poly are the standard classes of languages recognizable in logarithmic space by nondeterministic TMs and by deterministic TMs with access to polynomially long advice, and NLL and LL/polylog are the corresponding complexity classes for space O(log log n) and advice length poly(log n). Our arguments strengthen and extend an old theorem by Berman and Lingas and can be used to obtain variants of the above statements for other modes of computation or other combinations of bounds for the input length, the space usage, and the length of advice.

    18.
    We use algorithmic tools for graphs of small treewidth to address questions in complexity theory. For our main construction, we prove that multiplicatively disjoint arithmetic circuits of size $n^{O(1)}$ and treewidth k can be simulated by bounded fan-in arithmetic formulas of depth $O(k^{2}\log n)$. From this we derive an analogous statement for syntactically multilinear arithmetic circuits, which strengthens the central theorem of M. Mahajan and B.V.R. Rao (Proc. 33rd International Symposium on Mathematical Foundations of Computer Science, vol. 5162, pp. 455–466, 2008). We show our main construction has the following three applications:
    • Bounded-width arithmetic circuits of size $n^{O(1)}$ can be balanced to depth $O(\log n)$, provided chains of iterated multiplication in the circuit are of length O(1).
    • Boolean bounded fan-in circuits of size $n^{O(1)}$ and treewidth k can be simulated by bounded fan-in formulas of depth $O(k^{2}\log n)$. This strengthens, in the non-uniform setting, the known inclusion $SC^{0} \subseteq NC^{1}$.
    • We demonstrate treewidth-restricted cases of Directed-Reachability and the Circuit Value Problem that can be solved in LogDCFL.
    We also give a construction showing, for both arithmetic and Boolean circuits, that any circuit of size $n^{O(1)}$ and treewidth $O(\log^{i} n)$ can be simulated by a circuit of width $O(\log^{i+1} n)$ and size $n^{c}$, where c = O(1) if i = 0, and c = O(log log n) otherwise.

    19.
    Every Boolean function may be represented as a real polynomial. In this paper, we characterize the degree of this polynomial in terms of certain combinatorial properties of the Boolean function. Our first result is a tight lower bound of Ω(log n) on the degree needed to represent any Boolean function that depends on n variables. Our second result states that for every Boolean function f, the following measures are all polynomially related (a short degree-computation sketch follows the list):
  •  The decision tree complexity of f.
  •  The degree of the polynomial representing f.
  •  The smallest degree of a polynomial approximating f in the L_max norm.
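    As referenced above, the degree of the unique multilinear real polynomial representing f can be read off from the coefficients c_S = Σ_{T⊆S} (−1)^{|S|−|T|} f(1_T): the degree is the size of the largest S with c_S ≠ 0. The Python sketch below (function names ours; exponential in n, for illustration only) computes this degree by brute force.
```python
from itertools import chain, combinations

def subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def polynomial_degree(f, n):
    """Degree of the unique multilinear real polynomial representing the Boolean
    function f on n variables (f takes a tuple of n bits).  The coefficient of the
    monomial prod_{i in S} x_i is sum over T subseteq S of (-1)^(|S|-|T|) * f(1_T)."""
    def indicator(T):
        return tuple(1 if i in T else 0 for i in range(n))
    deg = 0
    for S in subsets(range(n)):
        coeff = sum((-1) ** (len(S) - len(T)) * f(indicator(set(T))) for T in subsets(S))
        if coeff != 0:
            deg = max(deg, len(S))
    return deg

if __name__ == "__main__":
    n = 3
    OR = lambda x: int(any(x))     # represented by 1 - (1-x_1)...(1-x_n): degree n
    XOR = lambda x: sum(x) % 2     # parity also has degree n over the reals
    print(polynomial_degree(OR, n), polynomial_degree(XOR, n))   # 3 3
```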

    20.
    We present three new approximation algorithms with improved constant ratios for selecting n points in n disks such that the minimum pairwise distance among the points is maximized.
    1. A very simple O(n log n)-time algorithm with ratio 0.511 for disjoint unit disks.
    2. An LP-based algorithm with ratio 0.707 for disjoint disks of arbitrary radii that uses a linear number of variables and constraints, and runs in polynomial time.
    3. A hybrid algorithm with ratio either 0.4487 or 0.4674 for (not necessarily disjoint) unit disks that uses an algorithm of Cabello in combination with either the simple O(n log n)-time algorithm or the LP-based algorithm.
    The LP algorithm can be extended for disjoint balls of arbitrary radii in R^d, for any (fixed) dimension d, while preserving the features of the planar algorithm. The algorithm introduces a novel technique which combines linear programming and projections for approximating Euclidean distances. The previous best approximation ratio for dispersion in disjoint disks, even when all disks have the same radius, was 1/2. Our results give a positive answer to an open question raised by Cabello, who asked whether the ratio 1/2 could be improved.
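    The objective being approximated — place one point in each disk so that the smallest pairwise distance is as large as possible — is easy to state in code. The Python sketch below (names ours) only checks feasibility and evaluates the dispersion of a candidate selection; it implements none of the 0.511, 0.707 or hybrid algorithms above.
```python
from math import hypot
from itertools import combinations

def dispersion(points):
    """Minimum pairwise Euclidean distance among the selected points."""
    return min(hypot(px - qx, py - qy)
               for (px, py), (qx, qy) in combinations(points, 2))

def feasible(points, disks):
    """Each selected point must lie in its own disk; a disk is (cx, cy, r)."""
    return all(hypot(x - cx, y - cy) <= r
               for (x, y), (cx, cy, r) in zip(points, disks))

if __name__ == "__main__":
    # three disjoint unit disks on a line: picking the centres gives dispersion 3.0,
    # pushing the two outer points away from the middle improves it to 4.0
    disks = [(0, 0, 1), (3, 0, 1), (6, 0, 1)]
    centres = [(0, 0), (3, 0), (6, 0)]
    better = [(-1, 0), (3, 0), (7, 0)]
    for pts in (centres, better):
        print(feasible(pts, disks), round(dispersion(pts), 3))   # True 3.0 / True 4.0
```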
