Similar Documents
20 similar documents found (search time: 15 ms)
1.
We study the problem of computing leveled tree drawings, i.e., straight-line drawings of trees where the vertices have distinct preassigned y-coordinates. Our optimization goal is to maximize the crossing angle resolution (i.e., the minimum angle formed by any two crossing edges) and/or the vertex angle resolution (i.e., the minimum angle formed by two edges incident to the same vertex) of the drawing. We provide tight and almost tight worst case bounds for the crossing angle resolution and for the total angle resolution (i.e., the minimum of crossing and vertex angle resolution), respectively.
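For orientation, the three measures named above can be written out explicitly for a leveled drawing Γ of a tree T = (V, E); the shorthand CR (crossing angle resolution), VR (vertex angle resolution) and TR (total angle resolution) below is ours, not the paper's notation.

```latex
% Our shorthand for the three measures named in the abstract (not the paper's notation).
\[
  \mathrm{CR}(\Gamma) = \min_{\substack{e_1,e_2 \in E\\ e_1 \text{ crosses } e_2}} \angle(e_1,e_2),
  \qquad
  \mathrm{VR}(\Gamma) = \min_{v \in V}\ \min_{\substack{e_1 \neq e_2\\ e_1,e_2 \text{ incident to } v}} \angle_v(e_1,e_2),
  \qquad
  \mathrm{TR}(\Gamma) = \min\{\mathrm{CR}(\Gamma),\,\mathrm{VR}(\Gamma)\},
\]
where $\angle(e_1,e_2)$ is the smaller angle at the crossing point of $e_1$ and $e_2$, and
$\angle_v(e_1,e_2)$ is the angle the two edges form at their common vertex $v$.
```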

2.
We show that the Player-Adversary game of Pudlák and Impagliazzo [A lower bound for DLL algorithms for k-SAT, in: Proc. 11th Annual ACM-SIAM Symposium on Discrete Algorithms, 2000, pp. 128-136] played over CNF propositional formulas gives an exact characterization of the space needed in treelike resolution refutations. This characterization is purely combinatorial and independent of the notion of resolution. We use this characterization to give for the first time a separation between the space needed in treelike and general resolution.

3.
We introduce the λ-coiteration schema for a distributive law λ of a functor T over a functor F. Under certain conditions it can be shown to uniquely characterise functions into the carrier of a final F-coalgebra, generalising the basic coiteration schema as given by finality. The duals of primitive recursion and course-of-value iteration, which are known extensions of coiteration, arise as instances of our framework. One can furthermore obtain schemata justifying recursive specifications that involve operators such as addition of power series, regular operators on languages, or parallel and sequential composition of processes. Next, the same type of distributive law λ is used to generalise coinductive proof techniques. To this end, we introduce the notion of a λ-bisimulation relation. It specialises to what could be called bisimulation up-to-equality or bisimulation up-to-context for contexts built from operators of the type mentioned above. We state that every such relation is contained in some larger conventional bisimulation and demonstrate that this principle leads to simpler bisimilarity proofs using less complex relations.
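As a concrete illustration of the kind of specification this schema justifies (here: pointwise addition of streams, i.e., of power-series coefficients), the following Python sketch implements plain coiteration (unfold) into the final coalgebra of F(X) = A × X and then defines stream addition by a one-step rule on pairs of states. All names (Stream, unfold, zip_plus) are ours; the sketch does not reproduce the paper's categorical construction or its distributive law λ.

```python
# Illustrative sketch only: streams as the final coalgebra of F(X) = A x X,
# with coiteration (unfold) producing a stream from a one-step coalgebra.
# Names (Stream, unfold, zip_plus) are ours, not from the paper.

class Stream:
    """A lazy infinite stream given by a coalgebra step: state -> (head, next_state)."""
    def __init__(self, step, state):
        self.step, self.state = step, state

    def head(self):
        return self.step(self.state)[0]

    def tail(self):
        return Stream(self.step, self.step(self.state)[1])

    def take(self, n):
        s, out = self, []
        for _ in range(n):
            out.append(s.head())
            s = s.tail()
        return out


def unfold(step, seed):
    """Basic coiteration: the unique map into the final coalgebra of F(X) = A x X."""
    return Stream(step, seed)


# nats = 0, 1, 2, ...  defined by the coalgebra n |-> (n, n + 1)
nats = unfold(lambda n: (n, n + 1), 0)


def zip_plus(s, t):
    """Pointwise addition of streams, specified coiteratively on pairs of states.
    This is the shape of definition (addition of power series) that the
    lambda-coiteration schema is meant to justify."""
    def step(pair):
        a, b = pair
        return (a.head() + b.head(), (a.tail(), b.tail()))
    return unfold(step, (s, t))


if __name__ == "__main__":
    print(zip_plus(nats, nats).take(5))  # [0, 2, 4, 6, 8]
```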

4.
Information Sciences, 2007, 177(10): 2152-2166
Computer security policies specify conditions for permissions to access various computer resources and information. Merging two security policies is needed when two organizations, together with their computer systems, merge into one entity, as in a corporate business acquisition. We propose a graph-theoretic method for merging the role/object hierarchies of two security policies. The formulation of merged hierarchies is based on the graph minor relation in graph theory. Ideally, the merged role hierarchy should contain both participating role hierarchies as graph minors, and similarly for the object hierarchy. We show that one can decide in polynomial time whether this ideal case is possible when the participating hierarchies are trees. We also show that, in case the merged hierarchy exists, it can be constructed in polynomial time. Algorithms for detecting the feasibility of an ideal merged tree and for constructing the merged tree are presented. Our hierarchy/tree merge method is also applicable to the integration of heterogeneous databases with generalization hierarchies.
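The Python sketch below only illustrates the data involved in such a merge: two role hierarchies given as parent maps are combined by taking the union of their edges, and the result is checked to still be a tree. It is a naive baseline for intuition, not the paper's minor-based method; all names and the parent-map format are hypothetical.

```python
# Naive illustration only: merge two role hierarchies (rooted trees given as
# parent maps keyed by role name) by taking the union of their edges, then
# check whether the result is still a tree.  The paper's method is based on
# graph minors and is more general; this sketch just shows the data involved.

def merge_hierarchies(parent_a, parent_b):
    """parent_x maps each role to its parent role (the root maps to None)."""
    merged = dict(parent_a)
    for role, parent in parent_b.items():
        if role in merged and merged[role] not in (None, parent):
            raise ValueError(f"conflict: {role} has two different parents")
        merged.setdefault(role, parent)
    return merged


def is_tree(parent):
    roots = [r for r, p in parent.items() if p is None]
    if len(roots) != 1:
        return False
    for role in parent:                      # every role must reach the root
        seen, cur = set(), role
        while parent.get(cur) is not None:
            if cur in seen:
                return False                 # cycle
            seen.add(cur)
            cur = parent[cur]
    return True


a = {"employee": None, "engineer": "employee", "manager": "employee"}
b = {"employee": None, "auditor": "employee"}
merged = merge_hierarchies(a, b)
assert is_tree(merged)
```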

5.
Many algorithms for Boolean satisfiability (SAT) work within the framework of resolution as a proof system, and thus on unsatisfiable instances they can be viewed as attempting to find proofs by resolution. However, it has been known since the 1980s that every resolution proof of the pigeonhole principle (PHP_n^m), suitably encoded as a CNF instance, includes exponentially many steps [18]. Therefore SAT solvers based upon the DLL procedure [12] or the DP procedure [13] must take exponential time on these instances. Polynomial-sized proofs of the pigeonhole principle exist for different proof systems, but general-purpose SAT solvers often remain confined to resolution; this is consistent with empirical evidence. Previously, we introduced the Compressed-BFS algorithm to solve the SAT decision problem. In an earlier work [27], an implementation of a Compressed-BFS algorithm empirically solved these instances in Θ(n^4) time. Here, we add to this claim and show analytically that these instances are solvable in polynomial time by Compressed-BFS. Thus the class of tautologies efficiently provable by Compressed-BFS differs from that of any resolution-based procedure. We hope that the details of our complexity analysis shed some light on the proof system implied by Compressed-BFS. Our proof focuses on structural invariants within the compressed data structure that stores collections of sets of open clauses during the Compressed-BFS algorithm. We bound the size of this data structure, as well as the overall memory, by a polynomial. We then use this to show that the overall runtime is bounded by a polynomial.
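For reference, PHP_n^m has a standard CNF encoding, generated by the sketch below (the variable numbering scheme is ours). With m = n + 1 these are the instances on which resolution-based solvers need exponential time.

```python
# Standard CNF encoding of the pigeonhole principle PHP_n^m
# ("m pigeons do not fit into n holes" -- unsatisfiable when m > n).
# Variable var(i, j) (a positive integer, DIMACS-style) means
# "pigeon i sits in hole j".  The numbering scheme is ours, not from the paper.

def php_cnf(m, n):
    var = lambda i, j: i * n + j + 1          # pigeon i in 0..m-1, hole j in 0..n-1
    clauses = []
    # every pigeon sits in some hole
    for i in range(m):
        clauses.append([var(i, j) for j in range(n)])
    # no two pigeons share a hole
    for j in range(n):
        for i in range(m):
            for k in range(i + 1, m):
                clauses.append([-var(i, j), -var(k, j)])
    return clauses


if __name__ == "__main__":
    cnf = php_cnf(4, 3)                       # PHP_3^4: unsatisfiable
    print(len(cnf), "clauses")                # 4 + 3*C(4,2) = 22 clauses
```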

6.
Every Boolean function may be represented as a real polynomial. In this paper, we characterize the degree of this polynomial in terms of certain combinatorial properties of the Boolean function. Our first result is a tight lower bound of Ω(log n) on the degree needed to represent any Boolean function that depends on n variables. Our second result states that for every Boolean function f, the following measures are all polynomially related:
  • the decision tree complexity of f;
  • the degree of the polynomial representing f;
  • the smallest degree of a polynomial approximating f in the L∞ (max) norm.
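The "degree of the polynomial representing f" can be computed directly for small n: every f : {0,1}^n → {0,1} has a unique multilinear representation whose coefficients are obtained by Möbius inversion. The Python sketch below (helper names are ours) computes these coefficients and the resulting degree.

```python
# Sketch: the unique multilinear real polynomial representing a Boolean
# function f : {0,1}^n -> {0,1}, with coefficients obtained by Moebius
# inversion  c_S = sum_{T subset of S} (-1)^{|S|-|T|} f(1_T).
# Helper names are ours; this only illustrates the "degree" measure above.

from itertools import combinations

def multilinear_coeffs(f, n):
    coeffs = {}
    for size in range(n + 1):
        for S in combinations(range(n), size):
            c = 0
            for tsize in range(size + 1):
                for T in combinations(S, tsize):
                    x = [1 if i in T else 0 for i in range(n)]
                    c += (-1) ** (size - tsize) * f(x)
            if c != 0:
                coeffs[S] = c
    return coeffs

def degree(f, n):
    return max((len(S) for S in multilinear_coeffs(f, n)), default=0)

OR  = lambda x: int(any(x))
XOR = lambda x: sum(x) % 2

print(degree(OR, 3))   # 3: OR on n bits has full degree n
print(degree(XOR, 3))  # 3: parity also has full degree
```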

    7.
K. Kalpakis, Y. Yesha, Algorithmica, 1999, 23(2): 159-179
We find, in polynomial time, a schedule for a complete binary tree directed acyclic graph (dag) with n unit execution time tasks on a linear array whose makespan is optimal within a factor of 1+o(1). Further, given a binary tree dag T with n tasks and height h, we find, in polynomial time, a schedule for T on a linear array whose makespan is optimal within a factor of 5+o(1). On the other hand, we prove that explicit lower and upper bounds on the makespan of optimal schedules of binary tree dags on linear arrays differ at least by a factor of 1+. We also find, in polynomial time, schedules for bounded tree dags with n unit execution time tasks, degree d, and height on a linear array which are optimal within a factor of 1+o(1), this time under the assumption of links with unlimited bandwidth. Finally, we compute an improved upper bound on the makespan of an optimal schedule for a tree dag on the architecture-independent model of Papadimitriou and Yannakakis [14], provided that its height is not too large. Received January 21, 1997; revised June 5, 1997.

    8.
We give a review of existing methods for solving the absolute and vertex-restricted p-center problems on networks and propose a new integer programming formulation, a tightened version of this formulation, and a new method based on successive restrictions of the new formulation. A specialization of the new method with two-element restrictions obtains the optimal p-center solution by solving a series of simple structured integer programs in recognition form. This specialization is called the double bound method. A relaxation of the proposed formulation gives the tightest known lower bound in the literature (obtained earlier by Elloumi et al. [1]). A polynomial-time algorithm is presented to compute this bound. New lower and upper bounds are proposed. The proposed algorithms solve problems from the OR-Library [2] and TSPLIB [3] with up to 3038 nodes; previous computational results were restricted to networks with at most 1817 nodes.
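For orientation, the classical vertex-restricted p-center problem is usually written as the integer program below; this is the textbook formulation, not the new formulation or its tightened version proposed in the abstract.

```latex
% Classical p-center integer program (textbook version, for orientation only;
% the abstract's new formulation and its tightening are not reproduced here).
\begin{align*}
  \min\; & z \\
  \text{s.t.}\; & \sum_{j \in V} y_j = p, \\
  & \sum_{j \in V} x_{ij} = 1 && \forall i \in V, \\
  & x_{ij} \le y_j && \forall i, j \in V, \\
  & \sum_{j \in V} d_{ij}\, x_{ij} \le z && \forall i \in V, \\
  & x_{ij},\, y_j \in \{0,1\},
\end{align*}
% where y_j = 1 if a center is opened at node j and x_ij = 1 if client i is
% assigned to center j; d_ij is the shortest-path distance between i and j.
```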

    9.
The design of tree classifiers is considered from the statistical point of view. The procedure for calculating the a posteriori probabilities is decomposed into a sequence of steps; in every step the a posteriori probabilities for a certain subtask of the given pattern recognition task are calculated. The resulting tree classifier realizes a soft-decision strategy, in contrast to the hard-decision strategy of the conventional decision tree. At the different nonterminal nodes, mean-square polynomial classifiers are applied, which estimate the desired a posteriori probabilities and provide an integrated feature selection capability.
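A minimal sketch of the soft-decision idea, assuming a toy tree format of our own devising: each nonterminal node returns a posterior over its children for the input x, and class posteriors are accumulated multiplicatively over all root-to-leaf paths instead of following a single hard-decision branch.

```python
# Minimal sketch of the soft-decision strategy (toy tree format is ours):
# each nonterminal node returns a posterior over its children instead of a
# hard branch decision, and class posteriors are accumulated over all paths.

def soft_tree_posterior(node, x):
    """node is either {'leaf': class_label} or
    {'post': fn(x) -> {child_name: prob}, 'children': {child_name: node}}."""
    if "leaf" in node:
        return {node["leaf"]: 1.0}
    scores = {}
    branch_post = node["post"](x)                  # subtask posterior at this node
    for name, child in node["children"].items():
        for cls, p in soft_tree_posterior(child, x).items():
            scores[cls] = scores.get(cls, 0.0) + branch_post[name] * p
    return scores


# Toy example: the root softly splits on x[0]; leaves carry class labels.
tree = {
    "post": lambda x: {"left": 0.8, "right": 0.2} if x[0] < 0 else {"left": 0.3, "right": 0.7},
    "children": {"left": {"leaf": "A"}, "right": {"leaf": "B"}},
}
print(soft_tree_posterior(tree, [-1.0]))   # {'A': 0.8, 'B': 0.2}
```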

    10.
In this paper, we show that the problems Disjoint Cycles and Disjoint Paths do not have polynomial kernels, unless NP ⊆ coNP/poly. Thus, these problems do not allow polynomial-time preprocessing that results in instances whose size is bounded by a polynomial in the parameter at hand. We build upon recent results by Bodlaender et al. [6] and Fortnow and Santhanam [20], which show that NP-complete problems that are ‘or-compositional’ do not have polynomial kernels, unless NP ⊆ coNP/poly. To this machinery we add a notion of transformation, and obtain that Disjoint Cycles and Disjoint Paths do not have polynomial kernels, unless NP ⊆ coNP/poly. For the proof, we introduce a problem on strings, called Disjoint Factors, and first show that this problem has no polynomial kernel unless NP ⊆ coNP/poly. We also show that the related Disjoint Cycles Packing problem has a kernel of size O(k log k).

    11.
In this work, we generalize previous constructions of fuzzy set categories, introduced in [1], by considering L-fuzzy sets in which the characteristic functions take values in a completely distributive lattice rather than in the real unit interval. These L-fuzzy sets are then used to define the L-fuzzy categories, which are proven to be rational. In the final part of the paper, the L-fuzzy functors given by the extension principles are endowed with a monad structure, which is used, together with the functorial definition of the term monad, to provide monad compositions as a basis for a notion of generalised terms.

    12.
In a balloon drawing of a tree, all the children under the same parent are placed on the circumference of the circle centered at their parent, and the radius of the circle centered at each node along any path from the root reflects the number of descendants associated with the node. Among various styles of tree drawings reported in the literature, the balloon drawing enjoys a desirable feature of displaying tree structures in a rather balanced fashion. For each internal node in a balloon drawing, the ray from the node to each of its children divides the wedge accommodating the subtree rooted at the child into two sub-wedges. Depending on whether the two sub-wedge angles are required to be identical or not, a balloon drawing can further be divided into two types: even sub-wedge and uneven sub-wedge types. In the most general case, for any internal node in the tree there are two dimensions of freedom that affect the quality of a balloon drawing: (1) altering the order in which the children of the node appear in the drawing, and (2) for the subtree rooted at each child of the node, flipping the two sub-wedges of the subtree. In this paper, we give a comprehensive complexity analysis for optimizing balloon drawings of rooted trees with respect to angular resolution, aspect ratio and standard deviation of angles under various drawing cases depending on whether the tree is of even or uneven sub-wedge type and whether (1) and (2) above are allowed. It turns out that some are NP-complete while others can be solved in polynomial time. We also derive approximation algorithms for those that are intractable in general.
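A minimal layout sketch in Python of the even sub-wedge style described above: each child receives a wedge proportional to the size of its subtree and is placed on a circle around its parent. The tree format, radius rule, and halving factor are our own simplifications; the sketch performs none of the ordering/flipping optimization the paper analyzes.

```python
# Minimal balloon-drawing sketch (even sub-wedge style): each child of a node
# gets a wedge proportional to the size of its subtree and is placed on a
# circle around its parent.  Radii and the tree format are our own
# simplifications; no ordering or flipping optimization is performed.

import math

def subtree_size(tree, v):
    return 1 + sum(subtree_size(tree, c) for c in tree.get(v, []))

def balloon_layout(tree, v, x=0.0, y=0.0, start=0.0, span=2 * math.pi, radius=1.0):
    pos = {v: (x, y)}
    children = tree.get(v, [])
    total = sum(subtree_size(tree, c) for c in children)
    angle = start
    for c in children:
        wedge = span * subtree_size(tree, c) / total if total else 0.0
        theta = angle + wedge / 2.0                       # ray bisects the wedge
        cx, cy = x + radius * math.cos(theta), y + radius * math.sin(theta)
        pos.update(balloon_layout(tree, c, cx, cy, angle, wedge, radius * 0.5))
        angle += wedge
    return pos

tree = {"r": ["a", "b"], "a": ["a1", "a2", "a3"], "b": []}
print(balloon_layout(tree, "r"))
```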

    13.
Yong Gao, Artificial Intelligence, 2009, 173(14): 1343-1366
Data reduction is a key technique in the study of fixed-parameter algorithms. In the AI literature, pruning techniques based on simple and efficient-to-implement reduction rules also play a crucial role in the success of many industrial-strength solvers. Understanding the effectiveness and the applicability of data reduction as a technique for designing heuristics for intractable problems has been one of the main motivations in studying the phase transition of randomly-generated instances of NP-complete problems. In this paper, we take the initiative to study the power of data reductions in the context of random instances of a generic intractable parameterized problem, the weighted d-CNF satisfiability problem. We propose a non-trivial random model for the problem and study the probabilistic behavior of the random instances from the model. We design an algorithm based on data reduction and other algorithmic techniques and prove that the algorithm solves the random instances with high probability and in fixed-parameter polynomial time O(d^k nm), where n is the number of variables, m is the number of clauses, and k is the fixed parameter. We establish the exact threshold of the phase transition of the solution probability and show that in some region of the problem space, unsatisfiable random instances of the problem have parametric resolution proofs of fixed-parameter polynomial size. Also discussed are a more general random model and the generalization of the results to that model.

    14.
M. C. Golumbic, Computing, 1977, 18(3): 199-208
Using the notion of G-decomposition introduced in Golumbic [8, 9], we present an implementation of an algorithm which assigns a transitive orientation to a comparability graph in O(δ·|E|) time and O(|E|) space, where δ is the maximum degree of a vertex and |E| is the number of edges. A quotient operation reducing the graph in question and preserving G-decomposition and transitive orientability is shown, and efficient solutions to a number of NP-complete problems which reduce to polynomial time for comparability graphs are discussed.
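For orientation, the forcing relation Γ that underlies G-decomposition can be stated directly: orienting an edge as a→b forces a→c whenever ac ∈ E but bc ∉ E, and forces c→b whenever cb ∈ E but ca ∉ E. The Python sketch below closes a single directed edge under this relation, i.e., computes its implication class; it is only an illustration of the relation, not Golumbic's O(δ·|E|) algorithm.

```python
# Sketch of the forcing relation Gamma behind G-decomposition:
# a->b forces a->c whenever ac is an edge but bc is not, and forces c->b
# whenever cb is an edge but ca is not.  The function below closes a single
# directed edge under Gamma, i.e. computes its implication class.

from collections import deque

def implication_class(adj, a, b):
    """adj: dict vertex -> set of neighbours.  Returns the set of directed
    edges forced (directly or transitively) by orienting (a, b) as a -> b."""
    forced, queue = {(a, b)}, deque([(a, b)])
    while queue:
        u, v = queue.popleft()
        for c in adj[u]:
            if c != v and c not in adj[v] and (u, c) not in forced:
                forced.add((u, c)); queue.append((u, c))   # u->v forces u->c
        for c in adj[v]:
            if c != u and c not in adj[u] and (c, v) not in forced:
                forced.add((c, v)); queue.append((c, v))   # u->v forces c->v
    return forced

# Path a-b-c-d: orienting a->b forces c->b and then c->d.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(implication_class(adj, "a", "b"))   # {('a','b'), ('c','b'), ('c','d')}
```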

    15.
The Perron–Frobenius (PF) theorem provides a simple characterization of the eigenvectors and eigenvalues of irreducible nonnegative square matrices. A generalization of the PF theorem to nonsquare matrices, which can be interpreted as representing systems with additional degrees of freedom, was recently presented in [1]. This generalized theorem requires a notion of irreducibility for nonsquare systems. A suitable definition, based on the property that every maximal square (legal) subsystem is irreducible, is provided in [1], and is shown to be necessary and sufficient for the generalized theorem to hold. This note shows that irreducibility of a nonsquare system can be tested in polynomial time. The analysis uses a graphic representation of the nonsquare system, termed the constraint graph, representing the flow of influence between the constraints of the system.
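In the classical square case, a nonnegative matrix is irreducible exactly when the digraph of its positive entries is strongly connected, which already gives a polynomial-time test; the sketch below implements that familiar check. The nonsquare test via the constraint graph described in the abstract is more involved and is not reproduced here.

```python
# For the classical square case, a nonnegative matrix A is irreducible iff
# the digraph with an arc i -> j whenever A[i][j] > 0 is strongly connected.
# This sketch only covers that familiar square test, not the nonsquare
# constraint-graph criterion from the abstract.

def is_irreducible(A):
    n = len(A)

    def reachable(start, arc):            # simple DFS over the positivity pattern
        seen, stack = {start}, [start]
        while stack:
            i = stack.pop()
            for j in range(n):
                if j not in seen and arc(i, j):
                    seen.add(j); stack.append(j)
        return seen

    forward  = reachable(0, lambda i, j: A[i][j] > 0)   # nodes reachable from 0
    backward = reachable(0, lambda i, j: A[j][i] > 0)   # nodes that can reach 0
    return len(forward) == n and len(backward) == n     # strongly connected

print(is_irreducible([[0, 1], [1, 0]]))   # True  (a 2-cycle)
print(is_irreducible([[1, 1], [0, 1]]))   # False (node 1 cannot reach node 0)
```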

    16.
We prove that the hp finite elements for H(curl) spaces, introduced in [1], fit into a general de Rham diagram involving hp approximations. The corresponding interpolation operators generalize the notion of hp interpolation introduced in [2] and are different from the classical operators of Nedelec and Raviart-Thomas.
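For orientation, the de Rham diagram referred to above is the exact sequence shown below (three-dimensional version), with interpolation operators making the squares between the continuous and discrete spaces commute; the symbols for the discrete spaces and the operators Π are generic placeholders, not the paper's notation.

```latex
% Generic de Rham diagram (3D version, for orientation only; the discrete
% space names W_hp, Q_hp, V_hp, Y_hp and the projection symbols are placeholders).
\[
\begin{array}{ccccccc}
  H^1(\Omega) & \xrightarrow{\;\nabla\;} & H(\mathrm{curl},\Omega)
              & \xrightarrow{\;\nabla\times\;} & H(\mathrm{div},\Omega)
              & \xrightarrow{\;\nabla\cdot\;} & L^2(\Omega) \\[2pt]
  \downarrow \Pi_{\mathrm{grad}} & & \downarrow \Pi_{\mathrm{curl}} & &
  \downarrow \Pi_{\mathrm{div}} & & \downarrow \Pi_{L^2} \\[2pt]
  W_{hp} & \xrightarrow{\;\nabla\;} & Q_{hp}
         & \xrightarrow{\;\nabla\times\;} & V_{hp}
         & \xrightarrow{\;\nabla\cdot\;} & Y_{hp}
\end{array}
\]
```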

    17.
We define a general family of canonical labelled calculi, of which many previously studied sequent and labelled calculi are particular instances. We then provide a uniform and modular method to obtain finite-valued semantics for every canonical labelled calculus by introducing the notion of partial non-deterministic matrices. The semantics is applied to provide simple decidable semantic criteria for two crucial syntactic properties of these calculi: (strong) analyticity and cut-admissibility. Finally, we demonstrate an application of this framework for a large family of paraconsistent logics.

    18.
We consider instances of the Stable Roommates problem that arise from geometric representation of participants' preferences: a participant is a point in a metric space, and his preference list is given by the sorted list of distances to the other participants. We show that contrary to the general case, the problem admits a polynomial-time solution even in the case when ties are present in the preference lists. We define the notion of an α-stable matching: the participants are willing to switch partners only for a (multiplicative) improvement of at least α. We prove that, in general, finding α-stable matchings is not easier than finding matchings that are stable in the usual sense. We show that, unlike in the general case, in a three-dimensional geometric stable roommates problem, a 2-stable matching can be found in polynomial time.
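A small Python sketch of the geometric setting and of the α-stability condition, under our own reading of the definitions: preferences are induced by distances, and a pair blocks an α-stable matching only if each member would improve its distance by a factor greater than α. All names and the strictness convention are assumptions for illustration.

```python
# Sketch of the geometric setting: participants are points in a metric space,
# preferences are induced by distance, and a matching is alpha-stable when no
# two participants (not matched to each other) would both improve their
# distance by a factor greater than alpha by pairing up.  Names are ours.

import math

def is_alpha_stable(points, matching, alpha=1.0):
    """points: dict name -> coordinates; matching: dict name -> partner name."""
    names = list(points)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = names[i], names[j]
            if matching.get(a) == b:
                continue
            d_ab = math.dist(points[a], points[b])
            # a and b block the matching if each improves by a factor > alpha
            improves_a = math.dist(points[a], points[matching[a]]) > alpha * d_ab
            improves_b = math.dist(points[b], points[matching[b]]) > alpha * d_ab
            if improves_a and improves_b:
                return False
    return True

pts = {"a": (0, 0), "b": (1, 0), "c": (5, 0), "d": (6, 0)}
print(is_alpha_stable(pts, {"a": "b", "b": "a", "c": "d", "d": "c"}))        # True
print(is_alpha_stable(pts, {"a": "c", "c": "a", "b": "d", "d": "b"}))        # False
print(is_alpha_stable(pts, {"a": "c", "c": "a", "b": "d", "d": "b"}, 10.0))  # True
```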

    19.
Disjunctive logic programming (DLP) is a powerful formalism for knowledge representation and reasoning. The high expressiveness of the DLP language, together with the recent availability of efficient DLP systems, has favoured the application of DLP in emerging areas like Knowledge Management and Information Integration. These applications often have to deal with huge input data and have highlighted the need to improve the efficiency of DLP instantiators. Program instantiation is the first phase of a DLP computation; in this phase, variables are replaced by constants to generate a ground program, which is then evaluated by propositional algorithms in the second phase of the computation. The instantiation process may be computationally expensive, and in fact its efficiency has been recognized to be a key issue for solving real-world problems by using disjunctive logic programming. Given a program P, a good instantiation for P is a ground program P′ having precisely the same answer sets as P and such that: (1) P′ can be computed efficiently from P, and (2) P′ does not contain “useless” rules (i.e., P′ is as small as possible) and can thus be evaluated efficiently. In this paper, we present a structure-based backjumping algorithm for the instantiation of disjunctive logic programs that meets the above requirements. In particular, given a rule r to be grounded, our algorithm exploits both semantic and structural information about r to compute the ground instances of r efficiently: from each general rule r, we compute only a relevant subset of its ground instances, avoiding the generation of “useless” instances while fully preserving the semantics of the program. We have implemented this algorithm in DLV—the state-of-the-art implementation of DLP—and have carried out experiments on a wide collection of benchmark problems. The experimental results are very positive: the new technique significantly improves the efficiency of the DLV system on many program classes.
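As a baseline for what "instantiation" means here, the Python sketch below grounds a rule body by matching its atoms against a set of facts left to right with naive backtracking. The rule/fact representation is ours, and the sketch deliberately omits the semantic and structural (backjumping) optimizations that the paper contributes.

```python
# Baseline sketch of rule instantiation (grounding): enumerate the ground
# instances of a rule body by matching its atoms against known facts, left to
# right, with simple backtracking.  This is only the naive procedure that
# structure-based backjumping improves upon; the rule syntax here is ours.

def match(atom, fact, subst):
    """atom/fact: (pred, args).  Variables are strings starting uppercase."""
    pred, args = atom
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    s = dict(subst)
    for a, f in zip(args, fargs):
        if a[0].isupper():                        # variable
            if a in s and s[a] != f:
                return None
            s[a] = f
        elif a != f:                              # constant mismatch
            return None
    return s

def ground_instances(body, facts, subst=None):
    subst = subst or {}
    if not body:
        yield subst
        return
    first, rest = body[0], body[1:]
    for fact in facts:
        s = match(first, fact, subst)
        if s is not None:
            yield from ground_instances(rest, facts, s)

# reach(X, Y) :- edge(X, Z), edge(Z, Y).
facts = [("edge", ("a", "b")), ("edge", ("b", "c"))]
body = [("edge", ("X", "Z")), ("edge", ("Z", "Y"))]
for s in ground_instances(body, facts):
    print(s)   # {'X': 'a', 'Z': 'b', 'Y': 'c'}
```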

    20.