Similar Documents
20 similar documents found.
1.
There are many decision problems in automata theory (including the membership, emptiness, inclusion and universality problems) that are NP-hard for some classes of tree automata (TA). The study of their parameterized complexity allows us to find new bounds on their nonpolynomial-time algorithmic behaviour. We present the results of such a study for classical TA, rigid tree automata, TA with global equality and disequality constraints, and t-DAG automata. As parameters we consider the number of states, the cardinality of the signature, the size of the term or t-dag, and the size of the automaton.

2.
We define a class of program schemes RFDPS constructed around notions of forall-loops, repeat-loops, arrays and if-then-else instructions, and which take finite structures as inputs, and we examine the class of problems, denoted RFDPS also, accepted by such program schemes. The class of program schemes RFDPS is a logic, in Gurevich's sense, in that: every program scheme accepts an isomorphism-closed class of finite structures; we can recursively check whether a given finite structure is accepted by a given program scheme; and we can recursively enumerate the program schemes of RFDPS. We show that the class of problems RFDPS properly contains the class of problems definable in inflationary fixed-point logic (for example, the well-known problem Parity is in RFDPS) and that there is a strict, infinite hierarchy of classes of problems within RFDPS (the union of which is RFDPS) parameterized by the depth of nesting of forall-loops in our program schemes. This is the first strict, infinite hierarchy in any polynomial-time logic properly extending inflationary fixed-point logic (with the property that the union of the classes in the hierarchy consists of all problems definable in the logic). The fact that there are problems (like Parity) in RFDPS which cannot be defined in many of the more traditional logics of finite model theory (which often have zero-one laws) essentially means that existing tools, techniques and logical hierarchy results are of limited use to us.

3.
Lutz developed a general theory of resource-bounded measurability and measure on suitable complexity classes C (see Proceedings of the 13th IEEE Conference on Computational Complexity, pp. 236–248, 1998), where the Cantor space C is the class of all decision problems, and the classes C include various exponential time and space complexity classes, the class of all decidable languages, and the Cantor space C itself. In this paper, a different general theory of resource-bounded measurability and measure on those complexity classes is developed. Our approach parallels the Carathéodory outer measure approach of classical Lebesgue measure theory. We show that many nice properties of classical Lebesgue measure theory also hold in the resource-bounded case. The Carathéodory approach gives short and easy proofs of theorems in the resource-bounded case as well as in the classical case. The class of measurable sets in our paper is strictly larger than that of Lutz, and the two resource-bounded measures assign the same measure to a set whenever the set is measurable in the sense of Lutz.

4.
The classes W[P] and W[1] are parameterized analogues of NP in that they can be characterized by machines with restricted existential nondeterminism. These machine characterizations give rise to two natural notions of parameterized randomized algorithms that we call W[P]-randomization and W[1]-randomization. This paper develops the corresponding theory.

5.
The Set Packing problem originated in applications of partitioning problems: it partitions elements under strong constraints. In complexity theory it is an important NP-hard problem, widely applied in areas such as scheduling, code optimization, and bioinformatics. In particular, since the emergence of parameterized computation theory, the problem has again become a focus of research. According to differences among the problems studied, this paper divides the Set Packing problem into five classes and gives a concrete definition of each. On this basis, it surveys the algorithms for solving these five classes of problems, analyzes and compares in detail the techniques used in the parameterized algorithms, and suggests several directions for future algorithmic research on this problem.
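To make the core decision problem behind this survey concrete: k-Set Packing asks for k pairwise-disjoint sets from a given family. The sketch below is a naive search-tree baseline, not one of the parameterized algorithms surveyed in the paper; the function name and signature are illustrative assumptions.

```python
from typing import FrozenSet, List, Optional

def set_packing(sets: List[FrozenSet[int]], k: int,
                chosen: Optional[List[FrozenSet[int]]] = None
                ) -> Optional[List[FrozenSet[int]]]:
    """Return k pairwise-disjoint sets from `sets`, or None if none exist.

    Naive search tree: at each level, try to extend the partial packing
    with any later set disjoint from everything chosen so far.
    """
    if chosen is None:
        chosen = []
    if len(chosen) == k:
        return chosen
    used = set().union(*chosen) if chosen else set()
    for i, s in enumerate(sets):
        if used.isdisjoint(s):
            result = set_packing(sets[i + 1:], k, chosen + [s])
            if result is not None:
                return result
    return None
```

Considering candidate sets only in increasing index order keeps the search complete while avoiding duplicate partial packings.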

6.
Covering problems are fundamental classical problems in optimization, computer science and complexity theory. Typically an input to these problems is a family of sets over a finite universe, and the goal is to cover the elements of the universe with as few sets of the family as possible. Variations of covering problems include well-known problems like Set Cover, Vertex Cover, Dominating Set and Facility Location, to name a few. Recently there has been a lot of study of partial covering problems, a natural generalization of covering problems. Here, the goal is not to cover all the elements but to cover a specified number of elements with the minimum number of sets. In this paper we study partial covering problems in graphs in the realm of parameterized complexity. The classical (non-partial) versions of all these problems have been intensively studied in planar graphs and in graphs excluding a fixed graph H as a minor. However, the techniques developed for the parameterized versions of non-partial covering problems cannot be applied directly to their partial counterparts. The approach we use to show that various partial covering problems are fixed-parameter tractable on planar graphs, graphs of bounded local treewidth and graphs excluding some graph as a minor is quite different from previously known techniques. The main idea behind our approach is the concept of implicit branching. We find the implicit branching technique interesting in its own right and believe that it can be used for some other problems.
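The partial covering objective (cover at least t elements with as few sets as possible) can be pinned down with a minimal exhaustive baseline. This sketch only illustrates the problem statement; it is exponential in the family size and has nothing to do with the paper's implicit-branching technique, and its names are hypothetical.

```python
from itertools import combinations

def min_partial_cover(universe, family, t):
    """Smallest subfamily of `family` covering at least t elements
    of `universe`; returns None if even the whole family falls short.

    Exhaustive baseline: try subfamilies in order of increasing size,
    so the first feasible one found is minimum.
    """
    for size in range(len(family) + 1):
        for combo in combinations(family, size):
            covered = set().union(*combo) if combo else set()
            if len(covered & set(universe)) >= t:
                return list(combo)
    return None
```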

7.
We show that restricting a number of characterizations of the complexity class P to be positive (in natural ways) results in the same class of (monotone) problems, which we denote by posP. By a well-known result of Razborov, posP is a proper subclass of the class of monotone problems in P. We exhibit complete problems for posP via weak logical reductions, as we do for other logically defined classes of problems. Our work is a continuation of research undertaken by Grigni and Sipser, and subsequently Stewart; indeed, we introduce the notion of a positive deterministic Turing machine and consequently solve a problem posed by Grigni and Sipser.

8.
There are different ways for an external agent to influence the outcome of an election. We concentrate on “control” by adding or deleting candidates. Our main focus is to investigate the parameterized complexity of various control problems for different voting systems. To this end, we introduce natural digraph problems that may be of independent interest. They help in determining the parameterized complexity of control for different voting systems including Llull, Copeland, and plurality voting. Devising several parameterized reductions, we provide an overview of the parameterized complexity of the digraph and control problems with respect to natural parameters such as adding/deleting only a bounded number of candidates or having only few voters.

9.
Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.

10.
In this paper we investigate to what extent a very simple and natural “reachability as deducibility” approach, originating in research on formal methods for security, is applicable to the automated verification of large classes of infinite-state and parameterized systems. In this approach the verification of a safety property is reduced to the purely logical problem of finding a countermodel for a first-order formula. This task is then delegated to generic automated finite model building procedures. A finite countermodel, if found, provides a concise representation of a system invariant sufficient to establish safety. In this paper we first present a detailed case study on the verification of a parameterized mutual exclusion protocol. Further, we establish the relative completeness of the finite countermodel finding method (FCM) for a class of parameterized linear arrays of finite automata with respect to known methods based on monotonic abstraction and symbolic backward reachability. The practical efficiency of the method is illustrated on a set of verification problems taken from the literature, using the Mace4 model finding procedure.

11.
Modern algorithm theory includes numerous techniques to attack hard problems, such as approximation algorithms on the one hand and parameterized complexity on the other. However, it is still uncommon to use these two techniques simultaneously, which is unfortunate, as there are natural problems that cannot be solved using either technique alone, but can be solved rather well by combining them. The problem studied here is not only natural, but also practical: consider TSP, generalized as follows. We impose deadlines on some of the vertices, effectively constraining them to be visited prior to a given point of time. The resulting problem DlTSP (a special case of the well-known TSP with time windows) inherits its hardness from classical TSP, which is both well known from practice and renowned to be one of the hardest problems with respect to approximability: within polynomial time, not even a polynomial approximation ratio (let alone a constant one) can be achieved (unless P = NP). We show that DlTSP is even harder than classical TSP in the following sense. Classical TSP, despite its hardness, admits good approximation algorithms when restricted to metric (or near-metric) inputs. Not so DlTSP (and hence neither TSP with time windows): we prove that even for metric inputs, no constant approximation ratio can ever be achieved (unless P = NP). This is where parameterization becomes crucial: by combining methods from the field of approximation algorithms with ideas from the theory of parameterized complexity, we apply the concept of parameterized approximation. Thereby, we obtain a 2.5-approximation algorithm for DlTSP with a running time of k! · poly(|G|), where k denotes the number of deadlines. Furthermore, we prove that there is no fpt-algorithm with an approximation guarantee of 2 − ε for any ε > 0, unless P = NP. Finally, we show that, unlike TSP, DlTSP becomes much harder when relaxing the triangle inequality. More precisely, for an arbitrarily small violation of the triangle inequality, DlTSP does not admit an fpt-algorithm with approximation guarantee ((1 − ε)/2)|V| for any ε > 0, unless P = NP.
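The DlTSP problem definition can be made concrete with an exact brute force over all vertex orders on tiny instances. This is exponential in the number of vertices and only illustrates what a feasible deadline tour is; it is not the paper's k!·poly(|G|) 2.5-approximation, which instead enumerates only the k! orders of the k deadline vertices. All names here are illustrative.

```python
from itertools import permutations

def dltsp_bruteforce(dist, start, deadlines):
    """Exact DlTSP on tiny instances: cheapest Hamiltonian tour from
    `start` back to `start` reaching every deadline vertex in time.

    dist: symmetric dict of dicts, dist[u][v] = edge cost.
    deadlines: vertex -> latest allowed arrival time.
    Returns (cost, tour) for the best feasible tour, or None.
    """
    others = [v for v in dist if v != start]
    best = None
    for perm in permutations(others):
        time, ok, prev = 0, True, start
        for v in perm:
            time += dist[prev][v]
            if v in deadlines and time > deadlines[v]:
                ok = False  # deadline missed: prune this order
                break
            prev = v
        if ok:
            cost = time + dist[prev][start]
            if best is None or cost < best[0]:
                best = (cost, (start,) + perm + (start,))
    return best
```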

12.
To address the relatively weak mathematical foundations of object-oriented methods, coalgebraic methods are used to study the formal semantics and behavioural relations of object orientation from the perspectives of category theory and observation. First, a coalgebraic description of classes and objects is given: an abstract class is defined as a class specification, a class as a coalgebra satisfying that specification, and each object of a class is viewed as an element of the coalgebra's state space; strong monad theory and assertions are used, respectively, to give a parameterized description of method behaviour and its semantic constraints. Next, coalgebraic bisimulation is used to investigate behavioural equivalence between different objects under strong monads. Finally, an example shows how the PVS tool can be used to prove the consistency of a class specification and the behavioural relations between objects.

13.
We consider classes of well-known program schemes from a complexity-theoretic viewpoint. We define logics which express all those problems solvable using our program schemes and show that the class of problems so solved or expressed coincides exactly with the complexity class PSPACE (our problems are viewed as sets of finite structures over some vocabulary). We derive normal form theorems for our logics and use these normal form theorems to show that certain problems concerning acceptance and termination of our program schemes and satisfiability of our logical formulae are PSPACE-complete. Moreover, we show that a game problem, seemingly disjoint from logic and program schemes, is PSPACE-complete using the results described above. We also highlight similarities between the results of this paper and the literature, so providing the reader with an introduction to this area of research.

14.
In this article we consider the application of ideas from parameterized complexity, and topological graph theory, to online problems. We focus on parameterized promise problems, where we are promised that the problem input obeys certain properties, or is presented in a certain fashion. We explore the effects of using graph width metrics as restrictions on the input to online problems. It seems natural to suppose that, for graphs having some form of bounded width, good online algorithms may exist for a number of natural problems. In the work presented we concentrate on online graph coloring problems, where we restrict the allowed input to instances having some form of bounded treewidth or pathwidth. We also consider the effects of restricting the presentation of the input to some form of bounded width decomposition or layout. A consequence of this part of the work is the clarification of a new parameter for graphs, persistence, which arises naturally in the online setting, and is of interest in its own right. We present some basic results regarding the general recognition of graphs having bounded persistence path decompositions.

15.
Some computationally hard problems, e.g., deduction in logical knowledge bases, are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess this known part off-line so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for others their compilability would imply consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, “completeness” meaning they are “the least likely to be compilable.” We also investigate preprocessing that does not yield polynomial-time on-line algorithms, but generically “decreases” complexity. This leads us to define “hierarchies of compilability,” which are the analog of the polynomial hierarchy. A detailed comparison of our framework with the idea of “parameterized tractability” shows the differences between the two approaches.

16.
We study the problem of generating the monomials of a black box polynomial in the context of enumeration complexity. We present three new randomized algorithms for restricted classes of polynomials with a polynomial or incremental delay, and the same global running time as the classical ones. We introduce TotalBPP, IncBPP and DelayBPP, which are probabilistic counterparts of the most common classes for enumeration problems. Our interpolation algorithms are applied to algebraic representations of several combinatorial enumeration problems, which are thereby proved to belong to the introduced complexity classes. In particular, the spanning hypertrees of a 3-uniform hypergraph can be enumerated with a polynomial delay. Finally, we study polynomials given by circuits and prove that we can derandomize the interpolation algorithms on classes of bounded-depth circuits. We also prove the hardness of some problems on polynomials of low degree and small circuit complexity, which suggests that our good interpolation algorithm for multilinear polynomials cannot be generalized to degree 2 polynomials. This article is an improved and extended version of Strozecki (Mathematical Foundations of Computer Science, pp. 629–640, 2010) and of the third chapter of the author's Ph.D. Thesis (Strozecki, Ph.D. Thesis, 2010).
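For intuition about black-box interpolation of multilinear polynomials: every coefficient of a multilinear polynomial can be recovered exactly from evaluations at 0/1 points by Möbius inversion over subsets. The sketch below is an exponential-time deterministic baseline that queries all 2^n points, not one of the paper's polynomial-delay randomized algorithms; its names are illustrative.

```python
from itertools import combinations

def multilinear_monomials(p, n):
    """Recover the monomials of a multilinear polynomial p over
    variables x_0..x_{n-1}, given p only as a black box.

    Uses Möbius inversion over subsets:
      coeff(S) = sum over T subset of S of (-1)^(|S|-|T|) * p(indicator of T).
    Returns {frozenset(S): coefficient} for the nonzero coefficients.
    """
    def eval_at(T):
        return p([1 if i in T else 0 for i in range(n)])

    monomials = {}
    for size in range(n + 1):
        for S in combinations(range(n), size):
            coeff = 0
            for tsize in range(size + 1):
                for T in combinations(S, tsize):
                    coeff += (-1) ** (size - tsize) * eval_at(T)
            if coeff != 0:
                monomials[frozenset(S)] = coeff
    return monomials
```

Multilinearity is essential here: on 0/1 inputs a multilinear polynomial is determined by its values, so the inversion is exact.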

17.
We introduce a new framework for designing and analyzing algorithms. Our framework applies best to problems that are inapproximable according to the standard worst-case analysis. We circumvent such negative results by designing guarantees for classes of instances, parameterized according to properties of the optimal solution. Given our parameterized approximation, called PArametrized by the Signature of the Solution (PASS) approximation, we design algorithms with optimal approximation ratios for problems with additive and submodular objective functions such as the capacitated maximum facility location problems. We consider two types of algorithms for these problems. For greedy algorithms, our framework provides a justification for preferring a certain natural greedy rule over some alternative greedy rules that have been used in similar contexts. For LP-based algorithms, we show that the natural LP relaxation for these problems is not optimal in our framework. We design a new LP relaxation and show that this LP relaxation coupled with a new randomized rounding technique is optimal in our framework. In passing, we note that our results strictly improve over previous results of Kleinberg et al. (J. ACM 51(2):263–280, 2004) concerning the approximation ratio of the greedy algorithm.

18.
The theory of parameterized computation and complexity is a recently developed subarea of theoretical computer science. The theory is aimed at practically solving a large number of computational problems that are theoretically intractable. It is based on the observation that many intractable computational problems in practice are associated with a parameter that varies within a small or moderate range. Therefore, by taking advantage of the small parameters, many theoretically intractable problems can be solved effectively and practically. On the other hand, the theory of parameterized computation and complexity has also offered powerful techniques that enable us to derive strong computational lower bounds for many computational problems, thus explaining why certain theoretically tractable problems cannot be solved effectively and practically. The theory of parameterized computation and complexity has found wide applications in areas such as database systems, programming languages, networks, VLSI design, parallel and distributed computing, computational biology, and robotics. This survey gives an overview of the fundamentals, algorithms, techniques, and applications developed in the research of parameterized computation and complexity. We also report the most recent advances and exciting developments, and discuss further research directions in the area.
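The textbook illustration of "taking advantage of the small parameter" is the bounded-search-tree algorithm for Vertex Cover, which decides covers of size k in O(2^k · m) time regardless of how large the graph is. This standard example is offered only to make the idea concrete; it is not taken from the survey itself.

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover
    of size at most k.

    Bounded search tree: pick any uncovered edge (u, v); every cover
    must contain u or v, so branch on the two choices. The tree has
    depth at most k, giving O(2^k * |edges|) time overall.
    """
    if not edges:
        return True   # nothing left to cover
    if k == 0:
        return False  # budget exhausted but an edge remains
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]  # take u into the cover
    rest_v = [e for e in edges if v not in e]  # take v into the cover
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)
```

The running time is exponential only in the parameter k, not in the input size, which is exactly the fixed-parameter tractability the survey describes.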

19.
We characterize the complexity of some natural and important problems in linear algebra. In particular, we identify natural complexity classes for which the problems of (a) determining if a system of linear equations is feasible and (b) computing the rank of an integer matrix (as well as other problems) are complete under logspace reductions. As an important part of presenting this classification, we show that the “exact counting logspace hierarchy” collapses to near the bottom level. We review the definition of this hierarchy below. We further show that this class is closed under NC1-reducibility, and that it consists of exactly those languages that have logspace uniform span programs (introduced by Karchmer and Wigderson) over the rationals. In addition, we contrast the complexity of these problems with the complexity of determining if a system of linear equations has an integer solution. Received: June 9, 1997.

20.
We extend previous work on the parameterized complexity of local search for the Traveling Salesperson Problem (TSP). So far, its parameterized complexity has been investigated with respect to the distance measures (defining the local search area) “Edge Exchange” and “Max-Shift”. We perform studies with respect to the distance measures “Swap” and “r-Swap”, “Reversal” and “r-Reversal”, and “Edit”, achieving both fixed-parameter tractability and W[1]-hardness results. In particular, from the parameterized reduction showing W[1]-hardness we infer running time lower bounds (based on the exponential time hypothesis) for all corresponding distance measures. Moreover, we provide non-existence results for polynomial-size problem kernels and we show that some in general W[1]-hard problems turn fixed-parameter tractable when restricted to planar graphs.
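To illustrate the "Swap" distance measure studied here: a single Swap exchanges the positions of two cities in the tour, and local search repeatedly moves to a cheaper tour within the Swap neighbourhood. The sketch below is a plain hill-climbing illustration of that neighbourhood under assumed names; it is not one of the paper's parameterized algorithms.

```python
from itertools import combinations

def tour_cost(tour, dist):
    """Cost of the cyclic tour under the distance table dist[u][v]."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def swap_local_search(tour, dist):
    """Improve a cyclic tour by repeatedly applying an improving Swap
    (exchange of two cities' positions) until the tour is locally
    optimal for the Swap neighbourhood.
    """
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        best = tour_cost(tour, dist)
        for i, j in combinations(range(len(tour)), 2):
            cand = tour[:]
            cand[i], cand[j] = cand[j], cand[i]  # one Swap move
            if tour_cost(cand, dist) < best:
                tour, improved = cand, True
                break  # restart the scan from the new tour
    return tour
```

The paper's parameterized question is whether a strictly better tour exists within Swap distance r of the current one; the hill climber above just applies distance-1 moves greedily.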


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号