Similar Articles
20 similar articles found.
1.
The 1-versus-2 queries problem, which has been extensively studied in computational complexity theory, asks in its generality whether every efficient algorithm that makes at most 2 queries to a $\Sigma^{p}_{k}$-complete language $L_{k}$ has an efficient simulation that makes at most 1 query to $L_{k}$. We obtain solutions to this problem for hypotheses weaker than previously considered. We prove that:
(I)  For each k≥2, $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt}\subseteq \mathrm{ZPP}^{\Sigma^{p}_{k}[1]} \Rightarrow \mathrm{PH}=\Sigma^{p}_{k}$, and
(II)  $\mathrm{P}^{\mathrm{NP}[2]}_{tt}\subseteq \mathrm{ZPP}^{\mathrm{NP}[1]} \Rightarrow \mathrm{PH}=\mathrm{S}_{2}^{p}$.
Here, for any complexity class $\mathcal{C}$ and integer j≥1, we define $\mathrm{ZPP}^{\mathcal{C}[j]}$ to be the class of problems solvable by zero-error randomized algorithms that run in polynomial time, make at most j queries to $\mathcal{C}$, and succeed with probability at least 1/2+1/poly(⋅). This same definition of $\mathrm{ZPP}^{\mathcal{C}[j]}$, also considered in Cai and Chakaravarthy (J. Comb. Optim. 11(2):189–202, 2006), subsumes the class of problems solvable by randomized algorithms that always answer correctly in expected polynomial time and make at most j queries to $\mathcal{C}$. Hemaspaandra, Hemaspaandra, and Hempel (SIAM J. Comput. 28(2):383–393, 1998), for k>2, and Buhrman and Fortnow (J. Comput. Syst. Sci. 59(2):182–194, 1999), for k=2, had obtained the same consequence as ours in (I) using the stronger hypothesis $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt}\subseteq \mathrm{P}^{\Sigma^{p}_{k}[1]}$. Fortnow, Pavan, and Sengupta (J. Comput. Syst. Sci. 74(3):358–363, 2008) had obtained the same consequence as ours in (II) using the stronger hypothesis $\mathrm{P}^{\mathrm{NP}[2]}_{tt}\subseteq \mathrm{P}^{\mathrm{NP}[1]}$.

2.
The concept of a state MV-algebra was first introduced by Flaminio and Montagna (An algebraic approach to states on MV-algebras. In: Novák V (ed) Fuzzy logic 2, proceedings of the 5th EUSFLAT conference, September 11–14, Ostrava, vol II, pp 201–206, 2007; Int J Approx Reason 50:138–152, 2009) as an MV-algebra with an internal state given as a unary operation. Di Nola and Dvurečenskij (Ann Pure Appl Logic 161:161–173, 2009a; Math Slovaca 59:517–534, 2009b) gave a stronger version of a state MV-algebra. In the present paper, we introduce the notion of a state BL-algebra, or more precisely, a BL-algebra with internal state. We present different types of state BL-algebras, like strong state BL-algebras and state-morphism BL-algebras, and we study some classes of state BL-algebras. In addition, we give a sample of important examples of state BL-algebras and present some open problems.

3.
Weighted timed automata (WTA), introduced in Alur et al. (Proceedings of HSCC'01, LNCS, vol. 2034, pp. 49–62, Springer, Berlin, 2001) and Behrmann et al. (Proceedings of HSCC'01, LNCS, vol. 2034, pp. 147–161, Springer, Berlin, 2001), are an extension of the timed automata of Alur and Dill (Theor. Comput. Sci. 126(2):183–235, 1994), a widely accepted formalism for the modelling and verification of real time systems. Weighted timed automata extend timed automata by allowing costs on the locations and edges. There has been a lot of interest (Bouyer et al., Inf. Process. Lett. 98(5):188–194, 2006; Bouyer et al., Log. Methods Comput. Sci. 4(2):9, 2008; Brihaye et al., Proceedings of FORMATS/FTRTFT'04, LNCS, vol. 3253, pp. 277–292, Springer, Berlin, 2004; Brihaye et al., Inf. Comput. 204(3):408–433, 2006) in studying the model checking problem of weighted timed automata. The properties of interest are written using the logic weighted CTL (WCTL), an extension of CTL with costs. It has been shown by Bouyer et al. (Log. Methods Comput. Sci. 4(2):9, 2008) that the problem of model checking WTAs with a single clock using WCTL with no external cost variables is decidable, while 3 clocks render the problem undecidable (Bouyer et al., Inf. Process. Lett. 98(5):188–194, 2006). The question of 2 clocks is open. In this paper, we introduce a subclass of weighted timed automata called weighted integer reset timed automata (WIRTA) and study the model checking problem. We give a clock reduction technique for WIRTA. Given a WIRTA $\mathcal{A}$ with n≥1 clocks, we show that a single clock WIRTA $\mathcal{A}'$ preserving the paths and costs of $\mathcal{A}$ can be obtained. This gives us the decidability of model checking WIRTA with n≥1 clocks and m≥1 costs using WCTL with no external cost variables. We then show that for a restricted version of WCTL with external cost variables, the model checking problem is undecidable for WIRTA with 3 stopwatch costs and 1 clock. Finally, we show that model checking WTA with 2 clocks and 1 stopwatch cost against WCTL with no external cost variables is undecidable, thereby answering a question that has remained long open.

4.
We present new efficient deterministic and randomized distributed algorithms for decomposing a graph with n nodes into a disjoint set of connected clusters with radius at most k−1 and having O(n^{1+1/k}) intercluster edges. We show how to implement our algorithms in the distributed $\mathcal{CONGEST}$ model of computation, i.e., with limited message size, which improves the time complexity of previous algorithms (Moran and Snir in Theor. Comput. Sci. 243(1–2):217–241, 2000; Awerbuch in J. ACM 32:804–823, 1985; Peleg in Distributed Computing: A Locality-Sensitive Approach, 2000) from O(n) to O(n^{1−1/k}). We apply our algorithms for constructing low stretch graph spanners and network synchronizers in sublinear deterministic time in the $\mathcal{CONGEST}$ model.
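The radius bound above can be illustrated with a simple sequential sketch of the classical ball-growing idea (the function name and details are illustrative assumptions, and this is not the distributed $\mathcal{CONGEST}$ algorithm of the paper): grow a BFS ball around an unclustered node as long as each new layer enlarges the ball by a factor of more than n^{1/k}, which forces the radius of every cluster to be at most k−1; the intercluster-edge bound requires a more careful argument that the sketch does not attempt.

```python
def low_diameter_decomposition(adj, k):
    """Sequential ball-growing sketch (illustrative only, not the paper's
    distributed algorithm). adj maps each node to a list of neighbours.
    A ball stops expanding once a new BFS layer would grow it by a factor
    of at most n**(1/k), so at most k-1 expansions happen and the radius
    of every cluster, measured from its centre, is at most k-1."""
    n = len(adj)
    threshold = n ** (1.0 / k)
    unclustered = set(adj)
    clusters = []
    while unclustered:
        center = next(iter(unclustered))
        ball, frontier = {center}, {center}
        while True:
            layer = {w for v in frontier for w in adj[v]
                     if w in unclustered and w not in ball}
            if layer and len(ball) + len(layer) > threshold * len(ball):
                ball |= layer
                frontier = layer
            else:
                break
        clusters.append(ball)
        unclustered -= ball
    return clusters

# toy usage: a 6-cycle with k = 2
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(low_diameter_decomposition(cycle, 2))
```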

5.
In 1994, S.G. Matthews introduced the notion of partial metric space in order to obtain a suitable mathematical tool for program verification (Ann. N.Y. Acad. Sci. 728:183–197, 1994). He gave an application of this new structure to parallel computing by means of a partial metric version of the celebrated Banach fixed point theorem (Theor. Comput. Sci. 151:195–205, 1995). Later on, M.P. Schellekens introduced the theory of complexity (quasi-metric) spaces as part of the development of a topological foundation for the asymptotic complexity analysis of programs and algorithms (Electron. Notes Theor. Comput. Sci. 1:211–232, 1995). The applicability of this theory to the asymptotic complexity analysis of Divide and Conquer algorithms was also illustrated by Schellekens. In particular, he gave a new proof, based on the aforementioned Banach fixed point theorem, of the well-known fact that the Mergesort algorithm has optimal asymptotic average running time. In this paper, motivated by the utility of partial metrics in Computer Science, we discuss whether the Matthews fixed point theorem is a suitable tool to analyze the asymptotic complexity of algorithms in the spirit of Schellekens. Specifically, we show that a slight modification of the well-known Baire partial metric on the set of all words over an alphabet constitutes an appropriate tool to carry out the asymptotic complexity analysis of algorithms via fixed point methods, without the need to assume the convergence condition inherent to the definition of the complexity space in the Schellekens framework. Finally, in order to illustrate and validate the developed theory, we apply our results to analyze the asymptotic complexity of the Quicksort, Mergesort and Largesort algorithms. Concretely, we retrieve through our new approach the well-known facts that the running times of Quicksort (worst-case behaviour), Mergesort and Largesort (average-case behaviour) are in the complexity classes $\mathcal{O}(n^{2})$, $\mathcal{O}(n\log_{2}(n))$ and $\mathcal{O}(2(n-1)-\log_{2}(n))$, respectively.
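For comparison with the classes quoted above, the Mergesort bound can also be obtained from the standard divide-and-conquer recurrence; the derivation below is this textbook argument only, not the partial-metric fixed-point proof developed in the paper.

```latex
% Textbook recurrence for the comparison count of Mergesort on n = 2^m
% elements; c > 0 is a constant. This is the classical calculation, not
% the fixed-point argument of the paper.
\[
  T(1) = 0, \qquad T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + c\,n .
\]
\[
  \text{Unfolding } m = \log_2 n \text{ times:}\quad
  T(n) = 2^{m}\,T(1) + c\,n\,m = c\,n\log_2 n
  \;\in\; \mathcal{O}(n\log_2 n).
\]
```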

6.
In Choi (Quantum Inf Process, 7:193–209, 2008), we introduced the notion of minor-embedding in adiabatic quantum optimization. A minor-embedding of a graph G in a quantum hardware graph U is a subgraph of U such that G can be obtained from it by contracting edges. In this paper, we describe the intertwined adiabatic quantum architecture design problem, which is to construct a hardware graph U that satisfies all known physical constraints and, at the same time, permits an efficient minor-embedding algorithm. We illustrate an optimal complete-graph-minor hardware graph. Given a family $\mathcal{F}$ of graphs, a (host) graph U is called $\mathcal{F}$-minor-universal if, for each graph G in $\mathcal{F}$, U contains a minor-embedding of G. The problem of designing a sparse $\mathcal{F}$-minor-universal hardware graph U, in which $\mathcal{F}$ consists of a family of sparse graphs (e.g., bounded-degree graphs), is open.

7.
An instance of the path hitting problem consists of two families of paths in a common undirected graph: a set of demand paths and a set ℋ of hitting paths, where each path in ℋ is associated with a non-negative cost. When a hitting path p∈ℋ and a demand path q share at least one mutual edge, we say that p hits q. The objective is to find a minimum-cost subset of ℋ whose members collectively hit all demand paths. In this paper we provide constant factor approximation algorithms for path hitting, confined to instances in which the underlying graph is a tree, a spider, or a star. Although such restricted settings may appear to be very simple, we demonstrate that they still capture some of the most basic covering problems in graphs. Our approach combines several novel ideas: we extend the algorithm of Garg, Vazirani and Yannakakis (Algorithmica, 18:3–20, 1997) for approximate multicuts and multicommodity flows in trees to prove new integrality properties; we present a reduction that involves multiple calls to this extended algorithm; and we introduce a polynomial-time solvable variant of the edge cover problem, which may be of independent interest. An extended abstract of this paper appeared in Proceedings of the 14th Annual European Symposium on Algorithms, 2006. This work is part of D. Segev's Ph.D. thesis prepared at Tel-Aviv University under the supervision of Prof. Refael Hassin.

8.
We consider the problem of packing rectangles into bins that are unit squares, where the goal is to minimize the number of bins used. All rectangles have to be packed non-overlapping and orthogonal, i.e., axis-parallel. We present an algorithm with an absolute worst-case ratio of 2 for the case where the rectangles can be rotated by 90 degrees. This is optimal provided $\mathcal{P}\neq\mathcal{NP}$. For the case where rotation is not allowed, we prove an approximation ratio of 3 for the algorithm Hybrid First Fit, which was introduced by Chung et al. (SIAM J. Algebr. Discrete Methods 3(1):66–76, 1982) and whose running time is slightly better than the running time of Zhang's previously known 3-approximation algorithm (Zhang in Oper. Res. Lett. 33(2):121–126, 2005).
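The abstract does not spell out Hybrid First Fit; as commonly described in the literature, it first packs rectangles onto shelves by first-fit decreasing height and then packs the shelves into unit bins by first fit. The sketch below follows that reading (treat the exact rules as an assumption, not the paper's statement of the algorithm).

```python
def hybrid_first_fit(rects):
    """Hybrid First Fit sketch under the assumptions stated above:
    rects is a list of (width, height) pairs with both coordinates <= 1.
    Phase 1 packs rectangles onto shelves by first-fit decreasing height;
    phase 2 packs the shelf heights into unit-height bins by first fit.
    Returns the number of bins used."""
    rects = sorted(rects, key=lambda wh: wh[1], reverse=True)
    shelves = []                            # each shelf: [height, remaining width]
    for w, h in rects:
        for shelf in shelves:
            if shelf[1] >= w:               # first shelf with enough horizontal room
                shelf[1] -= w
                break
        else:
            shelves.append([h, 1.0 - w])    # new shelf, as tall as this rectangle
    bins = []                               # remaining vertical space per bin
    for height, _ in shelves:
        for i, free in enumerate(bins):
            if free >= height:              # first bin with enough vertical room
                bins[i] -= height
                break
        else:
            bins.append(1.0 - height)
    return len(bins)

print(hybrid_first_fit([(0.6, 0.6), (0.5, 0.4), (0.4, 0.4), (0.3, 0.2)]))
```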

9.
In this paper, definitions of $\mathcal{K}$-automata, $\mathcal{K}$-regular languages, $\mathcal{K}$-regular expressions and $\mathcal{K}$-regular grammars based on lattice-ordered semirings are given. It is shown that a $\mathcal{K}$-NFA is equivalent to a $\mathcal{K}$-DFA under a certain finiteness condition, that the Pumping Lemma holds if $\mathcal{K}$ is finite, and that a $\mathcal{K}$-ε-NFA is equivalent to a $\mathcal{K}$-NFA. Further, it is verified that the concatenation of $\mathcal{K}$-regular languages remains a $\mathcal{K}$-regular language. Similar to the classical case and to automata theory based on lattice-ordered monoids, it is also found that $\mathcal{K}$-NFAs, $\mathcal{K}$-regular expressions and $\mathcal{K}$-regular grammars are equivalent to each other when $\mathcal{K}$ is a complete lattice.
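For the special case where $\mathcal{K}$ is the two-element Boolean semiring, the NFA-to-DFA direction of the first equivalence is the classical subset construction; the sketch below shows only this Boolean special case (the function name is illustrative), not the weighted construction over a general lattice-ordered semiring.

```python
from itertools import chain

def subset_construction(alphabet, delta, start, accepting):
    """Classical Boolean subset construction, i.e. the K = {0, 1} case.
    delta maps (state, symbol) to a set of successor states."""
    start_set = frozenset([start])
    dfa_states = {start_set}
    dfa_delta = {}
    worklist = [start_set]
    while worklist:
        current = worklist.pop()
        for a in alphabet:
            target = frozenset(chain.from_iterable(
                delta.get((q, a), ()) for q in current))
            dfa_delta[(current, a)] = target
            if target not in dfa_states:
                dfa_states.add(target)
                worklist.append(target)
    dfa_accepting = {s for s in dfa_states if s & accepting}
    return dfa_states, dfa_delta, start_set, dfa_accepting

# toy NFA over {a, b} accepting words that end in "ab"
nfa_delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
states, *_ = subset_construction({'a', 'b'}, nfa_delta, 0, {2})
print(len(states))   # 3 deterministic states for this example
```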

10.
We consider a variant of the path cover problem, namely, the k-fixed-endpoint path cover problem, or kPC for short, on interval graphs. Given a graph G and a subset $\mathcal{T}$ of k vertices of V(G), a k-fixed-endpoint path cover of G with respect to $\mathcal{T}$ is a set of vertex-disjoint paths ℘ that covers the vertices of G such that the k vertices of $\mathcal{T}$ are all endpoints of the paths in ℘. The kPC problem is to find a k-fixed-endpoint path cover of G of minimum cardinality; note that, if $\mathcal{T}$ is empty, the stated problem coincides with the classical path cover problem. In this paper, we study the 1-fixed-endpoint path cover problem on interval graphs, or 1PC for short, generalizing the 1HP problem which has been proved to be NP-complete even for small classes of graphs. Motivated by a work of Damaschke (Discrete Math. 112:49–64, 1993), where he left both the 1HP and 2HP problems open for the class of interval graphs, we show that the 1PC problem can be solved in polynomial time on the class of interval graphs. We propose a polynomial-time algorithm for the problem, which also enables us to solve the 1HP problem on interval graphs within the same time and space complexity.

11.
The theory of average case complexity studies the expected complexity of computational tasks under various specific distributions on the instances, rather than their worst case complexity. Thus, this theory deals with distributional problems, defined as pairs each consisting of a decision problem and a probability distribution over the instances. While for applications utilizing hardness, such as cryptography, one seeks an efficient algorithm that outputs random instances of some problem that are hard for any algorithm with high probability, the resulting hard distributions in these cases are typically highly artificial, and do not establish the hardness of the problem under “interesting” or “natural” distributions. This paper studies the possibility of proving generic hardness results (i.e., for a wide class of $\mathcal{NP}$-complete problems) under “natural” distributions. Since it is not clear how to define a class of “natural” distributions for general $\mathcal{NP}$-complete problems, one possibility is to impose some strong computational constraint on the distributions, with the intention that this constraint force the distributions to “look natural”. Levin, in his seminal paper on average case complexity from 1984, defined such a class of distributions, which he called P-computable distributions. He then showed that the $\mathcal{NP}$-complete Tiling problem, under some P-computable distribution, is hard for the complexity class of distributional $\mathcal{NP}$ problems (i.e., $\mathcal{NP}$ with P-computable distributions). However, since then very few $\mathcal{NP}$-complete problems (coupled with P-computable distributions), and in particular “natural” problems, have been shown to be hard in this sense. In this paper we show that all natural $\mathcal{NP}$-complete problems can be coupled with P-computable distributions such that the resulting distributional problem is hard for distributional $\mathcal{NP}$.

12.
We continue the work initiated in Herzig and Lorini (J Logic Lang Inform, in press), whose aim is to provide a minimalistic logical framework combining the expressiveness of dynamic logic, in which actions are first-class citizens in the object language, with the expressiveness of logics of agency such as STIT and logics of group capabilities such as CL and ATL. We present a logic called $\mathcal{DDLA}$ (Deterministic Dynamic Logic of Agency) which supports reasoning about actions and joint actions of agents and coalitions, and agentive and coalitional capabilities. In $\mathcal{DDLA}$ it is supposed that, once all agents have selected a joint action, the effect of this joint action is deterministic. In order to assess $\mathcal{DDLA}$ we prove that it embeds Coalition Logic. We then extend $\mathcal{DDLA}$ with modal operators for agents' preferences, and show that the resulting logic is sufficiently expressive to capture the game-theoretic concepts of best response and Nash equilibrium.

13.
In this paper, we propose a new $\mathcal{H}_{\infty}$ weight learning algorithm (HWLA) for nonlinear system identification via Takagi–Sugeno (T–S) fuzzy Hopfield neural networks with time-delay. Based on Lyapunov stability theory, for the first time, the HWLA for nonlinear system identification is presented to reduce the effect of disturbance to an $\mathcal{H}_{\infty}$ norm constraint. The HWLA can be obtained by solving a convex optimization problem which is represented in terms of a linear matrix inequality (LMI). An illustrative example is given to demonstrate the effectiveness of the proposed identification scheme.
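The abstract does not reproduce the paper's LMI, so the sketch below only illustrates the general mechanism of posing and solving an LMI as a semidefinite program, here a standard Lyapunov stability condition coded with cvxpy (an assumed off-the-shelf toolchain; the matrix A and all names are toy assumptions, not the paper's $\mathcal{H}_{\infty}$ weight-learning conditions).

```python
import numpy as np
import cvxpy as cp

# Generic Lyapunov-stability LMI  A^T P + P A < 0,  P > 0, solved as a
# small semidefinite program. This only illustrates LMI machinery in
# general; it is NOT the H-infinity weight-learning LMI of the paper.
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])          # assumed toy system matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
M = A.T @ P + P @ A                   # symmetric because P is symmetric
constraints = [P >> eps * np.eye(n),
               M << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status)
print(np.round(P.value, 3))
```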

14.
The problem of maximizing the weighted number of just-in-time jobs in a two-machine flow shop scheduling system is known to be $\mathcal{NP}$-hard (Choi and Yoon in J. Sched. 10:237–243, 2007). However, the question of whether this problem is strongly or ordinarily $\mathcal{NP}$-hard remains open. We provide a pseudo-polynomial time algorithm to solve this problem, proving that it is $\mathcal{NP}$-hard in the ordinary sense. Moreover, we show how the pseudo-polynomial algorithm can be converted to a fully polynomial time approximation scheme (FPTAS). In addition, we prove that the same problem is strongly $\mathcal{NP}$-hard for both a two-machine job shop scheduling system and a two-machine open shop scheduling system.

15.
This article deals with the prototype of the strong law of large numbers and with the individual ergodic theorem on probability MV-algebras with product. Dedicated to Prof. Ján Jakubík on the occasion of his 80th birthday. This research is supported by Grant 1/9056/02. 1991 Mathematics Subject Classification: 28E10.

16.
We describe an O(n^3/log n)-time algorithm for the all-pairs-shortest-paths problem for a real-weighted directed graph with n vertices. This slightly improves a series of previous, slightly subcubic algorithms by Fredman (SIAM J. Comput. 5:49–60, 1976), Takaoka (Inform. Process. Lett. 43:195–199, 1992), Dobosiewicz (Int. J. Comput. Math. 32:49–60, 1990), Han (Inform. Process. Lett. 91:245–250, 2004), Takaoka (Proc. 10th Int. Conf. Comput. Comb., Lect. Notes Comput. Sci., vol. 3106, pp. 278–289, Springer, 2004), and Zwick (Proc. 15th Int. Sympos. Algorithms and Computation, Lect. Notes Comput. Sci., vol. 3341, pp. 921–932, Springer, 2004). The new algorithm is surprisingly simple and different from previous ones. A preliminary version of this paper appeared in Proc. 9th Workshop Algorithms Data Struct. (WADS), Lect. Notes Comput. Sci., vol. 3608, pp. 318–324, Springer, 2005.
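For context, the cubic baseline that this line of work improves on is the classical Floyd–Warshall dynamic program; the sketch below is that O(n^3) baseline only, not the O(n^3/log n) algorithm of the paper.

```python
def floyd_warshall(weights):
    """Classical O(n^3) all-pairs shortest paths on a real-weighted digraph
    without negative cycles; weights[i][j] is the arc weight, or
    float('inf') when there is no arc. Returns the distance matrix."""
    n = len(weights)
    dist = [row[:] for row in weights]
    for i in range(n):
        dist[i][i] = min(dist[i][i], 0.0)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float('inf')
print(floyd_warshall([[0.0, 2.5, INF],
                      [INF, 0.0, -1.0],
                      [4.0, INF, 0.0]]))
```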

17.
18.
We propose $\mathcal{B}$-Tropos as a modeling framework to support agent-oriented systems engineering, from high-level requirements elicitation down to execution-level tasks. In particular, we show how $\mathcal{B}$-Tropos extends the Tropos methodology by means of declarative business constraints, inspired by the ConDec graphical language. We demonstrate the functioning of $\mathcal{B}$-Tropos using a running example inspired by a real-world industrial scenario, and we describe how $\mathcal{B}$-Tropos models can be automatically formalized in computational logic, discussing formal properties of the resulting framework and its verification capabilities.

19.
We investigate interpretations of formulas ψ of a first order fuzzy logic in models which are based on objects of a category SetR(Ω), which consists of Ω-sets, i.e. sets with similarity relations with values in a complete MV-algebra Ω, and whose morphisms are special fuzzy relations between Ω-sets. The interpretations are then morphisms in the category SetR(Ω) from some Ω-set to a distinguished object of this category. We define homomorphisms between models in the category SetR(Ω) and we prove that a (special) homomorphism of models in SetR(Ω) induces a relation between the interpretations of a formula ψ in the corresponding models. Supported by MSM6198898701, grant 201/07/0191 of GAČR and grant 1M0572.

20.
Let G be an undirected graph and $\mathcal{T}=\{T_{1},\ldots,T_{k}\}$ be a collection of disjoint subsets of nodes. Nodes in T_1 ∪ ⋯ ∪ T_k are called terminals; other nodes are called inner. By a $\mathcal{T}$-path we mean a path P such that P connects terminals from distinct sets in $\mathcal{T}$ and all internal nodes of P are inner. We study the problem of finding a maximum cardinality collection ℘ of $\mathcal{T}$-paths such that at most two paths in ℘ pass through any node. Our algorithm is purely combinatorial and has time complexity O(mn^2), where n and m denote the numbers of nodes and edges in G, respectively.
