Similar documents
 Found 20 similar documents (search time: 343 ms)
1.
Alternating-time temporal logic (ATL) is a logic for reasoning about open computational systems and multi-agent systems. It is well known that ATL model checking is linear in the size of the model. We point out, however, that the size of an ATL model is usually exponential in the number of agents. When the size of models is defined in terms of states and agents rather than transitions, it turns out that the problem is (1) Δ^P_3-complete for concurrent game structures, and (2) Δ^P_2-complete for alternating transition systems. Moreover, for “Positive ATL”, which allows negation only at the level of propositions, model checking is (1) Σ^P_2-complete for concurrent game structures, and (2) NP-complete for alternating transition systems. We show a nondeterministic polynomial reduction from checking arbitrary alternating transition systems to checking turn-based transition systems. We also discuss the determinism assumption in alternating transition systems, and show that it can easily be removed. In the second part of the paper, we study the model checking complexity for formulae of ATL with imperfect information (ATL_ir). We show that the problem is Δ^P_2-complete in the number of transitions and the length of the formula (thereby closing a gap in previous work of Schobbens in Electron. Notes Theor. Comput. Sci. 85(2), 2004). Then, we take a closer look and use the same fine-structure complexity measure as we did for ATL with perfect information. We get the surprising result that checking formulae of ATL_ir is also Δ^P_3-complete in the general case, and Σ^P_2-complete for “Positive ATL_ir”. Thus, model checking agents’ abilities for both perfect and imperfect information systems belongs to the same complexity class when a finer-grained analysis is used.
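To see where the exponential dependence on the number of agents comes from, consider even the one-step ability ⟨⟨A⟩⟩X goal: it quantifies existentially over the coalition's joint actions and universally over the opponents', and the number of action profiles is the product of all agents' action sets. The following brute-force sketch on a toy concurrent game structure is purely illustrative (the model and all names such as `ACTIONS`, `DELTA`, `can_enforce_next` are invented, not from the paper):

```python
from itertools import product

# Toy concurrent game structure: 2 agents, their actions, and a transition
# function mapping (state, joint action) -> next state.
ACTIONS = {0: ["a", "b"], 1: ["x", "y"]}          # agent -> available actions
DELTA = {                                          # (state, (act0, act1)) -> state
    ("q0", ("a", "x")): "q1",
    ("q0", ("a", "y")): "q1",
    ("q0", ("b", "x")): "q2",
    ("q0", ("b", "y")): "q1",
}

def can_enforce_next(state, coalition, goal_states):
    """Can `coalition` force the next state into `goal_states`?
    Exists a coalition move such that for all moves of the remaining agents
    the successor lies in goal_states (brute force over action profiles)."""
    others = [i for i in ACTIONS if i not in coalition]
    for coalition_move in product(*(ACTIONS[i] for i in coalition)):
        ok = True
        for other_move in product(*(ACTIONS[i] for i in others)):
            joint = dict(zip(coalition, coalition_move))
            joint.update(zip(others, other_move))
            profile = tuple(joint[i] for i in sorted(ACTIONS))
            if DELTA[(state, profile)] not in goal_states:
                ok = False
                break
        if ok:
            return True
    return False

print(can_enforce_next("q0", [0], {"q1"}))  # agent 0 alone can play "a" -> True
```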

2.
M. Caliari 《Computing》2007,80(2):189-201
In this paper, we propose an approach to the computation of more accurate divided differences for the interpolation in the Newton form of the matrix exponential propagator φ(hA)v, where φ(z) = (e^z − 1)/z. In this way, it is possible to approximate φ(hA)v with a larger time step size h than with traditionally computed divided differences, as confirmed by numerical examples. The technique can also be extended to “higher-order” φ_k functions, k ≥ 0.
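As a side illustration of the function involved (not of the paper's divided-difference scheme), even evaluating φ(z) = (e^z − 1)/z pointwise is delicate near z = 0 because of cancellation; a common remedy is a truncated Taylor series. The threshold and number of terms below are arbitrary choices for the sketch:

```python
import math

def phi(z, tol=1e-8):
    """phi(z) = (exp(z) - 1) / z, with a Taylor fallback near z = 0
    to avoid the cancellation that makes the naive formula inaccurate."""
    if abs(z) > tol:
        return math.expm1(z) / z          # expm1 already helps for moderate z
    # Taylor series: phi(z) = 1 + z/2! + z^2/3! + ...
    total, term = 1.0, 1.0
    for k in range(2, 10):
        term *= z / k
        total += term
    return total

print(phi(1.0))   # (e - 1) / 1 ≈ 1.718281828...
print(phi(0.0))   # limit value 1.0
```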

3.
A note on MULTIFIT scheduling for uniform machines
R. E. Burkard  Y. He 《Computing》1998,61(3):277-283
In this note, we derive the tight worst-case bound √6/2 + (1/2)^k for scheduling with the MULTIFIT heuristic on two parallel uniform machines with k calls of FFD within MULTIFIT. When MULTIFIT is combined with LPT as an incumbent algorithm, the worst-case bound decreases to (√2+1)/2 + (1/2)^k. Partially supported by SFB F003 “Optimierung und Kontrolle”, Projektbereich Diskrete Optimierung, and by the National Natural Science Foundation of China, Grant 19701028.
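For readers unfamiliar with MULTIFIT, the sketch below shows the generic scheme the bound refers to: binary search on a common deadline C, with FFD packing jobs against speed-scaled capacities C·s_i, repeated for k iterations. It is an illustrative reconstruction, not the paper's algorithm or analysis; the bounds, tolerance, and default k are arbitrary choices:

```python
def ffd_fits(jobs, capacities):
    """First Fit Decreasing: try to pack all jobs into machines whose
    capacity is deadline * speed. Returns True on success."""
    loads = [0.0] * len(capacities)
    for p in sorted(jobs, reverse=True):
        for i, cap in enumerate(capacities):
            if loads[i] + p <= cap + 1e-12:
                loads[i] += p
                break
        else:
            return False
    return True

def multifit(jobs, speeds, k=7):
    """Binary search on the makespan C, calling FFD k times."""
    total, smax = sum(jobs), max(speeds)
    lo = max(total / sum(speeds), max(jobs) / smax)   # simple lower bound
    hi = total / smax                                  # everything on the fastest machine
    for _ in range(k):
        mid = (lo + hi) / 2
        if ffd_fits(jobs, [mid * s for s in speeds]):
            hi = mid
        else:
            lo = mid
    return hi

print(multifit([7, 5, 4, 3, 2], speeds=[1.0, 2.0]))
```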

4.
5.
We introduce the parameter cutwidth for the Cutting Planes (CP) system of Gomory and Chvátal. We provide linear lower bounds on cutwidth for two simple polytopes. Considering CP as a propositional refutation system, one can see that the cutwidth of a CNF contradiction F is always bounded above by the Resolution width of F. We provide an example proving that the converse fails: there is an F which has constant cutwidth but Resolution width Ω(n). Following a standard method for converting an FO sentence ψ without finite models into a sequence of CNFs, F_{ψ,n}, we provide a classification theorem for CP based on the sum of cutwidth plus rank. Specifically, the cutwidth plus rank of F_{ψ,n} is bounded by a constant c (depending on ψ only) iff ψ has no (infinite) models. This result may be seen as a relative of various gap theorems extant in the literature.

6.
We introduce nondeterministic graph searching with a controlled amount of nondeterminism and show how this new tool can be used in algorithm design and combinatorial analysis, applying to both pathwidth and treewidth. We prove the equivalence between this game-theoretic approach and graph decompositions called q-branched tree decompositions, which can be interpreted as a parameterized version of tree decompositions. Path decompositions and (standard) tree decompositions are the two extreme cases of q-branched tree decompositions. The equivalence between nondeterministic graph searching and q-branched tree decompositions enables us to design an exact (exponential-time) algorithm computing the q-branched treewidth for all q ≥ 0, which is thus valid for both treewidth and pathwidth. This algorithm performs as fast as the best known exact algorithm for pathwidth. Conversely, this equivalence also enables us to derive a lower bound on the amount of nondeterminism required to search a graph with the minimum number of searchers. Additional support of F.V. Fomin by the Research Council of Norway. Additional support of P. Fraigniaud from the INRIA project “Grand Large” and from the project PairAPair of the ACI “Masse de Données”. Additional support of N. Nisse from the project Fragile of the ACI “Sécurité & Informatique”.

7.
The Convex Recoloring (CR) problem measures how far a tree of characters differs from exhibiting a so-called “perfect phylogeny”. For an input consisting of a vertex-colored tree T, the problem is to determine whether recoloring at most k vertices can achieve a convex coloring, meaning a coloring in which each color class induces a subtree. The problem was introduced by Moran and Snir (J. Comput. Syst. Sci. 73:1078–1089, 2007; J. Comput. Syst. Sci. 74:850–869, 2008), who showed that CR is NP-hard and described a search-tree based FPT algorithm with a running time of O(k(k/log k)^k n^4). The Moran and Snir result did not provide any nontrivial kernelization. In this paper, we show that CR has a kernel of size O(k^2).
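As a small illustration of the target property (not of the kernelization itself), the sketch below checks whether a given coloring of a tree is convex, i.e. whether each color class induces a connected subtree. The adjacency-list encoding and function name are invented for the example:

```python
from collections import defaultdict, deque

def is_convex_coloring(tree, color):
    """tree: adjacency list {v: [neighbors]}, color: {v: color}.
    A coloring is convex iff each color class induces a connected subtree."""
    by_color = defaultdict(set)
    for v, c in color.items():
        by_color[c].add(v)
    for c, vertices in by_color.items():
        start = next(iter(vertices))
        seen, queue = {start}, deque([start])
        while queue:                       # BFS restricted to this color class
            u = queue.popleft()
            for w in tree[u]:
                if w in vertices and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if seen != vertices:
            return False
    return True

# Path a-b-c: coloring a and c red with b blue is not convex (red class is split).
tree = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(is_convex_coloring(tree, {"a": "red", "b": "blue", "c": "red"}))  # False
print(is_convex_coloring(tree, {"a": "red", "b": "red", "c": "blue"}))  # True
```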

8.
9.
10.
We describe a mechanism called SpaceGlue for adaptively locating services based on the preferences and locations of users in a distributed and dynamic network environment. In SpaceGlue, services are bound to physical locations, and a mobile user accesses local services depending on the space he/she is currently visiting. SpaceGlue dynamically identifies the relationships between different spaces and links or “glues” spaces together depending on how previous users moved among them and used those services. Once spaces have been glued, users receive recommendations of remote services (i.e., services provided in a remote space) reflecting the preferences of the crowd of users visiting the area. The strengths of bonds are implicitly evaluated by users and adjusted by the system on the basis of their evaluation. SpaceGlue is an alternative to existing schemes such as data mining and recommendation systems, and it is suitable for distributed and dynamic environments. The bonding algorithm for SpaceGlue incrementally computes the relationships or “bonds” between different spaces in a distributed way. We implemented SpaceGlue using the distributed network application platform Ja-Net and evaluated it by simulation to show that it adaptively locates services reflecting trends in user preferences. Using “Mutual Information (MI)” and “F-measure” as measures of the strength of such trends and the accuracy of service recommendation, the simulation results showed that (1) in SpaceGlue, the F-measure increases with the level of MI (i.e., the more significant the trends, the greater the F-measure values), (2) SpaceGlue achieves better precision and F-measure than the “Flooding” case (i.e., all service information is broadcast to everybody) and the “No glue” case by narrowing down the appropriate partners to send recommendations to based on bonds, and (3) SpaceGlue achieves a better F-measure with a large number of spaces and users than the other cases (i.e., “flooding” and “no glue”). Tomoko Itao is an alumna of NTT Network Innovation Laboratories.
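For reference, the F-measure used in the evaluation combines the precision and recall of the recommended services. A minimal sketch of the generic definition (not SpaceGlue's implementation; the sets and names are invented):

```python
def f_measure(recommended, relevant, beta=1.0):
    """Precision/recall/F over sets of recommended and actually-used services."""
    recommended, relevant = set(recommended), set(relevant)
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print(f_measure({"cafe", "museum", "shop"}, {"cafe", "museum"}))  # 0.8
```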

11.
The “Priority Algorithm” is a model of computation introduced by Borodin, Nielsen and Rackoff ((Incremental) Priority algorithms, Algorithmica 37(4):295–326, 2003) which formulates a wide class of greedy algorithms. For an arbitrary set 𝕊 of jobs, we are interested in whether or not there exists a priority algorithm that gains optimal profit on every subset of 𝕊. In the case where the jobs are all intervals, we characterize such sets 𝕊 and give an efficient algorithm (when 𝕊 is finite) for determining this. We show that in general, however, the problem is NP-hard.

12.
Modern multicore processors, such as the Cell Broadband Engine, achieve high performance by equipping accelerator cores with small “scratch-pad” memories. The price for increased performance is higher programming complexity – the programmer must manually orchestrate data movement using direct memory access (DMA) operations. Programming using asynchronous DMA operations is error-prone, and DMA races can lead to nondeterministic bugs which are hard to reproduce and fix. We present a method for DMA race analysis in C programs. Our method works by automatically instrumenting a program with assertions modeling the semantics of a memory flow controller. The instrumented program can then be analyzed using state-of-the-art software model checkers. We show that bounded model checking is effective for detecting DMA races in buggy programs. To enable automatic verification of the correctness of instrumented programs, we present a new formulation of k-induction geared towards software, as a proof rule operating on loops. Our techniques are implemented as a tool, Scratch, which we apply to a large set of programs supplied with the IBM Cell SDK, in which we discover a previously unknown bug. Our experimental results indicate that our k-induction method performs extremely well on this problem class. To our knowledge, this marks both the first application of k-induction to software verification, and the first example of software model checking in the context of heterogeneous multicore processors.
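The instrumentation idea can be pictured abstractly: track the byte ranges of in-flight transfers and assert that no new transfer or local access overlaps them before the corresponding wait. The Python model below only sketches that idea (the actual tool instruments C programs against the memory flow controller semantics); the class and all names are invented for illustration:

```python
class DmaModel:
    """Tiny abstract model of assertion-based DMA race checking: each
    in-flight DMA covers a byte range of local memory; touching or
    re-transferring an overlapping range before a wait is flagged."""
    def __init__(self):
        self.pending = {}                       # tag -> list of (start, size)

    @staticmethod
    def _overlaps(a, b):
        (s1, n1), (s2, n2) = a, b
        return s1 < s2 + n2 and s2 < s1 + n1

    def dma_get(self, local_addr, size, tag):
        region = (local_addr, size)
        for regions in self.pending.values():
            assert not any(self._overlaps(region, r) for r in regions), \
                "DMA race: overlapping in-flight transfer"
        self.pending.setdefault(tag, []).append(region)

    def local_access(self, addr, size):
        region = (addr, size)
        for regions in self.pending.values():
            assert not any(self._overlaps(region, r) for r in regions), \
                "DMA race: buffer accessed before transfer completed"

    def dma_wait(self, tag):
        self.pending.pop(tag, None)             # transfer completed

m = DmaModel()
m.dma_get(0x100, 64, tag=1)
m.dma_wait(1)
m.local_access(0x100, 64)                       # fine: the transfer was awaited
```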

13.
Partial Order Reduction (POR) techniques improve the basic model checking algorithm by reducing the number of states and transitions explored when verifying a property of the model. In the “ample set” POR framework for the verification of an LTL\X formula φ (LTL without the next operator), one associates to each state s a subset T_s of the set of all transitions enabled at s. The approach requires that whenever T_s is a proper subset, the transitions in T_s must be invisible, i.e., their execution can never change the truth values of the atomic propositions occurring in φ. In this paper, we show that the invisibility restriction can be relaxed: for propositions that occur only negatively in φ, it suffices that the transitions in T_s never change the truth value from true to false, and for those that occur only positively, never from false to true. This opens up opportunities for reduction, in many commonly occurring scenarios, that would not be allowed by the stricter invisibility criterion.
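A toy check of the relaxed condition, under the simplifying assumption that a transition is described by the proposition valuations at its two endpoints and that `polarity` records whether each proposition occurs only positively, only negatively, or with both signs in φ. This is a sketch of the stated idea, not the paper's formal ample-set machinery:

```python
def safe_for_reduction(before, after, polarity):
    """before/after: dicts proposition -> truth value at the transition's
    endpoints. polarity: proposition -> "pos", "neg", or "mixed".
    Returns True if the transition meets the relaxed visibility condition."""
    for p, pol in polarity.items():
        b, a = before[p], after[p]
        if b == a:
            continue
        if pol == "mixed":
            return False                 # any change is visible
        if pol == "neg" and b and not a:
            return False                 # true -> false forbidden
        if pol == "pos" and not b and a:
            return False                 # false -> true forbidden
    return True

# p occurs only positively in the formula, so a true -> false change is harmless.
print(safe_for_reduction({"p": True}, {"p": False}, {"p": "pos"}))   # True
print(safe_for_reduction({"p": False}, {"p": True}, {"p": "pos"}))   # False
```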

14.
Examining several sources of data on smartphone use, this paper presents evidence for the popular conjecture that mobile devices are “habit-forming.” The form of habits we identified is called a checking habit: brief, repetitive inspection of dynamic content quickly accessible on the device. We describe findings on kinds and frequencies of checking behaviors in three studies. We found that checking habits occasionally spur users to do other things with the device and may increase usage overall. Data from a controlled field experiment show that checking behaviors emerge and are reinforced by informational “rewards” that are very quickly accessible. Qualitative data suggest that although repetitive habitual use is frequent, it is experienced more as an annoyance than an addiction. We conclude that supporting habit-formation is an opportunity for making smartphones more “personal” and “pervasive.”

15.
In this article we develop quantum algorithms for learning and testing juntas, i.e. Boolean functions which depend only on an unknown set of k out of n input variables. Our aim is to develop efficient algorithms: (1) whose sample complexity has no dependence on n, the dimension of the domain over which the Boolean functions are defined; (2) with no access to any classical or quantum membership (“black-box”) queries. Instead, our algorithms use only classical examples generated uniformly at random and fixed quantum superpositions of such classical examples; (3) which require only a few quantum examples but possibly many classical random examples (which are considered quite “cheap” relative to quantum examples). Our quantum algorithms are based on a subroutine FS which enables sampling according to the Fourier spectrum of f; the FS subroutine was used in earlier work of Bshouty and Jackson on quantum learning. Our results are as follows: (1) We give an algorithm for testing k-juntas to accuracy ε that uses O(k/ε) quantum examples. This improves on the number of examples used by the best known classical algorithm. (2) We establish a lower bound on the number of queries required by any FS-based k-junta testing algorithm. (3) We give an algorithm for learning k-juntas to accuracy ε that uses O(ε^{-1} k log k) quantum examples and O(2^k log(1/ε)) random examples. We show that this learning algorithm is close to optimal by giving a related lower bound. Supported in part by NSF award CCF-0347282, by NSF award CCF-0523664, and by a Sloan Foundation Fellowship.
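The FS subroutine samples subsets according to the (squared) Fourier spectrum of f, and for a junta the nonzero Fourier weight sits only on its relevant variables. The classical brute-force sketch below is purely illustrative (the quantum algorithm never enumerates all inputs like this); the function and example are invented:

```python
from itertools import product

def fourier_coefficients(f, n):
    """Fourier coefficients of f: {0,1}^n -> {-1,+1} over the parity basis
    (classical brute force, for illustration only)."""
    coeffs = {}
    inputs = list(product([0, 1], repeat=n))
    for subset in product([0, 1], repeat=n):          # characteristic vector of S
        total = 0
        for x in inputs:
            parity = sum(xi * si for xi, si in zip(x, subset)) % 2
            total += f(x) * (-1) ** parity
        coeffs[subset] = total / len(inputs)
    return coeffs

# f depends only on x0 and x2: a 2-junta inside n = 4 variables.
f = lambda x: -1 if x[0] ^ x[2] else 1
relevant = set()
for subset, c in fourier_coefficients(f, 4).items():
    if abs(c) > 1e-9:
        relevant.update(i for i, bit in enumerate(subset) if bit)
print(relevant)   # {0, 2}: nonzero Fourier weight lies only on the junta variables
```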

16.
Temporal logics are commonly used for reasoning about concurrent systems. Model checkers and other finite-state verification techniques allow for automated checking of a system model's compliance with given temporal properties. These properties are typically specified as linear-time formulae in temporal logics. Unfortunately, the level of sophistication inherent in these formalisms too often represents an impediment to moving these techniques from “research theory” to “industry practice”. The objective of this work is to facilitate the nontrivial and error-prone task of specifying temporal properties correctly and without expertise in temporal logic. In order to identify the basis of a simple but expressive formalism for specifying temporal properties, we critically analyze visual notations commonly used in practice. We then present a scenario-based visual language called Property Sequence Chart (PSC) that, in our opinion, addresses the shortcomings of these notations by extending a subset of UML 2.0 Interaction Sequence Diagrams. We also provide PSC with both a denotational and an operational semantics. The operational semantics is obtained via translation into Büchi automata, and the translation algorithm is implemented as a plugin of our Charmy tool. The expressiveness of PSC has been validated with respect to well-known property specification patterns. Preliminary results appeared in (Autili et al. 2006a).

17.
We show that the space of polygonizations of a fixed planar point set S of n points is connected by O(n^2) “moves” between simple polygons. Each move is composed of a sequence of atomic moves called “stretches” and “twangs”, which walk between weakly simple “polygonal wraps” of S. These moves show promise to serve as a basis for generating random polygons.

18.
The typechecking problem for transformations of relational data into tree data is the following: given a relational-to-XML transformation P and an XML type d, decide whether for every database instance the result of applying P to that instance satisfies d. TreeQL programs with projection-free conjunctive queries (see Alon et al. in ACM Trans. Comput. Log. 4(3):315–354, 2003) are considered as transformations, and DTDs with arbitrary regular expressions as XML types. A non-elementary upper bound for the typechecking problem was already given by Alon et al. (ACM Trans. Comput. Log. 4(3):315–354, 2003) (although in a more general setting, where equality and negation in projection-free conjunctive queries and additional universal integrity constraints are allowed). In this paper we show that the typechecking problem is coNEXPTIME-complete. As an intermediate step we consider the following problem, which can be formulated independently of XML notions. Given a set of triples of the form (φ, k, j), where φ is a projection-free conjunctive query and k, j are natural numbers, decide whether there exists a database such that, for each triple (φ, k, j) in the set, there exists a natural number α such that exactly k + j·α tuples of the database satisfy the query φ. Our main technical contribution consists of a NEXPTIME algorithm for the last problem. Partially supported by Polish Ministry of Science and Higher Education research project N206 022 31/3660, 2006/2009. This paper is an extended version of [20], where the coNEXPTIME upper bound was shown.

19.
The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. Numerous analyses have been developed for numerous variants of that calculus. We take up the challenge of developing, in a type-based setting, a relatively precise “topology” analysis for the original version of the calculus. To compensate for the lack of “co-capabilities” (an otherwise increasingly popular extension), the analysis is flow-sensitive, with the actions of processes being summarized by “behaviors”. A subject reduction property guarantees that for a well-typed process, the location of any ambient is included in what is predicted by its type; additionally it ensures that communicating subprocesses agree on their “topic of conversation”. Based on techniques borrowed from finite automata theory, type checking of type-annotated processes is decidable (though potentially exponential).

20.
The Feedback Vertex Set problem on unweighted, undirected graphs is considered. Improving upon a result by Burrage et al. (Proceedings 2nd International Workshop on Parameterized and Exact Computation, pp. 192–202, 2006), we show that this problem has a kernel with O(k^3) vertices, i.e., there is a polynomial time algorithm that, given a graph G and an integer k, finds a graph G′ with O(k^3) vertices and an integer k′ ≤ k, such that G has a feedback vertex set of size at most k if and only if G′ has a feedback vertex set of size at most k′. Moreover, the algorithm can be made constructive: if the reduced instance G′ has a feedback vertex set of size k′, then we can easily transform a minimum size feedback vertex set of G′ into a minimum size feedback vertex set of G. This kernelization algorithm can be used as the first step of an FPT algorithm for Feedback Vertex Set, but also as a preprocessing heuristic for Feedback Vertex Set.
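For a flavor of what such preprocessing looks like, the sketch below applies standard degree-based reduction rules for Feedback Vertex Set on multigraphs (a self-loop forces its vertex into the solution, degree ≤ 1 vertices are irrelevant, degree 2 vertices can be bypassed). These are common textbook rules, not the paper's O(k^3) kernelization; the encoding and names are invented for the example:

```python
def reduce_fvs_instance(adj, k):
    """Apply simple Feedback Vertex Set reduction rules on a multigraph given
    as {v: list of neighbours} (parallel edges as repeated entries).
    Returns (reduced adjacency, vertices forced into the FVS, remaining k)."""
    adj = {v: list(ns) for v, ns in adj.items()}
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            ns = adj.get(v)
            if ns is None:
                continue
            if ns.count(v) > 0:                     # self-loop: v must be in the FVS
                forced.add(v)
                k -= 1
                _delete(adj, v)
                changed = True
            elif len(ns) <= 1:                      # degree <= 1: never on a cycle
                _delete(adj, v)
                changed = True
            elif len(ns) == 2:                      # degree 2: bypass v
                a, b = ns
                _delete(adj, v)
                adj[a].append(b)
                adj[b].append(a)
                changed = True
    return adj, forced, k

def _delete(adj, v):
    """Remove vertex v and all edges incident to it."""
    for u in adj.pop(v):
        if u != v and u in adj:
            adj[u] = [w for w in adj[u] if w != v]

# Triangle with a pendant vertex: the rules shrink the instance and force one
# triangle vertex into the feedback vertex set.
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(reduce_fvs_instance(g, k=1))
```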
