Similar Documents
 20 similar documents retrieved (search time: 15 ms)
1.
2.
3.
Model checking is a useful method to verify automatically the correctness of a system with respect to a desired behavior, by checking whether a mathematical model of the system satisfies a formal specification of this behavior. Many systems of interest are open, in the sense that their behavior depends on the interaction with their environment. The model checking problem for finite-state open systems (called module checking) has been intensively studied in the literature. In this paper, we focus on open pushdown systems and we study the related model-checking problem (pushdown module checking, for short) with respect to properties expressed by CTL and CTL* formulas. We show that pushdown module checking against CTL (resp., CTL*) is 2Exptime-complete (resp., 3Exptime-complete). Moreover, we prove that for a fixed CTL or CTL* formula, the problem is Exptime-complete.

4.
Research on service composition demands efficient algorithms that not only retrieve correct service compositions automatically from thousands of services but also satisfy the quality requirements of different service users. However, most approaches treat these two aspects as separate problems: automatic service composition and service selection. Although recent research recognizes the limitations of this separate view and some specific methods have been proposed, they still suffer from serious limitations in scalability and accuracy when addressing both requirements simultaneously. To cope with these limitations and efficiently solve the combined problem, known as QoS-aware or QoS-driven automatic service composition, we propose a new graph search problem, single-source optimal directed acyclic graphs (DAGs), for the first time. This single-source optimal DAGs (SSOD) problem is similar to, but more general than, the classical single-source shortest paths (SSSP) problem. In this paper, a new graph model of the SSOD problem is proposed and a Sim-Dijkstra algorithm is presented to address it with time complexity O(n log n + m), where n and m are the number of nodes and edges in the graph, respectively, together with proofs of its soundness. The algorithm is applied directly to the QoS-aware automatic service composition problem, and a service composition tool named QSynth is implemented. Evaluations show that the Sim-Dijkstra algorithm achieves superior scalability and efficiency across a large variety of composition scenarios, and is even more efficient than our worklist algorithm that won the performance championship of the Web Services Challenge 2009.
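For context, the classical SSSP baseline that SSOD generalizes can be solved with Dijkstra's algorithm. The sketch below shows a standard heap-based Dijkstra over a hypothetical service-dependency graph; it is not the paper's Sim-Dijkstra, and the graph and edge weights are made up for illustration.

```python
# Minimal sketch of heap-based Dijkstra (the classical SSSP baseline, O((n + m) log n)).
# Not the paper's Sim-Dijkstra; the example graph below is hypothetical.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Hypothetical service-dependency graph; weights could encode a QoS cost per invocation.
g = {"in": [("s1", 2), ("s2", 5)], "s1": [("s2", 1), ("out", 7)], "s2": [("out", 3)]}
print(dijkstra(g, "in"))   # {'in': 0, 's1': 2, 's2': 3, 'out': 6}
```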

5.
We present a logic-based verification framework for multilevel security and transactional correctness of service oriented architectures. The framework is targeted at the analysis of data confidentiality, enforced by non-interference, and of service responsiveness, captured by a notion of compliance that implies deadlock and livelock freedom. We isolate a class of modal μ-calculus formulae, interpreted over service configurations, that characterise configurations satisfying the properties of interest. We then investigate an adaptation technique based on the use of coercion filters to block any action that might potentially break security or transactional correctness. Based on the above, we devise a model checking algorithm for adaptive service compositions which automatically synthesises the maximal (most expressive/permissive) filter enforcing the desired security and correctness properties.
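As an illustration of the kind of property involved, deadlock freedom (a standard modal μ-calculus textbook example, not a formula taken from the paper) can be written as a greatest fixpoint requiring that some action is enabled at every reachable state:

```latex
% Standard illustrative formula: along every path, some action is always enabled,
% i.e. no reachable state is a deadlock.
\nu X.\, \bigl( \langle - \rangle \mathit{tt} \;\wedge\; [-]\, X \bigr)
```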

6.
In the requirements engineering community, consistency and completeness have been identified as important properties of system specifications. Custom algorithms to verify these properties automatically have been devised for a number of specification languages, including SCR, RSML, and Statecharts. In this paper, we provide means to automatically verify completeness and consistency of Abstract State Machine (ASM) specifications. The verification is performed using a widely available tool, a SAT solver. The use of a SAT solver removes the need for designing and fine-tuning language-specific verification algorithms. Furthermore, the use of a SAT solver automates the verification procedure and produces a counterexample automatically when a specification is incomplete or inconsistent. We provide an algorithm to translate ASM specifications to a SAT problem instance. The translation is illustrated using the TASM toolset in conjunction with the “production cell system” case study.
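To illustrate the general idea of reducing completeness and consistency checks to satisfiability, here is a minimal sketch that uses the Z3 solver as the SAT engine. The guarded rules and variable names are hypothetical, and this is not the TASM toolset's actual encoding.

```python
# Sketch: completeness/consistency of two guarded rules via satisfiability (Z3 as SAT engine).
# The guards over the Booleans 'hot' and 'door_open' are hypothetical examples.
from z3 import Bools, Solver, Or, And, Not, sat

hot, door_open = Bools('hot door_open')

g1 = And(hot, Not(door_open))   # guard of rule 1 (say, heater := off)
g2 = door_open                  # guard of rule 2 (say, heater := on)

# Completeness: the guards must cover every input; a model of !(g1 | g2) is a
# counterexample, i.e. an input no rule handles.
s = Solver()
s.add(Not(Or(g1, g2)))
if s.check() == sat:
    print("incomplete, uncovered case:", s.model())

# Consistency: rules with conflicting updates must not be enabled together; a
# model of g1 & g2 is a counterexample exhibiting the conflict.
s = Solver()
s.add(And(g1, g2))
if s.check() == sat:
    print("inconsistent, conflicting case:", s.model())
else:
    print("rules are mutually exclusive")
```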

7.
Web services promise the interoperability of various applications running on heterogeneous platforms over the Internet, and are gaining more and more attention. Web service composition refers to the process of combining Web services to provide value-added services, which has received much interest in supporting enterprise application integration. Industry standards for Web service composition, such as WSBPEL, provide the notation and additional control mechanisms for the execution of business processes in Web service collaborations. However, these standards do not provide support for checking interesting properties related to Web service and process behavior. In an attempt to fill this gap, we describe a formalization of WSBPEL business processes that adds communication semantics to the specifications of interacting Web services and uses a formal logic to model their dynamic behavior, enabling their formal analysis and the inference of relevant properties of the systems being built.

8.
Although several approaches have been proposed to specify multi-agent commitment-based protocols that capture flexible and rich interactions among autonomous and heterogeneous agents, very few of them synthesize their formal specification and automatic verification in an integrated framework. In this paper, we present a new logic-based language to specify commitment-based protocols, which is derived from ACTL*c, a logic extending CTL* with modalities to represent and reason about social commitments and their actions. We present a reduction technique that formally transforms the problem of model checking ACTL*c to the problem of model checking GCTL* (an extension of CTL* with action formulae). We prove that the reduction technique is sound and we fully implement it on top of the CWB-NC model checker to automatically verify the NetBill protocol, a motivating example specified in the proposed language. We also apply the proposed technique to check the compliance of another protocol, the Contract Net protocol, against given properties, and report and discuss the obtained results. We finally develop a new symbolic model checking algorithm dedicated to the proposed logic.

9.
Despite substantial investments in customer-relationship-management (CRM) systems, companies continue to experience pain rather than profit. Meanwhile, the concept of "adaptive behaviour" of frontline employees has received little attention in the literature related to CRM systems, in which the frontline employees are the primary users. In this study, we propose that with the aid of CRM systems, individual employees are able to immediately access information about customers and service offerings, thus enabling their adaptive behaviours to provide personalized service to individual customers. Based on coping theory, we develop a CRM system-driven adaptive behaviour model that explains how CRM systems facilitate individual employees' service performance by enabling adaptive behaviour during their service encounters. Multisourced data from a financial company in China largely support our proposed model, showing that employees' postadoption of CRM systems (routinization and infusion of use) enables interpersonal and offering adaptive behaviours, which in turn enhance employees' service performance. In addition, forming a postadoption behaviour of CRM systems relies on frequent use. We discuss the theoretical and practical implications of adaptive behaviour in service encounters with the aid of CRM systems.

10.
We describe an approach to verifying bit-level pipelined machine models using a combination of deductive reasoning and decision procedures. While theorem-proving systems such as ACL2 have been used to verify bit-level designs, they typically require extensive expert user support. Decision procedures such as those implemented in UCLID can be used to automatically and efficiently verify term-level pipelined machine models, but these models use numerous abstractions, implement a subset of the instruction set, and are far from executable. We show that by integrating UCLID with the ACL2 theorem-proving system, we can use ACL2 to reduce the proof that an executable, bit-level machine refines its instruction set architecture to a proof that a term-level abstraction of the bit-level machine refines the instruction set architecture, which is then handled automatically by UCLID. We demonstrate the efficiency of our approach by applying it to verify a complex, seven-stage, bit-level interface pipelined machine model that implements 593 instructions and has features such as branch prediction, exceptions, and predicated instruction execution. Such a proof is not possible using UCLID and would require prohibitively more effort using just ACL2. This research was funded in part by NSF grants CCF-0429924, IIS-0417413, and CCF-0438871.

11.
12.
The population protocol model has emerged as an elegant computation paradigm for describing mobile ad hoc networks, consisting of a number of mobile nodes that interact with each other to carry out a computation. The interactions of nodes are subject to a fairness constraint. One essential property of population protocols is that all nodes must eventually converge to the correct output value (or configuration). In this paper, we aim to automatically verify self-stabilizing population protocols for leader election and token circulation in the Spin model checker. We report our verification results and discuss the issue of modeling strong fairness constraints in Spin.

13.
Service oriented computing (SOC) has brought a simplification in the way distributed applications can be built. Mainstream approaches, however, have failed to support dynamic, self-managed compositions that would empower even non-technical users to build their own orchestrations. Indeed, because of the changeable world in which they are embedded, service compositions must be able to adapt to changes that may happen at run-time. Unfortunately, mainstream SOC languages, like BPEL and BPMN, make it quite hard to develop this kind of self-adapting orchestration. We claim that this is mostly due to the imperative programming paradigm they are based on. To overcome this limitation we propose a radically different, strongly declarative approach to model service orchestration, which is easier to use and results in more flexible and self-adapting orchestrations. An ad-hoc engine, leveraging well-known planning techniques, interprets such models to support dynamic service orchestration at run-time.

14.
Modern optimizing compilers apply a fixed sequence of optimizations, which we call a compilation sequence, to each program that they compile. These compilers let the user modify their behavior in a small number of specified ways, using command-line flags (e.g., -O1, -O2, ...). For five years, we have been working with compilers that automatically select an appropriate compilation sequence for each input program. These adaptive compilers discover a good compilation sequence tailored to the input program, the target machine, and a user-chosen objective function. We have shown, as have others, that program-specific sequences can produce better results than any single universal sequence [1, 7, 10, 21, 23]. Our adaptive compiler looks for compilation sequences in a large and complex search space. Its typical compilation sequence includes 10 passes (with possible repeats) chosen from the 16 available: there are 16^10, or 1,099,511,627,776, such sequences. To learn about the properties of such spaces, we have studied subspaces that consist of 10 passes drawn from a set of 5 (5^10, or 9,765,625, sequences). These 10-of-5 subspaces are small enough that we can analyze them thoroughly but large enough to reflect important properties of the full spaces. This paper reports, in detail, on our analysis of several of these subspaces and on the consequences of those observed properties for the design of search algorithms. This work has been supported by the Los Alamos Computer Science Institute and by the National Science Foundation through grant CCR-0205303.
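The space sizes quoted above, and the kind of sampling an adaptive compiler might fall back on when exhaustive enumeration is infeasible, can be illustrated with a short sketch. The `evaluate` objective function here is a hypothetical stand-in for compiling a program with a given sequence and measuring it.

```python
# Sketch of the compilation-sequence search spaces and a random probe of a 10-of-5 subspace.
# 'evaluate' is a hypothetical placeholder for "compile with this sequence and measure".
import random

FULL_PASSES = 16   # passes available in the full space
SUB_PASSES = 5     # passes in a 10-of-5 subspace
SEQ_LEN = 10       # sequence length, repeats allowed

print(FULL_PASSES ** SEQ_LEN)   # 1099511627776 sequences in the full space
print(SUB_PASSES ** SEQ_LEN)    # 9765625 sequences in a 10-of-5 subspace

def evaluate(seq):
    # Hypothetical stand-in: deterministic pseudo-score per sequence.
    return random.Random(hash(seq)).random()

# Random sampling, a common baseline when the space is too large to enumerate.
passes = list(range(SUB_PASSES))
samples = (tuple(random.choices(passes, k=SEQ_LEN)) for _ in range(1000))
best = min(samples, key=evaluate)
print("best sampled sequence:", best)
```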

15.
Static analysis tools, such as resource analyzers, give useful information on software systems, especially in real-time and safety-critical applications. Therefore, the question of the reliability of the obtained results is highly important. State-of-the-art static analyzers typically combine a range of complex techniques, make use of external tools, and evolve quickly. To formally verify such systems is not a realistic option. In this work, we propose a different approach whereby, instead of the tools, we formally verify the results of the tools. The central idea of such a formal verification framework for static analysis is the method-wise translation of the information about a program gathered during its static analysis into specification contracts that contain enough information for them to be verified automatically. We instantiate this framework with COSTA, a state-of-the-art static analysis system for sequential Java programs, for producing resource guarantees, and KeY, a state-of-the-art verification tool, for formally verifying the correctness of such resource guarantees. Resource guarantees make it possible to be certain that programs will run within the indicated amount of resources, which may refer to memory consumption, number of instructions executed, etc. Our results show that the proposed tool cooperation can be used for automatically producing verified resource guarantees.

16.
As the commercialization of the Grid deepens, Grid computing requires that service providers and consumers reach end-to-end, high quality-of-service agreements for composite services; highly reliable negotiation and guarantee mechanisms for composite services therefore play an increasingly important role in Grid computing. The proposed Grid contract computing model provides a unified user interface during service composition, enabling the two trading parties to automatically and dynamically negotiate the quality-of-service constraints of each individual service. Within this model, agent technology is used to implement an autonomous negotiation mechanism for Grid contract computing, which follows an improved two-phase-commit FIPA protocol. The notions of soft service contract (SSC) and hard service contract (HSC) are introduced into service contracts, and it is pointed out that an SSC generated before a service is dynamically bound serves only as a placeholder; a real service commitment is obtained only when the SSC is "hardened" at binding time.

17.
Static analyses based on denotational semantics can naturally model functional behaviours of the code in a compositional and completely context and flow sensitive way. But they only model the functional, i.e., input/output, behaviour of a program P, which is not enough if one needs P's internal behaviours, i.e., from the input to some internal program points. This is, however, a frequent requirement for a useful static analysis. In this paper, we overcome this limitation, for the case of mono-threaded Java bytecode, with a technique used up to now for logic programs only. Namely, we define a program transformation that adds new magic blocks of code to the program P, whose functional behaviours are the internal behaviours of P. We prove the transformation correct w.r.t. an operational semantics and define an equivalent denotational semantics, devised for abstract interpretation, whose denotations for the magic blocks are hence the internal behaviours of P. We implement our transformation and instantiate it with abstract domains modelling sharing of two variables, non-cyclicity of variables, nullness of variables, class initialisation information and size of the values bound to program variables. We get a static analyser for full mono-threaded Java bytecode that is faster and scales better than another operational pair-sharing analyser. It has the same speed but is more precise than a constraint-based nullness analyser. It makes a polyhedral size analysis of Java bytecode scale up to 1300 methods in a couple of minutes, and a zone-based size analysis scale to still larger applications.

18.
Comprehensive computational lexicons are essential to practical natural language processing (NLP). To compile such computational lexicons by automatically acquiring lexical information, however, we previously require sufficiently large corpora. This study aims at predicting the ideal size of such automatic-lexical-acquisition oriented corpora, focusing on six specific factors: (1) specific versus general purpose prediction, (2) variation among corpora, (3) base forms versus inflected forms, (4) open class items, (5) homographs, and (6) unknown words. Another important and related issue with regard to predictability has to do with data sparseness. Research using the TOTAL Corpus reveals serious data sparseness in this corpus. This, again, points towards the importance and necessity of reducing data sparseness to a satisfactory level for automatic lexical acquisition and reliable corpus predictions. The functions for predicting the number of tokens and lemmas in a corpus are based on the piecewise curve-fitting algorithm. Unfortunately, the predicted size of a corpus for automatic lexical acquisition is too astronomical to compile by using presently existing compiling strategies. Therefore, we suggest a practical and efficient alternative method. We are confident that this study will shed new light on issues such as corpus predictability, compiling strategies and linguistic comprehensiveness.
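The abstract does not reproduce the piecewise curve-fitting functions themselves, so the sketch below illustrates the general idea with a simpler, standard vocabulary-growth model (Heaps' law, V = K * N^beta) fitted to made-up token/lemma measurements; it is not the paper's method.

```python
# Sketch: fitting a vocabulary-growth curve and extrapolating corpus size requirements.
# Uses Heaps' law as a stand-in for the paper's piecewise curve-fitting functions;
# the (tokens, lemmas) measurements below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def heaps(n_tokens, K, beta):
    """Heaps' law: distinct lemmas grow as K * N**beta with corpus size N."""
    return K * n_tokens ** beta

tokens = np.array([1e4, 5e4, 1e5, 5e5, 1e6])          # corpus sizes (tokens)
lemmas = np.array([2100, 6400, 9800, 26000, 39000])   # observed distinct lemmas

(K, beta), _ = curve_fit(heaps, tokens, lemmas, p0=(10.0, 0.5))
print(f"K={K:.1f}, beta={beta:.3f}")
print("predicted lemmas at 1e8 tokens:", int(heaps(1e8, K, beta)))
```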

19.
There is a great deal of research aimed toward the development of temporal logics and model checking algorithms which can be used to verify properties of systems. In this paper, we present a methodology and supporting tools which allow researchers and practitioners to automatically generate model checking algorithms for temporal logics from algebraic specifications. These tools are extensions of algebraic compiler generation tools and are used to specify model checkers as mappings of the form L_s -> L_t, where L_s is a temporal logic source language and L_t is a target language representing sets of states of a model M, such that the image of a formula is the set of states of M on which that formula holds. The algebraic specifications for a model checker define the logic source language, the target language representing sets of states in a model, and the embedding of the source language into the target language. Since users can modify and extend existing specifications or write original specifications, new model checking algorithms for new temporal logics can be easily and quickly developed; this allows the user more time to experiment with the logic and its model checking algorithm instead of developing its implementation. Here we show how this algebraic framework can be used to specify model checking algorithms for CTL, a real-time CTL, CTL*, and a custom extension, CTL^e, that makes use of propositions labeling the edges as well as the nodes of a model. We also show how the target language can be changed to a language of binary decision diagrams to generate symbolic model checkers from algebraic specifications.
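To make the "sets of states" target language concrete, the sketch below shows the standard explicit-state fixpoint computations for the existential CTL operators; it is a textbook illustration, not the generated algebraic compiler, and the tiny Kripke structure is hypothetical.

```python
# Sketch: explicit-state CTL model checking over sets of states.
# EX is a preimage computation; E[phi U psi] is a least fixpoint.
def pre(trans, target):
    """States with at least one successor in 'target' (preimage under the transition relation)."""
    return {s for (s, t) in trans if t in target}

def ex(trans, phi):
    """States satisfying EX phi."""
    return pre(trans, phi)

def eu(trans, phi, psi):
    """States satisfying E[phi U psi], computed as a least fixpoint."""
    result = set(psi)
    while True:
        new = result | (set(phi) & pre(trans, result))
        if new == result:
            return result
        result = new

def ef(trans, states, phi):
    """States satisfying EF phi, i.e. E[true U phi]."""
    return eu(trans, states, phi)

# Hypothetical Kripke structure: states 0..3, a proposition 'err' holding in state 3.
states = {0, 1, 2, 3}
trans = {(0, 1), (1, 2), (1, 3), (2, 0), (3, 3)}
err = {3}
print(sorted(ex(trans, err)))          # [1, 3]: states with a successor where 'err' holds
print(sorted(ef(trans, states, err)))  # [0, 1, 2, 3]: every state can reach an 'err' state
```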

20.
In this paper, we propose a variational soft segmentation framework inspired by the level set formulation of the multiphase Chan-Vese model. We use soft membership functions valued in [0,1] to replace the Heaviside functions of level sets (or characteristic functions), so that regions are represented by soft membership functions which automatically satisfy the sum-to-one constraint. We give general formulas for arbitrary N-phase segmentation, in contrast to the Chan-Vese level set method, where only 2^m phases are studied. To ensure smoothness of the membership functions, both total variation (TV) regularization and H^1 regularization are offered as two choices for the regularization term. TV regularization has a geometric meaning, requiring that the segmentation curve length be as short as possible, while H^1 regularization has no explicit geometric meaning but is easier to implement, has fewer parameters, and has higher tolerance to noise. Fast numerical schemes are designed for both regularization methods. By changing the distance function, the proposed segmentation framework can be easily extended to the segmentation of other types of images. Numerical results on cartoon images, piecewise smooth images and texture images demonstrate that our methods are effective in multiphase image segmentation.
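For concreteness, one common way to write such a soft N-phase energy is sketched below, assuming the standard Chan-Vese fidelity term as the data term; the paper's exact functional and distance function may differ.

```latex
% Representative soft multiphase energy (illustrative sketch, not the paper's exact functional).
% u_i: membership functions, c_i: region constants, I: image, R: TV or H^1 regularizer.
\min_{\substack{u_i \in [0,1] \\ \sum_{i=1}^{N} u_i \equiv 1}}
  \; \sum_{i=1}^{N} \int_\Omega u_i(x)\,\bigl(I(x) - c_i\bigr)^2 \, dx
  \;+\; \lambda \sum_{i=1}^{N} R(u_i),
\qquad
R(u) = \int_\Omega \lvert\nabla u\rvert \, dx \ \ \text{(TV)}
\quad\text{or}\quad
R(u) = \tfrac{1}{2}\int_\Omega \lvert\nabla u\rvert^{2} \, dx \ \ (H^1).
```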
