Similar Documents
Found 20 similar documents.
1.
Model checking is a successful approach for verifying hardware and software systems. Despite its success, the technique suffers from the state explosion problem, which arises due to the large state space of real-life systems. One solution to the state explosion problem is compositional verification, which aims to decompose the verification of a large system into the more manageable verification of its components. To account for dependencies between components, assume-guarantee reasoning defines rules that break up the global verification of a system into local verification of individual components, using assumptions about the rest of the system. In recent years, compositional techniques have achieved significant success following a breakthrough in the ability to automate assume-guarantee reasoning. However, automation has been restricted to simple acyclic assume-guarantee rules. In this work, we focus on automating circular assume-guarantee reasoning, in which the verification of individual components mutually depends on each other. We use a sound and complete circular assume-guarantee rule and describe how to automatically build the assumptions needed for using the rule. Our algorithm accumulates joint constraints on the assumptions based on (spurious) counterexamples obtained from checking the premises of the rule, and uses a SAT solver to synthesize minimal assumptions that satisfy these constraints. To the best of our knowledge, our work is the first to fully automate circular assume-guarantee reasoning. We implemented our approach and compared it with established non-circular compositional methods that use learning or SAT-based techniques. The experiments show that the assumptions generated for the circular rule are generally smaller, and that on the larger examples we obtain a significant speedup.
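The counterexample-driven loop this abstract describes can be pictured as follows. This is a minimal sketch with toy stand-ins — a brute-force SAT routine and premise callables that return blocking clauses — rather than the paper's actual encoding of assumptions as automata; all names and the bit-vector encoding are illustrative assumptions.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Toy stand-in for a SAT solver: return a minimum-weight model of
    `clauses` (DIMACS-style sets of signed ints), or None if unsatisfiable."""
    best = None
    for bits in product([False, True], repeat=num_vars):
        def val(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(val(lit) for lit in clause) for clause in clauses):
            if best is None or sum(bits) < sum(best):
                best = bits  # fewer true bits = smaller assumption
    return best

def synthesize_assumptions(num_vars, premises):
    """Accumulate joint constraints from failed premises until either a
    minimal satisfying assumption assignment is found or none exists.
    Each premise is a callable returning None if it holds under the
    candidate assumptions, or a blocking clause learned from the
    (spurious) counterexample."""
    clauses = []
    while True:
        candidate = brute_force_sat(num_vars, clauses)
        if candidate is None:
            return None            # no assumptions can exist: property fails
        learned = [p(candidate) for p in premises]
        learned = [c for c in learned if c is not None]
        if not learned:
            return candidate       # every premise holds: property verified
        clauses.extend(learned)    # joint constraints on the assumptions

# Tiny run: 3 candidate assumption bits, one premise requiring bit 1 or bit 2.
print(synthesize_assumptions(3, [lambda m: None if (m[0] or m[1]) else {1, 2}]))
```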

2.
Compositional reasoning aims to improve the scalability of verification tools by reducing the original verification task into subproblems. The simplification is typically based on assume-guarantee reasoning principles, and requires user guidance to identify appropriate assumptions for components. In this paper, we propose a fully automated approach to compositional reasoning that consists of automated decomposition using a hypergraph partitioning algorithm for balanced clustering of variables, and discovering assumptions using the L* algorithm for active learning of regular languages. We present a symbolic implementation of the learning algorithm, and incorporate it in the model checker NuSMV. In some cases, our experiments demonstrate significant savings in the computational requirements of symbolic model checking. This research was partially supported by ARO grant DAAD19-01-1-0473, and NSF grants ITR/SY 0121431 and CCR0306382.
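The decomposition step can be illustrated with a toy partitioner: variables become hypergraph nodes and each next-state function's support set becomes a hyperedge. Real implementations use hMETIS-style multilevel algorithms, so the greedy bisection below and its data layout are assumptions made purely for illustration.

```python
def greedy_bisect(variables, hyperedges):
    """Toy balanced bisection: place each variable on the side where it
    cuts fewer hyperedges, while keeping both sides within half the size."""
    cap = len(variables) - len(variables) // 2   # max size of either side
    part_a, part_b = set(), set()

    def cut(side, other, v):
        # hyperedges spanning both sides if v joins `side`
        return sum(1 for e in hyperedges if e & (side | {v}) and e & other)

    for v in variables:
        cost_a, cost_b = cut(part_a, part_b, v), cut(part_b, part_a, v)
        if len(part_a) >= cap:
            part_b.add(v)
        elif len(part_b) >= cap or (cost_a, len(part_a)) <= (cost_b, len(part_b)):
            part_a.add(v)
        else:
            part_b.add(v)
    return part_a, part_b

# Each hyperedge is the support set of one next-state function.
variables = ["x0", "x1", "x2", "x3"]
hyperedges = [{"x0", "x1"}, {"x1", "x2"}, {"x2", "x3"}]
print(greedy_bisect(variables, hyperedges))  # -> ({'x0', 'x1'}, {'x2', 'x3'})
```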

3.
Hybrid systems represent an important and powerful formalism for modeling real-world applications such as embedded systems. A verification tool like SpaceEx is based on the exploration of a symbolic search space (the region space). As a verification tool, it is typically optimized towards proving the absence of errors. In some settings, e.g., when the verification tool is employed in a feedback-directed design cycle, one would like to have the option to call a version that is optimized towards finding an error trajectory in the region space. A recent approach in this direction is based on guided search. Guided search relies on a cost function that indicates which states are promising to explore, and preferably explores more promising states first. In this paper, we propose an abstraction-based cost function, built on coarse-grained space abstractions, for guiding the reachability analysis. For this purpose, a suitable abstraction technique that exploits the flexible granularity of modern reachability analysis algorithms is introduced. The new cost function is an effective extension of pattern database approaches that have been successfully applied in other areas. The approach has been implemented in the SpaceEx model checker, and the evaluation shows its practical potential.
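A pattern-database-style cost function of this kind can be sketched as a backward BFS over the abstract graph, followed by a greedy best-first exploration of the concrete space. The dict-based graphs and integer states below are illustrative assumptions; SpaceEx itself operates on symbolic regions.

```python
import heapq
from collections import deque

def abstract_distances(abs_succ, abs_errors):
    """Backward BFS on the abstract graph: distance of each abstract
    state to the nearest abstract error state (the 'pattern database')."""
    preds = {}
    for s, succs in abs_succ.items():
        for t in succs:
            preds.setdefault(t, []).append(s)
    dist, queue = {e: 0 for e in abs_errors}, deque(abs_errors)
    while queue:
        t = queue.popleft()
        for s in preds.get(t, []):
            if s not in dist:
                dist[s] = dist[t] + 1
                queue.append(s)
    return dist

def guided_search(init, succ, is_error, alpha, cost):
    """Greedy best-first search over the concrete space, expanding first
    the state whose abstraction looks closest to an error."""
    tie = 0                      # tiebreaker keeps heap entries comparable
    heap, seen = [(cost.get(alpha(init), float("inf")), tie, init)], {init}
    while heap:
        _, _, s = heapq.heappop(heap)
        if is_error(s):
            return s             # endpoint of an error trajectory found
        for t in succ(s):
            if t not in seen:
                seen.add(t)
                tie += 1
                heapq.heappush(heap, (cost.get(alpha(t), float("inf")), tie, t))
    return None

# Toy run: concrete states 0..9, error at 9, abstraction alpha(s) = s // 5.
pdb = abstract_distances({0: {0, 1}, 1: {1}}, {1})
print(guided_search(0, lambda s: {s + 1} if s < 9 else set(),
                    lambda s: s == 9, lambda s: s // 5, pdb))  # -> 9
```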

4.
Compositional verification using assume-guarantee reasoning has recently seen an upsurge due to the introduction of automatic techniques for learning assumptions. In this paper, we transfer this technique to a setting with CSP as the modelling and property specification language, and present an approach to compositional traces refinement checking. The approach has been implemented using the CSP model checker FDR as the teacher during learning. The implementation shows that the compositional approach can both drastically outperform and drastically underperform FDR, depending on the example at hand.

5.
Assume-guarantee reasoning enables a “divide-and-conquer” approach to the verification of large systems that checks system components separately while using assumptions about each component’s environment. Developing appropriate assumptions used to be a difficult and manual process. Over the past five years, we have developed a framework for performing assume-guarantee verification of systems in an incremental and fully automated fashion. The framework uses an off-the-shelf learning algorithm to compute the assumptions. The assumptions are initially approximate and become more precise by means of counterexamples obtained by model checking components separately. The framework supports different assume-guarantee rules, both symmetric and asymmetric. Moreover, we have recently introduced alphabet refinement, which extends the assumption learning process to also infer assumption alphabets. This refinement technique starts with assumption alphabets that are a subset of the minimal interface between a component and its environment, and adds actions to them as necessary until a given property is shown to hold or to be violated in the system. We have applied the learning framework to a number of case studies that show that compositional verification by learning assumptions can be significantly more scalable than non-compositional verification. J.M. Cobleigh is currently employed by The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760, USA.

6.
A Security Model for Cryptographic Protocols
Liu Yiwen, Li Weiqin, Feng Dengguo. Journal of Software (软件学报), 2003, 14(6): 1148–1156
Treating cryptographic protocols and cryptographic algorithms as a single system, this paper establishes a security model for cryptographic protocol systems. Building on assume-guarantee compositional reasoning, it proposes new assume-guarantee inference rules and a corresponding reasoning algorithm, proves the completeness of the rules, and realizes model checking of cryptographic protocol systems. Key difficulties are addressed, including decomposing the system, constructing the assumption functions, and describing system properties in a process-plus-logic style. Taking the Kerberos protocol system as an example, the security model and the assume-guarantee reasoning technique are applied to verify the security of a cryptographic protocol system.

7.
This paper studies the performance degradation of a Gaussian probabilistic linear discriminant analysis (GPLDA) speaker verification system when only short-utterance data is used for speaker verification system development. Subsequently, a number of techniques, including utterance partitioning and source-normalised weighted linear discriminant analysis (SN-WLDA) projections, are introduced to improve the speaker verification performance in such conditions. Experimental studies have found that when only short-utterance data is available for speaker verification development, the GPLDA system achieves its best overall performance with a lower number of universal background model (UBM) components. Since a lower number of UBM components significantly reduces the computational complexity of the speaker verification system, this is a useful observation. In limited session data conditions, we propose a simple utterance-partitioning technique which, when applied to the LDA-projected GPLDA system, shows over 8% relative improvement in EER values over the baseline system on the NIST 2008 truncated 10–10 s conditions. We conjecture that this improvement arises from the apparent increase in the number of sessions produced by our partitioning technique, which helps to better model the GPLDA parameters. Further, partitioning the SN-WLDA-projected GPLDA shows over 16% and 6% relative improvement in EER values over LDA-projected GPLDA systems on the NIST 2008 truncated 10–10 s interview-interview condition, and the NIST 2010 truncated 10–10 s interview-interview and telephone-telephone conditions, respectively.
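The utterance-partitioning idea can be sketched as slicing one utterance's feature-frame sequence into pseudo-sessions, so that GPLDA parameter estimation sees more sessions per speaker. The function name, overlap scheme, and dimensions below are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def partition_utterance(frames, num_parts, overlap=0.0):
    """Return `num_parts` slices of `frames`, each covering 1/num_parts
    of the utterance plus `overlap` of a part's length as extra context
    on each side; each slice is then treated as its own session."""
    n = len(frames)
    size = n // num_parts
    pad = int(size * overlap)
    parts = []
    for k in range(num_parts):
        lo = max(0, k * size - pad)
        hi = min(n, (k + 1) * size + pad)
        parts.append(frames[lo:hi])
    return parts

# Toy usage: 200 frames of 13-dim features split into 2 parts that
# overlap by 25% of a part's length.
frames = np.zeros((200, 13))
for p in partition_utterance(frames, 2, overlap=0.25):
    print(p.shape)  # -> (125, 13) and (125, 13)
```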

8.
Inaccuracies, or deviations, in the measurements of monitored variables in a control system are facts of life that control software must accommodate. Deviation analysis can be used to determine how a software specification will behave in the face of such deviations. Deviation analysis is intended to answer questions such as “What is the effect on output O if input I is off by 0 to 100?”. This property is best checked with some form of symbolic execution approach. In this report we propose a new approach to deviation analysis using model checking techniques. The key observation that allows us to use model checkers is that the property can be restated as “Will there be an effect on output O if input I is off by 0 to 100?”—this restatement turns an exploratory analysis into a verification task suitable for model checking.
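The restatement amounts to a self-composition query: run the specification on the nominal input and on the deviated input, and ask whether the outputs can ever differ. A toy, fully enumerative sketch follows; the threshold specification and the ranges are invented for illustration, and a model checker would explore this symbolically rather than by enumeration.

```python
def spec(i):
    # Toy specification: an alarm output that triggers above a threshold.
    return 1 if i > 500 else 0

def deviation_effect(inputs, lo, hi):
    """Exhaustively check: 'Will there be an effect on the output if the
    input is off by lo..hi?' Returns a witness (input, deviation) whose
    deviated run changes the output, or None if no deviation matters."""
    for i in inputs:
        for d in range(lo, hi + 1):
            if spec(i) != spec(i + d):
                return (i, d)  # witness: deviation d at input i flips the output
    return None

print(deviation_effect(range(0, 1000), 0, 100))  # -> (401, 100)
```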

9.
Weighted Markov decision processes (MDPs) have long been used to model quantitative aspects of systems in the presence of uncertainty. However, much of the literature on such MDPs takes a monolithic approach, by modelling a system as a particular MDP; properties of the system are then inferred by analysis of that particular MDP. In contrast, in this paper we develop compositional methods for reasoning about weighted MDPs, as a possible basis for compositional reasoning about their quantitative behaviour. In particular, we approach these systems from a process algebraic point of view. For these systems we define a coinductive, simulation-based behavioural preorder which is compositional in the sense that it is preserved by the structural operators for constructing weighted MDPs from components.

10.
Formal synthesis approaches over stochastic systems have received significant attention in the past few years, in view of their ability to provide provably correct controllers for complex logical specifications in an automated fashion. Examples of complex specifications include properties expressed as formulae in linear temporal logic (LTL) or as automata on infinite strings. A general methodology to synthesize controllers for such properties resorts to symbolic models of the given stochastic systems. Symbolic models are finite abstractions of the given concrete systems with the property that a controller designed on the abstraction can be refined (or implemented) into a controller on the original system. Although the recent development of techniques for the construction of symbolic models has been quite encouraging, the general goal of formal synthesis over stochastic control systems is by no means solved. A fundamental issue with the existing techniques is the well-known “curse of dimensionality,” which is due to the need to discretize state and input sets. Such discretization generally results in complexity that is exponential in the number of state and input variables of the concrete system. In this work we propose a novel abstraction technique for incrementally stable stochastic control systems, which does not require state-space discretization but only input-set discretization, and which can be potentially more efficient (and thus more scalable) than existing approaches. We elucidate the effectiveness of the proposed approach by synthesizing a schedule for the coordination of two traffic lights under some safety and fairness requirements for a road traffic model. Further, we argue that this 5-dimensional linear stochastic control system cannot be studied with existing approaches based on state-space discretization, due to the very large number of generated discrete states.
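Discretizing only the input set means abstract states can be identified with finite sequences of quantized inputs applied from a fixed source state. The sliding-window bookkeeping below is a toy sketch of that idea, with the window length standing in for how quickly incremental stability forgets the past; it is not the paper's construction.

```python
def build_input_abstraction(inputs, horizon):
    """Abstract states are sequences of the last `horizon` quantized
    inputs; each discrete input u moves sequence s to (s + (u,))[-horizon:].
    The window reflects that an incrementally stable system forgets the
    distant past, so only recent inputs matter."""
    states, trans = set(), {}
    frontier = [()]
    while frontier:
        s = frontier.pop()
        if s in states:
            continue
        states.add(s)
        for u in inputs:
            t = (s + (u,))[-horizon:]
            trans[(s, u)] = t
            frontier.append(t)
    return states, trans

# Toy run: three quantized input values, memory of the last two inputs.
states, trans = build_input_abstraction([-1.0, 0.0, 1.0], horizon=2)
print(len(states))  # -> 13 abstract states (sequences of length 0, 1, 2)
```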

11.
In this paper, we address the problem of verifying probabilistic and epistemic properties of concurrent probabilistic systems expressed in PCTLK. PCTLK is an extension of the Probabilistic Computation Tree Logic (PCTL) augmented with Knowledge (K). In fact, PCTLK enjoys two epistemic modalities: \(K_{i}\) for knowledge and \(Pr_{\triangledown b}K_{i}\) for probabilistic knowledge. The approach presented for verifying PCTLK specifications in such concurrent systems is based on a transformation technique. More precisely, we convert PCTLK model checking into the problem of model checking Probabilistic Branching Time Logic (PBTL), which enjoys path quantifiers that range over adversaries. We then prove that model checking a formula of PCTLK in concurrent probabilistic programs is PSPACE-complete. Furthermore, we represent models associated with PCTLK logic symbolically with Multi-Terminal Binary Decision Diagrams (MTBDDs), which are supported by the probabilistic model checker PRISM. Finally, an application, namely the NetBill online shopping payment protocol, and an example about synchronization, illustrated through the dining philosophers problem, are implemented with the MTBDD engine of this model checker to verify probabilistic epistemic properties and evaluate the practical complexity of this verification.

12.
The emergence of portable 3D mapping systems is revolutionizing the way we generate digital 3D models of environments. These systems are human-centric and require the user to hold or carry the device while continuously walking and mapping an environment. In this paper, we build on this unique coexistence of human and machine to propose SAGE (Semantic Annotation of Georeferenced Environments). SAGE consists of a portable 3D mobile mapping system and a smartphone that enables the user to assign semantic content to georeferenced 3D point clouds while scanning a scene. The proposed system contains several components, including touchless speech acquisition, background noise adaptation, real-time audio and vibrotactile feedback, automatic speech recognition, distributed clock synchronization, 3D annotation localization, user interaction, and interactive visualization. The most crucial advantage of SAGE technology is that it can be used to infer dynamic activities within an environment. Such activities are difficult to identify with existing post-processing semantic annotation techniques. The capability of SAGE leads to many promising applications such as intelligent scene classification, place recognition, and navigational aid tasks. We conduct several experiments to demonstrate the effectiveness of the proposed system.

13.
14.
In this paper, we propose a sub-vector based speaker characterization method for biometric speaker verification, where speakers are represented by uniform segments of their maximum likelihood linear regression (MLLR) super-vectors, called m-vectors. The MLLR transformation is estimated with respect to a universal background model (UBM) without any speech/phonetic information. We introduce two strategies for segmenting the MLLR super-vector: one a disjoint and the other an overlapped-window technique. During the test phase, the m-vectors of the test utterance are scored against those of the claimant speaker. Before scoring, the m-vectors are post-processed to compensate for session variability. In addition, we propose a clustering algorithm for class-wise multiple MLLR transformations, where the Gaussian components of the UBM are clustered into different groups using the concepts of expectation maximization (EM) and maximum likelihood (ML). In this case, MLLR transformations are estimated with respect to each class using the sufficient statistics accumulated from the Gaussian components belonging to the particular class, and these are then used for the m-vector system. The proposed method needs only a single alignment of the data with respect to the UBM for the multiple MLLR transformations. We first show that the proposed multi-class m-vector system shows promising speaker verification performance compared to the conventional i-vector based speaker verification system. Secondly, the proposed EM-based clustering technique is robust to random initialization, in contrast to the conventional K-means algorithm, and yields system performance equal to or better than the best obtained with K-means. Finally, we show that fusing the m-vector with the i-vector further improves speaker verification performance in both the score and the feature domain. Experimental results are shown on various tasks of the NIST 2008 speaker recognition evaluation (SRE) core condition.
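The segmentation itself is simple to picture: flatten the MLLR transform into one super-vector and cut it into uniform windows, disjoint or overlapped. The dimensions and the half-window stride below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def m_vectors(supervector, dim, overlapped=False):
    """Split `supervector` into m-vectors of length `dim`; with
    `overlapped`, windows advance by dim // 2 instead of dim."""
    step = dim // 2 if overlapped else dim
    return [supervector[i:i + dim]
            for i in range(0, len(supervector) - dim + 1, step)]

# Toy usage: a 40x41 MLLR transform flattened to a 1640-dim super-vector,
# cut into m-vectors of dimension 205.
sv = np.random.randn(40 * 41)
print(len(m_vectors(sv, 205)), len(m_vectors(sv, 205, overlapped=True)))
# -> 8 disjoint and 15 overlapped m-vectors
```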

15.
Leader election protocols are fundamental for coordination problems—such as consensus—in distributed computing. Recently, hierarchical leader election protocols have been proposed for dynamic systems where processes can dynamically join and leave, and no process has global information. However, quantitative analysis of such protocols is generally lacking. In this paper, we present a probabilistic model checking based approach to verify quantitative properties of these protocols. Particularly, we employ the compositional technique in the style of assume-guarantee reasoning such that the sub-protocols for each of the two layers are verified separately and the correctness of the whole protocol is guaranteed by the assume-guarantee rules. Moreover, within this framework we also augment the proposed model with additional features such as rewards. This allows the analysis of time or energy consumption of the protocol. Experiments have been conducted to demonstrate the effectiveness of our approach.

16.
17.
The concept of “probabilistic logic” known in artificial intelligence needs a more thorough substantiation. A new approach to constructing probabilistic logic based on the n-tuple algebra developed by the author is proposed. A brief introduction to the n-tuple algebra is given, and its properties that enable efficient parallelization of algorithms for the logical analysis of systems in computer implementations are summarized. Methods for solving direct and inverse problems of probabilistic simulation of logical systems are considered.

18.
In the first part of this paper (STI, Ser. 2, 2017, No. 11, pp. 11–25), we considered four types of strategies in JSM reasoning that contain predicates \(M_{ag}^{\sigma}(V,X,Y)\) to generate hypotheses with a ternary causality relation. This paper investigates quasi-axiomatic theories (QAT), whose logical means is JSM reasoning with the different types of strategies, and considers the semantic bases of QAT.

19.
Active appearance models (AAMs) are one of the most popular and well-established techniques for modeling deformable objects in computer vision. In this paper, we study the problem of fitting AAMs using compositional gradient descent (CGD) algorithms. We present a unified and complete view of these algorithms and classify them with respect to three main characteristics: (i) cost function; (ii) type of composition; and (iii) optimization method. Furthermore, we extend the previous view by: (a) proposing a novel Bayesian cost function that can be interpreted as a general probabilistic formulation of the well-known project-out loss; (b) introducing two new types of composition, asymmetric and bidirectional, that combine the gradients of both the image and the appearance model to derive better-converging and more robust CGD algorithms; and (c) providing new valuable insights into existing CGD algorithms by reinterpreting them as direct applications of the Schur complement and the Wiberg method. Finally, in order to encourage open research and facilitate future comparisons with our work, we make the implementation of the algorithms studied in this paper publicly available as part of the Menpo Project (http://www.menpo.org).

20.
Static analysis tools, such as resource analyzers, give useful information on software systems, especially in real-time and safety-critical applications. Therefore, the question of the reliability of the obtained results is highly important. State-of-the-art static analyzers typically combine a range of complex techniques, make use of external tools, and evolve quickly. To formally verify such systems is not a realistic option. In this work, we propose a different approach whereby, instead of the tools, we formally verify the results of the tools. The central idea of such a formal verification framework for static analysis is the method-wise translation of the information about a program gathered during its static analysis into specification contracts that contain enough information for them to be verified automatically. We instantiate this framework with COSTA, a state-of-the-art static analysis system for sequential Java programs that produces resource guarantees, and KeY, a state-of-the-art verification tool, for formally verifying the correctness of such resource guarantees. Resource guarantees make it possible to be certain that programs will run within the indicated amount of resources, which may refer to memory consumption, the number of instructions executed, etc. Our results show that the proposed tool cooperation can be used to automatically produce verified resource guarantees.
