Similar Literature
20 similar documents found (search took 31 ms)
1.
Error explanation with distance metrics (cited by 1: 0 self-citations, 1 by others)
In the event that a system does not satisfy a specification, a model checker will typically automatically produce a counterexample trace that shows a particular instance of the undesirable behavior. Unfortunately, the important steps that follow the discovery of a counterexample are generally not automated. The user must first decide if the counterexample shows genuinely erroneous behavior or is an artifact of improper specification or abstraction. In the event that the error is real, there remains the difficult task of understanding the error well enough to isolate and modify the faulty aspects of the system. This paper describes a (semi-)automated approach for assisting users in understanding and isolating errors in ANSI C programs. The approach, derived from Lewis’ counterfactual approach to causality, is based on distance metrics for program executions. Experimental results show that the power of the model checking engine can be used to provide assistance in understanding errors and to isolate faulty portions of the source code.
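The distance-metric idea can be sketched in a few lines: given a counterexample and a set of passing executions, find the closest passing run, and the assignments that still differ point at candidate faults. This is a minimal illustration, not the paper's implementation; the state representation (a list of variable-assignment dicts) and the positional alignment are assumptions.

```python
def execution_distance(failing, passing):
    """Count differing (variable, value) assignments between two
    executions, each a list of {var: value} state dicts aligned by
    position. Unmatched trailing states count every variable."""
    dist = 0
    for i in range(max(len(failing), len(passing))):
        a = failing[i] if i < len(failing) else {}
        b = passing[i] if i < len(passing) else {}
        for var in set(a) | set(b):
            if a.get(var) != b.get(var):
                dist += 1
    return dist

def closest_passing(failing, passing_runs):
    """Return the passing execution closest to the counterexample;
    the assignments that still differ are candidate fault locations."""
    return min(passing_runs, key=lambda run: execution_distance(failing, run))
```

A real tool would align executions structurally (by control location) rather than by index, but the minimization principle is the same.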

2.
Model-checking is becoming an accepted technique for debugging hardware and software systems. Debugging is based on the “Check/Analyze/Fix” loop: check the system against a desired property, producing a counterexample when the property fails to hold; analyze the generated counterexample to locate the source of the error; fix the flawed artifact—the property or the model. The success of model-checking non-trivial systems critically depends on making this Check/Analyze/Fix loop as tight as possible. In this paper, we concentrate on the Analyze part of the debugging loop. To this end, we present a framework for generating, structuring and exploring counterexamples, implemented in a tool called KEGVis. The framework is based on the idea that the most general type of evidence to why a property holds or fails to hold is a proof. Such proofs can be presented to the user in the form of proof-like counterexamples, without sacrificing any of the intuitiveness and close relation to the model that users have learned to expect from model-checkers. Moreover, proof generation is flexible, and can be controlled by strategies, whether built into the tool or specified by the user, thus enabling generation of the most “interesting” counterexample and its interactive exploration. Moreover, proofs can be used to generate and display all relevant evidence together, a technique referred to as abstract counterexamples. Overall, our framework can be used for explaining the reason why the property failed or succeeded, determining whether the property was correct (“specification debugging”), and for general model exploration.

3.
Visual interaction processes are modeled in this paper as sequences of visual sentences in which for each visual sentence only a limited set of user actions is possible. We introduce the notion of "dynamic visual language" as a weakly ordered set of visual sentences characterized by the presence of common elements. We present a formal model of derivation of visual sentences in a dynamic visual language in which each visual sentence specifies the possible actions which can be performed on it and the possible transformations it can go through. In this way, we offer a formal setting in which the interaction process can be formally specified. A user interface can be derived from the formal specification, so that it embeds proper context elements which limit user disorientation. The concepts are illustrated by the user interaction with a prototype of a digital library developed at the University of Bari.

4.
5.
In this paper we present a framework for the fast prototyping of visual languages exploiting their local context based specification. In previous research, the local context specification has been used as a weak form of syntactic specification to define when visual sentences are well formed. In this paper we add new features to the local context specification in order to fully specify complex constructs of visual languages such as entity-relationship, use case and class diagrams. One of the advantages of this technique is its simplicity of application and, to show this, we present a tool implementing our framework. Moreover, we describe a user study aimed at evaluating the effectiveness and the user satisfaction when prototyping a visual language.

6.
We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms with a set of user provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as the temporal logic verification, the deadlock analysis and the data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, but here extended with executable temporal logic. The Maude rewriting engine is then activated as an event driven monitoring process. Alternatively, temporal specifications can be translated into automata or algorithms that can efficiently check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety critical systems.
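A toy version of the observer idea, one monitor process consuming an event stream and checking a single response property, might look like the sketch below. This is only an illustration of trace-based monitoring; JPAX's actual Maude-based temporal logic engine is far more general, and the event names here are invented.

```python
class ResponseMonitor:
    """Checks a response property over a finite trace: every 'acquire'
    event must eventually be matched by a 'release'. The verdict is
    only definite once the trace ends."""
    def __init__(self):
        self.pending = 0                 # acquires not yet released

    def step(self, event):
        if event == "acquire":
            self.pending += 1
        elif event == "release" and self.pending:
            self.pending -= 1

    def verdict(self):
        return self.pending == 0

def run_monitor(trace):
    """Dispatch the instrumented program's event stream to the monitor."""
    m = ResponseMonitor()
    for event in trace:
        m.step(event)
    return m.verdict()
```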

7.

Hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other and are thus not monitorable by tools that consider computations in isolation. We present the monitoring approach implemented in the latest version of \(\text {RVHyper}\), a runtime verification tool for hyperproperties. The input to the tool are specifications given in the temporal logic \(\text {HyperLTL}\), which extends linear-time temporal logic (LTL) with trace quantifiers and trace variables. \(\text {RVHyper}\) processes execution traces sequentially until a violation of the specification is detected. In this case, a counterexample, in the form of a set of traces, is returned. \(\text {RVHyper}\) employs a range of optimizations: a preprocessing analysis of the specification and a procedure that minimizes the traces that need to be stored during the monitoring process. In this article, we introduce a novel trace storage technique that arranges the traces in a tree-like structure to exploit partially equal traces. We evaluate \(\text {RVHyper}\) on existing benchmarks on secure information flow control, error correcting codes, and symmetry in hardware designs. As an example application outside of security, we show how \(\text {RVHyper}\) can be used to detect spurious dependencies in hardware designs.
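The tree-like trace storage can be sketched as a prefix tree: traces that share a prefix share storage. This is only a sketch of the data-structure idea; the `"$end"` sentinel is an assumption of this illustration, and the real RVHyper also minimizes which traces need to be stored at all.

```python
class TraceTree:
    """Stores execution traces in a prefix tree so that partially
    equal traces (shared prefixes) are stored only once."""
    def __init__(self):
        self.root = {}
        self.count = 0                    # number of distinct traces

    def insert(self, trace):
        node = self.root
        for event in trace:
            node = node.setdefault(event, {})
        if "$end" not in node:            # sentinel marking a full trace
            node["$end"] = True
            self.count += 1

    def __contains__(self, trace):
        node = self.root
        for event in trace:
            if event not in node:
                return False
            node = node[event]
        return "$end" in node
```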


8.
The design, analysis, control and diagnosis of business workflows have been major challenges for enterprise information system designers. We propose a structured framework for workflow design, formal semantics, consistency analysis, execution automation and failure reasoning targeting E-commerce applications. A business workflow is modeled by using a visual tool named activity-control (AC) diagram. Frequently occurring business procedures are captured by the adoptions of reusable AC templates. With formally defined semantics by a combination of first-order logic and happen-before causal ordering in distributed system theory, workflow consistency can be mechanically analyzed at design time while failure reasoning can be applied at execution time for problem diagnosis. A completely specified model is automatically converted to a workflow by an iterative traversal algorithm that maps an AC diagram to an XML workflow specification which can then be executed automatically by an XML workflow engine. A failure reasoning and diagnosis algorithm is devised to find all possible causes of a failed execution when problems occur. Preliminary proof-of-concept implementation and evaluation results demonstrate the feasibility and effectiveness of our framework and techniques.
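The AC-diagram-to-XML step can be sketched as a traversal that emits one XML element per activity and per control edge. The element and attribute names below are illustrative, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

def ac_to_xml(activities, edges):
    """Traverse an activity-control graph (a list of activity names
    plus (source, target, guard) control edges) and emit an XML
    workflow specification string."""
    root = ET.Element("workflow")
    for name in activities:
        ET.SubElement(root, "activity", id=name)
    for src, dst, guard in edges:
        # 'from' is a Python keyword, so pass attributes as a dict
        ET.SubElement(root, "control",
                      attrib={"from": src, "to": dst, "guard": guard})
    return ET.tostring(root, encoding="unicode")
```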

9.
Techniques and tools for formally verifying compliance with industry standards are important, especially in System-on-Chip (SoC) designs: a failure to integrate externally developed intellectual property (IP) cores is prohibitively costly. There are three essential components in the practical verification of compliance with a standard. First, an easy-to-read and yet formal specification of the standard is needed; we propose Live Sequence Charts (LSCs) as a high-level visual notation for writing specifications. Second, assertions should be generated directly from the specification; an implementation will be scrutinized, usually by model checking, to check that it satisfies each assertion. Third, a formal link must be made between proofs of assertions and compliance with the original specification. As an example, we take the Virtual Component Interface (VCI) Standard. We compare three efforts in verifying that the same register transfer level code is VCI-compliant. The first two efforts were manual, while the third used a tool, lscAssert, to automatically generate assertions in LTL. We discuss the details of the assertion generation algorithm.

10.
The development of user interfaces for safety critical systems is driven by requirements specifications. Because user interface specifications are typically embedded within complex systems requirements specifications, they can be intractable to manage. Proprietary requirements specification tools do not support the user interface designer in modelling and specifying the user interface. In this paper, a new way of working with embedded user interface specifications is proposed, exploiting sequence diagrams with a hypertext structure for representing and retrieving use cases. This new tool concept is assessed through an application to the requirements specification for the Airbus A380 air traffic control Datalink system; engineers involved in the development of the Airbus cockpit used a prototype of the tool concept to resolve a set of user interface design anomalies in the requirements specification. The results of the study are positive and indicate the kind of user interface to requirements specification tools that user interface designers themselves need.

11.
If a program does not fulfill a given specification, a model checker delivers a counterexample, a run which demonstrates the wrong behavior. Even with a counterexample, locating the actual fault in the source code is often a difficult task for the verification engineer. We present an automatic approach for fault localization in C programs. The method is based on model checking and reports only components that can be changed such that the difference between actual and intended behavior of the example is removed. To identify these components, we use the bounded model checker CBMC on an instrumented version of the program. We present experimental data that supports the applicability of our approach.
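The "report only components that can be changed to repair the behavior" criterion can be illustrated with a test-based stand-in: try a replacement for each component in isolation and keep the components whose replacement makes the counterexample input produce the intended output. The paper does this symbolically with CBMC rather than by executing variants; the variant dictionary here is purely illustrative.

```python
def localize(component_variants, failing_input, expected):
    """Return the names of components whose single-component repair
    makes the counterexample input produce the intended output.
    component_variants maps a component name to the whole-program
    behavior with only that component replaced."""
    return [name for name, variant in component_variants.items()
            if variant(failing_input) == expected]

# Hypothetical example: a buggy abs() that returns x unchanged.
variants = {
    "fix_guard": lambda x: x if x >= 0 else -x,  # repair the comparison
    "noop": lambda x: x,                          # leave the bug in place
}
```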

12.
蒋亚军  朱理 《计算机仿真》2006,23(10):178-180
In computer graphics and geometric modeling, solid models are commonly described by polygonal meshes. Because rendering time and storage grow in proportion to the number of mesh elements, complex mesh models are often impractical and must be simplified. Since arbitrary polygons can easily be triangulated, this paper proposes a new triangle-mesh simplification algorithm based on visual perception. The algorithm analyzes the importance of each triangle according to characteristics of human vision: how much detail is retained depends on how much each part of the model contributes to the overall visual effect. Within a user-specified tolerance, the mesh is simplified rapidly by collapsing triangles, yielding rich visual quality at a low rendering cost. Experimental results show that the algorithm is simple to implement, fast, and effectively supports level-of-detail model representation.
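Triangle collapse can be sketched as follows: each low-importance triangle has its three corners merged into a single centroid vertex, and triangles that become degenerate are dropped. As a simplification of the paper's vision-based importance measure, plain triangle area stands in for importance here.

```python
def triangle_area(p, q, r):
    """Area of a 3D triangle: half the magnitude of the cross product."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def simplify(vertices, triangles, min_importance):
    """Collapse every triangle whose importance falls below the
    user-specified threshold into a new centroid vertex; triangles
    collapsed to an edge or point are dropped."""
    vertices = list(vertices)
    remap = {}
    for tri in triangles:
        pts = [vertices[i] for i in tri]
        if triangle_area(*pts) < min_importance and not any(i in remap for i in tri):
            vertices.append(tuple(sum(c) / 3.0 for c in zip(*pts)))
            for i in tri:
                remap[i] = len(vertices) - 1
    kept = []
    for tri in triangles:
        mapped = tuple(remap.get(i, i) for i in tri)
        if len(set(mapped)) == 3:        # drop degenerate triangles
            kept.append(mapped)
    return vertices, kept
```

A production simplifier would iterate until the target size is reached and use an error metric that accounts for curvature; one pass keeps the sketch short.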

13.
Model-driven engineering refers to a range of approaches that use models throughout the systems and software development life cycle. Towards sustaining such a successful approach in practice, we present a model-based verification framework that supports the quantitative and qualitative analysis of SysML activity diagrams. To this end, we propose an algorithm that maps SysML activity diagrams into Markov decision processes expressed using the language of the probabilistic symbolic model checker PRISM. Furthermore, we elaborate on the correctness of our translation algorithm by proving its soundness with respect to a SysML activity diagrams operational semantics that we also present in this work. The generated models can be verified against a set of properties expressed in the probabilistic computation tree logic. To automate our approach, we developed a prototype tool that interfaces a modeling environment and the probabilistic model checker. We also show how to leverage adversary generation to provide the developer with a useful counterexample/witness as a feedback on the verified properties. Finally, the established theoretical foundations are complemented with an illustrative case study that demonstrates the usability and benefit of such a framework.
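The core of such a translation, mapping states of an activity graph to an integer-valued PRISM variable and probabilistic edges to PRISM commands, can be sketched in a few lines. The mapping below is a drastic simplification of the paper's algorithm (no forks, joins, or guards), but the emitted command syntax is standard PRISM.

```python
def activity_to_prism(edges, init):
    """Emit a PRISM MDP module from a probabilistic activity graph.
    edges maps a node name to a list of (probability, successor) pairs;
    node names are mapped to integer values of one state variable."""
    states = sorted({init} | set(edges)
                    | {t for outs in edges.values() for _, t in outs})
    idx = {s: i for i, s in enumerate(states)}
    lines = ["mdp",
             "module activity",
             f"  s : [0..{len(states) - 1}] init {idx[init]};"]
    for src, outs in edges.items():
        updates = " + ".join(f"{p} : (s'={idx[t]})" for p, t in outs)
        lines.append(f"  [] s={idx[src]} -> {updates};")
    lines.append("endmodule")
    return "\n".join(lines)
```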

14.
This paper describes how the state space exploration tool VeriSoft can be used to analyze parallel C/C++ programs compositionally. VeriSoft is employed for two analyses: transition trace analysis and assume/guarantee reasoning. Both analyses are compositional in the sense that the behaviour of a parallel program is determined in terms of the behaviour of its constituent processes. While both analyses have traditionally been carried out with “pencil and paper”, the paper demonstrates how VeriSoft can be used to automate them. In the context of transition trace analysis, the question whether a given program can exhibit a given trace is addressed with VeriSoft. To implement assume/guarantee reasoning, VeriSoft is used to determine whether a given program satisfies a given assume/guarantee specification. Since VeriSoft’s state space exploration is bounded and thus not complete in general, our proposed analyses are only meant to complement standard reasoning about parallel programs using traces or assume/guarantee specifications. For instance, a successful analysis does not always imply the general correctness of an assume/guarantee specification. However, it increases the confidence in the verification effort. On the other hand, an unsuccessful analysis always produces a counterexample which can be used to correct the specification or the program. VeriSoft’s optimization and visualization techniques make the analyses relatively efficient and effective.
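The question "can a given program exhibit a given trace" reduces, for a simple model, to searching the interleavings of the processes' local action sequences. The sketch below assumes each process is just an ordered list of actions; VeriSoft explores actual C/C++ executions, so this only illustrates the question being asked.

```python
def can_exhibit(processes, trace):
    """Depth-first search over interleavings: at each step, some
    process whose next local action matches the next trace action
    advances. Returns True iff the global trace is realizable."""
    def dfs(positions, k):
        if k == len(trace):
            return True
        for p, pos in enumerate(positions):
            if pos < len(processes[p]) and processes[p][pos] == trace[k]:
                nxt = list(positions)
                nxt[p] += 1
                if dfs(nxt, k + 1):
                    return True
        return False
    return dfs([0] * len(processes), 0)
```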

15.
The learning-based automated Assume–Guarantee reasoning paradigm has been applied in the last few years for the compositional verification of concurrent systems. Specifically, L* has been used for learning the assumption, based on strings derived from counterexamples, which are given to it by a model-checker that attempts to verify the Assume–Guarantee rules. We suggest three optimizations to this paradigm. First, we derive from each counterexample multiple strings to L*, rather than a single one as in previous approaches. This small improvement saves candidate queries and hence model-checking runs. Second, we observe that in existing instances of this paradigm, the learning algorithm is coupled weakly with the teacher. Thus, the learner completely ignores the details of the internal structure of the system and specification being verified, which are available already to the teacher. We suggest an optimization that uses this information in order to avoid many unnecessary membership queries (it reduces the number of such queries by more than an order of magnitude). Finally, we develop a method for minimizing the alphabet used by the assumption, which reduces the size of the assumption and the number of queries required to construct it. We present these three optimizations in the context of verifying trace containment for concurrent systems composed of finite state machines. We have implemented our approach in the ComFoRT tool, and experimented with real-life examples. Our results exhibit an average speedup of between 4 and 11 times, depending on the Assume–Guarantee rule used and the set of activated optimizations. This research was supported by the Predictable Assembly from Certifiable Components (PACC) initiative at the Software Engineering Institute, Pittsburgh.
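The first optimization (multiple strings per counterexample) and the membership-query savings can be sketched as follows. Deriving every non-empty suffix is one plausible reading of "multiple strings"; the real implementation may filter and weight them differently, and the memoizing teacher is a generic stand-in for the paper's structure-aware optimization.

```python
def strings_from_counterexample(cex):
    """Derive several candidate strings from one counterexample,
    rather than a single one: here, every non-empty suffix."""
    return [tuple(cex[i:]) for i in range(len(cex))]

class CachingTeacher:
    """Answers membership queries for L*, memoizing answers so that
    repeated queries cost nothing."""
    def __init__(self, membership):
        self.membership = membership     # word -> bool oracle
        self.cache = {}
        self.queries = 0                 # oracle calls actually made

    def ask(self, word):
        if word not in self.cache:
            self.queries += 1
            self.cache[word] = self.membership(word)
        return self.cache[word]
```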

16.
Graph transformation has recently become more and more popular as a general, rule-based visual specification paradigm to formally capture (a) requirements or behavior of user models (on the model-level), and (b) the operational semantics of modeling languages (on the meta-level) as demonstrated by benchmark applications around the Unified Modeling Language (UML). The current paper focuses on the model checking-based automated formal verification of graph transformation systems used either on the model-level or meta-level. We present a general translation that inputs (i) a metamodel of an arbitrary visual modeling language, (ii) a set of graph transformation rules that defines a formal operational semantics for the language, and (iii) an arbitrary well-formed model instance of the language and generates a transition system (TS) that serves as the underlying mathematical specification formalism of various model checker tools. The main theoretical benefit of our approach is an optimization technique that projects only the dynamic parts of the graph transformation system into the target transition system, which results in a drastic reduction in the state space. The main practical benefit is the use of existing back-end model checker tools, which directly provides formal verification facilities (without additional efforts required to implement an analysis tool) for many practical applications captured in a very high-level visual notation. The practical feasibility of the approach is demonstrated by modeling and analyzing the well-known verification benchmark of dining philosophers both on the model and meta-level.

17.
An interactive volume rendering tool for transfer function specification (cited by 9: 0 self-citations, 9 by others)
黄汉青  唐泽圣 《计算机学报》2005,28(6):1062-1067
A transfer function establishes the mapping from volume data to optical properties during volume rendering, so its specification directly affects image quality. This paper presents a simple and effective interactive volume rendering tool for transfer function specification. Because 2D texture hardware is widely available on commodity personal computers, the tool adopts a volume rendering method based on 2D texture hardware. Using the tool, the user can interactively specify the R, G, B and A transfer functions from the histogram of the volume data, thereby defining the mapping from data values to optical properties, with real-time visual feedback of the rendered result. The tool also provides a virtual trackball that lets the user interactively change the viewpoint, interactively zoom the rendered volume in and out, and choose between lighting and multi-texture-based trilinear interpolation for different rendering effects. The paper describes the techniques used to develop the tool and presents rendering results obtained with it.

18.
杨文华  周宇  黄志球 《软件学报》2021,32(4):889-903
Cyber-physical systems are widely deployed in critical domains such as industrial control and intelligent manufacturing, where system quality is paramount. However, the complexity of cyber-physical systems and the uncertainty they contain (for example, deviations when sensing the environment through sensors) make quality assurance extremely challenging. Verification is one effective way to assure system quality: given a system model and a specification, it can prove whether the system satisfies the required properties. Existing verification work on cyber-physical systems has made notable progress; model checking, for instance, has been used to verify whether system behavior under uncertainty satisfies a property specification, producing a concrete counterexample when a property is violated. An essential input to such verification is the uncertainty model, which describes the uncertainty present in the system. In practice, however, modeling uncertainty precisely is difficult, so the uncertainty model used in verification may not fully match reality, leading to verification results that are inaccurate and diverge from the real system. To address this problem, this paper proposes a counterexample-confirmation-based method for calibrating uncertainty models, refining verification results to improve their accuracy. The method first judges whether the uncertainty model used in verification is precise by confirming whether the counterexamples can actually be triggered in executions of the system. Imprecise models are then calibrated with a genetic algorithm, whose fitness function is constructed from the counterexample-confirmation results to guide the search, and hypothesis testing helps decide whether to accept the calibrated result. Experimental results on representative cases demonstrate the effectiveness of the proposed uncertainty model calibration method.
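The genetic-algorithm calibration step can be sketched generically: search a bounded uncertainty parameter for the value maximizing a fitness function. In the paper the fitness is built from counterexample confirmation on the running system; here it is an arbitrary callable, and the population size, mutation scale, and operators are illustrative choices.

```python
import random

def calibrate(fitness, lo, hi, pop_size=20, generations=30, seed=0):
    """Genetic-algorithm search for an uncertainty parameter in
    [lo, hi] maximizing fitness: keep the elite half, breed children
    by averaging (crossover) plus Gaussian noise (mutation)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2 + rng.gauss(0, (hi - lo) * 0.02)
            children.append(min(hi, max(lo, child)))   # clamp to bounds
        pop = elite + children
    return max(pop, key=fitness)
```

The paper additionally applies hypothesis testing before accepting the calibrated parameter, which this sketch omits.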

19.
A Problem Solving Environment (PSE) is an integrated system of application tools that support the solution of a given problem, or a set of related problems. Paramount in the development of such environments is the design, specification and integration of user interface tools that communicate between the application tools of the system and the user. Typically these interactions are object oriented and involve the interaction with tool parameters, which in many applications (CAD/ CAM, Imaging Systems, Image Processing), are represented by graphical data. This paper describes a user-interface tool development system in which both textual and graphical display, and interaction techniques are integrated under a single model. This allows the user to interact with tool parameters in either graphical or textual modes, and to have the parameters displayed in the manner most relevant to the problem set.

20.
Abstract: Currently, classifying samples into a fixed number of clusters (i.e. supervised cluster analysis) as well as unsupervised cluster analysis are limited in their ability to support 'cross-algorithms' analysis. It is well known that each cluster analysis algorithm yields different results (i.e. a different classification); even running the same algorithm with two different similarity measures commonly yields different results. Researchers usually choose the preferred algorithm and similarity measure according to analysis objectives and data set features, but they have neither a formal method nor tool that supports comparisons and evaluations of the different classifications that result from the diverse algorithms. The current research developed a methodology and a prototype decision support system based upon formal quantitative measures and a visual approach, enabling presentation, comparison and evaluation of multiple classification suggestions resulting from diverse algorithms. This methodology and tool were used in two basic scenarios: (I) a classification problem in which a 'true result' is known, using the Fisher iris data set; (II) a classification problem in which there is no 'true result' to compare with. In this case, we used a small data set from a user profile study (a study that tries to relate users to a set of stereotypes based on sociological aspects and interests). In each scenario, ten diverse algorithms were executed. The suggested methodology and decision support system produced a cross-algorithms presentation; all ten resultant classifications are presented together in a 'Tetris-like' format. Each column represents a specific classification algorithm, each line represents a specific sample, and formal quantitative measures analyse the 'Tetris blocks', arranging them according to their best structures, i.e. best classification.
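One standard formal quantitative measure for comparing two classifications across algorithms is the Rand index (the fraction of sample pairs on which the two agree); the abstract does not name its measures, so this is an illustrative choice. The second function builds the sample-by-algorithm matrix behind the 'Tetris-like' presentation.

```python
def rand_index(labels_a, labels_b):
    """Fraction of sample pairs on which two classifications agree:
    grouped together in both, or separated in both. 1.0 means the two
    algorithms produced equivalent groupings."""
    n = len(labels_a)
    agree = pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            same_a = labels_a[i] == labels_a[j]
            same_b = labels_b[i] == labels_b[j]
            agree += same_a == same_b
    return agree / pairs

def cross_algorithm_matrix(classifications):
    """'Tetris-like' view: one row per sample, one column per
    algorithm, given each algorithm's label vector."""
    return [list(row) for row in zip(*classifications)]
```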
