Similar articles (20 results found)
1.
This paper proposes an approach to producing a verified design of a complex intelligent system. The system is modeled using a hierarchical structure based on a coordination protocol. The coordination protocol and the system architecture are described using a state machine representation. The Clarke-McMillan Symbolic Model Checker based on branching time temporal logic is used to verify some of the desired formal properties of the protocol such as completeness, boundedness and termination. This work shows that the model checker helps to bring the automatic verification of complex intelligent systems closer to a practical proposition.
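For illustration only (these are generic examples of the branching-time (CTL) specification style used with SMV-family model checkers, not the properties verified in the paper; request, response, idle and queue_len are hypothetical protocol signals), such properties are typically written as:

$\mathbf{AG}\,(\mathit{request} \rightarrow \mathbf{AF}\,\mathit{response})$ — every coordination request is eventually answered (termination)
$\mathbf{AG}\,\mathbf{EF}\,\mathit{idle}$ — the protocol can always return to its idle state (no livelock)
$\mathbf{AG}\,(\mathit{queue\_len} \le N)$ — the coordination queue never exceeds its bound $N$ (boundedness)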

2.
Increasingly powerful computing technology suggests an expansion of multimedia features in requirements engineering methods. It is not obvious, however, that techniques such as animation and narration would improve the effectiveness of communicating domain information to stakeholders for validation. Three principles from the cognitive theory of multimedia learning (CTML) are used to compare validation methods using animation and narration with more traditional methods using static diagrams and text. Results suggest animation and narration can have significant positive impact on the level of domain understanding attained by participants. In particular, narration yielded strongly significant results. While these results should be viewed as preliminary, they indicate a potential advantage in the use of narration and possibly animation in requirements validation. The results provide justification for further research on the integration of multimedia techniques in developing and validating requirements.

3.
An image segmentation algorithm delineates (an) object(s) of interest in an image. Its output is referred to as a segmentation. Developing these algorithms is a manual, iterative process involving repetitive verification and validation tasks. This process is time-consuming and depends on the availability of experts, who may be a scarce resource (e.g., medical experts). We propose a framework referred to as Image Segmentation Automated Oracle (ISAO) that uses machine learning to construct an oracle, which can then be used to automatically verify the correctness of image segmentations, thus saving substantial resources and making the image segmentation verification and validation task significantly more efficient. The framework also gives informative feedback to the developer as the segmentation algorithm evolves and provides a systematic means of testing different parametric configurations of the algorithm. During the initial learning phase, segmentations from the first few (optimally two) versions of the segmentation algorithm are manually verified by experts. The similarity of successive segmentations of the same images is also measured in various ways. This information is then fed to a machine learning algorithm to construct a classifier that distinguishes between consistent and inconsistent segmentation pairs (as determined by an expert) based on the values of the similarity measures associated with each segmentation pair. Once the accuracy of the classifier is deemed satisfactory to support a consistency determination, the classifier is used to determine whether the segmentations produced by subsequent versions of the algorithm under test are (in)consistent with already verified segmentations from previous versions. This information is then used to automatically draw conclusions about the correctness of the segmentations. We have successfully applied this approach to 3D segmentations of the cardiac left ventricle obtained from CT scans and have obtained promising results (accuracies of 95%). Even though more experiments are needed to quantify the effectiveness of the approach in real-world applications, ISAO shows promise in increasing the quality and testing efficiency of image segmentation algorithms.
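A minimal sketch of the learning step, assuming binary 3-D masks, Dice/Jaccard-style overlap as the similarity measures, and a random forest as the classifier; the measure set, the toy labelled data and the classifier choice are illustrative assumptions, not the exact ingredients of ISAO:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def similarity_features(mask_a, mask_b):
    """Similarity measures between two binary segmentations of the same image."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = float(np.logical_and(a, b).sum())
    union = float(np.logical_or(a, b).sum())
    dice = 2.0 * inter / (a.sum() + b.sum() + 1e-9)
    jaccard = inter / (union + 1e-9)
    size_ratio = min(a.sum(), b.sum()) / (max(a.sum(), b.sum()) + 1e-9)
    return [dice, jaccard, size_ratio]

# Toy stand-in for expert-labelled segmentation pairs from the first algorithm
# versions: lightly perturbed pairs are "consistent" (1), heavily perturbed "inconsistent" (0).
rng = np.random.default_rng(0)
pairs, labels = [], []
for _ in range(200):
    base = rng.random((16, 16, 16)) > 0.5
    consistent = rng.random() < 0.5
    flip = rng.random((16, 16, 16)) < (0.02 if consistent else 0.4)
    pairs.append((base, np.logical_xor(base, flip)))
    labels.append(int(consistent))

X = np.array([similarity_features(a, b) for a, b in pairs])
oracle = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, np.array(labels))

def is_consistent(new_mask, verified_mask):
    """Oracle verdict for a segmentation produced by a later algorithm version."""
    return bool(oracle.predict(np.array([similarity_features(new_mask, verified_mask)]))[0])

Once trained on expert-verified pairs, a function like is_consistent stands in for the expert when checking segmentations from subsequent algorithm versions, which is the efficiency gain the framework targets.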

4.
Functional Safety is a major concern in the design of automation systems today. Many of those systems are realized using Programmable Logic Controllers (PLCs) programmed according to IEC 61131-3. PLCopen, as the IEC 61131 user organization, semi-formally specified a set of software function blocks to be used in safety applications according to IEC 61508. In the presented work, formal models in the form of timed automata for the safety function blocks (SFBs) are constructed from the semi-formal specifications. The conformance of the formalized blocks to the specification is verified using model checking. Furthermore, their behaviour is validated against specified test cases by simulation. The resulting verified and validated library of formal models is used to build a formal model of a given safety application, itself built from SFBs, and to verify and validate its properties.

5.
We apply the scenario-based approach to modeling, via the language of live sequence charts (LSCs) and the Play-Engine tool, to a real-world complex telecommunication service, Depannage. It allows a user to call for help from a doctor, the fire brigade, a car maintenance service, etc. These kinds of services are built on top of an embedded platform, using both new and existing service components, and their complexity stems from their distributed architecture, the various time constraints they entail, and their rapidly evolving underlying systems. A well-known problem in this class of telecommunication applications is that of feature interaction, whereby a new feature might cause problems in the execution of existing features. Our approach provides a methodology for high-level modeling of telecommunication applications that can help in detecting feature interaction at early development stages. We exhibit the results of applying the methodology to the specification, animation and formal verification of the Depannage service.

6.
This paper presents the results of a verification and validation process for an intelligent system. The system being studied is an Intelligent Tutorial that employs fuzzy logic and multiagent systems. Software engineering techniques were used in the verification process, while the validation exploited both qualitative and quantitative techniques.

7.
8.
This paper presents the findings of a survey of software tools built to assist in the verification and validation of knowledge-based systems. The tools were identified from literature sources from the period 1985-1995. The tool builders were contacted and asked to complete and return a survey that identified which testing and analysis techniques were utilised and covered by their tool. From these survey results it is possible to identify trends in tool development, technique coverage and areas for future research.

9.
The aim of this paper is to show how a multitasking application running under a real-time operating system compliant with the OSEK/VDX standard can be modeled by timed automata. The application under consideration consists of several non-preemptive tasks and interrupt service routines that can be synchronized by events. A model checking tool is used to verify timing and logical properties of the proposed model. Use of this methodology is demonstrated on an automated gearbox case study, and the worst-case response times obtained are compared with those of the classical method based on time-demand analysis. It is shown that the model-checking approach provides less pessimistic results due to a more detailed model and exhaustive state-space exploration.
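For reference, the classical time-demand analysis mentioned above is, in its basic preemptive fixed-priority form, the fixed-point recurrence sketched below; the non-preemptive OSEK/VDX setting adds blocking terms, and the task parameters here are made up, so treat this purely as an illustrative sketch of the method being compared against:

import math

def response_time(tasks, i):
    """Basic time-demand analysis for fixed-priority, preemptive periodic tasks.
    tasks: list of (C, T) pairs sorted by decreasing priority.
    Returns the worst-case response time of task i, or None if it exceeds T_i."""
    C_i, T_i = tasks[i]
    r = C_i
    while True:
        demand = C_i + sum(math.ceil(r / T_j) * C_j for C_j, T_j in tasks[:i])
        if demand == r:
            return r
        if demand > T_i:
            return None  # deemed unschedulable by this (pessimistic) analysis
        r = demand

# Three hypothetical tasks (execution time C, period T) in priority order.
tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(tasks, i) for i in range(len(tasks))])  # [1, 3, 10]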

10.
Most of today's embedded systems are very complex. These systems, controlled by computer programs, continuously interact with their physical environments through a network of sensory input and output devices. Consequently, the operations of such embedded systems are highly reactive and concurrent. Since embedded systems are deployed in many safety-critical applications, where failures can lead to catastrophic events, an approach that combines mathematical logic and formal verification is employed in order to ensure correct behavior of the control algorithm. This paper presents the What You Prove Is What You Execute (WYPIWYE) compilation strategy for a Globally Asynchronous Locally Synchronous (GALS) programming language called Safety-Critical SystemJ. SC-SystemJ is a safety-critical subset of the SystemJ language. A formal big-step transition semantics of SC-SystemJ is developed for compiling SC-SystemJ programs into propositional Linear Temporal Logic formulas. These LTL formulas are then converted into a network of Mealy automata using a novel and efficient compilation algorithm. The resultant Mealy automata have a straightforward syntactic translation into Promela code, and the resulting Promela models can be used for verifying correctness properties via the SPIN model checker. Finally, a single translation procedure compiles both the Promela code and the C/Java code for execution; it satisfies the de Bruijn criterion, i.e., this final translation step is simple enough that it can be manually verified.
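As a rough illustration of the compilation target (this is not the SC-SystemJ toolchain, and the arbiter example is invented), a Mealy automaton maps the current state and an input to an output and a next state; a network of such automata then has a direct, mechanical rendering as Promela processes for SPIN:

from dataclasses import dataclass

@dataclass(frozen=True)
class Mealy:
    """Mealy automaton: transitions[(state, input)] = (output, next_state)."""
    initial: str
    transitions: dict

    def run(self, inputs):
        state, outputs = self.initial, []
        for symbol in inputs:
            output, state = self.transitions[(state, symbol)]
            outputs.append(output)
        return outputs

# Hypothetical two-state reaction: grant a request only when idle.
arbiter = Mealy(
    initial="idle",
    transitions={
        ("idle", "req"): ("grant", "busy"),
        ("idle", "tick"): ("none", "idle"),
        ("busy", "done"): ("none", "idle"),
        ("busy", "tick"): ("none", "busy"),
    },
)
print(arbiter.run(["tick", "req", "tick", "done"]))  # ['none', 'grant', 'none', 'none']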

11.
Reliability has become a key factor in KBS development. For this reason, it has been suggested that verification and validation (V&V) should become an integrated part of activities throughout the whole KBS development cycle. In this paper, it will be illustrated how the PROLOGA workbench integrates V&V aspects into its modelling environment, such that these techniques can be of assistance in the process of knowledge acquisition and representation. To this end, verification has to be performed incrementally and can no longer be delayed until after the system has been completed. It will be shown how this objective can be realised through an approach that uses the decision table formalism as a modelling instrument.

12.
Context: Building a quality software product in the shortest possible time to satisfy the global market demand gives an enterprise a competitive advantage. However, uncertainties and risks exist at every stage of a software development project. These can have an extremely high influence on the success of the final software product. Early risk management practice is effective in managing such risks and contributes effectively towards project success.
Objective: Despite existing risk management approaches, a detailed guideline that explains where to integrate risk management activities into the project is still missing. Little effort has been directed towards evaluating the overall impact of a risk management method. We present a Goal-driven Software Development Risk Management Model (GSRM), its explicit integration into the requirements engineering phase, and the results of an empirical investigation of applying GSRM in a project.
Method: We combine the case study method with action research so that the results from the case study directly contribute to managing the studied project risks and to identifying ways to improve the proposed methodology. The data is collected from multiple sources and analysed both qualitatively and quantitatively.
Results: When risk factors are beyond the control of the project manager and project environment, it is difficult to control these risks. The project scope affects all the dimensions of risk. GSRM is a reasonable risk management method that can be employed in an industrial context. The study results have been compared against other study results in order to generalise findings and identify contextual factors.
Conclusion: A formal early-stage risk management practice provides early warning of the problems that exist in a project, and it contributes to overall project success. It is not necessary to always consider budget and schedule constraints as the top priority. There exist issues such as requirements, change management, and user satisfaction which can influence these constraints.

13.
We investigate the application of query-based verification to the analysis of behavioural trends of stochastic models of biochemical systems. We derive temporal logic properties which address specific behavioural questions, such as the likelihood for a species to reach a peak or a deadlock state, or to exhibit monotonic or oscillatory trends. We introduce a specific modelling convention through which stochastic models of biochemical systems are made suitable for verification of the behavioural queries we define. Based on the queries we identify, we define a classification procedure which, given a stochastic model, allows for identifying meaningful qualitative behavioural trends. We illustrate the proposed query-based classification on a number of simple abstract models of biochemical systems.
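For illustration only (these are generic probabilistic temporal-logic query forms, not the authors' exact formulas), with $X$ a species count and $X_{\max}$ its largest reachable value, the behavioural questions above are typically phrased as reachability queries over the stochastic model:

$P_{=?}\,[\,\mathbf{F}\,(X = X_{\max})\,]$ — likelihood that the species eventually reaches its peak level
$P_{=?}\,[\,\mathbf{F}\,\mathit{deadlock}\,]$ — likelihood of eventually reaching a deadlock state

Monotonic and oscillatory trends typically need extra bookkeeping in the model itself (e.g. variables recording the direction of change between states), which is one plausible role of the modelling convention the abstract introduces.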

14.
There is a great deal of research in requirements engineering specifically, and in information systems development in general, on scenarios as the ‘vocabulary’ for discussing and characterising designs for new artefacts. This is partly due to opening up the design process to a variety of participants in a project, and making explicit their ways of working, thinking and interacting with the design products. Scenarios, being concrete and easy to understand and use, provide the means to describe the design vocabulary. They are used to focus on episodic cases, exchange ideas and thoughts about them effectively, and generally describe requirements and designs for a new artefact from the user’s perspective. In this paper, we review the effectiveness of the current state of practice in scenario-based approaches. The objective of this evaluation exercise is to define the requirements for improved ‘by scenario’ approaches to cope with requirements and designs for developing new artefacts.

15.
The size and complexity of software systems make integrating new or modified requirements into the software system costly and time-consuming. The impact of requirements changes on other requirements, design elements and source code should be traced to determine the parts of the software to be changed. Considerable research has been devoted to relating requirements and design artifacts with source code. Less attention has been paid to relating requirements (R) with architecture (A) by using well-defined semantics of traces. Traces between R&A may be assigned manually, which is time-consuming and error-prone, and the resulting traces may be incomplete or invalid. In this paper, we present an approach for automatic generation and validation of traces between requirements (R) and architecture (A). Requirements relations and architecture verification techniques are used. A trace metamodel is defined with commonly used trace types between R&A. We use the semantics of traces and requirements relations for generating and validating traces with tool support. The tool provides the following: (1) generation and validation of traces by using requirements relations and/or verification of architecture, and (2) generation and validation of requirements relations by using traces. The tool is based on model transformation in ATL and term-rewriting logic in Maude.
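A toy sketch of the underlying idea of deriving candidate traces from requirements relations; the relation names and the single derivation rule are hypothetical simplifications, not the trace semantics or the ATL/Maude machinery of the paper:

# Requirements relations: (source requirement, relation, target requirement).
requirements_relations = [
    ("R1.1", "refines", "R1"),
    ("R1.2", "refines", "R1"),
]

# Already assigned/verified traces: (requirement, trace type, architecture element).
traces = {("R1", "allocatedTo", "AuthenticationComponent")}

def derive_traces(relations, traces):
    """If r refines r' and r' is allocated to an architectural element,
    propose that r is allocated to the same element (a candidate trace)."""
    derived = set()
    for src, rel, tgt in relations:
        if rel != "refines":
            continue
        for req, trace_type, element in traces:
            if req == tgt and trace_type == "allocatedTo":
                derived.add((src, "allocatedTo", element))
    return derived

# Both R1.1 and R1.2 become candidate traces to AuthenticationComponent.
print(derive_traces(requirements_relations, traces))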

16.
The decision table is one of the simplest representations of the rules underlying a systematic decision-making process, and is especially valuable in the development of knowledge-based systems. A two-dimensional table links all relevant combinations of input conditions to the desired combinations of output actions in a very intuitive way. This simplicity belies the complex considerations involved in verifying, validating, formulating or interpreting this (or any other) representation of machine-based knowledge. In this paper, the common styles of decision table representation are reviewed, a formulation of their meaning is presented, construction methods are surveyed, and an algorithm for ensuring consistency is suggested. The problems that may occur in imperfectly constructed tables are discussed, detection methods are reviewed, and some implementation methods are presented.
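A minimal sketch of the kind of completeness and consistency checks involved, for a limited-entry decision table over boolean conditions; the example rules and the exact checks are illustrative, not the algorithm suggested in the paper:

from itertools import product

# Rules: condition entries ('Y', 'N', or '-' for "irrelevant") mapped to a set of actions.
rules = [
    (("Y", "Y"), {"approve"}),
    (("Y", "N"), {"review"}),
    (("N", "-"), {"reject"}),
]

def expand(entry):
    """All concrete condition combinations matched by a rule's condition entry."""
    options = [("Y", "N") if e == "-" else (e,) for e in entry]
    return set(product(*options))

def check(rules, n_conditions):
    """Return (conflicting combinations, uncovered combinations)."""
    covered, conflicts = {}, []
    for entry, actions in rules:
        for combo in expand(entry):
            if combo in covered and covered[combo] != actions:
                conflicts.append(combo)  # contradiction / ambiguity between rules
            covered[combo] = actions
    missing = set(product("YN", repeat=n_conditions)) - covered.keys()
    return conflicts, missing  # both empty => table is consistent and complete

print(check(rules, 2))  # ([], set()) for the table above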

17.
The development of user interfaces (UIs) requires validation and verification of a set of required properties. Different kinds of properties are relevant to the human-computer interaction (HCI) area, and not all of them can be checked using classical software engineering validation and verification tools. Indeed, a large proportion of these properties relates to the user and to usability, and such properties usually require experimental validation. This paper addresses the cooperation between formal and experimental validation and verification of HCI properties. It focuses on a proof-based technique (Event B) and a model-based system (MBS) technique (SUIDT), and tries to bridge the gap between the two approaches in order to reduce the heterogeneity they introduce.

18.
Software components are specified, designed and implemented with the intention of being reused, and they are assembled in various contexts in order to produce a multitude of software systems. However, in the practice of software development, this ideal scenario is often unrealistic. This is mainly due to the lack of automatic and efficient support for predicting properties of the assembly code while assuming only limited knowledge of the properties of single components. Moreover, to make the component-based vision effective, the assembly code should evolve when things change, i.e., the properties guaranteed by the assembly before a change occurs must also hold after the change. Glue code synthesis approaches technically permit one to construct an assembly of components that guarantees specific properties but, in practice, they may suffer from the state-space explosion phenomenon. In this paper, we propose a Software Architecture (SA) based approach in which the use of the system SA and of SA verification techniques allows the system assembler to design architectural components whose interaction is verified with respect to the specified properties. By exploiting this validation, the system assembler can perform code synthesis by focusing only on each single architectural component, refining it as an assembly of actual components which respect the architectural component's observable behaviour. In this way, code synthesis is performed locally on each architectural component, instead of globally on the whole system's interactions, hence reducing the state-space explosion phenomenon. The approach can equally well be applied to efficiently manage the whole reconfiguration of the system when one or more components need to be updated, while still maintaining the required properties. The specified and verified system SA is used as the starting point for the derivation of glue adaptors that are required to apply changes in the composed system. The approach is first illustrated on an explanatory example and is then applied and validated on a real-world industrial case study.

19.
This paper describes a formal model for expressing the functional requirements of the man-machine interfaces of interactive systems. It also shows how this model can facilitate the automation of other useful activities such as checking for inconsistency, redundancy, and incompleteness in the specification, and validating the implementation of the interface against its original requirements. Finally, the paper comments on the authors' experience in developing an interactive system using this formal model.

20.
Safety verification and reachability analysis for hybrid systems
Safety verification and reachability analysis for hybrid systems is a very active research domain, and many seemingly quite different approaches have been proposed to solve this complex problem. This paper presents an overview of various approaches for autonomous, continuous-time hybrid systems and discusses them with respect to the basic problems related to verification.

