Similar Literature
20 similar records retrieved (search time: 31 ms)
1.
Manufacturers of automated systems and their components have invested enormous time and effort in R&D activities, which has led to prototypes demonstrating new capabilities as well as the introduction of such systems to the market in different domains. Manufacturers need to make sure that these systems function as intended and according to specification. This is not a trivial task: system complexity rises dramatically the more integrated and interconnected these systems become as automated functionality and features are added to them. This effort translates into an overhead on the verification and validation (V&V) process, making it time-consuming and costly. In this paper, we present VALU3S, an ECSEL JU (joint undertaking) project that aims to evaluate state-of-the-art V&V methods and tools and to design a multi-domain framework that creates a clear structure around the components and elements needed to conduct the V&V process. The main expected benefit of the framework is to reduce the time and cost needed to verify and validate automated systems with respect to safety, cyber-security, and privacy requirements. This is done through the identification and classification of evaluation methods, tools, environments, and concepts for the V&V of automated systems with respect to these requirements. VALU3S will provide guidelines to the V&V community, including engineers and researchers, on how the V&V of automated systems can be improved considering the cost, time, and effort of conducting V&V processes. To this end, VALU3S brings together a consortium with partners from 10 countries: a mix of 25 industrial partners, 6 leading research institutes, and 10 universities.

2.
Knowledge, 1999, 12(7): 341-353
Despite a surge of publications on the verification and validation of knowledge-based systems and expert systems in the past decade, there are still gaps in the study of verification and validation (V&V) of expert systems, not the least of which is the lack of appropriate semantics for expert system programming languages. Without such a semantics, it is hard to formally define and analyze knowledge base anomalies such as inconsistency and redundancy, and it is hard to assess the effectiveness of the V&V tools, methods, and techniques that have been developed or proposed. In this paper, we develop an approximate declarative semantics for rule-based knowledge bases and provide a formal definition and analysis of knowledge base inconsistency, redundancy, circularity, and incompleteness in terms of theories in first-order predicate logic. We also offer classifications of commonly found cases of inconsistency, redundancy, circularity, and incompleteness. Finally, general guidelines on how to remedy knowledge base anomalies are given.
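The circularity anomaly mentioned in this abstract lends itself to a concrete illustration. The sketch below (not from the paper; rule names and predicates are hypothetical) treats each rule "antecedents imply consequent" as edges of a dependency graph and flags circularity via depth-first search:

```python
# Hypothetical sketch: a rule base is circular when following rule
# dependencies (antecedent -> consequent) leads back to a predicate
# already on the current derivation path.

def has_circularity(rules):
    """rules: list of (antecedent_list, consequent). Return True on a cycle."""
    graph = {}
    for antecedents, consequent in rules:
        for a in antecedents:
            graph.setdefault(a, set()).add(consequent)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            state = color.get(succ, WHITE)
            if state == GRAY:               # back edge: cycle detected
                return True
            if state == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(dfs(n) for n in list(graph) if color.get(n, WHITE) == WHITE)

# a -> b, b -> c, c -> a is circular; dropping the last rule breaks the cycle
circular = has_circularity([(["a"], "b"), (["b"], "c"), (["c"], "a")])
acyclic = has_circularity([(["a"], "b"), (["b"], "c")])
```

The same graph view extends naturally to the paper's other anomalies: redundancy shows up as duplicate edges, and incompleteness as predicates with no incoming rule.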

3.
One of the most important phases in the methodology for the development of intelligent systems is the evaluation of the performance of the implemented product, a process popularly known as verification and validation (V&V). The majority of tools designed to support the V&V process focus on verification to the detriment of validation, and are limited to an analysis of the internal structures of the system. The authors of this article propose a methodology for the development of a results-oriented validation, and a tool (SHIVA) is presented which facilitates the tasks included in the methodology whilst covering quantitative as well as heuristic aspects. The result is an intelligent tool for the validation of intelligent systems.

4.
Anomalies in a knowledge base are one of the major factors affecting the performance of the entire knowledge system, so acquired knowledge must be checked. This paper surveys research on knowledge base anomaly detection and verification, presents a classification of anomalous knowledge and its hazards, analyzes why knowledge base verification is difficult, introduces static and dynamic methods for knowledge base verification, lists several well-known international knowledge base verification tools, and outlines prospects for future research on knowledge base verification.

5.
As blockchain technology gains popularity in industry and society, solutions for the Verification and Validation (V&V) of blockchain-based software applications (BC-Apps) have started to receive equal attention. To ensure that BC-Apps are properly developed before deployment, it is paramount to apply systematic V&V to verify their functional and non-functional requirements. While existing research aims at addressing the challenges of engineering BC-Apps by providing testing techniques and tools, blockchain-based software development is still an emerging research discipline, and best practices and tools for the V&V of BC-Apps are therefore not yet sufficiently developed. In this paper, we provide a comprehensive survey of V&V solutions for BC-Apps. Specifically, using a layered approach, we synthesize V&V tools and techniques addressing different components at various layers of the BC-App stack, as well as across the whole stack. Next, we discuss the challenges associated with BC-App V&V and summarize a set of future research directions based on the challenges and gaps identified in existing work. Our study aims to highlight the importance of BC-App V&V and to pave the way for disciplined, testable, and verifiable BC-App development.

6.
As software and software-intensive systems become increasingly ubiquitous, the impact of failures can be tremendous. In industries such as aerospace, medical devices, or automotive, such failures can cost lives or endanger mission success. Software faults can arise from the interaction between the software, the hardware, and the operating environment. Unanticipated environmental changes lead to software anomalies that may have significant impact on the overall success of the mission. Latent coding errors can trigger faults at any time during system operation, despite the significant effort usually expended on verification and validation (V&V) of the software system. It is becoming increasingly apparent that pre-deployment V&V is not enough to guarantee that a complex software system meets all safety, security, and reliability requirements. Software Health Management (SWHM) is a new field concerned with the development of tools and technologies to enable automated detection, diagnosis, prediction, and mitigation of adverse events due to software anomalies while the system is in operation. The prognostic capability of SWHM to detect and diagnose failures before they happen will yield safer and more dependable systems in the future. This paper addresses the motivation, needs, and requirements of software health management as a new discipline and motivates the need for SWHM in safety-critical applications.

7.
To support debugging, maintenance, verification and validation (V&V) and/or independent V&V (IV&V), it is necessary to understand the relationship between defect reports and their related artifacts. For example, one cannot correct a code-related defect report without being able to find the code that is affected. Information retrieval (IR) techniques have been used effectively to trace textual artifacts to each other. This has generally been applied to the problem of dynamically generating a trace between artifacts in the software document hierarchy after the fact (after development has proceeded to at least the next lifecycle phase). The same techniques can also be used to trace textual artifacts of the software engineering lifecycle to defect reports. We have applied the term frequency-inverse document frequency (TF-IDF) technique with relevance feedback, as implemented in our requirements tracing on-target (RETRO) tool, to the problem of tracing textual requirement elements to related textual defect reports. We evaluated the technique using a dataset for a NASA scientific instrument and found that recall of over 85% with precision of 69%, and recall of 70% with precision of 99%, could be achieved on two subsets of the dataset, respectively.
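The TF-IDF tracing idea in this abstract can be sketched in a few lines. The example below is an illustration only, not the RETRO tool: each artifact is tokenized, weighted by term frequency times inverse document frequency, and candidate defect reports are ranked by cosine similarity to a requirement (the artifact texts are invented for the example):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented artifact texts for illustration.
requirement = "thruster valve shall close on timeout".split()
defect_a = "thruster valve stuck open during timeout test".split()
defect_b = "display font rendering glitch".split()

req_vec, vec_a, vec_b = tfidf_vectors([requirement, defect_a, defect_b])
# defect_a shares weighted terms with the requirement; defect_b shares none,
# so defect_a ranks above defect_b as a candidate trace link.
```

Relevance feedback, as used in RETRO, would then adjust the requirement's vector toward links an analyst confirms and away from rejected ones before re-ranking.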

8.
Knowledge-based systems (KBSs) are being used in many application areas where their failures can be costly because of losses in services, property, or even life. To ensure their reliability and dependability, it is therefore important that these systems are verified and validated before they are deployed. This paper provides perspectives on issues and problems that impact the verification and validation (V&V) of KBSs. Some of the reasons why V&V of KBSs is difficult are presented. The paper also provides an overview of different techniques and tools that have been developed for performing V&V activities. Finally, some of the research issues that are relevant for future work in this field are discussed.

9.
We describe a progression from pilot studies to the development and use of domain-specific verification and validation (V&V) automation. Our domain is the testing of an AI planning system that forms a key component of an autonomous spacecraft. We used pilot studies to ascertain opportunities for, and suitability of, automating various analyses whose results would contribute to V&V in our domain. These studies culminated in the development of an automatic generator of automated test oracles, which was then applied and extended in the course of testing the spacecraft's AI planning system. Richardson et al. (1992, In Proceedings of the 14th International Conference on Software Engineering, Melbourne, Australia, pp. 105-118) present the motivation for automatic test oracles and consider the issues and approaches particular to test oracles derived from specifications. Our work, carried through from conception to application, confirms many of their insights. Generalizing from our specific domain, we present some additional insights and recommendations concerning the use of test oracles for the V&V of knowledge-based systems.

10.
11.
A Formal Verification Environment for Railway Signaling System Design
A fundamental problem in the design and development of embedded control systems is the verification of safety requirements. Formal methods, offering a mathematical way to specify and analyze the behavior of a system, together with the related support tools, can successfully be applied in the formal proof that a system is safe. However, the complexity of real systems is such that automated tools often fail to formally validate them. This paper outlines an experience in formal specification and verification carried out in a pilot project aiming at the validation of a railway computer-based interlocking system. Both the specification and the verification phases were carried out in the JACK (Just Another Concurrency Kit) integrated environment. The formal specification of the system was done by means of process algebra terms. The formal verification of the safety requirements was done first by giving a logical specification of those requirements, and then by means of model checking algorithms. Abstraction techniques were defined to make the problem of safety requirements validation tractable for the JACK environment.

12.

Verification and validation (V&V) of computer codes and models used in simulations are two highly important aspects of scientific practice that have recently been discussed widely by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. Because complex simulations are generally opaque to a practitioner, the Duhem problem can arise with verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of a failure. I argue that a clear distinction between computer modeling and simulation has to be made to disentangle verification and validation. Drawing on that distinction, I suggest associating modeling with verification, and simulation, which shares common epistemic strategies with experimentation, with validation. To explain the reasons for their entanglement in practice, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I examine an approach to mitigating the Duhem problem for verification and validation that is generally applicable in practice and is based on differences in epistemic strategies and scopes. Based on this analysis, I suggest two strategies to increase the reliability of simulation results: avoiding alterations of verified models at the validation stage, and performing simulations of the same target system using two or more different models. In response to Winsberg's claim that verification and validation are entangled, I argue that by deploying the methodology proposed in this work it is possible to mitigate the inseparability of V&V in many if not all domains where modeling and simulation are used.


13.
Informal validation techniques such as simulation are used extensively in the development of embedded systems, while formal approaches such as model-checking and testing are important means of carrying out Verification and Validation (V&V) activities. Model-checking consists in exploring all possible behaviors of a model in order to perform a qualitative and quantitative analysis. However, this method remains of limited use as it runs into the problem of combinatorial explosion. Testing and model-checking do not take into account the context of use and the objectives of the model. Simulation overcomes these problems but is not exhaustive. Driven by simulation scenarios, which are an operational formulation of the V&V activity considered, simulation consists in exploring a subset of the state space of the model. This paper proposes a formal approach to assessing simulation scenarios. The formal specification of a model and the simulation scenarios applied to that model serve to compute the effective evolutions taken by the simulation. It is then possible to check whether a simulation fulfills its intended purpose. To illustrate this approach, an application study of an intelligent cruise controller is presented. The main contribution of this paper is that combining simulation objectives and formal methods makes it possible to define a qualitative metric for simulation evaluation without running a simulation.

14.
This paper presents a framework for augmenting independent validation and verification (IV&V) of software systems with computer-based IV&V techniques. The framework allows an IV&V team to capture its own understanding of the application as well as the expected behavior of any proposed system for solving the underlying problem by using an executable system reference model, which uses formal assertions to specify mission- and safety-critical behaviors. The framework uses execution-based model checking to validate the correctness of the assertions and to verify the correctness and adequacy of the system under test.

15.
IEEE Software, 1989, 6(3): 10-17
An explanation is given of software verification and validation (V&V) and how it fits into the development life cycle. How to apply V&V is also discussed, and evaluations of its effectiveness are summarized.

16.
As the architecture of modern software systems continues to evolve in a distributed fashion, the development of such systems becomes increasingly complex, which requires the integration of more sophisticated specification techniques, tools, and procedures into the conventional methodology. An essential capability of an integrated software development environment is a formal specification method that effectively captures the system's functional requirements as well as its performance requirements. A validation and verification (V&V) system based on a formal specification method is of paramount importance to the development and maintenance of distributed systems.

There has been recent interest in integrating software techniques and tools at the specification level, and an effective way of achieving such integration is by using wide-spectrum specification techniques. In view of these points, an integrated V&V system, called Integral, is presented that provides comprehensive and homogeneous analysis capabilities to both the specification and testing phases of the life-cycle of distributed software systems. The underlying software model that supports the various V&V activities in Integral is primarily based on Petri nets and is intended to be wide spectrum. The ultimate goal of this research is to demonstrate to the software industry, domestic or foreign, the availability and applicability of a new Petri-net-based software development paradigm. Integral is a prototype V&V system to support such a paradigm.
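To make the Petri-net foundation mentioned above concrete, here is a minimal sketch of Petri-net execution semantics (an illustration of the general formalism, not Integral's actual model or API): a marking maps places to token counts, and a transition is enabled when every input place holds enough tokens.

```python
# A transition is a pair (inputs, outputs), each a {place: token_count} dict.

def enabled(marking, transition):
    """True when every input place holds at least the required tokens."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, transition):
    """Return the new marking after firing: consume inputs, produce outputs."""
    inputs, outputs = transition
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p, n in outputs.items():
        m[p] = m.get(p, 0) + n
    return m

# Illustrative net: "send" consumes a token from 'ready', produces one in 'sent'.
send = ({"ready": 1}, {"sent": 1})
m0 = {"ready": 1, "sent": 0}
m1 = fire(m0, send)
# After firing once, 'send' is no longer enabled: 'ready' is empty.
```

V&V analyses on such a model, for example checking that an unsafe marking is unreachable, amount to exhaustively applying `fire` to every enabled transition from the initial marking.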


17.
The results of empirical studies in Software Engineering are limited to particular contexts, difficult to generalise, and the studies themselves are expensive to perform. Despite these problems, empirical studies can be made effective, and they are important to both researchers and practitioners. The key to their effectiveness lies in maximising the information that can be gained by examining and replicating existing studies and by using power analyses to determine an accurate minimum sample size. This approach was applied in a controlled experiment examining the combination of automated static analysis tools and code inspection in the context of the verification and validation (V&V) of concurrent Java components. The paper presents the results of this controlled experiment and shows that the combination of automated static analysis and code inspection is cost-effective. Throughout the experiment, a strategy to maximise the information gained was used. As a result, despite the size of the study, conclusive results were obtained, contributing to research on V&V technology evaluation.
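The power analysis for minimum sample size mentioned in this abstract has a standard closed form. The sketch below uses the normal approximation for a two-sided, two-sample comparison; the significance level, power, and effect sizes are illustrative defaults, not values taken from the paper:

```python
import math
from statistics import NormalDist

def min_sample_size(effect_size, alpha=0.05, power=0.8):
    """Per-group n for a two-sided two-sample z-test:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "large" standardized effect (Cohen's d = 0.8) needs far fewer
# participants per group than a "small" one (d = 0.2).
n_large = min_sample_size(0.8)
n_small = min_sample_size(0.2)
```

Running such a calculation before an experiment is what lets a small study still yield conclusive results: the sample is sized to the effect one expects to detect rather than chosen by convenience.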

18.
Reliability has become a key factor in KBS development. For this reason, it has been suggested that verification and validation (V&V) should become an integrated part of activities throughout the whole KBS development cycle. This paper illustrates how the PROLOGA workbench integrates V&V aspects into its modelling environment, so that these techniques can assist in the process of knowledge acquisition and representation. To this end, verification has to be performed incrementally and can no longer be delayed until after the system has been completed. It is shown how this objective can be realised through an approach that uses the decision table formalism as a modelling instrument.

19.
20.
Visualization is often employed as part of the simulation science pipeline: it is the window through which scientists examine their data for deriving new science, and the lens used to view modeling and discretization interactions within their simulations. We advocate that, as a component of the simulation science pipeline, visualization must be explicitly considered as part of the validation and verification (V&V) process. In this article, the authors define V&V in the context of computational science, discuss the role of V&V in the scientific process, and present arguments for the need for verifiable visualization.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号