Similar Articles
20 similar articles found (search time: 15 ms)
1.
Before Expert System (ES) technology can be effectively used for complex systems, a comprehensive verification and validation (V&V) methodology must be defined. Evidence from the computer industry suggests that without such a methodology, Expert Systems will not be considered safe and reliable enough for field use. To this end, we have defined a design language called TOP (Terms, Operators, and Productions) for specifying knowledge-base designs that support verification and validation. This article discusses the key features of TOP: term subsumption, operator methods, sequence expressions, and a method for translating TOP designs into rules for the C Language Integrated Production System (CLIPS). To put these features into perspective, several illustrations from applications using TOP are presented. We believe that the concepts presented here serve as a solid foundation for practical application of Expert Systems and for continued research into their verification and validation. © 1994 John Wiley & Sons, Inc.
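To make the translation step concrete, here is a minimal sketch of how a TOP-style production might be rendered as a CLIPS rule. The simplified input representation (a rule name, a list of condition patterns, and an action) is an assumption for illustration, not the article's actual TOP grammar.

```python
# Hypothetical, simplified stand-in for a TOP production: the real TOP
# language includes term subsumption, operator methods, and sequence
# expressions that this sketch does not model.

def to_clips_rule(name, conditions, action):
    """Render one production as a CLIPS defrule string."""
    lhs = "\n  ".join(f"({c})" for c in conditions)
    return f"(defrule {name}\n  {lhs}\n  =>\n  ({action}))"

print(to_clips_rule(
    name="valve-overpressure",
    conditions=["valve (id ?v) (pressure ?p)", "test (> ?p 120)"],
    action="assert (alarm (source ?v) (level high))",
))
```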

2.
At the core of any engineering discipline is the use of measures, based on ISO standards or on widely recognized conventions, for the development and analysis of the artifacts produced by engineers. In the software domain, many alternatives have been proposed for measuring the same attributes, but there is no consensus on a framework for analyzing or choosing among these measures. Furthermore, there is often not even a consensus on the characteristics of the attributes to be measured. In this paper, a framework is proposed for a software measurement life cycle, with a particular focus on the design phase of a software measure. The framework includes definitions of the verification criteria that can be used to understand the stages of software measurement design, and it integrates the different perspectives of existing measurement approaches. In addition to inputs from the software measurement literature, the framework incorporates the concepts and vocabulary of metrology, which provide a clear definition of the concepts, as well as the activities and products, related to measurement. The aim is to give an integrated view involving both the practical and the theoretical side, as well as the basic underlying concepts of measurement.
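As a rough illustration of the metrological vocabulary such a framework draws on, a measure design can record at least the measured attribute, the entity, the unit, and the scale type. The field names and example below are assumptions, not the paper's schema.

```python
# Hedged sketch: a record capturing metrology vocabulary for one measure.
from dataclasses import dataclass

@dataclass
class MeasureDesign:
    name: str        # the measure's name
    attribute: str   # attribute of the entity being measured
    entity: str      # e.g. "source code module"
    unit: str        # the unit in which results are expressed
    scale_type: str  # nominal, ordinal, interval, ratio, or absolute

cc = MeasureDesign(
    name="cyclomatic complexity",
    attribute="control-flow complexity",
    entity="source code module",
    unit="linearly independent path",
    scale_type="absolute",
)
```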

3.
In this paper we propose a framework for validating software measurement. We start by defining a measurement structure model that identifies the elementary components of measures and of the measurement process, and then consider five other models involved in measurement: unit definition models, instrumentation models, attribute relationship models, measurement protocols, and entity population models. We examine a number of measures from the viewpoint of our validation framework and identify several shortcomings; in particular, we find a number of problems with the construction of function points. We also compare our view of measurement validation with ideas presented by other researchers and identify several areas of disagreement. Finally, we suggest rules that practitioners and researchers can use to avoid measurement problems, including the use of measurement vectors rather than artificially contrived scalars.
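The closing suggestion can be illustrated directly: keep the attributes as a measurement vector instead of collapsing them into one contrived scalar, as function points do with their weighted sum. The counts and weights below are invented for illustration.

```python
# A function-point-like size description kept as a vector of attributes.
size_vector = {"inputs": 12, "outputs": 7, "inquiries": 4, "files": 3}

# The contrived scalar: a weighted sum mixes attributes with different
# units and scales, which is where the validation problems start.
weights = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10}
scalar = sum(weights[k] * v for k, v in size_vector.items())

# The vector keeps each attribute separately analyzable.
print(size_vector, "vs single score:", scalar)
```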

4.
An overview of a comprehensive framework is given for estimating the predictive uncertainty of scientific computing applications. The framework is comprehensive in the sense that it treats both types of uncertainty (aleatory and epistemic), incorporates uncertainty due to the mathematical form of the model, and provides a procedure for including estimates of numerical error in the predictive uncertainty. Aleatory (random) uncertainties in model inputs are treated as random variables, while epistemic (lack-of-knowledge) uncertainties are treated as intervals with no assumed probability distributions. Approaches for propagating both types of uncertainty through the model to the system response quantities of interest are briefly discussed. Numerical approximation errors (due to discretization, iteration, and computer round-off) are estimated using verification techniques, and the conversion of these errors into epistemic uncertainties is discussed. Model form uncertainty is quantified using (a) model validation procedures, i.e., statistical comparisons of model predictions to available experimental data, and (b) extrapolation of this uncertainty structure to points in the application domain where experimental data do not exist. Finally, methods for conveying the total predictive uncertainty to decision makers are presented. The different steps in the framework are illustrated using a simple computational fluid dynamics example applied to a hypersonic wind tunnel.
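A minimal sketch of the nested propagation this implies: epistemic inputs are swept over their interval with no assumed distribution (outer loop), while aleatory inputs are sampled as random variables (inner loop), yielding a family of response CDFs whose envelope bounds the prediction. The model and numbers are invented.

```python
import random

def model(a, e):
    return a * e + 1.0                 # stand-in for the simulation code

def propagate(n_epistemic=20, n_aleatory=1000):
    cdfs = []
    for i in range(n_epistemic):       # outer loop: interval sweep
        e = 0.5 + (1.5 - 0.5) * i / (n_epistemic - 1)
        samples = sorted(model(random.gauss(10, 2), e)
                         for _ in range(n_aleatory))   # inner: Monte Carlo
        cdfs.append(samples)
    # The envelope of these empirical CDFs is a p-box: bounds on the
    # response CDF that reflect both kinds of uncertainty.
    return cdfs

cdfs = propagate()
```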

5.
6.
The first part of this paper reviews our efforts on knowledge-based software engineering, namely PROMIS, started in the 1990s. The key point of PROMIS is to generate applications automatically from domain knowledge as well as software knowledge, a feature achieved by separating the development of domain knowledge from the development of software. In PROMIS, however, we did not find an appropriate representation for the domain knowledge. Fortunately, in our recent work we found such a carrier for knowledge modules: knowware, a commercialized form of domain knowledge. This paper briefly introduces the basic definitions of knowware, knowledge middleware, and knowware engineering. Three life cycle models of knowware engineering and the design of corresponding knowware implementations are given. Finally, we discuss automatic generation of application systems and domain knowledge modeling on the J2EE platform, which combines the techniques of PROMIS, knowware, and J2EE, together with the development and deployment framework PROMIS/KW.

7.
Security and requirements engineering are two of the most important success factors in the development of a software product line (SPL). Goal-driven security requirements engineering approaches, such as Secure Tropos, have been proposed as a suitable paradigm for the elicitation of security requirements and their analysis along both a social and a technical dimension. Nevertheless, goal-driven security requirements engineering methodologies are not tailored to the specific demands of SPL, while specific proposals for SPL engineering have traditionally ignored security requirements. This paper presents work that fills this gap by proposing the SecureTropos-SPL framework.

8.
9.
10.
This paper presents a framework for augmenting independent validation and verification (IV&V) of software systems with computer-based IV&V techniques. The framework allows an IV&V team to capture its own understanding of the application, as well as the expected behavior of any proposed system for solving the underlying problem, by using an executable system reference model that employs formal assertions to specify mission- and safety-critical behaviors. The framework uses execution-based model checking to validate the correctness of the assertions and to verify the correctness and adequacy of the system under test.
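As an illustration of the idea (not of the paper's notation), a safety-critical behavior can be written as an executable assertion that a reference model checks at every step of a simulated execution. The state variables and predicate below are hypothetical.

```python
def safety_assertion(state):
    # "Thrusters must never fire while the docking latch is engaged."
    return not (state["thrusters_on"] and state["latch_engaged"])

def run_reference_model(trace):
    """Check every step of an execution trace against the assertion."""
    for step, state in enumerate(trace):
        assert safety_assertion(state), f"safety violated at step {step}"

run_reference_model([
    {"thrusters_on": True,  "latch_engaged": False},
    {"thrusters_on": False, "latch_engaged": True},
])
```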

11.
The Software Defined Systems (SDSys) paradigm has been introduced recently as a way to reduce the overhead of control and management operations in complex computing systems and to maintain a high level of security and protection. The main concept behind this technology is the isolation of the data plane from the control plane. Building a Software Defined System in a real-life environment is expensive and carries considerable risk, so such systems need to be simulated before real-life implementation and deployment. In this paper we present a novel experimental framework: a virtualized testbed environment for software-defined secure storage systems. It also covers related issues in large-scale data storage and sharing, such as deduplication. The work builds on the Mininet simulator, whose core components, the host, the switch, and the controller, are customized to build the proposed experimental simulation framework. The developed emulator not only supports the development and testing of SD-based secure storage solutions, but also serves as an experimentation tool for researchers and for benchmarking purposes, and could further be used as an educational tool to train students and novice researchers.
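A rough sketch, under the assumption that the customization follows Mininet's standard extension points, of how a topology of storage hosts might be assembled. The StorageHost class and its toy block store are invented; Topo, addHost, addSwitch, and addLink are the usual Mininet API, and running this requires a Mininet installation with root privileges.

```python
from mininet.net import Mininet
from mininet.node import Host
from mininet.topo import Topo

class StorageHost(Host):
    """Host extended with a toy content-addressed block store, a
    stand-in for the secure-storage and deduplication logic."""
    def config(self, **params):
        result = super().config(**params)
        self.blocks = {}                 # content hash -> block
        return result

class SDStorageTopo(Topo):
    def build(self):
        s1 = self.addSwitch("s1")        # data plane
        for i in range(1, 4):
            h = self.addHost(f"h{i}", cls=StorageHost)
            self.addLink(h, s1)

net = Mininet(topo=SDStorageTopo())      # controller left at the default
```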

12.
Verification and validation (V&V) of Knowledge Bases (KBs) are two sides of the same coin: one is intended to assure the structural correctness of the KB, while the other is intended to assure the functional correctness of the domain model embodied in the KB. Knowledge base refinement aims to appropriately revise the KB if a structural or functional error is detected during the V&V process. This paper presents a uniform framework for verification, validation, and refinement of KBs represented as sets of production rules, called the VVR system. It incorporates a contradiction-tolerant truth maintenance system (CTMS) for performing both verification and validation analyses, and simple explanation-based learning techniques for guiding the refinement process. Verification analysis detects and corrects the main types of structural anomalies: circular rules, redundant rules, inconsistent rules, and inconsistent data; it also checks the KB for completeness and violated semantic constraints. For validation, given a set of test cases, the VVR system can detect and correct functional errors caused by overgeneralization and/or overspecialization of the KB. If a set of test cases is not available, the VVR system can generate synthetic test cases to help the user evaluate KBS performance. © 1994 John Wiley & Sons, Inc.
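One of the structural checks can be sketched simply: reduce each rule to edges from its premises to its conclusion and report every conclusion that can derive itself. The rule encoding below is a simplification of the paper's production rules.

```python
def find_circular_rules(rules):
    """rules: list of (premises, conclusion) pairs over atomic facts."""
    graph = {}
    for premises, conclusion in rules:
        for p in premises:
            graph.setdefault(p, set()).add(conclusion)

    def reaches(start, goal):
        # Iterative depth-first search from start to goal.
        stack, seen = [start], set()
        while stack:
            for nxt in graph.get(stack.pop(), ()):
                if nxt == goal:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    # A rule is circular if its conclusion can re-derive itself.
    return [c for _, c in rules if reaches(c, c)]

kb = [({"a"}, "b"), ({"b"}, "c"), ({"c"}, "a")]   # a -> b -> c -> a
print(find_circular_rules(kb))                     # ['b', 'c', 'a']
```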

13.
Concurrent Embedded Real-Time Software (CERTS) is intrinsically different from traditional sequential, independent, and temporally unconstrained software. Software verification is more complex than hardware verification because of inherent flexibilities (dynamic behavior) that incur a multitude of possible system states, and the verification of CERTS is all the more difficult due to its concurrency and embeddedness. The work presented here shows how the complexity of CERTS verification can be reduced significantly by answering common engineering questions: when, where, and how must one verify embedded software? First, a new Schedule-Verify-Map strategy is proposed to answer the when question. Second, verification under system concurrency is proposed to answer the where question. Finally, a complete symbolic model checking procedure is proposed for CERTS verification. Several application examples illustrate the usefulness of our technique in increasing verification scalability.

14.
Processes for ontology creation and management are essential to the definition and development of semantic services, and Ontology Engineering is the research field that provides the mechanisms for managing the life cycle of ontologies. The process of building ontologies, however, can be tedious and sometimes exhausting. OWL-VisMod is a tool for ontology engineering based on visual analytics conceptual modeling, supporting life cycle management of OWL ontologies in both creation and understanding tasks. This paper evaluates OWL-VisMod through a set of defined tasks. The same tasks are also performed with Protégé, the best-known ontology engineering tool, in order to compare the results and determine how expert users perceive OWL-VisMod. The comparison shows that both tools obtain similar acceptance scores, but OWL-VisMod is rated more favorably on tasks involving user perception, owing to its visual analytics approach.

15.
Today, component- and service-based technologies play a central role in many aspects of enterprise computing. However, although the technologies used to define, implement, and assemble components have improved significantly in recent years, techniques for verifying systems created from them have changed very little. The correctness and reliability of component-based systems are still usually checked with the traditional testing techniques that were in use before components and services became widespread, and the associated costs and overheads remain high. This paper presents an approach that addresses this problem by making the system verification process more component-oriented. Based on the notion of built-in tests (BIT), tests that are packaged and distributed with prefabricated, off-the-shelf components, the approach partially automates the testing process, thereby reducing the effort needed to establish the acceptability of the system. The approach consists of a method that defines how components should be written to support and make use of run-time tests, and a resource-aware infrastructure that arranges for tests to be executed when they have minimal impact on the delivery of system services. After introducing the principles behind component-based verification and explaining the main features of the approach and its supporting infrastructure, we show by means of a case study how it can reduce system verification effort.
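A minimal sketch of the BIT idea, assuming an invented component API and load threshold: the component ships with its own tests, and the infrastructure defers them until the host is idle enough that running them has little impact on service delivery.

```python
import os
import time

class Thermometer:
    """A prefabricated component packaged with its built-in tests."""
    def to_fahrenheit(self, celsius):
        return celsius * 9 / 5 + 32

    def built_in_tests(self):
        yield ("freezing point", lambda: self.to_fahrenheit(0) == 32)
        yield ("boiling point",  lambda: self.to_fahrenheit(100) == 212)

def run_bits_when_idle(component, max_load=0.5):
    # Resource-aware scheduling: wait while the host is busy so the
    # BITs have minimal impact on system services (Unix-only load API).
    while os.getloadavg()[0] > max_load:
        time.sleep(1.0)
    return {name: check() for name, check in component.built_in_tests()}

print(run_bits_when_idle(Thermometer()))
```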

16.
Static analysis tools, such as resource analyzers, give useful information about software systems, especially in real-time and safety-critical applications, so the reliability of the results they produce is highly important. State-of-the-art static analyzers typically combine a range of complex techniques, make use of external tools, and evolve quickly, so formally verifying the tools themselves is not a realistic option. In this work, we propose a different approach: instead of the tools, we formally verify their results. The central idea of such a formal verification framework for static analysis is the method-wise translation of the information gathered about a program during its static analysis into specification contracts that contain enough information to be verified automatically. We instantiate this framework with costa, a state-of-the-art static analysis system for sequential Java programs that produces resource guarantees, and KeY, a state-of-the-art verification tool that formally verifies the correctness of such guarantees. Resource guarantees make it possible to be certain that programs will run within the indicated amount of resources, which may refer to memory consumption, the number of instructions executed, and so on. Our results show that the proposed tool cooperation can be used to automatically produce verified resource guarantees.
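The flavor of a verified resource guarantee can be sketched with a run-time analogue: the bound below plays the role of a costa-derived upper bound on steps, and the contract-checking decorator stands in for the generated specification contract, which KeY would discharge statically rather than at run time. All names are illustrative.

```python
import functools

def guarantees_steps(bound):
    """Attach 'at most bound(n) loop iterations' as a checkable contract."""
    def wrap(fn):
        @functools.wraps(fn)
        def checked(n):
            result, steps = fn(n)
            assert steps <= bound(n), f"resource guarantee violated: {steps}"
            return result
        return checked
    return wrap

@guarantees_steps(lambda n: n)      # claimed bound: linear in n
def sum_to(n):
    total, steps = 0, 0
    for i in range(n):
        total, steps = total + i, steps + 1
    return total, steps

print(sum_to(10))                   # 45, within the guaranteed 10 steps
```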

17.
This paper describes the application of quantitative techniques to software process improvement in an industrial environment. The investigation is based on the ami approach, which was developed during the ESPRIT II project 5494, ami (Application of Metrics in Industry). The activity-based improvement paradigm behind ami (assess, analyse, metricate, improve) advocates measurement as an essential component of process improvement for understanding the process, monitoring the changes, and evaluating the return on investment. This experience focuses on the improvement of reviews and inspections in a software development environment certified to ISO 9001. A measurement framework is put into place for understanding these verification processes and performing them efficiently. After a brief introduction to the ami approach, the case study is described following the four ami activities. The findings of the first data analysis are presented, as well as the role played by management during the initiative.
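A sketch of the kind of measures the metricate activity might define for reviews and inspections; the metric names and numbers below are invented for illustration.

```python
inspections = [
    {"doc": "design spec A", "pages": 40, "effort_h": 6, "majors": 9},
    {"doc": "module B code", "pages": 25, "effort_h": 4, "majors": 3},
]

for insp in inspections:
    rate = insp["pages"] / insp["effort_h"]    # inspection rate (pages/hour)
    density = insp["majors"] / insp["pages"]   # major defects found per page
    print(f"{insp['doc']}: {rate:.1f} pages/h, {density:.2f} majors/page")
```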

18.
Secure software engineering is a new research area proposed to address security issues during the development of software systems. It advocates that security characteristics be considered from the early stages of the software development life cycle, rather than added as another layer on an ad-hoc basis after the system is built. In this paper, we describe a UML-based Static Verification Framework (USVF) to support the design and verification of secure software systems in the early stages of the software development life cycle, taking into consideration both the security and the general requirements of the software system. USVF performs static verification on UML models consisting of UML class and state machine diagrams extended by an action language. We present an operational semantics of UML models, define a property specification language designed to reason about temporal and general properties of UML state machines using the semantic domains of the former, and implement the model checking process by translating models and properties into Promela, the input language of the SPIN model checker. We show that the methodology can be applied to the verification of security properties by representing the main aspects of security, namely availability, integrity, and confidentiality, in the USVF property specification language.
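The final translation step can be sketched as a tiny generator that turns a drastically simplified state machine into Promela for SPIN; both the state machine encoding and the emitted text are illustrative, not USVF's actual translation.

```python
def to_promela(name, states, transitions, init):
    """Emit a Promela proctype whose single variable tracks the state."""
    lines = [f"proctype {name}() {{",
             f"  int s = {states.index(init)};",
             "  do"]
    for src, dst in transitions:
        lines.append(f"  :: s == {states.index(src)} -> s = {states.index(dst)};")
    lines += ["  od", "}"]
    return "\n".join(lines)

print(to_promela("Session",
                 states=["LoggedOut", "LoggedIn"],
                 transitions=[("LoggedOut", "LoggedIn"),
                              ("LoggedIn", "LoggedOut")],
                 init="LoggedOut"))
```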

19.
In this paper we present a framework for type-2 fuzzy clustering that covers all steps of the clustering process, including the clustering algorithm, parameter estimation, and validation and verification indices. The proposed clustering algorithm is developed from a dual-centers type-2 fuzzy clustering model, in which the center of each cluster is defined by a pair of objects rather than a single object. The membership values of objects in clusters are type-2 fuzzy numbers, and the algorithm requires no type-reduction or defuzzification steps. In addition, the relations among cluster bandwidth, the distance between dual centers, and the fuzzifier parameter are derived and analyzed to facilitate parameter estimation. To determine the optimum number of clusters, we develop a new validation index that is compatible with the proposed model structure, along with a compatible verification index for comparing the results of the proposed model with those of the existing type-1 fuzzy clustering model. Finally, the results of computational experiments are presented to show the efficiency of the proposed approach.
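A very rough sketch of the dual-centers idea under invented formulas: each cluster is represented by a pair of centers, and an object's membership becomes an interval spanned by its closeness to the two centers, a simple interval type-2 value. None of this is the paper's actual model.

```python
import math

def interval_membership(x, center_a, center_b, m=2.0):
    """Return (lower, upper) membership bounds for point x."""
    def closeness(c):
        d = math.dist(x, c)
        return 1.0 / (1.0 + d ** (2 / (m - 1)))   # invented stand-in
    lo, hi = sorted((closeness(center_a), closeness(center_b)))
    return lo, hi

# A point nearer one center than the other gets a wide interval.
print(interval_membership((0.5, 0.5), (0.0, 0.0), (2.0, 2.0)))
```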

20.
The growing complexity of embedded real-time software requirements calls for the design of reusable software components, the synthesis and generation of software code, and automatic guarantees of nonfunctional properties such as performance, time constraints, reliability, and security. Available application frameworks targeted at the automatic design of embedded real-time software are poor at integrating functional and nonfunctional requirements. To bridge this gap, we present the design flow and internal architecture of a newly proposed framework called the verifiable embedded real-time application framework (VERTAF), which integrates software component-based reuse, formal synthesis, and formal verification. A formal UML-based embedded real-time object model is proposed for component reuse. Formal synthesis employs quasi-static and quasi-dynamic scheduling with automatic generation of multi-layer, portable, efficient code. Formal verification integrates a model checker kernel from SGM, adapted for embedded software. The proposed architecture is component-based and allows plug-and-play for the scheduler and the verifier. Using VERTAF to develop application examples significantly reduced design effort and illustrated how high-level reuse of software components, combined with automatic synthesis and verification, can increase design productivity.
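The plug-and-play point can be sketched as plain constructor injection: the scheduler and the verifier sit behind small interfaces and can be swapped independently. The class and method names are assumptions for illustration, not VERTAF's API.

```python
class DeadlineScheduler:
    """Stand-in for a quasi-static scheduler plug-in."""
    def schedule(self, tasks):
        return sorted(tasks, key=lambda t: t["deadline"])

class StubVerifier:
    """Stand-in for the model-checker kernel plug-in."""
    def verify(self, model):
        return True

class Framework:
    def __init__(self, scheduler, verifier):   # plug-and-play injection
        self.scheduler, self.verifier = scheduler, verifier

    def build(self, tasks, model):
        plan = self.scheduler.schedule(tasks)
        assert self.verifier.verify(model), "verification failed"
        return plan

fw = Framework(DeadlineScheduler(), StubVerifier())
print(fw.build([{"name": "t1", "deadline": 5},
                {"name": "t2", "deadline": 2}], model=None))
```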
