Similar Documents
20 similar documents found.
1.
Event-B is a formal system-modelling language based on set theory and predicate logic, which uses refinement strategies to build progressively refined models of a system. This paper proposes a method for applying Event-B to real industrial settings, consisting of three steps: rewriting the requirements, building an abstract model, and refining it layer by layer. First, the requirements are rewritten from three main perspectives (environment, function, and properties) and the refinement strategy is fixed. Next, an abstract model is built using formal methods and verified. Finally, starting from the correct abstract model, requirements are added and the model is refined layer by layer according to the refinement strategy, with each layer verified; from the final layer that satisfies the requirements, tools can then generate code automatically. The methodology uses refinement theory to make the requirements and properties of the system under development explicit in an incremental fashion, and to model and verify them formally, ensuring the correctness of the model. To demonstrate the feasibility of the methodology, a real industrial multi-application smart card is used as a case study, and the application of the method to practical modelling and verification is presented using Event-B and its tool platform Rodin.
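To make the refinement steps concrete: each refined event must discharge the standard Event-B guard-strengthening proof obligation (standard notation from the Event-B literature, not something specific to this paper). With abstract variables v, concrete variables w, abstract invariant I(v), gluing invariant J(v, w), concrete guard H(w) and abstract guard G(v), the obligation reads

    I(v) \land J(v, w) \land H(w) \Rightarrow G(v)

that is, whenever a concrete event is enabled, the abstract event it refines must be enabled too; the Rodin platform generates such obligations for every refinement layer and discharges them automatically or interactively.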

2.
The notion of uniform closure operator is introduced, and it is shown how this concept surfaces in two different areas of application of abstract interpretation, notably in semantics design for logic programs and in the theory of abstract domain refinements. In logic programming, uniform closures permit generalization, from an order-theoretic perspective, of the standard hierarchy of declarative semantics. In particular, we show how to reconstruct the model-theoretic characterization of the well-known s-semantics using pure order-theoretic concepts only. As far as the systematic refinement operators on abstract domains are concerned, we show that uniform closures capture precisely the property of a refinement of being invertible, namely of admitting a related operator that simplifies as much as possible a given abstract domain of input for that refinement. Exploiting the same argument used to reconstruct the s-semantics of logic programming, we obtain a precise relationship between refinements and their inverse operators: we demonstrate that they form an adjunction with respect to a conveniently modified complete order among abstract domains.
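As order-theoretic background for the above (standard definitions, not contributions of this paper): a map \rho : L \to L on a complete lattice \langle L, \le \rangle is an upper closure operator iff it is

    monotone:   x \le y \Rightarrow \rho(x) \le \rho(y)
    extensive:  x \le \rho(x)
    idempotent: \rho(\rho(x)) = \rho(x)

and two monotone maps F : L \to M and G : M \to L form an adjunction (Galois connection) precisely when F(x) \le_M y \iff x \le_L G(y). The refinement/inverse-operator pair of the paper is an instance of this pattern, taken over a suitably modified complete order among abstract domains.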

3.
4.
Automated Refinement of First-Order Horn-Clause Domain Theories
Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, FORTE (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. FORTE uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. FORTE is demonstrated in several domains, including logic programming and qualitative modelling.
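The hill-climbing loop described above is easy to picture in code. The following Python sketch is a toy propositional reconstruction, not FORTE's implementation: a theory is a list of Horn rules, each example says whether "goal" should be derivable from its facts, and the operator library is cut down to just rule deletion and body-literal deletion.

# A theory is a list of Horn rules (head, frozenset of body atoms).
# An example pairs a frozenset of facts with the expected truth of "goal".
def derives_goal(theory, facts):
    # Forward-chain to a fixpoint, then check whether "goal" was derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in theory:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return "goal" in derived

def accuracy(theory, examples):
    return sum(derives_goal(theory, f) == lab for f, lab in examples) / len(examples)

def revisions(theory):
    # Two toy operators: delete a whole rule, or delete one body literal.
    for i, (head, body) in enumerate(theory):
        yield theory[:i] + theory[i + 1:]
        for lit in body:
            yield theory[:i] + [(head, body - {lit})] + theory[i + 1:]

def refine(theory, examples):
    # Greedy hill climbing: implement the best revision until none improves.
    while True:
        candidates = list(revisions(theory))
        if not candidates:
            return theory
        best = max(candidates, key=lambda c: accuracy(c, examples))
        if accuracy(best, examples) <= accuracy(theory, examples):
            return theory
        theory = best

theory = [("goal", frozenset({"a", "b"}))]       # over-specialized rule
examples = [(frozenset({"a"}), True), (frozenset(), False)]
print(refine(theory, examples))                  # [('goal', frozenset({'a'}))]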

5.
Expert classification systems have proven themselves effective decision makers for many types of problems. However, the accuracy of such systems is often highly dependent upon the accuracy of a human expert's domain theory. When human experts learn or create a set of rules, they are subject to a number of hindrances. Most significantly, experts are, to a greater or lesser extent, restricted by the tradition of scholarship which has preceded them and by an inability to examine large amounts of data in a rigorous fashion without the effects of boredom or frustration. As a result, human theories are often erroneous or incomplete. To escape this dependency, machine learning systems have been developed to automatically refine and correct an expert's domain theory. When theory revision systems are applied to expert theories, they often concentrate on the reformulation of the knowledge provided rather than on the reformulation or selection of input features. The general assumption seems to be that the expert has already selected the set of features that will be most useful for the given task. That set may, however, be suboptimal. This paper studies theory refinement and the relative benefits of applying feature selection versus more extensive theory reformulation.

6.
This paper explores a formalism for describing a wide class of multimedia document constraints, based on an interval temporal logic. We describe the requirements that arise from the multimedia documents application area, and we illustrate these requirements using several examples. Then we present the temporal logic formalism that we use. This logic extends existing interval temporal logic with a number of new features: actions, framing of actions, past operators, a projection-like operator called filter and a new handling of interval length. The notation is applied to the specification of the examples, and in particular a set of logical manipulations, providing feedback to an author, is presented. A model theory, logic and satisfaction relation are defined for the notation.

7.
In this paper we consider the relationship between refinement-oriented specification and specifications using a temporal logic. We investigate the extent to which one can check whether a program in a process algebra, such as Communicating Sequential Processes (CSP), satisfies a temporal logic specification using a refinement-based model checker, such as FDR. We consider what atomic formulae are appropriate in a temporal logic for specifying communicating processes, in particular where one wants to talk about the availability of events. We then show that, perhaps surprisingly, the standard stable failures model is not adequate for capturing specifications in such a logic: instead the refusal traces model must be used. We formalise the logic by giving it a semantics in this model. We show that the temporal operators eventually and until, and negation, cannot, in general, be tested for via simple refinement checks. For the remaining fragment of the logic, we present a translation into simple refinement checks. Finally, we show that refusal traces equivalence is characterised by a slightly augmented version of that fragment.

8.
Probabilistic Automata (PAs) are a widely-recognized mathematical framework for the specification and analysis of systems with non-deterministic and stochastic behaviors. In a series of recent papers, we proposed Abstract Probabilistic Automata (APAs), a new abstraction framework for representing possibly infinite sets of PAs. We have developed a complete abstraction theory for APAs, and also proposed the first specification theory for them. APAs support both satisfaction and refinement operators, together with classical stepwise design operators. One of the major drawbacks of APAs is that the formalism cannot capture PAs with hidden actions – such actions are however necessary to describe behaviors that shall not be visible to a third party. In this paper, we revisit and extend the theory of APAs to such a context. Our first main result takes the form of a proposal for a new probabilistic satisfaction relation that captures several definitions of PAs with hidden actions. Our second main contribution is to revisit all the operations and properties defined on APAs for such notions of PAs. Finally, we also establish the first link between stochastic modal logic and APAs, hence linking an automata-based specification theory to a logical one.

9.
The notion of abstractions in programming is characterized by the distinction between specification and implementation. As far as the specification structures are concerned, hierarchical program development with abstraction mechanisms is naturally regarded as a process of theory extensions in a many-sorted logic. To support such program development, a language called t is proposed with which one can build up theories in a structured way and write their program implementation. There, the implementation is regarded as another level of theory extension, and the relation between the specification and the implementation of an abstraction is characterized in terms of a homomorphism between the two theories. On this formalism, a mechanizable proof method is introduced for validation of implementations of both data and procedural abstraction. Finally, a new data type concept is introduced to generalize the so-called type-parametrization mechanism. A justification of this concept within first-order logic is provided as well as its applications to program structuring and verification.

10.
Verification and validation (V&V) of Knowledge Bases (KBs) are two sides of the same coin: one is intended to assure the structural correctness of the KB, while the other is intended to assure the functional correctness of the domain model embodied in the KB. Knowledge base refinement aims to appropriately revise the KB if a structural or functional error is detected during the V&V process. This paper presents a uniform framework for verification, validation and refinement of KBs represented as sets of production rules, called the VVR system. It incorporates a contradiction-tolerant truth maintenance system (CTMS) for performing both verification and validation analyses, and some simple explanation-based learning techniques for guiding the refinement process. Verification analysis consists of detecting and correcting the main types of structural anomalies: circular rules, redundant rules, inconsistent rules, and inconsistent data, and checks the KB for completeness and violated semantic constraints. In terms of validation, given a set of test cases, the VVR system is capable of detecting and correcting functional errors caused by overgeneralization and/or overspecialization of the KB. If the set of test cases is not available, the VVR system can generate synthetic test cases intended to help the user evaluate KBS performance.
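On the verification side, detecting circular rules (the first structural anomaly listed above) reduces to cycle detection in the rule dependency graph. A minimal Python sketch, with a hypothetical rule representation rather than the VVR system's own:

# Each rule maps a set of antecedent atoms to one consequent atom.
rules = [({"a"}, "b"), ({"b"}, "c"), ({"c"}, "a")]    # a -> b -> c -> a

def dependency_graph(rules):
    # Edge p -> q whenever a rule with p among its antecedents concludes q.
    graph = {}
    for body, head in rules:
        for atom in body:
            graph.setdefault(atom, set()).add(head)
    return graph

def find_cycle(graph):
    # Depth-first search keeping the current path; returns one cycle or None.
    def dfs(node, path, visited):
        for succ in graph.get(node, ()):
            if succ in path:                          # back edge: a cycle
                return path[path.index(succ):] + [succ]
            if succ not in visited:
                visited.add(succ)
                cycle = dfs(succ, path + [succ], visited)
                if cycle:
                    return cycle
        return None
    visited = set()
    for node in graph:
        if node not in visited:
            visited.add(node)
            cycle = dfs(node, [node], visited)
            if cycle:
                return cycle
    return None

print(find_cycle(dependency_graph(rules)))            # ['a', 'b', 'c', 'a']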

11.
This paper presents an integration of induction and abduction in INTHELEX, a prototypical incremental learning system. The refinement operators perform theory revision in a search space whose structure is induced by a quasi-ordering, derived from Plotkin's θ-subsumption, compliant with the principle of Object Identity. A reduced complexity of the refinement is obtained, without a major loss in terms of expressiveness. These inductive operators have been proven ideal for this search space. Abduction supports the inductive operators in the completion of the incoming new observations. Experiments have been run on a standard dataset about family trees as well as in the domain of document classification to prove the effectiveness of such a multistrategy incremental learning system with respect to a classical batch algorithm.
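The quasi-ordering in question is standard (background, not a contribution of the paper): a clause C θ-subsumes a clause D, written

    C \preceq_\theta D \iff \exists\, \theta .\; C\theta \subseteq D

where clauses are viewed as sets of literals. The Object Identity variant additionally restricts the substitutions so that, roughly, syntactically distinct terms within a clause must denote distinct objects, which is what keeps the refinement operators cheap without a major loss of expressiveness.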

12.
The main goal of this paper is to illustrate applications of some recent developments in the theory of logic programming to knowledge representation and reasoning in common sense domains. We are especially interested in better understanding the process of development of such representations together with their specifications. We build on the previous work of Gelfond and Przymusinska in which the authors suggest that, at least in some cases, a formal specification of the domain can be obtained from specifications of its parts by applying certain operators on specifications called specification constructors and that a better understanding of these operators can substantially facilitate the programming process by providing the programmer with a useful heuristic guidance. We discuss some of these specification constructors and their realization theorems which allow us to transform specifications built by applying these constructors to declarative logic programs. Proofs of two such theorems, previously announced in a paper by Gelfond and Gabaldon, appear here for the first time. The method of specifying knowledge representation problems via specification constructors and of using these specifications for the development of their logic programming representations is illustrated by the design of a simple, but fairly powerful program representing simple hierarchical domains.

13.
In many applications, especially from the business domain, the requirements specification mainly deals with use cases and class models. Unfortunately, these models are based on different modelling techniques and aim at different levels of abstraction, such that serious consistency and completeness problems are induced. To overcome these deficiencies, we refine activity graphs to meet the needs for a suitable modelling element for use case behaviour. The refinement in particular supports the proper coupling of use cases via activity graphs and the class model. The granularity and semantics of our approach allow for a seamless, traceable transition of use cases to the class model and for the verification of the class model against the use case model. The validation of the use case model and parts of the class model is supported as well. Experience from several applications has shown that the investment in specification, validation and verification not only pays off during system and acceptance testing but also significantly improves the quality of the final product.

14.
Context: In the past decade, the World Wide Web has been subject to rapid changes. Web sites have evolved from static information pages to dynamic and service-oriented applications that are used for a broad range of activities on a daily basis. For this reason, thorough analysis and verification of web applications help assure the deployment of high-quality applications.
Objectives: In this paper, an approach is presented to the formal verification and validation of existing web applications. The approach consists of using execution traces of a web application to automatically generate a communicating-automata model. The obtained model is used to model-check the application against predefined properties, to perform regression testing, and for documentation.
Methods: Traces used in the proposed approach are collected by monitoring a web application while it is explored by a user or a program. An automata-based model is derived from the collected traces by mapping the pages of the application under test into states and the links and forms used to browse the application into transitions between the states. Properties, meanwhile, express correctness and quality requirements on web applications and might concern all states of the model; in many cases, these properties concern only a proper subset of the states, in which case the model is refined to designate the subset of the global states of interest. A related problem of property specification in Linear Temporal Logic (LTL) over only a subset of states of a system is solved by means of specialized operators that facilitate specifying properties over propositional scopes in a concise and intuitive way. Each scope constitutes a subset of states that satisfy a propositional logic formula.
Results: An implementation of the verification approach that uses the model checker Spin is presented, where an integrated toolset is developed and empirical results are shown. Also, Linear Temporal Logic is extended with propositional scopes.
Conclusion: A formal approach is developed to build a finite-automata model tuned to features of web applications that have to be validated, while delegating the task of property verification to an existing model checker. Also, the problem of property specification in LTL over a subset of the states of a given system is addressed, and a generic and practical solution is proposed which does not require any changes in the system model, by defining specialized operators in LTL using scopes.
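The page-to-state mapping of the Methods section can be pictured in a few lines of Python. The trace format used here, a list of (page, action) steps, is an assumption for illustration rather than the paper's actual tool input, and a full model would need nondeterministic transitions:

# Each trace is a list of (page, action) steps recorded while browsing.
# Pages become states; (page, action) pairs become transitions to the
# page observed next. A dict keeps the sketch deterministic.
def build_automaton(traces):
    states, transitions = set(), {}
    for trace in traces:
        for (page, action), (next_page, _) in zip(trace, trace[1:]):
            states.update({page, next_page})
            transitions[(page, action)] = next_page
    return states, transitions

traces = [
    [("login", "submit"), ("home", "search"), ("results", None)],
    [("login", "submit"), ("home", "logout"), ("login", None)],
]
states, transitions = build_automaton(traces)
print(sorted(states))                     # ['home', 'login', 'results']
print(transitions[("home", "logout")])    # 'login'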

15.
16.
Flaws in requirements often have a negative impact on the subsequent development phases. In this paper, we present a novel approach for the formal representation and validation of requirements, which we used in an industrial project. The formalism allows us to represent and reason about object models and their temporal evolution. The key ingredients are class diagrams to represent classes of objects, their relationships and their attributes, fragments of first-order logic to constrain the possible configurations of such objects, and temporal logic operators to deal with the dynamic evolution of the configurations. The approach to formal validation allows to check whether the requirements are consistent, if they are compatible with some scenarios, and if they guarantee some implicit properties. The validation procedure is based on satisfiability checking, which is carried out by means of finite instantiation and model checking techniques.
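As a flavour of the combined formalism (an invented example, not a requirement from the industrial project): for a class Order, the constraint that every shipped order is eventually delivered combines first-order quantification over the objects of the class with temporal operators,

    \forall o \in \mathit{Order} .\; \Box\, (\mathit{shipped}(o) \rightarrow \Diamond\, \mathit{delivered}(o))

where \Box reads "always" and \Diamond reads "eventually" over the temporal evolution of the object configurations.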

17.
Retrieval, validation, and explanation tools are described for cooperative assistance during requirements engineering and are illustrated by a library system case study. Generic models of applications are reused as templates for modeling and critiquing requirements for new applications. The validation tools depend on a matching process which takes facts describing a new application and retrieves the appropriate generic model from the system library. The algorithms of the matcher, which implement a computational theory of analogical structure matching, are described. A theory of domain knowledge is proposed to define the semantics and composition of generic domain models in the context of requirements engineering. A modeling language and a library of models arranged in families of classes are described. The models represent the basic transaction processing or 'use case' for a class of applications. Critical difference rules are given to distinguish between families and hierarchical levels. Related work and future directions of the domain theory are discussed.

18.
Health care is characterized by highly complex processes of patient care that require an unusual amount of communication between different health care professionals of different institutions. Sub-optimal processes can significantly impact the patient's health, increase the consumption of services and resources and in severe cases can lead to the patient's death. For these reasons, requirements engineering for the development of information technology in health care is a complex process as well: without constant and rigorous evaluation, the impact of new systems on the quality of care is unknown and it is possible that badly designed systems significantly harm patients. To overcome these limitations, we present and discuss an approach to requirements engineering that we applied for the development of applications for chemotherapy planning in paediatric oncology. Chemotherapy planning in paediatric oncology is complex and time-consuming and errors must be avoided by all means. In the multi-hospital/multi-trial-centre environment of paediatric oncology, it is especially difficult and time-consuming to analyse requirements. Our approach combines a grounded theory approach with evolutionary prototyping based on the constant development and refinement of a generic domain model, in this case a domain model for chemotherapy planning in paediatric oncology. The prototypes were introduced in medical centres and final results show that the developed generic domain model is adequate.

19.
This paper is concerned with a sufficient condition under which a concept class is learnable in Gold's classical model of identification in the limit from positive data. The standard principle of learning algorithms working under this model is called the MINL strategy, which is to conjecture a hypothesis representing a minimal concept among the ones consistent with the given positive data. The minimality of a concept is defined with respect to the set-inclusion relation – the strategy is semantics-based. On the other hand, refinement operators have been developed in the field of learning logic programs, where a learner constructs logic programs as hypotheses consistent with given logical formulae. Refinement operators have syntax-based definitions – they are defined based on inference rules in first-order logic. This paper investigates the relation between the MINL strategy and refinement operators in inductive inference. We first show that if a hypothesis space admits a refinement operator with certain properties, the concept class will be learnable by an algorithm based on the MINL strategy. We then present an additional condition that ensures the learnability of the class of unbounded finite unions of concepts. Furthermore, we show that under certain assumptions a learning algorithm runs in polynomial time.
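For a finite hypothesis space, the MINL strategy itself is short to state in code. The Python sketch below is illustrative (the representation of concepts as explicit sets is an assumption, not the paper's): it conjectures a concept that covers all positive data and is minimal under set inclusion.

def minl(concept_class, positive_data):
    # Return a concept covering the data, minimal w.r.t. set inclusion.
    data = set(positive_data)
    consistent = [c for c in concept_class if data <= c]
    for c in consistent:
        if not any(d < c for d in consistent):   # no strictly smaller candidate
            return c
    return None

concept_class = [frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({1, 2, 3, 4})]
print(minl(concept_class, [1, 2]))               # frozenset({1, 2})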

20.
In the research of software reuse, feature models have been widely adopted to capture, organize and reuse the requirements of a set of similar applications in a software domain. However, the construction, and especially the refinement, of feature models is a labor-intensive process, and an effective way to aid domain engineers in refining feature models has been lacking. In this paper, we propose a new approach to support interactive refinement of feature models based on the view-updating technique. The basic idea of our approach is to first extract features and relationships of interest from a possibly large and complicated feature model, then organize them into a comprehensible view, and finally refine the feature model through modifications on the view. The main characteristics of this approach are twofold: a set of powerful rules (as the slicing criterion) to slice the feature model into a view automatically, and a novel use of a bidirectional transformation language to make the view updatable. We have successfully developed a tool, and a nontrivial case study shows the feasibility of this approach.
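The slicing step can be pictured as keeping the features of interest, their ancestors, and whatever cross-tree constraints stay inside the view; the bidirectional view-update machinery is the paper's actual contribution and is not reproduced here. A minimal Python sketch over a hypothetical feature-model representation:

# A feature model as child-to-parent links plus cross-tree constraints.
parent = {"Payment": "Shop", "Card": "Payment", "Cash": "Payment",
          "Search": "Shop"}
constraints = [("Card", "Search")]       # e.g. Card requires Search

def slice_model(features_of_interest):
    # View = selected features, their ancestors, and the constraints
    # whose endpoints both survive in the view.
    view = set()
    for f in features_of_interest:
        while f is not None:             # walk up to the root
            view.add(f)
            f = parent.get(f)
    kept = [(a, b) for a, b in constraints if a in view and b in view]
    return view, kept

view, kept = slice_model({"Card"})
print(sorted(view), kept)                # ['Card', 'Payment', 'Shop'] []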

