Similar Literature
Found 20 similar documents.
1.
Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate-logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if, vice versa, there exists another predicate-logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if it holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions, and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.
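Schematically, the definition above has the following shape; the rendering and the symbols (α₁, α₂, A⁺) are ours, chosen for illustration, not the paper's notation.

```latex
% T_1, T_2: specifications over signatures Sigma_1, Sigma_2;
% Mod(T_i): the model class under study (all first-order models or
% all term-generated models).  A^+ is an expansion of A by new functions.
T_1 \sim_{\mathrm{crypt}} T_2 \iff
  \exists \alpha_1 \;\forall A \in \mathrm{Mod}(T_1) \;\exists A^{+}:\;
    A^{+} \models \alpha_1 \;\wedge\;
    A^{+}|_{\Sigma_2} \text{ is homologous to some } B \in \mathrm{Mod}(T_2),
\quad \text{and symmetrically with some } \alpha_2 \text{ for } T_2.
```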

2.
We present a model-theoretic study of correct behavioral subtyping for first-order, deterministic, abstract data types with immutable objects. For such types, we give a new algebraic criterion for proving correct behavioral subtyping that is both necessary and sufficient. This proof technique handles incomplete specifications by allowing proofs of correct behavioral subtyping to be based on comparison with one of several paradigmatic models. It compares a model to a selected paradigm with a generalization of the usual notion of simulation relations. This generalization is necessary for specifications that are not term-generated and that use multiple dispatch. However, we also show that the usual notion of simulation gives a necessary and sufficient proof technique for the special cases of term-generated specifications and specifications that only use single dispatch.
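To see what a simulation relation is in miniature, consider the following toy instance (ours; the paper's generalized notion is needed precisely where this simple one no longer suffices): a relation between the carriers of two algebras that every operation preserves.

```python
# A simulation relation between two single-sorted algebras, in the
# simplest homogeneous form (a sketch; the paper generalizes this
# notion for non-term-generated specifications and multiple dispatch).

def is_simulation(R, ops_a, ops_b):
    """R: set of pairs (a, b); ops_a/ops_b: name -> unary operation."""
    return all((f_a(a), ops_b[name](b)) in R
               for name, f_a in ops_a.items()
               for (a, b) in R)

# The integers mod 4 are simulated by the integers mod 2 via parity.
R = {(a, a % 2) for a in range(4)}
print(is_simulation(R, {"succ": lambda a: (a + 1) % 4},
                       {"succ": lambda b: (b + 1) % 2}))  # True
```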

3.
One of the advantages of temporal-logic model-checking tools is their ability to accompany a negative answer to the correctness query with a counterexample to the satisfaction of the specification in the system. On the other hand, when the answer to the correctness query is positive, most model-checking tools provide no witness for the satisfaction of the specification. In the last few years there has been growing awareness of the importance of suspecting the system or the specification of containing an error even in the case where model checking succeeds. The main justification for such suspicion is possible errors in the modeling of the system or of the specification. Many such errors can be detected by further automatic reasoning about the system and the environment. In particular, Beer et al. described a method for the detection of vacuous satisfaction of temporal logic specifications and the generation of interesting witnesses for the satisfaction of specifications. For example, when verifying a system with respect to the specification ϕ = AG(req → AF grant) (“every request is eventually followed by a grant”), we say that ϕ is satisfied vacuously in systems in which requests are never sent. An interesting witness for the satisfaction of ϕ is then a computation that satisfies ϕ and contains a request. Beer et al. considered only specifications of a limited fragment of ACTL, and with a restricted interpretation of vacuity. In this paper we present a general method for the detection of vacuity and the generation of interesting witnesses for specifications in CTL*. Our definition of vacuity is stronger, in the sense that we check whether all the subformulas of the specification affect its truth value in the system. In addition, we study the advantages and disadvantages of alternative definitions of vacuity, study the problem of generating linear witnesses and counterexamples for branching temporal logic specifications, and analyze the complexity of the problem.
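At the propositional level the stronger definition of vacuity is easy to state and check (a self-contained sketch under our own simplification; the paper works with CTL* formulas interpreted over a system): a subformula fails to affect the specification if replacing it by either truth constant leaves the verdict unchanged.

```python
# Vacuity at the propositional level (illustrative simplification):
# psi does not affect phi if substituting either constant for psi
# leaves phi's truth value unchanged; phi is vacuously satisfied if
# it holds and some proper subformula does not affect it.
from dataclasses import dataclass

@dataclass(frozen=True)
class F:                     # op in {"var", "const", "not", "implies"}
    op: str
    args: tuple = ()
    name: str = ""
    val: bool = False

def ev(f, env):
    if f.op == "var":     return env[f.name]
    if f.op == "const":   return f.val
    if f.op == "not":     return not ev(f.args[0], env)
    if f.op == "implies": return (not ev(f.args[0], env)) or ev(f.args[1], env)
    raise ValueError(f.op)

def replace(f, target, by):
    if f is target:
        return by
    return F(f.op, tuple(replace(a, target, by) for a in f.args), f.name, f.val)

def subformulas(f):
    yield f
    for a in f.args:
        yield from subformulas(a)

def affects(phi, psi, env):
    verdict = ev(phi, env)
    return any(ev(replace(phi, psi, F("const", val=v)), env) != verdict
               for v in (True, False))

def vacuously_satisfied(phi, env):
    return ev(phi, env) and any(not affects(phi, psi, env)
                                for psi in subformulas(phi) if psi is not phi)

# "req -> grant" in a state where requests never occur: 'grant' has no
# effect on the verdict, so the satisfaction is vacuous.
req, grant = F("var", name="req"), F("var", name="grant")
print(vacuously_satisfied(F("implies", (req, grant)),
                          {"req": False, "grant": False}))  # True
```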

4.
The literature on definitions of security based on causality-like notions such as noninterference has used several distinct semantic models for systems. Early work was based on state-machine and trace-set definitions; more recent work has dealt with definitions of security in two distinct process-algebraic settings. Comparisons between the definitions have been carried out mainly within individual semantic frameworks. This paper studies the relationship between the semantic frameworks themselves, by defining mappings between a number of semantic models and studying the relationship between notions of noninterference under these mappings.
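As a reference point, the oldest of these trace-set definitions can be written down in a few lines (a sketch in the spirit of Goguen and Meseguer's noninterference, with names of our own choosing, not the paper's formalization):

```python
# Trace-set noninterference, schematically: purging the high-level
# actions from any trace must not change the low-level observation.

def purge(trace, high):
    """Remove actions of high-level agents from a trace."""
    return tuple(a for a in trace if a not in high)

def noninterfering(traces, high, low_view):
    """low_view maps a trace to the observation it yields for low."""
    return all(low_view(t) == low_view(purge(t, high)) for t in traces)

# Toy system: low observes only the count of 'l' actions, so the high
# action 'h' is invisible -- the system is noninterfering.
traces = {(), ("l",), ("h", "l"), ("l", "h", "l")}
print(noninterfering(traces, high={"h"},
                     low_view=lambda t: sum(1 for a in t if a == "l")))  # True
```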

5.
The paper is devoted to the ioco relation, which determines conformance of an implementation to its specification. Problems related to nonreflexivity of the ioco relation, the presence in the specification of nonconformal traces (traces that occur in no conformal implementation), the failure of ioco to be preserved under composition (a composition of implementations conformal to their specifications may fail to be conformal to the composition of those specifications), and “false” errors when testing in a context are considered. To solve these problems, an algorithm for completing a specification while preserving ioco (and hence the class of conformal implementations) is proposed. The problems listed above do not arise in the class of completed specifications.
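The intuition behind ioco can be sketched in a few lines (our simplification: finite trace/output tables, no quiescence or suspension traces): after every specified trace, the implementation may produce only outputs the specification allows.

```python
# A finite sketch of the ioco check: i ioco s iff, for every trace of
# the specification, the implementation's possible outputs after that
# trace are among those the specification allows.

def ioco(impl_out, spec_out, spec_traces):
    """impl_out/spec_out: trace -> set of possible output actions."""
    return all(impl_out(t) <= spec_out(t) for t in spec_traces)

# Coffee machine: after "coin","button" the spec allows coffee or tea.
spec = {(): set(), ("coin",): set(), ("coin", "button"): {"coffee", "tea"}}
good = {(): set(), ("coin",): set(), ("coin", "button"): {"coffee"}}
bad  = {(): set(), ("coin",): set(), ("coin", "button"): {"beer"}}

for impl in (good, bad):
    print(ioco(lambda t, m=impl: m.get(t, set()),
               lambda t: spec[t], spec.keys()))
# True, then False: 'beer' is not an output the specification allows.
```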

6.
Erik Hollnagel’s body of work in the past three decades has molded much of the current research approach to system safety, particularly notions of “error”. Hollnagel regards “error” as a dead-end and avoids using the term. This position is consistent with Rasmussen’s claim that there is no scientifically stable category of human performance that can be described as “error”. While this systems view is undoubtedly correct, “error” persists. Organizations, especially formal business, political, and regulatory structures, use “error” as if it were a stable category of human performance. They apply the term to performances associated with undesired outcomes, tabulate occurrences of “error”, and justify control and sanctions through “error”. Although a compelling argument can be made for Hollnagel’s view, it is clear that notions of “error” are socially and organizationally productive. The persistence of “error” in management and regulatory circles reflects its value as a means for social control.

7.
With the recent trend towards model-driven engineering, a common understanding of basic notions such as “model” and “metamodel” becomes a pivotal issue. Even though these notions have been in widespread use for quite a while, there is still little consensus about when exactly it is appropriate to use them. The aim of this article is to start establishing a consensus about generally acceptable terminology. Its main contributions are the distinction between two fundamentally different kinds of model roles, i.e. “token model” versus “type model” (the terms “type” and “token” were introduced by C.S. Peirce, 1839–1914), a formal notion of “metaness”, and the consideration of “generalization” as yet another basic relationship between models. In particular, the recognition of the fundamental difference between the two kinds of model roles mentioned above is crucial in order to enable communication among the model-driven engineering community that is free of both unnoticed misunderstandings and unnecessary disagreement.

8.
This paper deals with test case selection from axiomatic specifications whose axioms are quantifier-free first-order formulas with equality. We first prove the existence of an ideal exhaustive test set from which to start the selection. We then propose an extension of the test selection method called axiom unfolding, originally defined for algebraic specifications, to quantifier-free first-order specifications with equality. This method basically consists of a case analysis of the property under test (the test purpose) according to the specification axioms, and is based on a proof search over the different instances of the test purpose. Since the calculus is sound and complete, this allows us to provide full coverage of the property. The generalisation we propose makes it possible to deal with any kind of predicate (not only equality) and with any form of axiom and test purpose (not only equations or Horn clauses). Moreover, it improves on our previous work by dealing efficiently with the equality predicate, thanks to the paramodulation rule.
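As a textbook-style illustration of the case analysis (our example, not one from the paper): with the usual axioms for addition over the naturals, unfolding a test purpose splits it along the constructor cases the axioms distinguish.

```latex
% Axioms:  x + 0 = x,   x + s(y) = s(x + y).
% Unfolding the test purpose  x + y = y + x  on y yields one subcase
% per axiom, each a more constrained instance of the purpose:
x + y = y + x \;\leadsto\;
  \{\; x + 0 = 0 + x\,,\;\; x + s(y') = s(y') + x \;\}
```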

9.
A theory of fairness which supports the specification and development of a wide variety of “fair” systems is developed. The definition of fairness presented is much more general than the standard forms of weak and strong fairness, allowing the uniform treatment of many different kinds of fairness within the same formalism, such as probabilistic behaviour. The semantic definition of fairness consists of a safety condition on finite sequences of actions and a liveness or termination condition on the fair infinite sequences of actions. The definition of the predicate transformer of a fair action system permits the use of the existing framework for program development, including the existing definitions of refinement and data refinement, thus avoiding an ad hoc treatment of fairness. The theory includes results that support the modular development of fair action systems, such as monotonicity, adding skips, and data refinement. The weakest precondition and the weakest error-free precondition are unified, so that in particular a standard action system is a special case of a fair action system. The results are illustrated with the development, from its specification, of an unreliable buffer.

10.
Handling message semantics with Generic Broadcast protocols
Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,” that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic Broadcast. The paper also presents two algorithms that solve Generic Broadcast.
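The conflict relation is the entire interface to message semantics, which makes the two special instances easy to state (a sketch of the relation only; the paper's contribution is algorithms that deliver messages consistently with it):

```python
# Generic Broadcast orders two messages only if they conflict.
# Reliable Broadcast and Atomic Broadcast are the two extreme
# conflict relations (illustrative sketch, our encoding).

def must_order(m1, m2, conflict):
    return conflict(m1, m2)

# Atomic Broadcast: every pair conflicts, so all messages are ordered.
atomic = lambda m1, m2: True
# Reliable Broadcast: the empty conflict relation, no ordering at all.
reliable = lambda m1, m2: False
# Semantics-aware instance: writes to a register conflict, reads commute.
generic = lambda m1, m2: m1[0] == "write" or m2[0] == "write"

r, w = ("read", "x"), ("write", "x", 1)
print(must_order(r, r, generic), must_order(r, w, generic))  # False True
```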

11.
FGSPEC is a wide-spectrum specification language intended to facilitate software specification and the expression of the transformation process from a functional specification, which describes “what to do”, to the corresponding design (operational) specification, which describes “how to do it”. The design emphasizes the coherence of the multi-level specification mechanisms, and a tree-structured model is provided which unifies the wide-spectrum specification styles from “what” to “how”.

12.
This paper presents a transformational approach to the derivation of implementations from model-oriented specifications of abstract data types. The purpose of this research is to reduce the number of formal proofs required in model refinement, which hinder software development. It is shown to be applicable to the transformation of models written in META-IV (the specification language of VDM) towards their refinement into, for example, Pascal or relational DBMSs. The approach includes the automatic synthesis of retrieve functions between models, and of data-type invariants. The underlying algebraic semantics is the so-called final semantics à la Wand: a specification is a model (a heterogeneous algebra) which is the final object (up to isomorphism) in the category of all its implementations. The transformational calculus approached in this paper follows from exploring the properties of finite, recursively defined sets. This work extends the well-known strategy of program transformation to model transformation, adding to previous work on a transformational style for operation decomposition in META-IV. The model calculus is also useful for improving model-oriented specifications.
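On the smallest standard example (ours, purely for illustration), the two synthesized artefacts, a retrieve function and a data-type invariant, look as follows for a finite set refined by a duplicate-free list:

```python
# Abstract model: finite set.  Concrete model: duplicate-free list.
# The retrieve function maps concrete values back to abstract ones;
# the invariant delimits the legal concrete values (sketch, ours).

def retrieve(concrete: list) -> frozenset:
    """Retrieve function: concrete list -> abstract set."""
    return frozenset(concrete)

def invariant(concrete: list) -> bool:
    """Data-type invariant on the concrete model: no duplicates."""
    return len(concrete) == len(set(concrete))

def c_insert(xs, x):
    """Concrete insertion, maintaining the invariant."""
    return xs if x in xs else xs + [x]

# The refinement square commutes on this case: inserting concretely
# and then retrieving equals retrieving and then inserting abstractly.
xs = ["a", "b"]
assert invariant(xs)
assert retrieve(c_insert(xs, "c")) == retrieve(xs) | {"c"}
print("refinement square commutes on this case")
```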

13.
This paper proposes a framework for detecting global state predicates in systems of processes with approximately-synchronized real-time clocks. Timestamps from these clocks are used to define two orderings on events: “definitely occurred before” and “possibly occurred before”. These orderings lead naturally to definitions of three distinct detection modalities, i.e., three meanings of “predicate held during a computation”: “possibly held”, “definitely held”, and “definitely held in a specific global state”. This paper defines these modalities and gives efficient algorithms for detecting them. The algorithms are based on algorithms of Garg and Waldecker, Alagar and Venkatesan, Cooper and Marzullo, and Fromentin and Raynal. Complexity analysis shows that under reasonable assumptions, these real-time-clock-based detection algorithms are less expensive than detection algorithms based on Lamport's happened-before ordering. Sample applications are given to illustrate the benefits of this approach.
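Under the assumption that every timestamp is accurate to within a known bound ε, so an event stamped t really occurred somewhere in [t − ε, t + ε], the two orderings reduce to interval comparisons (our rendering; the constant is illustrative):

```python
# "Definitely/possibly occurred before" from approximately-synchronized
# clocks: an event stamped t occurred somewhere in [t - eps, t + eps].

EPS = 0.005  # clock synchronization bound, in seconds (illustrative)

def definitely_before(t1, t2, eps=EPS):
    """e1's latest possible real time precedes e2's earliest one."""
    return t1 + eps < t2 - eps

def possibly_before(t1, t2, eps=EPS):
    """The uncertainty intervals allow e1 to precede e2."""
    return t1 - eps < t2 + eps

print(definitely_before(1.000, 1.020))  # True:  intervals are disjoint
print(definitely_before(1.000, 1.008))  # False: intervals overlap...
print(possibly_before(1.000, 1.008))    # ...but this order is possible
```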

14.
A new approach to the formalization and implementation of inductive programming is described. This approach makes it possible, for the first time, to describe a wide range of problems within a single formalization, with examples ranging from the implementation of problem-oriented languages to the development of applied systems with the help of these languages. Methods for passing on “procedural knowledge” that are accepted in information technologies and in human communication are discussed, and the concept of an “anthropomorphic information technology” is formulated. The general scheme for constructing a system based on this technology is described. The fundamental role played by the mechanism of partial evaluation in ensuring the efficiency of implementation and maintenance of the extension mode for inductively specified languages is stressed. An example of the inductive specification of a simple programming language is presented, and the prospects of the proposed concept are discussed.

15.
A Horn definition is a set of Horn clauses with the same predicate in all head literals. In this paper, we consider learning non-recursive, first-order Horn definitions from entailment. We show that this class is exactly learnable from equivalence and membership queries. It follows that this class is PAC-learnable using examples and membership queries. Finally, we apply our results to learning control knowledge for efficient planning in the form of goal-decomposition rules. Chandra Reddy, Ph.D.: He is currently a doctoral student in the Department of Computer Science at Oregon State University, completing his Ph.D. on June 30, 1998. His dissertation is entitled “Learning Hierarchical Decomposition Rules for Planning: An Inductive Logic Programming Approach.” Earlier, he earned an M.Tech in Artificial Intelligence and Robotics from the University of Hyderabad, India, and an M.Sc. (Tech) in Computer Science from the Birla Institute of Technology and Science, India. His current research interests broadly fall under machine learning and planning/scheduling; more specifically, inductive logic programming, speedup learning, data mining, and hierarchical planning and optimization. Prasad Tadepalli, Ph.D.: He has an M.Tech in Computer Science from the Indian Institute of Technology, Madras, India, and a Ph.D. from Rutgers University, New Brunswick, USA. He joined Oregon State University, Corvallis, as an assistant professor in 1989 and is now an associate professor in the Department of Computer Science. His main area of research is machine learning, including reinforcement learning, inductive logic programming, and computational learning theory, with applications to classification, planning, scheduling, manufacturing, and information retrieval.
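To make “exactly learnable from equivalence and membership queries” concrete, here is the query protocol at work on the simplest possible class, monotone conjunctions, rather than on Horn definitions (our illustration; the paper's algorithm is considerably more involved):

```python
# Exact learning with two oracles, shown on monotone conjunctions over
# 4 boolean variables.  TARGET is hidden behind the oracles.

TARGET = {0, 2}   # hidden concept: x0 AND x2
N = 4

def member(example):          # membership query: is this example positive?
    return TARGET <= {i for i in range(N) if example[i]}

def equivalent(h):            # equivalence query: counterexample or None
    for bits in range(2 ** N):
        ex = tuple(bool(bits >> i & 1) for i in range(N))
        truth = TARGET <= {i for i, b in enumerate(ex) if b}
        guess = h <= {i for i, b in enumerate(ex) if b}
        if truth != guess:
            return ex
    return None

# Membership queries pin down each variable: turning only x_i off
# falsifies the concept exactly when x_i occurs in it.
h = {i for i in range(N) if not member(tuple(j != i for j in range(N)))}
assert equivalent(h) is None   # one equivalence query certifies exactness
print(h)                       # {0, 2}
```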

16.
17.
This paper deals with learning first-order logic rules from data lacking an explicit classification predicate. Consequently, the learned rules are not restricted to predicate definitions as in supervised inductive logic programming. First-order logic offers the ability to deal with structured, multi-relational knowledge. Possible applications include first-order knowledge discovery, induction of integrity constraints in databases, multiple predicate learning, and learning mixed theories of predicate definitions and integrity constraints. One of the contributions of our work is a heuristic measure of confirmation, trading off novelty and satisfaction of the rule. The approach has been implemented in the Tertius system. The system performs an optimal best-first search, finding the k most confirmed hypotheses, and includes a non-redundant refinement operator to avoid duplicates in the search. Tertius can be adapted to many different domains by tuning its parameters, and it can deal either with individual-based representations by upgrading propositional representations to first-order, or with general logical rules. We describe a number of experiments demonstrating the feasibility and flexibility of our approach.
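One simple way to trade novelty off against satisfaction is to compare the counterinstance rate a rule would have if its body and head were statistically independent with the rate actually observed (a sketch in the spirit of such a measure; the system's actual measure is defined in the paper and may be normalized differently):

```python
# Confirmation sketch for a rule body => head over tabular data:
# high when the rule is novel (deviates from independence) AND is
# rarely violated (few counterinstances).  Illustrative only.

def confirmation(data, body, head):
    n = len(data)
    p_body     = sum(body(x) for x in data) / n
    p_not_head = sum(not head(x) for x in data) / n
    expected = p_body * p_not_head                 # independence baseline
    observed = sum(body(x) and not head(x) for x in data) / n  # violations
    return expected - observed

# Toy data: tuples (is_bird, flies); the rule is bird => flies.
data = [(True, True)] * 8 + [(True, False)] * 1 + [(False, False)] * 11
print(round(confirmation(data, lambda x: x[0], lambda x: x[1]), 3))  # 0.22
```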

18.
Processes for using environments which store reusable software components can be classified into “registration” (representation) and “retrieval” (remembering) processes. A conceptual space called the “reuse space” is introduced, which consists of representations of software entities and of predicates defining the properties which a target entity should satisfy. The predicate parts are implemented by property definitions for entities, described with a language called HSML, together with associative networks. The associative network is structured using a psychological principle called category-based induction. In the registration processes, nodes and links, which represent the new entity and its relationships with existing nodes, are added to the associative networks. In the retrieval processes, the target entity can be remembered by searching for the highest-rating cluster in the associative networks with the aid of an inference engine. Clustering is performed using the coverages and proximities attached to the links in the network. The environment, called MANDALA, consists of user interfaces for displaying the reuse space on client stations, a central web server, and many distributed local servers which mount the contents of reusable components.

19.
An observational approach to the construction of implementations of algebraic specifications is presented. Based on the theory of observational specifications, an implementation relation is defined which formalizes the intuitive idea that an implementation is correct if it produces correct observable output. To be useful in practice, proof-theoretic criteria for observational implementations are provided, and a proof technique (called context induction) for the verification of implementation relations is presented. As an example, an abstract specification of (the algebraic semantics of) a small imperative programming language is implemented by a state-oriented specification of the language. In order to support the modular construction of implementations, the approach is extended to parameterized observational specifications. Based on the notion of an observable parameter context, a proof-theoretic criterion for parameterized observational implementations is presented, and it is shown that under appropriate conditions observational implementations compose horizontally. The given implementation criteria are applied to examples.
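Schematically, context induction proves an observable property P of all observable contexts by structural induction on the context (our rendering of the rule's shape, not the paper's formulation):

```latex
% c ranges over observable contexts; z is the trivial context (a hole
% of observable sort); f ranges over the operations of the signature.
\frac{P(z) \qquad
      \forall c.\;\big( P(c) \;\Rightarrow\;
        \forall f.\; P(f(\ldots, c, \ldots)) \big)}
     {\forall c.\; P(c)}
```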

20.