Similar Documents
Found 20 similar documents (search time: 16 ms)
1.
The Unifying Theories of Programming (UTP) of Hoare and He is a general framework in which the semantics of a variety of specification and programming languages can be uniformly defined. In this paper we present a semantic embedding of the UTP into the ProofPower-Z theorem prover; it concisely captures the notion of UTP theory, theory instantiation, and, additionally, type restrictions on the alphabet of UTP predicates. We show how the encoding can be used to reason about UTP theories and their predicates, including models of particular specifications and programs. We support encoding and reasoning about combinations of predicates of various theory instantiations, as typically found in UTP models. Our results go beyond what has already been discussed in the literature in that we support encoding of both theories and programs (or their specifications), and high-level proof tactics. We also create structuring mechanisms that support the incremental construction and reuse of encoded theories, associated laws and proof tactics.

2.
3.
We describe the theory of refinements of specifications based on localizations of categories. The approach allows us to enlarge the family of refinements (i.e. specification morphisms) of the category Spec – the category of first order theories (specifications) of multi-sorted algebras. We prove that the class of specification morphisms in the category Spec can be enriched by the class of all interpretations of theories from Spec in all definitional extensions of theories of multi-sorted algebras. This provides a guide for finding a path leading from a given specification to one that is provably correct code in a programming language (such as C++, Lisp, or Java).

4.
E. Boje 《Automatica》2002,38(1):131-138
The use of tracking error specifications in quantitative feedback theory (QFT) design is discussed for multi-input, multi-output (MIMO) systems. These specifications bound the closed loop transfer function within a disk around some nominal (model) performance while preserving the QFT approach that allows treatment of highly structured (and unstructured) uncertainty. Because the specifications capture phase information, the level of over-design in certain MIMO QFT designs is reduced. The method presented allows independent, two-degree-of-freedom design.
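As an illustration (not taken from the paper), a disk-type tracking error specification can be checked pointwise in frequency: a perturbed closed loop T must satisfy |T(jω) − T₀(jω)| ≤ r(ω) around the nominal model T₀. A minimal sketch with hypothetical first-order closed loops:

```python
def first_order(k, tau):
    """Hypothetical closed-loop model T(s) = k / (tau*s + 1)."""
    return lambda s: k / (tau * s + 1)

def meets_disk_spec(T, T_nom, radius, freqs):
    """Disk-type tracking spec: |T(jw) - T_nom(jw)| <= radius(w) at each w."""
    return all(abs(T(1j * w) - T_nom(1j * w)) <= radius(w) for w in freqs)

T_nom = first_order(1.0, 1.0)                 # nominal model
T_pert = first_order(1.05, 1.1)               # one perturbed plant case
freqs = [0.01 * 1.3 ** n for n in range(40)]  # log-spaced frequency grid
ok = meets_disk_spec(T_pert, T_nom, lambda w: 0.2, freqs)
```

In a real QFT design this test would be repeated over the whole plant template at each design frequency; the complex-valued difference is what lets the disk bound capture phase information.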

5.
We develop a quantifier-free logic for deriving consequences of multialgebraic theories. Multialgebras are used as models for nondeterminism in the context of algebraic specifications. They are many sorted algebras with set-valued operations. Formulae are sequents over atoms allowing one to state set-inclusion or identity of 1-element sets (determinacy). We introduce a sound and weakly complete Rasiowa–Sikorski (R–S) logic for proving multialgebraic tautologies. We then extend this system for proving consequences of specifications based on translation of finite theories into logical formulae. Finally, we show how such a translation may be avoided—introduction of the specific cut rules leads to a sound and strongly complete Gentzen system for proving directly consequences of specifications. Besides giving examples of the general techniques of R–S and the specific cut rules, we improve the earlier logics for multialgebras by providing means to handle empty carriers (as well as empty result-sets) without the use of quantifiers, and to derive consequences of theories without translation into another format and without using general cut.
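A minimal sketch (not the paper's formalism) of what a multialgebra looks like: a set-valued operation, its pointwise extension to sets of arguments, and the two kinds of atoms the logic works with, set inclusion and determinacy of 1-element result sets:

```python
# Toy carrier {0, 1, 2}; 'choice' is a set-valued binary operation modeling
# nondeterministic choice between its arguments.
def choice(x, y):
    return {x, y}

def ext(op, xs, ys):
    """Pointwise extension of a set-valued operation to sets of arguments."""
    return set().union(*(op(x, y) for x in xs for y in ys))

# The logic's atoms: set inclusion, and determinacy (a 1-element result set).
incl = ext(choice, {0}, {1}) <= {0, 1, 2}   # {0, 1} is included in the carrier
det = len(ext(choice, {1}, {1})) == 1       # choice(1, 1) = {1} is determined
```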

6.
With the advent of intelligent computer aided design systems, companies such as Boeing are embarking on an era in which core competitive engineering knowledge and design rationale are being encoded in software systems. The promise of this technology is that this knowledge can be leveraged across many different designs, product families, and even different uses (e.g., generative process planning for manufacturing). The current state of the practice attempts to achieve this goal through the reuse of software components. A fundamental problem with this approach to knowledge sharing and reuse is that what we are trying to reuse is software—the end artifact in a long and complicated process that goes from requirement specifications, through a process of design, to implementations. Knowledge sharing and reuse cannot easily and uniformly occur at the software level. So what can be done as an alternative? This paper describes a theory, methodology, language, and tool for the semi-automatic development and maintenance of engineering software from requirement specifications. In essence, this paradigm for software development and maintenance is one that explicitly captures requirement specifications, designs, implementations, and the refinement processes that lead from requirements all the way down to software. By recording this entire refinement history, we stand a better chance of leveraging knowledge for different uses.

7.
The National Knowledge Infrastructure (NKI) is a multi-domain knowledge base. Classical type theory is no longer adequate to describe every kind of object across multiple domains, such as artifacts, natural objects, or micro objects. Three different kinds of type theories are defined: the classical, atomic, and pseudo type theories. In the classical type theory, two new type constructors, setm and ∨, are defined to describe the types of sets of all elements of a given type and of unions of two sets of different types, respectively. The structures and categories in the type theory are defined, and sub-structures and homomorphic structures are used to describe the part-of relations that give algebraic specifications for natural objects, as well as the part-of relations between natural objects, micro objects, and artifacts.

8.
The increasing complexity of digital systems is pushing designers toward abstract system-level modeling (SLM). However, SLM brings new challenges for verification engineers, who must guarantee functional equivalence between SLM specifications and lower-level implementations such as those of transaction-level modeling (TLM). This paper proposes a novel method for equivalence checking between SLM and TLM based on coverage-directed simulation. Our method randomly simulates an SLM model and uses a satisfiability modulo theories (SMT) solver to generate stimuli for the uncovered area, directed by a composite coverage metric (code coverage and functional coverage). We then run all the generated stimuli (random and directed) on both the SLM and TLM designs, while comparing selected observation variables to evaluate equivalence between SLM and TLM. Promising experimental results show that our equivalence checking method is more efficient, with lower simulation cost.
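The flow can be sketched as follows, with a brute-force witness search standing in for the SMT solver and two hypothetical toy models in place of real SLM/TLM designs:

```python
import random

def slm(x):            # abstract "specification" model (hypothetical)
    return 0 if x < 0 else (x * 2 if x < 100 else 200)

def tlm(x):            # lower-level "implementation" model (hypothetical)
    out = 0
    if x >= 0:
        out = min(x * 2, 200)
    return out

# A crude functional-coverage model: one coverage bin per input region.
branches = {"neg": lambda x: x < 0,
            "mid": lambda x: 0 <= x < 100,
            "sat": lambda x: x >= 100}

random.seed(0)
stimuli = [random.randint(-50, 50) for _ in range(20)]   # random phase
covered = {b for b, cond in branches.items() for x in stimuli if cond(x)}

# Directed phase: for each uncovered bin, search for a witness input
# (a stand-in for an SMT query on the bin's condition).
for b in branches:
    if b not in covered:
        stimuli.append(next(x for x in range(-1000, 1000) if branches[b](x)))

# Run every stimulus on both models and compare the observed outputs.
equivalent = all(slm(x) == tlm(x) for x in stimuli)
```

The random phase alone misses the saturation bin here; the directed phase closes the coverage hole before the equivalence comparison.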

9.
Decision-making theories aiming at solving decision problems that involve multiple criteria have often been incorporated in knowledge-based systems for the improvement of these systems' reasoning process. However, multicriteria analysis has not been used adequately in intelligent user interfaces, even though user-computer interaction is, by nature, multicriteria-based. The actual process of incorporating multicriteria analysis into an intelligent user interface is neither clearly defined nor adequately described in the literature. It involves many experimental studies throughout the software life-cycle. Moreover, each multicriteria decision-making theory requires different kinds of experiments for the criteria to be determined and then for the proper respective weight of each criterion to be specified. In our research, we address the complex issue of developing intelligent user interfaces that are based on multicriteria decision-making theories. In particular, we present and discuss a software life-cycle framework that is appropriate for the development of such user interfaces. The life-cycle framework is called MBIUI. Given that very little has been reported in the literature about the required experimental studies, their participants, and the appropriate life-cycle phase during which the experimental studies should take place, MBIUI provides useful insight for future developments of intelligent user interfaces that incorporate multicriteria theories. One significant advantage of MBIUI is that it provides a unifying life-cycle framework that may be used for the application of many different multicriteria decision-making theories. In the paper, we discuss the incorporation features of four distinct multicriteria theories: TOPSIS, SAW, MAUT, and DEA. Furthermore, we give detailed specifications of the experiments that should take place and reveal their similarities and differences with respect to the theories.
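Of the four theories mentioned, SAW (Simple Additive Weighting) is the simplest to sketch: normalize each criterion, invert cost criteria, and rank alternatives by their weighted sums. The alternatives, criteria, and weights below are hypothetical:

```python
def saw(alternatives, weights, benefit):
    """Simple Additive Weighting: score = sum_j w_j * normalized value.
    alternatives: dict name -> list of criterion values
    weights: criterion weights summing to 1
    benefit: True if higher is better for that criterion, False for costs."""
    cols = list(zip(*alternatives.values()))   # values per criterion
    scores = {}
    for name, vals in alternatives.items():
        s = 0.0
        for j, v in enumerate(vals):
            lo, hi = min(cols[j]), max(cols[j])
            norm = (v - lo) / (hi - lo) if hi > lo else 1.0
            if not benefit[j]:
                norm = 1.0 - norm              # cost criterion: lower is better
            s += weights[j] * norm
        scores[name] = s
    return scores

# Hypothetical UI-adaptation decision: relevance (benefit) vs. response time (cost).
scores = saw({"A": [0.9, 120], "B": [0.6, 40]}, [0.7, 0.3], [True, False])
best = max(scores, key=scores.get)
```

TOPSIS, MAUT, and DEA differ mainly in how they aggregate; the experiments the paper specifies are about eliciting the criteria and weights fed into such a function.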

10.
喻钢  徐中伟 《计算机应用》2008,28(11):2929-2932
Component theory is playing an increasingly important role in software engineering, yet software development techniques based on traditional component models cannot meet the development needs of safety-critical software. To describe and design, in a standardized way, components suitable for safety-critical software development, a formal, safety-requirement-oriented safety component (SC) model framework is proposed. Failure modes and redundancy comparison are used to ensure the safety of the component model, and the model is applied to a simulation system for a CTCS-2 level train control center.

11.
陈鑫 《软件学报》2008,19(5):1134-1148
Modern component systems usually contain multiple concurrently executing active components, which makes verifying the correctness of a component system very difficult. By extending a component calculus, this paper proposes a refinement approach for active components. Contracts are introduced at the component interface layer; a contract uses guarded designs to describe the functional specifications of public methods and active activities. The dynamic behavior of a contract is defined by a pair of divergence and failure sets, and the refinement relation between contracts is defined by the inclusion relations between those sets. A theorem is proved that allows simulation techniques to be used to establish contract refinement. The semantics of a component is defined as a function from the contracts of its required interfaces to the contracts of its provided interfaces; on this basis, component refinement can be proved via contract refinement. Composition rules for components are also given. During bottom-up construction of a component system, applying the refinement approach and the composition rules guarantees the correctness of the final system.
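The refinement relation described above (inclusion of divergence and failure sets) can be sketched as follows; the contracts are hypothetical toy data, with a failure represented as a (trace, refusal-set) pair:

```python
def refines(spec, impl):
    """impl refines spec iff impl's divergences and failures are contained
    in spec's (the failures/divergences style of refinement used for contracts)."""
    return impl["div"] <= spec["div"] and impl["fail"] <= spec["fail"]

# The spec may nondeterministically refuse event 'b' after trace <a>;
# the implementation never refuses it.
spec = {"div": set(),
        "fail": {(("a",), frozenset()), (("a",), frozenset({"b"}))}}
impl = {"div": set(),
        "fail": {(("a",), frozenset())}}

impl_refines_spec = refines(spec, impl)   # reducing nondeterminism is allowed
spec_refines_impl = refines(impl, spec)   # the converse fails
```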

12.
13.
Automata theory provides two ways of defining an automaton: either by its transition system, defining its states and events, or by its language, the set of sequences (traces) of events in which it can engage. For many classes of automaton, these forms of definition have been proved equivalent. For example, there is a well-known isomorphism between regular languages and finite deterministic automata. This paper suggests that for (demonically) nondeterministic automata, as treated in process algebra, the appropriate link between transition systems and languages may be a retraction rather than an isomorphism.

A pair of automata, defined in the tradition of CCS by their transition systems, may be compared by a pre-ordering based on some kind of simulation or bisimulation, for example weak, strong, or barbed. Automata defined in the tradition of CSP are naturally ordered by set inclusion of their languages (often called refinement); variations in ordering arise from different choices of basic event, including, for example, refusals and divergences. In both cases, we characterise a theory by its underlying transition system and its choice of ordering. Our treatment is therefore wholly semantic, independent of the syntax and definition of operators of the calculus.

We put forward a series of retractions relating the above-mentioned versions of CSP to their corresponding CCS transition models. A retraction is an injection that is (with respect to the chosen ordering) monotonic, increasing and idempotent (up to equivalence). It maps the nodes of a transition system of its source theory to those of a system that has been saturated by additional transitions. Each retraction is defined by a transition rule, in the style of operational semantics; the proofs use the familiar technique of co-induction, often abbreviated by encoding in the relational calculus.

The aim of this paper is to contribute to the unification of theories of reactive system programming. More practical benefits may follow. For example, we justify a method to improve the efficiency of model checking based on simulation. Furthermore, we show how model checking of a transition network fits consistently with theorem-proving tools, which reason directly about specifications and designs expressed in terms of sets of sequences of observable events.
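The classic pair a.(b + c) versus a.b + a.c illustrates why the language map cannot be an isomorphism: the two transition systems are distinguished by bisimulation but are mapped to the same trace set. A minimal sketch (finite traces only; no refusals or divergences):

```python
def traces(ts, state, depth):
    """All event sequences of length <= depth from `state` in transition
    system `ts` (a dict: state -> list of (event, next_state) pairs)."""
    if depth == 0:
        return {()}
    result = {()}
    for event, nxt in ts.get(state, []):
        result |= {(event,) + t for t in traces(ts, nxt, depth - 1)}
    return result

# a.(b + c): one a-transition to a state offering both b and c
P = {"p0": [("a", "p1")], "p1": [("b", "p2"), ("c", "p3")]}
# a.b + a.c: two a-transitions, each committing to one branch
Q = {"q0": [("a", "q1"), ("a", "q2")], "q1": [("b", "q3")], "q2": [("c", "q4")]}

same_language = traces(P, "p0", 3) == traces(Q, "q0", 3)   # trace-equivalent
```

P and Q have equal trace sets yet are not bisimilar (after `a`, Q may have committed away from `b`), which is exactly the gap that refusal information, and the saturation performed by a retraction, is designed to track.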

14.
Techniques and tools for formally verifying compliance with industry standards are important, especially in System-on-Chip (SoC) designs: a failure to integrate externally developed intellectual property (IP) cores is prohibitively costly. There are three essential components in the practical verification of compliance with a standard. First, an easy-to-read and yet formal specification of the standard is needed; we propose Live Sequence Charts (LSCs) as a high-level visual notation for writing specifications. Second, assertions should be generated directly from the specification; an implementation will be scrutinized, usually by model checking, to check that it satisfies each assertion. Third, a formal link must be made between proofs of assertions and compliance with the original specification. As an example, we take the Virtual Component Interface (VCI) Standard. We compare three efforts in verifying that the same register transfer level code is VCI-compliant. The first two efforts were manual, while the third used a tool, lscAssert, to automatically generate assertions in LTL. We discuss the details of the assertion generation algorithm.

15.
How to index or retrieve multimedia objects is by no means obvious, because the computer can retrieve the right multimedia material only if it reasons about its contents. We show that it is possible to write formal specifications of this reasoning process using set theory and mereology. We discuss the theoretical consequences of trying to use mereology and set theory for multimedia indexing and retrieval, and re-examine the roles of mereology and set theory in knowledge representation. We conclude that both commonsense set theories and mereologies should play the role of constraining databases of arbitrary multimedia objects, e.g. video clips. But although both should be viewed as database constraints, we argue that part-of hierarchies should be used to encode relatively permanent background knowledge, elsewhere named the referential level, while member-of hierarchies should describe arbitrary multimedia records. We also propose a language and a set of axioms, SetNM, for natural mereologies with sets. A multimedia indexing system can then be viewed as a particular SetNM theory.

16.
Coordinate metrology aims to answer two questions: whether a manufactured part meets design tolerance specifications, and how well it meets them. Existing methods for analyzing measured coordinate data are not adequate or effective for parts with complex tolerance zones. This paper presents a new approach to dimensional qualification of manufactured parts. We view the part qualification problem as one of finding an admissible point in transformation space. Based on the concept of an admissible point, we develop theories and algorithms for checking conformance with part geometric dimensioning and tolerancing (GD&T), and formulate the tolerance check in terms of containment fit. An admissible transformation volume (ATV) is used to quantitatively characterize the quality of manufactured parts with respect to design tolerance specifications. We examine our approach on three tolerance examples and conclude that the admissible transformation volume is an effective metric for gauging part dimensional quality; it is especially useful for multi-tolerance-zone checks, where traditional methods fail.
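A crude sketch of the idea (not the paper's algorithm): discretize the transformation space (here, planar translations only) and collect the admissible transformations, those that bring every measured point within tolerance of its nominal position; the size of that set approximates the ATV. The part data below are hypothetical:

```python
def admissible(points, nominal, tol, dx, dy):
    """True if translating `points` by (dx, dy) puts every point within
    `tol` (Chebyshev distance) of its nominal location."""
    return all(max(abs(px + dx - nx), abs(py + dy - ny)) <= tol
               for (px, py), (nx, ny) in zip(points, nominal))

nominal = [(0, 0), (10, 0), (10, 10), (0, 10)]            # design positions
measured = [(0.3, 0.1), (10.2, 0.2), (10.3, 10.1), (0.2, 10.2)]

grid = [i / 10 for i in range(-10, 11)]                   # candidate translations
atv = [(dx, dy) for dx in grid for dy in grid
       if admissible(measured, nominal, 0.25, dx, dy)]
passes = len(atv) > 0        # part qualifies iff some admissible transformation exists
```

A full GD&T check would search over rotations as well and use zone shapes dictated by each tolerance callout, but the qualification criterion is the same: a non-empty admissible region.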

17.
An experimental design method for large-sample systems
Experimental design, especially mixed-level experimental design, is widely used in scientific research and manufacturing. Although many experimental design theories and design tables have been proposed, experimental design for large-sample systems with mixed levels remains quite difficult. Building on the experimental design method proposed by Yong Guo, and drawing on orthogonal design theory and genetic algorithms, this paper adapts that method to large-sample systems. The crossover and mutation operations are modified to fit the genetic algorithm, and a remainder (modulo) operation is introduced to efficiently extract a particular design from the full factorial design of a complex system. Finally, applying the method to a real multi-factor, multi-level system yields a satisfactory experimental scheme; for large-sample systems, the schemes produced by this method are therefore acceptable.
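The remainder-based extraction mentioned above can be sketched as a mixed-radix decomposition: a run index in the full factorial design is decoded into one level per factor by repeated divmod, so individual runs can be drawn without enumerating the whole design. The factor levels here are hypothetical:

```python
def run_from_index(index, levels):
    """Decode run `index` of a full factorial design into factor levels
    using repeated modulo/divide (mixed-radix decomposition)."""
    setting = []
    for m in levels:                  # m = number of levels of this factor
        index, level = divmod(index, m)
        setting.append(level)
    return setting

levels = [2, 3, 4]                    # a mixed-level design: 2 x 3 x 4 = 24 runs
design = [run_from_index(i, levels) for i in range(2 * 3 * 4)]
```

A genetic algorithm can then evolve a small subset of run indices, scoring each candidate subset by orthogonality criteria, instead of materializing the (possibly enormous) full design.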

18.
The paper addresses a notion of configuring systems, constructing them from specified component parts with specified sharing. This notion is independent of any underlying specification language and has been abstractly identified with the taking of colimits in category theory. Mathematically it is known that these can be expressed by presheaves, and the present paper applies this idea to configuration. We interpret the category theory informally as follows. Suppose C is a category whose objects are interpreted as specifications, and for which each morphism u : X → Y is interpreted as contravariant 'instance reduction', reducing instances of specification Y to instances of X. Then a presheaf P : C^op → Set represents a collection of instances that is closed under reduction. We develop an algebraic account of presheaves in which we present configurations by generators (for components) and relations (for shared reducts), and we outline a proposed configuration language based on these techniques. Oriat uses diagrams to express colimits of specifications, and we show that Oriat's category Diag(C) of finite diagrams is equivalent to the category of finitely presented presheaves over C. Received May 1998 / Accepted in revised form August 2000

19.
Semantic foundations for generalized rewrite theories
Rewriting logic (RL) is a logic of actions whose models are concurrent systems. Rewrite theories involve the specification of equational theories of data and state structures together with a set of rewrite rules that model the dynamics of concurrent systems. Since its introduction, more than a decade ago, RL has attracted the interest of both theorists and practitioners, who have contributed in showing its generality as a semantic and logical framework and also as a programming paradigm. The experimentation conducted in these years has suggested that some significant extensions to the original definition of the logic would be very useful in practice. These extensions may develop along several dimensions, like the choice of the underlying equational logic, the kind of side conditions allowed in rewrite rules, and operational concerns for the execution of certain rewrites. In particular, the Maude system now supports subsorting and conditional sentences in the equational logic for data, and also frozen arguments to block undesired nested rewrites; moreover, it allows equality and membership assertions in rule conditions. In this paper, we give a detailed presentation of the inference rules, model theory, and completeness of such generalized rewrite theories. Our results provide a mathematical semantics for Maude, and a foundation for formal reasoning about Maude specifications.
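A minimal sketch of the core idea, including frozen argument positions that block nested rewrites; the rules and terms are toy examples, and real rewrite theories also carry equations, sorts, and rule conditions:

```python
def rewrite(term, rules, frozen=()):
    """Apply the first matching rule anywhere in `term` (a nested tuple
    ('op', arg, ...)), skipping argument positions marked frozen."""
    for lhs, rhs in rules:
        if term == lhs:
            return rhs, True
    if isinstance(term, tuple):
        op, *args = term
        for i, a in enumerate(args):
            if (op, i) in frozen:          # frozen argument: no nested rewrite
                continue
            new, changed = rewrite(a, rules, frozen)
            if changed:
                return (op, *args[:i], new, *args[i + 1:]), True
    return term, False

def normalize(term, rules, frozen=()):
    """Rewrite until no rule applies."""
    changed = True
    while changed:
        term, changed = rewrite(term, rules, frozen)
    return term

# Toy system: successor states of a counter.
rules = [(("s", "zero"), "one"), (("s", "one"), "two")]
result = normalize(("s", ("s", "zero")), rules)
```

Freezing position 0 of an operator `f` stops the engine from rewriting inside `f`'s argument, which is the operational role frozen arguments play in Maude.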

20.
This paper explores theories that help in (i) proving that a system composed from components satisfies a system specification given only specifications of components and the composition operator, and (ii) deducing desirable properties of components from the system specification and properties of the composition operator. The paper studies compositional systems in general without making assumptions that components are computer programs. The results obtained from such abstract representations are general but also weaker than results that can be obtained from more restrictive assumptions such as assuming that systems are parallel compositions of concurrent programs. Explorations of general theories of composition can help identify fundamental issues common to many problem domains. The theory presented here is based on predicate transformers. Received: 30 May 2002, Revised: 16 August 2003, Published online: 30 October 2003
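A small sketch of the predicate-transformer view: components are modeled by their weakest-precondition transformers, and sequential composition of components is composition of transformers. The assignments and specification are hypothetical:

```python
# Predicates are functions state -> bool; a component is modeled by its
# weakest-precondition transformer wp : postcondition -> precondition.
def assign_x_plus_1(post):            # wp of x := x + 1
    return lambda s: post({**s, "x": s["x"] + 1})

def assign_x_double(post):            # wp of x := 2 * x
    return lambda s: post({**s, "x": 2 * s["x"]})

def seq(c1, c2):
    """Sequential composition: wp(c1; c2)(post) = wp(c1)(wp(c2)(post))."""
    return lambda post: c1(c2(post))

system = seq(assign_x_plus_1, assign_x_double)   # behaves as x := 2 * (x + 1)
spec = lambda s: s["x"] >= 10                    # desired system postcondition
pre = system(spec)                               # weakest precondition of the system
holds = pre({"x": 4})                            # 2 * (4 + 1) = 10 >= 10
```

Proving that the composed system meets `spec` reduces to showing the initial states satisfy `pre`, using only the component transformers and the composition operator, which is exactly the proof pattern (i) in the abstract.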


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号