Similar Documents (20 results)
1.
王云峰  庞军  查鸣  杨朝晖  郑国梁 《软件学报》2000,11(8):1041-1046
The strength of COOZ (complete object-oriented Z) lies in precisely specifying large programs. The structure of COOZ itself, however, does not support the refinement calculus, which limits COOZ's applicability and keeps it from serving as a complete method for software development. Introducing the refinement calculus into COOZ remedies its weaknesses in the design and implementation phases, and at the same time removes the complete separation, in both structure and notation, between specification and implementation, so that program development can proceed smoothly within a single framework. This paper proposes a software development model based on COOZ and the refinement calculus and discusses data refinement and operation refinement through examples. With regard to implementation techniques for the refinement calculus, it constructs a data refinement operator and proposes a …
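As background for the data and operation refinement discussed above, the following is a standard law from Morgan's refinement calculus (textbook material, not the paper's COOZ-specific operator): the assignment-introduction law refines a specification statement into executable code.

\[
w:[\mathit{pre},\ \mathit{post}] \;\sqsubseteq\; w := E
\qquad\text{provided}\qquad
\mathit{pre} \Rightarrow \mathit{post}[w \backslash E]
\]

For example, \(x:[\mathit{true},\ x > 0] \;\sqsubseteq\; x := 1\), since \(\mathit{true} \Rightarrow (1 > 0)\).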

2.
This paper introduces Designware, a new formal method that supports automated algorithm design. It analyzes Designware's theoretical foundations and specification-refinement mechanism in detail, describes its semi-automatic support system for algorithm design, demonstrates its concrete use through a development example, presents two practical projects in which Designware has been applied, and concludes with an assessment of the method.

3.
This paper first discusses the main problems that arise when state transition systems are described with DFA-model techniques and argues that algebraic specification techniques offer a feasible way to solve them. It then introduces SPECTRUM, a new-generation algebraic specification language with loose semantics, together with the syntax and semantics of its main specification operators. Based on the mathematical definitions of the DFA model and its language, structured algebraic specifications of both are given, laying a foundation for the formal design and development of DFA-based state transition systems.
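To make the underlying model concrete, here is the mathematical DFA (Q, Σ, δ, q0, F) rendered directly in Python. This is an illustrative sketch only; it is not SPECTRUM syntax, and the example automaton is ours, not the paper's.

```python
# A DFA as the 5-tuple (states, alphabet, delta, start, accepting);
# delta is a total transition function on (state, symbol) pairs.

def make_dfa(states, alphabet, delta, start, accepting):
    def accepts(word):
        q = start
        for symbol in word:
            assert symbol in alphabet, f"{symbol!r} not in alphabet"
            q = delta[(q, symbol)]      # one deterministic step
        return q in accepting           # accept iff we end in F
    return accepts

# Example: strings over {0,1} with an even number of 1s.
even_ones = make_dfa(
    states={"even", "odd"},
    alphabet={"0", "1"},
    delta={("even", "0"): "even", ("even", "1"): "odd",
           ("odd", "0"): "odd",   ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)
print(even_ones("1011"))  # False: three 1s
print(even_ones("1001"))  # True: two 1s
```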

4.
全炳哲  陈伟 《计算机学报》1996,19(11):833-840
A program can be constructed by proving its specification; however, for "hard" problems and for specifications without proofs, this method cannot be used to construct programs. On the other hand, if a verification program for a program specification can be constructed, that program can itself be regarded as a program for the specification. This paper discusses methods for constructing verification programs for program specifications, as one approach to the study of automatic programming.

5.
This paper proposes KCDP, a new DPLL-based compilation algorithm that links EPCCL theories with SAT solving, so that many of the advanced techniques found in modern DPLL-based SAT solvers can be carried over to the compilation of EPCCL theories to improve compilation efficiency. It further proposes reduction rules and, based on them, a REDUCE algorithm that terminates in polynomial time and reduces EPCCL theories. Combining KCDP and REDUCE, the compiler C2E is implemented, and its compilation efficiency and quality are evaluated on random problems and on internationally used benchmark instances. The experimental results show that C2E is a high-performance EPCCL compiler in terms of both compilation efficiency and compilation quality.
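Since KCDP builds on DPLL, a minimal textbook DPLL core may help readers unfamiliar with it. This is a hedged sketch of plain DPLL satisfiability checking in Python; it is not the paper's KCDP or REDUCE algorithm, and the clause encoding is our own choice.

```python
# Plain DPLL: unit propagation plus backtracking on a chosen variable.
# A formula is a list of clauses; a clause is a list of non-zero ints,
# where a negative literal denotes a negated variable.

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})

    # Unit propagation: keep simplifying until no unit clause remains.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return None                     # empty clause: conflict
            if len(lits) == 1:
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            else:
                simplified.append(lits)
        clauses = simplified

    if not clauses:
        return assignment                       # every clause satisfied

    var = abs(clauses[0][0])                    # naive branching heuristic
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x3)
print(dpll([[1, 2], [-1, 3], [-3]]))  # a satisfying assignment, e.g. x2=True
```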

6.
骆伟忠  蔡昭权  兰远东  刘运龙 《计算机科学》2017,44(Z11):115-118, 132
The complete dominating set problem is a well-known NP-hard problem with important applications in wireless sensor networks. This paper focuses on designing reduction algorithms that shrink the size of problem instances. Through an in-depth analysis of the problem structure and a coloring of the graph's vertices, new combinatorial properties among the vertices are obtained, and on this basis a series of efficient polynomial-time local reduction rules is proposed. The correctness of the reduction rules is proved, and simulation experiments confirm their effectiveness.

7.
A Process Definition Model and Its Verifiability Analysis
A software process is a partially ordered set of activities required to manage, develop, and maintain software systems. Since a software process is a kind of workflow, its modeling and analysis can draw on workflow theory and techniques. Because graphical representations of task control flow are intuitive and convenient, most systems model software processes this way. However, verifying the soundness of a process definition is usually of very high complexity, so a practical algorithm must be sought; the graph reduction algorithm is one feasible choice.

8.
贾国平  郑国梁 《软件》1995,(8):8-14
From a software engineering perspective, this paper classifies specification methods into two categories. One is logic-based specification, which typically gives a set of properties the system should satisfy; its representative is the temporal logic approach. The other is model-based specification, which typically gives an abstract model prescribing how the program should behave; its representative is the Calculus of Communicating Systems (CCS). The paper then compares and discusses the two categories against software engineering principles and concludes that both play distinct and important roles throughout system development: they complement each other, and neither can be dispensed with. Finally, future research is outlined: software development should consider processes that combine multiple specification methods and multiple semantics.
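To make the contrast concrete (our illustration, not an example from the paper): a property-oriented temporal-logic specification states what must hold of every behavior, while a model-oriented CCS process prescribes how behavior unfolds.

\[
\Box(\mathit{req} \Rightarrow \Diamond\,\mathit{ack})
\qquad\text{versus}\qquad
\mathit{Buf} \stackrel{\mathrm{def}}{=} \mathit{req}.\mathit{ack}.\mathit{Buf}
\]

The temporal-logic formula says every request is eventually acknowledged; the CCS process is one abstract machine that realizes this by alternating req and ack actions.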

9.
高文宇 《计算机应用》2010,30(6):1431-1433
To solve the maximum-leaf spanning tree (out-branching) problem on directed graphs, several reduction rules are proposed; applying them to a digraph reduces the size of the original graph. An approximation algorithm is then designed to solve, in the reduced graph, the maximum-leaf out-branching problem for a specified root vertex. For the out-branching obtained by the approximation algorithm, optimization rules are designed in combination with the earlier reduction rules, so that optimizing transformations further increase the number of leaf vertices of the out-branching. Simulation experiments show that the reduction rules, the approximation algorithm, and the optimization rules are all effective.

10.
白健  刘念  李子臣  刘慧 《计算机工程》2013,(11):147-149,162
A lattice is a regular arrangement of points in multi-dimensional space, and lattice-based public-key cryptosystems are a hot research topic in cryptography. To address the low efficiency and long running time of traditional lattice basis reduction algorithms, this paper analyzes the Gauss and LLL reduction algorithms and, on this basis, proposes a new lattice basis reduction algorithm (Gauss-LLL), verifies its correctness, and gives pseudocode for an implementation. The algorithm can reduce any basis of a lattice, ultimately producing a reduced basis of shorter vectors. Analysis shows that, compared with LLL, the Gauss-LLL algorithm obtains a better reduced basis and reduces more efficiently.
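For readers unfamiliar with lattice basis reduction, here is the two-dimensional special case, Gauss (Lagrange) reduction, which algorithms in the Gauss/LLL family generalize. This is a textbook sketch in Python, not the paper's Gauss-LLL algorithm.

```python
# Gauss (Lagrange) reduction of a rank-2 integer lattice basis:
# repeatedly subtract the nearest integer multiple of the shorter
# vector from the longer one, like a vector-valued Euclidean algorithm.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def gauss_reduce(b1, b2):
    """Reduce a basis (b1, b2) of linearly independent integer vectors."""
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1              # keep b1 the shorter vector
        m = round(dot(b1, b2) / dot(b1, b1))
        if m == 0:
            return b1, b2                # |mu| <= 1/2: basis is reduced
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])

print(gauss_reduce((90, 123), (56, 76)))  # a much shorter reduced basis
```

In dimension two the returned b1 is a shortest nonzero vector of the lattice, which is what makes reduced bases useful in lattice-based cryptography.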

11.
We describe the theory of refinements of specifications based on localizations of categories. The approach allows us to enlarge the family of refinements (i.e. specification morphisms) of the category Spec – the category of first order theories (specifications) of multi-sorted algebras. We prove that the class of specification morphisms in the category Spec can be enriched by the class of all interpretations of theories from Spec in all definitional extensions of theories of multi-sorted algebras. This provides a guide for finding a path leading from a given specification to a specification that is provably correct code in a programming language (such as C++, Lisp, or Java).

12.
13.
A testing-based approach for constructing and refining very high-level software functionality representations such as intentions, natural language assertions, and formal specifications is presented and applied to a standard line-editing problem as an illustration. The approach involves the use of specification-based (black-box) test-case generation strategies, high-level specification formalisms, redundant or parallel development and cross-validation, and a logic programming support environment. Test-case reference sets are used as software functionality representations for the purposes of cross-validating two distinct high-level representations, and identifying ambiguities and omissions in those representations. In fact, we propose the use of successive refinements of such test reference sets as the authoritative specification throughout the software development process. Potential benefits of the approach include improvements in user/designer communication over all life cycle phases, and an increase in the quality of specifications and designs.
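As a hedged illustration of specification-based (black-box) test-case generation (our sketch, far simpler than the strategies the paper applies to its line-editing problem), boundary-value cases can be derived mechanically from a range constraint in a specification:

```python
# Boundary-value test generation for a parameter specified to lie in
# [lo, hi]: probe each boundary, its neighbors, and a nominal value.

def boundary_cases(lo, hi):
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# Hypothetical spec fragment: a column number must satisfy 1 <= col <= 80.
def in_spec(col):
    return 1 <= col <= 80

for case in boundary_cases(1, 80):
    verdict = "valid" if in_spec(case) else "invalid"
    print(f"col={case}: expected {verdict}")
```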

14.
Creating High Confidence in a Separation Kernel
Separation of processes is the foundation for security and safety properties of systems. This paper reports on a collaborative effort of Government, Industry and Academia to achieve high confidence in the separation of processes. To this end, this paper will discuss (1) what a separation kernel is, (2) why the separation of processes is fundamental to security systems, (3) how high confidence in the separation property of the kernel was obtained, and (4) some of the ways government, industry, and academia cooperated to achieve high confidence in a separation kernel. What is separation? Strict separation is the inability of one process to interfere with another. In a separation kernel, the word separation is interpreted very strictly. Any means for one process to disturb another, be it by communication primitives, by sharing of data, or by subtle uses of kernel primitives not intended for communication, is ruled out when two processes are separated. Why is separation fundamental? Strict separation between processes enables the evaluation of a system to check that the system meets its security policy. For example, if a red process is strictly separated from a black process, then it can be concluded that there is no flow of information from red to black. How was high confidence achieved? We have collaborated and shared our expertise in the use of SPECWARE. SPECWARE is a correct-by-construction method, in which high level specifications are built up from modules using specification combinators. Refinements of the specifications are made until an implementation is achieved. These refinements are also subject to combinators. The high confidence in the separation property of the kernel stems from the use of formal methods in the development of the kernel. How did we collaborate? Staff from the Kestrel Institute (developers of SPECWARE), the Department of Defense (DoD), and Motorola (developers of the kernel) cooperated in the creation of the Mathematically Analyzed Separation Kernel (MASK). DoD provided the separation kernel concept, and expertise in computer security and high confidence development. Kestrel provided expertise in SPECWARE. Motorola combined its own expertise with that of DoD and Kestrel in creating MASK.

15.
Software Product Line Engineering (SPLE) deals with developing artifacts that capture the common and variable aspects of software product families. Domain models are one kind of such artifacts. Being developed in early stages, domain models need to specify commonality and variability and guide the reuse of the artifacts in particular software products. Although different modeling methods have been proposed to manage and support these activities, the assessment of these methods is still in an inceptive stage. In this work, we examined the comprehensibility of domain models specified in ADOM, a UML-based SPLE method. In particular, we conducted a controlled experiment in which 116 undergraduate students were required to answer comprehension questions regarding a domain model that was equipped with explicit reuse guidance and/or variability specification. We found that explicit specification of reuse guidance within the domain model helped understand the model, whereas explicit specification of variability increased comprehensibility only to a limited extent. Explicit specification of both reuse guidance and variability often provided intermediate results, namely, results that were better than specification of variability without reuse guidance, but worse than specification of reuse guidance without variability. All these results were perceived in different UML diagram types, namely, use case, class, and sequence diagrams and for different commonality-, variability-, and reuse-related aspects.

16.
It is well known that the principal operators in the Z schema calculus are not monotonic with respect to refinement, which limits their usefulness in software development. The usual reaction to this observation is to remove all schema operators before performing any kind of refinement, or to move to a different formalism such as B or the refinement calculus. This paper examines the interaction between refinement and the schema calculus more closely, showing exactly how non-monotonicity arises, and identifying various conditions under which components of schema expressions can be safely replaced by their refinements. This analysis uses a decomposition of the standard refinement relation into two simpler relations that allow us to study refinements that modify a specification in different ways.

17.
Generalized queries are defined as sets of clauses in implication form. They cover several tasks of practical importance for database maintenance such as answering positive queries, computing database completions and integrity constraints checking. We address the issue of answering generalized queries under the minimal model semantics for the class of disjunctive deductive databases (DDDBs). The advanced approach is based on having the query induce an order on the models returned by a sound and complete minimal model generating procedure. We consider answers that are true in all and those that are true in some minimal models of the theory. We address the issue of answering positive queries through the construction of the minimal model state of the DDDB, using a minimal model generating procedure. The refinements allowed by the procedure include isolating a minimal component of a disjunctive answer, the specification of possible updates to the theory to enable the derivability of certain queries and deciding the monotonicity properties of answers to different classes of queries.
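The distinction above between answers true in all minimal models and answers true in some minimal model can be made concrete with a toy propositional example. The brute-force enumerator below is only an illustration of minimal-model semantics, not the paper's minimal model generating procedure.

```python
# Minimal Herbrand models of a propositional disjunctive database.
# A clause (body, heads) encodes: body -> h1 or ... or hn.
from itertools import combinations

def models(atoms, clauses):
    for r in range(len(atoms) + 1):
        for m in combinations(sorted(atoms), r):
            m = set(m)
            if all(not body <= m or m & heads for body, heads in clauses):
                yield m

def minimal_models(atoms, clauses):
    ms = list(models(atoms, clauses))
    return [m for m in ms if not any(n < m for n in ms)]

# Database: { a or b,  b -> c }
atoms = {"a", "b", "c"}
clauses = [(set(), {"a", "b"}), ({"b"}, {"c"})]
mins = minimal_models(atoms, clauses)
print(mins)                          # [{'a'}, {'b', 'c'}]
print(all("c" in m for m in mins))   # c true in all minimal models: False
print(any("c" in m for m in mins))   # c true in some minimal model: True
```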

18.
Verifying data refinements using a model checker
In this paper, we consider how refinements between state-based specifications (e.g., written in Z) can be checked by use of a model checker. Specifically, we are interested in the verification of downward and upward simulations, which are the standard approach to verifying refinements in state-based notations. We show how downward and upward simulations can be checked using existing temporal logic model checkers. In particular, we show how the branching time temporal logic CTL can be used to encode the standard simulation conditions. We do this for both a blocking, or guarded, interpretation of operations (often used when specifying reactive systems) as well as the more common non-blocking interpretation of operations used in many state-based specification languages (for modelling sequential systems). The approach is general enough to use with any state-based specification language, and we illustrate how refinements between Z specifications can be checked using the SAL CTL model checker using a small example.
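For reference, the downward simulation conditions the paper encodes are standard in the Z refinement literature; in the common non-blocking interpretation, with retrieve relation R linking abstract state AState to concrete state CState, they read:

\[
\begin{array}{ll}
\text{initialization:} & \forall\, \mathit{CState}' \,\bullet\, \mathit{CInit} \Rightarrow (\exists\, \mathit{AState}' \,\bullet\, \mathit{AInit} \wedge R') \\
\text{applicability:} & \forall\, \mathit{AState};\ \mathit{CState} \,\bullet\, \mathrm{pre}\,\mathit{AOp} \wedge R \Rightarrow \mathrm{pre}\,\mathit{COp} \\
\text{correctness:} & \forall\, \mathit{AState};\ \mathit{CState};\ \mathit{CState}' \,\bullet\, \mathrm{pre}\,\mathit{AOp} \wedge R \wedge \mathit{COp} \Rightarrow (\exists\, \mathit{AState}' \,\bullet\, R' \wedge \mathit{AOp})
\end{array}
\]

Intuitively, a model checker can refute a claimed refinement by finding a reachable state that violates one of these implications.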

19.
《Theoretical computer science》2003,290(2):1201-1222
This paper presents a theory of component-based development for exception handling in fault-tolerant systems. The theory is based on a general theory of composition, which enables us to factorize the temporal specification of a system into the specifications of its components. This is a new development because past efforts to set up such a theory have always been hindered by the problem of composing progress properties.

20.
Reviews and inspections of software artifacts throughout the development life cycle are effective techniques for identifying defects and improving software quality. While review methods for text-based artifacts (e.g., code) are well understood, very little guidance is available for performing reviews of software diagrams, which are rapidly becoming the dominant form of software specification and design. Drawing upon human cognitive theory, we study how 12 experienced software developers perform individual reviews on a software design containing two types of diagrams: entity-relationship diagrams and data flow diagrams. Verbal protocol methods are employed to describe and analyze defect search patterns among the software artifacts, both text and diagrams, within the design. Results indicate that search patterns that rapidly switch between the two design diagrams are the most effective. These findings support the cognitive theory thesis that how an individual processes information impacts processing success. We conclude with specific recommendations for improving the practice of reviewing software diagrams.
