1.
Test cases are generated from UML statecharts using state-based test-data generation criteria. The UML statechart is the key element of test case generation; in a certain sense, statecharts make it easy to generate test cases.
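As a concrete illustration of state-based test generation from a statechart (our own minimal sketch, not taken from the cited work), the Python fragment below treats a statechart as a labelled transition graph and derives transition-coverage test sequences by prefixing each transition with a shortest event path from the initial state; the media-player states and events are invented for the example.

```python
# Minimal sketch: deriving transition-coverage test sequences from a UML
# statechart modelled as a labelled transition graph (example data is invented).
from collections import deque

# Hypothetical statechart of a media player: state -> [(event, next_state), ...]
statechart = {
    "Stopped": [("play", "Playing")],
    "Playing": [("pause", "Paused"), ("stop", "Stopped")],
    "Paused":  [("play", "Playing"), ("stop", "Stopped")],
}

def shortest_event_path(chart, start, goal):
    """Breadth-first search for the shortest event sequence from start to goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for event, nxt in chart.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [event]))
    return None

def transition_coverage_tests(chart, initial):
    """Return event sequences that together exercise every transition once."""
    tests, covered = [], set()
    for src, edges in chart.items():
        for event, dst in edges:
            if (src, event, dst) in covered:
                continue
            prefix = shortest_event_path(chart, initial, src)
            if prefix is None:
                continue  # unreachable state: no test can exercise this transition
            tests.append(prefix + [event])
            covered.add((src, event, dst))
    return tests

if __name__ == "__main__":
    for seq in transition_coverage_tests(statechart, "Stopped"):
        print(seq)
```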
2.
This paper describes some characteristics of computerized examination systems in China and, following the examination specifications issued by international organizations, refines the examination process into a model with four sub-processes. Finally, an extensible, general-purpose examination system that conforms to the examination specifications is designed and implemented on the J2EE platform.
3.
Identity-based encryption with equality test (IBEET) schemes simplify key and certificate management while preserving data confidentiality, but they lack control over authorization granularity and therefore cannot meet the management requirements of different data granularities in practical applications. To address this, four different types of authorization are introduced: arbitrary-user-level, arbitrary-ciphertext-level, designated-user-level, and ciphertext-user-level. Based on asymmetric bilinear maps, an IBEET scheme supporting flexible authorization is constructed, together with the relevant definitions and security model. Analysis shows that the scheme achieves OW-ID-CCA security and protects user privacy.
4.
We present ProTest, an automatic test environment for B specifications. B is a model-oriented notation where systems are specified in terms of abstract states and operations on abstract states. ProTest first generates a state coverage graph of a B specification through exhaustive model checking, and the coverage graph is traversed to generate a set of test cases, each being a sequence of B operations. For the model checking to be exhaustive, some transformations are applied to the sets used in the B machine. The approach also works if it is not exhaustive; one can stop at any point in time during the state space exploration and generate test cases from the coverage graph obtained so far. ProTest then simultaneously performs animation of the B machine and the execution of the corresponding implementation in Java, and assigns verdicts on the test results. With some restrictions imposed on the B operations, the whole of the testing process is performed mechanically. We demonstrate the efficacy of our test environment by performing a small case study from industry. Furthermore, we present a solution to the problem of handling non-determinism in B operations.
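The graph-traversal step described above can be pictured with a small sketch (not ProTest itself): once a state coverage graph is available, each bounded path from the initial state yields a test case as a sequence of operations. The coverage graph, operation names and depth bound below are hypothetical.

```python
# Minimal sketch of turning a state coverage graph into test cases
# (sequences of operations), in the spirit of the approach described above.
# The coverage graph below is a made-up example, not output of ProTest.
coverage_graph = {
    "s0": [("OpenAccount", "s1")],
    "s1": [("Deposit", "s2"), ("CloseAccount", "s3")],
    "s2": [("Withdraw", "s1")],
    "s3": [],
}

def test_cases(graph, initial, max_depth=4):
    """Enumerate operation sequences along paths of bounded length."""
    cases = []

    def walk(state, path):
        # Stop at dead ends or when the depth bound is reached.
        if not graph.get(state) or len(path) == max_depth:
            if path:
                cases.append(list(path))
            return
        for op, nxt in graph[state]:
            walk(nxt, path + [op])

    walk(initial, [])
    return cases

print(test_cases(coverage_graph, "s0"))
```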
5.
We develop foundations for computing Craig-Lyndon interpolants of two given formulas with first-order theorem provers that construct clausal tableaux. Provers that can be understood in this way include efficient machine-oriented systems based on calculi of two families: goal-oriented such as model elimination and the connection method, and bottom-up such as the hypertableau calculus. We present the first interpolation method for first-order proofs represented by closed tableaux that proceeds in two stages, similar to known interpolation methods for resolution proofs. The first stage is an induction on the tableau structure, which is sufficient to compute propositional interpolants. We show that this can linearly simulate different prominent propositional interpolation methods that operate by an induction on a resolution deduction tree. The second stage, interpolant lifting, introduces quantified variables that replace certain terms (constants and compound terms). We justify the correctness of interpolant lifting (for the case without built-in equality) abstractly on the basis of Herbrand's theorem and for a different characterization of the formulas to be lifted than in the literature. In addition, we discuss various subtle aspects that are relevant for the investigation and practical realization of first-order interpolation based on clausal tableaux.
7.
There is a great deal of research aimed toward the development of temporal logics and model checking algorithms which can be used to verify properties of systems. In this paper, we present a methodology and supporting tools which allow researchers and practitioners to automatically generate model checking algorithms for temporal logics from algebraic specifications. These tools are extensions of algebraic compiler generation tools and are used to specify model checkers as mappings of the form L_s → L_t, where L_s is a temporal logic source language and L_t is a target language representing sets of states of a model M, such that the image of a formula is the set of states of M in which it holds. The algebraic specifications for a model checker define the logic source language, the target language representing sets of states in a model, and the embedding of the source language into the target language. Since users can modify and extend existing specifications or write original specifications, new model checking algorithms for new temporal logics can be easily and quickly developed; this allows the user more time to experiment with the logic and its model checking algorithm instead of developing its implementation. Here we show how this algebraic framework can be used to specify model checking algorithms for CTL, a real-time CTL, CTL*, and a custom extension called CTL_e that makes use of propositions labeling the edges as well as the nodes of a model. We also show how the target language can be changed to a language of binary decision diagrams to generate symbolic model checkers from algebraic specifications.
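The mapping described above, from a temporal-logic formula to the set of model states satisfying it, can be illustrated with a tiny explicit-state CTL checker. The sketch below is a generic textbook-style fixed-point computation for EX, EU and EG over an invented Kripke structure, not the algebraically generated checker of the paper.

```python
# Minimal explicit-state CTL sketch: each formula is evaluated to the set of
# states of the model in which it holds, via standard fixed-point equations.
states = {"s0", "s1", "s2"}
transitions = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}}
labels = {"s0": {"p"}, "s1": {"p"}, "s2": {"q"}}

def sat_atom(p):
    return {s for s in states if p in labels[s]}

def sat_ex(phi_states):
    """EX phi: states with some successor satisfying phi."""
    return {s for s in states if transitions[s] & phi_states}

def sat_eu(phi1_states, phi2_states):
    """E[phi1 U phi2]: least fixed point."""
    result = set(phi2_states)
    while True:
        new = result | (phi1_states & sat_ex(result))
        if new == result:
            return result
        result = new

def sat_eg(phi_states):
    """EG phi: greatest fixed point."""
    result = set(phi_states)
    while True:
        new = {s for s in result if transitions[s] & result}
        if new == result:
            return result
        result = new

# E[p U q] should hold in every state of this toy model.
print(sat_eu(sat_atom("p"), sat_atom("q")))
```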
8.
By analogy with a Software Requirements Specification (SRS), it is argued that a Method Requirements Specification (MRS) should be introduced in method engineering. It shares with the SRS the property of implementation-independence. This means that an MRS must be an instance of an abstract metamodel and not of a technical metamodel like GOPRR (Graph, Object, Property, Relationship, and Role). The MRS is then translated to be an instantiation of a technical metamodel. We develop a representation system for an MRS and describe an automated process for instantiating a technical metamodel with an MRS. This instantiation is used to produce the actual method which is then given to a metaCASE to produce a CASE tool. Thus, we propose a method engineering approach rooted in the MRS.
9.
This paper deals with learning first-order logic rules from data lacking an explicit classification predicate. Consequently, the learned rules are not restricted to predicate definitions as in supervised inductive logic programming. First-order logic offers the ability to deal with structured, multi-relational knowledge. Possible applications include first-order knowledge discovery, induction of integrity constraints in databases, multiple predicate learning, and learning mixed theories of predicate definitions and integrity constraints. One of the contributions of our work is a heuristic measure of confirmation, trading off novelty and satisfaction of the rule. The approach has been implemented in the Tertius system. The system performs an optimal best-first search, finding the k most confirmed hypotheses, and includes a non-redundant refinement operator to avoid duplicates in the search. Tertius can be adapted to many different domains by tuning its parameters, and it can deal either with individual-based representations by upgrading propositional representations to first-order, or with general logical rules. We describe a number of experiments demonstrating the feasibility and flexibility of our approach.
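The k-best best-first search mentioned above can be pictured with a generic skeleton; the refinement operator, the scoring function, the optimistic bound and the toy hypotheses below are placeholders, not Tertius's confirmation measure.

```python
# Generic sketch of an optimal best-first search for the k best-scoring
# hypotheses; the scoring and refinement functions are placeholders.
import heapq
from itertools import count

def best_first_k(root, refine, score, optimistic, k):
    """Return the k best-scoring hypotheses reachable from `root` by `refine`.

    `optimistic(h)` must upper-bound the score of h and of all its
    refinements; this makes the pruning below sound, so the search
    provably returns (some) k best hypotheses.
    """
    tie = count()                       # tiebreaker so the heap never compares hypotheses
    best = []                           # min-heap of (score, i, hypothesis)
    frontier = [(-optimistic(root), next(tie), root)]
    seen = {root}
    while frontier:
        neg_bound, _, h = heapq.heappop(frontier)
        if len(best) == k and -neg_bound <= best[0][0]:
            continue                    # this branch cannot beat the current k best
        heapq.heappush(best, (score(h), next(tie), h))
        if len(best) > k:
            heapq.heappop(best)
        for child in refine(h):         # refinement operator; assumed non-redundant
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (-optimistic(child), next(tie), child))
    return [(s, h) for s, _, h in sorted(best, reverse=True)]

# Toy usage: hypotheses are tuples over {0, 1, 2} of length <= 2,
# score is the sum of elements, and the optimistic bound pads with 2s.
refine = lambda h: [h + (v,) for v in (0, 1, 2)] if len(h) < 2 else []
score = lambda h: sum(h)
optimistic = lambda h: sum(h) + 2 * (2 - len(h))
print(best_first_k((), refine, score, optimistic, k=3))
```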
11.
In this paper we present a study of definability properties of fixed points of effective operators on the real numbers without the equality test. In particular we prove that Gandy theorem holds for the reals without the equality test. This provides a useful tool for dealing with recursive definitions using σ-formulas.
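For background (this is the standard set-theoretic construction for a monotone operator, not the paper's specific setting of effective operators on the reals), the least fixed point that such definability results concern is obtained by transfinite iteration:

```latex
\Gamma^{0} = \emptyset, \qquad
\Gamma^{\alpha+1} = \Gamma(\Gamma^{\alpha}), \qquad
\Gamma^{\lambda} = \bigcup_{\alpha<\lambda}\Gamma^{\alpha}\ (\lambda\ \text{limit}), \qquad
\mathrm{lfp}(\Gamma) = \Gamma^{\kappa}\ \text{for the least }\kappa\text{ with }\Gamma^{\kappa+1}=\Gamma^{\kappa}.
```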
12.
Structured documents are usually processed by tree-based document transformers, which transform the document tree representing the structure of the input document into another tree structure. Event-based document transformers, by contrast, recognize the input as a stream of parsing events, i.e., lexical tokens, and process the events one by one in an event-driven manner. Compared to tree-based transformers, which construct an intermediate tree representation, event-based document transformers need less memory and are more tolerant of large inputs. This paper proposes an algorithm which derives an event-based transformer from a given specification of a document transformation over a tree structure. The derivation of an event-based transformer is carried out in the framework of attribute grammars. We first obtain an attribute grammar which processes a stream of parsing events by applying a deforestation method; we then derive an attribute evaluation scheme relevant to the event-based transformation. Using this algorithm, one can develop event-based document transformers in a more declarative style than directly programming over the stream of parsing events.
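A minimal sketch of the event-based style (using Python's xml.sax as the event source; the element names and the renaming rule are invented, and this is not the attribute-grammar derivation of the paper): the input is consumed as a stream of parsing events and output is emitted on the fly, with no intermediate tree.

```python
# Minimal sketch of an event-based (streaming) document transformer:
# the input is processed as a stream of parsing events and output is
# emitted on the fly, without building an intermediate tree.
import xml.sax

class RenameHandler(xml.sax.ContentHandler):
    """Rename <item> elements to <entry>, copy everything else through."""
    def __init__(self):
        super().__init__()
        self.out = []

    def startElement(self, name, attrs):
        self.out.append("<%s>" % ("entry" if name == "item" else name))

    def characters(self, content):
        self.out.append(content)

    def endElement(self, name):
        self.out.append("</%s>" % ("entry" if name == "item" else name))

doc = b"<list><item>one</item><item>two</item></list>"
handler = RenameHandler()
xml.sax.parseString(doc, handler)
print("".join(handler.out))  # <list><entry>one</entry><entry>two</entry></list>
```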
14.
A propositional temporal logic is briefly introduced and its use for reactive systems specification is motivated and illustrated. G-automata are proposed as a new operational semantics domain designed to cope with fairness/liveness properties. G-automata are a class of labelled transition systems with an additional structure of goal achievement which forces the eventual fulfilment of every pending goal. An algorithm is then presented that takes a finite system specification as input and, by a stepwise tableaux analysis method, builds up a canonical G-automaton matching the specification. Eventuality formulae correspond to goals of the automaton, their satisfaction thus being assured. The direct execution of G-automata, and consequently of specifications, is then discussed and suggested as an alternative approach to the execution of propositional temporal logic. A short overview of the advantages of applying the techniques to the specific field of database monitoring is presented.
15.
Object-oriented software now accounts for a large proportion of software systems, and in software testing formal specifications are commonly used as the basis for correctness verification. This paper studies a method for generating test cases from algebraic specifications of programs. First, a set of basic terms is obtained from the algebraic specification. To avoid being constrained by the ideal set of basic terms when selecting a group of basic terms from it, basic terms are generated from a basic-term model graph; an equivalent set of normal forms is then derived from the set of basic terms. To address the problem that terms generated from a normal-form model tree are not all normal forms and that the length of a normal form may be unbounded, a method for splitting paths in the basic-term model graph is proposed. Finally, test cases are generated by substituting the normal forms for the variables in the axioms of the specification. For conditional and loop statements in the axioms, an axiom transformation method is also proposed to guarantee coverage of the test paths. Case analysis and experiments show that the method generates a minimal set of normal forms, reduces the number of generated test cases, and improves test case efficiency.
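The general idea of basic terms, normal forms and axiom instantiation can be illustrated with a deliberately simplified Stack example (our own sketch: the signature, the two axioms and the bounded term enumeration are stand-ins for the model-graph construction described above).

```python
# Tiny illustration of algebraic-specification-based test generation:
# build basic terms from the constructors, rewrite them to normal forms
# using the axioms, and check an axiom against an implementation.

def basic_terms(depth, values=(1, 2)):
    """Ground Stack terms built from new/push/pop, up to a nesting depth."""
    terms = {("new",)}
    for _ in range(depth):
        terms |= {("push", t, v) for t in terms for v in values}
        terms |= {("pop", t) for t in terms}
    return terms

def normalize(term):
    """Rewrite with the axioms pop(push(s, x)) = s and pop(new) = new."""
    if term == ("new",):
        return term
    if term[0] == "push":
        return ("push", normalize(term[1]), term[2])
    inner = normalize(term[1])       # term[0] == "pop"
    if inner[0] == "push":
        return inner[1]              # pop(push(s, x)) = s
    return ("new",)                  # pop(new) = new (simplifying assumption)

def run(term):
    """Interpret a Stack term with a Python list as the implementation under test."""
    if term == ("new",):
        return []
    if term[0] == "push":
        return run(term[1]) + [term[2]]
    return run(term[1])[:-1]         # pop

# Test cases: instantiate the axiom top(push(s, x)) = x with normal forms for s.
for s in sorted(normalize(t) for t in basic_terms(2)):
    for x in (1, 2):
        stack = run(("push", s, x))
        assert stack[-1] == x, (s, x)
print("all generated test cases passed")
```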
17.
Derived predicates are the principal way of describing the indirect effects of actions. However, derived-predicate rules (i.e., the domain theory) designed by human experts are not guaranteed to be correct or complete, so it can be difficult to explain why an observed plan is valid. Combining the strengths of inductive learning and analytical learning, this paper proposes an algorithm called FODRL (First-Order Derived Rules Learning), which learns first-order derived-predicate rules from observed plans under the guidance of an imperfect initial domain theory. FODRL is based on the inductive learning algorithm FOIL (First-Order Inductive Learning); its main improvement is the use of activation sets of derived predicates to enlarge the search step, thereby improving the accuracy of the learned rules. Learning proceeds in two steps: training examples are first extracted from the plans, and a set of first-order rules that best fits both the training examples and the initial domain theory is then learned. Experiments in two planning domains with derived predicates, PSR and PROMELA, show that in most cases the rules learned by FODRL are more accurate than those learned by FOIL (and even its variant FOCL).
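For reference, the information-gain heuristic that FOIL ordinarily uses to choose the next body literal L for a partial rule R is shown below (stated here as standard background on FOIL, not as FODRL's own scoring): p_0 and n_0 count the positive and negative bindings covered by R, p_1 and n_1 those covered by R extended with L, and t is the number of positive bindings of R that remain covered after adding L.

```latex
\mathrm{Gain}(L, R) \;=\; t\left(\log_2\frac{p_1}{p_1+n_1} \;-\; \log_2\frac{p_0}{p_0+n_0}\right)
```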
18.
A systematic approach to the development of totally correct iterative programs is investigated for the class of accumulation problems. In these problems, the required output information is usually obtained by accumulation during successive passes over input data structures. The development of iterative programs for accumulation problems is shown to involve successive generalizations of the data domain and the corresponding function specifications. The problem of locating these generalizations is discussed. It is shown that not all function specifications can be realized in terms of terminating computations of a stand-alone iterative program. A linear data domain is defined in terms of decomposition and finiteness axioms, and the property of well behavedness of a loop body over a linear data domain is introduced. It is shown that this property can be used to generate loop body specifications from specifically chosen examples of program behavior. An abstract program for an accumulation problem is developed using these considerations. The role of generalizations as an added parameter to the program development process is discussed.
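A standard miniature example of the generalization step described above (the example is ours, not the paper's): the specification reverse(xs) is generalized with an accumulating parameter to rev_acc(xs, acc) = reverse(xs) + acc, and the generalized specification is exactly what a single loop can compute.

```python
# Standard accumulation example: generalizing the specification reverse(xs)
# to rev_acc(xs, acc) = reverse(xs) + acc admits an iterative, single-pass
# realization of the original specification as rev_acc(xs, []).

def reverse_spec(xs):
    """Original (non-iterative) specification."""
    return list(reversed(xs))

def rev_acc(xs, acc):
    """Generalized specification: reverse(xs) followed by acc."""
    return list(reversed(xs)) + acc

def reverse_iter(xs):
    """Iterative program realizing rev_acc(xs, []) in one pass over xs.
    Each step preserves the invariant rev_acc(remaining, acc) == rev_acc(xs, [])."""
    acc = []
    for x in xs:          # rev_acc([x] + rest, acc) == rev_acc(rest, [x] + acc)
        acc = [x] + acc
    return acc

assert reverse_iter([1, 2, 3]) == reverse_spec([1, 2, 3]) == [3, 2, 1]
```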
19.
The spectrum of a first-order formula is the set of numbers α such that, for a random graph in a binomial model where the edge probability is a power function of the number of graph vertices with exponent −α, the truth probability of this formula does not tend to either zero or one. In 1990 J. Spencer proved that there exists a first-order formula with an infinite spectrum. We have proved that the minimum quantifier depth of a first-order formula with an infinite spectrum is either 4 or 5. In the present paper we find a wide class of first-order formulas of depth 4 with finite spectra and also prove that the minimum quantifier alternation number for a first-order formula with an infinite spectrum is 3.
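Restated in symbols (a direct paraphrase of the definition in the entry, with G(n, p) denoting the binomial random graph on n vertices with edge probability p):

```latex
S(\varphi) \;=\; \bigl\{\, \alpha > 0 \;:\; \Pr\bigl[G(n, n^{-\alpha}) \models \varphi\bigr] \text{ converges neither to } 0 \text{ nor to } 1 \,\bigr\}
```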
20.
Formal specifications of software systems are extremely useful because they can be rigorously analyzed, verified, and validated, giving high confidence that the specification captures the desired behavior. To transfer this confidence to the actual source code implementation, a formal link is needed between the specification and the implementation. Generating the implementation directly from the specification provides one such link. A program transformation system such as Paige's APTS can be useful in developing a source code generator. This paper describes a case study in which APTS was used to produce code generators that construct C source code from a requirements specification in the SCR (Software Cost Reduction) tabular notation. In the study, two different code generation strategies were explored. The first strategy uses rewrite rules to transform the parse tree of an SCR specification into a parse tree for the corresponding C code. The second strategy associates a relation with each node of the specification parse tree. Each member of this relation acts as an attribute, holding the C code corresponding to the tree at the associated node; the root of the tree has the entire C program as its member of the relation. This paper describes the two code generators supported by APTS, how each was used to synthesize code for two example SCR requirements specifications, and what was learned about APTS from these implementations.
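The first strategy can be pictured with a miniature rewrite-style generator (our own sketch: the table format below is a greatly simplified, invented stand-in for SCR notation, and the generator is not APTS): each row of a condition table for one controlled variable is rewritten into a branch of a C if/else chain.

```python
# Miniature sketch of rewriting a simplified, invented SCR-style condition
# table into C source text; the generator and table format are illustrative only.

# Condition table for one controlled variable: (condition, value) pairs.
table = {
    "variable": "safety_injection",
    "type": "int",
    "rows": [
        ("pressure < LOW && !overridden", "ON"),
        ("pressure >= LOW || overridden", "OFF"),
    ],
}

def generate_c(tbl):
    """Rewrite each table row into a branch of an if/else chain in C."""
    lines = ["%s compute_%s(void) {" % (tbl["type"], tbl["variable"])]
    for i, (cond, value) in enumerate(tbl["rows"]):
        keyword = "if" if i == 0 else "} else if"
        lines.append("    %s (%s) {" % (keyword, cond))
        lines.append("        return %s;" % value)
    lines.append("    }")
    lines.append("    return %s;  /* unreachable if the table is complete */" % tbl["rows"][-1][1])
    lines.append("}")
    return "\n".join(lines)

print(generate_c(table))
```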