Similar Documents
10 similar documents found; search time: 144 ms.
1.
A powerful methodology for scenario-based specification of reactive systems is described, in which the behavior is played in directly from the system's GUI or some abstract version thereof, and can then be played out. The approach is supported and illustrated by a tool, which we call the play-engine. As the behavior is played in, the play-engine automatically generates a formal version in an extended version of the language of live sequence charts (LSCs). As the behavior is played out, the engine causes the application to react according to the universal (must) parts of the specification; the existential (may) parts can be monitored to check their successful completion. Play-in is a user-friendly, high-level way of specifying behavior, and play-out is a rather surprising way of working with a fully operational system directly from its inter-object requirements. The ideas appear to be relevant to many stages of system development, including requirements engineering, specification, testing, analysis and implementation.

2.
Broadcast MSCs     
Message sequence charts (MSCs) have proven to be a useful modeling technique, especially within the requirements analysis phase of software development. MSCs, however, do not support the concept of broadcast communication, which is frequently used in technical applications. In this paper, we present an extension to MSCs for the modeling of broadcast interaction scenarios. Based on the mathematical framework of timed streams, we also introduce a semantics for broadcast MSCs. We thoroughly discuss methodological benefits and semantic properties of this approach, consider alternative solutions, and address its scalability with respect to complex real-time systems applications.
Our research was supported, in part, by the DFG within the priority program SoftSpez (SPP 1064) under project name InTime, and by the California Institute for Telecommunications and Information Technology (CAL-(IT)2).
Received October 2002. Accepted in revised form November 2003 by M. Broy, G. Lüttgen and M. Mendler.

3.
We consider a generalized form of the conventional decentralized control architecture for discrete-event systems where the control actions of a set of supervisors can be fused using both union and intersection of enabled events. Namely, the supervisors agree a priori on choosing fusion by union for certain controllable events and fusion by intersection for certain other controllable events. We show that under this architecture, a larger class of languages can be achieved than before, since a relaxed version of the notion of co-observability appears in the necessary and sufficient conditions for the existence of supervisors. The computational complexity of verifying these new conditions is studied. A method of partitioning the controllable events between fusion by union and fusion by intersection is presented. The algebraic properties of co-observability in the context of this architecture are presented. We show that appropriate combinations of fusion rules with corresponding decoupled local decision rules guarantee the safety of the closed-loop behavior with respect to a given specification that is not co-observable. We characterize an optimal combination of fusion rules among those combinations guaranteeing the safety of the closed-loop behavior. In addition, a simple supervisor synthesis technique generating the infimal prefix-closed controllable and co-observable superlanguage is presented.
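The fusion rules described in this abstract can be illustrated with a minimal sketch (not the paper's construction): each local supervisor proposes a set of enabled controllable events, and the global decision enables an event from the "union" partition if any supervisor enables it, and an event from the "intersection" partition only if all supervisors do. The partition names `e_union` and `e_inter` are illustrative.

```python
# Minimal sketch of fusing local supervisor decisions in a decentralized
# architecture: events in e_union use fusion by union (ANY supervisor may
# enable), events in e_inter use fusion by intersection (ALL must enable).

def fuse(local_enabled, e_union, e_inter):
    """local_enabled: one set of enabled controllable events per supervisor.
    Returns the globally enabled controllable events."""
    by_union = set.union(*local_enabled) & e_union
    by_inter = set.intersection(*local_enabled) & e_inter
    return by_union | by_inter

# Two supervisors with a pre-agreed partition of the fusion rules:
s1 = {"a", "b"}
s2 = {"b", "c"}
print(fuse([s1, s2], e_union={"a", "c"}, e_inter={"b"}))
# {'a', 'b', 'c'}: a and c pass by union, b passes by intersection
```

Choosing the partition well is exactly the design question the paper studies: fusion by union is permissive (good when at least one supervisor can observe enough to enable safely), while fusion by intersection is conservative (good when any uncertain supervisor should be able to veto).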

4.
5.
We analyze two-scale Finite Element Methods for the numerical solution of elliptic homogenization problems with coefficients oscillating at a small length scale ε ≪ 1. Based on a refined two-scale regularity of the solutions, two-scale tensor product FE spaces are introduced and error estimates which are robust (i.e., independent of ε) are given. We show that under additional two-scale regularity assumptions on the solution, resolution of the fine scale is possible with substantially fewer degrees of freedom, and the two-scale full tensor product spaces can be thinned out by means of sparse interpolation while preserving the error estimates.
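For orientation, the standard model problem behind abstracts of this kind can be sketched as follows (a generic homogenization setting, assumed rather than quoted from the paper):

```latex
% Elliptic model problem with coefficients oscillating at scale \varepsilon \ll 1:
-\nabla\cdot\bigl(a(x/\varepsilon)\,\nabla u^{\varepsilon}(x)\bigr) = f(x)
  \quad\text{in } \Omega, \qquad u^{\varepsilon} = 0 \ \text{on } \partial\Omega.
% Two-scale ansatz underlying two-scale FE spaces: a slow variable x and a
% fast periodic variable y = x/\varepsilon,
u^{\varepsilon}(x) \;\approx\; u_0(x) \;+\; \varepsilon\, u_1\bigl(x,\, x/\varepsilon\bigr),
  \quad u_1(x,\cdot)\ \text{periodic in } y.
```

Discretizing the slow and fast variables with a tensor product of FE spaces, and then sparsifying that product, is what allows ε-robust estimates with far fewer degrees of freedom.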

6.
Semantics of context-free languages
Meaning may be assigned to a string in a context-free language by defining attributes of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are synthesized, i.e., defined solely in terms of attributes of the descendants of the corresponding nonterminal symbol, while other attributes are inherited, i.e., defined in terms of attributes of the ancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature.
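The synthesized/inherited distinction can be made concrete with a tiny sketch in the spirit of the paper's classic binary-numeral example (a simplified illustration, not the paper's exact formalism): each digit receives an inherited `scale` attribute from its ancestors, and the numeral's value is synthesized from its descendants.

```python
# Attribute evaluation on a (flattened) derivation tree for binary numerals.
# `scale` is an INHERITED attribute, passed down from ancestors;
# the returned value is a SYNTHESIZED attribute, built up from descendants.

def value(digits, scale):
    """digits: list of 0/1 symbols; scale: inherited power of two for the
    leftmost digit. Returns the synthesized numeric value."""
    if not digits:
        return 0
    head, *tail = digits
    # head contributes via the inherited scale; the tail inherits scale - 1
    return head * 2 ** scale + value(tail, scale - 1)

# "1101" with the leftmost digit at scale 3: 8 + 4 + 0 + 1
print(value([1, 1, 0, 1], scale=3))  # 13
```

The circularity check the abstract mentions matters precisely because inherited attributes flow downward while synthesized ones flow upward: a careless grammar can define an attribute in terms of itself through such a cycle.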

7.
The language of standard propositional modal logic has one operator (□ or ◇) that can be thought of as being determined by the quantifiers ∀ or ∃, respectively: for example, a formula of the form □φ is true at a point s just in case all the immediate successors of s verify φ.
This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of ∀ and ∃.
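The move from □ to a generalized-quantifier operator can be sketched in a few lines (an illustrative model checker, not the paper's formal apparatus): the standard box tests whether all successors satisfy φ, and a generalized quantifier replaces "all" with an arbitrary test on the counts of satisfying versus total successors, such as "most".

```python
# Evaluating a modal operator at point s of a Kripke-style model.
# The default quantifier is the standard box ("all successors satisfy phi");
# passing a different `quant` yields a generalized-quantifier operator.

def box(succ, s, phi, quant=lambda good, total: good == total):
    """succ: maps a point to its list of successors.
    quant: receives (#successors satisfying phi, #successors)."""
    ts = succ(s)
    good = sum(1 for t in ts if phi(t))
    return quant(good, len(ts))

succ = {1: [2, 3, 4], 2: []}.get
even = lambda n: n % 2 == 0
print(box(succ, 1, even))                                # False: 3 fails "all"
print(box(succ, 1, even, quant=lambda g, t: 2 * g > t))  # True under "most"
```

Conservativity and extension, in this setting, are conditions on `quant` alone; the abstract's point is that they pin down exactly the generated-submodel invariance familiar from standard modal logic.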

8.
The specification of a function is often given by a logical formula, called a ∀∃-formula, of the following form: ∀x∃y Φ(x,y). More precisely, a specification is given in the context of a certain theory E and is stated by the judgment E ⊢ ∀x∃y Φ(x,y).
In this paper, we consider the case in which E is an equational theory. The paper is divided into two parts. In the first part, we develop a theory for the automated proof of such judgments in the initial model of E. Validity in the initial model means that we consider not only equational theorems but also inductive ones. From our theory we deduce an automated method for the proof of a class of such judgments. In the second part, we present an automated method for program synthesis. We show how the previous proof method can be used to generate a recursive program for a function f that satisfies a judgment E ⊢ ∀x Φ(x, f(x)).
We illustrate our method with the automated synthesis of some recursive programs on domains such as integers and lists. Finally, we describe our system LEMMA, which is an implementation in Common Lisp of these new methods.

9.
Ward Elliott (from 1987) and Robert Valenza (from 1989) set out to find the true Shakespeare from among 37 anti-Stratfordian Claimants. As directors of the Claremont Shakespeare Authorship Clinic, Elliott and Valenza developed novel attributional tests, from which they concluded that most Claimants are not-Shakespeare. From 1990-4, Elliott and Valenza developed tests purporting further to reject much of the Shakespeare canon as not-Shakespeare (1996a). Foster (1996b) details extensive and persistent flaws in the Clinic's work: data were collected haphazardly; canonical and comparative text-samples were chronologically mismatched; procedural controls for genre, stanzaic structure, and date were lacking. Elliott and Valenza counter by estimating maximum erosion of the Clinic's findings to include "five of our 54 tests, which can amount, at most, to half of one percent" (1998). This essay provides a brief history, showing why the Clinic foundered. Examining several of the Clinic's representative tests, I evaluate claims that Elliott and Valenza continue to make for their methodology. A final section addresses doubts about accuracy, validity and replicability that have dogged the Clinic's work from the outset.

10.
We analyze four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type (or part of speech) information, sentence length in terms of words, word length in text, and word length in vocabulary. For analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators of each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multiple analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
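Two of the style markers named in this abstract, most frequent words and sentence length in words, are straightforward to compute per text block; the sketch below (an illustration, not the authors' pipeline) shows the counting step that would precede the discriminant analysis.

```python
# Computing two stylometric markers on a block of text:
# top word frequencies and per-sentence lengths in words.
import re
from collections import Counter

def style_markers(text, top_n=3):
    """Return (most frequent words with counts, sentence lengths in words)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    sent_lengths = [len(re.findall(r"[a-zA-Z']+", s)) for s in sentences]
    return Counter(words).most_common(top_n), sent_lengths

text = "The sea was calm. The wind rose and the sea answered."
freq, lengths = style_markers(text)
print(freq)     # [('the', 3), ('sea', 2), ('was', 1)]
print(lengths)  # [4, 7]
```

In the study's setup, each 5000-word block yields one such feature vector, and discriminant analysis then tests how well the blocks separate by volume.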

