Similar documents
20 similar documents retrieved.
1.
There are numerous methods of formally defining the semantics of computer languages, each designed to fulfil a different purpose. For example, some have been designed to make reasoning about languages as easy as possible; others have been designed to be accessible to a large audience; and some have been designed to ease the implementation of languages. Given two semantics definitions of a language written using two separate semantics definition methods, we must be able to show that the two are in fact equivalent. If we cannot do this, then either we have an error in one of the semantics definitions or, more seriously, we have a problem with the semantics definition methods themselves. Three methods of defining the semantics of computer languages are considered: Denotational Semantics, Structural Operational Semantics and Action Semantics. An equivalence between these three is shown for a specific example language by first defining its semantics using each of the three definition methods. The proof of the equivalence is then constructed by selecting pairs of the semantics definitions and showing that they define the same language. A full version of this paper can be accessed via our web page http://www.cs.man.ac.uk/fmethods/facj.html
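The flavour of such an equivalence argument can be previewed on a toy scale. The sketch below (a hypothetical illustration, not the paper's example language) gives a tiny expression language both a compositional, denotational-style valuation and a big-step operational evaluator, and checks that the two agree on sample terms; the actual proof in the paper relates full semantic definitions rather than testing them.

```java
// Hypothetical illustration only: a toy expression language given two semantics,
// checked to agree on sample terms (not the example language from the paper).
import java.util.List;

public class SemanticsEquivalence {

    interface Expr {}
    record Num(int value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}
    record Mul(Expr left, Expr right) implements Expr {}

    // Denotational-style semantics: a compositional valuation function.
    static int denote(Expr e) {
        if (e instanceof Num n) return n.value();
        if (e instanceof Add a) return denote(a.left()) + denote(a.right());
        if (e instanceof Mul m) return denote(m.left()) * denote(m.right());
        throw new IllegalArgumentException("unknown expression: " + e);
    }

    // Big-step operational semantics: reduce a term to a value term.
    static Num evalOp(Expr e) {
        if (e instanceof Num n) return n;
        if (e instanceof Add a) return new Num(evalOp(a.left()).value() + evalOp(a.right()).value());
        if (e instanceof Mul m) return new Num(evalOp(m.left()).value() * evalOp(m.right()).value());
        throw new IllegalArgumentException("unknown expression: " + e);
    }

    public static void main(String[] args) {
        List<Expr> samples = List.of(
                new Num(7),
                new Add(new Num(2), new Mul(new Num(3), new Num(4))));
        for (Expr e : samples) {
            int d = denote(e);
            int o = evalOp(e).value();
            System.out.printf("denotational=%d operational=%d agree=%b%n", d, o, d == o);
        }
    }
}
```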

2.
This paper proposes a new style of product line engineering method, focused on constructing embedded systems that take into account contexts such as the external physical environment. In current product line development projects, Feature Analysis is mainly conducted from the viewpoint of system configurations: how hardware and software components are configured to constitute a system. In most cases, contexts are not considered explicitly. As a result, unexpected and unfavorable behavior might emerge in a system if developers do not recognize possible conflicting combinations between the system and its contexts. To deal with this problem, this paper introduces the notion of a context-dependent product line, composed of a system line and a context line. The former is obtained by analyzing a family of systems; the latter by analyzing the features of the contexts associated with those systems. Both lines contain reusable core assets, and the configuration of selected system components and contexts can be formally checked at the specification level. We show a development process that includes the creation of both product line assets and context assets.
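To make the idea of checking system/context combinations concrete, here is a minimal sketch assuming a very simple constraint form: a set of forbidden pairs of system features and context features. The feature names and constraints are invented for illustration; the paper's check operates on its formally specified product line and context assets.

```java
// Hypothetical sketch of a system/context compatibility check for a product line.
// Feature and context names and the constraint set are invented for illustration.
import java.util.Set;
import java.util.stream.Collectors;

public class ContextCheck {

    // A constraint forbidding a system feature from being combined with a context feature.
    record Conflict(String systemFeature, String contextFeature) {}

    static final Set<Conflict> CONFLICTS = Set.of(
            new Conflict("infraredSensor", "directSunlight"),
            new Conflict("passiveCooling", "highAmbientTemperature"));

    // Returns the conflicts violated by a chosen system configuration in a given context.
    static Set<Conflict> violations(Set<String> systemFeatures, Set<String> contextFeatures) {
        return CONFLICTS.stream()
                .filter(c -> systemFeatures.contains(c.systemFeature())
                          && contextFeatures.contains(c.contextFeature()))
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<String> system = Set.of("infraredSensor", "activeCooling");
        Set<String> context = Set.of("directSunlight", "highAmbientTemperature");
        System.out.println("violated constraints: " + violations(system, context));
    }
}
```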

3.
Business rules give rise to an important set of requirements on any system being developed or procured for an enterprise. While most of the work done in this area focuses on identifying and documenting business rules, we have proposed a methodology that addresses several aspects of the business rules lifecycle: acquisition, deployment and evolution. The methodology assumes that business rules are expressed in terms of business concepts and corporate knowledge captured in a high-level architecture. The proposed architecture consists of three interconnected components: the enterprise model, the business rules model and the decision support model. This approach permits a greater variety of rules to be specified while providing an opportunity to automate the production of deployable business rules. The ability to deal with inconsistent and ambiguous rules is crucial in capturing the conflicting requirements placed on the operation of any large-scale enterprise. This paper presents a flexible deployment of business rules that supports not only decision making in the face of conflicting requirements but also the evolution of those requirements in the face of changing regulatory environments, competitive markets and corporate goals.

4.
This paper presents the formal definition of TOMAL (Task-Oriented Microprocessor Applications Language), a programming language intended for real-time systems running on small processors. The formal definition addresses all aspects of the language. Because some modes of semantic definition seem particularly well suited to certain aspects of a language and less suitable for others, the formal definition employs several complementary modes of definition. The primary definition is axiomatic and is employed to define most statements of the language. Simple, denotational (but not lattice-theoretic) semantics complement the axiomatic semantics to define type-related features, such as binding of names to types, data type coercions, and evaluation of expressions. Together, the axiomatic and denotational semantics define all features of the sequential language. An operational definition is used to define real-time execution, and to extend the axiomatic definition to account for all aspects of concurrent execution. Semantic constraints, sufficient to guarantee conformity of a program with the axiomatic definition, can be checked by analysis of a TOMAL program at compilation.
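As a reminder of what the axiomatic mode of definition looks like, the rules below are the standard Hoare-style axioms for assignment, sequencing and iteration; they illustrate the style only and are not TOMAL's actual axioms.

```latex
% Standard Hoare-style rules, shown only to illustrate the axiomatic mode of
% definition; these are not TOMAL's actual axioms.
\[
\frac{}{\{P[E/x]\}\; x := E\; \{P\}}
\qquad
\frac{\{P\}\,S_1\,\{Q\} \quad \{Q\}\,S_2\,\{R\}}{\{P\}\,S_1;\,S_2\,\{R\}}
\qquad
\frac{\{P \land B\}\,S\,\{P\}}{\{P\}\,\textbf{while } B \textbf{ do } S\,\{P \land \lnot B\}}
\]
```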

5.
The potential benefits of using formal methods in the design of software are discussed. Concepts are illustrated by several small examples, with the objective of helping to bridge the gap between theory and practice. The paper introduces and explains some of the terminology, symbols and notation of the discrete mathematics used in the formal methods literature, to assist the reader in further study.
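The following lines show the kind of notation such a tutorial typically introduces: a set comprehension, quantifiers, and a pre/postcondition specification (here, of an integer square root). The examples are standard discrete mathematics rather than material reproduced from the paper.

```latex
% Examples of the kind of notation typically introduced in such tutorials
% (standard discrete mathematics, not reproduced from the paper).
\[
S = \{\, n \in \mathbb{N} \mid n \bmod 2 = 0 \,\}, \qquad
\forall x \in S \,\exists y \in S \,.\; y > x
\]
\[
\textit{pre: } x \geq 0 \qquad
\textit{post: } r^2 \leq x \,\land\, (r+1)^2 > x
\]
```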

6.
In this paper, we describe a quality-directed perspective on the lifecycle process of designing and assembling communications systems and services. We claim this perspective addresses some of the industrial concerns of quality and productivity in the protocol engineering process, while allowing for some of the best formal techniques known for protocol synthesis, verification, conformance testing and performance assessment. We hope that this perspective will assist in the development of a generic conceptual framework that enables the evolution, integration and practical application of protocol engineering models, methods, languages and tools.

7.
Towards a formal framework for software reuse
It is reasonable to expect that the use of formal methods in software reuse will help improve the practice of this discipline as well as enhance our understanding of its products and processes. We have identified the following technical activities that take place in software reuse as candidates for formal modeling: representing reusable assets, representing reuse queries, defining matching criteria, defining a storage structure, deriving measures of distance, and deriving a calculus of program modification. In this paper we discuss how a simple mathematical model based on set theory and relation theory allows us to capture these activities in a unified, coherent framework.
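As a rough illustration of the relational view, the sketch below represents both reusable assets and reuse queries as finite relations from inputs to allowed outputs, and uses one plausible refinement-style matching criterion: a candidate matches a query if, on every input the query constrains, the candidate is defined and produces only outputs the query allows. The criterion and data are invented for illustration and differ in detail from the paper's definitions.

```java
// Hypothetical sketch: reusable assets and queries as relations (input -> allowed outputs),
// with a refinement-style matching criterion. The criterion shown is an illustrative
// choice, not the exact definition used in the paper.
import java.util.Map;
import java.util.Set;

public class RelationalMatch {

    // A specification as a finite relation from inputs to the set of acceptable outputs.
    record Spec(Map<Integer, Set<Integer>> relation) {

        // candidate matches this query if, on every input the query constrains,
        // the candidate is defined and only produces outputs the query allows.
        boolean isMatchedBy(Spec candidate) {
            return relation.entrySet().stream().allMatch(entry -> {
                Set<Integer> allowed = entry.getValue();
                Set<Integer> produced = candidate.relation().get(entry.getKey());
                return produced != null && allowed.containsAll(produced);
            });
        }
    }

    public static void main(String[] args) {
        Spec query = new Spec(Map.of(2, Set.of(4), 3, Set.of(9)));      // wants squaring on {2, 3}
        Spec asset = new Spec(Map.of(1, Set.of(1), 2, Set.of(4), 3, Set.of(9)));
        System.out.println("asset satisfies query: " + query.isMatchedBy(asset));
    }
}
```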

8.
Miquel Bertran-Salvans. Software, 1988, 18(11): 1029-1045
Dimensional design (DD) is a simple, practical and systematic layout technique for the display of programs, specifications, expressions and other artefacts, intended for application across the general area of software. Formalizations of DD are introduced, the main one being algebraic, and the use of DD in real software projects is outlined; one such project is a software system for the telecontrol centre of a power network. The formal definitions of DD presented here arose during the design of a syntax-driven editor generator for languages whose ‘phrases’ are DDs. Grammars for the definition of such languages are introduced in the paper. The varied use of DD within the generator design is examined; grammatical, functional and algebraic notations in particular are considered. The samples of these DD representations illustrate the enhanced readability achieved and the suitability of DD for use in specification work in general.

9.
The XYZ system is a CASE tool system based on the temporal logic language XYZ/E, which can represent every essential feature of conventional HLLs (sequential or concurrent), specifications at different levels, production rules, and the operational semantics of graphical languages in a uniform framework. With this formal language as the common basis, all the CASE tools, including various graphical tools for distributed processes, concurrent programs with phased memory and sequential programs, as well as tools for verification, rapid prototyping, language transformation and module management, can be connected freely to form more sophisticated and integrated systems.

10.
Background: The measurement of Function Points is based on Base Functional Components. The process of identifying and weighting Base Functional Components is hardly automatable, due to the informality of both the Function Point method and the requirements documents being measured, so Function Point measurement generally requires a lengthy and costly process. Objectives: We investigate whether it is possible to take into account only subsets of Base Functional Components so as to obtain functional size measures that simplify Function Points while retaining the same effort estimation accuracy as the original Function Points measure. Simplifying the definition of Function Points would reduce measurement costs and may help spread the adoption of this type of measurement practice. Specifically, we empirically investigate the following issues: whether available data provide evidence that simplified software functionality measures can be defined in a way that is consistent with Function Point Analysis; whether simplified functional size measures by themselves can be used without any appreciable loss in software development effort prediction accuracy; and whether simplified functional size measures can be used as software development effort predictors in models that also use other software requirements measures. Method: We analyze the relationships between Function Points and their Base Functional Components, as well as the relationships between Base Functional Components and development effort. Finally, we build effort prediction models that contain both the simplified functional measures and additional requirements measures. Results: Significant statistical models correlate Function Points with Base Functional Components. Base Functional Components can be used to build models of effort that are equivalent, in terms of accuracy, to those based on Function Points. Finally, simplified Function Points measures can be used as software development effort predictors in models that also use other requirements measures. Conclusion: The definition and measurement processes of Function Points can be dramatically simplified by taking into account a subset of the Base Functional Components used in the original definition of the measure, thus allowing for substantial savings in measurement effort without sacrificing the accuracy of software development effort estimates.
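A minimal sketch of the kind of simplified model considered: fit effort = a * size^b by ordinary least squares on log-transformed data, where size is a single Base Functional Component count rather than full Function Points. The data points and the log-log model form are assumptions for illustration, not the paper's data or exact models.

```java
// Hypothetical sketch: fitting effort = a * size^b (log-log linear regression) where
// "size" is a single Base Functional Component count instead of full Function Points.
// The data points and the model form are invented for illustration.
public class SimplifiedFpModel {

    // Ordinary least squares on log-transformed data: ln(effort) = lnA + b * ln(size).
    static double[] fitLogLog(double[] size, double[] effort) {
        int n = size.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            double x = Math.log(size[i]), y = Math.log(effort[i]);
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double lnA = (sy - b * sx) / n;
        return new double[] { Math.exp(lnA), b };
    }

    public static void main(String[] args) {
        double[] transactions = { 20, 35, 50, 80, 120 };     // e.g., counts of one BFC type
        double[] personHours = { 300, 520, 700, 1150, 1800 };
        double[] model = fitLogLog(transactions, personHours);
        System.out.printf("effort ~ %.1f * size^%.2f%n", model[0], model[1]);
        System.out.printf("prediction for size 60: %.0f person-hours%n",
                model[0] * Math.pow(60, model[1]));
    }
}
```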

11.
In this paper we present some ideas about how to formally relate various uncertainty representations in a taxonomic structure, capturing both syntactic and semantic generalization. Fuzziness and nonspecificity are taken as primitive concepts of uncertainty, and transitive and intransitive methods operating with nonspecificity and fuzziness are introduced to generate a base class of hybrid uncertainty representational forms. Additive, maximal and interval constraints then complete the characterization of the most important hybrid forms.

12.
13.
Results on the use of methodologies and CASE tools from a survey of Norwegian organizations are presented. The results are based on responses from 52 Norwegian organizations to a survey on development and maintenance. Although there appears to be a trend towards the use of more packaged solutions, the investigation indicates a larger proportion of application systems being developed as customized systems in larger organizations. The presence of a comprehensive development and maintenance methodology and the use of CASE tools are also more prominent in larger organizations, and larger organizations spend a statistically significantly smaller share of their effort on functional maintenance. Even so, the impact of CASE tools on the information systems portfolios of Norwegian organizations is not yet large, and improvements in functional maintenance cannot be attributed to the use of CASE. A notably different perception of the benefit of CASE technology for productivity was observed between users and non-users of CASE, but the difference was not found to be statistically significant.

14.
An onboard spacecraft computing system is a case of a functionally distributed system that requires continuous interaction among the nodes to control the operations at different nodes. A simple and reliable protocol is desired for such an application. This paper discusses a formal approach to specifying the computing system with respect to some important issues encountered in the design and development of a protocol for the onboard distributed system. The issues considered in this paper are concurrency, exclusiveness and sequencing relationships among the various processes at different nodes. A 6-tuple model is developed for the precise specification of the system. The model also enables us to check the consistency of the specification and to detect deadlock caused by improper specification. An example is given to illustrate the use of the proposed methodology for a typical spacecraft configuration. Although the theory is motivated by a specific application, it may be applied to other distributed computing systems, such as those encountered in process control industries, power plant control and similar environments.
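Since the abstract does not spell out the components of the 6-tuple, the sketch below invents a minimal structure for one of the checks it mentions: representing the sequencing relationship among processes as a directed graph and flagging circular sequencing, a simple deadlock indicator. Process names and relations are hypothetical.

```java
// Hypothetical sketch: checking a sequencing relation among processes for cycles,
// one simple consistency/deadlock indicator. The abstract does not spell out the
// paper's 6-tuple, so this structure is invented for illustration.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SequencingCheck {

    // mustPrecede.get(p) = processes that may start only after p has finished.
    static boolean hasCycle(Map<String, Set<String>> mustPrecede) {
        Set<String> done = new HashSet<>();
        Set<String> onPath = new HashSet<>();
        for (String p : mustPrecede.keySet()) {
            if (dfs(p, mustPrecede, done, onPath)) return true;
        }
        return false;
    }

    static boolean dfs(String p, Map<String, Set<String>> edges, Set<String> done, Set<String> onPath) {
        if (onPath.contains(p)) return true;      // back edge: circular wait
        if (done.contains(p)) return false;
        onPath.add(p);
        for (String next : edges.getOrDefault(p, Set.of())) {
            if (dfs(next, edges, done, onPath)) return true;
        }
        onPath.remove(p);
        done.add(p);
        return false;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> seq = new HashMap<>();
        seq.put("attitudeControl", Set.of("telemetryFormat"));
        seq.put("telemetryFormat", Set.of("downlink"));
        seq.put("downlink", Set.of("attitudeControl"));   // introduces a circular wait
        System.out.println("specification has circular sequencing: " + hasCycle(seq));
    }
}
```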

15.
The paper outlines the approach to the analysis of deontic conditionals taken in earlier work by Jones and Pörn, comparing it briefly with two main trends within dyadic deontic logic, and discussing problems associated with the augmentation principle and the factual detachment principle. A modification of the Jones and Pörn system is then presented, using a classical but not normal (in the sense of Chellas) deontic modality, to provide the basis for an alternative analysis of deontic conditionals. This new analysis validates neither the factual detachment principle nor the augmentation principle. However, influenced by Delgrande's approach to default reasoning, it is shown how a restricted form of factual detachment might be accommodated within the revised system.

16.
Annotations are a new programming language construct introduced in Java SE 5 and have attracted wide attention. From the perspective of visual modelling, however, the existing Java metamodel and the UML2 metamodel do not support annotations, so the semantic characteristics of annotations are difficult to express at higher levels of abstraction, and visual, standardized modelling of annotations is not possible. This paper extends the existing Java metamodel to support visual modelling of annotations. Four important characteristics of annotations are identified, three metaclasses are added to extend the Java metamodel so that it reflects these characteristics, and two graphical symbols are provided to support visual modelling of annotations. The new Java metamodel conforms to the MOF specification, has a simple design, and is easy to implement in support of MDA and related tool development.
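For readers unfamiliar with the construct being modelled, the following standard Java example defines an annotation with its meta-annotations (target, retention), members with a default, and a reflective read-out: exactly the kinds of characteristics a metamodel extension must capture. The annotation itself is invented for illustration.

```java
// A standard Java annotation definition and reflective read-out, illustrating the
// characteristics (target, retention, members) a metamodel extension must capture.
// The annotation itself is invented for illustration.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDemo {

    @Target(ElementType.METHOD)               // where the annotation may appear
    @Retention(RetentionPolicy.RUNTIME)       // how long it is retained
    @interface Audited {
        String reviewer();                    // annotation members
        int level() default 1;
    }

    @Audited(reviewer = "qa-team", level = 2)
    void deploy() {}

    public static void main(String[] args) throws NoSuchMethodException {
        Audited a = AnnotationDemo.class.getDeclaredMethod("deploy").getAnnotation(Audited.class);
        System.out.println("reviewer=" + a.reviewer() + " level=" + a.level());
    }
}
```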

17.
Standard methods as such are not normally used for information system development; the particular circumstances of each project make it necessary to adapt the methods to the situation at hand. This is the concern of situational method engineering, where the term situational method refers to a method tailored to the needs of a particular development setting. Situational method engineering prescribes that this method customization be performed within the framework of a meta-modelling technique equipped with mechanisms to manipulate methods (or fragments of them) for their modification, integration, adaptation or evolution. As a first step towards the definition of a situational method engineering technique, in this paper we propose the Noesis meta-modelling technique together with a complete and minimal family of transformations. The Noesis technique allows recursive and decompositional structures to be captured in the meta-models (a requirement demanded of meta-modelling techniques) and situational methods to be obtained by the assembly of method fragments. In addition, the family of transformations allows method fragment customization processes to be accomplished. The main contribution of this paper is the definition of this family and the proof of its completeness and minimality (an important open issue with respect to the customization of method fragments), the Noesis technique being the scaffolding needed to show this.

18.
19.
UML notations require adaptation for applications such as Information Systems (IS); thus we have defined IS-UML. The purpose of this article is twofold. First, we propose an extension to this language to deal with the functional aspects of IS. We use two views to specify IS transactions: the first is defined as a combination of behavioural UML diagrams (collaboration and state diagrams), and the second is based on the definition of specific classes of an extended class diagram. Second, we consider consistency issues between the various diagrams of an IS-UML specification. In common with other UML languages, we use a metamodel to define IS-UML. We use class diagrams to summarize the metamodel structure and a formal language, B, for the full metamodel. This allows us to formally express consistency checks and mapping rules between specific metamodel concepts.

20.
Context: Formal methods, and particularly formal verification, are becoming more feasible to use in the engineering of large, highly dependable software-based systems, but so far have had little rigorous empirical study. Their artefacts and activities differ from those of conventional software engineering, and the nature and drivers of productivity for formal methods are not yet understood. Objective: To develop a research agenda for the empirical study of productivity in software projects using formal methods, and in particular formal verification. To this end we aim to identify research questions about productivity in formal methods and to survey the existing literature on these questions to establish their face validity; we further aim to identify metrics and data sources relevant to these questions. Method: We define a space of GQM goals as an investigative framework, focusing on productivity from the perspective of managers of projects using formal methods. We then derive questions for these goals using Easterbrook et al.'s (2008) taxonomy of research questions. To establish face validity, we document the literature to date that reflects on these questions and then explore possible metrics related to them. Extensive use is made of literature concerning the L4.verified project completed within NICTA, as it is one of the few projects to achieve code-level formal verification for a large-scale, industrially deployed software system. Results: We identify more than thirty research questions on the topic in need of investigation. These questions arise not just from the new type of project context, but also from the different artefacts and activities in formal methods projects. Prior literature supports the need for research on the questions in our catalogue, but as yet provides little evidence about them. Metrics are identified that would be needed to investigate the questions. Although at the highest level concepts such as size, effort and rework are common to all software projects, in the case of formal methods, measurement of these concepts at the micro level will exhibit significant differences. Conclusions: Empirical software engineering for formal methods is a large open research field. For the empirical software engineering community, our paper provides a view into the entities and research questions in this domain. For the formal methods community, we identify some of the benefits that empirical studies could bring to the effective management of large formal methods projects, and list some basic metrics and data sources that could support empirical studies. Understanding productivity is important in its own right for efficient software engineering practice, but can also support future research on the cost-effectiveness of formal methods and on the emerging field of Proof Engineering.
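A minimal sketch of how a GQM goal might be instantiated for such a project is shown below. The goal, question and candidate metrics are examples in the spirit of the paper, not items taken from its actual catalogue of research questions.

```java
// Illustrative GQM instantiation for a formal-verification project. The goal,
// question and metrics shown are examples only, not taken from the paper's catalogue.
import java.util.List;

public class GqmExample {

    record Goal(String object, String purpose, String focus, String viewpoint) {}
    record Question(Goal goal, String text, List<String> candidateMetrics) {}

    public static void main(String[] args) {
        Goal g = new Goal("proof development", "understand", "productivity", "project manager");
        Question q = new Question(g,
                "How does proof effort scale with the size of the verified code base?",
                List.of("lines of proof script per person-week",
                        "lines of verified source code",
                        "rework: proof lines changed after a specification change"));
        System.out.println(g);
        System.out.println(q.text() + " -> metrics: " + q.candidateMetrics());
    }
}
```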
