A total of 20 similar documents were found (search time: 15 ms).
1.
Since their introduction, formal methods have been applied in various ways to different standards. This paper gives an account of these applications, focusing on one application in particular: the development of a framework for creating standards for Open Distributed Processing (ODP). Following an introduction to ODP, the paper gives an insight into the current work on formalising the architecture of the Reference Model of ODP (RM-ODP), highlighting the advantages to be gained. The different approaches currently being taken are shown, together with their associated advantages and disadvantages. The paper concludes that there is no one all-purpose approach which can be used in preference to all others, but that a combination of approaches is desirable to best fulfil the potential of formal methods in developing an architectural semantics for ODP.
2.
Viewpoints in ODP Systems and Viewpoint Specification Languages (total citations: 1; self-citations: 0; citations by others: 1)
The goal of Open Distributed Processing (ODP) is to provide a consistent interface model between applications in order to achieve distribution transparency, interoperability and portability. The ISO Reference Model of Open Distributed Processing (RM-ODP) provides a framework for distributed systems that describes an ODP system from five different viewpoints and defines the concepts and construction rules of a corresponding specification language for describing each viewpoint. This paper first introduces the five viewpoints and the concepts and construction rules of the corresponding specification languages, and then discusses the relationships among them.
3.
4.
Howard Bowman, New Generation Computing, 1998, 16(4): 343-372
The majority of formal methods for distributed systems have their origins in the 1980s and were targeted at the early generations of distributed systems. However, modern distributed systems have new features not found in the early systems, e.g. they are object-oriented, have mobile components, are time sensitive and are constructed according to advanced system development architectures, e.g. viewpoints models. A major topic of current research is thus how to enhance the existing formal techniques in order to support these new features. This paper gives a tutorial-level review of this research area. We particularly focus on the process algebra LOTOS and consider how the technique can be reconciled with these new features.
Howard Bowman, Ph.D.: He is a lecturer in the Computing Laboratory at the University of Kent at Canterbury. He received his Ph.D. from Lancaster University in 1991. His research focuses on applying formal techniques to the construction of distributed systems, and he is a grant holder for a number of projects in this area. He is on the editorial board of the journal New Generation Computing and on the programme committees of a number of conferences, including FORTE/PSTV. He was the programme co-chair of FMOODS’97, the IFIP conference on Formal Methods for Open Object Based Distributed Systems.
5.
ODAC: An Agent-Oriented Methodology Based on ODP (total citations: 2; self-citations: 0; citations by others: 2)
Marie-Pierre Gervais, Autonomous Agents and Multi-Agent Systems, 2003, 7(3): 199-228
The ODAC methodology (Open Distributed Applications Construction) aims to provide a designer of agent-based systems with a set of methods and tools that allow him or her to control the complexity of the construction process of such systems. It enables software engineers to specify agent-based systems that will be implemented within an execution environment, for example a mobile agent platform. Our work is based on the Open Distributed Processing (ODP) standards and the agent paradigm.
6.
The importance of formalising the specification of standards has been recognised for a number of years. This paper advocates the use of the formal specification language Object-Z in the definition of standards. Object-Z is an extension to the Z language specifically to facilitate specification in an object-oriented style. First, the syntax and semantics of Object-Z are described informally. Then the use of Object-Z in formalising standards is demonstrated by presenting a case study based on the ODP Trader. Finally, a formal semantics is introduced that suggests an approach to the standardisation of Object-Z itself. Because standards are typically large complex systems, the extra structuring afforded by the Object-Z class construct and operation expressions enables the various hierarchical relationships and the communication between objects in a system to be succinctly specified.
7.
8.
A new measure of consistency for positive reciprocal matrices (total citations: 4; self-citations: 0; citations by others: 4)
The analytic hierarchy process (AHP) provides a decision maker with a way of examining the consistency of entries in a pairwise comparison matrix and the hierarchy as a whole through the consistency ratio measure. It has always seemed to us that this commonly used measure could be improved upon. The purpose of this paper is to present an alternative consistency measure and demonstrate how it might be applied in different types of matrices.
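The abstract does not reproduce the proposed alternative measure, but the baseline it targets is Saaty's consistency ratio CR = CI/RI with CI = (lambda_max - n)/(n - 1). The following minimal Python sketch, an illustration rather than anything taken from the paper (the function name, random-index table and example matrix are assumptions), computes that standard ratio:

```python
# Hedged sketch of the standard AHP consistency ratio (the measure the paper
# seeks to improve on), not the paper's alternative measure.
import numpy as np

# Saaty's random consistency indices for matrix orders 3..10
# (CR is normally only computed for n >= 3).
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
                7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(matrix):
    """matrix: an n x n positive reciprocal pairwise-comparison matrix, n >= 3."""
    A = np.asarray(matrix, dtype=float)
    n = A.shape[0]
    lambda_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lambda_max - n) / (n - 1)               # consistency index
    return ci / RANDOM_INDEX[n]                   # consistency ratio

# Example: a slightly inconsistent 3 x 3 comparison matrix.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(round(consistency_ratio(A), 3))  # small value, well below the usual 0.10 threshold
```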
9.
10.
11.
12.
13.
Standards come in many different forms, to fulfil many different purposes. In general, however, in a fast changing field, a standard, whether de facto or de jure, emerges and survives if it offers some basis for effective but constrained development, reducing uncertainty and risk [1]. In the field of Object Orientation (hereafter OO) the OMG has sought to control change and variety through numerous standard-like and other consensus-building activities. This has proved difficult, given the time needed to establish consensus and the immediate and pressing demands of the market. The idea of a conceptual core model was proposed early on in OO development, and the OMG has sought to establish it at the heart of its programme and perspective. In recent years, however, the development of models such as CORBA, and a host of other extensions, has far outstripped the original core. Rather than jettisoning the core object model, the OMG Object Model Subcommittee is now seeking a revision and more rigorous restatement of the key concepts so that future OO innovations and extensions can be interrelated and reconciled through an agreed and unambiguous standard. Our paper establishes the background to this project, and explains the rationale and benefits of this use of formal notations in standardization.
14.
Claudia Popien, Axel Kuepper, Bernd Meyer, Journal of Network and Systems Management, 1994, 2(4): 383-400
New requirements arising from growing computer networks and information systems influence extended client/server models with increased functionality. This forms the basis for service management in distributed systems, which is realized by a service trading concept. This paper studies the requirements derived from the Open Distributed Processing (ODP) Reference Model in order to consider an open service market. Furthermore, it examines management possibilities for describing the service trading scenario. Because of their similar architectures and properties, ODP services, service offers, types, exporters and traders are mapped onto management components and modeled as managed objects. For this purpose, the Guidelines for the Definition of Managed Objects (GDMO) are used. The final concept allows a precise and unambiguous study of the service trading scenario and provides means for exporting and importing service offers in a distributed environment.
15.
In this paper two domain decomposition formulations are presented in conjunction with the preconditioned conjugate gradient method (PCG) for the solution of large-scale problems in solid and structural mechanics. In the first approach, the PCG method is applied to the global coefficient matrix, while in the second approach it is applied to the interface problem after eliminating the internal degrees of freedom. For both implementations, a subdomain-by-subdomain (SBS) polynomial preconditioner is employed, based on local information of each subdomain. The approximate inverse of the global coefficient matrix or the Schur complement matrix, which acts as the preconditioner, is expressed by a truncated Neumann series resulting in an additive type local preconditioner. Block type preconditioning, where full elimination is performed inside each block, is also studied and compared with the proposed polynomial preconditioning.
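As a rough illustration of the kind of preconditioner described, and not the authors' subdomain-by-subdomain implementation, the sketch below approximates the inverse of A by a truncated Neumann series built from a simple diagonal splitting and passes it to SciPy's conjugate gradient solver; the function name, the splitting and the 1D Poisson test matrix are assumptions made for the sake of a runnable example.

```python
# Minimal sketch: truncated Neumann-series preconditioner for CG.
# A^{-1} is approximated by sum_{k=0..m} (I - D^{-1} A)^k D^{-1},
# using the diagonal D of A as the splitting (the paper uses local
# subdomain information instead).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def neumann_preconditioner(A, terms=3):
    d_inv = 1.0 / A.diagonal()
    def apply(r):
        t = d_inv * r                     # k = 0 term of the series
        z = t.copy()
        for _ in range(terms):
            t = t - d_inv * (A @ t)       # multiply by (I - D^{-1} A)
            z += t
        return z
    return spla.LinearOperator(A.shape, matvec=apply)

# 1D Poisson matrix standing in for a (much larger) stiffness matrix.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = spla.cg(A, b, M=neumann_preconditioner(A, terms=3))
print("converged" if info == 0 else f"cg stopped with info={info}")
```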
16.
Martin Große-Rhode, Formal Aspects of Computing, 2002, 13(2): 161-186
In model-based software systems development, formal specifications of the components of the system are developed. Different specifications are thereby used to represent the different aspects or views of the components, possibly following different paradigms. These heterogeneous viewpoint specifications have to be integrated in order to obtain a consistent global specification of the whole system. In this paper transformation systems are introduced as a common semantic domain in which specifications written in different languages can be interpreted and formally compared. A transformation system is a transition system where the transitions are labelled by sets of actions and the states are labelled by algebras representing the data states. Development relations and composition operations for transformation systems are investigated, and it is shown that compatible local developments of components induce a global development of their composition. As an application, two specifications of the alternating bit protocol are formally compared component-wise, one given in the process calculus CCS, the other in the parallel programming language UNITY.
Received September 2000 / Accepted in revised form June 2001
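To make the informal definition above concrete, here is a very small Python sketch (my own illustration, not the paper's formal machinery): a transition system whose transitions carry sets of action names and whose states are labelled with data states, with plain dictionaries standing in for the labelling algebras.

```python
# Illustrative sketch only: transitions labelled by sets of actions,
# states labelled by data states (dictionaries in place of algebras).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transition:
    source: str
    actions: frozenset   # set of action labels performed in this step
    target: str

@dataclass
class TransformationSystem:
    states: set
    transitions: set
    data_label: dict = field(default_factory=dict)   # control state -> data state

    def steps_from(self, state):
        return [t for t in self.transitions if t.source == state]

# A one-place buffer in this style.
buffer = TransformationSystem(
    states={"empty", "full"},
    transitions={
        Transition("empty", frozenset({"put"}), "full"),
        Transition("full", frozenset({"get"}), "empty"),
    },
    data_label={"empty": {"slot": None}, "full": {"slot": "item"}},
)
for t in buffer.steps_from("empty"):
    print(sorted(t.actions), "->", t.target, buffer.data_label[t.target])
```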
17.
The role of formal methods is examined in the context of the process of developing and adopting open standards. Against the broad backdrop of concerns for improving the quality of standards, issues of conformance assessment, test specification, and test methodology guidelines are considered. The experience gained from the attempts to formalize the test specifications for POSIX 2003.5 is presented as lessons learned. The tradeoffs associated with the various formal methods are considered in terms of the properties of a common semantic model for assertion languages. The intent here is to collect the common features in a form that provides insights on issues such as encapsulation and inheritance of specifications, interoperation semantics, state and control structures for assertions, and name space management conventions.
18.
There have been many proposals for shared memory systems, each providing a different type of memory coherence for interprocess communication. However, they have usually been defined using different formalisms, which makes it difficult to compare the different proposals put forward. In this paper we present a formal framework for specifying memory models with different coherency properties. We specify most of the known shared memory models using our framework, showing some of the relationships that hold among them.
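The framework itself is not given in the abstract; purely as a self-contained illustration of what a machine-checkable coherence property looks like, the sketch below brute-forces whether a small multiprocess execution can be explained by sequential consistency (the function name and the litmus-test example are mine, not the paper's).

```python
# Illustrative only: brute-force check of sequential consistency for a small
# execution.  Each per-process history is a list of ('R'|'W', address, value);
# memory locations start at 0.
def is_sequentially_consistent(histories):
    def explore(indices, memory):
        if all(i == len(h) for i, h in zip(indices, histories)):
            return True                      # every operation was placed
        for p, h in enumerate(histories):
            i = indices[p]
            if i == len(h):
                continue
            kind, addr, value = h[i]
            if kind == 'R' and memory.get(addr, 0) != value:
                continue                     # this interleaving cannot explain the read
            new_memory = dict(memory)
            if kind == 'W':
                new_memory[addr] = value
            nxt = list(indices)
            nxt[p] += 1
            if explore(tuple(nxt), new_memory):
                return True
        return False
    return explore(tuple(0 for _ in histories), {})

# Store-buffering litmus test: both reads returning 0 is not sequentially
# consistent (though weaker memory models allow it).
p0 = [('W', 'x', 1), ('R', 'y', 0)]
p1 = [('W', 'y', 1), ('R', 'x', 0)]
print(is_sequentially_consistent([p0, p1]))  # False
```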
19.
The Semantic Web has motivated many communities to provide data in a machine-readable format. However, the available data has so far not been utilized to the extent possible. The data, which has been created by a large number of people, is dispersed across the Web. Creating the data without central coordination results in RDF of varying quality and makes it obligatory to cleanse the collected data before integration. The SECO system presented in this paper harvests RDF files from the Web and consolidates the different data sets into a coherent representation aligned along an internal schema. SECO provides interfaces for humans to browse and for software agents to query the data repository. In this paper, we describe the characteristics of RDF data available online, the architecture and implementation of the SECO application, and discuss some of the experience gained while collecting and integrating RDF on the Web.
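SECO's internal schema and cleansing pipeline are not described in the abstract; purely as an illustration of the harvesting-and-merging step, here is a short rdflib sketch (the seed URLs and the FOAF query are placeholder assumptions, not part of SECO):

```python
# Illustrative sketch: fetch a few RDF documents and merge them into one
# graph with rdflib; harvested data varies widely in quality, hence the
# defensive error handling.
import rdflib

SEED_URLS = [
    "https://example.org/people.rdf",    # hypothetical seed documents
    "https://example.org/projects.rdf",
]

merged = rdflib.Graph()
for url in SEED_URLS:
    try:
        merged.parse(url)                # serialization format is guessed
    except Exception as exc:
        print(f"skipping {url}: {exc}")  # skip malformed or unreachable data

# Query the consolidated graph, e.g. for FOAF names.
rows = merged.query("""
    SELECT ?s ?name WHERE {
        ?s <http://xmlns.com/foaf/0.1/name> ?name .
    }
""")
for s, name in rows:
    print(s, name)
```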
20.
Over the past decade, there has been increasing interest in functionalizing silicon surfaces, avoiding the ubiquitous oxide overlayer utilized in most integrated circuits based upon silicon. While silicon is clearly the backbone material of the microelectronics industry, an understanding of its surface chemistry still remains at a preliminary stage. The native oxide overlayer on silicon chips has served admirably well for most microelectronic applications, but as the features on integrated circuits become increasingly small, the ability to tailor the interfacial properties of the surface is extremely desirable. As the well-known Moore's law describes, the number of devices on an integrated circuit has been increasing exponentially since the early 1960s, approximately doubling every 12 to 18 months. The smallest feature on a Pentium processor chip is now about 300 nm, and both industry and academia look to make even tinier surface features and devices as the state of the art moves from ultra- to giga-scale integration, with 10 to 100 million or more transistors per chip. Because the surface properties of these nanoscale devices will have crucial effects on their performance, new surface terminations and the chemical tools to access them are required.