Found 20 similar documents; search time: 46 ms
1.
The concept of multiplicity in UML derives from that of cardinality in entity-relationship modeling techniques. The UML documentation
defines this concept but at the same time acknowledges a lack of clarity in the specification of multiplicities for
n-ary associations. This paper shows an ambiguity in the definition given by the UML documentation and proposes a clarification
of this definition, as well as the use of outer and inner multiplicities as a simple extension of the current notation to
represent other multiplicity constraints, such as participation constraints, which are equally valuable in understanding n-ary
associations.
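The two constraint families can be made concrete in code. The sketch below is an illustration only, not the paper's notation: the role indices, the (Programmer, Language, Project) association and its instance names are all invented. It checks a look-across ("outer") multiplicity and a participation ("inner") constraint on a set of ternary links:

```python
from collections import Counter, defaultdict

def outer_ok(triples, role, lo, hi):
    """Look-across ("outer") multiplicity on one role of a ternary link set:
    for each combination of the other two roles that actually occurs, the
    number of distinct instances filling `role` must lie in [lo, hi]."""
    seen = defaultdict(set)
    for t in triples:
        others = tuple(v for i, v in enumerate(t) if i != role)
        seen[others].add(t[role])
    return all(lo <= len(s) <= hi for s in seen.values())

def participation_ok(triples, role, instances, lo):
    """Participation ("inner") constraint: every instance of the class on
    `role` must appear in at least `lo` links."""
    counts = Counter(t[role] for t in triples)
    return all(counts[x] >= lo for x in instances)

# hypothetical (Programmer, Language, Project) association
links = [("ann", "java", "web"), ("bob", "java", "web"), ("ann", "c", "db")]
```

Note that `outer_ok` only inspects role combinations that actually occur, whereas `participation_ok` quantifies over all instances of a class; the gap between these two readings of a minimum bound is exactly the kind of ambiguity the paper discusses.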
Initial submission: 16 January 2002 / Revised submission: 17 October 2002
Published online: 2 December 2002
* A previous, shorter version of this paper was presented under the title “Semantics of the Minimum Multiplicity in Ternary Associations in UML” at the 4th International Conference on the Unified Modeling Language (UML 2001), October 1–5, 2001, Toronto, Ontario, Canada; Springer-Verlag, LNCS 2185, pp. 329–341.
2.
Hirozumi Yamaguchi, Khaled El-Fakih, Gregor von Bochmann, Teruo Higashino. Distributed Computing, 2003, 16(1):21–35
Protocol synthesis is used to derive a protocol specification, that is, the specification of a set of application components
running in a distributed system of networked computers, from a specification of services (called the service specification)
to be provided by the distributed application to its users. Protocol synthesis reduces design costs and errors by specifying
the message exchanges between the application components, as defined by the protocol specification. In general, maintaining
such a distributed application involves applying frequent minor modifications to the service specification due to changes
in the user requirements. Re-deriving the protocol specification after each modification using the existing synthesis methods is expensive and time-consuming. Moreover, those methods cannot identify which changes must be made to the protocol specification in response to changes in the service specification. In this paper, we present a new synthesis method to re-synthesize
only those parts of the protocol specification that must be modified in order to satisfy the changes in the service specification.
The method consists of a set of simple rules that are applied to the protocol specification written in an extended Petri net
model. An application example is given along with some experimental results.
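Since the re-synthesis rules operate on specifications written in a Petri net model, a minimal token-game interpreter helps fix intuitions. The sketch below is plain Petri net firing, not the paper's extended model, and the place and transition names (a hypothetical send/receive fragment between two components) are invented:

```python
def enabled(marking, t):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in t["pre"].items())

def fire(marking, t):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    assert enabled(marking, t), "transition not enabled"
    m = dict(marking)
    for p, n in t["pre"].items():
        m[p] = m.get(p, 0) - n
    for p, n in t["post"].items():
        m[p] = m.get(p, 0) + n
    return m

# hypothetical fragment of a two-component protocol: A sends, B receives
t_send = {"pre": {"A_ready": 1}, "post": {"A_done": 1, "channel": 1}}
t_recv = {"pre": {"B_wait": 1, "channel": 1}, "post": {"B_done": 1}}

m0 = {"A_ready": 1, "B_wait": 1}
m1 = fire(m0, t_send)
m2 = fire(m1, t_recv)
```

A rule-based re-synthesis method would, in this picture, patch only the transitions affected by a service change instead of rebuilding the whole net.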
Received: July 2001 / Accepted: July 2002
* Supported by the International Communications Foundation (ICF), Japan
** Supported by Communications and Information Technology Ontario (CITO) and the Natural Sciences and Engineering Research Council (NSERC), Canada
3.
Edward E. Cobb. The VLDB Journal: The International Journal on Very Large Data Bases, 1997, 6(3):173–190
Businesses today are searching for information solutions that enable them to compete in the global marketplace. To minimize
risk, these solutions must build on existing investments, permit the best technology to be applied to the problem, and be
manageable. Object technology, with its promise of improved productivity and quality in application development, delivers
these characteristics but, to date, its deployment in commercial business applications has been limited. One possible reason
is the absence of the transaction paradigm, widely used in commercial environments and essential for reliable business applications.
For object technology to become a serious contender in the construction of these solutions, three things are required:
– technology for transactional objects. In December 1994, the Object Management Group adopted a specification for an object
transaction service (OTS). The OTS specifies mechanisms for defining and manipulating transactions. Though derived from the X/Open distributed
transaction processing model, OTS contains additional enhancements specifically designed for the object environment. Similar
technology from Microsoft appeared at the end of 1995.
– methodologies for building new business systems from existing parts. Business process re-engineering is forcing businesses
to improve the operations that bring their products to market. Workflow computing, when used in conjunction with “object wrappers”, provides tools both to define and to track the execution of business processes that leverage existing applications and infrastructure.
– an execution environment which satisfies the requirements of the operational needs of the business. Transaction processing
(TP) monitor technology, though widely accepted for mainframe transaction processing, has yet to enjoy similar success in
the client/server marketplace. Instead the database vendors, with their extensive tool suites, dominate. As object brokers
mature they will require many of the functions of today's TP monitors. Marrying these two technologies can produce a robust
execution environment which offers a superior alternative for building and deploying client/server applications.
Edited by Andreas Reuter. Received February 1995 / Revised August 1995 / Accepted May 1996
4.
The most common way of designing databases is by means of a conceptual model, such as E/R, without taking into account other
views of the system. New object-oriented design languages, such as UML (Unified Modelling Language), allow the whole system,
including the database schema, to be modelled in a uniform way. Moreover, as UML is an extendable language, it allows for
any necessary introduction of new stereotypes for specific applications. Proposals exist to extend UML with stereotypes for
database design but, unfortunately, they are focused on relational databases. However, new applications require complex objects
to be represented in complex relationships, object-relational databases being more appropriate for these requirements. The
framework of this paper is an Object-Relational Database Design Methodology, which defines new UML stereotypes for Object-Relational
Database Design and proposes some guidelines to translate a UML conceptual schema into an object-relational schema. The guidelines
are based on the SQL:1999 object-relational model and on Oracle8i as a product example.
Initial submission: 22 January 2002 / Revised submission: 10 June 2002
Published online: 7 January 2003
This paper is a revised and extended version of “Extending UML for Object-Relational Database Design”, presented at the UML 2001 conference [17].
5.
Symmetric Spin
Dragan Bošnački, Dennis Dams, Leszek Holenderski. International Journal on Software Tools for Technology Transfer (STTT), 2002, 4(1):92–106
6.
The traditional style of working with computers generally revolves around the computer being used as a tool, with individual
users directly initiating operations and waiting for their results. A more recent paradigm of human-computer interaction,
based on the indirect management of computing resources, is agent-based interaction. The idea of delegation plays a key part
in this approach to computer-based work, which allows individuals to relinquish the routine, mechanistic parts of their everyday
tasks, having them performed automatically instead. Adaptive interfaces combine elements of both these approaches, where the
goal is to have the interface adapt to its users rather than the reverse. This paper addresses some of the issues arising
from a practical software development process that aimed to support individuals using this style of interaction. It documents the development of a set of classes that implement an architecture for adaptive interfaces. These classes are intended
to be used as part of larger user interface systems which are to exhibit adaptive behaviour. One approach to the implementation of an adaptive interface is to use a set of software “agents” – simple processes which effectively run “in the background” – to decompose the task of implementing the interface. These agents form part of a larger adaptive interface architecture, which in turn forms a component of the adaptive system.
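The agent idea can be sketched in a few lines. Everything below is hypothetical and much simpler than the architecture the paper documents: the `Agent` class, the trigger predicate and the event names are all invented for illustration:

```python
class Agent:
    """One background agent in a hypothetical adaptive-interface architecture:
    it watches the stream of interaction events and, when its trigger
    condition holds, proposes an adaptation to the enclosing interface."""

    def __init__(self, trigger, adaptation):
        self.trigger = trigger          # predicate over the event history
        self.adaptation = adaptation    # proposal made when the trigger fires

    def observe(self, history):
        return self.adaptation if self.trigger(history) else None

# hypothetical agent: after a command is used three times, offer a shortcut
suggest_shortcut = Agent(
    trigger=lambda h: h.count("open-file") >= 3,
    adaptation="offer a keyboard shortcut for open-file",
)

events = ["open-file", "save", "open-file", "open-file"]
proposal = suggest_shortcut.observe(events)
```

A full architecture would run many such agents concurrently and arbitrate among their proposals; this sketch only shows the observe-and-propose shape.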
7.
8.
Holger Hermanns, Joost-Pieter Katoen, Joachim Meyer-Kayser, Markus Siegle. International Journal on Software Tools for Technology Transfer (STTT), 2003, 4(2):153–172
Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking
of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and
continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov
chains, the Erlangen–Twente Markov Chain Checker E⊢MC², where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and
discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting
lessons learned during the development and application of E⊢MC².
Published online: 19 November 2002
Correspondence to: Holger Hermanns
9.
Yehuda Afek, Anat Bremler-Barr, Haim Kaplan, Edith Cohen, Michael Merritt. Distributed Computing, 2002, 15(4):273–283
A new general theory about restoration of network paths is first introduced. The theory pertains to restoration of shortest paths in a network following failure,
e.g., we prove that a shortest path in a network after removing k edges is the concatenation of at most k+1 shortest paths in the original network. The theory is then combined with efficient path concatenation techniques in MPLS
(multi-protocol label switching) to achieve powerful schemes for restoration in MPLS-based networks. We thus transform MPLS
into a flexible and robust method for forwarding packets in a network. Finally, the different schemes suggested are evaluated
experimentally on three large networks (a large ISP, the AS graph of the Internet, and the full Internet topology). These
experiments demonstrate that the restoration schemes perform well in actual topologies.
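The k+1 bound can be checked on a toy graph. The sketch below simplifies to hop-count shortest paths (the paper treats general networks) and uses a greedy split, which uses the fewest segments because every prefix of a shortest path is itself a shortest path; the ring example is invented:

```python
from collections import deque

def bfs(adj, src):
    """Hop distances and BFS parents from src in an unweighted graph."""
    dist, par, q = {src: 0}, {src: None}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], par[v] = dist[u] + 1, u
                q.append(v)
    return dist, par

def path(adj, src, dst):
    """One shortest path from src to dst, via BFS parents."""
    _, par = bfs(adj, src)
    out = []
    while dst is not None:
        out.append(dst)
        dst = par[dst]
    return out[::-1]

def pieces(p, orig_adj):
    """Greedy split of path p into maximal segments that are each a
    shortest path of the original graph."""
    n, i = 0, 0
    while i < len(p) - 1:
        dist, _ = bfs(orig_adj, p[i])
        j = i + 1                       # a single edge is always shortest
        while j + 1 < len(p) and dist[p[j + 1]] == j + 1 - i:
            j += 1
        n, i = n + 1, j
    return n

# 6-node ring; removing edge (0,5) forces the long way round (k = 1)
ring = {u: [(u - 1) % 6, (u + 1) % 6] for u in range(6)}
damaged = {u: [v for v in ring[u] if {u, v} != {0, 5}] for u in range(6)}
detour = path(damaged, 0, 5)
```

Here the 5-hop detour splits into exactly two original shortest paths (0-1-2-3 and 3-4-5), matching the k+1 bound for k = 1.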
Received: December 2001 / Accepted: July 2002
* This research was supported by a grant from the Ministry of Science, Israel
10.
The significant changes in the social, legal, demographic, and economic landscape over the past 10–15 years present enormous
opportunities for the human–computer interface design community. These changes will have a significant impact on the design
and development of systems for older and disabled people. This paper brings together a number of proposals to improve both
specialist and mainstream design methods in the field as a contribution to the debate about design for older and disabled
people and the concept of universal usability.
Published online: 6 November 2002
11.
A new class of gossip protocols to diffuse updates securely is presented. The protocols rely on annotating updates with the
path along which they travel. To avoid a combinatorial explosion in the number of annotated updates, rules are employed to
choose which updates to keep. Different sets of rules lead to different protocols. Results of simulated executions for a collection
of such protocols are described – the protocols would appear to be practical, even in large networks.
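A minimal simulation shows the path-annotation idea, though the pruning rule here (keep only the shortest annotation) is one invented example, much simpler than the security-oriented rule sets the paper studies; the network and round count are also invented:

```python
import random

def diffuse(source, neighbors, rounds, seed=0):
    """Push-style gossip of a single update. Every stored copy is annotated
    with the path along which it traveled; keeping only the shortest
    annotation per node avoids the combinatorial explosion in copies."""
    rng = random.Random(seed)
    held = {source: (source,)}                  # node -> path annotation
    for _ in range(rounds):
        for node, p in list(held.items()):
            peer = rng.choice(neighbors[node])
            cand = p + (peer,)
            if peer not in held or len(cand) < len(held[peer]):
                held[peer] = cand               # keep the shorter annotation
    return held

# complete graph on 5 nodes; diffuse an update held initially by node 0
net = {u: [v for v in range(5) if v != u] for u in range(5)}
held = diffuse(0, net, rounds=10)
```

Every stored annotation is a genuine walk through the network starting at the source, which is the property a secure protocol would authenticate.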
Received: October 2001 / Accepted: July 2002
Supported in part by ARPA/RADC grant F30602-96-1-0317, AFOSR grant F49620-00-1-0198, Defense Advanced Research Projects Agency
(DARPA) and Air Force Research Laboratory Air Force Material Command USAF under agreement number F30602-99-1-0533, National
Science Foundation Grant 9703470, and a grant from Intel Corporation. The views and conclusions contained herein are those
of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed
or implied, of these organizations or the U.S. Government.
12.
In control systems, the interfaces between software and its embedding environment are a major source of costly errors. For
example, Lutz reported that 20–35% of the safety-related errors discovered during integration and system testing of two spacecraft
were related to the interfaces between the software and the embedding hardware. Also, the software’s operating environment
is likely to change over time, further complicating the issues related to system-level inter-component communication. In this
paper we discuss a formal approach to the specification and analysis of inter-component communication using a revised version
of RSML (Requirements State Machine Language). The formalism allows rigorous specification of the physical aspects of the
inter-component communication and forces encapsulation of communication-related properties in well-defined and easy-to-read
interface specifications. This enables us both to analyse a system design to detect incompatibilities between connected components
and to use the interface specifications as safety kernels to enforce safety constraints.
13.
Steven D. Johnson, Yanhong A. Liu, Yuchen Zhang. International Journal on Software Tools for Technology Transfer (STTT), 2003, 4(2):211–223
A systematic transformation method based on incrementalization and value caching generalizes a broad family of program optimizations. It yields significant performance improvements in many program classes,
including iterative schemes that characterize hardware specifications. CACHET is an interactive incrementalization tool. Although incrementalization is highly structured and automatable, better results
are obtained through interaction, where the main task is to guide term rewriting based on data-specific identities. Incrementalization
specialized to iteration corresponds to strength reduction, a familiar program improvement technique. This correspondence is illustrated by the derivation of a hardware-efficient nonrestoring square-root algorithm, which has also served as an example of theorem prover-based implementation verification.
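Strength reduction in its simplest form can be seen on a loop computing squares: the incrementalized version caches the previous result and updates it with the identity i² = (i−1)² + 2i − 1, trading a multiplication for an addition. This is a hand-worked miniature of the transformation pattern, not an example taken from the paper:

```python
def squares_naive(n):
    return [i * i for i in range(n)]            # one multiply per element

def squares_incremental(n):
    """Strength-reduced version: cache the previous square and update it
    incrementally, so each step costs an addition instead of a multiply."""
    out, sq = [], 0
    for i in range(n):
        out.append(sq)
        sq += 2 * i + 1                         # (i+1)**2 == i*i + 2*i + 1
    return out
```

The nonrestoring square-root derivation mentioned above follows the same pattern at a larger scale: an expensive per-iteration computation is replaced by an incrementally maintained cached value.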
Published online: 9 October 2001
* S.D. Johnson supported, in part, by the National Science Foundation under grant MIP-9601358.
** Y.A. Liu supported in part by the National Science Foundation under grant CCR-9711253, the Office of Naval Research under grant N00014-99-1-0132, and Motorola Inc. under a Motorola University Partnership in Research Grant.
*** Y. Zhang is a student recipient of a Motorola University Partnership in Research Grant.
14.
Lawrence D. Bergman, Jerre Shoudt, Vittorio Castelli, Chung-Sheng Li, Loey Knapp. International Journal on Digital Libraries, 1999, 2(2-3):178–189
In this paper, we describe a new interface for querying multimedia digital libraries and an interface building framework.
The interface employs a drag-and-drop style of interaction and combines a structured natural-language style query specification
with reusable multimedia objects. We call this interface DanDMM, short for “drag-and-drop multimedia”. DanDMM interfaces capture
the syntax of the underlying query language, and dynamically reconfigure to reflect the contents of the data repository.
A distinguishing feature of DanDMM is its ability to synthesize integrated interfaces that incorporate both example-based
specification using multimedia objects, and traditional techniques including keyword, attribute, and free text-based search.
We describe the DanDMM-builder, a framework for synthesizing DanDMM interfaces, and give several examples of interfaces that
have been constructed using DanDMM-builder, including a remote-sensing library application and a video digital library.
Received: 15 December 1997 / Revised: June 1999
15.
Cynthia E. Irvine, Timothy Levin, Jeffery D. Wilson, David Shifflett, Barbara Pereira. Requirements Engineering, 2002, 7(4):192–206
Requirements specifications for high-assurance secure systems are rare in the open literature. This paper examines the development
of a requirements document for a multilevel secure system that must meet stringent assurance and evaluation requirements.
The system is designed to be secure, yet combines popular commercial components with specialised high-assurance ones. Functional
and non-functional requirements pertinent to security are discussed. A multidimensional threat model is presented. The threat
model accounts for the developmental and operational phases of system evolution and for each phase accounts for both physical
and non-physical threats. We describe our team-based method for developing a requirements document and relate that process
to techniques in requirements engineering. The system requirements document presented provides a calibration point for future
security requirements engineering techniques intended to meet both functional and assurance goals.
* The views expressed in this paper are those of the authors and should not be construed to reflect those of their employers or the Department of Defense. This work was supported in part by the MSHN project of the DARPA/ITO Quorum programme and by the MYSEA project of the DARPA/ATO CHATS programme.
Correspondence and offprint requests to: T. Levin, Department of Computer Science, Naval Postgraduate School, Monterey, CA 93943-5118, USA. Tel.: +1 831 656 2339; Fax: +1 831 656 2814; Email: levin@nps.navy.mil
16.
Theo C. Ruys, Ed Brinksma. International Journal on Software Tools for Technology Transfer (STTT), 2003, 4(2):246–259
In this paper we take a closer look at the automated analysis of designs, in particular of verification by model checking.
Model checking tools are increasingly being used for the verification of real-life systems in an industrial context. In addition
to ongoing research aimed at curbing the complexity of dealing with the inherent state space explosion problem – which allows
us to apply these techniques to ever larger systems – attention must now also be paid to the methodology of model checking,
to decide how to use these techniques to their best advantage. Model checking “in the large” causes a substantial proliferation
of interrelated models and model checking sessions that must be carefully managed in order to control the overall verification
process. We show that in order to do this well both notational and tool support are required. We discuss the use of software
configuration management techniques and tools to manage and control the verification trajectory. We present Xspin/Project,
an extension to Xspin, which automatically controls and manages the validation trajectory when using the model checker Spin.
Published online: 18 June 2002
17.
This paper describes the design of a reconfigurable architecture for implementing image processing algorithms. The architecture is a pipeline of small identical processing elements that contain a programmable logic device (FPGA) and double-port memories. The processing system has been adapted to accelerate the computation of differential algorithms. Log-polar vision selectively reduces the amount of data to be processed and simplifies several vision algorithms, making their implementation possible with few hardware resources. The design of the reconfigurable architecture has been implementation-oriented, and the architecture has been employed in an autonomous platform, which has power consumption, size and weight restrictions. Two different vision algorithms have been implemented in the reconfigurable pipeline, and some experimental results are shown for them.
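The data reduction from log-polar sampling is easy to quantify. The sketch below builds the sample centres of a log-polar grid over a square image; the ring/sector counts and minimum radius are invented for illustration, not taken from the paper's hardware:

```python
import math

def log_polar_grid(width, rings, sectors, rho_min=1.0):
    """Sample-point centres of a log-polar grid over a width x width image.
    Ring radii grow geometrically from rho_min toward width/2, so the
    periphery is covered by a few large receptive fields and rings*sectors
    samples replace width**2 uniform pixels."""
    r_max = width / 2
    k = (r_max / rho_min) ** (1 / rings)        # geometric growth factor
    cx = cy = width / 2
    pts = []
    for u in range(rings):
        r = rho_min * k ** (u + 0.5)            # radius of ring u's centre
        for v in range(sectors):
            th = 2 * math.pi * (v + 0.5) / sectors
            pts.append((cx + r * math.cos(th), cy + r * math.sin(th)))
    return pts

# a 256x256 image (65,536 pixels) reduced to 32 * 64 = 2,048 samples
pts = log_polar_grid(256, 32, 64)
```

A 32x reduction of this kind is what makes it plausible to run differential vision algorithms with the few hardware resources available on a small FPGA pipeline.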
Received: 30 March 2001 / Accepted: 11 February 2002
* This work has been supported by the Ministerio de Ciencia y Tecnología and FEDER under project TIC2001-3546
Correspondence to: J.A. Boluda
18.
The requirements specification – as outcome of the requirements engineering process – falls short of capturing other useful
information generated during this process, such as the justification for selected requirements, trade-offs negotiated by stakeholders
and alternative requirements that were discarded. In the context of evolving systems and distributed development, this information
is essential. Rationale methods focus on capturing and structuring this missing information. In this paper, we propose an
integrated process with dedicated guidance for capturing requirements and their rationale, discuss its tool support and describe the experiences we gained during several case studies with students. Although the idea of integrating rationale methods with requirements engineering is not new, few research projects so far have focused on smooth integration, dedicated tool support and detailed guidance for such methods.
19.
Scott D. Stoller. International Journal on Software Tools for Technology Transfer (STTT), 2002, 4(1):71–91
State-space exploration is a powerful technique for verification of concurrent software systems. Applying it to software systems
written in standard programming languages requires powerful abstractions (of data) and reductions (of atomicity), which focus
on simplifying the data and control, respectively, by aggregation. We propose a reduction that exploits a common pattern of
synchronization, namely, the use of locks to protect shared data structures. This pattern of synchronization is particularly
common in concurrent Java programs, because Java provides built-in locks. We describe the design of a new tool for state-less
state-space exploration of Java programs that incorporates this reduction. We also describe an implementation of the reduction
in Java PathFinder, a more traditional state-space exploration tool for Java programs.
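The payoff of such an atomicity reduction can be estimated with a back-of-the-envelope interleaving count (an illustration of the state-explosion arithmetic, not the tool's actual algorithm): if two threads each perform n individual operations, the explorer faces C(2n, n) schedules, but if each thread's n operations sit inside one lock-protected block treated as a single atomic step, only two orders remain.

```python
from math import comb

def interleavings(a, b):
    """Distinct schedules of two independent threads doing a and b atomic
    steps: choose which of the a+b slots belong to the first thread."""
    return comb(a + b, a)

# every operation a separate step vs. each lock-protected block atomic
unreduced = interleavings(4, 4)     # 4 fine-grained steps per thread
reduced = interleavings(1, 1)       # one atomic critical section per thread
```

Treating lock-protected accesses as atomic is sound here precisely because the lock already prevents the other thread from observing intermediate states.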
Published online: 2 October 2002
* Present address: Computer Science Dept., SUNY at Stony Brook, Stony Brook, NY 11794-4400, USA. The author gratefully acknowledges the support of ONR under Grants N00014-99-1-0358 and N00014-01-1-0109 and the support of NSF under Grant CCR-9876058.
20.
Alan Mycroft, Richard Sharp. International Journal on Software Tools for Technology Transfer (STTT), 2003, 4(3):271–297
The FLaSH (Functional Languages for Synthesising Hardware) system allows a designer to map a high-level functional language,
SAFL, and its more expressive extension, SAFL+, into hardware. The system has two phases: first we perform architectural exploration
by applying a series of semantics-preserving transformations to SAFL specifications; then the resulting specification is
compiled into hardware in a resource-aware manner – that is, we map separate functions to separate hardware functional units
(functions which are called multiple times become shared functional units). This article introduces the SAFL language and
shows how program transformations on it can explore area-time trade-offs. We then show how the FLaSH compiler compiles SAFL
to synchronous hardware and how SAFL transformations can also express hardware/software co-design. As a case study we demonstrate
how SAFL transformations allow us to refine a simple specification of a MIPS-style processor into pipelined and superscalar
implementations. The superset language SAFL+ (adding process calculi features but retaining many of the design aims) is then
described and given semantics both as hardware and as a programming language.
Published online: 17 December 2002