Found 20 similar documents (search time: 15 ms)
1.
This paper describes a distributed algorithm that implements the abstraction of an e-Transaction: a transaction that executes exactly once despite failures. Our algorithm is based on an asynchronous replication scheme that generalizes the well-known active-replication and primary-backup schemes. We devised the algorithm with a three-tier architecture in mind: the end-user interacts with front-end clients (e.g., browsers) that invoke middle-tier application servers (e.g., web servers) to access back-end databases. The algorithm preserves the three-tier nature of the architecture and introduces only a modest overhead compared with unreliable solutions.
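One common way to realize an exactly-once guarantee at the application tier is client retry combined with server-side deduplication of request identifiers. The sketch below illustrates that general idea only; it is not the paper's replication-based algorithm, and all names in it are hypothetical.

```python
class AppServer:
    """Hypothetical middle-tier server that deduplicates requests so a
    retried request executes its side effect exactly once."""

    def __init__(self):
        self._results = {}  # request id -> cached result

    def execute(self, req_id, operation):
        # A retry carrying the same request id returns the cached
        # result instead of re-running the (side-effecting) operation.
        if req_id not in self._results:
            self._results[req_id] = operation()
        return self._results[req_id]


class Counter:
    """Stand-in for a back-end database effect."""

    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value
```

A front-end client that times out simply resends the same request id; the effect on the back end is applied once, yet every retry still receives the result.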
2.
Edward E. Cobb 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(3):173-190
Businesses today are searching for information solutions that enable them to compete in the global marketplace. To minimize
risk, these solutions must build on existing investments, permit the best technology to be applied to the problem, and be
manageable. Object technology, with its promise of improved productivity and quality in application development, delivers
these characteristics but, to date, its deployment in commercial business applications has been limited. One possible reason
is the absence of the transaction paradigm, widely used in commercial environments and essential for reliable business applications.
For object technology to be a serious contender in the construction of these solutions requires:
– technology for transactional objects. In December 1994, the Object Management Group adopted a specification for an object
transaction service (OTS). The OTS specifies mechanisms for defining and manipulating transactions. Though derived from the X/Open distributed
transaction processing model, OTS contains additional enhancements specifically designed for the object environment. Similar
technology from Microsoft appeared at the end of 1995.
– methodologies for building new business systems from existing parts. Business process re-engineering is forcing businesses
to improve the operations that bring their products to market. Workflow computing, when used in conjunction with “object wrappers”, provides tools to both define and track the execution of business processes that leverage existing applications and infrastructure.
– an execution environment that satisfies the operational needs of the business. Transaction processing
(TP) monitor technology, though widely accepted for mainframe transaction processing, has yet to enjoy similar success in
the client/server marketplace. Instead the database vendors, with their extensive tool suites, dominate. As object brokers
mature they will require many of the functions of today's TP monitors. Marrying these two technologies can produce a robust
execution environment which offers a superior alternative for building and deploying client/server applications.
Edited by Andreas Reuter, Received February 1995 / Revised August 1995 / Accepted May 1996
3.
Hirozumi Yamaguchi Khaled El-Fakih Gregor von Bochmann Teruo Higashino 《Distributed Computing》2003,16(1):21-35
Protocol synthesis is used to derive a protocol specification, that is, the specification of a set of application components
running in a distributed system of networked computers, from a specification of services (called the service specification)
to be provided by the distributed application to its users. Protocol synthesis reduces design costs and errors by specifying
the message exchanges between the application components, as defined by the protocol specification. In general, maintaining
such a distributed application involves applying frequent minor modifications to the service specification due to changes
in the user requirements. Deriving the protocol specification after each modification using the existing synthesis methods
is considered expensive and time-consuming. Moreover, with those methods we cannot identify which changes to the protocol specification
correspond to the changes in the service specification. In this paper, we present a new synthesis method that re-synthesizes
only those parts of the protocol specification that must be modified to satisfy the changes in the service specification.
The method consists of a set of simple rules that are applied to the protocol specification written in an extended Petri net
model. An application example is given along with some experimental results.
Received: July 2001 / Accepted: July 2002
* Supported by the International Communications Foundation (ICF), Japan
** Supported by Communications and Information Technology Ontario (CITO) and the Natural Sciences and Engineering Research Council (NSERC), Canada
4.
Summary. This work considers the problem of performing t tasks in a distributed system of p fault-prone processors. This problem, called do-all herein, was introduced by Dwork, Halpern and Waarts. The solutions presented here are for the model of computation that abstracts
a synchronous message-passing distributed system with processor stop-failures and restarts. We present two new algorithms
based on a new aggressive coordination paradigm by which multiple coordinators may be active as the result of failures. The
first algorithm is tolerant of stop-failures and does not allow restarts. Its available processor steps (work) complexity is [formula omitted] and its message complexity is [formula omitted]. Unlike prior solutions, our algorithm uses redundant broadcasts when encountering failures and, for p = t and large f, it achieves better work complexity. This algorithm is used as the basis for another algorithm that tolerates stop-failures
and restarts. This new algorithm is the first solution to the do-all problem that efficiently deals with processor restarts. Its available processor steps complexity is [formula omitted], and its message complexity is [formula omitted], where f is the total number of failures.
Received: October 1998 / Accepted: September 2000
5.
Large-Scale Requirements Analysis Revisited: The Need for Understanding the Political Ecology of Requirements Engineering
This paper addresses the political nature of requirements for large systems and argues that requirements engineering theory and practice must become more engaged with these issues.
It argues that large-scale system requirements are constructed through a political decision process, whereby requirements emerge
as a set of mappings between consecutive solution spaces justified by a problem space of concern to a set of principals. These
solution spaces are complex socio-technical ensembles that often exhibit non-linear behaviour in expansion due to domain complexity
and political ambiguity. Stabilisation of solutions into agreed-on specifications occurs only through the exercise of organisational
power. Requirements engineering in such cases is most effectively seen as a form of heterogeneous engineering, in which technical, social, economic and institutional factors are brought together in a current solution space that provides
the baseline for the construction of proposed new solution spaces.
6.
Asynchronous group mutual exclusion
Yuh-Jzer Joung 《Distributed Computing》2000,13(4):189-206
Mutual exclusion and concurrency are two fundamental and essentially opposite features in distributed systems. However, in
some applications such as Computer Supported Cooperative Work (CSCW) we have found it necessary to impose mutual exclusion
on different groups of processes in accessing a resource, while allowing processes of the same group to share the resource.
To our knowledge, no such design issue has been previously raised in the literature. In this paper we address this issue by
presenting a new problem, called Congenial Talking Philosophers, to model group mutual exclusion. We also propose several criteria to evaluate solutions to the problem and to measure their
performance. Finally, we provide an efficient and highly concurrent distributed algorithm for the problem in a shared-memory
model where processes communicate by reading from and writing to shared variables. The distributed algorithm meets the proposed
criteria, and has performance similar to some naive but centralized solutions to the problem.
Received: November 1998 / Accepted: April 2000
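The group mutual exclusion semantics can be illustrated with a naive centralized sketch of the kind the abstract mentions as a baseline: processes of one group share the resource concurrently while other groups wait. This is an illustration of the problem, not the paper's distributed shared-memory algorithm.

```python
import threading

class GroupMutex:
    """Naive centralized group mutual exclusion: processes of the same
    group may use the resource concurrently; processes of different
    groups are mutually excluded.  Illustrates the semantics only."""

    def __init__(self):
        self._cond = threading.Condition()
        self._group = None   # group currently in session, or None
        self._inside = 0     # number of processes using the resource

    def enter(self, group):
        with self._cond:
            # Wait until the resource is free or held by our own group.
            while self._group is not None and self._group != group:
                self._cond.wait()
            self._group = group
            self._inside += 1

    def leave(self):
        with self._cond:
            self._inside -= 1
            if self._inside == 0:
                self._group = None   # session ends; any group may enter
                self._cond.notify_all()
```

Note that this centralized monitor serializes all entry decisions; the paper's contribution is achieving the same exclusion guarantee in a distributed read/write shared-memory model.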
7.
A useless checkpoint is a local checkpoint that cannot be part of any consistent global checkpoint. This paper addresses the following
problem. Given a set of processes that take (basic) local checkpoints in an independent and unknown way, the problem is to design communication-induced checkpointing protocols
that direct processes to take additional local (forced) checkpoints to ensure no local checkpoint is useless.
The paper first proves two properties related to integer timestamps which are associated with each local checkpoint. The first
property is a necessary and sufficient condition that these timestamps must satisfy for no checkpoint to be useless. The second
property provides an easy timestamp-based determination of consistent global checkpoints. Then, a general communication-induced
checkpointing protocol is proposed. This protocol, derived from the two previous properties, actually defines a family of
timestamp-based communication-induced checkpointing protocols. It is shown that several existing checkpointing protocols for
the same problem are particular instances of the general protocol. The design of this general protocol is motivated by the
use of communication-induced checkpointing protocols in “consistent global checkpoint”-based distributed applications such
as the detection of stable or unstable properties and the determination of distributed breakpoints.
Received: July 1997 / Accepted: August 1999
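The timestamping idea behind such protocols can be sketched in a few lines: each local checkpoint carries an integer timestamp, timestamps are piggybacked on messages, and a forced checkpoint is taken before delivering a message that would violate the timestamp ordering. The rule below is a simplified member of this family, not the paper's general protocol.

```python
class Process:
    """Sketch of a simple timestamp-based communication-induced
    checkpointing rule.  Each checkpoint gets an integer timestamp;
    a message piggybacks the sender's current timestamp, and the
    receiver takes a forced checkpoint before delivery when needed
    so that its checkpoint timestamps keep increasing consistently."""

    def __init__(self, pid):
        self.pid = pid
        self.clock = 0          # timestamp of the current interval
        self.checkpoints = [0]  # timestamps of local checkpoints taken

    def take_checkpoint(self):
        # Basic (independent) checkpoint, taken in an unknown way.
        self.clock += 1
        self.checkpoints.append(self.clock)

    def send(self):
        return self.clock       # piggyback the current timestamp

    def receive(self, ts):
        # Forced-checkpoint rule: a message from a "later" interval
        # triggers a checkpoint whose timestamp dominates the sender's.
        if ts >= self.clock:
            self.clock = ts + 1
            self.checkpoints.append(self.clock)
```

Under such a rule, checkpoints with compatible timestamps can be assembled into consistent global checkpoints, which is the property the paper's two timestamp properties formalize.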
8.
Orna Kupferman Moshe Y. Vardi 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(2):224-233
One of the advantages of temporal-logic model-checking tools is their ability to accompany a negative answer to the correctness
query by a counterexample to the satisfaction of the specification in the system. On the other hand, when the answer to the
correctness query is positive, most model-checking tools provide no witness for the satisfaction of the specification. In
the last few years there has been growing awareness as to the importance of suspecting the system or the specification of
containing an error even when model checking succeeds. The main justification for such suspicion is possible errors
in the modeling of the system or of the specification. Many such errors can be detected by further automatic reasoning about
the system and the environment. In particular, Beer et al. described a method for the detection of vacuous satisfaction of
temporal logic specifications and the generation of interesting witnesses for the satisfaction of specifications. For example,
verifying a system with respect to the specification ϕ = AG(req → AF grant) (“every request is eventually followed by a grant”), we say that ϕ is satisfied vacuously in systems in which requests are
never sent. An interesting witness for the satisfaction of ϕ is then a computation that satisfies ϕ and contains a request.
Beer et al. considered only specifications of a limited fragment of ACTL, and with a restricted interpretation of vacuity.
In this paper we present a general method for detection of vacuity and generation of interesting witnesses for specifications
in CTL*. Our definition of vacuity is stronger, in the sense that we check whether all the subformulas of the specification affect
its truth value in the system. In addition, we study the advantages and disadvantages of alternative definitions of vacuity,
study the problem of generating linear witnesses and counterexamples for branching temporal logic specifications, and analyze
the complexity of the problem.
Published online: 22 January 2002
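The notion of a subformula "affecting" the truth value of a specification has a simple propositional analogue, sketched below. The paper's setting is CTL* formulas checked against systems; this toy version only illustrates the definition of vacuity by flipping atoms.

```python
def vacuously_satisfied(spec, model):
    """Propositional analogue of vacuity: `spec` is a Boolean function
    over named atoms and `model` maps atoms to values.  The spec is
    vacuously satisfied if it holds and some atom does not affect its
    truth value (flipping that atom leaves the spec true)."""
    if not spec(model):
        return False
    for atom in model:
        flipped = dict(model, **{atom: not model[atom]})
        if spec(flipped) == spec(model):  # atom does not affect spec
            return True
    return False

# "Every request is followed by a grant", propositionally: req -> grant.
req_implies_grant = lambda m: (not m["req"]) or m["grant"]
```

In a model with no request, `req_implies_grant` holds no matter what `grant` is, so the satisfaction is vacuous; an interesting witness would additionally make `req` true.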
9.
Making Workflow Change Acceptable
Virtual professional communities are supported by network information systems composed from standard Internet tools. To satisfy
the interests of all community members, a user-driven approach to requirements engineering is proposed that produces not only
meaningful but also acceptable specifications. This approach is especially suited for workflow systems that support partially structured, evolving work
processes. To ensure acceptability, social norms must guide the specification process. The RENISYS specification method
is introduced, which facilitates this process using composition norms as formal representations of social norms. Conceptual
graph theory is used to represent four categories of knowledge definitions: type definitions, state definitions, action norms
and composition norms. It is shown how the composition norms guide a legitimate user-driven specification process by analysing
a case study on the development of an electronic law journal.
10.
Local model checking and protocol analysis
Xiaoqun Du Scott A. Smolka Rance Cleaveland 《International Journal on Software Tools for Technology Transfer (STTT)》1999,2(3):219-241
This paper describes a local model-checking algorithm for the alternation-free fragment of the modal mu-calculus that has
been implemented in the Concurrency Factory and discusses its application to the analysis of a real-time communications protocol.
The protocol considered is RETHER, a software-based, real-time Ethernet protocol developed at SUNY at Stony Brook. Its purpose is to provide guaranteed bandwidth
and deterministic, periodic network access to multimedia applications over commodity Ethernet hardware. Our model-checking
results show that (for a particular network configuration) RETHER makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation.
Our data also indicate that, in many cases, the state-exploration overhead of the local model checker is significantly smaller
than that of a global analysis of the protocol. In the course of specifying and verifying
RETHER, we also identified an alternative design of the protocol that warranted further study due to its potentially smaller run-time
overhead in servicing requests for data transmission. Again, using local model checking, we showed that this alternative design
also possesses the properties of interest. This observation points out one of the often-overlooked benefits of formal verification:
by forcing designers to understand their designs rigorously and abstractly, these techniques often enable the designers to
uncover interesting design alternatives.
11.
A connection between two hosts across a wide-area network may consist of many sessions over time, each called an incarnation. A connection is synchronized using a connection establishment protocol, based on a handshake mechanism, to allow reliable exchange of data. This paper identifies the precise level of handshake needed under different assumptions
on the nodes and on the network, using a formal model for connection management. In particular, the following parameters are
studied: the size of the memory at the nodes, the information retained between incarnations, and the existence of time constraints
on the network. Among the results we obtain are: (1) If both nodes have bounded memory, no incarnation management protocol
exists. (2) If the nodes have unbounded memory, then a two-way handshake incarnation management protocol exists. (3) If the
nodes have unbounded memory, and the server does not retain connection-specific information between incarnations, then a three-way
handshake incarnation management protocol exists. On the other hand, a two-way handshake incarnation management protocol does
not exist, even if some global information is retained. (4) If a bound on maximum packet lifetime (MPL) is known, then a two-way
handshake incarnation management protocol exists, in which the server does not retain connection-specific information between
incarnations.
Received: July 1995 / Accepted: July 1997
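Result (3) above, where a stateless server needs a three-way handshake, can be sketched as a toy exchange: the server issues a fresh incarnation number and accepts only when the client echoes it back. The framing below is a hypothetical illustration, not the paper's formal model.

```python
import itertools

class HandshakeServer:
    """Toy server that retains no per-connection state between
    incarnations, only a source of fresh incarnation numbers."""

    def __init__(self):
        self._fresh = itertools.count(1)

    def on_syn(self):
        # Step 2: answer a connection request with a fresh number.
        return next(self._fresh)

class HandshakeClient:
    def __init__(self):
        self.incarnation = None

    def syn(self):
        return "SYN"               # step 1: request a new incarnation

    def on_syn_ack(self, number):
        self.incarnation = number
        return ("ACK", number)     # step 3: echo the fresh number back

def three_way_handshake(client, server):
    client.syn()
    issued = server.on_syn()
    _, echoed = client.on_syn_ack(issued)
    # The server accepts the incarnation iff the echo matches the
    # number it just issued, without stored per-connection state.
    return echoed == issued
```

With a known bound on maximum packet lifetime, result (4) shows the third message can be dropped, because freshness can be inferred from timing instead of an explicit echo.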
12.
Ed Brinksma 《Distributed Computing》1999,12(2-3):61-74
In this paper we present a proof of the sequential consistency of the lazy caching protocol of Afek, Brown, and Merritt.
The proof will follow a strategy of stepwise refinement, developing the distributed caching memory in five transformation steps from a specification of the serial memory, whilst
preserving the sequential consistency in each step. The proof, in fact, presents a rationalized design of the distributed
caching memory. We will carry out our proof using a simple process-algebraic formalism for the specification of the various
design stages. We will not follow a strictly algebraic exposition, however. At some points the correctness will be shown using
direct semantic arguments, and we will also employ higher-order constructs like action transducers to relate behaviours. The distribution of the design/proof over five transformation steps provides a good insight into the
variations that could have been allowed at each point of the design while still maintaining sequential consistency. The design/proof
in fact establishes the correctness of a whole family of related memory architectures. The factorization in smaller steps
also allows for a closer analysis of the fairness assumptions about the distributed memory.
13.
Discussion-based exercises are a prevalent form of training in emergency management, aimed at improving coordinative decision
making between the various agencies involved in disaster response. In each exercise, small multi-agency groups of decision
makers discuss potential courses of action within a fictitious disaster scenario presented as a textual narrative supported
by visual materials. We present a cognitive engineering analysis of the problem of designing disaster scenarios for effective
discussion-based exercises. The analysis was carried out through the development of a pilot authoring environment to establish
and address the requirements of a training organisation in the UK. The pilot authoring environment embodies a simple theoretical
model of the exercise process in which facts of a disaster scenario afford discussion of pertinent issues which are elicited by considerations fed to trainees. This representational scheme allows the authoring environment to complement and extend authors’ mental models
of exercises, and thereby enhance five aspects of authoring: rationalisation; continuity of rationale; evolution; adaptability;
and the integration of evaluation feedback.
14.
Minos N. Garofalakis Banu Özden Avi Silberschatz 《The VLDB Journal The International Journal on Very Large Data Bases》1998,7(4):206-225
The Enhanced Pay-Per-View (EPPV) model for providing continuous-media services associates with each continuous-media clip
a display frequency that depends on the clip's popularity. The aim is to increase the number of clients that can be serviced
concurrently beyond the capacity limitations of available resources, while guaranteeing a constraint on the response time.
This is achieved by sharing periodic continuous-media streams among multiple clients. The EPPV model offers a number of advantages
over other data-sharing schemes (e.g., batching), which make it more attractive to large-scale service providers. In this
paper, we provide a comprehensive study of the resource-scheduling problems associated with supporting EPPV for continuous-media
clips with (possibly) different display rates, frequencies, and lengths. Our main objective is to maximize the amount of disk
bandwidth that is effectively scheduled under the given data layout and storage constraints. Our formulation gives rise to
NP-hard combinatorial optimization problems that fall within the realm of hard real-time scheduling theory. Given the intractability
of the problems, we propose novel heuristic solutions with polynomial-time complexity. We also present preliminary experimental
results for the average case behavior of the proposed scheduling schemes and examine how they compare to each other under
different workloads. A major contribution of our work is the introduction of a robust scheduling framework that, we believe,
can provide solutions for a variety of realistic EPPV resource-scheduling scenarios, as well as any scheduling problem involving
regular, periodic use of a shared resource. Based on this framework, we propose various interesting research directions for
extending the results presented in this paper.
Received June 9, 1998 / Accepted October 13, 1998
15.
To support emerging real-time applications, high-speed integrated services networks must provide end-to-end performance guarantees
on a per-connection basis in a networking environment. Resource management algorithms must accommodate traffic that may get
burstier as it traverses the network due to complex interactions among packet streams at each switch. To address this problem,
several non-work-conserving packet-service disciplines have been proposed. Non-work-conserving servers may be idle and hold
packets under certain conditions, to reconstruct, fully or partially, the traffic pattern of the original source inside the
network and prevent the traffic from becoming burstier. We compare two non-work-conserving service disciplines. Stop-and-go
uses a multilevel framing strategy to allocate resources in a single switch and to ensure traffic smoothness throughout the
network. Rate controlled static priority (RCSP) decouples the server functions with two components: (1) a regulator to control
traffic distortion introduced by multiplexing effects and load fluctuations in previous servers, and (2) a static priority
scheduler to multiplex the regulated traffic. We compare the two service disciplines in terms of traffic specification, scheduling
mechanism, buffer space requirement, end-to-end delay characteristics, connection admission-control algorithms, and achievable
network utilization. The comparison is first done analytically, and then empirically using two 10-minute traces of MPEG-compressed
video.
16.
17.
In control systems, the interfaces between software and its embedding environment are a major source of costly errors. For
example, Lutz reported that 20–35% of the safety-related errors discovered during integration and system testing of two spacecraft
were related to the interfaces between the software and the embedding hardware. Also, the software’s operating environment
is likely to change over time, further complicating the issues related to system-level inter-component communication. In this
paper we discuss a formal approach to the specification and analysis of inter-component communication using a revised version
of RSML (Requirements State Machine Language). The formalism allows rigorous specification of the physical aspects of the
inter-component communication and forces encapsulation of communication-related properties in well-defined and easy-to-read
interface specifications. This enables us both to analyse a system design to detect incompatibilities between connected components
and to use the interface specifications as safety kernels to enforce safety constraints.
18.
We consider the problem of scheduling a set of pages on a single broadcast channel using time-multiplexing. In a perfectly periodic schedule, time is divided into equal-size slots, and each page is transmitted in a time slot precisely every fixed interval of time (the period of the page). We study the case in which each page i has a given demand probability [formula omitted], and the goal is to design a perfectly periodic schedule that minimizes the average time a random client waits until its
page is transmitted. We seek approximate polynomial solutions. Approximation bounds are obtained by comparing the cost of
a solution provided by an algorithm with the cost of a solution to a relaxed (non-integral) version of the problem. A key quantity in our
methodology is a fraction [symbol omitted] that depends on the maximum demand probability. The best known polynomial algorithm to date guarantees an approximation ratio of [formula omitted]. In this paper, we develop a tree-based methodology for perfectly periodic scheduling and, using new techniques, we derive
algorithms with better bounds. For small values of [symbol omitted], our best algorithm guarantees an approximation ratio of [formula omitted]. On the other hand, we show that the integrality gap between the cost of any perfectly periodic schedule and the cost of
the fractional problem is at least [formula omitted]. We also provide algorithms with good performance guarantees for large values of [symbol omitted].
Received: December 2001 / Accepted: September 2002
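The cost function being minimized, the average wait of a random client, can be computed exactly for any concrete cyclic schedule, as the helper below shows (a sketch of the objective only; the paper's algorithms construct the schedule, they do not merely evaluate it).

```python
from fractions import Fraction

def average_wait(schedule, demand):
    """Exact expected waiting time of a random client for a cyclic
    broadcast schedule: `schedule` is one cycle of page transmissions,
    `demand` maps each page to its demand probability.  For each page
    we average, over a uniformly random arrival slot, the number of
    slots until that page is next transmitted."""
    n = len(schedule)
    total = Fraction(0)
    for page, p in demand.items():
        waits = []
        for s in range(n):
            # distance to the next slot carrying `page`, cyclically
            d = next(k for k in range(1, n + 1)
                     if schedule[(s + k) % n] == page)
            waits.append(d)
        total += p * Fraction(sum(waits), n)
    return total
```

In a perfectly periodic schedule a page with period T is seen after about T/2 slots on average, so low-period (frequent) transmission should be reserved for high-demand pages; the relaxed problem the paper compares against drops the integrality of slot assignment.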
19.
We prove the existence of a “universal” synchronous self-stabilizing protocol, that is, a protocol that allows a distributed
system to stabilize to a desired nonreactive behaviour (as long as a protocol stabilizing to that behaviour exists). Previous
proposals required drastic increases in asymmetry and knowledge to work, whereas our protocol does not use any additional
knowledge, and does not require more symmetry-breaking conditions than available; thus, it is also stabilizing with respect
to dynamic changes in the topology. We prove an optimal quiescence time of n+D for a synchronous network of n processors and diameter D; the protocol can be made finite-state with a negligible loss in quiescence time. Moreover, an optimal D+1 protocol is given for the case of unique identifiers. As a consequence, we provide an effective proof technique that allows
one to show whether self-stabilization to a certain behaviour is possible under a wide range of models.
Received: January 1999 / Accepted: July 2001
20.
Constructing stories is a type of playing that involves mobilizing the storyteller’s imagination and finding original ways
to convey narrative intentions. When a child invents a story, there is a natural interaction with the local environment and
the use of various means of expression. We adopted a user-centered approach to design POGO, a playful environment which utilizes
the child’s physical environment and sensory modalities. POGO is a system of active tools that enable children to create stories
by connecting physical and virtual environments. By providing children with the possibility of capturing and manipulating
images and various media, and combining them in sequential form, POGO triggered new strategies in the construction of narrative
logic, time and space, in the construction of the episodes and in the visual narration.
Correspondence to: Françoise Decortis, Psychology and Education Sciences Department, University of Liège, 4000 Liège, Belgium. Email: francoise.decortis@ulg.ac.be