1.
Summary. Long-lived and adaptive implementations of mutual exclusion and renaming in the read/write shared memory model are presented.
An implementation of a task is adaptive if the step complexity of any operation in the implementation is a function of the number of processes that take steps concurrently
with the operation. The renaming algorithm assigns a new unique id in the range to any process whose initial unique name is taken from a set of size N, for arbitrary N, where k is the number of processes that actually take steps or hold a name while the new name is being acquired. The step complexity
of acquiring a new name is , while the step complexity of releasing a name is 1. The space complexity of the algorithm is where n is an upper bound on the number of processes that may be active at the same time (acquiring or holding new names), which
could be N in the worst case.
Both the system response time and the worst case number of operations per process in the presented mutual-exclusion algorithm
are adaptive. Both algorithms rely on the basic building block of a long-lived and adaptive splitter. While the adaptive-splitter
satisfies a slightly different set of properties than the Moir-Anderson splitter [MA95], it is adaptive and long-lived. In
addition, the new splitter properties enable the construction of a non-blocking long-lived (2k-1)-renaming algorithm (which is optimal in the size of the new name space). We believe that the mechanisms introduced in
our splitter implementation are interesting on their own, and might be used in other adaptive and long-lived constructions.
Received: March 2000 / Accepted: July 2001
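The splitter building block mentioned above can be illustrated with the classic wait-free splitter of Moir and Anderson [MA95] (a one-shot sketch, not the long-lived adaptive splitter this paper constructs): of n concurrent callers, at most one returns 'stop', at most n-1 return 'right', and at most n-1 return 'left'.

```python
import threading

class Splitter:
    """Classic Moir-Anderson splitter (sketch): at most one of n
    concurrent callers returns 'stop'."""
    def __init__(self):
        self.X = None        # id of the last process to pass the first line
        self.Y = False       # "door" flag, closed once any process passes

    def split(self, pid):
        self.X = pid
        if self.Y:
            return 'right'   # someone already closed the door
        self.Y = True
        if self.X == pid:
            return 'stop'    # no one overwrote X: this caller wins
        return 'left'

results, lock = [], threading.Lock()

def worker(pid, s):
    r = s.split(pid)
    with lock:
        results.append(r)

s = Splitter()
threads = [threading.Thread(target=worker, args=(i, s)) for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
assert results.count('stop') <= 1   # splitter guarantee under any interleaving
```

Run solo, a process always gets 'stop'; under contention, the door and overwrite checks deflect all but at most one caller to 'left' or 'right'.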
2.
Asynchronous group mutual exclusion
Yuh-Jzer Joung 《Distributed Computing》2000,13(4):189-206
Abstract. Mutual exclusion and concurrency are two fundamental and essentially opposite features in distributed systems. However, in
some applications such as Computer Supported Cooperative Work (CSCW) we have found it necessary to impose mutual exclusion
on different groups of processes in accessing a resource, while allowing processes of the same group to share the resource.
To our knowledge, no such design issue has been previously raised in the literature. In this paper we address this issue by
presenting a new problem, called Congenial Talking Philosophers, to model group mutual exclusion. We also propose several criteria to evaluate solutions of the problem and to measure their
performance. Finally, we provide an efficient and highly concurrent distributed algorithm for the problem in a shared-memory
model where processes communicate by reading from and writing to shared variables. The distributed algorithm meets the proposed
criteria, and has performance similar to some naive but centralized solutions to the problem.
Received: November 1998 / Accepted: April 2000
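The group-mutual-exclusion semantics described above can be modeled by a toy centralized session object (a sketch of the problem's specification, not of Joung's distributed algorithm; the names are illustrative): processes of the group holding the current session enter together, while other groups are refused.

```python
class GroupMutex:
    """Toy centralized model of group mutual exclusion: processes in the
    current session's group share the resource; other groups must wait."""
    def __init__(self):
        self.session = None   # group currently holding the resource
        self.inside = 0       # processes currently using the resource

    def try_enter(self, group):
        if self.session is None:
            self.session = group          # open a new session for this group
        if self.session == group:
            self.inside += 1
            return True
        return False                      # different group: must wait

    def leave(self):
        self.inside -= 1
        if self.inside == 0:
            self.session = None           # resource free for any group

m = GroupMutex()
assert m.try_enter('A') and m.try_enter('A')   # same group shares the session
assert not m.try_enter('B')                    # other group is excluded
m.leave(); m.leave()
assert m.try_enter('B')                        # session freed, B may enter
```

The challenge the paper addresses is realizing exactly this behavior without a central arbiter, using only shared read/write variables.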
3.
Yuh-Jzer Joung 《Distributed Computing》2002,15(3):155-175
Summary. Group mutual exclusion occurs naturally in situations where a resource can be shared by processes of the same group, but
not by processes of different groups. For example, suppose data is stored in a CD-jukebox. Then when a disc is loaded for
access, users that need data on the disc can concurrently access the disc, while users that need data on a different disc
have to wait until the current disc is unloaded.
The design issues for group mutual exclusion have been modeled as the Congenial Talking Philosophers problem, and solutions for shared-memory models have been proposed [12,14]. As in ordinary mutual exclusion and many other
problems in distributed systems, however, techniques developed for shared memory do not necessarily apply to message passing
(and vice versa). So in this paper we investigate solutions for Congenial Talking Philosophers in computer networks where
processes communicate by asynchronous message passing. We first present a solution that is a straightforward adaptation from
Ricart and Agrawala's algorithm for ordinary mutual exclusion. Then we show that the simple modification suffers a severe
performance degradation that could cause the system to behave as though only one process of a group can be in the critical
section at a time. We then present a more efficient and highly concurrent distributed algorithm for the problem, the first
such solution in computer networks.
Received: August 2000 / Accepted: November 2001
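The Ricart-Agrawala adaptation discussed above hinges on the reply rule: a process receiving a request decides whether to send its OK immediately or defer it. A hedged sketch of such a rule extended with groups (illustrative only, not the paper's algorithm) could look like this, with priorities as (timestamp, id) pairs where lower wins:

```python
def should_defer(my_state, my_group, my_priority, req_group, req_priority):
    """Reply rule for a Ricart-Agrawala-style adaptation to group mutual
    exclusion (sketch). Returns True if the OK reply must be deferred."""
    if my_state == 'IDLE':
        return False          # not competing: reply at once
    if req_group == my_group:
        return False          # same group may share the session
    if my_state == 'IN_CS':
        return True           # conflicting group waits until we leave
    # both requesting, different groups: timestamp order decides
    return my_priority < req_priority

# A process inside the critical section defers a different group
# but immediately admits a member of its own group:
assert should_defer('IN_CS', 'A', (1, 0), 'B', (2, 1))
assert not should_defer('IN_CS', 'A', (1, 0), 'A', (2, 1))
```

The degradation the paper identifies arises because timestamp order alone tends to serialize entries even within a group; the paper's more concurrent algorithm avoids this.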
4.
Ted Herman 《Distributed Computing》2000,13(1):1-17
Summary. A superstabilizing protocol is a protocol that (i) is self-stabilizing, meaning that it can recover from an arbitrarily severe transient fault; and (ii) can recover from a local transient fault while satisfying a passage predicate during recovery. This paper investigates the possibility of superstabilizing protocols for mutual exclusion in a ring of
processors, where a local fault consists of any transient fault at a single processor; the passage predicate specifies that
there be at most one token in the ring, with the single exception of a spurious token colocated with the transient fault. The first result
of the paper is an impossibility theorem for a class of superstabilizing mutual exclusion protocols. Two unidirectional protocols
are then presented to show that conditions for impossibility can independently be relaxed so that superstabilization is possible
using either additional time or communication registers. A bidirectional protocol subsequently demonstrates that superstabilization
in O(1) time is possible. All three superstabilizing protocols are optimal with respect to the number of communication registers
used.
Received: August 1996 / Accepted: March 1999
5.
Summary. This paper presents adaptive algorithms for mutual exclusion using only read and write operations; the performance of the
algorithms depends only on the point contention, i.e., the number of processes that are concurrently active during the algorithm execution (and not on n, the total number of processes). Our algorithm has O(k) remote step complexity and O(log k) system response time, where k is the point contention. The remote step complexity is the maximal number of steps performed by a process, where a wait is counted as one step. The system response time is the time interval between subsequent entries to the critical section, where one time unit is the minimal interval in which
every active process performs at least one step. The space complexity of this algorithm is O(N log n), where N is the range of processes' names. We show how to make the space complexity of our algorithm depend solely on n, while preserving the other performance measures of the algorithm.
Received: March 2001 / Accepted: November 2001
6.
Summary. When designing distributed systems, one is faced with the problem of verifying a refinement between two specifications, given
at different levels of abstraction. Suggested verification techniques in the literature include refinement mappings and various
forms of simulation. We present a verification method, in which refinement between two systems is proven by constructing a
transducer that inputs a computation of a concrete system and outputs a matching computation of the abstract system. The transducer
uses a FIFO queue that holds segments of the concrete computation that have not been matched yet. This allows a finite delay between
the occurrence of a concrete event and the determination of the corresponding abstract event. This delay often makes the use
of prophecy variables or backward simulation unnecessary.
An important generalization of the method is to prove refinement modulo some transformation on the observed sequences of events.
The method is adapted by replacing the FIFO queue by a component that allows the appropriate transformation on sequences of events. A particular case is partial-order refinement, i.e., refinement that preserves only a subset of the orderings between events of a system. Examples are sequential consistency
and serializability. The case of sequential consistency is illustrated on a proof of sequential consistency of a cache protocol.
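The transducer idea above can be sketched in a few lines (a toy illustration under assumed event names, not the paper's verification method): concrete events are buffered in a FIFO queue until a complete segment matches an abstract event, allowing a finite delay between the two computations.

```python
from collections import deque

def transduce(concrete, match):
    """Refinement transducer (sketch): buffer concrete events in a FIFO
    queue; emit the matching abstract event once a whole segment is seen.
    `match(segment)` returns an abstract event, or None if incomplete."""
    queue, abstract = deque(), []
    for ev in concrete:
        queue.append(ev)
        a = match(tuple(queue))
        if a is not None:
            abstract.append(a)
            queue.clear()
    return abstract, list(queue)   # unmatched suffix stays buffered

# Toy abstraction: a begin/end pair refines one atomic abstract step.
def match(seg):
    if len(seg) == 2 and seg[0] == ('begin', seg[1][1]) and seg[1][0] == 'end':
        return ('step', seg[1][1])

out, pending = transduce([('begin', 'x'), ('end', 'x'), ('begin', 'y')], match)
assert out == [('step', 'x')] and pending == [('begin', 'y')]
```

The buffered suffix is exactly the "finite delay" that often removes the need for prophecy variables or backward simulation.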
7.
We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic
choice between probability distributions. These systems can be decomposed into a collection of “computation trees” which arise
by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness
properties cannot be established unless fairness is assumed. We introduce a probabilistic branching time logic PBTL, based on the logic TPCTL of Hansson [30] and the logic PCTL of [55], resp. pCTL [14]. The formulas of the logic express properties such as “every request is eventually granted with probability at least
p”. We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time
quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for
verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL
[4, 14] to obtain procedures for PBTL
under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized
distributed systems.
Received: June 1997 / Accepted: May 1998
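The core numerical step behind checking properties like "eventually granted with probability at least p" is standard maximum-reachability value iteration on the nondeterministic/probabilistic structure (a sketch of that textbook procedure, under the standard interpretation without the paper's fairness constraints; all state names are illustrative):

```python
def max_reach_prob(states, target, trans, iters=100):
    """Value iteration (sketch): maximum probability of eventually
    reaching `target`, resolving nondeterministic choice among
    distributions by taking the max.
    trans[s] = list of distributions, each a dict {successor: prob}."""
    p = {s: 1.0 if s in target else 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s not in target and trans.get(s):
                p[s] = max(sum(pr * p[t] for t, pr in d.items())
                           for d in trans[s])
    return p

# Toy process: from 'req', a nondeterministic choice between a fair coin
# that grants or loops, and a move that always loops.
trans = {'req': [{'grant': 0.5, 'req': 0.5}, {'req': 1.0}]}
p = max_reach_prob(['req', 'grant'], {'grant'}, trans)
assert abs(p['req'] - 1.0) < 1e-6   # the maximizing scheduler always grants
```

Under the minimum (adversarial) resolution the looping choice would give probability 0, which is why fairness assumptions are needed for such liveness properties.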
8.
We establish a lower bound of Ω(log N / log log N) remote memory references for N-process mutual exclusion algorithms based on reads, writes, or comparison primitives such as test-and-set and compare-and-swap. Our bound improves an earlier lower bound of Ω(log log N / log log log N) established by Cypher. Our lower bound is of importance for two reasons. First, it almost matches the time complexity of the best known algorithms based on reads, writes, or comparison primitives. Second, our lower bound suggests
that it is likely that, from an asymptotic standpoint, comparison primitives are no better than reads and writes when implementing
local-spin mutual exclusion algorithms. Thus, comparison primitives may not be the best choice to provide in hardware if one
is interested in scalable synchronization.
Received: January 2002 / Accepted: September 2002
* Work supported by NSF grants CCR 9732916, CCR 9972211, CCR 9988327, ITR 0082866, and CCR 0208289.
9.
Local model checking and protocol analysis
Xiaoqun Du Scott A. Smolka Rance Cleaveland 《International Journal on Software Tools for Technology Transfer (STTT)》1999,2(3):219-241
This paper describes a local model-checking algorithm for the alternation-free fragment of the modal mu-calculus that has
been implemented in the Concurrency Factory and discusses its application to the analysis of a real-time communications protocol.
The protocol considered is RETHER, a software-based, real-time Ethernet protocol developed at SUNY at Stony Brook. Its purpose is to provide guaranteed bandwidth
and deterministic, periodic network access to multimedia applications over commodity Ethernet hardware. Our model-checking
results show that (for a particular network configuration) RETHER makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation.
Our data also indicate that, in many cases, the state-exploration overhead of the local model checker is significantly smaller
than the total amount that would result from a global analysis of the protocol. In the course of specifying and verifying
RETHER, we also identified an alternative design of the protocol that warranted further study due to its potentially smaller run-time
overhead in servicing requests for data transmission. Again, using local model checking, we showed that this alternative design
also possesses the properties of interest. This observation points out one of the often-overlooked benefits of formal verification:
by forcing designers to understand their designs rigorously and abstractly, these techniques often enable the designers to
uncover interesting design alternatives.
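The savings of local over global model checking come from on-the-fly exploration: only the states needed to decide the formula are generated. A minimal sketch for the reachability property EF prop (expressible in the alternation-free mu-calculus; the toy transition system is illustrative):

```python
def local_ef(init, succ, prop):
    """Local (on-the-fly) check of EF prop: explore states from `init`,
    stopping as soon as a witness is found, so states not needed for the
    verdict are never generated."""
    explored, stack = set(), [init]
    while stack:
        s = stack.pop()
        if s in explored:
            continue
        explored.add(s)
        if prop(s):
            return True, explored          # witness found: stop early
        stack.extend(succ(s))
    return False, explored                 # exhausted reachable states

# Toy system: states 0..999 in a chain; the property holds at state 5.
ok, seen = local_ef(0, lambda s: [s + 1] if s < 999 else [], lambda s: s == 5)
assert ok and len(seen) == 6   # only states 0..5 explored, not all 1000
```

This mirrors the paper's observation that the local checker's state-exploration overhead is often far below that of a global analysis.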
10.
Constantine Stephanidis Anthony Savidis 《Universal Access in the Information Society》2001,1(1):40-55
Accessibility and high quality of interaction with products, applications, and services by anyone, anywhere, and at any time are fundamental requirements
for universal access in the emerging Information Society. This paper discusses these requirements, and their relation to the concept of automated
adaptation of user interfaces. An example application is presented, showing how adaptation can be used to accommodate the
requirements of different user categories and contexts of use. This application is then used as a vehicle for discussing a
new engineering paradigm appropriate for the development of adaptation-based user interfaces. Finally, the paper investigates
issues concerning the interaction technologies required for universal access.
Published online: 23 May 2001
11.
Summary. In a shared-memory distributed system, n independent asynchronous processes communicate by reading and writing to shared variables. An algorithm is adaptive (to total contention) if its step complexity depends only on the actual number, k, of active processes in the execution; this number is unknown in advance and may change in different executions of the algorithm.
Adaptive algorithms are inherently wait-free, providing fault-tolerance in the presence of an arbitrary number of crash failures and different processes' speed.
A wait-free adaptive collect algorithm with O(k) step complexity is presented, together with its applications in wait-free adaptive algorithms for atomic snapshots, immediate
snapshots and renaming.
Received: August 1999 / Accepted: August 2001
12.
We present a shared memory algorithm that allows a set of f+1 processes to wait-free "simulate" a larger system of n processes that may also exhibit up to f stopping failures.
Applying this simulation algorithm to the k-set-agreement problem enables conversion of an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [5,18,26], this transformation implies that there is no
k-fault-tolerant solution to the n-process k-set-agreement problem, for any n.
More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. This paper defines these notions and gives examples of their application to fundamental distributed
computing problems.
The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity,
expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include
a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy.
The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write
shared memory systems is also presented.
Received: February 2001 / Accepted: February 2001
13.
Fabio Casati Maria Grazia Fugini Isabelle Mirbel Barbara Pernici 《Requirements Engineering》2002,7(2):73-106
Workflow management systems are becoming a relevant support for a large class of business applications, and many workflow
models as well as commercial products are currently available. While the large availability of tools facilitates the development
and the fulfilment of customer requirements, workflow application development still requires methodological guidelines that
drive the developers in the complex task of rapidly producing effective applications. In fact, it is necessary to identify
and model the business processes, to design the interfaces towards existing cooperating systems, and to manage implementation
aspects in an integrated way. This paper presents the WIRES methodology for developing workflow applications under a uniform
modelling paradigm – UML modelling tools with some extensions – that covers all the life cycle of these applications: from
conceptual analysis to implementation. High-level analysis is performed under different perspectives, including a business and an organisational perspective. Distribution, interoperability and cooperation with external information systems are considered in this early
stage. A set of “workflowability” criteria is provided in order to identify which candidate processes are suited to be implemented
as workflows. Non-functional requirements receive particular emphasis in that they are among the most important criteria for
deciding whether workflow technology can be actually useful for implementing the business process at hand. The design phase
tackles aspects of concurrency and cooperation, distributed transactions and exception handling. Reuse of component workflows,
available in a repository as workflow fragments, is a distinguishing feature of the method. Implementation aspects are presented
in terms of rules that guide in the selection of a commercial workflow management system suitable for supporting the designed
processes, coupled with guidelines for mapping the designed workflows onto the model offered by the selected system.
14.
Odd-Wiking Rahlff Rolf Kenneth Rolfsen Jo Herstad 《Personal and Ubiquitous Computing》2001,5(1):50-53
Wearables are often described with a focus on providing the user with wearable information access and communication means.
The contextual information retrieval aspect is, however, an essential feature of such systems, as in, for example, the Remembrance Agent [1] where manually entered search-terms
are used for presenting relevant situational information, or as in different location-based systems [2]. In this position paper we outline a general framework of contextually aware wearable systems, and suggest how such mechanisms,
collecting massive traces of the user context, may lead to several other interesting uses in what we will call context trace technology.
15.
This paper describes a parameterized distributed algorithm applicable to any directed graph topology. The function parameter
of our algorithm is instantiated to produce distributed algorithms for both fundamental and high level applications, such
as shortest path calculus and depth-first-search tree construction. Due to fault resilience properties of our algorithm, the
resulting protocols are self-stabilizing at no additional cost. Self-stabilizing protocols can resist transient failures and
guarantee system recovery in a finite time. Since the condition on the function parameter (being a strictly idempotent r-operator) permits a broad range of applications to be implemented, the solution presented in our paper can be useful for
a large class of distributed systems.
Received: August 1999 / Accepted: January 2001
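Instantiating a parameterized scheme of this kind with the min-plus operator (an idempotent r-operator) yields self-stabilizing shortest paths: whatever the corrupted initial state, repeated local updates converge to the true distances. A sketch under that instantiation (the synchronous round loop and graph are illustrative simplifications):

```python
def stabilize(graph, source, state, rounds=20):
    """Self-stabilizing shortest paths via the min-plus r-operator
    (sketch): starting from an arbitrary, possibly faulty `state`,
    repeatedly apply d[v] = min over incoming edges (u, w) of d[u] + w.
    graph[v] = list of (u, w) incoming edges."""
    d = dict(state)                        # arbitrary, possibly corrupt start
    for _ in range(rounds):
        for v in graph:
            if v == source:
                d[v] = 0
            else:
                d[v] = min((d[u] + w for u, w in graph[v]),
                           default=float('inf'))
    return d

graph = {'s': [], 'a': [('s', 2)], 'b': [('s', 5), ('a', 1)]}
corrupt = {'s': 7, 'a': -3, 'b': 0}        # transient fault: garbage values
assert stabilize(graph, 's', corrupt) == {'s': 0, 'a': 2, 'b': 3}
```

The same skeleton with a different r-operator gives other applications, such as depth-first-search tree construction, which is the point of parameterizing the algorithm by the operator.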
16.
Serge Autexier Dieter Hutter Bruno Langenstein Heiko Mantel Georg Rock Axel Schairer Werner Stephan Roland Vogt Andreas Wolpers 《International Journal on Software Tools for Technology Transfer (STTT)》2000,3(1):66-77
The Verification Support Environment (VSE) is a tool to formally specify and verify complex systems. It provides the means to structure specifications and supports
the development process from the specification of a system to the automatic generation of code. Formal developments following
the VSE method are stored and maintained in an administration system that guides the user and maintains a consistent state
of development. An integrated deduction system provides proof support for the deduction problems arising during the development
process.
We describe the application of VSE to an industrial case study and give an overview of the enhanced VSE system and the VSE
methodology.
17.
We describe a system which supports dynamic user interaction with multimedia information using content-based hypermedia navigation
techniques, specialising in a technique for navigation of musical content. The model combines the principles of open hypermedia, whereby hypermedia link information is maintained by a link service, with content-based retrieval techniques in which a database is queried based on a feature of the multimedia content; our approach could be described as
‘content-based retrieval of hypermedia links’. The experimental system focuses on temporal media and consists of a set of
component-based navigational hypermedia tools. We propose the use of melodic pitch contours in this context and we present
techniques for storing and querying contours, together with experimental results. Techniques for integrating the contour database
with open hypermedia systems are also discussed.
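The melodic pitch contour idea can be sketched with a Parsons-style encoding (an illustrative simplification of contour indexing, not the paper's storage and query techniques): each note is classified as up, down, or same relative to its predecessor, and queries match by substring.

```python
def contour(notes):
    """Parsons-style pitch contour: U = up, D = down, S = same,
    each relative to the previous note (MIDI numbers)."""
    return ''.join('U' if b > a else 'D' if b < a else 'S'
                   for a, b in zip(notes, notes[1:]))

def find(db, query_notes):
    """Content-based lookup: match a query melody against stored
    contours by substring, returning candidate track ids."""
    q = contour(query_notes)
    return [tid for tid, c in db.items() if q in c]

# MIDI numbers for the opening of 'Frere Jacques': C D E C C D E C
db = {'frere': contour([60, 62, 64, 60, 60, 62, 64, 60])}
assert contour([60, 62, 64, 60]) == 'UUD'
assert find(db, [62, 64, 60]) == ['frere']   # transposition-invariant match
```

Because only relative motion is stored, the same query matches a melody hummed in any key, which is what makes contours attractive link anchors for musical content.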
18.
Failure detection and consensus in the crash-recovery model
Summary. We study the problems of failure detection and consensus in asynchronous systems in which processes may crash and recover,
and links may lose messages. We first propose new failure detectors that are particularly suitable to the crash-recovery model.
We next determine under what conditions stable storage is necessary to solve consensus in this model. Using the new failure
detectors, we give two consensus algorithms that match these conditions: one requires stable storage and the other does not.
Both algorithms tolerate link failures and are particularly efficient in the runs that are most likely in practice – those
with no failures or failure detector mistakes. In such runs, consensus is achieved within O(δ) time and with 4n messages, where δ is the maximum message delay and n is the number of processes in the system.
Received: May 1998 / Accepted: November 1999
19.
Deepak Kapur Mahadevan Subramaniam 《International Journal on Software Tools for Technology Transfer (STTT)》2000,3(1):32-65
We show that existing theorem proving technology can be used effectively for mechanically verifying a family of arithmetic
circuits. A theorem prover implementing (i) a decision procedure for quantifier-free Presburger arithmetic with uninterpreted function symbols; (ii) conditional rewriting; (iii) heuristics for carefully selecting induction schemes from terminating recursive function definitions; and (iv) well-integrated backtracking, can automatically verify number-theoretic properties
of parameterized and generic adders, multipliers and division circuits. This is illustrated using our theorem prover Rewrite Rule Laboratory (RRL). To our knowledge, this is the first such demonstration of the capabilities of a theorem prover mechanizing induction.
The above features of RRL are briefly discussed using illustrations from the verification of adder, multiplier and division
circuits. Extensions to the prover likely to make it even more effective for hardware verification are discussed. Furthermore,
it is believed that these results are scalable, and the proposed approach is likely to be effective for other arithmetic circuits
as well.
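The number-theoretic property established for the adder family can be stated executably. A sketch of a parameterized ripple-carry adder, checked exhaustively at one small width (the prover's contribution is proving the property by induction for all widths, which exhaustive testing cannot do):

```python
def ripple_adder(a_bits, b_bits):
    """Ripple-carry adder (sketch): full adders chained by carry.
    Bits are LSB-first; returns (sum bits, carry out)."""
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                    # full-adder sum
        carry = (a & b) | (carry & (a ^ b))          # full-adder carry
    return out, carry

def to_bits(x, n):
    return [(x >> i) & 1 for i in range(n)]

# Correctness property: the circuit computes integer addition.
n = 4
for a in range(2 ** n):
    for b in range(2 ** n):
        bits, c = ripple_adder(to_bits(a, n), to_bits(b, n))
        value = sum(s << i for i, s in enumerate(bits)) + (c << n)
        assert value == a + b
```

An induction scheme derived from the recursive structure of the carry chain is exactly the kind of scheme the prover's heuristics select automatically.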
20.
Location is one of the most important elements of context in ubiquitous computing. In this paper we describe a location model, a spatial-aware communication model and an implementation of the models that exploit location for processing and communicating context. The location model presented describes a location
tree, which contains human-readable semantic and geometric information about an organisation and a structure to describe the
current location of an object or a context. The proposed system is dedicated to work not only on more powerful devices like
handhelds, but also on small computer systems that are embedded into everyday artefacts (making them digital artefacts). Model and design decisions were made on the basis of experiences from three prototype setups with several applications,
which we built from 1998 to 2002. While running these prototypes we collected experiences from designers, implementers and users and formulated them as guidelines in this paper. All the prototype applications heavily use location information for providing their functionality. We found
that location is not only of use as information for the application but also important for communicating context. In this
paper we introduce the concept of spatial-aware communication where data is communicated based on the relative location of
digital artefacts rather than on their identity.
Correspondence to: Michael Beigl, Telecooperation Office (TecO), University of Karlsruhe, Vincenz-Prießnitz-Str. 1, D-76131 Karlsruhe, Germany.
Email: michael@teco.edu
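The location tree described above can be sketched as a simple containment hierarchy (all names are illustrative, not the paper's model): each node carries a human-readable name, and spatial containment, the basis for spatial-aware communication between nearby digital artefacts, is answered by walking toward the root.

```python
class Location:
    """Node of a location tree (sketch): semantic name plus parent link;
    containment queries walk up toward the root."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def path(self):
        node, p = self, []
        while node:
            p.append(node.name)
            node = node.parent
        return list(reversed(p))          # root-to-leaf, human-readable

    def is_in(self, other):
        node = self
        while node:
            if node is other:
                return True               # `other` is an ancestor (or self)
            node = node.parent
        return False

campus = Location('campus')
building = Location('building-1', campus)
room = Location('room-101', building)
assert room.path() == ['campus', 'building-1', 'room-101']
assert room.is_in(building) and not building.is_in(room)
```

Spatial-aware communication then addresses messages by such relative positions ("everything in building-1") rather than by artefact identity.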