20 similar documents found (search time: 828 ms)
1.
Twan Basten Thomas Kunz James P. Black Michael H. Coffin David J. Taylor 《Distributed Computing》1997,11(1):21-39
An important problem in analyzing distributed computations is the sheer amount of information involved. In event-based models, even for
simple applications, the number of events is large and the causal structure is complex. Event abstraction can be used to reduce
the apparent complexity of a distributed computation. This paper discusses one important aspect of event abstraction: causality
among abstract events. Following Lamport [24], two causality relations are defined on abstract events, called weak and strong
precedence. A general theoretical framework based on logical vector time is developed in which several meaningful timestamps
for abstract events are derived. These timestamps can be used to efficiently determine causal relationships between arbitrary
abstract events. The class of convex abstract events is identified as a subclass of abstract events that is general enough to be widely applicable and restricted
enough to simplify timestamping schemes used for characterizing weak precedence. We explain why such a simplification seems
not possible for strong precedence.
Received: February 1994 / Accepted: July 1997
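The weak-precedence relation described above can be illustrated with logical vector time. A minimal sketch, assuming abstract events are represented as sets of vector timestamps of their primitive events and that "A weakly precedes B" means some event of A happens-before some event of B (the representation is illustrative, not the paper's exact framework):

```python
# Vector timestamps for primitive events; an abstract event is a list of them.

def happens_before(u, v):
    # Standard vector-clock order: u < v componentwise-leq and u != v.
    return all(a <= b for a, b in zip(u, v)) and u != v

def weakly_precedes(A, B):
    # Weak precedence: some primitive event of A happens-before some event of B.
    return any(happens_before(u, v) for u in A for v in B)

# Two abstract events on a 2-process computation:
A = [(1, 0), (2, 0)]   # two events of process 0
B = [(2, 1)]           # an event of process 1 causally after (2, 0)
result = weakly_precedes(A, B)
```

The timestamps the paper derives let this check run on a compact summary of each abstract event instead of iterating over all primitive-event pairs as the brute-force sketch does.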
2.
We present a shared memory algorithm that allows a set of f+1 processes to wait-free “simulate” a larger system of n processes, that may also exhibit up to f stopping failures.
Applying this simulation algorithm to the k-set-agreement problem enables conversion of an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [5,18,26], this transformation implies that there is no
k-fault-tolerant solution to the n-process k-set-agreement problem, for any n.
More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. This paper defines these notions and gives examples of their application to fundamental distributed
computing problems.
The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity,
expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include
a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy.
The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write
shared memory systems is also presented.
Received: February 2001 / Accepted: February 2001
3.
Summary. This work considers the problem of performing t tasks in a distributed system of p fault-prone processors. This problem, called do-all herein, was introduced by Dwork, Halpern and Waarts. The solutions presented here are for the model of computation that abstracts
a synchronous message-passing distributed system with processor stop-failures and restarts. We present two new algorithms
based on a new aggressive coordination paradigm by which multiple coordinators may be active as the result of failures. The
first algorithm is tolerant of stop-failures and does not allow restarts. Its available processor steps (work) complexity is and its message complexity is . Unlike prior solutions, our algorithm uses redundant broadcasts when encountering failures and, for p = t and large f, it achieves better work complexity. This algorithm is used as the basis for another algorithm that tolerates stop-failures
and restarts. This new algorithm is the first solution for the do-all problem that efficiently deals with processor restarts. Its available processor steps is , and its message complexity is , where f is the total number of failures.
Received: October 1998 / Accepted: September 2000
4.
Bogdan S. Chlebus Leszek Gasieniec Alan Gibbons Andrzej Pelc Wojciech Rytter 《Distributed Computing》2002,15(1):27-38
We consider the problem of distributed deterministic broadcasting in radio networks of unknown topology and size. The network
is synchronous. If a node u can be reached from two nodes which send messages in the same round, none of the messages is received by u. Such messages block each other and node u either hears the noise of interference of messages, enabling it to detect a collision, or does not hear anything at all, depending on the model. We assume that nodes know neither the topology nor the size of
the network, nor even their immediate neighborhood. The initial knowledge of every node is limited to its own label. Such
networks are called ad hoc multi-hop networks. We study the time of deterministic broadcasting under this scenario.
For the model without collision detection, we develop a linear-time broadcasting algorithm for symmetric graphs, which is
optimal, and an algorithm for arbitrary n-node graphs, working in time . Next we show that broadcasting with acknowledgement is not possible in this model at all.
For the model with collision detection, we develop efficient algorithms for broadcasting and for acknowledged broadcasting
in strongly connected graphs.
Received: January 2000 / Accepted: June 2001
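A useful baseline for the model without collision detection is the folklore Round-Robin schedule: in round r, only the node labelled r mod n transmits, so no two messages ever collide. The sketch below simulates it on a known graph purely to illustrate the collision-free idea; it is not the paper's (faster) algorithm, and the nodes' labels and the graph are made-up inputs.

```python
def round_robin_broadcast(adj, source):
    # adj: dict label -> list of neighbour labels; nodes labelled 0..n-1.
    n = len(adj)
    informed = {source}
    rounds = 0
    while len(informed) < n:
        transmitter = rounds % n       # the only node allowed to transmit
        rounds += 1
        if transmitter in informed:
            informed.update(adj[transmitter])  # sole transmitter: always heard
        if rounds > n * n:             # safety stop for disconnected graphs
            break
    return rounds

# Path 0-1-2: node 0 informs 1 in round 1, node 1 informs 2 in round 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
rounds_used = round_robin_broadcast(adj, 0)
```

Because only one node speaks per round, this scheme needs up to n rounds to advance the message by a single hop, which is exactly the inefficiency the paper's algorithms improve on.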
5.
Summary. In this paper, we propose a self-stabilizing K-clock protocol for unidirectional rings with odd size, where and m is any positive integer. Besides the variable for maintaining the clock, the proposed protocol only requires one additional
bit. The worst-case stabilizing time is , where n is the ring size.
Received: July 1998 / Accepted: January 1999
6.
Boris Chidlovskii Uwe M. Borghoff 《The VLDB Journal The International Journal on Very Large Data Bases》2000,9(1):2-17
Abstract. In meta-searchers accessing distributed Web-based information repositories, performance is a major issue. Efficient query
processing requires an appropriate caching mechanism. Unfortunately, standard page-based as well as tuple-based caching mechanisms
designed for conventional databases are not efficient on the Web, where keyword-based querying is often the only way to retrieve
data. In this work, we study the problem of semantic caching of Web queries and develop a caching mechanism for conjunctive
Web queries based on signature files. Our algorithms handle both semantic containment and intersection relations between a query and the corresponding cache items. We also develop a cache replacement strategy for situations in which cached items differ in size and in their contribution to partial query answers. We report results of experiments and show how the caching mechanism is realized in the
Knowledge Broker system.
Received June 15, 1999 / Accepted December 24, 1999
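The signature-file idea behind such a cache can be sketched as follows. Each keyword superimposes a few hash-selected bits into a fixed-width signature; a bitwise subset test between signatures gives a cheap *candidate* containment check that must still be verified against the actual keyword sets. The hash, signature width, and bits-per-word below are illustrative choices, not the paper's scheme:

```python
SIG_BITS = 64
BITS_PER_WORD = 3

def word_signature(word):
    # Superimpose a few hash-derived bits for one keyword.
    h = hash(word)
    sig = 0
    for i in range(BITS_PER_WORD):
        sig |= 1 << ((h >> (i * 8)) % SIG_BITS)
    return sig

def query_signature(keywords):
    sig = 0
    for w in keywords:
        sig |= word_signature(w)
    return sig

def candidate_containment(cached_sig, new_sig):
    # A conjunctive query with MORE keywords is MORE restrictive, so a new
    # query's answer is contained in a cached one when the cached keyword set
    # is a subset of the new one; at the signature level: cached bits ⊆ new bits.
    return cached_sig & new_sig == cached_sig

cached = query_signature(["distributed", "caching"])
newer = query_signature(["distributed", "caching", "web"])
```

Signature matches can produce false positives (hash collisions), never false negatives, which is why they work well as a pre-filter over many cached queries.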
7.
Summary. The purpose of compact routing is to provide a labeling of the nodes of a network and a way to encode the routing tables, so that routing can be performed
efficiently (e.g., on shortest paths) whilst keeping the memory-space required to store the routing tables as small as possible.
In this paper, we answer a long-standing conjecture by showing that compact routing may also assist in the performance of
distributed computations. In particular, we show that a network supporting a shortest path interval routing scheme allows broadcasting with a message-complexity of O(n), where n is the number of nodes of the network. As a consequence, we prove that O(n) messages suffice to solve leader-election for any graph labeled by a shortest path interval routing scheme, improving the
previous known bound of O(m+n). A general consequence of our result is that a shortest path interval routing scheme contains ample structural information to avoid developing ad-hoc or network-specific solutions for basic problems that distributed systems must handle repeatedly.
Received: December 2000 / Accepted: July 2001
8.
Pierre Fraigniaud Andrzej Pelc David Peleg Stéphane Pérennes 《Distributed Computing》2001,14(3):163-183
We consider the task of assigning distinct labels to nodes of an unknown anonymous network in a distributed manner. A priori, nodes do not have any identities, except for one distinguished node, called the source, and do not know the topology or
the size of the network. They execute identical algorithms, apart from the source which plays the role of a leader and starts
the labeling process. Our goal is to assign short labels, as fast as possible. The quality of a labeling algorithm is measured by the range from which the algorithm picks the labels, or alternatively, the length
of the assigned labels. Natural efficiency measures are the time, i.e., the number of rounds required for the label assignment, and the message and bit complexities
of the label assignment protocol, i.e., the total number of messages (resp., bits) circulating in the network. We present
label assignment algorithms whose time and message complexity are asymptotically optimal and which assign short labels. On
the other hand, we establish inherent trade-offs between quality and efficiency for labeling algorithms.
Received: July 2000 / Accepted: February 2001
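One way to see why labels from the optimal range 1..n are achievable at all: the source can launch a depth-first token that hands out consecutive labels. The sketch below is a sequential simulation of that idea only; the paper's protocols are distributed and additionally optimize time, message, and bit complexity. The graph and node names are made up for illustration.

```python
def dfs_label(adj, source):
    # adj: dict node -> list of neighbours; source plays the leader role.
    labels = {}
    counter = [0]

    def visit(u):
        counter[0] += 1
        labels[u] = counter[0]      # assign the next label in 1..n
        for v in adj[u]:
            if v not in labels:
                visit(v)            # pass the token on to an unlabelled neighbour

    visit(source)
    return labels

adj = {"s": ["a", "b"], "a": ["s", "b"], "b": ["s", "a"]}
labels = dfs_label(adj, "s")
```

The token visits each edge a constant number of times, hinting at the message-complexity side of the trade-off the abstract mentions.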
9.
Detection of global predicates: Techniques and their limitations (total citations: 1, self-citations: 0, citations by others: 1)
Summary. We show that the problem of predicate detection in distributed systems is NP-complete. In the past, efficient algorithms
have been developed for special classes of predicates such as stable predicates, observer independent predicates, and conjunctive
predicates. We introduce a class of predicates, semi-linear predicates, which properly contains all of the above classes. We first discuss stable, observer independent and semi-linear classes
of predicates and their relationships with each other. We also study closure properties of these classes with respect to conjunction
and disjunction. Finally, we discuss algorithms for detection of predicates in these classes. We provide a non-deterministic
detection algorithm for each class of predicate. We show that each class can be equivalently characterized by the degree of
non-determinism present in the algorithm. Stable predicates are defined as those that can be detected by an algorithm with
the most non-determinism. All other classes can be derived by appropriately constraining the non-determinism in this algorithm.
10.
Yuh-Jzer Joung 《Distributed Computing》2002,15(3):155-175
Summary. Group mutual exclusion occurs naturally in situations where a resource can be shared by processes of the same group, but
not by processes of different groups. For example, suppose data is stored in a CD-jukebox. Then when a disc is loaded for
access, users that need data on the disc can concurrently access the disc, while users that need data on a different disc
have to wait until the current disc is unloaded.
The design issues for group mutual exclusion have been modeled as the Congenial Talking Philosophers problem, and solutions for shared-memory models have been proposed [12,14]. As in ordinary mutual exclusion and many other
problems in distributed systems, however, techniques developed for shared memory do not necessarily apply to message passing
(and vice versa). So in this paper we investigate solutions for Congenial Talking Philosophers in computer networks where
processes communicate by asynchronous message passing. We first present a solution that is a straightforward adaptation from
Ricart and Agrawala's algorithm for ordinary mutual exclusion. Then we show that the simple modification suffers a severe
performance degradation that could cause the system to behave as though only one process of a group can be in the critical
section at a time. We then present a more efficient and highly concurrent distributed algorithm for the problem, the first
such solution in computer networks.
Received: August 2000 / Accepted: November 2001
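The safety condition of group mutual exclusion (same-group processes may hold the resource concurrently; different groups are excluded) can be captured by a toy centralized lock. This sketch only illustrates the semantics; the paper's contribution is a distributed, message-passing algorithm, which this is not.

```python
import threading

class GroupLock:
    """Naive centralized group lock: concurrent entry for one group at a time."""

    def __init__(self):
        self._cond = threading.Condition()
        self._group = None   # group currently holding the resource (the "disc")
        self._count = 0      # number of processes currently inside

    def enter(self, group):
        with self._cond:
            # Wait until the resource is free or held by our own group.
            while self._group not in (None, group):
                self._cond.wait()
            self._group = group
            self._count += 1

    def leave(self):
        with self._cond:
            self._count -= 1
            if self._count == 0:
                self._group = None      # "unload the disc"
                self._cond.notify_all()
```

In the CD-jukebox reading, `group` is the disc a user needs: two users of disc A enter together, while a user of disc B waits until the last A-user leaves. The performance problem discussed above is precisely that a naive distributed adaptation can degenerate to admitting only one process of a group at a time.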
11.
Summary. When designing distributed systems, one is faced with the problem of verifying a refinement between two specifications, given
at different levels of abstraction. Suggested verification techniques in the literature include refinement mappings and various
forms of simulation. We present a verification method, in which refinement between two systems is proven by constructing a
transducer that inputs a computation of a concrete system and outputs a matching computation of the abstract system. The transducer
uses a FIFO queue that holds segments of the concrete computation that have not been matched yet. This allows a finite delay between
the occurrence of a concrete event and the determination of the corresponding abstract event. This delay often makes the use
of prophecy variables or backward simulation unnecessary.
An important generalization of the method is to prove refinement modulo some transformation on the observed sequences of events.
The method is adapted by replacing the FIFO queue by a component that allows the appropriate transformation on sequences of events. A particular case is partial-order refinement, i.e., refinement that preserves only a subset of the orderings between events of a system. Examples are sequential consistency
and serializability. The case of sequential consistency is illustrated on a proof of sequential consistency of a cache protocol.
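The transducer idea can be sketched in a few lines: concrete events are appended to a FIFO queue, and whenever the queue's head can be matched, an abstract event is emitted; unmatched segments simply wait, giving the finite delay described above. The match rule below (a 'send'/'ack' pair collapsing to one abstract 'msg') is a made-up illustration, not the paper's construction.

```python
from collections import deque

def transduce(concrete, match):
    # match(queue) consumes a matched prefix of the queue and returns the
    # corresponding abstract event, or returns None if the head cannot be
    # matched yet.
    queue, abstract = deque(), []
    for event in concrete:
        queue.append(event)          # buffer the not-yet-matched segment
        a = match(queue)
        while a is not None:         # emit as many abstract events as possible
            abstract.append(a)
            a = match(queue)
    return abstract

def pair_match(queue):
    # Illustrative rule: a concrete send followed by its ack is one abstract msg.
    if len(queue) >= 2 and queue[0] == "send" and queue[1] == "ack":
        queue.popleft()
        queue.popleft()
        return "msg"
    return None

trace = transduce(["send", "ack", "send", "ack"], pair_match)
```

Because the abstract event is determined only after the 'ack' arrives, no prophecy variable is needed: the queue absorbs the delay.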
12.
13.
Summary. In a shared-memory distributed system, n independent asynchronous processes communicate by reading and writing to shared variables. An algorithm is adaptive (to total contention) if its step complexity depends only on the actual number, k, of active processes in the execution; this number is unknown in advance and may change in different executions of the algorithm.
Adaptive algorithms are inherently wait-free, providing fault-tolerance in the presence of an arbitrary number of crash failures and different processes' speed.
A wait-free adaptive collect algorithm with O(k) step complexity is presented, together with its applications in wait-free adaptive algorithms for atomic snapshots, immediate
snapshots and renaming.
Received: August 1999 / Accepted: August 2001
14.
This paper describes a parameterized distributed algorithm applicable to any directed graph topology. The function parameter
of our algorithm is instantiated to produce distributed algorithms for both fundamental and high level applications, such
as shortest path calculus and depth-first-search tree construction. Due to fault resilience properties of our algorithm, the
resulting protocols are self-stabilizing at no additional cost. Self-stabilizing protocols can resist transient failures and
guarantee system recovery in a finite time. Since the condition on the function parameter (being a strictly idempotent r-operator) permits a broad range of applications to be implemented, the solution presented in our paper can be useful for
a large class of distributed systems.
Received: August 1999 / Accepted: January 2001
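The flavor of a parameterized algorithm instantiated per application can be sketched with a generic relaxation loop whose operator is a parameter: plugging in min-plus yields single-source shortest paths. This is a sequential sketch under assumed names, not the paper's distributed self-stabilizing protocol or its r-operator formalism.

```python
import math

def parameterized_fixpoint(edges, n, source, combine, better, bottom, unit):
    # Generic relaxation: repeatedly improve val[v] via combine(val[u], w)
    # until a fixpoint is reached. The operator pair (combine, better) is the
    # function parameter that selects the application.
    val = [bottom] * n
    val[source] = unit
    changed = True
    while changed:
        changed = False
        for u, v, w in edges:
            cand = combine(val[u], w)
            if better(cand, val[v]):
                val[v] = cand
                changed = True
    return val

# Instantiation: shortest paths (min-plus), on a small made-up graph.
edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
dist = parameterized_fixpoint(
    edges, 4, 0,
    combine=lambda d, w: d + w,   # extend a path by one edge
    better=lambda a, b: a < b,    # keep the strictly smaller distance
    bottom=math.inf, unit=0,
)
```

Swapping the operator pair (e.g., tracking parent pointers) would instead yield a depth-first-search or spanning-tree style construction, which is the reuse the abstract advertises.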
15.
Ada Wai-chee Fu Polly Mei-shuen Chan Yin-Ling Cheung Yiu Sang Moon 《The VLDB Journal The International Journal on Very Large Data Bases》2000,9(2):154-173
Abstract. For some multimedia applications, it has been found that domain objects cannot be represented as feature vectors in a multidimensional
space. Instead, pair-wise distances between data objects are the only input. To support content-based retrieval, one approach
maps each object to a k-dimensional (k-d) point and tries to preserve the distances among the points. Then, existing spatial access index methods such as the R-trees
and KD-trees can support fast searching on the resulting k-d points. However, information loss is inevitable with such an approach since the distances between data objects can only
be preserved to a certain extent. Here we investigate the use of a distance-based indexing method. In particular, we apply
the vantage point tree (vp-tree) method. There are two important problems for the vp-tree method that warrant further investigation,
the n-nearest neighbors search and the updating mechanisms. We study an n-nearest neighbors search algorithm for the vp-tree, which is shown by experiments to scale up well with the size of the dataset
and the desired number of nearest neighbors, n. Experiments also show that the searching in the vp-tree is more efficient than that for the -tree and the M-tree. Next, we propose solutions for the update problem for the vp-tree, and show by experiments that the algorithms are
efficient and effective. Finally, we investigate the problem of selecting vantage points, propose a few alternative methods,
and study their impact on the number of distance computations.
Received June 9, 1998 / Accepted January 31, 2000
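A vp-tree needs only pairwise distances, which is why it fits the setting above. The sketch below builds a tree by splitting at the median distance from a vantage point and answers a 1-nearest-neighbour query with the standard ball-pruning rule; the first-element vantage-point choice and median split are common simplifications, not necessarily the paper's methods.

```python
class VPNode:
    def __init__(self, point, radius, inside, outside):
        self.point = point       # the vantage point
        self.radius = radius     # median distance: inside < radius <= outside
        self.inside = inside
        self.outside = outside

def build(points, dist):
    if not points:
        return None
    vp, rest = points[0], points[1:]   # simplistic vantage-point choice
    if not rest:
        return VPNode(vp, 0.0, None, None)
    ds = sorted(dist(vp, p) for p in rest)
    mu = ds[len(ds) // 2]
    inside = [p for p in rest if dist(vp, p) < mu]
    outside = [p for p in rest if dist(vp, p) >= mu]
    return VPNode(vp, mu, build(inside, dist), build(outside, dist))

def nearest(node, q, dist, best=None):
    if node is None:
        return best
    d = dist(node.point, q)
    if best is None or d < best[0]:
        best = (d, node.point)
    # Descend into the side containing q first; visit the other side only if
    # the query ball can cross the splitting sphere.
    near, far = ((node.inside, node.outside) if d < node.radius
                 else (node.outside, node.inside))
    best = nearest(near, q, dist, best)
    if abs(d - node.radius) < best[0]:
        best = nearest(far, q, dist, best)
    return best

d = lambda a, b: abs(a - b)          # any metric works; 1-D here for brevity
tree = build([1, 5, 9, 13], d)
```

Extending this to n-nearest neighbours (a bounded priority queue instead of `best`) and supporting updates are exactly the problems the paper investigates.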
16.
Alex Aizman 《International Journal on Software Tools for Technology Transfer (STTT)》2001,3(4):456-468
Advances in technology raise expectations. As far as software engineering is concerned, the common expectation is that coding
and deploying applications is going to be simple. It seems, though, that software engineering is not getting easier, and the
complexity moves to the application domain. One source of complexity is application concurrency. It is not an uncommon
development practice that concurrency and transaction management in multi-user, multi-threaded, event-driven applications
are postponed until after most of the required functionality is implemented. This situation has various explanations. On the
one hand, business logic may require access and modification of large sets of inter-connected application objects. On the
other, testing and stress-testing of this logic becomes possible only at advanced stages of product development. At these
stages, increasing lock granularities may appear to be less "expensive" than debugging race conditions and deadlocks. Coarse-grained
locking has, of course, an adverse effect on application scalability.
Declaring rules of concurrency outside of the application may solve part of the problem. This paper presents an approach allowing
developers to define concurrency in application-specific terms, design it in the early stages of development, and implement
it using a documented API of the concurrency engine (CE). Simple notation makes it possible to record concurrency specifications in terms of application operations, relationships between application resources, and synchronization conflicts between operations. These concepts are demonstrated on examples. The final sections include the CE UML diagram, notes on API usage, and performance
benchmarks.
Published online: 25 July 2001
17.
Asynchronous group mutual exclusion (total citations: 1, self-citations: 1, citations by others: 0)
Yuh-Jzer Joung 《Distributed Computing》2000,13(4):189-206
Abstract. Mutual exclusion and concurrency are two fundamental and essentially opposite features in distributed systems. However, in
some applications such as Computer Supported Cooperative Work (CSCW) we have found it necessary to impose mutual exclusion
on different groups of processes in accessing a resource, while allowing processes of the same group to share the resource.
To our knowledge, no such design issue has been previously raised in the literature. In this paper we address this issue by
presenting a new problem, called Congenial Talking Philosophers, to model group mutual exclusion. We also propose several criteria to evaluate solutions of the problem and to measure their
performance. Finally, we provide an efficient and highly concurrent distributed algorithm for the problem in a shared-memory
model where processes communicate by reading from and writing to shared variables. The distributed algorithm meets the proposed
criteria, and has performance similar to some naive but centralized solutions to the problem.
Received: November 1998 / Accepted: April 2000
18.
Jan Friso Groote Wim H. Hesselink Sjouke Mauw Rogier Vermeulen 《Distributed Computing》2001,14(2):75-81
Summary. The problem of using P processes to write a given value to all positions of a shared array of size N is called the Write-All problem. We present and analyze an asynchronous algorithm with work complexity , where (assuming and ). Our algorithm is a generalization of the naive two-processor algorithm where the two processes each start at one side of
the array and walk towards each other until they collide.
Received: October 1999 / Accepted: September 2000
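The naive two-process strategy that the abstract generalizes is easy to state in code: one process writes left to right, the other right to left, and both stop when they collide. Asynchrony is modelled here by an arbitrary interleaving schedule; this is only the base case, not the paper's generalized algorithm.

```python
from itertools import cycle

def write_all_two(a, value, schedule):
    # schedule yields 0 or 1: which of the two processes takes the next step.
    i, j = 0, len(a) - 1
    for p in schedule:
        if i > j:                 # the processes collided: array fully written
            break
        if p == 0:
            a[i] = value          # left process walks right
            i += 1
        else:
            a[j] = value          # right process walks left
            j -= 1
    return a

array = [0] * 7
write_all_two(array, 1, cycle([0, 1]))   # strict alternation schedule
```

Running it with the schedule `cycle([0])` (the right process never gets a step, e.g., it crashed) still covers the whole array, which is the fault-tolerance that makes the strategy interesting.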
19.
D. Laurent J. Lechtenbörger N. Spyratos G. Vossen 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(4):295-315
Views over databases have regained attention in the context of data warehouses, which are seen as materialized views. In this setting, efficient view maintenance is an important issue, for which the notion of self-maintainability has been identified as desirable. In this paper, we extend the concept of self-maintainability to (query and update) independence within a formal framework, where independence with respect to arbitrary given sets of queries and updates over the sources
can be guaranteed. To this end we establish an intuitively appealing connection between warehouse independence and view complements. Moreover, we study special kinds of complements, namely monotonic complements, and show how to compute minimal ones in the presence of keys and foreign keys in the underlying databases. Taking advantage
of these complements, an algorithmic approach is proposed for the specification of independent warehouses with respect to
given sets of queries and updates.
Received: 21 November 2000 / Accepted: 1 May 2001 Published online: 6 September 2001
20.
R. Braumandl M. Keidl A. Kemper D. Kossmann A. Kreutz S. Seltzsam K. Stocker 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(1):48-71
We present the design of ObjectGlobe, a distributed and open query processor for Internet data sources. Today, data is published
on the Internet via Web servers which have, if at all, very localized query processing capabilities. The goal of the ObjectGlobe
project is to establish an open marketplace in which data and query processing capabilities can be distributed and used by any kind of Internet application. Furthermore, ObjectGlobe integrates cycle providers (i.e., machines) which carry out query processing operators. The overall picture is to make it possible to execute a query
with – in principle – unrelated query operators, cycle providers, and data sources. Such an infrastructure can serve as enabling
technology for scalable e-commerce applications, e.g., B2B and B2C market places, to be able to integrate data and data processing
operations of a large number of participants. One of the main challenges in the design of such an open system is to ensure
privacy and security. We discuss the ObjectGlobe security requirements, show how basic components such as the optimizer and
runtime system need to be extended, and present the results of performance experiments that assess the additional cost for
secure distributed query processing. Another challenge is quality of service management so that users can constrain the costs
and running times of their queries.
Received: 30 October 2000 / Accepted: 14 March 2001 Published online: 7 June 2001