Similar Documents
20 similar documents found (search time: 93 ms)
1.
Consider a random deal of cards to the players and to a computationally unlimited eavesdropper; all players wish to share a one-bit secret key that is information-theoretically secure from the eavesdropper. This can be done by a protocol that makes several pairs of players share one-bit secret keys so that these pairs form a tree over the players. In this paper we obtain a necessary and sufficient condition on the number of cards for the existence of such a protocol. Published online: 29 January 2002

2.
Summary. Quorum systems have been used to implement many coordination problems in distributed systems. In this paper, we study the cost of accessing quorums in asynchronous systems. We formally define the asynchronous access cost of quorum systems and argue that the asynchronous access cost, and not the size of a quorum, is the right measure of the message complexity of protocols using quorums in asynchronous systems. We show that previous quorum systems proposed in the literature have a very high asynchronous access cost. We propose a reformulation of the definition of Byzantine quorum systems that captures the requirement for non-blocking access to quorums in asynchronous systems. We present new Byzantine quorum systems with low asynchronous access cost whose other performance parameters match those of the best Byzantine quorum systems proposed in the literature. In particular, we present a construction for the disjoint failure pattern that outperforms previously proposed systems for that pattern. Received: September 1999 / Accepted: September 2000
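A minimal sketch of why asynchronous access cost, rather than quorum size, governs message complexity: a client cannot tell a slow server from a crashed one, so it probes every server and waits until the responders contain a quorum. The server count, the use of majority quorums, and the driver below are assumptions made for illustration, not the Byzantine constructions from the paper.

```python
# Minimal sketch (assumed majority quorums over N servers, not the paper's
# Byzantine constructions): one non-blocking quorum access in an asynchronous
# system.  The client sends a probe to all N servers and waits for any QUORUM
# of them to answer, so the message cost is tied to N, not to quorum size alone.

import random

N = 7                    # number of servers (illustrative value)
QUORUM = N // 2 + 1      # majority quorum size

def access_quorum(alive):
    """Probe every server and return a quorum of responders plus the message count."""
    messages_sent = N                       # one probe per server, sent up front
    responders = sorted(s for s in range(N) if s in alive)
    if len(responders) < QUORUM:
        raise RuntimeError("no live quorum available")
    return responders[:QUORUM], messages_sent

alive = set(random.sample(range(N), QUORUM + 1))   # a few servers are down
quorum, cost = access_quorum(alive)
print(f"quorum {quorum} obtained after sending {cost} probe messages")
```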

3.
Summary. This paper formulates necessary and sufficient conditions on the information required for enforcing causal ordering in a distributed system with asynchronous communication. The paper then presents an algorithm for enforcing causal message ordering. The algorithm allows a process to multicast to arbitrary and dynamically changing process groups. We show that the algorithm is optimal in the space complexity of the overhead of control information in both messages and message logs. The algorithm achieves optimality by transmitting the bare minimum causal dependency information specified by the necessity conditions, and by using an encoding scheme to represent and transmit this information. We show that, in general, the space complexity of causal message ordering in an asynchronous system is O(n²), where n is the number of nodes in the system. Although the upper bound on the space complexity of the control-information overhead in the algorithm is O(n²), the overhead is likely to be much smaller on average, and is always the least possible. Received: January 1996 / Accepted: February 1998
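For intuition, here is the textbook vector-clock causal-delivery test. It transmits a full n-entry vector on every message rather than the optimal encoding developed in the paper, and the process count and message contents are invented for the example.

```python
# Illustrative sketch only: classic vector-clock causal delivery, not the
# paper's optimal algorithm.  A message from sender s carries the sender's
# vector clock V; the receiver delivers it once
#   V[s] == delivered[s] + 1  and  V[k] <= delivered[k] for all k != s,
# buffering it otherwise.

N = 3  # number of processes (assumed)

class CausalReceiver:
    def __init__(self):
        self.delivered = [0] * N      # messages delivered so far, per sender
        self.buffer = []              # messages whose causal past is still missing

    def can_deliver(self, sender, V):
        return V[sender] == self.delivered[sender] + 1 and all(
            V[k] <= self.delivered[k] for k in range(N) if k != sender)

    def receive(self, sender, V, payload):
        self.buffer.append((sender, V, payload))
        progress = True
        while progress:               # deliver everything whose causal past is satisfied
            progress = False
            for msg in list(self.buffer):
                s, vc, data = msg
                if self.can_deliver(s, vc):
                    self.buffer.remove(msg)
                    self.delivered[s] += 1
                    print("delivered", data)
                    progress = True

r = CausalReceiver()
r.receive(1, [0, 2, 0], "m2 from p1")   # arrives early: buffered
r.receive(1, [0, 1, 0], "m1 from p1")   # fills the gap: both get delivered
```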

4.
Protocol synthesis is used to derive a protocol specification, that is, the specification of a set of application components running in a distributed system of networked computers, from a specification of the services (called the service specification) to be provided by the distributed application to its users. Protocol synthesis reduces design costs and errors by specifying the message exchanges between the application components, as defined by the protocol specification. In general, maintaining such a distributed application involves applying frequent minor modifications to the service specification due to changes in the user requirements. Deriving the protocol specification anew after each modification using existing synthesis methods is expensive and time-consuming. Moreover, we cannot identify what changes we should make to the protocol specification in correspondence to the changes in the service specification. In this paper, we present a new synthesis method to re-synthesize only those parts of the protocol specification that must be modified in order to satisfy the changes in the service specification. The method consists of a set of simple rules that are applied to the protocol specification written in an extended Petri net model. An application example is given along with some experimental results. Received: July 2001 / Accepted: July 2002. Supported by International Communications Foundation (ICF), Japan, and by Communications and Information Technology Ontario (CITO) and Natural Sciences and Engineering Research Council (NSERC), Canada.

5.
We consider the task of assigning distinct labels to nodes of an unknown anonymous network in a distributed manner. A priori, nodes do not have any identities, except for one distinguished node, called the source, and do not know the topology or the size of the network. They execute identical algorithms, apart from the source which plays the role of a leader and starts the labeling process. Our goal is to assign short labels, as fast as possible. The quality of a labeling algorithm is measured by the range from which the algorithm picks the labels, or alternatively, the length of the assigned labels. Natural efficiency measures are the time, i.e., the number of rounds required for the label assignment, and the message and bit complexities of the label assignment protocol, i.e., the total number of messages (resp., bits) circulating in the network. We present label assignment algorithms whose time and message complexity are asymptotically optimal and which assign short labels. On the other hand, we establish inherent trade-offs between quality and efficiency for labeling algorithms. Received: July 2000 / Accepted: February 2001
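As a toy illustration of label assignment started by a single source (not the algorithms from the paper), the sketch below floods a small example network from the source and hands out consecutive labels 1..n, so the label range matches the number of nodes. The adjacency list and the centralised driver are stand-ins for the real distributed message-passing execution.

```python
# Toy illustration, not the paper's algorithm: the source floods the network
# and assigns consecutive labels in breadth-first order, so n nodes receive
# labels from the range 1..n.

from collections import deque

# adjacency list of a small anonymous network; node 0 plays the role of the source
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

def bfs_labeling(source=0):
    labels = {source: 1}               # the source labels itself first
    queue = deque([source])
    messages = 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            messages += 1              # one message per explored edge direction
            if v not in labels:        # an unlabeled neighbour gets the next label
                labels[v] = len(labels) + 1
                queue.append(v)
    return labels, messages

labels, messages = bfs_labeling()
print("labels:", labels, "messages:", messages)
```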

6.
Causal message-logging protocols have several attractive properties: they introduce no blocking, send no additional messages over those sent by the application, and never create orphans. Causal message logging, however, does require the causal effects of the deliveries of messages to be tracked. The information concerning causality tracking is piggybacked on application messages, and the amount of such information can become large. In this paper we study the cost of tracking causality in causal message-logging protocols. One can track causality as accurately as possible, but doing so requires piggybacking a considerable amount of additional information. One can reduce the amount of piggybacked information on each message by reducing the accuracy of causality tracking. But then, causal message logging may piggyback the reduced amount of information on more messages. We specify six different methods of tracking causality, each representing a natural choice based on the specification of causal message logging. We describe how these six methods can be implemented and compare them in terms of how large a piggyback load they impose. This load depends on the application that is using causal message logging. We characterize some applications for which a given method has the smallest piggyback load, and study by simulation the size of the piggyback load for two different models of applications. Received: July 1999 / Accepted: July 2001

7.
An adaptive protocol for synchronizing media streams
Stream synchronization is widely regarded as a fundamental problem in the field of multimedia systems. Solutions to this problem can be divided into adaptive and rigid mechanisms. While rigid mechanisms are based on worst case assumptions, adaptive ones monitor the underlying network and are able to adapt themselves to changing network conditions. In this paper, we will present an adaptive stream synchronization protocol. This protocol supports any kind of distribution of the sources and sinks of the streams to be synchronized. It is based on a buffer-level control mechanism, allowing immediate corrections when the danger of a buffer overflow or underflow is recognized. Moreover, the proposed protocol is flexible enough to support a wide variety of synchronization policies, which can be dynamically changed while synchronization is in progress. Finally, the message overhead of this protocol is low, because control messages are only exchanged when network conditions change.
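To make the buffer-level idea concrete, here is a rough sketch of watermark-based playout control at a sink. The thresholds and adjustment actions are invented for illustration and are not the control law of the protocol described above.

```python
# Rough illustration (not the protocol from the paper): buffer-level control
# for stream synchronization.  The sink watches its playout buffer and reacts
# as soon as the fill level threatens to underflow or overflow, instead of
# relying on worst-case assumptions.  Thresholds are made-up example values.

LOW_WATERMARK = 2      # below this, playout risks starving (underflow)
HIGH_WATERMARK = 8     # above this, the buffer risks overflowing
TARGET = 5             # desired steady-state buffer level (in media units)

def control_action(buffer_level):
    """Decide how the sink should adjust playout for the current fill level."""
    if buffer_level < LOW_WATERMARK:
        return "slow down playout / pause briefly (underflow danger)"
    if buffer_level > HIGH_WATERMARK:
        return "speed up playout / drop a unit (overflow danger)"
    return "play at nominal rate"

# simulated fill levels as network conditions change
for level in [5, 3, 1, 4, 7, 9, 6]:
    print(f"buffer={level:2d} -> {control_action(level)}")
```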

8.
This paper reports on the mechanical verification of the IEEE 1394 root contention protocol. This is an industrial leader election protocol, in which timing parameters play an essential role. A manual verification of this protocol using I/O automata has been published in [24]. We improve the communication model presented in that paper. Using the Uppaal2k tool, we investigate the timing constraints on the parameters which are necessary and sufficient for correct protocol operation: by analyzing large numbers of protocol instances with different parameter values, we derive the required timing constraints. We explore the use of model checking in combination with stepwise abstraction. That is, we show that the implementation automaton correctly implements the specification via several intermediate automata, using Uppaal to prove the trace inclusion in each step. Published online: 18 July 2001

9.
We present the design of ObjectGlobe, a distributed and open query processor for Internet data sources. Today, data is published on the Internet via Web servers which have, if at all, very localized query processing capabilities. The goal of the ObjectGlobe project is to establish an open marketplace in which data and query processing capabilities can be distributed and used by any kind of Internet application. Furthermore, ObjectGlobe integrates cycle providers (i.e., machines) which carry out query processing operators. The overall goal is to make it possible to execute a query with, in principle, unrelated query operators, cycle providers, and data sources. Such an infrastructure can serve as enabling technology for scalable e-commerce applications, e.g., B2B and B2C marketplaces, to be able to integrate data and data processing operations of a large number of participants. One of the main challenges in the design of such an open system is to ensure privacy and security. We discuss the ObjectGlobe security requirements, show how basic components such as the optimizer and runtime system need to be extended, and present the results of performance experiments that assess the additional cost for secure distributed query processing. Another challenge is quality of service management so that users can constrain the costs and running times of their queries. Received: 30 October 2000 / Accepted: 14 March 2001 Published online: 7 June 2001

10.
Summary. We prove the existence of a “universal” synchronous self-stabilizing protocol, that is, a protocol that allows a distributed system to stabilize to a desired nonreactive behaviour (as long as a protocol stabilizing to that behaviour exists). Previous proposals required drastic increases in asymmetry and knowledge to work, whereas our protocol does not use any additional knowledge and does not require more symmetry-breaking conditions than are available; thus, it is also stabilizing with respect to dynamic changes in the topology. We prove an optimal quiescence time of n+D for a synchronous network of n processors and diameter D; the protocol can be made finite-state with a negligible loss in quiescence time. Moreover, an optimal D+1 protocol is given for the case of unique identifiers. As a consequence, we provide an effective proof technique that allows one to show whether self-stabilization to a certain behaviour is possible under a wide range of models. Received: January 1999 / Accepted: July 2001

11.
Summary. Group mutual exclusion occurs naturally in situations where a resource can be shared by processes of the same group, but not by processes of different groups. For example, suppose data is stored in a CD-jukebox. Then when a disc is loaded for access, users that need data on the disc can concurrently access the disc, while users that need data on a different disc have to wait until the current disc is unloaded. The design issues for group mutual exclusion have been modeled as the Congenial Talking Philosophers problem, and solutions for shared-memory models have been proposed [12,14]. As in ordinary mutual exclusion and many other problems in distributed systems, however, techniques developed for shared memory do not necessarily apply to message passing (and vice versa). So in this paper we investigate solutions for Congenial Talking Philosophers in computer networks where processes communicate by asynchronous message passing. We first present a solution that is a straightforward adaptation of Ricart and Agrawala's algorithm for ordinary mutual exclusion. Then we show that this simple adaptation suffers a severe performance degradation that could cause the system to behave as though only one process of a group can be in the critical section at a time. We then present a more efficient and highly concurrent distributed algorithm for the problem, the first such solution in computer networks. Received: August 2000 / Accepted: November 2001
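The sketch below illustrates only the conflict rule that distinguishes group mutual exclusion from ordinary mutual exclusion in a Ricart-Agrawala-style scheme: two requests conflict only when they name different groups. Timestamps, process ids, and the disc names are invented for the example; this is not the paper's optimized algorithm.

```python
# Simplified sketch of the group mutual exclusion conflict rule in a
# Ricart-Agrawala-style scheme (illustrative only).  Requests for the same
# group (e.g. the same disc in the CD-jukebox example) never conflict, so
# same-group processes may occupy the critical section concurrently.

def conflicts(my_request, incoming):
    """Requests are (timestamp, pid, group); only cross-group requests conflict."""
    return my_request is not None and my_request[2] != incoming[2]

def should_defer(my_request, incoming):
    """Defer the reply iff the requests conflict and mine has priority
    (smaller timestamp, ties broken by pid)."""
    return conflicts(my_request, incoming) and my_request[:2] < incoming[:2]

mine = (3, 0, "disc-A")                      # I requested disc-A at time 3
print(should_defer(mine, (5, 1, "disc-A")))  # same group: never deferred -> False
print(should_defer(mine, (5, 1, "disc-B")))  # other group, I'm earlier -> True
print(should_defer(mine, (1, 1, "disc-B")))  # other group, they're earlier -> False
```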

12.
Local model checking and protocol analysis
This paper describes a local model-checking algorithm for the alternation-free fragment of the modal mu-calculus that has been implemented in the Concurrency Factory and discusses its application to the analysis of a real-time communications protocol. The protocol considered is RETHER, a software-based, real-time Ethernet protocol developed at SUNY at Stony Brook. Its purpose is to provide guaranteed bandwidth and deterministic, periodic network access to multimedia applications over commodity Ethernet hardware. Our model-checking results show that (for a particular network configuration) RETHER makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation. Our data also indicate that, in many cases, the state-exploration overhead of the local model checker is significantly smaller than the total amount that would result from a global analysis of the protocol. In the course of specifying and verifying RETHER, we also identified an alternative design of the protocol that warranted further study due to its potentially smaller run-time overhead in servicing requests for data transmission. Again, using local model checking, we showed that this alternative design also possesses the properties of interest. This observation points out one of the often-overlooked benefits of formal verification: by forcing designers to understand their designs rigorously and abstractly, these techniques often enable the designers to uncover interesting design alternatives.

13.
This paper concerns resource allocation in distributed message-passing systems, i.e., the scheduling of accesses to exclusive system resources shared among concurrent processes. An efficient modular resource allocation algorithm is presented that uses any arbitrary resource allocation algorithm as a subroutine. It improves the performance of the subroutine by letting each process wait only for its currently conflicting processes, and therefore allows more concurrency. For appropriate choices of the subroutine, we obtain resource allocation algorithms with minimum worst-case response times. Simulation studies were also conducted; they indicate that, on average, the obtained algorithms are faster and require fewer messages than other previously known algorithms, especially when resource contention among processes is high and the average time that a process remains in the critical region is large. Received: May 1997 / Accepted: May 1998

14.
Failure detection and consensus in the crash-recovery model
Summary. We study the problems of failure detection and consensus in asynchronous systems in which processes may crash and recover, and links may lose messages. We first propose new failure detectors that are particularly suitable to the crash-recovery model. We next determine under what conditions stable storage is necessary to solve consensus in this model. Using the new failure detectors, we give two consensus algorithms that match these conditions: one requires stable storage and the other does not. Both algorithms tolerate link failures and are particularly efficient in the runs that are most likely in practice – those with no failures or failure detector mistakes. In such runs, consensus is achieved within 3δ time and with 4n messages, where δ is the maximum message delay and n is the number of processes in the system. Received: May 1998 / Accepted: November 1999
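As a loose illustration of failure detection in the crash-recovery model (not the detectors defined in the paper), the sketch below tags heartbeats with an epoch number that a process bumps on each recovery, so peers can both suspect silent processes and notice that a process has crashed and come back. The timeout value, process names, and driver are assumptions.

```python
# Illustrative only: a heartbeat-style failure detector for the crash-recovery
# model.  Each process stamps its heartbeats with an epoch number incremented
# on every recovery, letting peers distinguish "still up" from "crashed and
# came back".  This is not the exact detector defined in the paper.

import time

TIMEOUT = 2.0   # seconds without a heartbeat before a process is suspected

class FailureDetector:
    def __init__(self):
        self.last_seen = {}   # pid -> (epoch, time of last heartbeat)

    def heartbeat(self, pid, epoch, now=None):
        now = time.time() if now is None else now
        old = self.last_seen.get(pid)
        if old and epoch > old[0]:
            print(f"{pid} recovered: epoch {old[0]} -> {epoch}")
        self.last_seen[pid] = (epoch, now)

    def suspects(self, now=None):
        now = time.time() if now is None else now
        return [pid for pid, (_, t) in self.last_seen.items() if now - t > TIMEOUT]

fd = FailureDetector()
fd.heartbeat("p1", epoch=0, now=0.0)
fd.heartbeat("p2", epoch=0, now=0.0)
print(fd.suspects(now=3.0))            # ['p1', 'p2'] once the timeout elapses
fd.heartbeat("p2", epoch=1, now=3.5)   # p2 crashed and recovered with a new epoch
print(fd.suspects(now=4.0))            # only p1 is still suspected
```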

15.
Requirements specifications for high-assurance secure systems are rare in the open literature. This paper examines the development of a requirements document for a multilevel secure system that must meet stringent assurance and evaluation requirements. The system is designed to be secure, yet combines popular commercial components with specialised high-assurance ones. Functional and non-functional requirements pertinent to security are discussed. A multidimensional threat model is presented. The threat model accounts for the developmental and operational phases of system evolution and for each phase accounts for both physical and non-physical threats. We describe our team-based method for developing a requirements document and relate that process to techniques in requirements engineering. The system requirements document presented provides a calibration point for future security requirements engineering techniques intended to meet both functional and assurance goals. The views expressed in this paper are those of the authors and should not be construed to reflect those of their employers or the Department of Defense. This work was supported in part by the MSHN project of the DARPA/ITO Quorum programme and by the MYSEA project of the DARPA/ATO CHATS programme. Correspondence and offprint requests to: T. Levin, Department of Computer Science, Naval Postgraduate School, Monterey, CA 93943-5118, USA. Tel.: +1 831 656 2339; Fax: +1 831 656 2814; Email: levin@nps.navy.mil

16.
Using AVL trees for fault-tolerant group key management
In this paper we describe an efficient algorithm for the management of group keys for group communication systems. Our algorithm is based on the notion of key graphs, previously used for managing keys in large Internet-protocol multicast groups. The standard protocol requires a centralized key server that has knowledge of the full key graph. Our protocol does not delegate this role to any one process. Rather, members enlist in a collaborative effort to create the group key graph. The key graph contains n keys, of which each member learns log2 n. We show how to balance the key graph, a result that is applicable to the centralized protocol. We also show how to optimize our distributed protocol, and provide a performance study of its capabilities. Published online: 26 October 2001
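For intuition about why each member needs only about log2 n of the keys, here is a small sketch of a generic binary key tree (not the AVL-based, fault-tolerant construction from the paper): members sit at the leaves and each holds exactly the keys on its leaf-to-root path.

```python
# Small illustration of the key-graph idea (a generic complete binary key
# tree laid out as a heap, not the paper's AVL-based protocol): each member
# holds the keys on its leaf-to-root path, about log2(n) of the roughly
# 2n-1 keys in the tree; the root key is the shared group key.

import math

def path_keys(member_index, n_members):
    """Return the tree-node ids (heap layout) on a member's leaf-to-root path."""
    # leaves occupy positions n_members .. 2*n_members-1 in a complete binary heap
    node = member_index + n_members
    keys = []
    while node >= 1:
        keys.append(node)      # key stored at this tree node
        node //= 2             # move up towards the root (node 1 = group key)
    return keys

n = 8
for m in range(n):
    print(f"member {m} holds keys at nodes {path_keys(m, n)}")
print("keys per member ~", int(math.log2(n)) + 1, "; total keys ~", 2 * n - 1)
```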

17.
Handling message semantics with Generic Broadcast protocols
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,” that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic Broadcast. The paper also presents two algorithms that solve Generic Broadcast. Received: August 2000 / Accepted: August 2001
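A conceptual sketch of the conflict-relation idea follows; the read/write example relation and message format are assumptions chosen for illustration. With an empty conflict relation no ordering is imposed (Reliable Broadcast), and with a total conflict relation every pair of messages must be ordered (Atomic Broadcast).

```python
# Conceptual sketch of the conflict relation behind Generic Broadcast
# (illustrative example, not the paper's algorithms): only messages that
# conflict need to be ordered relative to each other.

def conflict(m1, m2):
    """Example relation: two operations conflict unless both are reads."""
    return not (m1["op"] == "read" and m2["op"] == "read")

def needs_ordering(new_msg, delivered):
    """A new message must be ordered only against conflicting delivered messages."""
    return [m for m in delivered if conflict(new_msg, m)]

delivered = [{"op": "read", "key": "x"}, {"op": "write", "key": "x"}]
print(needs_ordering({"op": "read", "key": "x"}, delivered))
# -> only the write conflicts; the two reads need no relative order
```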

18.
The success of the Object Management Group's General Inter-ORB Protocol (GIOP) is leading to the desire to deploy GIOP in an ever-wider range of application areas, many of which are significantly more demanding than traditional areas in terms of performance. The well-known performance limitations of present-day GIOP-based object request brokers (ORBs) are therefore increasingly being seen as a problem. To help address this problem, this paper discusses a GIOP implementation which has high performance and quality-of-service support as explicit goals. The implementation, which is embedded in a research ORB called Gopi, is modular and extensible in nature and includes novel optimization techniques which should be separately portable to other ORB environments. This paper focuses on the message protocol aspects of Gopi's GIOP implementation; higher-layer issues such as marshalling and operation demultiplexing are not covered in detail. Figures are provided which position Gopi's GIOP performance against comparable ORBs. The paper also discusses some of the design decisions that have been made in the development of the GIOP protocol in the light of our implementation experience. Received: May 2000 / Accepted: December 2000

19.
PicoDBMS: Scaling down database techniques for the smartcard
Smartcards are the most secure portable computing device today. They have been used successfully in applications involving money, and proprietary and personal data (such as banking, healthcare, insurance, etc.). As smartcards get more powerful (with 32-bit CPU and more than 1 MB of stable memory in the next versions) and become multi-application, the need for database management arises. However, smartcards have severe hardware limitations (very slow write, very little RAM, constrained stable memory, no autonomy, etc.) which make traditional database technology irrelevant. The major problem is scaling down database techniques so they perform well under these limitations. In this paper, we give an in-depth analysis of this problem and propose a PicoDBMS solution based on highly compact data structures, query execution without RAM, and specific techniques for atomicity and durability. We show the effectiveness of our techniques through performance evaluation. Received: 15 October 2000 / Accepted: 15 April 2001 Published online: 23 July 2001

20.
We give efficient algorithms for distributed computation on oriented, anonymous, asynchronous hypercubes with possibly faulty components (i.e., processors and links) and deterministic processors. Initially, the processors know only the size of the network and that they are interconnected in a hypercube topology. Faults may occur only before the start of the computation, and despite these faults the hypercube remains a connected network. However, the processors do not know where the faults are located. As a measure of complexity we use the total number of bits transmitted during the execution of the algorithm, and we concentrate on giving algorithms that minimize this number of bits. The main result of this paper is an algorithm for computing Boolean functions on anonymous hypercubes whose bit cost is a function of the number of faulty components (links plus processors), the number of links which are either faulty or non-faulty but adjacent to faulty processors, and the diameter of the hypercube with the faulty components. Received: October 1992 / Accepted: April 2001
