Similar Documents
20 similar documents found.
1.
Sanjeev, Sencun, Sushil. 《Performance Evaluation》, 2002, 49(1-4): 21-41
In this paper, we present a new scalable and reliable key distribution protocol for group key management schemes that use logical key hierarchies (LKH) for scalable group rekeying. Our protocol, called WKA-BKR, is based on two ideas, weighted key assignment and batched key retransmission, both of which exploit the special properties of LKH and the group rekey transport payload to reduce the bandwidth overhead of the reliable key delivery protocol. Using both analytic modeling and simulation, we compare the performance of WKA-BKR with that of other rekey transport protocols, including a recently proposed protocol based on proactive FEC. Our results show that for most network loss scenarios, the bandwidth used by WKA-BKR is lower than that of the other protocols.
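
To make the weighted-key-assignment idea concrete, here is a minimal sketch in Python, assuming a balanced binary LKH tree in which keys nearer the root are held by more members and therefore get more replicas in the rekey multicast; the log-scaled weighting rule and all names are illustrative, not the paper's exact scheme.

```python
import math

def replication_weights(tree_height, base_replicas=1):
    """Replicas per LKH tree level; level 0 is the root (group key)."""
    weights = {}
    for level in range(tree_height + 1):
        # In a balanced binary tree, a key at this level is held by
        # 2 ** (tree_height - level) members, so losing it hurts more.
        coverage = 2 ** (tree_height - level)
        # Log-scaled weight: heavily shared keys are sent more often.
        weights[level] = base_replicas + int(math.log2(coverage))
    return weights

print(replication_weights(4))  # {0: 5, 1: 4, 2: 3, 3: 2, 4: 1}
```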

2.
Periodic broadcast and scheduled multicast have been shown to be very effective in reducing the demand on server bandwidth. While periodic broadcast is better for popular videos, scheduled multicast is more suitable for less popular ones. Work has also been done to show that a hybrid of these techniques offers the best performance. Existing hybrid schemes, however, assume that the characteristics of the workload do not change with time. This assumption does not hold for many applications, such as movie on demand, digital video libraries, or electronic commerce. In this paper, we show that existing scheduled multicast techniques are not suited to hybrid designs. To address this issue, we propose a new approach and use it to design an adaptive hybrid strategy. Our technique adjusts itself to cope with a changing workload. We provide simulation results to demonstrate that the proposed technique is significantly better than the best static approach in terms of service latency, throughput, defection rate, and unfairness.
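
The adaptive split between the two delivery schemes can be pictured with a small dispatcher sketch; the popularity threshold, the smoothing factor, and the class interface are assumptions for illustration, not the paper's strategy.

```python
class HybridDispatcher:
    def __init__(self, threshold=10.0):
        self.threshold = threshold      # requests/hour boundary (assumed)
        self.request_rate = {}          # video_id -> smoothed request rate

    def observe(self, video_id, rate):
        # Exponentially weighted average adapts to a changing workload.
        old = self.request_rate.get(video_id, rate)
        self.request_rate[video_id] = 0.8 * old + 0.2 * rate

    def choose_scheme(self, video_id):
        rate = self.request_rate.get(video_id, 0.0)
        return "periodic_broadcast" if rate >= self.threshold else "scheduled_multicast"

d = HybridDispatcher()
d.observe("movie-42", 25.0)
print(d.choose_scheme("movie-42"))  # periodic_broadcast
```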

3.
Secure buffering in firm real-time database systems
Many real-time database applications arise in electronic financial services, safety-critical installations and military systems where enforcing security is crucial to the success of the enterprise. We investigate here the performance implications, in terms of killed transactions, of guaranteeing multi-level secrecy in a real-time database system supporting applications with firm deadlines. In particular, we focus on the buffer management aspects of this issue. Our main contributions are the following. First, we identify the importance and difficulties of providing secure buffer management in the real-time database environment. Second, we present SABRE, a novel buffer management algorithm that provides covert-channel-free security. SABRE employs a fully dynamic one-copy allocation policy for efficient usage of buffer resources. It also incorporates several optimizations for reducing the overall number of killed transactions and for decreasing the unfairness in the distribution of killed transactions across security levels. Third, using a detailed simulation model, the real-time performance of SABRE is evaluated against unsecure conventional and real-time buffer management policies for a variety of security-classified transaction workloads and system configurations. Our experiments show that SABRE provides security with only a modest drop in real-time performance. Finally, we evaluate SABRE's performance when augmented with the GUARD adaptive admission control policy. Our experiments show that this combination provides close to ideal fairness for real-time applications that can tolerate covert-channel bandwidths of up to one bit per second (a limit specified in military standards). Received March 1, 1999 / Accepted October 1, 1999
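
One way to picture covert-channel-free buffer management: a transaction's buffer behavior must never be perturbed by activity at higher security levels. The victim-selection sketch below encodes that reading of the abstract; it is not SABRE's actual allocation policy, and all names are illustrative.

```python
def pick_victim(pages, requester_level):
    """pages: list of dicts with 'owner_level' and 'last_used'.
    Return the least-recently-used page whose owner level dominates
    (is >=) the requester's level, or None if no such page exists.
    Stealing only upward means low-level transactions never observe
    high-level activity through their own buffer hit rates."""
    candidates = [p for p in pages if p["owner_level"] >= requester_level]
    if not candidates:
        return None  # caller must wait on its own level's queue
    return min(candidates, key=lambda p: p["last_used"])

pool = [{"owner_level": 0, "last_used": 5},
        {"owner_level": 2, "last_used": 1},
        {"owner_level": 1, "last_used": 3}]
print(pick_victim(pool, requester_level=1))  # steals the level-2 page
```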

4.
Causal message-logging protocols have several attractive properties: they introduce no blocking, send no additional messages over those sent by the application, and never create orphans. Causal message logging, however, does require the causal effects of the deliveries of messages to be tracked. The information concerning causality tracking is piggybacked on application messages, and the amount of such information can become large. In this paper we study the cost of tracking causality in causal message-logging protocols. One can track causality as accurately as possible, but doing so requires piggybacking a considerable amount of additional information. One can reduce the amount of piggybacked information on each message by reducing the accuracy of causality tracking. But then, causal message logging may piggyback the reduced amount of information on more messages. We specify six different methods of tracking causality, each representing a natural choice based on the specification of causal message logging. We describe how these six methods can be implemented and compare them in terms of how large a piggyback load they impose. This load depends on the application that is using causal message logging. We characterize some applications for which a given method has the smallest piggyback load, and study using simulation the size of the piggyback load for two different models of applications. Received: July 1999 / Accepted: July 2001
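
For readers unfamiliar with causality tracking, the sketch below shows one natural (and deliberately heavyweight) method: piggybacking a full vector clock on every message. It illustrates the kind of piggyback load the paper's six methods trade off; it is not one of those six methods.

```python
class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n          # one entry per process

    def send(self, payload):
        self.clock[self.pid] += 1
        # The whole clock is the piggybacked causality information;
        # its size is what less accurate methods try to reduce.
        return (payload, list(self.clock))

    def deliver(self, message):
        payload, piggyback = message
        # Merge piggybacked history, then count the delivery event.
        self.clock = [max(a, b) for a, b in zip(self.clock, piggyback)]
        self.clock[self.pid] += 1
        return payload

p0, p1 = Process(0, 2), Process(1, 2)
m = p0.send("hello")
p1.deliver(m)
print(p1.clock)  # [1, 1]: p1 knows of p0's send and its own delivery
```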

5.
A new 2D code called Secure 2D code is designed in this paper; both an encoder and a decoder are proposed. Secure 2D code can store any kind of data and provides high security. With regard to security, the input data is divided into two parts: general and secret. The general data is transformed into a 2D code pattern, and the secret data is then hidden in the 2D code pattern. To raise reading speed and support various reading environments, some features are added around the boundary of the 2D code pattern. For reliability, a Reed-Solomon (RS) code is adopted to handle damaged patterns. Received: 9 September 1997 / Accepted: 2 March 1998
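
The damaged-pattern recovery can be illustrated with any Reed-Solomon coder. The sketch below uses the third-party Python package reedsolo (its decode API as of the 1.x releases) purely as a stand-in; the paper's actual encoder, secret-hiding step, and block layout are not reproduced here.

```python
from reedsolo import RSCodec  # pip install reedsolo (assumed available)

rsc = RSCodec(10)                       # 10 parity bytes per block
general = b"public payload"
protected = rsc.encode(general)         # payload + RS parity

# Simulate pattern damage: corrupt a few bytes (<= 10/2 correctable).
damaged = bytearray(protected)
damaged[0] ^= 0xFF
damaged[3] ^= 0xFF

# reedsolo 1.x returns (decoded_msg, msg_with_ecc, errata_positions).
decoded, _, _ = rsc.decode(bytes(damaged))
print(decoded == general)               # True: damage was repaired
```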

6.
Protocols for multimedia communication are needed to integrate, into a single network, services intended to satisfy the different requirements of multiple types of traffic. An essential prerequisite for designing these protocols is that the services to be offered by the network be selected and specified in detail. We present the service models proposed, or being developed, by the Internet community, by the ATM community, and by the Tenet Group. We compare their common characteristics, which reveal the kinds of services the first integrated-services networks are likely to offer. The services referred to in this paper are those at the network and transport layers, which support the services to be offered to the system's end users.

7.
Given a random deal of cards to the players and a computationally unlimited eavesdropper, all players wish to share a one-bit secret key that is information-theoretically secure from the eavesdropper. This can be done by a protocol that makes several pairs of players share one-bit secret keys so that these pairs form a tree over the players. In this paper we obtain a necessary and sufficient condition on the number of cards for the existence of such a protocol. Published online: 29 January 2002
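
The card-based primitive underneath such protocols can be illustrated in a few lines: once a pair of cards is announced in which each of two players holds exactly one card, the eavesdropper cannot tell who holds which, so comparing the cards yields one secret bit. The simulation below is a toy version of that primitive, not the paper's tree-building protocol.

```python
import random

deck = list(range(10))
random.shuffle(deck)
alice, bob = set(deck[:4]), set(deck[4:8])   # Eve sees neither hand

a = next(iter(alice))                        # a card Alice holds
b = next(iter(bob))                          # a card Bob holds

# Publicly announced: only the unordered pair {a, b}. Both players
# know who holds which card; to Eve both assignments are equally
# likely, so the comparison bit below is secret from her.
shared_bit = 1 if a < b else 0
print("announced pair:", {a, b}, "shared bit:", shared_bit)
```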

8.
Handling message semantics with Generic Broadcast protocols
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,” that is, message “semantics” is not taken into consideration, despite the fact that in several cases semantic information about messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic Broadcast. The paper also presents two algorithms that solve Generic Broadcast. Received: August 2000 / Accepted: August 2001
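
A conflict relation can be as simple as read/write conflicts on shared keys. The sketch below shows such an assumed relation; with an empty relation Generic Broadcast degenerates to Reliable Broadcast, and with a total relation to Atomic Broadcast.

```python
def conflicts(m1, m2):
    """Messages are (operation, key) pairs; only conflicting messages
    need to be delivered in the same order everywhere."""
    op1, key1 = m1
    op2, key2 = m2
    if key1 != key2:
        return False                      # different data: never ordered
    return op1 == "write" or op2 == "write"

print(conflicts(("read", "x"), ("read", "x")))    # False: no ordering needed
print(conflicts(("read", "x"), ("write", "x")))   # True: must be ordered
print(conflicts(("write", "x"), ("write", "y")))  # False
```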

9.
Recent results in the Rio project at the University of Michigan show that it is possible to create an area of main memory that is as safe as disk from operating system crashes. This paper explores how to integrate the reliable memory provided by the Rio file cache into a database system. Prior studies have analyzed the performance benefits of reliable memory; we focus instead on how different designs affect reliability. We propose three designs for integrating reliable memory into databases: non-persistent database buffer cache, persistent database buffer cache, and persistent database buffer cache with protection. Non-persistent buffer caches use an I/O interface to reliable memory and require the fewest modifications to existing databases. However, they waste memory capacity and bandwidth due to double buffering. Persistent buffer caches use a memory interface to reliable memory by mapping it into the database address space. This places reliable memory under complete database control and eliminates double buffering, but it may expose the buffer cache to database errors. Our third design reduces this exposure by write protecting the buffer pages. Extensive fault tests show that mapping reliable memory into the database address space does not significantly hurt reliability. This is because wild stores rarely touch dirty, committed pages written by previous transactions. As a result, we believe that databases should use a memory interface to reliable memory. Received January 1, 1998 / Accepted June 20, 1998
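
The third design's write protection can be pictured as a small state machine: pages stay read-only except across an intentional write, so a wild store faults instead of corrupting committed data. The sketch below models this in plain Python; a real implementation would use mprotect(2) on the mapped reliable memory.

```python
class ProtectedPage:
    def __init__(self, data=b""):
        self.data = data
        self.writable = False             # protected by default

    def write(self, data):
        if not self.writable:
            raise MemoryError("wild store caught: page is protected")
        self.data = data

    def guarded_write(self, data):
        self.writable = True              # ~ mprotect(PROT_READ|PROT_WRITE)
        try:
            self.write(data)
        finally:
            self.writable = False         # ~ mprotect(PROT_READ)

page = ProtectedPage(b"committed")
page.guarded_write(b"new version")        # intended write succeeds
try:
    page.write(b"oops")                   # stray write faults instead
except MemoryError as e:
    print(e)
```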

10.
We describe a collection of algorithms designed to support reliable synchronization and group membership services for distributed multimedia applications. In particular, we consider those applications that require interactivity, isochronous rendering of multimedia data, and high reliability. We show that the algorithms we propose (i) provide reliable support for the synchronization of multimedia data streams, despite the occurrence of possible communication failures, (ii) maintain a consistent view of the relative group membership of all the nonfaulty application components, (iii) guarantee time-bounded delay of component failure detection and join, and (iv) effectively meet the scalability requirements of the applications.

11.
Analysis and implementation of multimedia multicast protocols
Starting from the architecture of multimedia multicast protocols for IP networks, this paper discusses the characteristics, functions, and implementation of multicast protocols at the network, transport, and application layers. At the transport layer, it discusses the design principles of the RTP/RTCP protocols as applied to video conferencing over IP; at the application layer, it discusses the protocols and products involved in designing a multicast server; at the network layer, routers implementing IGMP form the MBone, and multicast users join multicast groups through IGMP so that network bandwidth is used efficiently. An example is given. Multicast technology is still evolving: version 3 of the IGMP protocol, IGMPv3, adds router-level support for source-specific multicast.
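
As a concrete companion to the IGMP discussion, here is a standard receiver-side group join using the BSD socket API from Python; the group address and port are examples, and the kernel, not the application, emits the actual IGMP membership report.

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004          # example administratively scoped group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq: 4-byte group address + 4-byte local interface (0.0.0.0 = any).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)          # blocks until a group datagram arrives
```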

12.
Summary. The complexity of designing protocols has led to compositional techniques for designing and verifying protocols. We propose a technique based on the notion of parallel composition of protocols. We view a composite protocol as an interleaved execution of the component protocols subject to a set of constraints. Using the constraints as building blocks, we define several constraint-based structures, with each structure combining the properties of the component protocols in a different way. For instance, the component protocols of a multifunction protocol can be structured so that the composite protocol performs all the individual functions concurrently or performs only one of them depending on the order of initiation of the component protocols. We provide inference rules to infer safety and liveness properties of the composite protocol. Some properties are derived from those of the component protocols while others are derived from the structuring mechanism (the set of constraints) used to combine the component protocols. Received: October 1996 / Accepted: August 1998
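
The constraint-based view can be sketched directly: a composite behavior is any interleaving of the component behaviors that a constraint predicate admits at every step. The constraint below (one protocol may not start before the other connects) is an invented example of such a building block, not one of the paper's structures.

```python
def interleavings(t1, t2, allowed, prefix=()):
    """Yield every interleaving of component traces t1 and t2 in which
    `allowed(prefix, next_event)` holds at every step."""
    if not t1 and not t2:
        yield prefix
    if t1 and allowed(prefix, t1[0]):
        yield from interleavings(t1[1:], t2, allowed, prefix + (t1[0],))
    if t2 and allowed(prefix, t2[0]):
        yield from interleavings(t1, t2[1:], allowed, prefix + (t2[0],))

# Constraint: protocol Q may not start until protocol P has connected.
def allowed(prefix, event):
    return event != "q.start" or "p.connect" in prefix

for run in interleavings(("p.connect", "p.send"), ("q.start", "q.send"), allowed):
    print(run)   # only the 3 interleavings that respect the constraint
```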

13.
Model checking for a probabilistic branching time logic with fairness
We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic choice between probability distributions. These systems can be decomposed into a collection of “computation trees” which arise by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness properties cannot be established unless fairness is assumed. We introduce a probabilistic branching time logic PBTL, based on the logic TPCTL of Hansson [30] and the logic PCTL of [55], resp. pCTL [14]. The formulas of the logic express properties such as “every request is eventually granted with probability at least p”. We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL [4, 14] to obtain procedures for PBTL under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized distributed systems. Received: June 1997 / Accepted: May 1998
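
The numeric core of checking a formula like “eventually granted with probability at least p” is computing extremal reachability probabilities over the nondeterministic choices. The value-iteration sketch below shows that base computation only; the fairness-constrained interpretations in the paper require more machinery.

```python
def max_reach_prob(states, actions, goal, eps=1e-9):
    """actions[s] = list of distributions, each {successor: prob}.
    Iterate x(s) = max over actions of the expected value of x until
    the maximal probability of eventually reaching `goal` converges."""
    x = {s: (1.0 if s in goal else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goal or not actions[s]:
                continue
            best = max(sum(p * x[t] for t, p in dist.items())
                       for dist in actions[s])
            delta = max(delta, abs(best - x[s]))
            x[s] = best
        if delta < eps:
            return x

acts = {"s": [{"goal": 0.5, "fail": 0.5}, {"s": 0.9, "goal": 0.1}],
        "goal": [], "fail": []}
probs = max_reach_prob({"s", "goal", "fail"}, acts, {"goal"})
print(round(probs["s"], 6))   # 1.0: repeating the second action wins
```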

14.
15.
The research done by the Tenet Group in multimedia networking has reached a point where it may be useful to reflect on the significance of its results for the current debate on how integrated-services internetworks should be designed. Such reflections constitute the main subject of this paper. The principles of the work and the conclusions reached so far by the Tenet researchers are discussed in the light of the conflict between the two major technologies being proposed to build future information infrastructures: namely, the Internet and the ATM technologies. The Tenet approach suggests one feasible way for resolving the conflict to the advantage of all the users of those infrastructures. This paper discusses various fundamental aspects of integrated-services network design: the choice of the service model, the type of charging policy to be adopted, and the selection of a suitable architecture.

16.
In this paper we explore how partial-order reduction can make the task of verifying security protocols more efficient. These reduction techniques have been implemented in our tool Brutus. Partial-order reductions have proved very useful in the domain of model checking reactive systems. These reductions are not directly applicable in our context because of additional complications caused by tracking knowledge of various agents. We present partial-order reductions in the context of verifying security protocols and prove their correctness. Experimental results demonstrating the effectiveness of this reduction technique are also presented. Published online: 24 January 2003
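
The underlying reduction idea: when an enabled action is independent of every other enabled action, exploring it alone suffices. The sketch below uses a simplistic disjoint-state independence test; in Brutus the hard part, as the abstract notes, is accounting for agents' knowledge, which this toy ignores.

```python
def independent(a, b):
    # Assumed test: actions commute if they touch disjoint state.
    return not (set(a["touches"]) & set(b["touches"]))

def reduced_successors(enabled):
    """If some enabled action is independent of all others, explore it
    alone (an 'ample set' of size one); otherwise explore everything.
    Real ample-set conditions are stronger (cycle and visibility
    checks); this is only the core pruning idea."""
    for a in enabled:
        if all(independent(a, b) for b in enabled if b is not a):
            return [a]
    return enabled

acts = [{"name": "recv_nonce", "touches": ["A.state"]},
        {"name": "intruder_replay", "touches": ["net", "A.state"]},
        {"name": "local_tick", "touches": ["B.clock"]}]
print([a["name"] for a in reduced_successors(acts)])  # ['local_tick']
```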

17.
Most existing multicast protocols adopt a static retransmission scheme (unicast or multicast) to retransmit lost packets. In a mobile multicast environment, a static multicast retransmission mode may lead to congestion at the receivers' wireless interfaces, while a static unicast mode may result in a heavy network load. Both static modes therefore cause a performance loss. This paper logically divides the mobile multicast network into fixed and mobile parts, and focuses on the mobile part. It then analyzes the retransmission costs when multicast or unicast mode is chosen. Two main parameters are used to compare their efficiencies: the average air-interface utilization of each receiver and the average network load. Based on the results of this analysis, two new algorithms, NLPA (Network Load Priority Algorithm) and AUPA (Air-interface Utilization Priority Algorithm), are presented. Finally, simulation results show that, with proper parameters, both NLPA and AUPA can dynamically alternate between unicast and multicast retransmission modes according to the conditions of the network and receivers, avoiding both congestion at the receivers' wireless interfaces and heavy network load, and making better use of network and terminal resources.
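
The dynamic mode choice reduces to a cost comparison per lost packet: one multicast burdens every receiver's air interface, while per-loser unicasts add network load proportional to the number of losers. The sketch below uses an assumed two-parameter cost model, not NLPA's or AUPA's actual criteria.

```python
def choose_retransmission(num_losers, multicast_cost=1.0, unicast_cost_each=0.3):
    """Pick the cheaper retransmission mode for one lost packet.
    multicast_cost: load of one multicast reaching all receivers;
    unicast_cost_each: load of one unicast to a single loser.
    Both parameters are illustrative assumptions."""
    unicast_total = num_losers * unicast_cost_each
    return "multicast" if multicast_cost <= unicast_total else "unicast"

print(choose_retransmission(num_losers=2))    # unicast: few losers
print(choose_retransmission(num_losers=20))   # multicast: many losers
```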

18.
Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm. Received: February 1999 / Accepted: March 2000
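
The distributed random walk at the heart of each agreement attempt can be simulated in a few lines: local coin flips move a shared counter, and the sign at which it escapes a barrier is the round's shared coin. The barrier size and flip model below are illustrative, not the paper's exact sub-protocol.

```python
import random

def shared_coin(num_processes=5, barrier_factor=3):
    """Toy shared-coin random walk: each step is some process's local
    coin flip applied to a shared counter; the walk ends when the
    counter escapes the +/-barrier, and its sign is the coin value."""
    counter = 0
    barrier = barrier_factor * num_processes
    while abs(counter) < barrier:
        counter += random.choice((-1, 1))
    return 1 if counter > 0 else 0

random.seed(7)
print([shared_coin() for _ in range(5)])   # five independent shared coins
```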

19.
This paper describes a set of interfaces and mechanisms to enhance access to the World Wide Web for persons with sensory, cognitive, or motor limitations. Paradoxically, although complex Web architectures are often accused of impeding accessibility, their layers expand the range of points where interventions can be staged to improve it. This paper identifies some of these access control points and evaluates the particular strengths and weaknesses of each. In particular, it describes an approach to enhance access that is distributed across multiple control points and implemented as an aggregation of services. Published online: 6 November 2002

20.
Heuristic and randomized optimization for the join ordering problem
Recent developments in database technology, such as deductive database systems, have given rise to the demand for new, cost-effective optimization techniques for join expressions. In this paper many different algorithms that compute approximate solutions for optimizing join orders are studied, since traditional dynamic programming techniques are not appropriate for complex problems. Two possible solution spaces, the space of left-deep and bushy processing trees, are evaluated from a statistical point of view. The result is that the common limitation to left-deep processing trees is only advisable for certain join graph types. Basically, optimizers from three classes are analysed: heuristic, randomized and genetic algorithms. Each one is extensively scrutinized with respect to its working principle and its fitness for the desired application. It turns out that randomized and genetic algorithms are well suited for optimizing join expressions. They generate solutions of high quality within a reasonable running time. The benefits of heuristic optimizers, namely the short running time, are often outweighed by merely moderate optimization performance.
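
Of the three optimizer classes, the randomized ones are the easiest to sketch. Below is a minimal iterative-improvement search over left-deep join orders with a deliberately crude cost model; the relation sizes, selectivity, and move budget are all assumed for illustration.

```python
import random

sizes = {"A": 1000, "B": 50, "C": 2000, "D": 10}
SELECTIVITY = 0.01                     # uniform join selectivity (assumed)

def cost(order):
    """Crude cost: sum of intermediate result sizes in a left-deep plan."""
    total, inter = 0, sizes[order[0]]
    for rel in order[1:]:
        inter = inter * sizes[rel] * SELECTIVITY
        total += inter
    return total

def iterative_improvement(relations, moves=200):
    best = relations[:]
    random.shuffle(best)               # random start point
    for _ in range(moves):             # one move = one random swap
        cand = best[:]
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if cost(cand) < cost(best):
            best = cand                # accept only improvements
    return best, cost(best)

random.seed(1)
print(iterative_improvement(list(sizes)))  # small relations tend to join first
```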
