Similar Documents (20 results)
1.
Failure detection and consensus in the crash-recovery model
Summary. We study the problems of failure detection and consensus in asynchronous systems in which processes may crash and recover, and links may lose messages. We first propose new failure detectors that are particularly suitable to the crash-recovery model. We next determine under what conditions stable storage is necessary to solve consensus in this model. Using the new failure detectors, we give two consensus algorithms that match these conditions: one requires stable storage and the other does not. Both algorithms tolerate link failures and are particularly efficient in the runs that are most likely in practice – those with no failures or failure detector mistakes. In such runs, consensus is achieved within O(δ) time and with 4n messages, where δ is the maximum message delay and n is the number of processes in the system. Received: May 1998 / Accepted: November 1999

2.
Summary. This paper presents adaptive algorithms for mutual exclusion using only read and write operations; the performance of the algorithms depends only on the point contention, i.e., the number of processes that are concurrently active during the algorithm execution (and not on n, the total number of processes). Our algorithm has O(k) remote step complexity and O(log k) system response time, where k is the point contention. The remote step complexity is the maximal number of steps performed by a process, where a wait is counted as one step. The system response time is the time interval between subsequent entries to the critical section, where one time unit is the minimal interval in which every active process performs at least one step. The space complexity of this algorithm is O(N log n), where N is the range of processes' names. We show how to make the space complexity of our algorithm depend solely on n, while preserving the other performance measures of the algorithm. Received: March 2001 / Accepted: November 2001

3.
The transportation of prerecorded, compressed video data without loss of picture quality requires the network and video servers to support large fluctuations in bandwidth requirements. Fully utilizing a client-side buffer for smoothing bandwidth requirements can limit the fluctuations in bandwidth required from the underlying network and the video-on-demand servers. This paper shows that, for a fixed-size buffer constraint, the critical bandwidth allocation technique results in plans for continuous playback of stored video that have (1) the minimum number of bandwidth increases, (2) the smallest peak bandwidth requirements, and (3) the largest minimum bandwidth requirements. In addition, this paper introduces an optimal bandwidth allocation algorithm which, in addition to the three critical bandwidth allocation properties, minimizes the total number of bandwidth changes necessary for continuous playback. A comparison between the optimal bandwidth allocation algorithm and other critical bandwidth-based algorithms using 17 full-length movie videos and 3 seminar videos is also presented.
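A minimal sketch of the critical-bandwidth idea referred to above, assuming per-frame sizes in bits, one frame consumed per time slot, and a given amount of data already buffered at the client; the function name, the slot model, and the example numbers are illustrative rather than the paper's exact formulation:

    def critical_bandwidth(frame_bits, start, buffered_bits):
        """Smallest constant rate (bits per slot) that avoids client-buffer
        underflow from frame `start` to the end of the video, given that
        `buffered_bits` of not-yet-consumed data are already at the client."""
        cumulative = 0.0   # data consumed over the next j slots
        rate = 0.0
        for j, f in enumerate(frame_bits[start:], start=1):
            cumulative += f
            # need: buffered_bits + rate * j >= cumulative, for every prefix j
            rate = max(rate, (cumulative - buffered_bits) / j)
        return max(rate, 0.0)

    # Example with made-up frame sizes (bits):
    frames = [8000, 30000, 12000, 9000, 25000]
    print(critical_bandwidth(frames, start=0, buffered_bits=10000))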

4.
We present a shared memory algorithm that allows a set of f+1 processes to wait-free “simulate” a larger system of n processes, that may also exhibit up to f stopping failures. Applying this simulation algorithm to the k-set-agreement problem enables conversion of an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [5,18,26], this transformation implies that there is no k-fault-tolerant solution to the n-process k-set-agreement problem, for any n. More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. This paper defines these notions and gives examples of their application to fundamental distributed computing problems. The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity, expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy. The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write shared memory systems is also presented. Received: February 2001 / Accepted: February 2001

5.
Abstract. In this paper we study the ability of shared object types to implement Consensus in asynchronous shared-memory systems where at most one process may crash. More specifically, we consider the following question: Let T be a set of object types that can be used to solve one-resilient Consensus among n processes. Can T always be used to solve one-resilient Consensus among n - 1 processes? We prove that for n = 3 the answer is negative, even if T consists only of deterministic types. (This strengthens an earlier result by the first author proving the same fact for nondeterministic types.) We also prove that, in contrast, for n > 3 the answer to the above question is affirmative. Received: July 1997 / Accepted: May 2000

6.
Summary. Long-lived and adaptive implementations of mutual exclusion and renaming in the read/write shared memory model are presented. An implementation of a task is adaptive if the step complexity of any operation in the implementation is a function of the number of processes that take steps concurrently with the operation. The renaming algorithm assigns a new unique id, from a range whose size depends only on k, to any process whose initial unique name is taken from a set of size N, for an arbitrary N, where k is the number of processes that actually take steps or hold a name while the new name is being acquired. The step complexity of acquiring a new name is a function of k alone, while the step complexity of releasing a name is 1. The space complexity of the algorithm is expressed in terms of n, where n is an upper bound on the number of processes that may be active at the same time (acquiring or holding new names), which could be N in the worst case. Both the system response time and the worst-case number of operations per process in the presented mutual-exclusion algorithm are adaptive. Both algorithms rely on the basic building block of a long-lived and adaptive splitter. While the adaptive splitter satisfies a slightly different set of properties than the Moir-Anderson splitter [MA95], it is adaptive and long-lived. In addition, the new splitter properties enable the construction of a non-blocking long-lived (2k-1)-renaming algorithm (which is optimal in the size of the new name space). We believe that the mechanisms introduced in our splitter implementation are interesting on their own, and might be used in other adaptive and long-lived constructions. Received: March 2000 / Accepted: July 2001
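For background, the classic Moir-Anderson-style read/write splitter cited above ([MA95]) can be sketched as follows; this is the one-shot version, not the long-lived adaptive splitter developed in the paper, and the Python rendering assumes X and Y behave as atomic read/write registers:

    class Splitter:
        """One-shot splitter: each invoking process gets 'stop', 'right', or
        'down'; at most one process gets 'stop', not all get 'right', and
        not all get 'down'."""
        def __init__(self):
            self.X = None      # register: id of the most recent claimant
            self.Y = False     # boolean register: has the door been closed?

        def split(self, pid):
            self.X = pid
            if self.Y:
                return "right"       # door already closed: go right
            self.Y = True            # close the door
            if self.X == pid:
                return "stop"        # no interference observed: capture the splitter
            return "down"            # interference detected: go down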

7.
Summary. Group mutual exclusion occurs naturally in situations where a resource can be shared by processes of the same group, but not by processes of different groups. For example, suppose data is stored in a CD-jukebox. Then when a disc is loaded for access, users that need data on the disc can concurrently access the disc, while users that need data on a different disc have to wait until the current disc is unloaded. The design issues for group mutual exclusion have been modeled as the Congenial Talking Philosophers problem, and solutions for shared-memory models have been proposed [12,14]. As in ordinary mutual exclusion and many other problems in distributed systems, however, techniques developed for shared memory do not necessarily apply to message passing (and vice versa). So in this paper we investigate solutions for Congenial Talking Philosophers in computer networks where processes communicate by asynchronous message passing. We first present a solution that is a straightforward adaptation of Ricart and Agrawala's algorithm for ordinary mutual exclusion. Then we show that this simple adaptation suffers a severe performance degradation that could cause the system to behave as though only one process of a group can be in the critical section at a time. We then present a more efficient and highly concurrent distributed algorithm for the problem, the first such solution in computer networks. Received: August 2000 / Accepted: November 2001
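For reference, the core request/reply rule of the Ricart-Agrawala algorithm mentioned above is sketched below for ordinary (not group) mutual exclusion; the transport hook `send`, the bookkeeping names, and the wiring are assumptions for illustration, and the paper's group-mutual-exclusion algorithm adds further mechanism on top of this pattern:

    class RicartAgrawala:
        """Illustrative core of Ricart-Agrawala mutual exclusion.
        `send(dest, msg)` is an assumed message-passing hook."""
        def __init__(self, pid, peers, send):
            self.pid, self.peers, self.send = pid, peers, send
            self.clock = 0
            self.requesting = False   # stays True from request until release
            self.my_ts = None
            self.replies = set()
            self.deferred = []

        def request_cs(self):
            self.clock += 1
            self.my_ts = (self.clock, self.pid)
            self.requesting = True
            self.replies = set()
            for q in self.peers:
                self.send(q, ("REQUEST", self.my_ts, self.pid))
            # the process enters the CS once on_reply has seen all peers

        def on_request(self, ts, sender):
            self.clock = max(self.clock, ts[0]) + 1
            # Defer the reply while we hold or want the CS with an earlier timestamp.
            if self.requesting and self.my_ts < ts:
                self.deferred.append(sender)
            else:
                self.send(sender, ("REPLY", self.pid))

        def on_reply(self, sender):
            self.replies.add(sender)
            return self.replies == set(self.peers)   # True => may enter the CS

        def release_cs(self):
            self.requesting = False
            for q in self.deferred:
                self.send(q, ("REPLY", self.pid))
            self.deferred = []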

8.
Summary. In a shared-memory distributed system, n independent asynchronous processes communicate by reading and writing to shared variables. An algorithm is adaptive (to total contention) if its step complexity depends only on the actual number, k, of active processes in the execution; this number is unknown in advance and may change in different executions of the algorithm. Adaptive algorithms are inherently wait-free, providing fault-tolerance in the presence of an arbitrary number of crash failures and different processes' speeds. A wait-free adaptive collect algorithm with O(k) step complexity is presented, together with its applications in wait-free adaptive algorithms for atomic snapshots, immediate snapshots and renaming. Received: August 1999 / Accepted: August 2001

9.
A distributed dynamic channel allocation algorithm has been proposed in [11]. In this paper the algorithm is modelled using predicate/transition nets. The same model can be used for any number of cell and channel configurations. The Maria reachability analyser has been used to analyse the protocol for some configurations. These are deadlock-free and are shown to satisfy the requirement that the same channel is never allocated to two neighbouring cells. The suitability of high-level nets for the modelling and analysis of distributed algorithms is discussed. Published online: 24 August 2001

10.
Model checking for a probabilistic branching time logic with fairness
We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic choice between probability distributions. These systems can be decomposed into a collection of “computation trees” which arise by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness properties cannot be established unless fairness is assumed. We introduce a probabilistic branching time logic PBTL, based on the logic TPCTL of Hansson [30] and the logic PCTL of [55], resp. pCTL [14]. The formulas of the logic express properties such as “every request is eventually granted with probability at least p”. We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL [4, 14] to obtain procedures for PBTL under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized distributed systems. Received: June 1997 / Accepted: May 1998
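As an illustration of the kind of property quoted in the abstract, "every request is eventually granted with probability at least p" can be written roughly as follows (an assumed rendering in a PCTL/PBTL-like notation; PBTL's concrete syntax attaches the probability bound to path formulas under a branching-time quantifier):

    \forall\Box\,\bigl(\mathit{request} \;\rightarrow\; [\,\Diamond\,\mathit{granted}\,]_{\geq p}\bigr)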

11.
Geometric groundtruth at the character, word, and line levels is crucial for designing and evaluating optical character recognition (OCR) algorithms. Kanungo and Haralick proposed a closed-loop methodology for generating geometric groundtruth for rescanned document images. The procedure assumed that the original image and the corresponding groundtruth were available. It automatically registered the original image to the rescanned one using four corner points and then transformed the original groundtruth using the estimated registration transformation. In this paper, we present an attributed branch-and-bound algorithm for establishing the point correspondence that uses all the data points. We group the original feature points into blobs and use corners of blobs for matching. The Euclidean distance between character centroids is used as the error metric. We conducted experiments on synthetic point sets with varying layout complexity to characterize the performance of two matching algorithms. We also report results on experiments conducted using the University of Washington dataset. Finally, we show examples of application of this methodology for generating groundtruth for microfilmed and FAXed versions of the University of Washington dataset documents. Received: July 24, 2001 / Accepted: May 20, 2002
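A minimal sketch of point-based registration in the spirit of the procedure above, assuming the corner correspondences have already been established (the paper's contribution is precisely the branch-and-bound matching that establishes them) and using a simple least-squares affine fit; numpy and the coordinates are assumptions for illustration:

    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping src (N,2) points onto dst (N,2).
        Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T."""
        src = np.asarray(src, dtype=float)
        dst = np.asarray(dst, dtype=float)
        X = np.hstack([src, np.ones((len(src), 1))])       # (N, 3)
        A, *_ = np.linalg.lstsq(X, dst, rcond=None)         # (3, 2)
        return A.T                                           # (2, 3)

    def apply_affine(A, pts):
        pts = np.asarray(pts, dtype=float)
        return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T

    # Register original corner points to the rescanned image (made-up coordinates),
    # then map any groundtruth point through the estimated transform.
    orig = [(0, 0), (100, 0), (100, 150), (0, 150)]
    scan = [(5, 7), (104, 9), (103, 158), (3, 156)]
    A = fit_affine(orig, scan)
    print(apply_affine(A, [(50, 75)]))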

12.
Asynchronous group mutual exclusion
Abstract. Mutual exclusion and concurrency are two fundamental and essentially opposite features in distributed systems. However, in some applications such as Computer Supported Cooperative Work (CSCW) we have found it necessary to impose mutual exclusion on different groups of processes in accessing a resource, while allowing processes of the same group to share the resource. To our knowledge, no such design issue has been previously raised in the literature. In this paper we address this issue by presenting a new problem, called Congenial Talking Philosophers, to model group mutual exclusion. We also propose several criteria to evaluate solutions of the problem and to measure their performance. Finally, we provide an efficient and highly concurrent distributed algorithm for the problem in a shared-memory model where processes communicate by reading from and writing to shared variables. The distributed algorithm meets the proposed criteria, and has performance similar to some naive but centralized solutions to the problem. Received: November 1998 / Accepted: April 2000

13.
Efficient admission control algorithms for multimedia servers
In this paper, we have proposed efficient admission control algorithms for multimedia storage servers that are providers of variable-bit-rate media streams. The proposed schemes are based on a slicing technique and use aggressive methods for admission control. We have developed two types of admission control schemes: Future-Max (FM) and Interval Estimation (IE). The FM algorithm uses the maximum bandwidth requirement of the future to estimate the bandwidth requirement. The IE algorithm defines a class of admission control schemes that use a combination of the maximum and average bandwidths within each interval to estimate the bandwidth requirement of the interval. The performance evaluations done through simulations show that the server utilization is improved by using the FM and IE algorithms. Furthermore, the quality of service is also improved by using the FM and IE algorithms. Several results depicting the trade-off between the implementation complexity, the desired accuracy, the number of accepted requests, and the quality of service are presented.
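A toy sketch of a Future-Max-style admission test, assuming each stream is summarized by its per-interval peak bandwidth requirement and the server has a fixed capacity; the slicing of VBR streams into intervals and the Interval Estimation variant are not reproduced here, and the numbers are made up:

    def admit_future_max(active_profiles, new_profile, capacity):
        """Admit the new stream only if, in every future interval, the sum of
        the peak bandwidth requirements of all streams stays within capacity.
        Profiles are lists of per-interval peak bandwidths, aligned on interval 0."""
        profiles = active_profiles + [new_profile]
        horizon = max(len(p) for p in profiles)
        for i in range(horizon):
            demand = sum(p[i] for p in profiles if i < len(p))
            if demand > capacity:
                return False
        return True

    # Example (Mb/s per interval):
    active = [[3, 5, 4], [2, 2, 6]]
    print(admit_future_max(active, [4, 1, 1], capacity=10))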

14.
We consider the problem of scheduling a set of pages on a single broadcast channel using time-multiplexing. In a perfectly periodic schedule, time is divided into equal-size slots, and each page is transmitted in a time slot precisely every fixed interval of time (the period of the page). We study the case in which each page i has a given demand probability, and the goal is to design a perfectly periodic schedule that minimizes the average time a random client waits until its page is transmitted. We seek approximate polynomial solutions. Approximation bounds are obtained by comparing the costs of a solution provided by an algorithm and a solution to a relaxed (non-integral) version of the problem. A key quantity in our methodology is a fraction that depends on the maximum demand probability. The best known polynomial algorithm to date guarantees a certain approximation ratio. In this paper, we develop a tree-based methodology for perfectly periodic scheduling, and using new techniques, we derive algorithms with better bounds. For small values of this fraction, our best algorithm guarantees an improved approximation. On the other hand, we prove a lower bound on the integrality gap between the cost of any perfectly periodic schedule and the cost of the fractional problem. We also provide algorithms with good performance guarantees for large values of this fraction. Received: December 2001 / Accepted: September 2002
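For intuition, if page i is broadcast with period T_i and a random client requests page i with probability p_i at a uniformly random time, the expected waiting time being minimized can be written as the standard expression below (an assumed formulation, not quoted from the paper; p_i and T_i are notation introduced here):

    W \;=\; \sum_{i} p_i \,\frac{T_i}{2}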

15.
Summary. This paper formulates necessary and sufficient conditions on the information required for enforcing causal ordering in a distributed system with asynchronous communication. The paper then presents an algorithm for enforcing causal message ordering. The algorithm allows a process to multicast to arbitrary and dynamically changing process groups. We show that the algorithm is optimal in the space complexity of the overhead of control information in both messages and message logs. The algorithm achieves optimality by transmitting the bare minimum causal dependency information specified by the necessity conditions, and using an encoding scheme to represent and transmit this information. We show that, in general, the space complexity of causal message ordering in an asynchronous system is O(n²), where n is the number of nodes in the system. Although the upper bound on the space complexity of the overhead of control information in the algorithm is also O(n²), the overhead is likely to be much smaller on average, and is always the least possible. Received: January 1996 / Accepted: February 1998
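For background, the classic (non-optimal) vector-clock scheme for causal delivery of broadcasts is sketched below; it is not the paper's algorithm, which handles dynamically changing multicast groups with provably minimal control information, and the class and field names are illustrative:

    class CausalBroadcast:
        """Classic vector-clock causal delivery for broadcasts among n processes."""
        def __init__(self, pid, n):
            self.pid = pid
            self.V = [0] * n        # V[k] = number of messages delivered from process k
            self.pending = []       # received but not yet deliverable messages

        def send(self, payload):
            self.V[self.pid] += 1
            return (self.pid, list(self.V), payload)   # timestamp travels with the message

        def receive(self, msg):
            """Buffer msg and return every payload that becomes deliverable."""
            self.pending.append(msg)
            delivered, progress = [], True
            while progress:
                progress = False
                for m in list(self.pending):
                    j, Vm, payload = m
                    ok = Vm[j] == self.V[j] + 1 and all(
                        Vm[k] <= self.V[k] for k in range(len(self.V)) if k != j)
                    if ok:   # all causal predecessors of m have been delivered
                        self.V[j] = Vm[j]
                        delivered.append(payload)
                        self.pending.remove(m)
                        progress = True
            return delivered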

16.
In this paper, we re-examine the results of prior work on methods for computing ad hoc joins. We develop a detailed cost model for predicting join algorithm performance, and we use the model to develop cost formulas for the major ad hoc join methods found in the relational database literature. We show that various pieces of “common wisdom” about join algorithm performance fail to hold up when analyzed carefully, and we use our detailed cost model to derive optimal buffer allocation schemes for each of the join methods examined here. We show that optimizing their buffer allocations can lead to large performance improvements, e.g., as much as a 400% improvement in some cases. We also validate our cost model's predictions by measuring an actual implementation of each join algorithm considered. The results of this work should be directly useful to implementors of relational query optimizers and query processing systems. Edited by M. Adiba. Received May 1993 / Accepted April 1996
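As a flavour of the kind of cost formula involved, a textbook I/O estimate for a block nested-loops join under a given buffer allocation is sketched below; it ignores the CPU and seek costs that the paper's detailed model accounts for, and the function and parameter names are illustrative:

    import math

    def block_nested_loops_io(pages_r, pages_s, buffer_pages):
        """Estimated page I/Os for a block nested-loops join: read R once, and
        scan S once for every chunk of R that fits in the buffer (reserving
        one page for S input and one page for output)."""
        chunk = max(buffer_pages - 2, 1)
        return pages_r + math.ceil(pages_r / chunk) * pages_s

    # Example: |R| = 1000 pages, |S| = 500 pages, 102 buffer pages
    print(block_nested_loops_io(1000, 500, 102))   # 1000 + 10 * 500 = 6000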

17.
The Dempster-Shafer theory and the convex Bayesian theory have recently been proposed as alternatives to the (strict) Bayesian theory in the field of reasoning with uncertainty. These relatively new formalisms claim that missing information in the probabilistic model of a process does not necessarily preclude uncertainty reasoning. However, this paper shows that this does not apply to processes where the reasoning is part of a decision-making process, such as object recognition. In these cases, a complete probabilistic model is required and can be obtained by estimating the missing probabilistic information. An exemplary approach to the estimation of uncertain probabilistic information is described in this paper for a multi-sensor system for recognition of electronic components on printed circuit boards. Received: 21 June 1998 / Accepted: 23 May 2000
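For reference, Dempster's rule of combination, central to the Dempster-Shafer theory discussed above, can be sketched as follows; the frame of discernment and the sensor mass assignments are made-up examples, not taken from the paper:

    def combine(m1, m2):
        """Dempster's rule: combine two mass functions given as dicts
        {frozenset(hypotheses): mass}. Mass falling on the empty set
        (conflict) is renormalized away."""
        combined, conflict = {}, 0.0
        for a, wa in m1.items():
            for b, wb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
        norm = 1.0 - conflict
        return {s: w / norm for s, w in combined.items()}

    # Example on the frame {ok, faulty} with hypothetical sensor evidence:
    frame = frozenset({"ok", "faulty"})
    m_sensor1 = {frozenset({"ok"}): 0.6, frame: 0.4}
    m_sensor2 = {frozenset({"ok"}): 0.7, frozenset({"faulty"}): 0.1, frame: 0.2}
    print(combine(m_sensor1, m_sensor2))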

18.
Summary. Progress is investigated for a shared-memory distributed system with a weak form of fault tolerance that allows processes to stop and restart functioning without notification. The concept of bounded fairness is introduced to formalize bounded delay under the assumption that each family of related processes continuously contains at least one active member. This is a generalization of wait-freedom, and also of a finitary form of weak fairness. Several useful proof rules are stated and proved. In a system with bounded fairness, a wait-free process can be constructed by forming a new process in which processes from the various families are scheduled in a round robin way. The theory is applied to prove progress within bounded delay for a linearizing concurrent data-object in shared memory. The safety properties of this algorithm have been treated elsewhere. Received: April 1998 / Accepted: March 1999

19.
Binarization of document images with poor contrast, strong noise, complex patterns, and variable modalities in the gray-scale histograms is a challenging problem. A new binarization algorithm has been developed to address this problem for personal cheque images. The main contribution of this approach is optimizing the binarization of a part of the document image that suffers from noise interference, referred to as the Target Sub-Image (TSI), using information easily extracted from another noise-free part of the same image, referred to as the Model Sub-Image (MSI). Simple spatial features extracted from MSI are used as a model for handwriting strokes. This model captures the underlying characteristics of the writing strokes, and is invariant to the handwriting style or content. This model is then utilized to guide the binarization in the TSI. Another contribution is a new technique for the structural analysis of document images, which we call “Wavelet Partial Reconstruction” (WPR). The algorithm was tested on 4,200 cheque images and the results show significant improvement in binarization quality in comparison with other well-established algorithms. Received: October 10, 2001 / Accepted: May 7, 2002 This research was supported in part by NCR and NSERC's industrial postgraduate scholarship No. 239464. A simplified version of this paper has been presented at ICDAR 2001 [3].

20.
李燕君  蒋华同  高美惠 《控制与决策》2022,37(11):2880-2886
To meet the real-time requirements of edge computing applications, software-defined networking and network function virtualization are introduced to restructure the edge computing network. On this basis, we consider the online allocation of computing and communication resources with the goal of maximizing the long-term average success rate of real-time task processing. By formulating a Markov decision process model, a Q-learning-based online resource allocation method is proposed. Since Q-learning consumes large amounts of memory and suffers from the curse of dimensionality when the state-action space is large, a DQN-based online resource allocation method is further proposed. Experimental results show that the proposed algorithms converge quickly, and that the DQN algorithm achieves a higher real-time task processing success rate than Q-learning and other baseline methods.
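A minimal tabular Q-learning update and ε-greedy action selection of the kind the abstract builds on are sketched below; the state encoding, action set, reward, and hyperparameters are illustrative and not those of the paper (whose DQN variant replaces the table with a neural network):

    import random
    from collections import defaultdict

    def q_learning_step(Q, state, action, reward, next_state, actions,
                        alpha=0.1, gamma=0.9):
        """One tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    def epsilon_greedy(Q, state, actions, epsilon=0.1):
        """Explore with probability epsilon, otherwise pick the greedy action."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    Q = defaultdict(float)                 # Q-table: (state, action) -> value
    actions = ["offload", "local"]         # hypothetical resource-allocation actions
    a = epsilon_greedy(Q, "s0", actions)
    q_learning_step(Q, "s0", a, reward=1.0, next_state="s1", actions=actions)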
