Similar Documents
20 similar documents found (search time: 15 ms)
1.
An adaptive algorithm, whose step complexity adjusts to the number of active processes, is attractive for distributed systems with a highly variable number of processes. The cornerstone of many adaptive algorithms is an adaptive mechanism to collect up-to-date information from all participating processes. To date, all known collect algorithms either have non-linear step complexity or are impractical because of unrealistic memory overhead. This paper presents new randomized collect algorithms with asymptotically optimal O(k) step complexity and only linear memory overhead. In addition, we present a new deterministic collect algorithm that improves on the best known step complexity of previous polynomial-memory algorithms. Partially supported by NSF Grants CCR–0310970 and ANI–0326001. A preliminary version of this paper appeared in the Proceedings of the 18th Annual Conference on Distributed Computing (DISC) 2004 [10].

2.
This paper proposes a new class of efficient adaptive nonlinear filters whose estimation-error performance (in the minimum mean-square sense) is superior to that of competing approximate nonlinear filters, e.g., the well-known extended Kalman filter (EKF). The proposed filters include as special cases both the EKF and previously proposed partitioning filters. The new methodology adaptively selects appropriate reference points for linearization from an ensemble of generated trajectories that have been processed and clustered so as to span the whole state space of the desired signal. Through a series of simulation examples, the approach is shown to be significantly superior to the classical EKF at a comparable computational burden.
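As a point of reference for the EKF comparison above, a minimal sketch of one predict/update cycle of the classical (non-adaptive) EKF; the scalar model f, h, their derivatives, and the noise variances below are illustrative assumptions, not taken from the paper:

# Illustrative nonlinear scalar model (assumed): x_k = f(x_{k-1}) + w_k, z_k = h(x_k) + v_k
f  = lambda x: 0.5 * x + 25.0 * x / (1.0 + x**2)              # state transition
h  = lambda x: x**2 / 20.0                                     # measurement
df = lambda x: 0.5 + 25.0 * (1.0 - x**2) / (1.0 + x**2)**2     # df/dx
dh = lambda x: x / 10.0                                        # dh/dx
Q, R = 1.0, 1.0                                                # noise variances (assumed)

def ekf_step(x_est, P, z):
    # Predict: propagate the estimate and linearize f around it.
    x_pred = f(x_est)
    F = df(x_est)
    P_pred = F * P * F + Q
    # Update: linearize h around the prediction and correct with the measurement z.
    H = dh(x_pred)
    S = H * P_pred * H + R
    K = P_pred * H / S
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.1, 1.0
x, P = ekf_step(x, P, z=0.5)   # one filtering cycle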

3.
Summary. In a shared-memory distributed system, n independent asynchronous processes communicate by reading and writing shared variables. An algorithm is adaptive (to total contention) if its step complexity depends only on the actual number, k, of active processes in the execution; this number is unknown in advance and may differ between executions of the algorithm. Adaptive algorithms are inherently wait-free, providing fault tolerance in the presence of an arbitrary number of crash failures and arbitrary differences in process speeds. A wait-free adaptive collect algorithm with O(k) step complexity is presented, together with its applications in wait-free adaptive algorithms for atomic snapshots, immediate snapshots, and renaming. Received: August 1999 / Accepted: August 2001
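To make the collect primitive concrete: in the trivial non-adaptive scheme each process announces its value in a dedicated single-writer register, and a collect reads all n registers, so the step complexity is O(n) rather than O(k). A minimal sketch of that baseline (shared registers modelled as a Python list; this is only the naive scheme the adaptive algorithms above improve on):

class CollectObject:
    def __init__(self, n):
        self.regs = [None] * n          # one single-writer register per process

    def store(self, pid, value):
        self.regs[pid] = value          # process pid announces its latest value

    def collect(self):
        # Reads every register, so this costs O(n) steps even if only
        # k << n processes ever stored anything.
        return [(i, v) for i, v in enumerate(self.regs) if v is not None]

obj = CollectObject(n=8)
obj.store(3, "x3")
obj.store(5, "x5")
print(obj.collect())   # [(3, 'x3'), (5, 'x5')]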

4.
This study aims to provide adaptive communication services for real-time applications. In particular, how to admit adaptive real-time connections is addressed. Various search techniques in connection with admission control are investigated, and their trade-offs under different performance criteria are assessed. The implementation and application of this new connection model in the NetEx system are also discussed.

5.
6.
Algorithms for the maximum flow problem can be grouped into two categories: augmenting-path algorithms [Ford LR, Fulkerson DR. Flows in networks. Princeton University Press: Princeton, NJ; 1962] and preflow-push algorithms [Goldberg AV, Tarjan RE. A new approach to the maximum flow problem. In: Proceedings of the 18th annual ACM symposium on theory of computing, 1986; p. 136–46]. Preflow-push algorithms suffer from a drawback known as the ping-pong effect. In this paper we present a technique that avoids this effect and can be viewed as an approach combining the augmenting-path and preflow-push methods. Extensive experiments show the effectiveness of the proposed approach.
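For context, a compact sketch of the augmenting-path family (Edmonds–Karp: repeatedly find a shortest augmenting path by breadth-first search and push flow along it). This is the textbook baseline only, not the combined augmenting-path/preflow-push technique proposed in the paper:

from collections import deque

def max_flow(cap, s, t):
    """cap: dict of dicts of residual capacities; every node must appear as a key.
    Returns the maximum s-t flow via BFS shortest augmenting paths."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:        # BFS in the residual graph
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                         # no augmenting path remains
        path, v = [], t                         # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                       # push flow, update residual arcs
            cap[u][v] -= bottleneck
            cap[v][u] = cap[v].get(u, 0) + bottleneck
        flow += bottleneck

# Example: 4-node network whose maximum flow from "s" to "t" is 4.
g = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(g, "s", "t"))   # 4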

7.
Kuo-Liang, Wan-Yu. Pattern Recognition, 2003, 36(12): 2793–2804
Thresholding is a fundamental operation in image processing. Based on the pairwise nearest neighbor technique and the variance criterion, this paper presents two fast adaptive thresholding algorithms. The first proposed algorithm takes O((m−k)mτ) time, where k denotes the number of thresholds specified by the user, m denotes the size of the compact image histogram, and the parameter τ satisfies 1 ≤ τ ≤ m. On a set of different real images, experimental results reveal that the first proposed algorithm is considerably faster than the three previous algorithms, which need O(mk) time, while having a good feature-preserving capability. Given a specific peak signal-to-noise ratio (PSNR), we further present a second thresholding algorithm that determines the smallest number of thresholds needed to obtain a thresholded image satisfying the given PSNR. The second proposed algorithm takes O((m−k)mτ + γN) time, where N and γ denote the image size and the fewest number of thresholds required, respectively. Experiments demonstrate that the resulting thresholded images are encouraging. Since the time complexities of the two proposed algorithms are polynomial, they can meet the real-time demands of image preprocessing.
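A rough sketch of the pairwise-nearest-neighbour/variance idea behind such histogram-based thresholding: greedily merge the adjacent pair of histogram classes whose merge increases the weighted scatter the least, until only k+1 classes (k thresholds) remain. This illustrates the general idea only, under assumed data structures; it is not the authors' algorithm and does not reproduce its stated complexity:

def merge_histogram(hist, k):
    """Greedy pairwise-nearest-neighbour merging of a grey-level histogram.
    hist: list of (grey_level, count) pairs; returns k thresholds."""
    # Each class is (total pixel count, weighted mean grey level).
    classes = [(c, float(g)) for g, c in hist if c > 0]

    def merge_cost(a, b):
        # Increase in within-class scatter if classes a and b were merged (Ward-style).
        (na, ma), (nb, mb) = a, b
        return na * nb / (na + nb) * (ma - mb) ** 2

    while len(classes) > k + 1:
        # Merge the adjacent pair that is cheapest under the variance criterion.
        i = min(range(len(classes) - 1),
                key=lambda j: merge_cost(classes[j], classes[j + 1]))
        (na, ma), (nb, mb) = classes[i], classes[i + 1]
        classes[i:i + 2] = [(na + nb, (na * ma + nb * mb) / (na + nb))]
    # Place each threshold midway between the means of consecutive classes.
    return [(classes[j][1] + classes[j + 1][1]) / 2 for j in range(len(classes) - 1)]

hist = [(0, 10), (1, 12), (2, 1), (200, 9), (201, 11), (255, 2)]
print(merge_histogram(hist, k=1))   # one threshold between the dark and bright modes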

8.
The k-clique problem is a cornerstone of NP-completeness and parameterized complexity. When k is a fixed constant, the asymptotically fastest known algorithm for finding a k-clique in an n-node graph runs in O(n^(0.792k)) time (given by Nešetřil and Poljak). However, this algorithm is infamously inapplicable, as it relies on Coppersmith and Winograd's fast matrix multiplication. We present good combinatorial algorithms for solving k-clique problems. These algorithms do not require large constants in their runtime, can be readily implemented in any reasonable random-access model, and are very space-efficient compared to their algebraic counterparts. Our results are the following (a naive brute-force baseline is sketched after this entry for contrast):
We give an algorithm for k-clique that runs in O(n^k / (ε log n)^(k−1)) time and O(n^ε) space, for all ε > 0, on graphs with n nodes. This is the first algorithm to take o(n^k) time and O(n^c) space for c independent of k.
Let k be even. Define a k-semiclique to be a k-node graph G that can be divided into two disjoint subgraphs U = {u_1, …, u_{k/2}} and V = {v_1, …, v_{k/2}} such that U and V are cliques, and for all i ≤ j, the graph G contains the edge {u_i, v_j}. We give an algorithm for determining whether a graph has a k-semiclique. This yields an approximation algorithm for k-clique in the following sense: if a given graph contains a k-clique, then our algorithm returns a subgraph with at least 3/4 of the edges of a k-clique.
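For contrast, the naive brute-force baseline referred to above: extend partial cliques through common neighbourhoods, which takes roughly O(n^k) time for constant k. A minimal sketch (adjacency sets assumed):

def has_k_clique(adj, k):
    """adj: dict mapping each vertex to the set of its neighbours."""
    def extend(clique, candidates, k_left):
        if k_left == 0:
            return True
        for v in list(candidates):
            # Every candidate is adjacent to all vertices already in the clique,
            # so restrict candidates to v's neighbours and recurse.
            if extend(clique + [v], candidates & adj[v], k_left - 1):
                return True
        return False
    return extend([], set(adj), k)

triangle_plus_edge = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(has_k_clique(triangle_plus_edge, 3))   # True  (vertices 1, 2, 3)
print(has_k_clique(triangle_plus_edge, 4))   # False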

9.
10.
Suppose we are given a set S of n (possibly intersecting) simple objects in the plane such that, for every pair of objects in S, the intersection of the boundaries of these two objects has O(1) connected components. We consider the problem of determining whether there exists a straight line that goes through every object in S. We give an O(n log n γ(n)) time algorithm for this problem, where γ(n) is a very slowly growing function of n. In many cases, our algorithm runs in O(n log n) time. Previously, only special cases of this problem were considered: the case when every object is a straight-line segment (Edelsbrunner et al., 1982), the case when the objects are equal-radius circles (Bajaj and Li, 1983), and the case when objects all maintain the same orientation (Edelsbrunner, 1985). All these cases follow from our general approach, which places no constraints on the size and/or configuration of the objects in S.

11.
Efficient parallel hierarchical clustering algorithms
Clustering of data has numerous applications and has been studied extensively. Though most of the algorithms in the literature are sequential, many parallel algorithms have also been designed. In this paper, we present parallel algorithms with better performance than previously known algorithms. We consider algorithms that work well in the worst case as well as algorithms with good expected performance.

12.
Dynamic redistribution of arrays is required very often in programs on distributed-memory machines. This paper presents efficient algorithms for redistribution between different cyclic(k) distributions, as defined in High Performance Fortran. We first propose special optimized algorithms for a cyclic(x) to cyclic(y) redistribution when x is a multiple of y, or y is a multiple of x. We then propose two algorithms, called the GCD method and the LCM method, for the general cyclic(x) to cyclic(y) redistribution when there is no particular relation between x and y. We have implemented these algorithms on the Intel Touchstone Delta and find that they perform well for different array sizes and numbers of processors.
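To make the cyclic(x) to cyclic(y) problem concrete, a small sketch that computes which elements each processor must send where when the distribution changes; the array size, processor count, and block sizes are arbitrary example values, and the paper's GCD/LCM communication schedules are not reproduced:

def owner(i, block, nprocs):
    # Processor owning global element i under a cyclic(block) distribution.
    return (i // block) % nprocs

def redistribution_plan(n, x, y, nprocs):
    """Map each global index to its (source, destination) processors when
    going from cyclic(x) to cyclic(y); only elements that move are listed."""
    plan = {}
    for i in range(n):
        src, dst = owner(i, x, nprocs), owner(i, y, nprocs)
        if src != dst:
            plan.setdefault((src, dst), []).append(i)
    return plan

# 16 elements, 4 processors, cyclic(4) -> cyclic(2)
for (src, dst), idx in sorted(redistribution_plan(16, 4, 2, 4).items()):
    print(f"P{src} -> P{dst}: {idx}")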

13.
In the last decade, new forecasting methods were developed that differ from traditional statistical methods. In particular, it is possible to “efficiently” predict any sequence of outcomes without using any hypothesis about the nature of the source generating it. In the present paper, a modified version of the universal forecasting algorithm is considered. The main part of the paper is devoted to an algorithmic analysis of universal forecasting methods and to exploring the limits of their performance.

14.
The speed of the adaptive response of Lyapunov-designed adaptive systems is related, to some extent, to the degree of stability of the system. A generalized quadratic Lyapunov function is constructed that allows several new adaptive algorithms to be synthesized. Each of the new algorithms improves the degree of stability over previously reported adaptive algorithms. A root-locus analysis substantiates the improvement in response speed. While the framework of this short paper is that of full-state model-reference adaptation, the synthesis is applicable to systems employing output measurement, such as reduced-order model-reference systems [5] or an adaptive observer [20].
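For reference, the standard full-state model-reference construction that such quadratic-Lyapunov syntheses build on can be stated as follows; this is only the textbook form under an assumed matched-uncertainty plant, not the paper's generalized Lyapunov function:

% Plant with matched parametric uncertainty and reference model (assumed setup):
%   \dot{x} = A_m x + B\,(u + \theta^{\top}\omega), \qquad \dot{x}_m = A_m x_m + B r
% Control u = r - \hat{\theta}^{\top}\omega gives, with e = x - x_m and \tilde{\theta} = \hat{\theta} - \theta,
\dot{e} = A_m e - B\,\tilde{\theta}^{\top}\omega .
% Quadratic Lyapunov function and the Lyapunov equation tied to the degree of stability:
V(e,\tilde{\theta}) = e^{\top} P e + \operatorname{tr}\!\bigl(\tilde{\theta}^{\top}\Gamma^{-1}\tilde{\theta}\bigr),
\qquad A_m^{\top} P + P A_m = -Q, \quad P = P^{\top} \succ 0,\; Q \succ 0 .
% The adaptation law \dot{\hat{\theta}} = \Gamma\,\omega\, e^{\top} P B cancels the cross term, so
\dot{V} = -\,e^{\top} Q e \le 0 .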

15.
The paper presents a grammatical inference methodology for the generation of visual languages that benefits from the availability of semantic information about the sample sentences. Several well-known syntactic inference algorithms are shown to obey a general inference scheme, which the authors call the Gen-Inf scheme. All the algorithms of the Gen-Inf scheme are then modified in agreement with the introduced semantics-based inference methodology. The use of grammatical inference techniques in the design of adaptive user interfaces was previously explored with the VLG system for visual language generation. The system is a powerful tool for specifying, designing, and interpreting customized visual languages for different applications. The authors enhance the adaptivity of the VLG system to any visual environment by exploiting the proposed semantics-based inference methodology. As a result, a more general model of visual language generation is achieved, based on the Gen-Inf scheme, where the end-user is allowed to choose the algorithm which best fits his/her requirements within the particular application environment.

16.
The finite-model adaptive control problem is studied for a class of discrete-time nonlinear uncertain systems. This problem was motivated by recent efforts on the capability and limitations of the feedback mechanism and is characterized by “essentially” finite internal uncertainties. To solve this type of problem, we introduce several approaches based on different ideas (controller falsification, controller combination, and pseudo-parameter estimation) to design the feedback control law, and we rigorously establish the stability of the closed-loop system for several typical algorithms within these approaches. Our results show that, under reasonably weak conditions, capable feedback control laws exist for dealing with the finite internal uncertainties of the system. Together with related results in companion papers, these results provide partial answers to the finite-model adaptive control problem and may lead to a deeper understanding of the capability of the whole feedback mechanism.
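As a toy illustration of the pseudo-parameter-estimation idea only: assume a scalar system y_{t+1} = a·y_t + u_t whose unknown gain a lies in a finite candidate set, pick the candidate with the smallest cumulative prediction error, and apply its certainty-equivalence control. The model and law are illustrative assumptions, not taken from the paper:

def finite_model_control(candidates, y, u, ref):
    """Select the candidate parameter best explaining the data so far,
    then return it and the certainty-equivalence control input."""
    def prediction_error(a):
        # Squared one-step prediction error of the model y_{t+1} = a*y_t + u_t.
        return sum((y[t + 1] - (a * y[t] + u[t])) ** 2 for t in range(len(u)))
    a_hat = min(candidates, key=prediction_error)   # best unfalsified model so far
    return a_hat, ref - a_hat * y[-1]               # u_t chosen so a_hat*y_t + u_t = ref

# Closed-loop simulation: the true parameter 2.0 is hidden among three candidates.
a_true, candidates = 2.0, [0.5, 2.0, 4.0]
y, u = [1.0], []
for _ in range(5):
    a_hat, u_t = finite_model_control(candidates, y, u, ref=0.0)
    u.append(u_t)
    y.append(a_true * y[-1] + u[-1])
print(y)   # the output is driven to the reference once the true model is identified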

17.
Efficient parallel algorithms for graph problems
We present an efficient technique for parallel manipulation of data structures that avoids memory access conflicts. That is, this technique works on the Exclusive Read/Exclusive Write (EREW) model of computation, which is the weakest shared-memory MIMD machine model. It is used in a new parallel radix sort algorithm that is optimal for keys whose values are over a small range. Using the radix sort and known results for parallel prefix on linked lists, we develop parallel algorithms that efficiently solve various computations on trees and unicyclic graphs. Finally, we develop parallel algorithms for connected components, spanning trees, minimum spanning trees, and other graph problems. All of the graph algorithms achieve linear speedup for all but the sparsest graphs. Part of this work was done while the first author was at the University of Illinois, Urbana-Champaign, the second author was at Carnegie-Mellon University, and the third author was at the Hebrew University and the Courant Institute of Mathematical Sciences, New York University. A preliminary version of this work was presented at the 1986 International Conference on Parallel Processing.

18.
We present an efficient technique for parallel manipulation of data structures that avoids memory access conflicts. That is, this technique works on the Exclusive Read/Exclusive Write (EREW) model of computation, which is the weakest shared-memory MIMD machine model. It is used in a new parallel radix sort algorithm that is optimal for keys whose values are over a small range. Using the radix sort and known results for parallel prefix on linked lists, we develop parallel algorithms that efficiently solve various computations on trees and “unicyclic graphs.” Finally, we develop parallel algorithms for connected components, spanning trees, minimum spanning trees, and other graph problems. All of the graph algorithms achieve linear speedup for all but the sparsest graphs.

19.
We present four polylog-time parallel algorithms for matching parentheses on an exclusive-read and exclusive-write (EREW) parallel random-access machine (PRAM) model. These algorithms provide new insights into the parentheses-matching problem. The first algorithm has a time complexity of O(log^2 n) employing O(n/log n) processors for an input string containing n parentheses. Although this algorithm is not cost-optimal, it is extremely simple to implement. The remaining three algorithms, which are based on a different approach, achieve O(log n) time complexity in each case and represent successive improvements. The second algorithm requires O(n) processors and working space, and it is comparable to the first algorithm in its ease of implementation. The third algorithm uses O(n/log n) processors and O(n log n) space. Thus, it is cost-optimal but uses extra space compared to the standard stack-based sequential algorithm. The last algorithm reduces the space complexity to O(n) while maintaining the same processor and time complexities. Compared to other existing time-optimal algorithms for the parentheses-matching problem, which either employ extensive pipelining or use linked lists and comparable data structures and employ sorting or a linked-list ranking algorithm as subroutines, the last two algorithms have two distinct advantages. First, they employ arrays as their basic data structures; second, they do not use any pipelining, sorting, or linked-list ranking algorithms.
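The standard stack-based sequential algorithm mentioned above (the linear-time, linear-space baseline against which the parallel algorithms' space is compared) is simply:

def match_parentheses(s):
    """Return a list where match[i] is the index of the parenthesis matching s[i]."""
    match, stack = [None] * len(s), []
    for i, c in enumerate(s):
        if c == '(':
            stack.append(i)
        elif c == ')':
            if not stack:
                raise ValueError(f"unmatched ')' at position {i}")
            j = stack.pop()
            match[i], match[j] = j, i
    if stack:
        raise ValueError(f"unmatched '(' at position {stack[-1]}")
    return match

print(match_parentheses("(()())"))   # [5, 2, 1, 4, 3, 0]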

20.
Efficient algorithms for large-scale temporal aggregation
The ability to model time-varying data is essential to many database applications, such as data warehousing and mining. However, the temporal aspects present many unique characteristics and challenges for query processing and optimization. Among the challenges is computing temporal aggregates, which is complicated by having to compute temporal grouping. We introduce a variety of temporal aggregation algorithms that overcome major drawbacks of previous work. First, for small-scale aggregations, both the worst-case and average-case processing times are improved significantly. Second, for large-scale aggregations, the proposed algorithms can deal with a database that is substantially larger than the available memory. Third, the parallel algorithm, designed on a shared-nothing architecture, achieves scalable performance by delivering nearly linear scale-up and speed-up, even in the presence of data skew. The contributions made in this paper are particularly important because the rate of increase in database size and response-time requirements has outpaced advances in processor and mass-storage technology.
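To make "temporal grouping" concrete, a small sketch that computes a temporal COUNT by sweeping interval endpoints, so that each reported period has a constant aggregate value; it illustrates the grouping problem only, not the paper's small-scale, large-scale, or parallel algorithms:

def temporal_count(intervals):
    """intervals: list of (start, end) tuples, end exclusive.
    Returns (period_start, period_end, count) triples with constant count."""
    events = sorted({t for s, e in intervals for t in (s, e)})
    result = []
    for lo, hi in zip(events, events[1:]):
        # Count tuples whose validity period covers the whole sub-period [lo, hi).
        count = sum(1 for s, e in intervals if s <= lo and hi <= e)
        if count:
            result.append((lo, hi, count))
    return result

# Three tuples with overlapping validity periods.
print(temporal_count([(1, 10), (5, 15), (8, 12)]))
# [(1, 5, 1), (5, 8, 2), (8, 10, 3), (10, 12, 2), (12, 15, 1)]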
