Similar Documents
 20 similar documents found.
1.
We present a decentralized, symmetric mutual exclusion algorithm tailored to the hypercube architecture. Our algorithm has better time complexity than a suitable hypercube adaptation of Maekawa's O(√N) mutual exclusion algorithm. We show that, in the presence of contention, our algorithm has a shorter blocking delay than Maekawa's algorithm. In the absence of contention, our algorithm achieves optimal round-trip delay under both the n-port and the one-port communication models.
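For context, the Maekawa baseline referred to above builds O(√N) quorums from a logical grid of processes. A minimal sketch of that construction (assuming N is a perfect square; this is an illustration of Maekawa's scheme, not code from the paper):

```python
import math

def grid_quorum(i, n):
    """Maekawa-style quorum of size O(sqrt(N)): arrange the N processes
    in a sqrt(N) x sqrt(N) grid; process i's quorum is its entire row
    plus its entire column. Sketch assumes N is a perfect square."""
    k = math.isqrt(n)
    assert k * k == n, "sketch assumes N is a perfect square"
    r, c = divmod(i, k)
    row = {r * k + j for j in range(k)}
    col = {j * k + c for j in range(k)}
    return row | col
```

Any two such quorums intersect (one process's row crosses the other's column), which is what lets a process enter the critical section after collecting permission from only its own quorum.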

2.
This paper describes a system-level diagnosis algorithm for hypercube multicomputer systems. The algorithm is based on the PMC model and can isolate all faulty processors to within a set that contains at most one fault-free processor. If N denotes the total number of processors in the hypercube system to be diagnosed, then, thanks to judiciously designed data structures, the algorithm runs in O(N log₂N) time, whereas the best previously known diagnosis algorithm, the YML algorithm, runs in O(N^2.5) time. The new algorithm is therefore markedly superior to the YML algorithm in time cost.

3.
We provide in this article a branch-and-bound algorithm for finding the k closest pairs of points (p, q), p ∈ P, q ∈ Q, between two sets of points P and Q in the Euclidean plane that are stored in external memory, assuming that only one of the sets has a spatial index. This problem arises naturally in many scenarios, for instance when the set without an index is the answer to a spatial query. The main idea of our algorithm is to partition the space occupied by the unindexed set into cells or subspaces and to exploit the properties of a set of metrics defined on pairs of Minimum Bounding Rectangles (MBRs). We evaluated the algorithm for different values of k in a series of experiments on both synthetic and real-world datasets, comparing its performance with that of techniques that assume either that both datasets have a spatial index or that neither does. The results show that our algorithm needs only between 0.3% and 35% of the disk accesses required by those techniques. It also scales well, both in k and in the size of the dataset.
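One standard MBR metric such a branch-and-bound can prune with is mindist, the smallest possible distance between any two points inside two rectangles. A minimal sketch (illustrative of the metric, not the paper's code):

```python
def mindist(mbr_a, mbr_b):
    """Minimum possible Euclidean distance between any two points
    contained in two axis-aligned MBRs, each given as
    ((xmin, ymin), (xmax, ymax))."""
    (ax1, ay1), (ax2, ay2) = mbr_a
    (bx1, by1), (bx2, by2) = mbr_b
    # Per-axis gap between the projections; zero when they overlap.
    dx = max(bx1 - ax2, ax1 - bx2, 0.0)
    dy = max(by1 - ay2, ay1 - by2, 0.0)
    return (dx * dx + dy * dy) ** 0.5
```

A (cell, index-node) pair can then be discarded whenever its mindist already exceeds the k-th smallest pair distance found so far, since no pair inside it can improve the answer.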

4.
As a novel evolutionary technique, particle swarm optimization (PSO) has received increasing attention and wide application in a variety of fields. To our knowledge, this paper is the first application of PSO to the parallel machine scheduling problem. Using update equations analogous to those of classical PSO, we present a discrete PSO algorithm (DPSO) that minimizes the makespan (Cmax) criterion. We also investigate the effectiveness of hybridizing DPSO with an efficient local search heuristic. To verify the performance of DPSO and its hybridized version (HDPSO), we compare both against a recently proposed simulated annealing algorithm for the problem from the literature. Computational results show that the proposed DPSO algorithm is very competitive and improves rapidly when hybridized with a local search heuristic.
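Whatever particle encoding such a DPSO uses, the fitness it minimizes is the makespan of the decoded job-to-machine assignment. A generic evaluation sketch (identical parallel machines assumed; not the authors' code):

```python
def makespan(assignment, proc_times, n_machines):
    """Cmax of a schedule on identical parallel machines:
    assignment[j] is the machine (0..n_machines-1) that job j runs on."""
    loads = [0.0] * n_machines
    for job, machine in enumerate(assignment):
        loads[machine] += proc_times[job]
    return max(loads)
```

For example, makespan([0, 1, 0], [4.0, 2.0, 3.0], 2) returns 7.0, since machine 0 runs jobs 0 and 2.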

5.
A certifying algorithm for a problem is an algorithm that provides, with each answer it produces, a certificate: evidence that can be used to authenticate the correctness of the answer. A Hamiltonian cycle in a graph is a simple cycle in which each vertex of the graph appears exactly once, and the Hamiltonian cycle problem is to determine whether or not a graph contains one. The best previous result for the Hamiltonian cycle problem on circular-arc graphs is an O(n² log n)-time algorithm, where n is the number of vertices of the input graph. That algorithm can in fact be modified into a certifying algorithm, although it was published before the term "certifying algorithm" appeared in the literature. Whether an algorithm faster than O(n² log n) exists for the Hamiltonian cycle problem on circular-arc graphs, however, has remained open for two decades. In this paper, we present an O(Δn)-time certifying algorithm for this problem, where Δ denotes the maximum degree of the input graph. The certificates provided by our algorithm can be authenticated in O(n) time.
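For a yes-instance the natural certificate is the cycle itself, which a checker can authenticate in linear time. A minimal verifier sketch (assuming hash-set adjacency lists, so each membership test is expected O(1)):

```python
def verify_hamiltonian_cycle(adj, cycle):
    """Expected O(n) check that `cycle` is a Hamiltonian cycle.
    adj: dict mapping each vertex to a set of its neighbours;
    cycle: vertices in visiting order, without repeating the start."""
    n = len(adj)
    if len(cycle) != n or len(set(cycle)) != n:
        return False  # must visit every vertex exactly once
    # every consecutive pair (including last -> first) must be an edge
    return all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n))
```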

6.
Very recently, Chen and Chen [Y. Chen, Y. Chen, A new tree inclusion algorithm, Information Processing Letters 98 (2006) 253-262] gave a new algorithm for the tree inclusion problem that requires O(|T| × min{depth(P), |leaves(P)|}) time and no extra space. In this Note, we show that there are flaws in their time-complexity analysis by presenting two counterexamples, and we give an example showing that the worst-case time complexity of their algorithm is in fact non-polynomial. Consequently, the asymptotically most efficient algorithm for the tree inclusion problem remains the earlier one in [W. Chen, More efficient algorithm for ordered tree inclusion, Journal of Algorithms 26 (1998) 370-385].

7.
Given a graph G and a bound d ≥ 2, the bounded-diameter minimum spanning tree problem seeks a spanning tree on G of minimum weight subject to the constraint that its diameter does not exceed d. This problem is NP-hard, and several heuristics have been proposed to find near-optimal solutions in reasonable time. We describe a decentralized learning automata-based algorithm that creates spanning trees honoring the diameter constraint. The algorithm rewards a tree if it has the smallest weight found so far and penalizes it otherwise. As the algorithm proceeds, the choice probability of the best tree converges to one, and the algorithm halts when this probability exceeds a predefined value. Experiments confirm the superiority of the algorithm over other heuristics in terms of both speed and solution quality.
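The reward/penalty step in such schemes is typically a linear reward-penalty (L_RP) automaton update. A generic sketch (the scheme and learning rates here are illustrative assumptions, not taken from the paper):

```python
def update_probs(probs, chosen, reward, alpha=0.1, beta=0.05):
    """Linear reward-penalty (L_RP) update for a learning automaton.
    probs: action probabilities (sum to 1); chosen: action just taken;
    reward: True if the environment rewarded that action."""
    k = len(probs)
    for i in range(k):
        if reward:
            # reinforce the chosen action, shrink the others
            probs[i] += alpha * (1 - probs[i]) if i == chosen else -alpha * probs[i]
        else:
            # penalize the chosen action, spreading mass to the others
            probs[i] += -beta * probs[i] if i == chosen else beta * (1 / (k - 1) - probs[i])
    return probs
```

Both branches preserve the probability normalization, so repeated rewards drive the chosen action's probability toward one, matching the convergence/halting rule described in the abstract.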

8.
9.
We present a highly concurrent priority queue algorithm based on the B-link tree, a B+-tree in which every node has a pointer to its right sibling. The algorithm builds on existing concurrent B-link tree algorithms; because the priority queue rests on highly concurrent search-structure algorithms, a large number of insert operations can execute concurrently with little or no interference. We first present an algorithm that executes deletemin operations serially, then extend it to allow both parallel and concurrent deletemin operations. We discuss a decisive-operation serializable algorithm that permits concurrent deletemin operations, and an algorithm in which p processors cooperate to perform p deletemin operations in O(log p) time.

10.
We present an engineered version of the divide-and-conquer algorithm for finding the closest pair of points within a given set of points in the xy-plane. For this version of the algorithm we show that only two pairwise comparisons are required in the combine step for each point that lies in the 2δ-wide vertical slab. The correctness of the algorithm is shown for all Minkowski distances with p ≥ 1. We also show empirically that, although the time complexity of the algorithm is still O(n lg n), the reduction in the total number of comparisons leads to a significant reduction in total execution time for sufficiently large inputs.
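For reference, the textbook combine step scans the slab points in y-order and compares each against the few successors within the current best distance; the engineered version above tightens that to two comparisons per point. A sketch of the classic step (not the paper's optimized variant):

```python
def combine(slab_sorted_by_y, delta):
    """Combine step of closest-pair divide and conquer (classic form).
    slab_sorted_by_y: points with |x - x_mid| < delta, sorted by y;
    delta: the smaller of the two half-problems' closest distances.
    Returns the overall closest distance among slab pairs and delta."""
    best = delta
    pts = slab_sorted_by_y
    for i, (x1, y1) in enumerate(pts):
        j = i + 1
        # only successors closer than `best` vertically can improve it
        while j < len(pts) and pts[j][1] - y1 < best:
            x2, y2 = pts[j]
            best = min(best, ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5)
            j += 1
    return best
```

Because the slab is only 2δ wide, each inner loop runs a constant number of times, keeping the combine step linear.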

11.
Information Fusion, 2008, 9(2): 223-233
Clustering categorical data is an integral part of data mining and has attracted much attention recently. In this paper, we present k-ANMI, a new efficient algorithm for clustering categorical data. The k-ANMI algorithm works in a way similar to the popular k-means algorithm, and the goodness of clustering in each step is evaluated using a mutual information-based criterion, average normalized mutual information (ANMI), borrowed from the cluster ensemble literature. The algorithm is easy to implement, requiring multiple hash tables as its only major data structure. Experimental results on real datasets show that k-ANMI is competitive with state-of-the-art categorical data clustering algorithms in clustering accuracy.
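The building block of ANMI is the normalized mutual information between two clusterings; the ANMI of a candidate clustering is then its average NMI against every clustering in the ensemble. A minimal NMI sketch using the geometric-mean normalization common in the cluster ensemble literature (illustrative, not the paper's code):

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """Normalized mutual information between two clusterings of the
    same n points, given as parallel label sequences."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    cab = Counter(zip(labels_a, labels_b))
    # I(A;B) = sum p(a,b) * log( p(a,b) / (p(a) p(b)) )
    mi = sum(c / n * log(n * c / (ca[a] * cb[b])) for (a, b), c in cab.items())
    ha = -sum(c / n * log(c / n) for c in ca.values())  # entropy H(A)
    hb = -sum(c / n * log(c / n) for c in cb.values())  # entropy H(B)
    return mi / ((ha * hb) ** 0.5) if ha > 0 and hb > 0 else 0.0
```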

12.
We present the first in-place algorithm for solving Klee's measure problem for a set of n axis-parallel rectangles in the plane. Our algorithm runs in O(n^{3/2} log n) time and uses O(1) extra words beyond the space needed to represent the input. The algorithm is surprisingly simple and thus likely to yield an implementation of practical interest. As a byproduct, we develop an optimal algorithm for solving Klee's measure problem for a set of n intervals; it runs in optimal O(n log n) time and uses O(1) extra space.
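The one-dimensional byproduct is easy to state concretely: Klee's measure for intervals is the total length of their union. A sort-and-sweep sketch in O(n log n) time (the paper's version additionally achieves O(1) extra space, which this sketch does not attempt):

```python
def union_length(intervals):
    """Total length covered by a set of intervals given as (l, r) pairs."""
    total, cur_l, cur_r = 0.0, None, None
    for l, r in sorted(intervals):
        if cur_r is None or l > cur_r:
            # disjoint from the current run: flush it and start a new one
            if cur_r is not None:
                total += cur_r - cur_l
            cur_l, cur_r = l, r
        else:
            # overlapping: extend the current run
            cur_r = max(cur_r, r)
    if cur_r is not None:
        total += cur_r - cur_l
    return total
```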

13.
We present a deadlock-free, fully adaptive, minimal routing algorithm for meshes that is optimal both in the number of virtual channels required and in the number of restrictions placed on the use of those channels. We also prove that, ignoring symmetry, this is the only fully adaptive routing algorithm with uniform routers that achieves both goals. The new algorithm requires only 4n − 2 virtual channels for an n-dimensional mesh. In addition, if more than the minimum number of virtual channels is available, the routing algorithm can use the additional channels with the fewest possible restrictions. The proofs are presented first for the 2D mesh and then generalized to n-dimensional meshes.

14.
This paper presents a hybrid filter-wrapper feature subset selection algorithm based on particle swarm optimization (PSO) for support vector machine (SVM) classification. The filter model is based on mutual information and is a composite measure of feature relevance and redundancy with respect to the feature subset selected so far. The wrapper model is a modified discrete PSO algorithm. The hybrid, called maximum relevance minimum redundancy PSO (mr2PSO), is novel in that it uses the mutual information available from the filter model to weight the bit selection probabilities in the discrete PSO, thereby bringing together the efficiency of filters and the greater accuracy of wrappers. The proposed algorithm is tested on several well-known benchmark datasets, and its performance is compared with a recent hybrid filter-wrapper algorithm based on a genetic algorithm and with a wrapper algorithm based on PSO. The results show that mr2PSO is competitive in terms of both classification accuracy and computational performance.
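To illustrate the general idea of filter-weighted bit selection in a sigmoid-based discrete PSO: the blending rule and the mRMR-style score below are illustrative assumptions, not the paper's exact formulas.

```python
import math
import random

def select_bit(v, relevance, redundancy, lam=0.5):
    """Decide whether a feature's bit is set to 1.
    v: PSO velocity for this bit (wrapper side);
    relevance/redundancy: mutual-information estimates (filter side);
    lam: trade-off weight in the mRMR-style score."""
    base = 1.0 / (1.0 + math.exp(-v))               # standard binary-PSO sigmoid
    filt = max(relevance - lam * redundancy, 0.0)   # mRMR-style filter score
    p = base * (0.5 + 0.5 * min(filt, 1.0))         # filter score reweights the probability
    return 1 if random.random() < p else 0
```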

15.
In a ground-breaking paper that appeared in 1983, Ben-Or presented the first randomized algorithm to solve consensus in an asynchronous message-passing system where processes can fail by crashing. Although more efficient randomized algorithms were subsequently proposed, Ben-Or's algorithm is still the simplest and most elegant one; for this reason, it is often taught in distributed computing courses and appears in several textbooks. Yet even though the algorithm is widely known and very simple, a proof of its correctness has surprisingly never appeared: previously published proofs make simplifying assumptions. Specifically, they either assume that f < n/3 (where n is the total number of processes and f is the maximum number of processes that may crash) or that the adversary is weak, that is, unable to see the process states or the content of messages. In this paper, we present a correctness proof for Ben-Or's randomized consensus algorithm for the case where up to f < n/2 processes may crash and the adversary is strong (i.e., it can see the process states and message contents, and schedule the process steps and message receipts accordingly). To the best of our knowledge, this is the first full proof of this classical algorithm. We also demonstrate a counterintuitive problem that may occur if one uses the well-known abstraction of a "global coin" to modularize and speed up randomized consensus algorithms such as Ben-Or's. Specifically, we show that, contrary to common belief, the use of a global coin can sometimes be deleterious rather than beneficial: instead of speeding up Ben-Or's algorithm, it may actually prevent termination.
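To make the round structure concrete, here is a lockstep, failure-free toy simulation of one Ben-Or round for the crash model with f < n/2. This is an illustration only: the real algorithm is asynchronous, and each process waits for just n − f messages per phase, which this sketch does not model.

```python
import random
from collections import Counter

def ben_or_round(estimates, f):
    """One lockstep, failure-free round of Ben-Or's binary consensus.
    estimates: each process's current 0/1 estimate.
    Returns (new estimates, {process: decided value})."""
    n = len(estimates)
    # Phase 1 (report): a value may be proposed only if a strict
    # majority of the reports carry it; otherwise processes propose "?".
    reports = Counter(estimates)
    majority = next((v for v, c in reports.items() if 2 * c > n), None)
    proposals = [majority] * n            # None plays the role of "?"
    # Phase 2 (propose): decide v on >= f + 1 proposals for v;
    # adopt v on at least one; otherwise flip a local coin.
    prop_counts = Counter(v for v in proposals if v is not None)
    decided, new_estimates = {}, []
    for p in range(n):
        if prop_counts:
            (v, c), = prop_counts.most_common(1)
            new_estimates.append(v)
            if c >= f + 1:
                decided[p] = v
        else:
            new_estimates.append(random.choice((0, 1)))  # local coin
    return new_estimates, decided
```

Iterating ben_or_round until `decided` is non-empty shows the key behavior: once the estimates agree, every subsequent round decides, while disagreement is broken (with probability) by the local coins.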

16.
We present three new approximation algorithms with improved constant ratios for selecting n points in n disks such that the minimum pairwise distance among the points is maximized.
  1. A very simple O(n log n)-time algorithm with ratio 0.511 for disjoint unit disks.
  2. An LP-based algorithm with ratio 0.707 for disjoint disks of arbitrary radii that uses a linear number of variables and constraints, and runs in polynomial time.
  3. A hybrid algorithm with ratio either 0.4487 or 0.4674 for (not necessarily disjoint) unit disks that uses an algorithm of Cabello in combination with either the simple O(n log n)-time algorithm or the LP-based algorithm.
The LP-based algorithm can be extended to disjoint balls of arbitrary radii in ℝ^d, for any fixed dimension d, while preserving the features of the planar algorithm. The algorithm introduces a novel technique that combines linear programming and projections for approximating Euclidean distances. The previous best approximation ratio for dispersion in disjoint disks, even when all disks have the same radius, was 1/2; our results give a positive answer to an open question raised by Cabello, who asked whether that ratio could be improved.

17.
Maxima-finding is a fundamental problem in computational geometry with many applications. In this paper, a volume-first maxima-finding algorithm is proposed. We prove that the expected running time of the algorithm is N + o(N) when the points are drawn from a CI distribution, which is a new theoretical result for points in d-dimensional space with d > 2. Experimental results and theoretical analysis indicate that the algorithm runs faster than the Move-To-Front maxima-finding algorithm.
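For comparison, the Move-To-Front baseline mentioned above keeps the candidate maxima in a list and moves whichever maximum last dominated an input point to the front, so frequent dominators are tested first. A minimal sketch (points as tuples; illustrative, not the paper's code):

```python
def maxima(points):
    """Maxima (Pareto-optimal points) of a set of d-dimensional tuples:
    points not dominated coordinate-wise by any other point.
    Uses the move-to-front heuristic on the candidate list."""
    window = []                      # current maxima candidates
    for p in points:
        dominated = False
        for i, q in enumerate(window):
            if q != p and all(qc >= pc for qc, pc in zip(q, p)):
                window.insert(0, window.pop(i))  # dominator moves to front
                dominated = True
                break
        if not dominated:
            # drop candidates that p dominates, then keep p
            window = [q for q in window
                      if not all(pc >= qc for pc, qc in zip(p, q))]
            window.append(p)
    return window
```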

18.
This paper studies the Quality-of-Service (QoS)-aware replica placement problem in a general graph model. Since the problem has been proved NP-hard, heuristic algorithms are the current solutions, but existing ones cannot always find an effective replica placement strategy. We propose two algorithms that obtain better results within a given time budget. The first, the Cover Distance algorithm, is based on the Greedy Cover algorithm. The second is an optimized genetic algorithm that uses randomized heuristics to generate the initial population, avoiding a large amount of useless search; a 0-Greedy-Delete algorithm is then used to optimize the genetic algorithm's solutions. According to the performance evaluation, the Cover Distance algorithm obtains relatively better solutions in time-critical scenarios, whereas the optimized genetic algorithm is better when replica cost has higher priority than algorithm execution time. The QoS-aware data replication heuristics are applied to the data distribution service of an astronomy data grid pipeline prototype, and the operation process is studied in detail.
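Since the Cover Distance algorithm is based on Greedy Cover, the underlying greedy step is worth making concrete. A generic sketch (the QoS-as-distance-bound model and the function names are illustrative assumptions, not the paper's formulation):

```python
def greedy_cover(clients, candidates, qos, dist):
    """Greedy set cover for replica placement: repeatedly place a
    replica at the candidate site that covers the most not-yet-covered
    clients within the QoS distance bound `qos`.
    dist: function (site, client) -> distance in the graph."""
    uncovered = set(clients)
    placed = []
    while uncovered:
        best = max(candidates,
                   key=lambda s: sum(1 for c in uncovered if dist(s, c) <= qos))
        covered = {c for c in uncovered if dist(best, c) <= qos}
        if not covered:
            break                 # remaining clients cannot be served
        placed.append(best)
        uncovered -= covered
    return placed
```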

19.
This paper presents a novel approach to computer virus detection based on modeling the structure and dynamics of the immune system found in all living creatures. We develop an algorithm based on the artificial immune system (AIS) concept for the purpose of detecting viruses. The algorithm, called the Virus Detection Clonal algorithm (VDC), is derived from the clonal selection algorithm and consists of three basic steps: cloning, hyper-mutation, and stochastic re-selection. The VDC algorithm is then validated in two phases, learning and testing, and two main parameters are determined: the number of signatures per clone (Fat) and the hypermutation probability (Pm). A genetic algorithm (GA) is subsequently used to tune Fat and Pm in search of better results. The results show that the developed algorithm achieves a virus detection rate of 94.4% with a false-positive rate of 0%, indicating that the VDC algorithm is effective and usable in this field. Moreover, optimizing the VDC algorithm with the GA improves its detection speed.
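A generic sketch of one generation in the spirit of VDC's three steps; the parameter names follow the abstract, but the exact cloning, mutation, and re-selection scheme in the paper may differ:

```python
import random

def clonal_step(population, affinity, fat=5, pm=0.1):
    """Clone each candidate signature (fat clones each), hypermutate
    the clones bit-wise with probability pm, then stochastically
    re-select survivors by binary tournaments on affinity.
    population: list of bit-string signatures (lists of 0/1 ints)."""
    pool = list(population)                        # keep the originals
    for sig in population:
        for _ in range(fat):                       # cloning
            pool.append([b ^ (random.random() < pm) for b in sig])  # hypermutation
    survivors = []
    for _ in range(len(population)):               # stochastic re-selection
        a, b = random.sample(pool, 2)
        survivors.append(a if affinity(a) >= affinity(b) else b)
    return survivors
```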

20.
In this paper, new efficient pairings on genus 2 hyperelliptic curves of the form C: y² = x⁵ + ax, with embedding degree k satisfying 4 | k, are constructed, improving on the results of Fan et al. (2008) [10]. A variant of Miller's algorithm is then given to compute our pairings; in this algorithm, the Miller function need only be evaluated at two divisors per loop iteration, whereas Fan et al. had to evaluate it at four. Moreover, compared with Fan et al.'s algorithm, the exponentiation calculation is simplified. We finally analyze the computational complexity of our pairings, which shows that our algorithm saves 2036m operations in the base field, i.e., it is 34.1% faster than Fan et al.'s algorithm. Experimental results show that our pairing achieves better performance.
