Similar Literature
20 similar documents found.
1.
We present three new approximation algorithms with improved constant ratios for selecting n points in n disks such that the minimum pairwise distance among the points is maximized.
  1. A very simple O(n log n)-time algorithm with ratio 0.511 for disjoint unit disks.
  2. An LP-based algorithm with ratio 0.707 for disjoint disks of arbitrary radii that uses a linear number of variables and constraints, and runs in polynomial time.
  3. A hybrid algorithm with ratio either 0.4487 or 0.4674 for (not necessarily disjoint) unit disks that uses an algorithm of Cabello in combination with either the simple O(n log n)-time algorithm or the LP-based algorithm.
The LP algorithm can be extended to disjoint balls of arbitrary radii in ℝ^d, for any (fixed) dimension d, while preserving the features of the planar algorithm. The algorithm introduces a novel technique which combines linear programming and projections for approximating Euclidean distances. The previous best approximation ratio for dispersion in disjoint disks, even when all disks have the same radius, was 1/2. Our results give a positive answer to an open question raised by Cabello, who asked whether the ratio 1/2 could be improved.
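To make the objective concrete, here is a minimal sketch (ours, not the paper's) that evaluates the dispersion of a candidate selection, i.e., the minimum pairwise distance the algorithms above try to maximize; the brute-force pairing and the sample coordinates are purely illustrative.

```python
import math
from itertools import combinations

def dispersion(points):
    """Minimum pairwise Euclidean distance among the selected points.

    This is the quantity the approximation algorithms maximize: a
    ratio-0.511 algorithm returns a selection whose dispersion is at
    least 0.511 times the optimum.
    """
    return min(math.dist(p, q) for p, q in combinations(points, 2))

# One point chosen from each disk (illustrative data).
chosen = [(0.0, 0.0), (3.1, 0.2), (1.4, 2.9)]
print(dispersion(chosen))  # ~3.11
```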

2.
We present a linear-time algorithm for computing a triangulation of n points in 2D whose positions are constrained to n disjoint disks of uniform size, after O(n log n) preprocessing applied to these disks. Our algorithm can be extended to any collection of convex sets of bounded areas and aspect ratios, assuming no point lies in more than some constant number of sets (bounded depth of overlap), and each set contains only a constant number of query points.

3.
X-code extends the parity coding used for correcting single disk failures in RAID 5 to two disk failures. Like RAID 6, X-code uses the minimum level of redundancy, dedicating the capacity of two out of N disks to check data, but unlike RAID 6 it relies solely on parity codes, which are placed horizontally rather than vertically. The parity groups in X-code are defined as diagonals with positive and negative slopes. This study is mainly concerned with X-code arrays with two disk failures, since this case requires a multistep recovery process, while operation is otherwise similar to RAID 6. We use examples to gain insight into the cost of recovery and to develop an algorithm to estimate the recovery cost of reconstructing two failed disks. We observe that disk loads are unbalanced and that the overall load increase depends on the separation of the failed disks. Cyclically shifting or randomly permuting successive N×N arrays of blocks may be used to reduce the load increase to the mean across all (N−1)/2 possible distances between the failed disks. We also present an improvement in recovery cost for single disk failures.
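As a hedged illustration of the recovery primitive, the sketch below reconstructs a lost block by XOR-ing the surviving members of its parity group; the flat group layout here is a stand-in, not X-code's actual diagonal placement.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def recover_block(parity_group, failed_disk):
    """Recover the block stored on failed_disk by XOR-ing all surviving
    blocks of its parity group (parity is linear, so any single missing
    member equals the XOR of the rest)."""
    return xor_blocks([blk for disk, blk in parity_group if disk != failed_disk])

# Toy 4-disk group: three data blocks plus their parity (illustrative).
data = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
group = list(enumerate(data + [xor_blocks(data)]))
assert recover_block(group, failed_disk=1) == data[1]
```

Recovering two failed disks in X-code chains such XOR steps across the two diagonal directions, which is what makes the multistep recovery cost worth estimating.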

4.
We consider the following circle placement problem: given a set of points p_i, i = 1, 2, ..., n, each of weight w_i, in the plane, and a fixed disk of radius r, find a location to place the disk such that the total weight of the points covered by the disk is maximized. The problem is equivalent to the so-called maximum weighted clique problem for circle intersection graphs. That is, given a set S of n circles, D_i, i = 1, 2, ..., n, of the same radius r, each of weight w_i, find a subset of S whose common intersection is nonempty and whose total weight is maximum. An O(n²) algorithm is presented for the maximum clique problem. The algorithm is better than a previously known algorithm which is based on sorting and runs in O(n² log n) time.
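For intuition, here is a sketch of the classic sorting-based angular-sweep approach the abstract compares against (the paper's O(n²) algorithm removes the per-sweep sort); the code and names are our own illustration.

```python
import math

def max_weight_cover(points, weights, r):
    """Angular sweep for circle placement: for each point p_i, slide a
    candidate disk center around the circle of radius r centered at p_i;
    every other point within distance 2r is covered during one angular
    interval of that sweep. Returns the maximum coverable total weight."""
    best = max(weights)                        # disk centered on one point
    two_pi = 2 * math.pi
    for i, (xi, yi) in enumerate(points):
        events = []
        for j, (xj, yj) in enumerate(points):
            if j == i:
                continue
            d = math.hypot(xj - xi, yj - yi)
            if d > 2 * r:
                continue
            mid = math.atan2(yj - yi, xj - xi)
            half = math.acos(d / (2 * r))
            s, e = mid - half, mid + half
            # Split intervals that wrap past the -pi/+pi seam.
            if s < -math.pi:
                pieces = [(s + two_pi, math.pi), (-math.pi, e)]
            elif e > math.pi:
                pieces = [(s, math.pi), (-math.pi, e - two_pi)]
            else:
                pieces = [(s, e)]
            for a, b in pieces:
                events.append((a, weights[j]))
                events.append((b, -weights[j]))
        events.sort(key=lambda ev: (ev[0], -ev[1]))   # starts before ends
        cur = weights[i]               # the sweep circle always covers p_i
        for _, delta in events:
            cur += delta
            best = max(best, cur)
    return best

print(max_weight_cover([(0, 0), (1, 0), (5, 5)], [2, 3, 4], r=1))  # 5
```

Because of the per-point sort this sketch runs in O(n² log n), exactly the bound the O(n²) algorithm improves.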

5.
A coarse-grain parallel solver for systems of linear algebraic equations with general sparse matrices by Gaussian elimination is discussed. Before the factorization two other steps are performed. A reordering algorithm is used during the first step in order to obtain a permuted matrix with as many zero elements under the main diagonal as possible. During the second step the reordered matrix is partitioned into blocks for asynchronous parallel processing (normally the number of blocks is equal to the number of processors). It is possible to obtain blocks with nearly the same number of rows, because there is no requirement to produce square diagonal blocks. The first step is much more important than the second one and has a significant influence on the performance of the solver. A straightforward implementation of the reordering algorithm will result in O(n²) operations. By using binary trees this cost can be reduced to O(NZ log n), where NZ is the number of non-zero elements in the matrix and n is its order (normally NZ is much smaller than n²). Some experiments on parallel computers with shared memory have been performed. The results show that a solver based on the proposed reordering performs better than another solver based on a cheaper (but at the same time rather crude) reordering whose cost is only O(NZ) operations.

6.
The disk dimension of a planar graph G is the least number k for which G embeds in the plane minus k open disks, with every vertex on the boundary of some disk. Useful properties of graphs with a given disk dimension are derived, leading to an algorithm to obtain an outerplanar subgraph of a graph with disk dimension k by removing at most 2k−2 vertices. This reduction is used to obtain linear-time exact and approximation algorithms on graphs with fixed disk dimension. In particular, a linear-time approximation algorithm is presented for the pathwidth problem.

7.
An algorithm is presented to answer window queries in a quadtree-based spatial database environment by retrieving all of the quadtree blocks in the underlying spatial database that cover the quadtree blocks that comprise the window. It works by decomposing the window operation into sub-operations over smaller window partitions. These partitions are the quadtree blocks corresponding to the window. Although a block b in the underlying spatial database may cover several of the smaller window partitions, b is only retrieved once rather than multiple times. This is achieved by using an auxiliary main memory data structure called the active border which requires O(n) additional storage for a window query of size n×n. As a result, the algorithm generates an optimal number of disk I/O requests to answer a window query (i.e., one request per covering quadtree block). A proof of correctness and an analysis of the algorithm's execution time and space requirements are given, as are some experimental results.
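To illustrate what the window partitions look like, here is a minimal sketch (our own; the active border and the database lookups are omitted) that decomposes a window into the quadtree blocks comprising it.

```python
def window_blocks(win, qx=0, qy=0, size=1 << 10):
    """Decompose a half-open window (x1, y1, x2, y2) over a size x size
    quadtree space into the maximal aligned quadtree blocks it comprises;
    returns (x, y, block_size) triples. Integer coordinates assumed."""
    x1, y1, x2, y2 = win
    if x2 <= qx or y2 <= qy or x1 >= qx + size or y1 >= qy + size:
        return []                        # no overlap with this block
    if x1 <= qx and y1 <= qy and qx + size <= x2 and qy + size <= y2:
        return [(qx, qy, size)]          # block lies fully inside the window
    half = size // 2
    out = []
    for dx in (0, half):                 # recurse into the four quadrants
        for dy in (0, half):
            out += window_blocks(win, qx + dx, qy + dy, half)
    return out

# A 3x2 window decomposes into its aligned quadtree blocks.
print(window_blocks((1, 1, 4, 3), size=8))
```

Each returned block is one of the sub-operations; the paper's contribution is retrieving every covering database block only once across all of them.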

8.
Given an unreliable communication network, we seek a node that maximizes the expected number of nodes reachable from it. Such a node is called a most reliable source (MRS) of the network. In communication networks, failures may occur at both links and nodes. Previous studies have considered the case where each link has an independent operational probability, while the nodes are immune to failures. In practice, however, failures may happen at the nodes as well, including both transmitting and receiving faults. Recently, another variant of the MRS problem was studied, in which all links are immune to failures and each node has an independent transmitting probability and receiving probability; an O(n²) time algorithm was presented for computing an MRS on tree networks with n nodes. In this paper, we present a faster algorithm for this problem, with a time complexity of O(n).
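The quantity being maximized can be sketched directly. The code below is our illustration under the stated independence assumptions, not the paper's O(n) algorithm: it evaluates one candidate source by propagating the probability that each node receives the message.

```python
def expected_reachable(adj, t, r, s):
    """Expected number of nodes reachable from source s in a tree.

    adj: adjacency lists; t[v], r[v]: independent transmitting and
    receiving probabilities of node v. A node relays a message only
    after receiving it, so the factors multiply along each path."""
    total = 0.0
    stack = [(s, None, 1.0)]        # (node, parent, P[node receives])
    while stack:
        v, parent, p = stack.pop()
        total += p
        for w in adj[v]:
            if w != parent:
                stack.append((w, v, p * t[v] * r[w]))
    return total

# Path 0-1-2 (illustrative probabilities).
adj = {0: [1], 1: [0, 2], 2: [1]}
t = {0: 0.9, 1: 0.8, 2: 0.7}
r = {0: 0.9, 1: 0.9, 2: 0.6}
print(expected_reachable(adj, t, r, 0))  # 1 + 0.9*0.9 + 0.9*0.9*0.8*0.6
```

Evaluating every candidate source this way costs O(n²) overall, matching the earlier algorithm; the faster algorithm shares these partial products across sources instead of recomputing them.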

9.
Supporting continuous media data, such as video and audio, imposes stringent demands on the retrieval performance of a multimedia server. In this paper, we propose and evaluate a set of data placement and retrieval algorithms to exploit the full capacity of the disks in a multimedia server. The data placement algorithm declusters every object over all of the disks in the server, using a time-based declustering unit, with the aim of balancing the disk load. As for runtime retrieval, the quintessence of the algorithm is to give each disk advance notification of the blocks that have to be fetched in the impending time periods, so that the disk can optimize its service schedule accordingly. Moreover, in processing a block request for a replicated object, the server dynamically channels the retrieval operation to the most lightly loaded disk that holds a copy of the required block. We have implemented a multimedia server based on these algorithms. Performance tests reveal that the server achieves very high disk efficiency; specifically, each disk is able to support up to 25 MPEG-1 streams. Moreover, experiments suggest that the aggregate retrieval capacity of the server scales almost linearly with the number of disks.
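A minimal sketch of the two placement/retrieval ideas just described: round-robin time-based declustering, and routing a request for a replicated block to the least-loaded holder. Function names and the load metric are our assumptions, not the paper's implementation.

```python
def decluster(num_blocks, D, start=0):
    """Time-based declustering: consecutive (time-ordered) blocks of an
    object go to consecutive disks, so every disk carries part of every
    object and playback load spreads evenly."""
    return [(start + i) % D for i in range(num_blocks)]

def route(replica_disks, load):
    """Send a block request to the most lightly loaded disk holding a copy."""
    return min(replica_disks, key=lambda d: load[d])

print(decluster(num_blocks=10, D=4))           # [0, 1, 2, 3, 0, 1, ...]
load = [3, 1, 2, 5]
print(route(replica_disks=[0, 3], load=load))  # disk 0
```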

10.
Range Query Processing in Multidisk Systems
In order to reduce disk access time, a database can be stored on several simultaneously accessible disks. In this paper, we are concerned with the dynamic d-attribute database allocation problem for range queries. An allocation method, called the coordinate modulo allocation method, is proposed to allocate data in a d-attribute database among disks so that the maximum disk access concurrency can be achieved for range queries. Our analysis and experiments show that the method achieves optimum or near-optimum parallelism for range queries. The paper offers the conditions under which the method is optimal. Worst-case bounds on the performance of the method are also given. In addition, a parallel algorithm for processing range queries is described at the end of the paper. The method has been used in the statistical and scientific database management system being designed by us.
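One plausible reading of the coordinate-modulo idea, sketched under our own assumptions about the mapping (the abstract does not spell it out): hash each grid cell's coordinate sum modulo the number of disks, so cells adjacent along any attribute land on different disks and a range query touches many disks in parallel.

```python
def disk_of(coords, num_disks):
    """Assumed coordinate-modulo mapping: a d-attribute grid cell goes to
    the disk indexed by the sum of its coordinates mod the disk count."""
    return sum(coords) % num_disks

# Cells of a 2-attribute range query [0,3] x [0,2] spread over 4 disks.
hits = {}
for x in range(4):
    for y in range(3):
        hits.setdefault(disk_of((x, y), 4), []).append((x, y))
for d, cells in sorted(hits.items()):
    print(d, cells)
```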

11.
Sanders, Egner, Korst 《Algorithmica》2003,35(1):21-55
High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for the efficient adaptation of single-disk external memory algorithms to multiple disks. We solve this problem for arbitrary access patterns by randomly mapping blocks of a logical address space to the disks. We show that a shared buffer of O(D) blocks suffices to support efficient writing. The analysis uses the properties of negative association to handle dependencies between the random variables involved. This approach might be of independent interest for probabilistic analysis in general. If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ⌈N/D⌉ + 1 I/O steps with high probability. The redundancy can be further reduced from 2 to 1 + 1/r for any integer r without a big impact on reading efficiency. From the point of view of external memory models, these results rehabilitate Aggarwal and Vitter's "single-disk multi-head" model [1], which allows access to D arbitrary blocks in each I/O step. This powerful model can be emulated on the physically more realistic independent disk model [2] with small constant overhead factors. Parallel disk external memory algorithms can therefore be developed in the multi-head model first. The emulation result can then be applied directly or further refinements can be added.
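A compact sketch of the reading discipline under random duplicate allocation: two randomly placed copies per block, with each request of a batch routed to the currently less-loaded holder. The greedy routing is our simplification; the paper's analysis schedules the whole batch optimally.

```python
import random

def place(num_blocks, D, seed=0):
    """Randomly allocate two copies of each logical block on distinct disks."""
    rng = random.Random(seed)
    return [rng.sample(range(D), 2) for _ in range(num_blocks)]

def schedule_reads(requests, copies, D):
    """Greedily send each request to the less-loaded of the two disks
    holding a copy; the number of parallel I/O steps needed is the
    maximum per-disk queue length."""
    load = [0] * D
    for b in requests:
        d = min(copies[b], key=lambda disk: load[disk])
        load[d] += 1
    return max(load)

D = 8
copies = place(10_000, D)
batch = random.Random(1).sample(range(10_000), 64)
print(schedule_reads(batch, copies, D))  # typically close to 64 / 8 = 8
```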

12.
RAID5 (Redundant Array of Independent Disks, level 5) is a popular paradigm which uses parity to protect against single disk failures. A major shortcoming of RAID5 is the small-write penalty, i.e., the cost of updating parity when a data block is modified. Read-modify writes (RMW) and reconstruct writes (RCW) are alternative methods for updating small data and parity blocks. We use a queuing formulation to determine conditions under which one method outperforms the other. Our analysis shows that in the case of RAID6, and more generally disk arrays with k check disks tolerating k disk failures, RCW outperforms RMW for higher values of N and G. We note that clustered RAID and variable scope of parity protection methods favor reconstruct writes. A dynamic scheme to determine the more desirable policy based on the availability of appropriate cached blocks is proposed.
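To make the trade-off concrete, here is a minimal cost sketch under textbook I/O-counting assumptions: a stripe has G data blocks and k check blocks, and a small write updates m of the data blocks. The queuing and caching effects the paper analyzes are not modeled.

```python
def rmw_cost(m, G, k):
    """Read-modify write: read the m old data blocks and the k old check
    blocks, then write the m new data and k new check blocks."""
    return 2 * (m + k)

def rcw_cost(m, G, k):
    """Reconstruct write: read the G - m unmodified data blocks in the
    stripe, recompute the checks, then write m data and k check blocks."""
    return (G - m) + (m + k)

def better_policy(m, G, k):
    return "RCW" if rcw_cost(m, G, k) < rmw_cost(m, G, k) else "RMW"

# For RAID6-style k = 2: RCW wins once roughly half the stripe is updated.
for m in (1, 3, 5, 7):
    print(m, better_policy(m, G=8, k=2))
```

The paper's queuing formulation refines this static I/O count with disk load, and its dynamic scheme further skips reads whose blocks are already cached.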

13.
Variable Order Panel Clustering
Stefan Sauter 《Computing》2000,64(3):223-261
We present a new version of the panel clustering method for a sparse representation of boundary integral equations. Instead of applying the algorithm separately for each matrix row (as in the classical version of the algorithm), we employ more general block partitionings. Furthermore, a variable order of approximation is used depending on the size of blocks. We apply this algorithm to a second kind Fredholm integral equation and show that the complexity of the method depends only linearly on the number, say n, of unknowns. The complexity of the classical matrix-oriented approach is O(n²) while, for the classical panel clustering algorithm, it is O(n log⁷ n).

14.
This paper describes a magnetic/optical access structure for append-only temporal databases. We formally define the properties of an access structure, called the Monotonic B⁺-Tree (MBT), that is well suited for write-once read-many (WORM) optical disks. We present an insertion algorithm for the MBT that does not require splitting of index nodes and give a time analysis for this algorithm. We also describe a storage architecture where optical disks work in tandem with magnetic disks. Magnetic disks are used for storing current versions and recent past versions, whereas optical disks are dedicated to archiving older past versions. Our archiving techniques: (1) allow temporal data and the MBT access structure to span magnetic and optical disks; (2) minimize the overhead of the migration process by taking advantage of the append-only nature of temporal databases; (3) gracefully handle object versions with very long time intervals so that the delay in the migration process is kept to a minimum; and (4) ensure that no false magnetic or optical disk address lookup is performed during search operations, by duplicating some closed versions on both magnetic and optical disks. To validate our claims for the efficiency of the migration techniques, we analyze the performance of temporal access structures partitioned between magnetic and optical disks. We show that the migration process has a minimal effect on search time. Our simulation identifies important parameters and shows how they affect the performance of the temporal access structures. These include the mean version lifespan, block size, query time interval length, and total number of versions.

15.
The idea of preplanning strings on disks which are merged together is investigated from a performance point of view. Schemes of internal buffer allocation, initial string creation by an internal sort, and string distribution on disks are evaluated. An algorithm is given for the construction of suboptimal merge trees called plannable merge trees. A cost model is presented for accurate preplanning, which consists of detailed assumptions on disk allocation for k input disks and r-way merge planning. Timing considerations for sort and merge, including hardware characteristics of movable-head disks, show a significant time gain compared to widely used sort/merge applications.
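The flavor of merge-tree planning can be shown with the classic greedy (Huffman-style) construction of an r-way merge plan. This is the textbook baseline, not the paper's plannable merge trees, and its cost model ignores disk characteristics.

```python
import heapq

def greedy_merge_plan(string_lengths, r):
    """Build an r-way merge tree bottom-up by always merging the r
    shortest strings; the cost counts every record moved in every merge
    pass (the classic optimal-merge-pattern greedy).

    Pad with empty strings so the final merge is a full r-way merge."""
    heap = list(string_lengths)
    while (len(heap) - 1) % (r - 1) != 0:
        heap.append(0)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        merged = sum(heapq.heappop(heap) for _ in range(r))
        cost += merged
        heapq.heappush(heap, merged)
    return cost

# Six initial strings merged 3 ways at a time.
print(greedy_merge_plan([20, 30, 10, 5, 30, 25], r=3))  # 195
```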

16.
《Computer Networks》2003,41(4):363-383
Layered video is a video-compression technique that encodes video data in multiple layers. It typically consists of a base layer and some additional layers that provide enhanced video quality. The multicasting operation of layered video consists of many receivers dynamically joining and leaving different multicast sessions of different layers depending on their network condition. A layered video multicasting system needs to satisfy: (i) bounded end-to-end delay from the video source to each receiver; (ii) minimum total cost; and (iii) minimum delay jitter between the various video streams received by each receiver. The problem of computing such data distribution paths is NP-complete. This paper presents a new heuristic algorithm, called the layered video multicast super-tree routing algorithm, with O(Rn²) time complexity and O(R²) message complexity, where n is the number of nodes in the network and R is the receiver group size. Our investigation shows that the multicast data paths computed by our algorithm can always satisfy the delay constraint with reasonably low total cost.

17.
Recently, the zoning technique has been widely applied to disks to increase their capacity. Under this technique, an interesting feature of a disk is that it offers a number of different bandwidths. Herein, a novel data layout scheme called cluster-pairing is proposed to efficiently exploit this feature for continuous media (CM) servers. We first apply the track-pairing method between a pair of homogeneous disks, and then partition each disk into the same number of clusters to facilitate the retrieval of region-based data placement. The proposed method takes advantage of the track-pairing and region-based data placement schemes to fully utilize the various bandwidths of zoned disks while reducing the seek time overhead. According to the simulation results, the disk throughput after applying our approach is improved by 35% to 65% over traditional data striping strategies, 10% to 30% over the region-based data placement method, and about 10% over the track-pairing scheme. The wasted storage space is less than 1%, which is negligible in view of the improvement in disk throughput.
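The underlying track-pairing idea can be sketched in a few lines: pair the i-th fastest track with the i-th slowest one so every pair offers a near-constant combined bandwidth. The bandwidth figures are illustrative assumptions, and the cluster partitioning of the proposed scheme is omitted.

```python
def pair_tracks(bandwidth):
    """Track-pairing on a zoned disk: pair track i with track T-1-i so
    each pair's combined bandwidth is roughly the same, smoothing out
    the zone-to-zone bandwidth variation."""
    T = len(bandwidth)
    return [(i, T - 1 - i, bandwidth[i] + bandwidth[T - 1 - i])
            for i in range(T // 2)]

# Outer tracks are faster on zoned disks (illustrative MB/s figures).
bw = [12, 11, 10, 8, 7, 6]
for outer, inner, combined in pair_tracks(bw):
    print(outer, inner, combined)   # combined stays near 18
```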

18.
《Real》2000,6(2):129-141
The major emphasis in fast Hough transform algorithms has been placed on the transformation involved. Little attention has been paid to fast processing of a Hough array without requiring one to specify a threshold value to determine candidate parameters in the Hough array. This paper gives a comprehensive discussion of Hough array processing as a part of the Hough transform, and presents a time-efficient clustering algorithm, called Fast Multi-Scale Clustering, to obtain the number of and hence to select the locations of candidate parameters in a Hough array in a threshold-independent manner. It is shown that the complexity of this algorithm is O(ndr), where n is the number of non-zero cells in the Hough array, d is the number of cells used in the discretization of the corresponding parameter space, and r is the dimensionality of the Hough array. Two examples of line and circle detection are provided to illustrate the steps involved in deploying this Hough array processing approach.

19.
The single source shortest paths problem with positive edge weights (SSSPP) is one of the more widely studied problems in operations research and theoretical computer science, on account of its wide applicability to practical situations. This problem was first solved in polynomial time by Dijkstra, who showed that by extracting the vertex with the smallest distance from the source and relaxing its outgoing edges, the shortest path to each vertex is obtained. Variations of this general theme have led to a number of algorithms which work well in practice. At the heart of a Dijkstra implementation is the technique used to implement a priority queue. It is well known that Dijkstra's approach requires Ω(n log n) steps on a graph having n vertices, since it essentially sorts vertices based on their distances from the source. Accordingly, the fastest implementation of Dijkstra's algorithm on a graph with n vertices and m edges should take Ω(m + n log n) time, and consequently, the Dijkstra procedure for SSSPP using Fibonacci heaps is optimal in the comparison-based model. In this paper, we introduce a new data structure to implement priority queues, called the two-level heap (TLH), and a new variant of Dijkstra's algorithm called Phased Dijkstra. We contrast the performance of Dijkstra's algorithm (both the simple and the phased variants) using a number of data structures to implement the priority queue and empirically establish that TLHs are far superior to Fibonacci heaps on every graph family considered. It is to be noted that our profiling includes both sparse and dense graphs.
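For reference, here is the baseline being profiled: Dijkstra's algorithm with a binary heap standing in for the priority queue (Python's heapq in this sketch; the paper's TLH would slot into the same role).

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with positive edge weights.
    adj maps each vertex to a list of (neighbor, weight) pairs.
    A lazy-deletion binary heap plays the priority queue."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry, skip
        for v, w in adj[u]:
            nd = d + w                    # relax outgoing edge (u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(adj, 0))  # {0: 0, 2: 1, 1: 3, 3: 4}
```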

20.
Let P be a set of n weighted points. We study approximation algorithms for the following two continuous facility-location problems. In the first problem we want to place m unit disks, for a given constant m ≥ 1, such that the total weight of the points from P inside the union of the disks is maximized. We present algorithms that compute, for any fixed ε > 0, a (1−ε)-approximation to the optimal solution in O(n log n) time. In the second problem we want to place a single disk with center in a given constant-complexity region X such that the total weight of the points from P inside the disk is minimized. Here we present an algorithm that computes, for any fixed ε > 0, in O(n log² n) expected time a disk that is, with high probability, a (1+ε)-approximation to the optimal solution. A preliminary version of this work appeared in Approximation and Online Algorithms (WAOA 2006), LNCS, vol. 4368.

