Similar Literature
20 similar documents retrieved.
1.
We introduce a probabilistic sequential algorithm for stably sorting n uniformly distributed keys in an arbitrary range. The algorithm runs in linear time and sorts all but a very small fraction of the input sequences; the best previously known bound was . An EREW PRAM extension of this sequential algorithm sorts in O((n/p + lg p) lg n / lg(n/p + lg n)) time using p ≤ n processors under the same probabilistic conditions. For a CRCW PRAM we improve upon the probabilistic bound of obtained by Rajasekaran and Sen to derive a bound. Additionally, we present experimental results for the sequential algorithm that establish the practicality of our method.
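As a rough illustration of why uniformly distributed keys admit linear expected-time sorting, the sketch below gives the textbook bucket-sort baseline that this line of work refines (this is not the paper's algorithm; the function name and the assumption of a known range [lo, hi) are ours):

```python
import random

def bucket_sort(keys, lo, hi):
    """Stable sort for keys assumed roughly uniform on [lo, hi).

    With uniform input each of the n buckets receives O(1) keys in expectation,
    so the per-bucket sorts cost linear expected time overall.
    """
    n = len(keys)
    if n == 0:
        return []
    width = (hi - lo) / n
    if width <= 0:
        return list(keys)  # degenerate range: all keys are equal
    buckets = [[] for _ in range(n)]
    for k in keys:
        idx = min(n - 1, max(0, int((k - lo) / width)))  # clamp out-of-range keys
        buckets[idx].append(k)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # sorted() is stable, so equal keys keep input order
    return out

if __name__ == "__main__":
    data = [random.uniform(0.0, 100.0) for _ in range(10_000)]
    assert bucket_sort(data, 0.0, 100.0) == sorted(data)
```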

2.
3.
4.
5.
6.
7.
8.
Motivated by routing issues in ad hoc networks, we present polylogarithmic-time distributed algorithms for two problems. Given a network, we first show how to compute connected and weakly connected dominating sets whose size is at most times the optimum, Δ being the maximum degree of the input network. This is best possible if and if the processors are required to run in polynomial time. We then show how to construct dominating sets that have the above properties, as well as the "low stretch" property that any two adjacent nodes in the network have their dominators at a distance of at most in the output network. (Given a dominating set S, a dominator of a vertex u is any v ∈ S such that the distance between u and v is at most one.) We also show our time bounds to be essentially optimal.

9.
10.
An implicit data structure for the dictionary problem maintains n data values in the first n locations of an array in such a way that it efficiently supports the operations insert, delete and search. No information other than that in O(1) memory cells and in the input data is to be retained, and the only operations performed on the data values (other than reads and writes) are comparisons. This paper describes the implicit B-tree, a new data structure supporting these operations in block transfers like in regular B-trees, under the realistic assumption that a block stores keys, so that reporting r consecutive keys in sorted order has a cost of block transfers. En route, a number of space-efficient techniques for handling segments of a large array in a memory hierarchy are developed. Being implicit, the proposed data structure occupies exactly ⌈n/B⌉ blocks of memory after each update, where n is the current number of keys and B is the number of keys contained in a memory block. In main memory, the time complexity of the operations is , disproving a conjecture of the mid 1980s.
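To make the "implicit" constraint concrete, here is a deliberately naive implicit dictionary: the keys are kept sorted in a single array with no auxiliary structure, giving O(log n) comparisons per search but O(n) work per update, which is exactly the trade-off the implicit B-tree improves on (didactic sketch, not the paper's data structure):

```python
import bisect

class ImplicitDictionary:
    """Naive implicit dictionary: keys kept sorted in one array.

    Only the keys themselves plus O(1) extra words are stored, as required of
    an implicit structure; updates pay O(n) element moves, whereas the implicit
    B-tree achieves B-tree-like block-transfer bounds.
    """
    def __init__(self):
        self.a = []  # the first n array cells hold the n keys, in sorted order

    def search(self, key):
        i = bisect.bisect_left(self.a, key)
        return i < len(self.a) and self.a[i] == key

    def insert(self, key):
        bisect.insort(self.a, key)  # shifts a suffix of the array: O(n) writes

    def delete(self, key):
        i = bisect.bisect_left(self.a, key)
        if i < len(self.a) and self.a[i] == key:
            self.a.pop(i)           # again O(n) writes in the worst case

if __name__ == "__main__":
    d = ImplicitDictionary()
    for k in [5, 1, 9, 3]:
        d.insert(k)
    assert d.search(3) and not d.search(4)
    d.delete(3)
    assert not d.search(3)
```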

11.
12.
Let F be a fixed collection of digraphs. Given a digraph H, an F-packing of H is a collection of vertex-disjoint subgraphs of H, each isomorphic to a member of F. For undirected graphs, Loebl and Poljak have completely characterized the complexity of deciding the existence of a perfect F-packing in the case that F consists of two graphs, one of which is a single edge on two vertices. We characterize F-packing where F consists of two digraphs, one of which is a single arc on two vertices.
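In the degenerate special case where the pattern family contains only the single arc, a perfect packing is simply a perfect matching of the underlying undirected graph; the brute-force sketch below (exponential time, illustrative only, with names of our own choosing) makes that correspondence explicit:

```python
def has_perfect_single_arc_packing(arcs, vertices):
    """Decide whether a digraph admits a perfect packing by single arcs.

    Each packed subgraph is one arc covering its two endpoints, so a perfect
    packing is exactly a perfect matching of the underlying undirected graph.
    Brute-force recursion, for small examples only.
    """
    undirected = {frozenset(a) for a in arcs if a[0] != a[1]}

    def match(free):
        if not free:
            return True
        u = min(free)  # the smallest free vertex must be matched by some edge
        for e in undirected:
            if u in e and e <= free and match(free - e):
                return True
        return False

    return match(frozenset(vertices))

if __name__ == "__main__":
    # directed 4-cycle 0->1->2->3->0 has a perfect single-arc packing
    print(has_perfect_single_arc_packing([(0, 1), (1, 2), (2, 3), (3, 0)], range(4)))
```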

13.
14.
A fast deterministic smallest enclosing disk approximation algorithm
We describe a simple and fast -time algorithm for finding a (1+ε)-approximation of the smallest enclosing disk of a planar set of n points or disks. Experimental results of a readily available implementation are presented.
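One standard way to obtain such a (1+ε)-approximation for point sets is the Bădoiu–Clarkson-style iteration sketched below; it is not necessarily the algorithm of the abstract (which also handles input disks), but it shows how few primitives such an approximation needs:

```python
import math
import random

def approx_enclosing_disk(points, eps):
    """(1+eps)-approximate smallest enclosing disk of 2-D points.

    Start at any input point and repeatedly move the tentative centre a
    shrinking fraction of the way towards the current farthest point; after
    about 1/eps^2 rounds the farthest distance is within a (1+eps) factor of
    the optimal radius.
    """
    cx, cy = points[0]
    rounds = int(math.ceil(1.0 / (eps * eps)))
    for i in range(1, rounds + 1):
        fx, fy = max(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        cx += (fx - cx) / (i + 1)  # step size 1/(i+1) drives the convergence
        cy += (fy - cy) / (i + 1)
    radius = max(math.hypot(px - cx, py - cy) for px, py in points)
    return (cx, cy), radius

if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(1000)]
    centre, r = approx_enclosing_disk(pts, 0.1)
    print(centre, r)
```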

15.
The longest increasing circular subsequence (LICS) of a list is considered. A Monte Carlo algorithm to compute it is given which has worst-case execution time O(n^{3/2} log n) and storage requirement O(n). It is proved that the expected length μ(n) of the LICS satisfies . Numerical experiments with the algorithm suggest that .
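For comparison, the LICS can be computed naively in O(n² log n) time by running an O(n log n) longest-increasing-subsequence routine on every rotation of the list; the sketch below (ours, not the paper's Monte Carlo algorithm) is useful as a correctness baseline:

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest strictly increasing subsequence, O(n log n)."""
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def lics_length(seq):
    """Longest increasing circular subsequence: best LIS over all n rotations."""
    n = len(seq)
    return max(lis_length(seq[i:] + seq[:i]) for i in range(n)) if n else 0

if __name__ == "__main__":
    assert lics_length([3, 4, 1, 2]) == 4  # rotation [1, 2, 3, 4]
```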

16.
We consider the problem of computing a minimum cycle basis of a directed graph with m arcs and n nodes. We adapt the greedy approach proposed by Horton [A polynomial-time algorithm to find the shortest cycle basis of a graph, SIAM J. Comput. 16 (1987) 358] and thereby obtain a very simple exact algorithm of complexity , being as fast as the first algorithm proposed for this problem [A polynomial time algorithm for minimum cycle basis in directed graphs, Kurt Mehlhorn's List of Publications, 185, MPI, Saarbrücken, 2004, http://www.mpi-sb.mpg.de/~mehlhorn/ftp/DirCycleBasis.ps; Proc. STACS 2005, submitted for publication]. Moreover, the speed-up of Golynski and Horton [A polynomial time algorithm to find the minimum cycle basis of a regular matroid, in: M. Penttonen, E. Meineche Schmidt (Eds.), SWAT 2002, Lecture Notes in Comput. Sci., vol. 2368, Springer, Berlin, 2002, pp. 200-209] applies to this problem, providing an exact algorithm of complexity , in particular . Finally, we prove that these greedy approaches fail for more specialized subclasses of directed cycle bases.
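For orientation, the sketch below constructs a fundamental cycle basis from a spanning tree, treating the digraph as a simple connected undirected graph; it yields a valid basis of m − n + 1 cycles but, unlike the greedy algorithms discussed above, makes no attempt to minimise total weight:

```python
from collections import deque

def fundamental_cycle_basis(n, edges):
    """Cycle basis from a BFS spanning tree (not weight-minimal).

    Every non-tree edge (u, v) closes exactly one cycle with the tree path
    between u and v; assumes a connected simple graph on vertices 0..n-1.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    parent, depth, seen = [-1] * n, [0] * n, [False] * n
    seen[0] = True
    queue = deque([0])
    while queue:                         # BFS spanning tree rooted at vertex 0
        u = queue.popleft()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                depth[v] = depth[u] + 1
                queue.append(v)

    tree = {frozenset((v, parent[v])) for v in range(n) if parent[v] != -1}
    basis = []
    for u, v in edges:
        if u != v and frozenset((u, v)) not in tree:
            a, b, path_u, path_v = u, v, [u], [v]
            while depth[a] > depth[b]:
                a = parent[a]
                path_u.append(a)
            while depth[b] > depth[a]:
                b = parent[b]
                path_v.append(b)
            while a != b:                # climb to the lowest common ancestor
                a = parent[a]
                path_u.append(a)
                b = parent[b]
                path_v.append(b)
            basis.append(path_u + path_v[-2::-1])  # cycle as a vertex sequence
    return basis

if __name__ == "__main__":
    # triangle 0-1-2 with a pendant vertex 3 attached to 2
    print(fundamental_cycle_basis(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # [[1, 0, 2]]
```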

17.
18.
We model sparse functional data from multiple subjects with a mixed-effects regression spline. In this model, the expected values for any subject (conditioned on the random effects) can be written as the sum of a population curve and a subject-specific deviate from this population curve. The population curve and the subject-specific deviates are both modeled as free-knot B-splines, each with its own number and locations of knots. To identify the number and location of the "free" knots, we sample from the posterior using reversible-jump MCMC methods. Sampling from this posterior distribution is complicated, however, by the flexibility we allow for the model's covariance structure. No restrictions (other than positive definiteness) are placed on the covariance parameters ψ and σ², and, as a result, no analytical form for the likelihood exists. In this paper, we consider two approximations to the likelihood and then sample from the corresponding approximations to the posterior. We also sample from a larger, augmented posterior whose likelihood is available in closed form. While sampling from this larger posterior is less efficient, the resulting marginal distribution of knots is exact and allows us to evaluate the accuracy of each approximation. We then consider a real data set and explore the difference between the two approximations to the posterior.
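A minimal sketch of the mean structure only, with the knots held fixed and shared between the two curves (the paper treats the number and location of knots as unknown and samples them by reversible-jump MCMC; all names and values below are illustrative):

```python
import numpy as np
from scipy.interpolate import BSpline

def subject_mean(t, knots, pop_coefs, subj_coefs, degree=3):
    """Conditional mean for one subject: population curve plus subject deviate.

    Both curves are B-splines on the same fixed knot vector here, purely for
    illustration; in the model of the abstract each curve has its own free knots.
    """
    population = BSpline(knots, pop_coefs, degree)
    deviation = BSpline(knots, subj_coefs, degree)
    return population(t) + deviation(t)

if __name__ == "__main__":
    degree = 3
    knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)  # one interior knot
    pop_coefs = np.array([0.0, 1.0, 2.5, 1.5, 0.5])               # len = len(knots) - degree - 1
    subj_coefs = np.array([0.1, -0.2, 0.3, 0.0, -0.1])            # subject-specific deviate
    t = np.linspace(0.05, 0.95, 5)
    print(subject_mean(t, knots, pop_coefs, subj_coefs, degree))
```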

19.
We prove algorithmic and hardness results for the problem of finding the largest set of a fixed diameter in Euclidean space. In particular, we prove that if A is the largest subset of diameter r among n points in Euclidean space, then for every ε>0 there exists a polynomial-time algorithm that outputs a set B of size at least |A| and of diameter at most . On the hardness side, roughly speaking, we show that unless P=NP, for every ε>0 it is not possible to guarantee the diameter for B, even if the algorithm is allowed to output a set of size .

20.
We show that for a connected graph with n nodes, e edges, and maximum degree at most 3, the size of the dominating set found by the greedy algorithm is at most if , if , and if .
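For reference, the greedy algorithm analysed here is the classic one that repeatedly picks the vertex dominating the most not-yet-dominated vertices; a sequential sketch (for general graphs, not specialised to maximum degree 3) follows:

```python
def greedy_dominating_set(adj):
    """Classic greedy dominating set: about (ln Δ + O(1)) times the optimum.

    `adj` maps each vertex to the set of its neighbours. Repeatedly pick the
    vertex whose closed neighbourhood covers the most undominated vertices.
    """
    undominated = set(adj)
    dominators = set()
    while undominated:
        # coverage of v = closed neighbourhood of v restricted to undominated vertices
        v = max(adj, key=lambda u: len((adj[u] | {u}) & undominated))
        dominators.add(v)
        undominated -= adj[v] | {v}
    return dominators

if __name__ == "__main__":
    # path on 6 vertices: 0-1-2-3-4-5 (maximum degree 2)
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    ds = greedy_dominating_set(adj)
    assert all(u in ds or (adj[u] & ds) for u in adj)
    print(ds)
```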
