Similar Documents
20 similar documents found (search time: 31 ms)
1.
We consider the problem of approximately integrating a Lipschitz function f (with a known Lipschitz constant) over an interval. The goal is to achieve an additive error of at most ε using as few samples of f as possible. We use the adaptive framework: on all problem instances an adaptive algorithm should perform almost as well as the best possible algorithm tuned for the particular problem instance. We distinguish between and , the performances of the best possible deterministic and randomized algorithms, respectively. We give a deterministic algorithm that uses samples and show that an asymptotically better algorithm is impossible. However, any deterministic algorithm requires samples on some problem instance. By combining a deterministic adaptive algorithm and Monte Carlo sampling with variance reduction, we give an algorithm that uses at most samples. We also show that any algorithm requires samples in expectation on some problem instance (f,ε), which proves that our algorithm is optimal.
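As a concrete illustration of the deterministic side of this setting, the sketch below adaptively subdivides the interval until the Lipschitz constant certifies that each piece's trapezoid estimate is within its share of the error budget. This is only a minimal baseline under assumed function and parameter names, not the adaptive-optimal algorithm of the paper.

```python
import math

def integrate_lipschitz(f, a, b, L, eps):
    """Approximate the integral of f on [a, b] to within eps, assuming
    |f(x) - f(y)| <= L * |x - y| (Lipschitz constant L).
    A minimal adaptive sketch, not the paper's optimal algorithm."""
    def recurse(a, b, fa, fb, eps):
        h = b - a
        # Trapezoid estimate; for an L-Lipschitz function the error of the
        # trapezoid rule on an interval of length h is at most L * h^2 / 4.
        trap = 0.5 * (fa + fb) * h
        if L * h * h / 4.0 <= eps:
            return trap
        m = 0.5 * (a + b)
        fm = f(m)
        # Each half gets half of the remaining error budget.
        return (recurse(a, m, fa, fm, eps / 2.0) +
                recurse(m, b, fm, fb, eps / 2.0))
    return recurse(a, b, f(a), f(b), eps)

# Example (assumed data): integrate sin(x) on [0, pi], Lipschitz constant 1.
print(integrate_lipschitz(math.sin, 0.0, math.pi, L=1.0, eps=1e-3))  # close to 2.0
```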

2.
We investigate the diameter problem in the streaming and sliding-window models. We show that, for a stream of $n$ points or a sliding window of size $n$, any exact algorithm for diameter requires $\Omega(n)$ bits of space. We present a simple $\epsilon$-approximation algorithm for computing the diameter in the streaming model. Our main result is an $\epsilon$-approximation algorithm that maintains the diameter in two dimensions in the sliding-window model using $O(({1}/{\epsilon^{3/2}}) \log^{3}n(\log R+\log\log n + \log ({1}/{\epsilon})))$ bits of space, where $R$ is the maximum, over all windows, of the ratio of the diameter to the minimum non-zero distance between any two points in the window.
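For intuition, the sketch below maintains a (1+ε)-approximate diameter of an insertion-only 2D stream by keeping the extreme points along O(1/√ε) fixed directions; it is a folklore technique shown only for illustration (the class name and parameters are assumptions), not the sliding-window algorithm of the paper.

```python
import math

class StreamingDiameter:
    """(1 + eps)-approximate diameter of a stream of 2D points, keeping only
    two extreme points per direction (insertion-only folklore sketch, not
    the paper's sliding-window algorithm)."""
    def __init__(self, eps):
        # O(1 / sqrt(eps)) directions suffice for a (1 + eps) factor.
        self.k = max(2, int(math.ceil(math.pi / (2.0 * math.sqrt(eps)))))
        self.lo = [None] * self.k   # (projection, point) with min projection
        self.hi = [None] * self.k   # (projection, point) with max projection

    def insert(self, p):
        x, y = p
        for i in range(self.k):
            theta = math.pi * i / self.k
            proj = x * math.cos(theta) + y * math.sin(theta)
            if self.lo[i] is None or proj < self.lo[i][0]:
                self.lo[i] = (proj, p)
            if self.hi[i] is None or proj > self.hi[i][0]:
                self.hi[i] = (proj, p)

    def diameter(self):
        # The true diameter is realized, up to a (1 + eps) factor, by a pair
        # of the stored extreme points.
        pts = [e[1] for e in self.lo + self.hi if e is not None]
        return max((math.dist(p, q) for p in pts for q in pts), default=0.0)

# Example (assumed data):
sd = StreamingDiameter(eps=0.1)
for p in [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)]:
    sd.insert(p)
print(sd.diameter())   # close to 5.0
```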

3.
Matching Polyhedral Terrains Using Overlays of Envelopes
For a collection $\mathcal{F}$ of $d$-variate piecewise linear functions of overall combinatorial complexity $n$, the lower envelope $\mathcal{E}(\mathcal{F})$ of $\mathcal{F}$ is the pointwise minimum of these functions. The minimization diagram $\mathcal{M}(\mathcal{F})$ is the subdivision of $\mathbb{R}^d$ obtained by vertically (i.e., in direction $x_{d+1}$) projecting $\mathcal{E}(\mathcal{F})$. The overlay $\mathcal{O}(\mathcal{F},\mathcal{G})$ of two such subdivisions $\mathcal{M}(\mathcal{F})$ and $\mathcal{M}(\mathcal{G})$ is their superposition. We extend and improve the analysis of de Berg et al. by showing that the combinatorial complexity of $\mathcal{O}(\mathcal{F},\mathcal{G})$ is $\Omega(n^d \alpha^{2}(n))$ and $O(n^{d+\varepsilon})$ for any $\varepsilon>0$ when $d \ge 2$, and $O(n^2 \alpha(n) \log n)$ when $d=2$. We also describe an algorithm that constructs $\mathcal{O}(\mathcal{F},\mathcal{G})$ in this time. We apply these results to obtain efficient general solutions to the problem of matching two polyhedral terrains in higher dimensions under translation. That is, given two piecewise linear terrains of combinatorial complexity $n$ in $\mathbb{R}^{d+1}$, we wish to find a translation of the first terrain that minimizes its distance to the second, according to some distance measure. For the perpendicular distance measure, which we adopt from functional analysis since it is natural for measuring the similarity of terrains, we present a matching algorithm that runs in time $O(n^{2d+\varepsilon})$ for any $\varepsilon>0$. Sharper running time bounds are shown for $d \le 2$. For the directed and undirected Hausdorff distance measures, we present a matching algorithm that runs in time $O(n^{d^2+d+\varepsilon})$ for any $\varepsilon>0$.
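In the simplest one-variable linear case, the lower envelope can be computed with a convex-hull-style scan, as in the sketch below; this is an illustrative special case only (function and variable names are assumptions), far from the higher-dimensional machinery analysed in the paper.

```python
def lower_envelope(lines):
    """Lower envelope (pointwise minimum) of lines y = a*x + b, the
    one-variable linear special case of the envelopes discussed above.
    Returns the envelope's lines ordered from left to right."""
    # From left to right the envelope uses lines of decreasing slope;
    # among equal slopes only the smallest intercept can appear.
    lines = sorted(set(lines), key=lambda l: (-l[0], l[1]))
    hull = []
    for a, b in lines:
        if hull and hull[-1][0] == a:
            continue                       # same slope, larger intercept: never lower
        # Pop the top line while the new line makes it redundant.
        while len(hull) >= 2:
            a1, b1 = hull[-2]
            a2, b2 = hull[-1]
            # hull[-1] is redundant iff it would stop being minimal no later
            # than where it becomes minimal (cross-multiplied comparison).
            if (b2 - b1) * (a2 - a) >= (b - b2) * (a1 - a2):
                hull.pop()
            else:
                break
        hull.append((a, b))
    return hull

# Example (assumed data): the middle line never attains the minimum.
print(lower_envelope([(1.0, 0.0), (0.0, 5.0), (-1.0, 0.0)]))
```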

4.
We study an online job scheduling problem arising in networks with aggregated links. The goal is to schedule n jobs, divided into k disjoint chains, on m identical machines, without preemption, so that the jobs within each chain complete in the order of release times and the maximum flow time is minimized. We present a deterministic online algorithm with competitive ratio , and show a matching lower bound, even for randomized algorithms. The performance bound for we derive in the paper is, in fact, more subtle than a standard competitive ratio bound, and it shows that in overload conditions (when many jobs are released in a short amount of time), ’s performance is close to the optimum. We also show how to compute an offline solution efficiently for k=1, and that minimizing the maximum flow time for k,m≥2 is NP-hard. As by-products of our method, we obtain two offline polynomial-time algorithms for minimizing makespan: an optimal algorithm for k=1, and a 2-approximation algorithm for any k. W. Jawor and M. Chrobak were supported by NSF grants OISE-0340752 and CCR-0208856. Work of C. Dürr was conducted while affiliated with the Laboratoire de Recherche en Informatique, Université Paris-Sud, 91405 Orsay. Supported by the CNRS/NSF grant 17171 and ANR Alpage.

5.
The resource discovery problem was introduced by Harchol-Balter, Leighton, and Lewin. They developed a number of algorithms for the problem in the weakly connected directed graph model. This model is a directed logical graph that represents the vertices’ knowledge about the topology of the underlying communication network. The current paper proposes a deterministic algorithm for the problem in the same model, with improved time, message, and communication complexities. Each previous algorithm had a complexity that was higher in at least one of the measures. Specifically, previous deterministic solutions required either time linear in the diameter of the initial network, or communication complexity $O(n^3)$ (with message complexity $O(n^2)$), or message complexity $O(|E_0| \log n)$ (where $E_0$ is the arc set of the initial graph $G_0$). Compared with the main randomized algorithm of Harchol-Balter, Leighton, and Lewin, the time complexity is reduced from $O(\log^2 n)$ to $O(\log n)$, the message complexity from $O(n \log^2 n)$ to $O(n \log n)$, and the communication complexity from $O(n^2 \log^3 n)$ to $O(|E_0| \log^2 n)$. Our work significantly extends the connectivity algorithm of Shiloach and Vishkin, which was originally given for a parallel model of computation. Our result also confirms a conjecture of Harchol-Balter, Leighton, and Lewin, and addresses an open question due to Lipton.

6.
Yuichi Yoshida, Hiro Ito. Algorithmica, 2012, 62(3-4): 701-712
We present an algorithm for testing the k-vertex-connectivity of graphs with a given maximum degree bound. The time complexity of the algorithm is independent of the number of vertices and edges of the graph. For a fixed degree bound d, a graph G with n vertices and maximum degree at most d is called ε-far from k-vertex-connectivity when at least $\frac{\epsilon dn}{2}$ edges must be added to or removed from G to obtain a k-vertex-connected graph with maximum degree at most d. The algorithm always accepts every graph that is k-vertex-connected and rejects every graph that is ε-far from k-vertex-connectivity with probability at least 2/3. The algorithm runs in $O(d(\frac{c}{\epsilon d})^{k}\log\frac {1}{\epsilon d})$ time (c>1 is a constant) for (k−1)-vertex-connected graphs, and in $O(d(\frac{ck}{\epsilon d})^{k}\log\frac{k}{\epsilon d})$ time (c>1 is a constant) for general graphs. It is the first constant-time k-vertex-connectivity testing algorithm for general k≥4.

7.
Cees Duin. Algorithmica, 2005, 41(2): 131-145
We formulate and study an algorithm for all-pairs shortest paths in a network with $n$ nodes and $m$ arcs of positive length. Using the dynamic programming principle of optimality of subpaths, the algorithm avoids redundant updates of distance labels. A shortest $v$–$w$ path, say $\langle v, r_{1}, r_{2}, \ldots, r_{k} = w \rangle$ with $k$ arcs ($k \geq 1$), is combined with an arc $(w,t) \in A$ to update the distance label of the pair $v$–$t$ only if $(w,t)$ lies on the shortest $r_{\ell}$–$t$ path for each node $r_{\ell}$ ($\ell = k-1, k-2, \ldots, 1$). The algorithm extracts shortest paths in order of length from a data structure and builds two shortest path trees per node, an extra effort of $O(n^{2})$. This way it can restrict itself to the aforementioned distance updates, by picking the arcs $(w,t)$ out of these trees. The time complexity per distance update and path extraction is similar to that of other algorithms. An implementation with a data structure of heaps is possible, but a bucket-type data structure may be more appropriate. The implied number of distance updates does not exceed $nm_{0}$ (where $m_{0}$ is the total number of shortest path arcs), but is frequently much lower. In extreme cases the new algorithm performs $O(n^{2})$ distance updates, whereas known algorithms require $\Omega(n^{3})$ updates. The algorithm is especially suited for undirected graphs; there the construction of one tree per node suffices and the computation times halve.
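The path-extension framework described above can be illustrated by the baseline sketch below, which extracts node pairs in non-decreasing order of distance from a single global heap and extends each settled path by its outgoing arcs. The shortest-path-tree filtering that distinguishes the paper's algorithm is deliberately omitted, and the function and variable names are assumptions.

```python
import heapq

def all_pairs_shortest_paths(n, arcs):
    """All-pairs shortest path lengths for a digraph with positive arc
    lengths, by extracting pairs in order of distance and extending them
    arc by arc.  A baseline sketch of the path-extension framework only;
    it omits the shortest-path-tree filtering the paper adds."""
    adj = [[] for _ in range(n)]
    for u, v, w in arcs:                     # arcs given as (tail, head, length)
        adj[u].append((v, w))

    dist = {}
    heap = [(0.0, v, v) for v in range(n)]   # entries: (length, source, target)
    heapq.heapify(heap)
    while heap:
        d, s, t = heapq.heappop(heap)
        if (s, t) in dist:                   # pair already settled by a shorter path
            continue
        dist[(s, t)] = d
        for u, w in adj[t]:                  # extend the path s -> ... -> t by arc (t, u)
            if (s, u) not in dist:
                heapq.heappush(heap, (d + w, s, u))
    return dist

# Tiny example (assumed data): 3 nodes, arcs with positive lengths.
print(all_pairs_shortest_paths(3, [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 5.0)]))
```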

8.
For hyper-rectangles in $\mathbb{R}^{d}$, Auer (1997) proved a PAC bound of $O(\frac{1}{\varepsilon}(d+\log \frac{1}{\delta}))$, where $\varepsilon$ and $\delta$ are the accuracy and confidence parameters. It is still an open question whether one can obtain the same bound for intersection-closed concept classes of VC-dimension $d$ in general. We present a step towards a solution of this problem, showing on one hand a new PAC bound of $O(\frac{1}{\varepsilon}(d\log d + \log \frac{1}{\delta}))$ for arbitrary intersection-closed concept classes, complementing the well-known bounds $O(\frac{1}{\varepsilon}(\log \frac{1}{\delta}+d\log \frac{1}{\varepsilon}))$ and $O(\frac{d}{\varepsilon}\log \frac{1}{\delta})$ of Blumer et al. (1989) and Haussler, Littlestone and Warmuth (1994). Our bound is established using the closure algorithm, which generates as its hypothesis the intersection of all concepts that are consistent with the positive training examples. On the other hand, we show that many intersection-closed concept classes, including e.g. maximum intersection-closed classes, satisfy an additional combinatorial property that allows a proof of the optimal bound of $O(\frac{1}{\varepsilon}(d+\log \frac{1}{\delta}))$. For such improved bounds the choice of the learning algorithm is crucial, as there are consistent learning algorithms that need $\Omega(\frac{1}{\varepsilon}(d\log\frac{1}{\varepsilon} +\log\frac{1}{\delta}))$ examples to learn some particular maximum intersection-closed concept classes.
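For the hyper-rectangle classes mentioned above, the closure algorithm takes a particularly simple form, sketched below: the intersection of all axis-aligned boxes consistent with the positive examples is just their bounding box. The function names and data are illustrative assumptions.

```python
def closure_hypothesis(positive_examples):
    """Closure algorithm for axis-aligned hyper-rectangles: the hypothesis
    is the smallest box containing all positive examples, i.e. the
    intersection of all boxes consistent with them.  Illustrative sketch."""
    dim = len(positive_examples[0])
    lo = [min(x[i] for x in positive_examples) for i in range(dim)]
    hi = [max(x[i] for x in positive_examples) for i in range(dim)]
    return lo, hi

def predict(hypothesis, x):
    lo, hi = hypothesis
    return all(l <= xi <= h for xi, l, h in zip(x, lo, hi))

# Example (assumed data): positives drawn from some target box in the plane.
h = closure_hypothesis([(0.2, 0.5), (0.9, 1.7), (0.4, 0.1)])
print(h, predict(h, (0.5, 1.0)), predict(h, (2.0, 2.0)))
```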

9.
We present several results on the complexity of various forms of Sperner’s Lemma in the black-box model of computing. We give a deterministic algorithm for Sperner problems over pseudo-manifolds of arbitrary dimension. The query complexity of our algorithm is linear in the separation number of the skeleton graph of the manifold and the size of its boundary. As a corollary we get a deterministic query algorithm for the black-box version of the problem 2D-SPERNER, a well studied member of Papadimitriou’s complexity class PPAD. This upper bound matches the deterministic lower bound of Crescenzi and Silvestri. The tightness of this bound was not known before. In another result we prove for the same problem a lower bound for its probabilistic and a lower bound for its quantum query complexity, showing that all these measures are polynomially related. Research supported by the European Commission IST Integrated Project Qubit Application (QAP) 015848, the OTKA grants T42559 and T46234, and by the ANR Blanc AlgoQP grant of the French Research Ministry.

10.
Zeev Nutov 《Algorithmica》2006,44(3):213-231
A graph is called $\ell$-connected from $U$ to $r$ if there are $\ell$ internally disjoint paths from every node $u \in U$ to $r$. The Rooted Subset Connectivity Augmentation Problem (RSCAP) is as follows: given a graph $G=(V+r,E)$, a node subset $U \subseteq V$, and an integer $k$, find a smallest set $F$ of new edges such that $G+F$ is $k$-connected from $U$ to $r$. In this paper we consider mainly a restricted version of RSCAP in which the input graph $G$ is already $(k-1)$-connected from $U$ to $r$. For this version we give an $O(\ln |U|)$-approximation algorithm, and show that the problem cannot achieve a better approximation guarantee than the Set Cover Problem (SCP) on $|U|$ elements and with $|V|-|U|$ sets. For the general version of RSCAP we give an $O(\ln k \ln |U|)$-approximation algorithm. For $U=V$ we get the Rooted Connectivity Augmentation Problem (RCAP). For directed graphs RCAP is polynomially solvable, but for undirected graphs its complexity status is not known: no polynomial algorithm is known, and it is also not known to be NP-hard. For undirected graphs with the input graph $G$ being $(k-1)$-connected from $V$ to $r$, we give an algorithm that computes a solution of size at most $\mathit{opt}+\min\{\mathit{opt},k\}/2$, where $\mathit{opt}$ denotes the optimal solution size.

11.
Cees Duin. Algorithmica, 2004, 41(2): 131-145
We formulate and study an algorithm for all-pairs shortest paths in a network with $n$ nodes and $m$ arcs of positive length. Using the dynamic programming principle of optimality of subpaths, the algorithm avoids redundant updates of distance labels. A shortest $v$–$w$ path, say $\langle v, r_{1}, r_{2}, \ldots, r_{k} = w \rangle$ with $k$ arcs ($k \geq 1$), is combined with an arc $(w,t) \in A$ to update the distance label of the pair $v$–$t$ only if $(w,t)$ lies on the shortest $r_{\ell}$–$t$ path for each node $r_{\ell}$ ($\ell = k-1, k-2, \ldots, 1$). The algorithm extracts shortest paths in order of length from a data structure and builds two shortest path trees per node, an extra effort of $O(n^{2})$. This way it can restrict itself to the aforementioned distance updates, by picking the arcs $(w,t)$ out of these trees. The time complexity per distance update and path extraction is similar to that of other algorithms. An implementation with a data structure of heaps is possible, but a bucket-type data structure may be more appropriate. The implied number of distance updates does not exceed $nm_{0}$ (where $m_{0}$ is the total number of shortest path arcs), but is frequently much lower. In extreme cases the new algorithm performs $O(n^{2})$ distance updates, whereas known algorithms require $\Omega(n^{3})$ updates. The algorithm is especially suited for undirected graphs; there the construction of one tree per node suffices and the computation times halve.

12.
Given a set $\mathcal{T}$ of rooted, unordered trees, where each $T_i \in \mathcal{T}$ is distinctly leaf-labeled by a set $\Lambda(T_i)$ and where the sets $\Lambda(T_i)$ may overlap, the maximum agreement supertree problem (MASP) is to construct a distinctly leaf-labeled tree $Q$ with leaf set $\Lambda(Q) \subseteq \bigcup_{T_i \in \mathcal{T}} \Lambda(T_i)$ such that $|\Lambda(Q)|$ is maximized and for each $T_i \in \mathcal{T}$, the topological restriction of $T_i$ to $\Lambda(Q)$ is isomorphic to the topological restriction of $Q$ to $\Lambda(T_i)$. Let $n = |\bigcup_{T_i \in \mathcal{T}} \Lambda(T_i)|$, $k = |\mathcal{T}|$, and $D = \max_{T_i \in \mathcal{T}}\{\deg(T_i)\}$. We first show that MASP with $k = 2$ can be solved in $O(\sqrt{D} n \log (2n/D))$ time, which is $O(n \log n)$ when $D = O(1)$ and $O(n^{1.5})$ when $D$ is unrestricted. We then present an algorithm for MASP with $D = 2$ whose running time is polynomial if $k = O(1)$. On the other hand, we prove that MASP is NP-hard for any fixed $k \geq 3$ when $D$ is unrestricted, and also NP-hard for any fixed $D \geq 2$ when $k$ is unrestricted even if each input tree is required to contain at most three leaves. Finally, we describe a polynomial-time $(n/\log n)$-approximation algorithm for MASP.

13.
This paper considers scheduling tasks while minimizing the power consumption of one or more processors, each of which can go to sleep at a fixed cost $\alpha$. There are two natural versions of this problem, both considered extensively in recent work: minimize the total power consumption (including computation time), or minimize the number of “gaps” in execution. For both versions in a multiprocessor system, we develop a polynomial-time algorithm based on sophisticated dynamic programming. In a generalization of the power-saving problem, where each task can execute in any of a specified set of time intervals, we develop a $(1+\frac{2}{3}\alpha)$-approximation, and show that dependence on $\alpha$ is necessary. In contrast, the analogous multi-interval gap scheduling problem is set-cover hard (and thus not $o(\lg n)$-approximable), even in the special cases of just two intervals per job or just three unit intervals per job. We also prove several other hardness-of-approximation results. Finally, we give an $O(\sqrt{n})$-approximation for maximizing throughput given a hard upper bound on the number of gaps.

14.
We study inherent structural properties of a strongly NP-hard problem of scheduling $n$ jobs with release times and due dates on a single machine to minimize the number of late jobs. Our study leads to two polynomial-time algorithms. The first algorithm, with time complexity $O(n^3\log n)$, solves the problem provided that no job with a certain special property occurs during its execution. The second algorithm solves the version of the problem in which all jobs have the same length. The time complexity of the latter algorithm is $O(n^2\log n)$, which is an improvement over the earlier known algorithm with time complexity $O(n^5)$.
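For orientation, the classical Moore-Hodgson rule below maximizes the number of on-time jobs in the special case where all release times are zero. It is shown only as a baseline for this objective (names and data are assumptions) and is not one of the two algorithms of the paper, which additionally handle release times.

```python
import heapq

def max_on_time_jobs(jobs):
    """Moore-Hodgson: maximize the number of on-time jobs on a single
    machine when all release times are zero.  Classical baseline sketch;
    the paper's algorithms also handle release times."""
    jobs = sorted(jobs, key=lambda j: j[1])   # (length, due date), EDD order
    scheduled = []                            # max-heap of lengths (negated)
    t = 0
    for p, d in jobs:
        t += p
        heapq.heappush(scheduled, -p)
        if t > d:                             # deadline missed: drop the longest job so far
            t += heapq.heappop(scheduled)     # popped value is -p_max, so t decreases
    return len(scheduled)                     # number of jobs completed on time

# Example (assumed data): (processing time, due date) pairs.
print(max_on_time_jobs([(2, 3), (2, 4), (3, 6), (1, 7)]))   # 3 jobs on time
```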

15.
On the Competitive Ratio for Online Facility Location
We consider the problem of Online Facility Location, where the demand points arrive online and must be assigned irrevocably to an open facility upon arrival. The objective is to minimize the sum of facility and assignment costs. We prove that the competitive ratio for Online Facility Location is Θ . On the negative side, we show that no randomized algorithm can achieve a competitive ratio better than Ω against an oblivious adversary even if the demands lie on a line segment. On the positive side, we present a deterministic algorithm which achieves a competitive ratio of in every metric space. A preliminary version of this work appeared in the Proceedings of the 30th International Colloquium on Automata, Languages and Programming (ICALP 2003), Lecture Notes in Computer Science 2719. This work was done while the author was at the Max-Planck-Institut für Informatik, Saarbrücken, Germany, and was partially supported by the Future and Emerging Technologies programme of the EU under contract number IST-1999-14186 (ALCOM–FT).
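A well-known simple strategy for this problem is Meyerson's randomized rule, sketched below: each arriving demand opens a facility at its own location with probability proportional to its distance to the nearest open facility. It is included only as background (names and data are assumptions); it is not the deterministic algorithm of the paper.

```python
import math, random

def online_facility_location(demands, facility_cost):
    """Meyerson's randomized rule for online facility location: each arriving
    demand opens a facility at its own location with probability
    min(1, distance-to-nearest-facility / facility_cost), otherwise it is
    assigned to the nearest open facility.  Background sketch only; not the
    paper's deterministic algorithm."""
    facilities = []
    total_cost = 0.0
    for p in demands:
        if not facilities:
            d = facility_cost                 # forces opening the first facility
        else:
            d = min(math.dist(p, f) for f in facilities)
        if random.random() < min(1.0, d / facility_cost):
            facilities.append(p)
            total_cost += facility_cost
        else:
            total_cost += d
    return facilities, total_cost

# Example (assumed data): demands on a line segment, facility cost 1.0.
random.seed(0)
print(online_facility_location([(0.1,), (0.15,), (0.9,), (0.95,)], 1.0))
```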

16.
In this paper we give efficient distributed algorithms computing approximate solutions to general scheduling and matching problems. All approximation guarantees are within a constant factor of the optimum. By “efficient”, we mean that the number of communication rounds is poly-logarithmic in the size of the input. In the scheduling problem, we have a bipartite graph with computing agents on one side and resources on the other. Agents that share a resource can communicate in one time step. Each agent has a list of jobs, each with its own length and profit, to be executed on a neighbouring resource within a given time window. Each job is also associated with a rational number between zero and one (its width), specifying the amount of resource required by the job. Resources can execute non-preemptively multiple jobs whose total width at any given time is at most one. The goal is to maximize the profit of the jobs that are scheduled. We then adapt our scheduling algorithm to solve the weighted b-matching problem, the generalization of the weighted matching problem in which, for each vertex v, at most b(v) edges incident to v can be included in the matching. For this problem we obtain a randomized distributed algorithm with approximation guarantee $\frac{1}{6+\epsilon}$, for any $\epsilon > 0$. For weighted matching, we devise a deterministic distributed algorithm with the same approximation ratio. To our knowledge, we give the first distributed algorithm for the aforementioned scheduling problem as well as the first deterministic distributed algorithm for weighted matching with poly-logarithmic running time. A very interesting feature of our algorithms is that they are all derived in a systematic manner from primal-dual algorithms.
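As a point of comparison for the b-matching objective, the sequential greedy sketch below scans edges in decreasing weight and keeps an edge whenever both endpoints still have spare capacity, a classical 1/2-approximation; it is neither distributed nor primal-dual, and all names and data are assumptions.

```python
def greedy_b_matching(edges, b):
    """Greedy 1/2-approximation for maximum-weight b-matching: scan edges in
    decreasing weight and keep an edge if both endpoints still have spare
    capacity.  Sequential baseline sketch; the paper's algorithms are
    distributed and primal-dual based."""
    capacity = dict(b)                       # remaining degree bound per vertex
    chosen = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if capacity.get(u, 0) > 0 and capacity.get(v, 0) > 0:
            chosen.append((u, v, w))
            capacity[u] -= 1
            capacity[v] -= 1
    return chosen

# Example (assumed data): b(v) = 1 for all v reduces to ordinary matching.
edges = [("a", "b", 3.0), ("b", "c", 2.5), ("c", "d", 2.0)]
print(greedy_b_matching(edges, {"a": 1, "b": 1, "c": 1, "d": 1}))
```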

17.
Advance reservation is important for guaranteeing the quality of service of jobs, since it grants them exclusive access to resources over a defined time interval. It is a challenge for the scheduler to organize available resources efficiently and to allocate them appropriately to parallel advance reservation jobs with deadline constraints. This paper provides a slot-based data structure that organizes the available resources of multiprocessor systems in a way that enables efficient search and update operations, and formulates a suite of scheduling policies to allocate resources for dynamically arriving advance reservation requests. The performance of the scheduling algorithms was investigated by simulations with different job sizes and durations, system loads, and scheduling flexibilities. Simulation results show that job sizes and durations, system load, and scheduling flexibility all affect the performance of the scheduling algorithms; the PE Worst Fit algorithm gives the scheduler the highest acceptance rate of advance reservation requests, while jobs scheduled with the First Fit algorithm experience the lowest average slowdown. The data structure and scheduling policies can be used to organize and allocate resources for parallel advance reservation jobs with deadline constraints in large-scale computing systems.
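The flavour of such scheduling policies can be seen in the toy sketch below, which searches a discrete availability profile for the earliest window that can host an advance reservation (First Fit) and commits it; the profile representation and all names are assumptions, not the paper's slot-based data structure.

```python
def first_fit_reservation(free_pe, need_pe, duration, earliest, deadline):
    """Reserve `need_pe` processors for `duration` consecutive time slots,
    starting no earlier than `earliest` and finishing by `deadline`, on an
    availability profile `free_pe[t]` = free processors in slot t.
    First Fit picks the earliest feasible start; a toy sketch, not the
    paper's slot-based data structure."""
    for start in range(earliest, deadline - duration + 1):
        window = free_pe[start:start + duration]
        if all(f >= need_pe for f in window):
            for t in range(start, start + duration):   # commit the reservation
                free_pe[t] -= need_pe
            return start
    return None                                         # request rejected

# Example (assumed data): 8 time slots on a 4-processor system.
profile = [4, 4, 1, 4, 4, 4, 2, 4]
print(first_fit_reservation(profile, need_pe=3, duration=2, earliest=0, deadline=6))
print(profile)
```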

18.
This paper examines a number of variants of the sparse k-spanner problem and presents hardness results concerning their approximability. Previously, it was known that most k-spanner problems are weakly inapproximable (namely, they are NP-hard to approximate with ratio O(log n), for every k ≥ 2) and that the unit-length k-spanner problem for constant stretch requirement k ≥ 5 is strongly inapproximable (namely, it is NP-hard to approximate with ratio ). The results of this paper significantly expand the ranges of hardness for k-spanner problems. In general, strong hardness is shown for a number of k-spanner problems, for certain ranges of the stretch requirement k depending on the particular variant at hand. The problems studied differ by the types of edge weights and lengths used, and they include directed, augmentation and client-server variants. The paper also considers k-spanner problems in which the stretch requirement k is relaxed (e.g., . For these cases, no inapproximability results were known (even for a constant approximation ratio) for any spanner problem. Moreover, some versions of the k-spanner problem are known to enjoy the ratio-degradation property; namely, their complexity decreases exponentially with the inverse of the stretch requirement. So far, no hardness result existed precluding any k-spanner problem from enjoying this property. This paper establishes strong inapproximability results for the case of relaxed stretch requirement (up to , for any ), for a large variety of k-spanner problems. It is also shown that these problems do not enjoy the ratio-degradation property.

19.
An optimal prefetching and I/O scheduling algorithm L-OPT, for parallel I/O systems, using a read-once model of block references is presented. The algorithm uses knowledge of the next $L$ references, $L$-block lookahead, to create a minimal-length I/O schedule. For a system with $D$ disks and a buffer of capacity $m$ blocks, we show that the competitive ratio of L-OPT is $\Theta(\sqrt{mD/L})$ when $L \geq m$, which matches the lower bound of any prefetching algorithm with $L$-block lookahead. Tight bounds for the remaining ranges of lookahead are also presented. In addition we show that L-OPT is the optimal offline algorithm: when the lookahead consists of the entire reference string, it performs the absolute minimum possible number of I/Os. Finally, we show that L-OPT is comparable with the best online algorithm with the same amount of lookahead; the ratio of the length of its schedule to the length of the optimal schedule is always within a constant factor.

20.
We give a complete characterization of the complexity of the element distinctness problem for n elements of $m$ bits each on deterministic and nondeterministic one-tape Turing machines. We present an algorithm running in time for deterministic machines and nondeterministic solutions that are of time complexity . For elements of logarithmic size , on nondeterministic machines, these results close the gap between the known lower bound and the previous upper bound . Additional lower bounds are given to show that the upper bounds are optimal for all other possible relations between m and n. The upper bounds employ hashing techniques, while the lower bounds make use of the communication complexity of set disjointness. Received: 23 April 2001. Published online: 2 September 2003. Holger Petersen: Supported by Deutsche Akademie der Naturforscher Leopoldina, grant number BMBF-LPD 9901/8-1 of Bundesministerium für Bildung und Forschung.
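The hashing upper-bound technique is easy to state in the RAM model, as in the sketch below; note that the paper's results concern one-tape Turing machines, where matching the stated bounds is substantially harder. Names and data are illustrative.

```python
def all_distinct(elements):
    """Element distinctness via hashing: insert each element into a hash set
    and stop at the first repeat.  A RAM-model illustration only; on
    one-tape Turing machines, as studied in the paper, achieving the stated
    bounds is far more delicate."""
    seen = set()
    for x in elements:
        if x in seen:
            return False
        seen.add(x)
    return True

print(all_distinct([3, 1, 4, 1, 5]))   # False: 1 repeats
print(all_distinct([2, 7, 1, 8]))      # True
```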
