1.
Edmonds, Pruhs. Algorithmica, 2008, 36(3): 315-330
Abstract. We investigate server scheduling policies to minimize average user-perceived latency in pull-based client-server systems (systems where multiple clients request data from a server) where the server answers requests on a multicast/broadcast channel. We first show that there is no O(1)-competitive algorithm for this problem. We then give a method to convert any nonclairvoyant unicast scheduling algorithm A into a nonclairvoyant multicast scheduling algorithm B. We show that if A works well when jobs can have parallel and sequential phases, then B works well if it is given twice the resources. More formally, if A is an s-speed c-approximation unicast algorithm, then its counterpart algorithm B is a 2s-speed c-approximation multicast algorithm. It is already known [5] that Equi-partition, which devotes an equal amount of processing power to each job, is a (2 + ε)-speed O(1 + 1/ε)-approximation algorithm for unicast scheduling of such jobs. Hence, it follows that the algorithm BEQUI, which broadcasts each requested file at a rate proportional to the number of outstanding requests for that file, is a (4 + ε)-speed O(1 + 1/ε)-approximation algorithm. We give another algorithm, BEQUI-EDF, and show that it is also a (4 + ε)-speed O(1 + 1/ε)-approximation algorithm. However, BEQUI-EDF has the advantage that the maximum number of preemptions is linear in the number of requests, and the advantage that no preemptions occur if the data items have unit size.
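The proportional-rate rule that defines BEQUI is simple enough to state in a few lines. The sketch below is our own illustration of that rule; the function name `bequi_rates`, the request representation, and the `total_bandwidth` parameter are assumptions, not from the paper.

```python
# Illustrative sketch of the proportional broadcast-rate rule behind BEQUI.
from collections import Counter

def bequi_rates(outstanding_requests, total_bandwidth):
    """outstanding_requests: one file id per pending request.
    Returns {file_id: broadcast rate}, sharing the channel in proportion
    to the number of outstanding requests for each file."""
    counts = Counter(outstanding_requests)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {f: total_bandwidth * c / total for f, c in counts.items()}

# Example: three pending requests for "a", one for "b", channel speed 4.
print(bequi_rates(["a", "a", "b", "a"], total_bandwidth=4.0))
# -> {'a': 3.0, 'b': 1.0}
```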
2.
We consider the setting of a multiprocessor where the speeds of the m processors can be individually scaled. Jobs arrive over time and have varying degrees of parallelizability. A nonclairvoyant scheduler must assign the processes to processors and scale the speeds of the processors. We consider the objective of energy plus flow time. We assume that a processor running at speed s uses power s^α for some constant α > 1. For processes that may have side effects or that are not checkpointable, we show an Ω(m^((α-1)/α^2)) bound on the competitive ratio of any randomized algorithm. For checkpointable processes without side effects, we give an O(log m)-competitive algorithm. Thus for processes that may have side effects or that are not checkpointable, the achievable competitive ratio grows quickly with the number of processors, but for checkpointable processes without side effects, the achievable competitive ratio grows slowly with the number of processors. We then show a lower bound of Ω(log^(1/α) m) on the competitive ratio of any randomized algorithm for checkpointable processes without side effects.
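For intuition, a common rule in the energy-plus-flow-time literature (not necessarily the algorithm analyzed in this paper) is to run a processor at the speed where its power draw equals the number of active jobs, i.e. s = n^(1/α). A minimal sketch under that assumption:

```python
# A minimal sketch of a standard speed-scaling rule from the
# energy-plus-flow-time literature (not necessarily this paper's algorithm):
# run at the speed s where the power draw s**alpha equals the number of
# currently active jobs, i.e. s = n**(1/alpha).
def scaled_speed(active_jobs: int, alpha: float = 2.0) -> float:
    """Speed at which power (speed**alpha) equals the number of active jobs."""
    assert alpha > 1, "the power function s**alpha assumes alpha > 1"
    return active_jobs ** (1.0 / alpha)

# Example: with alpha = 2 and four active jobs, run at speed 2 (power 4).
s = scaled_speed(4, alpha=2.0)
print(s, s ** 2)  # 2.0 4.0
```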
3.
We consider the setting of a device that obtains its energy from a battery and some regenerative source such as a solar cell. We consider the speed scaling problem of scheduling a collection of tasks with release times, deadlines, and sizes, so as to minimize the energy recharge rate of the regenerative source. This is the first theoretical investigation of speed scaling for devices with a regenerative energy source. We show that the problem can be expressed as a polynomial-sized convex program. We show that, using the KKT conditions, one can obtain an efficient algorithm to verify the optimality of a schedule. We show that the energy-optimal YDS schedule is 2-approximate with respect to the recharge rate. We show that the online algorithm BKP is O(1)-competitive with respect to recharge rate.
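The YDS schedule referred to above is the classical offline algorithm of Yao, Demers, and Shenker: repeatedly find the densest "critical interval", run its jobs at exactly that density, and contract the interval out of the instance. The brute-force sketch below is ours; the `Job` type, helper names, and the assumption that every job has release < deadline and positive work are illustrative choices, not the paper's code.

```python
# Brute-force sketch of the classical YDS schedule (Yao, Demers, Shenker).
# Not an efficient implementation; it only reports the critical-interval speeds.
from dataclasses import dataclass

@dataclass
class Job:
    release: float
    deadline: float
    work: float

def contract(t, t1, t2):
    """Map time t onto the timeline with [t1, t2) cut out."""
    if t <= t1:
        return t
    if t >= t2:
        return t - (t2 - t1)
    return t1

def yds_critical_speeds(jobs):
    """Return the sequence of critical-interval speeds chosen by YDS."""
    jobs = [Job(j.release, j.deadline, j.work) for j in jobs]  # work on copies
    speeds = []
    while jobs:
        times = sorted({j.release for j in jobs} | {j.deadline for j in jobs})
        best = None  # (density, (t1, t2), jobs fully inside the interval)
        for i, t1 in enumerate(times):
            for t2 in times[i + 1:]:
                inside = [j for j in jobs if t1 <= j.release and j.deadline <= t2]
                density = sum(j.work for j in inside) / (t2 - t1)
                if best is None or density > best[0]:
                    best = (density, (t1, t2), inside)
        density, (t1, t2), inside = best
        speeds.append(density)
        jobs = [j for j in jobs if j not in inside]
        for j in jobs:  # contract the critical interval out of the timeline
            j.release = contract(j.release, t1, t2)
            j.deadline = contract(j.deadline, t1, t2)
    return speeds

# Two jobs: the first forces speed 2 on [0, 2]; the second then runs at 2/3.
print(yds_critical_speeds([Job(0, 2, 4), Job(1, 5, 2)]))  # [2.0, 0.666...]
```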
4.
In this paper we develop an easily applicable algorithmic technique/tool for developing approximation schemes for certain types of combinatorial optimization problems. Special cases that are covered by our result show up in many places in the literature. For every such special case, a particular rounding trick has been implemented in a slightly different way, with slightly different arguments, and with slightly different worst case estimations. Usually, the rounding procedure depended on certain upper or lower bounds on the optimal objective value that have to be justified in a separate argument. Our easily applied result unifies many of these results, and sometimes it even leads to a simpler proof.
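A typical instance of the kind of rounding the abstract refers to, offered only as a generic illustration rather than the authors' specific technique, is to round each input number up to the nearest power of (1 + ε): this perturbs a linear objective by at most a (1 + ε) factor while leaving only logarithmically many distinct values for a dynamic program to consider.

```python
# Generic illustration of geometric rounding (not the authors' specific tool):
# round each positive input up to the nearest power of (1 + eps).
import math

def round_to_powers(values, eps):
    """Round each positive value up to the nearest power of (1 + eps).

    Each value grows by a factor of at most (1 + eps), so a linear objective
    is perturbed by at most (1 + eps); afterwards only O(log_{1+eps}(max/min))
    distinct values remain, which typically keeps a dynamic program polynomial."""
    base = 1.0 + eps
    return [base ** math.ceil(math.log(v, base)) for v in values]

print(round_to_powers([3.0, 7.0, 19.0], eps=0.1))
```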
5.
6.
Dedication     
7.
Abstract. We study Web caching when the input sequence is a depth-first search traversal of some tree. There are at least two good motivations for investigating tree traversal as a search technique on the WWW: First, empirical studies of people browsing and searching the WWW have shown that user access patterns commonly are nearly depth-first traversals of some tree. Secondly (as we will show in this paper), the problem of visiting all the pages on some WWW site using anchor clicks (clicks on links) and back-button clicks, by far the two most common user actions, reduces to the problem of how best to cache a tree traversal sequence (up to constant factors). We show that for tree traversal sequences the optimal offline strategy can be computed efficiently. In the bit model, where the access time of a page is proportional to its size, we show that the online algorithm LRU is (1 + 1/ε)-competitive against an adversary with an unbounded cache as long as LRU has a cache of size at least (1 + ε) times the size of the largest item in the input sequence. In the general model, where pages have arbitrary access times and sizes, we show that in order to be constant competitive, any online algorithm needs a cache large enough to store Ω(log n) pages; here n is the number of distinct pages in the input sequence. We provide a matching upper bound by showing that the online algorithm Landlord is constant competitive against an adversary with an unbounded cache if Landlord has a cache large enough to store the Ω(log n) largest pages. This is further theoretical evidence that Landlord is the "right" algorithm for Web caching.
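For reference, Landlord is the standard credit-based caching algorithm for pages with arbitrary sizes and costs: a hit refreshes a page's credit, and on a miss every cached page is charged "rent" proportional to its size until some page's credit reaches zero and it is evicted. The sketch below is our own minimal rendering of that rule; the class and field names are ours.

```python
# Minimal sketch of the Landlord caching rule (Young's credit-based algorithm).
# We assume every page fits in the cache on its own.
class Landlord:
    def __init__(self, capacity):
        self.capacity = capacity
        self.size = {}     # page -> size
        self.credit = {}   # page -> remaining credit

    def used(self):
        return sum(self.size.values())

    def request(self, page, size, cost):
        """Return True on a hit, False on a miss (the page is then cached)."""
        if page in self.credit:                      # hit: refresh credit
            self.credit[page] = cost
            return True
        while self.used() + size > self.capacity:    # miss: make room
            delta = min(self.credit[q] / self.size[q] for q in self.credit)
            for q in self.credit:
                self.credit[q] -= delta * self.size[q]
            victim = next(q for q in self.credit if self.credit[q] <= 1e-12)
            del self.size[victim], self.credit[victim]
        self.size[page], self.credit[page] = size, cost
        return False

cache = Landlord(capacity=10)
print(cache.request("a", size=6, cost=3.0))  # False: miss, "a" cached
print(cache.request("b", size=6, cost=1.0))  # False: miss, "a" evicted
print(cache.request("b", size=6, cost=1.0))  # True: hit
```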
8.
9.
One of the major problems in the Internet today is the scalable delivery of data. With more and more people joining the Internet community, web servers and services are being forced to deal with workloads beyond their original data dissemination design capacity. One solution that has arisen to address scalability is to use multicasting, or push-based data dissemination, to send out data to many clients at once. More recently, the idea of using multicasting as part of a hybrid system with unicasting has shown positive results in increasing server scalability. In this paper we focus on solving problems associated with the hybrid dissemination model. In particular, we address the issues of document popularity and document division while arguing for the use of a third channel, called the multicast pull channel, in the hybrid system model. This channel improves performance in terms of response time while improving the robustness of the hybrid system. Through extensive simulation with our working hybrid server, we show the usefulness of this additional channel and its effect in making the web server more scalable and more efficient.
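As a purely illustrative sketch of how a three-channel hybrid server might route documents, the dispatcher below sends very popular documents to a periodic multicast push schedule, moderately popular ones to a multicast pull channel (one broadcast satisfies all pending requests), and the long tail to unicast. The thresholds, channel names, and API are our assumptions, not the system described in the paper.

```python
# Hypothetical popularity-based dispatcher for a three-channel hybrid server.
from enum import Enum

class Channel(Enum):
    MULTICAST_PUSH = "push"      # broadcast periodically, no request needed
    MULTICAST_PULL = "mpull"     # broadcast once per batch of pending requests
    UNICAST = "unicast"          # one response per request

def choose_channel(requests_per_minute: float,
                   push_threshold: float = 100.0,
                   mpull_threshold: float = 5.0) -> Channel:
    """Pick a delivery channel for a document from its observed popularity."""
    if requests_per_minute >= push_threshold:
        return Channel.MULTICAST_PUSH
    if requests_per_minute >= mpull_threshold:
        return Channel.MULTICAST_PULL
    return Channel.UNICAST

for rate in (500.0, 20.0, 0.5):
    print(rate, choose_channel(rate).value)
```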
10.
We state some of the most important open algorithmic problems in real-time scheduling, and survey progress made on these problems since the 2009 Dagstuhl scheduling seminar.