1.
Abstract. We consider two fundamental problems in dynamic scheduling: scheduling to meet deadlines in a preemptive multiprocessor setting, and scheduling to provide good response time in a number of scheduling environments. When viewed from the perspective of traditional worst-case analysis, no good on-line algorithms exist for these problems, and for some variants no good off-line algorithms exist unless P = NP.
We study these problems using a relaxed notion of competitive analysis, introduced by Kalyanasundaram and Pruhs, in which the on-line algorithm is allowed more resources than the optimal off-line algorithm to which it is compared. Using this approach, we establish that several well-known on-line algorithms that have poor performance from an absolute worst-case perspective are optimal for the problems in question when allowed moderately more resources. For optimization of average flow time, these are the first results of any sort, for any NP-hard version of the problem, that indicate that it might be possible to design good approximation algorithms.
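The resource-augmentation idea the abstract describes can be illustrated with a small simulation: run the same on-line policy (here, preemptive Earliest Deadline First, a standard deadline-scheduling rule used for illustration, not necessarily the algorithms analyzed in the paper) on machines of speed s, and check whether a modest speedup lets it meet deadlines it would otherwise miss. The discrete-time model and job format below are our own simplifications.

```python
def edf_feasible(jobs, m, speed=1.0, dt=1.0):
    """Discrete-time sketch of preemptive Earliest Deadline First on m
    machines, each running at the given speed (speed > 1 models the
    augmented resources granted to the on-line algorithm).
    jobs: list of (release, deadline, work) triples.
    Returns True iff every job finishes by its deadline here."""
    remaining = [w for (_, _, w) in jobs]
    t = 0.0
    horizon = max(d for (_, d, _) in jobs)
    while t < horizon:
        # Jobs released and not yet finished, earliest deadline first.
        ready = [i for i, (r, d, w) in enumerate(jobs)
                 if r <= t and remaining[i] > 1e-9]
        ready.sort(key=lambda i: jobs[i][1])
        for i in ready[:m]:               # at most one machine per job
            remaining[i] = max(0.0, remaining[i] - speed * dt)
        t += dt
        for i, (r, d, w) in enumerate(jobs):
            if t > d + 1e-9 and remaining[i] > 1e-9:
                return False              # deadline missed
    return all(x <= 1e-9 for x in remaining)
```

For example, a single job of work 1.5 due at time 1 is infeasible on a unit-speed machine but feasible at speed 1.5, the kind of effect a speed-augmented analysis exploits. The coarse time step dt makes this a sketch, not a faithful scheduler.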
2.
Experimental design methods can be applied to engineering design activities to understand which variables affect the system under consideration, how these variables affect the system, and how to select variable settings that will give uniformly long life to the system. The objective of this paper is to demonstrate the use of Design and Analysis of Computer Experiment (DACE) methods (Sacks, J. et al., 1989) and design optimization via the Surrogate Management Framework (Booker, A. J. et al., 1999; Audet, C. et al., 2000) on reliability optimization problems. Reliabilities are calculated using the Probabilistic Structural Analysis Method (Thoft-Christensen and Baker, 1982; Haldar and Mahadevan, 2000), a method for estimating reliabilities and reliability indices for a structural model given probability distributions for design variables and “environmental” variables such as loads. By maximizing reliability, or minimizing the probability of failure, we attempt to achieve a minimum-cost design that is affected minimally by the variability in the design variables.
3.
Franklin T. Luk, Eric K. Torng, Cynthia J. Anfinson. The Journal of VLSI Signal Processing, 1989, 1(3): 181-188
Existing fault tolerance schemes have often been ignored by systolic array designers because they are too costly and unwieldy
to implement. With this in mind, we have developed a new technique specially tailored for recursive least squares minimization
that emphasizes simplicity. We propose a new decoding scheme that allows for error detection while wasting no precious processor
cycles and preserving the basic structure of the systolic array. We will show that errors can be detected by examining a single
scalar. The technique can be implemented with negligible algorithmic modification and little additional hardware. The simplicity
of our method invites its use in future systolic arrays.
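The single-scalar error check the abstract describes can be illustrated with the generic checksum idea from algorithm-based fault tolerance (Huang and Abraham style, not the paper's RLS-specific decoding scheme): append a column-sum row to the matrix, and after the computation compare one extra output entry against the sum of the results.

```python
def matvec_with_checksum(A, x):
    """Compute y = A x with a checksum row (the column sums of A)
    appended.  If the computation is fault-free, the extra output
    entry equals sum(y), so a single scalar comparison detects a
    corrupted result without recomputing the product."""
    n = len(A)
    Ac = A + [[sum(col) for col in zip(*A)]]   # append checksum row
    y = [sum(a * b for a, b in zip(row, x)) for row in Ac]
    return y[:n], y[n]

y, check = matvec_with_checksum([[1, 2], [3, 4]], [1, 1])
fault_detected = abs(sum(y) - check) > 1e-9   # one scalar test
```

Here no fault is injected, so the scalar test passes; corrupting any entry of `y` would make it fail.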
4.
No abstract.
Received March 16, 2000.
5.
We analyze the List scheduling algorithm for the problem of minimizing makespan using a worst-average-case or wac analysis technique, previously used by Kenyon for analyzing the Best Fit bin packing algorithm. We show that List’s worst-average-case or wac ratio asymptotically approaches 2 as m → ∞. Thus, List’s worst-case behavior is not overly dependent on the order of job arrivals.
C.J. Osborn is supported in part by NSF grant CCR 0105283. This work was done while the author was an undergraduate student
at Michigan State University.
E. Torng is supported in part by NSF grant CCR 0105283.
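The List rule analyzed above is Graham's classic greedy: take jobs in arrival order and place each on a currently least-loaded machine. A minimal sketch, together with the standard two-machine worst case:

```python
def list_schedule(jobs, m):
    """Graham's List rule: assign each job, in the given order,
    to a machine with the smallest current load."""
    loads = [0] * m
    for p in jobs:
        i = min(range(m), key=loads.__getitem__)  # least-loaded machine
        loads[i] += p
    return loads

# Classic worst case for m = 2: two unit jobs followed by one job of
# length 2.  List spreads the unit jobs first and then must stack the
# long job, giving makespan 3 versus the optimal 2 (the 2 - 1/m bound).
loads = list_schedule([1, 1, 2], 2)
```

The wac result above says this 2 - 1/m behavior is not an artifact of one unlucky arrival order: averaged over orders, the worst instances still approach ratio 2 for large m.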
6.
Optimal Time-Critical Scheduling via Resource Augmentation
7.
Inner product computation is an important operation, invoked repeatedly in matrix multiplications. A high-speed inner product processor can be very useful (among many possible applications) in real-time signal processing. This paper presents the design of a fast inner product processor, with appreciably reduced latency and cost. The inner product processor is implemented with a tree of carry-propagate or carry-save adders; this structure is obtained with the incorporation of three innovations in the conventional multiply/add tree: (1) The leaf-multipliers are expanded into adder subtrees, thus achieving an O(log Nb) latency, where N denotes the number of elements in a vector and b the number of bits in each element. (2) The partial products, to be summed in producing an inner product, are reordered according to their “minimum alignments.” This reordering brings approximately a 20% saving in hardware—including adders and data paths. The reduction in adder widths also yields savings in carry propagation time for carry-propagate adders. (3) For trees implemented with carry-save adders, the partial product reordering also serves to truncate the carry propagation chain in the final propagation stage by 2 log b − 1 positions, thus significantly reducing the latency further. A form of the Baugh and Wooley algorithm is adopted to implement two's complement notation with changes only in peripheral hardware.
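The adder-tree structure behind the O(log Nb) latency can be sketched in software: instead of summing the partial products in a chain (linear depth), reduce them pairwise in a balanced binary tree (logarithmic depth). This toy model captures only the tree shape, not the carry-save hardware details.

```python
def tree_sum(values):
    """Reduce a list with a balanced binary tree of additions:
    each pass halves the number of operands, so the depth is
    O(log n) rather than the O(n) of a sequential accumulation."""
    vals = list(values)
    while len(vals) > 1:
        nxt = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element carries to the next level
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

def inner_product(a, b):
    """Inner product as a tree reduction of the partial products."""
    return tree_sum(x * y for x, y in zip(a, b))
```

In hardware each tree level runs in parallel, which is where the latency saving comes from; in this sequential sketch the tree only changes the association order.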
8.
A nomenclature describing a set of essential dimensions or characteristics germane to integrated broadband switching architectures is presented. Using this vocabulary, a classification of switching architectures is proposed. The classification is intended to afford the switch designer an ordered and reasoned approach to deciding among the numerous choices in the design of broadband switches.
9.
We study the on-line caching problem in a restricted cache where each memory item can be placed in only a restricted subset of cache locations. Examples of restricted caches in practice include victim caches, assist caches, and skew caches. To the best of our knowledge, all previous on-line caching studies have considered on-line caching in identical or fully associative caches where every memory item can be placed in any cache location. In this paper, we focus on companion caches, a simple restricted cache that includes victim caches and assist caches as special cases. Our results show that restricted caches are significantly more complex than identical caches. For example, we show that the commonly studied Least Recently Used algorithm is not competitive unless cache reorganization is allowed, while the performance of the First In First Out algorithm is competitive but not optimal. We also present two near-optimal algorithms for this problem as well as lower bound arguments.
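A companion cache of the kind described above can be modeled in a few lines. The sketch below is our own simplification (not the paper's exact framework): a direct-mapped main cache plus a small fully associative companion buffer with FIFO replacement, so that a line evicted from its unique main-cache location survives in the companion, as in a victim cache.

```python
class CompanionCacheFIFO:
    """Toy companion cache: `sets` direct-mapped main lines plus a
    fully associative companion buffer with FIFO replacement."""
    def __init__(self, sets, companion_size):
        self.sets = sets
        self.companion_size = companion_size
        self.main = {}       # set index -> cached item
        self.companion = []  # FIFO order, oldest first
        self.misses = 0

    def access(self, item):
        idx = item % self.sets        # each item maps to one main line
        if self.main.get(idx) == item or item in self.companion:
            return True               # hit in main or companion
        self.misses += 1
        victim = self.main.get(idx)
        self.main[idx] = item
        if victim is not None:        # displaced line moves to companion
            self.companion.append(victim)
            if len(self.companion) > self.companion_size:
                self.companion.pop(0)  # FIFO eviction from companion
        return False

cache = CompanionCacheFIFO(sets=2, companion_size=1)
hits = [cache.access(x) for x in (0, 2, 0)]  # 0 survives in the companion
```

Items 0 and 2 conflict on the same main line, yet the third access still hits because the companion retained the displaced line; without the companion the trace would miss three times.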
10.
E. Torng. Algorithmica, 1998, 20(2): 175-200
Paging (caching) is the problem of managing a two-level memory hierarchy in order to minimize the time required to process a sequence of memory accesses. In order to measure this quantity, which we refer to as the total memory access time, we define the system parameter miss penalty to represent the extra time required to access slow memory. We also introduce the system parameter page size. In the context of paging, the miss penalty is quite large, so most previous studies of on-line paging have implicitly set miss penalty = ∞ in order to simplify the model. We show that this seemingly insignificant simplification substantially alters the precision of derived results. For example, previous studies have essentially ignored page size. Consequently, we reintroduce the miss penalty and page size parameters to the paging problem and present a more accurate analysis of on-line paging (and caching). We validate this more accurate model by deriving intuitively appealing results for the paging problem which cannot be derived using the simplified model.
First, we present a natural, quantifiable definition of the amount of locality of reference in any access sequence. We also point out that the amount of locality of reference in an access sequence should depend on page size, among other factors. We then show that deterministic and randomized marking algorithms, such as the popular least recently used (LRU) algorithm, achieve constant competitive ratios when processing typical access sequences which exhibit significant locality of reference; this is the first competitive analysis result which (partially) explains why LRU performs as well as it does in practice.
Next, we show that finite lookahead can be used to obtain algorithms with improved competitive ratios. In particular, we prove that modified marking algorithms with sufficient lookahead achieve a competitive ratio of 2. This is in stark contrast to the simplified model, where lookahead cannot be used to obtain algorithms with improved competitive ratios.
We conclude by using competitive analysis to evaluate the benefits of increasing associativity in caches. We accomplish this
by specifying an algorithm and varying the system configuration rather than the usual process of specifying the system configuration
and varying the algorithm.
Received August 7, 1995; revised May 7, 1996, and August 6, 1996.
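The total-memory-access-time objective described above is easy to make concrete: charge one time unit per hit and a finite miss penalty per miss, rather than the usual implicit infinite penalty. The sketch below simulates LRU under this cost model (the unit hit cost is our own illustrative choice).

```python
from collections import OrderedDict

def lru_total_access_time(requests, k, miss_penalty):
    """Process a page-request sequence with an LRU cache of k pages.
    Each hit costs 1 time unit; each miss costs `miss_penalty`, kept
    as an explicit finite parameter of the model."""
    cache = OrderedDict()          # insertion order tracks recency
    total = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)    # refresh recency on a hit
            total += 1
        else:
            total += miss_penalty
            cache[page] = True
            if len(cache) > k:
                cache.popitem(last=False)  # evict least recently used
    return total
```

With miss_penalty set very large this reduces to counting misses, as in the simplified model; keeping it finite is what lets hit costs, and hence page size and locality, enter the analysis.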