Similar Documents
20 similar documents found.
1.
While standard parallel machine scheduling is concerned with good assignments of jobs to machines, we aim to understand how the quality of an assignment is affected if the jobs’ processing times are perturbed and therefore turn out to be longer (or shorter) than declared. We focus on online scheduling with perturbations occurring at any time, such as in railway systems when trains are late. For a variety of conditions on the severity of perturbations, we present bounds on the worst-case ratio of two makespans. For the first makespan, we let the online algorithm assign jobs to machines based on the non-perturbed processing times. We compute the makespan by replacing each job’s processing time with its perturbed version while still sticking to the computed assignment. The second is an optimal offline solution for the perturbed processing times. The deviation of this ratio from the competitive ratio of the online algorithm tells us about the “price of perturbations”. We analyze this setting for Graham’s algorithm, and among other bounds show a competitive ratio of 2 for perturbations decreasing the processing time of a job arbitrarily, and a competitive ratio of less than 2.5 for perturbations doubling the processing time of a job. We complement these results by providing lower bounds for any online algorithm in this setting. Finally, we propose a risk-aware online algorithm tailored for the possible bounded increase of the processing time of one job, and we show that this algorithm can be worse than Graham’s algorithm in some cases.
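To make the comparison of the two makespans concrete, the following Python sketch applies Graham's greedy rule to the declared processing times and then evaluates the resulting assignment under perturbed times. The job data, the perturbation, and the use of the standard max(largest job, average load) lower bound in place of the exact offline optimum are all illustrative assumptions, not details from the paper.

```python
# Sketch of the perturbation setting: Graham's greedy list scheduling on
# declared times, evaluated against perturbed times. The instance and the
# perturbation below are invented for illustration.

def graham_assign(declared, m):
    """Assign each job, in list order, to the currently least loaded machine."""
    loads = [0.0] * m
    assignment = []
    for p in declared:
        i = loads.index(min(loads))  # least loaded machine
        assignment.append(i)
        loads[i] += p
    return assignment

def makespan(times, assignment, m):
    loads = [0.0] * m
    for p, i in zip(times, assignment):
        loads[i] += p
    return max(loads)

declared  = [4.0, 3.0, 2.0, 2.0, 5.0]
perturbed = [8.0, 3.0, 2.0, 2.0, 5.0]  # the first job turns out twice as long
m = 2

assign = graham_assign(declared, m)
online_makespan = makespan(perturbed, assign, m)
# Standard lower bound on the optimal offline makespan for the perturbed
# times; the true ratio uses the exact optimum, which for 5 jobs could be
# brute-forced.
offline_lb = max(max(perturbed), sum(perturbed) / m)
print(online_makespan / offline_lb)
```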

2.
In this paper we describe a general grouping technique to devise faster and simpler approximation schemes for several scheduling problems. We illustrate the technique on two different scheduling problems: scheduling on unrelated parallel machines with costs and the job shop scheduling problem. The time complexity of the resulting approximation schemes is always linear in the number n of jobs, and the multiplicative constant hidden in the O(n) running time is reasonably small and independent of the error ε. Supported by Swiss National Science Foundation project 200020-109854, “Approximation Algorithms for Machine scheduling Through Theory and Experiments II”. A preliminary version of this paper appeared in the Proceedings of ESA’01.

3.
We explore in this paper how the performance of e-commerce websites in terms of various criteria influences customers’ intention to shop again at the same website. Our approach is based on an interesting use of statistical regression in the hotel literature that attempted to classify different cues in hotels as critical, satisfier, dissatisfier, etc. We use online ratings for 484 e-commerce websites for this study. Our study shows that “satisfaction with claims” is the single most important criterion, valued as critical by online customers. “Comparative prices” and “Refunds/returns” are desirable criteria. “Management accessibility”, “Payment process” and “Privacy experience” are satisfiers, while “On-time delivery” is a dissatisfier.

4.
We consider the sequencing of a series of jobs that arrive at a single processor over time. At each job’s arrival time, a due date must be quoted for the job, and the job must complete processing before its quoted due date. The objective is to minimize the sum (or average) of quoted due dates, or equivalently, the average quoted lead time. In this paper, we propose on-line heuristics for this problem and characterize the conditions under which these heuristics are asymptotically optimal. Computational testing further demonstrates the relative effectiveness of these heuristics under various conditions. Both authors made equal contributions to this paper and are listed in alphabetical order.
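As a simple illustration of due-date quotation (our own baseline rule, not necessarily one of the paper's heuristics), the sketch below quotes each arriving job its completion time under first-come-first-served processing of the current backlog, so every quoted due date is met by construction.

```python
# Illustrative on-line due-date quotation: quote each arriving job the time
# at which it would finish if appended to the current backlog (FCFS).
# A natural baseline for this problem, invented here for illustration.

def quote_due_dates(arrivals):
    """arrivals: list of (release_time, processing_time), sorted by release."""
    quotes = []
    backlog_free = 0.0          # time at which the processor next becomes idle
    for r, p in arrivals:
        start = max(backlog_free, r)
        d = start + p           # quoted due date = FCFS completion time
        quotes.append(d)
        backlog_free = d
    return quotes

print(quote_due_dates([(0, 3), (1, 2), (5, 4)]))  # [3, 5, 9]; objective = 17
```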

5.
We study the basic problem of preemptive scheduling of a stream of jobs on a single processor. Consider an on-line stream of jobs, and let the ith job arrive at time r(i) and have processing time p(i). If C(i) is the completion time of job i, then the flow time of i is C(i) − r(i) and the stretch of i is the ratio of its flow time to its processing time; that is, (C(i) − r(i))/p(i). Flow time measures the time that a job is in the system regardless of the service it requests; the stretch measure relies on the intuition that a job that requires a long service time must be prepared to wait longer than jobs that require small service times. We present improved algorithmic results for the average stretch metric in preemptive uniprocessor scheduling. Our first result is an off-line polynomial-time approximation scheme (PTAS) for average stretch scheduling. This improves upon the 2-approximation achieved by the on-line algorithm SRPT, which always schedules a job with the shortest remaining processing time. In recent work, Chekuri and Khanna (Proc. 34th Ann. Symp. Theory Comput., 297–305, 2002) presented approximation algorithms for weighted flow time, a more general metric than average stretch; their result also yields a PTAS for average stretch. Our second set of results considers the impact of incomplete knowledge of job sizes on the performance of on-line scheduling algorithms. We show that a constant-factor competitive ratio for average stretch is achievable even if the processing times (or remaining processing times) of jobs are known only to within a constant factor of accuracy.
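For concreteness, here is a small Python sketch (a toy unit-time simulation of our own devising, not the paper's algorithmic machinery) that runs SRPT on an invented instance and reports each job's flow time C(i) − r(i) and stretch (C(i) − r(i))/p(i).

```python
# Flow time and stretch under SRPT (shortest remaining processing time),
# simulated in unit time steps on a single processor. Toy instance only.

def srpt(jobs):
    """jobs: list of (release r_i, processing p_i) with integer times.
    Returns a list of (flow_time, stretch) per job."""
    remaining = [p for _, p in jobs]
    done = [None] * len(jobs)
    t = 0
    while any(c is None for c in done):
        avail = [i for i, (r, _) in enumerate(jobs)
                 if r <= t and done[i] is None]
        if avail:
            i = min(avail, key=lambda j: remaining[j])  # the SRPT rule
            remaining[i] -= 1
            if remaining[i] == 0:
                done[i] = t + 1                         # completion time C(i)
        t += 1
    return [(C - r, (C - r) / p) for C, (r, p) in zip(done, jobs)]

print(srpt([(0, 5), (1, 1), (2, 2)]))  # [(8, 1.6), (1, 1.0), (2, 1.0)]
```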

6.
Opinion helpfulness prediction in the presence of “words of few mouths”
This paper identifies a widely existing phenomenon in social media content, which we call the “words of few mouths” phenomenon. This phenomenon challenges the development of recommender systems based on users’ online opinions by presenting additional sources of uncertainty. In the context of predicting the “helpfulness” of a review document based on users’ online votes on other reviews (where a user’s vote on a review is either HELPFUL or UNHELPFUL), the “words of few mouths” phenomenon corresponds to the case where a large fraction of the reviews are each voted on by only very few users. Focusing on the “review helpfulness prediction” problem, we illustrate the challenges associated with the “words of few mouths” phenomenon in the training of a review helpfulness predictor. We advocate probabilistic approaches for recommender system development in the presence of “words of few mouths”. More concretely, we propose a probabilistic metric as the training target for conventional machine learning based predictors. Our empirical study using Support Vector Regression (SVR) augmented with the proposed probabilistic metric demonstrates the advantages of incorporating probabilistic methods in the training of the predictors. In addition to this “partially probabilistic” approach, we also develop a logistic regression based probabilistic model and a corresponding learning algorithm for review helpfulness prediction. We demonstrate experimentally the superior performance of the logistic regression method over SVR, the prior art in review helpfulness prediction.
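As a hedged illustration of why a probabilistic training target helps under “words of few mouths”, the sketch below uses a smoothed Beta posterior mean over helpful/unhelpful votes in place of the raw vote fraction. This is a common smoothing device chosen for illustration; the paper's actual metric may be defined differently.

```python
# A stand-in for a probabilistic training target under "words of few
# mouths": instead of the raw helpful-vote fraction h/(h+u), use a smoothed
# Beta posterior mean, which stays well-behaved when a review has only a
# handful of votes. Illustrative only; not necessarily the paper's metric.

def smoothed_helpfulness(helpful, unhelpful, alpha=1.0, beta=1.0):
    """Posterior mean of a Beta(alpha, beta) prior updated with the votes."""
    return (helpful + alpha) / (helpful + unhelpful + alpha + beta)

# The raw fraction says a 1-of-1 review (1.0) beats a 90-of-100 review (0.9);
# the smoothed target is more conservative about the thin evidence.
print(smoothed_helpfulness(1, 0))     # ~0.667
print(smoothed_helpfulness(90, 10))   # ~0.892
```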

7.
We give a generic construction of an optimal hitting set generator (HSG) from any good “reconstructive” disperser. Past constructions of optimal HSGs have been based on such disperser constructions, but have had to modify the construction in a complicated way to meet the stringent efficiency requirements of HSGs. The construction in this paper uses existing disperser constructions with the “easiest” parameter setting in a black-box fashion to give new constructions of optimal HSGs without any additional complications. Our results show that a straightforward composition of the Nisan-Wigderson pseudorandom generator that is similar to the composition in works by Impagliazzo, Shaltiel and Wigderson in fact yields optimal HSGs (in contrast to the “near-optimal” HSGs constructed in those works). Our results also give optimal HSGs that do not use any form of hardness amplification or implicit list-decoding—like Trevisan’s extractor, the only ingredients are combinatorial designs and any good list-decodable error-correcting code. A preliminary version of this paper appeared in RANDOM 2005. Supported by NSF CCF-0346991, BSF 2004329, a Sloan Research Fellowship, and an Okawa Foundation research grant.

8.
Synopsis construction algorithms have been found to be of interest in query optimization, approximate query answering and mining, and over the last few years several good synopsis construction algorithms have been proposed. These algorithms have mostly focused on the running time of synopsis construction vis-à-vis the synopsis quality. However, the space complexity of synopsis construction algorithms has not been investigated as thoroughly. Many of the optimal synopsis construction algorithms are expensive in space. For some of these algorithms the space required to construct the synopsis is significantly larger than the space required to store the input. These algorithms rely on the fact that they require a smaller “working space” and most of the data can be resident on disc. The large space complexity of synopsis construction algorithms is a handicap in several scenarios. In the case of streaming algorithms, space is a fundamental constraint. In the case of offline optimal or approximate algorithms, a better space complexity often makes these algorithms much more attractive by allowing them to run in main memory without using disc, or alternatively allows us to scale to significantly larger problems without running out of space. In this paper, we propose a simple and general technique that reduces the space complexity of synopsis construction algorithms. As a consequence we show that the notion of “working space” proposed in these contexts is redundant. This technique can be easily applied to many existing algorithms for synopsis construction problems. We demonstrate the performance benefits of our proposal through experiments on real-life and synthetic data. We believe that our algorithm also generalizes to a broader range of dynamic programs beyond synopsis construction. Sudipto Guha’s research supported in part by an Alfred P. Sloan Research Fellowship and by NSF Awards CCF-0430376, CCF-0644119. A preliminary version of the paper appeared as “Space efficiency in synopsis construction algorithms”, VLDB Conference 2005, Trondheim, [19].
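The abstract does not spell out the technique, so as a generic illustration of trading a dynamic program's space, here is the textbook rolling-array trick in Python; it conveys only the flavor of space-conscious DP and is not the paper's construction.

```python
# Textbook illustration of shrinking a DP's working space (rolling array):
# keep only the previous row instead of the full table. Shows the general
# flavor of space-conscious dynamic programming; the paper's technique is
# different and more general.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))  # DP row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete
                           cur[j - 1] + 1,               # insert
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur  # O(len(b)) space instead of O(len(a) * len(b))
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```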

9.
We use competitive analysis to study how best to use redundancy to achieve fault-tolerance in online real-time scheduling. We show that the optimal way to use spatial redundancy depends on a complex interaction of the benefits, execution times, release times, and latest start times of the jobs. We give a randomized online algorithm whose competitive ratio is O(log Φ · log Δ · (log²n · log m / log log m)) for transient faults. Here n is the number of jobs, m is the number of processors, Φ is the ratio of the maximum value density of a job to the minimum value density of a job, and Δ is the ratio of the longest possible execution time to the shortest possible execution time. We show that this bound is close to optimal by giving an Ω((log ΔΦ / log log m) · (log m / log log m)²) lower bound on the competitive ratio of any randomized algorithm. In the case of permanent faults, there is a randomized online algorithm that has a competitive ratio of O(log Φ · log Δ · (log m / log log m)). We also show a lower bound of Ω((log ΔΦ / log log m) · (log m / log log m)) on the competitive ratio for interval scheduling with permanent faults. Received October 1997; revised January 1999.

10.
The consequences of the worst-case assumption NP=P are very well understood. On the other hand, we only know a few consequences of the analogous average-case assumption “NP is easy on average.” In this paper we establish several new results on the worst-case complexity of Arthur-Merlin games (the class AM) under the average-case complexity assumption “NP is easy on average.”
–  We first consider a stronger notion of “NP is easy on average”, namely that NP is easy on average with respect to distributions that are computable by polynomial-size circuit families. Under this assumption we show that AM can be derandomized to nondeterministic subexponential time.
–  Under the assumption that NP is easy on average with respect to polynomial-time computable distributions, we show (a) AME = E, where AME is the exponential version of AM. This improves an earlier known result that if NP is easy on average then NE = E. (b) For every c > 0, AM is contained in a “pseudo” version of NP: roughly, for any language L in AM there is a language L′ in NP such that it is computationally infeasible to distinguish L from L′.
We use recent results from the area of derandomization to establish our results. A. Pavan’s research was supported by NSF grants CCR-0344817 and CCF-0430807. N.V. Vinodchandran’s research was supported by NSF grant CCF-0430991, a University of Nebraska Layman Award, and a Big 12 Fellowship.

11.
The paper identifies a problem in default reasoning in Reiter’s Default Logic and related systems: elements that are similar given the axioms alone become distinguishable in extensions. We explain why, sometimes, this is considered undesirable. Two approaches are presented for guaranteeing similarity preservation. One approach formalizes a way of uniformly applying the defaults to all similar elements by introducing generic extensions, which depend only on the similarity types of objects. According to the second approach, for a restricted class of default theories, a default theory is viewed as a “shorthand notation” for what is “really meant” by its formulation. In this approach we propose a rewriting of defaults in a form that guarantees similarity preservation of the modified theory. It turns out that the above two approaches yield the same result. This revised version was published online in June 2006 with corrections to the Cover Date.

12.
Semi-online scheduling with machine cost
For most scheduling problems the set of machines is fixed initially and remains unchanged for the duration of the problem. Recently, Imreh and Noga proposed adding the concept of machine cost to scheduling problems and considered the so-called List Model problem. An online algorithm with a competitive ratio of 1.618 was given, while the lower bound is 4/3. In this paper, two different semi-online versions of this problem are studied. In the first case, it is assumed that the processing time of the largest job is known a priori; a semi-online algorithm is presented with a competitive ratio of at most 1.5309, while the lower bound is 4/3. In the second case, it is assumed that the total processing time of all jobs is known in advance; a semi-online algorithm is presented with a competitive ratio of at most 1.414, while the lower bound is 1.161. It is shown that this additional partial information about the jobs makes it possible to construct schedules with smaller competitive ratios than those of purely online algorithms.
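For intuition about the List Model objective (makespan plus the cost of purchased machines), here is a minimal Python sketch. The purchase rule in it is a naive stand-in invented for the example; it is neither the 1.618-competitive algorithm nor either of the semi-online algorithms above.

```python
# Sketch of the List Model objective: makespan plus machine cost, at one
# unit per purchased machine. The purchase rule below (buy a new machine
# whenever the smallest load would exceed the cost paid so far) is a naive
# illustrative heuristic, not an algorithm from the literature.

def list_model(jobs):
    loads = [0.0]                              # start with one purchased machine
    for p in jobs:
        if min(loads) + p > len(loads) + 1:    # naive purchase heuristic
            loads.append(0.0)
        i = loads.index(min(loads))            # greedy assignment
        loads[i] += p
    return max(loads) + len(loads)             # makespan + machine cost

print(list_model([1.0, 2.0, 1.5, 3.0, 0.5]))   # 6.0 on this toy instance
```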

13.
The Publisher regrets that the original article incorrectly listed the authors’ location as the “People’s Republic of China”. This was a typesetting error. Their correct location is the “Republic of China”, which is also known as Taiwan. Their full affiliation is Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan. The online version of the original article can be found at .

14.
Due to the large amounts of data required to be processed by the typical Grid job, it is conceivable that the use of optical transport networks in Grid deployment (hence the term “Lambda Grid”) will increase. The exact topology of the interconnecting network is obtained by solving a dimensioning problem, and the outcome strongly depends on both the expected workload characteristics and the Grid scheduling policy. Solving this combined scheduling and dimensioning problem using straightforward ILP modelling is cumbersome; however, for steady-state Grid operation, Divisible Load Theory (DLT) can yield scalable formulations of this problem. In this paper, we study the on-line hierarchical scheduling, on a lambda Grid operating in a two-tier mode, of workloads approaching the Grid’s capacity. A number of these algorithms are goal-driven, in the sense that per-resource target goals are obtained from the off-line solution to the Divisible Load model. We compare these on-line multiresource scheduling policies for different workloads, Grid interconnection topologies and Grid parameters. We show that these algorithms perform well in the studied scenarios when compared to a fully centralized scheduling algorithm.

15.
Jin-Ho Park interprets Schindler’s “reference frames in space”, as set forth in his 1916 lecture notes on mathematics, proportion, and architecture, in the context of Robinson’s 1898–99 articles in the Architectural Record. Schindler’s unpublished, handwritten notes provide a source for his concern with “rhythmic” dimensioning in architecture. He uses a system in which rectangular dimensions are arranged in “rows.” Architectural examples from Schindler’s Shampay, Braxton-Shore and How Houses illustrate the principles.

16.
Over the years a wide variety of access control models and policies have been proposed, and almost all the models have assumed “grant the access request or deny it.” They do not provide any mechanism that enables us to bind authorization rules with required operations such as logging and encryption. We propose the notion of a “provisional action” that tells the user that his request will be authorized provided he (and/or the system) takes certain actions. The major advantage of our approach is that arbitrary actions such as cryptographic operations can all coexist in the access control policy rules. We define a fundamental authorization mechanism and then formalize a provision-based access control model. We also present algorithms and describe their algorithmic complexity. Finally, we illustrate how provisional access control policy rules can be specified effectively in practical usage scenarios. Published online: 22 January 2002
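A minimal sketch, in Python, of how provision-based rules might be represented: a request is denied or granted, and a grant can carry provisional actions such as logging or encryption that must be performed. The rule shape and names here are our illustrative assumptions, not the paper's formal model.

```python
# Illustrative representation of provision-based access control: a rule
# either denies, grants, or grants provided certain actions (e.g. logging,
# encryption) are carried out. Names and structure are assumptions made
# for this sketch, not the paper's formalism.

from dataclasses import dataclass, field

@dataclass
class Rule:
    subject: str
    obj: str
    action: str
    provisions: list = field(default_factory=list)  # required actions on grant

POLICY = [
    Rule("analyst", "salary_db", "read", provisions=["log_access"]),
    Rule("analyst", "salary_db", "export",
         provisions=["log_access", "encrypt_output"]),
]

def check(subject, obj, action):
    """Return the decision and any provisional actions bound to the grant."""
    for r in POLICY:
        if (r.subject, r.obj, r.action) == (subject, obj, action):
            return ("grant-with-provisions", r.provisions)
    return ("deny", [])

print(check("analyst", "salary_db", "export"))
# ('grant-with-provisions', ['log_access', 'encrypt_output'])
```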

17.
18.
We consider a relaxed version of the open shop scheduling problem, the concurrent open shop scheduling problem, in which any two operations of the same job on distinct machines are allowed to be processed concurrently. The completion time of a job is the maximum completion time of its operations. The objective is to schedule the jobs so as to minimize the weighted number of tardy jobs, with 0–1 operation processing times and a common due date d. We show that, even when the weights are identical, the problem has no (1 − ε)ln m-approximation algorithm for any ε > 0 if NP is not a subset of DTIME(n^(log log n)), and has no c·ln m-approximation algorithm for some constant c > 0 if P ≠ NP, where m is the number of machines. This also implies that the problem is strongly NP-hard. We also give a (1 + d)-approximation algorithm for the problem.
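A tiny worked example of the concurrent open shop completion rule in Python: each job's completion time is the maximum completion time of its operations across machines, and a job is tardy when that maximum exceeds the common due date d. The instance is invented for illustration.

```python
# Worked illustration of the concurrent open shop completion rule: a job
# completes when its last operation completes, and it is tardy if that
# happens after the common due date d. Toy instance only.

def tardy_count(schedules, d):
    """schedules: {job: [completion time of its operation on each machine]}."""
    return sum(1 for ops in schedules.values() if max(ops) > d)

jobs = {"J1": [2, 1], "J2": [1, 3], "J3": [3, 2]}  # completion times on 2 machines
print(tardy_count(jobs, d=2))  # J2 and J3 finish after d=2 -> 2 tardy jobs
```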

19.
In this article we develop quantum algorithms for learning and testing juntas, i.e., Boolean functions which depend only on an unknown set of k out of n input variables. Our aim is to develop efficient algorithms: (1) whose sample complexity has no dependence on n, the dimension of the domain the Boolean functions are defined over; (2) with no access to any classical or quantum membership (“black-box”) queries; instead, our algorithms use only classical examples generated uniformly at random and fixed quantum superpositions of such classical examples; (3) which require only a few quantum examples but possibly many classical random examples (which are considered quite “cheap” relative to quantum examples). Our quantum algorithms are based on a subroutine FS which enables sampling according to the Fourier spectrum of f; the FS subroutine was used in earlier work of Bshouty and Jackson on quantum learning. Our results are as follows: (1) We give an algorithm for testing k-juntas to accuracy ε that uses O(k/ε) quantum examples. This improves on the number of examples used by the best known classical algorithm. (2) We establish a lower bound on the number of queries required by any FS-based k-junta testing algorithm. (3) We give an algorithm for learning k-juntas to accuracy ε that uses O(ε^{-1} k log k) quantum examples and O(2^k log(1/ε)) random examples. We show that this learning algorithm is close to optimal by giving a related lower bound. Supported in part by NSF award CCF-0347282, by NSF award CCF-0523664, and by a Sloan Foundation Fellowship.

20.
Our main result is an optimal online algorithm for preemptive scheduling on uniformly related machines with the objective to minimize makespan. The algorithm is deterministic, yet it is optimal even among all randomized algorithms. In addition, it is optimal for any fixed combination of speeds of the machines, and thus our results subsume all the previous work on various special cases. Together with a new lower bound it follows that the overall competitive ratio of this optimal algorithm is between 2.054 and e≈2.718. We also give a complete analysis of the competitive ratio for three machines. T. Ebenlendr and J. Sgall partially supported by Institutional Research Plan No. AV0Z10190503, by Inst. for Theor. Comp. Sci., Prague (project 1M0545 of MŠMT ČR), grant 201/05/0124 of GA ČR, and grant IAA1019401 of GA AV ČR. W. Jawor supported by NSF grants CCF-0208856 and OISE-0340752.
