Found 20 similar documents (search time: 0 ms)
1.
《Information Processing Letters》1987,25(6):389-396
The basic problem of nonpreemptive scheduling of independent tasks on identical processors is studied. The well-known heuristics LPT and Multifit are combined into an algorithm, Mix, which has a better worst-case ratio than either of its components. Exact ratios for the cases of two and three processors are given.
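The LPT half of the combination is simple enough to sketch. This is a generic illustration of the well-known heuristic (not the paper's Mix algorithm): sort tasks by decreasing processing time, then assign each to the currently least-loaded processor.

```python
import heapq

def lpt_schedule(tasks, m):
    """Makespan of the LPT (Longest Processing Time) assignment
    of `tasks` to m identical processors."""
    loads = [0.0] * m            # min-heap of current processor loads
    heapq.heapify(loads)
    for t in sorted(tasks, reverse=True):
        least = heapq.heappop(loads)   # least-loaded processor
        heapq.heappush(loads, least + t)
    return max(loads)
```

For example, `lpt_schedule([5, 4, 3, 3, 3], 2)` yields 10 while the optimum is 9, illustrating why LPT alone has a nontrivial worst-case ratio.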
2.
In this paper we focus our attention on the determination of upper bounds on the l∞ norm of the output of a linear discrete-time dynamic system driven by a step input, in the presence of both persistent, unknown, but l∞-bounded disturbances and memoryless time-varying model uncertainty. For the same class of systems we also analyze the transient behavior of the step response in terms of its overshoot. The problem is solved in a constructive way by determining appropriate invariant sets contained in a given convex region. Finally, we show how to extend these results to continuous-time systems.
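For intuition, the quantity being bounded (the peak of the step response) can be measured numerically. This is a naive simulation sketch with assumed system matrices, not the paper's invariant-set construction, which provides guaranteed a-priori bounds.

```python
def step_response_peak(A, B, C, steps=200):
    """Simulate x[k+1] = A x[k] + B (unit step), y[k] = C x[k],
    starting from x = 0, and return the peak of |y| over the horizon."""
    n = len(A)
    x = [0.0] * n
    peak = 0.0
    for _ in range(steps):
        y = sum(C[i] * x[i] for i in range(n))
        peak = max(peak, abs(y))
        x = [sum(A[i][j] * x[j] for j in range(n)) + B[i] for i in range(n)]
    return peak
```

For the scalar system A = 0.5, B = C = 1 the response rises monotonically to the steady-state value 2, so the measured peak equals the steady state (no overshoot).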
3.
Network Calculus theory aims at evaluating worst-case performance in communication networks. It provides methods to analyze models where the traffic and the services are constrained by minimum and/or maximum envelopes (arrival/service curves). While new applications come forward, a challenging and inescapable issue remains open: achieving tight analyses of networks with aggregate multiplexing. The theory offers efficient methods to bound maximum end-to-end delays and local backlogs. However, as shown in a recent breakthrough paper (Schmitt et al. 2008), those bounds can be arbitrarily far from the exact worst-case values, even in seemingly simple feed-forward networks (two flows and two servers) under blind multiplexing (i.e., no information about the scheduling policies, except FIFO per flow). So far, only a network with three flows and three servers, as well as a tandem network called a sink tree, have been analyzed tightly. We describe the first algorithm that computes the maximum end-to-end delay for a given flow, as well as the maximum backlog at a server, for any feed-forward network under blind multiplexing, with piecewise-affine concave arrival curves and piecewise-affine convex service curves. Its computational complexity may look expensive (possibly super-exponential), but we show that the problem is intrinsically difficult (NP-hard). Fortunately, we show that in some cases, such as tandem networks with cross-traffic interfering along intervals of servers, the complexity becomes polynomial. We also compare our approach with previous ones and discuss the problems left open.
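For a single server, the classical Network Calculus bounds are easy to state (textbook results, not the paper's feed-forward algorithm): with a token-bucket arrival curve α(t) = σ + ρt and a rate-latency service curve β(t) = R(t − T)+, the delay is at most T + σ/R and the backlog at most σ + ρT, provided ρ ≤ R.

```python
def delay_bound(sigma, rho, R, T):
    """Worst-case delay for (sigma, rho)-constrained traffic
    through a rate-latency server R(t - T)+."""
    if rho > R:
        raise ValueError("unstable: long-term arrival rate exceeds service rate")
    return T + sigma / R

def backlog_bound(sigma, rho, R, T):
    """Worst-case backlog for the same single-server setting."""
    if rho > R:
        raise ValueError("unstable: long-term arrival rate exceeds service rate")
    return sigma + rho * T
```

The point of the paper is precisely that composing such per-node bounds across a feed-forward network with aggregate multiplexing is no longer tight, and computing the exact worst case is NP-hard.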
4.
5.
Manufacturing in a world-class environment demands a high level of customer service. The production control department is responsible for achieving this level of service through accurate planning and scheduling, but its ability to do so is limited by the scheduling tools currently available. Production planning is typically performed using MRP with infinite capacity, fixed lead times, and backward scheduling. The work in each MRP time bucket is then sequenced to develop a schedule. The production floor, however, is not a static environment: dynamic events that cannot be scheduled degrade production performance relative to the projected schedule. In this paper the relationship between dynamic events and schedule degradation is examined. Common approaches to production scheduling underestimate the effect of capacity loading on unplanned events and schedule achievability. These dynamic events exhaust capacity previously allocated to production orders. To hedge against such known but unscheduled events, production control can schedule resources to a level below full capacity. The size of the capacity hedge and the duration of the unplanned event dictate the time to recover from the backlog these events create. A performance metric is developed to measure the ability to achieve customer promise dates, and a machine-loading policy is presented that achieves the optimal capacity hedge point maximizing this measure. The results are compared with simulated failures to examine the accuracy of the predicted performance degradation, and they suggest a trade-off of throughput for improved performance to customer promise dates.
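The capacity-hedge trade-off can be illustrated with a deliberately simplified fluid model (assumed here for illustration; this is not the paper's metric or loading policy): during an unplanned outage, work keeps arriving at the planned loading rate, and afterwards the backlog drains only at the surplus rate left by the hedge.

```python
def recovery_time(load, outage):
    """Time to drain the backlog created by an unplanned outage of
    length `outage`, when resources are loaded to fraction `load`
    of capacity (so the capacity hedge is 1 - load).
    Simplified fluid model, assumed for illustration."""
    if not 0 <= load < 1:
        raise ValueError("need a capacity hedge: load must be below 1")
    backlog = load * outage          # work accumulated during the outage
    return backlog / (1.0 - load)    # drained at the surplus rate
```

At 80% loading, a 2-hour outage takes 8 hours to recover from; as the hedge shrinks toward zero, recovery time grows without bound, which is the qualitative effect the paper quantifies.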
6.
The expected and worst-case numbers of nodes required for the quadtree representation of curves or regions are investigated. It is shown that in both cases the numbers are roughly proportional to the number of pixels in the curve or the boundary of the region, but that, under some reasonable assumptions, the worst-case storage requirements are at least 2√2 times the expected storage requirements.
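The node count in question can be computed directly for a small binary image. This recursive sketch of a region quadtree is illustrative and unoptimized (it rescans each block): a block that is uniformly 0 or 1 is a leaf; any mixed block splits into four quadrants.

```python
def quadtree_nodes(img, x=0, y=0, size=None):
    """Count the nodes of the region quadtree of a square binary
    image whose side is a power of two."""
    if size is None:
        size = len(img)
    vals = {img[y + i][x + j] for i in range(size) for j in range(size)}
    if len(vals) == 1 or size == 1:
        return 1                      # uniform block: a single leaf
    h = size // 2
    return 1 + sum(quadtree_nodes(img, x + dx, y + dy, h)
                   for dx in (0, h) for dy in (0, h))
```

A uniform 4×4 image needs one node, while setting a single pixel forces a split at the root and in one quadrant, giving nine nodes.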
7.
Esteban Feuerstein Alberto Marchetti-Spaccamela Frans Schalekamp René Sitters Suzanne van der Ster Leen Stougie Anke van Zuylen 《Journal of Scheduling》2017,20(6):545-555
We consider scheduling problems over scenarios where the goal is to find a single assignment of the jobs to the machines which performs well over all scenarios in an explicitly given set. Each scenario is a subset of jobs that must be executed in that scenario. The two objectives we consider are minimizing the maximum makespan over all scenarios and minimizing the sum of the makespans of all scenarios. For both versions, we give several approximation algorithms and lower bounds on their approximability. We also consider some (easier) special cases. Combinatorial optimization problems under scenarios in general, and scheduling problems under scenarios in particular, have so far received only limited research attention. With this paper, we take a step in this interesting research direction.
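The min-max objective can be made concrete with a brute-force sketch (illustration only; the paper's contribution is approximation algorithms, since this enumeration is exponential): for each assignment of jobs to machines, the cost is the worst makespan over all scenarios, counting only each scenario's own jobs.

```python
from itertools import product

def best_minmax_assignment(jobs, m, scenarios):
    """Minimum over all job->machine assignments of the maximum,
    over scenarios, of that scenario's makespan.  Exponential in
    the number of jobs - tiny instances only."""
    best = float('inf')
    for assign in product(range(m), repeat=len(jobs)):
        worst = 0
        for scen in scenarios:
            loads = [0] * m
            for j in scen:               # only this scenario's jobs run
                loads[assign[j]] += jobs[j]
            worst = max(worst, max(loads))
        best = min(best, worst)
    return best
```

With three unit-weight-2 jobs, two machines, and scenarios {0,1} and {1,2}, any assignment separating job 1 from the other two achieves the optimum of 2.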
8.
In the resource allocation game introduced by Koutsoupias and Papadimitriou, n jobs of different weights are assigned to m identical machines by selfish agents. For this game, several authors have conjectured that the fully mixed Nash equilibrium (FMNE) is the worst possible with respect to the expected maximum load over all machines. If this conjecture holds, computing a worst-case Nash equilibrium for a given instance is trivial, and the Price of Anarchy for that instance can be approximated by applying a known FPRAS to the expected social cost of the FMNE.
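For identical machines, the fully mixed profile has every job choose each machine uniformly at random. For tiny instances its expected maximum load can be computed exactly by enumeration (an illustrative sketch, not the FPRAS mentioned above, which is needed precisely because enumeration over m^n outcomes does not scale):

```python
from itertools import product
from fractions import Fraction

def expected_max_load(weights, m):
    """Exact expected maximum load over m identical machines when
    each job independently picks a machine uniformly at random."""
    total = Fraction(0)
    n = len(weights)
    for choice in product(range(m), repeat=n):
        loads = [0] * m
        for j, mach in enumerate(choice):
            loads[mach] += weights[j]
        total += max(loads)
    return total / m ** n
```

Two unit-weight jobs on two machines collide with probability 1/2, giving an expected maximum load of 3/2 versus the optimum of 1.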
9.
This paper describes SPATS, a new toolset for the development of safety-critical and hard real-time systems. SPATS integrates the analysis traditionally offered by program-proof and static timing-analysis tools through analysis of program basic-path graphs. This paper concentrates on SPATS' facilities for high-level static timing analysis and analysis of worst-case stack usage. The integration of timing analysis and program proof allows timing analysis to be performed where the worst-case execution time (WCET) depends on a program's input data, and allows timing annotations to be formally verified. The approach is developed and illustrated with a worked example. The implementation and experimental application of SPATS to realistic industrial case studies are also described. We conclude that SPATS offers a novel approach to static timing analysis, provides several analyses not seen in previous systems, and can be implemented in a useful and efficient toolset. This work was completed while Rod Chapman was with the Dependable Computing Systems Centre at the University of York.
10.
C. D. Spyropoulos 《Performance Evaluation》1985,5(4):225-234
This paper deals with the performance of the Priority-Driven scheduling algorithm, under various preset ordering rules, on a homogeneous multiprocessor computing model with independent memories. The performance criterion used is the completion time of the schedules. For each ordering rule we prove informative worst-case bounds that generalize those derived earlier by D.G. Kafura and V.Y. Shen.
11.
12.
In this paper, computation and communication performance is evaluated for single and multitransputer arrays. Performance models are proposed for Occam program execution, under Transputer Development System TDS2. The performance features of normalised arithmetic, concurrent floating and integer arithmetic, logarithmic array indexing, and on-chip/off-chip RAM are studied. The startup time, byte transfer rate, asymptotic link bandwidth, and half performance message length are estimated for simultaneous operation of one, two, three, and four links at 10/20 MHz clock in unidirectional/bidirectional modes. The impact of various performance maximisation techniques on execution time is also addressed.
The matrix factorisation algorithms for dense linear systems are chosen as the focus of this study. The implementations include the LUD, Householder, Gauss-Jordan, Choleski, and Givens methods. A floating-point operation count alone is inadequate for estimating computation time; many other factors, such as array indexing, load/store overhead, and loop overhead, play a significant role in transputer performance for dense linear systems. The reduction in array-indexing overheads in multitransputer arrays may result in superlinear speedups.
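The link measurements described above fit the usual two-parameter linear communication model: transfer time is a fixed startup latency plus message length divided by the asymptotic bandwidth, and the half-performance message length (at which half the asymptotic bandwidth is achieved) follows directly. A sketch, with parameter values in the test chosen for readability rather than taken from the transputer measurements:

```python
def comm_time(n, startup, bandwidth):
    """Linear cost model: time to send an n-byte message given a
    startup latency and an asymptotic bandwidth (bytes/unit time)."""
    return startup + n / bandwidth

def half_performance_length(startup, bandwidth):
    """Message length at which the achieved bandwidth is half the
    asymptotic bandwidth, i.e. transfer time is twice the pure
    transmission time: n_1/2 = startup * bandwidth."""
    return startup * bandwidth
```

At the half-performance length the startup term and the transmission term contribute equally to the total time.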
13.
Michael K. H. Fan 《Systems & Control Letters》1992,18(6):409-421
The structured singular value (SSV or μ) is known to be an effective tool for assessing robust performance of linear time-invariant models subject to structured uncertainty. Yet all a single μ analysis provides is a bound β on the uncertainty under which both stability and an H∞ performance level of κ/β are guaranteed, where κ is preselectable. In this paper, we introduce a related quantity ν which provides answers to the following questions: (i) given β, determine the smallest γ with the property that, for any uncertainty bounded by β, an H∞ performance level of γ is guaranteed; (ii) conversely, given γ, determine the largest β with the property that, again, for any uncertainty bounded by β, an H∞ performance level of γ is guaranteed. Properties of this quantity are established and approaches to its computation are investigated. Both unstructured and structured uncertainty are considered.
14.
Raimund Kirner Jens Knoop Adrian Prantl Markus Schordan Albrecht Kadlec 《Software and Systems Modeling》2011,10(3):411-437
Worst-case execution time (WCET) analysis is concerned with computing a bound, as precise as possible, on the maximum time the execution of a program can take. This information is indispensable for developing safety-critical real-time systems, e.g., in the avionics and automotive fields. Starting with the initial works of Chen, Mok, Puschner, Shaw, and others in the mid and late 1980s, WCET analysis turned into a well-established and vibrant field of research and development in academia and industry. The increasing number and diversity of hardware and software platforms and the ongoing rapid technological advancement became drivers for the development of a wide array of distinct methods and tools for WCET analysis. The precision, generality, and efficiency of these methods and tools depend greatly on the expressiveness and usability of the annotation languages used to describe feasible and infeasible program paths. In this article we survey the annotation languages we consider formative for the field. By investigating and comparing their individual strengths and limitations with respect to a set of pivotal criteria, we provide a coherent overview of the state of the art. Identifying open issues, we encourage further research. In this way, our approach is orthogonal and complementary to a recent survey by Wilhelm et al. of WCET analysis methods and tools that have been developed and used in academia and industry.
15.
Madhukar Anand Sebastian Fischmeister Insup Lee Linh T. X. Phan 《Real-Time Systems》2012,48(4):430-462
Distributed real-time systems require bounded communication delays and achieve them by means of a predictable and verifiable control mechanism for the communication medium. Real-time bus arbitration mechanisms control access to the medium and guarantee bounded communication delays. These arbitration mechanisms can be static dispatch tables or dynamic, algorithmic approaches. In this work, we introduce a real-time bus arbitration mechanism called tree schedules that combines the best of both: it can be analyzed like a static dispatch table, yet it provides a degree of flexibility similar to algorithmic approaches. We present tree schedules as a framework to specify real-time traffic and introduce mechanisms to analyze it. We discuss how tree schedules can capture application-specific behavior in a time-triggered, state-based supply model by means of conditional branching built into the model. We present analysis results for this model, specifically aiming at schedulability in fixed- and dynamic-priority schemes and waiting-time analysis. Finally, we demonstrate the advantages of state-based supply over stateless supply by means of two case studies.
16.
《Ergonomics》2012,55(5):455-465
A survey of the literature indicates that video display terminal (VDT) operators tend to have a high incidence of musculoskeletal problems, visual fatigue, and job stress. Although a number of ergonomic improvements in workstation design and the work environment can help reduce these problems, a proper work-rest schedule deserves consideration, since it is easily applicable and inexpensive. The objective of this study was to compare work-rest schedules for VDT operators performing data-entry and mental-arithmetic tasks. An experiment was conducted with 10 male college students as participants. The methodology included a discomfort questionnaire and performance measures. The independent variables were the work-rest schedule (60-minute work/10-minute rest, 30-minute work/5-minute rest, and 15-minute work/micro breaks) and the type of task (data entry or mental arithmetic). The results were analysed using multivariate analysis of variance followed by separate analyses. The 15/micro schedule resulted in significantly lower discomfort in the neck, lower back, and chest than the other schedules for the data-entry task. The 30/5 schedule, followed by the 15/micro schedule, resulted in the lowest eyestrain and blurred vision. Discomfort in the elbow and arm was lowest with the 15/micro schedule for the mental-arithmetic task. The 15/micro schedule resulted in the highest speed, accuracy, and performance for both tasks, compared with the 60/10 and 30/5 schedules. The data-entry task resulted in significantly higher speed, accuracy, and performance, and lower shoulder and chest discomfort, than the mental-arithmetic task.
17.
The effect of work-rest schedules and type of task on the discomfort and performance of VDT users (cited by: 8; self-citations: 0, citations by others: 8)
18.
In this paper, we give an overview of the competition formats and schedules used in 25 European soccer competitions for the 2008–2009 season. We discuss how competitions decide the league champion, qualification for European tournaments, and relegation. Following Griggs and Rosa (Bull. ICA 18:65–68, 1996), we examine the popularity of the so-called canonical schedule. We investigate the presence of a number of properties related to successive home or successive away matches (breaks) and of symmetry between the various parts of the competition. We introduce the concept of ranking-balancedness, which is particularly useful for deciding whether a fair ranking can be made. We also determine how the schedules manage the carry-over effect. We conclude by observing that there is considerable diversity in European soccer schedules and that current schedules leave room for further optimization.
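The canonical schedule examined above is built with the classical circle method: one team is held fixed while the others rotate one position per round. A sketch:

```python
def canonical_schedule(n):
    """Circle-method construction of the canonical single round robin
    for n teams (n even): one team stays fixed, the rest rotate;
    returns n-1 rounds, each a list of n/2 pairings."""
    teams = list(range(n))
    rounds = []
    for _ in range(n - 1):
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)]
        rounds.append(pairs)
        teams.insert(1, teams.pop())   # rotate all teams except teams[0]
    return rounds
```

For four teams this yields three rounds covering all six pairings exactly once; home/away orientation (and hence the break structure the paper studies) is a separate assignment on top of these pairings.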
19.
Tracking performance in surveillance systems depends on two interrelated functions: track updating, the process of incorporating a new measurement into the track to update the system state estimate, and return-to-track correlation, the process of selecting which sensor return, if any, to use for track updating. Because of the presence of a number of targets in the same vicinity and the existence of clutter and false alarms, the correlation function is generally performed imperfectly. Since typical tracking filters such as the Kalman filter do not account for such correlation errors, degraded performance often results, as well as unreliable and optimistic estimates of tracking accuracy. This paper examines the overall tracking process, considering both the correlation and track-update functions and their interaction, and provides a means of optimizing it. General equations are developed for the tracking performance of any arbitrary tracking filter used with a broad class of correlation algorithms in dense multitarget environments. A new reoptimized tracking filter is derived which provides, from among a general class of tracking filters using a priori information on sensor return statistics, optimal performance in such environments, and which reduces to the Kalman filter when environmental effects are eliminated. The new filter is compared parametrically with both the standard Kalman filter and a computationally simpler version of the optimal filter, in terms of tracking accuracy and reliability of the calculated covariance matrix, over a spectrum of environmental conditions. At high densities of sensor returns, the new filter provides considerably improved tracking performance as well as uniquely reliable estimates of this performance.
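The interaction of the two functions can be sketched in one dimension with standard nearest-neighbour gating followed by a scalar Kalman update (a textbook baseline, not the paper's reoptimized filter, which additionally models the correlation-error statistics):

```python
def nn_kalman_update(x, P, returns, R, gate=9.0):
    """One correlation-plus-update cycle for a scalar track.
    x, P: predicted state and variance; returns: candidate sensor
    returns; R: measurement noise variance; gate: chi-square gate
    on the normalised innovation squared."""
    S = P + R                                   # innovation covariance
    best = min(returns, key=lambda z: abs(z - x), default=None)
    if best is None or (best - x) ** 2 / S > gate:
        return x, P                             # no correlation: coast
    K = P / S                                   # Kalman gain
    return x + K * (best - x), (1 - K) * P
```

In dense environments the nearest return inside the gate may belong to another target or to clutter; the standard update above then shrinks P as if the association were certain, which is exactly the optimism the paper's filter corrects.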
20.
Araz Hashemi Ben Fitzpatrick Le Yi Wang 《International journal of systems science》2014,45(7):1563-1578
This paper investigates noise-attenuation problems for systems with unmodelled dynamics and unknown noise characteristics. A unique methodology is introduced that employs signal estimation in one phase, followed by control design for noise rejection. The methodology enjoys certain advantages: a simple control-design process, accommodation of unmodelled dynamics, and non-conservative noise-rejection performance. Under mild assumptions on the unmodelled dynamics, we first derive robust performance bounds on noise attenuation in the absence of noise-estimation errors. More general results are then presented for systems subject to both stochastic signal-estimation errors and unmodelled dynamics. Examples are presented to demonstrate our findings.