Similar Documents
 20 similar documents found (search took 31 ms)
1.
向军  李国徽  杨兵 《计算机应用》2008,28(7):1709-1712
Mobile real-time database services are becoming increasingly widespread, but unpredictable system load and limited resources often cause transactions to restart or abort, bringing losses or even disasters to the system. Traditional real-time scheduling algorithms based on worst-case execution time can no longer meet the performance requirements, so a new algorithm combining imprecise computation and feedback control is proposed. Taking into account the relationships among updated data objects together with temporal and value-domain validity, new performance metrics are proposed to guarantee system performance and quality of service. Simulation experiments show that the algorithm can guarantee the specified quality-of-service requirements in terms of both steady-state and transient performance.

2.
A new dynamic scheduling framework is designed by combining simple feedback control with policies such as task admission/regression and reachability/abort. On this basis, a feedback-control-based hybrid-policy scheduling algorithm is proposed that integrates three task characteristics, namely deadline, criticality, and worst-case execution time, and that can also be extended to combinations of other task characteristics. The algorithm's performance is analyzed in terms of deadline miss ratio, average criticality of missed tasks, and effective CPU utilization. Experimental results show that, under mixed task sets and dynamic workloads, the algorithm performs better than the Earliest Deadline First and Highest Value First algorithms.
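The abstract does not give the exact priority formula. A minimal sketch of how a hybrid policy might fold deadline, criticality, and worst-case execution time into one priority value with a feedback term is shown below; the weighting scheme and all names are illustrative assumptions, not the authors' method.

    import time

    def hybrid_priority(deadline, criticality, wcet, miss_ratio_error, weights=(1.0, 1.0, 0.5)):
        # Illustrative composite priority: a smaller value means a higher priority.
        #   deadline          absolute deadline (seconds since the epoch)
        #   criticality       larger means more important
        #   wcet              worst-case execution time in seconds
        #   miss_ratio_error  measured deadline miss ratio minus its target,
        #                     supplied by the feedback controller
        slack = deadline - time.time() - wcet   # time to spare before the deadline
        # Feedback: when too many deadlines are being missed, shrink the slack term
        # so that criticality dominates the ordering.
        gain = 1.0 / (1.0 + max(0.0, miss_ratio_error))
        return weights[0] * slack * gain - weights[1] * criticality + weights[2] * wcet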

3.
In fault-tolerant real-time systems, schedulability analysis is an important means of ensuring that real-time tasks complete within their time limits. This paper analyzes the schedulability problem under a bursty fault model and, to address the shortcomings of existing strategies under this fault model, designs a priority assignment strategy and, based on the properties of the strategy, implements a search algorithm for the fault-tolerant priority transition factor. In-depth analysis and experiments show that this strategy can effectively improve the fault-tolerance capability of the system.

4.
Goossens  J.  Devillers  R. 《Real-Time Systems》1997,13(2):107-126
In this paper, we study the problem of scheduling hard real-time periodic tasks with static priority pre-emptive algorithms. We consider tasks which are characterized by a period, a hard deadline, a computation time and an offset (the time of the first request), where the offsets may be chosen by the scheduling algorithm, hence the denomination offset free systems. We study the rate monotonic and the deadline monotonic priority assignments for this kind of system and we compare the offset free systems and the asynchronous systems in terms of priority assignment. Hence, we show that the rate and the deadline monotonic priority assignments are not optimal for offset free systems.
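As background for the priority assignments discussed above, here is a minimal sketch of static rate-monotonic and deadline-monotonic priority assignment (standard textbook definitions, not the paper's optimality analysis; the task tuple layout is an assumption):

    from collections import namedtuple

    # Task model assumed here: period, relative deadline, worst-case execution time, offset.
    Task = namedtuple("Task", "name period deadline wcet offset")

    def rate_monotonic_order(tasks):
        # Rate monotonic: the shorter the period, the higher the priority.
        return sorted(tasks, key=lambda t: t.period)

    def deadline_monotonic_order(tasks):
        # Deadline monotonic: the shorter the relative deadline, the higher the priority.
        return sorted(tasks, key=lambda t: t.deadline)

In an offset free system the offsets are left free for the scheduler to choose; the two orderings above ignore them, which is precisely why they need not be optimal in that setting.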

5.
Rate monotonic and deadline monotonic scheduling are commonly used for periodic real-time task systems. This paper discusses a feasibility decision for a given real-time task system when the system is scheduled by rate monotonic and deadline monotonic scheduling. The time complexity of existing feasibility decision algorithms depends on both the number of tasks and maximum periods or deadlines when the periods and deadlines are integers. This paper presents a new necessary and sufficient condition for a given task system to be feasible and proposes a new feasibility decision algorithm based on that condition. The time complexity of this algorithm depends solely on the number of tasks. This condition can also be applied as a sufficient condition for a task system using priority inheritance protocols to be feasible with rate monotonic and deadline monotonic scheduling.
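The paper's own condition, whose cost depends only on the number of tasks, is not reproduced in the abstract. For orientation, a sketch of the classical response-time feasibility test for fixed-priority (rate/deadline monotonic) scheduling, whose cost does depend on the periods as described above, is:

    import math

    def feasible_fixed_priority(tasks):
        # Classical response-time analysis (not the paper's new condition).
        # tasks: list of (wcet, deadline, period), sorted from highest to lowest priority.
        # Returns True if every task's worst-case response time meets its deadline.
        for i, (c_i, d_i, _) in enumerate(tasks):
            r = c_i
            while True:
                interference = sum(math.ceil(r / p_j) * c_j for c_j, _, p_j in tasks[:i])
                r_next = c_i + interference
                if r_next == r:
                    break
                r = r_next
                if r > d_i:
                    return False
            if r > d_i:
                return False
        return True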

6.
Dynamic distributed real-time applications run on clusters with varying execution times, so re-allocation of resources is critical to meeting the applications' deadlines. In this paper we present two adaptive resource management techniques for dynamic real-time applications, employing the prediction of the responses of real-time tasks that operate in a time-sharing environment and run-time analysis of scheduling policies. Prediction of response time for resource reallocation is accomplished by historical profiling of applications' resource usage to estimate resource requirements on the target machine, and a probabilistic approach is applied for calculating the queuing delay that a process will experience on distributed hosts. Results show that, compared to statistical and worst-case approaches, our technique uses system resources more efficiently.

7.
Grid computing has emerged as a new field, distinguished from conventional distributed computing. It focuses on large-scale resource sharing, innovative applications and, in some cases, high-performance orientation. The Grid serves as a comprehensive and complete system for organizations by which the maximum utilization of resources is achieved. Load balancing is a process which involves resource management and an effective load distribution among the resources; it is therefore considered very important in Grid systems. For a Grid, a dynamic, distributed load-balancing scheme provides deadline control for tasks. Because of deadline failures, developing, deploying, and executing long-running applications over the Grid remains a challenge, so deadline failure recovery is an essential factor for Grid computing. In this paper, we propose a dynamic distributed load-balancing technique called “Enhanced GridSim with Load balancing based on Deadline Failure Recovery” (EGDFR) for computational Grids with heterogeneous resources. The proposed algorithm EGDFR is an improved version of the existing EGDC in which we perform load balancing by providing a scheduling system which includes a mechanism for recovery from deadline failure of the Gridlets. Extensive simulation experiments are conducted to quantify the performance of the proposed load-balancing strategy on the GridSim platform. Experiments have shown that the proposed system can considerably improve Grid performance in terms of total execution time, percentage gain in execution time, average response time, resubmission time and throughput. The proposed load-balancing technique gives 7 % better performance than EGDC in the case of a constant number of resources, whereas in the case of a constant number of Gridlets, it gives 11 % better performance than EGDC.

8.
Real-time systems require that tasks obtain their results before their deadlines even in the worst case; exceeding a deadline is also regarded as erroneous behavior, so improving schedulability analysis and raising the schedulability of task sets is especially important. Unified scheduling combines the advantages of fixed-priority scheduling, prevents unnecessary preemption, reduces resource overhead, and can improve the schedulability of task sets; however, its schedulability analysis is too coarse, which degrades the results of worst-case response time analysis and lowers the schedulability of the task set. To address this problem, this paper builds on unified scheduling, increases the number of execution phases per task, and rebuilds the task model; it then proposes an algorithm that improves the schedulability of the task set by assigning task preemption thresholds, adjusting the preemption thresholds and lengths of the execution phases, and optimizing the blocking each task can tolerate. Finally, experiments show that, compared with unified scheduling and other algorithms, the proposed scheduling algorithm effectively improves the schedulability of the task set.

9.
Scheduling is a typical technique used to distribute the load in multiprocessor systems. Usually, the manager (dispatcher or operating system) schedules the tasks so that the average finish time is minimized. Constraints related to the characteristics of the load such as precedence relations, deadlines, etc. must be taken into consideration. With ever increasing applications of a new paradigm of divisible tasks in image processing and parallel processing, one must concentrate on the characteristics of the system such as processor speed, link speed, and processor interconnection topology when distributing the load. By exploiting queuing theory, we managed to find different bounds on the arrival rate (load) as a function of link speed, processor speed and the size of tasks. A flow control mechanism for different multiprocessor systems with different topologies is embedded in our analysis. Moreover, our model indicates to the design engineers, depending on the traffic intensity, which element(s) of a parallel system has to be upgraded or replaced to meet the new load. This, of course, has to be justified by cost considerations.
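The abstract does not state the bounds themselves. A minimal illustration, assuming a simple two-stage queuing model in which each task is first transmitted over a link and then processed, and requiring utilization below one in each stage, is:

    def max_arrival_rate(link_speed_bps, proc_speed_ops, task_size_bits, task_work_ops):
        # Illustrative stability bound: the arrival rate must stay below the service
        # rate of the slowest stage (transmission or processing), i.e. rho < 1 in each
        # queue. The two-stage model and all parameters are assumptions, not the
        # paper's model.
        transmission_rate = link_speed_bps / task_size_bits   # tasks/s the link can carry
        processing_rate = proc_speed_ops / task_work_ops      # tasks/s a processor can finish
        return min(transmission_rate, processing_rate)

Under this sketch, upgrading the element that owns the smaller of the two rates is what raises the admissible load, which mirrors the design guidance mentioned in the abstract.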

10.
We study the performance of concurrency control algorithms in maintaining temporal consistency of shared data in hard real-time systems. In our model, a hard real-time system consists of periodic tasks which are either write-only, read-only or update transactions. Transactions may share data. Data objects are temporally inconsistent when their ages and dispersions are greater than the absolute and relative thresholds allowed by the application. Real-time transactions must read temporally consistent data in order to deliver correct results. Based on this model, we have evaluated the performance of two well-known classes of concurrency control algorithms that handle multiversion data: the two-phase locking and the optimistic algorithms, as well as the rate monotonic and earliest deadline first scheduling algorithms. The effects of using the priority inheritance and stack-based protocols with lock-based concurrency control are also studied.
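A minimal sketch of the temporal-consistency check described above, assuming that an object's age is measured from its last update and that dispersion is the spread of update times across the objects read together (field names and units are illustrative):

    def temporally_consistent(objects, now, absolute_threshold, relative_threshold):
        # objects: non-empty list of (value, update_timestamp) pairs read by one transaction.
        # The data set is consistent when every object's age is within the absolute
        # threshold and the dispersion of update times is within the relative threshold.
        ages = [now - ts for _, ts in objects]
        timestamps = [ts for _, ts in objects]
        dispersion = max(timestamps) - min(timestamps)
        return max(ages) <= absolute_threshold and dispersion <= relative_threshold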

11.
12.
In distributed real-time systems, if a task misses its deadline, an exception can be thrown. In this context, end-to-end deadline missing prediction mechanisms can reduce exception throwing because they define an estimated response time. With this estimated response time the system can carry out remedial actions in time to avoid the throw of an exception. In this work, we propose the Available Slack (AS) deadline missing prediction mechanism, which defines an estimated response time for distributed tasks using information such as computation time and end-to-end deadline. We show how AS behaves in simulations with different system workloads like pipelines, balanced and non-balanced loads.

13.
Recently, a growing number of scientific applications have been migrated into the cloud. To deal with the problems brought by clouds, more and more researchers have started to consider multiple optimization goals in workflow scheduling. However, previous works ignore some details which are challenging but essential. Most existing multi-objective workflow scheduling algorithms overlook weight selection, which may result in quality degradation of the solutions. Besides, we find that the famous partial critical path (PCP) strategy, which has been widely used to meet the deadline constraint, cannot accurately reflect the situation of each time step. Workflow scheduling is an NP-hard problem, so self-optimizing algorithms are more suitable to solve it. In this paper, the aim is to solve a workflow scheduling problem with a deadline constraint. We design a deadline-constrained scientific workflow scheduling algorithm based on multi-objective reinforcement learning (RL) called DCMORL. DCMORL uses the Chebyshev scalarization function to scalarize its Q-values. This method is good at choosing weights for objectives. We propose an improved version of the PCP strategy called MPCP. The sub-deadlines in MPCP are regularly updated during the scheduling phase, so they can accurately reflect the situation of each time step. The optimization objectives in this paper include minimizing the execution cost and energy consumption within a given deadline. Finally, we use four scientific workflows to compare DCMORL and several representative scheduling algorithms. The results indicate that DCMORL outperforms the above algorithms. As far as we know, this is the first time RL has been applied to a deadline-constrained workflow scheduling problem.
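For reference, the Chebyshev scalarization commonly used to collapse a vector of per-objective Q-values into a single score has the form sketched below; the reference point and weights are illustrative, and this is the generic formulation rather than necessarily the exact variant used in DCMORL.

    def chebyshev_scalarize(q_values, weights, reference_point):
        # q_values, weights, reference_point: sequences of equal length, one entry per
        # objective (e.g. execution cost and energy consumption). A smaller scalarized
        # score is better, so the greedy action is the one minimizing this value.
        return max(w * abs(q - z) for q, w, z in zip(q_values, weights, reference_point))

    # Illustrative greedy selection over a dict {action: q_vector}:
    # best = min(q_table.items(),
    #            key=lambda kv: chebyshev_scalarize(kv[1], weights, z_star))[0]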

14.
We address lateness and tardiness scheduling policies for real-time systems. It is well-known that preemptive Earliest Deadline First (EDF) minimizes the worst lateness and tardiness of a finite set of tasks with known arrival times, service times and deadlines to the finishing time, on a uniprocessor. We extend this result significantly, to include an arbitrary (possibly infinite) number of tasks with arbitrary arrival and service times, and deadlines, and to show that EDF
  1. minimizes the lateness and tardiness of the tasks that are in the system at an arbitrary time.
  2. minimizes lateness within a busy interval, for an arbitrary, possibly infinite number of tasks.
  3. maximizes the time to the first missed deadline, and
  4. minimizes the length of time during which there is at least one missed deadline in the system.
We also show that a combination of EDF and the Shortest Remaining Processing Time First (SRPTF) policy minimizes maximum latenesses in a vector sense (as defined in the paper) and minimizes the number of tasks that miss their deadline at the time the first missed deadline occurs. For non-preemptive non-idling policies, we establish new, similar results in a stochastic sense. We attempt to extend our findings to multiprocessor systems. We demonstrate that under the assumptions of arbitrary distributions of arrival times, service times and deadlines, our results no longer hold true. When a further assumption of unit-length service times and integer-valued arrival times is introduced, we are able to re-establish the results in the multiprocessor case.
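A minimal sketch of the quantities involved and of preemptive EDF dispatching (the task representation is an assumption; the optimality results above concern the policy itself, not this particular code):

    def lateness(finish_time, deadline):
        return finish_time - deadline            # may be negative if the task finishes early

    def tardiness(finish_time, deadline):
        return max(0.0, finish_time - deadline)  # lateness clipped at zero

    def edf_pick(ready_tasks):
        # Preemptive EDF: at every scheduling point, run the ready task with the
        # earliest absolute deadline. ready_tasks: list of (task_id, absolute_deadline).
        return min(ready_tasks, key=lambda t: t[1])[0] if ready_tasks else None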

15.
Recently there has been an increased demand for imaging systems in support of high-speed digital printing. The required increase in performance in support of such systems can be accomplished through an effective parallel execution of image processing applications in a distributed cluster computing environment. The output of the system must be presented to a raster based display at regular intervals, effectively establishing a hard deadline for the production of each image. Failure to complete a rasterization task before its deadline will result in an interruption of service that is unacceptable. The goal of this research was to derive a metric for measuring robustness in this environment and to design a resource allocation heuristic capable of completing each rasterization task before its assigned deadline, thus preventing any service interruptions. We present a mathematical model of such a cluster based raster imaging system, derive a robustness metric for evaluating heuristics in this environment, and demonstrate using the metric to make resource allocation decisions. The heuristics are evaluated within a simulation of the studied raster imaging system. We clearly demonstrate the effectiveness of the heuristics by comparing their results with the results of a resource allocation heuristic commonly used in this type of system.

16.
The demand for real-time e-commerce data services has been increasing recently. In many e-commerce applications, it is essential to process user requests within their deadlines, i.e., before the market status changes, using fresh data reflecting the current market status. However, current data services are poor at processing user requests in a timely manner using fresh data. To address this problem, we present a differentiated real-time data service framework for e-commerce applications. User requests are classified into several service classes according to their importance, and they receive differentiated real-time performance guarantees in terms of deadline miss ratio. At the same time, a certain data freshness is guaranteed for all transactions that commit within their deadlines. A feedback-based approach is applied to differentiate the deadline miss ratio among service classes. Admission control and adaptable update schemes are applied to manage potential overload. A simulation study, which reflects the e-commerce data semantics, shows that our approach can achieve a significant performance improvement compared to baseline approaches. Our approach can support the specified per-class deadline miss ratios while maintaining the required data freshness even in the presence of unpredictable workloads and data access patterns, whereas baseline approaches fail.

17.
A disk cache is typically used in file systems to reduce average access time for data storage and retrieval. The 'periodic update' write policy, widely used in existing computer systems, is one in which dirty cache blocks are written to a disk on a periodic basis. The average response time for disk read requests when the periodic update write policy is used is determined. Read and write load, cache-hit ratio, and the disk scheduler's ability to reduce service time under load are incorporated in the analysis, leading to design criteria that can be used to decide among competing cache write policies. The main conclusion is that the bulk arrivals generated by the periodic update policy cause a traffic jam effect which results in severely degraded service. Effective use of the disk cache and disk scheduling can alleviate this problem, but only under a narrow range of operating conditions. Based on this conclusion, alternate write policies that retain the periodic update policy's advantages and provide uniformly better service are proposed.
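A minimal sketch of the periodic update policy described above, assuming a cache keyed by block number with a dirty flag; the class name, flush interval, and lack of locking are illustrative simplifications:

    import threading

    class PeriodicUpdateCache:
        # Dirty blocks are written back in bulk every `interval` seconds; these bulk
        # arrivals are the behavior identified above as the cause of the traffic-jam effect.
        def __init__(self, disk_write, interval=30.0):
            self.blocks = {}              # block_no -> (data, dirty)
            self.disk_write = disk_write  # callable(block_no, data)
            self.interval = interval
            self._schedule_flush()

        def write(self, block_no, data):
            self.blocks[block_no] = (data, True)   # write into the cache, mark dirty

        def _flush(self):
            # Write every dirty block back to disk, then mark it clean.
            for block_no, (data, dirty) in list(self.blocks.items()):
                if dirty:
                    self.disk_write(block_no, data)
                    self.blocks[block_no] = (data, False)
            self._schedule_flush()

        def _schedule_flush(self):
            t = threading.Timer(self.interval, self._flush)
            t.daemon = True
            t.start()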

18.
Nowadays, the Internet of Things has become an inevitable aspect of humans' IT-based life. A huge number of geo-distributed IoT-enabled devices such as smart phones, smart cameras, health care systems, vehicles, etc. are connected to the Internet and manage users' applications. IoT applications are generally time sensitive, so handing them off to the Cloud and waiting for the response may violate their required deadline, due to the distance between the user device and the centralized Cloud data center and the consequent increase in network latency. The Fog environment, as an intermediate layer between the Cloud and IoT devices, brings a smaller scale of Cloud capabilities closer to the user's location. Processing real-time applications in the Fog layer helps more deadlines to be met. Although Fog computing enhances quality-of-service parameters, the limited resources and power of Fog nodes are a challenge in processing applications. Furthermore, network latency is still an issue for communication between applications' services and between the user device and the Fog node, which seriously threatens the deadline condition. With regard to these points, this paper proposes a 3-partite deadline-aware placement optimization model for applications' services in the Fog environment which simultaneously optimizes total power consumption, total resource wastage, and total network latency. The proposed model prioritizes applications into 3 levels based on their associated deadlines, and the model is then solved using a parallel combination of first fit decreasing and a genetic algorithm. Simulation results indicate the superiority of the proposed approach over counterpart algorithms in terms of reducing power consumption, resource wastage, network latency, and service rejection rate.
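For reference, a minimal sketch of the first fit decreasing heuristic mentioned above, applied to placing services on Fog nodes along a single capacity dimension; the multi-resource details, deadline levels, and the genetic-algorithm stage of the paper are not reproduced, and all names are illustrative:

    def first_fit_decreasing(service_demands, node_capacities):
        # service_demands: {service: demand}; node_capacities: {node: capacity}.
        # Services are considered in decreasing order of demand and each is placed on
        # the first node with enough remaining capacity. Returns {service: node};
        # unplaced services are mapped to None (candidates for rejection).
        remaining = dict(node_capacities)
        placement = {}
        for service, demand in sorted(service_demands.items(), key=lambda kv: -kv[1]):
            placement[service] = None
            for node, free in remaining.items():
                if demand <= free:
                    placement[service] = node
                    remaining[node] = free - demand
                    break
        return placement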

19.
Timing Analysis for Data and Wrap-Around Fill Caches
White  Randall T.  Mueller  Frank  Healy  Chris  Whalley  David  Harmon  Marion 《Real-Time Systems》1999,17(2-3):209-233
The contributions of this paper are twofold. First, an automatic tool-based approach is described to bound worst-case data cache performance. The approach works on fully optimized code, performs the analysis over the entire control flow of a program, detects and exploits both spatial and temporal locality within data references, and produces results typically within a few seconds. Results obtained by running the system on representative programs are presented and indicate that timing analysis of data cache behavior usually results in significantly tighter worst-case performance predictions. Second, a method to deal with realistic cache filling approaches, namely wrap-around-filling for cache misses, is presented as an extension to pipeline analysis. Results indicate that worst-case timing predictions become significantly tighter when wrap-around-fill analysis is performed. Overall, the contribution of this paper is a comprehensive report on methods and results of worst-case timing analysis for data caches and wrap-around caches. The approach taken is unique and provides a considerable step toward realistic worst-case execution time prediction of contemporary architectures and its use in schedulability analysis for hard real-time systems.

20.
On-line service scheduling
This paper is concerned with a scheduling problem that occurs in service systems where customers are classified as ‘ordinary’ and ‘special’. Ordinary customers can be served on any service facility, while special customers can be served only on the flexible service facilities. Customers arrive dynamically over time and their needs become known upon arrival. We assume any service, once started, will be carried out to its completion. In this paper, we study the worst-case performance of service policies used in practice. In particular, we evaluate three classes of service policies: policies with priority, policies without priority, and their combinations. We obtain tight worst-case performance bounds for all service policies considered.
