Similar documents
20 similar documents found (search time: 531 ms)
1.
Computational grids hold great promise for utilizing geographically separated heterogeneous resources to solve large-scale complex problems. However, they face a number of major technical hurdles, including distributed resource management and effective job scheduling. This work focuses on online scheduling of real-time applications in distributed environments such as grids. Specifically, we are interested in applications composed of several independent tasks, each with a prespecified deadline. Our goal is to schedule such applications within an optimal overall time while respecting the specified deadlines. To achieve this, resource performance is predicted through workload modeling with the help of queuing techniques. A mathematical neural model is then used to schedule the subtasks of the application. The main contributions of this work are to incorporate the impatience factor as well as resource faults into the performance modeling of non-dedicated distributed systems, and to present an efficient and fast parallel scheduling algorithm for time-constrained applications on heterogeneous resources. The proposed model is suitable for implementation on parallel machines and runs in O(1) time. It was implemented on the GridSim toolkit and evaluated under various conditions and parameter settings. Simulation results show that in approximately 87.8% of cases, our model schedules the tasks such that all constraints are satisfied.
Mohammad Kazem Akbari
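The abstract leaves the scheduling formulation at a high level. As a rough illustration of deadline-constrained dispatch onto heterogeneous resources (not the paper's queuing/neural model; task sizes, deadlines, and resource speeds below are invented), a greedy earliest-deadline-first assignment might look like this:

```python
# Hypothetical sketch: greedy deadline-aware dispatch of independent tasks
# onto heterogeneous resources; not the paper's neural/queuing model.

def dispatch(tasks, resources):
    """tasks: list of (task_id, work_units, deadline); resources: name -> speed.
    Returns (schedule, missed), where schedule maps task_id -> (resource, finish)."""
    free_at = {r: 0.0 for r in resources}          # when each resource becomes idle
    schedule, missed = {}, []
    # Earliest-deadline-first ordering of the independent tasks.
    for tid, work, deadline in sorted(tasks, key=lambda t: t[2]):
        # Pick the resource that yields the earliest predicted finish time.
        best = min(resources, key=lambda r: free_at[r] + work / resources[r])
        finish = free_at[best] + work / resources[best]
        free_at[best] = finish
        schedule[tid] = (best, finish)
        if finish > deadline:
            missed.append(tid)
    return schedule, missed

tasks = [("t1", 8.0, 6.0), ("t2", 4.0, 5.0), ("t3", 6.0, 9.0)]
resources = {"r1": 2.0, "r2": 1.0}                 # processing speeds (work units/s)
print(dispatch(tasks, resources))                  # no deadline misses in this toy case
```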

2.
A notable requirement of heterogeneous parallel and distributed computing systems is to maximize their processing performance and the agreed-upon QoS. Much work in this field has aimed to optimize system performance by improving particular metrics such as reliability, robustness, and security. However, most of it assumes that systems run without interruption and seldom considers a system's intrinsic characteristics, such as failure rate, repair rate, and lifetime. In this paper, we study how to achieve high availability for heterogeneous distributed computational systems based on residual-lifetime analysis, taking these essential features into account. First, we provide an availability model that accounts for a system's expected residual lifetime. Second, we formulate an objective function over this model and develop a heuristic scheduling algorithm that maximizes availability under a makespan constraint. Finally, we demonstrate these advantages through extensive simulation experiments.
Xin Jiang
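For intuition only, a minimal sketch of how availability could drive resource selection, assuming a simple exponential failure/repair model with steady-state availability A = mu / (lam + mu); the node parameters are invented and this is not the paper's availability model:

```python
# Minimal sketch (not the paper's model): steady-state availability of a node
# with failure rate lam and repair rate mu is A = mu / (lam + mu).  Among the
# nodes that can finish a job within the makespan constraint, pick the one
# with the highest availability.

def availability(lam, mu):
    return mu / (lam + mu)

def pick_node(job_len, makespan, nodes):
    """nodes: list of dicts with 'name', 'speed', 'lam', 'mu' (all hypothetical)."""
    feasible = [n for n in nodes if job_len / n["speed"] <= makespan]
    if not feasible:
        return None
    return max(feasible, key=lambda n: availability(n["lam"], n["mu"]))

nodes = [
    {"name": "a", "speed": 2.0, "lam": 0.02, "mu": 0.5},
    {"name": "b", "speed": 4.0, "lam": 0.10, "mu": 0.4},
]
print(pick_node(job_len=10.0, makespan=4.0, nodes=nodes))  # node "b" is the only feasible one
```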

3.
Connecting the family with awareness systems
Awareness systems have attracted significant research interest for their potential to support interpersonal relationships. Investigations of awareness systems for the domestic environment have suggested that such systems can help individuals stay in touch with close friends or family and provide affective benefits to their users. Our research provides empirical evidence to refine and substantiate such suggestions. We report our experience with designing and evaluating the ASTRA awareness system for connecting households and mobile family members. We introduce the concept of connectedness and its measurement through the Affective Benefits and Costs of communication questionnaire (ABC-Q). We report results that testify to the benefits of sharing experiences at the moment they happen without interrupting potential receivers. Finally, we document the role that lightweight, picture-based communication can play in the range of communication media available.
Natalia Romero (corresponding author), Panos Markopoulos, Joy van Baren, Boris de Ruyter, Wijnand IJsselsteijn, Babak Farshchian

4.
In scheduling hard-real-time systems, the primary objective is to meet all deadlines. We study the scheduling of such systems with the secondary objective of minimizing the duration of time for which the system locks each shared resource. We abstract out this objective into the resource hold time (rht)—the largest length of time that may elapse between the instant that a system locks a resource and the instant that it subsequently releases the resource, and study properties of the rht. We present an algorithm for computing resource hold times for every resource in a task system that is scheduled using Earliest Deadline First scheduling, with resource access arbitrated using the Stack Resource Policy. We also present and prove the correctness of algorithms for decreasing these rht’s without changing the semantics of the application or compromising application feasibility.
Sanjoy Baruah (corresponding author)
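As a heavily simplified illustration (not the authors' exact procedure), one common way to bound how long a resource is held under EDF with the Stack Resource Policy is a fixed-point iteration over the critical-section length plus preemptions by tasks whose preemption level exceeds the resource ceiling; all parameters below are invented:

```python
# Simplified, hypothetical bound on a resource hold time (rht): while a job
# holds resource R under the Stack Resource Policy, it can only be preempted
# by tasks with preemption level above R's ceiling; iterate until the window
# stops growing.  This illustrates the idea, not the paper's exact algorithm.
import math

def rht_bound(cs_len, preempting_tasks, limit=1000.0):
    """cs_len: longest critical section on R; preempting_tasks: (wcet, period)
    of tasks that may preempt inside the critical section."""
    w = cs_len
    while True:
        nxt = cs_len + sum(math.ceil(w / t) * c for c, t in preempting_tasks)
        if nxt == w:
            return w
        if nxt > limit:
            return math.inf          # no fixed point below the limit
        w = nxt

print(rht_bound(cs_len=2.0, preempting_tasks=[(1.0, 10.0), (0.5, 7.0)]))  # 3.5
```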

5.
Heterogeneous parallel and distributed computing systems may operate in an environment where certain system performance features degrade due to unpredictable circumstances. Robustness can be defined as the degree to which a system can function correctly in the presence of parameter values different from those assumed. This work develops a model for quantifying robustness in a dynamic heterogeneous computing environment where task execution time estimates are known to contain errors. This mathematical expression of robustness is then applied to two different problem environments. Several heuristic solutions to both problem variations are presented that utilize this expression of robustness to influence mapping decisions.
Bin Ye
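A minimal sketch of a makespan robustness radius in the spirit of such metrics (assuming execution-time errors are measured in the l2 norm; not necessarily this paper's exact formulation, and the allocation below is invented):

```python
# Minimal sketch of a robustness radius for makespan (assuming execution-time
# errors are measured in the l2 norm); not necessarily this paper's exact metric.
import math

def robustness_radius(assignment, beta):
    """assignment: machine -> list of estimated task execution times.
    beta: largest makespan still considered acceptable.
    Returns the smallest l2-norm perturbation of the estimates on any single
    machine that can push its finish time past beta."""
    radius = math.inf
    for machine, times in assignment.items():
        finish = sum(times)
        if finish > beta:
            return 0.0                      # constraint already violated
        radius = min(radius, (beta - finish) / math.sqrt(len(times)))
    return radius

alloc = {"m1": [3.0, 2.0, 4.0], "m2": [5.0, 1.0]}
print(robustness_radius(alloc, beta=12.0))  # min((12-9)/sqrt(3), (12-6)/sqrt(2)) ~ 1.73
```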

6.
Constructing deliberative real-time AI systems is challenging because of the high execution-time variance of AI algorithms and the need for worst-case bounds to give hard real-time guarantees, which often results in poor use of system resources. Using a motivating case study, we address the general problem of maximizing resource usage. We approach the issue with a hybrid task model for anytime algorithms, supported by recent advances in fixed-priority scheduling for imprecise computation. In particular, with a novel scheduling scheme based on Dual Priority Scheduling, hard tasks are guaranteed by schedulability analysis yet scheduled in favor of optional and anytime components, which execute whenever possible to enhance system utility. Simulation studies of the case study show that the scheduling scheme performs satisfactorily. We also show how aperiodic tasks can be scheduled effectively within the framework and how tasks can be prioritized by their utilities using an efficient algorithm. Together, these results form a comprehensive package of scheduling model, analysis, and algorithms based on fixed-priority scheduling, providing a versatile platform on which real-time AI applications can be suitably supported.
Alan Burns
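A sketch of the standard dual-priority idea may help: each hard task starts in a lower priority band and is promoted after a delay equal to its deadline minus its worst-case response time in the upper band, with response times obtained by classic fixed-priority response-time analysis. The task parameters are invented, and this is an illustration rather than the paper's specific scheme:

```python
# Sketch of dual-priority promotion delays for hard periodic tasks (hypothetical
# parameters): each task starts in a lower band and is promoted to the upper
# band D_i - R_i after release, where R_i is its worst-case response time among
# upper-band tasks (classic response-time analysis).  Illustrative only.
import math

def response_time(i, tasks):
    """tasks: list of (C, T, D), indexed by priority (0 = highest)."""
    c, t, d = tasks[i]
    r = c
    while True:
        nxt = c + sum(math.ceil(r / tj) * cj for cj, tj, _ in tasks[:i])
        if nxt == r or nxt > d:
            return nxt
        r = nxt

def promotion_delays(tasks):
    delays = []
    for i, (c, t, d) in enumerate(tasks):
        r = response_time(i, tasks)
        delays.append(None if r > d else d - r)   # None: not schedulable in upper band
    return delays

tasks = [(1.0, 5.0, 5.0), (2.0, 10.0, 10.0), (3.0, 20.0, 20.0)]
print(promotion_delays(tasks))  # [4.0, 7.0, 13.0]
```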

7.
Schedulability analysis of global EDF
The multiprocessor EDF scheduling of sporadic task systems is studied. A new sufficient schedulability test is presented and proved correct. It is shown that this test generalizes the previously known exact uniprocessor EDF schedulability test, and that it offers non-trivial quantitative guarantees (including a resource augmentation bound) on multiprocessors.
Sanjoy Baruah
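The abstract does not reproduce the test itself. For background, one widely cited sufficient condition for global EDF on m identical processors is the density-based test (total density at most m(1 - delta_max) + delta_max); the sketch below checks that condition and is not the new test proved in the paper:

```python
# Illustrative sufficient test for global EDF on m identical processors (the
# well-known density-based condition sum(delta) <= m*(1 - delta_max) + delta_max);
# background only, not the new test proved in the paper.

def density_test(tasks, m):
    """tasks: list of (C, D, T) for sporadic tasks; m: number of processors."""
    densities = [c / min(d, t) for c, d, t in tasks]
    d_sum, d_max = sum(densities), max(densities)
    return d_sum <= m * (1.0 - d_max) + d_max

tasks = [(1.0, 4.0, 4.0), (2.0, 6.0, 6.0), (3.0, 12.0, 12.0)]
print(density_test(tasks, m=2))   # True: 0.25 + 0.33 + 0.25 <= 2*(1 - 0.33) + 0.33
```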

8.
An important area of Human Reliability Assessment in interactive systems is the ability to understand the causes of human error and to model its occurrence. This paper investigates a new approach to the analysis of task failures based on patterns of operator behaviour, in contrast with more traditional event-based approaches. It considers, as a case study, a formal model of an Air Traffic Control system operator's task which incorporates a simple model of the high-level cognitive processes involved. The cognitive model is formalised in the CSP process algebra. Various patterns of behaviour that could lead to task failure are described using temporal logic. A model-checking technique is then used to verify whether the set of selected behavioural patterns is sound and complete with respect to the definition of task failure. The decomposition is shown to be incomplete and a new behavioural pattern is identified, which appears to have been overlooked in the informal analysis of the problem. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.
Antonio Cerone (corresponding author), Simon Connelly, Peter Lindsay

9.
The European Union co-funded COMUNICAR (communication multimedia unit inside car) project designed and developed an integrated multimedia human–machine interface (HMI) able to manage a wide variety of driver information systems (from entertainment to safety). COMUNICAR proposed an innovative information-provision paradigm in which the on-vehicle HMI tailors the delivery of information in real time according to the actual driving context and the driver's workload. COMUNICAR adopted a user-centred design process involving iterative development based on extensive user tests from the early phases of the project. This approach was particularly useful for defining and improving the layout of the user interface and for specifying the rules that decide the scheduling and modality of the information messages delivered to the driver. This paper introduces the COMUNICAR concept and the user-centred flow of design. A concrete case of user-test-driven, iterative improvement of a system's functionality is then presented. We also briefly describe two software tools that we designed to enhance the development process from a user-centred perspective. Finally, the future evolution of the concept of smart and safe information scheduling is sketched and discussed.
F. Bellotti, A. De Gloria, R. Montanari, D. Morreale

10.
Rapid advances in, and the increasing availability of, Grid technologies have encouraged many businesses and researchers to establish Virtual Organizations (VOs) and use their available desktop resources to solve compute-intensive problems. These VOs, however, operate as disjoint and independent communities with no resource sharing between them. In previous work, we proposed a fully decentralized and reconfigurable Inter-Grid framework for resource sharing among such distributed and autonomous Grid systems (Rao et al., ICCSA, 2006). The central problem in such a collaborating Grid system is resource scheduling, since very little is known about resource availability owing to the distributed and autonomous nature of the underlying Grid entities. In this paper, we propose a probabilistic and adaptive scheduling algorithm that uses system-generated predictions for Inter-Grid resource sharing while keeping the collaborating Grid systems autonomous and independent. We first obtain system-generated job runtime estimates without actually submitting jobs to the target Grid system. This runtime estimate is then used to predict the feasibility of scheduling the job on the target system. Furthermore, the proposed algorithm adapts itself to the actual resource behavior and performance. Simulation results are presented to demonstrate the correctness and accuracy of the proposed algorithm.
Eui-Nam Huh (corresponding author)
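As a hypothetical sketch of feasibility prediction (not the paper's predictor), one could estimate the probability that a job meets its deadline on the target Grid from a system-generated runtime estimate and an estimated queue wait, using a normal approximation; all numbers below are invented:

```python
# Hypothetical sketch: decide whether submitting a job to a remote Grid looks
# feasible, given a runtime estimate (mean, std) and an estimated queue wait,
# via a normal approximation.  Not the paper's predictor.
import math

def normal_cdf(x, mean, std):
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def schedule_feasibility(runtime_mean, runtime_std, queue_wait, deadline):
    """Probability that queue_wait + runtime stays below the deadline."""
    return normal_cdf(deadline - queue_wait, runtime_mean, runtime_std)

p = schedule_feasibility(runtime_mean=120.0, runtime_std=30.0,
                         queue_wait=60.0, deadline=240.0)
print(f"probability of meeting the deadline: {p:.2f}")  # ~0.98
```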

11.
A new parallel normalized exact inverse algorithm is presented for solving sparse symmetric finite element linear systems on symmetric multiprocessor (SMP) systems, based on an antidiagonal-motion ("wave"-like) approach for overcoming data dependencies. The proposed algorithm was implemented using OpenMP directives. Numerical results, including speedups and efficiency, illustrate its performance on a symmetric multiprocessor computer system, where the proposed solution method achieves good speedups.
George A. Gravvanis
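The antidiagonal ("wave") ordering can be illustrated independently of the solver: if an entry (i, j) depends only on its left and upper neighbours, all entries on the same antidiagonal are mutually independent and can be processed in parallel. The sketch below only enumerates that ordering; the paper's implementation uses OpenMP:

```python
# Sketch of the antidiagonal ("wave") ordering used to break data dependencies:
# if entry (i, j) depends only on (i-1, j) and (i, j-1), all entries on the same
# antidiagonal i + j = k are independent and can be computed in parallel.
# The paper's implementation uses OpenMP; this only illustrates the ordering.

def antidiagonals(n, m):
    """Yield the entries of an n x m grid grouped by antidiagonal."""
    for k in range(n + m - 1):
        yield [(i, k - i) for i in range(max(0, k - m + 1), min(n, k + 1))]

for wave, entries in enumerate(antidiagonals(3, 4)):
    print(f"wave {wave}: {entries}")   # each wave could be a parallel region
```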

12.
This paper addresses the scheduling problem in decentralized grid systems, which is to compute a large set of arbitrary tasks so as to optimize system performance while minimizing average system cost. The mainstream solution in the recent literature is to maximize total system throughput by modeling such systems as either a network flow or a tree. However, most such approaches neglect the movements of tasks and load-dependent system costs which, in practice, are crucial to system performance. In this paper, a Service-Oriented Overlay Network (SOON) is presented, in which service nodes encapsulate both computation and communication resources and links are used to track the movements of tasks rather than to describe communication. An analytical Cost-Charge (C2) model, in which both running cost and service charge depend on load, is proposed to describe the problem by incorporating degree-dependent task allocation into a closed queuing network model. Infinitesimal Perturbation Analysis (IPA) is applied to solve C2 theoretically. Following the theoretical analysis, a scalable decentralized scheduler named Liana is proposed (task movements in the proposed system resemble the growth and spread of an evergreen liana, hence the name). The major components of Liana are an autonomous scheduling algorithm and a Degree-Driven Protocol (DDP). Furthermore, trace-based simulations on a test bed distributed widely across the world are used to compare the system performance of Liana with recent approaches. The proposed approach shows promising results: close-to-optimal service utilization is achieved when system cost is taken into account.
Chun-Qing Li

13.
A performance study of multiprocessor task scheduling algorithms
Multiprocessor task scheduling is an important and computationally difficult problem. A large number of algorithms have been proposed, representing various trade-offs between the quality of the solution and the computational complexity and scalability of the algorithm. Previous comparison studies have frequently relied on simplifying assumptions, such as independent tasks, artificially generated problems, or zero communication delay. In this paper, we present a comparison study with realistic assumptions. Our target problems are two well-known problems of linear algebra: LU decomposition and Gauss–Jordan elimination. Both algorithms are naturally parallelizable but have heavy data dependencies. Communication delay is explicitly considered in the comparisons. We consider nine scheduling algorithms that, to the best of our knowledge, are frequently used: min-min, chaining, A*, genetic algorithms, simulated annealing, tabu search, HLFET, ISH, and DSH with task duplication. Based on experimental results, we present a detailed analysis of the scalability, advantages, and disadvantages of each algorithm.
Damla Turgut
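As a pointer to one of the compared heuristics, a minimal min-min implementation for independent tasks is sketched below (the study itself also handles task dependencies and communication delays, which are omitted here; the ETC matrix is invented):

```python
# Minimal min-min sketch for independent tasks (the study itself also handles
# task dependencies and communication delays, which are omitted here).
# etc[t][m] = estimated time to compute task t on machine m.

def min_min(etc, n_machines):
    ready = [0.0] * n_machines                  # machine ready times
    unmapped = set(range(len(etc)))
    mapping = {}
    while unmapped:
        # For each unmapped task, its best (completion time, machine) pair.
        best = {t: min((ready[m] + etc[t][m], m) for m in range(n_machines))
                for t in unmapped}
        # Pick the task with the minimum of those minimum completion times.
        task = min(unmapped, key=lambda t: best[t][0])
        finish, machine = best[task]
        mapping[task] = machine
        ready[machine] = finish
        unmapped.remove(task)
    return mapping, max(ready)                  # mapping and makespan

etc = [[3.0, 5.0], [2.0, 4.0], [6.0, 1.0]]
print(min_min(etc, n_machines=2))               # ({2: 1, 1: 0, 0: 0}, 5.0)
```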

14.
EDZL scheduling analysis
A schedulability test is derived for the global Earliest Deadline Zero Laxity (EDZL) scheduling algorithm on a platform with multiple identical processors. The test is sufficient, but not necessary, to guarantee that a system of independent sporadic tasks with arbitrary deadlines will be successfully scheduled, with no missed deadlines, by the multiprocessor EDZL algorithm. Global EDZL is known to be at least as effective as global Earliest-Deadline-First (EDF) in scheduling task sets to meet deadlines. It is shown, by testing on large numbers of pseudo-randomly generated task sets, that the combination of EDZL and the new schedulability test is able to guarantee that far more task sets meet deadlines than the combination of EDF and known EDF schedulability tests. In the second part of the paper, an improved version of the EDZL-schedulability test is presented. This new algorithm is able to efficiently exploit information on the slack values of interfering tasks, to iteratively refine the estimation of the interference a task can be subjected to. This iterative algorithm is shown to have better performance than the initial test, in terms of schedulable task sets detected.
Marko Bertogna
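For orientation, the EDZL selection rule itself (zero-laxity jobs first, the remainder by earliest deadline) can be sketched as below; this is not the schedulability test derived in the paper, and the job parameters are invented:

```python
# Sketch of the EDZL selection rule (not the schedulability test derived in the
# paper): at time t, jobs with zero laxity get the processors first; the rest
# are ordered by earliest absolute deadline.

def edzl_select(jobs, t, m):
    """jobs: list of dicts with 'id', 'deadline', 'remaining'.  Returns the ids
    of the (at most m) jobs that run at time t."""
    def laxity(j):
        return j["deadline"] - t - j["remaining"]
    zero_laxity = [j for j in jobs if laxity(j) <= 0]
    others = sorted((j for j in jobs if laxity(j) > 0), key=lambda j: j["deadline"])
    chosen = (zero_laxity + others)[:m]
    return [j["id"] for j in chosen]

jobs = [{"id": "a", "deadline": 10, "remaining": 3},
        {"id": "b", "deadline": 7,  "remaining": 5},   # laxity 0 at t = 2
        {"id": "c", "deadline": 9,  "remaining": 2}]
print(edzl_select(jobs, t=2, m=2))   # ['b', 'c']: b has zero laxity, c the earlier deadline
```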

15.
Process scheduling techniques consider the current load situation when allocating computing resources. These techniques rely on approximations, such as averages of communication, processing, and memory access, to improve process scheduling, although processes may exhibit different behaviors over the course of their execution. They may start with high communication requirements and later do only processing. We believe that by discovering how processes behave over time it is possible to improve resource allocation. This motivates this paper, which adopts chaos-theory concepts and nonlinear prediction techniques to model and predict process behavior. Results confirm that the radial basis function technique provides good predictions with low processing demands, which is essential in a real distributed environment.
Laurence T. Yang
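A minimal sketch of radial-basis-function prediction over a time-delay embedding of an observed behaviour series is given below (Gaussian kernels with ridge-regularised weights); the embedding dimension, kernel width, and data are invented, and this illustrates the technique rather than the paper's implementation:

```python
# Minimal sketch of radial-basis-function prediction over a time-delay embedding
# of an observed series (Gaussian kernels, ridge-regularised weights).  The
# embedding dimension, kernel width and data are invented for illustration.
import numpy as np

def embed(series, dim):
    """Build (X, y): each row of X is dim consecutive values, y the next value."""
    X = np.array([series[i:i + dim] for i in range(len(series) - dim)])
    y = np.array(series[dim:])
    return X, y

def rbf_fit_predict(series, query, dim=3, width=1.0, ridge=1e-6):
    X, y = embed(series, dim)
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))
    K = kernel(X, X)
    w = np.linalg.solve(K + ridge * np.eye(len(X)), y)
    return (kernel(np.array([query]), X) @ w).item()

cpu_usage = [0.2, 0.5, 0.8, 0.3, 0.2, 0.5, 0.8, 0.3, 0.2, 0.5, 0.8]
print(rbf_fit_predict(cpu_usage, query=[0.5, 0.8, 0.3]))  # ~0.2 for this periodic series
```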

16.
Rate monotonic schedulability tests using period-dependent conditions
Feasibility and schedulability problems have received considerable attention from the real-time systems research community in recent decades. Since the publication of the Liu and Layland (LL) bound, many researchers have tried to improve the schedulability bound of RM scheduling. The LL bound makes no assumption about the relationships among task periods. In this paper we consider the relative period ratios in a system. By reducing the difference between the smallest and the second-largest virtual period values in a system, we show that the RM schedulability bound can be improved significantly. This research also proposes a system design methodology to improve the schedulability of real-time systems with a fixed system load.
Wei-Kuan Shih
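For background, the classic Liu-Layland sufficient test, U <= n(2^(1/n) - 1), can be checked as below; the period-ratio-dependent improvements developed in the paper are not reproduced here, and the task set is invented:

```python
# Background sketch: the classic Liu-Layland sufficient test for RM scheduling,
# U <= n * (2**(1/n) - 1).  The period-dependent improvements proposed in the
# paper are not reproduced here.

def ll_schedulable(tasks):
    """tasks: list of (C, T) pairs for implicit-deadline periodic tasks."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2.0 ** (1.0 / n) - 1.0)
    return u <= bound, u, bound

tasks = [(1.0, 4.0), (1.0, 6.0), (2.0, 10.0)]
print(ll_schedulable(tasks))   # U ~ 0.617 <= bound ~ 0.780 -> schedulable
```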

17.
The load balancing problem in OTIS-Hypercube interconnection networks
An interconnection network architecture that promises to be an interesting option for future-generation parallel processing systems is the OTIS (Optical Transpose Interconnection System) optoelectronic architecture. All performance-improvement aspects of such a promising architecture therefore need to be investigated, one of which is load balancing. This paper focuses on devising an efficient algorithm for load balancing on OTIS-Hypercube interconnection networks. The proposed algorithm is called the Clusters Dimension Exchange Method (CDEM). Both the analytical model and the experimental evaluation show that the OTIS-Hypercube outperforms the Hypercube in terms of various parameters, including execution time, load-balancing accuracy, number of communication steps, and speed.
Bashira A. Jaradat
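For intuition, plain dimension-exchange load balancing on a d-dimensional hypercube (each node averages its load with the neighbour across one dimension per round) is sketched below; CDEM extends this idea to the two-level OTIS structure, which the sketch does not model, and the initial loads are invented:

```python
# Sketch of plain dimension-exchange load balancing on a d-dimensional hypercube:
# in round k every node averages its load with the neighbour whose address
# differs in bit k.  CDEM extends this idea to the two-level OTIS structure,
# which is not modelled here.

def dimension_exchange(load, d):
    """load: list of 2**d node loads, indexed by node address."""
    load = list(load)
    for k in range(d):
        for node in range(len(load)):
            partner = node ^ (1 << k)          # neighbour across dimension k
            if node < partner:                 # handle each pair once
                avg = (load[node] + load[partner]) / 2.0
                load[node] = load[partner] = avg
    return load

print(dimension_exchange([8, 0, 4, 2, 6, 0, 2, 2], d=3))  # -> all nodes at 3.0
```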

18.
A category of Distributed Real-Time Systems (DRTS) with a multiprocessor pipeline architecture is increasingly used. The key challenge of such systems is to guarantee the end-to-end deadlines of aperiodic tasks. This paper proposes an end-to-end deadline control model, called the Linear Quadratic Stochastic Optimal Control Model (LQ-SOCM), which features distributed feedback control that dynamically enforces the desired performance. The control system treats aperiodic task arrivals and execution-time variation as the two external sources of system unpredictability. LQ-SOCM uses a discrete-time state-space equation to describe the real-time computing system. In the actuator design, a continuous approach is then adopted to deal with discrete QoS (Quality of Service) adaptation. Finally, experiments demonstrate that the system is globally stable and can statistically provide end-to-end deadline guarantees for aperiodic tasks. At the same time, LQ-SOCM effectively improves system throughput.
Xiong Guang Ze
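As a generic illustration of discrete-time linear-quadratic state feedback (value iteration on the Riccati equation) for a state-space model x[k+1] = A x[k] + B u[k]; the matrices and their interpretation below are invented, and this is not LQ-SOCM's actual controller:

```python
# Sketch of discrete-time linear-quadratic state feedback (value iteration on the
# Riccati equation) for a state-space model x[k+1] = A x[k] + B u[k].  The
# matrices are invented; this is not LQ-SOCM's actual controller.
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # e.g. queue length and service-rate state
B = np.array([[0.0], [0.1]])             # control input: QoS-level adjustment
Q = np.eye(2)                            # penalise deviation from the set point
R = np.array([[0.5]])                    # penalise aggressive adaptation
K = dlqr_gain(A, B, Q, R)
x = np.array([5.0, 1.0])                 # current deviation of the state
u = -K @ x                               # feedback law u = -K x
print("gain:", K, "control:", u)
```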

19.
We present a framework that uses data dependency information to automate load balanced volume distribution and ray-task scheduling for parallel visualization of massive volumes. This dependency graph approach improves load balancing for both ray casting and ray tracing. The main bottlenecks in distributed volume rendering involve moving data across the network and loading memory into rendering hardware. Our load balancing solution combines static network distribution with dynamic ray-task scheduling. At the core of the dependency graph approach are the flex-block tree, introduced in this paper, and the cell-tree. The flex-block tree is similar to a kd-tree except that leaf nodes are cells containing a combination of empty space and tightly cropped subvolumes, or flex-blocks. A main contribution of this paper is the moving walls algorithm, which uses dynamic programming to create a flex-block partition. We show results for optimizing distributed ray cast rendering using a time cost function. We compare data distribution using the moving walls algorithm, with distribution using a recursive solution, and with a grid combined with a local kd-tree partition on each render-node.
Arie Kaufman
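For intuition about the moving-walls idea, a classic one-dimensional analogue is sketched below: partition a sequence of block costs into k contiguous pieces so that the heaviest piece is as light as possible, via dynamic programming. The paper's algorithm operates on volume data and flex-blocks, which this sketch does not capture; the costs are invented:

```python
# 1-D analogue of a dynamic-programming partition (for intuition only; the
# paper's moving-walls algorithm operates on volume data and flex-blocks):
# split a sequence of block costs into k contiguous pieces so that the heaviest
# piece is as light as possible.
import itertools
import math

def linear_partition(costs, k):
    prefix = [0.0] + list(itertools.accumulate(costs))
    seg = lambda i, j: prefix[j] - prefix[i]        # cost of blocks i..j-1
    n = len(costs)
    # best[j][p] = minimal max-piece cost for the first j blocks in p pieces
    best = [[math.inf] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for p in range(1, k + 1):
            for i in range(p - 1, j):
                best[j][p] = min(best[j][p], max(best[i][p - 1], seg(i, j)))
    return best[n][k]

print(linear_partition([4, 1, 3, 2, 6, 2, 1], k=3))   # -> 8.0 ([4,1,3] [2,6] [2,1])
```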

20.
Optimal virtual cluster-based multiprocessor scheduling
Scheduling of constrained deadline sporadic task systems on multiprocessor platforms is an area which has received much attention in the recent past. It is widely believed that finding an optimal scheduler is hard, and therefore most studies have focused on developing algorithms with good processor utilization bounds. These algorithms can be broadly classified into two categories: partitioned scheduling in which tasks are statically assigned to individual processors, and global scheduling in which each task is allowed to execute on any processor in the platform. In this paper we consider a third, more general, approach called cluster-based scheduling. In this approach each task is statically assigned to a processor cluster, tasks in each cluster are globally scheduled among themselves, and clusters in turn are scheduled on the multiprocessor platform. We develop techniques to support such cluster-based scheduling algorithms, and also consider properties that minimize total processor utilization of individual clusters. In the last part of this paper, we develop new virtual cluster-based scheduling algorithms. For implicit deadline sporadic task systems, we develop an optimal scheduling algorithm that is neither Pfair nor ERfair. We also show that the processor utilization bound of US-EDF{m/(2m−1)} can be improved by using virtual clustering. Since neither partitioned nor global strategies dominate over the other, cluster-based scheduling is a natural direction for research towards achieving improved processor utilization bounds.
Insup Lee
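A minimal sketch of the cluster-based idea for implicit-deadline sporadic tasks: assign tasks first-fit by utilization to clusters of a fixed number of processors, relying on the fact that an optimal intra-cluster scheduler can handle any workload whose total utilization does not exceed the cluster size. This is an illustration only, not the virtual-clustering algorithms developed in the paper; the utilizations are invented:

```python
# Sketch of the cluster-based idea for implicit-deadline sporadic tasks: assign
# tasks first-fit by utilisation to clusters of m_c processors each, assuming an
# optimal scheduler inside every cluster can handle total utilisation up to m_c.
# Illustration only, not the paper's virtual-clustering algorithms.

def cluster_assign(utilizations, n_clusters, cluster_size):
    clusters = [[] for _ in range(n_clusters)]
    load = [0.0] * n_clusters
    for idx, u in enumerate(utilizations):
        for c in range(n_clusters):
            if load[c] + u <= cluster_size + 1e-9:      # fits in this cluster
                clusters[c].append(idx)
                load[c] += u
                break
        else:
            return None                                 # first-fit failed
    return clusters

utils = [0.9, 0.8, 0.7, 0.6, 0.5, 0.3]     # task utilisations (hypothetical)
print(cluster_assign(utils, n_clusters=2, cluster_size=2))  # [[0, 1, 5], [2, 3, 4]]
```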
