Similar Literature
20 similar documents found (search time: 765 ms)
1.
In order to meet the inherent need of real-time applications for high-quality results within strict timing constraints, the employment of effective scheduling techniques is crucial in distributed real-time systems. In this paper, we evaluate by simulation the performance of strategies for the dynamic scheduling of composite jobs in a homogeneous distributed real-time system. Each job that arrives in the system is a directed acyclic graph of component tasks and has an end-to-end deadline. For each scheduling policy, we provide an alternative version which allows imprecise computations, taking into account the effects of input error on the processing time of the component tasks of a job. The simulation results show that the alternative versions of the algorithms outperform their respective counterparts. To our knowledge, an imprecise-computations approach for the dynamic scheduling of multiple task graphs with end-to-end deadlines and input error has not previously been discussed in the literature.
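The abstract does not spell out the scheduling policies themselves, so the following is only a minimal sketch of how an imprecise-computations variant of an EDF-style dispatcher might behave: when slack runs short, the optional part of a component task is truncated rather than missing the deadline. The task fields, the error model (input error inflating the mandatory processing time), and all names are illustrative assumptions, not the authors' algorithms.

```python
from dataclasses import dataclass

@dataclass
class ComponentTask:
    name: str
    mandatory: float     # processing time that must always complete
    optional: float      # refinement that may be truncated (imprecise result)
    input_error: float   # error handed over by predecessor tasks
    deadline: float      # derived from the job's end-to-end deadline

def dispatch(ready, now=0.0):
    """EDF-style dispatch with an imprecise-computations fallback (illustrative only)."""
    schedule = []
    for task in sorted(ready, key=lambda t: t.deadline):   # earliest deadline first
        # Assumed error model: input error inflates the mandatory processing time.
        cost = task.mandatory * (1.0 + task.input_error)
        slack = task.deadline - now - cost
        # Run as much of the optional part as the remaining slack allows;
        # truncating it trades result precision for meeting the deadline.
        cost += max(0.0, min(task.optional, slack))
        schedule.append((task.name, now, now + cost))
        now += cost
    return schedule

print(dispatch([ComponentTask("t1", 3.0, 2.0, 0.1, 12.0),
                ComponentTask("t2", 2.0, 4.0, 0.0, 9.0)]))
```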

2.
This paper presents a new service for CORBA applications that orchestrates the timely execution of the tasks of a distributed real-time system in a flexible way. It follows the CORBA philosophy of complementing the CORBA standard with additional services that solve specific problems and facilitate using CORBA in complex applications. The service has been designed for highly coupled applications that execute over LANs. It provides a synchronous framework to synchronize distributed applications that is open to accepting and removing components on-line, with reduced impact on the application timing. It also provides the flexibility to use different distributed scheduling policies that can override the local operating systems' schedulers. This paper describes the service architecture and implementation as well as its best-case performance on low-computing-power hardware running the QNX OS and connected to a switched Ethernet network. Finally, the usage of the service is illustrated with one case study: the synchronization of several robots in a welding process.

3.
Network processors are designed to handle the inherently parallel nature of network processing applications. However, partitioning and scheduling of application tasks and data allocation to reduce memory contention remain major challenges in realizing the full performance potential of a given network processor. The large variety of processor architectures in use and the increasing complexity of network applications further aggravate the problem. This work proposes a novel framework, called FEADS, for automating the task of application partitioning and scheduling for network processors. FEADS uses the simulated annealing approach to perform design space exploration of application mapping onto processor resources. Further, it uses cyclic and r-periodic scheduling to achieve higher-throughput schedules. To evaluate dynamic performance metrics such as throughput and resource utilization under realistic workloads, FEADS automatically generates a Petri net (PN) which models the application, architectural resources, mapping, the constructed schedule, and their interaction. The throughput obtained by schedules constructed by FEADS is comparable to that obtained by manual scheduling for linear task flow graphs; for more complicated task graphs, FEADS' schedules have a throughput up to 2.5 times higher than the manual schedules. Further, static scheduling of tasks increases throughput by up to 30% compared to an implementation of the same mapping without task scheduling.
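FEADS couples simulated annealing with a Petri-net evaluation of each candidate mapping; the toy sketch below illustrates only the annealing loop, using the load of the busiest processing element as a crude throughput proxy. The cost function, move operator, and cooling schedule are assumptions for illustration, not the FEADS implementation.

```python
import math
import random

def anneal_mapping(task_costs, num_pes, iters=5000, start_temp=10.0, cooling=0.999):
    """Toy simulated-annealing mapper: minimise the load of the busiest PE."""
    def cost(m):
        load = [0.0] * num_pes
        for task, pe in m.items():
            load[pe] += task_costs[task]
        return max(load)                                   # bottleneck PE limits throughput

    mapping = {t: random.randrange(num_pes) for t in task_costs}   # random initial mapping
    best, best_cost, temp = dict(mapping), cost(mapping), start_temp
    for _ in range(iters):
        candidate = dict(mapping)
        candidate[random.choice(list(task_costs))] = random.randrange(num_pes)  # random move
        delta = cost(candidate) - cost(mapping)
        if delta < 0 or random.random() < math.exp(-delta / temp):  # Metropolis acceptance
            mapping = candidate
            if cost(mapping) < best_cost:
                best, best_cost = dict(mapping), cost(mapping)
        temp *= cooling                                    # geometric cooling schedule
    return best, best_cost

tasks = {"parse": 3, "classify": 5, "queue": 2, "schedule": 4, "transmit": 1}
print(anneal_mapping(tasks, num_pes=2))
```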

4.
Task scheduling in heterogeneous distributed computing systems plays a crucial role in reducing the makespan and maximizing resource utilization. The diverse nature of the devices in heterogeneous distributed computing systems intensifies the complexity of scheduling the tasks. To overcome this problem, a new list-based static task scheduling algorithm, namely Deadline-Aware-Longest-Path-of-all-Predecessors (DA-LPP), is proposed in this article. In the prioritization phase of the DA-LPP algorithm, the path length of the current task from all its predecessors at each level is computed, and among them the longest path length value is assigned as the rank of the task. This strategy emphasizes the tasks on the critical path. This well-optimized prioritization phase leads to an observable reduction in the makespan of the applications. In the processor selection phase, the DA-LPP algorithm implements an improved insertion-based policy which effectively utilizes the unoccupied leftover free time slots of the processors to improve resource utilization; further, a least-computation-cost allocation approach is followed to minimize the overall computation cost of the processors, and a parental prioritization policy is incorporated to further reduce the scheduling length. To demonstrate the robustness of the proposed algorithm, a synthetic graph generator is used in this experiment to generate a huge variety of graphs. Apart from the synthetic graphs, real-world application graphs like Montage, LIGO, Cybershake, and Epigenomic are also considered to grade the performance of the DA-LPP algorithm. Experimental results of the DA-LPP algorithm show improved performance in terms of scheduling length ratio, makespan reduction rate, and resource reduction rate when compared with other algorithms like DQWS, DUCO, DCO, and EPRD. The results reveal that for a 1000-task set with a deadline equal to twice the critical path length, the scheduling length ratio of the DA-LPP algorithm is better than DQWS by 35%, DUCO by 23%, DCO by 26%, and EPRD by 17%.
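As a rough illustration of the prioritization phase described above, the sketch below computes, for each task in a DAG, the longest path length over all of its predecessors and uses it as the rank; tasks would then be taken in decreasing rank, which favours the critical path. The graph encoding, weights, and exact rank formula are assumptions, not the published DA-LPP definition.

```python
def longest_path_ranks(topo_order, succs, weight):
    """Rank each task by the longest predecessor path plus its own cost (illustrative).

    topo_order: task ids in topological order
    succs:      dict mapping a task to its successors
    weight:     dict mapping a task to its computation cost
    """
    preds = {t: [] for t in topo_order}
    for u, vs in succs.items():
        for v in vs:
            preds[v].append(u)
    rank = {}
    for t in topo_order:                  # predecessors are ranked before t
        longest = max((rank[p] for p in preds[t]), default=0.0)
        rank[t] = longest + weight[t]
    return rank

dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
ranks = longest_path_ranks(["A", "B", "C", "D"], dag, {"A": 2, "B": 4, "C": 1, "D": 3})
print(sorted(ranks, key=ranks.get, reverse=True))   # scheduling order favouring the critical path
```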

5.
A Rule-Based Hierarchical Load Balancing Scheduling Model (cited by 13: 0 self-citations, 13 by others)
On massively parallel and distributed systems and networks of workstations, increasing resource utilization and the response speed of tasks through an effective load balancing scheduling strategy is a critical problem. This paper analyzes dynamic and static load balancing scheduling strategies and then proposes a rule-based hierarchical load balancing scheduling model. Finally, it makes some comparisons with other scheduling models.

6.
Pervasive computing deployments are increasingly using sensor networks to build instrumented environments that provide local data to immersed mobile applications. These applications demand opportunistic and unpredictable interactions with local devices. While this direct communication has the potential to reduce both overhead and latency, it deviates significantly from existing uses of sensor networks that funnel information to a static central collection point. This pervasive-computing-driven perspective demands new communication abstractions that enable the required direct communication among mobile applications and sensors. This paper presents the scene abstraction, which allows immersed applications to create dynamic distributed data structures over the immersive sensor network. A scene is created based on application requirements, properties of the underlying network, and properties of the physical environment. This paper details our work on defining scenes, providing an abstract model, an implementation, and an evaluation.

7.
8.
The handling of complex tasks in IoT applications becomes difficult due to the limited availability of resources in most IoT devices. There arises a need to offload IoT tasks with heavy processing and storage demands to the resource-enriched edge and cloud. In edge computing, factors such as arrival rate, the nature and size of tasks, network conditions, platform differences, and the energy consumption of IoT end devices impact the choice of an optimal offloading mechanism. A model is developed to make a dynamic decision for offloading tasks to the edge and cloud or executing them locally by computing the expected time, energy consumption, and processing capacity. This dynamic decision is proposed as the processing-capacity-based decision mechanism (PCDM), which makes offloading decisions on new tasks by scheduling all the available devices based on processing capacity. The target devices are then selected for task execution with respect to energy consumption, task size, and network time. PCDM is developed in the EDGECloudSim simulator for four different applications drawn from different categories, such as time-sensitive, small-sized, and low-energy-consumption applications. The PCDM offloading methodology is evaluated through simulations and compared with the multi-criteria decision support mechanism for IoT offloading (MEDICI). Strategies based on task weightage, termed PCDM-AI, PCDM-SI, PCDM-AN, and PCDM-SN, are developed and compared against five existing baseline strategies, namely IoT-P, Edge-P, Cloud-P, Random-P, and Probabilistic-P. These nine strategies are also implemented using MEDICI with the same parameters as PCDM. Finally, all the PCDM and MEDICI approaches are compared against each other for the four applications. From the simulation results, it is inferred that each application has a distinct approach that performs best in terms of response time, total tasks executed, device energy consumption, and total application energy consumption.
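The abstract describes PCDM only in terms of its inputs (expected time, energy consumption, processing capacity), so the following is just a toy decision function showing what a weighted local/edge/cloud choice could look like. The scoring formula, weights, field names, and units are assumptions, not the published mechanism.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    capacity_mips: float    # processing capacity of the target
    uplink_mbps: float      # 0 means local execution, i.e. no data transfer
    energy_per_bit: float   # device-side energy spent shipping data to the target

def choose_target(task_mi, task_bits, targets, w_time=0.7, w_energy=0.3):
    """Pick the target with the lowest weighted time/energy score (illustrative only)."""
    def score(t):
        transfer_s = task_bits / (t.uplink_mbps * 1e6) if t.uplink_mbps else 0.0
        compute_s = task_mi / t.capacity_mips
        device_energy = task_bits * t.energy_per_bit
        return w_time * (transfer_s + compute_s) + w_energy * device_energy
    return min(targets, key=score)

targets = [Target("local", 500, 0.0, 0.0),
           Target("edge", 4000, 50.0, 2e-9),
           Target("cloud", 20000, 10.0, 2e-9)]
print(choose_target(task_mi=8000, task_bits=4e6, targets=targets).name)
```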

9.
The implementation of Multi Agent is the key technology for realizing the dynamic task scheduling algorithm of a Multi Agent based distributed measurement and control system. Using Java as the development tool, and based on the functions of the Multi Agent, the implementation of Multi Agent for dynamic task scheduling in a distributed measurement and control system is discussed in detail. The Multi Agent based dynamic task scheduling algorithm uses mobile agents to migrate tasks dynamically during system operation according to the load state of each host. The paper describes in detail how mobile agents are developed and executed with the Aglets system, which effectively improves system efficiency and achieves the goal of dynamic task scheduling.

10.
Real-Time Task Scheduling for the Interface Box of a True-Heading Measurement System (cited by 1: 0 self-citations, 1 by others)
Real-time performance is critical in industrial control and military applications. This article analyzes the hardware characteristics of the interface box of a certain submarine's true-heading measurement system and the corresponding software requirements. Under the interface box's single-task operating system, a task scheduling strategy is needed to achieve real-time performance for multiple tasks. On this premise, the article analyzes the conventional RM and EDF scheduling strategies and designs a strategy for dynamic real-time scheduling of multiple tasks on top of the interface box's single-task, weakly real-time operating system. The strategy combines the characteristics of RM and EDF, taking into account both the frequency of tasks and their importance. Engineering practice with the interface box demonstrates that the method is reasonable and effective.
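The article states only that the strategy combines RM and EDF by considering both task frequency and task importance; the sketch below shows one plausible way to blend a period-based (RM) component with a deadline-based (EDF) component into a single dispatch priority. The weighting scheme and field names are assumptions, not the published design.

```python
def hybrid_priority(task, now, w_period=0.5, w_deadline=0.5):
    """Blend an RM-style static priority (shorter period ranks higher) with an
    EDF-style dynamic priority (earlier absolute deadline ranks higher).
    Lower score means run first."""
    return w_period * task["period"] + w_deadline * (task["deadline"] - now)

def pick_next(ready, now):
    return min(ready, key=lambda t: hybrid_priority(t, now))

ready = [{"name": "heading_update", "period": 10, "deadline": 25},
         {"name": "self_test",      "period": 100, "deadline": 30}]
print(pick_next(ready, now=20)["name"])   # the frequent, tight-deadline task wins
```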

11.
In recent years, a variety of computational sites and resources have emerged, and users often have access to multiple distributed resources. These sites are heterogeneous in nature, and the performance of different tasks in a workflow varies from one site to another. Additionally, users typically have a limited resource allocation at each site, capped by administrative policies. In such cases, a judicious scheduling strategy is required to map tasks in the workflow to resources so that the workload is balanced among sites and the data transfer overhead is minimized. Most existing systems either run the entire workflow at a single site, use naïve approaches to distribute the tasks across sites, or leave it to the user to optimize the allocation of tasks to distributed resources. This results in a significant loss in productivity. We propose a multi-site workflow scheduling technique that uses performance models to predict the execution time on resources and dynamic probes to identify the achievable network throughput between sites. We evaluate our approach on real-world applications using the Swift parallel and distributed execution framework. We use two distinct computational environments: geographically distributed multiple clusters and multiple clouds. We show that our approach improves resource utilization and reduces execution time when compared to the default schedule.
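The sketch below illustrates the core idea of combining a per-site performance model with probed network throughput: for each task, pick the site with the lowest predicted execution-plus-transfer time among sites that still have allocation left. The data layout, quota handling, and numbers are assumptions, not the Swift-based implementation.

```python
def pick_site(task, data_mb, sites):
    """Choose the site with the lowest predicted completion time (illustrative only).

    Each site carries: a remaining allocation quota, a probed throughput estimate,
    and a performance-model prediction of the task's runtime at that site.
    """
    best, best_cost = None, float("inf")
    for site in sites:
        if site["quota"] <= 0:                       # respect per-site allocation caps
            continue
        transfer_s = data_mb * 8 / site["throughput_mbps"]
        cost = site["predicted_runtime_s"][task] + transfer_s
        if cost < best_cost:
            best, best_cost = site, cost
    best["quota"] -= 1                               # charge the chosen site's allocation
    return best["name"]

sites = [{"name": "cluster_a", "quota": 10, "throughput_mbps": 800,
          "predicted_runtime_s": {"align": 120}},
         {"name": "cloud_b", "quota": 3, "throughput_mbps": 100,
          "predicted_runtime_s": {"align": 90}}]
print(pick_site("align", data_mb=500, sites=sites))
```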

12.
Reconfigurable machines based on field programmable gate array (FPGA) chips adapt to applications’ needs through hardware reconfiguration. Partial reconfiguration allows the configuration of a portion of a chip while the rest of the chip is busy working on tasks. This paper considers a two-dimensional partially reconfigurable FPGA chip that allows the dynamic swap in and out of circuit modules. Such a chip supports the concurrent execution of multiple applications or an application that is otherwise too large to fit. A challenging issue for 2-D runtime partial reconfiguration is how to support the efficient connection, or routing, between circuit modules or between modules and I/O pins, when those modules may be placed on any area of a chip. Because commercial chips are not efficient in 2-D runtime routing, a new FPGA architecture is proposed based on an array of clusters of configurable logic blocks and a mesh of segmented buses. To evaluate the runtime performance of the architecture, an operating system is specified and implemented which takes care of the scheduling, placement, and routing of circuits on the architecture. Simulation is used to evaluate the efficiency of the OS kernel and to determine the optimal cluster size of the architecture.

13.
Scheduling concerns the allocation of processors to processes, and is traditionally associated with low-level tasks in operating systems and embedded devices. However, modern software applications with soft real-time requirements need to control application-level performance. High-level scheduling control at the application level may complement general purpose OS level scheduling to fine-tune performance of a specific application, by allowing the application to adapt to changes in client traffic on the one hand and to low-level scheduling on the other hand. This paper presents an approach to express and analyze application-specific scheduling decisions during the software design stage. For this purpose, we integrate support for application-level scheduling control in a high-level object-oriented modeling language, Real-Time ABS, in which executable specifications of method calls are given deadlines and real-time computational constraints. In Real-Time ABS, flexible application-specific schedulers may be specified by the user, i.e., developer, at the abstraction level of the high-level modeling language itself and associated with concurrent objects at creation time. Tool support for Real-Time ABS is based on an abstract interpreter that supports simulations and measurements of systems at the design stage.

14.
The objective of this research is to develop methodologies and a framework for distributed process planning and adaptive control using function blocks. Facilitated by a real-time monitoring system, the proposed methodologies can be applied to integrate with functions of dynamic scheduling in a distributed environment. A function block-enabled process planning approach is proposed to handle dynamic changes during process plan generation and execution. This paper focuses mainly on distributed process planning, particularly on the development of a function block designer that can encapsulate generic process plans into function blocks for runtime execution. As function blocks can sense environmental changes on a shop floor, it is expected that a so-generated process plan can adapt itself to the shop floor environment with dynamically optimized solutions for plan execution and process monitoring.

15.
This paper explores the energy-efficient scheduling of real-time tasks on a non-ideal DVS processor in the presence of resource sharing. We assume that tasks are periodic, preemptive, and may access shared resources. When dynamic-priority and fixed-priority scheduling are considered, we use the earliest deadline first (EDF) algorithm and the rate monotonic (RM) algorithm to schedule the given set of tasks. Based on the stack resource policy (SRP), we propose an approach, called the blocking-aware two-speed (BATS) algorithm, to synchronize the tasks with shared resources and to calculate appropriate execution speeds so that the shared resources can be accessed in a mutually exclusive manner and the energy consumption can be reduced. In particular, BATS uses a static low speed to execute tasks initially, and then switches to a high speed dynamically whenever a task blocks a higher-priority task. More specifically, the processor runs at the high speed from the beginning of the blocking until the deadline of the blocked task or until the processor becomes idle. In order to guarantee that the deadlines of tasks are met, the static low speed and the dynamic high speeds are derived from the theoretical analysis of the schedulability of tasks. Compared with existing work, BATS achieves more energy saving because its dynamic high speeds are lower than those of existing work and the processor has less chance of executing tasks at the high speeds. The schedulability analysis and the properties of our proposed BATS are provided in this paper. We also evaluated the capabilities of BATS in a series of experiments, with encouraging results.
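In the actual algorithm the low and high speeds are derived from a schedulability analysis; the fragment below only illustrates the switching rule the abstract describes: run at the static low speed by default, and stay at the high speed from the start of a blocking episode until the blocked task's deadline or until the processor idles. Speed values and the event encoding are made-up assumptions.

```python
def bats_speed(now, idle, blocking_episodes, low_speed=0.6, high_speed=1.0):
    """Return the processor speed under a BATS-like two-speed rule (illustrative only).

    blocking_episodes: list of (start_time, blocked_task_deadline) pairs, one per
    occasion on which a task holding a shared resource blocks a higher-priority task.
    """
    if idle:
        return low_speed
    for start, blocked_deadline in blocking_episodes:
        if start <= now < blocked_deadline:
            return high_speed              # speed up until the blocked task's deadline
    return low_speed                       # otherwise stay at the static low speed

episodes = [(4, 10)]                       # blocking starts at t=4, blocked deadline at t=10
print([bats_speed(t, idle=False, blocking_episodes=episodes) for t in range(12)])
```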

16.
Contemporary operating systems for single-ISA (instruction set architecture) multi-core systems attempt to distribute tasks equally among all the CPUs. This approach works relatively well when there is no difference in CPU capability. However, there are cases in which CPU capability differs from one core to another. For instance, static capability asymmetry results from the advent of new asymmetric hardware, and dynamic capability asymmetry comes from operating system (OS) noise caused by networking or I/O handling. These asymmetries can make it hard for the OS scheduler to evenly distribute the tasks, resulting in less efficient load balancing. In this paper, we propose a user-level load balancer for parallel applications, called the 'capability balancer', which recognizes differences in CPU capability and makes subtasks share the entire CPU capability fairly. The balancer can coexist with the existing kernel-level load balancer without degrading the behavior of the kernel balancer. The capability balancer can fairly distribute CPU capability to tasks with very little overhead. For real workloads like the NAS Parallel Benchmark (NPB), we have achieved speedups of up to 9.8% and 8.5% under dynamic and static asymmetries, respectively. We have also observed speedups of 13.3% for dynamic asymmetry and 24.1% for static asymmetry in a competitive environment. The impacts of our task selection policies, FIFO (first in, first out) and cache, were compared. The cache policy led to a speedup of 5.3% in overall execution time and a decrease of 4.7% in the overall cache miss count, compared with the FIFO policy, which is used by default.
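As a toy illustration of sharing CPU capability fairly, the sketch below splits a pool of subtasks across cores in proportion to a per-core capability score, distributing the rounding remainder by a largest-remainder rule. The proportional split and the rounding rule are assumptions; the paper's balancer additionally runs at user level alongside the kernel load balancer and offers FIFO and cache task-selection policies.

```python
def split_by_capability(n_subtasks, capabilities):
    """Assign subtask counts to cores in proportion to their capability scores."""
    total = sum(capabilities)
    exact = [n_subtasks * c / total for c in capabilities]   # ideal fractional shares
    counts = [int(x) for x in exact]
    leftover = n_subtasks - sum(counts)
    # Hand the remaining subtasks to the cores with the largest fractional parts.
    by_fraction = sorted(range(len(exact)), key=lambda i: exact[i] - counts[i], reverse=True)
    for i in by_fraction[:leftover]:
        counts[i] += 1
    return counts

# Four cores, one of which is slowed by OS noise or a weaker core type.
print(split_by_capability(64, capabilities=[1.0, 1.0, 1.0, 0.4]))
```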

17.
Grid computing has become conventional in distributed systems due to technological advancements and network popularity. Grid computing facilitates distributed applications by integrating available idle network computing resources into formidable computing power. As a result, efficient integration and sharing of resources makes abundant computing power available to solve complicated problems that a single machine cannot manage. However, grid computing mines resources from accessible idle nodes, and node accessibility varies with time. A node that is currently idle may become occupied within a second and then be unavailable to provide resources. Accordingly, node selection must provide effective and sufficient resources over a long period to allow load assignment. This study proposes a hybrid load balancing policy that integrates static and dynamic load balancing technologies. Essentially, a static load balancing policy is applied to select an effective and suitable node set. This lowers the probability of an unbalanced load caused by assigning tasks to ineffective nodes. When a node shows signs that it may be unable to continue providing resources, the dynamic load balancing policy determines whether the node in question has become ineffective for load assignment. The system then obtains a replacement node within a short time to maintain system execution performance.
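A minimal sketch of the hybrid policy's two phases, under the assumption that each node carries some history-based effectiveness score in [0, 1]: the static phase picks the most effective nodes up front, and the dynamic phase swaps out any node whose score later falls below a threshold. The scoring, threshold, and data layout are illustrative assumptions, not the published policy.

```python
def select_nodes(pool, k, threshold=0.5):
    """Static phase: choose the k most effective nodes from the candidate pool."""
    usable = [n for n in pool if n["effectiveness"] >= threshold]
    return sorted(usable, key=lambda n: n["effectiveness"], reverse=True)[:k]

def rebalance(selected, pool, threshold=0.5):
    """Dynamic phase: replace any selected node that can no longer provide resources."""
    for i, node in enumerate(selected):
        if node["effectiveness"] < threshold:
            spares = select_nodes([n for n in pool if n not in selected], 1, threshold)
            if spares:
                selected[i] = spares[0]
    return selected

pool = [{"id": "n1", "effectiveness": 0.9}, {"id": "n2", "effectiveness": 0.8},
        {"id": "n3", "effectiveness": 0.3}, {"id": "n4", "effectiveness": 0.7}]
chosen = select_nodes(pool, k=2)
chosen[0]["effectiveness"] = 0.2          # n1 becomes occupied and stops being effective
print([n["id"] for n in rebalance(chosen, pool)])   # n1 is replaced by n4
```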

18.
This paper proposes a task-graph-based application structure model and its implementation in mobile ad hoc networks. The structure describes a distributed application through a task graph composed of nodes and edges. It selects devices that can complete the task and satisfy certain attributes to execute the application, while splitting off parts of the application for execution on other available devices in the network. To realize this application, the paper proposes an execution protocol and carries out experiments; the results show that the protocol is feasible.

19.
A Deadline Optimization Approach for Scheduling Aperiodic Tasks in Hard Real-Time Environments (cited by 1: 0 self-citations, 1 by others)
This article proposes a deadline optimization approach (DOA) for scheduling soft-deadline aperiodic tasks in hard real-time environments. Under the premise that periodic and sporadic tasks still meet their deadline requirements, the approach optimizes the response time of aperiodic tasks. It can also trade off execution performance against computational complexity according to the needs of the real-time application. Simulation experiments show that, compared with existing dynamic scheduling algorithms, DOA achieves shorter response times for aperiodic tasks while converging quickly, incurring little extra overhead, having low computational complexity, and being easy to implement; it is therefore a good method for the hybrid scheduling of periodic and aperiodic tasks in hard real-time environments.

20.
Most work related to quality of service (QoS) is concerned with individual system components, such as the operating system or the network. However, to support distributed multimedia applications, the entire distributed system must participate in providing the guaranteed performance levels. In recognition of this, a number of QoS architectures have been proposed to provide QoS guarantees. The mechanisms and schemes proposed by those architectures are used in a rather static manner, since the involved entities, e.g., the network, sender and receiver, are known before the connection (call) set-up phase. In contrast to these architectures, we propose a general QoS management framework which supports the dynamic choice of a configuration of system components to support the QoS requirements of the user of a specific application. We consider different possible system configurations and select the most appropriate one depending on the desired QoS and the available resources. In this paper we present an overview of this general framework; in particular, we concentrate on QoS negotiation and adaptation mechanisms. To show the feasibility of this approach, we designed and implemented a QoS manager for distributed multimedia presentational applications, such as news-on-demand. The negotiation and adaptation mechanisms supported by the QoS manager are specializations of the general framework. The proposed framework makes it possible to improve the utilization of system resources, and thus to increase system availability; it also allows the system to recover automatically, when possible, from QoS degradations. Furthermore, it provides the flexibility to incorporate different resource reservation schemes and scheduling policies, and to accommodate new system component technologies.
