19 similar documents found; search took 171 ms
1.
Building on a study of motion controllers for single-process CNC systems, a design method for a multi-process CNC motion controller based on RT-Linux is proposed. A component-based design approach yields the framework of the multi-process motion controller, and a round-robin scheduling policy is used to schedule the multiple processes. Experiments on the multi-process CNC motion controller show that round-robin scheduling causes severe jitter. To solve this problem, the paper proposes a scheduling policy that slices the servo period; experimental results show that this policy eliminates the jitter and meets the requirements of a multi-process CNC motion controller.
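The fix described in this abstract, slicing each servo period into fixed per-process windows, can be sketched as follows. This is a minimal illustration of the idea only; the function and channel names are hypothetical, not taken from the paper:

```python
# Sketch of servo-period slicing: each servo period is divided into one
# fixed slice per NC process, so every process executes at the same offset
# in every period and release jitter is eliminated by construction.

def build_slice_schedule(processes, servo_period_us):
    """Assign each process a fixed (start, end) window inside every period."""
    slice_us = servo_period_us // len(processes)
    return {p: (i * slice_us, (i + 1) * slice_us)
            for i, p in enumerate(processes)}

# Four machining channels sharing a 1 ms servo period:
schedule = build_slice_schedule(["ch0", "ch1", "ch2", "ch3"], 1000)
# ch0 always runs in [0, 250) us, ch1 in [250, 500) us, and so on.
```

Because each process's window is constant across periods, its release time never drifts the way it does under plain round-robin.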
2.
3.
Research and Performance Analysis of Linux Kernel Scheduler Algorithms    (Total citations: 1; self-citations: 0; citations by others: 1)
The Linux operating system is expanding into both embedded systems and high-end servers, so improving the scheduler's performance and supporting real-time applications and multiprocessor parallelism are important research topics. This paper compares the kernel schedulers of Linux 2.4.22 and 2.6.10, focusing on their scheduling algorithms, scheduling points, priority computation methods and timing, and overall scheduling performance.
4.
A Variable-Sampling-Period Dynamic Scheduling Strategy for Networked Control Systems    (Total citations: 2; self-citations: 0; citations by others: 2)
A design method for a dynamic scheduler for networked control systems, driven by the network's operating state, is proposed. First, a monitor obtains the current network utilization, network-induced error, and packet transmission time online; from these, the network utilization and packet transmission time of the next monitoring period are predicted. Then, according to the requirements of network performance and control performance, network resources are allocated based on the predicted values and a new sampling period is computed for the control system. When packet transmissions collide, Maximum Error First (MEF) is used as an auxiliary scheduling policy to determine packet transmission priority. Finally, a set of simulations verifies the effectiveness of the designed dynamic scheduler.
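The MEF tie-breaking rule named in this abstract is simple to state in code. The following sketch (identifiers are illustrative, not from the paper) orders colliding packets by network-induced error, largest first:

```python
def mef_order(pending):
    """Maximum Error First: among colliding packets, the control loop with
    the largest network-induced error transmits first.
    `pending` maps loop id -> current induced error."""
    return sorted(pending, key=pending.get, reverse=True)

order = mef_order({"loop_a": 0.8, "loop_b": 2.3, "loop_c": 1.1})
# loop_b (largest error) is granted the channel first.
```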
5.
6.
Based on an analysis of existing resource-scheduling schemes and models, a hierarchical three-tier scheduling model for grid resources is proposed. It consists of a main scheduler, secondary schedulers, and compute nodes. The main scheduler distributes some of the tasks to the secondary schedulers according to each task's nature and requirements and the execution status of the lower-level secondary schedulers, allowing the main and secondary schedulers to work in parallel. A round-robin task-dispatch policy is proposed on top of this model. Analysis and simulation show that this resource-scheduling model and dispatch policy clearly outperform centralized scheduling schemes in scheduling performance.
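The round-robin dispatch step at the top tier can be sketched as follows (names are hypothetical; the real main scheduler also weighs task requirements and secondary-scheduler status):

```python
from collections import defaultdict
from itertools import cycle

def dispatch_round_robin(tasks, sub_schedulers):
    """Main scheduler hands tasks to secondary schedulers in round-robin
    order; each secondary scheduler then schedules onto its own nodes."""
    plan = defaultdict(list)
    ring = cycle(sub_schedulers)
    for task in tasks:
        plan[next(ring)].append(task)
    return dict(plan)

plan = dispatch_round_robin(["t1", "t2", "t3", "t4", "t5"], ["S1", "S2"])
# S1 receives t1, t3, t5; S2 receives t2, t4.
```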
7.
Barrier synchronization is a class of operations that directly affects processor performance. For a stream-processor architecture, two software synchronization mechanisms and one hardware mechanism are proposed and implemented: barrier synchronization based on a mutex-protected counter, lock-free barrier synchronization based on a shared status register, and barrier synchronization based on a dedicated hardware management unit. The performance of the three mechanisms is tested and analyzed on a stream-processor prototype under different load sizes, different load distributions, and typical applications. The results show that the barrier mechanism based on the dedicated hardware management unit performs best.
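The first of the three mechanisms, a barrier built on a mutex-protected counter, can be sketched in software. This is the generic textbook construction, not the authors' stream-processor implementation:

```python
import threading

class CounterBarrier:
    """Barrier built on a lock-protected counter: the last thread to
    arrive resets the counter and wakes everyone waiting."""
    def __init__(self, parties):
        self.parties = parties
        self.count = 0
        self.phase = 0
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            phase = self.phase
            self.count += 1
            if self.count == self.parties:   # last arrival opens the barrier
                self.count = 0
                self.phase += 1
                self.cond.notify_all()
            else:
                while phase == self.phase:   # guards against spurious wakeups
                    self.cond.wait()

# Three workers meet at the barrier before recording their results:
barrier, results = CounterBarrier(3), []
def worker(i):
    barrier.wait()
    results.append(i)
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The phase counter is what makes the barrier reusable: a thread only leaves `wait()` once the phase advances, so a fast thread re-entering the barrier cannot slip past a slow one from the previous phase.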
8.
9.
Cyber-physical systems (CPSs) are increasingly widely deployed, but their inherent openness makes them vulnerable to network attacks, and attackers are becoming more intelligent, so research on CPS security is necessary. With this in mind, the paper considers the interaction, under DoS attacks, of a cyber-physical system with multiple remote state-estimation subsystems. Each sensor monitors its own subsystem, and a scheduler assigns a channel to each sensor's packets so that local estimates can be sent to the remote state estimator; the objective is to minimize the total estimation-error covariance. To be closer to practical scenarios, the channels in the multi-channel transmission are affected differently by their environments, so transmitting over different channels consumes different amounts of energy. Both the scheduler's and the attacker's channel choices must satisfy a channel's minimum energy requirement before transmission or attack is possible. The attacker is assumed to be more intelligent: if, after attacking one channel, it has remaining energy that satisfies the requirements of other channels, it can attack those channels simultaneously, pursuing the objective opposite to the scheduler's. On this basis, a two-player zero-sum game is constructed, and a Nash Q-learning algorithm is used to solve for both players' optimal strategies, offering an approach to research on secure state estimation in cyber-physical systems.
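A heavily simplified view of the zero-sum channel game: the paper solves the full dynamic problem with Nash Q-learning, but the defensive idea can be previewed as a pure-strategy maximin over an assumed one-shot payoff table (all names and values here are hypothetical):

```python
def maximin_choice(payoff):
    """payoff[s][a] is the scheduler's reward when the scheduler picks
    channel assignment s and the attacker picks action a. The scheduler
    guards against the worst attacker response (pure-strategy maximin)."""
    best = max(payoff, key=lambda s: min(payoff[s].values()))
    return best, min(payoff[best].values())

# ch2 guarantees a reward of 1 regardless of which channel is jammed:
choice, value = maximin_choice({
    "ch1": {"jam_1": 2, "jam_2": 0},
    "ch2": {"jam_1": 1, "jam_2": 1},
})
```

The zero-sum structure means the attacker's payoff is the negative of the scheduler's, which is why a worst-case (min over attacker actions) analysis is the right defensive lens.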
10.
11.
12.
We design a task mapper, TPCM, for assigning tasks to virtual machines, and an application-aware virtual machine scheduler, TPCS, oriented toward parallel computing, to achieve high performance in virtual computing systems. To solve the problem of mapping tasks to virtual machines, a virtual machine mapping algorithm (VMMA) in TPCM is presented to achieve load balance in a cluster. Based on such mapping results, TPCS is constructed from three components: a middleware supporting application-driven scheduling, a device driver in the guest OS kernel, and a virtual machine scheduling algorithm. These components are implemented in the user space, the guest OS, and the CPU virtualization subsystem of the Xen hypervisor, respectively. In TPCS, the progress status of each task is transmitted from user space to the underlying kernel, enabling the virtual machine scheduling policy to schedule based on task progress. This policy trades task completion time for resource utilization. Experimental results show that TPCM can mine the parallelism among tasks to implement the mapping from tasks to virtual machines based on the relations among subtasks. The TPCS scheduler completes tasks in a shorter time than the Credit and other schedulers, because it uses task progress to ensure that the tasks in virtual machines complete simultaneously, thereby reducing the time spent pending, synchronizing, communicating, and switching. Parallel tasks can therefore collaborate with each other to achieve higher resource utilization and lower overheads. We conclude that the TPCS scheduler overcomes the inability of present algorithms to perceive the progress of tasks, making it better than schedulers currently used in parallel computing.
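The load-balancing goal of a task-to-VM mapping can be illustrated with a standard greedy heuristic, placing the largest remaining task on the currently lightest VM. This is a generic sketch of the objective, not the VMMA algorithm from the paper:

```python
import heapq

def map_tasks_to_vms(task_costs, n_vms):
    """Greedy load balancing: place the largest remaining task on the VM
    with the smallest accumulated load (longest-processing-time rule)."""
    heap = [(0.0, vm) for vm in range(n_vms)]   # (load, vm_id)
    heapq.heapify(heap)
    placement = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, vm = heapq.heappop(heap)
        placement[task] = vm
        heapq.heappush(heap, (load + cost, vm))
    return placement

placement = map_tasks_to_vms({"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}, 2)
# Both VMs end up with a load of 5.0: {a, d} on one, {b, c} on the other.
```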
13.
Kashif Hesham Khan, Kalim Qureshi, Mostafa Abd-El-Barr, The Journal of Supercomputing, 2014, 68(3):1487-1502
Scheduling large-scale applications in heterogeneous grid systems is a fundamental NP-complete problem that is critical to obtaining good performance and execution cost. Achieving high performance in a grid system requires effective task partitioning, resource management, and load balancing. The heterogeneous and dynamic nature of a grid, as well as the diverse demands of applications running on it, makes grid scheduling a major task. Existing schedulers in wide-area heterogeneous systems require a large amount of information about the application and the grid environment to produce reasonable schedules. However, this information may not be available, may be too expensive to collect, or may increase the scheduler's runtime overhead to the point that the scheduler is rendered ineffective. We believe that no single scheduler is appropriate for all grid systems and applications: data-parallel applications, in which further data partitioning is possible, can be improved through efficient resource management, smart resource selection, and load balancing, whereas in functional (non-divisible) task-parallel applications such partitioning is either impossible, difficult, or expensive in terms of performance. In this paper, we propose a scheduler for data-parallel applications (SDPA) that offers an efficient task-partitioning and load-balancing strategy for data-parallel applications in a grid environment. The proposed SDPA offers two major features: it maintains job priority even when an insufficient number of free resources is available, and it uses pre-task assignment to cut the idle time of nodes. SDPA selects nodes according to the nature of the task and the availability of the nodes' resources. Simulation results reveal that SDPA improves on the strategies reported in the reviewed literature in terms of execution time, throughput, and waiting time.
14.
Graph grammar concepts are applied to the design of consistent states and of manipulations modeling actions, transactions, and schedules for simultaneous executions on database systems. In particular, we show how to use Church-Rosser theorems for graph grammars to handle synchronization problems for database systems in a multi-user environment. In this paper these concepts are applied to a database system for a library. For presentation purposes the system is restricted to the basic features of a library, but it can be extended to more comfortable systems, including generalized database systems. All manipulation rules are shown to preserve consistency, and it is analysed exactly which of these rules are collateral and for which of them further synchronization mechanisms are necessary. While consistent states are modelled as graphs and manipulation rules (resp. actions) as graph productions, transactions correspond to sequences of productions, and schedules to sequences of parallel productions. The application of a schedule to a consistent state yields a "semantical schedule" which transforms the given state into some other state of the system. Sufficient conditions are given for a schedule to be consistent, meaning that consistent states are transformed into consistent states. Moreover, we are able to define a degree of non-parallelism for schedules such that optimal schedules are those with minimal degree with respect to all equivalent ones. Applying results from graph grammar theory, we are able to show that for each schedule there is a unique optimal one. A similar result is shown for semantical schedules, where the semantical degree of non-parallelism is in general smaller than the syntactical one.
15.
We study the problem of executing parallel programs, in particular Cilk programs, on a collection of processors of different speeds. We consider a model in which each processor maintains an estimate of its own speed, where communication between processors has a cost, and where all scheduling must be online. This problem has been considered previously in the fields of asynchronous parallel computing and scheduling theory. Our model is a bridge between the assumptions in these fields. We provide a new, more accurate analysis of an old scheduling algorithm called the maximum utilization scheduler. Based on this analysis, we generalize this scheduling policy and define the high utilization scheduler. We next focus on the Cilk platform and introduce a new algorithm for scheduling Cilk multithreaded parallel programs on heterogeneous processors. This scheduler is inspired by the high utilization scheduler and is modified to fit in a Cilk context. A crucial aspect of our algorithm is that it keeps the original spirit of the Cilk scheduler. In fact, when our new algorithm runs on homogeneous processors, it exactly mimics the dynamics of the original Cilk scheduler.
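One way to picture the utilization idea behind these schedulers is to hand ready work to the fastest idle processors first, using each processor's own speed estimate. This is purely illustrative; the paper's algorithms also account for communication cost and online estimation:

```python
def assign_fastest_first(ready_tasks, idle_speeds):
    """Keep the highest-throughput processors busy: assign ready tasks to
    the fastest idle processors first, based on per-processor speed
    estimates (hypothetical names, not the paper's API)."""
    fastest = sorted(idle_speeds, key=idle_speeds.get, reverse=True)
    return dict(zip(ready_tasks, fastest))

assignment = assign_fastest_first(["t1", "t2"],
                                  {"p1": 1.0, "p2": 2.5, "p3": 0.5})
# t1 goes to p2 (fastest), t2 to p1; p3 stays idle.
```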
16.
Parallel Computing, 2004, 30(5-6):567-583
Recent breakthroughs in the mathematical estimation of parallel genetic algorithm parameters are applied to the NP-complete problem of scheduling multiple tasks on a cluster of computers connected by a shared bus. Numerous adjustments to the original method of parameter estimation were made in order to accurately reflect differences in the problem model. The parallel scheduler used m-ary encoding and included a shared communication bus constraint. Fitness was an indirect computation requiring an evaluation of the meaning and implications (i.e., effect on communication time) of the encoding. The degree of correctness was defined as the “nearness” to the optimal schedule that could be obtained in a limited amount of time. Experiments reveal that the parallel scheduling algorithm developed very accurate schedules when the modified parameter guidelines were used. This article describes the scheduling problem, the parallel genetic scheduler, the adjustments made to the mathematical estimations, the quality of the schedules that were obtained, and the accuracy of the schedules compared to mathematically predicted expected values.
17.
Soumen Chakrabarti, James Demmel, Katherine Yelick, Journal of Parallel and Distributed Computing, 1997, 47(2):105
An increasing number of scientific programs exhibit two forms of parallelism, often in a nested fashion. At the outer level, the application comprises coarse-grained task parallelism, with dependencies between tasks reflected by an acyclic graph. At the inner level, each node of the graph is a data-parallel operation on arrays. Designers of languages, compilers, and runtime systems are building mechanisms to support such applications by providing processor groups and array remapping capabilities. In this paper we explore how to supplement these mechanisms with policy. What properties of an application, its data size, and the parallel machine determine the maximum potential gains from using both kinds of parallelism? It turns out that large gains can be expected only for specific task graph structures. For such applications, what are practical and effective ways to allocate processors to the nodes of the task graph? In principle one could solve the NP-complete problem of finding the best possible allocation of arbitrary processor subsets to nodes in the task graph. Instead of this, our analysis and simulations show that a simple switched scheduling paradigm, which alternates between pure task and pure data parallelism, provides nearly optimal performance for the task graphs considered here. Furthermore, our scheme is much simpler to implement, has less overhead than the optimal allocation, and would be attractive even if the optimal allocation was free to compute. To evaluate switching in real applications, we implemented a switching task scheduler in the parallel numerical library ScaLAPACK and used it in a nonsymmetric eigenvalue program. Even for fairly large input sizes, the efficiency improves by factors of 1.5 on the Intel Paragon and 2.5 on the IBM SP-2. The remapping and scheduling overhead is negligible, between 0.5 and 5%.
18.
Virtual machine technology, one of the key technologies of cloud computing, has received wide attention in recent years. However, the virtual machine monitor layer introduces a semantic gap that degrades the performance of real-time and concurrent applications running in virtual machines. This paper analyzes the Credit scheduling algorithm of the Xen virtual machine monitor and, addressing its weaknesses in concurrent and soft-real-time scheduling, proposes an improved scheduling algorithm and implements a scheduler prototype. The new algorithm pre-allocates Credit proportionally to soft-real-time virtual machines, adopts a dynamic time-slice mechanism, and schedules soft-real-time tasks periodically in a non-work-conserving manner, guaranteeing that the scheduling period satisfies the tasks' run periods. By distinguishing concurrent from non-concurrent soft-real-time virtual machines and applying different policies to each, it ensures that real-time tasks run smoothly while maintaining resource utilization. Test results show that the algorithm performs well on both concurrent and non-concurrent soft-real-time tasks and largely meets the scheduling needs of soft-real-time applications.
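The proportional pre-allocation step can be sketched as follows. This is a simplified illustration of the idea only; the names and the even split of leftover credits among best-effort VMs are assumptions, not the paper's exact policy:

```python
def preallocate_credits(total, rt_shares, best_effort_vms):
    """Soft-real-time VMs receive credits pre-allocated in proportion to
    their requested CPU share; the remainder is split evenly among the
    best-effort VMs (an assumed rule for this sketch)."""
    credits = {vm: int(total * share) for vm, share in rt_shares.items()}
    leftover = total - sum(credits.values())
    per_vm = leftover // max(len(best_effort_vms), 1)
    credits.update({vm: per_vm for vm in best_effort_vms})
    return credits

credits = preallocate_credits(1000, {"rt1": 0.2, "rt2": 0.1}, ["be1", "be2"])
# rt1: 200, rt2: 100, be1 and be2: 350 each.
```

Pre-allocating fixed shares to the soft-real-time VMs is what lets the scheduler guarantee their periodic CPU access regardless of best-effort load.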
19.
A neural network job-shop scheduler    (Total citations: 3; self-citations: 2; citations by others: 1)
Gary R. Weckman, Chandrasekhar V. Ganduri, David A. Koonce, Journal of Intelligent Manufacturing, 2008, 19(2):191-201
This paper focuses on the development of a neural network (NN) scheduler for scheduling job-shops. In this hybrid intelligent system, genetic algorithms (GA) are used to generate optimal schedules to a known benchmark problem. In each optimal solution, every individually scheduled operation of a job is treated as a decision which contains knowledge. Each decision is modeled as a function of a set of job characteristics (e.g., processing time), which are divided into classes using domain knowledge from common dispatching rules (e.g., shortest processing time). A NN is used to capture the predictive knowledge regarding the assignment of an operation's position in a sequence. The trained NN could successfully replicate the performance of the GA on the benchmark problem. The developed NN scheduler was then tested against the GA, the Attribute-Oriented Induction data mining methodology, and common dispatching rules on a test set of randomly generated problems. The better performance of the NN scheduler on the test problem set compared to the other methods proves the feasibility of NN-based scheduling. The scalability of the NN scheduler to larger problem sizes was also found to be satisfactory in replicating the performance of the GA.
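As a tiny illustration of the "common dispatching rules" from which the NN's feature classes are derived, the shortest-processing-time (SPT) rule simply orders queued jobs by processing time. This is the generic rule, not the paper's scheduler:

```python
def spt_sequence(processing_times):
    """Shortest Processing Time rule: sequence queued jobs in increasing
    order of their processing time."""
    return sorted(processing_times, key=processing_times.get)

sequence = spt_sequence({"j1": 5, "j2": 2, "j3": 8})
# j2 is dispatched first, then j1, then j3.
```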