Found 20 similar documents; search took 125 ms
1.
A PSO-based multi-constraint QoS Grid resource selection model    Total citations: 2 (self: 0, others: 2)
Existing Grid resource selection algorithms consider only resource availability and ignore the influence of the network. To address this, a three-layer resource selection model with network QoS constraints, based on particle swarm optimization (PSO), is proposed, and an algorithm for the model is designed. The model jointly considers resource utilization and network factors when selecting Grid resources, filtering out nodes that have high availability but very poor, or even no, network connectivity, thereby lightening the burden on resource scheduling. A simulation example is given to illustrate the effectiveness of the model and its algorithm.
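The filtering idea in the entry above can be sketched in miniature. The snippet below is an illustrative assumption, not the paper's model: the node fields (`reachable`, `bandwidth`, `utilization`, `bandwidth_norm`), the threshold, and the equal weights are invented for the sketch, and a real implementation would then run a PSO search over the nodes that survive the filter.

```python
def filter_nodes(nodes, min_bandwidth):
    # Pre-filter: drop nodes that are unreachable or whose network QoS is
    # below the threshold, before any swarm search is run.
    return [n for n in nodes
            if n["reachable"] and n["bandwidth"] >= min_bandwidth]

def fitness(node, w_util=0.5, w_net=0.5):
    # Composite score balancing spare capacity and normalised bandwidth;
    # higher is better. The weights are arbitrary illustrative values.
    return w_util * (1.0 - node["utilization"]) + w_net * node["bandwidth_norm"]
```

A particle in the PSO layer would then encode a candidate assignment over the filtered nodes and be scored by such a fitness function.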
2.
Research on machine selection algorithms for Grid computing    Total citations: 7 (self: 0, others: 7)
In network-based scientific and parallel computing environments, computing resources are highly distributed, heterogeneous, and dynamic. When an application is submitted to a Grid computing environment, a subset of all available computing resources must be selected to support its execution. Complex applications are typically heterogeneous in several respects, and applications of different kinds run best on different architectures. Based on dynamic monitoring and analysis of the available Grid resources, this paper uses fuzzy clustering to select different sets of computing nodes for different applications according to their performance requirements. All available nodes are partitioned into logical groups, each called a logical cluster. For each class of application, a λ-cut matrix is used to assign one or more logical clusters with larger cluster-center values to cooperate in application scheduling. Experiments show that selecting machines by application type markedly improves performance: communication-intensive applications perform better when scheduled onto logical clusters with good internal communication, and computation-intensive applications improve markedly when scheduled onto logical clusters with strong computing power.
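The λ-cut step described above can be illustrated with a small sketch. This is a generic fuzzy-clustering fragment under assumed inputs (a symmetric similarity matrix with entries in [0, 1]), not the paper's full method:

```python
def lambda_cut(similarity, lam):
    # Boolean λ-cut of a fuzzy similarity matrix: 1 where membership >= λ.
    return [[1 if v >= lam else 0 for v in row] for row in similarity]

def clusters_from_cut(cut):
    # Nodes whose rows in the cut matrix are identical fall into the
    # same logical cluster.
    groups = {}
    for i, row in enumerate(cut):
        groups.setdefault(tuple(row), []).append(i)
    return list(groups.values())
```

Raising λ makes the cut stricter, splitting the node set into smaller, more homogeneous logical clusters.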
3.
A relaxation-labeling-based task scheduling algorithm (RLBTS) is proposed that maps tasks onto heterogeneous resources (processors with differing computing power and links with differing communication capacity). Relaxation labeling handles large numbers of constraints well; its core idea is that the label assigned to a node is typically influenced by certain attributes of the node's neighbors. Using these neighbor constraints, irrelevant factors can be progressively ruled out, rapidly shrinking the search space. The algorithm balances the computation and communication demands of task execution, and experimental results show that RLBTS is an effective scheduler both for tasks with high communication and computation demands and for communication-intensive tasks.
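The neighbour-driven label update at the heart of relaxation labeling can be sketched as follows. This is a textbook-style iteration under assumed data structures (per-node label probability vectors and a label compatibility matrix), not the RLBTS algorithm itself:

```python
def relax_step(probs, neighbors, compat):
    # One relaxation-labeling sweep: each node's label probabilities are
    # reweighted by the support received from its neighbours' labels,
    # then renormalised. compat[i][j] scores label i next to label j.
    new = {}
    for node, p in probs.items():
        support = []
        for i in range(len(p)):
            s = sum(compat[i][j] * q
                    for nb in neighbors[node]
                    for j, q in enumerate(probs[nb]))
            support.append(p[i] * (1.0 + s))
        total = sum(support)
        new[node] = [v / total for v in support]
    return new
```

Repeating the step lets mutually compatible assignments reinforce each other while incompatible ones fade, which is how the search space shrinks.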
4.
5.
To address the fact that traditional resource discovery mechanisms do not adapt well to Grid resource environments, this work proposes, on the basis of effectively organizing Grid resources into virtual organizations, principles and an algorithm for selecting contact nodes within a resource virtual organization; the algorithm chooses management nodes from among contact nodes sharing the same resource type. Contact nodes and management nodes are connected in a bipartite-graph-like fashion, forming a Grid resource discovery model built on a dynamic, self-organizing overlay architecture. The model is evaluated in terms of discovery capability, discovery efficiency, and system scalability; the results show that it suits the characteristics of Grid resources and can effectively improve resource discovery performance in large-scale Grid systems. Simulation experiments verify the model's effectiveness.
6.
Effectively scheduling workflow applications in a highly dynamic Grid environment while meeting users' QoS requirements is a hard problem. Traditional heuristic scheduling algorithms and reservation strategies based on static resource characteristics lack an effective assessment of resources' dynamic service capacity and therefore cannot guarantee a workflow application's deadline constraint. This paper models the dynamic performance of Grid resources with a stochastic service model that accounts for failures of processing units within a resource. A birth-death process describes how the number of processing units in a resource node varies over time, yielding a method for evaluating the node's reliability within a task's deadline. On this basis, a reliability-enhanced Grid workflow scheduling algorithm, RSA_TC, is proposed. Experimental results show that, compared with the DSESAW and PFAS algorithms, RSA_TC effectively guarantees user deadlines and adapts well to dynamic Grid environments.
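The birth-death modelling mentioned above can be illustrated with the stationary distribution of a finite birth-death chain. This generic fragment is an assumption for illustration and is not the paper's RSA_TC reliability estimate:

```python
def stationary(birth, death):
    # Stationary distribution of a finite birth-death chain over states
    # 0..n (e.g. the number of working processing units in a node).
    # birth[k] is the rate k -> k+1 (a unit coming back online);
    # death[k] is the rate k+1 -> k (a unit failing).
    weights = [1.0]
    for b, d in zip(birth, death):
        weights.append(weights[-1] * b / d)
    total = sum(weights)
    return [w / total for w in weights]
```

From such a distribution one can read off the long-run probability that enough units are alive to finish a task before its deadline.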
7.
Jiang Jing 《计算机工程与应用》 (Computer Engineering and Applications) 2011,47(4):84-86
Resource management and scheduling in Grid environments is a complex and challenging problem. In data-intensive applications, the latency of reading data files is critical. This paper proposes a clustering-based replication algorithm (CBR) for data files, which uses clustering to group Grid nodes whose interconnect bandwidth satisfies a given condition into a "logical region". It also introduces an improved LRU algorithm that takes into account the data-file requests of other computing tasks, avoiding the deletion of files that will be needed in the future. Experiments show that the algorithm achieves better task completion times than the two algorithms it is compared against.
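The improved LRU idea, keeping files that other pending tasks still need, can be sketched directly. Names and structures here are assumptions for illustration, not the CBR paper's code:

```python
from collections import OrderedDict

def evict(cache, pending_requests):
    # LRU variant: evict the least-recently-used file that no pending
    # task still needs; fall back to plain LRU if every cached file
    # appears in some pending request.
    for name in cache:  # OrderedDict iterates oldest-first
        if name not in pending_requests:
            del cache[name]
            return name
    name, _ = cache.popitem(last=False)
    return name
```

With `cache` kept in access order (oldest first), the loop skips over files that future tasks have already requested, which is exactly the deletion the plain LRU would have regretted.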
8.
9.
10.
11.
As the scale of computation keeps growing, so do the performance demands placed on parallel-computing scheduling. Grid technology addresses the sharing, heterogeneity, scalability, robustness, and security of computing resources, but it also brings new challenges in resource scheduling. HITGRID is a Grid scheduling middleware built on the Globus toolkit platform: it accepts computing tasks submitted by users and, according to the users' resource requirements, finds qualifying resources in the Grid environment and schedules onto them. The paper discusses how NWS prediction results are wrapped as standard Grid resource information, presents a performance model for resource scheduling, and gives the resource selection strategy used during scheduling. Finally, an example computing a large-scale N-body problem shows that this work is effective.
12.
Anastasios Gounaris Rizos Sakellariou Norman W. Paton Alvaro A. A. Fernandes 《Distributed and Parallel Databases》2006,19(2-3):87-106
Advances in network technologies and the emergence of Grid computing have both increased the need and provided the infrastructure for computation and data intensive applications to run over collections of heterogeneous and autonomous nodes. In the context of database query processing, existing parallelisation techniques cannot operate well in Grid environments because the way they select machines and allocate tasks compromises partitioned parallelism. The main contribution of this paper is the proposal of a low-complexity, practical resource selection and scheduling algorithm that enables queries to employ partitioned parallelism, in order to achieve better performance in a Grid setting. The evaluation results show that the scheduler proposed outperforms current techniques without sacrificing the efficiency of resource utilisation.
Recommended by: Ioannis Vlahavas
13.
Richard McClatchey Ashiq Anjum Heinz Stockinger Arshad Ali Ian Willers Michael Thomas 《Journal of Grid Computing》2007,5(1):43-64
In Grids scheduling decisions are often made on the basis of jobs being either data or computation intensive: in data intensive situations jobs may be pushed to the data and in computation intensive situations data may be pulled to the jobs. This kind of scheduling, in which there is no consideration of network characteristics, can lead to performance degradation in a Grid environment and may result in large processing queues and job execution delays due to site overloads. In this paper we describe a Data Intensive and Network Aware (DIANA) meta-scheduling approach, which takes into account data, processing power and network characteristics when making scheduling decisions across multiple sites. Through a practical implementation on a Grid testbed, we demonstrate that queue and execution times of data-intensive jobs can be significantly improved when we introduce our proposed DIANA scheduler. The basic scheduling decisions are dictated by a weighting factor for each potential target location which is a calculated function of network characteristics, processing cycles and data location and size. The job scheduler provides a global ranking of the computing resources and then selects an optimal one on the basis of this overall access and execution cost. The DIANA approach considers the Grid as a combination of active network elements and takes network characteristics as a first class criterion in the scheduling decision matrix along with computations and data. The scheduler can then make informed decisions by taking into account the changing state of the network, locality and size of the data and the pool of available processing cycles.
14.
Multi-criteria and satisfaction oriented scheduling for hybrid distributed computing infrastructures
Assembling and simultaneously using different types of distributed computing infrastructures (DCIs) such as Grids and Clouds is an increasingly common situation. Because infrastructures are characterized by different attributes such as price, performance, trust, and greenness, the task scheduling problem becomes more complex and challenging. In this paper we present the design of a fault-tolerant and trust-aware scheduler that executes Bag-of-Tasks applications on elastic, hybrid DCIs following user-defined scheduling strategies. Our approach, named the Promethee scheduler, combines a pull-based scheduler with the multi-criteria Promethee decision-making algorithm. Because multi-criteria scheduling multiplies the number of possible scheduling strategies, we propose SOFT, a methodology for finding the optimal scheduling strategies given a set of application requirements. The method is validated with a simulator that fully implements the Promethee scheduler and recreates a hybrid DCI environment including an Internet Desktop Grid, a Cloud and a Best-Effort Grid, based on real failure traces. A set of experiments shows that the Promethee scheduler maximizes user satisfaction expressed according to three distinct criteria (price, expected completion time and trust) while maximizing useful infrastructure utilization from the resource owner's point of view. Finally, we present an optimization that bounds the computation time of the Promethee algorithm, making it realistic to integrate the scheduler into a wide range of resource management software.
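The PROMETHEE ranking underlying the scheduler above can be illustrated with a minimal net-flow computation. This sketch uses the "usual" (0/1) preference function and invented weights; the paper's scheduler wraps considerably more machinery around this core:

```python
def net_flows(alternatives, weights, maximize):
    # PROMETHEE II net outranking flows. Each alternative is a tuple of
    # criterion values; maximize[k] says whether criterion k is a benefit.
    n = len(alternatives)

    def pref(a, b):
        # Weighted "usual" preference: full weight when strictly better.
        s = 0.0
        for k, w in enumerate(weights):
            diff = a[k] - b[k] if maximize[k] else b[k] - a[k]
            if diff > 0:
                s += w
        return s

    return [sum(pref(alternatives[i], alternatives[j]) -
                pref(alternatives[j], alternatives[i])
                for j in range(n) if j != i) / (n - 1)
            for i in range(n)]
```

Ranking resources (or strategies) by descending net flow yields the multi-criteria order against which tasks are pulled.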
15.
Valliyammai Chinnaiah, Thamarai Selvi Somasundaram 《Future Generation Computer Systems》2012,28(3):491-499
To achieve high-performance distributed data access and computing in a Grid environment, monitoring of resource and network performance is vital. The proposed Grid network monitoring architecture is driven by the Grid scheduler and retrieves network metrics using sensors as monitoring tools. Mobile agents migrate from the Resource Broker to start the sensors that measure network metrics at every Grid resource. The raw data provided by the monitoring tools is used to produce a high-level view of the Grid through a set of internal cost functions. The network cost function combines various network metrics such as bandwidth, round-trip time, jitter and packet loss to measure network performance. This paper presents a Grid resource brokering strategy that analyzes the network metrics together with the resource metrics when selecting the Grid resource to which a job is submitted; the proposed approach is integrated with the CARE Resource Broker (CRB) for job submission. The experimental results show a reduction in completion time for the submitted jobs, and the simulation results also show that more jobs are completed with the proposed strategy, leading to better utilization of the Grid resources.
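A network cost function of the kind described, folding bandwidth, RTT, jitter and loss into one score, might look like the following. The weights and normalisation constants are invented for the sketch and are not values from the CRB integration:

```python
def network_cost(bandwidth_mbps, rtt_ms, jitter_ms, loss_pct,
                 w=(0.4, 0.3, 0.15, 0.15)):
    # Lower cost is better. Bandwidth enters inversely (more bandwidth,
    # lower cost); the divisors roughly normalise each metric's scale.
    return (w[0] * (1.0 / max(bandwidth_mbps, 1e-9))
            + w[1] * rtt_ms / 100.0
            + w[2] * jitter_ms / 10.0
            + w[3] * loss_pct / 100.0)
```

The broker would compute this per candidate resource and combine it with resource metrics before choosing a submission target.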
16.
Kenli Li Zhao Tong Dan Liu Teklay Tesfazghi Xiangke Liao 《Frontiers of Computer Science in China》2011,5(4):513-525
Grid computing is the combination of computer resources in a loosely coupled, heterogeneous, and geographically dispersed environment. Grid data are the data used in grid computing, which consists of large-scale data-intensive applications, producing and consuming huge amounts of data, distributed across a large number of machines. Data grid computing composes sets of independent tasks each of which require massive distributed data sets that may each be replicated on different resources. To reduce the completion time of the application and improve the performance of the grid, appropriate computing resources should be selected to execute the tasks and appropriate storage resources selected to serve the files required by the tasks. So the problem can be broken into two sub-problems: selection of storage resources and assignment of tasks to computing resources. This paper proposes a scheduler, which is broken into three parts that can run in parallel and uses both parallel tabu search and a parallel genetic algorithm. Finally, the proposed algorithm is evaluated by comparing it with other related algorithms, which target minimizing makespan. Simulation results show that the proposed approach can be a good choice for scheduling large data grid applications.
17.
Xiaoyong Tang Kenli Li Meikang Qiu Edwin H.-M. Sha 《Journal of Parallel and Distributed Computing》2012
In a Grid computing system, many distributed scientific and engineering applications often require multi-institutional collaboration, large-scale resource sharing, wide-area communication, etc. Applications executing in such systems inevitably encounter different types of failures such as hardware failure, program failure, and storage failure. One way of taking failures into account is to employ a reliable scheduling algorithm. However, most existing Grid scheduling algorithms do not adequately consider the reliability requirements of an application. In recognition of this problem, we design a hierarchical reliability-driven scheduling architecture that includes both a local scheduler and a global scheduler. The local scheduler aims to effectively measure task reliability of an application in a Grid virtual node and incorporate the precedence constrained tasks’ reliability overhead into a heuristic scheduling algorithm. In the global scheduler, we propose a hierarchical reliability-driven scheduling algorithm based on quantitative evaluation of independent application reliability. Our experiments, based on both randomly generated graphs and the graphs of some real applications, show that our hierarchical scheduling algorithm performs much better than the existing scheduling algorithms in terms of system reliability, schedule length, and speedup.
18.
The chief obstacle to the Grid becoming prevalent is the difficulty of using, configuring and maintaining it, which demands excessive IT knowledge, workload, and human intervention. At the same time, inter-operation amongst Grids is under way. As the core of Grid systems, resource management must be autonomic and inter-operable to be sustainable for future Grid computing. For this purpose, we introduce HOURS, a reputation-driven economic framework for Grid resource management. HOURS is designed to tackle automatic rescheduling, self-protection, incentives, heterogeneous resource sharing, reservation, and SLAs in Grid computing. In this paper, we focus on designing a reputation-based resource scheduler and use emulation to test its performance with real job traces and node-failure traces. To describe the HOURS framework completely, a preliminary multiple-currency-based economic model is also introduced, into which future extensions and improvements can easily be integrated. The results demonstrate that our scheduler reduces the job failure rate significantly, and that the average number of job resubmissions, the most important metric in this paper since it affects system performance and resource utilization from the users' perspective, drops from 3.82 to 0.70 compared with simple sequential resource selection.
19.
Grid scheduling algorithms are usually implemented in a simulation environment, using tools that hide the complexity of the Grid and assumptions that are not always realistic. In our work, we describe the steps followed, the difficulties encountered and the solutions provided to develop and evaluate a scheduling policy, initially implemented in a simulation environment, in the gLite Grid middleware. Our focus is on a scheduling algorithm that allocates the available resources fairly among the requesting users or jobs. During the actual implementation of this algorithm in gLite, we observed that the validity of the information used by the scheduler for its decisions greatly affects its performance. To improve the accuracy of this information, we developed an internal feedback mechanism that operates alongside the scheduling algorithm. Also, a Grid computation resource cannot be shared concurrently between different users or jobs, making it difficult to provide actual fairness; for this reason we investigated the use of virtualization technology in the gLite middleware. We did a proof-of-concept implementation and performed an experimental evaluation of our scheduling algorithm in a small gLite testbed that proves the validity and applicability of our solutions. Copyright © 2012 John Wiley & Sons, Ltd.
20.
Multiple-context processors provide register resources that allow rapid context switching between several threads as a means of tolerating long communication and synchronization latencies. When scheduling threads on such a processor, we must first decide which threads should have their state loaded into the multiple contexts, and second, which loaded thread is to execute instructions at any given time. In this paper we show that both decisions are important, and that incorrect choices can lead to serious performance degradation. We propose thread prioritization as a means of guiding both levels of scheduling. Each thread has a priority that can change dynamically, and that the scheduler uses to allocate as many computation resources as possible to critical threads. We briefly describe its implementation, and we show simulation performance results for a number of simple benchmarks in which synchronization performance is critical.
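The second-level decision above, which loaded thread runs next, reduces to a priority pick among runnable threads. This fragment is an illustrative assumption about the data layout, not the paper's hardware scheduler:

```python
def pick_thread(loaded_threads):
    # Among threads resident in hardware contexts, run the highest-priority
    # one that is runnable (i.e. not blocked on communication or sync).
    runnable = [t for t in loaded_threads if t["runnable"]]
    return max(runnable, key=lambda t: t["priority"], default=None)
```

Because priorities change dynamically, the pick is re-evaluated whenever a thread blocks or a priority update arrives.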