Similar Documents
20 similar documents found.
1.
Energy usage and its associated costs have taken on a new level of significance in recent years. Globally, energy costs, including the cooling of server rooms, are now comparable to hardware costs, and they continue to rise with the price of energy. As a result, there are efforts worldwide to design more energy-efficient scheduling algorithms. Designing such scheduling algorithms for grids is further complicated by the fact that the different sites in a grid system are likely to have different ownerships. As such, it is not enough to simply minimize the total energy usage in the grid; instead, one needs to minimize energy usage across all the different providers in the grid simultaneously. Apart from the multitude of ownerships of the different sites, a grid differs from traditional high-performance computing systems in the heterogeneity of the computing nodes as well as of the communication links that connect the nodes together. In this paper, we propose a cooperative, power-aware, game-theoretic solution to the job scheduling problem in grids. We discuss our cooperative game model and present the structure of the Nash Bargaining Solution. The proposed scheduling scheme maintains a specified Quality of Service (QoS) level and minimizes energy usage across all providers simultaneously; energy usage is kept at a level just sufficient to maintain the desired QoS level. Further, the proposed algorithm is fair to all users and performs robustly against inaccuracies in performance prediction information.
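As background for the game-theoretic formulation mentioned in this abstract, a generic Nash Bargaining Solution can be stated as follows. This is the textbook form with placeholder symbols, not necessarily the exact objective used in the paper:

```latex
% Generic Nash Bargaining Solution over n providers (textbook form).
% u_i: provider i's utility under a cooperative schedule; d_i: its disagreement
% (non-cooperative) payoff; S: the feasible utility set implied by the QoS and
% energy constraints.
\[
\max_{(u_1,\dots,u_n) \in S} \; \prod_{i=1}^{n} \left( u_i - d_i \right)
\qquad \text{s.t.} \quad u_i \ge d_i \;\; \forall i
\]
```

Maximizing the product of utility gains over the disagreement points is what gives the bargaining solution its fairness property across providers.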

2.
This work presents a novel parallel micro evolutionary algorithm for scheduling tasks in distributed heterogeneous computing and grid environments. The scheduling problem in heterogeneous environments is NP-hard, so significant effort has been devoted to developing efficient methods that provide good schedules in reduced execution times. The parallel micro evolutionary algorithm is implemented using MALLBA, a general-purpose library for combinatorial optimization. Efficient numerical results are reported for the experimental analysis performed on both well-known problem instances and on large instances that model medium-sized grid environments. The comparative study of traditional methods and evolutionary algorithms shows that the parallel micro evolutionary algorithm achieves high problem-solving efficacy, outperforming results previously reported in the related literature, and also shows good scalability when facing high-dimension problem instances.
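A minimal sketch of a micro evolutionary (micro-GA) loop for task-to-machine assignment that minimizes makespan on heterogeneous machines. It is an illustrative toy under an assumed ETC matrix `etc[task][machine]` of estimated execution times, not the MALLBA-based implementation described in the abstract:

```python
import random

def makespan(assign, etc):
    """Completion time of the most loaded machine for a task -> machine assignment."""
    loads = [0.0] * len(etc[0])
    for task, machine in enumerate(assign):
        loads[machine] += etc[task][machine]
    return max(loads)

def micro_ga(etc, pop_size=5, restarts=40, gens_per_restart=25):
    """Tiny-population GA with periodic restarts around the elite (the 'micro' idea)."""
    n_tasks, n_machines = len(etc), len(etc[0])
    best = [random.randrange(n_machines) for _ in range(n_tasks)]
    for _ in range(restarts):
        pop = [best[:]] + [[random.randrange(n_machines) for _ in range(n_tasks)]
                           for _ in range(pop_size - 1)]
        for _ in range(gens_per_restart):
            pop.sort(key=lambda ind: makespan(ind, etc))
            children = [pop[0]]                              # keep the elite
            while len(children) < pop_size:
                a, b = random.sample(pop[:3], 2)             # mate among the best three
                cut = random.randrange(1, n_tasks)           # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                    # light mutation
                    child[random.randrange(n_tasks)] = random.randrange(n_machines)
                children.append(child)
            pop = children
        cand = min(pop, key=lambda ind: makespan(ind, etc))
        if makespan(cand, etc) < makespan(best, etc):
            best = cand[:]
    return best

# Example with an assumed 4-task x 3-machine ETC matrix of execution times.
print(micro_ga([[3, 1, 4], [2, 5, 1], [6, 2, 2], [1, 3, 5]]))
```

The repeated restarts around the current elite are what distinguish a micro-GA from a standard GA with a large population.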

3.
Due to the emergence of grid computing over the Internet, there is a need for a hybrid load balancing algorithm that takes into account the various characteristics of the grid computing environment. Hence, this research proposes a fault-tolerant hybrid load balancing strategy, AlgHybrid_LB, which takes into account grid architecture, computer heterogeneity, communication delay, network bandwidth, resource availability, resource unpredictability, and job characteristics. AlgHybrid_LB combines the strong points of neighbor-based and cluster-based load balancing algorithms. Our main objective is to arrive at job assignments that achieve minimum response time and optimal computing node utilization. Major achievements include the low complexity of the proposed approach and a drastic reduction in the number of additional communications induced by load balancing. A simulation of the proposed approach using the Grid Simulation Toolkit (GridSim) is conducted. Experimental results show that the proposed algorithm performs very well in a large grid environment.

4.
In this paper, we study and compare grid and global computing systems and outline the benefits of a hybrid system called DIRAC. To evaluate DIRAC scheduling for high-throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network, and each cluster is managed via a local scheduler shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we compare against a real batch system and obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and describes the behaviour of a large-scale system well. Thus we can study the scheduling of DIRAC in a high-throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralized approach in such a context.
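For the M/M/4 validation mentioned above, the analytical response time follows from the standard M/M/c (Erlang C) formulas. The sketch below computes it for assumed arrival and service rates; it illustrates the kind of closed-form baseline a simulator can be checked against, not the authors' actual parameters:

```python
from math import factorial

def mmc_response_time(lam, mu, c):
    """Mean response time W of an M/M/c queue (Erlang C); requires lam < c*mu."""
    rho = lam / (c * mu)                      # server utilization
    a = lam / mu                              # offered load in Erlangs
    # Probability that an arriving job has to wait (Erlang C formula).
    p0_inv = sum(a**k / factorial(k) for k in range(c)) + a**c / (factorial(c) * (1 - rho))
    p_wait = (a**c / (factorial(c) * (1 - rho))) / p0_inv
    wq = p_wait / (c * mu - lam)              # mean waiting time in queue
    return wq + 1.0 / mu                      # plus mean service time

# Example with assumed rates: 3 jobs/s arriving, each of 4 servers completes 1 job/s.
print(mmc_response_time(lam=3.0, mu=1.0, c=4))
```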

5.
With recent advances in computing and communication technologies making mobile devices more powerful, the scope of grid computing has broadened to include mobile and pervasive devices. Energy has become a critical resource in such devices, so battery energy limitation is the main challenge in enabling persistent mobile grid computing. In this paper, we address the problem of energy-constrained scheduling for the grid environment, where there is a limited energy budget for grid applications. The paper investigates both energy minimization for mobile devices and the grid utility optimization problem. We formalize energy-aware scheduling using nonlinear optimization theory under energy-budget and deadline constraints. The paper also proposes a distributed pricing-based algorithm that trades off energy and deadline to achieve a system-wide optimum based on the preferences of the grid user. The simulations reveal that the proposed energy-constrained scheduling algorithms obtain better performance than a previous approach that considers both energy consumption and deadline.
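A generic form of the kind of constrained utility maximization described above is written out below. The symbols (utility U_i, energy model e_i, budget E_max, deadlines D_i) are illustrative placeholders, not the paper's exact formulation:

```latex
% Illustrative energy-constrained scheduling program (placeholder notation,
% not the paper's exact model). x_i is the resource (e.g., CPU share or
% frequency) allocated to task i.
\[
\begin{aligned}
\max_{x_1,\dots,x_n} \quad & \sum_{i=1}^{n} U_i(x_i)               && \text{(aggregate grid utility)} \\
\text{s.t.} \quad          & \sum_{i=1}^{n} e_i(x_i) \le E_{\max}   && \text{(battery energy budget)} \\
                           & T_i(x_i) \le D_i \quad \forall i       && \text{(per-task deadlines)}
\end{aligned}
\]
```

One common way to obtain a distributed pricing scheme from such a program is dual decomposition: prices (Lagrange multipliers) on the shared energy and deadline constraints let each device solve its own subproblem. Whether this matches the paper's exact algorithm is an assumption on our part.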

6.
In grid computing, the grid users who submit applications and the resource providers who supply resources have different motivations when they join the grid. Application-centric scheduling aims to optimize the performance of individual applications, while resource-centric scheduling aims to optimize the resource utilization of the resource providers. Because both grid users and resource providers are autonomous, the objectives of application-centric and resource-centric scheduling often conflict. This paper proposes system-centric scheduling, which jointly optimizes the objectives of both the grid resources and the grid applications. Utility functions are used to express the objectives of grid resources and applications, so the system-centric scheduling policy can be formulated as a joint optimization of the utilities of grid applications and grid resources, combining the benefits of application-centric and resource-centric scheduling. Simulations are conducted to study the performance of the system-centric scheduling algorithm. The experimental results show that it yields significantly better performance than the application-centric and resource-centric scheduling algorithms.
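One simple way to write such a joint objective is as a weighted combination of application and resource utilities. This is a generic illustration with placeholder symbols, not necessarily the utility model used in the paper:

```latex
% Generic system-centric objective over a schedule S; A = applications, R = resources.
\[
\max_{S} \; \alpha \sum_{a \in A} U_{\mathrm{app}}(a, S)
        + (1-\alpha) \sum_{r \in R} U_{\mathrm{res}}(r, S),
\qquad 0 \le \alpha \le 1
\]
```

Here U_app might reward short completion times, U_res might reward high utilization, and the weight alpha balances the two sides.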

7.
We consider the problem of scheduling an application on a computing system consisting of heterogeneous processors and data repositories. The application consists of a large number of file-sharing, otherwise independent tasks. The files initially reside on the repositories. The processors and the repositories are connected through a heterogeneous interconnection network. Our aim is to assign the tasks to the processors, to schedule the file transfers from the repositories, and to schedule the execution of tasks on each processor in such a way that the turnaround time is minimized. We propose a heuristic composed of three phases: initial task assignment, task assignment refinement, and execution ordering. We experimentally compare the proposed heuristic with three well-known heuristics on a large number of problem instances. The proposed heuristic runs considerably faster than the existing heuristics and obtains 10–14% better turnaround times than the best of the three.
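A skeleton of the three-phase structure described above (initial assignment, refinement, execution ordering), written as a toy greedy heuristic. The cost model and the refinement move are simplifying assumptions for illustration; they are not the paper's actual heuristic:

```python
def schedule(tasks, procs, est_cost):
    """tasks: dict task -> set of required files; procs: list of processor ids;
    est_cost(task, proc, assign): assumed cost model giving the task's estimated
    run plus file-transfer time on proc, given the partial assignment so far."""
    # Phase 1: initial task assignment - greedily map each task to its cheapest processor.
    assign = {}
    for t in tasks:
        assign[t] = min(procs, key=lambda p: est_cost(t, p, assign))

    # Estimated turnaround time = load of the most loaded processor.
    def turnaround(a):
        return max(sum(est_cost(t, proc, a) for t, proc in a.items() if proc == q)
                   for q in procs)

    # Phase 2: task assignment refinement - keep single-task moves that reduce turnaround.
    for t in tasks:
        for p in procs:
            trial = dict(assign)
            trial[t] = p
            if turnaround(trial) < turnaround(assign):
                assign = trial

    # Phase 3: execution ordering - on each processor, order tasks by how many
    # files they need (a crude proxy for file-transfer readiness).
    order = {p: sorted((t for t in tasks if assign[t] == p), key=lambda t: len(tasks[t]))
             for p in procs}
    return assign, order
```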

8.
As the demand for high-performance computing from scientific research and commercial applications keeps growing, the performance and scale of high-performance computing systems have developed rapidly. However, sharply increasing power consumption severely constrains the design and use of high-performance computing systems, making low-power techniques a key technology in the field. As the core component of the whole system, the job scheduling system allocates user-submitted applications to the limited system resources, and its energy efficiency plays a crucial role in controlling and regulating the energy consumption of the entire high-performance computing system. This paper first introduces the main energy-efficiency techniques and commonly used job scheduling strategies, then analyzes the energy efficiency of current high-performance computing job scheduling, and discusses the challenges it faces and future research directions.

9.
Clouds are rapidly becoming an important platform for scientific applications. In a Cloud environment with countless nodes, resources are inevitably unreliable, which has a great effect on task execution and scheduling. In this paper, inspired by the Bayesian cognitive model and referring to trust relationship models from sociology, we first propose a novel Bayesian-method-based cognitive trust model, and then propose a trust dynamic level scheduling algorithm named Cloud-DLS by integrating it with the existing DLS algorithm. Moreover, a benchmark is structured to span a range of Cloud computing characteristics for evaluation of the proposed method. Theoretical analysis and simulations prove that the Cloud-DLS algorithm can efficiently meet the trust requirements of Cloud computing workloads, sacrificing little extra time while assuring the secure execution of tasks.
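A common way to build a Bayesian trust estimate is to keep a Beta-distributed belief over each node's reliability and update it from observed successes and failures. The sketch below illustrates that general idea with assumed names; it is not the paper's specific Cloud-DLS model:

```python
class BetaTrust:
    """Bayesian trust as a Beta(alpha, beta) belief over a node's reliability."""
    def __init__(self, alpha=1.0, beta=1.0):       # Beta(1,1) = uniform prior
        self.alpha, self.beta = alpha, beta

    def update(self, succeeded: bool):
        # Posterior update: each successful task execution raises alpha,
        # each failure raises beta.
        if succeeded:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def expected_trust(self) -> float:
        # Posterior mean of the node's probability of completing a task correctly.
        return self.alpha / (self.alpha + self.beta)

# Scheduling use: prefer the node whose expected trust is highest.
nodes = {"n1": BetaTrust(), "n2": BetaTrust()}
nodes["n1"].update(True); nodes["n2"].update(False)
best = max(nodes, key=lambda n: nodes[n].expected_trust())
```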

10.
Using the Internet, “public” computing grids can be assembled from “volunteered” PCs. To achieve this, volunteers download and install a software application capable of sensing periods of low local processor activity. During such times, this program on the local PC downloads and processes a subset of the project's data. At the completion of processing, the results are uploaded to the project and the cycle repeats.
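The cycle described above (wait for idle CPU, fetch work, compute, upload) can be sketched as a simple client loop. The work-unit functions here are hypothetical stand-ins for a project's real middleware (a BOINC-style client, for example), and the idle test uses the Unix-only load average:

```python
import os
import time

IDLE_THRESHOLD = 0.25   # assumed: treat <25% load per core as "idle"

def cpu_is_idle() -> bool:
    """1-minute load average normalized by core count (Unix-like systems only)."""
    load_1min, _, _ = os.getloadavg()
    return load_1min / os.cpu_count() < IDLE_THRESHOLD

# Hypothetical stand-ins for the project's real download/compute/upload machinery.
def fetch_work_unit():
    return {"data": list(range(1000))}

def process(work):
    return sum(work["data"])

def upload_result(result):
    print("uploading", result)

def volunteer_loop(cycles=3):
    done = 0
    while done < cycles:
        if cpu_is_idle():
            upload_result(process(fetch_work_unit()))   # fetch, compute, return results
            done += 1
        else:
            time.sleep(60)                              # back off while the owner uses the PC
```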

11.
A molecular docking web interface was developed to execute the AutoDock 3.05 molecular docking program in a Grid environment. The nature of the application, which allows a whole docking job to be broken up into multiple small independent tasks, makes it well suited to exploiting Grid computing. Using the web interface, the whole docking procedure can be automated from start to end. Automation includes the preparation of the target receptor, creation of the parameter files (gpf and dpf), calculation of grid energies, and docking of the molecules. Once a job is split into small tasks, the tasks are submitted to Globus GRAM, which dispatches them to the resources available in the Grid environment. The execution of the grid-enabled AutoDock 3.05 was tested, and the results show that molecular docking runs faster than when execution is performed on sequential computing resources.

12.
The exploitation of service-oriented technologies such as Grid computing is being boosted by the current service-oriented economy, leading to a growing need for Quality of Service (QoS) mechanisms. However, Grid computing was created to provide vast amounts of computational power only in a best-effort way. Providing QoS guarantees is therefore a very difficult and complex task due to the distributed and heterogeneous nature of Grid resources, especially volunteer computing resources (e.g., desktop resources). The scope of this paper is to provide integrated multi-QoS support suitable for Grid computing environments made of both dedicated and volunteer resources, even taking advantage of that mix. The QoS is provided through SLAs by exploiting the different available scheduling mechanisms in a coordinated way and applying appropriate resource usage optimization techniques. The approach is based on the differentiated use of reservations and scheduling-in-advance techniques, enhanced with rescheduling techniques that improve allocation decisions already made, achieving higher resource utilization while still ensuring the agreed QoS. As a result, our proposal enhances best-effort Grid environments with QoS-aware scheduling capabilities. This proposal has been validated by means of a set of experiments performed on a real Grid testbed. The results show how the proposed framework effectively harnesses the specific capabilities of the underlying resources to provide every user with the desired QoS level while, at the same time, optimizing resource usage.

13.
State-of-the-art assimilation techniques, such as 3D-Var, are relatively seldom used within climate analysis frameworks, partly because of their enormous numerical costs. To address this issue, ESA's high-performance computing Grid on-Demand (G-POD) is used. We assimilate Global Navigation Satellite System (GNSS) based radio occultations (RO). RO data in general exhibit favorable properties such as global coverage, all-weather capability, expected long-term stability, and accuracy. These properties, together with the continuity of data offered by the Meteorological Operational Satellite (MetOp) program and other RO missions, provide an ideal opportunity to study long-term atmospheric and climate variability. This paper investigates the assimilation of RO refractivity profiles into first-guess fields derived from 21 years of ECMWF's ERA40 dataset on a monthly-mean basis, divided into four synoptic time layers in order to take the diurnal cycle into account. In contrast to NWP systems, the assimilation procedure is applied without cycling, enabling us to run our 3D-Var implementation within G-POD in parallel for different time layers. Results indicate a significant analysis increment which is partly systematic, emphasizing the ability of RO data to add independent information to ECMWF analysis fields, with a potential to correct biases. This work lays the ground for further studies using data from existing instruments within a framework based on a mature methodology.
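For reference, the cost function minimized in a 3D-Var assimilation is shown below in its standard textbook form; the paper's specific background-error and observation-error covariances are of course its own:

```latex
% Standard 3D-Var cost function (textbook form).
\[
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}} \mathbf{B}^{-1} (\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf{T}} \mathbf{R}^{-1} \bigl(\mathbf{y}-H(\mathbf{x})\bigr)
\]
```

Here x_b is the background (first guess, here from ERA40), y the RO refractivity observations, H the observation operator, and B and R the background- and observation-error covariance matrices.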

14.
Grid computing is a widely adopted paradigm for federating geographically distributed data centers. Due to their size and complexity, grid systems are often affected by failures that may hinder the correct and timely execution of jobs, thus causing a non-negligible waste of computing resources. Despite the relevance of the problem, state-of-the-art management solutions for grid systems usually neglect the identification and handling of failures at runtime. Among the primary goals to be considered, we claim the need for novel approaches capable of integrating scalably with efficient monitoring solutions and of fitting large, geographically distributed systems, where dynamic and configurable tradeoffs between overhead and targeted granularity are necessary. This paper proposes GAMESH, a Grid Architecture for scalable Monitoring and Enhanced dependable job ScHeduling. GAMESH is conceived as a completely distributed and highly efficient management infrastructure, concentrating on two crucial aspects of large-scale and multi-domain grid environments: (i) the scalable dissemination of monitoring data and (ii) the troubleshooting of job execution failures. GAMESH has been implemented and tested in a real deployment encompassing geographically distributed data centers across Europe. Experimental results show that GAMESH (i) enables the collection of measurements of both computing resources and task-scheduling conditions at geographically sparse sites, while imposing a limited overhead on the entire infrastructure, and (ii) provides a failure-aware scheduler able to improve overall system performance, even in the presence of failures, by coordinating local job schedulers across multiple domains.

15.
Particle swarm optimization (PSO) is a bio-inspired optimization strategy founded on the movement of particles within swarms. PSO can be encoded in a few lines in most programming languages, it uses only elementary mathematical operations, and it is inexpensive in terms of memory and running time. This paper discusses the application of PSO to rule discovery in fuzzy classifier systems (FCSs) in place of the classical genetic approach, and proposes a new strategy, Knowledge Acquisition with Rules as Particles (KARP). In the KARP approach, every rule is encoded as a particle that moves in the search space so as to cooperate in obtaining high-quality rule bases, thereby improving the knowledge and performance of the FCS. The proposed swarm-based strategy is evaluated on a well-known problem of practical importance in which the integration of fuzzy systems is increasingly emerging due to the inherent uncertainty and dynamism of the environment: scheduling in grid distributed computational infrastructures. Simulation results are compared to those of classical genetic learning for fuzzy classifier systems, showing the greater accuracy and convergence speed of classifier discovery systems using KARP.
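For context, the canonical PSO velocity and position update that the abstract alludes to ("a few lines, elementary operations") looks like the sketch below. This is generic global-best PSO on a real-valued search space, not the KARP rule encoding itself, and the inertia and acceleration coefficients are typical assumed values:

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.72, c1=1.49, c2=1.49,
        lo=-1.0, hi=1.0):
    """Minimize `fitness` over [lo, hi]^dim with canonical global-best PSO."""
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                            # personal best positions
    gbest = min(pbest, key=fitness)[:]                   # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])   # cognitive pull
                           + c2 * r2 * (gbest[d] - x[i][d]))     # social pull
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            if fitness(x[i]) < fitness(pbest[i]):
                pbest[i] = x[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Example: minimize the sphere function in 5 dimensions.
print(pso(lambda p: sum(t * t for t in p), dim=5))
```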

16.
Fault-tolerant scheduling is an imperative step for large-scale computational Grid systems, in which geographically distributed nodes often cooperate to execute a task. By and large, the primary-backup approach is the common methodology for fault tolerance, wherein each task has a primary copy and a backup copy on two different processors. In this paper, we address the problem of how to schedule DAGs in Grids with communication delays so that service failures can be avoided in the presence of processor faults. The challenge is that, because tasks in a DAG depend on each other, each task must be scheduled so that it will still succeed when any of its predecessors fails due to a processor failure. We first propose a communication model and determine when communications between a backup and the backups of its successors are necessary. Then we determine when a backup can start and which processors are eligible for it, so as to guarantee that every DAG can complete upon any processor failure. We develop two algorithms to schedule backups, which minimize response time and replication cost, respectively. We also develop a suboptimal algorithm that targets minimizing replication cost without affecting response time. We conduct extensive simulation experiments to quantify the performance of the proposed algorithms.
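A stripped-down illustration of the primary-backup placement constraint mentioned above: each task gets a primary and a backup on two different processors, with the backup scheduled no earlier than the primary's expected finish time, so it only executes if the primary fails. The greedy earliest-finish cost model is an assumption for illustration, not the paper's algorithms:

```python
def place_primary_backup(tasks, procs, exec_time):
    """tasks: list in a topological order; exec_time[(task, proc)]: run time.
    Returns {task: (primary_proc, backup_proc, backup_start)}."""
    ready = {p: 0.0 for p in procs}          # when each processor becomes free
    plan = {}
    for t in tasks:
        # Primary: earliest-finish processor (greedy, illustrative only).
        prim = min(procs, key=lambda p: ready[p] + exec_time[(t, p)])
        prim_finish = ready[prim] + exec_time[(t, prim)]
        ready[prim] = prim_finish
        # Backup: best processor other than the primary's, started no earlier
        # than the primary's finish so it executes only on failure.
        back = min((p for p in procs if p != prim),
                   key=lambda p: max(ready[p], prim_finish) + exec_time[(t, p)])
        backup_start = max(ready[back], prim_finish)
        plan[t] = (prim, back, backup_start)
    return plan
```

Because a backup runs only when its primary fails, backups of different tasks can share the same processor time slot, which is what keeps replication cost low in primary-backup schemes.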

17.
Computational grids that couple geographically distributed resources such as PCs, workstations, clusters, and scientific instruments have emerged as a next-generation computing platform for solving large-scale problems in science, engineering, and commerce. However, application development, resource management, and scheduling in these environments continue to be a complex undertaking. In this article, we discuss our efforts in developing a resource management system for scheduling computations on resources distributed across the world with varying quality of service (QoS). Our service-oriented grid computing system, called Nimrod-G, manages all operations associated with remote execution, including resource discovery, trading, and scheduling based on economic principles and a user-defined QoS requirement. The Nimrod-G resource broker is implemented by leveraging existing technologies such as Globus, and provides new services that are essential for constructing industrial-strength grids. We present the results of experiments using the Nimrod-G resource broker for scheduling parametric computations on World Wide Grid (WWG) resources that span five continents.

18.
In this work, we first present a grid resource discovery protocol that discovers computing resources without the need for resource brokers to track existing resource providers. The protocol uses a scoring mechanism to aggregate and rank resource provider assets, and Internet-router-style data tables (called grid routing tables) for storage and retrieval of those assets. We then discuss the simulation framework used to model the protocol and the results of the experimentation. The simulator is built around a simulation engine core that can be reused by other network protocol simulators, covering time management, event distribution, and a simulated network infrastructure. The techniques for constructing the simulation core code using C++/CLR are also presented in this paper.
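To make the scoring idea concrete, the sketch below ranks resource providers by a weighted score over a few advertised attributes. The attribute names and weights are invented for illustration; the abstract does not specify the actual scoring function:

```python
# Hypothetical provider attributes: free CPU cores, free memory (GB), link bandwidth (Mb/s).
WEIGHTS = {"cpu_free": 0.5, "mem_free_gb": 0.3, "bandwidth_mbps": 0.2}   # assumed weights

def score(provider: dict) -> float:
    """Aggregate a provider's advertised assets into a single comparable score."""
    return sum(WEIGHTS[k] * provider.get(k, 0.0) for k in WEIGHTS)

def rank(providers: dict) -> list:
    """Return provider ids ordered best-first, as a grid routing table entry might be."""
    return sorted(providers, key=lambda pid: score(providers[pid]), reverse=True)

providers = {
    "siteA": {"cpu_free": 64, "mem_free_gb": 256, "bandwidth_mbps": 1000},
    "siteB": {"cpu_free": 128, "mem_free_gb": 64, "bandwidth_mbps": 100},
}
print(rank(providers))
```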

19.
Traditional distributed filesystem technologies designed for local and campus area networks do not adapt well to wide-area Grid computing environments. To address this problem, we have designed the Chirp distributed filesystem, which is built from the ground up to meet the needs of Grid computing. Chirp is easily deployed without special privileges, and it provides strong and flexible security mechanisms, tunable consistency semantics, and clustering to increase capacity and throughput. We demonstrate that many of these features also provide order-of-magnitude performance increases over wide-area networks. We describe three applications in bioinformatics, biometrics, and gamma-ray physics that each employ Chirp to attack large-scale data-intensive problems.

20.
Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily workflow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in workflows similar to those of conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.
