Similar Documents
20 similar documents found.
1.
The paper describes the results of the EU FP7 EDGI project concerning how to extend gLite VOs with public and institutional BOINC desktop grids. Beyond simply showing the integration architecture components and services, the main emphasis is on how this integrated architecture can efficiently support parameter study applications, based on the so-called metajob concept created by the EDGI project. The paper explains in detail how gLite users can use the metajob concept to exploit the BOINC desktop grids connected to the gLite VO, as well as how metajobs are managed internally by the 3G Bridge service. Performance measurements show that the metajob concept can indeed significantly improve the performance of gLite VOs extended with desktop grids. Finally, the paper describes the practical ways of connecting BOINC desktop grids to gLite VOs and the accounting mechanism in these integrated grid systems.
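To give a feel for what a metajob provides, the sketch below expands a single parameter-sweep description into individual job specifications on the submitter side. It is a minimal Python illustration; the field names and the expansion format are invented for the example and do not follow the actual EDGI metajob syntax or the 3G Bridge interface.

```python
from itertools import product

# Hypothetical parameter-sweep expansion: one "metajob" record is turned into
# one job specification per parameter combination, which is the kind of fan-out
# a bridge service can then feed to a BOINC desktop grid.  Field names are
# invented for this example and do not follow the EDGI metajob format.
metajob = {
    "executable": "simulate",
    "parameters": {"temperature": [280, 300, 320], "pressure": [1.0, 2.0]},
}

def expand(meta):
    names = sorted(meta["parameters"])
    for values in product(*(meta["parameters"][n] for n in names)):
        args = " ".join(f"--{n}={v}" for n, v in zip(names, values))
        yield {"executable": meta["executable"], "arguments": args}

if __name__ == "__main__":
    for job in expand(metajob):
        print(job)      # 6 jobs, one per (temperature, pressure) combination
```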

2.
Service Grids like the EGEE Grid cannot provide the required number of resources for many VOs. Therefore, extending the capacity of these VOs with volunteer or institutional desktop Grids would significantly increase the number of accessible computing resources, which can be exploited particularly advantageously in the case of parameter sweep applications. This objective has been achieved by the EDGeS project, which built a production infrastructure enabling the extension of gLite VOs with several volunteer and institutional desktop Grids. The paper describes the technical solution for integrating service Grids and desktop Grids, and the actual EDGeS production infrastructure. The main objectives and current achievements of the follow-up EDGI project are also described, showing how the existing EDGeS infrastructure can be further extended with clouds.

3.
Due to their inherent limitations in computational and battery power, storage and available bandwidth, mobile devices have not yet been widely integrated into grid computing platforms. However, millions of laptops, PDAs and other portable devices remain unused most of the time, and this huge repository of resources can potentially be utilized, leading to what is called a mobile grid environment. In this paper, we propose a game-theoretic pricing strategy for efficient job allocation in mobile grids. By drawing upon the Nash bargaining solution, we show how to derive a unified framework for addressing such issues as network efficiency, fairness, utility maximization, and pricing. In particular, we characterize a two-player, non-cooperative, alternating-offer bargaining game between the Wireless Access Point Server and the mobile devices to determine a fair pricing strategy, which is then used to effectively allocate jobs to the mobile devices with the goal of maximizing the revenue for the grid users. Simulation results show that the proposed job allocation strategy is comparable to other task allocation schemes in terms of the overall system response time.
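The Nash bargaining solution mentioned above picks the outcome that maximizes the product of both players' gains over their disagreement points. The Python sketch below illustrates that idea for a single job price; the utility functions and numbers are assumptions made for the example, not the paper's model.

```python
# Minimal Nash-bargaining illustration: pick the job price that maximizes the
# product of both players' utility gains over their disagreement points.
# All utilities and numbers are illustrative assumptions, not the paper's model.

def nash_bargaining_price(job_value, device_cost, prices):
    """Return the price maximizing (server gain) * (device gain)."""
    best_price, best_product = None, -1.0
    for p in prices:
        server_gain = job_value - p      # server pays p, keeps the rest of the job's value
        device_gain = p - device_cost    # device receives p, spends energy/CPU worth device_cost
        if server_gain <= 0 or device_gain <= 0:
            continue                     # outside the bargaining set
        product = server_gain * device_gain
        if product > best_product:
            best_price, best_product = p, product
    return best_price

if __name__ == "__main__":
    candidate_prices = [c / 100.0 for c in range(0, 1001)]   # 0.00 .. 10.00
    price = nash_bargaining_price(job_value=8.0, device_cost=2.0, prices=candidate_prices)
    print(f"agreed price: {price}")      # the surplus is split evenly: price 5.0
```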

4.
Grid scheduling algorithms are usually implemented in a simulation environment, using tools that hide the complexity of the Grid and assumptions that are not always realistic. In our work, we describe the steps followed, the difficulties encountered and the solutions provided to develop and evaluate a scheduling policy, initially implemented in a simulation environment, in the gLite Grid middleware. Our focus is on a scheduling algorithm that allocates the available resources in a fair way among the requesting users or jobs. During the actual implementation of this algorithm in gLite, we observed that the validity of the information used by the scheduler for its decisions greatly affects its performance. To improve the accuracy of this information, we developed an internal feedback mechanism that operates along with the scheduling algorithm. Also, a Grid computational resource cannot be shared concurrently between different users or jobs, making it difficult to provide actual fairness. For this reason we investigated the use of virtualization technology in the gLite middleware. We did a proof-of-concept implementation and performed an experimental evaluation of our scheduling algorithm in a small gLite testbed that proves the validity and applicability of our solutions. Copyright © 2012 John Wiley & Sons, Ltd.
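As a loose illustration of fair-share selection (not the exact policy evaluated in the paper), the sketch below picks the next user to serve as the one whose consumed CPU time lags furthest behind their entitled share; the feedback mechanism described in the abstract would keep the consumption figures accurate by using measured rather than estimated values.

```python
# Hypothetical fair-share selection: serve the user whose actual consumption
# lags most behind their entitled fraction of the CPU time delivered so far.
def pick_next_user(entitled_share, consumed, pending):
    """entitled_share: user -> fraction (sums to 1); consumed: user -> CPU-seconds used;
    pending: user -> number of queued jobs."""
    total = sum(consumed.values()) or 1.0
    candidates = [u for u in entitled_share if pending.get(u, 0) > 0]
    # deficit = entitled fraction minus fraction actually received so far
    return max(candidates, key=lambda u: entitled_share[u] - consumed.get(u, 0.0) / total)

if __name__ == "__main__":
    shares = {"alice": 0.5, "bob": 0.3, "carol": 0.2}
    used = {"alice": 900.0, "bob": 50.0, "carol": 50.0}
    queued = {"alice": 4, "bob": 2, "carol": 1}
    print(pick_next_user(shares, used, queued))   # bob: largest share deficit
```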

5.
The EGEE Grid offers the necessary infrastructure and resources for reducing the running time of particle-tracking Monte-Carlo applications like GATE. However, effort is required to achieve reliable and efficient execution and to provide execution frameworks to end users. This paper presents results obtained by porting the GATE software to the EGEE Grid, our ultimate goal being to provide reliable, user-friendly and fast execution of GATE to radiation therapy researchers. To address these requirements, we propose a new parallelization scheme based on dynamic partitioning and its implementation in two different frameworks using pilot jobs and workflows. Results show that pilot jobs bring a strong improvement over regular gLite submission, that the proposed dynamic partitioning algorithm further reduces execution time by a factor of two, and that the genericity and user-friendliness offered by the workflow implementation do not introduce significant overhead.
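A schematic of the dynamic-partitioning idea in Python: instead of splitting the Monte-Carlo events statically among workers, pilots repeatedly pull small batches from a shared pool until the requested total has been simulated, so faster pilots naturally take more work. The batch size, pilot speeds and the in-process queue are illustrative stand-ins, not the paper's implementation.

```python
import threading, queue, random, time

# Schematic dynamic partitioning: pilots pull small event batches from a shared
# work pool until the total number of Monte-Carlo events has been simulated.
# Batch size, speeds and the in-process queue are illustrative only.
TOTAL_EVENTS, BATCH = 1_000_000, 50_000
pool = queue.Queue()
for start in range(0, TOTAL_EVENTS, BATCH):
    pool.put((start, min(BATCH, TOTAL_EVENTS - start)))

done = []

def pilot(name, speed):
    while True:
        try:
            start, n = pool.get_nowait()
        except queue.Empty:
            return
        time.sleep(n / speed / 1e6)            # stand-in for actually tracking n particles
        done.append((name, n))

threads = [threading.Thread(target=pilot, args=(f"pilot{i}", random.uniform(0.5, 2.0)))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sum(n for _, n in done), "events simulated by", len(threads), "pilots")
```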

6.
The Grid paradigm for accessing heterogeneous distributed resources has proved to be extremely effective, as many organizations rely on Grid middlewares for their computational needs. Many different middlewares exist, the result being a proliferation of self-contained, non-interoperable "Grid islands". This means that different Grids, based on different middlewares, cannot share resources; for example, jobs submitted on one Grid cannot be forwarded for execution on another. To address this problem, standard interfaces are being proposed for some of the important functionalities provided by most Grids, namely job submission and management, authorization and authentication, resource modeling, and others. In this paper we review some recent standards which address interoperability for three types of services: the BES/JSDL specifications for job submission and management, the SAML notation for authorization and authentication, and the GLUE specification for resource modeling. We describe how standards-enhanced Grid components can be used to create interoperable building blocks for a Grid architecture. Furthermore, we describe how existing components from the gLite middleware have been re-engineered to support BES/JSDL, GLUE and SAML. From this experience we draw some conclusions on the strengths and weaknesses of these specifications, and how they can be improved.

7.
Computational grids are composed of heterogeneous, autonomously managed resources. In such an environment, any resource can join or leave the grid at any time. This makes the grid infrastructure inherently unreliable, resulting in delays and failures of executing jobs. Thus, fault tolerance becomes a vital aspect of the grid for realizing reliability, availability and quality of service. The most common fault tolerance technique used in High Performance Computing is rollback recovery. It relies on the availability of checkpoints and the stability of storage media, so the checkpoints are replicated on storage media. If replication is not done properly, it increases job execution time. Furthermore, dedicating powerful resources solely to checkpoint storage wastes their computation power and may create bottlenecks when the load on the network is high. To address these problems, this paper proposes a checkpoint-replication-based fault tolerance strategy named the Reliable Checkpoint Storage Strategy (RCSS). In RCSS, checkpoints are replicated on all checkpoint servers in the grid in a distributed manner, which decreases the checkpoint replication time and in turn improves the overall job execution time. Additionally, if a resource fails during execution of a job, RCSS restarts the job from its last valid checkpoint taken from any checkpoint server in the grid. Furthermore, to increase grid performance, the CPU cycles of checkpoint servers are also utilized when the load on the network is high. To evaluate the performance of RCSS, simulations are carried out using GridSim. The simulation results show that RCSS improves intra-cluster checkpoint wave completion time by 12.5% with a varying number of checkpoint servers, reduces checkpoint wave completion time by 50% with a varying number of clusters, and reduces replication time within a cluster by 39.5%.
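The replicate-and-restart idea described above can be sketched as follows; the directory-based "servers", file layout and pickle format are assumptions made for the illustration, not the RCSS implementation.

```python
import os, pickle

# Illustrative checkpoint replication: write each checkpoint to every server's
# directory, and on restart load the newest checkpoint that unpickles cleanly.
# Directory-based "servers" stand in for real checkpoint-server endpoints.

def replicate_checkpoint(state, job_id, step, server_dirs):
    for d in server_dirs:
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, f"{job_id}.{step}.ckpt"), "wb") as f:
            pickle.dump(state, f)

def restore_latest(job_id, server_dirs):
    candidates = []
    for d in server_dirs:
        if not os.path.isdir(d):
            continue
        for name in os.listdir(d):
            if name.startswith(job_id + ".") and name.endswith(".ckpt"):
                step = int(name.split(".")[1])
                candidates.append((step, os.path.join(d, name)))
    for step, path in sorted(candidates, reverse=True):
        try:
            with open(path, "rb") as f:
                return step, pickle.load(f)   # newest checkpoint that is still readable
        except Exception:
            continue                          # corrupted or partial replica: try the next one
    return 0, None                            # no valid checkpoint: start from scratch
```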

8.
The organic grid: self-organizing computation on a peer-to-peer network
Desktop grids have been used to perform some of the largest computations in the world and have the potential to grow by several more orders of magnitude. However, current approaches to utilizing desktop resources require either centralized servers or extensive knowledge of the underlying system, limiting their scalability. We propose a new design for desktop grids that relies on a self-organizing, fully decentralized approach to the organization of the computation. Our approach, called the organic grid, is a radical departure from current approaches and is modeled after the way complex biological systems organize themselves. Similar to current desktop grids, a large computational task is broken down into sufficiently small subtasks. Each subtask is encapsulated into a mobile agent, which is then released on the grid and discovers computational resources using autonomous behavior. In the process of "colonization" of available resources, the judicious design of the agent behavior produces the emergence of crucial properties of the computation that can be tailored to specific classes of applications. We demonstrate this concept with a reduced-scale proof-of-concept implementation that executes a data-intensive independent-task application on a set of heterogeneous, geographically distributed machines. We present a detailed exploration of the design space of our system and a performance evaluation of our implementation using metrics appropriate for assessing self-organizing desktop grids.

9.
In attempts to exploit a diverse set of grid resources efficiently, numerous efforts in resource management, particularly scheduling, have been made. The primary objective of these efforts is the minimization of application completion time; however, they tend to achieve this objective at the expense of redundant resource usage. This paper investigates the problem of scheduling workflow applications on grids and presents a novel scheduling algorithm for the solution of this problem. Our algorithm performs the scheduling by accounting for two objectives: completion time and resource usage. Since the performance of grid resources changes dynamically and accurate estimation of their performance is very difficult, our algorithm incorporates rescheduling to deal with unforeseen performance fluctuations effectively. The paper provides a comparative evaluation study conducted using an extensive set of experiments. The study demonstrates that the proposed algorithm delivers promising performance in three respects: completion time, resource utilization, and robustness to resource-performance fluctuations.

10.
To coordinate the fair sharing of heterogeneous grid resources among multiple users and meet differing user requirements, this paper proposes an ECT-based, priority-constrained job scheduling strategy. The strategy takes full account of each job's expected completion time and assigns priorities to users of different classes, so that jobs from high-priority users are executed first, the vast majority of jobs finish within their expected completion times, and the utilization of the various resources is balanced. The strategy resolves conflict-free resource sharing among different classes of users in a grid environment, improves user satisfaction, and achieves a reasonable matching between jobs and heterogeneous resources.
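The strategy itself is not reproduced here, but a generic priority-plus-earliest-completion-time list scheduler of the kind the abstract describes can be sketched as follows; the job and resource model is invented for the example.

```python
# Generic priority + earliest-completion-time list scheduler, given only to
# illustrate the kind of policy the abstract describes; the paper's exact rules
# (priority classes, deadline guarantees) are not reproduced.
def schedule(jobs, resource_speeds):
    """jobs: list of (job_id, priority, work); resource_speeds: resource -> work/sec.
    Higher-priority jobs are placed first; each job goes to the resource that
    would complete it earliest (its minimum expected completion time)."""
    ready = {r: 0.0 for r in resource_speeds}            # when each resource becomes free
    plan = []
    for job_id, _prio, work in sorted(jobs, key=lambda j: -j[1]):
        res = min(ready, key=lambda r: ready[r] + work / resource_speeds[r])
        finish = ready[res] + work / resource_speeds[res]
        ready[res] = finish
        plan.append((job_id, res, finish))
    return plan

if __name__ == "__main__":
    jobs = [("j1", 2, 100.0), ("j2", 1, 40.0), ("j3", 2, 60.0)]
    speeds = {"fast": 10.0, "slow": 4.0}
    for job, res, finish in schedule(jobs, speeds):
        print(job, "->", res, f"finishes at t={finish:.1f}")
```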

11.
When multiple grid applications are executed on a common grid computing infrastructure, the resource allocation policy affects the time needed to complete these applications. In this paper, we formulate an analytical model that permits us to compare different allocation policies. We show that a uniform allocation policy penalizes large jobs (i.e., applications requiring much work), whereas a linear allocation of resources penalizes small jobs. In particular, we study an allocation policy that aims at minimizing the average job completion time. We show that such a policy can reduce the average completion time by as much as 50% of the completion time required by uniform or linear allocation policies. Such a policy is also fair to applications because it does not penalize small jobs or large jobs as other policies (such as uniform or linear) do.
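To make the trade-off concrete, the toy example below (all numbers invented) uses the simple model in which a job with work w and capacity share c finishes at time w/c and all shares sum to a fixed capacity C. In this toy model, uniform and size-proportional sharing happen to give the same average completion time while penalizing large and small jobs respectively, and allocating shares proportionally to the square root of the work minimizes the average (a standard Cauchy-Schwarz argument); whether this matches the paper's analytical policy is not claimed.

```python
from math import sqrt

# Toy model: a job with work w and capacity share c completes at time w / c,
# and the shares of all jobs sum to C.  The sqrt rule is the textbook optimum
# for this toy model, not a claim about the exact policy derived in the paper.
def completion_times(works, allocations):
    return [w / c for w, c in zip(works, allocations)]

works, C = [1.0, 4.0, 16.0], 10.0
policies = {
    "uniform": [C / len(works)] * len(works),
    "linear":  [C * w / sum(works) for w in works],
    "sqrt":    [C * sqrt(w) / sum(sqrt(x) for x in works) for w in works],
}
for name, alloc in policies.items():
    times = completion_times(works, alloc)
    print(f"{name:8s} per-job times {[round(t, 2) for t in times]}"
          f"  average {sum(times) / len(times):.2f}")
```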

12.
For the assignment problem in which all tasks must be completed stage by stage in sequence, but tasks within the same stage can be carried out simultaneously, the question is how to assign the available personnel to these tasks so that the overall completion time is minimized and, subject to that, the total labor time spent is minimized. By introducing a cubic detection matrix, a monotonically decreasing iterative algorithm is given. The algorithm not only obtains an exact optimal solution but also has good computational efficiency.

13.
The workload of many real-time systems can be characterized as a set of preemptable jobs with linear precedence constraints. Typically, their execution times are known only to lie within a range of values. In addition, jobs share resources, and access to the resources must be synchronized to ensure the integrity of the system. The paper is concerned with the schedulability of such jobs when scheduled on a priority-driven basis. It describes three algorithms for computing upper bounds on the completion times of jobs that have arbitrary release times and priorities. The first two are simple but do not yield sufficiently tight bounds, while the last one yields the tightest bounds but has the greatest complexity.

14.
Job scheduling is a challenging task in grid environments because it must fulfill user requirements. Scientists often have deadlines and budgets for their experiments (sets of jobs), but these requirements conflict with each other: cheaper resources are slower than expensive ones. In this paper, we have implemented two multi-objective swarm algorithms, one based on biological behavior, the Multi-Objective Artificial Bee Colony (MOABC), and the other on physics, the Multi-Objective Gravitational Search Algorithm (MOGSA). Their multi-objective nature allows execution time and cost to be optimized per experiment. These algorithms are evaluated against the standard and well-known multi-objective algorithm, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), to demonstrate the quality of our multi-objective proposals. Moreover, they are compared with real meta-schedulers that take the same requirements into account: the Workload Management System (WMS) from the most widely used European grid middleware, gLite, and the Deadline Budget Constraint (DBC) from Nimrod-G. Results show that MOABC offers the best results in all cases, using diverse workflows with dependent jobs over different grid environments.
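All of the algorithms named above compare candidate schedules by Pareto dominance on the two objectives (execution time, cost). A minimal generic helper for that comparison is sketched below; it is common to NSGA-II-style methods in general and is not code from the paper.

```python
# Generic bi-objective helpers: schedule a dominates b if it is no worse in
# both execution time and cost and strictly better in at least one of them.
def dominates(a, b):
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(candidates):
    """candidates: list of (time, cost) pairs; returns the non-dominated subset."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

if __name__ == "__main__":
    schedules = [(120, 9.0), (150, 4.0), (130, 8.5), (125, 10.0), (200, 3.5)]
    print(pareto_front(schedules))   # (125, 10.0) is dominated by (120, 9.0) and dropped
```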

15.
The exploitation of service-oriented technologies, such as Grid computing, is being boosted by the current service-oriented economy trend, leading to a growing need for Quality of Service (QoS) mechanisms. However, Grid computing was created to provide vast amounts of computational power in a best-effort way. Providing QoS guarantees is therefore a very difficult and complex task due to the distributed and heterogeneous nature of Grid resources, especially volunteer computing resources (e.g., desktop resources). The scope of this paper is to provide integrated multi-QoS support suitable for Grid computing environments made of both dedicated and volunteer resources, even taking advantage of that fact. The QoS is provided through SLAs by exploiting the different available scheduling mechanisms in a coordinated way and applying appropriate resource usage optimization techniques. It is based on the differentiated use of reservations and scheduling-in-advance techniques, enhanced with rescheduling techniques that improve allocation decisions already made, achieving higher resource utilization while still ensuring the agreed QoS. As a result, our proposal enhances best-effort Grid environments by providing QoS-aware scheduling capabilities. This proposal has been validated by means of a set of experiments performed on a real Grid testbed. Results show how the proposed framework effectively harnesses the specific capabilities of the underlying resources to provide every user with the desired QoS level while, at the same time, optimizing resource usage.

16.
Volunteer computing systems offer high computing power to the scientific communities for running large data-intensive scientific workflows. However, these computing environments provide only a best-effort infrastructure for executing high-performance jobs. This work aims to schedule scientific, data-intensive workflows on a hybrid of volunteer computing and Cloud resources to enhance the utilization of these environments and increase the percentage of workflows that meet their deadlines. The proposed workflow scheduling system partitions a workflow into sub-workflows to minimize data dependencies among them. These sub-workflows are then distributed over volunteer resources according to resource proximity and a load balancing policy, and the execution time of each sub-workflow on the selected volunteer resources is estimated in this phase. If a sub-workflow would miss its sub-deadline because of a long waiting time, it is re-scheduled onto public Cloud resources. This re-scheduling improves system performance by increasing the percentage of workflows that meet their deadlines. The proposed Cloud-aware data-intensive scheduling algorithm increases the percentage of workflows that meet their deadlines by a factor of 75% on average, compared with executing the workflows on volunteer resources alone.
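A highly simplified Python sketch of that deadline-driven re-scheduling decision follows; the estimation model and field names are placeholders for illustration, not the paper's implementation.

```python
# Hypothetical decision rule: a sub-workflow whose estimated finish time on
# volunteer resources exceeds its sub-deadline is re-scheduled onto the Cloud.
def place_subworkflows(subworkflows, now=0.0):
    """subworkflows: list of dicts with 'name', 'sub_deadline',
    'volunteer_wait' and 'volunteer_runtime' (all times in the same unit)."""
    placement = {}
    for sw in subworkflows:
        volunteer_finish = now + sw["volunteer_wait"] + sw["volunteer_runtime"]
        placement[sw["name"]] = "volunteer" if volunteer_finish <= sw["sub_deadline"] else "cloud"
    return placement

if __name__ == "__main__":
    swfs = [
        {"name": "partA", "sub_deadline": 100, "volunteer_wait": 10, "volunteer_runtime": 60},
        {"name": "partB", "sub_deadline": 80, "volunteer_wait": 40, "volunteer_runtime": 70},
    ]
    print(place_subworkflows(swfs))   # partA stays on volunteers, partB moves to the Cloud
```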

17.
We propose a real-time simulation of window frost formation on mobile devices that uses both particles and grids. Previous ice formation methods made heavy demands on both memory and computational capacity because they were designed for a desktop environment. In this paper, a frost skeleton grows, using particles, around a location touched by the user, and the ice surfaces are constructed using a grid. Using a non-lattice random-walk technique, the frost skeleton grows freely and naturally. A hash-grid technique is used to search efficiently for neighbor particles during the crystallization process. Finally, some 2.5D details are added to the ice skeleton by adjusting the height of the grid vertices around the skeleton. Experiments show that our method creates realistic frost in real time. Our method can be used to express ice formation effects in touch-based mobile device applications such as weather forecasts or games.
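The hash grid mentioned above is a standard spatial data structure; a minimal 2D version is sketched below in Python (the cell size, API and example coordinates are illustrative, not taken from the paper).

```python
from collections import defaultdict

# Minimal 2D spatial hash grid: particles are bucketed by cell, and a neighbor
# query only inspects the 3x3 block of cells around the query point.
class HashGrid:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, particle_id, x, y):
        self.cells[self._key(x, y)].append((particle_id, x, y))

    def neighbors(self, x, y, radius):
        cx, cy = self._key(x, y)
        r2 = radius * radius
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for pid, px, py in self.cells.get((cx + dx, cy + dy), []):
                    if (px - x) ** 2 + (py - y) ** 2 <= r2:
                        found.append(pid)
        return found

if __name__ == "__main__":
    grid = HashGrid(cell_size=1.0)     # cell size should be at least the query radius
    grid.insert("a", 0.2, 0.3)
    grid.insert("b", 0.9, 0.8)
    grid.insert("c", 5.0, 5.0)
    print(grid.neighbors(0.5, 0.5, radius=0.8))   # ['a', 'b']
```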

18.
Accurate, continuous resource monitoring and profiling are critical for enabling performance tuning and scheduling optimization. In desktop grid systems that employ sandboxing, these issues are challenging because (1) subjobs inside sandboxes are executed in a virtual computing environment and (2) the state of this virtual environment within the sandboxes is reset to an initial empty state after a subjob completes. DGMonitor is a monitoring tool which builds a global, accurate, and continuous view of real resource utilization for desktop grids with sandboxing. Our monitoring tool measures performance unobtrusively and reliably, uses a simple performance data model, and is easy to use. Our measurements demonstrate that DGMonitor can scale to large desktop grids (up to 12000 PCs) with low monitoring overhead in terms of resource consumption (less than 0.1% per machine). Though we originally developed DGMonitor for the Entropia DCGrid platform, our tool can easily be ported to and integrated into other desktop grid systems. In all of these systems, DGMonitor data can support existing and novel information services, particularly for performance tuning and scheduling. In this paper, the high scalability and monitoring power of DGMonitor are demonstrated with the Entropia DCGrid platform and the BOINC platform, respectively.

19.
Grid computing is a computing paradigm that organizes dispersed computational resources over a network to solve complex problems, and job scheduling is one of its main open problems. This paper proposes a grid job scheduling algorithm based on fuzzy particle swarm optimization, which dynamically generates optimized scheduling plans so that the time needed by the available resources to complete all jobs is minimized. Experimental results show that, compared with job scheduling methods based on genetic algorithms, simulated annealing and ant colony optimization, the proposed algorithm has certain advantages in both time and accuracy.
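The fuzzy-matrix encoding used by the paper is not reproduced here, but the sketch below shows a plain particle swarm for the same job-to-machine assignment problem, minimizing the makespan; the swarm parameters and the rounding-based decoding are common PSO choices invented for the example, not the paper's algorithm.

```python
import random

# Plain (non-fuzzy) particle swarm sketch for job-to-machine assignment that
# minimizes makespan; it only illustrates the PSO encoding commonly used for
# this problem and omits the fuzzy position/velocity representation.
def makespan(assignment, runtimes):
    """assignment[j] = machine of job j; runtimes[j][m] = time of job j on machine m."""
    loads = [0.0] * len(runtimes[0])
    for j, m in enumerate(assignment):
        loads[m] += runtimes[j][m]
    return max(loads)

def pso_schedule(runtimes, swarm_size=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    n_jobs, n_machines = len(runtimes), len(runtimes[0])
    pos = [[random.uniform(0, n_machines - 1) for _ in range(n_jobs)] for _ in range(swarm_size)]
    vel = [[0.0] * n_jobs for _ in range(swarm_size)]
    decode = lambda x: [min(n_machines - 1, max(0, int(round(v)))) for v in x]
    pbest = [p[:] for p in pos]
    pbest_cost = [makespan(decode(p), runtimes) for p in pos]
    g = pbest_cost.index(min(pbest_cost))
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(n_jobs):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            cost = makespan(decode(pos[i]), runtimes)
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < gbest_cost:
                    gbest, gbest_cost = pos[i][:], cost
    return decode(gbest), gbest_cost

if __name__ == "__main__":
    runtimes = [[4, 9], [7, 3], [5, 5], [8, 2]]   # 4 jobs x 2 machines
    print(pso_schedule(runtimes))
```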

20.
An important concern for the efficient use of distributed computing is load balancing, which ensures that all available nodes and their shared resources are equally exploited. In large-scale systems such as volunteer computing platforms and desktop grids, centralized solutions may introduce performance bottlenecks and single points of failure. Accordingly, fully distributed alternatives have been considered, due to their inherent robustness and reliability. In extremely dynamic contexts, scheduling middlewares should adapt their job scheduling policies to the actual resource availability and overcome the volatility and heterogeneity typical of the underlying nodes. To deal with the dynamicity of a large pool of resources, self-organizing and adaptive solutions represent a promising research direction, and solutions based on bio-inspired methodologies are particularly suitable, as they inherently provide the desired features. In this paper we present a fully distributed load balancing mechanism, called ozmos, which aims at increasing the efficiency of distributed computing systems through peer-to-peer interaction between nodes. The proposed algorithm is based on a Chord overlay and employs ant-like agents to spread information about the current load on each node, to reschedule tasks from overloaded systems to underloaded ones, and to relocate incompatible tasks onto suitable resources in heterogeneous grids. By means of several evaluation scenarios we demonstrate the effectiveness of the proposed solution in achieving system-wide load balancing, both with homogeneous and heterogeneous resources. In particular, we consider the load balancing performance of our approach, its scalability, and its communication efficiency.
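A toy version of the ant-like balancing idea is sketched below: a simple ring stands in for the Chord overlay, and each agent remembers the least-loaded node it has visited and moves single tasks toward it. The migration rule and parameters are a simplification invented for the illustration, not the ozmos protocol.

```python
import random

# Toy ant-style load balancing on a ring of nodes: each "ant" walks to a random
# neighbor, remembers the least-loaded node it has seen so far, and moves one
# task from an overloaded node toward that remembered node.
def balance(loads, ants=4, steps=200, threshold=1):
    n = len(loads)
    positions = [random.randrange(n) for _ in range(ants)]
    best_seen = positions[:]                      # least-loaded node each ant remembers
    for _ in range(steps):
        for a in range(ants):
            here = positions[a]
            if loads[here] < loads[best_seen[a]]:
                best_seen[a] = here
            # move one task from an overloaded node toward the remembered light node
            if loads[here] - loads[best_seen[a]] > threshold:
                loads[here] -= 1
                loads[best_seen[a]] += 1
            positions[a] = (here + random.choice((-1, 1))) % n    # hop to a ring neighbor
    return loads

if __name__ == "__main__":
    random.seed(1)
    print(balance([20, 0, 1, 15, 2, 0, 9, 1]))    # loads become substantially more even
```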
