Related articles
1.
Clusters of computers have emerged as mainstream parallel and distributed platforms for high-performance, high-throughput and high-availability computing. To enable effective resource management on clusters, numerous cluster management systems and schedulers have been designed. However, their focus has essentially been on maximizing CPU performance rather than on improving the utility delivered to users or the quality of service. This paper presents a new computational-economy-driven scheduling system called Libra, which has been designed to support allocation of resources based on users' quality-of-service requirements. It is intended to work as an add-on to existing queuing and resource management systems. The first version has been implemented as a plugin scheduler for the Portable Batch System. The scheduler offers a market-based, economy-driven service for managing batch jobs on clusters by scheduling CPU time according to user-perceived value (utility), determined by the user's budget and deadline rather than by system performance considerations. The Libra scheduler has been simulated using the GridSim toolkit to carry out a detailed performance analysis. Results show that the deadline- and budget-based proportional resource allocation strategy improves the utility of the system and user satisfaction compared with system-centric scheduling strategies. Copyright © 2004 John Wiley & Sons, Ltd.
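
The proportional, deadline- and budget-driven allocation that the abstract credits for Libra's gains can be illustrated with a small sketch. The admission test and share formula below are assumptions for illustration, not the actual PBS plugin:

```python
# Illustrative sketch of deadline/budget-driven proportional CPU allocation in
# the spirit of Libra's admission test; names and formulas are assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    length: float    # estimated CPU time needed (seconds on a reference node)
    deadline: float  # seconds from submission
    budget: float    # currency units the user is willing to pay

def admit_and_share(job: Job, node_free_share: float, price_per_cpu_sec: float):
    """Admit the job only if both budget and deadline can be met.

    Returns the fraction of a CPU to reserve, or None if the job is rejected.
    """
    # Minimum CPU share needed to finish 'length' seconds of work by the deadline.
    required_share = job.length / job.deadline
    # Cost if the job runs exactly to its deadline at that share.
    estimated_cost = job.length * price_per_cpu_sec
    if required_share > node_free_share:
        return None          # node cannot guarantee the deadline
    if estimated_cost > job.budget:
        return None          # user cannot afford the run
    return required_share    # reserve just enough; slack stays available to others

# Example: a 600-second job with a 1-hour deadline needs 1/6 of a CPU.
print(admit_and_share(Job("render", 600, 3600, 10.0),
                      node_free_share=0.8, price_per_cpu_sec=0.01))
```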

2.
The execution of a workflow application can result in an imbalanced workload among the allocated processors, ultimately wasting resources and raising the cost to the user. Here, we consider a dynamic resource management system in which processors are reserved not for a job but only to run a task, thus allowing a higher resource usage rate. This paper presents a scheduling algorithm that manages concurrent workflows in a dynamic environment in which jobs are submitted by users at any moment in time, on shared heterogeneous resources, and constrained by a specified budget and deadline for each job. Recent research has attempted to propose dynamic strategies for concurrent workflows but only addressed fairness in resource sharing among applications while minimizing the execution time. The Multi-QoS Profit-Aware scheduling algorithm (MQ-PAS) proposed here is able to increase the profit achieved by the provider by considering the budget available for each job when defining task priorities. We study the scalability of the algorithm with different types of workflows and infrastructures. The experimental results show that our strategy improves provider revenue significantly and achieves comparable success rates for completed jobs.
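
The abstract does not give MQ-PAS's priority formula, so the following is only an assumed illustration of ranking ready tasks by the budget remaining in their job relative to the provider's cost:

```python
# Hypothetical illustration of budget-aware task prioritisation in the spirit of
# MQ-PAS: ready tasks from concurrent workflow jobs are ranked so that jobs with
# more spendable budget per remaining task can raise the provider's profit.
# The formula is an assumption, not the published algorithm.
import heapq

def prioritise(ready_tasks):
    """ready_tasks: dicts with 'name', 'job_budget_left', 'job_tasks_left',
    and 'task_cost' (the provider's cost to run the task)."""
    heap = []
    for i, t in enumerate(ready_tasks):
        spendable = t["job_budget_left"] / max(t["job_tasks_left"], 1)
        expected_profit = spendable - t["task_cost"]
        # heapq is a min-heap, so negate to pop the most profitable task first.
        heapq.heappush(heap, (-expected_profit, i, t))
    while heap:
        _, _, task = heapq.heappop(heap)
        yield task

tasks = [
    {"name": "a", "job_budget_left": 15.0, "job_tasks_left": 3, "task_cost": 1.0},
    {"name": "b", "job_budget_left": 5.0,  "job_tasks_left": 1, "task_cost": 2.0},
]
print([t["name"] for t in prioritise(tasks)])  # ['a', 'b'] under these numbers
```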

3.
Integrating cloud and grid infrastructures to strengthen the computing capacity of research institutions' existing grid systems and to offer applications deadline-guaranteed services is a hot topic in scientific research. In such a hybrid grid-cloud environment, deciding effectively when and how to lease cloud virtual resources is difficult. Existing scheduling strategies rely mainly on the static capability characteristics of grid resources and use job waiting time as the decision criterion; they lack an effective assessment of the resources' dynamic service capacity and therefore cannot guarantee the deadline requirements of scientific applications. This paper proposes an architecture for executing scientific workflows in the hybrid environment and describes its core components. For the workflow scheduling problem within it, a stochastic service model is used to model the dynamic service capacity of resources in the existing grid system, the task's risk of deadline violation is taken as the criterion for deciding whether to lease external virtual resources, and a scientific workflow scheduling algorithm, HCA_SASWD, is proposed. Experimental results show that, compared with other algorithms, HCA_SASWD effectively guarantees users' deadline requirements and provides a reference for system architectures that must offer deadline guarantees.
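
As an illustration of the leasing decision described above, the sketch below models the grid queue as M/M/1 and leases a cloud VM when the estimated risk of missing the deadline exceeds a threshold; the queueing model and threshold are assumptions, not the paper's stochastic service model:

```python
# Hypothetical leasing decision: estimate the probability that a task's sojourn
# time in the grid queue exceeds its remaining slack and rent a cloud VM when
# that risk is too high. The M/M/1 tail formula and threshold are illustrative.
import math

def violation_risk(arrival_rate: float, service_rate: float, slack: float) -> float:
    """P(sojourn time > slack) for an M/M/1 queue; 1.0 if the queue is unstable."""
    if service_rate <= arrival_rate:
        return 1.0
    return math.exp(-(service_rate - arrival_rate) * slack)

def should_lease_cloud_vm(arrival_rate, service_rate, slack, risk_threshold=0.05):
    return violation_risk(arrival_rate, service_rate, slack) > risk_threshold

# A task with 30 s of slack on a grid queue serving 0.5 tasks/s under 0.4 tasks/s load:
print(should_lease_cloud_vm(0.4, 0.5, 30))   # False: risk ≈ exp(-3) ≈ 0.05, so wait
```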

4.
刘扬  韩恺  樊建平 《计算机应用》2010,30(8):2197-2201
For years, research on grid resource management has focused only on improving the performance of a single application workload, neglecting the extensible service capability of resources and thereby restricting the spread of grid systems from scientific computing to broader application domains. To address this problem, a grid resource management framework supporting mixed workloads is proposed. The proposed policy-constrained resource allocation method abstracts the various workload-resource sharing relationships into a consumer sharing-policy tree, so that resource acquisition can simultaneously satisfy the performance requirements of mixed workloads of multiple types, such as batch processing and SOA services, thereby greatly improving the overall user benefit of the system.

5.
Grid applications with stringent security requirements introduce challenging concerns because the schedule devised by non-security-aware scheduling algorithms may suffer when scheduling security-constrained tasks. To make scheduling security-aware, the security overhead must be estimated and quantified. The proposed model quantifies security, in the form of security levels, on the basis of the cipher suite negotiated between a task and the grid node, and incorporates it into the existing heuristics MinMin and MaxMin to make them security-aware: MinMin(SA) and MaxMin(SA). It also proposes SPMaxMin (Security Prioritized MaxMin) and compares it with three heuristics, MinMin(SA), MaxMin(SA), and SPMinMin, in a heterogeneous grid/task environment. Extensive computer simulation results reveal that the performance of the various heuristics varies with the computational and security heterogeneity. Analysis over nine heterogeneous grid/task workload situations indicates that an algorithm that performs better for one workload degrades in another; for a particular workload, one algorithm gives a better makespan while another gives a better response time. Finally, a security-aware scheduling model is proposed that adapts itself to the dynamic nature of the grid and picks the best-suited algorithm among the four analyzed heuristics on the basis of job characteristics, grid characteristics, and the desired performance metric. Copyright © 2011 John Wiley & Sons, Ltd.
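
A security-aware MinMin in the spirit of MinMin(SA) can be sketched as follows; the numeric security levels and the per-level overhead factor are assumptions standing in for the negotiated-cipher-suite quantification:

```python
# Illustrative security-aware MinMin: a task may only run on grid nodes whose
# security level meets its requirement, and a per-level security overhead is
# added to the estimated completion time. Levels and the overhead model are
# assumptions, not the paper's cipher-suite-based quantification.

def security_aware_minmin(tasks, nodes, overhead_per_level=0.05):
    """tasks: {name: {'length': MI, 'sec_req': int}}
    nodes: {name: {'mips': float, 'sec_level': int, 'ready': float}}
    Returns a list of (task, node, finish_time) in scheduling order."""
    schedule, unscheduled = [], dict(tasks)
    while unscheduled:
        best = None  # (finish_time, task_name, node_name)
        for tname, t in unscheduled.items():
            for nname, n in nodes.items():
                if n["sec_level"] < t["sec_req"]:
                    continue                      # node cannot satisfy the constraint
                exec_time = t["length"] / n["mips"]
                exec_time *= 1 + overhead_per_level * t["sec_req"]  # crypto overhead
                finish = n["ready"] + exec_time
                if best is None or finish < best[0]:
                    best = (finish, tname, nname)
        if best is None:
            raise RuntimeError("some task has no node meeting its security requirement")
        finish, tname, nname = best
        nodes[nname]["ready"] = finish
        schedule.append((tname, nname, round(finish, 2)))
        del unscheduled[tname]
    return schedule

print(security_aware_minmin(
    {"t1": {"length": 1000, "sec_req": 2}, "t2": {"length": 400, "sec_req": 0}},
    {"fast": {"mips": 500, "sec_level": 1, "ready": 0.0},
     "secure": {"mips": 250, "sec_level": 3, "ready": 0.0}}))
# [('t2', 'fast', 0.8), ('t1', 'secure', 4.4)]
```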

6.
Present multimedia services are provided through a heterogeneous set of networks. Because of this heterogeneity, long-term resource availability guarantees are difficult to obtain. Consequently, even if the resource requirements throughout the service could be accurately mapped, it would not be feasible to provide overall performance guarantees. Application adaptation arises as an appropriate solution for quality-of-service assurance in such dynamic service infrastructures; it basically involves adapting application characteristics according to network resource availability. We formulate the problem of adapting multimedia applications to the network infrastructure as well as to user- and application-imposed constraints and preferences. We exploit the good approximation, identification, and control capabilities of recurrent high-order neural networks and introduce an algorithm that guarantees the resource constraints of the network infrastructure will not be violated while maintaining the user and application requirements.

7.
To meet the challenges of consistent performance, low communication latency, and a high degree of user mobility, cloud and telecom infrastructure vendors and operators foresee a Mobile Cloud Network that combines public cloud infrastructures with cloud-augmented Telecom nodes in forthcoming mobile access networks. A Mobile Cloud Network is composed of distributed, cost- and capacity-heterogeneous resources that host applications which are in turn subject to spatially and quantitatively rapidly changing demand. Such an infrastructure requires a holistic management approach that ensures the resident applications' performance requirements are met while being sustainably supported by the underlying infrastructure. The contribution of this paper is three-fold. Firstly, it contributes a model that captures the cost- and capacity-heterogeneity of a Mobile Cloud Network infrastructure. The model bridges the Mobile Edge Computing and Distributed Cloud paradigms by modelling multiple tiers of resources across the network, serving not just mobile devices but any client beyond and within the network. A set of resource management challenges is presented based on this model. Secondly, an algorithm that holistically and optimally solves these challenges is proposed. The algorithm is formulated as an application placement method that incorporates network link capacity, desired user latency and user mobility, as well as data centre resource utilisation and server provisioning costs. Thirdly, to address scalability, a tractable locally optimal algorithm is presented. The evaluation demonstrates that the placement algorithm significantly improves latency and resource utilisation skewness while minimising the operational cost of the system. Additionally, the proposed model and evaluation method demonstrate the viability of dynamic resource management of the Mobile Cloud Network and the need to accommodate rapidly mobile demand in a holistic manner.
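
The trade-off the placement algorithm balances can be caricatured with a greedy scoring function; the weights and the greedy form below are illustrative assumptions, whereas the paper formulates the problem as a holistic optimisation:

```python
# Greatly simplified, assumed illustration of the placement trade-off: pick,
# for one application instance, the resource tier that scores best on user
# latency, current utilisation and provisioning cost.

def place(app_demand, tiers, w_latency=0.5, w_util=0.3, w_cost=0.2):
    """tiers: list of dicts with 'name', 'latency_ms', 'utilisation' (0-1),
    'cost_per_unit', 'free_capacity'. Returns the chosen tier name or None."""
    best_name, best_score = None, float("inf")
    for tier in tiers:
        if tier["free_capacity"] < app_demand:
            continue                      # tier cannot host the instance at all
        score = (w_latency * tier["latency_ms"]
                 + w_util * 100 * tier["utilisation"]
                 + w_cost * 100 * tier["cost_per_unit"])
        if score < best_score:
            best_name, best_score = tier["name"], score
    return best_name

tiers = [
    {"name": "edge",  "latency_ms": 5,  "utilisation": 0.6, "cost_per_unit": 0.8, "free_capacity": 2},
    {"name": "cloud", "latency_ms": 60, "utilisation": 0.3, "cost_per_unit": 0.2, "free_capacity": 50},
]
print(place(app_demand=1, tiers=tiers))   # 'edge': low latency outweighs its higher cost
```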

8.
Computational Grids and peer-to-peer (P2P) networks enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The management and composition of resources and services for scheduling applications, however, becomes a complex undertaking. We have proposed a computational economy framework for regulating the supply of and demand for resources and allocating them to applications based on users' quality-of-service requirements. The framework requires economy-driven deadline- and budget-constrained (DBC) scheduling algorithms that allocate resources to application jobs in such a way that the users' requirements are met. In this paper, we propose a new scheduling algorithm, called the DBC cost-time optimization scheduling algorithm, that aims to optimize not only cost but also time when possible. The performance of the cost-time optimization scheduling algorithm has been evaluated through extensive simulation and empirical studies of deploying parameter sweep applications on global Grids. Copyright © 2005 John Wiley & Sons, Ltd.
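
The core DBC cost-time optimisation idea, choose the cheapest resource that still meets the deadline and budget and break cost ties by finish time, can be sketched for a single job as follows (an assumed illustration, not the full parameter-sweep scheduler):

```python
# Deadline- and budget-constrained cost-time selection: among resources that
# can finish the job within its deadline and budget, prefer the cheapest and
# break cost ties by the earliest finish time.

def pick_resource(job_length_mi, deadline_s, budget, resources):
    """resources: list of dicts with 'name', 'mips', 'price_per_sec', 'ready_s'."""
    feasible = []
    for r in resources:
        exec_s = job_length_mi / r["mips"]
        finish = r["ready_s"] + exec_s
        cost = exec_s * r["price_per_sec"]
        if finish <= deadline_s and cost <= budget:
            feasible.append((cost, finish, r["name"]))
    if not feasible:
        return None                           # no resource meets both constraints
    cost, finish, name = min(feasible)        # cheapest first, fastest among equals
    return name, round(cost, 3), round(finish, 1)

resources = [
    {"name": "cheap-slow", "mips": 200, "price_per_sec": 0.01, "ready_s": 10},
    {"name": "fast-dear",  "mips": 800, "price_per_sec": 0.08, "ready_s": 0},
]
print(pick_resource(job_length_mi=6000, deadline_s=60, budget=1.0, resources=resources))
# ('cheap-slow', 0.3, 40.0): it still meets the deadline, so cost wins over speed
```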

9.
Based on the dependencies and deadlines of tasks in a grid workflow, together with resource availability and MIPS (millions of instructions per second), a task-priority scheduling algorithm based on grid resource prediction is proposed. The grid workflow is abstracted as a directed acyclic graph, the workflow's critical path is found, and each task's latest start time is computed and used as its priority. The algorithm also takes into account user requirements, resource types, and the reassignment of tasks after a scheduling failure. Experiments verify the effectiveness of the algorithm.
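
The latest-start-time priority described above can be sketched as a backward pass over the DAG; the tiny example graph and task durations below are assumed for illustration:

```python
# Compute each task's latest start time (LST) by working backwards from the
# workflow deadline; a smaller LST means a more urgent task. The example DAG
# and durations are assumptions for illustration.
from functools import lru_cache

duration = {"A": 4, "B": 3, "C": 6, "D": 2}           # estimated run time per task
successors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
deadline = 15                                         # whole-workflow deadline

@lru_cache(maxsize=None)
def latest_finish(task: str) -> float:
    succ = successors[task]
    if not succ:                                      # exit task must end by the deadline
        return deadline
    return min(latest_finish(s) - duration[s] for s in succ)

def latest_start(task: str) -> float:
    return latest_finish(task) - duration[task]

priorities = sorted(duration, key=latest_start)       # most urgent (smallest LST) first
print([(t, latest_start(t)) for t in priorities])
# [('A', 3), ('C', 7), ('B', 10), ('D', 13)]
```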

10.
Hybrid Cloud computing is receiving increasing attention in recent days. In order to realize the full potential of the hybrid Cloud platform, an architectural framework for efficiently coupling public and private Clouds is necessary. As resource failures due to the increasing functionality and complexity of hybrid Cloud computing are inevitable, a failure-aware resource provisioning algorithm that is capable of meeting end-users' quality of service (QoS) requirements is paramount. In this paper, we propose a scalable hybrid Cloud infrastructure as well as resource provisioning policies to assure the QoS targets of the users. The proposed policies take into account the workload model and the failure correlations to redirect users' requests to the appropriate Cloud providers. Using real failure traces and a workload model, we evaluate the proposed resource provisioning policies to demonstrate their performance, cost, and performance-cost efficiency. Simulation results reveal that, in a realistic working condition and adopting user estimates for the requests in the provisioning policies, we are able to improve the users' QoS by about 32% in terms of deadline violation rate and 57% in terms of slowdown, with a limited cost on a public Cloud.

11.
In this report, we review the current state of the art of web-based visualization applications. Recently, an increasing number of web-based visualization applications have emerged, because new technologies offered by modern browsers have greatly increased the capabilities for visualization on the web. We first review the technical aspects enabling this development. These include not only improvements for local rendering, such as WebGL and HTML5, but also infrastructures such as grid and cloud computing platforms. Another important factor is the transfer of data between the server and the client; therefore, we also discuss advances in this field, for example methods to reduce bandwidth requirements such as compression, and other optimizations such as progressive rendering and streaming. After establishing these technical foundations, we review existing web-based visualization applications and prototypes from various application domains. Furthermore, we propose a classification of these web-based applications based on the technologies and algorithms they employ. Finally, we discuss promising application areas that would benefit from web-based visualization and assess their feasibility based on the existing approaches.

12.
A service-oriented strategy for high-performance grid computing
The development of grid technology and Web services has given rise to service computing. This paper revisits high-performance computing on traditional computational grids under a service-oriented architecture. First, considering the characteristics of high-performance computing applications and combining them with service-oriented ideas, a hierarchical resource management architecture is proposed. Second, the program structure of high-performance computing applications suited to grid environments is analysed and represented as a directed acyclic graph (DAG). Third, based on the above resource management architecture and application model, an improved dynamic priority scheduling algorithm is proposed. Finally, simulation experiments analyse the performance of the proposed algorithm; the results show that it is well suited to grid environments, which verifies the effectiveness of the proposed service-oriented strategy for high-performance grid computing.

13.
With recent advances in computing and communication technologies making mobile devices more powerful, the scope of grid computing has been broadened to include mobile and pervasive devices. Energy has become a critical resource in such devices, and battery limitations are the main challenge to enabling persistent mobile grid computing. In this paper, we address the problem of energy-constrained scheduling for the grid environment, where grid applications have a limited energy budget. The paper investigates both energy minimization for mobile devices and the grid utility optimization problem. We formalize energy-aware scheduling using nonlinear optimization theory under energy-budget and deadline constraints. The paper also proposes a distributed pricing-based algorithm that trades off energy and deadline to achieve a system-wide optimization based on the preference of the grid user. The simulations reveal that the proposed energy-constrained scheduling algorithms obtain better performance than a previous approach that considers both energy consumption and deadline.
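
The energy/deadline trade-off can be illustrated with a simple speed-selection sketch; the quadratic energy model, candidate speeds, and preference weight are simplifying assumptions rather than the paper's nonlinear formulation:

```python
# Assumed illustration: pick a CPU speed for a mobile device so that the job
# meets its deadline and energy budget, minimising a weighted mix of energy and
# completion time controlled by a user preference.

def choose_speed(cycles, deadline_s, energy_budget_j, speeds_ghz, alpha=0.5):
    """alpha in [0,1]: 1 favours saving energy, 0 favours finishing early."""
    best = None
    for f in speeds_ghz:
        time_s = cycles / (f * 1e9)
        energy_j = 2.0 * (f ** 2) * time_s      # E ~ k * f^2 * t (illustrative constant)
        if time_s > deadline_s or energy_j > energy_budget_j:
            continue                            # violates deadline or energy budget
        objective = alpha * energy_j + (1 - alpha) * time_s
        if best is None or objective < best[0]:
            best = (objective, f, round(time_s, 1), round(energy_j, 1))
    return best and best[1:]                    # (speed, time, energy) or None

# 12e9 cycles, 20 s deadline, 30 J budget, candidate speeds 0.6-1.5 GHz:
print(choose_speed(12e9, 20, 30, [0.6, 0.8, 1.0, 1.2, 1.5], alpha=0.7))
# (0.6, 20.0, 14.4): the energy-leaning preference picks the slowest feasible speed
```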

14.
In this paper, the problem of fault tolerance in grid computing is addressed and a novel adaptive, task-replication-based fault-tolerant job scheduling strategy for an economy-driven grid is proposed. The proposed strategy maintains a fault history of the resources, termed the resource fault index. The fault index entry for a resource is updated on the successful completion or failure of a task assigned to that grid resource. The Grid Resource Broker then replicates the task (submitting the same task to different backup resources) with different intensity, based on the resource's vulnerability to faults as suggested by its fault index. Consequently, in case of a fault at a resource, the results of the replicated task(s) on other backup resource(s) can be used. Hence, user jobs can be completed within the specified deadline and assigned budget even when faults occur at grid resources. Through extensive simulations, the performance of the proposed strategy is evaluated and compared with the Time Optimization and Checkpointing-based strategies in an economy-driven grid environment. The experimental results demonstrate that, in the presence of faults, the proposed fault-tolerant strategy improves the number of tasks completed with a varied deadline and fixed budget, as well as with a varied budget and fixed deadline. Additionally, the proposed strategy uses a smaller percentage of the deadline time than both the Time Optimization and Checkpointing-based strategies. Although its percentage of budget spent is greater than that of the Time Optimization and Checkpointing-based strategies, this is acceptable for time optimization, where the main objective is to maximize the tasks completed within a given deadline. It can be concluded from the experiments that the proposed strategy improves satisfaction of the user QoS requirements: it can effectively schedule tasks and tolerate faults gracefully even in the presence of failures, at a slightly higher budget consumption. Hence, the proposed fault-tolerant strategy helps sustain users' faith in the grid by enabling it to deliver reliable and consistent performance in the presence of faults.
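
The fault-index bookkeeping and replication-intensity rule can be sketched as follows; the update and replica-count formulas are assumptions for illustration, not the published strategy:

```python
# Keep a fault index per resource, update it after each task outcome, and
# replicate tasks more aggressively on resources with a poor history.
# The index update and the replica-count rule are illustrative assumptions.

fault_index = {}          # resource name -> recent failures minus successes (floored at 0)

def record_outcome(resource: str, succeeded: bool) -> None:
    delta = -1 if succeeded else +1
    fault_index[resource] = max(0, fault_index.get(resource, 0) + delta)

def replicas_needed(resource: str, max_replicas: int = 3) -> int:
    """1 copy on a reliable resource, more backups as its fault index grows."""
    return min(1 + fault_index.get(resource, 0), max_replicas)

record_outcome("node-7", succeeded=False)
record_outcome("node-7", succeeded=False)
record_outcome("node-3", succeeded=True)
print(replicas_needed("node-7"), replicas_needed("node-3"))   # 3 1
```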

15.
Grid technologies facilitate innovative applications among dynamic virtual organizations, but the complexity of the next generation of grid systems has exceeded the ability of traditional approaches to deploy, manage, and keep them functioning properly. An important method for addressing this challenge may be nature-inspired computing paradigms. This technique entails constructing a bottom-up multiagent system; however, the appropriate implementation mechanism for autonomous and distributed agents to emerge as a controlled grid service or application is still under consideration. In this article, a credit card management service in economic interactions is considered as a decentralized control approach, based on a preliminarily developed ecological-network-based grid middleware that has features desired for next-generation grid systems. The control scheme, design, and implementation of the credit card management service are presented in detail. The simulation results show that (1) agents are accountable for their activities, such as behavior invocation, service provision, and resource utilization, and (2) generated services or applications adapt well to dynamically changing environments, such as varying numbers of agents and partial failure of agents. The approach presented herein is beneficial for building autonomous and adaptive grid applications and services. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 1269–1288, 2006.

16.
刘懿  李华  冯永 《计算机工程》2009,35(18):179-181
Traditional research on grid resource scheduling focuses on scheduling performance and rarely considers the quality of service of the schedule. To address this, three types of user-satisfaction evaluation methods are designed to measure the quality of service of a schedule, and a genetic algorithm for grid resource scheduling based on user satisfaction is proposed, in which user satisfaction guides genetic mutation and thereby optimises the grid resource scheduling process. Experiments show that the algorithm substantially improves the quality of service of the schedule while maintaining good scheduling performance.
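
A satisfaction-guided genetic algorithm of the kind described can be sketched minimally as below; the chromosome encoding, the single satisfaction measure, and the GA parameters are assumptions, not the paper's three evaluation methods:

```python
# Minimal, assumed sketch: a chromosome maps each task to a resource, fitness
# blends makespan with a simple user-satisfaction score (meeting each task's
# deadline), and satisfaction drives how strongly a chromosome is mutated.
import random

random.seed(1)
task_len  = [400, 900, 300, 600]            # MI per task
deadlines = [5.0, 6.0, 4.0, 8.0]            # per-task deadlines (s)
mips      = [200, 400]                      # two resources

def evaluate(chrom):
    finish, loads = [0.0] * len(chrom), [0.0] * len(mips)
    for t, r in enumerate(chrom):           # tasks queue FCFS on their resource
        loads[r] += task_len[t] / mips[r]
        finish[t] = loads[r]
    satisfaction = sum(f <= d for f, d in zip(finish, deadlines)) / len(chrom)
    makespan = max(loads)
    return 0.5 * satisfaction + 0.5 / (1 + makespan)   # higher is better

def mutate(chrom):                          # satisfaction decides how much to perturb
    rate = 0.1 if evaluate(chrom) > 0.6 else 0.4
    return [random.randrange(len(mips)) if random.random() < rate else g for g in chrom]

pop = [[random.randrange(len(mips)) for _ in task_len] for _ in range(20)]
for _ in range(50):
    pop.sort(key=evaluate, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = max(pop, key=evaluate)
print(best, round(evaluate(best), 3))
```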

17.
Fog Computing (FC) based IoT applications encounter bottlenecks in data management and resource optimization due to dynamic IoT topologies, resource-limited devices, resource diversity, mismatched service quality, and complicated service-offering environments. The existing problems and emerging demands of FC-based IoT applications are hard to meet with the traditional IP-based Internet model. Therefore, in this paper, we focus on the Content-Centric Network (CCN) model to provide more efficient, flexible, and reliable data and resource management for fog-based IoT systems. We first propose a Deep Reinforcement Learning (DRL) algorithm that jointly considers the content type and the status of fog servers for content-centric data and computation offloading. Then, we introduce a novel virtual layer called FogOrch that orchestrates the management and performance requirements of fog-layer resources in an efficient manner via the proposed DRL agent. To show the feasibility of FogOrch, we develop a content-centric data offloading scheme (DRLOS) based on the DRL algorithm running on FogOrch. Through extensive simulations, we evaluate the performance of DRLOS in terms of total reward, computational workload, computation cost, and delay. The results show that the proposed DRLOS is superior to existing benchmark offloading schemes.

18.
Multiclass query scheduling in real-time database systems
In recent years, demand for real-time systems that can manipulate large amounts of shared data has led to the emergence of real-time database systems (RTDBS) as a research area. This paper focuses on the problem of scheduling queries in RTDBSs. We introduce and evaluate a new algorithm called Priority Adaptation Query Resource Scheduling (PAQRS) for handling both single-class and multiclass query workloads. The performance objective of the algorithm is to minimize the number of missed deadlines, while at the same time ensuring that any deadline misses are scattered across the different classes according to an administratively defined miss distribution. This objective is achieved by dynamically adapting the system's admission, memory allocation, and priority assignment policies according to its current resource configuration and workload characteristics. A series of experiments confirms that PAQRS is very effective for real-time query scheduling.
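
The miss-distribution objective can be illustrated with a small bookkeeping sketch that boosts classes missing more than their administratively defined share; the boost rule is an assumption, not PAQRS's actual admission, memory, or priority policies:

```python
# Track deadline misses per query class and boost the priority of classes that
# have absorbed more than their target share of misses. Illustration only.

target_share = {"gold": 0.1, "silver": 0.3, "bronze": 0.6}   # desired share of misses
misses = {c: 0 for c in target_share}

def record_miss(cls: str) -> None:
    misses[cls] += 1

def priority_boost(cls: str) -> float:
    """Positive when a class has absorbed more than its share of misses."""
    total = sum(misses.values())
    if total == 0:
        return 0.0
    return misses[cls] / total - target_share[cls]

for c in ["gold", "gold", "bronze"]:
    record_miss(c)
print({c: round(priority_boost(c), 2) for c in target_share})
# gold is over its 10% share (boost ≈ 0.57), bronze is under (≈ -0.27)
```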

19.
Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as 'services' to end-users under a usage-based payment model. It can leverage virtualized services on the fly, based on requirements (workload patterns and QoS) that vary with time. The application services hosted under the Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resource performance models in a repeatable manner, under varying system and user configurations and requirements, is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs), and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federations of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organizations such as HP Labs in the U.S.A. are using CloudSim in their investigations of Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in a hybrid federated-clouds environment. The results of this case study show that the federated Cloud computing model significantly improves the application QoS under fluctuating resource and service demand patterns. Copyright © 2010 John Wiley & Sons, Ltd.

20.
Although quality requirements (QRs) have become a major driver in today's software development, there have been very few real-world examples in the literature that demonstrate how to meet them. This paper presents such an example. Specifically, it describes the design of a partition-based distributed stock trading service system that satisfies a set of QRs related to resource utilization, performance, scalability, and availability. The paper evaluates this design through detailed experiments and discusses some design alternatives and the lessons learned. Central to this design are a static load distribution strategy and a dynamic load balancing strategy. The first strategy achieves an initially balanced workload on the system's server cluster during system initialization, whereas the second maintains this balanced workload throughout the system's execution. Together, these two strategies work in unison to ensure that the server resources are efficiently utilized, the user requests are processed with the required speed, the application is partitioned with sufficient room to scale, and the system is highly available. Copyright © 2011 John Wiley & Sons, Ltd.
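
The interplay of the two strategies can be sketched as a static hash partition plus a dynamic rebalancing step; the threshold and load model below are illustrative assumptions, not the paper's design:

```python
# Assumed sketch: a static hash partition gives each server an initial set of
# trading symbols, and a dynamic step moves the hottest symbol off any server
# whose load drifts too far above the cluster average.

def static_distribution(symbols, servers):
    """Initial placement: deterministically hash each trading symbol onto a server."""
    placement = {srv: [] for srv in servers}
    for s in symbols:
        placement[servers[sum(map(ord, s)) % len(servers)]].append(s)
    return placement

def rebalance(placement, symbol_load, threshold=1.25):
    """Move the hottest symbol off any server loaded above threshold x average."""
    server_load = {srv: sum(symbol_load[s] for s in syms)
                   for srv, syms in placement.items()}
    avg = sum(server_load.values()) / len(server_load)
    for server, symbols in placement.items():
        if server_load[server] > threshold * avg and len(symbols) > 1:
            victim = max(symbols, key=lambda s: symbol_load[s])
            target = min(server_load, key=lambda srv: server_load[srv])
            if target == server or symbol_load[victim] >= server_load[server] - server_load[target]:
                continue                      # moving it would not reduce the imbalance
            symbols.remove(victim)
            placement[target].append(victim)
            server_load[server] -= symbol_load[victim]
            server_load[target] += symbol_load[victim]
    return placement

placement = static_distribution(["AAPL", "MSFT", "GOOG", "AMZN"], ["s1", "s2"])
print(rebalance(placement, {"AAPL": 60, "MSFT": 5, "GOOG": 20, "AMZN": 5}))
# All four symbols hash onto s1 here, so the hot AAPL partition is pushed to s2.
```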
