20 similar documents found (search time: 46 ms)
1.
Economic factors are the key driver of workload scheduling and resource management in hybrid clouds, while issues such as data confidentiality and security and task dependencies place further demands on scheduling strategies. To address these issues, this paper proposes a scheduling strategy that jointly optimizes total cost: it considers not only the overall cost of secure data migration and outsourced execution but also analyzes task-dependency-aware scheduling within the application model. Simulation experiments on a cloud simulation platform, compared against existing resource scheduling algorithms, show that the proposed hybrid cloud scheduling strategy improves both throughput and overall cost.
2.
Scientific workflows are complex, data-intensive applications. Placing their data effectively in a hybrid cloud environment is a key problem, and the security requirements of hybrid clouds pose new challenges for data placement in scientific cloud workflows. Most traditional data placement methods partition datasets using a load-balancing model; this yields well-balanced placements, but transfer time is not optimal. To address this shortcoming, and considering the characteristics of data placement in hybrid clouds, this paper first designs a matrix partitioning model based on the degree of data-dependency disruption, generating the partition that disrupts data dependencies the least. It then proposes a data-center-oriented placement method that, following the partitioning model, places highly dependent datasets in the same data center as far as possible, thereby reducing the time spent transferring datasets across data centers. Experimental results show that the method effectively shortens cross-data-center data transfer time during scientific workflow execution.
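The co-location idea can be illustrated with a small sketch. This is not the paper's matrix-partition model; it is a simplified greedy heuristic under invented dependency weights and capacities, where dependency between two datasets is taken to be the number of tasks that use both.

```python
# Illustrative greedy co-location of dependent datasets across data centers.
# NOT the paper's matrix-partition model; dependency weights and capacities
# are hypothetical. Assumes total capacity is sufficient for all datasets.

def place_datasets(dependency, capacities):
    """dependency: dict mapping frozenset({a, b}) -> dependency weight.
    capacities: list of per-data-center slot counts.
    Returns a dict mapping each dataset to a data-center index."""
    placement = {}
    load = [0] * len(capacities)
    # Process dataset pairs from most to least dependent, so the strongest
    # dependencies are the ones most likely to end up co-located.
    for pair, _weight in sorted(dependency.items(), key=lambda kv: -kv[1]):
        for d in pair:
            if d in placement:
                continue
            partner = next(p for p in pair if p != d)
            target = placement.get(partner)
            # Fall back to the least-loaded center if the partner is not
            # placed yet or its center is full.
            if target is None or load[target] >= capacities[target]:
                target = min(range(len(capacities)),
                             key=lambda i: load[i] / capacities[i])
            placement[d] = target
            load[target] += 1
    return placement
```

With two centers of two slots each, the strongly dependent pair lands together and the weakly dependent third dataset spills to the other center, which is exactly the cross-transfer-reducing behavior the abstract describes.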
3.
4.
5.
To better support teaching and learning, educational institutions need to continually optimize their IT infrastructure, resources, and business processes. Cloud computing, an emerging computing paradigm, is well suited to this task: by providing infrastructure, platforms, and educational services, it changes how IT resources are used and consumed and will have a major impact on education. This article builds a virtual learning environment based on a hybrid cloud and discusses the challenges of realizing it, with the goals of improving the scalability and reliability of applications and services, cutting institutions' hardware and software costs, and reducing spending on software updates and data-center maintenance.
6.
Open source has become the direction of cloud computing, and OpenStack is one of the hottest topics in the field. As the largest contributor to OpenStack, Red Hat has a perspective that offers valuable lessons for both vendors and users.
7.
Research on service scheduling and resource scheduling in cloud computing environments (Cited: 1; self-citations: 0; citations by others: 1)
Service scheduling and resource scheduling strongly affect the performance of cloud computing. Building on an analysis of existing cloud scheduling models, and targeting the data-intensive and compute-intensive nature of cloud workloads, this paper proposes a hierarchical scheduling strategy for service and resource scheduling. The strategy partitions tasks to determine job priorities, and allocates resources according to data locality and the overall task completion rate. In the numerical evaluation, hierarchical scheduling is compared with existing schedulers. Experimental results show that the strategy effectively improves resource utilization and offers a direction for further research on cloud services.
8.
Fan Guisheng; Wang Peng; Yu Huiqun; Li Zengpeng, Journal of Chinese Computer Systems, 2024, (7): 1787-1792
With the advent of cloud computing and the rapid deployment of cloud infrastructure, more and more large workflow applications are actively migrating to the cloud. Optimizing execution cost and improving resource utilization while meeting task deadline constraints has become a new challenge. This paper proposes DHRS, a dynamic hybrid resource scheduling algorithm for cloud workflows that meets task deadline constraints while keeping the cost of mixed resource leasing low. First, tasks are preprocessed according to their precedence relations, sorted in ascending order of probability, and assigned sub-deadlines. Then, in that order, the algorithm selects for each workflow task a service that meets its deadline at lower cost. Finally, for each service it dynamically chooses between reserved and on-demand resources, scheduling into idle periods of reserved resources to reduce cost further. Experiments on randomly generated scientific workflows, compared against existing algorithms, show that DHRS has clear advantages in meeting deadline constraints while reducing execution cost.
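The "cheapest service that still meets the sub-deadline" step can be sketched as follows. This is a hypothetical illustration, not the published DHRS algorithm; the service names, speeds, and prices are invented.

```python
# Sketch of deadline-constrained, cost-minimizing service selection in the
# spirit of the step described above. All parameters are hypothetical.

def pick_service(workload, sub_deadline, services):
    """workload: abstract work units; sub_deadline: hours.
    services: list of (name, speed_units_per_hour, price_per_hour).
    Returns the name of the cheapest service whose runtime fits the
    sub-deadline; raises if none does."""
    feasible = []
    for name, speed, price in services:
        runtime_h = workload / speed
        if runtime_h <= sub_deadline:
            # Cost grows with runtime, so a slower but cheaper service can
            # still win as long as it meets the sub-deadline.
            feasible.append((runtime_h * price, name))
    if not feasible:
        raise ValueError("no service meets the sub-deadline")
    return min(feasible)[1]
```

With a loose sub-deadline the slow cheap service is selected; tightening the sub-deadline forces the faster, more expensive one, which mirrors the deadline/cost trade-off the abstract describes.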
9.
Zhang Jianjiao, Digital Design (Surface), 2018, (3): 32-33
This article outlines the basic concepts of the hybrid cloud architecture: a hybrid cloud combines public and private clouds and has become the main model and development direction of cloud computing in recent years. In current university IT construction, established cloud data centers must continually expand their compute and storage resources, and hardware costs are high; the question is how to achieve scalability while ensuring high-performance computing and data security. The article analyzes the concrete scenarios and practical use of the hybrid cloud architecture in university information systems, focusing on the use of hybrid cloud technology in a campus cloud-drive service.
10.
11.
Policy based resource allocation in IaaS cloud (Cited: 1; self-citations: 0; citations by others: 1)
Amit Nathani, Sanjay Chaudhary, Gaurav Somani, Future Generation Computer Systems, 2012, 28(1): 94-103
In the present scenario, most Infrastructure as a Service (IaaS) clouds use simple resource allocation policies such as immediate and best-effort. The immediate allocation policy allocates resources if they are available; otherwise the request is rejected. The best-effort policy also allocates the requested resources if available; otherwise the request is placed in a FIFO queue. A cloud provider cannot satisfy all requests at any given time because its resources are finite. Haizea is a resource lease manager that tries to address these issues by introducing more sophisticated resource allocation policies. Haizea uses resource leases as its resource allocation abstraction and implements these leases by allocating virtual machines (VMs). Haizea supports four kinds of resource allocation policies: immediate, best-effort, advance reservation, and deadline-sensitive. This work provides a better way to support deadline-sensitive leases in Haizea while minimizing the total number of leases it rejects. The proposed dynamic-planning-based scheduling algorithm is implemented in Haizea and can admit new leases and rebuild the schedule whenever a new lease can be accommodated. Experimental results show that it improves resource utilization and lease acceptance compared to Haizea's existing algorithm.
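The deadline-sensitive admission idea can be illustrated with a toy model. This is a sketch in the spirit of, not taken from, Haizea's planning: a single node, leases as (duration, deadline) pairs, and an earliest-deadline-first (EDF) feasibility test deciding admission.

```python
# Toy admission control for deadline-sensitive leases on one node. A new
# lease is admitted only if EDF ordering still meets every accepted lease's
# deadline. Lease durations/deadlines below are hypothetical time units.

def edf_feasible(leases):
    """leases: list of (duration, deadline). True if all fit under EDF."""
    t = 0
    for duration, deadline in sorted(leases, key=lambda l: l[1]):
        t += duration
        if t > deadline:
            return False
    return True

def admit(accepted, new_lease):
    """Admit new_lease only if the combined set stays schedulable."""
    if edf_feasible(accepted + [new_lease]):
        accepted.append(new_lease)
        return True
    return False
```

Rejecting a lease only when no feasible schedule exists is what keeps the rejection count low compared to a policy that rejects whenever resources are busy at submission time.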
12.
Cloud computing is emerging as an important platform for business, personal and mobile computing applications. In this paper, we study a stochastic model of cloud computing, where jobs arrive according to a stochastic process and request resources like CPU, memory and storage space. We consider a model where the resource allocation problem can be separated into a routing or load balancing problem and a scheduling problem. We study the join-the-shortest-queue routing and power-of-two-choices routing algorithms with the MaxWeight scheduling algorithm. It was known that these algorithms are throughput optimal. In this paper, we show that these algorithms are queue length optimal in the heavy traffic limit.
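Power-of-two-choices routing is simple to state in code: each arriving job samples two queues uniformly at random and joins the shorter one. The sketch below only illustrates the routing rule on static counters (no service process and no MaxWeight scheduling); the queue count and arrival count are arbitrary.

```python
# Minimal power-of-two-choices routing sketch: sample two queues at random,
# join the shorter. No departures are modeled, so queue lengths here are
# simply cumulative assignment counts.
import random

def route_po2(queues, rng):
    i, j = rng.sample(range(len(queues)), 2)
    return i if queues[i] <= queues[j] else j

rng = random.Random(0)
queues = [0] * 10
for _ in range(1000):
    queues[route_po2(queues, rng)] += 1
```

Even in this crude setting, sampling two queues instead of one keeps the loads dramatically more balanced than purely random assignment, which is the intuition behind the heavy-traffic optimality results the abstract reports.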
13.
Hybrid cloud computing has been receiving increasing attention recently. In order to realize the full potential of the hybrid cloud platform, an architectural framework for efficiently coupling public and private clouds is necessary. As resource failures due to the increasing functionality and complexity of hybrid cloud computing are inevitable, a failure-aware resource provisioning algorithm that is capable of attending to end-users' quality of service (QoS) requirements is paramount. In this paper, we propose a scalable hybrid cloud infrastructure as well as resource provisioning policies to assure the QoS targets of users. The proposed policies take into account the workload model and the failure correlations to redirect users' requests to the appropriate cloud providers. Using real failure traces and a workload model, we evaluate the proposed resource provisioning policies to demonstrate their performance, cost, and performance-cost efficiency. Simulation results reveal that in a realistic working condition, while adopting user estimates for the requests in the provisioning policies, we are able to improve users' QoS by about 32% in terms of deadline violation rate and 57% in terms of slowdown, with a limited cost on a public cloud.
14.
Cloud infrastructures consisting of heterogeneous resources are increasingly being utilized for hosting large-scale distributed applications from diverse users with discrete needs. The multifarious cloud applications impose varied demands for computational resources along with multitude of performance implications. Successful hosting of cloud applications necessitates service providers to take into account the heterogeneity existing in the behavior of users, applications and system resources while respecting the user’s agreed Quality of Service (QoS) criteria. In this work, we propose a QoS-Aware Resource Elasticity (QRE) framework that allows service providers to make an assessment of the application behavior and develop mechanisms that enable dynamic scalability of cloud resources hosting the application components. Experimental results conducted on the Amazon EC2 cloud clearly demonstrate the effectiveness of our approach while complying with the agreed QoS attributes of users.
15.
Journal of Manufacturing Systems, 2014, 33(4): 551-566
Cloud manufacturing is emerging as a novel business paradigm for the manufacturing industry, in which dynamically scalable and virtualised resources are provided as consumable services over the Internet. A handful of cloud manufacturing systems are proposed for different business scenarios, most of which fall into one of three deployment modes, i.e. private cloud, community cloud, and public cloud. One of the challenges in the existing solutions is that few of them are capable of adapting to changes in the business environment. In fact, different companies may have different cloud requirements in different business situations; even a company at different business stages may need different cloud modes. Nevertheless, there is limited support on migrating to different cloud modes in existing solutions. This paper proposes a Hybrid Manufacturing Cloud that allows companies to deploy different cloud modes for their periodic business goals. Three typical cloud modes, i.e. private cloud, community cloud and public cloud are supported in the system. Furthermore, it enables companies to set self-defined access rules for each resource so that unauthorised companies will not have access to the resource. This self-managed mechanism gives companies full control of their businesses and boosts their trust with enhanced privacy protection. A unified ontology is developed to enhance semantic interoperability throughout the whole process of service provision in the clouds. A Cloud Management Engine is developed to manage all the user-defined clouds, in which Semantic Web technologies are used as the main toolkit. The feasibility of this approach is verified through a group of companies, each of which has complex access requirements for their resources. In addition, a use case is carried out between customers and service providers. This way, optimal service is delivered through the proposed system.
16.
An important feature of most cloud computing solutions is auto-scaling, an operation that enables dynamic changes on resource capacity. Auto-scaling algorithms generally take into account aspects such as system load and response time to determine when and by how much a resource pool capacity should be extended or shrunk. In this article, we propose a scheduling algorithm and auto-scaling triggering strategies that explore user patience, a metric that estimates the perception end-users have of the Quality of Service (QoS) delivered by a service provider, based on the ratio between expected and actual response times for each request. The proposed strategies help reduce costs with resource allocation while maintaining perceived QoS at adequate levels. Results show reductions in resource-hour consumption by up to approximately 9% compared to traditional approaches.
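The patience metric described above (expected over actual response time, averaged over recent requests) translates directly into a scale-out trigger. The sketch below is illustrative only; the threshold and sample window are assumptions, not values from the article.

```python
# Patience-based scale-out trigger sketch. Patience per request is
# expected_response_time / actual_response_time; values below 1 mean the
# service is slower than users expect. Threshold is a made-up example.

def mean_patience(samples):
    """samples: list of (expected_s, actual_s) response-time pairs."""
    return sum(exp / act for exp, act in samples) / len(samples)

def should_scale_out(samples, threshold=0.8):
    # Trigger extra capacity once average patience drops below the threshold.
    return mean_patience(samples) < threshold
```

Because the trigger fires on perceived QoS rather than raw load, capacity is only added when users would actually notice the slowdown, which is where the reported cost savings come from.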
17.
Walid Saad, Journal of Network and Computer Applications, 2012, 35(1): 348-356
The major driver for deploying next generation wireless cellular systems is their ability to efficiently deliver resource-demanding services, many of which require symmetric communication between an uplink mobile user and a downlink mobile user that belong to the same network. In this work, we propose a utility-based joint uplink/downlink scheduling algorithm suitable for wireless services involving pairwise communication among mobile users. While most existing literature focuses on downlink-only or uplink-only scheduling algorithms, the proposed algorithm aims at ensuring a utility function that jointly captures the quality of service in terms of delay and channel quality on both links. By jointly considering the time varying channel conditions in both the uplink and the downlink directions, the proposed algorithm avoids wasting resources and achieves notable performance gains in terms of an increased number of active connections, a lower packet drop rate, and increased network throughput. These gains come at a tradeoff cost in terms of complexity and signaling overhead. To reduce overhead in practical scenarios, we propose an implementation over clusters within the network.
18.
Infrastructure as a Service (IaaS) cloud providers typically offer multiple service classes to satisfy users with different requirements and budgets. Cloud providers are faced with the challenge of estimating the minimum resource capacity required to meet Service Level Objectives (SLOs) defined for all service classes. This paper proposes a capacity planning method that is combined with an admission control mechanism to address this challenge. The capacity planning method uses analytical models to estimate the output of a quota-based admission control mechanism and find the minimum capacity required to meet availability SLOs and admission rate targets for all classes. An evaluation using trace-driven simulations shows that our method estimates the best cloud capacity with a mean relative error of 2.5% with respect to the simulation, compared to a 36% relative error achieved by a single-class baseline method that does not consider admission control mechanisms. Moreover, our method exhibited a high SLO fulfillment for both availability and admission rates, and obtained mean CPU utilization over 91%, while the single-class baseline method had values not greater than 78%.
19.
Resource allocation strategies in virtualized data centers have received considerable attention recently as they can have substantial impact on the energy efficiency of a data center. This led to new decision and control strategies with significant managerial impact for IT service providers. We focus on dynamic environments where virtual machines need to be allocated and deallocated to servers over time. Simple bin packing heuristics have been analyzed and used to place virtual machines upon arrival. However, these placement heuristics can lead to suboptimal server utilization, because they cannot consider virtual machines, which arrive in the future. We ran extensive lab experiments and simulations with different controllers and different workloads to understand which control strategies achieve high levels of energy efficiency in different workload environments. We found that combinations of placement controllers and periodic reallocations achieve the highest energy efficiency subject to predefined service levels. While the type of placement heuristic had little impact on the average server demand, the type of virtual machine resource demand estimator used for the placement decisions had a significant impact on the overall energy efficiency.
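A concrete example of the bin packing heuristics mentioned above is first-fit placement: each arriving VM goes to the first server with enough spare capacity, and a new server is powered on only when none fits. The sketch below uses a single CPU dimension with invented integer capacities; it illustrates the heuristic class, not the paper's specific controllers.

```python
# First-fit VM placement sketch (one resource dimension, illustrative units).
# Each new server starts with server_capacity; a VM lands on the first
# server whose residual capacity covers its demand.

def first_fit(vm_demands, server_capacity):
    """Returns the list of per-server residual capacities after placing
    all VMs in arrival order."""
    servers = []  # residual capacity of each powered-on server
    for demand in vm_demands:
        for i, free in enumerate(servers):
            if free >= demand:
                servers[i] = free - demand
                break
        else:
            # No existing server fits: power on a new one.
            servers.append(server_capacity - demand)
    return servers
```

Because first-fit decides per arrival with no knowledge of future VMs, it can strand capacity across servers; that myopia is exactly why the abstract pairs placement heuristics with periodic reallocation.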
20.
In recent years, cloud computing has been one of the most widely discussed topics in the field of Information Technology. Owing to the popularity of services offered by cloud environments, several critical aspects of security have aroused interest in the academic and industrial world, where there is a concern to provide efficient mechanisms to combat a wide range of threats. As is well known, the application of security techniques and methodologies has a direct influence on the performance of the system, since security and performance are two quantities that are inversely proportional. This means that if the service providers fail to manage their computing infrastructure efficiently, the demand for services may not be met with the quality required by clients, including security and performance requirements, and the computational resources may be used inefficiently. The aim of this paper was to define QoS-driven approaches for cloud environments on the basis of the results of a performance evaluation of a service in which different security mechanisms are employed. These mechanisms impose additional overhead on the performance of the service, and to counter this, an attempt was made to change computational resources dynamically and on-the-fly. On the basis of the results, it could be shown that in a cloud environment, it is possible to maintain the performance of the service even with the overhead imposed by the security mechanisms, through an alteration in the virtualized computational resources. However, this change in the amount of resources had a direct effect on the response variables.