Similar Documents
20 similar documents found (search time: 15 ms)
1.
To address low task-scheduling efficiency and uneven resource allocation in cloud computing, an improved fireworks algorithm (FWA) is fused with the artificial bee colony (ABC) algorithm into IFWA-ABC. First, the cloud resource task-scheduling problem is formulated. Next, FWA initialization is improved with chaotic opposition-based learning and the Cauchy distribution, the explosion radii of core and non-core fireworks are optimized separately, and the best FWA individual is obtained through an improved ABC algorithm. Finally, IFWA-ABC is applied to cloud task scheduling. In simulation experiments comparing it with FWA and ABC on virtual machines, execution time, cost, and energy consumption, IFWA-ABC shows a clear advantage and effectively improves the efficiency of cloud resource allocation.
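As an illustration of the chaotic opposition-based initialization described above, here is a minimal Python sketch; the logistic map, the placeholder sphere objective, and all names are assumptions for illustration, not the paper's code:

```python
import random

def logistic_map(x, mu=4.0):
    """One step of the logistic map, a common chaotic sequence generator."""
    return mu * x * (1.0 - x)

def init_population(n, dim, lo, hi, seed=1):
    """Chaotic initialization plus opposition-based learning: generate n
    chaotic candidates, add their opposites (lo + hi - x), and keep the
    best n under a placeholder sphere objective (the paper's objective
    would be task-scheduling cost)."""
    rng = random.Random(seed)

    def objective(ind):  # placeholder fitness, assumed for the sketch
        return sum(v * v for v in ind)

    x = rng.random()
    pop = []
    for _ in range(n):
        ind = []
        for _ in range(dim):
            x = logistic_map(x)          # chaotic value in [0, 1]
            ind.append(lo + (hi - lo) * x)
        pop.append(ind)
    opposites = [[lo + hi - v for v in ind] for ind in pop]
    return sorted(pop + opposites, key=objective)[:n]

pop = init_population(10, 3, -5.0, 5.0)
```

Opposition-based learning doubles the candidate pool at no extra sampling cost, which tends to spread the initial population more evenly over the search space.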

2.
Research on a service-oriented cloud data mining engine   Cited: 1 (self-citations: 0, others: 1)
When data mining algorithms process massive data, their scalability is constrained. Across business and scientific domains, knowledge-discovery processes and requirements differ widely, so effective mechanisms are needed to design and run various types of distributed data mining applications. This paper proposes CloudDM, a service-oriented cloud data mining engine framework. Unlike grid-based distributed data mining frameworks, CloudDM exploits the massive-data processing capability of the open-source cloud platform Hadoop and supports the design and execution of distributed data mining applications in a service-oriented fashion; the paper describes the key components and implementation techniques of the engine. Built on a service-oriented software architecture and a cloud-based data mining engine, it can effectively address mass data storage, data processing, and the interoperability of data mining algorithms in massive data mining.

3.
Scheduling and planning job execution of loosely coupled applications   Cited: 1 (self-citations: 1, others: 0)
Growth in the availability of data-collection devices has given individual researchers access to large quantities of data that need to be analyzed. As a result, many labs and departments have acquired considerable compute resources. However, effective and efficient utilization of those resources remains a barrier for individual researchers because distributed computing environments are difficult to understand and control. We introduce a methodology and a tool that automatically manipulates and understands job-submission parameters to realize a range of job-execution alternatives across a distributed compute infrastructure. Generated alternatives are presented to the user at submission time as tradeoffs mapped onto two conflicting objectives, namely job cost and runtime. This presentation lets a user immediately and quantitatively observe viable options for their job execution, and thus interact with the environment at a true service level. Generated job-execution alternatives have been tested both in simulation and on real-world resources; in both cases, the runtime of the generated alternatives matches the observed runtime within 5% on average.
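The two-objective presentation described in this abstract amounts to showing the user only non-dominated (cost, runtime) alternatives. A minimal sketch of that filtering step, purely illustrative and not the authors' tool:

```python
def pareto_front(alts):
    """alts: list of (cost, runtime) pairs; keep each alternative unless
    some other one is no worse on both objectives and strictly better
    on at least one (i.e. it dominates)."""
    return [a for a in alts
            if not any(b[0] <= a[0] and b[1] <= a[1] and
                       (b[0] < a[0] or b[1] < a[1]) for b in alts)]

alternatives = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
front = pareto_front(alternatives)
```

Only the surviving alternatives need be shown to the user; every dropped one is strictly worse than something kept.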

4.
Cloud resource scheduling requires mapping cloud workloads onto cloud resources. Scheduling results can be optimized by treating Quality of Service (QoS) parameters as inherent requirements of scheduling. In the existing literature, only a few resource-scheduling algorithms consider cost and execution-time constraints, but efficient scheduling requires better optimization of QoS parameters. The main aim of this paper is to present an efficient strategy for executing workloads on cloud resources. A particle swarm optimization based resource-scheduling technique named BULLET has been designed to execute workloads effectively on the available resources. The performance of the proposed technique has been evaluated in a cloud environment. The experimental results show that it efficiently reduces execution cost, time, and energy consumption along with other QoS parameters.
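As a sketch of what a PSO-based scheduler does at its core, the toy below maps workloads to resources by rounding continuous particle positions to resource indices. This is generic textbook PSO under assumed inertia and acceleration coefficients, not the BULLET implementation:

```python
import random

def pso_schedule(costs, n_particles=20, iters=60, seed=0):
    """costs[w][r] = cost of running workload w on resource r.
    Each particle dimension is a (continuous) resource choice for one
    workload; fitness is the total cost of the rounded assignment."""
    rng = random.Random(seed)
    n_w, n_r = len(costs), len(costs[0])

    def fitness(pos):
        return sum(costs[w][min(n_r - 1, max(0, int(round(p))))]
                   for w, p in enumerate(pos))

    swarm = [[rng.uniform(0, n_r - 1) for _ in range(n_w)]
             for _ in range(n_particles)]
    vel = [[0.0] * n_w for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n_w):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - p[d])    # cognitive
                             + 1.5 * r2 * (gbest[d] - p[d]))      # social
                p[d] = min(n_r - 1, max(0.0, p[d] + vel[i][d]))
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=fitness)
    return [min(n_r - 1, max(0, int(round(p)))) for p in gbest], fitness(gbest)

costs = [[4, 1, 9], [2, 8, 3], [7, 6, 2]]
assign, total = pso_schedule(costs)
```

A real scheduler would fold execution time, cost, and energy into one fitness value (or optimize them jointly); here a single cost matrix stands in for all QoS terms.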

5.
To reduce the execution cost and data-center energy consumption of scientific workflow scheduling in cloud environments, an energy-efficiency-aware workflow-scheduling cost-optimization algorithm, CWCO-EA, is proposed. Under a deadline constraint, with the goals of minimizing workflow execution cost and reducing energy consumption, the algorithm performs task scheduling in four steps. First, a virtual-machine selection policy based on the notion of cost utility maps each task to its best VM under a sub-makespan constraint. Second, a serial and parallel task-merging policy lowers execution cost and energy consumption simultaneously. Third, an idle-VM reuse mechanism improves the utilization of rented VMs and further raises energy efficiency. Finally, a task-slack policy reclaims capacity of rented VMs, saving energy. Simulation experiments on four scientific workflows show that, compared with algorithms of the same type, CWCO-EA reduces both workflow execution cost and energy consumption while meeting deadlines.
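The cost-utility VM-selection step can be illustrated with a toy rule: among VMs fast enough to meet the task's sub-deadline, take the one that finishes the task most cheaply. The MIPS/price encoding and the fastest-VM fallback are assumptions; the paper's actual utility function is not reproduced here:

```python
def pick_vm(task_len, vms, sub_deadline):
    """task_len: task size in million instructions; vms: list of
    (speed_mips, price_per_second) pairs; sub_deadline: seconds.
    Pick the cheapest VM that meets the sub-deadline; if none can,
    fall back to the fastest VM (an assumed tie-breaking rule)."""
    feasible = [(speed, price) for speed, price in vms
                if task_len / speed <= sub_deadline]
    if not feasible:
        return max(vms, key=lambda v: v[0])
    return min(feasible, key=lambda v: (task_len / v[0]) * v[1])

vms = [(1000, 0.05), (2000, 0.12), (4000, 0.30)]
choice = pick_vm(3000, vms, 2.0)
```

With a 3000-MI task and a 2 s sub-deadline, the 1000-MIPS VM is too slow, and of the remaining two the 2000-MIPS VM finishes for 0.18 versus 0.225, so it wins despite the faster option.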

6.
In recent years, Wireless Sensor Network (WSN) technology has been increasingly employed in various application domains. The extensive use of WSNs has posed new challenges in terms of both scalability and reliability. This paper proposes Sensor Node File System (SENFIS), a novel file system for sensor nodes that addresses both concerns. SENFIS can be used in two broad scenarios. First, it can be employed transparently as permanent storage for distributed TinyDB queries, increasing reliability and scalability. Second, it can be used directly by a WSN application for permanent storage of data on the WSN nodes. The experimental section shows that the SENFIS implementation makes efficient use of resources in terms of energy consumption, memory footprint, and flash wear levelling, while achieving execution times similar to those of existing WSN file systems.

7.
Xie Bing. Application Research of Computers, 2020, 37(10): 3014-3019
Mobile cloud computing can reduce execution latency and improve the energy efficiency of mobile devices by offloading application tasks, but with multiple candidate cloud sites the offloading decision is an NP-hard problem. To address it, an energy-efficient computation-offloading algorithm is proposed. To optimize both execution time and cost under deadline and budget constraints, the algorithm proceeds in three steps. First, according to the user's preference between time and cost, a CTTPO algorithm partitions the application into offloadable modules (executed at cloud sites) and non-offloadable modules (executed on the mobile device). Then, to schedule the offloadable modules across cloud sites, an MTS algorithm based on teaching-learning-based optimization produces the most efficient application schedule. Finally, an ESM algorithm based on dynamic voltage scaling further reduces execution energy through performance scaling across sites. Simulations on two random application-structure graphs show that the algorithm outperforms the baselines in execution efficiency, execution cost, and energy consumption.

8.
With the acceleration of new infrastructure construction, cloud computing is gaining fresh development momentum. As the infrastructure of cloud computing, data centers keep upgrading their internal servers, which makes compute resources heterogeneous. How to schedule jobs efficiently in a heterogeneous cloud environment is a current research focus. For this multi-objective scheduling problem in heterogeneous clouds, an AHP-weighted multi-objective reinforcement-learning job-scheduling method is designed. It first defines execution time, platform energy consumption, ...

9.
The parameter server (PS), as the state-of-the-art distributed framework for large-scale iterative machine learning tasks, has been extensively studied. However, existing PS-based systems often depend on in-memory implementations. Under memory constraints, machine learning (ML) developers cannot train large-scale ML models on their rather small local clusters, and renting large-scale cloud servers is often economically infeasible for research teams and small companies. In this paper, we propose a disk-resident parameter server system named DRPS, which reduces the hardware requirements of large-scale machine learning tasks by storing high-dimensional models on disk. To further improve the performance of DRPS, we build an efficient index structure for parameters to reduce disk I/O cost. Based on this index structure, we propose a novel multi-objective partitioning algorithm for the parameters. Finally, a flexible worker-selection parallel model of computation (WSP) is proposed to strike the right balance between inconsistent parameter versions (staleness) and inconsistent execution progress (stragglers). Extensive experiments on many typical machine learning applications with real and synthetic datasets validate the effectiveness of DRPS.
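The staleness/straggler balance that WSP targets is easiest to see in the classic stale-synchronous rule, sketched below; this is illustrative only, since the abstract does not spell out WSP's actual worker-selection logic:

```python
def allowed_to_proceed(worker_clock, all_clocks, staleness):
    """SSP-style staleness bound: a worker at iteration `worker_clock`
    may proceed only if the slowest worker is at most `staleness`
    iterations behind it. staleness=0 is bulk-synchronous (no stale
    reads, but stragglers block everyone); a large bound tolerates
    stragglers at the price of staler parameters."""
    return worker_clock - min(all_clocks) <= staleness

clocks = [5, 7, 8]                               # per-worker iteration counts
fast_ok = allowed_to_proceed(8, clocks, 3)       # slowest is 3 behind
fast_blocked = allowed_to_proceed(8, clocks, 2)  # bound of 2 is exceeded
```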

10.
Due to the energy crisis of recent years, energy waste and sustainability have been brought into public attention and under industrial and scientific scrutiny. Obtaining high performance at a reduced cost in cloud environments has reached a turning point where computing power is no longer the main concern: the emphasis is shifting to managing energy efficiently, and providing techniques for measuring the energy requirements of cloud systems becomes of capital importance. Currently there are different methods for measuring energy consumption in computer systems. The first consists in using power meter devices, which measure the aggregated power use of a machine. Another method involves directly instrumenting the motherboard with multimeters to obtain each power connector's voltage and current, thus measuring real-time power consumption. These techniques provide very accurate results, but they are not suitable for large-scale environments. In contrast, simulation techniques scale well for experiments on energy consumption in cloud environments. In this paper we propose E-mc2, a formal framework integrated into the iCanCloud simulation platform for modelling the energy requirements of cloud computing systems.

11.
Existing simulators are designed to simulate only a few thousand nodes because of the tight integration of their modules. With such limited simulator scalability, researchers and developers cannot simulate protocols and algorithms in detail; cloud simulators provide geographically distributed data-center environments but lack support for execution on distributed systems. In this paper, we propose a distributed simulation framework referred to as CloudSimScale. The framework is built on top of a heavily adapted CloudSim, with communication among modules managed using IEEE Std 1516 (High Level Architecture). The underlying modules can run on the same or different physical systems and still discover and communicate with one another. The proposed framework thus provides scalability across distributed systems and interoperability across modules and simulators.

12.
Within the context of cloud computing, efficient resource management is of great importance, as it can result in higher scalability and significant energy and cost reductions over time. Because of the high complexity and costs of cloud environments, however, newly developed resource allocation strategies are often validated only by means of simulations, for example using CloudSim or custom-developed simulation tools. This article describes a general approach for the validation of cloud resource allocation strategies, illustrating the importance of experimental validation on physical testbeds. Furthermore, the design and implementation of Raspberry Pi as a Service (RPiaaS), a low-cost embedded testbed built from Raspberry Pi nodes, is presented. RPiaaS aims to ease the step from simulations toward experimental evaluations on larger cloud testbeds and is designed using a microservice architecture, where experiments and all required management services run inside containers. The performance of the RPiaaS testbed is evaluated using several benchmark experiments. The results not only show that the overhead of using containers and running the required RPiaaS services is minimal, but also provide useful insights for scaling experiments up from the Raspberry Pi testbed to a larger, more traditional cloud testbed. The introduced validation approach is then illustrated using a case study on the allocation of hierarchically structured tenant data, and the results obtained through simulations are compared to the experimental results. The RPiaaS testbed proved to be a very useful tool for initial experimental validation before moving experiments to a large-scale testbed.

13.
Since the concept of merging the capabilities of mobile devices and cloud computing is becoming increasingly popular, an important question is how to optimally schedule services and tasks between the device and the cloud. The main objective of this article is to investigate the possibilities for using machine learning on mobile devices to manage the execution of services within the framework of Mobile Cloud Computing. An agent-based architecture with learning capabilities is proposed to solve this problem, and two learning strategies are considered: supervised and reinforcement learning. The solution leverages, among other things, knowledge about mobile device resources, network connection possibilities, and device power consumption, on the basis of which a decision is made about where a given task should be executed. By employing machine learning techniques, the agent running on a mobile device gains experience in determining the optimal place for executing a given type of task. The research conducted verified the proposed solution in the domain of multimedia file conversion and demonstrated its usefulness in reducing task-execution time. Using the experience gathered over successive series of tests, the agent became more efficient at assigning the multimedia-conversion task to either the mobile device or cloud computing resources.

14.
Cloud data centers consume enormous amounts of electricity, yet most achieve low resource utilization, typically only 15%-20%; a considerable number of servers sit idle, wasting large amounts of energy. To reduce data-center energy consumption effectively, an energy-aware virtual-machine scheduling algorithm for heterogeneous cluster systems, PVMAP, is proposed. Simulation results show that, compared with the classic PABFD algorithm, PVMAP consumes noticeably less energy and offers better scalability and stability. Moreover, as the number of <Hosts, VMs> grows, the total number of VM migrations and of shut-down hosts grows more slowly under PVMAP than under PABFD.

15.
With the rapid development of cloud computing, server energy consumption in data centers has surged, causing serious economic and environmental problems. Reducing data-center energy consumption is important both for cutting operating costs and for meeting the global carbon-peaking and carbon-neutrality ("dual carbon") goals. Accordingly, building and estimating server energy models at different levels has become a research hotspot in recent years. This survey systematically summarizes work on server energy models at the hardware and software levels. At the hardware level, whole-server models are classified into additive models, models based on system utilization, and other models; component-level models are also summarized, covering CPU, memory, disk, and network interface. At the software level, server energy models are grouped by machine-learning category into supervised, unsupervised, and reinforcement learning. The survey further compares the strengths, weaknesses, and applicable scenarios of different energy models and outlines future research directions.
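The two hardware-level model families mentioned, utilization-based and additive, can be written down directly; the constants below are illustrative placeholders, not measured values:

```python
def linear_power(u, p_idle=100.0, p_max=250.0):
    """Classic utilization-based model: power in watts grows linearly
    from idle to peak with CPU utilization u in [0, 1]."""
    return p_idle + (p_max - p_idle) * u

def additive_power(cpu_w, mem_w, disk_w, nic_w, p_static=50.0):
    """Additive model: whole-server power as a static term plus the
    sum of per-component power terms (CPU, memory, disk, NIC)."""
    return p_static + cpu_w + mem_w + disk_w + nic_w

half_load = linear_power(0.5)            # 175.0 W under this model
component_sum = additive_power(30, 10, 5, 3)
```

The linear model needs only a utilization counter, which is why it is so widely used; the additive form trades that simplicity for per-component attribution.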

16.
An unprecedented growth in mobile data traffic has drawn attention from academia and industry. Mobile cloud computing is an emerging paradigm combining cloud computing and mobile networks to alleviate the resource constraints of mobile devices; it can greatly improve network quality of service and efficiency and make good use of available network resources. Mobile cloud computing not only inherits the strong computing capacity and massive storage of cloud computing, but also overcomes time and geographical restrictions, letting mobile users offload complex computation to powerful cloud servers for execution anytime and anywhere. To this end, an optimal task-workflow scheduling scheme is proposed for mobile devices, based on the dynamic voltage and frequency scaling (DVFS) technique and the whale optimization algorithm. By considering three factors, namely task execution position, task execution sequence, and the operating voltage and frequency of mobile devices, this study trades off performance against energy consumption by jointly optimizing task completion time and energy consumption. Finally, extensive simulation results demonstrate that the scheme performs well in terms of efficiency and operating cost, providing feasible solutions to similar optimization problems in mobile cloud computing.
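The DVFS tradeoff at the heart of such a scheme follows from the dynamic CMOS power model P = C_eff * V^2 * f: lowering voltage and frequency stretches runtime but cuts energy. A numeric sketch with assumed constants (not taken from the paper):

```python
def dvfs_energy(cycles, volt, freq, c_eff=1e-9):
    """Runtime and dynamic energy for a task of `cycles` CPU cycles at
    operating point (volt, freq): t = cycles / f and
    E = C_eff * V^2 * f * t = C_eff * V^2 * cycles."""
    t = cycles / freq
    energy = c_eff * volt ** 2 * freq * t
    return t, energy

# A 2e9-cycle task at a high and a low operating point (assumed values):
t_hi, e_hi = dvfs_energy(2e9, 1.2, 2.0e9)   # 1.2 V @ 2.0 GHz
t_lo, e_lo = dvfs_energy(2e9, 0.9, 1.0e9)   # 0.9 V @ 1.0 GHz
```

Because E depends on V squared but not on f once the cycle count is fixed, the slower point doubles the runtime yet spends noticeably less energy, which is exactly the completion-time/energy tension the scheduler optimizes over.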

17.
To address the scalability, algorithm-convergence, and runtime-efficiency problems of building large-scale machine learning systems, this paper analyzes the challenges that massive samples, large models, and network communication pose for such systems and how existing systems respond. Taking the latent Dirichlet allocation (LDA) model as an example, three open-source distributed LDA systems, Spark LDA, PLDA+, and LightLDA, are compared on system resource consumption, convergence, and scalability to analyze their differences in design, implementation, and performance. Experimental results show that on small sample sets and models, LightLDA and PLDA+ use about half the memory of Spark LDA and converge 4 to 5 times faster; on larger sample sets and models, LightLDA's total network traffic and convergence time are far lower than those of PLDA+ and Spark LDA, showing good scalability. A "data parallel + model parallel" architecture can effectively handle the challenges of large samples and models; stale synchronous parallel (SSP) parameter synchronization, local model caching, and sparse parameter storage can effectively reduce network overhead and improve system efficiency.

18.
Personal cloud storage provides users with convenient data-access services. Service providers build distributed storage systems on cloud resources with a distributed hash table (DHT) to enhance system scalability. Efficient resource provisioning not only guarantees service performance but also helps providers save cost. However, the interactions among servers in a DHT-based cloud storage system depend on the routing process, which makes its execution logic more complicated than that of traditional multi-tier applications. In addition, production data centers often comprise heterogeneous machines with different capacities. Few studies have fully considered the heterogeneity of cloud resources, which brings new challenges to resource provisioning. To address these challenges, this paper presents a novel resource-provisioning model for service providers. The model uses a queuing network to analyze service performance and estimate cost. The problem is then defined as cost optimization under performance constraints, and we propose a cost-efficient algorithm that decomposes the original problem into a sub-optimization problem. Furthermore, we implement a prototype system on top of an infrastructure platform built with OpenStack, deployed in our campus network. Based on real-world traces collected from our system and from Dropbox, we validate the efficiency of the proposed algorithms through extensive experiments. Copyright © 2016 John Wiley & Sons, Ltd.
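A queuing-network provisioning analysis like the one described can be sketched in its simplest form: an M/M/1 response-time formula plus a search for the smallest server count meeting an SLA. The numbers and the even-load-split rule are assumptions, and a real DHT analysis would model routing hops and heterogeneous capacities:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda),
    valid only while the queue is stable (lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: lambda must be < mu")
    return 1.0 / (service_rate - arrival_rate)

def min_servers(total_load, per_server_mu, sla_response):
    """Smallest n such that, with load split evenly, each server is
    stable and meets the SLA response time -- a toy provisioning rule."""
    n = 1
    while (total_load / n >= per_server_mu or
           mm1_response_time(total_load / n, per_server_mu) > sla_response):
        n += 1
    return n

# e.g. 90 req/s total, each server serving 50 req/s, SLA of 0.1 s:
n = min_servers(90.0, 50.0, 0.1)
```

With these numbers two servers are stable but still violate the SLA (R = 0.2 s), so three are provisioned; cost optimization would then weigh that count against server prices.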

19.
Cloud computing has recently attracted increasing attention as it is applied in many business scenarios, advertising the illusion of infinite resources to its customers. Nevertheless, it raises severe energy-consumption issues: higher levels of quality and availability require irrational energy expenditure. This paper proposes Pliant system-based virtual machine scheduling approaches for reducing the energy consumption of cloud datacenters. We designed a CloudSim-based simulation environment for task-based cloud applications to evaluate the proposed solution and applied industrial workload traces in our experiments. We show that our Pliant-based algorithms achieve significant savings in energy consumption, allowing IaaS providers to reach a beneficial trade-off between energy consumption and execution time.

20.
Key technologies of distributed storage in cloud computing environments   Cited: 11 (self-citations: 0, others: 11)
As the next-generation computing paradigm, cloud computing plays an important role in both scientific and commercial computing and has attracted wide attention from academia and industry. Distributed storage in cloud environments mainly concerns how data are organized and managed in data centers. As the core infrastructure of the cloud, a data center typically comprises more than a million nodes storing data at petabyte or even exabyte scale, so data failures become the norm, greatly limiting the application and adoption of cloud computing and increasing its cost. Improving scalability and fault tolerance while reducing cost are therefore the key research issues of distributed storage in the cloud. Targeting better scalability and fault tolerance and lower storage energy consumption, this paper surveys current key technologies from the perspectives of data-center network design and data storage organization. First, it introduces and compares the pros and cons of typical data-center network structures. Second, it introduces and compares the two common fault-tolerance techniques for distributed storage: replication-based and erasure-code-based fault tolerance. Third, it introduces typical energy-saving techniques for distributed storage and analyzes their strengths and weaknesses. Finally, it points out the main challenges facing current techniques and the directions for future research.
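The replication versus erasure-coding tradeoff surveyed here comes down to storage overhead against tolerated failures; a small illustrative comparison (generic formulas, not specific to any system in the survey):

```python
def replication_overhead(replicas):
    """Raw bytes stored per user byte under n-way replication."""
    return float(replicas)

def erasure_overhead(k, m):
    """Raw bytes stored per user byte under a (k, m) erasure code such
    as Reed-Solomon: k data blocks plus m parity blocks."""
    return (k + m) / k

def tolerated_failures_ec(k, m):
    """A (k, m) MDS erasure code survives the loss of any m blocks;
    n-way replication survives n - 1 replica losses."""
    return m

rep = replication_overhead(3)   # 3x raw storage, survives 2 losses
ec = erasure_overhead(6, 3)     # 1.5x raw storage, survives 3 losses
```

The comparison shows why erasure coding saves capacity (and thus energy) at scale, at the cost of reconstruction reads that replication avoids.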


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号