Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Mobile systems, such as smartphones, are becoming the primary platform of choice for users' computational needs. However, mobile devices still suffer from limited resources such as battery life and processor performance. To address these limitations, a popular approach in mobile cloud computing is computation offloading, where resource-intensive mobile components are offloaded to more resourceful cloud servers. Prior studies in this area have focused on a form of offloading where only a single server is considered as the offloading site. Because mobile devices can now access multiple cloud providers, they can save more energy by offloading energy-intensive components to multiple cloud servers. The method proposed in this paper differentiates the data- and computation-intensive components of an application and performs multisite offloading in a data- and process-centric manner. We present a novel model to describe the energy consumption of a multisite application execution and use a discrete-time Markov chain (DTMC) to model fading wireless mobile channels. We adopt a Markov decision process (MDP) framework to formulate the multisite partitioning problem as a delay-constrained, least-cost shortest-path problem on a state transition graph. Our proposed Energy-efficient Multisite Offloading Policy (EMOP) algorithm, built on a value iteration algorithm (VIA), finds an efficient solution to the multisite partitioning problem. Numerical simulations show that our algorithm accounts for the different capabilities of sites when distributing components, yielding a lower energy cost for data transfer from the mobile device to the cloud. A multisite offloading execution using the proposed EMOP algorithm achieved a greater reduction in the energy consumption of mobile devices than a single-site offloading execution.
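The abstract above formulates multisite partitioning as a delay-constrained, least-cost shortest-path problem on a state graph. The following is a minimal dynamic-programming sketch of that formulation, not the paper's EMOP/VIA algorithm: states, edge costs, and delays are invented for illustration, and edge delays are assumed to be positive integers so the delay budget can index the DP table.

```python
def min_energy_path(edges, n_nodes, src, dst, max_delay):
    """Delay-constrained least-cost path on a state transition graph.
    edges: list of (u, v, energy_cost, delay) with integer delay >= 1.
    Returns the minimum energy to reach dst from src within max_delay,
    or float('inf') if no feasible path exists."""
    INF = float('inf')
    # dp[v][d] = least energy reaching v using total delay <= d
    dp = [[INF] * (max_delay + 1) for _ in range(n_nodes)]
    for d in range(max_delay + 1):
        dp[src][d] = 0.0
    for d in range(1, max_delay + 1):
        for u, v, cost, delay in edges:
            if delay <= d and dp[u][d - delay] + cost < dp[v][d]:
                dp[v][d] = dp[u][d - delay] + cost
    return dp[dst][max_delay]
```

Because every edge consumes at least one unit of delay, filling the table in increasing order of delay budget guarantees each predecessor entry is final before it is read. Tightening the delay budget forces the costlier but faster path, mirroring the energy/delay trade-off the abstract describes.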

2.
Cloud computing, an important source of computing power for the scientific community, requires enhanced tools for efficient use of its resources. Current solutions for workflow execution lack frameworks that deeply analyze applications and consider realistic execution times as well as computation costs. In this study, we propose cloud user–provider affiliation (CUPA) to guide workflow owners in identifying the tools required to run their applications. Additionally, we develop PSO-DS, a specialized scheduling algorithm based on particle swarm optimization. CUPA encompasses the interaction of cloud resources, the workflow management system, and the scheduling algorithm. Its featured scheduler, PSO-DS, converges to a strategic distribution of tasks among resources that efficiently optimizes both makespan and monetary cost. We compared the performance of PSO-DS against four well-known scientific workflow schedulers. In a test bed based on VMware vSphere, the schedulers mapped five up-to-date benchmarks representing different scientific areas. PSO-DS proved its efficiency by reducing the makespan and monetary cost of the tested workflows by 75% and 78%, respectively, compared with the other algorithms. CUPA, with its featured PSO-DS, opens the path to a full system in which scientific cloud users can run their computationally expensive experiments.
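The abstract does not give the encoding PSO-DS uses, so the sketch below shows one common way to apply particle swarm optimization to task-to-VM mapping: a random-key encoding where each continuous position coordinate decodes to a VM index, with a fitness combining makespan and monetary cost. The weight 0.1, the parameter values, and the function name are all assumptions for illustration, not the paper's design.

```python
import random

def pso_schedule(task_lengths, vm_speeds, vm_costs,
                 iters=200, swarm=20, w=0.7, c1=1.4, c2=1.4, seed=1):
    """Random-key PSO: particle position[i] in [0, n_vms) decodes to the
    VM index that task i is mapped to."""
    random.seed(seed)
    n, m = len(task_lengths), len(vm_speeds)

    def fitness(pos):
        assign = [min(int(p), m - 1) for p in pos]
        loads, cost = [0.0] * m, 0.0
        for length, v in zip(task_lengths, assign):
            run = length / vm_speeds[v]
            loads[v] += run
            cost += run * vm_costs[v]
        return max(loads) + 0.1 * cost        # weighted makespan + money

    X = [[random.uniform(0, m) for _ in range(n)] for _ in range(swarm)]
    V = [[0.0] * n for _ in range(swarm)]
    P, Pf = [x[:] for x in X], [fitness(x) for x in X]
    gf = min(Pf)
    g = P[Pf.index(gf)][:]
    for _ in range(iters):
        for k in range(swarm):
            for i in range(n):
                # standard velocity update: inertia + cognitive + social pull
                V[k][i] = (w * V[k][i]
                           + c1 * random.random() * (P[k][i] - X[k][i])
                           + c2 * random.random() * (g[i] - X[k][i]))
                X[k][i] = min(max(X[k][i] + V[k][i], 0.0), m - 1e-9)
            f = fitness(X[k])
            if f < Pf[k]:
                P[k], Pf[k] = X[k][:], f
                if f < gf:
                    g, gf = X[k][:], f
    return [min(int(p), m - 1) for p in g], gf
```

Random-key encoding keeps the standard continuous PSO update equations intact while still producing a discrete task-to-VM assignment after decoding.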

3.
谢兵 《计算机应用研究》2020,37(10):3014-3019
Mobile cloud computing can reduce execution latency and improve the energy efficiency of mobile devices by offloading application tasks, but when multiple cloud sites are available, the offloading decision is an NP-hard problem. To address this problem, an energy-efficient computation offloading algorithm is proposed. To jointly optimize execution time and cost under deadline and budget constraints, the algorithm decomposes the optimization into three steps. First, according to the user's preference between time and cost, a CTTPO algorithm partitions the application into offloadable modules (executed at cloud sites) and non-offloadable modules (executed on the mobile device). Then, to schedule the offloadable modules across the cloud sites, an MTS algorithm based on teaching–learning-based optimization produces an efficiency-optimal application schedule. Finally, an ESM algorithm based on dynamic voltage scaling further reduces application execution energy through performance scaling across the sites. Simulation experiments on two types of random application graphs show that the algorithm outperforms the comparison algorithms in execution efficiency, execution cost, and execution energy.
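The final step of the algorithm above reclaims slack via dynamic voltage scaling. The sketch below illustrates the general principle behind such a step, not the paper's ESM algorithm: dynamic energy is modeled as proportional to f² per cycle (a textbook CMOS approximation), and the constants k and p_static are invented placeholders.

```python
def exec_energy(cycles, f, k=1e-27, p_static=0.05):
    """Energy (J) to run `cycles` CPU cycles at frequency f (Hz):
    dynamic term ~ k * f^2 per cycle, plus static power over the runtime."""
    runtime = cycles / f
    return k * (f ** 2) * cycles + p_static * runtime

def scaled_frequency(cycles, deadline, f_max):
    """Lowest frequency that still finishes `cycles` within `deadline`,
    i.e. slow down just enough to consume the available slack."""
    return min(f_max, max(cycles / deadline, 1.0))
```

Because the dynamic term grows quadratically in f while runtime only grows linearly as f falls, running at the slowest deadline-feasible frequency reduces total energy whenever static power is small.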

4.
何远德  黄奎峰 《计算机应用研究》2020,37(6):1633-1637,1651
Mobile cloud computing can improve the energy efficiency of mobile devices and the execution latency of applications through computation offloading. However, when multiple cloud services are available, the offloading decision is an NP-hard problem. To solve this problem, a genetic algorithm is proposed to find the optimal application-partitioning decision for computation offloading. During population initialization, the algorithm combines predefined and random chromosome generation, reducing the proportion of invalid chromosomes. A fitness function based on the Hamming distance is designed for the predefined, reserved part of the population, better measuring the differences between chromosomes. In crossover, both inbreeding and hybridization are used to diversify the individuals of the population. Through its corrected genetic operators, the algorithm reduces the generation of invalid solutions and obtains an optimal feasible partitioning of the application at a more reasonable time cost. Simulation experiments on task graphs of real mobile applications evaluate the algorithm's efficiency; the results show that the proposed genetic algorithm outperforms the comparison algorithms in application execution energy, execution time, and combined weighted cost.
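The abstract above combines predefined and random chromosomes at initialization and uses Hamming distance to measure chromosome difference. The following is a minimal sketch of that idea, assuming a binary encoding (gene i = 1 means task i is offloaded) and a rejection rule based on a minimum Hamming distance; the function names and parameters are illustrative, not the paper's.

```python
import random

def hamming(a, b):
    """Number of gene positions where two chromosomes differ."""
    return sum(x != y for x, y in zip(a, b))

def init_population(n_tasks, size, seeds, min_dist=2, seed=7):
    """Mix predefined seed chromosomes with random ones; reject any random
    candidate closer than min_dist (Hamming) to an existing member, which
    keeps the initial population diverse."""
    rng = random.Random(seed)
    pop = [s[:] for s in seeds]
    while len(pop) < size:
        cand = [rng.randint(0, 1) for _ in range(n_tasks)]   # 1 = offload
        if all(hamming(cand, p) >= min_dist for p in pop):
            pop.append(cand)
    return pop
```

Rejection sampling against a distance threshold is a simple way to cut down near-duplicate chromosomes; the threshold must stay small relative to the chromosome length, or the loop will struggle to find acceptable candidates.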

5.
Cloud computing is an emerging technology in which information technology resources are virtualized to users as a set of computing resources on a pay-per-use basis. It is seen as an effective infrastructure for high-performance applications. Divisible load applications occur in many scientific and engineering settings. However, dividing an application and deploying it in a cloud computing environment faces challenges in obtaining optimal performance, due to the overheads introduced by cloud virtualization and the supporting cloud middleware. We therefore provide the results of a series of extensive experiments on scheduling divisible load applications in a cloud environment to decrease overall application execution time, considering the cloud networking and computing capacities presented to the application's user. We experiment with real applications within the Amazon cloud computing environment. Our extensive experiments analyze the reasons for the discrepancies between a theoretical model and reality and propose adequate solutions. These discrepancies are due to three factors: the network behavior, the application behavior, and cloud virtualization. Our results show that applying the algorithm results in a maximum ratio of 1.41 between the measured normalized makespan and the ideal makespan for applications in which the communication-to-computation ratio is large. They show that the algorithm is effective for such applications in a heterogeneous setting, reaching a ratio of 1.28 for large data sets. For applications following the ensemble clustering model, in which the computation-to-communication ratio is large and variable, we obtained a maximum ratio of 4.7 for large data sets and 2.11 for small data sets. Applying the algorithm also results in an important speedup. These results are revealing for the types of applications we consider in our experiments.
The experiments also reveal the impact of the choice of platforms provided by Amazon on the performance of the applications under study. Considering the emergence of cloud computing for high-performance applications, the results in this paper can be widely adopted by cloud computing developers. Copyright © 2014 John Wiley & Sons, Ltd.
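The core idea of divisible load scheduling, as studied above, is to split the load so that no worker finishes far ahead of the others. The sketch below shows a heavily simplified version of that principle, assuming the master sends all chunks simultaneously (full divisible load theory handles sequential distribution, which this sketch deliberately omits); each worker's share is chosen so that its transfer time plus compute time equals everyone else's.

```python
def divisible_split(comp_rates, link_rates):
    """Fractions of a divisible load per worker such that each worker's
    transfer time + compute time is identical. Assumes simultaneous
    distribution to all workers (a simplification of divisible load theory).
    comp_rates[i]: units of load worker i computes per second.
    link_rates[i]: units of load the link to worker i transfers per second."""
    per_unit = [1.0 / l + 1.0 / c for c, l in zip(comp_rates, link_rates)]
    weights = [1.0 / t for t in per_unit]       # faster pipelines get more load
    total = sum(weights)
    return [w_i / total for w_i in weights]
```

Equalizing finish times is what makes the split optimal in this simplified setting: if one worker finished earlier, moving some load to it would shrink the makespan.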

6.
Thanks to parallel computing technology, cloud services have become popular and easy to use from anywhere. "Cloud" here means application systems that are executed within the cloud and operated via Internet-enabled devices. Cloud computing need not rely on persistent cloud storage, as data can be removed once the user downloads it. Clouds can be classified as public, private, and hybrid. Cloud services are ubiquitous: users can access them anywhere, at any time, which is convenient. However, there is a trade-off. If a user's username and password are compromised, the user's cloud system is in danger and their confidential information is in jeopardy; with access possible from any device, anywhere, at any time, the user's credentials are a single point of failure. Security concerns therefore play a major role in the cloud and are the biggest obstacle to its development. Yet the cloud remains popular, and vulnerable to hacking, because of single-channel user authentication. This research therefore proposes two-channel user authentication using a USB device to strengthen security.
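The abstract proposes a second authentication channel carried on a USB device but does not specify the protocol. The sketch below shows one plausible realization, not the paper's scheme: channel 1 verifies a PBKDF2 password hash, and channel 2 is a challenge–response where a key provisioned on the USB device answers a server nonce with an HMAC. All names and parameters are assumptions.

```python
import hashlib
import hmac
import os
import secrets

def enroll(password):
    """Server-side enrollment: store a salted password hash (channel 1)
    and provision a secret key onto the user's USB device (channel 2)."""
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    usb_key = secrets.token_bytes(32)   # written onto the USB token
    return {'salt': salt, 'pw_hash': pw_hash, 'usb_key': usb_key}

def authenticate(record, password, usb_key, challenge):
    """Both channels must pass: the password hash must match, and the USB
    token must produce the correct HMAC over the server's challenge."""
    pw_ok = hmac.compare_digest(
        hashlib.pbkdf2_hmac('sha256', password.encode(), record['salt'], 100_000),
        record['pw_hash'])
    usb_ok = hmac.compare_digest(
        hmac.new(usb_key, challenge, hashlib.sha256).digest(),
        hmac.new(record['usb_key'], challenge, hashlib.sha256).digest())
    return pw_ok and usb_ok
```

A fresh random challenge per login prevents replay of a captured USB response, which is the property that makes the second channel more than a second password.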

7.
Cloud computing is a big paradigm shift in computing. It provides high scalability and elasticity with a range of on-demand services. We can execute a variety of distributed applications on a cloud's virtual machines (computing nodes). In a distributed application, virtual machine nodes need to communicate and coordinate with each other, and this coordination requires that inter-node latency be minimal for good performance. But when nodes belong to different clusters of the same cloud, or to a multi-cloud environment, network latency can be high, and it becomes difficult to decide which node(s) to choose for the distributed application's execution so as to keep inter-node latency at a minimum. In this paper, we propose a solution to this problem: a model for grouping nodes with respect to network latency, with application scheduling done on the basis of that latency. This model is part of our proposed Cloud Scheduler module, which supports scheduling decisions based on different criteria; network latency, and the node grouping that results from it, is one such criterion. The key point is that our proposed latency-grouping algorithm adds no network traffic overhead of its own, works well even with incomplete latency information, and performs intelligent grouping on the basis of latency. This paper thus addresses an important problem in cloud computing: locating communicating virtual machines so that the latency between them is minimal, and grouping them with respect to inter-node latency.
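The abstract's two distinguishing requirements are grouping by latency and tolerating incomplete latency information. The sketch below is a minimal greedy illustration of both, not the paper's algorithm: a node joins the first group for which every *known* pairwise latency is within a threshold, and unmeasured pairs (None) are simply ignored.

```python
def group_by_latency(lat, threshold):
    """Greedy latency grouping. lat[i][j] is the latency between nodes i
    and j, or None if unmeasured. A node joins the first group whose every
    known latency to it is within threshold; otherwise it starts a group."""
    groups = []
    for v in range(len(lat)):
        for g in groups:
            if all(lat[v][u] is None or lat[v][u] <= threshold for u in g):
                g.append(v)
                break
        else:
            groups.append([v])
    return groups
```

Treating missing measurements as "compatible until proven otherwise" is one simple policy; a more conservative scheduler could instead exclude nodes with too many unknown entries.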

8.
The Mobile Cloud Computing (MCC) paradigm depends on efficient offloading of computation from the resource-constrained mobile device to the resource-rich cloud server. Computational offloading is assisted by system virtualization, application virtualization, and process state migration. However, system and application virtualization techniques impose unnecessary overhead both on applications that require offloading to the cloud and on applications that do not. Moreover, smartphones and cloud data centers are based on heterogeneous processor architectures, such as ARM and x86. As a result, a process migrated from a smartphone needs translation or emulation on the cloud server, so instruction emulation is a necessary component of a comprehensive MCC framework. In this paper, we evaluate the overhead of the system and application virtualization techniques and the emulation frameworks that enable MCC offloading mechanisms. We find that the overheads of system and application virtualization can be as high as 4.51% and 55.18%, respectively, for the SciMark benchmark. Moreover, the ARM-to-Intel emulation overhead can be as high as 55.53%. We provide a proof of concept of emulation speedup by utilizing efficient Single Instruction, Multiple Data (SIMD) translations. We conclude that the overheads of virtualization and emulation techniques need to be reduced for efficient MCC offloading frameworks.

9.
As an important component of cloud computing, cloud storage systems underpin a wide range of cloud services. However, with the continuously growing scale of cloud storage systems and the neglect of energy factors in their design, problems of high energy consumption and low efficiency have become increasingly apparent. Since cloud storage accounts for 27% to 40% of the total energy consumption of a cloud computing center, research on energy-saving techniques for cloud storage systems has great practical significance and application prospects, both for lowering service providers' operating costs and for reducing energy use to protect the environment. This survey divides energy optimization in storage systems into two categories, hardware-based and scheduling-based methods, and further compares the scheduling-based methods in three classes: node scheduling, data scheduling, and cache prefetching. Finally, four future directions are discussed: energy-adapted cloud storage architectures, QoS guarantees in energy-saving mode, matching energy-saving modes to computing modes, and energy saving under erasure-code fault tolerance.

10.
Modern mobile devices, such as smartphones and tablets, have made many pervasive computing dreams come true. Still, many mobile applications do not perform well due to the shortage of resources for computation, data storage, network bandwidth, and battery capacity. While such applications can be re-designed with client–server models to benefit from cloud services, the users are no longer in full control of the application, which has become a serious concern for data security and privacy. In addition, the collaboration between a mobile device and a cloud server poses complex performance issues associated with the exchange of application state, synchronization of data, network conditions, etc. In this work, a novel mobile cloud execution framework is proposed to execute mobile applications in a cloud-based virtualized execution environment controlled by mobile applications and users, with encryption and isolation to protect against eavesdropping by cloud providers. Under this framework, several efficient schemes have been developed to deal with the technical issues of migrating applications and synchronizing data between execution environments. Communication issues are also addressed in the virtualized execution environment with a probabilistic communication Quality-of-Service (QoS) technique to support timely application migration.

11.
Cloud resource scheduling requires mapping cloud resources to cloud workloads. Scheduling results can be optimized by considering Quality of Service (QoS) parameters as inherent requirements of scheduling. In the existing literature, only a few resource scheduling algorithms have considered cost and execution time constraints, but efficient scheduling requires better optimization of QoS parameters. The main aim of this research paper is to present an efficient strategy for the execution of workloads on cloud resources. A particle swarm optimization based resource scheduling technique, named BULLET, has been designed to execute workloads effectively on the available resources. The performance of the proposed technique has been evaluated in a cloud environment. The experimental results show that the proposed technique efficiently reduces execution cost, time, and energy consumption along with other QoS parameters.

12.
Cloud computing offers new computing paradigms, capacity, and flexible solutions for high performance computing (HPC) applications. For example, the Hardware as a Service (HaaS) model allows users to provision a large number of virtual machines (VMs) for computation-intensive applications. Due to the large number of VMs and electronic components in an HPC system in the cloud, any fault during execution would result in re-running the applications, which costs time, money, and energy. In this paper we present a proactive fault tolerance (FT) approach for HPC systems in the cloud to reduce the wall-clock execution time and dollar cost in the presence of faults. We also develop a generic FT algorithm for HPC systems in the cloud; our algorithm does not rely on a spare node prior to the prediction of a failure. We further develop a cost model for executing computation-intensive applications on HPC systems in the cloud, and analyse the dollar cost of provisioning spare nodes and of checkpointing FT to assess the value of our approach. Our experimental results, obtained from a real cloud execution environment, show that the wall-clock execution time and cost of running computation-intensive applications in the cloud can be reduced by as much as 30%. The frequency of checkpointing of computation-intensive applications can be reduced by up to 50% with our FT approach for HPC in the cloud compared with current FT approaches.
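The abstract above weighs checkpointing frequency against failure cost. A standard baseline for that trade-off (not the paper's cost model) is Young's approximation for the optimal checkpoint interval, which balances per-checkpoint overhead against expected lost work; the sketch below uses it with invented numbers.

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's approximation for the optimal checkpoint interval:
    sqrt(2 * checkpoint_cost * mean_time_between_failures)."""
    return math.sqrt(2 * checkpoint_cost * mtbf)

def expected_waste(interval, checkpoint_cost, mtbf):
    """First-order overhead fraction per unit of useful work:
    checkpoint time amortized over the interval, plus the expected
    rework of half an interval when a failure strikes."""
    return checkpoint_cost / interval + interval / (2 * mtbf)
```

Checkpointing more often than this interval wastes time writing checkpoints; checkpointing less often wastes time redoing lost work, which is the intuition behind reducing checkpoint frequency when proactive FT lowers the effective failure rate.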

13.
The flexible and pay-as-you-go computing capabilities offered by Cloud infrastructures are nowadays very attractive and widely adopted by many organizations and enterprises. In particular, this is true for those having periodical or variable tasks to execute that cannot, or choose not to, afford the expense of buying and managing computing facilities or software packages that would remain underutilized most of the time. For their ability to couple the scalability offered by public service providers with the wider Quality of Service (QoS) provisions and ad-hoc customizations provided by private Clouds, Hybrid Clouds (HC) seem a particularly appealing solution for customers requiring something more than the mere availability of the service. The paper first introduces a Cloud brokering system leveraging a promising architectural approach based on the use of a gateway toolkit. This approach provides notable advantages, both to customers and to Cloud Brokers (CB), for its ability to hide all the intricacies related to the management of powerful, but often complex and heterogeneous, infrastructures like the Cloud. Moreover, through customized interfaces, the approach facilitates customers' access to Cloud resources, easing the tailored deployment and execution of their applications and workflows. The major contribution of this work is the analysis of a set of brokering strategies for Hybrid Clouds, implemented by a brokering algorithm, aimed at the execution of various applications subject to different user requirements and computational conditions. With the objective of maximizing both user satisfaction and the CB's revenues, the algorithm also pursues profit increases through the reduction of energy costs by adopting energy-saving mechanisms.
A simulation model is used to evaluate performance, and the results show that differences among strategies depend on the type and size of system loads, and that the use of turn-on and turn-off techniques greatly improves energy savings at low and medium load rates, thus indirectly increasing CB revenues without diminishing customers' satisfaction.

14.
Energy-optimized management of stochastic tasks on a cloud computing platform   (total citations: 5; self-citations: 0; citations by others: 0)
谭一鸣  曾国荪  王伟 《软件学报》2012,23(2):266-278
To address the energy waste in cloud computing systems, where idle compute nodes produce large amounts of idle energy consumption and mismatched task scheduling produces large amounts of "luxury" energy consumption, an energy-optimized management method based on task scheduling is proposed. First, the cloud computing system is modeled with a queueing model; its average response time and average power are analyzed, and an energy model of the system is established. Then a task scheduling strategy based on high service intensity and low execution energy is proposed, controlling idle energy and "luxury" energy respectively. Based on this strategy, a scheduling algorithm that minimizes the expected execution energy subject to performance constraints, ME3PC (minimum expectation execution energy with performance constraints), is designed. Experimental results show that the algorithm greatly reduces the energy overhead of a cloud computing system while guaranteeing execution performance.
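The abstract above derives average response time and average power from a queueing model of the cloud system. The sketch below illustrates the simplest such analysis, an M/M/1 node (an assumption for illustration; the paper's queueing model may differ): utilization splits time between a busy-power and an idle-power state, which is exactly where idle energy waste appears.

```python
def mm1_metrics(arrival_rate, service_rate, p_busy, p_idle):
    """M/M/1 node metrics: mean response time, average power, and average
    energy per completed task. Requires arrival_rate < service_rate."""
    assert arrival_rate < service_rate, "queue must be stable"
    rho = arrival_rate / service_rate            # utilization
    t_resp = 1.0 / (service_rate - arrival_rate) # mean response time
    power = rho * p_busy + (1 - rho) * p_idle    # time-averaged power draw
    energy_per_task = power / arrival_rate       # J per completed task
    return t_resp, power, energy_per_task
```

The idle term (1 - rho) * p_idle is the lever the abstract's "high service intensity" strategy pulls: packing load onto fewer, busier nodes raises rho and shrinks the idle-power share.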

15.
To achieve energy-saving control of highway tunnel lighting systems, a cloud-model-based method for estimating vehicle speed and position is proposed. Combining the forward and backward cloud algorithms, the pulse waveform data recorded as vehicles pass the inductive loops are processed to obtain an estimate of the mean speed. The speed estimate and the travel time then serve as the basis for judging the vehicle's position, which is estimated by cloud inference. The cloud estimation error is corrected in real time, so that the vehicle's position can be determined accurately. Simulation and experimental results show that the method correctly estimates vehicle positions inside highway tunnels, with an estimation accuracy of up to 99.2309%; it improves the efficiency of the cloud-model algorithm in practical applications and is suitable for the energy-saving control of highway tunnel lighting systems.
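The forward and backward cloud generators named above are the standard primitives of the cloud model (概念云模型): the backward generator estimates the digital characteristics (Ex, En, He) from samples, and the forward generator produces "cloud drops" from them. The sketch below uses the common first-order moment estimators; it illustrates the generators themselves, not the paper's speed-estimation pipeline, and the sample data are invented.

```python
import math
import random

def backward_cloud(samples):
    """Backward cloud generator: estimate expectation Ex, entropy En, and
    hyper-entropy He from observed samples."""
    n = len(samples)
    ex = sum(samples) / n
    # En from the mean absolute deviation of a normal-like sample
    en = math.sqrt(math.pi / 2) * sum(abs(x - ex) for x in samples) / n
    var = sum((x - ex) ** 2 for x in samples) / (n - 1)
    he = math.sqrt(max(var - en ** 2, 0.0))
    return ex, en, he

def forward_cloud(ex, en, he, n, rng=None):
    """Forward cloud generator: produce n cloud drops by drawing a
    per-drop entropy En' ~ N(En, He^2), then a drop x ~ N(Ex, En'^2)."""
    rng = rng or random.Random(0)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(en, he)
        drops.append(rng.gauss(ex, abs(en_i)))
    return drops
```

Running the backward generator over loop-detector speed readings yields a stable mean-speed estimate (Ex) plus a spread (En, He) that quantifies how much to trust it, which is what the abstract's real-time error correction exploits.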

16.
Mobile cloud computing allows mobile users executing workflow tasks to migrate some tasks to cloud servers, saving the battery energy of the mobile device and increasing its computing capability. Previous work on task scheduling in mobile cloud environments lacks joint optimization of energy consumption and runtime. To achieve effective task scheduling, based on the precedence relations among the tasks in the workflow graph, the runtime and energy of executing the workflow tasks on a mobile processor with dynamic voltage and frequency scaling are analyzed, the time needed to migrate tasks over the wireless channel to cloud servers is taken into account, and a task scheduling model and objective function that jointly optimize energy and execution time are formulated. A task scheduling method based on simulated annealing is proposed, its time complexity is analyzed, and systematic comparative experiments evaluate the correctness and effectiveness of the proposed method.
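The abstract above searches for offloading decisions by simulated annealing over a joint time-and-energy objective. The sketch below is a toy version of that search, not the paper's method: precedence constraints are omitted, the cloud's compute time is assumed negligible, and all model parameters (local frequency, energy per cycle, wireless rate, transmit power) are invented.

```python
import math
import random

def sa_offload(tasks, f_local, e_per_cyc, rate, p_tx,
               alpha=0.5, iters=500, t0=10.0, cool=0.99, seed=3):
    """Simulated annealing over binary offload decisions (1 = offload).
    tasks: list of (cpu_cycles, data_bits). Toy model: cloud compute time
    is ignored; offloading costs only wireless transfer time and energy."""
    rng = random.Random(seed)

    def cost(sol):
        t_local = sum(c / f_local for (c, d), s in zip(tasks, sol) if s == 0)
        e_local = sum(c * e_per_cyc for (c, d), s in zip(tasks, sol) if s == 0)
        t_tx = sum(d / rate for (c, d), s in zip(tasks, sol) if s == 1)
        e_tx = p_tx * t_tx
        # weighted sum of total time and total device energy
        return alpha * (t_local + t_tx) + (1 - alpha) * (e_local + e_tx)

    sol = [rng.randint(0, 1) for _ in tasks]
    cur_c = cost(sol)
    best, best_c = sol[:], cur_c
    T = t0
    for _ in range(iters):
        cand = sol[:]
        cand[rng.randrange(len(tasks))] ^= 1     # flip one offload decision
        cand_c = cost(cand)
        # accept downhill always, uphill with Boltzmann probability
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / T):
            sol, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = sol[:], cur_c
        T *= cool
    return best, best_c
```

Even this toy objective reproduces the expected behavior: compute-heavy tasks with little data are worth offloading, while data-heavy tasks with light computation should stay on the device.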

17.
The Journal of Supercomputing - Cloud computing is an Internet-provisioned computing paradigm that provides scalable resources for the execution of the end user’s tasks. The cloud users lease...

18.
Mobile cloud computing presents an effective solution for overcoming smartphone constraints such as limited computational power, storage, and energy. As traditional mobile application development models do not support computation offloading, mobile cloud computing requires novel application development models that can facilitate the development of cloud-enabled mobile applications. This paper presents a mobile cloud application development model, named MobiByte, to enhance mobile device applications' performance, energy efficiency, and execution support. MobiByte is a context-aware application model that uses multiple data offloading techniques to support a wide range of applications. The proposed model is validated using prototype applications, and detailed results are presented. Moreover, MobiByte is compared with the most recent application models, concluding that it outperforms them in many aspects, including energy efficiency, performance, generality, context awareness, and privacy.

19.
Cloud computing is an essential part of today's computing world. A continuously increasing amount of computation with varying resource requirements is placed in large data centers. The variation among computing tasks, both in their resource requirements and time of processing, makes it possible to optimize the usage of physical hardware by applying cloud technologies. In this work, we develop a prototype system for load-based management of virtual machines in an OpenStack computing cluster. Our prototype is based on the idea of 'packing' idle virtual machines into special park servers optimized for this purpose. We evaluate the method by running real high-energy physics analysis software in an OpenStack test cluster and by simulating the same principle using the CloudSim simulator. The results show a clear improvement, 9–48%, in total energy efficiency when using our method together with resource overbooking and heterogeneous hardware.
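The energy saving above comes from consolidating idle VMs onto as few park servers as possible, which is a bin-packing problem at heart. The sketch below shows the classic first-fit-decreasing heuristic for that consolidation step; it is an illustration of the principle with invented memory figures, not the prototype's actual placement logic.

```python
def pack_idle_vms(vm_footprints, park_capacity):
    """First-fit-decreasing packing of idle VMs onto park servers.
    vm_footprints: resource footprint of each idle VM (e.g. MB of RAM).
    Returns the number of park servers needed; every other host can be
    powered down or reused."""
    parks = []                                   # current load per park server
    for footprint in sorted(vm_footprints, reverse=True):
        for i in range(len(parks)):
            if parks[i] + footprint <= park_capacity:
                parks[i] += footprint            # fits on an existing park
                break
        else:
            parks.append(footprint)              # open a new park server
    return len(parks)
```

First-fit-decreasing is a common consolidation baseline because it is simple and provably uses at most about 22% more bins than optimal; in a live system it would be paired with headroom for VMs waking back up.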

20.
曹洁  曾国荪 《计算机应用》2015,35(3):648-653
Processor failures in cloud environments have become a problem that cloud computing cannot ignore, and fault tolerance has become a key requirement in designing and developing cloud computing systems. To address the low scheduling efficiency and single task type of some existing fault-tolerant scheduling algorithms, a fault-tolerant scheduling method is proposed in which the processors and the primary and backup copies of tasks are divided into groups. A criterion for when backup copies may overlap in execution is given, along with a formula for the worst-case response time of a task. Experiments and analysis show that, compared with previous algorithms, dividing the processors into two groups that respectively execute primary copies and backup copies reduces the time spent on schedulability tests during task scheduling, increases the opportunities for backup copies to execute overlapped, and reduces the number of processors required, which is significant for improving processor utilization and the efficiency of fault-tolerant scheduling.
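The overlap criterion described above rests on a single-failure assumption: two backup copies may share time on a backup processor only if their primary copies run on different processors, since one failure can never activate both. The sketch below is a minimal greedy illustration of that rule, not the paper's scheduling algorithm; tasks and slots are abstract placeholders.

```python
def place_backups(primaries):
    """Assign each task's backup copy to a time slot on the backup
    processor group. primaries[i] is the processor running task i's
    primary copy. Backups may share a slot only when their primaries
    run on different processors (single-failure assumption), so a
    failure never activates two overlapping backups."""
    slots = []          # per slot: set of primary processors represented
    assignment = []
    for proc in primaries:
        for idx, used in enumerate(slots):
            if proc not in used:
                used.add(proc)
                assignment.append(idx)
                break
        else:
            assignment.append(len(slots))   # open a new backup slot
            slots.append({proc})
    return assignment, len(slots)
```

In the example below, five backups fit into two slots instead of five, which is the resource saving the abstract attributes to overlapped backup execution.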
