Similar Documents
20 similar documents found (search time: 46 ms)
1.
A telehealth system covers both clinical and nonclinical uses: it not only provides store-and-forward data services that relevant specialists can study offline, but also monitors real-time physiological data through ubiquitous sensors to support remote telemedicine. However, current telehealth systems do not consider the velocity and veracity of big data in the medical context. Emergency events generate a large amount of real-time data, which must be stored in the data center and forwarded to remote hospitals. Furthermore, patients' information is scattered across distributed data centers, which prevents highly efficient remote real-time service. In this paper, we propose a probability-based bandwidth model for a telehealth cloud system, which helps the cloud broker provide a high-performance allocation of computing nodes and links. This brokering mechanism considers the location protocol of the Personal Health Record (PHR) in the cloud and schedules real-time signals with low information transfer between hosts. The broker uses several bandwidth-evaluation methods to predict near-future bandwidth usage in a telehealth context. The simulation results show that our model is effective at determining the best-performing service, and the inserted service validates the utility of our approach.
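The brokering mechanism above relies on predicting near-future bandwidth usage from recent observations. The abstract does not state which estimators the broker combines, so the snippet below is only a minimal sketch, assuming an exponentially weighted moving average over recent per-link samples; the class and parameter names (BandwidthPredictor, alpha) are illustrative, not taken from the paper.

```python
from collections import defaultdict

class BandwidthPredictor:
    """Sketch of a near-future bandwidth estimator (hypothetical, not the paper's model)."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha                   # smoothing factor for the moving average
        self.estimates = defaultdict(float)  # link id -> predicted bandwidth usage (Mbps)

    def observe(self, link_id: str, measured_mbps: float) -> None:
        """Fold one new bandwidth sample into the running estimate."""
        prev = self.estimates[link_id]
        self.estimates[link_id] = self.alpha * measured_mbps + (1 - self.alpha) * prev

    def predict(self, link_id: str) -> float:
        """Return the predicted near-future usage for a link."""
        return self.estimates[link_id]

# Example: the broker prefers the link with the most predicted headroom.
predictor = BandwidthPredictor()
for sample in [("link-A", 40.0), ("link-A", 55.0), ("link-B", 20.0)]:
    predictor.observe(*sample)
capacity = {"link-A": 100.0, "link-B": 100.0}
best = max(capacity, key=lambda link: capacity[link] - predictor.predict(link))
print("least loaded link:", best)
```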

2.

On a cloud platform, user requests are managed through workload units called cloudlets, which are assigned to virtual machines through a cloudlet-scheduling mechanism that mainly aims at minimizing request processing time by producing effective small-length schedules. Efficient request processing, however, requires heavy use of high-performance resources, which incurs a large overhead in terms of monetary cost and energy consumed by physical machines, thereby rendering cloud platforms inadequate for cost-effective green computing environments. This paper proposes a power-aware cloudlet scheduling (PACS) algorithm for mapping cloudlets to virtual machines. The algorithm aims at reducing request processing time through small-length schedules while minimizing the energy consumed and the cost incurred. For allocation of virtual machines to cloudlets, the algorithm iteratively arranges virtual machines (VMs) in groups using weights computed through optimization and rescaling of parameters including VM resources, cost of resource utilization, and power consumption. Experiments performed with a diverse set of cloudlet and virtual machine configurations show that the PACS algorithm achieves a significant overall performance improvement factor, ranging from 3.80 to 23.82, over other well-known cloudlet-scheduling algorithms.
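The grouping step above orders VMs by a weight obtained from rescaled parameters (VM resources, utilization cost, power). The paper's exact optimization is not reproduced here; the following is a minimal sketch assuming min-max rescaling and a fixed linear combination of the three parameters: the field names and coefficients are illustrative assumptions, not PACS itself.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    mips: float          # processing capacity
    cost_per_hour: float
    power_watts: float

def rescale(values):
    """Min-max rescale a list of numbers into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def group_vms(vms, group_size=2):
    """Rank VMs by a combined weight (higher capacity, lower cost and power preferred)
    and split the ranking into groups. Coefficients are illustrative, not PACS's."""
    cap = rescale([vm.mips for vm in vms])
    cost = rescale([vm.cost_per_hour for vm in vms])
    power = rescale([vm.power_watts for vm in vms])
    weights = [c - 0.5 * co - 0.5 * p for c, co, p in zip(cap, cost, power)]
    ranked = [vm for _, vm in sorted(zip(weights, vms), key=lambda t: -t[0])]
    return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]

vms = [VM("vm1", 1000, 0.10, 120), VM("vm2", 2000, 0.25, 200), VM("vm3", 1500, 0.12, 150)]
for group in group_vms(vms):
    print([vm.name for vm in group])
```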


3.
Scheduling of tasks in cloud computing is an NP-hard optimization problem. Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey-bee-behavior-inspired load balancing (HBB-LB), which aims to achieve a well-balanced load across virtual machines to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines so that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load-balancing and scheduling algorithms. The experimental results show that the algorithm is effective when compared with existing algorithms. Our approach yields a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue.
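HBB-LB moves tasks from overloaded VMs to underloaded ones, with removed tasks acting like forager bees that pick a destination VM, and task priorities influencing where they land. The sketch below captures only that basic idea under simplified assumptions (load measured as summed task length over VM capacity, a fixed imbalance threshold); it is not the authors' implementation.

```python
def balance(vms, threshold=0.2):
    """vms: dict name -> {"capacity": float, "tasks": list of (length, priority)}.
    Move tasks from overloaded VMs to underloaded ones until loads are roughly
    within `threshold` of the mean load (simplified honey-bee-style balancing)."""
    def load(vm):
        return sum(t[0] for t in vm["tasks"]) / vm["capacity"]

    mean = sum(load(vm) for vm in vms.values()) / len(vms)
    overloaded = [n for n, vm in vms.items() if load(vm) > mean * (1 + threshold)]
    underloaded = [n for n, vm in vms.items() if load(vm) < mean * (1 - threshold)]

    for src in overloaded:
        # Remove the lowest-priority tasks first, like scout bees leaving the hive.
        vms[src]["tasks"].sort(key=lambda t: t[1])
        while underloaded and load(vms[src]) > mean:
            task = vms[src]["tasks"].pop(0)
            # Destination: the underloaded VM with the fewest waiting tasks of equal or higher priority.
            dst = min(underloaded,
                      key=lambda n: sum(1 for t in vms[n]["tasks"] if t[1] >= task[1]))
            vms[dst]["tasks"].append(task)
            if load(vms[dst]) >= mean:
                underloaded.remove(dst)
    return vms

vms = {
    "vm1": {"capacity": 10.0, "tasks": [(8, 2), (6, 1), (5, 3)]},
    "vm2": {"capacity": 10.0, "tasks": [(2, 1)]},
}
print({n: len(vm["tasks"]) for n, vm in balance(vms).items()})
```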

4.
To address the multi-objective optimization problem of resource scheduling in IaaS (Infrastructure as a Service) cloud computing, a resource scheduling algorithm based on an improved multi-objective cuckoo search is proposed. Building on the multi-objective cuckoo search algorithm, the random-walk strategy and the abandonment-probability strategy are improved to strengthen the algorithm's local search ability and convergence speed. With minimizing completion time and cost as the primary objectives, tasks are assigned to specific VMs (virtual machines) to satisfy cloud users' resource-utilization demands on the cloud provider, thereby reducing latency and improving resource utilization and quality of service. Experimental results show that the algorithm effectively solves the multi-objective resource scheduling problem in IaaS cloud environments and offers clear advantages over other algorithms.

5.
This paper considers online energy-efficient scheduling of virtual machines (VMs) for cloud data centers. Each request is associated with a start-time, an end-time, a processing time, and a capacity demand from a Physical Machine (PM). The goal is to schedule all of the requests non-preemptively in their start-time-end-time windows, subject to PM capacity constraints, such that the total busy time of all used PMs is minimized (abbreviated MinTBT-ON). This is a fundamental scheduling problem for allocating parallel jobs on multiple machines; it has important applications in power-aware scheduling in cloud computing, optical network design, customer service systems, and other related areas. Offline scheduling to minimize busy time is NP-hard already in the special case where all jobs have the same processing time and can be scheduled in a fixed time interval. The best-known result for the MinTBT-ON problem is a g-competitive algorithm for general instances and unit-size jobs using the First-Fit algorithm, where g is the total capacity of a machine. In this paper, a $(1+\frac{g-2}{k}-\frac{g-1}{k^{2}})$-competitive algorithm, Dynamic Bipartition-First-Fit (BFF), is proposed and proved for the general case, where k is the ratio of the length of the longest interval to the length of the second-longest interval, for k>1 and g≥2. Further results for general and special cases are obtained that improve the best-known bounds.
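For the unit-size case, First-Fit places each arriving request on the first already-open PM that still has spare capacity during the request's window, opening a new PM only when none fits; a PM's busy time is the span from its earliest start to its latest end among assigned requests. The snippet below is a minimal sketch of that baseline (not the paper's Dynamic Bipartition-First-Fit), assuming unit-size requests and capacity g per PM.

```python
def first_fit(requests, g):
    """requests: list of (start, end) intervals, each demanding 1 unit of capacity.
    Assign each request to the first PM whose concurrent load stays within g.
    Returns the list of PMs and the total busy time (span of each used PM)."""
    pms = []  # each PM is a list of assigned (start, end) intervals

    def fits(pm, start, end):
        # Count intervals on this PM overlapping [start, end); must stay below g.
        overlapping = sum(1 for s, e in pm if s < end and start < e)
        return overlapping < g

    for start, end in sorted(requests):          # online order approximated by start time
        for pm in pms:
            if fits(pm, start, end):
                pm.append((start, end))
                break
        else:
            pms.append([(start, end)])           # open a new PM

    busy = sum(max(e for _, e in pm) - min(s for s, _ in pm) for pm in pms)
    return pms, busy

pms, total_busy = first_fit([(0, 3), (1, 4), (2, 6), (5, 8)], g=2)
print(len(pms), "PMs, total busy time =", total_busy)
```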

6.
While virtualization enables multiple virtual machines (VMs)—with multiple operating systems and applications—to run within a physical server, it also complicates resource allocation when trying to guarantee the Quality of Service (QoS) requirements of the diverse applications running within these VMs. As QoS is crucial in the cloud, considerable research effort has been directed towards CPU, memory, and network allocation to provide effective QoS to VMs, but little attention has been devoted to disk resource allocation. This paper presents the design and implementation of Flubber, a two-level scheduling framework that decouples throughput and latency allocation to provide QoS guarantees to VMs while maintaining high disk utilization. The high-level throughput control regulates the pending requests from the VMs with an adaptive credit-rate controller, in order to meet the throughput requirements of different VMs and ensure performance isolation. Meanwhile, the low-level latency control, by virtue of the batch and delay earliest deadline first (BD-EDF) mechanism, re-orders all pending requests from VMs based on their deadlines and batches them to disk devices, taking into account the locality of accesses across VMs. We have implemented Flubber and performed extensive evaluations on a Xen-based host. The results show that Flubber can simultaneously meet the different service requirements of VMs while improving the efficiency of the physical disk. The results also show an improvement of up to 25% in VM performance over state-of-the-art approaches: for example, in contrast to the default Xen disk I/O scheduler—Completely Fair Queueing (CFQ)—besides achieving the desired QoS of each VM, Flubber speeds up sequential and random reads by 17% and 25%, respectively, due to the efficient physical disk utilization.

7.
Video surveillance applications need a video data center to provide elastic virtual machine (VM) provisioning. However, the workloads of the VMs are hard to predict for an online video surveillance service, and unknown arrival workloads easily lead to workload skew among VMs. In this paper, we study how to balance workload skew in an online video surveillance system. First, we design the system framework for the online surveillance service, which consists of video capturing and analysis tasks. Second, we propose StreamTune, an online resource scheduling approach for workload balancing, to deal with irregular video analysis workloads using the minimum number of VMs. We aim at timely balancing the workload skew on video analyzers without depending on any workload prediction method. Furthermore, we evaluate the performance of the proposed approach using a traffic surveillance application. The experimental results show that our approach adapts well to workload variation and achieves workload balance with fewer VMs.

8.

In a cloud computing system, task scheduling plays a key role. Users have to pay for the share of resources consumed by the tasks they submit for allocation in the cloud. The requirements of task scheduling in the cloud environment have become more and more complex, and the amount of resources and tasks is growing rapidly. Therefore, an efficient task-scheduling algorithm is necessary for allocating tasks efficiently in the cloud, achieving minimal resource usage, minimal processing time, high efficiency, and maximum profit. This article studies how, in hybrid clouds, to maximize the profit of a private cloud while guaranteeing the service delay bound of delay-tolerant tasks. A new metaheuristic inspired by the bubble-net hunting technique of humpback whales, namely the whale optimization algorithm (WOA), is applied to solve the task-scheduling problem. The WOA is then compared with existing algorithms such as the artificial bee colony (ABC) algorithm and the genetic algorithm (GA). The experimental results show that the proposed WOA greatly increases efficiency and achieves maximum profit for the private cloud.

9.
Communication is very easy today thanks to the rapid growth of information technology. In addition, the development of cloud computing makes it easier than before by facilitating the exchange of large volumes of data anytime and from anywhere in the world. E-businesses run successfully today because of cloud computing technology. Specifically, cloud services provide enormous support for sharing resources and data efficiently and at low cost to businesses. However, security remains an essential issue for cloud users and services. For this purpose, many security policies have been introduced by various researchers for enhancing security in e-commerce applications. However, the available security policies still fail to provide secure services for e-commerce applications. To overcome this disadvantage, we propose a new policy-oriented secured service model for securing services in the cloud. The proposed model combines a trust-aware policy scheduling algorithm with an effective and intelligent re-encryption scheme. A dynamic trust-aware, policy-oriented service is used by the cloud service provider to allocate cloud users' requests, and a re-encryption scheme employing an intelligent agent is used to store data securely in the cloud database. The proposed model assures scalability, reliability, and security for the stored e-commerce data and service access.

10.
The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of resources they use. This capability is called auto-scaling, and its main purpose is to automatically adjust the scale of the system running the application so as to satisfy the varying workload with minimum resource usage. The need for auto-scaling is particularly important during workload peaks, in which applications may need to scale up to extremely large-scale systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems; moreover, cloud providers' solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper we propose DEPAS, a decentralized probabilistic auto-scaling algorithm integrated into a P2P architecture that is cloud-provider independent, thus allowing the auto-scaling of services over multiple cloud infrastructures at the same time. Our experiments (simulations and real deployments), which are based on real service traces, show that our approach is capable of: (i) keeping the overall utilization of all the instantiated cloud resources in a target range, and (ii) maintaining service response times close to the ones obtained using optimal centralized auto-scaling approaches.
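In a decentralized probabilistic scheme of this kind, each node decides locally, with some probability, whether to add or remove an instance so that the expected aggregate action drives utilization toward the target range, without any central coordinator. The snippet below is only a rough sketch of that idea, not DEPAS itself: the target band, probability formula, and function names are assumptions made for illustration.

```python
import random

def local_scaling_decision(local_util, target=0.6, band=0.1):
    """Each peer decides independently whether to add or remove one instance.
    The probability grows with the relative deviation of the locally observed
    utilization from the target, so in expectation the aggregate action pushes
    overall utilization back into the [target - band, target + band] range."""
    if local_util > target + band:
        p = min(1.0, (local_util - target) / target)   # over-utilized: maybe scale out
        return "add" if random.random() < p else "none"
    if local_util < target - band:
        p = min(1.0, (target - local_util) / target)   # under-utilized: maybe scale in
        return "remove" if random.random() < p else "none"
    return "none"

# Example: 20 peers each observing a different local utilization sample.
random.seed(1)
decisions = [local_scaling_decision(random.uniform(0.3, 0.9)) for _ in range(20)]
print(decisions.count("add"), "peers vote to add;", decisions.count("remove"), "vote to remove")
```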

11.
With the development of virtualization and cloud computing, more and more high-performance computing applications run on cloud resources. In a virtualization-based high-performance computing cloud, a high-performance computing application runs in multiple virtual machines, which may be placed on different physical nodes. If the virtual machines of several communication-intensive jobs are placed on the same physical node, they compete for that node's network I/O resources; if their demand for network I/O exceeds the node's network I/O bandwidth limit, the performance of the communication-intensive jobs is severely degraded. To address this contention for network I/O resources, a virtual machine placement algorithm based on network I/O load balancing, NLPA, is proposed; it uses a network I/O load-balancing strategy to reduce the contention among virtual machines for network I/O resources. Experiments show that, compared with a greedy algorithm on the same set of high-performance computing jobs, NLPA performs better in three respects: job completion time, network I/O throughput, and network I/O load balance.

12.
In the cloud, ensuring proper elasticity for hosted applications and services is a challenging problem and is far from being solved. To achieve proper elasticity, the minimal number of cloud resources needed to satisfy a particular service level objective (SLO) requirement has to be determined. In this paper, we present an analytical model based on Markov chains to predict the number of cloud instances or virtual machines (VMs) needed to satisfy a given SLO performance requirement such as response time, throughput, or request loss probability. For the estimation of these SLO performance metrics, our analytical model takes as input the offered workload, the number of VM instances, and the capacity of each VM instance. The correctness of the model has been verified using discrete-event simulation. Our model has also been validated using experimental measurements conducted on the Amazon Web Services cloud platform.
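The abstract describes predicting the smallest number of VMs that keeps an SLO metric, such as mean response time, within bound for a given offered load. The paper's Markov-chain model is not reproduced here; as a stand-in, the sketch below sizes the system with a plain M/M/c queue (Erlang-C), a common simplified assumption, and the arrival rate, per-VM service rate, and SLO values in the example are made up.

```python
from math import factorial

def erlang_c(c, a):
    """Probability that an arriving request must wait in an M/M/c queue,
    with c servers and offered load a = arrival_rate / service_rate (requires a < c)."""
    tail = a**c / (factorial(c) * (1 - a / c))
    norm = sum(a**k / factorial(k) for k in range(c)) + tail
    return tail / norm

def min_vms_for_response_time(arrival_rate, service_rate, slo_seconds, max_vms=100):
    """Smallest number of VMs whose mean response time meets the SLO."""
    a = arrival_rate / service_rate
    for c in range(max(1, int(a) + 1), max_vms + 1):
        wait = erlang_c(c, a) / (c * service_rate - arrival_rate)  # mean queueing delay
        response = wait + 1.0 / service_rate
        if response <= slo_seconds:
            return c
    return None

# Example (hypothetical numbers): 80 req/s offered, each VM serves 10 req/s, SLO = 0.15 s.
print(min_vms_for_response_time(arrival_rate=80, service_rate=10, slo_seconds=0.15))
```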

13.
Providing differentiated service in a consolidated storage environment is a challenging task. To address this problem, we introduce FAIRIO, a cycle-based I/O scheduling algorithm that provides differentiated service to workloads concurrently accessing a consolidated RAID storage system. FAIRIO enforces proportional sharing of I/O service through fair scheduling of disk time. During each cycle of the algorithm, I/O requests are scheduled according to workload weights and disk-time utilization history. Experiments, which were driven by the I/O request streams of real and synthetic I/O benchmarks and run on a modified version of DiskSim, provide evidence of FAIRIO's effectiveness and demonstrate that fair scheduling of disk time is key to achieving differentiated service in a RAID storage system. In particular, the experimental results show that, for a broad range of workload request types, sizes, and access characteristics, the algorithm provides differentiated storage throughput that is within 10% of being perfectly proportional to workload weights, and it achieves this with little or no degradation of aggregate throughput. The core design concepts of FAIRIO, including service-time allocation and history-driven compensation, can potentially be used to design I/O scheduling algorithms that provide workloads with differentiated service in storage systems composed of RAIDs, multiple RAIDs, SANs, and hypervisors for clouds.
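The key mechanism described above is a per-cycle budget of disk time divided among workloads in proportion to their weights, with a history-driven correction for workloads that received less than their entitled share in earlier cycles. The following sketch illustrates that allocation step only; the cycle length, data structures, and compensation rule are illustrative assumptions, not FAIRIO's actual design.

```python
def allocate_cycle(weights, history, cycle_ms=100.0, gain=0.5):
    """weights: workload -> proportional weight.
    history: workload -> (disk time received so far, disk time entitled so far), in ms.
    Returns this cycle's disk-time budget per workload: the weight-proportional
    share plus a correction for past under-service (history-driven compensation)."""
    total_w = sum(weights.values())
    budgets = {}
    for w, weight in weights.items():
        fair_share = cycle_ms * weight / total_w
        received, entitled = history.get(w, (0.0, 0.0))
        deficit = entitled - received                 # positive if under-served so far
        budgets[w] = max(0.0, fair_share + gain * deficit)
    # Rescale so the cycle is not over-committed.
    scale = cycle_ms / max(sum(budgets.values()), 1e-9)
    return {w: b * scale for w, b in budgets.items()}

weights = {"tenantA": 3, "tenantB": 1}
history = {"tenantA": (200.0, 240.0), "tenantB": (90.0, 80.0)}  # A under-served, B over-served
print(allocate_cycle(weights, history))
```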

14.
Wu Hao, Chen Xin, Song Xiaoyu, Zhang Chi, Guo He 《The Journal of Supercomputing》 2021, 77(1): 679-710

With the wide deployment of cloud computing in scientific computing, cost minimization is increasingly critical for large-scale scientific workflows. Unfortunately, due to the highly intricate directed acyclic graph (DAG)-based workflows and the flexible usage of virtual machines (VMs) in cloud platforms, existing workflow scheduling approaches fail to strike a balance between the parallelism and the topology of the DAG-based workflow when using VMs, which leads to low VM utilization and higher cost. To address these issues, this paper presents a novel task scheduling framework named cost minimization approach with the DAG splitting method (COMSE) for minimizing the cost of running a deadline-constrained large-scale scientific workflow. First, we provide a comprehensive theoretical analysis of how to improve the utilization of a resource-balanced multi-vCPU VM when running multiple tasks simultaneously. Second, considering the balance between the parallelism and the topology of a workflow, we simplify the DAG-based workflow and, based on the simplified DAG, devise a DAG splitting method to preprocess the workflow. Third, since the cloud is charged by the hour, we also design an exact algorithm to find the optimal operation pattern for a given schedule so that the consumed instance hours are minimized; this algorithm is named instance-hours minimization by Dijkstra (TOID). Finally, by employing the DAG splitting method and TOID, COMSE schedules a deadline-constrained large-scale scientific workflow on multi-vCPU VMs and pursues two important objectives: minimizing the computation cost and the communication cost. Our approach is evaluated through a rigorous performance study using real-world workflows, and the results show that the proposed COMSE approach outperforms existing algorithms in terms of computation cost and communication cost.


15.
Datacenters have played an increasingly essential role as the underlying infrastructure of cloud computing. As implied by the essence of cloud computing, resources in these datacenters are shared by multiple competing entities, which can be either tenants that rent virtual machines (VMs) in a public cloud such as Amazon EC2, or applications that embrace data-parallel frameworks like MapReduce in a private cloud maintained by Google. It has been generally observed that, with traditional transport-layer protocols allocating link bandwidth in datacenters, network traffic from competing applications interferes with one another, resulting in a severe lack of predictability and fairness of application performance. Such a critical issue has drawn a substantial amount of recent research attention to bandwidth allocation in datacenter networks, with a number of new mechanisms proposed to efficiently and fairly share a datacenter network among competing entities. In this article, we present an extensive survey of existing bandwidth allocation mechanisms in the literature, covering the scenarios of both public and private clouds. We thoroughly investigate their underlying design principles, evaluate the trade-offs involved in their design choices, and summarize them in a unified design space, with the hope of conveying some meaningful insights for better designs in the future.

16.
The Non-Affinity Aware Grouping based resource Allocation (NAGA) method for the General VM Placement (GP) problem allows (1) VMs that are required to be placed on distinct PMs to be co-located on the same PM, and (2) VMs that are required to be co-located on the same PM to be dispersed across distinct PMs, leading to serious performance degradation for applications running over multiple VMs in cloud computing. In this work we study the Affinity Aware VM Placement (AAP) problem and propose a Joint Affinity Aware Grouping and Bin Packing (JAGBP) method to remedy the deficiency of the NAGA method. We first introduce the notion of VM affinity to identify affinity relationships among VMs that must be placed with a specific placement pattern, such as co-location or dispersed placement, and formulate the AAP problem. We then propose an affinity-aware resource scheduling framework, methods to obtain and identify the affinity relationships between VMs, and the JAGBP method. Finally, we present holistic evaluation experiments to validate the feasibility and evaluate the performance of the proposed methods. The results demonstrate the significance of the introduced affinity and the effectiveness of the JAGBP method.

17.
Task scheduling is a fundamental issue in achieving high efficiency in cloud computing. However, designing and implementing efficient scheduling algorithms is a big challenge (as the general scheduling problem is NP-complete). Most existing task-scheduling methods for cloud computing only consider task resource requirements for CPU and memory, without considering bandwidth requirements. In order to obtain better performance, in this paper we propose a bandwidth-aware algorithm for divisible task scheduling in cloud-computing environments. A nonlinear programming model for the divisible task-scheduling problem under the bounded multi-port model is presented. By solving this model, an optimized allocation scheme that determines the proper number of tasks assigned to each virtual resource node is obtained. On the basis of the optimized allocation scheme, a heuristic algorithm for divisible load scheduling, called the bandwidth-aware task-scheduling (BATS) algorithm, is proposed. The performance of the algorithm is evaluated using the CloudSim toolkit. Experimental results show that, compared with the fair-based, bandwidth-only, and computation-only task-scheduling algorithms, the proposed BATS algorithm performs better. Copyright © 2012 John Wiley & Sons, Ltd.
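In divisible-load scheduling, the total workload can be split continuously, so a node's share is limited by both its computing speed and the bandwidth over which its fraction must be delivered. The sketch below is a simplified illustration of that idea, not the paper's nonlinear programming model: it assigns each node a fraction proportional to its effective rate, taken as the minimum of compute rate and link bandwidth, under the bounded multi-port assumption that each link has its own bound.

```python
def split_divisible_load(total_load, nodes):
    """nodes: dict name -> {"compute": units/s, "bandwidth": units/s}.
    Give each node a fraction of the load proportional to its effective rate,
    i.e. the slower of its compute rate and its link bandwidth (simplified
    bounded multi-port view). Returns name -> assigned load."""
    rates = {n: min(spec["compute"], spec["bandwidth"]) for n, spec in nodes.items()}
    total_rate = sum(rates.values())
    return {n: total_load * r / total_rate for n, r in rates.items()}

nodes = {
    "vm1": {"compute": 50.0, "bandwidth": 100.0},   # compute-bound
    "vm2": {"compute": 120.0, "bandwidth": 40.0},   # bandwidth-bound
    "vm3": {"compute": 80.0, "bandwidth": 80.0},
}
print(split_divisible_load(1000.0, nodes))
```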

18.
王泽武, 孙磊, 郭松辉 《计算机应用》 2017, 37(10): 2780-2786
To address the problem that current cloud task-scheduling algorithms cannot process tasks in real time in a cryptographic cloud environment, a real-time threshold scheduling method based on a rolling optimization window is proposed. First, the key-invocation step is integrated into the cryptographic task workflow, and a cryptographic cloud service architecture is proposed. Second, to achieve real-time task scheduling, a rolling-window-based cryptographic task scheduler model and a throughput analysis model are constructed to obtain real-time throughput data. Finally, to meet cloud tenants' demand for high-speed cryptographic services, a throughput-threshold scheduling algorithm is proposed, which migrates virtual cryptographic machines in real time according to how the real-time throughput changes relative to the throughput threshold. Simulation results show that, compared with methods that do not use a rolling optimization window or VM migration, the proposed method achieves shorter task completion times and lower CPU usage, and keeps real-time throughput steadily within 70%-85% of the network bandwidth, verifying its effectiveness and real-time capability in a cryptographic cloud environment.

19.
In a Differentiated Services (DiffServ) network node, the output bandwidth of the multiple per-class queues is guaranteed by a weighted scheduling algorithm; fixed-weight scheduling can no longer provide a fair bandwidth guarantee when the network load changes. This paper proposes a scheduling algorithm that dynamically adjusts the weights to achieve fair bandwidth allocation among multiple service classes. Simulation experiments show that the algorithm responds effectively to changes in the number of load flows and quickly converges to the ideal fair scheduling weights.
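A straightforward way to realize the dynamic adjustment described above is to recompute each class's scheduling weight whenever the number of active flows changes, so that per-flow bandwidth stays fair across classes. The snippet below is a minimal sketch under that assumption; the class names, base weights, and update rule are illustrative and not taken from the paper.

```python
def recompute_weights(classes):
    """classes: name -> {"base_weight": relative service-class weight,
                          "active_flows": current number of flows}.
    Return normalized scheduling weights so that each class's share scales with
    base_weight * active_flows, keeping per-flow bandwidth fair as load shifts."""
    raw = {c: spec["base_weight"] * max(spec["active_flows"], 0)
           for c, spec in classes.items()}
    total = sum(raw.values()) or 1.0
    return {c: w / total for c, w in raw.items()}

# Example: the gold class keeps a higher per-flow share, but weights track flow counts.
classes = {
    "gold":   {"base_weight": 4, "active_flows": 10},
    "silver": {"base_weight": 2, "active_flows": 30},
    "bronze": {"base_weight": 1, "active_flows": 60},
}
print(recompute_weights(classes))   # weights a WRR/WFQ scheduler would use this round
```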

20.
Virtualization technology makes data centers more dynamic and easier to administer. Today, cloud providers offer customers access to complex applications running on virtualized hardware. Nevertheless, large virtualized data centers become stochastic environments, and the simplification on the user side leads to many challenges for the provider, who has to find cost-efficient configurations and deal with dynamic environments to ensure service level objectives (SLOs). We introduce a software solution that reduces the degree of human intervention needed to manage clouds. It is designed as a multi-agent system (MAS) and placed on top of the Infrastructure as a Service (IaaS) layer. Worker agents allocate resources, configure applications, check the feasibility of requests, and generate cost estimates. They are equipped with application-specific knowledge that allows them to estimate the type and number of necessary resources. During runtime, a worker agent monitors the job and adapts its resources to ensure the specified quality of service, even in noisy clouds where job instances are influenced by other jobs. Worker agents interact with a scheduler agent, which takes care of limited resources and performs cost-aware scheduling by assigning jobs to times with low costs. The whole architecture is self-optimizing and able to use public or private clouds. Building a private cloud requires finding a mapping of virtual machines (VMs) to hosts. We present a rule-based mapping algorithm for VMs. It offers an interface where policies can be defined and combined in a generic way. The algorithm performs the initial mapping at request time as well as remapping during runtime, and deals with policy and infrastructure changes. An energy-aware scheduler and the availability of cheap resources provided by a spot market are analyzed. We evaluated our approach by building a SaaS stack that assigns resources in consideration of an energy function and ensures the SLOs of two different applications: a brokerage system and high-performance computing software. Experiments were done on a real cloud system and by simulation.
