Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Software-as-a-service (SaaS) multi-tenancy in cloud-based applications helps service providers save cost, improve resource utilization, and reduce service customization and maintenance time. This is achieved by sharing resources and service instances among multiple "tenants" of the cloud-hosted application. However, supporting multi-tenancy adds complexity to the capabilities that SaaS applications must provide. Security is one of the key requirements that must be addressed when engineering multi-tenant SaaS applications. The sharing of resources among tenants (i.e. multi-tenancy) increases tenants' concerns about the security of their cloud-hosted assets. Compounding this, existing traditional security engineering approaches do not fit well with the multi-tenancy application model, where tenants and their security requirements often emerge after the applications and services have been developed. The resulting applications do not usually support diverse security capabilities based on different tenants' needs, some of which may change at run-time, i.e. after cloud application deployment. We introduce a novel model-driven security engineering approach for multi-tenant, cloud-hosted SaaS applications. Our approach is based on externalizing security from the underlying SaaS application, allowing both application/service and security to evolve at runtime. Multiple security sets can be enforced on the same application instance based on different tenants' security requirements. We use abstract models to capture the service provider's and multiple tenants' security requirements and then generate security integration and configurations at runtime. We use dependency injection and dynamic weaving via Aspect-Oriented Programming (AOP) to integrate security within critical application/service entities at runtime. We explain our approach, architecture and implementation details, discuss a usage example, and present an evaluation of our approach on a set of open source web applications.
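The abstract describes integrating per-tenant security through dependency injection and AOP-style dynamic weaving but gives no code. The following is a minimal, hypothetical Python sketch of that idea: tenant-specific security "aspects" live outside the shared service and are woven around a service method at call time. All names (the aspect classes, tenant_policies, get_invoice) are invented for illustration and are not the paper's implementation.

```python
# Minimal sketch of runtime security weaving per tenant (hypothetical API,
# not the paper's implementation). Each tenant supplies its own "aspects",
# which are injected around the shared service method at call time.
from functools import wraps

class AuthenticationAspect:
    def before(self, tenant, user, *args):
        if user not in {"alice", "bob"}:          # stand-in identity store
            raise PermissionError(f"{user} not authenticated for {tenant}")

class AuditAspect:
    def after(self, tenant, user, result):
        print(f"[audit] tenant={tenant} user={user} result={result!r}")

# Per-tenant security configuration, externalized from the application code.
tenant_policies = {
    "tenant-A": [AuthenticationAspect(), AuditAspect()],
    "tenant-B": [AuditAspect()],                  # weaker requirements
}

def weave_security(func):
    """Dynamically wrap a shared service entity with tenant-specific aspects."""
    @wraps(func)
    def wrapper(tenant, user, *args, **kwargs):
        aspects = tenant_policies.get(tenant, [])
        for a in aspects:
            if hasattr(a, "before"):
                a.before(tenant, user, *args)
        result = func(tenant, user, *args, **kwargs)
        for a in aspects:
            if hasattr(a, "after"):
                a.after(tenant, user, result)
        return result
    return wrapper

@weave_security
def get_invoice(tenant, user, invoice_id):
    return {"tenant": tenant, "invoice": invoice_id}

print(get_invoice("tenant-A", "alice", 42))       # passes auth, then audited
print(get_invoice("tenant-B", "mallory", 7))      # only audited, no auth check
```

Because the policy table, not the service code, decides which aspects apply, a tenant's security set can be changed at runtime without redeploying the shared service.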

2.
Cloud federation allows companies in need of computational resources to use resources hosted by different cloud providers, reducing the cost of IT infrastructure by lowering capital and operational expenses. This is the result of economies of scale and the possibility for organizations to purchase just as much computing and storage capacity as needed, whenever needed. However, a clear specification of cost savings requires a detailed specification of the costs incurred. Although there have been some efforts to define cost models for clouds, a comprehensive cost model that covers all cost factors and types of clouds is still missing. In this paper, we fill this gap by proposing a cost model for the most general form of a cloud, namely federated hybrid clouds. This type of cloud is composed of a private cloud and a number of interoperable public clouds. The proposed cost model is applied within a cost-minimization algorithm for making service placement decisions in clouds. We demonstrate the workings of our cost model and service placement algorithm in a specific cloud scenario. Our results show that the service placement algorithm combined with the cost model minimizes spending on computational services.
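To illustrate how a cost model can drive placement in a federated hybrid cloud, here is a minimal Python sketch. The cost factors, prices and the greedy cheapest-site rule are invented for illustration; the paper's model covers many more factors and its placement algorithm is not reproduced here.

```python
# Illustrative sketch of a hybrid-cloud cost model driving service placement.
# Prices and cost factors are invented; the actual model is far more detailed.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    compute_price: float   # $ per CPU-hour
    storage_price: float   # $ per GB-month
    egress_price: float    # $ per GB transferred out

@dataclass
class Service:
    name: str
    cpu_hours: float
    storage_gb: float
    egress_gb: float

def monthly_cost(site: Site, svc: Service) -> float:
    return (site.compute_price * svc.cpu_hours
            + site.storage_price * svc.storage_gb
            + site.egress_price * svc.egress_gb)

def place(services, sites):
    """Greedy cost minimization: put each service on its cheapest site."""
    plan = {}
    for svc in services:
        best = min(sites, key=lambda s: monthly_cost(s, svc))
        plan[svc.name] = (best.name, round(monthly_cost(best, svc), 2))
    return plan

sites = [
    Site("private", compute_price=0.02, storage_price=0.01, egress_price=0.00),
    Site("public-1", compute_price=0.05, storage_price=0.02, egress_price=0.09),
    Site("public-2", compute_price=0.04, storage_price=0.03, egress_price=0.05),
]
services = [Service("web", 800, 50, 200), Service("archive", 10, 5000, 1)]
print(place(services, sites))
```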

3.
Cloud computing provides scalable computing and storage resources over the Internet. These scalable resources can be dynamically organized as many virtual machines (VMs) to run user applications on a pay-per-use basis. The resources required by a VM are sliced from a physical machine (PM) in the cloud computing system, and a PM may host one or more VMs. When a cloud provider needs to create a number of VMs, the main concern is the VM placement problem: how to place these VMs at appropriate PMs so as to provision their required resources. However, if two or more VMs are placed at the same PM, there is a certain degree of interference between them due to the sharing of non-sliceable resources, e.g. I/O resources. This phenomenon is called VM interference. VM interference affects the performance of applications running in VMs, especially delay-sensitive applications, which have quality of service (QoS) requirements on their data access delays. This paper investigates how to integrate QoS awareness with virtualization in cloud computing systems, formulated as the QoS-aware VM placement (QAVMP) problem. In addition to fully exploiting the resources of PMs, the QAVMP problem considers the QoS requirements of user applications and the reduction of VM interference. The problem therefore involves three factors: resource utilization, application QoS, and VM interference. We first formulate the QAVMP problem as an Integer Linear Programming (ILP) model by integrating the three factors into the profit of the cloud provider. Due to the computational complexity of the ILP model, we propose a polynomial-time heuristic algorithm to solve the QAVMP problem efficiently. In the heuristic algorithm, a bipartite graph models all possible placement relationships between VMs and PMs, and the VMs are gradually placed at their preferred PMs to maximize the profit of the cloud provider as much as possible. Finally, simulation experiments demonstrate the effectiveness of the proposed heuristic algorithm by comparison with other VM placement algorithms.
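A rough sketch of the heuristic's flavour: VMs and PMs form a bipartite graph, each feasible edge gets a profit score combining utilization, QoS fit and an interference penalty, and each VM is greedily placed on its best-scoring PM. The scoring formula and weights below are invented stand-ins, not the ILP or heuristic from the paper.

```python
# Sketch of a greedy bipartite VM->PM placement guided by an invented
# profit score combining utilization gain, QoS fit and interference penalty.

pms = {  # physical machines: free CPU units and number of already-hosted VMs
    "pm1": {"free_cpu": 8, "hosted": 0},
    "pm2": {"free_cpu": 4, "hosted": 2},
}
vms = [  # VMs to place: CPU demand and whether the application is delay-sensitive
    {"id": "vm1", "cpu": 4, "delay_sensitive": True},
    {"id": "vm2", "cpu": 2, "delay_sensitive": False},
    {"id": "vm3", "cpu": 4, "delay_sensitive": True},
]

W_UTIL, W_QOS, W_INTF = 1.0, 2.0, 1.5   # illustrative weights

def profit(vm, pm):
    if pm["free_cpu"] < vm["cpu"]:
        return None                                # infeasible edge in the bipartite graph
    util_gain = vm["cpu"] / pm["free_cpu"]         # prefer tight packing
    qos = 1.0 if not vm["delay_sensitive"] or pm["hosted"] == 0 else 0.3
    interference = pm["hosted"]                    # more co-located VMs, more I/O contention
    return W_UTIL * util_gain + W_QOS * qos - W_INTF * interference

placement = {}
for vm in vms:                                     # place VMs one by one at their best PM
    scored = [(profit(vm, pm), name) for name, pm in pms.items()]
    scored = [(s, n) for s, n in scored if s is not None]
    if not scored:
        placement[vm["id"]] = None                 # no feasible PM left
        continue
    _, best = max(scored)
    placement[vm["id"]] = best
    pms[best]["free_cpu"] -= vm["cpu"]
    pms[best]["hosted"] += 1

print(placement)
```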

4.
Adaptive Management and Multi-Objective Optimization of Virtual Machine Placement in Cloud Computing (cited 3 times: 0 self-citations, 3 by others)
李强, 郝沁汾, 肖利民, 李舟军. 《计算机学报》 (Chinese Journal of Computers), 2011, 34(12): 2253-2264
A key requirement of cloud computing is the placement of large numbers of virtual machines within its infrastructure. The mapping between virtual machines and physical nodes determines how the virtualized resources of the cloud are allocated to multiple Web applications, and it has a significant impact on the performance, energy consumption, and QoS guarantees of the cloud computing system. This paper proposes an adaptive management framework for virtual machine placement in cloud computing, together with a multi-objective genetic algorithm for VM placement under application service-level objective constraints, which is used to derive the placement policies of the framework...

5.
To address the problem of protecting private data in multi-tenant applications, and based on an analysis of the characteristics of multi-tenant applications and their privacy-protection requirements, this work introduces trusted computing technology into multi-tenant privacy protection. A customizable encryption scheme is proposed based on the virtual Trusted Platform Module (vTPM): tenants' private data are encrypted with encryption keys provided by the vTPM, while the vTPM's key-protection and key-management functions are used to safeguard the encryption keys themselves. Finally, the scheme is implemented on a vTPM built on Xen.

6.
Geographically distributed cloud platforms enable an attractive approach to large-scale content delivery. Storage at various sites can be dynamically acquired from (and released back to) the cloud provider so as to support content caching, according to the current demands for the content from the different geographic regions. When storage is sufficiently expensive that not all content should be cached at all sites, two issues must be addressed: how should requests for content be routed to the cloud provider sites, and what policy should be used for caching content using the elastic storage resources obtained from the cloud provider. Existing approaches are typically designed for non-elastic storage, and little is known about the optimal policies when minimizing the delivery costs for distributed elastic storage. In this paper, we propose an approach in which elastic storage resources are exploited using a simple dynamic caching policy, while request routing is updated periodically according to the solution of an optimization model. Use of pull-based dynamic caching, rather than push-based placement, provides robustness to unpredicted changes in request rates. We show that this robustness is provided at low cost. Even with fixed request rates, use of the dynamic caching policy typically yields content delivery cost within 10% of that with the optimal static placement. We compare request routing according to our optimization model to simpler baseline routing policies, and find that the baseline policies can yield greatly increased delivery cost relative to optimized routing. Finally, we present a lower-cost approximate solution algorithm for our routing optimization problem that yields content delivery cost within 2.5% of the optimal solution.
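The caching half of such an approach can be illustrated with a pull-based cache over elastic storage: objects are fetched into the cache site only on a miss, and the least recently used object is evicted when the purchased storage budget is exceeded. The sketch below is a generic LRU illustration with invented sizes; it is not the paper's caching policy or routing model.

```python
# Sketch of pull-based dynamic caching on elastic storage: objects are pulled
# into the cache on a miss, and a storage slot is notionally acquired/released
# as the cache grows and evicts. Budget and trace are invented.
from collections import OrderedDict

class ElasticLRUCache:
    def __init__(self, max_objects):
        self.max_objects = max_objects    # storage budget bought from the cloud
        self.store = OrderedDict()        # object_id -> content, in LRU order
        self.origin_fetches = 0

    def get(self, object_id):
        if object_id in self.store:
            self.store.move_to_end(object_id)        # refresh LRU position
            return self.store[object_id]
        # Miss: pull from origin, acquire one storage slot, evict if over budget.
        self.origin_fetches += 1
        content = f"content-of-{object_id}"          # stand-in for an origin fetch
        self.store[object_id] = content
        if len(self.store) > self.max_objects:
            self.store.popitem(last=False)           # release the least-recently-used slot
        return content

cache = ElasticLRUCache(max_objects=3)
for oid in ["a", "b", "a", "c", "d", "a", "b"]:      # "b" gets evicted before its reuse
    cache.get(oid)
print("origin fetches:", cache.origin_fetches)        # 5 misses out of 7 requests
```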

7.
One major service provided by cloud computing is Software as a Service (SaaS). As competition in the SaaS market intensifies, it becomes imperative for a SaaS provider to design and configure its computing system properly. This paper studies the application placement problem encountered in computer clustering in SaaS networks. The problem involves deciding which software applications to install on each computer cluster of the provider and how to assign customers to the clusters in order to minimize total cost. Given the complexity of the problem, we propose two algorithms to solve it. The first is a probabilistic greedy algorithm that includes randomization and perturbation features to avoid getting trapped in a local optimum. The second is based on a reformulation of the problem in which each cluster is assigned an application configuration from a properly generated subset of configurations. We conducted an extensive computational study using large data sets with up to 300 customers and 50 applications. The results show that both algorithms outperform a standard branch-and-bound procedure on large problem instances, and that the probabilistic greedy algorithm is the most efficient in solving the problem.
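A minimal sketch of what a probabilistic greedy placement with randomization and perturbation can look like: customers are assigned in a shuffled order, each to one of the few cheapest clusters chosen at random, and the best solution over several restarts is kept. The costs, instances and exact randomization scheme are invented for illustration and are not the paper's algorithm.

```python
# Sketch of a probabilistic greedy placement with randomized choices and
# perturbed restarts. All costs are invented.
import random

random.seed(1)
clusters = ["c1", "c2", "c3"]
customers = ["u1", "u2", "u3", "u4", "u5"]
apps = {"u1": "A", "u2": "A", "u3": "B", "u4": "B", "u5": "A"}
INSTALL_COST = 10.0       # cost of installing an application on a cluster
ASSIGN_COST = {           # invented per-customer serving cost on each cluster
    ("u1", "c1"): 1, ("u1", "c2"): 4, ("u1", "c3"): 6,
    ("u2", "c1"): 5, ("u2", "c2"): 1, ("u2", "c3"): 6,
    ("u3", "c1"): 2, ("u3", "c2"): 3, ("u3", "c3"): 1,
    ("u4", "c1"): 6, ("u4", "c2"): 2, ("u4", "c3"): 1,
    ("u5", "c1"): 1, ("u5", "c2"): 5, ("u5", "c3"): 6,
}

def total_cost(assignment):
    installed = {(apps[u], c) for u, c in assignment.items()}
    return INSTALL_COST * len(installed) + sum(ASSIGN_COST[u, c] for u, c in assignment.items())

def probabilistic_greedy(restarts=20, top_k=2):
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        order = customers[:]
        random.shuffle(order)                       # perturbation between restarts
        assignment = {}
        for u in order:
            installed = {(apps[v], c) for v, c in assignment.items()}
            def marginal(c):
                extra = 0 if (apps[u], c) in installed else INSTALL_COST
                return ASSIGN_COST[u, c] + extra
            ranked = sorted(clusters, key=marginal)
            assignment[u] = random.choice(ranked[:top_k])   # randomized greedy pick
        cost = total_cost(assignment)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

plan, cost = probabilistic_greedy()
print(plan, cost)
```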

8.
In this study, we describe the further development of Elastic Cloud Computing Cluster (EC3), a tool for creating self-managed, cost-efficient virtual hybrid elastic clusters on top of Infrastructure as a Service (IaaS) clouds. By using spot instances and checkpointing techniques, EC3 can significantly reduce the total execution cost as well as facilitating automatic fault tolerance. Moreover, EC3 can deploy and manage hybrid clusters across on-premises and public cloud resources, thereby introducing cloud bursting capabilities. We present the results of a case study that we conducted to assess the effectiveness of the tool, based on the structural dynamic analysis of buildings. In addition, we evaluated the checkpointing algorithms in a real cloud environment with existing workloads to study their effectiveness. The results demonstrate the feasibility and benefits of this type of cluster for computationally intensive applications.

9.
Cloud computing is becoming a profitable technology because it offers cost-effective IT solutions globally. A well-designed task scheduling algorithm ensures optimal utilization of cloud resources and reduces execution time dynamically. This article deals with the scheduling of inter-dependent subtasks on unrelated parallel computing machines in a cloud computing environment. Two variants of the problem are considered, based on two different objective functions: the first minimizes total completion time, while the second minimizes makespan. Heuristic and meta-heuristic (HEART) algorithms are proposed to solve the task scheduling problems; these algorithms exploit properties of list scheduling for the unrelated parallel machine scheduling problem. A mixed integer linear programming (MILP) formulation is provided for both variants of the problem, and the optimal solution is obtained by solving this formulation with A Mathematical Programming Language (AMPL). Extensive numerical experiments evaluate the performance of the proposed algorithms, whose solutions are found to outperform those of existing algorithms. The proposed algorithms can be used by cloud computing service providers (CCSPs) to improve resource utilization and reduce operating cost.
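The list-scheduling building block mentioned in the abstract can be sketched as follows for unrelated parallel machines: tasks are taken in a priority order and each goes to the machine that finishes it earliest. The processing times are invented and precedence constraints are omitted for brevity; this is not the HEART algorithm itself.

```python
# Sketch of list scheduling on unrelated parallel machines: each task, in list
# order, is placed on the machine that completes it earliest.

# p[task][machine] = processing time of the task on that machine ("unrelated":
# no fixed speed ratio between machines).
p = {
    "t1": {"m1": 4, "m2": 9},
    "t2": {"m1": 7, "m2": 3},
    "t3": {"m1": 5, "m2": 5},
    "t4": {"m1": 6, "m2": 2},
}
priority_list = ["t3", "t1", "t2", "t4"]      # any order; could come from a meta-heuristic

ready_time = {"m1": 0.0, "m2": 0.0}
schedule = []
for task in priority_list:
    # earliest-completion-time rule over all machines
    machine = min(p[task], key=lambda m: ready_time[m] + p[task][m])
    start = ready_time[machine]
    finish = start + p[task][machine]
    ready_time[machine] = finish
    schedule.append((task, machine, start, finish))

makespan = max(ready_time.values())
total_completion = sum(f for _, _, _, f in schedule)
for row in schedule:
    print(row)
print("makespan:", makespan, "total completion time:", total_completion)
```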

10.
The widespread adoption of traditional heterogeneous systems has substantially improved the available computing power and, at the same time, raised optimisation issues related to the processing of task streams across both CPU and GPU cores in heterogeneous systems. Similar to the heterogeneity gains in traditional systems, cloud computing has started to add heterogeneity support, typically through GPU instances, to the conventional CPU-based cloud resources. This optimisation of cloud resources will arguably have a real impact when running on-demand, computationally-intensive applications. In this work, we investigate the scaling of pattern-based parallel applications from physical, "local" mixed CPU/GPU clusters to a public cloud CPU/GPU infrastructure. Specifically, such parallel patterns are deployed via algorithmic skeletons to exploit a particular parallel behaviour while hiding implementation details. We propose a systematic methodology to exploit approximated analytical performance/cost models, and an integrated programming framework suitable for targeting both local and remote resources, to support the offloading of computations from structured parallel applications to heterogeneous cloud resources, such that performance values not attainable on local resources may actually be achieved with the remote resources. The amount of remote resources necessary to achieve a given performance target is calculated through the performance models, so that any user can hire the amount of cloud resources needed to reach a given target performance value. It is therefore expected that such models can be used to devise the optimal proportion of computations to be allocated to different remote nodes for Big Data computations. We present experiments run with a proof-of-concept implementation based on FastFlow, on small departmental clusters as well as on a public cloud infrastructure with CPUs and GPUs using the Amazon Elastic Compute Cloud. In particular, we show how CPU-only and mixed CPU/GPU computations can be offloaded to remote cloud resources with predictable performance, and how data-intensive applications can be mapped to a mix of local and remote resources to guarantee optimal performance.

11.
Applications are increasingly being deployed in the cloud due to benefits stemming from economy of scale, scalability, flexibility and a utility-based pricing model. Although most cloud-based applications have hitherto been enterprise-style, there is an emerging need for hosting real-time streaming applications in the cloud that demand both high availability and low latency. Contemporary cloud computing research has seldom focused on solutions that provide both high availability and real-time assurance to these applications in a way that also optimizes resource consumption in data centers, which is a key consideration for cloud providers. This paper makes three contributions to address this dual challenge. First, it describes an architecture for a fault-tolerant framework that can be used to automatically deploy replicas of virtual machines in data centers in a way that optimizes resources while assuring availability and responsiveness. Second, it describes the design of a pluggable framework within the fault-tolerant architecture that enables plugging in different placement algorithms for VM replica deployment. Third, it illustrates the design of a framework for real-time dissemination of resource utilization information using a real-time publish/subscribe framework, which is required by the replica selection and placement framework. Experimental results using a case study that involves a specific replica placement algorithm are presented to evaluate the effectiveness of our architecture.

12.
To address the low efficiency and high cost of cloud task scheduling, a cloud task scheduling algorithm based on an improved K-means clustering algorithm is proposed. The virtual resources are partitioned into clusters by the improved clustering algorithm according to their hardware attributes; task preferences are then computed so that tasks with different preferences select resources from different clusters; and, taking scheduling cost into account, an improved Min-min algorithm is used to schedule the tasks within each cluster. The clustering algorithm itself is improved to address the problem that the random selection of initial cluster centers in K-means easily leads to local optima. Finally, experiments on the cloud simulation platform CloudSim show that, compared with scheduling algorithms without clustering, the proposed algorithm improves execution efficiency.
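A stdlib-only sketch of the pipeline described above: VMs are grouped by hardware attributes with a plain K-means, tasks are routed to the cluster matching their preference, and a Min-min rule schedules within each cluster. The paper's improved centre initialization and cost handling are not reproduced, and all figures are invented.

```python
# Sketch: (1) cluster VMs by hardware attributes with basic K-means,
# (2) send each task to the cluster matching its preference,
# (3) schedule inside each cluster with a Min-min rule.
import random
random.seed(0)

# VM hardware attributes: (cpu_mips, ram_gb). Values are invented.
vms = {"v1": (500, 2), "v2": (520, 2), "v3": (2000, 8), "v4": (1900, 8)}

def kmeans(points, k, iters=20):
    centres = random.sample(list(points.values()), k)
    for _ in range(iters):
        groups = {i: [] for i in range(k)}
        for name, p in points.items():
            i = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            groups[i].append(name)
        for i, members in groups.items():
            if members:                      # keep old centre if a group went empty
                pts = [points[m] for m in members]
                centres[i] = tuple(sum(x) / len(pts) for x in zip(*pts))
    return groups

clusters = kmeans(vms, k=2)                  # e.g. {0: ['v1', 'v2'], 1: ['v3', 'v4']}

# Tasks: length in MI and a preference for "cpu"-heavy or "light" VMs.
tasks = {"t1": (10000, "cpu"), "t2": (800, "light"), "t3": (12000, "cpu"), "t4": (600, "light")}

def pick_cluster(pref):
    # "cpu" tasks prefer the cluster with the higher mean MIPS, "light" the other.
    mean_mips = {i: sum(vms[m][0] for m in ms) / len(ms) for i, ms in clusters.items() if ms}
    return sorted(mean_mips, key=mean_mips.get, reverse=(pref == "cpu"))[0]

def min_min(task_ids, vm_ids):
    ready = {v: 0.0 for v in vm_ids}
    plan, remaining = {}, set(task_ids)
    while remaining:
        # pick the (task, vm) pair with the smallest completion time
        t, v = min(((t, v) for t in remaining for v in vm_ids),
                   key=lambda tv: ready[tv[1]] + tasks[tv[0]][0] / vms[tv[1]][0])
        ready[v] += tasks[t][0] / vms[v][0]
        plan[t] = v
        remaining.remove(t)
    return plan

by_cluster = {}
for t, (_, pref) in tasks.items():
    by_cluster.setdefault(pick_cluster(pref), []).append(t)
schedule = {}
for cid, ts in by_cluster.items():
    schedule.update(min_min(ts, clusters[cid]))
print(schedule)
```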

13.
Adding virtual machines to a cloud computing environment makes full use of the resource-sharing advantages of the cloud as well as its parallel and distributed computing capabilities. This work proposes a model system in which virtual machines can be dynamically added or removed on demand, which effectively reduces cloud usage fees and improves cost efficiency. Two resource scheduling algorithms applicable to this model system are studied: Adaptive First Come First Serve (AFCFS) and Largest Job First Served (LJFS). They aim to avoid unnecessary delays and maximize system performance, which is essential for resource scheduling algorithms in distributed systems. The simulation experiments use performance metrics such as response time, waiting time and arrival rate, together with a cost metric (the performance-to-cost ratio), to compare the performance efficiency of the algorithms and to validate the cost efficiency of the model system. The experimental results show that these algorithms can be applied efficiently in cloud computing environments and improve both performance efficiency and cost efficiency.

14.
Current cloud providers allocate their virtual machine (VM) instances through pricing algorithms or auction-like algorithms. However, most of these algorithms require static VM provisioning and cannot accurately predict user demand, leading to underutilized resources. To address this, a combinatorial-auction-based algorithm for dynamic VM provisioning and allocation is proposed, which takes users' demand for VMs into account when making provisioning decisions. The algorithm treats the available computing resources as a "fluid" resource that can be partitioned, according to user requests, into VM instances of different quantities and types. The allocation is then decided according to users' valuations until all resources are allocated. Simulation experiments on real workload data from the Parallel Workload Archive show that the approach yields higher revenue for the cloud provider and improves resource utilization.
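To illustrate the idea of carving a "fluid" capacity pool into requested VM bundles and allocating by valuation, here is a toy greedy winner-determination sketch ordered by bid density. The capacity, bundles and bids are invented, and the paper's actual auction and pricing rules are not reproduced.

```python
# Toy combinatorial-auction-style provisioning: capacity is a "fluid" pool of
# CPU cores carved into requested bundles, served greedily by bid density.
CAPACITY_CORES = 32

VM_CORES = {"small": 1, "medium": 2, "large": 4}
requests = [  # each request: a bundle of VM types/counts and the user's total bid
    {"user": "u1", "bundle": {"small": 4, "large": 2}, "bid": 30.0},
    {"user": "u2", "bundle": {"medium": 8},            "bid": 40.0},
    {"user": "u3", "bundle": {"large": 6},             "bid": 50.0},
]

def cores(bundle):
    return sum(VM_CORES[t] * n for t, n in bundle.items())

# Greedy winner determination by bid density (bid per core); ties by larger bid.
free = CAPACITY_CORES
winners = []
for req in sorted(requests, key=lambda r: (r["bid"] / cores(r["bundle"]), r["bid"]), reverse=True):
    need = cores(req["bundle"])
    if need <= free:
        free -= need
        winners.append(req["user"])

print("winners:", winners, "| unused cores:", free)
```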

15.
Cloud-based content delivery networks (CCDNs) have been developed as the next generation of content delivery networks (CDNs). In CCDNs, the cloud contributes the cost-effective, pay-as-you-go model and virtualization, while traditional CDNs contribute content replication. Delivering infrastructure as a service in a networked cloud computing environment requires mapping virtual resources to physical resources, in addition to traditional surrogate placement. In this paper, we develop a novel algorithm for virtual surrogate placement that combines the multiple knapsack and competitive facility location problems. Moreover, we provide new formulations and theory for this problem. Finally, we compare our algorithm with previous heuristics. Simulation results show that the proposed algorithm achieves significantly better results in terms of a decreased number of surrogate servers, decreased total path length between end users and surrogate servers, decreased average workload variance, and decreased CCDN deployment cost.

16.
Cloud computing has evolved into an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality of service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive, predictive, SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
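The contrast between reactive and predictive SLA-driven scaling can be illustrated with a toy rule: the reactive variant responds only to current utilization, while the predictive variant scales on a simple trend forecast standing in for the paper's autoregressive predictor. The workload trace, per-VM capacity and thresholds are all invented.

```python
# Toy contrast between reactive and predictive SLA-driven VM scaling.
CAPACITY_PER_VM = 100          # requests/s one VM can serve within the SLA
SCALE_OUT_UTIL = 0.8           # scale out when utilization would exceed 80%

def reactive_vms(load, vms):
    """Add/remove one VM based only on the current utilization."""
    util = load / (vms * CAPACITY_PER_VM)
    if util > SCALE_OUT_UTIL:
        return vms + 1
    if util < SCALE_OUT_UTIL / 2 and vms > 1:
        return vms - 1
    return vms

def predictive_vms(history, vms, phi=0.9):
    """Scale on a trend forecast standing in for an autoregressive predictor."""
    last, prev = history[-1], history[-2]
    forecast = last + phi * (last - prev)          # simple trend extrapolation
    util = forecast / (vms * CAPACITY_PER_VM)
    if util > SCALE_OUT_UTIL:
        return vms + 1
    if util < SCALE_OUT_UTIL / 2 and vms > 1:
        return vms - 1
    return vms

trace = [100, 120, 150, 190, 240, 300, 280, 220, 160, 120]   # requests/s
r_vms = p_vms = 2
for i in range(2, len(trace)):
    r_vms = reactive_vms(trace[i], r_vms)
    p_vms = predictive_vms(trace[: i + 1], p_vms)
    print(f"t={i} load={trace[i]:>3}  reactive={r_vms} VMs  predictive={p_vms} VMs")
```

On this rising-then-falling trace the predictive rule scales out one step earlier than the reactive one, which is the qualitative behaviour the abstract argues for.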

17.
We consider the problem of managing a hybrid computing infrastructure whose processing elements comprise in-house dedicated machines, virtual machines acquired on demand from a cloud computing provider through short-term reservation contracts, and virtual machines made available by the remote peers of a best-effort peer-to-peer (P2P) grid. Each of these resources has a different cost basis and different associated quality of service guarantees. The applications that run in this hybrid infrastructure are characterized by a utility function: the utility gained from completing an application depends on the time taken to execute it. We take a business-driven approach to managing this infrastructure, aiming to maximize the profit yielded, that is, the utility produced by the applications that are run minus the cost of the computing resources used to run them. We propose a heuristic to be used by a contract planner agent that establishes the contracts with the cloud computing provider so as to balance the cost of running an application against the utility obtained from its execution, with the goal of producing a high overall profit. Our analytical results show that the simple heuristic proposed achieves very high relative efficiency in the use of the hybrid infrastructure. We also demonstrate that the ability to estimate the grid's behaviour is an important condition for making contracts that allow such relative efficiency values to be achieved. On the other hand, our simulation results with realistic prediction errors show only a modest improvement in the profit achieved by the proposed heuristic when compared to a heuristic that does not consider the grid when planning contracts but still uses it, and another that is completely oblivious to the existence of the grid. This calls for the development of more accurate predictors of P2P grid availability, and more elaborate heuristics that can better deal with the several sources of non-determinism present in this hybrid infrastructure.

18.
The runtime of an evolutionary algorithm can be reduced by increasing the number of parallel evaluations. However, increasing the number of parallel evaluations can also waste computational effort, since there is a greater probability of creating solutions that do not contribute to convergence towards the global optimum. A trade-off therefore arises between runtime and computational effort for different levels of parallelization of an evolutionary algorithm. When computational effort is translated into cost, the trade-off can be restated as runtime versus cost. This trade-off is particularly relevant for cloud computing environments, where the computing resources can be exactly matched to the level of parallelization of the algorithm and the cost is proportional to the runtime and the number of instances used. This paper empirically investigates this trade-off for two different evolutionary algorithms, NSGA-II and differential evolution (DE), when applied to a multi-objective discrete-event simulation (DES) problem. Both generational and steady-state asynchronous versions of both algorithms are included. The approach is to perform parameter tuning on a simplified version of the DES model; a subset of the best configurations from each tuning experiment is then evaluated on a cloud computing platform. The results indicate that, for the included DES problem, the steady-state asynchronous version of each algorithm provides a better runtime-versus-cost trade-off than the generational versions, and that DE outperforms NSGA-II.

19.
Cloud computing can reduce power consumption by using virtualized computational resources to provision an application's computational resources on demand. Auto-scaling is an important cloud computing technique that dynamically allocates computational resources to applications to match their current loads precisely, thereby removing resources that would otherwise remain idle and waste power. This paper presents a model-driven engineering approach to optimizing the configuration, energy consumption, and operating cost of cloud auto-scaling infrastructure to create greener computing environments that reduce emissions resulting from superfluous idle resources. The paper provides four contributions to the study of model-driven configuration of cloud auto-scaling infrastructure by (1) explaining how virtual machine configurations can be captured in feature models, (2) describing how these models can be transformed into constraint satisfaction problems (CSPs) for configuration and energy consumption optimization, (3) showing how optimal auto-scaling configurations can be derived from these CSPs with a constraint solver, and (4) presenting a case study showing the energy consumption/cost reduction produced by this model-driven approach.
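A toy version of the feature-model-to-CSP idea: enumerate auto-scaling configurations, discard those violating the constraints, and keep the lowest-energy one. The features, constraints and power figures are invented, and brute-force enumeration stands in for the constraint solver used in the paper.

```python
# Toy "feature model -> constraints -> optimal configuration" search:
# enumerate configurations, filter by constraints, minimize energy.
from itertools import product

vm_sizes = ["small", "medium", "large"]                    # feature: VM size
replica_counts = [1, 2, 3, 4]                              # feature: number of replicas
capacity = {"small": 50, "medium": 120, "large": 260}      # requests/s per replica
watts = {"small": 40, "medium": 90, "large": 180}          # power draw per replica

REQUIRED_THROUGHPUT = 300      # requests/s the configuration must sustain
MAX_REPLICAS_LARGE = 2         # an example cross-tree constraint from the feature model

def satisfies(size, n):
    if capacity[size] * n < REQUIRED_THROUGHPUT:
        return False
    if size == "large" and n > MAX_REPLICAS_LARGE:
        return False
    return True

valid = [(s, n) for s, n in product(vm_sizes, replica_counts) if satisfies(s, n)]
best = min(valid, key=lambda cfg: watts[cfg[0]] * cfg[1])
print("valid configurations:", valid)
print("lowest-energy configuration:", best, "->", watts[best[0]] * best[1], "W")
```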

20.
Nowadays, reducing the execution cost of workflows while meeting deadline constraints is one of the main problems in cloud workflow scheduling. The three-step list scheduling algorithm can address this problem effectively, but it can only produce static sub-deadlines during its deadline distribution phase. To make it easier for users to deploy workflow tasks, cloud providers offer three instance types, among which spot instances have a very large price advantage. To address these issues, a cost-optimization workflow scheduling algorithm with dynamic deadline distribution (S-DTDA) is proposed. The algorithm uses particle swarm optimization to distribute deadlines dynamically, compensating for the shortcoming of the three-step list scheduling algorithm. In the virtual machine selection phase, it adds spot instances to the candidate resources, which greatly reduces execution cost. Experimental results show that, compared with other classical algorithms, the proposed algorithm has clear advantages in success rate and execution cost. In summary, the S-DTDA algorithm can effectively solve the deadline-constrained cost optimization problem in workflow scheduling.
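A sketch of the deadline-distribution idea for a simple chain workflow: candidate splits of the overall deadline into per-task sub-deadlines are sampled, each task then takes the cheapest instance type fast enough for its sub-deadline, and the cheapest feasible split wins. A random search stands in for the particle swarm optimizer, spot-instance unreliability is ignored, and all prices and runtimes are invented.

```python
# Sketch of dynamic deadline distribution for a chain workflow t1 -> t2 -> t3.
# A random search over deadline splits stands in for the PSO used in S-DTDA.
import random
random.seed(3)

tasks = {"t1": 100.0, "t2": 300.0, "t3": 200.0}   # work units per task
DEADLINE = 60.0                                    # seconds for the whole workflow

# Instance types: (speed in work units/s, price in $/s). "spot" is cheap; its
# unreliability is not modelled here.
instances = {"on_demand": (10.0, 0.10), "reserved": (8.0, 0.07), "spot": (12.0, 0.03)}

def cheapest_for(work, sub_deadline):
    feasible = [(price * work / speed, name)
                for name, (speed, price) in instances.items()
                if work / speed <= sub_deadline]
    return min(feasible) if feasible else None

def plan_for(split):
    """split: fractions of DEADLINE given to each task (sums to 1)."""
    total_cost, plan = 0.0, {}
    for (task, work), frac in zip(tasks.items(), split):
        choice = cheapest_for(work, frac * DEADLINE)
        if choice is None:
            return None, float("inf")              # this split misses a sub-deadline
        cost, name = choice
        plan[task] = name
        total_cost += cost
    return plan, total_cost

best_plan, best_cost = None, float("inf")
for _ in range(2000):                              # random search over splits
    raw = [random.random() for _ in tasks]
    split = [r / sum(raw) for r in raw]
    plan, cost = plan_for(split)
    if cost < best_cost:
        best_plan, best_cost = plan, cost
print(best_plan, round(best_cost, 2))
```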
