20 similar documents found (search time: 15 ms)
1.
Recently, with the improvement of Cloud systems technologies and the essential advantages they can provide, such as availability, scalability, and cost savings, many domains in the IT industry are moving their business to the Cloud. To fit the computing demands of this trend, along with today's unpredictable applications (e.g. social networks, media content), Cloud systems require rapid resource changes. As a result, workload management in a virtualized environment becomes a complex task. In this paper we propose a new proactive workload management model for virtualized resources that inspects the workload behavior of the running Virtual Machines and selects an appropriate scheduling and resource-consolidation scheme in order to improve the system's efficiency, utilization, and throughput. We implemented our model by modifying the Xen Cloud Platform and then tested its performance through different representative benchmarks. The results show that the proactive model can decrease the average response time remarkably.
2.
《Journal of Systems Architecture》2014,60(9):726-740
Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advancement in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.
3.
《Journal of Systems Architecture》2014,60(9):757-769
Applications are increasingly being deployed in the cloud due to benefits stemming from economy of scale, scalability, flexibility and a utility-based pricing model. Although most cloud-based applications have hitherto been enterprise-style, there is an emerging need for hosting real-time streaming applications in the cloud that demand both high availability and low latency. Contemporary cloud computing research has seldom focused on solutions that provide both high availability and real-time assurance to these applications in a way that also optimizes resource consumption in data centers, which is a key consideration for cloud providers. This paper makes three contributions to address this dual challenge. First, it describes an architecture for a fault-tolerant framework that can be used to automatically deploy replicas of virtual machines in data centers in a way that optimizes resources while assuring availability and responsiveness. Second, it describes the design of a pluggable framework within the fault-tolerant architecture that enables plugging in different placement algorithms for VM replica deployment. Third, it illustrates the design of a framework for real-time dissemination of resource utilization information using a real-time publish/subscribe framework, which is required by the replica selection and placement framework. Experimental results using a case study that involves a specific replica placement algorithm are presented to evaluate the effectiveness of our architecture.
4.
Hwanju Kim, Hyeontaek Lim, Jinkyu Jeong, Seungryoul Maeng 《Journal of Parallel and Distributed Computing》2011,71(6):758-773
Consolidated environments are progressively accommodating diverse and unpredictable workloads in conjunction with virtual desktop infrastructure and cloud computing. Unpredictable workloads, however, aggravate the semantic gap between the virtual machine monitor and guest operating systems, leading to inefficient resource management. In particular, CPU management for virtual machines has a critical impact on I/O performance in cases where the virtual machine monitor is agnostic about the internal workloads of each virtual machine. This paper presents virtual machine scheduling techniques for transparently bridging the semantic gap that is a result of consolidated workloads. To enable us to achieve this goal, we ensure that the virtual machine monitor is aware of task-level I/O-boundedness inside a virtual machine using inference techniques, thereby improving I/O performance without compromising CPU fairness. In addition, we address performance anomalies arising from the indirect use of I/O devices via a driver virtual machine at the scheduling level. The proposed techniques are implemented on the Xen virtual machine monitor and evaluated with micro-benchmarks and real workloads on Linux and Windows guest operating systems.
5.
An innovative workflow mapping mechanism for Grids in the frame of Quality of Service
Dimosthenis Konstantinos Andreas Antonis Theodora 《Future Generation Computer Systems》2008,24(6):498-511
The advent of Grid environments made feasible the solution of computationally intensive problems in a reliable and cost-effective way. As workflow systems carry out more complex and mission-critical applications, Quality of Service (QoS) analysis serves to ensure that each application meets user requirements. In that frame, we present a novel algorithm which allows the mapping of workflow processes to Grid-provided services, assuring at the same time end-to-end provision of QoS based on user-defined parameters and preferences. We also demonstrate the operation of the implemented algorithm and evaluate its effectiveness using a Grid scenario based on a 3D image rendering application.
6.
As the size and complexity of Cloud systems increase, the manual management of these solutions becomes a challenging issue, as more personnel, resources and expertise are needed. Service Level Agreement (SLA)-aware autonomic cloud solutions enable the management of large-scale infrastructures while supporting multiple dynamic requirements from users. This paper contributes to these topics through the introduction of Cloudcompaas, an SLA-aware PaaS Cloud platform that manages the complete resource lifecycle. This platform features an extension of the SLA specification WS-Agreement, tailored to the specific needs of Cloud Computing. In particular, Cloudcompaas provides Cloud providers with a generic SLA model to deal with higher-level metrics, closer to end-user perception, and with flexible composition of the requirements of multiple actors in the computational scene. Moreover, Cloudcompaas provides a framework for general Cloud computing applications that can be dynamically adapted to correct QoS violations by using the elasticity features of Cloud infrastructures. The effectiveness of this solution is demonstrated through a simulation that considers several realistic workload profiles, in which Cloudcompaas achieves minimum cost and maximum efficiency under highly heterogeneous utilization patterns.
7.
Enis Afgan, Dannon Baker, the Galaxy Team, Anton Nekrutenko, James Taylor 《Concurrency and Computation》2012,24(12):1349-1361
Modern scientific research has been revolutionized by the availability of powerful and flexible computational infrastructure. Virtualization has made it possible to acquire computational resources on demand. Establishing and enabling use of these environments is essential, but their widespread adoption will only succeed if they are transparently usable. Requiring changes to applications being deployed or requiring users to change how they utilize those applications represent barriers to the infrastructure acceptance. The problem lies in the process of deploying applications so that they can take advantage of the elasticity of the environment and deliver it transparently to users. Here, we describe a reference model for deploying applications into virtualized environments. The model is rooted in the low-level components common to a range of virtualized environments and it describes how to compose those otherwise dispersed components into a coherent unit. Use of the model enables applications to be deployed into the new environment without any modifications, it imposes minimal overhead on management of the infrastructure required to run the application, and yields a set of higher-level services as a byproduct of the component organization and the underlying infrastructure. We provide a fully functional sample application deployment and implement a framework for managing the overall application deployment. Copyright © 2011 John Wiley & Sons, Ltd.
8.
Sonia Ben Mokhtar, Nikolaos Georgantas, Valérie Issarny 《Journal of Systems and Software》2007,80(12):1941-1955
Pervasive computing environments are populated with networked services, i.e., autonomous software entities providing a number of functionalities. One of the most challenging objectives to be achieved within these environments is to assist users in realizing tasks that integrate, on the fly, the functionalities of the networked services available in the current pervasive environment. Towards this purpose, we present COCOA, a solution for COnversation-based service COmposition in pervAsive computing environments with QoS support. COCOA provides COCOA-L, an OWL-S based language for the semantic, QoS-aware specification of services and tasks, which further allows the specification of service and task conversations. Moreover, COCOA provides two mechanisms: COCOA-SD for QoS-aware semantic service discovery and COCOA-CI for the QoS-aware integration of service conversations towards the realization of the user task's conversation. The distinctive feature of COCOA is its ability to integrate on the fly the conversations of networked services to realize the conversation of the user task, while meeting the QoS requirements of user tasks. Thereby, COCOA allows the dynamic realization of user tasks according to the specifics of the pervasive computing environment in terms of available services, and by enforcing valid service consumption.
9.
The objective of this study is to determine the right cycle management policy to service periodic soft real-time disk retrieval. Cycle-based disk scheduling provides an effective way of exploiting the disk bandwidth and meeting the soft real-time requirements of individual I/O requests. It is widely used in real-time retrieval of multimedia data blocks. Interestingly, the issue of cycle management with respect to dynamically changing workloads has not received proper attention despite its significant engineering implications on system behavior. When the cycle length remains constant regardless of varying I/O workload intensity, it may cause under-utilization of disk bandwidth capacity or unnecessarily long service startup latency. In this work, we present a novel cycle management policy which dynamically adapts to the varying workload. We develop a pre-buffering policy which makes the adaptive cycle management policy robust against starvation. The proposed approach elaborately determines the cycle length and the respective buffer size for pre-buffering. The performance study reveals a number of valuable observations. Adaptive cycle length management with incremental pre-buffering exhibits superior performance to the other cycle management policies in startup latency, jitter and buffer requirements. It is found that servicing low-playback-rate content, such as video for 3G cellular networks, requires rather different treatment in disk subsystem capacity planning and call admission criteria, because a relatively significant fraction of I/O latency is taken up by plain disk overhead.
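The cycle-length adaptation described in this abstract can be illustrated with a minimal sketch: in one cycle of length T, the disk must transfer one cycle's worth of data for every active stream plus pay one seek per stream, which bounds the smallest feasible T. The formulation, function name, and parameters are illustrative assumptions, not the paper's actual policy.

```python
def min_cycle_length(playback_rates, disk_rate, seek_overhead):
    """Smallest feasible cycle length T (seconds) for cycle-based scheduling.

    Each stream i needs playback_rates[i] * T bytes per cycle, costing
    (playback_rates[i] * T) / disk_rate seconds of transfer plus one seek.
    Feasibility: sum_i (r_i * T) / d + n * seek_overhead <= T, which gives
    T >= n * seek_overhead / (1 - sum_i r_i / d).
    """
    util = sum(playback_rates) / disk_rate  # fraction of the cycle spent transferring
    if util >= 1.0:
        raise ValueError("aggregate playback rate exceeds disk bandwidth")
    return len(playback_rates) * seek_overhead / (1.0 - util)
```

An adaptive policy would recompute this bound as streams join or leave, growing or shrinking the cycle (and the pre-buffer) accordingly.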
10.
Antonio Celesti, Maria Fazio, Massimo Villari, Antonio Puliafito 《Future Generation Computer Systems》2012,28(1):85-93
Cloud federation offers plenty of new services and business opportunities. However, many advanced services cannot be implemented in the real Cloud market due to several issues that have not been overcome yet. One of these concerns is the transfer of huge amounts of data among federated Clouds. This paper aims to overcome such a limitation by proposing an approach based on satellite communications. By comparing data-delivery performance on the Internet and on satellite systems, it is evident that satellite technologies are ripe enough to be competitive against systems with a wired infrastructure. Thus, we propose to make use of satellite transmission to implement fast delivery of huge amounts of data. Through the discussion of a use case, where a WEB TV company offers a streaming service, we show how to practically apply the proposed strategy in a real scenario, specifying the involvement of Cloud providers, Cloud users, satellite companies and end-user clients.
11.
Service providers offer access to resources and services in distributed environments such as Grids and Clouds through formal Service Level Agreements (SLA), and need well-balanced infrastructures so that they can maximise the Quality of Service (QoS) they offer and minimise the number of SLA violations. We propose a mathematical model to predict the risk of failure of resources in such environments using a discrete-time analytical model driven by reliability functions fitted to observed data. The model relies on the resource historical data so as to predict the risk of failure for a given time interval. The model is evaluated by comparing the predicted risk of failure with the observed risk of failure, and is shown to accurately predict the resources' risk of failure, allowing a service provider to selectively choose which SLA requests to accept.
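The core idea, a reliability function fitted to historical data and queried for a conditional failure probability over a time interval, can be sketched as follows. The choice of a Weibull reliability function is an assumption for illustration; the paper's actual fitted functions and discrete-time model may differ.

```python
import math

def risk_of_failure(t, dt, scale, shape):
    """Probability that a resource which has survived to time t fails in (t, t+dt],
    assuming a Weibull reliability function R(t) = exp(-(t/scale)**shape).

    The conditional risk is 1 - R(t+dt) / R(t); for shape == 1 this reduces
    to the memoryless exponential case.
    """
    def R(x):
        return math.exp(-((x / scale) ** shape))
    return 1.0 - R(t + dt) / R(t)

def accept_sla(t, dt, scale, shape, risk_threshold=0.05):
    """Accept an SLA request covering (t, t+dt] only if the predicted risk is low."""
    return risk_of_failure(t, dt, scale, shape) <= risk_threshold
```

With `shape > 1` (wear-out), the same interval carries more risk the longer the resource has been running, which is exactly what lets a provider rank competing SLA requests.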
12.
Cloud infrastructures consisting of heterogeneous resources are increasingly being utilized for hosting large-scale distributed applications from diverse users with distinct needs. The multifarious cloud applications impose varied demands for computational resources along with a multitude of performance implications. Successful hosting of cloud applications necessitates service providers to take into account the heterogeneity existing in the behavior of users, applications and system resources while respecting the user's agreed Quality of Service (QoS) criteria. In this work, we propose a QoS-Aware Resource Elasticity (QRE) framework that allows service providers to make an assessment of the application behavior and develop mechanisms that enable dynamic scalability of cloud resources hosting the application components. Experimental results conducted on the Amazon EC2 cloud clearly demonstrate the effectiveness of our approach while complying with the agreed QoS attributes of users.
13.
Automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. The paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
14.
V. Deora, J. Shao, W. A. Gray, N. J. Fiddian 《International Journal on Digital Libraries》2006,6(3):260-269
The ability to gauge quality of service is critical if we are to realise the potential of the service-oriented computing paradigm. Many techniques have been proposed for calculating the quality of a service, and they do so typically by collecting quality ratings from the users of the service, then combining them in one way or another to derive the quality of the service. We argue that collecting quality ratings alone from the users is not sufficient for deriving a reliable and accurate quality measure for a service. This is because different users often have different expectations on the quality of a service and their ratings tend to be closely related to these expectations, i.e. how their expectations are met. In this paper, we propose a quality of service assessment model based on user expectations. That is, we collect expectations as well as ratings from the users of a service, then calculate the quality of the service using only the ratings that have similar expectations. Our experiments show that the method we propose here can result in more meaningful and reliable quality ratings for services.
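The expectation-based assessment described above can be sketched in a few lines: collect (expectation, rating) pairs and average only the ratings whose expectations are close to the requesting user's. The numeric scale, similarity test, and fallback behaviour are illustrative assumptions, not the paper's exact model.

```python
def assess_quality(reports, user_expectation, tol=1.0):
    """Derive a quality score for a user with a given expectation.

    reports: list of (expectation, rating) pairs collected from past users,
    both on the same numeric scale. Only ratings from users whose stated
    expectation is within `tol` of `user_expectation` are averaged; if no
    such user exists, fall back to the plain average of all ratings.
    """
    similar = [r for e, r in reports if abs(e - user_expectation) <= tol]
    if not similar:  # no comparable users: degrade to the naive estimate
        similar = [r for _, r in reports]
    return sum(similar) / len(similar)
```

The point of the filter is that a low rating from a user with very high expectations should not drag down the score presented to a user with modest expectations.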
15.
Cloud computing provides scalable computing and storage resources over the Internet. These scalable resources can be dynamically organized as many virtual machines (VMs) to run user applications on a pay-per-use basis. The required resources of a VM are sliced from a physical machine (PM) in the cloud computing system. A PM may hold one or more VMs. When a cloud provider would like to create a number of VMs, the main issue of concern is the VM placement problem: how to place these VMs at appropriate PMs to provision their required resources. However, if two or more VMs are placed at the same PM, there exists a certain degree of interference between these VMs due to sharing non-sliceable resources, e.g. I/O resources. This phenomenon is called VM interference. VM interference affects the performance of applications running in VMs, especially delay-sensitive applications, which have quality of service (QoS) requirements on their data access delays. This paper investigates how to integrate QoS awareness with virtualization in cloud computing systems, namely the QoS-aware VM placement (QAVMP) problem. In addition to fully exploiting the resources of PMs, the QAVMP problem considers the QoS requirements of user applications and the reduction of VM interference. Therefore, the QAVMP problem involves three factors: resource utilization, application QoS, and VM interference. We first formulate the QAVMP problem as an Integer Linear Programming (ILP) model by integrating the three factors as the profit of the cloud provider. Due to the computational complexity of the ILP model, we propose a polynomial-time heuristic algorithm to efficiently solve the QAVMP problem. In the heuristic algorithm, a bipartite graph is modeled to represent all the possible placement relationships between VMs and PMs. Then, the VMs are gradually placed at their preferable PMs to maximize the profit of the cloud provider as much as possible. Finally, simulation experiments are performed to demonstrate the effectiveness of the proposed heuristic algorithm by comparing it with other VM placement algorithms.
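The abstract does not specify the heuristic itself, but a greedy placement that trades a VM's isolated value against interference penalties from co-located VMs, in the spirit of the QAVMP profit formulation, might look like this sketch. All function names, the profit form, and the single-dimension capacity model are assumptions.

```python
def greedy_place(vms, pms, value, interference):
    """Greedy sketch of profit-driven VM placement.

    vms: dict vm_name -> resource demand (single sliceable dimension).
    pms: dict pm_name -> capacity in the same units.
    value(vm, pm): profit of running vm on pm in isolation (QoS term).
    interference(vm, other): penalty when vm shares a PM with `other`.

    Places the largest VMs first; each VM goes to the feasible PM with the
    best marginal profit (isolated value minus interference with VMs
    already hosted there).
    """
    load = {p: 0.0 for p in pms}
    hosted = {p: [] for p in pms}
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        best, best_profit = None, float("-inf")
        for p, cap in pms.items():
            if load[p] + demand > cap:  # capacity (sliceable resource) check
                continue
            profit = value(vm, p) - sum(interference(vm, o) for o in hosted[p])
            if profit > best_profit:
                best, best_profit = p, profit
        if best is None:
            raise RuntimeError(f"no PM can host {vm}")
        placement[vm] = best
        load[best] += demand
        hosted[best].append(vm)
    return placement
```

With a large interference penalty, delay-sensitive VMs spread across PMs even when a single PM has spare capacity, which is the qualitative behaviour the QAVMP formulation rewards.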
16.
The G-EXEC subset, described in terms of data-base management software for third-world geologists, retains all features of the full G-EXEC system including generalization and system integration. The system is composed of selected programs for input-output operations, data vetting and validation, retrieval, and file manipulation. Implementation of the subset on the Syracuse University IBM 370/155 is described and additional planned phase-by-phase upgrades are annotated.
17.
Haibin Wang, Yan-Qing Zhang, Rajshekhar Sunderraman 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2006,10(11):1021-1029
Web services technology is critical for the success of business integration and other application fields such as bioinformatics. However, there are two challenges facing the practicality of Web services: (a) efficient location of the Web service registries that contain the requested Web services and (b) efficient retrieval of the requested services from these registries with high quality of service (QoS). The main reason for this problem is that current Web services technology is not semantic-oriented. Several proposals have been made to add semantics to Web services to facilitate discovery and composition of relevant Web services. Such proposals are referred to as semantic Web services (SWS). However, most of these proposals do not address the second problem of retrieving Web services with high QoS. In this paper, we propose a framework called the soft semantic Web services agent (soft SWS agent) for providing high-QoS semantic Web services using soft computing methodology. Since different application domains have different requirements for QoS, it is impractical to use classical mathematical modeling methods to evaluate the QoS of semantic Web services. We use fuzzy neural networks with Genetic Algorithms (GA) as our study case. Simulation results show that the soft computing methodology is practical for handling fuzzy and uncertain QoS metrics effectively.
18.
In this work, we develop an energy-aware disk scheduling algorithm for soft real-time I/O. Energy consumption is one of the major factors which bar the adoption of hard disks in mobile environments. Heat dissipation of large-scale storage systems also calls for an energy-aware scheduling technique to further increase the storage density. The basic idea in this work is to properly determine the I/O burst size so that the device can be in standby mode between consecutive I/O bursts while still satisfying the soft real-time requirement. We develop an elaborate model which incorporates the energy consumption characteristics and the overhead of mode transitions in determining the appropriate I/O burst size and the respective disk operating schedule. The efficacy of an energy-aware disk scheduling algorithm relies greatly not only on the scheduling algorithm itself but also on various operating system and device firmware related concerns. It is crucial that the various operating system level and device level features be properly addressed within the disk scheduling framework. Our energy-aware disk scheduling algorithm successfully addresses a number of outstanding issues. First, we examine the effect of the OS and hard disk firmware level prefetch policy and incorporate its effect in our disk scheduling framework. Second, our energy-aware scheduling framework can allocate a certain fraction of disk bandwidth to handle sporadically arriving non-real-time I/Os. Third, we examine the relationship between the lock granularity of the buffer management and energy consumption. We develop prototype software with the energy-aware scheduling algorithm. In our experiments, the proposed algorithm reduces energy consumption to one fourth. However, the energy-aware disk scheduling algorithm increases the buffer requirement significantly, e.g., from 4 to 140 KByte. We carefully argue that the buffer overhead is still justifiable given the cost of DRAM chips and the importance of energy management in modern mobile devices. The result of our work not only provides an energy-efficient scheduling algorithm but also provides an important guideline in capacity planning of future energy-efficient mobile devices.
This paper is funded by KOSEF through Statistical Research Paper for Complex System at Seoul National University.
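The burst-size determination at the core of this abstract can be illustrated with a simplified model: a burst must buy enough playback time that the idle gap between consecutive bursts exceeds the disk's standby break-even time, or spinning down costs more energy than it saves. This sketch ignores the mode-transition and prefetch details the paper models; the names and the formula's simplifications are assumptions.

```python
def min_burst_bytes(playback_rate, disk_rate, breakeven_s):
    """Smallest I/O burst (bytes) whose inter-burst gap lets the disk stand by.

    A burst of B bytes sustains playback for B / playback_rate seconds and
    takes B / disk_rate seconds to fetch, leaving an idle gap of
    B * (1/playback_rate - 1/disk_rate) seconds. Requiring the gap to be at
    least `breakeven_s` (standby break-even time) gives the bound below.
    The soft real-time side is covered by buffering B bytes ahead of playback.
    """
    if disk_rate <= playback_rate:
        raise ValueError("disk cannot keep up with playback")
    return breakeven_s / (1.0 / playback_rate - 1.0 / disk_rate)
```

The B bytes of pre-buffered data are exactly the extra buffer requirement the abstract reports (e.g. growing from 4 to 140 KByte): a longer break-even time or a faster playback rate forces a larger burst and hence a larger buffer.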
19.
Christoph Riedl, Tilo Böhmann, Michael Rosemann, Helmut Krcmar 《Information Systems and E-Business Management》2009,7(2):199-221
Service-oriented architectures and Web services have matured and become more widely accepted and used by industry. This growing adoption increased the demand for new ways of using Web service technology. Users start re-combining and mediating other providers' services in ways that have not been anticipated by their original provider. Within organisations and cross-organisational communities, discoverable services are organised in repositories providing convenient access to adaptable end-to-end business processes. This idea is captured in the term Service Ecosystem. This paper addresses the question of how quality management can be performed in such service ecosystems. Service quality management is a key challenge when services are composed of a dynamic set of heterogeneous sub-services from different service providers. This paper contributes to this important area by developing a reference model of quality management in service ecosystems. We illustrate the application of the reference model in an exploratory case study. With this case study, we show how the reference model helps to derive requirements for the implementation and support of quality management in an exemplary service ecosystem in public administration.
20.
This paper presents a novel way to control power consumption and performance in a multi-tier server cluster designed for e-commerce applications. The requests submitted to these server systems have a soft real-time constraint, given that although some can miss a pre-defined deadline, the system can still meet an agreed-upon performance level. Clusters of servers are extensively used nowadays and, with the steep increase in the total power consumption of these systems, economic and environmental problems have been raised. We present ways of decreasing power expenditure, and show the implementation of a SISO (Single Input Single Output) controller that acts on the speed of all server nodes capable of dynamic voltage and frequency scaling (DVFS), with QoS (Quality of Service) as the reference setpoint. For QoS, we use the request tardiness, defined as the ratio of the end-to-end response time to the deadline, rather than the usual metric that counts missed deadlines. We adjust the servers' operating frequencies to guarantee that a pre-defined p-quantile of the tardiness probability distribution of the requests meets the deadlines. Doing so, we can guarantee that the QoS will be statistically p. We test this technique in a prototype multi-tier cluster, using open software, commodity hardware, and a standardized e-commerce application to generate a workload close to that of the real world. The main contribution of this paper is to empirically show the robustness of the SISO controller, presenting a sensitivity analysis of its parameters. Experimental results show that our implementation outperforms other published state-of-the-art cluster implementations.
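The described control loop (tardiness p-quantile in, DVFS frequency out) can be sketched as a simple integral controller driving the quantile toward 1.0, i.e. the p-quantile of requests just meeting their deadlines. The gain, the empirical quantile computation, and the normalized frequency range are illustrative assumptions, not the parameters used in the paper.

```python
class TardinessController:
    """Integral SISO controller: adjusts a cluster-wide normalized DVFS
    frequency so that the p-quantile of request tardiness
    (response time / deadline) tracks the setpoint 1.0."""

    def __init__(self, p=0.95, gain=0.5, f_min=0.4, f_max=1.0):
        self.p, self.gain = p, gain
        self.f_min, self.f_max = f_min, f_max
        self.freq = f_max  # start at full speed; normalized frequency

    def step(self, tardiness_samples):
        """One control period: observe tardiness samples, return new frequency."""
        s = sorted(tardiness_samples)
        q = s[min(int(self.p * len(s)), len(s) - 1)]  # empirical p-quantile
        error = q - 1.0  # > 0: deadlines at risk, speed up; < 0: slow down
        self.freq = min(self.f_max, max(self.f_min, self.freq + self.gain * error))
        return self.freq
```

When the observed quantile sits below 1.0 the controller lowers the frequency to save power; when load rises and tardiness climbs past 1.0 it raises the frequency again, clamped to the DVFS range.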