Similar Literature
20 similar records found (search took 15 ms).
1.
2.
3.
The growth of computer and networking technologies over the past decades has established cloud computing as a new paradigm in information technology. Cloud computing promises to deliver cost-effective services by running workloads in large-scale data centers consisting of thousands of virtualized servers. The main challenge with a cloud platform is its unpredictable performance. A possible solution is a load-balancing mechanism that distributes the workload effectively across the servers of the data center. In this paper, we present a distributed and scalable load-balancing mechanism for cloud computing based on game theory. The mechanism is self-organized and relies only on local information for load balancing. We prove that the mechanism converges and that its inefficiency is bounded. Simulation results show that the generated placement of workload on servers provides an efficient, scalable, and reliable load-balancing scheme for the cloud data center. Copyright © 2016 John Wiley & Sons, Ltd.
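The abstract above does not give the mechanism itself, but the game-theoretic idea it describes can be illustrated with a toy best-response dynamic (all names and the migration rule are our assumptions, not the paper's): each job greedily moves to the least-loaded server whenever that strictly improves its own situation, and the process stops at a pure Nash equilibrium.

```python
# Illustrative sketch only, not the paper's mechanism: best-response load
# balancing. Each job migrates to the least-loaded server while that move
# strictly lowers the load it experiences; termination is an equilibrium.
def best_response_balance(loads, jobs):
    """loads: per-server load list; jobs: list of (job_id, server_idx, weight)."""
    placement = {}
    for job_id, server, weight in jobs:
        placement[job_id] = server
        loads[server] += weight
    changed = True
    while changed:
        changed = False
        for job_id, _, weight in jobs:
            cur = placement[job_id]
            best = min(range(len(loads)), key=lambda s: loads[s])
            # Move only if the job strictly improves its own latency proxy.
            if loads[best] + weight < loads[cur]:
                loads[cur] -= weight
                loads[best] += weight
                placement[job_id] = best
                changed = True
    return placement, loads
```

Because every move strictly decreases the moving job's experienced load, the dynamic cannot cycle, which mirrors the convergence claim in the abstract.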

4.
The cloud was originally designed to provide general-purpose computing using commodity hardware, and its focus was on increasing resource consolidation as a means to lower cost. Hence, it was not particularly adapted to the requirements of multimedia applications that are highly latency sensitive and require specialized hardware, such as graphical processing units. Existing cloud infrastructure is dimensioned to serve general-purpose workloads and to meet end-user requirements by providing high throughput. In this paper, we investigate the effectiveness of using this general-purpose infrastructure for serving latency-sensitive multimedia applications. In particular, we examine on-demand gaming, also known as cloud gaming, which has the potential to change the video game industry. We demonstrate through a large-scale measurement study that the existing cloud infrastructure is unable to meet the strict latency requirements necessary for acceptable on-demand game play. Furthermore, we investigate the effectiveness of incorporating edge servers, which are servers located near end-users (e.g., CDN servers), to improve end-user coverage. Specifically, we examine an edge-server-only infrastructure and a hybrid infrastructure that uses edge servers in addition to the cloud. We find that a hybrid infrastructure significantly increases the number of end-users served. However, the number of satisfied end-users in a hybrid deployment largely depends on the deployment parameters. Therefore, we evaluate various strategies that determine two such parameters, namely, the location of on-demand gaming servers and the games that are placed on these servers. We find that, through careful selection of both the on-demand gaming servers and the games to place on them, we significantly increase the number of end-users served over the basic random selection and placement strategies.
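The abstract compares placement strategies against random selection without spelling them out; a common baseline for this kind of coverage problem is a greedy heuristic, sketched below under our own assumptions (the candidate/coverage model is hypothetical, not taken from the paper): repeatedly deploy the edge location that reaches the most still-unserved users.

```python
# Hypothetical sketch of greedy edge-server selection: at each step, pick
# the candidate location covering the most users not yet served.
def greedy_place(candidates, budget):
    """candidates: {location: set of reachable user ids}; budget: servers to deploy."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] - covered), default=None)
        if best is None or not (candidates[best] - covered):
            break  # no candidate adds coverage
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered
```

Greedy maximum-coverage of this form carries a classic (1 - 1/e) approximation guarantee, which is why it is a natural reference point for server-placement studies.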

5.
An efficient matching algorithm is one of the key research problems in large-scale content-based publish/subscribe systems. This paper establishes a subscription language and an event model and proposes an efficient, practical content-based matching algorithm focused on the equality operator ("="). The algorithm makes full use of multidimensional indexing and AVL search trees to accelerate queries. It substantially outperforms other commonly used algorithms, scales well, and is suitable for large-scale distributed content-based publish/subscribe systems.
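The equality-only matching described above is commonly realized with an inverted index keyed on (attribute, value) pairs; the sketch below shows that idea (a simplified stand-in: the paper's actual structure uses multidimensional indexing and AVL trees, which we do not reproduce here). An event touches only subscriptions sharing at least one pair, and a subscription matches when all of its predicates are hit.

```python
# Simplified sketch of index-accelerated equality matching (not the paper's
# exact data structure): subscriptions with "attr = value" predicates are
# stored in an inverted index keyed by (attr, value).
from collections import defaultdict

class EqualityMatcher:
    def __init__(self):
        self.index = defaultdict(set)   # (attr, value) -> subscription ids
        self.pred_count = {}            # subscription id -> predicate count

    def subscribe(self, sub_id, predicates):
        self.pred_count[sub_id] = len(predicates)
        for attr, value in predicates.items():
            self.index[(attr, value)].add(sub_id)

    def match(self, event):
        hits = defaultdict(int)
        for attr, value in event.items():
            for sub_id in self.index.get((attr, value), ()):
                hits[sub_id] += 1
        # A subscription matches iff every one of its predicates was satisfied.
        return {s for s, n in hits.items() if n == self.pred_count[s]}
```

The cost per event is proportional to the number of matching (attribute, value) pairs rather than the total number of subscriptions, which is the source of the speedup such indexes provide.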

6.
7.
《Computer Networks》2000,32(3):261-275
Owing to the fast growth of the World Wide Web (WWW), web traffic has become a major component of Internet traffic. Consequently, reducing document retrieval latency on the WWW has become more and more important. The latency can be reduced in two ways: reduction of network delay and improvement of web servers' throughput. Our research aims at improving a web server's throughput by keeping a memory cache in the web server's address space. In this paper, we focus on the design and implementation of a memory cache scheme. We propose a novel web cache management policy named the adaptive-level policy, which caches either the whole file content or only a portion of it, according to the file size. The experimental results show three things. First, our memory cache is beneficial: under our experimental workloads, the throughput improvement reaches 32.7%. Second, our cache management policy is suitable for current web traffic. Third, with the increasing popularity of multimedia files, our policy will outperform others currently used on the WWW.
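The size-dependent decision at the heart of the adaptive-level policy can be sketched in a few lines (both thresholds below are our assumptions for illustration; the paper chooses its own values): small files are cached whole, while large files contribute only a fixed-size leading portion.

```python
# Illustrative sketch of an adaptive-level decision rule; the two
# thresholds are assumed values, not the paper's.
PREFIX_BYTES = 64 * 1024          # assumed prefix kept for large files
WHOLE_FILE_LIMIT = 256 * 1024     # assumed small-file threshold

def cache_portion(file_size):
    """Return how many bytes of the file the memory cache should keep."""
    if file_size <= WHOLE_FILE_LIMIT:
        return file_size                 # small file: cache it whole
    return min(PREFIX_BYTES, file_size)  # large file: cache only the prefix
```

Caching only prefixes of large (e.g., multimedia) files lets the server start responses from memory while streaming the tail from disk, which is why the policy is expected to age well as multimedia traffic grows.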

8.
9.
LCESM: A Location-Sensitive Context Event Subscription Mechanism
Research on traditional location-aware publish/subscribe systems has concentrated on event publication and matching and does not adequately support subscriptions to context events that occur frequently at the same event source. This paper presents a location-sensitive context event subscription mechanism comprising a location-sensitive context event subscription language, LACL, and a dynamic binding method. Location events and event-source events defined on LACL give the system location awareness and event-source awareness, respectively. Dynamic binding covers the automatic mapping from context event sources to subscriptions and the binding between agents and subscriptions: the automatic mapping effectively turns event-to-subscription matching into source-to-subscription matching, reducing the number of matching operations and improving system performance, while the agents' dynamic operations on subscriptions spare sensors unnecessary listening costs. The mechanism is implemented on the context-aware middleware CTK, and experiments verify its effectiveness.

10.
Research on a Fast Matching Algorithm for Content-Based Publish/Subscribe Systems
An efficient matching algorithm is one of the key research problems in large-scale content-based publish/subscribe systems. This paper establishes a subscription language and an event model and proposes an efficient, practical content-based matching algorithm. The algorithm makes full use of multidimensional indexing to accelerate queries and exploits covering relations among constraints to avoid repeated matching. Experiments show that it is substantially more efficient than other commonly used algorithms, scales well, and is suitable for large-scale distributed content-based publish/subscribe systems.

11.
Many works have reported simulated performance benefits of stream reuse techniques such as batching, chaining, and patching for the scalability of VoD systems. However, the relative contribution of these techniques has rarely been evaluated in practical implementations of scalable VoD servers. In this work, we investigated the efficiency of representative stream reuse techniques on the GloVE system, a low-cost scalable VoD platform whose resulting performance depends on the combination of stream techniques it uses. More specifically, we evaluated the performance of the GloVE system in delivering multiple MPEG-1 videos to clients connected to the server through a 100 Mbps Ethernet switch, with arrival rates varying from 6 to 120 clients/min. We present experimental results including startup latency, occupancy of the server's channels, and the aggregate bandwidth that GloVE demanded for several combinations of stream reuse techniques. Overall, our results reveal that stream reuse techniques in isolation offer limited performance scalability to VoD systems, and that only a balanced combination of batching, chaining, and patching explains the scalable performance of GloVE in delivering highly popular videos with low startup latency while using the smallest number of server channels.
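Of the three techniques named above, batching is the simplest to illustrate; the toy sketch below (our own simplification, not GloVE's implementation) groups requests for the same video that arrive within a batching window onto one shared stream, so the stream count grows with distinct batches rather than with requests.

```python
# Toy sketch of batching: requests for the same video arriving within a
# batching window share one server stream. arrivals must be time-sorted.
def batch_requests(arrivals, window):
    """arrivals: sorted list of (time, video); returns number of streams opened."""
    open_streams = {}   # video -> start time of the current batch
    streams = 0
    for t, video in arrivals:
        start = open_streams.get(video)
        if start is None or t - start > window:
            open_streams[video] = t   # window expired: open a new stream
            streams += 1
    return streams
```

Chaining and patching then reclaim further bandwidth by reusing client buffers and by catching late arrivals up to an ongoing stream, which is why the abstract finds that only the balanced combination scales.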

12.
State-of-the-art cluster-based data centers consisting of three tiers (web server, application server, and database server) are being used to host complex web services such as e-commerce applications. The application server handles dynamic and sensitive web content that needs protection from eavesdropping, tampering, and forgery. Although the secure sockets layer (SSL) is the most popular protocol for providing a secure channel between a client and a cluster-based network server, its high overhead degrades server performance considerably and thus limits server scalability. Therefore, improving the performance of SSL-enabled network servers is critical for designing scalable and high-performance data centers. In this paper, we examine the impact of SSL offering and SSL-session-aware distribution in cluster-based network servers. We propose a back-end forwarding scheme, called ssl_with_bf, that employs a low-overhead user-level communication mechanism such as the virtual interface architecture (VIA) to achieve a good load balance among server nodes. We compare three distribution models for network servers, round robin (RR), ssl_with_session, and ssl_with_bf, through simulation. The experimental results with 16-node and 32-node cluster configurations show that, although the session reuse of ssl_with_session is critical to improving the performance of application servers, the proposed back-end forwarding scheme can further enhance performance through better load balancing. The ssl_with_bf scheme reduces average latency by about 40 percent and improves throughput across a variety of workloads.
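The tension the abstract describes, between keeping SSL session affinity and balancing load, can be caricatured in a few lines (the decision rule and threshold below are our own guesses at the intuition, not the paper's algorithm): stay on the session's node unless the imbalance is large enough to pay for forwarding the request over the fast internal channel.

```python
# Hypothetical sketch of the back-end forwarding intuition: honor SSL
# session affinity unless another node is sufficiently less loaded.
def dispatch(session_node, loads, threshold=2):
    """Return the index of the node that should process the request."""
    least = min(range(len(loads)), key=loads.__getitem__)
    # Forward only when the imbalance justifies the extra internal hop.
    if loads[session_node] - loads[least] >= threshold:
        return least
    return session_node
```

Pure session affinity (threshold = infinity) reuses SSL state but can overload hot nodes; pure least-load dispatch (threshold = 0) balances load but forfeits session reuse. The scheme sits between the two.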

13.
Cloud data processing occupies a critical position in cloud computing infrastructure. However, most current cloud storage systems organize data as key-value pairs over distributed hashing, which supports range queries poorly and has weak dynamic real-time behavior, so an auxiliary dynamic index for cloud environments is needed. After surveying and analyzing auxiliary two-layer indexing mechanisms for cloud environments, this paper proposes a two-layer index architecture for cloud data processing based on concurrent skip lists. The two-layer design breaks through the memory and disk limits of a single machine, thereby extending the overall index range of the system. A dynamic splitting algorithm resolves hotspot problems on individual servers and keeps the whole index structure load-balanced, while the concurrent skip list raises the carrying capacity and concurrency of the global index and improves overall index throughput. Experimental results show that the architecture effectively supports both single-key and range queries with strong scalability and concurrency, making it an efficient auxiliary index for cloud storage.
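The two-layer structure described above can be sketched as a global index that maps key ranges to servers plus a sorted local index per server; in the simplified sketch below a sorted list stands in for the concurrent skip list (a deliberate substitution, since both give ordered traversal, which is what enables range queries).

```python
# Simplified sketch of a two-layer range index. The global layer routes a
# key to a server by range boundaries; each local layer is a sorted list
# standing in for the paper's concurrent skip list.
import bisect

class TwoLayerIndex:
    def __init__(self, boundaries):
        # boundaries[i] is the smallest key owned by server i + 1
        self.boundaries = boundaries
        self.local = [[] for _ in range(len(boundaries) + 1)]

    def _server(self, key):
        return bisect.bisect_right(self.boundaries, key)

    def insert(self, key):
        bisect.insort(self.local[self._server(key)], key)

    def range_query(self, lo, hi):
        out = []
        # Visit every server whose range overlaps [lo, hi].
        for s in range(self._server(lo), self._server(hi) + 1):
            bucket = self.local[s]
            i = bisect.bisect_left(bucket, lo)
            j = bisect.bisect_right(bucket, hi)
            out.extend(bucket[i:j])
        return out
```

A hash-based key-value layout cannot answer `range_query` without scanning every partition; the ordered global layer is exactly what restores that capability.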

14.
With the development of cloud environments serving as a unified infrastructure, resource management and energy consumption become more important issues in the operation of such systems. In this paper, we investigate adaptive, model-free approaches for resource allocation and energy management under time-varying workloads and heterogeneous multi-tier applications. Specifically, we use measurable metrics, including throughput, rejection count, and queuing state, to design resource adjustment schemes and to make control decisions adaptively. The ultimate objective is to guarantee the aggregate revenue of the resource provider while saving energy and operational costs. To validate the effectiveness of the approach, performance evaluation experiments are performed in a simulated environment with realistic workloads. Results show that, by combining long-term and short-term adaptation, the fluctuation of unpredictable workloads can be captured, so total revenue is preserved while power consumption is balanced as needed. Furthermore, the proposed approach achieves better effectiveness and efficiency than model-based approaches in dealing with real-world workloads.
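A model-free controller of the kind described reacts to measured signals rather than to a workload model; the rule below is a minimal sketch under assumed thresholds (none of the names or constants come from the paper): scale up on rejections or a long queue, scale down when the queue stays short.

```python
# Hedged sketch of a model-free adjustment rule; thresholds and limits are
# assumptions for illustration, not the paper's tuning.
def adjust_capacity(servers, queue_len, rejected,
                    high_queue=50, low_queue=5, min_servers=1, max_servers=32):
    """Return the new server count given the latest measured metrics."""
    if rejected > 0 or queue_len > high_queue:
        servers = min(max_servers, servers + 1)   # add capacity under pressure
    elif queue_len < low_queue:
        servers = max(min_servers, servers - 1)   # release idle capacity
    return servers
```

Because the rule consults only observed metrics, it needs no workload forecast, which is the practical advantage the abstract claims over model-based control.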

15.
16.
Hypervisors enable the cloud computing model to provide scalable infrastructures and on-demand access to computing resources, as they allow multiple operating systems to run concurrently on one physical server. This mechanism raises the utilization of physical servers and thus reduces server count in the data center. Hypervisors also bring the benefits of reduced IT infrastructure setup and maintenance costs, along with power savings. It is therefore interesting to compare hypervisors' performance under consolidated application workloads. Three hypervisors, ESXi, XenServer, and KVM, are carefully chosen to represent the full-virtualization, para-virtualization, and hybrid-virtualization categories, respectively, for the experiment. We created a private cloud using CloudStack, with the hypervisors deployed as hosts in their respective clusters. Each hypervisor hosts three virtual machines, on which three applications (a web server, an application server, and a database server) are installed. Experiments are designed using the Design of Experiments (DOE) methodology. With the virtual machines running concurrently, each hypervisor is stressed with consolidated real-time workloads (web load, application load, and OLTP), and key system metrics are gathered using the SIGAR framework. The paper proposes a new scoring formula for hypervisor performance in a private cloud under consolidated application workloads, and the results are supported by sound statistical analysis using DOE.

17.
Core Technologies of Publish/Subscribe Systems for Large-Scale Distributed Computing
马建刚, 黄涛, 汪锦岭, 徐罡, 叶丹. 《软件学报》 (Journal of Software), 2006, 17(1): 134-147
Publish/subscribe technology features asynchronous, loosely coupled, many-to-many communication, which suits the needs of today's dynamic, large-scale distributed computing environments and promises broad application. This paper analyzes the state of publish/subscribe research at home and abroad and systematically classifies existing systems by topology, event model, and subscription model. It then discusses the key problems of matching algorithms, content-based routing algorithms, formal modeling, and quality of service; analyzes and compares representative systems; and points out the problems and shortcomings of current research in the field. It also analyzes the challenges of supporting semantic and approximate matching to make systems more intelligent, and surveys research trends for publish/subscribe systems in emerging computing environments such as mobile computing and P2P.

18.
19.
To keep user data confidential, the common industry practice is to encrypt data before storing it in the cloud. This paper proposes a method for ensuring data confidentiality in cloud storage systems with three characteristics: (1) the encryption/decryption system is deployed in front of the cloud storage server and encrypts/decrypts user data between the client and the server; (2) encryption and decryption happen in real time, with data encrypted during upload and decrypted during download; (3) the encryption/decryption system is transparent to both the client and the cloud server. Widely used cloud storage systems that transfer data over HTTP, such as Amazon S3 and OpenStack Swift, can adopt the method directly. Test results show that the method effectively offloads the encryption/decryption burden from the cloud storage system without reducing data transfer throughput.
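The transparency property above rests on a symmetric transform applied in-path: the same operation run over the stream on upload and download. The sketch below illustrates that shape with a toy XOR keystream (a deliberate stand-in; a real deployment would use an authenticated cipher such as AES-GCM, not this construction).

```python
# Toy illustration of a symmetric in-path transform: XOR with a
# key-derived stream. NOT secure cryptography; a real proxy would use an
# authenticated cipher. Applying transform() twice restores the data, so
# the same code path serves upload (encrypt) and download (decrypt).
import hashlib

def _keystream(key, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def transform(key, data):
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Because the transform is stream-shaped, the proxy can apply it chunk by chunk as bytes flow through, which is what lets encryption ride along with the transfer instead of adding a separate pass.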

20.
Sequential prefetching schemes are widely employed in storage servers to mask disk latency and improve system throughput. However, existing schemes cannot benefit parallel disk systems as expected because they ignore the distinct internal characteristics of such systems, in particular data striping. Moreover, their aggressive prefetching pattern suffers from premature evictions and prolonged request latencies. In this paper, we propose a strip-oriented asynchronous prefetching (SoAP) technique dedicated to parallel disk systems. It addresses the above problems by providing several novel features, including enhanced prediction accuracy, adaptive prefetching strength, physical data layout awareness, and timely prefetching. To validate SoAP, we implement a prototype by modifying the software RAID (redundant array of inexpensive disks) layer under Linux. Experimental results demonstrate that, compared with STEP, SP, ASP, and Linux-like SEQPs, SoAP consistently improves the average response time and throughput of the parallel disk system under non-random workloads.
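The layout awareness claimed above amounts to reasoning in units of strips rather than blocks; the toy detector below (strip size and the three-access rule are our assumptions, not SoAP's actual policy) maps recent accesses to strips and triggers a prefetch only after a cross-strip sequential pattern appears.

```python
# Toy sketch of strip-aware sequential detection: map recent block
# accesses to strips and prefetch the next strip only after the stream has
# crossed consecutive strips. Constants are illustrative assumptions.
STRIP_SIZE = 64  # blocks per strip (assumed)

def next_prefetch(access_history):
    """access_history: list of block numbers; return strip to prefetch, or None."""
    strips = [b // STRIP_SIZE for b in access_history[-3:]]
    if len(strips) == 3 and strips[2] - strips[1] == 1 and strips[1] - strips[0] == 1:
        return strips[2] + 1   # sequential across strips: prefetch the next one
    return None
```

Waiting for the pattern to cross strip boundaries is what keeps the prefetcher from firing on short intra-strip runs, addressing the premature-eviction problem the abstract attributes to aggressive block-level schemes.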


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号