Similar Documents
20 similar documents found.
1.
Server-Side Cache Design for VOD Based on Cooperative Caching   Total citations: 1 (self: 0, others: 1)
In recent years, advances in computer networking and multimedia technology have gradually made video-on-demand services a reality. Distributed VOD (Video On Demand) server systems were proposed to support a larger number of concurrent streams; compared with a single server, such an architecture offers better efficiency, reliability and scalability. Cooperative caching (CC) coordinates the memory of the individual servers into a single global cache. This structure not only exploits the characteristics of the distributed VOD architecture, but also enlarges the effective cache capacity and raises the global hit ratio, thereby improving overall system efficiency. Building on cooperative caching and on the characteristics of streaming media and VOD systems, this paper proposes the GBB cache replacement algorithm. The algorithm takes the lifetime of a data block as its starting point and accounts for the service requirements of both existing users and newly admitted users, improving memory utilization. The authors analyse the algorithm theoretically and demonstrate its performance advantage over traditional cache replacement algorithms.
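The abstract above does not spell out the GBB replacement rule, only that it is driven by a block's "lifetime" and by the needs of admitted and arriving streams. The sketch below illustrates one plausible reading under that assumption: since VOD streams read blocks sequentially, each cached block's next reference time can be predicted from the positions of the streams, and the block whose next use is farthest away is evicted. All names and the data layout are hypothetical.

```python
def pick_victim(cached_blocks, streams, now):
    """cached_blocks: iterable of (movie_id, block_index).
    streams: list of dicts with 'movie', 'block' (current position) and
    'rate' (blocks consumed per second) for every admitted or arriving user.
    Evicts the block whose next predicted use is farthest in the future;
    blocks no stream will ever reach again are evicted first."""
    def next_use(block):
        movie, idx = block
        times = [now + (idx - s["block"]) / s["rate"]
                 for s in streams
                 if s["movie"] == movie and s["block"] <= idx and s["rate"] > 0]
        return min(times) if times else float("inf")   # never needed again
    return max(cached_blocks, key=next_use)
```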

2.
唐兵  张黎 《计算机应用》2014,34(11):3109-3111
To increase access speed and reduce the cost of cloud storage, a cost-oriented caching strategy for cloud storage is proposed. Using several desktop computers on an essentially free local-area network, a distributed file system is built locally and used as a cache for the remote cloud storage. When a file is read, the cache is checked first; if the file is present it is read directly from the cache, otherwise it is fetched from the remote cloud storage. The Least Recently Used (LRU) algorithm is used for cache replacement, evicting unpopular data from the cache. With Amazon Simple Storage Service (S3) as the remote cloud storage, a simple performance test of the prototype shows that the proposed caching strategy significantly speeds up file reads while reducing cost.
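The read path described in entry 2 above reduces to: check a local cache first, fall back to the remote store on a miss, and evict with LRU. A minimal sketch of that policy follows; the fetch_remote callback, the byte-based capacity limit and all names are illustrative assumptions, not details taken from the paper.

```python
from collections import OrderedDict

class LocalCloudCache:
    """LRU front cache for a remote object store (sketch)."""

    def __init__(self, fetch_remote, capacity_bytes):
        self.fetch_remote = fetch_remote      # e.g. a function wrapping an S3 GET
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()          # key -> bytes, oldest first

    def read(self, key):
        if key in self.entries:               # hit: serve locally, refresh recency
            self.entries.move_to_end(key)
            return self.entries[key]
        data = self.fetch_remote(key)         # miss: go to the remote cloud store
        self._insert(key, data)
        return data

    def _insert(self, key, data):
        self.entries[key] = data
        self.used += len(data)
        while self.used > self.capacity and self.entries:
            _, evicted = self.entries.popitem(last=False)   # evict least recently used
            self.used -= len(evicted)
```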

3.
In a VOD cluster proxy-cache system with multiple server nodes, a sensible distribution of cached content across the nodes can greatly improve cache utilization and hence overall system performance; this is the cache allocation problem. In the existing dynamic cache configuration (DCR) algorithm, some movies are cached only partially, wasting part of the cache space. To address this, an improved algorithm is proposed. Aiming to raise cache utilization and thereby the cache hit ratio, it fully places the data of every movie selected for caching. Simulation shows that the improved algorithm achieves a higher cache hit ratio and therefore gives the caching system better overall performance.

4.
To address the high bandwidth consumption caused by deploying P2P streaming-media server clusters in a cloud-computing data center network (DCN), a cloud-based deployment method for P2P streaming server clusters is proposed. The method models the deployment as a quadratic assignment problem and uses an ant colony algorithm to search for the mapping between each virtual streaming server and each deployment site. Simulation shows that the algorithm effectively reduces DCN bandwidth usage on the cloud platform.
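Entry 4 above models placement as a quadratic assignment problem (QAP): the cost of an assignment is the traffic between virtual servers weighted by the network distance between the sites they are assigned to. The sketch below shows only that objective, with made-up traffic and distance matrices and an exhaustive search standing in for the ant-colony optimizer.

```python
from itertools import permutations

def qap_cost(assign, traffic, dist):
    """Cost of assigning virtual server i to site assign[i]:
    sum over server pairs of traffic[i][j] * dist[assign[i]][assign[j]]."""
    n = len(assign)
    return sum(traffic[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def best_placement_bruteforce(traffic, dist):
    """Exhaustive search; only feasible for small n, used here to
    illustrate the objective an ant-colony search would optimize."""
    n = len(traffic)
    return min(permutations(range(n)), key=lambda a: qap_cost(a, traffic, dist))

# Illustrative (made-up) data: 3 virtual streaming servers, 3 candidate sites.
traffic = [[0, 8, 1],
           [8, 0, 2],
           [1, 2, 0]]          # streaming traffic between virtual servers
dist    = [[0, 1, 4],
           [1, 0, 2],
           [4, 2, 0]]          # network distance between deployment sites
print(best_placement_bruteforce(traffic, dist))
```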

5.
In continuous media servers, disk load can be reduced by using a buffer cache. In order to utilize the disk bandwidth saved by caching, a continuous media server must employ an admission control scheme to decide whether a new client can be admitted for service without violating the requirements of clients already being serviced. A scheme providing deterministic QoS guarantees in servers using caching has already been proposed. However, since deterministic admission control is based on worst-case assumptions, it wastes system resources. If the future available disk bandwidth could be predicted exactly, both high disk utilization and hiccup-free service would be achievable; however, as the caching effect cannot be determined analytically, it is difficult to predict disk load without substantial computation overhead. In this paper, we propose a statistical admission control scheme for continuous media servers in which caching is used to reduce disk load. The scheme improves disk utilization and allows more streams to be serviced while maintaining near-deterministic service. The scheme, called Shortsighted Prediction Admission Control (SPAC), combines exact prediction through on-line simulation with statistical estimation using a probabilistic model of future disk load in order to reduce computation overhead. It thereby exploits the variation in disk load induced by VBR-encoded objects and the decrease in client load achieved by caching. Trace-driven simulations demonstrate that the scheme provides near-deterministic QoS and keeps disk utilization high.

6.
With the rapid development of mobile Internet technologies and various new services such as virtual reality (VR) and augmented reality (AR), users' demand for network quality of service (QoS) keeps rising. To address the high load and low latency requirements of in-network services, this paper proposes a data caching strategy for a multi-access mobile edge computing (MEC) environment. An SDN controller is introduced into the MEC collaborative caching framework, a joint cache optimization mechanism based on data caching and computation migration is constructed, and the problem of increased user-perceived delay in the data caching strategy is addressed by a joint optimization algorithm combining an improved heuristic genetic algorithm with simulated annealing. A service optimization strategy based on multi-base-station collaboration is also proposed to coordinate the computation and storage resources of multiple mobile terminals and multiple smart base stations. Since the application-service demand at an MEC server changes with time, space, requests and other factors, an application-service optimization algorithm based on a Markov chain of service popularity is constructed, and a deep deterministic strategy (DDP) based on deep reinforcement learning is used to minimize the average delay of computation tasks in the cluster while respecting the MEC server's energy consumption, which improves the accuracy of application-service cache updates in the system and reduces the complexity of service updates. Experimental results show that the proposed data caching algorithm balances the cache space of user devices and effectively reduces the average transfer latency of acquiring data resources, and that the proposed service optimization algorithm improves the quality of user experience.
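Entry 6 above optimizes cache placement with a heuristic that combines a genetic algorithm and simulated annealing. The sketch below shows only the simulated-annealing half over a binary place/do-not-place vector under a capacity limit; the latency-gain cost model, the cooling schedule and all parameter values are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def anneal_cache_placement(latency_gain, size, capacity,
                           steps=5000, t0=1.0, cooling=0.999):
    """Choose which items to cache at an edge node so that the total
    latency gain is maximized subject to a capacity limit (sketch)."""
    n = len(size)
    state = [0] * n                                    # 0 = not cached, 1 = cached

    def value(s):
        used = sum(sz for sz, bit in zip(size, s) if bit)
        if used > capacity:
            return float("-inf")                       # infeasible placement
        return sum(g for g, bit in zip(latency_gain, s) if bit)

    best, best_val = state[:], value(state)
    cur_val, t = best_val, t0
    for _ in range(steps):
        i = random.randrange(n)
        state[i] ^= 1                                  # flip one caching decision
        new_val = value(state)
        if new_val >= cur_val or random.random() < math.exp((new_val - cur_val) / t):
            cur_val = new_val                          # accept (possibly worse) move
            if cur_val > best_val:
                best, best_val = state[:], cur_val
        else:
            state[i] ^= 1                              # reject: undo the flip
        t *= cooling
    return best, best_val
```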

7.
This paper studies the deployment of server clusters and the use of caching in the Henan college-admission preference filing system, and proposes combining server clustering with caching to improve web-site performance. The scheme adopts a new address-table-based method for deploying the server cluster, explains how Memcached is used within the cluster, and proposes solutions to its shortcomings. The scheme is suitable for deploying similar large web sites and has strong value for wider adoption.

9.
An effective way to improve the performance of a video-on-demand system is to raise cache utilization so that the system can accommodate more media streams. Interval caching is one effective caching policy. Building on interval caching, this paper proposes a hybrid caching policy that combines characteristics of static and dynamic caching. A cache-model analysis comparing the hybrid policy with plain interval caching on the relevant metrics shows that the hybrid policy brings a clear improvement.
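Interval caching, the baseline in entry 9 above (and analysed in entry 16 below), caches the data between two consecutive requests to the same video so that the trailing stream is served from memory written by the leading one; the smallest intervals are cached first because they free the most disk streams per byte. A minimal sketch of that selection step follows; the request format and the constant bit rate are illustrative assumptions.

```python
def choose_cached_intervals(requests, cache_capacity):
    """requests: list of (video_id, start_time_s); an interval is the gap
    between two consecutive requests to the same video. Returns the
    intervals (greedily, smallest first) that fit in cache_capacity bytes."""
    by_video = {}
    for video, t in sorted(requests, key=lambda r: r[1]):
        by_video.setdefault(video, []).append(t)

    bitrate = 500_000          # bytes per second per stream, an assumed constant rate
    intervals = []
    for video, times in by_video.items():
        for lead, trail in zip(times, times[1:]):
            size = (trail - lead) * bitrate        # data the follower lags behind by
            intervals.append((size, video, lead, trail))

    chosen, used = [], 0
    for size, video, lead, trail in sorted(intervals):   # smallest intervals first
        if used + size <= cache_capacity:
            chosen.append((video, lead, trail))
            used += size
    return chosen
```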

10.
Design and Management of a Large-Scale Hierarchical Video-on-Demand Storage System   Total citations: 6 (self: 0, others: 6)
李勇  吴飞  陈福接 《软件学报》1999,10(4):356-358
Recent advances in computing and communication technology have made video-on-demand (VOD) technically and economically feasible. The characteristics of continuous media mean that a VOD system requires a large-scale storage server, and a hierarchical storage architecture is a reasonable way to reduce system cost. This paper proposes a hierarchical storage model and the concept of a disk cache. Based on this model, an access-frequency-based replacement algorithm is proposed, and its effectiveness is evaluated by simulation and analysis. The results show that the algorithm solves the "cache pollution" problem of LFU (least frequently used) and is well suited to continuous-media applications.
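The abstract above names the problem it fixes (LFU cache pollution: blocks that were popular long ago keep high counts and never leave) but not the exact remedy. A common frequency-based remedy is to age the counters periodically; the sketch below illustrates that generic idea, not the paper's specific algorithm, and the aging period and capacity handling are assumptions.

```python
class AgingLFUCache:
    """Frequency-based replacement with periodic aging (sketch).
    Aging halves all counters so items that were popular long ago do not
    stay cached forever -- the "cache pollution" problem of plain LFU."""

    def __init__(self, capacity, aging_period=1000):
        self.capacity = capacity
        self.aging_period = aging_period
        self.freq = {}        # key -> access count
        self.data = {}        # key -> value
        self.accesses = 0

    def get(self, key, load):
        self.accesses += 1
        if self.accesses % self.aging_period == 0:
            for k in self.freq:                       # periodic decay of counts
                self.freq[k] //= 2
        if key in self.data:
            self.freq[key] = self.freq.get(key, 0) + 1
            return self.data[key]
        value = load(key)                             # miss: fetch from the lower tier
        if len(self.data) >= self.capacity:
            victim = min(self.data, key=lambda k: self.freq.get(k, 0))
            del self.data[victim]
            self.freq.pop(victim, None)
        self.data[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1
        return value
```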

11.
With the growth of the mobile Internet and its user population, audio and video services in the network widely use dynamic caching to relieve bandwidth pressure on the backhaul backbone and improve the viewing experience. How to adjust the cached content of different nodes, according to the network and user demand, so as to reduce backbone bandwidth pressure is a pressing problem in cache deployment. Based on submodular-function theory, this paper proposes proactive and reactive resource-allocation adjustment schemes and their algorithms. The proactive scheme places video files on cache nodes according to their popularity so as to minimize users' access cost; the reactive scheme adjusts the content cached on nodes in real time as audio/video popularity changes, improving cache utilization and user experience while lowering backbone bandwidth consumption. The complexity of the minimum-access-cost algorithm depends on the cache capacity, so it can iterate quickly to an allocation when cache space is tight. Numerical simulations show that the proactive and reactive allocation schemes effectively reduce bandwidth pressure on remote servers and improve user experience.
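Entry 11 above casts placement as optimizing a submodular access-cost objective; the textbook approach to such objectives is a greedy algorithm that repeatedly adds the (video, node) placement with the largest marginal reduction in expected access cost until the caches are full. The sketch below assumes a simple cost model (popularity-weighted cost of serving each video from the cheapest node that holds it, or from the origin otherwise) as an illustrative stand-in for the paper's objective.

```python
def greedy_placement(popularity, node_cost, origin_cost, node_capacity):
    """popularity[v]: request rate of video v; node_cost[u][n]: cost for user
    group u to reach cache node n; origin_cost[u]: cost to reach the origin.
    Greedily fills each node (capacity in number of videos) so that the
    popularity-weighted access cost drops the most at every step."""
    videos = range(len(popularity))
    users = range(len(origin_cost))
    nodes = range(len(node_capacity))
    placed = {n: set() for n in nodes}

    def access_cost():
        total = 0.0
        for v in videos:
            for u in users:
                options = [origin_cost[u]] + [node_cost[u][n]
                                              for n in nodes if v in placed[n]]
                total += popularity[v] * min(options)
        return total

    current = access_cost()
    while True:
        best = None
        for n in nodes:
            if len(placed[n]) >= node_capacity[n]:
                continue
            for v in videos:
                if v in placed[n]:
                    continue
                placed[n].add(v)
                gain = current - access_cost()        # marginal cost reduction
                placed[n].remove(v)
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, n, v)
        if best is None:
            return placed
        _, n, v = best
        placed[n].add(v)
        current = access_cost()
```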

12.
Efficient delivery of streaming media over the Internet is the basis for popularizing applications such as video-on-demand. Existing schemes consider only prefix caching at a single proxy together with server scheduling to reduce backbone bandwidth consumption and server load. Building on batch patching with prefix caching, this paper proposes ICBR, a dynamic suffix-caching algorithm, together with MCC, a multi-cache cooperation architecture and cooperation algorithm based on ICBR. Simulation shows that ICBR-based multi-cache cooperation markedly reduces the backbone bandwidth consumed in fetching patches, improves client QoS and also lowers server load.

13.
To keep network storage load-balanced and to avoid unrecoverable loss when a node or disk fails, a coded caching scheme for distributed network storage based on a balanced data-placement policy is proposed, with different solutions for large caches and small caches. For large caches, the Maddah scheme is extended to multi-server systems and, combined with balanced data placement, each file is stored as one unit in the data servers. For small caches, an interference-elimination scheme is extended to multi-server systems to lower the peak cache rate and, combined with balanced data placement, a linear combination of cache segments is proposed. Simulations with the Linux-based NS2 simulator on systems with one and two parity servers show that the proposed scheme effectively reduces the peak transmission rate and outperforms two other recent caching schemes. Although distributed storage limits the ability to combine content from different servers into a single message, causing some performance loss for coded caching, it can fully exploit the redundancy inherent in a distributed storage system and thereby improve storage-system performance.

14.
Cloud computing allows execution and deployment of different types of applications, such as interactive databases or web-based services, which require distinctive types of resources. These applications lease cloud resources for a considerably long period and usually occupy various resources to maintain a high quality of service (QoS). On the other hand, general big-data batch-processing workloads are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, the fine-scale characteristics of cloud-based applications may cause temporarily low resource utilization in cloud computing systems, while process-intensive, highly utilized workloads suffer from performance issues. Therefore, utilization-efficient scheduling of heterogeneous workloads is a challenging issue for cloud owners. In this paper, addressing the impact of workload heterogeneity on low utilization of cloud computing systems, a joint resource-allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to schedule processing jobs and cloud applications jointly in a preemptive way. However, utilization-efficient resource allocation requires exact modeling of workloads. So, first, a novel methodology to model the processing jobs and other cloud applications is proposed: such jobs are modeled as a collection of parallel and sequential tasks in a Markovian process, which enables us to analyze and calculate the resources required to serve the tasks efficiently. The next step makes use of the proposed model to develop a preemptive scheduling algorithm for the processing jobs in order to improve resource utilization and its associated costs in the cloud computing system. Accordingly, a preemption-based resource-allocation architecture is proposed to utilize the idle reserved resources for the processing jobs effectively and efficiently in cloud paradigms. Then, performance metrics such as service time for the processing jobs are investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experiments, whose results also shed light on the achievable QoS level for the preemptively allocated processing jobs.

15.
As peer-to-peer applications mature, reducing latency, relieving concentrated bandwidth load and improving quality of service have become research priorities. This paper proposes the CORPC cache-management scheme. The scheme uses the popularity of streaming-media segments to define the optimal share of system cache capacity that the replicas of each segment may occupy, and, taking into account a segment's existing replica volume, its popularity, and each node's storage capacity, uses a heuristic greedy algorithm to implement cache admission and cache replacement, balancing the quality of service of segments of different popularity. Tests in a simulated environment show that system quality of service improves as node cache space grows.
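Entry 15 above sizes each segment's replica share by popularity and then admits or evicts replicas with a greedy heuristic. The sketch below shows one way such a quota-driven admission check could look; the proportional quota rule and the eviction choice (evict from the segment most over quota) are illustrative assumptions, not the exact CORPC rules.

```python
def replica_quotas(popularity, total_cache_slots):
    """Share of system-wide cache slots each segment may occupy,
    proportional to its popularity (an assumed rule)."""
    total_pop = sum(popularity.values())
    return {seg: max(1, round(total_cache_slots * p / total_pop))
            for seg, p in popularity.items()}

def admit(segment, replica_count, quotas, evict_candidates):
    """Admit a new replica of `segment` if it is under quota; otherwise
    suggest evicting a replica of the segment that is most over quota."""
    if replica_count[segment] < quotas.get(segment, 0):
        return True, None
    over = {s: replica_count[s] - quotas.get(s, 0) for s in evict_candidates}
    victim = max(over, key=over.get) if over else None
    if victim is not None and over[victim] > 0:
        return True, victim          # admit after evicting one replica of the victim
    return False, None
```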

16.
Design of servers to meet the quality of service (QoS) requirements of interactive video-on-demand (VOD) systems is challenging. Recognizing the increasing use of these systems in a wide range of applications, as well as the stringent service demands expected from them, several design alternatives have been proposed to improve server throughput. A buffer-management technique called interval caching is one such solution; it exploits the temporal locality of requests to the same movie and tries to serve requests from the cache, thereby enhancing system throughput. In this paper, we present a comprehensive mathematical model for analyzing the performance of interactive video servers that use interval caching. The model takes into account the representative workload parameters of interactive servers employing interval caching and calculates the expected number of cached streams as an indication of the improvement in server capacity due to interval caching. In particular, user interactions, which strongly affect the performance of interval caching, are realistically reflected in the model for an accurate analysis. A statistical admission control technique has also been developed based on this model. Using this model as a design tool, we measure the impact of different VCR operations on client requests and rejection probability, as well as the effect of cache size.

17.
In a streaming-media CDN, a combined push-pull content-distribution scheme can further reduce clients' average start-up delay and lower network resource consumption. Based on the partial-access characteristics of streaming files, this paper presents a partial-push policy for streaming content and, using the historical access statistics collected at the origin server, proposes a randomized algorithm for selecting the destination proxy cache server. Experiments show that the policy and the algorithm improve system performance.
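Entry 17 above selects the destination proxy for pushed content at random, guided by historical access statistics from the origin server. A minimal sketch of a weighted random selection of that kind follows; treating the weights as per-proxy historical request counts is an assumption about how the statistics are used.

```python
import random

def pick_destination_proxy(historical_requests):
    """historical_requests: dict mapping proxy id -> number of past requests
    for this content observed by the origin server. Proxies whose clients
    asked for the content more often are proportionally more likely to
    receive the pushed prefix."""
    proxies = list(historical_requests)
    weights = [historical_requests[p] for p in proxies]
    if not proxies or sum(weights) == 0:
        return random.choice(proxies) if proxies else None
    return random.choices(proxies, weights=weights, k=1)[0]

# Illustrative usage with made-up counts.
print(pick_destination_proxy({"proxy-a": 120, "proxy-b": 45, "proxy-c": 10}))
```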

18.
To address the high energy consumption of data centers in containerized cloud environments, a virtual-resource configuration strategy based on a best-energy-first (Power Full, PF) physical-machine selection algorithm is proposed. First, a configuration and migration scheme for container-cloud virtual resources is presented, showing that the physical-machine selection policy has a major impact on data-center energy consumption. Second, by studying the mathematical relationships between host utilization and container utilization, between host utilization and virtual-machine utilization, and between host utilization and data-center energy consumption, an energy model of the container-cloud data center is built and the optimization objective is defined. Finally, the power function of a physical machine is approximated by linear interpolation and, exploiting the similarity of neighbouring points, an improved best-energy-first physical-machine selection algorithm is proposed. Simulations compare the algorithm with First Fit (FF), Least Fit (LF) and Most Full (MF). For computing services on regular, heterogeneous machine groups its energy consumption is on average 45%, 53% and 49% lower than FF, LF and MF respectively; on regular, homogeneous machine groups it is 56%, 46% and 58% lower; and on irregular, heterogeneous machine groups it is 32%, 24% and 12% lower. The algorithm thus configures container-cloud virtual resources sensibly and saves energy in the data center.
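Entry 18 above interpolates each physical machine's power curve from measured utilization/power points and then, for each new container, picks the host whose total power would grow the least. A sketch of both steps follows; the sample power points and the way container load maps onto host utilization are illustrative assumptions.

```python
def interpolated_power(points, utilization):
    """points: sorted list of (utilization, watts) measurements for one host.
    Returns the power at `utilization` by linear interpolation."""
    if utilization <= points[0][0]:
        return points[0][1]
    for (u0, p0), (u1, p1) in zip(points, points[1:]):
        if u0 <= utilization <= u1:
            return p0 + (p1 - p0) * (utilization - u0) / (u1 - u0)
    return points[-1][1]

def best_energy_first(hosts, container_load):
    """hosts: list of dicts with current 'util' and a 'power_points' curve.
    Picks the host where adding `container_load` (as extra utilization)
    raises power consumption the least, skipping hosts it would overload."""
    best_host, best_delta = None, None
    for h in hosts:
        new_util = h["util"] + container_load
        if new_util > 1.0:
            continue
        delta = (interpolated_power(h["power_points"], new_util)
                 - interpolated_power(h["power_points"], h["util"]))
        if best_delta is None or delta < best_delta:
            best_host, best_delta = h, delta
    return best_host
```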

19.
In mobile edge computing research, an edge server can save computing resources by caching task data, but allocating cache resources so as to resolve the competition among edge servers, together with the energy-consumption and benefit issues, while achieving optimal system performance is an NP-hard problem. This paper therefore proposes OPSCO (online potential-game strategy based on cache optimization), an online potential-game resource-allocation strategy based on cache optimization, which adopts a new cache-replacement policy, CASCU (cache allocation strategy based on cache utility), to maximize cache utility. By optimizing the edge servers' benefit indicator function and combining factors such as cache-replacement cost with Lyapunov optimization, potential games and the EWA (exponential weighting algorithm), the competition among edge servers is modelled, and the relevant potential-game proofs and analysis are carried out. Simulation shows that, compared with other resource-allocation strategies, OPSCO clearly improves the task completion rate and cache utility while lowering device energy consumption and time overhead, solving the resource-allocation and data-caching problems in online caching scenarios for mobile edge computing.

20.
A Scalable Cluster-Based Web Cache Server   Total citations: 4 (self: 0, others: 4)
The bottleneck at the upper levels of a hierarchical caching system can be removed with a highly scalable cluster. After analysing two typical existing cluster cache servers, this paper proposes a new Web cache server system. The new system introduces a cache-digest mechanism so that the cluster presents a single cache image and can be accessed quickly. Shorter digest files, together with a single entry point that forwards only requests and not responses, lighten the load on the system's request dispatcher and give the system good scalability. A dynamic load-balancing algorithm that handles hot files in the cache system and the heterogeneity of the cluster helps raise system throughput.
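The single cache image in entry 20 above rests on each node publishing a compact digest of the URLs it caches, so the dispatcher can tell which node probably holds a requested object without relaying the response itself. The sketch below uses a small Bloom-filter-style digest; the hashing scheme, digest size and routing fallback are illustrative assumptions rather than the paper's exact design.

```python
import hashlib

class CacheDigest:
    """Compact summary of the URLs one cache node holds (Bloom-filter style)."""

    def __init__(self, bits=8192, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.bitmap = bytearray(bits // 8)

    def _positions(self, url):
        for i in range(self.hashes):
            h = hashlib.md5(f"{i}:{url}".encode()).hexdigest()
            yield int(h, 16) % self.bits

    def add(self, url):
        for pos in self._positions(url):
            self.bitmap[pos // 8] |= 1 << (pos % 8)

    def probably_has(self, url):
        return all(self.bitmap[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(url))

def route_request(url, digests, fallback_node):
    """Dispatcher: forward the request to the first node whose digest says it
    probably caches the URL; otherwise use a fallback (e.g. hash-chosen) node."""
    for node, digest in digests.items():
        if digest.probably_has(url):
            return node
    return fallback_node
```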
