Similar Documents
18 similar documents found (search time: 109 ms)
1.
Global Cache Management in Distributed Multimedia Storage Systems   Cited by: 2 (self: 2, other: 0)
朱晴波  乔浩  陈道蓄 《电子学报》2002,30(12):1832-1835
A multimedia storage system must support access to both continuous and non-continuous media. Because of the real-time requirements of continuous media, the system has to reserve a large amount of disk bandwidth for continuous-media access over long periods, which severely degrades access performance for other types of files. Based on the access characteristics of continuous media, this paper proposes GLNU, a cooperative caching strategy for distributed multimedia storage systems. GLNU makes full use of the memory available on other nodes in the system to raise cache utilization, reducing disk I/O for continuous media and thereby improving access performance for other media. Simulation experiments show that GLNU outperforms existing caching strategies under a wide range of parameter settings and is well suited to distributed multimedia storage systems.
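The abstract does not give GLNU's internal policy; the sketch below only illustrates the general cooperative-caching idea it builds on: look in the local memory cache first, then in the memory of peer nodes, and fall back to disk only on a global miss. All names here (CooperativeCache, the peer stubs, fetch_from_disk) and the FIFO eviction are illustrative assumptions, not the paper's actual design.

class CooperativeCache:
    """Minimal sketch of a cooperative (global) cache lookup.

    Assumed model: each node keeps a local in-memory cache and can ask
    peer nodes for blocks they hold, avoiding a disk read when possible.
    """

    def __init__(self, local_capacity, peers, fetch_from_disk):
        self.local = {}                  # block_id -> data (local memory cache)
        self.capacity = local_capacity
        self.peers = peers               # peer cache objects (assumed RPC stubs)
        self.fetch_from_disk = fetch_from_disk

    def lookup_local(self, block_id):
        return self.local.get(block_id)

    def get(self, block_id):
        # 1. Local memory hit: cheapest path.
        data = self.lookup_local(block_id)
        if data is not None:
            return data
        # 2. Remote memory hit: ask peers before touching the disk.
        for peer in self.peers:
            data = peer.lookup_local(block_id)
            if data is not None:
                self._admit(block_id, data)
                return data
        # 3. Miss everywhere: fall back to disk I/O.
        data = self.fetch_from_disk(block_id)
        self._admit(block_id, data)
        return data

    def _admit(self, block_id, data):
        # Naive FIFO eviction; GLNU's actual replacement policy is not
        # described in the abstract and would replace this step.
        if len(self.local) >= self.capacity:
            self.local.pop(next(iter(self.local)))
        self.local[block_id] = data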

2.
When a traditional database faces large-scale data access, disk I/O often becomes the performance bottleneck and causes excessive response latency. Building on the Redis caching technology, this paper constructs a distributed caching system that provides applications with high availability, high performance, and high reliability.
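The abstract does not detail the system's architecture; the sketch below shows only the common cache-aside pattern with Redis that such a system is typically built around. It assumes the redis-py client and a hypothetical load_user_from_db function; the key naming and TTL are arbitrary.

import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def load_user_from_db(user_id):
    # Placeholder for the real (slow) database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl_seconds=300):
    """Cache-aside read: try Redis first, fall back to the database on a miss."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: no database I/O
    user = load_user_from_db(user_id)             # cache miss: query the database
    r.set(key, json.dumps(user), ex=ttl_seconds)  # populate the cache with an expiry
    return user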

3.
With the arrival of cloud computing and big data, a system must process rapidly growing volumes of data accurately and in real time while still meeting users' requirements for access volume, access speed, and access security, and traditional caching techniques cannot cope with massive data and highly concurrent user access. Distributed caching is the best available high-performance caching solution. This paper studies how distributed caching under cloud computing can address these problems in a massive-data processing platform, analyzing its key techniques, cache consistency, and distributed in-memory data management. On this basis, the deployment and overall architecture of a distributed caching system are analyzed and designed, and the design was applied to a group-buying website for a POC test. The test results show that distributed caching relieves server pressure, handles massive data and extremely high concurrent access, improves system performance, access speed, and reliability, and reduces response latency.

4.
This paper examines a scenario common in modern data warehouse systems: a large volume of streaming data must be ingested and joined with data already on disk before being loaded into the warehouse. By setting up disk paging sensibly and applying a cache module to spread disk I/O pressure, a more efficient data ingestion scheme is proposed on the basis of existing work; a consistent hash function is then introduced to extend it to a distributed environment, yielding the D-CACHEJOIN algorithm. Through theoretical calculation...
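The abstract only states that a consistent hash function extends the scheme to a distributed environment; the details of D-CACHEJOIN are not given. The sketch below shows a standard consistent-hash ring with virtual nodes that such a partitioning step could use; the node names and replica count are made up for illustration.

import bisect
import hashlib

class ConsistentHashRing:
    """Standard consistent-hash ring with virtual nodes (sketch)."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self._hashes = []   # sorted hashes of virtual nodes
        self._owners = []   # _owners[i] owns the arc ending at _hashes[i]
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            idx = bisect.bisect(self._hashes, h)
            self._hashes.insert(idx, h)
            self._owners.insert(idx, node)

    def get_node(self, key):
        """Return the node responsible for caching this key."""
        if not self._hashes:
            raise ValueError("ring is empty")
        idx = bisect.bisect(self._hashes, self._hash(key))
        if idx == len(self._hashes):
            idx = 0             # wrap around the ring
        return self._owners[idx]

ring = ConsistentHashRing(["cache-node-1", "cache-node-2", "cache-node-3"])
print(ring.get_node("order:12345"))   # key -> node that should hold its cache entry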

5.
To improve the overall performance of a proxy system, this paper exploits the temporal locality and similarity of intranet users' accesses and, combining this with existing distributed caching systems, proposes a new distributed proxy caching system: the two-tier cache cluster. The system consists of an intranet cluster cache tier and a proxy cluster cache tier. This two-tier proxy cache structure makes full use of existing intranet resources, spreads the proxies' load, reduces the communication overhead between proxies, and further improves the utilization of cache resources...

6.
Research on a Key-Value Distributed Storage System for Cloud Computing   Cited by: 1 (self: 0, other: 1)
孙勇  林菲  王宝军 《电子学报》2013,41(7):1406-1411
For data-intensive cloud computing applications, disk-based storage systems can hardly satisfy performance and availability requirements at the same time. This paper proposes M-Cloud, a key-value distributed storage system that uses memory as the primary device and disk as the auxiliary device, providing storage services such as big-data reads and writes, backup, and recovery. M-Cloud improves overall performance by loading all data into the memory of the server cluster, designs a partitioned linear hashing algorithm to achieve load balancing and high scalability, and designs corresponding data backup and fast failure recovery strategies to guarantee reliability. Simulation results show that M-Cloud offers high performance and availability; with further improvement and optimization it has the potential to be used in real production environments and to provide users with high-quality storage services.
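The abstract names a partitioned linear hashing algorithm but does not describe it. As background only, a plain single-node linear hash table, the structure such a scheme would partition across servers, can be sketched as follows; the class name, load-factor threshold, and initial bucket count are illustrative assumptions, and the cross-server partitioning step itself is not shown.

class LinearHashTable:
    """Textbook linear hashing: buckets are split one at a time as the
    table grows, so resizing cost is spread across insertions (sketch)."""

    def __init__(self, initial_buckets=4, max_load=0.75):
        self.n0 = initial_buckets     # buckets at level 0
        self.level = 0                # current round of splitting
        self.next_split = 0           # next bucket index to split
        self.buckets = [[] for _ in range(initial_buckets)]
        self.count = 0
        self.max_load = max_load

    def _index(self, key):
        h = hash(key)
        idx = h % (self.n0 * (1 << self.level))
        if idx < self.next_split:     # this bucket was already split this round
            idx = h % (self.n0 * (1 << (self.level + 1)))
        return idx

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self.count += 1
        if self.count / len(self.buckets) > self.max_load:
            self._split_one()

    def _split_one(self):
        # Split exactly one bucket; its entries are rehashed with the
        # next-level hash and spread over the old and the new bucket.
        self.buckets.append([])
        old = self.buckets[self.next_split]
        self.buckets[self.next_split] = []
        new_mod = self.n0 * (1 << (self.level + 1))
        for k, v in old:
            self.buckets[hash(k) % new_mod].append((k, v))
        self.next_split += 1
        if self.next_split == self.n0 * (1 << self.level):
            self.level += 1
            self.next_split = 0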

7.
《中兴通讯技术》2016,(2):23-27
This article studies how a Spark parallel computing cluster uses memory. Its main work is to model and analyze memory behavior and to automate memory-usage decisions, so that the scheduler automatically identifies valuable resilient distributed datasets (RDDs) and places them in the cache. The cache replacement policy is also optimized, replacing the original least recently used (LRU) algorithm. The improved caching method raises task efficiency when resources are limited and makes task efficiency more stable across different cluster environments.
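The article's replacement policy is not specified in the abstract. Purely as an illustration of a value-aware alternative to LRU (not the article's actual policy), the sketch below weighs each cached RDD partition by its recomputation cost relative to the memory it occupies and evicts the lowest-value entry first; all names and the weighting rule are assumptions.

class CostAwareCache:
    """Evicts the entry with the lowest recompute_cost / size value
    instead of the least recently used one (illustrative sketch)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = {}   # partition_id -> (size, recompute_cost, data)

    def put(self, pid, data, size, recompute_cost):
        while self.used + size > self.capacity and self.entries:
            self._evict_lowest_value()
        if self.used + size <= self.capacity:
            self.entries[pid] = (size, recompute_cost, data)
            self.used += size

    def get(self, pid):
        entry = self.entries.get(pid)
        return entry[2] if entry else None

    def _evict_lowest_value(self):
        # value = recomputation cost saved per byte of memory kept
        victim = min(self.entries,
                     key=lambda p: self.entries[p][1] / self.entries[p][0])
        size, _, _ = self.entries.pop(victim)
        self.used -= size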

8.
廖爽爽 《信息技术》2006,30(12):54-59
Object caching is a mechanism that avoids the high cost of rebuilding objects: instead of being released immediately after use, an object is kept in memory or on disk and reused by subsequent client requests. After examining several object caching frameworks widely used in industry, this paper proposes LiteCS, a design for a distributed object caching framework. In LiteCS the server is not responsible for propagating object-change messages, so even when cached objects are modified frequently the overall system load does not increase much and the network load is effectively reduced. Two sets of tests show that LiteCS is suitable for sharing cached objects among multiple servers over a network.

9.
赵昕  戚文芽  廖军 《电视技术》2006,(Z1):102-104
In the context of applying an embedded disk array in a media server, a SATA disk driver was implemented and a dual-cache management strategy was designed on top of it: an asymmetric backup cache, built from NVRAM, RAM, and a disk partition, whose structure differs from that of the primary cache. The scheme matches the performance of a symmetric dual-cache system while effectively improving system reliability and hardware resource utilization. Tests on the running system show that the optimization achieves the expected results.

10.
Building on traditional data storage techniques, this paper proposes using solid-state drives (SSDs) as a cache. Combined with SSD write-reduction techniques, it implements a tiered caching and storage mechanism that places hot and cold data across memory, SSD, and mechanical disk, making full use of memory resources, strengthening data exchange with the CPU, and effectively improving data processing speed in big data environments.
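The abstract describes the memory-SSD-HDD tiering idea but not its mechanism. A minimal sketch of such a hierarchy, with made-up capacities, LRU ordering inside each tier, promotion on access, and demotion on eviction, might look like this; none of it is taken from the paper.

from collections import OrderedDict

class TieredStore:
    """Two cache tiers (memory, SSD) in front of an HDD backing store.
    Hot blocks are promoted upward on access; evicted blocks are demoted
    one tier down rather than discarded (illustrative sketch)."""

    def __init__(self, mem_slots, ssd_slots, hdd_read):
        self.mem = OrderedDict()    # block_id -> data, kept in LRU order
        self.ssd = OrderedDict()
        self.mem_slots = mem_slots
        self.ssd_slots = ssd_slots
        self.hdd_read = hdd_read    # function simulating a slow HDD read

    def read(self, block_id):
        if block_id in self.mem:                    # memory hit
            self.mem.move_to_end(block_id)
            return self.mem[block_id]
        if block_id in self.ssd:                    # SSD hit: promote to memory
            data = self.ssd.pop(block_id)
        else:                                       # miss: fetch from HDD
            data = self.hdd_read(block_id)
        self._put_mem(block_id, data)
        return data

    def _put_mem(self, block_id, data):
        if len(self.mem) >= self.mem_slots:
            old_id, old_data = self.mem.popitem(last=False)   # evict LRU block
            self._put_ssd(old_id, old_data)                   # demote it to SSD
        self.mem[block_id] = data

    def _put_ssd(self, block_id, data):
        if len(self.ssd) >= self.ssd_slots:
            self.ssd.popitem(last=False)    # falls off the SSD tier (still on HDD)
        self.ssd[block_id] = data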

11.
Cache cooperation improves the performance of isolated caches, especially for caches with small cache populations. To make caches cooperate on a large scale and effectively increase the cache population, several caches are usually federated in caching architectures. We discuss and compare the performance of different caching architectures. In particular, we consider hierarchical and distributed caching. We derive analytical models to study important performance parameters of hierarchical and distributed caching, i.e., client's perceived latency, bandwidth usage, load in the caches, and disk space usage. Additionally, we consider a hybrid caching architecture that combines hierarchical caching with distributed caching at every level of a caching hierarchy. We evaluate the performance of a hybrid scheme and determine the optimal number of caches that should cooperate at each caching level to minimize client's retrieval latency.

12.
Data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing the access latency and bandwidth usage. However, designing efficient distributed caching algorithms is nontrivial when network nodes have limited memory. In this article, we consider the cache placement problem of minimizing total data access cost in ad hoc networks with multiple data items and nodes with limited memory capacity. The above optimization problem is known to be NP-hard. Defining benefit as the reduction in total access cost, we present a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least 1/4 (1/2 for uniform-size data items) of the optimal benefit. The approximation algorithm is amenable to localized distributed implementation, which is shown via simulations to perform close to the approximation algorithm. Our distributed algorithm naturally extends to networks with mobile nodes. We simulate our distributed algorithm using a network simulator (ns2) and demonstrate that it significantly outperforms another existing caching technique (by Yin and Cao [33]) in all important performance metrics. The performance differential is particularly large in more challenging scenarios such as higher access frequency and smaller memory.
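The paper's approximation algorithm is only summarized above. As a rough, purely illustrative sketch of the greedy "benefit" idea the abstract refers to (not the authors' exact algorithm, and without its approximation guarantee), one can repeatedly place the (node, item) copy that most reduces total access cost until no placement helps or memory runs out; the data-model names below are toy assumptions.

def greedy_cache_placement(nodes, items, dist, freq, memory, item_size, servers):
    """Greedy benefit-based cache placement (rough illustration).

    dist[u][v]   : hop distance between nodes u and v
    freq[u][i]   : how often node u accesses item i
    memory[u]    : remaining cache memory at node u
    servers[i]   : node that originally hosts item i
    """
    placed = {i: {servers[i]} for i in items}        # current copies of each item

    def access_cost(u, i):
        return freq[u][i] * min(dist[u][v] for v in placed[i])

    def total_cost():
        return sum(access_cost(u, i) for u in nodes for i in items)

    while True:
        best = None
        base = total_cost()
        for u in nodes:
            for i in items:
                if u in placed[i] or memory[u] < item_size[i]:
                    continue
                placed[i].add(u)                     # tentatively place a copy
                benefit = base - total_cost()
                placed[i].remove(u)
                if benefit > 0 and (best is None or benefit > best[0]):
                    best = (benefit, u, i)
        if best is None:
            return placed                            # no remaining placement helps
        _, u, i = best
        placed[i].add(u)                             # commit the best placement
        memory[u] -= item_size[i]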

13.
Design and Implementation of a Cache Replacement Algorithm for Large-Scale Continuous Media Services   Cited by: 4 (self: 0, other: 4)
张潇  吴敏强  恽爽  陆桑璐  谢立 《电子学报》2003,31(5):783-785
Cache design for continuous media is a critical problem. Targeting the characteristics of large-scale continuous media service systems, this paper proposes the EA cache replacement algorithm. The algorithm takes full account of the service requirements of both existing users and newly arriving users and improves memory usage efficiency. Our theoretical analysis and simulation experiments show that it performs far better than traditional cache replacement algorithms.

14.
Wireless mesh networks (WMNs) have been proposed to provide cheap, easily deployable and robust Internet access. The dominant Internet-access traffic from clients causes a congestion bottleneck around the gateway, which can significantly limit the throughput of the WMN clients in accessing the Internet. In this paper, we present MeshCache, a transparent caching system for WMNs that exploits the locality in client Internet-access traffic to mitigate the bottleneck effect at the gateway, thereby improving client-perceived performance. MeshCache leverages the fact that a WMN typically spans a small geographic area and hence mesh routers are easily over-provisioned with CPU, memory, and disk storage, and extends the individual wireless mesh routers in a WMN with built-in content caching functionality. It then performs cooperative caching among the wireless mesh routers. We explore two architecture designs for MeshCache: (1) caching at every client access mesh router upon file download, and (2) caching at each mesh router along the route the Internet-access traffic travels, which requires breaking a single end-to-end transport connection into multiple single-hop transport connections along the route. We also leverage the abundant research results from cooperative web caching in the Internet in designing cache selection protocols for efficiently locating caches containing data objects for these two architectures. We further compare these two MeshCache designs with caching at the gateway router only. Through extensive simulations and evaluations using a prototype implementation on a testbed, we find that MeshCache can significantly improve the performance of client nodes in WMNs. In particular, our experiments with a Squid-based MeshCache implementation deployed on the MAP mesh network testbed with 15 routers show that compared to caching at the gateway only, the MeshCache architecture with hop-by-hop caching reduces the load at the gateway by 38%, improves the average client throughput by 170%, and increases the number of transfers that achieve a throughput greater than 1 Mbps by a factor of 3.

15.
This paper proposes a log model based on query events: by matching query and response log records, a complete query event is recorded, and in-memory data structures improve the I/O efficiency of writing massive volumes of log data. During log analysis, a two-dimensional hash index is built over the log files and a Bloom filter is used to reduce the number of disk I/O operations, improving analysis efficiency.
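The abstract uses a Bloom filter only as a way to avoid unnecessary disk reads during log analysis; its parameters are not given. A minimal Bloom filter sketch that an analyzer could consult before reading a log file from disk follows; the bit-array size and hash count are chosen arbitrarily here.

import hashlib

class BloomFilter:
    """Minimal Bloom filter: 'definitely absent' answers let the analyzer
    skip a disk read; 'maybe present' answers still require one."""

    def __init__(self, num_bits=1 << 20, num_hashes=5):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.num_hashes):
            # derive several bit positions from slices of one digest
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

bf = BloomFilter()
bf.add("query_id:42")
assert bf.might_contain("query_id:42")   # always true for keys that were added
# bf.might_contain("query_id:999") is usually False -> the disk read is skipped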

16.
To support large-scale Video-on-Demand (VoD) services in a heterogeneous network environment, either a replication or a layering approach can be deployed to adapt to clients' bandwidth requirements. With the aid of broadcasting and caching techniques, it has been proved that the overall performance of the system can be enhanced. In this paper, we explore the impact on broadcasting schemes coupled with proxy caching and develop an analytical model to evaluate system performance in a highly heterogeneous network environment. We develop guidelines for resource allocation, transmission strategies, and caching schemes under different system configurations. The model can assist system designers to study various design options as well as perform system dimensioning. Moreover, a systematic comparison between replication and layering is performed. From the results, it can be seen that the system performance of layering is better than that of replication when the environment is highly heterogeneous, even if the layering overhead is higher than 25%. In addition, it is found that the system blocking probability can be further reduced by exploiting the broadcast capability of the network if the proxy server cannot store all the popular videos.

17.
A cache is the small, fast memory that sits between the central processing unit and main memory in the storage hierarchy of a computer system. The cache and main memory together form one level of storage, and the scheduling and transfer of information between them is handled automatically by hardware. Over the history of computing, CPU performance has grown rapidly in line with Moore's law, while the disk, serving as the system's primary store, has been held back by its mechanical nature and has fallen far behind; the result is a CPU that processes data quickly paired with a disk that reads and writes slowly, which lowers the efficiency of the whole system. Caching arose as a buffering layer inserted between the two to coordinate their data movement. A cache operates at speeds close to the CPU's, and enlarging its capacity narrows the efficiency gap and lets it respond quickly to read and write requests between the CPU and the disk, so within a reasonable range a larger cache is better. Because cache resources are scarce and valuable, the cache has become an important indicator of a computing system's performance.

18.
Active networking in environments built to support link rates up to several gigabits per second poses many challenges. One such challenge is that the memory bandwidth and individual processing power of the router's microprocessors limit the total available processing power of a router. In this article we identify and describe three components that together promise a high-performance active network solution: one that implements the key features typical of active networking, such as automatic protocol deployment and application-specific processing, and is suitable for a gigabit environment. First, we describe the hardware of the active network node (ANN), a scalable high-performance platform based on off-the-shelf CPUs connected to a gigabit ATM switch backplane. Second, we introduce the ANN's modular, extensible, and highly efficient operating system (NodeOS). Third, we describe an execution environment running on top of the NodeOS, which implements a novel large-scale active networking architecture called distributed code caching.
