Similar Documents
20 similar documents found (search time: 171 ms)
1.
To meet the demand for massive data storage, a distributed caching strategy for cloud storage systems based on low-power, high-performance solid-state drives is proposed. The strategy virtualizes disks of different storage media and combines the caching and storage of hot data, migrating hot data between the different media and addressing access consistency for hot metadata and dynamic load balancing among storage servers. Workload stress tests show that the strategy raises the peak read rate of the cloud storage system by up to about 86% and improves storage server throughput.

2.
操顺德, 华宇, 冯丹, 孙园园, 左鹏飞. 《软件学报》, 2017, 28(8): 1999-2009
Based on an analysis of the characteristics of video surveillance data and of traditional storage schemes, a high-performance distributed storage system is proposed. Unlike traditional file-based storage, it uses a specially designed logical volume structure in which unstructured video stream data is organized and written directly to raw disk devices, eliminating the performance degradation caused by random disk I/O and disk fragmentation in traditional schemes. Metadata is organized as a two-level index managed separately by the state manager and the storage servers, which greatly reduces the amount of metadata the state manager must handle, removes that performance bottleneck, and provides retrieval precision down to the second. In addition, a flexible storage-server grouping policy with intra-group mutual backup gives the system fault tolerance and linear scalability. System tests show that, on low-cost PC servers, a single server can record 400 concurrent 1080P video streams, with a write speed 2.5 times that of the local file system.

3.
Data today grows exponentially, and more organizations store it across multiple data centers and distributed systems. Alluxio, a memory-centric virtual distributed storage system, unifies the underlying big-data ecosystem. In remote deployments where Alluxio sits in front of an under store, network latency makes I/O speed one of the key factors affecting the service it provides. To address this, a caching policy named CPR for Alluxio remote scenarios is proposed: it uses the associations between data blocks in the storage system to guide prefetching and replacement, applies a grouping scheme to improve the utilization of the association rules, and runs a background thread to update the rule set in real time. Simulation results show that I/O performance under the CPR policy is better than with Alluxio's existing cache policies and with several other caching policies based on inter-block association rules.
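The abstract does not give CPR's data structures, so the following Python sketch is only a hedged illustration of association-rule-driven prefetching: the rule format, the RuleBasedCache name, and the 0.6 confidence cutoff are assumptions, not the paper's design.

```python
from collections import OrderedDict

class RuleBasedCache:
    """Toy LRU cache that prefetches blocks linked by association rules.

    `rules` maps a block id to a list of (associated block id, confidence);
    both the structure and the confidence cutoff are illustrative assumptions,
    not the CPR policy from the paper.
    """

    def __init__(self, capacity, rules, min_confidence=0.6):
        self.capacity = capacity
        self.rules = rules
        self.min_confidence = min_confidence
        self.cache = OrderedDict()          # block id -> data

    def _insert(self, block, data):
        if block in self.cache:
            self.cache.move_to_end(block)
            return
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[block] = data

    def get(self, block, fetch):
        """Return block data; on a miss, fetch it and prefetch associated blocks."""
        if block in self.cache:
            self.cache.move_to_end(block)
            return self.cache[block]
        data = fetch(block)                 # read from the remote/under store
        self._insert(block, data)
        for related, conf in self.rules.get(block, []):
            if conf >= self.min_confidence and related not in self.cache:
                self._insert(related, fetch(related))   # speculative prefetch
        return data

# Example: accessing block "b1" also prefetches "b2" (confidence 0.8).
cache = RuleBasedCache(capacity=4, rules={"b1": [("b2", 0.8), ("b9", 0.3)]})
print(cache.get("b1", fetch=lambda b: f"data({b})"))
```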

4.
High-performance, scalable network storage systems are increasingly important for today's data-intensive applications. Distributed RAID is widely used as a storage architecture in cluster and distributed computing environments, but its rigid data placement policy degrades read/write performance during data-consistency maintenance. Based on the ideas of distributed RAID and storage virtualization, a new block-level network storage system, VISA (virtual interface storage architecture), is proposed. VISA provides fast local and remote storage access and dynamically maps the logical block addresses of user I/O requests to physical block addresses according to the current load of each storage node and the data layout policy, achieving both load balancing and high performance. Test results show that, compared with a traditional IP-SAN built on a distributed RAID structure, VISA with this dynamic mapping policy improves sequential write performance by 78.66%-141.77% and sequential read performance by 34.89%-51.73%.
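The mapping tables VISA uses are not described in this abstract; the snippet below is only a minimal sketch of the general idea of load-aware logical-to-physical block mapping, with the node names, the load metric (outstanding I/Os), and the placement rule all assumed for illustration.

```python
class DynamicBlockMapper:
    """Map logical block addresses to (node, physical block) pairs,
    steering new mappings toward the least-loaded storage node.
    The load metric (in-flight I/O count) is an illustrative assumption."""

    def __init__(self, nodes):
        self.load = {n: 0 for n in nodes}   # node -> outstanding I/O count
        self.next_pba = {n: 0 for n in nodes}
        self.table = {}                     # logical block -> (node, pba)

    def map_lba(self, lba):
        if lba not in self.table:
            node = min(self.load, key=self.load.get)   # pick least-loaded node
            self.table[lba] = (node, self.next_pba[node])
            self.next_pba[node] += 1
        return self.table[lba]

    def submit_io(self, lba):
        node, pba = self.map_lba(lba)
        self.load[node] += 1                # track in-flight requests
        return node, pba

    def complete_io(self, node):
        self.load[node] -= 1

mapper = DynamicBlockMapper(["node-a", "node-b", "node-c"])
print(mapper.submit_io(42))   # e.g. ('node-a', 0)
```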

5.
In virtual reality environments, the demanding requirements for real-time data and system stability pose new challenges for server architecture design and optimization. To improve the storage efficiency of massive data and optimize system performance in such environments, a new distributed server architecture is proposed. It is built on the distributed coordination framework ZooKeeper, the distributed cache Redis, and MongoDB's sharding mechanism; an improved consistent hashing algorithm optimizes the Redis cache layer, and MongoDB's shard load-balancing mechanism is optimized as well. Simulation results verify the effectiveness of the architecture in virtual reality environments.

6.
Cloud storage has become a foundational technology for shared storage and data services on today's Internet, and cloud storage systems commonly use data replication to improve availability, strengthen fault tolerance, and enhance performance. A cluster-based data replication strategy for cloud storage systems is proposed that covers when to trigger replication, how many replicas to create, and where to place the resulting replicas. For replica placement, a cluster-based load-balanced placement method is designed. Simulation experiments show that the proposed placement method is feasible and performs well.

7.
NoSQL databases are widely used in distributed storage systems because they support highly concurrent reads and writes, efficient storage and access of massive data, and high scalability and availability. Based on a study of load balancing in distributed storage systems, this work proposes using a consistent hashing function to balance the system load, and increases the cache hit rate by adding virtual nodes for the cluster nodes.
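As a generic illustration of the mechanism named in this abstract (not the authors' code), here is a consistent hash ring with virtual nodes; the choice of MD5 and of 100 virtual nodes per server is arbitrary.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent hashing with virtual nodes to spread keys more evenly."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes
        self.ring = []        # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Each physical node is inserted `vnodes` times under derived names.
        for i in range(self.vnodes):
            h = self._hash(f"{node}#vn{i}")
            bisect.insort(self.ring, (h, node))

    def remove_node(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def get_node(self, key):
        """Walk clockwise to the first virtual node at or after the key's hash."""
        if not self.ring:
            raise ValueError("empty ring")
        h = self._hash(key)
        idx = bisect.bisect_right(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["cache-1", "cache-2", "cache-3"])
print(ring.get_node("user:1001"))
```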

8.
In VOD server clusters, sensible scheduling of user service requests is one of the key techniques for improving overall cluster performance. For server clusters with a shared storage architecture, this paper improves on the LoadCache-rep request scheduling algorithm and proposes a scheduling strategy based on the popularity concentration of video programs. The strategy dispatches similar playback requests to the same server to take full advantage of the server's cache while also balancing the load across servers, and migrates requests in response to real-time load changes to eliminate the impact of VCR operations on the load distribution. Simulation experiments show that the strategy effectively improves the performance of video server clusters.

9.
The Samba distributed storage system provides users with a global namespace through a root server. When a user issues an access request, the root server returns the physical target location for the logical name in a purely static way, so when multiple physical targets exist, most user requests end up directed at a single server: multiple targets guarantee service availability but contribute nothing to load balancing. To address this, a dynamic-feedback load balancing strategy based on server performance metrics is proposed, together with an implementation for the Samba distributed storage system, aiming to improve overall storage capacity, network throughput, and average service response time. Experiments show that the optimized system's I/O performance is greatly improved.

10.
A design model and performance analysis of a NAS system (cited 4 times: 0 self-citations, 4 by others)
Compared with traditional storage servers, network-attached storage has many advantages: it transfers data directly between storage devices and clients, improving overall system performance, and is a new kind of distributed storage solution. This paper analyzes the data transfer characteristics of network-attached storage and, on that basis, proposes a design model for implementing such a system. We also study the performance differences produced by two different storage schemes; the results show that a network-attached storage system can effectively reduce server load and make fuller use of resources such as the on-disk cache and the disk processor.

11.
The sharing of caches among proxies is an important technique to reduce Web traffic, alleviate network bottlenecks, and improve response time of document requests. Most existing work on cooperative caching has been focused on serving misses collaboratively. Very few have studied the effect of cooperation on document placement schemes and its potential enhancements on cache hit ratio and latency reduction. We propose a new document placement scheme which takes into account the contentions at individual caches in order to limit the replication of documents within a cache group and increase document hit ratio. The main idea of this new scheme is to view the aggregate disk space of the cache group as a global resource of the group and to use the concept of cache expiration age to measure the contention of individual caches. The decision of whether to cache a document at a proxy is made collectively among the caches that already have a copy of this document. We refer to this new document placement scheme as the Expiration Age-based scheme (EA scheme). The EA scheme effectively reduces the replication of documents across the cache group, while ensuring that a copy of the document always resides in a cache where it is likely to stay for the longest time. We report our study on the potentials and limits of the EA scheme using both analytic modeling and trace-based simulation. The analytical model compares and contrasts the existing (ad hoc) placement scheme of cooperative proxy caches with our new EA scheme and indicates that the EA scheme improves the effectiveness of aggregate disk usage, thereby increasing the average time duration for which documents stay in the cache. The trace-based simulations show that the EA scheme yields higher hit rates and better response times compared to the existing document placement schemes used in most of the caching proxies.
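To make the "cache expiration age" idea concrete, the sketch below is a loose, hedged reconstruction rather than the authors' algorithm: it estimates a cache's contention from the age of its least-recently-used entry and places a new copy at the group member where the copy is likely to survive longest. The function names and the data layout are assumptions.

```python
import time

def expiration_age(cache, now=None):
    """Approximate contention at a cache as the age of its LRU entry:
    the longer the oldest entry has sat unused, the less contended the cache.
    `cache` maps object id -> last access timestamp. An assumed proxy metric,
    not the paper's exact definition."""
    now = now if now is not None else time.time()
    if not cache:
        return float("inf")       # an empty cache is least contended
    return now - min(cache.values())

def choose_placement(caches, now=None):
    """Among cooperating caches, pick the one where a new copy
    is likely to survive longest (largest expiration age)."""
    return max(caches, key=lambda name: expiration_age(caches[name], now))

group = {
    "proxy-a": {"obj1": 100.0, "obj2": 180.0},   # recently busy cache
    "proxy-b": {"obj3": 10.0},                   # oldest entry -> low contention
}
print(choose_placement(group, now=200.0))        # -> 'proxy-b'
```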

12.
With the development of high-speed broadband access, research on streaming media has advanced rapidly and has broad application prospects. Streaming-media proxies, an important means of reducing server load and improving user response times, have become one of the hot topics in streaming research. For distributed proxy-server systems in streaming services, an optimized cache data placement strategy is proposed. Its main idea is to place cached data on the particular proxy server that minimizes the network transmission cost of future accesses to that data. Simulation experiments show that the proposed algorithm achieves lower transmission cost and better scalability than traditional cache placement algorithms.

13.
We study the use of non-volatile memory for caching in distributed file systems. This provides an advantage over traditional distributed file systems in that the load is reduced at the server without making the data vulnerable to failures. We propose the use of a small non-volatile cache for writes, at the client and the file server, together with a larger volatile read cache to keep the cost of the caches reasonable. We use a synthetic workload developed from analysis of file I/O traces from commercial production systems and use a detailed simulation of the distributed environment. The service times for the resources of the system were derived from measurements performed on a typical workstation. We show that non-volatile write caches at the clients and the file server reduce the write response time and the load on the file server dramatically, thus improving the scalability of the system. We examine the comparative benefits of two alternative writeback policies for the non-volatile write cache. We show that a proposed threshold based writeback policy is more effective than a periodic writeback policy under heavy load. We also investigate the effect of varying the write cache size and show that introducing a small non-volatile cache at the client in conjunction with a moderate sized non-volatile server write cache improves the write response time by a factor of four at all load levels.
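The abstract contrasts threshold-based and periodic writeback for the non-volatile write cache; the sketch below (class name, API, and the default threshold are assumptions) illustrates only the threshold policy: dirty blocks are staged in NVRAM and flushed to the server once their total size crosses a threshold, rather than on a timer.

```python
class NVWriteCache:
    """Illustrative non-volatile write cache with threshold-based writeback.
    Writes are acknowledged once staged in NVRAM and written back to the
    server whenever the dirty total exceeds `threshold` bytes."""

    def __init__(self, server_write, threshold=64 * 1024):
        self.server_write = server_write   # callable: flush a dict of blocks
        self.threshold = threshold
        self.dirty = {}                    # block id -> bytes
        self.dirty_bytes = 0

    def write(self, block_id, data):
        self.dirty_bytes += len(data) - len(self.dirty.get(block_id, b""))
        self.dirty[block_id] = data        # staged in NVRAM, survives a crash
        if self.dirty_bytes >= self.threshold:
            self.flush()                   # threshold writeback, not periodic

    def flush(self):
        if self.dirty:
            self.server_write(dict(self.dirty))
            self.dirty.clear()
            self.dirty_bytes = 0

cache = NVWriteCache(server_write=lambda blocks: print(f"flushing {len(blocks)} blocks"),
                     threshold=8)
cache.write("b0", b"abcd")    # below threshold, no flush yet
cache.write("b1", b"efgh")    # total reaches 8 bytes -> flush triggered
```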

14.
Key factors affecting multimedia server performance (cited 7 times: 0 self-citations, 7 by others)
When building large-scale video service systems, architectures based on hierarchical multi-server clusters have notable advantages in throughput, scalability, and cost, and are particularly well suited to Internet applications. However, to fully exploit and improve the performance of a video service system, a series of problems around the main bottlenecks (such as server disk I/O bandwidth and network bandwidth) must be solved. This paper analyzes the main factors affecting multimedia video server performance, including the video server architecture, the data transfer mode between server and clients, the distribution and placement of media data in the server's storage subsystem, the scheduling of disk access requests, the management of per-server caches and of cooperative caching across servers, admission control policies, and stream scheduling policies, all of which have a great impact on server performance and throughput. It also introduces performance optimization techniques suitable for large-scale video services, such as broadcasting and batching stream-scheduling policies. Only by considering these factors together when building a video server system can the throughput of the server, and of the whole video service system, truly be improved while satisfying clients' QoS requirements.

15.
Belloum A., Hertzberger L.O., Muller H. World Wide Web, 2001, 4(4): 255-275
Web caches are traditionally organised in a simple tree-like hierarchy. In this paper, a new architecture is proposed in which federations of caches are distributed globally, each caching data partially. The advantages of the proposed system are that contention on global caches is reduced, while the scalability of the system is improved since extra cache resources can be added on the fly. Among the other topics discussed in this paper are the scalability of the proposed system, the algorithms used to control the federation of Web caches, and the approach used to identify potential Web cache partners. In order to obtain a successful collaborative Web caching system, the formation of federations must be controlled by an algorithm that takes the dynamics of Internet traffic into consideration. We use the history of Web cache accesses to determine how federations should be formed. Initial performance results from a simulation of a number of nodes are promising.

16.
As the Internet has become a more central aspect of information technology, so have concerns about supplying enough bandwidth and serving web requests to end users within an appropriate time frame. Web caching was introduced in the 1990s to help decrease network traffic, lessen user-perceived lag, and reduce loads on origin servers by storing copies of web objects on servers closer to end users rather than forwarding all requests to the origin servers. Since web caches have limited space, they must effectively decide which objects are worth caching and which should be replaced in favor of others. This problem is known as cache replacement. We used neural networks to solve this problem and proposed the Neural Network Proxy Cache Replacement (NNPCR) method. The goal of this research is to implement NNPCR in a real environment such as the Squid proxy server. To do so, we propose an improved strategy, referred to as NNPCR-2. We show how the improved model can be trained with up to twelve times more data and gain a 5-10% increase in Correct Classification Ratio (CCR) over NNPCR. We implemented NNPCR-2 in the Squid proxy server and compared it with four other cache replacement strategies. In this paper, we use 84 times more data than NNPCR was tested against and present exhaustive test results for NNPCR-2 with different trace files and neural network structures. Our results demonstrate that NNPCR-2 makes important, balanced decisions with respect to the hit rate and byte hit rate, the two performance metrics most commonly used to measure the performance of web proxy caches.

17.
We study client-server caching of data with expiration timestamps. Although motivated by the potential for caching in telecommunication applications, our work extends to the general case of caching data that has known expiration times. Toward this end, we tailor caching algorithms to consider expiration timestamps. Next, we consider several different client-server paradigms that differ in whether and how the server updates client caches. Finally, we perform simulation studies to evaluate the empirical performance of a variety of strategies for managing a single cache independent of the server and for managing caches in a client-server setting.
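As a hedged illustration of caching data with known expiration timestamps (not the paper's specific strategies), the sketch below drops already-expired entries first and otherwise evicts the entry closest to expiry; the class name and policy details are assumptions.

```python
import time

class ExpiryAwareCache:
    """Toy cache whose replacement policy looks at expiration timestamps:
    expired entries are dropped first; otherwise the entry that will
    expire soonest is the victim. An illustrative policy, not the paper's."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}    # key -> (value, expires_at)

    def put(self, key, value, ttl, now=None):
        now = now if now is not None else time.time()
        # Drop anything already past its expiration timestamp.
        self.items = {k: v for k, v in self.items.items() if v[1] > now}
        if len(self.items) >= self.capacity and key not in self.items:
            victim = min(self.items, key=lambda k: self.items[k][1])
            del self.items[victim]         # evict the soonest-to-expire entry
        self.items[key] = (value, now + ttl)

    def get(self, key, now=None):
        now = now if now is not None else time.time()
        value, expires_at = self.items.get(key, (None, 0))
        return value if expires_at > now else None

c = ExpiryAwareCache(capacity=2)
c.put("a", 1, ttl=5, now=0)
c.put("b", 2, ttl=50, now=0)
c.put("c", 3, ttl=20, now=1)   # cache full -> evict "a" (expires first)
print(c.get("a", now=1), c.get("c", now=1))   # -> None 3
```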

18.
Packet-level caching is hard to implement in traditional network caching systems. The emergence of information-centric networking eases this difficulty, but packet-level caching still faces serious scalability problems. After analyzing several issues that currently limit packet-level caching, a group-based packet caching optimization method is proposed. The method reduces high-speed memory usage by indexing on group prefixes rather than on individual packet prefixes, and group-level popularity is also used to optimize caching decisions. A set of evaluation metrics is defined and extensive experiments are conducted to evaluate the scheme. The results show that, compared with previous packet-level caching schemes, the method greatly reduces high-speed memory usage and achieves significant improvements in server load reduction, average hop reduction, and average cache hit ratio.
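As a rough illustration of the indexing idea described above, where the fast-memory index is keyed on a content group rather than on every packet, the sketch below assumes a simple "/seq=" naming convention; the class and method names are illustrative, not from the paper.

```python
class GroupPrefixIndex:
    """Index cached packets by group prefix instead of one entry per packet.
    A content object 'video/abc' split into N packets needs a single index
    entry holding the set of cached packet numbers, rather than N entries."""

    def __init__(self):
        self.index = {}    # group prefix -> set of cached packet sequence numbers

    @staticmethod
    def split(name):
        # "video/abc/seq=17" -> ("video/abc", 17); the naming format is assumed.
        prefix, seq = name.rsplit("/seq=", 1)
        return prefix, int(seq)

    def insert(self, packet_name):
        prefix, seq = self.split(packet_name)
        self.index.setdefault(prefix, set()).add(seq)

    def lookup(self, packet_name):
        prefix, seq = self.split(packet_name)
        return seq in self.index.get(prefix, set())

idx = GroupPrefixIndex()
idx.insert("video/abc/seq=17")
print(idx.lookup("video/abc/seq=17"), len(idx.index))   # -> True 1
```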

19.
Data caching is a popular technique that improves data accessibility in wired or wireless networks. However, in mobile ad hoc networks, improvements in access latency and cache hit ratio may diminish because of the mobility and limited cache space of mobile hosts (MHs). In this paper, an improved cooperative caching scheme called group-based cooperative caching (GCC) is proposed to generalize and enhance the performance of most group-based caching schemes. GCC allows MHs and their neighbors to form a group and periodically exchange a bitmap data directory that drives the proposed algorithms for data discovery, cache placement, and cache replacement. The goal is to reduce the access latency of data requests and efficiently use the available caching space among MH groups. Two optimization techniques are also developed for GCC to reduce computation and communication overheads. The first compresses the directories using an aggregate bitmap. The second employs multi-point relays in a forwarding-node selection scheme to reduce the number of broadcast messages inside the group. Our simulation results show that the optimized GCC yields better results than existing cooperative caching schemes in terms of cache hit ratio, access latency, and average hop count.
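To illustrate the bitmap data directory that group members exchange, here is a hedged reconstruction in which each host advertises which of N known data items it caches as a bit array, and data discovery scans the neighbors' bitmaps; the item numbering and the API are assumed, not taken from the paper.

```python
class BitmapDirectory:
    """Compact advertisement of which of `num_items` data items a host caches.
    Members exchange these bitmaps periodically so a requester can find a
    nearby copy without flooding the group."""

    def __init__(self, num_items):
        self.num_items = num_items
        self.bits = bytearray((num_items + 7) // 8)

    def set_cached(self, item_id):
        self.bits[item_id // 8] |= 1 << (item_id % 8)

    def has(self, item_id):
        return bool(self.bits[item_id // 8] & (1 << (item_id % 8)))

def discover(item_id, neighbor_directories):
    """Return the ids of neighbors whose advertised bitmap contains item_id."""
    return [host for host, d in neighbor_directories.items() if d.has(item_id)]

d1, d2 = BitmapDirectory(1024), BitmapDirectory(1024)
d1.set_cached(7)
print(discover(7, {"mh-1": d1, "mh-2": d2}))   # -> ['mh-1']
```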

20.
Distributed high-speed caching uses replication strategies to improve parallel service efficiency for data-intensive services in cloud-based environments. This paper proposes a replication strategy based on tile access patterns to optimize load balancing for large-scale user access in cloud-based WebGIS. First, the strategy exploits the bias and repeatability of tile access to cache the most popular tiles, obtaining a higher cache hit rate under limited distributed cache capacity. Second, based on the uneven distribution and temporally local changes of tile access, it generates hot-tile replicas with a seat allocation scheme to allocate the distributed cache efficiently. Finally, it balances the load of tile access requests based on the spatial correlation and spatial locality in tile access patterns to achieve fast data extraction. Experimental results indicate that the proposed strategy can handle numerous WebGIS tile requests in a cloud-based environment.
