Similar Documents
 19 similar documents found
1.
In RAID system research, a lean, efficient software stack is the key factor in the I/O performance and availability of the whole system. This paper details the design of nssRAID, a high-performance RAID system based on a structured model, and describes the implementation of its key techniques: the cache design, the state-migration mechanism, and policy-library-based management and control. nssRAID was implemented as a loadable kernel module on the x86-Linux platform, and experiments show that its I/O rate and data throughput improve considerably on current RAID systems.

2.
For the particular architecture of distributed RAID, a cache module based on bus snooping is designed. Because snooping over a shared network bus would demand too much bandwidth, the module adopts a block-mapped main-memory strategy that uses less bandwidth and fewer data operations, improving the performance of the distributed RAID system. The cache-module design is analyzed for performance, and solutions to the cache-coherence problem in multiprocessor systems are analyzed and compared.

3.
This paper presents the design and implementation of a non-volatile write cache that addresses the RAID5 small-write problem. The method carefully organizes the cache data structures and applies a transaction mechanism to every cache modification; a highly available disk-array cache is implemented on a single NVRAM chip.
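A minimal sketch in C of the transactional write path such a design implies: a redo record is made durable in NVRAM before the cached copy is touched, so a crash can never leave the cache half-updated. The log layout, function names, and the nvram_flush persistence barrier are assumptions, not the paper's actual code.

```c
/* Sketch: transactional update of a cache block held in NVRAM. */
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096

struct nv_log_record {
    uint64_t block_no;              /* which cached block is modified */
    uint8_t  new_data[BLOCK_SIZE];  /* redo image */
    uint32_t committed;             /* 0 = in flight, 1 = durable */
};

static void nvram_flush(const void *addr, size_t len)
{ (void)addr; (void)len; /* placeholder persistence barrier (assumption) */ }

/* Apply one write through the NVRAM cache transactionally: the redo
 * image becomes durable first, then the cache entry is updated, then
 * the record is retired. Recovery replays any committed record. */
void nv_cache_write(struct nv_log_record *log, uint8_t *cache_block,
                    uint64_t block_no, const uint8_t *src)
{
    log->committed = 0;
    nvram_flush(&log->committed, sizeof log->committed);

    log->block_no = block_no;
    memcpy(log->new_data, src, BLOCK_SIZE);
    nvram_flush(log, sizeof *log);          /* redo image durable first */

    log->committed = 1;                     /* commit point */
    nvram_flush(&log->committed, sizeof log->committed);

    memcpy(cache_block, src, BLOCK_SIZE);   /* now update the entry */
    nvram_flush(cache_block, BLOCK_SIZE);

    log->committed = 0;                     /* retire the record */
    nvram_flush(&log->committed, sizeof log->committed);
}
```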

4.
Research and Implementation of a New RAID-Based Cache Technique   (Cited by: 5; self: 0; other: 5)
A new RAID-based cache system is designed with the following innovations: (1) the global cache is configured with a strategy combining pre-reservation and dynamic allocation, and an allocation model determines the optimal ratio between the two theoretically; (2) a two-level read-cache structure is established, in which the first-level read cache exploits temporal locality and the second-level read cache exploits spatial locality; (3) a policy of timed migration with threshold-based eviction is proposed: recently accessed small data blocks in a second-level cache node are periodically migrated to the first-level cache, and once the migrated blocks exceed a threshold, the corresponding second-level node is evicted. Simulation tests show that the timed-migration policy captures temporal locality well and improves the performance of the cache system.
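A sketch of what the timed-migration and threshold-eviction policy from innovation (3) could look like; the node layout, reference bits, and threshold value are illustrative assumptions, not the paper's implementation.

```c
#include <stdbool.h>
#include <stddef.h>

#define SUBBLOCKS_PER_NODE 16
#define MIGRATE_THRESHOLD  12   /* evict a node once this many sub-blocks moved */

struct l2_node {
    bool recently_used[SUBBLOCKS_PER_NODE];  /* set on access, cleared each tick */
    bool migrated[SUBBLOCKS_PER_NODE];
    int  migrated_count;
    struct l2_node *next;
};

static void l1_insert_subblock(struct l2_node *n, int i)
{ (void)n; (void)i; /* placeholder: copy sub-block i into the L1 read cache */ }

static void l2_evict_node(struct l2_node *n)
{ (void)n; /* placeholder: unlink the node and reclaim its space */ }

/* Run from a periodic timer: promote recently used sub-blocks to L1
 * (temporal locality), then evict L2 nodes that are mostly promoted. */
void timed_migrate(struct l2_node *head)
{
    for (struct l2_node *n = head, *next; n; n = next) {
        next = n->next;                       /* n may be evicted below */
        for (int i = 0; i < SUBBLOCKS_PER_NODE; i++) {
            if (n->recently_used[i] && !n->migrated[i]) {
                l1_insert_subblock(n, i);
                n->migrated[i] = true;
                n->migrated_count++;
            }
            n->recently_used[i] = false;      /* reset for next interval */
        }
        if (n->migrated_count >= MIGRATE_THRESHOLD)
            l2_evict_node(n);
    }
}
```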

5.
Research on Multi-Level Cache in RAID Controllers   (Cited by: 1; self: 0; other: 1)
This paper presents a two-level cache structure for RAID controllers. Physically, the cache divides into a read cache and a write cache, and the read cache itself has two levels: a set-associative cache holding small data blocks and a fully associative spatial cache holding large blocks. Performance tests show that both the hit ratio and the hit count improve under the two-level structure.

6.
A RAID50 model striped across multiple controller cards is designed, together with a two-level address-mapping scheme from host virtual-volume addresses to the owning high-level array and then to the low-level array stripe, improving the concurrency of I/O accesses. Several consecutive I/O blocks are defined as an extended block, equal in size to a cache page and used as the RAID50 transfer granularity, further improving transfer efficiency between memory and the storage devices. A hash-chained lookup algorithm based on cache-descriptor control blocks and a second-chance replacement algorithm based on per-page access-frequency counts are designed, and a strategy is implemented in which host data reception proceeds concurrently with read-ahead from the RAID50 storage devices. Results show that the design effectively improves the I/O performance of the storage system.
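A hedged sketch of the two mechanisms named above: a hash-chained search over cache-descriptor control blocks, and a second-chance scan that spares pages whose access-frequency count is still positive. Bucket count, field names, and the decay step are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

#define HASH_BUCKETS 1024

struct cache_desc {                 /* cache-descriptor control block */
    uint64_t page_no;               /* cached page number (the key) */
    uint32_t freq;                  /* access-frequency count */
    void    *data;
    struct cache_desc *hash_next;   /* collision chain */
};

static struct cache_desc *bucket[HASH_BUCKETS];

/* Hash-chained lookup: hash the page number, walk the chain, bump the
 * frequency count on a hit. */
static struct cache_desc *lookup(uint64_t page_no)
{
    struct cache_desc *d = bucket[page_no % HASH_BUCKETS];
    for (; d; d = d->hash_next)
        if (d->page_no == page_no) { d->freq++; return d; }
    return NULL;                    /* miss: caller allocates and chains */
}

/* Second-chance replacement over a circular scan: a page with a
 * positive count is spared (its count is halved); a zero-count page
 * becomes the victim. */
static struct cache_desc *pick_victim(struct cache_desc **ring, size_t n,
                                      size_t *hand)
{
    for (;;) {
        struct cache_desc *d = ring[*hand];
        *hand = (*hand + 1) % n;
        if (d->freq == 0)
            return d;
        d->freq /= 2;               /* decay rather than evict immediately */
    }
}
```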

7.
Design-Pattern-Based Refactoring of RAID Cache Software   (Cited by: 1; self: 0; other: 1)
This paper reviews how design-pattern thinking improves the stability, adaptability, and maintainability of embedded software development, and on that basis applies the program-to-an-interface and composition principles, together with the Strategy and Factory patterns, to refactor the legacy cache-module code of an embedded RAID disk-array system.
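As a rough illustration (not the paper's code) of applying these patterns to a cache module in C: the replacement policy becomes a table of function pointers that the cache core never names concretely, and a small factory selects the strategy.

```c
#include <stdint.h>
#include <string.h>

/* Program to an interface: the cache core calls through this table. */
struct replace_ops {
    const char *name;
    void     (*touch)(uint64_t page);   /* record an access */
    uint64_t (*victim)(void);           /* choose a page to evict */
};

static void     lru_touch(uint64_t p)  { (void)p; /* move page to MRU end */ }
static uint64_t lru_victim(void)       { return 0; /* take the LRU end */ }
static void     lfu_touch(uint64_t p)  { (void)p; /* bump page's counter */ }
static uint64_t lfu_victim(void)       { return 0; /* lowest counter */ }

static const struct replace_ops lru_ops = { "lru", lru_touch, lru_victim };
static const struct replace_ops lfu_ops = { "lfu", lfu_touch, lfu_victim };

/* Simple factory analogue: pick the strategy by name at set-up time. */
const struct replace_ops *make_policy(const char *name)
{
    return strcmp(name, "lfu") == 0 ? &lfu_ops : &lru_ops;
}
```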

8.
In a cached RAID5 system, the destaging algorithm plays a very important role in overall I/O performance. This paper surveys existing destaging algorithms and proposes, for RAID5, a destage-efficiency-first algorithm based on classifying system states. Its core idea is to adjust the destaging policy dynamically according to the current cache occupancy and workload characteristics, always favoring the most efficient destage. The key implementation techniques are given, and the algorithm's performance is tested and analyzed.
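A minimal sketch of a state-classified, efficiency-first destage decision; the thresholds, state names, and use of a sequentiality measure are assumptions layered on the abstract's description.

```c
enum destage_mode {
    DESTAGE_IDLE,        /* cache mostly clean: do nothing */
    DESTAGE_BACKGROUND,  /* moderate occupancy: destage when disks are idle */
    DESTAGE_URGENT       /* cache nearly full: destage aggressively */
};

struct cache_state {
    double occupancy;    /* fraction of cache holding dirty data, 0..1 */
    double seq_ratio;    /* fraction of dirty data in sequential runs */
};

/* Classify the current state and pick a destage mode. Sequential runs
 * destage efficiently (full-stripe writes avoid the RAID5
 * read-modify-write), so a sequential load starts destaging earlier. */
enum destage_mode pick_destage_mode(const struct cache_state *s)
{
    double urgent_at = (s->seq_ratio > 0.5) ? 0.7 : 0.8;

    if (s->occupancy < 0.2)
        return DESTAGE_IDLE;
    if (s->occupancy < urgent_at)
        return DESTAGE_BACKGROUND;
    return DESTAGE_URGENT;
}
```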

9.
This paper presents a disk-cache implementation for RAID5. The implementation uses mature techniques such as set-associative mapping and LRU replacement, and adopts a write-back policy, which raises disk-write speed and reduces redundant disk writes. In addition, locking each parity group effectively prevents the data inconsistency that can arise when several blocks of the same parity group are destaged simultaneously.
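A sketch of the parity-group locking idea: the lock serializes the read-modify-write of the shared parity block, so concurrent destages of blocks in the same group cannot interleave. The group mapping and stub I/O functions are illustrative.

```c
#include <pthread.h>
#include <stdint.h>

#define PARITY_GROUPS    256
#define BLOCKS_PER_GROUP 4      /* data blocks sharing one parity block */

static pthread_mutex_t group_lock[PARITY_GROUPS];

void locks_init(void)
{
    for (int i = 0; i < PARITY_GROUPS; i++)
        pthread_mutex_init(&group_lock[i], NULL);
}

static unsigned parity_group_of(uint64_t block_no)
{
    return (unsigned)((block_no / BLOCKS_PER_GROUP) % PARITY_GROUPS);
}

static void write_block(uint64_t b, const void *d)
{ (void)b; (void)d; /* placeholder: issue the data-disk write */ }

static void update_parity(uint64_t b, const void *d)
{ (void)b; (void)d; /* placeholder: read-modify-write the parity block */ }

/* Destage one dirty block. Holding the group lock means two blocks of
 * the same parity group can never update the parity concurrently. */
void destage_block(uint64_t block_no, const void *data)
{
    pthread_mutex_t *lk = &group_lock[parity_group_of(block_no)];
    pthread_mutex_lock(lk);
    write_block(block_no, data);
    update_parity(block_no, data);
    pthread_mutex_unlock(lk);
}
```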

10.
Research and Implementation of a RAID5 Disk-Array Cache   (Cited by: 2; self: 1; other: 2)
A study of the RAID5 organization shows that a disk-array cache can reduce the write response time of the I/O subsystem and coalesce many small writes into large ones, reducing the number of redundant writes. The paper concludes with the authors' cache implementation from their work on a RAID5 controller.
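The arithmetic behind this coalescing is the standard RAID5 parity identity: a small write must pre-read the old data and old parity to compute P_new = P_old ⊕ D_old ⊕ D_new (four disk I/Os per logical write), while a full-stripe write derives parity from the buffered data alone. A minimal illustration; the block size is arbitrary:

```c
#include <stddef.h>
#include <stdint.h>

#define BLK 512

/* Small-write path: new parity from old parity, old data, new data. */
void parity_small_write(uint8_t p_new[BLK], const uint8_t p_old[BLK],
                        const uint8_t d_old[BLK], const uint8_t d_new[BLK])
{
    for (size_t i = 0; i < BLK; i++)
        p_new[i] = p_old[i] ^ d_old[i] ^ d_new[i];
}

/* Full-stripe path: parity is the XOR of all data blocks in the
 * stripe; no pre-reads are needed. */
void parity_full_stripe(uint8_t parity[BLK], const uint8_t *data[],
                        size_t ndisks)
{
    for (size_t i = 0; i < BLK; i++) {
        uint8_t x = 0;
        for (size_t d = 0; d < ndisks; d++)
            x ^= data[d][i];
        parity[i] = x;
    }
}
```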

11.
Different cache prefetch policies suit different access patterns. This paper surveys the state of cache-prefetch research for storage systems, derives a triple model of access patterns from access-pattern analysis, and tests an adaptive prefetch policy for complex environments on a disk array. The results show that the adaptive policy achieves the array's best performance across different environments.
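A sketch of an adaptive prefetch switch driven by an access-pattern triple. The abstract does not spell out the triple's components, so the fields below (sequentiality ratio, dominant stride, run length) are assumptions:

```c
#include <stdint.h>

struct access_pattern {         /* hypothetical three-tuple */
    double   seq_ratio;         /* fraction of sequential requests */
    int64_t  stride;            /* dominant inter-request stride, in blocks */
    uint32_t run_len;           /* average sequential run length */
};

enum prefetch_policy { PF_NONE, PF_SEQUENTIAL, PF_STRIDED };

/* Pick the prefetch policy that matches the observed pattern. */
enum prefetch_policy choose_policy(const struct access_pattern *p)
{
    if (p->seq_ratio > 0.8 && p->run_len >= 4)
        return PF_SEQUENTIAL;        /* read ahead along the run */
    if (p->stride != 0 && p->stride != 1 && p->seq_ratio < 0.3)
        return PF_STRIDED;           /* prefetch at the detected stride */
    return PF_NONE;                  /* random workload: prefetch hurts */
}
```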

12.
An Analysis of the Software RAID Driver Mechanism in Linux   (Cited by: 6; self: 2; other: 4)
This paper briefly introduces the meaning of RAID, its level classification, and its application in Linux, focusing on the architecture of the Linux Software RAID driver and the buffer cache mechanism, and analyzes the implementation in detail using RAID1 as an example.

13.
A Key Technical Issue in RAID System Design   (Cited by: 3; self: 0; other: 3)
This paper introduces the classification and construction of RAID systems, with emphasis on how DRAM can be used to implement the various RAID levels and the cache function. Drawing on experience gained designing a RAID system for NTC, it points out issues that require attention during development.

14.
In a disk array, RAID5 uses parity information to improve data reliability, but maintaining that parity causes the small-write problem and hurts overall system performance. Building on the AFRAID approach, this paper proposes a method for improving RAID5 performance based on a dynamic cache mark table (DCMT). While preserving the on-disk data characteristics of RAID5, the method greatly shortens response time at low cost and improves overall system performance.
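AFRAID defers parity updates and restores redundancy during idle periods; a mark table in that spirit (presumably the role the DCMT plays here, though the layout below is an assumption) could look like:

```c
#include <stdbool.h>
#include <stdint.h>

#define NSTRIPES 4096

static bool parity_stale[NSTRIPES];   /* the mark table */

static void write_data_only(uint64_t stripe, const void *d)
{ (void)stripe; (void)d; /* placeholder: write data blocks, skip parity */ }

static void rebuild_parity(uint64_t stripe)
{ (void)stripe; /* placeholder: read stripe, recompute and write parity */ }

/* Fast path: write the data, mark the stripe, and skip the parity
 * read-modify-write that causes the small-write penalty. */
void deferred_write(uint64_t stripe, const void *data)
{
    write_data_only(stripe, data);
    parity_stale[stripe] = true;      /* stripe is temporarily non-redundant */
}

/* Idle path: restore redundancy for every marked stripe. */
void idle_scrub(void)
{
    for (uint64_t s = 0; s < NSTRIPES; s++)
        if (parity_stale[s]) {
            rebuild_parity(s);
            parity_stale[s] = false;
        }
}
```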

15.
The zero-copy RAID design makes full use of scatter-gather lists to eliminate unnecessary memory copies between the modules of a RAID system and to avoid holding duplicate copies of data. Compared with a traditional RAID design, it can effectively reduce processor load and raise I/O throughput and cache hit ratio. Tests of both designs, implemented at the Key Laboratory of Information Storage Systems of the Ministry of Education, show that the zero-copy design improves I/O throughput by about 10%.
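A sketch of the zero-copy idea under stated assumptions: the request is described by a scatter-gather list whose entries alias the caller's buffer, so the modules along the path hand descriptors around instead of copying payload bytes. Names and the stripe-unit size are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

#define STRIPE_UNIT 65536       /* bytes per disk before moving on */

struct sg_entry {
    uint8_t *addr;              /* points into the original buffer */
    size_t   len;
};

/* Build a scatter-gather list over buf without copying it; every layer
 * below (cache, stripe mapper, disk driver) passes these entries along
 * until the DMA engine reads the data in place. */
size_t build_sg(struct sg_entry *out, uint8_t *buf, size_t len)
{
    size_t n = 0, off = 0;
    while (off < len) {
        size_t chunk = len - off;
        if (chunk > STRIPE_UNIT)
            chunk = STRIPE_UNIT;
        out[n].addr = buf + off;    /* alias the buffer, no memcpy */
        out[n].len  = chunk;
        n++;
        off += chunk;
    }
    return n;
}
```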

16.

One way to increase storage density is to use shingled magnetic recording (SMR) disks. We propose a novel use of SMR disks in RAID (redundant array of independent disks) arrays, building upon and compared against a basic RAID 4 arrangement. The proposed scheme, called RAID 4SMR, has the potential to improve the performance of a traditional RAID 4 array with SMR disks. Our evaluation shows that, compared with standard RAID 4 using update-in-place, RAID 4SMR with garbage collection not only allows the adoption of SMR disks with a reduced performance penalty but also offers a performance improvement of up to 56%.


17.
刘朝斌  吴非 《计算机工程》2006,32(12):45-46,49
Building on an analysis of the characteristics of driver design under embedded Linux, a new RAID algorithm for improving storage I/O performance is proposed, and the corresponding device driver and cache-management techniques are implemented. Testing and performance analysis of a RAID5 storage subsystem demonstrate that the method is effective and feasible.

18.
赵昕  戚文芽  廖军 《计算机工程》2007,33(1):259-261
In the context of disk storage-array applications, a RAID system based on the Intel 80321 I/O processor is implemented. The paper exploits the hardware features of the I/O processor's application accelerator unit and, on the software side, uses efficient read- and write-cache management policies to optimize RAID5 system performance. Results from live operation show that the optimization achieves the expected effect.

19.
Performance of RAID5 disk arrays with read and write caching   (Cited by: 1; self: 0; other: 1)
In this paper, we develop analytical models and evaluate the performance of RAID5 disk arrays in normal mode (all disks operational), in degraded mode (one disk broken, rebuild not started) and in rebuild mode (one disk broken, rebuild started but not finished). Models for estimating rebuild time under the assumption that user requests get priority over rebuild activity have also been developed. Separate models were developed for cached and uncached disk controllers. Particular emphasis is on the performance of cached arrays, where the caches are built of non-volatile memory and support write caching in addition to read caching. Using these models, we evaluate the performance of arrayed and unarrayed disk subsystems when driven by a database workload such as those seen on systems running any of several popular database managers. In particular, we assume single-block accesses, flat device skew and little seek affinity.

With the above assumptions, we find six significant results. First, in normal mode, we find there is no difference in performance between subsystems built out of either small arrays or large arrays as long as the total number of disks used is the same. Second, we find that if our goal is to minimize the average response time of a subsystem in degraded and rebuild modes, it is better to use small arrays rather than large arrays in the subsystem. Third, we find the counter-intuitive result that if our goal is to minimize the average response time of requests to any one array in the subsystem, it is better to use large arrays than small arrays in the subsystem. We call this the best worst-case phenomenon.

Fourth, we find that when no caching is used in the disk controller, subsystems built out of arrays have a normal-mode performance that is significantly worse than an equivalent unarrayed subsystem built of the same drives. For the specific drive, controller, workload and system parameters we used for our calculations, we find that, without a cache in the controller and operating at typical I/O rates, the normal-mode response time of a subsystem built out of arrays is 50% higher than that of an unarrayed subsystem. In rebuild mode, we find that a subsystem built out of arrays can have anywhere from 100% to 200% higher average response time than an equivalent unarrayed subsystem.

Our fifth result is that, with cached controllers, the performance differences between arrayed and equivalent unarrayed subsystems shrink considerably. We find that the normal-mode response time in a subsystem built out of arrays is only 4.1% higher than that of an equivalent unarrayed system. In degraded (rebuild) mode, a subsystem built out of small arrays has a response time 11% (13%) higher and a subsystem built out of large arrays has a response time 15% (19%) higher than an unarrayed subsystem.

Our sixth and last result is that cached arrays have significantly better response times and throughputs than equivalent uncached arrays. For one workload, a cached array with good hit ratios had 5 times the throughput and 10 to 40 times lower response times than the equivalent uncached array. With poor hit ratios, the cached array is still a factor of 2 better in throughput and a factor of 4 to 10 better in response time for this same workload.

We conclude that three design decisions are important when designing disk subsystems built out of RAID level 5 arrays. First, it is important that disk subsystems built out of arrays have disk controllers with caches, in particular non-volatile caches that cache writes in addition to reads. Second, if one were trying to minimize the worst response time seen by any user, one would choose disk array subsystems built out of large RAID level 5 arrays because of the best worst-case phenomenon. Third, if average subsystem response time is the most important design metric, the subsystem should be built out of small RAID level 5 arrays.
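A back-of-envelope illustration (not the paper's analytical model) of why degraded-mode response times rise: with one of n disks failed, 1/n of reads target the lost disk, and each such read fans out to n-1 reads on the survivors:

```c
#include <stdio.h>

/* Expected physical disk reads per logical read in degraded mode:
 * (1 - 1/n) * 1  +  (1/n) * (n - 1)  =  2(n-1)/n. */
double degraded_read_cost(int n)
{
    double miss = 1.0 / n;                 /* read hits the failed disk */
    return (1.0 - miss) * 1.0 + miss * (n - 1);
}

int main(void)
{
    for (int n = 4; n <= 16; n *= 2)
        printf("n=%2d: %.2f reads per logical read\n",
               n, degraded_read_cost(n));  /* 1.50, 1.75, 1.88 */
    return 0;
}
```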
