Similar Articles
20 similar articles found.
1.
毛友发  杨明福 《计算机工程》2003,29(21):23-24,40
The storage area network (SAN) is among the most promising network storage technologies. RAID is widely used in high-performance storage systems, and in a SAN the RAID hardware is heterogeneous. This paper proposes a novel file RAID mechanism that integrates RAID into the file system, overcoming the shortcomings of standalone hardware and software RAID. The mechanism is adaptive and mixes multiple RAID levels, achieving good availability, scalability, reliability, dynamic adaptability, and high performance in a SAN. A fileRAID adaptive algorithm is also presented.

2.
Redundant arrays of independent disks (RAID) provide an efficient stable storage system for parallel access and fault tolerance. The most common fault tolerant RAID architecture is RAID-1 or RAID-5. The disadvantage of RAID-1 lies in excessive redundancy, while the write performance of RAID-5 is only 1/4 of that of RAID-0. In this paper, we propose a high performance and highly reliable disk array architecture, called stripped mirroring disk array (SMDA). It is a new solution to the small-write problem for disk arrays. SMDA stores the original data in two ways, one on a single disk and the other on a plurality of disks in RAID-0 by striping. The reliability of the system is as good as RAID-1, but with a high throughput approaching that of RAID-0. Because SMDA omits the parity generation procedure when writing new data, it avoids the write performance loss often experienced in RAID-5.
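For context, the 1/4 figure follows from the standard RAID-5 small-write accounting (textbook arithmetic, not specific to SMDA): a one-block update costs four disk I/Os, versus one under RAID-0.

    % Read old data and old parity, then write new data and new parity:
    P_{\text{new}} = P_{\text{old}} \oplus D_{\text{old}} \oplus D_{\text{new}}
    % 4 I/Os per small write vs. 1 for RAID-0, hence roughly 1/4 the
    % small-write throughput; SMDA avoids this by mirroring instead of parity.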

3.
Although the fault tolerance introduced by redundant disk arrays greatly improves data reliability, it also causes a loss of performance. Moreover, as the number of disks grows, the probability of a disk failure rises markedly. When a single disk fails, no data is lost and the array can still serve system requests, but it runs in an impaired, degraded mode. This paper builds a queuing model of the RAID5 redundant disk array and performs simulation calculations, proposing the concept of a performance-loss rate as a metric for evaluating the array's loss of performance. Analysis of the computed results shows that RAID5…

4.
When accessing a RAID (redundant array of inexpensive disks), the disk stripe size greatly affects the performance of the disk array. In this article, we present a performance model to analyze the effects of striping with different stripe sizes in a RAID. The model can be applied to optimize the stripe size. Compared with previous approaches, our model is simpler to apply and more accurately reveals the real performance. Both system designers and users can apply the model to support parallel I/O events. Copyright © 2000 John Wiley & Sons, Ltd.

5.
A redundant array of inexpensive disks (RAID) uses many small, cheap disks in place of a large, expensive one to obtain higher performance and lower power consumption. This paper describes eight different strategies for distributing the parity information in a RAID-5 disk array, i.e., parity placement policies, and studies them under several different application scenarios. The conclusion is that the choice of parity placement policy has a large impact on the read/write I/O performance of the disk array.
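For illustration, here is a minimal Python sketch of one classic placement policy, the left-symmetric layout with rotating parity (the paper's eight strategies are not reproduced here; this is just the textbook example):

    def left_symmetric(block, n_disks):
        """Map a logical block to (stripe, disk); parity rotates per stripe."""
        data_per_stripe = n_disks - 1
        stripe = block // data_per_stripe
        parity = (n_disks - 1 - stripe) % n_disks   # parity disk for this stripe
        # data blocks wrap around, starting just after the parity disk
        disk = (parity + 1 + block % data_per_stripe) % n_disks
        return stripe, disk, parity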

6.
王志坤  冯丹 《计算机科学》2010,37(11):295-299
Traditional disk arrays generally adopt a centralized control structure: the number of underlying disks they can attach is constrained by the system bus, which easily becomes a performance bottleneck, and they cannot tolerate the failure of two or more disks. Starting from a modular approach to system organization, this paper proposes MT2RAID, a large-scale disk array architecture built from standard modular storage units interconnected in a fat-tree topology, and analyzes and discusses the performance and reliability of its various data distributions. Prototype test results show that MT2RAID also achieves high performance compared with centralized disk array architectures.

7.
Performance of RAID5 disk arrays with read and write caching   Cited by: 1 (0 self-citations, 1 by others)
In this paper, we develop analytical models and evaluate the performance of RAID5 disk arrays in normal mode (all disks operational), in degraded mode (one disk broken, rebuild not started) and in rebuild mode (one disk broken, rebuild started but not finished). Models for estimating rebuild time under the assumption that user requests get priority over rebuild activity have also been developed. Separate models were developed for cached and uncached disk controllers. Particular emphasis is on the performance of cached arrays, where the caches are built of non-volatile memory and support write caching in addition to read caching. Using these models, we evaluate the performance of arrayed and unarrayed disk subsystems when driven by a database workload such as those seen on systems running any of several popular database managers. In particular, we assume single-block accesses, flat device skew and little seek affinity.

With the above assumptions, we find six significant results. First, in normal mode, we find there is no difference in performance between subsystems built out of either small arrays or large arrays as long as the total number of disks used is the same. Second, we find that if our goal is to minimize the average response time of a subsystem in degraded and rebuild modes, it is better to use small arrays rather than large arrays in the subsystem. Third, we find the counter-intuitive result that if our goal is to minimize the average response time of requests to any one array in the subsystem, it is better to use large arrays than small arrays in the subsystem. We call this the best worst-case phenomenon.

Fourth, we find that when no caching is used in the disk controller, subsystems built out of arrays have a normal mode performance that is significantly worse than an equivalent unarrayed subsystem built of the same drives. For the specific drive, controller, workload and system parameters we used for our calculations, we find that, without a cache in the controller and operating at typical I/O rates, the normal mode response time of a subsystem built out of arrays is 50% higher than that of an unarrayed subsystem. In rebuild mode, we find that a subsystem built out of arrays can have anywhere from 100% to 200% higher average response time than an equivalent unarrayed subsystem.

Our fifth result is that, with cached controllers, the performance differences between arrayed and equivalent unarrayed subsystems shrink considerably. We find that the normal mode response time in a subsystem built out of arrays is only 4.1% higher than that of an equivalent unarrayed system. In degraded (rebuild) mode, a subsystem built out of small arrays has a response time 11% (13%) higher and a subsystem built out of large arrays has a response time 15% (19%) higher than an unarrayed subsystem.

Our sixth and last result is that cached arrays have significantly better response times and throughputs than equivalent uncached arrays. For one workload, a cached array with good hit ratios had 5 times the throughput and 10 to 40 times lower response times than the equivalent uncached array. With poor hit ratios, the cached array is still a factor of 2 better in throughput and a factor of 4 to 10 better in response time for this same workload.

We conclude that three design decisions are important when designing disk subsystems built out of RAID level 5 arrays. First, it is important that disk subsystems built out of arrays have disk controllers with caches, in particular non-volatile caches that cache writes in addition to reads. Second, if one were trying to minimize the worst response time seen by any user, one would choose disk array subsystems built out of large RAID level 5 arrays because of the best worst-case phenomenon. Third, if average subsystem response time is the most important design metric, the subsystem should be built out of small RAID level 5 arrays.

8.
A novel technique for surviving the failure of two disks   Cited by: 3 (0 self-citations, 3 by others)
Building mass storage systems is currently one of the hottest and fastest-developing areas of computing, and online storage is the main component of such systems. RAID (disk arrays) is of great significance for improving storage system efficiency and data reliability and for preventing data corruption and service interruption. The RAID 1, RAID 0+1, RAID 4, and RAID 5 levels in practical use today can only survive the failure of a single disk, and production systems have already seen many incidents in which a double disk failure caused a prolonged service outage. After introducing the common RAID levels, this paper presents a novel diagonal parity method which, combined with horizontal parity, can survive the failure of two disks. Rigorous mathematical analysis shows that this method can greatly improve the reliability of the disk array.
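As a sketch of the general idea (in the style of the well-known EVENODD/RAID-DP constructions; the paper's exact layout is not reproduced here), with data blocks d_{i,j} in row i on disk j:

    % Horizontal parity protects each row:
    P_i = \bigoplus_{j} d_{i,j}
    % Diagonal parity protects each wrapped diagonal k:
    Q_k = \bigoplus_{(i + j) \bmod n = k} d_{i,j}
    % Two independent parity sets allow any two failed disks to be
    % rebuilt by alternating row repairs and diagonal repairs.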

9.
In disk array models, the key is how to tolerate multiple disk failures while keeping system performance optimal. This paper proposes a new class of double-erasure-correcting codes, the V-codes. In a disk array data layout using this code, the number of disks in the array may be even, the parity information is spread evenly across every disk, and any two disk failures are tolerated. Compared with other double-erasure disk array layouts, when the number of disks is even the V-code layout achieves optimal performance: its encoding/decoding complexity and redundancy rate are the lowest and its small-write performance is optimal, which helps address the disk array I/O problem.

10.
RAID5 (Redundant Arrays of Independent Disks level 5) is a popular paradigm, which uses parity to protect against single disk failures. A major shortcoming of RAID5 is the small-write penalty, i.e., the cost of updating parity when a data block is modified. Read-modify writes (RMW) and reconstruct writes (RCW) are alternative methods for updating small data and parity blocks. We use a queuing formulation to determine conditions under which one method outperforms the other. Our analysis shows that in the case of RAID6 and more generally disk arrays with k check disks tolerating k disk failures, RCW outperforms RMW for higher values of N and G. We note that clustered RAID and variable scope of parity protection methods favor reconstruct writes. A dynamic scheme to determine the more desirable policy based on the availability of appropriate cached blocks is proposed.
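For reference, the textbook disk-access counts behind the RMW/RCW choice, for a single-block update in a parity group of G disks with k check blocks (the paper's queuing model refines this static accounting with response-time effects):

    % RMW: read old data and old checks, write new data and new checks
    C_{\text{RMW}} = 2(k + 1)
    % RCW: read the other data blocks, recompute checks, write data + checks
    C_{\text{RCW}} = (G - k - 1) + (k + 1) = G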

11.
This paper designs a RAID50 model striped across multiple controller cards, along with a two-level address mapping from host virtual-volume addresses to the owning high-level array and to the stripes of the low-level arrays, so as to increase the concurrency of I/O access. Several consecutive I/O blocks are defined as an extent, equal in size to a cache page, which serves as the RAID50 transfer granularity and further improves transfer efficiency between memory and the storage devices. A hash-chain lookup algorithm based on cache descriptor control blocks and a second-chance replacement algorithm based on cache-page access-frequency counts are designed, implementing a strategy in which host data reception and read-ahead from the RAID50 storage devices proceed concurrently. The results show that the design effectively improves the I/O performance of the storage system.
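A minimal Python sketch of a second-chance (clock) replacement policy driven by access-frequency counters, in the spirit described above (the class, method names, and the halve-on-sweep rule are illustrative assumptions, not the paper's code):

    class ClockCache:
        """Second-chance page replacement keyed on access-frequency counts."""
        def __init__(self, n_frames):
            self.pages = [None] * n_frames   # cached page ids
            self.count = [0] * n_frames      # per-frame access counters
            self.hand = 0
        def access(self, page):
            if page in self.pages:           # hit: bump its counter
                self.count[self.pages.index(page)] += 1
                return True
            # miss: sweep the clock; frames with nonzero counters get a
            # second chance (counter halved) until a zero-count victim appears
            while self.count[self.hand] > 0:
                self.count[self.hand] //= 2
                self.hand = (self.hand + 1) % len(self.pages)
            self.pages[self.hand] = page
            self.count[self.hand] = 1
            self.hand = (self.hand + 1) % len(self.pages)
            return False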

12.
Distributed sparing is a method to improve the performance of RAID5 disk arrays with respect to a dedicated sparing system with N+2 disks (including the spare disk), since it utilizes the bandwidth of all N+2 disks. We analyze the performance of RAID5 with distributed sparing in normal mode, degraded mode, and rebuild mode in an OLTP environment, which implies small reads and writes. The analysis in normal mode uses an M/G/1 queuing model, which takes into account the components of disk service time. In degraded mode, a low-cost approximate method is developed to estimate the mean response time of fork-join requests resulting from accesses to recreate lost data on the failed disk. Rebuild mode performance is analyzed by considering an M/G/1 vacationing server model with multiple vacations of different types to take into account differences in processing requirements for reading the first and subsequent tracks. An iterative solution method is used to estimate the mean response time of disk requests, as well as the time to read each disk, which is shown to be quite accurate through validation against simulation results. We next compare RAID5 performance in a system (1) without a cache; (2) with a cache; and (3) with a nonvolatile storage (NVS) cache. The last configuration, in addition to improved read response time due to cache hits, provides a fast-write capability, such that dirty blocks can be destaged asynchronously and at a lower priority than read requests, resulting in an improvement in read response time. The small-write penalty is also reduced due to the possibility of repeated writes to dirty blocks in the cache and by taking advantage of disk geometry to efficiently destage multiple blocks at a time.
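For reference, the normal-mode analysis mentioned above rests on the standard M/G/1 mean-value result (Pollaczek-Khinchine); here S is the disk service time bundling seek, rotational latency, and transfer:

    % Arrival rate \lambda, utilization \rho = \lambda E[S] < 1:
    W = \frac{\lambda\, E[S^2]}{2(1 - \rho)}   % mean queuing delay
    R = W + E[S]                               % mean disk response time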

13.
Flash memory has limited erasure/program cycles. Hence, to meet their advertised capacity all the time, flash-based solid state drives (SSDs) must prolong their life span through a wear-leveling mechanism. As a very important part of the flash translation layer (FTL), wear leveling is usually implemented in SSD controllers, which is called internal wear leveling. However, there is no wear leveling among SSDs in SSD-based redundant array of independent disks (RAID) systems, making some SSDs wear out faster than others. Once an SSD fails, reconstruction must be triggered immediately, but the cost of this process is so high that both system reliability and availability are affected seriously. We therefore propose cross-SSD wear leveling (CSWL) to enhance the endurance of entire SSD-based RAID systems. Under workloads with a random access pattern, parity stripes suffer far more updates, because an update to a data stripe causes the modification of all related parity stripes. Based on this principle, we introduce an age-driven parity distribution scheme to guarantee wear leveling among flash SSDs and thereby prolong the endurance of RAID systems. Furthermore, age-driven parity distribution benefits performance by maintaining better load balance. With insignificant overhead, CSWL can significantly improve both the life span and performance of SSD-based RAID.
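A minimal sketch of the age-driven idea under simple assumptions (the function names are illustrative, not CSWL's implementation): since parity chunks absorb the most updates, each stripe's parity is steered to the currently least-worn SSD.

    def xor_chunks(chunks):
        """XOR equal-length byte chunks to form the parity chunk."""
        out = bytearray(len(chunks[0]))
        for c in chunks:
            for i, b in enumerate(c):
                out[i] ^= b
        return bytes(out)

    def place_stripe(data_chunks, erase_counts):
        """Return {ssd_index: chunk}; parity goes to the least-worn SSD."""
        n = len(data_chunks) + 1                       # stripe width
        parity_ssd = min(range(n), key=lambda i: erase_counts[i])
        data_ssds = [i for i in range(n) if i != parity_ssd]
        layout = dict(zip(data_ssds, data_chunks))
        layout[parity_ssd] = xor_chunks(data_chunks)
        return layout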

14.
This paper proposes a mass-storage RAID50 model spanning multiple array channels. Using a two-level concurrent data organization and blocking scheme, RAID0 striping across multiple array cards plus RAID5 striping with parity across the disks on each card, and using an extent (equal in size to one RAID5 parity group on an array card) as the unit of data exchange between the cache and the arrays, it aggregates the capacity of all disks in the array matrix and accesses them fully in parallel. A best-fit algorithm for the model's logical volume management and a two-level address mapping algorithm are designed. Theoretical analysis and experimental results show that this strategy minimizes I/O response time and achieves logical volume capacity and I/O performance that scale linearly with the number of array channels.
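A minimal sketch of a two-level RAID50 address mapping under simplifying assumptions (fixed-size blocks, parity rotating per stripe inside each card; a generic illustration, not the paper's best-fit volume manager):

    def map_raid50(lba, n_cards, disks_per_card):
        """Map a logical block to (card, disk, stripe, parity_disk)."""
        data_per_stripe = disks_per_card - 1              # one disk holds parity
        # Level 1: RAID0 round-robin striping across the array cards.
        card = lba % n_cards
        card_lba = lba // n_cards
        # Level 2: RAID5 inside the card, parity rotating per stripe.
        stripe = card_lba // data_per_stripe
        parity_disk = stripe % disks_per_card
        slot = card_lba % data_per_stripe
        disk = slot if slot < parity_disk else slot + 1   # skip the parity disk
        return card, disk, stripe, parity_disk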

15.
Software redundant arrays of independent disks (RAID) suffer from several hours of resynchronization time after a sudden power-off. Data blocks and a parity block in a stripe must be updated in a consistent manner. However, a data block may be updated without a parity update if power goes off. Such a partially modified stripe must be updated with a correct parity block after a reboot. It is difficult, however, to find which stripes were partially updated. The widely used traditional parity resynchronization approach entails a very long process that scans the entire volume to find and fix partially updated stripes. As a remedy to this problem, this paper presents a parity resynchronization scheme that exhibits a small overhead for a wide range of workloads, finishes parity resynchronization within several minutes, and is transparent to file systems, thanks to a new seamless block-level journaling scheme. The proposed scheme is integrated into a software RAID driver in a Linux system. A performance evaluation demonstrates that the proposed scheme shortens the resynchronization process from 200 min to 30 s with 1% overhead, compared to 51% overhead for the prior scheme.
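A minimal in-memory sketch of the core idea (a stand-in for the paper's block-level journal; names and structure are illustrative): record which stripes have writes in flight, so that after a crash only those stripes need their parity recomputed.

    class StripeJournal:
        """Track stripes with in-flight writes; only these need resync."""
        def __init__(self):
            self.dirty = set()           # in the real scheme, persisted first
        def begin_write(self, stripe):
            self.dirty.add(stripe)       # journal entry made durable here
        def end_write(self, stripe):
            self.dirty.discard(stripe)   # cleared once data+parity are durable
        def resync_after_crash(self, recompute_parity):
            for stripe in self.dirty:    # no full-volume scan needed
                recompute_parity(stripe)
            self.dirty.clear()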

16.
A genetic-algorithm-based disk load balancing method for RAID disk arrays   Cited by: 3 (0 self-citations, 3 by others)
This paper analyzes the mapping and load relationship between the logical disks and physical disks inside a RAID disk array and proposes a genetic-algorithm-based method for balancing load across the physical disks, so as to raise the throughput of the disk array. Simulation experiments show that the method is effective.
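A toy genetic-algorithm sketch of the idea (the fitness function, operators, and parameters are illustrative assumptions, not the paper's algorithm): each chromosome maps every logical disk to a physical disk, and fitness rewards an even per-disk load.

    import random

    def balance_load(loads, n_physical, pop_size=40, gens=200, mut=0.1):
        """Evolve an assignment of logical disks (given their loads) to
        physical disks that minimizes the variance of per-disk load."""
        n = len(loads)
        def fitness(assign):                        # higher = more even
            totals = [0.0] * n_physical
            for logical, phys in enumerate(assign):
                totals[phys] += loads[logical]
            mean = sum(totals) / n_physical
            return -sum((t - mean) ** 2 for t in totals)
        popu = [[random.randrange(n_physical) for _ in range(n)]
                for _ in range(pop_size)]
        for _ in range(gens):
            popu.sort(key=fitness, reverse=True)
            survivors = popu[:pop_size // 2]        # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n)        # one-point crossover
                child = a[:cut] + b[cut:]
                for i in range(n):                  # random mutation
                    if random.random() < mut:
                        child[i] = random.randrange(n_physical)
                children.append(child)
            popu = survivors + children
        return max(popu, key=fitness)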

17.
Optimizing RAID data distribution based on stripe-unit heat   Cited by: 3 (0 self-citations, 3 by others)
On a disk, zones farther from the spindle deliver higher data transfer rates, yet in traditional RAID the stripe units of a file are placed randomly and statically on the disks. To fully improve RAID I/O performance, this paper proposes PMSH (Placement and Migration based on Stripe unit Heat), a dynamic stripe-unit placement and migration strategy. Based on the access heat of each file stripe unit in the RAID, PMSH dynamically migrates frequently accessed stripe units to disk zones with higher transfer rates, optimizing file placement so that the data distribution in the RAID adapts to dynamic changes in file access rates. Simulation results show that the PMSH algorithm markedly improves the I/O performance of the whole RAID and has good practical value.
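A minimal Python sketch of one heat-driven migration pass under simple assumptions (the data structures and top-k policy are illustrative, not PMSH itself):

    def plan_migrations(heat, unit_zone, zone_speed, top_k=16):
        """Pick the hottest stripe units and, if they are not already in
        the fastest zone, schedule them to move there.
        heat: {unit: access_count}; unit_zone: {unit: zone};
        zone_speed: {zone: transfer_rate}."""
        fastest = max(zone_speed, key=zone_speed.get)
        hottest = sorted(heat, key=heat.get, reverse=True)[:top_k]
        return [(u, unit_zone[u], fastest)
                for u in hottest if unit_zone[u] != fastest]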

18.
With the arrival of the big data era, solid state drives have gradually come into use in large data centers. As the most widely used RAID technique, RAID5 has also begun to be applied to SSD arrays to guarantee data reliability. However, the parity information in RAID5 must be updated frequently, especially under random access, and these frequent parity updates severely affect the performance and lifetime of an SSD array. To address this problem, this paper proposes the PA SSD (Parity Aware Solid State Disk) controller design: it obtains the logical addresses of parity information from the RAID5 controller, adds a cache (Pcache) in the SSD controller to buffer updated parity, and lays out data and parity separately in the SSD, with a dedicated region for parity. Simulation experiments show that the proposed method effectively reduces parity-induced writes to the SSD, lowers the number of SSD erasures, and improves the performance and lifetime of the SSD array.
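A minimal sketch of the parity-buffering idea (the class, threshold, and flush policy are illustrative assumptions, not the PA SSD controller): repeated updates to the same parity page are absorbed in the cache, so only the final version reaches flash.

    class Pcache:
        """Buffer parity updates; coalesce rewrites before flushing."""
        def __init__(self, flush_threshold=64):
            self.buf = {}                     # parity_lba -> latest parity data
            self.flush_threshold = flush_threshold
        def write_parity(self, lba, data, flash_write):
            self.buf[lba] = data              # overwriting coalesces updates
            if len(self.buf) >= self.flush_threshold:
                for plba, pdata in self.buf.items():
                    flash_write(plba, pdata)  # to the dedicated parity region
                self.buf.clear()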

19.
The performance of traditional RAID Level 5 arrays is, for many applications, unacceptably poor while one of its constituent disks is non-functional. This paper describes and evaluates mechanisms by which this disk array failure-recovery performance can be improved. The two key issues addressed are the data layout, the mapping by which data and parity blocks are assigned to physical disk blocks in an array, and the reconstruction algorithm, which is the technique used to recover data that is lost when a component disk fails.

The data layout techniques this paper investigates are instantiations of the declustered parity organization, a derivative of RAID Level 5 that allows a system to trade some of its data capacity for improved failure-recovery performance. We show that our instantiations of parity declustering improve the failure-mode performance of an array significantly, and that a parity-declustered architecture is preferable to an equivalent-size multiple-group RAID Level 5 organization in environments where failure-recovery performance is important. The presented analyses also include comparisons to a RAID Level 1 (mirrored disks) approach.

With respect to reconstruction algorithms, this paper describes and briefly evaluates two alternatives, stripe-oriented reconstruction and disk-oriented reconstruction, and establishes that the latter is preferable as it provides faster reconstruction. The paper then revisits a set of previously-proposed reconstruction optimizations, evaluating their efficacy when used in conjunction with the disk-oriented algorithm. The paper concludes with a section on the reliability versus capacity trade-off that must be addressed when designing large arrays.

Portions of this material are drawn from papers at the 5th Conference on Architectural Support for Programming Languages and Operating Systems, 1992, and at the 23rd Symposium on Fault-Tolerant Computing, 1993. The work was supported by the National Science Foundation under grant number ECD-8907068, by the Defense Advanced Research Project Agency monitored by ARPA/CMO under contract MDA972-90-C-0035, and by an IBM Graduate Fellowship.
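As a minimal illustration of parity declustering (using the complete block design, the simplest balanced instantiation; the paper evaluates more space-efficient designs): each parity group of size G is placed on a different G-subset of the n disks, so the reconstruction load of a failed disk is spread evenly over all survivors.

    from itertools import combinations

    def declustered_layout(n_disks, group_size):
        """One parity group per G-subset of the n disks (complete design).
        Every pair of disks co-occurs equally often, so a failed disk's
        rebuild reads are balanced across all surviving disks."""
        return list(combinations(range(n_disks), group_size))

    # Example: 5 disks, groups of 3 -> 10 groups, each disk in 6 of them.
    layout = declustered_layout(5, 3)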

20.
With the rapid growth of application demands and the emergence of networked storage systems, heterogeneous disk arrays are becoming increasingly common. Thanks to its high performance and reliability at low cost, RAID5 is the most widely used RAID organization. Existing research on RAID5 over heterogeneous disk arrays has focused mainly on fully utilizing disk storage space and on qualitative performance studies. This paper proposes a data layout optimization method for RAID5 on heterogeneous disk arrays. The method takes full account of the relative capacity and performance of the heterogeneous disks, as well as the effect of the parity-unit distribution on RAID5 small-write performance, and can generate layouts with uniformly or near-uniformly distributed load. Simulation results show that, for multi-user small-data access patterns, the optimized layout clearly outperforms a plain RAID5 layout and scales better.
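A minimal sketch of one way to spread stripe units over heterogeneous disks in proportion to a per-disk weight (a standard weighted round-robin; the paper's optimization method, which also shapes the parity-unit distribution, is not reproduced here):

    def weighted_layout(n_units, weights):
        """Assign stripe units to disks in proportion to weights
        (e.g., normalized disk bandwidth or capacity)."""
        share = [w / sum(weights) for w in weights]
        credit = [0.0] * len(weights)
        layout = []
        for _ in range(n_units):
            for i, s in enumerate(share):
                credit[i] += s                # each disk accrues its share
            disk = max(range(len(weights)), key=lambda i: credit[i])
            credit[disk] -= 1.0               # spend one unit of credit
            layout.append(disk)
        return layout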
