Similar Literature (20 results)
1.
This entry presents the design and implementation of an I/O scheduling algorithm for RAID controllers. The main goal is to place the per-disk read and write requests issued by the RAID module into the read/write I/O queue of the corresponding disk according to the appropriate policy. Then, based on each request's priority and its read/write characteristics, the algorithm reorders requests within the queue or merges adjacent requests, thereby implementing the I/O request scheduling policy.
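The per-disk queueing just described can be sketched briefly. The following is a minimal, illustrative Python model of one disk's queue, assuming priority-ordered insertion and merging of requests that are contiguous on disk; the field names and merge rule are assumptions, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class Req:
    lba: int          # starting logical block address
    blocks: int       # request length in blocks
    priority: int     # lower value = more urgent
    is_read: bool

class DiskQueue:
    def __init__(self):
        self.pending = []

    def add(self, req: Req):
        # Try to merge with an adjacent request of the same type first.
        for q in self.pending:
            if q.is_read == req.is_read and q.lba + q.blocks == req.lba:
                q.blocks += req.blocks
                return
        self.pending.append(req)
        self.pending.sort(key=lambda r: r.priority)   # keep the queue priority-ordered

    def dispatch(self):
        return self.pending.pop(0) if self.pending else None
```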

2.
Garbage collection (GC) can significantly degrade the performance of a solid-state drive (SSD) and hence causes performance fluctuation in SSD arrays. To address this, a garbage-collection-aware RAID of SSDs (GC-RAIS) is proposed, which exploits the high random-read performance of SSDs and the hot spare SSD in the array to mitigate the negative impact of GC on the array's performance fluctuation. When an SSD in the array is performing garbage collection, read requests addressed to that SSD are serviced by reconstruction, i.e., the data are rebuilt from the other SSDs in the same stripe, while write requests addressed to it are temporarily stored on the hot spare SSD and the corresponding parity is updated. After garbage collection completes, the redirected write data are written back to the correct SSD. Simulation results show that, compared with the local garbage collection (LGC) and global garbage collection (GGC) schemes, GC-RAIS reduces the average response time of user I/O requests by 55% and 25%, respectively.
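The redirection logic described in this abstract lends itself to a short sketch. Below is a minimal Python model of the decision path, assuming a simple striped array; the `Request`/`ArrayState` names and return tags are illustrative, and parity maintenance is elided.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    target: int      # index of the SSD the request is addressed to
    addr: int
    is_read: bool
    data: bytes = b""

@dataclass
class ArrayState:
    num_ssds: int
    spare_id: int                                   # index of the hot spare SSD
    in_gc: list = field(default_factory=list)       # per-SSD "GC in progress" flags
    spare_log: list = field(default_factory=list)   # writes parked on the hot spare

def dispatch(req: Request, array: ArrayState):
    """Decide how to service a request whose target SSD may be collecting garbage."""
    if not array.in_gc[req.target]:
        return ("direct", req.target)               # normal path, no GC interference
    if req.is_read:
        peers = [i for i in range(array.num_ssds) if i != req.target]
        return ("reconstruct", peers)               # rebuild from the rest of the stripe
    # A write hitting an SSD that is running GC is parked on the hot spare;
    # the corresponding parity update is omitted in this sketch.
    array.spare_log.append((req.target, req.addr, req.data))
    return ("redirect_to_spare", array.spare_id)

def on_gc_finished(array: ArrayState, ssd: int):
    """Write redirected data back to its home SSD once GC completes."""
    array.in_gc[ssd] = False
    array.spare_log = [entry for entry in array.spare_log if entry[0] != ssd]
```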

3.
The sizes of the many files on storage devices follow a heavy-tailed distribution, the response latency of an I/O request is closely related to its size, and read and write operations on solid-state drives (SSDs) have asymmetric costs. Based on these observations, an I/O scheduling algorithm for heavy-tailed data distributions is proposed on top of the kernel NOOP scheduler. By reducing the waiting time of the large number of small requests, the algorithm improves SSD performance. Experiments show that, compared with the kernel NOOP scheduler, the average response time is reduced by 17%.
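A toy sketch of this idea follows: keep NOOP's merge-free simplicity, but let the many small requests jump ahead of the few very large ones. The size threshold and tie-breaking policy here are illustrative assumptions, not values from the paper.

```python
import heapq
from itertools import count

SMALL_REQUEST_LIMIT = 64 * 1024   # requests up to 64 KiB are treated as "small" (assumption)

class SizeAwareQueue:
    def __init__(self):
        self._heap = []
        self._seq = count()       # preserves FIFO order among requests of the same class

    def add(self, size_bytes, request):
        is_large = size_bytes > SMALL_REQUEST_LIMIT
        # Small requests sort before large ones; within a class, arrival order wins.
        heapq.heappush(self._heap, (is_large, next(self._seq), request))

    def dispatch(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

# Usage: the small read is dispatched before the earlier-but-larger write would starve it.
q = SizeAwareQueue()
q.add(4096, "small read"); q.add(1 << 20, "large write"); q.add(8192, "small write")
assert q.dispatch() == "small read"
```

A production scheduler would also bound how long a large request can be bypassed, to avoid starving it.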

4.
To effectively relieve the storage bottleneck of solid-state drives, and based on the working principles and physical characteristics of the flash chips inside an SSD, parallel scheduling is introduced into the design of the flash translation layer (FTL), and a plane-level parallel scheduling algorithm is designed and implemented. The basic idea is to split a read or write request into multiple segments and execute them in parallel across multiple planes; by balancing the I/O load more evenly, the overall read and write performance of the SSD can be improved significantly. Simulations and tests with different chip parameters show that parallel scheduling effectively increases the parallelism both between the flash chips of an SSD and among the units inside each chip, and that both read and write latencies improve considerably.
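A simplified sketch of the plane-level splitting described above: a request is cut into page-sized segments and spread round-robin over the planes of a chip. The page size and the static page-to-plane mapping are illustrative assumptions.

```python
PAGE_SIZE = 4096   # illustrative flash page size

def split_across_planes(start_addr, length, num_planes):
    """Return a list of (plane, offset_in_request, size) segments to issue in parallel."""
    segments = []
    offset = 0
    while offset < length:
        size = min(PAGE_SIZE, length - offset)
        page_index = (start_addr + offset) // PAGE_SIZE
        plane = page_index % num_planes          # simple static page-to-plane mapping
        segments.append((plane, offset, size))
        offset += size
    return segments

# Example: a 20 KiB request over 4 planes becomes 5 segments on planes 0, 1, 2, 3, 0.
print(split_across_planes(start_addr=0, length=20 * 1024, num_planes=4))
```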

5.
To address the local congestion that appears on the I/O path of a solid-state drive (SSD) storage system under highly concurrent access, a dynamic congestion control scheduler is designed and implemented. By setting a dynamic congestion threshold and handling I/O requests differentially, the scheduler significantly improves the performance of the SSD storage system and effectively alleviates local congestion on the I/O path. Experimental results show that, compared with the kernel NOOP scheduler, it reduces the average response time by about 20% and the local congestion effect by about 90%.
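A rough sketch of the "dynamic congestion threshold" idea follows: the threshold tracks a moving average of recent queue depth, and requests arriving while the device queue is deeper than the threshold are treated as congested and handled on a deferred path. All constants and names are illustrative assumptions.

```python
from collections import deque

class CongestionGate:
    def __init__(self, window=64, slack=1.5):
        self.depth_history = deque(maxlen=window)   # recent device queue depths
        self.slack = slack                          # how far above average counts as congestion

    def threshold(self):
        if not self.depth_history:
            return float("inf")                     # no history yet: never mark congestion
        return self.slack * sum(self.depth_history) / len(self.depth_history)

    def classify(self, current_depth):
        """Return 'fast' for the normal path, 'deferred' when the path looks congested."""
        self.depth_history.append(current_depth)
        return "deferred" if current_depth > self.threshold() else "fast"
```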

6.
Solid-state drives offer low access latency, shock resistance, internal parallelism, and many other desirable properties, and have been widely adopted. How to use solid-state drives to improve system performance is an active research topic. A series of workload experiments with different read/write ratios is first used to explore SSD characteristics, revealing that under large I/O request granularity, a higher proportion of read requests improves the throughput of all kinds of SSDs. Based on this finding, a read-only I/O workload separation method, RODI, is proposed: by placing read-only data appropriately, read-only workloads are separated onto suitable SSDs so as to raise the overall throughput of a heterogeneous multi-drive array. Extensive experiments show that, in heterogeneous multi-drive environments with large I/O granularity, RODI improves overall multi-drive throughput more effectively than conventional RAID techniques.

7.
Drawing on the characteristics of object-based storage, a QoS-based storage system model is proposed. The model divides migration tasks into fine-grained migration requests, so that object storage devices can continue to respond to I/O requests while carrying out data migration. The metadata server assigns benefits to I/O requests and migration requests according to the same criteria, so that object storage devices can schedule both kinds of requests with a maximum-benefit algorithm and thereby deliver higher quality of service. The online maximum-benefit scheduling algorithm is realized through benefit prediction and bandwidth reservation. Experiments show that, compared with the usual migration-first algorithm and the fixed-average-migration-rate algorithm, the maximum-benefit algorithm has the smallest impact on system I/O performance.

8.
Response time is an important performance metric of a service level objective (SLO) and is tied to resource usage: when resources are sufficient, requests execute normally and response times are short; when resources are insufficient, requests wait for resources and response times grow. In virtualized cloud environments, resource access is controlled both globally and per individual resource such as CPU or network bandwidth, but response time is rarely guaranteed by directly controlling network I/O requests. To obtain better performance, virtualization mostly adopts the para-virtualized Virtio framework; since network I/O requests travel through the Virtio shared channel, it becomes possible to place a gating mechanism for network I/O requests at Virtio. Using a two-end aggregation method (TAM), this paper proposes a gating mechanism for real-time network I/O requests (GMRNR), which guarantees the response time of each class of requests by controlling when network I/O requests pass through Virtio. GMRNR resides in the virtio-net module at the Virtio front end; it classifies requests into levels according to their response-time targets, and uses timers and aggregation queue lengths to control when requests of each level pass through Virtio and how often they are aggregated, so that response-time targets are met. Experiments show that GMRNR can distinguish the priorities of network I/O requests: when resources are sufficient, network I/O requests of different levels complete within their respective deadlines; when resources are insufficient, the response time of high-priority network I/O requests is guaranteed first. At the same time, GMRNR achieves high resource-utilization efficiency.
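A condensed sketch of the gating idea described above: each priority class has its own aggregation queue, and a batch is released either when the queue is long enough or when the class's timer expires. The class names, deadlines, and batch sizes below are illustrative assumptions, not the parameters used by GMRNR.

```python
import time
from collections import defaultdict, deque

CLASS_CONFIG = {
    "high":   {"max_wait_s": 0.001, "batch": 1},    # latency-sensitive: release almost immediately
    "medium": {"max_wait_s": 0.005, "batch": 8},
    "low":    {"max_wait_s": 0.020, "batch": 32},   # throughput-oriented: aggregate aggressively
}

class Gate:
    def __init__(self):
        self.queues = defaultdict(deque)
        self.first_enqueue = {}          # per-class timestamp of the oldest queued request

    def enqueue(self, cls, request):
        if not self.queues[cls]:
            self.first_enqueue[cls] = time.monotonic()
        self.queues[cls].append(request)

    def poll(self):
        """Return batches that may pass through the Virtio channel right now."""
        released = []
        now = time.monotonic()
        for cls in ("high", "medium", "low"):     # higher classes are checked first
            q, cfg = self.queues[cls], CLASS_CONFIG[cls]
            timed_out = q and now - self.first_enqueue[cls] >= cfg["max_wait_s"]
            if len(q) >= cfg["batch"] or timed_out:
                released.append((cls, [q.popleft() for _ in range(len(q))]))
        return released
```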

9.
Real-time process scheduling algorithms do not adequately reflect fairness during task scheduling. To address this, the Linux 2.6.11 kernel is modified and RMOSA (realtime modified O(1) scheduling algorithm), which combines fairness with real-time behavior, is proposed. The I/O queue is retained to shorten the response time of I/O requests, and priorities and time slices are computed dynamically to optimize the scheduling of general-purpose processes. Finally, a comparison of simulation results demonstrates the advantage of RMOSA over the Linux 2.6.11 O(1) scheduler.

10.
陈梅梅 《计算机科学》2016,43(8):199-203, 222
Request scheduling usually aims to minimize response time and maximize system throughput while making full use of existing server resources, but for profit-oriented e-commerce sites the key is to raise the completion rate of transaction requests and of requests issued by VIP users. Targeting the multiple objectives of e-commerce request scheduling, a revenue-driven multi-dimensional standard for classifying requests is first proposed; on this basis the concepts of request priority and scheduling priority are defined, and a multi-objective dynamic priority scheduling algorithm based on request classification, MODP, is presented. A scheduling mechanism based on a-priori overload judgment rather than load measurement is introduced to avoid control delay, helping e-commerce sites adaptively achieve differentiated service and QoS guarantees under varying load. Simulation experiments confirm the effectiveness of the MODP mechanism and algorithm; compared with conventional FCFS scheduling, MODP shows clear advantages in maximizing revenue and minimizing average response time under both high and low server load.

11.
Emerging non-volatile memory technologies, especially flash-based solid state drives (SSDs), have increasingly been adopted in the storage stack. They provide numerous advantages over traditional mechanically rotating hard disk drives (HDDs) and are poised to replace them. Because HDDs served for so long as the primary building blocks of storage systems, however, much of the system software was designed specifically for HDDs and may not be optimal for non-volatile memory media. To fully leverage the superior raw performance of these devices, the existing upper-layer software has to be re-evaluated or re-designed. To this end, in this paper, we propose PASS, an optimized I/O scheduler at the Linux block layer that accommodates the shift of underlying storage devices toward flash-based SSDs. PASS takes the rich internal parallelism of SSDs into account when dispatching requests to the device driver in order to achieve high performance. Specifically, it partitions the logical storage space into fixed-size regions (preferably the component package size) as scheduling units. These scheduling units are serviced in a round-robin manner, and in each round the chosen dispatching unit issues only a batch of either read or write requests, to suppress excessive mutual interference. Additionally, requests waiting in the dispatching queues are sorted by their target addresses to exploit the high sequential performance of SSDs. Experimental results with a variety of workloads show that PASS outperforms the four off-the-shelf Linux I/O schedulers by 3% up to 41%, while significantly improving device lifetime by reducing internal write amplification.
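The scheduling discipline the abstract describes can be sketched compactly: split the logical space into fixed-size regions, serve regions round-robin, emit a read-only or write-only batch per turn, and keep queued requests address-sorted. The region size, batch size, and direction-selection rule below are illustrative assumptions, not PASS's actual parameters.

```python
import bisect
from itertools import count

REGION_SIZE = 1 << 30      # 1 GiB regions (assumption)
BATCH = 16                 # requests dispatched per turn (assumption)
_seq = count()

class RegionScheduler:
    def __init__(self, num_regions):
        # per-region read/write queues, each kept sorted by address
        self.queues = [{"read": [], "write": []} for _ in range(num_regions)]
        self.cursor = 0
        self.num_regions = num_regions

    def add(self, address, is_read, request):
        region = (address // REGION_SIZE) % self.num_regions
        q = self.queues[region]["read" if is_read else "write"]
        bisect.insort(q, (address, next(_seq), request))   # address-sorted, FIFO on ties

    def next_batch(self):
        """Visit regions round-robin; each turn emits a read-only or write-only batch."""
        for _ in range(self.num_regions):
            region = self.queues[self.cursor]
            self.cursor = (self.cursor + 1) % self.num_regions
            direction = "read" if len(region["read"]) >= len(region["write"]) else "write"
            q = region[direction]
            if q:
                region[direction] = q[BATCH:]
                return [entry[2] for entry in q[:BATCH]]
        return []
```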

12.
The flash-based SSD is used as a tiered cache between RAM and HDD. Conventional schemes do not exploit the non-volatility of the SSD and cannot cache write requests, yet writes are a significant, often dominant, fraction of storage workloads. To cache write requests, the SSD cache must manage its data and metadata persistently and consistently, and guarantee no data loss even after a crash. Persistent cache management may require frequent metadata changes, which causes high overhead, and some researchers insist that a non-volatile persistent cache requires new primitives that commodity SSDs on the market do not support. We propose a fully persistent read/write cache that improves both read and write performance, does not require any special primitive, has low overhead, guarantees the integrity of the cache metadata and the consistency of the cached data even during a crash or power failure, and is able to recover the flash cache quickly without any data loss. We implemented the persistent read/write cache as a block device driver in Linux. Our scheme targets virtual desktop infrastructure servers, so the evaluation was performed with massive real desktop traces of five users over ten days. The evaluation shows that our scheme outperforms an LRU-based SSD cache by 50% and the read-only version of our scheme by 37%, on average, across all experiments. This paper describes most parts of the scheme in detail; detailed pseudo-code is included in the Appendix.

13.
Modern solid-state drives (SSDs) integrate more and more internal resources to achieve higher capacity. Parallelizing accesses across these internal resources can potentially enhance SSD performance; however, exploiting parallelism inside SSDs is challenging owing to real-time access conflicts. In this paper, we propose a highly parallelizable I/O scheduler (PIOS) to improve internal resource utilization in SSDs from the perspective of I/O scheduling. Specifically, we first pinpoint conflicting flash requests precisely during address translation in the flash translation layer (FTL). We then introduce conflict-eliminated requests (CERs) to reorganize the I/O requests in the device-level queue by dispatching conflicting flash requests to different CERs. Owing to the significant performance discrepancy between flash read and write operations, PIOS employs differentiated scheduling schemes for the read and write CER queues so that internal resources are always allocated to the more valuable conflicting CERs. The small-dominant-size-first scheduling policy for the write queue significantly decreases average write latency, while the high-parallelism-density-first scheduling policy for the read queue better utilizes resources by aggressively exploiting internal parallelism. Our evaluation results show that PIOS achieves better SSD performance than existing I/O schedulers implemented both inside SSD devices and in operating systems.
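One way to read the CER idea is sketched below, assuming that flash-level sub-requests are packed into groups in which no two members contend for the same flash unit, that a write CER's "dominant size" is its largest sub-request, and that a read CER's "parallelism density" is the fraction of flash units it keeps busy. These definitions are interpretations of the abstract, not the paper's exact formulas.

```python
from dataclasses import dataclass

@dataclass
class FlashOp:
    unit: tuple      # (channel, chip, plane) the sub-request must use
    size: int
    is_read: bool

def build_cers(ops):
    """Greedily pack ops into conflict-free groups (no unit appears twice in a group)."""
    cers = []
    for op in ops:
        for group in cers:
            if all(op.unit != member.unit for member in group):
                group.append(op)
                break
        else:
            cers.append([op])
    return cers

def order_write_cers(cers):
    # Smaller dominant size first: short write groups do not sit behind huge ones.
    return sorted(cers, key=lambda g: max(op.size for op in g))

def order_read_cers(cers, total_units):
    # Denser groups first: they keep more flash units busy in parallel.
    return sorted(cers, key=lambda g: len({op.unit for op in g}) / total_units, reverse=True)
```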

14.
方才华  刘景宁  童薇  高阳  雷霞  蒋瑜 《计算机应用》2017,37(5):1257-1262
Because of the inherent limitations of NAND flash, namely erase-before-write and large erase granularity, NAND-flash-based solid-state drives (SSDs) must perform garbage collection (GC) to reuse invalid pages. The high overhead of garbage collection, however, significantly degrades SSD performance and directly affects SSD lifetime; the degradation is especially severe for heavily used, fragmented SSDs. Existing GC algorithms each focus on one step of the GC process and do not offer a comprehensive scheme that considers the impact of every step on the whole. To address this, and based on a detailed analysis of the GC process, a whole-process-optimized garbage collection method, WPO-GC, is proposed, which considers as comprehensively as possible the effects of initial data placement, victim block selection, valid data migration, the timing of GC triggering, and interrupt handling on normal read/write requests and on SSD lifetime. Validation of WPO-GC on the open-source simulator SSDsim shows that, compared with typical GC algorithms, WPO-GC reduces SSD read request latency by 20%-40% and write request latency by 17%-40%, and improves wear leveling by nearly 30%.

15.
赵培  李国徽 《计算机科学》2012,39(4):287-292
Flash memory is widely used in storage systems for its small size, shock resistance, low power consumption, and fast reads. NOOP is the traditional scheduler used on flash, but its I/O performance is too low for many applications. Exploiting the fast reads of flash and the fact that multiple banks (chips) can operate in parallel, a multi-bank flash scheduling method (MBS) based on the YAFFS flash file system is proposed. MBS executes requests in parallel and gives read requests higher priority; it uses an AVL-tree-based mechanism to identify the attributes of write requests and dynamically assigns them to suitable banks. Experimental results show that, compared with NOOP, MBS achieves higher I/O throughput, shorter request response times, and more even erase counts and utilization across banks.
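A simplified sketch of the dispatch policy just described: reads always go first, and each write is steered to the bank that currently looks least busy and least worn, so load and wear stay even. The balancing metric and class names are illustrative assumptions.

```python
from collections import deque

class MultiBankScheduler:
    def __init__(self, num_banks):
        self.read_q = deque()
        self.write_q = [deque() for _ in range(num_banks)]
        self.erase_count = [0] * num_banks

    def submit_read(self, req):
        self.read_q.append(req)

    def submit_write(self, req):
        # Pick the bank with the fewest queued writes, breaking ties by erase count.
        bank = min(range(len(self.write_q)),
                   key=lambda b: (len(self.write_q[b]), self.erase_count[b]))
        self.write_q[bank].append(req)
        return bank

    def next_for_bank(self, bank):
        """Reads have priority over writes on every bank."""
        if self.read_q:
            return self.read_q.popleft()
        if self.write_q[bank]:
            return self.write_q[bank].popleft()
        return None
```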

16.
Deduplication has been commonly used in both enterprise storage systems and cloud storage. To overcome the performance challenge of selective restore operations in deduplication systems, solid-state-drive-based (i.e., SSD-based) read caches can be deployed to speed up restores by dynamically caching popular restore contents. Unfortunately, the frequent data updates induced by classical cache schemes (e.g., LRU and LFU) significantly shorten SSD lifetime while slowing down I/O in the SSD. To address this problem, we propose a new solution, LOP-Cache, which greatly improves both the write durability of the SSD and I/O performance by enlarging the proportion of long-term popular (LOP) data among the data written into the SSD-based cache. LOP-Cache keeps LOP data in the SSD cache for a long period to reduce the number of cache replacements, and it prevents unpopular or unnecessary data in deduplication containers from being written into the SSD cache. We implemented LOP-Cache in a prototype deduplication system to evaluate its performance. Our experimental results show that LOP-Cache shortens selective-restore latency by 37.3% on average at the cost of a small SSD-based cache whose capacity is only 5.56% of the deduplicated data. Importantly, LOP-Cache improves SSD lifetime by a factor of 9.77. The evidence shows that LOP-Cache offers a cost-efficient SSD-based read cache solution for boosting the performance of selective restore in deduplication systems.

17.
Solid-state drives (SSDs) have been widely used as a caching tier for disk-based RAID systems to speed up data-intensive applications. However, traditional cache schemes fail to effectively boost parity-based RAID storage systems (e.g., RAID-5/6), which have poor random-write performance due to the small-write problem. What's worse, intensive cache writes can wear out the SSD quickly, which causes performance degradation and cost increases. In this article, we present the design and implementation of KDD, an efficient SSD-based caching system that Keeps Data and Deltas in SSD. When write requests hit in the cache, KDD dispatches the data to the RAID storage without updating the parity blocks, mitigating the small-write penalty, and compactly stores the compressed deltas in the SSD to reduce cache write traffic while guaranteeing reliability in case of disk failures. In addition, KDD organizes the metadata partition on the SSD as a circular log to make the cache persistent with low overhead. We evaluate the performance of KDD via both simulations and prototype implementations. Experimental results show that KDD effectively reduces the small-write penalty while extending the lifetime of the SSD-based cache by up to 6.85 times.
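The "keep data and deltas" write path described above can be sketched as follows: on a cache write hit, the new data goes to the RAID data block without a parity update, while a compressed delta against the old cached contents is logged on the SSD for recovery. The function names, the XOR-delta encoding, and the `StubRaid` interface are assumptions made for illustration only.

```python
import zlib

def xor_delta(old: bytes, new: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(old, new))

class StubRaid:
    """Placeholder backend standing in for the RAID write paths."""
    def write_data_only(self, block_id, data): pass     # data block only, no parity update
    def write_with_parity(self, block_id, data): pass   # classic read-modify-write

class KddLikeCache:
    def __init__(self, raid):
        self.raid = raid
        self.cache = {}       # block_id -> latest data cached on the SSD
        self.delta_log = []   # compressed deltas appended to the SSD log

    def write(self, block_id, new_data):
        old_data = self.cache.get(block_id)
        if old_data is not None and len(old_data) == len(new_data):
            # Cache hit: log a compressed delta and skip the parity update for now.
            self.delta_log.append((block_id, zlib.compress(xor_delta(old_data, new_data))))
            self.raid.write_data_only(block_id, new_data)
        else:
            # Cache miss: fall back to the normal parity-updating RAID write.
            self.raid.write_with_parity(block_id, new_data)
        self.cache[block_id] = new_data
```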

18.
The flash memory solid-state disk (SSD) is emerging as a killer application for NAND flash memory thanks to its high performance and low power consumption. To attain high write performance, recent SSDs use an internal SDRAM write buffer and a parallel architecture based on interleaving techniques. In such an architecture, coarse-grained address mapping called superblock mapping is inevitably used to exploit the parallelism; however, superblock mapping performs poorly for random write requests. In this paper, we propose a novel victim-block selection policy for the write buffer that takes the parallel architecture of the SSD into account. We also propose a multi-level address mapping scheme that supports small write requests while still exploiting the parallel architecture. Experimental results show that the proposed scheme improves the I/O performance of the SSD by up to 64% compared to the existing technique.

19.
In NAND flash memory, once a page program or block erase (P/E) command is issued to a NAND flash chip, subsequent read requests have to wait until the time-consuming P/E operation completes. Preliminary results show that lengthy P/E operations increase read latency by 2× on average, and the latency increase caused by this contention may significantly degrade overall system performance. Inspired by the internal mechanism of NAND flash P/E algorithms, we propose in this paper a low-overhead P/E suspension scheme, which suspends the on-going P/E to service pending reads and resumes the suspended P/E afterwards. With reads given the highest priority, we further extend the approach by allowing writes to preempt erase operations in order to improve write latency. In our experiments, we simulate a realistic SSD model with multiple chips and channels, and evaluate both SLC and MLC NAND flash as storage media of diverse performance. Experimental results show that the proposed technique achieves near-optimal performance in servicing read requests, and write latency is significantly reduced as well. Specifically, read latency is reduced on average by 46.5% compared to RPS (Read Priority Scheduling), and when using write-suspend-erase, write latency is reduced by 13.6% relative to FIFO.
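A toy latency model of the suspension idea covers the read case described above: a read arriving while a program or erase is in flight pays only a small suspend handshake instead of waiting out the whole remaining P/E. All timing constants are illustrative assumptions, not measured values from the paper.

```python
READ_US = 50                 # assumed flash page read time
SUSPEND_OVERHEAD_US = 20     # assumed cost of the suspend handshake

def read_wait(pe_remaining_us, suspension_enabled):
    """Time a newly arrived read spends before completing on this chip."""
    if pe_remaining_us == 0:
        return READ_US                              # chip is idle
    if suspension_enabled:
        return SUSPEND_OVERHEAD_US + READ_US        # pause the P/E, serve the read, resume later
    return pe_remaining_us + READ_US                # classic behaviour: wait for the P/E to finish

# Example: a read arriving with 2 ms left on an ongoing erase.
print(read_wait(2000, suspension_enabled=True))     # 70 us
print(read_wait(2000, suspension_enabled=False))    # 2050 us
```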

20.
Given the inherent drawbacks of flash, such as erase-before-write and the asymmetric cost of read and write I/O, research on flash-oriented buffer management has important theoretical and practical value for improving the access performance of flash-based solid-state disks (SSDs). After analyzing key SSD technologies and existing buffer management algorithms, a write-data-page clustering buffer algorithm suited to SSDs is implemented. The paper describes the key techniques and principles of the algorithm in detail and implements the SSD write buffer on the FlashSim simulation platform. Comparing the simulation results with traditional buffering algorithms shows that the algorithm reduces the number of random writes to the SSD and the dispersion of data stored on it, and improves SSD response speed.
