Similar Documents
20 similar documents found (search time: 140 ms)
1.
王跃清  黄烨  王翰虎  陈梅 《计算机应用》2010,30(11):2962-2964
To exploit the fast read speed of solid-state drives (SSDs) and the low storage cost of magnetic disks, a storage tiering algorithm based on page migration, SZA, was designed and implemented for a hybrid storage model in which disks and SSDs coexist. Unlike the migration-cost calculation used in NUMA, the algorithm chooses the appropriate storage medium according to the migration cost and migrates data for different workloads. Experimental results show that the algorithm effectively improves the I/O performance of the database system while greatly reducing the number of erase/write operations on the flash memory.
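The abstract does not give SZA's cost model, so the following Python sketch only illustrates the general idea of cost-driven page placement between an SSD tier and an HDD tier; the class name `TieredStore`, the benefit formula, and the per-medium migration costs are all hypothetical.

```python
# Minimal sketch of cost-driven page migration between an SSD and an HDD tier.
# The cost model and thresholds are illustrative assumptions, not the SZA formulas.

SSD, HDD = "ssd", "hdd"

class TieredStore:
    def __init__(self, migrate_cost_ssd=200, migrate_cost_hdd=50):
        self.tier = {}            # page id -> current tier
        self.reads = {}           # page id -> read count in the current window
        self.writes = {}          # page id -> write count in the current window
        self.migrate_cost = {SSD: migrate_cost_ssd, HDD: migrate_cost_hdd}

    def access(self, page, is_write):
        self.tier.setdefault(page, HDD)
        if is_write:
            self.writes[page] = self.writes.get(page, 0) + 1
        else:
            self.reads[page] = self.reads.get(page, 0) + 1

    def rebalance(self):
        """Migrate a page only when the expected benefit outweighs the migration cost."""
        for page, tier in list(self.tier.items()):
            r, w = self.reads.get(page, 0), self.writes.get(page, 0)
            # Read-heavy pages benefit from the SSD's fast reads; write-heavy pages
            # stay on the HDD to avoid flash erase/program wear.
            benefit_ssd = 10 * r - 5 * w          # assumed per-access gain/penalty
            if tier == HDD and benefit_ssd > self.migrate_cost[SSD]:
                self.tier[page] = SSD
            elif tier == SSD and -benefit_ssd > self.migrate_cost[HDD]:
                self.tier[page] = HDD
        self.reads.clear()
        self.writes.clear()
```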

2.
For I/O requests of the same type, the response time of a flash-based SSD is roughly linearly proportional to the request size, and the read and write performance of an SSD is asymmetric. Based on these characteristics, a size-based SSD I/O scheduling algorithm (SIOS) is proposed to improve SSD I/O performance in terms of average request response time. Exploiting the read/write asymmetry, read and write requests are grouped and read requests are served first; on top of this, small requests in the waiting queue are processed first, reducing the average waiting time of queued requests. Experiments on SLC and MLC SSDs, driven by five test workloads and compared against three schedulers in the Linux kernel, show that SIOS reduces the average response time by 18.4%, 25.8%, 14.9%, 14.5%, and 13.1% on the SLC SSD, and by 16.9%, 24.4%, 13.1%, 13.0%, and 13.7% on the MLC SSD. The results demonstrate that SIOS effectively reduces the average response time of I/O requests and improves the I/O performance of SSD storage systems.
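A minimal sketch of the scheduling policy described above: reads are kept apart from writes and served first, and within each group smaller requests are dispatched before larger ones. The class name `SIOSQueue` and the tie-breaking details are assumptions; the paper's exact dispatch rules are not given in the abstract.

```python
import heapq
import itertools

class SIOSQueue:
    """Size-based SSD I/O scheduler sketch: reads before writes, small before large."""

    def __init__(self):
        self._reads, self._writes = [], []
        self._seq = itertools.count()   # FIFO tie-break for equal-size requests

    def submit(self, op, size, payload=None):
        q = self._reads if op == "read" else self._writes
        heapq.heappush(q, (size, next(self._seq), op, payload))

    def dispatch(self):
        """Pop the next request: smallest pending read, otherwise smallest pending write."""
        for q in (self._reads, self._writes):
            if q:
                size, _, op, payload = heapq.heappop(q)
                return op, size, payload
        return None

# Usage: submit a mix of requests and observe the dispatch order.
sched = SIOSQueue()
sched.submit("write", 256)
sched.submit("read", 64)
sched.submit("read", 8)
while (req := sched.dispatch()) is not None:
    print(req)        # both reads (size 8, then 64) come out before the size-256 write
```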

3.
张琦  王林章  张天  邵子立 《软件学报》2014,25(2):314-325
In recent years, NAND flash has been widely used in various embedded systems. Because of the out-of-place-update constraint, flash storage requires an address mapping method that translates logical addresses from the file system into physical addresses on flash. As flash capacity keeps growing, keeping the address mapping table small in memory without sacrificing much performance has become an important problem. Demand-based page-level address mapping addresses this problem effectively, but it incurs extra operations on translation pages, which hurts system performance. Starting from the demand-based mapping method, two optimizations are proposed: first, to reduce frequent updates of translation pages, a page-level mapping cache technique unifies the granularity of mapping information kept in flash and in memory; second, a data clustering technique based on translation pages is designed, which minimizes the translation-page update overhead that each data block incurs during garbage collection. Experiments on a set of benchmark traces, compared against representative prior work, show that the optimized mapping method greatly reduces the overhead of extra translation pages and improves the performance of flash storage systems.

4.
DFTL (demand-based FTL) is a well-known flash translation layer (FTL) algorithm that dynamically loads mapping entries into a cache according to workload access patterns, but it does not consider the spatial locality of requests; evicting a single mapping entry from the cache may trigger a translation-page update, and frequent evictions cause extra erase operations. Building on DFTL, the SDFTL (sequential/second cache DFTL) algorithm is proposed. SDFTL adds a sequential cache and a second-level cache: the sequential cache prefetches mapping information to exploit the spatial locality of requests and improves the FTL's handling of sequential workloads; the second-level cache temporarily holds updated mapping entries evicted from the first-level cache and writes them back to flash in batches, reducing the number of translation-page write-backs and erase operations. Experiments with real workloads show that, compared with DFTL, SDFTL improves the cache hit ratio by 41.57% on average, reduces erase counts by 23.08% on average, and reduces response time by 17.74% on average.
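A simplified model of the two extra caches described above, assuming LRU management of the first-level cached mapping table and a fixed prefetch depth; the class `SDFTLSketch`, the sizes, and the flash stand-ins are illustrative, not the paper's implementation.

```python
from collections import OrderedDict

class SDFTLSketch:
    """Simplified model of SDFTL's caches (sizes and prefetch depth are assumptions)."""

    def __init__(self, cmt_size=64, second_size=64, prefetch=16):
        self.cmt = OrderedDict()      # first-level cached mapping table (holds updated mappings, LRU)
        self.seq_cache = {}           # prefetched mappings for sequential workloads
        self.second = {}              # updated entries evicted from the CMT, staged for batch write-back
        self.cmt_size, self.second_size, self.prefetch = cmt_size, second_size, prefetch
        self.translation_page_writes = 0

    def _load_from_flash(self, lpn):
        # Stand-in for reading a mapping entry from a translation page on flash.
        return lpn + 1000

    def lookup(self, lpn):
        for cache in (self.cmt, self.seq_cache, self.second):
            if lpn in cache:
                return cache[lpn]
        # Miss: fetch the mapping and prefetch the following entries (spatial locality).
        self.seq_cache = {l: self._load_from_flash(l) for l in range(lpn, lpn + self.prefetch)}
        return self.seq_cache[lpn]

    def update(self, lpn, new_ppn):
        self.cmt[lpn] = new_ppn
        self.cmt.move_to_end(lpn)
        if len(self.cmt) > self.cmt_size:
            old_lpn, old_ppn = self.cmt.popitem(last=False)   # evict the LRU entry
            self.second[old_lpn] = old_ppn                    # stage it instead of writing flash now
            if len(self.second) >= self.second_size:
                self.flush_second()

    def flush_second(self):
        """Batch write-back: one translation-page write absorbs many staged updates."""
        self.translation_page_writes += 1
        self.second.clear()
```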

5.
To design a flash management algorithm with high flash space utilization, a low erase count, and a low memory footprint, an improved dual-granularity address mapping algorithm based on FAST is proposed for the erase/write characteristics of NAND flash. It redefines the switch and merge operations, turning the merge of sequentially written log blocks into a switch operation. Comparison with the traditional FAST algorithm shows that the improved algorithm can roughly halve the number of erase operations and improve space utilization.

6.
Network Address Translation with Protocol Translation (NAT-PT) is one of the most important IPv6 (Internet Protocol version 6) transition mechanisms. As IPv6 adoption grows and the number of translation entries increases, higher demands are placed on the address translation speed of NAT-PT gateways. The lookup algorithm for the address mapping table is the decisive factor in NAT-PT translation speed. This paper proposes an improved Patricia-tree-based lookup algorithm for the address mapping table, which speeds up the lookup of translation entries and improves NAT-PT performance.
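The abstract only names the data structure, so the sketch below uses a plain binary trie keyed on address bits as a simplified stand-in for the Patricia tree (a real Patricia tree additionally compresses one-child paths); the class `BitTrie` and its exact-match interface are assumptions.

```python
import ipaddress

class BitTrie:
    """Minimal binary trie keyed on address bits: a simplified stand-in for the
    Patricia tree used to look up NAT-PT translation entries (exact match only)."""

    class Node:
        __slots__ = ("children", "value")
        def __init__(self):
            self.children = [None, None]
            self.value = None

    def __init__(self, bits=128):
        self.root = self.Node()
        self.bits = bits                       # IPv6 addresses are 128-bit keys

    def _path(self, addr):
        for i in range(self.bits - 1, -1, -1): # walk from the most significant bit
            yield (addr >> i) & 1

    def insert(self, v6_addr, v4_addr):
        node = self.root
        for bit in self._path(v6_addr):
            if node.children[bit] is None:
                node.children[bit] = self.Node()
            node = node.children[bit]
        node.value = v4_addr

    def lookup(self, v6_addr):
        node = self.root
        for bit in self._path(v6_addr):
            node = node.children[bit]
            if node is None:
                return None
        return node.value

# Usage: map an IPv6 address (as an int) to an IPv4 address (as an int).
t = BitTrie()
t.insert(int(ipaddress.IPv6Address("2001:db8::1")), int(ipaddress.IPv4Address("192.0.2.10")))
print(ipaddress.IPv4Address(t.lookup(int(ipaddress.IPv6Address("2001:db8::1")))))
```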

7.
Using SSD simulators to design and optimize SSD architectures and control algorithms can save a great deal of time and cost. To evaluate the functionality and performance of SSD simulators, this work studies their architectural components and models such as latency models, and then compares four typical open-source SSD simulators with real SSDs under random read/write scenarios. The experimental results show that the simulators can reproduce SSD behavior at different levels, reducing the dependence of firmware development on hardware and the number of SSD design iterations.

8.
To alleviate the storage bottleneck of SSDs, and based on the working principles and physical characteristics of the flash chips inside an SSD, parallel scheduling is introduced into the design of the flash translation layer (FTL), and a plane-level parallel scheduling algorithm is designed and implemented. The basic idea is to split a read/write request into multiple segments and execute them in parallel on multiple planes; by distributing the I/O load more evenly, the overall read/write performance of the SSD improves significantly. Simulations and tests with different chip parameters show that parallel scheduling effectively increases the parallelism both among the flash chips in an SSD and among the units inside each chip, and both read and write latencies are greatly improved.
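A minimal sketch of the request-splitting step described above: a logical request is cut into page-sized segments and striped round-robin across planes so they can be serviced in parallel. The page size, plane count, and striping rule are illustrative assumptions.

```python
# Sketch of plane-level request splitting for parallel execution.
PAGE_SIZE = 4096
NUM_PLANES = 4

def split_request(start_byte, length):
    """Return one sub-request list per plane: (plane, page_index, offset_in_page, size)."""
    per_plane = [[] for _ in range(NUM_PLANES)]
    offset = start_byte
    end = start_byte + length
    while offset < end:
        page = offset // PAGE_SIZE
        plane = page % NUM_PLANES                     # round-robin striping by page
        in_page = offset % PAGE_SIZE
        size = min(PAGE_SIZE - in_page, end - offset)
        per_plane[plane].append((plane, page, in_page, size))
        offset += size
    return per_plane

# A 20 KB request starting at byte 1024 spreads across all four planes.
for plane, segments in enumerate(split_request(1024, 20 * 1024)):
    print(plane, segments)
```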

9.
Garbage collection significantly affects SSD performance and in turn causes performance fluctuation in SSD arrays. A garbage-collection-aware RAID of SSDs (GC-RAIS) is therefore proposed, which exploits the high random-read performance of SSDs and the hot-spare drive in the array to mitigate the negative impact of garbage collection on array performance. When an SSD in the array is performing garbage collection, read requests destined for that SSD are served by reconstruction, i.e., the data is rebuilt from the other SSDs in the same stripe, while write requests to that SSD are temporarily stored on the hot spare and the corresponding parity is updated. When garbage collection finishes, the redirected write data is written back to the correct SSD. Simulation results show that, compared with the local garbage collection (LGC) and global garbage collection (GGC) policies, GC-RAIS reduces the average response time of user I/O requests by 55% and 25%, respectively.
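A minimal sketch of the GC-aware redirection logic described above, assuming a stripe whose blocks (data plus parity) XOR to zero so a block on a busy SSD can be rebuilt from the others; the class `GCAwareArray` is hypothetical and parity maintenance for redirected writes is omitted for brevity.

```python
from functools import reduce

class GCAwareArray:
    """Sketch of GC-aware read reconstruction and write redirection in an SSD array."""

    def __init__(self, ssds, spare):
        self.ssds = ssds              # list of dicts: stripe index -> block bytes (data or parity)
        self.spare = spare            # hot spare: (disk, stripe) -> redirected write data
        self.in_gc = set()            # indices of SSDs currently doing garbage collection

    def read(self, disk, stripe):
        if disk not in self.in_gc:
            return self.ssds[disk][stripe]
        # Degraded-style read: rebuild the block from the other members of the stripe,
        # assuming all blocks of a stripe XOR to zero (RAID-5-style parity).
        others = [d[stripe] for i, d in enumerate(self.ssds) if i != disk]
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), others)

    def write(self, disk, stripe, data):
        if disk in self.in_gc:
            self.spare[(disk, stripe)] = data     # park the write on the hot spare
        else:
            self.ssds[disk][stripe] = data

    def gc_finished(self, disk):
        self.in_gc.discard(disk)
        for (d, stripe), data in list(self.spare.items()):
            if d == disk:                          # drain redirected writes back home
                self.ssds[disk][stripe] = data
                del self.spare[(d, stripe)]
```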

10.
A Dynamic Load Distribution Algorithm for Distributed Database Systems
Load distribution algorithms can improve the performance of a distributed system by judiciously redistributing the workload among its nodes. In this paper, we propose a new dynamic load distribution algorithm for distributed database systems. It adaptively changes its parameters and policies according to the system load, the data distribution, and the communication cost between nodes. Simulations of a distributed database system show that the algorithm provides better stability and performance than the stable sender-initiated adaptive algorithm.

11.
Deduplication has been widely used in both enterprise storage systems and cloud storage. To overcome the performance challenge of restore operations in deduplication systems, a solid-state-drive-based (i.e., SSD-based) read cache can be deployed to speed up restores by dynamically buffering popular content. Unfortunately, the frequent data changes induced by classical caching schemes (e.g., LRU and LFU) significantly shorten SSD lifetime while slowing down I/O on the SSD. To address this problem, a new cache scheme is proposed that greatly improves both the write endurance and the I/O performance of the SSD by enlarging the proportion of long-term popular (hot) data among the data written into the SSD-based cache. The proposed cache keeps hot data in the SSD cache for long periods to reduce the number of cache replacements, and it prevents unpopular or unnecessary data in deduplication containers from being written into the SSD cache. The scheme was implemented in a prototype deduplication system to evaluate its performance. Experimental results show that it shortens the latency of restore operations by 37.3% on average, at the cost of only 5.56% of the capacity of a small SSD-based cache holding only deduplicated data. Importantly, it improves SSD lifetime by a factor of 9.77. These results show that the scheme provides a cost-efficient SSD-based read cache solution for improving restore performance in deduplication systems.
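The abstract describes admission by long-term popularity rather than a concrete algorithm; the sketch below shows one way such a gate could work, admitting a chunk into the flash cache only after its access count crosses a threshold. The class `PopularityGatedCache`, the threshold, and the eviction rule are assumptions.

```python
from collections import OrderedDict

class PopularityGatedCache:
    """Popularity-gated SSD read cache sketch: only proven-hot chunks reach flash."""

    def __init__(self, capacity=1024, admit_threshold=3):
        self.cache = OrderedDict()          # fingerprint -> chunk data held on the SSD
        self.counts = {}                    # fingerprint -> long-term access count
        self.capacity = capacity
        self.admit_threshold = admit_threshold

    def get(self, fingerprint, read_from_disk):
        self.counts[fingerprint] = self.counts.get(fingerprint, 0) + 1
        if fingerprint in self.cache:
            self.cache.move_to_end(fingerprint)
            return self.cache[fingerprint]
        chunk = read_from_disk(fingerprint)
        if self.counts[fingerprint] >= self.admit_threshold:   # admit only long-term popular chunks
            self.cache[fingerprint] = chunk
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)                  # evict the least recently used chunk
        return chunk

# Usage: only the third access to a chunk writes it into the SSD cache.
c = PopularityGatedCache(admit_threshold=3)
for _ in range(3):
    c.get("chunk-A", lambda fp: b"data-for-" + fp.encode())
print("chunk-A" in c.cache)    # True after three accesses
```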

12.
In general, NAND flash memory has advantages in low power consumption, storage capacity, and fast erase/write performance in contrast to NOR flash. However, the main drawback of NAND flash memory is its slow access time for random read operations. We therefore propose a new NAND flash memory package to overcome this major drawback, presenting a high-performance, low-power NAND flash memory system with a dual cache memory. The proposed NAND flash package consists of two parts, i.e., a NAND flash memory module and a dual cache module. The new NAND flash memory system can achieve dramatically higher performance and lower power consumption compared with any conventional NAND-type flash memory module. Our results show that the proposed system can reduce write operations into the flash memory cells by about 78% and read operations from the flash memory cells by about 70% using only an additional 3 KB of cache space. This represents a high potential for low power consumption and high performance gains.

13.
The poor performance of random writes has been a cause of major concern and needs to be addressed to better utilize the potential of flash in enterprise-scale environments. We examine one of the important causes of this poor performance: the design of the flash translation layer (FTL), which performs the virtual-to-physical address translations and hides the erase-before-write characteristics of flash. We propose a complete paradigm shift in the design of the core FTL engine from the existing techniques with our Demand-based Flash Translation Layer (DFTL), which selectively caches page-level address mappings. Our experimental evaluation using FlashSim with realistic enterprise-scale workloads endorses the utility of DFTL in enterprise-scale storage systems by demonstrating: 1) improved performance, 2) reduced garbage collection overhead, and 3) better overload behavior compared with hybrid FTL schemes, which are the most popular implementation methods. For example, a predominantly random-write-dominant I/O trace from an OLTP application running at a large financial institution shows a 78% improvement in average response time (due to a 3-fold reduction in garbage collector operations) compared with the hybrid FTL scheme. Even for the well-known read-dominant TPC-H benchmark, for which DFTL introduces additional overheads, we improve system response time by 56%. Moreover, interestingly, when the write-back cache on a DFTL-based SSD is enabled, DFTL even outperforms the page-based FTL scheme, improving response time by 72% on the Financial trace.
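A minimal model of DFTL's central idea: a cached mapping table (CMT) that loads page-level mappings on demand and pays a translation-page write only when a dirty mapping is evicted. The class `DFTLSketch`, the slot count, and the flash stand-ins are assumptions, not the paper's parameters.

```python
from collections import OrderedDict

class DFTLSketch:
    """Demand-based page-level mapping sketch: on-demand loading, LRU eviction."""

    def __init__(self, cmt_slots=4096):
        self.cmt = OrderedDict()          # lpn -> (ppn, dirty)
        self.cmt_slots = cmt_slots
        self.translation_reads = 0
        self.translation_writes = 0

    def _fetch_mapping(self, lpn):
        self.translation_reads += 1       # stand-in for reading a translation page from flash
        return lpn                        # placeholder physical page number

    def _evict_if_full(self):
        if len(self.cmt) > self.cmt_slots:
            victim_lpn, (ppn, dirty) = self.cmt.popitem(last=False)
            if dirty:                     # only dirty mappings force a translation-page update
                self.translation_writes += 1

    def translate(self, lpn):
        if lpn not in self.cmt:
            self.cmt[lpn] = (self._fetch_mapping(lpn), False)
            self._evict_if_full()
        self.cmt.move_to_end(lpn)
        return self.cmt[lpn][0]

    def update(self, lpn, new_ppn):       # called after a data page is written out of place
        self.cmt[lpn] = (new_ppn, True)
        self.cmt.move_to_end(lpn)
        self._evict_if_full()
```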

14.
With the arrival of the big-data era, solid-state drives have gradually been adopted in large data centers. As the most widely used RAID technique, RAID-5 has also begun to be applied to SSD arrays to guarantee data reliability. However, parity information in RAID-5 must be updated frequently, especially under random accesses, and these frequent parity updates seriously hurt the performance and lifetime of SSD arrays. To address this problem, a PA-SSD (Parity-Aware Solid State Disk) controller design is proposed: the SSD controller obtains the logical addresses of parity blocks from the RAID-5 controller, buffers updated parity in a cache called Pcache, and lays out data and parity separately on the SSD, with a dedicated region for parity. Simulation experiments show that the proposed method effectively reduces the parity write traffic to the SSD, reduces the number of SSD erase operations, and improves the performance and lifetime of the SSD array.
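A minimal sketch of the parity-aware write path described above: writes whose logical addresses are known to hold parity are absorbed in a small Pcache and later flushed to a dedicated parity region, so repeated parity updates coalesce before reaching flash. The class `ParityAwareSSD` and the sizes are illustrative assumptions.

```python
class ParityAwareSSD:
    """Sketch of a controller that separates parity from data and buffers parity in Pcache."""

    def __init__(self, parity_lbas, pcache_slots=1024):
        self.parity_lbas = parity_lbas        # LBAs reported by the RAID-5 controller as parity
        self.pcache = {}                      # lba -> latest parity data (absorbed updates)
        self.pcache_slots = pcache_slots
        self.data_region, self.parity_region = {}, {}
        self.flash_writes = 0

    def write(self, lba, data):
        if lba in self.parity_lbas:
            self.pcache[lba] = data           # overwrite in place; no flash write yet
            if len(self.pcache) >= self.pcache_slots:
                self.flush_parity()
        else:
            self.data_region[lba] = data
            self.flash_writes += 1

    def flush_parity(self):
        for lba, data in self.pcache.items():
            self.parity_region[lba] = data    # parity kept apart from data on flash
            self.flash_writes += 1
        self.pcache.clear()
```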

15.
A Flash Cache Algorithm Using Page Reconstruction and Data Temperature Identification
Flash-based solid-state drives (SSDs) offer far better performance than magnetic disks and are gradually replacing them in desktop systems. However, even with DRAM embedded in the SSD as a cache, flash may exhibit unstable write performance under continuous writes, mainly because logical page writes frequently trigger out-of-place updates and garbage collection. To address this problem, a new flash cache management method called PRLRU is proposed, which uses a page reconstruction mechanism together with data temperature identi…

16.
An SSD generally has a small memory, called a cache buffer, to increase its performance, and frequently accessed data are maintained in this cache buffer. These cached data must periodically be written back to the NAND flash memory to prevent data loss due to sudden power-off, and all dirty data items should be flushed immediately into non-volatile storage media (i.e., NAND flash memory) when a flush command is received; the flush command is supported in Serial ATA (SATA) and Serial Attached SCSI (SAS). Thus, the flush command is an important factor with a significant impact on SSD performance. In this paper, we investigate the impact of the flush command on SSD performance and conduct in-depth experiments with a variety of workloads, using a modified FlashSim simulator. Our performance measurements using PC and server workloads lead to several interesting conclusions. First, a cache buffer without a flush command can improve SSD performance as the cache buffer size increases, since more requested data can be handled in the cache buffer. Second, our experiments reveal that a flush command may have a negative impact on SSD performance: the average response time per request with a flush command becomes worse than without it as the cache buffer size increases. Finally, we propose a backend flushing scheme to nullify the negative performance impact of the flush command. The backend flushing scheme first writes the requested data into the cache buffer and sends the acknowledgment of request completion to the host system; it then writes back the data in the cache buffer to NAND flash memory. The proposed scheme can thus improve SSD performance, since it reduces the number of dirty data items in the cache buffer that remain to be written back to NAND flash memory. All these results suggest that a flush command can negatively affect SSD performance and that our proposed backend flushing scheme can improve SSD performance while still supporting the flush command.
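A minimal sketch of the backend flushing idea described above: a write (and a flush) is acknowledged as soon as the data sits in the cache buffer, while a background worker drains dirty entries to NAND. The class `BackendFlushingCache` and the threading structure are illustrative assumptions, not the paper's firmware design.

```python
import queue
import threading

class BackendFlushingCache:
    """Sketch: acknowledge first, write back to NAND in the background."""

    def __init__(self):
        self.buffer = {}                         # lpn -> data (cache buffer contents)
        self.dirty = queue.Queue()               # lpns waiting to be written back
        self.nand = {}                           # stand-in for NAND flash
        threading.Thread(target=self._writeback_worker, daemon=True).start()

    def write(self, lpn, data):
        self.buffer[lpn] = data
        self.dirty.put(lpn)
        return "ACK"                             # host sees completion immediately

    def flush(self):
        # A classic flush would block here until self.dirty is empty; the backend
        # scheme acknowledges right away and lets the worker continue draining.
        return "ACK"

    def _writeback_worker(self):
        while True:
            lpn = self.dirty.get()
            self.nand[lpn] = self.buffer[lpn]    # write back one dirty item
            self.dirty.task_done()
```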

17.
To boost the performance of massive data processing, solid-state drives (SSDs) have been used as a cache tier in the Hadoop system. However, most existing SSD cache management algorithms are unaware of the characteristics of upper-level applications. In this paper, we propose a novel SSD cache management algorithm called DSA, which exploits application-level data similarity to improve SSD cache performance in Hadoop. Our algorithm takes both temporal similarity and user similarity in querying behaviors into account. We evaluate the effectiveness of the proposed DSA algorithm in a small-scale Hadoop cluster. Our experimental results show that our algorithm achieves much better performance than other well-known algorithms (e.g., LRU, FIFO). We also point out the underlying tradeoff between cache performance and SSD deployment cost, and identify a number of key factors that affect SSD cache performance. Our findings provide useful guidelines on how to effectively integrate SSDs into Hadoop.

18.
The flash memory solid-state disk (SSD) is emerging as a killer application for NAND flash memory due to its high performance and low power consumption. To attain high write performance, recent SSDs use an internal SDRAM write buffer and a parallel architecture based on interleaving techniques. In such an architecture, coarse-grained address mapping called superblock mapping is inevitably used to exploit the parallelism. However, superblock mapping shows poor performance for random write requests. In this paper, we propose a novel victim block selection policy for the write buffer that takes the parallel architecture of the SSD into account. We also propose a multi-level address mapping scheme that supports small write requests while still utilizing the parallel architecture. Experimental results show that the proposed scheme improves the I/O performance of the SSD by up to 64% compared with the existing technique.

19.
唐震  吴恒  王伟  魏峻  黄涛 《软件学报》2017,28(8):1982-1998
Emerging storage media, represented by SSDs, are widely used in virtualized environments, typically as read/write caches for virtual machines to optimize disk I/O performance. Existing studies mostly focus on SSD cache capacity planning and evaluate cache allocation by read/write hit ratios; they do not sufficiently consider the upper bound of SSD service capability, which makes them ill-suited to typical distributed application scenarios, where virtual machines may contend for SSD cache resources and cause application performance violations inside the VMs. This paper implements an adaptive SSD caching system for multi-objective optimization in virtualized environments that takes the SSD service capability limit into account. An adaptive closed loop dynamically senses the state of virtual machines and applications; local SSD cache contention is detected at run time, an optimized VM placement plan is generated with a clustering method, and the order and timing of VM migrations are determined according to the global SSD cache supply. Experimental results show that, for typical distributed application scenarios, the method effectively relieves contention for SSD cache resources while satisfying the applications' VM placement requirements, improving application performance while preserving reliability. For Hadoop workloads, it reduces task execution time by 25% on average and improves the throughput of I/O-intensive applications by 39% on average; for ZooKeeper workloads, it handles VM outages caused by single-point failures of the virtualized host at a performance cost of less than 5%.

20.
Solid-state drives (SSDs) have been widely used as a caching tier for disk-based RAID systems to speed up data-intensive applications. However, traditional cache schemes fail to effectively boost parity-based RAID storage systems (e.g., RAID-5/6), which have poor random write performance due to the small-write problem. What's worse, intensive cache writes can wear out the SSD quickly, which causes performance degradation and cost increases. In this article, we present the design and implementation of KDD, an efficient SSD-based caching system that Keeps Data and Deltas in SSD. When write requests hit in the cache, KDD dispatches the data to the RAID storage without updating the parity blocks to mitigate the small-write penalty, and compactly stores the compressed deltas in the SSD to reduce cache write traffic while guaranteeing reliability in case of disk failures. In addition, KDD organizes the metadata partition on the SSD as a circular log to make the cache persistent with low overhead. We evaluate the performance of KDD via both simulations and prototype implementations. Experimental results show that KDD effectively reduces the small-write penalty while extending the lifetime of the SSD-based cache by up to 6.85 times.
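A minimal sketch of KDD's write-hit path as described above: instead of updating RAID parity, the cache stores the compressed delta (XOR) between the new block and the cached copy so that old data plus delta can recover the block after a disk failure. The class `DeltaCache`, the block size, and the in-memory structures are assumptions.

```python
import zlib

BLOCK = 4096

class DeltaCache:
    """Sketch of a write-hit path that keeps data and compressed deltas in the SSD cache."""

    def __init__(self):
        self.cached = {}        # lba -> block bytes currently cached on the SSD
        self.deltas = {}        # lba -> list of compressed deltas kept on the SSD

    def write(self, lba, new_block):
        old = self.cached.get(lba)
        if old is None:
            self.cached[lba] = new_block             # cache miss: plain insert
            return "miss"
        delta = bytes(a ^ b for a, b in zip(old, new_block))
        self.deltas.setdefault(lba, []).append(zlib.compress(delta))
        self.cached[lba] = new_block                 # data goes to RAID without a parity update
        return "hit: stored %d-byte compressed delta" % len(self.deltas[lba][-1])

# Usage: a small in-place change produces only a tiny compressed delta.
cache = DeltaCache()
base = bytes(BLOCK)
cache.write(10, base)
print(cache.write(10, base[:100] + b"x" * 8 + base[108:]))
```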
