Similar Literature
20 similar documents found (search time: 15 ms)
1.
The flash memory solid-state disk (SSD) is emerging as a killer application for NAND flash memory due to its high performance and low power consumption. To attain high write performance, recent SSDs use an internal SDRAM write buffer and a parallel architecture based on interleaving techniques. In such an architecture, a coarse-grained address mapping called superblock mapping is inevitably used to exploit the parallelism. However, superblock mapping shows poor performance for random write requests. In this paper, we propose a novel victim block selection policy for the write buffer that takes the SSD's parallel architecture into account. We also propose a multi-level address mapping scheme that supports small write requests while still exploiting the parallel architecture. Experimental results show that the proposed scheme improves the I/O performance of the SSD by up to 64% compared to the existing technique.
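As a rough illustration of the idea (not the authors' code), the sketch below picks the write-buffer victim whose pages fill a single superblock as completely as possible, so that one eviction becomes one near-full parallel write; the constant PAGES_PER_SUPERBLOCK and all names are assumptions.

    # Hypothetical superblock-aware victim selection for an SSD write buffer.
    from collections import defaultdict

    PAGES_PER_SUPERBLOCK = 8   # assumed: pages striped across 8 parallel chips

    def superblock_id(lpn):
        # Pages that would be flushed to the same group of parallel blocks.
        return lpn // PAGES_PER_SUPERBLOCK

    def pick_victim(buffer_lpns):
        """Pick the superblock with the most buffered pages, so one eviction
        produces a near-full parallel write across all channels."""
        counts = defaultdict(int)
        for lpn in buffer_lpns:
            counts[superblock_id(lpn)] += 1
        victim = max(counts, key=counts.get)
        return [lpn for lpn in buffer_lpns if superblock_id(lpn) == victim]

    # Example: pages 0-3 belong to superblock 0, page 17 to superblock 2.
    print(pick_victim([0, 1, 2, 3, 17]))   # -> [0, 1, 2, 3]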

2.
The existing NAND flash memory file systems have not taken multiple NAND flash memories into account for large-capacity storage. In addition, since large-capacity NAND flash memory is much more expensive than a hard disk drive of the same capacity, it is not cost-effective to build large-capacity flash drives. To resolve these problems, this paper proposes a new file system called NAFS for large-capacity storage built from multiple small-capacity, low-cost NAND flash memories. It adopts a new cache policy, mount scheme, and garbage collection scheme in order to improve read and write performance, reduce the mount time, and improve wear-leveling effectiveness. Our performance results show that NAFS is more suitable for large-capacity storage than conventional NAND file systems such as YAFFS2 and JFFS2, as well as a disk-based Linux file system configuration such as HDD-RAID5-EXT3, in terms of read and write transfer rates (thanks to its double cache policy) and mount time (thanks to metadata stored on a separate partition). We also demonstrate that the wear-leveling effectiveness of NAFS can be improved by our adaptive garbage collection scheme.

3.
Due to the rapid development of flash memory technology, NAND flash has been widely used as a storage device in portable embedded systems, personal computers, and enterprise systems. However, flash memory is prone to performance degradation because of the long latency of flash program and erase operations. One common technique for hiding the long program latency is to use a temporary buffer to hold write data. Although DRAM is often used to implement this buffer because of its high performance and low bit cost, it is volatile; thus, data may be lost on a power failure in the storage system. As a solution to this issue, recent operating systems frequently issue flush commands to force storage devices to permanently move data from the buffer into the non-volatile area. However, excessive use of flush commands may worsen the write performance of the storage system. In this paper, we propose two data loss recovery techniques that require fewer write operations to flash memory. These techniques eliminate unnecessary flash writes by storing storage metadata together with user data in the spare area associated with each data page.
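A minimal sketch of the general technique, assuming a 4 KiB page and a small slice of the per-page spare (OOB) area: the logical page number and a sequence number are programmed together with the user data, so the mapping can be rebuilt after power loss by scanning the spare areas instead of flushing separate metadata pages. The layout and field sizes are illustrative, not the paper's format.

    # Write mapping metadata alongside user data in one page program;
    # recovery rebuilds the logical-to-physical mapping from the spare areas.
    import struct

    PAGE_SIZE = 4096

    def program_page(flash, ppn, lpn, seq, data):
        spare = struct.pack("<IQ", lpn, seq)          # logical page no. + sequence no.
        flash[ppn] = (data.ljust(PAGE_SIZE, b"\0"), spare)

    def recover_mapping(flash):
        mapping, latest = {}, {}
        for ppn, (_, spare) in flash.items():
            lpn, seq = struct.unpack("<IQ", spare)
            if seq >= latest.get(lpn, -1):            # newest copy wins
                latest[lpn], mapping[lpn] = seq, ppn
        return mapping

    flash = {}
    program_page(flash, ppn=0, lpn=7, seq=1, data=b"old")
    program_page(flash, ppn=1, lpn=7, seq=2, data=b"new")
    print(recover_mapping(flash))                     # -> {7: 1}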

4.
NAND flash memory is a promising storage medium that provides low power consumption, high density, high performance, and shock resistance. Due to these versatile features, NAND flash memory is anticipated to be used as storage in enterprise-scale systems as well as small embedded devices. However, unlike traditional hard disks, flash memory must perform garbage collection, which consists of a series of erase operations. The erase operation is time-consuming and usually degrades storage system performance seriously. Moreover, the number of erase operations allowed on each flash memory block is limited. This paper presents a new garbage collection scheme for flash memory based storage systems that focuses on reducing garbage collection overhead and improving the endurance of flash memory. The scheme also reduces the energy consumption of storage systems significantly. Trace-driven simulations show that the proposed scheme performs better than various existing garbage collection schemes in terms of garbage collection time, the number of erase operations, energy consumption, and the endurance of flash memory.
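The abstract does not spell out the policy, so the sketch below shows a classic cost-benefit victim selector of the kind such schemes are usually compared against: blocks with little valid data that have been cold for a long time are reclaimed first. The page count, field names, and scoring formula are assumptions, not the paper's algorithm.

    # Cost-benefit garbage-collection victim selection (a common baseline).
    import time

    def select_victim(blocks, pages_per_block=64, now=None):
        now = now or time.time()
        def score(b):
            u = b["valid_pages"] / pages_per_block          # utilization
            age = now - b["last_modified"]
            return (1.0 - u) / (1.0 + u) * age              # benefit / cost
        return max(blocks, key=score)

    blocks = [
        {"id": 0, "valid_pages": 60, "last_modified": 100.0},
        {"id": 1, "valid_pages": 5,  "last_modified": 50.0},
    ]
    print(select_victim(blocks, now=200.0)["id"])            # -> 1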

5.
In NAND flash memory, once a page program or block erase (P/E) command is issued to a NAND flash chip, subsequent read requests have to wait for the time-consuming P/E operation to complete. Preliminary results show that the lengthy P/E operations increase the read latency by 2× on average. This increased read latency caused by the contention may significantly degrade the overall system performance. Inspired by the internal mechanism of NAND flash P/E algorithms, we propose in this paper a low-overhead P/E suspension scheme, which suspends the on-going P/E to service pending reads and resumes the suspended P/E afterwards. With reads given the highest priority, we further extend our approach by allowing writes to preempt erase operations in order to improve write latency. In our experiments, we simulate a realistic SSD model with multiple chips and channels and evaluate both SLC and MLC NAND flash, which differ widely in performance. Experimental results show that the proposed technique achieves near-optimal performance in servicing read requests, and write latency is significantly reduced as well. Specifically, the read latency is reduced on average by 46.5% compared to RPS (Read Priority Scheduling), and with write-suspend-erase the write latency is reduced by 13.6% relative to FIFO.
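To make the latency argument concrete, here is a back-of-the-envelope model (the timing values are assumptions, not measurements from the paper): without suspension a read arriving mid-P/E waits out the remaining program/erase time, while with suspension it only pays a small suspend/resume overhead.

    def read_latency(pe_remaining_us, read_us=50, suspend_overhead_us=5, suspension=True):
        """Latency seen by a read that arrives while a P/E operation is in flight.
        Without suspension the read waits out the whole remaining P/E time; with
        suspension it only pays a small suspend/resume overhead."""
        if not suspension:
            return pe_remaining_us + read_us
        return suspend_overhead_us + read_us

    print(read_latency(1300, suspension=False))  # -> 1350 (read stuck behind a ~1.3 ms program)
    print(read_latency(1300, suspension=True))   # -> 55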

6.
Existing SSD technology exploits the properties of NAND flash and pairs it with a controller running FTL algorithms to improve system performance. On the one hand, in this black-box structure, data semantic information is difficult to transfer and interpret through conventional interfaces. Hence, SSD firmware cannot exploit semantic information to realize the SSD's full performance potential. Moreover, the host cannot obtain the SSD's physical characteristics and statistical information, so this information cannot be used by the file system or by I/O scheduling algorithms designed for disks. On the other hand, in SSD-based storage systems, persistent data are stored in NAND flash but manipulated in DRAM, which decouples the two and causes inefficiency. The data closest to the processors are the easiest to lose because DRAM is volatile, leading to serious data reliability problems. Furthermore, the restrictive read/program granularity and out-of-place updates limit performance, and flash suffers under small-sized operations. To address these problems, we propose a user-visible solid-state storage system with software-defined fusion methods for PCM and NAND flash. PCM is used to improve data reliability and reduce the write amplification of NAND flash, because PCM offers outstanding features such as in-place updates, byte addressability, non-volatility, and better endurance. In this system, we manage the storage device as a user-visible structure rather than a black box. Specifically, we expose the number of channels, erase counts, and data distribution of the PCM/NAND flash to the host, and we design the FTL algorithm closer to the file system so that it can obtain more semantic information about data accesses. PCM can be software-defined either as same-level storage or as a buffer for NAND flash to reduce the write amplification (WA) of NAND flash and improve data reliability. Moreover, key software components (such as the FTL, I/O scheduling, and buffer management) are reconfigurable and can easily be operated in combination with the physical characteristics. To achieve these design goals, we implement a Host Fusion Storage Layer (HFSL) and redesign the lengthy I/O path. Applications or the file system can access PCM/flash directly via the interfaces provided by HFSL, without passing through the traditional I/O subsystem. We also provide system management software so that the storage system can easily be software-defined by the upper-level system. We implement the software-defined fusion storage system on an actual hardware prototype, and extensive experimental results demonstrate the efficiency of the proposed schemes.

7.
NAND flash-based storage devices (NFSDs) are widely employed owing to their superior characteristics compared to hard disk drives. However, NAND flash memory (NFM) still exhibits drawbacks, such as a limited lifetime and an erase-before-write requirement. Along with effective software management, the implementation of a cache buffer is one of the most common solutions to overcome these limitations. However, the read/write performance becomes saturated, primarily because the eviction overhead caused by limited DRAM capacity significantly impacts overall NFSD performance. This paper therefore proposes a method that hides the eviction overhead and overcomes the saturation of read/write performance. The proposed method exploits the new intra-request idle time (IRIT) in the NFSD and employs a new data management scheme. In addition, a new pre-store eviction scheme writes dirty pages from the cache to the NFMs in advance. This reduces the eviction overhead by maintaining a sufficient number of clean pages in the cache. Further, a new pre-load insertion scheme improves read performance by loading frequently read data into the cache in advance. Unlike previous methods with large migration overhead, our scheme does not cause any eviction/insertion overhead because it exploits the IRIT to its advantage. We verified the effectiveness of our method by integrating it into two cache management strategies, which were then compared. Our proposed method reduced read latency by 43% in read-intensive traces, reduced write latency by 40% in write-intensive traces, and reduced read/write latency by 21% and 20%, respectively, on average, compared to an NFSD with a conventional write cache buffer.

8.
NAND flash memory-based solid state drives (SSDs) have many merits compared to traditional hard disk drives (HDDs). However, random writes within an SSD are still far slower than sequential reads/writes and random reads. There are two independent approaches for resolving this problem: (1) over-provisioning, so that a reserved portion of the physical memory space can be used, for example, as log blocks for performance enhancement; and (2) an internal write buffer (DRAM or non-volatile RAM) within the SSD. While log blocks are managed by the Flash Translation Layer (FTL), write buffer management has been treated separately from the FTL. Write buffer management schemes have not used the exact status of the log blocks, and log block management schemes in the FTL have not considered the behavior of the write buffer. This paper first demonstrates that log blocks and write buffers maintain a tight relationship, which necessitates managing them together. Since log blocks can also be viewed as another type of write buffer, we can manage both of them as an integrated write buffer. We then propose an Integrated Write buffer Management scheme (IWM), which collectively manages both the write buffer and the log blocks. The proposed scheme greatly outperforms previous schemes in terms of write amplification, block erase count, and execution time.
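A hypothetical sketch of what an "integrated" eviction decision could look like: the buffer prefers to evict a data block whose pages still fit into an already-open log block, so the eviction neither allocates a new log block nor forces a merge. All structures, names, and the fallback rule are illustrative, not the IWM algorithm itself.

    def choose_eviction(buffered, open_log_blocks):
        """buffered: dict data_block -> list of dirty page offsets in DRAM.
        open_log_blocks: dict data_block -> free page slots left in its log block."""
        candidates = [blk for blk in buffered
                      if open_log_blocks.get(blk, 0) >= len(buffered[blk])]
        if candidates:   # fits into an existing log block: cheap eviction
            return max(candidates, key=lambda blk: len(buffered[blk]))
        return max(buffered, key=lambda blk: len(buffered[blk]))  # fallback

    buffered = {3: [0, 1, 2], 8: [5]}
    open_log_blocks = {3: 4}            # data block 3 already has an open log block
    print(choose_eviction(buffered, open_log_blocks))   # -> 3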

9.
The limited lifespan is the Achilles' heel of solid state drives (SSDs) based on NAND flash. NAND flash has two drawbacks that degrade an SSD's lifespan: one is out-of-place updates, and the other is the sequential write constraint within a block. SSDs usually employ a write buffer to extend their lifetime. However, existing write buffer schemes pay attention only to the first drawback and neglect the second. We propose a hetero-buffer architecture that covers both aspects simultaneously. The hetero-buffer consists of two components: dynamic random access memory (DRAM) and the reorder area. The DRAM endeavors to reduce write traffic as much as possible by pursuing a higher hit ratio (overcoming the first drawback). The reorder area focuses on reordering the write sequence (overcoming the second drawback). Our hetero-buffer outperforms traditional write buffers for two reasons. First, the DRAM can adopt existing superior cache replacement policies and thus achieves a higher hit ratio. Second, the hetero-buffer reorders the write sequence, which has not been exploited by traditional write buffers. Beyond these optimizations, the hetero-buffer also considers the working environment of the write buffer, which is likewise neglected by traditional write buffers, and is thereby further improved. The performance is evaluated via trace-driven simulations. Experimental results show that SSDs employing the hetero-buffer achieve a longer lifespan on most workloads.
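The reorder area's job can be sketched in a few lines (the block geometry is an assumption): pages leaving DRAM are regrouped so that, within each flash block, they are programmed in ascending page-offset order, which satisfies the sequential-write constraint.

    # Reorder evicted pages so writes inside each block are sequential.
    PAGES_PER_BLOCK = 64

    def reorder(evicted_lpns):
        return sorted(evicted_lpns,
                      key=lambda lpn: (lpn // PAGES_PER_BLOCK, lpn % PAGES_PER_BLOCK))

    print(reorder([130, 3, 1, 129, 68]))   # -> [1, 3, 68, 129, 130]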

10.
Solid state disks (SSDs) are becoming one of the mainstream storage devices due to their salient features, such as high read performance and low power consumption. In order to obtain high write performance and extend flash lifespan, SSDs use an internal DRAM to buffer frequently rewritten data and so reduce the number of program operations on the flash. However, existing buffer management algorithms fail to leverage data access patterns to predict data attributes. In many real-world workloads, most large sequential write requests are rarely rewritten in the near future. When such write requests arrive, much hot data is evicted from DRAM into flash memory, jeopardizing overall system performance. To address this problem, we propose a novel large write data identification scheme, called Prober. This scheme probes for large sequential write sequences among the write streams at an early stage to prevent them from residing in the buffer. Meanwhile, to further free space and reduce the waiting time for incoming requests, we temporarily buffer the large data in DRAM when the buffer has free space and use an active write-back scheme for large sequential write data when the flash array becomes idle. Experimental results demonstrate that our schemes improve the hit ratio of write requests by up to 10%, decrease the average response time by up to 42%, and reduce the number of erase operations by up to 11% compared with state-of-the-art buffer replacement algorithms.
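A small sketch of the detection idea, with an assumed contiguity threshold and illustrative names: a write stream is flagged as a large sequential sequence once enough back-to-back sectors have been seen, and such writes can then bypass, or be quickly drained from, the DRAM buffer.

    SEQ_THRESHOLD = 64          # sectors of contiguity before we call it "large" (assumed)

    class SeqDetector:
        def __init__(self):
            self.expected_lba = None
            self.run_length = 0

        def is_large_sequential(self, lba, length):
            if lba == self.expected_lba:
                self.run_length += length     # continues the previous request
            else:
                self.run_length = length      # a new stream starts here
            self.expected_lba = lba + length
            return self.run_length >= SEQ_THRESHOLD

    det = SeqDetector()
    for start in range(0, 256, 32):                 # a 256-sector sequential stream
        print(det.is_large_sequential(start, 32))   # False, then True, True, ...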

11.
In general, NAND flash memory has advantages over NOR flash in power consumption, storage capacity, and erase/write performance. However, its main drawback is the slow access time of random read operations. Therefore, we propose a new NAND flash memory package that overcomes this drawback. We present a high-performance, low-power NAND flash memory system with a dual cache memory. The proposed NAND flash package consists of two parts: a NAND flash memory module and a dual cache module. The new NAND flash memory system can achieve dramatically higher performance and lower power consumption compared with any conventional NAND-type flash memory module. Our results show that the proposed system can eliminate about 78% of write operations to the flash memory cells and about 70% of read operations from the flash memory cells by using only an additional 3 KB of cache space. This demonstrates a high potential for low power consumption and high performance gains.

12.
NAND flash memory has become the major storage medium in mobile devices, such as smartphones. However, the random write operations of NAND flash memory heavily affect I/O performance, seriously degrading application performance on mobile devices. The main reason for slow random writes is the out-of-place update behavior of NAND flash memory. Newly emerged non-volatile memories, such as phase-change memory and spin-transfer torque RAM, support in-place updates and offer much better I/O performance than flash memory. These features make non-volatile memory (NVM) a promising solution for improving the random write performance of NAND flash memory. In this paper, we propose a non-volatile memory for random access (NVMRA) scheme that utilizes NVM to improve I/O performance in mobile devices. NVMRA exploits the I/O behaviors of applications to improve the random write performance of each application. Based on an application's I/O behavior, such as random-write-dominant behavior, NVMRA makes different data placement decisions. The scheme is evaluated on a real Android 4.2 platform. The experimental results show that the proposed scheme can effectively improve I/O performance and reduce I/O energy consumption for mobile devices. Copyright © 2015 John Wiley & Sons, Ltd.

13.
Many mobile devices demand a large-capacity, high-performance storage system in order to store, retrieve, and process large multimedia data quickly. In this paper, we present a high-performance NAND flash-based storage system built on a multi-channel architecture. The proposed system consists of multiple independent channels, where each channel has multiple NAND flash memory chips. On this hardware, we investigate three optimization techniques to exploit I/O parallelism: striping, interleaving, and pipelining. By combining all the optimization techniques carefully, our system achieves 3.6 times higher overall performance than a conventional single-channel architecture.
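The address-splitting part of such a design can be illustrated as follows (the geometry constants are assumptions): consecutive logical pages are spread over channels first (striping) and then over chips within a channel (interleaving), so independent hardware units can work in parallel.

    NUM_CHANNELS = 4
    CHIPS_PER_CHANNEL = 2

    def place(lpn):
        channel = lpn % NUM_CHANNELS                      # striping across channels
        chip = (lpn // NUM_CHANNELS) % CHIPS_PER_CHANNEL  # interleaving within a channel
        offset = lpn // (NUM_CHANNELS * CHIPS_PER_CHANNEL)
        return channel, chip, offset

    for lpn in range(8):
        print(lpn, place(lpn))
    # lpn 0..3 hit channels 0..3 on chip 0; lpn 4..7 hit channels 0..3 on chip 1.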

14.
This paper presents the design of a NAND flash-based solid state disk (SSD) that can support the various storage access patterns commonly observed in a PC environment. It is based on a hybrid model of high-performance SLC (single-level cell) NAND and low-cost MLC (multi-level cell) NAND flash memories. Typically, SLC NAND has a higher transfer rate and greater cell endurance than MLC NAND flash memory. MLC NAND, on the other hand, benefits from lower price and higher capacity. In order to achieve higher performance than traditional SSDs, an interleaving technique that places NAND flash chips in parallel is essential. However, using the traditional FTL (flash translation layer) on an SSD with only MLC NAND chips is inefficient because the size of a logical block becomes large as the mapping address unit grows. In this paper, we propose an HFTL (hybrid flash translation layer) that makes use of chained blocks, combining SLC and MLC NAND flash memories in parallel. Experimental results show that, for most of the traces studied, the HFTL in an SSD configuration composed of 80% MLC NAND and 20% SLC NAND can improve performance compared to other SSD configurations composed of either SLC or MLC NAND flash memory alone.

15.
An SSD generally has a small memory, called a cache buffer, to increase its performance; frequently accessed data are kept in this cache buffer. The cached data must periodically be written back to NAND flash memory to prevent data loss due to a sudden power-off, and the SSD must immediately flush all dirty data items into non-volatile storage (i.e., NAND flash memory) when it receives a flush command, which is supported in Serial ATA (SATA) and Serial Attached SCSI (SAS). Thus, the flush command is an important factor with a significant impact on SSD performance. In this paper, we investigate the impact of the flush command on SSD performance and conduct in-depth experiments with versatile workloads using the modified FlashSim simulator. Our performance measurements using PC and server workloads provide several interesting conclusions. First, a cache buffer without a flush command can improve SSD performance as the cache buffer size increases, since more requested data can be handled in the cache buffer. Second, our experiments reveal that the flush command can have a negative impact on SSD performance: as the cache buffer size increases, the average response time per request with the flush command becomes worse than without it. Finally, we propose a backend flushing scheme to nullify the negative performance impact of the flush command. The backend flushing scheme first writes the requested data into the cache buffer and sends an acknowledgment of request completion to the host system; it then writes the data in the cache buffer back to NAND flash memory. The proposed scheme can thus improve SSD performance, since it reduces the number of dirty data items in the cache buffer that must be written back to NAND flash memory. All these results suggest that the flush command can negatively affect SSD performance and that our backend flushing scheme can improve SSD performance while still supporting the flush command.
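A toy model of the backend flushing idea as the abstract describes it, with illustrative structures: the write is acknowledged once it reaches the cache buffer, dirty pages are trickled back to flash in the background, and a later flush command therefore finds fewer dirty pages left to write.

    from collections import OrderedDict

    class CacheBuffer:
        def __init__(self):
            self.dirty = OrderedDict()          # lpn -> data

        def write(self, lpn, data):
            self.dirty[lpn] = data
            return "ACK"                        # completion reported to the host here

        def backend_flush(self, flash, batch=2):
            """Write back a few dirty pages in the background (e.g. when idle)."""
            for _ in range(min(batch, len(self.dirty))):
                lpn, data = self.dirty.popitem(last=False)
                flash[lpn] = data

        def flush_command(self, flash):
            """SATA/SAS FLUSH: whatever is still dirty must go now."""
            self.backend_flush(flash, batch=len(self.dirty))

    flash, cache = {}, CacheBuffer()
    for lpn in range(4):
        cache.write(lpn, b"x")
    cache.backend_flush(flash)                  # background writeback of 2 pages
    cache.flush_command(flash)                  # only 2 dirty pages left to flush
    print(sorted(flash))                        # -> [0, 1, 2, 3]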

16.
A flash cache algorithm using page reconstruction and data temperature identification
Flash-based solid-state disks (SSDs) offer far better performance than magnetic disks and are gradually replacing them in desktop systems. However, even though DRAM is embedded in the SSD as a cache, flash can still exhibit unstable write performance under continuous writes, mainly because logical page writes frequently trigger out-of-place updates and garbage collection. To address this problem, a new flash cache management method called PRLRU is proposed, which uses a page reconstruction mechanism and data temperature identification…

17.
We propose an efficient writeback scheme that guarantees throughput in high-performance storage systems. The proposed scheme, called de-fragmented writeback (DFW), reduces the positioning time of storage devices under write workloads and thus enables fast writeback. We consider both types of storage media in designing the DFW scheme: traditional rotating disks and emerging solid-state disks. First, sorting and hole-filling methods are used on rotating disk media for higher throughput. These convert fragmented data blocks into sequential ones, reducing the number of write requests and unnecessary disk-head movements. Second, a flash-block-aware, clustering-based writeback scheme is used for solid-state disks, taking into account the characteristics of flash memory. Experimental results show that our schemes sustain high system throughput while guaranteeing data reliability.
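For the rotating-disk side, the sort-and-fill-holes step might look like the sketch below (the hole-size limit is an assumption): dirty sectors are sorted by LBA and small gaps between them are filled in, so the writeback collapses into a few sequential runs.

    MAX_HOLE = 2   # assumed: fill gaps of at most 2 clean sectors

    def defragment(dirty_lbas):
        runs, current = [], [min(dirty_lbas)]
        for lba in sorted(dirty_lbas)[1:]:
            if lba - current[-1] - 1 <= MAX_HOLE:
                current.extend(range(current[-1] + 1, lba + 1))  # fill the hole
            else:
                runs.append(current)
                current = [lba]
        runs.append(current)
        return runs

    print(defragment([10, 13, 30, 31]))   # -> [[10, 11, 12, 13], [30, 31]]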

18.
Flash memory is becoming a major database storage medium for embedded systems and portable devices because of its non-volatility, shock resistance, low power consumption, and fast read access time. Flash memory, however, must be erased before it can be rewritten, and its erase and write operations are very slow compared to main memory. Due to this drawback, traditional database management schemes are not easy to apply directly to flash memory databases on portable devices. Therefore, we improve the traditional schemes and propose a new scheme, called flash two-phase locking (F2PL), for efficient transaction processing in a flash memory database environment. F2PL achieves high transaction performance by exploiting the notion of alternative version coordination, which allows reads of previous versions and efficiently handles slow write/erase operations in the lock management process. We also propose a simulation model to show the performance of F2PL. Based on the results of the performance evaluation, we conclude that F2PL outperforms the traditional schemes.
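A much-simplified sketch of the alternative-version idea (not the full F2PL protocol): while a writer holds an item and its slow flash write is in progress, readers are served the last committed version instead of blocking.

    class VersionedItem:
        def __init__(self, value):
            self.committed = value
            self.pending = None          # value being written by an active writer

        def write_lock_and_update(self, value):
            self.pending = value         # the slow flash write would happen here

        def read(self):
            return self.committed        # readers never wait on the writer

        def commit(self):
            self.committed, self.pending = self.pending, None

    item = VersionedItem("v1")
    item.write_lock_and_update("v2")
    print(item.read())                   # -> "v1" (previous version, no blocking)
    item.commit()
    print(item.read())                   # -> "v2"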

19.
Compared with traditional hard disk drives, NAND flash-based solid-state disks offer non-volatility, high performance, and low power consumption, and are therefore widely used in data centers, cloud computing, and online transaction processing. However, because read operations on NAND flash are much faster than write operations, read requests may be blocked by concurrently executing writes, resulting in very high read latency. In many read-dominated scenarios, especially online transaction processing (where reads account for more than 90% of all requests), this sharp increase in read latency severely degrades overall system performance. This paper proposes a read/write scheduling optimization strategy that dynamically adjusts the priority order of read and write requests below the flash translation layer, yielding a significant improvement in read performance. The effectiveness of the scheduling strategy was systematically evaluated through the design and implementation of an SSD simulator. The experimental results show that, under this strategy, both the maximum and average read latencies are significantly reduced, by 72% and 41% respectively.
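The core of such a read-priority policy can be sketched very simply (the queue layout and names are assumptions): pending reads are always dispatched ahead of queued writes below the FTL.

    from collections import deque

    class Scheduler:
        def __init__(self):
            self.reads, self.writes = deque(), deque()

        def submit(self, op, lpn):
            (self.reads if op == "R" else self.writes).append((op, lpn))

        def dispatch(self):
            if self.reads:                 # pending reads always go first
                return self.reads.popleft()
            return self.writes.popleft() if self.writes else None

    s = Scheduler()
    for req in [("W", 1), ("W", 2), ("R", 3)]:
        s.submit(*req)
    print(s.dispatch())                    # -> ('R', 3): the read jumps the queue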

20.
Flash memory is one of the best storage media for portable and desktop computers. Its features include non-volatility, low power consumption, and fast read access time, which are sufficient to make flash memory a major database storage component for portable computers. However, traditional B-Tree-based index management schemes need to be improved, because flash memory operations are relatively slow compared to RAM. To achieve this goal, we propose a new index management scheme based on compressed hot-cold clustering, called CHC-Tree. The CHC-Tree-based index management scheme improves index operation performance by compressing the flash index nodes and clustering hot and cold segments. The cold-cluster compression technique, which uses the unused free area in an index node, reduces the number of slow write operations during index node insertions and deletions. Our performance evaluation shows that the scheme significantly reduces write overhead, improving the index update performance of the B-Tree by 21.9%.
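One way to picture the cold-segment compression (purely illustrative; the node layout, the use of zlib, and the hot/cold split are assumptions, not the CHC-Tree format): cold keys are kept compressed inside the node's free area, so lookups still work while fewer full node rewrites are needed.

    import zlib, json

    def pack_node(hot_keys, cold_keys):
        cold_blob = zlib.compress(json.dumps(cold_keys).encode())
        return {"hot": hot_keys, "cold": cold_blob}

    def lookup(node, key):
        if key in node["hot"]:
            return True
        return key in json.loads(zlib.decompress(node["cold"]))

    node = pack_node(hot_keys=[5, 9], cold_keys=list(range(100, 200)))
    print(lookup(node, 9), lookup(node, 150), lookup(node, 7))  # True True False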
