Similar Literature
18 similar documents found
1.
To address the limited lifetime of flash-based solid-state drives and the gradual decay of reliability as capacity is consumed, and based on two key observations — (a) a large number of duplicate data blocks exist, and (b) metadata blocks are accessed and modified far more frequently than data blocks, yet each update changes only a few bytes — Flash Saver is proposed. Coupled with the EXT2/3 file system, it combines data deduplication and delta encoding to reduce write traffic to the SSD: deduplication is applied to file system data blocks, and delta encoding to file system metadata blocks. Experimental results show that Flash Saver reduces total write traffic by up to 63%, yielding a longer lifetime, more effective flash space, and higher reliability.
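As a rough illustration of the two techniques Flash Saver combines, here is a minimal Python sketch, assuming 4 KB blocks and SHA-1 fingerprints (details the abstract does not specify): duplicate data blocks are remapped instead of rewritten, while metadata blocks are reduced to byte-level deltas against their previous version.

```python
# Hypothetical sketch: deduplicate data blocks by content hash, and
# delta-encode metadata blocks against the last version written.
import hashlib

BLOCK_SIZE = 4096
fingerprints = {}        # content hash -> physical block address
prev_metadata = {}       # metadata block id -> last written contents

def write_data_block(addr, block: bytes):
    """Deduplicate: only forward unique data blocks to the SSD."""
    h = hashlib.sha1(block).hexdigest()
    if h in fingerprints:
        return ("remap", fingerprints[h])      # duplicate: no flash write
    fingerprints[h] = addr
    return ("write", addr, block)              # unique: write to SSD

def write_metadata_block(block_id, block: bytes):
    """Delta-encode: metadata changes little per update, so log only
    the bytes that differ from the previous version."""
    old = prev_metadata.get(block_id, b"\x00" * BLOCK_SIZE)
    delta = [(i, b) for i, (a, b) in enumerate(zip(old, block)) if a != b]
    prev_metadata[block_id] = block
    return ("delta", block_id, delta)          # usually far smaller than 4 KB
```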

2.
This paper introduces eMMC and its operating principles in the HS400 high-speed data transfer mode, and proposes a design for an eMMC controller. The controller runs at 200 MHz and transfers data in DDR mode; a CRC module verifies the transferred data, strengthening system reliability. The experimental platform uses a motherboard/daughterboard architecture: the eMMC controller is implemented on a Xilinx Zynq 7000 FPGA board (ZedBoard) and communicates with an eMMC daughterboard through an FMC interface. Simulation and board-level tests show that read/write transfer rates in HS400 mode reach up to 400 MB/s, which can effectively improve the access performance of eMMC devices in practical development.
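For reference, the CRC protecting SD/eMMC data lines is CRC-16 with polynomial x^16 + x^12 + x^5 + 1 (0x1021) and initial value 0; below is a small software model of that checksum (not the paper's RTL, just the reference computation a hardware CRC module would reproduce per lane).

```python
# Software model of the CRC-16 used on eMMC/SD data lines
# (polynomial 0x1021, initial value 0).
def crc16_ccitt(data: bytes, crc: int = 0x0000) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# Example: checksum of a 512-byte data block before it is sent to the card.
block = bytes(512)
print(hex(crc16_ccitt(block)))
```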

3.
In mass storage systems, efficient metadata indexing is key to reducing the time and space overhead of metadata lookup. To address the large lookup overhead and unstable performance of existing metadata management methods, a hierarchical metadata indexing algorithm is designed. Based on metadata lifetime, metadata is divided into an active tier and an inactive tier. A Bloom filter generates a digest for each balanced active-metadata partition, and a B-tree indexes the active partitions; inactive partitions are handled similarly, with each partition choosing its own hash function. The algorithm is analyzed in terms of lookup time, space overhead, and adaptability, and compared with existing metadata management algorithms. A prototype was implemented and tested on real-world datasets; the results show that hierarchical metadata indexing reduces the time and space overhead of metadata lookup and adapts well.
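A toy Python sketch of the two-tier lookup described above, with a simplistic two-hash Bloom filter and a dict standing in for the per-partition B-tree; all sizes and hash choices are illustrative assumptions.

```python
# Consult each partition's Bloom filter first; only search a partition's
# index when the filter says the key may be present.
import hashlib

class BloomFilter:
    def __init__(self, nbits=1 << 16):
        self.nbits, self.bits = nbits, 0

    def _positions(self, key):
        d = hashlib.md5(key.encode()).digest()
        yield int.from_bytes(d[:8], "big") % self.nbits
        yield int.from_bytes(d[8:], "big") % self.nbits

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def may_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

def lookup(partitions, key):
    """partitions: list of (bloom_filter, index) pairs; the dict index
    stands in for the paper's per-partition B-tree."""
    for bloom, index in partitions:
        if bloom.may_contain(key):          # cheap in-RAM digest check
            if key in index:                # expensive per-partition search
                return index[key]
    return None
```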

4.
To address the low transfer rate, poor reliability, and limited intelligence of traditional recorders that store data on a single eMMC device, a high-speed, dual-backup intelligent storage system based on an eMMC array is designed. Four 64 GB Samsung KLMCG8GESD-B04Q eMMC chips are divided into two groups forming an array and operated in parallel for dual-backup storage. A power-loss resume feature is added, so that even a momentary, unexpected power failure does not overwrite data already stored. Test results and data analysis show a write speed of 152 MB/s, a read speed of 83 MB/s, and a storage capacity of 118 GB. Even when a limited number of eMMC chips fail, the recorder still meets the technical requirements of an aerospace flight recorder.
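A toy software model of the dual-backup write and read path, with CRC32 standing in for whatever integrity check the hardware actually uses (an assumption); in the real design the two group writes proceed in parallel in hardware.

```python
# Every record is written to both eMMC groups; a read falls back to the
# mirror when the primary copy fails verification.
import zlib

group_a, group_b = {}, {}

def backup_write(addr, data: bytes):
    record = (data, zlib.crc32(data))
    group_a[addr] = record                 # in hardware these two writes
    group_b[addr] = record                 # happen in parallel

def backup_read(addr) -> bytes:
    for group in (group_a, group_b):       # primary first, then the mirror
        data, crc = group.get(addr, (None, None))
        if data is not None and zlib.crc32(data) == crc:
            return data
    raise IOError("both copies damaged")
```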

5.
Structural Analysis of Geoscience Metadata and the Design of Its Management System
Based on an analysis of the Web-sharing requirements and multidisciplinary character of geoscience data, an extensible metadata structure for geoscience data is designed. It comprises three layers: geoscience core metadata, schema core metadata, and schema-specific (extended) metadata. Using the W3C-recommended RDF/XML data model and methods, a metadata management system (MMS) was developed for a geoscience data sharing platform. Its application verifies the reliability and applicability of the metadata framework for geoscience data sharing.

6.
High-energy physics is a typical data-intensive computing workload; data access performance is critical to the whole system and closely tied to the applications' computing patterns. Starting from an analysis of typical high-energy physics computing patterns, this paper summarizes the characteristics of their data access and proposes optimizations targeting operating-system I/O scheduling, distributed file system caching, and other factors; after optimization, data access performance and CPU utilization improve noticeably. Large-scale storage systems also place high demands on manageability, including metadata management, data reliability, and capacity expansion. Addressing shortcomings of the existing Lustre parallel file system, a Gluster-based storage system for high-energy physics is designed. After optimizations for data management and capacity expansion, the system has been put into production; its data access performance meets the needs of high-energy physics computing, with better scalability and reliability.

7.
林凌  陈展虹 《福建电脑》2008,24(4):144-145
To meet the metadata service requirements of distributed network storage, this paper uses multiple metadata servers forming a metadata server group and proposes a partitioned hashing scheme to manage the group effectively. Without physically moving any metadata, the scheme provides load balancing, scalability, and high availability for the metadata server group.
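A minimal sketch of partitioned hashing across a metadata server group, assuming a fixed partition count and a map-based reassignment (the abstract does not give the exact scheme): rebalancing only rewrites the partition map, so no metadata moves physically.

```python
# A key's hash selects a partition; a partition map assigns partitions to
# servers and is the only thing updated when servers join or leave.
import hashlib

NUM_PARTITIONS = 64                       # fixed partitions, many per server
partition_map = {p: p % 4 for p in range(NUM_PARTITIONS)}  # partition -> server

def server_for(path: str) -> int:
    p = int(hashlib.md5(path.encode()).hexdigest(), 16) % NUM_PARTITIONS
    return partition_map[p]

def rebalance(server_ids):
    """Reassign partitions without moving metadata physically: only the
    map changes; ownership is picked up lazily by the servers."""
    for p in range(NUM_PARTITIONS):
        partition_map[p] = server_ids[p % len(server_ids)]
```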

8.
To meet the acquisition, storage, and transmission needs of aircraft sensor data in the aerospace field, a gigabit Ethernet data recorder based on the ZYNQ is designed. With the ZYNQ as the main controller, the recorder receives data packets in real time over the Ethernet interface, encodes them, and writes them to a buffer; the logic side drains the buffered data into an eMMC storage unit, and stored data can be read back over Ethernet to a PC for verification and analysis. Test results show that the recorder sustains gigabit TCP communication with average read/write rates of 500 Mbit/s; after Ethernet packet parsing, data is written to the storage unit at an average of 60 MB/s, with instantaneous peaks up to 100 MB/s. Separating the main controller from the storage unit simplifies overload protection for the storage unit and post-test data recovery, providing a solution for aircraft data recording under high-g conditions.

9.
As enterprises grow rapidly, their internal and external data become increasingly abundant, making metadata management a key concern for many companies. Consistency and reliability guarantees are essential for the operation requests handled by metadata management. This paper analyzes strategies for guaranteeing both the consistency and the reliability of metadata management.

10.
Metadata is data that describes data; with the rapid development of information technology, metadata plays a key role in sharing geospatial information resources. Metadata conforms to standards that define its constituent elements and their classification and application. This paper presents basic principles for building metadata repositories and gives a design for XML-based metadata management; following the standard, it implements repository construction, editing, import/export, querying, and catalog-association management for metadata.
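A small Python sketch of XML-based metadata serialization in the spirit of this design; the element and field names are assumptions for illustration, not the paper's schema.

```python
# Serialize a metadata record to XML and read it back, the round trip that
# import/export and editing features would build on.
import xml.etree.ElementTree as ET

def to_xml(record: dict) -> str:
    root = ET.Element("metadata")
    for key, value in record.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml(text: str) -> dict:
    return {child.tag: child.text for child in ET.fromstring(text)}

xml_text = to_xml({"title": "DEM of region X", "format": "GeoTIFF"})
assert from_xml(xml_text)["format"] == "GeoTIFF"
```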

11.
Journaling file systems, which are widely used in modern operating systems, guarantee file system consistency and data integrity by logging file system updates to a journal, which is a reserved space on the storage, before the updates are written to the data storage. Such journal writes increase the write traffic to the storage and thus degrade the file system performance, especially in full data journaling, which logs both metadata and data updates. In this paper, a new journaling approach is proposed to eliminate journal writes in server virtualization environments, which are gaining in popularity in server platforms. Based on reliable hardware subsystems and virtual machine monitor (VMM), the proposed approach eliminates journal writes by retaining journal data (i.e. logged file system updates) in the memory of each virtual machine and ensuring the integrity of these journal data through cooperation between the journaling file systems and the VMM. We implement the proposed approach in Linux ext3 in the Xen virtualization environment. According to the performance results, a performance improvement of up to 50.9% is achieved over the full data journaling approach of ext3 due to journal write elimination. In metadata-write dominated workloads, this approach could even outperform the metadata journaling approaches of ext3, which do not guarantee data integrity. These results demonstrate that, on virtual servers with reliable VMM and hardware subsystems, the proposed approach is an effective alternative to traditional journaling approaches. Copyright © 2011 John Wiley & Sons, Ltd.

12.
Solid-state drives (SSDs) have been widely used as a caching tier for disk-based RAID systems to speed up data-intensive applications. However, traditional cache schemes fail to effectively boost parity-based RAID storage systems (e.g., RAID-5/6), which have poor random write performance due to the small-write problem. What's worse, intensive cache writes can wear out the SSD quickly, which causes performance degradation and cost increment. In this article, we present the design and implementation of KDD, an efficient SSD-based caching system which Keeps Data and Deltas in SSD. When write requests hit in the cache, KDD dispatches the data to the RAID storage without updating the parity blocks to mitigate the small write penalty, and compactly stores the compressed deltas in SSD to reduce the cache write traffic while guaranteeing reliability in case of disk failures. In addition, KDD organizes the metadata partition on SSD as a circular log to make the cache persistent with low overhead. We evaluate the performance of KDD via both simulations and prototype implementations. Experimental results show that KDD effectively reduces the small write penalty while extending the lifetime of the SSD-based cache by up to 6.85 times.
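A minimal sketch of the "data plus deltas" idea under stated assumptions: an XOR delta followed by zlib compression stands in for KDD's compressor, and the block size is illustrative. The delta compresses well precisely when a write changes only a few bytes of a block.

```python
# On a cache write hit, keep only a compressed delta of old vs. new block
# in the SSD cache instead of rewriting the full block and its parity.
import zlib

def make_delta(old: bytes, new: bytes) -> bytes:
    """XOR delta: mostly zero bytes when few bytes changed, so it
    compresses very well."""
    xored = bytes(a ^ b for a, b in zip(old, new))
    return zlib.compress(xored)

def apply_delta(old: bytes, delta: bytes) -> bytes:
    xored = zlib.decompress(delta)
    return bytes(a ^ b for a, b in zip(old, xored))

old = bytes(4096)
new = bytearray(old)
new[100:104] = b"ABCD"                     # a small in-place update
d = make_delta(old, bytes(new))
assert apply_delta(old, d) == bytes(new)
print(f"delta: {len(d)} bytes vs. full block: {len(new)} bytes")
```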

13.
操顺德  华宇  冯丹  孙园园  左鹏飞 《软件学报》2017,28(8):1999-2009
Based on an analysis of the characteristics of video surveillance data and traditional storage schemes, a high-performance distributed storage system is proposed. Unlike traditional file-based storage, a logical-volume structure is designed: unstructured video stream data is organized in this structure and written directly to raw disk devices, eliminating the performance degradation caused by random disk access and fragmentation in traditional schemes. Metadata is organized as a two-level index managed by a state manager and the storage servers respectively, greatly reducing the amount of metadata the state manager must handle, removing the performance bottleneck, and providing retrieval precision down to the second. Moreover, a flexible storage-server grouping policy with intra-group mutual backup gives the system fault tolerance and linear scalability. System tests show that on low-cost PC servers, a single server can simultaneously record 400 channels of 1080p video, with a write speed 2.5 times that of the local file system.
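A toy model of the two-level index: the state manager keeps only a coarse map from a camera's volume to a storage server, while each storage server keeps a fine-grained per-second map from timestamps to byte offsets inside the raw volume. All names and values are illustrative.

```python
# Level 1 (state manager): (camera, volume) -> storage server.
state_manager = {("cam42", "vol7"): "server3"}

# Level 2 (per storage server): per-second timestamp -> byte offset.
server_index = {"server3": {("cam42", "vol7"): {
    1500000000: 0, 1500000001: 524288, 1500000002: 1048576}}}

def locate(camera, volume, second):
    server = state_manager[(camera, volume)]      # one small coarse lookup
    offset = server_index[server][(camera, volume)][second]
    return server, offset                         # second-level precision

print(locate("cam42", "vol7", 1500000001))
```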

14.
Due to the rapid development of flash memory technology, NAND flash has been widely used as a storage device in portable embedded systems, personal computers, and enterprise systems. However, flash memory is prone to performance degradation due to the long latency in flash program operations and flash erasure operations. One common technique for hiding long program latency is to use a temporary buffer to hold write data. Although DRAM is often used to implement the buffer because of its high performance and low bit cost, it is volatile; thus, the data may be lost on power failure in the storage system. As a solution to this issue, recent operating systems frequently issue flush commands to force storage devices to permanently move data from the buffer into the non-volatile area. However, the excessive use of flush commands may worsen the write performance of the storage systems. In this paper, we propose two data loss recovery techniques that require fewer write operations to flash memory. These techniques remove unnecessary flash writes by storing storage metadata along with user data simultaneously by utilizing the spare area associated with each data page.
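One way to realize the spare-area idea, sketched in Python with assumed page/spare sizes and assumed fields (logical page number and sequence number): every page write carries its own recovery metadata, so no separate metadata flush is needed, and the mapping can be rebuilt by scanning spare areas after a crash.

```python
# Pack recovery metadata into each page's out-of-band (spare) area.
import struct

PAGE_DATA, PAGE_SPARE = 4096, 16

def program_page(data: bytes, lpn: int, seq: int) -> bytes:
    spare = struct.pack("<QQ", lpn, seq)           # 16-byte OOB record
    return data.ljust(PAGE_DATA, b"\x00") + spare  # data + spare in one write

def scan_page(page: bytes):
    """On recovery, rebuild the mapping by scanning spare areas."""
    lpn, seq = struct.unpack("<QQ", page[PAGE_DATA:PAGE_DATA + PAGE_SPARE])
    return lpn, seq

page = program_page(b"user data", lpn=123, seq=7)
assert scan_page(page) == (123, 7)
```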

15.
In modern energy-saving replication storage systems, a primary group of disks is always powered up to serve incoming requests while other disks are often spun down to save energy during slack periods. However, since new writes cannot be immediately synchronized into all disks, system reliability is degraded. In this paper, we develop a high-reliability and energy-efficient replication storage system, named RERAID, based on RAID10. RERAID employs part of the free space in the primary disk group and uses erasure coding to construct a code cache at the front end to absorb new writes. Since the code cache supports failure recovery of two or more disks by using erasure coding, RERAID guarantees a reliability comparable with that of the RAID10 storage system. In addition, we develop an algorithm, called erasure coding write (ECW), to buffer many small random writes into a few large writes, which are then written to the code cache in parallel as sequential writes to improve the write performance. Experimental results show that RERAID significantly improves write performance and saves more energy than existing solutions.
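A minimal sketch of the write-buffering side of ECW, with an illustrative flush threshold; the erasure-coding step itself is omitted. Small random writes accumulate until they can be flushed as one large sequential write to the code cache.

```python
# Accumulate small random writes; flush them as one large sequential write.
class ECWBuffer:
    def __init__(self, flush_threshold=64 * 1024):
        self.pending, self.size, self.threshold = [], 0, flush_threshold

    def write(self, addr, data: bytes):
        self.pending.append((addr, data))
        self.size += len(data)
        if self.size >= self.threshold:
            return self.flush()

    def flush(self):
        batch, self.pending, self.size = self.pending, [], 0
        # One large address-ordered write to the code cache instead of
        # many small scattered ones.
        return b"".join(data for _, data in sorted(batch))
```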

16.
Increasingly, for extensibility and performance, special purpose application code is being integrated with database system code. Such application code has direct access to database system buffers, and as a result, the danger of data being corrupted due to inadvertent application writes is increased. Previously proposed hardware techniques to protect from corruption require system calls, and their performance depends on details of the hardware architecture. We investigate an alternative approach which uses codewords associated with regions of data to detect corruption and to prevent corrupted data from being used by subsequent transactions. We develop several such techniques which vary in the level of protection, space overhead, performance, and impact on concurrency. These techniques are implemented in the Dali main-memory storage manager, and the performance impact of each on normal processing is evaluated. Novel techniques are developed to recover when a transaction has read corrupted data caused by a bad write and gone on to write other data in the database. These techniques use limited and relatively low-cost logging of transaction reads to trace the corruption and may also prove useful when resolving problems caused by incorrect data entry and other logical errors.
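A toy Python illustration of region codewords, with CRC32 standing in for the paper's codeword scheme (an assumption): legitimate writes update the codeword, so a stray write by application code is caught the next time the region is read.

```python
# Keep a checksum per data region; verify it before a transaction reads.
import zlib

codewords = {}                                     # region id -> codeword

def protected_write(region_id, regions, data: bytes):
    regions[region_id] = data
    codewords[region_id] = zlib.crc32(data)        # codeword tracks the write

def protected_read(region_id, regions) -> bytes:
    data = regions[region_id]
    if zlib.crc32(data) != codewords[region_id]:   # stray write detected
        raise IOError(f"region {region_id} corrupted by an inadvertent write")
    return data
```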

17.
A Massive Sensor Data Storage Architecture Based on a Converged Database
类兴邦  房俊 《计算机科学》2016,43(6):68-71, 111
In IoT, industrial monitoring, and similar systems, vast numbers of sensors generate large volumes of data continuously. Real-time databases excel at handling time-critical data but suffer from low storage capacity and poor scalability when handling large-scale sensor data, whereas HBase offers high read/write performance, high scalability, high reliability, and high storage capacity for massive data. By combining a real-time database with HBase, this paper designs and implements a sensor data storage architecture based on a converged database. The architecture adopts a multi-tenant mechanism, optimizes HBase writes, stores previously scattered sensor data centrally, and separates sensor metadata from historical data, while preserving the real-time database's original query capabilities and data organization. Experiments verify that the architecture delivers high read/write performance and good scalability, effectively avoiding region write hotspots and achieving cluster load balancing.
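One common way to avoid HBase region write hotspots, sketched in Python: salt the row key with a bucket derived from the sensor id, and keep metadata under a separate key space from historical readings. The key layout is an assumption for illustration, not the paper's exact design.

```python
# Salted, time-ordered row keys spread sequential sensor writes across
# regions; metadata lives under a distinct key prefix.
import zlib

NUM_BUCKETS = 16

def history_rowkey(sensor_id: str, ts_ms: int) -> bytes:
    bucket = zlib.crc32(sensor_id.encode()) % NUM_BUCKETS  # stable salt
    return f"h:{bucket:02d}:{sensor_id}:{ts_ms:013d}".encode()

def meta_rowkey(sensor_id: str) -> bytes:
    return f"m:{sensor_id}".encode()       # metadata stored separately

print(history_rowkey("sensor-0042", 1700000000000))
```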

18.
We describe a data deduplication system for backup storage of PC disk images, named in-RAM metadata utilizing deduplication (IR-MUD). In-RAM hash granularity adaptation and miniLZO based data compression are first proposed to reduce the in-RAM metadata size and thereby reduce the space overheads required by the in-RAM metadata caches. Secondly, an in-RAM metadata write cache, as opposed to the traditional metadata read cache, is proposed for further reducing metadata-related disk I/O operations and improving deduplication throughput. During deduplication, the metadata write cache is managed following the LRU caching policy. For each manifest that is hit in the metadata write cache, an expensive manifest reloading operation from the disk is avoided. After deduplication, all the manifests in the metadata write cache are cleared and stored on the disk. Our experimental results using a 1.5 TB real-world disk image dataset show that 1) IR-MUD achieved about 95% size reduction for the deduplication metadata, with a small time overhead introduced, 2) when the metadata write cache was not utilized, with the same RAM space size for the metadata read cache, IR-MUD achieved a 400% higher RAM hit ratio and a 50% higher deduplication throughput, as compared with the classic Sparse Indexing deduplication system where no metadata utilization approaches are utilized, and 3) when the metadata write cache was utilized and enough RAM space was available, IR-MUD achieved a 500% higher RAM hit ratio compared with Sparse Indexing and a 70% higher deduplication throughput compared with IR-MUD with only a single metadata read cache. The in-RAM metadata harnessing and metadata write caching approaches of IR-MUD can be applied in most parallel deduplication systems for improving metadata caching efficiency.
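A minimal sketch of an LRU manifest write cache in the spirit of IR-MUD; the interfaces are illustrative assumptions. A cache hit avoids an expensive manifest reload from disk, and after deduplication finishes everything is written back and cleared, as the abstract describes.

```python
# LRU write cache for deduplication manifests.
from collections import OrderedDict

class ManifestWriteCache:
    def __init__(self, capacity=1024):
        self.cache, self.capacity = OrderedDict(), capacity

    def get(self, manifest_id, load_from_disk):
        if manifest_id in self.cache:
            self.cache.move_to_end(manifest_id)   # hit: no disk reload
            return self.cache[manifest_id]
        manifest = load_from_disk(manifest_id)    # miss: expensive reload
        self.cache[manifest_id] = manifest
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict LRU entry
        return manifest

    def flush(self, store_to_disk):
        for mid, manifest in self.cache.items():  # after dedup: persist all
            store_to_disk(mid, manifest)
        self.cache.clear()
```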
