Similar Literature
A total of 20 similar documents were retrieved.
1.
Flash memory is an increasingly popular storage medium in embedded systems, providing high-density solid-state storage at relatively low cost. Using flash memory requires a number of techniques to ensure data reliability and to extend the lifetime of the flash device. Based on the characteristics of flash memory, this paper proposes a file system model that uses flash memory rationally and efficiently and provides journaling features to improve the reliability of embedded systems that adopt this file system.

2.
Distributed active storage architectures are designed to offload user-level processing to the peripheral from the host servers. In this paper, we report preliminary investigation on performance and fault recovery designs, as impacted by emerging storage interconnect protocols and state-of-the-art storage devices. Empirical results obtained using validated device-level and interconnect data demonstrate the significance of the said parameters on the overall system performance and reliability.

3.
Heterogeneous storage architectures combine the strengths of different storage devices in a synergistically useful fashion, and are increasingly being used in mobile storage systems. In this paper, we propose ARC-H, an adaptive cache replacement algorithm for heterogeneous storage systems consisting of a hard disk and a NAND flash memory. ARC-H employs a dynamically adaptive management policy based on ghost buffers and takes account of recency, I/O cost per device, and workload patterns in making cache replacement decisions. Realistic trace-driven simulations show that ARC-H reduces service time by up to 88% compared with existing caching algorithms with a 20 Mb cache. ARC-H also reduces energy consumption by up to 81%.
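To make the cost-aware eviction idea concrete, the following is a minimal Python sketch of a replacement policy in the spirit of ARC-H: each cached page records the device it came from, and eviction trades recency against an assumed per-device miss penalty. The device costs, the candidate window and the class name are illustrative assumptions, not the paper's actual algorithm, which additionally adapts its behaviour online using ghost buffers.

from collections import OrderedDict

MISS_COST = {"ssd": 0.2, "hdd": 1.0}   # assumed relative re-fetch penalties

class CostAwareCache:
    def __init__(self, capacity, window=4):
        self.capacity = capacity
        self.window = window             # how far recency may be traded for cost
        self.pages = OrderedDict()       # page_id -> device, least recent first

    def access(self, page_id, device):
        """Return True on a cache hit."""
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # hit: refresh recency
            return True
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page_id] = device
        return False

    def _evict(self):
        # Among the `window` least recently used pages, evict the one whose
        # miss penalty (re-fetch cost on its device) is the smallest.
        candidates = list(self.pages.items())[: self.window]
        victim = min(candidates, key=lambda kv: MISS_COST[kv[1]])[0]
        del self.pages[victim]

cache = CostAwareCache(capacity=3)
for page, dev in [(1, "hdd"), (2, "ssd"), (3, "ssd"), (4, "hdd")]:
    cache.access(page, dev)
print(list(cache.pages))   # [1, 3, 4]: the cheap-to-refetch SSD page 2 was evicted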

4.
Great advancements in commodity graphics hardware have favoured graphics processing unit (GPU)-based volume rendering as the main adopted solution for interactive exploration of rectilinear scalar volumes on commodity platforms. Nevertheless, long data transfer times and GPU memory size limitations are often the main limiting factors, especially for massive, time-varying or multi-volume visualization, as well as for networked visualization on the emerging mobile devices. To address this issue, a variety of level-of-detail (LOD) data representations and compression techniques have been introduced. In order to improve capabilities and performance over the entire storage, distribution and rendering pipeline, the encoding/decoding process is typically highly asymmetric, and systems should ideally compress at data production time and decompress on demand at rendering time. Compression and LOD pre-computation does not have to adhere to real-time constraints and can be performed off-line for high-quality results. In contrast, adaptive real-time rendering from compressed representations requires fast, transient and spatially independent decompression. In this report, we review the existing compressed GPU volume rendering approaches, covering sampling grid layouts, compact representation models, compression techniques, GPU rendering architectures and fast decoding techniques.

5.
Since the performance of flash-oriented buffer replacement algorithms is usually validated only through simulation, this paper proposes a more convincing validation method based on PostgreSQL. It focuses on how to extend PostgreSQL with flash-aware buffer replacement algorithms and the corresponding implementation techniques, using CFLRU (clean-first least recently used) and CCFLRU (cold-clean-first least recently used) as examples to illustrate the concrete extension process. Performance tests were then conducted with a solid-state drive as the storage device; the results demonstrate the effectiveness of the PostgreSQL-based extension method for validating buffer replacement algorithms.
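For reference, CFLRU itself is simple to state: keep pages in LRU order, but within a window of W pages at the cold end evict clean pages before dirty ones, since evicting a dirty page costs an extra flash program operation. The sketch below is a minimal, self-contained illustration of that policy (the parameter names and the fallback to plain LRU are assumptions); the paper's contribution is implementing such policies inside PostgreSQL's buffer manager rather than in a standalone simulator.

from collections import OrderedDict

class CFLRUBuffer:
    def __init__(self, capacity, window):
        self.capacity, self.window = capacity, window
        self.pages = OrderedDict()            # page_id -> dirty flag, LRU first

    def access(self, page_id, is_write=False):
        dirty = self.pages.pop(page_id, False) or is_write
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page_id] = dirty           # most recently used at the end

    def _evict(self):
        # Look only at the `window` pages nearest the LRU end; evict a clean
        # one if any exists, otherwise fall back to plain LRU.
        region = list(self.pages)[: self.window]
        clean = [p for p in region if not self.pages[p]]
        victim = clean[0] if clean else region[0]
        # A real buffer manager would flush the page here if it is dirty.
        del self.pages[victim]

buf = CFLRUBuffer(capacity=3, window=2)
buf.access("a", is_write=True)    # dirty
buf.access("b")                   # clean
buf.access("c")                   # clean
buf.access("d")                   # eviction: clean "b" goes before dirty "a"
print(list(buf.pages))            # ['a', 'c', 'd']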

6.
The traditional hard disk drive (HDD) is often a bottleneck in the overall performance of modern computer systems. With the development of solid state drives (SSD) based on flash memory, new possibilities are available to improve secondary storage performance. In this work, we propose a new hybrid SSD–HDD storage system and a selection of algorithms designed to assign pages across an HDD and an SSD to optimise I/O performance. The hybrid system combines the advantages of the SSD’s fast random seek speed with the sequential access speed and large storage capacity of the HDD to produce significantly improved performance in a variety of situations. We further improve performance by allowing concurrent access across the two types of storage devices. We show the drive assignment problem is NP-complete and accordingly propose effective heuristic solutions. Extensive experiments using both synthetic and real data sets show our system with a small SSD can outperform a striped dual HDD and remain competitive with a dual SSD.
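As an illustration of the kind of heuristic such a hybrid system might use (the paper's own heuristics are not reproduced here), the sketch below greedily places the pages with the most random accesses on the SSD, where fast seeks pay off most, and leaves the rest on the HDD. The page identifiers and access counts are hypothetical.

def assign_pages(pages, ssd_capacity):
    """pages: dict page_id -> (random_accesses, sequential_accesses)."""
    # Pages with many random accesses benefit most from the SSD's fast seeks.
    by_benefit = sorted(pages, key=lambda p: pages[p][0], reverse=True)
    ssd = set(by_benefit[:ssd_capacity])
    hdd = set(by_benefit[ssd_capacity:])
    return ssd, hdd

trace = {"p0": (120, 3), "p1": (2, 400), "p2": (45, 10), "p3": (0, 900)}
ssd_pages, hdd_pages = assign_pages(trace, ssd_capacity=2)
print(ssd_pages)   # the two most randomly accessed pages ('p0', 'p2') -> SSD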

7.
Research on the response time of I/O requests in networked storage systems   Cited: 11 (self-citations: 1, by others: 11)
Networked storage technology remedies many shortcomings of traditional host-based storage systems, but inserting a network between data storage and data processing strongly affects the whole I/O request path and makes I/O performance hard to estimate accurately. By analyzing the basic storage procedures of two common networked storage systems, NAS and SAN, this paper proposes a performance evaluation model for I/O response time in networked storage systems. Experiments show that the model can evaluate storage network performance to a large extent. The results indicate that storage network performance depends not only on the physical properties of the storage and network devices but also on the specific workload. Moreover, FC (Fibre Channel) is far less sensitive to workload conditions than TCP/IP networks and delivers better I/O response times.
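A generic decomposition of networked-storage response time, given here only as an illustrative sketch and not necessarily the model proposed in the paper, separates the network, protocol and device contributions:

T_{resp} = T_{net} + T_{proto} + T_{dev}, \qquad
T_{net} \approx L_{net} + S / B_{net}, \qquad
T_{dev} \approx T_{seek} + T_{rot} + S / B_{dev} + W_q(\rho)

where S is the request size, B_{net} and B_{dev} are the network and device bandwidths, L_{net} is the network latency, and W_q(\rho) is a queueing delay that grows with the load \rho. A decomposition of this form reflects the abstract's observations: performance depends on the workload (through the queueing term) as well as on device and network characteristics, and a Fibre Channel path with lower protocol overhead is less sensitive to load than a TCP/IP path.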

8.
Block devices such as magnetic disks are nonvolatile data storage devices that transfer data in fixed-size chunks. They are the main nonvolatile memory that holds the file system, and they are also used in virtual memory mechanisms such as swapping and page fault handling. Investigating storage performance issues requires a full insight into the operating system internals. Kernel tracing offers an efficient mechanism to gather information about the storage subsystem at runtime. Still, the tracing output is often huge and difficult to analyze manually. In this paper, we introduce a framework to compute meaningful storage performance metrics from low-level trace events generated by LTTng. A stateful approach is used to model the state of the storage subsystem. Efficient data structures and algorithms are proposed to offer a reasonable response time, allowing the user to navigate throughout the trace and to retrieve metrics from any time range. The framework includes a visualization system that provides different graphical views that represent the collected information in a convenient way. These views are synchronized together, forming a comprehensive perspective that makes storage performance investigation a much more comfortable task. Different use cases are presented to show the usefulness of the framework in real-world applications.
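As a small illustration of the stateful approach, the sketch below pairs block-layer issue and completion events to recover per-request latency; the event dictionaries and field names are simplified stand-ins for parsed kernel block tracepoints (block_rq_issue / block_rq_complete), not the framework's actual data structures.

def request_latencies(events):
    """Pair issue/complete events and return the latency of each request (ns)."""
    pending = {}                         # (dev, sector) -> issue timestamp
    latencies = []
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        key = (ev["dev"], ev["sector"])
        if ev["name"] == "block_rq_issue":
            pending[key] = ev["timestamp"]
        elif ev["name"] == "block_rq_complete" and key in pending:
            latencies.append(ev["timestamp"] - pending.pop(key))
    return latencies

events = [
    {"name": "block_rq_issue",    "timestamp": 1000, "dev": "sda", "sector": 8},
    {"name": "block_rq_complete", "timestamp": 5200, "dev": "sda", "sector": 8},
]
print(request_latencies(events))   # -> [4200] ns for this single request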

9.
The doubly-chained tree data base (file) organization is modeled and analyzed to obtain estimates of average access time (read-only) and total storage requirements. Macroscopic expressions are derived relating such performance measures to characteristic parameters of the specific data base, query or transaction traffic, storage devices and particular doubly-chained storage structure chosen. Important and generally applicable implementation-oriented aspects are considered. The model forms part of a prototype system for automatically analyzing and evaluating various data base organizations. The methodology and system are briefly outlined. Results for the doubly-chained structure using several real data bases are summarized, showing the rather large variability of performance as a function of both data base contents and complexity of queries.
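For readers unfamiliar with the structure, a doubly-chained tree node keeps a first-child pointer and a next-sibling pointer, so answering a fully specified query means scanning one sibling chain per level; the average access time modelled in the paper is driven by those chain lengths. The sketch below is a minimal illustration of the lookup, not the paper's analytical model.

class Node:
    # Doubly-chained tree node: one pointer to the first child (next level)
    # and one to the next sibling (same level, same parent).
    def __init__(self, key, child=None, sibling=None):
        self.key, self.child, self.sibling = key, child, sibling

def find(root, keys):
    """Descend one key per level; return (terminal node or None, nodes read)."""
    node, reads = root, 0
    for key in keys:
        node = node.child                           # drop to the next level
        while node is not None and node.key != key:
            node, reads = node.sibling, reads + 1   # scan the sibling chain
        if node is None:
            return None, reads
        reads += 1
    return node, reads

# Two-level example: attribute A in {a1, a2}, attribute B chained under each.
root = Node(None,
            child=Node("a1", child=Node("b1", sibling=Node("b2")),
                       sibling=Node("a2", child=Node("b3"))))
print(find(root, ["a2", "b3"])[1])   # 3 nodes read: a1, a2, then b3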

10.
Solid state disks (SSDs) are becoming one of the mainstream storage devices due to their salient features, such as high read performance and low power consumption. In order to obtain high write performance and extend flash lifespan, SSDs leverage an internal DRAM to buffer frequently rewritten data and reduce the number of program operations upon the flash. However, existing buffer management algorithms fail to leverage data access features to predict data attributes. In various real-world workloads, most large sequential write requests are rarely rewritten in the near future. Once these write requests occur, much hot data is evicted from the DRAM into flash memory, jeopardizing the overall system performance. To address this problem, we propose a novel large-write-data identification scheme, called Prober. This scheme probes large sequential write sequences among the write streams at an early stage to prevent them from residing in the buffer. In the meantime, to further release space and reduce the waiting time for handling incoming requests, we temporarily buffer the large data in DRAM when the buffer has free space, and leverage an active write-back scheme for large sequential write data when the flash array turns idle. Experimental results demonstrate that our schemes improve the hit ratio of write requests by up to 10%, decrease the average response time by up to 42% and reduce the number of erase operations by up to 11%, compared with the state-of-the-art buffer replacement algorithms.
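One simple way to flag such streams early, shown below purely as an illustration and not as Prober's actual detection logic, is to track whether each write extends the previous request's address range and to flag the stream once the accumulated sequential run exceeds a threshold (the threshold value here is assumed).

SEQ_THRESHOLD = 256          # assumed run-length threshold, in sectors

class SeqDetector:
    def __init__(self):
        self.next_lba = None      # address at which a sequential run would continue
        self.run_len = 0

    def on_write(self, lba, length):
        if lba == self.next_lba:
            self.run_len += length            # extends the current run
        else:
            self.run_len = length             # a new run starts here
        self.next_lba = lba + length
        return self.run_len >= SEQ_THRESHOLD  # True -> treat as a large sequential write

d = SeqDetector()
for lba in range(0, 1024, 64):
    flagged = d.on_write(lba, 64)
print(flagged)   # True: the stream was identified as a large sequential write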

11.
In this paper, we propose data space mapping techniques for storage and retrieval in multi-dimensional databases on multi-disk architectures. We identify the important factors for efficient multi-disk searching of multi-dimensional data and develop secondary storage organization and retrieval techniques that directly address these factors. We especially focus on high-dimensional data, where none of the current approaches are effective. In contrast to current declustering techniques, the storage techniques in this paper consider both inter- and intra-disk organization of the data. The data space is first partitioned into buckets; the buckets are then declustered across multiple disks while being clustered within each disk. Queries are executed through bucket identification techniques that locate the pages. One of the partitioning techniques we discuss is especially practical for high-dimensional data, and our disk and page allocation techniques are optimal with respect to the number of I/O accesses and seek times. We provide experimental results that support our claims on two real high-dimensional datasets.
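A classic declustering rule for grid-partitioned buckets, given here only as an illustration (the paper's allocation techniques differ and are tuned for high-dimensional data), assigns bucket (i1, ..., id) to disk (i1 + ... + id) mod M, so that neighbouring buckets land on different disks and a range query is answered in parallel.

from itertools import product

def disk_of(bucket_coords, num_disks):
    # "Disk modulo" declustering: neighbouring buckets map to different disks.
    return sum(bucket_coords) % num_disks

def disks_touched(query_ranges, num_disks):
    """query_ranges: one (lo, hi) bucket-index range per dimension, inclusive."""
    buckets = product(*(range(lo, hi + 1) for lo, hi in query_ranges))
    return {disk_of(b, num_disks) for b in buckets}

# A 4x4 bucket range query over 4 disks is spread across all of them:
print(disks_touched([(0, 3), (0, 3)], num_disks=4))   # {0, 1, 2, 3}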

12.
The basics of reliable distributed storage networks   Cited: 2 (self-citations: 0, by others: 2)
《IT Professional》2004,6(3):18-24
Because of storage protocols that operate over extended distances, various distributed storage applications that improve the efficiency and reliability of data storage are now possible. Distributed storage applications improve efficiency by allowing any network server to transparently consolidate and access data stored in multiple physical locations. Remote backup and mirroring improve the system's reliability by copying critical data. These processes improve efficiency by eliminating backup downtime and manual backup operations. Business continuity and disaster recovery capabilities enable enterprises to recover quickly and transparently from system failure or data loss. Storage protocols and gateway devices enable rapid and transparent data transfer between mainframe applications and open-systems applications. NAS applications provide shared file access for clients using standard LAN-based technology, and can integrate with SAN architectures to provide truly distributed network capabilities. All these distributed storage network applications enable IT managers to improve data availability and reliability while minimizing management overhead and costs.

13.
Research on P2P persistent storage   Cited: 29 (self-citations: 0, by others: 29)
Tian Jing, Dai Yafei. Journal of Software (软件学报), 2007, 18(6): 1379-1399
The P2P (peer-to-peer) organizational model has become an important form for a new generation of Internet applications, bringing better scalability, fault tolerance and high performance. P2P storage systems have long been a focus of the research community and are regarded as one of the most promising P2P applications. Persistent data storage is the key problem constraining the development of P2P storage systems and also their main research challenge. This paper surveys the state of the art of P2P storage systems and of the techniques related to persistent storage. It first outlines the basic technical components of P2P storage systems and their advantages in different application environments, and introduces the fundamental persistence techniques of data redundancy, data dissemination, error detection and redundant-data maintenance. Within a common research framework, well-known P2P storage systems and the persistence techniques they use are then reviewed. The various techniques are surveyed and compared in detail, the environments they suit and their strengths and weaknesses are analyzed, and open problems and directions for future research are pointed out.

14.
Chen Zhen, Liu Wenjie, Zhang Xiao, Bu Hailong. Journal of Computer Applications (计算机应用), 2017, 37(5): 1217-1222
The massive growth of data in big data and cloud computing environments poses great challenges for the capacity and architecture of storage systems. Storage systems are evolving toward large capacity, low cost and high performance, yet no single storage device, such as the traditional hard disk drive (HDD), the solid-state drive (SSD) or non-volatile random access memory, can satisfy all of these requirements because of inherent physical limitations. Combining different storage media into an efficient hybrid storage system is a good solution, and the SSD, as a highly reliable, low-power, high-performance device, is increasingly used in such systems. Combining SSDs with traditional disks exploits the high performance of the SSD and the low cost and large capacity of the HDD, offering users large storage space and high system performance while reducing cost. This paper reviews the state of research on SSD/HDD hybrid storage systems and classifies existing systems; it then discusses the key techniques and shortcomings of the two most popular architectures, the cache architecture and the same-tier (peer-device) architecture; finally, it summarizes SSD/HDD hybrid storage techniques and outlines key issues and directions for future research in this field.

15.
With the development of commercial spaceflight, deploying spaceborne computing units at lower cost requires evaluating the reliability of different redundant computing-unit architectures while weighing design cost, expected lifetime, real-time requirements and system complexity. Existing research on spaceborne computing units built from high-performance commercial off-the-shelf (COTS) devices is mostly oriented toward engineering applications and lacks a comparison of the reliability of different architectures. This paper first introduces the topology and operating modes of several redundant computing-unit architectures; it then derives their fault state-transition diagrams from those operating modes; finally, it applies Markov model theory to build reliability models of these architectures, taking into account the influence of both the failure rate and the repair rate on system reliability, and evaluates the reliability metrics of each architecture against a hypothetical long-duration mission. The simulation results provide design support for building lower-cost spaceborne computing units from COTS devices.
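As a worked example of the kind of failure-rate/repair-rate analysis involved (a standard textbook duplex model, not one of the paper's architectures), consider a two-unit redundant computing unit with per-unit failure rate lambda and a single repair facility with rate mu; states count the number of failed units, and the steady-state availability follows from the birth-death balance equations.

def duplex_availability(lam, mu):
    # States: 0 (both up) -> 1 at rate 2*lam, 1 -> 2 at rate lam, repairs at
    # rate mu; the system is down only in state 2.  Detailed balance gives
    # pi1 = (2*lam/mu)*pi0 and pi2 = (lam/mu)*pi1.
    r = lam / mu
    pi0 = 1.0 / (1.0 + 2.0 * r + 2.0 * r * r)
    pi1 = 2.0 * r * pi0
    return pi0 + pi1          # probability that at least one unit is up

# Assumed illustrative rates: one failure per 10,000 h, one repair per 10 h.
print(duplex_availability(lam=1e-4, mu=1e-1))   # ~0.999998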

16.
17.
Storage management in embedded real-time database systems   Cited: 18 (self-citations: 0, by others: 18)
Considering the application environment of embedded systems and the data characteristics of real-time database systems, and by comparing the performance of various memory devices, this paper proposes a storage architecture for embedded real-time database systems. On this basis, for memories with asymmetric read/write speeds and a limited number of write cycles, it proposes a suitable physical file system structure and database index structure that reduce the number of writes to improve data access speed, and spread writes evenly across the memory blocks to increase data reliability and extend the memory's lifetime. The placement of data with different characteristics within this storage hierarchy is also discussed briefly.

18.
Voxel-based approaches are today's standard to encode volume data. Recently, directed acyclic graphs (DAGs) were successfully used for compressing sparse voxel scenes as well, but they are restricted to a single bit of (geometry) information per voxel. We present a method to compress arbitrary data, such as colors, normals, or reflectance information. By decoupling geometry and voxel data via a novel mapping scheme, we are able to apply the DAG principle to encode the topology, while using a palette-based compression for the voxel attributes, leading to a drastic memory reduction. Our method outperforms existing state-of-the-art techniques and is well-suited for GPU architectures. We achieve real-time performance on commodity hardware for colored scenes with up to 17 hierarchical levels (a 128K³ voxel resolution), which are stored fully in core.
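The palette idea can be sketched in a few lines (an illustration of the general principle, not the paper's encoder): distinct attribute values are stored once in a palette and each voxel keeps only a small index, packed with ceil(log2(palette size)) bits.

import math

def palette_encode(attributes):
    # Store each distinct attribute once; replace every voxel attribute with
    # a compact palette index.
    palette = sorted(set(attributes))
    index_of = {value: i for i, value in enumerate(palette)}
    bits = max(1, math.ceil(math.log2(len(palette))))
    indices = [index_of[a] for a in attributes]
    return palette, indices, bits

colors = [(255, 0, 0), (255, 0, 0), (0, 255, 0), (255, 0, 0), (0, 0, 255)]
palette, indices, bits = palette_encode(colors)
print(len(palette), bits)   # 3 distinct colors -> 2 bits per voxel index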

19.
Energy costs have become increasingly problematic for high performance processors, but the rising number of cores on-chip offers promising opportunities for energy reduction. Further, emerging architectures such as heterogeneous multicores present new opportunities for improved energy efficiency. While previous work has presented novel memory architectures, multithreading techniques, and data mapping strategies for reducing energy, consideration to thread generation mechanisms that take into account data locality for this purpose has been limited. This study presents methodologies for the joint partitioning of data and threads to parallelize sequential codes across an innovative heterogeneous multicore processor called the Passive/Active Multicore (PAM) for reducing energy consumption from on-chip data transport and cache access components while also improving execution time. Experimental results show that the design with automatic thread partitioning offered reductions in energy-delay product (EDP) of up to 48%.  相似文献   

20.
How to place data effectively in a large-scale networked storage system is a challenging problem. This work studies data placement that fully accounts for the importance of the stored data and the differing reliability of the storage devices, while preserving the fairness, redundancy and adaptivity of the storage. The data placement optimization problem over storage devices with different reliability levels is formulated as an integer program and shown to be NP-hard. A block-level, reliability-oriented tiered data placement algorithm is proposed that guarantees the fairness, redundancy and adaptivity of the placement, and its soundness and feasibility are analyzed.
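One plausible integer-programming formulation of such a placement problem, given as a sketch of the general form rather than the paper's exact model, uses binary variables x_{ij} that assign a replica of data block i to device j:

\max \sum_i w_i \sum_j r_j x_{ij}
\quad \text{s.t.} \quad \sum_j x_{ij} = k_i \;\; \forall i, \qquad
\sum_i s_i x_{ij} \le C_j \;\; \forall j, \qquad
x_{ij} \in \{0, 1\}

where w_i is the importance of block i, r_j the reliability of device j, s_i the block size, C_j the capacity of device j and k_i the required number of replicas. Capacity and replication constraints of this kind are what make the placement problem NP-hard and motivate the heuristic, reliability-tiered algorithm proposed in the paper.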
