Similar Documents
 Found 19 similar documents (search time: 203 ms)
1.
Traditional network file systems have difficulty meeting the I/O demands of high-performance computing systems; the parallel network file system PNFS can effectively resolve the scalability, availability, and performance problems of traditional network file systems. This paper first designs the architecture of PNFS, separating the metadata server from the storage servers and eliminating the I/O bottleneck caused by a centralized-server structure. A PNFS prototype is then benchmarked, and the results are compared with those of NFS in the same environment. The results show that PNFS gives clients the ability to access file data in parallel, delivers high I/O read/write bandwidth and low access latency, and achieves linear scalability between client I/O bandwidth and the number of storage servers, so it can well satisfy the I/O demands of high-performance computing.

2.
熊安萍  刘进进  邹洋 《计算机工程与设计》2012,33(7):2678-2682,2689
An object-based storage file system stripes large files across multiple storage nodes to obtain better parallel I/O performance and higher system throughput. Existing placement strategies, however, do not fully consider dynamic changes in the load of the storage objects themselves, which limits resource utilization. To address this, a simple, flexible, and efficient load-balancing placement strategy is proposed and implemented that takes real-time variation of each storage object's space usage and I/O load into account. Experimental results show that the strategy effectively improves the resource utilization and throughput of the object storage system while preserving efficient read/write performance.
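The abstract does not give the exact load metric; as a minimal sketch under assumed definitions, a placement function might score each storage node by a weighted combination of space utilization and recent I/O load and place the next object on the lowest-scoring node (the NodeLoad fields and the weights are illustrative, not taken from the paper).

```python
from dataclasses import dataclass

@dataclass
class NodeLoad:
    node_id: str
    space_used: float    # fraction of capacity in use, 0.0-1.0
    io_util: float       # recent I/O utilization, 0.0-1.0 (e.g. a moving average)

def pick_storage_node(nodes, w_space=0.4, w_io=0.6):
    """Return the node with the lowest combined load score.

    The weights favor I/O load over space usage so that a hot but
    half-empty node is not loaded further; both weights are assumptions.
    """
    return min(nodes, key=lambda n: w_space * n.space_used + w_io * n.io_util)

# Example: the next object would be placed on oss2, the least loaded node.
nodes = [NodeLoad("oss0", 0.70, 0.80),
         NodeLoad("oss1", 0.50, 0.65),
         NodeLoad("oss2", 0.55, 0.30)]
print(pick_storage_node(nodes).node_id)
```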

3.
The parallel file system is the storage subsystem of a parallel computing system, and I/O performance is an important aspect of parallel computing research. This paper analyzes the difficulties in parallel file system I/O research, studies the I/O characteristics and key technologies of parallel file systems, and points out future research directions for parallel file system I/O performance, providing an important reference for the design and optimization of parallel computing systems.

4.
Application and Analysis of Queueing Theory in Computer Storage System Performance   Cited by: 1 (self-citations: 0, citations by others: 1)
周薇  罗荣桂  田磊 《微计算机信息》2006,22(21):271-272
I/O response time is a key metric of storage system performance. Based on a storage area network built with Fibre Channel disk arrays, this paper uses queueing theory to analyze the effect of different prefetching strategies on the disk array's I/O response time and proposes methods to improve storage system performance.
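The abstract does not give the paper's exact queueing model; as a minimal illustration of the kind of analysis involved, the standard M/M/1 response-time formula shows how a prefetching strategy that reduces the request rate reaching the disks shortens response time (the notation λ, μ, h is assumed here, not taken from the paper).

```latex
% Illustration only: a single-station M/M/1 model, not the paper's exact model.
% \lambda = request arrival rate, \mu = disk-array service rate, h = prefetch hit ratio.
\[
  T_{\text{response}} = \frac{1}{\mu - \lambda}, \qquad \lambda < \mu
\]
% A prefetching strategy that raises the hit ratio h reduces the arrival rate
% actually reaching the disks, \lambda_{\text{disk}} = (1-h)\lambda, and thus
% lowers the response time to 1/(\mu - (1-h)\lambda).
```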

5.
Scientific computing datasets consist of data and metadata. Typically the data blocks are large while the metadata items are small. Traditional parallel file systems on high-performance computers read and write large contiguous data efficiently, but cannot efficiently handle large numbers of small metadata accesses. Once large data accesses and small metadata accesses are mixed, the metadata severely interferes with parallel I/O and degrades performance. This paper therefore proposes a dual-path parallel I/O method that handles data and metadata separately. The method builds two storage levels inside the high-level I/O library, an in-memory file system plus the parallel file system, and migrates scientific-computing metadata between the two storage resources in parallel. This both reduces the I/O latency of frequently accessed metadata and changes the storage characteristics and layout of the scientific data, thereby improving I/O efficiency for scientific applications, especially read-intensive ones such as data analysis and visualization. Tests show that the dual-path parallel I/O method improves write performance by 8% to 13% and read performance by 89% to 1.01×.
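A minimal sketch of the data/metadata split described above, assuming a POSIX-style setup in which small metadata objects go to an in-memory file system (e.g. a tmpfs mount) and bulk data goes to the parallel file system; the paths, size threshold, and helper names are assumptions, not the paper's API.

```python
import os
import shutil

MEMFS_DIR = "/dev/shm/sci_meta"     # assumed in-memory file system mount
PFS_DIR = "/lustre/sci_data"        # assumed parallel file system mount
META_LIMIT = 64 * 1024              # treat writes below 64 KiB as metadata (assumption)

def write_object(name: str, payload: bytes) -> str:
    """Write small metadata to the memory FS, bulk data to the parallel FS."""
    root = MEMFS_DIR if len(payload) < META_LIMIT else PFS_DIR
    os.makedirs(root, exist_ok=True)
    path = os.path.join(root, name)
    with open(path, "wb") as f:
        f.write(payload)
    return path

def flush_metadata():
    """Migrate buffered metadata from the memory FS to the parallel FS in one pass,
    so the parallel file system sees few, larger transfers instead of many small ones."""
    os.makedirs(PFS_DIR, exist_ok=True)
    for entry in os.listdir(MEMFS_DIR):
        shutil.move(os.path.join(MEMFS_DIR, entry), os.path.join(PFS_DIR, entry))
```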

6.
In the parallel computing environment of large-scale cluster systems, I/O efficiency has always been a key factor limiting overall system performance, and parallel file system technology is one of the effective ways to resolve the I/O bottleneck. This paper surveys the current development and the types of parallel file systems, describes the architecture of the SNFS parallel file system and the principle of its DLC (Distributed LAN Client) load-balancing technology, and presents how SNFS load balancing is deployed in a large-scale cluster environment. Finally, an analysis of a practical application demonstrates the advantages of this technology in improving I/O performance.

7.
The random I/O latency of disks constrains storage system performance, and solid state disks (SSDs), which offer high random I/O performance, have become a focus of attention. This paper analyzes the different performance characteristics of three classes of devices: hard disks, Flash-based SSDs, and DRAM-based SSDs; reviews the state of research on SSD storage acceleration; proposes an SSD storage acceleration architecture for the Lustre file system; describes and analyzes the composition and principles of each module; and proposes an object migration policy.

8.
High-performance computing systems require a reliable and efficient parallel file system. The Lustre cluster file system is a typical object-storage-based cluster file system suited to aggregate I/O on large data volumes: large-file I/O achieves very high bandwidth, but small-file I/O performance is poor. Targeting the two aspects of Lustre's design that hurt small-file I/O, this paper proposes a Filter Cache approach: a cache for small-file I/O data is added to Lustre's OST component so that small-file I/O on the OST side proceeds asynchronously, reducing the user-perceived completion time of small-file I/O operations and improving their performance.
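This is not Lustre's OST code; as a user-level sketch of the idea, small writes are acknowledged once queued in a cache and flushed by a background thread, while large writes bypass the cache. The threshold, class name, and backend callback are assumptions.

```python
import queue
import threading

SMALL_IO_LIMIT = 64 * 1024   # assumed threshold for "small file" I/O

class FilterCache:
    """User-level sketch of an OST-side write cache for small I/O (not Lustre code)."""

    def __init__(self, backend_write):
        self.backend_write = backend_write        # function(path, data) doing the real I/O
        self.pending = queue.Queue()
        self.worker = threading.Thread(target=self._flush_loop, daemon=True)
        self.worker.start()

    def write(self, path, data):
        if len(data) <= SMALL_IO_LIMIT:
            self.pending.put((path, data))        # acknowledge now, flush later
        else:
            self.backend_write(path, data)        # large I/O goes straight to storage

    def _flush_loop(self):
        while True:
            path, data = self.pending.get()
            self.backend_write(path, data)        # asynchronous small-file flush
            self.pending.task_done()

    def drain(self):
        self.pending.join()                       # wait until all small writes are persisted

# Usage: cache = FilterCache(lambda p, d: open(p, "ab").write(d)); cache.write("x", b"hi"); cache.drain()
```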

9.
Research on Applications of the Novel Non-Volatile Phase Change Memory (PCM)   Cited by: 1 (self-citations: 0, citations by others: 1)
Parallel I/O techniques effectively optimize I/O performance but give little control over access latency. Phase change memory (PCM), a kind of storage class memory (SCM), is non-volatile, supports random reads and writes, and offers low latency, high throughput, small size, and low power consumption, providing the most direct and effective path to I/O performance optimization. This paper studies the characteristics of PCM and its open problems, surveys current progress in PCM applications, and, targeting the parallel I/O problem in high-performance computing, proposes a PCM-based hierarchical hybrid parallel storage model that can effectively improve the efficiency of parallel file system metadata service and parallel I/O throughput.

10.
Parallel I/O of sampled data limits the efficiency of some parallel applications. This paper designs and implements an aggregated parallel I/O method for sampled data: sampled data are buffered on the client side, merged to the output processes, and then written to files. To keep the stored sampled data consistent over long-running parallel jobs, the method monitors the application's state within the JASMIN framework and flushes or restores the data when the parallel program performs load balancing or restarts. During I/O, HDF5 chunked I/O is further used to improve read/write efficiency for column-stored data. Tests show that the new method not only scales well but also improves the parallel I/O efficiency of sampled data by more than 7.5× in parallel applications with complex features such as load balancing and restart.
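The paper applies HDF5 chunked I/O inside the JASMIN framework; purely as an illustration of the chunked, append-friendly layout involved, the same idea can be expressed with the h5py Python binding. The dataset name, column count, and chunk size below are assumptions.

```python
import numpy as np
import h5py

def open_sample_log(path, ncols=4, chunk_rows=1024):
    """Create a resizable, chunked dataset for appended sample records."""
    f = h5py.File(path, "w")
    dset = f.create_dataset("samples", shape=(0, ncols), maxshape=(None, ncols),
                            chunks=(chunk_rows, ncols), dtype="f8")
    return f, dset

def append_samples(dset, rows):
    """Append a block of merged sample rows; chunking keeps column-wise reads efficient."""
    n = dset.shape[0]
    dset.resize(n + rows.shape[0], axis=0)
    dset[n:] = rows

f, dset = open_sample_log("samples.h5")
append_samples(dset, np.random.rand(2048, 4))   # e.g. one aggregated flush
f.close()
```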

11.
The parallel file system Lustre delivers good coarse-grained I/O performance, but its fine-grained I/O performance is comparatively poor, so optimizing fine-grained I/O is key to improving overall system I/O performance. After studying and analyzing Lustre's I/O access patterns, its fine-grained I/O service flow, and its page replacement algorithm, this paper proposes the Fine Grained First (FGF) LRU algorithm. FGF-LRU retains the pages of fine-grained I/O in the page caches on the OST and client sides as long as possible, slowing the rate at which fine-grained I/O pages sink toward eviction and extending their residence time in main memory, which reduces the number of disk accesses and the associated overhead. Comparison and analysis of experimental data verify the effectiveness of FGF-LRU: fine-grained I/O performance improves without affecting coarse-grained I/O performance, thereby improving overall system I/O performance.
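The abstract states the policy but not its bookkeeping; a minimal sketch of that stated policy is shown below: on eviction, prefer the least-recently-used coarse-grained page over fine-grained ones, so fine-grained pages stay cached longer. The details of the published FGF-LRU algorithm may differ.

```python
from collections import OrderedDict

class FGFLRUCache:
    """Sketch of a 'fine-grained first' LRU: coarse-grained pages are evicted
    before fine-grained ones (not the exact published FGF-LRU algorithm)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> (data, fine_grained flag), LRU order

    def access(self, page_id, data, fine_grained):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)          # refresh LRU position
            return
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page_id] = (data, fine_grained)

    def _evict(self):
        # Prefer the least-recently-used coarse-grained page; fall back to plain LRU.
        for page_id, (_, fine) in self.pages.items():    # iterates from the LRU end
            if not fine:
                del self.pages[page_id]
                return
        self.pages.popitem(last=False)
```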

12.
Design of a Parallel File System   Cited by: 2 (self-citations: 0, citations by others: 2)
孙凝晖 《计算机学报》1994,17(12):938-945
In the design of massively parallel processing (MPP) supercomputers, improving I/O performance is as important as improving computing and communication capability. A parallel file system (PFS) distributes the file system and the disk blocks of files over multiple disks on multiple I/O nodes, turns file reads and writes on the compute nodes into multiple direct I/O requests for physical blocks, and uses read-ahead, preallocation, disk buffering, and asynchronous I/O to increase I/O concurrency. Under particular file usage patterns, which are also the dominant I/O patterns of MPP applications, it achieves very high I/O efficiency.
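As an illustration of how a striped layout turns a file access into a direct request to one I/O node, the sketch below uses generic round-robin striping; the specific layout of the PFS described in the paper may differ.

```python
def locate_block(offset: int, stripe_size: int, n_io_nodes: int):
    """Map a file offset to (I/O node, node-local stripe index, offset within stripe)
    under round-robin striping."""
    stripe_index = offset // stripe_size
    node = stripe_index % n_io_nodes             # which I/O node holds this stripe
    local_stripe = stripe_index // n_io_nodes    # position of the stripe on that node
    return node, local_stripe, offset % stripe_size

# A read at offset 5 MiB with 1 MiB stripes over 4 I/O nodes hits node 1:
print(locate_block(5 * 2**20, 2**20, 4))   # (1, 1, 0)
```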

13.
A repository for continuous media data differs from one for traditional text-based data in both storage space and streaming bandwidth requirements. File systems used for continuous media streams need to support large volumes and high bandwidth. We propose a scalable distributed continuous media file system built using autonomous disks. Autonomous disks are attached directly to the network and are able to perform lightweight processing. We discuss different ways to realize the autonomous disk and describe a prototype implementation on a Linux platform using PC-based hardware. We present the basic requirements of the continuous media file system, then the design methodology and a prototype Linux-based implementation of the distributed file system that supports those requirements. Experimental results on the performance of the file system prototyped using autonomous disks show that its performance scales linearly with the number of disks and the number of clients. The file system performs far better than NFS running on the same hardware platform and can deliver higher raw disk bandwidth to applications. We also present bandwidth- and time-sensitive read/write procedures for the file system and show that it can provide strict bandwidth guarantees for continuous media streams.

14.
This paper presents the design and performance of SPIFFI, a scalable high-performance parallel file system intended for use by extremely I/O intensive applications including “Grand Challenge” scientific applications and multimedia systems. This paper contains experimental results from a SPIFFI prototype on a 64 node/64 disk Intel Paragon. The results show that SPIFFI provides high performance and linear scaleup on real hardware. The paper also explains how shared file pointers (i.e., file pointers that are shared by multiple processes) can simplify the design of a parallel application. By sequentializing I/O accesses and by providing dynamic I/O load balancing, a shared file pointer may even improve an application's performance. This paper also presents the predictions of a SPIFFI simulator that we validated using the prototype. The simulator results show that SPIFFI continues to provide high performance even when it is scaled to configurations with as many as 128 disks or 256 compute nodes.
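This is not SPIFFI's implementation; as a minimal sketch of the shared-file-pointer idea, cooperating processes claim consecutive file regions with an atomic fetch-and-add, which sequentializes accesses and balances work dynamically. File name and region sizes are illustrative.

```python
import multiprocessing as mp

def claim_region(shared_offset, nbytes):
    """Atomically fetch-and-add the shared file pointer; each caller receives a
    distinct, consecutive region of the file (illustration, not SPIFFI code)."""
    with shared_offset.get_lock():
        start = shared_offset.value
        shared_offset.value += nbytes
    return start

def writer(shared_offset, path, payload):
    start = claim_region(shared_offset, len(payload))
    with open(path, "r+b") as f:
        f.seek(start)
        f.write(payload)

if __name__ == "__main__":
    path = "shared_output.bin"
    with open(path, "wb") as f:
        f.truncate(4 * 1024)                      # pre-size the file
    offset = mp.Value("q", 0)                     # the shared file pointer
    procs = [mp.Process(target=writer, args=(offset, path, bytes([i]) * 1024))
             for i in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
```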

15.
In petascale systems with a million CPU cores, scalable and consistent I/O performance is becoming increasingly difficult to sustain mainly because of I/O variability. The I/O variability is caused by concurrently running processes/jobs competing for I/O or a RAID rebuild when a disk drive fails. We present a mechanism that stripes across a selected subset of I/O nodes with the lightest workload at runtime to achieve the highest I/O bandwidth available in the system. In this paper, we propose a probing mechanism to enable application-level dynamic file striping to mitigate I/O variability. We implement the proposed mechanism in the high-level I/O library that enables memory-to-file data layout transformation and allows transparent file partitioning using subfiling. Subfiling is a technique that partitions data into a set of files of smaller size and manages file access to them, making data to be treated as a single, normal file to users. We demonstrate that our bandwidth probing mechanism can successfully identify temporally slower I/O nodes without noticeable runtime overhead. Experimental results on NERSC’s systems also show that our approach isolates I/O variability effectively on shared systems and improves overall collective I/O performance with less variation.
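The probing described above lives inside a high-level I/O library; as a sketch under assumed helpers, one could time a small probe write on each I/O node and stripe the file across the k fastest. `probe_write` is a hypothetical callback, not a real API of that library.

```python
import time

def probe_bandwidth(io_nodes, probe_write, probe_bytes=4 * 2**20):
    """Time a small probe write on each I/O node and return the nodes sorted fastest-first.
    `probe_write(node, nbytes)` is a hypothetical helper supplied by the I/O layer."""
    timings = []
    for node in io_nodes:
        t0 = time.perf_counter()
        probe_write(node, probe_bytes)
        timings.append((time.perf_counter() - t0, node))
    return [node for _, node in sorted(timings)]

def choose_stripe_set(io_nodes, probe_write, k):
    """Stripe over the k nodes that currently show the lightest load."""
    return probe_bandwidth(io_nodes, probe_write)[:k]
```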

16.
尹洋  刘振军  许鲁 《软件学报》2009,20(10):2752-2765
As the scale of computing grows, network storage systems are being applied ever more widely and face ever higher I/O performance demands. Under heavy storage-system load, it becomes practical to place a slower medium on the I/O path between the clients and the network storage system as a data cache. This paper designs and implements D-Cache, a block-level storage cache prototype based on disk media. The disk cache is managed with a two-level structure, and a corresponding block-level two-level cache management algorithm is proposed. The algorithm effectively resolves the cache-management difficulty caused by the slow response of disk media, and the use of a bitmap eliminates the copy-on-write overhead on disk-cache write misses. Tests of the prototype show that under heavy storage-server load, the cache system effectively improves overall system performance.
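This is not the D-Cache code; a minimal sketch of the bitmap idea is shown below: on a write miss the new sectors are written into the cache block and only their bits marked valid, so the block need not first be read from backing storage, and a later read fetches only the still-invalid sectors. Block and sector sizes are assumptions.

```python
SECTOR = 512
BLOCK_SECTORS = 8            # assumed 4 KiB cache block made of 8 sectors

class CacheBlock:
    def __init__(self):
        self.data = bytearray(SECTOR * BLOCK_SECTORS)
        self.valid = [False] * BLOCK_SECTORS      # per-sector validity bitmap

    def write(self, sector, payload):
        """Write-miss path: store one full sector (SECTOR bytes) and mark it valid;
        no read-modify-write of the whole block from backing storage is needed."""
        self.data[sector * SECTOR:(sector + 1) * SECTOR] = payload
        self.valid[sector] = True

    def read(self, sector, fetch_from_backend):
        """Serve valid sectors from the cache; fetch only missing sectors on demand."""
        if not self.valid[sector]:
            self.data[sector * SECTOR:(sector + 1) * SECTOR] = fetch_from_backend(sector)
            self.valid[sector] = True
        return bytes(self.data[sector * SECTOR:(sector + 1) * SECTOR])
```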

17.
Inverted file partitioning schemes in multiple disk systems   Cited by: 1 (self-citations: 0, citations by others: 1)
Multiple-disk I/O systems (disk arrays) have been an attractive approach to meeting the high-performance I/O demands of data-intensive applications such as information retrieval systems. When files are partitioned and distributed across multiple disks to exploit the potential for I/O parallelism, a balanced I/O workload distribution becomes important for good performance. Naturally, the performance of a parallel information retrieval system using an inverted file structure is affected by how the inverted file is partitioned. In this paper, we propose two different partitioning schemes for an inverted file system on a shared-everything multiprocessor machine with multiple disks. We study the performance of these schemes by simulation under a number of workloads in which the term frequencies in the documents, the term frequencies in the queries, the number of disks, and the multiprogramming level are varied.
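The abstract does not name its two schemes; as an illustration of the two classic alternatives usually compared, the sketch below shows term-based partitioning (a term's whole posting list on one disk) and document-based partitioning (each posting list split by document id across disks). The paper's exact schemes may differ.

```python
def partition_by_term(inverted_file, n_disks):
    """Term partitioning: a term's whole posting list lives on one disk."""
    disks = [dict() for _ in range(n_disks)]
    for term, postings in inverted_file.items():
        disks[hash(term) % n_disks][term] = postings
    return disks

def partition_by_document(inverted_file, n_disks):
    """Document partitioning: each posting list is split by document id,
    so a single query term can be served by all disks in parallel."""
    disks = [dict() for _ in range(n_disks)]
    for term, postings in inverted_file.items():
        for doc_id in postings:
            disks[doc_id % n_disks].setdefault(term, []).append(doc_id)
    return disks

# Tiny example: two terms, four documents, two disks.
ivf = {"parallel": [0, 1, 2, 3], "disk": [1, 3]}
print(partition_by_term(ivf, 2))
print(partition_by_document(ivf, 2))
```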

18.
One problem in facilitating data-intensive computing is how to effectively manage the massive amounts of data stored in a parallel I/O system, and the file assignment method plays a significant role in this data management. In the context of a parallel I/O system, however, most existing file assignment approaches share two limitations. First, most existing methods are designed for non-partitioned files, while files in a parallel I/O system are generally partitioned to provide aggregate bandwidth. Second, the file allocation metric used by most existing methods, e.g. service time, is difficult to determine in practice, and such metrics reflect only the static properties of a file. In this paper, a new metric, file access density, is proposed to capture the dynamic property of file access, i.e. the disk contention it induces. Based on this definition, the paper introduces a new static file assignment algorithm named MinCPP and its dynamic version DMinCPP, both of which aim to minimize disk contention. Furthermore, MinCPP and DMinCPP take file partitioning into consideration by trying to allocate partitions belonging to the same file onto different disks. Assuming that file request arrivals follow a Poisson process, we prove the effectiveness of the proposed schemes both analytically and experimentally. MinCPP can be applied to reorganize the files stored in a large-scale parallel I/O system, and DMinCPP can be integrated into file systems that allocate files dynamically in batches.
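A greedy sketch consistent with the description above: sort partitions by access density, always place the next one on the disk with the lowest accumulated density, and avoid co-locating partitions of the same file when possible. The published MinCPP/DMinCPP algorithms may differ in detail.

```python
def assign_partitions(partitions, n_disks):
    """partitions: list of (file_id, partition_id, access_density).
    Returns one partition list per disk. Greedy sketch, not the exact MinCPP."""
    disks = [{"load": 0.0, "files": set(), "parts": []} for _ in range(n_disks)]
    # Place the most contended partitions first.
    for file_id, part_id, density in sorted(partitions, key=lambda p: -p[2]):
        # Prefer disks holding no partition of this file; fall back to all disks.
        candidates = [d for d in disks if file_id not in d["files"]] or disks
        target = min(candidates, key=lambda d: d["load"])
        target["load"] += density
        target["files"].add(file_id)
        target["parts"].append((file_id, part_id))
    return [d["parts"] for d in disks]

# Example: two 2-partition files spread over three disks.
parts = [("A", 0, 0.9), ("A", 1, 0.7), ("B", 0, 0.5), ("B", 1, 0.2)]
print(assign_partitions(parts, 3))
```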

19.
曹立强  马捷 《计算机工程》2005,31(24):56-57,89
A parallel file system is the fast I/O facility of a high-performance computing system; its purpose is to provide parallel computing applications with fast input/output. This paper summarizes the read/write characteristics of parallel applications, the key problems involved, and the techniques commonly used in parallel file systems, and on that basis designs the Dawning Parallel File System (DPFS) for the Dawning high-performance server.
