Similar Documents
17 similar documents found (search time: 62 ms)
1.
Today, disk I/O performance improves far more slowly than that of CPUs, which follow Moore's law, and network I/O resources are scarce, so I/O often becomes the bottleneck in data processing. Hadoop stores data at the petabyte scale, which makes the I/O problem even more pronounced. Compression is an important I/O tuning technique: it reduces the I/O load and speeds up data transfer over the disk and the network. This paper first analyzes the characteristics of the compression algorithms available in Hadoop and derives a compression-usage strategy that helps Hadoop users decide how to apply compression, then validates and supplements the strategy experimentally. Following this strategy, some Hadoop applications improve their efficiency by 65% when compression is used appropriately.

2.
This paper analyzes the Windows I/O mechanism in detail, presents I/O simulation at the user, system, and driver levels, shares the lessons learned in I/O simulation through real cases and source code, and focuses on simulating the mouse and keyboard.
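As a hedged illustration of the user-level case discussed above (not the paper's own code), the following C sketch injects a keystroke and a mouse click through the Win32 `SendInput` API; the particular key and button chosen are placeholders.

```c
/* Minimal sketch of user-level keyboard and mouse simulation on Windows.
 * Assumes a Win32 build environment (MSVC or MinGW). */
#include <windows.h>

int main(void)
{
    INPUT in[4] = {0};

    /* Press and release the 'A' key. */
    in[0].type = INPUT_KEYBOARD;
    in[0].ki.wVk = 'A';
    in[1].type = INPUT_KEYBOARD;
    in[1].ki.wVk = 'A';
    in[1].ki.dwFlags = KEYEVENTF_KEYUP;

    /* Press and release the left mouse button at the current cursor position. */
    in[2].type = INPUT_MOUSE;
    in[2].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    in[3].type = INPUT_MOUSE;
    in[3].mi.dwFlags = MOUSEEVENTF_LEFTUP;

    SendInput(4, in, sizeof(INPUT));   /* inject all four input events */
    return 0;
}
```

System-level and driver-level simulation, as the paper notes, go through different interfaces and are not covered by this sketch.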

3.
This paper first introduces the relevant functions and implementation details of file I/O and of the standard I/O library, and puts forward a series of hypotheses about file read/write efficiency. It then designs experiments on Linux to verify these hypotheses, and finally compares the file I/O functions with the standard I/O library functions and summarizes the situations to which each is suited.
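The contrast examined here can be reproduced with a small experiment: the file I/O calls (`read`/`write`) enter the kernel on every request, while the standard I/O library (`fread`/`fwrite`) batches small requests in a user-space buffer. The sketch below, a simplified copy loop with a deliberately small record size, is one way such a comparison might be set up on Linux; file names and the record size are placeholders, and error handling is omitted for brevity.

```c
/* Sketch: copy a file with 16-byte records, once with read()/write()
 * (one system call per record) and once with fread()/fwrite()
 * (user-space buffering in the standard I/O library). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define REC 16

static void copy_unbuffered(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    char buf[REC];
    ssize_t n;
    while ((n = read(in, buf, REC)) > 0)      /* one system call per record */
        write(out, buf, (size_t)n);
    close(in);
    close(out);
}

static void copy_stdio(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    char buf[REC];
    size_t n;
    while ((n = fread(buf, 1, REC, in)) > 0)  /* mostly served from the stdio buffer */
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
}

int main(void)
{
    copy_unbuffered("input.dat", "copy1.dat");  /* time this ...      */
    copy_stdio("input.dat", "copy2.dat");       /* ... against this   */
    return 0;
}
```

Timing the two functions over the same input gives a rough picture of the buffering effect the paper's experiments quantify.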

4.
As networks of workstations (NOWs) are used ever more widely in scientific research, providing high-performance I/O for scientific computing on a NOW has become a new challenge. Based on the characteristics of NOWs, the authors designed and implemented a collective-I/O-based parallel I/O system tailored to NOWs that draws on the strengths of DDIO and two-phase I/O, thereby effectively addressing the problems of high bandwidth and low latency. Preliminary throughput tests of the system show good performance.

5.
As an important component of a computer system, the input/output (I/O) system strongly influences how well the CPU's performance can be exploited. This paper therefore begins with an overview of I/O systems, evaluates I/O system performance using both model-based simulation and actual measurement, and presents the corresponding analysis.

6.
While virtualization provides modern data centers with efficient server consolidation and flexible application deployment, it also places new demands on the design of server I/O systems: the existing architecture, in which I/O resources are tightly bound to individual servers, leads to rising cost, redundant resources, and complex I/O cabling. To address these problems, this paper proposes a multi-root I/O resource pooling method based on the single-root I/O virtualization (SR-IOV) protocol. A hardware-based multi-root inter-domain address and ID mapping mechanism allows multiple physical servers to share and multiplex the same I/O device, effectively reducing the number of devices and cables each server requires and further increasing server density. In addition, a virtual I/O device hot-plug technique and a multi-root sharing management mechanism enable real-time, dynamic allocation of virtual I/O resources among servers and improve resource utilization. The method was validated on a field-programmable gate array (FPGA) prototype; the evaluation shows that it achieves multi-root I/O virtualization and sharing while giving each root-node server I/O performance close to that of a locally attached device.

7.
Research on an I/O pipelining mechanism for network storage
Dividing I/O request processing into multiple stages offers a new way to bring pipelining into network storage. Every stage of I/O request processing passes or processes data through buffers (memory). Apart from the cut-through mode, most I/O request processing relies on store and forward (for example, the gathering/scattering and queueing of I/O commands), and a network storage I/O pipeline operating in store-and-forward mode exhibits some new characteristics as well as its own particular constraints. Studying the I/O pipelining mechanism therefore offers both guidance and practical value for improving the overall performance of network storage systems.
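As a loose illustration of the store-and-forward idea described above (not the mechanism proposed in the paper), the following C sketch splits request handling into two pipelined stages that hand requests over through a bounded in-memory buffer, so a receive stage and a commit stage can overlap; the stage names and request counts are placeholders.

```c
/* Sketch: a two-stage I/O pipeline. Stage 1 "receives" requests and stores
 * them in a bounded buffer; stage 2 forwards them to the backend. The
 * bounded buffer is the store-and-forward point between the stages.
 * Compile with: cc -std=c99 -pthread pipeline.c */
#include <pthread.h>
#include <stdio.h>

#define SLOTS 8
#define REQUESTS 32

static int ring[SLOTS];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *receive_stage(void *arg)
{
    (void)arg;
    for (int i = 0; i < REQUESTS; i++) {
        pthread_mutex_lock(&lock);
        while (count == SLOTS)                 /* buffer full: wait */
            pthread_cond_wait(&not_full, &lock);
        ring[tail] = i;                        /* store the incoming request */
        tail = (tail + 1) % SLOTS;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *forward_stage(void *arg)
{
    (void)arg;
    for (int i = 0; i < REQUESTS; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                     /* buffer empty: wait */
            pthread_cond_wait(&not_empty, &lock);
        int req = ring[head];                  /* take the oldest request */
        head = (head + 1) % SLOTS;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        printf("committed request %d\n", req); /* stand-in for backend I/O */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, receive_stage, NULL);
    pthread_create(&t2, NULL, forward_stage, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```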

8.
This paper gives a brief introduction to intelligent I/O technology and surveys its current state and trends.

9.
Parallel I/O methods based on MPI
Using the parallel I/O facilities of the MPI-2 specification and taking parallel matrix multiplication as an example, this paper compares the performance of parallel I/O with that of serial I/O and gives an application example of the parallel I/O approach.
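To make the comparison concrete, here is a hedged sketch of the parallel-I/O side: each MPI rank writes its own block of a result matrix directly to a shared file with MPI-2 file I/O, instead of funneling all data through rank 0 as a serial write would. The matrix dimension and file name are placeholders, and error handling is omitted.

```c
/* Sketch: each rank writes its block of rows of an N x N matrix to a
 * shared file using MPI-2 parallel I/O. Compile with mpicc, run with mpirun. */
#include <mpi.h>
#include <stdlib.h>

#define N 1024   /* placeholder matrix dimension, assumed divisible by nprocs */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int rows = N / nprocs;                       /* rows owned by this rank */
    double *block = malloc((size_t)rows * N * sizeof(double));
    for (int i = 0; i < rows * N; i++)
        block[i] = (double)rank;                 /* stand-in for computed results */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "result.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own byte offset; the writes proceed in parallel. */
    MPI_Offset offset = (MPI_Offset)rank * rows * N * sizeof(double);
    MPI_File_write_at(fh, offset, block, rows * N, MPI_DOUBLE,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(block);
    MPI_Finalize();
    return 0;
}
```

The serial baseline would instead gather all blocks to one rank and issue a single sequential write from that process.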

10.
Using the open-source Hadoop platform, this work focuses on the applicability of MapReduce to network-I/O-intensive programs with lightweight data sets. A typical application of this kind, an FTP site scanner, was re-engineered with the MapReduce programming model; a small Hadoop cluster was set up, the platform's default configuration was tuned, and the program was benchmarked with real data before and after the conversion. The experiments show that the MapReduce programming model has good distributed characteristics and is well suited to network-I/O-intensive programs with lightweight data sets.

11.
I/O performance of an RAID-10 style parallel file system
Without any additional cost, all the disks on the nodes of a cluster can be connected together through CEFT-PVFS, a RAID-10 style parallel file system, to provide multi-GB/s parallel I/O performance. I/O response time is one of the most important measures of quality of service for a client. When multiple clients submit data-intensive jobs at the same time, the response time experienced by the user is an indicator of the power of the cluster. In this paper, a queuing model is used to analyze in detail the average response time when multiple clients access CEFT-PVFS. The results reveal that response time is a function of several operational parameters: I/O response time decreases as the I/O buffer hit rate for read requests, the write buffer size for write requests, and the number of server nodes in the parallel file system increase, while a higher I/O request arrival rate leads to a longer I/O response time. On the other hand, the collective power of a large cluster supported by CEFT-PVFS is shown to be able to sustain a steady and stable I/O response time over a relatively large range of request arrival rates.
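The reported trends are consistent with what an elementary queueing approximation would predict. As a hedged sketch only, and not the model actually used in the paper, treating the file system as a single M/M/1 server with arrival rate $\lambda$ and effective service rate $\mu$ gives the mean response time

```latex
% M/M/1-style approximation (illustrative assumption, not the paper's model)
% \lambda : I/O request arrival rate
% \mu     : effective service rate of the parallel file system
T = \frac{1}{\mu - \lambda}, \qquad \lambda < \mu .
```

In such an approximation the effective service rate $\mu$ grows with a higher read-buffer hit rate, a larger write buffer, and more server nodes, so $T$ falls; conversely $T$ rises as the arrival rate $\lambda$ increases, matching the qualitative behavior described above.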

12.
In this paper, we study I/O server placement for optimizing parallel I/O performance on switch-based clusters, which typically adopt irregular network topologies to allow construction of scalable systems with incremental expansion capability. Finding an optimal solution to this problem is computationally intractable. We quantified the number of messages travelling through each network link by a workload function, and developed three heuristic algorithms to find good solutions based on the values of the workload function. The maximum-workload-based heuristic chooses the locations for I/O nodes so as to minimize the maximum value of the workload function. The distance-based heuristic aims to minimize the average distance between the compute nodes and I/O nodes, which is equivalent to minimizing the average workload on the network links. The load-balance-based heuristic balances the workload on the links based on a recursive traversal of the routing tree for the network. Our simulation results demonstrate the performance advantage of our algorithms over a number of algorithms commonly used in existing parallel systems. In particular, the load-balance-based algorithm is superior to the other algorithms in most cases, with improvement ratios of 10 to 95% in terms of parallel I/O throughput.
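To illustrate the flavor of such placement heuristics, the C sketch below greedily places I/O servers so that the total hop distance from every compute node to its nearest chosen I/O node is minimized. It is an illustration in the spirit of the distance-based idea, not the algorithm evaluated in the paper; the node count, the number of I/O servers, and the all-zero distance matrix are placeholders to be filled from the real topology.

```c
/* Sketch: greedy distance-minimizing placement of I/O servers on a cluster,
 * given a precomputed hop-distance matrix for the switch topology. */
#include <limits.h>
#include <stdio.h>

#define NODES 8
#define IO_NODES 2

/* dist[i][j]: hop count between node i and node j (placeholder values;
 * in practice derived from the routing tree of the network). */
static const int dist[NODES][NODES] = {{0}};

int main(void)
{
    int used[NODES] = {0};
    int best_to_io[NODES];              /* distance to the nearest chosen I/O node */
    for (int i = 0; i < NODES; i++)
        best_to_io[i] = INT_MAX;

    for (int k = 0; k < IO_NODES; k++) {
        long best_cost = -1;
        int best_node = -1;
        for (int c = 0; c < NODES; c++) {       /* try each candidate location */
            if (used[c])
                continue;
            long cost = 0;
            for (int i = 0; i < NODES; i++) {
                int d = dist[i][c] < best_to_io[i] ? dist[i][c] : best_to_io[i];
                cost += d;                      /* total distance if c were added */
            }
            if (best_node < 0 || cost < best_cost) {
                best_cost = cost;
                best_node = c;
            }
        }
        used[best_node] = 1;
        for (int i = 0; i < NODES; i++)         /* update nearest-distance table */
            if (dist[i][best_node] < best_to_io[i])
                best_to_io[i] = dist[i][best_node];
        printf("place I/O server %d at node %d\n", k, best_node);
    }
    return 0;
}
```

The maximum-workload and load-balance heuristics described in the paper would replace the total-distance cost with the maximum or the per-link balance of the workload function, respectively.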

13.
In this article, we present a popular lossless compression/decompression algorithm, GZIP, and a study of its implementation on an FPGA-based architecture, the ADM-XRC board from ALPHA DATA parallel system ltd. The algorithm is lossless and is applied to “bi-level” images of large size (A0 format). It ensures a minimum compression rate for the images we are considering, and aims to decrease storage requirements and transfer times, which are critical for wide-format printing systems. In the wide-format document industry, raster data are most of the time processed in an uncompressed format, in order to apply processing (P) before printing (p). An example of a copy chain is composed of a scanner, a set of processing operations, storage, a link and a printer. We propose to use a compressed format as the new data-flow representation to improve the performance of the printing system. For example, compression (C) is applied as soon as the data are produced by the scanner, and decompression (D) is performed at the last stage, before printing. The set of processing operations is applied to compressed images. The proposed architecture for the compressor is based on a hash table, and the decompressor is based on a parallel decoder of the Huffman codes. We implemented the proposed architecture for the compression and decompression algorithms on a Xilinx Virtex XCV 400 FPGA.
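For reference, the DEFLATE compression that GZIP wraps (and that the hardware above implements) is available in software through zlib; the hedged sketch below simply round-trips a buffer on the host to show the lossless property and the compression ratio on highly redundant data. The buffer contents and sizes are placeholders, and the convenience API used here produces a raw zlib stream rather than a full GZIP file (which adds a header and CRC around the same compressed data).

```c
/* Sketch: software round-trip of DEFLATE compression via zlib.
 * Link with -lz. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    /* Placeholder "bi-level image" data: a long run of identical bytes,
     * which compresses extremely well, much like large blank areas in
     * wide-format scans. */
    unsigned char raw[1 << 16];
    memset(raw, 0xFF, sizeof(raw));

    unsigned char packed[1 << 16];
    uLongf packed_len = sizeof(packed);
    if (compress2(packed, &packed_len, raw, sizeof(raw),
                  Z_BEST_COMPRESSION) != Z_OK)
        return 1;

    unsigned char restored[1 << 16];
    uLongf restored_len = sizeof(restored);
    if (uncompress(restored, &restored_len, packed, packed_len) != Z_OK)
        return 1;

    printf("original %zu bytes, compressed %lu bytes, lossless: %s\n",
           sizeof(raw), (unsigned long)packed_len,
           memcmp(raw, restored, sizeof(raw)) == 0 ? "yes" : "no");
    return 0;
}
```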

14.
This paper proposes an improvement to two-phase I/O, the algorithm that implements collective I/O in the MPI-IO library. By selecting a master liaison process to reduce the volume of inter-process communication in the first phase, the time the two-phase I/O method spends in communication is reduced and overall I/O performance is improved.
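For context, collective I/O is what an application reaches through MPI-IO's collective calls; the hedged sketch below shows such a call, where the library (typically via two-phase I/O) merges the ranks' interleaved small requests into a few large contiguous file accesses. The access pattern, chunk size, and file name are illustrative, not the workload studied in the paper.

```c
/* Sketch: an interleaved access pattern written with a collective MPI-IO
 * call. Internally the library may use two-phase I/O: ranks first exchange
 * data so that a few aggregator processes hold contiguous file regions,
 * then those aggregators issue large writes. Compile with mpicc. */
#include <mpi.h>

#define CHUNK 256          /* doubles per rank per round (placeholder) */
#define ROUNDS 4

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double buf[CHUNK];
    for (int i = 0; i < CHUNK; i++)
        buf[i] = rank;                       /* stand-in for computed data */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "interleaved.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    for (int r = 0; r < ROUNDS; r++) {
        /* Ranks write interleaved chunks; the collective call lets the
         * library coalesce them instead of issuing many small writes. */
        MPI_Offset off = ((MPI_Offset)r * nprocs + rank) * CHUNK * sizeof(double);
        MPI_File_write_at_all(fh, off, buf, CHUNK, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```

The improvement described in the abstract targets the first (data exchange) phase inside the library, which is invisible at this application-level interface.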

15.
The assembly-language subroutine library designed here can be called from assembly or C programs; it implements keyboard input and console display in different number bases.

16.
In this paper, we present a performance analysis of how effectively video server applications can be supported on personal computers (PCs) connected through a local area network (LAN). We considered both the standard 16-Mbit/s token ring and a 100-Mbit/s token ring, which follows closely the specifications for the Fiber Distributed Data Interface (FDDI). We examined three I/O architectures for a PC-based video server: an interrupt-driven I/O architecture, a peer-to-peer I/O architecture, and a concurrent, object-based I/O architecture that we proposed. The video server must deliver multiple MPEG-1 video streams simultaneously to multiple clients on the LAN. We found that the network protocol layers require a lot of processing power, and that an implementation of our proposed I/O architecture, which takes advantage of the available power of the host processor to off-load the I/O adapters, can deliver much better performance, and is more cost-effective, than the other I/O architectures in a video server environment.

17.
This paper describes how to develop a custom I/O Server using the Wonderware® FactorySuite™ 2000 I/O Server Toolkit from Wonderware Corporation (USA).
