Similar Literature
20 similar documents found (search time: 156 ms).
1.
To mitigate the impact of receiver-side packet reordering on transmission performance in concurrent multipath transfer (CMT) systems, a new sender-side data allocation scheme is proposed. The scheme predicts the forward transmission delay of packets from path bandwidth, round-trip time, and congestion window, and uses this delay as the metric for ranking the transmission priority of the paths in the system. According to path priority and the state of the send buffer, the sender assigns to each path only those packets in the send queue that will not arrive out of order at the receiver. Simulation results show that, compared with round-robin scheduling and the arrival-time matching load-balancing (ATLB) algorithm, the proposed sender-side allocation scheme effectively reduces the number of out-of-order packets at the receiver.
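As a rough illustration of the scheme in this abstract, the sketch below ranks paths by a predicted forward delay derived from RTT, congestion window, and bandwidth; the concrete formula and the Path fields are assumptions made for illustration, not the paper's exact model.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    bandwidth_bps: float   # estimated path bandwidth
    rtt_s: float           # measured round-trip time
    cwnd_bytes: float      # data currently admitted by the congestion window

def predicted_forward_delay(p: Path, packet_bytes: int = 1500) -> float:
    # Assumed model: one-way propagation (RTT/2) plus the queueing delay of
    # data already in flight, plus this packet's own transmission time.
    return p.rtt_s / 2 + (p.cwnd_bytes + packet_bytes) / p.bandwidth_bps

def rank_paths(paths):
    # Lower predicted forward delay -> higher transmission priority.
    return sorted(paths, key=predicted_forward_delay)

if __name__ == "__main__":
    paths = [
        Path("wlan", bandwidth_bps=20e6, rtt_s=0.040, cwnd_bytes=60_000),
        Path("lte",  bandwidth_bps=50e6, rtt_s=0.080, cwnd_bytes=120_000),
    ]
    for p in rank_paths(paths):
        print(p.name, round(predicted_forward_delay(p) * 1000, 2), "ms")
```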

2.
To cope with the growing number of servers and the dynamically changing load metrics in Web cluster systems, and to distribute requests evenly, a dynamic load-balancing algorithm based on space-filling curves is proposed. Exploiting the ability of space-filling curves to map high-dimensional data efficiently onto a one-dimensional index, the balancer uses the load metrics collected in real time to quickly locate the server with the best encoding. Experimental results show that the algorithm effectively shortens request response time and improves the overall performance of the cluster, with better balancing in large-scale cluster systems.
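A minimal sketch of folding several load metrics into a one-dimensional index with a space-filling curve; a Z-order (Morton) encoding and the specific metrics are assumptions, since the abstract does not name the curve it uses.

```python
def morton_encode(values, bits=8):
    """Interleave the bits of several metrics (already quantized to integers
    in [0, 2**bits)) into a single Z-order index."""
    code = 0
    dims = len(values)
    for bit in range(bits):
        for d, v in enumerate(values):
            code |= ((v >> bit) & 1) << (bit * dims + d)
    return code

def quantize(x, lo, hi, bits=8):
    # Map a raw metric (e.g. CPU %, memory %, connection count) to a bucket.
    x = min(max(x, lo), hi)
    return int((x - lo) / (hi - lo) * (2**bits - 1))

# Example: encode (cpu%, mem%, active connections) for two servers and pick
# the one with the smallest index, i.e. the "least loaded" under this ordering.
servers = {
    "s1": (35.0, 40.0, 120),
    "s2": (70.0, 55.0, 300),
}
codes = {
    name: morton_encode([quantize(c, 0, 100), quantize(m, 0, 100), quantize(n, 0, 1000)])
    for name, (c, m, n) in servers.items()
}
print(min(codes, key=codes.get))
```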

3.
In DVB-RCS satellite communication systems, existing bandwidth request algorithms mostly focus on improving queue delay and bandwidth utilization while ignoring storage optimization at the satellite terminal. To address this problem, a storage-optimized bandwidth request algorithm for DVB-RCS satellite communication systems is proposed. The algorithm controls the total amount of data in the send queue by predicting the volume of the arriving data stream, while balancing transmission delay against transmission efficiency. Simulations show that the algorithm keeps the system's storage usage under reasonable control and, compared with existing algorithms, performs equally well in optimizing bandwidth utilization and controlling delay jitter.

4.
曲乾聪  王俊 《计算机应用研究》2022,39(2):526-530+542
Traditional load-balancing algorithms cannot satisfy the demands of public-network digital trunking systems for highly concurrent user requests and fast call setup. To address this, a distributed dynamic load-balancing algorithm based on load feedback is proposed for public-network digital trunking, balancing load across the system and increasing user capacity. First, static and dynamic load monitoring mechanisms and metrics are established for the participating MCPTT servers. A weighted round-robin algorithm then assigns users to participating MCPTT servers, and composite load parameters are obtained from the processing of user requests. Based on the fed-back load metrics, the weights of the participating MCPTT servers are updated to adjust server load dynamically. Simulation results show that the algorithm balances load better than traditional algorithms and other dynamic-feedback algorithms, achieving a smaller load-imbalance degree and lower user request response latency.
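A hedged sketch of weighted round-robin with feedback-driven weight updates of the kind this abstract describes; the weight formula, the base weight, and the expanded-cycle implementation are illustrative assumptions (a smooth/interleaved WRR would spread picks more evenly, but this keeps the sketch short).

```python
import itertools

class FeedbackWeightedRR:
    """Weighted round-robin whose weights are refreshed from a composite load
    metric reported by each participating server (illustrative only)."""

    def __init__(self, servers):
        # servers: {name: initial_weight}
        self.weights = dict(servers)
        self._cycle = self._build_cycle()

    def _build_cycle(self):
        expanded = [name for name, w in self.weights.items()
                    for _ in range(max(1, round(w)))]
        return itertools.cycle(expanded)

    def pick(self):
        return next(self._cycle)

    def update_from_load(self, loads, base_weight=10):
        # loads: {name: composite load in [0, 1]}; lighter load -> larger weight.
        for name, load in loads.items():
            self.weights[name] = base_weight * (1.0 - load) + 1
        self._cycle = self._build_cycle()

balancer = FeedbackWeightedRR({"mcptt-1": 5, "mcptt-2": 5})
balancer.update_from_load({"mcptt-1": 0.2, "mcptt-2": 0.7})
print([balancer.pick() for _ in range(6)])
```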

5.
An anycast routing algorithm using a special composite distance
Anycast members are equivalent servers, and the quality of service of the served data matters more than that of the anycast datagrams carrying the requests. The anycast routing algorithm using a special composite distance (ASCD) selects paths using a distance composed of hop count, reverse transmission delay, reverse available bandwidth, and server load. Unlike other algorithms, ASCD evaluates these metrics in the reverse direction of the path, i.e. from the anycast destination node (server) toward the anycast source node (client), rather than the conventional direction from the anycast datagram's source to its destination. The paths and anycast members located by ASCD give the service data requested by anycast datagrams access to more path resources, and ASCD also balances server load to some extent.
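A small sketch of a composite distance over the four metrics named in the abstract; the weights and normalization constants are assumptions, and ASCD's actual combination rule may differ.

```python
def composite_distance(hops, reverse_delay_ms, reverse_bw_mbps, server_load,
                       weights=(1.0, 1.0, 1.0, 1.0)):
    """Combine hop count, reverse delay, reverse available bandwidth, and
    server load into one distance. Larger reverse bandwidth should shorten
    the distance, so its reciprocal is used; constants are illustrative."""
    w_h, w_d, w_b, w_l = weights
    return (w_h * hops
            + w_d * reverse_delay_ms / 10.0
            + w_b * 100.0 / max(reverse_bw_mbps, 0.001)
            + w_l * server_load * 10.0)

candidates = {
    "server-a": composite_distance(hops=3, reverse_delay_ms=20,
                                   reverse_bw_mbps=80, server_load=0.3),
    "server-b": composite_distance(hops=2, reverse_delay_ms=35,
                                   reverse_bw_mbps=40, server_load=0.7),
}
print(min(candidates, key=candidates.get))
```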

6.
A prediction-based adaptive load-balancing algorithm
石磊  何增辉 《计算机应用》2010,30(7):1742-1745
Workload characteristics strongly affect the performance of load-balancing scheduling algorithms in Web server clusters. Based on an analysis of the role workload characteristics play in scheduling, a prediction-based adaptive load-balancing algorithm (RR_MMMCS-A-P) is proposed. By monitoring the workload, it predicts the arrival rate and size of subsequent requests and quickly adjusts the corresponding parameters to balance load among the servers in the cluster. Experiments show that, for both compute-intensive and data-intensive tasks, RR_MMMCS-A-P shortens the average response time compared with CPU-based and CPU-MEM-based scheduling algorithms.
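The abstract does not give the prediction model, so the sketch below uses an exponentially weighted moving average as an illustrative stand-in for predicting request arrival rate and request size.

```python
class EwmaPredictor:
    """Exponentially weighted moving average used as a simple predictor of
    the next observation (request rate or request size)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.value = None

    def update(self, observation):
        if self.value is None:
            self.value = observation
        else:
            self.value = self.alpha * observation + (1 - self.alpha) * self.value
        return self.value

rate = EwmaPredictor()
size = EwmaPredictor()
for observed_rate, observed_size in [(120, 8_000), (150, 12_000), (90, 6_500)]:
    predicted_rate = rate.update(observed_rate)
    predicted_size = size.update(observed_size)
print(round(predicted_rate, 1), "req/s,", round(predicted_size), "bytes")
```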

7.
Research on a virtual network mapping algorithm based on load balancing
To ensure that virtual network requests are mapped successfully without overloading parts of the substrate network and degrading mapping performance, virtual link mapping must be load-balanced appropriately. This paper slices the bandwidth resources of virtual links, selects substrate paths with an augmented-subgraph path method, normalizes the resources of disjoint paths, and on this basis designs a load-balancing-based virtual network mapping algorithm. Finally, simulations compare the load-balancing algorithm with the path-splitting algorithm and the K-shortest-path algorithm. The results show that the load-balancing algorithm outperforms the other two in request acceptance ratio, cost, and revenue of virtual network mapping.

8.
Optimization of the data scheduling algorithm in concurrent multipath transfer
余东平  张剑峰  王聪  李宁 《计算机应用》2014,34(5):1227-1231
To address the receive-buffer blocking and path load imbalance of concurrent multipath transfer over the Stream Control Transmission Protocol (CMT-SCTP) in heterogeneous wireless networks, an improved round-robin data scheduling algorithm is proposed. The algorithm estimates network conditions from the send queue information and congestion status of each path, and assigns each path a transmission load proportional to its condition, shortening the average queuing delay of packets in the receive buffer and reducing the number of out-of-order packets at the receiver. Simulation results show that the improved round-robin scheduling effectively raises the transmission efficiency of CMT-SCTP in heterogeneous wireless networks, relieves receive-buffer blocking, and adapts well to different network scenarios.

9.
许德力  宋飞  高德云  鄢欢 《计算机应用》2010,30(9):2515-2518
With the diversification of network access technologies and the falling price of access devices, more and more mobile terminals are multihomed, yet the existing TCP/IP protocol stack can only transmit data over a single interface and cannot fully exploit multihoming. Building on the Stream Control Transmission Protocol (SCTP), a path selection module and a fast path switching module are designed to realize a wireless concurrent multipath transfer (Wireless CMT) solution, enabling multihomed terminals to transmit data over multiple interfaces in parallel. The bandwidth aggregation and fast path switching of Wireless CMT are tested and verified in a real wireless network environment. The results show that Wireless CMT clearly improves both the transmission efficiency and the reliability of wireless networks.

10.
李瑞芬  葛倩 《计算机仿真》2021,38(2):253-257
To address the low bandwidth utilization, high data transmission delay, and high packet loss of existing integrated dispatch methods for OMS distribution networks, an integrated OMS distribution network control algorithm based on big-data scheduling is proposed. The priority level of each task request in the OMS network is computed from the task's scarcity and urgency, and the upload capability of each node is computed with a historical-statistics method combined with the factors affecting node capability. On the basis of task priority and node upload capability, the available bandwidth and forward transmission delay of each path in the OMS network are computed; when packet loss occurs, a maximum-priority rule is applied and the transmission path is reselected, avoiding out-of-order arrival at the receiving end. The computed results, combined with the path reselection strategy, realize integrated control of the OMS distribution network. Experimental results show that the proposed algorithm achieves high bandwidth utilization, low data transmission delay, and low packet loss.
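A minimal sketch of turning task scarcity and urgency into a request priority, as the abstract describes; the weights and the linear combination are assumptions.

```python
def request_priority(scarcity, urgency, w_scarcity=0.6, w_urgency=0.4):
    """Combine task scarcity and urgency (both normalized to [0, 1]) into a
    single priority score; the weighting is an illustrative assumption."""
    return w_scarcity * scarcity + w_urgency * urgency

tasks = {
    "meter-readings": request_priority(scarcity=0.2, urgency=0.9),
    "topology-sync":  request_priority(scarcity=0.8, urgency=0.5),
}
for name, prio in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(prio, 2))
```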

11.
This paper presents a new scheme of I/O scheduling on storage servers of distributed/parallel file systems, for yielding better I/O performance. To this end, we first analyze read/write requests in the I/O queue of the storage server (we name them block I/Os) by using our proposed technique of horizontal partition. Then, all block requests are divided into multiple groups on the basis of their offsets. That is to say, all requests related to the same chunk file are grouped together, and then satisfied within the same time slot between opening and closing the target chunk file on the storage server. As a result, the time required to complete block I/O requests can be significantly decreased, because of fewer file operations on the corresponding chunk files at the low-level file systems of server machines. Furthermore, we introduce an algorithm to rate a priority for each group of block I/O requests, and the storage server then dispatches groups of I/Os by following the priority order. Consequently, the applications having higher I/O priorities, e.g. those with fewer I/O operations and smaller amounts of involved data, can finish at an earlier time. We implement a prototype of this server-side scheduling in the PARTE file system to demonstrate the feasibility and applicability of the proposed scheme. Experimental results show that the newly proposed scheme can achieve better I/O bandwidth and less I/O time, compared with the strategy of First Come First Served as well as other server-side I/O scheduling approaches.
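A hedged sketch of the grouping step: block I/Os are bucketed by the chunk file they touch and the groups are dispatched in priority order; the priority rule used here (fewer requests and less data first) is an illustrative stand-in for the paper's rating algorithm.

```python
from collections import defaultdict

def group_and_dispatch(block_ios):
    """block_ios: list of (chunk_file, offset, size, app_id).
    Group requests by chunk file, rate each group, and return the groups in
    dispatch order."""
    groups = defaultdict(list)
    for req in block_ios:
        chunk_file = req[0]
        groups[chunk_file].append(req)

    def priority(reqs):
        total_bytes = sum(size for _, _, size, _ in reqs)
        return (len(reqs), total_bytes)  # smaller tuples dispatched first

    return sorted(groups.values(), key=priority)

ios = [
    ("chunk_007", 0,       4096,  "appA"),
    ("chunk_007", 4096,    4096,  "appA"),
    ("chunk_012", 1 << 20, 65536, "appB"),
]
for group in group_and_dispatch(ios):
    print([r[0] for r in group])
```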

12.
The Hadoop Distributed File System (HDFS) is designed for storing and managing large files; storing and processing massive numbers of small files consumes large amounts of NameNode memory and access time, which becomes an important factor limiting HDFS performance. For the massive small files in multimodal medical data, a storage optimization method based on double-layer hash encoding and HBase is proposed. When merging small files, an extendible hash function is used to build the index-file buckets, so the index file can grow dynamically as needed and supports file appends. Within each bucket, an MWHC hash function stores the position of each file's index information in the index file; when a file is accessed, only the index information of the corresponding bucket, rather than that of all files, needs to be read, so a file can be located in O(1) time, improving lookup efficiency. To meet the storage needs of multimodal medical data, file index information is stored in HBase, with a flag column identifying the modality of each medical record, which simplifies the management of data of different modalities and speeds up file reads. To further optimize storage performance, an LRU-based metadata prefetching mechanism is established, and the merged files are compressed with the LZ4 algorithm. Comparisons of file access performance and NameNode memory usage show that the proposed algorithm outperforms the original HDFS, HAR, MapFile, TypeStorage, and ...

13.
In recent years data grids have been deployed and grown in many scientific experiments and data centers. The deployment of such environments has allowed grid users to gain access to a large amount of distributed data. Data replication is a key issue in a data grid and should be applied intelligently because it reduces data access time and bandwidth consumption for each grid site. This area is therefore very challenging and provides much scope for improvement. In this paper, we introduce a new dynamic data replication algorithm named Popular File Group Replication (PFGR), which is based on three assumptions: first, users in a grid site (Virtual Organization) have similar interests in files; second, file accesses exhibit temporal locality; and third, all files are read-only. Based on file access history and the first assumption, PFGR builds a connectivity graph for a group of dependent files in each grid site and replicates the most popular group of files to the requester grid site. After that, when a user of that grid site needs some of those files, they are available locally. The simulation results show that our algorithm increases performance by minimizing the mean job execution time and bandwidth consumption, and avoids unnecessary replication.
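A small sketch of building a co-access (connectivity) graph from file access history and extracting file groups, in the spirit of PFGR; the edge-count threshold and the connected-component grouping rule are assumptions.

```python
from collections import defaultdict
from itertools import combinations

def build_file_groups(access_history, min_weight=2):
    """access_history: list of per-job file sets accessed at this grid site.
    Build a co-access graph, keep edges seen at least `min_weight` times, and
    return connected components as candidate file groups."""
    edge_count = defaultdict(int)
    for files in access_history:
        for a, b in combinations(sorted(set(files)), 2):
            edge_count[(a, b)] += 1

    adjacency = defaultdict(set)
    for (a, b), w in edge_count.items():
        if w >= min_weight:
            adjacency[a].add(b)
            adjacency[b].add(a)

    groups, seen = [], set()
    for node in adjacency:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            cur = stack.pop()
            if cur in component:
                continue
            component.add(cur)
            stack.extend(adjacency[cur] - component)
        seen |= component
        groups.append(component)
    return groups

history = [{"f1", "f2"}, {"f1", "f2", "f3"}, {"f4"}, {"f2", "f3"}]
print(build_file_groups(history))
```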

14.
This paper studies the service time of file accesses in depth and proposes a parallel I/O file allocation algorithm, a heuristic file-classification allocation strategy, which assigns each data file to a disk according to similar access service times while keeping the load roughly balanced. Experimental comparison between the heuristic strategy and the existing greedy file allocation method shows that, under heavy load, access response time improves by about 30%, and the higher the data access rate, the more pronounced the performance gain from the heuristic file-classification allocation strategy.

15.
A middleware is proposed to optimize the file fetch process in a transparent computing (TC) platform. A single TC server receives file requests for large-scale distributed operating systems, applications, or user data from multiple clients. In consideration of the limited size of the server's memory and the dependencies among files, this work proposes a middleware that provides a file fetch sequence satisfying two conditions: (1) each client, upon receiving any file, is able to load it directly without waiting for prerequisite files (i.e. "receive and load"); and (2) the server achieves optimization in reducing the overall file fetch time cost. The paper first addresses the features of the valid file fetch sequence generating problem in the middleware; the method solves the concurrency control problem when file fetches are required by multiple clients. It then explores methods to determine the time cost of a file fetch sequence. Based on the established model, we propose a heuristic and greedy (HG) algorithm. According to the simulation results, we conclude that the HG algorithm is able to reduce the overall file fetch time by roughly 50% in the best cases compared with the time cost of traditional approaches.
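A hedged sketch of a "receive and load" fetch sequence: every file is emitted only after its prerequisites, with a greedy smallest-ready-file tie-break standing in for the paper's HG algorithm, which is not specified in the abstract.

```python
import heapq

def fetch_sequence(files, deps):
    """files: {name: size}; deps: {name: set of files that must be sent first}.
    Emit a sequence in which every file appears after all of its prerequisites,
    greedily preferring the smallest ready file."""
    remaining = {f: set(deps.get(f, set())) for f in files}
    ready = [(size, name) for name, size in files.items() if not remaining[name]]
    heapq.heapify(ready)
    order = []
    while ready:
        _, name = heapq.heappop(ready)
        order.append(name)
        for other, pre in remaining.items():
            if name in pre:
                pre.discard(name)
                if not pre:
                    heapq.heappush(ready, (files[other], other))
    return order

files = {"kernel.img": 8_000, "libc.so": 2_000, "app.bin": 500}
deps = {"app.bin": {"libc.so", "kernel.img"}, "libc.so": {"kernel.img"}}
print(fetch_sequence(files, deps))
```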

16.
High-performance computing systems require a reliable and efficient parallel file system. Lustre is a typical object-storage-based cluster file system well suited to aggregate I/O on large volumes of data: large-file I/O achieves very high bandwidth, but small-file I/O performance is poor. Targeting the two aspects of Lustre's design that work against small-file I/O, a Filter Cache approach is proposed: a cache for small-file I/O data is placed in Lustre's OST component so that small-file I/O on the OST side proceeds asynchronously, reducing the user-perceived completion time of small-file I/O operations and improving small-file I/O performance.

17.
Intelligent Web acceleration based on network performance: caching and prefetching
Web traffic accounts for a large share of network traffic; when the bandwidth cannot be enlarged, techniques are needed to use the available bandwidth sensibly and improve network performance. This paper studies intelligent Web acceleration techniques based on network performance metrics such as RTT (round-trip time). Based on an analysis of the traffic at Web proxy servers and measurements of network RTT, an intelligent prefetch control technique and a new cache replacement method are proposed. Simulations of the new algorithm show that it raises the cache hit ratio. The study also shows that prefetching improves the response speed of the service and effectively improves Web access performance without noticeably increasing network load.

18.
In many-task computing (MTC), applications such as scientific workflows or parameter sweeps communicate via intermediate files; application performance strongly depends on the file system in use. The state of the art uses runtime systems providing in-memory file storage that is designed for data locality: files are placed on those nodes that write or read them. With data locality, however, task distribution conflicts with data distribution, leading to application slowdown, and worse, to prohibitive storage imbalance. To overcome these limitations, we present MemFS, a fully symmetrical, in-memory runtime file system that stripes files across all compute nodes, based on a distributed hash function. Our cluster experiments with Montage and BLAST workflows, using up to 512 cores, show that MemFS has both better performance and better scalability than the state-of-the-art, locality-based file system, AMFS. Furthermore, our evaluation on a public commercial cloud validates our cluster results. On this platform MemFS shows excellent scalability up to 1024 cores and is able to saturate the 10G Ethernet bandwidth when running BLAST and Montage.
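A minimal sketch of hash-based striping across all compute nodes in the spirit of MemFS; the hash function, stripe size, and naming are assumptions, not MemFS's actual implementation.

```python
import hashlib

def stripe_location(filename: str, stripe_index: int, num_nodes: int) -> int:
    """Map one stripe of a file to a compute node with a distributed hash, so
    that stripes of every file are spread symmetrically across all nodes."""
    digest = hashlib.sha1(f"{filename}:{stripe_index}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_nodes

def place_file(filename: str, size_bytes: int, stripe_bytes: int, num_nodes: int):
    stripes = (size_bytes + stripe_bytes - 1) // stripe_bytes
    return [stripe_location(filename, i, num_nodes) for i in range(stripes)]

# A 10 MB intermediate file striped in 1 MB units over a 16-node cluster.
print(place_file("montage/intermediate_042.fits", 10 * 2**20, 2**20, 16))
```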

19.
With the increase in the popularity and size of personal computer clusters, message passing between nodes has become an important issue because of the high failure rate of networks. A file access in a cluster file system often contains several sub-operations, each of which includes one or more network transmissions; any network failure makes the file system service unavailable. In this paper, we describe a highly reliable message-passing mechanism (HR-NET), which tolerates both software and hardware network failures. HR-NET provides fine-grained, connection-level failover across redundant communication paths. With it, the file system can keep passing messages because HR-NET handles failures automatically, either by recovering from network failures or by failing over to a backup; it therefore shields the cluster file system's requests and data transmissions from network failures. Load balancing of messages is also achieved to relieve network traffic. For transmission timeouts, HR-NET proposes priority-based message scheduling which dynamically manages messages in an appropriate order to tolerate request-response failures between clients and servers. HR-NET is implemented on top of the standard network protocol stack. Performance results show that HR-NET can deliver almost the full underlying network bandwidth, with an average throughput loss of 6.17%, and provides fast recovery. Experiments with a cluster file system show that the overall performance degradation due to HR-NET failover is below 8%, while reliability is greatly enhanced.

20.
We study the broadcast scheduling problem in which clients send their requests to a server in order to receive files available on the server; the server may be scheduled so that several requests are satisfied in one broadcast. When files are transmitted over computer networks, broadcasting them in fragments provides flexibility in broadcast scheduling that allows per-user response time to be optimized. The broadcast scheduling algorithm is then in charge of determining the number of segments of each file and their order of transmission in each round of transmission. In this paper, we obtain a closed-form approximation formula for the optimal number of segments for each file, aiming at minimizing the total response time of requests. The obtained formula is a function of various parameters, including those of the underlying network as well as those of the requests arriving at the server. Based on the approximation formula, we propose an algorithm for file broadcast scheduling whose total response time closely conforms to the optimum. We use extensive simulation and numerical study to evaluate the proposed algorithm, which reveals the high accuracy of the analytical approximation. We also investigate the impact of the various headers that different network protocols add to each file segment. Our segmentation approach is examined for scenarios with file sizes in the range of 100 KB to 1 GB. The results show that for this range of file sizes the segmentation approach stays within 13% of the optimum total response time on average, and its accuracy grows as the file size increases. Moreover, the proposed segmentation leads to high goodput of the scheduling algorithm.
