Similar Articles
18 similar articles found (search time: 578 ms)
1.
Delay Tolerant Networks (DTNs) are characterized by intermittent connectivity and limited storage capacity, so their scarce network resources are easily exhausted, causing congestion and degrading network performance. To address this problem, a congestion control strategy based on message redundancy and node buffer residual rate, RBCCS (message redundancy and node buffer residual rate-based congestion control strategy), is proposed on top of the Epidemic routing algorithm. The strategy uses the sending node's own buffer free ratio as a threshold and delivers a message only to neighbor nodes whose buffer free ratio exceeds that threshold, avoiding blind flooding. In addition, a notion of message redundancy is introduced that jointly considers a message's remaining lifetime, the number of times it has been forwarded, and the time at which it was received. Buffer management is then optimized according to message redundancy: when congestion occurs, the messages with the highest redundancy are dropped first, so that the congested node frees enough space to accommodate new messages. Simulation results show that the Epidemic routing algorithm with this strategy reduces average delay by 6.8%, improves message delivery ratio by 15.8%, and reduces the overhead ratio by 14.4%.
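
A minimal sketch of the two decisions RBCCS makes as described above: forward a copy only to neighbors whose buffer free ratio exceeds the sender's own, and drop the most redundant messages first under congestion. All field names, weights, and the redundancy formula are illustrative assumptions, not the authors' actual definitions.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Message:
    msg_id: str
    size: int
    ttl_remaining: float      # seconds of lifetime left
    forward_count: int        # times this copy has been relayed
    received_at: float = field(default_factory=time.time)

def redundancy(msg: Message, now: float) -> float:
    """Illustrative redundancy score: older, heavily forwarded,
    short-lived messages score higher and are dropped first."""
    age = now - msg.received_at
    return msg.forward_count + age / 60.0 - msg.ttl_remaining / 600.0

class Node:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer: list[Message] = []

    def free_ratio(self) -> float:
        used = sum(m.size for m in self.buffer)
        return 1.0 - used / self.capacity

    def should_forward_to(self, neighbor: "Node") -> bool:
        # RBCCS rule: only hand a copy to neighbors with more free
        # buffer than the sender itself (threshold = own free ratio).
        return neighbor.free_ratio() > self.free_ratio()

    def accept(self, msg: Message) -> None:
        # Drop the most redundant messages until the new one fits.
        now = time.time()
        while self.buffer and sum(m.size for m in self.buffer) + msg.size > self.capacity:
            victim = max(self.buffer, key=lambda m: redundancy(m, now))
            self.buffer.remove(victim)
        self.buffer.append(msg)
```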

2.
All_to_All is an important collective operation. Current commercial InfiniBand networks lack an effective congestion control mechanism. Experiments on two typical All_to_All algorithms over InfiniBand show that, when transferring messages larger than 32 KB, these algorithms cause severe network congestion, leaving bandwidth utilization at only 30% to 70%. We attempt to reduce congestion by splitting large messages into small ones and scheduling the transmission of the small messages. A reliable connection is established between every pair of processes, and each connection maintains a counter of outstanding send requests. When the counter exceeds a threshold, the link between the two processes is considered congested, and no new send requests are posted to that connection's send queue, preventing congestion from spreading to the whole network. Experimental results show that the optimized algorithm mitigates network congestion and improves bandwidth utilization by more than 10% over the existing algorithms, and by up to 20%.
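
A simplified sketch of the described idea: split each large All_to_All transfer into fixed-size chunks and stop posting new sends on a connection once its count of outstanding requests crosses a threshold. The chunk size, the threshold, and the `post_send`/`poll_completions` callbacks are placeholders for the real InfiniBand verbs layer, which the abstract does not spell out.

```python
from collections import defaultdict, deque

CHUNK_SIZE = 32 * 1024      # assumed split granularity (32 KB)
MAX_OUTSTANDING = 4         # assumed per-connection congestion threshold

def chunked_all_to_all(my_rank, data_per_peer, post_send, poll_completions):
    """data_per_peer: dict peer_rank -> bytes to send to that peer.
    post_send(peer, chunk) asynchronously posts one chunk;
    poll_completions() returns the peer ranks whose oldest posted
    chunk has completed.  Both are assumed hooks."""
    pending = {p: deque(buf[i:i + CHUNK_SIZE]
                        for i in range(0, len(buf), CHUNK_SIZE))
               for p, buf in data_per_peer.items() if p != my_rank}
    outstanding = defaultdict(int)   # per-connection in-flight counter

    while any(pending.values()) or any(outstanding.values()):
        # Post new chunks only on connections below the threshold,
        # so a congested link does not keep absorbing more work.
        for peer, chunks in pending.items():
            while chunks and outstanding[peer] < MAX_OUTSTANDING:
                post_send(peer, chunks.popleft())
                outstanding[peer] += 1
        for peer in poll_completions():
            outstanding[peer] -= 1
```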

3.
VIA (Virtual Interface Architecture) is a user-level communication protocol that reduces software overhead. Avionics systems place strict real-time requirements on their transmission networks. Based on an analysis of the VIA principles and with minor hardware modifications, VIA is introduced into the avionics network, and a streamlined VIA-like communication protocol is implemented for the "Pioneer" (先锋) fiber-optic transmission network interface card used in avionics. Following the basic principles of VIA and exploiting the characteristics of the Pioneer NIC, the protocol raises the communication rate by reducing the number of traps into the kernel, eliminating data copies as far as possible, and shortening the critical path of data transmission. Experiments verify that introducing VIA greatly reduces the software overhead of communication, lowers communication latency, and improves the real-time performance of the communication network.

4.
To address the lack of stable end-to-end communication paths between user terminals in complex battlefield environments, a Delay Tolerant Network (DTN) clustering routing algorithm based on Vehicular Ad Hoc Network (VANET) communication terminals and motion information, CVCTM, is proposed. First, a clustering algorithm based on cluster-head election is studied. Then, an intra-cluster routing algorithm for the source vehicle is developed based on hop count, forwarding mode, and geographic location. Next, inter-cluster routing for the source vehicle is realized by introducing a waiting time, a retransmission-count threshold, and downstream cluster heads. Finally, the VANET communication terminal selects the best way to communicate with the superior command post. Simulation results on the ONE platform show that, compared with the Ad hoc On-demand Distance Vector (AODV) routing protocol, CVCTM increases the message delivery ratio by nearly 5%, reduces network overhead by nearly 10%, and reduces the number of cluster re-formations by nearly 25%; compared with the Cluster Based Routing Protocol (CBRP) and the Dynamic Source Routing (DSR) protocol, it increases the message delivery ratio by nearly 10%, reduces network overhead by nearly 25%, and reduces cluster re-formations by nearly 40%. CVCTM effectively reduces network overhead and cluster re-formations while increasing the message delivery ratio.
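
The abstract only names the ingredients of CVCTM (cluster-head election, hop count, geography, waiting time, retry threshold), so the following is a generic cluster-head election sketch under assumed fields and scoring weights, not the paper's actual metric.

```python
import math

def elect_cluster_head(vehicles):
    """vehicles: list of dicts with assumed fields
    'id', 'x', 'y', 'speed', 'battery'.  Pick the node closest to the
    cluster centroid and moving most like the group as cluster head."""
    cx = sum(v["x"] for v in vehicles) / len(vehicles)
    cy = sum(v["y"] for v in vehicles) / len(vehicles)
    avg_speed = sum(v["speed"] for v in vehicles) / len(vehicles)

    def score(v):
        dist = math.hypot(v["x"] - cx, v["y"] - cy)
        # Lower score = better head: central, moving with the group,
        # well powered.  Weights are illustrative only.
        return dist + 2.0 * abs(v["speed"] - avg_speed) - 0.5 * v["battery"]

    return min(vehicles, key=score)["id"]
```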

5.
Wireless multi-hop networks rely on multi-node relaying to deliver information and, because they need no pre-deployed infrastructure, have become an important communication mode in special scenarios in both military and civilian domains. To build routes in complex and harsh environments, source nodes usually flood route request (RREQ) packets over the whole network to raise the success rate of constructing multi-hop forwarding paths. However, network-wide flooding causes redundant forwarding and overlapping of messages, which increases node energy consumption, lowers channel utilization, and raises the probability of packet collisions and network congestion; in severe cases the network may be paralyzed and rendered useless. This paper designs a message forwarding model for wireless multi-hop networks based on Bayesian probability theory, which uses node density and posterior probability to reduce unnecessary forwarding while preserving network connectivity. NS2 simulation results show that the proposed Bayesian probabilistic forwarding mechanism effectively reduces the number of rebroadcasts of broadcast packets. Compared with similar algorithms, it lowers energy consumption and routing overhead and improves the packet delivery ratio while essentially maintaining network throughput, providing technical support for the deployment of future wide-area, large-scale, dynamic multi-hop networks.
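
A toy illustration of density-driven probabilistic rebroadcast: the denser the neighborhood and the more duplicate copies already overheard, the lower the posterior probability that this node's rebroadcast is still needed. The specific prior and likelihood terms below are assumptions for illustration; the paper's actual Bayesian model is not given in the abstract.

```python
import random

def rebroadcast_probability(neighbor_count: int, duplicates_heard: int,
                            base_prior: float = 0.9) -> float:
    """Crude posterior-style update: start from a prior that forwarding
    is useful, then discount it for each duplicate already overheard and
    for high local density (a neighbor will likely cover the area)."""
    likelihood = 0.5 ** duplicates_heard               # evidence against usefulness
    density_penalty = 1.0 / (1.0 + 0.2 * max(neighbor_count - 2, 0))
    p = base_prior * likelihood * density_penalty
    return max(p, 0.05)                                # keep a floor for connectivity

def handle_rreq(neighbor_count: int, duplicates_heard: int) -> bool:
    """Decide whether to rebroadcast an RREQ instead of always flooding."""
    return random.random() < rebroadcast_probability(neighbor_count,
                                                     duplicates_heard)
```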

6.
夏汉铸  崔晓燕 《测控技术》2015,34(3):101-104
Considering the characteristics of wireless mesh networks, this paper analyzes congestion control strategies for wireless mesh networks, grades the degree of congestion, and proposes a congestion control algorithm for wireless mesh networks, RICC. In this algorithm, a mobile node sends different congestion notification messages according to its level of congestion, and the mobile nodes that receive a notification dynamically adjust their data sending rate to achieve congestion control. Simulations verify that the algorithm improves the performance of wireless mesh networks.
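
A bare-bones sketch of the grading-and-notification idea: map queue occupancy to a congestion level, advertise that level to neighbors, and have senders scale their rate according to the advertised level. The level boundaries and scaling factors are illustrative assumptions, not RICC's actual parameters.

```python
def congestion_level(queue_len: int, queue_cap: int) -> int:
    """Grade congestion into levels 0 (none) .. 3 (severe)."""
    occupancy = queue_len / queue_cap
    if occupancy < 0.5:
        return 0
    if occupancy < 0.7:
        return 1
    if occupancy < 0.9:
        return 2
    return 3

# Assumed rate multipliers applied by a sender when it receives a
# congestion notification carrying the neighbor's level.
RATE_SCALE = {0: 1.0, 1: 0.75, 2: 0.5, 3: 0.25}

def adjust_send_rate(current_rate_kbps: float, advertised_level: int) -> float:
    return current_rate_kbps * RATE_SCALE[advertised_level]
```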

7.
The impact of communication on the performance of cluster parallel computing   Cited by: 1 (self-citations: 1; citations by others: 1)
This paper analyzes the communication/computation overlap model and the LogGP model, and points out how each communication parameter affects parallel performance. Combining this with the characteristics of parallel programs, it introduces five communication approaches commonly used to improve parallel performance in cluster environments: using a high-speed network, using a user-level communication protocol, exploiting intra-SMP communication, dynamically prefetching or migrating data, and coalescing messages before sending. The performance impact of each approach is measured in detail, and its characteristics and scope of applicability are analyzed. Using a high-speed network is the most common method; it improves performance markedly and suits all kinds of applications. Parallel programs that transfer large numbers of small messages should additionally use a user-level communication protocol. For a particular class of applications, coalescing messages yields the largest improvement. Dynamic prefetching or migration of data and intra-SMP communication should be used with caution, since these two methods are effective only under specific conditions.
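
For reference, a small helper that estimates the one-way time of a k-byte message under the LogGP model mentioned above, using latency L, per-message overhead o, and per-byte gap G (the small-message gap g is left out for simplicity). The parameter values in the example are made up.

```python
def loggp_time(k_bytes: int, L: float, o: float, G: float) -> float:
    """Estimated one-way time for a k-byte message under LogGP:
    sender overhead + per-byte gap for the remaining bytes +
    network latency + receiver overhead (all in microseconds)."""
    return o + (k_bytes - 1) * G + L + o

# Example with made-up parameters: 10 us latency, 2 us CPU overhead,
# 0.01 us per byte (~100 MB/s effective bandwidth).
if __name__ == "__main__":
    for size in (64, 1024, 64 * 1024):
        print(size, "bytes ->", round(loggp_time(size, L=10.0, o=2.0, G=0.01), 2), "us")
```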

8.
In message-passing parallel programming, interprocessor communication consumes a large amount of time, so reducing communication overhead is critical. Noting that network traffic contains large numbers of small messages, this paper applies the idea of data coalescing and proposes a scheme to reduce the communication overhead of a parallel program for the vibrating-string problem, deriving a formula for the coalescing size that gives the best performance. Experimental results show that the scheme effectively reduces communication overhead in parallel computation, and the scheme can also be applied to other parallel computing problems.
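
A generic sender-side coalescing buffer in the spirit of the scheme: accumulate small messages and flush them as one transfer once a size limit is reached, trading a little latency for far fewer sends. The flush threshold here is arbitrary; the paper derives an optimal value, which the abstract does not reproduce, and the `send_fn` hook is assumed.

```python
class CoalescingSender:
    """Accumulate small messages and send them in one batch."""

    def __init__(self, send_fn, flush_bytes: int = 4096):
        self.send_fn = send_fn          # assumed underlying send(bytes) hook
        self.flush_bytes = flush_bytes  # illustrative threshold, not the paper's optimum
        self.pending: list[bytes] = []
        self.pending_size = 0

    def send(self, payload: bytes) -> None:
        self.pending.append(payload)
        self.pending_size += len(payload)
        if self.pending_size >= self.flush_bytes:
            self.flush()

    def flush(self) -> None:
        if self.pending:
            # Length-prefix each message so the receiver can split the batch.
            batch = b"".join(len(p).to_bytes(4, "big") + p for p in self.pending)
            self.send_fn(batch)
            self.pending.clear()
            self.pending_size = 0
```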

9.
Existing opportunistic network routing algorithms based on bargaining games suffer from several problems: excessive control overhead caused by too many interaction rounds between nodes, extra overhead from requesting useless messages, delays caused by the low probability that the two players reach a deal, and extra overhead when the remaining hop count of a message in the summary vector (SV) list drops to 1. To address these problems, an efficient opportunistic network routing algorithm, EORB, is proposed. Through mechanisms such as adaptively compacting packet digests, adaptively merging SV-DP messages with purchase-request messages, and a game strategy that considers the payoffs of both buyer and seller, the algorithm reduces redundant overhead, accelerates message forwarding, and improves the message arrival rate. Simulation results show that the algorithm effectively improves the delivery success rate and reduces system overhead and the average end-to-end delay of messages.

10.
孙三山  汪帅  樊自甫 《计算机应用》2016,36(7):1784-1788
To address the fact that traditional data center networks are highly prone to congestion, a congestion control routing algorithm based on flow scheduling cost is designed under the Software Defined Networking (SDN) architecture. First, elephant and mice flows on the congested link are distinguished, the path cost weight of every equal-cost path of each elephant flow is computed, and the path with the smallest weight is chosen as the candidate rescheduling path. Then, the flow scheduling cost is defined jointly by the change in path cost after rescheduling and the proportion of bandwidth occupied by the flow, and the flow with the smallest scheduling cost is finally selected for rescheduling. Simulation results show that the algorithm reduces the load on congested links when congestion occurs and, compared with a congestion control algorithm that only performs flow path selection, raises link utilization and shortens flow transmission time, making better use of network link resources.
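
A sketch of the selection logic described above: for each elephant flow on the congested link, find its cheapest equal-cost alternative path, combine the path-cost change with the flow's bandwidth share into a scheduling cost, and move the flow with the smallest cost. The weights and data shapes are assumptions.

```python
def pick_flow_to_reschedule(flows, link_capacity, alpha=0.5, beta=0.5):
    """flows: non-empty list of dicts with assumed fields
       'id', 'rate' (bps on the congested link),
       'paths': list of (path, cost) equal-cost alternatives,
       'current_cost': cost of the path currently in use.
    Returns (flow_id, best_path) with the lowest scheduling cost."""
    best = None
    for f in flows:
        path, cost = min(f["paths"], key=lambda pc: pc[1])   # cheapest alternative
        cost_delta = cost - f["current_cost"]                 # path-cost change
        bw_share = f["rate"] / link_capacity                  # bandwidth fraction
        sched_cost = alpha * cost_delta + beta * bw_share     # illustrative combination
        if best is None or sched_cost < best[0]:
            best = (sched_cost, f["id"], path)
    return best[1], best[2]
```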

11.
With the increasing uniprocessor and symmetric multiprocessor computational power available today, interprocessor communication has become an important factor that limits the performance of clusters of workstations/multiprocessors. Many factors including communication hardware overhead, communication software overhead, and the user environment overhead (multithreading, multiuser) affect the performance of the communication subsystems in such systems. A significant portion of the software communication overhead belongs to a number of message copying operations. Ideally, it is desirable to have a true zero-copy protocol where the message is moved directly from the send buffer in its user space to the receive buffer in the destination without any intermediate buffering. However, due to the fact that message-passing applications at the send side do not know the final receive buffer addresses, early arrival messages have to be buffered at a temporary area. In this paper, we show that there is a message reception communication locality in message-passing applications. We have utilized this communication locality and devised different message predictors at the receiver sides of communications. In essence, these message predictors can be efficiently used to drain the network and cache the incoming messages even if the corresponding receive calls have not yet been posted. The performance of these predictors, in terms of hit ratio, on some parallel applications is quite promising and suggests that prediction has the potential to eliminate most of the remaining message copies. We also show that the proposed predictors do not have sensitivity to the starting message reception call, and that they perform better than (or at least equal to) our previously proposed predictors. Copyright © 2002 John Wiley & Sons, Ltd.
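
As a concrete illustration of exploiting message-reception locality, here is a simple cycle predictor: it remembers the recent sequence of (sender, tag) arrivals and, if that sequence repeats with a fixed period, predicts the next arrival so a receive buffer can be prepared early. This is a generic predictor sketch, not necessarily one of the predictors proposed in the paper.

```python
class CyclePredictor:
    """Predict the next incoming (sender, tag) by assuming the recent
    reception sequence repeats with a fixed period."""

    def __init__(self, max_period: int = 8):
        self.history: list[tuple[int, int]] = []
        self.max_period = max_period

    def observe(self, sender: int, tag: int) -> None:
        self.history.append((sender, tag))
        if len(self.history) > 4 * self.max_period:
            self.history = self.history[-4 * self.max_period:]

    def predict(self):
        """Return the predicted next (sender, tag), or None if no cycle fits."""
        h = self.history
        for period in range(1, min(self.max_period, len(h) // 2) + 1):
            # A period fits if the last 2*period receptions repeat exactly.
            if h[-period:] == h[-2 * period:-period]:
                return h[-period]     # the cycle wraps around to this element
        return None
```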

12.
Existing BPLC (broadband power line communication) network-formation protocols are redundant in both the number and the timing of the association confirmation messages they return. By studying the BPLC message exchange process, an efficient network-formation protocol based on adaptive multicast is proposed. It adaptively aggregates association confirmation messages and sends them by multicast, reducing control overhead while speeding up the delivery of some association confirmations. Theoretical analysis shows the effectiveness of the protocol. Simulation results show that, compared with the existing BPLC network-formation protocol, the proposed protocol...

13.
Group communication protocols (GCPs) play an important role in the design of modern distributed systems. A typical GCP exchanges control messages to provide message delivery guarantees, and a key point in the configuration of such a protocol is to establish the right trade-off between message overhead and delivery latency. This trade-off becomes an even greater challenge in systems where computing resources and application requirements may change at runtime. In such scenarios, the configuration of a GCP must be continuously re-adjusted to attain certain performance goals, or to adapt to current resource availability. This paper addresses this challenge by proposing self-managing mechanisms based on feedback control theory to a GCP especially designed to be self-manageable; in the proposed protocol, message overhead and delivery latency can be adjusted at runtime to follow some new operating set-point. The evaluation performed under varied scenarios shows the effectiveness of our approach.
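
A minimal illustration of the feedback-control idea: measure delivery latency, compare it with the set-point, and adjust the control-message batching interval (which trades overhead against latency) with a simple proportional-integral controller. The gains, bounds, and the knob being tuned are assumptions; the paper's actual controller design is not detailed in the abstract.

```python
class BatchIntervalController:
    """PI controller that tunes how long a GCP batches control messages:
    a larger interval means less message overhead but higher delivery latency."""

    def __init__(self, latency_setpoint_ms: float,
                 kp: float = 0.2, ki: float = 0.05,
                 min_interval_ms: float = 1.0, max_interval_ms: float = 100.0):
        self.setpoint = latency_setpoint_ms
        self.kp, self.ki = kp, ki
        self.min_i, self.max_i = min_interval_ms, max_interval_ms
        self.integral = 0.0
        self.interval = 10.0          # initial batching interval (ms), assumed

    def update(self, measured_latency_ms: float) -> float:
        error = self.setpoint - measured_latency_ms
        self.integral += error
        # If latency is below the set-point (positive error) we can afford
        # to batch longer and save overhead; otherwise shrink the interval.
        self.interval += self.kp * error + self.ki * self.integral
        self.interval = max(self.min_i, min(self.max_i, self.interval))
        return self.interval
```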

14.
Jia W., Kaiser J., Nett E. IEEE Micro, 1996, 16(2): 59-67
Based on a logical token ring, this communication protocol ensures total message ordering and atomic delivery, so all group members maintain an identical view of ordered events. RMP (Reliable Multicast Protocol) is an efficient multicast protocol for general distributed applications based on the logical token ring approach. The novelty of RMP is that it simultaneously multicasts an ordered message and implicitly rotates the token position on the ring. There is no token transfer message in the normal multicast. For message atomicity, our protocol minimizes control messages and communication costs while incurring a relatively short delay. In contrast to other token algorithms, RMP does not risk losing the token when the token site fails. Without requiring extra overhead, our approach guarantees total ordering of messages and message atomicity: either every member receives a message, or none does.

15.
Distributed hard real-time systems are characterized by communication messages associated with timing constraints, typically in the form of deadlines. A message should be received at the destination before its deadline expires. Carrier sense multiple access with collision detection (CSMA/CD) appears to be one of the most common communication network access schemes that can be used in distributed hard real-time systems. In this paper, we propose a new real-time network access protocol which is based on the CSMA/CD scheme. The protocol classifies the messages into two classes as ‘critical’ and ‘noncritical’ messages. The messages close to their deadlines are considered to be critical. A critical message is given the right to access the network by preempting a noncritical message in transmission. Extensive simulation experiments have been conducted to evaluate the performance of the protocol. It is shown that the protocol can provide considerable improvement over the virtual time CSMA/CD protocol proposed for hard real-time communication by Zhao et al. [1].
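
A sketch of the classification rule implied by the abstract: a message whose remaining slack (time to deadline minus transmission time) falls below some laxity margin is treated as critical and may preempt a non-critical transmission. The margin and timing model are illustrative assumptions.

```python
import time
from typing import Optional

CRITICAL_MARGIN_S = 0.005   # assumed laxity margin (5 ms)

def is_critical(deadline: float, tx_time_s: float,
                now: Optional[float] = None) -> bool:
    """A message is critical when its remaining slack
    (deadline - now - transmission time) falls below the margin."""
    now = time.time() if now is None else now
    slack = deadline - now - tx_time_s
    return slack < CRITICAL_MARGIN_S

def may_preempt(new_msg_critical: bool, msg_on_wire_critical: bool) -> bool:
    """Rule from the abstract: only a critical message may preempt,
    and only a non-critical transmission can be preempted."""
    return new_msg_critical and not msg_on_wire_critical
```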

16.
The different types of messages used by a parallel application program executing in a distributed computing system can each have unique characteristics so that no single communication network can produce the lowest latency for all messages. For instance, short control messages may be sent with the lowest overhead on one type of network, such as Ethernet, while bulk data transfers may be better suited to a different type of network, such as Fibre Channel or HIPPI. This work investigates how to exploit multiple heterogeneous communication networks that interconnect the same set of processing nodes using a set of techniques we call performance-based path determination (PBPD). The performance-based path selection (PBPS) technique selects the best (lowest latency) network among several for each individual message to reduce the communication overhead of parallel programs. The performance-based path aggregation (PBPA) technique, on the other hand, aggregates multiple networks into a single virtual network to increase the available bandwidth. We test the PBPD techniques on a cluster of SGI multiprocessors interconnected with Ethernet, Fibre Channel, and HIPPI networks using a custom communication library built on top of the TCP/IP protocol layers. We find that PBPS can reduce communication overhead in applications compared to using either network alone, while aggregating networks into a single virtual network can reduce communication latency for bandwidth-limited applications. The performance of the PBPD techniques depends on the mix of message sizes in the application program and the relative overheads of the networks, as demonstrated in our analytical models.
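
A toy version of performance-based path determination: estimate per-network latency for a given message size from a fixed-overhead-plus-per-byte model and pick the cheapest network (PBPS), with a crude helper that stripes a large transfer across all networks in proportion to their bandwidth (PBPA). The network parameters are invented for illustration and are not the paper's measurements.

```python
# Assumed per-network model: (fixed overhead in us, us per byte).
NETWORKS = {
    "ethernet":      (50.0,  0.0100),   # low setup cost, modest bandwidth
    "fibre_channel": (120.0, 0.0012),
    "hippi":         (200.0, 0.0008),   # high setup cost, high bandwidth
}

def estimated_latency_us(net: str, msg_bytes: int) -> float:
    overhead, per_byte = NETWORKS[net]
    return overhead + per_byte * msg_bytes

def select_network(msg_bytes: int) -> str:
    """PBPS: choose the network with the lowest estimated latency
    for this particular message."""
    return min(NETWORKS, key=lambda n: estimated_latency_us(n, msg_bytes))

def stripe_for_aggregation(msg_bytes: int) -> dict:
    """PBPA: split one large transfer across networks in proportion
    to their per-byte bandwidth (inverse of the per-byte cost)."""
    inv = {n: 1.0 / per_byte for n, (_, per_byte) in NETWORKS.items()}
    total = sum(inv.values())
    return {n: int(msg_bytes * share / total) for n, share in inv.items()}

# Example: short control messages land on Ethernet, bulk data elsewhere.
if __name__ == "__main__":
    print(select_network(256), select_network(1_000_000))
```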

17.
王艳玲  秦拯  陶勇 《计算机工程》2012,38(14):76-78
DTN networks generally adopt replication-based stochastic routing strategies. Because a large number of message copies exist in the network, the buffers of intermediate nodes become heavily occupied and congestion appears. From the perspective of redundancy control, three buffer management mechanisms are designed on top of the PROPHET routing algorithm: controlling the number of message copies, setting packet lifetimes dynamically, and actively deleting packets that have already been delivered successfully. By limiting the number of copies and deleting redundant messages, the total number of message copies in the network is reduced, lightening the load on nodes. Experimental results show that, when network resources are limited, the three mechanisms improve the message delivery success rate and reduce network overhead.
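
A compact sketch of the three mechanisms named above on top of a PROPHET-style buffer: cap the replication budget a message carries, age TTLs dynamically, and purge copies once a delivery acknowledgement is seen. The field names, the binary-split copy budget, and the TTL rule are assumptions, not the paper's exact design.

```python
from dataclasses import dataclass

@dataclass
class BufferedMsg:
    msg_id: str
    copies_left: int      # replication budget carried with the message
    ttl_s: float          # remaining lifetime in seconds

class ProphetBuffer:
    def __init__(self, initial_copies: int = 4):
        self.initial_copies = initial_copies
        self.store: dict = {}          # msg_id -> BufferedMsg
        self.delivered: set = set()    # IDs known to be delivered (anti-packets)

    def admit(self, msg_id: str, ttl_s: float) -> None:
        if msg_id not in self.delivered:
            self.store[msg_id] = BufferedMsg(msg_id, self.initial_copies, ttl_s)

    def forward(self, msg_id: str) -> bool:
        """Mechanism 1: only forward while the copy budget lasts (binary split)."""
        m = self.store.get(msg_id)
        if m is None or m.copies_left <= 1:
            return False
        m.copies_left //= 2        # keep half the budget, hand the other half over
        return True

    def tick(self, dt_s: float, congestion: float) -> None:
        """Mechanism 2: age TTLs faster when the buffer is congested (0..1)."""
        for m in list(self.store.values()):
            m.ttl_s -= dt_s * (1.0 + congestion)
            if m.ttl_s <= 0:
                del self.store[m.msg_id]

    def on_delivery_ack(self, msg_id: str) -> None:
        """Mechanism 3: actively delete copies of already delivered packets."""
        self.delivered.add(msg_id)
        self.store.pop(msg_id, None)
```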

18.
We show how any dynamic instantaneous compression algorithm can be converted to an asymmetric communication protocol, with which a server with high bandwidth can help clients with low bandwidth send it messages. Unlike previous authors, we do not assume the server knows the messages' distribution, and our protocols are the first to use only one round of communication for each message.
