Similar Literature
20 similar documents found.
1.
窦飞  高永智 《现代电子技术》2010,33(16):169-171
To address the problem that the Epidemic routing protocol in DTNs, constrained by the limited buffers of forwarding nodes, suffers a drop in delivery ratio due to network congestion, an improved congestion control scheme is proposed on the basis of analyzing and comparing several message drop policies. When congestion occurs, messages in the node buffer that exceed a certain TTL threshold are dropped. Taking improvement of the delivery ratio as the main metric, the ONE simulator is used to evaluate and compare the performance of the improved scheme. The results show that the scheme achieves effective congestion control.
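The TTL-threshold drop rule described in this abstract can be sketched as follows; the function name, the `(message_id, elapsed_time)` representation, and the threshold value are illustrative assumptions, not details from the paper:

```python
def drop_on_congestion(buffer, ttl_threshold):
    """When congestion occurs, drop buffered messages whose time in the
    network exceeds the TTL threshold; they are assumed least likely to
    still reach their destination.

    `buffer` is a list of (message_id, elapsed_time) pairs.
    """
    return [(mid, age) for (mid, age) in buffer if age <= ttl_threshold]
```

Under this sketch, the oldest messages are shed first, freeing buffer space for fresher copies that still have a realistic chance of delivery.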

2.
Multimedia communication in wireless networks is challenging due to the inherent complexity and constraints of multimedia data. To reduce the high bandwidth requirement of video streaming, videos are compressed by exploiting spatial and temporal redundancy, thus yielding dependencies among frames as well as within a frame. Unnecessary transmission and maintenance of useless packets in the buffers cause further loss and degrade the quality of delivery (QoD) significantly. In this paper, we propose a QoD-aware hop system that can decide when and which packets can be dropped without degrading QoD. Moreover, the transmission of useless packets causes network congestion and wasted payments by wireless subscribers. We focus on two types of frame discarding policies to maintain QoD: the partial frame discarding (PFD) policy and the early frame discarding (EFD) policy. The PFD policy discards the succeeding packets of a frame once a packet of that frame cannot be served. In the EFD policy, on the other hand, when serving the packets of a frame is likely to fail (based on a threshold), the subsequent packets of the frame are discarded. We first provide an analytical study of average buffer occupancy under these discarding policies and derive closed-form expressions for it. We then perform simulations by implementing a Markovian model and measure the frameput (the ratio of frames served) rather than the number of packets served. Copyright © 2011 John Wiley & Sons, Ltd.
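The PFD and EFD policies described above can be sketched roughly as follows; the queue representation, the `admit` function, and the threshold handling are illustrative assumptions rather than the paper's analytical model:

```python
def admit(queue, pkt, capacity, efd_threshold, dropped_frames):
    """Decide whether to enqueue `pkt` = (frame_id, is_first_packet).

    PFD: once any packet of a frame has been dropped, drop the rest of
    that frame too, since the receiver cannot decode a partial frame.
    EFD: refuse the *first* packet of a new frame once buffer occupancy
    exceeds `efd_threshold`, so partially admitted frames never enter
    the buffer in the first place.
    """
    frame_id, is_first = pkt
    if frame_id in dropped_frames:                 # PFD: tail of a corrupted frame
        return False
    if len(queue) >= capacity:                     # hard overflow
        dropped_frames.add(frame_id)
        return False
    if is_first and len(queue) >= efd_threshold:   # EFD: reject the frame early
        dropped_frames.add(frame_id)
        return False
    queue.append(pkt)
    return True
```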

3.
The wireless access in vehicular environment (WAVE) system is developed to enhance the driving safety and comfort of automotive users. However, such a system suffers from quality-of-service degradation for safety applications caused by channel congestion in scenarios with high vehicle density. In the present work, channel congestion is controlled jointly by the road side unit and the vehicles, and vehicle-to-vehicle communication of authentic safety messages is supported among authentic vehicles only. The road side unit reduces channel congestion by allowing only authentic vehicles to participate in vehicle-to-vehicle communication and by discarding unauthentic messages from the network. It revokes vehicles that are not authentic and vehicles that communicate unauthentic messages. Each vehicle also participates in reducing channel congestion by varying the size of the beacon message dynamically, by removing duplicate messages from its message queue, and by controlling the transmission power and transmission range of a message during transmission. It further reduces channel congestion by controlling the message generation rate. Two different message generation rate control algorithms are proposed: the first maintains the channel load at an estimated initial value, whereas the second increases the channel load as long as the percentage of message loss remains below a predefined threshold. The performance of the proposed scheme is studied on the basis of the percentage of successful message reception and the percentage of message loss, and the performance of the two message generation rate control algorithms is also compared.

4.
The authors propose a multiplexing frame structure that makes it possible to transmit voice messages synchronously without loss or clipping of contents. This scheme has discrete delay characteristics and provides a simple play-out method for reproducing the voice signal. The authors investigate its performance by obtaining the cumulative distribution of the delay of voice packets and the mean waiting time of data packets. It is concluded that this synchronous frame structure can easily be applied to enhance services with various transmission rates, such as flow control of message streams, node congestion control, and service-class or throughput-class negotiation of channels, without significant degradation of trunk utilization.

5.
This paper presents the architecture of a new space priority mechanism intended to control cell loss in ATM switches. Our mechanism is a new generic concept called multiple pushout. It is based on the utilization of both AAL and ATM features and on a particular definition of the priority bit. Whenever one cell of a message overflows the buffer of an ATM switch, the algorithm causes the switch to discard the other cells of the message (including later arrivals). Such discarding frees buffer space for cells of other messages that still have a chance of arriving at their destination intact. Our objective is to emphasize that, in case of overload, most of the proposed mechanisms discard cells without any semantic information about the type of cell; therefore, at the destination, all the fragments of the corrupted messages will be discarded anyway. Finally, we present simulation results comparing the cell loss rates and message loss rates of several space priority mechanisms.
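A minimal sketch of the multiple-pushout behaviour described above, assuming a list-of-cells buffer and a `corrupted` set for tracking messages whose later-arriving cells must also be dropped (both representations are illustrative, not the paper's):

```python
def multiple_pushout(buffer, corrupted, overflow_msg_id):
    """When a cell of `overflow_msg_id` is lost to buffer overflow, mark
    the message corrupted and evict its remaining buffered cells, so the
    freed space can serve messages that may still arrive intact.

    `buffer` is a list of (message_id, cell_no) pairs. Later-arriving
    cells of any corrupted message can be dropped on sight by checking
    membership in `corrupted`.
    """
    corrupted.add(overflow_msg_id)
    return [cell for cell in buffer if cell[0] not in corrupted]
```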

6.
This paper considers the congestion control scheme for the SS7 signaling network in the group special mobile (GSM) digital cellular network. This congestion control scheme is based on monitoring the SS7 link buffer occupancy. In this scheme, a congestion onset message is sent to the user parts of the SS7 network when the buffer occupancy exceeds a certain threshold, and, subsequently, a congestion abatement message is sent when the buffer occupancy goes below another threshold. Upon receipt of the congestion onset message, the user parts are expected to "intelligently" throttle the user traffic (reduce the traffic rate) so as to yield speedy recovery from congestion. Subsequently, on receipt of the abatement message, the user traffic is restored to precongestion levels. This paper primarily proposes an appropriate choice of throttles and an algorithmic procedure for sizing the thresholds so as to yield good performance during congestion. The paper also addresses some implementation issues related to the throttles. Finally, it considers how delays in the onset and abatement messages reaching the user parts affect the performance and parameters of the congestion control scheme.
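The two-threshold onset/abatement signalling described above is a classic hysteresis scheme; a minimal sketch, with class and threshold names as illustrative assumptions:

```python
class CongestionMonitor:
    """Two-threshold (hysteresis) congestion signalling sketch.

    Emits "onset" when buffer occupancy rises above `high`, and
    "abatement" once it has fallen back below `low`; the gap between the
    thresholds prevents oscillating notifications to the user parts.
    """

    def __init__(self, high, low):
        self.high, self.low = high, low
        self.congested = False

    def update(self, occupancy):
        if not self.congested and occupancy > self.high:
            self.congested = True
            return "onset"
        if self.congested and occupancy < self.low:
            self.congested = False
            return "abatement"
        return None  # no state change, nothing to signal
```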

7.
Flow Routing and its Performance Analysis in Optical IP Networks
Optical packet-switching networks deploying buffering, wavelength conversion, and multi-path routing have been extensively studied in recent years to provide high-capacity transport for Internet traffic. However, due to packet-based routing and switching, such a network can cause significant reordering and delay variation in packets by the time they are received by end users, thus increasing the burstiness of the Internet traffic and causing higher-layer protocols to malfunction. This paper addresses a novel routing and switching method for optical IP networks, flow routing, and its facilitating protocol. Flow routing deals with packet flows to reduce flow corruption due to packet reordering, delay variation, and packet loss, without using complicated control mechanisms. A detailed performance analysis is given for output-buffered optical routers adopting flow routing. Two flow-oriented discarding techniques, flow discard (FD) and early flow discard (EFD), are discussed. Compared with optical packet-switching routers, a remarkable improvement in goodput is obtained in the optical flow routers, especially under high-congestion periods. We conclude that EFD behaves as a robust technique, more tolerant than FD to changes in traffic and transmission system factors.

8.
This paper studies the behavior of a packet switch which provides finite waiting space and receives packetized messages. The arrivals of the messages constitute a Poisson process. Each message consists of a random number of packets; the number of packets contained in a message is assumed to be an integer-valued random variable which may follow any arbitrary probability distribution. All packets residing in the buffer receive service from a single output transmitter operating synchronously at a constant rate. Each packet receives the same fixed service time from the transmitter and then leaves the system. Upon the arrival of a message, if the remaining buffer space is not enough to accommodate all packets of the message, the entire message is rejected. Results such as the message blocking probability, packet blocking probability, throughput, and mean delay have been obtained. Two different approaches, a minislot approximation and the application of the residue theorem, are used to obtain these results. In particular, this combinatorially complex problem is solved by the residue theorem in a recursive manner. These results are useful in evaluating the performance of a packet switch, and also for design purposes.

9.
In local loss recovery schemes, a small number of recovery nodes distributed along the transmission paths save incoming packets temporarily in accordance with a specified cache policy and retransmit these packets if they subsequently receive a request message from a downstream receiver. To reduce the recovery latency, the cache policy should ensure that the recovery nodes are always able to satisfy the retransmission requests of the downstream receivers. However, owing to the limited cache size of the recovery nodes and the behavior of the cache policy, this cannot always be achieved, and thus some of the packets must be retransmitted by the sender. Accordingly, this paper develops a new network-coding-based cache policy, designated network-coding-based FIFO (NCFIFO), which extends the caching time of the packets at the recovery nodes without dropping any of the incoming packets. As a result, lost packets can always be recovered from the nearest recovery nodes and the recovery latency is significantly reduced. The loss recovery performance of the NCFIFO cache policy is compared with that of existing cache policies through a series of simulation experiments using both a uniform error model and a burst error model. The simulation results show that the NCFIFO cache policy not only achieves better recovery performance than existing cache policies, but also provides a more effective way of managing a small cache in environments characterized by a high packet arrival rate. Copyright © 2011 John Wiley & Sons, Ltd.

10.
Selective packet dropping policies have been used to reduce congestion and transmission of traffic that would inevitably be retransmitted. For data applications using best-effort services, packet dropping policies (PDPs) are congestion management mechanisms implemented at each intermediate node that decide, reactively or proactively, to drop packets to reduce congestion and free up precious buffer space. While the primary goal of PDPs is to avoid or combat congestion, the individual PDP designs can significantly affect application throughput, network utilization, performance fairness, and synchronization problems with multiple Transmission Control Protocol (TCP) connections. Scalability and simplicity are also important design issues. This article surveys the most important selective packet dropping policies that have been designed for best-effort traffic in ATM and IP networks, providing a comprehensive comparison between the different mechanisms.

11.
Expressions are derived for the mean message delay, as a function of message length, when data messages of different lengths arrive asynchronously at a trunk and are divided, as they come in, into packets of some maximum length. The packets of different messages are intermingled and are put onto the trunk either from a single first-in-first-out (FIFO) queue, or from a high-priority and a low-priority queue for single-packet and multiple-packet messages, respectively. The results are compared to the case in which messages enter service in order of arrival and each message is served to completion without interruption, so that the mean message delay is independent of message length.

12.
The authors consider the performance of a Signaling System No. 7 network when the routing processors, as opposed to the transmission facilities, of signaling transfer points are overloaded. The choice of overload controls in such a situation is implementation-dependent, with an option of simply discarding messages in excess of the signaling transfer point (STP) processing capability. It is this option that is studied. Call completion performance, rather than message throughput, is considered the primary performance measure of interest, since it most accurately reflects the service provided to customers. To determine realistic call completion estimates, the authors explicitly incorporate into their analysis the effects of application-level recovery procedures and customer reattempts, both of which significantly impact the service levels achieved. In so doing, they demonstrate that message throughput can be a very misleading measure of the network's ability to provide service. The need for some form of feedback mechanism to the traffic sources, allowing them to appropriately control traffic entering the network, is demonstrated.

13.
To address the network congestion that arises in mobile ad hoc networks when the traffic load on a network-coding multicast routing protocol increases, this paper proposes a reliable network-coding multicast routing protocol based on TCP Vegas window congestion control. The core idea is that the sending node combines self-adjustment of its send window with feedback-triggered window adjustment to regulate the packet sending rate, alleviating congestion and thereby reducing the packet loss rate. Simulation results show that, as the traffic load increases, the window-based congestion control greatly reduces the total system overhead and yields a relative improvement in the packet delivery ratio.
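The window adjustment described above builds on the general TCP Vegas idea of comparing expected and actual sending rates; a minimal sketch of that idea, with parameter names and unit step sizes as illustrative assumptions rather than the protocol's actual rules:

```python
def adjust_window(cwnd, expected_rate, actual_rate, alpha, beta):
    """Vegas-style window adjustment sketch.

    The gap between the expected rate (cwnd / base RTT) and the actual
    rate estimates how much data is queued in the network: grow the
    window while the gap is small (below `alpha`), shrink it when the
    gap is large (above `beta`), and hold steady in between.
    """
    diff = expected_rate - actual_rate
    if diff < alpha:
        return cwnd + 1           # network underused: probe for bandwidth
    if diff > beta:
        return max(1, cwnd - 1)   # queues building up: back off
    return cwnd                   # within the target band: hold
```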

14.
Dynamics of TCP traffic over ATM networks
The authors investigate the performance of Transmission Control Protocol (TCP) connections over ATM networks without ATM-level congestion control and compare it to the performance of TCP over packet-based networks. In simulations of congested networks, the effective throughput of TCP over ATM can be quite low when cells are dropped at the congested ATM switch. The low throughput is due to wasted bandwidth as the congested link transmits cells from "corrupted" packets, i.e., packets in which at least one cell has been dropped by the switch. The authors investigate two packet-discard strategies that alleviate the effects of fragmentation. Partial packet discard, in which the remaining cells are discarded after one cell has been dropped from a packet, somewhat improves throughput. They introduce early packet discard, a strategy in which the switch drops whole packets prior to buffer overflow. This mechanism prevents fragmentation and restores throughput to maximal levels.

15.
Non-real-time packets in the interactive multimedia satellite network suffer inherent delay in cases of traffic congestion. Moreover, these packets may be discarded from the queue if they are not sent within a certain time. The objective of this study is to develop a real-time method in order to minimize the packet discard rate. Extensive evaluation results show that the proposed method performs very well.

16.
Gerla  M. Kleinrock  L. 《IEEE network》1988,2(1):72-76
The reasons why congestion control is more difficult in interconnected local area networks (LANs) than in conventional packet networks are examined, and the flow and congestion control mechanisms that can be used in an interconnected LAN environment are reviewed. The focus is on congestion control (that is, prevention of internal congestion); however, some of the proposed schemes require the interaction of flow and congestion control. The schemes considered are: dropping packets; the input buffer limit, a cap on the number of input packets (packets from local hosts) that can be buffered in the packet switch; choke packets, whereby a bridge or router experiencing congestion returns to the source a choke packet containing the header of the packet traveling in the congested direction, and the source, on receiving the choke packet, declares the destination congested and slows (or stops altogether, for a period of time) traffic to that destination; backpressure, which is the regulation of flow along a virtual connection; and congestion prevention, whereby a voice or video connection is accepted only if there is enough bandwidth (in a statistical sense) in the network to support it.

17.
Store-and-forward packet-switched networks are subject to congestion under heavy load conditions. In this paper, a distributed drop-and-throttle flow control (DTFC) policy based on a nodal buffer management scheme is proposed. Two classes of traffic are identified: "new" and "transit" traffic. Packets that have traveled over one or more hops are considered transit packets; packets that are candidates to enter the communication network are considered new packets. At a given node, if the number of allocated buffers is greater than a limit value, new traffic is rejected, whereas transit traffic is accepted. If the total buffer area is occupied, transit traffic is also rejected and, furthermore, dropped from the network. This policy is analyzed in the context of symmetrical networks. A queueing network model is developed whereby network throughput is expressed in terms of the traffic load, the number of buffers in a node, and the DTFC limit value. Optimal policies, in which the limit value is a function of the traffic load, are found to prevent network congestion; furthermore, they achieve very good network throughput even for loads fifty times beyond the normal operating region. Moreover, suboptimal, easy-to-implement fixed-limit policies offer satisfactory results.
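The DTFC admission rule (reject new traffic above the limit value, reject transit traffic only when the buffer pool is exhausted) can be sketched as follows; the function name and the buffer-counting model are illustrative assumptions:

```python
def admit_packet(occupied, total_buffers, dtfc_limit, is_transit):
    """DTFC admission sketch: transit packets are favoured over new ones.

    New packets (entering the network here) are rejected once
    `dtfc_limit` buffers are in use; transit packets (which already
    consumed network resources upstream) are rejected, and dropped from
    the network, only when the whole buffer pool is full.
    """
    if is_transit:
        return occupied < total_buffers
    return occupied < dtfc_limit
```

Favouring transit traffic avoids wasting the hops a packet has already traversed, which is the intuition behind the policy's throughput gains under overload.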

18.
Optimal Selective Transmission under Energy Constraints in Sensor Networks
An optimum selective transmission scheme for energy-limited sensor networks, where sensors send or forward messages of different importance (priority), is developed. Considering the energy costs, the available battery, the message importances, and their statistical distribution, sensors decide whether to transmit or discard a message so that the importance sum of the effectively transmitted messages is maximized. It turns out that the optimal decision is made by comparing the message importance with a time-variant threshold. Moreover, the gain of the selective transmission scheme, compared to a nonselective one, critically depends on the energy expenses, among other factors. Albeit suboptimal, practical schemes that operate under less demanding conditions than the optimal one are also developed. Effort is placed in three directions: 1) the analysis of the optimal transmission policy for several stationary importance distributions; 2) the design of a transmission policy with an invariant threshold that is asymptotically optimal; and 3) the design of an adaptive algorithm that estimates the importance distribution from the actually received (or sensed) messages. Numerical results corroborating our theoretical claims and quantifying the gains of the selective scheme close this paper.
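The threshold comparison at the heart of the selective transmission scheme can be sketched as follows; the `threshold_fn` stand-in and the battery accounting are illustrative assumptions, not the paper's optimal time-variant threshold:

```python
def should_transmit(importance, battery, tx_cost, threshold_fn):
    """Selective-transmission sketch: send a message only if the battery
    can afford the transmission and the message importance exceeds a
    threshold that may depend on the remaining energy.

    `threshold_fn` is a hypothetical stand-in for the optimal
    time-variant threshold derived in the paper.
    """
    if battery < tx_cost:
        return False                      # cannot afford this transmission
    return importance > threshold_fn(battery)
```

Intuitively, a low-battery node should raise its threshold and save its remaining energy for the most important messages; `threshold_fn` is where such a policy would live.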

19.
黄隆胜  谢维信 《信号处理》2016,32(11):1318-1327
Wireless sensor network nodes have limited communication capability; when an event occurs, the data generation rate increases sharply and the network may become congested. This paper proposes a node congestion avoidance algorithm for the reliable transmission of key information, CAARTKI (Congestion Avoidance Algorithm for Reliable Transmission of Key Information). Its main idea is to introduce differentiated services: packets are assigned different priorities according to their importance, and high-priority packets are transmitted first. In this algorithm, a node may send data only after its next-hop node has allocated it a sending window, thereby avoiding node congestion. While handling congestion avoidance at the link layer, the routing layer selects neighbor nodes with more available buffer space as next hops, so that key information can be delivered promptly and reliably to lightly loaded nodes, reducing the chance that important information cannot be transmitted in time under heavy load. When key information is generated densely, an active packet-dropping policy discards some low-priority packets to free buffer space for high-priority packets. NS2 simulation results show that CAARTKI prevents congestion, achieves a low packet loss rate for the highest-priority packets and a small average network delay, and guarantees the timely and reliable transmission of key information.
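The active drop policy described above (evict low-priority packets to make room for high-priority ones) can be sketched with a priority heap; the function and field names are illustrative assumptions, not CAARTKI's actual data structures:

```python
import heapq

def enqueue_with_priority(queue, priority, payload, capacity):
    """Active-drop sketch: when the buffer is full, evict the
    lowest-priority buffered packet to admit a more important arrival.

    `queue` is a min-heap of (priority, payload) pairs, so queue[0] is
    always the least important buffered packet; larger priority values
    mean more important packets.
    """
    if len(queue) < capacity:
        heapq.heappush(queue, (priority, payload))
        return True
    if queue and priority > queue[0][0]:
        heapq.heapreplace(queue, (priority, payload))  # evict lowest priority
        return True
    return False                                       # drop the arriving packet
```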

20.
In delay tolerant networks, interruptions occur continuously because no end-to-end path exists from source to destination for long periods of time. In this context, delays can be immensely large due to the operating environment, e.g., wildlife tracking, sensor networks, deep space, and ocean networks. Furthermore, messages are replicated widely in the network to increase the delivery probability; the resulting high buffer occupancy and replication impose a huge overhead on the network. Consequently, well-ordered, intelligent buffer-drop policies are needed to control which messages are dropped when node buffers are near overflow. In this paper, we propose an efficient buffer management policy called message drop control source relay (MDC-SR) for delay tolerant routing protocols. We also illustrate that conventional buffer management policies such as Drop Oldest, LIFO, and MOFO fail to consider all the relevant information in this framework. The proposed MDC-SR policy controls message drops while maximizing the delivery probability and average buffer time, and reduces message relays, drops, and hop counts by a reasonable amount. Using simulations based on the Shortest Path Map Based Movement and Map Route Movement mobility models, we show that MDC-SR with random message sizes performs better than the existing MOFO, LIFO, and DOA policies.
