Similar Literature
 A total of 19 similar documents were found (search time: 109 ms).
1.
UDP is the main transport protocol used for multimedia data transmission on the Internet. Its principal characteristics are low network overhead but unreliable delivery, which makes packet loss and reordering common. To reorder out-of-order packets, an application needs to reserve an appropriately sized buffer to hold packets that arrive before the expected packet. This paper analyzes the impact of buffer size on system performance under packet loss and reordering, gives an estimate of the required buffer size, and discusses data-handling strategies for the cases of reordering and packet loss.
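As a concrete illustration of the reordering buffer this abstract describes, the sketch below holds out-of-order datagrams until the expected sequence number arrives and skips ahead once the buffer fills. The capacity value, the skip-ahead policy, and the use of monotonically increasing sequence numbers are assumptions for illustration, not details taken from the paper.

```python
# Minimal reorder buffer for out-of-order UDP packets (illustrative sketch).
# Packets are held until the expected sequence number arrives; when the
# buffer fills up, the gap is declared lost and delivery skips ahead.

class ReorderBuffer:
    def __init__(self, capacity=64):
        self.capacity = capacity          # assumed buffer size (packets)
        self.expected = None              # next sequence number to deliver
        self.held = {}                    # seq -> payload, waiting for the gap

    def push(self, seq, payload):
        """Accept one packet; return the payloads now deliverable in order."""
        if self.expected is None:
            self.expected = seq
        if seq < self.expected:
            return []                     # duplicate or too late: drop
        self.held[seq] = payload
        if len(self.held) > self.capacity:
            # Buffer full: give up on the missing packet(s) and jump forward.
            self.expected = min(self.held)
        out = []
        while self.expected in self.held:
            out.append(self.held.pop(self.expected))
            self.expected += 1
        return out

buf = ReorderBuffer(capacity=4)
for seq in (0, 2, 3, 1, 5, 4):
    print(seq, "->", buf.push(seq, f"pkt{seq}"))
```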

2.
Handling packet loss and reordering over the UDP transport protocol (total citations: 3, self-citations: 0, citations by others: 3)
UDP is the main transport protocol used for multimedia data transmission on the Internet. Its principal characteristics are low network overhead but unreliable delivery, which makes packet loss and reordering common. To reorder out-of-order packets, an application needs to reserve an appropriately sized buffer to hold packets that arrive before the expected packet. This paper analyzes the impact of buffer size on system performance under packet loss and reordering, gives an estimate of the required buffer size, and discusses data-handling strategies for the cases of reordering and packet loss.

3.
This paper analyzes two fault-recovery mechanisms in MPLS networks, Makam and Haskin. The Makam mechanism causes packet loss; Haskin avoids this problem but introduces packet reordering. The paper proposes adding a buffer on top of the Haskin scheme to avoid packet reordering, and gives an estimation formula for the buffer size together with approximate methods for computing the parameters in that formula.
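The abstract mentions an estimation formula for the buffer used to restore packet order under the Haskin scheme, but does not reproduce it. The sketch below shows only one plausible back-of-the-envelope estimate, assuming the buffer must absorb the traffic sent while earlier packets complete their detour along the reversed/backup path; it is not the paper's expression.

```python
# Back-of-the-envelope estimate of the reordering buffer needed in a
# Haskin-style recovery: hold the traffic that arrives while the redirected
# packets finish their detour. Illustrative only, not the paper's formula.

def haskin_buffer_bytes(link_rate_bps, detour_delay_s):
    """Bytes arriving on the LSP during the extra detour delay."""
    return link_rate_bps / 8 * detour_delay_s

# Example: 100 Mbit/s LSP, 30 ms of extra delay along the reversed path.
print(haskin_buffer_bytes(100e6, 0.030))   # -> 375000.0 bytes
```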

4.
To meet the requirements of video image transmission, the UDP protocol is extended: a packet-header structure is defined, transmission is smoothed at the sender, an appropriately sized buffer is reserved at the receiver to store data arriving before the expected packet, and mechanisms for flow control and for handling reordering and packet loss are added, thereby guaranteeing ordered and correct delivery of the video stream.

5.
Real-time network monitoring requires capturing as many of the packets traveling on the network as possible. Because packet arrivals are bursty, random, and time-critical, the capture interface must be optimized so that upper-layer application processing keeps pace with lower-layer data acquisition and packet loss is reduced. This paper optimizes the NDIS interface with a multi-buffer technique; experiments show that even under heavy traffic this technique keeps the packet loss rate below 1/1000.
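To make the multi-buffer idea concrete, the sketch below decouples capture from processing with a pool of buffers handed back and forth between a capture loop and a processing loop. The pool size, buffer size, and queue-based hand-off are illustrative assumptions, not the paper's NDIS-level implementation.

```python
# Sketch of the multi-buffer hand-off: the capture side fills whichever buffer
# is free while the application drains already-filled ones, so traffic bursts
# do not force packet drops.
import queue

FREE = queue.Queue()                  # empty capture buffers
FULL = queue.Queue()                  # filled buffers awaiting processing
for _ in range(8):                    # assumed pool of 8 buffers of 64 KiB each
    FREE.put(bytearray(64 * 1024))

def capture_loop(read_burst):
    """Capture thread: copy each burst of raw packet data into a free buffer."""
    while True:
        buf = FREE.get()              # blocks only if the application falls behind
        n = read_burst(buf)           # fill the buffer (stand-in for the NDIS read)
        FULL.put((buf, n))

def process_loop(handle_block):
    """Application thread: consume filled buffers, then recycle them."""
    while True:
        buf, n = FULL.get()
        handle_block(memoryview(buf)[:n])
        FREE.put(buf)                 # return the buffer to the pool
```

In an actual capture tool the capture loop would sit in the driver's receive path and the processing loop in the user-space application; both are sketched here only to show the hand-off that keeps the two sides matched.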

6.
季宝杰  陈新 《微计算机应用》2004,25(2):226-226,240
Compared with TCP, UDP is unreliable but efficient and supports broadcast. In on-site real-time communication the data to be transmitted include not only short periodic control signaling and acquisition data but also live video surveillance data. When UDP is used as the transport-layer protocol, it is therefore extended as follows: (1) the sender and receiver agree on a transmission buffer of 65536 bytes to accommodate bursty transfers; (2) each packet of a data group carries a header that defines a sequence number, a transmission channel number, and a timestamp; (3) mechanisms handle transmission timeouts and packet loss. On-site transmission demands high real-time performance; if the number of lost packets
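A minimal sketch of a packet header carrying the three fields named in point (2) above (sequence number, transmission channel number, timestamp); the field widths, byte order, and timestamp unit are assumptions, since the abstract does not specify them.

```python
# Possible layout for the extended-UDP header: sequence number, channel
# number, timestamp. Field sizes and network byte order are assumptions.
import struct, time

HEADER = struct.Struct("!IHQ")   # 4-byte seq, 2-byte channel, 8-byte timestamp

def make_packet(seq, channel, payload: bytes) -> bytes:
    ts_us = int(time.time() * 1e6)          # timestamp in microseconds (assumed unit)
    return HEADER.pack(seq, channel, ts_us) + payload

def parse_packet(datagram: bytes):
    seq, channel, ts_us = HEADER.unpack_from(datagram)
    return seq, channel, ts_us, datagram[HEADER.size:]

pkt = make_packet(seq=42, channel=1, payload=b"frame-data")
print(parse_packet(pkt)[:3])     # (42, 1, <timestamp in microseconds>)
```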

7.
Analysis and modeling of networked control systems (total citations: 73, self-citations: 7, citations by others: 73)
This paper discusses the complexity of networked control systems and the main shortcomings of current modeling approaches. Taking system noise, controller dynamics, and output feedback into account, it proposes a unified method for building discrete stochastic models of networked control systems under multi-packet transmission, single-packet transmission with packet dropout, and multi-packet transmission with packet dropout, and it gives a theorem for merging network-induced delays.
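For orientation, one widely used discrete stochastic model of a networked control system with packet dropout is written out below; it is an illustrative special case (single-packet transmission, Bernoulli dropout), not the unified multi-packet model proposed in the paper.

```latex
% Illustrative discrete stochastic NCS model with Bernoulli packet dropout
% (a common special case, not the paper's unified multi-packet model).
\begin{aligned}
  x(k+1) &= A\,x(k) + \theta(k)\,B\,u(k) + w(k),\\
  y(k)   &= C\,x(k) + v(k),\\
  \Pr\{\theta(k)=1\} &= \bar\theta, \qquad \Pr\{\theta(k)=0\} = 1-\bar\theta,
\end{aligned}
```

where θ(k) = 0 means the control packet for step k was lost, and w(k), v(k) are the process and measurement noise.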

8.
Stability of networked control systems with packet dropout and multi-packet transmission (total citations: 10, self-citations: 2, citations by others: 10)
Based on an analysis of the causes of packet dropout and multi-packet transmission in networks, this paper studies the stability of networked control systems in which both occur. From a packet-dropout model, it derives an exponential-stability condition that the successful transmission rate must satisfy; it then models multi-packet-transmission networked control systems using the theory of asynchronous dynamical systems with event-rate constraints and gives sufficient conditions for exponential stability. Simulation examples verify the effectiveness of these stability criteria.

9.
To perform congestion control in wireless sensor networks while improving network coverage, this paper proposes a maximum-redundancy-drop buffer-management scheme and a coverage-oriented packet-scheduling mechanism. In the former, if nodes are located very close to one another, packets from such a closely spaced group are dropped during congestion in favor of packets with higher expected utility, so that less information is lost. In the latter, when two packets are highly redundant, the scheduler considers all queued packets, looks for high-utility or low-redundancy packets, and tries to select for transmission the packets that maximize coverage, thereby achieving high network coverage. Simulation results show that, compared with plain drop-tail and first-in-first-out algorithms, the congestion-control mechanism based on maximum-redundancy drop and coverage-oriented transmission clearly improves the network's coverage gain.
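The sketch below mimics the two decisions described above: during congestion, drop a packet from the group of sources packed most closely together, and, when transmitting, pick the queued packet that adds the most uncovered area. The distance threshold and the simple utility definitions are assumptions for illustration, not the paper's exact metrics.

```python
# Illustrative buffer-drop and scheduling choices for a coverage-aware WSN node.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def most_redundant_packet(q, positions, radius=5.0):
    """Queued packet whose source has the most close-by sources also in the queue."""
    def redundancy(pkt):
        p = positions[pkt["src"]]
        return sum(dist(p, positions[o["src"]]) < radius for o in q if o is not pkt)
    return max(q, key=redundancy)

def best_coverage_packet(q, positions, covered, radius=5.0):
    """Queued packet whose source is farthest from the area already reported."""
    def new_coverage(pkt):
        p = positions[pkt["src"]]
        return min(dist(p, c) for c in covered) if covered else radius
    return max(q, key=new_coverage)

positions = {"n1": (0, 0), "n2": (1, 1), "n3": (40, 40)}
q = [{"src": "n1"}, {"src": "n2"}, {"src": "n3"}]
print(most_redundant_packet(q, positions)["src"])          # n1 (tied with n2: the crowded corner)
print(best_coverage_packet(q, positions, [(0, 0)])["src"]) # n3 (farthest from covered area)
```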

10.
This paper proposes a video coding scheme based on the discrete wavelet transform that is suited to transmission over packet-switched networks. The SPIHT wavelet-coefficient coding algorithm is modified with complexity reduction, texture segmentation, and other changes to meet the efficiency and robustness requirements of video coding. To prevent the sharp drop in frame quality caused by packet loss, the scheme equalizes the importance of the packets: every packet contains both intra-frame and inter-frame information, and the modified SPIHT algorithm generates a mixed bitstream. Experiments show that the scheme has low computational complexity, is insensitive to packet loss during network transmission, and suppresses error propagation well.

11.
To address receive-buffer blocking in concurrent multipath transmission, this paper analyzes its causes and proposes an improved packet-scheduling method that mitigates it: taking path bandwidth, delay, and loss rate into account, an evaluation function of path quality is introduced to optimize packet scheduling across paths, and the best-quality path is selected for transmission, reducing the packet reordering at the receiver caused by differing path characteristics. An improved retransmission strategy is also proposed that, based on delay and loss rate, selects the path that delivers the retransmitted packet to the receiver fastest. Finally, a method is given for estimating the required receive-buffer size from the paths' bandwidth-delay products. Simulations show that the proposed scheduling method and retransmission strategy effectively relieve receive-buffer blocking and outperform CMT-SCTP, and that the proposed estimation method accurately predicts the required receive-buffer size.
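A sketch of the two quantities the abstract refers to: a path-quality score combining bandwidth, delay, and loss rate, and a receive-buffer estimate based on the paths' bandwidth-delay products. The score formula and its weights are assumptions; the paper defines its own evaluation function.

```python
# Path-quality scoring and receive-buffer sizing for multipath transfer (sketch).

def path_quality(bw_bps, rtt_s, loss, w_bw=1.0, w_rtt=1.0, w_loss=1.0):
    """Higher is better: reward bandwidth, penalize delay and loss (assumed form)."""
    return (w_bw * bw_bps) / ((1 + w_rtt * rtt_s) * (1 + w_loss * loss * 100))

def recv_buffer_estimate(paths):
    """Sum of per-path bandwidth-delay products, in bytes."""
    return sum(bw_bps / 8 * rtt_s for bw_bps, rtt_s, _ in paths)

paths = [(10e6, 0.050, 0.01),   # 10 Mbit/s, 50 ms RTT, 1% loss
         (2e6, 0.200, 0.05)]    # 2 Mbit/s, 200 ms RTT, 5% loss
best = max(paths, key=lambda p: path_quality(*p))
print("best path:", best)
print("receive buffer ~", recv_buffer_estimate(paths), "bytes")
```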

12.
13.
In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, their excess is lost. A packet buffering algorithm has to choose from which buffers to transmit packets in order to minimize the number of lost packets and thus maximize the throughput. We present a tight lower bound of e/(e−1) ≈ 1.582 on the competitive ratio of the throughput maximization, which holds even for fractional or randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm Random Schedule. Our result contradicts the claimed performance of the algorithm Random Permutation; we point out a flaw in its original analysis.

14.
This paper presents a simulation study of a new dynamic allocation of input buffer space in multistage interconnection networks (MINs). MINs are composed of an interconnected set of switching elements (SEs), connected in a specific topology. The SEs are composed of input and output buffers which are used to store received and forwarded packets, respectively. The performance of these networks depends on the design of these internal buffers and the clock mechanism in synchronous MINs. Various cycle models exist, including the big cycle, small cycle and the smart cycle, each of which provides a more efficient cycle timing. The smart cycle model achieves superior performance by using output buffers and acknowledgement. However, it suffers from lost and out-of-order packets at high traffic loads. This paper presents a variation of the smart cycle model, whereby the input buffer space of each SE is allocated dynamically as a function of traffic load, in order to overcome the above-mentioned drawbacks. A shared buffer pool is provided, which supplies the required input buffer space as required by each SE. Simulation results are presented, which show the required buffer pool for various network sizes and for different network loads. Also, comparison with a static allocation scheme shows an increased network throughput, and the elimination of lost and out-of-order packets at high traffic loads.
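The shared-pool idea can be sketched as a small allocator that grants input-buffer slots to switching elements on demand and reclaims them afterwards; the pool size and the request/release API below are assumptions for illustration, not the scheme simulated in the paper.

```python
# Shared pool of input-buffer slots for switching elements (illustrative sketch).

class SharedBufferPool:
    def __init__(self, total_slots):
        self.free = total_slots
        self.allocated = {}                      # SE id -> slots currently held

    def request(self, se_id, slots):
        """Grant as many of the requested slots as the pool can spare."""
        granted = min(slots, self.free)
        self.free -= granted
        self.allocated[se_id] = self.allocated.get(se_id, 0) + granted
        return granted

    def release(self, se_id, slots):
        slots = min(slots, self.allocated.get(se_id, 0))
        self.allocated[se_id] -= slots
        self.free += slots

pool = SharedBufferPool(total_slots=256)
print(pool.request("SE-3", 16))   # a busy SE asks for 16 extra input slots -> 16
print(pool.request("SE-7", 300))  # an oversized request is capped at what is free -> 240
pool.release("SE-3", 16)          # slots return to the pool as traffic subsides
```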

15.
First Person Shooters are a genre of online games in which users demand high interactivity, because the actions and the movements are very fast. They usually generate high rates of small packets which have to be delivered to the server within a deadline. When the traffic of a number of players shares the same link, these flows can be aggregated in order to save bandwidth. Certain multiplexing techniques are able to merge a number of packets, in a similar way to voice trunking, creating a bundle which is transmitted using a tunnel. In addition, the headers of the original packets can be compressed by means of standard algorithms. The characteristics of the buffers of the routers which deliver these bundled packets may have a strong influence on the network impairments (mainly delay, jitter and packet loss) which determine the quality of the game. A subjective quality estimator has been used in order to study the mutual influence of the buffer and multiplexing techniques. Taking into account that there exist buffers whose size is measured in bytes, and others measured in packets, both kinds of buffers have been tested, using different sizes. Traces from real game sessions have been merged in order to obtain the traffic of 20 simultaneous players sharing the same Internet access. The delay and jitter produced by the buffer of the access router have been obtained using simulations. In general, the quality is expected to be reduced as the background traffic grows, but the results show an anomalous region in which the quality rises with the background traffic amount. Small buffers present better subjective quality results than bigger ones. When the total traffic amount gets above the available bandwidth, the buffers measured in bytes add a fixed delay to the packets, which grows with buffer size. They present a jitter peak when the offered traffic is roughly the link capacity. On the other hand, buffers whose size is measured in packets add a smaller delay, but they increase packet loss for gaming traffic. The obtained results illustrate the need to know the characteristics of the buffer in order to make the correct decision about traffic multiplexing. As a conclusion, it would be interesting for game developers to identify the behaviour of the router buffer so as to adapt the traffic to it.
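To make the byte-sized versus packet-sized buffer distinction concrete, the sketch below runs the same stream of small game packets through two drop-tail queues, one limited in bytes and one limited in packets; the limits are arbitrary example values, not the ones used in the paper's simulations.

```python
# Drop-tail admission under a byte-counted limit versus a packet-counted limit.
from collections import deque

class DropTailQueue:
    def __init__(self, limit, count_bytes):
        self.q = deque()
        self.limit = limit                   # bytes or packets, see count_bytes
        self.count_bytes = count_bytes
        self.used = 0

    def enqueue(self, pkt_len):
        cost = pkt_len if self.count_bytes else 1
        if self.used + cost > self.limit:
            return False                     # packet lost
        self.q.append(pkt_len)
        self.used += cost
        return True

byte_buf = DropTailQueue(limit=15000, count_bytes=True)   # ~10 full-size frames
pkt_buf = DropTailQueue(limit=10, count_bytes=False)      # 10 packets of any size
# 20 small (80-byte) game packets: the packet-counted buffer fills after 10,
# while the byte-counted buffer still has plenty of room.
losses = [(not byte_buf.enqueue(80), not pkt_buf.enqueue(80)) for _ in range(20)]
print("byte-buffer losses:", sum(b for b, _ in losses),
      " packet-buffer losses:", sum(p for _, p in losses))
```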

16.
The Mobile IP protocol proposed by the IETF offers insufficient support for micro-mobility, a problem that distributed routing schemes handle better. This paper improves on one distributed Mobile IP scheme: the forwarding agent's cache is changed to a circular list, reducing the number of out-of-order packets received by the mobile node; when switching forwarding agents, the mobile node sends a binding-update message to the previous forwarding agent, reducing packet loss during the handover; and a Snoop mechanism is introduced at the forwarding agent, reducing the number of packets retransmitted by the correspondent node. Both schemes are simulated with NS2. The results show that the improved scheme markedly reduces retransmitted and out-of-order packets, further reduces long-haul signaling and data traffic, improves the utilization of network resources, and effectively improves the performance of Mobile IP.

17.
The concept of Quality of Service (QoS) networks has gained growing attention recently, as the traffic volume in the Internet constantly increases, and QoS guarantees are essential to ensure proper operation of most communication-based applications. A QoS switch serves m incoming queues by transmitting packets arriving at these queues through one output port, one packet per time step. Each packet is marked with a value indicating its priority in the network. Since the queues have bounded capacities and the rate of arriving packets can be much higher than the transmission rate, packets can be lost due to insufficient queue space. The goal is to maximize the total value of transmitted packets. This problem encapsulates two dependent questions: buffer management, namely which packets to admit into the queues, and scheduling, i.e. which queue to use for transmission in each time step. We use competitive analysis to study online switch performance in QoS-based networks. Specifically, we provide a novel generic technique that decouples the buffer management and scheduling problems. Our technique transforms any single-queue buffer management policy (preemptive or non-preemptive) to a scheduling and buffer management algorithm for our general m queues model, whose competitive ratio is at most twice the competitive ratio of the given buffer management policy. We use our technique to derive concrete algorithms for the general preemptive and non-preemptive cases, as well as for the interesting special cases of the 2-value model and the unit-value model. We also provide a 1.58-competitive randomized algorithm for the unit-value case. This case is interesting by itself since most current networks (e.g. IP networks) do not yet incorporate full QoS capabilities, and treat all packets equally.

18.
As one of the fast-developing switch-based high-speed networks, asynchronous transfer mode (ATM) is a promising network standard which may satisfy various requirements of multimedia computing. The Moving Picture Experts Group (MPEG) standard was designed to support full motion video stored on digital storage media at compression ratios up to 200:1. MPEG-2 is the second development phase of the MPEG standard and is designed for higher resolutions (including but not restricted to interlaced video) and higher bit rates (up to 20 Mbits/s). In this paper, the ATM adaptation layer type 5 (AAL-5) protocol was used to encapsulate constant-bit-rate-encoded MPEG-2 transport packets because of AAL-5's general availability. However, there is a mismatch in size between MPEG-2's transport packets (188 bytes) and ATM AAL-5's protocol data units (up to 65 535 bytes). In this paper, we examine and analyze four different packing schemes, 1TP, 2TP, nTP-Tight, and nTP-Loose (the scheme we propose), which encapsulate a certain number of MPEG-2 transport packets into one AAL-5 PDU. The nTP-Loose scheme is proposed to provide (1) better end-to-end performance than schemes 1TP and 2TP, (2) better error-recovery capability than scheme nTP-Tight, and (3) the same buffer requirement as scheme 2TP. A Power Macintosh ATM platform was used to identify the range of possible ways of packing MPEG-2 transport packets into one ATM AAL-5 PDU, when schemes with more than two MPEG-2 transport packets are chosen. Based on the test results, 10 or 12 MPEG-2 transport packets, which can yield throughputs of 70.36 and 78.98 Mbits/s, respectively, are recommended. Fast forward and backward playing of MPEG-2 movies (several times the video display speed) can be easily achieved via ATM networks.
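The packing arithmetic behind the 1TP/2TP/nTP schemes can be checked directly: an AAL-5 PDU carrying n transport packets of 188 bytes plus the 8-byte AAL-5 trailer is padded up to a multiple of the 48-byte ATM cell payload. The constants below are standard MPEG-2/AAL-5/ATM values; the efficiency figures printed are illustrative, not the throughput results reported in the paper.

```python
# Cells needed per AAL-5 PDU and wire-payload efficiency for n MPEG-2 TPs.
import math

TP = 188          # MPEG-2 transport packet size in bytes
TRAILER = 8       # AAL-5 trailer
CELL_PAYLOAD = 48 # payload bytes per ATM cell
CELL = 53         # total bytes per ATM cell on the wire

def aal5_cells(n_tp):
    return math.ceil((n_tp * TP + TRAILER) / CELL_PAYLOAD)

for n in (1, 2, 10, 12):
    cells = aal5_cells(n)
    efficiency = n * TP / (cells * CELL)
    print(f"{n:2d} TPs -> {cells:3d} cells, payload efficiency {efficiency:.1%}")
```

With these values, 2 transport packets fill exactly 8 cells with no padding, and the larger groupings all settle at roughly the same 88.7% payload efficiency, which is why the choice among nTP schemes turns on end-to-end performance, error recovery, and buffering rather than raw efficiency.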

19.
This work presents a study of RTP multiplexing schemes, which are compared with the normal use of RTP, in terms of experienced quality. Bandwidth saving, latency and packet loss for different options are studied, and some tests of Voice over IP (VoIP) traffic are carried out in order to compare the quality obtained using different implementations of the router buffer. Voice quality is calculated using the ITU R-factor, which is a widely accepted quality estimator. The tests show the bandwidth savings of multiplexing, and also the importance of packet size for certain buffers, as latency and packet loss may be affected. The improvement in the customer's experience is measured, showing that the use of multiplexing can be interesting in some scenarios, like an enterprise with different offices connected via the Internet. The system is also tested using different numbers of samples per packet, and the distribution of the flows into different tunnels is found to be an important factor in achieving an optimal perceived quality for each kind of buffer. Grouping all the flows into a single tunnel will not always be the best solution, as the increase of the number of flows does not improve bandwidth efficiency indefinitely. If the buffer penalizes big packets, it will be better to group the flows into a number of tunnels. The router processing capacity has to be taken into account too, as the limit of packets per second it can manage must not be exceeded. The obtained results show that multiplexing is a good way to improve the customer's experience of VoIP in scenarios where many RTP flows share the same path.
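A rough calculation of the header saving from multiplexing is sketched below: each native IPv4/UDP/RTP packet carries about 40 bytes of headers, while a multiplexed bundle shares one tunnel header and adds a small per-flow mini-header. The tunnel-header size, mini-header size, and payload size are assumptions for illustration, since they depend on the specific multiplexing scheme and codec.

```python
# Bandwidth saving of bundling N small RTP flows into one tunnel (sketch).

IP_UDP_RTP = 40        # bytes of headers on a native IPv4 RTP packet
TUNNEL_HDR = 40        # assumed shared tunnel header per multiplexed bundle
MINI_HDR = 4           # assumed per-flow mini-header inside the bundle
PAYLOAD = 20           # assumed small voice payload per flow, in bytes

def native_bytes(n_flows):
    return n_flows * (IP_UDP_RTP + PAYLOAD)

def muxed_bytes(n_flows):
    return TUNNEL_HDR + n_flows * (MINI_HDR + PAYLOAD)

for n in (2, 5, 20):
    saving = 1 - muxed_bytes(n) / native_bytes(n)
    print(f"{n:2d} flows: saving {saving:.0%}")
```

Under these assumptions the saving grows quickly for the first few flows and then flattens out, which matches the observation above that adding more flows to a single tunnel does not improve bandwidth efficiency indefinitely.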

