Similar Literature (20 results)
1.
As a congestion measure, queuing delay has many advantages, but queuing delay alone cannot completely avoid packet loss, and once packets are dropped because the link buffer is insufficient, queuing delay no longer reflects network congestion effectively. This paper proposes a congestion control model based on both queuing delay and packet loss rate, which adopts a dual-mode control method. When the bottleneck link has sufficient buffering, the model uses queuing delay as the congestion measure, so that each flow obtains stable dynamics and proportional fairness. When the bottleneck router lacks sufficient buffering and packet loss is unavoidable, the model uses the packet loss rate as the congestion measure, so that flows still obtain characteristics close to those of the lossless case. The model remains stable when switching between the two modes, achieving a smooth transition.
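A minimal sketch of the dual-mode idea described above, with hypothetical thresholds and gains (not the paper's actual controller): queuing delay drives the window while no losses are observed, and the loss rate takes over once drops appear.

```python
def update_cwnd(cwnd, queuing_delay, loss_rate,
                delay_target=0.01, alpha=1.0, beta=0.5):
    """One dual-mode congestion-window update step.

    Hypothetical controller: delay mode tracks a target queuing delay
    (Vegas-style additive adjustment); loss mode backs off
    multiplicatively, scaled by the observed loss rate.
    """
    if loss_rate == 0.0:
        # Delay mode: increase below the target, decrease above it.
        if queuing_delay < delay_target:
            return cwnd + alpha
        return max(1.0, cwnd - alpha)
    # Loss mode: multiplicative decrease scaled by the loss rate.
    return max(1.0, cwnd * (1.0 - beta * min(1.0, loss_rate)))
```

The mode switch here is implicit in the `loss_rate == 0.0` test; a real controller would smooth both signals to keep the transition stable, as the abstract requires.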

2.
As the number of nodes in wireless network environments grows, network delay has become a pressing problem. To improve quality of service (QoS) and throughput, this paper proposes a priority-based queuing delay model: each packet is assigned a preset priority to distinguish its importance and timeliness, the queue in each AP device is divided into three types by priority, and packets are placed into the corresponding queue for transmission, which effectively reduces queuing delay at the sender. Analysis and simulation show that, compared with a network of nodes without priority queues, this scheme not only greatly reduces the delay of individual nodes but also markedly lowers the average delay of the whole network, significantly improving overall performance.
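The three-class queue at each AP can be sketched as a strict-priority service discipline; the class layout and names below are illustrative, not the paper's exact design.

```python
from collections import deque

class PriorityAPQueue:
    """Three strict-priority FIFO queues, one per traffic class.

    Class 0 is highest priority; dequeue always drains higher classes
    first. Each packet's priority is assumed to be preset by the sender.
    """
    def __init__(self):
        self.queues = [deque(), deque(), deque()]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        for q in self.queues:          # scan from highest priority down
            if q:
                return q.popleft()
        return None                    # all queues empty
```

Strict priority minimizes delay for the top class at the cost of possible starvation of class 2 under heavy high-priority load, which is the usual trade-off in such schemes.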

3.
In a packet switching network, congestion is unavoidable and affects the quality of real‐time traffic with such problems as delay and packet loss. Packet fair queuing (PFQ) algorithms are well‐known solutions for quality‐of‐service (QoS) guarantees through packet scheduling. Our approach differs from previous algorithms in that it uses hardware time, obtained by sampling a counter triggered by a periodic clock signal. This clock signal can be provided to all the modules of a routing system to achieve synchronization. In this architecture, a variant of the PFQ algorithm, called digitized delay queuing (DDQ), can be distributed over many line interface modules. We derive the delay bounds in a single processor system and in a distributed architecture. The definition of traffic contribution improves the simplicity of the mathematical models. The effect of timing differences between modules in a distributed architecture is the key to understanding the delay behavior of a routing system. The number of bins required for the DDQ algorithm is also derived to make the system configuration clear. The analytical models developed in this paper form the basis of improvement and application to a combined input and output queuing (CIOQ) router architecture for a higher speed QoS network.

4.
We present a novel queuing analytical framework for the performance evaluation of a distributed and energy-aware medium access control (MAC) protocol for wireless packet data networks with service differentiation. Specifically, we consider a node (both buffer-limited and energy-limited) in the network with two different types of traffic, namely, high-priority and low-priority traffic, and model the node as a MAP (Markovian arrival process)/PH (phase-type)/1/K nonpreemptive priority queue. The MAC layer in the node is modeled as a server, and a vacation queuing model is used to model the sleep and wakeup mechanism of the server. We study both the standard exhaustive and the number-limited exhaustive vacation models in the multiple-vacation case. A setup time for the head-of-line packet in the queue is considered, which abstracts the contention and back-off mechanism of the MAC protocol in the node. A nonideal wireless channel model is also considered, which enables us to investigate the effects of packet transmission errors on the performance behavior of the system. After obtaining the stationary distribution of the system using the matrix-geometric method, we study the performance indices, such as packet dropping probability, access delay, and queue length distribution, for high-priority packets, as well as the energy saving factor at the node. Taking into account the bursty traffic arrival (modeled as MAP) and, therefore, the nonsaturation case for the queuing analysis of the MAC protocol, using phase-type distributions for both the service and the vacation processes, and combining the priority queuing model with the vacation queuing model make the analysis very general and comprehensive. Typical numerical results obtained from the analytical model are presented and validated by extensive simulations. We also show how the optimal MAC parameters can be obtained by using numerical optimization.

5.
Queueing in high-performance packet switching
The authors study the performance of four different approaches for providing the queuing necessary to smooth fluctuations in packet arrivals to a high-performance packet switch. They are (1) input queuing, where a separate buffer is provided at each input to the switch; (2) input smoothing, where a frame of b packets is stored at each input line of the switch and simultaneously launched into a switch fabric of size Nb×Nb; (3) output queuing, where packets are queued in a separate first-in first-out (FIFO) buffer located at each output of the switch; and (4) completely shared buffering, where all queuing is done at the outputs and all buffers are completely shared among all the output lines. Input queues saturate at an offered load that depends on the service policy and the number of inputs N, but is approximately 0.586 with FIFO buffers when N is large. Output queuing and completely shared buffering both achieve the optimal throughput-delay performance for any packet switch. However, compared to output queuing, completely shared buffering requires less buffer memory at the expense of an increase in switch fabric size.
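The 0.586 input-queuing saturation figure (2 − √2 for large N) comes from head-of-line (HOL) blocking, and can be reproduced with a small Monte Carlo sketch; the switch size and slot count below are illustrative, not taken from the paper.

```python
import random

def hol_saturation_throughput(n=32, slots=2000, seed=1):
    """Estimate the saturation throughput of FIFO input queuing.

    Every input is always backlogged; each head-of-line (HOL) packet
    has a uniformly random output. Per slot, each contended output
    serves one of its HOL packets; each winner's input reveals a fresh
    HOL packet with a new random destination.
    """
    rng = random.Random(seed)
    hol = [rng.randrange(n) for _ in range(n)]   # HOL destinations
    served = 0
    for _ in range(slots):
        contenders = {}
        for inp, out in enumerate(hol):
            contenders.setdefault(out, []).append(inp)
        for out, inps in contenders.items():
            winner = rng.choice(inps)            # output picks one packet
            hol[winner] = rng.randrange(n)       # next packet moves to HOL
            served += 1
    return served / (n * slots)
```

For n = 32 the estimate lands close to the 0.586 asymptote; small n gives noticeably higher throughput (e.g. 0.75 for a 2×2 switch), matching the N-dependence the abstract notes.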

6.
Software‐defined networking (SDN) is a network concept that brings significant benefits for mobile cellular operators. In an SDN‐based core network, the average service time of an OpenFlow switch is highly influenced by the total capacity and type of the output buffer, which is used for temporary storage of incoming packets. In this work, the main goal is to model the handover delay due to the exchange of OpenFlow‐related messages in mobile SDN networks. The handover delay is defined as the overall delay experienced by the mobile node within the handover procedure, when reestablishing an ongoing session from the switch in the source eNodeB to the switch in the destination eNodeB. We propose a new analytical model and compare two SDN switch designs, each modeled as a continuous‐time Markov process using quasi‐birth–death processes: (1) a single shared buffer without priority (model SFB), used at all output ports for both control and user traffic, and (2) two isolated buffers with priority (model priority finite buffering, PFB), one for control and the other for user plane traffic, where control traffic is always prioritized. The two proposed systems are compared in terms of total handover delay and the minimal buffer capacity needed to satisfy a certain packet error ratio imposed by the link. The mathematical modeling is verified via extensive simulations. In terms of handover delay, the results show that the PFB model outperforms the SFB model, especially for networks with a high number of users and a high probability of packet‐in messages. As for the buffer dimensioning analysis, for lower arrival rates, a low number of users, and a low probability of packet‐in messages, the SFB model has the advantage of requiring a smaller buffer size.

7.
This paper analyzes the packet loss and delay performance of an arrayed-waveguide-grating-based (AWG) optical packet switch developed within the EPSRC-funded project WASPNET (wavelength switched packet network). Two node designs are proposed based on feedback and feed-forward strategies, using sharing among multiple wavelengths to assist in contention resolution. The feedback configuration allows packet priority routing at the expense of using a larger AWG. An analytical framework has been established to compute the packet loss probability and delay under Bernoulli traffic, justified by simulation. A packet loss probability of less than 10^-9 was obtained with a buffer depth per wavelength of 10, for a switch with 16 inputs/outputs and four wavelengths per input, at a uniform Bernoulli traffic load of 0.8 per wavelength. The mean delay is less than 0.5 timeslots at the same buffer depth per wavelength.

8.
The process of packet clustering in a network with well-regulated input traffic is studied and a strategy for congestion-free communication in packet networks is proposed. The strategy provides guaranteed services per connection with no packet loss and an end-to-end delay which is a constant plus a small bounded jitter term. It is composed of an admission policy imposed per connection at the source node, and a particular queuing scheme practiced at the switching nodes, which is called stop-and-go queuing. The admission policy requires the packet stream of each connection to possess a certain smoothness property upon arrival at the network. This is equivalent to a peak bandwidth allocation per connection. The queuing scheme eliminates the process of packet clustering and thereby preserves the smoothness property as packets travel inside the network. Implementation is simple.

9.
A selective-repeat automatic-repeat request (SR ARQ) system model in which packets arrive at the transmitter according to a general renewal process is analyzed. The overall delay of a packet in a system that operates under the SR ARQ protocol consists of the queuing delay at the transmitter and the resequencing delay at the receiver. The joint distribution of the buffer occupancies at the transmitter and at the receiver is derived, and the two types of delay are compared using numerical examples.

10.
Next generation mobile networks are expected to provide seamless personal mobile communication and quality-of-service (QoS) guaranteed IP-based multimedia services. Providing seamless communication in mobile networks means that the networks have to be able to provide not only fast but also lossless handoff. This paper presents a two-layer downlink queuing model and a scheduling mechanism for providing lossless handoff and QoS in mobile networks, which exploit IP as a transport technology for transferring datagrams between base stations and the high-speed downlink packet access (HSDPA) at the radio layer. In order to reduce handoff packet dropping rate at the radio layer and packet forwarding rate at the IP layer and provide high system performance, e.g., downlink throughput, scheduling algorithms are performed at both IP and radio layers, which exploit handoff priority scheduling principles and take into account buffer occupancy and channel conditions. Performance results obtained by computer simulation show that, by exploiting the downlink queuing model and scheduling algorithms, the system is able to provide low handoff packet dropping rate, low packet forwarding rate, and high downlink throughput.

11.
This paper addresses the problem of providing per-connection end-to-end delay guarantees in a high-speed network. We consider a network comprised of store-and-forward packet switches, in which a packet scheduler is available at each output link. We assume that the network is connection oriented and enforces some admission control which ensures that the source traffic conforms to specified traffic characteristics. We concentrate on the class of rate-controlled service (RCS) disciplines, in which traffic from each connection is reshaped at every hop, and develop end-to-end delay bounds for the general case where different reshapers are used at each hop. In addition, we establish that these bounds can also be achieved when the shapers at each hop have the same “minimal” envelope. The main disadvantage of this class of service discipline is that the end-to-end delay guarantees are obtained as the sum of the worst-case delays at each node, but we show that this problem can be alleviated through “proper” reshaping of the traffic. We illustrate the impact of this reshaping by demonstrating its use in designing RCS disciplines that outperform service disciplines that are based on generalized processor sharing (GPS). Furthermore, we show that we can restrict the space of “good” shapers to a family which is characterized by only one parameter. We also describe extensions to the service discipline that make it work conserving and as a result reduce the average end-to-end delays.

12.
We consider modeling the statistical behavior of interactive and streaming traffic in high-speed downlink packet access (HSDPA) networks. Two important applications in these traffic categories are web browsing (interactive service) and video streaming (streaming service). Web browsing is characterized by its high sensitivity to delay. Video streaming, on the other hand, is less sensitive to delay; however, due to its large frame sizes, video traffic is more affected by the packet loss resulting from a limited buffer size at the base station. Taking these characteristics into account, we model the queuing delay probability density function (PDF) of web-browsing traffic and the queuing buffer size distribution of video streaming traffic. Specifically, we show that the queuing delay of web-browsing traffic follows an exponential distribution and that the queuing buffer size of video streaming traffic follows a weighted Weibull distribution. Model fitting based on simulated data is used to provide simple mathematical formulations for the different parameters that characterize the PDFs under consideration. The provided equations can be used directly in HSDPA network dimensioning and as a reference for satisfying a given quality of service (QoS).
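As a toy illustration of fitting the exponential delay model, the rate parameter has a closed-form maximum-likelihood estimate (the reciprocal of the sample mean); the data below are synthetic, not HSDPA traces.

```python
import random

def fit_exponential_rate(samples):
    """MLE of the rate of an exponential distribution: 1 / sample mean."""
    return len(samples) / sum(samples)

# Synthetic queuing delays with a known rate, for illustration only.
rng = random.Random(42)
true_rate = 5.0                      # i.e. mean queuing delay of 0.2 s
delays = [rng.expovariate(true_rate) for _ in range(20000)]
est = fit_exponential_rate(delays)
```

With enough samples the estimate converges to the true rate; the paper's actual fits additionally map the distribution parameters to network parameters, which this sketch does not attempt.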

13.
In this paper, we study the problem of packet scheduling in a wireless environment with the objective of minimizing the average transmission energy expenditure under individual packet delay constraints. Most past studies assumed that the input arrivals followed a Poisson process or were statistically independent. However, traffic from a real source typically has strong time correlation. We model a packet scheduling and queuing system for a general input process in linear time-invariant systems. We propose an energy-efficient packet scheduling policy that takes the correlation into account. Meanwhile, a slower transmission rate implies that packets stay in the transmitter for a longer time, which may result in unexpected transmitter overload and buffer overflow. We derive upper bounds on the maximum transmission rate under an overload probability and on the required buffer size under a packet drop rate. Simulation results show that the proposed scheduler achieves up to 15 percent energy savings compared with policies that assume statistically independent input. Evaluation of the bounds in providing QoS control shows that both deadline misses and packet drops can be effectively bounded by a predefined constraint.
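The energy-versus-rate tradeoff that motivates slower transmission can be sketched with the standard convex power model E(T) = T·(2^(B/T) − 1) for sending B bits in time T (an illustrative Shannon-style cost, not the paper's exact model): stretching the transmission time strictly lowers energy, at the cost of longer queue residence.

```python
def tx_energy(bits, duration):
    """Energy to send `bits` over `duration` slots under the convex
    power model P(r) = 2**r - 1, where r = bits / duration is the rate.
    """
    rate = bits / duration
    return duration * (2.0 ** rate - 1.0)
```

For a 10-bit packet, tx_energy(10, 1) = 1023 while tx_energy(10, 2) = 62: halving the rate cuts energy by more than an order of magnitude, which is exactly why deadline-constrained "lazy" scheduling pays off and why the buffer and overload bounds above are needed.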

14.
This paper focuses on routing overhead analysis in ad hoc networks. Existing work in this research field considered the infinite-buffer scenario, in which buffer overflow never occurs. Obviously, in realistic ad hoc networks, the node buffer size is strictly bounded, which leads to unavoidable packet loss. Once a packet is dropped by a relay node, the bandwidth consumed by the preceding transmissions is wasted. We define this extra wasted bandwidth as the packet loss (PL) overhead. A theoretical analysis framework based on a G/G/1/K queuing model is provided to estimate the PL overhead for any specific routing protocol. With this framework, we then propose a distributed routing algorithm termed novel load-balancing cognitive routing (NLBCR). Simulations in the OPNET network simulator are further conducted to compare the performance of NLBCR, AODV, and CRP. The results indicate that NLBCR can reduce routing overhead to a considerable extent, as well as improve network throughput and decrease end-to-end delay.

15.
This article describes a new model of a cooperative file sharing system in a wireless mesh network. The authors' approach is to develop an efficient and cooperative file sharing mechanism based on opportunistic random linear network coding. Within this mechanism, every node transmits random linear combinations of its packets according to a cooperative priority, which is computed in a distributed manner from the node's possible contribution to its neighbor nodes. With this mechanism, the more a node contributes to others, the better its chances of recovering the entire file first. The performance metrics of interest here are the delay until all the packets in a file have been delivered to all nodes, and the ideal packet size that yields the minimum transmission delay. Through extensive simulation the authors compare their mechanism with the current transmission process in a wireless mesh network without random linear network coding. They found that, using their mechanism, the nodes can cooperatively share the entire file with less transmission time and delay than the current transmission process without random linear network coding.

16.
A movable boundary protocol is proposed for integrating packet voice and data in unidirectional bus networks. The head station on the bus learns the number of ready-to-transmit voice stations by reading a `request' bit in the header of the received packets and allocates the exact number of voice slots needed in each frame. The protocol guarantees that the maximum delay to transmit a voice packet will be less than the round-trip propagation delay at the head station plus twice the time needed to form the packet. The average data packet delay is evaluated via approximate analysis and simulation, for the case in which the voice-reserved slots in a frame are contiguous and for the case in which they are evenly distributed.

17.
A single-stage nonblocking N*N packet switch with both output and input queuing is considered. The limited queuing at the output ports resolves output port contention partially. Overflow at the output queues is prevented by a backpressure mechanism and additional queuing at the input ports. The impact of the backpressure effect on the switch performance for arbitrary output buffer sizes and for N to infinity is studied. Two different switch models are considered: an asynchronous model with Poisson arrivals and a synchronous model with Bernoulli arrivals. The investigation is based on the average delay and the maximum throughput of the switch. Closed-form expressions for these performance measures are derived for operation with fixed size packets. The results demonstrate that a modest amount of output queuing, in conjunction with appropriate switch speedup, provides significant delay and throughput improvements over pure input queuing. The maximum throughput is the same for the synchronous and the asynchronous switch model, although the delay is different.

18.
When designing and configuring an ATM-based B-ISDN, it remains difficult to guarantee the quality of service (QoS) for different service classes, while still allowing enough statistical sharing of bandwidth so that the network is efficiently utilized. These two goals are often conflicting. Guaranteeing QoS requires traffic isolation, as well as allocation of enough network resources (e.g., buffer space and bandwidth) to each call. However, statistical bandwidth sharing means the network resources should be occupied on demand, leading to less traffic isolation and minimal resource allocation. The authors address this problem by proposing and evaluating a network-wide bandwidth management framework in which an appropriate compromise between the two conflicting goals is achieved. Specifically, the bandwidth management framework consists of a network model and a network-wide bandwidth allocation and sharing strategy. Implementation issues related to the framework are discussed. For real-time applications the authors obtain maximum queuing delay and queue length, which are important in buffer design and VP (virtual path) routing.

19.
The mean delay and throughput characteristics of various trunk queuing disciplines of the FIFO (first in, first out) and round-robin types for byte-stream data networks are investigated. It is shown that, under normal traffic, high-speed trunks substantially reduce queuing delays. Almost any queuing discipline will give acceptable delay if the backbone network is sufficiently faster than the access lines. In the absence of high-speed trunks, both the packet FIFO and the round-robin discipline can be augmented with a priority queue that expedites single-packet messages, which may carry network control signals or echoplex characters. In FIFO-type disciplines, the mean delays of messages that do not go through the priority queue depend on the overall message length distribution. A sprinkling of very long messages can significantly increase the mean delays of other messages. In disciplines of round-robin type, the mean delay of each message type is not affected by the presence of very long messages of other types.

20.
Banyan networks are being proposed for interconnecting memory and processor modules in multiprocessor systems as well as for packet switching in communication networks. This paper describes an analysis of the performance of a packet switch based on a single-buffered Banyan network. A model of a single-buffered Banyan network provides results on the throughput, delay, and internal blocking. Results of this model are combined with models of the buffer controller (finite and infinite buffers). It is shown that for balanced loads, the switching delay is low for loads below maximum throughput (about 45 percent per input link) and the blocking at the input buffer controller is low for reasonable buffer sizes.
