Similar Literature
20 similar documents found (search time: 78 ms)
1.
The concept of Quality of Service (QoS) networks has gained growing attention recently, as the traffic volume in the Internet constantly increases, and QoS guarantees are essential to ensure proper operation of most communication-based applications. A QoS switch serves m incoming queues by transmitting packets arriving at these queues through one output port, one packet per time step. Each packet is marked with a value indicating its priority in the network. Since the queues have bounded capacities and the rate of arriving packets can be much higher than the transmission rate, packets can be lost due to insufficient queue space. The goal is to maximize the total value of transmitted packets. This problem encapsulates two dependent questions: buffer management, namely which packets to admit into the queues, and scheduling, i.e., which queue to use for transmission in each time step. We use competitive analysis to study online switch performance in QoS-based networks. Specifically, we provide a novel generic technique that decouples the buffer management and scheduling problems. Our technique transforms any single-queue buffer management policy (preemptive or non-preemptive) into a scheduling and buffer management algorithm for our general m-queue model, whose competitive ratio is at most twice the competitive ratio of the given buffer management policy. We use our technique to derive concrete algorithms for the general preemptive and non-preemptive cases, as well as for the interesting special cases of the 2-value model and the unit-value model. We also provide a 1.58-competitive randomized algorithm for the unit-value case. This case is interesting in itself, since most current networks (e.g. IP networks) do not yet incorporate full QoS capabilities and treat all packets equally.
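To make the decoupling idea above concrete, here is a minimal Python sketch of one natural instantiation: each queue runs a preemptive greedy single-queue admission policy, and the scheduler transmits the most valuable head-of-line packet in each step. The class and function names are illustrative assumptions; this is a sketch, not necessarily the exact construction of the paper.

```python
class BoundedQueue:
    """FIFO queue of bounded capacity with a preemptive greedy admission
    policy (hypothetical single-queue policy used for illustration): admit
    when there is room; otherwise evict the least valuable buffered packet
    if the arriving packet is strictly more valuable."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = []          # FIFO order; a packet is just its value

    def admit(self, value):
        if len(self.packets) < self.capacity:
            self.packets.append(value)
            return True
        worst = min(range(len(self.packets)), key=self.packets.__getitem__)
        if self.packets[worst] < value:
            del self.packets[worst]
            self.packets.append(value)
            return True
        return False


def transmit_step(queues):
    """Scheduling rule (illustrative): in each time step, transmit the
    head-of-line packet of greatest value among all non-empty queues."""
    candidates = [q for q in queues if q.packets]
    if not candidates:
        return None
    best = max(candidates, key=lambda q: q.packets[0])
    return best.packets.pop(0)
```

In the unit-value model, admit() reduces to tail-drop and transmit_step() to serving an arbitrary non-empty queue.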

2.
Video transmission over wireless channels is affected by channel-induced packet losses. Distortion due to channel errors can be alleviated by applying forward error correction. Aggregating H.264/AVC slices to form video packets with sizes adapted to their importance can also improve transmission reliability. Larger packets are more likely to be in error but smaller packets require more overhead. We present a cross-layer dynamic programming (DP) approach to minimize the expected received video distortion by jointly addressing the priority-adaptive packet formation at the application layer and rate compatible punctured convolutional (RCPC) code rate allocation at the physical layer for prioritized slices of each group of pictures (GOP). Some low priority slices are also discarded to improve protection of more important slices and meet the channel bit-rate limitations. We propose two schemes. Our first scheme carries out joint optimization for all slices of a GOP at a time. The second scheme extends our cross-layer DP-based approach to slices of each frame by predicting the expected channel bit budget per frame for live streaming. The prediction uses a generalized linear model developed over the cumulative mean squared error per frame, channel SNR, and normalized compressed frame bit budget. The parameters are determined over a video dataset that spans high, medium and low motion complexity. The predicted frame bit budget is used to derive the packet sizes and corresponding RCPC code rates for live transmission using our DP-based approach. Simulation results show that both proposed schemes significantly improve the received video quality over contemporary error protection schemes.

3.
In this paper, a novel approach for efficiently supporting IP packets directly over a slotted optical wavelength-division-multiplexing (WDM) layer with several quality of service (QoS) requirements is presented and analyzed. The approach is based on two main features. First, an aggregation cycle is performed at fixed time intervals by assembling several IP packets into a single macro-packet of fixed size, called an aggregate packet. Second, since IP packets have variable size, the aggregation process may or may not allow segmentation of an IP packet that does not fit into the remaining gap in the aggregate packet. As a key element of our proposition, an efficient QoS support access mechanism is presented. The new QoS control performs aggregation in a loop manner by always beginning the aggregation cycle with the highest priority class. The aggregation cycle ends if the aggregate packet cannot accommodate more IP packets, or if the lowest priority class is reached. We introduce two analytical models that allow us to evaluate the effectiveness of the aggregation technique with and without segmentation. In addition, a third analytical model is presented to analyze the standard case (where no aggregation is performed), and comparisons between the three models are carried out. The aggregation models are validated by simulations, and the effect of self-similarity is also analyzed. The application of the proposed approach takes place in a slotted dual bus optical ring network (SDBORN), where we show that good fairness and high bandwidth efficiency are achieved, and that only two QoS classes (real-time and non-real-time classes) at the access interface (IP domain) are sufficient in order to fulfill the strict delay requirements of real-time data traffic.

4.
The design of a high speed, broadband packet switch with two priority levels for application in integrated voice/data networks is presented. The packet switch can efficiently cope with 128 byte packets converging on it from eight 140 Mbit/s dynamic time division multiplexed fibre optic links. The packet switch throughput varies with the load and traffic composition, and the delay experienced by voice and data packets is within 300 μs and 3 ms, respectively. The design is implemented by task-sharing in a multi-processor configuration. The design of the packet switch, including its subsystems, is detailed here.

5.
If the frame size of a multimedia encoder is small, Internet Protocol (IP) streaming applications need to pack many encoded media frames in each Real-time Transport Protocol (RTP) packet to avoid unnecessary header overhead. The generic forward error correction (FEC) mechanisms proposed in the literature for RTP transmission do not perform optimally in terms of stability when the RTP payload consists of several individual data elements of equal priority. In this paper, we present a novel approach for generating FEC packets optimized for applications packing multiple individually decodable media frames in each RTP payload. In the proposed method, a set of frames and its corresponding FEC data are spread among multiple packets so that the experienced frame loss rate does not vary greatly under different packet loss patterns. We verify the performance improvement gained against traditional generic FEC by analyzing and comparing the variance of the residual frame loss rate in the proposed packetization scheme and in the baseline generic FEC.
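As a rough illustration of the interleaving idea (not the authors' packetization algorithm; the function names and the single XOR parity packet are assumptions made for the example), the sketch below spreads the frames of one block round-robin over several RTP payloads, so that any single packet loss removes at most one frame out of every num_packets consecutive frames.

```python
def interleave_frames(frames, num_packets):
    """Spread frames round-robin over packets so that losing any single
    packet removes at most one frame out of every `num_packets` consecutive
    frames, keeping the residual frame-loss rate nearly constant."""
    packets = [[] for _ in range(num_packets)]
    for i, frame in enumerate(frames):
        packets[i % num_packets].append(frame)
    return packets


def xor_parity(packets):
    """Illustrative packet-level FEC: one parity packet formed as the
    byte-wise XOR of the packet payloads (shorter payloads zero-padded)."""
    length = max(len(b"".join(p)) for p in packets)
    parity = bytearray(length)
    for p in packets:
        payload = b"".join(p).ljust(length, b"\x00")
        for k, byte in enumerate(payload):
            parity[k] ^= byte
    return bytes(parity)
```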

6.
Describes Tiny Tera: a small, high-bandwidth, single-stage switch. Tiny Tera has 32 ports switching fixed-size packets, each operating at over 10 Gbps (approximately the SONET OC-192 rate, a telecom standard for system interconnects). The switch distinguishes four classes of traffic and includes efficient support for multicasting. We aim to demonstrate that it is possible to use currently available CMOS technology to build this compact switch with an aggregate bandwidth of approximately 1 terabit per second and a central hub no larger than a can of soda. Such a switch could serve as a core for an ATM switch or an Internet router. Tiny Tera is an input-buffered switch, which makes it the highest-bandwidth switch possible given a particular CMOS and memory technology. The switch consists of three logical elements: ports, a central crossbar switch, and a central scheduler. It queues packets at a port on entry and optionally prior to exit. The scheduler, which has a map of each port's queue occupancy, determines the crossbar configuration every packet time slot. Input queueing, parallelism, and tight integration are the keys to such a high-bandwidth switch. Input queueing reduces the memory bandwidth requirements: when a switch queues packets at the input, the buffer memories need run no faster than the line rate. Thus, there is no need for the speedup required in output-queued switches.

7.
We propose a new fair scheduling technique, called OCGRR (output controlled grant-based round robin), for the support of DiffServ traffic in a core router. We define a stream to be the same-class packets from a given immediate upstream router destined to an output port of the core router. At each output port, streams may be isolated in separate buffers before being scheduled in a frame. The sequence of traffic transmission in a frame starts from higher-priority traffic and goes down to lower-priority traffic. A frame may have a number of small rounds for each class. Each stream within a class can transmit a number of packets in the frame based on its available grant, but only one packet per small round, thus reducing the intertransmission time from the same stream and achieving a smaller jitter and startup latency. The grant can be adjusted in a way to prevent the starvation of lower priority classes. We also verify and demonstrate the good performance of our scheduler by simulation and comparison with other algorithms in terms of queuing delay, jitter, and start-up latency.
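A compact sketch of the frame structure described above, under assumed data layouts (classes keyed by a priority index with 0 highest; a per-stream grant giving the packet budget for the frame); this illustrates the small-round mechanism rather than reproducing the authors' pseudocode.

```python
from collections import deque


def ocgrr_frame(class_streams, grants):
    """One OCGRR-style frame (illustrative sketch): classes are served from
    highest to lowest priority; within a class, each stream sends at most
    one packet per small round and at most grants[stream] packets in the
    whole frame.

    class_streams: dict {class_index: {stream_id: deque_of_packets}}
                   where a smaller class_index means higher priority.
    grants:        dict {stream_id: packets allowed in this frame}.
    """
    sent = []
    for cls in sorted(class_streams):                    # highest priority first
        remaining = {s: grants.get(s, 0) for s in class_streams[cls]}
        while True:
            progress = False
            for s, queue in class_streams[cls].items():  # one small round
                if queue and remaining[s] > 0:
                    sent.append(queue.popleft())
                    remaining[s] -= 1
                    progress = True
            if not progress:                             # class exhausted
                break
    return sent
```

Serving only one packet per stream per small round is what keeps packets of the same stream spaced apart, which is the source of the smaller jitter and start-up latency claimed in the abstract.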

8.
Packet scheduling is a critical component in multi-session video streaming over mesh networks. Different video packets have different levels of contribution to the overall video presentation quality at the receiver side. In this work, we develop a fine-granularity transmission distortion model for the encoder to predict the quality degradation of decoded videos caused by lost video packets. Based on this packet-level transmission distortion model, we propose a content-and-deadline-aware scheduling (CDAS) scheme for multi-session video streaming over multi-hop mesh networks, where content priority, queuing delays, and dynamic network transmission conditions are jointly considered for each video packet. Our extensive experimental results demonstrate that the proposed transmission distortion model and the CDAS scheme significantly improve the performance of multi-session video streaming over mesh networks.

9.
Tom  Joris  Herwig 《Performance Evaluation》2006,63(12):1235-1252
In this paper, we investigate a simplified head-of-the-line with priority jumps (HOL-PJ) scheduling discipline. To this end, we consider a discrete-time single-server queueing system with two priority queues of infinite capacity and with a newly introduced HOL-PJ priority scheme. We derive expressions for the probability generating function of the system contents and the packet delay. Some performance measures (such as mean and variance) of these quantities are derived and are used to illustrate the impact and significance of the HOL-PJ priority scheduling discipline in an output queueing switch. We compare this dynamic priority scheduling discipline with a first-in, first-out (FIFO) scheduling and a static priority scheduling (HOL), and we investigate the influence of the different parameters of the simplified HOL-PJ scheduling discipline.

10.
OCGRR: A New Scheduling Algorithm for Differentiated Services Networks
We propose a new fair scheduling technique, called OCGRR (Output Controlled Grant-based Round Robin), for the support of DiffServ traffic in a core router. We define a stream to be the same-class packets from a given immediate upstream router destined to an output port of the core router. At each output port, streams may be isolated in separate buffers before being scheduled in a frame. The sequence of traffic transmission in a frame starts from higher-priority traffic and goes down to lower-priority traffic. A frame may have a number of small rounds for each class. Each stream within a class can transmit a number of packets in the frame based on its available grant, but only one packet per small round, thus reducing the intertransmission time from the same stream and achieving a smaller jitter and startup latency. The grant can be adjusted in a way to prevent the starvation of lower priority classes. We also verify and demonstrate the good performance of our scheduler by simulation and comparison with other algorithms in terms of queuing delay, jitter, and start-up latency.

11.
In this paper we investigate a certain class of systems containing dependent discrete-time queues. This class of systems consists of N nodes transmitting packets to each other or to the outside of the system over a common shared channel, and is characterized by the fact that access to the channel is assigned according to priorities that are preassigned to the nodes. Each node is assigned a probability distribution that indicates the probabilities with which a packet transmitted by the node is forwarded to one of the other nodes or to the outside of the system.

Making extensive use of the fact that the joint generating function of the queue-length distribution is analytic in a certain domain, we obtain an expression for this joint generating function. From the latter, any moment of the queue lengths, as well as average time delays, can be obtained.

The main motivation for investigating this class of systems is its applicability to several packet-radio networks. We give two examples: the first is a certain access scheme for a network where all nodes can hear each other, and the second is a three-node tandem packet-radio network.


12.
夏清国  高德远  姚群 《计算机应用》2004,24(2):37-38,52
Providing QoS guarantees for services in IP networks is becoming a key problem that IP technology must solve. Based on DiffServ, this paper proposes a QoS scheme for IP telephony and describes its basic idea and implementation in detail. The scheme assigns different priority levels to different packets: system-control packets receive the highest priority, voice packets the next, and ordinary data packets the lowest, so that the average waiting time of system-control and voice packets is reduced. The results show that assigning priorities to different types of packets is a feasible way to improve the QoS of IP telephony.
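A minimal sketch of the three-level strict-priority queueing the scheme describes; the class names and the heap-based realization are illustrative assumptions rather than the paper's implementation.

```python
import heapq
import itertools

PRIORITY = {"control": 0, "voice": 1, "data": 2}   # lower number = served first
_counter = itertools.count()                        # preserves FIFO within a class


def enqueue(pq, kind, packet):
    """Strict-priority queueing as described in the abstract: system-control
    packets are served before voice, and voice before ordinary data."""
    heapq.heappush(pq, (PRIORITY[kind], next(_counter), packet))


def dequeue(pq):
    """Return the highest-priority packet, or None if the queue is empty."""
    return heapq.heappop(pq)[2] if pq else None
```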

13.
Flow scheduling has been studied before; however, applying previous schemes directly to LTE networks may not achieve good performance. To perform well, both frequency-domain and time-domain allocation of LTE resource blocks (RBs) should be considered. Our method is suitable for real-time services and consists of three phases. In the frequency domain, we design our method to utilize the RBs effectively. In the time domain, we first manage queues for different applications and propose a mechanism for predicting packet delays. We introduce the concept of a virtual queue to predict the behavior of future incoming packets based on the packets in the current queue. Then, based on the calculated results, we introduce a cut-in process to rearrange the transmission order and discard those packets which cannot meet their delay requirements. We compare our scheduling mechanism with maximum throughput, proportional fair, modified largest delay first, and exponential proportional fair. Simulation results show that our scheduling method achieves better performance than the other schemes.
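The sketch below illustrates, with assumed field names (arrival, deadline) and a fixed per-packet service time, the flavor of the time-domain phase: estimate delays, let the most urgent packets cut in, and discard packets that would miss their delay requirement anyway. It is not the paper's exact mechanism, which additionally predicts future arrivals through a virtual queue.

```python
def time_domain_pass(queue, now, per_packet_service):
    """Reorder and prune a queue of packets (each a dict with "arrival" and
    "deadline" fields, both in seconds) so that the most urgent packets are
    served first and hopeless packets are dropped."""
    # Remaining delay budget = deadline minus time already spent waiting.
    by_urgency = sorted(queue, key=lambda p: p["deadline"] - (now - p["arrival"]))

    scheduled, t = [], now
    for pkt in by_urgency:
        finish = t + per_packet_service
        if finish - pkt["arrival"] <= pkt["deadline"]:
            scheduled.append(pkt)      # packet still meets its delay requirement
            t = finish
        # otherwise the packet is discarded, freeing the slot for others
    return scheduled
```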

14.
Research on an Enhanced Packet Scheduling Algorithm Based on HSDPA
Three classical packet scheduling algorithms for non-real-time services in HSDPA systems (RR, Max C/I, and PF) are analyzed in terms of system throughput and user fairness. To address the excessive retransmission delay of the PF algorithm, an enhanced packet scheduling algorithm combined with hybrid automatic repeat request (HARQ) is proposed. The algorithm reduces retransmission delay by raising the priority of retransmitted packets, effectively avoiding the waste of system resources. MATLAB simulation results show that the algorithm reduces the retransmission delay of individual users while still guaranteeing inter-user fairness and system throughput.
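A sketch of the core idea of promoting HARQ retransmissions inside a proportional-fair scheduler; the field names are assumptions for illustration, not taken from the paper.

```python
def pick_user(users):
    """Illustrative enhanced-PF rule: users with a pending HARQ
    retransmission are served first; ties are broken by the ordinary
    proportional-fair metric instant_rate / avg_rate."""
    def metric(u):
        pf = u["instant_rate"] / max(u["avg_rate"], 1e-9)
        return (1 if u["has_retx"] else 0, pf)   # retransmissions outrank new data
    return max(users, key=metric)
```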

15.
《Performance Evaluation》2006,63(4-5):278-294
We consider a time-slotted queueing model where each time slot can either be an arrival slot, in which new packets arrive, or a departure slot, in which packets are transmitted and hence depart from the queue. The slot scheduling strategy we consider describes periodically, and for a fixed number of time slots, which slots are arrival and departure slots. We consider a static and a dynamic strategy. For both strategies, we obtain expressions for the probability generating function of the steady-state queue length and the packet delay. The model is motivated by cable-access networks, which are often regulated by a request–grant procedure in which actual data transmission is preceded by a reservation procedure. Time slots can then either be used for reservation or for data transmission.

16.
In this paper, a novel priority assignment scheme is proposed for priority service networks, in which each link sets its own priority threshold, namely the lowest priority the link is willing to support for incoming packets without causing any congestion. Aiming at reliable transmission, the source then assigns each originated packet the maximum priority value required along its path, because links may otherwise discard incoming packets that do not meet the corresponding priority requirements. It is shown that if each source sends traffic at a rate that is reciprocal to the specified highest priority, bandwidth max–min fairness is achieved in the network. Furthermore, if each source possesses a utility function of the available bandwidth and sends traffic at a rate such that the associated utility is reciprocal to the highest link priority, utility max–min fairness is achieved. For general networks without priority services, the resulting flow control strategy can be treated as a unified framework for achieving either bandwidth max–min fairness or utility max–min fairness through a link pricing policy. More importantly, the utility function herein is only assumed to be strictly increasing and does not need to satisfy a strict concavity condition; the new algorithms are thus not only suitable for traditional data applications with elastic traffic, but are also capable of handling real-time applications in the Future Internet.
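Restating the rate rule of the abstract with assumed notation, where p_l is the priority threshold of link l, p_s is the highest threshold on source s's path (the priority stamped on its packets), and U_s is its utility function:

```latex
% p_l : priority threshold of link l;   p_s = \max_{l \in \mathrm{path}(s)} p_l
\[
  x_s = \frac{1}{p_s} \quad \text{(bandwidth max--min fair rates)},
  \qquad
  U_s(x_s) = \frac{1}{p_s}
  \;\Longleftrightarrow\;
  x_s = U_s^{-1}\!\left(\frac{1}{p_s}\right)
  \quad \text{(utility max--min fair rates)}.
\]
```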

17.
The authors examine the design, implementation, and experimental analysis of parallel priority queues for device and network simulation. They consider: 1) distributed splay trees using MPI; 2) concurrent heaps using shared memory atomic locks; and 3) a new, more general concurrent data structure based on distributed sorted lists, designed to provide dynamically balanced work allocation and efficient use of shared memory resources. We evaluate performance for all three data structures on a Cray-T3E system at KFA-Jülich. Our comparisons are based on simulations of single buffers and a 64×64 packet switch which supports multicasting. In all implementations, PEs monitor traffic at their preassigned input/output ports, while priority queue elements are distributed across the Cray-T3E virtual shared memory. Our experiments with up to 60000 packets and two to 64 PEs indicate that concurrent priority queues perform much better than distributed ones. Both concurrent implementations have comparable performance, while our new data structure uses less memory and has been further optimized. We also consider parallel simulation for symmetric networks by sorting integer conflict functions and implementing a packet indexing scheme. The optimized message passing network simulator can process ~500 K packet moves in one second, with an efficiency that exceeds ~50 percent for a few thousand packets on the Cray-T3E with 32 PEs. All developed data structures form a parallel library. Although our concurrent implementations use the Cray-T3E ShMem library, portability can be derived from OpenMP or MPI-2 standard libraries, which will provide support for one-way communication and shared memory lock mechanisms.

18.
The buffered crossbar switch architecture has recently gained considerable research attention. In such a switch, besides normal input and output queues, a small buffer is associated with each crosspoint. Due to the introduction of crossbar buffers, output and input dependency is eliminated, and the scheduling process is greatly simplified. We analyze the performance of switch policies by means of competitive analysis, where a uniform guarantee is provided for all traffic patterns. We assume that each packet has an intrinsic value designating its priority and the goal of the switch policy is to maximize the weighted throughput of the switch. We consider FIFO queueing buffering policies, which are deployed by the majority of today’s Internet routers. In packet-mode scheduling, a packet is divided into a number of unit length cells and the scheduling policy is constrained to schedule all the cells contiguously, which removes reassembly overhead and improves Quality-of-Service. For the case of variable length packets with uniform value density (Best Effort model), where the packet value is proportional to its size, we present a packet-mode greedy switch policy that is 7-competitive. For the case of unit size packets with variable values (Differentiated Services model), we propose a β-preemptive (β is a preemption factor) greedy switch policy that achieves a competitive ratio of 6 + 4β + β² + 3/(β − 1). In particular, its competitive ratio is at most 19.95 for the preemption factor of β = 1.67. As far as we know, this is the first constant-competitive FIFO policy for this architecture in the case of variable value packets. In addition, we evaluate the performance of the β-preemptive greedy switch policy by simulations and show that it outperforms other natural switch policies. The presented policies are simple and thus can be efficiently implemented at high speeds. Moreover, our results hold for any value of the internal switch fabric speedup.
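As a quick arithmetic check of the stated bound at the quoted preemption factor β = 1.67:

```latex
\[
  \left. 6 + 4\beta + \beta^{2} + \frac{3}{\beta - 1} \right|_{\beta = 1.67}
  = 6 + 6.68 + 2.7889 + \frac{3}{0.67}
  \approx 6 + 6.68 + 2.79 + 4.48
  \approx 19.95 .
\]
```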

19.
A Wi-Fi broadcasting system is a kind of Mobile-TV system that transmits multimedia content over Wi-Fi networks. What distinguishes the system is that it uses broadcast packets for streaming, making it scalable in the number of users. However, the loss rate of broadcast packets is much higher than that of unicast ones, because MAC layer retransmission is not applied to broadcast packets. To recover lost packets, a packet-level Forward Error Correction (FEC) scheme is usually used in Wi-Fi broadcasting systems. But it introduces additional transmission overhead, which is usually proportional to the packet loss rate. It is therefore important to reduce the packet loss rate to build an efficient and reliable Wi-Fi broadcasting system. While past studies have considered only single-AP systems, our study focuses on a multi-AP system, which is designed to cover a much larger area. We found a specific packet collision problem that significantly increases the packet loss rate in a multi-AP system. It is caused by a broadcast packet arriving at and being transmitted by APs simultaneously. We identify two scenarios of the collision that depend on the channel state at the time of packet arrival. We propose two collision avoidance methods to handle these scenarios: Broadcast Packet Scheduling Method (BPSM) and Adaptive Contention Window-Sizing Method (ACWSM). We implement both methods in our multi-AP Wi-Fi broadcasting system and verify their effectiveness through experiments.

20.
In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, their excess is lost. A packet buffering algorithm has to choose from which buffers to transmit packets in order to minimize the number of lost packets and thus maximize the throughput. We present a tight lower bound of e/(e−1) ≈ 1.582 on the competitive ratio of the throughput maximization, which holds even for fractional or randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm Random Schedule. Our result contradicts the claimed performance of the algorithm Random Permutation; we point out a flaw in its original analysis.
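For reference, the bound evaluates numerically as:

```latex
\[
  \frac{e}{e-1} = \frac{2.71828\ldots}{1.71828\ldots} \approx 1.582 .
\]
```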
