20 similar references found.
1.
Jing Xie, Shengming Jiang, Yuming Jiang. Communications Magazine, IEEE, 2004, 42(8): S32-S39
Passive optical networks bring high-speed broadband access via fiber to the business, curb, and home. Among the various types of PONs, Ethernet PONs are gaining increasing attention because they are built on widely used Ethernet technology and can offer high bandwidth, low cost, and broad services. EPONs use a point-to-multipoint topology in which multiple optical network units share one uplink channel to transmit multimedia traffic to a control element, the optical line terminal. To avoid data collisions on the shared uplink channel, a key issue in EPONs is a contention-free MAC protocol that lets the OLT schedule the transmission order of the different ONUs. In this article we first review some DBA schemes available in the literature, then propose a two-layer bandwidth allocation scheme that implements weight-based priority for this need. To maximally satisfy the requests of all ONUs and provide differentiated services, an ONU is allowed to request bandwidth for all its available traffic, and all traffic classes share the bandwidth proportionally, based on their instantaneous demands. The weight set for each class not only prevents high-priority traffic from monopolizing the bandwidth under heavy load but also ensures a minimum bandwidth allocation for each traffic class.
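The class-level sharing rule described here - proportional to weights, capped by each class's instantaneous request, with leftover bandwidth redistributed - can be sketched as follows. This is a minimal illustration assuming per-class requests and weights; the class names and numbers are hypothetical, and the paper's exact allocation rule may differ.

```python
def weighted_class_allocation(total_bw, requests, weights):
    """Split one ONU's grant among traffic classes in proportion to
    their weights, capped by each class's instantaneous request; any
    unused share is redistributed among still-unsatisfied classes.
    Illustrative sketch only, not the paper's exact rule."""
    grants = {c: 0.0 for c in requests}
    remaining = total_bw
    active = {c for c, r in requests.items() if r > 0}
    while remaining > 1e-9 and active:
        w_sum = sum(weights[c] for c in active)
        leftover = 0.0
        for c in list(active):
            share = remaining * weights[c] / w_sum
            need = requests[c] - grants[c]
            if need <= share:
                grants[c] += need          # class fully satisfied
                leftover += share - need   # return the surplus
                active.remove(c)
            else:
                grants[c] += share
        remaining = leftover
    return grants

# Hypothetical example: a 100 Mb/s grant shared by three classes
print(weighted_class_allocation(
    100.0, {"EF": 20.0, "AF": 90.0, "BE": 50.0},
    {"EF": 0.5, "AF": 0.3, "BE": 0.2}))
```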
2.
Selected Areas in Communications, IEEE Journal on, 2009, 27(2): 134-142
With advances in optical technology, the span of a broadband access network using Passive Optical Network (PON) technology can be increased from today's standard of 20 km to 100 km or more, thereby serving many more users. Such an extended-reach PON is known as a SuperPON in the literature; we call it a Long-Reach PON (LR-PON). A major challenge in LR-PONs is that the propagation delay (for data as well as control signals) between the telecom central office (CO) and the end user increases very significantly. Traditional PON algorithms for scheduling upstream transmission, such as dynamic bandwidth allocation (DBA) algorithms, may then no longer be sufficient; in fact, they may lead to degraded performance because of the long delay of the CO-to-user "control loop." This challenge motivates us to propose and study a multi-thread polling algorithm to distribute the upstream bandwidth dynamically, effectively, and fairly. The algorithm exploits the benefits of running multiple polling processes simultaneously and enabling users to send bandwidth requests before receiving acknowledgement from the CO. We compare the proposed algorithm with traditional DBA and show its advantage in average packet delay. We then analyze and optimize key parameters of the algorithm, such as initiating and tuning multiple threads, inter-thread scheduling, and fairness among users. Numerical results demonstrate the algorithm's ability to decrease the average packet delay and improve network throughput under varying offered loads.
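To see why overlapping polling threads help at 100 km reach, a toy back-of-the-envelope model (not the paper's algorithm) treats each thread as spending one round-trip time per cycle on the REPORT/GATE exchange, a gap that the data bursts of the other threads can fill. The numbers below are illustrative assumptions only.

```python
def upstream_utilization(rtt_ms, data_per_cycle_ms, threads=1):
    """Toy model only: each polling thread idles the upstream channel
    for one RTT per cycle; interleaved threads fill each other's gaps.
    Not the multi-thread polling algorithm from the paper."""
    cycle = rtt_ms + data_per_cycle_ms
    return min(1.0, threads * data_per_cycle_ms / cycle)

# ~1 ms round trip over 100 km of fiber, 0.5 ms of granted data per cycle
for t in (1, 2, 3):
    print(f"{t} thread(s): utilization {upstream_utilization(1.0, 0.5, t):.2f}")
```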
3.
Wireless Networks - In practical communication systems, there are always multiple subscribers competing for limited resources, such as time and frequency, hence effective user scheduling is...
4.
Routing bandwidth guaranteed paths with local restoration in label switched networks
Li Li, Buddhikot M.M., Chekuri C., Guo K. Selected Areas in Communications, IEEE Journal on, 2005, 23(2): 437-449
Emerging multiprotocol label switching (MPLS) networks enable network service providers to route bandwidth-guaranteed paths between customer sites. This basic label switched path (LSP) routing is often enhanced with restoration routing, which sets up alternate LSPs to guarantee uninterrupted connectivity in case network links or nodes along the primary path fail. We address the problem of distributed routing of restoration paths, defined as follows: given a request for a bandwidth-guaranteed LSP between two nodes, find a primary LSP and a set of backup LSPs that protect the links along the primary LSP. A routing algorithm that computes these paths must optimize the restoration latency and the amount of bandwidth used. We introduce the concept of "backtracking" to bound the restoration latency. We consider three cases characterized by a parameter called the backtracking distance D: 1) no backtracking (D=0); 2) limited backtracking (D=k); and 3) unlimited backtracking (D=∞). We use a link cost model that captures bandwidth sharing among links using various types of aggregate link-state information. We first show that joint optimization of primary and backup paths is NP-hard in all cases. We then consider algorithms that compute primary and backup paths in two separate steps. Using link cost metrics that capture bandwidth sharing, we devise heuristics for each case. Our simulation study shows that these algorithms offer a way to trade off bandwidth against a range of restoration latency requirements.
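The two-step structure (primary path first, then per-link backups) for the no-backtracking case (D = 0) can be sketched with plain shortest-path computations; the paper's bandwidth-sharing link-cost model is not reproduced here, and networkx is assumed for graph handling.

```python
import networkx as nx

def primary_and_local_backups(G, s, t, weight="weight"):
    """Two-step sketch: route a shortest primary path, then for each
    primary link compute a detour that starts at the node immediately
    upstream of the failure (no backtracking, D = 0) and avoids the
    failed link.  Bandwidth sharing and the paper's cost model are
    deliberately ignored."""
    primary = nx.shortest_path(G, s, t, weight=weight)
    backups = {}
    for u, v in zip(primary, primary[1:]):
        H = G.copy()
        H.remove_edge(u, v)
        try:
            backups[(u, v)] = nx.shortest_path(H, u, t, weight=weight)
        except nx.NetworkXNoPath:
            backups[(u, v)] = None  # the failed link is a bridge
    return primary, backups
```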
5.
6.
James Aweya, Michel Ouellette, Delfin Y. Montuno. International Journal of Network Management, 2003, 13(3): 211-229
Most active queue management schemes maintain an average of the queue length, which they use together with a number of queue thresholds to detect congestion. However, setting the queue thresholds is problematic because the buffer size required for good sharing among TCP connections depends on the number of TCP connections using the buffer. This paper describes an improved active queue management scheme that dynamically changes its threshold settings as the number of connections and the system load change. This technique allows network devices to effectively control packet losses and TCP timeouts while maintaining high link utilization. Copyright © 2003 John Wiley & Sons, Ltd.
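The underlying idea - scale the congestion-detection thresholds with the number of active TCP connections so that each connection keeps a few packets' worth of buffer - can be sketched as follows. The constants and the update rule are illustrative assumptions, not the paper's.

```python
def adaptive_red_thresholds(n_conns, pkts_per_conn=4, buffer_size=1000):
    """Scale RED-style min/max thresholds with the estimated number of
    active TCP connections so the buffer leaves room for a few packets
    per connection.  Constants are illustrative, not the paper's."""
    min_th = min(pkts_per_conn * n_conns, buffer_size // 2)
    max_th = min(3 * min_th, int(0.9 * buffer_size))
    return min_th, max_th

for n in (10, 50, 200):
    print(n, "connections ->", adaptive_red_thresholds(n))
```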
7.
Sudip Misra, Tushar I. Ghosh, Mohammad S. Obaidat. International Journal of Communication Systems, 2014, 27(11): 2964-2984
In this paper, the authors present a novel algorithm for computing bandwidth-guaranteed paths for traffic engineering in WiMAX IEEE 802.16 standard-based networks using the mesh topology. The underlying algorithm fulfills routing requests ‘on the fly’ without a priori knowledge of future requests. This problem is motivated by the need for efficient handling of traffic and efficient utilization of network resources. The key idea behind the solution is the use of heuristic methods to defer routing through certain nodes that have a higher chance of being selected because of hop constraints, so that congestion at these nodes is prevented. Simulation-based performance evaluation shows that the proposed algorithm performs well against the selected benchmarks on metrics such as the number of rejected requests and the number of active links in the network. Copyright © 2013 John Wiley & Sons, Ltd.
8.
Dynamic bandwidth allocation (DBA) will play an important role in future broadband wireless networks, including the 3G and 4G WCDMA systems. A code-division generalized processor sharing (CDGPS) fair scheduling DBA scheme is proposed for WCDMA systems. The scheme exploits the capability of the WCDMA physical layer, reduces the computational complexity in the link layer, and allows channel rates to be dynamically and fairly scheduled in response to the variation of traffic rates. Deterministic delay bounds for heterogeneous packet traffic are derived. Simulation results show that the proposed CDGPS scheme is effective in supporting differentiated QoS, while achieving efficient utilization of radio resources.
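The generalized processor sharing ideal that CDGPS approximates on code channels divides the instantaneous channel rate among backlogged flows in proportion to their weights. A minimal sketch of that fluid ideal follows; flow names, weights, and the aggregate rate are hypothetical.

```python
def gps_rates(total_rate_bps, weights, backlogged):
    """Generalized processor sharing: the available rate is split among
    backlogged flows in proportion to their weights.  A sketch of the
    fluid ideal only, not the CDGPS scheduler itself."""
    active = [f for f in weights if backlogged.get(f)]
    w_sum = sum(weights[f] for f in active)
    return {f: total_rate_bps * weights[f] / w_sum for f in active}

print(gps_rates(2.0e6,
                {"voice": 0.5, "video": 0.3, "data": 0.2},
                {"voice": True, "video": False, "data": True}))
```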
9.
Routing with service restorability is of great importance in Multi-Protocol Label Switched (MPLS) networks and is a necessity in optical networks. For restoration, each connection has an active path and a link-disjoint backup path. The backup path enables service restoration upon active-path failure. For bandwidth efficiency, backups may be shared. This requires that at least the aggregate backup bandwidth used on each link be distributed to the nodes performing route computations; if this information is not available, sharing is not possible. Also, one scheme used for restorability in optical networks is for the sender to transmit simultaneously on the two disjoint paths and for the receiver to take data from the path with the stronger signal. This has the advantage of fast receiver-initiated recovery upon failure, but it does not allow backup sharing. In this paper, we consider the problem of efficient dynamic routing of restorable connections when backup sharing is not allowed. Our objective is to route as many connections as possible with one-at-a-time arrivals and no knowledge of future arrivals. Since sharing cannot be used to achieve efficiency, the goal is to achieve efficiency through improved path selection. We show that by using the minimum-interference ideas developed for nonrestorable routing, we can develop efficient algorithms that outperform previously proposed algorithms for restorable routing, such as routing with the min-hop-like objective of finding two disjoint paths with minimum total hop count. We present two new and efficient algorithms for restorable routing without sharing, one of which requires only shortest-path computations. We demonstrate that both algorithms perform very well in comparison with previously proposed algorithms.
10.
In this paper, we define a class of generalized guaranteed rate (GR) scheduling algorithms that includes algorithms allocating a variable rate to the packets of a flow. We define work-conserving generalized virtual clock, packet-by-packet generalized processor sharing, and self-clocked fair queueing scheduling algorithms that can allocate a variable rate to the packets of a flow. We also define scheduling algorithms suitable for servers where packet fragmentation may occur. We demonstrate that if a class of rate controllers is employed for a flow in conjunction with any scheduling algorithm in GR, then the resulting non-work-conserving algorithm also belongs to GR. This leads to the definition of several non-work-conserving algorithms. We then present a method for deriving the delay guarantee of a network of servers when: (1) different rates are allocated to packets of a flow at different servers along the path and the bottleneck server for each packet may differ, and (2) packet fragmentation and/or reassembly may occur. This delay guarantee enables a network to provide various service guarantees to flows conforming to any specification. We illustrate this by using the delay guarantee to derive delay bounds for flows conforming to leaky bucket, exponentially bounded burstiness, and flow specification. Our method for determining these bounds is valid in internetworks and leads to tighter results.
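For orientation, a virtual-clock scheduler in the guaranteed-rate spirit can be sketched as below. The per-packet rate argument mirrors the idea of allocating a variable rate to the packets of a flow, but this is a generic sketch, not the paper's exact formulation.

```python
import heapq

class VirtualClockScheduler:
    """Sketch of a virtual-clock scheduler in the GR spirit: each packet
    gets a finish tag max(arrival, previous tag) + length/rate, and
    packets are served in increasing tag order.  Generic sketch only."""

    def __init__(self):
        self.tags = {}      # flow -> last finish tag
        self.queue = []     # heap of (tag, seq, flow, length)
        self.seq = 0

    def enqueue(self, flow, length_bits, rate_bps, now):
        tag = max(now, self.tags.get(flow, 0.0)) + length_bits / rate_bps
        self.tags[flow] = tag
        heapq.heappush(self.queue, (tag, self.seq, flow, length_bits))
        self.seq += 1

    def dequeue(self):
        if not self.queue:
            return None
        tag, _, flow, length = heapq.heappop(self.queue)
        return flow, length, tag
```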
11.
Quality-of-service (QoS) support in Ethernet passive optical networks (EPON) is a crucial concern. However, most studies have only focused on optical line terminal (OLT) capacity allocation amongst multiple optical network units (ONU), and the further issue of intra-ONU allocation remains open. In this work a novel decentralized intra-ONU solution is presented using virtual-time schedulers. Results confirm good performance for a wide range of input traffic classes and loads.
12.
Giaccone P., Prabhakar B., Shah D. Selected Areas in Communications, IEEE Journal on, 2003, 21(4): 546-559
The aggregate bandwidth of a switch is its port count multiplied by its operating line rate. We consider switches with high aggregate bandwidths; for example, a 30-port switch operating at 40 Gb/s or a 1000-port switch operating at 1 Gb/s. Designing high-performance schedulers for such switches with input queues is challenging for the following reasons: (1) high performance requires finding good matchings; (2) good matchings take time to find; and (3) in high-aggregate-bandwidth switches there is either too little time (due to high line rates) or too much work to do (due to a high port count). We exploit the following features of the switching problem to devise simple-to-implement, high-performance schedulers for high-aggregate-bandwidth switches: (1) the state of the switch (carried in the lengths of its queues) changes slowly with time, implying that heavy matchings will likely stay heavy over a period of time, and (2) observing arriving packets conveys useful information about the state of the switch. These features are exploited using hardware parallelism and randomization to yield three scheduling algorithms - APSARA, LAURA, and SERENA. These algorithms are shown to achieve 100% throughput, and simulations show that their delay performance is quite close to that of maximum weight matching, even when the traffic is correlated. We also consider the stability of these algorithms under generic admissible traffic using the fluid-model technique. The main contribution of this paper is a suite of simple-to-implement, high-performance scheduling algorithms for input-queued switches. We exploit a novel operation, called MERGE, which combines the edges of two matchings to produce a heavier matching, and study the properties of this operation via simulations and theory. The stability proof of the randomized algorithms we present involves a derandomization procedure and uses methods that may have wider applicability.
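The MERGE operation itself is simple to sketch: superimposing two matchings of an n-port switch decomposes them into alternating cycles, and within each cycle the heavier matching's edges are kept, so the result is at least as heavy as either input. The sketch below assumes matchings given as permutations of output ports and a weight matrix of queue lengths (all hypothetical); it is not the authors' code.

```python
def merge_matchings(m1, m2, weight):
    """MERGE two permutation matchings m1, m2 (input port -> output port)
    of an n-port switch: their union decomposes into alternating cycles,
    and within each cycle the heavier matching's edges are kept.
    weight[i][o] is e.g. the VOQ length for input i, output o."""
    n = len(m1)
    inv2 = [0] * n
    for j, o in enumerate(m2):
        inv2[o] = j                      # input that m2 sends to output o
    result, visited = [None] * n, [False] * n
    for start in range(n):
        if visited[start]:
            continue
        cycle, i = [], start
        while not visited[i]:            # trace the alternating cycle
            visited[i] = True
            cycle.append(i)
            i = inv2[m1[i]]
        w1 = sum(weight[i][m1[i]] for i in cycle)
        w2 = sum(weight[i][m2[i]] for i in cycle)
        src = m1 if w1 >= w2 else m2
        for i in cycle:
            result[i] = src[i]
    return result

# 3x3 example with hypothetical queue lengths as edge weights
w = [[3, 0, 5], [2, 7, 1], [4, 2, 6]]
print(merge_matchings([0, 1, 2], [2, 0, 1], w))   # -> [0, 1, 2]
```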
13.
14.
Hoon Kim, Youngnam Han. Communications Letters, IEEE, 2005, 9(3): 210-212
This letter extends the proportional fair (PF) scheduling proposed for the high data rate (HDR) system to multicarrier transmission systems. It is known that the PF allocation (F. P. Kelly et al., 1998) maximizes the sum of the logarithms of the average user rates. We propose a PF scheduler that assigns users to each carrier while maximizing the sum of the logarithms of the average user rates.
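A common greedy realization of per-carrier PF - give each carrier to the user with the largest instantaneous-to-average rate ratio, then update the averages - is sketched below. The parameter values are hypothetical, and this is not necessarily the letter's exact assignment rule.

```python
import numpy as np

def pf_multicarrier_step(inst_rate, avg_rate, tc=100.0):
    """One slot of a greedy per-carrier PF rule: each carrier goes to the
    user with the largest ratio of instantaneous rate to average rate,
    then the exponentially smoothed averages are updated.
    inst_rate: (users x carriers) achievable rates in this slot.
    A common greedy variant, not necessarily the letter's exact rule."""
    users = inst_rate.shape[0]
    assignment = np.argmax(inst_rate / avg_rate[:, None], axis=0)
    served = np.zeros(users)
    for c, u in enumerate(assignment):
        served[u] += inst_rate[u, c]
    new_avg = (1 - 1 / tc) * avg_rate + served / tc
    return assignment, new_avg

rng = np.random.default_rng(1)
rates = rng.exponential(1.0, size=(4, 8))          # 4 users, 8 carriers
print(pf_multicarrier_step(rates, np.ones(4)))
```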
15.
Jyrki, Ari, Timo, Mika. AEUE-International Journal of Electronics and Communications, 2007, 61(2): 118-126
In this paper we present a packet scheduling method that guarantees the bandwidth of a connection and optimizes the revenue of the network service provider. A closed-form formula for updating the adaptive weights of a packet scheduler is derived from a revenue-based optimization problem. The weight-updating procedure is fast and does not depend on assumptions about the connections' statistical behavior. The features of the algorithm are simulated and analyzed with and without a call admission control (CAC) mechanism. In the context of the CAC procedure, we also show a mechanism for guaranteeing a specified mean bandwidth for different service classes.
16.
To address inter-cell interference and user fairness in LTE-Advanced systems, a proportional fair scheduling algorithm based on multi-cell joint precoding and static power control (MCPPC-PF) is proposed. Inter-cell interference is suppressed by zero-forcing in the interference space and by statically controlling the transmit power, and this is combined with proportional fair (PF) scheduling to improve user fairness. Simulation results show that, compared with conventional algorithms, the MCPPC-PF algorithm increases system capacity while also improving user fairness; compared with a throughput-maximizing scheduling algorithm that also uses multi-cell joint precoding and static power control, MCPPC-PF improves cell-edge user capacity by about 45% at the cost of a 4.6% loss in system capacity.
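To make the "zero-forcing in the interference space" step concrete, the standard zero-forcing precoder construction is sketched below (numpy assumed, random channel values hypothetical); the multi-cell coordination, static power control, and PF scheduling of MCPPC-PF are not modeled here.

```python
import numpy as np

def zero_forcing_precoder(H):
    """Standard zero-forcing precoder W = H^H (H H^H)^{-1} with
    unit-norm columns, so that H @ W is (scaled) diagonal and each
    selected user sees no interference from the others.  Shown only to
    illustrate the zero-forcing step, not the MCPPC-PF algorithm."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W, axis=0, keepdims=True)

rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
W = zero_forcing_precoder(H)
print(np.round(np.abs(H @ W), 3))   # off-diagonal entries ~ 0
```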
17.
Wireless multi-carrier communication systems that use packet schedulers based on channel knowledge have proven performance benefits. Proportional fairness scheduling (PFS), when used in these systems, promises an attractive trade-off between fairness among users and system throughput. In this paper, an analytical solution is presented for the PFS throughput. A simple scheme is then presented to reduce the feedback signaling of PFS without a significant loss in performance. This reduced-complexity PFS aims for a trade-off among multi-user diversity gain, fairness among users, and low-rate feedback signaling. As shown by simulations, when the number of users is large compared to the number of sub-channels, the scheme preserves fairness among users while minimizing the feedback signaling.
18.
The proportional fair scheduling (PFS) algorithm is implemented in current 3G wireless networks for high data rate delay-tolerant services. Though the algorithm has low implementation complexity, the problem of proportional fairness is NP-hard. An analytical expression is obtained to approximate the throughput of PFS in cellular networks over Rayleigh fading channels. Comparisons against simulation results show that the expression is accurate.
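The letter's closed-form approximation is not reproduced here. For orientation, under the common simplifying assumption of N statistically identical users with i.i.d. Rayleigh fading (mean SNR $\bar\gamma$), PF reduces to serving the instantaneously strongest user, and the cell throughput over bandwidth $B$ can be written as:

```latex
% Expected throughput when the strongest of N i.i.d. Rayleigh users
% (mean SNR \bar\gamma) is served in each slot -- a standard result
% under these assumptions, not the letter's approximation.
R \;\approx\; \frac{B}{\ln 2}\sum_{k=1}^{N}\binom{N}{k}(-1)^{k+1}
    \, e^{k/\bar\gamma}\, E_1\!\left(\frac{k}{\bar\gamma}\right),
\qquad
E_1(z)=\int_{1}^{\infty}\frac{e^{-zt}}{t}\,dt .
```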
19.
In this letter, we study the Proportional Fair scheduler that has been proposed for scheduling in the high data rate (HDR) wireless data system. We consider a single base station transmitting to a set of mobile users. In each time slot, the scheduler must decide which mobile to transmit data to. The decision is based on information that the base station receives about the time-varying channels between itself and the mobiles. We focus on deciding whether or not Proportional Fair is stable in a setting with finite queues and a data arrival process; that is, we wish to determine whether Proportional Fair keeps all queues bounded whenever this is feasible. There are, in fact, multiple versions of Proportional Fair, depending on how small queues are treated. In this letter, we consider six different versions and show that all of them are unstable for one simple example.
20.
For OCS (online charging system) servers, a new request scheduling algorithm is proposed. Its basic idea is to use the system queue lengths, the request arrival rates, and pre-assigned delay differentiation parameters as the scheduling priority criteria, and to select the queue to be served probabilistically at scheduling time. Experimental results show that, when service delay is taken into account, the new algorithm consistently outperforms several traditional PDD (proportional delay differentiation) scheduling algorithms; the performance advantage grows when request service delays increase and differ widely from one another, and the algorithm effectively satisfies the six QoS (quality of service) requirements of content-based charging environments.
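One plausible reading of the probabilistic selection rule described above - queue length and arrival rate approximate each class's current delay via Little's law, scaled by its delay-differentiation parameter - is sketched below. The class names, rates, and the exact weighting are hypothetical, not the paper's algorithm.

```python
import random

def pick_queue(queues, deltas):
    """Probabilistically pick the next class to serve.  Each class's
    weight combines its backlog and arrival rate (q/lambda approximates
    its current delay, by Little's law), scaled by its delay
    differentiation parameter delta.  One plausible reading of the idea
    in the abstract, not the paper's exact algorithm.
    queues: {class: (queue_length, arrival_rate)}"""
    weights = {c: (q / lam) / deltas[c]
               for c, (q, lam) in queues.items() if q > 0}
    if not weights:
        return None
    classes = list(weights)
    return random.choices(classes, [weights[c] for c in classes])[0]

state = {"gold": (4, 20.0), "silver": (10, 30.0), "bronze": (25, 50.0)}
print(pick_queue(state, {"gold": 1.0, "silver": 2.0, "bronze": 4.0}))
```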