Related Articles
20 related articles found (search time: 46 ms)
1.
Synchronous optical packet buffering is demonstrated using a fiber-based synchronizer with a photonic-integrated-circuit packet buffer. Asynchronously arriving packets are optically synchronized to a local frame clock and loaded synchronously into the optical buffer. The synchronizer is a four-stage design with a resolution of 853 ps and a dynamic tuning range of 12.8 ns. The optical packet buffer consists of an integrated 2 × 2 InP switch coupled to a silica-on-silicon 12.8-ns delay line. Packet-recovery measurements of 40-B return-to-zero packets at 40 Gb/s show error-free performance for several combinations of synchronizer and buffer delays.
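For intuition only (not from the paper), here is a minimal sketch of how a four-stage binary-weighted delay-line synchronizer with an 853-ps resolution and a roughly 12.8-ns range could quantize an arrival offset into per-stage switch settings; the stage weighting and the helper name are assumptions.

```python
RESOLUTION_PS = 853            # smallest delay step (stage 1)
NUM_STAGES = 4                 # binary-weighted stages: 1x, 2x, 4x, 8x the resolution
MAX_DELAY_PS = RESOLUTION_PS * (2**NUM_STAGES - 1)   # ~12.8 ns tuning range

def stage_settings(offset_ps: float) -> list[bool]:
    """Quantize a packet's arrival offset (relative to the local frame clock)
    into on/off settings for the binary-weighted delay stages.

    Returns NUM_STAGES booleans, True = route the packet through that stage.
    Hypothetical helper; the real synchronizer control is in hardware.
    """
    # Delay needed to align the packet with the next frame boundary.
    needed = min(max(offset_ps, 0.0), MAX_DELAY_PS)
    steps = round(needed / RESOLUTION_PS)          # integer number of 853-ps steps
    return [bool((steps >> i) & 1) for i in range(NUM_STAGES)]

# Example: a packet arriving 5.2 ns off the frame clock.
print(stage_settings(5200))    # [False, True, True, False] -> 2 + 4 = 6 steps
```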

2.
In all-optical packet switching, packets may arrive at an optical switch in an uncoordinated fashion. To prevent packet loss in the switch, fiber delay lines (FDLs) are used as optical buffers to store optical packets. However, assigning FDLs to arriving packets so as to achieve high throughput, low delay, and a low loss rate is not a trivial task. In the authors' companion paper, several efficient scheduling algorithms were proposed for single-stage shared-FDL optical packet switches (OPSs). To further enhance the switch's scalability, this work is extended here to the multistage case. In this paper, two scheduling algorithms are proposed for a three-stage optical Clos-network switch (OCNS): 1) a sequential FDL assignment algorithm and 2) a multicell FDL assignment algorithm. Simulations show that a three-stage OCNS with these FDL assignment algorithms achieves satisfactory performance.

3.
Buffers are essential components of any packet switch for resolving contention among arriving packets. Currently, optical buffers are composed of fiber delay lines (FDLs), whose blocking and delay behavior differs drastically from that of conventional RAM in at least two respects: 1) only multiples of a discrete time delay can be offered to arriving packets; and 2) a packet must be dropped if the maximum delay provided by the optical buffer is not sufficient to avoid contention, a property called balking. As a result, optical buffers have only finite time resolution, which may introduce excess load and prolong packet delay. In this paper, a novel queueing model of the optical buffer is proposed, and closed-form expressions for the blocking probability and mean delay are derived to explore the trade-off between buffer performance and system parameters, such as the buffer length and the time granularity of the FDLs, and to evaluate the overall impact of the packet-length distribution on buffer performance.
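As a rough illustration (not the paper's analytical model), the following Monte Carlo sketch estimates the blocking probability and mean delay of an output line fed through an FDL buffer with a given delay granularity, where a packet balks if the delay it needs exceeds the longest delay line; the parameter names and the Poisson/exponential traffic assumptions are mine.

```python
import random

def simulate_fdl_buffer(lam=0.8, mean_len=1.0, granularity=1.0,
                        num_delays=10, packets=200_000, seed=1):
    """Estimate (blocking probability, mean queueing delay) for one output
    line behind an FDL buffer.

    A packet arriving while the line is busy must take a delay that is a
    multiple of `granularity` and at least the residual busy time; if that
    multiple exceeds num_delays * granularity, the packet balks (is dropped).
    """
    random.seed(seed)
    clock = 0.0          # arrival time of the current packet
    busy_until = 0.0     # time the output line becomes free
    max_delay = num_delays * granularity
    dropped, total_wait, accepted = 0, 0.0, 0

    for _ in range(packets):
        clock += random.expovariate(lam)              # Poisson arrivals
        service = random.expovariate(1.0 / mean_len)  # exponential packet length
        residual = max(busy_until - clock, 0.0)
        # Only multiples of the FDL granularity can be offered.
        slots = -(-residual // granularity)           # ceiling division
        wait = slots * granularity
        if wait > max_delay:
            dropped += 1                              # balking: no FDL long enough
            continue
        accepted += 1
        total_wait += wait
        busy_until = clock + wait + service           # rounding creates idle voids

    return dropped / packets, total_wait / max(accepted, 1)

print(simulate_fdl_buffer())   # (blocking probability, mean delay) estimates
```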

4.
In all-optical packet switching, packets may arrive at an optical switch in an uncoordinated fashion. When contention occurs, fiber delay lines (FDLs) are needed to delay (buffer) the packets that have lost the contention until future time slots on the desired output ports. Several optical-buffered switch architectures and FDL assignment algorithms have been proposed in the literature. However, most of them either have high implementation complexity or fail to schedule departure times for the delayed packets in advance. This paper studies packet scheduling algorithms for the single-stage shared-FDL optical packet switch. Three new FDL assignment algorithms are proposed: the sequential FDL assignment (SEFA), multicell FDL assignment (MUFA), and parallel iterative FDL assignment (PIFA) algorithms. The proposed algorithms make FDL and output-port reservations so as to schedule departure times for packets. Owing to FDL and/or output-port conflicts, packets that fail to be scheduled are discarded before entering the switch so that they do not occupy any FDL resources. Simulations show that with these algorithms, the optical-buffered switch can achieve a loss rate of about 10^-7 even at a load of 0.9. These algorithms are extended to three-stage Clos-network optical packet switches in the companion paper.
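The abstract gives no pseudocode; purely as a hedged sketch of the sequential idea, the snippet below tries, for each arriving packet, the smallest delay for which both an FDL and the desired output port's future slot are free, and discards the packet otherwise. It simplifies by holding one FDL for the packet's whole waiting interval; the data structures and names are assumptions, not the paper's SEFA specification.

```python
def sequential_assign(packets, num_fdls, max_delay_slots):
    """packets: list of (arrival_slot, output_port).
    Returns {packet_index: departure_slot}; unscheduled packets are dropped."""
    fdl_busy = {}      # (fdl_id, slot) -> True while that FDL is occupied
    out_busy = set()   # (output_port, slot) departure reservations
    schedule = {}

    for idx, (arrival, port) in enumerate(packets):
        for delay in range(max_delay_slots + 1):
            depart = arrival + delay
            if (port, depart) in out_busy:
                continue                      # output port already reserved
            if delay == 0:
                schedule[idx] = depart        # cut-through, no FDL needed
                out_busy.add((port, depart))
                break
            # Find an FDL free for the whole holding interval.
            for fdl in range(num_fdls):
                if all(not fdl_busy.get((fdl, s)) for s in range(arrival, depart)):
                    for s in range(arrival, depart):
                        fdl_busy[(fdl, s)] = True
                    schedule[idx] = depart
                    out_busy.add((port, depart))
                    break
            else:
                continue        # no FDL free at this delay; try a longer one
            break               # scheduled successfully
        # If no delay worked, the packet is discarded before entering the switch.
    return schedule

print(sequential_assign([(0, 1), (0, 1), (0, 1)], num_fdls=2, max_delay_slots=3))
```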

5.
Architectures for packet switches are approaching the limit of electronic switching speed. This raises the question of how best to utilize advances in photonic technology to enable higher speeds. The authors introduce cascaded optical delay line (COD) architectures. The COD architectures use an extremely simple distributed electronic control algorithm to configure the states of 2×2 photonic switches and use optical fiber delay lines to temporarily buffer packets when necessary. The simplicity of the architectures may also make them suitable for "lightweight" all-electronic implementations. For optical implementations, the number of 2×2 photonic switches used is a significant factor determining cost. The authors present a "baseline" architecture for a 2×2 buffered packet switch that is work conserving and has the first-in, first-out (FIFO) property. If the arrival processes are independent and memoryless, the maximum utilization factor is ρ, and the maximum acceptable packet loss probability is ε, then the required number of 2×2 photonic switches is O(log(ε)/log(γ)), where γ = ρ²/(ρ² + 4 - 4ρ). If the baseline architecture is modified by changing the delay-line lengths, the system is no longer work conserving and loses the FIFO property, but the required number of 2×2 photonic switches is reduced to O(log[log(ε)/log(γ)]). The required number of 2×2 photonic switches is essentially insensitive to the distribution of packet arrivals, but long delay lines are required for bursty traffic.
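Using only the quantities stated in the abstract, a quick numeric check of the baseline bound O(log ε / log γ) with γ = ρ²/(ρ² + 4 - 4ρ) might look like the sketch below; constant factors are dropped, so these are orders of magnitude rather than the paper's exact switch counts.

```python
import math

def baseline_switch_count(rho: float, eps: float) -> float:
    """Order-of-magnitude estimate of the number of 2x2 photonic switches
    needed by the baseline COD architecture (constant factors dropped)."""
    gamma = rho**2 / (rho**2 + 4 - 4 * rho)     # gamma < 1 whenever rho < 1
    return math.log(eps) / math.log(gamma)

def modified_switch_count(rho: float, eps: float) -> float:
    """Same estimate for the modified (non-work-conserving, non-FIFO)
    architecture, which needs only O(log of the baseline count) switches."""
    return math.log(baseline_switch_count(rho, eps))

for rho in (0.5, 0.8):
    base = baseline_switch_count(rho, eps=1e-9)
    print(f"rho={rho}: baseline ~{base:.0f} switches, "
          f"modified ~{modified_switch_count(rho, 1e-9):.1f}")
```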

6.
We consider fundamental delay bounds for scheduling packets in an N × N packet switch operating under the crossbar constraint. Algorithms that make scheduling decisions without considering queue backlog are shown to incur an average delay of at least O(N). We then prove that O(log(N)) delay is achievable with a simple frame-based algorithm that uses queue-backlog information. This is the best known delay bound for packet switches, and it is the first analytical proof that sublinear delay is achievable in a packet switch with random inputs.

7.
A Preemptive Short-Packet-First Scheduling Algorithm for Input-Queued Switches   Cited by: 7 (self-citations: 2, citations by others: 5)
Li Wenjie, Liu Bin. Acta Electronica Sinica (《电子学报》), 2005, 33(4): 577-583
The scheduling algorithm determines the performance of an input-queued switching fabric. Based on the traffic characteristics of the Internet, this paper argues that a scheduling algorithm should guarantee high priority and low delay for short packets. In existing packet-mode scheduling, the continuous transmission of the cells of a long packet can force short packets to wait for a long time. To solve this problem, this paper designs a low-complexity preemptive switching fabric and proposes a corresponding preemptive short-packet-first scheduling algorithm (P-SPF). Serving short packets first reduces the RTT of TCP flows and thereby improves TCP performance. Queueing-theoretic analysis and simulations under realistic traffic-source models show that P-SPF achieves a near-zero average waiting time for short packets while reaching a system throughput of 94%.
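As a hedged illustration of the preemptive short-packet-first idea only (not the paper's P-SPF as defined for a crossbar fabric), the sketch below lets a newly arrived short packet preempt the cell-by-cell transmission of a long packet at a single output; the size threshold and names are assumptions.

```python
import heapq

SHORT_THRESHOLD = 2   # packets of at most this many cells count as "short" (assumed)

def pspf_transmit(packets):
    """packets: list of (arrival_slot, num_cells). Returns per-packet finish slots.
    Short packets preempt long ones between cell transmissions."""
    ready = []                      # heap of (is_long, arrival, idx, cells_left)
    finish = {}
    slot, nxt = 0, 0
    packets = sorted(enumerate(packets), key=lambda x: x[1][0])
    while nxt < len(packets) or ready:
        # Admit everything that has arrived by the current slot.
        while nxt < len(packets) and packets[nxt][1][0] <= slot:
            idx, (arr, cells) = packets[nxt]
            heapq.heappush(ready, (cells > SHORT_THRESHOLD, arr, idx, cells))
            nxt += 1
        if not ready:
            slot = packets[nxt][1][0]   # idle until the next arrival
            continue
        is_long, arr, idx, cells = heapq.heappop(ready)
        slot += 1                       # send exactly one cell, then re-evaluate
        if cells == 1:
            finish[idx] = slot
        else:                           # long packet goes back and may be preempted
            heapq.heappush(ready, (is_long, arr, idx, cells - 1))
    return finish

# A 10-cell long packet arriving at slot 0 and a 1-cell short packet at slot 3:
print(pspf_transmit([(0, 10), (3, 1)]))   # the short packet finishes at slot 4
```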

8.
The telecommunications networks of the future are likely to be packet-switched networks consisting of wide-bandwidth optical fiber transmission media and large, highly parallel, self-routing switches. Recent considerations of switch architectures have focused on internally nonblocking networks with packet buffering at the switch outputs, which have optimal throughput and delay performance. The author considers a switch architecture consisting of parallel planes of low-speed, internally blocking switch networks, in conjunction with input and output buffering. This architecture is desirable from the viewpoint of modularity and hardware cost, especially for large switches. Although the architecture is suboptimal, the throughput shortfall may be overcome by adding extra switch planes. A form of input queuing called bypass queuing can improve the throughput of the switch and thereby reduce the number of switch planes required. An input port controller is described that distributes packets to all switch planes according to the bypass policy while preserving packet order for virtual circuits. Some simulation results for switch throughput are presented.
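The abstract does not detail the bypass policy; as a minimal sketch under my own assumptions, the snippet below scans an input FIFO and lets the first packet whose output port is free bypass blocked packets ahead of it, while packets of the same virtual circuit are never reordered.

```python
from collections import deque

def bypass_select(queue, free_outputs):
    """queue: deque of (virtual_circuit_id, output_port).
    Return the first packet whose output port is free, bypassing
    head-of-line blocked packets, but never overtaking an earlier
    packet of the same virtual circuit (preserves per-VC order)."""
    blocked_vcs = set()
    for pos, (vc, port) in enumerate(queue):
        if vc not in blocked_vcs and port in free_outputs:
            del queue[pos]
            return (vc, port)
        blocked_vcs.add(vc)        # an earlier packet of this VC stays queued
    return None                    # nothing can be sent this slot

q = deque([("vc1", 3), ("vc2", 5), ("vc1", 5)])
print(bypass_select(q, free_outputs={5}))   # ('vc2', 5): bypasses the blocked head
```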

9.
The performance of a single-wavelength fiber-delay-line buffer with finite waiting places is evaluated in this paper. For Poisson packet arrivals with arbitrarily distributed lengths, the generating function of the delay-time distribution can be derived from the quantized-delay buffer model. Queue-length distributions, the loss probability, and other important performance measures can then be obtained. Specifically, the two important cases of negative-exponentially distributed packet lengths and fixed packet lengths are considered and compared. The accuracy of the proposed approach is verified through simulation. It is also observed that the buffer system performs more effectively for fixed-length packets.

10.
This paper reviews advanced optical burst switching (OBS) and optical packet switching (OPS) technologies and discusses their roles in the future photonic Internet. Discussions include optoelectronic and optical systems technologies as well as systems integration into viable network elements (OBS and OPS routers). Optical label switching (OLS) offers a unified multiple-service platform with effective and agile utilization of the available optical bandwidth in support of voice, data, and multimedia services over the Internet Protocol. In particular, OLS routers with wavelength-routing switch fabrics and parallel optical labeling allow forwarding of asynchronously arriving variable-length packets, bursts, and circuits. By exploiting contention resolution in the wavelength, time, and space domains, OLS routers can achieve high throughput without resorting to a store-and-forward method with its associated large buffer requirements. Testbed demonstrations employing OLS edge routers show high-performance networking in support of multimedia and data communications applications over the photonic Internet, with optical packets and bursts switched directly at the optical layer.

11.
This paper considers a general parallel buffered packet switch (PBPS) architecture based on multiple packet switches operating independently and in parallel. A load-balancing mechanism is used at each input to distribute the traffic to the parallel switches. The buffer structure of each of the parallel packet switches is based on either a dedicated, a shared, or a buffered-crosspoint output-queued architecture. Because packets in such multipath PBPS switches may get out of order as they travel independently and in parallel through the switches, a resequencing mechanism is necessary at the output side. This paper addresses the evaluation of the minimum resequence-queue size required for deadlock-free, lossless operation. An analytical method is presented for the exact evaluation of the worst-case resequencing delay and the worst-case resequence-queue size. The results reveal the relation between the two and demonstrate the impact of the various system parameters on resequencing.
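For illustration only (the paper's contribution is an exact worst-case analysis, not code), a resequencing buffer at a PBPS output might release packets as sketched below; per-flow sequence numbering is an assumption.

```python
import heapq

class ResequenceQueue:
    """Hold out-of-order packets until all earlier sequence numbers of the
    same flow have been released, guaranteeing in-order delivery."""

    def __init__(self):
        self.next_seq = {}     # flow_id -> next expected sequence number
        self.pending = {}      # flow_id -> heap of (seq, packet)

    def push(self, flow_id, seq, packet):
        """Insert a packet; return the list of packets now releasable, in order."""
        heapq.heappush(self.pending.setdefault(flow_id, []), (seq, packet))
        expected = self.next_seq.setdefault(flow_id, 0)
        released = []
        heap = self.pending[flow_id]
        while heap and heap[0][0] == expected:
            released.append(heapq.heappop(heap)[1])
            expected += 1
        self.next_seq[flow_id] = expected
        return released

rq = ResequenceQueue()
print(rq.push("flowA", 1, "p1"))   # []            (still waiting for seq 0)
print(rq.push("flowA", 0, "p0"))   # ['p0', 'p1']  (released in order)
```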

12.
This paper presents a class of algorithms for scheduling packets in input-queued switches. As opposed to previously known algorithms that focus only on achieving high throughput, these algorithms seek to achieve low average delay without compromising throughput. Packet scheduling in input-queued switches based on the virtual-output-queued architecture is a bipartite graph matching problem in which ports are represented by vertices and traffic flows by edges. The set of matched edges determines the packets to be transferred from the input ports to the output ports. Current matching algorithms implicitly prioritize high-degree vertices, i.e., ports with a large number of flows, causing longer delays at ports with a smaller number of flows. Motivated by this observation, we present three matching algorithms based on explicitly prioritizing low-degree vertices and the edges through them. Using both real gateway traffic traces and synthetically generated traffic, we present simulation results showing that this class of algorithms achieves lower average delay than other scheduling algorithms of equivalent complexity while still achieving similar throughput. We also show that these algorithms determine the maximum-size matching in almost all cases.
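The abstract describes the principle but not the exact algorithms; below is a hedged sketch of one way to prioritize low-degree vertices in a bipartite matching of input and output ports, written from the description rather than from the paper.

```python
from collections import defaultdict

def low_degree_first_matching(edges):
    """edges: set of (input_port, output_port) pairs with queued traffic.
    Greedily match, serving ports with the fewest flows (lowest degree) first.
    Returns a dict input_port -> output_port."""
    degree = defaultdict(int)
    for i, o in edges:
        degree[("in", i)] += 1
        degree[("out", o)] += 1

    # Edges whose endpoints have small degrees get matched first.
    ordered = sorted(edges, key=lambda e: degree[("in", e[0])] + degree[("out", e[1])])
    matched_in, matched_out, matching = set(), set(), {}
    for i, o in ordered:
        if i not in matched_in and o not in matched_out:
            matching[i] = o
            matched_in.add(i)
            matched_out.add(o)
    return matching

# Input 1 carries many flows; input 2 has a single flow to output 1 and is served first.
print(low_degree_first_matching({(1, 1), (1, 2), (1, 3), (2, 1)}))
```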

13.
Load-balanced switches have received a great deal of attention recently because they are much more scalable than other switch architectures in the literature. However, as there exist multiple paths for flows of packets to traverse a load-balanced switch, packets may be delivered out of order. In this paper, we propose a new switch architecture, called the contention and reservation (CR) switch, that not only delivers packets in order but also guarantees 100% throughput. The key idea, as in a multiple-access channel, is to operate the CR switch in two modes: 1) the contention mode in light traffic and 2) the reservation mode in heavy traffic. To do this, we introduce a new buffer-management scheme, called virtual output queue with insertion (I-VOQ). With the I-VOQ scheme, we give rigorous mathematical proofs of 100% throughput and in-order packet delivery for the CR switch. By computer simulation, we also demonstrate that the average packet delay of the CR switch is considerably lower than that of other schemes in the literature, including the uniform frame spreading scheme, the padded frame scheme, and the mailbox switch.

14.
In a multihop network, packets go through a number of hops before they are absorbed at their destinations. When routed to its destination along a minimum path, a packet at a node may have a preferred output link (a so-called "care" packet) or may not (a so-called "don't care" packet). Since each node in an optical multihop network may have limited buffer space, contention among packets for the same output link can be resolved by deflection when that buffer runs out. In this paper, we study packet scheduling algorithms and their performance in a buffered regular network with deflection routing. Using the shufflenet as an example, we show that high performance (in terms of throughput and delay) can be achieved if "care" packets are scheduled with higher priority than "don't care" packets. We then analyze the performance of a shufflenet under this priority scheduling for a given buffer size per node. Traditionally, the deflection probability of a packet at a node is solved from a transcendental equation by numerical methods, which quickly becomes very cumbersome when the buffer size is greater than one packet per node. By exploiting the special topological properties of the shufflenet, we simplify the analysis greatly and obtain a simple closed-form approximation of the deflection probability. The expression allows us to extract analytically the performance trend of the shufflenet with respect to its buffer and network sizes. We show that a shufflenet indeed performs very well with only one buffer, and can achieve performance close to the store-and-forward case with a buffer size as small as four packets per node.
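As a sketch of the priority rule described in the abstract (not the paper's analysis), at each node one could first grant "care" packets their preferred output links and then place "don't care" packets on whatever links remain, buffering or deflecting the rest; the data layout here is assumed.

```python
def schedule_node(packets, links):
    """packets: list of dicts {'id', 'care': bool, 'preferred': link or None}.
    links: list of output link names. Returns (assignments, leftovers);
    leftovers must be buffered or deflected."""
    free = set(links)
    assignments, leftovers = {}, []

    # Pass 1: "care" packets get their preferred link if it is still free.
    for p in packets:
        if p["care"] and p["preferred"] in free:
            assignments[p["id"]] = p["preferred"]
            free.discard(p["preferred"])
        elif p["care"]:
            leftovers.append(p)            # preferred link taken: buffer or deflect

    # Pass 2: "don't care" packets take any remaining link.
    for p in packets:
        if not p["care"]:
            if free:
                assignments[p["id"]] = free.pop()
            else:
                leftovers.append(p)
    return assignments, leftovers

pkts = [{"id": "a", "care": True, "preferred": "L0"},
        {"id": "b", "care": False, "preferred": None}]
print(schedule_node(pkts, links=["L0", "L1"]))
```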

15.
We address the problem of congestion resolution in optical packet switching (OPS). We consider a fairly generic all-optical packet switch architecture with a feedback optical buffer composed of fiber delay lines (FDLs). Two alternatives of switching granularity are addressed for a switch operating in a slotted transfer mode: switching at the slot level (i.e., fixed-length packets of a single slot) or at the burst level (variable-length packets that are integer multiples of the slot length). For both cases, we show that, in spite of the limited queuing resources, acceptable packet-loss performance can be achieved with reasonable hardware resources by an appropriate design of the time/wavelength scheduling algorithms. Depending on the switching unit (slots or bursts), an adapted scheduling algorithm needs to be deployed to exploit the bandwidth and buffer resources most efficiently.

16.
Routing strategies for maximizing throughput in LEO satellite networks   Cited by: 1 (self-citations: 0, citations by others: 1)
This paper develops routing and scheduling algorithms for packet transmission in a low Earth orbit (LEO) satellite network with a limited number of transmitters and limited buffer space. We consider a packet-switching satellite network where time is slotted and the transmission time of each packet is fixed and equal to one time slot. Packets arrive at each satellite independently with some probability during each time slot; their destination satellite is uniformly distributed. With a limited number of transmitters and limited buffer space on board each satellite, contention for transmission inevitably occurs as multiple packets arrive at a satellite. First, we establish the stability region of the system in terms of the maximum admissible packet arrival rate that can possibly be supported. We then consider three transmission scheduling schemes for resolving these contentions: random packet win, where the winning packet is chosen at random; oldest packet win, where the packet that has traveled the longest distance wins the contention; and shortest hops win (SHW), where the packet closest to its destination wins the contention. We evaluate the performance of each scheme in terms of throughput. For a system without a buffer, the SHW scheme attains the highest throughput. However, when even limited buffer space is available, all three schemes achieve about the same throughput. Moreover, even with a buffer size of just a few packets, the achieved throughput is close to that of the infinite-buffer case.
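To make the three contention-resolution rules concrete, here is a small hedged sketch that picks a winner among packets contending for the same transmitter under the random, oldest-packet, or shortest-hops-win policy; the packet fields are assumptions.

```python
import random

def resolve_contention(packets, policy="SHW"):
    """packets: list of dicts with 'hops_traveled' and 'hops_to_destination'.
    Returns (winner, losers); losers are buffered if space allows, else dropped."""
    if policy == "random":                 # random packet win
        winner = random.choice(packets)
    elif policy == "oldest":               # packet that has traveled the farthest wins
        winner = max(packets, key=lambda p: p["hops_traveled"])
    elif policy == "SHW":                  # shortest hops win: closest to destination
        winner = min(packets, key=lambda p: p["hops_to_destination"])
    else:
        raise ValueError(policy)
    losers = [p for p in packets if p is not winner]
    return winner, losers

contending = [{"hops_traveled": 4, "hops_to_destination": 3},
              {"hops_traveled": 1, "hops_to_destination": 1}]
print(resolve_contention(contending, policy="SHW")[0])   # the packet 1 hop away wins
```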

17.
On scheduling optical packet switches with reconfiguration delay   Cited by: 5 (self-citations: 0, citations by others: 0)
Using optical technology in the design of packet switches/routers offers advantages in scalability, bandwidth, power consumption, and cost. However, reconfiguring the optical fabric of these switches requires significant time under current technology (microelectromechanical-system mirrors, tunable elements, bubble switches, etc.). As a result, conventional slot-by-slot scheduling may severely cripple the performance of these optical switches because of the frequent fabric reconfiguration it entails. A more appropriate approach is a time slot assignment (TSA) scheduling scheme that slows down the scheduling rate: the switch gathers incoming packets periodically and schedules them in batches, holding each fabric configuration for a period of time. The goal is to minimize the total transmission time, which includes both the actual traffic-sending process and the reconfiguration overhead. This optical switch scheduling problem is defined in this paper and proved to be NP-complete. In particular, earlier TSA algorithms normally assume the reconfiguration delay to be either zero or infinity for simplicity. To this end, we propose a practical algorithm, ADJUST, that breaks this limitation and self-adjusts to different reconfiguration delay values. The algorithm runs in O(λN² log N) time and guarantees 100% throughput and bounded worst-case delay. In addition, it outperforms existing TSA algorithms across a large spectrum of reconfiguration values.
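This is not the ADJUST algorithm itself; only to illustrate the cost being minimized, the sketch below greedily decomposes a traffic matrix into crossbar configurations and charges a fixed reconfiguration delay per configuration, so fewer, longer-held configurations trade bandwidth slack against reconfiguration overhead. The greedy decomposition is my own simplification.

```python
def greedy_tsa(traffic, reconfig_delay):
    """traffic: NxN list of lists, traffic[i][j] = slots to send from input i
    to output j. Returns (configurations, total_time), where each configuration
    is (input->output map, holding length in slots)."""
    n = len(traffic)
    remaining = [row[:] for row in traffic]
    configs, total = [], 0

    while any(any(row) for row in remaining):
        # Build one crossbar configuration greedily: each input grabs its
        # heaviest still-unclaimed output.
        taken, mapping = set(), {}
        for i in range(n):
            best = max((j for j in range(n) if j not in taken and remaining[i][j] > 0),
                       key=lambda j: remaining[i][j], default=None)
            if best is not None:
                mapping[i] = best
                taken.add(best)
        hold = max(remaining[i][j] for i, j in mapping.items())  # hold until all done
        for i, j in mapping.items():
            remaining[i][j] = 0
        configs.append((mapping, hold))
        total += hold + reconfig_delay       # pay the reconfiguration overhead once
    return configs, total

demo = [[5, 1], [2, 4]]
print(greedy_tsa(demo, reconfig_delay=3))    # few long configurations beat many short ones
```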

18.
In modern packet switches, technology limitations may introduce switch configuration delays that are non-negligible compared with the time required to transmit a single packet. In this paper, we propose a methodology for scheduling packets in the context of these technology limitations. If the total tolerable delay through a packet switch is at least on the order of the switch configuration delay, we show that nearly 100% utilization of the communication links is possible while providing strict quality-of-service guarantees. The main idea is to increase the quantum with which data is scheduled and switched beyond that of a single packet. This also decreases the rate at which scheduling decisions need to be made, and hence decreases the implementation complexity. The quality-of-service guarantees we consider are in terms of a service curve. Specifically, we present a framework for the provision of service curves while coping with non-negligible switch configuration delays.

19.
Wireless multimedia synchronization is concerned with distributed multimedia packets, such as video, audio, text, and graphics, being played out on mobile clients via a base station (BS) that serves the mobile clients with the multimedia packets. Our focus is on improving the quality of service (QoS) of the mobile client's on-time arrival of distributed multimedia packets through network multimedia synchronization. We describe a media synchronization scheme for wireless networks and investigate multimedia packet scheduling algorithms at the base station to accomplish this goal. In this paper, we extend the media synchronization algorithm by investigating four packet scheduling algorithms: First-In-First-Out (FIFO), Highest-Priority-First (PQ), Weighted Fair Queuing (WFQ), and Round-Robin (RR). We analyze the effect of the four packet scheduling algorithms in terms of multimedia packet delivery time and the delay between concurrent multimedia data streams. We show that the play-out of multimedia units on the mobile clients by the base station plays an important role in enhancing the mobile client's quality of service in terms of intra-stream and inter-stream synchronization. Our results show that the Round-Robin (RR) packet scheduling algorithm is by far the best of the four in terms of mobile-client buffer usage. We analyze the four packet scheduling algorithms and correlate the play-out of multimedia packets by the base station onto the mobile clients with wireless network multimedia synchronization. We clarify the meaning of buffer usage, buffer overflow, buffer underflow, message complexity, and multimedia packet delay in terms of synchronization between distributed multimedia servers, base stations, and mobile clients.

20.
The overhead associated with reconfiguring the switch fabric in optical packet switches is an important issue relative to the packet transmission time and can adversely affect switch performance. The reconfiguration overhead increases the mean waiting time of packets and reduces throughput, so the scheduling of packets must take the reconfiguration frequency into account. This work proposes an analytical model for input-buffered optical packet switches with reconfiguration overhead and analytically finds the optimal reconfiguration frequency that minimizes the mean waiting time of packets. The analytical model is suitable for several round-robin (RR) scheduling schemes, in which either only non-empty virtual output queues (VOQs) are served or all VOQs are served, and it is used to examine the effects of the RR scheduling schemes and various network parameters on the mean waiting time of packets. Quantitative examples demonstrate that properly balancing the reconfiguration frequency can effectively reduce the mean waiting time of packets.
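The paper's queueing model is not reproduced here; purely to illustrate the trade-off it optimizes, the sketch below sweeps the number of slots each configuration is held and picks the value minimizing a toy mean-waiting-time expression that combines batching delay with overhead-inflated load. The formula is an assumed stand-in, not the paper's model.

```python
def toy_mean_wait(hold_slots, reconfig_slots, load):
    """Assumed stand-in: waiting grows with the batching period (hold_slots)
    and with the effective load inflated by the reconfiguration overhead."""
    effective_load = load * (hold_slots + reconfig_slots) / hold_slots
    if effective_load >= 1.0:
        return float("inf")                 # unstable: overhead eats the capacity
    batching_wait = hold_slots / 2.0        # average wait for the next service window
    queueing_wait = effective_load / (1.0 - effective_load)
    return batching_wait + queueing_wait

def best_hold(reconfig_slots=4, load=0.7, max_hold=200):
    """Sweep candidate holding periods and return the one with the lowest wait."""
    candidates = range(1, max_hold + 1)
    return min(candidates, key=lambda h: toy_mean_wait(h, reconfig_slots, load))

h = best_hold()
print(h, toy_mean_wait(h, 4, 0.7))   # reconfiguring too often or too rarely both hurt
```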
