Similar Literature
20 similar documents found.
1.
Optical burst switching (OBS) is currently being studied as a promising solution for the backbone of the optical Internet in the near future, because OBS eliminates the optical buffering problem at the switching node by avoiding optical/electrical/optical conversion and guarantees class of service without any buffering. Many challenging issues must be solved to implement an OBS network: at the edge router, burst offset-time management and the burst assembly mechanism are critical, while the core router needs data-burst and control-header-packet scheduling, a protection and restoration mechanism, and a contention resolution scheme. In this paper, we focus on the burst assembly mechanism. We present a novel data burst generation algorithm that uses hysteresis characteristics in the queueing model for the ingress edge node in optical burst switching networks. Simulation with Poisson and self-similar traffic models shows that this algorithm adaptively changes the data burst size according to the offered load and offers high average data burst utilization with fewer timer operations. It also reduces the possibility of a continuous blocking problem in the bandwidth reservation request, limits the maximum queueing delay, and minimizes the required burst size by raising data burst utilization for bursty input IP traffic.
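The abstract does not give the algorithm in detail, so the following Python sketch only illustrates one plausible reading of a two-threshold (hysteresis) assembler with a delay-bounding timer; the class, the thresholds H_HIGH/H_LOW, and the timer bound T_MAX are hypothetical and are not taken from the paper.

```python
# Illustrative sketch of a two-threshold (hysteresis) burst assembler with a
# delay-bounding timer. H_HIGH, H_LOW, and T_MAX are hypothetical parameters.
from collections import deque

H_HIGH = 100     # upper threshold: emit a burst once the queue reaches this length
H_LOW = 20       # lower threshold: drain the queue down to this level
T_MAX = 0.005    # bound on the head-of-line queueing delay (seconds)

class HysteresisAssembler:
    """Queues IP packets at the ingress edge node and emits data bursts."""

    def __init__(self):
        self.queue = deque()                 # entries: (arrival_time, packet)

    def on_packet(self, now, pkt):
        """Called on every packet arrival; may return a finished burst."""
        self.queue.append((now, pkt))
        if len(self.queue) >= H_HIGH:
            # Drain down to H_LOW: the burst size (queue length minus H_LOW)
            # grows with the offered load, which gives the adaptive behaviour.
            n = len(self.queue) - H_LOW
            return [self.queue.popleft()[1] for _ in range(n)]
        return None

    def on_timer(self, now):
        """Called periodically; limits the maximum queueing delay."""
        if self.queue and now - self.queue[0][0] >= T_MAX:
            burst = [pkt for _, pkt in self.queue]
            self.queue.clear()
            return burst
        return None
```

Under light load the timer dominates and bursts stay small; under heavy load the queue crosses H_HIGH often and the emitted bursts grow, which is the adaptive behaviour the abstract describes.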

2.
This work proposes a stochastic model to characterize the transmission control protocol (TCP) over optical burst switching (OBS) networks, which helps in understanding the interaction between the congestion control mechanism of TCP and the characteristic bursty losses in the OBS network. We derive the steady-state throughput of a TCP NewReno source by modeling it as a Markov chain and the OBS network as an open queueing network with rejection blocking. We model all the phases in the evolution of the TCP congestion window and evaluate the number of packets sent and the time spent in the different states of TCP. We model the mixed assembly process, the burst assembler and disassembler modules, and the core network using queueing theory and compute the burst loss probability and end-to-end delay in the network. We derive an expression for the throughput of a TCP source by solving the models developed for the source and the network with a set of fixed-point equations. To evaluate the impact of a burst loss on each TCP flow accurately, we define the burst as a composition of per-flow bursts (each a burst of packets from a single source). Analytical and simulation results validate the model and highlight the importance of accounting for the individual phases in the evolution of the TCP congestion window.
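To make the fixed-point structure concrete, here is a minimal Python sketch that couples a TCP source model to a network loss model and solves for a consistent loss probability by bisection. The functions tcp_throughput() and burst_loss_probability() are illustrative stand-ins (a textbook square-root law and an ad hoc load-to-loss curve), not the Markov-chain and queueing-network models developed in the paper.

```python
# Sketch of the fixed-point coupling between a TCP source model and an OBS
# network loss model. Both component models are placeholders.
import math

def tcp_throughput(p, rtt, mss=1460):
    """Rough NewReno send rate in bytes/s for loss-event probability p > 0."""
    return mss * math.sqrt(1.5 / p) / rtt

def burst_loss_probability(offered_load, capacity):
    """Placeholder loss model: loss rises steeply as utilization approaches 1."""
    rho = min(offered_load / capacity, 1.0)
    return max(1e-9, rho ** 8)

def solve_fixed_point(n_flows, rtt, capacity):
    # The map p -> burst_loss(load(p)) is decreasing in p, so the consistent
    # loss probability can be bracketed and found by bisection on a log scale.
    lo, hi = 1e-9, 0.999
    for _ in range(60):
        mid = math.sqrt(lo * hi)
        load = n_flows * tcp_throughput(mid, rtt)
        if burst_loss_probability(load, capacity) > mid:
            lo = mid      # network-implied loss exceeds the guess: root is higher
        else:
            hi = mid
    return mid, n_flows * tcp_throughput(mid, rtt)

if __name__ == "__main__":
    p, rate = solve_fixed_point(n_flows=50, rtt=0.05, capacity=1.25e9)
    print(f"fixed-point loss ~ {p:.2e}, aggregate rate ~ {rate / 1e6:.0f} MB/s")
```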

3.
In OBS networks, the delay of control packets in the switch control unit (SCU) of core nodes influences burst loss performance in the optical switching stage and should be constrained. Furthermore, the end-to-end (E2E) delay requirements of premium services call for queueing delay guarantees in the network nodes along the transmission path. For this purpose, a framework for deterministic delay guarantees is proposed in this article. It incorporates the deterministic delay model in the ingress edge node as well as in the SCUs of core nodes. On this basis, the configuration of the assembler and the offset time is addressed by means of an optimization problem under the delay constraints. Scenario studies are carried out with reference to realistic transport network topologies. Compared to statistical delay models in the literature, the deterministic model has the advantage of providing robust absolute delay guarantees for individual FEC flows, which is especially valuable in the provisioning of premium services. A performance evaluation against the statistical models shows that the adopted deterministic delay models lead to practical delay bounds of a magnitude close to the delay estimates obtained by stochastic analysis.

4.
In optical burst switched (OBS) networks, the queueing delay in the ingress edge node is an important performance measure. In this paper, we propose a deterministic delay model and derive an upper bound on the burst queueing delay in an edge node. On this basis, an edge-to-edge delay guarantee framework for premium services is further outlined.
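As a purely illustrative example of the form such a bound can take (this is not the bound derived in the paper), consider a timer-based assembler with timeout $T_a$, an offset time $T_{off}$, and a maximum burst length $L_{max}$ sent at line rate $C$. If the outgoing wavelength is free when the offset expires, every packet leaves the edge node within $D_{edge} \le T_a + T_{off} + L_{max}/C$: a packet waits at most one assembly period before its burst is closed, the burst then waits the offset time, and its transmission takes at most $L_{max}/C$.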

5.
In contention-free slotted optical burst switching (SOBS) networks, controllers are used to manage the time-slot assignment, avoiding congestion among multiple burst transmissions. In such a network, bursts are never lost at intermediate nodes, but packets are lost at the ingress edge node because of the burst transmission algorithm, and the packet transmission delay also depends on that algorithm. To improve packet-level performance, this paper proposes a new burst transmission algorithm that uses two different thresholds: one to send a control packet to a controller and the other to assemble a burst. With these thresholds, a time slot can be assigned to a burst in advance and packet-level performance can be improved. To evaluate the packet-level performance and investigate the impact of the thresholds, we also propose a queueing model of a finite buffer in which a batch of packets is served in a slot of constant length. Numerical results show that the proposed method with two thresholds decreases both the packet loss probability and the transmission delay. In addition, we show that the analytical results remain effective for investigating the performance of the proposed method when the number of wavelengths is large.
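A rough Python sketch of the two-threshold idea follows: a slot request is sent to the controller early (at T_REQUEST) so that a time slot is already assigned by the time enough packets (T_BURST) have accumulated. The threshold values, the class names, and the controller interface are hypothetical placeholders, since the abstract does not specify them.

```python
# Two-threshold edge node sketch: request a slot early, assemble the burst later.
from collections import deque

T_REQUEST = 40   # queue length at which a control packet is sent to the controller
T_BURST = 80     # queue length at which a burst is actually assembled

class SimpleController:
    """Toy controller that grants every slot request immediately (placeholder)."""
    def __init__(self):
        self._granted = set()
    def request_slot(self, node):
        self._granted.add(id(node))
    def slot_granted(self, node):
        return id(node) in self._granted

class TwoThresholdEdgeNode:
    def __init__(self, controller):
        self.queue = deque()
        self.controller = controller
        self.slot_requested = False

    def on_packet(self, pkt):
        self.queue.append(pkt)
        if not self.slot_requested and len(self.queue) >= T_REQUEST:
            # Ask for a time slot in advance, overlapping the signalling round
            # trip with the remaining assembly time.
            self.controller.request_slot(self)
            self.slot_requested = True
        if (self.slot_requested and len(self.queue) >= T_BURST
                and self.controller.slot_granted(self)):
            burst = [self.queue.popleft() for _ in range(T_BURST)]
            self.slot_requested = False
            return burst
        return None

if __name__ == "__main__":
    node = TwoThresholdEdgeNode(SimpleController())
    for i in range(200):
        burst = node.on_packet(("pkt", i))
        if burst:
            print(f"burst of {len(burst)} packets assembled at arrival {i}")
```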

6.
We consider optical delay line buffers as a solution to reduce the number of lost bursts in optical burst switching, one of the promising candidates for future networks. Such a network takes burst loss as an important performance criterion at the design stage. Network performance, however, cannot be captured efficiently using traditional queueing models, because they often ignore the impatience of messages traveling through optical switches, which is a well-known issue in communication networks. In this paper, we develop an analytic model for this system using queueing theory that accounts for these impatience features. Simulation results show that (i) the developed model with impatience features can decrease the burst loss probability (by about 10%) compared with other approaches, and (ii) applying this model, we demonstrate that a shared-buffer architecture in an optical burst switching network with optical buffering often achieves a lower burst loss probability than a dedicated-buffer architecture in several different scenarios. Copyright © 2014 John Wiley & Sons, Ltd.

7.
This article proposes a new high-speed network architecture called the dynamic burst transfer time-slot-based network (DBTN). The DBTN network is based on circuit-switched network technology. A routing tag is attached to a burst at an ingress edge node and the burst is self-routed in the DBTN network, the circuit for which is created dynamically by the routing tag. The routing tag, called a time-slots-relay, consists of the link identifiers from the ingress to the egress node and is used to create the circuit. Subsequent data is switched over the circuit being created in an on-the-fly fashion. Each link identifier is loaded into the address control memory (ACM) of each circuit-switching node, and thereby the circuit to the destination is created dynamically. Subsequent user data follows immediately after the time-slots-relay and is sent over the established circuit. Thus short-lived, fairly large data transfers such as WWW traffic are carried efficiently. A circuit between adjacent nodes is created and released dynamically, so bandwidth efficiency is improved compared with conventional circuit-switched networks. Time division multiplexing of the circuit-switched network is utilized, so there is no delay jitter or loss within a burst. We address the performance of DBTN switches and report on an experimental system.

8.
A local lightwave network can be constructed by employing two-way fibers to connect nodes in a passive-star physical topology, with the available optical bandwidth accessed by the nodal transmitters and receivers at electronic rates using wavelength-division multiplexing (WDM). The number of WDM channels, w, in such a network is technology-limited and is less than the number of network nodes, N, especially if the network should support a scalable number of nodes. We describe a general and practical channel sharing method, which requires each node to be equipped with only one transmitter-receiver pair and in which each WDM channel is shared in a time-division multiplexed fashion; optical fiber LANs are discussed in particular. We also develop a general model for analyzing such a shared-channel, multi-hop WDM network. Our analysis yields a counterintuitive result: it is sometimes better to employ fewer channels than a larger number. We explore bounds on the ranges of w that admit queueing stability: using too few or too many channels can lead to instability. We also obtain an estimate for the optimal number of channels that minimizes network-wide queueing delay.

9.
Optical burst switching (OBS) is a switching concept that lies between optical circuit switching and optical packet switching. Both the node switching time and the burst size can impact the resource efficiency of an OBS network. To increase resource utilization, burst grooming has been proposed, in which numerous data bursts are coalesced into a larger burst that is switched as one unit in order to reduce resource waste and the switching penalty. In this article, assuming burst grooming can only be performed at edge nodes, we study the burst grooming problem in which sub-bursts originating from the same source may be groomed together regardless of their destinations under certain conditions. We exploit the capability of core nodes to split incoming light signals for multicast in order to achieve more efficient burst grooming. Specifically, core nodes can transmit the groomed burst to multiple downstream nodes if the sub-bursts in the groomed burst have different destinations. The groomed burst then traverses a tree that spans the source and all the destinations of its sub-bursts. The destination egress nodes recognize, de-burstify, and drop the sub-bursts destined to them, i.e., those sub-bursts are removed from the groomed burst. At the same time, the remaining sub-bursts may be groomed with sub-bursts at these egress nodes subject to the burst grooming criteria. We propose two effective burst grooming algorithms: (1) a no over-routing waste approach (NoORW) and (2) a minimum relative total resource ratio approach (MinRTRR). Our simulation results show that the proposed algorithms are effective in terms of the burst blocking probability, the average burst end-to-end delay, the number of sub-bursts per groomed burst, and the resource waste.

10.
Performance analysis of optical composite burst switching
In this letter, we introduce a queueing model to study the performance enhancement in a so-called optical composite burst switching (OCBS) network. Based on our model, we develop a simple analytical method to calculate the packet loss probability and provide numerical results comparing the performance of OCBS with the traditional optical burst switching (OBS) technique. We then explain the performance improvement of OCBS over OBS.
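For orientation, the classic reference point that composite burst switching is compared against is the bufferless OBS link with w wavelengths and full wavelength conversion, whose burst blocking probability is given by the Erlang-B formula; the Python sketch below computes it with the standard recursion. This is not the OCBS queueing model introduced in the letter, only the OBS baseline.

```python
# Erlang-B blocking probability for a bufferless OBS link with w wavelengths.

def erlang_b(offered_load, w):
    """Blocking probability for Poisson burst arrivals offered to w wavelengths.

    offered_load is in Erlangs (arrival rate times mean burst duration).
    Uses the numerically stable recursion B(0)=1, B(n)=A*B(n-1)/(n+A*B(n-1)).
    """
    b = 1.0
    for n in range(1, w + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

if __name__ == "__main__":
    for w in (8, 16, 32, 64):
        print(w, f"{erlang_b(0.8 * w, w):.4f}")   # 80% offered load per wavelength
```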

11.
We develop, analyze, and numerically compare performance models of a fast-adapting, centrally controlled form of optical circuit switching (OCS) with a conservative form of optical burst switching (OBS). For the first time, we consider a unified model comprising both edge buffers, at which arriving packets are aggregated and enqueued according to a vacation-type service discipline with nondeterministic set-up times, and a core network of switches arbitrarily interconnected via fibers, which carries packets from an edge buffer to their desired egress point through a dynamic signaling process that establishes a lightpath and, in the case of OCS, also acknowledges its establishment. Edge buffers thus dynamically issue requests for wavelength capacity via a two-way or one-way reservation signaling process. Previously analyzed models of OCS and OBS have covered either a stand-alone edge buffer or a core network without edge buffering. We compare OCS with OBS in terms of the packet blocking probability due to edge buffer overflow (and, for OBS, blocking at switches), the mean packet queueing delay at edge buffers, and the wavelength capacity utilization. Also for the first time, we derive the exact blocking probability for a multi-hop stand-alone OBS route, assuming Kleinrock's independence, which is not simply a matter of summing the stationary distribution of an appropriate Markov process over all blocking states, as is the case for an OCS route.

12.
Optical burst switching (OBS) is a proposed new communications technology that seeks to expand the use of optical technology in switching systems. However, many challenging issues have to be solved to pave the way for an effective implementation of OBS. Contention, which may occur when two or more bursts compete for the same wavelength on the same link, is a critical one. Many contention resolution methods have been proposed in the literature, but many of them are very sensitive to the network load and may suffer severe loss under heavy traffic. Fundamentally, this problem stems from the lack of information at the nodes and the absence of global coordination between the edge routers. In this work, we propose a different approach to avoid contention and decrease the loss: the intermediate nodes report the observed loss back to the edge nodes so that the traffic can be adjusted at the sources to meet an optimal network load. Furthermore, we propose a combination of contention reduction through congestion control and burst retransmission to eliminate burst loss completely. This new approach achieves fairness among all the edge nodes and enhances the robustness of the network. We also show through simulation that the proposed protocol is a viable solution for effectively reducing contention and increasing the bandwidth utilization in optical burst switching.
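The abstract does not describe the control law, so the Python sketch below only shows a generic AIMD-style way an edge node could react to loss reports from core nodes; the target loss, the gains, and the class name are illustrative assumptions, and the paper's actual protocol (including burst retransmission) is not reproduced here.

```python
# Generic AIMD-style reaction of an edge node to loss reports from core nodes.

TARGET_LOSS = 0.01   # desired burst loss probability reported by core nodes
ALPHA = 0.05         # additive increase step, as a fraction of the line rate
BETA = 0.5           # multiplicative decrease factor on congestion

class EdgeRateController:
    def __init__(self, line_rate_gbps):
        self.line_rate = line_rate_gbps
        self.rate = 0.5 * line_rate_gbps   # currently admitted traffic rate

    def on_loss_report(self, observed_loss):
        """Update the admitted rate each time a core node reports its loss."""
        if observed_loss > TARGET_LOSS:
            self.rate *= BETA                                    # back off
        else:
            self.rate = min(self.line_rate,
                            self.rate + ALPHA * self.line_rate)  # probe upward
        return self.rate
```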

13.
Optical burst switching (OBS) is one of the candidate technologies for building next-generation all-optical networks, but it suffers from difficulties in network parameter design and from limited blocking performance. This article introduces a new slot-based OBS network architecture, briefly describes the function of each of its components, and compares it with a conventional OBS network. The key problem of time-slot allocation and scheduling at network nodes is formulated, and corresponding online scheduling strategies are given; simulation results show that the BF algorithm performs best.

14.
This paper investigates support for TCP RENO flows in an optical burst switching (OBS) network. In particular, we evaluate the TCP send rate, i.e., the amount of data sent per unit time, taking into account the burst assembly mechanism at the edge nodes of the OBS network and burst loss events inside the network. The analysis demonstrates an interesting phenomenon, which we call the correlation benefit. This phenomenon is introduced by the aggregation mechanism and can, under some conditions, give rise to a significant increase in the TCP send rate. These results are obtained by means of an analytical model based on a Markovian approach and have been validated by an intensive simulation campaign.
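A back-of-the-envelope way to see why such aggregation can help (an intuition sketch under independence assumptions, not the paper's Markovian analysis): if each burst carries on average $k$ packets of a given flow and bursts are dropped independently with probability $p_b$, the flow experiences roughly one loss event per $k/p_b$ packets. Inserting the effective loss-event probability $p \approx p_b/k$ into the standard Reno send-rate approximation $B \approx (MSS/RTT)\sqrt{3/(2p)}$ gives $B \approx (MSS/RTT)\sqrt{3k/(2p_b)}$, i.e. the send rate grows with the degree of aggregation because the losses of a flow's packets are correlated within a single burst.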

15.
This paper analyzes the constraints that network performance requirements (control-plane processing delay, data-plane resource utilization efficiency, and data-plane burst loss rate) place on the assembly algorithm at the edge nodes of an optical burst switching network. The analysis confirms that data-plane performance is more sensitive to the parameters of the edge-node assembly algorithm. Under multi-objective network performance constraints, an OBS network whose core nodes lack wavelength converters has almost no effective assembly threshold. When the core nodes use mainstream optical switching matrices and the target burst contention probability is below 10^-4, at least 30 wavelength converters per fiber are required at each core-node output port to satisfy the network performance constraints.

16.
A preemption-based burst coding mechanism in optical burst switching networks
黄胜  马良  李玲霞  阳小龙 《光电子.激光》2011,(12):1793-1796,1825
To reduce the burst loss rate, a preemption-based burst coding mechanism is proposed on the basis of an analysis of burst cloning. At the source edge node, the information bursts are encoded with a parity-check code and redundant bursts are generated. At the core nodes, information bursts conditionally preempt redundant bursts, which reduces the contention that redundant bursts impose on information bursts and lowers the information burst loss rate at the core nodes. At the destination edge node, the redundant bursts are used to recover lost information bursts. The proposed burst coding mechanism achieves...

17.
The capacity region of wireless networks with a per-destination (PD) queueing model has been studied extensively in the literature. However, the PD queueing structure is not scalable, because the number of queues in a node can be as large as the number of all possible source-destination pairs. In this work, we study the capacity region of wireless networks with a per-link (PL) queueing model. The advantage of the PL queueing structure is that the number of queues in a node is reduced significantly, to the number of its neighboring nodes. In this paper, the capacity region of a wireless network with the PL queueing structure is characterized, and a dynamic routing and power control policy, DRPC-PL, is proposed that stabilizes the network whenever the input rate is within the capacity region. Copyright © 2013 John Wiley & Sons, Ltd.

18.
Efficient network provisioning mechanisms that support service differentiation are essential to the realization of the Differentiated Services (DiffServ) Internet. Building on our prior work on edge provisioning, we propose a set of efficient dynamic node and core provisioning algorithms for interior nodes and core networks, respectively. The node provisioning algorithm prevents transient violations of service level agreements (SLAs) by predicting the onset of service level violations with a multiclass virtual-queue measurement technique and by automatically adjusting the service weights of the weighted fair queueing schedulers at core routers. Persistent service level violations are reported to the core provisioning algorithm, which dimensions traffic aggregates at the network ingress edge. The core provisioning algorithm is designed to address the difficult problem of provisioning DiffServ traffic aggregates (i.e., rate control can only be exerted at the root of any traffic distribution tree) by taking into account fairness issues not only across different traffic aggregates but also within the same aggregate whose packets take different routes through a core IP network. We demonstrate through analysis and simulation that the proposed dynamic provisioning model is superior to static provisioning for DiffServ in providing quantitative delay bounds with differentiated loss across per-aggregate service classes under the persistent congestion and device failure conditions observed in core networks.
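As a generic illustration of the node-level mechanism (not the algorithm of the paper), the Python sketch below scales each class's WFQ service weight by how close its measured (virtual-queue) delay is to its delay target and then renormalizes; the classes, targets, and gain are made-up example values.

```python
# Generic WFQ weight adjustment driven by per-class delay measurements.

def adjust_weights(weights, measured_delay, target_delay, gain=0.2):
    """weights, measured_delay, target_delay are dicts keyed by traffic class."""
    new = {}
    for cls, w in weights.items():
        pressure = measured_delay[cls] / target_delay[cls]   # >1 means at risk
        new[cls] = w * (1 + gain * (pressure - 1))
    total = sum(new.values())
    return {cls: w / total for cls, w in new.items()}        # weights sum to 1

if __name__ == "__main__":
    w = {"EF": 0.5, "AF": 0.3, "BE": 0.2}
    measured = {"EF": 4.0, "AF": 12.0, "BE": 40.0}   # ms
    target = {"EF": 5.0, "AF": 10.0, "BE": 50.0}     # ms
    print(adjust_weights(w, measured, target))
```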

19.
The concept of optical burst switching (OBS) aims to allow access to optical bandwidth in dense wavelength division multiplexed (DWDM) networks at fractions of the optical line rate in order to improve bandwidth utilization efficiency. This paper studies an alternative network architecture combining OBS with dynamic wavelength allocation under fast circuit switching to provide a scalable optical architecture with guaranteed QoS in the presence of dynamic and bursty traffic loads. In the proposed architecture, all processing and buffering are concentrated at the network edge and bursts are routed over an optical transport core using dynamic wavelength assignment. It is assumed that there are no buffers or wavelength conversion in core nodes and that fast tuneable laser sources are used in the edge routers. This eliminates the forwarding bottleneck of electronic routers in DWDM networks for terabit-per-second throughput and guarantees forwarding with a predefined delay at the edge and latency due only to propagation time in the core. The edge burst aggregation mechanisms are evaluated for a range of traffic statistics to identify their impact on the allowable burst lengths, the required buffer size, and the achievable edge delays. Bandwidth utilization and wavelength reuse are introduced as new parameters characterizing the network performance in the case of dynamic wavelength allocation. Based on an analytical model, upper bounds for these parameters are derived to quantify the advantages of wavelength channel reuse, including the influence of the signaling round-trip time required for lightpath reservation. The results make it possible to quantify the operational gain achievable with fast wavelength switching compared to quasi-static wavelength-routed optical networks and can be applied to the design of future optical network architectures.
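One simple illustrative bound of this kind (not the exact expressions derived in the paper) follows from the observation that under two-way reservation a wavelength is held for the burst duration plus the signaling round trip: if no new reservation is pipelined during signaling, the per-wavelength utilization cannot exceed $\eta \le T_{burst} / (T_{burst} + T_{RTT})$, so long bursts or short reservation round trips are needed to keep the reuse penalty small.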

20.
A queueing model with finite buffer size, mixed input traffic (Poisson and burst-Poisson arrivals), synchronous transmission, and server interruptions governed by a Bernoulli sequence of independent random variables is studied. Using the average burst length, traffic intensity, and input traffic mixture ratio as parameters, the relationships among buffer size, overflow probability, and expected message queueing delay are obtained. An integrated digital voice-data system with synchronous time division multiplexing (STDM) for a large number of voice sources and a mixed arrival process for data messages is considered as an application of this model. The results of this study are presented in graphs and may be used as guidelines for buffer design problems in digital voice-data systems. The queueing model developed is quite general in the sense that it covers pure Poisson and burst-Poisson arrival processes as well as mixtures of the two.
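As a simplified baseline for the buffer-size versus overflow-probability trade-off (the paper's model with burst-Poisson input, synchronous service, and Bernoulli server interruptions is richer), the Python sketch below evaluates the overflow probability of the textbook M/M/1/K queue.

```python
# Overflow probability of an M/M/1/K queue (baseline only, not the paper's model).

def mm1k_overflow(rho, K):
    """Probability that an arriving message finds the system (capacity K) full."""
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho ** K / (1 - rho ** (K + 1))

if __name__ == "__main__":
    for K in (10, 20, 50):
        print(K, f"{mm1k_overflow(0.8, K):.3e}")
```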
