Reducing the impact of false time out on TCP performance in TCP over OBS networks
Authors: N. Sreenath, N. Srinath, J. Aloysius Suren, K. D. S. S. U. Kumar
Affiliations:
1. Department of Computer Science and Information Technology, Pondicherry Engineering College, Pondicherry, India
2. Symantec Software and Services India Pvt. Ltd, Chennai, India
3. Amazon Development Center India Pvt. Ltd., Chennai, India
4. Sameva Software and Services Pvt. Ltd., Hyderabad, India
Abstract: Random burst contention losses plague the performance of Optical Burst Switched (OBS) networks. Such random losses occur even under low network load, owing to the analogous behavior of the wavelength-assignment and routing algorithms across edge nodes. Since a burst may carry packets from many TCP sources, its loss can mislead those sources into inferring that the underlying optical network is congested. TCP accordingly reduces its sending rate and enters either fast retransmit or slow start. This reaction is unwarranted in TCP over OBS networks, because the optical network may not actually be congested when such random contention losses occur. These losses must therefore be addressed to improve the performance of TCP over OBS networks. Existing work in the literature achieves this objective at the cost of violating the semantics of OBS and/or TCP, and several other works rest on delay-inducing assumptions. In our work, we introduce a new layer, called the Adaptation Layer, between the TCP and OBS layers. This layer uses burst retransmission to mitigate the effect of contention-induced burst loss on TCP, leveraging the difference between the round-trip times of TCP and OBS. We achieve our objective with the added advantage of keeping the semantics of both layers intact.
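The feasibility of this approach rests on the round-trip-time gap the abstract mentions: the OBS edge-to-edge round trip is typically orders of magnitude shorter than TCP's end-to-end retransmission timeout, so a lost burst can be resent at the OBS layer before TCP ever notices. The following minimal sketch illustrates that condition only; the function name, symbols, and timing values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (assumed names and values, not from the paper):
# the Adaptation Layer at the ingress edge node can mask a contention
# loss from TCP only if retransmitting the burst completes before
# TCP's retransmission timer (RTO) would fire.

def can_mask_loss(rtt_obs: float, rto_tcp: float, max_retx: int) -> bool:
    """Return True if up to `max_retx` burst retransmissions, each
    costing roughly one OBS round trip (the loss notification back to
    the ingress plus resending the buffered burst), fit within TCP's
    retransmission timeout."""
    worst_case_recovery = max_retx * rtt_obs
    return worst_case_recovery < rto_tcp

# Example: a 10 ms OBS edge-to-edge RTT against a 600 ms TCP RTO
# leaves room for dozens of retransmission attempts before TCP would
# time out and falsely enter slow start.
print(can_mask_loss(rtt_obs=0.010, rto_tcp=0.600, max_retx=3))  # True
```

Under this kind of timing assumption, TCP never observes the loss and OBS signaling is unchanged, which is how the layering semantics can remain intact.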
Keywords:
This article is indexed by SpringerLink and other databases.