Similar Documents
20 similar documents found (search time: 359 ms)
1.
We consider the problem of delay-efficient scheduling in general multihop networks. While the class of max-weight type algorithms is known to be throughput-optimal for this problem, these algorithms typically incur undesirable delay performance. In this paper, we propose the Delay-Efficient SCheduling algorithm (DESC). DESC is built upon the idea of accelerating queues (AQ), which are virtual queues that quickly propagate traffic arrival information along the routing paths. DESC is motivated by the use of redundant constraints to accelerate convergence in the classic optimization context. We show that DESC is throughput-optimal. The delay bound of DESC can be better than previous bounds for max-weight type algorithms, which did not use such traffic information. We also show that under DESC, the service rates allocated to the flows converge quickly to their target values and the average total “network service lag” is small. In particular, when there are O(1) flows and the rate vector is a Θ(1) distance away from the boundary of the capacity region, the average total “service lag” grows only linearly in the network size.
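The accelerating-queue idea can be illustrated with a toy sketch: each hop on a flow's route keeps a virtual counter that learns of new arrivals the instant they enter the network, while the packets themselves still move hop by hop. The class and update rules below are illustrative assumptions, not the paper's exact construction.

```python
class AcceleratingQueue:
    """One hop's queue state: `virtual` counts traffic announced for this
    hop (known the moment it enters the network), while `actual` counts
    packets physically queued here."""

    def __init__(self):
        self.virtual = 0
        self.actual = 0


def inject(path, amount):
    """New arrivals: the announcement reaches every hop on the route
    immediately, while the packets themselves enter only the first hop."""
    for aq in path:
        aq.virtual += amount
    path[0].actual += amount


def serve(path, hop, amount):
    """Serve up to `amount` packets at `hop`, forwarding them downstream."""
    aq = path[hop]
    moved = min(amount, aq.actual)
    aq.actual -= moved
    aq.virtual -= moved
    if hop + 1 < len(path):
        path[hop + 1].actual += moved
    return moved
```

At any downstream hop, `virtual - actual` is traffic that hop has been told to expect but has not yet received; scheduling on the virtual counters lets allocated service rates react before packets physically arrive, which is the kind of look-ahead the abstract attributes to AQs.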

2.
With the increasing use of clusters in real-time applications, it has become essential to design high-performance networks with quality-of-service (QoS) guarantees. We explore the feasibility of providing QoS in wormhole-switched routers, which are widely used in designing scalable, high-performance cluster interconnects. In particular, we are interested in supporting multimedia video streams with CBR and VBR traffic, in addition to the conventional best-effort traffic. The proposed MediaWorm router uses a rate-based bandwidth allocation mechanism, called Fine-Grained VirtualClock (FGVC), to schedule network resources for different traffic classes. Our simulation results on an 8-port router indicate that it is possible to provide jitter-free delivery to VBR/CBR traffic up to an input load of 70-80 percent of link bandwidth, and that the presence of best-effort traffic has no adverse effect on real-time traffic. Although the MediaWorm router shows slightly lower performance than a pipelined circuit switched (PCS) router, the commercial success of wormhole switching, coupled with a simpler and cheaper design, makes it an attractive alternative. Simulation of a 2 × 2 fat-mesh using this router shows performance comparable to that of a single switch and suggests that clusters designed with an appropriate bandwidth balance between links can provide the required performance for different types of traffic.
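The fine-grained variant is not specified here, but the classic VirtualClock discipline that FGVC builds on can be sketched in a few lines: each flow advances a per-flow virtual clock by the packet's service time at its reserved rate, and packets are transmitted in increasing stamp order. This is a generic VirtualClock sketch, not the MediaWorm implementation.

```python
import heapq


class VirtualClockScheduler:
    """Rate-based packet scheduler in the spirit of VirtualClock: under
    overload, flows receive bandwidth in proportion to their reserved
    rates because stamps advance more slowly for higher-rate flows."""

    def __init__(self):
        self.vclock = {}  # flow_id -> current virtual clock value
        self.heap = []    # (stamp, seq, flow_id, packet_len)
        self.seq = 0      # FIFO tie-breaker for equal stamps

    def enqueue(self, flow_id, packet_len, rate, now):
        # Advance the flow's virtual clock by this packet's service time
        # at the reserved rate; an idle flow restarts from real time.
        vc = max(self.vclock.get(flow_id, 0.0), now) + packet_len / rate
        self.vclock[flow_id] = vc
        heapq.heappush(self.heap, (vc, self.seq, flow_id, packet_len))
        self.seq += 1

    def dequeue(self):
        # Transmit the packet carrying the smallest virtual clock stamp.
        if not self.heap:
            return None
        _, _, flow_id, packet_len = heapq.heappop(self.heap)
        return flow_id, packet_len
```

With flow A reserved at twice flow B's rate and both backlogged, the dequeue sequence serves A twice for every B, which is the 2:1 bandwidth split the reservations imply.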

3.
《Computer Networks》2007,51(14):4092-4111
As traffic on the Internet continues to grow exponentially, there is a real need to solve scalability and traffic engineering simultaneously – specifically, without resorting to over-provisioning in order to accommodate streaming media traffic. One of the threats to the operational stability of the Internet today comes from UDP-based streaming media applications, such as Skype (whose traffic is currently doubling every six months) and, in the near future, video services. This paper shows how the Internet can benefit from pipeline forwarding in order to: (i) construct ultra-scalable IP switches, (ii) provide predictable quality of service for UDP-based streaming applications, while (iii) preserving elastic TCP-based traffic as is, i.e., without affecting any existing best-effort applications.

4.
《Computer Networks》2007,51(4):1183-1204
The Differentiated Services (DiffServ) network model has been defined as a scalable framework for providing quality of service to applications. In this model, traffic is classified into several service classes with different priorities inside the queues of IP routers. The premium service class has the highest priority. Due to this high priority, the global network behaviour toward the premium class, including the routing and scheduling of premium packets, may significantly influence the traffic of other classes. These negative influences, which can degrade the performance of low-priority classes with respect to important metrics such as the packet loss probability and the packet delay, are often called inter-class effects. To reduce inter-class effects, the premium-class routing algorithm must be carefully selected such that (1) it works correctly (i.e., without loops) under the hop-by-hop routing paradigm; and (2) the congestion resulting from premium-class traffic over the network is minimized. In this paper, we first introduce a novel routing framework, named compatible routing, that guarantees loop-freedom in the context of the hop-by-hop routing model. Then, upon this framework, we propose two multipath architectures for load balancing of high-priority traffic in DiffServ networks. Our extensive simulations clearly demonstrate that the proposed methods distribute the premium bandwidth requirements more efficiently over the whole network and perform better than existing algorithms, especially for complex and highly loaded networks.

5.
Wireless broadband networks based on the IEEE 802.11 technology are being increasingly deployed as mesh networks to provide users with extended coverage for wireless Internet access. These wireless mesh networks, however, may be deployed by different authorities without any a priori coordination, and hence it is possible that they overlap partially or even entirely in service area, resulting in contention for radio resources among them. In this paper, we investigate the artifacts that result from the uncoordinated deployment of wireless mesh networks. We use a network optimization approach to model the problem as resource sharing among nodes belonging to one or different networks. Based on the proposed LP formulation, we then conduct simulations to characterize the performance of overlaying wireless mesh networks, with the goal of providing perspectives for addressing the problems. We find that in a system with multiple overlaying wireless mesh networks, if no form of inter-domain coordination is present, individual mesh networks can suffer capacity degradation due to increased network contention. One solution to this performance degradation is to “interwork” these wireless mesh networks by allowing inter-domain traffic relay through the provisioning of “bridge” nodes. However, if such bridge nodes are chosen arbitrarily, problems of throughput sub-optimality and unfairness may arise. We profile the impact of bridge node selection and show the importance of controlling network unfairness in wireless mesh network interworking. We conclude that mesh network interworking is a promising direction for addressing the artifacts of uncoordinated deployment of wireless mesh networks if it is supplemented with appropriate mechanisms.

6.
Jiang Nan, He Yuanzhi. Computer Science (《计算机科学》), 2015, 42(10): 95-100
This paper presents a Distributed Satellite Cluster Network (DSCN) architecture and describes the characteristics of DSCN topology changes. Based on an analysis of network-state acquisition methods and route computation approaches, an Ant Colony Optimization Based Traffic Classified Routing (ATCR) algorithm suited to DSCN is proposed. ATCR classifies traffic into delay-sensitive traffic A, bandwidth-sensitive traffic B, and best-effort traffic C, and improves on the slow convergence of the basic Ant Colony Optimization (ACO) algorithm. Simulation results show that ATCR converges faster and effectively balances network traffic: the end-to-end delay of traffic classes A and C is lower than under an improved ACO algorithm without traffic classification, and, by reducing the number of heavily loaded links and the packet loss caused by congestion, ATCR outperforms the improved ACO algorithm in packet delivery ratio.

7.
In wavelength division multiplexed (WDM)-based optical burst switching (OBS) networks, bursts that traverse longer paths are more likely to be dropped than bursts that traverse shorter paths, resulting in a fairness problem. Fairness here means that, for all ingress–egress node pairs in a network, a burst has an equal likelihood of getting through, independent of the hop length involved. In this paper, we develop a link-scheduling-state based fairness improvement method which can be used in a classless as well as a multi-class environment. The basic idea is to collect link scheduling state information and use it to determine the offset times for routes with different hop lengths. By using the online link state information, this method periodically computes and adapts the offset times needed, thus inherently accounting for traffic loading patterns and network topological connectivity. It also ensures that the delay experienced by a burst is low and that shorter-hop bursts are not over-penalized while the performance of longer-hop bursts is improved. The effectiveness of the proposed method is evaluated through simulation experiments.
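As a rough illustration of the offset-adaptation idea, one can periodically recompute per-route extra offsets from measured burst loss, so that routes losing more than the network-wide mean (typically the longer-hop ones) reserve earlier and get through more often. The update rule and constants below are our own illustrative assumptions; the paper derives offsets from link scheduling state, not from this loss heuristic.

```python
def adapt_offsets(route_loss, base_offset=10.0, gain=5.0):
    """Recompute per-route extra offset times (in arbitrary time units)
    from measured burst-loss rates.

    route_loss maps a route id to its observed burst-loss rate.  Routes
    losing more than the mean receive a larger extra offset, hence an
    earlier reservation and a better chance to get through; routes at or
    below the mean keep the base offset, so shorter-hop bursts are not
    over-penalized.
    """
    mean_loss = sum(route_loss.values()) / len(route_loss)
    return {
        route: base_offset
        + gain * max(0.0, loss - mean_loss) / max(mean_loss, 1e-9)
        for route, loss in route_loss.items()
    }
```

Running this on each measurement period makes the offsets track traffic loading patterns, mirroring the periodic recomputation the abstract describes.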

8.
《Computer Networks》2000,32(3):333-345
Several researchers have recently advocated dynamic pricing mechanisms such as the smart market. This paper explores how dynamic state-dependent pricing and explicit congestion control can both be used to avoid and alleviate congestion. Dynamic pricing has significant advantages for heterogeneous traffic, although this paper demonstrates that this approach reduces raw throughput. It is shown that when propagation delay is non-trivial, as is the case in wide-area networks, a slow-reacting version of dynamic pricing is preferable. This paper also advocates the use of novel stream-oriented best-effort ATM services with which a stream's arrival process is declared to the network before transmission begins and then policed, although there are no performance guarantees and none of these best-effort streams are ever blocked. With this approach, it is possible to provide price incentives for applications to decrease traffic burstiness, and to reveal important information about their packet streams, making mechanisms like slow-reacting dynamic pricing more practical.

9.
In multihop wireless networks, the onset of congestion severely degrades network performance. Based on the channel-access priority mechanism that IEEE 802.11e provides for differentiated traffic classes, this paper proposes a congestion-control protocol that dynamically adjusts the priority of best-effort traffic. The main idea is to raise the transmission priority of flows at congested nodes so that they obtain more transmission opportunities and the congestion is relieved, and, under severe congestion, to apply backpressure to reduce the forwarding rate of upstream nodes. Simulation results show that the algorithm effectively improves throughput under heavy network load.

10.
This paper deals with the impact of traffic handling mechanisms on capacity for different network architectures. Three traffic handling models are considered: per-flow, class-based and best-effort (BE). These models can be used to meet service guarantees, the major differences being in their implementation complexity and in the quantity of network resources that must be provisioned. In this study, the performance is fixed and the required capacity is determined for various combinations of traffic handling architectures for edge-core networks. This study provides a comparison of different QoS architectures. One key result of this work is that, on the basis of capacity requirements, there is no significant difference between semi-aggregate traffic handling and per-flow traffic handling. However, best-effort handling requires significantly more capacity than the other methods.

11.
Since the TCP protocol uses the loss of packets as an indication of network congestion, its performance degrades over wireless links, which are characterized by a high bit error rate. Different solutions have been proposed to improve the performance of TCP over wireless links, the most promising one being the use of a hybrid model at the link level combining Forward Error Correction (FEC), Automatic Repeat Request with Selective Repeat (ARQ-SR), and in-order delivery of packets to IP. The drawback of FEC is that it consumes some extra bandwidth to transmit the redundant information. ARQ-SR consumes extra bandwidth only when packets are lost; its drawback is that it increases the round-trip time (RTT), which may deteriorate the performance of TCP. Another drawback of ARQ-SR is that a complete packet may be retransmitted to correct a small piece of corrupted data. In this paper we study the performance of TCP over a wireless link implementing hybrid FEC/ARQ-SR. The study is carried out by simulating and modeling long-lived TCP transfers over wireless links exhibiting Bernoulli errors. We are motivated by the question of how to tune link-level error recovery, e.g. the amount of FEC and the persistency of ARQ, so as to maximize the performance of TCP. We provide results for different physical characteristics of the wireless link (delay, error rate) and for different traffic loads (number of TCP connections).
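The FEC-tuning question has a simple analytical core: over a Bernoulli channel with per-fragment loss probability p, a block of k data fragments protected by n − k parity fragments decodes if and only if at least k of the n fragments arrive. A small sketch of that calculation (the function names are ours, not the paper's):

```python
from math import comb


def fec_block_success(n, k, p_loss):
    """Probability that an (n, k) FEC block is recoverable over a
    Bernoulli channel: the block decodes iff at least k of its n
    fragments arrive, each surviving independently with prob. 1 - p_loss."""
    p_ok = 1.0 - p_loss
    return sum(
        comb(n, i) * p_ok**i * p_loss**(n - i) for i in range(k, n + 1)
    )


def redundancy_for_target(k, p_loss, target=0.999, n_max=64):
    """Smallest number of parity fragments that reaches the target block
    success probability, or None if n_max fragments are not enough."""
    for n in range(k, n_max + 1):
        if fec_block_success(n, k, p_loss) >= target:
            return n - k
    return None
```

Sweeping `p_loss` and the target makes the FEC/ARQ trade-off concrete: more parity buys reliability (helping TCP avoid spurious loss signals) at the cost of the extra bandwidth the abstract mentions.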

12.
By making the best use of limited bandwidth, quality of service (QoS) provisioning over the Internet is essential for satisfying various types of Internet-application requirements. Traffic classification and scheduling are the key functions for providing various classes of service (CoS) under an overload condition. This paper investigates QoS performance in a network equipment testbed implementing these main functions. We examine the major CoS functions provided by the Juniper T320 router and measure their performance. In addition to a fundamental analysis of the QoS behavior, we show the impact of QoS operations on a parallel system distributed across multi-domain networks as a practical case study of grid environments.

13.
Today the ICT industry accounts for 2–4% of worldwide carbon emissions, which are estimated to double by 2020 in a business-as-usual scenario. A remarkable part of the large energy volume consumed in the Internet today is due to the over-provisioning of network resources, such as routers, switches and links, to meet stringent reliability requirements. Therefore, performance and energy issues are important factors in designing gigabit routers for future networks. However, the design and prototyping of energy-efficient routers is challenging for multiple reasons, such as the lack of power measurements from live networks and of a good understanding of how energy consumption varies under different traffic loads and switch/router configuration settings. Moreover, the exact energy saving gained by adopting different energy-efficient techniques in different hardware prototypes is often poorly known. In this article, we first propose a measurement framework that is able to quantify and profile the detailed energy consumption of sub-components in the NetFPGA OpenFlow switch. We then propose a new power-scaling algorithm that can adapt the operational clock frequencies, as well as the corresponding energy consumption of the FPGA core and the Ethernet ports, to the actual traffic load. We also propose a new energy profiling method, which allows studying the detailed power performance of network devices. Results show that our energy-efficient solution achieves a higher level of energy efficiency than some existing approaches: the upper and lower bounds of the power consumption of the NetFPGA OpenFlow switch are shown to be 30% lower than those of a commercial HP Enterprise switch. Moreover, the new switch architecture can save up to 97% of the dynamic power consumption of the FPGA chip in the lowest frequency mode.
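The power-scaling idea reduces, at its simplest, to picking the lowest clock tier whose forwarding capacity still covers the measured load plus a safety headroom. The tier table below is hypothetical; the real NetFPGA frequencies, capacities and wattages in the article differ.

```python
# Hypothetical tiers: (clock in MHz, power draw in watts), plus the
# forwarding capacity each tier sustains (Mbit/s).  Illustrative only.
FREQ_TIERS = [(31.25, 5.0), (62.5, 6.5), (125.0, 9.0)]
CAPACITY_MBPS = {31.25: 250.0, 62.5: 500.0, 125.0: 1000.0}


def pick_frequency(load_mbps, headroom=1.2):
    """Return the lowest (frequency, power) tier whose capacity covers
    the measured traffic load plus a safety headroom; fall back to the
    highest tier when even that one is insufficient."""
    for freq, power in FREQ_TIERS:
        if CAPACITY_MBPS[freq] >= load_mbps * headroom:
            return freq, power
    return FREQ_TIERS[-1]
```

Re-evaluating `pick_frequency` on each load measurement is what converts idle periods into the dynamic-power savings the abstract reports for the lowest frequency mode.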

14.
This paper deals with the problem of load-balanced routing in multi-radio multi-rate multi-channel wireless mesh networks. Our analysis relies on multicast and broadcast sessions, where each session has a specific bandwidth requirement. We show that using both rate and channel diversity significantly improves the network performance. Toward this goal, we propose two cross-layer algorithms, the “Interference- and Rate-aware Multicast Tree (IRMT)” and the “Interference- and Rate-aware Broadcast Tree (IRBT)”. The proposed algorithms jointly address the problems of routing tree construction, transmission channel selection, transmission rate selection, and call admission control. As an advantage, the IRMT and IRBT algorithms consider both inter-flow and intra-flow interference. These schemes not only improve the utilization of network resources, but also balance the traffic load over the network. Numerical results demonstrate the efficiency of the proposed algorithms in terms of the number of transmissions, load balancing, and network throughput.

15.
《Computer Networks》2002,38(2):225-246
In this paper we present a general framework for radio resource allocation based on a matrix that highlights the trade-offs between complexity and efficiency. This framework is outlined for the systematic definition of scheduling algorithms that are jointly adaptive to traffic and to transmission quality, in order to improve radio resource utilization and the achievable throughput of cellular networks supporting best-effort traffic. We consider the application of the matrix concept to both time division and code division multiple access, the latter scheme also bringing about mutual interference among competing users. We then propose a scheduling algorithm for wireless systems, called channel adaptive open scheduling (CHAOS). The CHAOS performance in terms of throughput and delay is extensively compared with that of a load adaptive channel independent scheduling (CIS). A major result of this work is the quantitative assessment of the performance advantage gained by jointly accounting for traffic congestion and transmission quality. Moreover, the main implementation issues related to the proposed algorithms are investigated.

16.
A significant share of today's Internet traffic is generated by network gaming. This kind of traffic is interesting with regard to its market potential as well as its real-time requirements on the network. To account for game traffic in network dimensioning, traffic models are required that allow generating a characteristic load for analytical or simulative performance evaluation of networks. In this paper the fast-action multiplayer game Counter-Strike is evaluated based on one month of Internet traffic traces, and traffic models for client and server are presented. The paper concludes with remarks on QoS metrics for an adequate assessment of performance evaluation results.

17.
Quality of service (QoS) support in local and cluster area environments has become an issue of great interest in recent years. Most current high-performance interconnection solutions for these environments have been designed to enhance conventional best-effort traffic performance, but are not well-suited to the special requirements of the new multimedia applications. The multimedia router (MMR) aims at offering hardware-based QoS support within a compact interconnection component. One of the key elements in the MMR architecture is the algorithms used in traffic scheduling. These algorithms are responsible for the order in which information is forwarded through the internal switch. Thus, they are closely related to the QoS-provisioning mechanisms. In this paper, several traffic scheduling algorithms developed for the MMR architecture are described. Their general organization is motivated by chances for parallelization and pipelining, while providing the necessary support both to multimedia flows and to best-effort traffic. Performance evaluation results show that the QoS requirements of different connections are met, in spite of the presence of best-effort traffic, while achieving high link utilizations.

18.
The Internet infrastructure must evolve from best-effort service to meet the needs of different customers and applications. With Internet traffic differentiation, service providers can support a range of offerings, such as loss or delay bounds and network bandwidth allocation, to meet different performance requirements. The differentiated services (Diffserv) architecture provides a scalable approach, in which network access (or edge) devices aggregate traffic flows onto provisioned pipes that traverse a streamlined network core. We have identified the key requirements for provisioning Diffserv functions on Internet servers. Based on these requirements, we have implemented, and deployed, a policy-based architecture on IBM's AIX operating system that provides Diffserv services to both QoS-aware and -unaware applications

19.
With the multi-tier pricing schemes provided by most cloud service providers (CSPs), cloud users typically select a sufficiently high transmission service level to ensure quality of service (QoS), owing to the severe penalty for missing the transmission deadline. This leads to the so-called over-provisioning problem, which increases the cloud user's transmission cost. Given that cloud users may not be aware of their traffic demand before accessing the network, the over-provisioning problem becomes more serious. In this paper, we investigate how to reduce the transmission cost from the perspective of cloud users, especially when they are not aware of their traffic demand before the transmission deadline. The key idea is to split a long-term transmission request into several short ones. By selecting the most suitable transmission service level for each short-term request, a cost-efficient inter-datacenter transmission service level selection framework is obtained. We further formulate the transmission service level selection problem as a linear programming problem and solve it in an on-line fashion with Lyapunov optimization. We evaluate the proposed approach with real traffic data. The experimental results show that our method can reduce the transmission cost by up to 65.04%.
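The request-splitting idea can be sketched with a hypothetical three-tier price list: instead of buying the peak rate for the whole deadline, each short slot buys only the cheapest tier that ships the traffic that actually arrived in that slot. This greedy per-slot choice stands in for the paper's LP/Lyapunov machinery, which it does not reproduce.

```python
# Hypothetical tiers: (bandwidth in Mbit/s, price per slot).  Real CSP
# price lists differ; these values are for illustration only.
PRICE_TIERS = [(100, 1.0), (500, 4.0), (1000, 7.0)]


def cheapest_tier(volume_mbit, slot_s):
    """Cheapest service level able to move `volume_mbit` megabits within
    one short-term request lasting `slot_s` seconds."""
    needed = volume_mbit / slot_s  # required throughput in Mbit/s
    for bw, price in PRICE_TIERS:
        if bw >= needed:
            return bw, price
    return PRICE_TIERS[-1]  # demand exceeds every tier: take the highest


def split_and_select(slot_volumes, slot_s):
    """Split a long transfer into per-slot requests and pick a tier for
    each, paying only for the traffic that actually arrived per slot."""
    return [cheapest_tier(v, slot_s) for v in slot_volumes]
```

For bursty demand (e.g. slots of 50, 400 and 900 Mbit), the per-slot cost is 1 + 4 + 7 units, versus 3 × 7 units had the peak 1000 Mbit/s tier been reserved throughout, which is the over-provisioning gap the abstract targets.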

20.
Recently, Secure Multiparty Computation (SMC) has been proposed as an approach to enable inter-domain network monitoring while protecting the data of individual ISPs. The SMC family includes many different techniques and variants, featuring different forms of “security”, i.e., against different types of attack(er), and with different levels of computational complexity and communication overhead. In the context of collaborative network monitoring, the rate and volume of network data to be (securely) processed is massive, and the number of participating players is large; therefore scalability is a primary requirement. To preserve scalability one must sacrifice other requirements, such as verifiability and computational completeness, which, however, are not critical in our context. In this paper we consider two possible schemes: Shamir's Secret Sharing (SSS), based on polynomial interpolation over prime fields, and the Globally-Constrained Randomization (GCR) scheme, based on simple blinding. We address various system-level aspects and quantify the achievable performance of both schemes. A prototype version of GCR has been implemented as an extension of SEPIA, an open-source SMC library developed at ETH Zurich that supports SSS natively. We have performed a number of controlled experiments in distributed emulated scenarios to compare SSS and GCR performance. Our results show that additions via GCR are faster than via SSS, that the relative performance gain increases when scaling up the data volume and/or the number of participants, and when network conditions worsen. Furthermore, we analyze the performance degradation due to sudden node failures, and show that it can be satisfactorily controlled by keeping the fault probability below a reasonable level.
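Blinding-based secure addition reduces, in its simplest form, to an additive secret-sharing sum: each player splits its private value into random shares that sum to it modulo a public prime, hands the shares out, and only the global total is ever reconstructed. This is a minimal sketch of that primitive, not the real SEPIA/GCR protocol, which additionally handles communication, privacy peers and node failures.

```python
import secrets

P = 2**61 - 1  # public prime modulus shared by all players


def share(value, n_players):
    """Split a value into n additive shares that sum to value mod P.
    Any n-1 shares are uniformly random, revealing nothing on their own."""
    shares = [secrets.randbelow(P) for _ in range(n_players - 1)]
    shares.append((value - sum(shares)) % P)
    return shares


def secure_sum(private_values):
    """Simulate a blinding-based secure addition among len(private_values)
    players: player i sends one share of its input to each player j, each
    player sums what it received, and the partial sums combine into the
    global total without exposing any individual input."""
    n = len(private_values)
    received = [[] for _ in range(n)]  # received[j] = shares held by player j
    for v in private_values:
        for j, s in enumerate(share(v, n)):
            received[j].append(s)
    partials = [sum(col) % P for col in received]
    return sum(partials) % P
```

Because reconstruction is a plain modular addition (no polynomial interpolation as in SSS), the per-operation cost is lower, which is consistent with the GCR-vs-SSS speed gap the abstract reports.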
