Similar Literature
20 similar documents found.
1.
The popularity and availability of Internet connections have opened up the opportunity for network-centric collaborative work that was impossible a few years ago. Contending traffic flows in this collaborative scenario share different kinds of resources such as network links, buffers, and router CPU. The goal should hence be overall fairness in the allocation of multiple resources rather than a specific resource. In this paper, firstly, we present a novel QoS-aware resource scheduling algorithm called Weighted Composite Bandwidth and CPU Scheduler (WCBCS), which jointly allocates the fair share of the link bandwidth as well as processing resource to all competing flows. WCBCS also uses a simple and adaptive online prediction scheme for reliably estimating the processing times of the incoming data packets. Secondly, we present some analytical results, extensive NS-2 simulation work, and experimental results from our implementation on the Intel IXP2400 network processor. The simulation and implementation results show that our low-complexity scheduling algorithm can efficiently maximise CPU and bandwidth utilisation while maintaining guaranteed Quality of Service (QoS) for each individual flow.
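
The abstract does not detail WCBCS's online prediction scheme; a common simple and adaptive choice for estimating per-packet processing times is an exponentially weighted moving average (EWMA), sketched below as a hypothetical illustration. The class name, smoothing factor, and per-flow keying are assumptions, not the paper's design.

```python
class ProcessingTimePredictor:
    """EWMA estimator for per-flow packet processing times (illustrative only)."""

    def __init__(self, alpha=0.125):
        self.alpha = alpha          # smoothing factor, assumed value
        self.estimate = {}          # flow id -> predicted CPU time (seconds)

    def predict(self, flow_id, default=1e-6):
        # Return the current estimate, or a default for a flow seen for the first time.
        return self.estimate.get(flow_id, default)

    def update(self, flow_id, measured):
        # Blend the measured processing time of the packet just served
        # into the running estimate for that flow.
        prev = self.estimate.get(flow_id, measured)
        self.estimate[flow_id] = (1 - self.alpha) * prev + self.alpha * measured
```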

2.
As flows of traffic traverse a network, they share with other flows a variety of resources such as links, buffers and router CPUs in their path. Fairness is an intuitively desirable property in the allocation of resources in a network shared among flows of traffic from different users. While fairness in bandwidth allocation over a shared link has been extensively studied, overall end-to-end fairness in the use of all the resources in the network is ultimately the desired goal. End-to-end fairness becomes especially critical when fair allocation algorithms are used as a component of the mechanisms used to provide end-to-end quality-of-service guarantees. This paper seeks to answer the question of what is fair when a set of traffic flows share multiple resources in the network with a shared order of preference for the opportunity to use these resources. We present the Principle of Fair Prioritized Resource Allocation or the FPRA principle, a powerful extension of any of the classic notions of fairness such as max–min fairness, proportional fairness and utility max–min fairness defined over a single resource. We illustrate this principle by applying it to a system model with a buffer and an output link shared among competing flows of traffic. To complete our illustration of the applicability of the FPRA principle, we propose a measure of fairness and evaluate representative buffer allocation algorithms based on this measure. Besides buffer allocation, the FPRA principle may also be used in other contexts in data communication networks and operating system design.
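
The FPRA principle itself cannot be reconstructed from this abstract, but the classic max–min fairness over a single resource that it extends can be computed with the standard water-filling procedure. The sketch below is that textbook algorithm, not the paper's contribution.

```python
def max_min_fair_share(capacity, demands):
    """Textbook max-min fair allocation of a single resource (e.g. link bandwidth).

    demands: dict flow_id -> requested rate. Returns dict flow_id -> allocated rate.
    """
    remaining = capacity
    unsatisfied = dict(demands)
    allocation = {}
    while unsatisfied:
        fair_share = remaining / len(unsatisfied)
        # Flows demanding less than the current fair share are fully satisfied.
        bounded = {f: d for f, d in unsatisfied.items() if d <= fair_share}
        if not bounded:
            # Everyone wants at least the fair share: split what is left equally.
            for f in unsatisfied:
                allocation[f] = fair_share
            return allocation
        for f, d in bounded.items():
            allocation[f] = d
            remaining -= d
            del unsatisfied[f]
    return allocation
```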

3.
This paper studies traffic flows at the base station of a network in which the Internet is accessed over a WLAN, and proposes a queue-length-based scheduling method together with a channel-capacity-based congestion control scheme, so as to achieve fair allocation of network resources and to remedy the problems caused by improper handling of packets accumulated at the base station. In the proposed resource allocation model, the scheduling algorithm randomly selects the next packet to transmit according to the backlogged queue length of each flow; in the congestion control scheme, link utilisation serves as the congestion indicator and, after computation, is fed back equally to the sender of every flow. Each sender adjusts its sending rate according to the congestion feedback so that fairness in resource allocation is achieved. Simulation results show that all flows share the wireless network bandwidth fairly. The main advantage of the algorithm is that the base station achieves a high degree of fairness without having to select packets according to any particular fairness definition.
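
A minimal sketch of the queue-length-weighted random selection described above, assuming the scheduler simply picks the flow whose packet is sent next with probability proportional to its backlog; the helper names are hypothetical.

```python
import random

def pick_next_flow(backlog):
    """Choose a flow with probability proportional to its queued packets.

    backlog: dict flow_id -> queue length; returns a flow_id, or None if all queues are empty.
    """
    flows = [f for f, q in backlog.items() if q > 0]
    if not flows:
        return None
    weights = [backlog[f] for f in flows]
    return random.choices(flows, weights=weights, k=1)[0]
```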

4.
《Computer Networks》2007,51(8):1981-1997
We consider the flow-level quality of service (QoS) seen by a dynamic load of rate adaptive sessions sharing a bottleneck link based on fair share bandwidth allocation. This is of interest both in considering wired networks supporting rate adaptive multimedia sessions and wireless networks supporting voice with rate adaptation to realize graceful degradation during congested periods. Two QoS metrics are considered: the time-average instantaneous utility of the allocated bandwidth, and the time-average of transition penalties associated with the changes in allocation seen by a flow. We present a simple model for rate adaptation, where (heterogeneous) flows can vary their rates within (different) ranges, and present closed-form results for these perceived flow-level QoS metrics. We then prove asymptotic results for large capacity systems exhibiting the salient features of rate adaptation in a dynamic network. Finally, we provide a concrete example, showing how the QoS seen by sessions with different degrees of adaptivity would vary under a natural fair bandwidth allocation policy.
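
The two flow-level QoS metrics named above can be computed from an allocation trace in the obvious way; the sketch below is a hedged illustration of those definitions only. The utility function and penalty weight are placeholders, not the paper's closed-form results.

```python
def flow_qos_metrics(trace, utility, transition_penalty=1.0):
    """Compute the two metrics for one flow from a bandwidth-allocation trace.

    trace: list of (duration_seconds, allocated_rate) segments seen by the flow.
    utility: callable mapping an allocated rate to an instantaneous utility.
    Returns (time-average utility, time-average transition penalty).
    """
    total_time = sum(d for d, _ in trace)
    avg_utility = sum(d * utility(r) for d, r in trace) / total_time
    # One penalty is charged each time the allocation changes between segments.
    changes = sum(1 for (_, r1), (_, r2) in zip(trace, trace[1:]) if r1 != r2)
    avg_penalty = transition_penalty * changes / total_time
    return avg_utility, avg_penalty
```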

5.
Bandwidth sharing and congestion control are important research topics for the robustness and fairness of the Internet. This paper studies fair bandwidth allocation among traffic flows sharing a congested link in a switching device and proposes an algorithm for optimising the bandwidth configuration of the device: the ingress-port bandwidth thresholds are set adaptively according to the receive rate of the egress port, and the ingress bandwidth is dynamically reallocated to improve resource utilisation. Simulation experiments show that the algorithm has the following advantages: 1) high adaptability; 2) max–min fairness; 3) fast response to network changes; 4) high reliability; 5) stability.

6.
A core-stateless adaptive proportional fair bandwidth allocation mechanism
This paper proposes CSPAFA (core stateless proportional adaptive fair allocation), a core-stateless adaptive proportional fair bandwidth allocation mechanism. Per-flow state processing is performed at the edge routers, where all flows are classified into two service types, marked and unmarked, and the relevant information is encoded into the IP packet header using the DPS (dynamic packet state) technique. At the core, the output link bandwidth is divided into two parts: according to the current network load, the core allocates output link bandwidth to marked flows in proportion to their service specifications and allocates bandwidth fairly among unmarked flows, while adaptively adjusting the bandwidth sharing ratio between the two service types. Finally, simulation results obtained in the NS network simulation environment are presented.

7.
Radio resource management mechanisms in current and future wireless networks are expected to face an enormous challenge due to the ever increasing demand for bandwidth and latency sensitive applications on mobile devices. This is because an optimal resource allocation scheme which attempts to multiplex the available bandwidth in order to maximize quality of service (QoS) will pose an exponential computational burden at the eNodeB. In order to minimize such computational overhead, this work proposes a hybrid offline-online resource allocation strategy which effectively allocates all the available resources among flows such that their QoS requirements are satisfied. The flows are first classified into priority buckets based on real-time criticality factors. During the offline phase, the scheduler attempts to maintain the system load within a pre-specified safe threshold value by selecting an appropriate number of buckets. This offline selection procedure makes use of the supervisory control theory of discrete event systems to synthesize an offline scheduler. Next, we have devised an online resource allocation strategy which runs on top of the offline policy and attempts to minimize the impact of the inherent variability in wireless networks. Simulation results show that the proposed scheduling framework is able to provide satisfactory QoS to all end users in most practical scenarios.

8.
Differentiated service (DiffServ) networks have been proposed to assure the achievable minimum bandwidth to aggregate flows. However, analyses in the literature show that current DiffServ networks are biased in favor of an aggregate flow that has a smaller committed information rate (CIR) when aggregate flows with different CIRs share a bottleneck link. In order to mitigate this unfairness problem, we propose an adaptive marking scheme which provides relative bandwidth assurance in proportion to the CIRs of the aggregates. By introducing a virtual target rate (VTR) and adjusting it depending on the provision level of the network, each aggregate can obtain its fair share of the bandwidth, regardless of traffic load. This scheme is based on a feedback approach. It utilizes only two-bit feedback information conveyed in the packet header and can be implemented in a distributed manner. Furthermore, the proposed scheme does not require calculating fair shares of aggregates or any additional signaling protocol. Using steady state analysis and extensive simulations, we show that the scheme can provide aggregate flows with their fair shares of bandwidth, which is proportional to the CIRs, under various network conditions.
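
The abstract outlines the mechanism only at a high level; below is a loose, hypothetical sketch of a marker that uses a virtual target rate scaled by a provision-level factor instead of the raw CIR. The scaling rule and names are assumptions, not the paper's scheme.

```python
def mark_packet(measured_rate, cir, provision_factor):
    """Hypothetical VTR-style marking decision for one aggregate (illustrative only).

    provision_factor > 1 when the network is over-provisioned, < 1 when congested;
    the virtual target rate replaces the CIR as the in/out-of-profile threshold.
    """
    vtr = cir * provision_factor
    if measured_rate <= vtr:
        return "in-profile"      # eligible for assured forwarding
    return "out-of-profile"      # marked as excess, dropped first under congestion
```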

9.
In the current Internet, applications employ many sessions with multiple connections between multiple senders and receivers. Sessions or users with more connections gain higher throughput, so applications tend to create more connections in order to obtain more network resources. This causes unfair bandwidth allocation under per-connection TCP rate allocation, and the network incurs significant TCP overhead. In this paper, we explore the issue of fair share allocation of bandwidth among sessions or users. Various fairness definitions are discussed in this study. Then, we propose a novel distributed scheme to achieve various fairness definitions. Simulation results show that our distributed scheme can achieve fair allocation according to each fairness definition.
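
One of the fairness definitions at issue in such work is per-session (rather than per-connection) fairness; a minimal sketch of what that allocation looks like, assuming the link is split equally among sessions and then among each session's connections.

```python
def per_session_fair_rates(link_capacity, sessions):
    """Per-session fairness: a session's share does not grow with its connection count.

    sessions: dict session_id -> number of open connections.
    Returns dict session_id -> rate of each individual connection in that session.
    """
    session_share = link_capacity / len(sessions)
    return {s: session_share / max(n, 1) for s, n in sessions.items()}
```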

10.
Today's distributed computing systems incorporate different types of nodes with varied bandwidth constraints which should be considered while designing cost-optimal job allocation schemes for better system performance. In this paper, we propose a fair pricing strategy for job allocation in bandwidth-constrained distributed systems. The strategy formulates an incomplete information, alternating-offers bargaining game on two variables, such as price per unit resource and percentage of bandwidth allocated, for both single and multiclass jobs at each node. We present a cost-optimal job allocation scheme for single-class jobs that involve communication delay and, hence, the link bandwidth. For fast and adaptive allocation of multiclass jobs, we describe three efficient heuristics and compare them under different network scenarios. The results show that the proposed algorithms are comparable to existing job allocation schemes in terms of the expected system response time over all jobs.

11.
Parallel systems are increasingly being used in multiuser environments with the interconnection network shared by several users at the same time. Fairness is an intuitively desirable property in the allocation of bandwidth available on a link among traffic flows of different users that share the link. Strict fairness in traffic scheduling can improve the isolation between users, offer a more predictable performance and improve performance by eliminating some bottlenecks. This paper presents a simple, fair, efficient, and easily implementable scheduling discipline, called Elastic Round Robin (ERR), designed to satisfy the unique needs of wormhole switching, which is popular in interconnection networks of parallel systems. In spite of the constraints of wormhole switching imposed on the design, ERR is also suitable for use in Internet routers and has better fairness and performance characteristics than previously known scheduling algorithms of comparable efficiency, including Deficit Round Robin and Surplus Round Robin. In this paper, we prove that ERR is efficient, with a per-packet work complexity of O(1). We analytically derive the relative fairness bound of ERR, a popular metric used to measure fairness. We also derive the bound on the start-up latency experienced by a new flow that arrives at an ERR scheduler. Finally, this paper presents simulation results comparing the fairness and performance characteristics of ERR with other scheduling disciplines of comparable efficiency.
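
ERR's exact allowance bookkeeping is not given in the abstract; for context, the sketch below is the well-known Deficit Round Robin scheduler against which the paper compares ERR. This is DRR, not ERR.

```python
from collections import deque

def deficit_round_robin(queues, quantum, rounds):
    """Classic Deficit Round Robin over per-flow packet queues (a packet = its size in bytes).

    queues: dict flow_id -> deque of packet sizes. Yields (flow_id, packet_size) in service order.
    """
    deficit = {f: 0 for f in queues}
    for _ in range(rounds):
        for f, q in queues.items():
            if not q:
                deficit[f] = 0          # an idle flow does not accumulate credit
                continue
            deficit[f] += quantum
            while q and q[0] <= deficit[f]:
                pkt = q.popleft()
                deficit[f] -= pkt
                yield f, pkt
```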

12.
One of the main challenges in Grid computing is the efficient allocation of resources (CPU hours, network bandwidth, etc.) to the tasks submitted by users. Due to the lack of centralized control and the dynamic/stochastic nature of resource availability, any successful allocation mechanism should be highly distributed and robust to changes in the Grid environment. Moreover, it is desirable to have an allocation mechanism that does not rely on the availability of coherent global information. In this paper we examine a simple algorithm for distributed resource allocation in a simplified Grid-like environment that meets the above requirements. Our system consists of a large number of heterogeneous reinforcement learning agents that share common resources for their computational needs. There is no explicit communication or interaction between the agents: the only information an agent receives is the expected response time of a job it submitted to a particular resource, which serves as a reinforcement signal for the agent. The results of our experiments suggest that even simple reinforcement learning can indeed be used to achieve load-balanced resource allocation in a large-scale heterogeneous system.
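
The abstract says only that agents learn from the observed response time of each submitted job; an epsilon-greedy bandit learner is one simple form such an agent could take, sketched below as an assumption rather than the paper's exact algorithm.

```python
import random

class ResourceSelector:
    """Epsilon-greedy agent that prefers resources with low observed response times."""

    def __init__(self, resources, epsilon=0.1):
        self.epsilon = epsilon
        self.avg_response = {r: 0.0 for r in resources}
        self.count = {r: 0 for r in resources}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.avg_response))          # explore
        return min(self.avg_response, key=self.avg_response.get)   # exploit fastest so far

    def feedback(self, resource, response_time):
        # Incremental mean of the response times observed on this resource.
        self.count[resource] += 1
        n = self.count[resource]
        self.avg_response[resource] += (response_time - self.avg_response[resource]) / n
```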

13.
The end-to-end congestion control mechanism of transmission control protocol (TCP) is critical to the robustness and fairness of the best-effort Internet. Since it is no longer practical to rely on end-systems to cooperatively deploy congestion control mechanisms, the network itself must now participate in regulating its own resource utilization. To that end, fairness-driven active queue management (AQM) is promising in sharing the scarce bandwidth among competing flows in a fair manner. However, most of the existing fairness-driven AQM schemes cannot provide efficient and fair bandwidth allocation while being scalable. This paper presents a novel fairness-driven AQM scheme, called CHORD (CHOKe with recent drop history) that seeks to maximize fair bandwidth sharing among aggregate flows while retaining the scalability in terms of the minimum possible state space and per-packet processing costs. Fairness is enforced by identifying and restricting high-bandwidth unresponsive flows at the time of congestion with a lightweight control function. The identification mechanism consists of a fixed-size cache to capture the history of recent drops with a state space equal to the size of the cache. The restriction mechanism is stateless with two matching trial phases and an adaptive drawing factor to take a strong punitive measure against the identified high-bandwidth unresponsive flows in proportion to the average buffer occupancy. Comprehensive performance evaluation indicates that among other well-known AQM schemes of comparable complexities, CHORD provides enhanced TCP goodput and intra-protocol fairness and is well-suited for fair bandwidth allocation to aggregate traffic across a wide range of packet and buffer sizes at a bottleneck router.
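
CHORD's drop-history cache and adaptive drawing factor are specific to the paper, but the matching-trial idea it builds on is the classic CHOKe test: on arrival during congestion, compare the incoming packet with a randomly drawn buffered packet and drop both if they belong to the same flow. The sketch below shows only that simplified base CHOKe test, with hypothetical names; it is not the CHORD implementation.

```python
import random

def choke_admit(queue, incoming_flow, avg_qlen, min_th, max_th):
    """Simplified CHOKe-style admission test, run when the average queue exceeds min_th.

    queue: list of (flow_id, packet) currently buffered. Returns True if the packet is admitted.
    """
    if avg_qlen <= min_th:
        return True                       # no congestion: always admit
    if queue:
        victim_idx = random.randrange(len(queue))
        if queue[victim_idx][0] == incoming_flow:
            del queue[victim_idx]         # matching-trial hit: drop the buffered packet...
            return False                  # ...and the incoming one
    # Otherwise fall back to a RED-like decision; in this simplification,
    # drop only when the average queue exceeds the hard threshold.
    return avg_qlen <= max_th
```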

14.
杨明  张福炎 《计算机科学》2003,30(10):109-112
An ECN-based bandwidth-sharing algorithm for unicast and multicast flows is presented. The algorithm uses a bandwidth allocation strategy that gives multicast flows an incentive in bandwidth allocation according to the number of receivers, while assuring that unicast flows obtain their bandwidth shares fairly. Over best-effort networks, an ECN-based congestion control algorithm is used to implement differentiated service in bandwidth allocation between unicast and multicast flows. In the implementation, we solve problems such as estimating the number of receivers, estimating the RTT, and the compromise between convergence and stability. The simulation results show that the algorithm achieves bandwidth sharing between TCP flows and multicast flows; it not only allocates more bandwidth to multicast flows but also guarantees that TCP flows obtain their fair bandwidth share.

15.
This paper analyses the drawbacks of static resource allocation schemes based on effective bandwidth theory. On this basis, a dynamic resource management algorithm based on traffic prediction is proposed and applied to the QoS Differentiated Services (DiffServ) architecture; the algorithm is implemented at the edge routers of a DiffServ network. Finally, the two algorithms are compared in the ns-2 simulation environment, and the experimental results show that the dynamic resource management algorithm clearly outperforms the static resource allocation scheme in terms of both packet loss rate and link utilisation.

16.
In an Infrastructure as a Service (IaaS), the amount of resources allocated to a virtual machine (VM) at creation time may be expressed with relative values (relative to the hardware, i.e., a fraction of the capacity of a device) or absolute values (i.e., a performance metric which is independent from the capacity of the hardware). Surprisingly, disk or network resource allocations are expressed with absolute values (bandwidth), but CPU resource allocations are expressed with relative values (a percentage of a processor). The major problem with relative CPU allocations is that they depend on the capacity of the CPU, which may vary due to different factors (server heterogeneity in a cluster, Dynamic Voltage Frequency Scaling (DVFS)). In this paper, we analyze the side effects and drawbacks of relative allocations. We claim that CPU allocation should be expressed with absolute values. We propose such a CPU resource management system and we demonstrate and evaluate its benefits.
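
A minimal sketch of the conversion the argument implies: turning an absolute CPU allocation (here a cycle budget in MHz) into the relative quota a scheduler actually consumes, given the current core frequency. The numbers and function names are assumptions for illustration, not the paper's system.

```python
def absolute_to_relative_quota(abs_mhz, core_freq_mhz, period_us=100_000):
    """Convert an absolute CPU allocation into a (quota_us, period_us) pair.

    abs_mhz: capacity promised to the VM, independent of the host.
    core_freq_mhz: current frequency of the core (changes under DVFS and across hosts).
    """
    fraction = abs_mhz / core_freq_mhz      # share of one core needed right now
    quota_us = int(fraction * period_us)
    return quota_us, period_us

# The same absolute promise maps to different relative shares on different hosts:
# 800 MHz on a 3200 MHz core -> 25% of the core; on a 1600 MHz core -> 50%.
```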

17.
韩国栋  邬江兴 《计算机工程》2005,31(10):45-47,53
Targeting the non-uniform and bursty nature of multi-service traffic in the access segment, this paper proposes a practical full-bandwidth dynamic allocation scheme that allows the maximum available bandwidth of a single service to exceed its fair share. Analysis and practical results show that, while guaranteeing bandwidth utilisation, fairness and the users' QoS targets, the scheme lets a demanding link occupy all of the remaining bandwidth (up to the full bandwidth) when the system is not saturated, greatly improving the overall bandwidth utilisation of the system.

18.
Clusters of computers have emerged as mainstream parallel and distributed platforms for high‐performance, high‐throughput and high‐availability computing. To enable effective resource management on clusters, numerous cluster management systems and schedulers have been designed. However, their focus has essentially been on maximizing CPU performance, but not on improving the value of utility delivered to the user and quality of services. This paper presents a new computational economy driven scheduling system called Libra, which has been designed to support allocation of resources based on the users' quality of service requirements. It is intended to work as an add‐on to the existing queuing and resource management system. The first version has been implemented as a plugin scheduler to the Portable Batch System. The scheduler offers market‐based economy driven service for managing batch jobs on clusters by scheduling CPU time according to user‐perceived value (utility), determined by their budget and deadline rather than system performance considerations. The Libra scheduler has been simulated using the GridSim toolkit to carry out a detailed performance analysis. Results show that the deadline and budget based proportional resource allocation strategy improves the utility of the system and user satisfaction as compared with system‐centric scheduling strategies. Copyright © 2004 John Wiley & Sons, Ltd.
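
The abstract does not give Libra's exact share formula; one plausible reading of "deadline and budget based proportional resource allocation" is sketched below, where each admitted job gets a CPU share sized so it can finish within its deadline, leaving the remainder for other work. This is a hypothetical illustration, not the published scheduler.

```python
def required_share(remaining_work_cpu_s, time_to_deadline_s):
    """CPU fraction a job needs from now on to meet its deadline (illustrative)."""
    if time_to_deadline_s <= 0:
        return 1.0
    return min(1.0, remaining_work_cpu_s / time_to_deadline_s)

def admit(running_jobs, new_job):
    """Admit the new job only if the node can still honour every deadline-driven share.

    running_jobs: list of (remaining_cpu_s, time_to_deadline_s); new_job: same tuple shape.
    """
    total = sum(required_share(w, d) for w, d in running_jobs) + required_share(*new_job)
    return total <= 1.0
```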

19.
In this paper, we propose, design, implement, and evaluate a CPU scheduler and a memory management scheme for interactive soft real-time applications. Our CPU scheduler provides a new CPU reservation algorithm that is based on the well-known Constant Bandwidth Server (CBS) algorithm but is more flexible in allocating CPU time to multiple concurrently-executing real-time applications. Our CPU scheduler also employs a new multicore scheduling algorithm, extending the Earliest Deadline First to yield Window-constraint Migrations (EDF-WM) algorithm, to improve the absolute CPU bandwidth available in reservation-based systems. Furthermore, we propose a memory reservation mechanism incorporating a new paging algorithm, called Private-Shared-Anonymous Paging (PSAP). This PSAP algorithm allows interactive real-time applications to remain responsive under memory pressure without wasting or compromising the memory resource available to contending best-effort applications. Our evaluation demonstrates that our CPU scheduler sustains stable frame rates for the simultaneous playback of multiple movies better than existing real-time CPU schedulers, while also improving the ratio of hard-deadline guarantees for randomly-generated task sets. Furthermore, we show that our memory management scheme can protect the simultaneous playback of multiple movies from the interference introduced by memory pressure, whereas these movies can become unresponsive under existing memory management schemes.
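
The Constant Bandwidth Server on which the reservation algorithm is based has well-known budget and deadline rules; a compact sketch of those textbook CBS rules follows. This is plain CBS, not the paper's more flexible reservation algorithm or EDF-WM.

```python
class ConstantBandwidthServer:
    """Textbook CBS: a reservation of Q seconds of budget every P seconds, scheduled under EDF."""

    def __init__(self, Q, P):
        self.Q, self.P = Q, P
        self.budget = Q
        self.deadline = 0.0

    def on_job_arrival(self, now):
        # If the leftover budget cannot be consumed before the current deadline
        # at the reserved rate Q/P, start a fresh server period; otherwise keep
        # the current budget and deadline.
        if self.budget >= (self.deadline - now) * (self.Q / self.P):
            self.deadline = now + self.P
            self.budget = self.Q

    def on_execution(self, executed):
        # Consume budget while the served task runs; when it is exhausted,
        # replenish it and postpone the server deadline by one period.
        self.budget -= executed
        while self.budget <= 0:
            self.budget += self.Q
            self.deadline += self.P
```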

20.
Grid technology greatly facilitates the sharing of, and collaboration over, geographically distributed resources. The information service is an important component of a grid, and its performance directly affects the quality of service of the whole grid platform. This paper designs a new information service mechanism based on network attributes: it associates IP attributes with network bandwidth and link transmission delay, and schedules each user task onto the most suitable resource so that the total submission time of tasks to resources is minimised, thereby optimising the resource allocation and scheduling process and improving grid performance.
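
A minimal sketch of the mapping rule the abstract describes, assuming each candidate resource is scored by the estimated submission (transfer) time of the task: data size over the measured bandwidth plus the link latency. Names and units are assumptions.

```python
def best_resource(task_size_mb, resources):
    """Pick the resource with the smallest estimated task submission time.

    resources: dict resource_id -> (bandwidth_mb_per_s, latency_s).
    """
    def submit_time(res):
        bandwidth, latency = resources[res]
        return task_size_mb / bandwidth + latency
    return min(resources, key=submit_time)
```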
