20 similar references found
1.
In current networks, packet losses can occur if routers do not
provide sufficiently large buffers. This paper studies how many
buffers should be provided in a router to eliminate packet losses.
We assume a network router has m incoming queues, each
corresponding to a single traffic stream, and must schedule
at any time on-line from which queue to take the next packet to
send out. To exclude packet losses with a small amount of
buffers, the maximum queue length must be kept low over the entire
scheduling period. We call this new on-line problem the balanced
scheduling problem (BSP). By competitive analysis, we measure the
power of on-line scheduling algorithms to prevent packet losses.
We show that a simple greedy algorithm is
(log m)-competitive, which is asymptotically optimal,
while Round-Robin scheduling is no better than
m-competitive, as, in fact, is any deterministic on-line
algorithm for BSP.
We also give a polynomial time algorithm for
solving off-line BSP optimally.
We also study another on-line balancing problem that tries to
balance the delay among the m traffic streams.
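
As an illustration only (the greedy rule below, serving the currently longest queue, is an assumption; the abstract does not spell it out), a minimal sketch of a greedy balanced scheduler of the kind described above:

from collections import deque

def greedy_balanced_schedule(arrivals, num_queues):
    """Longest-queue-first scheduling sketch (assumed greedy rule).

    arrivals: list indexed by time step; arrivals[t] is an iterable of
    queue indices that each receive one packet at time t.  At every step
    one packet is sent from the currently longest queue, the natural
    greedy choice for keeping the maximum queue length (and hence the
    buffer size needed to avoid loss) low.
    """
    queues = [deque() for _ in range(num_queues)]
    max_len_seen = 0
    for t, batch in enumerate(arrivals):
        for q in batch:                      # packets arriving at time t
            queues[q].append(t)
        longest = max(range(num_queues), key=lambda i: len(queues[i]))
        if queues[longest]:                  # send one packet per step
            queues[longest].popleft()
        max_len_seen = max(max_len_seen, max(len(q) for q in queues))
    return max_len_seen                      # buffer size needed per queue

# Example: 3 streams, bursty arrivals on stream 0
print(greedy_balanced_schedule([[0, 1], [0], [0, 2], []], 3))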
2.
Research on the relationship between packet delay distribution and packet loss in the Internet
The distribution of packet delay is an important component of Internet end-to-end path behavior. Much prior work exists in this area, but many of its conclusions hold only when the packet loss rate is small. By studying the delay characteristics of 34 end-to-end paths on the China Science and Technology Network (CSTNET) and the China Education and Research Network (CERNET), we reach the following conclusions: (1) the distribution of packet delay is related to the packet loss rate; (2) when the loss rate is small, the delay distribution is usually unimodal, but as the loss rate increases the distribution loses its unimodality and becomes increasingly dispersed; (3) as the loss rate increases, occurrences of the inherent delay gradually become less frequent, and once the loss rate is large enough they become rare.
3.
In WDM networks, where the set of available wavelengths on each link changes dynamically, routing and wavelength assignment for real-time multicast connections is very difficult once wavelength-conversion delay is taken into account. This paper proposes a distributed routing and wavelength assignment algorithm for establishing real-time multicast connections that satisfy both a delay bound and a delay-variation bound. The algorithm assumes that each node has no global routing information and selects routes using only the state of its incident links, and it performs routing and wavelength assignment jointly. The multicast routing algorithm is based on Prim's minimum spanning tree algorithm and builds a minimum-cost tree that satisfies the given delay bound. Destination nodes that violate the delay bound are added to the multicast tree by inserting a cycle edge to form a cycle and then removing the long-delay path; destination nodes that violate the delay-variation bound are handled by reconstructing the Steiner tree so that the bound is met. Wavelength assignment uses minimum-wavelength-conversion and load-balancing strategies.
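
For illustration only, a minimal sketch of a Prim-style construction in the spirit of the tree-building step described above; the graph representation, names, and the omission of the cycle-based repair and Steiner reconstruction steps are all simplifying assumptions:

import heapq

def delay_bounded_tree(graph, source, delay_bound):
    """Prim-style growth of a low-cost tree under a delay bound (sketch).

    graph: dict mapping node -> list of (neighbor, cost, delay) edges.
    Starting from `source`, repeatedly add the cheapest edge that keeps
    the accumulated source-to-node delay within `delay_bound`; nodes
    that cannot be reached this way would need the paper's cycle-based
    repair step, which is omitted here.
    """
    tree_edges = []
    delay_to = {source: 0.0}
    heap = [(cost, source, nbr, d) for nbr, cost, d in graph[source]]
    heapq.heapify(heap)
    while heap:
        cost, u, v, d = heapq.heappop(heap)
        if v in delay_to or delay_to[u] + d > delay_bound:
            continue                       # already in tree or too slow
        delay_to[v] = delay_to[u] + d
        tree_edges.append((u, v, cost))
        for nbr, c, nd in graph[v]:
            if nbr not in delay_to:
                heapq.heappush(heap, (c, v, nbr, nd))
    return tree_edges, delay_to

# Toy example
g = {
    "s": [("a", 1, 2), ("b", 4, 1)],
    "a": [("s", 1, 2), ("b", 1, 5)],
    "b": [("s", 4, 1), ("a", 1, 5)],
}
print(delay_bounded_tree(g, "s", delay_bound=4))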
4.
An overview of the application of product-form queuing network models to the performance analysis of store-and-forward packet-switching networks is presented. Multiple routing chains are used to model the different routing behaviors of data packets. Queuing networks with open chains are considered in this paper. Kleinrock's formula for the mean end-to-end delay of packets is first derived. The application of this delay formula to optimal channel capacity assignment and optimal routing is discussed. Analytic results for the mean and distribution of the end-to-end delay of each chain are presented. The issue of fairness among chains is also addressed. The application of queuing networks with closed chains and population size constraints to the performance analysis of store-and-forward packet-switching networks is illustrated in a companion paper [1].
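
For reference, Kleinrock's mean end-to-end delay formula in its commonly cited form (standard notation under the independence assumption, not necessarily the notation of this paper):

T = \frac{1}{\gamma}\sum_{i=1}^{M}\frac{\lambda_i}{\mu C_i - \lambda_i}

where \gamma is the total external packet arrival rate, \lambda_i the packet rate on channel i, C_i the capacity of channel i, 1/\mu the mean packet length, and M the number of channels.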
5.
In packet-based voice applications (VoIP, Voice over Internet Protocol), the receiver must buffer incoming packets and play them out after a certain delay in order to compensate for the unpredictable transmission delay of packets in the network, thereby reducing jitter and achieving acceptable call quality. This paper studies dynamic playout delay algorithms, aiming to keep the playout delay as small as possible while also minimizing the packet loss rate. It proposes an effective dynamic playout delay algorithm that tracks the network transmission delays of recently arrived packets to obtain an approximate delay distribution and, combined with a delay-spike detection algorithm, uses this information to predict the playout delay of the next talkspurt. Experimental results show that the algorithm achieves a good balance between playout delay and packet loss rate and is a practical and effective approach.
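
As an illustrative sketch only (the abstract does not give the estimator; the exponentially weighted update and all constants below are assumptions in the spirit of classic adaptive playout schemes):

class PlayoutDelayEstimator:
    """Adaptive playout delay sketch (assumed estimator, not the paper's).

    Tracks an exponentially weighted mean and variation of recent
    network delays; a sudden jump switches to spike mode so the
    playout delay can follow the spike quickly.
    """

    def __init__(self, alpha=0.998, beta=0.75, margin=4.0, spike_factor=2.0):
        self.alpha = alpha          # smoothing for normal operation
        self.beta = beta            # faster smoothing inside a delay spike
        self.margin = margin        # safety factor on delay variation
        self.spike_factor = spike_factor
        self.d_hat = None           # estimated mean delay
        self.v_hat = 0.0            # estimated delay variation
        self.in_spike = False

    def observe(self, delay):
        """Update estimates with the delay of one received packet."""
        if self.d_hat is None:
            self.d_hat = delay
            return
        if delay > self.spike_factor * self.d_hat + self.v_hat:
            self.in_spike = True
        elif delay <= self.d_hat + self.v_hat:
            self.in_spike = False
        a = self.beta if self.in_spike else self.alpha
        self.v_hat = a * self.v_hat + (1 - a) * abs(delay - self.d_hat)
        self.d_hat = a * self.d_hat + (1 - a) * delay

    def playout_delay(self):
        """Playout delay to use at the start of the next talkspurt."""
        return self.d_hat + self.margin * self.v_hat

# Example (delays in milliseconds)
est = PlayoutDelayEstimator()
for d in [40, 42, 41, 120, 118, 45, 43]:
    est.observe(d)
print(round(est.playout_delay(), 1))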
6.
Mobile IP is a protocol that supports IP mobility, but when a mobile node resides in a foreign network far from its home network, registration incurs a long delay, which in turn causes severe packet loss and a drop in communication throughput. To address these shortcomings, node mobility has been divided into macro-mobility and micro-mobility, and many mobility management schemes for micro-mobility have been proposed. This paper studies in depth the packet loss rates of the macro-mobility protocol (RFC 2002) and of micro-mobility protocols, gives formulas for computing them, and analyzes the computed results in detail.
7.
We propose a framework to model on-line resource management problems based on an on-line version of positive linear programming. We consider both min cost problems and max benefit problems and propose logarithmic competitive algorithms that are optimal up to a constant factor. The proposed framework provides a general methodology that applies to a wide class of on-line problems: shop scheduling, packet routing, and in general a class of packing and assignment problems. Previously studied problems such as on-line multiprocessor scheduling and on-line virtual circuit routing can also be modeled within this framework. Received December 18, 1996; revised March 2, 1997.
8.
Solving the multicast routing optimization problem with delay and delay-jitter constraints using a genetic algorithm
Constrained multicast routing is a new problem raised by the development of network applications, with different variants depending on the constraints. This paper discusses the multicast routing optimization problem with delay and delay-jitter constraints, gives a mathematical model of the problem, proposes a genetic algorithm based on a candidate-route library for solving it, and compares the algorithm's simulation results with previous results. The results show that genetic algorithms are effective for this class of problems.
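
A hypothetical sketch of a candidate-route-library genetic algorithm of the kind described above; the encoding, penalty values, and GA parameters are all illustrative assumptions, not the paper's:

import random

def ga_multicast(candidate_routes, cost, delay, delay_bound, jitter_bound,
                 pop_size=30, generations=200, p_mut=0.1):
    """Candidate-route-library GA sketch for constrained multicast routing.

    candidate_routes: dict dest -> list of candidate routes (each route a
    tuple of links).  A chromosome picks one candidate index per
    destination; fitness is total link cost plus penalties when the
    delay bound or the delay-jitter bound is violated (assumed scheme).
    """
    dests = sorted(candidate_routes)

    def fitness(chrom):
        links, delays = set(), []
        for dst, idx in zip(dests, chrom):
            route = candidate_routes[dst][idx]
            links.update(route)
            delays.append(sum(delay[l] for l in route))
        total_cost = sum(cost[l] for l in links)
        penalty = 0.0
        if max(delays) > delay_bound:
            penalty += 1000.0                 # delay-bound violation
        if max(delays) - min(delays) > jitter_bound:
            penalty += 1000.0                 # delay-jitter violation
        return total_cost + penalty

    def random_chrom():
        return [random.randrange(len(candidate_routes[d])) for d in dests]

    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]        # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(dests)) if len(dests) > 1 else 0
            child = a[:cut] + b[cut:]         # one-point crossover
            if random.random() < p_mut:       # mutate one gene
                g = random.randrange(len(dests))
                child[g] = random.randrange(len(candidate_routes[dests[g]]))
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return {d: candidate_routes[d][i] for d, i in zip(dests, best)}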
9.
10.
Focusing on the goal of load balancing, this paper studies the dynamic scheduling of process-level parallel tasks and proposes a dynamic load balancing algorithm for heterogeneous cluster environments. Combined with an adaptive data collection and exchange algorithm, it effectively resolves load imbalance among servers and improves system throughput.
11.
With the continuing growth of the Internet, real-time data applications place higher quality-of-service requirements on the network, and packet scheduling is one of the core techniques for implementing network QoS control. A packet scheduler decides, according to certain rules, the order in which queued packets are sent and allocates the shared link bandwidth. This paper introduces an advanced packet scheduling and resource management model, the link-sharing model, together with two different implementations, CBQ and HPFQ, and analyzes their feasibility for transmitting real-time streams. The HPFQ algorithm provides strict delay guarantees for real-time data and full hierarchical link-sharing service. Using the network simulation tool NS2, the paper simulates and analyzes the QoS performance of real-time stream transmission under the link-sharing model.
12.
13.
We consider the following load balancing problem. Jobs arrive on-line and must be assigned to one of m machines, thereby increasing the load on that machine by a certain weight. Jobs also depart on-line. The goal is to minimize the maximum load on any machine, the load being defined as the sum of the weights of the jobs assigned to the machine divided by the machine capacity. The scheduler also has the option of preempting a job and reassigning it to another machine. Whenever a job is assigned or reassigned to a machine, the on-line algorithm incurs a reassignment cost depending on the job. For arbitrary reassignment costs and identical machines, we present an on-line algorithm with a competitive ratio of 3.5981 against current load, i.e., the maximum load at any time is less than 3.5981 times the lowest achievable load at that time. Our algorithm also incurs a reassignment cost less than 6.8285 times the cost of assigning all the jobs. For arbitrary reassignment costs and related machines we present an algorithm with a competitive ratio of 32 and a reassignment factor of 79.4. We also describe algorithms with better performance guarantees for some special cases of the problem. Received August 24, 1996; revised August 6, 1997.
14.
We consider the on-line load balancing problem where there are m identical machines (servers) and a sequence of jobs. The jobs arrive one by one and should be assigned to one of the machines in an on-line fashion. The goal is to minimize the sum (over all machines) of the squares of the loads, instead of the traditional maximum load. We show that for the sum of the squares the greedy algorithm performs within 4/3 of the optimum, and no on-line algorithm achieves a better competitive ratio. Interestingly, we show that the performance of Greedy is not monotone in the number of machines. More specifically, the competitive ratio is 4/3 for any number of machines divisible by 3 but strictly less than 4/3 in all the other cases (although it approaches 4/3 for a large number of machines). To prove that Greedy is optimal, we show a lower bound of 4/3 for any algorithm for three machines. Surprisingly, we provide a new on-line algorithm that performs within 4/3 − δ of the optimum, for some fixed δ > 0, for any sufficiently large number of machines. This implies that the asymptotic competitive ratio of our new algorithm is strictly better than the competitive ratio of any possible on-line algorithm. Such a phenomenon is not known to occur for the classic maximum load problem. Minimizing the sum of the squares is equivalent to minimizing the load vector with respect to the l_2 norm. We extend our techniques and analyze the exact competitive ratio of Greedy with respect to the l_p norm. This ratio turns out to be 2 − Θ((ln p)/p). We show that Greedy is optimal for two machines but design an algorithm whose asymptotic competitive ratio is better than the ratio of Greedy. Received January 2, 1998; revised December 12, 1998.
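
A minimal sketch (not taken from the paper) of the greedy rule analyzed above; on identical machines, minimizing the increase in the sum of squared loads reduces to choosing the currently least loaded machine:

import heapq

def greedy_sum_of_squares(job_weights, num_machines):
    """Greedy assignment minimizing the sum of squared machine loads.

    On identical machines, placing a job of weight w on a machine with
    load L increases the sum of squares by 2*L*w + w*w, so the greedy
    choice is simply the currently least loaded machine.  Returns the
    final sum of squared loads.
    """
    loads = [0.0] * num_machines
    heap = [(0.0, i) for i in range(num_machines)]   # (load, machine)
    heapq.heapify(heap)
    for w in job_weights:
        load, i = heapq.heappop(heap)
        loads[i] = load + w
        heapq.heappush(heap, (loads[i], i))
    return sum(l * l for l in loads)

# Example on 3 machines
print(greedy_sum_of_squares([1, 1, 1, 3], 3))   # greedy cost: 18.0
# An offline optimum keeps the weight-3 job alone: loads 3, 2, 1 -> cost 14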
15.
Gregory W. Cermak, International Journal of Speech Technology, 2002, 5(1): 65-84
An expanding proportion of voice traffic is being carried by packet networks. Speech quality can be impaired in qualitatively new ways in packet networks when packets are lost or the spacing between them is distorted. Three parameters that characterize the performance of packet networks were examined for their relative impact on speech quality as judged by human observers: network delay or latency, packet loss, and packet delay variation or jitter. We manipulated these variables via a network emulator made available by NIST. This report summarizes five laboratory experiments that examined the variables in a variety of experimental procedures for presenting and judging speech. The experiments agreed in showing that the relative importance of the variables for affecting speech quality was, in decreasing order: packet loss, jitter, delay. The effect on speech quality of 200 ms of network delay was shown to be equivalent to the effect of one percentage point of packet loss. Many consumers also traded off some speech quality for a free, added feature, unified messaging.
16.
Leyli Mohammad Khanli, Shiva Razzaghzadeh, Future Generation Computer Systems, 2012, 28(4): 682-688
Grid computing is a network of software and hardware capabilities that serves organizations as a comprehensive system for achieving maximum utilization of resources. Distributing resources in a heterogeneous and unstable environment, and distributing load effectively among those resources, are important and difficult problems in Grid networks. Dynamic and static algorithms, search trees, and branch-and-bound algorithms are among the available methods for achieving load balancing in Grid networks. This paper presents a new method for dynamic load balancing. In this method, we use the difference between forward and backward ants as a competency rank to prioritize the sites, and we use a control word to search for a suitable resource. Our main purpose is to assign jobs to the existing resources based on their processing power. Simulation results show that the proposed method can reduce the total completion time and the total tardiness while achieving load balance. The cost of using resources, an important factor in load balancing, is also examined.
17.
Multicasting has become increasingly important with the emergence of Internet-based applications such as IP telephony, audio/video conferencing, distributed databases and software upgrading. IP multicasting is an efficient way to distribute information from a single source to multiple destinations at different locations. In practice, IP is considered a layer 3 protocol. Multiprotocol Label Switching (MPLS) replaces IP forwarding by a simple label lookup. MPLS combines the flexibility of layer 3 routing and layer 2 switching. In order to provide QoS in group communications for real-time applications such as video conferencing, reliable multicasting is used. Miscellaneous efforts have been undertaken to provide reliability on top of IP multicast. Two error control strategies have been popular in practice: the FEC (Forward Error Correction) strategy, which uses error correction alone, and the ARQ (Automatic Repeat Request) strategy, which uses error detection combined with retransmission of data. In this paper, we present a new fair share policy (FSP) that utilizes Differentiated Services to solve the problems of QoS and congestion control when reliable ARQ multicast is used. The results should provide insight into the comparison of the residual packet loss probability between IP multicast in MPLS networks using FSP and plain IP multicast using the same policy, when DiffServ is adopted and reliable ARQ multicast is considered.
18.
Raffaele Bruno, Marco Conti, Antonio Pinizzotto, Performance Evaluation, 2011, 68(9): 841-858
In practical wireless mesh networks (WMNs), gateways are subject to hard capacity limits on the aggregate number of flows (in terms of bit rate) that they can support. Thus, if traffic is routed in the mesh network without considering those constraints, as well as the traffic distribution, some gateways or intermediate mesh routers may rapidly get overloaded, and the network resources can be unevenly utilized. To address this problem, in this paper we firstly develop a multi-class queuing network model to analyze feasible throughput allocations, as well as average end-to-end delay, in heterogeneous WMNs. Guided by our analysis, we design a Capacity-Aware Route Selection algorithm (CARS), which allocates network paths to downstream and upstream Internet flows so as to ensure a more balanced utilization of wireless network resources and gateways’ fixed connections. Through simulations in a number of different network scenarios we show that the CARS scheme significantly outperforms conventional shortest path routing, as well as an alternative routing method that distributes the traffic load on the gateway nodes to minimize its variance.
19.
Due to the emergence of Grid computing over the Internet, there is presently a need for dynamic load balancing algorithms
which take into account the characteristics of Grid computing environments. In this paper, we consider a Grid architecture
where computers belong to dispersed administrative domains or groups which are connected with heterogeneous communication
bandwidths. We address the problem of determining which group an arriving job should be allocated to and how its load can
be distributed among computers in the group to optimize the performance. We propose algorithms which guarantee finding a load
distribution over computers in a group that leads to the minimum response time or computational cost. We then study the effect
of pricing on load distribution by considering a simple pricing function. We develop three fully distributed algorithms to
decide which group the load should be allocated to, taking into account the communication cost among groups. These algorithms
use different information exchange methods and a resource estimation technique to improve the accuracy of load balancing.
We conducted extensive simulations to evaluate the performance of the proposed algorithms and strategies.
20.
On-Line Load Balancing and Network Flow
In this paper we study two problems that can be viewed as on-line games on a dynamic bipartite graph. The first problem is
on-line load balancing with preemption. A centralized scheduler must assign tasks to servers, processing on-line a sequence
of task arrivals and departures. Each task is restricted to run on some subset of the servers. The scheduler attempts to keep
the load well-balanced. If preemptive reassignments are disallowed, Azar et al. [3] proved a lower bound of Ω(n^{1/2}) on the ratio between the maximum load achieved by an on-line algorithm and the optimum off-line maximum load. We show that
this ratio can be greatly reduced by an efficient scheduler using only a small amount of rescheduling.
We then apply these ideas to network flow. Cheriyan and Hagerup [6] introduced an on-line game on a bipartite graph as a
fundamental step in improving algorithms for computing the maximum flow in networks. They described a randomized strategy
to play the game. King et al. [11] studied a modified version of this game, called "node kill", and gave a deterministic strategy. We obtain an improved deterministic algorithm for the node kill game (and hence for maximum flow) in all but the sparsest graphs. The running time achieved is O(mn log_{m/n} n + n^2 log^{2+ε} n), compared with King et al.'s O(mn + n^{2+ε}).
These problems combine a demand for good competitive ratios with more traditional requirements of implementation efficiency.
Our solutions deal with the tradeoffs between these measures.
Received March 15, 1997; revised April 20, 1997.