Similar Documents
18 similar documents found
1.
A Study of Priority Queueing Systems for Cell Scheduling
Based on queueing theory and practical applications, this paper presents a quantitative analysis of priority-based cell scheduling, deriving formulas for the mean delay and packet loss rate of each priority class and validating them by computer simulation. Comparing the theoretical formulas with the simulation results shows that, for a fixed buffer size, the service rate must be raised in order to reduce the mean delay and the loss rate. From the derived formulas, the specific service rate needed to keep the mean delay and loss rate within the allowed bounds can be determined. The analysis avoids the complexity of the probability generating function and numerical iteration methods previously used to compute mean delay and loss rate.
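The abstract does not reproduce the paper's finite-buffer formulas; as a hedged illustration of per-class mean delay in a priority queue, the sketch below evaluates the classical non-preemptive priority M/M/1 (Cobham) waiting-time formula. The arrival rates, service rate, and class ordering are hypothetical inputs, not values from the paper.

```python
# Hedged sketch: per-class mean waiting time in a non-preemptive priority
# M/M/1 queue (Cobham's formula). Illustrative only -- the cited paper
# analyses a finite-buffer system with loss.

def priority_mm1_waits(lams, mu):
    """lams: arrival rates ordered from highest to lowest priority;
    mu: common exponential service rate. Returns mean queueing delay per class."""
    rho = [l / mu for l in lams]
    if sum(rho) >= 1:
        raise ValueError("unstable: total load >= 1")
    w0 = sum(lams) / mu**2          # mean residual work: (1/2) * sum lambda_i * E[S^2], S ~ Exp(mu)
    waits, sigma_prev = [], 0.0
    for r in rho:
        sigma = sigma_prev + r
        waits.append(w0 / ((1 - sigma_prev) * (1 - sigma)))
        sigma_prev = sigma
    return waits

if __name__ == "__main__":
    # hypothetical example: two priority classes sharing one server
    print(priority_mm1_waits([0.3, 0.4], mu=1.0))
```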

2.
A multi-priority scheduling algorithm is proposed that provides end-to-end delay guarantees while meeting packet loss requirements. The algorithm schedules packets using the delay, loss rate, and guaranteed bandwidth recorded in the packet header as weights; by weighting the relative priority of cells together with the quality-of-service parameters, it yields a fair algorithm that satisfies absolute QoS targets. It also frees the system from maintaining per-flow state and from complex per-flow queue management and scheduling, which improves scalability. Computer simulation shows that the algorithm achieves high resource utilization, low end-to-end delay and delay jitter, and a low packet drop rate.

3.
To counter the pronounced quality degradation that video suffers when transmitted over the Internet due to packet loss and delay, an error-correction algorithm for video transmission based on packet loss prediction is proposed. The algorithm uses a hidden Markov model to predict the network loss rate and, depending on its magnitude, adaptively selects FEC or ARQ to correct errors in the video. When the predicted loss rate is high, selective ARQ is used to recover lost video packets (avoiding the drop in bandwidth utilization that FEC causes at high loss rates), and limiting the number of retransmissions preserves real-time delivery; when the predicted loss rate is low, FEC with an optimized RS redundancy value is applied. Simulation in OPNET Modeler shows that, compared with HARQ, the algorithm raises the average PSNR of the video by 1.6 dB and reduces the average delay by about 0.24 s. The algorithm lowers both the average delay and the loss rate of video transmission and improves the quality of the reconstructed video at the receiver, while remaining simple and of low complexity.
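The abstract does not spell out the switching rule; the sketch below is a minimal illustration of how a predicted loss rate could drive the FEC/ARQ choice. The threshold, the RS redundancy heuristic, and the retransmission cap are assumptions, not values from the paper.

```python
# Hedged sketch of adaptive FEC/ARQ selection driven by a predicted loss rate.
# The 10% threshold, RS sizing rule, and retransmission cap are illustrative
# assumptions; the cited paper derives the decision from an HMM loss predictor.
import math

LOSS_THRESHOLD = 0.10   # assumed switch-over point
MAX_RETX = 2            # assumed cap to bound added delay

def choose_correction(predicted_loss, k_data=20):
    if predicted_loss >= LOSS_THRESHOLD:
        # high loss: selective ARQ with bounded retransmissions
        return {"scheme": "selective-ARQ", "max_retx": MAX_RETX}
    # low loss: RS-based FEC, redundancy sized to the predicted loss
    parity = max(1, math.ceil(k_data * predicted_loss * 2))  # simple heuristic
    return {"scheme": "FEC", "rs_code": (k_data + parity, k_data)}

print(choose_correction(0.03))   # -> FEC with small RS redundancy
print(choose_correction(0.15))   # -> selective ARQ, capped retransmissions
```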

4.
Nonlinear Congestion Control Based on the RED Algorithm
Because RED varies the drop probability linearly with the average queue length, the network drops relatively many packets when congestion is mild and relatively few when congestion is severe, which weakens congestion control. This paper proposes a nonlinear smoothing of RED's drop-probability function, so that the drop probability grows slowly near the minimum threshold and quickly near the maximum threshold; this controls the average queue length effectively and improves the congestion control capability. NS2 simulation results show clear improvements in loss rate, end-to-end delay, throughput, and delay jitter.
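The abstract describes the shape of the modified drop curve but not its exact formula; as a hedged illustration, the sketch below contrasts the classical linear RED mapping with a quadratic variant that grows slowly near min_th and steeply near max_th. The quadratic form is an assumption chosen to match the described behaviour, not the paper's function.

```python
# Hedged sketch: linear RED drop probability vs. a nonlinear (quadratic)
# variant that rises slowly near min_th and steeply near max_th.

def red_linear(avg_q, min_th, max_th, p_max):
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

def red_nonlinear(avg_q, min_th, max_th, p_max):
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    x = (avg_q - min_th) / (max_th - min_th)
    return p_max * x * x      # slow growth near min_th, fast near max_th

for q in (25, 50, 75, 95):
    print(q, red_linear(q, 20, 100, 0.1), red_nonlinear(q, 20, 100, 0.1))
```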

5.
To improve TCP performance in high-loss environments, an intelligent response mechanism based on round-trip-time (RTT) deviation is proposed. The RTT deviation is normalized into a standard delay factor, which is used to correct the increase and decrease of the congestion window, so that the window growth rate adapts to the RTT deviation and random loss can be distinguished from network congestion. The mechanism is implemented as a Linux kernel module and can be deployed quickly into any AIMD-based congestion control scheme. Simulation results show that with this mechanism the average throughput exceeds that of CUBIC by 57%, effectively improving bandwidth utilization in high-loss environments.
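The abstract does not give the normalization or the correction terms; the sketch below shows one plausible reading, in which a delay factor in [0, 1] scales the AIMD increase and decrease. The min/max RTT tracking and the scaling constants are assumptions for illustration, not the paper's mechanism.

```python
# Hedged sketch: scaling AIMD window updates by a normalized RTT-deviation
# factor. The normalization over [rtt_min, rtt_max] and the way the factor
# enters the increase/decrease are illustrative assumptions.

def delay_factor(rtt, rtt_min, rtt_max):
    """Normalized delay factor in [0, 1]: 0 = no queueing, 1 = heavy queueing."""
    if rtt_max <= rtt_min:
        return 0.0
    return min(max((rtt - rtt_min) / (rtt_max - rtt_min), 0.0), 1.0)

def on_ack(cwnd, rtt, rtt_min, rtt_max):
    d = delay_factor(rtt, rtt_min, rtt_max)
    return cwnd + (1.0 - d) / cwnd        # grow faster when queueing delay is low

def on_loss(cwnd, rtt, rtt_min, rtt_max):
    d = delay_factor(rtt, rtt_min, rtt_max)
    # small d suggests random loss (little queueing): back off gently;
    # large d suggests real congestion: back off close to the usual halving
    return cwnd * (1.0 - 0.5 * d)

print(on_ack(100.0, rtt=0.05, rtt_min=0.04, rtt_max=0.20))
print(on_loss(100.0, rtt=0.05, rtt_min=0.04, rtt_max=0.20))  # likely random loss
print(on_loss(100.0, rtt=0.19, rtt_min=0.04, rtt_max=0.20))  # likely congestion
```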

6.
To address the coexistence of delay and packet loss in networked control systems, and based on the operation of the zero-order hold, a system affected by both delay and packet loss is modeled as a control system with input delay. Delay-dependent stability conditions are derived from Lyapunov stability theory and time-delay system theory, and a controller design method is then given based on cone complementarity linearization, effectively solving the control problem with simultaneous delay and packet loss. A simulation example demonstrates the effectiveness of the proposed method.

7.
For wireless communication networks, an improved random early detection (RED) algorithm based on both average queue length and waiting time is proposed. The algorithm computes the packet drop probability from the average queue length and the waiting time. Simulation results show that, compared with RED based on average queue length alone, it achieves higher throughput, a lower loss rate, and lower delay jitter under heavy traffic load, and thus implements congestion control in wireless networks more effectively.

8.
With the development of Internet technology, Ethernet is gradually being introduced into industrial control. For a networked control system, the delay and packet loss introduced by the network are unavoidable, so modeling such a system must give particular weight to their effects. The delay characteristics of networked control systems are analyzed, and a system with packet loss is modeled as an asynchronous dynamic system with event-rate constraints. The model is then simulated under conventional PID control, and the stability of the system is analyzed both with and without random delay; the simulation results further verify the range of packet loss rates within which the networked control system remains stable.

9.
OPNET-Based Network Stress Simulation
To address the shortcomings of stress-testing tools in network planning and optimization, an OPNET-based network stress simulation method is proposed. Multiple simulation scenarios with different network loads are built and, taking stress simulation of the FFP service as an example, the throughput, delay, and loss rate of the servers and the network in these scenarios are collected and compared; the various operational data are analyzed to characterize the network load. The results show that OPNET can be used effectively for network stress simulation.

10.
An OPNET simulation of the M/M/S service system from queueing theory shows that changes in parameters such as the mean packet arrival rate, mean packet size, number of servers, and mean service rate per server affect the mean packet delay and the time-averaged queue length, and also affect the stability of the system.
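For comparison with such simulations, the steady-state M/M/s quantities can also be computed analytically; the sketch below evaluates the Erlang C waiting probability, mean waiting time, and mean queue length for assumed parameter values.

```python
# Analytical M/M/s (Erlang C) reference values to compare against an
# OPNET-style simulation. The parameter values below are illustrative.
from math import factorial

def mms_metrics(lam, mu, s):
    """Return (P_wait, Wq, Lq) for an M/M/s queue; requires lam < s*mu."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / s                       # per-server utilization
    if rho >= 1:
        raise ValueError("unstable: lambda >= s*mu")
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(s))
                + a**s / (factorial(s) * (1 - rho)))
    p_wait = a**s / (factorial(s) * (1 - rho)) * p0   # Erlang C formula
    wq = p_wait / (s * mu - lam)                      # mean wait in queue
    lq = lam * wq                                     # Little's law
    return p_wait, wq, lq

print(mms_metrics(lam=8.0, mu=1.0, s=10))   # hypothetical example
```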

11.
In this investigation, a loss-and-delay Markovian queueing system with no-passing is proposed. Customers who find all servers busy on arrival may balk or renege with a certain probability. To cope with this balking and reneging behaviour, removable additional servers are provided alongside the permanent servers so that a better grade of service can be offered under optimal-cost operating conditions. Customers are classified into two classes according to whether they wait or are lost when all servers are busy. From the service point of view, customers can also be divided into two classes: type A customers have zero service time, whereas type B customers have exponentially distributed service times. Explicit expressions for the average number of customers in the system, the expected waiting time for both types of customers, etc., are derived from the steady-state queue-size distribution. Some earlier results are recovered by setting the system parameters appropriately, and the system behaviour is examined through numerical illustrations in which the different parameters are varied. Scope and purpose: The performance prediction of systems such as communication switching networks, remote border security check posts, and job processing in computers is influenced by customer behaviour, in particular when no-passing constraints are in force. Incorporating the loss and delay phenomena helps in understanding whether customers will wait in the queue or be lost when all servers are busy. The provision of additional removable servers helps to upgrade the service and to reduce customer discouragement in such congestion situations.

12.
Delay Analysis of the RMTP Protocol
林宇, 王重钢, 王文东, 程时端. 软件学报 (Journal of Software), 2002, 13(8): 1710-1717
RMTP (reliable multicast transport protocol) is a multicast protocol that provides reliability through local recovery by repair servers. This paper models and analyzes the delay performance of an improved RMTP and derives a formula for the mean delay between a packet leaving the sending host and its successful reception at a randomly chosen receiver. The analysis shows that the point at which RMTP's delay performance deteriorates drops quickly as the loss rate grows, and also drops as the number of tail links attached to each repair server increases; however, if the total number of tail links grows while the number per repair server stays fixed, the delay performance changes very little. Simulation results validate the analysis well.

13.

In the face of massive parallel multimedia streaming and user access, multimedia servers are often overloaded, which delays service responses and leaves wireless resources under-utilized, making it difficult to satisfy the users' quality of experience. Aiming at the low utilization of multimedia communication resources and the heavy computing load on servers, this paper proposes a self-management mechanism and architecture for wireless resources based on green communication for multimedia streams. First, combining the multimedia server, relay base stations, and user clusters, a multimedia green communication architecture is built around the overall utilization of multimedia communication resources, and a cluster-based green communication control algorithm is proposed. Second, for dynamic service demands and an asynchronous multimedia communication environment, a dynamic multimedia wireless resource architecture is built with the goals of keeping resource allocation balanced and speeding it up. Finally, experimental results measuring delay under different scales of parallel multimedia streams at the server, the proportion of free relay-network resources, user satisfaction, packet loss rate, and other metrics show that the proposed algorithm is effective and feasible.


14.
The ever wider use of Web applications on the Internet requires Web servers to provide performance guarantees and differentiated services under heavy load, so as to meet different user needs. Response delay is a key performance metric for Web servers, and proportional delay differentiation is an important differentiated-service model. For the Apache Web server, proportional delay differentiation based on adaptive control is proposed and implemented. In each sampling period the adaptive controller, following preset delay differentiation parameters, dynamically computes and adjusts the number of service threads for each client class, ensuring that high-priority clients on the Apache server experience lower average connection delay while the ratios of average delays between client classes remain fixed. Simulation results show that under dynamically changing load, varying reference inputs, and different system configurations, the controller-driven Apache server reliably provides proportional delay differentiation.
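The abstract does not give the control law itself; the sketch below shows one simple integral-style controller for two client classes that nudges thread allocations toward a target delay ratio. The gain, thread bounds, and measured delays are assumptions for illustration, not the paper's design.

```python
# Hedged sketch: integral-style adjustment of per-class service threads so
# the measured average-delay ratio tracks a preset differentiation target.
# Gain and thread bounds are illustrative, not the paper's values.

def adjust_threads(threads_hi, threads_lo, delay_hi, delay_lo,
                   target_ratio=2.0, gain=4.0, total=64):
    """target_ratio: desired delay_lo / delay_hi. Returns new (hi, lo) thread counts."""
    if delay_hi <= 0:
        return threads_hi, threads_lo
    error = target_ratio - (delay_lo / delay_hi)
    # positive error: low-priority delay is not high enough relative to
    # high-priority, so shift threads toward the high-priority class
    shift = round(gain * error)
    new_hi = min(max(threads_hi + shift, 1), total - 1)
    return new_hi, total - new_hi

# hypothetical sampling period: measured delays 80 ms (hi) and 120 ms (lo)
print(adjust_threads(32, 32, delay_hi=0.08, delay_lo=0.12))
```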

15.
Motivated by the need to study traffic incident management, we consider a Markovian infinite-server queue that is subjected to randomly occurring shocks. These shocks cause the service of all servers to deteriorate, i.e., they increase the service time of all servers, and may also trigger further shocks and hence further deterioration. There are a finite number of service levels: level zero corresponds to normal service with the highest service rate, and the last level corresponds to the slowest service rate, which could even be zero, implying complete service breakdown. The repair process is performed only at the last level. Queues of this type also approximate multi-server call centers with deteriorating service. We derive the mean and variance of the stationary number in the system, and show that the mean is convex with respect to the repair rate. Furthermore, we study the optimal repair rate that minimizes the expected long-run average cost incurred due to delay and repairs. We show that the expected total cost per unit time as a function of the repair rate is unimodal, and derive conditions under which the cost function takes one of three simple forms, so that the optimum repair rate can easily be obtained. Numerical examples are also provided.

16.
Analyzing factors that influence end-to-end Web performance
Web performance affects the popularity of a particular Web site or service as well as the load on the network, but there have been no publicly available end-to-end measurements focused on a large number of popular Web servers that examine the components of delay or the effectiveness of the recent changes to the HTTP protocol. In this paper we report on an extensive study carried out from many client sites geographically distributed around the world to a collection of over 700 servers to which a majority of Web traffic is directed. Our results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects, or closing a persistent connection without explicit notification, can reduce or eliminate any performance improvement. Similarly, caching and multi-server content distribution can also improve performance if done effectively.

17.
The emergence of mobile edge computing (MEC) lets mobile users access services deployed on edge servers with low latency. MEC still faces various challenges, however, especially the service deployment problem: edge servers are usually limited in number and resources and can host only a limited number of services, and user mobility changes the popularity of different services across regions. Deploying suitable services for dynamic requests therefore becomes a key problem. To address it, suitable services are deployed by learning the dynamic user requests so as to minimize interaction delay; the service deployment problem is formulated as a global optimization problem, and a resource aggregation algorithm based on cluster partitioning is proposed to make an initial deployment of suitable services under computing, bandwidth, and other resource constraints. In addition, considering how dynamic user requests affect service popularity and edge-server load, a dynamic adjustment algorithm is developed to update the deployed services so that the quality of service (QoS) always meets user expectations. A series of simulation experiments verifies the performance of the proposed strategy; the results show that, compared with existing baseline algorithms, it reduces service interaction delay and achieves more stable load balancing.
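The abstract leaves the placement heuristic unspecified; as a hedged illustration of the general idea, the sketch below greedily places the most popular services on an edge server until its capacity is exhausted. The popularity scores, service sizes, and capacity model are hypothetical; the cited paper instead uses cluster partitioning and a global optimization formulation.

```python
# Hedged sketch: greedy popularity-driven service placement under a simple
# per-server capacity constraint. Popularities, sizes, and capacity are
# hypothetical values for illustration only.

def place_services(popularity, size, capacity):
    """popularity/size: dicts keyed by service id; capacity: one edge server."""
    placed, used = [], 0
    for svc in sorted(popularity, key=popularity.get, reverse=True):
        if used + size[svc] <= capacity:
            placed.append(svc)
            used += size[svc]
    return placed

popularity = {"video": 0.5, "ar": 0.3, "map": 0.15, "chat": 0.05}
size = {"video": 4, "ar": 3, "map": 2, "chat": 1}
print(place_services(popularity, size, capacity=6))   # -> ['video', 'map']
```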

18.
A multiserver on-demand system is considered in which each call has three interdependent random characteristics: the required number of servers, the capacity, and the service time. The total capacity of calls and the total number of servers in the system are limited. The type of a call is defined by the number of servers required for its service. We find the stationary distribution of the number of calls in the system, as well as the loss probability for a call of each type.
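The paper's model (with random per-call capacity and a joint server limit) is more general than what is shown here; as a hedged, simplified illustration of per-type loss probabilities, the sketch below computes per-class blocking in a multirate loss system via the Kaufman-Roberts recursion, where each class occupies a fixed number of servers.

```python
# Hedged sketch: per-class blocking probabilities in a multirate loss system
# (Kaufman-Roberts recursion). Simplified stand-in for the cited model: each
# class needs a fixed number of servers, and the random per-call capacity of
# the original model is ignored.

def kaufman_roberts(C, classes):
    """C: total servers; classes: list of (offered_load_a, servers_per_call_b)."""
    q = [0.0] * (C + 1)
    q[0] = 1.0
    for n in range(1, C + 1):
        q[n] = sum(a * b * q[n - b] for a, b in classes if b <= n) / n
    norm = sum(q)
    p = [x / norm for x in q]                        # occupancy distribution
    return [sum(p[C - b + 1:]) for _, b in classes]  # blocking per class

# hypothetical example: 20 servers; class 1 needs 1 server, class 2 needs 4
print(kaufman_roberts(20, [(8.0, 1), (2.0, 4)]))
```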
