Similar Literature
20 similar documents found (search time: 31 ms)
1.
IT service providers are increasingly hosting services for different customers on a shared IT infrastructure. While this improves utilization of the hardware infrastructure, system malfunctions, unexpected service behaviour or peak demands for one or more services may exhaust the shared resource pools (CPU, I/O, main memory, bandwidth, etc.), forcing service requests to be rejected. In this paper we describe models for dynamic admission control on shared infrastructures. The admission control model decides whether to accept, buffer or reject a service request based on its revenue, Service Level Agreements (SLAs) and its resource demand relative to the current workload, with the goal of maximizing overall revenue. Simulations of a media streaming infrastructure are used for evaluation and for comparison with traditional admission control policies.
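As a rough illustration of the accept/buffer/reject decision described above, the following Python sketch admits a request only if its resource demand fits the remaining pool capacity, buffers it when the shortfall looks small, and rejects it otherwise. All names, thresholds and the revenue/SLA trade-off are illustrative assumptions, not the paper's actual model.

```python
# Illustrative accept/buffer/reject decision (hypothetical names and thresholds).
from dataclasses import dataclass

@dataclass
class Request:
    revenue: float        # revenue earned if the request is served
    sla_penalty: float    # penalty owed if the request is rejected
    demand: dict          # e.g. {"cpu": 0.2, "io": 0.1, "mem": 0.3, "bw": 0.15}

def decide(request, capacity, load, buffer_headroom=0.1):
    """Return 'accept', 'buffer', or 'reject' for a single request."""
    # Free fraction of every resource pool after serving the current workload.
    free = {r: capacity[r] - load.get(r, 0.0) for r in capacity}
    fits_now = all(request.demand[r] <= free.get(r, 0.0) for r in request.demand)
    if fits_now:
        return "accept"
    # Buffer only if the shortfall is small and the request is worth waiting for
    # (revenue plus avoided SLA penalty makes holding it in the queue worthwhile).
    shortfall = max(request.demand[r] - free.get(r, 0.0) for r in request.demand)
    if shortfall <= buffer_headroom and request.revenue + request.sla_penalty > 0:
        return "buffer"
    return "reject"

if __name__ == "__main__":
    cap = {"cpu": 1.0, "io": 1.0, "mem": 1.0, "bw": 1.0}
    load = {"cpu": 0.85, "io": 0.4, "mem": 0.6, "bw": 0.7}
    req = Request(revenue=5.0, sla_penalty=2.0,
                  demand={"cpu": 0.2, "io": 0.1, "mem": 0.2, "bw": 0.1})
    print(decide(req, cap, load))   # -> 'buffer' (CPU shortfall of 0.05)
```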

2.
Conventional admission control models incur some performance penalty. First, the admission control computation can overload a server that is already heavily loaded. Also, in large-scale media systems with geographically distributed server clusters, performing admission control at each cluster can result in long response latency if a client request is denied at one site and has to be forwarded to another. Furthermore, with prefix caching, the initial frames cached at the proxy are delivered to the client before the admission decision is made; if the media server is heavily loaded and ultimately has to deny the request, forwarding a large number of initial frames wastes critical network resources. In this paper, a novel distributed admission control model is presented. We make use of proxy servers to perform the admission control tasks. Each proxy hosts an agent to coordinate the effort. Agents reserve the media server's disk bandwidth and make admission decisions autonomously based on their allocated disk bandwidth. We develop an effective game-theoretic framework to achieve fairness in the bandwidth allocation among the agents. To improve overall bandwidth utilization, we also consider an aggressive admission control policy in which each agent may admit more requests than its allocated bandwidth allows. The distributed admission control approach solves the problems incurred by conventional admission control models. Experimental studies show that our algorithms significantly reduce response latency and the media server load.
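A minimal sketch of the agent-side admission step, assuming each proxy agent tracks its allocated share of the media server's disk bandwidth and may optionally over-admit by a fixed factor (the "aggressive" policy). The class, variable names and the over-admission factor are assumptions; the game-theoretic share negotiation between agents is not modeled here.

```python
# Proxy-agent admission against an allocated disk-bandwidth share (illustrative).
class ProxyAgent:
    def __init__(self, allocated_bw, aggressive_factor=1.0):
        # allocated_bw: disk bandwidth (e.g. MB/s) granted to this agent.
        # aggressive_factor > 1.0 lets the agent admit beyond its allocation.
        self.allocated_bw = allocated_bw
        self.aggressive_factor = aggressive_factor
        self.used_bw = 0.0

    def admit(self, stream_bw):
        """Autonomously admit a stream that needs stream_bw of disk bandwidth."""
        limit = self.allocated_bw * self.aggressive_factor
        if self.used_bw + stream_bw <= limit:
            self.used_bw += stream_bw
            return True
        return False

    def release(self, stream_bw):
        self.used_bw = max(0.0, self.used_bw - stream_bw)

agent = ProxyAgent(allocated_bw=100.0, aggressive_factor=1.2)  # 20% over-admission
print(agent.admit(60.0), agent.admit(50.0), agent.admit(20.0))  # True True False
```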

3.
A Service Admission Control Algorithm Based on QoS Delay Parameters
刘俊  陈昊鹏 《计算机工程》2008,34(19):89-91
To provide QoS guarantees for multiple concurrent requests, a Web service must use some algorithm to control incoming requests. This paper analyzes the components of the QoS delay parameter and proposes a service admission control algorithm based on it. The algorithm admits a newly arriving request only if the QoS delay requirements of the services already admitted can still be met; otherwise the request either waits in a queue until the server can admit it, or the server drops it directly and the client automatically resends it after a timeout.
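A rough Python sketch of the admission rule just described: a new request is admitted only if the projected delays of already-admitted requests still satisfy their delay bounds, otherwise it is queued or dropped. The simple delay model, the queue limit and all names are simplifying assumptions rather than the paper's algorithm.

```python
# Delay-bound admission check for concurrent Web service requests (illustrative).
def projected_delay(service_time, concurrency):
    # Very simple delay model: waiting behind 'concurrency' requests of equal cost.
    return service_time * (concurrency + 1)

def admit(new_req, admitted, queue, service_time=0.05):
    """new_req/admitted entries are dicts with a 'delay_bound' in seconds."""
    future_len = len(admitted) + 1
    # Would every already-admitted request still meet its delay bound?
    ok_existing = all(projected_delay(service_time, future_len) <= r["delay_bound"]
                      for r in admitted)
    ok_new = projected_delay(service_time, future_len) <= new_req["delay_bound"]
    if ok_existing and ok_new:
        admitted.append(new_req)
        return "accepted"
    if len(queue) < 10:              # bounded waiting queue
        queue.append(new_req)
        return "queued"              # the client retries after a timeout
    return "dropped"

admitted, queue = [{"delay_bound": 0.5}], []
print(admit({"delay_bound": 0.2}, admitted, queue))  # -> 'accepted'
```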

4.
Traditional admission control algorithms accept new connections with a fixed bandwidth, which wastes a large amount of idle bandwidth when only a few connections are active. To make full use of the available bandwidth and improve quality of service (QoS), a new admission control algorithm is proposed that uses the BP (back-propagation) algorithm to allocate bandwidth dynamically. It adjusts the bandwidth of admitted application flows in real time according to network conditions, and can noticeably improve QoS when the total bandwidth is fixed and the number of connections is small.

5.
To make more accurate admission control decisions based on the utilization of computing, storage, and network bandwidth resources, a constrained variable-time-window measurement method is designed and a measurement-based multi-resource admission scheme (MBMS) is proposed. Building on the measured load of these resources, a real-time admission control method for streaming media services is implemented. Experimental results show that the method maintains a high service admission rate under heavy load while improving resource utilization under light load.
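The sketch below illustrates the flavor of a measurement-based, multi-resource admission check: a bounded window of recent load samples for CPU, storage and bandwidth is averaged and compared against per-resource admission thresholds. The window policy, thresholds and names are assumptions, not the MBMS algorithm itself.

```python
# Measurement-based multi-resource admission check (illustrative sketch).
from collections import deque

class ResourceMonitor:
    def __init__(self, max_window=20):
        # Bounded window of recent utilization samples (0.0 .. 1.0) per resource.
        self.samples = {r: deque(maxlen=max_window) for r in ("cpu", "storage", "bw")}

    def record(self, cpu, storage, bw):
        for name, value in (("cpu", cpu), ("storage", storage), ("bw", bw)):
            self.samples[name].append(value)

    def load(self, resource):
        window = self.samples[resource]
        return sum(window) / len(window) if window else 0.0

def admit(monitor, demand, thresholds=None):
    """Admit a stream whose per-resource demand keeps every average below threshold."""
    thresholds = thresholds or {"cpu": 0.9, "storage": 0.85, "bw": 0.9}
    return all(monitor.load(r) + demand[r] <= thresholds[r] for r in demand)

mon = ResourceMonitor()
mon.record(0.5, 0.4, 0.6)
mon.record(0.6, 0.5, 0.7)
print(admit(mon, {"cpu": 0.1, "storage": 0.1, "bw": 0.1}))  # -> True
```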

6.
To provide ubiquitous access to the proliferating rich media on the Internet, scalable streaming servers must be able to provide differentiated services to various client requests. Recent advances in transcoding technology make the network-I/O bandwidth usage at the server communication ports controllable by request schedulers on the fly. In this article, we propose a transcoding-enabled bandwidth allocation scheme for service differentiation on streaming servers. It aims to deliver high-bit-rate streams to high-priority request classes without overly compromising low-priority request classes. We investigate the problem of providing differentiated streaming services at the application level in two aspects: stream bandwidth allocation and request scheduling. We formulate the bandwidth allocation problem as an optimization of a harmonic utility function of the stream quality factors and derive the optimal streaming bit rates for requests of different classes under various server load conditions. We prove that the optimal allocation, referred to as harmonic proportional allocation, not only maximizes the system utility function, but also guarantees proportional fair sharing between classes with different prespecified differentiation weights. We evaluate the allocation scheme, in combination with two popular request scheduling approaches, via extensive simulations and compare it with an absolute differentiation strategy and a proportional-share strategy tailored from relative differentiation in networking. Simulation results show that the harmonic proportional allocation scheme meets the objective of relative differentiation over both short and long timescales, greatly enhances service availability, and maintains low queueing delay when the streaming system is highly loaded.
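As a simplified illustration of weight-proportional bit-rate allocation between request classes (deliberately not the paper's harmonic utility formulation), the sketch below splits the server's outgoing bandwidth across classes in proportion to prespecified differentiation weights and the number of active requests, then clips each stream to an assumed transcodable rate range.

```python
# Weight-proportional stream bit-rate allocation (simplified illustration).
def allocate_rates(total_bw, classes, r_min=200, r_max=3000):
    """
    classes: list of dicts {"weight": w_i, "requests": n_i}.
    Returns the per-stream bit rate (kbps) for each class, proportional to its
    weight and clipped to the range the transcoder can produce.
    """
    denom = sum(c["weight"] * c["requests"] for c in classes) or 1.0
    rates = []
    for c in classes:
        per_stream = total_bw * c["weight"] / denom   # class share spread over n_i streams
        rates.append(min(r_max, max(r_min, per_stream)))
    return rates

# Two classes sharing a 100 Mbps (100000 kbps) port: premium weight 2, basic weight 1.
print(allocate_rates(100000, [{"weight": 2, "requests": 30},
                              {"weight": 1, "requests": 40}]))  # -> [2000.0, 1000.0]
```

Note that the ratio of per-stream rates (2000:1000) equals the weight ratio, which is the proportional-differentiation property the abstract refers to; the actual harmonic allocation additionally maximizes a utility function of stream quality factors.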

7.
Free-riding is one of the main challenges of Peer-to-Peer (P2P) streaming systems and degrades video streaming quality. Providing an incentive mechanism that stimulates cooperation is therefore essential for maintaining video Quality of Experience (QoE) in such systems. Among existing mechanisms, payment-based schemes are the most suitable for streaming applications due to their low overhead. However, to date, no dynamic payment mechanism has been proposed that takes the stochastic dynamics of the video streaming ecosystem (e.g., request arrivals, demand submission, bandwidth availability) into account. In this paper, we propose a dynamic token-based payment mechanism in which each peer earns tokens by admitting other peers' requests and spends tokens to submit its own demands to others. This allows peers to dynamically adjust their income level in adaptation to changes in the system state. We propose a Constrained Markov Decision Process (CMDP) formulation in which the goal of each peer is to obtain a request admission policy that minimizes the expected cumulative cost of consumed bandwidth, while satisfying a long-term constraint on the users' Mean Opinion Score (MOS) as the measure of QoE. The proposed admission policy adapts to the request arrival process, the bandwidth state and the token bucket length of each peer. To make up for the lack of design-time knowledge of the system's statistics, each peer is equipped with a model-free algorithm to learn its optimal admission policy over the course of real-time interaction with the system. Simulation results are presented to compare the performance of the proposed algorithm against baseline schemes such as random, token-threshold, bandwidth-threshold and myopic algorithms.
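A toy sketch of the token exchange described above: a peer earns a token whenever it admits another peer's request (bandwidth permitting) and spends a token to submit its own demand. The CMDP-based admission policy and the learning component are not modeled, and all names and the bandwidth check are assumptions.

```python
# Token-earning / token-spending peer in a P2P streaming overlay (toy sketch).
class Peer:
    def __init__(self, upload_bw, tokens=0):
        self.upload_bw = upload_bw   # spare upload bandwidth (e.g. chunks/s)
        self.tokens = tokens

    def handle_request(self, cost_bw):
        """Admit another peer's request if upload bandwidth allows; earn a token."""
        if cost_bw <= self.upload_bw:
            self.upload_bw -= cost_bw
            self.tokens += 1
            return True
        return False                 # request rejected, no token earned

    def submit_demand(self, provider, cost_bw):
        """Spend one token to ask a provider peer for a video chunk."""
        if self.tokens > 0 and provider.handle_request(cost_bw):
            self.tokens -= 1
            return True
        return False

a, b = Peer(upload_bw=4, tokens=1), Peer(upload_bw=4)
print(a.submit_demand(b, cost_bw=2))   # True: a spends a token, b earns one
print(a.tokens, b.tokens)              # 0 1
```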

8.
Video-on-demand services require video broadcast schemes that provide efficient and reliable performance under various client request loads. In this paper, we develop an efficient request-load-adaptive broadcast scheme, the speculative load adaptive streaming (SLAS) scheme, that requires lower service bandwidth than previous approaches, regardless of request rate. We provide both analysis and simulation to show the performance gain over previous schemes, and we derive a theoretical upper bound on the number of continuous segment allocations on channels. We find that the number of segments allocated by SLAS is close to this upper bound over various numbers of stream channels when compared with other schemes. Our analysis of client waiting time closely matches the simulation results across all client requests. By simulation, we compare the required service bandwidth and storage requirements of SLAS with those of other schemes and find that SLAS is an efficient broadcast scheme compared with well-known seamless channel transition schemes.

9.
Haonan  Derek L.  Mary K.   《Performance Evaluation》2002,49(1-4):387-410
Previous analyses of scalable streaming protocols for delivery of stored multimedia have largely focused on how the server bandwidth required for full-file delivery scales as the client request rate increases or as the start-up delay is decreased. This previous work leaves unanswered three questions that can substantively impact the desirability of using these protocols in some application domains, namely:

Are simpler scalable download protocols preferable to scalable streaming protocols in contexts where substantial start-up delays can be tolerated?

If client requests are for (perhaps arbitrary) intervals of the media file rather than the full-file, are there conditions under which streaming is not scalable (i.e., no streaming protocol can achieve sub-linear scaling of required server bandwidth with request rate)?

For systems delivering a large collection of objects with a heavy-tailed distribution of file popularity, can scalable streaming substantially reduce the total server bandwidth requirement, or will this requirement be largely dominated by the required bandwidth for relatively cold objects?

This paper addresses these questions primarily through the development of tight lower bounds on required server bandwidth, under the assumption of Poisson, independent client requests. Implications for other arrival processes are also discussed. Previous work and results presented in this paper suggest that these bounds can be approached by implementable policies. With respect to the first question, the results show that scalable streaming protocols require significantly lower server bandwidth in comparison to download protocols for start-up delays up to a large fraction of the media playback duration. For the second question, we find that in the worst-case interval access model, the minimum required server bandwidth, assuming immediate service to each client, scales as the square root of the request rate. Finally, for the third question, we show that scalable streaming can provide a factor of log K improvement in the total minimum required server bandwidth for immediate service, as the number of objects K is scaled, for systems with fixed minimum object request popularity.


10.
In this paper, we develop an end-to-end analysis of a distributed Video-on-Demand (VoD) system, including an integrated model of the server and network subsystems and an analysis of its impact on client operations. The VoD system provides service to a heterogeneous client base at multiple playback rates. A class-based service model is developed in which an incoming video request can specify the playback rate at which the data is consumed on the client. Using an analytical model, admission control conditions at the server and the network are derived for multi-rate service. We also derive client buffer requirements in the presence of network delay bounds and delay jitter bounds using the same integrated framework of server and network subsystems. Results from extensive simulations show that request handling policies based on limited redirection of blocked requests to other resources perform better than load-sharing policies. The results also show that downgrading the service for blocked requests to a lower bitrate considerably improves VoD system performance. Combining the downgrade option with restrictions on access to high-bitrate request classes is a powerful tool for shaping an incoming request mix into a workload that the VoD system can handle.
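The request-handling idea in the last sentences lends itself to a small sketch: a blocked request is first redirected to a limited number of alternative servers and, failing that, downgraded to the next lower playback-rate class before being rejected. Server names, rates and the redirection limit below are illustrative assumptions, not the paper's policies.

```python
# Redirect-then-downgrade handling of blocked multi-rate VoD requests (illustrative).
def try_admit(server, rate_kbps):
    """Admit if the server has spare streaming capacity for this playback rate."""
    if server["free_kbps"] >= rate_kbps:
        server["free_kbps"] -= rate_kbps
        return True
    return False

def handle_request(servers, rates, requested_rate, max_redirects=2):
    """rates: supported playback rates in descending order, e.g. [3000, 1500, 800]."""
    rate_index = rates.index(requested_rate)
    while rate_index < len(rates):
        # Limited redirection: try at most max_redirects + 1 servers at this rate.
        for server in servers[:max_redirects + 1]:
            if try_admit(server, rates[rate_index]):
                return server["name"], rates[rate_index]
        rate_index += 1              # downgrade to the next lower bitrate class
    return None                      # blocked at every rate on every tried server

servers = [{"name": "s1", "free_kbps": 1000}, {"name": "s2", "free_kbps": 2000}]
print(handle_request(servers, [3000, 1500, 800], 3000))   # -> ('s2', 1500)
```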

11.
With the spread of streaming media applications such as video on demand, video conferencing, video surveillance, and digital libraries, storage resource management on streaming media servers has become one of the bottlenecks constraining service quality. Based on the performance requirements of multimedia servers, a QoS-aware disk scheduling strategy is proposed. It consists of three main components: a probing module, a load monitoring module, and an adaptive management module. The probing module determines whether the current resources can satisfy a service request; the adaptive module dynamically adjusts how each service cycle is divided between real-time requests and best-effort requests, according to the load changes detected by the load monitoring module. Experiments show that this disk scheduling strategy guarantees jitter-free execution of real-time requests while noticeably reducing the response time of non-real-time requests.
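A minimal sketch of the adaptive service-cycle split just described: within each disk service cycle, a fraction of the cycle is reserved for real-time streams and the remainder serves best-effort requests, and the split is nudged according to the monitored real-time load. The cycle length, bounds and update rule are assumptions for illustration.

```python
# Adaptive split of a disk service cycle between real-time and best-effort I/O.
def adapt_split(rt_fraction, rt_load, lo=0.3, hi=0.9, step=0.05):
    """Grow the real-time share when real-time streams need more than the current
    share can serve, and shrink it when they use noticeably less."""
    if rt_load > rt_fraction:
        rt_fraction = min(hi, rt_fraction + step)
    elif rt_load < rt_fraction - step:
        rt_fraction = max(lo, rt_fraction - step)
    return rt_fraction

def schedule_cycle(cycle_ms, rt_fraction):
    """Return (real-time budget, best-effort budget) for the next cycle in ms."""
    rt_budget = cycle_ms * rt_fraction
    return round(rt_budget), round(cycle_ms - rt_budget)

rt_share = 0.5
for measured_rt_load in (0.55, 0.62, 0.40):      # monitored real-time utilization
    rt_share = adapt_split(rt_share, measured_rt_load)
print(round(rt_share, 2), schedule_cycle(1000, rt_share))   # -> 0.55 (550, 450)
```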

12.
Handling a tertiary storage device, such as an optical disk library, in the framework of a disk-based stream service model, requires a sophisticated streaming model for the server, and it should consider the device-specific performance characteristics of tertiary storage. This paper discusses the design and implementation of a video server which uses tertiary storage as a source of media archiving. We have carefully designed the streaming mechanism for a server whose key functionalities include stream scheduling, disk caching and admission control. The stream scheduling model incorporates the tertiary media staging into a disk-based scheduling process, and also enhances the utilization of tertiary device bandwidth. The disk caching mechanism manages the limited capacity of the hard disk efficiently to guarantee the availability of media segments on the hard disk. The admission controller provides an adequate mechanism which decides upon the admission of a new request based on the current resource availability of the server. The proposed system has been implemented on a general-purpose operating system and it is fully operational. The design principles of the server are validated with real experiments, and the performance characteristics are analyzed. The results guide us on how servers with tertiary storage should be deployed effectively in a real environment.

13.
A number of technology and workload trends motivate us to consider the appropriate resource allocation mechanisms and policies for streaming media services in shared cluster environments. We present MediaGuard, a model-based infrastructure for building streaming media services that can efficiently determine the fraction of server resources required to support a particular client request over its expected lifetime. The proposed solution is based on a unified cost function that uses a single value to reflect overall resource requirements such as the CPU, disk, memory, and bandwidth necessary to support a particular media stream, based on its bit rate and whether it is likely to be served from memory or disk. We design a novel, time-segment-based memory model of a media server to efficiently determine, in linear time, whether a request will incur memory or disk access, given the history of previous accesses and the behavior of the server's main-memory file buffer cache. Using the MediaGuard framework, we design two media services: (1) an efficient and accurate admission control service for streaming media servers that accounts for the impact of the server's main-memory file buffer cache, and (2) a shared streaming media hosting service that can efficiently allocate predefined shares of server resources to the hosted media services, while providing performance isolation and QoS guarantees among the hosted services. Our evaluation shows that, relative to a pessimistic admission control policy that assumes all content must be served from disk, MediaGuard (as well as services built using it) delivers a factor of two improvement in server throughput.
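A hedged sketch of the unified-cost idea: the cost of admitting one stream is taken here to be its bit rate scaled by a memory or disk access weight, and a request is admitted only while the summed cost stays within the server's capacity. The weights, capacity units and the crude "recently streamed means cache hit" test are stand-ins for MediaGuard's actual time-segment-based memory model.

```python
# Unified cost-function admission for a streaming media server (illustrative).
MEM_COST_PER_KBPS = 1.0     # serving from the file buffer cache is cheap
DISK_COST_PER_KBPS = 4.0    # serving from disk consumes more server capacity

def stream_cost(bitrate_kbps, served_from_memory):
    weight = MEM_COST_PER_KBPS if served_from_memory else DISK_COST_PER_KBPS
    return bitrate_kbps * weight

def admit(active_cost, bitrate_kbps, recently_streamed, capacity=500000):
    """Admit if the single cost value stays within capacity. A request for a file
    streamed recently is assumed (optimistically) to hit the memory cache."""
    cost = stream_cost(bitrate_kbps, served_from_memory=recently_streamed)
    if active_cost + cost <= capacity:
        return True, active_cost + cost
    return False, active_cost

total = 0.0
ok, total = admit(total, 1500, recently_streamed=True)    # cache hit: cost 1500
print(ok, total)
ok, total = admit(total, 1500, recently_streamed=False)   # disk access: cost 6000
print(ok, total)                                          # True 7500.0
```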

14.
The next generation of wireless cellular networks aims at supporting a diverse range of multimedia services to mobile users with guaranteed quality of service (QoS). Resource allocation and call admission control (CAC) are key management functions in future 3G and 4G cellular networks for providing multimedia applications to mobile users with QoS guarantees and efficient resource utilization. There are two main strategies for radio resource allocation in cellular wireless networks, known as complete partitioning (CP) and complete sharing (CS). In this paper, these strategies are extended for operation in 3G and beyond networks. First, two CS-based call admission controls, referred to herein as queuing priority call admission control (QP-CAC) and hybrid priority call admission control (HP-CAC), and one CP-based call admission control, referred to as complete partitioning call admission control (CP-CAC), are presented. This study then proposes a novel dynamic procedure, referred to as dynamic prioritized uplink call admission control (DP-CAC), designed to overcome the shortcomings of CS- and CP-based CACs. Results indicate the superiority of DP-CAC, as it achieves a better balance between system utilization, revenue, and quality-of-service provisioning. CS-based algorithms achieve the best system utilization and revenue at the expense of serious unfairness toward traffic classes with diverse QoS requirements. DP-CAC manages to attain system utilization and revenue equal to CS-based algorithms without their drawbacks in terms of fairness and service differentiation.

15.
An adaptive admission control scheme for Web services based on online system-model identification with the quantum-behaved particle swarm optimization (QPSO) algorithm is designed and implemented; the parameters of a proportional-integral controller are tuned online as the system model changes. Admission control is realized through an admission-time-ratio feedback mechanism that adjusts the length of time within each control period during which the server admits requests. Simulation experiments and comparisons with several other control methods show that the online-identification adaptive control manages system resources more effectively when the server is overloaded and further improves quality of service.
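A simplified sketch of the admission-time-ratio feedback loop described in the abstract: within each control period the server accepts requests only during the first fraction of the period, and a PI controller adjusts that fraction so that a measured quantity (here, server utilization) tracks its set point. The QPSO-based online model identification is not reproduced, and the gains and signals are assumptions.

```python
# PI-controlled admission-time-ratio feedback (illustrative sketch, no QPSO model).
class AdmissionRatioController:
    def __init__(self, target_util=0.8, kp=0.5, ki=0.1):
        self.target_util = target_util
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.ratio = 1.0            # fraction of each control period during which
                                    # arriving requests are admitted

    def update(self, measured_util):
        """Run once per control period with the utilization measured last period."""
        error = self.target_util - measured_util   # positive -> admit for longer
        self.integral += error
        self.ratio += self.kp * error + self.ki * self.integral
        self.ratio = min(1.0, max(0.0, self.ratio))
        return self.ratio

ctrl = AdmissionRatioController()
for util in (0.95, 0.90, 0.82):     # overloaded server gradually recovering
    print(round(ctrl.update(util), 3))   # -> 0.91, 0.835, 0.798
```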

16.
A cross-layer distributed flow admission control scheme with adaptive bandwidth reservation and multiple QoS guarantees is proposed for 802.11e EDCA. First, the OFDM subcarrier bits of each station are allocated adaptively to maximize channel capacity, and the resulting bit rates are passed across layers to the MAC layer. On this basis, a dynamic bandwidth reservation mechanism based on distributed measurement is proposed, so that the reserved bandwidth adapts to each user's channel and traffic characteristics; a semi-modeled, centrally controlled estimation method for the surplus factor is proposed, which overcomes the inaccuracy of direct measurement and the locality of distributed estimation while reducing computational complexity; and a dual admission criterion based on a protocol model, covering both bandwidth and collision rate, is proposed so that multiple QoS parameters are guaranteed simultaneously. Together, these measures yield top-down adaptive admission control. Simulations show that the proposed admission control mechanism considerably improves resource utilization and better guarantees service quality.

17.
The access strategy of heterogeneous networks is closely tied to the efficiency of network resource management; at the same time, network complexity and competition for network resources directly affect user quality of service. To address the high handover dropping rate, high call blocking rate, and low resource utilization of admission control in heterogeneous networks, a joint call admission control algorithm based on Markov chains is proposed. The algorithm dynamically reserves a certain amount of bandwidth for handover calls and real-time traffic, and sets a bandwidth degradation factor for each traffic class to decide whether to release bandwidth. A utility function for call admission control is constructed according to user preferences and the QoS requirements of different traffic classes, and the system is modeled and analyzed with a Markov chain. Simulations show that the algorithm improves network resource utilization, reduces system complexity, and meets the QoS requirements of all traffic classes.

18.
A file server for continuous media must provide resource guarantees and only admit requests that do not violate the resource availability. This paper addresses the admission performance of a server that explicitly considers the variable bit rate nature of the continuous media streams. A prototype version of the server has been implemented and evaluated in several heterogeneous environments. The two system resources for which admission control is evaluated are the disk bandwidth and the network bandwidth. Performance results from both measurement and simulation are shown with respect to different admission methods and varying scenarios of stream delivery patterns. We show that the vbrSim algorithm developed specifically for the server outperforms the other options for disk admission especially with request patterns that have staggered arrivals, while the network admission control algorithm is able to utilize a large percentage of the network bandwidth available. We also show the interactions between the limits of these two resources and how a system can be configured without wasted capacity on either one of the resources.

19.
《Computer Networks》2002,38(5):631-643
In future wireless multimedia networks, managing user mobility so that real-time multimedia applications keep a seamless connection is one of the most important problems. In this paper we propose an opportunity-cost based approach to adaptive bandwidth reservation with admission control for handover calls, utilizing network traffic information. Excessive reservation guarantees a low blocking probability for handover calls at the cost of a high blocking probability for new calls; according to our survey, however, it may degrade bandwidth utilization, while giving no priority to handover admissions degrades the quality of service (QoS) of ongoing calls. We consider both QoS assurance and bandwidth utilization in order to optimize the amount of bandwidth to reserve for handover admissions. We believe that our scheme could serve as a guideline for cost-effective radio resource allocation in mobile multimedia networks.

20.
张慧  方旭明  袁琴 《软件学报》2011,22(4):736-744
Because wireless spectrum is an extremely limited resource, call admission control (CAC) has become an important part of radio resource management in mobile communication systems. To address the excessive occupation of access resources by streaming media, a CAC strategy based on cooperative game theory is proposed, in which the players are the services currently being served and the new services requesting admission, and the base station acts as the external authority that enforces the agreement, choosing the strategy profile with the largest total utility as the outcome of the game. Simulation results show that the proposed strategy effectively alleviates the resource-capture effect of streaming media traffic and guarantees fairness of user access, which is of practical significance for improving real system performance.
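As a toy illustration of the base station's role described above, the sketch below enumerates candidate strategy profiles (one strategy per in-service or newly arriving flow) and picks the profile with the largest total utility. The strategy sets and utility numbers are invented for illustration and do not reproduce the paper's game formulation.

```python
# Base station choosing the joint strategy with the largest total utility (toy).
from itertools import product

# Strategies: the in-service streaming flow may keep or degrade its rate; the newly
# arriving call may be admitted or blocked. Utilities (streaming, new call) are
# invented: admitting a call alongside a full-rate stream congests the cell.
STRATEGIES = {"streaming_flow": ["keep_full_rate", "degrade_rate"],
              "new_call": ["admit", "block"]}
UTILITY = {("keep_full_rate", "admit"): (2.0, 1.0),   # congestion hurts both
           ("keep_full_rate", "block"): (5.0, 0.0),
           ("degrade_rate",  "admit"):  (3.5, 4.0),
           ("degrade_rate",  "block"):  (3.5, 0.0)}

def base_station_choice():
    """Return the strategy profile maximizing the sum of utilities."""
    profiles = product(STRATEGIES["streaming_flow"], STRATEGIES["new_call"])
    return max(profiles, key=lambda p: sum(UTILITY[p]))

print(base_station_choice())   # -> ('degrade_rate', 'admit'), total utility 7.5
```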
