Similar Articles
1.
In continuous media servers, disk load can be reduced by using a buffer cache. In order to exploit the disk bandwidth saved by caching, a continuous media server must employ an admission control scheme to decide whether a new client can be admitted for service without violating the requirements of clients already being serviced. A scheme providing deterministic QoS guarantees in servers that use caching has already been proposed. However, since deterministic admission control is based on worst-case assumptions, it wastes system resources. If the future available disk bandwidth could be predicted exactly, both high disk utilization and hiccup-free service would be achievable; however, because the caching effect cannot be determined analytically, predicting disk load without substantial computation overhead is difficult. In this paper, we propose a statistical admission control scheme for continuous media servers in which caching is used to reduce disk load. The scheme improves disk utilization and allows more streams to be serviced while maintaining near-deterministic service. The scheme, called Shortsighted Prediction Admission Control (SPAC), combines exact prediction through on-line simulation with statistical estimation using a probabilistic model of future disk load in order to reduce computation overhead. It thereby exploits both the variation in disk load induced by VBR-encoded objects and the decrease in client load due to caching. Trace-driven simulations demonstrate that the scheme provides near-deterministic QoS and keeps disk utilization high.
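A minimal Python sketch of this kind of two-part admission test: an exact check over a short simulated horizon plus a statistical check beyond it. The function names, the Gaussian load model, and all numbers are illustrative assumptions, not the paper's actual SPAC algorithm.

```python
from statistics import NormalDist

def admit(new_stream_rate, simulated_load, mean_load, std_load,
          disk_bandwidth, overflow_prob=0.01):
    """Illustrative admission test: exact over a short horizon, statistical beyond it.

    simulated_load : predicted per-slot disk load (MB/s) from on-line
                     simulation over a short ("shortsighted") horizon.
    mean_load, std_load : parameters of a probabilistic model (assumed
                     Gaussian here) of the disk load beyond that horizon.
    """
    # Exact part: within the simulated horizon, no slot may exceed capacity.
    if any(load + new_stream_rate > disk_bandwidth for load in simulated_load):
        return False

    # Statistical part: beyond the horizon, keep the probability of
    # exceeding the disk bandwidth below overflow_prob.
    z = NormalDist().inv_cdf(1.0 - overflow_prob)
    predicted_peak = mean_load + z * std_load
    return predicted_peak + new_stream_rate <= disk_bandwidth


# Example: a 2 MB/s stream on a 100 MB/s disk with a moderately loaded server.
print(admit(2.0, simulated_load=[60.0, 72.5, 68.0],
            mean_load=65.0, std_load=8.0, disk_bandwidth=100.0))   # True
```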

2.
Handling a tertiary storage device, such as an optical disk library, within a disk-based stream service model requires a sophisticated streaming model for the server, one that takes the device-specific performance characteristics of tertiary storage into account. This paper discusses the design and implementation of a video server which uses tertiary storage as a source of media archiving. We have carefully designed the streaming mechanism for a server whose key functionalities include stream scheduling, disk caching and admission control. The stream scheduling model incorporates tertiary media staging into a disk-based scheduling process, and also enhances the utilization of tertiary device bandwidth. The disk caching mechanism manages the limited capacity of the hard disk efficiently to guarantee the availability of media segments on the hard disk. The admission controller provides an adequate mechanism which decides upon the admission of a new request based on the current resource availability of the server. The proposed system has been implemented on a general-purpose operating system and is fully operational. The design principles of the server are validated with real experiments, and the performance characteristics are analyzed. The results guide us on how servers with tertiary storage should be deployed effectively in a real environment.

3.
Video can naturally be encoded in a multi-resolution format. A multi-resolution, or scalable, video stream is a video sequence encoded such that subsets of the full-resolution bit stream can be decoded to recreate lower-resolution video streams. Employing scalable video enables a video server to provide multiple-resolution services to a variety of clients with different decoding capabilities and network bandwidths. The inherent advantages of a multi-resolution video server include heterogeneous client support, storage efficiency, adaptable service, and support for interactive operations. In designing a video server, several issues should be dealt with under a unified framework, including data placement/retrieval, buffer management, and admission control schemes for deterministic service guarantees. In this paper, we present a general framework for designing a large-scale multi-resolution video server. First, we propose a general multi-resolution video stream model which can be implemented by various scalable compression techniques. Second, given the proposed stream model, we devise a hybrid data placement scheme to store scalable video data across the disks in the server. The scheme exploits both the concurrency and the parallelism offered by striping data across the disks and achieves disk load balancing for service at any resolution. Next, the retrieval of multi-resolution video is described. The deterministic access property of the placement scheme permits the retrieval scheduling to be performed on each disk independently and supports interactive operations (e.g. pause, resume, slow playback, fast-forward and rewind) simply by reconstructing the input parameters to the scheduler. We also present an efficient admission control algorithm which precisely estimates the actual disk workload for the requested resolution services and hence permits a much smaller buffer requirement. The proposed schemes are verified through detailed simulation and implementation.
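A minimal Python sketch of one plausible striped placement for layered video: consecutive (segment, layer) blocks are laid out round-robin across the disks, so playback at any resolution spreads its reads over the array. The round-robin layout and names are illustrative assumptions, not the paper's exact hybrid scheme.

```python
def place_segments(num_disks, num_segments, num_layers):
    """Map each (segment, layer) block of a scalable video to a disk.

    Consecutive blocks are laid out round-robin so that playback at any
    resolution (any prefix of the layers) spreads its reads over the array.
    """
    placement = {}
    for seg in range(num_segments):
        for layer in range(num_layers):
            placement[(seg, layer)] = (seg * num_layers + layer) % num_disks
    return placement


def disks_touched(placement, segment, resolution_layers):
    """Disks that must be read to play `segment` at the given resolution."""
    return sorted({placement[(segment, layer)]
                   for layer in range(resolution_layers)})


layout = place_segments(num_disks=6, num_segments=12, num_layers=3)
print(disks_touched(layout, segment=0, resolution_layers=2))   # [0, 1]
print(disks_touched(layout, segment=1, resolution_layers=2))   # [3, 4]
```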

4.
Conventional admission control models incur some performance penalty. First, admission control computation can overload a server that is already heavily loaded. Also, in large-scale media systems with geographically distributed server clusters, performing admission control at each cluster can result in long response latency if the client request is denied at one site and has to be forwarded to another site. Furthermore, in prefix caching, initial frames cached at the proxy are delivered to the client before the admission decisions are made; if the media server is heavily loaded and finally has to deny the client request, forwarding a large number of initial frames is a waste of critical network resources. In this paper, a novel distributed admission control model is presented. We make use of proxy servers to perform the admission control tasks. Each proxy hosts an agent to coordinate the effort. Agents reserve the media server's disk bandwidth and make admission decisions autonomously based on the allocated disk bandwidth. We develop an effective game-theoretic framework to achieve fairness in the bandwidth allocation among the agents. To improve the overall bandwidth utilization, we also consider an aggressive admission control policy in which each agent may admit more requests than its allocated bandwidth allows. The distributed admission control approach provides a solution to the stated problems of conventional admission control models. Experimental studies show that our algorithms significantly reduce the response latency and the media server load.
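A minimal Python sketch of the agent-side idea described above: each proxy-resident agent admits requests autonomously against its own reserved share of the server's disk bandwidth, optionally overbooking under an aggressive policy. The class, the overbooking factor, and the numbers are illustrative assumptions.

```python
class AdmissionAgent:
    """Proxy-resident agent that admits requests autonomously against
    the disk bandwidth it has reserved at the media server."""

    def __init__(self, reserved_bandwidth, aggressive_factor=1.0):
        self.reserved = reserved_bandwidth     # MB/s allocated to this agent
        self.aggressive = aggressive_factor    # > 1.0 enables overbooking
        self.in_use = 0.0

    def admit(self, stream_rate):
        """Admit the request if it fits the (possibly overbooked) allocation."""
        if self.in_use + stream_rate <= self.reserved * self.aggressive:
            self.in_use += stream_rate
            return True
        return False                           # caller may redirect elsewhere

    def release(self, stream_rate):
        self.in_use = max(0.0, self.in_use - stream_rate)


agent = AdmissionAgent(reserved_bandwidth=40.0, aggressive_factor=1.1)
print(agent.admit(2.0))   # True: decided at the proxy, no server round trip
```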

5.
Multicast is an efficient way to distribute data to multiple receivers simultaneously. A hierarchical multicast caching strategy supports client-terminal mobility well: it allows roaming clients to request data while moving between access networks and satisfies mobile clients' data requests to the greatest extent possible. To improve the performance of hierarchical multicast caching, this paper analyzes such hierarchical multicast caches and proposes an iSCSI-based multicast caching strategy that resolves several of the problems present in existing hierarchical multicast caching mechanisms.

6.
Integrated buffering schemes for P2P VoD services
How to improve the scalability and QoS of peer-to-peer on-demand streaming systems based on unstructured overlays is still an open problem. Researchers have proposed memory-based buffering schemes to achieve these goals. Considering the limited memory available on a single peer, a new caching strategy that integrates memory caching with disk caching is proposed to make full use of peers' memory, disk and bandwidth resources. Under the new strategy, peers request media data from their neighbors in the overlay, buffer the freshest part in memory slots and the already-watched part on free local disk space, which enlarges the capacity available for buffering media data. Experimental results show that the new caching strategy greatly improves the service capacity and QoS of the whole system: the load on the media server is clearly alleviated and the playback continuity of the media data is clearly improved.
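A minimal Python sketch of the integrated buffering idea: fresh chunks go into a bounded set of memory slots, and chunks displaced after being watched spill to a local disk cache instead of being discarded. Class and parameter names are illustrative assumptions.

```python
from collections import OrderedDict

class IntegratedBuffer:
    """Two-level peer buffer: memory slots for fresh chunks,
    local disk space for already-watched chunks."""

    def __init__(self, memory_slots, disk_slots):
        self.memory = OrderedDict()    # chunk_id -> data, oldest first
        self.disk = OrderedDict()
        self.memory_slots = memory_slots
        self.disk_slots = disk_slots

    def store(self, chunk_id, data):
        """Buffer a freshly downloaded chunk in memory."""
        self.memory[chunk_id] = data
        if len(self.memory) > self.memory_slots:
            # The oldest (already watched) chunk spills to the disk cache.
            old_id, old_data = self.memory.popitem(last=False)
            self.disk[old_id] = old_data
            if len(self.disk) > self.disk_slots:
                self.disk.popitem(last=False)   # drop the oldest on disk

    def lookup(self, chunk_id):
        """Serve a neighbour's request from memory, then disk, else miss."""
        if chunk_id in self.memory:
            return self.memory[chunk_id]
        return self.disk.get(chunk_id)          # None means ask the server


buf = IntegratedBuffer(memory_slots=2, disk_slots=4)
for i in range(5):
    buf.store(i, b"chunk%d" % i)
print(buf.lookup(0) is not None)   # True: an old chunk survived on disk
```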

7.
Lin  Law Sie  Yong Khai   《Computer Communications》2006,29(18):3780-3788
In a Video-on-Demand (VoD) system, sufficient resources (such as disk I/O bandwidth and network bandwidth) have to be reserved in advance in order to guarantee smooth playback of a video stream. Thus, given limited resources, the number of simultaneous streams that can be supported by a video server is restricted. Due to its mechanical nature, the disk I/O subsystem is generally the performance bottleneck of a VoD system, and a number of caching algorithms have been proposed to overcome the disk bandwidth limitation. In this paper, we propose a novel caching strategy, referred to as the client-assisted interval caching (CIC) scheme, to balance the requirements of I/O bandwidth and cache capacity in a cost-effective way. The CIC scheme uses the cache memory available at clients to serve the first few blocks of streams, so as to dramatically reduce the demand on the I/O bandwidth of the server. Our objective is to maximize the number of requests that can be supported by the system and to minimize the overall system cost. Simulations are carried out to study the performance of the proposed strategy under various conditions. The experimental results show the superiority of the CIC scheme over the traditional Interval Caching (IC) scheme with respect to request acceptance ratio and average servicing cost per stream.
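A minimal Python sketch of the client-assisted idea: when the first blocks of a requested video are held in a previous client's cache, only the remainder of the stream consumes server disk bandwidth. The data structures, the amortized bandwidth accounting, and the numbers are illustrative simplifications, not the paper's exact CIC algorithm.

```python
def admit_request(video_id, stream_rate, client_cached_prefixes,
                  server_io_free, prefix_fraction=0.1):
    """Illustrative admission test with client-assisted prefix serving.

    client_cached_prefixes : set of video ids whose first blocks are held
        in the memory of clients that recently watched them.
    prefix_fraction : share of the stream covered by the client-held prefix;
        as a simplification, the server's reserved bandwidth is reduced by
        the same (amortized) share.
    Returns (admitted, disk_bandwidth_reserved).
    """
    if video_id in client_cached_prefixes:
        needed = stream_rate * (1.0 - prefix_fraction)   # suffix only
    else:
        needed = stream_rate                             # whole stream from disk
    if needed <= server_io_free:
        return True, needed
    return False, 0.0


print(admit_request("movie42", 4.0, {"movie42"}, server_io_free=3.8))
# (True, 3.6): admitted only because the prefix comes from a client cache
```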

8.
Cloud-based video on demand (VOD) is a promising next-generation media streaming service paradigm. Since it is a resource-intensive application, maximizing resource utilization is a key issue in designing such a system. Because of the particular cloud-based VOD architecture, which consists of a cloud storage cluster and a media server cluster, existing techniques such as traditional caching strategies cannot be adopted by a cloud-based VOD system directly in practice. Therefore, in this study, we propose a systemic caching scheme which seamlessly integrates a caching algorithm and a cache deployment algorithm to maximize the resource utilization of the cloud-based VOD system. First, we propose a cloud-based caching algorithm. The algorithm models the cloud-based VOD system as a multi-constraint optimization problem so as to balance resource utilization between the cloud storage cluster and the media server cluster. Second, we propose a cache deployment algorithm. This algorithm further manages bandwidth and cache-space utilization inside the media server cluster in a more fine-grained manner and achieves load balancing. Our evaluation results show that the proposed scheme enhances the resource utilization of the cloud-based VOD system under resource-constrained conditions and reduces the rejection ratio of user requests.

9.
Key Factors Affecting the Performance of Multimedia Servers
When building a large-scale video service system, an architecture based on hierarchical multi-server clusters has outstanding advantages in throughput, scalability, and cost, and is especially suitable for Internet applications. However, to fully exploit and improve the performance of a video service system, a series of problems centered on the main bottlenecks (such as server disk I/O bandwidth and network bandwidth) must be solved. This paper analyzes the main factors affecting the performance of multimedia video servers, including the server architecture, the data transfer mode between server and clients, the distribution and placement of media data in the server's storage subsystem, the scheduling of disk access requests, cache management within a single server and cooperative caching among multiple servers, admission control policies, and stream scheduling policies; all of these have a great impact on server performance and throughput. The paper also introduces performance optimization techniques suitable for large-scale video service systems, such as broadcasting and batching stream scheduling policies. Only by considering these factors comprehensively when building a video server system can the throughput of the server, and indeed of the whole video service system, be genuinely improved while satisfying clients' QoS requirements.

10.
To maintain load balance in networked storage and to avoid unrecoverable losses in case of node or disk failures, a coded caching scheme for distributed network storage based on a balanced data placement strategy is proposed, with different solutions for large caches and small caches. First, the Maddah scheme is extended to a multi-server system: combined with the balanced data placement strategy, each file is stored as a single unit on a data server, which addresses the large-cache case. Then, the interference elimination scheme is extended to the multi-server system; interference elimination is used to reduce the peak cache rate and, combined with balanced data placement, a linear combination of cache segments is proposed to address the small-cache case. Finally, simulations are carried out with the Linux-based NS2 simulator on systems with one and with two parity-check servers. The simulation results show that the proposed scheme effectively reduces the peak transmission rate and outperforms two other recent caching schemes. In addition, although distributed storage limits the ability to combine content from different servers into a single message, which costs the coded caching scheme some performance, the inherent redundancy in a distributed storage system can be fully exploited to improve the performance of the storage system.

11.
With the wide availability of high-speed network access, we are experiencing high-quality streaming media delivery over the Internet. The emergence of ubiquitous computing enables mobile users to access the Internet with their laptops, PDAs, or even cell phones. When nomadic users connect to the network via wireless links or phone lines, high-quality video transfer can be problematic due to long delays or a size mismatch between the application display and the screen. Our proposed solution to this problem is to equip network proxies with transcoding capability and hence provide different, appropriate video quality to different network environments. The proxies in our transcoding-enabled caching (TeC) system perform transcoding as well as caching for efficient rich-media delivery to heterogeneous network users. This design choice allows us to perform content adaptation at the network edges. We propose three different TeC caching strategies. We describe each algorithm and discuss its merits and shortcomings. We also study how the user access pattern affects the performance of the TeC caching algorithms and compare them with other approaches. We evaluate TeC performance by conducting two types of simulation: the first experiment uses synthesized traces, while the other uses real traces derived from the logs of an enterprise media server. The results indicate that, compared with traditional network caches and with marginal transcoding load, TeC improves cache effectiveness, decreases the user-perceived latency, and reduces the traffic between the proxy and the content origin server.
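A minimal Python sketch of a transcoding-enabled cache lookup of the kind described above: a request for a given bitrate is served from an exact cached version, transcoded down from a cached higher-bitrate version, or fetched from the origin. The cache layout and function names are illustrative assumptions, not TeC's actual strategies.

```python
def serve(cache, video_id, requested_kbps, fetch_from_origin, transcode):
    """Serve `video_id` at `requested_kbps` from a transcoding-enabled cache.

    cache : dict mapping video_id -> {bitrate_kbps: content}
    """
    versions = cache.get(video_id, {})
    if requested_kbps in versions:                       # exact hit
        return versions[requested_kbps]

    higher = [b for b in versions if b > requested_kbps]
    if higher:                                           # transcode hit
        src = min(higher)                                # cheapest source to downscale
        content = transcode(versions[src], requested_kbps)
    else:                                                # miss: go to the origin
        content = fetch_from_origin(video_id, requested_kbps)

    versions[requested_kbps] = content                   # cache the new version
    cache[video_id] = versions
    return content


# Tiny usage example with stand-in origin/transcoder functions.
cache = {"clip": {1500: "clip@1500kbps"}}
origin = lambda vid, kbps: f"{vid}@{kbps}kbps(origin)"
downscale = lambda content, kbps: f"{content}->@{kbps}kbps"
print(serve(cache, "clip", 300, origin, downscale))      # transcoded from 1500 kbps
```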

12.
A file server for continuous media must provide resource guarantees and only admit requests that do not violate resource availability. This paper addresses the admission performance of a server that explicitly considers the variable bit rate nature of continuous media streams. A prototype version of the server has been implemented and evaluated in several heterogeneous environments. The two system resources for which admission control is evaluated are disk bandwidth and network bandwidth. Performance results from both measurement and simulation are shown with respect to different admission methods and varying scenarios of stream delivery patterns. We show that the vbrSim algorithm, developed specifically for the server, outperforms the other options for disk admission, especially with request patterns that have staggered arrivals, while the network admission control algorithm is able to utilize a large percentage of the available network bandwidth. We also show the interactions between the limits of these two resources and how a system can be configured without wasted capacity on either resource. Copyright © 2004 John Wiley & Sons, Ltd.
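A minimal Python sketch in the spirit of a simulation-based VBR disk admission test: the server simulates future rounds and admits only if the aggregate per-round block demand, helped by a read-ahead buffer, never exceeds what the disk can deliver. This is an illustrative simplification, not the actual vbrSim algorithm.

```python
from itertools import zip_longest

def can_admit(existing_schedules, new_schedule, disk_blocks_per_round,
              buffer_blocks):
    """Simulate future rounds; admit only if the disk plus read-ahead
    buffer can always keep up with the aggregate VBR demand.

    Each schedule is a list of blocks the stream must consume per round.
    """
    schedules = existing_schedules + [new_schedule]
    buffered = 0                                    # blocks read ahead so far
    for round_demands in zip_longest(*schedules, fillvalue=0):
        demand = sum(round_demands)
        # The disk reads up to its capacity this round; read-ahead may
        # cover a shortfall, and any surplus refills the buffer.
        available = disk_blocks_per_round + buffered
        if demand > available:
            return False                            # a hiccup would occur
        buffered = min(buffer_blocks, available - demand)
    return True


existing = [[8, 12, 6, 10], [9, 9, 9, 9]]           # two admitted VBR streams
print(can_admit(existing, [5, 7, 9, 4],
                disk_blocks_per_round=25, buffer_blocks=10))   # True
```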

13.
Efficient delivery of streaming media over the Internet is fundamental to popularizing applications such as video-on-demand. Existing schemes only consider prefix caching with a single-proxy architecture and server scheduling to reduce backbone bandwidth consumption and server load. Building on batch patching with prefix caching, this paper proposes ICBR, a dynamic suffix caching algorithm, together with a multi-cache cooperation architecture and a cooperation algorithm, MCC, based on ICBR. Simulation results show that ICBR-based multi-cache cooperation significantly reduces the backbone bandwidth consumed in fetching patches, improves client QoS, and also reduces server load.

14.
In this paper, we propose a new multicast scheme based on client-initiated-with-prefetching (CIWP) and peer-to-peer (P2P) transfer of a partial multimedia stream. In the CIWP scheme, when a new client joins an ongoing multicast channel, the server has to create an extra unicast channel to retransmit the partial stream that has already been transmitted. However, this unicast channel consumes some of the I/O bandwidth of the server, as well as some of the network resources between the server and the client's Internet Service Provider (ISP). To solve this problem, we propose using a P2P transfer algorithm to deliver the partial stream to the newcomer from a client that has already joined the ongoing multicast session. This P2P transfer is limited to clients belonging to the same ISP. To further improve performance, a threshold is used to control the P2P transfer. Analytical studies show that the proposed multicast scheme can reduce the consumption of the server's network resources by utilizing the clients' disk space. We also performed various simulation studies to demonstrate the performance improvement in terms of the use of the server's bandwidth and the waiting time for the clients' requests.
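A minimal Python sketch of the patch-delivery decision described above: a newcomer joins the ongoing multicast, and the missed prefix (patch) is fetched from a same-ISP peer when one holds it and the patch is short enough, falling back to a server unicast otherwise. Names and the threshold semantics are illustrative assumptions.

```python
def patch_source(join_time, multicast_start, peers_same_isp,
                 threshold_seconds):
    """Choose who delivers the missed prefix to a newcomer.

    peers_same_isp : peers in the newcomer's ISP, each with the length
                     (in seconds) of the prefix it has buffered.
    Returns ("peer", peer_id) or ("server-unicast", None).
    """
    patch_len = join_time - multicast_start          # seconds already missed
    if patch_len <= threshold_seconds:
        # Look for a same-ISP peer that has buffered at least the patch.
        for peer_id, buffered_seconds in peers_same_isp.items():
            if buffered_seconds >= patch_len:
                return "peer", peer_id               # no server I/O consumed
    return "server-unicast", None                    # fall back to the server


peers = {"peerA": 30.0, "peerB": 90.0}
print(patch_source(join_time=100.0, multicast_start=40.0,
                   peers_same_isp=peers, threshold_seconds=120.0))
# ('peer', 'peerB'): the 60 s patch comes from a peer in the same ISP
```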

15.
Video Data Layout on Parallel Disk Arrays
This paper analyzes the performance requirements, architecture, data layout, and admission control issues of large-scale video storage servers, and proposes a multi-segment video storage scheme on parallel disk arrays that satisfies users' multiple QoS requirements and is based on three-layer scalable MPEG-2 video coding. Compared with existing schemes such as balanced placement and periodic placement, it offers better flexibility and higher scalability.

16.
In a video-on-demand server, resource reservation is needed for continuous delivery. Hence, any given server can serve only a fixed maximum number of clients. Different videos can be placed on different disks or disk-array groups. Since the access rates of various movies are not uniform, load imbalance can occur among the disks in the system. In this paper, we propose a dynamic policy that replicates segments of files to balance the load across the disks. Using simulation, we show that the proposed policy is responsive to quick load surges and is superior to a policy based on the static replication of hot movies.
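A minimal Python sketch of one dynamic replication step consistent with the idea above: when disk loads diverge too far, a heavily requested segment on the hottest disk is replicated onto the coolest disk so that future requests can be spread. The imbalance test and data structures are illustrative assumptions, not the paper's policy.

```python
def rebalance_step(disk_load, segment_popularity, replicas, imbalance_ratio=1.5):
    """Replicate one hot segment from the most loaded disk to the least
    loaded one when the load imbalance exceeds `imbalance_ratio`.

    disk_load          : disk_id -> current number of streams served
    segment_popularity : (disk_id, segment_id) -> request rate
    replicas           : segment_id -> set of disks holding a copy (updated in place)
    """
    hot = max(disk_load, key=disk_load.get)
    cold = min(disk_load, key=disk_load.get)
    # The +1 keeps the test meaningful when the cold disk is idle.
    if disk_load[hot] < imbalance_ratio * (disk_load[cold] + 1):
        return None                                       # balanced enough

    # Most popular segment on the hot disk that the cold disk does not hold yet.
    candidates = [(rate, seg) for (disk, seg), rate in segment_popularity.items()
                  if disk == hot and cold not in replicas.get(seg, set())]
    if not candidates:
        return None
    _, seg = max(candidates)
    replicas.setdefault(seg, {hot}).add(cold)             # create the new copy
    return seg, hot, cold


loads = {"d0": 20, "d1": 4}
popularity = {("d0", "s1"): 9.0, ("d0", "s2"): 3.5, ("d1", "s3"): 1.0}
reps = {"s1": {"d0"}, "s2": {"d0"}, "s3": {"d1"}}
print(rebalance_step(loads, popularity, reps))            # ('s1', 'd0', 'd1')
```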

17.
By effectively harnessing networked computing resources, the two-tier client-server model has been used to support shared data access. In systems based on this approach, the database servers often become performance bottlenecks when the number of concurrent users is large. Client data caching techniques have been proposed in order to ease resource contention at the servers. The key theme of these techniques is the exploitation of locality in user data access. In this paper, we propose a three-tier model that takes advantage of such data access locality to furnish a much more scalable system. Groups of clients that demonstrate similarities in their data access behavior are logically clustered together. Each such group of clients is handled by an Intermediate Cluster Manager (ICM) that acts as a cluster-wide directory service and cache manager. Clients within the same cluster are then capable of sharing data among themselves without interacting with the server(s). This results in reduced server load and allows a much larger number of clients to be supported. Through prototyping and experimentation, we show that the logical clustering of clients, and the introduction of the ICM layer, significantly improve system scalability as well as transaction response times. Logical clusters, consisting of clients with similar data access patterns, are identified with the help of both a greedy algorithm and a genetic algorithm; for the latter, we have developed an encoding scheme and its corresponding operators.
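A minimal Python sketch of a greedy grouping of clients by access-pattern similarity (Jaccard overlap of the objects they touch), in the spirit of the logical clustering described above. The similarity measure, threshold, and names are illustrative assumptions rather than the paper's algorithms.

```python
def jaccard(a, b):
    """Similarity of two clients' accessed-object sets."""
    return len(a & b) / len(a | b) if (a or b) else 0.0


def greedy_cluster(access_sets, threshold=0.3):
    """Greedily assign each client to the existing cluster whose combined
    access set it resembles most; open a new cluster (a new ICM) otherwise.

    access_sets : client_id -> set of object ids the client accesses.
    Returns a list of clusters, each a list of client ids.
    """
    clusters = []          # list of (members, union_of_access_sets)
    for client, objs in access_sets.items():
        best, best_sim = None, 0.0
        for idx, (members, union_objs) in enumerate(clusters):
            sim = jaccard(objs, union_objs)
            if sim > best_sim:
                best, best_sim = idx, sim
        if best is not None and best_sim >= threshold:
            clusters[best][0].append(client)
            clusters[best][1].update(objs)
        else:
            clusters.append(([client], set(objs)))
    return [members for members, _ in clusters]


accesses = {"c1": {1, 2, 3}, "c2": {2, 3, 4}, "c3": {7, 8}, "c4": {8, 9}}
print(greedy_cluster(accesses))   # [['c1', 'c2'], ['c3', 'c4']]
```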

18.
The performance of striped disk arrays is governed by two parameters: the stripe unit size and the degree of striping. In this paper, we describe techniques for determining the stripe unit size and the degree of striping for disk arrays storing variable bit rate continuous media data. We present an analytical model to determine the optimal stripe unit size in redundant and non-redundant disk arrays, and then use the model to study the effect of various system parameters on the optimal stripe unit size. To determine the degree of striping, we first demonstrate that striping a continuous media stream across all disks in the array causes the number of clients supported to increase only sub-linearly as the number of disks grows. To overcome this limitation, we propose a technique that partitions the disk array and stripes each media stream across a single partition. We then propose an analytical model to determine the optimal partition size and maximize the number of clients supported by the array.

19.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and decrease network bandwidth consumption, so many research papers have focused on improving web caching performance through efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism; however, a number of technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses object clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data-grouping overhead. It takes into consideration the access-speed disparity between memory and disk and replaces expensive small disk I/Os with fewer large ones. In case a client request cannot be fulfilled by the server from memory, the system fetches the whole cluster containing the required object into memory, so that future requests for other objects in the same cluster can be satisfied directly from memory and slow disk I/Os are avoided. Tulip also introduces a simple and efficient data duplication algorithm: little maintenance work needs to be done when servers join, leave, or fail. Together with the local caching strategy, Tulip achieves better fault tolerance and load balancing capability at minimal cost. Our simulation results show that Tulip has better performance than previous approaches.
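A minimal Python sketch of the cluster-granularity idea: objects are grouped into clusters, a cluster id is hashed to a proxy, and a miss pulls the whole cluster into that proxy's memory so that later requests for co-accessed objects hit memory instead of disk. The hashing and cache structures are illustrative assumptions, not Tulip's implementation.

```python
import hashlib

def server_for(cluster_id, servers):
    """Hash a cluster id to one of the proxy servers."""
    digest = hashlib.md5(cluster_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]


class ClusterCache:
    """Per-proxy memory cache that loads whole object clusters at once."""

    def __init__(self, load_cluster_from_disk):
        self.memory = {}                       # cluster_id -> {obj_id: data}
        self.load_cluster_from_disk = load_cluster_from_disk

    def get(self, cluster_id, obj_id):
        if cluster_id not in self.memory:
            # One large disk read brings in the entire cluster.
            self.memory[cluster_id] = self.load_cluster_from_disk(cluster_id)
        return self.memory[cluster_id].get(obj_id)


servers = ["proxy0", "proxy1", "proxy2"]
print(server_for("news-site-frontpage", servers))        # deterministic choice

disk = {"news-site-frontpage": {"index.html": "...", "logo.png": "..."}}
cache = ClusterCache(lambda cid: disk[cid])
cache.get("news-site-frontpage", "index.html")           # one big disk read
print(cache.get("news-site-frontpage", "logo.png"))      # served from memory
```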

20.
Caching and scheduling in NAD-based multimedia servers
Multimedia-on-demand (MOD) applications have grown dramatically in popularity, especially in the domains of education, business, and entertainment. Current MOD servers waste precious resources in performing store-and-forward copying. This excessive overhead increases cost and severely limits the scalability of these servers. In this paper, we propose using the network-attached disk (NAD) architecture to design highly scalable and cost-effective MOD servers. In order to ensure enhanced performance, we propose a scheme, called distributed interval caching (DIG), which utilizes the on-disk buffers for caching intervals between successive streams. We also propose another scheme, called multiobjective scheduling (MOS), which increases the degree of resource sharing by intelligently scheduling the requests waiting for service. We then integrate the two schemes and study the overall performance benefits through extensive simulation. The results demonstrate that the integrated policy works very well in increasing the number of customers that can be serviced concurrently while decreasing their waiting times for service. The performance benefits vary with several architectural, system workload, and scheduling parameters. We conclude this study by developing an analytical model for ideal DIG in order to estimate the performance limits that may be achieved through various optimizations.
