Similar Documents
1.
A number of technology and workload trends motivate us to consider the appropriate resource allocation mechanisms and policies for streaming media services in shared cluster environments. We present MediaGuard – a model-based infrastructure for building streaming media services – that can efficiently determine the fraction of server resources required to support a particular client request over its expected lifetime. The proposed solution is based on a unified cost function that uses a single value to reflect overall resource requirements such as the CPU, disk, memory, and bandwidth necessary to support a particular media stream based on its bit rate and whether it is likely to be served from memory or disk. We design a novel, time-segment-based memory model of a media server to efficiently determine in linear time whether a request will incur memory or disk access when given the history of previous accesses and the behavior of the server's main memory file buffer cache. Using the MediaGuard framework, we design two media services: (1) an efficient and accurate admission control service for streaming media servers that accounts for the impact of the server's main memory file buffer cache, and (2) a shared streaming media hosting service that can efficiently allocate the predefined shares of server resources to the hosted media services, while providing performance isolation and QoS guarantees among the hosted services. Our evaluation shows that, relative to a pessimistic admission control policy that assumes that all content must be served from disk, MediaGuard (as well as services that are built using it) delivers a factor-of-two improvement in server throughput.
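A minimal sketch of the kind of unified cost-based admission check described above; the weight on disk versus memory access, the capacity value, and all names here are illustrative assumptions rather than MediaGuard's actual cost function.

```python
# Hypothetical sketch of cost-based admission control in the spirit of MediaGuard.
# The weights, capacity value, and memory-vs-disk classification are illustrative only.

def stream_cost(bit_rate_kbps: float, served_from_memory: bool) -> float:
    """Collapse CPU/disk/memory/bandwidth needs into a single scalar cost."""
    base = bit_rate_kbps / 1000.0                     # bandwidth term
    io_penalty = 0.2 if served_from_memory else 1.0   # disk access is costlier
    return base * (1.0 + io_penalty)

class AdmissionController:
    def __init__(self, capacity: float):
        self.capacity = capacity    # total server "cost units"
        self.used = 0.0

    def try_admit(self, bit_rate_kbps: float, served_from_memory: bool) -> bool:
        cost = stream_cost(bit_rate_kbps, served_from_memory)
        if self.used + cost <= self.capacity:
            self.used += cost       # reserve resources for the stream's lifetime
            return True
        return False                # reject: admitting would overload the server

ctrl = AdmissionController(capacity=100.0)
print(ctrl.try_admit(1500, served_from_memory=True))   # likely admitted
```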

2.
The performance evaluation of large file systems, such as storage and media streaming, motivates scalable generation of representative traces. We focus on two key characteristics of traces, popularity and temporal locality. The common practice of using a system-wide distribution obscures per-object behavior, which is important for system evaluation. We propose a model based on delayed renewal processes which, by sampling interarrival times for each object, accurately reproduces popularity and temporal locality for the trace. A lightweight version reduces the dimension of the model with statistical clustering. It is workload-agnostic and object type-aware, suitable for testing emerging workloads and ‘what-if’ scenarios. We implemented a synthetic trace generator and validated it using: (1) a Big Data storage (HDFS) workload from Yahoo!, (2) a trace from a feature animation company, and (3) a streaming media workload. Two case studies in caching and replicated distributed storage systems show that our traces produce application-level results similar to the real workload. The trace generator is fast and readily scales to a system of 4.3 million files. It outperforms existing models in terms of accurately reproducing the characteristics of the real trace.
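A toy illustration of per-object interarrival sampling in the spirit of the model described above; the exponential interarrival distribution, Zipf-like popularity, and parameter values are assumptions, not the paper's delayed renewal model or its clustering step.

```python
# Toy per-object trace generator: interarrival times are sampled independently
# for each object, so popularity and temporal locality are per-object properties.
# Exponential interarrivals and Zipf-like popularity are illustrative assumptions.
import heapq
import random

def generate_trace(num_objects=100, duration=3600.0, zipf_s=1.0, base_rate=0.05):
    rates = [base_rate / (i + 1) ** zipf_s for i in range(num_objects)]  # per-object rate
    events = []
    for obj, rate in enumerate(rates):
        t = random.expovariate(rate)          # first arrival of this object
        while t < duration:
            heapq.heappush(events, (t, obj))
            t += random.expovariate(rate)     # next interarrival
    return [heapq.heappop(events) for _ in range(len(events))]  # time-ordered trace

trace = generate_trace()
print(trace[:5])   # first few (timestamp, object_id) requests
```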

3.
尹磊  刘云龙  曾晋 《软件》2012,33(4):55-57,60
Today, many media service providers use cloud technology to deliver streaming media cloud services to their users. Although cloud services make on-demand access to streaming content more convenient, user operations remain only weakly automated: users lack intelligent support for choosing media files, selecting playback devices, and connecting to servers, and existing systems do not support resuming playback from a breakpoint. Using the plug-and-play networking protocol UPnP, this paper designs an intelligent model for selecting the best playback device. By analyzing and comparing the metadata of media files against the metadata of playback devices, the model automatically selects the most suitable device; by saving breakpoint information, it preserves continuity when a media file is played again. The model provides an effective technical approach to breakpoint playback and intelligent, optimized device selection in streaming media cloud services.
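A hedged sketch of metadata matching between a media item and candidate renderers, roughly in the spirit of the selection model described above; the attribute names, weights, and scoring rule are invented for illustration and are not taken from the paper or from UPnP schemas.

```python
# Illustrative scoring of candidate playback devices against media metadata.
# Attribute names and weights are hypothetical; a real system would read them
# from UPnP device and content descriptions.

def score_device(media, device):
    score = 0.0
    if media["format"] in device["supported_formats"]:
        score += 10.0                        # hard requirement satisfied
    score -= abs(media["resolution"] - device["max_resolution"]) / 1000.0
    if device.get("idle", True):
        score += 2.0                         # prefer idle renderers
    return score

def pick_best_device(media, devices):
    return max(devices, key=lambda d: score_device(media, d))

media = {"format": "mp4", "resolution": 1080}
devices = [
    {"name": "TV", "supported_formats": {"mp4", "mkv"}, "max_resolution": 2160, "idle": True},
    {"name": "Phone", "supported_formats": {"mp4"}, "max_resolution": 1080, "idle": False},
]
print(pick_best_device(media, devices)["name"])
```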

4.
1 Overview. After two years (2000-2001) of rapid build-out of residential broadband networks, the task now facing Chinese ISPs is how to offer value-added services over the networks already in place. Many ISPs are experimenting with streaming media services over broadband, such as video-on-demand (VOD) systems. However, the bandwidth and real-time requirements of streaming media mean that streaming servers must be capable of end-to-end congestion control and quality adaptation. Because ...

5.
It is expected that by 2003, continuous media will account for more than 50% of the data available on origin servers, provoking a significant change in Internet workload. Due to the high bandwidth requirements and the long-lived nature of digital video, streaming server load and network bandwidth are proving to be major limiting factors. Targeting the characteristics of residential broadband networks, we propose a popularity-based server-proxy caching strategy for streaming media. Based on the popularity of a media object at the streaming server and the proxy, the strategy caches the object's content partially or completely, and it plays an important role in decreasing server load, reducing the traffic from the streaming server to the proxy, and improving client startup latency.
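One way such a popularity-driven partial/complete caching decision could look; the thresholds, the prefix idea, and the labels are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical popularity-driven proxy caching decision: hot objects are cached
# completely, warm objects keep only a prefix to hide startup latency, and cold
# objects are not cached. Thresholds and the prefix fraction are assumptions.

def caching_decision(popularity: float, hot: float = 0.8, warm: float = 0.3) -> str:
    if popularity >= hot:
        return "cache_full"       # serve the whole object from the proxy
    if popularity >= warm:
        return "cache_prefix"     # e.g. cache the first 20% for fast startup
    return "no_cache"             # fetch from the origin streaming server

for p in (0.9, 0.5, 0.1):
    print(p, caching_decision(p))
```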

6.
A popularity-prediction-based cache replacement algorithm for streaming media proxies
To account for popularity that changes over time, this paper uses regression analysis to derive a popularity prediction algorithm for streaming media files and, at the cost of a small amount of additional storage and computation, applies it to the cache replacement algorithm of a streaming media proxy cache server. Simulation experiments show that the method reduces the number of cache replacements, improves the cache hit ratio, and performs well.
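A minimal sketch of regression-based popularity prediction driving eviction, under the assumption of a simple least-squares trend over recent request counts; the window length and the eviction rule are illustrative, not the paper's exact method.

```python
# Sketch: predict next-interval popularity with a least-squares trend over the
# most recent request counts, then evict the object with the lowest prediction.
# The linear model and window length are assumptions for illustration.

def predict_popularity(history):
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    denom = sum((x - x_mean) ** 2 for x in xs) or 1.0
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / denom
    return y_mean + slope * (n - x_mean)     # extrapolate one interval ahead

def choose_victim(cache_histories):
    # cache_histories: {object_id: [requests per interval, oldest first]}
    return min(cache_histories, key=lambda o: predict_popularity(cache_histories[o]))

histories = {"a.flv": [50, 40, 30], "b.flv": [10, 20, 30]}
print(choose_victim(histories))   # "a.flv": popularity is declining, so evict it
```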

7.
To address the problem that existing video-on-demand techniques cannot be used directly within an intelligent mine management and control platform, a video-on-demand technique is designed based on HTTP adaptive-bitrate streaming and the FFmpeg open-source library. The technique consists of a client module, a Web request handling module, and a multimedia processing module. The client module sends video requests to the Web request handling module according to the configured video source information; the Web request handling module uses the video source specified in the request ...

8.
Smooth workload adaptive broadcast
The high-bandwidth requirements and long-lived characteristics of digital video make transmission bandwidth usage a key limiting factor in the widespread streaming of such content over the Internet. A challenging problem is to develop bandwidth-efficient techniques for delivering popular videos to a large, asynchronous client population with time-varying demand characteristics. In this paper, we propose smooth workload adaptive broadcast to address the above issues. A key component of our scheme is Flexible Periodic Broadcast (FPB). By introducing a feedback control loop into FPB, and enhancing FPB using techniques such as parsimonious transmission, smooth workload adaptive broadcast provides instantaneous or near-instantaneous playback services and can smoothly adapt to workload changes. Furthermore, FPB, as proposed in this paper, is bandwidth efficient and exhibits the periodic smooth channel transition property.

9.
The gradual maturation of mobile 3G/4G networks has driven the development of multimedia technology on mobile terminals and the adoption of related products. Professional, real-time meteorological streaming services therefore face new opportunities and challenges, and this paper designs a solution for meteorological streaming services on mobile terminals. The solution extends FFMPEG's support for encodings and file formats, improves the Web service functionality for meteorological streaming, and delivers diversified weather forecast formats more quickly. It also opens up new directions for meteorological applications on mobile terminals.

10.
In recent years, network streaming has become a highly popular research topic in computer science because multimedia streaming accounts for a large proportion of network traffic. In this paper we present novel methodologies for enhancing the streaming capabilities of Java RMI. Our streaming support for Java RMI includes a pushing mechanism, which allows servers to push data in a streaming fashion to the client site, and an aggregation mechanism, which allows the client site to make a single remote invocation to gather data from multiple servers that keep replicas of data streams and to aggregate the partial data into a complete data stream. In addition, our system allows the client site to forward local data to other clients. Our framework is implemented by extending the Java RMI stub to allow custom designs for streaming buffers and controls, and by providing a continuous buffer for raw data in the transport-layer socket. This enhanced framework allows standard Java RMI services to enjoy streaming capabilities. In addition, we propose aggregation algorithms as scheduling methods in such an environment. Preliminary experiments using our framework demonstrate its promising performance in providing streaming services at the Java RMI layer.

11.
Providing real-time Internet video streaming anytime, anywhere, on any device and across different access networks poses the challenge of balancing quality of service (QoS) against security protection (QoP). Because encrypting and decrypting video packets to protect real-time streaming from eavesdropping are time-consuming operations, our observation is that the playback buffer occupancy (PBO) can serve as a simple indicator of how much time is available to adjust the security level, which in turn affects the packet sending rate. In this paper, we present an end-to-end buffer-aware feedback control driven by the client's PBO for effectively securing media streaming to heterogeneous clients over the ubiquitous Internet. That is, security-level adjustments are applied to keep the PBO away from overflow and underflow, achieving an effective trade-off between QoS and QoP. To further strengthen protection, we also apply Diffie-Hellman key negotiation to provide dynamic key changes. Moreover, since the running PBO varies with the dynamics of the Internet, access time, client devices, and access networks, the different security levels and key changes applied during a streaming session make it more difficult for an eavesdropper to recover the encrypted video delivered over public networks. We demonstrate how well the proposed schemes balance QoS and QoP for ubiquitous video streaming through comprehensive experiments on a real VoD system. The experimental results show that our secure VoD scheme achieves a cost-effective balance of QoS and QoP under different injected network dynamics, even when the client buffer is limited to only 256 KB.
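A sketch of how playback buffer occupancy might be mapped to a security level, as one reading of the buffer-aware control described above; the thresholds, level names, and buffer size are assumptions, and the paper's feedback loop and Diffie-Hellman re-keying are not modeled here.

```python
# Illustrative buffer-aware security-level selection: when the playback buffer
# is nearly full there is slack to afford stronger (slower) encryption; when it
# is nearly empty, fall back to a lighter level to keep the sending rate up.
# Thresholds, level names, and the 256 KB buffer size are hypothetical choices.

def select_security_level(pbo_bytes: int, buffer_size: int = 256 * 1024) -> str:
    occupancy = pbo_bytes / buffer_size
    if occupancy > 0.75:
        return "full_encryption"        # plenty of playback slack
    if occupancy > 0.25:
        return "selective_encryption"   # e.g. encrypt key frames only
    return "minimal_encryption"         # near underflow, prioritize throughput

for pbo in (240_000, 120_000, 20_000):
    print(pbo, select_security_level(pbo))
```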

12.
Haonan, Derek L., Mary K. Performance Evaluation, 2002, 49(1-4):387-410
Previous analyses of scalable streaming protocols for delivery of stored multimedia have largely focused on how the server bandwidth required for full-file delivery scales as the client request rate increases or as the start-up delay is decreased. This previous work leaves unanswered three questions that can substantively impact the desirability of using these protocols in some application domains, namely:

(1) Are simpler scalable download protocols preferable to scalable streaming protocols in contexts where substantial start-up delays can be tolerated?

(2) If client requests are for (perhaps arbitrary) intervals of the media file rather than the full file, are there conditions under which streaming is not scalable (i.e., no streaming protocol can achieve sub-linear scaling of required server bandwidth with request rate)?

(3) For systems delivering a large collection of objects with a heavy-tailed distribution of file popularity, can scalable streaming substantially reduce the total server bandwidth requirement, or will this requirement be largely dominated by the required bandwidth for relatively cold objects?

This paper addresses these questions primarily through the development of tight lower bounds on required server bandwidth, under the assumption of Poisson, independent client requests. Implications for other arrival processes are also discussed. Previous work and results presented in this paper suggest that these bounds can be approached by implementable policies. With respect to the first question, the results show that scalable streaming protocols require significantly lower server bandwidth in comparison to download protocols for start-up delays up to a large fraction of the media playback duration. For the second question, we find that in the worst-case interval access model, the minimum required server bandwidth, assuming immediate service to each client, scales as the square root of the request rate. Finally, for the third question, we show that scalable streaming can provide a factor of log K improvement in the total minimum required server bandwidth for immediate service, as the number of objects K is scaled, for systems with fixed minimum object request popularity.
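For context, a commonly cited baseline from the scalable-delivery literature (full-file delivery, immediate service, Poisson request arrivals at rate λ for a file of playback duration T) bounds the required server bandwidth, measured in units of the media play rate, as shown below. Stating it here for comparison is an addition; the abstract itself only reports the square-root and log K scaling results.

```latex
% Lower bound on server bandwidth for full-file delivery with immediate service,
% Poisson arrivals of rate \lambda, and file duration T (bandwidth in units of
% the media play rate):
B_{\min} \;=\; \int_{0}^{T} \frac{dx}{x + 1/\lambda} \;=\; \ln\!\left(\lambda T + 1\right)
```

Logarithmic growth in the request rate is what makes full-file scalable streaming attractive; the abstract's results show that worst-case interval access weakens this to square-root scaling, while the log K factor captures the aggregate benefit across a catalogue of K objects with fixed minimum popularity.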


13.
Healthcare scientific applications, such as body area networks, require deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis normally involves moving the collected big data to a cloud data center for status reporting and record tracking. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support this kind of real-time streaming data application. Current cloud platforms either lack a module for processing streaming data or scale only at the coarse granularity of compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a continuously running loop daemon. The strength of this new framework is that scaling is designed at the Map and Reduce task level rather than at the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leads to effective use of compute resources in the cloud data center. As a first step towards implementing this framework in a real cloud, we developed a simulator that captures workload strength and provisions exactly the number of Map and Reduce tasks needed, in real time. To further enhance the framework, we applied two streaming-data workload prediction methods, smoothing and the Kalman filter, to estimate the unknown workload characteristics. We see a 63.1% performance improvement when using the Kalman filter to predict the workload. We also use a real streaming-data workload trace to test the framework. Experimental results show that the framework schedules Map and Reduce tasks very efficiently as the arrival rate of the streaming data changes.
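A minimal scalar Kalman filter tracking the streaming-data arrival rate, as one way the workload prediction mentioned above could be realized; the random-walk model and noise variances are assumptions, not the paper's configuration.

```python
# Minimal 1-D Kalman filter estimating streaming-data arrival rate under a
# random-walk model. Process and measurement noise values are illustrative.

class RateKalmanFilter:
    def __init__(self, initial_rate=0.0, process_var=1.0, measure_var=4.0):
        self.x = initial_rate   # estimated arrival rate
        self.p = 1.0            # estimate variance
        self.q = process_var    # process noise variance
        self.r = measure_var    # measurement noise variance

    def update(self, observed_rate: float) -> float:
        # predict: the rate is assumed roughly constant between steps
        self.p += self.q
        # correct with the new observation
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (observed_rate - self.x)
        self.p *= (1 - k)
        return self.x                         # estimate used for the next interval

kf = RateKalmanFilter(initial_rate=100.0)
for obs in (110, 130, 125, 160):
    print(round(kf.update(obs), 1))           # smoothed arrival-rate estimates
```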

14.
Typical network file system (NFS) clients write lazily: they leave dirty pages in the page cache and defer writing to the server. This reduces network traffic when applications repeatedly modify the same set of pages. However, this approach can lead to memory pressure, when the number of available pages on the client system is so low that the system must work harder to reclaim dirty pages. We show that NFS performance is poor under memory pressure and present two mechanisms to solve it: eager writeback and eager page laundering. These mechanisms change the client's data management policy from lazy to eager, in which dirty pages are written back proactively, resulting in higher throughput for sequential writes. In addition, we show that NFS servers suffer from out-of-order file operations, which further reduce performance. We introduce request ordering, a server mechanism to process operations, as much as possible, in the order they were sent by the client, which improves read performance substantially. We have implemented these techniques in the Linux operating system. I/O performance is improved, with the most pronounced improvement visible for sequential access to large files. We see 33% improvement in the performance of streaming write workloads and more than triple the performance of streaming read workloads. We evaluate several non-sequential workloads and show that these techniques do not degrade performance, and can sometimes improve performance.
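A toy model contrasting lazy deferral with the eager-writeback idea described above; the page counts, threshold, and classes are invented for illustration and do not reflect the actual Linux implementation.

```python
# Toy model of lazy vs. eager writeback: under the eager policy, dirty pages are
# flushed as soon as their count crosses a (hypothetical) threshold instead of
# waiting for memory pressure to force reclamation. All values are illustrative.

class PageCache:
    def __init__(self, eager_threshold_pages: int = 64):
        self.dirty_pages = 0
        self.eager_threshold = eager_threshold_pages
        self.flushed = 0

    def write(self, pages: int, eager: bool = True):
        self.dirty_pages += pages
        if eager and self.dirty_pages >= self.eager_threshold:
            self.flush()                      # proactive writeback to the server

    def flush(self):
        self.flushed += self.dirty_pages
        self.dirty_pages = 0

cache = PageCache()
for _ in range(10):
    cache.write(16)
print(cache.flushed, cache.dirty_pages)       # most pages already written back
```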

15.
With the success of Internet video-on-demand (VoD) streaming services, the bandwidth required and the financial cost incurred by the host of the video server have become extremely large. Peer-to-peer (P2P) networks and proxies are two common ways of reducing the server workload. In this paper, we consider a peer-assisted Internet VoD system with proxies deployed at domain gateways. We formally present the video caching problem with the objectives of reducing the video server workload and avoiding inter-domain traffic, and we obtain its optimal solution. Inspired by the theoretical analysis, we develop a practical protocol named PopCap for Internet VoD services. Compared with previous work, PopCap does not require additional infrastructure support, is inexpensive, and copes well with the characteristic workloads of Internet VoD services. From simulation-based experiments driven by real-world data sets from YouTube, we find that PopCap can effectively reduce the video server workload and therefore delivers superior performance in reducing the video server's traffic.

16.
Streaming media technology and its file formats
To gain a thorough, in-depth understanding of streaming media technology and thereby further improve the efficiency of streaming servers, this paper studies streaming media file formats in detail. Building on an analysis of common streaming systems and file formats, and drawing on experience gained while developing a new streaming system, the authors argue that the file format occupies a central place in a streaming system and that a well-designed file format is the most direct and effective way to improve streaming server efficiency. Finally, the paper analyzes how existing streaming file formats affect server performance and proposes a new streaming media file format framework. Practice shows that the proposed format can substantially improve the efficiency of a streaming media server.

17.
P2P streaming systems, such as PPLive and PPStream, have become popular services with the widespread deployment of broadband networks. However, P2P streaming systems still face free-riding problems, similar to those that have been observed in P2P file sharing systems. Thus, one important problem in providing streaming services is that of providing appropriate incentives for peers to contribute their upload capacity. To this end, we propose the use of advertisements as an incentive for peers to contribute upload capacity. In the proposed framework, peers enjoy the same quality of streamed media, with the difference in quality of service being achieved through different amounts of advertisements viewed, based on the resource contributions to the system. Moreover, since calculating peers’ contributions accurately is important to successfully deploying such systems, we design a token-based framework to address this problem. An extensive simulation-based study is performed to evaluate the proposed approach. The results demonstrate that our approach provides appropriate incentives for peers to contribute their resources. Furthermore, we explore several characteristics of the token-based mechanism which can provide system developers with insight into efficient development of such systems.
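A sketch of token accounting in which advertisement load decreases with upload contribution, as one reading of the incentive scheme above; the earning and spending rates are illustrative assumptions, not the paper's token protocol.

```python
# Hypothetical token accounting: peers earn tokens in proportion to the bytes
# they upload and spend them to reduce the number of advertisements shown.
# Earning and spending rates are illustrative, not taken from the paper.

TOKENS_PER_MB_UPLOADED = 1.0
ADS_PER_SESSION_MAX = 5
TOKENS_PER_AD_SKIPPED = 10.0

class PeerAccount:
    def __init__(self):
        self.tokens = 0.0

    def report_upload(self, megabytes: float):
        self.tokens += megabytes * TOKENS_PER_MB_UPLOADED

    def ads_to_show(self) -> int:
        skippable = int(self.tokens // TOKENS_PER_AD_SKIPPED)
        return max(0, ADS_PER_SESSION_MAX - skippable)

peer = PeerAccount()
peer.report_upload(35)          # peer contributed 35 MB of upload capacity
print(peer.ads_to_show())       # fewer ads than a free-riding peer would see
```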

18.
Computer Networks, 1999, 31(11-16):1545-1561
As commercial interest in the Internet grows, more and more companies are offering the service of hosting and providing access to information that belongs to third-party information providers. In the future, successful hosting services may host millions of objects on thousands of servers deployed around the globe. To provide reasonable access performance to popular resources, these resources will have to be mirrored on multiple servers. In this paper, we identify some challenges due to the scale that a platform for such global services would face, and propose an architecture capable of handling this scale. The proposed architecture has no bottleneck points. A trace-driven simulation using an access trace from AT&T's hosting service shows very promising results for our approach.

19.
Providing a real-time cloud service requires simultaneously retrieving a large amount of data, so improving the performance of file access becomes a great challenge. This paper first discusses the preconditions for dealing with this problem, considering the requirements of applications, hardware, software, and network environments in the cloud. Then, a novel distributed layered cache system named HDCache is proposed. HDCache is built on top of the Hadoop Distributed File System (HDFS). Applications integrate the HDCache client library to access the multiple cache services. The cache services are built from three access layers: an in-memory cache, a snapshot of the local disk, and a network disk provided by HDFS. Files loaded from HDFS are cached in shared memory that the client library can access directly. To improve robustness and alleviate workload, the cache services are organized in a peer-to-peer style using a distributed hash table, and every cached file has three replicas scattered across different cache service nodes. Experimental results show that HDCache can store files spanning a wide range of sizes and achieves millisecond-level access performance under highly concurrent workloads. The hit ratio measured on a real-world cloud service is higher than 95%.
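A sketch of the three-layer read path (shared memory, local disk snapshot, HDFS) with a simple hash standing in for the DHT placement; all class and function names here are hypothetical and not the HDCache client API.

```python
# Illustrative three-layer read path: in-memory cache first, then a local disk
# snapshot, then the backing HDFS store; cache-node choice uses a simple hash
# as a stand-in for a distributed hash table. Names are hypothetical.
import hashlib

def cache_node_for(path: str, nodes: list) -> str:
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]              # stand-in for a DHT lookup

class LayeredCache:
    def __init__(self, hdfs_reader):
        self.memory = {}                      # layer 1: shared in-memory cache
        self.disk_snapshot = {}               # layer 2: local disk snapshot
        self.hdfs_reader = hdfs_reader        # layer 3: network disk (HDFS)

    def read(self, path: str) -> bytes:
        if path in self.memory:
            return self.memory[path]
        if path in self.disk_snapshot:
            data = self.disk_snapshot[path]
        else:
            data = self.hdfs_reader(path)     # slowest path: fetch from HDFS
            self.disk_snapshot[path] = data
        self.memory[path] = data              # promote into the faster layer
        return data

cache = LayeredCache(hdfs_reader=lambda p: b"data-for-" + p.encode())
print(cache.read("/user/logs/part-0001"))
print(cache_node_for("/user/logs/part-0001", ["node-a", "node-b", "node-c"]))
```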
