Similar Documents
20 similar documents found
1.
This paper studies the network bandwidth consumed by proxy servers when transmitting and caching streaming media, as well as the problems of memory allocation and management during caching. Based on an analysis of existing streaming media proxy caching techniques, a proxy-based dynamic shared buffering (DSB) algorithm for streaming media is proposed. Analysis and experimental results show that, compared with existing caching techniques, the DSB algorithm effectively improves buffer utilization, saves proxy memory and network bandwidth, serves more concurrent client requests, and significantly shortens request latency.
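The abstract does not give the DSB algorithm itself; the following is only a minimal, hypothetical Python sketch of the general idea of sharing one buffered media window among overlapping client requests (the class name, `window` parameter, and usage values are invented for illustration):

```python
from collections import OrderedDict

class SharedMediaBuffer:
    """Hypothetical sketch: one shared buffer window per media object.

    Segments streamed to the first client are retained for `window` segments,
    so clients arriving shortly afterwards can read the overlap from proxy
    memory instead of triggering a second stream from the origin server.
    """

    def __init__(self, window=64):
        self.window = window               # max segments kept per media object
        self.buffers = {}                  # media_id -> OrderedDict(seg_no -> bytes)

    def put(self, media_id, seg_no, data):
        buf = self.buffers.setdefault(media_id, OrderedDict())
        buf[seg_no] = data
        while len(buf) > self.window:      # drop the oldest segment
            buf.popitem(last=False)

    def get(self, media_id, seg_no):
        """Return a buffered segment, or None if the proxy must fetch upstream."""
        return self.buffers.get(media_id, {}).get(seg_no)

# Usage: client A fills the buffer; client B, starting later, reads the overlap.
buf = SharedMediaBuffer(window=16)
for seg in range(20):
    buf.put("movie-1", seg, b"x" * 1024)
print(buf.get("movie-1", 12) is not None)   # True: served from the shared buffer
print(buf.get("movie-1", 0) is not None)    # False: evicted, must re-fetch
```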

2.
Many algorithmic efforts have been made to address technical issues in designing a streaming media caching proxy. Typical of those are segment-based caching approaches that efficiently cache large media objects in segments, which reduces startup latency while ensuring continuous streaming. However, few systems have been practically implemented and deployed. The implementation and deployment efforts are hindered by several factors: 1) streaming of media content in complicated data formats is difficult; 2) typical streaming protocols such as RTP often run on UDP, and in practice UDP traffic is likely to be blocked by firewalls at the client side due to security considerations; and 3) coordination between caching discrete object segments and streaming continuous media data is challenging. To address these problems, we have designed and implemented a segment-based streaming media proxy, called SProxy. This proxy system has the following merits. First, SProxy leverages the existing Internet infrastructure to address flash crowds. The content server is freed of the streaming duty while hosting streaming content through a regular Web server. Thus, UDP-based streaming traffic from SProxy suffers less dropping and no blocking. Second, SProxy streams and caches media objects in small segments whose sizes are determined by object popularity, yielding very low startup latency and significantly reducing network traffic. Finally, prefetching techniques are used to proactively preload uncached segments that are likely to be used soon, thus providing continuous streaming. SProxy has been extensively tested, and we show that it provides high-quality streaming delivery in both local area networks and wide area networks (e.g., between Japan and the U.S.).
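SProxy's actual segmentation and prefetching logic is not reproduced here; the hypothetical sketch below only illustrates the general pattern the abstract describes — serve the requested segment from cache and prefetch the next uncached segment in the background so playback stays continuous (the class and function names are invented):

```python
import threading

class SegmentCache:
    """Hypothetical sketch of segment-based proxy caching with simple prefetching."""

    def __init__(self, fetch_fn, segment_size=256 * 1024):
        self.fetch_fn = fetch_fn              # fetch_fn(url, seg_no) -> bytes from origin
        self.segment_size = segment_size
        self.cache = {}                       # (url, seg_no) -> bytes

    def _ensure(self, url, seg_no):
        key = (url, seg_no)
        if key not in self.cache:
            self.cache[key] = self.fetch_fn(url, seg_no)
        return self.cache[key]

    def stream_segment(self, url, seg_no):
        data = self._ensure(url, seg_no)
        # Prefetch the next segment in the background so playback is not interrupted.
        threading.Thread(target=self._ensure, args=(url, seg_no + 1), daemon=True).start()
        return data

# Usage with a dummy origin fetch function:
def fake_origin_fetch(url, seg_no):
    return f"{url}:{seg_no}".encode()

proxy = SegmentCache(fake_origin_fetch)
print(proxy.stream_segment("http://origin/video.mp4", 0))  # segment 0; segment 1 is prefetched
```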

3.
李凯慧 《计算机工程》2007,33(5):202-204
Streaming media has become an important component of Internet traffic. Although it can benefit from proxy caching, traditional proxy caching strategies cannot accommodate the distinctive characteristics of media objects, so new caching methods are needed. This paper discusses proxy caching of video streams, including the problems and challenges of prefix caching, segment caching, and full caching of selected video objects. It also examines proxy network architectures for caching video streams, namely distributed, hierarchical, and overlay architectures, describing and commenting on each, and discusses combining caching and proxy networks with multicast.

4.
Research on Proxy Caching and Prefetching Methods for Streaming Media
Proxy caching can reduce user startup latency and relieve network traffic and server load, and it is already widely used on the Web. However, because streaming media differs significantly from non-streaming media (text, images), proxy caching techniques for streaming media still face many challenges. Focusing on characteristics of streaming media such as large data volume and high bandwidth requirements, this paper surveys proxy caching and prefetching methods for streaming media. It investigates, classifies, and compares the advantages and disadvantages of existing caching and prefetching algorithms to provide insights and references for further work, and points out future research directions and hot topics.

5.
User-Generated Content has become very popular since new web services such as YouTube allow for the distribution of user-produced media content. YouTube-like services differ from traditional VoD services in that the service provider has only limited control over the creation of new content. We analyze how content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. Based on these measurements, we analyze the duration and data rate of streaming sessions, the popularity of videos, and access patterns for video clips from the clients in the campus network. The analysis of the traffic shows that trace statistics are relatively stable over short-term periods while long-term trends can be observed. We demonstrate how synthetic traces can be generated from the measured traces and show how these synthetic traces can be used as inputs to trace-driven simulations. We also analyze the benefits of alternative distribution infrastructures to improve the performance of a YouTube-like VoD service. The results of these simulations show that P2P-based distribution and proxy caching can reduce network traffic significantly and allow for faster access to video clips.

6.
In this paper, the impact of memory management policies and switch design alternatives on the application performance of cache-coherent nonuniform memory access (CC-NUMA) multiprocessors is studied in detail. Memory management plays an important role in determining the performance of NUMA multiprocessors by dictating the placement of data among the distributed memory modules. We analyze memory traces of several scientific applications for three different memory management techniques, namely the buddy, round-robin, and first-touch policies, and compare their memory system performance. Interconnection network switch designs that consider virtual channels and a varying number of input buffers per switch are presented. Our performance evaluation is based on an execution-driven simulation methodology that captures the dynamic changes in network traffic during execution of the applications. It is shown that the use of cut-through switching with buffers and virtual channels can reduce the average message latency tremendously. However, the choice of memory management policy affects the amount of network traffic and the network access pattern. Thus, we vary the memory management policy and confirm the performance benefits of improved switch designs. Results of sensitivity studies varying switch design parameters, cache block size, and memory page size are also presented. We find that a combination of the first-touch memory management policy and a switch design with virtual channels and increased buffer space can reduce the average message latency by as much as 70 percent.
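The paper's execution-driven simulator is not available here; as a loose illustration of how the round-robin and first-touch placement policies differ, the hypothetical sketch below assigns memory pages to NUMA nodes from a toy access trace and compares the fraction of remote accesses (all names and trace values are invented):

```python
def round_robin_placement(trace, num_nodes):
    """Assign each newly touched page to nodes in turn, regardless of who touches it."""
    placement, next_node = {}, 0
    for cpu, page in trace:
        if page not in placement:
            placement[page] = next_node
            next_node = (next_node + 1) % num_nodes
    return placement

def first_touch_placement(trace, cpus_per_node):
    """Assign each page to the node of the CPU that touches it first."""
    placement = {}
    for cpu, page in trace:
        if page not in placement:
            placement[page] = cpu // cpus_per_node
    return placement

def remote_access_ratio(trace, placement, cpus_per_node):
    remote = sum(1 for cpu, page in trace if placement[page] != cpu // cpus_per_node)
    return remote / len(trace)

# Toy trace of (cpu_id, page_number): each CPU mostly touches its own private pages.
trace = [(cpu, cpu * 100 + i) for cpu in range(4) for i in range(50)] * 3
print(remote_access_ratio(trace, round_robin_placement(trace, 4), 1))   # ~0.75 remote
print(remote_access_ratio(trace, first_touch_placement(trace, 1), 1))   # 0.0, all local
```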

7.
Cache Management Strategies for Streaming Media Objects
Proxy technology for streaming media services is an important topic in streaming media research. With the rapid development of streaming media technology on the Internet and in wireless networks, research on streaming media proxy servers is steadily deepening. This paper mainly discusses using proxy technology to improve the quality of media services, reduce media transmission delay, and lighten network load. In the Internet environment, research on streaming media proxy servers focuses on the access characteristics of streaming media and on cache replacement algorithms; building and implementing a streaming media proxy server is the foundation for research on streaming media proxy technology.

8.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and decrease network bandwidth consumption. Thus, many research papers have focused on improving web caching performance with efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism; however, many technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses object clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data grouping overhead. It takes into consideration the access speed disparity between memory and disk and replaces expensive small disk I/Os with fewer large ones. If a client request cannot be fulfilled by the server from memory, the system fetches the whole cluster containing the required object into memory, so future requests for other objects in the same cluster can be satisfied directly from memory and slow disk I/Os are avoided. It also introduces a simple and efficient data duplication algorithm; little maintenance work needs to be done in case of server join/leave or server failure. Along with the local caching strategy, Tulip achieves better fault tolerance and load balancing capability with minimal cost. Our simulation results show Tulip has better performance than previous approaches.
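The following hypothetical sketch illustrates only the cluster-level fetching idea the abstract describes (grouping co-accessed objects and loading the whole cluster with one large read); it is not the Tulip implementation, and all names and the eviction policy shown are invented:

```python
class ClusterCache:
    """Hypothetical sketch of cluster-level caching in the spirit of the Tulip design.

    Objects that tend to be accessed together live in one on-disk cluster; a miss
    loads the whole cluster into memory with a single large disk read, so later
    requests for sibling objects avoid many small disk I/Os.
    """

    def __init__(self, read_cluster_fn, max_clusters_in_memory=128):
        self.read_cluster_fn = read_cluster_fn        # cluster_id -> {url: bytes}
        self.max_clusters = max_clusters_in_memory
        self.memory = {}                              # cluster_id -> {url: bytes}
        self.object_to_cluster = {}                   # url -> cluster_id

    def register(self, url, cluster_id):
        self.object_to_cluster[url] = cluster_id

    def get(self, url):
        cluster_id = self.object_to_cluster[url]
        if cluster_id not in self.memory:             # miss: one large read loads the cluster
            if len(self.memory) >= self.max_clusters:
                self.memory.pop(next(iter(self.memory)))   # naive FIFO-style eviction
            self.memory[cluster_id] = self.read_cluster_fn(cluster_id)
        return self.memory[cluster_id][url]

# Usage with a fake on-disk layout of two co-accessed objects in one cluster:
disk = {0: {"/index.html": b"<html>", "/logo.png": b"\x89PNG"}}
cache = ClusterCache(lambda cid: disk[cid])
cache.register("/index.html", 0)
cache.register("/logo.png", 0)
cache.get("/index.html")            # loads cluster 0 from "disk"
print(cache.get("/logo.png"))       # served from memory, no second disk read
```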

9.
1 Overview. After the rapid build-out of community broadband networks in 2000 and 2001, the task facing Chinese ISPs is how to offer value-added services over the broadband networks already built. Many ISPs are attempting to provide streaming media services over broadband, such as Video-On-Demand (VOD) systems. However, the bandwidth and real-time requirements of streaming media mean that streaming servers must be able to perform end-to-end congestion control and quality adaptation. Since…

10.
It is expected that by 2003, continuous media will account for more than 50% of the data available on origin servers. This will provoke a significant change in Internet workload; due to the high bandwidth requirements and the long-lived nature of digital video, streaming server load and network bandwidth are proving to be major limiting factors. Aiming at the characteristics of broadband networks in residential areas, we propose a popularity-based server-proxy caching strategy for streaming media. According to the popularity of a streaming media object on the streaming server and the proxy, this strategy caches the object's content partially or completely, and it plays an important role in decreasing server load, reducing traffic from the streaming server to the proxy, and improving the startup latency experienced by the client.
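As a rough, hypothetical illustration of popularity-driven partial caching (not the paper's actual strategy), the sketch below assigns each stream a cached fraction proportional to its popularity until the proxy cache capacity is used up; the function name and the example numbers are invented:

```python
def plan_partial_caching(popularity, sizes, cache_capacity):
    """Hypothetical sketch: decide how much of each stream to cache at the proxy.

    `popularity` maps media id -> request rate, `sizes` maps id -> full size in MB.
    Popular streams get a larger cached fraction (up to full caching) until the
    capacity is exhausted; unpopular ones keep only a small portion.
    """
    plan, remaining = {}, cache_capacity
    hottest = max(popularity.values())
    # Consider the most popular streams first.
    for media_id in sorted(popularity, key=popularity.get, reverse=True):
        fraction = popularity[media_id] / hottest      # 1.0 for the hottest stream
        cached = min(fraction * sizes[media_id], remaining)
        plan[media_id] = cached
        remaining -= cached
        if remaining <= 0:
            break
    return plan

# Usage: 500 MB of proxy cache shared by three streams (example values only).
popularity = {"news": 40, "movie": 10, "lecture": 2}
sizes = {"news": 300, "movie": 700, "lecture": 400}     # MB
print(plan_partial_caching(popularity, sizes, cache_capacity=500))
# {'news': 300.0, 'movie': 175.0, 'lecture': 20.0}
```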

11.
In this paper, we investigate a proxy-based integrated cache consistency and mobility management scheme for supporting client–server applications in Mobile IP systems with the objective to minimize the overall network traffic generated. Our cache consistency management scheme is based on a stateful strategy by which cache invalidation messages are asynchronously sent by the server to a mobile host (MH) whenever data objects cached at the MH have been updated. We use a per-user proxy to buffer invalidation messages to allow the MH to disconnect arbitrarily and to reduce the number of uplink requests when the MH is reconnected. Moreover, the user proxy takes the responsibility of mobility management to further reduce the network traffic. We investigate a design by which the MH’s proxy serves as a gateway foreign agent (GFA) as in the MIP Regional Registration protocol to keep track of the address of the MH in a region, with the proxy migrating with the MH when the MH crosses a regional area. We identify the optimal regional area size under which the overall network traffic cost, due to cache consistency management, mobility management, and query requests/replies, is minimized. The integrated cache consistency and mobility management scheme is demonstrated to outperform MIPv6, no-proxy and/or no-cache schemes, as well as a decoupled scheme that optimally but separately manages mobility and service activities in Mobile IPv6 environments.
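The sketch below is only a hypothetical illustration of the invalidation-buffering idea described above (a per-user proxy queues server invalidations while the MH is disconnected and flushes them on reconnection); the class names and cached objects are invented and this is not the paper's protocol:

```python
class UserProxy:
    """Hypothetical sketch of a per-user proxy buffering cache invalidations.

    While the mobile host (MH) is disconnected, invalidations from the server are
    queued at the proxy; on reconnection they are delivered in one batch, so the
    MH drops only stale entries instead of revalidating its whole cache uplink.
    """

    def __init__(self):
        self.connected = False
        self.pending_invalidations = set()

    def on_server_invalidate(self, object_id, mh):
        if self.connected:
            mh.invalidate(object_id)                    # forward immediately
        else:
            self.pending_invalidations.add(object_id)   # buffer until reconnection

    def on_mh_reconnect(self, mh):
        self.connected = True
        for object_id in self.pending_invalidations:    # flush the buffered batch
            mh.invalidate(object_id)
        self.pending_invalidations.clear()

class MobileHost:
    def __init__(self):
        self.cache = {"profile": "v1", "inbox": "v1"}
    def invalidate(self, object_id):
        self.cache.pop(object_id, None)

# Usage: one update arrives while the MH is disconnected.
mh, proxy = MobileHost(), UserProxy()
proxy.on_server_invalidate("inbox", mh)
proxy.on_mh_reconnect(mh)
print(mh.cache)   # {'profile': 'v1'}: only the stale object was dropped
```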

12.
Liu Jiangchuan, Li Bo 《World Wide Web》 2004, 7(3): 281-296
With the development of the broadband Internet, multimedia services have been widely deployed and contribute a significant amount of today's Internet traffic. Like normal web objects (e.g., HTML pages and images), media objects can benefit from proxy caching; yet their unique features such as huge size and high bandwidth demand imply that conventional proxy caching strategies have to be substantially revised. Moreover, in the current Internet, clients are highly heterogeneous; it is necessary to differentiate their Quality-of-Service (QoS) requirements in streaming. However, the presence of an intermediate proxy in a streaming system poses great challenges to designers. This paper proposes a novel QoS-based algorithm for media streaming with proxy caching. We employ layered coding and transmission, and jointly consider the problems of caching and scheduling to improve the QoS for the clients. We derive general and effective solutions to the problems and evaluate their performance under various configurations. The results demonstrate that the proposed algorithm can accommodate diverse QoS demands from the clients and yet satisfy stringent resource limits.

13.
Multimedia streaming is one of the most popular services on the current Internet. However, the major problem hindering streaming media applications is network bandwidth. In this paper, we put forward an optimized streaming proxy system and consider utilizing it in peer-to-peer (P2P) applications. The main contributions of our proxy system are RTP segment splicing, prefix cache size determination, and prefetching techniques, which are discussed in detail in this paper. Experiments validating our implementation show that the streaming proxy can effectively reduce the server and backbone network load, thus achieving improved media quality and scalability. In addition, P2P networks are an attractive way of sharing Internet resources, and many researchers are devoting themselves to P2P-based client communications. In our project, we apply our proxy prototype to streaming sharing over a P2P network, with each peer node acting like a lightweight proxy between the server and the target peers. As expected, the real-time video sharing (RVS) system, combined with our proxy, efficiently utilizes network bandwidth between peer clients.

14.
OpenCL programming provides full code portability between different hardware platforms, and can serve as a good programming candidate for heterogeneous systems, which typically consist of a host processor and several accelerators. However, to make full use of the computing capacity of such a system, programmers are required to manage diverse OpenCL-enabled devices explicitly, including distributing the workload between different devices and managing data transfer between multiple devices. All these tedious jobs pose a huge challenge for programmers. In this paper, a distributed shared OpenCL memory (DSOM) is presented, which relieves users of having to manage data transfer explicitly by supporting shared buffers across devices. DSOM allocates shared buffers in the system memory and treats the on-device memory as a software-managed virtual cache buffer. To support fine-grained shared buffer management, we designed a kernel parser in DSOM for buffer access range analysis. A basic modified-shared-invalid cache coherency protocol is implemented in DSOM to maintain coherency for cache buffers. In addition, we propose a novel strategy to minimize communication cost between devices by launching each necessary data transfer as early as possible. This strategy enables overlap of data transfer with kernel execution. Our experimental results show that the applicability of our method for buffer access range analysis is good, and the efficiency of DSOM is high.
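To make the modified-shared-invalid (MSI) idea concrete, here is a tiny hypothetical state-machine sketch for one device-side cache buffer line; it is not the DSOM implementation, and the class, method names, and usage sequence are invented:

```python
from enum import Enum

class State(Enum):
    MODIFIED = "M"
    SHARED = "S"
    INVALID = "I"

class CacheBufferLine:
    """Hypothetical sketch of MSI state transitions for a device-side cache buffer."""

    def __init__(self):
        self.state = State.INVALID

    def local_read(self):
        if self.state is State.INVALID:
            self.state = State.SHARED        # fetch a clean copy from system memory
        return self.state

    def local_write(self):
        self.state = State.MODIFIED          # this device now owns the dirty copy
        return self.state

    def remote_read(self):
        if self.state is State.MODIFIED:
            self.state = State.SHARED        # write back, then share the clean copy
        return self.state

    def remote_write(self):
        self.state = State.INVALID           # another device wrote: drop our copy
        return self.state

# Usage: device A writes, device B reads, then device B writes.
line_a, line_b = CacheBufferLine(), CacheBufferLine()
line_a.local_write()                         # A: MODIFIED
line_b.local_read(); line_a.remote_read()    # A writes back -> both SHARED
line_b.local_write(); line_a.remote_write()  # B: MODIFIED, A: INVALID
print(line_a.state, line_b.state)            # State.INVALID State.MODIFIED
```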

15.
This paper proposes a novel leakage management technique for applications with producer-consumer sharing patterns. Although previous research has proposed leakage management techniques by turning off inactive cache blocks, these techniques can be further improved by exploiting the various run-time characteristics of target applications in CMPs. By exploiting particular access sequences observed in producer-consumer sharing patterns and the spatial locality of shared buffers, our technique enables a more aggressive turn-off of L2 cache blocks of these buffers. Experimental results using a CMP simulator show that our proposed technique reduces the energy consumption of on-chip L2 caches, a shared bus, and off-chip memory by up to 31.3% over the existing cache leakage power management techniques with no significant performance loss.

16.
Design and Implementation of an HTTP Proxy Server
A proxy server system is implemented: a proxy model is designed and structural design diagrams of each proxy module are given. The system restricts users by IP address and user password, giving it a degree of secure access control; the use of caching and prefetching improves client response speed and reduces network traffic.

17.
This paper introduces the hardware architecture and workflow of an embedded streaming media system based on the S3C2440 embedded microprocessor. The server sends streaming media data via the RTP/RTCP protocols, and the client decompresses the received data and plays it in real time. The receive cache is divided into a receive buffer, a playback buffer, and a DMA buffer, with the three buffer sizes set in a 1:1:2 ratio; buffer capacity is constrained by parameters such as average rate, delay jitter, and decoding bit rate. Two thresholds are set in the receive buffer, and by monitoring these two thresholds the sender's transmission rate is adjusted accordingly. This both avoids network congestion and improves the quality of streaming media transmission.
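A minimal, hypothetical sketch of the two-threshold idea described above follows: if the receive buffer fills past a high watermark, the sender is asked to slow down; if it drains below a low watermark, the sender may speed up. The class name, watermark fractions, and byte counts are invented, not taken from the paper:

```python
class ReceiveBufferMonitor:
    """Hypothetical sketch of two-threshold feedback on receive-buffer occupancy."""

    def __init__(self, capacity_bytes, low=0.25, high=0.75):
        self.capacity = capacity_bytes
        self.low_mark = int(capacity_bytes * low)
        self.high_mark = int(capacity_bytes * high)
        self.occupied = 0

    def on_packet(self, nbytes):
        self.occupied = min(self.capacity, self.occupied + nbytes)
        return self._advice()

    def on_consumed(self, nbytes):
        self.occupied = max(0, self.occupied - nbytes)
        return self._advice()

    def _advice(self):
        if self.occupied >= self.high_mark:
            return "DECREASE_RATE"     # buffer nearly full: risk of overflow/congestion
        if self.occupied <= self.low_mark:
            return "INCREASE_RATE"     # buffer nearly empty: risk of playback underrun
        return "KEEP_RATE"

# Usage with an assumed 1 MB receive buffer:
mon = ReceiveBufferMonitor(capacity_bytes=1 << 20)
print(mon.on_packet(900_000))    # DECREASE_RATE
print(mon.on_consumed(700_000))  # INCREASE_RATE
```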

18.
Considering the access preferences of streaming media users, a cache replacement algorithm based on prefix caching and media popularity is proposed. The algorithm estimates a combined popularity from the external and internal popularity of each media object, and then selects low-popularity segments in the available cache for replacement, so that the total reuse value of all cached segments is maximized. Simulation results show that the algorithm reduces the number of cache replacements, improves the cache hit ratio, and performs well.
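The abstract does not give the exact popularity formula; the sketch below is a hypothetical illustration of combining external popularity (how often the media object is requested) and internal popularity (how often a given segment is reached within it) into one score and evicting the lowest-scoring segment. The weight `alpha` and the example values are invented:

```python
def combined_popularity(external, internal, alpha=0.6):
    """Hypothetical weighted combination of external and internal popularity."""
    return alpha * external + (1 - alpha) * internal

def evict_one(cache):
    """Evict the cached segment with the lowest combined popularity."""
    victim = min(cache, key=lambda seg: combined_popularity(*cache[seg]))
    del cache[victim]
    return victim

# cache: (media id, segment number) -> (external popularity, internal popularity)
cache = {
    ("movieA", 0): (0.9, 1.0),    # hot movie, prefix segment
    ("movieA", 40): (0.9, 0.1),   # hot movie, rarely-reached tail
    ("movieB", 0): (0.2, 1.0),    # cold movie, prefix segment
    ("movieB", 40): (0.2, 0.1),   # cold movie, rarely-reached tail
}
print(evict_one(cache))   # ('movieB', 40): lowest combined score
```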

19.
Fragmental Proxy Caching for Streaming Multimedia Objects
In this paper, a fragmental proxy-caching scheme that efficiently manages streaming multimedia data in the proxy cache is proposed to improve the quality of streaming multimedia services. The novel data-fragmentation method in this scheme not only provides finer-granularity caching units to allow more effective cache replacement, but also offers a unique and natural way of handling interactive VCR functions in the proxy-caching environment. Furthermore, a cache-replacement scheme, based on user request arrival rates for different multimedia objects and the playback rates of these objects, is proposed to address the drawbacks in existing cache-replacement schemes, most of which consider only user access frequencies in their cache-replacement decisions. In this cache-replacement scheme, a sliding history window is employed to monitor the dynamic user request arrivals, and a tunable-victimization procedure is used to provide an excellent method of managing the cached multimedia data in accordance with the different quality-of-service requirements of streaming multimedia applications. Performance studies demonstrate that the fragmental proxy-caching scheme significantly outperforms other caching schemes in terms of byte-hit ratio and the number of delayed starts, and can be tuned to either maximize the byte-hit ratio or minimize the number of delayed starts.
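Loosely following the ingredients named in the abstract (a sliding history window of request arrivals and per-object playback rates), the hypothetical sketch below estimates arrival rates over a sliding window and evicts the fragment whose object has the lowest rate-weighted demand; the exact victimization rule in the paper is different and tunable, and all names and numbers here are invented:

```python
import time
from collections import deque, defaultdict

class SlidingWindowPopularity:
    """Hypothetical sliding history window of request arrival times per object."""

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.arrivals = defaultdict(deque)        # object id -> recent request times

    def record_request(self, obj, now=None):
        now = time.time() if now is None else now
        q = self.arrivals[obj]
        q.append(now)
        while q and q[0] < now - self.window:     # slide the window forward
            q.popleft()

    def arrival_rate(self, obj, now=None):
        now = time.time() if now is None else now
        q = self.arrivals[obj]
        while q and q[0] < now - self.window:
            q.popleft()
        return len(q) / self.window

def pick_victim(fragments, popularity, playback_rate, now):
    """Evict the fragment whose object has the lowest rate-weighted demand."""
    return min(fragments,
               key=lambda f: popularity.arrival_rate(f[0], now) * playback_rate[f[0]])

# Usage with simulated timestamps:
pop = SlidingWindowPopularity(window_seconds=60)
for t in range(0, 60, 5):
    pop.record_request("news", now=1000 + t)      # 12 requests in the window
pop.record_request("archive", now=1030)           # 1 request
playback_rate = {"news": 1.5, "archive": 4.0}     # Mbps (example values)
print(pick_victim([("news", 0), ("archive", 0)], pop, playback_rate, now=1060))
# ('archive', 0)
```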

20.
A Cache Replacement Algorithm for Streaming Media Proxies Based on Popularity Prediction
Addressing the fact that popularity changes over time, a popularity prediction algorithm for streaming media files based on regression analysis is presented. At the cost of a small amount of additional storage space and computation time, the prediction algorithm is applied within the cache replacement algorithm of a streaming media proxy cache server. Simulation results show that the method reduces the number of cache replacements, improves the cache hit ratio, and performs well.
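The paper's regression model is not specified in this abstract; as a hypothetical stand-in, the sketch below fits an ordinary least-squares line to each file's recent popularity samples, extrapolates one period ahead, and evicts the file with the lowest predicted popularity. The function names and sample counts are invented:

```python
def predict_next_popularity(history):
    """Fit an OLS line to recent popularity samples and extrapolate one period ahead."""
    n = len(history)
    mean_x, mean_y = (n - 1) / 2, sum(history) / n
    var_x = sum((x - mean_x) ** 2 for x in range(n))
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history))
    slope = cov_xy / var_x if var_x else 0.0
    intercept = mean_y - slope * mean_x
    return max(0.0, intercept + slope * n)        # predicted popularity next period

def choose_victim(cached_files, histories):
    """Replace the cached file whose *predicted* popularity is lowest."""
    return min(cached_files, key=lambda f: predict_next_popularity(histories[f]))

# Usage: one file is fading, the other is rising, even though past totals are equal.
histories = {"fading.flv": [30, 24, 18, 12, 6], "rising.flv": [6, 12, 18, 24, 30]}
print(choose_victim(["fading.flv", "rising.flv"], histories))   # fading.flv
```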
