Similar Literature
20 similar documents found (search time: 78 ms)
1.
Cache Management Strategies for Streaming Media Objects   Cited by: 2 (self-citations: 0, citations by others: 2)
Proxy technology for streaming media services is an important topic in streaming media research. With the rapid development of streaming media technology over the Internet and in wireless network environments, research on streaming media proxy servers is steadily deepening. This paper focuses on using proxy technology to improve the quality of service of media delivery, reduce media transmission latency, and relieve network load. In the Internet environment, research on streaming media proxy servers concentrates on streaming access characteristics and cache replacement algorithms; building and implementing a streaming media proxy server is the foundation for research on streaming media proxy technology.

2.
Advances in network technology have accelerated the development of multimedia applications over wired and wireless communication. To alleviate network congestion and to reduce latency and workload on multimedia servers, the concept of a multimedia proxy has been proposed to cache popular content. Caching data objects relieves the bandwidth demand on the external network and reduces the average time to load a remote data object to the local side. Since the effectiveness of a proxy server depends largely on its cache replacement policy, various approaches have been proposed in recent years. In this paper, we discuss the cache replacement policy in a multimedia transcoding proxy. Unlike cache replacement for conventional web objects, replacing some elements with others in the cache of a transcoding proxy must further consider the transcoding relationship among the cached items. To maintain the transcoding relationship and to perform cache replacement, we propose the RESP framework (REplacement with Shortest Path). The RESP framework contains two primary components: procedure MASP (Minimum Aggregate Cost with Shortest Path) and algorithm EBR (Exchange-Based Replacement). Procedure MASP maintains the transcoding relationship using a shortest-path table, whereas algorithm EBR performs cache replacement according to an exchanging strategy. The experimental results show that the RESP framework can approximate optimal cache replacement with much lower execution time for processing user queries.
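The abstract describes MASP and EBR only at a high level. As a loose illustration of cost-aware replacement in a transcoding proxy (not the paper's actual MASP/EBR procedures; the version names, sizes, and costs below are invented), one can rank eviction victims by how cheaply they could be re-produced from the versions that remain cached:

```python
# Hypothetical per-object data (not from the paper): the cost of re-fetching a
# version from the origin and of transcoding one cached version into another.
FETCH_COST = 100
TRANSCODE_COST = {
    ("1080p", "720p"): 10,
    ("1080p", "480p"): 15,
    ("720p", "480p"): 8,
}

def penalty(version, cached_versions):
    """Cheapest way to re-produce `version` if it were evicted: transcode it
    from another still-cached version, or re-fetch it from the origin server."""
    costs = [TRANSCODE_COST[(src, version)]
             for src in cached_versions
             if src != version and (src, version) in TRANSCODE_COST]
    return min(costs + [FETCH_COST])

def evict(cache, space_needed_kb):
    """Greedily evict the versions whose re-production cost (loss) is smallest.
    `cache` maps version name -> size in KB."""
    freed, victims = 0, []
    while freed < space_needed_kb and cache:
        victim = min(cache, key=lambda v: penalty(v, cache.keys()))
        freed += cache.pop(victim)
        victims.append(victim)
    return victims

cache = {"1080p": 900, "720p": 400, "480p": 150}
print(evict(cache, 500))   # evicts the cheapest-to-recover versions first
```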

3.
The rapid growth of 3G networks and the increasing availability of city-wide Wi-Fi networks have led to rapidly growing demand for streaming media services in mobile network environments. Since mobile users rely mainly on streaming media resources located on the Internet, edge streaming servers placed at the boundary between the wireless network and the Internet can act as a bridge and buffer, with a significant effect on reducing network load and improving quality of service. We therefore propose the design of a new cluster-based edge streaming server (CESS), analyze and test load balancing, the most important issue in clustered servers, and present a new MCLBS cache replacement algorithm that enables CESS to achieve adaptive load balancing. Finally, experimental tests and analysis of the results show that, compared with traditional cache replacement algorithms, MCLBS is better suited to a clustered server architecture; under the same conditions, the cache hit ratio is noticeably improved and the bandwidth demand on remote servers is greatly reduced.

4.
Padmavathi, Poorva. Computer Networks, 2006, 50(18): 3608-3621
In this paper, we address the server selection problem for streaming applications on the Internet. The architecture we consider is similar to content distribution networks, consisting of geographically dispersed servers and user populations over an interconnected set of metropolitan areas. Server selection issues for Web-based applications in such an environment have been widely addressed; the selection is mostly based on proximity measured using packet delay. Such a greedy or heuristic approach to server selection will not address the capacity planning problem evident in multimedia applications. For such applications, admission control becomes an essential part of their design to maintain Quality of Service (QoS). Our objective in providing a solution to the server selection problem is threefold: first, to direct clients to the nearest server; second, to provide multiple sources to diffuse network load; third, to match server capacity to user demand so that optimal blocking performance can be expected. We accomplish all three objectives by using a special type of Linear Programming (LP) formulation called the Transportation Problem (TP). The objective function in the TP is to minimize the cost of serving a video request from user population x using server y, as measured by network distance. The optimal allocation between servers and user populations from the TP results in server clusters, with the aggregated capacity of each cluster designed to meet the demands of its designated user population. Within a server cluster, we propose streaming protocols for request handling that result in a balanced load. We implement threshold-based admission control in individual servers within a cluster to enforce the fair share of the server resource to its designated user population. The blocking performance is used as a trigger to find new optimal allocations when blocking rates become unacceptable due to changes in user demand. We substantiate the analytical model with an extensive simulation for analyzing the performance of the proposed clustered architecture and the protocols. The simulation results show a significant difference in overall blocking performance between optimal and suboptimal allocations, as much as 15% at moderate to high workloads. We also show that the proposed cluster protocols result in lower packet loss and latencies by forcing path diversity from multiple sources for request delivery.
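The transportation-problem formulation sketched above can be written directly as a linear program. The following SciPy snippet solves a toy instance with two servers and three user populations; the cost matrix, capacities, and demands are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (assumptions): network distance from each server to each
# user population, server capacities, and population demands.
cost = np.array([[1.0, 4.0, 6.0],      # server 0 -> populations 0..2
                 [5.0, 2.0, 1.5]])     # server 1 -> populations 0..2
capacity = np.array([120, 100])         # max concurrent streams per server
demand = np.array([60, 70, 50])         # requested streams per population

m, n = cost.shape
c = cost.ravel()                         # decision variables x[i, j], row-major

# Capacity constraints: sum_j x[i, j] <= capacity[i]
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0

# Demand constraints: sum_i x[i, j] = demand[j]
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
              bounds=(0, None), method="highs")
allocation = res.x.reshape(m, n)
print(allocation)   # rows: servers, columns: user populations
```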

5.
Due to the high bandwidth requirement and rate variability of compressed video, delivering video across wide area networks (WANs) is a challenging issue. Proxy servers have been used to reduce network congestion and improve client access time on the Internet by caching passing data. We investigate ways to store or stage partial video in proxy servers to reduce the network bandwidth requirement over WAN. A client needs to access a portion of the video from a proxy server over a local area network (LAN) and the rest from a central server across a WAN. Therefore, client buffer requirement and video synchronization are to be considered. We study the tradeoffs between client buffer, storage requirement on the proxy server, and bandwidth requirement over WAN. Given a video delivery rate for the WAN, we propose several frame staging selection algorithms to determine the video frames to be stored in the proxy server. A scheme called chunk algorithm, which partitions a video into different segments (chunks of frames) with alternating chunks stored in the proxy server, is shown to offer the best tradeoff. We also investigate an efficient way to utilize client buffer when the combination of video streams from WAN and LAN is considered.
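As a toy illustration of the chunk algorithm's alternating partitioning (the frame count and chunk size below are assumptions, not values from the paper):

```python
def stage_alternating_chunks(total_frames, chunk_size, proxy_first=True):
    """Split a video into fixed-size chunks and assign alternating chunks to
    the proxy (served over the LAN) and the central server (served over the
    WAN). A rough sketch of the chunk idea; parameters are assumptions."""
    proxy_frames, wan_frames = [], []
    for start in range(0, total_frames, chunk_size):
        chunk = list(range(start, min(start + chunk_size, total_frames)))
        goes_to_proxy = ((start // chunk_size) % 2 == 0) == proxy_first
        (proxy_frames if goes_to_proxy else wan_frames).extend(chunk)
    return proxy_frames, wan_frames

proxy, wan = stage_alternating_chunks(total_frames=24, chunk_size=6)
print(proxy)  # frames staged in the proxy: 0..5 and 12..17
print(wan)    # frames streamed from the central server: 6..11 and 18..23
```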

6.
In this paper, we propose a novel approach for reducing the download time of large files over the Internet. Our approach, known as Parallelized File Transport Protocol (P-FTP), proposes simultaneous downloads of disjoint file portions from multiple file servers. The P-FTP server selects file servers for the requesting client on the basis of a variety of QoS parameters, such as available bandwidth and server utilization. The sensitivity analysis of our file server selection technique shows that it performs significantly better than random selection. During the file transfer, the P-FTP client monitors the file transfer flows to detect slow servers and congested links and adjusts the file distributions accordingly. P-FTP is evaluated with simulations and a real-world implementation. The results show at least a 50 percent reduction in download time when compared to the traditional file-transfer approach. Moreover, we have also carried out a simulation-based study to investigate the issues related to large-scale deployment of our approach on the Internet. Our results demonstrate that a large number of P-FTP users have no adverse effect on the performance perceived by non-P-FTP users. In addition, the file servers and network are not significantly affected by large-scale deployment of P-FTP.
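A rough sketch of the core P-FTP idea, assigning disjoint byte ranges in proportion to measured server bandwidth, follows; the server names and bandwidths are hypothetical, and P-FTP's runtime monitoring and re-adjustment are omitted.

```python
def split_file(file_size, server_bandwidth):
    """Divide [0, file_size) into disjoint byte ranges, sized proportionally
    to each server's available bandwidth (hypothetical values)."""
    total_bw = sum(server_bandwidth.values())
    ranges, offset = {}, 0
    servers = list(server_bandwidth)
    for k, server in enumerate(servers):
        if k == len(servers) - 1:            # last server takes the remainder
            length = file_size - offset
        else:
            length = file_size * server_bandwidth[server] // total_bw
        ranges[server] = (offset, offset + length - 1)   # inclusive byte range
        offset += length
    return ranges

# Example: a 10 MB file split across three mirrors by available bandwidth (Mbit/s).
print(split_file(10_000_000, {"mirror-a": 40, "mirror-b": 25, "mirror-c": 10}))
```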

7.
The single-system approach is no longer sufficient to handle the load on popular Internet servers, especially for those offering extensive multimedia content. Such services have to be replicated to enhance their availability, performance, and reliability. In a highly replicated and available environment, server selection is an important issue. In this paper, we propose an application-layer broker (ALB) for this purpose. ALB employs a content-based, client-centric approach to negotiate with the servers and to identify the best server for the requested objects. ALB aims to maximize client buffer utilization in order to efficiently handle dynamic user interactions such as skip, reverse presentation, and go back in time. We also present details of a collaborative multimedia presentation platform that we have developed based on ALB.

8.

Under massive parallel multimedia streaming and user access, multimedia servers often become overloaded, which delays service responses, lowers the utilization of wireless resources, and makes it difficult to meet users' quality-of-experience requirements. To address the low utilization of multimedia communication resources and the heavy computing load on servers, this paper proposes a self-management mechanism and architecture for wireless resources based on green communication for multimedia streams. First, combining the multimedia server, relay base stations, and user clusters, a multimedia green communication system architecture is built around the overall utilization of multimedia communication, and a cluster-level green communication control algorithm is proposed. Second, for dynamic service demands and asynchronous multimedia communication environments, a dynamic multimedia wireless resource architecture is constructed to keep resource allocation balanced while accelerating allocation decisions. Finally, experimental results on servers handling parallel multimedia streams of different scales, covering response delay, the proportion of free relay-network resources under varying numbers of users, user satisfaction, and packet loss rate, show that the proposed algorithm is effective and feasible.


9.
Finding replacement candidates for accommodating a new object is an important research issue in web caching. Due to the new emerging factors in the transcoding proxy and the aggregate effect of caching multiple versions of the same multimedia object, this problem becomes more important and complex as audio and video applications have proliferated over the Internet, especially in mobile computing environments. This paper addresses coordinated cache replacement in transcoding proxies. First, we propose an original model which determines cache replacement candidates on all candidate nodes in a coordinated fashion, with the objective of minimizing the total cost loss, for a linear topology. We formulate this problem as an optimization problem and present a low-cost optimal solution for deciding cache replacement candidates. Second, we extend this approach to solve the same problem for tree networks. Finally, we conduct extensive simulations to evaluate the performance of our solutions by comparing them with existing models.

10.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and decrease network bandwidth consumption. Thus, many research papers have focused on improving web caching performance with efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism; however, many technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses object clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data grouping overhead. It takes into consideration the access speed disparity between memory and disk and replaces expensive small disk I/Os with fewer large ones. If a client request cannot be fulfilled from the server's memory, the system fetches the whole cluster containing the required object into memory, so future requests for other objects in the same cluster can be satisfied directly from memory and slow disk I/Os are avoided. It also introduces a simple and efficient data duplication algorithm; little maintenance work needs to be done when servers join, leave, or fail. Along with the local caching strategy, Tulip achieves better fault tolerance and load balancing capability with minimal cost. Our simulation results show that Tulip performs better than previous approaches.
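A minimal sketch of cluster-level hash routing in the spirit of Tulip follows; the directory-prefix clustering rule and proxy list are assumptions, and Tulip's locality grouping, memory/disk management, and duplication scheme are omitted.

```python
import hashlib

SERVERS = ["proxy-1", "proxy-2", "proxy-3"]        # hypothetical proxy servers

def cluster_of(url):
    """Assumed clustering rule: objects under the same directory prefix are
    likely to be accessed together, so they share one cluster key."""
    return url.rsplit("/", 1)[0]

def server_for(url):
    """Hash the cluster key (not the single object) so that all objects of a
    cluster land on the same proxy and can be brought into memory together."""
    key = cluster_of(url).encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

print(server_for("http://example.com/photos/a.jpg"))
print(server_for("http://example.com/photos/b.jpg"))   # same cluster -> same proxy
```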

11.
1 Overview. After the rapid build-out of community broadband networks in 2000 and 2001, the task facing China's ISPs is how to offer value-added services over the broadband networks that have already been built. Many ISPs are trying to deploy streaming media services over broadband, such as video-on-demand (VOD) systems. However, the bandwidth and real-time requirements of streaming media mean that streaming servers must be able to perform end-to-end congestion control and quality adaptation. Due to

12.
It is expected that by 2003, continuous media will account for more than 50% of the data available on origin servers, which will provoke a significant change in Internet workload. Due to the high bandwidth requirements and the long-lived nature of digital video, streaming server loads and network bandwidth are proving to be major limiting factors. Targeting the characteristics of residential broadband networks, we propose a popularity-based server-proxy caching strategy for streaming media. According to a streaming object's popularity on the streaming server and the proxy, this strategy caches its content partially or completely, and plays an important role in decreasing server load, reducing the traffic from the streaming server to the proxy, and improving the client's startup latency.
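One simple reading of "caches the content partially or completely according to popularity" is to keep a prefix whose length grows with popularity; the thresholds and prefix rule in the sketch below are assumptions, not the paper's exact policy.

```python
def cached_fraction(popularity_rank, full_threshold=10, partial_threshold=100):
    """Decide how much of a stream the proxy keeps, based on its popularity
    rank (1 = most popular). The thresholds and the 1/rank prefix rule are
    illustrative assumptions."""
    if popularity_rank <= full_threshold:
        return 1.0                                # hot objects: cache completely
    if popularity_rank <= partial_threshold:
        return full_threshold / popularity_rank   # warm objects: cache a prefix
    return 0.0                                    # cold objects: do not cache

for rank in (1, 10, 25, 50, 200):
    print(rank, f"{cached_fraction(rank):.2f}")
```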

13.
The delivery of multimedia over the Internet is affected by adverse network conditions such as high packet loss rate and long delay. This paper aims at mitigating such effects by leveraging client-side caching proxies. We present a novel cache architecture and associated cache management algorithms that turn edge caches into accelerators of streaming media delivery. This architecture allows partial caching of media objects and joint delivery from caches and origin servers. Most importantly, the caching algorithms are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit rate requirements, and the available bandwidth between clients and servers. Using Internet bandwidth models derived from proxy cache logs and measured over real Internet paths, we have conducted extensive simulations to evaluate the performance of various cache management algorithms. Our experiments demonstrate that network-aware caching algorithms can significantly reduce startup delay and improve stream quality. Our experiments also show that partial caching is particularly effective when bandwidth variability is not very high. Correspondence to: Shudong Jin. This research was supported in part by NSF (awards ANI-9986397, ANI-0095988, ANI-0205294 and EJA-0202067) and by IBM. Part of this work was done while the first author was at IBM Research in 2001.

14.
With the falling price of memory, an increasing number of multimedia servers and proxies are now equipped with a large memory space. Caching media objects in the memory of a proxy helps to reduce the network traffic, the disk I/O bandwidth requirement, and the data delivery latency. The running buffer approach and its alternatives are representative techniques for caching streaming data in memory. There are two limitations in the existing techniques. First, although multiple running buffers for the same media object co-exist in a given processing period, data sharing among multiple buffers is not considered. Second, user access patterns are not insightfully considered in the buffer management. In this paper, we propose two techniques based on shared running buffers in the proxy to address these limitations. Considering user access patterns and characteristics of the requested media objects, our techniques adaptively allocate memory buffers to fully utilize the currently buffered data of streaming sessions, with the aim of reducing both the server load and the network traffic. Comparing experimentally with several existing techniques, we show that the proposed techniques achieve significant performance improvement by effectively using the shared running buffers.

15.

Cognitive radio (CR) technology has been demonstrated as one of the key technologies that can provide the spectrum bands needed to support the emerging spectrum-hungry multimedia applications and services in next-generation wireless networks. Multicast routing plays a significant role in most wireless networks that require multimedia data dissemination to a group of destinations through single-hop or multi-hop communication. Performing multimedia multicasting over CR networks can significantly improve the quality of multimedia transmissions by effectively exploiting the available spectrum, reducing network traffic, and minimizing communication cost. An important challenge in this domain is how to perform multicast transmissions over multiple hops in a dynamically varying CR environment while maintaining high-quality received video streaming at all multicast CR receivers without affecting the performance of legacy primary radio networks (PRNs). In this paper, we investigate the problem of multicast multimedia streaming in multi-hop CR networks (CRNs). Specifically, we propose an intelligent multicast routing protocol for multi-hop ad hoc CRNs that can effectively support multimedia streaming. The proposed protocol consists of path selection and channel assignment phases for the different multicast receivers. It is based on the shortest path tree (SPT) and implements the expected transmission count metric (ETX). The channel selection is based on the ETX, which is a function of the probability of success (POS) over the different channels and depends on channel quality and availability. Simulation results verify the significant improvement achieved by the proposed protocol compared to other existing multicast routing protocols under different network conditions.
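ETX is commonly defined as the inverse of the probability that a frame and its acknowledgment both succeed; the sketch below applies that standard definition to pick the lowest-ETX channel on a hop (the per-channel success probabilities are hypothetical, and this is not the paper's full protocol).

```python
def etx(p_forward, p_reverse):
    """Expected transmission count, usually defined as 1 / (df * dr): the
    inverse probability that the data frame and its ACK both succeed."""
    p = p_forward * p_reverse
    return float("inf") if p == 0 else 1.0 / p

def best_channel(channel_stats):
    """Pick the available channel with the smallest ETX.
    channel_stats: {channel: (p_forward, p_reverse)} -- hypothetical values."""
    return min(channel_stats, key=lambda ch: etx(*channel_stats[ch]))

link_channels = {"ch-1": (0.9, 0.8), "ch-2": (0.95, 0.95), "ch-3": (0.6, 0.9)}
print(best_channel(link_channels))                              # ch-2
print({ch: round(etx(*p), 2) for ch, p in link_channels.items()})
```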


16.
With the exponential growth of Internet services over the past few decades, traffic to major websites has risen sharply. Massive numbers of user requests mean that the request rate of a popular site can surge within seconds. Once servers cannot withstand such highly concurrent requests, the resulting network congestion and latency severely degrade the user experience. Load balancing is a key component of a highly available network infrastructure: by introducing a load balancer in front of the backend and distributing the workload across multiple servers, it relieves the enormous pressure that massive concurrent requests place on servers and improves the performance and reliability of backend servers and databases. Nginx, a high-performance HTTP and reverse proxy server, is being applied in practice more and more widely. This paper analyzes the load balancing architecture of the Nginx server, studies its default weighted round-robin algorithm, and proposes an improved dynamic load balancing algorithm that collects load information in real time and recomputes and reassigns weights. Experimental tests comparing load balancing performance under different algorithms show that the improved algorithm can effectively improve the performance of a server cluster.
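Nginx's default upstream balancing uses the well-known smooth weighted round-robin rule; a minimal re-implementation is sketched below (server names and weights are made up), and a dynamic variant like the one proposed here would periodically recompute these weights from collected load information.

```python
class SmoothWeightedRR:
    """Smooth weighted round-robin, the selection rule used by Nginx's default
    upstream balancing. Server names and weights here are illustrative."""

    def __init__(self, weights):
        self.weights = dict(weights)              # static (or recomputed) weights
        self.current = {s: 0 for s in weights}    # running per-server credit

    def pick(self):
        total = sum(self.weights.values())
        for server, weight in self.weights.items():
            self.current[server] += weight        # every server earns its weight
        chosen = max(self.current, key=self.current.get)
        self.current[chosen] -= total             # the winner pays the full total
        return chosen

lb = SmoothWeightedRR({"backend-a": 5, "backend-b": 1, "backend-c": 1})
print([lb.pick() for _ in range(7)])  # a, a, b, a, c, a, a -- spread smoothly
```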

17.
Anonymity technologies enable Internet users to maintain a level of privacy that prevents the collection of identifying information such as the IP address. Understanding the deployment of anonymity technologies on the Internet is important for analyzing current and future trends. In this paper, we provide a tutorial survey and a measurement study to understand anonymity technology usage on the Internet from multiple perspectives and platforms. First, we review currently utilized anonymity technologies and assess their usage levels. For this, we cover deployed contemporary anonymity technologies including proxy servers, remailers, JAP, I2P, and Tor, with the geo-location of deployed servers. Among these systems, proxy servers, Tor, and I2P are actively used, while remailers and JAP have minimal usage. Then, we analyze application-level protocol usage and anonymity technology usage with different applications. For this, we perform a measurement study by collecting data from a Tor exit node, a P2P client, a large campus network, a departmental email server, and publicly available data on spam sources to assess the utilization of anonymizer technologies from various perspectives. Our results confirm previous findings regarding application usage and server geo-location distribution, where certain countries utilize anonymity networks significantly more than others. Moreover, our application analysis reveals that Tor and proxy servers are used more than other anonymity techniques.

18.
With the wide availability of high-speed network access, we are experiencing high quality streaming media delivery over the Internet. The emergence of ubiquitous computing enables mobile users to access the Internet with their laptops, PDAs, or even cell phones. When nomadic users connect to the network via wireless links or phone lines, high quality video transfer can be problematic due to long delay or size mismatch between the application display and the screen. Our proposed solution to this problem is to enable network proxies with transcoding capability, and hence provide different, appropriate video quality to different network environments. The proxies in our transcoding-enabled caching (TeC) system perform transcoding as well as caching for efficient rich media delivery to heterogeneous network users. This design choice allows us to perform content adaptation at the network edges. We propose three different TeC caching strategies. We describe each algorithm and discuss its merits and shortcomings. We also study how the user access pattern affects the performance of TeC caching algorithms and compare them with other approaches. We evaluate TeC performance by conducting two types of simulation. Our first experiment uses synthesized traces while the other uses real traces derived from an enterprise media server's logs. The results indicate that compared with traditional network caches, with marginal transcoding load, TeC improves the cache effectiveness, decreases the user-perceived latency, and reduces the traffic between the proxy and the content origin server.
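A toy sketch of the basic lookup decision in a transcoding-enabled cache (serve a cached version, transcode down from a cached higher-bitrate version, or fetch from the origin) follows; the bitrate variants and the "transcode down only" rule are assumptions, and the three TeC caching strategies themselves are not modeled.

```python
def serve(request_kbps, cached_versions_kbps):
    """Decide how a transcoding-enabled cache answers a request for a target
    bitrate. Only transcoding *down* from a cached higher-bitrate version is
    assumed to be possible."""
    if request_kbps in cached_versions_kbps:
        return ("cache-hit", request_kbps)
    higher = [v for v in cached_versions_kbps if v > request_kbps]
    if higher:
        return ("transcode-from", min(higher))   # cheapest usable cached version
    return ("fetch-from-origin", request_kbps)

cached = {1500, 700}          # bitrates (kbps) of the versions already cached
for want in (1500, 700, 300, 2500):
    print(want, serve(want, cached))
```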

19.
Computer Networks, 2003, 41(3): 347-362
This paper presents handoff management schemes for synchronization algorithms for wireless multimedia systems. The synchronization and handoff management schemes allow mobile hosts to receive time-dependent multimedia streams without delivery interruption while moving from one cell to another. They also maintain the correct ordering of the media components throughout the execution of the wireless multimedia application by means of timestamped messages passed among mobile hosts, base stations, and servers. The timestamp values are used to compute the delay of each multimedia unit for each server. Furthermore, the proposed schemes always search for a quasi-receiver among the base stations with which the mobile hosts can communicate to synchronize multimedia units. We discuss the algorithms and present a set of simulation experiments that evaluate the performance of our schemes, using message complexity and buffer usage at each frame arrival time. Our results indicate that our schemes exhibit no underflow or overflow within the bounded delivery time.

20.
Caching is one of the most important schemes for improving the performance of continuous media servers. Continuous object caching enables a server to support more clients simultaneously, since it reduces the disk load imposed at each round. However, without a quantitative analysis of the disk load reduction induced by caching, the caching effect cannot be reflected in the admission control scheme, which limits the number of simultaneous clients serviced. In this paper, we define a performance metric for caching schemes in the continuous media server, define optimal caching, formalize three heuristic block replacement models, and propose a novel near-optimal caching scheme. For a quantitative analysis of the proposed scheme, we also propose a probabilistic model of the caching effect in a continuous media server. The proposed model enables the development of efficient statistical admission control algorithms that can increase the number of clients serviced simultaneously. To show the potential of the model, we present a simple example of a statistical admission control algorithm and demonstrate the performance enhancement resulting from the use of the proposed model.
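A very small illustration of how an expected cache hit ratio could enter a statistical admission check (the discounting rule and all numbers below are made-up assumptions, not the paper's probabilistic model):

```python
def can_admit(active_rates_mbps, new_rate_mbps, disk_bandwidth_mbps, cache_hit_ratio):
    """Admit the new stream only if the expected disk load, i.e. the aggregate
    stream rate discounted by the expected cache hit ratio, still fits within
    the disk bandwidth. All parameters are illustrative assumptions."""
    expected_disk_load = (sum(active_rates_mbps) + new_rate_mbps) * (1 - cache_hit_ratio)
    return expected_disk_load <= disk_bandwidth_mbps

active = [4.0] * 20                      # twenty 4 Mbit/s streams already admitted
print(can_admit(active, 4.0, disk_bandwidth_mbps=80, cache_hit_ratio=0.0))   # False
print(can_admit(active, 4.0, disk_bandwidth_mbps=80, cache_hit_ratio=0.25))  # True
```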
