Similar Articles
20 similar articles found.
1.
Haonan  Derek L.  Mary K.   《Performance Evaluation》2002,49(1-4):387-410
Previous analyses of scalable streaming protocols for delivery of stored multimedia have largely focused on how the server bandwidth required for full-file delivery scales as the client request rate increases or as the start-up delay is decreased. This previous work leaves unanswered three questions that can substantively impact the desirability of using these protocols in some application domains, namely:

- Are simpler scalable download protocols preferable to scalable streaming protocols in contexts where substantial start-up delays can be tolerated?
- If client requests are for (perhaps arbitrary) intervals of the media file rather than the full file, are there conditions under which streaming is not scalable (i.e., no streaming protocol can achieve sub-linear scaling of required server bandwidth with request rate)?
- For systems delivering a large collection of objects with a heavy-tailed distribution of file popularity, can scalable streaming substantially reduce the total server bandwidth requirement, or will this requirement be largely dominated by the required bandwidth for relatively cold objects?

This paper addresses these questions primarily through the development of tight lower bounds on required server bandwidth, under the assumption of Poisson, independent client requests. Implications for other arrival processes are also discussed. Previous work and results presented in this paper suggest that these bounds can be approached by implementable policies. With respect to the first question, the results show that scalable streaming protocols require significantly lower server bandwidth in comparison to download protocols for start-up delays up to a large fraction of the media playback duration. For the second question, we find that in the worst-case interval access model, the minimum required server bandwidth, assuming immediate service to each client, scales as the square root of the request rate. Finally, for the third question, we show that scalable streaming can provide a factor of log K improvement in the total minimum required server bandwidth for immediate service, as the number of objects K is scaled, for systems with fixed minimum object request popularity.  相似文献   
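For context, the flavor of these bounds can be written compactly. The logarithmic full-file bound below is the classic immediate-service result from this line of work under Poisson arrivals (with λ the request rate and T the playback duration); the square-root form simply restates the worst-case interval-access scaling quoted in the abstract, up to constant factors.

```latex
% Illustrative form of the bounds discussed above, normalized to the media
% playback rate. The logarithmic bound is the classic full-file,
% immediate-service result for Poisson arrivals; the square-root form is the
% worst-case interval-access scaling reported in the abstract, stated only up
% to constant factors.
\[
  B_{\text{full-file}} \;\ge\; \ln\!\bigl(1 + \lambda T\bigr),
  \qquad
  B_{\text{interval, worst case}} \;=\; \Theta\!\bigl(\sqrt{\lambda}\,\bigr)
\]
```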


2.
adPD: A Speed-Adaptive Dynamic Parallel Download Technique   Total citations: 5 (self-citations: 0, by others: 5)
Building on a review of existing parallel download algorithms, this paper proposes a new speed-adaptive dynamic parallel download mechanism, adPD. By dynamically assigning download tasks of different sizes to connections of different speeds, adPD adapts well to changes in connection throughput, allocating work in proportion to each connection's speed and making full use of the available bandwidth. At the same time, by partitioning the file into variable-sized chunks, adPD minimizes the number of data requests issued and shortens the idle time spent waiting on requests, reducing the load on the serving nodes while increasing download speed. Finally, experimental results are analyzed to evaluate adPD's practical performance and confirm that it is an efficient parallel download algorithm.
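For illustration, a minimal sketch of the speed-proportional allocation idea described above follows; it is not the authors' adPD implementation, and the window size, speed estimation, and minimum block size are assumptions.

```typescript
// Hypothetical sketch of speed-proportional chunk assignment (not the real adPD code).

interface Connection {
  id: string;
  measuredBps: number;   // recent throughput estimate for this connection
}

interface Chunk {
  connectionId: string;
  offset: number;        // byte offset within the file
  length: number;        // variable chunk size, proportional to speed
}

/**
 * Split the next `windowBytes` of the file (starting at `offset`) among the
 * connections, giving each a share proportional to its measured speed so that
 * all connections are expected to finish their chunks at about the same time.
 */
function assignChunks(
  connections: Connection[],
  offset: number,
  windowBytes: number,
  minChunk = 64 * 1024,          // assumed floor to avoid tiny requests
): Chunk[] {
  const totalBps = connections.reduce((s, c) => s + c.measuredBps, 0) || 1;
  const chunks: Chunk[] = [];
  let cursor = offset;
  let remaining = windowBytes;

  connections.forEach((c, i) => {
    // The last connection takes whatever is left so the window is covered exactly.
    const isLast = i === connections.length - 1;
    const share = isLast
      ? remaining
      : Math.max(minChunk, Math.floor((windowBytes * c.measuredBps) / totalBps));
    const length = Math.min(share, remaining);
    if (length > 0) {
      chunks.push({ connectionId: c.id, offset: cursor, length });
      cursor += length;
      remaining -= length;
    }
  });
  return chunks;
}

// Example: a fast and a slow mirror splitting a 4 MiB window roughly 3:1.
const plan = assignChunks(
  [{ id: "mirrorA", measuredBps: 3_000_000 }, { id: "mirrorB", measuredBps: 1_000_000 }],
  0,
  4 * 1024 * 1024,
);
console.log(plan);
```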

3.
《Computer Networks》2007,51(3):901-917
Peer-to-peer networks have been commonly used for tasks such as file sharing or file distribution. We study a class of cooperative file distribution systems where a file is broken up into many chunks that can be downloaded independently. The different peers cooperate by mutually exchanging the different chunks of the file, each peer being client and server at the same time. While such systems are already in widespread use, little is known about their performance and scaling behavior. We develop analytic models that provide insights into how long it takes to deliver a file to N clients given a distribution architecture. Our results indicate that even for the case of heterogeneous client populations it is possible to achieve download times that are almost independent of the number of clients and very close to optimal.  相似文献   
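To build intuition for why chunk exchange keeps delivery time nearly flat in the number of clients, the toy round-based simulation below can help; it uses idealized assumptions (unit upload capacity, one chunk per round, chunks usable in the round they arrive) rather than the paper's analytic models.

```typescript
// Toy model: one seed plus N peers, file split into C chunks, each node can
// upload at most one chunk per round and each peer downloads at most one chunk
// per round. Counts the rounds until every peer holds every chunk. With
// chunking, the round count stays nearly flat in N once C is moderately large.
function roundsToFinish(numPeers: number, numChunks: number): number {
  // have[i] = set of chunks held by node i; node 0 is the seed with everything.
  const have: Set<number>[] = [new Set(Array.from({ length: numChunks }, (_, c) => c))];
  for (let i = 0; i < numPeers; i++) have.push(new Set());

  let rounds = 0;
  const done = () => have.slice(1).every((s) => s.size === numChunks);

  while (!done()) {
    rounds++;
    const busy = new Set<number>();          // uploaders already used this round
    // Greedy matching: each needy peer grabs one missing chunk from a free holder.
    for (let peer = 1; peer < have.length; peer++) {
      for (let chunk = 0; chunk < numChunks; chunk++) {
        if (have[peer].has(chunk)) continue;
        const src = have.findIndex((s, idx) => !busy.has(idx) && s.has(chunk));
        if (src >= 0) {
          busy.add(src);
          have[peer].add(chunk);             // applied immediately; a simplification
          break;
        }
      }
    }
  }
  return rounds;
}

for (const n of [4, 16, 64]) {
  console.log(`peers=${n} chunks=32 -> rounds=${roundsToFinish(n, 32)}`);
}
```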

4.
Minimizing delivery cost in scalable streaming content distribution systems   Total citations: 1 (self-citations: 0, by others: 1)
Recent scalable multicast streaming protocols for on-demand delivery of media content offer the promise of greatly reduced server and network bandwidth. However, a key unresolved issue is how to design scalable content distribution systems that place replica servers closer to various client populations and route client requests and response streams so as to minimize the total server and network delivery cost. This issue is significantly more complex than the design of distribution systems for traditional Web files or unicast on-demand streaming, for two reasons. First, closest server and shortest path routing does not minimize network bandwidth usage; instead, the optimal routing of client requests and server multicasts is complex and interdependent. Second, the server bandwidth usage increases with the number of replicas. Nevertheless, this paper shows that the complex replica placement and routing optimization problem, in its essential form, can be expressed fairly simply, and can be solved for example client populations and realistic network topologies. The solutions show that the optimal scalable system can differ significantly from the optimal system for conventional delivery. Furthermore, simple canonical networks are analyzed to develop insights into effective heuristics for near-optimal placement and routing. The proposed new heuristics can be used for designing large and heterogeneous systems that are of practical interest. For a number of example networks, the best heuristics produce systems with total delivery cost that is within 16% of optimality.  相似文献   

5.
阳鑫磊  何倩  曹礼  王士成 《计算机科学》2017,44(11):268-272, 283
Remote sensing data are growing rapidly, and large-scale distribution of such data puts enormous pressure on centralized distribution servers. To make full use of the network resources of the participating download nodes, this paper proposes and implements a P2P large-scale remote sensing data distribution system that supports access control. The system consists of two parts: a remote sensing data management platform and a remote sensing data client. The management platform comprises four components (a sharing and distribution web portal, cloud storage, a seed resource server, and a tracker server), while the clients and the seed resource server together form a P2P network. A P2P protocol covering piece sharing, piece selection, and tracker communication is designed; the implemented system can upload remote sensing data and seed it automatically, and it enforces access control over users. Downloads proceed according to user permissions, with the downloading nodes sharing pieces and accelerating distribution via a BitTorrent-like protocol. Experimental results show that the implemented system is functionally complete, exhibits good concurrent performance with multiple downloading nodes, and can meet the needs of large-scale remote sensing data distribution.

6.
With the increasing client population and the explosive volume of Internet media content, peer-to-peer networking technologies and systems provide a rapid and scalable content distribution mechanism in global networks. The BitTorrent protocol and its derivatives are among the most popular peer-to-peer file sharing applications and contribute a dominant fraction of today's Internet traffic. In this paper, we conduct a performance measurement and analysis of BitTorrent systems using an extensive volume of real trace logs. We use several downloading-side metrics, including overall downloading time, maximum downloading bandwidth, average bandwidth utilization, maximum number of downloading connections, and average number of active connections, to derive various interesting results about the downloading-side aspect of network resource usage. The performance examination yields many new observations and insights into the behavior of BitTorrent protocols and systems, thereby providing useful guidance for bandwidth allocation and connection control in BitTorrent client applications. This study is therefore complementary to many previous research works that mainly focused on system-oriented and uploading-side performance measurements.

7.
Dynamic batching policies for an on-demand video server   Total citations: 18 (self-citations: 0, by others: 18)
In a video-on-demand environment, continuous delivery of video streams to the clients is guaranteed by sufficient reserved network and server resources. This leads to a hard limit on the number of streams that a video server can deliver. Multiple client requests for the same video can be served with a single disk I/O stream by sending (multicasting) the same data blocks to multiple clients (with the multicast facility, if present in the system). This is achieved by batching (grouping) requests for the same video that arrive within a short time. We explore the role of customer-waiting time and reneging behavior in selecting the video to be multicast. We show that a first come, first served (FCFS) policy that schedules the video with the longest outstanding request can perform better than the maximum queue length (MQL) policy that chooses the video with the maximum number of outstanding requests. Additionally, multicasting is better exploited by scheduling playback of the most popular videos at predetermined, regular intervals (hence, termed FCFS-). If user reneging can be reduced by guaranteeing that a maximum waiting time will not be exceeded, then performance of FCFS- is further improved by selecting the regular playback intervals as this maximum waiting time. For an empirical workload, we demonstrate a substantial reduction (of the order of 60%) in the required server capacity by batching.  相似文献   
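A compact sketch of the two scheduling policies compared above follows; the queue representation and tie-breaking are assumptions, not the paper's simulator.

```typescript
// Simplified batching-policy sketch: pending requests are grouped per video,
// and when a server stream frees up we pick one video to multicast to its
// whole batch.

interface Request {
  videoId: string;
  arrivalTime: number;   // e.g. milliseconds since start
}

/** FCFS: serve the video that has the longest-outstanding (oldest) request. */
function pickFCFS(pending: Request[]): string | undefined {
  const oldest = pending.reduce<Request | undefined>(
    (best, r) => (best === undefined || r.arrivalTime < best.arrivalTime ? r : best),
    undefined,
  );
  return oldest?.videoId;
}

/** MQL: serve the video with the maximum number of outstanding requests. */
function pickMQL(pending: Request[]): string | undefined {
  const counts = new Map<string, number>();
  for (const r of pending) counts.set(r.videoId, (counts.get(r.videoId) ?? 0) + 1);
  let best: string | undefined;
  let bestCount = -1;
  for (const [video, count] of counts) {
    if (count > bestCount) { best = video; bestCount = count; }
  }
  return best;
}

// Example: three requests for "A" arrived recently, while one request for "B"
// has waited longest. MQL would batch "A"; FCFS would serve "B" first.
const queue: Request[] = [
  { videoId: "B", arrivalTime: 0 },
  { videoId: "A", arrivalTime: 40 },
  { videoId: "A", arrivalTime: 55 },
  { videoId: "A", arrivalTime: 60 },
];
console.log(pickFCFS(queue), pickMQL(queue)); // "B" "A"
```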

8.
To address the low efficiency of traditional single-threaded browser downloads and their heavy dependence on a single target server, this work proposes an HTML5-based multi-threaded download technique for the browser side. Multi-threaded downloading in the browser is implemented with HTML5 Web Workers; segmented downloading enables multi-source downloads of a single file; and the HTML5 File System API combined with Blob objects is used to merge the file segments in the browser. Experimental results show that for large files, or in network environments with high latency and high packet loss, the proposed method is clearly more efficient than single-threaded downloading.
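A minimal browser-side sketch of the segmented download and Blob merge described above follows; the paper additionally dispatches each segment to a Web Worker and persists pieces with the File System API, which are omitted here, and the mirror URLs are hypothetical.

```typescript
// Minimal sketch of segmented (Range-based) downloading and Blob merging in
// the browser. Everything runs on the main thread and in memory to keep the
// example short; the technique described above runs each segment fetch inside
// an HTML5 Web Worker instead. Mirror URLs below are hypothetical.

async function contentLength(url: string): Promise<number> {
  const resp = await fetch(url, { method: "HEAD" });
  return Number(resp.headers.get("Content-Length") ?? 0);
}

async function fetchSegment(url: string, start: number, end: number): Promise<Blob> {
  // HTTP Range is inclusive on both ends; expect a 206 Partial Content reply.
  const resp = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  if (resp.status !== 206) throw new Error(`Range request failed: ${resp.status}`);
  return resp.blob();
}

/** Download one file in `parts` segments, possibly from several mirrors, then merge. */
async function segmentedDownload(mirrors: string[], parts: number): Promise<Blob> {
  const total = await contentLength(mirrors[0]);
  const segSize = Math.ceil(total / parts);

  const jobs: Promise<Blob>[] = [];
  for (let i = 0; i < parts; i++) {
    const start = i * segSize;
    if (start >= total) break;
    const end = Math.min(total - 1, start + segSize - 1);
    const mirror = mirrors[i % mirrors.length];     // round-robin across sources
    jobs.push(fetchSegment(mirror, start, end));
  }
  const segments = await Promise.all(jobs);         // concurrent segment downloads
  return new Blob(segments);                        // merge pieces in order
}

// Usage (hypothetical URLs):
// segmentedDownload(["https://mirror-a.example/big.iso",
//                    "https://mirror-b.example/big.iso"], 8)
//   .then((blob) => console.log(blob.size));
```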

9.
Cloud storage services usually deliver content over the Hypertext Transfer Protocol (HTTP). When a large number of clients request the same file from a cloud storage server within a short period, the cloud side suffers excessive bandwidth pressure and clients experience slow downloads. To address this problem effectively, a fast content distribution method for cloud platforms that incorporates Peer-to-Peer (P2P) technology is proposed; it builds a dynamic HTTP/P2P protocol switching mechanism into the content distribution process to achieve fast delivery. Four protocol-switching metrics are selected, covering user type, quality of service, time benefit, and bandwidth benefit, and the proposed dynamic switching method is implemented on an OpenStack cloud platform. Experimental results show that, compared with content distribution over HTTP or P2P alone, the dynamic protocol switching method ensures that clients always obtain short download times, while effectively saving the service provider's bandwidth when the number of P2P clients is large.
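A rough sketch of the kind of switching rule such a mechanism might apply follows; the metric names mirror the four indicators listed above, but the weights and thresholds are invented for illustration and are not the paper's values.

```typescript
// Hypothetical protocol-switching rule based on the four metrics named above.
// Weights and the threshold are illustrative, not the paper's values.

type Protocol = "HTTP" | "P2P";

interface SwitchMetrics {
  premiumUser: boolean;      // user type: premium users may be kept on HTTP
  qosScore: number;          // 0..1, current quality of service over HTTP
  timeBenefit: number;       // 0..1, estimated download-time gain from P2P
  bandwidthBenefit: number;  // 0..1, estimated server-bandwidth saving from P2P
}

function chooseProtocol(m: SwitchMetrics, swarmSize: number): Protocol {
  // P2P only pays off once enough peers hold the file.
  if (swarmSize < 4) return "HTTP";
  if (m.premiumUser && m.qosScore > 0.8) return "HTTP";

  const score =
    0.4 * m.timeBenefit +
    0.4 * m.bandwidthBenefit +
    0.2 * (1 - m.qosScore);        // poor HTTP QoS pushes toward P2P
  return score > 0.5 ? "P2P" : "HTTP";
}

console.log(chooseProtocol(
  { premiumUser: false, qosScore: 0.3, timeBenefit: 0.7, bandwidthBenefit: 0.8 },
  20,
)); // "P2P"
```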

10.
In Internet live video streaming services, bandwidth constraints make it difficult for a server to support a large number of concurrent clients. To address this problem, this paper proposes a P2P approach to increase the number of concurrent nodes: by downloading from multiple sources and reassembling the file from chunks, playback can proceed while the download is still in progress, ensuring that the video stream plays back completely and smoothly.

11.
In this paper, we propose and evaluate the performance of a continuous media delivery technique, called threshold-based multicast. Similar to patching, threshold-based multicast allows two clients that request the same video to share a channel without having to delay the earlier request. It ensures sharing by permitting the client with the later arrival time to join an ongoing multicast session initiated for the earlier request. However, threshold-based multicast does not allow a later arriving client to always join an ongoing multicast session. If it has been some time since the ongoing multicast session was started, a new multicast session is initiated. That is, a threshold is used to control the frequency at which new multicast sessions are started. We derive the optimal threshold that minimizes the server bandwidth required. Our analytical result shows that threshold-based multicast significantly reduces the server bandwidth requirement. Furthermore, we perform a simulation study demonstrating the performance gain of continuous media delivery by threshold-based multicast  相似文献   
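A simplified sketch of the admission decision described above follows; the threshold is left as a parameter here, whereas the paper derives its optimal value analytically.

```typescript
// Simplified threshold-based multicast admission logic. `threshold` stands in
// for the analytically optimal value derived in the paper.

interface Session {
  videoId: string;
  startTime: number;                  // when the full multicast started
}

type Decision =
  | { kind: "join"; session: Session; patchSeconds: number }  // unicast patch for the missed prefix
  | { kind: "new"; session: Session };                        // start a fresh multicast

function admit(
  videoId: string,
  now: number,
  ongoing: Session | undefined,
  threshold: number,
): Decision {
  if (ongoing && ongoing.videoId === videoId) {
    const elapsed = now - ongoing.startTime;
    if (elapsed <= threshold) {
      // Recent enough: join the ongoing multicast and patch the missed prefix.
      return { kind: "join", session: ongoing, patchSeconds: elapsed };
    }
  }
  // Too late (or nothing ongoing): start a new multicast session.
  return { kind: "new", session: { videoId, startTime: now } };
}

// Example: with a 60 s threshold, a request 40 s after the session start joins
// and receives a 40 s patch; a request 90 s after start triggers a new session.
console.log(admit("movie-1", 40, { videoId: "movie-1", startTime: 0 }, 60));
console.log(admit("movie-1", 90, { videoId: "movie-1", startTime: 0 }, 60));
```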

12.
Peer-to-Peer (P2P) file sharing accounts for a very significant part of the Internet's traffic, affecting the performance of other applications and translating into significant peering costs for ISPs. It has been noticed that, just like WWW traffic, P2P file sharing traffic shows locality properties, which are not exploited by current P2P file sharing protocols. We propose a peer selection algorithm, Adaptive Search Radius (ASR), where peers exploit locality by only downloading from those other peers which are nearest (in network hops). ASR ensures swarm robustness by dynamically adapting the distance according to file part availability. ASR aims at reducing the Internet's P2P file sharing traffic, while decreasing the download times perceived by users, providing them with an incentive to adopt this algorithm. We believe ASR to be the first locality-aware P2P file sharing system that does not require assistance from ISPs or third parties nor modification to the server infrastructure. We support our proposal with extensive simulation studies, using the eDonkey/eMule protocol on SSFNet. These show a 19 to 29% decrease in download time and a 27 to 70% reduction in the traffic carried by tier-1 ISPs. ASR is also compared (favourably) with Biased Neighbour Selection (BNS), and traffic shaping. We conclude that ASR and BNS are complementary solutions which provide the highest performance when combined. We evaluated the impact of P2P file sharing traffic on HTTP traffic, showing the benefits on HTTP performance of reducing P2P traffic. A plan for introducing ASR into eMule clients is also discussed. This will allow a progressive migration to ASR-enabled versions of eMule client software. ASR was also successfully used to download from live Internet swarms, providing significant traffic savings while finishing downloads faster.
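A rough sketch of the adaptive-radius selection follows; the distance metric, availability test, and radius step are simplifications rather than the eMule/SSFNet implementation evaluated in the paper.

```typescript
// Simplified Adaptive Search Radius (ASR) style peer selection: prefer the
// closest peers (in network hops) and only widen the radius when the parts we
// still need are not available inside the current radius.

interface Peer {
  id: string;
  hops: number;                 // estimated network distance
  parts: Set<number>;           // file parts this peer advertises
}

function selectPeers(
  candidates: Peer[],
  neededParts: Set<number>,
  initialRadius = 4,
  radiusStep = 4,
  maxRadius = 32,
): Peer[] {
  for (let radius = initialRadius; radius <= maxRadius; radius += radiusStep) {
    const nearby = candidates.filter((p) => p.hops <= radius);
    // Which of the needed parts can be found within this radius?
    const covered = new Set<number>();
    for (const p of nearby) {
      for (const part of p.parts) if (neededParts.has(part)) covered.add(part);
    }
    // Keep the radius small as long as it covers everything we still need.
    if (covered.size === neededParts.size) return nearby;
  }
  return candidates;            // fall back to all peers to keep the swarm robust
}

const peers: Peer[] = [
  { id: "lan-peer", hops: 2, parts: new Set([0, 1]) },
  { id: "far-peer", hops: 20, parts: new Set([0, 1, 2, 3]) },
];
// Part 3 is only held far away, so the radius has to grow before it is covered.
console.log(selectPeers(peers, new Set([0, 1, 2, 3])).map((p) => p.id));
```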

13.
This paper describes the design of a new type of FTP client that makes full use of local network resources to provide sharing and fast, convenient file downloads. The client is mainly intended for users who share common interests and work within the same LAN or neighboring networks, such as a research group working in the same area. It combines P2P and FTP techniques: files can be downloaded either from an FTP server or from peer clients within the LAN (or nearby networks), and the client also shares its own files for other clients to download. The design principles and key techniques of the software are presented.

14.
黄华  张建刚  许鲁 《计算机科学》2005,32(9):243-245
In the Blue Whale distributed file system, all client metadata operations are performed by the metadata server via remote procedure calls, while all data reads and writes are exchanged directly with the storage servers. Because of communication latency, metadata exchanges degrade overall system performance when clients perform frequent data reads and writes. We design a model that caches file metadata on the client as much as possible, which effectively reduces metadata communication, shortens the latency of the whole read/write path, and greatly improves the performance of the Blue Whale distributed file system.

15.
Multicast Video-on-Demand (VoD) systems are scalable and cheap to operate. In such systems, a single stream is shared by a batch of common user requests. In this research, we propose a multicast communication technique for an enterprise network where multimedia data are stored in distributed servers. We consider a novel patching scheme called Client-Assisted Patching, where the buffers of clients in a multicast group can be used to patch the missing portion for clients who request the same movie shortly afterwards. This scheme significantly reduces the server load without requiring larger client cache space than conventional patching schemes. Clients can join an existing multicast session without waiting for the next available server stream, which reduces service latency. Moreover, the system is more scalable and cost effective than similar existing systems. Our simulation experiment confirms all these claims.

16.
File downloads make up a large percentage of the Internet traffic to satisfy various clients using distributed environments for their Cloud, Grid and Internet applications. In particular, the Cloud has become a popular data storage provider and users (individuals and corporates) are relying heavily on it to keep their data. Furthermore, most cloud data servers replicate their data storage infrastructures and servers at various sites to meet the overall high demands of their clients and increase availability. However, most of them do not use that replication to enhance the download performance per client. To make use of this redundancy and to enhance the download speed, we introduce a fast and efficient concurrent technique for downloading large files from replicated Cloud data servers and traditional FTP servers as well. The technique, DDFTP utilizes the availability of replicated files on distributed servers to enhance file download times through concurrent downloads of file blocks from opposite directions in the files. DDFTP does not require coordination between the servers and relies on the in-order and reliability features of TCP to provide fast file downloads. In addition, DDFTP offers efficient load balancing among multiple heterogeneous data servers with minimal overhead. As a result, we can maximize network utilization while maintaining efficient load balancing on dynamic environments where resources, current loads and operational properties vary dynamically. We implemented and evaluated DDFTP and experimentally demonstrated considerable performance gains for file downloads compared to other concurrent/parallel file/data download models.  相似文献   
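A toy model of the opposite-direction idea behind DDFTP follows; the block bookkeeping and simulated rates are assumptions here, and the actual technique relies only on TCP's in-order delivery with no server-to-server coordination.

```typescript
// Toy model of DDFTP-style dual downloads: one replica serves blocks from the
// start of the file forward, the other from the end backward, and the client
// stops each stream once the two frontiers meet. Server speeds are simulated.

function dualDownload(totalBlocks: number, fwdBps: number, bwdBps: number): {
  fromStart: number;            // blocks fetched by the forward server
  fromEnd: number;              // blocks fetched by the backward server
} {
  let front = 0;                        // next block needed from the start
  let back = totalBlocks - 1;           // next block needed from the end
  // Simulate time in small steps; each server delivers blocks at its own rate.
  let fwdCredit = 0;
  let bwdCredit = 0;
  while (front <= back) {
    fwdCredit += fwdBps;
    bwdCredit += bwdBps;
    while (fwdCredit >= 1 && front <= back) { fwdCredit -= 1; front++; }
    while (bwdCredit >= 1 && front <= back) { bwdCredit -= 1; back--; }
  }
  return { fromStart: front, fromEnd: totalBlocks - 1 - back };
}

// A server twice as fast ends up serving roughly two thirds of the blocks, so
// the load balances automatically without any coordination between servers.
console.log(dualDownload(900, 2, 1)); // ~{ fromStart: 600, fromEnd: 300 }
```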

17.
Access to and transmission of 3D models over networks becomes increasingly popular. However, the performance and quality of access to remote 3D models strongly depends on system load conditions and the capabilities of the various system components, such as clients, servers, and interconnect. The network graphics framework (NGF) integrates various transmission methods for downloading 3D models in a client–server environment. The NGF automatically selects the optimal transmission method for a given pair of client and server, taking into account characteristics of the model to be transmitted, critical environment conditions, user preferences and the capabilities of the client and the server. The NGF aims to provide constant quality of service across different clients and under varying environment conditions.  相似文献   

18.
We study the use of non-volatile memory for caching in distributed file systems. This provides an advantage over traditional distributed file systems in that the load is reduced at the server without making the data vulnerable to failures. We propose the use of a small non-volatile cache for writes, at the client and the file server, together with a larger volatile read cache to keep the cost of the caches reasonable. We use a synthetic workload developed from analysis of file I/O traces from commercial production systems and use a detailed simulation of the distributed environment. The service times for the resources of the system were derived from measurements performed on a typical workstation. We show that non-volatile write caches at the clients and the file server reduce the write response time and the load on the file server dramatically, thus improving the scalability of the system. We examine the comparative benefits of two alternative writeback policies for the non-volatile write cache. We show that a proposed threshold based writeback policy is more effective than a periodic writeback policy under heavy load. We also investigate the effect of varying the write cache size and show that introducing a small non-volatile cache at the client in conjunction with a moderate sized non-volatile server write cache improves the write response time by a factor of four at all load levels.  相似文献   
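A minimal sketch contrasting the two writeback policies discussed above follows; the high/low water marks and flush granularity are illustrative assumptions, not values from the paper.

```typescript
// Sketch of the two writeback policies for a non-volatile write cache:
// periodic (flush everything every T seconds) vs. threshold-based (flush only
// when the dirty fraction of the cache exceeds a high-water mark).

interface NvCache {
  capacityBlocks: number;
  dirtyBlocks: number;
}

/** Threshold policy: start writing back once the cache is "too dirty". */
function thresholdWriteback(cache: NvCache, highWater = 0.75, lowWater = 0.5): number {
  const dirtyFraction = cache.dirtyBlocks / cache.capacityBlocks;
  if (dirtyFraction < highWater) return 0;               // nothing to do yet
  // Flush just enough to get back under the low-water mark.
  const target = Math.floor(cache.capacityBlocks * lowWater);
  const toFlush = cache.dirtyBlocks - target;
  cache.dirtyBlocks = target;
  return toFlush;
}

/** Periodic policy: called from a timer, flushes all dirty blocks regardless of load. */
function periodicWriteback(cache: NvCache): number {
  const toFlush = cache.dirtyBlocks;
  cache.dirtyBlocks = 0;
  return toFlush;
}

// Under heavy load the threshold policy defers and coalesces writes until the
// high-water mark is hit, instead of pushing a burst to the server every period.
const cache: NvCache = { capacityBlocks: 1000, dirtyBlocks: 800 };
console.log(thresholdWriteback(cache));  // flushes 300 blocks, leaves 500 dirty
```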

19.
Recent years have seen intensive investigations of Periodic Broadcast, an attractive paradigm for broadcasting popular videos. In this paradigm, the server simply broadcasts segments of a popular video periodically on a number of communication channels. A large number of clients can be served simultaneously by tuning into these channels to receive segments of the requested video. A playback can begin as soon as a client can access the first segment. Periodic Broadcast guarantees a small maximum service delay regardless of the number of concurrent clients. Existing periodic broadcast techniques are typically evaluated through analytical assessment. While these results are good performance indicators, they cannot demonstrate subtle implementation difficulty that can prohibit these techniques from practical deployment. In this paper, we present the design and implementation of a video broadcasting system based on our periodic broadcast scheme called Striping Broadcast. Our experience with the system confirms that the system offers a low service delay close to its analytical guaranteed delay while requiring small storage space and low download bandwidth at a client.  相似文献   
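To make the paradigm concrete, a small sketch of a generic geometric segment schedule follows; this is a textbook-style periodic-broadcast layout for illustration, not the Striping Broadcast scheme implemented in the paper.

```typescript
// Generic periodic-broadcast sketch: a video is cut into segments that grow
// geometrically in length, and segment i is broadcast repeatedly on its own
// channel. A client that catches segment 1 can start playback after at most
// one segment-1 period and, with channels slot-aligned as in classic
// fast-broadcasting schemes, receives each later segment before playback
// reaches it. Not the paper's Striping Broadcast layout.

function segmentLengths(videoSeconds: number, channels: number, ratio = 2): number[] {
  // Lengths proportional to 1, ratio, ratio^2, ... scaled to cover the video.
  const weights = Array.from({ length: channels }, (_, i) => ratio ** i);
  const totalWeight = weights.reduce((s, w) => s + w, 0);
  return weights.map((w) => (videoSeconds * w) / totalWeight);
}

// The worst-case start-up delay is one period of the first (shortest)
// segment's channel, i.e. the length of segment 1 -- independent of the
// number of concurrent clients.
function maxStartupDelay(videoSeconds: number, channels: number): number {
  return segmentLengths(videoSeconds, channels)[0];
}

// A 2-hour video on 6 channels: the first segment (and hence the worst-case
// wait) is about 114 seconds, no matter how many clients are watching.
console.log(segmentLengths(7200, 6).map((s) => Math.round(s)));
console.log(Math.round(maxStartupDelay(7200, 6)));
```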

20.
The access patterns of most information systems follow the 80/20 rules. That is, 80% of the requests are for 20% of the data. A video server can take advantage of this property by waiting for requests and serving them together in one multicast. This simple strategy, however, incurs service delay. We address this drawback in this paper by allowing clients to receive the leading portion of a video on demand, and the rest of the video from an ongoing multicast. Since clients do not have to wait for the next multicast, the service latency is essentially zero. Furthermore, since most services require the server to deliver only a small leading portion of the video, the server can serve many more clients per time unit. We analyze the performance of this approach, and determine the optimal condition for when to use this strategy. We compare its performance to a hardware solution called Piggybacking. The results indicate that more than 200% improvement is achievable.  相似文献   
