Similar Literature
20 similar documents found (search time: 31 ms)
1.
In this paper, we develop a model to study how to effectively download a document from a set of replicated servers. We propose a generalized application-layer anycasting protocol, known as paracasting, to advocate concurrent access of a subset of replicated servers to cooperatively satisfy a client's request. Each participating server satisfies the request in part by transmitting a subset of the requested file to the client. The client can recover the complete file when different parts of the file sent from the participating servers are received. This model allows us to estimate the average time to download a file from the set of homogeneous replicated servers, and the request blocking probability when each server can accept and serve a finite number of concurrent requests. Our results show that the file download time drops when a request is served concurrently by a larger number of homogeneous replicated servers, although the performance improvement quickly saturates when the number of servers increases. If the total number of requests that a server can handle simultaneously is finite, the request blocking probability increases with the number of replicated servers used to serve a request concurrently. Therefore, paracasting is effective when a small number of servers, say, up to four, are used to serve a request concurrently.
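
A minimal sketch of the concurrent partial-download idea behind paracasting, assuming the replicas are plain HTTP servers that honor Range requests (the abstract does not specify the transport); the replica URLs are placeholders, and the fixed equal split ignores server heterogeneity.

```python
# Fetch one file concurrently from several replicas, each replica serving a
# different byte range, then reassemble the parts on the client.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

REPLICAS = ["http://replica1.example/file.bin",
            "http://replica2.example/file.bin",
            "http://replica3.example/file.bin"]   # hypothetical replica URLs

def fetch_range(url, start, end):
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def paracast_download(file_size):
    n = len(REPLICAS)
    chunk = (file_size + n - 1) // n
    ranges = [(i * chunk, min((i + 1) * chunk, file_size) - 1) for i in range(n)]
    buf = bytearray(file_size)
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(fetch_range, url, s, e)
                   for url, (s, e) in zip(REPLICAS, ranges)]
        for fut in futures:
            start, data = fut.result()
            buf[start:start + len(data)] = data   # reassemble in place
    return bytes(buf)
```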

2.
File downloads make up a large percentage of the Internet traffic to satisfy various clients using distributed environments for their Cloud, Grid and Internet applications. In particular, the Cloud has become a popular data storage provider and users (individuals and corporations) are relying heavily on it to keep their data. Furthermore, most cloud data servers replicate their data storage infrastructures and servers at various sites to meet the overall high demands of their clients and increase availability. However, most of them do not use that replication to enhance the download performance per client. To make use of this redundancy and to enhance the download speed, we introduce a fast and efficient concurrent technique for downloading large files from replicated Cloud data servers and traditional FTP servers as well. The technique, DDFTP, utilizes the availability of replicated files on distributed servers to enhance file download times through concurrent downloads of file blocks from opposite directions in the files. DDFTP does not require coordination between the servers and relies on the in-order and reliability features of TCP to provide fast file downloads. In addition, DDFTP offers efficient load balancing among multiple heterogeneous data servers with minimal overhead. As a result, we can maximize network utilization while maintaining efficient load balancing in dynamic environments where resources, current loads and operational properties vary dynamically. We implemented and evaluated DDFTP and experimentally demonstrated considerable performance gains for file downloads compared to other concurrent/parallel file/data download models.
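
The opposite-direction idea can be illustrated with a small sketch: one replica serves blocks from the head of the file and the other from the tail, with no coordination beyond two shared pointers on the client; whichever replica is faster simply claims more blocks before the pointers cross. The block-fetch functions are placeholders, not the paper's DDFTP implementation.

```python
# Two replicas, two directions: download stops when the front and back
# pointers meet, so the faster replica naturally transfers more blocks.
import threading

def two_direction_download(num_blocks, fetch_front, fetch_back):
    """fetch_front(i) / fetch_back(i) return the bytes of block i from the
    two replicas (placeholders standing in for real FTP/TCP transfers)."""
    blocks = [None] * num_blocks
    front, back = 0, num_blocks - 1
    lock = threading.Lock()

    def worker(fetch, from_front):
        nonlocal front, back
        while True:
            with lock:
                if front > back:            # pointers crossed: file complete
                    return
                i = front if from_front else back
                if from_front:
                    front += 1
                else:
                    back -= 1
            blocks[i] = fetch(i)            # faster replica claims more blocks

    t1 = threading.Thread(target=worker, args=(fetch_front, True))
    t2 = threading.Thread(target=worker, args=(fetch_back, False))
    t1.start(); t2.start(); t1.join(); t2.join()
    return b"".join(blocks)
```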

3.
Anonymity technologies enable Internet users to maintain a level of privacy that prevents the collection of identifying information such as the IP address. Understanding the deployment of anonymity technologies on the Internet is important to analyze the current and future trends. In this paper, we provide a tutorial survey and a measurement study to understand the anonymity technology usage on the Internet from multiple perspectives and platforms. First, we review currently utilized anonymity technologies and assess their usage levels. For this, we cover deployed contemporary anonymity technologies including proxy servers, remailers, JAP, I2P, and Tor with the geo-location of deployed servers. Among these systems, proxy servers, Tor and I2P are actively used, while remailers and JAP have minimal usage. Then, we analyze application-level protocol usage and anonymity technology usage with different applications. For this, we perform a measurement study by collecting data from a Tor exit node, a P2P client, a large campus network, a departmental email server, and publicly available data on spam sources to assess the utilization of anonymizer technologies from various perspectives. Our results confirm previous findings regarding application usage and server geo-location distribution where certain countries utilize anonymity networks significantly more than others. Moreover, our application analysis reveals that Tor and proxy servers are used more than other anonymity techniques.

4.
Data Grids enable the sharing, selection, and connection of a wide variety of geographically distributed computational and storage resources for content needed by large-scale data-intensive applications such as high-energy physics, bioinformatics, and virtual astrophysical observatories. In Data Grids, co-allocation architectures were developed to enable parallel downloads of data sets from selected replica servers. As the Internet is usually the underlying network of a grid, network bandwidth is the main factor affecting file transfers between clients and servers. In this paradigm, there are still some challenges that need to be solved, such as reducing differences in finish times between selected replica servers, avoiding traffic congestion resulting from transferring the same blocks over different links among servers and clients, and managing network performance variations among parallel transfers. In this paper, we propose the Anticipative Recursively Adjusting Mechanism (ARAM) scheme to adjust the workloads on selected replica servers and handle unpredictable variations in network performance by those servers. Our algorithm uses the finish rates of previously assigned transfers to anticipate the bandwidth status for the next section, adjust workloads, and reduce file transfer times in grid environments. Our approach is useful in grid environments with unstable network links. It not only reduces idle time wasted waiting for the slowest server, but also decreases file transfer completion times.
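
A rough sketch of the anticipation step, under the assumption that each server's next workload is simply sized in proportion to the transfer rate it achieved in the previous round; the paper's actual recursive adjustment (ARAM) is more elaborate than this.

```python
# Size the next round's assignment per replica server from the rates observed
# in the previous round, so slow or congested links get smaller assignments.
def next_round_assignment(prev_bytes, prev_seconds, round_size):
    rates = [b / t for b, t in zip(prev_bytes, prev_seconds)]   # observed rates
    total = sum(rates)
    return [round_size * r / total for r in rates]              # bytes per server

# Example: server B finished its last slice twice as fast as A,
# so it receives two thirds of the next 90 MB section.
print(next_round_assignment([30e6, 30e6], [10.0, 5.0], 90e6))
# -> [30000000.0, 60000000.0]
```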

5.
High-performance Web sites rely on Web server "farms", hundreds of computers serving the same content, for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching. Proxy caching replicates Web content throughout the Internet, thereby reducing latency from network delays and off-loading traffic from the primary servers. Web server farms scale resources at a single site, reducing latency from queuing delays. Both technologies are essential when building a high-performance infrastructure for content delivery. The authors present a cache consistency model and locking protocol customized for file systems that are used as scalable infrastructure for Web server farms. The protocol takes advantage of the Web's relaxed consistency semantics to reduce latencies and network overhead. Our hybrid approach preserves strong consistency for concurrent write sharing, with time-based consistency and push caching for readers (Web servers). Using simulation, we compare our approach with the Andrew file system and the sequential-consistency file system protocols that we propose to replace.
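
As a hedged illustration of the reader-side, time-based consistency mentioned above (not the paper's protocol), a cached object can be served without contacting the file system until a lease expires; the lease length and data structures below are assumptions.

```python
# Time-based read consistency: serve from the cache while the lease is fresh,
# revalidate against the file system only after it expires.  Writers would
# still take strong locks (not shown).
import time

LEASE_SECONDS = 30.0            # assumed lease length

class CachedObject:
    def __init__(self, data):
        self.data = data
        self.fetched_at = time.monotonic()

    def fresh(self) -> bool:
        return time.monotonic() - self.fetched_at < LEASE_SECONDS

def read(cache, key, fetch_from_fs):
    obj = cache.get(key)
    if obj is None or not obj.fresh():      # lease expired: revalidate
        cache[key] = obj = CachedObject(fetch_from_fs(key))
    return obj.data
```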

6.
adPD: A Speed-Adaptive Dynamic Parallel Download Technique (total citations: 5; self-citations: 0; other citations: 5)
After reviewing existing parallel download algorithms, this paper proposes a new speed-adaptive dynamic parallel download mechanism, adPD. By dynamically assigning download tasks of different sizes to connections of different speeds, adPD adapts well to variations in connection speed, distributes the download workload in proportion to each connection's speed, and makes full use of the available bandwidth. At the same time, by dividing the file into variable-size blocks, adPD minimizes the number of data requests and shortens the idle time spent waiting on requests, which reduces the load on the serving nodes while increasing download speed. Finally, experimental results are analyzed to evaluate adPD's practical performance and verify that it is an efficient parallel download algorithm.
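
A hedged sketch of the speed-adaptive block sizing described above: each connection is assigned a block sized so that, at its currently measured speed, the block takes roughly one time slot to transfer, so faster connections get larger blocks and fewer requests are issued overall. The slot length and minimum block size are illustrative assumptions, not adPD's actual parameters.

```python
# Variable-size blocks: block size grows with the connection's measured speed.
SLOT_SECONDS = 5.0           # assumed target transfer time per block
MIN_BLOCK    = 64 * 1024     # lower bound so slow links still make progress

def next_block_size(measured_bps, remaining_bytes):
    size = int(measured_bps * SLOT_SECONDS)
    return max(MIN_BLOCK, min(size, remaining_bytes))

# A 10 Mbit/s connection gets ~6.25 MB blocks, a 1 Mbit/s connection ~625 KB.
print(next_block_size(10e6 / 8, 100 * 2**20))   # 6250000
print(next_block_size(1e6 / 8, 100 * 2**20))    # 625000
```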

7.
Peer-to-Peer (P2P) file sharing accounts for a very significant part of the Internet's traffic, affecting the performance of other applications and translating into significant peering costs for ISPs. It has been noticed that, just like WWW traffic, P2P file sharing traffic shows locality properties, which are not exploited by current P2P file sharing protocols. We propose a peer selection algorithm, Adaptive Search Radius (ASR), where peers exploit locality by only downloading from those other peers which are nearest (in network hops). ASR ensures swarm robustness by dynamically adapting the distance according to file part availability. ASR aims at reducing the Internet's P2P file sharing traffic while decreasing the download times perceived by users, providing them with an incentive to adopt this algorithm. We believe ASR to be the first locality-aware P2P file sharing system that does not require assistance from ISPs or third parties, nor modification to the server infrastructure. We support our proposal with extensive simulation studies, using the eDonkey/eMule protocol on SSFNet. These show a 19 to 29% decrease in download time and a 27 to 70% reduction in the traffic carried by tier-1 ISPs. ASR is also compared (favourably) with Biased Neighbour Selection (BNS) and traffic shaping. We conclude that ASR and BNS are complementary solutions which provide the highest performance when combined. We evaluated the impact of P2P file sharing traffic on HTTP traffic, showing the benefits on HTTP performance of reducing P2P traffic. A plan for introducing ASR into eMule clients is also discussed. This will allow a progressive migration to ASR-enabled versions of the eMule client software. ASR was also successfully used to download from live Internet swarms, providing significant traffic savings while finishing downloads faster.
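
A small sketch of the radius-adaptation idea, assuming each candidate peer is known by its hop distance and the set of file parts it holds; the names and the doubling policy are illustrative, not ASR's exact rules.

```python
# Prefer the closest peers (in hops) that still cover all missing file parts,
# widening the search radius only when the nearby swarm cannot supply them.
def select_peers(peers, missing_parts, start_radius=2, max_radius=16):
    """peers: list of (hop_distance, set_of_parts_held)."""
    radius = start_radius
    while radius <= max_radius:
        nearby = [p for p in peers if p[0] <= radius]
        covered = set().union(*(parts for _, parts in nearby)) if nearby else set()
        if missing_parts <= covered:     # nearby peers cover everything we need
            return nearby, radius
        radius *= 2                      # not enough availability: widen search
    return peers, max_radius             # fall back to the whole swarm
```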

8.
Existing enterprise network drives suffer from security risks, poor transfer performance, low reliability, and vendor lock-in. Addressing these problems from the perspectives of storage confidentiality, reliability, and access efficiency, this paper designs and implements SkyDisk, a secure enterprise network-drive system based on multiple cloud servers, achieving independently controlled data, high-speed access, and secure, reliable storage. Multiple cloud servers are integrated into a distributed storage cluster based on the Tahoe-LAFS system to provide back-end storage for the drive; files are encrypted with the 256-bit Advanced Encryption Standard before being stored, guaranteeing data confidentiality; erasure coding and dispersed storage guarantee data reliability; and data are transferred in parallel between the local drive server and the multiple cloud servers, enabling high-speed uploads and downloads. Finally, SkyDisk is implemented as a Web service that offers users a Web-based network drive. System tests show that SkyDisk provides secure and reliable file storage management, that the multi-cloud storage cluster has no single point of failure, and that the system meets functional requirements such as fast uploads, fast downloads, and convenient file sharing, lowering enterprise file-management costs and improving productivity and competitiveness.
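
A minimal sketch of the client-side pipeline described above, assuming the Python cryptography package for AES-256-GCM; the simple striping of the ciphertext into shares stands in for the Tahoe-LAFS erasure coding actually used by SkyDisk.

```python
# Encrypt with AES-256 before anything leaves the local drive server, then cut
# the ciphertext into shares destined for different cloud back ends.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_and_split(plaintext: bytes, num_shares: int):
    key = AESGCM.generate_key(bit_length=256)          # 256-bit AES key
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    share_size = max(1, (len(ciphertext) + num_shares - 1) // num_shares)
    shares = [ciphertext[i:i + share_size]
              for i in range(0, len(ciphertext), share_size)]
    return key, nonce, shares                          # shares go to separate clouds
```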

9.
The Internet supports three communication paradigms. The first, unicast, is the point-to-point flow of packets between a single source (client) and destination (server) host. Web browsing and file transfer are unicast applications. The next, multicast, is the point-to-multipoint flow of packets between a single source host and one or more destination hosts. Broadcast-style videoconferencing, for example, employs IP multicast. Anycast is the point-to-point flow of packets between a single client and the "nearest" destination server identified by an anycast address. The idea behind anycast is that a client wants to send packets to any one of several possible servers offering a particular service or application but does not really care which one. Any number of servers can be assigned a single anycast address within an anycast group. A client sends packets to an anycast server by placing the anycast address in the packet header. Routers then attempt to deliver the packet to a server with the matching anycast address.

10.
Computer Networks, 2007, 51(13): 3715-3726
Most users have multiple accounts on the Internet where each account is protected by a password. To avoid the headache of remembering and managing a long list of different and unrelated passwords, most users simply use the same password for multiple accounts. Unfortunately, the predominant HTTP basic authentication protocol (even over SSL) makes this common practice remarkably dangerous: an attacker can effectively steal users' passwords for high-security servers (such as an online banking website) by setting up a malicious server or breaking into a low-security server (such as a high-school alumni website). Furthermore, the HTTP basic authentication protocol is vulnerable to phishing attacks because a client needs to reveal his password to the server that the client wants to log in to. In this paper, we propose a protocol that allows a client to securely use a single password across multiple servers, and also prevents phishing attacks. Our protocol achieves client authentication without the client revealing his password to the server at any point. Therefore, a compromised server cannot steal a client's password and replay it to another server. Our protocol is simple, secure, efficient and user-friendly. In terms of simplicity, it only involves three messages. In terms of security, the protocol is secure against the attacks that have been discovered so far, including the ones that are difficult to defend against, such as the malicious server attacks described above and the recent phishing attacks. Essentially our protocol is an anti-phishing password protocol. In terms of efficiency, each run of our protocol only involves a total of four computations of a one-way hash function. In terms of usability, the protocol requires a user to remember only one password consisting of eight (or more) random characters, and this password can be used for all of his accounts.
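
The paper's three-message protocol is not reproduced here, but the underlying idea of keeping one memorized password while giving every server an unrelated secret can be sketched with a single one-way hash; the derivation below is only an illustration of that property, not the proposed protocol.

```python
# Derive an independent secret per server from one memorized password and the
# server's domain, so a compromised or phishing site learns nothing it can
# replay against another server.
import hashlib

def per_server_secret(master_password: str, server_domain: str) -> str:
    material = f"{master_password}:{server_domain}".encode()
    return hashlib.sha256(material).hexdigest()

bank = per_server_secret("correct horse battery", "bank.example.com")
alum = per_server_secret("correct horse battery", "alumni.example.org")
assert bank != alum      # stealing one secret reveals nothing about the other
```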

11.
It is commonly believed that file sharing traffic on the Internet is mostly generated by peer-to-peer applications. However, we show that HTTP based file sharing services are also extremely popular. We analyzed the traffic of a large research and education network for three months, and observed that a large fraction of the inbound HTTP traffic corresponds to file download services, which indicates that an important portion of file sharing traffic is in the form of HTTP data. In particular, we found that two popular one-click file hosting services are among the top Internet domains in terms of served traffic volume. In this paper, we present an exhaustive study of the traffic generated by such services, the behavior of their users, the downloaded content, and their server infrastructure.

12.
With P2P (Peer-to-Peer) multi-point shared file transfer, any downloading client can simultaneously act as a server, offering the data it has already received to other clients for download. This approach effectively balances the data volume on the uplink and downlink, and because each client also partially serves as a server, it relieves the bandwidth pressure on the central server. The system, written in Microsoft Visual C++ 6.0, allows several machines on a local area network to act as both clients and servers for one another and to complete a download task cooperatively.

13.
Cloud storage services usually deliver content over the Hypertext Transfer Protocol (HTTP). When a large number of clients request the same file from a cloud storage server within a short time, the cloud side suffers excessive bandwidth pressure and clients experience slow downloads. To address this problem, a fast content distribution method for cloud platforms that incorporates Peer-to-Peer (P2P) technology is proposed: a dynamic conversion mechanism between the HTTP and P2P protocols is built into the content distribution process to achieve fast distribution. Four protocol-conversion metrics are selected, namely user type, quality of service, time benefit, and bandwidth benefit, and the proposed dynamic protocol conversion method is implemented on the OpenStack cloud platform. Experimental results show that, compared with content distribution using HTTP or P2P alone, the dynamic conversion method always gives client users shorter download times and, when the number of P2P clients is large, effectively saves the service provider's bandwidth.
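
A hedged sketch of the conversion decision, combining the four metrics named above into one weighted score; the weights, the 0-1 normalization, and the threshold are assumptions for illustration, not the values used in the paper.

```python
# Switch a client from HTTP to P2P delivery only when the weighted score of
# the four conversion metrics crosses a threshold.
WEIGHTS = {"user_type": 0.2, "qos": 0.3, "time_gain": 0.25, "bw_gain": 0.25}

def choose_protocol(metrics: dict, threshold: float = 0.5) -> str:
    """metrics: each value already normalized to [0, 1]."""
    score = sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)
    return "P2P" if score >= threshold else "HTTP"

# Large bandwidth saving outweighs the small time gain for this request -> P2P.
print(choose_protocol({"user_type": 0.5, "qos": 0.8,
                       "time_gain": 0.1, "bw_gain": 0.9}))
```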

14.
Through a practical application, this paper discusses the implementation of a NARAID (Network-Attached RAID) system and the customization of its file transfer protocol, compares it with a traditional Internet file server, and examines its network transfer scheme and the system's file-transfer performance. In the experiments, the NARAID-based file server showed superior performance, demonstrating the broad application prospects of network-attached disk-array storage in the Internet domain.

15.
An adaptive seamless streaming dissemination system for vehicular networks is presented in this work. An adaptive streaming system is established at each local server to prefetch and buffer stream data. The adaptive streaming system computes the parts of prefetched stream data for each user and stores them temporarily at the local server, based on the current situation of the users and the environments where they are located. Thus, users can download the prefetched stream data from the local servers instead of from the Internet directly, meaning that the video playback problems caused by network congestion can be avoided. Several techniques such as stream data prefetching, stream data forwarding, and adaptive dynamic decoding were utilized to enhance adaptability to different users and environments and to achieve the best transmission efficiency. Fuzzy logic inference systems are utilized to determine whether a roadside base station or a vehicle should be chosen to transfer stream data for users. Considering the uneven deployment of base stations and vehicles, a bandwidth reservation mechanism for premium users was proposed to ensure the QoS of the stream data that premium users receive. A series of simulations were conducted, with the experimental results verifying the effectiveness and feasibility of the proposed work.

16.
Delivering large media files over the Internet is a challenging task because it has some unique features that are different from delivering conventional Web documents. In this paper, we propose a fine-grained peer sharing technique for dealing with the problem in the context of content distribution networks. The key difference of the technique from conventional peer-to-peer systems is that the unit of peer sharing is not a complete media file, but at a finer granularity. By doing so, we improve the flexibility of replica servers for handling client requests. We analyze the storage requirement at replica servers and design a scheduling algorithm to coordinate the delivery process from multiple replica servers to a client. Our simulations show that the fine-grained peer sharing approach can reduce the initial latency of clients and the rejection rate of the system significantly over a simple peer sharing method.

17.
Multi-server, multi-user network file storage systems commonly suffer from uneven resource allocation, many duplicate files, and severe waste of storage space. This paper designs and implements the TNS network file storage system. Built on a multi-server storage architecture consisting of user servers, index servers, data servers, sharing servers, management servers, and login servers, the system serves multiple users, uses consistent hashing for load balancing, and supports file-level deduplication on the client side. Tests in a real production environment show good load-balancing and deduplication capability, effectively saving storage space and improving the utilization of storage devices.
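
A small sketch of the two mechanisms mentioned above, consistent hashing for placement across data servers and client-side file-level deduplication by content hash; the server names, virtual-node count, and SHA-256 choice are illustrative assumptions.

```python
# Consistent-hash ring for placing files on data servers, plus a content-hash
# check so duplicate files are stored only once.
import bisect, hashlib

class HashRing:
    def __init__(self, servers, vnodes=64):
        self.ring = sorted(
            (int(hashlib.sha256(f"{s}#{i}".encode()).hexdigest(), 16), s)
            for s in servers for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def server_for(self, file_hash: str) -> str:
        h = int(hashlib.sha256(file_hash.encode()).hexdigest(), 16)
        idx = bisect.bisect(self.keys, h) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["data1", "data2", "data3"])
known_hashes = set()                        # maintained by the index server

def upload(content: bytes):
    digest = hashlib.sha256(content).hexdigest()
    if digest in known_hashes:              # duplicate: store a reference only
        return "dedup-hit", ring.server_for(digest)
    known_hashes.add(digest)
    return "stored", ring.server_for(digest)
```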

18.
In energy-aware systems, it is critical to discuss how to reduce the total electric power consumption of information systems. In this paper, we consider communication-type applications where a server transmits a large volume of data to clients. A client first selects a server in a cluster of servers and issues a file transmission request to the server. In our previous studies, the transmission power consumption (TPC) model and the extended TPC (ETPC) model of a server transmitting files to clients were proposed. In this paper, we newly propose the transmission power consumption laxity-based (TPCLB) algorithm for a cluster of servers, which is based on the TPC and ETPC models, so that the total power consumption in a cluster of servers can be reduced. We evaluate the TPCLB algorithm in terms of the total power consumption and elapsed time compared with the round-robin (RR) algorithm.
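
A hedged sketch of the selection rule: estimate how much energy each server would spend if it also handled the new transfer and pick the server with the smallest estimate. The linear power model below is an assumption standing in for the paper's TPC/ETPC models.

```python
# Pick the server whose estimated extra energy for the new request is smallest.
def extra_energy(server, new_bytes):
    # time grows with queued bytes; power grows with concurrent transfers
    transfer_time = (server["queued_bytes"] + new_bytes) / server["rate_bps"]
    active_power = server["base_watts"] + server["watts_per_transfer"] * (
        server["active_transfers"] + 1)
    return active_power * transfer_time       # joules spent on this request

def select_server(servers, new_bytes):
    return min(servers, key=lambda s: extra_energy(s, new_bytes))

cluster = [
    {"name": "s1", "rate_bps": 50e6, "queued_bytes": 200e6,
     "active_transfers": 4, "base_watts": 60, "watts_per_transfer": 5},
    {"name": "s2", "rate_bps": 25e6, "queued_bytes": 10e6,
     "active_transfers": 1, "base_watts": 40, "watts_per_transfer": 5},
]
print(select_server(cluster, 100e6)["name"])   # -> s2 (lightly loaded)
```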

19.
The Domain Name System (DNS) is an essential part of the Internet infrastructure and provides fundamental services, such as translating host names into IP addresses for Internet communication. The DNS is vulnerable to a number of potential faults and attacks. In particular, false routing announcements can deny access to the DNS service or redirect DNS queries to a malicious impostor. Due to the hierarchical DNS design, a single fault or attack against the routes to any of the top-level DNS servers can disrupt Internet services to millions of users. We propose a path-filtering approach to protect the routes to the critical top-level DNS servers. Our approach exploits both the high degree of redundancy in top-level DNS servers and the observation that popular destinations, including top-level DNS servers, are well-connected via stable routes. Our path-filter restricts the potential top-level DNS server route changes to be within a set of established paths. Heuristics derived from routing operations are used to adjust the potential routes over time. We tested our path-filtering design against BGP routing logs and the results show that the design can effectively ensure correct routes to top-level DNS servers without impacting DNS service availability.
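
The filtering step can be sketched as a membership test of an announced AS path against the set of established paths for a protected prefix; the prefix, AS numbers, and flat path set below are illustrative assumptions, not the paper's heuristics.

```python
# Accept a BGP announcement for a protected DNS-server prefix only if its AS
# path is already among the established paths learned during a stable period.
ESTABLISHED_PATHS = {
    "198.41.0.0/24": {(7018, 3356, 26415), (3549, 26415)},   # example prefix/ASNs
}

def accept_route(prefix: str, as_path: tuple) -> bool:
    known = ESTABLISHED_PATHS.get(prefix)
    if known is None:
        return True                  # not a protected prefix: normal policy
    return as_path in known          # reject unseen paths to DNS servers

print(accept_route("198.41.0.0/24", (3549, 26415)))     # True
print(accept_route("198.41.0.0/24", (64512, 26415)))    # False: suspicious path
```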

20.
The paper presents a family of distributed file structures, coined DiFS, for record-structured, disk-resident files with key-based exact or interval match access. The file is organized into buckets that are spread among multiple servers, where a server may hold several buckets. Client requests are serviced by mapping keys onto buckets and looking up the corresponding server in an address table. Dynamic growth, in terms of file size and access load, is supported by bucket splits and bucket migrations onto existing or newly created servers. The major problem that we are addressing is achieving scalability in the sense that both the file size and the client throughput can be scaled up by linearly increasing the number of servers and dynamically redistributing the data. Unlike previous work with similar objectives, our data redistribution explicitly considers the cost/performance ratio of the system by aiming to minimize the number of servers that are used to provide the required performance. A new server is added only if the overall server load in the system does not drop below a pre-specified threshold. Simulation results demonstrate the scalability with controlled cost/performance and the importance of global load control. The impact of various tuning parameters on the effectiveness of the load control is studied in detail. Finally, we compare our approach with other approaches known to date and demonstrate that each of the previous approaches can be recast as a special case of our model.
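
A rough sketch of the client-side lookup path described above: keys are hashed onto buckets, and an address table maps each bucket to the server that currently holds it, so migrating a bucket only updates one table entry. Bucket splitting is omitted, and the hash, bucket count, and server names are assumptions rather than DiFS's actual scheme.

```python
# Key -> bucket -> server lookup through an address table, with migration
# modeled as a single table update when load is redistributed.
import hashlib

NUM_BUCKETS = 8
address_table = {b: f"server{b % 2}" for b in range(NUM_BUCKETS)}  # 2 servers

def bucket_of(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % NUM_BUCKETS

def server_of(key: str) -> str:
    return address_table[bucket_of(key)]        # one table lookup per request

def migrate(bucket: int, new_server: str):
    address_table[bucket] = new_server          # load-driven redistribution

print(server_of("record-42"))
migrate(bucket_of("record-42"), "server2")      # new server added under load
print(server_of("record-42"))                   # now served by server2
```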
