Similar Articles
Found 20 similar articles (search time: 156 ms)
1.
A Tag-Based Cache-Cooperative Distributed Web Server System   Cited: 3 (self-citations: 0, other citations: 3)
林曼筠, 钱华林. 《软件学报》 (Journal of Software), 2003, 14(1): 117-123
This paper introduces distributed Web server systems, a leading-edge technique for improving Web server performance, and discusses the strengths and weaknesses of existing approaches. On this basis, a new distributed Web server system is proposed. The system uses a tag-based cache-cooperative Web request distribution method (TB-CCRD): a front end organizes the caches of all the Web servers in the system into one large virtual cache, raising the overall cache hit ratio and shortening request response times; TCP connection handoff is processed in a distributed fashion to eliminate the front end as a performance bottleneck; and tags announce each URL's location in the cache, avoiding extra intra-system communication. The result is a scalable, high-performance distributed Web server system.
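As a rough illustration of the tag-based distribution idea above, the following Python sketch shows a front end that dispatches each request to whichever back-end server has announced (tagged) the URL as cached, falling back to hashing when no server holds it. This is a minimal sketch under stated assumptions; the class and method names are hypothetical, not the paper's actual TB-CCRD interfaces.

    import hashlib

    class TagDispatcher:
        def __init__(self, servers):
            self.servers = servers          # list of back-end server ids
            self.tag_table = {}             # URL -> server currently caching it

        def on_tag_notice(self, url, server):
            """A back-end server announces (tags) that it now caches this URL."""
            self.tag_table[url] = server

        def on_evict_notice(self, url):
            """A back-end server announces it evicted the URL from its cache."""
            self.tag_table.pop(url, None)

        def dispatch(self, url):
            """Route to the caching server if tagged, else hash over servers."""
            if url in self.tag_table:
                return self.tag_table[url]  # hit somewhere in the virtual cache
            i = int(hashlib.md5(url.encode()).hexdigest(), 16) % len(self.servers)
            return self.servers[i]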

2.
CDNs improve network performance and offer fast and reliable applications and services by distributing content to cache servers located close to users. The Web's growth has transformed communications and business services such that speed, accuracy, and availability of network-delivered content have become absolutely critical - both on their own terms and in terms of measuring Web performance. Proxy servers partially address the need for rapid content delivery by providing multiple clients with a shared cache location. In this context, if a requested object exists in a cache (and the cached version has not expired), clients get a cached copy, which typically reduces delivery time. CDNs act as trusted overlay networks that offer high-performance delivery of common Web objects, static data, and rich multimedia content by distributing content load among servers that are close to the clients. CDN benefits include reduced origin server load, reduced latency for end users, and increased throughput. CDNs can also improve Web scalability and disperse flash-crowd events. Here we offer an overview of the CDN architecture and popular CDN service providers.

3.
Scalable cluster-based computer networks have gradually become the foundation of high-performance network server architectures. This article studies several typical cluster architectures, including RR-DNS with Web and AFS servers, a TCP router with Web servers, and master-slave servers, and presents an in-depth comparative analysis of their parallelism issues, such as scalability, reliability, and load-balancing strategies.

4.
Addressing the limitations of the address-caching and prefix-caching techniques currently used for IP route lookup, this paper analyzes the prefix-overlap characteristics of backbone routing tables and proposes a threshold-based IP route caching method. The method combines address caching with prefix caching and requires no prefix expansion, overcoming both the excessive cache-space demands of address caching and the inability of prefix caching to cache internal prefix nodes; it offers advantages in cache space, cache hit ratio, cache fairness, and incremental route updates. Simulations show that for a routing table with more than 260,000 entries and a cache of 30,000 entries, choosing the threshold K=4 lets more than 97% of nodes be cached 1:1 as prefixes while the remaining nodes use address caching; the cache miss ratio stays below 0.02, so a small cache can sustain high-speed line-rate forwarding.
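The threshold rule might look roughly like the sketch below: a prefix overlapped by at most K more-specific prefixes is cached as a prefix (1:1), while destinations under heavily overlapped prefixes fall back to address caching. The helper names, the simplified longest-prefix loop, and the overlap-count parameter are illustrative assumptions, not the paper's data structures.

    K = 4  # threshold from the abstract; tune per routing table

    class RouteCache:
        def __init__(self):
            self.prefix_cache = {}   # (prefix bits as int, length) -> next hop
            self.addr_cache = {}     # exact 32-bit address -> next hop

        def lookup(self, addr):
            if addr in self.addr_cache:
                return self.addr_cache[addr]
            for length in range(32, 0, -1):          # longest-prefix first
                key = (addr >> (32 - length), length)
                if key in self.prefix_cache:
                    return self.prefix_cache[key]
            return None                              # miss: consult full table

        def fill(self, addr, prefix, length, nexthop, overlap_count):
            # overlap_count: number of more-specific prefixes under this one
            if overlap_count <= K:
                self.prefix_cache[(prefix, length)] = nexthop  # 1:1 prefix caching
            else:
                self.addr_cache[addr] = nexthop                # address caching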

5.
Cooperative cache-based data access in ad hoc networks   Cited: 1 (self-citations: 0, other citations: 1)
Cooperative caching, in which multiple nodes share and coordinate cached data, is widely used to improve Web performance in wired networks. However, resource constraints and node mobility have limited the application of these techniques in ad hoc networks. We propose caching techniques that use the underlying routing protocols to overcome these constraints and further improve performance.

6.
In content distribution networks, caching algorithms keyed on content names cause routing tables to grow with the network, severely degrading routing efficiency and performance. To address this problem, a node caching algorithm based on the attraction of related content is proposed. Using a local caching algorithm, content already cached at a node attracts further content sharing the node's dominant features and repels content with only secondary features, amplifying the quantity gap between differently featured content in the cache so that the cached content exhibits clear and stable characteristics. A caching policy in which the lifetimes of related content mutually reinforce one another is also designed, reducing the volume of routing announcements and improving the routing capability of the content distribution network. Experimental results show that the algorithm effectively addresses the routing problem while strengthening the stability of cached content and improving routing credibility.

7.
周颖, 赵岳松. 《计算机工程》 (Computer Engineering), 2003, 29(16): 172-174
This paper analyzes related Web dispatcher techniques and proposes a novel architecture to improve Web request routing in server and cache clusters. The MPLS scheme maps application-layer information onto layer-2 labels to support sophisticated request-routing functions without the bottleneck of TCP connection termination. Client-side proxy servers participate by requesting appropriate labels for client requests, and off-the-shelf MPLS switches can carry out the distribution, leaving the dispatcher only a few key functions to perform and thereby achieving scalability.

8.
Iyer, Ravi. World Wide Web, 2004, 7(3): 259-280
As Internet usage continues to expand rapidly, careful attention needs to be paid to the design of Internet servers for achieving high performance and end-user satisfaction. Currently, the memory system continues to remain a significant performance bottleneck for Internet servers employing multi-GHz processors. In this paper, our aim is two-fold: (1) to characterize the cache/memory performance of web server workloads and (2) to propose and evaluate cache design alternatives for future web servers. We chose SPECweb99 as the representative web server workload and our entire characterization and evaluation methodology is based on our CASPER simulation framework. We begin by exploring the processor cache design space for single and dual-processor servers. Based on our observations, we then evaluate other cache hierarchy alternatives such as chipset caches, coherence filters and decompressed page stores. We show the sensitivity of these components to basic organization parameters such as cache size, line size and degree of associativity. We also present the performance implications of routing memory requests initiated by I/O devices through these caches. Based on detailed simulation data and its implications on system level performance, this paper shows that chipset caches have significant potential for improving future web server performance.

9.
In mobile ad hoc networks (MANETs), on-demand routing protocols establish a route in a distributed manner only when a source host originates a data packet addressed to the destination host. In source-based routing (SBR) protocols, route discovery usually raises a large number of request packets for exploring the current state of the network, but it also collects useful information for future routing decisions. How to store and manage this collected information in a cache of limited size in order to improve routing performance is still an open issue in the development of an SBR scheme. This paper proposes a novel hash caching mechanism and distributed hash routing methods to store, utilize, and manage the cached routes in order to improve cache capacity, routing performance, and network throughput. The experimental results indicate that the proposed mechanism offers high cache capacity, efficient route discovery, and good throughput for the MANET.
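One plausible reading of distributed hash routing is sketched below: the node responsible for caching routes to a destination is chosen by hashing the destination over the participating nodes, and each node keeps a fixed-size local route cache. Everything here (names, eviction, capacity) is an illustrative assumption, not the paper's mechanism.

    import hashlib

    def home_node(dest, nodes):
        """Pick the node that stores cached routes for this destination."""
        h = int(hashlib.sha1(dest.encode()).hexdigest(), 16)
        return nodes[h % len(nodes)]

    class HashRouteCache:
        def __init__(self, capacity=64):
            self.capacity = capacity
            self.routes = {}                       # dest -> list of hops

        def store(self, dest, route):
            if len(self.routes) >= self.capacity:  # evict an arbitrary entry
                self.routes.pop(next(iter(self.routes)))
            self.routes[dest] = route

        def find(self, dest):
            return self.routes.get(dest)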

10.
Extreme-scale scientific collaborations require high-performance wide-area end-to-end data transports to enable fast and secure transfer of high data volumes among collaborating institutions. GridFTP is the de facto protocol for large-scale data transfer in science environments. Existing predominant network transport protocols such as TCP have serious limitations that consume significant CPU power and prevent GridFTP from achieving high throughput on long-haul networks with high latency and potential packet loss, reordering and jitter. On the other hand, protocols such as UDT that address some of the TCP shortcomings demand high computing resources on data transfer nodes. These limitations have caused underutilization of existing high-bandwidth links in scientific and collaborative grids. To address this situation, we have enhanced Globus GridFTP, the most widely used GridFTP implementation, by developing transport offload engines such as UDT and iWARP on SmartNIC, a programmable 10GbE network interface card (NIC). Our results show significant reduction in server utilization and full line-rate sustained bandwidth in high-latency networks, as measured for up to 100 ms of network latency. In our work, we also offload OpenSSL on SmartNIC to reduce host utilization for secure file transfers. The offload engine can provide line-rate data channel encryption/decryption on top of UDT offload without consuming additional host CPU resources. Lower CPU utilization leads to increased server capacity, which allows data transfer nodes to support higher network and data-processing rates. Alternatively, smaller or fewer DTNs can be used for a particular data rate requirement.

11.
This paper proposes a parallel Web-page prefetching model for cluster servers. The model uses Markov chains to analyze access paths and prefetches pages in parallel on the nodes of the Web server cluster, combining the high performance and reliability of clustering with the fast response of prefetching. Experiments show that applying the model to the cluster's dispatcher gives the server system a higher request hit ratio and greater throughput.
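A first-order Markov predictor over access paths, in the spirit of the model above, could be sketched as follows: count page-to-page transitions, then prefetch the most likely successors of the page just served. This is a minimal illustration; the class name, the order of the chain, and the top-n cutoff are assumptions, not the paper's design.

    from collections import defaultdict

    class MarkovPrefetcher:
        def __init__(self):
            self.transitions = defaultdict(lambda: defaultdict(int))

        def observe(self, path):
            """Record one user's access path, e.g. ['/a', '/b', '/c']."""
            for cur, nxt in zip(path, path[1:]):
                self.transitions[cur][nxt] += 1

        def predict(self, page, top_n=3):
            """Return up to top_n most likely next pages to prefetch."""
            nexts = self.transitions.get(page, {})
            return sorted(nexts, key=nexts.get, reverse=True)[:top_n]

    p = MarkovPrefetcher()
    p.observe(['/index', '/news', '/sports'])
    p.observe(['/index', '/news', '/finance'])
    print(p.predict('/index'))   # ['/news'] -- candidates to prefetch in parallel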

12.
This paper proposes a new Web caching architecture: content-based Web caching. The model jointly considers the proxy's operational information and the content characteristics of Web documents, defines virtual user communities and proxy personalities, and uses ontology techniques to characterize a proxy's personality. Simulations show that incorporating content attributes further improves Web caching performance.

13.
Information-centric networking (ICN) shifts the network's communication model from today's address-centric mode to an information-centric one. Ubiquitous caching is one of ICN's key features: it gives any network node the ability to cache, easing server load and lowering user access latency. However, lacking awareness of the distribution of content popularity, existing ICN caching strategies still suffer from low cache utilization and poorly planned cache placement. To solve these problems, this paper proposes a cache coordination scheme based on a two-level cache (CSTC). Each node's cache space is divided into a popularity-aware part and a cooperatively allocated part, providing different caching strategies for content of different popularity. Combined with the proposed popularity-filtering mechanism and routing strategy, the scheme reduces cache redundancy and optimizes cache placement. Simulations on real network topologies show that CSTC increases the number of cached sub-popular content items two-fold, raises the cache hit ratio by nearly 50%, and in most cases achieves a lower average round-trip hop count than existing on-path caching.
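A node cache split along the lines CSTC describes might look like the sketch below: one partition admits only content observed to be hot, the other holds content assigned by cooperating neighbors. The split ratio, hotness threshold, and interfaces are all illustrative assumptions.

    class TwoLevelCache:
        def __init__(self, capacity, hot_fraction=0.5, hot_threshold=10):
            self.hot_cap = int(capacity * hot_fraction)
            self.coop_cap = capacity - self.hot_cap
            self.hot, self.coop = {}, {}
            self.hits = {}                    # content name -> request count
            self.hot_threshold = hot_threshold

        def request(self, name):
            """Count the request and report whether either level holds it."""
            self.hits[name] = self.hits.get(name, 0) + 1
            return name in self.hot or name in self.coop

        def admit(self, name, data, assigned_by_peer=False):
            if assigned_by_peer:              # cooperative placement
                if len(self.coop) >= self.coop_cap:
                    self.coop.pop(next(iter(self.coop)))
                self.coop[name] = data
            elif self.hits.get(name, 0) >= self.hot_threshold:  # hot content only
                if len(self.hot) >= self.hot_cap:
                    self.hot.pop(next(iter(self.hot)))
                self.hot[name] = data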

14.
By caching video data, a video proxy server close to the clients can be used to assist video delivery and alleviate the load of video servers. We assume a video can be partially cached and a certain number of video frames are stored in the proxy server. In our setting, the proxy server is allowed to cache the passing data from the video server. A video provides several options (levels) in terms of bandwidth requirement over the server-proxy path. For each video, the proxy server decides to cache a smaller amount of data at a lower level or to accumulate more data to reach a higher level. The proxy server can dynamically adjust the cached video data by choosing an appropriate level based on the network condition or the popularity of the video. We propose a frame selection scheme, Dynamic Chunk Algorithm, to determine which frames are to be cached in the proxy server for the dynamic caching adjustment scenario. The algorithm guarantees the rate constraint over the server-proxy path to be satisfied for each level. This approach also maintains the set of cached frames at a higher level as a superset of the cached frames at a lower level. Hence, it enforces the proxy server to simply cache more data without dropping frames when it intends to reduce network bandwidth consumption for a video and vice versa.
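The superset property is the interesting invariant here: moving between levels only adds or drops frames. The toy sketch below builds nested frame sets by halving the sampling stride at each level; it illustrates the nesting invariant only and is not the paper's Dynamic Chunk Algorithm, which selects frames to satisfy per-level rate constraints.

    def build_levels(num_frames, num_levels):
        """Level k caches every 2**(num_levels-k)-th frame; the sets are
        nested because halving the stride only adds frames."""
        return [{i for i in range(num_frames) if i % (2 ** (num_levels - k)) == 0}
                for k in range(1, num_levels + 1)]

    levels = build_levels(12, 3)
    # Each level is a superset of the one below, as the paper requires.
    assert all(levels[k] <= levels[k + 1] for k in range(len(levels) - 1))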

15.
Transparent transmission control protocol (TCP) acceleration is a technique to increase TCP throughput without requiring any changes in end-system TCP implementations. By intercepting and relaying TCP connections inside the network, long end-to-end feedback control loops can be broken into several smaller control loops. This decrease in feedback delay allows accelerated TCP flows to react more quickly to packet loss and thus achieve higher throughput performance. Such TCP acceleration can be implemented on network processors, which are increasingly deployed in modern router systems. In our paper, we describe the functionality of transparent TCP acceleration in detail. Through simulation experiments, we quantify the benefits of TCP acceleration in a broad range of scenarios including flow-control bound and congestion-control bound connections. We study accelerator performance issues on an implementation based on the Intel IXP2350 network processor. Finally, we discuss a number of practical deployment issues and show that TCP acceleration can lead to higher system-wide utilization of link bandwidth.

16.
Ethernet speeds are now growing far faster than memory and CPU speeds, and memory access and protocol processing on the CPU have become TCP's performance bottlenecks. Ever-increasing network bandwidth places a heavy burden on the CPU: roughly 1 GHz of CPU resources is needed to perform protocol processing for 1 Gbps of network traffic. To address this, a multi-core NPU is used as the NIC to implement checksum computation and out-of-order packet reassembly on the TCP receive path; the merged large packets are handed to the protocol stack through the Linux NIC driver, reducing both the number of packets the stack must process and the number of interrupts the NIC generates, and thus improving end-system TCP performance. On a 10 Gbps Ethernet, the experiments achieve a TCP receive throughput of 4.9 Gbps.
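For reference, the checksum the NIC offloads is the standard 16-bit one's-complement Internet checksum (RFC 1071 style). The sketch below shows the computation on the host side purely for illustration; the NPU implementation in the paper would do this in hardware/firmware on the receive path.

    def internet_checksum(data: bytes) -> int:
        """16-bit one's-complement sum with end-around carry, complemented."""
        if len(data) % 2:
            data += b'\x00'                           # pad odd-length data
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
        return ~total & 0xFFFF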

17.
Previous studies indicate that I/O could become a performance bottleneck in commodity PC-based cluster Web servers. Current local native file systems do not work well for expensive file I/Os, while specialized file systems have a limitation on portability. In this paper, we present a lightweight, collaborative temporary file system (CTFS) to improve disk I/O performance for clustered Web servers. CTFS employs several techniques to achieve high performance, good scalability, and portability: (1) a lightweight local temporary file system at each node, (2) using Remote Direct Memory Access (RDMA) to improve intra-cluster communication performance, and (3) a location-aware summary cache for scalable file-to-server lookup. Comprehensive trace-driven simulation experiments conclude that CTFS achieves up to 37% better system throughput and reduces total disk I/O latency by up to 47% compared with a local asynchronous FFS solution.

18.
Content-Centric Networking (CCN) is an important direction for the future Internet. In-network caching is a key feature of CCN and strongly affects its content-delivery performance, and the efficiency of discovering in-network cached content is closely tied to caching performance. Traditional CCN cache discovery forwards request packets on the data plane and hits caches only opportunistically along the path; this is somewhat random and blind, and cached content may go underused. This paper proposes a control-plane approach to cache availability: based on the topology, cache capacities, and the distribution of user requests, it computes which content is "worth" caching, stores that content, and advertises it so that it participates in route computation, letting subsequent requests discover and use cached content quickly and accurately. Experimental results show that the method raises the cache hit ratio by about 20% and lowers server load by about 15%.

19.
尤国华, 刘媛, 高东. 《计算机应用研究》 (Application Research of Computers), 2020, 37(12): 3667-3670
To meet ever-growing server-side computing demands, more coprocessors such as GPUs and MIC have joined servers to share in server-side computation, but traditional server-side software (such as Web server software) cannot fully exploit a coprocessor's performance. To make full use of MIC and improve the quality of service of a single Web server, a new dynamic-request processing model is proposed for the heterogeneous CPU+MIC hardware architecture. Built on the event-driven and thread-pool models, it can schedule some dynamic requests onto the MIC for execution, processes dynamic requests in parallel, and balances load between the CPU and the MIC. Simulations show that the model outperforms existing Web server software models in average response time, throughput, and 99th-percentile response time.

20.
Computer Networks, 2000, 32(3): 261-275
Owing to the fast growth of the World Wide Web (WWW), web traffic has become a major component of Internet traffic. Consequently, the reduction of document retrieval latency on the WWW becomes more and more important. The latency can be reduced in two ways: reduction of network delay and improvement of web servers' throughput. Our research aims at improving a web server's throughput by keeping a memory cache in the web server's address space. In this paper, we focus on the design and implementation of a memory cache scheme. We propose a novel web cache management policy named the adaptive-level policy that caches either the whole file content or only a portion of it, according to the file size. The experimental results show three things. First, our memory cache is beneficial since, under our experimental workloads, the throughput improvement can reach 32.7%. Second, our cache management policy is suitable for current web traffic. Third, with the increasing popularity of multimedia files, our policy will outperform others currently used in the WWW.
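An adaptive-level admission rule of this shape could be sketched as follows: small files are cached whole, larger files only as a leading portion. The specific cutoffs and the quarter-size heuristic are illustrative assumptions, not the paper's actual policy parameters.

    def portion_to_cache(file_size, whole_limit=64 * 1024, prefix_cap=128 * 1024):
        """Return how many bytes of the file the memory cache should keep."""
        if file_size <= whole_limit:
            return file_size                       # cache the whole file
        return min(prefix_cap, file_size // 4)     # cache only a leading portion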
