19 similar documents found; search took 62 ms
1.
Optimized Design of a Caching Model for the WWW (cited 1 time: 0 self-citations, 1 by others)
For Web users, an important measure of Web service quality is the time spent retrieving information. Among the many ways to shorten retrieval time, this paper focuses on deploying a caching mechanism that reduces the number of resource-access requests a user must issue, thereby shortening the retrieval time as perceived by the user.
2.
In response to the steadily slowing speed of WWW service, this paper proposes a multi-level caching model for the WWW, discusses the page replacement and information consistency problems that arise in this model, and presents several page replacement algorithms together with an analysis of their performance.
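The abstract names several page replacement algorithms without detailing them. As one concrete illustration of such a policy, a minimal LRU sketch in Python (the class and its capacity are illustrative, not taken from the paper):

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used page replacement: evict the page that
    has gone unused for the longest time."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # key -> page content, oldest first

    def get(self, key):
        if key not in self.pages:
            return None
        self.pages.move_to_end(key)  # mark as most recently used
        return self.pages[key]

    def put(self, key, value):
        if key in self.pages:
            self.pages.move_to_end(key)
        self.pages[key] = value
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("/a.html", "A")
cache.put("/b.html", "B")
cache.get("/a.html")          # "/a.html" is now most recently used
cache.put("/c.html", "C")     # capacity exceeded: "/b.html" is evicted
```

The same skeleton accommodates the other classic policies the entry alludes to (e.g. LFU) by changing only the victim-selection rule.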
3.
4.
The INTERNET has swept the globe, and the WWW (World Wide Web) is one of its most attractive services: through hypertext it presents users with richly illustrated information. How to combine Web technology with database technology, so that users can interactively and dynamically query existing database resources over the WWW for online queries and online business, is therefore becoming ever more important. 1. WWW. As is well known, the basic architecture of the WWW is an open client/server (BROWSER/SERVER) model: WWW server software such as MS IIS or NETSCAPE ENTERPRISE SERVER is installed on the Web server, which hosts large numbers of multimedia home pages and specifies the server's
5.
7.
Since the early 1990s, multimedia technology has found wide application; its uniquely rich power to express information prompted research into transmitting it over the Internet. As the most successful outcome of that research, the WWW is changing every aspect of the Internet.
9.
Application of a Shared Cache in the WWW (cited 2 times: 0 self-citations, 2 by others)
Large volumes of WWW data are transmitted repeatedly over the Internet, which badly wastes network bandwidth and places a heavy load on WWW servers. Establishing a shared Cache on the network can reduce repeated transmission of files, thereby cutting the waste of bandwidth, lightening the load on servers, and ultimately reducing user response time. This paper first discusses the methods currently used to address the bandwidth-waste problem and their shortcomings, then proposes a shared-Cache approach to the problem together with a detailed design, and finally outlines directions for further research.
10.
Approaches to Integrating Database Systems with the WWW (cited 12 times: 2 self-citations, 10 by others)
1 Introduction. The WWW, currently the most important and most successful information system, is still based on file systems, and many modern database systems do not support integration with the WWW; as the WWW develops rapidly, they will become the new legacy systems of the future. How to integrate database systems with the WWW has therefore become a pressing problem facing database research
11.
12.
To meet the needs of applications with multiple object types and access patterns, this work extends the GDSF algorithm with two features, the average access interval and the most recent access interval, to improve its adaptability; it builds a cache structure model with a dual-key index mechanism that locates cached objects quickly and reduces system overhead; and it applies a suffix-prefetching policy to files above a certain size to increase the number of data objects held in the cache. Experiments against traditional algorithms in the project's application context show that the method reduces the cache's average request waiting time, raises both the object hit ratio and the byte hit ratio, and strengthens the replacement algorithm's adaptability to applications with multiple object types and request patterns.
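The abstract does not give the extended formula, but the classic GDSF priority it builds on is priority = clock + frequency × cost / size, with the clock advanced on every eviction to age out stale objects. A hedged sketch of that baseline (the interval extensions are omitted, and all names and parameters here are illustrative):

```python
class GDSFCache:
    """Greedy-Dual-Size-Frequency replacement sketch.
    priority = clock + frequency * cost / size; on eviction the
    clock advances to the victim's priority, so objects that stop
    being accessed gradually lose their advantage."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.clock = 0.0
        self.objects = {}  # key -> (size, cost, freq, priority)

    def _priority(self, cost, size, freq):
        return self.clock + freq * cost / size

    def access(self, key, size, cost=1.0):
        """Record an access; returns True on a hit, False on a miss."""
        if key in self.objects:
            s, c, f, _ = self.objects[key]
            f += 1
            self.objects[key] = (s, c, f, self._priority(c, s, f))
            return True
        # Miss: evict lowest-priority objects until the new one fits.
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects, key=lambda k: self.objects[k][3])
            self.clock = self.objects[victim][3]  # age the cache
            self.used -= self.objects[victim][0]
            del self.objects[victim]
        self.objects[key] = (size, cost, 1, self._priority(cost, size, 1))
        self.used += size
        return False

cache = GDSFCache(100)
cache.access("a.html", size=60)
cache.access("b.gif", size=60)   # cache full: "a.html" is evicted
```

Dividing by size is what makes GDSF favor keeping many small objects, which is consistent with the entry's goal of raising the object hit ratio.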
13.
14.
Abstract. Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit by recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near) optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.
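A toy rendering of the near-optimal rule this abstract describes, assuming the policy evicts the page whose expected next request lies farthest in the future (a Belady-style choice using the per-page distribution's mean in place of the true future reference; the pages and times below are invented for illustration):

```python
def evict_choice(cached_pages, expected_next_request):
    """Given the set of cached pages and each page's expected
    next-request time, pick the eviction victim: the page we
    expect to need again latest."""
    return max(cached_pages, key=lambda p: expected_next_request[p])

# Hypothetical distribution means (seconds until next request).
pages = {"index.html", "logo.png", "news.html"}
expected_next = {"index.html": 2.0, "logo.png": 50.0, "news.html": 7.5}

victim = evict_choice(pages, expected_next)  # "logo.png" is needed last
```

Belady's offline optimum evicts the page referenced farthest in the future; the abstract's contribution is showing that, under predictable load, substituting the distributional expectation stays near-optimal online.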
15.
16.
A Real-Time Two-Dimensional User Interest Model Based on WWW Caching (cited 4 times: 0 self-citations, 4 by others)
WWW caching improves users' access speed by placing popular pages closer to the client. The key to exploiting the information held in a WWW cache effectively is to build a suitable user interest model and a matching interest-mining algorithm. The simple interest model characterizes an interest as a (term, weight) pair; it does not mine the associations among these interests and therefore cannot express relationships between them. Building on a thorough analysis of the WWW caching model, this paper proposes a real-time two-dimensional interest model. Its real-time nature ensures that the mined interests better reflect the user's current state, and the two-dimensional concept it introduces fully captures the recursive relationships among user interests. The model is not a trivial extension of the simple interest model but a comprehensive improvement of both the model and its algorithms. The paper presents methods for storing the two-dimensional model, computing two-dimensional interests efficiently, and updating them in real time.
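The paper's storage and update methods are not given in the abstract. A hedged sketch of what a (term, weight) model extended with a second, associative dimension might look like; the decay factor and update rule here are assumptions for illustration, not the paper's algorithms:

```python
from collections import defaultdict

class InterestModel2D:
    """First dimension: term -> interest weight, as in the simple
    model. Second dimension: pairwise association weights between
    terms that co-occur in the same cached page. The exponential
    decay gives the model its 'real-time' bias toward recent pages."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.weight = defaultdict(float)  # term -> interest weight
        self.assoc = defaultdict(float)   # (term_a, term_b) -> association

    def observe_page(self, terms):
        # Real-time update: decay old interests, reinforce current ones.
        for t in list(self.weight):
            self.weight[t] *= self.decay
        for t in terms:
            self.weight[t] += 1.0
        # Second dimension: strengthen co-occurrence links.
        for a in terms:
            for b in terms:
                if a < b:
                    self.assoc[(a, b)] += 1.0

    def related(self, term):
        """Interests associated with a given interest."""
        out = {}
        for (a, b), w in self.assoc.items():
            if a == term:
                out[b] = w
            elif b == term:
                out[a] = w
        return out

model = InterestModel2D()
model.observe_page(["cache", "www"])
```

The `related` query is what the one-dimensional (term, weight) model cannot answer at all, which is the gap the abstract highlights.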
17.
18.
19.
This paper proposes a two-level P2P caching strategy for Web search queries. The design is suitable for a fully distributed service platform based on managed peer boxes (set-top-box or DSL/cable modem) located at the edge of the network, where both the boxes and the access bandwidth to them are controlled and managed by an ISP. Our solution significantly reduces the user query traffic that leaves the ISP to fetch query results from the respective Web search engine. Web users are usually very reactive to worldwide events, which causes highly dynamic query traffic patterns and leads to load imbalance across peers. Our solution contains a strategy to quickly ease imbalance on peers and spread communication flow among participating peers. Each peer maintains a local result cache used to keep the answers for queries originated in the peer itself and queries for which the peer is responsible, obtained by contacting the Web search engine on demand. When query traffic is predominantly routed to a few responsible peers, our strategy replicates the role of "being responsible for" to neighboring peers so that they can absorb query traffic. This is a fairly slow and adaptive process that we call mid-term load balancing. To achieve a short-term fair distribution of queries, we introduce a location cache in each peer which keeps pointers to peers that have already requested the same queries in the recent past. This lets those peers share their query answers with newly requesting peers. The process is fast, as popular queries are usually cached within the first DHT hop of a requesting peer, which quickly tends to redistribute load among more and more peers.
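The two-level lookup described above can be sketched as follows. DHT routing is stubbed out (the responsible peer is passed in directly), and all names and the fallback behavior are illustrative assumptions rather than the paper's protocol:

```python
class PeerBox:
    """Two-level P2P query cache sketch: a local result cache holds
    answers; a location cache holds pointers to peers known to hold
    the answer, short-circuiting the responsible peer."""

    def __init__(self, fetch_from_engine):
        self.result_cache = {}    # query -> answer
        self.location_cache = {}  # query -> peer holding the answer
        self.fetch = fetch_from_engine

    def lookup(self, query, responsible_peer):
        if query in self.result_cache:          # level 1: local hit
            return self.result_cache[query]
        peer = self.location_cache.get(query, responsible_peer)
        answer = peer.result_cache.get(query)   # level 2: remote peer
        if answer is None:
            # Only the responsible peer contacts the engine on demand.
            answer = self.fetch(query)
            responsible_peer.result_cache[query] = answer
        self.location_cache[query] = responsible_peer
        self.result_cache[query] = answer
        return answer

calls = []
def engine(query):
    calls.append(query)
    return "results for " + query

responsible = PeerBox(engine)
peer_a, peer_b = PeerBox(engine), PeerBox(engine)
peer_a.lookup("breaking news", responsible)  # miss: engine is contacted
peer_b.lookup("breaking news", responsible)  # served by responsible peer
```

In the full design the location cache would also point at recent requesters like `peer_a`, spreading popular queries across many peers; the sketch only shows how the two cache levels keep repeat queries inside the ISP.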