Similar Documents
15 similar documents found.
1.
A PPM Prediction Model Based on Web Object Popularity [基于Web对象流行度的PPM预测模型]. Cited 7 times (0 self-citations, 7 by others).
Web prefetching is one of the main approaches to reducing network latency and improving quality of service. Zipf's first and second laws are used to build access-popularity models for Web objects in the high-frequency and low-frequency regions respectively, and on that basis a PPM prediction model based on Web object popularity is proposed. Experiments show that, besides inheriting the simplicity and ease of implementation of the traditional PPM model, the proposed model reduces the model size while improving prediction accuracy to some degree, and it keeps the network traffic induced by prefetching under control.
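A minimal, illustrative sketch (not taken from the paper) of how Zipf's first law can be used to estimate access popularity from a request log and keep only sufficiently popular objects in a PPM-style model; the exponent alpha and the popularity cutoff are assumptions chosen for demonstration.

    from collections import Counter

    def zipf_popularity(access_log, alpha=0.8):
        """Rank objects by access count and assign a Zipf-style popularity
        P(r) ~ C / r**alpha, where r is the object's rank (Zipf's first law)."""
        ranked = [obj for obj, _ in Counter(access_log).most_common()]
        # Normalisation constant so that the popularities sum to 1.
        c = 1.0 / sum(1.0 / (r ** alpha) for r in range(1, len(ranked) + 1))
        return {obj: c / ((rank + 1) ** alpha) for rank, obj in enumerate(ranked)}

    # Keep only objects popular enough to justify a node in the prediction model.
    log = ["a.html", "b.html", "a.html", "c.html", "a.html", "b.html", "d.html"]
    popularity = zipf_popularity(log)
    hot_objects = {o for o, p in popularity.items() if p >= 0.2}   # illustrative cutoff
    print(hot_objects)                                             # {'a.html', 'b.html'}

The paper additionally models the low-frequency region with Zipf's second law; the sketch covers only the high-frequency side.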

2.
An adaptive, noise-resistant PPM prediction model is proposed based on Web browsing characteristics. During model construction, the inverse Gaussian distribution describing users' browsing depth and Web popularity characteristics are used to dynamically remove noise pages and stale data, controlling the size of the PPM prediction model in both the vertical and horizontal directions. Experiments show that the influence of noisy data is greatly reduced and that the model adapts dynamically to users' Web browsing behaviour; prediction accuracy and storage complexity are both improved to some extent, and the network traffic caused by prefetching is effectively controlled.
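A rough sketch of the pruning idea described above, assuming the inverse Gaussian distribution models browsing depth and that a page is treated as noise when it is both deep in the session and unpopular; the parameters mu and lam and both thresholds are illustrative, not values from the paper.

    import math

    def inv_gaussian_pdf(x, mu=3.0, lam=6.0):
        """Inverse Gaussian density, used here to model how deep users browse."""
        return math.sqrt(lam / (2 * math.pi * x ** 3)) * \
            math.exp(-lam * (x - mu) ** 2 / (2 * mu ** 2 * x))

    def prune_session(session, popularity, depth_floor=0.02, pop_floor=0.01):
        """Treat a page as noise (and drop it) when it is both unusually deep in
        the session under the depth model and unpopular overall."""
        kept = []
        for depth, page in enumerate(session, start=1):
            is_noise = (inv_gaussian_pdf(depth) < depth_floor
                        and popularity.get(page, 0.0) < pop_floor)
            if not is_noise:
                kept.append(page)
        return kept

    popularity = {"a": 0.30, "b": 0.20, "c": 0.15, "d": 0.10,
                  "e": 0.08, "f": 0.06, "g": 0.05, "x": 0.002}
    session = ["a", "b", "c", "d", "e", "f", "g", "x"]
    print(prune_session(session, popularity))   # "x" at depth 8 is pruned as noise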

3.
PAPPM: An Adaptive Web Prediction Model [PAPPM:一种自适应Web预测模型]. Cited 1 time (0 self-citations, 1 by others).
An adaptive PPM (Prediction by Partial Match) prediction model, PAPPM, is proposed. During prediction the model uses an entropy-based adaptive order-selection strategy to choose the optimal order, reducing prediction overhead. Moreover, it updates the prediction model in real time according to the Web request sequence of the current user, keeping the model fresh. Experiments show that PAPPM improves prediction accuracy and hit rate and is suitable for online Web prefetching.
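A minimal sketch of an entropy-based order-selection step of the kind PAPPM describes: among the context orders that match the user's recent requests, choose the one whose next-page distribution has the lowest Shannon entropy. The data layout (a dict of per-order context counters) and the tie-breaking are assumptions.

    import math
    from collections import Counter

    def best_order(contexts, recent, max_order=3):
        """Among the context orders that match the recent requests, pick the one
        whose next-page distribution is least uncertain (lowest Shannon entropy)."""
        best_k, best_h, best_dist = None, float("inf"), None
        for k in range(1, max_order + 1):
            ctx = tuple(recent[-k:])
            dist = contexts.get(k, {}).get(ctx)
            if not dist:
                continue
            total = sum(dist.values())
            entropy = -sum((c / total) * math.log2(c / total) for c in dist.values())
            if entropy < best_h:
                best_k, best_h, best_dist = k, entropy, dist
        return best_k, best_dist

    # contexts[k] maps a length-k context (tuple of pages) to a Counter of next pages.
    contexts = {1: {("b",): Counter({"c": 3, "d": 3})},
                2: {("a", "b"): Counter({"c": 5, "d": 1})}}
    print(best_order(contexts, ["a", "b"]))   # order 2 wins: its distribution is sharper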

4.
Building an effective model of user browsing and accurately predicting users' browsing behaviour is the key to Web prefetching. The standard PPM prediction model suffers from high storage complexity and low execution efficiency, which limits its adoption. Based on pruning techniques, pre-pruning and post-pruning are applied to the standard PPM model according to Zipf's law and the access characteristics of Web objects, yielding an adaptive PPM prediction model. Experiments show that the model not only predicts users' Web browsing behaviour dynamically but also improves prediction accuracy and storage complexity to some extent.

5.
The PPM model is well suited to predicting a user's next request, but existing PPM models are not online: updates are carried out by rebuilding the model, which cannot meet real-time requirements. This paper proposes an online PPM prediction model based on an uncompressed suffix tree, which supports incremental online updates and speeds up model maintenance; its key advantage is that it operates fully online.
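A small sketch of the incremental-update idea, assuming an uncompressed suffix trie over a sliding window of recent requests: every new request inserts only the window's suffixes, so the model is updated online rather than rebuilt. The window length, node layout and prediction rule are illustrative.

    class TrieNode:
        __slots__ = ("children", "count")
        def __init__(self):
            self.children = {}
            self.count = 0

    class OnlinePPM:
        """Uncompressed suffix trie over the recent request window; every new
        request updates counts incrementally, so no full rebuild is needed."""
        def __init__(self, max_order=3):
            self.root = TrieNode()
            self.window = []
            self.max_order = max_order

        def update(self, page):
            self.window.append(page)
            self.window = self.window[-(self.max_order + 1):]
            # Insert every suffix of the window that ends at the new page.
            for start in range(len(self.window)):
                node = self.root
                for p in self.window[start:]:
                    node = node.children.setdefault(p, TrieNode())
                    node.count += 1

        def predict(self, context):
            node = self.root
            for p in context[-self.max_order:]:
                node = node.children.get(p)
                if node is None:
                    return None
            if not node.children:
                return None
            return max(node.children.items(), key=lambda kv: kv[1].count)[0]

    model = OnlinePPM()
    for page in ["a", "b", "c", "a", "b", "d", "a", "b", "c"]:
        model.update(page)
    print(model.predict(["a", "b"]))   # "c" is the most frequent successor of (a, b)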

6.
The Web prediction model is the core of Web prefetching. Traditional PPM-tree-based prediction models consider only the user's browsing sequence, so their prediction accuracy is limited. By combining page content with user interest to adjust the model's output, a neural-network-based Web prediction model is proposed. Experiments show that the model improves prediction accuracy to some extent.

7.
Caching and prefetching play an important role in improving Web access performance in wireless environments. This paper studies Web caching and prefetching mechanisms for wireless LANs: prediction algorithms based on sequence mining and deferred updates are proposed, drawing on data mining and information theory respectively, and a context-aware prefetching algorithm together with a benefit-driven cache replacement policy is designed. These algorithms have been implemented in the Web caching system OnceEasyCache. Performance evaluation shows that integrating them effectively improves the cache hit rate and the latency-saving rate.

8.
A Selective Markov Prefetching Model Based on Web Popularity [基于Web流行度的选择Markov预取模型]. Cited 1 time (0 self-citations, 1 by others).
石磊, 古志民, 卫琳. 《计算机工程》, 2006, 32(11): 72-74.
Web prefetching is currently one of the main solutions in the WWW for reducing user-perceived latency and improving quality of service. The access popularity of Web objects is modelled with Zipf's first and second laws, and on that basis a selective Markov prefetching model based on Web popularity is proposed. Experiments show that the model achieves a high hit rate while also reducing bandwidth demand to some extent.
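A minimal sketch of selective Markov prefetching in the spirit of this entry: a first-order transition table plus a simple relative-frequency popularity estimate (standing in for the Zipf-based model), where a successor is prefetched only if both its conditional probability and its popularity clear a threshold. The two thresholds are assumptions for illustration.

    from collections import Counter, defaultdict

    def build_model(sessions):
        """First-order Markov transition counts plus simple access popularity."""
        transitions = defaultdict(Counter)
        popularity = Counter()
        for session in sessions:
            popularity.update(session)
            for cur, nxt in zip(session, session[1:]):
                transitions[cur][nxt] += 1
        total = sum(popularity.values())
        return transitions, {p: c / total for p, c in popularity.items()}

    def prefetch_candidates(page, transitions, popularity,
                            min_prob=0.3, min_popularity=0.05):
        """Selective prefetching: follow only transitions whose conditional
        probability and target popularity are both above their thresholds."""
        nexts = transitions.get(page)
        if not nexts:
            return []
        total = sum(nexts.values())
        return [nxt for nxt, c in nexts.items()
                if c / total >= min_prob and popularity.get(nxt, 0) >= min_popularity]

    sessions = [["a", "b", "c"], ["a", "b", "d"], ["a", "b", "c"], ["e", "b", "c"]]
    trans, pop = build_model(sessions)
    print(prefetch_candidates("b", trans, pop))   # ['c']: "d" is too unlikely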

9.
A Survey of Web Prefetching Techniques [Web预取技术综述]. Cited 11 times (0 self-citations, 11 by others).
Web prefetching is one of the key techniques for reducing user-perceived latency and improving network quality of service, and it has become a research hotspot in recent years. By exploiting the spatial locality of WWW accesses, Web prefetching extends caching from temporal locality to spatial locality. This survey classifies Web prefetching techniques, summarises and compares the strengths and limitations of each class, presents the basic framework of a prefetching model and the main function of each component, and reviews the various evaluation criteria in detail. Several typical prefetching algorithms are analysed in depth and their advantages and disadvantages are compared systematically. Finally, research directions for Web prefetching are identified, including online operation, cooperative prefetching, dynamic popularity, user-session partitioning, and the combination of semantics-based and path-based approaches.

10.
Sequential pattern mining can discover the access regularities hidden in Web logs and can be used in a Web prefetching model to predict the Web objects about to be requested. Most existing sequential pattern mining algorithms are Apriori-based breadth-first algorithms. A bitmap-based depth-first mining algorithm is proposed: it adopts a depth-first strategy over a trie (dictionary tree) data structure and uses bitmaps to store and compute the support of each sequence, so frequent sequences can be mined quickly. The algorithm is applied to a Web prefetching model, and experiments under an integrated prefetching-and-caching setting show good performance.
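A toy sketch of the bitmap idea, in the spirit of SPAM-style depth-first mining: each item gets one bitmap per session with a bit per position, and the support of an extended sequence is computed with a bit transform followed by a bitwise AND. The encoding (one Python integer per session) is a simplification of the paper's structures.

    def item_bitmaps(sessions):
        """One integer bitmap per (item, session): bit j is set when the item
        occurs at position j of that session."""
        bitmaps = {}
        for s_id, session in enumerate(sessions):
            for pos, item in enumerate(session):
                row = bitmaps.setdefault(item, [0] * len(sessions))
                row[s_id] |= 1 << pos
        return bitmaps

    def after_first_bit(bitmap):
        """SPAM-style transform: keep only positions strictly after the first
        (lowest) set bit."""
        if bitmap == 0:
            return 0
        lowest = bitmap & -bitmap
        return ~((lowest << 1) - 1)     # ...111000: ones above the lowest set bit

    def support(prefix_maps, item_maps):
        """Number of sessions where the item occurs after the prefix's first match."""
        return sum(1 for p, i in zip(prefix_maps, item_maps) if after_first_bit(p) & i)

    sessions = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
    bm = item_bitmaps(sessions)
    # Support of the 2-sequence <a, c>: sessions where "c" follows the first "a".
    print(support(bm["a"], bm["c"]))    # 3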

11.
A Data Cube Model for Prediction-Based Web Prefetching. Cited 7 times (0 self-citations, 7 by others).
Reducing Web latency is one of the primary concerns of Internet research. Web caching and Web prefetching are two effective techniques for latency reduction. A primary method for intelligent prefetching is to rank potential Web documents based on prediction models that are trained on past Web server and proxy server log data, and to prefetch the highly ranked objects. For this method to work well, the prediction model must be updated constantly, and different queries must be answered efficiently. In this paper we present a data-cube model to represent Web access sessions for data mining in support of prediction-model construction. The cube model organizes session data into three dimensions. With the data cube in place, we apply efficient data mining algorithms for clustering and correlation analysis. As a result of the analysis, the Web page clusters can then be used to guide the prefetching system. In this paper, we propose an integrated Web-caching and Web-prefetching model, where the issues of prefetching aggressiveness, replacement policy and increased network traffic are addressed together in an integrated framework. The core of our integrated solution is a prediction model based on statistical correlation between Web objects. This model can be frequently updated by querying the data cube of Web server logs. To our knowledge, this integrated data-cube and prediction-based prefetching framework is the first such effort.
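A toy sketch of a three-dimensional session cube with roll-up queries; the choice of dimensions (user, hour of day, page) and the count measure are assumptions for illustration, not necessarily the dimensions used in the paper.

    from collections import defaultdict

    class SessionCube:
        """Toy 3-D cube over (user, hour-of-day, page) with count as the measure;
        '*' in a query rolls that dimension up."""
        def __init__(self):
            self.cells = defaultdict(int)

        def add(self, user, hour, page, count=1):
            self.cells[(user, hour, page)] += count

        def query(self, user="*", hour="*", page="*"):
            return sum(c for (u, h, p), c in self.cells.items()
                       if user in ("*", u) and hour in ("*", h) and page in ("*", p))

    cube = SessionCube()
    cube.add("u1", 9, "/news")
    cube.add("u1", 9, "/sports")
    cube.add("u2", 21, "/news")
    print(cube.query(page="/news"))        # 2: roll up user and time
    print(cube.query(user="u1", hour=9))   # 2: all pages u1 touched at 9 o'clock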

12.
Predictive Prefetching on the Web and Its Potential Impact in the Wide Area. Cited 2 times (0 self-citations, 2 by others).
The rapid increase of World Wide Web users and the development of services with high bandwidth requirements have caused a substantial increase in response times for users on the Internet. Web latency would be significantly reduced if browser, proxy or Web server software could make predictions about the pages that a user is most likely to request next, while the user is viewing the current page, and prefetch their content. In this paper we study predictive prefetching on a totally new Web system architecture. This is a system that provides two levels of caching before information reaches the clients. This work analyses prefetching on a Wide Area Network with the above-mentioned characteristics. We first provide a structured overview of predictive prefetching and show its wide applicability to various computer systems. The WAN that we refer to is the GRNET academic network in Greece. We rely on log files collected at the network's transparent cache (primary caching point), located at GRNET's edge connection to the Internet. We present the parameters that are most important for prefetching on GRNET's architecture and provide preliminary results of an experimental study, quantifying the benefits of prefetching on the WAN. Our experimental study includes the evaluation of two prediction algorithms: an n-most-popular-documents algorithm and a variation of the PPM (Prediction by Partial Matching) prediction algorithm. Our analysis clearly shows that predictive prefetching can improve Web response times inside the GRNET WAN without a substantial increase in network traffic due to prefetching.

13.
石磊, 姚瑶. 《计算机应用》, 2007, 27(11): 2746-2749.
The Markov prediction model is the foundation of Web prefetching and personalised recommendation. The huge number of Web objects causes the number of browsing-transition states to explode, giving the prediction model a severe space-complexity problem. Based on the Web site link structure (WLS), a similarity measure combining row similarity and column similarity is proposed for the transition probability matrix of the Markov model. The similarity matrix is computed first; pages found to be similar by rows and by columns are then merged, reducing the number of states in the Markov model. Experiments show that the model achieves good overall performance and compression, while maintaining high prediction precision and recall in prefetching.
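A minimal sketch of the row/column-similarity idea: cosine similarity over rows (outgoing transitions) and columns (incoming transitions) of a transition-count matrix, with a pair of states considered mergeable when both similarities exceed a threshold. The similarity measure and threshold are illustrative assumptions.

    import numpy as np

    def cosine(u, v):
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        return 0.0 if nu == 0 or nv == 0 else float(u @ v) / (nu * nv)

    def mergeable_pairs(P, threshold=0.9):
        """Pairs of states whose outgoing (row) and incoming (column) transition
        patterns are both similar enough to be collapsed into one state."""
        n = P.shape[0]
        pairs = []
        for i in range(n):
            for j in range(i + 1, n):
                if (cosine(P[i], P[j]) >= threshold and
                        cosine(P[:, i], P[:, j]) >= threshold):
                    pairs.append((i, j))
        return pairs

    # Toy transition-count matrix: states 0 and 1 behave almost identically.
    P = np.array([[0, 0, 5, 5],
                  [0, 0, 6, 4],
                  [1, 1, 0, 8],
                  [1, 1, 8, 0]], dtype=float)
    print(mergeable_pairs(P))   # [(0, 1)]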

14.
An Intelligent Prefetching Algorithm [一种智能的预取算法]. Cited 1 time (0 self-citations, 1 by others).
Network latency is one of the main QoS issues for users; it depends on many factors such as network bandwidth, transmission delay, queueing delay, and the processing speed of clients and servers. Caching and prefetching are the main techniques currently used to reduce network latency, but the improvement in hit rate that a caching proxy server can achieve through caching alone is limited. This paper systematically reviews the basic ideas of existing prefetching algorithms and divides them into four classes: popularity-based, interaction-based, access-probability-based, and data-mining-based. Building on an analysis and comparison of these classes, an intelligent prefetching scheme is proposed. The scheme uses fuzzy matching to compute the probability that a user will access a page, and it controls both the amount and the timing of prefetching to avoid a negative impact on network performance.

15.
Integrating Web Prefetching and Caching Using Prediction Models. Cited 2 times (0 self-citations, 2 by others).
Yang, Qiang; Zhang, Henry Hanning. World Wide Web, 2001, 4(4): 299-321.
Web caching and prefetching have in the past been studied separately. In this paper, we present an integrated architecture for Web object caching and prefetching. Our goal is to design a prefetching system that can work with an existing Web caching system in a seamless manner. In this integrated architecture, a certain amount of caching space is reserved for prefetching. To empower the prefetching engine, a Web-object prediction model is built by mining the frequent paths from past Web log data. We show that the integrated architecture improves performance over Web caching alone, and present our analysis of the tradeoff between the reduced latency and the potential increase in network load.
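A toy sketch of the "reserve part of the cache for prefetching" idea: an LRU cache split into a demand area and a prefetch area, with a prefetched object promoted to the demand area on its first real hit. The split ratio and promotion rule are assumptions, not the paper's exact policy.

    from collections import OrderedDict

    class IntegratedCache:
        """LRU cache with a slice of capacity reserved for prefetched objects;
        a prefetched object is promoted to the demand area on its first real hit."""
        def __init__(self, capacity=100, prefetch_fraction=0.2):
            self.demand = OrderedDict()
            self.prefetched = OrderedDict()
            self.demand_cap = int(capacity * (1 - prefetch_fraction))
            self.prefetch_cap = capacity - self.demand_cap

        def _evict(self, area, cap):
            while len(area) > cap:
                area.popitem(last=False)          # evict least recently used

        def prefetch(self, key, value):
            if key not in self.demand:
                self.prefetched[key] = value
                self.prefetched.move_to_end(key)
                self._evict(self.prefetched, self.prefetch_cap)

        def get(self, key, fetch):
            if key in self.demand:
                self.demand.move_to_end(key)
                return self.demand[key]
            if key in self.prefetched:            # prefetch hit: promote
                value = self.prefetched.pop(key)
            else:
                value = fetch(key)                # miss: fetch from the origin server
            self.demand[key] = value
            self._evict(self.demand, self.demand_cap)
            return value

    cache = IntegratedCache(capacity=4, prefetch_fraction=0.5)
    cache.prefetch("/next.html", "<html>...</html>")
    print(cache.get("/next.html", fetch=lambda k: "fetched " + k))  # served from prefetch area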
