1.
To address the problem that existing web page segmentation methods are all based on the Document Object Model (DOM) and are difficult to implement, a new segmentation method built on a string data model is proposed. The method learns the features of web page titles through machine learning and uses the titles to segment pages. First, title features are learned from the page's line-block distribution function and title tags; then the page is split into content blocks at the titles; finally, content blocks are merged by block depth to complete the segmentation. Theoretical analysis and experimental results show that the algorithm runs in O(n) time and space, that the method segments pages such as university portals, blogs, and resource sites well, and that it can serve a variety of web information management applications.
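The core idea above — split a page's text at lines recognized as titles, in a single O(n) pass over a string model rather than a DOM — can be sketched as follows. The title predicate here is an illustrative stand-in for the learned title features (short line, no trailing punctuation); the real method learns these from the line-block distribution function and title tags.

```python
import re

# Hypothetical stand-in for the learned title features: a short line
# that does not end in sentence punctuation. The paper learns this
# from line-block distributions and title tags instead.
def looks_like_title(line: str) -> bool:
    line = line.strip()
    return 0 < len(line) <= 30 and not re.search(r"[.。,，;；]$", line)

def segment_by_titles(lines):
    """Split a flat list of text lines into content blocks, starting a
    new block whenever a title-like line is seen. Single pass: O(n)."""
    blocks, current = [], []
    for line in lines:
        if looks_like_title(line) and current:
            blocks.append(current)
            current = []
        current.append(line)
    if current:
        blocks.append(current)
    return blocks

page = ["News", "Story one about the event.", "Sports", "Score report, full text."]
blocks = segment_by_titles(page)
```

A merging pass by block depth (not shown) would then fold small blocks into their neighbors.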
2.
The complexity of web information environments and multiple-topic web pages are negative factors significantly affecting the performance of focused crawling. A highly relevant region in a web page may be obscured by the low overall relevance of that page. Segmenting web pages into smaller units significantly improves performance, and traversing an irrelevant page to reach a relevant one (tunneling) improves the effectiveness of focused crawling by expanding its reach. This paper presents a heuristic-based method to enhance focused crawling performance. The method uses a Document Object Model (DOM)-based page partition algorithm to segment a web page into content blocks with a hierarchical structure and investigates how to take advantage of block-level evidence to enhance focused crawling by tunneling. Page segmentation can transform an uninteresting multi-topic web page into several single-topic content blocks, some of which may be interesting. Accordingly, the focused crawler can pursue the interesting content blocks to retrieve the relevant pages. Experimental results indicate that this approach outperforms the Breadth-First, Best-First, and Link-context algorithms in harvest rate, target recall, and target length. Copyright © 2007 John Wiley & Sons, Ltd.
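The block-level tunneling idea can be sketched as a frontier-priority rule: a multi-topic page is pursued if any single block clears a relevance threshold, even when the whole-page score is low. The threshold, the decay factor, and the function shape are illustrative assumptions, not taken from the paper.

```python
# Sketch of block-level tunneling, assuming the page has already been
# partitioned into content blocks with per-block relevance scores in
# [0, 1]. `threshold` and the 0.5 decay are illustrative assumptions.
def frontier_priority(block_scores, page_score, threshold=0.5):
    """Priority for links from this page: follow the best block if it
    is relevant on its own; otherwise tunnel with a decayed priority."""
    best_block = max(block_scores, default=0.0)
    if best_block >= threshold:
        return best_block          # pursue links from the relevant block
    return page_score * 0.5        # decayed priority: tunneling through
```

A page scoring 0.3 overall but containing a 0.8-relevance block would thus be crawled at priority 0.8 rather than discarded.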
3.
With the rapid changes in dynamic web pages, there is an increasing need for receiving instant updates for dynamic blocks on the Web. In this paper, we address the problem of automatically following dynamic blocks in web pages. Given a user-specified block on a web page, we continuously track the content of the block and report the updates in real time. This service can bring obvious benefits to users, such as the ability to track top-ten breaking news on CNN, the prices of iPhones on Amazon, or NBA game scores. We study 3,346 human labeled blocks from 1,127 pages, and analyze the effectiveness of four types of patterns, namely visual area, DOM tree path, inner content and close context, for tracking content blocks. Because of frequent web page changes, we find that the initial patterns generated on the original page could be invalidated over time, leading to the failure of extracting correct blocks. According to our observations, we combine different patterns to improve the accuracy and stability of block extractions. Moreover, we propose an adaptive model that adapts each pattern individually and adjusts pattern weights for an improved combination. The experimental results show that the proposed models outperform existing approaches, with the adaptive model performing the best.
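The combination-plus-adaptation scheme can be sketched as weighted voting over the four pattern types, with weights rewarded or decayed after each extraction. The voting rule and the multiplicative update are illustrative assumptions; the paper's adaptive model is more elaborate.

```python
# Sketch: each pattern (DOM path, inner content, visual area, close
# context) proposes a candidate block; weighted voting picks one, and
# weights adapt based on which patterns agreed with the correct block.
# The update rule and learning rate are illustrative assumptions.
def combine(candidates, weights):
    """candidates: {pattern_name: block_id} -> block with most weight."""
    votes = {}
    for name, block in candidates.items():
        votes[block] = votes.get(block, 0.0) + weights.get(name, 0.0)
    return max(votes, key=votes.get)

def adapt(weights, candidates, correct, lr=0.1):
    """Reward patterns that found the right block, decay the others."""
    for name, block in candidates.items():
        weights[name] *= (1 + lr) if block == correct else (1 - lr)
    return weights
```

Over time a pattern that keeps breaking (say, a brittle DOM path on a frequently restructured page) loses influence to the stabler ones.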
4.
Klessius Berlt, Edleno Silva de Moura, André Carvalho, Marco Cristo, Nivio Ziviani, Thierson Couto. Information Systems, 2010.
In this work we propose a model to represent the web as a directed hypergraph (instead of a graph), where links connect pairs of disjoint sets of pages. The web hypergraph is derived from the web graph by dividing the set of pages into non-overlapping blocks and using the links between pages of distinct blocks to create hyperarcs. A hyperarc connects a block of pages to a single page, in order to provide more reliable information for link analysis. We use the hypergraph model to create the hypergraph versions of the Pagerank and Indegree algorithms, referred to as HyperPagerank and HyperIndegree, respectively. The hypergraph is derived from the web graph by grouping pages under two different partition criteria: pages belonging to the same web host, or pages belonging to the same web domain. We compared the original page-based algorithms with the host-based and domain-based versions, considering a combination of page reputation, the textual content of the pages, and the anchor text. Experimental results using three distinct web collections show that the HyperPagerank and HyperIndegree algorithms may yield better results than the original graph versions of the Pagerank and Indegree algorithms. We also show that the hypergraph versions of the algorithms were slightly less affected by noise links and spamming.
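The simpler of the two hypergraph algorithms, HyperIndegree under the host partition, can be sketched directly: because a hyperarc connects a whole block to a single target page, each linking host counts once toward a page's score, however many of its pages carry the link. The function and variable names are illustrative.

```python
from collections import defaultdict

# Sketch of HyperIndegree with the host-partition criterion: count, for
# each page, the number of distinct *blocks* (hosts) linking to it.
# Intra-block links are ignored, which is what blunts link spam from
# many pages on one colluding host.
def hyper_indegree(links, host_of):
    """links: iterable of (src_page, dst_page); host_of: page -> host."""
    sources = defaultdict(set)
    for src, dst in links:
        if host_of[src] != host_of[dst]:      # only inter-block links
            sources[dst].add(host_of[src])
    return {page: len(hosts) for page, hosts in sources.items()}
```

Two pages on host A linking to the same target contribute a single hyperarc, where plain Indegree would count two links.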
5.
Although caching has been shown to be an efficient technique for reducing the delay in generating web pages to meet page requests, it becomes less effective when the pages are dynamic and contain dynamic contents. In this paper, instead of caching, we study the effectiveness of pre-fetching for handling dynamic web pages. Pre-fetching is a proactive caching scheme, since a page is cached before any request for it arrives. Besides the question of which pages to pre-fetch, an equally important question is when to perform the pre-fetching. To resolve these prediction and timing problems, we exploit the temporal properties of dynamic web pages and the timing of page accesses to determine which pages to pre-fetch and the best time to pre-fetch them, so as to maximize the cache hit probability of the pre-fetched pages. If a required page is found in the cache while still valid, the response time of the request can be greatly reduced. The proposed scheme is called temporal pre-fetching (TPF); it prioritizes pre-fetching requests based on the predicted usability of the to-be-pre-fetched pages. To minimize the impact of incorrect predictions on the processing of on-demand page requests, a qualifying examination removes unnecessary and low-usability pre-fetching requests both while they wait to be processed and just before processing. We implemented the proposed TPF scheme in a web server system and ran experiments to study its performance against a conventional cache-only scheme, using a benchmark auction application under different system and application settings. As the results show, overall system performance, i.e., response time, improves because more page requests can be served immediately from pre-fetched pages.
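The prediction-and-timing idea can be sketched with a toy usability score: a pre-fetch is worthless if the page would expire before its predicted next access, and otherwise more useful the sooner that access is. The score formula and the dropping rule are illustrative assumptions standing in for TPF's qualifying examination.

```python
# Sketch of TPF-style prioritization. Times are in arbitrary units;
# the usability formula is an illustrative assumption.
def usability(next_access, valid_until, now):
    """Zero if the page would expire before its predicted next request;
    otherwise higher the sooner the request is expected."""
    if next_access > valid_until:
        return 0.0
    return 1.0 / (1.0 + max(next_access - now, 0.0))

def schedule(pages, now):
    """pages: {name: (predicted_next_access, valid_until)} -> names in
    descending usability order, with zero-usability requests dropped
    (the 'qualifying examination' step)."""
    scored = [(usability(t, v, now), name) for name, (t, v) in pages.items()]
    return [name for u, name in sorted(scored, reverse=True) if u > 0]
```

A page expected to be requested soon and valid for a long window is fetched first; one that would go stale before use is never fetched at all.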
6.
Web Page Information Extraction Based on an Extended DOM Tree
With the growth of the Internet, the amount of information provided by web pages keeps increasing, and so does its density. Most web pages contain multiple information blocks, laid out compactly and following similar patterns in their HTML. For pages containing multiple information blocks, an information extraction method is proposed: first, an extended DOM (Document Object Model) tree is created and the page is decomposed into discrete information strips; then, following the hierarchy of the extended DOM tree and drawing on the necessary visual features and semantic information, the discrete strips are regrouped; finally, the subtrees containing information blocks are identified and the information is extracted by a depth-first traversal of the DOM tree. The algorithm can extract information from multi-block web pages.
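The first step — flattening a page into discrete information strips annotated with their tag path and depth — can be sketched with Python's standard-library HTML parser. This covers only the decomposition; the regrouping by visual and semantic cues is not shown.

```python
from html.parser import HTMLParser

# Sketch of the decomposition step: collect every text node as an
# "information strip" of (text, tag path, depth). Later steps of the
# method would regroup these strips using layout and semantic cues.
class StripCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.path, self.strips = [], []

    def handle_starttag(self, tag, attrs):
        self.path.append(tag)

    def handle_endtag(self, tag):
        if self.path and self.path[-1] == tag:
            self.path.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.strips.append((text, "/".join(self.path), len(self.path)))

p = StripCollector()
p.feed("<div><ul><li>item one</li><li>item two</li></ul></div>")
```

Strips sharing a tag path and depth (like the two `li` items here) are natural candidates for regrouping into one information block.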
7.
With the development of communication technology, people increasingly want to browse the Web conveniently on handheld mobile devices, but the small screens and low bandwidth of such devices have kept this problem from being solved well. This paper proposes a web page segmentation algorithm suited to small mobile screens: it clusters the information blocks on a page layer by layer using the positions of page objects, producing a page block tree, and then converts the block tree into pages suitable for small-screen browsing according to the characteristics of the device screen.
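One layer of the position-based clustering can be sketched as gap clustering on vertical coordinates: neighboring objects stay in one block until the gap between them exceeds a threshold, and repeating this with growing thresholds yields the block tree. Coordinates and the threshold are illustrative assumptions.

```python
# Sketch of one clustering layer: group page objects by vertical
# position, starting a new block at any gap larger than `gap`.
# Repeated with increasing gap values, this builds a block tree.
def cluster_by_gap(objects, gap):
    """objects: list of (top_y, name) sorted by top_y -> list of blocks."""
    blocks, current = [], [objects[0]]
    for prev, cur in zip(objects, objects[1:]):
        if cur[0] - prev[0] > gap:
            blocks.append(current)
            current = []
        current.append(cur)
    blocks.append(current)
    return blocks

objs = [(0, "logo"), (20, "nav"), (120, "headline"), (140, "body"), (400, "footer")]
```

Each resulting block can then be rendered as its own small-screen page, with the tree providing the navigation between them.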
8.
Computers in Human Behavior, 2006, 22(5): 870-884.
Although many web pages consist of blocks of text surrounded by graphics, there is a lack of valid empirical research to aid the design of this type of page [D. Diaper, P. Waelend, Interact. Comput. 13 (2000) 163]. In particular, little is known about the influence of animations on interaction with web pages. Proportion, in particular the Golden Section, is known to be a key determinant of the aesthetic quality of objects, and aesthetics have recently been identified as a powerful factor in the quality of human–computer interaction [N. Tractinsky, A.S. Katz, D. Ikar, Interact. Comput. 13 (2000) 127]. The current study aimed to establish the relative strength of the effects of graphical display and of the screen ratio of content to navigation areas in web pages, using an information retrieval task and a split-plot experimental design. Results demonstrated an effect of screen ratio, but no effect of graphical display on task performance or on two subjective outcome measures. However, there was an effect of graphical display on perceived distraction, with animated display leading to more distraction than static display, t(64) = 2.33. Results are discussed in terms of processes of perception and attention, and recommendations for web page design are given.
9.
The explosive growth of web data puts enormous storage and service pressure on search engines, and large amounts of redundant, low-quality, and even spam data waste their storage and computing capacity. Against this background, building a web page quality assessment framework and algorithms suited to the real-world Web has become an important research topic in information retrieval. Building on previous work, and with the participation of web users and web page designers, this paper proposes a web page quality assessment framework comprising four dimensions and thirteen factors: authority, content, timeliness, and visual presentation. Annotation data show that the framework is practical to apply and yields consistent labels. Finally, an Ordinal Logistic Regression model is used to analyze the importance of each dimension, leading to several instructive conclusions, among them that whether a page's content and timeliness satisfy user needs is an important determinant of its quality.
10.
The amount of information on a single page far exceeds what a particular user needs from it. To retrieve the information a particular user is interested in from the current page quickly and accurately, a proactive page-information retrieval model is proposed. In this model, the current web page is converted into an information tree according to its block structure, and a user feature tree is built from the user's past browsing behavior; the feature tree is mined to produce the set of information the user needs, which is then retrieved from the current page to obtain the user's set of interesting information. The paper details the principles of proactive retrieval, gives the corresponding algorithms, and demonstrates the feasibility of the model through experiments.
11.
This paper is concerned with the problem of boosting social annotations using propagation, which is also called social propagation. In particular, we focus on propagating social annotations of web pages (e.g., annotations in Del.icio.us). Social annotations are novel resources and valuable in many web applications, including web search and browsing. Although they are developing fast, social annotations of web pages cover only a small proportion (<0.1%) of the World Wide Web. To alleviate the low coverage of annotations, a general propagation model based on Random Surfer is proposed. Specifically, four steps are included, namely basic propagation, multiple-annotation propagation, multiple-link-type propagation, and constraint-guided propagation. The model is evaluated on a dataset of 40,422 web pages randomly sampled from the 100 most popular English sites and ten famous academic sites. Each page's annotations are obtained by querying the history interface of Del.icio.us. Experimental results show that the proposed model is very effective in increasing the coverage of annotations while still preserving novel properties of social annotations. Applications of propagated annotations to web search and classification further verify the effectiveness of the model.
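The basic-propagation step can be sketched in the random-surfer style: each page keeps a fraction of its own annotation weight and receives the rest from the pages linking to it. The damping value and the equal split among in-links are illustrative assumptions.

```python
# Sketch of one basic-propagation step: a page retains (1 - damping) of
# its own annotation weights and receives `damping`, split equally,
# from the annotations of pages linking to it. Damping value assumed.
def propagate(annotations, inlinks, damping=0.85):
    """annotations: {page: {tag: weight}}; inlinks: {page: [src, ...]}."""
    result = {}
    for page, tags in annotations.items():
        new = {t: (1 - damping) * w for t, w in tags.items()}
        for src in inlinks.get(page, []):
            share = damping / max(len(inlinks[page]), 1)
            for t, w in annotations.get(src, {}).items():
                new[t] = new.get(t, 0.0) + share * w
        result[page] = new
    return result
```

Iterating this step spreads tags from the small annotated fraction of the web onto linked, unannotated pages, which is exactly the coverage gain the paper measures.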
12.
Extracting Related Links Based on Link Blocks
Every web page contains a large number of hyperlinks, among them both related links and many noise links. This paper proposes a method for extracting related links based on link blocks. First, the page is divided into many blocks according to its HTML tags, and the links in each block are collected to form link blocks. Then, exploiting the observations that related links appear together in blocks and that their anchor text shares words with the title of the page they appear on, a combination of rules and statistics is applied to extract the related-link blocks from all link blocks. In tests of the method, precision exceeds 85% and recall is around 70%, showing that the method is effective.
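The two cues the method combines — related links clustering in one block, and anchor text overlapping the page title — can be sketched as a simple block filter. The word-overlap rule and its threshold are illustrative assumptions standing in for the paper's rule-plus-statistics combination.

```python
# Sketch of the link-block filter: keep a block of links if at least
# one anchor text shares enough words with the page title. The
# threshold and tokenization are illustrative assumptions.
def related_link_blocks(link_blocks, title, min_overlap=1):
    """link_blocks: list of lists of anchor texts -> retained blocks."""
    title_words = set(title.lower().split())
    kept = []
    for block in link_blocks:
        overlap = max(
            len(title_words & set(anchor.lower().split())) for anchor in block
        )
        if overlap >= min_overlap:
            kept.append(block)
    return kept

blocks = [["home", "login"], ["web page segmentation survey", "block analysis"]]
```

Because the decision is per block rather than per link, one strongly matching anchor rescues its whole block, reflecting the observation that related links appear together.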