Similar Documents
20 similar documents retrieved (search time: 78 ms)
1.
To address the low efficiency of traditional focused crawlers, an improved Context Graph crawling strategy is proposed, based on an analysis of the heuristic crawler search algorithm Context Graph. The strategy improves the original algorithm with a feature selection method based on word-frequency differences and a revised TF-IDF formula, jointly considering the influence of text from different parts of a web page on feature selection as well as the inter-class and intra-class weights of feature terms, so as to improve the quality of feature selection and evaluation. Experimental results show that, compared with the established traditional method, the improved strategy is more efficient.
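As a rough illustration of the idea (the abstract does not give the exact formula), the sketch below scores feature terms with a TF-IDF variant that rewards terms concentrated in the target class; the specific inter-class weighting factor is an assumption.

```python
# Sketch of a class-aware TF-IDF variant in the spirit described above.
# The "frequency difference" factor below is an illustrative assumption.
import math
from collections import Counter

def class_aware_tfidf(docs_by_class, target_class):
    """docs_by_class: {class_label: [token lists]}. Returns {term: score}
    for the target class, boosting terms frequent inside the class
    (intra-class weight) but rare in the other classes (inter-class weight)."""
    in_class, out_class, df = Counter(), Counter(), Counter()
    n_docs = sum(len(docs) for docs in docs_by_class.values())
    for label, docs in docs_by_class.items():
        bucket = in_class if label == target_class else out_class
        for doc in docs:
            for term in set(doc):
                df[term] += 1
            bucket.update(doc)
    scores = {}
    for term, tf in in_class.items():
        idf = math.log(1 + n_docs / (1 + df[term]))
        diff = tf / (1 + out_class[term])  # word-frequency difference factor
        scores[term] = tf * idf * diff
    return scores
```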

2.
A Survey of Research on Focused Web Crawlers
Web information resources are growing exponentially, and focused web crawlers have emerged in response to users' increasingly personalized needs. A focused web crawler is a program that downloads pages on a specific topic: by exploiting information obtained during page collection, it fetches only pages relevant to the topic. Applications such as search engines built on focused crawlers and the construction of domain corpora with them have been widely deployed. This survey first introduces the definition and working principles of focused crawlers, then reviews recent research at home and abroad, comparing the strengths and weaknesses of various crawling strategies and related algorithms, and finally suggests future research directions for focused web crawlers.

3.
The complexity of web information environments and multiple-topic web pages are negative factors that significantly affect the performance of focused crawling. A highly relevant region in a web page may be obscured by the low overall relevance of that page; segmenting web pages into smaller units significantly improves performance. Traversing an irrelevant page to reach a relevant one (tunneling) can improve the effectiveness of focused crawling by expanding its reach. This paper presents a heuristic-based method to enhance focused crawling performance. The method uses a Document Object Model (DOM)-based page partition algorithm to segment a web page into content blocks with a hierarchical structure, and investigates how to exploit block-level evidence to enhance focused crawling by tunneling. Page segmentation can transform an uninteresting multi-topic web page into several single-topic content blocks, some of which may be interesting; accordingly, the focused crawler can pursue the interesting content blocks to retrieve relevant pages. Experimental results indicate that this approach outperforms the Breadth-First, Best-First, and Link-context algorithms in harvest rate, target recall, and target length. Copyright © 2007 John Wiley & Sons, Ltd.
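A minimal sketch of block-level link selection in this spirit, assuming the bs4 (BeautifulSoup) package and a simple keyword-overlap test as the block relevance check; the paper's hierarchical partition algorithm and scoring are not reproduced here.

```python
# Block-level link selection sketch. Blocks and the keyword-overlap relevance
# test are simplifications of the DOM-based partition algorithm.
from bs4 import BeautifulSoup

BLOCK_TAGS = ["div", "table", "td", "p", "section", "article"]

def segment_blocks(html):
    """Split a page into candidate content blocks (text plus outgoing links).
    Note: nested block tags yield overlapping blocks; fine for a sketch."""
    soup = BeautifulSoup(html, "html.parser")
    blocks = []
    for node in soup.find_all(BLOCK_TAGS):
        text = node.get_text(" ", strip=True)
        if text:
            links = [a["href"] for a in node.find_all("a", href=True)]
            blocks.append({"text": text, "links": links})
    return blocks

def relevant_links(html, topic_terms):
    """Follow links only from blocks whose text mentions the topic, so a
    relevant block inside an overall off-topic page is still pursued."""
    urls = []
    for block in segment_blocks(html):
        if set(block["text"].lower().split()) & topic_terms:
            urls.extend(block["links"])
    return urls
```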

4.
The sheer volume of web pages and its rapid growth make it difficult for general-purpose search engines to return satisfactory results for topic- or domain-oriented queries. The focused crawler studied in this paper collects topic-relevant information, greatly reducing the number of pages to be processed: it evaluates each page's topic relevance and preferentially crawls pages with higher relevance. An efficient focused crawler is designed and implemented using a subspace-based semantic analysis technique combined with Bayesian classification and support vector machines. Experiments show that the algorithm achieves good accuracy and efficiency.
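The priority-driven fetch order that this family of crawlers relies on can be sketched with a simple best-first frontier; the relevance score fed to it would come from a classifier such as the Bayes/SVM combination described above, which is not reproduced here.

```python
# Best-first frontier sketch: URLs are fetched in order of predicted
# topic relevance.
import heapq

class Frontier:
    def __init__(self):
        self._heap, self._seen = [], set()

    def push(self, url, relevance):
        if url not in self._seen:
            self._seen.add(url)
            # heapq is a min-heap, so negate to pop the highest score first
            heapq.heappush(self._heap, (-relevance, url))

    def pop(self):
        neg_score, url = heapq.heappop(self._heap)
        return url, -neg_score
```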

5.
Link contexts in classifier-guided topical crawlers
Context of a hyperlink, or link context, is defined as the terms that appear in the text around a hyperlink within a Web page. Link contexts have been applied to a variety of Web information retrieval and categorization tasks. Topical or focused Web crawlers have a special reliance on link contexts: these crawlers automatically navigate the hyperlinked structure of the Web while using link contexts to predict the benefit of following the corresponding hyperlinks with respect to some initiating topic or theme. Using topical crawlers that are guided by a support vector machine, we investigate the effects of various definitions of link contexts on crawling performance. We find that a crawler that exploits words both in the immediate vicinity of a hyperlink and in the entire parent page performs significantly better than a crawler that depends on just one of those cues. We also find that a crawler that uses the tag tree hierarchy within Web pages provides effective coverage. We analyze our results along various dimensions such as link context quality, topic difficulty, length of crawl, training data, and topic domain. The study was done using multiple crawls over 100 topics covering millions of pages, allowing us to derive statistically strong results.
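A hedged sketch of extracting the two cues the study compares, assuming bs4 and approximating the "immediate vicinity" of a hyperlink by the text of its enclosing element (the paper also studies fixed word windows and tag-tree definitions).

```python
# Link-context extraction sketch.
from bs4 import BeautifulSoup

def link_contexts(html):
    """Return {href: (local_context, page_text)}: the two cues whose
    combination the study found to work best."""
    soup = BeautifulSoup(html, "html.parser")
    page_text = soup.get_text(" ", strip=True)
    contexts = {}
    for a in soup.find_all("a", href=True):
        local = a.get_text(" ", strip=True)
        if a.parent is not None:
            # approximate the anchor's vicinity with its enclosing element
            local = local + " " + a.parent.get_text(" ", strip=True)
        contexts[a["href"]] = (local, page_text)
    return contexts
```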

6.
Focussed crawlers enable the automatic discovery of Web resources about a given topic by automatically navigating the Web link structure and selecting the hyperlinks to follow by estimating their relevance to the topic based on evidence obtained from the already downloaded pages. This work proposes a classifier-guided focussed crawling approach that estimates the relevance of a hyperlink to an unvisited Web resource based on the combination of textual evidence representing its local context, namely the textual content appearing in its vicinity in the parent page, with visual evidence associated with its global context, namely the presence of images relevant to the topic within the parent page. The proposed focussed crawling approach is applied towards the discovery of environmental Web resources that provide air quality measurements and forecasts, since such measurements (and particularly the forecasts) are not only provided in textual form, but are also commonly encoded as multimedia, mainly in the form of heatmaps. Our evaluation experiments indicate the effectiveness of incorporating visual evidence in the link selection process applied by the focussed crawler over the use of textual features alone, particularly in conjunction with hyperlink exploration strategies that allow for the discovery of highly relevant pages that lie behind apparently irrelevant ones.
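A minimal sketch of the late fusion this suggests; the linear rule and the weight are assumptions, not the paper's formula.

```python
# Late-fusion sketch: combine a hyperlink's local textual relevance with the
# parent page's visual (image) relevance.
def link_score(text_relevance, image_relevance, alpha=0.7):
    """Both inputs in [0, 1]; alpha is a hypothetical mixing weight."""
    return alpha * text_relevance + (1 - alpha) * image_relevance
```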

7.
Determining the direction and depth of the search is the core problem of focused crawling. To this end, the concept of a link's expected residual energy and a method for computing it are proposed. The method computes a link's immediate reward energy from the information on the current page, and iteratively updates its expected residual energy using the historical rewards contributed by the different paths that reach the same link. Using expected residual energy as both the link priority and the search-depth limit, a focused crawling algorithm based on the expected-residual-energy model is designed, and the key modules are implemented. Experimental results show that this method is better at discovering topical web sites.
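A sketch of the bookkeeping such a model needs: each link's energy blends the immediate reward from the current page with energy accumulated from earlier paths reaching the same link. The exponential blend below is an assumed stand-in for the paper's iterative update rule.

```python
# Expected-residual-energy bookkeeping sketch.
class EnergyTable:
    def __init__(self, blend=0.5):
        self.blend = blend   # weight given to energy from historical paths
        self.energy = {}

    def update(self, url, immediate_reward):
        """Mix the immediate reward computed from the current page with the
        energy contributed by earlier paths that reached the same link."""
        prior = self.energy.get(url, immediate_reward)
        self.energy[url] = (1 - self.blend) * immediate_reward + self.blend * prior

    def priority(self, url):
        return self.energy.get(url, 0.0)
```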

8.
Seed URL selection for a focused Web crawler aims to guide the crawler toward related and valuable information that meets a user's personal information requirements and to provide more effective information retrieval. In this paper, we propose a seed URL selection approach based on a user-interest ontology. To enrich semantic queries, we first apply Formal Concept Analysis to construct a user-interest concept lattice from the user's log profile. By merging concept lattices, we construct a user-interest ontology that more appropriately describes the implicit concepts and the relationships between them for semantic representation and query matching. We then make full use of the user-interest ontology to extract the user's topic area of interest and to expand user queries so as to retrieve the most related pages as seed URLs, which serve as the entry points of the focused crawler. In particular, we focus on how to refine the user topic area using a bipartite directed graph. Experiments show that the user-interest ontology can be built effectively by merging concept lattices and that the proposed approach selects a high-quality collection of seed URLs and improves the average precision of the focused Web crawler.

9.
A Method for Crawling Domain-Relevant Web Sites
This paper proposes a method for crawling domain-relevant Web sites that accurately collects sites in the user's domain of interest at low cost. The method improves on traditional focused crawler technology: it first uses meta-search to improve the link-analysis-based page fetching of a traditional crawler, then uses heuristic search to greatly reduce the search cost, and achieves good precision by introducing a scoring method that evaluates domain relevance. The paper describes the algorithm in detail and validates its efficiency and effectiveness through extensive experiments.

10.
《Information Systems》2006,31(4-5):232-246
One of the major problems for automatically constructed portals and information discovery systems is how to assign proper order to unvisited web pages. Topic-specific crawlers and information-seeking agents should avoid traversing off-topic areas and concentrate on links that lead to documents of interest. In this paper, we propose an effective approach based on the relevancy context graph to solve this problem. The graph can estimate the distance and the relevancy degree between a retrieved document and the given topic. By calculating the word distributions of the general and topic-specific feature words, our method preserves the property of the relevancy context graph and reflects it in the word distributions. With the help of the topic-specific and general word distributions, our crawler can measure a page's expected relevancy to a given topic and determine the order in which pages should be visited. Simulations show that our method outperforms both the breadth-first method and the method using only the context graph.
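A hedged sketch of scoring a page against the two word distributions, using a log-likelihood ratio as a stand-in for the paper's expected-relevancy estimate.

```python
# Expected-relevancy sketch: compare a page's words against the topic-specific
# and general word distributions.
import math

def expected_relevancy(words, topic_dist, general_dist, eps=1e-9):
    """topic_dist / general_dist: {word: probability}. Positive scores mean
    the page's vocabulary favors the topic distribution."""
    score = sum(math.log(topic_dist.get(w, eps) / general_dist.get(w, eps))
                for w in words)
    return score / max(len(words), 1)
```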

11.
The number of vertical search engines and portals has rapidly increased over the last few years, making the importance of a topic-driven (focused) crawler self-evident. In this paper, we develop a latent semantic indexing classifier that combines link analysis with text content in order to retrieve and index domain-specific web documents. Our implementation takes a different approach to focused crawling and aims to overcome the limitations imposed by the need to provide initial training data, while maintaining a high recall/precision ratio. We compare its efficiency with other well-known web information retrieval techniques.
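A minimal LSI sketch with scikit-learn, illustrating the general technique rather than the paper's exact classifier: documents are projected into a latent semantic space and candidates are ranked by similarity to a topic centroid.

```python
# LSI ranking sketch: truncated SVD over TF-IDF vectors, then cosine
# similarity to the centroid of known on-topic documents.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lsi_rank(topic_docs, candidate_docs, k=50):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(topic_docs + candidate_docs)
    svd = TruncatedSVD(n_components=min(k, min(X.shape) - 1))
    Z = svd.fit_transform(X)
    centroid = Z[: len(topic_docs)].mean(axis=0, keepdims=True)
    # one relevance score per candidate document
    return cosine_similarity(centroid, Z[len(topic_docs):])[0]
```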

12.
A Survey of Focused Crawler Technology
周立柱  林玲 《计算机应用》2005,25(9):1965-1969
The rapid development of the Internet poses a huge challenge for finding and discovering information on the World Wide Web. For the topic- or domain-related queries that most users pose, traditional general-purpose search engines often cannot return satisfactory result pages. To overcome this shortcoming, research on topic-oriented focused crawlers was proposed, and focused crawling has since become one of the hot research topics concerning the Web. This paper surveys this research area: it presents the basic concept of the focused crawler and outlines its working principles, and, based on the current state of research, systematically introduces and analyzes the key techniques of focused crawling (crawl target description, page analysis algorithms, and page search strategies). On this basis, it proposes several future research directions for focused crawlers, including crawler techniques oriented toward data analysis and mining, topic description and definition, discovery of relevant resources, Web data cleaning, and expansion of the search space.

13.
A focused crawler is an efficient tool used to traverse the Web to gather documents on a specific topic. It can be used to build domain-specific Web search portals and online personalized search tools. Focused crawlers can only use information obtained from previously crawled pages to estimate the relevance of a newly seen URL. Therefore, good performance depends on powerful modeling of context as well as the quality of the current observations. To address this challenge, we propose capturing sequential patterns along paths leading to targets based on probabilistic models. We model the process of crawling by a walk along an underlying chain of hidden states, defined by hop distance from target pages, from which the actual topics of the documents are observed. When a new document is seen, prediction amounts to estimating the distance of this document from a target. Within this framework, we propose two probabilistic models for focused crawling, Maximum Entropy Markov Model (MEMM) and Linear-chain Conditional Random Field (CRF). With MEMM, we exploit multiple overlapping features, such as anchor text, to represent useful context and form a chain of local classifier models. With CRF, a form of undirected graphical models, we focus on obtaining global optimal solutions along the sequences by taking advantage not only of text content, but also of linkage relations. We conclude with an experimental validation and comparison with focused crawling based on Best-First Search (BFS), Hidden Markov Model (HMM), and Context-graph Search (CGS).

14.
Focused crawling is aimed at selectively seeking out pages that are relevant to a predefined set of topics. Since an ontology is a well-formed knowledge representation, ontology-based focused crawling approaches have come into research. However, since these approaches utilize manually predefined concept weights to calculate the relevance scores of web pages, it is difficult to acquire the optimal concept weights to maintain a stable harvest rate during the crawling process. To address this issue, we proposed a learnable focused crawling framework based on ontology. An ANN (artificial neural network) was constructed using a domain-specific ontology and applied to the classification of web pages. Experimental results show that our approach outperforms the breadth-first search crawling approach, the simple keyword-based crawling approach, the ANN-based focused crawling approach, and the focused crawling approach that uses only a domain-specific ontology.

15.
Topic islands formed by web pages seriously degrade the focused-crawler performance of search engine systems, and manually adding large numbers of initial seed links to discover new topics cannot guarantee comprehensive coverage of topical pages. Building on an analysis of traditional focused crawling strategies based on content analysis, link analysis, and context graphs, a focused crawling strategy based on dynamic tunneling is proposed. The strategy combines page topic relevance computation with URL link relevance prediction to determine the topical relatedness of pages across topic islands, and constructs a hierarchical topic judgment model to solve the weak-link problem between topic islands. It also effectively prevents the topic drift caused by the crawler collecting too many topic-irrelevant pages, thereby achieving dynamic tunneling control along crawl directions that preserve the topic's semantic information. In the experiments, the hierarchical structure of topical pages is used to assess page relevance and to extract keywords for the topic "sports", and the collected topical pages are then used for index-and-query tests. The results show that the dynamic-tunneling crawling strategy handles the topic-island problem well and markedly improves the precision and recall of a "sports" topical search engine.

16.
张娜  张化祥 《计算机应用》2006,26(5):1171-1173
In the Web environment, the classic link analysis method (the HITS algorithm) pays too much attention to page authority while ignoring topic relevance, which easily leads to topic drift. After briefly introducing HITS, this paper analyzes the causes of its topic drift and, by incorporating a content relevance measure, proposes a new search algorithm, WHITS. Experiments show that the algorithm mines the latent semantic relations between hyperlinks and can effectively guide topical mining.
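A sketch of a relevance-weighted HITS iteration in the spirit of WHITS, scaling authority updates by each page's content relevance so that off-topic but heavily linked pages gain less; the exact combination rule is an assumption.

```python
# Relevance-weighted HITS sketch.
def weighted_hits(out_links, relevance, iters=20):
    """out_links: {page: [pages it links to]}; relevance: {page: score in [0, 1]}."""
    pages = set(out_links) | {q for tgts in out_links.values() for q in tgts}
    auth = dict.fromkeys(pages, 1.0)
    hub = dict.fromkeys(pages, 1.0)
    for _ in range(iters):
        auth = {p: relevance.get(p, 0.0) *
                   sum(hub[q] for q, tgts in out_links.items() if p in tgts)
                for p in pages}
        hub = {p: sum(auth[q] for q in out_links.get(p, [])) for p in pages}
        for d in (auth, hub):             # normalize to keep scores bounded
            norm = sum(v * v for v in d.values()) ** 0.5 or 1.0
            for p in d:
                d[p] /= norm
    return auth, hub
```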

17.
To improve focused crawler performance, a focused crawler based on URL pattern sets is proposed, exploiting how sites organize their information and the characteristics of URLs. The crawler works in two phases. In the pilot crawling phase, sample data is collected from the site, URL patterns are built with a pattern construction algorithm based on a URL prefix tree to form a pattern relation graph, and the HITS algorithm is applied to this graph to compute the importance of each pattern. In the focused crawling phase, the generated URL patterns are used, without downloading pages in advance, to judge whether a page is topic-relevant and whether it can guide deeper crawling, and the priority of links awaiting fetching is predicted from pattern importance. Experiments show that, compared with existing focused crawlers, this crawler quickly steers crawling toward topic-relevant pages, maintains precision and recall, and effectively improves crawling efficiency.
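A minimal sketch of the URL prefix tree underlying the pattern construction phase; pattern importance scoring (the paper applies HITS over a pattern relation graph) is omitted here.

```python
# URL prefix tree sketch for pattern construction.
from urllib.parse import urlparse

class UrlTrie:
    """Trie over host + path segments; node counts expose frequent prefixes,
    which serve as candidate URL patterns."""
    def __init__(self):
        self.children, self.count = {}, 0

    def insert(self, url):
        parsed = urlparse(url)
        segments = [parsed.netloc] + [s for s in parsed.path.split("/") if s]
        node = self
        for seg in segments:
            node = node.children.setdefault(seg, UrlTrie())
            node.count += 1

    def patterns(self, prefix=(), min_count=2):
        """Yield (segments, count) for every prefix seen at least min_count times."""
        for seg, child in self.children.items():
            if child.count >= min_count:
                yield prefix + (seg,), child.count
                yield from child.patterns(prefix + (seg,), min_count)
```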

18.
Since current focused crawler search strategies have difficulty finding globally optimal solutions, this paper designs a genetic-algorithm-based focused crawler scheme through analysis and study of genetic algorithms. It introduces a PageRank algorithm that incorporates text content; computes page topic relevance with the vector space model; judges page importance from both link structure and topic relevance; selects the genetic factors used during crawling according to page importance; and sets a fitness function to filter topic-relevant pages. Compared with an ordinary focused crawler, this strategy obtains a large amount of highly topic-relevant page information, raises the importance of the pages obtained, meets users' retrieval needs for topical pages, and to some extent solves the above problem.
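The vector-space-model relevance computation mentioned here reduces to cosine similarity between term-frequency vectors; a minimal sketch follows (the genetic-algorithm machinery built on top of it is not shown).

```python
# Vector-space-model relevance sketch: cosine similarity between the term
# vectors of a page and the topic.
import math
from collections import Counter

def cosine_relevance(page_tokens, topic_tokens):
    a, b = Counter(page_tokens), Counter(topic_tokens)
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```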

19.
A Web Information Gatherer on a Cluster System
With advances in hardware and network technology, cluster systems have become an important way to build network services, and providing web information retrieval services (such as search engines) on cluster systems has great application value. Web information retrieval rests on collecting data from the web, a task usually performed by an information gathering system. This paper presents a web information gatherer on a cluster system. The gatherer performs a breadth-first traversal of the crawl space along the hyperlinks between WWW pages, uses multithreaded concurrency to improve bandwidth utilization on a single node, and implements a simple and effective cooperation mechanism among nodes.
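A skeleton of a multithreaded breadth-first crawler in this spirit; the fetch and link-extraction functions are caller-supplied stand-ins, and the multi-node coordination described above is not shown.

```python
# Multithreaded breadth-first crawler skeleton.
import threading
from queue import Queue

def bfs_crawl(seeds, fetch, extract_links, workers=8, limit=1000):
    """fetch(url) -> html; extract_links(html) -> [urls]."""
    frontier, seen, lock = Queue(), set(seeds), threading.Lock()
    for seed in seeds:
        frontier.put(seed)

    def worker():
        while True:
            url = frontier.get()
            try:
                for link in extract_links(fetch(url)):
                    with lock:
                        if link not in seen and len(seen) < limit:
                            seen.add(link)
                            frontier.put(link)
            except Exception:
                pass                      # skip unreachable pages
            finally:
                frontier.task_done()

    for _ in range(workers):
        threading.Thread(target=worker, daemon=True).start()
    frontier.join()                       # wait until the frontier drains
    return seen
```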

20.
Application of Support Vector Machines in a Chemistry-Focused Crawler
Crawlers are an important component of search engines: they crawl automatically along the hyperlinks in web pages to gather resources of all kinds. To improve the efficiency of collecting resources on a specific topic, text classification is used to guide the crawl. This paper applies SVM-based automatic text classification to a chemistry-focused crawler: an SVM classifier scores the crawled pages and steers the crawler toward chemistry-related pages. Comparisons with a non-focused breadth-first crawler and a keyword-matching focused crawler show that the SVM-based focused crawler effectively improves the efficiency of collecting chemistry Web resources.
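A minimal sketch of such an SVM page scorer using scikit-learn: train on labeled chemistry/non-chemistry page texts, then use the decision value to prioritize links. The training data here is a placeholder, not the paper's corpus.

```python
# SVM page-scoring sketch: 1 = chemistry page, 0 = other.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def train_page_scorer(texts, labels):
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(texts, labels)
    return model

def score_page(model, text):
    # signed distance from the separating hyperplane acts as a relevance score;
    # larger positive values indicate more chemistry-like pages
    return model.decision_function([text])[0]
```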
