Similar Documents
20 similar documents found.
1.
Design and Algorithms of a Topic-Specific Search Engine Robot (cited by: 6; self: 0; others: 6)
Topic-specific (focused) search engines restrict information retrieval to a particular subject area and provide retrieval services for that subject; they are one of the development directions of next-generation search engines. This paper presents a focused-search robot system, NetBat 2.02, which crawls the Web and downloads topic-relevant pages. The key techniques of focused search are searching for topic-related information and analyzing page relevance. The paper analyzes the strengths and weaknesses of traditional focused-search algorithms and proposes a focused-search algorithm based on backlinks combined with hyperlink-text analysis. It also discusses a content-based page relevance analysis algorithm in detail. Comparative search experiments show that the system performs well and crawls topic-relevant pages accurately.
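The abstract mentions a content-based page relevance analysis but does not give its formula. As a rough illustration only, the sketch below scores a page against a set of topic terms with cosine similarity over bag-of-words vectors and mixes in an anchor-text score; the tokenizer, the mixing weight alpha and all names are assumptions, not details taken from the NetBat paper.

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Bag-of-words term-frequency vector (illustrative tokenizer)."""
    return Counter(re.findall(r"[a-z0-9\u4e00-\u9fff]+", text.lower()))

def cosine(v1, v2):
    """Cosine similarity between two sparse term-frequency vectors."""
    common = set(v1) & set(v2)
    dot = sum(v1[t] * v2[t] for t in common)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def page_relevance(page_text, anchor_text, topic_terms, alpha=0.7):
    """Combine body-content similarity with anchor-text similarity.

    alpha is an assumed mixing weight, not a value from the paper.
    """
    topic_vec = term_vector(" ".join(topic_terms))
    content_score = cosine(term_vector(page_text), topic_vec)
    anchor_score = cosine(term_vector(anchor_text), topic_vec)
    return alpha * content_score + (1 - alpha) * anchor_score

if __name__ == "__main__":
    topic = ["focused", "crawler", "search", "engine"]
    print(page_relevance("a focused crawler downloads topic pages from the web",
                         "focused crawler paper", topic))
```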

2.
Focused search engines are one of the development directions of next-generation search engines, and the focused crawler is the key to building them. This paper proposes a design framework for a focused crawler system and describes the key implementation techniques in detail. To address the shortcomings of the traditional Hopfield-network page analysis algorithm for topic filtering, a self-learning page analysis algorithm is proposed that increases the chance of finding relevant pages linked behind irrelevant ones; a simple and efficient mirror-page detection algorithm is also proposed. Finally, tests with the prototype system FC show that the system performs well.

3.
Taking the open-source web crawler Heritrix as a basis, this paper describes its working principle and architecture. An index is built from a fisheries-information lexicon, and a Heritrix-based focused crawling algorithm is proposed that filters pages by their links and content; on this basis the fisheries-information crawler FishInfoCrawler is built. Experiments show that the algorithm can fetch pages relevant to the fisheries-information domain.

4.
This paper analyzes web link structure and topic-oriented information retrieval systems and applies link analysis to such systems. It surveys the use of link analysis in the search strategy and result ranking of topic-oriented retrieval systems, as well as methods and strategies for analyzing topic-page relevance with link analysis. Link analysis is used to weigh topic pages, and a link-analysis topic dictionary is built to improve the retrieval system and thus raise the efficiency of targeted information search and gathering.

5.
A Focused Crawler Algorithm Based on a Multi-Agent System (cited by: 2; self: 1; others: 1)
徐照财  程显毅 《计算机工程》2008,34(16):204-206
Focused crawling is a key technique for focused search engines. This paper proposes a crawler algorithm based on a multi-agent system. It fetches topic-relevant pages by filtering on ontology-based semantic topic keywords, and uses the semantic network of an ontology repository to handle synonyms and near-synonyms within the ontology domain. Topic-relevant pages are predicted from the different weights assigned to keywords appearing under different HTML tags and from hyperlink anchor text, and the agents interact through a blackboard communication mechanism. Experimental results show that the algorithm improves both the precision and the recall of the crawled pages to a certain extent.

6.
A Topic Crawler Based on Simulated Annealing (cited by: 1; self: 1; others: 0)
The topic crawler is the foundation and core of a topic search engine, and the quality of the crawling strategy directly affects the search results. To find more relevant pages, a simulated-annealing mechanism is used to choose the next link to visit, so that links with high "overall value" have a chance of being selected early in the search, while a "tunneling" technique widens the search range for relevant pages. When computing a link's value, both the value of the content of the page hosting the link and the value of the link's anchor text are considered, and they are given different weights according to how strongly they influence the link value. Experiments show that the method is effective in improving both page coverage and accuracy.
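A minimal sketch of the simulated-annealing link selection idea described above: a lower-valued candidate link can still be chosen early in the crawl with probability exp((candidate - best) / T), and the link value combines page-content value with anchor-text value under assumed weights. The temperature schedule and all constants are illustrative assumptions, not the paper's settings.

```python
import math
import random

def link_value(page_score, anchor_score, w_page=0.6, w_anchor=0.4):
    """Combined link value from the hosting page's content relevance and the
    anchor-text relevance (weights are illustrative assumptions)."""
    return w_page * page_score + w_anchor * anchor_score

def pick_next_link(frontier, temperature):
    """Simulated-annealing choice: a random candidate is compared against the
    best link in the frontier; a worse candidate may still be accepted with
    probability exp((candidate - best) / T)."""
    best = max(frontier, key=lambda link: link["value"])
    candidate = random.choice(frontier)
    if candidate["value"] >= best["value"]:
        return candidate
    accept_p = math.exp((candidate["value"] - best["value"]) / temperature)
    return candidate if random.random() < accept_p else best

if __name__ == "__main__":
    frontier = [{"url": "a", "value": link_value(0.9, 0.8)},
                {"url": "b", "value": link_value(0.2, 0.4)},
                {"url": "c", "value": link_value(0.5, 0.1)}]
    temperature = 1.0
    for step in range(5):
        chosen = pick_next_link(frontier, temperature)
        print(step, chosen["url"])
        temperature *= 0.8   # cooling schedule (assumed)
```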

7.
The topic relevance of page links affects the computation of page authority. The traditional HITS algorithm evaluates authority only from the link structure of pages and is therefore prone to topic drift. This paper extends HITS and proposes a topic-driven HITS algorithm that analyzes the topic relevance of page documents and links, folds that relevance into the authority computation, and propagates authority over the link topology. The algorithm can find results that are tightly coupled to the topic, effectively controls topic drift, and improves search quality.
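The abstract does not give the exact formula for folding topic relevance into the authority computation, so the sketch below shows one plausible reading: a standard HITS iteration in which every hub/authority contribution is scaled by the topic relevance of the page receiving it. The weighting scheme and the toy graph are assumptions, not the paper's method.

```python
def topic_hits(links, relevance, iterations=20):
    """Topic-weighted HITS sketch.

    links     -- dict: page -> list of pages it links to
    relevance -- dict: page -> topic relevance in [0, 1]
    Authority and hub scores propagate along links, but every contribution is
    scaled by the receiving page's topic relevance, damping off-topic pages.
    """
    pages = set(links) | {t for targets in links.values() for t in targets}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority: topic-weighted sum of hub scores of pages linking to p
        new_auth = {p: 0.0 for p in pages}
        for src, targets in links.items():
            for t in targets:
                new_auth[t] += hub[src] * relevance.get(t, 0.0)
        # hub: topic-weighted sum of authority scores of pages p links to
        new_hub = {p: 0.0 for p in pages}
        for src, targets in links.items():
            for t in targets:
                new_hub[src] += new_auth[t] * relevance.get(src, 0.0)
        # normalize to keep scores bounded
        a_norm = sum(v * v for v in new_auth.values()) ** 0.5 or 1.0
        h_norm = sum(v * v for v in new_hub.values()) ** 0.5 or 1.0
        auth = {p: v / a_norm for p, v in new_auth.items()}
        hub = {p: v / h_norm for p, v in new_hub.items()}
    return auth, hub

if __name__ == "__main__":
    links = {"A": ["B", "C"], "B": ["C"], "D": ["C"]}
    relevance = {"A": 0.9, "B": 0.8, "C": 0.95, "D": 0.1}
    auth, hub = topic_hits(links, relevance)
    print(sorted(auth.items(), key=lambda kv: -kv[1]))
```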

8.
Research on Topic Crawling Techniques Based on Genetic Algorithms (cited by: 3; self: 0; others: 3)
To address the shortcomings of current topic-search strategies, a genetic-algorithm-based topic crawling strategy is proposed. It raises the chance that pages linked behind pages of low content similarity are still searched, widening the search range for relevant pages. For page relevance analysis, an ontology-semantics-based topic filtering strategy is introduced. Experimental results show that the genetic-algorithm-based topic crawler fetches a large number of topic-relevant pages and, when the seed set is chosen sensibly, can fetch many pages of high topical relevance.

9.
Design and Implementation of a Web Spider with an Adaptive Best-First Search Algorithm (cited by: 1; self: 0; others: 1)
魏文国  谢桂园 《计算机应用》2007,27(11):2857-2859
The topic search engine NonHogSearch improves the search process of a best-first web spider by controlling how greedy the search is. It introduces the notion of a page signal-to-noise ratio to decide whether a page belongs to the topic being searched. Furthermore, NonHogSearch automatically updates link weights during crawling: when a topic-relevant page is found, a reward is generated and fed back along the link path, updating the Q values of all links on that path. This keeps the spider from being trapped prematurely in a locally optimal subspace of the Web search space, and multiple link paths are searched in parallel, improving the performance of the search engine. Experiments confirm that the algorithm has advantages in both recall and precision.
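A minimal sketch of the reward feedback described above, assuming a TD-style update: when a topic-relevant page is reached, the reward is propagated backwards along the followed link path with a discount per hop. The learning rate, discount factor and update rule are assumptions, not taken from NonHogSearch.

```python
def propagate_reward(q_values, path, reward, alpha=0.5, gamma=0.8):
    """Feed a reward obtained at the end of `path` backwards along the chain
    of links, discounting it at each hop.

    q_values -- dict: link (from_url, to_url) -> current Q value
    path     -- ordered list of links followed to reach the relevant page
    """
    discounted = reward
    for link in reversed(path):
        old = q_values.get(link, 0.0)
        q_values[link] = old + alpha * (discounted - old)
        discounted *= gamma   # reward fades with distance from the target
    return q_values

if __name__ == "__main__":
    q = {}
    path = [("seed", "hub"), ("hub", "list"), ("list", "relevant_page")]
    propagate_reward(q, path, reward=1.0)
    for link, value in q.items():
        print(link, round(value, 3))
```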

10.
A Topic-Sensitive Crawling Method Based on HITS (cited by: 2; self: 0; others: 2)
Topic-based information gathering is an emerging and practical approach in information retrieval: by restricting downloaded pages to a particular subject area, it improves both the efficiency of a search engine and the quality of the information it provides. The idea is to selectively collect pages relevant to a predefined topic during crawling and to avoid downloading off-topic pages, so that information useful to the user is found more accurately. This paper discusses several key issues of topic crawlers and improves the topical relevance and quality of downloaded pages by refining the topic model, the learning method of the link classification model, and the link analysis method. On this basis a topic crawler system is designed and implemented that uses topic-sensitive HITS to compute page priorities. Experiments show good results.

11.
Web crawlers are complex applications that explore the Web for different purposes. Web crawlers can be configured to crawl online social networks (OSNs) to obtain relevant data about their global structure. Before a web crawler can be launched to explore the Web, a large number of settings have to be configured. These settings define the crawler's behavior and have a big impact on the collected data. Both the amount of collected data and the quality of the information it contains are affected by the crawler settings; therefore, by properly configuring these settings we can target specific goals to achieve with our crawl. In this paper, we review the configuration choices that an attacker who wants to obtain information from an OSN by crawling it has to make to conduct the attack. We analyze different scheduler algorithms for web crawlers and evaluate their performance in terms of how useful they are for pursuing a set of different adversary goals.

12.
A Survey of Focused Web Crawler Research (cited by: 3; self: 0; others: 3)
Web information resources are growing exponentially, and focused web crawlers have emerged in response to users' increasingly personalized needs. A focused web crawler is a program that downloads pages on a specific topic: by exploiting information gathered while collecting pages, every page it fetches is relevant to the topic. Applications such as search engines built on focused crawlers and domain corpora constructed with them are already widely used. This survey first introduces the definition and working principle of focused crawlers, then reviews recent research at home and abroad and compares the strengths and weaknesses of various crawling strategies and related algorithms, and finally suggests future research directions for focused web crawlers.

13.
Focused crawlers have as their main goal to crawl Web pages that are relevant to a specific topic or user interest, playing an important role for a great variety of applications. In general, they work by trying to find and crawl all kinds of pages deemed as related to an implicitly declared topic. However, users are often not simply interested in any document about a topic; instead they may want only documents of a given type or genre on that topic to be retrieved. In this article, we describe an approach to focused crawling that exploits not only content-related information but also genre information present in Web pages to guide the crawling process. This approach has been designed to address situations in which the specific topic of interest can be expressed by specifying two sets of terms, the first describing genre aspects of the desired pages and the second related to the subject or content of these pages, thus requiring no training or any kind of preprocessing. The effectiveness, efficiency and scalability of the proposed approach are demonstrated by a set of experiments involving the crawling of pages related to syllabi of computer science courses, job offers in the computer science field and sale offers of computer equipment. These experiments show that focused crawlers constructed according to our genre-aware approach achieve F1 levels above 88%, requiring the analysis of no more than 65% of the visited pages in order to find 90% of the relevant pages. In addition, we experimentally analyze the impact of term selection on our approach and evaluate a proposed strategy for semi-automatic generation of such terms. This analysis shows that a small set of terms selected by an expert, or a set of terms specified by a typical user familiar with the topic, is usually enough to produce good results, and that such a semi-automatic strategy is very effective in supporting the task of selecting the sets of terms required to guide a crawling process.
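As a small illustration of the two-term-set idea (one set for genre, one for subject), the sketch below scores a page by how much of each set it covers and combines the two coverages with an assumed weight; the formula and example terms are illustrative, not the authors' actual scoring function.

```python
def genre_aware_score(page_text, genre_terms, content_terms, w_genre=0.5):
    """Score a page by coverage of two separate term sets: one describing the
    genre of the desired pages and one describing their subject.
    The coverage measure and the combination weight are assumptions."""
    text = page_text.lower()
    genre_cov = sum(t in text for t in genre_terms) / len(genre_terms)
    content_cov = sum(t in text for t in content_terms) / len(content_terms)
    return w_genre * genre_cov + (1 - w_genre) * content_cov

if __name__ == "__main__":
    genre = ["syllabus", "schedule", "lecture"]
    content = ["data structures", "algorithms", "complexity"]
    page = "CS 201 syllabus: lecture schedule for data structures and algorithms"
    print(round(genre_aware_score(page, genre, content), 2))
```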

14.
The complexity of web information environments and multiple-topic web pages are negative factors significantly affecting the performance of focused crawling. A highly relevant region in a web page may be obscured because of low overall relevance of that page. Segmenting the web pages into smaller units will significantly improve the performance. Conquering and traversing irrelevant pages to reach a relevant one (tunneling) can improve the effectiveness of focused crawling by expanding its reach. This paper presents a heuristic-based method to enhance focused crawling performance. The method uses a Document Object Model (DOM)-based page partition algorithm to segment a web page into content blocks with a hierarchical structure and investigates how to take advantage of block-level evidence to enhance focused crawling by tunneling. Page segmentation can transform an uninteresting multi-topic web page into several single-topic context blocks, some of which may be interesting. Accordingly, the focused crawler can pursue the interesting content blocks to retrieve the relevant pages. Experimental results indicate that this approach outperforms Breadth-First, Best-First and Link-context algorithms in harvest rate, target recall and target length.
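The DOM-based partition algorithm itself is not reproduced in the abstract; the sketch below only illustrates the general idea of splitting a page into content blocks and scoring each block separately, using a much cruder flattening built on Python's standard HTML parser. The tag set, the scoring measure and the example page are assumptions.

```python
from html.parser import HTMLParser

BLOCK_TAGS = {"p", "div", "td", "li", "h1", "h2", "h3", "section"}

class BlockSegmenter(HTMLParser):
    """Tiny DOM-flattening segmenter: every block-level tag starts a new
    content block, and text is accumulated per block."""
    def __init__(self):
        super().__init__()
        self.blocks = [[]]

    def handle_starttag(self, tag, attrs):
        if tag in BLOCK_TAGS and self.blocks[-1]:
            self.blocks.append([])

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.blocks[-1].append(text)

    def segments(self):
        return [" ".join(block) for block in self.blocks if block]

def block_relevance(block, topic_terms):
    """Fraction of topic terms present in a block (illustrative measure)."""
    text = block.lower()
    return sum(t in text for t in topic_terms) / len(topic_terms)

if __name__ == "__main__":
    html = ("<div><h1>Weather</h1><p>Sunny today.</p></div>"
            "<div><p>A focused crawler downloads topic related pages.</p></div>")
    seg = BlockSegmenter()
    seg.feed(html)
    topic = ["focused", "crawler", "topic"]
    for block in seg.segments():
        print(round(block_relevance(block, topic), 2), "|", block)
```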

15.
This work addresses issues related to the design and implementation of focused crawlers. Several variants of state-of-the-art crawlers relying on web page content and link information for estimating the relevance of web pages to a given topic are proposed. Particular emphasis is given to crawlers capable of learning not only the content of relevant pages (as classic crawlers do) but also paths leading to relevant pages. A novel learning crawler inspired by a previously proposed Hidden Markov Model (HMM) crawler is described as well. The crawlers have been implemented using the same baseline implementation (only the priority assignment function differs in each crawler), providing an unbiased evaluation framework for a comparative analysis of their performance. All crawlers achieve their maximum performance when a combination of web page content and (link) anchor text is used for assigning download priorities to web pages. Furthermore, the new HMM crawler improves the performance of the original HMM crawler and also outperforms classic focused crawlers in searching for specialized topics.

16.
In recent years, there has been considerable research on constructing crawlers which find resources satisfying specific conditions called predicates. Such a predicate could be a keyword query, a topical query, or some arbitrary constraint on the internal structure of the web page. Several techniques such as focused crawling and intelligent crawling have recently been proposed for performing the topic-specific resource discovery process. All these crawlers are linkage based, since they use the hyperlink behavior in order to perform resource discovery. Recent studies have shown that the topical correlations in hyperlinks are quite noisy and may not always show the consistency necessary for a reliable resource discovery process. In this paper, we will approach the problem of resource discovery from an entirely different perspective; we will mine the significant browsing patterns of world wide web users in order to model the likelihood of web pages belonging to a specified predicate. This user behavior can be mined from the freely available traces of large public domain proxies on the world wide web. For example, proxy caches such as Squid are hierarchical proxies which make their logs publicly available. As we shall see in this paper, such traces are a rich source of information which can be mined in order to find the users that are most relevant to the topic of a given crawl. We refer to this technique as collaborative crawling because it mines the collective user experiences in order to find topical resources. Such a strategy turns out to be extremely effective because the topical consistency in world wide web browsing patterns turns out to be very high compared to the noisy linkage information. In addition, the user-centered crawling system can be combined with linkage based systems to create an overall system which works more effectively than a system based purely on either user behavior or hyperlinks.

17.
Focused web crawlers collect topic-related web pages from the Internet. Using Q learning and semi-supervised learning theories, this study proposes an online semi-supervised clustering approach for topical web crawlers (SCTWC) to select the most topic-related URL to crawl based on the scores of the URLs in the unvisited list. The scores are calculated based on the fuzzy class memberships and the Q values of the unlabelled URLs. Experimental results show that SCTWC increases the crawling performance.
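One minimal reading of the scoring rule described above is a membership-weighted sum of cluster Q values; the actual SCTWC formula may differ, and the cluster names and numbers below are purely illustrative.

```python
def sctwc_style_score(memberships, q_values):
    """Score an unvisited URL as the membership-weighted sum of the Q values
    of the clusters it may belong to (an assumed simplification).

    memberships -- dict: cluster id -> fuzzy membership of the URL in [0, 1]
    q_values    -- dict: cluster id -> learned Q value of that cluster
    """
    return sum(m * q_values.get(c, 0.0) for c, m in memberships.items())

if __name__ == "__main__":
    q = {"c1": 0.8, "c2": 0.1, "c3": 0.3}
    url_memberships = {
        "http://example.org/page-a": {"c1": 0.7, "c3": 0.3},
        "http://example.org/page-b": {"c2": 0.9, "c3": 0.1},
    }
    ranked = sorted(url_memberships.items(),
                    key=lambda kv: sctwc_style_score(kv[1], q), reverse=True)
    for url, m in ranked:
        print(round(sctwc_style_score(m, q), 3), url)
```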

18.
Research and Implementation of a Topic Crawler Based on a Probability Model (cited by: 1; self: 1; others: 0)
Building on several existing topic crawlers, this paper proposes a topic crawler based on a probability model. It analyzes the various kinds of feature information obtained during crawling and uses the probability model to compute a priority value for every URL, by which URLs are filtered and ranked. The probability-model crawler overcomes the weakness that most crawlers rely on a single crawling strategy: unlike previous topic crawlers, it uses not only a topic-relevance indicator but also a history indicator and a page-quality indicator, which largely resolves the "topic drift" and "tunneling" problems while safeguarding resource quality. Finally, several groups of experiments verify its advantages in topical page recall and average topical relevance.
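The abstract names three indicators (topic relevance, history, page quality) but not the probability model that combines them. The sketch below assumes independent evidence sources combined in log-odds space, which is one common choice and not necessarily the authors' model.

```python
import math

def url_priority(feature_probs):
    """Combine independent evidence sources into a single probability that a
    URL leads to an on-topic page.

    feature_probs -- dict: evidence name -> P(relevant | that evidence alone)
    Assuming independence, scores are combined in log-odds space (an assumed
    model for illustration only).
    """
    log_odds = 0.0
    for p in feature_probs.values():
        p = min(max(p, 1e-6), 1 - 1e-6)          # avoid log(0)
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

if __name__ == "__main__":
    candidates = {
        "http://example.org/topic-page": {"relevance": 0.9, "history": 0.7, "quality": 0.6},
        "http://example.org/off-topic": {"relevance": 0.3, "history": 0.4, "quality": 0.8},
    }
    for url, feats in sorted(candidates.items(),
                             key=lambda kv: url_priority(kv[1]), reverse=True):
        print(round(url_priority(feats), 3), url)
```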

19.
Computer Networks, 1999, 31(11-16): 1623-1640
The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics. The topics are specified not using keywords, but using exemplary documents. Rather than collecting and indexing all accessible Web documents to be able to answer all possible ad-hoc queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl, and avoids irrelevant regions of the Web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up-to-date. To achieve such goal-directed crawling, we designed two hypertext mining programs that guide our crawler: a classifier that evaluates the relevance of a hypertext document with respect to the focus topics, and a distiller that identifies hypertext nodes that are great access points to many relevant pages within a few links. We report on extensive focused-crawling experiments using several topics at different levels of specificity. Focused crawling acquires relevant pages steadily while standard crawling quickly loses its way, even though they are started from the same root set. Focused crawling is robust against large perturbations in the starting set of URLs. It discovers largely overlapping sets of resources in spite of these perturbations. It is also capable of exploring out and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius. Our anecdotes suggest that focused crawling is very effective for building high-quality collections of Web documents on specific topics, using modest desktop hardware.
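As an illustration of the classifier-guided, boundary-pruning behavior described here, the skeleton below runs a best-first crawl in which pages scored below a threshold are kept out of the expansion frontier. The relevance function, the threshold, the toy pages and all names are assumptions; the paper's actual classifier and distiller are not reproduced.

```python
import heapq

def crawl(seeds, fetch, extract_links, relevance, max_pages=100, threshold=0.5):
    """Best-first focused-crawl skeleton.

    fetch(url)          -> page text (user-supplied downloader)
    extract_links(text) -> list of outgoing URLs
    relevance(text)     -> score in [0, 1], standing in for the classifier
    Out-links of pages scoring below `threshold` are not expanded, which is a
    simplified form of the crawl-boundary pruning described in the abstract.
    """
    frontier = [(-1.0, url) for url in seeds]      # max-heap via negated priority
    heapq.heapify(frontier)
    seen, collected = set(seeds), []
    while frontier and len(collected) < max_pages:
        _, url = heapq.heappop(frontier)
        text = fetch(url)
        score = relevance(text)
        if score >= threshold:
            collected.append((url, score))
            for link in extract_links(text):
                if link not in seen:
                    seen.add(link)
                    heapq.heappush(frontier, (-score, link))   # inherit parent's score
    return collected

if __name__ == "__main__":
    # toy in-memory "web" so the sketch runs without network access
    web = {"seed": ("focused crawler survey", ["a", "b"]),
           "a": ("focused crawling and topic distillation", ["c"]),
           "b": ("cooking recipes", ["d"]),
           "c": ("topic specific resource discovery", []),
           "d": ("gardening tips", [])}
    text_links = {text: links for text, links in web.values()}
    fetch = lambda u: web[u][0]
    links = lambda t: text_links[t]
    rel = lambda t: 1.0 if ("crawl" in t or "topic" in t) else 0.0
    print(crawl(["seed"], fetch, links, rel))
```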

20.
In recent years many new crawling ideas have been proposed, and they share a common technique: focused crawling. By analyzing the region being searched, focused crawling finds the links most relevant to the topic and avoids visiting irrelevant regions of the Web, which saves a great deal of hardware and network resources and keeps pages fresher. To meet this search goal, this paper proposes two algorithms: a page filtering algorithm based on multi-level classification, which experiments show to be highly accurate and markedly faster at classification than general classification algorithms, and a URL ranking algorithm based on Web structure, which makes full use of the structural characteristics of the Web and the distribution of pages.
