Similar Documents
20 similar documents found (search time: 31 ms)
1.
Focussed crawlers enable the automatic discovery of Web resources about a given topic: they navigate the Web link structure and select the hyperlinks to follow by estimating their relevance to the topic, based on evidence obtained from the already downloaded pages. This work proposes a classifier-guided focussed crawling approach that estimates the relevance of a hyperlink to an unvisited Web resource by combining textual evidence representing its local context, namely the textual content appearing in its vicinity in the parent page, with visual evidence associated with its global context, namely the presence of images relevant to the topic within the parent page. The proposed approach is applied to the discovery of environmental Web resources that provide air quality measurements and forecasts, since such measurements (and particularly the forecasts) are not only provided in textual form but are also commonly encoded as multimedia, mainly in the form of heatmaps. Our evaluation experiments indicate that incorporating visual evidence into the focussed crawler's link selection process is more effective than using textual features alone, particularly in conjunction with hyperlink exploration strategies that allow the discovery of highly relevant pages lying behind apparently irrelevant ones.
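To make the link-scoring idea concrete, here is a minimal sketch of how textual and visual evidence might be fused into a single hyperlink score. The two classifier objects, their sklearn-style `predict_proba` interface, and the 0.7/0.3 linear-fusion weights are illustrative assumptions, not the paper's implementation.

```python
# Sketch: combine textual link-context evidence with page-level visual
# evidence to score an unvisited hyperlink. Classifier interfaces and
# fusion weights are assumptions for illustration only.

def score_hyperlink(anchor_context: str, parent_page_images: list,
                    text_classifier, image_classifier,
                    text_weight: float = 0.7,
                    visual_weight: float = 0.3) -> float:
    """Estimate the topical relevance of an unvisited link."""
    # Local context: text appearing around the link in the parent page.
    p_text = text_classifier.predict_proba([anchor_context])[0][1]

    # Global context: does the parent page contain topic-relevant images
    # (e.g. air-quality heatmaps)? Take the strongest image signal.
    p_visual = max((image_classifier.predict_proba([img])[0][1]
                    for img in parent_page_images), default=0.0)

    # Linear fusion of the two evidence sources.
    return text_weight * p_text + visual_weight * p_visual
```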

2.
This work addresses issues related to the design and implementation of focused crawlers. Several variants of state-of-the-art crawlers that rely on web page content and link information to estimate the relevance of web pages to a given topic are proposed. Particular emphasis is given to crawlers capable of learning not only the content of relevant pages (as classic crawlers do) but also the paths leading to relevant pages. A novel learning crawler inspired by a previously proposed Hidden Markov Model (HMM) crawler is described as well. The crawlers share the same baseline implementation (only the priority assignment function differs in each crawler), providing an unbiased framework for a comparative analysis of their performance. All crawlers achieve their maximum performance when a combination of web page content and (link) anchor text is used for assigning download priorities. Furthermore, the new HMM crawler improves on the original HMM crawler and also outperforms classic focused crawlers in searching for specialized topics.
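Since all the compared crawlers share one baseline and differ only in their priority assignment function, that design can be sketched as a frontier priority queue with a pluggable scorer. All helper signatures, the `link.url` attribute, and the 0.5 relevance threshold below are assumptions, not details from the paper.

```python
import heapq

class FocusedCrawler:
    """Shared baseline; only `priority` differs between crawler variants.
    fetch/extract_links/relevance are assumed helper callables."""

    def __init__(self, seeds, priority, fetch, extract_links, relevance):
        self.frontier = [(-1.0, url) for url in seeds]  # max-heap via negation
        heapq.heapify(self.frontier)
        self.priority = priority      # e.g. content-, anchor-, or HMM-based
        self.fetch = fetch
        self.extract_links = extract_links
        self.relevance = relevance
        self.seen = set(seeds)

    def crawl(self, budget: int):
        harvested = []
        while self.frontier and len(harvested) < budget:
            _, url = heapq.heappop(self.frontier)
            page = self.fetch(url)
            if page is None:
                continue
            if self.relevance(page) > 0.5:   # threshold is an assumption
                harvested.append(url)
            for link in self.extract_links(page):  # links carry .url (assumed)
                if link.url not in self.seen:
                    self.seen.add(link.url)
                    # Variant-specific score from page content + anchor text.
                    heapq.heappush(self.frontier,
                                   (-self.priority(page, link), link.url))
        return harvested
```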

3.
Inspired by learning-model crawlers, this work combines web page content and link information to estimate a page's relevance to a given topic, yielding two new crawler variants. The new crawlers emphasize not only the ability to learn the content of relevant pages but also the ability to learn paths leading to relevant pages, and they achieve a qualitative improvement in searching for specialized topics.

4.
Indexing the Web is becoming an increasingly laborious task for search engines as the Web grows exponentially in size and distribution. Currently, the most effective known way to address this problem is the use of focused crawlers, which employ dedicated algorithms to detect Web pages related to their topic of interest. We propose a method that uses specific HTML elements of a page to predict the topical focus of all pages reachable through the unvisited links within the current page. The recognized on-topic pages are then ranked by their relevance to the crawler's main topic before being downloaded. In the Treasure-Crawler, we use a hierarchical structure called the T-Graph as a guide for assigning an appropriate priority score to each unvisited link; URLs are later downloaded in order of this priority. This paper presents the implementation, test results, and performance evaluation of the Treasure-Crawler system. The crawler is evaluated against standard information retrieval criteria such as recall and precision, achieving values close to 50% on both, which supports the significance of the proposed approach.

5.
Classical Web crawlers make use of only hyperlink information in the crawling process, whereas focused crawlers are intended to download only Web pages relevant to a given topic by utilizing word information before downloading each page. Web pages, however, contain additional information that can be useful for the crawling process. We have developed a crawler, iCrawler (intelligent crawler), whose backbone is a Web content extractor that automatically pulls content out of seven different blocks: menus, links, main texts, headlines, summaries, additional necessaries, and unnecessary texts. The extraction process consists of two steps, which invoke each other to obtain information from the blocks. The first step learns which HTML tags refer to which blocks using a decision tree learning algorithm. Guided by these numerous sources of information, the crawler becomes considerably effective, achieving a relatively high accuracy of 96.37% in our block extraction experiments. In the second step, the crawler extracts content from the blocks using string matching functions. These functions, together with the tag-to-block mapping learned in the first step, give iCrawler considerable time and storage efficiency: it performs 14 times faster in the second step than in the first, and decreases storage costs by 57.10% compared with texts obtained through classical HTML stripping. Copyright © 2013 John Wiley & Sons, Ltd.
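A minimal sketch of the first step as described (learning which HTML tags map to which blocks with a decision tree), using scikit-learn. The feature encoding (tag name, CSS class, text length, link density) and the tiny training set are illustrative assumptions.

```python
# Sketch of iCrawler's step one: learn an HTML-element -> block-type
# mapping with a decision tree. Features and training rows are assumptions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

training_examples = [
    ({"tag": "ul", "css_class": "nav", "text_len": 40, "link_density": 0.9}, "menu"),
    ({"tag": "h1", "css_class": "title", "text_len": 12, "link_density": 0.0}, "headline"),
    ({"tag": "p", "css_class": "body", "text_len": 800, "link_density": 0.05}, "main_text"),
    ({"tag": "div", "css_class": "ads", "text_len": 60, "link_density": 0.7}, "unnecessary"),
]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(features for features, _ in training_examples)
y = [label for _, label in training_examples]

block_model = DecisionTreeClassifier().fit(X, y)

# Classify an unseen element into one of the block types.
element = {"tag": "p", "css_class": "story", "text_len": 650, "link_density": 0.02}
print(block_model.predict(vectorizer.transform([element])))  # e.g. ['main_text']
```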

6.
A Topic-Specific Information Search Strategy Based on Genetic Algorithms
Topic-specific retrieval restricts information retrieval to a particular subject domain and provides search services within that domain; it is one of the development directions for next-generation search engines. Its key technology is the search for topic-relevant information. This paper proposes a genetic-algorithm-based search strategy for topic-specific information that increases the chance of discovering pages linked behind pages of low content similarity, thereby widening the search range for relevant pages. At the same time, hyperlink metadata is used as a hint to predict the topical relevance of linked pages, speeding up the search. Comparative search experiments demonstrate that the algorithm performs well.
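One way to picture such a strategy is sketched below: the frontier is treated as a population, parents are selected by relevance fitness (roulette wheel, so low-similarity parents keep a small chance), and a mutation step follows a random out-link. The `fetch`, `relevance`, and `out_links` helpers and the anchor-metadata hint are all assumptions, not the paper's exact algorithm.

```python
import random

def relevance_hint(link) -> float:
    # Placeholder for predicting a linked page's topical relevance from
    # hyperlink metadata (anchor text etc.), as the abstract describes.
    return float(len(getattr(link, "anchor_text", "") or ""))

def ga_crawl_step(population, fetch, relevance, out_links,
                  pop_size=50, mutation_rate=0.2):
    """One generation of a GA-style crawl over a frontier of URLs."""
    pages = {url: fetch(url) for url in population}
    fitness = {url: relevance(pages[url]) for url in population}

    # Roulette-wheel selection: low-fitness parents keep a small chance,
    # which is what lets pages behind weak parents still be searched.
    parents = random.choices(list(fitness), k=pop_size,
                             weights=[f + 1e-6 for f in fitness.values()])

    next_generation = set()
    for url in parents:
        links = out_links(pages[url])
        if not links:
            continue
        if random.random() < mutation_rate:
            next_generation.add(random.choice(links))        # mutation
        else:
            next_generation.add(max(links, key=relevance_hint))
    return next_generation
```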

7.
Web crawlers are essential to many Web applications, such as Web search engines, Web archives, and Web directories, which maintain Web pages in their local repositories. In this paper, we study the problem of crawl scheduling that biases crawl ordering toward important pages. We propose a set of crawling algorithms for effective and efficient crawl ordering that prioritize important pages, with the well-known PageRank as the importance metric. To score URLs, the proposed algorithms utilize various features, including partial link structure, inter-host links, page titles, and topic relevance. We conduct a large-scale experiment using publicly available data sets to examine the effect of each feature on crawl ordering and to evaluate the performance of many algorithms. The experimental results verify the efficacy of our schemes. In particular, compared with the representative RankMass crawler, the FPR-title-host algorithm reduces running time by a factor of up to three while improving effectiveness by 5% in cumulative PageRank.

8.
Application of Support Vector Machines in a Chemistry-Focused Topical Crawler
Crawlers are an important component of search engines: they crawl automatically along the hyperlinks in web pages and collect all kinds of resources. To improve the efficiency of collecting resources on a specific topic, text classification techniques are used to guide crawling. This paper applies SVM-based automatic text classification to a chemistry-focused crawler: an SVM classifier scores the crawled pages and thereby guides the crawler toward chemistry-related pages. Comparison with a non-topical breadth-first crawler and a keyword-matching topical crawler shows that the SVM-guided crawler effectively improves collection efficiency for chemistry Web resources.
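A minimal sketch of the SVM scoring step using scikit-learn: a linear SVM over TF-IDF features scores fetched pages, and the score can rank that page's out-links in the frontier. The tiny training set is an illustrative assumption.

```python
# Sketch: SVM page scorer guiding a chemistry-focused crawler.
# Training texts/labels are toy assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

train_texts = [
    "benzene aromatic ring substitution reaction yield",
    "titration molar concentration acid base equilibrium",
    "football league final score highlights",
    "travel guide hotel booking tips",
]
train_labels = [1, 1, 0, 0]  # 1 = chemistry, 0 = off-topic

svm_scorer = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
svm_scorer.fit(train_texts, train_labels)

def page_priority(page_text: str) -> float:
    # Signed distance from the separating hyperplane; larger = more on-topic.
    return float(svm_scorer.decision_function([page_text])[0])

print(page_priority("catalyst reaction yield measured after reflux"))
```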

9.
Inherit/Feedback: A New Approach to Web Topic Mining
Classical link analysis methods such as PageRank and HITS focus on a page's authority rather than its topical relevance, so topic drift occurs quickly when they guide a topical search. To address this, the Inherit/Feedback method for Web topic mining is proposed on the basis of a topic-association topology model. The basic idea is that, along a search path, a node inherits the topical relevance of its parent nodes and feeds its own topical relevance back to them. A topic search algorithm based on Inherit/Feedback (IFC) is also proposed. Experimental results show that this method effectively guides topical search and is well suited to deep search and mining of domain-oriented websites.
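The inherit/feedback idea can be sketched directly from the abstract's description: a node inherits part of its parent's relevance and, once measured, feeds part of its own relevance back up the path. The decay factors `alpha` and `beta` are assumptions.

```python
# Sketch of Inherit/Feedback relevance propagation along a crawl path.
# alpha (inheritance) and beta (feedback decay) are assumed parameters.

class Node:
    def __init__(self, url, parent=None):
        self.url, self.parent = url, parent
        self.relevance = 0.0

def visit(node: Node, measured_relevance: float,
          alpha: float = 0.5, beta: float = 0.3) -> None:
    # Inherit: blend this node's own measurement with its parent's score.
    inherited = alpha * node.parent.relevance if node.parent else 0.0
    node.relevance = measured_relevance + inherited

    # Feedback: propagate part of the child's score back up the path, so
    # parents that lead to relevant pages rise in crawl priority.
    ancestor, weight = node.parent, beta
    while ancestor is not None:
        ancestor.relevance += weight * measured_relevance
        ancestor, weight = ancestor.parent, weight * beta
```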

10.
To address the tendency of traditional topical crawlers to fall into local optima and to describe topics inadequately, a topical crawling method combining an ontology with an improved tabu search strategy (On-ITS) is proposed. First, ontology-based semantic similarity is used to compute the topic's semantic vector, and a page's text feature vector is built with position-based weighting of HTML text features; the vector space model is then used to compute the page's topical relevance. On this basis, the topical relevance of anchor text and the PageRank value of the linked page are computed to assess link priority comprehensively. In addition, to prevent the crawler from getting stuck in local optima, an ITS-based topical crawler is designed to optimize the crawl queue. Taking rainstorm disasters and typhoon disasters as topics, under identical experimental conditions, the On-ITS crawler's harvest precision exceeds that of the baseline algorithms by at most 58% and at least 8%, and other evaluation metrics are also strong. The On-ITS method effectively improves the accuracy of acquiring domain information and fetches more topic-relevant pages.
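The position-weighted vector space step can be sketched as follows; the position weights and the tiny topic vector are illustrative assumptions, and the ontology-similarity and tabu-search parts are omitted.

```python
# Sketch: position-weighted page vector + cosine similarity to a topic
# vector (vector space model). Weights and example data are assumptions.
import math
from collections import Counter

POSITION_WEIGHTS = {"title": 3.0, "h1": 2.0, "anchor": 1.5, "body": 1.0}

def page_vector(blocks: dict) -> Counter:
    """blocks maps a position name to its text, e.g. {'title': 'typhoon ...'}."""
    vec = Counter()
    for position, text in blocks.items():
        w = POSITION_WEIGHTS.get(position, 1.0)
        for term in text.lower().split():
            vec[term] += w
    return vec

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[t] * v[t] for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values())) *
            math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

topic_vec = Counter({"typhoon": 2.0, "disaster": 1.5, "rainstorm": 2.0})
page = {"title": "typhoon warning issued", "body": "heavy rainstorm expected"}
print(cosine(page_vector(page), topic_vec))
```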

11.
The rapid growth of the Web poses enormous challenges for general-purpose crawling systems, which has made small-scale, topic-specific information gathering a research hotspot in recent years. This article introduces the basic principles of a topic-specific Web information gathering system, analyzes how topical pages are distributed across the Web, and proposes a method for improving crawler performance by supplying a high-quality seed set, which saves hardware and network resources and makes updates easier.

12.
Given a user keyword query, current Web search engines return a list of individual Web pages ranked by their "goodness" with respect to the query. Thus, the basic unit for search and retrieval is an individual page, even though information on a topic is often spread across multiple pages. This degrades the quality of search results, especially for long or uncorrelated (multitopic) queries, in which individual keywords rarely occur together in the same document and a single page is therefore unlikely to satisfy the user's information need. We propose a technique that, given a keyword query, generates new pages on the fly, called composed pages, which contain all query keywords. The composed pages are generated by extracting and stitching together relevant pieces from hyperlinked Web pages and retaining links to the original Web pages. To rank the composed pages, we consider both the hyperlink structure of the original pages and the associations between the keywords within each page. Furthermore, we present and experimentally evaluate heuristic algorithms to efficiently generate the top composed pages. The quality of our method is compared to current approaches by using user surveys. Finally, we also show how our techniques can be used to perform query-specific summarization of Web pages.

13.
A Topical Crawler Based on a Dynamic Topic Lexicon
Building on a study of topical crawlers that filter URLs with different strategies, a topical crawler based on a dynamic topic lexicon is proposed. It updates the topic lexicon in real time during crawling, improving the accuracy of URL filtering. Experiments show that the proposed crawler can fetch more topic-relevant pages in relatively little time while exploring as little of the Web as possible.

14.
Topic islands formed by Web pages severely degrade the performance of a search engine's topical crawler, and discovering new topics by manually adding large numbers of initial seed links cannot guarantee comprehensive coverage of topical pages. After analyzing traditional topical crawling strategies based on content analysis, link analysis, and context graphs, a crawling strategy based on dynamic tunneling is proposed. The strategy combines page-level topical relevance computation with URL link relevance prediction to determine the topical relevance of pages lying between topic islands, and builds a hierarchical topic judgment model to address the weak links between islands. It also effectively prevents the topic drift caused by collecting too many topic-irrelevant pages, enabling dynamic tunnel control along crawling directions that preserve the topic's semantic information. In the experiments, the hierarchical structure of topical pages was used to detect page relevance and extract keywords for the topic "sports", which were then used for indexing and query tests over the collected pages. The results show that the dynamic tunneling strategy handles the topic island problem well and clearly improves the precision and recall of a "sports" topical search engine.

15.
A Multi-Granularity URL Priority Computation Method for Topical Crawling
The performance of the topical crawler is critical to a vertical search system. Designing a topical crawler requires solving two problems: computing the relevance of the current page to the given topic, and computing the visit priority of the URLs waiting to be crawled. For the first problem, a relevance computation method using a page's topical text blocks and related link blocks is given; for the second, a priority computation method based on topic context and four granularities (site level, page level, block level, and link level) is given. On this basis, a topical crawling algorithm using these methods is proposed. Experiments show that, without increasing time complexity, the new algorithm clearly outperforms three classic crawling algorithms in precision and in total information gathered.
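A minimal sketch of a multi-granularity priority in the spirit of the abstract: the four relevance estimates are fused linearly. The weights and score helpers are illustrative assumptions, not the paper's tuned values.

```python
# Sketch: fuse site-, page-, block-, and link-level relevance scores into
# one URL priority. Weights are assumed, not taken from the paper.

def url_priority(site_score: float, page_score: float,
                 block_score: float, link_score: float,
                 weights=(0.1, 0.3, 0.3, 0.3)) -> float:
    """Each *_score is a relevance estimate in [0, 1] at one granularity:
    the host site, the parent page, the text block containing the link,
    and the link's own anchor text."""
    scores = (site_score, page_score, block_score, link_score)
    return sum(w * s for w, s in zip(weights, scores))

print(url_priority(0.6, 0.8, 0.9, 0.7))
```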

16.
Research and Implementation of a Probability-Model-Based Topical Crawler
Building on several existing topical crawlers, a probability-model-based topical crawler is proposed. It analyzes the many kinds of feature information obtained during crawling and uses a probability model to compute a priority value for each URL, by which URLs are filtered and ordered. This addresses the single-strategy limitation of most crawlers: unlike previous topical crawlers, it uses not only a topical relevance metric but also a historical metric and a page quality metric, which handles the "topic drift" and "tunneling" problems well while ensuring resource quality. Finally, multiple experiments verify its superiority in topical page recall and average topical relevance.
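One plausible reading of the "probability model" is to treat the three metrics the abstract names as independent evidence and fuse them noisy-OR style, so a strong historical signal keeps "tunnel" URLs alive when local topical relevance dips. The fusion form is my assumption, not the paper's exact model.

```python
import math

# Sketch: noisy-OR fusion of topical relevance, historical performance of
# the URL's source, and page quality. The fusion form is an assumption.

def crawl_priority(topic_relevance: float, host_history: float,
                   page_quality: float) -> float:
    """All inputs are probabilities in [0, 1]; the result is the probability
    that at least one evidence source marks the URL as worth fetching."""
    evidence = (topic_relevance, host_history, page_quality)
    return 1.0 - math.prod(1.0 - p for p in evidence)

# A weak page on a historically productive host still scores well.
print(crawl_priority(0.1, 0.8, 0.5))
```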

17.
The tremendous growth of the Web poses many challenges for general-purpose single-process crawlers, including irrelevant answers among search results and coverage and scaling issues arising from the enormous size of the World Wide Web. More refined algorithms are therefore needed to yield more precise and relevant search results in an acceptable amount of time. Since employing link-based Web page importance metrics within a multi-process crawler imposes considerable communication overhead on the overall system and cannot produce a precise answer set, such metrics alone are not sufficient for identifying the best search answer set. A link-independent Web page importance metric is therefore needed to govern the priority rule within the queue of fetched URLs. This paper proposes a modest weighted architecture for a focused, structured, parallel Web crawler that employs a link-independent, clickstream-based Web page importance metric. Experiments with this metric over the bounded Web zone of our crowded UTM University Web site show the efficiency of the proposed metric.

18.
The pressing need to retrieve required information from the Web quickly and accurately has given rise to domain-specific (vertical) search engine technology, in which the Web crawler, responsible for gathering information in a particular specialist domain, is a core component. This paper studies the crawling of Chinese domain-specific web pages. Using KL divergence, it verifies the distributional difference between page content and the text surrounding links, and on this basis proposes a crawling algorithm for Chinese domain-specific pages guided by a Naive Bayes classifier with link anchor text and its surrounding context as features, along with an algorithm for automatically acquiring training data with link class labels. Taking the crawling of finance pages as an example, the proposed algorithms were tested both offline and online; the results show that the Naive Bayes-guided crawler achieves a harvest rate of nearly 90% for domain-specific pages.
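A minimal sketch of the classifier-guided step with scikit-learn: a Naive Bayes model over anchor text plus its surrounding context decides whether to follow a link. The toy training data and the character n-gram features (a common choice for Chinese text, since it avoids word segmentation) are assumptions.

```python
# Sketch: Naive Bayes link-following decision from anchor text + context.
# Training data and feature choices are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train = ["股票 基金 行情 走势", "今日 财经 汇率 利率",
         "明星 八卦 娱乐 新闻", "游戏 攻略 下载 试玩"]
labels = [1, 1, 0, 0]  # 1 = finance, 0 = off-topic

nb = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 2)),
                   MultinomialNB())
nb.fit(train, labels)

def follow_link(anchor_text: str, context: str) -> bool:
    """Follow the link iff the classifier predicts the finance class."""
    return nb.predict([anchor_text + " " + context])[0] == 1

print(follow_link("基金", "今日 行情"))
```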

19.
We develop a new approach for Web information discovery and filtering. Our system, called WID, allows the user to specify long-term information needs by means of various topic profile specifications; an entire example page or an index page can be accepted as input for the discovery. It makes use of a simulated annealing algorithm to explore new Web pages automatically, since simulated annealing possesses properties favorable to the discovery objectives. Information retrieval techniques are adopted to evaluate the content-based relevance of each page being explored, and hyperlink information is considered alongside the textual context in a page's relevance score. WID supports three forms of relevance feedback: positive page feedback, negative page feedback, and positive keyword feedback. The system is domain independent and does not rely on any prior knowledge or information about the Web content. Extensive experiments demonstrate the effectiveness of the discovery performance achieved by WID.
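A minimal sketch of simulated-annealing exploration as described: sometimes accept a less relevant page so the walk can escape local neighbourhoods. The cooling schedule and the `fetch`/`relevance`/`random_neighbor` helpers are illustrative assumptions, not WID's implementation.

```python
import math
import random

# Sketch: simulated-annealing walk over linked pages. Downhill moves are
# accepted with probability exp(delta / t), which shrinks as t cools.

def anneal_explore(start_url, fetch, relevance, random_neighbor,
                   t0=1.0, cooling=0.95, steps=200):
    current = start_url
    current_score = relevance(fetch(current))
    best, best_score, t = current, current_score, t0

    for _ in range(steps):
        candidate = random_neighbor(current)       # a page linked from current
        cand_score = relevance(fetch(candidate))
        delta = cand_score - current_score
        if delta >= 0 or random.random() < math.exp(delta / t):
            current, current_score = candidate, cand_score
            if current_score > best_score:
                best, best_score = current, current_score
        t *= cooling                               # cool the temperature
    return best, best_score
```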

20.
Topic-sensitive PageRank: a context-sensitive ranking algorithm for Web search
The original PageRank algorithm for improving the ranking of search-query results computes a single vector, using the link structure of the Web, to capture the relative "importance" of Web pages, independent of any particular search query. To yield more accurate search results, we propose computing a set of PageRank vectors, biased using a set of representative topics, to capture more accurately the notion of importance with respect to a particular topic. For ordinary keyword search queries, we compute the topic-sensitive PageRank scores for pages satisfying the query using the topic of the query keywords. For searches done in context (e.g., when the search query is performed by highlighting words in a Web page), we compute the topic-sensitive PageRank scores using the topic of the context in which the query appeared. By using linear combinations of these (precomputed) biased PageRank vectors to generate context-specific importance scores for pages at query time, we show that we can generate more accurate rankings than with a single, generic PageRank vector. We describe techniques for efficiently implementing a large-scale search system based on the topic-sensitive PageRank scheme.
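The biasing idea can be sketched as ordinary power-iteration PageRank with the teleport distribution concentrated on one topic's pages instead of being uniform. The damping factor 0.85 follows the usual PageRank convention; the tiny example graph and the dangling-page handling are assumptions. The paper's query-time step then combines several such precomputed biased vectors linearly; the sketch computes just one.

```python
# Sketch: topic-sensitive PageRank via power iteration with a biased
# teleport vector. Example graph and dangling handling are assumptions.

def topic_pagerank(out_links, topic_pages, d=0.85, iters=50):
    nodes = list(out_links)
    # Teleport mass goes only to on-topic pages (the biased vector).
    v = {n: (1.0 / len(topic_pages) if n in topic_pages else 0.0)
         for n in nodes}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - d) * v[n] for n in nodes}
        for n in nodes:
            links = out_links[n] or nodes  # dangling pages spread uniformly
            share = d * pr[n] / len(links)
            for m in links:
                nxt[m] += share
        pr = nxt
    return pr

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(topic_pagerank(graph, topic_pages={"a"}))
```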
