Similar Documents
1.
Hernández, Inma; Rivero, Carlos R.; Ruiz, David 《World Wide Web》2019,22(4):1577-1610
Deep Web crawling refers to the problem of traversing the collection of pages in a deep Web site, which are dynamically generated in response to a particular query that is...

2.
3.
Marie-4, an intelligent software agent-based Web crawler and caption filter, searches the Web to find image captions and the associated image objects. It uses a broad set of criteria to yield higher recall than competing systems, which generally focus on high precision.
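
As a toy illustration of caption finding (Marie-4's actual criteria are far broader), one might pair each image tag with its alt text and the plain text immediately following it; this sketch is my own simplification, not the system's filter:

```python
import re

def candidate_captions(html, window=200):
    """Naive caption heuristic: pair each <img> tag with its alt text and
    the plain text immediately following the tag."""
    captions = []
    for m in re.finditer(r'<img[^>]*>', html, re.IGNORECASE):
        alt = re.search(r'alt="([^"]*)"', m.group(0), re.IGNORECASE)
        following = re.sub(r'<[^>]+>', ' ', html[m.end():m.end() + window])
        captions.append({
            'img': m.group(0),
            'alt': alt.group(1) if alt else None,
            'nearby_text': ' '.join(following.split()),
        })
    return captions
```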

4.
《Computer Networks》1999,31(11-16):1623-1640
The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics. The topics are specified not using keywords, but using exemplary documents. Rather than collecting and indexing all accessible Web documents to be able to answer all possible ad-hoc queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl, and avoids irrelevant regions of the Web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up-to-date. To achieve such goal-directed crawling, we designed two hypertext mining programs that guide our crawler: a classifier that evaluates the relevance of a hypertext document with respect to the focus topics, and a distiller that identifies hypertext nodes that are great access points to many relevant pages within a few links. We report on extensive focused-crawling experiments using several topics at different levels of specificity. Focused crawling acquires relevant pages steadily while standard crawling quickly loses its way, even though they are started from the same root set. Focused crawling is robust against large perturbations in the starting set of URLs. It discovers largely overlapping sets of resources in spite of these perturbations. It is also capable of exploring out and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius. Our anecdotes suggest that focused crawling is very effective for building high-quality collections of Web documents on specific topics, using modest desktop hardware.
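
The classifier-guided frontier described above lends itself to a compact sketch. The following is a hypothetical illustration, not the paper's system: relevance_score is a toy stand-in for the trained classifier, fetch and extract_links are caller-supplied, and the distiller component is omitted.

```python
import heapq
import itertools

def relevance_score(page_text, topic_terms):
    """Toy stand-in for the trained classifier: fraction of topic terms present."""
    words = set(page_text.lower().split())
    return sum(term in words for term in topic_terms) / len(topic_terms)

def focused_crawl(seeds, fetch, extract_links, topic_terms, budget=100, threshold=0.3):
    """Best-first crawl: expand only pages the classifier deems on-topic."""
    tie = itertools.count()              # tie-breaker so the heap never compares URLs
    frontier = [(-1.0, next(tie), url) for url in seeds]
    heapq.heapify(frontier)
    seen, collected = set(seeds), []
    while frontier and len(collected) < budget:
        _, _, url = heapq.heappop(frontier)
        page = fetch(url)                # caller-supplied HTTP download
        score = relevance_score(page, topic_terms)
        if score < threshold:
            continue                     # prune irrelevant regions of the Web
        collected.append((url, score))
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                # a child's priority inherits its parent's relevance; the
                # paper's distiller (finding good access points) is omitted
                heapq.heappush(frontier, (-score, next(tie), link))
    return collected
```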

5.
Web forums contain large amounts of information of practical and commercial value, and they are an important source for search engines and question-answering systems. To cope with the complex structure of forums, their many kinds of links, and the ease of falling into crawling traps, this paper proposes a structure-driven method for selecting crawl paths. First, using a small amount of user-labeled type data, sampled pages are clustered by structure using their DOM trees; second, crawl paths are selected according to the evaluation of each node; finally, page-turning links are identified and handled effectively. Experiments show that the coverage and efficiency achieved by this method are clearly superior to traditional algorithms, and it has performed well in the public-opinion monitoring platform of the Institute of Computing Technology, Chinese Academy of Sciences.
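
As a rough illustration of the structure-clustering step, one could fingerprint each sampled page by the set of root-to-tag paths in its DOM tree and group pages with identical fingerprints. The tag-path signature below is my own simplification, not the paper's actual feature set:

```python
from collections import defaultdict
from html.parser import HTMLParser

VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}

class TagPathSignature(HTMLParser):
    """Collect the set of root-to-tag paths as a crude structural fingerprint."""
    def __init__(self):
        super().__init__()
        self.stack, self.paths = [], set()
    def handle_starttag(self, tag, attrs):
        self.paths.add("/".join(self.stack + [tag]))
        if tag not in VOID_TAGS:         # void elements never get a closing tag
            self.stack.append(tag)
    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

def cluster_by_structure(pages):
    """Group pages (a dict url -> html) whose tag-path sets coincide."""
    clusters = defaultdict(list)
    for url, html in pages.items():
        sig = TagPathSignature()
        sig.feed(html)
        clusters[frozenset(sig.paths)].append(url)
    return list(clusters.values())
```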

6.
The Internet, with its sea of Web applications, is one of the largest data stores for big data analysis. To explore and retrieve the states (pages) from Web...

7.
We show that it is possible to collect data that are useful for collaborative filtering (CF) using an autonomous Web spider. In CF, entities are recommended to a new user based on the stated preferences of other, similar users. We describe a CF spider that collects from the Web lists of semantically related entities. These lists can then be used by existing CF algorithms by encoding them as ‘pseudo-users’. Importantly, the spider can collect useful data without pre-programmed knowledge about the format of particular pages or particular sites. Instead, the CF spider uses commercial Web-search engines to find pages likely to contain lists in the domain of interest, and then applies previously proposed heuristics to extract lists from these pages. We show that data collected by this spider are nearly as effective for CF as data collected from real users, and more effective than data collected by two plausible hand-programmed spiders. In some cases, autonomously spidered data can also be combined with actual user data to improve performance.
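
A minimal sketch of the ‘pseudo-user’ encoding, under the assumption that each spidered list simply becomes a user who rates every listed entity 1; the cosine-based recommender here is a generic placeholder, not necessarily the CF algorithm the authors used:

```python
from collections import Counter
from math import sqrt

def lists_to_pseudo_users(spidered_lists):
    """Each Web-extracted list becomes a 'pseudo-user' who rates every
    entity on the list 1 (and implicitly rates everything else 0)."""
    return [{item: 1 for item in lst} for lst in spidered_lists]

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(new_user, pseudo_users, k=5):
    """Rank entities the new user has not seen by similarity-weighted votes."""
    scores = Counter()
    for pu in pseudo_users:
        weight = cosine(new_user, pu)
        for item in pu:
            if item not in new_user:
                scores[item] += weight
    return [item for item, _ in scores.most_common(k)]
```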

8.
《Network Security》2003,2003(7):1-2
Software giants who feel cheated of revenue by software piracy have been deploying Web crawling software in Europe since January to curb the loss. This is a short news story only; visit www.compseconline.com for the latest computer security industry news.

9.
The on-line auction is one of the most successful types of electronic marketplace and has been the subject of many academic studies. In recent years, empirical research on on-line auctions has been flourishing because of the availability of large amounts of high-quality bid data from on-line auction sites. However, the increasingly large volumes of bid data have made data collection ever more complex and time consuming, and there are no effective resources that adequately support this work. This study therefore focuses on the parallel crawling and filtering of on-line auctions from the social network perspective, to help researchers collect and analyze auction data more effectively. The issues raised in this study include parallel crawling architecture, crawling strategies, content filtering strategies, prototype system implementation, and a pilot test of social network analysis. Finally, we conduct an empirical experiment on eBay US and Ruten Taiwan to evaluate the performance of our crawling architecture and to understand auction customers' bidding behavior characteristics. The results of this study show that our parallel crawling and filtering methods work in the real world and are significantly more effective than manual Web crawling. The collected data are useful for drawing social network maps and analyzing bidding problems.
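
A thread-pool sketch of a crawl-then-filter pipeline; this is an assumption-laden outline (fetch and keep stand in for the HTTP download and the paper's content-filtering strategies), not the authors' architecture:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def parallel_crawl(auction_urls, fetch, keep, workers=8):
    """Download auction pages concurrently; retain those passing the filter."""
    kept = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch, url): url for url in auction_urls}
        for done in as_completed(futures):
            try:
                page = done.result()
            except Exception:
                continue                 # skip pages that failed to download
            if keep(page):               # e.g. drop auctions with too few bids
                kept.append((futures[done], page))
    return kept
```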

10.
《Computer Networks》1999,31(11-16):1495-1507
The Web mostly contains semi-structured information. It is, however, not easy to search and extract structural data hidden in a Web page. Current practices address this problem by (1) syntax analysis (i.e., HTML tags) or (2) wrappers and user-defined declarative languages. The former is only suitable for highly structured Web sites, and the latter is time-consuming and offers low scalability: wrappers can handle tens, but certainly not thousands, of information sources. In this paper, we present a novel information mining algorithm, namely KPS, over semi-structured information on the Web. KPS employs keywords, patterns and/or samples to mine the desired information. Experimental results show that KPS is more efficient than existing Web extraction methods.
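
Since the abstract does not spell out how keywords, patterns, and samples combine, here is a toy reading of the idea (not the actual KPS algorithm): keywords gate candidate lines, explicit regex patterns extract values, and each sample is generalized into a pattern by abstracting its digit runs:

```python
import re

def kps_mine(html, keywords=(), patterns=(), samples=()):
    """Toy keyword/pattern/sample miner in the spirit of KPS."""
    # generalize each literal sample into a regex, e.g. "$19.99" -> "\$\d+\.\d+"
    learned = [re.sub(r'\d+', r'\\d+', re.escape(s)) for s in samples]
    text = re.sub(r'<[^>]+>', '\n', html)   # crude tag stripping
    hits = []
    for line in text.splitlines():
        if keywords and not any(k.lower() in line.lower() for k in keywords):
            continue                         # keywords select candidate lines
        for pat in list(patterns) + learned:
            hits.extend(re.findall(pat, line))
    return hits

# usage: kps_mine(page_html, keywords=["price"], samples=["$19.99"])
```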

11.
Research on a Personalized Method for Identifying Sessions in Web Logs
Session identification is an important step in Web log mining. Surveying existing session identification methods, this paper proposes an improved, per-user personalized method that combines several parameters such as page content and download time. The method identifies sessions by checking whether the interval between accesses falls between a maximum and a minimum threshold. Page importance is determined from page content and site structure, a user's normal reading time is estimated from the page's information volume, the actual start of reading is determined from the page download time recorded in the Web log, and the thresholds are then adjusted by combining these factors. Experimental results show that, compared with current methods that apply a single prior threshold to all users and pages, and with methods that dynamically adjust thresholds per user and page, the proposed method determines page access-time thresholds for each user more accurately, and is more reasonable and effective.
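
A simplified sketch of interval-threshold session splitting; the per-page reading-time adjustment (the factor of 3 below) is my own placeholder for the paper's content-based threshold tuning:

```python
def split_sessions(records, min_gap=1.0, max_gap=1800.0, reading_time=None):
    """Split one user's sorted (timestamp, url) records into sessions.
    A new session starts when the gap to the previous request falls
    outside [min_gap, max_gap]; reading_time (url -> estimated normal
    reading seconds) tightens the upper threshold when available."""
    sessions, current = [], []
    prev_ts, prev_url = None, None
    for ts, url in records:
        if prev_ts is not None:
            limit = max_gap
            if reading_time and prev_url in reading_time:
                # hypothetical tuning rule: allow up to 3x the previous
                # page's estimated reading time, capped by the global max
                limit = min(max_gap, 3 * reading_time[prev_url])
            gap = ts - prev_ts
            if gap < min_gap or gap > limit:
                sessions.append(current)
                current = []
        current.append((ts, url))
        prev_ts, prev_url = ts, url
    if current:
        sessions.append(current)
    return sessions
```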

12.
This paper proposes an improved session identification method based on the homepage and navigation pages of the visited site, taking a request for the homepage or a navigation page as the marker that a new session has begun. Using real Web logs, the improved method was implemented in PL/SQL and compared with existing methods. Experimental results show that the improved method identifies sessions more effectively than existing methods.
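
The paper's implementation is in PL/SQL; purely as an illustration, the same boundary rule might look like this in Python (nav_pages is an assumed, precomputed set of homepage/navigation URLs):

```python
def split_sessions_by_structure(records, nav_pages):
    """Start a new session whenever the requested URL is the homepage
    or a known navigation page of the site."""
    sessions, current = [], []
    for ts, url in records:
        if url in nav_pages and current:
            sessions.append(current)     # close the previous session
            current = []
        current.append((ts, url))
    if current:
        sessions.append(current)
    return sessions
```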

13.
A Framework for Web Mining
As information on the Web grows, Web users are also increasing rapidly, and finding the information users need within this mass of information has become ever more important. Analyzing the browsing behavior of online users from Web server logs, mining Web data, and discovering users' traversal patterns has become a new field of research. Oriented to the structure of a Web site, this paper presents a complete framework for Web mining that allows constraints to be added when analyzing complex traversal patterns; it then compares and analyzes the efficiency and accuracy of the algorithms in the framework, and looks ahead to future research directions in Web mining.

14.
A Topic-Sensitive Crawling Method Based on HITS
Topic-focused information gathering is an emerging and practical approach in information retrieval: by restricting downloaded pages to a specific topic domain, it improves search engine efficiency and the quality of the information provided. The idea is to selectively collect pages relevant to predefined topics during crawling and to avoid downloading irrelevant ones, so as to locate information useful to users more accurately. This paper examines several key issues in topical crawlers and improves the topical relevance and quality of downloaded pages by refining the topic model, the learning method of the link classification model, and the link analysis method. On this basis a topical crawler system was designed and implemented that uses topic-sensitive HITS to compute page priorities. Experiments show good results.
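
A hypothetical sketch of topic-sensitive HITS as a priority function: standard hub/authority updates with each authority contribution scaled by the page's topical relevance. The multiplicative weighting is my assumption; the paper's exact formulation is not given in the abstract:

```python
def topic_sensitive_hits(links, relevance, iters=20):
    """Hub/authority iteration with each authority update scaled by the
    page's topical relevance in [0, 1] (pages absent from `relevance`
    are treated as irrelevant). links maps page -> iterable of outlinks."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iters):
        # authority: relevance-weighted sum of hub scores of pages linking in
        auth = {p: relevance.get(p, 0.0) * sum(
                    hub[q] for q in pages if p in links.get(q, ()))
                for p in pages}
        # hub: sum of authority scores of pages linked out to
        hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
        for vec in (auth, hub):          # normalize to keep scores bounded
            norm = sum(v * v for v in vec.values()) ** 0.5 or 1.0
            for p in vec:
                vec[p] /= norm
    return auth, hub                     # authority doubles as crawl priority
```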

15.
武健 《计算机应用》2014,(Z2):120-122,150
To address the scarcity of mining algorithms for temporal dynamic data, this paper takes full account of the dependencies among dynamic data and combines a hidden Markov model (HMM) with a heuristic clustering strategy to mine the evolution characteristics and patterns of temporal dynamic data. First, the temporal data are transformed into a likelihood space with an HMM, and the symmetric Kullback-Leibler (KL) distance is used to measure likelihood; second, a symmetric KL distance transfer matrix is constructed, and hierarchical clustering is used to classify the change patterns of the temporal dynamic data. Applying the method to knowledge discovery about changing job demand for computer networking majors uncovered five classes of job-demand change patterns.
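
A minimal sketch of the distance-and-clustering stage, assuming each time series has already been mapped to a normalized likelihood vector by the HMM (that step is omitted here): symmetric KL distances feed a simple single-linkage agglomeration, a stand-in for the paper's hierarchical clustering:

```python
from math import log

def sym_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler distance between two discrete distributions."""
    kl = lambda a, b: sum(x * log((x + eps) / (y + eps)) for x, y in zip(a, b))
    return 0.5 * (kl(p, q) + kl(q, p))

def agglomerate(series, n_clusters):
    """Single-linkage agglomerative clustering under symmetric KL distance;
    returns clusters as lists of indices into `series`."""
    clusters = [[i] for i in range(len(series))]
    d = {(i, j): sym_kl(series[i], series[j])
         for i in range(len(series)) for j in range(i + 1, len(series))}
    while len(clusters) > n_clusters:
        # merge the closest pair of clusters (minimum over member pairs)
        a, b = min(
            ((x, y) for x in range(len(clusters)) for y in range(x + 1, len(clusters))),
            key=lambda xy: min(d[min(i, j), max(i, j)]
                               for i in clusters[xy[0]] for j in clusters[xy[1]]))
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters

# usage: agglomerate([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]], 2)
```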

16.
Web sequential pattern mining is one of the important topics in Web data mining. This paper proposes an improved algorithm based on the WAP algorithm; it does not need to repeatedly build conditional trees during mining, which improves its running efficiency. Experiments show that the algorithm has a clear advantage over WAP in running time.
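
For orientation, this is the problem WAP-style algorithms solve, shown as a naive support counter over ordered page subsequences; neither the WAP-tree nor the paper's improvement is reproduced here:

```python
from collections import Counter
from itertools import combinations

def frequent_sequences(sessions, min_support, max_len=3):
    """Count ordered (possibly non-contiguous) page subsequences that occur
    in at least min_support sessions; exponential in max_len, so keep it small."""
    frequent = {}
    for k in range(1, max_len + 1):
        counts = Counter()
        for session in sessions:
            # each distinct subsequence is counted once per session
            counts.update(set(combinations(session, k)))
        for seq, n in counts.items():
            if n >= min_support:
                frequent[seq] = n
    return frequent
```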

17.
冉丽  何毅舟  许龙飞 《计算机应用》2004,24(10):158-160
Search engine spamming evolved out of search engine optimization, but it has a negative impact on the development of the Web. By constructing a simplified on-site/off-site model for judging several types of spamming behavior, the paper derives a formula for the penalty factor in an improved PageRank algorithm together with the characteristics of three functions inside it, and surveys the outlook for search engine spam detection methods.
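
Since the abstract does not give the penalty-factor formula, the sketch below simply scales each page's PageRank by a per-page penalty in (0, 1]; treat the multiplicative placement as an assumption, and read the resulting scores as relative rankings:

```python
def penalized_pagerank(links, penalty, d=0.85, iters=50):
    """PageRank with a spam penalty: each page's rank is scaled by
    penalty[p] in (0, 1], so pages judged spammy retain (and pass on)
    less score. links maps page -> list of outlinked pages."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                continue                 # dangling node: its mass is dropped
            share = d * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = {p: penalty.get(p, 1.0) * new[p] for p in pages}
    return rank
```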

18.
Topical Web crawling is an established technique for domain-specific information retrieval. However, almost all conventional topical Web crawlers focus on building crawlers using different classifiers, which need a lot of labeled training data that is very difficult to label manually. This paper presents a novel approach called clustering-based topical Web crawling, which retrieves information on a specific domain based on link context and does not require any labeled training data. In order to collect domain-specific content units, a bottom-up hierarchical clustering method is used, in which a new data structure, a linked list combined with a CFu-tree, stores the cluster label, feature vector and content unit. During clustering, four metrics are presented. First, comparison variation (CV) is defined to judge whether the closest pair of clusters can be merged. Second, cluster impurity (CIP) evaluates the cluster error. The precision and recall of clustering are also presented to evaluate the accuracy and comprehensiveness of the whole clustering process. A link-context extraction technique is used to expand the feature vector of anchor text, which greatly improves clustering accuracy. Experimental results show that the proposed method outperforms conventional focused Web crawlers in both harvest rate and target recall.
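
The abstract does not define CV precisely, so the following merge test is only a plausible reconstruction: the closest pair of clusters is allowed to merge unless merging inflates intra-cluster variation too much (the CIP, precision, and recall metrics, which require labels, are omitted):

```python
def centroid(vectors):
    """Component-wise mean of a non-empty list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def variation(cluster):
    """Mean squared distance of a cluster's vectors to their centroid."""
    c = centroid(cluster)
    return sum(sq_dist(v, c) for v in cluster) / len(cluster)

def can_merge(a, b, max_cv=2.0, eps=1e-9):
    """Toy 'comparison variation' test: permit the merge only if the merged
    cluster's variation does not blow up relative to its parts."""
    merged = variation(a + b)
    parts = max(variation(a), variation(b))
    if parts < eps:                 # merging singletons or very tight clusters:
        return merged < max_cv      # fall back to an absolute bound
    return merged / parts <= max_cv
```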

19.
Data mining for Web intelligence
Searching, comprehending, and using the semistructured HTML, XML, and database-service-engine information stored on the Web poses a significant challenge. This data is more sophisticated and dynamic than the information commercial database systems store. To supplement keyword-based indexing, researchers have applied data mining to Web-page ranking. In this context, data mining helps Web search engines find high-quality Web pages and enhances Web click stream analysis. For the Web to reach its full potential, however, we must improve its services, make it more comprehensible, and increase its usability. As researchers continue to develop data mining techniques, the authors believe this technology will play an increasingly important role in meeting the challenges of developing the intelligent Web. Ultimately, data mining for Web intelligence will make the Web a richer, friendlier, and more intelligent resource that we can all share and explore. The paper considers how data mining holds the key to uncovering and cataloging the authoritative links, traversal patterns, and semantic structures that will bring intelligence and direction to our Web interactions.
