Similar Documents
20 similar documents found.
1.
The traditional web page ranking algorithm Okapi BM25 often suffers from domain drift, where returned pages are unrelated to the domain of the query keywords, and existing improvements require manually constructed domain vectors. To address both problems, this paper proposes a web search ranking algorithm based on BM25 and a Softmax regression classification model. The method first preprocesses the page text and represents it as vectors with a bag-of-words model, then trains a Softmax regression classifier on a small amount of page data to predict category scores for test pages, and finally combines these scores with the BM25 retrieval scores to obtain the page ranking. Experimental results show that the algorithm achieves good ranking results without manually constructed domain vectors.
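A minimal sketch of the score fusion this abstract describes, combining a BM25 retrieval score with the query-domain probability from a softmax (multinomial logistic regression) classifier. The rank_bm25 and scikit-learn libraries, the toy corpus and labels, and the 0.7 mixing weight are illustrative assumptions, not details from the paper:

from rank_bm25 import BM25Okapi
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

corpus = ["solar panel installation guide", "football match report", "solar energy market news"]
labels = ["energy", "sports", "energy"]              # small labeled training set
query, query_domain = "solar power", "energy"

# BM25 retrieval score for each page.
bm25 = BM25Okapi([doc.split() for doc in corpus])
bm25_scores = bm25.get_scores(query.split())

# Softmax (multinomial logistic) regression over bag-of-words features.
vec = CountVectorizer()
X = vec.fit_transform(corpus)
clf = LogisticRegression()
clf.fit(X, labels)
domain_idx = list(clf.classes_).index(query_domain)
class_scores = clf.predict_proba(X)[:, domain_idx]   # P(page in query domain)

# Final ranking: weighted combination of the two scores.
# (BM25 scores are unnormalized; a real system would normalize them first.)
alpha = 0.7                                          # illustrative weight
final = alpha * bm25_scores + (1 - alpha) * class_scores
print(sorted(zip(final, corpus), reverse=True))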

2.
Accelerated Evaluation Algorithm: A New Method for Improving the Quality of Web Structure Mining   Cited by: 13 (self-citations: 1, other citations: 13)
Web structure mining finds high-quality pages on the Web and greatly improves the retrieval precision of search engines. Current Web structure mining algorithms evaluate a page by counting the hyperlinks pointing to it and weighing the quality of the source nodes. Algorithms based on counting link numbers have a serious flaw: page evaluation becomes polarized. Established high-quality pages keep appearing at the top of Web retrieval results, while high-quality pages newly added to the Web are hard for users to find. This paper proposes an accelerated evaluation algorithm to overcome this deficiency of existing Web hyperlink analysis, and tests and validates the algorithm on a search engine platform.

3.
With the development of Web technology and the ever-growing volume and variety of information on the Web, providing high-quality, relevant query results has become a major challenge for Web search engines. PageRank and HITS are the two most important link-based ranking algorithms and are used in commercial search engines. In PageRank, however, each page's PR value is distributed evenly among all the pages it links to, completely ignoring quality differences between them, which makes the algorithm easy prey for current Web spam. Based on this observation, this paper proposes an improvement to PageRank called Page Quality Based PageRank (QPR). QPR dynamically evaluates the quality of each page and distributes each page's PR value fairly according to the quality of its link targets. Comprehensive experiments on several datasets with different characteristics show that QPR substantially improves the ranking of query results and effectively mitigates the influence of spam pages.
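A minimal sketch of the core QPR idea as read from this abstract: during the PageRank iteration, a page's rank is shared among its link targets in proportion to the targets' quality rather than evenly. The toy graph, the static quality scores, and the damping factor are illustrative assumptions; the paper estimates quality dynamically:

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
quality = {"a": 0.9, "b": 0.2, "c": 0.8, "d": 0.5}   # per-page quality estimates
pages = list(links)
d = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):                                   # power iteration
    new = {p: (1 - d) / len(pages) for p in pages}
    for src, outs in links.items():
        total_q = sum(quality[t] for t in outs)
        for t in outs:
            # src's rank is shared in proportion to the target's quality,
            # instead of the uniform 1/len(outs) split of classic PageRank.
            new[t] += d * rank[src] * quality[t] / total_q
    rank = new
print(sorted(rank.items(), key=lambda kv: -kv[1]))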

4.
5.
This paper introduces two widely used algorithms, PageRank and HITS. PageRank is based on the intuition of a user randomly browsing forward through web pages, while HITS models the mutually reinforcing relationship between authority pages and hub pages. The basic idea of PageRank is: if a page is referenced by many other pages, it is likely important; a page referenced by an important page is likely important even if it is not referenced many times; and a page's importance is divided evenly and propagated to the pages it references. HITS, in contrast, focuses on improving results for broad-topic retrieval, using iterative computation to find the most valuable pages for a given query, i.e., the top-ranked authorities.
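Compact, illustrative implementations of both algorithms described above, on a hypothetical four-page link graph (the graph and iteration counts are assumptions for demonstration):

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a", "c"]}
pages = list(links)

# PageRank: importance is split evenly over out-links and propagated.
d = 0.85
pr = {p: 1.0 / len(pages) for p in pages}
for _ in range(50):
    new = {p: (1 - d) / len(pages) for p in pages}
    for src, outs in links.items():
        for t in outs:
            new[t] += d * pr[src] / len(outs)
    pr = new

# HITS: authority and hub scores reinforce each other iteratively.
auth = {p: 1.0 for p in pages}
hub = {p: 1.0 for p in pages}
for _ in range(50):
    auth = {p: sum(hub[s] for s in pages if p in links[s]) for p in pages}
    norm = sum(v * v for v in auth.values()) ** 0.5
    auth = {p: v / norm for p, v in auth.items()}
    hub = {p: sum(auth[t] for t in links[p]) for p in pages}
    norm = sum(v * v for v in hub.values()) ** 0.5
    hub = {p: v / norm for p, v in hub.items()}

print("PageRank:", pr)
print("HITS authorities:", auth)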

6.
This paper designs a method for collecting and analyzing Internet news pages. Starting from the entry URL of a given news site, the method finds all related links on the Web; it classifies the pages those links point to, filters out content of low relevance, and extracts the links of all news pages; it then performs multi-level link analysis and uses a NewsPageRank algorithm to compute a weight for each news link based on the news item's images, title font attributes, and date. Test results show that the method analyzes news sites on the Internet well in general and that its performance meets practical requirements.
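An illustrative sketch of weighting a news link by the features this abstract mentions (images, title font attributes, date). The feature scores, weights, and decay formula are assumptions; the paper does not publish its exact NewsPageRank formula here:

from datetime import date

def news_link_weight(has_image: bool, title_font_size: int,
                     published: date, today: date) -> float:
    """Combine image presence, title prominence and freshness into one weight."""
    image_score = 1.0 if has_image else 0.5
    font_score = min(title_font_size / 24.0, 1.0)   # larger headline, more prominent
    age_days = (today - published).days
    freshness = 1.0 / (1.0 + age_days)              # newer news weighs more
    return 0.3 * image_score + 0.3 * font_score + 0.4 * freshness

print(news_link_weight(True, 22, date(2024, 5, 1), date(2024, 5, 3)))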

7.
This paper proposes a PageRank-based page ranking algorithm. It computes category relevance between pages and weights the authority value passed from pages of different categories accordingly; it also assigns weights to links according to the importance of the information block they belong to. Experiments show that the algorithm is effective in improving page ranking quality.

8.
陈伟柱, 陈英, 吴燕. 《计算机应用》2005, 25(5): 995-997, 1003
This paper proposes CategoryRank, a new search engine ranking algorithm based on classification. With the help of category information, it computes page ranking scores more accurately and improves ranking precision. The algorithm analyzes the link graph using the category information of each pair of linked pages and, compared with PageRank and similar algorithms, models users' browsing habits more accurately. For each page on the Web, the algorithm also computes a category attribute that directly reflects the page's importance to different users. Finally, the paper unifies the algorithm's offline and online models and explains how it operates in search engine ranking.

9.
Most existing algorithms for clustering search engine results cluster the snippets generated for a query; because snippets are short and of uneven quality, clustering quality is hard to improve substantially (e.g., the suffix-tree and Lingo algorithms). Traditional full-text clustering algorithms, meanwhile, are computationally expensive and struggle to generate high-quality cluster labels, so they cannot meet the demands of online clustering (e.g., k-means). This paper proposes MFIC (Maximal Frequent Itemset Clustering), an online web page clustering algorithm based on maximal frequent itemsets in the full text. The algorithm first mines maximal frequent itemsets from the full text, then clusters pages according to the maximal frequent itemsets they share, and finally generates cluster labels from the frequent items each cluster contains. Experimental results show that MFIC reduces full-text clustering time, improves clustering precision by about 15%, and produces readable cluster labels.
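A minimal sketch of the clustering idea in the spirit of MFIC: documents sharing a frequent term set fall into the same cluster, and the shared set doubles as a readable label. Real maximal-frequent-itemset mining (e.g., FP-max) is replaced here by a stand-in over 2-itemsets, and the toy corpus and support threshold are assumptions:

from collections import defaultdict
from itertools import combinations

docs = {
    1: {"python", "web", "crawler"},
    2: {"python", "web", "scraping"},
    3: {"football", "league", "match"},
    4: {"football", "league", "cup"},
}
min_support = 2

# Mine frequent 2-itemsets (a stand-in for full maximal itemset mining).
counts = defaultdict(set)
for doc_id, terms in docs.items():
    for pair in combinations(sorted(terms), 2):
        counts[pair].add(doc_id)
frequent = {pair: ids for pair, ids in counts.items() if len(ids) >= min_support}

# Cluster: documents sharing a frequent itemset go to the same cluster,
# and the itemset itself becomes a readable cluster label.
for pair, ids in frequent.items():
    print("label:", " ".join(pair), "-> docs", sorted(ids))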

10.
With the development of mobile technology, users' browsing habits have gradually shifted from pure information retrieval to active recommendation. Mapping users' interests to web content through classification has become more and more difficult with the volume and variety of web pages. Some big news portals and social media companies hire more editors to label new concepts and words, and use computing servers with larger memory to handle massive document classification based on traditional supervised or semi-supervised machine learning methods. This paper provides an optimized classification algorithm for massive web pages using semantic networks such as Wikipedia and WordNet. We used a Wikipedia data set and initialized a few category entity words as class words. A weight estimation algorithm based on the depth and breadth of the Wikipedia network is used to calculate the class weight of all Wikipedia entity words. A kinship-relation association based on the content similarity of entities is then suggested to mitigate the imbalance that arises when a category node inherits probability from multiple fathers. Keywords are extracted from the title and main text of each page using N-grams with Wikipedia entity words, and a Bayesian classifier is used to estimate the page's class probability. Experimental results showed that the proposed method achieves good scalability, robustness and reliability for massive web pages.
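An illustrative sketch of the final classification step described above: given per-keyword class weights (which the paper derives from the depth and breadth of the Wikipedia category network), estimate page class probabilities with a naive-Bayes-style combination. All words, classes, and numbers here are hypothetical:

import math
from collections import defaultdict

# class_weight[word][cls]: weight of an entity word toward a class.
class_weight = {
    "transistor": {"technology": 0.9, "sports": 0.05},
    "semiconductor": {"technology": 0.8, "sports": 0.05},
    "goalkeeper": {"technology": 0.05, "sports": 0.9},
}
prior = {"technology": 0.5, "sports": 0.5}

def classify(keywords):
    log_prob = defaultdict(float)
    for cls, p in prior.items():
        log_prob[cls] = math.log(p)
        for w in keywords:
            if w in class_weight:
                log_prob[cls] += math.log(class_weight[w].get(cls, 1e-6))
    best = max(log_prob, key=log_prob.get)
    return best, dict(log_prob)

print(classify(["transistor", "semiconductor"]))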

11.
Clustering web pages by structural similarity is an important technique in web data mining. Traditional web page clustering does not define a representation for cluster centers, so computing point-to-cluster and cluster-to-cluster similarities requires computing similarities between many pairs of points; such algorithms are generally slower than center-based clustering and cannot meet the needs of large-scale, fast incremental clustering. To address this, the paper proposes FPC (Fast Page Clustering). FPC first introduces a new page-similarity measure that is 500 times faster than the simple tree matching algorithm; it then defines a representation for cluster centers and clusters with MKmeans (Merge-Kmeans), a variant of k-means, improving efficiency at the clustering level; finally, it uses locality-sensitive hashing to quickly find the most similar cluster among a huge number of page clusters, improving efficiency at the incremental-merge level.
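A sketch of the locality-sensitive-hashing step of FPC: index cluster centers as MinHash signatures so a new page can quickly find candidate clusters to merge into. It uses the datasketch library; the structural token sets and the similarity threshold are illustrative assumptions:

from datasketch import MinHash, MinHashLSH

def minhash(tokens, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for t in tokens:
        m.update(t.encode("utf8"))
    return m

centers = {
    "cluster_news": {"div", "h1", "span", "time", "p"},
    "cluster_forum": {"table", "tr", "td", "a", "blockquote"},
}
lsh = MinHashLSH(threshold=0.5, num_perm=128)
for name, tokens in centers.items():
    lsh.insert(name, minhash(tokens))

new_page = {"div", "h1", "time", "p", "img"}   # structural tokens of a new page
print(lsh.query(minhash(new_page)))            # candidate clusters to merge into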

12.
The Internet contains large numbers of duplicate web pages, and deduplication is one of the keys to efficiency in information retrieval and large-scale page crawling. Building on fingerprint and signature-based deduplication algorithms, this paper proposes a deduplication algorithm based on edit distance: the similarity between two pages is obtained by computing the edit distance between their fingerprint sequences. This overcomes the drawback of fingerprint and signature algorithms, which ignore the structure of the page body; by comparing pages on both content and body structure, duplicate judgments become more accurate. Experiments show the algorithm is effective, with high deduplication precision and recall.
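A minimal sketch of the paper's core idea: page similarity as edit distance over fingerprint sequences, so both content (the hashes) and body structure (their order) are compared. The block-hashing scheme and similarity formula are illustrative assumptions:

import hashlib

def fingerprint_sequence(blocks):
    """One short hash per text block, preserving document order/structure."""
    return [hashlib.md5(b.encode("utf8")).hexdigest()[:8] for b in blocks]

def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[len(b)]

def page_similarity(blocks1, blocks2):
    f1, f2 = fingerprint_sequence(blocks1), fingerprint_sequence(blocks2)
    return 1 - edit_distance(f1, f2) / max(len(f1), len(f2))

p1 = ["Breaking news: storm hits coast", "Details follow.", "Contact us"]
p2 = ["Breaking news: storm hits coast", "More details follow.", "Contact us"]
print(page_similarity(p1, p2))   # high similarity => likely duplicates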

13.
In this work we propose a model that represents the web as a directed hypergraph (instead of a graph), where links connect pairs of disjoint sets of pages. The web hypergraph is derived from the web graph by dividing the set of pages into non-overlapping blocks and using the links between pages of distinct blocks to create hyperarcs. A hyperarc connects a block of pages to a single page, providing more reliable information for link analysis. We use the hypergraph model to create hypergraph versions of the Pagerank and Indegree algorithms, referred to as HyperPagerank and HyperIndegree, respectively. Pages are grouped into blocks by two different partition criteria: pages belonging to the same web host, or pages belonging to the same web domain. We compared the original page-based algorithms with the host-based and domain-based versions, considering a combination of page reputation, the textual content of the pages, and the anchor text. Experimental results on three distinct web collections show that HyperPagerank and HyperIndegree may yield better results than the original graph versions of Pagerank and Indegree. We also show that the hypergraph versions of the algorithms were slightly less affected by noise links and spamming.
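A rough sketch of the hyperarc construction as described above, shown for the simpler HyperIndegree case: links from pages of one host to a page on another host collapse into a single hyperarc, so a page's in-degree counts linking hosts rather than linking pages. The graph and host map are hypothetical:

from collections import defaultdict

host_of = {"a1": "hostA", "a2": "hostA", "b1": "hostB", "c1": "hostC"}
links = [("a1", "b1"), ("a2", "b1"), ("c1", "b1"), ("b1", "a1")]

hyperarcs = defaultdict(set)          # target page -> set of source hosts
for src, dst in links:
    if host_of[src] != host_of[dst]:  # only cross-host links form hyperarcs
        hyperarcs[dst].add(host_of[src])

hyper_indegree = {page: len(hosts) for page, hosts in hyperarcs.items()}
print(hyper_indegree)  # b1 gets 2 (hostA, hostC), not 3; harder to spam from one host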

14.
With the development of Web technology, good page ranking algorithms are increasingly important. This paper discusses the factors page ranking should consider, such as page update time. After analyzing these factors, the paper proposes a page ranking algorithm based on text classification. The algorithm optimizes query results well and improves search engine performance.

15.
The existing PageRank algorithm exploits the hyperlink structure of the web with a uniform transition probability distribution to measure the relative importance of web pages. This paper proposes a novel method, Proportionate Prestige Score (PPS), for prestige analysis. PPS is based purely on the exact prestige of web pages, applied to both the Initial Probability Distribution (IPD) matrix and the Transition Probability Distribution (TPD) matrix. PPS computes a single PageRank vector with a non-uniform transition probability distribution, using the link structure of the web pages offline. This non-uniform transition probability distribution handles the dangling-page problem more effectively than the existing PageRank algorithm. The paper provides a benchmark analysis of the two ranking methods, PageRank and the proposed PPS, tested on real social network data from three domains: Social Circle:Facebook, the Wikipedia vote network, and the Enron email network. The findings suggest that ranking quality is improved by the proposed PPS method compared with the existing PageRank algorithm.
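A sketch of PageRank with a non-uniform transition distribution in the spirit of PPS: the surfer follows out-links in proportion to a prestige score of the target, and dangling pages transition according to the same prestige vector. The graph, prestige values, and damping factor are illustrative assumptions:

import numpy as np

n = 4
adj = np.array([[0, 1, 1, 0],      # adj[i][j] = 1 if page i links to page j
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 0]])     # page 3 is dangling (no out-links)
prestige = np.array([0.4, 0.1, 0.4, 0.1])   # assumed per-page prestige

P = np.zeros((n, n))
for i in range(n):
    if adj[i].sum() == 0:
        P[i] = prestige                 # dangling row: jump by prestige
    else:
        w = adj[i] * prestige           # out-link weights proportional to target prestige
        P[i] = w / w.sum()

d = 0.85
r = np.full(n, 1.0 / n)
for _ in range(100):                    # power iteration
    r = (1 - d) / n + d * (r @ P)
print(r / r.sum())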

16.
Most Web pages contain location information, which is usually neglected by traditional search engines. Queries combining location and textual terms are called spatial textual Web queries. Given that traditional search engines pay little attention to the location information in Web pages, in this paper we study a framework that utilizes location information for Web search. The proposed framework consists of an offline stage that extracts focused locations for crawled Web pages and an online ranking stage that performs location-aware ranking of search results. The focused locations of a Web page are the locations most appropriately associated with it. In the offline stage, we extract the focused locations and keywords from Web pages and map each keyword to specific focused locations, forming a set of <keyword, location> pairs. In the online query processing stage, we extract keywords from the query and compute ranking scores based on location relevance and the location-constrained scores for each query keyword. Experiments on various real datasets crawled from nj.gov, the BBC, and the New York Times show that our focused-location extraction outperforms previous methods and that the proposed ranking algorithm performs best across different spatial textual queries.
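An illustrative sketch of the online ranking stage: a keyword's textual score is combined with a distance decay between the query location and the page's focused location. The haversine distance, the decay form, and the mixing weight are assumptions, not the paper's exact scoring functions:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Offline output: page -> {keyword: (textual score, focused location)}.
index = {
    "page1": {"flood": (0.8, (40.22, -74.76))},   # hypothetical NJ location
    "page2": {"flood": (0.9, (51.51, -0.13))},    # hypothetical London location
}

def rank(query_kw, query_loc, beta=0.5):
    scores = {}
    for page, kws in index.items():
        if query_kw in kws:
            text_score, loc = kws[query_kw]
            dist = haversine_km(*query_loc, *loc)
            loc_score = 1.0 / (1.0 + dist / 100.0)   # distance decay
            scores[page] = beta * text_score + (1 - beta) * loc_score
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank("flood", (40.0, -74.5)))   # query issued near New Jersey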

17.
This paper addresses new and significant research issues in web page design in relation to the use of graphics. The original findings are that (a) graphics play an important role in enhancing the appearance of web pages and thus users' feelings (aesthetics) about them, and (b) the effective use of graphics is crucial in designing web pages. In addition, we have developed a web page design support database based on a user-centered experimental procedure and a neural network model. This database can be used to examine how a specific combination of design elements, particularly the ratio of graphics to text, affects users' feelings about a web page. As a general rule, a graphics-to-text ratio between 3:1 and 1:1 gives users the best feelings of ease of use and clarity. A web page with a ratio of 1:1 has the most realistic look, while a ratio over 3:1 has the fanciest appearance. The results provide useful insights into using graphics on web pages, helping web designers best meet users' specific expectations and aesthetic consistency.

18.
Because current focused-crawler search strategies struggle to find globally optimal solutions, this paper analyzes genetic algorithms and designs a focused-crawler scheme based on a genetic algorithm. It introduces a PageRank algorithm that incorporates text content; uses the vector space model to compute a page's topic relevance; judges page importance from both link structure and topic relevance; selects genetic factors during crawling according to page importance; and sets a fitness function to filter pages relevant to the topic. Compared with ordinary focused crawlers, this strategy retrieves more pages with high topic relevance, increases the importance of the retrieved pages, satisfies users' retrieval needs for topic pages, and to some extent solves the problems above.
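A toy sketch of the GA-flavored crawl scheduling this abstract outlines: candidate URLs form the population, fitness mixes link-based importance with vector-space topic relevance, and roulette-wheel selection picks the URLs for the next crawl round. All scores and weights are illustrative assumptions:

import random

candidates = {
    "u1": {"pagerank": 0.7, "topic_sim": 0.9},
    "u2": {"pagerank": 0.9, "topic_sim": 0.2},
    "u3": {"pagerank": 0.3, "topic_sim": 0.8},
}

def fitness(f, w=0.5):
    # Page importance = link structure combined with topic relevance.
    return w * f["pagerank"] + (1 - w) * f["topic_sim"]

def roulette_select(pop, k):
    urls = list(pop)
    weights = [fitness(pop[u]) for u in urls]
    return random.choices(urls, weights=weights, k=k)   # selection proportional to fitness

random.seed(0)
print(roulette_select(candidates, k=2))   # URLs chosen for the next crawl round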

19.
With the Internet growing exponentially, search engines are encountering unprecedented challenges. A focused search engine selectively seeks out web pages relevant to user topics, and determining the best strategy for focused search is a crucial and popular research topic. At present, the rank values of unvisited web pages are computed from hyperlinks (as in the PageRank algorithm), a vector space model, or a combination of the two, not from the semantic relations between the user topic and the unvisited pages. In this paper, we propose a concept context graph that stores knowledge context based on the user's history of clicked web pages and guides a focused crawler in its next crawl. The concept context graph provides a novel semantic ranking that guides the crawler to retrieve pages highly relevant to the user's topic. By computing concept distance and concept similarity among the concepts of the graph, and by matching unvisited web pages against it, we compute the rank values of the unvisited pages and pick out the relevant hyperlinks. We also build the focused crawling system and measure the precision, recall, average harvest rate, and F-measure of our approach against Breadth First, Cosine Similarity, the Link Context Graph, and the Relevancy Context Graph. The results show that our proposed method outperforms the other methods.
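A sketch of one way the ranking described above could score an unvisited page: each concept carries a weight that decays with its distance from the user's core topic, and the page's rank value is its weighted overlap with those concepts. The graph, the decay rule, and the page terms are hypothetical:

concept_distance = {"machine learning": 0, "neural network": 1,
                    "optimization": 1, "hardware": 2}

def concept_weight(dist, decay=0.5):
    return decay ** dist          # closer to the core topic, higher weight

def page_rank_value(page_terms):
    return sum(concept_weight(d) for c, d in concept_distance.items()
               if c in page_terms)

page = {"neural network", "optimization", "gpu"}
print(page_rank_value(page))      # 0.5 + 0.5 = 1.0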

20.
A Web Page Ranking Algorithm Based on Website Influence   Cited by: 1 (self-citations: 0, other citations: 1)
张芳, 郭常盈. 《计算机应用》2012, 32(6): 1666-1669
Traditional ranking algorithms rank mainly by the link relations between pages, without considering the mutually reinforcing relation between websites and their pages or users' assessments of page importance. This paper therefore proposes a relevance ranking algorithm based on update time, page authority, and user reactions to pages. Taking websites as nodes, the algorithm computes an authority value for each website; when distributing authority to pages, it considers a page's position within the site and users' reactions to it, and it lets sites and pages feed back into each other through their mutual influence. Experimental results show that, compared with traditional ranking algorithms such as PageRank and HITS, the algorithm clearly improves retrieval performance.
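A rough sketch of the site-page mutual reinforcement this abstract describes: a site's influence is the mean score of its pages, and a page's score blends its link score, its site's influence, and user feedback. The weights, iteration count, and data are illustrative assumptions:

pages = {
    "s1/p1": {"site": "s1", "link_score": 0.8, "feedback": 0.9},
    "s1/p2": {"site": "s1", "link_score": 0.4, "feedback": 0.5},
    "s2/p1": {"site": "s2", "link_score": 0.6, "feedback": 0.3},
}

score = {p: f["link_score"] for p, f in pages.items()}
for _ in range(20):
    # Site influence: average score of the site's pages.
    site_pages = {}
    for p, f in pages.items():
        site_pages.setdefault(f["site"], []).append(score[p])
    site_inf = {s: sum(v) / len(v) for s, v in site_pages.items()}
    # Page score: link score reinforced by site influence and user feedback.
    score = {p: 0.5 * f["link_score"] + 0.3 * site_inf[f["site"]]
                + 0.2 * f["feedback"]
             for p, f in pages.items()}
print(sorted(score.items(), key=lambda kv: -kv[1]))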
