Similar Documents
18 similar documents found.
1.
An Improved PageRank Algorithm for Search Engines   (Cited by 2: 1 self-citation, 1 by others)
To support information retrieval in rapid enterprise decision-making, an improved PageRank algorithm is proposed. It takes the creation time of a page into account and distributes a page's PageRank value across its outgoing links according to weights derived from the similarity between each link's anchor text and the page topic. The resulting PageRank values better fit the requirements of topic-focused search engines while keeping the algorithm simple. Experimental results show that the improved algorithm effectively reduces topic drift and appropriately boosts the PageRank values of new pages.
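A minimal sketch of the weighting idea described above (not the paper's exact formulation): instead of splitting a page's rank evenly among its outlinks, each outlink receives a share proportional to an anchor-text/topic similarity weight. The graph, weights, and iteration count here are illustrative assumptions.

```python
def weighted_pagerank(links, d=0.85, iters=50):
    """links: {page: {target: weight}}, where weight is an assumed
    anchor-text/topic similarity score (> 0). Each page's rank is
    split among its outlinks proportionally to these weights."""
    pages = set(links) | {t for out in links.values() for t in out}
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, out in links.items():
            total = sum(out.values())
            for t, w in out.items():
                # Topic-weighted share instead of pr[p] / outdegree.
                new[t] += d * pr[p] * w / total
        pr = new
    return pr

# Toy graph: A links to B and C; the anchor of A->B is far more on-topic.
ranks = weighted_pagerank({"A": {"B": 0.9, "C": 0.1},
                           "B": {"A": 1.0},
                           "C": {"A": 1.0}})
```

With even splitting, B and C would tie; the similarity weights break the tie in favor of the on-topic link.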

2.
Research and Application of a PageRank Algorithm Fused with VSM   (Cited by 1: 0 self, 1 by others)
Li Weidong, Lu Ling. Computer and Modernization (《计算机与现代化》), 2011(7): 96-98, 101, 104
To address the "topic drift" problem of the PageRank algorithm, this paper proposes an improved method that incorporates the Vector Space Model (VSM). First, PageRank values are computed from the link structure; then a content feature vector space is built for each page and the topic-content similarity is computed; finally, the two values are combined with fixed weight coefficients to produce a new PageRank score. Comparative experiments show that the improved algorithm reduces the number of irrelevant pages and provides better ranking results for search engines.
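The fusion step can be sketched as a weighted combination of a link-based score and a VSM cosine similarity. The weight coefficient and the toy term vectors below are illustrative assumptions, not the paper's values.

```python
import math

def cosine(a, b):
    """Cosine similarity between two term-frequency dicts."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fused_score(pagerank, page_terms, topic_terms, alpha=0.5):
    """Combine link-based importance with topic relevance
    using a fixed weight coefficient alpha (assumed value)."""
    return alpha * pagerank + (1 - alpha) * cosine(page_terms, topic_terms)

topic = {"pagerank": 2, "search": 1}
on_topic  = fused_score(0.10, {"pagerank": 3, "search": 2}, topic)
off_topic = fused_score(0.10, {"cooking": 5, "recipe": 2}, topic)
```

Two pages with the same link-based score are now separated by their content relevance to the topic.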

3.
An Improved PageRank Algorithm Based on Semantic Similarity   (Cited by 1: 0 self, 1 by others)
PageRank is an algorithm for ranking web pages that evaluates a page's importance from the citation relationships between pages. Because it considers only the link structure between pages and ignores their relevance to the topic, it is prone to topic drift. Building on an analysis of the original PageRank algorithm, this paper presents an improved PageRank algorithm based on semantic similarity. The algorithm computes a page's PageRank value from both the page structure and its main content; it does not increase the time or space complexity of the algorithm, yet greatly reduces topic drift, thereby improving query efficiency and quality.

4.
PageRank computes a page's authority from link analysis, measures page authoritativeness, and ranks search results accordingly. To address the topic-drift problem of traditional PageRank, this paper proposes an improved algorithm based on relevance to the query topic. By introducing a relevance measure between retrieved pages and the query topic, it effectively suppresses the topic drift of the traditional PageRank algorithm, and the approach is validated with examples.

5.
Application of an Improved PageRank to Web Information Gathering   (Cited by 7: 0 self, 7 by others)
PageRank is an algorithm for ranking web pages that evaluates a page's importance from the citation relationships between pages. Because it assigns the same weight to every outgoing link and ignores a page's relevance to the topic, it is prone to topic drift. After analyzing several PageRank variants, this paper proposes a new block-based, topic-aware PageRank algorithm. The algorithm segments a page into blocks according to its structure, propagates different PageRank values through the links in each block according to the block's relevance to the topic, and refines block relevance via feedback from already-visited links. Experiments show that the proposed algorithm noticeably improves the precision of search results.

6.
Existing improvements to the PageRank algorithm still lack good solutions to problems such as bias against new pages, language bias, topic drift, and disregard for users' browsing interests. This paper proposes an improved algorithm, TWPR (PageRank based on Three Weights), which combines temporal analysis, language-aware link-structure analysis, and user behavior, aiming to raise the PR values of Chinese pages that are frequently updated, well linked, and of high interest to users. Experiments show that the improved algorithm effectively raises the hit rate of web retrieval and improves search quality.

7.

The PageRank algorithm ranks pages solely by link structure without analyzing page content, which leads to evenly divided PR values, topic drift, and a bias toward old pages; existing improvements tend to optimize only a single aspect. This paper therefore proposes a PageRank algorithm that fuses multiple feature factors. To bring search results closer to the user's query intent while balancing content relevance and precision, the algorithm adds an in-link/out-link weight factor, a user feedback factor, a topic relevance factor, and a time factor to jointly remedy the shortcomings of PageRank. Experimental results show that the proposed algorithm clearly outperforms other page-ranking algorithms in content relevance and precision, achieving the goal of optimizing PageRank.

8.
This paper proposes an improvement of the traditional PageRank algorithm based on Tf-Idf and page links. The improved algorithm not only largely resolves PageRank's topic-drift problem but also considerably improves precision and recall. Experiments show that it produces better query result sets than the traditional PageRank algorithm.

9.
To address PageRank's topic drift and bias toward old pages, this paper proposes an improved algorithm, STPR, that combines anchor-text similarity with a time feedback factor, and analyzes it experimentally. A comparison of traditional PageRank with a variant that adds anchor-text similarity shows that the latter helps reduce topic drift; a further comparison of that variant with STPR shows that STPR not only reduces topic drift but also compensates for the low PageRank values of new pages.

10.
The classic link-structure-based PageRank algorithm ranks pages mainly by the link relationships between them, which makes it prone to topic drift, neglect of specialized sites, and a bias toward old pages. To address these problems, improvements are proposed based on hypertext relevance, a site-authority weight factor, and a time weight. Experimental results show that, compared with the traditional PageRank ranking algorithm, the improved algorithm effectively raises precision and user satisfaction with the ranking results.

11.
By analyzing PageRank's bias toward old pages, its topic drift, and its vulnerability to web spam, this paper proposes an improved PageRank algorithm based on user feedback. On top of the original algorithm, it adds click-count feedback, click-time feedback, and feedback weights, and, drawing on content-based ranking ideas, incorporates a page-content weight into the PR formula, thereby overcoming the problems of the original PageRank algorithm.
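One simple way to picture the click-feedback idea (the blending formula, the weight beta, and the normalization are illustrative assumptions, not the paper's exact model): blend the link-based PR value with a normalized click-count signal.

```python
def feedback_rank(pr, clicks, total_clicks, beta=0.3):
    """Blend a link-based PageRank value with a click-count
    feedback signal. beta and the simple normalization are
    assumptions made for illustration only."""
    click_score = clicks / total_clicks if total_clicks else 0.0
    return (1 - beta) * pr + beta * click_score

# Two pages with identical link-based scores but different click counts.
popular   = feedback_rank(0.01, clicks=90, total_clicks=100)
unpopular = feedback_rank(0.01, clicks=2,  total_clicks=100)
```

A frequently clicked page is promoted over an equally linked but rarely clicked one, which is the behavior the user-feedback factor is meant to capture.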

12.
An Improved PageRank Algorithm Based on Topic Features and a Time Factor   (Cited by 2: 0 self, 2 by others)
The classic PageRank algorithm analyzes only the link structure of pages; it considers neither a page's relevance and authority with respect to the search topic nor users' differing reliance on new versus old pages. To remedy these defects, this paper combines topic features and temporal features and proposes an improved algorithm, WTPR (weighted topic PageRank). The algorithm determines a page's authority and relevance through link analysis and content analysis, and uses a time factor to let PageRank values float as time passes. Simulation results show that the improved algorithm achieves better results than PageRank.
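A time factor of this kind is often realized as a freshness weight on the rank value. The exponential form and the half-life below are assumptions chosen for illustration; the paper's own time function may differ.

```python
import math

def time_weighted_rank(pr, age_days, half_life=180.0):
    """Scale a rank value by an exponential freshness factor so
    that scores float with page age. half_life (in days) is an
    assumed parameter, not taken from the paper."""
    decay = math.exp(-math.log(2) * age_days / half_life)
    return pr * decay

fresh = time_weighted_rank(0.02, age_days=30)   # recently updated page
stale = time_weighted_rank(0.02, age_days=720)  # old, unchanged page
```

Two pages with identical link-based scores now diverge: the score of the stale page halves every `half_life` days relative to a brand-new one.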

13.
With the Internet growing exponentially, search engines face unprecedented challenges. A focused search engine selectively seeks out web pages that are relevant to user topics, and determining the best focused-crawling strategy is a crucial and popular research topic. At present, the rank values of unvisited web pages are computed from hyperlinks (as in the PageRank algorithm), from a Vector Space Model, or from a combination of the two, not from the semantic relations between the user topic and the unvisited pages. In this paper, we propose a concept context graph that stores the knowledge context derived from the user's history of clicked web pages and guides a focused crawler in its next crawl. The concept context graph provides a novel semantic ranking that steers the crawler toward pages highly relevant to the user's topic. By computing the concept distance and concept similarity among the concepts of the graph and by matching unvisited pages against it, we compute the rank values of unvisited pages to select relevant hyperlinks. We also build the focused crawling system and measure the precision, recall, average harvest rate, and F-measure of our approach against Breadth First, Cosine Similarity, the Link Context Graph, and the Relevancy Context Graph. The results show that our proposed method outperforms the other methods.

14.
Computer Networks, 2007, 51(1): 177-189
Webmasters usually place certain web pages, such as home pages and index pages, in front of others. Under such a design, some pages must be traversed to reach the destination pages, much as an inner town of a peninsula is reached through the towns at its edge. In this paper, we validate that peninsulas are a universal phenomenon on the World-Wide Web and clarify how this phenomenon can be used to enhance web search and to study web connectivity problems. For this purpose, we model the web as a directed graph and give a proper definition of peninsulas based on this graph. We also present an efficient algorithm for finding web peninsulas. Using data collected from the Chinese web by the Tianwang search engine, we study the distribution of peninsula sizes and their correlations with the PageRank values, outdegrees, and indegrees of the tie vertices connecting them to outside vertices. The results show that the peninsula structure of a web graph can greatly expedite the computation of PageRank values, and that it significantly affects the link-extraction capability and information coverage of web crawlers.

15.
The existing PageRank algorithm exploits the hyperlink structure of the web with a uniform transition probability distribution to measure the relative importance of web pages. This paper proposes a novel method, the Proportionate Prestige Score (PPS), for prestige analysis. The proposed PPS method is based purely on the exact prestige of web pages and applies to both the Initial Probability Distribution (IPD) matrix and the Transition Probability Distribution (TPD) matrix. It computes a single PageRank vector with a non-uniform transition probability distribution, using the link structure of the pages offline. This non-uniform distribution handles the dangling-page problem more effectively than the existing PageRank algorithm. The paper provides a benchmark analysis of the two ranking methods, PageRank and the proposed PPS, tested on real social network data from three domains: Social Circle: Facebook, the Wikipedia vote network, and the Enron email network. The findings suggest that ranking quality is improved by the proposed PPS method compared with the existing PageRank algorithm.
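To make the contrast with uniform-transition PageRank concrete, here is a generic power iteration over a non-uniform transition matrix. Note the dangling-page handling below is the standard uniform-row fix, shown only as a baseline; it is not the PPS method itself, and the toy matrix is an assumption.

```python
import numpy as np

def pagerank_nonuniform(T, d=0.85, tol=1e-10):
    """Power iteration over a non-uniform transition matrix T.
    Row i of T holds page i's (unnormalized) transition weights.
    A dangling page (all-zero row) is replaced by the uniform
    distribution -- a standard fix, not the paper's PPS scheme."""
    n = T.shape[0]
    T = T.astype(float).copy()
    for i in range(n):
        s = T[i].sum()
        T[i] = np.full(n, 1.0 / n) if s == 0 else T[i] / s
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (r @ T)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Page 2 is dangling (no outlinks); page 0's outlink weights are unequal.
T = np.array([[0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
ranks = pagerank_nonuniform(T)
```

Because rank mass is redistributed rather than lost at the dangling page, the resulting vector remains a proper probability distribution.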

16.
To address the problems of equally divided link weights and disregard for user interest in the traditional PageRank algorithm, this paper proposes LUPR, a page-ranking algorithm based on learning automata and user interest. In the proposed method, each web page is assigned a learning automaton whose role is to determine the weights of the hyperlinks between pages. By further analyzing user behavior, the user's browsing behavior is used to measure interest in a page, yielding an interest factor. The algorithm then ranks each page by weighing both the hyperlinks between pages and the user's interest in them. Simulation experiments show that, compared with the traditional PageRank and WPR algorithms, the improved LUPR algorithm improves retrieval accuracy and user satisfaction to a certain extent.

17.
Search engine result pages (SERPs) for a specific query are constructed by several mechanisms. One of them ranks web pages by importance, regardless of their semantics. Indeed, relevance to a query is not enough to provide high-quality results, and popularity is used to arbitrate between equally relevant web pages. The best-known algorithm that ranks web pages according to their popularity is PageRank.

The term Webspam was coined to denote web pages created with the sole purpose of fooling ranking algorithms such as PageRank: the goal of Webspam is to promote a target page by increasing its rank. Spotting and discarding Webspam is an important issue for web search engines, which must provide their users with an unbiased list of results. Webspam techniques evolve constantly to remain effective, but most of the time they still consist in creating a specific linking architecture around the target page to increase its rank.

In this paper we study the effects of node aggregation on Google's well-known ranking algorithm, PageRank, in the presence of Webspam. Our node-aggregation methods construct clusters of nodes that are treated as a single node in the PageRank computation. Since the web graph is far too big for classic clustering techniques, we present four lightweight aggregation techniques suited to its size. Experimental results on the WEBSPAM-UK2007 dataset show the interest of the approach, which is moreover confirmed by statistical evidence.

18.
Current focused-crawler search strategies have difficulty finding a globally optimal solution. Through analysis and study of genetic algorithms, this paper designs a focused-crawler scheme based on a genetic algorithm. It introduces a PageRank algorithm combined with text content; uses the vector space model to compute a page's topic relevance; judges page importance from both link structure and topic relevance; selects genetic factors during crawling according to page importance; and sets a fitness function to filter topic-relevant pages. Compared with an ordinary focused crawler, this strategy retrieves a large number of highly topic-relevant pages, improves the importance of the retrieved pages, meets users' retrieval needs for topical pages, and to some extent resolves the problem stated above.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号