20 similar documents retrieved (search time: 15 ms)
1.
Word Semantic Similarity Computation Based on Wikipedia Community Mining (total citations: 1; self-citations: 0; by others: 1)
Word semantic similarity computation is widely used in natural language processing tasks such as word-sense disambiguation, semantic information retrieval, and automatic text classification. Unlike traditional approaches, this paper proposes a word semantic similarity computation method based on community mining over Wikipedia. Rather than analyzing the text of word pages, the method exploits Wikipedia's vast network of category-labeled word pages: the topic-based community discovery algorithm HITS is applied to this page network to obtain communities of word pages. Given the communities, the semantic similarity between two words is measured from three aspects: (1) the semantic relation between the word pages; (2) the semantic relation between the word pages' communities; and (3) the semantic relation between the categories to which those communities belong. Experiments on the standard WordSimilarity-353 dataset show that the algorithm is feasible and slightly outperforms several classic algorithms, reaching a Spearman correlation coefficient of 0.58 in the best case.
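The HITS step the abstract describes can be sketched on a toy directed graph of linked pages. This is a minimal, assumption-laden illustration: the graph, page names, and iteration count are invented here, and the paper's community-extraction and category steps are not reproduced.

```python
# Sketch: HITS hub/authority iteration on a toy page-link graph.
# graph maps page -> list of pages it links to (illustrative data only).

def hits(graph, iterations=50):
    pages = set(graph) | {p for targets in graph.values() for p in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority of p: sum of hub scores of pages linking to p.
        auth = {p: sum(hub[q] for q in graph if p in graph[q]) for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        # Hub of p: sum of authority scores of pages p links to.
        hub = {p: sum(auth[t] for t in graph.get(p, ())) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

graph = {
    "cat": ["feline", "pet"],
    "dog": ["canine", "pet"],
    "pet": ["cat", "dog"],
}
hub, auth = hits(graph)
```

Pages that many strong hubs point to ("pet" above) end up with the highest authority, which is the signal a community-discovery pass over the Wikipedia page network would build on.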
2.
Modeling users’ interests plays an important role in the current web since it is at the basis of many services such as recommendation and customization. Using semantic technologies to represent users’ interests may help to reduce problems such as sparsity, over-specialization and domain-dependency, which are known to be critical issues of state-of-the-art recommenders. In this paper we present a method for high-coverage modeling of Twitter users supported by a hierarchical representation of their interests, which we call a Twixonomy. In order to automatically build a population, community, or single-user Twixonomy we first identify “topical” friends in users’ friendship lists (i.e., friends representing an interest rather than a social relation between peers). We classify as topical those users with an associated page on Wikipedia. A word-sense disambiguation algorithm is used to select the appropriate Wikipedia page for each topical friend. Next, starting from the set of wikipages representing the main topics of interest of the considered Twitter population, we extract all paths connecting these pages with topmost Wikipedia category nodes, and we then prune the resulting graph efficiently so as to induce a directed acyclic graph and significantly reduce ambiguity, a well-known problem of the Wikipedia category graph. We release the Twixonomy produced in this work under a Creative Commons license.
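The pruning step that induces a directed acyclic graph can be illustrated with the standard DFS trick of dropping back edges. This is a simplified sketch under assumed data: the three-node cycle is invented, and the paper's own efficient pruning policy is not reproduced; removing every back edge is just the textbook way to break all cycles.

```python
# Sketch: prune DFS back edges so a cyclic category graph becomes a DAG.
# graph maps node -> list of child categories (toy data).

def prune_to_dag(graph):
    kept, state = [], {}            # state: 1 = on DFS stack, 2 = finished

    def dfs(u):
        state[u] = 1
        for v in graph.get(u, ()):
            if state.get(v) == 1:   # back edge: keeping it would close a cycle
                continue
            kept.append((u, v))
            if v not in state:
                dfs(v)
        state[u] = 2

    for node in graph:
        if node not in state:
            dfs(node)
    return kept

category_graph = {"A": ["B"], "B": ["C"], "C": ["A"]}   # A -> B -> C -> A
dag_edges = prune_to_dag(category_graph)
```

Because every directed cycle must contain at least one DFS back edge, the kept edge set is guaranteed acyclic.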
3.
This paper proposes CategoryRank, a new search-engine ranking algorithm based on classification techniques. By exploiting category information, the algorithm computes page ranking scores more accurately and improves the precision of search-engine ranking. It analyzes and computes over the link graph using the category information of any two web pages and, compared with algorithms such as PageRank, models users' web-browsing habits more faithfully. For each page on the Web, the algorithm also computes a category attribute that directly reflects the page's importance to different users. Finally, the algorithm's offline and online models are unified, and its operating mechanism within search-engine ranking is described.
4.
The existing PageRank algorithm exploits the hyperlink structure of the web with a uniform transition probability distribution to measure the relative importance of web pages. This paper proposes a novel method, Proportionate Prestige Score (PPS), for prestige analysis. The proposed PPS method is based purely on the exact prestige of web pages, applied to both the Initial Probability Distribution (IPD) matrix and the Transition Probability Distribution (TPD) matrix. It computes a single PageRank vector with a non-uniform transition probability distribution, using the link structure of the web pages offline. This non-uniform transition probability distribution handles the dangling-page problem more effectively than the existing PageRank algorithm. The paper provides a benchmark analysis of the two ranking methods, PageRank and the proposed PPS, tested on real social network data from three different domains: Social Circle:Facebook, the Wikipedia vote network, and the Enron email network. The findings suggest that ranking quality is improved by the proposed PPS method compared with the existing PageRank algorithm.
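A PageRank-style iteration with a non-uniform transition distribution and explicit dangling-page handling can be sketched as follows. The "prestige" weights that bias the transitions are illustrative stand-ins; the paper's exact PPS formula and its IPD/TPD matrices are not reproduced here.

```python
# Sketch: power iteration with NON-uniform transitions (proportional to a
# per-page prestige weight) and dangling pages spreading rank uniformly.

def pagerank_nonuniform(links, weights, damping=0.85, iterations=100):
    """links: page -> out-linked pages; weights: page -> prestige weight."""
    pages = set(links) | {t for ts in links.values() for t in ts}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p in pages:
            outs = links.get(p, [])
            if not outs:                       # dangling page
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                total = sum(weights[t] for t in outs)
                for t in outs:
                    # transition probability proportional to target prestige
                    new[t] += damping * rank[p] * weights[t] / total
        rank = new
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": []}   # C is dangling
weights = {"A": 1.0, "B": 2.0, "C": 1.0}
rank = pagerank_nonuniform(links, weights)
```

Each page distributes its full rank mass every step, so the vector stays a probability distribution even with the dangling page present.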
5.
Chinese Web Page Classification Based on Statistical Word Segmentation (total citations: 9; self-citations: 3; by others: 9)
This paper applies statistics-based bigram word segmentation to Chinese web page classification: a two-character word list is constructed purely from corpus statistics, with no pre-existing dictionary, and the text of web pages is then segmented with this list for classification. Texts of different types and origins on the Internet differ considerably in wording style and vocabulary, new words appear constantly, and large amounts of same-type text are easy to obtain as training corpora; all of this makes statistical segmentation practical. The paper experimentally tests the effect of using the statistically constructed two-character word list for Chinese web page classification. The experiments show that, with a properly chosen statistical threshold, segmenting with the constructed word list effectively improves classification accuracy. The paper also analyzes the different effects of single characters versus segmented words on text classification, and the reasons behind them.
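The dictionary-free bigram idea can be sketched in a few lines: count character bigrams over a corpus, keep those above a frequency threshold as two-character "words", then segment greedily. The tiny corpus and threshold below are invented for illustration and do not reflect the paper's statistical criterion.

```python
# Sketch: build a two-character word list from raw text by bigram frequency,
# then segment new text with it (toy corpus and threshold).
from collections import Counter

def build_bigram_lexicon(corpus, threshold=2):
    counts = Counter()
    for text in corpus:
        counts.update(text[i:i + 2] for i in range(len(text) - 1))
    return {bigram for bigram, c in counts.items() if c >= threshold}

def segment(text, lexicon):
    tokens, i = [], 0
    while i < len(text):
        if text[i:i + 2] in lexicon:     # known two-character word
            tokens.append(text[i:i + 2])
            i += 2
        else:                            # fall back to a single character
            tokens.append(text[i])
            i += 1
    return tokens

corpus = ["网页分类", "网页检索", "中文网页"]
lexicon = build_bigram_lexicon(corpus, threshold=2)
tokens = segment("中文网页分类", lexicon)
```

Only "网页" recurs often enough to enter the lexicon here, so it is the only bigram kept intact during segmentation; raising or lowering the threshold directly trades recall of real words against noise, which is the threshold-sensitivity the abstract reports.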
6.
To provide users with better personalized Web search, an improved personalized-dictionary generation algorithm, IGAUPD, is implemented. It mines, from the large number of interest pages a user browses, the words that genuinely match the user's interests, thereby shrinking the size of a conventional lexicon so that, during user-interest modeling, feature descriptions of interest pages can be formed faster and more accurately, better supporting personalized retrieval. IGAUPD adopts a new term-weight formula, IWTUPD, to better characterize a term's importance within a page collection and to effectively filter out overly frequent terms. Finally, experiments verify the advantages of the personalized dictionary generated by IGAUPD.
7.
Chinese web page classification is an active research area in data mining, and the support vector machine (SVM) is an efficient classification method with distinctive advantages for high-dimensional pattern recognition. This paper presents an SVM-based method for Chinese web page classification, including the key techniques involved: web page text preprocessing, feature extraction, and multi-class classification. Experiments show that the method greatly reduces the required amount of training data, trains efficiently, and achieves good precision and recall.
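The classification core can be sketched as a tiny linear SVM trained by hinge-loss sub-gradient descent on bag-of-words features. Everything here is a stand-in: the English toy documents, the vocabulary, and the hyperparameters are invented, and a production system would use an optimized SVM library rather than this hand-rolled loop.

```python
# Sketch: linear SVM via hinge-loss sub-gradient descent on bag-of-words
# vectors (toy two-class data; labels are +1 / -1).

def featurize(tokens, vocab):
    vec = [0.0] * len(vocab)
    for t in tokens:
        if t in vocab:
            vec[vocab[t]] += 1.0
    return vec

def train_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            for j in range(len(w)):
                # L2 regularizer always; hinge term only when margin < 1
                grad = lam * w[j] - (yi * xi[j] if margin < 1 else 0.0)
                w[j] -= lr * grad
    return w

docs = [(["sports", "ball", "team"], 1),
        (["ball", "match", "team"], 1),
        (["stock", "market", "price"], -1),
        (["price", "trade", "market"], -1)]
vocab = {t: i for i, t in enumerate(sorted({t for ts, _ in docs for t in ts}))}
X = [featurize(ts, vocab) for ts, _ in docs]
y = [label for _, label in docs]
w = train_svm(X, y)
```

For the multi-class setting the abstract mentions, one such binary classifier per category (one-vs-rest) is the usual arrangement.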
8.
Social networking websites, which profile objects with predefined attributes and their relationships, often rely heavily on their users to contribute the required information. We, however, have observed that many web pages are actually created collectively according to the composition of some physical or abstract entity, e.g., a company, person, or event. Furthermore, users often like to organize pages into conceptual categories for better search and retrieval, making it feasible to extract relevant attributes and relationships from the web. Given a set of entities, each consisting of a set of web pages, we name the task of assigning pages to the corresponding conceptual categories conceptual web classification. To address this, we propose an entity-based co-training (EcT) algorithm which learns from unlabeled examples to boost its performance. Different from existing co-training algorithms, EcT takes into account the entity semantics hidden in web pages and requires no prior knowledge about the underlying class distribution, which is crucial in standard co-training algorithms used in web classification. In our experiments, we evaluated EcT, standard co-training, and three other non-co-training learning methods on the Conf-425 dataset. Both EcT and co-training performed well when compared to the baseline methods, which required large amounts of training examples.
9.
10.
To describe accurately the web pages a user has visited and found interesting, this paper analyzes the scope of feature extraction and the term-weight computation methods involved in web page feature description. Building on the idea of nonlinear weighting of topic-related words, an improved term-weight computation method is proposed. The method both accounts for the importance of terms appearing in the title and applies a nonlinear function to term frequency, making the computed weights more accurate. The improved weighting raises the accuracy of web page feature description and thereby the effectiveness of personalized search.
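The two ingredients the abstract names, a title boost and nonlinear frequency damping, can be combined in one small weighting function. The constants (a 2x title boost, log damping) are illustrative choices, not the paper's formula.

```python
# Sketch: term weight = nonlinear (log-damped) frequency, boosted for terms
# that also appear in the page title. Constants are illustrative.
import math

def term_weight(tf_body, in_title, title_boost=2.0):
    base = math.log1p(tf_body)   # sub-linear damping of raw frequency
    return base * (title_boost if in_title else 1.0)
```

The log damping keeps a term that appears ten times from weighing ten times as much as one that appears once, while the boost lets title terms dominate the page's feature vector.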
11.
Wikipedia has become one of the largest online repositories of encyclopedic knowledge. Wikipedia editions are available for more than 200 languages, with entries varying from a few pages to more than 1 million articles per language. Embedded in each Wikipedia article is an abundance of links connecting the most important words or phrases in the text to other pages, thereby letting users quickly access additional information. An automatic text-annotation system combines keyword extraction and word-sense disambiguation to identify relevant links to Wikipedia pages.
12.
Web site navigation is an important research area in Web data mining and the key to accurately understanding users' site-visiting behavior. Traditional site-navigation techniques struggle to fully reflect users' interest in pages, and find users' preferred page paths with low accuracy. To improve this accuracy, a Web site navigation technique based on the ant colony algorithm is proposed: network users are treated as artificial ants and their browsing interest as pheromone. Using Web log data, a positive/negative feedback mechanism and a probabilistic path-selection mechanism are combined to build a site-navigation model that mines the navigation paths of pages users find interesting. Simulation results show that the proposed technique improves the accuracy of finding the page paths users are interested in, more accurately reflects users' browsing interests, and is feasible for Web site navigation.
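The positive/negative feedback and probabilistic path selection the abstract describes can be sketched with a per-link pheromone table. The evaporation rate, deposit amount, and toy click log are assumptions for illustration; the paper's concrete update rules are not reproduced.

```python
# Sketch: ant-colony-style link scoring over a site graph. Pheromone on a
# link evaporates (negative feedback) and is reinforced by clicks mined
# from Web logs (positive feedback); the next page is chosen with
# probability proportional to pheromone level.
import random

def update_pheromone(pher, visits, evaporation=0.1, deposit=1.0):
    """pher: (src, dst) -> level; visits: (src, dst) clicks from logs."""
    for edge in pher:                        # evaporation on every link
        pher[edge] *= (1 - evaporation)
    for edge in visits:                      # reinforcement of clicked links
        pher[edge] = pher.get(edge, 0.0) + deposit
    return pher

def choose_next(pher, src, rng=None):
    """Roulette-wheel selection proportional to pheromone level."""
    rng = rng or random.Random(0)
    options = [(dst, p) for (s, dst), p in pher.items() if s == src and p > 0]
    r, acc = rng.random() * sum(p for _, p in options), 0.0
    for dst, p in options:
        acc += p
        if acc >= r:
            return dst
    return options[-1][0]

pher = {("home", "news"): 1.0, ("home", "sports"): 1.0}
for _ in range(3):
    update_pheromone(pher, [("home", "news")])   # logs favor "news"
```

After a few rounds the repeatedly clicked link accumulates pheromone while the ignored one decays, so the probabilistic selection increasingly steers ants down the path users actually prefer.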
13.
Addressing the limited use of web page structural features in existing named-entity recognition algorithms, this paper proposes a Chinese named-entity recognition algorithm and an entity-association algorithm based on web page structural features. Combining these structural features, a candidate-entity generation method is proposed that transforms entity-type recognition into a candidate-entity classification problem. A DOM-tree-based entity-association algorithm is also proposed. Experiments show that the resulting system is highly effective.
14.
15.
This paper presents a user-modeling technique based on web page structural features. It artificially boosts the frequency counts of terms appearing inside certain HTML tags and incorporates the extracted page features into the computation of the user model. Experimental results show that the technique builds more effective user models.
16.
17.
To address problems of the traditional PageRank algorithm, such as splitting link weight evenly and ignoring user interest, this paper proposes LUPR, a page-ranking algorithm based on learning automata and user interest. In the proposed method, each web page is assigned a learning automaton whose role is to determine the weights of the hyperlinks between pages. By further analyzing user behavior, the user's browsing activity is used to measure their degree of interest in a page, yielding an interest factor. The algorithm then computes each page's rank by weighing page importance from both the hyperlinks between pages and users' interest in them. Simulation experiments show that, compared with the traditional PageRank and WPR algorithms, the improved LUPR algorithm improves retrieval accuracy and user satisfaction to a certain extent.
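The per-page learning automaton can be illustrated with the classic linear reward scheme: when user behavior signals interest in the page behind one outgoing link, that link's selection probability is reinforced and the others are renormalized. The update rate is an assumed constant, and the paper's exact reinforcement rule is not reproduced.

```python
# Sketch: linear-reward update for a learning automaton whose actions are a
# page's outgoing links. probs is the probability vector over those links.

def reward(probs, chosen, rate=0.1):
    """Reinforce the chosen link; scale the rest so the vector stays valid."""
    return [p + rate * (1 - p) if i == chosen else p * (1 - rate)
            for i, p in enumerate(probs)]

probs = reward([0.5, 0.5], chosen=0)   # user interest signal for link 0
```

The update preserves the total probability mass exactly, so repeated rewards shift weight toward links whose targets users actually find interesting, which is the non-uniform link weighting LUPR feeds into the ranking step.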
18.
19.
20.
A PageRank-based page-ranking algorithm is proposed. Category relevance between web pages is computed so that the authority value transferred from pages of different categories is weighted accordingly, and links are further weighted by the importance of the information block to which they belong. Experiments show that the algorithm is effective in improving ranking quality.
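The category-weighted authority transfer can be sketched by scaling each link's share of a page's rank by the category relevance between source and target. The cosine relevance measure and the toy category vectors are assumptions for illustration, not the paper's definitions.

```python
# Sketch: authority a page passes along each out-link, weighted by the
# category relevance (here: cosine similarity of category vectors) between
# the source page and each link target.

def category_relevance(cat_a, cat_b):
    dot = sum(a * b for a, b in zip(cat_a, cat_b))
    na = sum(a * a for a in cat_a) ** 0.5
    nb = sum(b * b for b in cat_b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def transferred_authority(rank_src, src_cat, out_links, categories, target):
    """Share of rank_src that flows to `target`, biased by category match."""
    rel = {t: category_relevance(src_cat, categories[t]) for t in out_links}
    total = sum(rel.values())
    return rank_src * rel[target] / total if total else 0.0

cats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
src_cat = [1.0, 0.0]                      # source page's category vector
share_a = transferred_authority(1.0, src_cat, ["a", "b", "c"], cats, "a")
share_b = transferred_authority(1.0, src_cat, ["a", "b", "c"], cats, "b")
share_c = transferred_authority(1.0, src_cat, ["a", "b", "c"], cats, "c")
```

A same-category target receives the largest share, an orthogonal-category target receives nothing, and the shares still sum to the source page's full rank, so the scheme plugs directly into a standard PageRank iteration.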