Similar Documents
20 similar documents found (search time: 234 ms)
1.
A Topic Crawling Strategy Based on Wikipedia and Web Page Segmentation   Total citations: 1 (self-citations: 0, citations by others: 1)
熊忠阳, 史艳, 张玉芳. 《计算机应用》, 2011, 31(12): 3264-3267
To address the shortcomings and limitations of traditional topic crawling strategies, a topic crawling strategy based on Wikipedia and web page segmentation is proposed. A topic vector is built from Wikipedia's topic category tree and topic description documents and used to describe the topic. After a page is downloaded, page segmentation is applied to filter out noisy links. When computing the priority of candidate links, block relevance is introduced to compensate for the limited information carried by anchor text. By varying the size of the topic vector space...
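A minimal sketch of the link-priority idea described above: anchor-text relevance backed up by the relevance of the page block containing the link. The bag-of-words vectors, the `alpha` weight, and the function names are illustrative assumptions, not details taken from the paper.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_priority(topic_vec: Counter, anchor_text: str,
                  block_text: str, alpha: float = 0.5) -> float:
    """Candidate-link priority: anchor-text relevance plus the relevance
    of the page block that contains the link (hypothetical weighting)."""
    anchor_sim = cosine(topic_vec, Counter(anchor_text.split()))
    block_sim = cosine(topic_vec, Counter(block_text.split()))
    return alpha * anchor_sim + (1 - alpha) * block_sim

# Topic vector built from Wikipedia category/description terms (toy data).
topic = Counter({"crawler": 3, "web": 2, "search": 2, "index": 1})
print(link_priority(topic, "focused web crawler",
                    "a focused crawler downloads pages relevant to a topic"))
```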

2.
With the rapid development of social media such as microblogs and photo sharing, large amounts of short-text content (comments, microblog posts, etc.) are produced every day, and mining them in depth has significant application value and academic interest. This paper takes microblogs as an example to describe the proposed method in detail. Because of their brevity and real-time nature, microblog streams are highly valuable and have become an important information source for applications such as marketing, stock prediction, and public-opinion monitoring. Nevertheless, microblog content is extremely sparse in features and its context is hard to extract, making microblog mining very challenging. We therefore propose a Wikipedia-based semantic concept expansion method for microblogs, which enriches a post's content features by automatically identifying Wikipedia concepts semantically related to it, effectively improving the results of microblog data mining and analysis. The method first finds the Wikipedia concepts corresponding to important n-grams in a post through linkability pruning, concept association, and disambiguation; it then applies NMF (non-negative matrix factorization) to a concept-document association matrix to obtain semantic neighbors among Wikipedia concepts, which are used to expand the post with related semantic concepts. Experiments on the TREC 2011 microblog dataset and the Wikipedia 2011 dump show that the proposed method outperforms two existing related approaches.
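The concept-neighbor step lends itself to a short sketch: factorize a concept-document association matrix with NMF and read semantic neighbors off the latent concept vectors. The toy matrix and the choice of two latent components are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

# Toy concept-document association matrix: rows are Wikipedia concepts,
# columns are documents; entries count concept occurrences per document.
X = np.array([[3, 0, 1, 0],
              [2, 0, 0, 1],
              [0, 4, 1, 0],
              [0, 3, 2, 0]], dtype=float)

# Factorize X ~= W H; each row of W embeds one concept in a latent space.
W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)

# Semantic neighbors: concepts whose latent vectors are most similar.
sims = cosine_similarity(W)
np.fill_diagonal(sims, -1.0)
for i, j in enumerate(sims.argmax(axis=1)):
    print(f"nearest neighbor of concept {i}: concept {j} (sim={sims[i, j]:.2f})")
```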

3.
周博, 刘奕群, 张敏, 金奕江, 马少平. 《软件学报》, 2011, 22(8): 1714-1724
The benefit of anchor text for web information retrieval has been verified, and it is widely used in commercial web search engines. However, because anchor-text creation is uncontrolled, it contains a large amount of useless information that is irrelevant to the target page or has spam intent. Moreover, for transactional queries, where the service quality of results must be measured, the target pages recommended by raw anchor text often disagree with real user experience. To address these problems, this work studies large-scale browsing logs of real web users. It first proposes an evaluation framework for the retrieval effectiveness of anchor text, then analyzes the relation between users' browsing and clicking behavior and anchor-text retrieval effectiveness, and mines the behavioral features that help select high-quality anchor text. Based on these features, two hyperlink document generation methods are proposed. Experimental results show that anchor text selected with these behavioral features clearly improves retrieval performance compared with raw anchor text.

4.
The structural differences between web pages and plain text mean that traditional IR ranking techniques cannot keep up with the development of the web. To rank retrieval results reasonably, link analysis methods based on citation analysis from bibliometrics were introduced. Such methods assign higher scores to pages linked by many other pages, while also considering the similarity between anchor text and the query. Source pages vary in quality, and so does the anchor text pointing to the same page, but anchor text from a high-quality source page is not necessarily more accurate than that from a low-quality one. This paper corrects high-similarity anchor text: the similarity between the query and the anchor text is computed, anchor text that has high similarity but originates from source pages with low PageRank values is compensated, and the query results are re-ranked.
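A hedged sketch of the compensation idea: score anchors by similarity times PageRank, but lift anchors that match the query well yet come from low-PageRank source pages. The thresholds and the boost factor are invented for illustration; the paper does not publish its constants.

```python
def compensated_score(anchor_sim: float, pagerank: float,
                      sim_threshold: float = 0.6,
                      pr_threshold: float = 0.2,
                      boost: float = 1.5) -> float:
    """Re-rank score for a (query, anchor text) pair.

    Anchor text that matches the query well but comes from a source page
    with a low PageRank value gets compensated, so good anchors from
    obscure pages are not drowned out. All constants are hypothetical."""
    score = anchor_sim * pagerank
    if anchor_sim >= sim_threshold and pagerank < pr_threshold:
        score = anchor_sim * boost * pr_threshold  # lift toward the cutoff
    return score

# (target page, query-anchor similarity, source-page PageRank)
results = [("page A", 0.9, 0.05), ("page B", 0.4, 0.8), ("page C", 0.7, 0.6)]
ranked = sorted(results, key=lambda r: compensated_score(r[1], r[2]),
                reverse=True)
print([name for name, _, _ in ranked])
```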

5.
In experiments with the W2DR algorithm, some pages provide too little information in their anchor text, so populating a structured database from semi-structured web page information works poorly. To address this, a URL attribute integration method based on bags of link paths is proposed. Using a mechanism that combines anchor text with page titles, the method derives URL attribute values from the searched page set according to a best-match strategy and fills them into the target database. Experimental results show that, compared with the W2DR algorithm, the method improves the F-measure by 13.91% and 3.54% on two different datasets.

6.
Entity linking is the process of linking ambiguous entity mentions in text to the corresponding entities in a knowledge base. This paper first analyzes entity linking systems and identifies their core problem: computing the semantic similarity between a mention's context and the candidate entities. It then proposes a graph-model-based method for computing the similarity between Wikipedia concepts and applies it to the similarity computation between mention contexts and candidate entities. On this basis, an entity linking system built on a learning-to-rank framework is designed. Experimental results show that, compared with traditional measures, the new similarity measure captures the semantic similarity between mention contexts and candidate entities more effectively, and the entity linking system, which incorporates multiple features, achieves state-of-the-art performance.

7.
When compiling planning and survey reports, authors usually have to collect and read large amounts of text material according to a drafted outline or set of headings, sort it into categories, and then select what to use; the workload is heavy and quality is hard to guarantee. For the domain of digital-government planning documents, a text material recommendation method combining label classification and semantic query expansion is proposed. From an information retrieval perspective, headings at all levels of the outline are treated as queries and the reference materials as target documents, so that text materials can be retrieved and recommended. Based on a differential evolution algorithm, the method organically combines three recommenders: one based on averaged word vectors, one based on semantic query expansion, and one based on label classification. This compensates for the shortcomings of traditional text material recommendation and enables paragraph-granularity retrieval of materials from outline headings. Validation on ten datasets shows significant performance gains; the method greatly reduces the manual effort of selecting and categorizing materials and lowers the difficulty of document compilation.

8.
怀宝兴, 宝腾飞, 祝恒书, 刘淇. 《软件学报》, 2014, 25(9): 2076-2087
Named entity linking (NEL) is the process of linking a named entity mention in a document to an unambiguous entity in a knowledge base, including merging synonymous entities and disambiguating ambiguous ones. The technique improves the information filtering ability of practical applications such as online recommender systems and web search engines. However, the explosive growth in the number of entities poses a huge challenge for disambiguation, making it increasingly hard for current NEL techniques to meet accuracy requirements. Observing that words and entities in a document carry different semantic topics (e.g., "苹果" can mean the fruit apple or an electronics brand), while words and entities in the same document should share similar topics, this paper models documents and disambiguates entities at the semantic level. On this basis, a complete NEL method built on a probabilistic topic model is designed. First, a knowledge base is constructed from Wikipedia; then a probabilistic topic model maps words and named entities into the same topic space, and each named entity mention in a given text is linked to an unambiguous named entity in the knowledge base according to the entity's position vector in that space. Finally, extensive experiments on real datasets, compared against standard methods, show that the proposed framework resolves entity ambiguity well and achieves higher linking accuracy.
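Once words and entities share one topic space, the linking step reduces to a nearest-neighbor search, as the toy sketch below shows. The three-topic space, the candidate vectors, and the use of cosine similarity are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def link_mention(mention_topics: np.ndarray,
                 candidates: dict[str, np.ndarray]) -> str:
    """Link a mention to the candidate entity whose topic-space position
    is closest (by cosine) to the mention context's topic distribution."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda e: cos(mention_topics, candidates[e]))

# Toy 3-topic space: [fruit, technology, finance] (hypothetical).
context = np.array([0.05, 0.90, 0.05])   # context of "苹果发布了新手机"
candidates = {
    "Apple (fruit)": np.array([0.85, 0.05, 0.10]),
    "Apple Inc.":    np.array([0.05, 0.80, 0.15]),
}
print(link_mention(context, candidates))  # -> "Apple Inc."
```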

9.
Biomedical text holds rich exploratory value and supplies valuable domain knowledge for biomedical researchers. Making full and efficient use of the vast biomedical literature, discovering the important hidden information in it, and acquiring domain knowledge are of great significance to biomedical research. Biomedical entity linking recognizes named entities in biomedical text and maps the strings denoting an entity to the corresponding concepts in a biomedical domain knowledge base. The task usually faces two main challenges: (1) the ambiguity of natural-language descriptions, and (2) the heterogeneity between natural-language text and biomedical knowledge bases. Traditional methods based on feature selection or rule discovery rely on manually chosen features or handcrafted rules, and staged pipeline models can suffer from error propagation. This work therefore proposes an entity linking method combining deep learning with a knowledge base: by deeply mining the hidden features of natural-language text and their structural similarity to the knowledge base's concept graph, it handles biomedical entity recognition and entity-concept alignment jointly. The method automatically acquires the semantic information of biomedical entities from a standard biomedical knowledge base and mines semantic relations among biomedical entities. Experiments show good results on both entity recognition and alignment, significantly improving task accuracy, with a performance gain of more than 10% on the core entity linking task.

10.
Interactive machine translation (IMT) is a technique in which interaction between a machine translation system and a human translator guides decoding and improves output quality. Mainstream IMT methods use the prefix confirmed by the translator as the only constraint guiding decoding, so interaction is limited and inefficient. This paper improves IMT in both the interaction mode and the decoding algorithm. For interaction, translators are allowed to select, before translation, the correct translation for each source phrase from a list of candidate phrase translations. A phrase-table-based diversity ranking algorithm is proposed to increase the diversity of candidate phrase translations, and the interaction interface is designed according to the translator's cognitive process during translation to improve the user experience. For decoding, bilingual phrases are used together with the prefix as constraints guiding the decoding process, improving the accuracy of hypothesis scoring and filtering. Manual evaluation on an LDC Chinese-English parallel corpus shows that, compared with traditional IMT methods, the method reduces the translator's cognitive load and translation time and improves translation efficiency.
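The diversity-ranking component can be illustrated with an MMR-style re-ranker over phrase-table candidates. This is a stand-in for the paper's unpublished algorithm; the scores, the character-Jaccard similarity, and the `lam` trade-off parameter are all assumptions.

```python
def diverse_rank(candidates, score, sim, k=5, lam=0.7):
    """MMR-style re-ranking: pick candidates with high model score that
    are dissimilar to ones already selected, so the translator sees
    varied phrase options rather than near-duplicates."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        best = max(pool, key=lambda c: lam * score(c)
                   - (1 - lam) * max((sim(c, s) for s in selected),
                                     default=0.0))
        selected.append(best)
        pool.remove(best)
    return selected

# Toy candidates for the source phrase "发展" with phrase-table scores.
scores = {"development": 0.9, "developments": 0.85,
          "growth": 0.6, "progress": 0.55}
sim = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))  # char Jaccard
print(diverse_rank(scores, scores.get, sim, k=3))
```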

11.
Wikipedia has become one of the largest online repositories of encyclopedic knowledge. Wikipedia editions are available for more than 200 languages, with entries varying from a few pages to more than 1 million articles per language. Embedded in each Wikipedia article is an abundance of links connecting the most important words or phrases in the text to other pages, thereby letting users quickly access additional information. An automatic text-annotation system combines keyword extraction and word-sense disambiguation to identify relevant links to Wikipedia pages.

12.
Wikipedia offers a huge number of high-quality articles describing well-known concepts, and rich images make them even more valuable. However, most Wikipedia articles have few or no images. This paper presents WIMAGE, an integrated framework for finding images with high precision, high recall, and high diversity for Wikipedia articles. WIMAGE includes a query generation method and two image ranking methods. Experiments on 40 articles from four common Wikipedia categories show that WIMAGE effectively finds high-precision, high-recall, and high-diversity images for Wikipedia articles, and that the ranking method considering both visual similarity and textual similarity performs best.
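A minimal sketch of the best-performing ranking idea, a weighted combination of textual and visual similarity. The weight, the field names, and the precomputed similarity scores are placeholders; WIMAGE's actual features and scoring are not reproduced here.

```python
def combined_rank(images, w_text=0.5):
    """Rank candidate images by a weighted sum of textual similarity
    (surrounding text vs. the article) and visual similarity; the
    component scores are assumed to be precomputed elsewhere."""
    return sorted(images,
                  key=lambda im: w_text * im["text_sim"]
                                 + (1 - w_text) * im["visual_sim"],
                  reverse=True)

imgs = [{"url": "a.jpg", "text_sim": 0.9, "visual_sim": 0.3},
        {"url": "b.jpg", "text_sim": 0.6, "visual_sim": 0.8},
        {"url": "c.jpg", "text_sim": 0.2, "visual_sim": 0.9}]
print([im["url"] for im in combined_rank(imgs)])  # -> ['b.jpg', 'a.jpg', 'c.jpg']
```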

13.
In this work we study how people navigate the information network of Wikipedia and investigate (i) free-form navigation by studying all clicks within the English Wikipedia over an entire month and (ii) goal-directed Wikipedia navigation by analyzing wikigames, where users are challenged to retrieve articles by following links. To study how the organization of Wikipedia articles in terms of layout and links affects navigation behavior, we first investigate the characteristics of the structural organization and of hyperlinks in Wikipedia and then evaluate link selection models based on article structure and other potential influences in navigation, such as the generality of an article's topic. In free-form Wikipedia navigation, covering all Wikipedia usage scenarios, we find that click choices can be best modeled by a bias towards article structure, such as a tendency to click links located in the lead section. For the goal-directed navigation of wikigames, our findings confirm the zoom-out and the homing-in phases identified by previous work, where users are guided by generality at first and textual similarity to the target later. However, our interpretation of the link selection models accentuates that article structure is the best explanation for the navigation paths in all except these initial and final stages. Overall, we find evidence that users more frequently click on links that are located close to the top of an article. The structure of Wikipedia articles, which places links to more general concepts near the top, supports navigation by allowing users to quickly find the better-connected articles that facilitate navigation. Our results highlight the importance of article structure and link position in Wikipedia navigation and suggest that better organization of information can help make information networks more navigable.

14.
The Linked Hypernyms Dataset (LHD) provides entities described by Dutch, English and German Wikipedia articles with types in the DBpedia namespace. The types are extracted from the first sentences of Wikipedia articles using Hearst pattern matching over part-of-speech annotated text and disambiguated to DBpedia concepts. The dataset covers 1.3 million RDF type triples from English Wikipedia, out of which 1 million RDF type triples were found not to overlap with DBpedia, and 0.4 million not to overlap with YAGO2s. About 770 thousand German and 650 thousand Dutch Wikipedia entities are assigned a novel type, which exceeds the number of entities in the localized DBpedia for the respective language. RDF type triples from the German dataset have been incorporated into the German DBpedia. Quality assessment was performed based on a total of 16,500 human ratings and annotations. For the English dataset, the average accuracy is 0.86, for German 0.77, and for Dutch 0.88. The accuracy of raw plain-text hypernyms exceeds 0.90 for all languages. The LHD release described and evaluated in this article targets DBpedia 3.8; an LHD version for DBpedia 3.9, containing approximately 4.5 million RDF type triples, is also available.
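The extraction step can be sketched with plain-regex Hearst patterns over an article's first sentence. LHD actually matches patterns over part-of-speech annotated text and then disambiguates to DBpedia, so this regex version is only a rough approximation of the idea.

```python
import re

# Two Hearst-style patterns: "X is/was a Y" and "X, a Y" (illustrative).
PATTERNS = [
    re.compile(r"^(?P<entity>[A-Z][\w\s]+?) (?:is|was) (?:a|an|the) "
               r"(?P<hypernym>[a-z]\w*)"),
    re.compile(r"^(?P<entity>[A-Z][\w\s]+?), (?:a|an|the) "
               r"(?P<hypernym>[a-z]\w*)"),
]

def extract_hypernym(first_sentence: str):
    """Return (entity, hypernym) from a first sentence, or None."""
    for pat in PATTERNS:
        m = pat.search(first_sentence)
        if m:
            return m.group("entity").strip(), m.group("hypernym")
    return None

print(extract_hypernym("Albert Einstein was a physicist born in Ulm."))
# -> ('Albert Einstein', 'physicist')
```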

15.
The paper addresses the problem of automatic dictionary translation. The proposed method translates a dictionary by mining repositories in the source and target languages, without any directly given relationships connecting the two languages. It consists of two stages: (1) translation by lexical similarity, where words are compared graphically, and (2) translation by semantic similarity, where contexts are compared. In the experiments, the Polish and English versions of Wikipedia were used as text corpora. The method and its phases are thoroughly analyzed. The results allow implementing this method in human-in-the-middle systems.
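A compact sketch of the two-stage idea, assuming difflib's ratio stands in for the graphical word comparison and a Jaccard overlap of context words stands in for the paper's corpus-based context comparison; all data and weights are toy assumptions.

```python
from difflib import SequenceMatcher

def lexical_sim(src: str, tgt: str) -> float:
    """Stage 1: graphical comparison of the words themselves, which
    catches cognates and internationalisms (e.g. Polish 'telefon')."""
    return SequenceMatcher(None, src.lower(), tgt.lower()).ratio()

def semantic_sim(src_ctx: set, tgt_ctx: set) -> float:
    """Stage 2: compare contexts mined from the two corpora; Jaccard
    overlap is a simple stand-in for the paper's context comparison."""
    union = src_ctx | tgt_ctx
    return len(src_ctx & tgt_ctx) / len(union) if union else 0.0

def translate(src, candidates, src_ctx, ctx_of, w=0.4):
    """Pick the candidate maximizing a blend of both similarities."""
    return max(candidates,
               key=lambda t: w * lexical_sim(src, t)
                             + (1 - w) * semantic_sim(src_ctx, ctx_of[t]))

ctx_of = {"telephone": {"call", "number", "line"},
          "television": {"screen", "channel", "broadcast"}}
print(translate("telefon", list(ctx_of), {"call", "number", "dial"}, ctx_of))
# -> "telephone"
```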

16.
Bilingual translation pairs are important in cross-language information retrieval, machine translation, and related fields; translations of proper nouns, new words, slang, and terminology in particular are key factors affecting system performance, yet such pairs are hard to obtain from existing dictionaries. Exploiting Wikipedia's domain coverage and structural features, this paper proposes a template mining method for automatically acquiring high-quality Chinese-English translation pairs from Wikipedia; it not only mines common templates effectively but can also discover complex templates that humans would not easily notice. The method has three main steps: (1) extract translation pairs directly from the language toolbar as heuristic knowledge for further mining; (2) mine Chinese-English translation-pair templates from Wikipedia pages using a PAT-array structure; (3) use the mined templates to automatically mine other Chinese-English translation pairs from the pages, and evaluate the templates. Experimental results show that the templates discover translation pairs with a precision of 90.4%.

17.
In our work, we review and empirically evaluate five different raw methods of text representation that allow automatic processing of Wikipedia articles. The main contribution of the article, an evaluation of approaches to text representation for machine learning tasks, indicates that the text representation is fundamental for achieving good categorization results. The analysis of the representation methods creates a baseline that cannot be compensated for even by sophisticated machine learning algorithms. It confirms the thesis that proper data representation is a prerequisite for achieving high-quality results of data analysis. Evaluation of the text representations was performed within the Wikipedia repository by examination of classification parameters observed during automatic reconstruction of human-made categories. For that purpose, we use a classifier based on the support vector machine method, extended with multilabel and multiclass functionalities. During classifier construction we observed parameters such as learning time, representation size, and classification quality that allow us to draw conclusions about text representations. For the experiments presented in the article, we use data sets created from Wikipedia dumps. We describe our software, called Matrix’u, which allows a user to build computational representations of Wikipedia articles. The software is the second contribution of our research, because it is a universal tool for converting Wikipedia from a human-readable form to a form that can be processed by a machine. Results generated using Matrix’u can be used in a wide range of applications that involve usage of Wikipedia data.
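A minimal example of the evaluation setup, assuming scikit-learn: one candidate representation (TF-IDF here) feeds a linear SVM that tries to reconstruct human-made categories, and comparing representations means swapping the vectorizer. The corpus and labels are toy stand-ins, not the paper's Wikipedia dumps.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny stand-in corpus: article text -> Wikipedia category.
docs = ["a genus of flowering plants in the family",
        "a village in the administrative district of",
        "a studio album by the rock band",
        "a species of plants native to the region"]
labels = ["plants", "places", "music", "plants"]

# Representation + SVM classifier; swap TfidfVectorizer for another
# representation to compare methods under the same classifier.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["a genus of plants found in the family"]))  # likely ['plants']
```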

18.
To address the duplicated and non-standardized category information caused by traditional manual editing, this paper applies collaborative filtering to automatically recommend categories for Chinese Wikipedia articles. Each Wikipedia article is represented by four important semantic features of Chinese Wikipedia: inlinks, outlinks, the categories of inlinks, and the categories of outlinks. After retrieving all categories of the articles most similar to the target article, the weight of each category is computed from the similarity values returned by the query, and the top-ranked categories are returned to the target article as recommendations. Experimental results show that these four semantic features characterize a Wikipedia article well and verify the effectiveness of collaborative filtering for automatic category recommendation in Chinese Wikipedia.
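The weighting scheme is a straightforward similarity-weighted vote, sketched below. The neighbor list and category names are illustrative, and retrieval of the neighbors via the four link-based features is assumed to have already happened.

```python
from collections import defaultdict

def recommend_categories(similar_articles, top_k=3):
    """Collaborative-filtering category recommendation: similar articles
    vote for their categories, each vote weighted by the similarity of
    the article that cast it.

    similar_articles: list of (similarity, categories) pairs for the
    articles most similar to the target."""
    weight = defaultdict(float)
    for sim, categories in similar_articles:
        for cat in categories:
            weight[cat] += sim
    return sorted(weight, key=weight.get, reverse=True)[:top_k]

neighbors = [(0.92, ["Category:计算机科学", "Category:人工智能"]),
             (0.81, ["Category:人工智能", "Category:机器学习"]),
             (0.40, ["Category:数学"])]
print(recommend_categories(neighbors))
# -> ['Category:人工智能', 'Category:计算机科学', 'Category:机器学习']
```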

19.
The problem of machine translation can be viewed as consisting of two subproblems: (a) lexical selection and (b) lexical reordering. In this paper, we propose stochastic finite-state models for these two subproblems. Stochastic finite-state models are efficiently learnable from data, effective for decoding, and are associated with a calculus for composing models which allows for tight integration of constraints from various levels of language processing. We present a method for learning stochastic finite-state models for lexical selection and lexical reordering that are trained automatically from pairs of source and target utterances. We use this method to develop models for English–Japanese and English–Spanish translation and present the performance of these models for translation on speech and text. We also evaluate the efficacy of such a translation model in the context of a call routing task of unconstrained speech utterances.

20.
The main tasks in Example-based Machine Translation (EBMT) comprise source text decomposition, followed by translation example matching and selection, and finally adaptation and recombination of the target translation. As natural language is inherently ambiguous, preserving the source text's meaning throughout these processes is complex and challenging. A structural semantics is introduced as an attempt at a meaning-based approach to improve the EBMT system. The structural semantics is used to support deeper semantic similarity measurement and to impose structural constraints on translation example selection. A semantic compositional structure is derived from the structural semantics of the selected translation examples. This semantic compositional structure serves as a representation that preserves the consistency and integrity of the input sentence's meaning structure throughout the recombination process. In this paper, an English-to-Malay EBMT system is presented to demonstrate the practical application of this structural semantics. Evaluation of the translation test results shows that the new translation framework based on the structural semantics outperforms the previous EBMT framework.
