Similar Documents
Found 20 similar documents (search time: 93 ms)
1.
Entity linking is the task of mapping ambiguous entity mentions in text to the corresponding entities in a knowledge base. This paper first analyzes entity linking systems and identifies their core problem: computing the semantic similarity between an entity mention's text and its candidate entities. It then proposes a graph-model-based method for computing similarity between Wikipedia concepts and applies it to measure the semantic similarity between mention texts and candidate entities. On this basis, an entity linking system built on a learning-to-rank framework is designed. Experimental results show that, compared with traditional measures, the new similarity measure captures the semantic similarity between mention texts and candidate entities more effectively, and that the entity linking system, which incorporates multiple features, achieves state-of-the-art performance.
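The core step the abstract describes, scoring each candidate entity by its semantic similarity to the mention's context, can be illustrated with a minimal sketch. The cosine-over-word-count-vectors measure and the toy candidate descriptions below are stand-ins for the paper's graph-based Wikipedia concept similarity, not its actual method:

```python
from collections import Counter
import math

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors (dicts)."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(mention_context, candidates):
    """Rank candidate entities by similarity between each candidate's
    description and the mention's surrounding text."""
    ctx = Counter(mention_context.lower().split())
    scored = [(cosine(ctx, Counter(desc.lower().split())), name)
              for name, desc in candidates.items()]
    return [name for _, name in sorted(scored, reverse=True)]

# Toy candidates for the ambiguous mention "Apple" (descriptions invented).
candidates = {
    "Apple_Inc.": "technology company iphone mac computer",
    "Apple_(fruit)": "fruit tree orchard sweet edible",
}
ranking = rank_candidates("the company released a new iphone", candidates)
```

In the full system such similarity scores would be one feature among many fed to the learning-to-rank stage.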

2.
To address the problems of overly large result sets and low precision in traditional semantic query expansion, this paper proposes a semantically weighted query expansion method, using the tourism domain as its setting. It performs query expansion using ontology reasoning and the relatedness between entities in the ontology, and combines TF-IDF term-frequency weighting with semantic-relatedness weighting to improve result ranking. Experiments show that, compared with two alternative methods, this approach ranks more relevant results near the top and improves the precision of tourism information retrieval.
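The ranking idea, blending a lexical TF-IDF weight with a semantic-relatedness weight, can be sketched as a simple linear mix. The `alpha` parameter and the toy document scores below are hypothetical, not taken from the paper:

```python
def combined_score(tfidf, relatedness, alpha=0.6):
    """Linear mix of a lexical TF-IDF score and a semantic-relatedness
    score; `alpha` is a hypothetical mixing weight, not from the paper."""
    return alpha * tfidf + (1 - alpha) * relatedness

# (doc id, TF-IDF score, semantic relatedness to the query) -- toy values.
docs = [("d1", 0.8, 0.1), ("d2", 0.5, 0.9), ("d3", 0.2, 0.2)]
ranked = sorted(docs, key=lambda d: combined_score(d[1], d[2]), reverse=True)
```

Here `d2`, lexically weaker but semantically closer, overtakes `d1`, which is the effect the reweighting aims for.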

3.
Semantic relatedness computation plays an important role in natural language processing tasks such as information retrieval, word sense disambiguation, automatic summarization, and spelling correction. This paper computes the semantic relatedness of Chinese words using Wikipedia-based explicit semantic analysis. Based on the Chinese Wikipedia, each word is represented as a weighted vector of concepts, so that computing the relatedness of two words reduces to comparing their concept vectors. Furthermore, the prior probability of each page is introduced, and the link structure between Wikipedia pages is used to adjust the weights of the concept-vector components. Experiments show that the method achieves a Spearman rank correlation of 0.52 against human-annotated gold standards, a significant improvement in relatedness computation.
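The explicit-semantic-analysis representation described above reduces word relatedness to comparing weighted concept vectors. A minimal sketch, with invented concept weights standing in for the Wikipedia-derived ones:

```python
import math

# Toy concept space: each word maps to weights over Wikipedia concepts.
# The weights here are invented, not derived from Wikipedia.
CONCEPT_VECTORS = {
    "银行": {"Bank": 0.9, "Finance": 0.7, "River": 0.1},    # "bank"
    "金融": {"Finance": 0.9, "Bank": 0.6, "Economy": 0.8},  # "finance"
    "河流": {"River": 0.9, "Geography": 0.7},               # "river"
}

def esa_relatedness(w1, w2):
    """Relatedness of two words = cosine of their concept vectors."""
    u, v = CONCEPT_VECTORS[w1], CONCEPT_VECTORS[w2]
    dot = sum(weight * v.get(c, 0.0) for c, weight in u.items())
    norm = lambda x: math.sqrt(sum(t * t for t in x.values()))
    return dot / (norm(u) * norm(v))
```

With these toy vectors, "银行" (bank) comes out far closer to "金融" (finance) than to "河流" (river), as the shared financial concepts dominate the dot product.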

4.
This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95%—as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.

5.
We study the problem of entity salience by proposing the design and implementation of Swat, a system that identifies the salient Wikipedia entities occurring in an input document. Swat consists of several modules that detect and classify Wikipedia entities on-the-fly as salient or not, based on a large number of syntactic, semantic, and latent features extracted via a supervised process trained over millions of examples drawn from the New York Times corpus. Validation is performed through a large experimental assessment, showing that Swat improves on known solutions over all publicly available datasets. We release Swat via an API, which we describe in the paper to ease its use in other software.

6.
Web information retrieval is widely used today, but its accuracy is often unsatisfactory, largely because the conceptual links between query keywords are lost. Starting from a class of domain-restricted user needs, this work uses a search engine as the access interface to web corpus resources and combines rule-based and statistical methods to generate a semantic concept graph for the query. The graph serves as the result of requirement analysis and guides the subsequent semantic retrieval process, improving the relevance between user queries and returned results. Experiments show that the generation method is effective and feasible and offers useful insights for concept-graph-based semantic retrieval.

7.
A search query, being a very concise grounding of user intent, can have many possible interpretations. Search engines hedge their bets by diversifying top results to cover multiple such possibilities, so that the user is likely to be satisfied whatever her intended interpretation may be. Diversified query expansion is the problem of diversifying query expansion suggestions, so that the user can specialize the query to better suit her intent even before perusing search results. In this paper, we consider the use of semantic resources and tools to arrive at improved methods for diversified query expansion. In particular, we develop two methods, leveraging Wikipedia and pre-learnt distributional word embeddings respectively. Both approaches operate on a common three-phase framework: first take a set of informative terms from the search results of the initial query, then build a graph, and finally use a diversity-conscious node ranking to prioritize candidate terms for diversified query expansion. Our methods differ in the second phase: the first method, Select-Link-Rank (SLR), links terms with Wikipedia entities to construct the graph, while the second, Select-Embed-Rank (SER), constructs the graph using similarities between distributional word embeddings. Through an empirical analysis and user study, we show that SLR outperforms state-of-the-art diversified query expansion methods, establishing that Wikipedia is an effective resource to aid diversified query expansion. Our empirical analysis also shows that SER outperforms the baselines convincingly, making it the best available method for cases where SLR is not applicable; these include narrow-focus search systems for which a relevant knowledge base is unavailable. Our SLR method also outperforms a state-of-the-art method on the task of diversified entity ranking.
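The third phase above, diversity-conscious ranking of candidate terms, can be illustrated with a greedy MMR-style selection. This is a generic sketch of the idea, not the paper's exact ranker, and the relevance and similarity numbers are invented:

```python
def diversified_expansion(candidates, sim, k, lam=0.7):
    """Greedy MMR-style selection: balance a term's relevance against its
    similarity to terms already selected (`lam` is a hypothetical trade-off)."""
    selected, pool = [], dict(candidates)  # term -> relevance score
    while pool and len(selected) < k:
        best = max(pool, key=lambda t: lam * pool[t]
                   - (1 - lam) * max((sim(t, s) for s in selected), default=0.0))
        selected.append(best)
        del pool[best]
    return selected

# Toy data for the ambiguous query "jaguar": "feline" is highly similar
# to "cat", so a diversified list prefers "car" as the second pick.
SIM = {frozenset(("cat", "feline")): 0.9}
def sim(a, b):
    return SIM.get(frozenset((a, b)), 0.1)

picks = diversified_expansion({"cat": 0.9, "feline": 0.85, "car": 0.6}, sim, k=2)
```

Although "feline" has higher raw relevance than "car", its redundancy with the already-selected "cat" pushes it down, which is exactly the diversification effect the abstract describes.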

8.
With the emergence of large-scale knowledge graphs and the need for enterprises to manage domain knowledge graphs efficiently, ad hoc entity retrieval over knowledge graphs has become a research focus. Given a knowledge graph and a user query, the goal of entity retrieval is to return a ranked list of entities from the graph. From a matching perspective, most traditional entity retrieval models map both the query and the entities into a single word-based feature space. This has obvious drawbacks: for example, two words belonging to the same entity are treated as independent. This paper therefore proposes mapping queries and entities into two feature spaces simultaneously, words and entities, a scheme we call learning to rank over dual feature spaces. Entities are first abstracted into several fields; ranking features are then extracted separately from the word space and the entity space and fed into a learning-to-rank algorithm. Experiments on standard datasets show that the dual-feature-space ranking model significantly outperforms state-of-the-art entity retrieval models.
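The dual-feature-space idea, drawing ranking features from a word space and an entity space at the same time, might look like the following sketch. The field names and the three features are illustrative assumptions, not the paper's actual feature set:

```python
def dual_space_features(query_terms, entity):
    """Features from the word space (term overlap with the entity's text)
    and from the entity space (matches against the entity's id and name).
    The feature set and field names here are invented for illustration."""
    words = set(entity["text"].lower().split())
    word_overlap = len(set(query_terms) & words) / len(query_terms)
    entity_id_match = 1.0 if entity["id"] in query_terms else 0.0
    name_match = 1.0 if " ".join(query_terms) == entity["name"].lower() else 0.0
    return [word_overlap, entity_id_match, name_match]

# Hypothetical entity record with word-space and entity-space fields.
entity = {"id": "Q90", "name": "Paris", "text": "Paris capital of France"}
features = dual_space_features(["paris", "france"], entity)
```

A learning-to-rank algorithm would then consume such vectors, letting it weight word-level and entity-level evidence independently instead of collapsing both into one word space.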

9.
Cultural entities related to cross-border ethnic groups often have several different names for the same entity, so mainstream text retrieval methods face a semantic-sparsity problem on cross-border ethnic-culture datasets. This paper proposes a retrieval method for cross-border ethnic culture based on entity semantic expansion. A cross-border ethnic-culture knowledge graph links the entities of the different cultures in the form of knowledge triples, and entity-category labels are added, alleviating the lack of semantic information in these entities. Entities and their expanded semantic information are embedded with the TransH model and fused into the query text for semantic enhancement, improving the accuracy of text retrieval over cross-border ethnic-culture data. Experiments show that the method improves over the baseline model by 5.4%.
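TransH, used above to embed the entities and their expanded semantics, scores a triple (h, r, t) after projecting the head and tail vectors onto a relation-specific hyperplane. A minimal sketch of that scoring function with toy two-dimensional vectors (no training loop, which the real model of course requires):

```python
import math

def project(vec, normal):
    """Project `vec` onto the hyperplane with unit normal `normal`:
    the core TransH operation v - (w . v) w."""
    dot = sum(a * b for a, b in zip(vec, normal))
    return [a - dot * b for a, b in zip(vec, normal)]

def transh_score(h, r, t, w):
    """Implausibility of triple (h, r, t) under relation hyperplane w:
    || proj(h) + r - proj(t) ||.  Lower means more plausible."""
    hp, tp = project(h, w), project(t, w)
    return math.sqrt(sum((a + b - c) ** 2 for a, b, c in zip(hp, r, tp)))
```

Because head and tail are projected first, differences along the hyperplane normal are ignored; this is what lets TransH model one-to-many and many-to-one relations better than a plain translation model.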

10.
This paper designs and implements a semantics-aware distributed video retrieval system named "语寻" (Yuxun). The system uses an improved video semantic-annotation tool (based on the IBM VideoAnnEx annotation tool, extended with shot-level semantic-graph annotation and natural language processing) to analyze and annotate videos, producing MPEG-7 description files that carry semantic information. It then builds a distributed index over these MPEG-7 files while storing the video files themselves in a distributed fashion. The system offers rich web query interfaces, including keyword queries with semantic expansion, semantic-graph queries, and natural-language queries; once a user submits a semantic query, the system quickly retrieves the relevant videos and segments, which can then be browsed and played on demand. The whole system adopts a distributed architecture with good scalability and can index and retrieve massive video collections.

11.
Entity linking is a fundamental task in natural language processing. Entity linking with knowledge graphs aims to link mentions in text to their correct entities in a knowledge graph such as DBpedia or YAGO2. Most existing methods rely on hand-designed features to model the contexts of mentions and entities, which are sparse and hard to calibrate. In this paper, we present a neural model that combines a co-attention mechanism with a graph convolutional network for entity linking with knowledge graphs, extracting features of mentions and entities from their contexts automatically. Specifically, given the context of a mention and the context of one of its candidate entities, we introduce the co-attention mechanism to learn the relatedness between the two contexts, and build the mention representation in light of that relatedness. Moreover, we propose a context-aware graph convolutional network for entity representation, which takes into consideration both the graph structure of the candidate entity and its relatedness with the mention context. Experimental results show that our model consistently outperforms the baseline methods on five widely used datasets.

12.
陈天莹, 符红光. 《计算机工程》, 2008, 34(12): 164-166
Personalized graphics search breaks away from traditional keyword-based querying and replaces it with graphics-based querying, giving queries a degree of semantic structure and making results more accurate. This paper presents an SVG graphics search engine based on semantic relation pairs. Most current browsers do not support SVG graphics directly, but the SVG parser and SVG viewer proposed in this paper make it possible to retrieve and display SVG graphics on the web.

13.
The Internet contains vast amounts of unstructured information such as text and images. RDF, the resource description framework proposed by the W3C, is well suited to describing such unstructured information, and many large RDF knowledge bases have accordingly been built, such as Freebase, YAGO, and DBpedia. The rich semantic information in an RDF knowledge base can be used to annotate named entities from web pages, enriching their semantics. Mapping named entities on web pages to the corresponding entities in a knowledge base is called entity annotation; it has two main parts, entity mapping and annotation disambiguation. Exploiting the characteristics of massive RDF knowledge bases, this paper proposes an effective entity annotation method that resolves the disambiguation problem with simple graph weighting and computation. The method has been implemented on a cloud platform, and experiments verify its accuracy and scalability.
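The "simple graph weighting" style of disambiguation can be sketched as follows: each mention picks the candidate whose total relatedness to the other mentions' candidates is highest. The relatedness table below is invented for illustration and is only a stand-in for the paper's actual weighting scheme:

```python
def disambiguate(mentions, relatedness):
    """For each mention, pick the candidate entity whose total relatedness
    to the other mentions' best-matching candidates is highest."""
    choice = {}
    for m, cands in mentions.items():
        def support(c):
            # Sum, over every other mention, the best relatedness between
            # candidate c and that mention's candidates.
            return sum(max(relatedness(c, c2) for c2 in cs)
                       for m2, cs in mentions.items() if m2 != m)
        choice[m] = max(cands, key=support)
    return choice

# Invented relatedness table: the basketball player co-occurs with the team.
REL = {frozenset(("Michael_Jordan", "Chicago_Bulls")): 0.9}
def rel(a, b):
    return REL.get(frozenset((a, b)), 0.0)

mentions = {
    "Jordan": ["Jordan_(country)", "Michael_Jordan"],
    "Bulls": ["Chicago_Bulls"],
}
result = disambiguate(mentions, rel)
```

The co-occurring mention "Bulls" pulls "Jordan" toward the basketball player rather than the country, which is the coherence signal graph-based disambiguation exploits.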

14.
李岩, 张博文, 郝红卫. 《计算机应用》, 2016, 36(9): 2526-2530
To address the lack of semantic association between expansion terms and the original query that traditional query expansion methods exhibit in specialized domains, this paper proposes a query expansion method based on semantic vector representations. First, a semantic vector model is built: word vectors are learned from the contextual semantics of words in a corpus. Next, the semantic similarity between words is computed from these vectors, and the words most semantically similar to the query terms are selected as expansion terms for the original query. Finally, a biomedical document retrieval system is built on the proposed expansion method and compared, with significance analysis, against traditional query expansion methods based on Wikipedia or WordNet and against the systems that competed in BioASQ 2014-2015. The results show that retrieval with semantic-vector query expansion outperforms traditional query expansion, improving mean average precision by at least one percentage point, and achieves significant gains over the competition systems.
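The expansion step, picking the vocabulary words whose embeddings are most similar to the query terms, can be sketched with toy vectors. The embeddings below are invented for illustration, not learned from a corpus:

```python
import math

# Toy word vectors (invented, standing in for learned embeddings).
EMB = {
    "protein": [0.9, 0.1, 0.2],
    "enzyme":  [0.8, 0.2, 0.3],
    "gene":    [0.7, 0.3, 0.1],
    "car":     [0.0, 0.9, 0.1],
}

def cos(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand(term, n=2):
    """Expansion terms = the n vocabulary words nearest to `term`."""
    others = [(cos(EMB[term], v), w) for w, v in EMB.items() if w != term]
    return [w for _, w in sorted(others, reverse=True)[:n]]
```

In this toy space "protein" expands to the domain-related "enzyme" and "gene" while the unrelated "car" is excluded, which is the semantic-association property the paper is after.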

15.
Computing the semantic relatedness of natural-language words requires a large amount of background knowledge. Wikipedia, currently the largest encyclopedia, is not only a huge corpus but also a knowledge base containing a wealth of human background knowledge and semantic relations, and research has shown it to be an ideal resource for semantic computation. This paper proposes an algorithm that combines Wikipedia's link structure and category system to compute the semantic relatedness of Chinese words. The algorithm uses only the link structure and the category system, requires no complex text processing, and has a low computational cost. On several human-annotated evaluation datasets, it achieves better results than algorithms that use the link structure or the category system alone; in the best case, the Spearman correlation coefficient improves by 30.96%.
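A sketch of how a link-based signal and a category-based signal might be combined into one relatedness score. The Jaccard and Wu-Palmer-style formulas and the `alpha` mix are illustrative assumptions, not the paper's actual algorithm:

```python
def link_relatedness(links_a, links_b):
    """Jaccard overlap of two articles' link sets."""
    union = links_a | links_b
    return len(links_a & links_b) / len(union) if union else 0.0

def category_relatedness(depth_a, depth_b, lca_depth):
    """Wu-Palmer-style score from category-tree depths and the depth
    of the lowest common ancestor category."""
    return 2.0 * lca_depth / (depth_a + depth_b)

def combined_relatedness(links_a, links_b, depth_a, depth_b, lca_depth,
                         alpha=0.5):
    """`alpha` mixes the two signals; the paper's weighting may differ."""
    return (alpha * link_relatedness(links_a, links_b)
            + (1 - alpha) * category_relatedness(depth_a, depth_b, lca_depth))

# Two articles sharing links {B, C}, both at depth 4 with an LCA at depth 3.
score = combined_relatedness({"A", "B", "C"}, {"B", "C", "D"}, 4, 4, 3)
```

Mixing the two signals is what lets the combined measure beat either one alone, as the abstract reports.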

16.
Modeling users' interests plays an important role in the current web, since it underlies many services such as recommendation and customization. Using semantic technologies to represent users' interests may help to reduce problems such as sparsity, over-specialization, and domain dependency, which are known critical issues of state-of-the-art recommenders. In this paper we present a method for high-coverage modeling of Twitter users, supported by a hierarchical representation of their interests which we call a Twixonomy. In order to automatically build a population, community, or single-user Twixonomy, we first identify "topical" friends in users' friendship lists (i.e., friends representing an interest rather than a social relation between peers). We classify as topical those users with an associated page on Wikipedia, and use a word-sense disambiguation algorithm to select the appropriate Wikipedia page for each topical friend. Next, starting from the set of wikipages representing the main topics of interest of the considered Twitter population, we extract all paths connecting these pages with the topmost Wikipedia category nodes, and then efficiently prune the resulting graph so as to induce a directed acyclic graph and significantly reduce over-ambiguity, a well-known problem of the Wikipedia category graph. We release the Twixonomy produced in this work under a Creative Commons license.

17.
Existing Chinese-Vietnamese cross-lingual news event retrieval methods make little use of event-entity knowledge from the news domain; when a candidate document contains multiple events, events unrelated to the query interfere with the matching between the query and the document, degrading retrieval performance. This paper proposes a Chinese-Vietnamese cross-lingual news event retrieval model that incorporates event-entity knowledge. Query translation converts the Chinese event query into a Vietnamese one, reducing the cross-lingual retrieval problem to monolingual retrieval. Since the query contains a single event while a candidate document may contain several, event trigger words are used to delimit the event spans in candidate documents, reducing interference from query-irrelevant events. On this basis, knowledge-rich representations of event entities are obtained from a knowledge graph and the event trigger words, and ranking features are extracted from the interactions between the query and the documents' event spans, covering matches between entity-knowledge representations and words as well as between entity-knowledge representations themselves. Experiments on a Chinese-Vietnamese bilingual news dataset show that, compared with baseline models such as BM25, Conv-KNRM, and ATER, the model achieves better cross-lingual news event retrieval, improving NDCG and MAP by up to 0.7122 and 0.5872, respectively.

18.
As a Web 2.0 knowledge base system characterized by openness and collaborative user editing, Wikipedia offers broad knowledge coverage, a high degree of structure, and rapid information updates. However, Wikipedia officially provides only semi-structured data files, so much useful structured information and data cannot be obtained and used directly. This paper first extracts and organizes several kinds of structured information from these data files; it then builds object models for the various kinds of information in Wikipedia and provides an open set of application programming interfaces, greatly lowering the barrier to using Wikipedia's information. Finally, using the information obtained from Wikipedia, it proposes a word semantic-relatedness measure based on the categories of the topic pages that links point to.

19.
20.
Multimodal retrieval is a well-established approach to image retrieval. Usually, images are accompanied by a text caption along with associated documents describing the image. Textual query expansion as a means of enhancing image retrieval is a relatively unexplored area. In this paper, we first study the effect of expanding a textual query on the retrieval of both the image and its associated text. Our study reveals that judicious expansion of the textual query through keyphrase extraction can lead to better results, either for text retrieval alone or for both image and text retrieval. To establish this, we use two well-known keyphrase extraction techniques, based on tf-idf and KEA. While query expansion increases retrieval efficiency, it is imperative that the expansion be semantically justified, so we propose a graph-based keyphrase extraction model that captures the relatedness between words in terms of both mutual information and relevance feedback. Most existing work has stressed bridging the semantic gap by using textual and visual features, either in combination or individually; the way these text and image features are combined determines the efficacy of any retrieval. For this purpose, we adopt Fisher-LDA to determine the appropriate weights for each modality, yielding an intelligent decision-making process that favors the feature set to be infused into the final query. Our proposed algorithm significantly outperforms the previously mentioned keyphrase extraction algorithms for query expansion. A rigorous set of experiments on the ImageCLEF-2011 Wikipedia Retrieval task dataset validates our claim that capturing the semantic relation between words through mutual information, followed by expansion of the textual query using relevance feedback, can simultaneously enhance both text and image retrieval.
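The tf-idf keyphrase extraction baseline mentioned above can be sketched in a few lines: score each word of a document by its term frequency times a smoothed inverse document frequency over the corpus, and keep the top-k words as expansion keyphrases:

```python
import math
from collections import Counter

def tfidf_keywords(doc, corpus, k=2):
    """Top-k words of `doc` ranked by tf * smoothed idf over `corpus`."""
    tf = Counter(doc.split())
    n = len(corpus)
    def idf(word):
        df = sum(1 for d in corpus if word in d.split())
        return math.log((n + 1) / (df + 1)) + 1.0  # smoothed idf
    scores = {w: c * idf(w) for w, c in tf.items()}
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

# Toy corpus: frequent-but-common "the" loses to rarer, repeated "sat".
corpus = ["the cat sat", "the dog ran", "the cat and the dog"]
keys = tfidf_keywords("the cat sat sat", corpus)
```

The graph-based model proposed in the paper replaces these frequency scores with mutual-information and relevance-feedback relatedness, but the selection interface is the same: a ranked list of candidate expansion terms.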


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号