Similar Documents
20 similar documents retrieved.
1.
The information content (IC) of a concept provides an estimation of its degree of generality/concreteness, a dimension which enables a better understanding of the concept's semantics. As a result, IC has been successfully applied to the automatic assessment of the semantic similarity between concepts. In the past, IC was estimated as the probability of appearance of concepts in corpora. However, the applicability and scalability of this method are hampered by corpus dependency and data sparseness. More recently, some authors have proposed IC-based measures using taxonomical features extracted from an ontology for a particular concept, obtaining promising results. In this paper, we analyse these ontology-based approaches to IC computation and propose several improvements aimed at better capturing the semantic evidence modelled in the ontology for the particular concept. Our approach has been evaluated and compared with related works (both corpus- and ontology-based) when applied to the task of semantic similarity estimation. Results obtained for a widely used benchmark show that our method enables similarity estimations that are better correlated with human judgements than related works.
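A minimal sketch of the intrinsic (corpus-free) IC idea this line of work builds on, in the style of Seco et al.: a concept's IC falls with the number of concepts it subsumes. The toy taxonomy and function names below are illustrative assumptions, not the paper's actual model.

```python
import math

# Toy "is a" taxonomy: child -> parent (single inheritance for brevity).
TAXONOMY = {
    "dog": "canine", "wolf": "canine", "canine": "carnivore",
    "cat": "feline", "feline": "carnivore", "carnivore": "animal",
}

def hyponyms(concept):
    """All concepts that (transitively) specialize the given concept."""
    direct = {c for c, p in TAXONOMY.items() if p == concept}
    found = set(direct)
    for c in direct:
        found |= hyponyms(c)
    return found

def intrinsic_ic(concept, total_concepts):
    """Seco-style intrinsic IC: a leaf is maximally informative (IC = 1),
    a concept subsuming everything carries no information (IC = 0)."""
    return 1.0 - math.log(len(hyponyms(concept)) + 1) / math.log(total_concepts)

all_concepts = set(TAXONOMY) | set(TAXONOMY.values())
print(intrinsic_ic("dog", len(all_concepts)))        # leaf -> 1.0
print(intrinsic_ic("carnivore", len(all_concepts)))  # general -> close to 0
```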

2.
Computing the semantic similarity/relatedness between terms is an important research area for several disciplines, including artificial intelligence, cognitive science, linguistics, psychology, biomedicine and information retrieval. These measures exploit knowledge bases to express the semantics of concepts. Some approaches, such as the information-theoretical ones, rely on knowledge structure, while others, such as the gloss-based approaches, use knowledge content. Firstly, based on structure, we propose a new intrinsic Information Content (IC) computing method which quantifies the subgraph formed by the ancestors of the target concept. Taxonomic measures, including the IC-based ones, consume topological parameters that must be extracted from taxonomies considered as Directed Acyclic Graphs (DAGs). Accordingly, we propose a routine of graph algorithms that provide the basic parameters, such as depth, ancestors, descendants and the Lowest Common Subsumer (LCS). The IC-computing method is assessed using several knowledge structures: the noun and verb WordNet "is a" taxonomies, the Wikipedia Category Graph (WCG), and the MeSH taxonomy. We also propose an aggregation schema that exploits the WordNet "is a" taxonomy and the WCG in a complementary way through the IC-based measures to improve coverage. Secondly, taking content into consideration, we propose a gloss-based semantic similarity measure that relies on a noun-weighting mechanism using our IC-computing method, as well as on the WordNet, Wiktionary and Wikipedia resources. Further evaluation is performed on various items, including nouns, verbs, multiword expressions and biomedical datasets, using well-recognized benchmarks. The results indicate an improvement in similarity and relatedness assessment accuracy.
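The graph routines the abstract mentions (ancestors, depth, LCS over a DAG) can be sketched as follows; the toy taxonomy and the choice of "deepest shared ancestor" for the LCS are assumptions for illustration.

```python
from collections import deque

# Toy DAG with multiple inheritance: concept -> list of parents.
PARENTS = {
    "root": [],
    "entity": ["root"], "abstraction": ["root"],
    "object": ["entity"], "measure": ["abstraction", "entity"],
}

def ancestors(c):
    """All ancestors of c (c excluded), via breadth-first traversal."""
    seen, queue = set(), deque(PARENTS[c])
    while queue:
        p = queue.popleft()
        if p not in seen:
            seen.add(p)
            queue.extend(PARENTS[p])
    return seen

def depth(c):
    """Length of the longest path from the root down to c."""
    return 0 if not PARENTS[c] else 1 + max(depth(p) for p in PARENTS[c])

def lcs(a, b):
    """Lowest Common Subsumer: the deepest concept subsuming both a and b."""
    common = (ancestors(a) | {a}) & (ancestors(b) | {b})
    return max(common, key=depth)

print(lcs("object", "measure"))  # -> "entity"
```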

3.
Semantic analysis of special Chinese sentence patterns is one of the current difficulties in Chinese information processing. Existing traditional semantic analysis methods have shortcomings and cannot adequately capture the semantic associations among the words and constituents of a Chinese sentence. Taking Chinese serial-verb constructions as an example, this paper proposes a semantic annotation method based on a feature-structure model, explores an annotation model for serial-verb sentences, and on this basis builds a large-scale Chinese semantic resource. The results show that the feature-structure model can comprehensively and accurately describe the complex semantic relations between the subject and the multiple predicate verbs and objects of a serial-verb sentence, providing a distinct semantic analysis approach for Chinese-oriented natural language processing.

4.
This paper develops methods for calculating the semantic similarity (closeness) and relatedness of natural-language words. The concept of semantic relatedness allows one to construct algorithmic models of contextual-linguistic analysis with a view to solving problems such as word sense disambiguation, named entity recognition and natural-language text analysis. A new algorithm is proposed for estimating the semantic distance between natural-language words. The method is a weighted modification of the well-known Lesk approach based on the lexical intersection of glossary entries.
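A hedged sketch of the weighted-Lesk idea: overlap of glossary entries where shared words are weighted (here by an idf-style score) rather than simply counted. The glosses and the exact weighting below are illustrative; the paper's actual scheme may differ.

```python
import math
from collections import Counter

def weighted_overlap(gloss_a, gloss_b, doc_freq, n_glosses):
    """Weighted Lesk: score the words shared by two glosses with an
    idf-style weight, so rarer (more discriminative) words count more."""
    shared = set(gloss_a.lower().split()) & set(gloss_b.lower().split())
    return sum(math.log((1 + n_glosses) / (1 + doc_freq.get(w, 0)))
               for w in shared)

glosses = {
    "bank_1": "sloping land beside a body of water",
    "bank_2": "financial institution that accepts deposits",
    "shore":  "land along the edge of a body of water",
}
df = Counter(w for g in glosses.values() for w in set(g.split()))
print(weighted_overlap(glosses["bank_1"], glosses["shore"], df, len(glosses)))
```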

5.
Computing the semantic similarity between words plays a fundamental role in many applications related to natural language processing. This paper proposes a new computing method that is efficient, practical and fairly accurate. Starting from the classical distributional hypothesis that "similar words occur in similar contexts", the method takes as a word's context not its adjacent words in sentences but its partner words in two-word noun compounds, which better reflect the word's semantic features and yield better results. On top of an automatically constructed large-scale collection of two-word noun compounds, direct and indirect collocate vectors are first built with tf-idf weighting, and the semantic similarity between two words is then obtained as the cosine distance between their collocate vectors. To facilitate comparison with related methods, a human-rated benchmark for Chinese word semantic similarity was constructed; on its noun, verb and adjective subsets the method achieves correlation coefficients of 0.703, 0.509 and 0.700 respectively, with 100% coverage.
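A compact sketch of the pipeline described above: build tf-idf vectors over a word's partners in two-word noun compounds, then take the cosine. The toy counts and the smoothed idf are assumptions for illustration.

```python
import math
from collections import Counter

# Toy collocate counts: word -> frequency of partner words observed
# in two-word noun compounds (stand-ins for the real extracted data).
COLLOCATES = {
    "汽车": Counter({"工业": 5, "市场": 3, "零件": 2}),
    "轿车": Counter({"市场": 4, "零件": 3, "价格": 1}),
}

def tfidf_vector(word, doc_freq, n_words):
    """tf-idf over collocates; idf is smoothed so shared collocates keep weight."""
    return {c: f * (1 + math.log(n_words / doc_freq[c]))
            for c, f in COLLOCATES[word].items()}

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v))

df = Counter(c for ctr in COLLOCATES.values() for c in ctr)
n = len(COLLOCATES)
u, v = tfidf_vector("汽车", df, n), tfidf_vector("轿车", df, n)
print(round(cosine(u, v), 3))  # similarity of 汽车 (automobile) vs 轿车 (car)
```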

6.
This paper presents a new IC (information content) model for computing the semantic similarity of concepts in WordNet. Built on WordNet's is_a relation, the model derives the IC value of every WordNet concept from the structure of WordNet alone, without requiring any external corpus. It considers not only the number of hyponym nodes under each concept but also the concept's depth in the WordNet taxonomy, making the IC values more precise. Experimental results show that plugging the model into several similarity algorithms clearly improves their performance.
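One published formula matching this description is the hybrid intrinsic IC of Zhou et al. (2008), which blends a hyponym-count term with a depth term; the sketch below assumes that form, and the paper's exact weighting may differ.

```python
import math

def hybrid_ic(n_hypo, depth, node_max, depth_max, k=0.5):
    """IC from taxonomy structure alone: fewer hyponyms and greater depth
    both push a concept's IC toward 1 (cf. Zhou et al., 2008).
    Assumes the root has depth 1."""
    hypo_term = 1 - math.log(n_hypo + 1) / math.log(node_max)
    depth_term = math.log(depth) / math.log(depth_max)
    return k * hypo_term + (1 - k) * depth_term

# A deep leaf vs. a shallow concept subsuming many nodes
# (node_max ~ number of noun synsets in WordNet 3.0):
print(hybrid_ic(n_hypo=0, depth=15, node_max=82115, depth_max=20))
print(hybrid_ic(n_hypo=5000, depth=3, node_max=82115, depth_max=20))
```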

7.
This paper proposes an algorithm for computing the semantic similarity between concepts in WordNet that considers both the information content (IC) of the two concepts and their distance in the WordNet is_a taxonomy, thereby improving performance. A new method for computing a concept's IC value is given: by taking into account both the number of a concept's child nodes and its depth in the WordNet taxonomy, the computed values become more precise. Comparison with five other semantic similarity algorithms shows that the proposed algorithm yields more accurate similarity scores.
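The abstract gives no closed form, but one plausible way to combine the two signals is to scale an IC-based similarity (Lin's, here) by an exponential decay over the is_a path length; everything in this sketch is an illustrative assumption, not the paper's formula.

```python
import math

def lin_similarity(ic_a, ic_b, ic_lcs):
    """Lin's IC similarity: shared information over total information."""
    return 2 * ic_lcs / (ic_a + ic_b)

def hybrid_similarity(ic_a, ic_b, ic_lcs, path_len, alpha=0.25):
    """Illustrative combination: IC similarity damped by taxonomy distance."""
    return lin_similarity(ic_a, ic_b, ic_lcs) * math.exp(-alpha * path_len)

print(round(hybrid_similarity(0.9, 0.8, 0.7, path_len=2), 3))  # -> 0.5
```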

8.
Estimation of the semantic likeness between words is of great importance in many applications dealing with textual data, such as natural language processing, knowledge acquisition and information retrieval. Semantic similarity measures exploit knowledge sources as the basis for their estimations. In recent years, interest in ontologies has grown thanks to global initiatives such as the Semantic Web, as they offer a structured knowledge representation. Thanks to the possibilities that ontologies enable regarding the semantic interpretation of terms, many ontology-based similarity measures have been developed. According to the principle on which those measures base the similarity assessment, and the way in which ontologies are exploited or complemented with other sources, several families of measures can be identified. In this paper, we survey and classify most of the ontology-based approaches developed, in order to evaluate their advantages and limitations and to compare their expected performance from both theoretical and practical points of view. We also present a new ontology-based measure relying on the exploitation of taxonomical features. The evaluation and comparison of our approach's results against those reported by related works under a common framework suggest that our measure provides high accuracy without some of the limitations observed in other works.
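As a concrete instance of the edge-counting/taxonomical family this survey covers (not the paper's own new measure), here is the classic Wu-Palmer formula:

```python
def wu_palmer(depth_a, depth_b, depth_lcs):
    """Wu & Palmer (1994): similarity from the depth of the concepts'
    Lowest Common Subsumer relative to the concepts' own depths."""
    return 2 * depth_lcs / (depth_a + depth_b)

print(wu_palmer(depth_a=7, depth_b=8, depth_lcs=5))  # -> 0.666...
```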

9.
Chinese Word Semantic Similarity Computation Based on HowNet 2000
LI Feng, LI Fang. Journal of Chinese Information Processing, 2007, 21(3): 99-105.
A common approach to computing word semantic similarity is to use a semantic dictionary organized as a taxonomy (such as WordNet). This paper first exploits the tree-structured hierarchy of sememes in HowNet to compute the similarity between sememes, and then derives the similarity between words (concepts) from the sememe similarities. Introducing the idea of information content, the paper takes the view that a sememe's contribution to the description of a concept depends on the amount of semantic information the sememe itself carries, and divides the sememes describing a concept into direct and indirect ones. Computing Chinese word semantic similarity on this basis yields results that, to a certain extent, agree better with human intuition.
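Work in this line typically starts from the classic HowNet sememe-distance formula of Liu and Li; a sketch of that baseline, where the α value is the conventional default and an assumption here:

```python
def sememe_similarity(distance, alpha=1.6):
    """HowNet-style sememe similarity: decays with the path distance
    between two sememes in the sememe tree (sim = alpha / (alpha + d))."""
    return alpha / (alpha + distance)

print(sememe_similarity(0), sememe_similarity(4))  # identical vs. distant sememes
```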

10.
Semantic computation over words is one of the important problems in natural language processing. Current research focuses mainly on computing semantic similarity, while methods for computing semantic relatedness remain under-explored. This paper therefore proposes a model for computing word semantic relatedness that combines a semantic dictionary with corpus data. First, based on HowNet and a large-scale corpus, semantic relation extraction rules are formulated and a large number of semantic dependency relations are extracted. These relations are then stored as triples to build a semantic relation graph. Finally, drawing on graph theory, a relatedness model operating on the semantic relation graph is designed. Experimental results show that the model performs well on word semantic relatedness: on the WordSimilarity-353 dataset its Spearman rank correlation coefficient reaches 0.5358, a significant improvement for Chinese word semantic relatedness computation.
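A minimal sketch of the pipeline the abstract describes: store extracted relations as triples, build a graph, and map graph distance to a relatedness score. The toy triples and the 1/(1+d) mapping are illustrative assumptions.

```python
from collections import deque

# Toy semantic-relation triples (head, relation, tail), the storage
# form the abstract describes.
TRIPLES = [("医生", "agent", "治疗"), ("治疗", "patient", "病人"),
           ("病人", "location", "医院"), ("医生", "location", "医院")]

GRAPH = {}
for h, _, t in TRIPLES:
    GRAPH.setdefault(h, set()).add(t)
    GRAPH.setdefault(t, set()).add(h)

def relatedness(a, b):
    """Breadth-first search; shorter graph distance -> higher relatedness."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return 1.0 / (1.0 + d)
        for nxt in GRAPH.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return 0.0

print(relatedness("医生", "病人"))  # two hops -> 1/3
```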

11.
Studies of lexical-semantic relations aim to understand the mechanism of semantic memory and the organization of the mental lexicon. However, standard paradigmatic relations such as "hypernym" and "hyponym" cannot capture connections among concepts from different parts of speech. WordNet, which organizes synsets (i.e., synonym sets) using these lexical-semantic relations, is rather sparse in its connectivity: according to WordNet statistics, the average number of outgoing/incoming arcs for the hypernym/hyponym relation per synset is 1.33. Evocation, defined as how much a concept (expressed by one or more words) brings to mind another, is proposed as a new directed and weighted measure of the semantic relatedness among concepts. Commonly applied semantic relations and relatedness measures are compatible with data that reflect evocations among concepts, but evocation captures more. This work aims to provide a reliable and extendable dataset of concepts evoked by, and evoking, other concepts to enrich WordNet, the existing semantic network. We propose the use of disambiguated free word association data (first responses to verbal stimuli) to infer and collect evocation ratings. WordNet aims to represent the organization of the mental lexicon, and free word association, which psycholinguists have used to explore semantic organization, can contribute to this understanding. The work was carried out in two phases. In the first phase, it was confirmed that existing free word association norms can be converted into evocation data computationally. In the second phase, a two-stage association-annotation procedure for collecting evocation data from human judgment was compared to the state-of-the-art method, showing that introducing free association can greatly improve the quality of the evocation data generated. Evocation can be incorporated into WordNet as directed, weighted links, and benefits various natural language processing applications.

12.
Word Similarity Computation Based on Context Word Co-occurrence Vectors
The semantic similarity between words is a quantitative measure of how semantically close they are. This paper proposes a method for computing word semantic similarity in which a word's semantic knowledge is described by a vector of words co-occurring in its contexts, and the similarity between two words is then computed over their co-occurrence vectors with a min/max measure. Experimental results show that the method reflects the semantic relations between words fairly accurately and provides an effective metric for them.
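The min/max measure applied to co-occurrence vectors is the weighted Jaccard coefficient; a direct sketch over sparse vectors (the toy counts are assumptions):

```python
def minmax_similarity(u, v):
    """min/max (weighted Jaccard) similarity over sparse co-occurrence vectors:
    sum of componentwise minima over sum of componentwise maxima."""
    keys = u.keys() | v.keys()
    num = sum(min(u.get(k, 0), v.get(k, 0)) for k in keys)
    den = sum(max(u.get(k, 0), v.get(k, 0)) for k in keys)
    return num / den if den else 0.0

print(minmax_similarity({"买": 3, "开": 2}, {"买": 1, "修": 2}))  # -> 1/7
```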

13.
Semantic relatedness computation plays an important role in natural language processing tasks such as information retrieval, word sense disambiguation, automatic summarization and spelling correction. This paper computes the semantic relatedness between Chinese words with Wikipedia-based Explicit Semantic Analysis. Based on the Chinese Wikipedia, each word is represented as a weighted vector of concepts, so that computing the relatedness between two words reduces to comparing their concept vectors. Furthermore, a prior probability for each page is introduced, and the link structure between Wikipedia pages is used to adjust the values of the concept-vector components. Experimental results show that the method reaches a Spearman rank correlation of 0.52 against human-annotated gold ratings, significantly improving relatedness computation.
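A toy sketch of the Explicit Semantic Analysis representation the paper uses: each word is a weighted vector over Wikipedia concepts, and relatedness is the cosine of those vectors. The weights below are invented, and the paper's link-based prior correction is omitted.

```python
import math

# Toy inverted index: word -> {Wikipedia concept: tf-idf weight}.
ESA = {
    "银行": {"金融": 0.9, "货币": 0.7, "河流": 0.1},
    "货币": {"金融": 0.8, "货币": 0.9},
}

def esa_relatedness(w1, w2):
    """Cosine between two words' concept vectors."""
    u, v = ESA[w1], ESA[w2]
    dot = sum(u[c] * v[c] for c in u.keys() & v.keys())
    norm = lambda x: math.sqrt(sum(t * t for t in x.values()))
    return dot / (norm(u) * norm(v))

print(round(esa_relatedness("银行", "货币"), 3))
```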

14.
Assessing semantic similarity is a fundamental requirement for many AI applications. Crisp ontology (CO) is one of the knowledge representation tools that can be used for this purpose. Thanks to the development of the Semantic Web, CO-based similarity assessment has become a popular approach in recent years. However, in the presence of vague information, a CO cannot capture the uncertainty of the relations between concepts. Fuzzy ontology (FO), on the other hand, can effectively process the uncertainty of concepts and their relations. This paper proposes an approach for assessing concept similarity based on FO. The approach combines fuzzy relation composition with an edge-counting method; accordingly, the proposed measure relies on the taxonomical features of an ontology in combination with the statistical features of concepts. Furthermore, an evaluation approach for the FO-based similarity measure, named FOSE, is proposed. The proposed similarity measure is evaluated with FOSE on social network data, and the evaluation results show that it outperforms its respective CO-based measure.
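The fuzzy relation composition the abstract leans on is, in its standard form, max-min composition; a sketch with toy membership degrees (the concrete relations are assumptions):

```python
def maxmin_compose(r1, r2):
    """Max-min composition of fuzzy relations given as dict-of-dict
    membership matrices: (r1 o r2)(a, c) = max_b min(r1(a,b), r2(b,c))."""
    cols = {c for row in r2.values() for c in row}
    return {a: {c: max(min(mu, r2.get(b, {}).get(c, 0.0))
                       for b, mu in r1[a].items())
                for c in cols}
            for a in r1}

r1 = {"car": {"vehicle": 0.9, "machine": 0.6}}
r2 = {"vehicle": {"transport": 0.8}, "machine": {"transport": 0.4}}
print(maxmin_compose(r1, r2))  # {'car': {'transport': 0.8}}
```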

15.
To address the shortage of research on information-content-based algorithms among current Chinese word semantic similarity methods, this paper studies IC-based similarity computation over the HowNet knowledge model. Since HowNet represents knowledge with semantic expressions and lacks a complete concept hierarchy, abstract concepts are extracted from HowNet's semantic expressions and combined with the original sememe tree to build a HowNet sense network with multiple inheritance, which serves as the computational ontology for the IC-based approach. Over this sense network, the paper improves IC-based word similarity algorithms and proposes a new method for computing information content. Tests on the Miller & Charles (MC30) benchmark verify the feasibility of IC-based methods for computing Chinese semantic similarity, as well as the soundness of the proposed strategy and improved algorithm.

16.
Semantic-oriented service matching is one of the challenges in automatic Web service discovery. Service users may search for Web services using keywords and receive the matching services in terms of their functional profiles. A number of approaches to computing the semantic similarity between words have been developed to enhance the precision of matchmaking; they can be classified into ontology-based and corpus-based approaches. The ontology-based approaches commonly use the differentiated concept information provided by a large ontology for measuring lexical similarity with word sense disambiguation. Nevertheless, most ontologies are domain-specific and limited in lexical coverage, which limits their applicability. Corpus-based approaches, on the other hand, rely on the distributional statistics of context to represent each word as a vector and measure the distance between word vectors; however, polysemy may lead to low computational accuracy. In this paper, in order to augment the semantic information content of word vectors, we propose a multiple semantic fusion (MSF) model that generates a sense-specific vector for each word. In this model, various semantic properties of the general-purpose ontology WordNet are integrated to fine-tune the distributed word representations learned from corpora, in terms of vector combination strategies. The retrofitted word vectors are modeled as semantic vectors for estimating semantic similarity. The MSF-based similarity measure is validated against other similarity measures on multiple benchmark datasets. Experimental results on word similarity evaluation indicate that our method obtains a higher correlation coefficient with human judgment in most cases. Moreover, the proposed similarity measure is demonstrated to improve the performance of Web service matchmaking based on a single semantic resource. Accordingly, our findings provide a new method and perspective to understand and represent lexical semantics.
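One simple vector-combination strategy of the kind the MSF model aggregates: pull a corpus-trained vector toward the centroid of its WordNet-derived neighbours' vectors. The weighting and neighbour choice here are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def fuse_with_ontology(word_vec, neighbour_vecs, weight=0.5):
    """Blend a distributional vector with ontology-derived neighbours
    to sharpen its sense-specific content; returns a unit vector."""
    centroid = np.mean(neighbour_vecs, axis=0)
    fused = weight * word_vec + (1 - weight) * centroid
    return fused / np.linalg.norm(fused)

v_bank = np.array([0.2, 0.9, 0.1])
synonyms = [np.array([0.1, 0.8, 0.3]), np.array([0.0, 0.7, 0.4])]
print(fuse_with_ontology(v_bank, synonyms))
```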

17.
Semantic technologies are playing an increasingly important role as a means of advancing the capabilities of knowledge management systems. Among these advancements, researchers have successfully leveraged semantic technologies, and their accompanying techniques, to improve the representation and search capabilities of knowledge management systems. This paper introduces a further application of semantic techniques: we explore semantic relatedness as a means of facilitating the development of more "intelligent" engineering knowledge management systems. Using semantic relatedness quantifications to analyze and rank concept pairs, this novel approach exploits semantic relationships to help identify key engineering relationships, similar to those leveraged in change management systems, in product development processes. As part of this work, we review several different semantic relatedness techniques, including a meronomic technique recently introduced by the authors. We introduce an aggregate measure, termed "An Algorithm for Identifying Engineering Relationships in Ontologies" (AIERO), as a means to purposely quantify semantic relationships within product development frameworks. To assess its consistency and accuracy, AIERO is tested using three separate, independently developed ontologies. The results indicate that AIERO is capable of returning consistent rankings of concept pairs across varying knowledge frameworks. A PCB (printed circuit board) case study then highlights AIERO's unique ability to leverage semantic relationships to systematically narrow down where engineering interdependencies are likely to be found between the various elements of product development processes.

18.
Semantic fuzziness poses a challenge for word-level sentiment analysis. Some sentiment words are not only used frequently but are also highly ambiguous in meaning, and resolving this ambiguity is an urgent problem in word sentiment analysis. This paper proposes a framework combining rules and statistics to analyze the sentiment orientation of strongly ambiguous words. The framework extracts effective features from a word's neighboring context, generates decision rules via rough-set attribute reduction, and falls back on a naive Bayes classifier for cases the rules cannot recognize. Taking the strongly ambiguous word "好" (good) as an example, experiments with the framework on several corpora show that it effectively resolves the semantic ambiguity of "好" and improves sentiment analysis.
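A skeleton of the rule-then-statistics cascade the framework describes: decision rules mined via rough-set attribute reduction fire first, and a naive Bayes classifier handles whatever they cannot cover. The rules and the classifier hook below are placeholders, not the paper's actual resources.

```python
def classify_polarity(features, rules, nb_fallback):
    """features: set of context features of the target word;
    rules: (premise_set, label) pairs from rough-set attribute reduction;
    nb_fallback: naive Bayes classifier invoked when no rule fires."""
    for premise, label in rules:
        if premise <= features:      # every condition of the rule holds
            return label
    return nb_fallback(features)     # statistics take over when rules are silent

rules = [({"天气", "好"}, "positive"), ({"好", "不"}, "negative")]
print(classify_polarity({"天气", "好", "真"}, rules, lambda f: "neutral"))
```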

19.
Measuring Word Semantic Similarity Based on Evidence Theory
Word semantic similarity measurement has long been a classic and active problem in natural language processing, with important implications for word sense disambiguation, machine translation, ontology mapping, computational linguistics and other applications. This paper proposes a novel approach to measuring word semantic similarity by combining evidence theory with a knowledge base. First, evidence is gathered from the general-purpose ontology WordNet; next, the plausibility of the evidence is analyzed with scatter plots; then, basic belief assignment functions are generated by statistics and piecewise linear interpolation; finally, evidence-conflict handling, importance weighting and the Dempster-Shafer combination rule are combined to fuse the evidence into a global basic belief assignment, on the basis of which word semantic similarity is quantified. On the RG(65) dataset, under 5-fold cross-validation, the correlation between the algorithm's judgments and human judgments reaches 0.912, 0.4 percentage points higher than the current best method PS and 7% to 13% higher than the classic algorithms reLHS, distJC, simLC, simL and simR. Good results are also obtained on MC(30) and WordSim353, with correlations of 0.915 and 0.941 respectively, and the algorithm's running efficiency is comparable to that of the classic algorithms. The experimental results show that applying evidence theory to word semantic similarity is reasonable and effective.
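The fusion step rests on the Dempster-Shafer combination rule; a direct sketch over mass functions with frozenset focal elements (the masses below are invented for illustration):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal elements
    and renormalize by 1 - K, where K is the total conflict."""
    fused, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

HIGH, LOW = frozenset({"high_sim"}), frozenset({"low_sim"})
m1 = {HIGH: 0.7, frozenset({"high_sim", "low_sim"}): 0.3}  # partial ignorance
m2 = {HIGH: 0.6, LOW: 0.4}
print(dempster_combine(m1, m2))  # belief mass concentrates on high_sim
```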

20.
The core problem in off-topic essay detection is text similarity computation. Traditional text similarity methods are generally based on the vector space model: texts are represented as high-dimensional vectors, and the similarity between texts is then computed. Such methods consider only the terms occurring in the text (the bag-of-words model) and ignore the terms' semantic information. This paper proposes a new text similarity method based on word expansion, combining the bag-of-words model with distributed word representations: words that are semantically similar, in the distributed-representation vector space, to the terms occurring in a text are added to the text's representation, thereby expanding its vocabulary, and similarity is then computed over the expanded texts. The method is applied to off-topic detection for English essays, a detection system is built, and it is tested on a real dataset. Experimental results show that the system effectively identifies off-topic essays, clearly outperforming the baseline system.
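A minimal sketch of the word-expansion step: each word in the essay's bag of words contributes its nearest neighbours in embedding space before the prompt-essay similarity is computed. The embeddings and top-k value are toy assumptions.

```python
import numpy as np

EMB = {  # toy distributed representations
    "ocean": np.array([0.9, 0.1]), "sea": np.array([0.85, 0.2]),
    "ship": np.array([0.6, 0.5]), "tax": np.array([0.05, 0.95]),
}

def expand_bow(words, top_k=1):
    """Add each known word's top_k cosine neighbours to the bag of words."""
    vocab = list(EMB)
    mat = np.stack([EMB[w] / np.linalg.norm(EMB[w]) for w in vocab])
    expanded = set(words)
    for w in words:
        if w not in EMB:
            continue
        sims = mat @ (EMB[w] / np.linalg.norm(EMB[w]))
        ranked = [vocab[i] for i in np.argsort(-sims) if vocab[i] != w]
        expanded.update(ranked[:top_k])
    return expanded

print(expand_bow({"ocean"}))  # -> {'ocean', 'sea'}
```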
