Similar Documents
19 similar documents found (search time: 156 ms)
1.
Chinese Semantic Relation Extraction Based on a Unified Syntactic and Entity-Semantic Tree    Cited in total: 1 (self-citations: 0, by others: 1)
This paper proposes a Chinese entity semantic relation extraction method based on a convolution tree kernel. By adding entity semantic information, such as entity type, mention type, and GPE role, to the structured information of relation instances, the method builds a unified syntactic and entity-semantic relation tree that captures both structural and entity semantic information, improving the performance of Chinese semantic relation extraction. Experiments on relation detection and relation extraction over the ACE RDC 2005 Chinese benchmark corpus show that the method significantly improves Chinese semantic relation extraction, with a best F-score of 67.0 for major-type extraction, indicating that structured syntactic information and entity semantic information are complementary for Chinese semantic relation extraction.
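A minimal Python sketch of the convolution tree kernel idea behind this approach (a simplified Collins-Duffy recursion over nltk trees; the toy relation trees, node labels such as E1-PER, and the decay factor are invented for illustration and do not reproduce the paper's unified tree construction):

```python
# Sketch of a Collins-Duffy convolution tree kernel over nltk trees.
# It only illustrates the kernel; the paper's entity-semantic node
# decoration and tree construction are not reproduced here.
from nltk import Tree

def tree_kernel(t1, t2, lam=0.5):
    """Sum of decayed common-subtree counts over all node pairs."""
    def C(n1, n2):
        # Nodes match only if they expand with the same production.
        prod1 = (n1.label(), tuple(c.label() if isinstance(c, Tree) else c for c in n1))
        prod2 = (n2.label(), tuple(c.label() if isinstance(c, Tree) else c for c in n2))
        if prod1 != prod2:
            return 0.0
        score = lam
        for c1, c2 in zip(n1, n2):
            if isinstance(c1, Tree) and isinstance(c2, Tree):
                score *= 1.0 + C(c1, c2)
        return score

    return sum(C(a, b) for a in t1.subtrees() for b in t2.subtrees())

# Toy relation instances with entity labels folded into node labels.
t1 = Tree.fromstring("(REL (E1-PER 张三) (VP (V 访问) (E2-GPE 北京)))")
t2 = Tree.fromstring("(REL (E1-PER 李四) (VP (V 访问) (E2-GPE 上海)))")
print(tree_kernel(t1, t2))
```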

2.
Entity linking is the process of correctly linking entity mentions in text to the corresponding entities in a knowledge base, and the accuracy of named entity disambiguation directly determines the accuracy of entity linking. For named entity disambiguation in Chinese entity linking, this paper proposes a solution that fuses multiple features. First, with Chinese Wikipedia as the supporting knowledge base, several semantic features are extracted from the context of the entity mention and from the Wikipedia description of each candidate entity, and semantic similarities are computed. These similarities are then fused into a graph model, and the stationary distribution of the graph is computed with the PageRank algorithm. Finally, the candidate entities are ranked and the top-1 entity is taken as the disambiguated linking result. Compared with a baseline system that disambiguates using only name-string features, the F-score improves by 9%, and it also exceeds the F-scores reported by other entity linking experiments, showing that the method achieves a good overall result for named entity disambiguation in Chinese entity linking.
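A minimal sketch of the graph-plus-PageRank step described above, assuming networkx; the bag-of-words similarity, the toy mention/candidate data, and the smoothing constant are placeholders rather than the paper's actual features:

```python
# Sketch: connect mentions and candidate entities with similarity-weighted
# edges, run PageRank, and keep the top-ranked candidate per mention.
import networkx as nx

def overlap_sim(a, b):
    # Placeholder similarity between context words and a candidate's description.
    common = set(a) & set(b)
    return len(common) / ((len(a) * len(b)) ** 0.5 + 1e-9)

mention_ctx = {"苹果": ["发布", "手机", "乔布斯"]}
candidates = {
    "苹果": {
        "苹果公司": ["科技", "手机", "乔布斯", "公司"],
        "苹果(水果)": ["水果", "营养", "种植"],
    }
}

G = nx.Graph()
for mention, cands in candidates.items():
    for cand, desc in cands.items():
        # Small smoothing so every candidate keeps a nonzero edge weight.
        w = overlap_sim(mention_ctx[mention], desc) + 1e-4
        G.add_edge(("M", mention), ("E", cand), weight=w)

rank = nx.pagerank(G, weight="weight")
for mention in candidates:
    best = max(candidates[mention], key=lambda c: rank[("E", c)])
    print(mention, "->", best)
```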

3.
季元叶 《福建电脑》2010,26(6):78-79
Semantic relation extraction between entities is an important step in information extraction; its goal is to identify the semantic relations between entity pairs in text and classify them. This paper improves Chinese entity relation extraction by exploring effective basic linguistic features, such as lexical features, entity features, and base-chunk features, and by applying a support vector machine (SVM) learning method, so that the precision and recall of relation extraction improve and, in turn, the F-scores of relation detection, major-type extraction, and subtype extraction rise.
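A rough sketch of feature-based relation classification with an SVM in the spirit of the lexical and entity features mentioned here, using scikit-learn; the feature names and toy instances are invented, not the paper's feature set:

```python
# Sketch: turn each relation instance into a feature dict and train a
# linear SVM over the vectorized features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def extract_features(inst):
    e1, e2, between = inst["e1"], inst["e2"], inst["between"]
    return {
        "e1_type": e1["type"], "e2_type": e2["type"],     # entity features
        "e1_head": e1["head"], "e2_head": e2["head"],     # lexical features
        "words_between": "_".join(between),               # context feature
        "num_between": len(between),
    }

train = [
    {"e1": {"type": "PER", "head": "张三"}, "e2": {"type": "ORG", "head": "微软"},
     "between": ["任职", "于"], "label": "EMP-ORG"},
    {"e1": {"type": "PER", "head": "李四"}, "e2": {"type": "GPE", "head": "北京"},
     "between": ["居住", "在"], "label": "PHYS"},
]

X = [extract_features(t) for t in train]
y = [t["label"] for t in train]
clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
print(clf.predict([extract_features(train[0])]))
```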

4.
Entity relation extraction is one of the important research topics in information extraction. To address the shortcomings of existing methods on complex text, this paper proposes an entity relation extraction method for complex Chinese text. Drawing on the grammatical characteristics of Chinese, it defines seven heuristic rules for extracting relation feature sequences, and classifies and labels relation types by combining a semantic sequence kernel with the KNN machine learning algorithm. On entity relation extraction for two subtypes defined in the ACE evaluation, the average F-score reaches 76%, clearly higher than traditional methods based on feature vectors or the shortest dependency path kernel.
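As a rough illustration of combining a sequence similarity with KNN voting (a simple LCS ratio stands in for the paper's semantic sequence kernel; the feature sequences and labels are invented):

```python
# Sketch: 1-NN relation classification over feature sequences, using the
# longest-common-subsequence ratio as a stand-in similarity.
def lcs_len(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def seq_sim(a, b):
    return lcs_len(a, b) / max(len(a), len(b))

train = [
    (["PER", "任职", "于", "ORG"], "EMP-ORG"),
    (["PER", "出生", "在", "GPE"], "PER-SOC"),
]

def knn_predict(seq, k=1):
    ranked = sorted(train, key=lambda t: seq_sim(seq, t[0]), reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

print(knn_predict(["PER", "就职", "于", "ORG"]))  # -> EMP-ORG
```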

5.
An Entity Relation Extraction Method Based on Wikipedia and Pattern Clustering    Cited in total: 1 (self-citations: 0, by others: 1)
This paper proposes a method based on Wikipedia and pattern clustering that aims to extract high-precision Chinese relational entity pairs from open text. It is the first to obtain relation instances by mapping entities from the manually built knowledge system HowNet (知网) to Wikipedia, and it makes full use of Wikipedia's structured properties; this handles entity recognition well and yields accurate, salient sentence instances. Furthermore, a salience hypothesis and a keyword hypothesis are proposed, on which a keyword-based classification and hierarchical clustering algorithm is built, markedly improving the reliability of the extracted patterns. Experimental results show that the method effectively improves the quality of sentence instances and patterns and achieves good extraction performance.

6.
Research on Tree-Kernel-Based Entity Semantic Relation Extraction    Cited in total: 3 (self-citations: 2, by others: 3)
This paper describes an improved tree-kernel-based method for entity semantic relation extraction that raises extraction performance by adding entity semantic information to, and removing redundant information from, the structured representation of relation instances. Starting from the shortest-path enclosed tree, the method first adds entity-related semantic information such as entity type and mention type, then prunes the tree to remove redundant modifiers and redundant coordination, expands possessive structures, and finally generates the entity semantic relation instance. Experiments on relation detection and on extraction of the seven major relation types over the ACE RDC 2004 benchmark corpus show that the method considerably improves entity semantic relation recognition and classification, with F-scores of 79.1% and 71.9%, respectively.

7.
For nested named entity recognition, span-based frameworks have been proposed for neural network models: span seeds are generated first and then filtered by a classifier. Classifying span regions in isolation, however, loses global semantic information. Moreover, in Chinese nested named entity recognition, because there are no word delimiters and Chinese is highly context-dependent, span regions cannot make effective use of word-boundary features, which hurts recognition performance. To address these problems, this paper proposes CEL, a Chinese nested named entity recognition model that incorporates entity labels. After generating span seeds, the model embeds entity labels at the start and end positions of the span region in the original sentence and feeds the result to the classifier, so that it learns the semantic dependencies between span boundaries and their context more effectively. Experiments on the ACE2005 Chinese dataset show that the CEL model reaches a good F1 level.
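A minimal sketch of the span-marking idea, i.e., inserting entity label tokens at the start and end of a candidate span before classification; the marker format and the toy sentence are assumptions, and the downstream span classifier is omitted:

```python
# Sketch: mark a candidate span in the original sentence with label tokens
# so the classifier sees both the span boundaries and the full context.
def mark_span(tokens, start, end, label):
    """Insert <label> ... </label> markers around tokens[start:end+1]."""
    return (tokens[:start]
            + [f"<{label}>"] + tokens[start:end + 1] + [f"</{label}>"]
            + tokens[end + 1:])

sentence = list("北京大学坐落在北京")
print("".join(mark_span(sentence, 0, 3, "ORG")))  # outer candidate span
print("".join(mark_span(sentence, 0, 1, "GPE")))  # nested candidate span
```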

8.
Extracting Family Relations from Wikipedia with Self-Supervised Learning    Cited in total: 1 (self-citations: 0, by others: 1)
Traditional supervised relation extraction needs a large amount of manually annotated training data, while semi-supervised methods suffer from low recall. This paper therefore proposes a self-supervised learning method for extracting family relations between people. The method first maps the semi-structured information of Chinese Wikipedia, namely family-relation triples, onto free text, automatically generating labeled training data; it then uses a feature-based relation extraction method to obtain family relations between people from Chinese Wikipedia text. Experiments on a manually annotated family-relation-network test set show that the method outperforms bootstrapping, reaching an F1 of 77%, which indicates that self-supervised learning can extract family relations between people fairly effectively.
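A toy sketch of the self-supervised (distant) labelling step, projecting known family-relation triples onto sentences that contain both arguments; the triples and sentences below are invented, not Wikipedia data:

```python
# Sketch: generate labeled relation instances by matching known triples
# against free text that mentions both entities.
triples = [("曹丕", "父亲", "曹操"), ("曹植", "父亲", "曹操")]

sentences = [
    "曹丕是曹操的长子,后来称帝。",
    "曹操率军北征乌桓。",
]

labeled = []
for subj, rel, obj in triples:
    for sent in sentences:
        if subj in sent and obj in sent:
            labeled.append({"sentence": sent, "e1": subj, "e2": obj, "label": rel})

for inst in labeled:
    print(inst)
```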

9.
Named entity recognition is an important fundamental task in natural language processing. Traditional methods based on statistical learning models depend heavily on feature engineering, whose design requires substantial manual effort and expert knowledge; moreover, most existing methods treat Chinese named entity recognition as a character sequence labeling problem and rely on local character tags to delimit entity boundaries. To reduce dependence on hand-crafted features and avoid the shortcomings of character-level sequence labeling, this paper explores segment-level Chinese named entity recognition with neural networks. A deep segment neural network architecture learns features automatically and, using the segment information, assigns a label to each segment as a whole, performing entity boundary detection and classification at the same time. On the MSRA dataset, this segment-level neural method reaches an overall F1 of 90.44% for person, location, and organization names.

10.
Text classification, a core problem in text mining, has become an important topic in natural language processing. Short-text classification, with its sparsity, real-time requirements, and irregular style, is one of the pressing problems in text classification. In some scenarios short texts carry a large amount of implicit semantics, which makes mining implicit semantic features within such limited text challenging. Existing approaches to short-text classification mainly use traditional machine learning or deep learning algorithms, but these models are complex to build, labor-intensive, and not very efficient. In addition, short texts contain little effective information and are highly colloquial, placing high demands on a model's feature learning ability. To address these problems, the KAeRCNN model is proposed; it builds on TextRCNN and integrates knowledge awareness with a dual attention mechanism. The knowledge-aware component includes knowledge graph entity linking and knowledge graph embedding, introducing external knowledge to obtain semantic features, while the dual attention mechanism improves the efficiency with which the model extracts the effective information in short texts. Experimental results show that KAeRCNN clearly outperforms traditional machine learning algorithms in classification accuracy, F1, and practical effectiveness. The performance and adaptability of the algorithm were verified: accuracy reaches 95.54% and F1 reaches 0.901; compared with four traditional machine learning algorithms, accuracy improves by about 14% on average and F1 by about 13%. Compared with TextRCNN, the KAeRCNN model improves accuracy by about 3%...

11.
Entity linking is the process of linking ambiguous entity mentions in text to the corresponding entities in a knowledge base. This paper first analyzes entity linking systems and identifies their core problem: computing the semantic similarity between the mention text and the candidate entities. It then proposes a graph-model-based method for computing the similarity of Wikipedia concepts and applies it to the semantic similarity between mention text and candidate entities. On this basis, an entity linking system built on a learning-to-rank framework is designed. Experimental results show that, compared with traditional approaches, the new similarity measure captures the semantic similarity between mention text and candidate entities more effectively, and the entity linking system that incorporates multiple features reaches state-of-the-art performance.

12.
With the development of mobile technology, users' browsing habits have gradually shifted from pure information retrieval to active recommendation. Mapping users' interests to web content through classification has become increasingly difficult as the volume and variety of web pages grow. Some large news portals and social media companies hire more editors to label new concepts and words, and use computing servers with larger memory to handle massive document classification based on traditional supervised or semi-supervised machine learning methods. This paper provides an optimized classification algorithm for massive web page classification using semantic networks such as Wikipedia and WordNet. We use a Wikipedia data set and initialize a few category entity words as class words. A weight estimation algorithm based on the depth and breadth of the Wikipedia network is used to calculate the class weight of all Wikipedia entity words. A kinship-relation association based on entity content similarity is proposed to mitigate the imbalance that arises when a category node inherits probability from multiple parents. The keywords in a web page are extracted from the title and the main text by N-gram matching against Wikipedia entity words, and a Bayesian classifier is used to estimate the page class probability. Experimental results show that the proposed method achieves good scalability, robustness, and reliability on massive web page collections.
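A minimal sketch of the Bayesian scoring step, assuming per-class weights for Wikipedia entity words have already been computed (the weight table, priors, and class names below are invented; in the paper the weights come from the depth and breadth of the Wikipedia category network):

```python
# Sketch: naive-Bayes-style page classification from keywords matched
# against Wikipedia entity words, each carrying per-class weights.
import math

priors = {"Technology": 0.5, "Sports": 0.5}          # toy class priors
word_class_weight = {                                  # toy per-word weights
    "smartphone": {"Technology": 0.9, "Sports": 0.1},
    "league":     {"Technology": 0.1, "Sports": 0.9},
    "battery":    {"Technology": 0.8, "Sports": 0.2},
}

def classify(keywords):
    scores = {}
    for cls, prior in priors.items():
        log_score = math.log(prior)
        for w in keywords:
            if w in word_class_weight:
                log_score += math.log(word_class_weight[w][cls])
        scores[cls] = log_score
    return max(scores, key=scores.get)

print(classify(["smartphone", "battery"]))  # -> Technology
```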

13.
Automatic Entity Relation Extraction    Cited in total: 36 (self-citations: 7, by others: 36)
Entity relation extraction is an important research topic in information extraction. This paper applies two feature-vector-based machine learning algorithms, Winnow and support vector machines (SVM), to entity relation extraction on the training data of the 2004 ACE (Automatic Content Extraction) evaluation. With appropriate feature selection for both algorithms, the best extraction results are obtained when the two words to the left and right of each entity are chosen as features, giving weighted-average F-scores of 73.08% for Winnow and 73.27% for SVM. With the same feature set, different learning algorithms thus differ little in final performance for entity relation recognition, so when extracting entity relations automatically, effort should be concentrated on finding good features.

14.
In our work, we review and empirically evaluate five different raw methods of text representation that allow automatic processing of Wikipedia articles. The main contribution of the article—evaluation of approaches to text representation for machine learning tasks—indicates that the text representation is fundamental for achieving good categorization results. The analysis of the representation methods creates a baseline that cannot be compensated for even by sophisticated machine learning algorithms. It confirms the thesis that proper data representation is a prerequisite for achieving high-quality results of data analysis. Evaluation of the text representations was performed within the Wikipedia repository by examination of classification parameters observed during automatic reconstruction of human-made categories. For that purpose, we use a classifier based on a support vector machines method, extended with multilabel and multiclass functionalities. During classifier construction we observed parameters such as learning time, representation size, and classification quality that allow us to draw conclusions about text representations. For the experiments presented in the article, we use data sets created from Wikipedia dumps. We describe our software, called Matrix’u, which allows a user to build computational representations of Wikipedia articles. The software is the second contribution of our research, because it is a universal tool for converting Wikipedia from a human-readable form to a form that can be processed by a machine. Results generated using Matrix’u can be used in a wide range of applications that involve usage of Wikipedia data.  相似文献   

15.
Inferring the semantic types of entity mentions in a sentence is a necessary yet challenging task. Most existing methods employ a very coarse-grained type taxonomy, which is too general and not exact enough for many tasks. However, the performance of these methods drops sharply when the taxonomy is extended to a fine-grained one with several hundred types. In this paper, we introduce a hybrid neural network model for type classification of entity mentions with a fine-grained taxonomy. Our model has four components, namely the entity mention component, the context component, the relation component, and the already-known-type component, which extract features from the target entity mention, its context, its relations, and the already known types of entity mentions in the surrounding context, respectively. The features learned by the four components are concatenated and fed into a softmax layer to predict the type distribution. We carried out extensive experiments to evaluate the proposed model. Experimental results demonstrate that our model achieves state-of-the-art performance on the FIGER dataset. Moreover, we extracted larger datasets from Wikipedia and DBpedia. On the larger datasets, our model achieves performance comparable to the state-of-the-art methods with the coarse-grained type taxonomy, but performs much better than those methods with the fine-grained type taxonomy in terms of micro-F1, macro-F1, and weighted-F1.
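A minimal PyTorch sketch of the four-component concatenation followed by a softmax layer; the feature dimensions, the number of types, and the random inputs are placeholders, and the four encoders themselves are omitted:

```python
# Sketch: concatenate features from the mention, context, relation, and
# already-known-type components and predict a type distribution.
import torch
import torch.nn as nn

class HybridTypeClassifier(nn.Module):
    def __init__(self, d_mention=64, d_context=128, d_relation=32,
                 d_known=32, n_types=100):  # n_types is a placeholder
        super().__init__()
        d_total = d_mention + d_context + d_relation + d_known
        self.out = nn.Linear(d_total, n_types)

    def forward(self, mention_vec, context_vec, relation_vec, known_vec):
        h = torch.cat([mention_vec, context_vec, relation_vec, known_vec], dim=-1)
        return torch.softmax(self.out(h), dim=-1)

model = HybridTypeClassifier()
batch = 4
probs = model(torch.randn(batch, 64), torch.randn(batch, 128),
              torch.randn(batch, 32), torch.randn(batch, 32))
print(probs.shape)  # torch.Size([4, 100])
```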

16.
17.
Using Wikipedia and existing named entity resources, this work proposes a membership-degree calculation method for Wikipedia categories and builds a named entity instance set of high quality and large scale through five steps: matching, calculation, filtering, expansion, and denoising. Experiments on English Wikipedia data show that the set of person-name instances obtained automatically with the membership-degree method is nearly 10 times larger than the person-name instances extracted from DBpedia. Manual inspection of the extracted instances across different membership-degree intervals shows that the top 15,000 Wikipedia categories are extracted with an accuracy of about 99%, so the method can effectively support the expansion of named entity class instances.

18.
There are several commercial financial expert systems that can be used for trading on the stock exchange. However, their predictions are somewhat limited since they primarily rely on time-series analysis of the market. With the rise of the Internet, new forms of collective intelligence (e.g. Google and Wikipedia) have emerged, representing a new generation of “crowd-sourced” knowledge bases. They collate information on publicly traded companies, while capturing web traffic statistics that reflect the public’s collective interest. Google and Wikipedia have become important “knowledge bases” for investors. In this research, we hypothesize that combining disparate online data sources with traditional time-series and technical indicators for a stock can provide a more effective and intelligent daily trading expert system. Three machine learning models, decision trees, neural networks and support vector machines, serve as the basis for our “inference engine”. To evaluate the performance of our expert system, we present a case study based on the AAPL (Apple NASDAQ) stock. Our expert system had an 85% accuracy in predicting the next-day AAPL stock movement, which outperforms the reported rates in the literature. Our results suggest that: (a) the knowledge base of financial expert systems can benefit from data captured from nontraditional “experts” like Google and Wikipedia; (b) diversifying the knowledge base by combining data from disparate sources can help improve the performance of financial expert systems; and (c) the use of simple machine learning models for inference and rule generation is appropriate with our rich knowledge database. Finally, an intelligent decision making tool is provided to assist investors in making trading decisions on any stock, commodity or index.  相似文献   
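A toy scikit-learn sketch of the general idea of combining technical indicators with Google/Wikipedia attention signals in a single classifier; the feature values are random placeholders rather than real AAPL data, and the decision tree here is just one of the three model families mentioned:

```python
# Sketch: predict next-day movement from technical indicators plus
# web-attention signals with a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_days = 200

X = np.column_stack([
    rng.normal(size=n_days),   # e.g. 10-day momentum
    rng.normal(size=n_days),   # e.g. RSI
    rng.normal(size=n_days),   # Google search-volume change
    rng.normal(size=n_days),   # Wikipedia page-view change
])
y = rng.integers(0, 2, size=n_days)  # 1 = price up next day, 0 = down

clf = DecisionTreeClassifier(max_depth=4).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```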

19.
This paper presents a method for extracting aligned sentences from comparable Wikipedia corpora. After obtaining the Chinese and English Wikipedia database dumps and applying some processing, a local Wikipedia corpus database is reconstructed. On this basis, lexical statistics are collected, a named entity dictionary is built, and bilingual comparable texts are obtained through Wikipedia's own cross-language alignment mechanism. During annotation, the characteristics of the Wikipedia corpus are analyzed and used to guide the design of a set of features, and a three-class scheme of "aligned", "partially aligned", and "not aligned" is adopted; finally, an SVM classifier is applied in sentence alignment experiments on the Wikipedia corpus and on a third-party parallel corpus. The experiments show that for comparable corpora with relatively well-formed language the classifier identifies aligned sentences with 82% accuracy, and for the parallel corpus it reaches 92%, which shows that the method is feasible and effective.
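A minimal sketch of the three-way sentence-pair classification with an SVM; the two features (length ratio and bilingual-lexicon overlap), the toy lexicon, and the example pairs are invented and are not the paper's feature set:

```python
# Sketch: classify candidate zh-en sentence pairs as aligned / partially
# aligned / not aligned with a linear SVM over simple pair features.
from sklearn.svm import SVC

def pair_features(zh, en, lexicon):
    hits = sum(1 for w, t in lexicon if w in zh and t in en)  # lexicon overlap
    return [len(zh) / max(len(en.split()), 1), hits]          # length ratio, hits

lexicon = [("中国", "China"), ("首都", "capital"), ("北京", "Beijing")]

pairs = [
    ("北京是中国的首都。", "Beijing is the capital of China.", "aligned"),
    ("北京是中国的首都。", "Beijing hosted the Olympics in 2008.", "partial"),
    ("今天天气很好。", "Beijing is the capital of China.", "not_aligned"),
]

X = [pair_features(zh, en, lexicon) for zh, en, _ in pairs]
y = [label for _, _, label in pairs]
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([pair_features("中国的首都是北京。",
                                 "The capital of China is Beijing.", lexicon)]))
```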

