Similar Documents
20 similar documents found.
1.
This paper describes the organization and results of the automatic keyphrase extraction task held at the Workshop on Semantic Evaluation 2010 (SemEval-2010). The keyphrase extraction task was specifically geared towards scientific articles. Systems were automatically evaluated by matching their extracted keyphrases against those assigned by the authors as well as the readers to the same documents. We outline the task, present the overall ranking of the submitted systems, and discuss the improvements to the state-of-the-art in keyphrase extraction.

2.
An automatic keyphrase extraction system for scientific documents
Automatic keyphrase extraction techniques play an important role for many tasks including indexing, categorizing, summarizing, and searching. In this paper, we develop and evaluate an automatic keyphrase extraction system for scientific documents. Compared with previous work, our system concentrates on two important issues: (1) more precise location of potential keyphrases: a new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of the candidate set by about 75% without increasing the computational complexity; (2) overlap elimination for the output list: when a phrase and its sub-phrases coexist as candidates, an inverse document frequency feature is introduced for selecting the proper granularity. Additional new features are added for phrase weighting. Experiments based on real-world datasets were carried out to evaluate the proposed system. The results show the efficiency and effectiveness of the refined candidate set and demonstrate that the new features improve the accuracy of the system. The overall performance of our system compares favorably with other state-of-the-art keyphrase extraction systems.
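The core-word-expansion idea above can be sketched as follows. This is an illustrative assumption, not the paper's actual algorithm: the function name, stopword list, and greedy expansion policy are all invented for the example. Frequent non-stopwords serve as "cores", and each occurrence is expanded into neighboring candidate phrases.

```python
from collections import Counter

STOPWORDS = {"the", "of", "a", "an", "in", "for", "and", "to", "is"}

def core_word_expansion(tokens, top_k=3, max_len=3):
    """Generate candidate phrases by expanding around frequent 'core' words.

    A core word is a frequent non-stopword; each of its occurrences is
    expanded left, then right, with adjacent non-stopwords up to max_len
    tokens, emitting every intermediate span as a candidate.
    """
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    cores = {w for w, _ in counts.most_common(top_k)}
    candidates = set()
    for i, tok in enumerate(tokens):
        if tok not in cores:
            continue
        lo, hi = i, i + 1
        candidates.add(tok)
        # expand left while the neighbor is a content word
        while hi - lo < max_len and lo > 0 and tokens[lo - 1] not in STOPWORDS:
            lo -= 1
            candidates.add(" ".join(tokens[lo:hi]))
        # then expand right the same way
        while hi - lo < max_len and hi < len(tokens) and tokens[hi] not in STOPWORDS:
            hi += 1
            candidates.add(" ".join(tokens[lo:hi]))
    return candidates
```

Because only spans containing a core word are emitted, the candidate set stays far smaller than enumerating all n-grams, which is the spirit of the reported 75% reduction.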

3.
Extracting keywords directly from a single document cannot meet the precision requirements of keyword extraction, and existing work on neighbor-information-based keyword extraction is time-consuming. This paper therefore proposes building neighbor networks from co-authorship relations in scientific literature, and extracting keywords by jointly using the information in these neighbor networks and the content of the document itself. Building on this, it further proposes using frequently co-occurring word pairs from domain knowledge to extract higher-quality keywords. Experiments show that the proposed methods perform well.
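A minimal sketch of the neighbor-network idea: mix in-document word frequencies with frequencies from documents by co-authors. The function name, the linear mixing, and the `alpha` weight are assumptions for illustration, not the paper's method.

```python
from collections import Counter

def neighbor_boosted_scores(doc_tokens, neighbor_docs, alpha=0.7):
    """Score words by mixing the document's own term frequencies with
    frequencies in co-author 'neighbor' documents.

    alpha weights the document itself; (1 - alpha) weights the neighbors.
    """
    local = Counter(doc_tokens)
    neighbor = Counter(t for d in neighbor_docs for t in d)
    n_local = sum(local.values()) or 1
    n_nb = sum(neighbor.values()) or 1
    words = set(local) | set(neighbor)
    return {
        w: alpha * local[w] / n_local + (1 - alpha) * neighbor[w] / n_nb
        for w in words
    }
```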

4.
While several automatic keyphrase extraction (AKE) techniques have been developed and analyzed, there is little consensus on the definition of the task and a lack of overview of the effectiveness of different techniques. Proper evaluation of keyphrase extraction requires large test collections with multiple opinions, currently not available for research. In this paper, we (i) present a set of test collections derived from various sources with multiple annotations (which we also refer to as opinions in the remainder of the paper) for each document, (ii) systematically evaluate keyphrase extraction using several supervised and unsupervised AKE techniques, and (iii) experimentally analyze the effects of disagreement on AKE evaluation. Our newly created set of test collections spans different types of topical content from general news and magazines, and is annotated with multiple annotations per article by a large annotator panel. Our annotator study shows that for a given document there seems to be a large disagreement on the preferred keyphrases, suggesting the need for multiple opinions per document. A first systematic evaluation of ranking and classification of keyphrases using both unsupervised and supervised AKE techniques on the test collections shows a superior effectiveness of supervised models, even for a low annotation effort and with basic positional and frequency features, and highlights the importance of a suitable keyphrase candidate generation approach. We also study the influence of multiple opinions, training data and document length on evaluation of keyphrase extraction. Our new test collection for keyphrase extraction is one of the largest of its kind and will be made available to stimulate future work to improve reliable evaluation of new keyphrase extractors.
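One simple way to quantify the annotator disagreement studied above is set-overlap F1 between two annotators' keyphrase sets. This particular measure is an illustrative assumption, not necessarily the one the paper uses.

```python
def opinion_f1(set_a, set_b):
    """F1 overlap between two annotators' keyphrase sets.

    Treats one set as 'gold' and the other as 'predicted'; the score is
    symmetric, so the choice does not matter. Low values indicate the
    kind of disagreement that motivates multiple opinions per document.
    """
    if not set_a or not set_b:
        return 0.0
    inter = len(set_a & set_b)
    if inter == 0:
        return 0.0
    p = inter / len(set_b)
    r = inter / len(set_a)
    return 2 * p * r / (p + r)
```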

5.
Keyphrase extraction from social media is a crucial and challenging task. Previous studies usually focus on extracting keyphrases that provide the summary of a corpus. However, they do not take users’ specific needs into consideration. In this paper, we propose a novel three-stage model to learn a keyphrase set that represents or is related to a particular topic. Firstly, a phrase mining algorithm is applied to segment the documents into human-interpretable phrases. Secondly, we propose a weakly supervised model to extract candidate keyphrases, which uses a few pre-specified seed keyphrases to guide the model. The model consequently makes the extracted keyphrases more specific and related to the seed keyphrases (which reflect the user’s needs). Finally, to further identify the implicitly related phrases, the PMI-IR algorithm is employed to obtain the synonyms of the extracted candidate keyphrases. We conducted experiments on two publicly available datasets from news and Twitter. The experimental results demonstrate that our approach outperforms the state-of-the-art baselines and has the potential to extract high-quality task-oriented keyphrases.
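The PMI step can be sketched with document-level co-occurrence counts standing in for the web hit counts of the original PMI-IR; the function and the corpus representation (a list of token sets) are assumptions for illustration.

```python
import math

def pmi(corpus, w1, w2):
    """Pointwise mutual information of two words over a document corpus.

    PMI(w1, w2) = log2( P(w1, w2) / (P(w1) * P(w2)) ), where probabilities
    are estimated from document-level occurrence counts. High PMI suggests
    the words are related (e.g. near-synonyms in this setting).
    """
    n = len(corpus)
    c1 = sum(1 for doc in corpus if w1 in doc)
    c2 = sum(1 for doc in corpus if w2 in doc)
    c12 = sum(1 for doc in corpus if w1 in doc and w2 in doc)
    if c1 == 0 or c2 == 0 or c12 == 0:
        return float("-inf")  # never co-occur: maximally unrelated
    return math.log2((c12 / n) / ((c1 / n) * (c2 / n)))
```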

6.
To address the problem that existing graph-based keyword extraction methods fail to effectively integrate the latent semantic relations between words in a text sequence, this paper proposes EPRank, a graph-based keyword extraction algorithm that fuses word embeddings with positional information. A word-embedding model learns a representation vector for each word in the target document; these word vectors, which reflect the latent semantic relations between words, are combined with positional features and integrated into the PageRank scoring model; finally, the top-ranked words or phrases are selected as the document's keywords. Experimental results show that the proposed EPRank method outperforms five existing keyword extraction methods on all evaluation metrics on the KDD and SIGIR datasets.
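A hedged sketch of the graph-scoring idea: PageRank over a word graph whose edges are weighted (e.g. by embedding similarity) and whose teleport term is biased by position. The exact fusion EPRank uses may differ; everything here is illustrative.

```python
def weighted_pagerank(nodes, edge_weight, bias, d=0.85, iters=50):
    """PageRank over a directed word graph with weighted edges and a
    biased teleport term.

    edge_weight[(u, v)] is the weight of edge u -> v (e.g. embedding
    similarity between words); bias[u] is a positional prior (earlier
    words get more mass). Dangling nodes simply leak rank, which is
    acceptable for a sketch.
    """
    score = {u: 1.0 / len(nodes) for u in nodes}
    total_bias = sum(bias.values())
    # precompute each node's total outgoing weight
    out = {u: sum(edge_weight.get((u, v), 0.0) for v in nodes) for u in nodes}
    for _ in range(iters):
        new = {}
        for u in nodes:
            incoming = sum(
                score[v] * edge_weight.get((v, u), 0.0) / out[v]
                for v in nodes
                if out[v] > 0
            )
            new[u] = (1 - d) * bias[u] / total_bias + d * incoming
        score = new
    return score
```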

7.
A semantics-based keyphrase extraction algorithm
Keyphrases provide summary information about a document's content and are used in many data-mining applications. In current keyphrase extraction algorithms, we find that the gap between the lexical level (words that represent meanings) and the conceptual level (the meanings themselves) leads to inaccurate extraction: words with different forms may share the same meaning, while the same word may carry different meanings in different contexts. To solve this problem, this paper proposes replacing words with word senses and improving extraction performance by considering the semantic information of candidate keyphrases. Unlike existing extraction methods, our method first obtains the senses of candidate words from their context using a disambiguation algorithm; the semantic relatedness between candidate senses is then used in the subsequent word merging, feature extraction, and evaluation steps to improve performance. For evaluation, we adopt a more effective semantics-based evaluation method and compare against the well-known Kea system. Cross-domain experiments show that incorporating semantic information greatly improves extraction performance, while in same-domain experiments our algorithm performs comparably to Kea. Since our algorithm has no domain restrictions, it has broader application prospects.
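The sense-disambiguation step can be illustrated with a simplified Lesk-style gloss-overlap heuristic, a standard way to pick a sense from context. This is an assumption for illustration, not necessarily the paper's disambiguation algorithm.

```python
def lesk_sense(word, context, sense_glosses):
    """Pick the sense whose gloss shares the most words with the context
    (simplified Lesk disambiguation).

    sense_glosses maps sense identifiers to sets of gloss tokens; context
    is the list of words surrounding the ambiguous word.
    """
    ctx = set(context)
    return max(sense_glosses, key=lambda s: len(ctx & sense_glosses[s]))
```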

8.
The keyphrases of a text entity are a set of words or phrases that concisely describe the main content of that text. Automatic keyphrase extraction plays an important role in natural language processing and information retrieval tasks such as text summarization, text categorization, full-text indexing, and cross-lingual text reuse. However, automatic keyphrase extraction is still a complicated task and the performance of the current keyphrase extraction methods is low. Automatic discovery of high-quality and meaningful keyphrases requires the application of useful information and suitable mining techniques. This paper proposes Topical and Structural Keyphrase Extractor (TSAKE) for the task of automatic keyphrase extraction. TSAKE combines the prior knowledge about the input language learned by an N-gram topical model (TNG) with the co-occurrence graph of the input text to form topical graphs. Different from most of the recent keyphrase extraction models, TSAKE uses the topic model to weight the edges instead of the nodes of the co-occurrence graph. Moreover, while TNG represents the general topics of the language, TSAKE applies network analysis techniques to each topical graph to detect finer-grained sub-topics and extract the more important words of each sub-topic. The use of these informative words in the ranking process of the candidate keyphrases improves the quality of the final keyphrases proposed by TSAKE. The results of our experimental studies conducted on three manually annotated datasets show the superiority of the proposed model over three baseline techniques and six state-of-the-art models.

9.
Keyphrase generation is a classic but challenging task in natural language processing that automatically produces a set of representative and characteristic words for a document. Deep-learning-based sequence-to-sequence models have achieved remarkable results on this task, remedying a serious shortcoming of earlier keyphrase extraction: the inability to produce keyphrases that do not appear in the source text. Because their results are closer to practical needs, generation methods have gradually surpassed extraction methods and become the mainstream approach to the keyphrase extraction task. This paper reviews the development of keyphrase extraction and the main datasets for keyphrase generation, categorizes the generation methods whose basic design adopts sequence-to-sequence models, analyzes their principles, strengths, and weaknesses, summarizes the evaluation methods for keyphrase generation, and discusses future research priorities.

10.
A method for mining subject terms from text using association rules
Subject-term extraction is a current research hotspot in information retrieval and is closely related to a series of data-mining tasks. This paper proposes a new method for mining subject terms from Chinese text using association rules; the extracted subject terms comprise two parts, keywords and related retrieval terms. Building on keyword extraction, an association-rule mining algorithm from data mining is used to extract the related retrieval terms, which serve for query expansion or related retrieval and improve users' understanding of documents. Experiments show that the method achieves good results.
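A toy single-pass version of the association-rule step: a rule a -> b is kept when the keyword pair appears in enough documents (support) and often enough relative to a alone (confidence). Thresholds, names, and the single-pass simplification are illustrative assumptions.

```python
from itertools import combinations

def related_terms(doc_keywords, min_support=2, min_conf=0.6):
    """Mine 'keyword -> related retrieval term' rules from per-document
    keyword lists.

    A rule a -> b is emitted when the pair (a, b) occurs in at least
    min_support documents and count(a, b) / count(a) >= min_conf.
    Returns (antecedent, consequent, confidence) triples.
    """
    pair_count, item_count = {}, {}
    for kws in doc_keywords:
        uniq = set(kws)
        for k in uniq:
            item_count[k] = item_count.get(k, 0) + 1
        for a, b in combinations(sorted(uniq), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    rules = []
    for (a, b), n in pair_count.items():
        if n < min_support:
            continue
        if n / item_count[a] >= min_conf:
            rules.append((a, b, n / item_count[a]))
        if n / item_count[b] >= min_conf:
            rules.append((b, a, n / item_count[b]))
    return rules
```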

11.
Automatic keyphrase extraction from text has long been a fundamental problem and research hotspot in natural language processing. In particular, the continuously growing demand for text-data applications has drawn ever wider attention from researchers to extraction techniques. Although keyphrase extraction has made considerable progress in recent years, current results remain far from satisfactory. To advance the field, this paper systematically summarizes the achievements of domestic and international researchers, organized around three main steps: candidate keyphrase generation, feature engineering, and keyphrase extraction, and discusses possible future research directions. Unlike surveys organized around extraction methods, this paper summarizes existing work around the feature information each method uses; examining the literature from this feature-driven perspective helps researchers combine existing features or propose new ones, and thus develop more effective keyphrase extraction methods.

12.
Keyphrase extraction based on topic features
To make the extracted keyphrases better reflect document topics, a new method for computing a word's topic feature (TF) is proposed, which uses the word-topic distributions of a topic model. This feature is combined with features commonly used in keyphrase extraction, and a keyphrase extraction model is built with bagged decision trees. Experimental results show that the proposed topic feature improves keyphrase extraction performance, and confirm the suitability of bagged decision trees for keyphrase extraction.
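One natural reading of "computing a word's topic feature from word-topic distributions" is the expected word probability under the document's topic mixture; the paper's exact definition may differ, so treat this as a sketch under that assumption.

```python
def topic_feature(word, phi, theta):
    """Topic feature of a word: TF(w) = sum_z theta[z] * phi[z][w].

    phi is a list of dicts, phi[z] mapping words to P(w | topic z) from a
    trained topic model; theta is the document's topic distribution.
    Words prominent in the document's dominant topics score high.
    """
    return sum(t * phi_z.get(word, 0.0) for t, phi_z in zip(theta, phi))
```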

13.
This paper investigates user interpretation of search result displays on small screen devices. Such devices present interesting design challenges given their limited display capabilities, particularly in relation to screen size. Our aim is to provide users with succinct yet useful representations of search results that allow rapid and accurate decisions to be made about the utility of result documents, yet minimize user actions (such as scrolling), the use of device resources, and the volume of data to be downloaded. Our hypothesis is that keyphrases that are automatically extracted from documents can support this aim. We report on a user study that compared how accurately users categorized result documents on small screens when the document surrogates consisted of either keyphrases only, or document titles. We found no significant performance differences between the two conditions. In addition to these encouraging results, keyphrases have the benefit that they can be extracted and presented when no other document metadata can be identified.

14.
Keyphrase extraction is a research hotspot in natural language processing. Current deep-learning approaches to keyphrase extraction rarely account for the characteristics of Chinese and under-use character-granularity information, so there is still considerable room to improve keyphrase extraction from short Chinese texts. To improve extraction from short texts, targeting the task of automatic keyphrase extraction from paper abstracts, this paper proposes BAST (Bidirectional Long Short-Term Memory and Attention Mechanism Based on Sequence Tagging), a sequence-tagging keyphrase extraction model that combines a bidirectional long short-term memory network (BiLSTM) with an attention mechanism. First, word-granularity word vectors and character-granularity character vectors are used to represent the input text; then the BAST model is trained, using BiLSTM and attention to extract text features and predict a classification tag for each word; finally, the character-vector model corrects the keyphrase extraction results of the word-vector model. Experimental results on 8,159 paper abstracts show that the BAST model achieves an F1 score of 66.93%, 2.08 percentage points higher than the BiLSTM-CRF (BiLSTM with Conditional Random Field) algorithm, and also improves on other traditional keyphrase extraction algorithms. The model's novelty lies in combining the extraction results of the character-vector and word-vector models, fully exploiting the features of Chinese text to effectively extract keyphrases from short texts.
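Independent of the neural model, the sequence-tagging formulation above reduces to decoding per-token tags back into phrases. A minimal sketch with a B/I/O tag scheme (the exact tag set is an assumption):

```python
def phrases_from_bio(tokens, tags):
    """Recover keyphrases from per-token sequence tags:
    'B' opens a phrase, 'I' continues it, 'O' is outside.

    A stray 'I' with no open phrase is dropped.
    """
    phrases, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                phrases.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:
            current.append(tok)
        else:
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases
```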

15.
This paper describes a software system (SOFTLIB) that has been developed to assist in the management of software documentation generated during systems development projects. It provides facilities to manage large numbers of documents, to file documents when they are complete and to issue them to system developers and maintainers. It also includes an information retrieval facility that allows programming staff to find documents, to examine their contents before issue and to assess the state of the software project documentation. SOFTLIB is explicitly intended to help manage the documentation generated during software development; it is not designed for use by end-users of that software or for managing end-user documentation. The novel characteristic of this system is the approach that is taken to the consistency and completeness of documentation. The documentation associated with a software system is organized in such a way that it may be detected if document sets are complete (that is, if all documentation which should be provided for a software component is available) and if document sets are likely to be inconsistent. This means that if a document has been changed without a comparable change being made to other associated documents, this is detectable by the librarian system. In addition, a subsidiary aim of our work was to investigate the utility of menu systems for complex software tools by building a user interface to SOFTLIB. We conclude that menu systems are far from ideal in such situations because of the range of possible options which must be handled by the system.

16.
In this paper a system for analysis and automatic indexing of imaged documents for high-volume applications is described. This system, named STRETCH (STorage and RETrieval by Content of imaged documents), is based on an Archiving and Retrieval Engine, which overcomes the bottleneck of document profiling bypassing some limitations of existing pre-defined indexing schemes. The engine exploits a structured document representation and can activate appropriate methods to characterise and automatically index heterogeneous documents with variable layout. The originality of STRETCH lies principally in the possibility for unskilled users to define the indexes relevant to the document domains of their interest by simply presenting visual examples and applying reliable automatic information extraction methods (document classification, flexible reading strategies) to index the documents automatically, thus creating archives as desired. STRETCH offers ease of use and application programming and the ability to dynamically adapt to new types of documents. The system has been tested in two applications in particular, one concerning passive invoices and the other bank documents. In these applications, several classes of documents are involved. The indexing strategy first automatically classifies the document, thus avoiding pre-sorting, then locates and reads the information pertaining to the specific document class. Experimental results are encouraging overall; in particular, document classification results fulfill the requirements of high-volume application. Integration into production lines is under execution. Received March 30, 2000 / Revised June 26, 2001

17.
We present a methodology for learning a taxonomy from a set of text documents that each describes one concept. The taxonomy is obtained by clustering the concept definition documents with a hierarchical approach to the Self-Organizing Map. In this study, we compare three different feature extraction approaches with varying degree of language independence. The feature extraction schemes include fuzzy logic-based feature weighting and selection, statistical keyphrase extraction, and the traditional tf-idf weighting scheme. The experiments are conducted for English, Finnish, and Spanish. The results show that while the rule-based fuzzy logic systems have an advantage in automatic taxonomy learning, taxonomies can also be constructed with tolerable results using statistical methods without domain- or style-specific knowledge.
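The traditional tf-idf weighting scheme mentioned above, in its classic form (documents given as token lists):

```python
import math

def tfidf(term, doc, corpus):
    """Classic tf-idf weight: term frequency in the document times the
    log inverse document frequency over the corpus.

    Terms that appear in every document get weight 0; rare, locally
    frequent terms get high weight.
    """
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf
```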

18.

The internet changed the way that people communicate, and this has led to a vast amount of text that is available in electronic format. It includes things like e-mail, technical and scientific reports, tweets, physician notes and military field reports. Providing keyphrases for these extensive text collections thus allows users to grasp the essence of the lengthy contents quickly and helps to locate information with high efficiency. When designing a keyword extraction and indexing system, it is essential to pick unique properties, called features. In this article, we propose different unsupervised keyword extraction approaches, which are independent of the structure, size and domain of the documents. The proposed method relies on a novel, cognitively inspired set of standard, phrase, word embedding and external knowledge source features. The individual and selected feature results are reported through experimentation on four different datasets, viz. SemEval, KDD, Inspec, and DUC. The selected (feature selection) and word-embedding-based features are the best feature set for keyword extraction and indexing across all mentioned datasets; that is, the proposed distributed word vector with additional knowledge improves the results significantly over the use of individual features, combined features after feature selection, and the state-of-the-art. After achieving the objective of developing various keyphrase extraction methods, we also evaluated them on a document classification task.

19.
Multimodal Retrieval is a well-established approach for image retrieval. Usually, images are accompanied by a text caption along with associated documents describing the image. Textual query expansion as a form of enhancing image retrieval is a relatively less explored area. In this paper, we first study the effect of expanding the textual query on both image and its associated text retrieval. Our study reveals that judicious expansion of the textual query through keyphrase extraction can lead to better results, either in terms of text retrieval or both image and text retrieval. To establish this, we use two well-known keyphrase extraction techniques based on tf-idf and KEA. While query expansion results in increased retrieval efficiency, it is imperative that the expansion be semantically justified. So, we propose a graph-based keyphrase extraction model that captures the relatedness between words in terms of both mutual information and relevance feedback. Most of the existing works have stressed on bridging the semantic gap by using textual and visual features, either in combination or individually. The way these text and image features are combined determines the efficacy of any retrieval. For this purpose, we adopt Fisher-LDA to adjudge the appropriate weights for each modality. This provides us with an intelligent decision-making process favoring the feature set to be infused into the final query. Our proposed algorithm is shown to significantly outperform the previously mentioned keyphrase extraction algorithms for query expansion. A rigorous set of experiments performed on the ImageCLEF-2011 Wikipedia Retrieval task dataset validates our claim that capturing the semantic relation between words through mutual information, followed by expansion of a textual query using relevance feedback, can simultaneously enhance both text and image retrieval.

20.