Similar Documents (20 matching records found)
1.
User comments, as a large group of online short texts, are becoming increasingly prevalent with the development of online communications. These short texts are characterized by their co-occurrence with usually lengthier normal documents. For example, multiple user comments may follow one news article, or multiple reader reviews may follow one blog post. The co-occurring structure inherent in such text corpora is important for efficient learning of topics, but is rarely captured by conventional topic models. To capture such structure, we propose a topic model for co-occurring documents, referred to as COTM. In COTM, we assume there are two sets of topics: formal topics and informal topics, where formal topics can appear in both normal documents and short texts whereas informal topics can only appear in short texts. Each normal document has a probability distribution over the set of formal topics; each short text is composed of two topics, one from the set of formal topics, whose selection is governed by the topic probabilities of the corresponding normal document, and the other from the set of informal topics. We also develop an online algorithm for COTM to deal with large-scale corpora. Extensive experiments on real-world datasets demonstrate that COTM and its online algorithm outperform state-of-the-art methods by discovering more prominent, coherent and comprehensive topics.

2.
Researchers across the globe have been increasingly interested in the manner in which important research topics evolve over time within the corpus of scientific literature. In a dataset of scientific articles, each document can be considered to comprise both the words of the document itself and its citations of other documents. In this paper, we propose a citation-content-latent Dirichlet allocation (LDA) topic discovery method that accounts for both document citation relations and the content of the document itself via a probabilistic generative model. The citation-content-LDA topic model exploits a two-level topic model that includes the citation information for ‘father’ topics and text information for sub-topics. The model parameters are estimated by a collapsed Gibbs sampling algorithm. We also propose a topic evolution algorithm that runs in two steps: topic segmentation and topic dependency relation calculation. We have tested the proposed citation-content-LDA model and topic evolution algorithm on two online datasets, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and IEEE Computer Society (CS), to demonstrate that our algorithm effectively discovers important topics and reflects the topic evolution of important research themes. According to our evaluation metrics, citation-content-LDA outperforms both content-LDA and citation-LDA.
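The abstract mentions collapsed Gibbs sampling for parameter estimation. As an illustration of that estimation technique (for plain LDA rather than the two-level citation-content variant, with hyperparameters chosen arbitrarily), a minimal sampler might look like:

```python
import random

def lda_gibbs(docs, K, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Collapsed Gibbs sampling for plain LDA.
    docs: list of documents, each a list of integer word ids; K: topic count."""
    rng = random.Random(seed)
    V = 1 + max(w for d in docs for w in d)      # vocabulary size
    ndk = [[0] * K for _ in docs]                # doc-topic counts
    nkw = [[0] * V for _ in range(K)]            # topic-word counts
    nk = [0] * K                                 # tokens per topic
    z = []                                       # topic assignment per token
    for di, d in enumerate(docs):
        zs = []
        for w in d:
            t = rng.randrange(K)
            zs.append(t)
            ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]
                # remove this token's current assignment from the counts
                ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # full conditional p(z = k | everything else)
                p = [(ndk[di][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                     for k in range(K)]
                thresh = rng.random() * sum(p)
                acc, t = 0.0, K - 1
                for k in range(K):
                    acc += p[k]
                    if acc >= thresh:
                        t = k
                        break
                z[di][wi] = t
                ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return ndk, nkw
```

The returned count matrices can be normalized (with the usual Dirichlet smoothing) to recover document-topic and topic-word distributions.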

3.
Effective and efficient classification on a search-engine model
Traditional document classification frameworks, which apply the learned classifier to each document in a corpus one by one, are infeasible for extremely large document corpora, like the Web or large corporate intranets. We consider the classification problem on a corpus that has been processed primarily for the purpose of searching, and thus our access to documents is solely through the inverted index of a large scale search engine. Our main goal is to build the “best” short query that characterizes a document class using operators normally available within search engines. We show that surprisingly good classification accuracy can be achieved on average over multiple classes by queries with as few as 10 terms. As part of our study, we enhance some of the feature-selection techniques that are found in the literature by forcing the inclusion of terms that are negatively correlated with the target class and by making use of term correlations; we show that both of those techniques can offer significant advantages. Moreover, we show that optimizing the efficiency of query execution by careful selection of terms can further reduce the query costs. More precisely, we show that on our set-up the best 10-term query can achieve 93% of the accuracy of the best SVM classifier (14,000 terms), and if we are willing to tolerate a reduction to 89% of the best SVM, we can build a 10-term query that can be executed more than twice as fast as the best 10-term query.
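A rough sketch of the term-selection idea reported above, ranking terms by their correlation with the target class and deliberately keeping a few negatively correlated ones, could look as follows; the document-frequency scoring and the `neg_frac` split are illustrative assumptions, not the authors' exact procedure:

```python
from collections import Counter

def select_query_terms(docs, labels, k=10, neg_frac=0.3):
    """Pick k query terms for a class: mostly terms positively correlated
    with the class, plus a fraction of negatively correlated ones
    (an idea the abstract reports as beneficial).
    docs: list of token lists; labels: 1 = in class, 0 = not."""
    pos, neg = Counter(), Counter()
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    for toks, y in zip(docs, labels):
        (pos if y else neg).update(set(toks))
    vocab = set(pos) | set(neg)
    # correlation proxy: in-class minus out-of-class document frequency
    score = {w: pos[w] / max(n_pos, 1) - neg[w] / max(n_neg, 1) for w in vocab}
    ranked = sorted(vocab, key=lambda w: score[w], reverse=True)
    take_neg = int(k * neg_frac)
    positives = ranked[:k - take_neg]
    negatives = ranked[len(ranked) - take_neg:] if take_neg else []
    return positives, negatives
```

The positive terms would become OR-ed query terms and the negative ones NOT-ed terms in a search-engine query.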

4.
This paper presents a probabilistic mixture modeling framework for the hierarchic organisation of document collections. It is demonstrated that the probabilistic corpus model which emerges from the automatic or unsupervised hierarchical organisation of a document collection can be further exploited to create a kernel which boosts the performance of state-of-the-art support vector machine document classifiers. It is shown that the performance of such a classifier is further enhanced when employing the kernel derived from an appropriate hierarchic mixture model used for partitioning a document corpus rather than the kernel associated with a flat non-hierarchic mixture model. This has important implications for document classification when a hierarchic ordering of topics exists. This can be considered as the effective combination of documents with no topic or class labels (unlabeled data), labeled documents, and prior domain knowledge (in the form of the known hierarchic structure), in providing enhanced document classification performance.

5.
Knowledge discovery through directed probabilistic topic models: a survey
Graphical models have become the basic framework for topic-based probabilistic modeling. Models with latent variables in particular have proved effective at capturing hidden structure in data. In this paper, we survey an important subclass, Directed Probabilistic Topic Models (DPTMs), with soft clustering abilities, and their applications to knowledge discovery in text corpora. From an unsupervised learning perspective, “topics are semantically related probabilistic clusters of words in text corpora; and the process for finding these topics is called topic modeling”. In topic modeling, a document consists of different hidden topics, and the topic probabilities provide an explicit representation of a document that smooths data at the semantic level. This has been an active area of research during the last decade. Many models have been proposed to handle the problems of modeling text corpora with different characteristics, for applications such as document classification, hidden association finding, expert finding, community discovery and temporal trend analysis. We give basic concepts, advantages and disadvantages in chronological order, classify existing models into different categories, and describe their parameter estimation and inference algorithms together with measures for evaluating model performance. We also discuss their applications, open challenges and future directions in this dynamic area of research.

6.
Discovering unexpected documents in corpora
Text mining is widely used to discover frequent patterns in large corpora of documents. Hence, many classical data mining techniques that have proven fruitful in the context of data stored in relational databases are now successfully used in the context of textual data. Nevertheless, there are many situations where it is more valuable to discover unexpected information rather than frequent patterns. In the context of technology watch, for example, we may want to discover new trends in specific markets, or discover what competitors are planning in the near future. This paper addresses that research context. We have proposed several unexpectedness measures and implemented them in a prototype, called UnexpectedMiner, that can be used by watchers to discover unexpected documents in large corpora of documents (patents, datasheets, advertisements, scientific papers, etc.). UnexpectedMiner is able to take the structure of documents into account during the discovery of unexpected information. Many experiments have been performed to validate our measures and show the interest of our system.

7.
To mine authors, topics and time, and the relations among them, in large-scale scientific literature, an author-topic-over-time (AToT) model is proposed that takes both the internal and external features of scientific documents into account. In the model, a document is represented as a mixture of topics with certain probabilities; each topic corresponds to a multinomial distribution over words and a Beta distribution over time, so that the topic-word distribution is determined not only by word co-occurrences within documents but is also influenced by document timestamps; each author likewise corresponds to a multinomial distribution over topics. The topic-word distributions and the author-topic distributions are used to describe, respectively, how topics change over time and how authors' research interests evolve. Model parameters are learned from the document collection by Gibbs sampling. Experimental results on a collection of 1,700 NIPS conference papers show that the AToT model can describe the latent topic evolution in the collection, dynamically discover changes in authors' research interests, predict the authors associated with a topic, and achieve lower perplexity than the author-topic model.

8.
Field Association (FA) Terms—words or phrases that serve to identify document fields—are effective in document classification, similar file retrieval and passage retrieval. But the problem lies in the lack of an effective method to extract and select relevant FA Terms to build a comprehensive dictionary of FA Terms. This paper presents a new method to extract, select and rank FA Terms from domain-specific corpora using part-of-speech (POS) pattern rules, corpora comparison and modified tf-idf weighting. Experimental evaluation on 21 fields using 306 MB of domain-specific corpora obtained from English Wikipedia dumps selected up to 2,517 FA Terms (single and compound) per field, at precision of 74–97% and recall of 65–98%. This is better than traditional methods. The FA Terms dictionary constructed using this method achieved an average accuracy of 97.6% in identifying the fields of 10,077 test documents collected from Wikipedia, the Reuters RCV1 corpus and the 20 Newsgroups data set.
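The standard tf-idf weighting that the paper's modified variant builds on can be sketched as follows (this is the textbook formulation, not the modified weighting the authors use for FA Term selection):

```python
import math
from collections import Counter

def tfidf(docs):
    """Standard tf-idf: term frequency within a document scaled by the
    inverse document frequency across the corpus.
    docs: list of documents, each a list of tokens."""
    N = len(docs)
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    weights = []
    for d in docs:
        tf = Counter(d)
        weights.append({w: (tf[w] / len(d)) * math.log(N / df[w]) for w in tf})
    return weights
```

A term that occurs in every document (like a stop word) gets weight zero, while terms concentrated in few documents score highly; a corpora-comparison variant would additionally contrast frequencies against a reference corpus.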

9.
In today's Internet era of explosive growth in the amount of information, how to analyze massive text collections and extract the valuable information they contain is a problem worth attention. Forum corpora, as web corpora, are more complex in structure and content than ordinary corpora, and their texts are also shorter. The method proposed in this paper models the corpus with LDA and extracts its topics; for each topic in the resulting topic space, it finds the documents that support the topic and computes the document support rate as the topic strength. Plotting topic strength along the time axis yields the topic's strength trend, and by re-modeling the corpus over individual time periods and combining the results with the global topics, the topic's content evolution path is obtained. Experimental results show that the method is reasonable and effective.
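The abstract above measures a topic's strength as a document support rate per time period. A minimal sketch of that bookkeeping, assuming per-document topic distributions (e.g. from LDA) and a support threshold picked purely for illustration:

```python
from collections import defaultdict

def topic_strength(doc_topics, timestamps, threshold=0.2):
    """For each time period, the fraction of that period's documents that
    'support' each topic, i.e. assign it probability above the threshold.
    doc_topics: one topic-probability vector per document;
    timestamps: one period label per document."""
    periods = defaultdict(list)
    for dist, t in zip(doc_topics, timestamps):
        periods[t].append(dist)
    K = len(doc_topics[0])
    strength = {}
    for t, dists in sorted(periods.items()):
        strength[t] = [sum(d[k] > threshold for d in dists) / len(dists)
                       for k in range(K)]
    return strength
```

Plotting each topic's column of this table against the period labels gives the strength trend the abstract describes.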

10.
With the development of XML technology, how to store and query XML documents using existing database technology has become a hot topic in XML data management research. This paper introduces a new document encoding method and, based on it, proposes a new storage method for XML documents. The method decomposes the tree structure of an XML document into nodes according to their types and stores them in corresponding relational tables, so that documents of arbitrary structure can be stored under one fixed relational schema. To facilitate querying, the simple path patterns occurring in a document are also stored in a table. This new storage method effectively supports query operations on documents and can correctly reconstruct the original XML document from the nodes' encoding information. Finally, the proposed storage method and reconstruction algorithm are validated experimentally.
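The shredding idea described above, decomposing an XML tree into node rows plus a table of simple paths, can be sketched as follows; the (pre-order id, parent id) encoding here is a simple stand-in for the paper's actual encoding scheme:

```python
import xml.etree.ElementTree as ET

def shred(xml_text):
    """Decompose an XML document into flat relational-style rows, one per
    element, with a pre-order id / parent-id encoding, plus the set of
    simple paths occurring in the document."""
    root = ET.fromstring(xml_text)
    rows, paths = [], set()
    counter = [0]                        # next pre-order id

    def walk(node, parent_id, path):
        nid = counter[0]
        counter[0] += 1
        p = path + "/" + node.tag
        rows.append({"id": nid, "parent": parent_id, "tag": node.tag,
                     "text": (node.text or "").strip(), "path": p})
        paths.add(p)
        for child in node:
            walk(child, nid, p)

    walk(root, None, "")
    return rows, sorted(paths)
```

The rows map directly onto a fixed relational schema, and the original tree can be rebuilt by grouping rows on `parent` and ordering by `id`.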

11.
Knowledge, 2007, 20(7): 607-613
Discovering topics from large numbers of documents has recently become an important task. Most topic models treat a document as a word sequence, whether in discrete-character or term-frequency form. However, the number of words in a document varies greatly across documents, which leads to several problems for current topic models in topic analysis. It is also difficult to perform topic transition analysis with current topic models. In an attempt to overcome these deficiencies, a variable space hidden Markov model (VSHMM) is proposed to represent topics, and several operations based on space computation are presented. A hierarchical clustering algorithm that dynamically changes the number of components in the topic model is proposed to demonstrate the effectiveness of the VSHMM. A method of document partitioning based on topic transitions is also presented. Experiments on a real-world dataset show that the VSHMM can improve accuracy while greatly decreasing time complexity compared with an algorithm based on the current mixture model.

12.
13.
Topic Distillation and Spectral Filtering
This paper discusses topic distillation, an information retrieval problem that is emerging as a critical task for the WWW. Algorithms for this problem must distill a small number of high-quality documents addressing a broad topic from a large set of candidates. We give a review of the literature, and compare the problem with related tasks such as classification, clustering, and indexing. We then describe a general approach to topic distillation with applications to searching and partitioning, based on the algebraic properties of matrices derived from particular documents within the corpus. Our method – which we call spectral filtering – combines the use of terms, hyperlinks and anchor-text to improve retrieval performance. We give results for broad-topic queries on the WWW, and also give some anecdotal results applying the same techniques to US Supreme Court law cases, US patents, and a set of Wall Street Journal newspaper articles.
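The algebraic computations on document-derived matrices alluded to above are in the spirit of link-analysis recurrences such as HITS. As an illustration only (the paper's spectral filtering also exploits terms and anchor-text), a power-iteration computation of hub and authority scores from a link matrix:

```python
def hits(adj, iters=50):
    """Power iteration for hub/authority scores on a directed link graph.
    adj[i][j] is truthy when document i links to document j."""
    n = len(adj)
    hub = [1.0] * n
    auth = [1.0] * n
    for _ in range(iters):
        # a page's authority is the summed hub weight of pages linking to it
        auth = [sum(hub[i] for i in range(n) if adj[i][j]) for j in range(n)]
        norm = sum(a * a for a in auth) ** 0.5 or 1.0
        auth = [a / norm for a in auth]
        # a page's hub score is the summed authority of pages it links to
        hub = [sum(auth[j] for j in range(n) if adj[i][j]) for i in range(n)]
        norm = sum(h * h for h in hub) ** 0.5 or 1.0
        hub = [h / norm for h in hub]
    return hub, auth
```

The fixed point of this recurrence is given by the principal eigenvectors of the link matrix's products with its transpose, which is what makes the approach "spectral".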

14.
Topic models are generative probabilistic models which have been applied to information retrieval to automatically organize and provide structure to a text corpus. Topic models discover topics in the corpus, which represent real world concepts by frequently co-occurring words. Recently, researchers found topics to be effective tools for structuring various software artifacts, such as source code, requirements documents, and bug reports. This research also hypothesized that using topics to describe the evolution of software repositories could be useful for maintenance and understanding tasks. However, research has yet to determine whether these automatically discovered topic evolutions describe the evolution of source code in a way that is relevant or meaningful to project stakeholders, and thus it is not clear whether topic models are a suitable tool for this task. In this paper, we take a first step towards evaluating topic models in the analysis of software evolution by performing a detailed manual analysis on the source code histories of two well-known and well-documented systems, JHotDraw and jEdit. We define and compute various metrics on the discovered topic evolutions and manually investigate how and why the metrics evolve over time. We find that the large majority (87%–89%) of topic evolutions correspond well with actual code change activities by developers. We are thus encouraged to use topic models as tools for studying the evolution of a software system.

15.
We investigate the unique requirements of the adaptive textual document filtering problem and propose a new high‐dimensional on‐line learning framework, known as the REPGER (relevant feature pool with good training example retrieval rule) algorithm to tackle this problem. Our algorithm possesses three characteristics. First, it maintains a pool of selective features with potentially high predictive power to predict document relevance. Second, besides retrieving documents according to their predicted relevance, it also retrieves incoming documents that are considered good training examples. Third, it can dynamically adjust the dissemination threshold throughout the filtering process so as to maintain a good filtering performance in a fully interactive environment. We have conducted experiments on three document corpora, namely, Associated Press, Foreign Broadcast Information Service, and Wall Street Journal to compare the performance of our REPGER algorithm with two existing on‐line learning algorithms. The results demonstrate that our REPGER algorithm gives better performance most of the time. Comparison with the TREC (Text Retrieval Conference) adaptive text filtering track participants was also made. The result shows that our REPGER algorithm is comparable to them.

16.
This paper reports a document retrieval technique that retrieves machine-printed Latin-based document images through word shape coding. Adopting the idea of image annotation, a word shape coding scheme is proposed, which converts each word image into a word shape code by using a few shape features. The text contents of imaged documents are thus captured by a document vector constructed with the converted word shape code and word frequency information. Similarities between different document images are then gauged based on the constructed document vectors. We divide the retrieval process into two stages. Based on the observation that documents of the same language share a large number of high-frequency language-specific stop words, the first stage retrieves documents with the same underlying language as that of the query document. The second stage then re-ranks the documents retrieved in the first stage based on the topic similarity. Experiments show that document images of different languages and topics can be retrieved properly by using the proposed word shape coding scheme.
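To make the word shape coding idea concrete, here is a toy coder that maps characters to coarse shape classes (x-height, ascender, descender). The actual system extracts such features from word images, so this text-based stand-in is purely illustrative:

```python
def shape_code(word):
    """Map each character to a coarse glyph-shape class:
    'A' = ascender (rises above x-height), 'g' = descender (drops below
    the baseline), 'x' = plain x-height glyph."""
    ascenders = set("bdfhklt")
    descenders = set("gjpqy")
    out = []
    for ch in word.lower():
        if ch in descenders:
            out.append("g")
        elif ch in ascenders:
            out.append("A")
        else:
            out.append("x")
    return "".join(out)
```

Many distinct words collapse to the same code, which is exactly the trade-off of shape coding: it is cheap and robust against OCR errors, at the cost of some ambiguity that the frequency-based document vector then compensates for.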

17.
18.
Concept learning has attracted considerable attention as a means of tackling problems of representation and of learning corpus knowledge. In this paper, we investigate the challenging problem of automatically constructing a patent concept learning model. Our model consists of two main processes: the acquisition of an initial concept graph, and the refinement of that graph. The learning algorithm for the patent concept graph is designed based on the Association Link Network (ALN), in which a concept is usually described by multiple documents. We propose a mixture-ALN, which, compared with the plain ALN, adds links between documents and the lexical level. We then propose a heuristic algorithm to refine the concept graph, leading to more concise and simpler knowledge for the concept. The heuristic algorithm consists of four phases. First, to simplify the bag of words for a concept in the patent corpus, we select a core node from the initial concept graph. Second, to learn association rules for the concept, we search for important association rules around the core node in our rule collection. Third, to ensure coherent semantics of the concept, we select corresponding documents based on the selected association rules and words. Finally, to enrich the semantics of the refined concept, we iteratively select core nodes based on the corresponding documents and restart the heuristic algorithm. In our experiments, the model shows effectiveness and improved prediction accuracy on a patent retrieval task.

19.
Yong Suk Choi, Knowledge, 2011, 24(8): 1139-1150
Recently, owing to the widespread on-line availability of syntactically annotated text corpora, automated tools for searching such corpora have gained great attention. Conventional corpus search tools generally use a decomposition-matching-merging method based on relational predicates to match a tree pattern query against the desired parts of a corpus. Thus, their query formulation and expressivity are often complicated by poorly understood query formalisms, and their searches may incur significant computational overhead due to a large number of repeated tree pattern matching trials. To overcome these difficulties, we present TPEMatcher, a tool for searching in parsed text corpora. TPEMatcher provides not only an efficient way of formulating queries and searching but also good query expressivity based on a concise syntax and semantics for tree pattern queries. We also demonstrate that TPEMatcher can be used effectively for text mining in practice, with a useful interface providing in-depth details of search results.

20.
Patterns of words used in different text collections can characterize interesting properties of a corpus. However, these patterns are challenging to explore as they often involve complex relationships across many words and collections in a large space of words. In this paper, we propose a configurable colorfield design to aid this exploration. Our approach uses a dense colorfield overview to present large amounts of data in ways that make patterns perceptible. It allows flexible configuration of both data mappings and aggregations to expose different kinds of patterns, and provides interactions to help connect detailed patterns to the corpus overview. TextDNA, our prototype implementation, leverages the GPU to provide interactivity in the web browser even on large corpora. We present five case studies showing how the tool supports inquiry in corpora ranging in size from single document to millions of books. Our work shows how to make a configurable colorfield approach practical for a range of analytic tasks.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号