Similar Documents
20 similar documents found (search time: 31 ms)
1.
Adaptive Bayesian Latent Semantic Analysis
Due to the vast growth of data collections, statistical document modeling has become increasingly important in language processing. Probabilistic latent semantic analysis (PLSA) is a popular approach whereby semantics and statistics can be effectively captured for modeling. However, PLSA is highly sensitive to the task domain, which changes continuously in real-world documents. In this paper, a novel Bayesian PLSA framework is presented. We focus on an incremental learning algorithm that solves the problem of updating the model with articles from new domains. The algorithm improves document modeling by incrementally extracting up-to-date latent semantic information to match the changing domains at run time. By representing the priors of the PLSA parameters with Dirichlet densities, the posterior densities belong to the same family of distributions, so a reproducible prior/posterior mechanism supports incremental learning from continually accumulated documents. An incremental PLSA algorithm is constructed to accomplish both parameter estimation and hyperparameter updating. Compared with standard PLSA using maximum likelihood estimation, the proposed approach can perform dynamic document indexing and modeling. We also present maximum a posteriori PLSA for corrective training. Experiments on information retrieval and document categorization demonstrate the superiority of the Bayesian PLSA methods.
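As a hedged illustration of the MAP flavour described above, the following numpy sketch runs PLSA's EM loop with Dirichlet pseudo-counts added in the M-step; the corpus, topic count, and hyperparameter `alpha` are illustrative, not taken from the paper.

```python
# Minimal MAP-PLSA sketch: EM with Dirichlet(alpha) priors on the PLSA
# multinomials, a simplified reading of the prior/posterior mechanism.
import numpy as np

def map_plsa(counts, n_topics, alpha=1.1, n_iter=50, seed=0):
    """counts: (n_docs, n_words) term-frequency matrix."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # P(z|d) and P(w|z), randomly initialised and normalised.
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) for every (d, w) pair.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # (d, z, w)
        joint /= joint.sum(1, keepdims=True) + 1e-12
        weighted = counts[:, None, :] * joint                # expected counts
        # M-step with Dirichlet prior: MAP adds (alpha - 1) pseudo-counts.
        nz_w = weighted.sum(0) + (alpha - 1.0)
        p_w_z = nz_w / nz_w.sum(1, keepdims=True)
        nd_z = weighted.sum(2) + (alpha - 1.0)
        p_z_d = nd_z / nd_z.sum(1, keepdims=True)
    return p_z_d, p_w_z

counts = np.array([[3, 0, 1, 0], [0, 2, 0, 4], [2, 1, 1, 1]], float)
theta, phi = map_plsa(counts, n_topics=2)
```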

2.
The development of micro-blogs, which generate large volumes of short texts, has made communication convenient, but discovering topics in short texts has become a genuinely intractable problem. Traditional topic models such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA) struggle to model short texts because they suffer from severe data sparsity. Moreover, the K-means clustering algorithm can produce discriminative topics when the dataset is dense and the differences among topic documents are distinct. In this paper, the biterm topic model (BTM) is employed to process short texts (micro-blog data) to alleviate the sparsity problem, and the K-means clustering algorithm is integrated into BTM for further topic discovery. Experiments on Sina micro-blog short-text collections demonstrate that our method discovers topics effectively.
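BTM itself is not part of the standard Python toolkits; the sketch below shows the surrounding pipeline under that assumption: biterm extraction from short texts, followed by K-means clustering of document-topic vectors, where `doc_topics` is a stand-in for the P(z|d) matrix a trained BTM would supply.

```python
# Pipeline sketch around BTM: biterm extraction, then K-means over
# document-topic vectors. BTM training is assumed done elsewhere.
from itertools import combinations
import numpy as np
from sklearn.cluster import KMeans

def biterms(tokens, window=15):
    """All unordered word pairs co-occurring within a short text."""
    return list(combinations(tokens[:window], 2))

docs = [["game", "team", "win"], ["stock", "market", "rise"],
        ["team", "score", "goal"], ["market", "trade", "stock"]]
for d in docs:
    print(biterms(d))

# Stand-in for BTM's inferred topic distributions (rows sum to 1).
doc_topics = np.array([[0.9, 0.1], [0.2, 0.8], [0.85, 0.15], [0.1, 0.9]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_topics)
print(labels)  # documents grouped into discovered topic clusters
```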

3.
A thousand words in a scene
This paper presents a novel approach for visual scene modeling and classification, investigating the combined use of text modeling methods and local invariant features. Our work attempts to elucidate (1) whether a text-like bag-of-visterms (BOV) representation (a histogram of quantized local visual features) is suitable for scene (rather than object) classification, (2) whether analogies between discrete scene representations and text documents exist, and (3) whether unsupervised latent space models can be used both as feature extractors for the classification task and to discover patterns of visual co-occurrence. Using several datasets, we validate our approach, presenting and discussing experiments on each of these issues. We first show, with extensive experiments on binary and multiclass scene classification tasks using a 9,500-image dataset, that the BOV representation consistently outperforms classical scene classification approaches. On other datasets, we show that our approach competes with or outperforms more complex recent methods. We also show that probabilistic latent semantic analysis (PLSA) generates a compact scene representation, is discriminative for accurate classification, and is more robust than the BOV representation when less labeled training data is available. Finally, through aspect-based image ranking experiments, we show the ability of PLSA to automatically extract visually meaningful scene patterns, making such a representation useful for browsing image collections.
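A minimal sketch of the bag-of-visterms construction, assuming random vectors in place of real local descriptors (e.g. SIFT): a K-means codebook quantizes descriptors, and each image becomes a normalized histogram of codeword assignments.

```python
# Bag-of-visterms sketch: learn a visual vocabulary with K-means and
# turn each image's local descriptors into a codeword histogram.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each "image" is a set of 128-d local descriptors (random stand-ins).
images = [rng.normal(size=(60, 128)) for _ in range(5)]

vocab_size = 16
codebook = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
codebook.fit(np.vstack(images))          # learn the visual vocabulary

def bov_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()             # normalised visterm histogram

bov = np.array([bov_histogram(img) for img in images])
print(bov.shape)  # (n_images, vocab_size), ready for PLSA or a classifier
```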

4.
李卫疆  王真真  余正涛 《计算机科学》2017,44(2):257-261, 274
In recent years, the development of micro-blogs and other social networks has made communication more convenient. Because each micro-blog post is limited to 140 characters, a large volume of short texts is produced, and discovering topics from short texts has become an increasingly important task. Traditional topic models, such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA), face a severe data-sparsity problem when processing short texts. On the other hand, when a dataset is dense and the differences among topic documents are distinct, the K-means clustering algorithm can produce well-separated topics. This paper introduces the biterm topic model (BTM) to handle short texts such as micro-blog data, alleviating the sparsity problem, and integrates the K-means algorithm to cluster the topics discovered by BTM. Experiments on a collection of Sina micro-blog short texts demonstrate that the method discovers topics effectively.

5.
This paper proposes a novel approach to generating and analyzing path models with structural equation modeling (SEM). SEM is an important technique for causal analysis based on path models, so constructing path models that yield reliable analysis is essential. An LSA-based method has been proposed to build path models from text data; however, it requires each document to belong to a single topic, so the resulting model cannot express natural variables and relationships. This paper therefore extends the existing approach to latent Dirichlet allocation (LDA) and generates a path model from the topics extracted by LDA. Experiments using review text data confirm the feasibility and applicability of the proposed process.

6.
Traditional latent semantic analysis (LSA) cannot capture the spatial distribution of scene objects or discriminative information about latent topics. To address this, a scene classification method based on multi-scale spatial discriminative probabilistic latent semantic analysis (PLSA) is proposed. First, the image is partitioned at multiple spatial scales with a spatial pyramid to capture spatial information, and a PLSA model extracts the latent semantics of each local block. The semantics of the individual blocks are then concatenated into the image's multi-scale spatial latent semantic representation. Finally, a proposed weight-learning method learns discriminative information between image topics, yielding a multi-scale spatial discriminative latent semantic representation, and the learned weights are embedded into a support vector machine (SVM) classifier to perform scene classification. Experiments on three widely used scene image datasets (Scene-13, Scene-15, and Caltech-101) show that the method achieves higher average classification accuracy than many existing state-of-the-art methods, verifying its effectiveness and robustness.
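A minimal sketch of the spatial-pyramid step, assuming hypothetical keypoint coordinates and visual-word ids as input: per-cell codeword histograms at several grid scales are concatenated into one multi-scale vector.

```python
# Spatial pyramid sketch: bin quantised local features into grid cells
# at several scales and concatenate the per-cell histograms.
import numpy as np

def spatial_pyramid(xy, words, vocab_size, levels=(1, 2, 4)):
    """xy: (n, 2) keypoint coords in [0, 1); words: (n,) codeword ids."""
    parts = []
    for g in levels:                      # g x g grid at each scale
        cell = (np.floor(xy[:, 0] * g) * g + np.floor(xy[:, 1] * g)).astype(int)
        for c in range(g * g):
            hist = np.bincount(words[cell == c], minlength=vocab_size)
            parts.append(hist)
    vec = np.concatenate(parts).astype(float)
    return vec / max(vec.sum(), 1.0)      # normalised multi-scale vector

rng = np.random.default_rng(1)
xy = rng.random((200, 2)); words = rng.integers(0, 16, 200)
print(spatial_pyramid(xy, words, vocab_size=16).shape)  # 16*(1+4+16) dims
```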

7.
Probabilistic latent semantic analysis (PLSA) is a topic model for text documents that has been widely used in text mining, computer vision, computational biology, and so on. For batch PLSA inference algorithms, the required memory grows linearly with the data size, making massive data streams very difficult to handle. To process big data streams, we propose an online belief propagation (OBP) algorithm based on an improved factor graph representation of PLSA. The factor graph of PLSA facilitates the classic belief propagation (BP) algorithm. OBP splits the data stream into a set of small segments and uses the parameters estimated from previous segments to calculate the gradient descent of the current segment. Because OBP removes each segment from memory after processing it, it is memory-efficient for big data streams. We examine the performance of OBP on four document datasets and demonstrate that it is competitive in both speed and accuracy with online expectation maximization (OEM) for PLSA, and that it also gives a more accurate topic evolution. Experiments on massive data streams from Baidu further confirm the effectiveness of the OBP algorithm.
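OBP is not available in standard libraries; as an illustration of the same segment-wise streaming idea, the sketch below feeds a document stream to scikit-learn's online (variational) LDA one segment at a time, discarding each segment after the update.

```python
# Segment-wise online topic updating, in the spirit of OBP, using
# scikit-learn's online LDA (a different inference algorithm).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

stream = ["topic models handle text", "streams arrive in segments",
          "belief propagation on factor graphs", "memory stays bounded",
          "parameters carry over between segments", "text mining at scale"]

vec = CountVectorizer()
X = vec.fit_transform(stream)            # in practice the vocabulary is fixed upfront

lda = LatentDirichletAllocation(n_components=2, random_state=0)
segment_size = 2
for start in range(0, X.shape[0], segment_size):
    segment = X[start:start + segment_size]
    lda.partial_fit(segment)             # update using only this segment
    # `segment` can now be dropped from memory, as in OBP.

print(lda.transform(X[:1]))              # topic mixture for the first document
```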

8.
Text Classification Based on the LDA Model
To address the limitations of traditional dimensionality-reduction algorithms for high-dimensional, large-scale text classification, a text classification algorithm based on the LDA model is proposed. Within a discriminative SVM framework, the LDA probabilistic generative model is used to build topic models of the document collection, and an SVM is trained on the resulting latent topic-document matrix to construct the text classifier. Parameters are inferred by Gibbs sampling, representing each document as a probability distribution over a fixed set of latent topics, and the optimal number of topics T is determined using standard methods from Bayesian statistics. Classification experiments on the corpus show better performance than representing documents with VSM plus SVM or LSI plus SVM.
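A minimal sketch of this pipeline using scikit-learn (whose LDA uses variational inference rather than the Gibbs sampling used in the paper): documents are represented by their topic distributions and an SVM is trained on those features; the corpus and topic count are illustrative.

```python
# LDA topic features + SVM classifier, the pipeline described above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

docs = ["the team won the match", "stocks fell on weak earnings",
        "the striker scored twice", "the market rallied after the report"]
labels = [0, 1, 0, 1]                    # 0 = sports, 1 = finance

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(X)    # document-topic matrix

clf = SVC(kernel="linear").fit(topic_features, labels)
print(clf.predict(topic_features))
```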

9.
To address the problem that news topics on some websites are unclassified or poorly classified, the LDA model is applied to news-topic classification. First, LDA topic modeling is performed on the news dataset; the optimal number of topics is chosen with a standard Bayesian method, and the model parameters are computed indirectly by Gibbs sampling, yielding the dataset's topic probability distributions. Semantic similarity between documents is then computed from the Jensen-Shannon (JS) distance, producing a similarity matrix. Finally, an incremental text-clustering algorithm clusters the news documents, dividing the news topics into several differently structured subtopics. Experimental results show that the method partitions news topics effectively.
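A short sketch of the similarity step, with made-up topic distributions: SciPy's `jensenshannon` (which returns the square root of the JS divergence) is turned into a pairwise similarity matrix ready for clustering.

```python
# Pairwise Jensen-Shannon similarity between document-topic vectors.
import numpy as np
from scipy.spatial.distance import jensenshannon

doc_topics = np.array([[0.7, 0.2, 0.1],
                       [0.6, 0.3, 0.1],
                       [0.1, 0.1, 0.8]])

n = len(doc_topics)
sim = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # jensenshannon returns the square root of the JS divergence.
        sim[i, j] = 1.0 - jensenshannon(doc_topics[i], doc_topics[j])

print(np.round(sim, 3))   # high values = semantically similar documents
```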

10.
Understanding how the topics within a document evolve over the structure of the document is an interesting and potentially important problem in exploratory and predictive text analytics. In this article, we address this problem by presenting a novel variant of latent Dirichlet allocation (LDA): sequential LDA (SeqLDA). This variant directly considers the underlying sequential structure, i.e. a document consists of multiple segments (e.g. chapters, paragraphs), each of which is correlated with its antecedent and subsequent segments. Such progressive sequential dependency is captured using the hierarchical two-parameter Poisson-Dirichlet process (HPDP). We develop an efficient collapsed Gibbs sampling algorithm to sample from the posterior of SeqLDA based on the HPDP. Our experimental results on patent documents show that, by considering the sequential structure within a document, the SeqLDA model has higher fidelity than LDA in terms of perplexity (a standard measure of dictionary-based compressibility). SeqLDA also yields a nicer sequential topic structure than LDA, as we show in experiments on several books such as Melville's 'Moby Dick'.

11.
Understanding the functionality of source code is an important step in software reuse. Code-comprehension methods based on topic modeling can mine the latent topics in software code, and these topics represent, to some extent, the functionality the code implements. However, code topics mined with topic-modeling techniques tend to be semantically vague and hard to interpret. Latent Dirichlet allocation (LDA) is a widely used topic-modeling technique that has achieved good results in mining code topics, but it suffers from the same problem, so explanatory textual descriptions need to be generated for the topics. In addition to applying topic modeling to source code, the proposed LDA-based method for automatically generating code-topic summaries mines descriptive text for code topics from software resources that describe system functionality, such as documentation and Q&A information, and extracts summaries from it, thereby better helping developers understand what the software does.

12.
In the context of information retrieval (IR) from text documents, the term weighting scheme (TWS) is a key component of the matching mechanism in the vector space model. In this paper, we propose a new TWS based on the average term occurrence of terms in documents; it also uses a discriminative approach based on the document centroid vector to remove less significant weights from documents. We call our approach Term Frequency With Average Term Occurrence (TF-ATO). An analysis of commonly used document collections shows that test collections are not fully judged, as full judgement is expensive and may be infeasible for large collections. (A collection is fully judged when every document in it has been assessed for relevance against a specific query or a group of queries.) The discriminative approach used in our proposal is a heuristic for improving IR effectiveness and performance, and it has the advantage of not requiring prior knowledge of relevance judgements. We compare the performance of the proposed TF-ATO with the well-known TF-IDF approach and show that TF-ATO gives better effectiveness in both static and dynamic document collections. In addition, this paper investigates the impact of stop-word removal and of our discriminative approach on TF-IDF and TF-ATO. The results show that both have a positive effect on both term-weighting schemes. More importantly, the proposed discriminative approach improves IR effectiveness and performance with no information on relevance judgements for the collection.
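The sketch below follows one plausible reading of the abstract, so the exact formulas are assumptions: weights are term frequency divided by the document's average term occurrence, and the discriminative step zeroes weights below the document-centroid value.

```python
# TF-ATO sketch (one reading of the abstract, not the paper's exact spec).
import numpy as np

tf = np.array([[3, 0, 1, 0],     # toy term-frequency matrix (docs x terms)
               [0, 2, 0, 4],
               [2, 1, 1, 1]], dtype=float)

# Average term occurrence per document: total occurrences / distinct terms.
ato = tf.sum(axis=1) / np.maximum((tf > 0).sum(axis=1), 1)
w = tf / ato[:, None]                    # TF-ATO weights

centroid = w.mean(axis=0)                # document centroid vector
w_discriminative = np.where(w >= centroid, w, 0.0)  # prune weak weights

print(np.round(w, 2))
print(np.round(w_discriminative, 2))
```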

13.
This paper comparatively analyzes methods to automatically classify case studies of building information modeling (BIM) in construction projects by BIM use. Identifying a project that used BIM in a manner of interest generally takes at least thirty minutes to several hours of collection and review across an average of four information sources. To automate and expedite these analysis tasks, this study deployed natural language processing (NLP) and two commonly used unsupervised learning methods for text classification: latent semantic analysis (LSA) and latent Dirichlet allocation (LDA). The results were validated against a representative supervised learning method for text classification, the support vector machine (SVM). When LSA or LDA detected phrases in a BIM case study whose similarity to the definition of a BIM use exceeded a threshold, the system determined that the project had deployed BIM in the detected way. The BIM uses specified by Pennsylvania State University were adopted for classification. The approach was validated on 240 BIM case studies (512,892 features); a project was labeled "1" when a BIM use was employed and "0" when it was not. Performance was analyzed while varying parameters: document segmentation, feature weighting, the dimensionality-reduction coefficient (k-value), the number of topics, and the number of iterations. LDA yielded the highest F1 score, 80.75% on average. LDA and LSA yielded high recall and low precision in most cases; conversely, SVM yielded high precision and low recall in most cases, with fluctuating F1 scores.
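A minimal sketch of the matching step under stated assumptions (illustrative texts, k-value, and threshold): case-study text and BIM-use definitions are projected into an LSA space and a BIM use is flagged when the cosine similarity clears the threshold.

```python
# LSA similarity of a case study against BIM-use definitions.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

definitions = ["3d coordination clash detection between trades",
               "4d scheduling simulation of construction sequence"]
case_study = ["the model was used to detect clashes between mechanical "
              "and structural trades before installation"]

vec = TfidfVectorizer()
X = vec.fit_transform(definitions + case_study)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

sims = cosine_similarity(lsa[-1:], lsa[:-1])[0]
threshold = 0.5
print([int(s > threshold) for s in sims])  # 1 = BIM use detected
```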

14.
This paper presents a complete system for categorizing handwritten documents, i.e. classifying documents by topic. The categorization approach detects discriminative keywords and then uses the well-known tf-idf representation for document categorization. Two keyword extraction strategies are explored. The first recognizes the whole document; however, its performance drops sharply as the lexicon grows. The second extracts only the discriminative keywords in the handwritten documents; this information extraction strategy relies on integrating a rejection model (or anti-lexicon model) into the recognition system. Experiments were carried out on an unconstrained handwritten document database from an industrial application concerning the processing of incoming mail. The results show that the discriminative keyword extraction system yields better recall/precision trade-offs than the full recognition strategy, and it also outperforms full recognition on the categorization task.

15.
Using topic models to model text, extract its latent topics, and then perform tasks such as part-of-speech tagging and text classification is a research focus in machine learning and text mining. This paper proposes an LDA-based topic model built on a "bag-of-paragraphs" assumption: paragraphs within a text share topics, and consecutive paragraphs are more likely to share the same topic. A conditional random field (CRF) model partitions the paragraphs of an article and judges whether they share a topic. Experiments show that, compared with LDA, the new model extracts topics better and achieves lower perplexity, while also performing well on part-of-speech tagging and text classification.

16.
Building on the idea of bilingual topic models for analyzing the similarity of bilingual texts, a cross-language text-similarity method based on bilingual LDA is proposed. A bilingual LDA model is first trained on a bilingual parallel corpus and then used to predict the topic distributions of a new corpus, mapping its bilingual documents into the same topic vector space. The similarity of the bilingual documents is computed with cosine similarity over their topic distributions, and feature-topic weights are computed with a topic frequency-inverse document frequency method improved from the perspective of inter-class and intra-class dispersion of topic distributions. Experiments show that the improved weighting substantially raises the recall of the bilingual-LDA similarity algorithm, and that the algorithm is not restricted by category and is reasonably reliable.
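Once the bilingual model has mapped documents from both languages into one topic space, the similarity computation is plain cosine over topic vectors, as in this sketch with made-up distributions (training the bilingual LDA itself is out of scope here).

```python
# Cross-language similarity over a shared topic space: cosine of P(z|d).
import numpy as np

def cosine(p, q):
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

# Hypothetical P(z|d) for an English document and two Chinese documents.
en_doc  = np.array([0.70, 0.20, 0.10])
zh_docs = np.array([[0.65, 0.25, 0.10],   # likely a parallel text
                    [0.05, 0.15, 0.80]])  # different topic

for i, zh in enumerate(zh_docs):
    print(f"zh_doc[{i}] similarity: {cosine(en_doc, zh):.3f}")
```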

17.
高阳  杨璐  刘晓升  严建峰 《计算机科学》2015,42(8):279-282, 304
Latent Dirichlet allocation (LDA) is widely used for text clustering, and effectively interpreting queries and documents has been shown to improve information-retrieval performance. Gibbs sampling and belief propagation are two popular approximate inference algorithms for LDA. This paper compares how the two algorithms affect retrieval performance at different topic scales, and compares two ways in which LDA can interpret text: replacing the original query and document with the document's topic distribution, and replacing them with a word-level reconstruction of the document. Experimental results show that the topic-distribution interpretation and the Gibbs sampling algorithm effectively improve retrieval performance.

18.
Traditional topic models cannot adequately capture the sentiment information in text for sentiment classification. This paper proposes a sentiment LDA (latent Dirichlet allocation) model and, by modeling and analyzing text sentiment, extends it to an LDA model of sentiment-word coupling. The model considers not only the topical context of sentiment words but also the coupling relations between words; it introduces a sentiment variable to control the probability distribution of sentiment words and uses a hidden Markov model to model transitions in the coupling relations of sentiment words. Experiments show that the model can analyze sentiment-word couplings and topics simultaneously, not only modeling text sentiment effectively but also improving the accuracy of sentiment classification.

19.
The semantic gap has become a bottleneck for content-based image retrieval in recent years. To bridge the gap and improve retrieval performance, automatic image annotation has emerged as a crucial problem. In this paper, a hybrid approach is proposed to learn the semantic concepts of images automatically. First, we present continuous probabilistic latent semantic analysis (PLSA) and derive its corresponding expectation-maximization (EM) algorithm. Continuous PLSA assumes that elements are sampled from a multivariate Gaussian distribution given a latent aspect, instead of the multinomial distribution of traditional PLSA. We then propose a hybrid framework that employs continuous PLSA to model the visual features of images in a generative learning stage and uses ensembles of classifier chains to classify the multi-label data in a discriminative learning stage. The framework can therefore learn the correlations between features as well as the correlations between words. Because the hybrid approach combines the advantages of generative and discriminative learning, it can predict semantic annotations precisely for unseen images. Finally, we conduct experiments on three baseline datasets, and the results show that our approach outperforms many state-of-the-art approaches.
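For the discriminative stage, scikit-learn ships ensembles of classifier chains directly; the sketch below uses random stand-ins for the continuous-PLSA image features and illustrative label data.

```python
# Ensemble of classifier chains for multi-label image annotation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))                       # image feature vectors
Y = (rng.random((40, 3)) > 0.5).astype(int)        # multi-label annotations

# Chains with different random label orders; averaging their predictions
# lets the ensemble exploit correlations between labels.
chains = [ClassifierChain(LogisticRegression(max_iter=200),
                          order="random", random_state=i) for i in range(5)]
for chain in chains:
    chain.fit(X, Y)

probs = np.mean([c.predict_proba(X[:2]) for c in chains], axis=0)
print((probs > 0.5).astype(int))                   # predicted word labels
```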

20.
The word-based vector space model is the traditional way to represent text in text classification, but it ignores relations between words. Methods that represent text with latent topics, such as latent Dirichlet allocation (LDA), have recently attracted attention because they can capture such relations; however, using only a latent-topic representation can lose word-level information. We use an improved random-forest method to combine the word-based and LDA-topic-based representations: a random forest is built for each of the two feature types, and the final classification is decided by voting. Experimental results on standard datasets show that, compared with methods using a single type of text feature, our method effectively combines the two feature types and improves text-classification performance.
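A minimal sketch of the combination scheme with an illustrative corpus: one random forest on bag-of-words features, another on LDA topic features, combined by soft voting over the averaged class probabilities.

```python
# Two random forests (word features and LDA topic features) + soft voting.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the team won the cup", "shares dropped sharply",
        "a late goal sealed the win", "investors sold the stock"]
y = np.array([0, 1, 0, 1])

X_words = CountVectorizer().fit_transform(docs)
X_topics = LatentDirichletAllocation(n_components=2,
                                     random_state=0).fit_transform(X_words)

rf_words = RandomForestClassifier(random_state=0).fit(X_words, y)
rf_topics = RandomForestClassifier(random_state=0).fit(X_topics, y)

# Soft voting: average the two forests' class-probability estimates.
avg = (rf_words.predict_proba(X_words) + rf_topics.predict_proba(X_topics)) / 2
print(avg.argmax(axis=1))
```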

