Similar Documents
20 similar documents found (search time: 93 ms)
1.
Topic modeling for short texts faces a tough challenge owing to the sparsity problem. An effective solution is to aggregate short texts into long pseudo-documents before training a standard topic model; the main concern of this solution is how the short texts are aggregated. A recently developed self-aggregation-based topic model (SATM) can adaptively aggregate short texts without using heuristic information. However, the model definition of SATM is somewhat rigid, and more importantly, it tends to overfit and is time-consuming on large-scale corpora. To improve on SATM, we propose a generalized topic model for short texts, the latent topic model (LTM). In LTM, we assume that the observable short texts are snippets of normal long texts (original documents) generated by a given standard topic model, but that their original document memberships are unknown. With Gibbs sampling, LTM drives an adaptive aggregation process over short texts while simultaneously estimating the other latent variables of interest. Additionally, we propose a mini-batch scheme for fast inference. Experimental results indicate that LTM is competitive with the state-of-the-art baseline models on short text topic modeling.

2.
User comments, a large class of online short texts, are becoming increasingly prevalent with the development of online communication. These short texts are characterized by their co-occurrence with usually lengthier normal documents. For example, multiple user comments may follow one news article, or multiple reader reviews may follow one blog post. The co-occurring structure inherent in such text corpora is important for efficient learning of topics, but is rarely captured by conventional topic models. To capture this structure, we propose a topic model for co-occurring documents, referred to as COTM. In COTM, we assume there are two sets of topics: formal topics and informal topics, where formal topics can appear in both normal documents and short texts whereas informal topics can appear only in short texts. Each normal document has a probability distribution over the set of formal topics; each short text is composed of two topics, one from the set of formal topics, whose selection is governed by the topic probabilities of the corresponding normal document, and the other from the set of informal topics. We also develop an online algorithm for COTM to deal with large-scale corpora. Extensive experiments on real-world datasets demonstrate that COTM and its online algorithm outperform state-of-the-art methods by discovering more prominent, coherent and comprehensive topics.

3.
The development of micro-blogging, which generates large volumes of short texts, has made communication convenient, but discovering topics in those short texts has become a genuinely hard problem. Traditional topic models such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA) struggle with short texts because they suffer from severe data sparsity. Meanwhile, the K-means clustering algorithm can produce discriminative topics when the dataset is concentrated and the differences between topic documents are distinct. In this paper, the Biterm Topic Model (BTM) is employed to process short micro-blog texts and alleviate the sparsity problem, and K-means clustering is integrated into BTM for further topic discovery. The results of experiments on Sina micro-blog short text collections demonstrate that our method can discover topics effectively.
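The second stage of the pipeline described above, clustering documents by their inferred topic distributions, can be sketched independently of BTM itself. Below is a minimal K-means over per-document topic vectors, assuming the distributions (here a toy `theta`) have already been produced by BTM or any other topic model:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means on the rows of X (e.g. per-document topic distributions)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each row to its nearest center (squared Euclidean distance)
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# toy "topic distributions": two clearly separated groups of documents
theta = np.array([[0.90, 0.05, 0.05],
                  [0.85, 0.10, 0.05],
                  [0.05, 0.05, 0.90],
                  [0.10, 0.05, 0.85]])
labels, _ = kmeans(theta, k=2)
```

With well-separated topic vectors like these, the first two documents land in one cluster and the last two in the other.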

4.
李卫疆, 王真真, 余正涛. Computer Science (《计算机科学》), 2017, 44(2): 257-261, 274
In recent years, the growth of Weibo and other social networks has made communication convenient. Because each micro-blog post is limited to 140 characters, a large volume of short texts is produced, and discovering topics from such short texts has become an important research problem. Traditional topic models, such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA), face severe data sparsity when processing short texts. In addition, when a dataset is concentrated and the differences between topic documents are distinct, the K-means clustering algorithm can produce well-separated topics. This paper introduces the BTM topic model to handle short texts such as micro-blog data, alleviating the sparsity problem, and integrates K-means clustering to cluster the topics discovered by BTM. Experiments on Sina Weibo short text collections demonstrate that the method discovers topics effectively.

5.
Traditional topic models rely heavily on word co-occurrence patterns to generate document topics, and the data sparsity caused by the lack of context in short texts is the main obstacle to applying them to short texts. This paper therefore proposes a semantically enhanced topic model for short texts that combines the Dirichlet Multinomial Mixture (DMM) model with word embeddings. The algorithm trains both global and local word embeddings to obtain vector representations of words, fuses the global and local embedding vectors to compute semantic relatedness between words, and performs semantic enhancement through the weights of topic-related words. Experiments show that the proposed model represents topic coherence more accurately and improves classification accuracy on short texts.
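The fusion step described above can be illustrated with a small sketch: given a global and a local embedding table, relatedness is computed as the cosine similarity of a convex combination of the two vectors. The tables, the mixing weight `alpha`, and the example words are all hypothetical stand-ins, not the paper's trained embeddings:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def fused_similarity(w1, w2, glob, loc, alpha=0.5):
    """Semantic relatedness from a convex combination of global and
    local embedding vectors (alpha is a hypothetical mixing weight)."""
    e1 = alpha * glob[w1] + (1 - alpha) * loc[w1]
    e2 = alpha * glob[w2] + (1 - alpha) * loc[w2]
    return cosine(e1, e2)

# toy embeddings (hypothetical): "cat" and "dog" close, "car" far away
glob = {"cat": np.array([1.0, 0.1]), "dog": np.array([0.9, 0.2]),
        "car": np.array([0.0, 1.0])}
loc = {"cat": np.array([0.8, 0.2]), "dog": np.array([0.9, 0.1]),
       "car": np.array([0.1, 0.9])}

sim_related = fused_similarity("cat", "dog", glob, loc)
sim_unrelated = fused_similarity("cat", "car", glob, loc)
```

In the model, such relatedness scores would then feed the weighting of topic-related words during sampling.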

6.
To address the brevity and feature sparsity of short texts, a new short text classification method is proposed that fuses word category features with semantics. An improved feature selection method picks the words that best represent each category to build a feature dictionary, while the latent Dirichlet allocation (LDA) topic model selects optimal topics from background knowledge to form new short text features; a classifier is then trained on this basis. Support vector machine (SVM) and k-nearest neighbor (k-NN) classifiers were applied to Sohu news headlines from the Sogou corpus, and the experimental results show that the method effectively improves short text classification performance.

7.
Traditional topic models on short texts, dominated by LDA, are vulnerable to feature sparsity, noise, and redundancy. This survey first reviews the evolution of text feature representations and the current state of topic models for short texts, then systematically summarizes the generative processes of the LDA model and the Dirichlet Multinomial Mixture (DMM) model together with the corresponding Gibbs sampling parameter derivations. For selecting the optimal number of topics, four common optimization metrics are compared in detail. Finally, it analyzes extensions of topic models from the past two years and simple applications to online public opinion analysis, and on that basis points out future research directions and priorities for topic models.

8.
Traditional emotion models, when tagging single emotions in documents, often ignore the fact that most documents convey complex human emotions. In this paper, we combine emotion analysis with topic models to find complex emotions in documents, as well as their intensity, and study how document emotions vary with topics. Hierarchical Bayesian networks are employed to generate the latent topic and emotion variables. On average, our model for single emotion classification outperforms traditional supervised machine learning models such as SVM and Naive Bayes. The model for complex emotion classification also achieves promising results. We thoroughly analyze the impact of vocabulary quality and topic quantity on emotion and intensity prediction in our experiments. The distributions of topics such as Friend and Job are found to be sensitive to the documents' emotions, which we call emotion topic variation in this paper. This reveals a deeper relationship between topics and emotions.

9.
The widespread use of social media has made short text clustering an important research topic. However, the high dimensionality and sparsity of short text word vectors limit the effectiveness of traditional text clustering methods, and because of word sparsity, the discriminative power of words with respect to cluster structure is especially important for learning the class structure of short texts. This paper proposes a probabilistic short text clustering framework that learns word discriminative power, and verifies the effectiveness of this learning for class structure discovery in the classic text clustering models LDA (Latent Dirichlet Allocation), BTM (Biterm Topic Model), and GSDMM (Gibbs Sampling Dirichlet Multinomial Mixture model). Model parameters are estimated with a Gibbs sampling algorithm. Experiments on real-world datasets show that probabilistic models equipped with word discriminative power learning improve the clustering performance of the existing models.

10.
When clustering the vast number of Web services on the network, traditional modeling methods perform poorly because the data describing the services are sparse. This paper proposes a Web service clustering method based on the BTM topic model. The method first uses BTM to learn the latent topics of the entire collection of Web service description documents and infers the topic distribution of each document, then applies the K-means algorithm to cluster the Web services. Compared with LDA- and TF-IDF-based methods, the proposed approach achieves better cluster purity, entropy, and F-measure. Experiments show that it effectively mitigates the data sparsity caused by the short-text nature of Web service descriptions and significantly improves service clustering quality.

11.
Micro-blog Topic Detection Based on Two-Level Clustering over Thread Trees (cited 1 time: 0 self-citations, 1 by others)
As a new mode of information publishing, micro-blogging has greatly increased the openness and interactivity of online information, but it has also caused explosive growth of content in the micro-blog space. Topic detection techniques that organize micro-blog texts by topic can help users efficiently access personalized information or hot topics in a rapidly changing information environment. Targeting the short, semi-structured, context-rich nature of micro-blog texts, this paper proposes a two-level clustering topic detection method based on thread trees: a Temporal-Author-Topic (TAT) model that fuses temporal features and author information performs local clustering within each thread tree, which also filters spam micro-blogs, and the merged thread trees are then used for global topic detection. Experimental results show that the method copes well with data sparsity, reaching an F-measure of 31.2% for topic detection.

12.
Most current supervised or semi-supervised latent Dirichlet allocation (LDA) models incorporate extra information in either the DSTM (downstream supervised topic model) or USTM (upstream supervised topic model) style, which gives them strong topic extraction and dimensionality reduction abilities but leaves them unable to handle academic documents that carry several kinds of extra information. Building on a study of LDA and its extensions, this paper proposes ART (Author & Reference Topic), a probabilistic topic model that combines DSTM and USTM: it models document authors in the USTM style and cited references in the DSTM style, so it can effectively analyze documents containing both author and reference information. Model parameters are learned with Stochastic EM Sampling, and the results are compared with the Labeled LDA and DMR models. Experiments show that ART not only extracts and clusters document topics efficiently, but also discriminates document authors and ranks cited references well.

13.
With the growth of the Web, topic extraction is applied ever more widely, particularly to academic literature. Although an academic abstract is a short text, its high dimensionality makes it hard for topic models to handle, and its time-sensitive nature means topic mining easily ignores the time factor, yielding uneven and unclear topic distributions. To address these problems, this paper proposes TTF-LDA (time + TF-IDF + latent Dirichlet allocation), a topic clustering model for academic abstracts. TF-IDF feature extraction selects feature words from the abstracts, effectively reducing the input dimensionality of the LDA model. The publication time of each document is incorporated by building time windows that bound the period of topic analysis and by weighting feature words according to publication time; the summed time weights of the feature words, together with a topic-guided feature word lexicon, serve as influence factors for LDA. Experiments on a crawled dataset show that, for the same number of topics, the model is less confused and separates topics more clearly than standard LDA and MVC-LDA, better matching the characteristics of academic literature.
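The TF-IDF pruning step that shrinks the LDA vocabulary can be sketched as follows. The scoring formula is the standard tf·idf, while the toy corpus and the `top_k` cutoff are illustrative assumptions (the paper's time-weighting scheme is omitted):

```python
import math
from collections import Counter

def tfidf_top_terms(docs, top_k=2):
    """Score terms by TF-IDF and keep the top_k per document,
    shrinking the vocabulary fed to a downstream LDA model."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    result = []
    for d in docs:
        tf = Counter(d.split())
        total = sum(tf.values())
        scores = {t: (c / total) * math.log(n / df[t]) for t, c in tf.items()}
        result.append([t for t, _ in sorted(scores.items(),
                                            key=lambda kv: -kv[1])[:top_k]])
    return result

docs = ["the lda topic model",
        "the neural network model",
        "the graph clustering method"]
tops = tfidf_top_terms(docs)
```

A term occurring in every document ("the") gets idf 0 and is dropped, while distinctive terms survive into the reduced vocabulary.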

14.
To tackle semantic sparsity and overfitting in few-shot short text classification, a classification model named HGCN-RN is built on a heterogeneous graph convolutional network whose dual attention mechanism learns both the importance of different neighboring nodes and the importance of different node types to the current node. The BTM topic model extracts topic information from the short text corpus, and a short text heterogeneous information network integrating entities and topics is constructed to relieve semantic sparsity. On this basis, a heterogeneous graph convolutional network with random neighbor dropping and dual attention extracts semantic information from the network, while random neighbor dropping also serves as data augmentation to mitigate overfitting. Experiments on three short text datasets show that, with only 10 labeled samples per class, the model still achieves the best performance compared with baseline models such as LSTM, Text GCN, and HGAT.

15.
We propose an information filtering system that matches documents against a user profile using latent semantics obtained by singular value decomposition (SVD) and independent component analysis (ICA). In information filtering systems, it is useful to analyze the latent semantics of documents, and ICA is one method for doing so. We assume that topics are independent of each other; hence, when ICA is applied to documents, we obtain the topics included in the documents. By using SVD to remove noise before applying ICA, we can improve the accuracy of topic extraction. By representing the documents with those topics, we improve the recommendations produced by the user profile. In addition, we construct a user profile with a genetic algorithm (GA) and evaluate it by 11-point average precision. We carried out an experiment using a test collection to confirm the advantages of the proposed method. This work was presented in part at the 10th International Symposium on Artificial Life and Robotics, Oita, Japan, February 4–6, 2005.
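The SVD noise-removal stage can be sketched with a truncated decomposition; the subsequent ICA step (e.g. FastICA) would then be applied to the reduced document factors. The toy matrix below is a hypothetical stand-in for a term-document matrix, built as low-rank structure plus small noise:

```python
import numpy as np

def svd_denoise(X, k):
    """Keep only the top-k singular directions of a document-term
    matrix, the SVD noise-removal step applied before ICA."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_denoised = (U[:, :k] * s[:k]) @ Vt[:k]
    doc_factors = U[:, :k] * s[:k]   # reduced document representation for ICA
    return X_denoised, doc_factors

# toy corpus matrix: rank-2 structure plus small noise
rng = np.random.default_rng(0)
X = rng.random((6, 2)) @ rng.random((2, 8)) + 0.01 * rng.random((6, 8))
X2, docs2 = svd_denoise(X, 2)
```

Keeping two singular directions recovers the underlying rank-2 structure almost exactly, discarding the noise.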

16.
The quality of a project document's topic representation directly affects downstream reviewer recommendation. To exploit the relationships among fragments of project documents for project topic analysis, a project topic model construction method based on semi-supervised graph clustering is proposed. The method first analyzes the structure of project documents and extracts structural information that characterizes topics, such as project titles and keywords; combined with external resources that characterize expert topics, such as expert evidence documents and an expert-topic relationship network, it defines and extracts relationship features between project document fragments. It then uses the different relationship types to compute similarity between fragments and builds an undirected graph over them. Finally, using the labeled relationship features as supervision, a semi-supervised graph clustering algorithm clusters the fragments, thereby extracting the project topics. Comparative experiments on project topic extraction verify the effectiveness of the proposed method and show that document structural features, expert evidence documents, and the expert-topic relationship network all help guide the construction of the project topic model.

17.
Topic modeling is a powerful tool for discovering the underlying or hidden structure in text corpora. Typical algorithms for topic modeling include probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). Despite their different inspirations, both approaches are instances of generative models, whereas the discriminative structure of the documents is ignored. In this paper, we propose the locally discriminative topic model (LDTM), a novel topic modeling approach that considers both the generative and discriminative structures of the data space. Different from PLSA and LDA, in which the topic distribution of a document depends on all the other documents, LDTM takes the local perspective that the topic distribution of each document depends strongly on its neighbors. By modeling the local relationships of documents within each neighborhood via a local linear model, we learn topic distributions that vary smoothly along the geodesics of the data manifold and can better capture the discriminative structure in the data. The experimental results on text clustering and web page categorization demonstrate the effectiveness of our proposed approach.

18.
In this paper, we propose a hierarchical probabilistic model for scene classification. This model infers the local class-shared and local class-specific latent topics respectively. Our approach consists of first learning the latent topics from the BoW representation and subsequently training an SVM on the distribution of the latent topics. This approach is compared to using traditional graphical models to learn the latent topics and training an SVM on the topic distribution. Experiments on a variety of datasets show that the topics learned by our model have higher discriminative power.

19.
Nonnegative matrix factorization (NMF) has been widely used in topic modeling of large-scale document corpora, where a set of underlying topics is extracted by a low-rank factor matrix from NMF. However, the resulting topics often convey only general, thus redundant, information about the documents rather than information that might be minor but potentially meaningful to users. To address this problem, we present a novel ensemble method based on nonnegative matrix factorization that discovers meaningful local topics. Our method brings the idea of an ensemble model, which has shown advantages in supervised learning, into an unsupervised topic modeling context. That is, our model successively performs NMF on the residual matrix obtained from previous stages and generates a sequence of topic sets. Our update algorithm is novel in two respects: first, it utilizes the residual matrix, inspired by a state-of-the-art gradient boosting model; second, it applies a sophisticated local weighting scheme to the given matrix to enhance the locality of topics, which in turn delivers high-quality, focused topics of interest to users. We subsequently extend this ensemble model by adding keyword- and document-based user interaction to introduce user-driven topic discovery.
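A minimal sketch of the successive-residual idea, using plain multiplicative-update NMF and omitting the paper's local weighting scheme and user interaction; the matrix sizes, stage count, and rank are illustrative assumptions:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Basic multiplicative-update NMF: V is approximated by W @ H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, k)) + 0.1, rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)
    return W, H

def ensemble_nmf(V, k, stages=3):
    """Successively factorize the nonnegative residual, producing one
    topic set (H) per stage, in the spirit of the boosting-style ensemble."""
    topics, R = [], V.copy()
    for _ in range(stages):
        W, H = nmf(R, k)
        topics.append(H)                 # topic set discovered at this stage
        R = np.maximum(R - W @ H, 0.0)   # nonnegative residual for next stage
    return topics, R

rng = np.random.default_rng(1)
V = rng.random((8, 10)) + 0.1            # toy document-term matrix
topics, R = ensemble_nmf(V, k=2, stages=3)
```

Because each stage subtracts its reconstruction, later stages are pushed toward structure the earlier, more general topics did not explain.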

20.
Probabilistic topic models have attracted wide attention from researchers in recent years. LDA (Latent Dirichlet Allocation) is one of the representative generative models among them and can detect the latent topics of a text. This paper proposes an LDA-based topic feature that measures the distance between the document's topic distribution and each sentence's topic distribution. Combined with features commonly used in traditional multi-document summarization, sentence weights are computed, and the highest-scoring sentences are extracted to form the summary. Experimental results show that adding the LDA topic feature significantly improves summarization performance.
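The abstract does not specify the distance measure; as an illustrative stand-in, the topic feature can be sketched with the Jensen-Shannon divergence between the document's and each sentence's topic distributions, where a smaller distance marks a sentence closer to the document's central topics:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two topic distributions
    (a symmetric, hypothetical stand-in for the distance feature)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

doc_theta = [0.70, 0.20, 0.10]   # document-level topic distribution
on_topic  = [0.60, 0.30, 0.10]   # sentence close to the document's topics
off_topic = [0.05, 0.15, 0.80]   # sentence far from them

d_close = js_divergence(doc_theta, on_topic)
d_far = js_divergence(doc_theta, off_topic)
```

In a summarizer this distance would be inverted or negated and combined with the other sentence features to rank sentences for extraction.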


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号