Similar Documents
19 similar documents found (search time: 171 ms)
1.
Abstract: To retrieve the topic information users need quickly and accurately from the growing body of Mongolian-language text, this paper proposes a method that fuses the LDA topic model with language models. The method first builds unigram and bigram language models over Mongolian text to obtain each document's language probability distribution. It then builds an LDA topic model, estimating the model parameters with Gibbs sampling to uncover each document's latent topic probability distribution. Finally, it computes a linear combination of the document's topic distribution and language distribution, and uses this combined distribution to measure the similarity between document topics and query keywords, returning the documents most relevant to the query's topic. The language models exploit the grammatical characteristics of Mongolian, while LDA generalizes well at latent-semantics mining and topic discovery, so combining the two achieves better topic-level semantic retrieval of Mongolian documents and improves retrieval accuracy. Experimental results show that the fused LDA/language-model method captures topic semantics better than either model alone.
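The fusion step described above can be sketched as a linear interpolation of a unigram language model with an LDA-derived word probability. This is a minimal illustration with invented toy distributions (the names `lam`, `doc_topics`, `topic_words` and all numeric values are assumptions, not the paper's trained parameters):

```python
# Sketch of fusing a document language model with an LDA topic model.
# All distributions are toy values for illustration only.

def lm_prob(word, doc_counts, total):
    # Unigram language model with add-one smoothing.
    vocab = len(doc_counts)
    return (doc_counts.get(word, 0) + 1) / (total + vocab)

def lda_prob(word, doc_topics, topic_words):
    # P(word | doc) = sum_k P(word | topic k) * P(topic k | doc)
    return sum(doc_topics[k] * topic_words[k].get(word, 0.0)
               for k in range(len(doc_topics)))

def fused_prob(word, doc_counts, total, doc_topics, topic_words, lam=0.7):
    # Linear combination of the two distributions, as the abstract describes.
    return (lam * lm_prob(word, doc_counts, total)
            + (1 - lam) * lda_prob(word, doc_topics, topic_words))

doc_counts = {"model": 3, "topic": 2, "corpus": 1}
doc_topics = [0.6, 0.4]                        # P(topic | doc)
topic_words = [{"model": 0.5, "topic": 0.3},   # P(word | topic 0)
               {"corpus": 0.7, "model": 0.1}]  # P(word | topic 1)

p = fused_prob("model", doc_counts, 6, doc_topics, topic_words)
```

Queries would then be scored against this fused distribution rather than against either model alone.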

2.
Multi-Document Automatic Summarization with the LDA Topic Model   (Cited: 3; self: 0; others: 3)
In recent years, representing the multi-document summarization problem with probabilistic topic models has attracted researchers' attention. LDA (latent Dirichlet allocation) is one of the representative generative probabilistic topic models. This paper proposes an LDA-based summarization method that uses perplexity to determine the number of topics in the LDA model, Gibbs sampling to obtain the sentence-topic and topic-word probability distributions, and the sum of topic weights within sentences to determine each topic's importance. From the topic and sentence probability distributions of the LDA model, two different sentence-weighting models are derived. Using the ROUGE evaluation standard, the method is compared on the generic multi-document summarization test set DUC2002 against the state-of-the-art SumBasic method and two other LDA-based multi-document summarization methods. The results show that the proposed method outperforms SumBasic on all ROUGE metrics and also compares favorably with the other LDA-based summarizers.
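One of the weighting schemes described above can be sketched as follows: a topic's importance is the sum of its weight over all sentences, and a sentence's score sums the importance of the topics it expresses. This is an illustrative simplification, not the paper's exact formula; the per-sentence topic distributions are made up:

```python
# Illustrative sentence scoring from per-sentence topic distributions.

def topic_importance(sent_topics):
    # Importance of topic t = sum of its weight across all sentences.
    k = len(sent_topics[0])
    return [sum(s[t] for s in sent_topics) for t in range(k)]

def sentence_scores(sent_topics):
    imp = topic_importance(sent_topics)
    # A sentence scores higher when it concentrates on important topics.
    return [sum(s[t] * imp[t] for t in range(len(imp))) for s in sent_topics]

sent_topics = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]  # toy P(topic | sentence)
scores = sentence_scores(sent_topics)
best = max(range(len(scores)), key=scores.__getitem__)  # pick for the summary
```

The highest-scoring sentences would then be extracted to form the summary.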

3.
Addressing the fact that different collections contribute unequally to the final results in distributed information retrieval, this paper proposes a collection selection method based on the LDA topic model. The method first obtains each collection's description via query-based sampling. It then builds an LDA topic model to compute topic-level relevance between the query and documents. Next, it estimates the overall relevance between the query and the sampled documents by combining keyword relevance with topic relevance, and from this estimates the query's relevance to each collection. Finally, it selects the M most relevant collections for retrieval. The experiments use Rm, P@n, and MAP as evaluation metrics to verify the performance of the collection selection method. The results show that it locates the collections containing more relevant documents more accurately, improving both the recall and the precision of the retrieval results.
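The combination-and-selection step above can be sketched as a weighted blend of the two relevance signals followed by a top-M cut. The collection names, scores, and the mixing weight `alpha` are all invented for illustration:

```python
# Sketch of collection selection: blend keyword relevance with
# topic-model relevance, then keep the top-M collections.

def combined_relevance(keyword_score, topic_score, alpha=0.5):
    return alpha * keyword_score + (1 - alpha) * topic_score

def select_collections(collections, m=2, alpha=0.5):
    ranked = sorted(
        collections,
        key=lambda c: combined_relevance(c["kw"], c["topic"], alpha),
        reverse=True)
    return [c["name"] for c in ranked[:m]]

collections = [
    {"name": "news",   "kw": 0.9, "topic": 0.2},
    {"name": "papers", "kw": 0.4, "topic": 0.9},
    {"name": "blogs",  "kw": 0.1, "topic": 0.1},
]
top = select_collections(collections, m=2)
```

Only the selected collections are then queried, which is what limits the cost of the distributed search.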

4.
A New Feature Selection Algorithm Based on Multiple Heuristics   (Cited: 25; self: 1; others: 24)
朱颢东  钟勇 《计算机应用》2009,29(3):849-851

5.
A Query Expansion Method Based on Document and Search-Result Context   (Cited: 1; self: 0; others: 1)
蒋辉  阳小华 《计算机应用》2009,29(3):852-853
In query expansion methods that weight candidate keywords by the context of keywords in the query results and take the highest-weighted words as expansion terms, the candidates are drawn from the context of keywords in documents, so the approach suffers from topic drift. To address this problem, the paper proposes filtering the initial query results, keeping only the search results whose context is similar to the source document, and using them to select the query expansion terms. Experimental results show that the method obtains more appropriate query expansion terms.

6.
To enable semantics-based ciphertext retrieval and improve its accuracy and efficiency, this paper proposes a multi-keyword ranked symmetric searchable encryption scheme based on the biterm topic model (BTM), called BTM-MRSE. The topic model captures the latent semantics between keywords and documents; the user submits the probability distribution of the query keywords as the search trapdoor and obtains the most relevant documents according to the semantic relevance score between the query keywords and the documents. By replacing specific keywords in ciphertext retrieval with semantics-based topics, the scheme decouples keywords from document identifiers, strengthening the privacy protection of both document keywords and query keywords. To reduce index size, a two-layer index structure is proposed: a keyword-topic secure index built on a balanced binary tree, combined with a topic-document secure index built on an inverted index. On one hand, the topic model reduces the dimensionality of the vectors stored in index nodes, improving retrieval efficiency; at the same time, the two-level index mechanism based on the balanced binary tree improves ciphertext retrieval efficiency further. Security analysis proves that the proposed scheme is secure and effective, and comparative experiments on real-world datasets show substantial gains in both ciphertext-retrieval accuracy and efficiency.
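The two-layer index idea can be sketched in plaintext form (the encryption layer, trapdoors, and the balanced-tree organization of the actual scheme are omitted): a keyword-to-topic table sits in front of a topic-to-document inverted index. All keywords, topics, and relevance values below are invented:

```python
# Plaintext sketch of a two-layer index: keyword -> topics -> documents.
# The real scheme stores both layers in encrypted form.

keyword_topics = {            # layer 1: keyword -> topic distribution
    "encryption": {0: 0.8, 1: 0.2},
    "retrieval":  {0: 0.3, 1: 0.7},
}
topic_docs = {                # layer 2: topic -> (doc id, relevance)
    0: [("d1", 0.9), ("d2", 0.4)],
    1: [("d2", 0.8), ("d3", 0.6)],
}

def search(query_terms, top_k=2):
    # Accumulate doc scores through the topic layer, then rank.
    scores = {}
    for term in query_terms:
        for topic, w in keyword_topics.get(term, {}).items():
            for doc, rel in topic_docs[topic]:
                scores[doc] = scores.get(doc, 0.0) + w * rel
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

result = search(["encryption"])
```

Because queries touch only a small number of topics rather than the full keyword vocabulary, the vectors involved stay low-dimensional, which is the efficiency argument the abstract makes.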

7.
Chinese Information Retrieval Based on Document Examples   (Cited: 2; self: 0; others: 2)
Traditional information retrieval systems build indexes and retrieve information based on keywords. Such systems return large document sets, have low precision, and make it difficult for ordinary users to construct queries. To address this, the paper proposes document-example-based information retrieval: using an existing document as a sample, the system retrieves all documents in the collection that are similar to the sample. The paper presents the approach and implementation techniques for document-example-based Chinese information retrieval. Preliminary experimental results show that the method is effective.

8.
To address news topics on some websites being uncategorized or poorly categorized, this paper applies the LDA model to the classification of news topics. It first builds an LDA topic model over the news dataset, selecting the optimal number of topics by a standard Bayesian method and estimating the model parameters indirectly via Gibbs sampling to obtain the topic probability distributions of the dataset. It then computes the semantic similarity between documents using the JS distance to obtain a similarity matrix. Finally, it clusters the news documents with an incremental text-clustering algorithm, dividing the news topics into several structurally distinct subtopics. Experimental results show that the method partitions news topics effectively.
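The similarity step above (JS distance between two document-topic distributions) can be sketched directly; the toy topic vectors below are made up for illustration:

```python
import math

# Jensen-Shannon divergence between document-topic distributions:
# smaller value = more semantically similar documents.

def kl(p, q):
    # Kullback-Leibler divergence, skipping zero-probability terms.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

doc_a = [0.7, 0.2, 0.1]   # toy P(topic | doc) vectors
doc_b = [0.6, 0.3, 0.1]
doc_c = [0.1, 0.1, 0.8]

close = js_divergence(doc_a, doc_b)  # similar topic mix
far = js_divergence(doc_a, doc_c)    # very different topic mix
```

The pairwise values fill the similarity matrix that the incremental clustering algorithm then consumes; JS divergence is symmetric and bounded, which makes it a convenient choice here.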

10.
By optimizing the latent Dirichlet allocation (LDA) topic model in the Spark MLlib machine-learning library, this paper proposes an improved method for mining hot academic research topics. It models the keywords of academic papers with LDA, uses perplexity to determine the optimal number of topics, and converts the document-topic and topic-word probability matrices into document-topic and topic-word score matrices. Topics are then ranked by computing the similarity between a background topic and each topic in the score matrices, surfacing the research hotspots in the academic papers. Experimental results show that the method improves the mining quality of the LDA topic model and helps discover valuable hot research topics.
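Perplexity, the model-selection criterion used above (and in item 2), is the exponential of the negative average held-out log-likelihood per token; the candidate topic counts and log-likelihood values below are invented for illustration:

```python
import math

# Choosing the topic count K by held-out perplexity (lower is better).

def perplexity(log_likelihoods, num_tokens):
    # perplexity = exp(-(sum of log-likelihoods) / token count)
    return math.exp(-sum(log_likelihoods) / num_tokens)

# Pretend we trained models with K = 5, 10, 20 topics and measured the
# held-out log-likelihood of 1000 tokens under each (made-up numbers).
candidates = {5: -6200.0, 10: -5800.0, 20: -5900.0}
scores = {k: perplexity([ll], 1000) for k, ll in candidates.items()}
best_k = min(scores, key=scores.get)   # lowest perplexity wins
```

In practice one would sweep K over a grid, compute perplexity on a held-out split for each, and pick the minimum (or the elbow of the curve).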

11.
Previous work on the one-class collaborative filtering (OCCF) problem can be roughly categorized into pointwise methods, pairwise methods, and content-based methods. A fundamental assumption of these approaches is that all missing values in the user-item rating matrix are considered negative. However, this assumption may not hold because the missing values may contain negative and positive examples. For example, a user who fails to give positive feedback about an item may not necessarily dislike it; he may simply be unfamiliar with it. Meanwhile, content-based methods, e.g. collaborative topic regression (CTR), usually require textual content information of the items, and thus their applicability is largely limited when the text information is not available. In this paper, we propose to apply the latent Dirichlet allocation (LDA) model on OCCF to address the above-mentioned problems. The basic idea of this approach is that items are regarded as words, users are considered as documents, and the user-item feedback matrix constitutes the corpus. Our model drops the strong assumption that missing values are all negative and only utilizes the observed data to predict a user’s interest. Additionally, the proposed model does not need content information of the items. Experimental results indicate that the proposed method outperforms previous methods on various ranking-oriented evaluation metrics. We further combine this method with a matrix factorization-based method to tackle the multi-class collaborative filtering (MCCF) problem, which also achieves better performance on predicting user ratings.
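The users-as-documents, items-as-words analogy above can be sketched with trained LDA parameters: a user-topic matrix theta and topic-item matrices phi, from which a user's predicted interest in an item is P(item | user). The user, items, and all parameter values below are invented for illustration:

```python
# Sketch of LDA-style interest prediction for one-class collaborative
# filtering: users play the role of documents, items the role of words.

def interest(user, item, theta, phi):
    # P(item | user) = sum_k P(topic k | user) * P(item | topic k)
    return sum(theta[user][k] * phi[k].get(item, 0.0)
               for k in range(len(phi)))

theta = {"alice": [0.9, 0.1]}                  # user-topic distribution
phi = [{"scifi_movie": 0.6, "drama": 0.2},     # topic-item distributions
       {"cookbook": 0.7, "drama": 0.1}]

# Rank candidate items for alice by predicted interest.
items = ["scifi_movie", "cookbook", "drama"]
ranking = sorted(items, key=lambda i: interest("alice", i, theta, phi),
                 reverse=True)
```

Note that only observed feedback would be used to fit theta and phi, which is how the model avoids treating every missing entry as negative.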

12.
We argue that expert finding is sensitive to multiple document features in an organizational intranet. These document features include multiple levels of association between experts and a query topic, from the sentence and paragraph levels up to the document level; document authority information such as the PageRank, indegree, and URL length of documents; and internal document structures that indicate the experts’ relationship with the content of documents. Our assumption is that expert finding can largely benefit from the incorporation of these document features. However, existing language modeling approaches for expert finding have not sufficiently taken these document features into account. We propose a novel language modeling approach, which integrates multiple document features, for expert finding. Our experiments on two large-scale TREC Enterprise Track datasets, i.e., the W3C and CSIRO datasets, demonstrate that the natures of the two organizational intranets and the two types of expert finding tasks, i.e., key contact finding for CSIRO and knowledgeable person finding for W3C, influence the effectiveness of different document features. Our work provides insights into which document features work for certain types of expert finding tasks, and helps design expert finding strategies that are effective for different scenarios. Our main contribution is to develop an effective formal method for modeling multiple document features in expert finding, and to conduct a systematic investigation of their effects. It is worth noting that our approach achieves better results in terms of MAP than previous language-model-based approaches and the best automatic runs in both the TREC 2006 and TREC 2007 expert search tasks.

13.
D2S: Document-to-sentence framework for novelty detection   (Cited: 2; self: 1; others: 1)
Novelty detection aims at identifying novel information from an incoming stream of documents. In this paper, we propose a new framework for document-level novelty detection using document-to-sentence (D2S) annotations and discuss the applicability of this method. D2S first segments a document into sentences, determines the novelty of each sentence, and then computes the document-level novelty score based on a fixed threshold. Experimental results on APWSJ data show that D2S outperforms standard document-level novelty detection in terms of redundancy-precision (RP) and redundancy-recall (RR). We applied D2S to the document-level data from the TREC 2004 and TREC 2003 Novelty Tracks and found that D2S is useful in detecting novel information in data with a high percentage of novel documents. However, D2S shows a strong capability to detect redundant information regardless of the percentage of novel documents. D2S has been successfully integrated into a real-world novelty detection system.
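The D2S pipeline described above can be sketched as: score each sentence's novelty as one minus its maximum overlap with previously seen sentences, then call the document novel if the average sentence novelty clears a fixed threshold. The Jaccard overlap measure, the threshold value, and the example sentences are simplified stand-ins, not the paper's actual components:

```python
# Sketch of document-to-sentence (D2S) novelty scoring.

def overlap(a, b):
    # Jaccard word overlap between two sentences (stand-in measure).
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def document_novelty(sentences, seen, threshold=0.5):
    # Sentence novelty = 1 - max overlap with any previously seen sentence;
    # the document is novel if the mean sentence novelty clears the threshold.
    novelties = [1 - max((overlap(s, t) for t in seen), default=0.0)
                 for s in sentences]
    score = sum(novelties) / len(novelties)
    return score, score >= threshold

seen = ["the market fell sharply today"]
doc = ["the market fell sharply today", "a new chip factory was announced"]
score, is_novel = document_novelty(doc, seen)
```

One repeated sentence and one entirely new one yield a score of 0.5, right at the threshold, illustrating how the fixed cutoff turns sentence-level evidence into a document-level decision.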

14.
We present a statistical method called Covering Topic Score (CTS) to predict query performance for information retrieval. Estimation is based on how well the topic of a user's query is covered by documents retrieved from a certain retrieval system. Our approach is conceptually simple and intuitive, and can be easily extended to incorporate features beyond bag-of-words, such as phrases and proximity of terms. Experiments demonstrate that CTS significantly correlates with query performance in a variety of TREC test collections, and in particular CTS gains more prediction power from features of phrases and proximity of terms. We compare CTS with previous state-of-the-art methods for query performance prediction, including clarity score and robustness score. Our experimental results show that CTS consistently performs better than, or at least as well as, these other methods. In addition to its high effectiveness, CTS is also shown to have very low computational complexity, meaning that it can be practical for real applications.

15.
Semantic smoothing, which incorporates synonym and sense information into the language models, is effective and potentially significant for improving retrieval performance. Previously implemented semantic smoothing models, such as the translation model, have shown good experimental results. However, these models are unable to incorporate contextual information. To overcome this limitation, we propose a novel context-sensitive semantic smoothing method that decomposes a document into a set of weighted context-sensitive topic signatures and then maps those topic signatures into query terms. The language model with such context-sensitive semantic smoothing is referred to as the topic signature language model. In detail, we implement two types of topic signatures, depending on whether an ontology exists in the application domain: one is the ontology-based concept and the other is the multiword phrase. The mapping probabilities from each topic signature to individual terms are estimated through the EM algorithm. Document models based on topic signature mapping are then derived. The new smoothing method is evaluated on the TREC 2004/2005 Genomics Track with ontology-based concepts, as well as the TREC Ad Hoc Track (Disks 1, 2, and 3) with multiword phrases. Both experiments show significant improvements over the two-stage language model, as well as over the language model with context-insensitive semantic smoothing.

16.
Microblog retrieval has become a research hotspot in information retrieval in recent years. Studies have shown that microblog retrieval is time-sensitive. Prior work has derived a variety of retrieval models from different time-sensitivity assumptions, e.g., that newer documents are more relevant, or that documents closer to a burst moment are more relevant, each improving retrieval effectiveness to some extent. But these assumptions come mainly from observation: they are intuitive simplifications that capture only one aspect of how time influences microblog ranking. This paper verifies that microblog retrieval exhibits complex time-sensitive characteristics that such simplified assumptions cannot accurately describe. On this basis, it proposes a learning-to-rank model for time-sensitive microblog retrieval (TLTR), built via machine learning from both temporal and textual features of microblogs. For the temporal features, it examines query-level global temporal features as well as local temporal features of query-document pairs. Experimental results on the TREC Microblog Track 2011 and 2012 datasets show that TLTR outperforms other existing time-sensitive microblog ranking methods.

17.
In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach combining the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as "hidden" states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.

18.
Social tags provide additional descriptions of web pages and are intuitively valuable for search. This paper proposes a novel retrieval method that exploits the categorical nature of social tags. The method models crowd tagging information as high-level categories to estimate a topic model, and then smooths the language model with that topic model. This modeling approach reduces the impact of tag sparsity and expresses tag semantics effectively, thereby improving retrieval performance. Experimental results on a dataset built from TREC evaluations show that the method outperforms LDA-based retrieval as well as other existing tag-based retrieval methods.

19.
Computing semantic relatedness between texts plays an important role in natural language processing and semantic information retrieval. Using Wikipedia as the knowledge base, the lexical-feature-based ESA (Explicit Semantic Analysis) has drawn wide academic attention and application in these fields for its simplicity and effectiveness. However, its relatedness computation involves a large number of redundant concepts, making it high-dimensional and inefficient, and it ignores the contribution of a text's topics to semantic relatedness. This paper introduces the LDA (latent Dirichlet allocation) topic model: the highly related concepts returned by ESA are converted into the model's topic probability vectors, reducing dimensionality and improving efficiency, and the JSD distance (Jensen-Shannon divergence) replaces cosine distance as the measurement method, making the computation of text semantic relatedness more reasonable and effective. Evaluations on datasets of different levels show that the Pearson correlation coefficient of the hybrid lexical-feature/topic-model relatedness method exceeds ESA by more than 3% and LDA by more than 9%.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号