Similar Documents
20 similar documents found (search time: 140 ms)
1.
孙芯宇  吴江  蒲强 《计算机应用》2016,36(5):1313-1318
To address the problem that relevance models estimated from unstable clusters degrade retrieval performance, a relevance model based on stable semantic clustering (SSRM) is proposed. First, the top-N documents returned for the initial query form a feedback set; the number of stable semantic classes in this set is then detected; next, the semantic class most similar to the user query is selected from the stable semantic clusters to estimate SSRM; finally, the model's retrieval performance is verified experimentally. On five subsets of the TREC collection, SSRM improves Mean Average Precision (MAP) by at least 32.11% over the Relevance Model (RM) and 0.41% over the Semantic Relevance Model (SRM); compared with cluster-based retrieval methods such as the Cluster-Based Document Model (CBDM), the LDA-Based Document Model (LBDM) and Resampling, MAP improves by at least 23.64%, 19.59% and 8.03%, respectively. The results show that SSRM helps improve retrieval performance.
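A minimal sketch of the pipeline this abstract describes, under simplifying assumptions: the top-N feedback documents are clustered, the cluster closest to the query is chosen, and a relevance model p(w|R) is estimated from it. The greedy threshold clustering below is an illustrative stand-in for the paper's stability detection step, and all names and constants are hypothetical.

```python
# Illustrative SSRM-style sketch: cluster feedback documents, pick the
# cluster most similar to the query, estimate p(w|R) from that cluster.
from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_docs(docs, threshold=0.3):
    """Greedy single-pass clustering of term-frequency vectors."""
    clusters, centroids = [], []
    for i, d in enumerate(docs):
        best, best_sim = None, threshold
        for j, c in enumerate(centroids):
            sim = cosine(d, c)
            if sim >= best_sim:
                best, best_sim = j, sim
        if best is None:
            clusters.append([i]); centroids.append(Counter(d))
        else:
            clusters[best].append(i); centroids[best].update(d)
    return clusters, centroids

def relevance_model(docs, query):
    """Estimate p(w|R) from the cluster most similar to the query."""
    tf_docs = [Counter(d) for d in docs]
    q = Counter(query)
    clusters, centroids = cluster_docs(tf_docs)
    best = max(range(len(clusters)), key=lambda j: cosine(q, centroids[j]))
    pooled = centroids[best]
    total = sum(pooled.values())
    return {w: c / total for w, c in pooled.items()}

docs = [["topic", "model", "lda"], ["lda", "topic", "inference"],
        ["soccer", "goal", "match"]]
pwR = relevance_model(docs, ["topic", "model"])
print(max(pwR, key=pwR.get))
```

The off-topic third document lands in its own cluster and contributes nothing to the estimated model.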

2.
Abstract: To retrieve the topical information users need quickly and accurately from the growing volume of Mongolian-language information, a method combining the LDA topic model with language models is proposed. First, unigram and bigram language models are built for Mongolian documents to obtain their language probability distributions; then an LDA topic model is built and its parameters are estimated with Gibbs sampling, yielding each document's latent topic distribution; finally, a linear combination of the topic distribution and the language distribution is computed, and this combined distribution is used to measure the similarity between document topics and the query keywords, returning the documents whose topics are most relevant to the query. The language models exploit Mongolian grammatical features, while LDA generalizes well at mining latent semantics and discovering topics, so combining the two better supports topic-oriented semantic retrieval of Mongolian documents and improves retrieval accuracy. Experimental results show that the combined LDA-plus-language-model method captures topic semantics better than either model alone.
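The combination step this abstract describes can be sketched as a linear interpolation: p(w|d) mixes a smoothed unigram language model with a topic-model probability sum over z of p(w|z)p(z|d). The toy topic distributions below are hand-fixed stand-ins for parameters the paper estimates with Gibbs sampling, and the smoothing choice is illustrative.

```python
# Sketch: score a query by interpolating a smoothed unigram language model
# with an LDA-style topic probability, as the abstract's linear combination.
from collections import Counter
from math import log

def lm_prob(w, doc_tf, doc_len, vocab_size, mu=1.0):
    # additive smoothing; the paper's smoothing choice may differ
    return (doc_tf.get(w, 0) + mu) / (doc_len + mu * vocab_size)

def topic_prob(w, p_z_given_d, p_w_given_z):
    return sum(p_z_given_d[z] * p_w_given_z[z].get(w, 0.0)
               for z in range(len(p_z_given_d)))

def score(query, doc, p_z_given_d, p_w_given_z, vocab, lam=0.7):
    tf, n = Counter(doc), len(doc)
    return sum(log(lam * lm_prob(w, tf, n, len(vocab)) +
                   (1 - lam) * topic_prob(w, p_z_given_d, p_w_given_z))
               for w in query)

vocab = {"horse", "steppe", "price", "market"}
doc = ["horse", "steppe", "horse"]
# hand-fixed stand-ins for Gibbs-sampled LDA output
p_w_given_z = [{"horse": 0.6, "steppe": 0.4}, {"price": 0.5, "market": 0.5}]
p_z_given_d = [0.9, 0.1]
print(score(["horse"], doc, p_z_given_d, p_w_given_z, vocab) >
      score(["price"], doc, p_z_given_d, p_w_given_z, vocab))
```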

3.
《计算机工程》2018,(3):189-194
Traditional search engines return only documents that contain the query keywords, ignoring the user's real information need behind the query. To address this, document retrieval is cast as a personalized recommendation problem, and a personalized retrieval algorithm based on topic modeling for query-intent recognition is proposed. A Latent Dirichlet Allocation topic model is built over the user's search history; combined with this history topic model, the latent intent behind the user's query is identified and documents are recommended by topic relevance; the KL divergence from the query to the document set is computed to rank the documents; and a personalized list of retrieved documents is finally returned to the user. Experimental results show that, compared with recommendation algorithms based on collaborative similarity computation and on user clustering, the algorithm provides users with more accurate and effective personalized retrieval.
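The ranking step above can be sketched as follows: documents are scored by the KL divergence from the query's smoothed term distribution to each document's distribution and sorted ascending (smaller divergence means more relevant). The intent-recognition step over the user's search history is omitted here, and the smoothing constant is an illustrative choice.

```python
# Sketch of KL-divergence ranking of documents against a query distribution.
from collections import Counter
from math import log

def distribution(tokens, vocab, alpha=0.01):
    tf = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (tf.get(w, 0) + alpha) / total for w in vocab}

def kl(p, q):
    return sum(p[w] * log(p[w] / q[w]) for w in p if p[w] > 0)

def rank(query, docs):
    vocab = set(query) | {w for d in docs for w in d}
    pq = distribution(query, vocab)
    scored = [(kl(pq, distribution(d, vocab)), i) for i, d in enumerate(docs)]
    return [i for _, i in sorted(scored)]

docs = [["travel", "hotel", "booking"],
        ["python", "topic", "model"],
        ["topic", "model", "lda", "topic"]]
print(rank(["topic", "model"], docs))
```

The document with the heaviest query-term mass ranks first; the unrelated travel document ranks last.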

4.
Traditional pseudo-relevance feedback is prone to "query topic drift", and the first prerequisite for avoiding it is identifying high-quality relevant documents to form a pseudo-relevant document set that matches the user's information need. Building on clustering of the initial retrieval results, this paper studies how to find pseudo-relevant XML documents. Taking both XML content and structural features fully into account, a cluster-label extraction method based on equalized weights is proposed, and on this basis a ranking model for candidate clusters and a candidate-cluster-based document ranking model are presented. Experimental data show that, compared with the initial retrieval results, the ranking models achieve better performance and effectively find more pseudo-relevant XML documents.

5.
Pseudo feedback has long been regarded as an effective query expansion technique, but recent studies show that traditional pseudo feedback is prone to topic drift, which hurts retrieval performance. Determining which documents are relevant, and selecting useful expansion terms from them, are the two key issues in pseudo feedback. Unlike traditional query expansion, XML query expansion must expand not only content but also structure. This paper proposes a framework that uses clustering and phrase extraction to find relevant documents and select useful expansion information. Exploiting XML's semantic characteristics, a new document similarity measure based on hierarchical information is proposed. On this basis, the initial retrieval results are clustered to obtain the document cluster most relevant to the query; phrases are extracted from that cluster to find expansion phrases matching the user's query intent; structural expansion is performed on top of the expansion phrases; and a complete "content + structure" expanded query expression is finally formed. Experiments on the IEEE CS dataset show that this XML pseudo-feedback query expansion method, combining clustering and extraction, effectively reduces topic drift and achieves better retrieval quality.

6.
XML keyword retrieval combining document semantics with user-query semantics
黎军  熊海灵 《计算机应用》2010,30(11):2945-2948
To address the loss of semantic information in XML keyword queries, a semantics-aware keyword retrieval method is proposed. It exploits the semi-structured nature of documents to extract their implicit semantics and uses a query syntax to capture the user's query intent; elements satisfying the conditions are then retrieved according to that intent, and, by combining document semantics, the result representation is improved from the Smallest Lowest Common Ancestor to a set of semantically related entity subtrees. Experimental results show that the method effectively improves the precision of keyword retrieval results.
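The abstract's baseline is the Smallest Lowest Common Ancestor (SLCA) of the keyword matches. A common way to compute LCAs in XML is via Dewey labels (the path of child positions from the root); this toy sketch, not the paper's method, computes candidate LCAs over cross-keyword match combinations and keeps only the smallest ones.

```python
# Toy SLCA computation over Dewey-labelled XML nodes.
from itertools import product

def lca(a, b):
    """Longest common prefix of two Dewey labels."""
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return tuple(out)

def slca(matches_per_keyword):
    """matches_per_keyword: one list of Dewey labels per query keyword."""
    candidates = set()
    for combo in product(*matches_per_keyword):
        node = combo[0]
        for m in combo[1:]:
            node = lca(node, m)
        candidates.add(node)
    # "smallest": discard any candidate that is an ancestor of another
    def is_ancestor(a, d):
        return len(a) < len(d) and d[:len(a)] == a
    return {c for c in candidates
            if not any(is_ancestor(c, d) for d in candidates if d != c)}

# keyword 1 matches nodes 1.1.1 and 1.2; keyword 2 matches node 1.1.2
print(slca([[(1, 1, 1), (1, 2)], [(1, 1, 2)]]))
```

Here node 1.1 covers both keywords and no smaller covering node exists, so it is the only SLCA.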

7.
袁柳  张龙波 《计算机应用》2010,30(12):3401-3406
To address the incompleteness of existing semantic annotation techniques for Web documents, the Latent Dirichlet Allocation (LDA) model is applied to adding semantic annotations to Web documents. Since Web documents have clear domain characteristics, domain information is embedded into the traditional LDA model, yielding a Domain-enable LDA model that improves the completeness of annotation results and avoids forced topic assignment to words. Associations are also established between a document's latent topics and the concepts of the domain ontology the document belongs to, so that the semantics expressed by ontology concepts can interpret the latent topics precisely, clarifying document semantics and providing effective support for document retrieval. Based on LDA's ability to assign a latent topic to each word, the notion of multi-granularity semantic annotation is proposed. Experiments on the 20 Newsgroups and WebKB datasets demonstrate the effectiveness of the Domain-enable LDA model and show that multi-granularity annotation of documents helps handle different types of queries effectively.

8.
This paper proposes TSA-LM (Term-Subject Association Based Language Model), a language model based on term-subject-term associations. Its basic idea is to split a document into two blocks: a subject-term block consisting of the subject terms found in a domain subject vocabulary, and a non-subject-term block consisting of the remaining terms; the query likelihood of each block is computed separately. For the non-subject-term block, terms are assumed independent and the classical language model is used. For the subject-term block, associations between query terms and subject terms are introduced into the language model to estimate that block's query likelihood. Term-subject association is measured by a term-subject relatedness score, computed not only from observed term-subject co-occurrence within documents but also from macro-level observations of term-document-subject membership relations. Retrieval experiments on public datasets show that the term-subject-association language model effectively improves retrieval effectiveness.
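A minimal sketch of the two-block idea, under stated assumptions: the non-subject block is scored with a standard smoothed unigram model, while the subject block additionally credits query terms that are merely related to the subject terms it contains. The subject vocabulary and relatedness scores below are hand-fixed stand-ins for the co-occurrence statistics the paper estimates.

```python
# Two-block query-likelihood sketch in the spirit of TSA-LM (illustrative).
from collections import Counter
from math import log

SUBJECTS = {"retrieval", "indexing"}          # toy domain subject vocabulary
RELATEDNESS = {("search", "retrieval"): 0.8}  # toy r(term, subject) in [0, 1]

def block_prob(w, tf, n, vocab_size, mu=0.5):
    return (tf.get(w, 0) + mu) / (n + mu * vocab_size)

def tsa_lm_score(query, doc, vocab_size=10, lam=0.5):
    subj = [w for w in doc if w in SUBJECTS]
    rest = [w for w in doc if w not in SUBJECTS]
    tf_s, tf_r = Counter(subj), Counter(rest)
    score = 0.0
    for w in query:
        p_rest = block_prob(w, tf_r, len(rest), vocab_size)
        # subject block: direct matches plus relatedness-weighted mass
        direct = tf_s.get(w, 0)
        related = sum(RELATEDNESS.get((w, s), 0.0) * c for s, c in tf_s.items())
        p_subj = (direct + related + 0.5) / (len(subj) + 0.5 * vocab_size)
        score += log(lam * p_subj + (1 - lam) * p_rest)
    return score

doc = ["retrieval", "system", "indexing", "design"]
print(tsa_lm_score(["search"], doc) > tsa_lm_score(["banana"], doc))
```

"search" never occurs in the document, yet it outscores an unrelated term because it is related to the subject term "retrieval".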

9.
Traditional pseudo-relevance feedback (PRF) uses whole documents as the basic extraction unit for query expansion; this coarse extraction granularity increases the amount of noise in the expansion source. This work uses topic analysis to mitigate the low quality of the expansion source: it captures the semantic information hidden in the contents of the pseudo-relevant set and extracts from it the abstract topic content related to the user query, which then serves as the basic extraction unit for query expansion. Experiments on the NTCIR-8 Chinese corpus, compared against traditional PRF and topic-model-based PRF, show that the method extracts expansion terms that better match the user's query. The results also show that performing query expansion from the finer granularity of topic content effectively improves retrieval performance.
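The central move can be sketched as follows: choose expansion terms from the query-related topic of the pseudo-relevant set rather than from whole documents. The topic-word distributions below are hand-fixed stand-ins for what a topic model would infer, and the scoring rule is an illustrative simplification.

```python
# Sketch: select expansion terms from the topic most related to the query.
def pick_expansion_terms(query, topics, k=2):
    # score each topic by the query-term probability mass it carries
    def topic_score(p_w_given_z):
        return sum(p_w_given_z.get(w, 0.0) for w in query)
    best = max(topics, key=topic_score)
    # the top-k non-query terms of the best topic become expansion terms
    ranked = sorted((w for w in best if w not in query),
                    key=lambda w: best[w], reverse=True)
    return ranked[:k]

topics = [
    {"jaguar": 0.30, "car": 0.25, "engine": 0.20, "speed": 0.10},
    {"jaguar": 0.20, "cat": 0.30, "jungle": 0.25, "prey": 0.15},
]
print(pick_expansion_terms(["jaguar", "car"], topics))
```

For the query "jaguar car" the automotive topic wins, so the expansion terms come from it rather than from the animal topic, illustrating how topic-level extraction filters out off-topic noise.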

10.
To address the problem that current information retrieval models cannot support retrieval over semantic information, a semantic retrieval model based on description logics is proposed. It defines a logical view of documents, a logical view of queries, and a method for computing the similarity between the two views, and gives the model's storage structure. The model converts the user's retrieval request and the data (documents) to be searched into sets of individuals grounded in a description-logic knowledge base. It not only represents the semantic information of documents and queries effectively but also facilitates automated machine reasoning, and can effectively improve retrieval precision and recall.

11.
Knowledge discovery through directed probabilistic topic models: a survey
Graphical models have become the basic framework for topic-based probabilistic modeling. Models with latent variables in particular have proved effective at capturing hidden structure in data. In this paper, we survey an important subclass, Directed Probabilistic Topic Models (DPTMs), with soft clustering abilities, and their applications to knowledge discovery in text corpora. From an unsupervised learning perspective, “topics are semantically related probabilistic clusters of words in text corpora; and the process for finding these topics is called topic modeling”. In topic modeling, a document consists of different hidden topics, and the topic probabilities provide an explicit representation of a document that smooths the data at the semantic level. This has been an active area of research during the last decade, and many models have been proposed for modeling text corpora with different characteristics, for applications such as document classification, hidden association finding, expert finding, community discovery and temporal trend analysis. We present basic concepts, advantages and disadvantages in chronological order, a classification of existing models into categories, their parameter estimation and inference algorithms, and measures for evaluating model performance. We also discuss their applications, open challenges and future directions in this dynamic area of research.

12.
《Knowledge》2007,20(7):607-613
Discovering topics in large document collections has recently become an important task. Most topic models treat a document as a word sequence, whether in discrete-character or term-frequency form. However, the number of words varies greatly from document to document, which causes several problems for current topic models in topic analysis; it is also difficult to perform topic transition analysis based on current topic models. To overcome these deficiencies, a variable space hidden Markov model (VSHMM) is proposed to represent topics, together with several operations based on space computation. A hierarchical clustering algorithm in which the number of components in the topic model changes dynamically is proposed to demonstrate the effectiveness of the VSHMM. A method of document partitioning based on topic transitions is also presented. Experiments on a real-world dataset show that the VSHMM improves accuracy while greatly decreasing the algorithm's time complexity compared with an algorithm based on the current mixture model.

13.
We describe an information filtering system using independent component analysis (ICA). A document–word matrix is generally sparse and suffers from the ambiguity of synonyms. To solve this problem, we propose representing document vectors by independent components: an independent component generated by ICA is regarded as a topic, and in practice we map the document vectors into a topic space. Since some independent components are useless for recommendation, we select the necessary components from all independent components with a maximum distance algorithm (MDA). Although Euclidean distance is usually used by MDA, we propose topic selection by cosine-distance-based MDA to resolve the mismatch of similarities in information filtering. We create a user profile from the transformed data with a genetic algorithm (GA). Finally, we recommend documents with the user profile and evaluate the accuracy by imputation precision. We have carried out an evaluation experiment to confirm the practicality of the proposed method. This work was presented, in part, at the 9th International Symposium on Artificial Life and Robotics, Oita, Japan, January 28–30, 2004.
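The topic-selection step can be sketched as a maximum distance algorithm with cosine distance, as the abstract proposes: starting from one component, repeatedly add the component farthest (least similar) from those already chosen. The vectors below are toy stand-ins for ICA components, and seeding with the first component is an illustrative choice.

```python
# Cosine-distance maximum distance algorithm (MDA) for topic selection.
from math import sqrt

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return 1.0 - (dot / (na * nb) if na and nb else 0.0)

def mda_select(components, k):
    chosen = [0]                      # seed with the first component
    while len(chosen) < k:
        best, best_d = None, -1.0
        for i in range(len(components)):
            if i in chosen:
                continue
            # farthest-point rule: distance to the *nearest* chosen component
            d = min(cosine_distance(components[i], components[j])
                    for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

comps = [(1.0, 0.0, 0.0), (0.9, 0.1, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(mda_select(comps, 3))
```

The near-duplicate second component is skipped in favor of the two orthogonal ones, which is the diversity effect MDA is used for.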

14.
Database Information Visualization and Analysis system ( ) is a computer program that helps perform bibliometric analysis of collections of scientific literature and patents for technology forecasting. Documents, drawn from the technological field of interest, are visualized as clusters on a two dimensional map, permitting exploration of the relationships among the documents and document clusters and also permitting derivation of summary data about each document cluster. Such information, when provided to subject matter experts performing a technology forecast, can yield insight into trends in the technological field of interest. This paper discusses the document visualization and analysis process: acquisition of documents, mapping documents, clustering, exploration of relationships, and generation of summary and trend information. Detailed discussion of exploration functions is presented and followed by an example of visualization and analysis of a set of documents about chemical sensors.


16.
In this paper, we address exploratory analysis of textual data streams and we propose a bootstrapping process based on a combination of keyword similarity and clustering techniques to: (i) classify documents into fine-grained similarity clusters, based on keyword commonalities; (ii) aggregate similar clusters into larger document collections sharing a richer, more user-prominent keyword set that we call topic; (iii) assimilate newly extracted topics of the current bootstrapping cycle with existing topics resulting from previous bootstrapping cycles, by linking similar topics of different time periods, if any, to highlight topic trends and evolution. An analysis framework is also defined enabling the topic-based exploration of the underlying textual data stream according to a thematic perspective and a temporal perspective. The bootstrapping process is evaluated on a real data stream of about 330,000 newspaper articles about politics published by the New York Times from Jan 1st 1900 to Dec 31st 2015.
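Step (iii) above, assimilating topics across cycles, can be sketched as linking a topic from an earlier cycle to the current-cycle topic whose keyword set is most similar, when that similarity clears a threshold. Jaccard similarity and the 0.4 threshold are illustrative choices, not the paper's exact measure.

```python
# Sketch of cross-cycle topic linking by keyword-set similarity.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def link_topics(previous, current, threshold=0.4):
    """Return (prev_index, cur_index) pairs of linked topics."""
    links = []
    for i, old in enumerate(previous):
        sims = [(jaccard(old, new), j) for j, new in enumerate(current)]
        best_sim, best_j = max(sims)
        if best_sim >= threshold:
            links.append((i, best_j))
    return links

prev = [{"election", "senate", "vote"}, {"tariff", "trade", "steel"}]
cur = [{"election", "vote", "ballot", "recount"}, {"opera", "premiere"}]
print(link_topics(prev, cur))
```

The election topic survives into the new cycle (and could be drawn as a trend line), while the tariff topic finds no successor and ends.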

17.
With the rapid growth of web-based documents, the need for automatic document clustering and text summarization has increased. Document summarization, i.e., extracting the essential information, removing unnecessary data, and presenting the result in a cohesive and coherent manner, is a particularly challenging task. In this research, a novel intelligent model for document clustering is designed using a graph model and Fuzzy-based Association Rule generation (gFAR). First, the graph model maps the relationships among the (multi-source) data; document clusters are then established through association-rule generation using the fuzzy concept. The method eliminates redundancy by mapping the relevant documents through the graph model, and reduces time consumption and improves accuracy through fuzzy association-rule generation. The framework supports document clustering in an interpretable way: it iteratively reduces the error rate during relationship mapping among the data (clusters) with the assistance of weighted document content, represents the significance of data features for class discrimination, and helps measure the significance of features during the clustering process. The simulation is carried out in the MATLAB 2016b environment and evaluated with standard measures such as Relative Risk Patterns (RRP), ROUGE score, and Discrimination Information Measure (DMI). The DailyMail and DUC 2004 datasets are used to obtain the empirical results. The proposed gFAR model gives a better trade-off compared with various prevailing approaches.

18.
An improved adaptive text information filtering model
Adaptive information filtering helps users obtain content of interest from, or filter irrelevant junk out of, oceans of information such as the Web. To address the shortcomings of existing adaptive filtering systems, an improved adaptive text information filtering model is proposed. The model provides two relevance retrieval mechanisms, on top of which the feedback algorithm is improved; the idea of incremental training is adopted, and a new algorithm is proposed for the adaptive learning mechanism used during filtering. A system based on this model achieved good results in international evaluations in the field. The experimental data show that each improvement is effective and that the new model delivers higher performance.
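A minimal sketch of the adaptive loop this abstract describes, under simplifying assumptions: each incoming document is scored against a profile vector, and delivered documents generate feedback that incrementally updates both the profile and the delivery threshold. The update rules and all constants below are illustrative, not the paper's algorithms.

```python
# Sketch of adaptive text filtering with incremental feedback updates.
from collections import Counter

def score(profile, doc_tf):
    return sum(profile.get(w, 0.0) * c for w, c in doc_tf.items())

def filter_stream(stream, relevant, lr=0.5, threshold=1.0):
    profile = Counter({"topic": 1.0, "model": 1.0})   # toy initial profile
    delivered = []
    for i, doc in enumerate(stream):
        tf = Counter(doc)
        if score(profile, tf) >= threshold:
            delivered.append(i)
            if i in relevant:                 # positive feedback: reinforce
                for w, c in tf.items():
                    profile[w] += lr * c
                threshold *= 0.95             # admit slightly weaker docs
            else:                             # negative feedback: tighten
                for w, c in tf.items():
                    profile[w] -= lr * c
                threshold *= 1.05
    return delivered

stream = [["topic", "model", "lda"],      # relevant, reinforces "lda"
          ["spam", "offer", "deal"],      # never reaches the threshold
          ["topic", "lda", "inference"]]  # scores via the updated profile
print(filter_stream(stream, relevant={0, 2}))
```

The third document is delivered partly on the strength of "lda", a term the profile only learned from earlier positive feedback, which is the incremental-training effect.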

19.
Web document cluster analysis plays an important role in information retrieval by organizing large amounts of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two-level (document and term) knowledge granularity but ignores the bridging paragraph granularity. However, this two-level granularity may lead to unsatisfactory clustering results with “false correlation”. In order to deal with this problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To deal with the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and thus web document clusters of higher quality can be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of F-Score.

20.
Topic segmentation is fundamental to fast, effective retrieval and management of broadcast news programs. Traditional topic segmentation based on Hidden Markov Models (HMM) finds topic boundaries using only the topics and the transitions between them, ignoring the latent semantic relations between words within each topic. This paper proposes an improved HMM-based algorithm. It applies Latent Semantic Analysis (LSA) to the term-frequency vectors for feature extraction and dimensionality reduction, thereby taking the contextual relations between words into account; document class information is obtained by clustering; and the LSA features and topic classes serve as the HMM observations and hidden states, respectively, so that the relations between topics are considered as well, finally yielding a topic segmentation of the text. Experiments on real data show that the algorithm achieves good segmentation performance.
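The decoding step can be sketched as follows: sentences are observations, topics are HMM hidden states, Viterbi decodes the most likely topic sequence, and a story boundary is placed wherever the decoded topic changes. The emission probabilities below are hand-fixed stand-ins for the LSA-feature likelihoods the paper uses, and the self-transition probability is an illustrative constant.

```python
# Viterbi decoding of topic states; boundaries fall where the state changes.
from math import log

def viterbi(emissions, stay=0.8):
    """emissions: per-sentence list of p(sentence | topic) values."""
    k = len(emissions[0])
    switch = (1.0 - stay) / (k - 1)
    best = [log(1.0 / k) + log(e) for e in emissions[0]]  # uniform prior
    back = []
    for em in emissions[1:]:
        row, ptr = [], []
        for s in range(k):
            cands = [best[p] + log(stay if p == s else switch)
                     for p in range(k)]
            p_best = max(range(k), key=lambda p: cands[p])
            row.append(cands[p_best] + log(em[s]))
            ptr.append(p_best)
        best, back = row, back + [ptr]
    path = [max(range(k), key=lambda s: best[s])]
    for ptr in reversed(back):          # backtrack to recover the path
        path.append(ptr[path[-1]])
    path.reverse()
    return path

# two topics; sentences 0-1 resemble topic 0, sentences 2-3 resemble topic 1
ems = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]
path = viterbi(ems)
boundaries = [i for i in range(1, len(path)) if path[i] != path[i - 1]]
print(path, boundaries)
```

The decoded state sequence flips once, placing a single story boundary between sentences 1 and 2.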


Copyright©北京勤云科技发展有限公司  京ICP备09084417号