Similar Documents
20 similar documents found
1.
Latent semantic analysis (LSA) is a tool for extracting semantic information from texts as well as a model of language learning based on exposure to texts. We rely on LSA to represent the student model in a tutoring system. Domain examples and student productions are represented in a high-dimensional semantic space, automatically built from a statistical analysis of the co-occurrences of their lexemes. We also designed tutoring strategies to automatically detect lexeme misunderstandings and to select, among the various examples of a domain, the one to which it is best to expose the student. Two systems are presented: the first successively presents texts to be read by the student, selecting the next one according to the student's comprehension of the prior ones. The second plays a board game (kalah) with the student in such a way that the next configuration of the board is supposed to be the most appropriate with respect to the semantic structure of the domain and the student's previous moves.
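To make the semantic-space idea above concrete, here is a minimal sketch, assuming a tiny corpus, a 2-dimensional latent space, and illustrative variable names (none of which come from the paper): domain texts and a student production are projected into an LSA space and compared by cosine similarity.

    # Minimal LSA sketch: score a student production against domain texts.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    domain_texts = [
        "seeds are sown in each pit of the kalah board",
        "capturing seeds from the opponent pits scores points",
        "the store on the right collects captured seeds",
    ]
    student_answer = "you score by capturing seeds from the other player's pits"

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(domain_texts)            # document-term matrix
    svd = TruncatedSVD(n_components=2, random_state=0)    # latent semantic space
    doc_vecs = svd.fit_transform(X)                        # domain texts in LSA space
    ans_vec = svd.transform(vectorizer.transform([student_answer]))

    # Choose the domain text whose meaning is closest to the student's production.
    sims = cosine_similarity(ans_vec, doc_vecs)[0]
    print(int(np.argmax(sims)), sims)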

2.
A Query Expansion Algorithm Based on Latent Semantic Analysis   Cited by 5 (0 self, 5 others)
This paper proposes a new query expansion algorithm. By applying latent semantic analysis to the texts, it introduces a method for computing the semantic similarity between terms and applies text clustering to the interactive retrieval process in order to improve the quality of information retrieval. Experimental results show that the algorithm is highly effective at improving retrieval precision.
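A minimal sketch of the term-to-term similarity step that such an expansion algorithm can build on: terms are projected into an LSA space and the nearest neighbours of a query term are taken as expansion candidates. The toy corpus, the dimensionality, and the function name expand are assumptions for illustration only.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "the car dealer sold a used automobile",
        "the automobile engine needed repair",
        "a new car engine was installed",
        "fresh fruit and vegetables at the market",
    ]
    vec = CountVectorizer()
    X = vec.fit_transform(docs)                        # document-term matrix
    svd = TruncatedSVD(n_components=2, random_state=0)
    svd.fit(X)
    term_vecs = svd.components_.T                      # one latent vector per term

    terms = vec.get_feature_names_out()
    idx = {t: i for i, t in enumerate(terms)}

    def expand(query_term, k=3):
        """Return the k terms closest to query_term in the latent space."""
        sims = cosine_similarity(term_vecs[idx[query_term]].reshape(1, -1), term_vecs)[0]
        ranked = np.argsort(-sims)
        return [terms[i] for i in ranked if terms[i] != query_term][:k]

    print(expand("car"))    # semantically related terms, e.g. "automobile"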

3.
Improving Weight Computation in Latent Semantic Analysis   Cited by 12 (0 self, 12 others)
Since its inception, latent semantic analysis has been widely applied in information retrieval, text classification, automatic question answering, and related fields. An important step in LSA is the weighting transformation applied to the term-document matrix; the weighting function directly determines the quality of the LSA results. This paper first summarizes the traditional, well-established weighting schemes, which consist of a local weight component and a term global weight component, then points out their shortcomings and extends the weighting scheme by introducing the concept of a document global weight. In the final experiments, a new way of evaluating the quality of LSA results, the document self-retrieval matrix, is proposed; the results show that the improved weighting scheme raises retrieval effectiveness.
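The local-weight times term-global-weight scheme referred to above is typically of the log-entropy kind; the sketch below shows that classical form and leaves a hook for an extra per-document factor in the spirit of the proposed document global weight. The exact definition of that factor is the paper's own and is not reproduced here; doc_weight is only a placeholder.

    import numpy as np

    def log_entropy_weight(counts, doc_weight=None):
        """counts: term-by-document raw frequency matrix.
        Weighted cell = local weight * term global weight (* optional document weight)."""
        counts = np.asarray(counts, dtype=float)
        n_docs = counts.shape[1]

        local = np.log1p(counts)                                  # local weight: log(1 + tf)
        p = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1e-12)
        with np.errstate(divide="ignore", invalid="ignore"):
            plogp = np.where(p > 0, p * np.log(p), 0.0)
        term_global = 1.0 + plogp.sum(axis=1) / np.log(n_docs)    # entropy weight per term

        W = local * term_global[:, None]
        if doc_weight is not None:                                # placeholder document global weight
            W = W * np.asarray(doc_weight)[None, :]
        return W

    A = [[3, 0, 1],
         [0, 2, 0],
         [1, 1, 1]]
    print(log_entropy_weight(A))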

4.
Adaptive Bayesian Latent Semantic Analysis   Cited by 1 (0 self, 1 other)
Due to the vast growth of data collections, statistical document modeling has become increasingly important in language processing. Probabilistic latent semantic analysis (PLSA) is a popular approach whereby the semantics and statistics of documents can be effectively captured for modeling. However, PLSA is highly sensitive to the task domain, which changes continuously in real-world documents. In this paper, a novel Bayesian PLSA framework is presented. We focus on an incremental learning algorithm that solves the problem of updating the model with articles from new domains. The algorithm improves document modeling by incrementally extracting up-to-date latent semantic information to match the changing domains at run time. By representing the priors of the PLSA parameters with Dirichlet densities, the posterior densities belong to the same family, so a reproducible prior/posterior mechanism is available for incremental learning from constantly accumulated documents. An incremental PLSA algorithm is constructed to accomplish both the parameter estimation and the hyperparameter updating. Compared to standard PLSA using maximum likelihood estimation, the proposed approach is capable of dynamic document indexing and modeling. We also present maximum a posteriori PLSA for corrective training. Experiments on information retrieval and document categorization demonstrate the superiority of the Bayesian PLSA methods.
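For orientation only: with Dirichlet priors over the topic-word multinomials, the conjugate MAP re-estimation in the M-step takes roughly the following form (standard PLSA notation; a generic sketch of such an update, not necessarily the paper's exact estimator or hyperparameter schedule):

    \hat{P}(w \mid z) \;=\;
    \frac{\sum_{d} n(d,w)\, P(z \mid d,w) \;+\; \alpha_{w} - 1}
         {\sum_{w'} \Bigl[ \sum_{d} n(d,w')\, P(z \mid d,w') \;+\; \alpha_{w'} - 1 \Bigr]}

Here n(d,w) is the count of word w in document d, P(z|d,w) is the E-step responsibility, and the alpha's are Dirichlet hyperparameters that an incremental scheme can keep updating as new documents arrive.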

5.
A novel updating method for Probabilistic Latent Semantic Analysis (PLSA), called Recursive PLSA (RPLSA), is proposed. The updating of conditional probabilities is derived from first principles for both the asymmetric and the symmetric PLSA formulations. The performance of RPLSA for both formulations is compared to that of the PLSA folding-in, the PLSA rerun from the breakpoint, and well-known LSA updating methods, such as the singular value decomposition (SVD) folding-in and the SVD-updating. The experimental results demonstrate that the RPLSA outperforms the other updating methods under study with respect to the maximization of the average log-likelihood and the minimization of the average absolute error between the probabilities estimated by the updating methods and those derived by applying the non-adaptive PLSA from scratch. A comparison in terms of CPU run time is conducted as well. Finally, in document clustering using the Adjusted Rand index, it is demonstrated that the clusters generated by the RPLSA are: (a) similar to those generated by the PLSA applied from scratch; (b) closer to the ground truth than those created by the other PLSA or LSA updating methods.
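For context, the simplest of the baselines mentioned, PLSA folding-in, can be sketched as follows: the topic-word distributions are frozen and only P(z | d_new) is re-estimated for an incoming document. Array shapes, initialization, and names are assumed for illustration; this is not the RPLSA update derived in the paper.

    import numpy as np

    def fold_in(p_w_given_z, counts_new, n_iter=50, seed=0):
        """p_w_given_z: K x V fixed topic-word probabilities.
        counts_new: length-V term counts of the new document.
        Returns P(z | d_new), estimated by EM with the topics held fixed."""
        rng = np.random.default_rng(seed)
        K, _ = p_w_given_z.shape
        p_z = rng.dirichlet(np.ones(K))                       # init P(z | d_new)
        for _ in range(n_iter):
            # E-step: responsibilities P(z | d_new, w) for every word w
            joint = p_z[:, None] * p_w_given_z                # K x V
            resp = joint / np.maximum(joint.sum(axis=0, keepdims=True), 1e-12)
            # M-step: re-estimate only the document-topic mixture
            p_z = (resp * counts_new[None, :]).sum(axis=1)
            p_z /= p_z.sum()
        return p_z

    topics = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])     # 2 fixed topics over 3 words
    print(fold_in(topics, np.array([5.0, 1.0, 0.0])))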

6.
Nowadays, many software organizations rely on automatic problem reporting tools to collect crash reports directly from users' environments. These crash reports are later grouped together into crash types. Usually, developers prioritize crash types based on the number of crash reports and file bug reports for the top crash types. Because a bug can trigger a crash in different usage scenarios, different crash types are sometimes related to the same bug. Two bugs are correlated when the occurrence of one bug causes the other bug to occur. We refer to a group of crash types related to identical or correlated bug reports as a crash correlation group. In this paper, we propose five rules to identify correlated crash types automatically. We propose an algorithm to locate and rank buggy files using crash correlation groups. We also propose a method to identify duplicate and related bug reports. Through an empirical study on Firefox and Eclipse, we show that the first three rules can identify crash correlation groups using stack trace information, with a precision of 91% and a recall of 87% for Firefox and a precision of 76% and a recall of 61% for Eclipse. On the top three buggy file candidates, the proposed bug localization algorithm achieves a recall of 62% and a precision of 42% for Firefox, and a recall of 52% and a precision of 50% for Eclipse. On the top 10 buggy file candidates, the recall increases to 92% for Firefox and 90% for Eclipse. The proposed duplicate bug report identification method achieves a recall of 50% and a precision of 55% on Firefox, and a recall of 47% and a precision of 35% on Eclipse. Developers can combine the proposed crash correlation rules with the new bug localization algorithm to identify and fix correlated crash types together. Triagers can use the duplicate bug report identification method to reduce their workload by filtering duplicate bug reports automatically.
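As a loose illustration of the grouping idea only (the paper defines five precise rules over stack traces and bug-report links, which are not reproduced here), the sketch below clusters crash types whose representative stack traces share their top frame; all crash types, frame names, and the depth parameter are made up.

    from collections import defaultdict

    # Hypothetical crash types, each with a representative stack trace (top frame first).
    crash_types = {
        "CT1": ["js::GC", "js::RunGC", "nsAppShell::Run"],
        "CT2": ["js::GC", "js::MaybeGC", "nsAppShell::Run"],
        "CT3": ["nsSocketTransport::Close", "nsSocketService::Run"],
    }

    def correlated(trace_a, trace_b, depth=1):
        """Simplified stand-in rule: correlated if the top `depth` frames match."""
        return trace_a[:depth] == trace_b[:depth]

    groups = defaultdict(set)
    for a, ta in crash_types.items():
        for b, tb in crash_types.items():
            if correlated(ta, tb):
                groups[a].add(b)

    print(dict(groups))    # CT1 and CT2 fall into the same crash correlation group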

7.
Unsupervised Learning by Probabilistic Latent Semantic Analysis   Cited by 8 (0 self, 8 others)
Hofmann, Thomas. Machine Learning, 2001, 42(1-2): 177-196.
This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method, which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature-controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.
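A compact sketch of the EM fitting loop for the asymmetric PLSA formulation with a temperature parameter applied in the E-step (beta = 1 recovers standard EM). The placement of beta here is a simplification of Hofmann's tempered EM; shapes, initialization, and the toy data are illustrative assumptions.

    import numpy as np

    def plsa_tem(counts, K=2, beta=0.8, n_iter=100, seed=0):
        """counts: D x V term counts. Returns P(z|d) (D x K) and P(w|z) (K x V)."""
        rng = np.random.default_rng(seed)
        D, V = counts.shape
        p_z_d = rng.dirichlet(np.ones(K), size=D)      # P(z | d)
        p_w_z = rng.dirichlet(np.ones(V), size=K)      # P(w | z)
        for _ in range(n_iter):
            # Tempered E-step: responsibilities ~ [P(z|d) P(w|z)]^beta
            joint = (p_z_d[:, :, None] * p_w_z[None, :, :]) ** beta    # D x K x V
            resp = joint / np.maximum(joint.sum(axis=1, keepdims=True), 1e-12)
            # M-step: re-estimate both sets of multinomials
            weighted = resp * counts[:, None, :]                        # D x K x V
            p_z_d = weighted.sum(axis=2)
            p_z_d /= np.maximum(p_z_d.sum(axis=1, keepdims=True), 1e-12)
            p_w_z = weighted.sum(axis=0)
            p_w_z /= np.maximum(p_w_z.sum(axis=1, keepdims=True), 1e-12)
        return p_z_d, p_w_z

    X = np.array([[4, 1, 0, 0], [3, 2, 0, 1], [0, 0, 5, 2], [0, 1, 4, 3]], dtype=float)
    theta, phi = plsa_tem(X, K=2)
    print(np.round(theta, 2))    # documents 0-1 and 2-3 should load on different topics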

8.
Latent Semantic Analysis: five methodological recommendations   Cited by 1 (0 self, 1 other)
The recent influx in generation, storage, and availability of textual data presents researchers with the challenge of developing suitable methods for their analysis. Latent Semantic Analysis (LSA), a member of a family of methodological approaches that offers an opportunity to address this gap by describing the semantic content in textual data as a set of vectors, was pioneered by researchers in psychology, information retrieval, and bibliometrics. LSA involves a matrix operation called singular value decomposition, an extension of principal component analysis. LSA generates latent semantic dimensions that are either interpreted, if the researcher's primary interest lies with the understanding of the thematic structure in the textual data, or used for purposes of clustering, categorization, and predictive modeling, if the interest lies with the conversion of raw text into numerical data, as a precursor to subsequent analysis. This paper reviews five methodological issues that need to be addressed by the researcher who will embark on LSA. We examine the dilemmas, present the choices, and discuss the considerations under which good methodological decisions are made. We illustrate these issues with the help of four small studies, involving the analysis of abstracts for papers published in the European Journal of Information Systems.

9.
Video Retrieval Based on Latent Semantic Analysis   Cited by 1 (1 self, 1 other)
Here latent semantic analysis is built on top of video analysis: a mapping relation is used to construct a video feature matrix, enabling content-based video retrieval. The paper describes the LSA technique and studies the extraction of video color and texture features. Experimental results show that LSA works well for video content retrieval.

10.
A Cross-Language Query Expansion Method Based on Latent Semantic Analysis   Cited by 1 (0 self, 1 other)
To address the problems of traditional query expansion methods, a cross-language expansion method based on latent semantic analysis is proposed. Clustering is used to improve the precision of the expanded text collection, and LSA enables query expansion without translation, mitigating the impact of translation ambiguity. Experimental results show that the method achieves good performance.

11.
International Journal of Computer Vision - In this paper, we propose a novel continuous latent semantic analysis fitting method to efficiently and effectively estimate the parameters of model...

12.
To address the shortcomings of semantic modeling and parameter estimation in the traditional pLSA model, a prior-informed pLSA scene classification method is proposed. First, low-rank constraints on scenes of the same class and sparsity constraints on the semantic topics of a single image are imposed on the parameter matrices of the probabilistic model, yielding a prior-informed optimization model. An inexact augmented Lagrange multiplier method is then used to solve for the model parameters. Finally, the latent-semantic-analysis-based scene classification method is applied to larger-scale scene classification tasks. Experimental comparisons with other pLSA-based classification algorithms show that the proposed method produces compact and effective scene semantic representations in a low-dimensional space, avoids the local optima caused by the poor convergence of the EM algorithm, and achieves better scene classification performance.

13.
A MapReduce-Based Parallel PLSA Algorithm and Its Application in Text Mining   Cited by 1 (0 self, 1 other)
PLSA (Probabilistic Latent Semantic Analysis) is a typical topic model, but its complex modeling process makes it hard to handle massive data. To address the difficulty serial PLSA has with massive data, this paper proposes a parallel PLSA algorithm based on the MapReduce framework, which handles the parallel processing of large-scale data in a concise, distributed manner, and applies the parallel algorithm to text-mining tasks such as text clustering and semantic analysis. Experimental results show that the algorithm performs well on large data volumes.
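A single-machine sketch of how the PLSA E-step statistics decompose over document shards in a MapReduce style: each "map" computes partial topic-word counts for its shard and the "reduce" sums them before the global M-step. Shard sizes, function names, and the random data are assumptions; the paper targets an actual MapReduce cluster rather than this in-process simulation.

    import numpy as np
    from functools import reduce

    def map_shard(shard_counts, p_z_d_shard, p_w_z):
        """Map: per-shard E-step followed by partial topic-word sufficient statistics."""
        joint = p_z_d_shard[:, :, None] * p_w_z[None, :, :]
        resp = joint / np.maximum(joint.sum(axis=1, keepdims=True), 1e-12)
        return (resp * shard_counts[:, None, :]).sum(axis=0)      # K x V partial counts

    def reduce_stats(a, b):
        """Reduce: sum the partial statistics coming from all shards."""
        return a + b

    rng = np.random.default_rng(0)
    X = rng.integers(0, 5, size=(8, 6)).astype(float)             # 8 documents, 6 terms
    K = 2
    p_z_d = rng.dirichlet(np.ones(K), size=8)
    p_w_z = rng.dirichlet(np.ones(6), size=K)

    shards = np.array_split(np.arange(8), 4)                      # 4 "mappers"
    partials = [map_shard(X[s], p_z_d[s], p_w_z) for s in shards]
    totals = reduce(reduce_stats, partials)                       # K x V
    p_w_z = totals / totals.sum(axis=1, keepdims=True)            # global M-step for P(w|z)
    print(np.round(p_w_z, 3))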

14.
A Bayes Discriminator for BBS Documents Based on Latent Semantic Analysis   Cited by 13 (1 self, 12 others)
The abuse of bulletin board systems (BBS) is a social problem characterized by information pollution, and screening BBS documents has become an important part of information security. Combining data mining, statistics, and natural language understanding, this paper proposes a BBS document screening method based on latent semantic analysis and Bayes classification: natural language processing is used to extract a set of typical phrases from the training documents; latent semantic analysis reduces synonymous typical phrases, and association rule mining improves the independence among them; a Bayes classifier then screens the BBS documents. The key parameters affecting the system are extensively discussed and tested, and experiments show that the method is feasible and effective for screening BBS documents.
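A rough sketch of the final classification stage only: documents are reduced with LSA and then classified with a (Gaussian) naive Bayes model. The typical-phrase extraction and association-rule steps of the paper are not reproduced, and the corpus, labels, and pipeline below are made-up placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    train_docs = [
        "win a free prize click this link now",
        "limited offer free money guaranteed",
        "meeting agenda for the project review",
        "please find the quarterly report attached",
    ]
    train_labels = ["polluting", "polluting", "normal", "normal"]   # placeholder classes

    model = make_pipeline(
        TfidfVectorizer(),                               # term features (phrase extraction omitted)
        TruncatedSVD(n_components=2, random_state=0),    # latent semantic reduction
        GaussianNB(),                                    # Bayes classifier on LSA features
    )
    model.fit(train_docs, train_labels)
    print(model.predict(["free prize offer", "agenda for the review meeting"]))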

15.
The era of networked big data has enriched the information resources in cyberspace, but the diversity of data types and their rapid growth put pressure on storage and on the effective use of those resources. This paper proposes a text fingerprinting method based on latent semantic analysis: a compressed representation of the data that addresses the lack of semantics in current fingerprinting methods. The method uses singular value decomposition to obtain the latent semantic features of the original documents, maps the original document vector space into the corresponding latent semantic space, converts the documents in that space into binary fingerprints according to the random hyperplane principle, and finally measures the difference between fingerprints with the Hamming distance. Using academic papers from CNKI as the data set, the method is validated through similarity and clustering experiments on the paper texts. The results show that it captures document semantics well, confirming the accuracy and effectiveness of the semantic compressed representation of text.
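The random-hyperplane step described above maps each LSA document vector to a bit string whose Hamming distance approximates the angular distance between the vectors. A minimal sketch, assuming the LSA vectors are already computed and picking an arbitrary fingerprint length:

    import numpy as np

    def fingerprints(doc_vecs, n_bits=64, seed=0):
        """doc_vecs: documents x latent dimensions (e.g. LSA output).
        Each bit records which side of a random hyperplane the vector falls on."""
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((doc_vecs.shape[1], n_bits))
        return (doc_vecs @ planes > 0).astype(np.uint8)            # documents x n_bits

    def hamming(fp_a, fp_b):
        """Number of differing bits between two fingerprints."""
        return int(np.count_nonzero(fp_a != fp_b))

    lsa_vecs = np.array([[0.9, 0.1, 0.0],
                         [0.8, 0.2, 0.1],
                         [0.0, 0.1, 0.9]])
    fps = fingerprints(lsa_vecs)
    print(hamming(fps[0], fps[1]), hamming(fps[0], fps[2]))        # similar documents -> small distance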

16.
Information Retrieval Based on Latent Semantic Analysis   Cited by 15 (1 self, 14 others)
Latent semantic analysis is a theory and method for automatically extracting and representing knowledge: it statistically analyzes a large collection of texts and derives from it the contextual usage meaning of words. This article introduces the basic ideas, characteristics, and implementation of LSA-based text information retrieval.
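To make the basic idea concrete, a small sketch of LSA retrieval with an explicit SVD: the term-document matrix is decomposed, documents live in a k-dimensional latent space, and a query term vector is folded in before cosine ranking. The toy matrix, the query, and k = 2 are assumptions.

    import numpy as np

    # Toy term-document matrix (terms x documents), e.g. weighted counts.
    A = np.array([[2., 0., 1., 0.],
                  [1., 0., 2., 0.],
                  [0., 3., 0., 1.],
                  [0., 1., 0., 2.]])
    k = 2
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

    doc_vecs = (np.diag(sk) @ Vtk).T        # documents as rows in the k-dim latent space
    q = np.array([1., 1., 0., 0.])          # query expressed over the same terms
    q_k = Uk.T @ q                          # fold the query into the latent space

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = [cos(q_k, d) for d in doc_vecs]
    print(np.argsort(scores)[::-1])         # documents ranked by latent similarity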

17.
The Theory of Latent Semantic Analysis and Its Applications   Cited by 17 (1 self, 16 others)
Latent Semantic Analysis (LSA) is a theory and method for automatically extracting and representing knowledge: it statistically analyzes a large collection of texts and derives from it the contextual usage meaning of words. Technically it resembles the vector space model in that both represent texts as vectors in a space, but through SVD and related processing it removes the effects of synonymy and polysemy and improves the precision of subsequent processing. This article focuses on the basic ideas, characteristics, and implementation of LSA, as well as concrete applications based on it.

18.
A question answering system should answer questions posed by users in natural language accurately and concisely; its key and core technique is answer extraction. By combining the weights of keywords in the user question and the returned documents, answer extraction for a Chinese question answering system is implemented with latent semantic analysis. Experimental results show that the MRR of the weighted LSA is clearly better than that of unweighted LSA and of the vector space model, and the system works well when answering real user questions.
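Since the comparison above is reported in MRR, here is the standard computation of mean reciprocal rank over a set of questions; the ranked candidate lists and gold answers below are made-up placeholders.

    def mean_reciprocal_rank(ranked_answer_lists, gold_answers):
        """MRR: average over questions of 1 / rank of the first correct answer (0 if none found)."""
        total = 0.0
        for ranked, gold in zip(ranked_answer_lists, gold_answers):
            for rank, candidate in enumerate(ranked, start=1):
                if candidate in gold:
                    total += 1.0 / rank
                    break
        return total / len(ranked_answer_lists)

    # Hypothetical extractor output for three questions.
    ranked = [["1949", "1950", "1912"], ["Beijing", "Shanghai"], ["Lu Xun", "Mao Dun"]]
    gold = [{"1949"}, {"Nanjing"}, {"Mao Dun"}]
    print(mean_reciprocal_rank(ranked, gold))    # (1/1 + 0 + 1/2) / 3 = 0.5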

19.
Applications of Latent Semantic Analysis in Chinese Information Processing   Cited by 11 (2 self, 9 others)
Latent semantic analysis is a theoretical method for extracting and reproducing natural-language information; it extracts the latent structure of a semantic space by algebraic means. The paper describes the basic theory of LSA and outlines the mathematical meaning of the latent semantic space it constructs; it then uses a simple example to illustrate how LSA is applied in Chinese information processing, and shows its significance for Chinese information processing through the changes in the degree of association between texts and between words in the analysis results.

20.
Answer Extraction for a Chinese Question Answering System Based on Latent Semantic Analysis   Cited by 24 (0 self, 24 others)
To address the missed or wrong extractions caused by synonymy and polysemy when extracting answers in a Chinese question answering system, a method for computing the similarity between questions and answer sentences based on latent semantic analysis (LSA) is proposed. It uses the vector space model to represent questions and sentences and, drawing on LSA theory, statistically analyzes a large corpus of question-answer sentences to build a latent word-sentence semantic space, thereby removing correlations between words; the question-answer similarity is then computed in this semantic space, effectively handling synonymy and polysemy. Finally, combining question types with the similarity scores, answer-sentence extraction experiments are carried out on simple Chinese factoid questions. The MRR of answer extraction reaches 0.47, clearly better than the vector space model, showing that the method is effective.

