Similar Documents
20 similar documents retrieved (search time: 127 ms)
1.
An LDA-Based Algorithm for Latent Semantic Region Partitioning and Web Document Clustering   Cited by: 2 (self-citations: 0, external: 2)
This paper applies the LDA model to latent semantic analysis of documents and partitions the semantic distribution into low-, medium-, and high-frequency semantic regions. Semantics in the low-frequency region are used to detect stray (outlier) Web documents, while semantics in the medium- and high-frequency regions serve as document features for clustering; an interaction mechanism between document categories and semantics then refines the clustering results. Compared with related work, the paper not only uses the LDA model to represent documents but also performs an in-depth partitioning of the semantic distribution and applies the results to Web document clustering. Experiments show that the proposed LDA-based category-semantics interaction clustering algorithm achieves better clustering results.
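The pipeline described in this abstract can be sketched roughly as follows with scikit-learn's LDA. The tercile split of topic frequencies, the stray-document rule, and the load_corpus() helper are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: partition LDA topics by corpus-level frequency, flag documents dominated
# by low-frequency topics as "stray", cluster the rest on the remaining topics.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = load_corpus()                          # hypothetical helper: list of raw texts

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=30, random_state=0)
theta = lda.fit_transform(X)                  # per-document topic distribution

topic_freq = theta.sum(axis=0)                # how much mass each topic attracts overall
low = np.quantile(topic_freq, 1 / 3)
low_topics  = np.where(topic_freq <  low)[0]  # low-frequency semantic region
main_topics = np.where(topic_freq >= low)[0]  # medium/high-frequency regions

stray = theta[:, low_topics].sum(axis=1) > 0.5      # assumed rule for stray Web documents
labels = KMeans(n_clusters=5, n_init=10).fit_predict(theta[~stray][:, main_topics])
```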

2.
The arrival of the networked big-data era has enriched the information resources in cyberspace, but the diversity of data types and their rapid growth place pressure on storage and on the effective use of these resources. This paper proposes a text fingerprinting method based on latent semantic analysis: a compressed representation of the data and an improvement over current fingerprinting methods, which lack semantic information. The method obtains the latent semantic features of the original documents through singular value decomposition, maps the original document vector space into the corresponding latent semantic space, converts the documents in that space into binary fingerprints using the random hyperplane principle, and finally measures the difference between fingerprints with the Hamming distance. Experiments use academic papers from CNKI (中国知网) as the data set and validate the method through similarity and clustering experiments on the paper texts. The results show that the method represents document semantics well, confirming the accuracy and effectiveness of the semantic compressed representation of text.
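A minimal sketch of the fingerprinting pipeline summarized above: SVD projection into a latent semantic space, random-hyperplane hashing, and Hamming distance. The SVD rank, the fingerprint length, and the load_corpus() helper are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = load_corpus()                          # hypothetical helper: list of raw texts

X = TfidfVectorizer().fit_transform(docs)     # original document vector space
Z = TruncatedSVD(n_components=100, random_state=0).fit_transform(X)   # latent semantic space

rng = np.random.default_rng(0)
H = rng.standard_normal((Z.shape[1], 64))     # 64 random hyperplanes
fingerprints = (Z @ H > 0).astype(np.uint8)   # one bit per hyperplane per document

def hamming(a, b):
    """Number of differing bits between two binary fingerprints."""
    return int(np.count_nonzero(a != b))

d = hamming(fingerprints[0], fingerprints[1])  # small distance => semantically similar documents
```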

3.
Double-object phrases are a special linguistic phenomenon. To enable computers to understand and process them, this paper analyzes double-object phrases at both the syntactic and semantic levels and builds a semantic representation model for them based on the Concept Knowledge Tree knowledge representation model. It also proposes a double-object phrase parsing algorithm that automatically converts a double-object phrase into its semantic representation. The algorithm combines top-down and bottom-up processing: the top-down pass partitions the phrase into its syntactic constituents, namely the double-object verb, the indirect object, and the direct object; the bottom-up pass applies a Concept-Knowledge-Tree-based phrase analysis and inference algorithm to each of the three constituents to obtain their semantic representations; finally, the three results are combined into the complete semantic representation of the phrase. From authoritative literature and grammar dictionaries, 122 double-object verbs were selected, and 209 phrases formed from these verbs were analyzed with an accuracy of 90.43%, demonstrating the effectiveness of the proposed parsing algorithm and semantic representation model.

4.
Research on Feature Optimization Techniques in Latent Semantic Indexing   Cited by: 3 (self-citations: 0, external: 3)
Latent semantic indexing (LSI) is widely used in information retrieval, text classification, automatic question answering, and other fields. LSI is a dimensionality-reduction method that maps co-occurring features onto the same dimensions and non-co-occurring features onto different ones. In the LSI semantic space, co-occurring features are obtained through feature-transfer relations within and between documents. This paper argues that such feature transfer introduces co-occurrence features that do not actually exist, degrading LSI performance, and that the transfer relations should therefore be filtered to remove the spurious co-occurrence information. The paper uses document frequency to perform feature selection on the document collection and runs three experiments with the Complete-Link clustering algorithm on two public corpora. The results show that when the top 10%-15% of features by document frequency are retained, the F1 scores improve by 6.5770%, 1.9928%, and 3.3614%, respectively.
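A rough sketch of the pipeline in this abstract: document-frequency feature selection, LSI via truncated SVD, then complete-link clustering. The 12% retention rate, the SVD rank, the cluster count, and the load_corpus() helper are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import AgglomerativeClustering

docs = load_corpus()                               # hypothetical helper: list of raw texts

X = TfidfVectorizer().fit_transform(docs)
df = np.asarray((X > 0).sum(axis=0)).ravel()       # document frequency of each term

keep = np.argsort(df)[::-1][: int(0.12 * X.shape[1])]   # keep top ~12% of terms by DF (assumed rate)
X_sel = X[:, keep]

Z = TruncatedSVD(n_components=100, random_state=0).fit_transform(X_sel)   # LSI semantic space
labels = AgglomerativeClustering(n_clusters=10, linkage="complete").fit_predict(Z)
```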

5.
Semantic Analysis and Chinese Translation of English Prepositions Based on Semantic Pattern Decomposition   Cited by: 1 (self-citations: 0, external: 1)
Building on a study of English preposition-related phrases and sentence patterns, this paper introduces the concept of a semantic pattern and constructs a semantic pattern library for preposition-related phrases, a semantic pattern library for related sentence patterns, a 主虚量 library, and a fixed-collocation knowledge base. Based on the characteristics of the phrase semantic patterns, it proposes an algorithm for semantic analysis and Chinese translation of prepositions via semantic pattern decomposition, which draws on the sentence-pattern library and the collocation knowledge base to analyze and translate English prepositions. Experiments show that the method effectively handles the Chinese translation of English prepositions.

6.
The probabilistic latent semantic retrieval model uses statistical methods to establish probability distributions over the document-latent semantics-word relationship and exploits these distributions for retrieval. This paper compares the impact of different Chinese indexing techniques on retrieval effectiveness within this model, examining three indexing techniques based on word segmentation, bigrams, and keyword extraction, and contrasts them with the vector space model. The experimental results show that, in the probabilistic latent semantic retrieval model, correct word segmentation improves average retrieval precision.

7.
Latent semantic analysis suffers from low computational efficiency and high storage overhead in large-scale semantic retrieval. To address this problem, a clustering-based latent semantic retrieval algorithm is proposed. Documents are clustered according to the structural relationships between them, and the clusters, rather than the individual documents, are used for latent semantic analysis, reducing the number of documents to be processed. Experimental results show that the algorithm reduces query time while maintaining high retrieval precision.

8.
Web Document Classification Based on Rough Set Latent Semantic Indexing   Cited by: 5 (self-citations: 0, external: 5)
Rough set theory is a mathematical tool for handling uncertain or vague knowledge. This paper proposes a Web document classification method based on latent semantic indexing with rough set theory. Web documents are first represented with the vector space model; singular value decomposition of the term-document matrix is then used for information filtering and latent semantic indexing; an attribute reduction algorithm generates classification rules; and finally multiple knowledge bases are used to classify the documents. Comparative experiments show that the method achieves good classification performance.

9.
Noun phrases have long been an important object of study in both Chinese and international linguistics, and in recent years they have also received sustained attention in natural language processing. For English, semantic relation knowledge bases of noun phrases of a certain scale have been built, but no Chinese resource of comparable or larger scale describing noun phrase semantic relations has existed so far. Drawing on domestic and international research on the semantic classification of noun phrases, this paper performs trial annotation and analysis of basic compound noun phrase instances from large-scale real-world corpora and builds a semantic relation taxonomy for Chinese basic compound noun phrases together with a corresponding syntactic-semantic knowledge base, providing a basic data resource for research on the syntax and semantics of such phrases. The knowledge base currently contains 18,281 high-frequency basic compound noun phrases; each phrase is annotated with its semantic relation, phrase structure, and whether it refers to an entity, and the two nouns in each phrase are annotated with their semantic classes. The semantic class information is based on Peking University's 《现代汉语语义词典》 (Semantic Dictionary of Contemporary Chinese). Based on this knowledge base, the paper also presents preliminary statistics and analysis of the syntax and semantics of basic compound noun phrases.

10.
Relation extraction is an important direction in information extraction research and has gradually expanded from the sentence level to the document level. Compared with sentences, documents usually contain more relational facts and can provide richer support for knowledge base construction, information retrieval, and semantic analysis. However, document-level relation extraction is more complex and more difficult, and a systematic, comprehensive review has been lacking. To better promote in-depth research and development of document-level relation extraction, this paper provides a thorough analysis of existing techniques and methods. From the perspectives of data preprocessing and core algorithms, existing document-level relation extraction research is roughly divided into three categories: tree-based, sequence-based, and graph-based. On this basis, the paper describes representative methods in each category, the latest progress, and remaining shortcomings; it also introduces commonly used datasets and performance evaluation metrics and lists the reported performance of representative methods. Finally, it analyzes and summarizes the open problems in document-level relation extraction and points out possible future trends and research directions that deserve further attention.

11.
Text document clustering plays an important role in providing better document retrieval, document browsing, and text mining. Traditionally, clustering techniques do not consider the semantic relationships between words, such as synonymy and hypernymy. To exploit semantic relationships, ontologies such as WordNet have been used to improve clustering results. However, WordNet-based clustering methods mostly rely on single-term analysis of text; they do not perform any phrase-based analysis. In addition, these methods utilize synonymy to identify concepts and only explore hypernymy to calculate concept frequencies, without considering other semantic relationships such as hyponymy. To address these issues, we combine detection of noun phrases with the use of WordNet as background knowledge to explore better ways of representing documents semantically for clustering. First, based on noun phrases as well as single-term analysis, we exploit different document representation methods to analyze the effectiveness of hypernymy, hyponymy, holonymy, and meronymy. Second, we choose the most effective method and compare it with the WordNet-based clustering method proposed by others. The experimental results show that the effectiveness of the semantic relationships for clustering, from highest to lowest, is: hypernymy, hyponymy, meronymy, and holonymy. Moreover, we found that noun phrase analysis improves the WordNet-based clustering method.

12.
Document Similarity Using a Phrase Indexing Graph Model   Cited by: 3 (self-citations: 1, external: 2)
Document clustering techniques mostly rely on single-term analysis of text, such as the vector space model. To better capture the structure of documents, the underlying data model should be able to represent the phrases in the document as well as single terms. We present a novel data model, the Document Index Graph, which indexes Web documents based on phrases rather than on single terms only. The semistructured Web documents help in identifying potential phrases that, when matched with other documents, indicate strong similarity between the documents. The Document Index Graph captures this information, and finding significant matching phrases between documents becomes easy and efficient with such a model. The model is flexible in that it could revert to a compact representation of the vector space model if we choose not to index phrases. However, using phrase indexing yields more accurate document similarity calculations. The similarity between documents is based on both single-term weights and matching-phrase weights. The combined similarities are used with standard document clustering techniques to test their effect on the clustering quality. Experimental results show that our phrase-based similarity, combined with single-term similarity measures, gives a more accurate measure of document similarity and thus significantly enhances Web document clustering quality.
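The combined similarity described in this abstract (single-term weights plus matching-phrase weights) can be illustrated roughly as below. The phrase extraction via fixed 2/3-grams, the blend factor alpha, and the load_corpus() helper are illustrative assumptions; the paper's Document Index Graph supports variable-length phrase matching and is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = load_corpus()                                   # hypothetical helper: list of raw texts

term_vecs   = TfidfVectorizer().fit_transform(docs)                     # single terms
phrase_vecs = TfidfVectorizer(ngram_range=(2, 3)).fit_transform(docs)   # word 2/3-grams stand in for phrases

term_sim   = cosine_similarity(term_vecs)
phrase_sim = cosine_similarity(phrase_vecs)

alpha = 0.7                                            # assumed weight of the phrase component
combined_sim = alpha * phrase_sim + (1 - alpha) * term_sim
```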

13.
The vector space model using a bag of phrases plays an important role in modeling Chinese text. However, the conventional way of using fixed-gram scanning to identify free-length phrases is costly. To address this problem, we propose a novel approach for key phrase identification which is capable of identifying phrases of all lengths and thus improves the coding efficiency and discrimination of the data representation. In the proposed method, we first convert each document into a context graph, a directed graph that encapsulates the statistical and positional information of all the 2-word strings in the document. We treat every transmission path in the graph as a hypothesis for a phrase, and select the corresponding phrase as a candidate phrase if the hypothesis is valid in the original document. Finally, we selectively divide some of the complex candidate phrases into sub-phrases to improve the coding efficiency, resulting in a set of phrases for codebook construction. The experiments on both balanced and unbalanced datasets show that the codebooks generated by our approach are more efficient than those by conventional methods (one syntactical method and three statistical methods are investigated). Furthermore, the data representation created by our approach has demonstrated higher discrimination than those by conventional methods in classification tasks.

14.
Efficient phrase-based document indexing for Web document clustering   Cited by: 4 (self-citations: 0, external: 4)
Document clustering techniques mostly rely on single-term analysis of the document data set, such as the vector space model. To achieve more accurate document clustering, more informative features including phrases and their weights are particularly important in such scenarios. Document clustering is particularly useful in many applications such as automatic categorization of documents, grouping search engine results, building a taxonomy of documents, and others. This article presents two key parts of successful document clustering. The first part is a novel phrase-based document index model, the document index graph, which allows for incremental construction of a phrase-based index of the document set with an emphasis on efficiency, rather than relying on single-term indexes only. It provides efficient phrase matching that is used to judge the similarity between documents. The model is flexible in that it could revert to a compact representation of the vector space model if we choose not to index phrases. The second part is an incremental document clustering algorithm based on maximizing the tightness of clusters by carefully watching the pair-wise document similarity distribution inside clusters. The combination of these two components creates an underlying model for robust and accurate document similarity calculation that leads to much improved results in Web document clustering over traditional methods.
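The second component, incremental clustering driven by cluster tightness, can be sketched roughly as follows. The cosine similarity function, the tightness threshold, and this exact acceptance rule are illustrative assumptions rather than the paper's precise criterion, and the phrase-based index is not reproduced here.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def incremental_cluster(doc_vectors, threshold=0.3):
    """Assign each document, in arrival order, to the cluster whose internal average
    pairwise similarity stays highest (and above threshold) after adding it;
    otherwise start a new cluster."""
    clusters = []                                    # each cluster is a list of document indices
    for i in range(len(doc_vectors)):
        best, best_tightness = None, threshold
        for c in clusters:
            members = c + [i]
            sims = [cosine(doc_vectors[a], doc_vectors[b])
                    for k, a in enumerate(members) for b in members[k + 1:]]
            tightness = sum(sims) / len(sims)        # average pairwise similarity if i joins
            if tightness > best_tightness:
                best, best_tightness = c, tightness
        if best is None:
            clusters.append([i])
        else:
            best.append(i)
    return clusters
```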

15.
In an empirical user study, we assessed two approaches to ranking the results from a keyword search using semantic contextual match based on Latent Semantic Analysis. These techniques involved searches initiated from words found in a seed document within a corpus. The first approach used the sentence around the search query in the document as context while the second used the entire document. With a corpus of 20,000 documents and a small proportion of relevant documents (<0.1%), both techniques outperformed a conventional keyword search on a recall-based information retrieval (IR) task. These context-based techniques were associated with a reduction in the number of searches conducted, an increase in users’ precision and, to a lesser extent, an increase in recall. This improvement was strongest when the ranking was based on document, rather than sentence, context. Individuals were more effective on the IR task when the lists returned by the techniques were ranked better. User performance on the task also correlated with achievement on a generalized IQ test but not on a linguistic ability test.
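A rough sketch of the document-context condition described above: keyword-search hits are re-ranked by their LSA-space cosine similarity to the seed document the query words came from. The LSA rank, the seed choice, and the load_corpus()/keyword_search() helpers are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = load_corpus()                                   # hypothetical helper: list of raw texts
vec = TfidfVectorizer()
lsa = TruncatedSVD(n_components=300, random_state=0)
Z = lsa.fit_transform(vec.fit_transform(docs))         # corpus projected into LSA space

hits = keyword_search("latent semantic analysis")      # hypothetical helper: indices of matching docs
seed_doc = docs[42]                                    # assumed seed document the query originated from

context = lsa.transform(vec.transform([seed_doc]))     # document-level context vector in LSA space
scores = cosine_similarity(context, Z[hits]).ravel()
ranked = [hits[i] for i in np.argsort(scores)[::-1]]   # best contextual match first
```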

16.
Research on Text Classification Based on the LDA Model   Cited by: 3 (self-citations: 0, external: 3)
To address the limitations of traditional dimensionality-reduction algorithms for high-dimensional, large-scale text classification, this paper proposes a text classification algorithm based on the LDA model. Within the discriminative SVM framework, the LDA probabilistic model is applied to perform topic modeling on the document collection, and an SVM is trained on the collection's latent topic-document matrix to build the text classifier. Parameter inference uses Gibbs sampling, representing each document as a probability distribution over a fixed set of latent topics, and the optimal number of topics T is determined using standard methods from Bayesian statistics. Classification experiments on the corpus show better performance than representing documents with VSM plus SVM or LSI plus SVM.
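A minimal sketch of the LDA-features-plus-SVM pipeline summarized above. Note that scikit-learn's LDA uses variational inference rather than the Gibbs sampling mentioned in the abstract; the topic number, the split, and the load_corpus() helper are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

docs, labels = load_corpus()                       # hypothetical helper: texts and class labels

clf = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=50, random_state=0),   # topic distribution per document
    SVC(kernel="linear"),                                         # discriminative classifier on topic features
)

X_train, X_test, y_train, y_test = train_test_split(docs, labels, test_size=0.2, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```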

17.
This paper studies methods for computing similarity between documents in different languages, using Chinese-English-Korean cross-lingual text data. First, co-occurring-word mapping is used to express a document vector from one language's space as a vector in another language's space; second, latent semantic analysis compensates for the vector gaps caused by cross-lingual polysemy; finally, the cosine similarity between two documents is computed in a single language space carrying equivalent semantic information. The work avoids external dictionaries and knowledge bases, instead using aligned Chinese-English-Korean corpora to establish correspondences between the vocabularies of the different languages. The results show that co-occurring-word mapping has a strong effect on cross-lingual document similarity, with a retrieval accuracy of 95% for documents with the same meaning in different languages (i.e., translations), demonstrating the effectiveness of the method.
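A rough sketch of the mapping-then-LSA idea summarized above: a co-occurrence word-mapping matrix built from an aligned corpus projects a source-language document vector into the target-language term space, LSA smooths the result, and cosine similarity is computed there. The construction of the mapping matrix, the SVD rank, and the load_aligned_resources() helper (returning dense numpy arrays) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical helper returning:
#   target_docs - term-document matrix of target-language documents (n_docs x |V_tgt|)
#   M           - |V_src| x |V_tgt| co-occurrence counts of source/target words in aligned sentence pairs
#   src_doc     - term vector of one source-language document (length |V_src|)
target_docs, M, src_doc = load_aligned_resources()

M = M / np.maximum(M.sum(axis=1, keepdims=True), 1)   # each source word's translation profile
mapped = src_doc @ M                                  # source document expressed in target vocabulary

lsa = TruncatedSVD(n_components=200, random_state=0).fit(target_docs)
sims = cosine_similarity(lsa.transform(mapped.reshape(1, -1)),
                         lsa.transform(target_docs)).ravel()
best = int(np.argmax(sims))                           # target document most similar to the source document
```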

18.
In this paper, we introduce variability of syntactic phrases and propose a new retrieval approach reflecting the variability of syntactic phrase representation. With the variability measure of a phrase, we can estimate how likely a phrase in a given query is to appear in relevant documents and control the impact of syntactic phrases in a retrieval model. Various experimental results over different types of queries and document collections show that our retrieval model based on variability of syntactic phrases is very effective in terms of retrieval performance, especially for long natural language queries.

19.
A Method for Computing Text Feature Term Weights Based on Latent Semantic Indexing   Cited by: 1 (self-citations: 0, external: 1)
李媛媛, 马永强. 《计算机应用》 (Journal of Computer Applications), 2008, 28(6): 1460-1462
Latent semantic indexing has the advantages of strong computability and little need for human involvement. This paper analyzes in depth one of its important optimization steps: weight computation. Addressing the shortcomings of the most widely used TF-IDF method, namely the unreasonableness of its linear treatment and its difficulty in highlighting the features that play a key role in the text content, a new weighting scheme based on a sigmoid function and a position factor is proposed. It emphasizes the differing importance of feature terms in the text and is more conducive to constructing the latent semantic space. Tests on the experimental platform, a Chinese latent semantic indexing analysis system, show that the weighting method further improves latent-semantics-based retrieval performance.
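A rough illustration of a sigmoid-plus-position-factor term weighting scheme in the spirit of the abstract above. The abstract does not give the formulas, so the sigmoid squashing of term frequency and the boost for terms appearing early in a document are purely illustrative assumptions.

```python
import math

def term_weight(tf, df, n_docs, first_pos, doc_len):
    """Weight of a term in one document (illustrative form only).

    tf        - raw term frequency in the document
    df        - number of documents containing the term
    n_docs    - total number of documents
    first_pos - token offset of the term's first occurrence
    doc_len   - document length in tokens
    """
    sig = 1.0 / (1.0 + math.exp(-tf))                    # sigmoid dampens linear TF growth
    idf = math.log((n_docs + 1) / (df + 1)) + 1
    position = 1.0 + 0.5 * (1.0 - first_pos / doc_len)   # earlier first occurrence => small boost
    return sig * idf * position

w = term_weight(tf=5, df=20, n_docs=1000, first_pos=3, doc_len=400)
```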

20.
Unsupervised Learning by Probabilistic Latent Semantic Analysis   Cited by: 8 (self-citations: 0, external: 0)
Hofmann, Thomas. Machine Learning, 2001, 42(1-2): 177-196
This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.
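A compact numpy sketch of tempered EM for the latent class model summarized above, P(d,w) = sum_z P(z) P(d|z) P(w|z), with the E-step posteriors raised to a temperature beta. The array sizes, beta, the random count matrix, and the initialization are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
D, W, Z = 100, 500, 20                              # documents, words, latent classes (toy sizes)
N = rng.integers(0, 5, size=(D, W)).astype(float)   # n(d, w) co-occurrence counts (toy data)

def normalize(a, axis):
    return a / a.sum(axis=axis, keepdims=True)

Pz  = np.full(Z, 1.0 / Z)                     # P(z)
Pdz = normalize(rng.random((Z, D)), axis=1)   # P(d|z)
Pwz = normalize(rng.random((Z, W)), axis=1)   # P(w|z)
beta = 0.8                                    # temperature; beta = 1 recovers standard EM

for _ in range(50):
    # E-step: tempered posterior P(z|d,w) proportional to [P(z) P(d|z) P(w|z)]^beta
    joint = (Pz[:, None, None] * Pdz[:, :, None] * Pwz[:, None, :]) ** beta   # shape (Z, D, W)
    post = joint / joint.sum(axis=0, keepdims=True)

    # M-step: re-estimate parameters from expected counts n(d,w) * P(z|d,w)
    expected = N[None, :, :] * post
    Pwz = normalize(expected.sum(axis=1), axis=1)
    Pdz = normalize(expected.sum(axis=2), axis=1)
    Pz  = normalize(expected.sum(axis=(1, 2)), axis=0)
```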

