Similar Documents
20 similar documents found.
1.
Statistical semantics for enhancing document clustering
Document clustering algorithms usually use the vector space model (VSM) as their underlying model for document representation. VSM assumes that terms are independent and accordingly ignores any semantic relations between them. This results in mapping documents to a space where the proximity between document vectors does not reflect their true semantic similarity. This paper proposes new models for document representation that capture semantic similarity between documents based on measures of correlations between their terms. The paper uses the proposed models to enhance the effectiveness of different algorithms for document clustering. The proposed representation models define a corpus-specific semantic similarity by estimating measures of term–term correlations from the documents to be clustered. The corpus of documents accordingly defines a context in which semantic similarity is calculated. Experiments have been conducted on thirteen benchmark data sets to empirically evaluate the effectiveness of the proposed models and compare them to VSM and other well-known models for capturing semantic similarity.
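
A minimal sketch of the idea, assuming term-term correlations are estimated as cosine similarities between term occurrence profiles (the abstract does not specify the paper's actual correlation measures):

```python
import numpy as np

def smoothed_similarity(X):
    """X: (n_docs, n_terms) term-frequency matrix. Returns an
    (n_docs, n_docs) similarity matrix in which document proximity is
    mediated by term-term correlations estimated from the corpus itself."""
    T = X.T @ X                                  # term co-occurrence counts
    n = np.sqrt(np.diag(T))
    C = T / (np.outer(n, n) + 1e-12)             # term-term correlation matrix
    S = X @ C @ X.T                              # smoothed inner products
    d = np.sqrt(np.diag(S))
    return S / (np.outer(d, d) + 1e-12)          # cosine-normalize

# Example: 3 toy documents over a 4-term vocabulary
X = np.array([[2, 1, 0, 0],
              [0, 1, 2, 0],
              [0, 0, 1, 2]], dtype=float)
print(smoothed_similarity(X).round(2))
```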

2.
To improve the accuracy of similarity computation for short Chinese texts, a new hybrid-strategy method for computing the similarity of short Chinese texts is proposed. First, based on the semantic distance between words, hierarchical clustering is used to build a binary clustering tree over the short texts, improving the traditional vector space model (VSM) and yielding a keyword-weighted text similarity. Then, the traditional approach based on a syntactic-semantic model is improved by extracting the main components of each sentence, yielding the semantic similarity between sentence trunks. Finally, the two similarities are combined with weights to compute the final text similarity. Experimental results show that the proposed method computes short-text similarity more accurately and agrees better with human judgment.

3.
Most statistics-based text similarity measures first represent texts as term-frequency vectors using TF-IDF and then compute the cosine between the vectors. Because such methods ignore the semantic information carried by the terms, they do not reflect the similarity between texts well. Semantics-based methods can compensate for this deficiency, but they require a knowledge base to model the semantic relations between words. After examining the strengths and weaknesses of both families of methods, a novel text similarity measure is proposed: the texts are first preprocessed, the terms with the highest TF-IDF values are selected as feature terms, and text similarity is then computed by combining semantic analysis of the feature terms based on the HowNet semantic dictionary with TF-IDF term-frequency statistics. Finally, clustering experiments are carried out on benchmark text datasets using the resulting similarity. Experimental results show that the F-measure obtained with the proposed method is clearly better than that of methods using TF-IDF alone or word semantics alone, demonstrating the effectiveness of the proposed text similarity measure.
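
The statistical half of this pipeline (TF-IDF features plus cosine similarity) can be sketched with scikit-learn; the HowNet-based semantic analysis is omitted here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "a cat lay on a rug",
        "stock prices fell sharply today"]

X = TfidfVectorizer().fit_transform(docs)   # (n_docs, n_terms) TF-IDF matrix
print(cosine_similarity(X).round(2))        # docs 0 and 1 score highest
```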

4.
To address the deficiency that the existing vector space model ignores the semantic relations between terms when representing documents, a new association-rule-based document vector representation is proposed. Term co-occurrence semantics are obtained by analyzing frequent co-occurrence relations between terms under the generalized vector space model (GVSM); association rules are then used to analyze the correlations between terms and mine the latent associative semantic relations between terms in the documents; finally, each document is represented by a linear weighted combination of the co-occurrence semantics and the associative semantics. Experimental results show that, compared with the BOW and GVSM models, document clustering based on the association-rule representation is more accurate.
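
The association-rule step can be sketched with mlxtend on a document-term boolean matrix (a sketch under stated assumptions; the support and confidence thresholds are hypothetical, and the paper's rule-weighting scheme is not reproduced):

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One row per document, one boolean column per term.
df = pd.DataFrame([{"cat": 1, "pet": 1, "dog": 0},
                   {"cat": 1, "pet": 1, "dog": 1},
                   {"cat": 0, "pet": 1, "dog": 1}], dtype=bool)

frequent = apriori(df, min_support=0.5, use_colnames=True)  # frequent co-occurrence
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```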

5.
Traditional text clustering methods are only suited to static document collections and have high time complexity. To address this, an incremental text clustering algorithm based on cluster cohesion is proposed. Using a text representation model built on term-level semantic similarity, the algorithm exploits the semantic information between terms and clusters each newly arriving text incrementally by computing its cohesion with the existing clusters. After a batch of texts has been processed incrementally, texts that are likely to have been misassigned are given new cluster labels to further improve clustering quality. The algorithm avoids a large amount of repeated computation when the data keep growing or being updated. Experiments on the 20 Newsgroups dataset show that, compared with the k-means and SHC algorithms, the proposed algorithm reduces clustering time and improves clustering performance.
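
A minimal sketch of the incremental step, assuming cosine similarity against running mean centroids and a hypothetical threshold (the paper's cohesion measure is based on term-level semantics, which is not reproduced here):

```python
import numpy as np

def incremental_assign(doc, centroids, sizes, threshold=0.3):
    """Assign one new document vector to the most cohesive existing cluster,
    or open a new cluster when none is cohesive enough. centroids and sizes
    are mutable lists describing the current clusters."""
    if centroids:
        sims = [doc @ c / (np.linalg.norm(doc) * np.linalg.norm(c) + 1e-12)
                for c in centroids]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            # fold the document into the running mean centroid
            centroids[best] = (centroids[best] * sizes[best] + doc) / (sizes[best] + 1)
            sizes[best] += 1
            return best
    centroids.append(doc.astype(float).copy())
    sizes.append(1)
    return len(centroids) - 1
```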

6.
Semantic-based searching in peer-to-peer (P2P) networks has drawn significant attention recently. A number of semantic searching schemes, such as GES proposed by Zhu Y et al., employ search models in Information Retrieval (IR). All these IR-based schemes use one vector to summarize semantic contents of all documents on a single node. For example, GES derives a node vector based on the IR model: VSM (Vector Space Model). A topology adaptation algorithm and a search protocol are then designed according to the similarity between node vectors of different nodes. Although the single semantic vector is suitable when the distribution of documents in each node is uniform, it may not be efficient when the distribution is diverse. When there are many categories of documents at each node, the node vector representation may be inaccurate. We extend the idea of GES and present a new class-based semantic searching scheme (CSS) specifically designed for unstructured P2P networks with heterogeneous single-node document collection. It makes use of a state-of-the-art data clustering algorithm, online spherical k-means clustering (OSKM), to cluster all documents on a node into several classes. Each class can be viewed as a virtual node. Virtual nodes are connected through virtual links. As a result, the class vector replaces the node vector and plays an important role in the class-based topology adaptation and search process. This makes CSS very efficient. Our simulation using the IR benchmark TREC collection demonstrates that CSS outperforms GES in terms of higher recall, higher precision, and lower search cost.
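
As a rough stand-in for OSKM (an assumption, not the CSS implementation, and batch rather than online): L2-normalize the document vectors so that Euclidean k-means approximates cosine-based spherical k-means, then derive one class vector per cluster:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
X = rng.random((100, 20))             # stand-in for one node's document vectors
Xn = normalize(X)                     # on the unit sphere, Euclidean ~ cosine
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xn)

# Each label group becomes one "virtual node"; its class vector replaces
# the single node vector in topology adaptation and search.
class_vectors = [normalize(Xn[labels == k].mean(axis=0, keepdims=True))[0]
                 for k in range(4)]
```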

7.
Text document clustering using global term context vectors
Despite the advantages of the traditional vector space model (VSM) representation, there are known deficiencies concerning the term independence assumption. The high dimensionality and sparsity of the text feature space and phenomena such as polysemy and synonymy can only be handled if a way is provided to measure term similarity. Many approaches have been proposed that map document vectors onto a new feature space where learning algorithms can achieve better solutions. This paper presents the global term context vector-VSM (GTCV-VSM) method for text document representation. It is an extension to VSM that (i) captures local contextual information for each term occurrence in the term sequences of documents; (ii) combines the local contexts of a term's occurrences to define the global context of that term; (iii) constructs a semantic matrix from the global contexts of all terms; and (iv) uses this matrix to linearly map traditional VSM (Bag of Words, BOW) document vectors onto a ‘semantically smoothed’ feature space where problems such as text document clustering can be solved more efficiently. We present an experimental study demonstrating the improvement of clustering results when the proposed GTCV-VSM representation is used compared with traditional VSM-based approaches.
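
A rough sketch of steps (i)-(iii) under stated assumptions (fixed-size context windows, occurrence-averaged counts; the paper's exact construction may differ). Step (iv) is then the linear map X @ G:

```python
import numpy as np

def global_term_context_matrix(docs, vocab, window=2):
    """docs: list of token lists; vocab: list of V terms. Returns a (V, V)
    matrix G whose row t is the global context vector of term t, obtained
    by averaging the local window contexts of all occurrences of t."""
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    G, occ = np.zeros((V, V)), np.zeros(V)
    for toks in docs:
        for p, w in enumerate(toks):
            if w not in idx:
                continue
            lo, hi = max(0, p - window), min(len(toks), p + window + 1)
            for q in range(lo, hi):           # local context of this occurrence
                if q != p and toks[q] in idx:
                    G[idx[w], idx[toks[q]]] += 1
            occ[idx[w]] += 1
    G /= np.maximum(occ, 1)[:, None]          # combine local contexts globally
    return G + np.eye(V)                      # keep each term's own identity

# Step (iv): map BOW vectors onto the smoothed space, X_smoothed = X @ G.
```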

8.
Wang Tao, Cai Yi, Leung Ho-fung, Lau Raymond Y. K., Xie Haoran, Li Qing. Knowledge and Information Systems, 2021, 63(9): 2313-2346.

In text categorization, Vector Space Model (VSM) has been widely used for representing documents, in which a document is represented by a vector of terms. Since different terms contribute to a document’s semantics in various degrees, a number of term weighting schemes have been proposed for VSM to improve text categorization performance. Much evidence shows that the performance of a term weighting scheme often varies across different text categorization tasks, while the mechanism underlying variability in a scheme’s performance remains unclear. Moreover, existing schemes often weight a term with respect to a category locally, without considering the global distribution of a term’s occurrences across all categories in a corpus. In this paper, we first systematically examine pros and cons of existing term weighting schemes in text categorization and explore the reasons why some schemes with sound theoretical bases, such as chi-square test and information gain, perform poorly in empirical evaluations. By measuring the concentration that a term distributes across all categories in a corpus, we then propose a series of entropy-based term weighting schemes to measure the distinguishing power of a term in text categorization. Through extensive experiments on five different datasets, the proposed term weighting schemes consistently outperform the state-of-the-art schemes. Moreover, our findings shed new light on how to choose and develop an effective term weighting scheme for a specific text categorization task.
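
One way to express the entropy-based idea in code (a sketch, not the paper's exact weighting scheme): a term concentrated in a few categories gets a high weight; a term spread evenly over all K categories gets a weight near 0:

```python
import numpy as np

def entropy_weight(category_counts):
    """category_counts: occurrences of one term in each of K categories.
    Returns 1 - H(p) / log(K): high when the term concentrates in few
    categories (distinguishing), near 0 when it spreads evenly."""
    c = np.asarray(category_counts, dtype=float)
    p = c / c.sum()
    p = p[p > 0]                          # 0 * log 0 is taken as 0
    H = -(p * np.log(p)).sum()
    return 1.0 - H / np.log(len(c))

print(entropy_weight([90, 5, 5]))         # concentrated term -> high weight
print(entropy_weight([33, 33, 34]))       # evenly spread term -> ~0
```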


9.
Clustering is a very powerful data mining technique for topic discovery from text documents. Partitional clustering algorithms, such as the family of k-means, are reported to perform well on document clustering. They treat the clustering problem as an optimization process of grouping documents into k clusters so that a particular criterion function is minimized or maximized. Usually, the cosine function is used to measure the similarity between two documents in the criterion function, but it may not work well when the clusters are not well separated. To solve this problem, we applied the concepts of neighbors and link, introduced in [S. Guha, R. Rastogi, K. Shim, ROCK: a robust clustering algorithm for categorical attributes, Information Systems 25 (5) (2000) 345–366], to document clustering. If two documents are similar enough, they are considered neighbors of each other, and the link between two documents represents the number of their common neighbors. Instead of considering only the pairwise similarity, the neighbors and link bring global information into the measurement of the closeness of two documents. In this paper, we propose to use the neighbors and link for the family of k-means algorithms in three aspects: a new method to select initial cluster centroids based on the ranks of candidate documents; a new similarity measure which uses a combination of the cosine and link functions; and a new heuristic function for selecting a cluster to split based on the neighbors of the cluster centroids. Our experimental results on real-life data sets demonstrate that the proposed methods can significantly improve the performance of document clustering in terms of accuracy without increasing the execution time much.
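
The neighbor and link computation itself is compact (the similarity threshold here is a hypothetical parameter):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def link_matrix(X, sim_threshold=0.5):
    """X: (n_docs, n_terms) document vectors. Two documents are neighbors
    if their cosine similarity reaches the threshold; link(i, j) counts
    their common neighbors, bringing global information into the measure."""
    A = (cosine_similarity(X) >= sim_threshold).astype(int)
    np.fill_diagonal(A, 0)             # a document is not its own neighbor
    return A @ A                       # entry (i, j) = number of common neighbors
```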

10.
Text representation is a fundamental task in natural language processing. To address the high dimensionality and sparsity of traditional short-text representations, a short-text representation learning method based on context in a semantic feature space is proposed. Because the initial feature space has a very high dimension, mutual information and co-occurrence relations between terms are computed to obtain initial similarities, the terms are clustered, and the cluster centers are taken to represent the reduced semantic feature space. Then, combining the contextual information of terms within the resulting clusters, three similarity measures are designed to compute the similarity between the terms of a text and the feature words of the feature space, producing a mapping matrix that yields the learned representation of the short text. Experimental results show that the proposed method reflects the semantic information of short texts well and learns reasonable and effective short-text representations.
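
A sketch of the first step, assuming document-level co-occurrence and pointwise mutual information as the initial term similarity (the paper combines mutual information with co-occurrence before clustering the terms):

```python
import math
from collections import Counter
from itertools import combinations

def pmi_term_similarity(docs):
    """docs: list of term sets. Returns {(w1, w2): PMI} for co-occurring
    pairs, using document-level co-occurrence probabilities."""
    n = len(docs)
    df = Counter(w for d in docs for w in d)       # document frequency per term
    co = Counter(p for d in docs for p in combinations(sorted(d), 2))
    return {(a, b): math.log((c / n) / ((df[a] / n) * (df[b] / n)))
            for (a, b), c in co.items()}

docs = [{"bank", "loan"}, {"bank", "loan", "rate"}, {"river", "bank"}]
print(pmi_term_similarity(docs))
```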

11.
Text clustering algorithms based on the VSM (vector space model) ignore the semantic information between words and the relations between dimensions, so their text similarity computation is not precise enough. A new swarm-intelligence text clustering method based on semantic similarity is therefore proposed, combining the global search of simulated annealing with the positive-feedback capability of the ant colony algorithm. The idea is to first analyze the texts semantically and cluster them with the k-means algorithm, and then, starting from the k-means result, refine the clustering with the ant colony and simulated annealing algorithms. Test results show that the algorithm improves clustering precision and recall, and verify the correctness of the hybrid algorithm.

12.
Document clustering is text processing that groups documents with similar concepts. It is usually considered an unsupervised learning approach because there is no teacher to guide the training process, and topical information is often assumed to be unavailable. A guided approach to document clustering that integrates linguistic top-down knowledge from WordNet into text vector representations, based on the extended significance vector weighting technique, improves both classification accuracy and average quantization error. In our guided self-organization approach we integrate topical and semantic information from WordNet. Because a document-training set with preclassified information implies relationships between a word and its preference class, we propose a novel document vector representation approach to extract these relationships for document clustering. Furthermore, by merging statistical methods, competitive neural models, and semantic relationships from the symbolic WordNet, our hybrid learning approach is robust and scales up to a real-world task of clustering 100,000 news documents.

13.
Text clustering based on the vector space model (VSM) suffers from excessively high vector dimensionality and a lack of semantic information, which biases the clustering results. To solve these problems, HowNet is introduced as a semantic dictionary and the shortcomings of its word similarity algorithm are remedied. The improved word semantic similarity algorithm is used to semantically compress the text features so that all feature words are topic-relevant, and an adjusted TF-IDF algorithm weights the feature terms, completing feature extraction and reducing the dimensionality of the text representation model. During clustering, texts of the same class are placed in the same cluster, and the feature words of all texts in a cluster are used to extract the cluster's semantic features, so that clusters and texts share the same representation model. Clusters whose semantic similarity exceeds a threshold are merged and the cluster features are updated, until the algorithm terminates. Experiments show that, compared with clustering based on k-means and VSM, the proposed algorithm greatly reduces vector dimensionality and clearly improves the clustering results.

14.
A new text classification model based on cascade-correlation neural networks and SVD
A new text classification model based on the Cascade-Correlation Neural Network (CCNN) and Singular Value Decomposition (SVD) is proposed, in which the network is trained with the cascade-correlation algorithm. Most text classification systems represent documents with the vector space model (VSM), but this requires very high dimensionality and fails to capture the latent semantic information among feature terms, so classification results are unsatisfactory. SVD is therefore introduced to learn and represent the feature terms: while reducing the feature dimensionality, it brings out the latent semantic information of the text features. Experiments show that, besides speeding up training, the model improves classification accuracy.
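
The SVD step corresponds to latent semantic analysis; a minimal scikit-learn sketch (the cascade-correlation classifier itself is not reproduced):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["neural networks classify text documents",
        "cascade correlation trains neural networks",
        "interest rates moved bond markets today"]

X = TfidfVectorizer().fit_transform(docs)     # high-dimensional, sparse
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(Z.shape)                                # (3, 2) dense semantic features
# Z replaces the raw TF-IDF vectors as input to the classifier.
```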

15.
邱先标, 陈笑蓉. 《计算机科学》, 2018, 45(Z6): 106-109, 139.
Computing text similarity is fundamental to many text processing techniques. However, the commonly used similarity measures based on the vector space model (VSM) suffer from high dimensionality, sparsity, and poor semantic sensitivity, so their results are unsatisfactory. Building on the traditional LDA (Latent Dirichlet Allocation) model, and addressing its need for a manually specified number of topics, a self-adaptive LDA (SA_LDA) model is proposed that determines the number of topics through the model's own iteration. The model is then applied to text similarity computation, alleviating the high-dimensionality and sparsity problems to some extent. Experiments show that the method determines the number of topics automatically and achieves higher accuracy than the VSM model when computing text similarity.
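
The abstract does not spell out SA_LDA's iteration; a common stand-in (an assumption, not the paper's method) is to scan candidate topic counts and keep the one with the lowest perplexity:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cats and dogs are pets", "dogs chase cats",
        "stocks and bonds are assets", "bonds yield interest",
        "pets need food and care", "interest rates move stocks"]
X = CountVectorizer().fit_transform(docs)

best_k, best_perplexity = None, np.inf
for k in range(2, 5):                             # candidate topic counts
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    p = lda.perplexity(X)                         # lower is better
    if p < best_perplexity:
        best_k, best_perplexity = k, p
print("chosen number of topics:", best_k)
```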

16.
Text clustering is an important topic in text mining. One of the most effective methods for text clustering is the approach based on frequent itemsets (FIs), and thus there are many related algorithms that aim to improve the accuracy of text clustering. However, these algorithms do not consider the weights of terms in documents, even though the frequency of each term in each document has a great impact on the results. In this work, we propose a new method for text clustering based on frequent weighted utility itemsets (FWUI). First, we calculate the term frequency (TF) for each term in documents to create a weight matrix for all documents. The weights of terms in documents are based on the inverse document frequency. Next, we use the Modification Weighted Itemset Tidset (MWIT)-FWUI algorithm for mining FWUI from the number matrix and the weights of terms in documents. Finally, based on the frequent utility itemsets, we cluster documents using the MC (Maximum Capturing) algorithm. The proposed method has been evaluated on three data sets consisting of 1,600 documents covering 16 topics. The experimental results show that our method, using FWUI, improves the accuracy of text clustering compared with methods using FIs.

17.
Text clustering is widely applied in many fields, but traditional text clustering methods do not take semantics into account, so their results are unsatisfactory. This work transforms the VSM using semantics: each dimension of the VSM is warped semantically, turning the original orthogonal coordinate system into an oblique one; the text feature vectors are then mapped onto the transformed VSM before clustering, which relatively reduces the semantic distance between semantically related feature vectors and thereby improves the text clustering results.
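
One way to read the oblique-coordinate idea (a sketch under stated assumptions; S is a hypothetical semantic relatedness matrix): the bilinear form x @ S @ y shrinks the distance between semantically related vectors even when they share no terms:

```python
import numpy as np

def oblique_similarity(x, y, S):
    """Cosine similarity in a semantically warped (oblique) coordinate
    system: S is a (V, V) positive semidefinite matrix whose entry (i, j)
    encodes the semantic relatedness of dimensions i and j."""
    return (x @ S @ y) / (np.sqrt(x @ S @ x) * np.sqrt(y @ S @ y) + 1e-12)

S = np.array([[1.0, 0.8, 0.0],      # dims 0 and 1 are semantically related
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
x = np.array([1.0, 0.0, 0.0])       # uses only dimension 0
y = np.array([0.0, 1.0, 0.0])       # uses only dimension 1
print(oblique_similarity(x, y, S))  # 0.8: related despite disjoint terms
```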

18.
Learning element similarity matrix for semi-structured document analysis
Capturing latent structural and semantic properties in semi-structured documents (e.g., XML documents) is crucial for improving the performance of related document analysis tasks. The Structured Link Vector Model (SLVM) is a representation recently proposed for modeling semi-structured documents. It uses an element similarity matrix to capture the latent relationships between XML elements, the constructing components of an XML document. In this paper, instead of applying heuristics to define the element similarity matrix, we propose to compute the matrix using a machine learning approach. In addition, we incorporate term semantics into SLVM using latent semantic indexing to enhance the model accuracy, with the element similarity learnability property preserved. For performance evaluation, we applied the similarity learning to k-nearest-neighbor search and similarity-based clustering, and tested the performance using two different XML document collections. The SLVM obtained via learning was found to significantly outperform the conventional vector space model and the edit-distance-based methods. Also, the similarity matrix, obtained as a by-product, can provide higher-level knowledge on the semantic relationships between the XML elements.

19.
Text document clustering plays an important role in providing better document retrieval, document browsing, and text mining. Traditionally, clustering techniques do not consider the semantic relationships between words, such as synonymy and hypernymy. To exploit semantic relationships, ontologies such as WordNet have been used to improve clustering results. However, WordNet-based clustering methods mostly rely on single-term analysis of text; they do not perform any phrase-based analysis. In addition, these methods utilize synonymy to identify concepts and only explore hypernymy to calculate concept frequencies, without considering other semantic relationships such as hyponymy. To address these issues, we combine detection of noun phrases with the use of WordNet as background knowledge to explore better ways of representing documents semantically for clustering. First, based on noun phrases as well as single-term analysis, we exploit different document representation methods to analyze the effectiveness of hypernymy, hyponymy, holonymy, and meronymy. Second, we choose the most effective method and compare it with the WordNet-based clustering method proposed by others. The experimental results show that the order of effectiveness of semantic relationships for clustering is (from highest to lowest): hypernymy, hyponymy, meronymy, and holonymy. Moreover, we found that noun phrase analysis improves the WordNet-based clustering method.
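
All four relations are directly queryable in WordNet through NLTK (assuming the wordnet corpus has been downloaded):

```python
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

car = wn.synsets("car")[0]              # first noun sense, car.n.01
print(car.hypernyms())                  # more general concepts (motor vehicle)
print(car.hyponyms())                   # more specific concepts (taxi, ...)
print(car.part_meronyms())              # parts of a car (accelerator, ...)
print(car.part_holonyms())              # wholes a car is part of (may be empty)
```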

20.
To address the instability of clustering results in title-text clustering, a title-text clustering method based on cluster ensembles is proposed. The method filters the feature words of the title texts, converting each title into a set of feature words; it then computes the similarity between feature-word sets with a proposed similarity measure that combines statistics and semantics; finally, a cluster ensemble algorithm based on the co-association matrix is introduced to obtain the clustering result. Experimental results show that, compared with traditional clustering algorithms, the method improves the stability of title-text clustering.
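
The co-association matrix at the heart of the ensemble step is straightforward: entry (i, j) is the fraction of base clusterings that put titles i and j in the same cluster. A final clustering can then be derived from it, here (as an assumption, not necessarily the paper's consensus function) by average-link hierarchical clustering on 1 - M:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def co_association(labelings):
    """labelings: (m_runs, n_docs) cluster labels from m base clusterings.
    Returns the (n_docs, n_docs) matrix of same-cluster fractions."""
    L = np.asarray(labelings)
    return sum((run[:, None] == run[None, :]).astype(float) for run in L) / len(L)

runs = np.array([[0, 0, 1, 1],
                 [0, 0, 0, 1],
                 [1, 1, 0, 0]])        # three base clusterings of 4 titles
M = co_association(runs)
# Average-link clustering on 1 - M, treated as a condensed distance matrix.
dist = 1 - M[np.triu_indices(len(M), k=1)]
print(fcluster(linkage(dist, method="average"), t=0.5, criterion="distance"))
```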
