Similar Documents
 20 similar documents found (search time: 113 ms)
1.
许伟佳 《数字社区&智能家居》2009,5(9):7281-7283,7286
Document clustering plays an important role in Web text mining and is an application of cluster analysis to text processing. This article introduces text representation based on the vector space model, then analyzes and optimizes the weighting function for feature terms in that model so that distance-based similarity measures become more accurate. It focuses on the partition-based k-means algorithm widely used in Web document clustering: to address k-means's weakness of selecting initial cluster centers at random, it describes in detail how the max-min distance principle, combined with sampling techniques, stabilizes the selection of initial centers and improves the clustering result.
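As a concrete illustration of the max-min distance seeding the abstract describes (a minimal sketch, not the paper's own code; the function names and toy points are invented), one can write:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def maxmin_init(points, k):
    """Max-min distance seeding: start from an arbitrary point, then
    repeatedly add the point whose distance to its nearest chosen center
    is largest. This spreads the initial centers apart instead of
    picking them at random."""
    centers = [points[0]]
    while len(centers) < k:
        nxt = max(points, key=lambda p: min(euclidean(p, c) for c in centers))
        centers.append(nxt)
    return centers
```

In the paper's scheme this would run on a sample of the document vectors, with the optimized term-weighted similarity in place of plain Euclidean distance.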

2.
Introduces the partition-based k-means algorithm widely used in Web document clustering, analyzes the limitations of the vector space model and the distance-based similarity measure that k-means relies on, and proposes a method to improve both.

3.
This article introduces the partition-based k-means algorithm commonly used in Web document clustering, analyzes the limitations of its vector space model and distance-based similarity measure, and proposes an improvement to both. Experiments show that the improved k-means algorithm retains the efficiency of the original while achieving higher accuracy.
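The vector space model and similarity measure these abstracts build on can be sketched as follows; the plain TF-IDF weighting and the toy tokenized documents are illustrative stand-ins for the papers' improved weighting schemes:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists. Returns one sparse {term: weight}
    vector per document, weighted by term frequency times inverse
    document frequency."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Distance-based k-means would then operate on these vectors, with cosine similarity replacing raw Euclidean distance as the abstracts suggest.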

4.
Text Clustering Based on a Hybrid Parallel Genetic Algorithm   Cited by 2 (0 self, 2 others)
To address the traditional K-Means algorithm's sensitivity to the choice of initial cluster centers and its tendency to fall into local optima, this paper proposes a text clustering method based on a hybrid parallel genetic algorithm. The method first represents the document collection in the vector space model and randomly selects initial cluster centers from the document vectors to form chromosomes. It then combines the efficiency of K-Means with the global optimization ability of a parallel genetic algorithm: through selection and mutation within populations, and parallel evolution and intermarriage across populations, it effectively avoids local optima. Experiments show that the algorithm achieves higher accuracy and better global search ability than K-Means, the simple genetic algorithm, and other text clustering methods.
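A much-simplified, single-population sketch of this hybrid idea follows (the paper uses parallel populations with migration and intermarriage, which are omitted here; all names, parameters, and the toy data are invented). Chromosomes are sets of point indices used as initial centers, and k-means refinement is embedded in the fitness evaluation:

```python
import random

def sse(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min(sum((p[i] - c[i]) ** 2 for i in range(len(p)))
                   for c in centers)
               for p in points)

def kmeans_step(points, centers):
    """One assignment-and-update pass of k-means."""
    clusters = [[] for _ in centers]
    for p in points:
        j = min(range(len(centers)),
                key=lambda j: sum((p[i] - centers[j][i]) ** 2
                                  for i in range(len(p))))
        clusters[j].append(p)
    return [tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centers[j]
            for j, c in enumerate(clusters)]

def ga_kmeans(points, k, pop_size=6, gens=15, seed=1):
    """Evolve candidate sets of initial centers; fitness = SSE after a
    short k-means refinement, so good seedings survive."""
    rng = random.Random(seed)

    def fitness(chrom):
        centers = [points[i] for i in chrom]
        for _ in range(2):
            centers = kmeans_step(points, centers)
        return sse(points, centers)

    pop = [rng.sample(range(len(points)), k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = list(dict.fromkeys(a[:k // 2] + b))[:k]  # crossover
            if rng.random() < 0.3:                           # mutation
                child[rng.randrange(k)] = rng.randrange(len(points))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    centers = [points[i] for i in best]
    for _ in range(5):                                       # final refinement
        centers = kmeans_step(points, centers)
    return centers
```

The design point the abstract makes is visible here: the genetic search explores many seedings globally, while k-means does the cheap local refinement.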

5.
A Hybrid Web Text Clustering Algorithm Based on Frequent Word Sets and k-Means   Cited by 2 (1 self, 1 other)
Web text clustering currently faces three challenges: massive data volume, the complexity of processing high-dimensional spaces, and the interpretability of clustering results. To address them, this paper proposes topHDC, a hybrid clustering algorithm based on top-k frequent word sets and k-means. The algorithm avoids high-dimensional vector processing when forming the initial clusters, and the k frequent word sets provide an understandable explanation of the clustering result. topHDC also avoids the problem in existing algorithms of clustering results being distorted by document length. Experiments on two public datasets show that topHDC clearly outperforms two other representative clustering algorithms in both clustering quality and runtime efficiency.

6.
Proposes a document clustering algorithm that combines the artificial immune network (aiNet) with k-means. The document set is first preprocessed into vector representations; based on the cosine similarity between vectors, aiNet clusters the documents, the resulting similarity matrix initializes the k-means cluster centers, and k-means then clusters the documents. Experimental results show the algorithm is feasible and improves clustering quality.

7.
Application of an Improved K-means Algorithm in Online Public Opinion Analysis   Cited by 1 (0 self, 1 other)
Motivated by the application requirements of online public opinion analysis, this paper first introduces text information processing, then examines the K-means algorithm for text clustering and improves it to address its dependence on the initial cluster centers. Based on the idea that a document's title can represent its content, the improved algorithm represents titles as sparse feature vectors, computes the sparse similarity between titles, and uses the result to determine the initial cluster centers. Experiments show that the improved K-means algorithm raises clustering accuracy; compared with initial-center selection based on the max-min distance principle, it improves execution efficiency while preserving accuracy.

8.
A Web Document Clustering Algorithm Based on Swarm Intelligence   Cited by 31 (0 self, 31 others)
Applies a swarm intelligence clustering model to document clustering and proposes a Web document clustering algorithm based on swarm intelligence. Web documents are first represented in the vector space model, and the text feature set is obtained with standard techniques such as stop-word removal and feature-term reduction. The document vectors are then scattered randomly on a plane and clustered with the swarm-intelligence-based method, and finally a recursive algorithm collects the clustering result from the plane. To improve practicality, the original algorithm is combined with k-means into a hybrid clustering algorithm. Comparative experiments show that the swarm-intelligence-based Web document clustering algorithm has good clustering properties and can group the Web documents related to a single topic into one cluster fairly completely and accurately.

9.
With the development of technology, online information is growing rapidly, and text clustering has become a research focus in Web text mining. This article details the partition-based k-means document clustering algorithm, introduces the k-medoids algorithm as a remedy for k-means's shortcomings, and compares the strengths and weaknesses of the two.

10.
Analyzes the sentiment polarity of blog authors through concept-based clustering. Building on the HowNet sentiment lexicon, concepts are introduced into the vector space model. First, sentiment words are extracted from blog posts, and each post is represented in a vector space model built on sentiment-word concepts. Then the k-means algorithm clusters the posts to complete the polarity analysis. Using concepts rather than words as feature terms improves the precision of the polarity analysis. Experiments confirm that the concept-based vector space model outperforms the traditional word-based model for sentiment clustering of blog text.
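The word-to-concept mapping at the heart of this approach can be sketched as below; the tiny `CONCEPTS` dict is a hypothetical stand-in for the HowNet sentiment lexicon, and the labels are invented:

```python
from collections import Counter

# Hypothetical stand-in for the HowNet sentiment lexicon:
# surface words map to shared sentiment concepts.
CONCEPTS = {
    "great": "praise", "awesome": "praise", "love": "praise",
    "terrible": "criticism", "awful": "criticism", "hate": "criticism",
}

def concept_vector(tokens):
    """Represent a post by concept frequencies instead of word
    frequencies, so synonyms land on the same feature dimension."""
    return Counter(CONCEPTS[t] for t in tokens if t in CONCEPTS)
```

These concept vectors would then feed the same cosine/k-means pipeline as word vectors, but "great" and "awesome" now reinforce one feature instead of splitting across two, which is where the precision gain the abstract reports comes from.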

11.
This paper proposes an automatic folder allocation system for text documents, implemented as a hybrid classification method that combines the Bayesian (Bayes) approach with Support Vector Machines (SVMs). Folder allocation for text documents is typically performed manually: every time the user creates a text document in an editor or downloads one from the internet and wishes to store it, the user must determine and allocate the appropriate folder for it. Repeating this allocation for every stored document becomes tedious, especially when the folders are numerous, deeply nested, complex in structure, and continuously growing. This problem can be overcome by applying machine learning methods to classify new text documents and allocate the most appropriate storage folder automatically. We propose a Bayes-SVM hybrid classification framework to perform this task.

12.
The convenience of search, both on the personal computer hard disk and on the web, is still limited mainly to machine-printed text documents and images because of the poor accuracy of handwriting recognizers. This paper focuses on segmenting handwritten text and machine-printed text in annotated documents, a task sometimes referred to as "ink separation", to advance the state of the art in search over hand-annotated documents. We propose a method with two main steps: patch-level separation and pixel-level separation. In the patch-level step, the entire document is modeled as a Markov Random Field (MRF); three classes (machine-printed text, handwritten text, and overlapped text) are initially identified using G-means-based classification followed by an MRF-based relabeling procedure. In the second step, an MRF-based classification approach separates the overlapped text into machine-printed and handwritten text using pixel-level features. Experimental results on a set of machine-printed documents annotated by multiple writers in an office/collaborative environment show that our method is robust and provides good text separation performance.

13.
Since documents on the Web are naturally partitioned into many text databases, efficient document retrieval requires identifying the text databases that are most likely to provide relevant documents to the query and then searching those databases. In this paper, we propose a neural net based approach to such efficient document retrieval. First, we present a neural net agent that learns about the underlying text databases from the user's relevance feedback. For a given query, the neural net agent, sufficiently trained via the BPN learning mechanism, discovers the text databases associated with the relevant documents and retrieves those documents effectively. To scale our approach to large numbers of text databases, we also propose a hierarchical organization of neural net agents, which reduces the total training cost to an acceptable level. Finally, we evaluate the performance of our approach by comparing it to conventional well-known approaches. Received 5 March 1999 / Revised 7 March 2000 / Accepted in revised form 2 November 2000

14.
Traditional approaches for text data stream classification usually require the manual labeling of a number of documents, which is an expensive and time-consuming process. In this paper, to overcome this limitation, we propose to classify text streams by keywords without labeled documents, so as to reduce the burden of manual labeling. We build our base text classifiers with the help of keywords and unlabeled documents, and utilize classifier ensemble algorithms to cope with concept drift in text data streams. Experimental results demonstrate that the proposed method can build good classifiers from keywords without manual labeling, and that with the ensemble-based algorithm, concept drift in the streams is well detected and adapted to, outperforming the single-window algorithm.
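The bootstrapping idea, labeling by keywords instead of by hand, can be sketched as follows; the labels and seed words are invented, and in the paper these pseudo-labels would train base classifiers for the drift-handling ensemble rather than serve as the final classifier:

```python
def keyword_classifier(seed_words):
    """Build a labeling function from {label: set-of-keywords}.
    A document gets the label whose keywords it matches most often,
    or None when no keyword matches (document stays unlabeled)."""
    def classify(tokens):
        scores = {label: sum(t in kws for t in tokens)
                  for label, kws in seed_words.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None
    return classify
```

Documents the keyword rule leaves as `None` are exactly the ones a trained base classifier (rather than the raw rule) must cover.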

15.
Text Database Discovery on the Web: Neural Net Based Approach   Cited by 1 (0 self, 1 other)
As large numbers of text databases have become available on the Web, many efforts have been made to solve the text database discovery problem: finding which text databases (out of many candidates) are most likely to provide relevant documents to a given query. In this paper, we propose a neural net based approach to this problem. First, we present a neural net agent that learns about the underlying text databases from the user's relevance feedback. For a given query, the neural net agent, sufficiently trained via the backpropagation learning mechanism, discovers the text databases associated with the relevant documents and retrieves those documents effectively. To scale our approach to large numbers of text databases, we also propose a hierarchical organization of neural net agents, which reduces the total training cost to an acceptable level. Finally, we evaluate the performance of our approach by comparing it to conventional well-known statistical approaches.

16.
Document Classification Based on Active Learning   Cited by 3 (0 self, 3 others)
In text categorization, the number of unlabeled documents is generally much greater than the number of labeled ones. Text categorization is a classification problem in a high-dimensional vector space, and more training samples generally improve the accuracy of a text classifier, so how to add unlabeled documents to the training set is a valuable problem. This paper introduces the theory of active learning and applies it to text categorization, exploring how unlabeled documents can be used to improve classifier accuracy; the expectation is that adopting relatively large numbers of unlabeled document samples will raise accuracy. We put forward an active-learning-based algorithm for text categorization, and experiments on the Reuters news corpus show that, when enough training samples are available, the algorithm effectively improves classifier accuracy by adopting unlabeled document samples.
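The core active learning step, deciding which unlabeled documents are worth the labeling effort, is commonly realized as uncertainty sampling; the sketch below assumes a binary classifier exposing `predict_prob`, and both that name and the batch size are illustrative assumptions, not the paper's exact criterion:

```python
def select_for_labeling(unlabeled_docs, predict_prob, batch=2):
    """Uncertainty sampling: rank unlabeled documents by how close the
    classifier's positive-class probability is to 0.5 (maximum
    uncertainty) and return the most uncertain batch for labeling."""
    return sorted(unlabeled_docs,
                  key=lambda d: abs(predict_prob(d) - 0.5))[:batch]
```

The selected batch is labeled (by an oracle or, in semi-supervised variants, by the classifier itself when confident), added to the training set, and the classifier is retrained, which is the loop the abstract's accuracy gains come from.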

17.
Text Digital Watermarking   Cited by 32 (0 self, 32 others)
Current research and literature on digital watermarking concentrate mainly on protecting still images and video; text watermarking has received little study, and in China there has been almost no literature on it. In practice, however, some text documents need protection even more than images or video, and protecting digital text also matters for government work and e-commerce in the Internet era. This paper introduces the basic ideas of text digital watermarking and the current state of research: it first describes embedding and detection methods for text watermarks, then analyzes research directions for watermarking Chinese text and possible application prospects.

18.
梁鹏鹏  柴玉梅  王黎明 《计算机工程》2011,37(21):124-125,130
To address the insufficient consideration of inter-document relationships in traditional text classification methods, proposes an iTopicModel-based classification algorithm for linked text. The class represented by each topic is inferred from the probabilities with which documents of known class belong to that topic; a document to be classified is then labeled using its topic-membership probabilities together with its text. Experimental results show that when inter-document relationships strongly influence class information, the classification performance of TC-iTM is superior to traditional text classification methods.

19.
Web Text Classification and Strategies for Reducing Blocking   Cited by 1 (0 self, 1 other)
In Web mining, classifying Web documents by content is a crucial step. A common approach is hierarchical classification, which assigns documents top-down to the corresponding categories of a category tree. However, hierarchical classification often wrongly rejects documents at the upper-level classifiers of the tree, a phenomenon called blocking. To address it, a classifier-centric blocking factor is adopted to measure the degree of blocking, and two new hierarchical classification methods, one based on lowering thresholds and one based on restricted voting, are introduced to reduce the erroneous blocking of documents in Web document classification.

20.
In graphical documents (e.g., maps, engineering drawings) and artistic documents, text lines are annotated in multiple orientations or along curves to illustrate different locations or symbols. For optical character recognition of such documents, the individual text lines must first be extracted. In this paper, we propose a novel method to segment such text lines, based on the foreground and background information of the text components. To utilize the background information effectively, a water reservoir concept is used. In the proposed scheme, individual components are first detected and grouped into character clusters hierarchically using size and positional information. Next, the clusters are extended at both extreme sides to determine potential candidate regions. Finally, individual lines are extracted with the help of these candidate regions. Experimental results are presented on different datasets of graphical documents, camera-based warped documents, noisy images containing seals, etc., and demonstrate that our approach is robust and invariant to the size and orientation of the text lines present in the document.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号