Similar Documents
20 similar documents found (search time: 281 ms)
1.
To address the problem that effective features of microblog short texts are sparse and hard to extract, which hurts the accuracy of microblog text representation, classification, and clustering, this paper proposes a feature word selection algorithm for microblog short texts that combines statistical and semantic information. Based on part-of-speech combination matching rules, the algorithm builds a composite evaluation function from each term's TF-IDF, part of speech, and word-length factor, and combines it with the semantic relatedness between the term and the text content to select feature words, so that the selected words accurately represent the topic of the short text. Combining the new feature word selection algorithm with a naive Bayes classifier, experiments on a microblog classification corpus show that the new algorithm achieves higher classification accuracy than traditional algorithms, indicating that the feature words it selects represent the topics of microblog short texts more accurately.
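The composite evaluation function described in this abstract can be sketched as follows. This is a minimal illustration only: the POS factor and the word-length factor below are assumptions, not the paper's exact formula.

```python
import math

def tf_idf(term, doc, docs):
    """TF-IDF of a term in one document; docs are lists of tokens."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    return tf * math.log(len(docs) / df)

def composite_score(term, doc, docs, pos_factor=1.0):
    """Hypothetical composite evaluation: TF-IDF x POS factor x length factor.
    pos_factor would come from the POS matching rules (illustrative here);
    the length factor assumes longer terms are more topic-specific."""
    length_factor = min(len(term) / 4.0, 1.0)
    return tf_idf(term, doc, docs) * pos_factor * length_factor
```

In the paper this score would additionally be combined with the term-to-text semantic relatedness before ranking candidate feature words.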

2.
We are witnessing the era of big data computing, where computing resources are becoming the main bottleneck in dealing with large datasets. For high-dimensional data, where each view of the data has high dimensionality, feature selection is necessary to further improve clustering and classification results. In this paper, we propose a new feature selection method, the Incremental Filtering Feature Selection (IF2S) algorithm, and a new clustering algorithm, the Temporal Interval based Fuzzy Minimal Clustering (TIFMC) algorithm, which employ fuzzy rough sets for selecting an optimal subset of features and for effectively grouping large volumes of data, respectively. An extensive experimental comparison of the proposed method against other methods is carried out using four different classifiers. The proposed algorithms yield promising results in feature selection, clustering, and classification accuracy in the field of biomedical data mining.

3.
This paper proposes three feature selection algorithms with a feature weight scheme and dynamic dimension reduction for the text document clustering problem. Text document clustering is a growing area of text mining in which text documents are separated into several coherent clusters according to carefully selected informative features, using a proper evaluation function that usually depends on term frequency. Informative features in each document are selected using feature selection methods. Genetic algorithm (GA), harmony search (HS), and particle swarm optimization (PSO) are the most successful of these, here established with a novel weighting scheme, length feature weight (LFW), which depends on term frequency and on the appearance of features in other documents. A new dynamic dimension reduction (DDR) method is also provided to reduce the number of features used in clustering and thus improve the performance of the algorithms. Finally, k-means, a popular clustering method, is used to cluster the set of text documents based on the terms (or features) obtained by dynamic reduction. Seven benchmark text mining datasets of different sizes and complexities are evaluated. Analysis with k-means shows that particle swarm optimization with length feature weight and dynamic reduction produces the best outcomes for almost all of the datasets tested. This paper thus provides the text mining community with new alternatives for clustering text documents using cohesive and informative features.
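A weight that depends both on a term's frequency and on its appearance in other documents, as the LFW scheme is described, might be sketched like this. The exact formula below is an assumption for illustration, not the paper's LFW definition.

```python
import math

def length_feature_weight(term, doc, docs):
    """Hedged sketch of a weight driven by within-document term frequency,
    damped by how many *other* documents contain the term."""
    tf = doc.count(term) / len(doc)
    in_others = sum(1 for d in docs if d is not doc and term in d)
    return tf * math.log(1 + len(docs) / (1 + in_others))
```

Such per-term scores would then feed the GA/HS/PSO search and the dynamic dimension reduction step before k-means clustering.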

4.
Feature selection for gene expression data based on genetic algorithms and clustering
Feature selection is one of the important problems in pattern recognition and data mining. For high-dimensional data objects such as gene expression data, feature selection can, on the one hand, improve the accuracy and efficiency of classification and clustering, and on the other hand identify informative feature subsets, such as genes closely related to diseases. This paper proposes a new feature selection method for gene expression data: a genetic algorithm performs stochastic search over feature subsets, while a clustering algorithm and its clustering error rate serve as the learning algorithm and evaluation criterion for each subset. Experimental results show that the algorithm can effectively find feature subsets with good separability, thereby reducing dimensionality and improving clustering and classification accuracy.

5.
Nowadays a vast amount of textual information is collected and stored in various databases around the world, including the Internet as the largest database of all. This rapidly increasing growth of published text means that even the most avid reader cannot hope to keep up with all the reading in a field, and consequently nuggets of insight or new knowledge are at risk of languishing undiscovered in the literature. Text mining offers a solution to this problem by replacing or supplementing the human reader with automatic systems undeterred by the text explosion. It involves analyzing a large collection of documents to discover previously unknown information. Text clustering is one of the most important areas of text mining; it includes text preprocessing, dimension reduction by selecting some terms (features), and finally clustering using the selected terms. Feature selection appears to be the most important step in the process. Conventional unsupervised feature selection methods define a measure of the discriminating power of terms in order to select proper terms from the corpus, but the evaluation of terms in groups has not previously been investigated. In this paper a new and robust unsupervised feature selection approach is proposed that evaluates terms in groups. In addition, a new Modified Term Variance measure is proposed for evaluating groups of terms, and a genetic algorithm is designed and implemented for finding the most valuable groups of terms under the new measure. These terms are then used to generate the final feature vector for the clustering process. To evaluate and justify the approach, the proposed method and a conventional term variance method were implemented and tested on the Reuters-21578 corpus collection. For a more reliable comparison, the methods were tested on three corpora; for each corpus the clustering task was run ten times and the results averaged. The comparison is very promising and shows that the proposed method produces better average accuracy and F1-measure than the conventional term variance method.
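The conventional term variance baseline this abstract compares against is the classic score: the sum of squared deviations of a term's per-document frequency from its mean frequency. A minimal sketch (the paper's Modified Term Variance for *groups* of terms is not reproduced here):

```python
def term_variance(term, docs):
    """Classic term-variance score over a corpus of token lists."""
    freqs = [d.count(term) for d in docs]
    mean = sum(freqs) / len(freqs)
    return sum((f - mean) ** 2 for f in freqs)

def select_top_terms(vocab, docs, k):
    """Keep the k terms whose frequency varies most across documents."""
    return sorted(vocab, key=lambda t: term_variance(t, docs), reverse=True)[:k]
```

Terms that occur uniformly everywhere (or almost nowhere) get low variance and are dropped; terms concentrated in a subset of documents score high and are kept as discriminating features.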

6.
A particle swarm optimization clustering algorithm based on feature analysis
To improve the performance of particle swarm optimization (PSO) clustering, a new cascaded clustering algorithm, KPCA-PSO, is proposed that incorporates feature-analysis methods; its basic principles and implementation scheme are described. In the feature-analysis stage, a simple and effective eigenvalue selection method avoids the tedious process of selecting eigenvalues manually. The algorithm is tested on both synthetic and real-world data, and the experimental results show that it achieves good clustering quality.

7.
Text clustering is an appropriate technique for partitioning a huge number of text documents into groups. The size of the document collection affects text clustering by decreasing its performance. Moreover, text documents contain sparse and uninformative features, which reduce the performance of the underlying clustering algorithm and increase the computational time. Feature selection is a fundamental unsupervised learning technique used to select a new subset of informative text features that improves the performance of text clustering and reduces the computational time. This paper proposes a hybrid of the particle swarm optimization algorithm with genetic operators for the feature selection problem. k-means clustering is used to evaluate the effectiveness of the obtained feature subsets. Experiments were conducted on eight common text datasets with varying characteristics. The results show that the proposed hybrid algorithm (H-FSPSOTC) improved the performance of the clustering algorithm by generating a new subset of more informative features, and it compares favorably with other algorithms published in the literature. Finally, the feature selection technique helps the clustering algorithm obtain accurate clusters.
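A binary-encoded PSO move hybridized with genetic operators, as this abstract describes, might look like the sketch below: each particle is a bit mask over features, pulled toward its personal and global bests by uniform crossover and perturbed by bit-flip mutation. The operator mix and rates are illustrative assumptions, not H-FSPSOTC's exact update rule.

```python
import random

def pso_step(particle, personal_best, global_best, mutation_rate=0.1, rng=None):
    """One hedged binary-PSO move with genetic operators: each bit is drawn
    from the personal best, the global best, or kept, then possibly flipped."""
    rng = rng or random.Random(0)
    child = []
    for bit, pb, gb in zip(particle, personal_best, global_best):
        r = rng.random()
        new = pb if r < 1 / 3 else gb if r < 2 / 3 else bit
        if rng.random() < mutation_rate:
            new = 1 - new  # bit-flip mutation
        child.append(new)
    return child
```

In the full algorithm, each candidate mask would be scored by running k-means on the selected feature columns and measuring cluster quality, and the best masks would update the personal and global bests.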

8.
Before text mining, effective feature selection must be performed on the document collection. Traditional feature selection algorithms are of limited use for dimensionality reduction and text representation, and because they require class labels they are unsuitable for unsupervised text clustering. To address this, a feature selection algorithm suited to text clustering is designed around the notion of term attributes. A term feature model is first built from term frequency, document frequency, term position, and inter-term association; the weighting of the position and association attributes is studied in detail, with an improved Apriori algorithm used to compute the inter-term association weights. An improved k-means algorithm then clusters the term feature model repeatedly to complete text feature selection. Experimental results show that, compared with traditional feature selection algorithms, this algorithm achieves a better dimensionality reduction rate while improving the representational power of the selected feature words, making it well suited to text clustering tasks.

9.
Text clustering is an important topic in text mining. One of the most effective methods for text clustering is the approach based on frequent itemsets (FIs), and many related algorithms aim to improve the accuracy of text clustering this way. However, these do not consider the weights of terms in documents, even though the frequency of each term in each document has a great impact on the results. In this work, we propose a new method for text clustering based on frequent weighted utility itemsets (FWUI). First, we calculate the term frequency (TF) of each term to create a weight matrix for all documents; the weights of terms in documents are based on the inverse document frequency. Next, we use the Modification Weighted Itemset Tidset (MWIT)-FWUI algorithm to mine FWUIs from the frequency matrix and the term weights. Finally, based on the frequent utility itemsets, we cluster documents using the Maximum Capturing (MC) algorithm. The proposed method has been evaluated on three datasets consisting of 1,600 documents covering 16 topics. The experimental results show that our method, using FWUIs, improves the accuracy of text clustering compared with methods using FIs.

10.
To address the high feature redundancy and noise of text data, a text feature selection algorithm based on harmony search is proposed. TF-IDF serves as the objective function for evaluating candidate feature terms. Over the initial document set, new candidate solutions are generated by the three harmony-search update rules of memory consideration, pitch adjustment, and random selection, iteratively searching for an optimal feature subset. Based on the optimal feature subset, documents are then clustered with K-means. Simulation experiments on four typical document datasets show that the algorithm effectively reduces the dimensionality of text features and achieves higher clustering accuracy.

11.
张万山  肖瑶  梁俊杰  余敦辉 《计算机应用》2014,34(11):3144-3146
Traditional Web text clustering algorithms ignore the topic information of Web texts, which lowers clustering accuracy on multi-topic Web texts. To address this, a topic-based Web text clustering method is proposed that clusters multi-topic Web texts in three steps: topic extraction, feature extraction, and text clustering. Unlike traditional Web text clustering algorithms, the method fully exploits the topic information of Web texts. Experimental results show that, for multi-topic Web texts, its accuracy is better than that of K-means-based clustering and HowNet-based clustering.

12.
A position-weighted text clustering algorithm
Text clustering is an important research topic in natural language processing, with wide applications in information retrieval, Web mining, and digital libraries. Observing that a feature word contributes differently to a document depending on where it appears, this paper proposes TCABPW, an improved text clustering algorithm based on position-weighted features. A new text feature vector is constructed from the top L highest-weighted feature terms reflecting the document's topic, and clustering is performed by an improved algorithm combining hierarchical clustering with K-means. Experimental results show that the improved algorithm greatly reduces the dimensionality of text clustering without degrading clustering quality, achieves significant gains in stability and purity, and obtains good clustering results.
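The idea of weighting a term by where it occurs, then keeping the top L terms, can be sketched as follows. The position labels and their weights are illustrative assumptions, not the paper's calibrated values.

```python
def position_weighted_tf(doc, position_weights=None):
    """Hedged sketch: doc is a list of (term, position) pairs; each
    occurrence contributes the weight of its position (title > body, etc.)."""
    weights = position_weights or {"title": 3.0, "first": 2.0, "body": 1.0}
    scores = {}
    for term, pos in doc:
        scores[term] = scores.get(term, 0.0) + weights.get(pos, 1.0)
    return scores

def top_l_terms(scores, l):
    """Keep the L highest-weighted terms as the document's feature vector."""
    return sorted(scores, key=scores.get, reverse=True)[:l]
```

The resulting L-dimensional vectors are what the hierarchical/K-means hybrid would then cluster.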

13.
Unsupervised feature selection is an important problem, especially for high-dimensional data. However, it has been scarcely studied until now, and existing algorithms cannot provide satisfactory performance. Thus, in this paper, we propose a new unsupervised feature selection algorithm using similarity-based feature clustering: Feature Selection-based Feature Clustering (FSFC). FSFC removes redundant features according to the results of feature clustering based on feature similarity. First, it clusters the features according to their similarity, using a new feature clustering algorithm that overcomes the shortcomings of K-means. Second, it selects from each cluster a representative feature that captures most of the information carried by the features in that cluster. The efficiency and effectiveness of FSFC are tested on real-world datasets and compared with two representative unsupervised feature selection algorithms, Feature Selection Using Similarity (FSUS) and Multi-Cluster-based Feature Selection (MCFS), in terms of runtime, feature compression ratio, and the clustering results of K-means. The results show that FSFC not only reduces the feature space in less time, but also significantly improves the clustering performance of K-means.
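The two-stage scheme this abstract describes, clustering features by similarity and then keeping one representative per cluster, can be sketched greedily. The leader-based grouping and the choice of the cluster leader as representative are simplifying assumptions, not FSFC's actual algorithm.

```python
def cluster_features(features, similarity, threshold=0.8):
    """Greedy sketch: a feature joins the first cluster whose leader
    it resembles (similarity >= threshold), else starts a new cluster."""
    clusters = []
    for f in features:
        for c in clusters:
            if similarity(f, c[0]) >= threshold:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters

def select_representatives(clusters):
    """One representative per cluster; here simply the cluster leader."""
    return [c[0] for c in clusters]
```

Redundant (mutually similar) features collapse into one cluster, so the representatives form a compressed, less redundant feature space.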

14.
刘逵  周竹荣 《计算机应用》2012,32(8):2245-2249
To make text feature selection more comprehensive and more accurate, a text feature selection method based on the invasive weed optimization algorithm is proposed. In this algorithm, offspring individuals are distributed around their parents according to a normal distribution; by dynamically adjusting the standard deviation of this distribution during evolution, the algorithm maintains population diversity in its early and middle stages, enabling fairly comprehensive feature selection, while in its later stages it concentrates selection on elite individuals, ensuring robust convergence to the global optimum and improving selection accuracy. Experimental results show that the method gives low-weight terms a chance to be selected while preserving the selection advantage of high-weight terms, thereby improving both the coverage and the accuracy of text feature selection.

15.
Research on unsupervised feature selection for categorical features
A new measure of feature importance for categorical data is presented, and, building on a one-pass clustering algorithm, an unsupervised feature selection method is proposed. Theoretical analysis shows that the method's time complexity is approximately linear in both the dataset size and the number of features, making it suitable for feature selection on large-scale datasets. Experimental results on UCI datasets show that, compared with classic methods from the literature, the proposed method performs well, demonstrating that it is effective and feasible.

16.
Unlike traditional clustering analysis, a biclustering algorithm works simultaneously on two dimensions: samples (rows) and variables (columns). In recent years, biclustering methods have developed rapidly and have been widely applied in biological data analysis, text clustering, recommendation systems, and other fields, since traditional clustering algorithms cannot adapt well to high-dimensional and/or large-scale data. At present, most biclustering algorithms are designed for differentially expressed big biological data, and there has been little discussion of mining binary data such as miRNA-target gene data. Here, we propose GAEBic, a novel biclustering method for miRNA-target gene data based on a graph autoencoder. GAEBic applies a graph autoencoder to capture the similarity of sample sets or variable sets, and adopts a new irregular clustering strategy to mine biclusters that generalize well. Benchmarking several different types of biclustering algorithms on miRNA-target gene data of soybean, we find that GAEBic performs better than Bimax, Bibit, and spectral biclustering in terms of target gene enrichment. The method achieves good performance on high-throughput soybean miRNA data and can also be applied to other species.

17.
Text categorization is the task of automatically assigning unlabeled text documents to predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is often used to reduce the dimensionality. In this paper, we evaluate and compare the feature selection policies used in text categorization, employing several popular feature selection metrics. For the experiments, we use datasets that vary in size, complexity, and skewness, with a support vector machine as the classifier and TF-IDF term weighting. In addition to evaluating existing policies, we propose new feature selection metrics that show high success rates, especially with a low number of keywords. These metrics are two-sided local metrics based on the difference between the distributions of a term in the documents belonging to a class and in the documents not belonging to that class. Moreover, we propose a keyword selection framework called adaptive keyword selection, which selects a different number of terms for each class and shows significant improvement on skewed datasets that have a limited number of training instances for some classes.

18.
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for decreasing the dimensionality of data and increasing the efficiency of learning algorithms. Feature selection performed in the absence of class labels, namely unsupervised feature selection, is particularly challenging and interesting. In this paper, we propose a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, treated as a matrix factorization problem. The advantages of this work are four-fold. First, dwelling on the technique of matrix factorization, a unified framework is established for feature selection, feature extraction, and clustering. Second, an iterative update algorithm is provided via matrix factorization, which is an efficient way to deal with high-dimensional data. Third, an effective method for feature selection with numeric data is put forward that does not rely on a discretization process. Fourth, the new criterion provides a sound foundation for embedding kernel tricks into feature selection; in this regard, an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods on six publicly available datasets. Experimental results demonstrate that, in terms of clustering results, the two proposed algorithms outperform the others on almost all of the datasets tested.
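As a toy illustration of the matrix-factorization view of feature selection, the sketch below factors a feature-by-sample matrix with standard Lee-Seung multiplicative NMF updates and scores each feature by the norm of its factor row. Both the plain NMF objective and the row-norm scoring rule are assumptions for illustration, not the paper's criterion or iterative algorithm.

```python
import random

def nmf_feature_scores(X, k=2, iters=50, seed=0):
    """Hedged sketch: factor X (features x samples) as W @ H via
    multiplicative updates, then score feature i by the L2 norm of W's row i.
    Features whose rows vanish contribute nothing to the subspace."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        # H <- H * (W^T X) / (W^T W H)
        for a in range(k):
            for j in range(m):
                num = sum(W[i][a] * X[i][j] for i in range(n))
                den = sum(sum(W[i][a] * W[i][b] for i in range(n)) * H[b][j]
                          for b in range(k))
                H[a][j] *= num / (den + eps)
        # W <- W * (X H^T) / (W H H^T)
        for i in range(n):
            for a in range(k):
                num = sum(X[i][j] * H[a][j] for j in range(m))
                den = sum(W[i][b] * sum(H[b][j] * H[a][j] for j in range(m))
                          for b in range(k))
                W[i][a] *= num / (den + eps)
    return [sum(w * w for w in row) ** 0.5 for row in W]
```

On a matrix where one feature is identically zero, that feature's score collapses to zero while informative features keep positive scores, which is the selection signal this family of methods exploits.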

19.
Text categorization continues to be one of the most researched NLP problems due to the ever-increasing amounts of electronic documents and digital libraries. In this paper, we present a new text categorization method that combines the distributional clustering of words with a learning logic technique called Lsquare to construct text classifiers. Because the high dimensionality of document text hampers categorization, feature clustering has proven to be an ideal alternative to feature selection for reducing dimensionality. We therefore use the distributional clustering method (IB) to generate an efficient representation of documents and apply Lsquare to train text classifiers. The method was extensively tested and evaluated. It achieves classification accuracy and F1 results higher than or comparable to those of SVM under identical experimental settings, with a small number of training documents, on three benchmark datasets: WebKB, 20Newsgroup, and Reuters-21578. The results prove that the method is a good choice for applications with a limited amount of labeled training data. We also demonstrate the effect of training-set size on the classification performance of the learners.

20.
Short texts have sparse, dynamically interleaved features, which makes traditional term-weighting methods hard to apply effectively. By introducing a translation decision model and taking the probability of occurrence of each part of speech as a feature, this paper proposes a new feature-weighting method based on part-of-speech features, which is evaluated with a text clustering algorithm. Test results show that, compared with the TF-IDF and QPSO weighting algorithms, the improved feature-weighting algorithm achieves better clustering results.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号