Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper proposes three feature selection algorithms with a feature weighting scheme and dynamic dimension reduction for the text document clustering problem. Text document clustering is a growing trend in text mining in which text documents are separated into several coherent clusters according to carefully selected informative features, using an evaluation function that usually depends on term frequency. Informative features in each document are selected using feature selection methods. The genetic algorithm (GA), harmony search (HS), and particle swarm optimization (PSO) are among the most successful feature selection methods, and here they are enhanced with a novel weighting scheme, length feature weight (LFW), which depends on term frequency and on the appearance of features in other documents. A new dynamic dimension reduction (DDR) method is also provided to reduce the number of features used in clustering and thus improve the performance of the algorithms. Finally, k-means, a popular clustering method, is used to cluster the set of text documents based on the terms (or features) obtained by dynamic reduction. Seven text mining benchmark datasets of different sizes and complexities are evaluated. Analysis with k-means shows that particle swarm optimization with length feature weight and dynamic reduction produces the best outcomes on almost all datasets tested. This paper thus provides the text mining community with new alternatives for clustering text documents using cohesive and informative features.
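The weight-then-reduce pipeline described in this abstract can be sketched on a toy corpus. The `weight` function below is an illustrative stand-in inspired by LFW (term frequency combined with corpus spread), not the paper's exact formula, and the metaheuristic search and k-means steps are omitted:

```python
from collections import Counter

# Toy corpus: each document is a list of tokens.
docs = [
    ["data", "mining", "text", "cluster"],
    ["text", "cluster", "feature", "select"],
    ["feature", "select", "weight", "term"],
]

# Term frequency per document and document frequency across the corpus.
tf = [Counter(d) for d in docs]
df = Counter(t for d in docs for t in set(d))

def weight(term, doc_tf):
    # Hypothetical weight: favour terms frequent in a document that also
    # appear in other documents (in the spirit of LFW, not the paper's formula).
    return doc_tf[term] * df[term] / len(docs)

# Dynamic dimension reduction, crudely: keep the top-k terms per document.
k = 2
selected = [
    sorted(c, key=lambda t, c=c: weight(t, c), reverse=True)[:k] for c in tf
]
print(selected)
```

In the papers' setting the reduced term set would then feed a k-means clusterer; here the sketch stops at the selected terms.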

2.
To address the high feature redundancy and noise in text data, a text feature selection algorithm based on the harmony search mechanism is proposed. Term frequency-inverse document frequency (TF-IDF) serves as the objective function for evaluating candidate terms. Over the initial document set, harmony search iteratively seeks the optimal feature subset through three new-solution update rules: memory consideration, pitch adjustment, and random selection. Documents are then clustered with k-means on the optimal feature subset. Simulation experiments on four typical document datasets show that...
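A minimal sketch of the TF-IDF score used here as the evaluation function (the harmony search loop itself is omitted; the corpus and values are toy assumptions, not from the paper):

```python
import math
from collections import Counter

docs = [
    ["apple", "fruit", "apple"],
    ["fruit", "market"],
    ["market", "price", "price"],
]

# Document frequency of each term and corpus size.
df = Counter(t for d in docs for t in set(d))
N = len(docs)

def tfidf(term, doc):
    # Classic TF-IDF: frequent in this document, rare in the corpus.
    tf = doc.count(term) / len(doc)
    idf = math.log(N / df[term])
    return tf * idf

# "apple" dominates doc 0 and appears nowhere else, so it scores highest there.
print(tfidf("apple", docs[0]))
print(tfidf("fruit", docs[0]))
```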

3.
Harun Uğuz, Knowledge-Based Systems, 2011, 24(7): 1024-1032
Text categorization is widely used when organizing documents in digital form. Due to the increasing number of such documents, automated text categorization has become more promising in the last ten years. A major problem of text categorization is its large number of features, most of which are irrelevant noise that can mislead the classifier. Feature selection is therefore often used in text categorization to reduce the dimensionality of the feature space and improve performance. In this study, two-stage feature selection and feature extraction are used to improve the performance of text categorization. In the first stage, each term in a document is ranked by its importance for classification using the information gain (IG) method. In the second stage, the genetic algorithm (GA) and principal component analysis (PCA) feature selection and feature extraction methods are applied separately to the terms ranked in decreasing order of importance, and dimension reduction is carried out. Terms of low importance are thus ignored during categorization, and feature selection and extraction are applied only to the most important terms, reducing the computational time and complexity of categorization. To evaluate the effectiveness of dimension reduction in the proposed model, experiments are conducted with the k-nearest neighbour (KNN) and C4.5 decision tree algorithms on the Reuters-21578 and Classic3 dataset collections. The experimental results show that the proposed model achieves high categorization effectiveness as measured by precision, recall, and F-measure.
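The first-stage IG ranking can be sketched for binary term-presence features as follows; the toy documents and labels are illustrative assumptions, not samples from Reuters-21578 or Classic3:

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_gain(docs, labels, term):
    """Information gain of a binary term-presence feature for class labels."""
    n = len(docs)
    classes = set(labels)
    h_c = entropy([labels.count(c) / n for c in classes])
    # Split documents by whether they contain the term.
    with_t = [l for d, l in zip(docs, labels) if term in d]
    without = [l for d, l in zip(docs, labels) if term not in d]
    h_cond = 0.0
    for part in (with_t, without):
        if part:
            h_cond += (len(part) / n) * entropy(
                [part.count(c) / len(part) for c in classes]
            )
    return h_c - h_cond

docs = [{"ball", "goal"}, {"ball", "team"}, {"stock", "price"}, {"stock", "bank"}]
labels = ["sport", "sport", "finance", "finance"]
print(info_gain(docs, labels, "ball"))   # perfectly separates the two classes
print(info_gain(docs, labels, "goal"))   # weaker: present in only one document
```

Ranking all terms by this score and keeping the head of the list is exactly the kind of importance ordering the second stage (GA or PCA) would then refine.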

4.
In this paper we study the use of a semi-supervised agglomerative hierarchical clustering (ssAHC) algorithm for text categorization, i.e., assigning text documents to predefined categories. ssAHC is (i) a clustering algorithm that (ii) uses a finite design set of labeled data to (iii) help agglomerative hierarchical clustering (AHC) algorithms partition a finite set of unlabeled data and then (iv) terminates without the capability to label other objects. We first describe the text representation method used in this work, then present a feature selection method that reduces the dimensionality of the feature space. Finally, we apply the ssAHC algorithm to the Reuters collection of documents and show that its performance is superior to both the Bayes classifier and the Expectation-Maximization algorithm combined with the Bayes classifier. We also show that ssAHC helps AHC techniques improve their performance. © 2000 John Wiley & Sons, Inc.

5.
Effective feature selection must precede text mining, but traditional feature selection algorithms achieve limited dimensionality reduction and text representation and, because they require class label information, are unsuitable for unsupervised text clustering. To address this, a feature selection algorithm suited to text clustering is designed around the concept of term attributes. A term feature model is first built from term frequency, document frequency, term position, and inter-term association; the weighting of the position and association attributes is studied in particular, with an improved Apriori algorithm used to compute the association weights. An improved k-means algorithm then clusters the term feature model repeatedly to complete text feature selection. Experimental results show that, compared with traditional feature selection algorithms, the proposed algorithm achieves a better dimensionality reduction rate while improving the representational power of the selected terms, making it well suited to text clustering tasks.

6.
A clustering-based text feature selection method (cited by 6: 0 self-citations, 6 others)
Traditional text feature selection methods share a common weakness: each feature's class-discriminating power is evaluated individually by some evaluation function, and because inter-feature correlations are ignored, the selected feature sets are often redundant. To address this, a clustering-based feature selection method is proposed: clustering first prunes redundancy among features, and information gain then selects the features with the strongest class-discriminating power. Experimental results show that this clustering-based feature selection method effectively improves text classification accuracy.

7.
Taking the short case-report texts in a smart-city management application system as the object of study, effective feature generation and feature selection methods are investigated to classify cases quickly and accurately. Given the characteristics of short case descriptions, a mutual-neighbor feature combination algorithm is proposed to generate combined features with stronger descriptive power. To further reduce and optimize the feature space, a new membership function is proposed to build a category feature domain for each class in the taxonomy; the category feature domains are then used to refine the selection of both original and combined features, yielding the feature set that contributes most to classification. The proposed feature generation and selection methods are validated on case classification from the "Chengguantong" (城管通) app of Qingxiu District, Nanning. Experiments show that the proposed method classifies cases more accurately than document frequency, mutual information, and information gain, and that introducing combined features significantly improves classification accuracy.

8.
Traditional text classification algorithms treat every feature term as contributing equally to the result, suffer from low classification accuracy, and incur increased time complexity. To address this, an improved maximum-entropy C-means clustering method for text classification is proposed. The method combines the strengths of C-means clustering and the maximum entropy algorithm: Shannon entropy serves as the objective function of the maximum entropy model, simplifying the classifier's form, and the C-means clustering algorithm then classifies the optimal features. Simulation results show that, compared with traditional text classification methods, the proposed method quickly finds the optimal subset of classification features and greatly improves text classification accuracy.

9.
Large search systems must respond to user queries quickly while observing strict back-end latency constraints when computing feature-based relevance for candidate documents; feature selection improves the efficiency of the learned models. Noting that fast feature selection for learning-to-rank usually starts from the single feature with the best ranking performance, this paper first proposes an algorithm that generates starting points for feature selection via hierarchical clustering and applies it to two existing fast feature selection methods. It further proposes a new method that handles feature selection by fully exploiting the cluster structure of the features. Experiments on two standard datasets show that the algorithm can obtain a smaller feature subset without loss of accuracy and achieves the best ranking accuracy on medium-sized subsets.

10.
A text feature selection method based on index term weights (cited by 1: 1 self-citation, 0 others)
To improve the efficiency and effectiveness of text classification and reduce computational complexity, a weighted text feature selection method is proposed after analysis of the classic feature selection methods. The method uses not only the number of documents in the dataset but also the weight information of index terms, constructing new evaluation functions that improve information gain, expected cross entropy, and the weight of evidence for text. Training and testing with a KNN classifier on the Reuters-21578 standard dataset show that the method selects effective features and improves text classification performance.

11.
PCCS partial clustering and classification: a fast Web document clustering method (cited by 16: 1 self-citation, 15 others)
PCCS is a partial clustering method for fast Web document clustering, designed to help Web users sift the documents they need from the large number of snippets returned by a search engine. It first clusters a portion of the documents, then builds class models from the clustering result to classify the remaining documents: an interactive, one-cluster-at-a-time refinement procedure quickly creates a set of cluster digests, and the remaining documents are assigned by a naive Bayes classifier. To improve the efficiency of clustering and classification, a hybrid feature selection method is proposed to reduce the dimensionality of the document representation: the entropy of each feature in the documents is recomputed and the features with the largest entropy values are selected, or features are selected based on the feature set of a persistent classification model. Experiments show that the partial clustering method organizes Web documents by topic quickly and accurately, letting users view search engine results at a higher topical level and pick relevant documents from clusters of topically similar ones.

12.
In classification, feature selection is an important data pre-processing technique, but it is a difficult problem due mainly to the large search space. Particle swarm optimisation (PSO) is an efficient evolutionary computation technique. However, the traditional personal-best and global-best updating mechanism in PSO limits its performance for feature selection, and the potential of PSO for feature selection has not been fully investigated. This paper proposes three new initialisation strategies and three new personal-best and global-best updating mechanisms in PSO to develop novel feature selection approaches with the goals of maximising classification performance, minimising the number of features, and reducing computational time. The proposed initialisation strategies and updating mechanisms are compared with their traditional counterparts. The most promising initialisation strategy and updating mechanism are then combined into a new approach, PSO(4-2), which is compared with two traditional feature selection methods and two PSO-based methods. Experiments on twenty benchmark datasets show that PSO with the new initialisation strategies and/or updating mechanisms can automatically evolve a feature subset with fewer features and higher classification performance than using all features. PSO(4-2) outperforms the two traditional methods and the two PSO-based algorithms in terms of computational time, the number of features, and classification performance.
Its superior performance is due mainly to the proposed initialisation strategy, which exploits the advantages of both forward and backward selection to decrease the number of features and the computational time, and to the new updating mechanism, which overcomes the limitations of traditional updating mechanisms by taking the number of features into account.
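The basic binary-PSO feature selection loop can be sketched under a toy fitness function. Real PSO-based feature selection evaluates a classifier; the informative-feature set, coefficients, and size penalty below are assumptions for illustration, not the PSO(4-2) settings:

```python
import math
import random

random.seed(0)

N_FEATURES = 8
INFORMATIVE = {0, 2, 5}  # assumed ground truth for the toy fitness

def fitness(mask):
    # Toy surrogate for classification accuracy: reward informative
    # features, lightly penalise subset size.
    return sum(mask[i] for i in INFORMATIVE) - 0.1 * sum(mask)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

swarm = 10
pos = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(swarm)]
vel = [[0.0] * N_FEATURES for _ in range(swarm)]
pbest = [p[:] for p in pos]
gbest = max(pos, key=fitness)[:]

for _ in range(50):
    for i in range(swarm):
        for d in range(N_FEATURES):
            r1, r2 = random.random(), random.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            # Binary PSO: sample each bit through a sigmoid of the velocity.
            pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
        if fitness(pos[i]) > fitness(pbest[i]):
            pbest[i] = pos[i][:]
        if fitness(pos[i]) > fitness(gbest):
            gbest = pos[i][:]

print(gbest)
```

The paper's contribution sits precisely in the parts this sketch keeps generic: how `pos` is initialised and how `pbest`/`gbest` are updated.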

13.
Reducing the dimensionality of high-dimensional feature sets is a central problem in text classification. Building on an analysis of existing dimensionality reduction methods, a new two-stage reduction method based on HowNet is proposed: a traditional feature selection method first extracts a candidate feature set; HowNet then maps the candidate features onto concepts, replacing a large number of scattered low-level original features with a small number of high-level concepts for a second round of reduction. Experiments show that this method effectively reduces the dimensionality of the feature space and improves text classification accuracy while limiting the loss of semantic information.

14.
With the development of the Internet and the Internet of Things, data collection has become ever easier. However, high-dimensional data contain many redundant and irrelevant features that inflate a model's computational cost and may even degrade its performance, so dimensionality reduction is essential. Feature selection reduces computational overhead and removes redundant features by shrinking the feature dimension, improving the performance of machine learning models while preserving the original features and thus good interpretability; it has become one of the most important data preprocessing steps in machine learning. Rough set theory is an effective approach to feature selection, preserving the character of the original features by removing redundant information. However, because evaluating all feature subset combinations is expensive, traditional rough-set-based feature selection struggles to find the globally optimal feature subset. This paper therefore proposes a feature selection method based on rough sets and an improved whale optimization algorithm. To keep the whale algorithm from getting trapped in local optima, population optimization and perturbation strategies are introduced. The algorithm first randomly initializes a set of feature subsets, evaluates each subset with an objective function based on rough-set attribute dependency, and then iterates the improved whale optimization algorithm to find an acceptable near-optimal feature subset. Experiments on UCI datasets show that, with a support vector machine as the evaluation classifier, the proposed algorithm finds feature subsets with little information loss and high classification accuracy, demonstrating its advantages for feature selection.
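The rough-set attribute dependency used as the objective can be computed directly; the decision table below is a toy example and the whale optimization loop is omitted:

```python
from collections import defaultdict

# Toy decision table: each row lists condition attribute values,
# with the decision class as the last element.
table = [
    (1, 0, "yes"),
    (1, 1, "yes"),
    (0, 0, "no"),
    (0, 1, "no"),
]

def dependency(attr_idx):
    """Rough-set dependency of the decision on a subset of attributes:
    the fraction of objects whose equivalence class (under the chosen
    attributes) is decision-consistent, i.e. |POS_B(D)| / |U|."""
    blocks = defaultdict(list)
    for row in table:
        key = tuple(row[i] for i in attr_idx)
        blocks[key].append(row[-1])
    pos = sum(len(b) for b in blocks.values() if len(set(b)) == 1)
    return pos / len(table)

print(dependency([0]))  # attribute 0 alone determines the decision
print(dependency([1]))  # attribute 1 alone does not
```

A metaheuristic such as the improved whale algorithm would search over `attr_idx` subsets, trading this dependency score against subset size.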

15.
To bring semantic information into text clustering and perform effective feature selection, a two-stage text clustering method based on co-clustering is proposed. Documents and features are clustered separately to obtain the semantic associations between features and topics, and these associations are then used to mutually adjust the two clustering results. Experimental results show that exploiting the semantic associations between features and topics effectively improves clustering quality.

16.
Features are central to all opinion mining and sentiment analysis tasks; for unsupervised text clustering, the quality of text features directly determines clustering quality. This paper examines the contribution of three kinds of semantic features (nouns, noun phrases, and semantic roles) to topic clustering and the compatibility between the different features, and proposes a method for eliminating redundant features that effectively removes redundancy and improves clustering precision. It also proposes a clustering method based on semantic role labeling that directly locates effective word features; experiments show the method is direct and effective, and it offers a new perspective on feature selection.

17.
Feature subset selection is basically an optimization problem: choosing the most important features from various alternatives to facilitate classification or mining. Although many algorithms have been developed, none is considered best for all situations, and researchers are still seeking better solutions. This work proposes FCTFS (Feature Cluster Taxonomy based Feature Selection), a flexible and user-guided algorithm for selecting a suitable feature subset from a large feature set. The proposed algorithm belongs to the family of clustering-based feature selection techniques: features are first clustered according to their intrinsic characteristics following the filter approach, and in the second step the most suitable feature is selected from each cluster following a wrapper approach to form the final subset. This two-stage hybrid process lowers the computational cost of subset selection, especially for large feature sets. One of the main novelties of the approach lies in determining the optimal number of feature clusters: unlike currently available methods, which mostly employ trial and error, the proposed method characterises and quantifies the feature clusters according to the quality of the features inside them and defines a taxonomy of the feature clusters. Individual features can then be selected from a cluster judiciously, considering both relevancy and redundancy according to the user's intention and requirements. The algorithm has been verified by simulation experiments on benchmark datasets containing from 10 to more than 800 features and compared with other currently used feature selection algorithms. The simulation results demonstrate the superiority of the proposal in model performance, flexibility of use in practical problems, and extensibility to large feature sets. Although the current proposal is verified in the domain of unsupervised classification, it can easily be used for supervised classification.

18.
The curse of dimensionality is a common problem in machine learning tasks; feature selection algorithms reduce feature dimensionality by choosing an optimal feature subset from the original dataset. A hybrid feature selection algorithm is proposed: the chi-squared test and a filter method first select an important feature subset and standardize its scale; the SBS-SVM algorithm, which wraps sequential backward selection (SBS) around a support vector machine (SVM), then selects the optimal feature subset, maximizing classification performance while effectively reducing the number of features. In experiments, the wrapper-stage SBS-SVM was tested against two other algorithms on three classic datasets; the results show that SBS-SVM performs well in both classification performance and generalization ability.

19.
Li Zhao, Lu Wei, Sun Zhanquan, Xing Weiwei, Neural Computing & Applications, 2016, 28(1): 513-524

Text classification is a popular research topic in data mining, and many classification methods have been proposed. Feature selection is an important technique for text classification since it effectively reduces dimensionality, removes irrelevant data, increases learning accuracy, and improves result comprehensibility. In recent years, data in many applications have grown larger in both the number of instances and the number of features; as a result, classical feature selection methods do not work well on large-scale datasets because of their expensive computational cost. To address this issue, this paper proposes a parallel feature selection method based on MapReduce. Specifically, mutual information based on Renyi entropy is used to measure the relationship between feature variables and class variables, and maximum mutual information theory is then employed to choose the most informative combination of feature variables. The selection process is implemented with MapReduce, which is efficient and scalable for large-scale problems. Finally, a practical example demonstrates the efficiency of the proposed method.
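The mutual information measure can be sketched in its classic Shannon form; note the paper uses a Renyi-entropy variant and computes it in parallel with MapReduce, both of which are omitted in this toy single-machine version:

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Shannon mutual information (in bits) between two discrete variables."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts folded in.
        mi += p * math.log2(p * n * n / (px[x] * py[y]))
    return mi

feature = [1, 1, 0, 0]
label = ["a", "a", "b", "b"]
print(mutual_info(feature, label))  # the feature fully determines the label
```

In the MapReduce setting, the `Counter` accumulations would be the map/reduce stages, computed per feature across data shards.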


20.
Knowledge-based vector space model for text clustering (cited by 5: 4 self-citations, 1 other)
This paper presents a new knowledge-based vector space model (VSM) for text clustering. In the new model, semantic relationships between terms (e.g., words or concepts) are included in representing text documents as a set of vectors, the idea being to calculate the dissimilarity between two documents more effectively so that text clustering results can be enhanced. In this paper, the semantic relationship between two terms is defined by their similarity, which is used to re-weight term frequency in the VSM. We consider and study two different similarity measures for computing this semantic relationship, based on two different approaches. The first approach relies on existing ontologies such as WordNet and MeSH: we define a new similarity measure that combines the edge-counting technique, the average distance, and the position weighting method to compute the similarity of two terms from an ontology hierarchy. The second approach uses text corpora to construct the relationships between terms and then calculates their semantic similarities. Three clustering algorithms, bisecting k-means, feature-weighting k-means, and a hierarchical clustering algorithm, have been used to cluster real-world text data represented in the new knowledge-based VSM. The experimental results show that clustering performance based on the new model was much better than that based on the traditional term-based VSM.
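The re-weighting idea, spreading each term's frequency onto semantically related terms before comparing documents, can be sketched as follows. The similarity values are assumed stand-ins for ontology-derived scores, not actual WordNet or MeSH measures:

```python
import math

terms = ["car", "automobile", "bank"]

# Assumed pairwise term similarities (e.g. from an ontology hierarchy).
sim = {
    ("car", "automobile"): 0.9,
    ("car", "bank"): 0.1,
    ("automobile", "bank"): 0.1,
}

def s(a, b):
    return 1.0 if a == b else sim.get((a, b), sim.get((b, a), 0.0))

def reweight(vec):
    # Spread each term's frequency onto related terms, in the spirit of
    # the paper's re-weighted VSM.
    return [sum(vec[j] * s(terms[i], terms[j]) for j in range(len(terms)))
            for i in range(len(terms))]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

d1 = [1, 0, 0]  # mentions only "car"
d2 = [0, 1, 0]  # mentions only "automobile"
print(cosine(d1, d2))                        # 0.0 in the plain term-based VSM
print(cosine(reweight(d1), reweight(d2)))    # high once semantics are folded in
```

This is why the knowledge-based VSM can group documents that share no literal terms but discuss the same concepts.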


Copyright©北京勤云科技发展有限公司  京ICP备09084417号