Similar Literature
1.
The aim of bankruptcy prediction in data mining and machine learning is to develop an effective model that provides high prediction accuracy. In the prior literature, various classification techniques have been developed and studied, among which classifier ensembles that combine multiple classifiers have been shown to outperform many single classifiers. However, three critical issues affect the performance of classifier ensembles: the classification technique adopted, the method used to combine the multiple classifiers, and the number of classifiers combined. Since few relevant studies examine these issues, this paper conducts a comprehensive comparison of classifier ensembles built from three widely used classification techniques, multilayer perceptron (MLP) neural networks, support vector machines (SVM), and decision trees (DT), using two well-known combination methods, bagging and boosting, and different numbers of combined classifiers. Experimental results on three public datasets show that DT ensembles composed of 80–100 classifiers using the boosting method perform best. The Wilcoxon signed-rank test also demonstrates that boosted DT ensembles perform significantly differently from the other classifier ensembles. Moreover, a further study on a real-world Taiwan bankruptcy dataset also demonstrates the superiority of boosted DT ensembles over the others.
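A minimal sketch of the winning configuration (boosted decision trees) next to its bagging counterpart, using scikit-learn; the synthetic dataset, tree depths, and ensemble size of 100 are illustrative assumptions, not the paper's setup, and the `estimator=` parameter name assumes scikit-learn >= 1.2.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# DT ensemble by boosting: the best-performing configuration in the study
boosted = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                             n_estimators=100, random_state=0).fit(X_tr, y_tr)
# DT ensemble by bagging, for comparison
bagged = BaggingClassifier(estimator=DecisionTreeClassifier(),
                           n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("boosting:", boosted.score(X_te, y_te), "bagging:", bagged.score(X_te, y_te))
```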

2.
Hybrid models based on feature selection and machine learning techniques have significantly enhanced the accuracy of standalone models. This paper presents a feature-selection-based hybrid-bagging algorithm (FS-HB) for improved credit risk evaluation. Two feature selection methods, chi-square and principal component analysis, were used for ranking and selecting the important features from the datasets. The classifiers were built on five training and test partitions of the input dataset. The performance of the hybrid algorithm was compared with that of the standalone classifiers: feature-selection-based classifiers and bagging. The hybrid FS-HB algorithm performed best on a qualitative dataset with fewer features and a tree-based unstable base classifier. Its performance on numeric data was also better than that of the other standalone classifiers and comparable to bagging with only selected features. Its performance was better on the 70:30 data partition, and the type II error, which is critical in risk evaluation, was also reduced significantly. The improved performance of FS-HB is attributed to the important features used for developing the classifier, which reduce the complexity of the algorithm, and to the ensemble methodology, which exploits the classical bias-variance trade-off and performs better than standalone classifiers.
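A rough sketch of the FS-HB pipeline (filter ranking followed by bagging over an unstable tree base classifier), assuming scikit-learn >= 1.2; the dataset, k=10 retained features, and 25 bags are illustrative choices, and the chi-square stage requires non-negative inputs.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # non-negative features, so chi2 applies

fs_hb = Pipeline([
    ("rank", SelectKBest(chi2, k=10)),  # filter: keep the top-ranked features
    ("bag", BaggingClassifier(estimator=DecisionTreeClassifier(),  # unstable tree base
                              n_estimators=25, random_state=0)),
])
print("5-fold CV accuracy:", cross_val_score(fs_hb, X, y, cv=5).mean())
```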

3.
The main task of this research is feature subset selection that reduces the dependency on any single feature selection technique and obtains a high-quality minimal feature subset from a real-world domain. To this end, two types of feature representation are first presented: unigram-based and part-of-speech-based feature sets. Second, five feature ranking methods are employed to create feature vectors. Finally, we propose two methods for integrating feature vectors and feature subsets. An ordinal-based integration of different feature vectors (OIFV) is proposed to obtain a new feature vector that depends on the order of features in the old vectors. A frequency-based integration of different feature subsets (FIFS), using the most effective features obtained from hybrid filter and wrapper methods, is then proposed. In addition, four well-known text classification algorithms serve as classifiers in the wrapper method for selecting the feature subsets. A wide range of comparative experiments was carried out on five widely used sentiment analysis datasets. The experiments demonstrate that the proposed methods can effectively improve sentiment classification performance, and that the proposed part-of-speech patterns yield better classification accuracy than unigram-based features.

4.
This paper presents a novel wrapper feature selection algorithm for classification problems, namely the hybrid genetic algorithm (GA)- and extreme learning machine (ELM)-based feature selection algorithm (HGEFS). It uses a GA to wrap an ELM in the search for optimal subsets in the huge feature space, and then selects a set of subsets to form an ensemble that improves the final prediction accuracy. To prevent the GA from being trapped in local optima, we propose a novel and efficient mechanism, designed specifically for feature selection problems, to maintain the GA's diversity. To measure each subset's quality fairly and efficiently, we adopt a modified ELM called the error-minimized extreme learning machine (EM-ELM), which automatically determines an appropriate network architecture for each feature subset. Moreover, EM-ELM has good generalization ability and extreme learning speed, which lets us perform wrapper feature selection in affordable time; in other words, we simultaneously optimize the feature subset and the classifiers' parameters. After the GA's search finishes, to further improve prediction accuracy and obtain a stable result, we select a set of EM-ELMs from the final population to form the ensemble, according to a specific ranking and selection strategy. To verify the performance of HGEFS, empirical comparisons with other feature selection methods are carried out on benchmark datasets. The results reveal that HGEFS is a useful method for feature selection problems and consistently outperforms the other algorithms compared.

5.
Feature selection, for both supervised and unsupervised classification, is a relevant problem that researchers have pursued for decades. There are multiple benchmark algorithms based on filter, wrapper, and hybrid methods, adopting techniques that range from traditional search-based approaches to more advanced nature-inspired ones. In this paper, a hybrid feature selection algorithm using a graph-based technique is proposed. The algorithm uses the concept of a Feature Association Map (FAM) as its underlying foundation and applies the graph-theoretic principles of minimal vertex cover and maximal independent set to derive the feature subset. It applies to both supervised and unsupervised classification. The performance of the proposed algorithm has been compared with several benchmark supervised and unsupervised feature selection algorithms and found to be better. The proposed algorithm is also less computationally expensive and hence takes less execution time on the publicly available datasets used in the experiments, which include high-dimensional datasets.
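One illustrative reading of the graph-theoretic idea, assuming networkx is available: connect strongly correlated feature pairs by edges, then keep a maximal independent set so that the retained features are mutually non-redundant. The Pearson correlation measure and the 0.9 threshold are assumptions made for this sketch, not the paper's FAM construction.

```python
import networkx as nx
import numpy as np
from sklearn.datasets import load_breast_cancer

X, _ = load_breast_cancer(return_X_y=True)
corr = np.abs(np.corrcoef(X, rowvar=False))  # pairwise feature association

G = nx.Graph()
G.add_nodes_from(range(X.shape[1]))
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        if corr[i, j] > 0.9:  # edge = redundancy between features i and j
            G.add_edge(i, j)

# An independent set contains no edge, i.e. no redundant pair of features.
selected = nx.maximal_independent_set(G, seed=0)
print(sorted(selected))
```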

6.
Ensemble learning, which reconstructs a strong classifier from multiple weak classifiers, is an important research direction in machine learning. Although many methods for generating diverse base classifiers have been proposed, their robustness still needs improvement. The decreasing-sample ensemble learning algorithm combines the learning ideas of the currently most popular boosting and bagging algorithms: by repeatedly removing high-confidence samples from the training set, the training set shrinks step by step, so that samples whose difficulty was underestimated are fully trained by subsequent classifiers. This strategy yields a series of shrinking training subsets and hence a series of diverse base classifiers. Like boosting and bagging, the decreasing-sample ensemble method combines the base classifiers by voting. Under strict ten-fold cross-validation on eight UCI datasets with seven base classifiers, the experiments show that the decreasing-sample ensemble learning algorithm generally outperforms boosting and bagging.
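A sketch of the shrinking-training-set strategy described above, under illustrative assumptions: decision-tree base learners, a 0.95 confidence threshold for dropping samples, and majority voting on a binary task. None of these settings come from the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models, Xc, yc = [], X_tr, y_tr
for _ in range(10):
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xc, yc)
    models.append(clf)
    conf = clf.predict_proba(Xc).max(axis=1)
    keep = conf < 0.95           # retain only the low-confidence samples
    if keep.sum() < 20:          # stop when too few samples remain
        break
    Xc, yc = Xc[keep], yc[keep]  # the training set shrinks each round

# Combine all rounds by majority vote (labels are 0/1 here).
votes = np.mean([m.predict(X_te) for m in models], axis=0)
print("ensemble accuracy:", ((votes > 0.5).astype(int) == y_te).mean())
```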

7.
Feature selection is a process aimed at filtering out unrepresentative features from a given dataset, usually allowing the later data mining and analysis steps to produce better results. However, different feature selection algorithms use different criteria to select representative features, making it difficult to find the best algorithm for different domain datasets. The limitations of single feature selection methods can be overcome by ensemble methods that combine multiple feature selection results. In the literature, feature selection algorithms are classified as filter, wrapper, or embedded techniques; to the best of our knowledge, however, no study has focused on combining all three types to produce ensemble feature selection. Therefore, the aim here is to determine which combination of different types of feature selection algorithms offers the best performance for different types of medical data, including categorical, numerical, and mixed data types. The experimental results show that combining a filter (principal component analysis) and a wrapper (genetic algorithms) by the union method is the best choice, providing relatively high classification accuracy and a reasonably good feature reduction rate.
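A minimal sketch of the union-combination mechanics. The paper combines PCA (filter) with genetic algorithms (wrapper); since PCA transforms rather than indexes features, this sketch swaps in readily available stand-ins, a univariate F-test filter and SVM-based RFE, to show how two selections are merged by set union.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)

# Filter selection (stand-in for the paper's PCA-based filter)
filter_idx = set(SelectKBest(f_classif, k=8).fit(X, y).get_support(indices=True))
# Wrapper selection (stand-in for the paper's genetic-algorithm wrapper)
wrapper_idx = set(RFE(LinearSVC(dual=False), n_features_to_select=8)
                  .fit(X, y).get_support(indices=True))

union_idx = sorted(filter_idx | wrapper_idx)  # ensemble feature selection by union
print(union_idx)
X_reduced = X[:, union_idx]
```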

8.
In network fault diagnosis, a large number of irrelevant or redundant features reduces diagnostic accuracy, so the initial features must be selected. Wrapper-style feature selection is computationally expensive because of the classification algorithm it embeds. To reduce this cost, this paper proposes a support-vector-based binary particle swarm optimization (SVB-BPSO) method for fault feature selection. The algorithm uses an SVM as the classifier: it first trains an SVM on all samples to extract the support vector (SV) set and then uses only that set in the wrapped classification training; it trains the SVM using the average distance between support vectors of different classes as the SVM parameter; finally, guided by the classification results, BPSO performs a global search in the feature space to select the optimal feature set. Experiments on the DARPA dataset show that the proposed method reduces the computational cost of wrapper-style feature selection while achieving high classification accuracy and a clear dimensionality reduction.

9.
For the feature selection problem in supervised classification, a wrapper feature selection method based on a quantum evolutionary algorithm is proposed. First, an analysis shows that existing subset evaluation methods overly favor classification accuracy, so two subset evaluation methods, one based on a fixed threshold and one on statistical tests, are proposed. Then the evolutionary strategy of the quantum evolutionary algorithm is improved: the evolution is divided into two stages that use the individual best and the global best, respectively, as the population's evolution target. On this basis, a feature selection algorithm is designed following the general wrapper feature selection framework. Finally, 15 UCI datasets are used to verify the effectiveness of the subset evaluation methods and the evolutionary strategy, as well as the superiority of the new method over six other feature selection methods. The results show that the new method achieves similar or better classification accuracy on more than 80% of the datasets and selects smaller feature subsets on 86.67% of them.

10.
Imbalanced data classification is a current research focus in machine learning. Traditional classification algorithms usually assume a balanced dataset and cannot be applied directly to imbalanced data. For this problem, this paper proposes an improved boosting algorithm for imbalanced classification based on feature selection: it weighs the importance of minority-class samples across the dataset's different attribute types and filters out the attributes most useful for correctly predicting minority-class samples, which also reduces the data dimensionality. It then applies an imbalanced classification algorithm to rebalance the data. Finally, to address the problem that the original algorithm grows the weights of misclassified samples too quickly, a new improvement is proposed that effectively restrains the growth rate of the weights. Experimental results show that the algorithm effectively improves classification performance on imbalanced data, especially for the minority class.

11.

This paper enhances the smart grid by proposing a new hybrid feature selection method called feature selection-based ranking (FSBR). In general, feature selection excludes unpromising features from the data collected at the fog layer, which can be achieved using filter methods, wrapper methods, or a hybrid of the two. Our proposed method consists of two phases: a filter phase and a wrapper phase. In the filter phase, the whole dataset goes through different ranking techniques (relative weight ranking, effectiveness ranking, and information gain ranking), and the results of these rankings are sent to a fuzzy inference engine that generates the final ranks. In the wrapper phase, data is selected based on the final ranks and passed to three different classifiers (naive Bayes, support vector machine, and neural network) to select the best feature set based on classifier performance. This process enhances the smart grid by reducing the amount of data sent to the cloud, decreasing computation time, and decreasing data complexity. Thus, the FSBR methodology enables user load forecasting (ULF) to make fast decisions, react quickly in short-term load forecasting, and provide high prediction accuracy. The authors explain the suggested approach via numerical examples on two datasets. On the first dataset, the proposed method was compared with six other methods and achieved the best accuracy, 91%. On the second, generalization dataset, it achieved 90% accuracy compared with fourteen different methods.
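A minimal stand-in for the rank-fusion step: three ranking criteria score each feature, and the per-criterion ranks are fused into a final ranking. Simple rank averaging replaces the paper's fuzzy inference engine here, and the three scikit-learn scorers are assumptions standing in for the paper's ranking techniques.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)  # non-negative features, so chi2 applies

def to_ranks(scores):
    """Convert scores to rank positions (0 = best feature)."""
    return np.argsort(np.argsort(-scores))

fused = np.mean([to_ranks(f_classif(X, y)[0]),
                 to_ranks(chi2(X, y)[0]),
                 to_ranks(mutual_info_classif(X, y, random_state=0))], axis=0)
final_order = np.argsort(fused)  # fused ranking, best feature first
print(final_order[:10])
```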

12.
This paper presents a hybrid filter-wrapper feature subset selection algorithm based on particle swarm optimization (PSO) for support vector machine (SVM) classification. The filter model is based on mutual information and is a composite measure of feature relevance and redundancy with respect to the currently selected feature subset. The wrapper model is a modified discrete PSO algorithm. The hybrid algorithm, called maximum relevance minimum redundancy PSO (mr2PSO), is novel in that it uses the mutual information available from the filter model to weigh the bit selection probabilities in the discrete PSO; mr2PSO thus uniquely brings together the efficiency of filters and the greater accuracy of wrappers. The proposed algorithm is tested on several well-known benchmark datasets, and its performance is compared with a recent hybrid filter-wrapper algorithm based on a genetic algorithm and a wrapper algorithm based on PSO. The results show that mr2PSO is competitive in terms of both classification accuracy and computational performance.
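A compact sketch of the central mechanism under loose assumptions: mutual-information relevance scores bias the probability of switching each feature bit on in a binary PSO, and fitness is the cross-validated accuracy of an SVM on the selected features. The velocity update, constants, and swarm size are illustrative rather than the paper's settings, and the redundancy term of mr2PSO is omitted for brevity.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n = X.shape[1]
mi = mutual_info_classif(X, y, random_state=0)
mi_w = mi / mi.max()                           # normalized relevance weights

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pos = rng.random((10, n)) < 0.5                # 10 particles, binary positions
vel = np.zeros((10, n))
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pfit.argmax()].copy()

for _ in range(15):
    r1, r2 = rng.random((10, n)), rng.random((10, n))
    # Discrete-PSO velocity: XOR marks bits that differ from the attractors.
    vel = 0.7 * vel + 1.5 * r1 * (pbest ^ pos) + 1.5 * r2 * (gbest ^ pos)
    # MI-weighted bit selection probability: relevant features switch on more often.
    prob = 1 / (1 + np.exp(-vel)) * (0.5 + 0.5 * mi_w)
    pos = rng.random((10, n)) < prob
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pfit
    pbest[improved], pfit[improved] = pos[improved], fit[improved]
    gbest = pbest[pfit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest), "cv accuracy:", pfit.max())
```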

13.
This correspondence presents a novel hybrid wrapper and filter feature selection algorithm for classification using a memetic framework. It incorporates a filter ranking method into the traditional genetic algorithm to improve classification performance and accelerate the search for core feature subsets. In particular, the method adds or deletes a feature from a candidate feature subset based on univariate feature ranking information. An empirical study on commonly used datasets from the University of California, Irvine (UCI) repository and on microarray datasets shows that the proposed method outperforms existing methods in terms of classification accuracy, number of selected features, and computational efficiency. Furthermore, we investigate several major issues of memetic algorithms (MAs) to identify a good balance between local search and genetic search, so as to maximize search quality and efficiency in the hybrid filter and wrapper MA.
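A sketch of the filter-guided local-search moves described above: starting from a candidate subset, try adding the highest-ranked excluded feature and deleting the lowest-ranked included one, keeping whichever move improves the wrapper score. The F-score ranking, k-NN wrapper, and starting subset are illustrative assumptions; in the paper these moves refine individuals inside a genetic algorithm.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rank = np.argsort(-f_classif(X, y)[0])        # feature ids ordered best-first

def score(subset):
    return cross_val_score(KNeighborsClassifier(), X[:, sorted(subset)], y, cv=3).mean()

subset = set(rank[:5].tolist())               # a candidate subset (e.g. from the GA)
best = score(subset)
for _ in range(10):
    add = next(f for f in rank if f not in subset)           # best-ranked excluded feature
    delete = max(subset, key=lambda f: list(rank).index(f))  # worst-ranked included one
    improved = False
    for move in (subset | {add}, subset - {delete}):
        if move:
            s = score(move)
            if s > best:
                subset, best, improved = move, s, True
                break
    if not improved:                           # no add/delete move helps: stop
        break
print(sorted(subset), best)
```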

14.
Today, feature selection is an active research area in machine learning. The main idea of feature selection is to choose a subset of the available features by eliminating features with little or no predictive information, as well as redundant features that are strongly correlated. Many approaches to feature selection exist, but most of them can only handle crisp data; until now, few approaches could directly handle both crisp and low-quality (imprecise and uncertain) data. We therefore propose a new feature selection method that can handle both. The proposed approach is based on a Fuzzy Random Forest and integrates filter and wrapper methods into a sequential search procedure that improves the classification accuracy of the selected features. It consists of the following main steps: (1) scaling and discretization of the feature set, with feature pre-selection based on the discretization process (filter); (2) ranking of the pre-selected features using the Fuzzy Decision Trees of a Fuzzy Random Forest ensemble; and (3) wrapper feature selection using a Fuzzy Random Forest ensemble with cross-validation. The efficiency and effectiveness of this approach are demonstrated through several experiments using both high-dimensional and low-quality datasets. The approach performs well, both in classification accuracy and in the number of features selected, and behaves well with high-dimensional datasets (microarray datasets) as well as with low-quality datasets.

15.
This paper deals with the problem of supervised wrapper-based feature subset selection in datasets with a very large number of attributes. The recent literature contains numerous hybrid selection algorithms that, based on a filter ranking, perform an incremental wrapper selection over that ranking. Though they work well, these methods still have problems: (1) depending on the complexity of the wrapper search method, the number of wrapper evaluations can still be too large; and (2) they rely on a univariate ranking that does not take into account the interaction between the variables already included in the selected subset and the remaining ones. Here we propose a new approach whose main goal is to drastically reduce the number of wrapper evaluations while maintaining good performance (e.g., accuracy and size of the obtained subset). We propose an algorithm that iteratively alternates between filter ranking construction and wrapper feature subset selection (FSS): the FSS uses only the first block of ranked attributes, and the ranking method uses the currently selected subset to build a new ranking in which this knowledge is taken into account. The algorithm terminates when no new attribute is selected in the last call to the FSS algorithm. The main advantage of this approach is that only a few blocks of variables are analyzed, so the number of wrapper evaluations decreases drastically. The proposed method is tested on eleven high-dimensional datasets (2400-46,000 variables) using different classifiers. The results show an impressive reduction in the number of wrapper evaluations without degrading the quality of the obtained subset.
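A sketch of the alternating loop: rank the remaining features, run an incremental wrapper over only the first block, and stop when a pass over a block adds nothing. The F-score ranking here is unconditional, whereas the paper re-ranks conditioned on the current subset; the block size of 5 and the naive Bayes wrapper are also assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
selected, best, block = [], 0.0, 5

while True:
    remaining = [f for f in range(X.shape[1]) if f not in selected]
    if not remaining:
        break
    # Re-rank the remaining features (best-first) at each outer iteration.
    order = sorted(remaining, key=lambda f: -f_classif(X[:, [f]], y)[0][0])
    added = False
    for f in order[:block]:                   # the wrapper sees only the first block
        cand = selected + [f]
        acc = cross_val_score(GaussianNB(), X[:, cand], y, cv=3).mean()
        if acc > best:
            selected, best, added = cand, acc, True
    if not added:                             # no gain in the last block: terminate
        break
print(selected, best)
```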

16.
Combined SNP feature selection based on Relief and SVM-RFE
Genome-wide association analysis of SNPs faces two major difficulties: the high-dimensional, small-sample nature of SNP data and the complexity of the pathology of genetic diseases. This paper introduces feature selection into genome-wide SNP association analysis and proposes a combined SNP feature selection method based on Relief and SVM-RFE. The method has two stages: a filter stage, in which the Relief algorithm removes irrelevant SNPs, and a wrapper stage, in which support-vector-machine-based recursive feature elimination (SVM-RFE) screens out the key SNPs associated with genetic disease. Experiments show that this method clearly outperforms SVM-RFE alone and achieves better classification accuracy than Relief-SVM alone, providing an effective approach for genome-wide SNP association analysis.
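A two-stage sketch matching the filter (Relief) + wrapper (SVM-RFE) design. The Relief stage assumes the third-party skrebate package; the neighbor count, the 15 filter survivors, the 5 final features, and the demo dataset (standing in for a real SNP matrix) are all illustrative.

```python
import numpy as np
from skrebate import ReliefF                   # third-party Relief implementation
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)    # stand-in for a SNP matrix

# Stage 1 (filter): Relief weights discard irrelevant features.
relief = ReliefF(n_neighbors=10).fit(X, y)
keep = np.argsort(relief.feature_importances_)[-15:]   # top 15 by Relief weight

# Stage 2 (wrapper): SVM-RFE recursively eliminates features among the survivors.
rfe = RFE(LinearSVC(dual=False), n_features_to_select=5).fit(X[:, keep], y)
key_features = keep[rfe.get_support()]
print(sorted(key_features))
```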

17.
Feature selection methods mainly fall into filter and wrapper approaches. To exploit the low computational cost of filters and the high accuracy of wrappers, a new feature selection method combining the two is proposed. The method first uses a mutual-information-based filter to obtain a subset that meets a given accuracy requirement, and then applies a wrapper method to find the final optimized feature subset. Given the success of genetic algorithms in combinatorial optimization, a genetic algorithm is used to search for the optimal feature subset. In numerical simulations and in bearing fault feature selection, the new method saves substantial selection time while maintaining diagnostic accuracy. The combined feature selection method searches for good feature subsets effectively and saves selection time, offering the dual advantages of high efficiency and high accuracy.

18.
To address the problem that constraint-score-based feature selection is easily affected by the composition and cardinality of the pairwise constraints, a dynamic ensemble selection algorithm based on bagging constraint scores (BCS-DES) is proposed. The algorithm introduces the bagging constraint score (BCS) into dynamic ensemble selection: it partitions the sample space into different regions and uses a multi-population parallel genetic algorithm to select a locally optimal classification ensemble for each test sample, thereby improving classification accuracy. Experiments on UCI datasets show that BCS-DES is less affected by the composition and cardinality of the pairwise constraints than existing feature selection algorithms and performs better.

19.
Existing filter feature selection algorithms do not consider the intrinsic structure of nonlinear data, so their classification accuracy falls far below that of wrapper algorithms. This paper therefore proposes a feature selection algorithm for high-dimensional data based on a reproducing kernel Hilbert space (RKHS) mapping. First, a search tree is built and searched using the branch-and-bound method; then the internal structure of the nonlinear data is analyzed via the RKHS mapping; finally, the optimal distance measure is chosen according to the internal structure of the dataset. Comparative simulation results show that the method achieves classification accuracy close to that of wrapper feature selection algorithms while holding a clear advantage in computational efficiency, making it suitable for big-data analysis.
