Similar Literature (20 results)
1.
In this work, we formalise and evaluate an ensemble of classifiers designed for the resolution of multi-class problems. To achieve a good accuracy rate, the base learners are built with pairwise coupled binary and multi-class classifiers. Moreover, to reduce the computational cost of the ensemble and to improve its performance, these classifiers are trained on specific attribute subsets. This design captures the advantages provided by binary decomposition methods, by attribute partitioning methods, and by the cooperative characteristics of combining redundant base learners. To analyse the quality of this architecture, its performance has been tested on different domains and compared to other well-known classification methods. This experimental evaluation indicates that our model is, in most cases, as accurate as these methods, but much more efficient.
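A minimal sketch of the pairwise-decomposition idea under stated assumptions: random attribute subsets and decision trees stand in for the paper's coupled base learners and its subset-construction scheme.

```python
# One binary model per class pair, each trained on its own attribute subset;
# predictions are combined by pairwise voting. Illustrative, not the paper's
# exact construction.
from itertools import combinations
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=10, n_classes=4, random_state=0)

models = {}
for a, b in combinations(np.unique(y), 2):           # one model per class pair
    mask = np.isin(y, [a, b])
    feats = rng.choice(X.shape[1], size=8, replace=False)  # attribute subset
    clf = DecisionTreeClassifier(random_state=0).fit(X[mask][:, feats], y[mask])
    models[(a, b)] = (clf, feats)

def predict(X_new):
    votes = np.zeros((len(X_new), len(np.unique(y))), dtype=int)
    for (a, b), (clf, feats) in models.items():
        pred = clf.predict(X_new[:, feats])          # each pair casts one vote
        for cls in (a, b):
            votes[:, cls] += (pred == cls)
    return votes.argmax(axis=1)

print((predict(X) == y).mean())
```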

2.
Clustering ensembles integrate multiple base clustering results to obtain a consensus result, improving the stability and robustness of a single clustering method. Since it is natural to represent multiple base clustering results with a hypergraph, where instances are represented by nodes and base clusters by hyperedges, several hypergraph-based clustering ensemble methods have been proposed. Conventional hypergraph-based methods obtain the final consensus result by partitioning a pre-defined static hypergraph. However, since base clusters may be imperfect due to the unreliability of base clustering methods, the pre-defined hypergraph constructed from them is also unreliable, and directly obtaining the final clustering result by partitioning it is inappropriate. To tackle this problem, we propose a clustering ensemble method via structured hypergraph learning: instead of being constructed directly, the hypergraph is dynamically learned from the base results, which makes it more reliable. Moreover, when dynamically learning the hypergraph, we enforce a clear clustering structure, which is more appropriate for clustering tasks and removes the need for uncertain post-processing such as hypergraph partitioning. Extensive experiments show that our method not only performs better than conventional hypergraph-based ensemble methods but also outperforms state-of-the-art clustering ensemble methods.
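For context, a sketch of the conventional hypergraph encoding that the paper starts from; the structured hypergraph learning that is the paper's contribution is not reproduced here.

```python
# Nodes = instances, hyperedges = base clusters: stack one-hot encodings of
# each base clustering into an n x m incidence matrix H, where H[i, j] = 1
# iff instance i lies in cluster j.
import numpy as np

def incidence_matrix(base_labelings):
    blocks = []
    for labels in base_labelings:
        labels = np.asarray(labels)
        ks = np.unique(labels)
        blocks.append((labels[:, None] == ks[None, :]).astype(float))
    return np.hstack(blocks)

# three base clusterings of six instances
H = incidence_matrix([[0, 0, 1, 1, 2, 2],
                      [0, 0, 0, 1, 1, 1],
                      [1, 0, 1, 0, 1, 0]])
print(H.shape)  # (6, 7): 3 + 2 + 2 hyperedges
```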

3.
A neural network ensemble based on rough set reducts is proposed to decrease the computational complexity of conventional ensemble feature selection algorithms. First, a dynamic reduction technique combining a genetic algorithm with resampling is adopted to obtain reducts with good generalization ability. Second, multiple BP neural networks based on different reducts are built as base classifiers. Following the idea of selective ensembles, the neural network ensemble with the best generalization ability is found by search strategies. Finally, classification is performed by combining the predictions of the component networks by voting. The method has been verified in experiments on a remote sensing image and five UCI datasets. Compared with conventional ensemble feature selection algorithms, it costs less time and has lower computational complexity, and its classification accuracy is satisfactory.
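An illustrative sketch of the ensemble stage: random feature subsets stand in for the rough-set reducts, and the GA-based dynamic reduction and selective search are omitted.

```python
# BP networks trained on different feature subsets, combined by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, random_state=0)

ensemble = []
for i in range(5):
    feats = rng.choice(X.shape[1], size=10, replace=False)  # stand-in reduct
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=i).fit(X[:, feats], y)
    ensemble.append((net, feats))

def vote(X_new):
    preds = np.array([net.predict(X_new[:, f]) for net, f in ensemble])
    return (preds.mean(axis=0) > 0.5).astype(int)  # majority of 5 networks

print((vote(X) == y).mean())
```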

4.
Neural network ensembles: evaluation of aggregation algorithms
Ensembles of artificial neural networks show improved generalization capabilities that outperform those of single networks. However, for aggregation to be effective, the individual networks must be as accurate and diverse as possible. An important problem is, then, how to tune the aggregate members so as to strike an optimal compromise between these two conflicting conditions. We present here an extensive evaluation of several algorithms for ensemble construction, including new proposals, and compare them with standard methods in the literature. We also discuss a potential problem with sequential aggregation algorithms: the infrequent but damaging selection, through their heuristics, of particularly bad ensemble members. We introduce modified algorithms that cope with this problem by allowing individual weighting of aggregate members. Our algorithms and their weighted modifications compare favorably against other methods in the literature, producing a noticeable improvement in performance on most of the standard statistical databases used as benchmarks.

5.
This paper presents a new method, called one-against-all ensemble, for solving multiclass pattern classification problems. The proposed method incorporates a neural network ensemble into the one-against-all scheme to improve the generalization performance of the classifier. The experimental results show that the proposed method reduces the uncertainty of the decision and is comparable to other widely used methods.
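A minimal sketch of the one-against-all ensemble idea, assuming a small committee of networks per class whose averaged binary scores are compared by argmax; the committee size and network settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
classes = np.unique(y)

committees = {
    c: [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                      random_state=s).fit(X, (y == c).astype(int))
        for s in range(3)]                      # 3 networks per class
    for c in classes
}

def predict(X_new):
    # average each committee's "class c vs rest" probability, then argmax
    scores = np.column_stack([
        np.mean([m.predict_proba(X_new)[:, 1] for m in committees[c]], axis=0)
        for c in classes
    ])
    return classes[scores.argmax(axis=1)]

print((predict(X) == y).mean())
```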

6.
Feature selection for ensembles has been shown to be an effective strategy for ensemble creation, owing to its ability to produce good subsets of features that make the classifiers of the ensemble disagree on difficult cases. In this paper we present an ensemble feature selection approach based on a hierarchical multi-objective genetic algorithm. The underpinning paradigm is "overproduce and choose". The algorithm operates on two levels: first it performs feature selection in order to generate a set of classifiers, and then it chooses the best team of classifiers. To show its robustness, the method is evaluated in two different contexts: supervised and unsupervised feature selection. In the former, we considered the problem of handwritten digit recognition, using three different feature sets and multi-layer perceptron neural networks as classifiers. In the latter, we considered the problem of handwritten month-word recognition, using three different feature sets and hidden Markov models as classifiers. Experiments and comparisons with classical methods, such as Bagging and Boosting, demonstrated that the proposed methodology brings compelling improvements when classifiers have to work with very low error rates. Comparisons were based on recognition rates only.

7.
朱帮助 《计算机科学》2008,35(3):132-133
To address the shortcomings of existing neural network ensemble methods with respect to input attributes, ensemble strategy, and ensemble form, a selective neural network ensemble model based on feature extraction, NsNNEIPCABag, is proposed. The model generates several training subsets with the Bagging algorithm; uses improved principal component analysis (IPCA) to extract principal components as inputs for training the individual networks; applies IPCA to select a linearly independent subset of the individual networks; and uses a neural network to combine the selected individual networks nonlinearly. To verify the effectiveness of the model, it was applied to time series forecasting; the results show that the generalization ability of the proposed method is superior to other popular ensemble methods.
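A rough sketch of this pipeline under stated simplifications: plain PCA stands in for the improved PCA (IPCA), and the linear-independence selection step is omitted.

```python
# Bagged subsets -> PCA inputs -> individual networks -> a neural network
# combines the individual outputs nonlinearly.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=300, n_features=12, noise=5.0, random_state=0)

members = []
for i in range(5):
    idx = rng.integers(0, len(X), size=len(X))      # bootstrap resample
    pca = PCA(n_components=6).fit(X[idx])           # stand-in for IPCA
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                       random_state=i).fit(pca.transform(X[idx]), y[idx])
    members.append((pca, net))

# nonlinear combiner network trained on the members' predictions
Z = np.column_stack([net.predict(pca.transform(X)) for pca, net in members])
combiner = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                        random_state=0).fit(Z, y)
print(combiner.predict(Z[:3]))
```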

8.
蔡铁  伍星  李烨 《计算机应用》2008,28(8):2091-2093
To construct diverse base classifiers for ensemble learning, a base classifier construction method based on data discretization is proposed and applied to support vector machine ensembles. The method processes the training set with rough set and Boolean reasoning discretization algorithms, which effectively removes irrelevant and redundant attributes and improves the accuracy and diversity of the base classifiers. Experimental results show that the proposed method achieves better performance than the traditional ensemble learning algorithms Bagging and AdaBoost.
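An illustrative sketch in which sklearn's KBinsDiscretizer stands in for the rough set and Boolean reasoning discretization: each SVM is trained on a differently discretized view of the data to encourage diversity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

ensemble = []
for bins, strategy in [(3, "uniform"), (5, "quantile"), (4, "kmeans")]:
    disc = KBinsDiscretizer(n_bins=bins, encode="ordinal",
                            strategy=strategy).fit(X)
    clf = SVC(probability=True, random_state=0).fit(disc.transform(X), y)
    ensemble.append((disc, clf))

def predict(X_new):
    # soft vote over the differently discretized base SVMs
    p = np.mean([clf.predict_proba(d.transform(X_new)) for d, clf in ensemble],
                axis=0)
    return p.argmax(axis=1)

print((predict(X) == y).mean())
```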

9.
AdaBoost is a highly effective ensemble learning method that combines several weak learners to produce a strong committee with higher accuracy. However, like other ensemble methods, AdaBoost uses a large number of base learners to produce the final outcome when addressing high-dimensional data, which poses a critical challenge in the form of high memory consumption. Feature selection methods can significantly reduce dimensionality in regression and have been established to be applicable to ensemble pruning. By pruning the ensemble, it is possible to obtain a simpler ensemble with fewer base learners but higher accuracy. In this article, we propose using the minimax concave penalty (MCP) function to prune an AdaBoost ensemble, simplifying the model and improving its accuracy simultaneously. The MCP penalty function is compared with LASSO and SCAD in terms of pruning performance. Experiments on real datasets demonstrate that MCP pruning outperforms the other two methods: it reduces the ensemble size effectively and generates marginally more accurate predictions than the unpruned AdaBoost model.
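The MCP function itself is standard and can be transcribed directly; the authors' full pruning procedure is not reproduced here.

```python
# MCP(t) = lam*|t| - t^2/(2*gamma)  for |t| <= gamma*lam,
#        = gamma*lam^2 / 2          otherwise (no further shrinkage).
import numpy as np

def mcp(t, lam, gamma):
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(t <= gamma * lam,
                    lam * t - t ** 2 / (2 * gamma),
                    gamma * lam ** 2 / 2)

print(mcp([0.0, 0.5, 5.0], lam=1.0, gamma=2.0))  # [0.  0.4375  1.]
```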

10.
Over the last decades, researchers have proposed many tools and methodologies for developing effective medical decision support systems, and new methodologies and tools continue to appear. Diagnosing heart disease is an important problem, and many researchers have worked on intelligent medical decision support systems to improve physicians' diagnostic ability. In this paper, we introduce a methodology that uses SAS Base Software 9.1.3 for diagnosing heart disease. A neural network ensemble method is at the centre of the proposed system. This ensemble-based method creates new models by combining the posterior probabilities or the predicted values from multiple predecessor models, so more effective models can be created. In experiments on data from the Cleveland heart disease database, the proposed tool obtained 89.01% classification accuracy, with sensitivity and specificity of 80.95% and 95.91%, respectively.

11.
Both theoretical and experimental studies have shown that combining accurate neural networks (NNs) with negatively correlated errors in an ensemble greatly improves generalization ability. Negative correlation learning (NCL) and mixture of experts (ME), two popular combining methods, each employ a special error function for the simultaneous training of NNs to produce negatively correlated NNs. In this paper, we review the properties of the NCL and ME methods, discussing their advantages and disadvantages. Characterization of both methods showed that they have different but complementary features, so a hybrid system that includes features of both NCL and ME may be better than either of its base approaches. In this study, two such hybrids are proposed, each offsetting the weaknesses of one method with the strengths of the other: gated NCL (G-NCL) and mixture of negatively correlated experts (MNCE). In the first approach, G-NCL, the dynamic combiner of ME is used to combine the outputs of the base experts in the NCL method. This combiner provides an efficient tool for evaluating and combining the NCL experts, with weights estimated dynamically from the inputs based on each expert's competence on different parts of the problem. In the second approach, MNCE, the control parameter of NCL is incorporated into the error function of ME, which enables the training algorithm of ME to efficiently adjust the measure of negative correlation between the experts. This control parameter can be regarded as a regularization term added to the error function of ME, establishing a better balance in the bias-variance-covariance trade-off and thus improving generalization. The two proposed hybrid ensemble methods, G-NCL and MNCE, are compared with their constituent methods, ME and NCL, on several benchmark problems. The experimental results show that our proposed methods preserve the advantages and alleviate the disadvantages of their base approaches, offering significantly improved performance over the original methods.
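For reference, the standard NCL error for member i on one sample, written out in code: squared error plus a lambda-weighted penalty p_i = (f_i - f_bar) * sum over j != i of (f_j - f_bar). This transcribes the usual NCL objective, not either hybrid's full training loop.

```python
import numpy as np

def ncl_errors(outputs, target, lam):
    f = np.asarray(outputs, dtype=float)     # one output per ensemble member
    f_bar = f.mean()
    # sum_{j != i}(f_j - f_bar) == -(f_i - f_bar), so p_i = -(f_i - f_bar)^2
    p = -(f - f_bar) ** 2
    return 0.5 * (f - target) ** 2 + lam * p

print(ncl_errors([0.2, 0.5, 0.9], target=1.0, lam=0.5))
```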

12.
Recent research in fault classification has shown the importance of accurately selecting the features to be used as inputs to the diagnostic model. In this work, a multi-objective genetic algorithm (MOGA) is considered for the feature selection phase. Two different techniques for using the selected features to develop the fault classification model are then compared: a single classifier based on the feature subset with the best classification performance, and an ensemble of classifiers working on different feature subsets. The motivation for developing ensembles of classifiers is that they can achieve higher accuracy than single classifiers. An important requirement for an ensemble to be effective is diversity in the predictions of its base classifiers, i.e. their capability of erring on different sub-regions of the pattern space. To show the benefit of diverse base classifiers, two different ensembles have been developed: in the first, the base classifiers are constructed on feature subsets found by MOGAs aimed at maximizing the fault classification performance and minimizing the number of features; in the second, diversity among classifiers is added to the MOGA search as a third objective to maximize. In both cases, a voting technique combines the predictions of the base classifiers into the ensemble output. For verification, numerical experiments are conducted on a case of multiple-fault classification in rotating machinery, and the results achieved by the two ensembles are compared with those of a single optimal classifier.
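One common way to quantify the diversity objective mentioned above is the mean pairwise disagreement between base classifiers; this is illustrative, as the paper may use a different measure inside the MOGA.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_disagreement(predictions):
    """predictions: array of shape (n_classifiers, n_samples)."""
    P = np.asarray(predictions)
    pairs = list(combinations(range(len(P)), 2))
    return np.mean([(P[i] != P[j]).mean() for i, j in pairs])

preds = np.array([[0, 1, 1, 0, 1],
                  [0, 1, 0, 0, 1],
                  [1, 1, 1, 0, 0]])
print(mean_pairwise_disagreement(preds))  # 0.4
```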

13.
Context: Several issues affect software defect data, including redundancy, correlation, feature irrelevance, and missing samples. It is also hard to ensure a balanced distribution between data pertaining to defective and non-defective software; in most experimental cases, data for the latter class dominates the dataset.
Objective: The objectives of this paper are to demonstrate the positive effects of combining feature selection and ensemble learning on defect classification performance. Along with efficient feature selection, a new two-variant (with and without feature selection) ensemble learning algorithm is proposed to provide robustness to both data imbalance and feature redundancy.
Method: We carefully combine selected ensemble learning models with efficient feature selection to address these issues and mitigate their effects on defect classification performance.
Results: Forward selection showed that only a few features contribute to a high area under the receiver-operating curve (AUC). On the tested datasets, greedy forward selection (GFS) outperformed other feature selection techniques such as Pearson's correlation, suggesting that the features are highly unstable. However, ensemble learners like random forests and the proposed algorithm, average probability ensemble (APE), are not as affected by poor features as weighted support vector machines (W-SVMs). Moreover, the APE model combined with greedy forward selection (enhanced APE) achieved AUC values of approximately 1.0 for the NASA datasets PC2, PC4, and MC1.
Conclusion: This paper shows that the features of a software dataset must be carefully selected for accurate classification of defective components. Furthermore, tackling the software data issues mentioned above with the proposed combined learning model resulted in remarkable classification performance, paving the way for successful quality control.
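A sketch of greedy forward selection scored by AUC, assuming a random forest base model rather than the authors' average probability ensemble (APE).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=15, n_informative=4,
                           weights=[0.85], random_state=0)  # imbalanced data
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

def auc_of(feats):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(Xtr[:, feats], ytr)
    return roc_auc_score(yte, clf.predict_proba(Xte[:, feats])[:, 1])

selected, best_auc = [], 0.0
while True:
    candidates = [f for f in range(X.shape[1]) if f not in selected]
    if not candidates:
        break
    f, score = max(((f, auc_of(selected + [f])) for f in candidates),
                   key=lambda fs: fs[1])
    if score <= best_auc:            # stop when no candidate improves AUC
        break
    selected.append(f)
    best_auc = score

print(selected, round(best_auc, 3))
```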

14.
Research on optimization algorithms for feature selection
Target recognition has become decisive in modern warfare, and one key to target recognition is the extraction and selection of target features; feature selection is therefore particularly important. To improve efficiency, it is desirable to select a smaller (optimized) feature set algorithmically. This paper briefly introduces the principles of several typical feature selection optimization algorithms, including algorithms based on extension matrices and rough set theory, heuristic search, adaptive neural networks, and chaotic neural networks, and compares their performance. On this basis, an improved feature selection method combining chaotic and adaptive neural networks is proposed and its principle briefly described. Finally, the effectiveness of heuristic search for feature selection is verified with MATLAB programs.

15.
Correlated information between multiple views can provide useful information for building robust classifiers. One way to extract correlated features from different views is canonical correlation analysis (CCA). However, CCA is an unsupervised method and cannot preserve discriminant information during feature extraction. In this paper, we first incorporate discriminant information into CCA by using random cross-view correlations between within-class examples. Because of this randomness, many feature extractors can be constructed from CCA and random correlations. We then fuse these feature extractors and propose a novel method called random correlation ensemble (RCE) for multi-view ensemble learning. We compare RCE with existing multi-view feature extraction methods, including CCA and discriminant CCA (DCCA), which use all cross-view correlations between within-class examples, as well as trivial ensembles of CCA and DCCA that adopt standard bagging and boosting strategies. Experimental results on several multi-view data sets validate the effectiveness of the proposed method.
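A sketch of the random cross-view correlation step as described: within-class examples are randomly paired across views before fitting CCA, and repeating this with different pairings would yield the pool of extractors that RCE fuses. Everything beyond the pairing is illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d1, d2 = 200, 10, 8
X1 = rng.normal(size=(n, d1))            # view 1
X2 = rng.normal(size=(n, d2))            # view 2
y = rng.integers(0, 2, size=n)           # class labels

rows_a, rows_b = [], []
for c in np.unique(y):
    idx = np.flatnonzero(y == c)
    rows_a.append(idx)                   # view-1 example paired with ...
    rows_b.append(rng.permutation(idx))  # ... a random same-class view-2 example
rows_a, rows_b = np.concatenate(rows_a), np.concatenate(rows_b)

cca = CCA(n_components=4).fit(X1[rows_a], X2[rows_b])
Z1, Z2 = cca.transform(X1, X2)           # correlated features for both views
print(Z1.shape, Z2.shape)                # (200, 4) (200, 4)
```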

16.
Using deep neural networks to process the massive image data generated by the rapidly growing number of IoT devices is an inevitable trend, but because deep neural networks are vulnerable to adversarial examples, they can be attacked in ways that endanger IoT security. Improving model robustness has therefore become an important topic. Ensembles of models generally defend better than single-model methods, but the limited computing power of IoT devices makes ensembles hard to deploy. To this end, a model modification and training method is proposed that achieves ensemble-level defense within a single model: extra branches are added to the base model; a feature pyramid fuses features across the branches; and an ensemble diversity term is introduced to assist training. Experiments on MNIST and CIFAR-10, the two most common image classification benchmarks, show that the method significantly improves robustness: classification accuracy under four gradient-based attacks such as FGSM improves more than fivefold, and accuracy under the JSMA, C&W, and EAD attacks reaches ten times that of the original model. At the same time, the method does not degrade accuracy on clean samples and can be combined with adversarial training for an even better defense.

17.
This paper presents a method for improved ensemble learning that treats the optimization of an ensemble of classifiers as a compressed sensing problem. Ensemble learning methods improve the performance of a learned predictor by integrating a weighted combination of multiple predictive models. Ideally, the number of models in the ensemble should be minimized while the weights associated with each included model are optimized. We solve this by treating it as an instance of compressed sensing, in which a sparse solution must be reconstructed from an under-determined linear system; compressed sensing techniques are then employed to find an ensemble that is both small and effective. An additional contribution of this paper is a new performance evaluation method, a pairwise diversity measurement called the roulette-wheel kappa-error. This method takes the different weightings of the classifiers into account and reduces the total number of classifier pairs needed in the kappa-error diagram by selecting pairs via roulette-wheel selection according to those weightings. This greatly improves the clarity and informativeness of the kappa-error diagram, especially when the ensemble is large. We use 25 public data sets to evaluate and compare the performance of compressed sensing ensembles using four different sparse reconstruction algorithms, combined with two classifier learning algorithms and two training data manipulation techniques. We also compare our method against five other state-of-the-art pruning methods. These experiments show that our method produces comparable or better accuracy while being significantly faster than the compared methods.
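A sketch of the compressed-sensing view of pruning: stack the base learners' predictions as columns of a matrix P and recover a sparse weight vector w with P w approximately equal to y. Lasso stands in here for the four sparse reconstruction algorithms compared in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=15, random_state=0)

# overproduce: many weak base learners on bootstrap samples
base = []
for i in range(30):
    idx = rng.integers(0, len(X), size=len(X))
    base.append(DecisionTreeClassifier(max_depth=2, random_state=i)
                .fit(X[idx], y[idx]))

P = np.column_stack([b.predict_proba(X)[:, 1] for b in base])
model = Lasso(alpha=0.01, positive=True).fit(P, y)   # sparse recovery
kept = np.flatnonzero(model.coef_)
print(f"kept {kept.size} of {len(base)} learners")

pred = (model.predict(P) > 0.5).astype(int)          # weighted ensemble vote
print((pred == y).mean())
```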

18.
Ensembles of classifiers that are trained on different parts of the input space generally provide good results. As a popular boosting technique, AdaBoost is an iterative, gradient-based deterministic method for this purpose that minimizes an exponential loss function. Bagging is a random-search-based ensemble creation technique in which the training set of each classifier is randomly sampled. In this paper, a genetic algorithm based ensemble creation approach is proposed in which both the resampled training sets and the classifier prototypes evolve so as to maximize the combined accuracy. The resulting objective-function-based random search, guided by both ensemble accuracy and diversity, can be considered to share the basic properties of bagging and boosting. Experimental results show that the proposed approach provides better combined accuracies using fewer classifiers than AdaBoost.

19.
Improving forecasting accuracy, especially for time series, is an important yet often difficult task facing decision makers in many areas. Combining multiple models can be an effective way to improve forecasting performance, and considerable research has recently been undertaken on neural network ensembles. Most of this work, however, is devoted to classification problems. As time series are often more difficult to model, owing to issues such as autocorrelation and the single realization available at any particular time point, more research is needed in this area. In this paper, we propose a jittered ensemble method for time series forecasting and test its effectiveness on both simulated and real time series. The central idea of the jittered ensemble is to add noise to the input data, augmenting the original training set so that models are formed from different but related training samples. Our results show that the proposed method consistently outperforms the single-model approach across a variety of time series processes. We also find that relatively small ensemble sizes of 5 and 10 are quite effective in improving forecasting performance.
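A minimal sketch of the jittered ensemble, assuming a toy random-walk series, a fixed lag window, and an arbitrary noise level.

```python
# Augment the training inputs with Gaussian noise, fit one network per
# jittered copy, and average the one-step-ahead forecasts.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300))          # toy random-walk series
lags = 4
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]

ensemble = []
for i in range(10):
    X_jit = X + rng.normal(scale=0.1 * X.std(), size=X.shape)  # add noise
    ensemble.append(MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                                 random_state=i).fit(X_jit, y))

next_window = series[-lags:].reshape(1, -1)       # forecast the next point
forecast = np.mean([m.predict(next_window) for m in ensemble])
print(round(float(forecast), 3))
```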

20.
The aim of this paper is to propose a new hybrid data mining model, based on a combination of various feature selection and ensemble learning classification algorithms, to support the decision making process. The model is built in several stages. In the first stage, the initial dataset is preprocessed; apart from applying different preprocessing techniques, we paid particular attention to feature selection. Five different feature selection algorithms were applied and their results, assessed by the ROC and accuracy measures of a logistic regression algorithm, were combined using different voting types. We also propose a new voting method, called if_any, that outperformed all other voting methods as well as the results of any single feature selection algorithm. In the next stage, four different classification algorithms, namely a generalized linear model, support vector machine, naive Bayes, and decision tree, were trained on the dataset obtained from the feature selection process. These classifiers were combined into eight different ensemble models using soft voting. Experimental results on a real dataset show that the hybrid model based on the features selected by the if_any voting method, together with the GLM + DT ensemble, achieves the highest performance and outperforms all other ensemble and single-classifier models.
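A sketch of the two combination steps, interpreting if_any as "keep a feature if any selector voted for it", which is what the name suggests but is an assumption here; the two selectors and two classifiers are stand-ins for the paper's five selectors and four classifiers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# if_any: union of the features chosen by each selection algorithm
masks = [SelectKBest(score_func=f, k=5).fit(X, y).get_support()
         for f in (f_classif, mutual_info_classif)]
keep = np.logical_or.reduce(masks)
X_sel = X[:, keep]

# soft-voting ensemble on the selected features (GLM + DT analogue)
ens = VotingClassifier([("glm", LogisticRegression(max_iter=1000)),
                        ("dt", DecisionTreeClassifier(random_state=0))],
                       voting="soft").fit(X_sel, y)
print(ens.score(X_sel, y))
```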
