Similar Literature
10 similar documents found.
1.
A set of classification rules can be considered as a disjunction of rules, where each rule is a disjunct. A small disjunct is a rule covering a small number of examples. Small disjuncts are a serious problem for effective classification, because the small number of examples satisfying these rules makes their prediction unreliable and error-prone. This paper offers two main contributions to the research on small disjuncts. First, it investigates six candidate solutions (algorithms) for the problem of small disjuncts. Second, it reports the results of a meta-learning experiment, which produced meta-rules predicting which algorithm will tend to perform best for a given data set. The algorithms investigated in this paper belong to different machine learning paradigms and their hybrid combinations, as follows: two versions of a decision-tree (DT) induction algorithm; two versions of a hybrid DT/genetic algorithm (GA) method; one GA; one hybrid DT/instance-based learning (IBL) algorithm. Experiments with 22 data sets evaluated both the predictive accuracy and the simplicity of the discovered rule sets, with the following conclusions. If one wants to maximize predictive accuracy only, then the hybrid DT/IBL seems to be the best choice. On the other hand, if one wants to maximize both predictive accuracy and rule set simplicity -- which is important in the context of data mining -- then a hybrid DT/GA seems to be the best choice.
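A minimal sketch of the core notion above, assuming scikit-learn and a synthetic data set: each leaf of a decision tree is a disjunct, and a disjunct is "small" if it covers few training examples. The threshold k is an assumption for illustration, not a value from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

t = tree.tree_
is_leaf = (t.children_left == -1)          # leaves have no children
coverage = t.n_node_samples[is_leaf]       # examples covered by each leaf

# A disjunct is "small" if its leaf covers at most k examples (k assumed).
k = 5
small = (coverage <= k).sum()
print(f"{small} of {is_leaf.sum()} leaves cover <= {k} training examples")
```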

2.
Bank failures threaten the economic system as a whole. Predicting bank financial failures is therefore crucial for preventing, or at least lessening, the ensuing negative effects on the economic system. This is originally a classification problem: categorizing banks as healthy or non-healthy. This study aims to apply various neural network techniques, support vector machines and multivariate statistical methods to the bank failure prediction problem in a Turkish case, and to present a comprehensive computational comparison of the classification performances of the techniques tested. Twenty financial ratios covering the six CAMELS feature groups (capital adequacy, asset quality, management quality, earnings, liquidity and sensitivity to market risk) are selected as predictor variables. Four data sets with different characteristics are developed from official financial data to improve prediction performance, and each data set is divided into training and validation sets. In the category of neural networks, four architectures are employed: multi-layer perceptron, competitive learning, self-organizing map and learning vector quantization. The multivariate statistical methods tested are multivariate discriminant analysis, k-means cluster analysis and logistic regression analysis. Experimental results are evaluated with respect to the classification accuracy of the techniques. Results show that the multi-layer perceptron and learning vector quantization can be considered the most successful models for predicting the financial failure of banks.
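A minimal sketch of one of the winning setups reported above (a multi-layer perceptron on CAMELS-style ratios), assuming scikit-learn. The data here is synthetic and the failure rule is invented; the 20-column layout merely stands in for the official financial ratios.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))   # 20 financial ratios per bank (synthetic)
y = (X[:, 0] - X[:, 1] + rng.normal(size=400) > 0).astype(int)  # 1 = failed

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(
    StandardScaler(),                         # ratios on comparable scales
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("validation accuracy:", model.score(X_va, y_va))
```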

3.
Semi-supervised learning and ensemble learning are important methods in machine learning. Semi-supervised learning exploits unlabeled samples, while ensemble learning combines multiple weak learners to improve classification accuracy. For nominal data, this paper proposes SUCE, a semi-supervised classification method that fuses clustering and ensemble learning. Under different parameter settings, multiple clustering algorithms are used to generate a large number of weak learners. The available class-label information is then used to evaluate and select among the weak learners. The ensemble of weak learners pre-classifies the test set, and samples with high confidence are moved into the training set. Finally, base algorithms such as ID3, Naive Bayes, kNN, C4.5, OneR and Logistic classify the remaining samples using the extended training set. Experimental results on UCI data sets show that, when the number of training samples is small, the method reliably improves the accuracy of most of the base algorithms.  相似文献
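A rough sketch of the SUCE idea under stated assumptions: k-means runs with different k act as weak learners, their label-aligned votes pseudo-label high-confidence unlabeled samples, and a base classifier is trained on the extended labeled set. The voting rule, the unanimity confidence criterion, and the use of numeric toy data (the paper targets nominal data) are all simplifications.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
labeled = np.zeros(len(X), bool); labeled[:15] = True   # few known labels

votes = []
for k in (3, 4, 5):                        # weak learners: varied k
    cl = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # map each cluster to the majority class among its labeled members
    pred = np.full(len(X), -1)
    for c in range(k):
        m = (cl == c) & labeled
        if m.any():
            pred[cl == c] = np.bincount(y[m]).argmax()
    votes.append(pred)
votes = np.array(votes)

# pseudo-label samples on which all weak learners agree (high confidence)
agree = (votes == votes[0]).all(axis=0) & (votes[0] >= 0)
extended = labeled | agree
y_ext = y.copy(); y_ext[agree] = votes[0][agree]

clf = GaussianNB().fit(X[extended], y_ext[extended])
print("accuracy on originally unlabeled samples:",
      clf.score(X[~labeled], y[~labeled]))
```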

4.

Successful use of probabilistic classification requires well-calibrated probability estimates, i.e., the predicted class probabilities must correspond to the true probabilities. In addition, a probabilistic classifier must, of course, also be as accurate as possible. In this paper, Venn predictors, and their special case Venn-Abers predictors, are evaluated for probabilistic classification, using random forests as the underlying models. Venn predictors output multiple probabilities for each label, i.e., the predicted label is associated with a probability interval. Since all Venn predictors are valid in the long run, the size of the probability intervals is very important, with tighter intervals being more informative. The standard solution when calibrating a classifier is to employ an additional step, transforming the outputs from a classifier into probability estimates, using a labeled data set not employed for training of the models. For random forests, and other bagged ensembles, it is, however, possible to use the out-of-bag instances for calibration, making all training data available for both model learning and calibration. This procedure has previously been successfully applied to conformal prediction, but is here evaluated for the first time for Venn predictors. The empirical investigation, using 22 publicly available data sets, showed that all four versions of the Venn predictors were better calibrated than both the raw estimates from the random forest and the standard techniques Platt scaling and isotonic regression. Regarding both informativeness and accuracy, the standard Venn predictor calibrated on out-of-bag instances was the best setup evaluated. Most importantly, calibrating on out-of-bag instances, instead of using a separate calibration set, resulted in tighter intervals and more accurate models on every data set, for both the Venn predictors and the Venn-Abers predictors.
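A minimal Venn-Abers sketch under stated assumptions, using scikit-learn: out-of-bag random-forest scores serve as the calibration set (so no separate labeled set is held out), and the probability interval [p0, p1] for a test object comes from two isotonic fits, one with the test object provisionally labeled 0 and one with it labeled 1.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression

X, y = make_classification(n_samples=600, random_state=0)
X_tr, y_tr, x_test = X[:-1], y[:-1], X[-1:]

rf = RandomForestClassifier(oob_score=True, random_state=0).fit(X_tr, y_tr)
cal_scores = rf.oob_decision_function_[:, 1]   # OOB scores: no held-out set
s = rf.predict_proba(x_test)[0, 1]

def venn_abers(score, label):
    # isotonic fit with the test object added under a provisional label
    xs = np.append(cal_scores, score)
    ys = np.append(y_tr, label)
    iso = IsotonicRegression(out_of_bounds="clip").fit(xs, ys)
    return iso.predict([score])[0]

p0, p1 = venn_abers(s, 0), venn_abers(s, 1)
print(f"probability interval for class 1: [{p0:.3f}, {p1:.3f}]")
```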

5.
This paper presents a method for distributed multivariate regression using wavelet-based collective data mining (CDM). The method seamlessly blends machine learning and the theory of communication with the statistical methods employed in parametric multivariate regression to provide an effective data mining technique for use in a distributed data and computation environment. The technique is applied to two benchmark data sets, producing results that are consistent with those obtained by applying standard parametric regression techniques to centralized data sets. Evaluation of the method in terms of model accuracy as a function of the appropriateness of the selected wavelet function, the relative number of nonlinear cross-terms, and the sample size demonstrates that accurate parametric multivariate regression models can be generated from distributed, heterogeneous data sets with minimal data communication overhead compared to that required to aggregate a distributed data set. Application of this method to linear discriminant analysis, which is related to parametric multivariate regression, produced classification results on the Iris data set that are comparable to those obtained with centralized data analysis.
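An illustrative sketch only, assuming PyWavelets is available and not following the paper's exact CDM protocol: a remote site keeps just the significant wavelet coefficients of its signal, so regression on the (approximately) reconstructed data needs far less communication than shipping the raw column. The thresholding rule is an assumption.

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 256)
site_signal = np.sin(6 * x) + 0.05 * rng.normal(size=256)  # one site's column

coeffs = pywt.wavedec(site_signal, "db4", level=4)
flat, kept = [], 0
for c in coeffs:
    mask = np.abs(c) >= 0.1        # transmit only significant coefficients
    kept += mask.sum()
    flat.append(np.where(mask, c, 0.0))
approx = pywt.waverec(flat, "db4")[:256]   # central-site reconstruction

y = 2.0 * site_signal + 0.1 * rng.normal(size=256)
full = LinearRegression().fit(site_signal.reshape(-1, 1), y)
comp = LinearRegression().fit(approx.reshape(-1, 1), y)
print(f"kept {kept}/{sum(len(c) for c in coeffs)} coefficients; "
      f"slope {full.coef_[0]:.3f} (centralized) vs {comp.coef_[0]:.3f}")
```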

6.
One of the key problems in machine learning theory and practice is setting the correct value of the regularization parameter; this is particularly crucial in kernel machines such as Support Vector Machines, Regularized Least Squares, or neural networks with weight-decay terms. Well-known methods such as leave-one-out (or GCV) and evidence maximization offer a way of predicting the regularization parameter. This work points out the failure of these methods at predicting the regularization parameter for the apparently trivial regularized mean problem, introduced here; this is the simplest form of Tikhonov regularization, which in turn is the primal form of the learning algorithm Regularized Least Squares. This controlled environment makes it possible to define oracle notions of regularization and to experiment with new methodologies for predicting the regularization parameter that can be extended to the more general regression case. The analysis stems from James-Stein theory, shows the equivalence of shrinkage and regularization, and is carried out using multiple kernel learning for regression and SVD analysis; a mean value estimator is built, first via a rational function and second via a balanced neural network architecture suitable for estimating statistical quantities and obtaining symmetric expectations. The obtained results show that a non-linear analysis of the sample and a non-linear estimation of the mean obtained by neural networks can profitably be used to improve the accuracy of mean value estimations, especially when a small number of realizations is provided.  相似文献
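A worked toy sketch of the regularized mean problem described above, not the paper's experiments: m(lam) = argmin_m sum_i (x_i - m)^2 + lam*m^2 = n*xbar/(n + lam), a shrinkage of the sample mean toward zero. With known true mean mu and variance sigma^2, minimizing the expected squared error of the shrunk estimator gives the oracle value lam* = sigma^2/mu^2, which the simulation below compares against no regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 2.0, 10

def reg_mean(sample, lam):
    # closed form of the regularized mean problem
    return sample.sum() / (len(sample) + lam)

lam_oracle = sigma**2 / mu**2          # oracle regularization parameter
trials = rng.normal(mu, sigma, size=(20000, n))
for lam in (0.0, lam_oracle):
    est = trials.sum(axis=1) / (n + lam)
    print(f"lambda={lam:.2f}  mean squared error={np.mean((est - mu)**2):.4f}")
```

Running this shows the oracle-regularized estimator beating the plain sample mean (roughly 0.29 vs 0.40 here), which is exactly the James-Stein-style shrinkage gain the abstract refers to.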

7.
Classical watermarking techniques for protecting the intellectual property of deep learning models suffer from poor reusability and high overhead when watermarking multiple models, and are easy to detect and attack. For the black-box setting, this paper designs a Logo Network based Deep Learning Multi-model Watermarking Scheme (LNMMWS), approaching the problem through the construction of the trigger set and the design of the embedding method. First, a trigger set is generated from binary codes, and a noise set is generated by randomly cropping the original training samples; the layer structure of LogoNet is simplified, and LogoNet is trained on the mixed data set of the trigger set and the noise set, fitting the trigger set while generalizing over the noise set so as to obtain high recognition accuracy for watermark trigger patterns together with robust noise handling. Second, watermark trigger patterns are selected from LogoNet according to the classification categories of the different target models, and the dimension of the LogoNet output layer is adjusted so that it fits the output layers of the different target models, achieving multi-model watermarking. Finally, when the owner discovers a suspicious remote API service, multiple groups of specific trigger samples can be submitted; after an input-layer transformation they trigger specific outputs that verify the watermark and establish ownership. Experiments and analysis show that ownership verification with LNMMWS achieves high recognition accuracy for watermark trigger patterns, a small embedding impact, and a large number of watermark trigger patterns, with lower time overhead than existing schemes; LNMMWS remains stable under model compression and fine-tuning attacks, and is sufficiently stealthy to evade malicious-detection risk.  相似文献
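A highly simplified sketch of trigger-set watermarking in the spirit of LNMMWS, with assumed shapes and thresholds: trigger samples are built from binary codes, mixed into training, and ownership is later checked by the model's accuracy on those triggers. This is not the paper's exact LogoNet construction.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
y = (X[:, 0] > 0).astype(int)              # ordinary task data (synthetic)

# trigger set: fixed binary-code patterns with owner-chosen labels,
# scaled so they sit far outside the normal input distribution
codes = rng.integers(0, 2, size=(20, 64)).astype(float) * 4.0
code_labels = rng.integers(0, 2, size=20)

X_wm = np.vstack([X, codes])
y_wm = np.concatenate([y, code_labels])
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                      random_state=0).fit(X_wm, y_wm)

# ownership verification: query the suspect model with the trigger set
trigger_acc = model.score(codes, code_labels)
print("watermark verified" if trigger_acc > 0.9 else "not verified",
      f"(trigger accuracy {trigger_acc:.2f})")
```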

8.
This paper addresses the problem of fusing multiple sets of heterogeneous sensor data using Gaussian processes (GPs). Experiments on large scale terrain modeling in mining automation are presented. Three techniques in increasing order of model complexity are discussed. The first is based on adding data to an existing GP model. The second approach treats data from different sources as different noisy samples of a common underlying terrain, and fusion is performed using heteroscedastic GPs. The final approach, based on dependent GPs, models each data set by a separate GP and learns spatial correlations between data sets through auto and cross covariances. The paper presents a unifying view of approaches to data fusion using GPs, a statistical evaluation that compares these approaches and multiple previously untested variants of them, and an insight into the effect of model complexity on data fusion. Experiments suggest that in situations where the data being fused is not rich enough to require a complex GP data fusion model, or when computational resources are limited, the use of simpler GP data fusion techniques, which are constrained versions of the more generic models, reduces optimization complexity and consequently can enable superior learning of hyperparameters, resulting in a performance gain.
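A minimal sketch of the first two fusion strategies, assuming scikit-learn GPs: (a) simply pool both data sets into one GP with a single learned noise level; (b) give each source its own noise variance via a per-sample alpha, a basic heteroscedastic treatment. The paper's heteroscedastic and dependent GPs are richer than this; the terrain function and noise levels are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
terrain = lambda x: np.sin(3 * x).ravel()
x1 = rng.uniform(0, 3, (30, 1)); y1 = terrain(x1) + 0.05 * rng.normal(size=30)
x2 = rng.uniform(0, 3, (30, 1)); y2 = terrain(x2) + 0.30 * rng.normal(size=30)

X = np.vstack([x1, x2]); y = np.concatenate([y1, y2])
# per-source noise variances passed as a per-sample alpha array
alpha = np.concatenate([np.full(30, 0.05**2), np.full(30, 0.30**2)])

pooled = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
hetero = GaussianProcessRegressor(kernel=RBF(), alpha=alpha).fit(X, y)

xs = np.linspace(0, 3, 50).reshape(-1, 1)
for name, gp in (("pooled", pooled), ("per-source noise", hetero)):
    rmse = np.sqrt(np.mean((gp.predict(xs) - terrain(xs)) ** 2))
    print(f"{name}: RMSE {rmse:.4f}")
```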

9.
This research investigates classification of documents according to the ethnic group of their authors and/or the historical period in which the documents were written. The classification uses various combinations of six sets of stylistic features: quantitative, orthographic, topographic, lexical, function, and vocabulary richness. The application domain is Jewish Law articles written in Hebrew-Aramaic, languages that are rich in their morphological forms. Four popular machine learning methods have been applied. The logistic regression method led to the best accuracy results: about 99.6% when classifying by the ethnic group of the authors or by the historical period in which the articles were written, and about 98.3% when classifying by both simultaneously. The quantitative feature set was found to be very successful and superior to all other sets. The lexical and function feature sets have also been found to be useful. The quantitative and function features are domain independent and language independent; these two feature sets might generalize to similar classification tasks for other languages and can therefore be useful for the text classification community at large.
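A toy sketch of the two domain-independent feature sets, with assumed English stand-ins: quantitative features (token counts and lengths) and function-word frequencies feed a logistic regression, mirroring the best-performing setup reported above. The texts and the function-word list are invented examples, not the Hebrew-Aramaic corpus.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

docs = ["the law of the community is strict and it is old",
        "a ruling was given and a ruling was kept by all"]
labels = [0, 1]                       # e.g., two historical periods
function_words = ["the", "a", "and", "of", "is", "was", "by", "it"]

def features(text):
    toks = text.split()
    quant = [len(toks), np.mean([len(t) for t in toks])]   # quantitative set
    func = [toks.count(w) / len(toks) for w in function_words]  # function set
    return quant + func

X = np.array([features(d) for d in docs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```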

10.
Empirical characterization of random forest variable importance measures
Microarray studies yield data sets consisting of a large number of candidate predictors (genes) on a small number of observations (samples). When interest lies in predicting phenotypic class using gene expression data, often the goals are both to produce an accurate classifier and to uncover the predictive structure of the problem. Most machine learning methods, such as k-nearest neighbors, support vector machines, and neural networks, are useful for classification. However, these methods provide no insight regarding the covariates that best contribute to the predictive structure. Other methods, such as linear discriminant analysis, require the predictor space be substantially reduced prior to deriving the classifier. A recently developed method, random forests (RF), does not require reduction of the predictor space prior to classification. Additionally, RF yield variable importance measures for each candidate predictor. This study examined the effectiveness of RF variable importance measures in identifying the true predictor among a large number of candidate predictors. An extensive simulation study was conducted using 20 levels of correlation among the predictor variables and 7 levels of association between the true predictor and the dichotomous response. We conclude that the RF methodology is attractive for use in classification problems when the goals of the study are to produce an accurate classifier and to provide insight regarding the discriminative ability of individual predictor variables. Such goals are common among microarray studies, and therefore application of the RF methodology for the purpose of obtaining variable importance measures is demonstrated on a microarray data set.
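A small-scale sketch of the simulation design described above, with assumed sizes: many correlated candidate predictors, one truly associated with a dichotomous response, and the rank of the true predictor's RF importance as the outcome of interest. The correlation level, sample size, and effect strength are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p, rho = 60, 200, 0.3                   # few samples, many "genes"
shared = rng.normal(size=(n, 1))           # common factor inducing correlation
X = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=(n, p))
y = (X[:, 0] + 0.8 * rng.normal(size=n) > 0).astype(int)  # predictor 0 is true

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]
rank = order.tolist().index(0) + 1
print(f"true predictor ranked {rank} of {p} by RF variable importance")
```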
