Similar Documents (20 results)
1.
The purpose of this study is to use the truncated Newton method in prior correction logistic regression (LR). A regularization term is added to prior correction LR to improve its performance, which results in the truncated-regularized prior correction algorithm. The performance of this algorithm is compared with that of weighted LR and the regular LR methods for large imbalanced binary class data sets. The results, based on the KDD99 intrusion detection data set and 6 other data sets, show that the prior correction and the weighted LRs have the same computational efficiency when the truncated Newton method is used in both of them. A higher discriminative performance, however, resulted from weighting, which exceeded both the prior correction and the regular LR on nearly all the data sets. From this study, we conclude that weighting outperforms both the regular and prior correction LR models on most data sets, and it is the method of choice when LR is used to evaluate imbalanced and rare event data.
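As an illustration of the weighting idea this abstract compares against, here is a minimal numpy sketch of class-weighted logistic regression fitted by plain gradient descent (the study itself uses a truncated Newton solver; the toy data, class weights, and step sizes below are illustrative assumptions):

```python
import numpy as np

def weighted_logistic_fit(X, y, class_weight, lr=0.1, iters=1000, lam=1e-3):
    """L2-regularized logistic regression where each example's gradient
    contribution is scaled by the weight of its class."""
    w = np.zeros(X.shape[1])
    sw = np.where(y == 1, class_weight[1], class_weight[0])  # per-sample weights
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))                 # predicted probabilities
        grad = X.T @ (sw * (p - y)) / len(y) + lam * w   # weighted gradient + L2 term
        w -= lr * grad
    return w

# Toy imbalanced set: 90 negatives around (-1,-1), 10 positives around (1,1)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (90, 2)), rng.normal(1, 1, (10, 2))])
X = np.hstack([X, np.ones((100, 1))])                    # bias column
y = np.array([0] * 90 + [1] * 10)
w = weighted_logistic_fit(X, y, class_weight={0: 1.0, 1: 9.0})  # rebalance 9:1
preds = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

Weighting the minority class by the inverse class ratio makes the two classes contribute comparably to the loss, which is what drives the discriminative gains reported above.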

2.
Incremental learning has been used extensively for data stream classification, but most attention in data stream classification has been paid to non-evolutionary methods. In this paper, we introduce new incremental learning algorithms based on harmony search. We first propose a new algorithm for the classification of batch data, called the harmony-based classifier, and then give its incremental version for the classification of data streams, called the incremental harmony-based classifier. Finally, we improve it to reduce its computational overhead in the absence of drifts and to increase its robustness in the presence of noise; this improved version is called the improved incremental harmony-based classifier. The proposed methods are evaluated on several real-world and synthetic data sets. Experimental results show that the proposed batch classifier outperforms some batch classifiers, and that the proposed incremental methods can effectively address the issues usually encountered in data stream environments. The improved incremental harmony-based classifier captures concept drifts with significantly better speed and accuracy than the non-incremental harmony-based method, and its accuracy is comparable to non-evolutionary algorithms. The experimental results also show the robustness of the improved incremental harmony-based classifier.
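For readers unfamiliar with harmony search, the sketch below shows its core improvisation loop on a toy continuous objective; it is not the authors' rule-encoding classifier, and all parameter values (memory size, HMCR, PAR, bandwidth) are illustrative assumptions:

```python
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000, seed=0):
    """Minimize `objective` with basic harmony search: keep a memory of
    `hms` candidate solutions and repeatedly improvise a new one."""
    rnd = random.Random(seed)
    lo, hi = bounds
    hm = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rnd.random() < hmcr:                 # memory consideration
                v = hm[rnd.randrange(hms)][d]
                if rnd.random() < par:              # pitch adjustment
                    v += rnd.uniform(-bw, bw)
            else:                                   # random selection
                v = rnd.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                       # replace worst harmony
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return hm[best], scores[best]

best, score = harmony_search(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

Each new harmony is improvised coordinate-by-coordinate from memory, pitch-adjusted, or drawn at random; a harmony-based classifier would replace the sphere objective here with a rule-quality fitness.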

3.
Recent research shows that rule-based models perform well when classifying large data sets such as data streams with concept drifts. A genetic algorithm is a strong rule-based classification algorithm, but it has been used only for mining static, small data sets. If the genetic algorithm can be made scalable and adaptable by reducing its I/O intensity, it will become an efficient and effective tool for mining large data sets like data streams. In this paper a scalable and adaptable online genetic algorithm is proposed to mine classification rules for data streams with concept drifts. Since data streams are generated continuously at a rapid rate, the proposed method does not use a fixed static data set for fitness calculation. Instead, it extracts a small snapshot of training examples from the current part of the data stream whenever data is required for the fitness calculation. The proposed method also builds rules for all the classes separately, in a parallel, independent, iterative manner. This makes the proposed method scalable to data streams and adaptable to the concept drifts that occur in the stream in a fast and more natural way, without storing the whole stream or a part of the stream in compressed form as the other rule-based algorithms do. The results of the proposed method are comparable with the other standard methods used for mining data streams.

4.
Information Fusion, 2008, 9(1): 56-68
In the real world concepts are often not stable but change with time. A typical example of this in the biomedical context is antibiotic resistance, where pathogen sensitivity may change over time as new pathogen strains develop resistance to antibiotics that were previously effective. This problem, known as concept drift, complicates the task of learning a model from data and requires special approaches, different from commonly used techniques that treat arriving instances as equally important contributors to the final concept. The underlying data distribution may change as well, making previously built models useless. This is known as virtual concept drift. Both types of concept drifts make regular updates of the model necessary. Among the most popular and effective approaches to handle concept drift is ensemble learning, where a set of models built over different time periods is maintained and the best model is selected or the predictions of models are combined, usually according to their expertise level regarding the current concept. In this paper we propose the use of an ensemble integration technique that would help to better handle concept drift at an instance level. In dynamic integration of classifiers, each base classifier is given a weight proportional to its local accuracy with regard to the instance tested, and the best base classifier is selected, or the classifiers are integrated using weighted voting. Our experiments with synthetic data sets simulating abrupt and gradual concept drifts and with a real-world antibiotic resistance data set demonstrate that dynamic integration of classifiers built over small time intervals or fixed-sized data blocks can be significantly better than majority voting and weighted voting, which are currently the most commonly used integration techniques for handling concept drift with ensembles.
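A minimal sketch of the dynamic selection step described here: each base classifier is scored by its accuracy on the validation instances nearest the test point, and the locally best one is selected (the toy classifiers, the choice of Euclidean distance, and k are assumptions for illustration):

```python
import numpy as np

def dynamic_select(classifiers, X_val, y_val, x, k=5):
    """Pick the base classifier with the highest accuracy on the k
    validation instances nearest to x (dynamic classifier selection)."""
    d = np.linalg.norm(X_val - x, axis=1)
    nn = np.argsort(d)[:k]                       # local neighborhood of x
    local_acc = [np.mean([clf(xi) == yi for xi, yi in zip(X_val[nn], y_val[nn])])
                 for clf in classifiers]
    return classifiers[int(np.argmax(local_acc))]

# Two toy "classifiers": one always predicts 0, the other always 1
clf_a = lambda x: 0
clf_b = lambda x: 1
X_val = np.array([[-1.0], [-0.5], [0.5], [1.0]])
y_val = np.array([0, 0, 1, 1])
best = dynamic_select([clf_a, clf_b], X_val, y_val, np.array([0.8]), k=2)
```

Weighted voting by local accuracy, the alternative integration mentioned in the abstract, would combine the classifiers' votes scaled by `local_acc` instead of picking the argmax.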

5.
Forecasting the direction of the daily changes of stock indices is an important yet difficult task for market participants. Advances in data mining and machine learning make it possible to develop more accurate predictions to assist investment decision making. This paper attempts to develop a learning architecture, LR2GBDT, for forecasting and trading stock indices, mainly by cascading the logistic regression (LR) model onto the gradient boosted decision trees (GBDT) model. Without any assumption on the underlying data generating process, raw price data and twelve technical indicators are employed to extract the information contained in the stock indices. The proposed architecture is evaluated by comparing the experimental results with the LR, GBDT, SVM (support vector machine), NN (neural network) and TPOT (tree-based pipeline optimization tool) models on three stock indices from two different kinds of stock market: an emerging market (Shanghai Stock Exchange Composite Index) and a mature market (Nasdaq Composite Index and S&P 500 Composite Stock Price Index). Given the same test conditions, the cascaded model not only outperforms the other models, but also shows statistically and economically significant improvements when exploiting simple trading strategies, even when transaction cost is taken into account.

6.
Some methods from statistical machine learning and from robust statistics have two drawbacks. Firstly, they are computationally intensive, so that they can hardly be used for massive data sets, say with millions of data points. Secondly, robust and non-parametric confidence intervals for the predictions of the fitted models are often unknown. A simple but general method is proposed to overcome these problems in the context of huge data sets. An implementation of the method is scalable to the memory of the computer and can be distributed over several processors to reduce the computation time. The method offers distribution-free confidence intervals for the median of the predictions. The main focus is on general support vector machines (SVM) based on minimizing regularized risks. As an example, a combination of two methods from modern statistical machine learning, i.e. kernel logistic regression and ε-support vector regression, is used to model a data set from several insurance companies. The approach can also be helpful for fitting robust estimators in parametric models for huge data sets.
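The distribution-free interval for the median mentioned here can be built from order statistics: the number of predictions falling below the true median follows Binomial(n, 1/2) regardless of the prediction distribution. A pure-Python sketch (index conventions vary slightly by textbook; this is one common construction, shown here as an assumption rather than the paper's exact procedure):

```python
from math import comb

def median_ci(values, conf=0.95):
    """Distribution-free confidence interval for the median, built from
    order statistics via the Binomial(n, 1/2) distribution."""
    xs = sorted(values)
    n = len(xs)
    # cumulative Binomial(n, 0.5) probabilities P(X <= k)
    cum, total = [], 0.0
    for k in range(n + 1):
        total += comb(n, k) * 0.5 ** n
        cum.append(total)
    alpha = (1 - conf) / 2
    lo = next(k for k in range(n + 1) if cum[k] >= alpha)          # lower index
    hi = next(k for k in range(n + 1) if cum[k] >= 1 - alpha) + 1  # upper index
    return xs[lo], xs[min(hi, n) - 1]

lo, hi = median_ci(range(1, 101))   # predictions 1..100, true median 50.5
```

Because the interval endpoints are order statistics of the predictions themselves, no distributional assumption about the predictions is needed, which is the property the abstract emphasizes.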

7.
Acute coronary syndrome (ACS) is a leading cause of mortality and morbidity in the Arabian Gulf. In this study, the in-hospital mortality amongst patients admitted with ACS to Arabian Gulf hospitals is predicted using a comprehensive modelling framework that combines powerful machine-learning methods such as support-vector machine (SVM), Naïve Bayes (NB), artificial neural networks (NN), and decision trees (DT). The performance of the machine-learning methods is compared with that of a commonly used statistical method, namely, logistic regression (LR). The study follows the current practice of computing mortality risk using risk scores such as the Global Registry of Acute Coronary Events (GRACE) score, which has not been validated for Arabian Gulf patients. Cardiac registry data of 7,000 patients from 65 hospitals located in Arabian Gulf countries are used for the study. This study is unique in that it uses a contemporary data analytics framework. A k-fold (k = 10) cross-validation is utilized to generate training and validation samples from the GRACE dataset. Machine-learning-based predictive models are often biased by imbalanced training data patterns. To mitigate the data imbalance due to scarce observations of in-hospital mortalities, we have utilized specialized methods, namely random undersampling (RUS) and the synthetic minority oversampling technique (SMOTE). A detailed simulation experiment is carried out to build models with each of the five predictive methods (LR, NN, NB, SVM, and DT) for each of the three sets of k-fold subsamples generated. The predictive models are developed under three schemes of the k-fold samples: no imbalance correction, RUS, and SMOTE.
We have implemented an information fusion method rooted in computing weighted impact scores obtained for individual medical history attributes from each of the simulated predictive models, yielding a collective recommendation based on an impact score specific to each predictor. Finally, we grouped the predictors, using the fuzzy c-means clustering method, into three categories: high-, medium-, and low-risk factors for in-hospital mortality due to ACS. Our study revealed that patients whose medical history includes peripheral artery disease, congestive heart failure, cardiovascular transient ischemic attack, valvular disease, and coronary artery bypass grafting, amongst others, have the most risk for in-hospital mortality.
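A minimal numpy sketch of the RUS step mentioned above, dropping majority-class rows at random until both classes are the same size (toy data; SMOTE, which instead synthesizes new minority points by interpolation, is not shown):

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Random undersampling (RUS): keep all minority-class rows and a
    random same-sized subset of each other class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    rng.shuffle(keep)
    return X[keep], y[keep]

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)   # 8 majority vs 2 minority outcomes
Xb, yb = random_undersample(X, y)
```

RUS trades information loss (discarded majority rows) for a balanced training set; the abstract's three-scheme comparison quantifies whether that trade pays off against SMOTE and no correction.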

8.
Random forests is currently one of the most used machine learning algorithms in the non-streaming (batch) setting. This preference is attributable to its high learning performance and low demands with respect to input preparation and hyper-parameter tuning. However, in the challenging context of evolving data streams, there is no random forests algorithm that can be considered state-of-the-art in comparison to bagging and boosting based algorithms. In this work, we present the adaptive random forest (ARF) algorithm for classification of evolving data streams. In contrast to previous attempts of replicating random forests for data stream learning, ARF includes an effective resampling method and adaptive operators that can cope with different types of concept drifts without complex optimizations for different data sets. We present experiments with a parallel implementation of ARF which has no degradation in terms of classification performance in comparison to a serial implementation, since trees and adaptive operators are independent from one another. Finally, we compare ARF with state-of-the-art algorithms in a traditional test-then-train evaluation and a novel delayed labelling evaluation, and show that ARF is accurate and uses a feasible amount of resources.
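ARF's resampling replaces the bootstrap, which would require storing the stream, with Poisson-weighted online bagging: each arriving example is learned k ~ Poisson(λ) times by each tree. A sketch of that weight draw (treat λ = 6, the leveraging-bagging convention, as an assumption here):

```python
import numpy as np

rng = np.random.default_rng(42)

def stream_resample_weights(n_trees, lam=6.0):
    """One arriving stream example: draw, for each tree in the ensemble,
    how many times that tree should learn the example. This approximates
    bootstrap resampling without ever storing the stream."""
    return rng.poisson(lam, size=n_trees)

w = stream_resample_weights(10)                               # one example
many = np.array([stream_resample_weights(10) for _ in range(2000)])
```

Because each tree sees an independently reweighted view of the same stream, the trees stay decorrelated, which is what makes the forest (and its per-tree drift detectors) effective without complex per-dataset tuning.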

9.
Timely identification of defective software in the early stages of development helps project management teams optimize the allocation of development and testing resources, so that software likely to contain defects can receive rigorous quality assurance activities. This is important for high-quality software delivery, and software defect prediction has therefore become a research hotspot in software engineering. Although defect prediction models have been built with many machine learning algorithms, Bayesian treatments of these models have not yet been studied. This paper proposes Bayesian logistic regression with non-informative and informative priors to build defect prediction models, and investigates the advantages of Bayesian logistic regression and the role prior information plays in it. Finally, a comparative study against existing defect prediction methods (LR, NB, RF, SVM) on the PROMISE data sets shows that Bayesian logistic regression achieves good predictive performance.

10.
There is increasing interest in modeling groundwater contamination, particularly by geogenic contaminants, on a large scale, from both the researcher's and the policy maker's point of view. However, modeling large-scale groundwater contamination is very challenging due to the incomplete understanding of the geochemical and hydrological processes in the aquifer. Despite this incomplete understanding, existing knowledge provides sufficient hints to develop predictive models of geogenic contamination. In this study we used a global database of fluoride measurements (>60,000 entries), as well as global-scale information on soil, geology, elevation, climate, and hydrology, to evaluate several hybrid methods. The hybrid methods were developed by combining two classification techniques, classification and regression trees (CART) and "knowledge-based clustering" (KBC), with three predictive techniques: multiple linear regression (MLR), the adaptive neuro-fuzzy inference system (ANFIS), and logistic regression (LR). The results indicated that combinations of a classification technique with a nonlinear predictive method (ANFIS or LR) were more reliable than the others and provided better prediction capability. Among the different hybrid procedures, the combinations KBC-ANFIS and CART-ANFIS resulted in larger true positive rates and smaller false negative rates for both the training and test data sets. However, as the CART classifier is very unstable and very sensitive to resampling, the combination of KBC and ANFIS is preferred: it is not only more robust but also flexible enough to account for geohydrological conditions.

11.
Classification is an important data analysis tool that uses a model built from historical data to predict class labels for new observations. More and more applications are featuring data streams, rather than finite stored data sets, which are a challenge for traditional classification algorithms. Concept drifts and skewed distributions, two common properties of data stream applications, make the task of learning in streams difficult. The authors aim to develop a new approach to classify skewed data streams that uses an ensemble of models to match the distribution over under-samples of negatives and repeated samples of positives.

12.
In this paper, a new approach for centralised and distributed learning from spatial heterogeneous databases is proposed. The centralised algorithm consists of a spatial clustering followed by local regression aimed at learning relationships between driving attributes and the target variable inside each region identified through clustering. For distributed learning, similar regions in multiple databases are first discovered by applying a spatial clustering algorithm independently on all sites, and then identifying corresponding clusters on participating sites. Local regression models are built on identified clusters and transferred among the sites for combining the models responsible for identified regions. Extensive experiments on spatial data sets with missing and irrelevant attributes, and with different levels of noise, resulted in a higher prediction accuracy of both centralised and distributed methods, as compared to using global models. In addition, experiments performed indicate that both methods are computationally more efficient than the global approach, due to the smaller data sets used for learning. Furthermore, the accuracy of the distributed method was comparable to the centralised approach, thus providing a viable alternative to moving all data to a central location.

13.
A clear understanding of risk factors is very important to develop appropriate prevention and control strategies for infection caused by such pathogens as Salmonella (S.) Typhimurium. The objective of this study is to utilise intelligent models to identify significant risk factors for S. Typhimurium DT104 and non-DT104 illness in Canada, and compare findings to those obtained using traditional statistical methods. Previous studies have focused on analysing each risk factor separately using single variable analysis (SVA), or modelling multiple risk factors using statistical models, such as logistic regression (LR) models. In this paper, neural networks and statistical models are developed and compared to determine which method produces superior results. In general, simulation results show that the neural network yields more accurate prediction than the statistical models. The network size, number of training iterations, learning rate, and training sample size in the neural networks are discussed to improve the performance of systems.

14.
Ribonucleic acid (RNA) hybridization is widely used in popular RNA simulation software in bioinformatics. However, limited by the exponential computational complexity of combinatorial problems, it is challenging to decide, within an acceptable time, whether a specific RNA hybridization is effective. We hereby introduce a machine learning based technique to address this problem. The machine learning (ML) models tested in the training phase include algorithms based on boosted trees (BT), random forests (RF), decision trees (DT) and logistic regression (LR), and the corresponding models are obtained. Given the RNA molecular coding training and testing sets, the trained machine learning models are applied to predict the classification of RNA hybridization results. The experiment results show that the optimal predictive accuracies are 96.2%, 96.6%, 96.0% and 69.8% for the RF-, BT-, DT- and LR-based approaches, respectively, under the strong constraint condition, compared with traditional representative methods. Furthermore, the average computation efficiencies of the RF-, BT-, DT- and LR-based approaches are 208,679, 269,756, 184,333 and 187,458 times higher, respectively, than that of the existing approach. Given an RNA design, the BT-based approach demonstrates high computational efficiency and better predictive accuracy in determining the biological effectiveness of molecular hybridization.

15.
The demand for the development of good quality software has seen rapid growth in the last few years. This is leading to an increase in the use of machine learning methods for analyzing and assessing public domain data sets. These methods can be used to develop models for estimating software quality attributes such as fault proneness, maintenance effort, and testing effort. Software fault prediction in the early phases of software development can help and guide software practitioners to focus the available testing resources on the weaker areas during software development. This paper analyses and compares a statistical method and six machine learning methods for fault prediction. These methods (Decision Tree, Artificial Neural Network, Cascade Correlation Network, Support Vector Machine, Group Method of Data Handling, and Gene Expression Programming) are empirically validated to find the relationship between static code metrics and the fault proneness of a module. In order to assess and compare the models predicted using regression and the machine learning methods, we used two publicly available data sets, AR1 and AR6. We compared the predictive capability of the models using the Area Under the Curve (measured from Receiver Operating Characteristic (ROC) analysis). The study confirms the predictive capability of the machine learning methods for software fault prediction. The results show that the Area Under the Curve of the model predicted using the Decision Tree method is 0.8 and 0.9 (for the AR1 and AR6 data sets, respectively), a better model than those predicted using logistic regression and the other machine learning methods.
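The Area Under the Curve used for comparison here has a convenient rank interpretation: the probability that a randomly chosen faulty module receives a higher predicted score than a randomly chosen fault-free one. A small self-contained sketch of that computation (the toy scores are illustrative):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (faulty, fault-free) pairs ranked correctly, ties
    counting half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

a = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])   # perfectly ranked -> 1.0
```

The O(|pos|·|neg|) double loop is fine for small validation sets like AR1 and AR6; rank-based formulas scale better for large ones.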

16.
Data analysis often involves finding models that can explain patterns in data, and reduce possibly large data sets to more compact model-based representations. In Statistics, many methods are available to compute model information. Among others, regression models are widely used to explain data. However, regression analysis typically searches for the best model based on the global distribution of data. On the other hand, a data set may be partitioned into subsets, each requiring individual models. While automatic data subsetting methods exist, these often require parameters or domain knowledge to work with. We propose a system for visual-interactive regression analysis for scatter plot data, supporting both global and local regression modeling. We introduce a novel regression lens concept, allowing a user to interactively select a portion of data, on which regression analysis is run in interactive time. The lens gives encompassing visual feedback on the quality of candidate models as it is interactively navigated across the input data. While our regression lens can be used for fully interactive modeling, we also provide user guidance suggesting appropriate models and data subsets, by means of regression quality scores. We show, by means of use cases, that our regression lens is an effective tool for user-driven regression modeling and supports model understanding.
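Computationally, the lens idea reduces to fitting a model on only the brushed subset and reporting a quality score for guidance. A numpy sketch with a 1-D interval standing in for the interactive lens (the piecewise toy data and the use of R² as the quality score are assumptions for illustration):

```python
import numpy as np

def lens_fit(x, y, x_lo, x_hi, degree=1):
    """Fit a polynomial regression to only the points inside the lens
    interval [x_lo, x_hi]; also return an R^2 quality score."""
    mask = (x >= x_lo) & (x <= x_hi)
    coeffs = np.polyfit(x[mask], y[mask], degree)
    pred = np.polyval(coeffs, x[mask])
    ss_res = np.sum((y[mask] - pred) ** 2)
    ss_tot = np.sum((y[mask] - y[mask].mean()) ** 2)
    return coeffs, 1 - ss_res / ss_tot

# Piecewise data: slope 2 on the left, slope -1 on the right
x = np.linspace(0, 10, 101)
y = np.where(x < 5, 2 * x, 10 - (x - 5))
left, r2_left = lens_fit(x, y, 0, 4.9)     # lens over the left regime
right, r2_right = lens_fit(x, y, 5.1, 10)  # lens over the right regime
```

A single global fit would smear the two regimes together; the two local fits each recover their regime's slope exactly, which is the behaviour the lens makes visible interactively.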

17.
In supervised learning, one often encounters problems where the dimensionality of the examples far exceeds the number of samples. In this case the examples contain many features that are irrelevant to the class labels, so algorithms that achieve sparse feature selection while maintaining good classification performance are advantageous. This paper proposes a classification and feature selection algorithm based on a weighted-kernel logistic nonlinear regression model. The diagonal elements of the weight matrix take values between 0 and 1 and are determined as learning parameters by the optimization process; a fast alternating optimization algorithm is discussed. The proposed algorithm was tested on ten real data sets, and the experimental results show that it compares favourably with L1-, L2-, and Lp-regularized logistic regression classifiers.

18.
We propose a logistic regression method based on the hybridization of a linear model and product-unit neural network models for binary classification. In the first step we use an evolutionary algorithm to determine the basic structure of the product-unit model, and afterwards we apply logistic regression in the new space of the derived features. This hybrid model has been applied to seven benchmark data sets and a new microbiological problem. The hybrid model outperforms both its linear part and its nonlinear part, obtaining a good compromise between them, and performs well compared with several other learning classification techniques. We obtain a binary classifier with very promising results in terms of classification accuracy and classifier complexity.
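A product unit computes a multiplicative basis function z_j = Π_i x_i^(w_ji); the hybrid model then runs ordinary logistic regression over the original inputs plus such derived features. A minimal sketch with hand-picked coefficients (all numbers are illustrative assumptions, and the evolutionary structure search is omitted):

```python
import math

def product_unit(x, w):
    """z = prod_i x_i ** w_i, a multiplicative basis function."""
    out = 1.0
    for xi, wi in zip(x, w):
        out *= xi ** wi
    return out

def hybrid_logistic(x, beta0, beta_lin, pu_exponents, beta_pu):
    """Logistic model over the original inputs (linear part) plus
    product-unit derived features (nonlinear part)."""
    s = beta0
    s += sum(b * xi for b, xi in zip(beta_lin, x))            # linear part
    s += sum(b * product_unit(x, w)                           # nonlinear part
             for b, w in zip(beta_pu, pu_exponents))
    return 1.0 / (1.0 + math.exp(-s))

p = hybrid_logistic([2.0, 3.0], beta0=-1.0, beta_lin=[0.2, 0.1],
                    pu_exponents=[[1.5, -0.5]], beta_pu=[0.4])
```

Because the exponents may be fractional or negative, product units capture interactions and ratios a purely linear logit cannot, while the outer model stays an ordinary logistic regression.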

19.
In many applications of information systems learning algorithms have to act in dynamic environments where data are collected in the form of transient data streams. Compared to static data mining, processing streams imposes new computational requirements for algorithms to incrementally process incoming examples while using limited memory and time. Furthermore, due to the non-stationary characteristics of streaming data, prediction models are often also required to adapt to concept drifts. Out of several new proposed stream algorithms, ensembles play an important role, in particular for non-stationary environments. This paper surveys research on ensembles for data stream classification as well as regression tasks. Besides presenting a comprehensive spectrum of ensemble approaches for data streams, we also discuss advanced learning concepts such as imbalanced data streams, novelty detection, active and semi-supervised learning, complex data representations and structured outputs. The paper concludes with a discussion of open research problems and lines of future research.

20.
Image super-resolution (SR) reconstruction refers to techniques that use signal processing, machine learning, and related methods to reconstruct a high-resolution (HR) image from one or more low-resolution (LR) images. Because the sub-pixel displacements between multiple LR images are unpredictable, single image super-resolution (SISR) has gradually become the main direction of SR research. In recent years, deep learning has developed rapidly and been widely applied to image processing. This paper therefore systematically surveys the deep learning algorithms and network models used for SISR. It introduces the problem setting and evaluation metrics of image super-resolution; discusses and compares deep learning algorithms for SISR, mainly in terms of network architecture design, loss functions, and upsampling methods; introduces the commonly used benchmark data sets and presents an experimental comparison of several representative algorithms based on different network models; and concludes with an outlook on future research trends and directions in image super-resolution.
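Among the evaluation metrics such a survey covers, peak signal-to-noise ratio (PSNR) is the most common for SISR. A minimal sketch on flat pixel lists (real use operates on full image arrays, and SSIM, the other standard metric, is not shown):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between a reference HR image and a
    reconstruction: 10 * log10(MAX^2 / MSE), in decibels."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / mse)

p = psnr([100, 120, 140], [101, 119, 141])   # MSE = 1
```

Higher is better; typical SISR results on standard benchmarks fall in the 25-40 dB range, which is why papers report PSNR gains of fractions of a decibel.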
