Similar Literature
20 similar records found (search time: 948 ms)
1.
The objectives of this paper are (1) to propose new techniques, based on machine learning approaches, to learn and improve the multicriteria decision analysis (MCDA) method PROAFTN, and (2) to compare the performance of the developed methods with other well-known machine learning classification algorithms. The proposed learning methods consist of two stages: the first stage uses discretization techniques to obtain the required parameters for the PROAFTN method, and the second stage develops a new inductive approach to construct PROAFTN prototypes for classification. The comparative study is based on the classification accuracy achieved by the algorithms on the data sets. For a more robust analysis of the experiments, we used the Friedman statistical test with the corresponding post hoc tests. The proposed approaches significantly improved the performance of the PROAFTN classification method. Based on the results on the same data sets, PROAFTN outperforms widely used classification algorithms. Furthermore, the method is simple, requires no preprocessing, and loses no information during learning. © 2011 Wiley Periodicals, Inc.
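The abstract does not give the discretization details, so the sketch below is only a rough, hedged illustration of the first stage: per-class, per-attribute intervals derived from training-data quantiles, then used as prototype intervals for assignment. The quantile choice and scoring rule are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def interval_prototypes(X, y, low_q=0.1, high_q=0.9):
    """Build one interval [lower, upper] per class and attribute from data
    quantiles (illustrative stand-in for the discretization stage)."""
    prototypes = {}
    for label in np.unique(y):
        Xc = X[y == label]
        prototypes[label] = (np.quantile(Xc, low_q, axis=0),
                             np.quantile(Xc, high_q, axis=0))
    return prototypes

def classify(x, prototypes):
    """Assign x to the class whose prototype intervals it satisfies most often."""
    scores = {label: np.mean((x >= lo) & (x <= hi))
              for label, (lo, hi) in prototypes.items()}
    return max(scores, key=scores.get)
```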

2.

Feature subset selection (FSS) plays an essential role in data mining, particularly in high-dimensional medical data analysis, where it supports early detection by supplying the essential features needed for high accuracy. Modern feature selection models increasingly use optimization algorithms to extract features with particular properties and obtain the highest possible accuracy. Many optimization algorithms, such as the genetic algorithm, rely on algorithm-specific parameters that must be tuned for good results, and tuning these parameter values for the selection procedure is a difficult challenge. In this paper, a wrapper-based feature selection approach built on binary teaching-learning-based optimization (BTLBO) is considered. BTLBO is a sophisticated meta-heuristic that involves no algorithm-specific parameters; it requires only standard process parameters, such as the population size and the number of iterations, to extract a set of selected features from the data. Since obtaining the best possible feature set is a demanding process, a method that is independent of algorithm-controlling parameters is attractive. This paper introduces a new modified binary teaching-learning-based optimization (NMBTLBO) to select feature subsets, using the binary classification accuracy of a support vector machine (SVM) as the fitness function of the feature subset selection process. The proposed NMBTLBO algorithm contains two modifications: first, a new updating procedure, and second, a new method for selecting the primary teacher in the teacher phase of binary teaching-learning-based optimization. The proposed NMBTLBO technique was used to classify rheumatic disease datasets collected from the Baghdad Teaching Hospital Outpatient Rheumatology Clinic during 2016-2018. Compared with the original BTLBO algorithm, the improved NMBTLBO algorithm achieved a substantial gain in accuracy. Validation was carried out by testing the accuracy of four classification methods: K-nearest neighbors, decision trees, support vector machines, and K-means. The results showed that the classification accuracy of all four methods increased with the proposed feature selection method (NMBTLBO) compared to the BTLBO algorithm. The SVM classifier achieved 89% accuracy with BTLBO-SVM and 95% with NMBTLBO-SVM; decision trees reached 94% with BTLBO-SVM feature selection and 95% with NMBTLBO-SVM feature selection. The analysis indicates that the new method (NMBTLBO) enhances classification accuracy.
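As a minimal sketch of the wrapper idea, assuming scikit-learn is available: a candidate solution is a binary feature mask, and its fitness is the cross-validated SVM accuracy on the selected columns. The TLBO-specific update rules are omitted; only the fitness evaluation used by the wrapper is shown.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mask_fitness(mask, X, y, cv=5):
    """Fitness of a binary feature mask: mean cross-validated SVM accuracy
    on the selected feature subset (wrapper-style evaluation)."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                      # empty subsets are invalid
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=cv).mean()

# Usage: evaluate a random candidate mask on some dataset X, y
# rng = np.random.default_rng(0)
# print(mask_fitness(rng.integers(0, 2, X.shape[1]), X, y))
```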


3.
喻飞  赵志勇  魏波 《计算机科学》2016,43(9):269-273
The Factorization Machine (FM) is a machine learning algorithm based on matrix factorization that can be applied to regression, classification, and ranking problems. The parameters of the FM model are conventionally estimated with gradient-based optimization; however, when training samples are scarce, such methods converge slowly and easily fall into local optima. Differential Evolution (DE) is a heuristic global optimization algorithm with fast convergence. To speed up FM training, the DE-FM algorithm is proposed, which uses DE to compute the FM model parameters. Experimental results on the Diabetes and HorseColic datasets and on the Music classification dataset show that the improved differential-evolution-based factorization machine, DE-FM, improves both training speed and accuracy.
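A minimal sketch of the idea, assuming a standard second-order FM and using SciPy's differential_evolution as a stand-in for the paper's DE variant; the squared loss, factor dimension k, and parameter bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fm_predict(params, X, k):
    """Second-order factorization machine: w0 + <w, x> + pairwise factor terms."""
    n = X.shape[1]
    w0, w, V = params[0], params[1:1 + n], params[1 + n:].reshape(n, k)
    linear = X @ w
    pairwise = 0.5 * (((X @ V) ** 2) - ((X ** 2) @ (V ** 2))).sum(axis=1)
    return w0 + linear + pairwise

def squared_loss(params, X, y, k):
    return np.mean((fm_predict(params, X, k) - y) ** 2)

# Fit a tiny FM with DE instead of gradient descent (illustrative only):
# rng = np.random.default_rng(0)
# X, y, k = rng.normal(size=(50, 4)), rng.normal(size=50), 2
# dim = 1 + X.shape[1] + X.shape[1] * k
# result = differential_evolution(squared_loss, bounds=[(-1, 1)] * dim,
#                                 args=(X, y, k), maxiter=200, seed=0)
```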

4.
The Naive Bayes (NB) learning algorithm is simple and effective in many domains, including text classification. However, its performance depends on the accuracy of the estimated conditional probability terms, which are sometimes hard to estimate accurately, especially when training data is scarce. This work transforms the probability estimation problem into an optimization problem and exploits three metaheuristic approaches to solve it: Genetic Algorithms (GA), Simulated Annealing (SA), and Differential Evolution (DE). We also propose a novel DE algorithm that uses multi-parent mutation and crossover operations (MPDE) and three different methods to select the final solution. We create an initial population by manipulating the solution generated by a method used for fine-tuning NB. We evaluate the proposed methods by using their resulting solutions to build NB classifiers and compare their results with those obtained from the classical NB and the Fine-Tuning Naïve Bayesian (FTNB) algorithm, using 53 UCI benchmark data sets. We name the obtained classifiers NBGA, NBSA, NBDE, and NB-MPDE, respectively. We also evaluate the performance of NB-MPDE for text classification using 18 text-classification data sets and compare its results with those obtained from FTNB, BNB, and MNB. The experimental results show that using DE in general, and the proposed MPDE algorithm in particular, is more suitable for fine-tuning NB than all other methods, including the other two metaheuristics (GA and SA). They also indicate that NB-MPDE achieves superiority over classical NB, FTNB, NBDE, NBGA, NBSA, MNB, and BNB.
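The abstract does not spell out the multi-parent operators, so the sketch below shows one common way to build a multi-parent DE mutant from several scaled difference vectors; it is a hedged illustration of the general idea rather than the paper's exact MPDE operator.

```python
import numpy as np

def multi_parent_mutant(pop, i, F=0.5, n_pairs=3, rng=None):
    """Build a DE mutant from one base parent plus several scaled
    difference vectors (a generic multi-parent mutation sketch)."""
    rng = rng or np.random.default_rng()
    candidates = [j for j in range(len(pop)) if j != i]
    idx = rng.choice(candidates, size=1 + 2 * n_pairs, replace=False)
    base = pop[idx[0]]
    diffs = sum(pop[idx[2 * p + 1]] - pop[idx[2 * p + 2]] for p in range(n_pairs))
    return base + F * diffs
```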

5.
Dimensionality is one of the major problems affecting the quality of the learning process in most machine learning and data mining tasks. Training a classification model on a high-dimensional dataset may lead to overfitting of the learned model to the training data; overfitting reduces the generalization of the model and therefore causes poor classification accuracy on new test instances. Another disadvantage of high dimensionality is the high CPU time required to learn and test the model. Applying feature selection to the dataset before the learning process is essential to improve the performance of the classification task. In this study, a new hybrid method that combines artificial bee colony optimization with the differential evolution algorithm is proposed for feature selection in classification tasks. The developed hybrid method is evaluated using fifteen datasets from the UCI Repository that are commonly used in classification problems. For a complete evaluation, the proposed hybrid feature selection method is compared with artificial bee colony optimization and differential evolution based feature selection methods, as well as with three of the most popular feature selection techniques: information gain, chi-square, and correlation feature selection. In addition, the performance of the proposed method is compared with studies in the literature that use the same datasets. The experimental results show that the developed hybrid method is able to select good features for classification tasks, improving both the run-time performance and the accuracy of the classifier. The proposed hybrid method may also be applied to other search and optimization problems, as its performance for feature selection is better than pure artificial bee colony optimization and differential evolution.

6.
In this study, a new multi-criteria classification technique for nominal and ordinal groups is developed by extending the UTilites Additives DIScriminantes (UTADIS) method with a polynomial of degree T used as the utility function of each attribute, rather than the usual piecewise linear approximation. We call this method PUTADIS. The objective is to calculate the coefficients of the polynomial, the threshold limits of the classes, and the weights of the attributes so as to minimize the number of misclassification errors. The unknown parameters of the problem are estimated using a hybrid algorithm that combines particle swarm optimization (PSO) and a genetic algorithm (GA). The results obtained by applying the model to different datasets and comparing its performance with previous methods show the high efficiency of the proposed method.
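As a small illustration of the scoring model described above (not the paper's estimation procedure): each attribute value is passed through a degree-T polynomial partial utility, the weighted partial utilities are summed, and the global score is compared against class thresholds. The coefficient layout and thresholds below are placeholders that the PSO/GA hybrid would be tuning.

```python
import numpy as np

def putadis_score(x, weights, coeffs):
    """Global utility: weighted sum of degree-T polynomial partial utilities,
    one polynomial (row of coefficients) per attribute."""
    T = coeffs.shape[1]
    powers = np.vander(x, N=T + 1, increasing=True)[:, 1:]   # x, x^2, ..., x^T
    partial = (coeffs * powers).sum(axis=1)                   # one utility per attribute
    return float(weights @ partial)

def assign_class(score, thresholds):
    """Assign to the first ordered class whose threshold the score exceeds
    (thresholds assumed sorted in descending order)."""
    for k, t in enumerate(thresholds):
        if score >= t:
            return k
    return len(thresholds)
```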

7.
This paper proposes a new self-adaptive differential evolution (DE) algorithm for continuous optimization problems. The proposed algorithm extends the DE/current-to-best/1 mutation strategy to allow adaptation of the mutation parameters: the control parameters of the mutation operation are gradually self-adapted according to feedback from the evolutionary search. Moreover, the proposed algorithm also incorporates a new local search based on the krill herd algorithm. In this study, the proposed algorithm is evaluated and compared with the traditional DE algorithm and two other adaptive DE algorithms. The experimental results on 21 benchmark problems show that the proposed algorithm is very effective in solving complex optimization problems.
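A minimal sketch of the DE/current-to-best/1 mutation that the adaptation is built around; the per-individual scale factor stands in for the self-adapted control parameters, and the krill-herd local search is not shown.

```python
import numpy as np

def current_to_best_1(pop, fitness, i, F_i, rng):
    """DE/current-to-best/1 mutation for individual i:
    v_i = x_i + F_i * (x_best - x_i) + F_i * (x_r1 - x_r2)."""
    best = pop[np.argmin(fitness)]
    r1, r2 = rng.choice([j for j in range(len(pop)) if j != i],
                        size=2, replace=False)
    return pop[i] + F_i * (best - pop[i]) + F_i * (pop[r1] - pop[r2])
```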

8.
焦斌  徐志翔 《控制工程》2012,19(4):681-686
The support vector machine (SVM) is a relatively new statistical learning method, but as a classification algorithm it suffers from heavy computation and long running times. To address the parameter selection problem of the least squares SVM (LSSVM), the black hole concept from physics is introduced to build a black hole model, which is combined with simulated annealing to form the proposed black hole particle swarm optimization-simulated annealing algorithm (BH-PSOSA). The algorithm increases particle diversity, overcomes the tendency of PSO to become trapped in local optima during optimization, and improves both optimization performance and convergence behavior. BH-PSOSA is used to select the LSSVM parameters, and classification experiments on data from the UCI repository show improved classification speed and accuracy compared with an LSSVM tuned by cross-validation (CV). Finally, the BH-PSOSA-LSSVM algorithm is applied to fault diagnosis of wind turbine gearboxes, with good results.

9.
This paper presents a new approach for solving short-term hydrothermal scheduling (HTS) using an integrated algorithm based on teaching-learning-based optimization (TLBO) and oppositional-based learning (OBL). A practical hydrothermal system is highly complex: the nonlinear relationships among the problem variables, the cascading nature of the hydro reservoirs, water transport delays, and the coupling of scheduling periods make the optimization problem difficult for standard optimization methods. To overcome these problems, the proposed quasi-oppositional teaching-learning-based optimization (QOTLBO) is employed. To show its efficiency and robustness, the proposed QOTLBO algorithm is applied to two test systems. Numerical results of QOTLBO are compared with those obtained by a two-phase neural network, the augmented Lagrange method, particle swarm optimization (PSO), improved self-adaptive PSO (ISAPSO), improved PSO (IPSO), differential evolution (DE), modified DE (MDE), fuzzy-based evolutionary programming (Fuzzy EP), the clonal selection algorithm (CSA), and TLBO approaches. The simulation results reveal that the proposed algorithm is the best in terms of convergence speed, solution time, and minimum cost when compared with the other established methods, and is a promising alternative for solving short-term HTS problems in practical power systems.
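A sketch of the quasi-oppositional step used to seed and refresh the population (the TLBO phases themselves are omitted): for a point x in [a, b], the opposite point is a + b - x, and the quasi-opposite point is drawn uniformly between the interval centre and the opposite point.

```python
import numpy as np

def quasi_opposite(x, a, b, rng=None):
    """Quasi-opposite point of x within bounds [a, b]: uniform between the
    interval centre (a + b) / 2 and the opposite point a + b - x."""
    rng = rng or np.random.default_rng()
    centre = (a + b) / 2.0
    opposite = a + b - x
    lo, hi = np.minimum(centre, opposite), np.maximum(centre, opposite)
    return rng.uniform(lo, hi)
```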

10.
In this paper, an optimized support vector machine (SVM) based on a new bio-inspired method called the magnetic bacteria optimization algorithm is proposed to construct a high-performance classifier for a motor imagery electroencephalography-based brain-computer interface (BCI). A Butterworth band-pass filter and an artifact removal technique are combined to extract the ERD/ERS frequency-band features, and the common spatial pattern method is used to extract the feature vectors that are later fed into the classifier. The optimization mechanism concerns the kernel parameter settings in the SVM training procedure, which significantly influence classification accuracy; our approach optimizes the penalty factor C and the kernel parameter g of the SVM. The experimental results on BCI Competition IV dataset II-a clearly show that the proposed method outperforms other competing methods in the literature, such as the genetic algorithm, particle swarm optimization, artificial bee colony, and biogeography-based optimization.
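Whatever the outer search (magnetic bacteria optimization here, GA or PSO elsewhere), the inner evaluation is the same: train an SVM with a candidate (C, g) pair and score it by cross-validation. A minimal sketch of that fitness, assuming scikit-learn; the CSP feature extraction step is not shown.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def svm_param_fitness(C, g, features, labels, cv=5):
    """Fitness of a candidate (C, gamma) pair: cross-validated accuracy of an
    RBF-kernel SVM trained on the extracted feature vectors."""
    clf = SVC(C=C, gamma=g, kernel="rbf")
    return cross_val_score(clf, features, labels, cv=cv).mean()
```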

11.

Learning the parameters of a probabilistic model is a necessary step in machine learning tasks. We present a method to improve learning from small datasets by using monotonicity conditions. Monotonicity simplifies learning and is often required by users. We present an algorithm for Bayesian network parameter learning; the algorithm and the monotonicity conditions are described, and it is shown that with the monotonicity conditions we can better fit the underlying data. Our algorithm is tested on artificial and empirical datasets. We use different methods satisfying the monotonicity conditions: the proposed gradient descent, isotonic regression EM, and non-linear optimization, and we also provide results for unrestricted EM and gradient descent. Learned models are compared with respect to their ability to fit data in terms of log-likelihood and their fit to the parameters of the generating model. Our proposed method outperforms the other methods on small sets and provides better or comparable results on larger sets.
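One of the methods listed is isotonic-regression EM; as a simplified, hedged illustration of the monotonicity idea only (not the paper's algorithm), the sketch below projects noisy estimates of P(Y = 1 | X = x) for an ordered parent X onto a non-decreasing sequence with scikit-learn's isotonic regression. The numbers are invented toy values.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Noisy conditional-probability estimates for ordered parent states 0..4;
# monotonicity requires them to be non-decreasing in the parent state.
states = np.arange(5)
p_hat = np.array([0.12, 0.30, 0.25, 0.55, 0.50])

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
p_monotone = iso.fit_transform(states, p_hat)
print(p_monotone)   # [0.12, 0.275, 0.275, 0.525, 0.525]
```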

12.
Matrix-based methods such as generalized low rank approximations of matrices (GLRAM) have gained wide attention from researchers in the pattern recognition and machine learning communities. In this paper, the novel concept of bilinear Lanczos components (BLC) is introduced to approximate the projection vectors obtained from eigen-based methods without explicitly computing the eigenvectors of the matrix. The new method sequentially reduces the reconstruction error of a Frobenius-norm based optimization criterion, so the approximation performance improves over successive iterations. In addition, a theoretical clue for selecting suitable dimensionality parameters without losing classification information is presented. The BLC approach realizes dimensionality reduction and feature extraction using a small number of Lanczos components. Extensive experiments on face recognition and image classification are conducted to evaluate the efficiency and effectiveness of the proposed algorithm. The results show that the new approach is competitive with state-of-the-art methods while having a much lower training cost.

13.
Differential evolution (DE) is a simple yet powerful evolutionary algorithm (EA) for global numerical optimization, but its performance is significantly influenced by its parameters. Parameter adaptation has proven to be an efficient way to enhance the performance of the DE algorithm. Analyzing the behavior of the crossover in DE, we find that the trial vector is directly related to its binary crossover string, but not directly to the crossover rate. Based on this insight, we propose a crossover rate repair technique for adaptive DE algorithms that are based on successful parameters. The crossover rate in DE is repaired from its corresponding binary string, i.e. the original crossover rate is replaced by the average of the binary string, which equals the fraction of components taken from the mutant. To verify the effectiveness of the proposed technique, it is combined with JADE, a highly competitive adaptive DE variant. Experiments have been conducted on the 25 functions of the CEC-2005 competition. The results indicate that the proposed crossover rate repair technique enhances the performance of JADE. In addition, compared with other DE variants and state-of-the-art EAs, the improved JADE obtains better, or at least comparable, results in terms of the quality of the final solutions and the convergence rate.
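The repair rule itself is simple enough to sketch: after binomial crossover, the crossover rate recorded for parameter adaptation is replaced by the mean of the binary crossover mask, i.e. the fraction of components actually taken from the mutant. A minimal sketch, not the authors' exact implementation:

```python
import numpy as np

def binomial_crossover_with_repair(target, mutant, CR, rng):
    """Standard DE binomial crossover, plus the crossover-rate repair:
    the repaired CR is the mean of the binary mask actually used."""
    D = target.size
    mask = rng.random(D) < CR
    mask[rng.integers(D)] = True          # guarantee at least one mutant component
    trial = np.where(mask, mutant, target)
    repaired_CR = mask.mean()             # fraction of components taken from the mutant
    return trial, repaired_CR
```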

14.
In this paper, we present a new methodology for learning the parameters of the multiple criteria classification method PROAFTN from data. There are numerous representations and techniques available for data mining, for example decision trees, rule bases, artificial neural networks, density estimation, regression, and clustering. The PROAFTN method constitutes another approach to data mining: it belongs to the class of supervised learning algorithms and assigns a membership degree of each alternative to the classes. The PROAFTN method requires the elicitation of its parameters for classification, so an automatic method is needed to establish these parameters from the given data with minimum classification error. Here, we propose a variable neighborhood search metaheuristic for obtaining these parameters. The performance of the newly proposed method was evaluated using 10-fold cross-validation, and the results are compared with those obtained by other classification methods previously reported on the same data. It appears that solutions of substantially better quality are obtained with the proposed method than with the former ones.
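The abstract names variable neighborhood search but not its details; the sketch below is a generic VNS skeleton (shake in the k-th neighborhood, local search, move or enlarge k), with the neighborhood, shaking, and local-search routines left as user-supplied callables rather than the paper's PROAFTN-specific operators.

```python
def variable_neighborhood_search(x0, cost, shake, local_search, k_max=3, iters=100):
    """Generic VNS skeleton: the problem-specific pieces (cost(x), shake(x, k),
    local_search(x)) are supplied by the caller."""
    best, best_cost = x0, cost(x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            c = cost(candidate)
            if c < best_cost:                 # move and restart from the first neighborhood
                best, best_cost, k = candidate, c, 1
            else:                             # otherwise try a larger neighborhood
                k += 1
    return best, best_cost
```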

15.
Radar target detection and recognition currently relies mainly on hand-crafted algorithms to extract target features; the difficulty is that such methods adapt poorly to the environment and struggle to detect targets against strong clutter. To address this problem, and drawing on the powerful representation learning ability that deep learning has shown in image recognition and related fields, a radar target recognition method based on a stacked bidirectional long short-term memory (LSTM) network is proposed. The network builds its dataset from radar echo data in the Doppler dimension, uses bidirectional LSTM layers to extract forward and backward information from the echo time series, and trains the network parameters iteratively with the RMSProp optimizer, achieving effective recognition of low-altitude, slow, small targets such as UAVs. Experimental results show that the stacked bidirectional LSTM method outperforms the traditional SVM classifier and a convolutional neural network classifier.
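A minimal PyTorch sketch of the model family described above, assuming the Doppler-dimension echoes are arranged as (batch, time, features) sequences; layer sizes, sequence length, and class count are placeholders, and training uses RMSProp as in the abstract.

```python
import torch
import torch.nn as nn

class StackedBiLSTM(nn.Module):
    """Two stacked bidirectional LSTM layers followed by a linear classifier."""
    def __init__(self, n_features=1, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                     # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])         # classify from the last time step

model = StackedBiLSTM()
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of echo sequences (illustrative shapes):
x = torch.randn(8, 128, 1)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```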

16.
To address the difficulty of parameter optimization in the traditional kernel extreme learning machine (KELM) and to improve classification accuracy, a KELM algorithm based on improved Bayesian optimization is proposed. A salp swarm algorithm is used to design the lower-confidence-bound strategy of the acquisition function in the Bayesian optimization framework, improving the algorithm's local search and optimization capability. This improved Bayesian optimization is then used to tune the KELM parameters, and the optimal parameters are used to construct the KELM classifier. Simulation experiments are conducted on real-world UCI datasets; the experiments…
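The abstract is cut off before the results, but its core ingredient, a lower-confidence-bound (LCB) acquisition over a surrogate model, can be sketched. The version below uses a plain Gaussian-process surrogate from scikit-learn with a fixed exploration weight; the salp-swarm-designed strategy from the paper and the KELM objective are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def lcb(candidates, gp, kappa=2.0):
    """Lower confidence bound acquisition: mu(x) - kappa * sigma(x).
    Smaller is better when minimizing the surrogate objective."""
    mu, sigma = gp.predict(candidates, return_std=True)
    return mu - kappa * sigma

# Minimal loop: fit the surrogate on evaluated (params, loss) pairs and pick
# the candidate with the lowest LCB as the next parameter setting to try.
# gp = GaussianProcessRegressor().fit(evaluated_params, evaluated_losses)
# next_params = candidates[np.argmin(lcb(candidates, gp))]
```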

17.
This paper proposes a methodology for automatically extracting T–S fuzzy models from data using particle swarm optimization (PSO). In the proposed method, the structures and parameters of the fuzzy models are encoded into a particle and evolve together so that the optimal structure and parameters can be achieved simultaneously. An improved version of the original PSO algorithm, cooperative random learning particle swarm optimization (CRPSO), is put forward to enhance the performance of PSO. CRPSO employs several sub-swarms to search the space, and useful information is exchanged among them during the iteration process. Simulation results indicate that CRPSO outperforms the standard PSO algorithm, the genetic algorithm (GA), and differential evolution (DE) on function optimization and benchmark modeling problems. Moreover, the proposed CRPSO-based method can extract accurate T–S fuzzy models with an appropriate number of rules.

18.
In this work, a new classification method called Soft Competitive Learning Fuzzy Adaptive Resonance Theory (SFART) is proposed to diagnose bearing faults. To resolve the misclassification caused by traditional Fuzzy ART based on hard competitive learning, a soft competitive learning ART model is established using Yu's norm similarity criterion and lateral inhibition theory: Yu's similarity criterion is employed to measure proximity, while lateral inhibition is used to select the winning neurons. To further improve classification accuracy, a feature selection technique based on Yu's norms is also proposed, and Particle Swarm Optimization (PSO) is introduced to optimize the model parameters of SFART. The validity of the feature selection technique and the parameter optimization method is demonstrated. Finally, the feasibility of the proposed SFART algorithm is validated by comparing its diagnostic effectiveness with the classic Fuzzy c-means (FCM), Fuzzy ART, and fuzzy ARTMAP (FAM) methods.

19.
The search capabilities of the Differential Evolution (DE) algorithm – a global optimization technique – make it suitable for finding both the architecture and the best internal parameters of a neural network, which are usually determined in the training phase. In this paper, two variants of the DE algorithm (classical DE and a self-adaptive mechanism) were used to obtain the best neural networks in two distinct cases: prediction and classification problems. Oxygen mass transfer in stirred bioreactors is modeled with neural networks developed with the DE algorithm, on the grounds that oxygen is one of the decisive factors for the growth of cultivated microorganisms and can play an important role in the scale-up and economics of aerobic biosynthesis systems. The oxygen mass transfer coefficient is related to the viscosity, superficial air speed, specific power, and oxygen-vector volumetric fraction (being predicted as a function of these parameters) using stacked neural networks. In addition, simple neural networks are designed with DE in order to classify the values of the oxygen mass transfer coefficient into different classes. Satisfactory results are obtained in both cases, proving that neural network based modeling is an appropriate technique and that the DE algorithm is able to find near-optimal neural network topologies.

20.
The k nearest neighbor (kNN) is a lazy learning algorithm that is inefficient in the classification phase because it must compare the query sample with all training samples. A recently proposed template reduction method uses only samples near the decision boundary for classification and removes those far from the boundary; however, when class distributions overlap, more border samples are retained, which still leads to inefficient classification. Because the number of samples that can be removed is limited, applying an appropriate feature reduction method is a logical way to further improve classification time. This paper proposes a new prototype reduction method for the k nearest neighbor algorithm based on template reduction and ViSOM. A key property of ViSOM is that it displays the topology of the data on a two-dimensional feature map, providing an intuitive way for users to observe and analyze data. An efficient classification framework is then presented that combines the feature reduction method and the prototype selection algorithm; it needs only a very small data size for classification while maintaining the recognition rate. In the experiments, both synthetic and real datasets are used to evaluate the performance. Experimental results demonstrate that the proposed method achieves above a 70% speedup ratio and a 90% compression ratio while maintaining performance similar to kNN.
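A hedged sketch of the template-reduction idea only (not the ViSOM projection or the full framework): a training sample is kept as a "border" prototype when at least one of its k nearest neighbors belongs to another class, and dropped when its neighborhood is purely same-class, i.e. far from the decision boundary. The neighborhood size k is an assumed tuning parameter.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def border_samples(X, y, k=5):
    """Keep samples whose k-neighborhood contains another class (near the
    decision boundary); drop samples surrounded only by their own class."""
    y = np.asarray(y)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)          # +1: query point itself
    _, idx = nn.kneighbors(X)
    keep = np.array([(y[row[1:]] != y[i]).any() for i, row in enumerate(idx)])
    return X[keep], y[keep]
```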
