Similar Documents
20 similar documents retrieved.
1.
Exploiting the strong nonlinear mapping capability of kernel learning, a family of kernel-based prediction models is proposed for short-term traffic flow forecasting. Kernel recursive least squares (KRLS) with the approximate linear dependence (ALD) criterion reduces computational complexity and storage, making it an online kernel learning method suitable for larger data sets. Kernel partial least squares (KPLS) projects the input variables onto latent variables and extracts latent features from the covariance between inputs and outputs. The kernel extreme learning machine (KELM) represents the unknown nonlinear feature mapping of the hidden layer with a kernel function and computes the network output weights by regularized least squares, achieving good generalization at very high learning speed. To verify the effectiveness of these methods, KELM, KPLS, and ALD-KRLS were applied to several measured traffic flow data sets and compared with existing methods under identical conditions. The experimental results show improvements in both prediction accuracy and training speed, demonstrating the potential of kernel learning methods for short-term traffic flow forecasting.
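The KELM step described above, a kernel (Gram) matrix plus a regularized least-squares solve for the output weights, can be written in a few lines. The following is a minimal regression sketch, not the paper's implementation; the RBF width `gamma`, regularization coefficient `C`, and the toy data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: output weights via regularized least squares."""
    def __init__(self, gamma=1.0, C=1.0):
        self.gamma, self.C = gamma, C

    def fit(self, X, y):
        self.X_train = X
        K = rbf_kernel(X, X, self.gamma)
        # beta = (K + I/C)^-1 y, the regularized least-squares output weights.
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
        return self

    def predict(self, X):
        return rbf_kernel(X, self.X_train, self.gamma) @ self.beta

# Toy usage standing in for traffic-flow samples.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))
y = np.sin(X @ np.ones(4)) + 0.1 * rng.normal(size=200)
y_hat = KELM(gamma=2.0, C=100.0).fit(X, y).predict(X)
```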

2.
任瑞琪  李军 《测控技术》2018,37(6):15-19
An optimized kernel extreme learning machine (O-KELM) method is proposed for electric load forecasting. The kernel extreme learning machine (KELM) represents the unknown nonlinear feature mapping of the hidden layer solely through a kernel function, requires no choice of the number of hidden nodes, and computes the network output weights by regularized least squares. Optimization algorithms are applied to KELM, yielding three variants based on the genetic algorithm, differential evolution, and simulated annealing, which select the kernel parameters and the regularization coefficient to further improve the learning performance of KELM. To verify the effectiveness of the method, O-KELM was applied to mid-term peak load forecasting for a regional power system and compared under identical conditions with the optimized extreme learning machine (O-ELM), SVM, and other methods. The experimental results show that O-KELM achieves very good prediction performance, with GA-KELM attaining the highest modeling accuracy.

3.
To improve the accuracy of power transformer fault diagnosis, a diagnosis method based on the kernel extreme learning machine (KELM) is proposed, in which chaotic optimization improves the global search ability of particle swarm optimization. The method first builds the fault diagnosis model with KELM and then optimizes the KELM parameters with the improved chaotic particle swarm optimization (CPSO) algorithm. Sample data are obtained from dissolved gas analysis (DGA) of transformer oil. Comparative simulation results on real cases show that the algorithm achieves higher diagnostic accuracy and improves the reliability of transformer fault diagnosis.

4.
A feature-weighted kernel learning method is proposed, mainly to address the shortcoming that current kernel methods treat all data features equally in classification tasks. In classification, the features of a sample do not all contribute equally; some features are more helpful for the task and deserve more attention. The proposed algorithm inherits the advantages of multiple kernel learning by combining different kernel functions in a weighted manner, but at lower computational cost. Experimental results show that, compared with the support vector machine and multiple kernel learning, the proposed algorithm achieves higher classification accuracy; its computational complexity is slightly higher than that of the SVM but far lower than that of multiple kernel learning.
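One common way to realize feature weighting of this kind is to scale each input dimension inside an RBF kernel and hand the resulting Gram matrix to a standard kernel classifier. The sketch below is an illustration under that assumption, not the paper's algorithm; the weight vector `w` is fixed here for demonstration, whereas in practice it would be learned or tuned.

```python
import numpy as np
from sklearn.svm import SVC

def feature_weighted_rbf(X, Y, w, gamma=1.0):
    # Scale each feature by sqrt(w) so dimension j contributes w[j] * (x_j - y_j)^2.
    Xw, Yw = X * np.sqrt(w), Y * np.sqrt(w)
    d2 = np.sum(Xw**2, 1)[:, None] + np.sum(Yw**2, 1)[None, :] - 2 * Xw @ Yw.T
    return np.exp(-gamma * d2)

# Toy usage: unevenly weighted features, then an SVM on the precomputed Gram matrix.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
w = np.array([0.5, 2.0, 1.0, 0.1, 1.5])              # illustrative feature weights
clf = SVC(kernel="precomputed").fit(feature_weighted_rbf(X, X, w), y)
pred = clf.predict(feature_weighted_rbf(X, X, w))     # rows: evaluation points, columns: training points
```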

5.
To improve the classification accuracy of the kernel extreme learning machine (KELM), a parameter optimization method (GA-KELM) combining K-fold cross-validation (K-CV) with a genetic algorithm (GA) is proposed. The average accuracy of the models trained during cross-validation serves as the GA fitness function, providing the evaluation criterion for KELM parameter optimization; the KELM with the GA-optimized parameters is then used for data classification. Simulations on UCI data sets show that the proposed method outperforms GA combined with support vector machines (GA-SVM) and GA combined with back-propagation (GA-BP) in overall performance and achieves higher classification accuracy.
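The key construct, using mean K-fold cross-validation accuracy as the fitness of a candidate (kernel parameter, regularization) pair, is sketched below with a plain random search standing in for the genetic algorithm; the search ranges and the iris data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold

def kelm_cv_accuracy(X, y, gamma, C, k=5):
    """Mean k-fold accuracy of an RBF-kernel KELM classifier -- the GA fitness value."""
    Y = np.eye(y.max() + 1)[y]                      # one-hot targets
    accs = []
    for tr, te in StratifiedKFold(k, shuffle=True, random_state=0).split(X, y):
        d2 = np.sum(X[tr]**2, 1)[:, None] + np.sum(X[tr]**2, 1)[None, :] - 2 * X[tr] @ X[tr].T
        K = np.exp(-gamma * d2)
        beta = np.linalg.solve(K + np.eye(len(tr)) / C, Y[tr])
        d2t = np.sum(X[te]**2, 1)[:, None] + np.sum(X[tr]**2, 1)[None, :] - 2 * X[te] @ X[tr].T
        pred = (np.exp(-gamma * d2t) @ beta).argmax(1)
        accs.append((pred == y[te]).mean())
    return np.mean(accs)

# Random search stands in here for the GA over (gamma, C); ranges are illustrative.
X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
candidates = [(10.0**rng.uniform(-3, 1), 10.0**rng.uniform(-2, 3)) for _ in range(30)]
best = max(candidates, key=lambda p: kelm_cv_accuracy(X, y, *p))
print("best (gamma, C):", best)
```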

6.
In this paper, we propose a new approach, Boosted Multiple-Kernel Extreme Learning Machines (BMKELMs), a multiple kernel version of the Kernel Extreme Learning Machine (KELM), and apply it to the classification of fully polarized SAR images using multiple polarimetric and spatial features. Compared with other conventional multiple kernel learning methods, BMKELMs exploit KELM with the boosting paradigm from ensemble learning (EL) to train multiple kernels. Additionally, different fusion strategies such as majority voting, weighted majority voting, MetaBoost, and ErrorPrune were used for selecting the classification result with the highest overall accuracy. To show the performance of BMKELMs against other state-of-the-art approaches, two L-band fully polarimetric airborne SAR images (Airborne Synthetic Aperture Radar (AIRSAR) data collected by NASA JPL over the Flevoland area of The Netherlands and Electromagnetics Institute Synthetic Aperture Radar (EMISAR) data collected by DLR over Foulum in Denmark) were considered. Experimental results indicate that the proposed technique achieves the highest classification accuracy values when dealing with multiple features, such as a combination of polarimetric coherency and multi-scale spatial features.
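A minimal illustration of one ingredient of this scheme, one KELM per candidate kernel fused by majority voting, is given below. The boosting-based training and the other fusion rules (weighted voting, MetaBoost, ErrorPrune) are omitted, and the kernel pool and toy feature data are assumptions standing in for the polarimetric/spatial features.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

def kelm_fit_predict(K_train, Y_train, K_test_train, C=10.0):
    # One KELM per kernel: regularized least squares on one-hot targets, argmax decision.
    beta = np.linalg.solve(K_train + np.eye(len(K_train)) / C, Y_train)
    return (K_test_train @ beta).argmax(1)

def majority_vote(predictions, n_classes):
    # Column-wise vote over the per-kernel label predictions.
    return np.array([np.bincount(col, minlength=n_classes).argmax()
                     for col in np.stack(predictions).T])

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.integers(0, 3, 200)   # toy stand-in for SAR features
Xte = rng.normal(size=(50, 8))
Y = np.eye(3)[y]
kernels = [lambda A, B: rbf_kernel(A, B, gamma=0.5),
           lambda A, B: polynomial_kernel(A, B, degree=3),
           linear_kernel]                                    # illustrative kernel pool
votes = [kelm_fit_predict(k(X, X), Y, k(Xte, X)) for k in kernels]
fused = majority_vote(votes, n_classes=3)
```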

7.
The activation functions in the hidden layer of radial basis function neural networks (RBFNN) are Gaussian functions, which respond locally around the kernel centers. In most existing research, the local spatial response of a sample is calculated inaccurately because every kernel has the same hyperspherical shape and the kernel parameters in the network are set by experience; the influence of the fine structure of the local space is not considered during feature extraction. In addition, it is difficult to obtain better feature extraction ability at low computational cost. Therefore, this paper develops a multi-scale RBF kernel learning algorithm and proposes a new multi-layer RBF neural network model. For the samples of each class, the expectation maximization (EM) algorithm is used to obtain multi-layer nested sub-distribution models with different local response ranges, which are called multi-scale kernels in the network. The prior information of each sub-distribution is used as the connection weight between the multi-scale kernels. Finally, feature extraction is implemented using multi-layer kernel subspace embedding. The multi-scale kernel learning model can efficiently and accurately describe the fine structure of the samples and is, to a certain extent, tolerant of how the number of kernels is set. Taking the prior probability of each kernel as its weight makes the feature extraction process satisfy the Bayes rule, which enhances the interpretability of feature extraction in the network. This paper also theoretically proves that the proposed neural network is a generalized version of the original RBFNN. The experimental results show that the proposed method performs better than several state-of-the-art algorithms.
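A rough sketch of the multi-scale idea: fit a Gaussian mixture per class with EM, then treat each component's mean and variance as an RBF kernel of its own, weighted by the component's prior. This is only an interpretation for illustration, not the authors' model; the spherical covariance and `n_components` are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def multiscale_rbf_features(X, X_class, n_components=3, seed=0):
    """Map samples onto multi-scale RBF kernels whose centers/widths come from EM."""
    gmm = GaussianMixture(n_components, covariance_type="spherical",
                          random_state=seed).fit(X_class)
    feats = []
    for mean, var, prior in zip(gmm.means_, gmm.covariances_, gmm.weights_):
        d2 = np.sum((X - mean) ** 2, axis=1)
        feats.append(prior * np.exp(-d2 / (2.0 * var)))   # prior acts as the kernel weight
    return np.stack(feats, axis=1)

rng = np.random.default_rng(0)
X_class = rng.normal(size=(300, 4))               # samples of one class
Phi = multiscale_rbf_features(X_class, X_class)   # per-kernel responses, one column each
```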

8.
Although both online learning and kernel learning have been studied extensively in machine learning, there is limited effort in addressing the intersecting research problems of these two important topics. As an attempt to fill the gap, we address a new research problem, termed Online Multiple Kernel Classification (OMKC), which learns a kernel-based prediction function by selecting a subset of predefined kernel functions in an online learning fashion. OMKC is in general more challenging than typical online learning because both the kernel classifiers and the subset of selected kernels are unknown, and more importantly the solutions to the kernel classifiers and their combination weights are correlated. The proposed algorithms are based on the fusion of two online learning algorithms, i.e., the Perceptron algorithm that learns a classifier for a given kernel, and the Hedge algorithm that combines classifiers by linear weights. We develop stochastic selection strategies that randomly select a subset of kernels for combination and model updating, thus improving the learning efficiency. Our empirical study with 15 data sets shows promising performance of the proposed algorithms for OMKC in both learning efficiency and prediction accuracy.
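The two building blocks named here, a kernel perceptron per predefined kernel and Hedge-style multiplicative weighting of the kernels, can be sketched as follows. This is a simplified deterministic variant (no stochastic kernel selection); the Hedge discount, the kernel pool, and the toy stream are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

class OnlineMultiKernelPerceptron:
    """Simplified OMKC sketch: one kernel perceptron per kernel, combined by Hedge weights."""
    def __init__(self, kernels, beta=0.8):
        self.kernels = kernels
        self.beta = beta                      # Hedge discount applied to a mistaken kernel
        self.w = np.ones(len(kernels))        # combination weights
        self.sv = [[] for _ in kernels]       # support vectors per kernel
        self.alpha = [[] for _ in kernels]    # their labels (perceptron coefficients)

    def _score(self, i, x):
        if not self.sv[i]:
            return 0.0
        K = self.kernels[i](np.array(self.sv[i]), x[None, :]).ravel()
        return float(np.dot(self.alpha[i], K))

    def predict(self, x):
        scores = np.array([self._score(i, x) for i in range(len(self.kernels))])
        return np.sign(np.dot(self.w / self.w.sum(), np.sign(scores)) + 1e-12)

    def update(self, x, y):
        for i in range(len(self.kernels)):
            if np.sign(self._score(i, x) + 1e-12) != y:
                self.w[i] *= self.beta        # Hedge: penalize the kernel that erred
                self.sv[i].append(x)          # perceptron: add the misclassified example
                self.alpha[i].append(y)

kernels = [lambda A, B: rbf_kernel(A, B, gamma=0.5), linear_kernel]
model = OnlineMultiKernelPerceptron(kernels)
rng = np.random.default_rng(0)
for _ in range(200):                          # toy online stream with labels in {-1, +1}
    x = rng.normal(size=3)
    y = 1.0 if x[0] + 0.5 * x[1] > 0 else -1.0
    model.update(x, y)
```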

9.
To address the low accuracy and unstable training of the extreme learning machine in landslide prediction, the Gaussian RBF kernel is introduced and the extreme gradient boosting algorithm XGBoost is used to optimize KELM, yielding the XGBoost-optimized prediction model Xgboost-KELM. First, the Gaussian RBF kernel is adopted as the kernel function of the extreme learning machine, which removes the random mapping of hidden nodes and improves the stability and applicability of the model. Second, cleaned monitoring data are used as model input, the XGBoost search procedure optimizes the kernel hyperparameters, Xgboost-KELM models are built on four test sets, and the best hyperparameters are determined from the mean-squared-error iteration curves. Finally, two 10% sample sets are used to verify the evaluation metrics and the stability of the model. In the experiments, the mean AUC is at least 3 percentage points higher than the comparison models, and Precision, Accuracy, and Recall are at least 1.7 percentage points higher, while the variance and bias of the Xgboost-KELM model are both small, indicating good stability. The results show that Xgboost-KELM has good predictive performance and is well suited to landslide hazard prediction.

10.
A novel sparse kernel density estimation method is proposed based on sparse Bayesian learning with random iterative dictionary preprocessing. Using the empirical cumulative distribution function as the response vector, the sparse weights of the density estimate are obtained by sparse Bayesian learning. The proposed iterative dictionary learning algorithm is used to reduce the number of kernel computations, which is an essential step of the sparse Bayesian learning. Based on the sparse kernel density estimate, a normalized mutual information feature selection method built on the quadratic Renyi entropy is proposed. Simulations on three examples demonstrate that the proposed method is comparable to typical Parzen kernel density estimators, and compared with other state-of-the-art sparse kernel density estimators it also performs very well in terms of the number of kernels required. For the last example, the Friedman data and Housing data are used to illustrate the proposed feature selection method.
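A rough sketch of the "empirical CDF as response" idea follows, with non-negative least squares standing in for sparse Bayesian learning and a thinned candidate dictionary standing in for the iterative dictionary step; the bandwidth and toy data are assumptions.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

def sparse_kde_1d(x, centers, bandwidth=0.3):
    """Fit kernel weights so the mixture CDF matches the empirical CDF (NNLS stand-in
    for sparse Bayesian learning); many weights typically come out exactly zero."""
    xs = np.sort(x)
    ecdf = np.arange(1, len(xs) + 1) / len(xs)                  # empirical CDF as response
    A = norm.cdf((xs[:, None] - centers[None, :]) / bandwidth)  # per-kernel CDF design matrix
    w, _ = nnls(A, ecdf)
    w /= w.sum()                                                # normalize to a density
    density = lambda t: (w * norm.pdf((t[:, None] - centers[None, :]) / bandwidth)
                         / bandwidth).sum(axis=1)
    return w, density

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(1, 1.0, 200)])
w, density = sparse_kde_1d(x, centers=x[::20])                  # thinned candidate dictionary
print("non-zero kernels:", np.count_nonzero(w), "of", len(w))
```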

11.
In recent years, several methods have been proposed to combine multiple kernels using a weighted linear sum of kernels. These different kernels may use information coming from multiple sources or may correspond to different notions of similarity on the same source. We note that such methods, in addition to the usual parameters of the canonical support vector machine formulation, introduce new regularization parameters that affect the solution quality, and in this work we propose to optimize them using response surface methodology on cross-validation data. On several bioinformatics and digit recognition benchmark data sets, we compare multiple kernel learning and our proposed regularized variant in terms of accuracy, support vector count, and the number of kernels selected. We see that our proposed variant achieves statistically similar or higher accuracy results by using fewer kernel functions and/or support vectors through suitable regularization; it also allows better knowledge extraction because unnecessary kernels are pruned and the favored kernels reflect the properties of the problem at hand.
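The basic construct these methods share, a classifier trained on a weighted linear sum of Gram matrices, looks like this in minimal form; the fixed weights and kernel parameters below are placeholders for whatever the MKL or response-surface procedure would actually learn.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = (X - X.mean(0)) / X.std(0)                       # standardize before kernel computation

kernels = [lambda A, B: rbf_kernel(A, B, gamma=0.05),
           lambda A, B: polynomial_kernel(A, B, degree=2),
           linear_kernel]
eta = np.array([0.5, 0.2, 0.3])                      # illustrative kernel weights (sum to 1)

K = sum(e * k(X, X) for e, k in zip(eta, kernels))   # weighted linear sum of Gram matrices
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)     # canonical SVM on the combined kernel
print("training accuracy:", clf.score(K, y))
```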

12.
Label distribution learning is a new learning paradigm; specialized algorithms built on the maximum entropy model handle certain label ambiguity problems well but are computationally very expensive. The kernel extreme learning machine, which is fast and more stable, is therefore introduced, and a label distribution learning algorithm based on it (KELM-LDL) is proposed. First, the RBF kernel maps features into a high-dimensional space within the extreme learning machine; then a KELM regression model is built on the original label space to obtain the output weights; finally, the model predicts the label distribution of unseen samples. Experiments on data sets of various sizes from different domains show that the results are superior to several comparison algorithms, and statistical hypothesis tests further confirm the effectiveness and stability of KELM-LDL.
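The pipeline described above reduces to kernel regression onto the label-distribution matrix followed by renormalizing the predictions into distributions. A minimal sketch, with toy Dirichlet targets and illustrative parameters rather than the paper's data and settings:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kelm_ldl_fit(X, D, gamma=0.5, C=100.0):
    """Regress label distributions D (rows sum to 1) with an RBF-kernel KELM."""
    K = rbf_kernel(X, X, gamma=gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, D)

def kelm_ldl_predict(X_new, X_train, beta, gamma=0.5):
    out = rbf_kernel(X_new, X_train, gamma=gamma) @ beta
    out = np.clip(out, 1e-12, None)           # clip, then renormalize rows into distributions
    return out / out.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 6))
D = rng.dirichlet(np.ones(4), size=150)       # toy label distributions over 4 labels
beta = kelm_ldl_fit(X, D)
pred = kelm_ldl_predict(X[:5], X, beta)       # each row is a predicted label distribution
```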

13.
Rolling bearings are essential components of rotating machinery; any bearing fault can bring down the machine or even the whole system, causing large economic losses and wasted time, so bearing faults must be diagnosed promptly and accurately. To address the strong influence of model parameters on diagnostic accuracy in the traditional extreme learning machine, a rolling bearing fault diagnosis method based on a Bayesian-optimized deep kernel extreme learning machine is proposed. First, autoencoders are combined with the kernel extreme learning machine to build a deep kernel extreme learning machine (DKELM) model. Second, Bayesian optimization (BO) searches the DKELM hyperparameters so that the sum of the classification error rates on the training and validation sets is minimized. The test set is then fed into the trained BO-DKELM for fault diagnosis. Finally, the method is validated on the Case Western Reserve University bearing fault data set, achieving a diagnostic accuracy of 99.6%; compared with traditional intelligent algorithms such as deep belief networks and convolutional neural networks, the proposed method attains higher diagnostic accuracy.

14.
Gaussian Process Classification with a Semi-Supervised Kernel
A semi-supervised algorithm for learning Gaussian process classifiers is proposed; it supplies the classifier with information from unlabeled data through a non-parametric semi-supervised kernel. The algorithm consists of three main steps: 1) a kernel matrix is obtained from the spectral decomposition of the graph Laplacian, combining information from the labeled and unlabeled data; 2) convex optimization learns the optimal weights of the kernel-matrix eigenvectors, yielding the non-parametric semi-supervised kernel; 3) the semi-supervised kernel is integrated into a Gaussian process model to form the proposed semi-supervised learning algorithm. Its main feature is applying a non-parametric semi-supervised kernel built on the whole data set within a Gaussian process model, which has an explicit probabilistic description, conveniently models the uncertainty between data points, and can handle complex inference problems. Experimental results show that the algorithm is more reliable than other methods.
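Steps 1) and 2) of the recipe above, building a kernel from the smoothest eigenvectors of the graph Laplacian over labeled plus unlabeled points, can be sketched roughly as shown below. Here fixed spectral weights stand in for the convex-optimization step, and the k-NN graph, eigenvector count, and weight decay are assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.linalg import eigh

def semisupervised_kernel(X_all, n_neighbors=10, n_eig=20):
    """Kernel matrix over labeled + unlabeled points from graph-Laplacian eigenvectors."""
    W = kneighbors_graph(X_all, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                              # symmetrize the k-NN graph
    L = np.diag(W.sum(1)) - W                           # unnormalized graph Laplacian
    vals, vecs = eigh(L)                                # eigenvalues in ascending order
    vecs, vals = vecs[:, :n_eig], vals[:n_eig]
    mu = 1.0 / (1.0 + vals)                             # fixed spectral weights (stand-in for
                                                        # the convex-optimization step)
    return (vecs * mu) @ vecs.T                         # K = sum_i mu_i v_i v_i^T

rng = np.random.default_rng(0)
X_all = rng.normal(size=(120, 5))                       # labeled and unlabeled points together
K = semisupervised_kernel(X_all)                        # plug into a GP or kernel classifier
```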

15.
马超 《计算机应用研究》2021,38(9):2726-2731
Parkinson's disease is a common chronic neurological disorder whose causes remain unclear, which leads to low accuracy in early diagnosis. An improved optimized kernel extreme learning machine method is therefore proposed for early diagnosis of the disease. The salp swarm algorithm (SSA) is improved with chaos theory and Gaussian mutation, giving an evolutionary intelligent diagnosis model, ISSA-KELM. The improved SSA performs feature selection and KELM kernel parameter optimization simultaneously, effectively solving both the parameter setting and the optimal feature selection problems; the model is scheduled with multiple threads on the OpenMP platform, which further improves computational efficiency while maximizing classification accuracy. Experimental results show that the proposed model surpasses existing methods in classification accuracy with greatly improved computational efficiency and good overall performance, indicating a promising application prospect and helping clinicians make more accurate diagnostic decisions.

16.
To improve the accuracy of corporate financial distress prediction, a parallel model, PHGSA-KELM, is proposed in which a gravitational search algorithm optimizes the kernel extreme learning machine (KELM). Considering that feature selection and parameter optimization are equally important to the KELM model, an improved hybrid gravitational search algorithm (HGSA) performs feature selection and KELM parameter optimization simultaneously; a linearly weighted multi-objective function that balances classification accuracy against the size of the feature subset improves classification performance, and multi-threaded parallelism on a multi-core platform further increases computational efficiency. Experiments on real data sets show that the model selects a small feature subset, identifies the features most closely related to financial distress, attains high classification accuracy, and greatly improves computational efficiency, making it an effective early-warning model for corporate financial distress.

17.
Kernel discriminant analysis (KDA) is one of the state-of-the-art kernel-based methods for pattern classification and dimensionality reduction. It performs linear discriminant analysis in the feature space via a kernel function. However, the performance of KDA greatly depends on the selection of the optimal kernel for the learning task of interest. In this paper, we propose a novel algorithm termed elastic multiple kernel discriminant analysis (EMKDA), which uses hybrid regularization to automatically learn kernels over a linear combination of pre-specified kernel functions. EMKDA makes use of a mixing-norm regularization function to compromise between the sparsity and non-sparsity of the kernel weights. A semi-infinite-program-based algorithm is then proposed to solve EMKDA. Extensive experiments on synthetic datasets, UCI benchmark datasets, and digit and terrain databases are conducted to show the effectiveness of the proposed methods.

18.
To handle the high dimensionality, small sample size, high redundancy, and high noise of microarray gene expression data, a classification algorithm, FICS-EKELM, based on FCBF feature selection and ensemble optimized learning is proposed. First, the fast correlation-based filter (FCBF) removes irrelevant features and noise and identifies the feature set most correlated with the class labels. Second, sampling generates multiple sample subsets, and on each training subset an improved crow search algorithm simultaneously selects the optimal feature subset and optimizes the parameters of the kernel extreme learning machine (KELM) classifier. An ensemble model built from these base classifiers then classifies the target data, and multi-threaded parallelism on a multi-core platform further improves computational efficiency. Experiments on six gene data sets show that the algorithm achieves good classification with fewer feature genes and significantly outperforms existing and similar methods, making it an effective classifier for high-dimensional data.

19.
张凯军  梁循 《自动化学报》2014,40(10):2288-2294
In the support vector machine (SVM), the choice of kernel function is crucial: different kernels lead to different classification results. How to exploit the characteristics of multiple different kernels to jointly improve SVM learning has become a research focus, and multiple kernel learning (MKL) arose in response. Recently, a simple and effective sparse MKL algorithm, generalized MKL (GMKL), was proposed; it combines the advantages of the L1 and L2 norms to impose an elastic constraint on the kernel weights. However, GMKL does not consider how to fully exploit the information shared among the selected kernels. The MultiK-MHKS algorithm, on the other hand, uses canonical correlation analysis (CCA) to capture the shared information between kernels but does not address kernel selection. The model in this paper improves on both algorithms; we call it the improved domain multiple kernel support vector machine (IDMK-SVM). We prove that the model retains the properties of GMKL and prove the convergence of the algorithm. Finally, simulation experiments show that the proposed multiple kernel learning method has an accuracy advantage over traditional multiple kernel learning methods.

20.
Kernel-based methods have been widely applied to signal analysis and processing. In this paper, we propose a sparse kernel-based algorithm for online time series prediction. In classical kernel methods the number of kernel functions is very large, which makes them computationally expensive and applicable only to off-line or batch learning. In online learning settings, the learning system is updated as each training sample arrives, which requires higher computational speed. To make kernel methods suitable for online learning, we propose a sparsification method based on the Hessian matrix of the system loss function that continuously examines the significance of each new training sample in order to select a sparse dictionary (support vector set). The Hessian matrix is equivalent to the correlation matrix of the sample inputs when the kernel weights are updated with the recursive least squares (RLS) algorithm. This allows the algorithm to be implemented easily at an affordable computational cost for real-time applications. Experimental results show the ability of the proposed algorithm to forecast and predict both real-world and artificial time series data.
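A rough sketch of the general pattern described here, an online kernel regressor that admits a new sample into the dictionary only when a novelty test deems it significant, is shown below. It uses an ALD-style approximation-error test and a direct regularized solve rather than the paper's Hessian-based criterion and recursive update; the thresholds and toy series are illustrative.

```python
import numpy as np

def rbf(a, B, gamma=1.0):
    # Kernel values between a single input a and the dictionary rows B.
    return np.exp(-gamma * np.sum((B - a) ** 2, axis=1))

class SparseOnlineKernelRegressor:
    """Online kernel regression with a sparsified dictionary (simplified sketch)."""
    def __init__(self, gamma=1.0, lam=1e-2, nu=0.1):
        self.gamma, self.lam, self.nu = gamma, lam, nu
        self.D, self.y = [], []              # dictionary inputs and their targets

    def predict(self, x):
        if not self.D:
            return 0.0
        return float(rbf(x, np.array(self.D), self.gamma) @ self.alpha)

    def update(self, x, y):
        if not self.D:
            self.D, self.y = [x], [y]
        else:
            Dm = np.array(self.D)
            k = rbf(x, Dm, self.gamma)
            Kdd = np.exp(-self.gamma * np.sum((Dm[:, None] - Dm[None, :]) ** 2, axis=2))
            a = np.linalg.solve(Kdd + self.lam * np.eye(len(Dm)), k)
            delta = 1.0 - k @ a              # ALD-style approximation error of the new input
            if delta > self.nu:              # only significant samples enter the dictionary
                self.D.append(x); self.y.append(y)
        Dm = np.array(self.D)
        Kdd = np.exp(-self.gamma * np.sum((Dm[:, None] - Dm[None, :]) ** 2, axis=2))
        self.alpha = np.linalg.solve(Kdd + self.lam * np.eye(len(Dm)), np.array(self.y))

# Toy online time-series usage: predict the next value from the previous two.
rng = np.random.default_rng(0)
series = np.sin(0.1 * np.arange(400)) + 0.05 * rng.normal(size=400)
model = SparseOnlineKernelRegressor(gamma=5.0)
for t in range(2, 400):
    x, y = series[t-2:t], series[t]
    _ = model.predict(x)                     # predict before seeing the target
    model.update(x, y)
print("dictionary size:", len(model.D))
```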
