Similar Documents
20 similar documents retrieved (search time: 156 ms)
1.
Nonlinear Kernel Forms of Classical Linear Algorithms    Total citations: 6 (self: 1, others: 6)
Nonlinear kernel forms of classical linear algorithms are a class of nonlinear machine learning techniques developed over the past decade. Their most notable feature is the use of kernel functions satisfying Mercer's condition to derive nonlinear forms of linear algorithms, formulated as optimization problems whose size depends on the number of samples rather than on the dimensionality. To improve numerical stability, control generalization ability, and improve the convergence of the iterative procedures, some of these algorithms also employ regularization techniques. After outlining the kernel idea, kernel functions, and regularization, this paper systematically surveys the nonlinear kernel forms of classical linear algorithms, analyzes their strengths and weaknesses, and discusses directions for further development.

2.
Wang Yibin, Pei Gensheng, Cheng Yusheng. 智能系统学报 (CAAI Transactions on Intelligent Systems), 2019, 14(4): 831-842
Applying regularized extreme learning machines (ELM) or kernel extreme learning machines (KELM) to multi-label classification improves algorithmic stability to some extent. However, the regularization terms added to the loss functions of these algorithms are all based on the L2 norm, so the resulting models lack sparsity. Elastic-net regularization provides both model robustness and sparse learning, but how an elastic-net ELM can solve multi-label problems has rarely been studied. This paper therefore proposes a multi-label learning algorithm that adds elastic-net regularization to the kernel extreme learning machine. First, the feature space of the multi-label data is mapped with a radial basis kernel function; then an elastic-net regularization term is imposed on the KELM loss function; finally, coordinate descent is used to iteratively solve for the output weights and obtain the final predicted labels. Comparative experiments and statistical analysis show that the proposed algorithm achieves better performance.
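A minimal sketch of the elastic-net KELM recipe described above (RBF kernel mapping, an L1+L2 penalty on the output weights, coordinate descent), on synthetic two-label data. The function names, toy data, and penalty values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample sets X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_kelm(X, Y, gamma=1.0, l1=1e-3, l2=1e-2, n_iter=200):
    """Fit output weights A minimizing
       0.5*||Y - K A||^2 + l1*||A||_1 + 0.5*l2*||A||^2
    by cyclic coordinate descent (one row of A, i.e. one kernel
    coefficient shared across labels, per inner step)."""
    K = rbf_kernel(X, X, gamma)
    n, m = K.shape[0], Y.shape[1]
    A = np.zeros((n, m))
    col_sq = (K ** 2).sum(axis=0)                  # K_j^T K_j per column
    for _ in range(n_iter):
        for j in range(n):
            R = Y - K @ A + np.outer(K[:, j], A[j])  # residual without coord j
            rho = K[:, j] @ R                        # shape (m,)
            A[j] = soft_threshold(rho, l1) / (col_sq[j] + l2)
    return A

def predict(Xnew, X, A, gamma=1.0):
    return rbf_kernel(Xnew, X, gamma) @ A

# toy multi-label data: label 0 fires on x > 0, label 1 on |x| < 1
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 1))
Y = np.column_stack([(X[:, 0] > 0).astype(float),
                     (np.abs(X[:, 0]) < 1).astype(float)])
A = elastic_net_kelm(X, Y)
pred = (predict(X, X, A) > 0.5).astype(float)
acc = (pred == Y).mean()
```

The soft-thresholding step is what the pure-L2 KELM lacks: it can drive whole rows of `A` to zero, giving the sparsity the abstract motivates.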

3.
In traditional semi-supervised dimensionality reduction, the manifold regularization term is defined in the original feature space, but this construction does not help the subsequent classification task. To address this problem, an adaptively regularized kernel two-dimensional discriminant analysis algorithm is proposed. First, each image matrix is decomposed by singular value decomposition into the product of two orthogonal matrices and a diagonal matrix, and two kernel functions map the column vectors of the two orthogonal matrices from the original nonlinear space into a high-dimensional feature space. Then an adaptive regularization term is defined in the low-dimensional feature space and integrated with the two-dimensional matrix nonlinear method into a single objective function, and discriminative features are extracted in the two kernel subspaces via alternating optimization. Experiments on two face datasets show that the algorithm yields a considerable improvement in classification accuracy.

4.
An Adaptive CRBF Nonlinear Filter and an Improved Learning Algorithm    Total citations: 1 (self: 0, others: 1)
The traditional stochastic gradient algorithm uses a squared-error cost function based on second-order statistics, which carries relatively little information and makes higher accuracy hard to achieve. To address this, the exponential squared error, based on higher-order statistics, is taken as the cost function and combined with a nonlinear adaptive filter built from a convex combination of two RBF networks, yielding a minimum exponential squared error adaptive learning algorithm. Simulation results on nonlinear system identification and nonlinear channel equalization show that the convergence of the improved algorithm is clearly superior to that of the traditional stochastic gradient algorithm.

5.
Ren Ruiqi, Li Jun. 测控技术 (Measurement & Control Technology), 2018, 37(6): 15-19
For electric load forecasting, an optimized kernel extreme learning machine (O-KELM) method is proposed. The kernel extreme learning machine (KELM) represents the unknown nonlinear feature mapping of the hidden layer solely by a kernel function, requires no choice of the number of hidden nodes, and computes the network's output weights by a regularized least squares algorithm. Optimization algorithms are applied to KELM, giving three optimized variants based on genetic algorithms, differential evolution, and simulated annealing, which select the kernel parameters and the regularization coefficient to further improve KELM's learning performance. To validate the method, O-KELM is applied to medium-term peak load forecasting for a region and compared under identical conditions with the optimized extreme learning machine (O-ELM), SVM, and other methods. Experimental results show that O-KELM has very good forecasting performance, with GA-KELM achieving the highest modeling accuracy.
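The KELM core described above, output weights via regularized least squares, with kernel parameter and regularization coefficient tuned by a search, can be sketched as follows. Plain random search stands in for the paper's GA/DE/SA optimizers, and the synthetic "load curve" and parameter ranges are assumptions:

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, gamma, C):
    # output weights by regularized least squares: alpha = (K + I/C)^{-1} y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(Xnew, X, alpha, gamma):
    return rbf_kernel(Xnew, X, gamma) @ alpha

# synthetic "load curve": periodic daily pattern plus trend and noise
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 160)[:, None]
y = np.sin(2 * np.pi * t[:, 0]) + 0.5 * t[:, 0] + 0.05 * rng.standard_normal(160)
Xtr, ytr, Xva, yva = t[::2], y[::2], t[1::2], y[1::2]

# random search over (gamma, C) as a simple stand-in for GA/DE/SA
best = (None, None, np.inf)
for _ in range(40):
    gamma = 10 ** rng.uniform(-2, 2)
    C = 10 ** rng.uniform(-1, 4)
    alpha = kelm_fit(Xtr, ytr, gamma, C)
    mse = ((kelm_predict(Xva, Xtr, alpha, gamma) - yva) ** 2).mean()
    if mse < best[2]:
        best = (gamma, C, mse)
```

A GA would replace the independent random draws with selection and crossover over a population of (gamma, C) pairs; the fitness evaluation (fit, then validation MSE) is the same.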

6.
A regularized reconstruction method for optical tomography images is presented based on the radiative transfer equation for near-infrared photons. Introducing image entropy and a local smoothing function as regularization terms overcomes the ill-posedness of the reconstruction problem. The forward model of optical tomography based on the radiative transfer equation is first described, and a regularized reconstruction method based on a smoothness criterion is then proposed. Reconstruction is the optimization of an objective function with two parts: an error term between predicted and measured values, and a regularization term. A gradient-based iterative optimization method is adopted for this objective, and a concrete gradient-tree-based algorithm for computing the gradient is proposed. Experiments show that, compared with non-regularized reconstruction, the method effectively reduces the ill-posedness of the reconstruction and improves image quality.

7.
Pan Yapu, Xie Li, Yang Huizhong. 控制与决策 (Control and Decision), 2021, 36(12): 3049-3055
Using the lifting technique, a non-uniformly sampled nonlinear system can be discretized into a multi-input single-output transfer function model, expressing the system output as a linear-in-the-parameters model of non-uniformly refreshed nonlinear inputs and output regression terms, which can then be identified via estimation of the nonlinear inputs or by over-parameterization. However, when the structure of the nonlinearity is unknown or cannot be parameterized by the measurable non-uniform inputs, these identification methods no longer apply. To solve this problem, a kernel method is used to project the original nonlinear data into a high-dimensional feature space where they become linearly separable, recursive least squares is applied to the projected data, and a kernel recursive least squares identification method for non-uniformly sampled nonlinear systems is proposed. Furthermore, for systems with colored noise, following the idea of recursive extended least squares, the unmeasurable noise is replaced by the estimated residual, giving a kernel recursive extended least squares algorithm. Simulation examples verify the effectiveness of the proposed algorithms.
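A rough sketch of the kernel RLS idea, under the simplifying assumption of a fixed kernel dictionary (the paper's construction differs, and the data, names, and parameters here are illustrative): each input is mapped to kernel features against the dictionary, and the ordinary RLS recursion runs on those features.

```python
import numpy as np

def k_rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def krls_fixed_dict(X, y, D, gamma=1.0, lam=1e-2):
    """Recursive least squares on kernel features k(x, d_i) over a fixed
    dictionary D. A simplification for illustration only."""
    m = len(D)
    w = np.zeros(m)
    P = np.eye(m) / lam                        # inverse correlation matrix
    for x, t in zip(X, y):
        phi = np.array([k_rbf(x, d, gamma) for d in D])
        g = P @ phi / (1.0 + phi @ P @ phi)    # gain vector
        w = w + g * (t - phi @ w)              # innovation (residual) update
        P = P - np.outer(g, phi @ P)           # rank-1 downdate of P
    return w

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 0] ** 2   # unknown nonlinearity
D = X[::20]                                    # 15 dictionary atoms
w = krls_fixed_dict(X, y, D)
phi_all = np.array([[k_rbf(x, d) for d in D] for x in X])
mse = ((phi_all @ w - y) ** 2).mean()
```

The extended (colored-noise) variant in the abstract would append past residual estimates to `phi` as extra regressors; the recursion itself is unchanged.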

8.
He Dubo, Sun Shengxiang. 控制与决策 (Control and Decision), 2024, 39(5): 1478-1486
Traditional multi-target regression algorithms cannot handle nonlinear relationships between the inputs and multiple outputs, and they ignore the structural information of data points between inputs and outputs, limiting generalization performance and robustness. To address these problems, a multi-target sparse regression with instances and targets correlations (MTR-ITC) algorithm is proposed. First, a latent variable space is embedded to decouple the complex correlation structures between inputs and outputs and among the outputs, and the kernel trick and sparse regression are used to learn the nonlinear input-output relationships and the correlation structure among outputs. Then, a manifold regularization term is introduced to explore the correlations of different instances across the input and output variables, ensuring consistency between model outputs and the ground truth in both local and global structure, to improve generalization. Finally, an alternating optimization algorithm is proposed to solve the objective function with fast convergence to the global optimum. Experiments on benchmark datasets show that the proposed algorithm performs well across different MTR datasets.

9.
To handle the two non-smooth regularization terms in compressed sensing (CS) based sparse magnetic resonance imaging (MRI) reconstruction, a proximal smoothing iterative algorithm (PSIA) based on the Moreau envelope is proposed. Classical CS-based sparse MRI reconstruction minimizes an objective function formed as a linear combination of a least-squares data fidelity term, a wavelet-transform sparsity regularizer, and a total variation (TV) regularizer. First, a smooth approximation is made to the wavelet regularization term; then the linear combination of the data fidelity term and the smoothed wavelet term is treated as a new continuously differentiable convex function; finally, PSIA is applied to the new optimization problem. The algorithm handles both regularization terms simultaneously and avoids the robustness issues introduced by fixed weights. Experimental results on simulated phantom images and real MR images show that, compared with four classical sparse reconstruction algorithms, namely conjugate gradient (CG) descent, TV L1-norm compressed MRI (TVCMRI), reconstruction from partial k-space (RecPF), and the fast composite splitting algorithm (FCSA), the proposed algorithm achieves better reconstruction in terms of image signal-to-noise ratio, relative error, and structural similarity index, with algorithmic complexity comparable to FCSA, the fastest existing reconstruction method.

10.
Considering that in many cases one cares more about the relative error between a forecasting model's predictions and the actual values, this paper takes the sum of squared relative errors between the actual and desired outputs as the objective function and presents a BP algorithm that minimizes the sum of squared relative errors. Since the network's actual outputs lie between 0 and 1, a normalization scheme for the ideal outputs of practical problems is given. Extensive numerical tests confirm that, when judged by the sum of squared relative errors, the fitted values and forecasts obtained with the proposed algorithm are better than those of the traditional BP algorithm based on the sum of squared absolute errors.
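The relative-error objective can be illustrated on a single sigmoid unit; this toy setup (one neuron, synthetic targets normalized away from zero so the relative error is well defined) is an assumption for illustration, not the paper's network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_relative_error(X, t, lr=0.05, epochs=10000):
    """Single sigmoid unit trained on E = mean(((o - t)/t)^2),
    the relative-error objective, instead of mean((o - t)^2)."""
    rng = np.random.default_rng(3)
    w = rng.standard_normal(X.shape[1]) * 0.1
    b = 0.0
    for _ in range(epochs):
        o = sigmoid(X @ w + b)
        # dE/do = 2 (o - t) / t^2 ;  do/dz = o (1 - o)
        delta = 2.0 * (o - t) / t ** 2 * o * (1.0 - o)
        w -= lr * X.T @ delta / len(t)
        b -= lr * delta.mean()
    return w, b

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(100, 2))
t = sigmoid(1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.2)   # targets in (0, 1)
w, b = train_relative_error(X, t)
o = sigmoid(X @ w + b)
rel_err = (((o - t) / t) ** 2).mean()
```

The only change from standard BP is the `delta` term: dividing by `t**2` weights small-target samples more heavily, which is exactly the effect the abstract argues for.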

11.
International Journal of Computer Mathematics, 2012, 89(7): 1321-1333
In this study, we investigate the consistency of semi-supervised coefficient regularization learning with indefinite kernels. In our setting, the hypothesis space and learning algorithms are based on two different groups of input data drawn i.i.d. according to an unknown probability measure ρ_X. The only conditions imposed on the kernel function are continuity and boundedness, rather than the Mercer condition, and the output data are not required to be uniformly bounded. Under a mild assumption on the unbounded output data and using a refined integral operator technique, the generalization error is decomposed into hypothesis error, sample error, and approximation error. By estimating these three parts, we deduce satisfactory learning rates with a proper choice of the regularization parameter.

12.
Kernelized nonlinear extensions of Fisher's discriminant analysis, discriminant analysis based on generalized singular value decomposition (LDA/GSVD), and discriminant analysis based on the minimum squared error formulation (MSE) have recently been widely utilized for handling undersampled high-dimensional problems and nonlinearly separable data sets. As data sets are modified by incorporating new data points and deleting obsolete ones, there is a need for efficient updating and downdating algorithms for these methods that avoid expensive recomputation of the solution from scratch. In this paper, an efficient algorithm for adaptive linear and nonlinear kernel discriminant analysis based on regularized MSE, called adaptive KDA/RMSE, is proposed. In adaptive KDA/RMSE, updating and downdating of the computationally expensive eigenvalue decomposition (EVD) or singular value decomposition (SVD) is approximated by updating and downdating of the QR decomposition, achieving an order-of-magnitude speedup. This fast algorithm for adaptive kernelized discriminant analysis is designed by utilizing regularization techniques and the relationship between linear and nonlinear discriminant analysis and the MSE. In addition, an efficient algorithm to compute leave-one-out cross-validation is introduced by utilizing downdating of KDA/RMSE.

13.
A conditional density function, which describes the relationship between response and explanatory variables, plays an important role in many analysis problems. In this paper, we propose a new kernel-based parametric method to estimate conditional density. An exponential function is employed to approximate the unknown density, and its parameters are computed from the given explanatory variable via a nonlinear mapping using kernel principal component analysis (KPCA). We develop a new kernel function, a variant of polynomial kernels, to be used in KPCA. The proposed method is compared with the Nadaraya-Watson estimator through numerical simulations and real data. Experimental results show that the proposed method outperforms the Nadaraya-Watson estimator in terms of revised mean integrated squared error (RMISE), making it an effective method for estimating conditional densities.
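For reference, the Nadaraya-Watson estimator used as the baseline above is kernel-weighted local averaging; a minimal sketch on assumed toy data (bandwidth and data are illustrative):

```python
import numpy as np

def nadaraya_watson(x_query, X, Y, h=0.3):
    """Nadaraya-Watson estimate of E[Y | X = x]: a Gaussian-kernel
    weighted average of the observed responses."""
    w = np.exp(-0.5 * ((x_query[:, None] - X[None, :]) / h) ** 2)
    return (w * Y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
X = rng.uniform(0, 2 * np.pi, 400)
Y = np.sin(X) + 0.1 * rng.standard_normal(400)
xs = np.linspace(0.5, 2 * np.pi - 0.5, 50)   # interior points (avoids edge bias)
est = nadaraya_watson(xs, X, Y)
err = np.abs(est - np.sin(xs)).max()
```

The bandwidth `h` controls the bias-variance trade-off, which is the tuning burden the paper's parametric KPCA-based method aims to sidestep.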

14.
Two obvious limitations exist for the baseline kernel minimum squared error (KMSE) method: lack of sparseness of the solution and the ill-posedness of the problem. Previous sparse methods for KMSE have overcome the second limitation using a regularization strategy, which increases the computational cost of determining the regularization parameter. Hence, in this paper, a constructive sparse algorithm for KMSE (CS-KMSE) and its improved version (ICS-KMSE) are proposed, which simultaneously address the two limitations described above. CS-KMSE chooses the training samples that incur the largest reductions in the objective function as the significant nodes, on the basis of the Householder transformation. In contrast with CS-KMSE, ICS-KMSE adds a replacement mechanism using Givens rotations, which gives ICS-KMSE better sparseness than CS-KMSE. CS-KMSE and ICS-KMSE do not require the regularization parameter at all before they begin to choose significant nodes, which saves model selection time. More importantly, CS-KMSE and ICS-KMSE terminate with an early stopping strategy that acts as an implicit regularization term, avoiding overfitting and curbing the sparsity level of the baseline KMSE solution. Finally, in comparison with other algorithms, both ICS-KMSE and CS-KMSE have superior sparseness, and extensive comparisons confirm their effectiveness and feasibility.

15.
Qian Pengjiang, Wang Shitong, Deng Zhaohong. 自动化学报 (Acta Automatica Sinica), 2011, 37(12): 1422-1434
This paper first proves a fast kernel density estimation (FKDE) theorem: the error between a Gaussian kernel density estimate (KDE) based on a sampled subset and the KDE of the original dataset depends on the subsample size and the kernel parameter, not on the total sample size. It then shows that the objective of the Gaussian-kernel graph-based relaxed clustering (GRC) algorithm can be decomposed into the form "weighted sum of Parzen windows + quadratic entropy", so that GRC can be viewed as a kernel density estimation problem. Based on this KDE approximation strategy, the paper proposes a method for scaling up GRC by KDE approximation (SUGRC-KDEA). Compared with earlier work, the advantage of this method is that it provides a simpler and easier-to-implement way to apply GRC to large-scale datasets.
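The subsampling claim behind the FKDE theorem, that a Gaussian KDE built on a modest subsample tracks the full-data KDE, can be checked numerically; the mixture data, subsample size, and bandwidth below are illustrative assumptions:

```python
import numpy as np

def gaussian_kde(x_eval, data, h):
    """Parzen-window (Gaussian KDE) estimate evaluated at x_eval."""
    z = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
# bimodal data: 10,000 points from a two-component Gaussian mixture
data = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(1, 1.0, 5000)])
sub = rng.choice(data, size=500, replace=False)   # 5% subsample

xs = np.linspace(-4, 4, 200)
full = gaussian_kde(xs, data, h=0.3)
approx = gaussian_kde(xs, sub, h=0.3)
max_gap = np.abs(full - approx).max()             # small, per the theorem
```

The gap shrinks with the subsample size and the bandwidth, but not with the full-data size, which is what makes the approximation useful for large-scale GRC.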

16.
Bo L, Wang L, Jiao L. Neural Computation, 2006, 18(4): 961-978
Kernel Fisher discriminant analysis (KFD) is a successful approach to classification. It is well known that the key challenge in KFD lies in the selection of free parameters such as kernel parameters and regularization parameters. Here we focus on the feature-scaling kernel, where each feature is individually associated with a scaling factor. A novel algorithm, named FS-KFD, is developed to tune the scaling factors and regularization parameters for the feature-scaling kernel. The proposed algorithm optimizes the smoothed leave-one-out error via gradient descent and is demonstrated to be computationally feasible. FS-KFD is motivated by two fundamental facts: the leave-one-out error of KFD can be expressed in closed form, and the step function can be approximated by a sigmoid function. Empirical comparisons on artificial and benchmark data sets suggest that FS-KFD improves KFD in terms of classification accuracy.

17.
This paper proposes a view-independent face detection method based on horizontal rectangular features, with accuracy improved by combining kernels of various sizes. Since view changes of faces induce large variation in appearance in the horizontal direction, local kernels are applied to horizontal rectangular regions to model such appearance changes. The local kernels are integrated by summation and then used as a summation kernel for a support vector machine (SVM). View independence is shown to be realized by this integration of local horizontal rectangular kernels. In general, however, local kernels (features) of various sizes capture different similarity measures, such as detailed and rough similarity, and thus produce different error patterns. If the local and global kernels are combined well, generalization ability improves. This research demonstrates that combining the global kernel and local kernels of various sizes into a summation kernel for the SVM is more effective than using only the global kernel, only a combination of local kernels, or AdaBoost with SVMs based on a single kind of local kernel.

18.
Regularized classifiers are a kind of kernel-based classification method generated from Tikhonov regularization schemes, and trigonometric polynomial kernels are among the most important kernels, playing a key role in signal processing. The main target of this paper is to provide convergence rates for classification algorithms generated by regularization schemes with trigonometric polynomial kernels. As a special case, an error analysis for the support vector machine (SVM) soft margin classifier is presented. The norms of the Fejér operator in the reproducing kernel Hilbert space and its approximation properties in the L1 space of periodic functions play key roles in the analysis of the regularization error. Some new bounds on the learning rate of regularization algorithms, based on covering-number estimates for normalized loss functions, are established. Together with the analysis of the sample error, explicit learning rates for the SVM are also derived.

19.
In most cases, evaluation of Fourier series requires that special summation methods be applied or that the coefficients of the series be suitably modified to suppress strong oscillations at discontinuities of the approximated function. All these methods may be described as a substitution of the Dirichlet kernel by other kernels. In this paper, eight of these kernels are briefly reviewed and compared with a ninth kernel based on Chebyshev polynomials. A closed-form representation has been derived for the Fourier coefficients of this kernel, as well as a recursive relation for their practical computation. Furthermore, an error criterion is given which allows the determination of an upper bound on the difference between the Fourier series and the approximated function, provided upper limits on both the variation and the second derivative of the latter are known.

20.
A Kernel-Based Nonlinear Perceptron Algorithm    Total citations: 16 (self: 1, others: 16)
To improve the classification ability of the classical Rosenblatt perceptron, this paper proposes a kernel-based nonlinear perceptron algorithm, the kernel perceptron for short. Its characteristic is that a simple iterative procedure and a kernel function are used to realize the design of a nonlinear classifier; the kernel perceptron can handle problems that are linearly inseparable in the original attribute space but linearly separable in a high-dimensional feature space. The paper also analyzes in detail the relations between this algorithm and nonlinear methods such as radial basis function neural networks, the potential function method, and support vector machines. Computational results on artificial and real data show that, compared with the linear perceptron, the kernel perceptron effectively improves classification accuracy.
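A minimal sketch in the spirit of the algorithm described above: mistake-driven updates on dual coefficients with an RBF kernel, on data that are linearly inseparable in the input space. The data, kernel parameter, and epoch count are illustrative assumptions:

```python
import numpy as np

def kernel_perceptron(X, y, gamma=2.0, epochs=20):
    """Kernelized Rosenblatt perceptron: alpha_i counts the mistakes made
    on sample i; the decision function is f(x) = sum_i alpha_i y_i k(x_i, x)."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        mistakes = 0
        for i in range(len(X)):
            if y[i] * ((alpha * y) @ K[:, i]) <= 0:   # misclassified
                alpha[i] += 1.0                        # dual update
                mistakes += 1
        if mistakes == 0:                              # converged
            break
    return alpha, K

# circle-in-square data: linearly inseparable in the original space
rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(np.sum(X ** 2, axis=1) < 0.5, 1, -1)
alpha, K = kernel_perceptron(X, y)
train_acc = (np.sign((alpha * y) @ K) == y).mean()
```

Replacing the kernel matrix `K` with the plain inner-product matrix recovers the classical linear perceptron in dual form, which makes the relationship to the original algorithm explicit.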
