1.
Structure and parameter optimization methods for an adaptive RBF-LBF cascaded neural network
This paper studies the classification mechanisms of the single-layer feedforward radial basis function (RBF) network and the single-layer feedforward linear basis function (LBF) network. It argues that the centers and widths of the RBF kernels should be determined automatically through learning, with new kernel functions generated during training according to the classes into which misclassified samples fall. If two or more kernels belong to the same class, lie close together in the input space, and are not separated by samples of other classes, they should be merged or allowed to have partially overlapping response regions. The paper shows theoretically that a single-layer perceptron with a sigmoid activation function has a classification threshold of 0.5, and on that basis proposes a cascaded RBF-LBF neural network composed of a single-layer RBF network followed by a single-layer perceptron. An optimization algorithm is given in detail for determining the structure of this cascaded RBF-LBF network and the number, positions, and widths of its kernels. In general, the computational complexity of the algorithm is lower than, or at most comparable to, that of error back-propagation in a single-hidden-layer feedforward perceptron. Results on several classic pattern-classification benchmarks show that, compared with ordinary RBF networks and single-hidden-layer perceptrons, the cascaded RBF-LBF network and its adaptive learning algorithm converge faster, classify more accurately, find a minimal structure more easily, and are less prone to local minima during learning, which favors real-time analysis. The experiments also confirm the importance of the single-layer LBF network for improving the classification accuracy of the RBF-LBF network.
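The cascade described above can be sketched as follows. This is a minimal illustration, not the authors' optimization algorithm: the centers, widths, and perceptron weights are assumed to be given (in the paper they are learned adaptively), and the 0.5 sigmoid threshold is the one derived in the abstract.

```python
import numpy as np

def rbf_layer(X, centers, widths):
    """Gaussian RBF activations: one kernel per (center, width) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, centers, widths, W, b):
    """Cascade: the RBF layer feeds a single-layer sigmoid perceptron;
    the class decision uses the 0.5 threshold discussed in the abstract."""
    H = rbf_layer(X, centers, widths)
    return (sigmoid(H @ W + b) >= 0.5).astype(int)
```

With hand-placed kernels on two well-separated 1-D clusters, the perceptron stage cleanly separates the classes at the 0.5 threshold.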
2.
This paper addresses the problem of optimal feature extraction from a wavelet representation. Our work aims at building features by selecting wavelet coefficients resulting from signal or image decomposition on an adapted wavelet basis. For this purpose, we jointly learn, in a kernelized large-margin context, the wavelet shape as well as the appropriate scale and translation of the wavelets, hence the name “wavelet kernel learning”. This problem is posed as a multiple kernel learning problem in which the number of kernels can be very large. For solving such a problem, we introduce a novel multiple kernel learning algorithm based on active-constraint methods, and we further propose some variants of this algorithm that produce approximate solutions more efficiently. Empirical analysis shows that our active-constraint MKL algorithm achieves state-of-the-art efficiency. When used for wavelet kernel learning, our experimental results show that the proposed approaches are competitive with the state of the art on brain–computer interface and Brodatz texture datasets.
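A loose sketch of the kernel-dictionary idea, under strong simplifying assumptions: a single-level Haar transform stands in for the learned wavelet basis, one Gaussian base kernel is built per wavelet coefficient, and the active-constraint solver for the weights is omitted in favour of a fixed convex combination.

```python
import numpy as np

def haar_coeffs(x):
    """One level of the Haar transform: approximation and detail coefficients."""
    even, odd = x[0::2], x[1::2]
    return np.concatenate([(even + odd), (even - odd)]) / np.sqrt(2.0)

def gaussian_kernel(F, G, gamma):
    d2 = ((F[:, None, :] - G[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def combined_kernel(X, weights, gamma=0.5):
    """One base kernel per wavelet coefficient; convex combination as in MKL."""
    C = np.array([haar_coeffs(x) for x in X])      # (n_signals, n_coeffs)
    Ks = [gaussian_kernel(C[:, [j]], C[:, [j]], gamma) for j in range(C.shape[1])]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wj * Kj for wj, Kj in zip(w, Ks))
```

Because each base kernel has a unit diagonal and the weights are normalised, the combined kernel keeps a unit diagonal as well.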
3.
Multiple kernel learning (MKL) is widely used in visual semantic concept detection, but traditional MKL mostly adopts a linear, stationary kernel combination that cannot accurately model complex data distributions. This paper applies the exact Euclidean locality-sensitive hashing (E2LSH) algorithm to clustering and, combining it with the advantages of nonlinear multiple-kernel combination, proposes a nonlinear, non-stationary kernel combination method called E2LSH-MKL. The method uses the Hadamard product to weight different kernel functions nonlinearly, fully exploiting the information obtained from interactions between kernels. At the same time, a clustering algorithm based on the E2LSH hashing principle first hashes the original image dataset into several image subsets, and then assigns each kernel a different weight according to its relative contribution on each subset, achieving non-stationary kernel weighting that improves learner performance. Finally, E2LSH-MKL is applied to visual semantic concept detection. Experimental results on the Caltech-256 and TRECVID 2005 datasets show that the new method outperforms several existing multiple kernel learning methods.
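The two ingredients can be illustrated separately. The sketch below assumes a p-stable (E2LSH-style) hash for bucketing nearby points, and shows Hadamard-product weighting as an elementwise product of kernels raised to their weights; the paper's per-subset weight estimation is not reproduced.

```python
import numpy as np

def e2lsh_bucket(X, a, b, w):
    """p-stable LSH: h(x) = floor((a.x + b) / w) maps nearby points
    to the same integer bucket with high probability."""
    return np.floor((X @ a + b) / w).astype(int)

def hadamard_mkl(kernels, weights):
    """Non-linear combination: elementwise (Hadamard) product of kernels
    raised to their weights, instead of the usual linear sum."""
    K = np.ones_like(kernels[0])
    for Kj, wj in zip(kernels, weights):
        K *= Kj ** wj
    return K
```

With unit weights the combination reduces to the plain Hadamard product of the base kernels.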
4.
5.
In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on the kernel extreme learning machine (KELM) for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the activation function of the KELM instead of a single fixed kernel, and the kernel parameters and kernel weights are optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM generally outperforms six other state-of-the-art approaches, as SABC can effectively determine solution-updating strategies and suitable parameters to produce a flexible kernel function. Simulations demonstrate that the proposed algorithm not only self-adaptively determines suitable parameters and solution-updating strategies by learning from previous experience, but also achieves better generalisation performance than several related methods, with good stability.
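The KELM part has a closed-form solution that is easy to sketch; the bee-colony optimisation of `gammas`, `weights`, and the regularisation constant `C` is omitted here, so those values are simply assumed to be given:

```python
import numpy as np

def multi_gaussian_kernel(X, Z, gammas, weights):
    """Weighted combination of Gaussian kernels; in the paper the gammas
    and weights would be tuned by SABC rather than fixed."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return sum(w * np.exp(-g * d2) for g, w in zip(gammas, weights))

def kelm_fit(X, T, gammas, weights, C=100.0):
    """Closed-form KELM solution: beta = (I/C + Omega)^-1 T."""
    Omega = multi_gaussian_kernel(X, X, gammas, weights)
    return np.linalg.solve(np.eye(len(X)) / C + Omega, T)

def kelm_predict(Xnew, X, beta, gammas, weights):
    return multi_gaussian_kernel(Xnew, X, gammas, weights) @ beta
```

With a large `C`, predictions at the training points nearly interpolate the targets, as expected from the closed form.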
6.
IEEE Transactions on Multimedia, 2008, 10(6): 969-981
7.
8.
RRL is a relational reinforcement learning system based on Q-learning in relational state-action spaces. It aims to enable agents to learn how to act in an environment that has no natural representation as a tuple of constants. For relational reinforcement learning, the learning algorithm used to approximate the mapping between state-action pairs and their so-called Q(uality)-values has to be very reliable, and it has to be able to handle the relational representation of state-action pairs. In this paper we investigate the use of Gaussian processes to approximate the Q-values of state-action pairs. In order to employ Gaussian processes in a relational setting, we propose graph kernels as a covariance function between state-action pairs. The standard prediction mechanism for Gaussian processes requires a matrix inversion, which can become unstable when the kernel matrix has low rank. These instabilities can be avoided by employing QR-factorization, which leads to better and more stable performance of the algorithm and to a more efficient incremental update mechanism. Experiments conducted in the blocks world and with the Tetris game show that Gaussian processes with graph kernels can compete with, and often improve on, regression trees and instance-based regression as a generalization algorithm for RRL.
Editors: David Page and Akihiro Yamamoto
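The numerically relevant step, replacing the explicit inversion of the (regularised) kernel matrix by a QR factorisation, can be sketched generically; a plain kernel matrix stands in here for the graph kernel between state-action pairs:

```python
import numpy as np

def gp_predict_qr(K, k_star, y, noise=1e-2):
    """GP posterior mean via QR factorisation of (K + noise*I),
    avoiding an explicit, possibly unstable matrix inversion."""
    A = K + noise * np.eye(len(K))
    Q, R = np.linalg.qr(A)
    alpha = np.linalg.solve(R, Q.T @ y)   # triangular back-substitution
    return k_star @ alpha
```

On a well-conditioned kernel matrix the QR route agrees with a direct solve; its advantage shows when the kernel matrix is (near) rank-deficient.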
9.
10.
Online chaotic time series prediction using unbiased composite kernel machine via Cholesky factorization
Hongqiao Wang, Fuchun Sun, Yanning Cai, Zongtao Zhao. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2010, 14(9): 931-944
The kernel method has proved to be an effective machine learning tool in many fields. Support vector machines with different kernel functions may perform differently, since kernels fall into two types, local kernels and global kernels; a composite kernel, which brings more stable results and good precision in classification and regression, is therefore a natural choice. To reduce the computational complexity of online kernel-machine modeling, an unbiased least squares support vector regression (LSSVR) model with a composite kernel is proposed. The bias term of the LSSVR is eliminated by modifying the form of the structural risk in this model, which greatly simplifies the calculation of the regression coefficients. At the same time, introducing the composite kernel into the LSSVM lets the model easily adapt to the irregular variation of chaotic time series. To meet real-time requirements, an online learning algorithm based on Cholesky factorization is designed according to the structure of the extended kernel matrix. Experimental results indicate that the unbiased composite-kernel LSSVR is effective and well suited for online prediction of time series with both steep and smooth variations, as it tracks the dynamics of the series with good prediction precision, generalization, and stability. The algorithm also saves considerable computation time compared with methods that use matrix inversion, although it is somewhat slower than using a single kernel.
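Two pieces of the method lend themselves to a short sketch: a composite kernel mixing a local (RBF) and a global (polynomial) part, and an incremental Cholesky update that grows the factor when one sample arrives online instead of refactorising from scratch. The mixing parameter `rho`, the polynomial degree, and the regulariser `lam` are illustrative choices, not the paper's settings:

```python
import numpy as np

def composite_kernel(x, z, rho=0.5, gamma=1.0, d=2):
    """Convex combination of a local (RBF) and a global (polynomial) kernel."""
    rbf = np.exp(-gamma * np.sum((x - z) ** 2))
    poly = (x @ z + 1.0) ** d
    return rho * rbf + (1 - rho) * poly

def chol_append(L, k_new, k_nn, lam=1e-3):
    """Grow the Cholesky factor of (K + lam*I) when one sample arrives:
    forward-solve for the new row, then compute the new diagonal entry."""
    l = np.linalg.solve(L, k_new)
    d = np.sqrt(k_nn + lam - l @ l)
    n = L.shape[0]
    L2 = np.zeros((n + 1, n + 1))
    L2[:n, :n] = L
    L2[n, :n] = l
    L2[n, n] = d
    return L2
```

The appended factor reproduces the full Cholesky factorisation of the extended kernel matrix, so no refactorisation is needed at each online step.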
11.
Multi-scale kernel methods are a current focus in kernel-based machine learning. Multi-scale kernel learning typically suffers from drawbacks such as naive averaging of the kernels, long iterative training times, and empirically chosen combination coefficients. Based on a kernel-target alignment criterion, this paper proposes an adaptive sequential learning algorithm for multi-scale kernels that computes the kernel weights automatically and quickly. Experiments show that the method achieves better regression accuracy and classification rates than single-kernel support vector machines, with more stable function fitting and classification, demonstrating the general applicability of the algorithm.
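A minimal sketch of weighting one Gaussian kernel per scale by kernel-target alignment; the paper's sequential learning procedure is reduced here to a single normalised alignment score per scale:

```python
import numpy as np

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F)."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def scale_weights(X, y, gammas):
    """One Gaussian kernel per scale; weights proportional to alignment,
    an automatic alternative to hand-picked combination coefficients."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    a = np.array([max(alignment(np.exp(-g * d2), y), 0.0) for g in gammas])
    return a / a.sum()
```

The result is a normalised, non-negative weight vector that favours the scales whose kernels best match the label structure.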
12.
Orthogonal least squares learning algorithm for radial basis function networks
The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
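The greedy selection step can be sketched as follows, assuming the candidate response matrix P (one column per candidate RBF center, evaluated on the training inputs) has already been computed; the error-reduction ratio follows the explained-variance criterion described in the abstract:

```python
import numpy as np

def ols_select_centers(P, y, n_select):
    """Greedy orthogonal least squares: at each step, orthogonalise every
    remaining candidate column against the chosen ones (Gram-Schmidt) and
    pick the column with the largest error-reduction ratio."""
    n, m = P.shape
    selected, Q = [], []
    for _ in range(n_select):
        best, best_err, best_w = None, -1.0, None
        for j in range(m):
            if j in selected:
                continue
            w = P[:, j].astype(float).copy()
            for q in Q:
                w -= (q @ P[:, j]) / (q @ q) * q
            if w @ w < 1e-12:        # numerically dependent column; skip
                continue
            err = (w @ y) ** 2 / ((w @ w) * (y @ y))
            if err > best_err:
                best, best_err, best_w = j, err, w
        if best is None:
            break
        selected.append(best)
        Q.append(best_w)
    return selected
```

A candidate column that equals the target exactly has an error-reduction ratio of 1 and is selected first.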
13.
This paper proposes a hybrid neural network model using possible combinations of different transfer projection functions (sigmoidal unit, SU; product unit, PU) and kernel functions (radial basis function, RBF) in the hidden layer of a feed-forward neural network. An evolutionary algorithm is adapted to this model and applied to learn the architecture, weights and node typology. Three combined basis-function models are proposed, covering all pairs that can be formed from SU, PU and RBF nodes: product-sigmoidal unit (PSU) neural networks, product-radial basis function (PRBF) neural networks, and sigmoidal-radial basis function (SRBF) neural networks; these are compared to the corresponding pure models: the product unit neural network (PUNN), the multilayer perceptron (MLP) and the RBF neural network. The proposals are tested on ten well-known benchmark classification problems from machine learning. Combined functions using projection and kernel functions are found to be better than pure basis functions for classification on several datasets.
14.
15.
Image-based visual servoing controls a robot by acquiring image information through the robot's vision system and forming a closed-loop feedback on that information. In classical visual servoing the servo gain is in most cases set by hand, which leads to poor robustness and slow convergence. To address this, a Dyna-Q-based intelligent visual servo control method for rotorcraft UAVs is proposed that tunes the servo gain to improve adaptability. First, target feature points are extracted with a Freeman chain-code-based image feature extraction algorithm; then image-based visual servoing forms a closed loop on the feature error; next, a decoupled visual servo control model is proposed for the strongly coupled, underactuated dynamics of the rotorcraft; finally, a reinforcement learning model that tunes the servo gain with Dyna-Q learning is built, so that after training the UAV selects the servo gain autonomously. Dyna-Q extends classical Q-learning with an environment model that stores experience; virtual samples generated by the model serve as additional learning samples for value-function iteration. Experimental results show that, compared with conventional PID control and the classical image-based visual servoing method, the proposed method converges faster and is more stable.
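Tabular Dyna-Q itself (the learning component used above to tune the servo gain) is easy to sketch in isolation; the environment here is a hypothetical `step` callback, not the UAV servo loop:

```python
import numpy as np

def dyna_q(step, n_states, n_actions, episodes=300, planning=10,
           alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Dyna-Q: each real transition updates Q and a deterministic
    model; the model then replays `planning` simulated transitions."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    model = {}                                  # (s, a) -> (r, s', done)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            r, s2, done = step(s, a)
            Q[s, a] += alpha * (r + gamma * (0 if done else Q[s2].max()) - Q[s, a])
            model[(s, a)] = (r, s2, done)
            for _ in range(planning):           # planning from stored experience
                (ps, pa), (pr, ps2, pdone) = list(model.items())[rng.integers(len(model))]
                Q[ps, pa] += alpha * (pr + gamma * (0 if pdone else Q[ps2].max()) - Q[ps, pa])
            s = s2
    return Q
```

On a trivial one-state task where action 1 yields reward 1 and action 0 yields 0, the learned Q-values separate the actions after a few hundred episodes.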
16.
In this paper, a higher-order-statistics (HOS)-based radial basis function (RBF) network for signal enhancement is introduced. In the proposed scheme, higher-order cumulants of the reference signal are used as the input of the HOS-based RBF. An HOS-based supervised learning algorithm, with the mean square error between higher-order cumulants of the desired input and of the system output as the learning criterion, is used to adapt the weights. The motivation is that HOS can effectively suppress Gaussian and symmetrically distributed non-Gaussian noise, so the influence of Gaussian noise on the input of the HOS-based RBF and on the HOS-based learning algorithm is mitigated. Simulation results indicate that the HOS-based RBF provides better signal enhancement under different noise levels, and its performance is insensitive to the choice of learning rate. Moreover, the HOS-based RBF remains stable and efficient under nonstationary Gaussian noise.
17.
Regularized classifiers are a class of kernel-based classification methods generated from Tikhonov regularization schemes, and trigonometric polynomial kernels are among the most important kernels, playing key roles in signal processing. The main goal of this paper is to provide convergence rates for classification algorithms generated by regularization schemes with trigonometric polynomial kernels. As a special case, an error analysis for the support vector machine (SVM) soft-margin classifier is presented. The norms of the Fejér operator in the reproducing kernel Hilbert space and the approximation properties of the operator in the L1 space of periodic functions play key roles in the analysis of the regularization error. Some new bounds on the learning rate of regularization algorithms, based on the covering-number measure for normalized loss functions, are established. Together with the analysis of the sample error, explicit learning rates for SVM are also derived.
18.
Arthur Tenenhaus, Alain Giron, Michel Béra, Bernard Fertil. Computational Statistics & Data Analysis, 2007, 51(9): 4083-4100
“Kernel logistic PLS” (KL-PLS) is a new tool for supervised nonlinear dimensionality reduction and binary classification. The principles of KL-PLS are based on both PLS latent variables construction and learning with kernels. The KL-PLS algorithm can be seen as a supervised dimensionality reduction (complexity control step) followed by a classification based on logistic regression. The algorithm is applied to 11 benchmark data sets for binary classification and to three medical problems. In all cases, KL-PLS proved its competitiveness with other state-of-the-art classification methods such as support vector machines. Moreover, due to successions of regressions and logistic regressions carried out on only a small number of uncorrelated variables, KL-PLS allows handling high-dimensional data. The proposed approach is simple and easy to implement. It provides an efficient complexity control by dimensionality reduction and allows the visual inspection of data segmentation.
19.
The activation functions in the hidden layer of radial basis function neural networks (RBFNN) are Gaussian functions, which respond locally around the kernel centers. In most existing research, the local spatial response of a sample is computed inaccurately because the kernels all share the same hyperspherical shape and the kernel parameters in the network are determined by experience; the influence of the fine structure of the local space is not considered during feature extraction. In addition, it is difficult to obtain good feature extraction ability with low computational complexity. This paper therefore develops a multi-scale RBF kernel learning algorithm and proposes a new multi-layer RBF neural network model. For the samples of each class, the expectation-maximization (EM) algorithm is used to obtain multi-layer nested sub-distribution models with different local response ranges, which are called multi-scale kernels in the network. The prior information of each sub-distribution is used as the connection weight between the multi-scale kernels. Finally, feature extraction is implemented using multi-layer kernel subspace embedding. The multi-scale kernel learning model can efficiently and accurately describe the fine structure of the samples and is, to a certain extent, tolerant to the choice of the number of kernels. Taking the prior probability of each kernel as its weight makes the feature extraction process satisfy the Bayes rule, which enhances the interpretability of feature extraction in the network. The paper also theoretically proves that the proposed neural network is a generalized version of the original RBFNN. Experimental results show that the proposed method performs better than several state-of-the-art algorithms.
20.
Gaussian process classification with a semi-supervised kernel
A semi-supervised algorithm is proposed for learning a Gaussian process classifier; it supplies the classifier with information from unlabelled data through a non-parametric semi-supervised kernel. The algorithm consists of the following steps: 1) obtain a kernel matrix from the spectral decomposition of the graph Laplacian, combining information from labelled and unlabelled data; 2) learn optimal weights for the eigenvectors of the kernel matrix by convex optimization, building the non-parametric semi-supervised kernel; 3) integrate this semi-supervised kernel into a Gaussian process model to form the proposed semi-supervised learning algorithm. Its main feature is that a non-parametric semi-supervised kernel built on the whole dataset is used within a Gaussian process model, which has an explicit probabilistic formulation, conveniently models the uncertainty among the data, and can handle complex inference problems. Experimental results show that the algorithm is more reliable than competing methods.
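Steps 1) and 3) of such a construction can be sketched together: build the graph Laplacian over all points (labelled and unlabelled), take its spectral decomposition, and assemble a kernel from reweighted eigenvectors. The convex optimisation of the eigenvector weights in step 2) is replaced here by a fixed smooth spectral transform, which is an assumption of this sketch:

```python
import numpy as np

def semisup_kernel(X, gamma=1.0, mu=None):
    """Non-parametric kernel from the spectral decomposition of the graph
    Laplacian over labelled + unlabelled points: K = sum_i mu_i v_i v_i^T."""
    W = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    np.fill_diagonal(W, 0.0)                      # similarity graph, no self-loops
    L = np.diag(W.sum(axis=1)) - W                # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(L)
    if mu is None:
        mu = 1.0 / (1.0 + vals)                   # fixed smooth spectral transform
    return (vecs * mu) @ vecs.T
```

Because the weights are positive, the resulting matrix is a symmetric positive semidefinite kernel and can be dropped into a Gaussian process model as its covariance.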