Similar Documents
20 similar documents found (search time: 187 ms)
1.
A Differential Evolution Orthogonal Least Squares Algorithm for RBF Networks   Total citations: 1 (self: 1, others: 0)
This paper studies a differential evolution orthogonal least squares (DEOLS) algorithm for training radial basis function (RBF) networks. The population of the differential evolution (DE) algorithm serves as the candidate set of radial basis functions for the orthogonal least squares (OLS) algorithm, and OLS evaluates the DE population individuals to determine the number, centers, and widths of the RBF network's hidden nodes. The algorithm combines the strong search ability of DE with the efficient evaluation ability of OLS: the hidden nodes are selected more sensibly than with OLS alone, while the complexity of DE is avoided. Experiments verify the superiority of the algorithm.
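Below is a minimal, hedged sketch of the general idea, assuming a Gaussian RBF model: a DE loop searches over a single global width while a least-squares weight fit stands in for the OLS-style evaluation of each candidate. It is not the paper's exact DEOLS procedure, and the function names (`rbf_design`, `de_fit_width`) and all constants are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact DEOLS): DE/rand/1/bin searches over a
# global RBF width; a least-squares weight fit plays the role of the OLS-style
# evaluation of each candidate. Names and constants are illustrative assumptions.
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2*width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fitness(width, X, y, centers):
    """Training RMSE after a least-squares weight fit for a candidate width."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sqrt(np.mean((Phi @ w - y) ** 2))

def de_fit_width(X, y, centers, pop=20, gens=50, F=0.6, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(0.05, 2.0, size=pop)            # population of candidate widths
    f = np.array([fitness(p, X, y, centers) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice(pop, size=3, replace=False)
            trial = P[a] + F * (P[b] - P[c])        # DE/rand/1 mutation
            if rng.random() > CR:                   # binomial crossover (scalar case)
                trial = P[i]
            trial = abs(trial) + 1e-6               # keep width positive
            ft = fitness(trial, X, y, centers)
            if ft < f[i]:                           # greedy selection
                P[i], f[i] = trial, ft
    return P[np.argmin(f)], f.min()

if __name__ == "__main__":
    X = np.linspace(-3, 3, 200)[:, None]
    y = np.sinc(X).ravel()
    centers = X[::20]                               # fixed candidate centers from the data
    best_w, best_err = de_fit_width(X, y, centers)
    print(f"best width {best_w:.3f}, training RMSE {best_err:.4f}")
```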

2.
This paper introduces a three-layer radial basis function neural network trained with the orthogonal least squares algorithm. The structure of the RBF network is first obtained with the orthogonal least squares algorithm; the network weights are then trained so that the network approximates a given function. To verify the RBF network's ability to approximate arbitrary nonlinear mappings and its self-learning and adaptive capabilities, a two-joint manipulator is used as the identification object in experimental studies. The results show that the RBF network has good model-learning and approximation ability, learns quickly, converges well, and is robust, making it especially suitable for the real-time control requirements of complex systems with continuous linear and nonlinear plants.
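As a rough illustration of the three-layer structure (inputs, Gaussian hidden layer, linear output layer), the sketch below assumes the hidden centers have already been chosen, e.g. by an OLS-type selection, and fits the output weights by least squares to approximate a given function. The class name and the toy target are assumptions, not the paper's manipulator experiment.

```python
# Minimal three-layer RBF sketch: hidden centers assumed already chosen, linear
# output layer fit by least squares. Names and the toy target are assumptions.
import numpy as np

class ThreeLayerRBF:
    def __init__(self, centers, width):
        self.centers = np.atleast_2d(centers)
        self.width = width
        self.weights = None

    def _hidden(self, X):
        # Gaussian hidden-layer outputs for every (sample, center) pair.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        # Linear output layer: solve min_w ||Phi w - y||_2 by least squares.
        Phi = self._hidden(X)
        self.weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.weights

if __name__ == "__main__":
    # Approximate a given scalar function, standing in for the manipulator mapping.
    X = np.random.default_rng(1).uniform(-2, 2, size=(300, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1])
    net = ThreeLayerRBF(centers=X[::15], width=0.8).fit(X, y)
    print("training RMSE:", np.sqrt(np.mean((net.predict(X) - y) ** 2)))
```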

3.
For the system identification problem of the nonlinear auto-regressive with exogenous input (NARX) model, a non-orthogonal method is used to construct a relatively sparse radial basis function model that approximates the NARX model. Unlike existing radial basis or other kernel models that use a single fixed scale, multiple scales are used, and the best kernel centers and scale parameters are selected by minimizing the current training error. During learning, model terms are added by stepwise regression with non-orthogonal kernel functions. The k-means clustering algorithm is applied to the sample data to obtain candidate kernel-center parameters, several candidate scales are set, the weights of the corresponding kernel functions are obtained by least squares, and forward selection is used to find the optimal kernel functions that minimize the model error. Simulation experiments verify the feasibility of the method in terms of generalization performance and sparsity.
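The sketch below illustrates the multi-scale forward-selection idea under simplifying assumptions: candidate centers come from a plain k-means pass, each center is paired with several candidate widths, and terms are added greedily by refitting the least-squares weights and keeping whichever candidate most reduces the training error. The stopping rule (a fixed number of terms), the toy target, and all names are illustrative.

```python
# Multi-scale, non-orthogonal forward selection sketch: k-means candidate
# centers x several widths form the dictionary; columns are added greedily by
# refitting least-squares weights. Stopping rule and names are assumptions.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lbl == j].mean(0) if np.any(lbl == j) else C[j] for j in range(k)])
    return C

def gauss_col(X, c, s):
    return np.exp(-((X - c) ** 2).sum(1) / (2 * s ** 2))

def forward_select(X, y, centers, widths, max_terms=10):
    cand = [(c, s) for c in centers for s in widths]           # multi-scale dictionary
    cols = np.stack([gauss_col(X, c, s) for c, s in cand], 1)
    chosen = []
    for _ in range(max_terms):
        best_j, best_sse, best_w = None, np.inf, None
        for j in range(cols.shape[1]):
            if j in chosen:
                continue
            Phi = cols[:, chosen + [j]]
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # refit all weights
            sse = np.sum((y - Phi @ w) ** 2)
            if sse < best_sse:
                best_j, best_sse, best_w = j, sse, w
        chosen.append(best_j)
    return [cand[j] for j in chosen], best_w

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.uniform(-3, 3, (200, 1))
    y = np.tanh(X[:, 0]) + 0.05 * rng.standard_normal(200)     # stand-in NARX-style target
    terms, w = forward_select(X, y, kmeans(X, 8), widths=[0.3, 0.7, 1.5])
    print("selected", len(terms), "terms")
```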

4.
A radial basis function based predictive control strategy for nonlinear processes is proposed. First, an RBF network model of the process is developed: the model order and the type of radial basis function are chosen according to the process characteristics, the K-means method determines the locations of the basis-function centers, a statistical F-test determines the number of centers, and iterative least squares determines the network weights. Then, feature samples of a nonlinear predictive controller (NLPC) are extracted with the network model and used to train and construct an RBF network predictive controller (RBFPC). Simulation results show that, compared with the NLPC, the RBFPC does not have to solve a nonlinear optimization problem online and is therefore easy to implement quickly online; compared with a PI controller, the RBFPC has better setpoint-tracking and disturbance-rejection performance.

5.
To address the difficulty of determining an iteration stopping criterion in traditional radial basis function (RBF) networks, an algorithm is proposed that trains a multi-scale RBF network by minimizing the leave-one-out (LOO) error. The global k-means clustering algorithm and empirical selection are used to construct candidate sets of centers and scale parameters for the RBF nodes, and orthogonal forward selection minimizes the LOO error step by step to determine each center and scale parameter of the network. Experimental results show that the algorithm terminates the selection of new network nodes automatically, without an additional stopping criterion, and, compared with traditional RBF networks, produces radial basis networks that are sparser and generalize better.
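A compressed sketch of the leave-one-out criterion is given below, assuming a linear-in-the-weights RBF model so that the LOO residuals follow from the PRESS identity e_i^loo = e_i / (1 - h_ii), with h_ii the diagonal of the hat matrix; selection stops automatically once adding a node no longer lowers the LOO error. Candidate generation (data points as centers, a few fixed widths) and the toy data are simplifying assumptions, and the selection here is plain forward selection rather than the orthogonalized form.

```python
# LOO-driven forward selection sketch: PRESS identity e_loo_i = e_i / (1 - h_ii)
# gives the LOO error of a least-squares fit; selection stops when it no longer
# improves. Candidate generation and constants are illustrative assumptions.
import numpy as np

def loo_mse(Phi, y):
    """Leave-one-out MSE of the least-squares fit y ~ Phi w (PRESS / n)."""
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    H = Phi @ np.linalg.pinv(Phi)             # hat matrix Phi (Phi^T Phi)^+ Phi^T
    e = y - Phi @ w
    return np.mean((e / (1.0 - np.diag(H))) ** 2)

def select_by_loo(cols, y):
    chosen, best = [], np.inf
    while True:
        trial = [(loo_mse(cols[:, chosen + [j]], y), j)
                 for j in range(cols.shape[1]) if j not in chosen]
        if not trial:
            return chosen, best
        err, j = min(trial)
        if err >= best:                        # LOO error stopped improving: terminate
            return chosen, best
        chosen.append(j)
        best = err

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.uniform(-2, 2, (120, 1))
    y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(120)
    cand = [(c, s) for c in X[::10, 0] for s in (0.3, 0.8)]    # centers x two scales
    cols = np.stack([np.exp(-(X[:, 0] - c) ** 2 / (2 * s ** 2)) for c, s in cand], 1)
    sel, err = select_by_loo(cols, y)
    print(f"{len(sel)} nodes selected, LOO MSE {err:.4f}")
```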

6.
A Particle Swarm Optimization Based Parameter Optimization Algorithm for RBF Networks   Total citations: 4 (self: 1, others: 3)
To overcome some shortcomings of neural networks, a particle swarm optimization based learning algorithm is studied, applying particle swarm optimization to the training of RBF neural networks. A parameter optimization algorithm for radial basis function (RBF) networks based on particle swarm optimization (PSO) is proposed: subtractive clustering first determines the number of radial-basis-function centers, PSO then optimizes the centers and widths of the basis functions, and finally PSO trains the weights from the hidden layer to the output layer to find the optimal network weights and thereby improve network learning. An experiment comparing the method with a network optimized by least squares verifies its effectiveness.
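A simplified sketch of the PSO stage follows: particles encode centers and widths as one flat vector, and fitness is the training RMSE after a least-squares weight fit. In the abstract the node count comes from subtractive clustering and PSO also trains the output weights; both are simplified away here, and all constants are illustrative assumptions.

```python
# PSO sketch for RBF parameters: each particle is a flat vector of centers and
# widths; fitness is training RMSE after a least-squares weight fit. The node
# count and all PSO constants are assumptions for illustration.
import numpy as np

def rmse_for(theta, X, y, n_nodes, dim):
    C = theta[:n_nodes * dim].reshape(n_nodes, dim)
    s = np.abs(theta[n_nodes * dim:]) + 1e-3                  # widths kept positive
    d2 = ((X[:, None, :] - C[None]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * s[None, :] ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sqrt(np.mean((Phi @ w - y) ** 2))

def pso_rbf(X, y, n_nodes=6, swarm=25, iters=80, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    D = n_nodes * dim + n_nodes                               # centers + widths
    pos = rng.uniform(-1, 1, (swarm, D))
    vel = np.zeros((swarm, D))
    pbest = pos.copy()
    pbest_f = np.array([rmse_for(p, X, y, n_nodes, dim) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((swarm, D)), rng.random((swarm, D))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([rmse_for(p, X, y, n_nodes, dim) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

if __name__ == "__main__":
    X = np.linspace(-1, 1, 150)[:, None]
    y = np.sin(3 * np.pi * X[:, 0])
    _, err = pso_rbf(X, y)
    print(f"best training RMSE {err:.4f}")
```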

7.
An Optimal Partition Algorithm for RBF Neural Networks and Its Application to Stock Market Prediction   Total citations: 1 (self: 0, others: 1)
The optimal partition algorithm (OPA) is applied to training the parameters of radial basis function neural networks. The OPA is suitably improved by adding methods for determining cluster centers and widths, which are then used to determine the centers and widths of the RBF network. A method is proposed for dynamically adjusting the network structure using differences of the cluster objective function, so that the number of hidden nodes is selected adaptively. Numerical simulations of stock price prediction verify the effectiveness of the method. Comparison with traditional algorithms shows that OPA has a clear advantage in prediction, and an OPA-OLS algorithm that combines OPA with orthogonal least squares can improve the accuracy of trend prediction.

8.
A PSO-Based Learning Algorithm for RBF Neural Networks and Its Application   Total citations: 17 (self: 0, others: 17)
A learning method for radial basis function (RBF) neural networks based on particle swarm optimization (PSO) is proposed: subtractive clustering first determines the number of units in the radial-basis layer, PSO then optimizes the basis centers and widths, and the RBF network is trained in combination with the least squares method. The algorithm is applied to the prediction of chaotic time series, and simulation examples show that the method is effective.

9.
An AC Servo System Based on RBF Neural Network PID Control   Total citations: 1 (self: 0, others: 1)
Combining neural networks with PID control, a neural-network-tuned PID control strategy is proposed and applied to the control of an AC servo system. A two-layer neural network adaptively adjusts the PID controller parameters online, giving the system favorable static and dynamic performance. A radial basis function neural network is used to identify the Jacobian information of the AC servo system; its learning algorithm uses orthogonal least squares to first obtain the RBF network structure, after which the BP algorithm trains the network weights so that the network approximates the given function. Experimental results show that the AC servo system has fast response, high steady-state accuracy, and strong robustness.

10.
Building on the traditional radial basis function network framework, the concept of a center hyperplane is introduced and a hyperplane-centered radial basis function neural network is proposed. In this network, the distance from a point to a center hyperplane replaces the point-to-point distance used in traditional RBF networks; the advantage is that a hyperplane, acting as the data center, captures more information about the relationships among the original data. Experiments on function approximation and data classification show that the hyperplane-centered RBF network has certain advantages over the traditional network.
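The sketch below illustrates only the change of distance measure: each hidden node stores a hyperplane (w, b) and its activation uses the point-to-hyperplane distance |w·x + b| / ||w|| in place of ||x − c||. How the hyperplanes are actually chosen in the paper is not reproduced; the random hyperplanes, the toy classification task, and the least-squares output layer are assumptions.

```python
# Hyperplane-centered RBF sketch: activation uses point-to-hyperplane distance
# instead of point-to-center distance. Random hyperplanes and the least-squares
# output layer are illustrative assumptions, not the paper's training method.
import numpy as np

def hyperplane_rbf_design(X, W, b, width):
    """Phi[i, j] = exp(-dist(x_i, plane_j)^2 / (2*width^2)), dist = |w_j.x + b_j| / ||w_j||."""
    dist = np.abs(X @ W.T + b) / np.linalg.norm(W, axis=1)    # (n_samples, n_nodes)
    return np.exp(-dist ** 2 / (2 * width ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.uniform(-2, 2, (300, 2))
    y = np.sign(X[:, 0] + X[:, 1] ** 2 - 1.0)                 # toy two-class labels
    W = rng.standard_normal((12, 2))                           # 12 hidden hyperplanes
    b = rng.standard_normal(12)
    Phi = hyperplane_rbf_design(X, W, b, width=0.8)
    w_out, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    acc = np.mean(np.sign(Phi @ w_out) == y)
    print(f"training accuracy {acc:.2f}")
```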

11.
Orthogonal least squares learning algorithm for radial basis function networks   Total citations: 146 (self: 0, others: 146)
The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
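A condensed sketch of the selection rule described above is given below, under simplifying assumptions (every data point as a candidate Gaussian center, a single width, a small cap on the number of centers): each remaining candidate is orthogonalized against the already-selected regressors, and the one with the largest error reduction ratio is added until the unexplained output energy falls below a tolerance.

```python
# OLS forward selection sketch: candidates are orthogonalized (classical
# Gram-Schmidt) against chosen regressors; the candidate with the largest error
# reduction ratio err_k = g_k^2 (w_k.w_k)/(y.y) is added until the unexplained
# energy falls below tol. Candidate set, width and cap are assumptions.
import numpy as np

def ols_select(P, y, tol=0.01, max_centers=30):
    n, m = P.shape
    selected, W = [], []                       # chosen indices, orthogonalized columns
    remaining_energy = 1.0
    for _ in range(min(m, max_centers)):
        best = None
        for k in range(m):
            if k in selected:
                continue
            w = P[:, k].copy()
            for q in W:                        # orthogonalize against chosen regressors
                w -= (q @ P[:, k]) / (q @ q) * q
            if w @ w < 1e-12:
                continue
            g = (w @ y) / (w @ w)
            err = g * g * (w @ w) / (y @ y)    # error reduction ratio of this candidate
            if best is None or err > best[0]:
                best = (err, k, w)
        if best is None:
            break
        remaining_energy -= best[0]
        selected.append(best[1])
        W.append(best[2])
        if remaining_energy < tol:             # adequate network constructed
            break
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = np.linspace(-4, 4, 200)[:, None]
    y = np.sinc(X[:, 0]) + 0.02 * rng.standard_normal(200)
    P = np.exp(-(X - X.T.ravel()) ** 2 / (2 * 0.6 ** 2))   # every sample as candidate center
    print("centers selected:", len(ols_select(P, y)))
```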

12.
Recursive orthogonal least squares (ROLS) is a numerically robust method for solving for the output layer weights of a radial basis function (RBF) network, and requires less computer memory than the batch alternative. In the paper, the use of ROLS is extended to selecting the centers of an RBF network. It is shown that the information available in an ROLS algorithm after network training can be used to sequentially select centers to minimize the network output error. This provides efficient methods for network reduction to achieve smaller architectures with acceptable accuracy and without retraining. Two selection methods are developed, forward and backward. The methods are illustrated in applications of RBF networks to modeling a nonlinear time series and a real multi-input multi-output chemical process. The final network models obtained achieve acceptable accuracy with significant reductions in the number of required centers.

13.
A recursive orthogonal least squares (ROLS) algorithm for multi-input, multi-output systems is developed in this paper and is applied to updating the weighting matrix of a radial basis function network. An illustrative example is given, to demonstrate the effectiveness of the algorithm for eliminating the effects of ill-conditioning in the training data, in an application of neural modelling of a multi-variable chemical process. Comparisons with results from using standard least squares algorithms, in batch and recursive form, show that the ROLS algorithm can significantly improve the neural modelling accuracy. The ROLS algorithm can also be applied to a large data set with much lower requirements on computer memory than the batch OLS algorithm.

14.
In this article, we propose a novel complex radial basis function network approach for dynamic behavioral modeling of nonlinear power amplifiers with memory in 3G systems. The proposed approach uses the complex QR-decomposition-based recursive least squares (QRD-RLS) algorithm, implemented with complex Givens rotations, to update the weighting matrix of the complex radial basis function (RBF) network. Compared with standard least squares algorithms in batch and recursive form, the QRD-RLS algorithm offers good numerical robustness and a regular structure, and can significantly improve the modeling accuracy of the complex RBF network. In this approach, only the signal's complex envelope is used for model training and validation. The model has been validated using ADS-simulated and real measured data. Finally, parallel implementation of the resulting method is briefly discussed.

15.
To address the complex network structure caused by an excessive number of hidden-layer nodes in RBF neural networks, an RBF network optimization algorithm based on an improved genetic algorithm (IGA) is proposed. The IGA optimizes the structure of an RBF network built with orthogonal least squares by performing a global search over the column vectors of the hidden-layer output matrix, yielding an IGA-based RBF network (IGA-RBF) with a better structure. The IGA-RBF learning algorithm is applied to a model for predicting the temperature and humidity of the storage environment of electronic components. Compared with the RBF network based on orthogonal least squares, the IGA-RBF network requires 44 fewer training steps and 34 fewer hidden-layer nodes, the temperature and humidity errors of the prediction model are smaller, and the fitting accuracy exceeds 0.95, giving higher prediction accuracy.
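A schematic sketch of the structure search follows: a genetic algorithm evolves binary masks over the columns of the hidden-layer output matrix (one bit per candidate RBF node), with fitness equal to the least-squares training error plus a small penalty on the number of retained nodes. The paper's specific "improved GA" operators are not reproduced; the operators, penalty, and constants below are assumptions.

```python
# GA column-selection sketch: binary masks over candidate hidden-layer columns,
# fitness = LS training MSE + node-count penalty. Operators and constants are
# illustrative assumptions, not the paper's improved GA.
import numpy as np

def fitness(mask, Phi, y, penalty=0.01):
    if not mask.any():
        return np.inf
    sub = Phi[:, mask]
    w, *_ = np.linalg.lstsq(sub, y, rcond=None)
    mse = np.mean((sub @ w - y) ** 2)
    return mse + penalty * mask.sum() / mask.size

def ga_select(Phi, y, pop=30, gens=60, pmut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    m = Phi.shape[1]
    P = rng.random((pop, m)) < 0.5
    for _ in range(gens):
        f = np.array([fitness(ind, Phi, y) for ind in P])
        new = [P[np.argmin(f)].copy()]                       # elitism
        while len(new) < pop:
            i, j = rng.choice(pop, 2, replace=False)         # tournament of two
            a = P[i] if f[i] < f[j] else P[j]
            i, j = rng.choice(pop, 2, replace=False)
            b = P[i] if f[i] < f[j] else P[j]
            cut = rng.integers(1, m)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(m) < pmut                     # bit-flip mutation
            new.append(child)
        P = np.array(new)
    f = np.array([fitness(ind, Phi, y) for ind in P])
    return P[np.argmin(f)]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    X = rng.uniform(-2, 2, (150, 1))
    y = np.exp(-X[:, 0] ** 2) * np.sin(4 * X[:, 0])
    centers = np.linspace(-2, 2, 25)
    Phi = np.exp(-(X - centers) ** 2 / (2 * 0.4 ** 2))       # candidate hidden-layer outputs
    best = ga_select(Phi, y)
    print(f"kept {best.sum()} of {Phi.shape[1]} hidden nodes")
```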

16.
This paper reviews some frequently used methods to initialize a radial basis function (RBF) network and presents systematic design procedures for pre-processing unit(s) that initialize an RBF network from available input–output data sets. The pre-processing units are computationally hybrid two-step training algorithms, namely (1) construction of the initial structure and (2) coarse-tuning of the free parameters. In the first step, the number and locations of the initial centers of the RBF network are determined; an orthogonal least squares algorithm and a modified counterpropagation network can be employed for this purpose. In the second step, coarse-tuning of the free parameters is achieved using clustering procedures; the Gustafson–Kessel and fuzzy C-means clustering methods are evaluated for this coarse-tuning. These two steps behave like a pre-processing unit for the final stage (the fine-tuning stage, a gradient descent algorithm). The initialization ability of the proposed four pre-processing units (modular combinations of the existing methods) is compared on three non-linear benchmarks in terms of root mean square errors. Finally, the proposed hybrid pre-processing units may initialize a fairly accurate, IF–THEN-wise readable initial model automatically and efficiently with minimal user interference.
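As a brief illustration of the coarse-tuning step, the sketch below runs standard fuzzy C-means from a crude initial structure (random picks standing in for the OLS or counterpropagation initialization), alternating membership and center updates. The fuzzifier m = 2, the tolerance, and the toy data are assumptions, and the Gustafson–Kessel variant is not shown.

```python
# Fuzzy C-means coarse-tuning sketch: alternate membership and center updates
# from an assumed initial structure. Fuzzifier, tolerance, and data are
# illustrative assumptions.
import numpy as np

def fuzzy_c_means(X, centers, m=2.0, iters=100, tol=1e-5):
    C = centers.copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12   # (n, k) distances
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)                       # memberships
        Um = U ** m
        C_new = (Um.T @ X) / Um.sum(axis=0)[:, None]                   # weighted centers
        if np.linalg.norm(C_new - C) < tol:
            return C_new, U
        C = C_new
    return C, U

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    X = np.vstack([rng.normal(loc, 0.3, (80, 2)) for loc in (-1.5, 0.0, 1.5)])
    init = X[rng.choice(len(X), 3, replace=False)]                     # crude initial structure
    C, _ = fuzzy_c_means(X, init)
    print("coarse-tuned RBF centers:\n", np.round(C, 2))
```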

17.
A Two-Level Learning Method for Radial Basis Function Neural Networks   Total citations: 2 (self: 1, others: 1)
The key to building a radial basis function (RBF) neural network model is determining the hidden center vectors, the basis width parameter, and the number of hidden nodes. To design an RBF network with a simple structure and good generalization performance, this paper proposes a new two-level learning design method. At the lower level, an algorithm combining regularized orthogonal least squares with D-optimal experimental design automatically constructs a parsimonious RBF network model; at the upper level, particle swarm optimization selects the best combination of the three learning parameters of the combined algorithm that affect generalization, namely the basis width parameter, the regularization coefficient, and the D-optimality cost coefficient. Simulation examples demonstrate the effectiveness of the method.

18.
Function Approximation Based on RBF Networks in Seismic Data Processing   Total citations: 2 (self: 0, others: 2)
This paper introduces radial basis function networks into seismic data processing and implements interpolation of seismic data by function approximation, achieving good results on real seismic data. The theory, methods, applications, and approximation performance of RBF networks are studied. The network makes full use of the information contained in the training data and can adaptively determine the number of hidden nodes, the radial-basis-function centers, and the network weights; the resulting networks are small, converge quickly, and are numerically stable. When approximating the same function to the same accuracy, the RBF network takes far less time than a BP network, so it is a new type of neural network with broad application prospects.
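An illustrative sketch of the interpolation use case follows, under assumptions: amplitudes of a single synthetic trace (a Ricker-like wavelet) are known at irregular times, a Gaussian RBF interpolant is fit with the known positions as centers, and the trace is evaluated on a dense grid. The adaptive determination of node count described in the abstract is not reproduced, and the width value is an assumption.

```python
# RBF interpolation sketch for an irregularly sampled trace: Gaussian RBF
# interpolant with the known sample positions as centers. The synthetic
# Ricker-like wavelet and the width value are assumptions for illustration.
import numpy as np

def rbf_interpolate(x_known, y_known, x_query, width=0.05):
    def design(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))
    # Exact interpolation: solve the (lightly regularized) square system for the weights.
    A = design(x_known, x_known) + 1e-8 * np.eye(len(x_known))
    w = np.linalg.solve(A, y_known)
    return design(x_query, x_known) @ w

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    t_known = np.sort(rng.uniform(0.0, 1.0, 40))              # irregular sample times
    f = 15.0                                                   # dominant frequency (illustrative)
    arg = (np.pi * f * (t_known - 0.5)) ** 2
    trace = (1 - 2 * arg) * np.exp(-arg)                       # Ricker-like wavelet samples
    t_dense = np.linspace(0.0, 1.0, 500)
    recon = rbf_interpolate(t_known, trace, t_dense)
    print("reconstructed trace samples:", np.round(recon[:5], 3))
```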

19.
The use of radial basis function (RBF) networks and least squares algorithms for acquisition and fine tracking of NASA's 70-m Deep Space Network antennas is described and evaluated. We demonstrate that such a network, trained using the computationally efficient orthogonal least squares algorithm and working in conjunction with an array feed compensation system, can point a 70-m deep-space antenna with root mean square (rms) errors of 0.1-0.5 millidegrees (mdeg) under a wide range of signal-to-noise ratios and antenna elevations. This pointing accuracy is significantly better than the 0.8 mdeg benchmark for communications at Ka-band frequencies (32 GHz). Continuous adaptation strategies for the RBF network were also implemented to compensate for antenna aging, thermal gradients, and other factors leading to time-varying changes in the antenna structure, resulting in dramatic improvements in system performance. The systems described here are currently in testing phases at NASA's Goldstone Deep Space Network (DSN) and were evaluated using Ka-band telemetry from the Cassini spacecraft.

20.
Sparse RBF Networks with Multi-kernels   Total citations: 1 (self: 1, others: 0)
While conventional standard radial basis function (RBF) networks are based on a single kernel, in practice it is often desirable to base the networks on combinations of multiple kernels. In this paper, a multi-kernel function is introduced by combining several kernel functions linearly. A novel RBF network with the multi-kernel is constructed to obtain a parsimonious and flexible regression model. The unknown centers of the multi-kernels are determined by an improved k-means clustering algorithm, and the orthogonal least squares (OLS) algorithm is used to determine the remaining parameters. The complexity of the newly proposed algorithm is also analyzed. It is demonstrated that the new network leads to a more parsimonious model with much better generalization properties than traditional RBF networks with a single kernel.
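A minimal sketch of the multi-kernel construction is given below: each candidate center (here from a plain k-means pass standing in for the improved k-means) contributes one column per kernel type, in this case a Gaussian and an inverse multiquadric, and the output weights over this multi-kernel dictionary are fit by least squares. The OLS subset-selection step used in the paper is omitted, and the kernel choices and widths are assumptions.

```python
# Multi-kernel RBF sketch: each k-means center contributes one Gaussian and one
# inverse-multiquadric column; output weights fit by least squares. The paper's
# OLS selection is omitted; kernel choices and widths are assumptions.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lbl == j].mean(0) if np.any(lbl == j) else C[j] for j in range(k)])
    return C

def multi_kernel_design(X, centers, width=0.6):
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    gauss = np.exp(-d2 / (2 * width ** 2))                  # Gaussian kernel
    imq = 1.0 / np.sqrt(d2 + width ** 2)                    # inverse multiquadric kernel
    return np.hstack([gauss, imq])                           # one column per (center, kernel)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    X = rng.uniform(-2, 2, (250, 1))
    y = np.abs(X[:, 0]) + 0.3 * np.sin(3 * X[:, 0])
    Phi = multi_kernel_design(X, kmeans(X, 10))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```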
