Similar Documents
20 similar documents found
1.
An RLS Training Algorithm for Multilayer Feedforward Neural Networks and Its Application in Identification   Cited by: 18 (self: 0, others: 18)
This paper proposes a fast learning algorithm for multilayer feedforward neural networks based on recursive least squares (RLS) and applies it to the identification of nonlinear processes. Simulation results and the identification of a practical example show that the proposed method is effective.
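The abstract does not give the algorithm's details; as a minimal sketch of the RLS idea, the snippet below trains only the linear output weights of a network whose hidden layer is fixed and random (an assumption for illustration, not the paper's layer-wise scheme), so the recursive least-squares update applies exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear process to identify: y = sin(x)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

# Fixed random hidden layer (tanh); only the linear output
# weights are trained, so RLS applies exactly.
n_hidden = 30
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)            # (200, n_hidden) hidden outputs

# Recursive least squares over the output weights theta
theta = np.zeros(n_hidden)
P = np.eye(n_hidden) * 1e3        # large initial covariance
for h, t in zip(H, y):
    k = P @ h / (1.0 + h @ P @ h) # gain vector
    theta += k * (t - h @ theta)  # innovation update
    P -= np.outer(k, h @ P)       # covariance update

pred = H @ theta
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(round(rmse, 3))
```

Because each sample is incorporated in one pass, the recursive update avoids the many epochs a gradient method would need on the same linear subproblem.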

2.
A Double-Parallel Feedforward Process Neural Network and Its Applications   Cited by: 6 (self: 0, others: 6)
钟诗胜, 丁刚. Control and Decision, 2005, 20(7): 764-768
To overcome the slow convergence and low accuracy of multilayer feedforward process neural networks, a double-parallel feedforward process neural network model is proposed. A suitable set of orthogonal basis functions is introduced in the input space, the input functions and the network weight functions are expressed as expansions over this basis, and the orthogonality of the basis functions is exploited to simplify the network's aggregation operations. A corresponding learning algorithm is given, and the effectiveness of the model and algorithm is verified on the prediction of exhaust gas temperature in aircraft engine condition monitoring.
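A small illustration of the simplification the abstract describes, under the assumption of an orthonormal Legendre basis (the paper's particular basis is not specified here): once the input function and the weight function are expanded on an orthonormal basis, the aggregation integral over time reduces to a dot product of the coefficient vectors.

```python
import numpy as np

# Orthonormal Legendre basis on [-1, 1] (first four, normalized)
t = np.linspace(-1, 1, 2001)
P = [np.ones_like(t) * np.sqrt(0.5),
     t * np.sqrt(1.5),
     0.5 * (3 * t**2 - 1) * np.sqrt(2.5),
     0.5 * (5 * t**3 - 3 * t) * np.sqrt(3.5)]

x_c = np.array([0.3, -1.2, 0.8, 0.5])   # input-function coefficients
w_c = np.array([1.0, 0.4, -0.7, 0.2])   # weight-function coefficients
x_fun = sum(c * p for c, p in zip(x_c, P))
w_fun = sum(c * p for c, p in zip(w_c, P))

# The aggregation integral  int w(t) x(t) dt  collapses to a dot
# product of coefficients because the basis is orthonormal.
integral = np.sum(w_fun * x_fun) * (t[1] - t[0])
dot = x_c @ w_c
print(round(integral, 4), round(dot, 4))
```

The two printed values agree up to discretization error, which is the source of the computational saving: the network never has to evaluate the integral numerically during training.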

3.
Targeting the precisely timed multi-spike coding used by spiking neurons, a new supervised learning algorithm for multilayer spiking neural networks based on kernel convolution is proposed. The algorithm converts discrete spike trains into continuous functions via convolution with a kernel function. Within a multilayer feedforward spiking network architecture, gradient descent yields learning rules expressed through this kernel-convolution representation, which are used to adjust the synaptic weights. In the experiments, the algorithm's ability to learn spike trains is first verified, and it is then applied to classify the Iris dataset. The results show that the algorithm can learn complex spatio-temporal patterns of spike trains and achieves high classification accuracy on nonlinear pattern classification problems.
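A sketch of the core transformation described above, with an assumed Gaussian kernel (the paper's kernel choice is not given here): convolving discrete spike trains with a kernel yields continuous functions, and the distance between the smoothed target and actual trains can serve as the error that a gradient rule would minimize.

```python
import numpy as np

def smooth(spikes, t, tau=2.0):
    """Convolve a discrete spike train (list of spike times) with a
    Gaussian kernel, giving a continuous function sampled on grid t."""
    s = np.zeros_like(t)
    for ti in spikes:
        s += np.exp(-((t - ti) ** 2) / (2 * tau ** 2))
    return s

t = np.linspace(0, 100, 1001)          # ms, 0.1 ms resolution
target = smooth([10, 35, 70], t)       # desired spike train
actual = smooth([12, 40, 66], t)       # network's current output

# Error between spike trains = squared distance between the
# smoothed functions (rectangle-rule integral over the grid).
err = np.sum((target - actual) ** 2) * (t[1] - t[0])
print(round(err, 3))
```

The error shrinks to zero exactly when the two spike trains coincide, which is what makes the smoothed representation usable inside a gradient-descent weight update.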

4.
A Separable-Parameter Learning Algorithm for Multilayer Feedforward Networks   Cited by: 1 (self: 0, others: 1)
Most neural network learning algorithms train all parameters of the network simultaneously, which is often time-consuming for large networks. Many networks, such as perceptrons, radial basis function networks, probabilistic generalized regression networks, and fuzzy neural networks, are multilayer feedforward networks whose input-output mappings can be expressed as linear combinations of a set of variable basis functions. Their parameters thus fall into two classes: the parameters inside the variable basis functions are nonlinear, while the combination coefficients are linear. An algorithm that learns these two classes of parameters separately is therefore proposed. Simulation results show that the algorithm speeds up learning and improves the network's approximation performance.
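The two-class parameter split can be sketched for an RBF network (a hypothetical instance for illustration, not the paper's exact algorithm): at each step the linear combination coefficients are solved exactly by least squares, while the nonlinear basis parameters (the centers) are updated by a gradient step.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(150, 1))
y = np.exp(-X[:, 0] ** 2) + 0.5 * X[:, 0]

def design(X, centers, width):
    # RBF basis matrix: each column is one variable basis function
    return np.exp(-(X - centers) ** 2 / (2 * width ** 2))

centers = np.linspace(-2, 2, 8)   # nonlinear parameters
width = 0.7
eps, lr = 1e-4, 0.01
for _ in range(50):
    Phi = design(X, centers, width)
    # Linear parameters: solved exactly, no iteration needed
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    # Nonlinear parameters: one numerical-gradient descent step
    base = np.mean((Phi @ coef - y) ** 2)
    grad = np.zeros_like(centers)
    for j in range(len(centers)):
        c = centers.copy(); c[j] += eps
        e = np.mean((design(X, c, width) @ coef - y) ** 2)
        grad[j] = (e - base) / eps
    centers -= lr * grad

mse = np.mean((design(X, centers, width) @ coef - y) ** 2)
print(round(mse, 5))
```

Solving the linear coefficients in closed form at every step is what removes them from the slow iterative search, which is the speed-up the abstract claims.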

5.
Application of a Constructive Kernel Covering Algorithm to Image Recognition   Cited by: 14 (self: 0, others: 14)
The main feature of constructive neural networks is that, while processing the given concrete data, they determine both the structure and the parameters of the network. A support vector machine first applies a nonlinear transformation via a kernel function and then finds the optimal linear separating surface in the kernel space; the resulting classification function resembles a neural network in form. The constructive kernel covering algorithm (CKCA) combines constructive learning methods from neural networks (such as covering algorithms) with the kernel-function approach of support vector machines (SVM). CKCA features low computational cost, strong constructiveness, and intuitiveness, making it suitable for large-scale classification and image recognition problems. To verify its effectiveness, recognition experiments were conducted on low-quality license-plate characters, with good results.
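A toy sketch of the covering idea in a kernel-induced metric (assumptions: a Gaussian kernel and a simple greedy covering rule; the paper's CKCA details may differ): each "cover" is a sphere in kernel feature space centered at a training point, with its radius bounded by the nearest opposite-class point, so the number of covers is determined constructively by the data.

```python
import numpy as np

def kdist2(a, b, gamma=1.0):
    # Squared distance in the Gaussian-kernel feature space:
    # k(a,a) - 2 k(a,b) + k(b,b), with k(a,b) = exp(-gamma||a-b||^2)
    k = np.exp(-gamma * np.sum((a - b) ** 2))
    return 2.0 - 2.0 * k

def build_covers(X, y):
    covers = []                      # (center, radius, label)
    uncovered = list(range(len(X)))
    while uncovered:
        i = uncovered[0]
        other = [j for j in range(len(X)) if y[j] != y[i]]
        # radius: just short of the nearest opposite-class point
        r = min(kdist2(X[i], X[j]) for j in other) - 1e-9
        covers.append((X[i], r, y[i]))
        uncovered = [j for j in uncovered
                     if y[j] != y[i] or kdist2(X[i], X[j]) >= r]
    return covers

def classify(x, covers):
    # nearest cover wins, measured relative to each cover's radius
    best = min(covers, key=lambda c: kdist2(x, c[0]) - c[1])
    return best[2]

# Toy two-class problem
rng = np.random.default_rng(2)
A = rng.normal([0, 0], 0.3, size=(20, 2))
B = rng.normal([2, 2], 0.3, size=(20, 2))
X = np.vstack([A, B]); y = np.array([0] * 20 + [1] * 20)
covers = build_covers(X, y)
acc = np.mean([classify(x, covers) == t for x, t in zip(X, y)])
print(len(covers), acc)
```

Note that both the network "structure" (the number of covers) and its "parameters" (centers and radii) fall out of one pass over the data, which is the constructive property the abstract emphasizes.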

6.
This paper proposes firstly to use a neural network with a mixed structure, consisting of multilayer and recurrent parts, for learning the system dynamics of a nonlinear plant. Since a neural network with a mixed structure can learn time series, it can learn the dynamics of a plant without knowing the plant order. Secondly, a novel method of synthesizing the optimal control is developed using the proposed neural network. The procedure is as follows: (1) let a neural network with a mixed structure learn the unknown dynamics of a nonlinear plant of arbitrary order; (2) after learning is completed, expand the network into an equivalent feedforward multilayer network; (3) the gradient of the criterion functional to be optimized can then be easily obtained from this multilayer network; and (4) generate the optimal control by applying any existing nonlinear programming algorithm based on this gradient information. The proposed method is successfully applied to the optimal control synthesis problem of a nonlinear coupled vibratory plant with a linear quadratic criterion functional.

7.

The paper observes a similarity between the stochastic optimal control of discrete dynamical systems and the learning of multilayer neural networks. It focuses on contemporary deep networks with nonconvex nonsmooth loss and activation functions. The machine learning problems are treated as nonconvex nonsmooth stochastic optimization problems. As a model of nonsmooth nonconvex dependences, so-called generalized-differentiable functions are used. The backpropagation method for calculating stochastic generalized gradients of the learning quality functional for such systems is substantiated based on the Hamilton–Pontryagin formalism. Stochastic generalized gradient learning algorithms are extended to the training of nonconvex nonsmooth neural networks. The performance of a stochastic generalized gradient algorithm is illustrated on a linear multiclass classification problem.


8.
In this paper, a new method of biomedical signal classification using a complex-valued pseudo-autoregressive (CAR) modeling approach is proposed. The CAR coefficients were computed from the synaptic weights and coefficients of a split weight and activation function of a feedforward multilayer complex-valued neural network. The performance of the proposed technique has been evaluated on the PIMA Indian diabetes dataset with different complex-valued data normalization techniques and four different values of the learning rate. An accuracy of 81.28% has been obtained with the proposed technique.

9.
In this paper, we propose the approximate transformable technique, which includes direct and indirect transformation, to obtain Chebyshev-Polynomials-Based (CPB) unified model neural networks for feedforward/recurrent neural networks via Chebyshev polynomial approximation. Based on this technique, we derive the relationship between single-layer neural networks and multilayer perceptron neural networks. It is shown that the CPB unified model neural networks can be represented as functional link networks based on Chebyshev polynomials, using the recursive least squares method with a forgetting factor as the learning algorithm. The CPB unified model neural networks not only retain the capability of a universal approximator but also learn faster than conventional feedforward/recurrent neural networks. Furthermore, we derive the condition under which the unified model generated by Chebyshev polynomials is optimal in the sense of least-squares error approximation in the single-variable case. Computer simulations show that the proposed method does have the capability of a universal approximator in some function-approximation tasks, with a considerable reduction in learning time.
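A minimal sketch of a Chebyshev functional-link model trained with recursive least squares and a forgetting factor, as the abstract describes (the polynomial order, target function, and forgetting factor below are illustrative assumptions):

```python
import numpy as np

def cheb_features(x, order=5):
    # Chebyshev polynomials T_0..T_order via the recurrence
    # T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), for x in [-1, 1]
    T = [np.ones_like(x), x]
    for _ in range(order - 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T, axis=-1)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 400)
y = np.sin(np.pi * x)              # target to approximate

lam = 0.99                          # forgetting factor
Phi = cheb_features(x)              # (400, 6) functional-link inputs
w = np.zeros(Phi.shape[1])
P = np.eye(Phi.shape[1]) * 1e3
for phi, t in zip(Phi, y):
    k = P @ phi / (lam + phi @ P @ phi)   # RLS gain with forgetting
    w += k * (t - phi @ w)
    P = (P - np.outer(k, phi @ P)) / lam

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(round(rmse, 4))
```

Because the model is linear in the Chebyshev features, the RLS update converges in a single pass over the data, which is the learning-speed advantage the abstract attributes to the CPB formulation.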

10.
The support vector machine is a learning machine, developed in recent years on the basis of statistical learning theory, with many applications in nonlinear function regression estimation. The least squares support vector machine replaces the inequality constraints of the traditional SVM with equality constraints and obtains the model by solving a set of linear equations, avoiding a quadratic programming problem. This paper applies the least squares support vector machine to the online estimation of the dry point of aviation kerosene. The results show that the least squares SVM learns quickly and accurately and is an effective method for soft-sensor modeling. Under the same sample conditions, it achieves better model approximation and generalization than an RBF network and saves considerable computation time compared with the traditional SVM.
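The replacement of the quadratic program by a linear system can be made concrete: for LS-SVM regression with an RBF kernel, training reduces to one solve of the KKT system in the dual variables and the bias (the kernel width and regularization constant below are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sinc(X[:, 0])               # toy regression target

def rbf(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] - 2 * A @ B.T + np.sum(B**2, 1)
    return np.exp(-d2 / (2 * sigma**2))

gamma = 100.0                      # regularization constant
n = len(X)
K = rbf(X, X)
# LS-SVM: the equality constraints turn training into the single
# linear KKT system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate([[0.0], y])
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

pred = K @ alpha + b
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(round(rmse, 4))
```

One dense linear solve replaces the iterative QP of the classical SVM, which is where the computation-time saving reported above comes from.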

11.
Probability density functions are estimated by an exponential family of densities based on multilayer feedforward networks. The role of the multilayer feedforward networks in the proposed estimator is to approximate the logarithm of the probability density functions. As the main contribution, the method of maximum likelihood is used to derive an unsupervised backpropagation learning law for estimating the probability density functions. Computer simulation results demonstrating the use of the derived learning law are presented.

12.
A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific features: its backpropagation learning algorithm is derivative-free, and its functionality is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping enable complex problems to be modeled with simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.

13.
顾哲彬, 曹飞龙. Computer Science, 2018, 45(Z11): 238-243
The inputs of traditional artificial neural networks are vectors, while images are represented as matrices; when images are processed with such networks, they must be fed in as vectors, which destroys the structural information of the image and degrades the processing results. To improve the network's ability to handle images, this paper draws on the ideas and methods of deep learning and introduces multilayer feedforward neural networks with matrix inputs. The networks are trained with the traditional backpropagation (BP) algorithm; the training procedure and algorithm are given, and numerical experiments are conducted on the USPS handwritten digit dataset. The results show that, compared with a single-hidden-layer matrix-input feedforward network (2D-BP), the proposed multilayer networks achieve better classification. In addition, an effective and feasible method based on the proposed 2D-BP network is given for the color-image classification problem.

14.
Neural networks have become an effective class of methods for nonlinear system identification, but the commonly used multilayer perceptron suffers from poor stability and slow convergence. The Fourier neural network, built on the multilayer perceptron and the Fourier series, offers good generalization and pattern-recognition ability, but its learning algorithm, mainly steepest descent, is prone to local minima and slow learning. A Fourier neural network using a double broken-line (two-segment) step method is proposed, which avoids the local-minimum problem and achieves second-order convergence. Numerical examples verify the performance of the new algorithm, which is then applied to nonlinear system identification; the results are compared with and analyzed against several classical neural network algorithms.

15.
In this paper we introduce approximate feedback linearisation using multilayer feedforward neural networks. We propose to approximate a basis of the one-dimensional codistribution of an affine nonlinear system with the derivative of a multilayer neural network [6] and to form a change of coordinates with n multilayer neural networks [5]. We prove that the transformation defines a local diffeomorphism, with which a local stabilising feedback law can be designed for a class of non-linearisable nonlinear systems.

16.
A novel approach to neural network modeling is presented in the paper. It is unique in that it allows the nets' weights to change according to changes in environmental factors even after the learning process is completed. Models of the context-dependent (cd) neuron and of one- and multilayer feedforward nets are presented, with basic learning algorithms and examples of their functioning. The Vapnik–Chervonenkis (VC) dimension of a cd neuron is derived, as well as the VC dimension of multilayer feedforward cd nets. The properties of cd nets are discussed and compared with those of traditional nets. Possible applications to classification and control problems are also outlined and an example is presented.

17.
A Generalized BP Algorithm and a Study of Network Fault Tolerance and Generalization   Cited by: 34 (self: 0, others: 34)
董聪, 刘西拉. Control and Decision, 1998, 13(2): 120-124
A generalized BP algorithm and several modes of network learning are presented. The fully parallel weight-update mode commonly used for feedforward networks is among the least efficient of these; many better weight-update modes are available. A network's generalization ability depends on its topology, and several modified learning algorithms proposed internationally to improve generalization are briefly reviewed with respect to their actual effectiveness.

18.
In this work an adaptive mechanism for choosing the activation function is proposed and described. Four bi-modal-derivative sigmoidal adaptive activation functions are used as the activation function at the hidden layer of a single-hidden-layer sigmoidal feedforward artificial neural network. These four functions are grouped into asymmetric and anti-symmetric activation functions (two in each group). For comparison, the logistic function (an asymmetric function) and the function obtained by subtracting 0.5 from it (an anti-symmetric function) are also used as activation functions for the hidden-layer nodes. The resilient backpropagation algorithm with improved weight-tracking (iRprop+) is used to adapt the parameters of the activation functions as well as the weights and/or biases of the networks. The learning tasks used to demonstrate the efficacy and efficiency of the proposed mechanism are 10 function-approximation tasks and four real benchmark problems taken from the UCI machine learning repository. The obtained results demonstrate that, for both asymmetric and anti-symmetric usage, the proposed adaptive activation functions are demonstrably as good as, if not better than, the sigmoidal function without any adaptive parameter when used as the hidden-layer activation function.

19.
The BP-SOM architecture and learning rule   Cited by: 3 (self: 0, others: 3)
For some problems, the back-propagation learning rule often used for training multilayer feedforward networks appears to have serious limitations. In this paper we describe BP-SOM, an alternative training procedure in which the traditional back-propagation learning rule is combined with unsupervised learning in self-organizing maps. While the multilayer feedforward network is trained, its hidden-unit activations are used as training material for the accompanying self-organizing maps. After a few training cycles, the maps develop, to a certain extent, self-organization. The information in the maps is used in updating the connection weights of the feedforward network. The effect is that during BP-SOM learning, the hidden-unit activations of patterns associated with the same class become more similar to each other. Results on two hard-to-learn classification tasks show that the BP-SOM architecture and learning rule offer a strong alternative for training multilayer feedforward networks with back-propagation.

20.
To address the slow convergence and frequently encountered local minima of the BP algorithm for feedforward neural networks, this paper proposes a new learning algorithm based on a fading-memory extended Kalman filter with U-D factorization. Compared with the BP and EKF learning algorithms, the new algorithm not only greatly accelerates convergence and is numerically stable, but also achieves better learning with fewer iterations and fewer hidden nodes, and it is insensitive to the choice of initial weights, initial covariance matrix, and other parameters, making it convenient for engineering applications. Simulations of nonlinear system modeling and identification show that the algorithm is a highly effective way to speed up network learning and improve learning results.
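A simplified sketch of fading-memory extended Kalman filter training for a small one-hidden-layer network (assumptions for brevity: a plain covariance update rather than the paper's U-D factorization, a numerical Jacobian, and illustrative noise and fading parameters):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, 300)
y = np.tanh(2 * X) + 0.3 * X        # toy nonlinear mapping to learn

n_h = 5
def net(w, x):
    # one-hidden-layer network; w packs all weights and biases
    W1, b1 = w[:n_h], w[n_h:2*n_h]
    W2, b2 = w[2*n_h:3*n_h], w[3*n_h]
    return W2 @ np.tanh(W1 * x + b1) + b2

w = rng.normal(scale=0.5, size=3 * n_h + 1)
P = np.eye(len(w)) * 100.0
lam = 0.995                     # fading-memory (forgetting) factor
R = 0.01                        # assumed measurement-noise variance
for x, t in zip(X, y):
    # numerical Jacobian of the network output w.r.t. the weights
    H = np.array([(net(w + 1e-6 * e, x) - net(w, x)) / 1e-6
                  for e in np.eye(len(w))])
    S = H @ P @ H + R           # innovation variance
    k = P @ H / S               # Kalman gain
    w = w + k * (t - net(w, x))
    P = (P - np.outer(k, H @ P)) / lam   # fading-memory update

rmse = np.sqrt(np.mean([(net(w, x) - t) ** 2 for x, t in zip(X, y)]))
print(round(rmse, 3))
```

Each sample updates every weight through the full covariance matrix, which is why this family of algorithms typically needs far fewer presentations than plain BP; the U-D factorization in the paper serves to keep exactly this covariance update numerically stable.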
