Similar Literature
19 similar records found
1.
A Novel Neural Network for Correcting Sensor Nonlinearity Errors   Total citations: 5 (self: 0, others: 5)
朱庆保 《软件学报》1999,10(12):1298-1303
This paper describes the principle and method of using neural networks to correct nonlinearity errors in sensing systems, and proposes a novel simplified cerebellar model neural network (SCMAC) together with its model, algorithm, and implementation techniques. The model and algorithm use direct weight-address mapping: the inputs of the training samples serve as addresses, establishing a relationship between inputs and weights. Any input can then act as a nearby weight address to locate the corresponding weight, and a high-precision output is obtained after associative interpolation. In addition, storing and addressing the weights in disk files avoids microcomputer memory overflow and makes the implementation easy. Finally, a simulation experiment is given; the results show that after correction with SCMAC, the sensor's nonlinearity error is reduced to approximately zero.
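As a rough illustration of the direct weight-address mapping described above, the sketch below assumes a 1-D input, uniform quantization, and linear associative interpolation; none of these details are taken from the paper itself.

```python
import numpy as np

# Minimal sketch of a direct weight-address table for sensor correction.
# Assumptions (not from the paper): 1-D input, uniform quantization,
# linear interpolation between the two nearest stored weights.

class SimpleSCMAC:
    def __init__(self, lo, hi, n_cells):
        self.lo, self.hi, self.n = lo, hi, n_cells
        self.w = np.zeros(n_cells)              # weight table indexed by address

    def _addr(self, x):
        # map an input value directly to a fractional table address
        a = (x - self.lo) / (self.hi - self.lo) * (self.n - 1)
        return float(np.clip(a, 0, self.n - 1))

    def train(self, xs, ys, epochs=50, lr=0.5):
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                i = int(round(self._addr(x)))   # training input acts as address
                self.w[i] += lr * (y - self.w[i])

    def predict(self, x):
        a = self._addr(x)
        i0, i1 = int(np.floor(a)), int(np.ceil(a))
        t = a - i0                              # associative interpolation
        return (1 - t) * self.w[i0] + t * self.w[i1]
```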

2.
《计算机工程与科学》2017,(10):1934-1940
Ordinary neural networks suffer from low accuracy when diagnosing the working conditions of pumping units. This paper proposes a continuous process neural network as the diagnosis model, taking as feature inputs two continuous signals, displacement and load, which directly reflect the geometric shape of the dynamometer card. To speed up learning, an extreme learning algorithm for process neural networks is proposed: training is converted into a least-squares problem, the hidden-layer output matrix is computed from the sample inputs, the Moore-Penrose generalized inverse is solved by SVD, and the hidden-layer output weights are then computed. In diagnosis experiments the model learns about 5 times faster and, compared with an ordinary neural network, improves diagnostic accuracy by about 8 percentage points, verifying the effectiveness of the method.
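The least-squares training step is easy to sketch. The fragment below shows a generic ELM-style solve, with an ordinary random hidden layer standing in for the paper's process-neuron hidden layer (an assumption of this sketch); np.linalg.pinv computes the Moore-Penrose generalized inverse via SVD, as the abstract describes.

```python
import numpy as np

# Generic ELM-style training sketch: random hidden layer, then output
# weights from a least-squares solve via the SVD-based pseudoinverse.

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50):
    """X: (n_samples, n_features), T: (n_samples, n_outputs)."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # pinv uses SVD internally
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```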

3.
This paper studies learning convergence within a unified framework called the General Memory Neural Network (GMNN). The structure of this general model class consists of three parts: quantization of the input space, a memory address generator, and a table-lookup combination producing the output. When the number of generated addresses is fixed and finite and the network output is a linear sum, GMNN can be proved to converge in the least-squares sense. CMAC (Cerebellar Model Articulation Controller) and SLLUP (Single-Layer Look-up Perceptrons) are typical representatives of this model class. The significance of this work is to provide theoretical guidance for constructing new neural network models based on local learning; finally, two examples of such constructions are given, namely generalized models of SDM (Sparse Distributed Memory) and of SLLUP.

4.
FP Learning and Synthesis Algorithms for Multilayer Feedback Neural Networks   Total citations: 19 (self: 1, others: 19)
张铃  张钹 《软件学报》1997,8(4):252-258
This paper presents FP learning and synthesis algorithms for multilayer feedback neural networks and discusses the properties of such networks. Applied to cluster analysis, they can produce clusterings at different granularities, and they offer fast convergence (linear in the number of samples), low computational cost (bilinear in the number of samples and the input/output dimensions), few network elements, simple weight coefficients (taking only 3 values), and easy hardware implementation. The learning and synthesis problems of neural networks as clusterers are thus solved fairly satisfactorily.

5.
To address the fluctuation in classification accuracy caused by the random selection of input-layer weights and thresholds in the multi-output extreme learning machine (MELM), a multi-class ELM model (CS-ACFPA-MELM) based on an improved flower pollination algorithm (CS-ACFPA) is proposed. An adaptive operator and a Tent-map strategy are used to improve the search behavior of the flower pollination algorithm, and a cost-sensitive fitness function is constructed so that the algorithm better matches the output of the MELM model. The improved flower pollination algorithm and the cost-sensitive fitness function are then used to optimize the input weights and thresholds of the extreme learning machine, improving the classification performance of the MELM model. Comparative experiments verify the effectiveness of the CS-ACFPA improvements to the MELM model, and demonstrate the advantage of CS-ACFPA-MELM on large-scale samples as well as its applicability to small samples.
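A minimal sketch of the Tent-map initialization strategy mentioned above follows; the exact CS-ACFPA operators and the cost-sensitive fitness are not reproduced, and the map breakpoint and starting value are illustrative choices.

```python
import numpy as np

# Tent-map chaotic initialization for a metaheuristic population
# (illustrative parameters; not the paper's exact CS-ACFPA operators).

def tent_population(n_agents, dim, lo, hi, x0=0.37):
    pop = np.empty((n_agents, dim))
    x = x0
    for i in range(n_agents):
        for j in range(dim):
            # tent map on (0, 1): spreads iterates chaotically
            x = x / 0.7 if x < 0.7 else (1.0 - x) / 0.3
            pop[i, j] = lo + x * (hi - lo)   # scale to the search bounds
    return pop
```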

6.
For the identification of complex nonlinear dynamic systems, an identification model and method based on process neural networks (PNN) are proposed. Given the model structure to be identified and dynamic sample data reflecting the system's modal changes, the PNN's nonlinear transformation mechanism for time-varying input/output signals and its adaptive learning ability are used to build a PNN-based system identification model. The model simultaneously captures the spatially weighted aggregation of multiple time-varying input signals and the accumulated effect over time, directly realizing the dynamic input/output mapping of the nonlinear system. PNN models for parallel and series-parallel identification structures are constructed, the corresponding learning algorithms and implementation mechanisms are given, and experimental results verify the effectiveness of the model and algorithms.
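A single process neuron can be sketched as follows: the time-varying inputs are spatially weighted at each instant, accumulated over time, and passed through a static activation. The discretized signals, uniform sampling, and this particular quadrature are assumptions of the sketch, not the paper's formulation.

```python
import numpy as np

# Sketch of one process neuron: spatial weighting of several sampled
# time-varying inputs, accumulation over time, then a static activation.

def process_neuron(x, w, dt, activation=np.tanh):
    """x, w: arrays of shape (n_inputs, n_steps) holding the sampled
    input signals and weight functions; dt: sampling interval."""
    spatial = np.sum(w * x, axis=0)      # weighted aggregation per instant
    accumulated = np.sum(spatial) * dt   # time accumulation (integral)
    return activation(accumulated)
```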

7.
A Feedback Process Neural Network Model Based on the Walsh Transform and Its Learning Algorithm   Total citations: 1 (self: 0, others: 1)
A process neural network model with feedback inputs is proposed. The model has three layers, with process neurons in both the hidden and output layers. The input layer accepts continuous signals; the hidden layer performs spatial aggregation of the input signals and a point-by-point mapping to the output layer, while feeding its output signal point-by-point back to the input layer; the output layer performs temporal and spatial aggregation of the hidden-layer output signals and produces the system output. A learning algorithm for the model is given, based on applying the Walsh transform to the weight functions. Simulation experiments demonstrate the effectiveness of the model and algorithm.

8.
For the classification of dynamic signals, a feedback process neural network model and a classification method based on it are proposed. The network can take time-varying functions directly as inputs; its information flow includes both the forward flow of a feedforward network and feedback from nodes in later layers to nodes in earlier layers, as well as self-feedback of each node's own output, so it can be applied directly to dynamic signal classification. Because the feedback process neural network adds feedback of neuron output information while learning the input samples, it can improve learning efficiency and stability. A concrete learning algorithm is given, and experiments on classifying a set of time-varying function samples verify the effectiveness of the model and algorithm.

9.
A Semi-Supervised Learning Algorithm Based on Hierarchical Gaussian Mixture Models   Total citations: 10 (self: 0, others: 10)
A semi-supervised learning algorithm based on hierarchical Gaussian mixture models is proposed. In semi-supervised learning, the training set contains both labeled and unlabeled samples. If a Gaussian mixture model is fitted to the distribution of the labeled samples of each class, and a hierarchical Gaussian mixture model whose number of Gaussians equals the number of classes is then fitted to the distribution of all (labeled and unlabeled) samples, the task becomes a semi-supervised learning problem based on a hierarchical Gaussian mixture model. Based on the EM algorithm, a Gaussian mixture model is first learned from the labeled samples of each class; these model parameters and the class frequencies of the labeled samples then initialize the parameters of the hierarchical Gaussian mixture model, yielding the proposed semi-supervised learning algorithm. Experiments on recognizing printed digits on bank bills show that the algorithm achieves good results.
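The two-stage idea can be sketched with scikit-learn's GaussianMixture (using this library is the sketch's assumption; the paper predates it): per-class statistics from the labeled data warm-start EM over all samples, and each mixture component is assumed to stay aligned with its class during EM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: one Gaussian per class, initialized from the labeled samples,
# then refined by EM over labeled + unlabeled data.

def semi_supervised_gmm(X_lab, y_lab, X_unl):
    classes = np.unique(y_lab)
    means = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
    weights = np.array([(y_lab == c).mean() for c in classes])
    gmm = GaussianMixture(
        n_components=len(classes),
        means_init=means,       # init from per-class labeled statistics
        weights_init=weights,   # init from labeled class frequencies
    )
    gmm.fit(np.vstack([X_lab, X_unl]))   # EM over all samples
    return gmm, classes                  # component c assumed ~ class c
```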

10.
For modeling and simulating real systems whose inputs and outputs are both time-varying functions or processes, a feedback process neural network model with time-varying inputs and outputs is proposed. The first hidden layer performs spatially weighted aggregation and activation on the time-varying signals from the input layer, and passes its output both to the second hidden layer and back to the input layer; the second hidden layer performs spatially weighted aggregation, temporal accumulation, and activation on its time-varying inputs, and passes its output to the output layer. The corresponding learning algorithm is given, and an example verifies the effectiveness of the model and its learning algorithm.

11.
This paper shows how a slowly time-varying nonlinear mapping can be learned if, for every possible input value, the corresponding estimated output value is stored in memory. This representation can be called a "flash map", a pointwise representation, or a look-up table, and it provides very fast access to the mapping. The learning process is performed online during regular operation of the system and must avoid "adaptation holes", which can occur when some points are updated more frequently than others. After analyzing the problems of previous approaches, we show how radial basis function networks can be modified for flash maps and present the tent roof tensioning algorithm, which is designed exclusively for learning flash maps. The convergence of the tent roof tensioning algorithm is proved. Finally, we compare the two approaches, concluding that under the flash map restriction the tent roof tensioning algorithm is the better choice for learning low-dimensional mappings if a polygonal approximation of the desired mapping is sufficiently smooth.
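The flash-map representation itself (not the tent roof tensioning algorithm, which this sketch does not attempt to reproduce) can be pictured as a grid of stored output estimates updated online; uneven update counts across cells are where "adaptation holes" would show up.

```python
import numpy as np

# Sketch of a 1-D flash map (look-up table) with online updates.
# Cell size, learning rate, and the update rule are illustrative.

class FlashMap:
    def __init__(self, lo, hi, n_cells):
        self.lo, self.hi = lo, hi
        self.table = np.zeros(n_cells)    # pointwise output estimates
        self.counts = np.zeros(n_cells)   # update counts per cell

    def _cell(self, x):
        i = int((x - self.lo) / (self.hi - self.lo) * (len(self.table) - 1))
        return min(max(i, 0), len(self.table) - 1)

    def update(self, x, y, lr=0.2):
        i = self._cell(x)
        self.table[i] += lr * (y - self.table[i])   # online estimate
        self.counts[i] += 1     # uneven counts hint at adaptation holes

    def __call__(self, x):
        return self.table[self._cell(x)]            # very fast access
```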

12.
To address the slow convergence, low accuracy, and tendency to fall into local optima of the whale optimization algorithm (WOA), an improved algorithm (IWTWOA) is proposed that uses a nonlinear convergence factor, an inertia weight coordinated with the factor a, a time-varying independent search probability, and immune memory. The nonlinear convergence factor, coordinated inertia weight, and time-varying independent search probability improve the WOA iteration model, balancing the algorithm's global exploration and local exploitation and effectively avoiding local optima; the immune memory mechanism borrowed from immune algorithms speeds up convergence. Performance tests on 15 benchmark functions show that IWTWOA improves stability, accuracy, and convergence speed; it is finally applied to a path planning problem with good results.
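As an illustration of how a nonlinear convergence factor changes the WOA search, the sketch below uses a cosine schedule (this concrete schedule is the sketch's choice, not necessarily the paper's) and shows where the factor enters the standard WOA coefficient A.

```python
import numpy as np

# Standard WOA decays a linearly from 2 to 0; a nonlinear schedule keeps
# a large early (exploration) and shrinks it faster later (exploitation).

def convergence_factor(t, t_max):
    return 1.0 + np.cos(np.pi * t / t_max)   # 2 -> 0, nonlinearly

# The factor enters the WOA step A = 2*a*r - a, with r ~ U(0, 1):
def coefficient_A(t, t_max, rng=np.random.default_rng()):
    a = convergence_factor(t, t_max)
    return 2 * a * rng.random() - a
```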

13.
In the conventional backpropagation (BP) learning algorithm used to train the connecting weights of an artificial neural network (ANN), a sigmoidal activation function with a fixed slope is used. This limitation leads to slower training, because only the weights of the different layers are adjusted by the conventional BP algorithm. To accelerate convergence during the training phase of the ANN, in addition to the weight updates, the slope of the sigmoid function associated with each artificial neuron can also be adjusted by a newly developed learning rule. To achieve this objective, this paper derives new BP learning rules for adjusting the slope of the activation function associated with each neuron. The combined rules for both connecting weights and sigmoid slopes are then applied to the ANN structure to achieve faster training. In addition, two benchmark problems, classification and nonlinear system identification, are solved using the trained ANN. The results of simulation-based experiments demonstrate that, in general, the proposed BP learning rules for slope and weight adjustment provide superior convergence during training as well as improved root mean square error and mean absolute deviation on the classification and nonlinear system identification problems.
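The slope-update idea can be sketched for a single sigmoidal unit: treat the slope lambda of y = sigmoid(lambda * net) as a trainable parameter alongside the weights. The plain gradient-descent updates below are an illustration consistent with that idea, not the paper's exact derivation.

```python
import numpy as np

# One unit, squared error E = 0.5 * (y - t)**2, y = sigmoid(lam * w.x).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(w, lam, x, t, lr=0.1):
    net = w @ x
    y = sigmoid(lam * net)
    err = y - t
    dy_dz = y * (1.0 - y)             # sigmoid derivative at its argument
    grad_w = err * dy_dz * lam * x    # dE/dw
    grad_lam = err * dy_dz * net      # dE/dlambda: the slope update
    return w - lr * grad_w, lam - lr * grad_lam
```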

14.
To overcome the shortcomings of existing learning algorithms for wavelet networks, this paper combines the Levenberg-Marquardt algorithm (LM algorithm) with the least-squares method and proposes a new hybrid learning algorithm for wavelet networks. In the hybrid algorithm, LM trains the nonlinear parameters of the wavelet network while least squares trains the linear parameters. A numerical simulation identifying a chaotic system is given and compared with an improved BP algorithm and plain LM; the results show that the proposed algorithm has very good convergence behavior and speed.
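The alternating structure of the hybrid algorithm can be sketched on a small RBF-like model standing in for the wavelet network (the model, and the use of SciPy's LM implementation, are assumptions of this sketch): the output weights are linear given the nonlinear parameters, so they come from a least-squares solve, while LM refines the nonlinear parameters.

```python
import numpy as np
from scipy.optimize import least_squares

def design(theta, X):
    c, s = np.split(theta, 2)                     # centers and scales
    return np.exp(-((X[:, None] - c) / s) ** 2)   # basis matrix (n, m)

def fit(X, y, theta0, iters=5):
    theta = theta0.copy()
    for _ in range(iters):
        H = design(theta, X)
        w, *_ = np.linalg.lstsq(H, y, rcond=None)   # linear params: lstsq
        res = least_squares(                        # nonlinear params: LM
            lambda th: design(th, X) @ w - y, theta, method="lm")
        theta = res.x
    return theta, w
```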

15.
Measurement systems for robot speed and position sensors built from optical gratings usually subdivide the signal in hardware, which suffers from low subdivision factors and complex hardware. This paper therefore studies a neural-network subdivision method and a direct-mapping cerebellar model neural network for subdivision. Experiments and practical use show that the cerebellar network studied here offers high learning accuracy, high speed, and a simple algorithm; very few training samples suffice to achieve high subdivision accuracy, greatly improving resolution, simplifying hardware design, and improving system reliability.

16.
This paper gives a general insight into how the neuron structure in a multilayer perceptron (MLP) affects its ability to deal with classification. Most common neuron structures are based on monotonic activation functions and linear input mappings. In comparison, the proposed neuron structure uses a nonmonotonic activation function and/or a nonlinear input mapping to increase the power of a single neuron. An MLP of these higher-power neurons usually requires fewer hidden nodes than a conventional MLP for solving classification problems. Fewer neurons mean fewer network weights that must be optimally determined by a learning algorithm, and the performance of a learning algorithm usually improves as the number of weights, i.e., the dimension of the search space, is reduced. This helps the learning algorithm escape local optima, and the convergence speed of the algorithm increases regardless of which algorithm is used for learning. Several 2-dimensional examples are worked through by hand to visualize how the number of neurons can be reduced by choosing an appropriate neuron structure. Moreover, to show the efficiency of the proposed scheme on real-world classification problems, the Iris data classification problem is solved using an MLP whose neurons are equipped with nonmonotonic activation functions, and the result is compared with two well-known monotonic activation functions.
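A Gaussian bump is a simple example of a nonmonotonic activation in the spirit described above (the paper's exact activation and input mapping are not reproduced here); a single such unit can respond to a band in input space, which no monotonic unit can do alone.

```python
import numpy as np

def sigmoid(z):            # monotonic reference
    return 1.0 / (1.0 + np.exp(-z))

def gaussian_act(z):       # nonmonotonic: rises, then falls
    return np.exp(-z ** 2)

# One nonmonotonic unit classifying the band -1 < w.x + b < 1 as positive,
# something a single monotonic unit (a half-space) cannot represent:
w, b = np.array([2.0, -1.0]), 0.5
x = np.array([0.3, 0.8])
print(gaussian_act(w @ x + b) > 0.5)   # True: x lies inside the band
```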

17.
When deep reinforcement learning algorithms are trained in complex dynamic environments, partial observability makes it hard for the agent to obtain useful information, so it fails to learn a good policy and converges slowly. To address these typical problems, an improved DDPG algorithm based on LSTM and an asymmetric actor-critic network is proposed. The algorithm introduces an LSTM structure into the actor-critic network to infer, via memory, the hidden state of the partially observable Markov decision process; meanwhile, while the actor uses only RGB images as its partially observed input, the critic is trained on the full state of the simulation environment, forming an asymmetric network that speeds up training convergence. Simulation experiments on robot-arm grasping in ROS show that the algorithm achieves a higher success rate than DDPG, PPO, and LSTM-DDPG, with faster convergence.
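The asymmetry itself is easy to sketch in PyTorch (layer sizes are placeholders, and the image encoder that would produce the actor's observation embedding is omitted): the actor carries the LSTM memory over partial observations, while the critic consumes the simulator's full state.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):                    # partial observations + memory
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, act_dim), nn.Tanh())

    def forward(self, obs_seq):            # (batch, time, obs_dim)
        out, _ = self.lstm(obs_seq)
        return self.head(out[:, -1])       # action from last hidden state

class Critic(nn.Module):                   # full simulator state, no images
    def __init__(self, state_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))
```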

18.
To address the uneven initial population distribution, slow convergence, weak global search, and susceptibility to local optima of the standard WOA, a whale optimization algorithm improved by a mix of strategies is proposed. A Sobol sequence initializes the population so the initial solutions are spread more evenly over the solution space; a nonlinear time-varying factor and an inertia weight balance and strengthen global exploration and local exploitation, combined with a stochastic learning strategy that increases population diversity during iteration; and Cauchy mutation improves the algorithm's ability to escape local optima. Optimization experiments on 12 benchmark functions and on parameter estimation for a water demand forecasting model show that the hybrid-strategy whale optimization algorithm clearly improves both accuracy and convergence speed.
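Two of the listed strategies are simple to sketch with NumPy/SciPy (the scale parameters are illustrative): Sobol-sequence initialization for an even spread over the bounds, and Cauchy mutation whose heavy tails help jump out of local optima.

```python
import numpy as np
from scipy.stats import qmc

def sobol_init(n_agents, dim, lo, hi, seed=0):
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    u = sampler.random(n_agents)          # low-discrepancy points in [0,1)^d
    return qmc.scale(u, lo, hi)           # spread evenly over the bounds

def cauchy_mutate(best, lo, hi, scale=0.1, rng=np.random.default_rng()):
    step = rng.standard_cauchy(best.shape) * scale * (hi - lo)
    return np.clip(best + step, lo, hi)   # heavy tails escape local optima
```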

19.
A robust backpropagation learning algorithm for function approximation   Total citations: 3 (self: 0, others: 3)
The backpropagation (BP) algorithm allows multilayer feedforward neural networks to learn input-output mappings from training samples. Due to the nonlinear modeling power of such networks, the learned mapping may interpolate all the training points. When erroneous training data are employed, the learned mapping can oscillate badly between data points. In this paper we derive a robust BP learning algorithm that is resistant to noise effects and capable of rejecting gross errors during the approximation process. The spirit of this algorithm comes from the pioneering work in robust statistics by Huber and Hampel. Our work differs from that of M-estimators in two respects: 1) the shape of the objective function changes with iteration time; and 2) the parametric form of the functional approximator is a nonlinear cascade of affine transformations. In contrast to the conventional BP algorithm, the robust BP algorithm has three advantages: 1) it approximates an underlying mapping rather than interpolating the training samples; 2) it is robust against gross errors; and 3) its rate of convergence is improved, since the influence of incorrect samples is gracefully suppressed.
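The robust-statistics idea can be sketched as a Huber-style error whose gradient is bounded outside a threshold, so gross errors stop dominating the weight updates; the abstract's time-varying objective is approximated here by a shrinking threshold schedule, which is this sketch's assumption.

```python
import numpy as np

# Huber-style influence function: quadratic near zero, bounded beyond
# the threshold, so outlier residuals cannot dominate BP updates.

def huber_grad(residual, delta):
    return np.where(np.abs(residual) <= delta,
                    residual,                     # quadratic zone
                    delta * np.sign(residual))    # linear zone: bounded

def threshold(t, delta0=2.0, rate=0.01):
    return delta0 / (1.0 + rate * t)   # tighten rejection over iterations
```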
