Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
To address the trajectory-tracking optimization problem of wheeled mobile robots, an iterative-filtering learning control method is proposed that adapts well, converges quickly, and keeps the tracking error small. It combines the strengths of iterative learning control and the Kalman filtering algorithm: the iterative learning law is improved by introducing a state-compensation term and designing a new iterative learning gain matrix. The improved iterative learning control tracks the desired circular trajectory faster, more accurately, and more effectively. A discrete Kalman filter is used to filter disturbances and noise, suppressing their effect on trajectory tracking and making the control algorithm better suited to engineering applications. Computer experiments and simulations show that the method has good trajectory-tracking capability.
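A minimal sketch of a P-type iterative learning control law combined with a discrete Kalman filter on the measured output, in the spirit of this abstract. The first-order plant, gains, and noise levels below are illustrative assumptions, not the authors' robot model or their improved learning law with the state-compensation term.

```python
import numpy as np

rng = np.random.default_rng(0)
T, iters, a, b = 100, 30, 0.5, 0.5             # horizon, ILC iterations, plant y[t] = a*y[t-1] + b*u[t-1]
q_var, r_var = 1e-4, 1e-2                       # process / measurement noise variances (assumed)
gamma = 1.0                                     # learning gain (assumed)
y_des = np.sin(np.linspace(0, 2 * np.pi, T))    # desired trajectory
u = np.zeros(T)

for k in range(iters):
    # run the plant once over the horizon with noisy measurements
    y, y_meas = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        y[t] = a * y[t - 1] + b * u[t - 1] + np.sqrt(q_var) * rng.normal()
        y_meas[t] = y[t] + np.sqrt(r_var) * rng.normal()

    # scalar discrete Kalman filter using the nominal plant model
    y_hat, P = np.zeros(T), 1.0
    for t in range(1, T):
        y_pred = a * y_hat[t - 1] + b * u[t - 1]
        P_pred = a * a * P + q_var
        K = P_pred / (P_pred + r_var)
        y_hat[t] = y_pred + K * (y_meas[t] - y_pred)
        P = (1 - K) * P_pred

    # P-type iterative learning update on the filtered tracking error
    e = y_des - y_hat
    u[:-1] += gamma * e[1:]

print("RMS tracking error after learning:", np.sqrt(np.mean((y_des - y_hat) ** 2)))
```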

2.
Adaptively enhanced convolutional neural network for image recognition
Objective: To further improve the convergence and recognition accuracy of convolutional neural networks and to strengthen their generalization ability, an adaptively enhanced convolutional neural network image-recognition algorithm is proposed. Method: An adaptive enhancement model is constructed; the sources of error in the classification process of a convolutional neural network and the error-feedback pattern are analyzed, and training is targeted at the classification errors, so that classification features are adaptively enhanced according to the iteration count and the recognition results and the network weights are optimally adjusted. The adaptively enhanced convolutional neural network is compared with several algorithms in terms of convergence speed and recognition accuracy, and its generalization ability is tested on several data sets. Results: The comparative experiments show that the algorithm substantially improves convergence and recognition accuracy: at convergence, the misrecognition rate is reduced by 20.93% on a handwritten-digit data set and by 11.82% and 15.12% on handwritten-letter and hyperspectral-image data sets; compared with other convolutional-network optimization algorithms, the misrecognition rate is up to 58.29% and 43.50% lower than that of the dynamic adaptive pooling algorithm and the dual-optimization algorithm; with optimization based on different gradient algorithms, the misrecognition rate is reduced by up to 33.11%; compared with other image-recognition algorithms, the recognition rate is also considerably higher. Conclusion: The experimental results show that the adaptively enhanced convolutional neural network achieves adaptive enhancement of classification features, clearly improves convergence and recognition accuracy, and generalizes well across multiple data sets. The adaptive enhancement model can be extended to other deep-learning algorithms related to convolutional neural networks.
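A minimal sketch of error-driven sample re-weighting between training rounds, one possible reading of the "adaptive enhancement" idea above. The logistic model and the re-weighting factors are illustrative assumptions; the paper applies the idea inside a convolutional network, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)           # toy binary labels

w = np.zeros(2)                                      # model weights
sample_w = np.full(len(X), 1.0 / len(X))             # per-sample weights

def predict(X, w):
    return 1.0 / (1.0 + np.exp(-X @ w))

for round_ in range(10):
    # weighted gradient-descent epoch on the current sample weights
    for _ in range(100):
        p = predict(X, w)
        grad = X.T @ (sample_w * (p - y))
        w -= 0.5 * grad
    # increase the weight of misclassified samples, decrease correct ones
    pred = (predict(X, w) > 0.5).astype(float)
    miss = pred != y
    sample_w *= np.where(miss, 1.5, 0.9)             # assumed update factors
    sample_w /= sample_w.sum()

print("training accuracy:", np.mean((predict(X, w) > 0.5) == y))
```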

3.
Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that rules extracted from networks trained with this pruning heuristic are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state, triple-parity grammar. Further simulations indicate that this pruning method can have generalization performance superior to that obtained by training with weight decay.
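A minimal sketch of the prune-and-retrain loop: train, remove the smallest-magnitude weights, then retrain with those weights held at zero. The paper prunes recurrent networks; this sketch uses a tiny feedforward net for brevity, and the 50% pruning ratio, architecture, and data are assumptions rather than the paper's heuristic.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = np.sin(X @ np.array([1.0, -2.0, 0.0, 0.0]))      # only two inputs matter

W1 = rng.normal(scale=0.5, size=(4, 16))              # input-to-hidden weights
w2 = rng.normal(scale=0.5, size=16)                   # hidden-to-output weights

def forward(X):
    h = np.tanh(X @ W1)
    return h, h @ w2

def train(epochs, mask):
    global W1, w2
    for _ in range(epochs):
        h, out = forward(X)
        err = out - y
        gw2 = h.T @ err / len(X)
        gW1 = X.T @ ((err[:, None] * w2) * (1 - h ** 2)) / len(X)
        w2 -= 0.1 * gw2
        W1 -= 0.1 * (gW1 * mask)                       # keep pruned weights at zero

mask = np.ones_like(W1)
train(2000, mask)

# prune the smallest-magnitude input-to-hidden weights, then retrain
threshold = np.quantile(np.abs(W1), 0.5)               # assumed pruning ratio: 50%
mask = (np.abs(W1) >= threshold).astype(float)
W1 *= mask
train(2000, mask)

_, out = forward(X)
print("post-pruning training MSE:", np.mean((out - y) ** 2))
```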

4.
姜雷, 李新. 《计算机时代》, 2010, (12): 29-30
In the training of a standard BP neural network, the error function is used as the basis for weight adjustment and the weights are updated with a fixed learning rate, which often makes the network learn too slowly or even fail to converge. Starting from the stability and speed of network convergence, this paper analyzes the error function and the weight-update function, discusses the role of the learning rate in the algorithm in detail, and proposes a method that dynamically adjusts the learning rate according to the change in the error. The method is simple and practical, effectively prevents divergence during training, and improves the convergence speed and stability of the network.
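A minimal sketch of an error-driven learning-rate rule of the kind described above: raise the rate when the epoch error decreases, shrink it (and roll back the step) when the error grows. The increase/decrease factors and the toy quadratic problem are assumptions, not the paper's exact rule.

```python
import numpy as np

def train_adaptive_lr(grad_fn, loss_fn, w, lr=0.1, up=1.05, down=0.5, epochs=200):
    prev_loss = loss_fn(w)
    for _ in range(epochs):
        step = lr * grad_fn(w)
        new_w = w - step
        new_loss = loss_fn(new_w)
        if new_loss <= prev_loss:
            w, prev_loss, lr = new_w, new_loss, lr * up    # accept step, speed up
        else:
            lr *= down                                      # reject step, slow down
    return w, prev_loss, lr

# toy quadratic problem to exercise the rule
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
loss = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b
w, final_loss, final_lr = train_adaptive_lr(grad, loss, np.zeros(2))
print("final loss:", final_loss, "final learning rate:", final_lr)
```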

5.
The prediction accuracy and generalization ability of neural/neurofuzzy models for chaotic time series prediction depend strongly on the network model employed as well as on the learning algorithm. In this study, several neural and neurofuzzy models with different learning algorithms are examined for the prediction of several benchmark chaotic systems and time series. The prediction performance of locally linear neurofuzzy models with the recently developed Locally Linear Model Tree (LoLiMoT) learning algorithm is compared with that of the Radial Basis Function (RBF) neural network with the Orthogonal Least Squares (OLS) learning algorithm, the MultiLayer Perceptron neural network with the error back-propagation learning algorithm, and the Adaptive Network based Fuzzy Inference System. In particular, cross-validation techniques based on the evaluation of error indices on multiple validation sets are utilized to optimize the number of neurons and to prevent overfitting in the incremental learning algorithms. To make a fair comparison between neural and neurofuzzy models, they are compared at their best structure based on their prediction accuracy, generalization, and computational complexity. The experiments are designed mainly to analyze the generalization capability and accuracy of the learning techniques when dealing with a limited number of training samples from deterministic chaotic time series, but the effect of noise on the performance of the techniques is also considered. Various chaotic systems and time series, including the Lorenz system, the Mackey-Glass chaotic equation, the Henon map, the AE geomagnetic activity index, and sunspot numbers, are examined as case studies. The obtained results indicate the superior performance of the incremental learning algorithms and their respective networks, such as OLS for the RBF network and LoLiMoT for the locally linear neurofuzzy model.

6.
Injecting input noise during feedforward neural network (FNN) training can improve generalization performance markedly. Reported works justify this fact by arguing that noise injection is equivalent to a smoothing regularization with the input noise variance playing the role of the regularization parameter. The success of this approach depends on the appropriate choice of the input noise variance. However, it is often not known a priori whether the degree of smoothness imposed on the FNN mapping is consistent with the unknown function to be approximated. In order to gain better control over this smoothing effect, a loss function is proposed that balances the smoothed fitting induced by noise injection against the precision of approximation. The second term, which aims at penalizing the undesirable effect of input noise injection, or equivalently at controlling the deviation of the randomly perturbed loss, is obtained by expressing a certain distance between the original loss function and its randomly perturbed version. In fact, this term can be derived in general for parametric models that satisfy the Lipschitz property. An example is included to illustrate the effectiveness of learning with the proposed loss function when noise injection is used.
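A minimal sketch of training with input-noise injection plus a penalty on the deviation between the clean and the perturbed loss. The linear model, noise level, penalty weight, and the use of a squared deviation as the "distance" between the two losses are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 2.0]) + 0.1 * rng.normal(size=200)

w = np.zeros(3)
sigma, lam, lr = 0.3, 0.5, 0.05                      # noise std, penalty weight, step size (assumed)

def mse(Xb, w):
    return np.mean((Xb @ w - y) ** 2)

for _ in range(500):
    Xn = X + sigma * rng.normal(size=X.shape)        # randomly perturbed inputs

    def objective(w):
        # perturbed loss plus a penalty on its deviation from the clean loss
        dev = mse(Xn, w) - mse(X, w)
        return mse(Xn, w) + lam * dev ** 2

    # simple central-difference gradient keeps the sketch model-agnostic
    g, eps = np.zeros_like(w), 1e-5
    for i in range(3):
        e = np.zeros(3)
        e[i] = eps
        g[i] = (objective(w + e) - objective(w - e)) / (2 * eps)
    w -= lr * g

print("learned weights:", w)
```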

7.
New dynamical optimal learning for linear multilayer FNN
This letter presents a new dynamical optimal learning (DOL) algorithm for three-layer linear neural networks and investigates its generalization ability. The optimal learning rates can be fully determined during the training process. The mean squared error (MSE) is guaranteed to decrease stably, and the learning is less sensitive to the initial parameter settings. The simulation results illustrate that the proposed DOL algorithm gives better generalization performance and faster convergence compared to the standard error back-propagation algorithm.
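A minimal sketch of determining the learning rate optimally at every step for a quadratic (linear-model) MSE objective, where the exact minimizing step along the gradient has a closed form. This only illustrates the flavour of a "dynamical optimal" rate; it is not the letter's three-layer derivation, and the toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5)

w = np.zeros(5)
H = 2 * X.T @ X / len(X)                      # Hessian of the MSE in w
for step in range(50):
    g = 2 * X.T @ (X @ w - y) / len(X)        # gradient of the MSE
    lr = (g @ g) / (g @ H @ g + 1e-12)        # exact line-search step for a quadratic
    w -= lr * g

print("final MSE:", np.mean((X @ w - y) ** 2))
```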

8.
An efficient constrained training algorithm for feedforward networks
A novel algorithm is presented which supplements the training phase in feedforward networks with various forms of information about the desired learning properties. This information is represented by conditions which must be satisfied in addition to the demand for minimization of the usual mean square error cost function. The purpose of these conditions is to improve convergence, learning speed, and generalization properties through prompt activation of the hidden units, optimal alignment of successive weight vector offsets, elimination of excessive hidden nodes, and regulation of the magnitude of search steps in the weight space. The algorithm is applied to several small- and large-scale binary benchmark training tasks, to test its convergence ability and learning speed, as well as to a large-scale OCR problem, to test its generalization capability. Its performance in terms of percentage of local minima, learning speed, and generalization ability is evaluated and found to be superior to that of the backpropagation algorithm and its variants, especially when the statistical significance of the results is taken into account.

9.
To address the construction of the experience-replay mechanism in deep reinforcement learning algorithms, a resampling-based prioritized replay mechanism driven by TD error is proposed. To counter the training-set collapse that this mechanism can cause, a rank-based stratified sampling algorithm is introduced as an improvement, and the mechanism is then used to improve several typical DQN-based deep reinforcement learning algorithms. Comparative simulation experiments on the Cart Pole control problem from the OpenAI Gym platform show that the prioritized mechanism raises the quality of the training samples and yields an effective approximation of the value function, with good learning efficiency and generalization performance; both convergence speed and training performance are clearly improved.
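A minimal sketch of rank-based, stratified sampling from a replay buffer keyed on TD error, in the spirit of the abstract. The buffer contents, the priority exponent, and the stratum layout are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(5)
buffer_size, batch_size, alpha = 1000, 32, 0.7
td_errors = np.abs(rng.normal(size=buffer_size))      # stand-in TD errors

# rank-based priorities: p_i = 1 / rank(|delta_i|), largest error gets rank 1
order = np.argsort(-td_errors)
ranks = np.empty(buffer_size, dtype=int)
ranks[order] = np.arange(1, buffer_size + 1)
priorities = (1.0 / ranks) ** alpha
probs = priorities / priorities.sum()

# stratified sampling: split the probability mass into batch_size equal strata
# and draw one transition from each, so the batch always spans high- and
# low-error transitions instead of collapsing onto a few samples
cdf = np.cumsum(probs)
batch_idx = []
for k in range(batch_size):
    u = rng.uniform(k / batch_size, (k + 1) / batch_size)
    batch_idx.append(min(np.searchsorted(cdf, u), buffer_size - 1))
batch_idx = np.array(batch_idx)

print("sampled indices:", batch_idx[:8])
print("mean |TD error| in batch vs buffer:", td_errors[batch_idx].mean(), td_errors.mean())
```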

10.
Natural gradient learning is known to be efficient in escaping plateaus, which are a main cause of the slow learning speed of neural networks. An adaptive natural gradient learning method for practical implementation has also been developed, and its advantage in real-world problems has been confirmed. In this letter, we deal with the generalization performance of the natural gradient method. Since natural gradient learning makes the parameters fit the training data quickly, overfitting may easily occur, which results in poor generalization performance. To solve this problem, we introduce a regularization term into natural gradient learning and propose an efficient method for optimizing the scale of regularization by using a generalized Akaike information criterion (the network information criterion, NIC). We discuss the properties of the regularization strength optimized by the NIC through theoretical analysis as well as computer simulations. We confirm the computational efficiency and generalization performance of the proposed method in real-world applications through computational experiments on benchmark problems.

11.
Sensitivity to noise in bidirectional associative memory (BAM)
The original Hebbian encoding scheme of the bidirectional associative memory (BAM) provides poor pattern capacity and recall performance. Based on Rosenblatt's perceptron learning algorithm, the pattern capacity of the BAM has been enlarged and perfect recall of all training pattern pairs can be guaranteed. However, these methods put their emphasis on pattern capacity rather than on error-correction capability, which is another critical property of the BAM. This paper analyzes the sensitivity to noise in the BAM and derives a useful idea for improving its noise immunity. Some researchers have found that the noise sensitivity of the BAM relates to the minimum absolute value of the net inputs (MAV). The analysis of failed associations in this paper, however, shows that it is related not only to the MAV but also to the variance of the weights associated with the synaptic connections; in fact, it is a monotonically increasing function of the quotient of the MAV divided by the variance of the weights. This observation provides a useful principle for improving the error-correction capability of the BAM. Several revised encoding schemes, such as small-variance learning for BAM (SVBAM), evolutionary pseudorelaxation learning for BAM (EPRLAB), and evolutionary bidirectional learning (EBL), are introduced to illustrate this principle. All of these methods achieve better noise immunity than their original versions, and they have no negative effect on the pattern capacity of the BAM. The convergence of these methods is also discussed: if solutions exist, EPRLAB and EBL always converge to a globally optimal solution in the sense of both pattern capacity and noise immunity, whereas the convergence of SVBAM may be affected by a preset function.
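A minimal sketch that builds a classical Hebbian (outer-product) BAM from bipolar pattern pairs and computes the indicator discussed above: the minimum absolute net input (MAV) divided by the variance of the connection weights. The pattern sizes and contents are assumed toy values; none of the revised encoding schemes (SVBAM, EPRLAB, EBL) are implemented here.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pairs, n_x, n_y = 4, 16, 12
X = rng.choice([-1, 1], size=(n_pairs, n_x))           # bipolar X-layer patterns
Y = rng.choice([-1, 1], size=(n_pairs, n_y))           # bipolar Y-layer patterns

# classical Hebbian (outer-product) encoding: W = sum_k x_k y_k^T
W = X.T @ Y

# net inputs seen by the Y layer when each stored X pattern is presented
net = X @ W                                             # shape (n_pairs, n_y)
margins = Y * net                                       # signed margins (positive = correct recall)

mav = np.min(np.abs(net))                               # minimum absolute net input
weight_var = W.var()                                    # variance of the connection weights

print("MAV of net inputs:      ", mav)
print("minimum signed margin:  ", margins.min())
print("MAV / weight variance:  ", mav / weight_var)     # noise-immunity indicator from the abstract
```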

12.
In this paper, the Correntropy Induced Metric (CIM) is employed as an alternative to the well-known mean square error (MSE) in the Chebyshev functional link artificial neural network (CFLANN) in order to deal with noisy training data and enhance generalization performance. The MSE performs well under Gaussian noise but is sensitive to large outliers. The CIM, as a local similarity measure, can significantly improve the anti-noise ability of the CFLANN. The convergence of the proposed algorithm, namely the CFLANN based on CIM (CFLANNCIM), is analyzed. Simulation results on nonlinear channel identification show that CFLANNCIM performs much better than the traditional CFLANN and multilayer perceptron (MLP) neural networks trained under the MSE criterion.
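A minimal sketch of replacing the MSE criterion with a correntropy-based loss, whose Gaussian kernel down-weights large error outliers. The linear model, kernel width, and outlier model are illustrative assumptions; the Chebyshev functional expansion of CFLANN is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 500
X = rng.normal(size=(N, 3))
w_true = np.array([1.0, -2.0, 0.5])
noise = 0.05 * rng.normal(size=N)
noise[rng.choice(N, size=25, replace=False)] += rng.normal(scale=10.0, size=25)  # impulsive outliers
y = X @ w_true + noise

sigma, lr = 1.0, 0.1                                   # kernel width and step size (assumed)

def train(robust):
    w = np.zeros(3)
    for _ in range(300):
        e = y - X @ w
        if robust:
            # gradient of the correntropy loss: each error is weighted by
            # exp(-e^2 / (2 sigma^2)), so large outliers barely move the weights
            g = -(X * (np.exp(-e ** 2 / (2 * sigma ** 2)) * e)[:, None]).mean(axis=0)
        else:
            g = -(X * e[:, None]).mean(axis=0)          # plain MSE gradient
        w -= lr * g
    return w

print("MSE fit:", train(robust=False))
print("CIM fit:", train(robust=True))
```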

13.
Performance of deterministic learning in noisy environments
In this paper, based on previous results on deterministic learning, we investigate the performance of deterministic learning in noisy environments. Two types of noise arising in practical implementations are considered: system noise and measurement noise. By employing convergence results for a class of perturbed linear time-varying (LTV) systems, the effects of these noises on the learning performance are revealed. It is shown that while they have little effect on the learning speed, they strongly influence the learning accuracy; compared with system noise, the effects of measurement noise are more complicated. In noisy environments, a robustification technique must be applied to the learning algorithm to avoid parameter drift. Furthermore, it is shown that additive system noise can be used to enhance the generalization ability of the RBF networks. Simulation studies are included to illustrate the results.
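A minimal sketch of one standard robustification, sigma-modification, added to a gradient-type RBF weight-update law so that measurement noise does not cause the estimated weights to drift. The target function, RBF centres, and gains are assumptions; the paper's own deterministic-learning laws are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)
centres = np.linspace(-2, 2, 9)

def phi(x):                                             # Gaussian RBF regressor vector
    return np.exp(-(x - centres) ** 2 / (2 * 0.5 ** 2))

f_true = lambda x: np.sin(2 * x)                        # unknown function to be learned
gamma, sigma_mod, noise_std = 2.0, 0.01, 0.05           # gain, leakage, measurement noise (assumed)

W = np.zeros_like(centres)
dt, T = 0.01, 5000
for k in range(T):
    x = 2 * np.sin(0.01 * k)                            # a slowly sweeping input trajectory
    y_meas = f_true(x) + noise_std * rng.normal()       # noisy measurement
    e = y_meas - W @ phi(x)
    # gradient update with sigma-modification: the -sigma_mod * W leakage term
    # keeps the weights bounded instead of drifting under persistent noise
    W += dt * (gamma * e * phi(x) - sigma_mod * gamma * W)

xs = np.linspace(-2, 2, 50)
approx = np.array([W @ phi(v) for v in xs])
print("max |f - f_hat| over the sweep range:", np.max(np.abs(f_true(xs) - approx)))
```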

14.
Subspace information criterion for model selection
The problem of model selection is of considerable importance for acquiring higher levels of generalization capability in supervised learning. In this article, we propose a new criterion for model selection, the subspace information criterion (SIC), which is a generalization of Mallows's C_L. It is assumed that the learning target function belongs to a specified functional Hilbert space and that the generalization error is defined as the squared Hilbert-space norm of the difference between the learning result function and the target function. SIC gives an unbiased estimate of the generalization error so defined. SIC assumes the availability of an unbiased estimate of the target function and of the noise covariance matrix, which are generally unknown. A practical calculation method of SIC for least-mean-squares learning is provided under the assumption that the dimension of the Hilbert space is less than the number of training examples. Finally, computer simulations on two examples show that SIC works well even when the number of training examples is small.

15.
Nonlinear model identification with neural networks based on Bayesian methods
A training algorithm for multilayer feedforward neural networks based on Bayesian inference is studied in order to improve the generalization performance of the network. A penalty term representing the complexity of the network structure is introduced into the objective function so that the structural complexity can be reduced during training and overfitting can be avoided. During training, explicit probability-distribution assumptions are used to analyze and infer the model: based on the assumed prior distributions, the posterior conditional probabilities of the network parameters and the regularization parameters are obtained, and the optimal parameters are derived by Bayesian inference over the posterior distribution. A feedforward network trained with this algorithm was used to identify a model of a micro boiler plant; tests show that the identified model reproduces the dynamic behaviour of the plant well and has good generalization performance.
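A minimal sketch of evidence-style re-estimation of the regularization (hyper)parameters, one common concrete form of the Bayesian training idea described above, shown on a linear-in-parameters model for brevity. The paper works with a full multilayer network and a boiler plant, neither of which is reproduced; the data and initial hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
N, M = 100, 6
X = rng.normal(size=(N, M))
w_true = np.array([1.0, -0.5, 0.0, 0.0, 2.0, 0.0])
y = X @ w_true + 0.2 * rng.normal(size=N)

alpha, beta = 1.0, 1.0                                  # prior precision, noise precision (initial guesses)
for _ in range(30):
    A = beta * X.T @ X + alpha * np.eye(M)              # posterior precision of the weights
    m = beta * np.linalg.solve(A, X.T @ y)              # posterior mean of the weights
    gamma = M - alpha * np.trace(np.linalg.inv(A))      # effective number of parameters
    alpha = gamma / (m @ m)                             # re-estimate the weight-decay strength
    beta = (N - gamma) / np.sum((y - X @ m) ** 2)       # re-estimate the noise precision

print("posterior mean weights:", np.round(m, 2))
print("alpha (regularization):", alpha, " beta (1/noise variance):", beta)
```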

16.
Elia, Michel, Francesco, Amaury. Neurocomputing, 2009, 72(16-18): 3692
The problem of residual variance estimation consists of estimating the best possible generalization error obtainable by any model based on a finite sample of data. Even though it is a natural generalization of linear correlation, residual variance estimation in its general form has attracted relatively little attention in machine learning. In this paper, we examine four different residual variance estimators and analyze their properties both theoretically and experimentally to better understand their applicability in machine learning problems. The theoretical treatment differs from previous work by being based on a general formulation of the problem that also covers heteroscedastic noise, in contrast to previous work, which concentrates on homoscedastic, additive noise. In the second part of the paper, we demonstrate practical applications in input and model structure selection. The experimental results show that using residual variance estimators in these tasks gives good results, often with reduced computational complexity, and that the nearest-neighbor estimators are simple and easy to implement.
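A minimal sketch of one simple nearest-neighbour residual-variance estimator (the "delta test"): half the mean squared difference between each output and the output of its nearest neighbour in input space. This is only one member of the family of estimators the paper compares, and the toy data below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
N = 1000
X = rng.uniform(-1, 1, size=(N, 2))
noise_var = 0.04
y = np.sin(3 * X[:, 0]) * X[:, 1] + np.sqrt(noise_var) * rng.normal(size=N)

# first nearest neighbour of each point in input space (brute force for clarity)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
nn = d2.argmin(axis=1)

# delta test: half the mean squared difference between neighbouring outputs
est = 0.5 * np.mean((y[nn] - y) ** 2)
print("true noise variance:    ", noise_var)
print("estimated residual var.:", est)
```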

17.
In medical image segmentation, domain shift degrades the performance of a trained segmentation model on unseen domains, so improving model generalization is crucial for the practical deployment of intelligent medical-imaging models. Representation learning is currently one of the mainstream approaches to domain generalization; most such methods supervise image generation with image-level and consistency losses, but these are not sensitive enough to deviations in the fine morphological features of medical images, which leads to generated images with blurred edges and hampers subsequent learning. To improve generalization, a semi-supervised domain-generalization segmentation model for medical images based on a feature-level loss and learnable noise, FLLN-DG, is proposed. The feature-level loss is first introduced to alleviate the blurred boundaries of generated images, and a learnable-noise component is then added to further increase data diversity and improve generalization. Compared with the baseline model, FLLN-DG improves performance on unseen domains by 2% to 4%, demonstrating the effectiveness of the feature-level loss and the learnable-noise component; FLLN-DG also outperforms typical domain-generalization models such as nnUNet, SDNet+AUG, LDDG, SAML, and Meta.

18.
In the literature, there exist statistical tests for comparing supervised learning algorithms on multiple data sets in terms of accuracy, but they do not always generate an ordering. We propose Multi2Test, a generalization of our previous work, for ordering multiple learning algorithms on multiple data sets from “best” to “worst”, where our goodness measure combines the generalization error with an additional prior cost term. Our simulations show that Multi2Test generates orderings using pairwise tests on error and on different types of cost based on the time and space complexity of the learning algorithms.

19.
To address the vulnerability of communication signals to barrage jamming under electronic-warfare conditions, a channel-coding algorithm based on a deep autoencoder with a dynamic learning rate (dynamic learning rate deep AutoEncoder, DLr-DAE) is proposed to improve the system's resistance to jamming. The uncoded input signal is first preprocessed by converting the raw input into one-hot vectors; the deep autoencoder is then trained on the training data set with an unsupervised learning method, updating the network parameters by stochastic gradient descent (SGD). An exponential decay function is used to dynamically fine-tune the learning rate as the iteration count and the network loss change, which reduces the number of training iterations and keeps the result from converging to a local optimum, yielding a deep-learning channel-coding network suited to electronic-warfare environments. Simulation results show that, compared with existing deep-learning coding algorithms, the proposed algorithm improves resistance to noise jamming by up to 0.74 dB at the same bit error rate.
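A minimal sketch of the exponentially decaying learning-rate schedule the abstract describes, shown inside a plain SGD loop. The initial rate, decay factor, and the toy quadratic objective are assumptions, not the paper's DLr-DAE training setup.

```python
import numpy as np

def lr_schedule(step, lr0=0.01, decay=0.96, decay_steps=100):
    """Exponential decay: lr = lr0 * decay^(step / decay_steps)."""
    return lr0 * decay ** (step / decay_steps)

# illustrative use inside a plain SGD loop on a quadratic loss ||w||^2
w = np.array([5.0, -3.0])
for step in range(2000):
    grad = 2 * w                               # gradient of ||w||^2
    w -= lr_schedule(step) * grad

print("final weights:", w, "final learning rate:", lr_schedule(1999))
```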

20.
A new algorithm for improving neural-network learning performance through feedback
To effectively improve the learning performance of feedforward neural networks, network training needs to be considered from a new angle. On this basis, a new result-feedback-based algorithm, the FBBP algorithm, is proposed. It combines the adjustment of the network inputs with the usual back-propagation adjustment of the weights, minimizing the network error function through the joint action of weight updates and input-vector updates. The FBBP algorithm is compared, on several function-approximation and pattern-classification simulation examples, with the BP algorithm with a momentum term and with a recent accelerated weight-update algorithm, to verify its effectiveness. The experimental results show that the proposed algorithm has the twin advantages of fast training and high generalization ability and is a very effective learning method.
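A minimal sketch of the idea the abstract describes: besides the usual weight update, the error is also back-propagated to the input vector, and the input is nudged toward values that reduce the error. The tiny network, single training pair, and step sizes are assumptions, not the paper's exact FBBP formulation.

```python
import numpy as np

rng = np.random.default_rng(11)
W = rng.normal(scale=0.5, size=(3, 4))         # input-to-hidden weights
v = rng.normal(scale=0.5, size=4)              # hidden-to-output weights
x = rng.normal(size=3)                         # a single training input
t = 0.7                                        # its target output

eta_w, eta_x = 0.1, 0.05                       # weight and input step sizes (assumed)
for _ in range(200):
    h = np.tanh(x @ W)
    y = h @ v
    err = y - t
    # standard gradients with respect to the weights
    gv = err * h
    gW = np.outer(x, err * v * (1 - h ** 2))
    # gradient with respect to the input itself (the "feedback" part)
    gx = W @ (err * v * (1 - h ** 2))
    v -= eta_w * gv
    W -= eta_w * gW
    x -= eta_x * gx                            # adjust the input as well as the weights

print("output after training:", np.tanh(x @ W) @ v, "target:", t)
```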
