Similar Articles
19 similar articles found.
1.
Performance comparison of improved BP network models   Cited by: 9, self-citations: 1, citations by others: 9
Using worked examples, this paper compares the performance of several representative improved BP network models. It analyzes the advantages and disadvantages of representative models obtained through algorithmic improvements, based on standard gradient descent and on numerical optimization methods, and through improved training strategies, and compares their convergence speed and classification results on a remote sensing image classification example. The results offer guidance for selecting and improving BP network models.
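
As a rough illustration of the kind of comparison described above (not the paper's own experiment), the sketch below trains the same small network once with a standard gradient-descent solver and once with a numerical-optimization solver (L-BFGS) and reports iterations and test accuracy; scikit-learn and the Iris data are assumed stand-ins for the remote sensing imagery used in the paper.

    # Hedged sketch: gradient-descent vs. numerical-optimization training of the same MLP.
    # scikit-learn and the Iris data are stand-ins for the paper's remote sensing setup.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for solver in ("sgd", "lbfgs"):   # standard gradient descent vs. a numerical optimization method
        clf = MLPClassifier(hidden_layer_sizes=(16,), solver=solver,
                            max_iter=2000, random_state=0)
        clf.fit(X_tr, y_tr)
        print(solver, "iterations:", clf.n_iter_, "test accuracy:", clf.score(X_te, y_te))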

2.
A hydrological forecasting system model based on a support vector machine optimized by the artificial fish swarm algorithm   Cited by: 4, self-citations: 1, citations by others: 3
Building on an in-depth analysis and comparison of existing hydrological forecasting methods, the support vector machine training algorithm is improved with the artificial fish swarm algorithm, and a support vector machine optimized by the artificial fish swarm is proposed. Experimental results show that the fish-swarm-optimized training algorithm trains faster than the standard support vector machine and can therefore provide quicker technical support for hydrological forecasting.
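
As a minimal sketch of the general idea only (not the paper's artificial fish swarm algorithm, which adds prey, swarm and follow behaviours), the following toy swarm search tunes SVM hyper-parameters by repeated random local moves; the data set, parameter ranges and step size are assumptions.

    # Toy swarm-style search over SVM hyper-parameters (C, gamma); a stand-in for AFSA.
    import numpy as np
    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    X, y = load_diabetes(return_X_y=True)
    rng = np.random.default_rng(0)
    fish = rng.uniform([-2, -5], [3, 0], size=(10, 2))       # positions = log10(C), log10(gamma)

    def fitness(pos):
        C, gamma = 10.0 ** pos
        return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

    scores = np.array([fitness(p) for p in fish])
    for _ in range(15):                                      # a few "foraging" steps
        for i in range(len(fish)):
            trial = np.clip(fish[i] + rng.normal(scale=0.3, size=2), [-2, -5], [3, 0])
            s = fitness(trial)
            if s > scores[i]:                                # keep the move only if it improves
                fish[i], scores[i] = trial, s
    best = fish[scores.argmax()]
    print("best C, gamma:", 10.0 ** best, "CV R^2:", scores.max())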

3.
RBM network training based on an improved parallel tempering algorithm   Cited by: 1, self-citations: 0, citations by others: 1
Current training algorithms for restricted Boltzmann machine (RBM) networks are mostly sampling based. When the gradient is computed by sampling, the sampled gradient is only an approximation of the true gradient, and the sizeable error between the two seriously degrades training. To address this, the paper first analyzes the magnitude and direction errors between the sampled and true gradients and their effect on training performance, then studies the problem theoretically from the viewpoint of Markov chain sampling and builds a gradient correction model that adjusts both the magnitude and the direction of the sampled gradient. On this basis, a training algorithm based on an improved parallel tempering scheme, GFPT (Gradient Fixing Parallel Tempering), is proposed. Comparative experiments against existing algorithms show that GFPT greatly reduces the error between the sampled and true gradients and substantially improves RBM training.
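
For background only, the snippet below computes the kind of sampled (CD-1) gradient estimate whose magnitude and direction errors GFPT is designed to correct; it is not the GFPT algorithm, and the network size, data and learning rate are arbitrary (biases omitted).

    # Plain CD-1 gradient estimate for a tiny RBM: the "sampled gradient" criticized above.
    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid = 6, 4
    W = 0.1 * rng.standard_normal((n_vis, n_hid))
    v0 = rng.integers(0, 2, size=(16, n_vis)).astype(float)    # a mini-batch of visible vectors

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    ph0 = sigmoid(v0 @ W)                                       # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)            # sample hidden units
    pv1 = sigmoid(h0 @ W.T)                                     # reconstruct visibles
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)

    grad_sampled = (v0.T @ ph0 - v1.T @ ph1) / len(v0)          # approximate log-likelihood gradient
    W += 0.05 * grad_sampled                                    # one stochastic ascent step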

4.
Routing path search for wireless sensor network nodes is studied by combining the ant colony algorithm with BP network training. The basic principle of the ant colony algorithm is briefly analyzed; on that basis, the principle and implementation steps of an ant-colony-based BP network optimization algorithm are presented, and its performance is compared with that of the conventional BP training algorithm in simulation tests.

5.
For wireless ad hoc networks, the sparse routing tree algorithm of the MIL-STD-188-220C standard is discussed and its advantages and disadvantages are analyzed. On this basis, the route search and route maintenance mechanisms of the sparse routing tree algorithm are improved and a fully distributed routing protocol is presented. Finally, the two algorithms are compared and conclusions are drawn.

6.
Neural network training based on an object-oriented adaptive particle swarm optimization algorithm   Cited by: 2, self-citations: 0, citations by others: 2
To overcome the slow convergence and poor generalization of traditional neural network training algorithms, a new object-oriented adaptive particle swarm optimization algorithm (OAPSO) is proposed for training neural networks. The algorithm improves the encoding scheme and the adaptive search strategy of PSO to increase training speed and generalization, and is tested on the Iris and Ionosphere classification data sets. The experimental results show that networks trained with OAPSO clearly outperform those trained with the BP algorithm and with standard PSO in classification accuracy, with much better generalization and optimization quality and fast global convergence.
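
As a hedged sketch of PSO-based network training in general, the code below uses plain global-best PSO to optimize the weights of a tiny one-hidden-layer network; the object-oriented encoding and adaptive search strategy that distinguish OAPSO are not reproduced, and all sizes and constants are assumptions.

    # Plain global-best PSO optimizing the flattened weights of a small network (biases omitted).
    import numpy as np
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)
    Y = np.eye(3)[y]                                   # one-hot targets
    n_in, n_hid, n_out = X.shape[1], 5, 3
    dim = n_in * n_hid + n_hid * n_out                 # length of the flattened weight vector

    def loss(w):
        W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
        W2 = w[n_in * n_hid:].reshape(n_hid, n_out)
        out = np.tanh(X @ W1) @ W2
        return np.mean((out - Y) ** 2)

    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (30, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[pbest_f.argmin()]
    for _ in range(200):
        r1, r2 = rng.random((2, 30, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([loss(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    print("final training MSE:", pbest_f.min())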

7.
To improve the accuracy of wavelet network based cost estimation for customized products, the ant colony algorithm is improved after analyzing the basic principles of wavelet networks and ant colony optimization, and a wavelet network learning algorithm based on an improved adaptive ant colony algorithm is proposed. In a case study of cost estimation for customized products, the method converges faster and is more accurate than other traditional learning algorithms, indicating better learning ability and estimation accuracy when training wavelet networks.

8.
Network traffic prediction with a BP network trained by an improved QPSO algorithm   Cited by: 2, self-citations: 0, citations by others: 2
To improve the accuracy of network traffic prediction, an improved QPSO algorithm is used to train a BP neural network that models and predicts the network traffic time series. Because the standard QPSO algorithm inevitably suffers from premature convergence, a new QPSO algorithm with self-adaptive parameters is proposed, which largely avoids premature convergence of the swarm and improves global convergence. Simulation results show that, compared with BP networks trained by PSO and by standard QPSO, the proposed prediction model achieves higher prediction accuracy and good stability.
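
For context, the position update of standard QPSO, whose parameters the paper makes self-adaptive, is commonly written as follows (this recap follows the usual QPSO formulation rather than the paper's own notation; P_i is the personal best, G the global best, and beta the contraction-expansion coefficient that the adaptive scheme adjusts):

    p_i = \varphi P_i + (1 - \varphi) G, \qquad \varphi \sim U(0,1)
    mbest = \frac{1}{M} \sum_{i=1}^{M} P_i
    X_i(t+1) = p_i \pm \beta \, \lvert mbest - X_i(t) \rvert \, \ln(1/u), \qquad u \sim U(0,1)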

9.
Research on speech recognition technology based on neural networks   Cited by: 5, self-citations: 0, citations by others: 5
This paper explores the application of BP neural networks to speaker-dependent speech recognition and then carries out experiments on speaker-independent recognition. The advantages and disadvantages of the traditional template matching approach and the artificial neural network approach to speech recognition are compared. Neural networks can reach high recognition accuracy, but slow training is their weakness; the classical BP algorithm is therefore improved to speed up training, so that the advantages of neural networks for speech recognition can be fully exploited.

10.
To address the packet collisions and network congestion that the standard slotted CSMA/CA algorithm can cause in the MAC layer of IEEE 802.15.4 networks, the standard algorithm is improved with reference to the ABE (adaptive backoff exponent) algorithm, and the NS-2 network simulator is used to compare the standard slotted CSMA/CA algorithm and the ABE algorithm on an IEEE 802.15.4 star network in the 2.4 GHz band. The results show that with the ABE algorithm the 2.4 GHz IEEE 802.15.4 network improves effectively in throughput, packet delivery ratio, network fairness, and the number of lost LQI-type packets.
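
As a sketch of the slotted CSMA/CA channel-access loop that the ABE modification targets (the ABE rule itself is not reproduced here), the following shows the standard backoff logic; channel_clear() is a hypothetical stand-in for the CCA primitive, time advancement is omitted, and the constants are the usual defaults.

    # Slotted CSMA/CA backoff sketch; ABE's change (not shown) is to adapt the minimum
    # backoff exponent instead of keeping it fixed at macMinBE.
    import random

    MAC_MIN_BE, MAC_MAX_BE, MAX_CSMA_BACKOFFS = 3, 5, 4

    def slotted_csma_ca(channel_clear, min_be=MAC_MIN_BE):
        nb, be = 0, min_be
        while nb <= MAX_CSMA_BACKOFFS:
            delay_slots = random.randint(0, 2 ** be - 1)     # random backoff delay
            # (an event-driven simulator would advance time by delay_slots here)
            if channel_clear() and channel_clear():          # two consecutive clear CCAs (CW = 2)
                return True                                  # channel idle -> transmit the packet
            nb, be = nb + 1, min(be + 1, MAC_MAX_BE)         # busy: back off with a larger exponent
        return False                                         # channel access failure

    print(slotted_csma_ca(lambda: random.random() < 0.6))    # toy channel that is idle 60% of the time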

11.
This article proposes a novel technique for accelerating the convergence of the previously published norm-optimal iterative learning control (NOILC) methodology. The basis of the results is a formal proof of an observation made by D.H. Owens, namely that the NOILC algorithm is equivalent to a successive projection algorithm between linear varieties in a suitable product Hilbert space. This leads to two proposed accelerated algorithms together with well-defined convergence properties. The results show that the proposed accelerated algorithms are capable of ensuring monotonic error norm reductions and can outperform NOILC by more rapid reductions in error norm from iteration to iteration. In particular, examples indicate that the approach can improve the performance of NOILC for the problematic case of non-minimum phase systems. Realisation of the algorithms is discussed and numerical simulations are provided for comparative purposes and to demonstrate the numerical performance and effectiveness of the proposed methods.
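
For context, in its simplest product-Hilbert-space form (unit weighting operators), the NOILC step referred to above selects the next input by minimising

    J_{k+1}(u) = \| r - G u \|^2 + \| u - u_k \|^2,

whose minimiser and resulting error recursion are

    u_{k+1} = u_k + G^{*} (I + G G^{*})^{-1} e_k, \qquad e_{k+1} = (I + G G^{*})^{-1} e_k,

so that \|e_{k+1}\| \le \|e_k\|, the monotone reduction the accelerated variants preserve. This is the standard textbook form, not a restatement of the accelerated algorithms themselves.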

12.
While a number of algorithms for multiobjective reinforcement learning have been proposed, and a small number of applications developed, there has been very little rigorous empirical evaluation of the performance and limitations of these algorithms. This paper proposes standard methods for such empirical evaluation, to act as a foundation for future comparative studies. Two classes of multiobjective reinforcement learning algorithms are identified, and appropriate evaluation metrics and methodologies are proposed for each class. A suite of benchmark problems with known Pareto fronts is described, and future extensions and implementations of this benchmark suite are discussed. The utility of the proposed evaluation methods is demonstrated via an empirical comparison of two example learning algorithms.

13.
Ranking items is an essential problem in recommendation systems. Since comparing two items is the simplest type of query for measuring the relevance of items, the problem of aggregating pairwise comparisons to obtain a global ranking has been widely studied. Furthermore, ranking with pairwise comparisons has recently received a lot of attention in crowdsourcing systems, where binary comparative queries can be used effectively to make assessments faster for precise rankings. In order to learn a ranking based on a training set of queries and their labels obtained from annotators, machine learning algorithms are generally used to find the ranking model that best describes the data set. In this paper, we propose a probabilistic model for learning multiple latent rankings by using pairwise comparisons. Our novel model can capture multiple hidden rankings underlying the pairwise comparisons. Based on the model, we develop an efficient inference algorithm to learn multiple latent rankings as well as an effective inference algorithm for active learning to update the model parameters in crowdsourcing systems whenever new pairwise comparisons are supplied. The performance study with synthetic and real-life data sets confirms the effectiveness of our model and inference algorithms.
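
As a single-ranking baseline sketch only (the paper's contribution is a probabilistic model of multiple latent rankings, which this does not reproduce), the code below fits Bradley-Terry scores to pairwise comparisons by gradient ascent; the toy comparison data and step sizes are assumptions.

    # Bradley-Terry scores from pairwise comparisons via gradient ascent.
    import numpy as np

    comparisons = [(0, 1), (0, 2), (1, 2), (3, 0), (3, 1), (3, 2), (1, 2)]   # (winner, loser) pairs
    n_items = 4
    theta = np.zeros(n_items)                       # latent "skill" scores

    for _ in range(500):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            p_win = 1.0 / (1.0 + np.exp(theta[l] - theta[w]))   # P(w beats l) under the model
            grad[w] += 1.0 - p_win
            grad[l] -= 1.0 - p_win
        theta += 0.1 * grad - 0.001 * theta         # small L2 term keeps the scores identifiable

    print("ranking (best first):", np.argsort(-theta))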

14.
An output nonlinear Wiener system is rewritten in a standard least squares form by reconstructing the input-output terms of its difference equation. The multi-innovation based stochastic gradient (MISG) algorithm and its derived algorithms are introduced to formulate identification methods for Wiener models. In order to increase the convergence performance of the stochastic gradient (SG) algorithm, the scalar innovation in the SG algorithm is expanded to an innovation vector that contains more information about the input-output data. Furthermore, a proper forgetting factor is introduced into the SG algorithm to obtain a faster convergence rate. The convergence performance and estimation errors of the proposed algorithms are compared in two numerical simulation examples.
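
For context, the usual multi-innovation generalisation of the SG update (following the standard MISG formulation; notation may differ from the paper's) replaces the scalar innovation with a length-p innovation vector:

    SG:    \hat{\theta}(t) = \hat{\theta}(t-1) + \frac{\varphi(t)}{r(t)} e(t),
           e(t) = y(t) - \varphi^{T}(t) \hat{\theta}(t-1), \qquad r(t) = r(t-1) + \|\varphi(t)\|^2

    MISG:  \hat{\theta}(t) = \hat{\theta}(t-1) + \frac{\Phi(p,t)}{r(t)} E(p,t),
           \Phi(p,t) = [\varphi(t), \varphi(t-1), \ldots, \varphi(t-p+1)],
           E(p,t) = Y(p,t) - \Phi^{T}(p,t) \hat{\theta}(t-1)

MISG reduces to SG at p = 1; the extra stacked innovations carry the additional input-output information mentioned above.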

15.
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
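
As a side sketch of the local linear regression (LLR) approximator that both algorithms rely on (only the function approximation, not the actor-critic updates), the snippet below predicts at a query point from the k nearest stored samples with a least-squares affine fit; the data and k are assumptions.

    # Local linear regression: k nearest neighbours + local affine least-squares fit.
    import numpy as np

    def llr_predict(memory_x, memory_y, query, k=8):
        """Predict y(query) from the k nearest stored samples with an affine fit."""
        d = np.linalg.norm(memory_x - query, axis=1)
        idx = np.argsort(d)[:k]
        Xk = np.hstack([memory_x[idx], np.ones((k, 1))])      # append a bias column
        beta, *_ = np.linalg.lstsq(Xk, memory_y[idx], rcond=None)
        return np.append(query, 1.0) @ beta

    rng = np.random.default_rng(0)
    mem_x = rng.uniform(-np.pi, np.pi, (200, 1))
    mem_y = np.sin(mem_x[:, 0]) + 0.05 * rng.standard_normal(200)
    print(llr_predict(mem_x, mem_y, np.array([1.0])), "vs sin(1.0) =", np.sin(1.0))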

16.
Grading learning for blind source separation   Cited by: 12, self-citations: 0, citations by others: 12
By generalizing the learning rate parameter to a learning rate matrix, this paper proposes a grading learning algorithm for blind source separation. The whole learning process is divided into three stages: initial stage, capturing stage and tracking stage. In different stages, different learning rates are used for each output component, which is determined by its dependency on other output components. It is shown that the grading learning algorithm is equivariant and can keep the separating matrix from becoming singular. Simulations show that the proposed algorithm can achieve faster convergence, better steady-state performance and higher numerical robustness, as compared with the existing algorithms using fixed, time-descending and adaptive learning rates.
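
For context, the generalisation described above can be illustrated on the familiar natural-gradient separation rule (used here only for concreteness; the paper's exact update may differ): the scalar-rate update

    W(k+1) = W(k) + \eta(k) \, [ I - g(y(k)) y^{T}(k) ] \, W(k)

becomes, with a learning rate matrix,

    W(k+1) = W(k) + \Lambda(k) \, [ I - g(y(k)) y^{T}(k) ] \, W(k), \qquad \Lambda(k) = \mathrm{diag}(\eta_1(k), \ldots, \eta_n(k)),

where each \eta_i(k) is chosen per output component and per stage (initial, capturing, tracking) according to how strongly that component still depends on the others.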

17.
In this paper, weighted stochastic gradient (WSG) algorithms for ARX models are proposed by modifying the standard stochastic gradient identification algorithms. In the proposed algorithms, the correction term is a weighted combination of the correction terms of the standard stochastic gradient (SG) algorithm at the current and previous recursive steps. In addition, a latest-estimation-based WSG (LE-WSG) algorithm is also established, and its convergence performance is analyzed. A numerical example shows that both the WSG and LE-WSG algorithms can achieve faster convergence speed and higher convergence precision than the standard SG algorithm if the weighting factor is appropriately chosen.
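
As a hedged sketch of the stated idea on a toy ARX model, the code below blends the current SG correction term with the previous one through a weighting factor w (my notation, not necessarily the paper's); standard SG is recovered at w = 1.

    # Toy ARX identification: SG correction blended with the previous correction (WSG-style).
    import numpy as np

    rng = np.random.default_rng(0)
    a, b, N = 0.6, 1.5, 2000                     # true parameters: y(t) = a*y(t-1) + b*u(t-1) + v(t)
    u = rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = a * y[t - 1] + b * u[t - 1] + 0.1 * rng.standard_normal()

    theta = np.zeros(2)                          # estimates of [a, b]
    r, prev_corr, w = 1.0, np.zeros(2), 0.7
    for t in range(1, N):
        phi = np.array([y[t - 1], u[t - 1]])     # regressor
        e = y[t] - phi @ theta                   # innovation
        r += phi @ phi
        corr = phi * e / r                       # standard SG correction term
        theta += w * corr + (1.0 - w) * prev_corr
        prev_corr = corr
    print("estimated [a, b]:", theta)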

18.
With improvements in computer hardware, inference with pretrained machine learning models is now appearing on personal devices. Caffe is a popular deep learning framework that excels at tasks such as image classification, but by default it runs on a single core and cannot fully exploit heterogeneous parallel computing devices. Deep learning is computationally demanding; parallelizing it so that all computing devices are fully used improves speed and user experience. Because the ratio of CPU to GPU performance differs across models, the workload cannot simply be split evenly among devices, while scheduling algorithms that split the work too finely, or that must wait for all devices to finish before synchronizing, introduce extra overhead. Suitable scheduling algorithms are therefore needed to reduce device idle time and obtain better performance. Some methods for improving Caffe's parallel performance already exist, but they are limited to specific platforms and difficult to use, so they cannot easily exploit heterogeneous parallel devices. This paper extends the Caffe interface so that user programs can invoke multiple cores or multiple computing devices of a heterogeneous parallel platform for deep learning inference with Caffe. Several existing scheduling algorithms are then applied to such tasks and their behaviour is examined. To reduce the synchronization overhead of existing schedulers, two new algorithms are proposed: first-in-first-out (FIFO) scheduling and fast block scheduling. Tests show that with fast block scheduling on heterogeneous parallel devices, Caffe's inference speed is greatly improved compared with using only a single CPU core or a single GPU. Moreover, compared with HAT, the best-performing existing scheduler, the proposed fast block scheduling reduces wasted computing capacity by 7.4% on MNIST and 21.0% on Cifar-10.
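
As an illustrative sketch only (it uses no Caffe calls and is not the paper's fast block scheduling algorithm), the code below shows the underlying idea of throughput-proportional block scheduling: each device receives a contiguous block of the inference batch sized by its measured throughput, so all devices finish at roughly the same time and the single end-of-batch synchronisation wastes little capacity; run_inference(), the device names and the throughput figures are placeholders.

    # Throughput-proportional block scheduling of an inference batch (no Caffe API used).
    import threading, time

    BATCH = 1024
    THROUGHPUT = {"gpu0": 5.0, "cpu0": 1.0}          # items per ms, assumed measured beforehand

    def make_blocks(n_items, throughputs):
        """Split [0, n_items) into one contiguous block per device, sized by throughput."""
        total = sum(throughputs.values())
        devices = list(throughputs)
        blocks, start = {}, 0
        for i, dev in enumerate(devices):
            size = n_items - start if i == len(devices) - 1 \
                   else round(n_items * throughputs[dev] / total)
            blocks[dev] = (start, start + size)
            start += size
        return blocks

    def run_inference(device, lo, hi):               # placeholder for a real forward pass
        time.sleep((hi - lo) / (1000.0 * THROUGHPUT[device]))
        print(device, "processed items", lo, "to", hi)

    threads = [threading.Thread(target=run_inference, args=(dev, lo, hi))
               for dev, (lo, hi) in make_blocks(BATCH, THROUGHPUT).items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                                     # single synchronisation point at the end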

19.
This paper presents a comparative study of Bayesian belief network structure learning algorithms with a view to identifying a suitable algorithm for modeling the contextual relations among objects typically found in natural imagery. Four popular structure learning algorithms are compared: two constraint-based algorithms (PC, proposed by Spirtes and Glymour, and Fast Incremental Association Markov Blanket, proposed by Yaramakala and Margaritis), a score-based algorithm (Hill Climbing as implemented by Daly), and a hybrid algorithm (Max-Min Hill Climbing, proposed by Tsamardinos et al.). Contrary to the belief regarding the superiority of constraint-based approaches, our empirical results show that a score-based approach performs better on our context dataset in terms of prediction power and learning time. The hybrid algorithm can achieve prediction performance similar to that of the score-based approach, but requires longer to learn the desired network. Another interesting fact the study has revealed is the existence of a strong correspondence between the linear correlation pattern within the dataset and the edges found in the learned networks.
