Similar Documents (20 results)
1.
To address the complex spatio-temporal aggregation mechanism and long training cycles of process neural networks, a data-parallel training algorithm for process neural networks is proposed. The method adopts gradient-descent batch training, is designed with the MPI parallel programming model, and runs as cluster computing across multiple machines on a local area network. The paper presents the data-parallel training algorithm and its implementation mechanism, reports comparative experiments over training-function sample sets of different sizes and varying numbers of processes, and analyzes properties such as speedup and parallel efficiency. Experimental results show that, when the parallel granularity is chosen appropriately for the network and sample size, the algorithm substantially improves the training efficiency of process neural networks.
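The abstract gives no code; the snippet below is a minimal sketch, not the authors' implementation, of the data-parallel scheme it describes: each MPI process computes a batch gradient on its own slice of the samples, and the gradients are averaged with an all-reduce before every synchronized weight update. The linear stand-in model, toy data, and all names are illustrative assumptions (mpi4py/numpy).

```python
# Minimal sketch of data-parallel batch gradient descent with MPI (mpi4py).
# Illustrative only: a linear model stands in for the process neural network.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)          # each process owns a slice of the samples
X_local = rng.normal(size=(1000, 8))
y_local = X_local @ np.arange(1.0, 9.0) + rng.normal(scale=0.1, size=1000)

w = np.zeros(8)                            # replicated model parameters
lr = 0.01
for epoch in range(100):
    err = X_local @ w - y_local            # local batch error
    grad_local = X_local.T @ err / len(y_local)
    grad = comm.allreduce(grad_local, op=MPI.SUM) / size   # average over the cluster
    w -= lr * grad                         # identical update on every process

if rank == 0:
    print("learned weights:", np.round(w, 2))
```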

2.
A BP neural network is trained with a chaotic immune genetic optimization algorithm, yielding a hybrid neural network model based on the chaotic immune genetic algorithm. To overcome this model's heavy computational workload and slow training, a parallel version is designed and implemented with Matlab's Parallel Computing Toolbox and applied to predicting annual extreme ice thickness in the Bohai Sea. A comparison of the serial and parallel algorithms in terms of computational efficiency and speedup shows that the multi-core parallel design improves both speedup and computational efficiency.
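As a rough, hypothetical analogue of the Matlab Parallel Computing Toolbox parallelization (not the authors' code), the sketch below farms out fitness evaluation of a genetic-algorithm population to worker processes with Python's multiprocessing, since population evaluation usually dominates the run time of such hybrid models. The fitness function and encoding are placeholders, not the paper's chaotic immune genetic operators.

```python
# Hypothetical analogue of parallel population fitness evaluation
# (the paper uses Matlab's Parallel Computing Toolbox; this uses multiprocessing).
import numpy as np
from multiprocessing import Pool

def fitness(weights):
    """Placeholder fitness: training error of a tiny fixed regression task."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0])
    pred = np.tanh(X @ weights[:4]) * weights[4]      # toy 1-hidden-unit "network"
    return float(np.mean((pred - y) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    population = [rng.normal(size=5) for _ in range(64)]   # candidate weight vectors
    with Pool() as pool:                                   # one worker per core
        scores = pool.map(fitness, population)             # parallel evaluation
    best = population[int(np.argmin(scores))]
    print("best fitness:", min(scores))
```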

3.
Adaptive digital predistortion is one of the most promising techniques for overcoming the nonlinear distortion of high-power amplifiers. To improve the efficiency and effectiveness of predistortion, evolutionary computation on a parallel computing platform is introduced, a method for pre-training the neural network with the PSO algorithm is proposed, and the basic workflow of the software implementation is given. On this basis, a two-input two-output three-layer feedforward neural network with tapped delays is adopted, and adaptation is realized through the indirect learning architecture and the back-propagation algorithm, yielding a predistortion technique that compensates simultaneously for the amplifier's memory effects and nonlinear distortion. Simulation experiments show that, compared with training without PSO pre-training, the PSO pre-trained neural network achieves better performance.
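A minimal sketch of the PSO pre-training step under standard global-best PSO; the tiny one-hidden-layer network, toy data, and hyperparameters are illustrative assumptions, not the predistorter of the abstract. In the described method, the PSO result would then be refined by back-propagation.

```python
# Minimal global-best PSO sketch for pre-training a tiny feedforward network.
# Illustrative assumptions: 1-input/1-output net, squared-error loss, toy data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X)                                   # toy target, not a PA model

def loss(w):
    W1, b1 = w[:8].reshape(1, 8), w[8:16]
    W2, b2 = w[16:24].reshape(8, 1), w[24]
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

dim, n_particles = 25, 30
pos = rng.normal(scale=0.5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for it in range(200):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()        # weights later refined by BP

print("pre-training loss:", pbest_val.min())
```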

4.
When neural networks are applied to relatively large-scale data such as bioinformatics databases, they suffer from excessively long processing times and insufficient memory. Building on an analysis of current distributed neural network learning, a new distributed cooperative training algorithm based on agents and data slicing is proposed. By effectively partitioning the training samples and the training process, learning over the whole sample set is distributed to a cluster of neural networks for cooperative training; a competitive selection mechanism allows individuals with better learning performance to migrate within the network population and obtain more resources for learning. Theoretical analysis shows that the method not only improves the success rate of convergence to the target solution but also offers high parallel performance, accelerating the approach to that solution. Finally, the method is applied to protein secondary structure prediction, a problem in bioinformatics. The results show that the distributed learning algorithm handles large-scale sample sets effectively and also improves the performance of the trained neural networks.

5.
The supervised training of feedforward neural networks is often based on the error backpropagation algorithm. The authors consider the successive layers of a feedforward neural network as the stages of a pipeline which is used to improve the efficiency of the parallel algorithm. A simple placement rule is used to take advantage of simultaneous executions of the calculations on each layer of the network. The analytic expressions show that the parallelization is efficient. Moreover, they indicate that the performance of this implementation is almost independent of the neural network architecture. Their simplicity assures easy prediction of learning performance on a parallel machine for any neural network architecture. The experimental results are in agreement with analytical estimates.

6.
To address the excessive redundant parameters, weak parameter optimization ability, and low parallel efficiency of DCNN (deep convolutional neural network) algorithms in big data environments, MR-FPDCNN (deep convolutional neural network algorithm based on feature graph and parallel computing entropy using MapReduce) is proposed. The algorithm designs FMPTL (feature map pruning based on Taylor loss), a feature map pruning strategy based on Taylor loss, to pre-train the network and obtain a compressed DCNN, effectively reducing redundant parameters and lowering the computational cost of DCNN training. It also proposes IFAS (improved firefly algorithm based on ISS), a firefly optimization algorithm built on the information sharing search strategy ISS; DCNN parameters are initialized with IFAS to realize parallel training of the DCNN and improve its optimization ability. In the Reduce phase, it further proposes ...
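The abstract is truncated by the source page, but the pruning step can still be illustrated. The sketch below applies the common Taylor-expansion criterion, scoring each feature map by |activation × gradient| averaged over the batch and spatial positions and keeping the top-scoring maps; it is a generic sketch rather than the exact FMPTL formulation, and the shapes and keep ratio are assumptions.

```python
# Generic Taylor-criterion feature-map scoring sketch (not the paper's exact FMPTL).
# Assumes activations and their gradients for one conv layer are already available,
# e.g. captured during a pre-training pass; shapes are (batch, channels, H, W).
import numpy as np

def taylor_scores(activations, gradients):
    """Score each feature map by |activation * gradient| averaged over batch and space."""
    contrib = np.abs(activations * gradients)          # first-order Taylor term per element
    return contrib.mean(axis=(0, 2, 3))                # one importance score per channel

def prune_mask(scores, keep_ratio=0.7):
    """Keep the highest-scoring feature maps, mask out the rest."""
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[-k:]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask

rng = np.random.default_rng(0)
acts  = rng.normal(size=(16, 32, 8, 8))                # toy batch: 32 feature maps
grads = rng.normal(size=(16, 32, 8, 8))
mask = prune_mask(taylor_scores(acts, grads))
print("feature maps kept:", int(mask.sum()), "of", len(mask))
```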

7.
Ensemble learning has gained considerable attention in different tasks including regression, classification and clustering. Adaboost and Bagging are two popular approaches used to train these models. The former provides accurate estimations in regression settings but is computationally expensive because of its inherently sequential structure, while the latter is less accurate but highly efficient. One of the drawbacks of the ensemble algorithms is the high computational cost of the training stage. To address this issue, we propose a parallel implementation of the Resampling Local Negative Correlation (RLNC) algorithm for training a neural network ensemble in order to acquire a competitive accuracy like that of Adaboost and an efficiency comparable to that of Bagging. We test our approach on both synthetic and real datasets from the UCI and Statlib repositories for the regression task. In particular, our fine-grained parallel approach allows us to achieve a satisfactory balance between accuracy and parallel efficiency.
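A minimal sketch of the negative-correlation idea behind RLNC, using the standard NCL error signal (f_i − y) − λ(f_i − f̄) together with per-member resampling; the linear members, toy data, and λ are illustrative assumptions, and the member updates are written serially here although each could run in its own process.

```python
# Minimal negative-correlation-learning sketch (standard NCL penalty; the paper's
# RLNC additionally parallelizes the members). Toy linear members, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X @ np.array([1.0, -1.0, 2.0, 0.0, 0.5]) + rng.normal(scale=0.2, size=300)

M, lam, lr = 5, 0.5, 0.05
W = rng.normal(scale=0.1, size=(M, 5))                 # one linear member per row

for epoch in range(200):
    for i in range(M):
        idx = rng.integers(0, len(y), size=len(y))     # bootstrap resample per member
        Xi, yi = X[idx], y[idx]
        preds = Xi @ W.T                               # all members on member i's sample
        f_bar = preds.mean(axis=1)
        f_i = preds[:, i]
        delta = (f_i - yi) - lam * (f_i - f_bar)       # NCL error signal
        W[i] -= lr * Xi.T @ delta / len(yi)

ens = (X @ W.T).mean(axis=1)
print("ensemble MSE:", float(np.mean((ens - y) ** 2)))
```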

8.
This paper proposes parallel search and planning algorithms, together with a high-order two-dimensional temporal, competition-activated neural network that implements them. The network can also implement many problem-solving algorithms based on traditional symbolic logic. The approach overcomes the shortcomings of conventional neural networks for solving optimization problems, while also avoiding the serial nature of symbolic-logic algorithms and the structural complexity of symbolic-logic systolic arrays. Examples are given for implicit graph search, the LCS problem, the TSP, and the 0-1 knapsack problem.

9.
In the past few decades, much success has been achieved in the use of artificial neural networks for classification, recognition, approximation and control. Flexible neural tree (FNT) is a special kind of artificial neural network with flexible tree structures. The most distinctive feature of FNT is its flexible tree structures. This makes it possible for FNT to obtain near-optimal network structures using tree structure optimization algorithms. But the modeling efficiency of FNT is always a problem due to its two-stage optimization. This paper designs a parallel evolving algorithm for FNT (PE-FNT). The algorithm uses the PIPE algorithm to optimize tree structures and the PSO algorithm to optimize parameters. The evaluation processes of tree structure populations and parameter populations were both parallelized. As an implementation of the PE-FNT algorithm, two parallel programs were developed using MPI. A small data set, two medium data sets and three large data sets were used to evaluate the performance of these programs. Experimental results show that PE-FNT is an effective parallel FNT algorithm, especially for large data sets.

10.
This paper presents a neural network approach with successful implementation for the robot task-sequencing problem. The problem addresses the sequencing of tasks comprising loading and unloading of parts into and from the machines by a material-handling robot. The performance criterion is to minimize a weighted objective of the total robot travel time for a set of tasks and the tardiness of the tasks being sequenced. A three-phased parallel implementation of the neural network algorithm on Thinking Machines' CM-5 parallel computer is also presented, which resulted in a dramatic increase in the speed of finding solutions. To evaluate the performance of the neural network approach, a branch-and-bound method and a heuristic procedure have been developed for the problem. The neural network method is shown to give good results and is especially useful for solving large problems on a parallel-computing platform.

11.
An Efficient English-Chinese Bilingual Corpus Alignment Algorithm for Small Dictionaries
熊伟, 陈蓉, 刘佳, 徐淼, 于中华. 《计算机工程》, 2007, 33(13): 210-212
Automatic alignment of bilingual corpora is an important research topic in natural language processing. To address the shortcomings of English-Chinese sentence alignment algorithms based on dictionary translations, this paper proposes an efficient English-Chinese sentence alignment algorithm oriented to small dictionaries; the algorithm maintains high accuracy even with a small dictionary and nearly doubles the efficiency of the traditional algorithm. Theoretical analysis and comparative experiments show that the algorithm is effective.

12.
To reduce the computation time of spatial precipitation interpolation, a parallel version of an improved Kriging algorithm is implemented using the MPI parallel interface and a data-partitioning modeling approach. A parallel computing environment is set up on Linux; experimental data show that the parallel algorithm effectively saves computation time and exhibits good speedup, parallel efficiency, and scalability, providing a useful reference for the parallel implementation and application of Kriging interpolation.
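A hypothetical sketch of the data-partitioning scheme, not the paper's code: the prediction grid is split across MPI ranks, each rank interpolates its own block from the shared observations, and the master gathers the blocks. Inverse-distance weighting stands in for the Kriging weight computation to keep the example short.

```python
# Sketch of data-partitioned spatial interpolation with MPI (mpi4py).
# Inverse-distance weighting is a placeholder for the Kriging weight solve.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(7)
obs_xy  = rng.uniform(0, 100, size=(500, 2))          # shared rain-gauge locations
obs_val = np.sin(obs_xy[:, 0] / 10) + rng.normal(scale=0.1, size=500)

grid = np.array([(x, y) for x in range(100) for y in range(100)], dtype=float)
my_pts = np.array_split(grid, size)[rank]             # this rank's block of the grid

def interpolate(p):
    d = np.linalg.norm(obs_xy - p, axis=1) + 1e-9
    w = 1.0 / d**2                                     # placeholder for Kriging weights
    return float(w @ obs_val / w.sum())

my_vals = np.array([interpolate(p) for p in my_pts])
blocks = comm.gather(my_vals, root=0)                  # master assembles the surface
if rank == 0:
    surface = np.concatenate(blocks)
    print("interpolated", surface.size, "grid cells")
```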

13.
Research on an Intrusion Detection System Based on a Grey Neural Network
Grey prediction and neural networks are combined organically to construct a new grey neural network model, GNNM, which is applied to an intrusion detection system (IDS). Simulation results show that the GNNM algorithm achieves a satisfactory detection rate at a low false-alarm rate; compared with traditional neural network algorithms, it improves not only the system's parallel computing capability and its utilization of available information, but also its modeling efficiency and model accuracy.
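The abstract does not detail GNNM's internals; the sketch below shows only the standard GM(1,1) grey-prediction component that grey neural networks typically build on, applied to a toy series. The coupling with the neural network (and the IDS features) is not shown, and the data are invented.

```python
# Standard GM(1,1) grey prediction sketch (the grey half of a grey neural network;
# the coupling with the neural network is not shown). Toy data, illustrative only.
import numpy as np

def gm11_forecast(x0, steps=3):
    """Fit GM(1,1) to a positive series x0 and forecast `steps` values ahead."""
    x1 = np.cumsum(x0)                                 # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # developing coeff. and grey input
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])        # back to the original series
    x0_hat[0] = x0[0]
    return x0_hat[len(x0):]

traffic = np.array([120., 132., 145., 160., 178., 196.])   # toy connection counts
print("next 3 predicted values:", np.round(gm11_forecast(traffic), 1))
```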

14.
A new two-dimensional systolic algorithm is proposed in this paper for parallel implementation of the multi-layered neural network. To reduce communication overhead, the input data flow is passed along the horizontal and vertical directions of the systolic array alternately, over different layers of the neural network. This new algorithm accelerates the learning process of the neural network. A Transputer implementation of the proposed algorithm and experimental results are presented to show the efficiency of the new algorithm.

15.
A robot learning control system in which a neural network and a PD controller operate in parallel is proposed. To speed up neural network learning, a learning algorithm for an analog composite orthogonal neural network is derived from the digital composite orthogonal neural network. Simulations on a two-joint robot show that the control method makes the robot track the desired trajectory, with system response, tracking accuracy, and robustness superior to conventional control methods, and achieves satisfactory position tracking. The analog neural controller offers a new approach to the control of uncertain systems.

16.
Monte Carlo tree search is a commonly used reinforcement learning algorithm, and the exponential growth of the dynamic state space during game play limits its learning efficiency. The algorithm is optimized through parallelization, and a parallel Monte Carlo tree search algorithm based on passing win-rate estimates is proposed. The improved parallel game-search framework consists of one master process and multiple worker processes: the workers explore, while the master makes decisions based on the win-rate estimates they pass back. Experiments on the multi-agent game platform Pommerman show that, compared with traditional Monte Carlo tree search, the parallel algorithm effectively improves resource utilization, win rate, and decision efficiency.
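A minimal sketch of the master/worker structure described: worker processes explore independently and pass back win-rate estimates, and the master aggregates them to decide. Flat Monte Carlo rollouts on a toy four-move game stand in for the full tree search inside each worker; all names and numbers are illustrative.

```python
# Sketch of master/worker search: workers return per-move win-rate estimates,
# the master aggregates them and decides. Toy game, flat rollouts instead of MCTS.
import random
from multiprocessing import Pool

MOVES = list(range(4))                         # toy game: 4 candidate root moves

def rollout_result(move):
    """Toy stochastic game: some moves simply win more often."""
    return random.random() < 0.4 + 0.1 * move

def worker(args):
    seed, n_rollouts = args
    random.seed(seed)
    wins = {m: 0 for m in MOVES}
    visits = {m: 0 for m in MOVES}
    for _ in range(n_rollouts):
        m = random.choice(MOVES)               # exploration inside the worker
        visits[m] += 1
        wins[m] += rollout_result(m)
    return wins, visits                        # win-rate estimate data passed back

if __name__ == "__main__":
    with Pool(4) as pool:                      # 4 worker processes
        results = pool.map(worker, [(s, 5000) for s in range(4)])
    total_w = {m: sum(r[0][m] for r in results) for m in MOVES}
    total_v = {m: sum(r[1][m] for r in results) for m in MOVES}
    best = max(MOVES, key=lambda m: total_w[m] / max(total_v[m], 1))
    print("master picks move", best, "win rate",
          round(total_w[best] / total_v[best], 3))
```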

17.
As power communication networks continue to grow, they generate massive volumes of communication data around the clock. At the same time, attacks on these networks keep evolving, posing a serious threat to their security. To address this, a parallel PSO-optimized neural network algorithm based on the Spark in-memory computing framework is proposed to predict the security situation of power communication networks, combining the strengths of the Spark big data framework and PSO-optimized neural networks. The study first adopts the Spark framework, whose in-memory computing and near-real-time processing meet the requirements of power communication big data. It then uses the PSO algorithm to adjust the neural network weights, improving learning efficiency and accuracy, and, exploiting the parallelism of RDDs, derives a parallel PSO-optimized neural network algorithm. Experimental comparison shows that the Spark-based PSO-optimized neural network achieves high accuracy and is significantly faster than traditional Hadoop-based prediction methods.
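A hypothetical sketch (not the paper's pipeline) of the RDD-parallel evaluation step: each Spark task scores one candidate weight vector of the PSO swarm, while the driver would perform the velocity and position updates. The toy fitness function and synthetic data are assumptions.

```python
# Sketch of RDD-parallel fitness evaluation for PSO-optimized network weights.
# Each Spark task scores one candidate; the driver runs the PSO update.
import numpy as np
from pyspark import SparkContext

def fitness(weights):
    """Placeholder fitness: error of a toy 1-hidden-unit net on synthetic data."""
    rng = np.random.default_rng(0)                    # same toy data in every task
    X = rng.normal(size=(500, 6))
    y = (X.sum(axis=1) > 0).astype(float)
    w = np.asarray(weights)
    h = np.tanh(X @ w[:6])
    pred = 1 / (1 + np.exp(-(h * w[6] + w[7])))
    return float(np.mean((pred - y) ** 2))

if __name__ == "__main__":
    sc = SparkContext("local[*]", "parallel-pso-nn-sketch")
    rng = np.random.default_rng(1)
    swarm = [rng.normal(size=8).tolist() for _ in range(40)]
    # one PSO iteration's evaluation step, distributed as an RDD map
    scores = sc.parallelize(swarm, numSlices=8).map(fitness).collect()
    best = swarm[int(np.argmin(scores))]              # driver would update velocities/positions
    print("best fitness this iteration:", min(scores))
    sc.stop()
```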

18.
This paper studies parallel training of an improved neural network for text categorization. With the explosive growth in the amount of digital information available on the Internet, the text categorization problem has become more and more important, especially when millions of mobile devices are now connecting to the Internet. The improved back-propagation neural network (IBPNN) is an efficient approach for classification problems which overcomes the limitations of traditional BPNN. In this paper, we utilize parallel computing to speed up the neural network training process of IBPNN. The parallel IBPNN algorithm for text categorization is implemented on a Sun Cluster with 34 nodes (processors). The communication time and speedup for the parallel IBPNN versus various numbers of nodes are studied. Experiments are conducted on various data sets and the results show that the parallel IBPNN together with the SVD technique achieves fast computational speed and high text categorization correctness.

19.
Nonlinear System Modeling Based on a Distributed Neural Network Recursive Prediction Error Algorithm
A distributed neural network structure based on the recursive prediction error (RPE) algorithm is used to build models of nonlinear systems. Both the sub-network models and their connection weights are trained with the RPE method, and the distributed neural network model obtained by fusing all sub-networks shows significantly better model accuracy and robustness. The method has been applied successfully to the modeling of complex nonlinear dynamic systems.

20.
In recent years, the rapidly expanding programmability of graphics processing units (GPUs), together with the high speed and parallelism of the rendering pipeline, has made general-purpose computing on GPUs (GPGPU) a research hotspot. To address the low efficiency of the BP algorithm for large-scale neural networks, a GPU-accelerated BP algorithm is proposed. The forward computation and backward learning of the BP network are mapped onto GPU texture rendering, so that the BP algorithm is solved using the GPU's powerful floating-point capability and highly parallel architecture. Experimental results show that, with no loss of solution accuracy, the method runs significantly faster.
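The paper maps the passes onto texture rendering; a hedged, more modern way to express the same layer-wide parallelism is to write BP as batched matrix operations, which run on a GPU largely unchanged if numpy is swapped for an array library such as cupy. The two-layer network and data below are illustrative assumptions, not the paper's implementation.

```python
# BP written as batched matrix operations, the form that maps naturally onto a GPU
# (swapping numpy for cupy gives a GPU variant with a largely compatible array API).
# Illustrative two-layer network; not the paper's texture-rendering implementation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 32))
y = (X[:, :1] > 0).astype(float)

W1, b1 = rng.normal(scale=0.1, size=(32, 64)), np.zeros(64)
W2, b2 = rng.normal(scale=0.1, size=(64, 1)), np.zeros(1)
lr = 0.1

for epoch in range(200):
    # forward pass: whole batch and whole layer computed in parallel
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # backward pass as matrix products (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print("final training error:", float(np.mean((out - y) ** 2)))
```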
