Similar Documents
A total of 20 similar documents were found (search time: 55 ms).
1.
To address the difficulty of training spiking neural networks, a weight learning algorithm and a structure learning algorithm for spiking neural networks are proposed along biomimetic lines, a spiking neural network model containing a convolutional structure is designed, and a software simulation platform suited to spiking neural networks is built. Experimental results show that a network trained with the weight learning algorithm reaches a recognition accuracy of 84.12% on the MNIST dataset and exhibits fast convergence and low power consumption, while the structure learning algorithm can automatically generate network structures with a high degree of biological plausibility.

2.
The limitations of deep neural networks in practical applications have become increasingly apparent, and brain-inspired spiking neural networks with biological interpretability have become a hot research topic. The uncertainty and complexity of application scenarios pose new challenges: like biological brain tissue, brain-inspired spiking neural networks need multi-scale architectures capable of perception and decision-making over multimodal and uncertain information. This paper mainly introduces multi-scale, biologically plausible spiking neural network models for brain-inspired computing and their learning algorithms for multimodal information representation and the perception of uncertain information. It then analyzes two key technical problems in realizing multi-scale brain-inspired computing with memristor-interconnected spiking neural networks: the consistency between multimodal, uncertain information and spike-timing representations, and learning algorithms and fault-tolerant computation for multi-scale spiking neural networks. Finally, future research directions for brain-inspired spiking neural networks are analyzed and discussed.

3.
In recent years, spiking neural networks, which originated in computational neuroscience, have attracted wide attention in neuromorphic engineering and brain-inspired computing owing to their rich spatiotemporal dynamics, diverse coding mechanisms, and hardware-friendly event-driven characteristics. The cross-fertilization of spiking neural networks with today's computer-science-oriented artificial neural networks, represented by deep convolutional networks, is regarded as a promising path toward artificial general intelligence. This survey reviews the development of spiking neural networks along five key directions: neuron models, training algorithms, programming frameworks, datasets, and hardware chips, giving a comprehensive account of the latest progress, and discusses the opportunities and challenges in each direction. We hope this survey attracts researchers from different disciplines and, through interdisciplinary exchange and collaborative research, advances the field of spiking neural networks.

4.
Compared with first- and second-generation neural networks, the third-generation spiking neural network is a model closer to biological neural networks and therefore offers better biological interpretability and lower power consumption. Built on spiking neuron models, a spiking neural network simulates the propagation of biological signals through the network in the form of spikes: spike trains are emitted according to changes in the neurons' membrane potentials, and through their joint spatiotemporal representation they carry both spatial and temporal information. The performance of current spiking neural network models on pattern recognition tasks still lags behind deep learning, and one important reason is that learning methods for spiking neural networks are immature. Artificial neurons in deep learning produce real-valued outputs, which allows deep networks to be trained with the global backpropagation algorithm, whereas spike trains are binary, discrete outputs, which makes training spiking neural networks difficult; how to train them efficiently is a challenging research problem. This paper first surveys the learning algorithms in the spiking neural network literature, then analyzes the main approaches, namely direct supervised learning, unsupervised learning, and ANN2SNN conversion algorithms, comparing representative work in each category, and finally, based on this summary of current mainstream methods, offers an outlook on more efficient and more biologically plausible parameter learning methods for spiking neural networks.
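As a hedged illustration of the "direct supervised learning" family discussed above, the sketch below shows one common workaround for the non-differentiable, binary spike output: a surrogate gradient in PyTorch. The threshold, the fast-sigmoid surrogate, and its slope of 10.0 are illustrative assumptions, not details taken from the surveyed papers.

    # A minimal sketch (not the survey's own code) of surrogate-gradient training:
    # Heaviside spike in the forward pass, smooth surrogate in the backward pass.
    import torch

    class SurrogateSpike(torch.autograd.Function):
        @staticmethod
        def forward(ctx, membrane_potential, threshold):
            ctx.save_for_backward(membrane_potential)
            ctx.threshold = threshold
            return (membrane_potential >= threshold).float()

        @staticmethod
        def backward(ctx, grad_output):
            (membrane_potential,) = ctx.saved_tensors
            # Fast-sigmoid style surrogate derivative; the slope 10.0 is an illustrative choice.
            surrogate = 1.0 / (1.0 + 10.0 * (membrane_potential - ctx.threshold).abs()) ** 2
            return grad_output * surrogate, None

    v = torch.randn(4, requires_grad=True)       # toy membrane potentials
    spikes = SurrogateSpike.apply(v, 1.0)        # binary spikes, yet differentiable in training
    spikes.sum().backward()
    print(spikes, v.grad)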

5.
Spiking neural networks (SNN) are asynchronously event-driven and support large-scale parallel computation, giving them great potential for improving the computational efficiency of synchronous analog neural networks. However, SNNs still cannot be trained directly. Inspired by neuroscience research on the response mechanism of LIF (leaky integrate-and-fire) neurons, this work proposes a new...
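A minimal LIF simulation helps make the response mechanism referred to above concrete. The sketch below is a generic leaky integrate-and-fire neuron with a hard reset; the parameter values (tau_m, v_thresh, v_reset) and the constant input current are illustrative assumptions rather than values from the paper.

    # A minimal LIF (leaky integrate-and-fire) neuron simulation sketch.
    import numpy as np

    def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0, v_reset=0.0, v_thresh=1.0):
        """Integrate input current over time; emit a spike and reset when threshold is crossed."""
        v = v_rest
        spikes = []
        for i_t in input_current:
            # Leaky integration of the membrane potential.
            v += dt / tau_m * (-(v - v_rest) + i_t)
            if v >= v_thresh:
                spikes.append(1)
                v = v_reset          # hard reset after firing
            else:
                spikes.append(0)
        return np.array(spikes)

    current = np.full(100, 1.5)      # constant suprathreshold input
    print(simulate_lif(current).sum(), "spikes in 100 steps")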

6.
The human brain coordinates multiple cognitive functions and has a powerful capacity for autonomous learning. With the rapid development of brain and neural science, more powerful computing platforms whose computational structure mimics the human brain are urgently needed for further exploration of human intelligence and the mechanisms of cognitive behavior. Inspired by the neural mechanisms of the human brain, this paper proposes BiCoSS, a many-core brain-inspired computing system built on a neuro-cognitive computing architecture. It uses parallel field-programmable gate arrays (FPGA) as core processors, neural firing expressed through address-event representation as the information carrier, and neurons with cognitive computing functions as information processing units, achieving real-time computation of cognitive behavior in large-scale neural networks on the order of four million neurons and bridging the gap in understanding human cognitive function at the level of cellular dynamics. Experimental results demonstrate the superior performance of BiCoSS in terms of computing capability, computational efficiency, power consumption, communication efficiency, and scalability. BiCoSS realizes brain-inspired intelligence through a computing architecture closer to the essence of neuroscience; at the same time, it provides a new and effective tool for research and applications in neural cognition and brain-inspired computing.

7.
8.
张铁林  徐波 《计算机学报》2021,44(9):1767-1785
A spiking neural network (SNN) contains neuron nodes with temporal dynamics, synaptic structures balancing homeostasis and plasticity, and functionally specialized network circuits. It draws heavily on biologically inspired local unsupervised optimization (e.g., spike-timing-dependent plasticity, short-term synaptic plasticity, local homeostatic regulation) and global weakly supervised optimization (e.g., dopamine-based reward learning, energy-based function optimization), and therefore has strong capabilities for spatiotemporal information representation, asynchronous event processing, and self-organized network learning. SNN research is interdisciplinary, deeply integrating brain science and computer science, and can be divided into two broad categories: one whose ultimate goal is a better understanding of biological systems, and one whose optimization goal is superior computational performance. This paper first analyzes the progress and characteristics of these two lines of SNN research, focusing on spike-based multi-class asynchronous information coding, Motif-distribution-based multi-subtype complex network structures, self-organized computation in multi-layer clocked networks, and hardware-software co-design of neuromorphic computing chips. It also introduces an efficient SNN optimization strategy that fuses multi-scale, multi-type biological neural plasticity, allowing credit assignment in SNNs to reach effectively from the macroscopic scale down to the microscopic scale, e.g., the overall network output, hidden-layer states, and individual local neuron nodes, and partially answering how biological systems achieve global network optimization through the tuning of local parameters. This not only points existing artificial intelligence models toward a possible biologically inspired direction for improving their cognitive abilities, but may in turn inspire new discoveries about the plasticity of biological neural networks in the life sciences. We argue that the goal of spiking neural networks is not to build a biological substitute for artificial neural networks, but, by breaking through biologically inspired multi-scale plasticity optimization theory and taking the essence while discarding the dross, to ultimately realize a new generation of efficient spiking neural network models with the character of biological cognitive computing, promising faster learning, lower energy consumption, stronger adaptability, and better interpretability.
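Of the local unsupervised rules listed above, spike-timing-dependent plasticity (STDP) is the easiest to sketch. The snippet below implements the standard pair-based STDP window; the amplitudes and time constants (a_plus, a_minus, tau_plus, tau_minus) are illustrative choices, not parameters from the paper.

    # A minimal sketch of pair-based STDP (spike-timing-dependent plasticity).
    import numpy as np

    def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Weight change for one pre/post spike pair as a function of their timing difference."""
        dt = t_post - t_pre
        if dt > 0:       # pre fires before post -> potentiation
            return a_plus * np.exp(-dt / tau_plus)
        elif dt < 0:     # post fires before pre -> depression
            return -a_minus * np.exp(dt / tau_minus)
        return 0.0

    # Causal pairing strengthens the synapse, anti-causal pairing weakens it.
    print(stdp_delta_w(t_pre=10.0, t_post=15.0))   # > 0
    print(stdp_delta_w(t_pre=15.0, t_post=10.0))   # < 0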

9.
With the development of artificial intelligence, today's mainstream neural networks face problems of heavy computation, high power consumption, and a low degree of intelligence. To address these problems, a general-purpose multi-layer spiking neural network structure is proposed based on characteristics of the human brain, together with a spiking neural network algorithm based on biological causality. Target weights are adjusted indirectly by controlling the activation times of guiding neurons. The algorithm is applied to a poker game so that a poker-playing robot can learn a person's playing ability, achieving a degree of anthropomorphism of...

10.
Objective: Brain-inspired computing refers to devices, models, and methods that simulate, emulate, and draw on the neural network structure and information processing of the brain, with the goal of building brain-like computers and brain-like intelligence. Method: Research on brain-inspired computing has a history of more than 20 years. This paper reviews domestic and international progress and open challenges in neuromorphic devices that emulate biological neurons and synapses, neural network chips, and brain-inspired computing models and applications, and looks ahead to future trends. Results: Unlike the classical AI routes of symbolism, connectionism, behaviorism, and the statistical approach of machine learning, brain-inspired computing takes an emulation approach: imitating the brain at the structural level (non-von Neumann architectures), approximating the brain at the device level (neuromorphic devices emulating neurons and synapses), and surpassing the brain at the intelligence level (relying mainly on autonomous learning and training rather than manual programming). Conclusion: Brain-inspired computing is still far from practical industrial application, which also presents researchers with important research directions and opportunities.

11.
The problem of training feedforward neural networks is considered. To solve it, new algorithms are proposed. They are based on the asymptotic analysis of the extended Kalman filter (EKF) and on a separable network structure. Linear weights are interpreted as diffusion random variables with zero expectation and a covariance matrix proportional to an arbitrarily large parameter λ. Asymptotic expressions for the EKF are derived as λ→∞; they are called diffusion learning algorithms (DLAs). It is shown that, in contrast to their prototype EKF with a large but finite λ, they are robust to the accumulation of rounding errors, and that, under certain simplifying assumptions, an extreme learning machine (ELM) algorithm can be obtained from a DLA. A numerical example shows that the accuracy of a DLA may be higher than that of an ELM algorithm.
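Since the abstract relates the DLA to the extreme learning machine, a minimal ELM sketch may help: hidden weights are drawn at random and left untrained, and only the linear output weights are solved in closed form by least squares. The data, layer size, and tanh activation below are illustrative assumptions, not details of the paper.

    # A minimal extreme learning machine (ELM) sketch with random hidden weights.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                      # toy inputs
    y = np.sin(X.sum(axis=1, keepdims=True))           # toy regression target

    n_hidden = 50
    W_in = rng.normal(size=(5, n_hidden))              # random, untrained hidden weights
    b = rng.normal(size=n_hidden)

    H = np.tanh(X @ W_in + b)                          # hidden-layer activations
    W_out, *_ = np.linalg.lstsq(H, y, rcond=None)      # linear output weights by least squares

    pred = H @ W_out
    print("training MSE:", float(np.mean((pred - y) ** 2)))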

12.
On-line learning algorithms for locally recurrent neural networks   (Total citations: 9; self-citations: 0; citations by others: 9)
This paper focuses on online learning procedures for locally recurrent neural networks, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLN). We propose a new gradient-based procedure called recursive backpropagation (RBP), whose online version, causal recursive backpropagation (CRBP), has some advantages over other online methods. CRBP includes backpropagation (BP), temporal BP, and the Back-Tsoi algorithm (1991), among others, as particular cases, thereby providing a unifying view of gradient calculation for recurrent nets with local feedback. The only learning method previously known for locally recurrent nets with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and faster convergence than the Back-Tsoi algorithm. The computational complexity of CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings; the superior performance of the new algorithm easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with CRBP. CRBP exhibits similar performance, and a detailed complexity analysis reveals that CRBP is much simpler and easier to implement, e.g., CRBP is local in space and in time whereas RTRL is not local in space.
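The sketch below illustrates the basic building block named in the abstract, an IIR synapse with local feedback feeding a neuron nonlinearity. It is a generic illustration; the filter coefficients are arbitrary choices rather than values from the paper.

    # A minimal IIR synapse of the kind used in locally recurrent MLPs: the synapse filters
    # its input sequence with feedforward (b) and feedback (a) coefficients.
    import numpy as np

    def iir_synapse(x, b, a):
        """y[t] = sum_k b[k]*x[t-k] + sum_m a[m]*y[t-m]  (local feedback inside the synapse)."""
        y = np.zeros_like(x, dtype=float)
        for t in range(len(x)):
            ff = sum(b[k] * x[t - k] for k in range(len(b)) if t - k >= 0)
            fb = sum(a[m] * y[t - m] for m in range(1, len(a)) if t - m >= 0)
            y[t] = ff + fb
        return y

    x = np.sin(np.linspace(0, 4 * np.pi, 50))
    y = iir_synapse(x, b=[0.5, 0.3], a=[1.0, 0.2])     # a[0] is unused; feedback starts at lag 1
    neuron_output = np.tanh(y)                          # neuron nonlinearity after the IIR filter
    print(neuron_output[:5])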

13.
The knowledge-based artificial neural network (KBANN) is composed of phases involving the expression of domain knowledge, the abstraction of domain knowledge into neural networks, the training of neural networks, and, finally, the extraction of rules from trained neural networks. The KBANN attempts to open up the neural network black box and generates symbolic rules with (approximately) the same predictive power as the neural network itself. An advantage of using KBANN is that the neural network considers the contribution of the inputs towards classification as a group, while rule-based algorithms like C5.0 measure the individual contribution of the inputs one at a time as the tree is grown. The knowledge consolidation model (KCM) combines the rules extracted using KBANN (NeuroRule), a frequency matrix (similar to the Naïve Bayesian technique), and the C5.0 algorithm. The KCM can effectively integrate multiple rule sets into one centralized knowledge base. The cumulative rules from the single models can improve overall performance by reducing the error term and increasing R-squared. The key idea in the KCM is to combine a number of classifiers such that the resulting combined system achieves higher classification accuracy and efficiency than the original single classifiers. The aim of the KCM is to design a composite system that outperforms any individual classifier by pooling together the decisions of all classifiers. Another advantage of the KCM is that it does not need memory space to store the dataset, since only the extracted knowledge is necessary to build the integrated model; it can also reduce the costs of storage allocation, memory, and scheduling. To verify the feasibility and effectiveness of the KCM, a personal credit rating dataset provided by a local bank in Seoul, Republic of Korea, is used in this study. The test results show that the performance of the KCM is superior to that of other single models such as multiple discriminant analysis, logistic regression, the frequency matrix, neural networks, decision trees, and NeuroRule. Moreover, our model is superior to a previous algorithm for the extraction of rules from general neural networks.
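The key KCM idea, pooling the decisions of several classifiers so the combination beats any single one, can be sketched generically with scikit-learn's VotingClassifier. This is not the paper's rule-integration procedure, and the synthetic dataset merely stands in for the Korean credit-rating data used in the study.

    # A generic sketch of pooling classifier decisions; compare each single model with the ensemble.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    single_models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "naive_bayes": GaussianNB(),
        "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    }
    combined = VotingClassifier(estimators=list(single_models.items()), voting="soft")

    for name, model in {**single_models, "combined": combined}.items():
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: {score:.3f}")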

14.
15.
Research on Support Vector Machine Learning Methods Based on Neural Networks   (Total citations: 4; self-citations: 0; citations by others: 4)
To address the low efficiency of support vector machines (SVM) in classifying large-scale samples, SVM training algorithms based on the adaptive resonance theory (ART) neural network and the self-organizing feature map (SOM) neural network are proposed, called ART-SVM and SOM-SVM respectively. Both algorithms compress the dataset by clustering, which greatly speeds up SVM training while retaining satisfactory generalization ability.
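The "cluster first, then train the SVM on the compressed set" idea can be sketched as follows. The paper's clustering step uses ART and SOM networks; k-means is used here only as a readily available stand-in, and all sizes are illustrative.

    # Sketch: compress each class to cluster centers, then train the SVM on the small set.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

    centers, labels = [], []
    for cls in np.unique(y):
        km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X[y == cls])
        centers.append(km.cluster_centers_)
        labels.append(np.full(50, cls))

    X_small = np.vstack(centers)
    y_small = np.concatenate(labels)

    # Training on 100 representatives instead of 5000 samples is far cheaper.
    svm = SVC(kernel="rbf").fit(X_small, y_small)
    print("accuracy on the full set:", svm.score(X, y))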

16.
Discusses the learning problem of neural networks with self-feedback connections and shows that, when the neural network is used as an associative memory, the learning problem can be transformed into an optimization (mathematical programming) problem. Thus, mature optimization techniques from mathematical programming can be used to solve the learning problem of neural networks with self-feedback connections. Two learning algorithms based on programming techniques are presented; their complexity is only polynomial. The optimization of the radius of attraction of the training samples is then discussed using quadratic programming techniques, and the corresponding algorithm is given. Finally, the given learning algorithms are compared with other known algorithms.

17.
Convergent on-line algorithms for supervised learning in neural networks   (Total citations: 1; self-citations: 0; citations by others: 1)
We define online algorithms for neural network training based on the construction of multiple copies of the network, which are trained on different data blocks. It is shown that suitable training algorithms can be defined so that the disagreement between the different copies of the network is asymptotically reduced and convergence toward stationary points of the global error function can be guaranteed. Notable features of the proposed approach are that the learning rate need not be forced to zero and that real-time learning is permitted.
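A toy sketch of the setup described above: several copies of the same (here linear) model, each updated on its own data block, with a periodic consensus step that shrinks their disagreement. The averaging rule and all numbers are illustrative assumptions, not the paper's algorithm.

    # Multiple copies trained on different data blocks, with a consensus (averaging) step.
    import numpy as np

    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0, 0.5])
    blocks = []
    for _ in range(4):                                   # four data blocks, one per copy
        X = rng.normal(size=(64, 3))
        blocks.append((X, X @ w_true + 0.01 * rng.normal(size=64)))

    copies = [np.zeros(3) for _ in blocks]               # identical initial copies
    lr = 0.01
    for epoch in range(200):
        for i, (X, y) in enumerate(blocks):
            grad = X.T @ (X @ copies[i] - y) / len(y)    # gradient step on this copy's block
            copies[i] -= lr * grad
        mean_w = np.mean(copies, axis=0)                 # consensus step reduces disagreement
        copies = [0.5 * w + 0.5 * mean_w for w in copies]

    print("disagreement:", max(np.linalg.norm(w - mean_w) for w in copies))
    print("estimated weights:", np.round(mean_w, 3))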

18.
19.
刘志刚  许少华  李盼池 《控制与决策》2016,31(12):2241-2247
When the weight functions of a continuous process neural network are expanded in an orthogonal basis, the number of basis functions cannot be determined effectively, so the approximation accuracy is low. To address this problem, a discrete process neural network is proposed that uses cubic-spline numerical integration for the temporal aggregation of discrete samples and weights. For training, a double-chain quantum particle swarm performs global optimization of the input weights, with population evolution driven by quantum rotation gates and NOT gates; locally, extreme learning is used, computing the output weights via the Moore-Penrose generalized inverse. Simulation experiments on time-series prediction verify the effectiveness of the model and show that both its training convergence and its approximation ability improve to some extent.
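Two ingredients mentioned above can be sketched directly: cubic-spline numerical integration of a discretely sampled signal, and the extreme-learning step of computing output weights with the Moore-Penrose generalized inverse. The signal, layer sizes, and targets below are illustrative choices, not data from the paper.

    # Cubic-spline integration of discrete samples, and pseudoinverse output weights.
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Cubic-spline integration of a signal known only at discrete sample points.
    t = np.linspace(0.0, 1.0, 11)
    x = np.sin(2 * np.pi * t)
    spline = CubicSpline(t, x)
    print("spline integral over [0, 1]:", spline.integrate(0.0, 1.0))

    # Extreme-learning step: hidden activations H fixed, output weights from the pseudoinverse.
    rng = np.random.default_rng(0)
    H = np.tanh(rng.normal(size=(100, 20)))          # hidden-layer outputs for 100 samples
    targets = rng.normal(size=(100, 1))
    W_out = np.linalg.pinv(H) @ targets              # Moore-Penrose generalized inverse
    print("fit MSE:", float(np.mean((H @ W_out - targets) ** 2)))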

20.
We present two fuzzy conjugate gradient learning algorithms based on evolutionary algorithms for polygonal fuzzy neural networks (PFNN). First, we design a fuzzy conjugate gradient algorithm based on the genetic algorithm (GA), in which an optimal learning constant η is obtained by the GA; experiments indicate that the new algorithm always converges. Because the GA-based algorithm is somewhat slow at each iteration step, we propose obtaining the learning constant η with a quantum genetic algorithm (QGA) instead of the GA, reducing the time spent per iteration. The PFNN tuned by the proposed learning algorithm is applied to the approximate realization of fuzzy inference rules, and several experiments demonstrate the whole process. © 2011 Wiley Periodicals, Inc.
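The core tuning idea, using a genetic algorithm to pick the learning constant η, can be sketched on a toy problem. The fitness below is the residual error of a simple least-squares task after a fixed number of gradient steps, not the PFNN from the paper, and the GA operators (truncation selection plus Gaussian mutation) are illustrative simplifications.

    # Toy GA search for a learning constant eta that minimizes error after fixed-step training.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5])

    def fitness(eta, n_steps=30):
        """Residual error after n_steps of gradient descent with learning constant eta."""
        w = np.zeros(3)
        for _ in range(n_steps):
            w -= eta * X.T @ (X @ w - y) / len(y)
        return float(np.mean((X @ w - y) ** 2))

    population = rng.uniform(0.001, 0.6, size=20)         # candidate values of eta
    for _ in range(15):                                    # a few GA generations
        scores = np.array([fitness(eta) for eta in population])
        parents = population[np.argsort(scores)[:10]]      # selection: keep the best half
        children = parents + rng.normal(scale=0.02, size=10)   # mutation
        population = np.clip(np.concatenate([parents, children]), 1e-4, 0.6)

    best = min(population, key=fitness)
    print(f"best eta = {best:.4f}, error = {fitness(best):.6f}")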
