Similar Documents
20 similar documents retrieved (search time: 188 ms)
1.
The memristor is a passive, two-terminal, nanoscale electronic element characterized by low power consumption, high storage density, small size, and non-volatility. As a new type of memory device, the memristor is expected to let computers integrate information storage and information processing in the way the human brain does, breaking with the current von Neumann architecture and offering an entirely new architecture for next-generation computers. Because the memristor behaves much like a biological synapse, it can serve as the synapse of an artificial neuron, giving rise to memristor-based artificial neural networks, i.e. memristive neural networks. The advent of the memristor makes it possible for artificial neural networks to emulate the human brain at the circuit level and will greatly advance artificial intelligence. Moreover, time delays and bifurcations inevitably arise in the hardware implementation of memristive neural networks and in their signal transmission, so it is of practical significance to study memristive neural networks with various delays, such as discrete, distributed, leakage, and mixed delays. This survey first introduces several mathematical models of the memristor and their classification, establishes the mathematical model of delayed memristive neural networks (DMNN), and explains its advantages. It then proposes two approaches to the dynamics and control problems of delayed memristive neural networks, reviews in detail the stability (stabilization), dissipativity, passivity, and synchronization control of DMNN systems, briefly covers other dynamical behaviors and control issues, and introduces new research directions for the dynamics and control of DMNNs. Finally, the problems discussed are summarized and future prospects are given.
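For orientation, one memristor model typically covered in such surveys is the HP linear ion-drift (charge-controlled) model; the form below is the generic textbook version, quoted here for reference rather than taken from this survey:

    M(w) = R_ON * (w/D) + R_OFF * (1 - w/D),    dw/dt = mu_v * (R_ON / D) * i(t),    0 <= w <= D

where w is the width of the doped region, D the device thickness, mu_v the ion mobility, and i(t) the device current; integrating the state equation makes the memristance a function of the accumulated charge.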

2.
R&D News
Nanoscale memristor devices capable of learning. In recent years the memristor has been viewed as the electronic twin of the synapse. A neuron is typically connected to other neurons through thousands of synapses. Like a synapse, a memristor remembers earlier pulses; like a neuron, it responds to a pulse only when the pulse exceeds a certain threshold. Because of these synapse-like properties, memristors can be used to emulate the learning process of the brain, making them particularly well suited for building extremely power-efficient, robust processors with learning capability.

3.
To address the problem of how to incorporate memristors into artificial neural network algorithms and implement them in hardware, a supervised neural network algorithm based on memristive characteristics, implemented on a field-programmable gate array (FPGA) platform, is proposed. In this design the memristor module serves as the weight-storage module of the neural network, and supervised learning with an error-feedback mechanism is constructed. The memristive neural-network circuit is applied to an image classification problem, and resource usage and processing speed are optimized. Experimental results show good classification performance: on a Cyclone II EP2C70F896I8 platform, the overall network algorithm occupies 11,773 logic elements (LEs), training takes 0.33 ms, and testing an image takes 10 μs. This work provides a useful reference for combining memristors with neural networks.
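As a behavioral illustration of the error-feedback scheme described above (the FPGA design itself stores each weight in a memristor module; the Python below is only a software sketch, and the conductance bounds G_MIN/G_MAX, the threshold, and the learning rate are assumed values, not the paper's):

    import numpy as np

    G_MIN, G_MAX = 0.1, 1.0   # assumed conductance range of the memristive weight cells

    def train_epoch(X, y, G, lr=0.05):
        """One epoch of error-feedback (delta-rule) training for a single threshold unit.
        X: (n_samples, n_features); y: (n_samples,) targets in {0, 1};
        G: (n_features,) weights, interpreted as memristor conductances."""
        for x, target in zip(X, y):
            out = 1.0 if np.dot(G, x) > 0.5 * G.sum() else 0.0   # simple thresholded output
            err = target - out
            G = np.clip(G + lr * err * x, G_MIN, G_MAX)          # feedback nudges each conductance, kept in range
        return G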

4.
To address the inaccurate mining results of the Hopfield-neural-network-based maximum frequent itemset mining (HNNMFI) algorithm, an improved association-rule mining algorithm based on a Hopfield neural network with the current-threshold adaptive memristor (TEAM) model is proposed. First, synapses are designed and implemented with the TEAM model; the ability of the threshold memristor's memristance to vary continuously under a square-wave voltage is used to set and update the synaptic weights, adapting the input of the association-rule mining algorithm. Second, the energy function of the original algorithm is improved to align with the standard energy function, memristance is used to represent the weights, and the weights and biases are amplified. Finally, an algorithm that generates association rules from the maximum frequent itemsets is designed. In 1000 simulation experiments on 10 random transaction sets of size at most 30, the proposed algorithm improves the accuracy of the association-mining results by more than 33.9 percentage points compared with the HNNMFI algorithm, showing that memristors can effectively improve the accuracy of Hopfield neural networks in association-rule mining.
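For reference, the standard discrete Hopfield energy function that the improved algorithm aligns with has the usual form (with weights w_ij, biases theta_i, and neuron states s_i):

    E = -(1/2) * sum_{i,j} w_ij * s_i * s_j - sum_i theta_i * s_i

In the proposed scheme the w_ij are realized as memristances programmed through the TEAM-model synapses, with the weights and biases amplified as described above.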

5.
A memristor is a resistor with dynamic characteristics: its resistance changes with an external field and is retained after the field is removed, a behavior similar to the connection strength of a biological synapse, so it can be used to store synaptic weights. On this basis, to realize recognition and learning of the IRIS dataset based on a Temporal rule, a SPICE simulation circuit of a neural network with memristor bridges as synapses is built. A single-pulse coding scheme is adopted, in which the timing of the pulse carries the data information. The neural-network circuit consists of 48 pulse input ports, 144 synapses, and 3 output ports. Synaptic weights are modified according to the Temporal learning rule; in simulation, the neural-network circuit achieves a classification accuracy of up to 93.33% on the IRIS dataset, demonstrating the applicability of this neural architecture in brain-inspired spiking neural networks.
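The single-pulse coding mentioned above carries each input value in the timing of one spike; a minimal latency-coding sketch follows (the "larger value fires earlier" convention and the 20 ms window are assumptions for illustration, not the paper's settings):

    import numpy as np

    T_MAX = 20.0  # assumed coding window (ms)

    def latency_encode(x):
        """Map features scaled to [0, 1] onto single spike times: larger values fire earlier,
        so the spike time itself is the data carried into the network."""
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return (1.0 - x) * T_MAX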

6.
The memristor has a unique memory function and a continuously variable conductance state, giving it great application advantages in artificial intelligence, neural networks, and related research fields. The charge-controlled model of the memristor is derived in detail, and by combining nanoscale memristors with chaotic neural networks capable of intelligent information processing, a new memristor-based continuous-learning chaotic neural network model is proposed. The memristor can directly realize the numerous feedback and iteration operations in the network, i.e. the spatio-temporal summation of external inputs to neurons and of the interactions between neurons. The proposed memristive continuous-learning chaotic neural network can distinguish known patterns from unknown ones and can automatically learn and memorize unknown patterns. Computer simulations verify the feasibility of the scheme. Because memristors are nanoscale devices with inherent memory, the scheme is expected to greatly simplify the structure of chaotic neural networks.

7.
A memcapacitor bridge circuit that can realize zero, positive, and negative synaptic weights is constructed from four identical memcapacitors. With three additional bipolar junction transistors, the memcapacitor bridge weight circuit can perform the synaptic operation of a neural cell. Because the entire operation is based on pulsed input signals, the circuit is energy-efficient. Synaptic-weight programming and synaptic-weight multiplication are simulated in Matlab. Simulation results show that the synapse circuit based on the linear memcapacitor bridge is essentially comparable in performance to the memristor bridge synapse circuit and outperforms conventional synaptic multiplier circuits.
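For comparison, the memristor bridge synapse against which the memcapacitor bridge is benchmarked is usually written with four device states M1..M4 forming two voltage dividers (quoted here from the general literature as a reference sketch, not from this paper):

    Vout = ( M2 / (M1 + M2) - M4 / (M3 + M4) ) * Vin

The weight is zero when the two dividers balance and becomes positive or negative as they are programmed apart; the memcapacitor bridge achieves the same zero/positive/negative weight range with capacitive dividers.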

8.
A study of the stability of memristive neural networks based on MNIST   Total citations: 1 (self: 0, by others: 1)
To investigate how memristor stability issues affect the performance of memristive neural networks, a BP neural network with memristors as synapses was built on the basis of a memristor model with an equivalent-resistance topology, and the network was trained and tested on the MNIST dataset. The stability issues of the memristor were simulated by introducing fluctuations into the memristor parameters. It was found that performance fluctuations within a certain range actually promote convergence of the neural network, whereas excessive fluctuations slow convergence. To characterize the critical level of fluctuation, the maximum fluctuation range of each parameter of the memristor model was measured, and the allowable ranges of process-level device parameters were further computed, providing a reference for the fabrication and selection of memristive devices in hardware implementations of memristive neural networks.
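The parameter fluctuation studied above can be emulated in simulation by perturbing the stored weights; a minimal sketch (the multiplicative Gaussian form and the sigma value are assumptions, not the paper's device model):

    import numpy as np

    def perturb_weights(W, sigma=0.02, rng=None):
        """Apply multiplicative Gaussian fluctuation to memristive weights, emulating
        device-to-device and cycle-to-cycle variation of the synapses."""
        rng = np.random.default_rng() if rng is None else rng
        return W * (1.0 + sigma * rng.standard_normal(W.shape))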

9.
By combining the memristor, a novel circuit element, with conventional cellular neural networks, a memristive cellular neural network with small size, low power consumption, and fast computation is constructed. The network is used for the preprocessing step of license plate image localization, and the corresponding computer simulations verify the effectiveness of the scheme. The proposed memristive cellular neural network will raise the integration level of hardware circuit implementations and also help improve the speed and efficiency of license plate recognition.

10.
Multi-agent reinforcement learning algorithms suffer from large state spaces and low learning efficiency when applied to complex distributed systems. Focusing on the resource-allocation problem in network environments, multi-agent reinforcement learning is studied: the Q-learning algorithm is combined with a chain feedback (CF) learning algorithm to obtain the Q-CF multi-agent reinforcement learning algorithm, which achieves efficient coordination among agents through a mechanism called information chain feedback. Simulation...
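The Q-learning half of the combination is the standard tabular update shown below; the chain-feedback (CF) coordination mechanism is the paper's own contribution and is not reproduced here:

    def q_update(Q, s, a, r, s_next, next_actions, alpha=0.1, gamma=0.9):
        """Standard tabular Q-learning update.
        Q: dict mapping (state, action) -> value; next_actions: actions available in s_next."""
        best_next = max(Q.get((s_next, a2), 0.0) for a2 in next_actions)
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
        return Q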

11.
Compared with first- and second-generation neural networks, the third-generation spiking neural network is a model closer to biological neural networks and therefore offers better biological interpretability and lower power consumption. Based on spiking neuron models, spiking neural networks simulate the propagation of biological signals through the network in the form of spikes; spike trains are emitted according to changes in the membrane potential of the spiking neurons, and through joint spatio-temporal representation a spike train conveys both spatial and temporal information. Current spiking neural network models for pattern-recognition tasks still lag behind deep learning in performance, and an important reason is that learning methods for spiking neural networks are immature: the artificial neurons in deep learning produce real-valued outputs, which allows deep network parameters to be trained with the global backpropagation algorithm, whereas spike trains are binary, discrete outputs, which makes training spiking neural networks inherently difficult; how to train spiking neural networks efficiently is thus a challenging research problem. This paper first summarizes the learning algorithms in the field of spiking neural networks, then analyzes and introduces the main approaches, namely direct supervised learning, unsupervised learning, and ANN2SNN conversion, and compares representative works. Finally, based on a summary of current mainstream methods, it looks ahead to more efficient and more biologically plausible parameter-learning methods for spiking neural networks.
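The membrane-potential mechanism referred to above is most often modelled with a leaky integrate-and-fire (LIF) neuron; a discrete-time sketch with illustrative constants:

    def lif_step(v, input_current, tau_m=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
        """One Euler step of a leaky integrate-and-fire neuron.
        Returns the updated membrane potential and a 1/0 spike output."""
        v = v + (dt / tau_m) * (-v + input_current)  # leaky integration of the input
        if v >= v_th:                                # threshold crossing emits a spike
            return v_reset, 1
        return v, 0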

12.
In recent years, spiking neural networks, which originated in computational neuroscience, have received broad attention in neuromorphic engineering and brain-inspired computing owing to their rich spatio-temporal dynamics, diverse coding mechanisms, and hardware-friendly event-driven nature. The cross-fertilization of spiking neural networks with today's computer-science-oriented artificial neural networks, represented by deep convolutional networks, is regarded as a promising path toward artificial general intelligence. This paper reviews the development of spiking neural networks along five key directions: neuron models, training algorithms, programming frameworks, datasets, and hardware chips; it gives a comprehensive account of the latest progress and substance of spiking neural networks and discusses the opportunities and challenges in each of these directions. We hope this survey will attract researchers from different disciplines and, through interdisciplinary exchange and collaborative research, advance the field of spiking neural networks.

13.
Neuromorphic computing is considered to be the future of machine learning, and it provides a new way of cognitive computing. Inspired by the excellent performance of spiking neural networks (SNNs) in low-power consumption and parallel computing, many groups have tried to implement SNNs on hardware platforms. However, the efficiency of training SNNs with neuromorphic algorithms is not yet ideal. Facing this, Michael et al. proposed a method that solves the problem with the help of a DNN (deep neural network): a well-trained DNN can easily be converted into an SCNN (spiking convolutional neural network). So far, little work has focused on hardware acceleration of SCNNs. The motivation of this paper is to design an SNN processor that accelerates inference for SNNs obtained by this DNN-to-SNN method. We propose SIES (Spiking Neural Network Inference Engine for SCNN Accelerating). It uses a systolic array to compute the membrane-potential increments. It integrates an optional hardware max-pooling module to reduce additional data movement between the host and the SIES. We also design a hardware data-setup mechanism for the convolutional layers of the SIES that minimizes the time spent preparing input spikes. We implement the SIES on a Xilinx XCVU440 FPGA. It supports up to 4,000 neurons and 256,000 synapses. The SIES runs at a working frequency of 200 MHz, and its peak performance is 1.5625 TOPS.
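Functionally, what the systolic array evaluates at each timestep is the membrane-potential increment of every neuron given that step's binary input spikes; the sketch below shows the computation for a fully connected layer (a simplification of the convolutional case, with reset-by-subtraction assumed, as is common after DNN-to-SNN conversion):

    import numpy as np

    def if_layer_step(v, W, spikes_in, v_th=1.0):
        """One timestep of an integrate-and-fire layer.
        W: (n_out, n_in) weights; spikes_in: (n_in,) binary spikes; v: (n_out,) potentials."""
        v = v + W @ spikes_in                        # membrane-potential increments (the systolic-array workload)
        spikes_out = (v >= v_th).astype(np.int8)     # neurons above threshold fire
        v = np.where(spikes_out == 1, v - v_th, v)   # reset by subtraction
        return v, spikes_out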

14.
As the shortcomings of deep learning in training cost, generalization ability, interpretability, and reliability become increasingly prominent, brain-inspired computing has become a research hotspot for next-generation artificial intelligence. Spiking neural networks better mimic the way biological neurons transmit information, feature strong computational capability and low power consumption, and hold great potential for simulating complex brain functions such as learning, memory, reasoning, judgment, and decision-making. This paper surveys spiking neural networks from the following aspects. First, the basic structure and working principles of spiking neural networks are described. In terms of structural optimization, five aspects are summarized: coding schemes, improvements to spiking neurons, topology, training algorithms, and combination with other algorithms. In terms of training algorithms, four aspects are summarized: backpropagation-based methods, methods based on spike-timing-dependent plasticity rules, ANN-to-SNN conversion, and other learning algorithms. The shortcomings and future development of spiking neural networks are analyzed from the perspectives of supervised and unsupervised learning. Finally, spiking neural networks are applied to brain-inspired computing and bio-inspired tasks. This paper systematically summarizes the basic principles, coding schemes, network structures, and training algorithms of spiking neural networks, which is of positive significance for research on spiking neural networks.

15.
Biologically-inspired packet switched network on chip (NoC) based hardware spiking neural network (SNN) architectures have been proposed as an embedded computing platform for classification, estimation and control applications. Storage of large synaptic connectivity (SNN topology) information in SNNs requires large distributed on-chip memory, which poses serious challenges for compact hardware implementation of such architectures. Based on the structured neural organisation observed in the human brain, a modular neural network (MNN) design strategy partitions complex application tasks into smaller subtasks executing on distinct neural network modules, and integrates intermediate outputs in higher level functions. This paper proposes a hardware modular neural tile (MNT) architecture that reduces the SNN topology memory requirement of NoC-based hardware SNNs by using a combination of fixed and configurable synaptic connections. The proposed MNT contains a 16:16 fully-connected feed-forward SNN structure and integrates into a mesh-topology NoC communication infrastructure. The SNN topology memory requirement is 50% of that of the monolithic NoC-based hardware SNN implementation. The paper also presents a lookup-table-based SNN topology memory allocation technique, which further increases the memory utilisation efficiency. Overall, the area requirement of the architecture is reduced by an average of 66% for practical SNN application topologies. The paper presents micro-architecture details of the proposed MNT and the digital neuron circuit. The proposed architecture has been validated on a Xilinx Virtex-6 FPGA and synthesised using 65 nm low-power CMOS technology. The evolvable capability of the proposed MNT and its suitability for executing subtasks within an MNN execution architecture are demonstrated by successfully evolving benchmark SNN application tasks representing classification and non-linear control functions. The paper addresses hardware modular SNN design and implementation challenges and contributes to the development of a compact hardware modular SNN architecture suitable for embedded applications.

16.
The connectionist systems (artificial neural networks) developed and widely utilized so far are mainly based on a single brain-like connectionist principle of information processing, where learning and information exchange occur in the connections. This paper extends this paradigm of connectionist systems to a new trend: integrative connectionist learning systems (ICOS), which integrate in their structure and learning algorithms principles from different hierarchical levels of information processing in the brain, including the neuronal, genetic, and quantum levels. Spiking neural networks (SNN) are used as a basic connectionist learning model, which is further extended with other information learning principles to create different ICOS. For example, evolving SNN for multitask learning are presented and illustrated on a case study of person authentication based on multimodal auditory and visual information. Integrative gene-SNNs are presented, where gene interactions are included in the functioning of a spiking neuron; they are applied to a case study of computational neurogenetic modeling. Integrative quantum-SNNs are introduced with quantum Hebbian learning, where input features as well as information spikes are represented by quantum bits, resulting in exponentially faster feature selection and model learning. ICOS can be used to solve challenging biological and engineering problems more efficiently when fast adaptive learning systems are needed to learn incrementally in a high-dimensional space. They can also help to better understand complex information processes in the brain, especially how information processes at different levels interact. Open questions, challenges and directions for further research are presented.

17.
In this paper, we present the development of a navigation control system for a sailboat based on spiking neural networks (SNN). Our inspiration for this choice of network lies in their potential to achieve fast and low-energy computing on specialized hardware. To train our system, we use the modulated spike-timing-dependent plasticity reinforcement learning rule and a simulation environment based on the BindsNET library and the USVSim simulator. Our objective was to develop spiking neural network-based control systems that can learn policies allowing sailboats to navigate between two points by following a straight line or performing tacking and gybing strategies, depending on the sailing scenario conditions. We present the mathematical definition of the problem, the operation scheme of the simulation environment, the spiking neural network controllers, and the control strategy used. As a result, we obtained 425 SNN-based controllers that completed the proposed navigation task, indicating that the simulation environment and the implemented control strategy work effectively. Finally, we compare the behavior of our best controller with other algorithms and present some possible strategies to improve its performance.
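The modulated STDP reinforcement rule used above broadly follows the pattern "reward gates an STDP eligibility trace"; the sketch below is a generic reward-modulated STDP step with illustrative constants, not the exact rule or parameters from the paper or from BindsNET:

    import numpy as np

    def mstdp_step(w, elig, pre_spk, post_spk, pre_tr, post_tr, reward,
                   lr=1e-3, a_plus=1.0, a_minus=1.0, elig_decay=0.9):
        """One step of reward-modulated STDP.
        pre_tr/post_tr: decaying traces of pre-/post-synaptic spikes; elig: eligibility trace.
        w, elig: (n_post, n_pre); spike vectors and traces are 1-D arrays."""
        stdp = a_plus * np.outer(post_spk, pre_tr) - a_minus * np.outer(post_tr, pre_spk)
        elig = elig_decay * elig + stdp   # accumulate the would-be STDP change
        w = w + lr * reward * elig        # the reward signal turns eligibility into an actual update
        return w, elig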

18.
In this paper, the robust global asymptotic stability (RGAS) of generalized static neural networks (SNNs) with linear fractional uncertainties and a constant or time-varying delay is addressed within a novel input-output framework. The activation functions in the model are assumed to satisfy a more general condition than the usual Lipschitz-type ones. First, by four steps of technical transformations, the original generalized SNN model is equivalently converted into the interconnection of two subsystems, where the forward one is a linear time-invariant system with a constant delay while the feedback one bears the norm-bounded property. Then, based on the scaled small gain theorem, delay-dependent sufficient conditions for the RGAS of generalized SNNs are derived by combining a complete Lyapunov functional with the celebrated discretization scheme. All the results are given in terms of linear matrix inequalities, so the RGAS problem of generalized SNNs is cast as the feasibility of convex optimization problems that can be readily solved by effective numerical algorithms. The effectiveness and superiority of our results over the existing ones are demonstrated by two numerical examples.
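For reference, a static neural network (as opposed to the local-field formulation) with a time-varying delay is commonly written as

    x'(t) = -A x(t) + f( W x(t - tau(t)) + J )

where A is a positive diagonal matrix, W the weight matrix, f(.) the vector of activation functions, J a constant input, and tau(t) the delay; the generalized model treated in the paper additionally carries the linear fractional uncertainties on the system matrices.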

19.
Computer vision aims to let computers "see" by simulating the human visual system and is a hot topic in artificial intelligence and neuroscience research. As a classic computer-vision task, image classification has attracted more and more research, and neural-network-based algorithms in particular perform excellently on various classification tasks. However, traditional shallow artificial neural networks have weak feature-learning ability and insufficient biological interpretability, while deep neural networks suffer from overfitting and high power consumption, so research on biologically interpretable image-classification algorithms for low-power environments remains a challenging task. To address these problems, an image-classification algorithm based on the Jetson TK1 and spiking neural networks is designed and implemented. The main contributions of this work are: (1) a deep spiking convolutional neural network algorithm is designed for image classification; (2) an improved CUDA-based spiking neural network model is implemented and deployed on the Jetson TK1 development environment.

20.
程龙  刘洋 《控制与决策》2018,33(5):923-937
Spiking neural networks are currently the most biologically plausible artificial neural networks and a core component of brain-inspired intelligence. This paper first introduces the commonly used spiking neuron models as well as feed-forward and recurrent spiking network architectures; it then introduces the temporal coding schemes of spiking neural networks and, on this basis, systematically reviews their learning algorithms, including unsupervised and supervised learning, with the supervised algorithms introduced and summarized in detail in three categories: gradient-descent-based methods, methods combined with the STDP rule, and methods based on spike-train convolution kernels. It then lists applications of spiking neural networks in control, pattern recognition, and brain-inspired intelligence research, and on this basis introduces cases in national brain projects where spiking neural networks are combined with neuromorphic processors. Finally, the current difficulties and challenges facing spiking neural networks are analyzed.
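The STDP rule appearing in the taxonomy above is usually the pair-based exponential window (given here for reference):

    delta_w = +A_plus  * exp(-delta_t / tau_plus),   for delta_t = t_post - t_pre > 0
    delta_w = -A_minus * exp( delta_t / tau_minus),  for delta_t < 0

so a presynaptic spike that precedes a postsynaptic spike potentiates the synapse, while the reverse order depresses it.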
