Similar Literature
20 similar documents found.
1.
A learning machine, called a clustering interpreting probabilistic associative memory (CIPAM), is proposed. CIPAM consists of a clusterer and an interpreter. The clusterer is a recurrent hierarchical neural network of unsupervised processing units (UPUs); the interpreter is a set of supervised processing units (SPUs) that branch out from the clusterer. Each processing unit (PU), whether UPU or SPU, comprises “dendritic encoders” for encoding the PU's inputs; “synapses” for storing the resulting codes; a “nonspiking neuron” that generates inhibitory graded signals to modulate neighboring spiking neurons; “spiking neurons” that compute the subjective probability distribution (SPD), or the membership function in the sense of fuzzy logic, of the label of those inputs and emit spike trains whose firing rates encode the SPD or membership function; and a masking matrix for maximizing generalization. UPUs employ unsupervised covariance learning mechanisms, while SPUs employ supervised ones; both also have unsupervised accumulation learning mechanisms. The clusterer of CIPAM clusters temporal and spatial data, and the interpreter interprets the resulting clusters, effecting detection and recognition of temporal and hierarchical causes.

2.
Hélène  Régis  Samy 《Neurocomputing》2008,71(7-9):1143-1158
We propose a multi-timescale learning rule for spiking neuron networks, in the line of the recently emerging field of reservoir computing. The reservoir is a network model of spiking neurons, with random topology and driven by STDP (spike-time-dependent plasticity), a temporal Hebbian unsupervised learning mode, biologically observed. The model is further driven by a supervised learning algorithm, based on a margin criterion, that affects the synaptic delays linking the network to the readout neurons, with classification as a goal task. The network processing and the resulting performance can be explained by the concept of polychronization, proposed by Izhikevich [Polychronization: computation with spikes, Neural Comput. 18(2) (2006) 245–282], on physiological grounds. The model emphasizes that polychronization can be used as a tool for exploiting the computational power of synaptic delays and for monitoring the topology and activity of a spiking neuron network.

3.
程龙  刘洋 《控制与决策》2018,33(5):923-937
Spiking neural networks (SNNs) are currently the most biologically interpretable artificial neural networks and a core component of brain-inspired intelligence. This survey first introduces the commonly used spiking neuron models and feedforward and recurrent SNN architectures. It then describes the temporal coding schemes of SNNs and, on that basis, systematically reviews SNN learning algorithms, covering both unsupervised and supervised learning; the supervised algorithms are discussed and summarized in three categories: gradient-descent-based methods, methods combined with the STDP rule, and methods based on spike-train convolution kernels. The survey then lists applications of SNNs in control, pattern recognition, and brain-inspired intelligence research, and on that basis presents cases from national brain projects that combine SNNs with neuromorphic processors. Finally, it analyzes the current difficulties and challenges facing SNNs.

4.
Targeting the way spiking neurons encode information in multiple precisely timed spikes, a new supervised learning algorithm for multilayer spiking neural networks based on convolution is proposed. The algorithm uses kernel convolution to transform discrete spike trains into continuous functions; in a multilayer feedforward spiking network, gradient descent then yields a learning rule expressed through the kernel convolution, which is used to adjust the synaptic weights. In the experiments, the algorithm's ability to learn spike trains is verified first, and the algorithm is then applied to classify the Iris dataset. The results show that the algorithm can learn complex spatiotemporal patterns of spike trains and achieves high classification accuracy on nonlinear pattern classification problems.
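The kernel-convolution idea above can be sketched as follows: a discrete spike train becomes a continuous (and hence differentiable) function of time by summing a kernel centered at each spike time. The Gaussian kernel, time constant, and spike times below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def spike_train_to_function(spike_times, t, tau=2.0):
    """Convolve a discrete spike train with a Gaussian kernel,
    yielding a continuous function sampled on the time axis t."""
    spike_times = np.asarray(spike_times, dtype=float)
    # One Gaussian bump per spike; broadcasting gives shape (len(t),)
    return np.exp(-((t[:, None] - spike_times[None, :]) ** 2)
                  / (2 * tau ** 2)).sum(axis=1)

t = np.linspace(0, 50, 501)                     # time axis in ms
f = spike_train_to_function([10.0, 22.5, 40.0], t)
# The continuous representation is differentiable, so gradient descent
# can be applied to it, unlike to the raw discrete spike times.
```
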

5.
Compared with first- and second-generation neural networks, the third-generation spiking neural network (SNN) is a model closer to biological neural networks, and is therefore more biologically interpretable and more power-efficient. Built on spiking neuron models, an SNN simulates the propagation of biological signals as spikes: spike trains are emitted according to changes in the membrane potential of spiking neurons, and through joint spatiotemporal coding these trains carry both spatial and temporal information. Current SNN models for pattern recognition do not yet match deep learning in performance, and one important reason is that SNN learning methods are immature. The artificial neurons of deep learning produce real-valued outputs, which allows deep networks to be trained with global backpropagation; spike trains are binary, discrete outputs, which makes SNN training inherently difficult, so how to train SNNs efficiently is a challenging research problem. This paper first surveys learning algorithms in the SNN field, then analyzes and introduces the main approaches: direct supervised learning, unsupervised learning, and ANN-to-SNN (ANN2SNN) conversion, with comparative analysis of representative work. Finally, building on this summary of mainstream methods, it offers an outlook on more efficient and more biologically faithful parameter-learning methods for SNNs.
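The membrane-potential mechanism described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron; the constants and constant input drive below are illustrative, not taken from any model in the surveyed papers. The binary, non-differentiable spike event is exactly what blocks naive backpropagation.

```python
def lif_simulate(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron (illustrative sketch).
    The membrane potential leaks with time constant tau, integrates the
    input current, and emits a binary spike on crossing the threshold."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)       # leaky integration
        if v >= v_thresh:                # threshold crossing
            spikes.append(1)             # binary, non-differentiable event
            v = v_reset                  # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Constant supra-threshold drive produces a regular spike train.
out = lif_simulate([1.5] * 100)
```
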

6.
As the big-data era advances, rumors on the Internet have surged. Among current rumor-detection methods, models based on supervised learning require large amounts of labeled data for training, and manual labeling of online rumors is time-consuming; a semi-supervised graph convolutional network (GCN) is therefore proposed that can make effective use of unlabeled data. By training the model on labeled nodes and updating a weight matrix shared by all nodes, information from labeled nodes is propagated to unlabeled nodes, which also addresses the weak generalization of supervised models and the instability of unsupervised models. Compared with three rumor-detection methods based on the SVM algorithm, logistic regression, and the BiLSTM model, the proposed method reaches 86.1% recall and 85.3% F1, further improving the accuracy and stability of rumor detection. The method can effectively reduce manual labeling cost, identify rumors in social media and online news, and offers a new approach to rumor governance.
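The propagation step that spreads labeled-node information to neighbors in such semi-supervised GCNs is commonly the symmetrically normalized rule of Kipf and Welling; a minimal sketch follows, where the tiny path graph and random features are invented for illustration and not from the paper.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    Mixing over A-hat is what lets information from labeled nodes
    reach unlabeled neighbors during training."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU

# Tiny 4-node path graph 0-1-2-3 (invented example).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = gcn_layer(adj, rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))
```
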

7.
Spiking neurons are very flexible computational modules, which can implement with different values of their adjustable synaptic parameters an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter the question to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogously as the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons, dynamic synapses, and more general input distributions. 
In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.
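The pair-based STDP rule at the core of this analysis can be sketched as an exponential window over pre/post spike-time differences; the time constants and learning rates below are illustrative, not the paper's.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.
    Pre-before-post (t_post > t_pre) potentiates the synapse;
    post-before-pre depresses it, each with an exponential window."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)     # LTP branch
    return -a_minus * np.exp(dt / tau)        # LTD branch

# Causal pairing strengthens the synapse, anti-causal weakens it.
ltp = stdp_dw(t_pre=10.0, t_post=15.0)
ltd = stdp_dw(t_pre=15.0, t_post=10.0)
```
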

8.
Few algorithms for supervised training of spiking neural networks exist that can deal with patterns of multiple spikes, and their computational properties are largely unexplored. We demonstrate in a set of simulations that the ReSuMe learning algorithm can successfully be applied to layered neural networks. Input and output patterns are encoded as spike trains of multiple precisely timed spikes, and the network learns to transform the input trains into target output trains. This is done by combining the ReSuMe learning algorithm with multiplicative scaling of the connections of downstream neurons. We show in particular that layered networks with one hidden layer can learn the basic logical operations, including Exclusive-Or, while networks without a hidden layer cannot, mirroring an analogous result for layered networks of rate neurons. While supervised learning in spiking neural networks is not yet fit for technical purposes, exploring the computational properties of spiking neural networks advances our understanding of how computations can be done with spike trains.
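The ReSuMe update can be sketched in discrete time: the weight change at each step is the difference between desired and actual spike trains, gated by an exponentially filtered presynaptic trace. This is a simplified sketch under assumed parameters, not the authors' exact implementation.

```python
import numpy as np

def resume_dw(desired, actual, presyn, dt=1.0, a=0.01, a_learn=0.05, tau=10.0):
    """Discrete-time sketch of a ReSuMe-style weight update:
    dw(t) = [S_d(t) - S_a(t)] * (a + a_learn * trace(t)),
    where trace is the exponentially filtered presynaptic spike train.
    Spike trains are 0/1 sequences on a common time grid."""
    trace, dw = 0.0, 0.0
    for s_d, s_a, s_in in zip(desired, actual, presyn):
        trace = trace * np.exp(-dt / tau) + s_in   # presynaptic trace
        dw += (s_d - s_a) * (a + a_learn * trace)
    return dw

# If the neuron fails to spike where desired, the weight grows.
desired = [0, 0, 0, 1, 0]
actual  = [0, 0, 0, 0, 0]
presyn  = [0, 1, 0, 0, 0]
delta_w = resume_dw(desired, actual, presyn)
```
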

9.
Naturally occurring sensory stimuli are dynamic. In this letter, we consider how spiking neural populations might transmit information about continuous dynamic stimulus variables. The combination of simple encoders and temporal stimulus correlations leads to a code in which information is not readily available to downstream neurons. Here, we explore a complex encoder that is paired with a simple decoder that allows representation and manipulation of the dynamic information in neural systems. The encoder we present takes the form of a biologically plausible recurrent spiking neural network where the output population recodes its inputs to produce spikes that are independently decodable. We show that this network can be learned in a supervised manner by a simple local learning rule.

10.
A neural network that combines unsupervised and supervised learning for pattern recognition is proposed. The network is a hierarchical self-organizing map, which is first trained by unsupervised learning. When the network fails to recognize similar patterns, supervised learning is applied to teach the network to give different scaling factors to different features so as to discriminate similar patterns. Simulation results show that the model obtains good generalization capability as well as sharp discrimination between similar patterns.

11.
As the shortcomings of deep learning in training cost, generalization, interpretability, and reliability become increasingly prominent, brain-inspired computing has become a research focus for next-generation artificial intelligence. Spiking neural networks (SNNs) better model the way biological neurons transmit information, offer strong computational power at low energy cost, and hold significant potential for modeling complex brain functions such as learning, memory, reasoning, judgment, and decision-making. This paper surveys SNNs from the following perspectives. It first explains their basic structure and working principles. On structural optimization, it summarizes five aspects: SNN coding schemes, improved spiking neuron models, topology, training algorithms, and combinations with other algorithms. On training algorithms, it summarizes four families: backpropagation-based methods, methods based on the spike-timing-dependent plasticity rule, ANN-to-SNN conversion, and other learning algorithms. It then analyzes the shortcomings and development of SNNs from both the supervised and unsupervised learning perspectives. Finally, it discusses applications of SNNs to brain-inspired computing and biomimetic tasks. This systematic treatment of basic principles, coding schemes, network structures, and training algorithms is intended to support further SNN research.

12.
This paper presents new findings in the design and application of biologically plausible neural networks based on spiking neuron models, which represent a more plausible model of real biological neurons in which time is considered an important feature for information encoding and processing in the brain. The design approach consists of an evolutionary-strategy-based supervised training algorithm, newly developed by the authors, and the use of different biologically plausible neuronal models. A dynamic synapse (DS) based neuron model, a biologically more detailed model, and the spike response model (SRM) are investigated in order to demonstrate the efficacy of the proposed approach and to further our understanding of the computing capabilities of the nervous system. Unlike the conventional synapse, represented as a static entity with a fixed weight, employed in conventional and SRM-based neural networks, a DS is weightless and its strength changes upon the arrival of incoming input spikes; its efficacy therefore depends on the temporal structure of the impinging spike trains. In the proposed approach, the training of the network's free parameters is achieved using an evolutionary strategy where, instead of binary encoding, real values are used to encode the static and DS parameters which underlie the learning process. The results show that spiking neural networks based on both types of synapse are capable of learning non-linearly separable data by means of spatio-temporal encoding. Furthermore, a comparison of the obtained performance with classical neural networks (multi-layer perceptrons) is presented.

13.
We study how the location of synaptic input influences the stable firing states in coupled model neurons bursting rhythmically at gamma frequencies (20-70 Hz). The model neuron consists of two compartments and generates one, two, three or four spikes in each burst depending on the intensity of the input current and the maximum conductance of the M-type potassium current. If the somata are connected by reciprocal excitatory synapses, we find strong correlations between the changes in the bursting mode and those in the stable phase-locked states of the coupled neurons. The stability of the in-phase phase-locked state (synchronously firing state) tends to change when the individual neurons change their bursting patterns. If, however, the synaptic connections terminate on the dendritic compartments, no such correlated changes occur. In this case, the coupled bursting neurons do not show the in-phase phase-locked state in any bursting mode. These results indicate that the synchronization behaviour of bursting neurons depends significantly on synaptic location, unlike a coupled system of regular spiking neurons.

14.
Query-based learning (QBL) has been introduced for training a supervised network model with additional queried samples, and experiments have demonstrated that it further increases classification accuracy. Although QBL has been successfully applied to supervised neural networks, it is not suitable for unsupervised learning models, which lack external supervisors. In this paper, an unsupervised QBL (UQBL) algorithm using selective attention and self-regulation is proposed. Applying selective attention, we can ask the network to respond to its goal-directed behavior with self-focus. Since there is no supervisor to verify the self-focus, a compromise is then made to environment-focus with self-regulation. We introduce UQBL1 and UQBL2 as two versions of UQBL; both provide fast convergence. Our experiments indicate that the proposed methods are more insensitive to network initialization, achieve better generalization performance, and yield a significant reduction in training-set size.

15.
徐彦  熊迎军  杨静 《计算机应用》2018,38(6):1527-1534
The spiking neuron is a novel artificial neuron model. The goal of its supervised learning is to train the neuron to fire a spike train that expresses specific information through precise spike timing, hence the name spike-train learning. Because spike-train learning for a single neuron has clear application value, diverse theoretical foundations, and many influencing factors, this paper surveys and compares existing spike-train learning methods. It first introduces spiking neuron models and the basic concepts of spike-train learning. It then describes the representative spike-train learning methods in detail, identifying the theoretical basis and synaptic weight-adjustment scheme of each. Finally, it compares the performance of these methods experimentally, systematically summarizes the characteristics of each, and discusses the current state and future directions of spike-train learning research. These results should aid the integrated application of spike-train learning methods.
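Performance comparisons of spike-train learning methods typically measure how close the learned output train is to the target train. One widely used measure is the van Rossum distance, sketched below; the exponential kernel, time constant, and spike times are illustrative assumptions, not necessarily the metric used in the surveyed experiments.

```python
import numpy as np

def van_rossum_distance(train_a, train_b, t, tau=5.0):
    """Van Rossum distance: filter each spike train with a causal
    exponential kernel and take the L2 distance of the waveforms."""
    def filtered(spikes):
        d = t[:, None] - np.asarray(spikes, dtype=float)[None, :]
        return np.where(d >= 0, np.exp(-d / tau), 0.0).sum(axis=1)
    diff = filtered(train_a) - filtered(train_b)
    dt = t[1] - t[0]
    return np.sqrt((diff ** 2).sum() * dt / tau)

t = np.linspace(0, 100, 1001)                    # time grid in ms
d_same = van_rossum_distance([10, 40, 70], [10, 40, 70], t)
d_diff = van_rossum_distance([10, 40, 70], [12, 45, 90], t)
```
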

16.
Complex application domains involve difficult pattern classification problems. The state space of these problems consists of regions that lie near class separation boundaries and require the construction of complex discriminants, while for the remaining regions the classification task is significantly simpler. The motivation for developing the Supervised Network Self-Organizing Map (SNet-SOM) model is to exploit this fact for designing computationally effective solutions. Specifically, the SNet-SOM utilizes unsupervised learning for classifying at the simple regions and supervised learning for the difficult ones in a two-stage learning process. The unsupervised learning approach is based on the Self-Organizing Map (SOM) of Kohonen. The basic SOM is modified with a dynamic node insertion/deletion process controlled with an entropy-based criterion that allows an adaptive extension of the SOM. This extension proceeds until the total number of training patterns that are mapped to neurons with high entropy (and therefore with ambiguous classification) reduces to a size manageable numerically by a capable supervised model. The second learning phase (the supervised training) has the objective of constructing better decision boundaries at the ambiguous regions. At this phase, a special supervised network is trained for the computationally reduced task of performing the classification at the ambiguous regions only. The performance of the SNet-SOM has been evaluated on both synthetic data and on an ischemia detection application with data extracted from the European ST-T database. In all cases, the utilization of SNet-SOM with supervised learning based on both Radial Basis Functions and Support Vector Machines has improved the results significantly relative to those obtained with the unsupervised SOM and has enhanced the scalability of the supervised learning schemes. The highly disciplined design of the generalization performance of the Support Vector Machine allows designing the proper model for the particular training set.
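The entropy criterion that flags ambiguous SOM nodes can be sketched directly: a node's entropy is computed over the class distribution of the training patterns mapped to it, and high-entropy nodes are handed to the supervised stage. The function name and example counts are illustrative, not from the paper.

```python
import numpy as np

def node_entropy(class_counts):
    """Entropy (bits) of the class distribution of patterns mapped
    to one SOM node; high entropy marks an ambiguous node."""
    counts = np.asarray(class_counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                        # treat 0 * log(0) as 0
    return float(-(p * np.log2(p)).sum())

# A node hit by a single class is unambiguous; a 50/50 node is not.
pure = node_entropy([30, 0])      # → 0.0
mixed = node_entropy([15, 15])    # → 1.0
```
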

17.
《Information Fusion》2007,8(3):227-251
This paper presents a new approach to higher-level information fusion in which knowledge and data are represented using semantic networks composed of coupled spiking neuron nodes. Networks of simulated spiking neurons have been shown to exhibit synchronization, in which sub-assemblies of nodes become phase locked to one another. This phase locking reflects the tendency of biological neural systems to produce synchronized neural assemblies, which have been hypothesized to be involved in binding of low-level features in the perception of objects. The approach presented in this paper embeds spiking neurons in a semantic network, in which a synchronized sub-assembly of nodes represents a hypothesis about a situation. Likewise, multiple synchronized assemblies that are out-of-phase with one another represent multiple hypotheses. The initial network is hand-coded, but additional semantic relationships can be established by associative learning mechanisms. This approach is demonstrated by simulation of proof-of-concept scenarios involving the tracking of suspected criminal vehicles between meeting places in an urban environment. Our results indicate that synchronized sub-assemblies of spiking nodes can be used to represent multiple simultaneous events occurring in the environment and to effectively learn new relationships between semantic items in response to these events. In contrast to models of synchronized spiking networks that use physiologically realistic parameters in order to explain limits in human short-term memory (STM) capacity, our networks are not subject to the same limitations in representational capacity for multiple simultaneous events. Simulations demonstrate that the representational capacity of our networks can be very large, but as more simultaneous events are represented by synchronized sub-assemblies, the effective learning rate for establishing new relationships decreases. 
We propose that this effect could be countered by speeding up the spiking dynamics of the networks (a tactic of limited availability to biological systems). Such a speedup would allow the number of simultaneous events to increase without compromising the learning rate.

18.
A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm, which could only be used for one spike per neuron. The problem caused by the discontinuity in the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm to a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron making use of a hair-trigger situation.

19.
An optoelectronic spiking neuron based on a bispin device is described. The neuron has separate optical inputs for excitatory and inhibitory signals, which are represented by pulses of a single polarity. Experimental data are given demonstrating that the output pulse shapes and the set of functions of the proposed neuron are similar to those of a biological neuron. An example of a hardware implementation of an optoelectronic pulsed neural network (PNN) based on the proposed neurons is described. The main elements of the neural network are a line of pulsed neurons and a connection array, part of which is implemented as a spatial light modulator (SLM) with memory. The SLM allows the connection weights to be modified during network learning. It is possible to create adaptive (capable of additional learning and relearning) optoelectronic PNNs with about 2000 neurons.
