Similar Literature
20 similar documents found
1.
Hosoya H. Neural Computation, 2012, 24(8): 2119-2150.
We study the interplay of Bayesian inference and natural image learning in a hierarchical vision system, in relation to the response properties of early visual cortex. We particularly focus on a Bayesian network with multinomial variables that can represent discrete feature spaces similar to hypercolumns combining minicolumns, enforce sparsity of activation to learn efficient representations, and explain divisive normalization. We demonstrate that maximum-likelihood learning using sampling-based Bayesian inference gives rise to classical receptive field properties similar to V1 simple cells and V2 cells, while inference performed on the trained network yields nonclassical context-dependent response properties such as cross-orientation suppression and filling-in. Comparison with known physiological properties reveals some qualitative and quantitative similarities.
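A minimal sketch of the divisive normalization mentioned above, using the standard Heeger-style formulation rather than the paper's Bayesian network (the constants sigma and gamma are illustrative):

```python
import numpy as np

def divisive_normalization(responses, sigma=0.1, gamma=1.0):
    """Heeger-style divisive normalization: each unit's squared drive
    is divided by the pooled activity of the whole population."""
    drive = responses ** 2
    return gamma * drive / (sigma ** 2 + drive.sum())

# A unit's response drops when other units (e.g., a cross-oriented
# mask) are co-activated, mimicking cross-orientation suppression.
alone = divisive_normalization(np.array([1.0, 0.0, 0.0]))
masked = divisive_normalization(np.array([1.0, 1.0, 0.0]))
print(alone[0], masked[0])  # ~0.99 vs ~0.50
```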

2.
Dayhoff JE. Neural Computation, 2007, 19(9): 2433-2467.
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function-approximation example in which the capability of the artificial neural network is captured by a network of synchronously firing ensembles of biological integrate-and-fire neurons. We point out that the general function-approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
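The weight correspondence described in this abstract is easy to state in code; the following sketch only illustrates the product rule and the fraction-active activation, with all numbers chosen for illustration:

```python
def ensemble_weight(density, group_size, psp_height):
    """ANN weight implied by a pair of neuronal groups: the product of
    interconnection density, presynaptic group size, and PSP height
    (the three modulating parameters named in the abstract)."""
    return density * group_size * psp_height

def unit_activation(n_firing, group_size):
    """ANN activation = fraction of the group firing in synchrony."""
    return n_firing / group_size

# Illustrative numbers: a 100-neuron presynaptic group with 20%
# connectivity and 0.5 mV postsynaptic potentials.
w = ensemble_weight(density=0.2, group_size=100, psp_height=0.5)
a = unit_activation(n_firing=37, group_size=100)
print(w * a)  # this group's contribution to the postsynaptic unit
```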

3.
Starting from the hypothesis that the mammalian neocortex to a first approximation functions as an associative memory of the attractor network type, we formulate a quantitative computational model of neocortical layers 2/3. The model employs biophysically detailed multi-compartmental model neurons with conductance-based synapses and includes pyramidal cells and two types of inhibitory interneurons, i.e., regular spiking non-pyramidal cells and basket cells. The simulated network has a minicolumnar as well as a hypercolumnar modular structure, and we propose that minicolumns rather than single cells are the basic computational units in neocortex. The minicolumns are represented in full scale, and synaptic input to the different types of model neurons is carefully matched to reproduce experimentally measured values and to allow a quantitative reproduction of single-cell recordings. Several key phenomena seen experimentally in vitro and in vivo appear as emergent features of this model. It exhibits robust and fast attractor dynamics with pattern completion and pattern rivalry, and it suggests an explanation for the so-called attentional blink phenomenon. During assembly dynamics, the model faithfully reproduces several features of local UP states, as they have been experimentally observed in vitro, as well as oscillatory behavior similar to that observed in the neocortex.

4.
Associative neural memories are models of biological phenomena that allow for the storage of pattern associations and the retrieval of the desired output pattern upon presentation of a possibly noisy or incomplete version of an input pattern. In this paper, we introduce implicative fuzzy associative memories (IFAMs), a class of associative neural memories based on fuzzy set theory. An IFAM consists of a network of completely interconnected Pedrycz logic neurons with threshold, whose connection weights are determined by the minimum of implications of presynaptic and postsynaptic activations. We present a series of results for autoassociative models, including one-pass convergence, unlimited storage capacity, and tolerance with respect to eroded patterns. Finally, we present some results on fixed points and discuss the relationship between implicative fuzzy associative memories and morphological associative memories.
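A minimal autoassociative sketch of an IFAM, assuming the Gödel implication and the usual max-min recall with a minimum-based bias; the paper's exact formulation may differ in its details:

```python
import numpy as np

def godel_implication(a, b):
    """Gödel implication: I(a, b) = 1 if a <= b, else b."""
    return np.where(a <= b, 1.0, b)

def ifam_train(patterns):
    """W[i, j] = min over the stored patterns of I(x_j, x_i)."""
    X = np.asarray(patterns, dtype=float)          # (num_patterns, n)
    imp = godel_implication(X[:, None, :], X[:, :, None])
    return imp.min(axis=0)                         # (n, n)

def ifam_recall(W, x, theta):
    """Max-min composition plus a bias: y_i = max_j min(W_ij, x_j) v theta_i."""
    return np.maximum(np.minimum(W, x[None, :]).max(axis=1), theta)

patterns = [[0.9, 0.2, 0.6], [0.3, 0.8, 0.5]]
W = ifam_train(patterns)
theta = np.min(patterns, axis=0)                   # componentwise-minimum bias
# An eroded version of the first pattern still recalls (0.9, 0.2, 0.6).
print(ifam_recall(W, np.array([0.9, 0.0, 0.6]), theta))
```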

5.
The CA3 region of the hippocampus is a recurrent neural network that is essential for the storage and replay of sequences of patterns that represent behavioral events. Here we present a theoretical framework to calculate a sparsely connected network's capacity to store such sequences. As in CA3, only a limited subset of neurons in the network is active at any one time, pattern retrieval is subject to error, and the resources for plasticity are limited. Our analysis combines an analytical mean-field approach, stochastic dynamics, and cellular simulations of a time-discrete McCulloch-Pitts network with binary synapses. To maximize the number of sequences that can be stored in the network, we concurrently optimize the number of active neurons, that is, pattern size, and the firing threshold. We find that for one-step associations (i.e., minimal sequences), the optimal pattern size is inversely proportional to the mean connectivity c, whereas the optimal firing threshold is independent of the connectivity. If the number of synapses per neuron is fixed, the maximum number P of stored sequences in a sufficiently large, nonmodular network is independent of its number N of cells. On the other hand, if the number of synapses scales as the network size to the power of 3/2, the number of sequences P is proportional to N. In other words, sequential memory is scalable. Furthermore, we find that there is an optimal ratio r between silent and nonsilent synapses at which the storage capacity alpha = P/[c(1 + r)N] assumes a maximum. For long sequences, the capacity of sequential memory is about one order of magnitude below the capacity for minimal sequences, but otherwise behaves similarly to the case of minimal sequences. In a biologically inspired scenario, the information content per synapse is far below theoretical optimality, suggesting that the brain trades off error tolerance against information content in encoding sequential memories.
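The capacity definition quoted above, alpha = P/[c(1 + r)N], can be inverted to estimate the number of storable sequences; the parameter values below are purely illustrative, not taken from the paper:

```python
def stored_sequences(alpha, c, r, N):
    """Invert the abstract's capacity definition alpha = P / [c(1 + r)N]
    to get the number of storable (minimal) sequences P."""
    return alpha * c * (1 + r) * N

# Illustrative values: connectivity c, silent/nonsilent ratio r,
# network size N, and an assumed capacity alpha.
print(stored_sequences(alpha=0.1, c=0.05, r=2.0, N=100_000))  # P = 1500.0
```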

6.
7.
This work concisely reviews and unifies the analysis of different variants of neural associative networks consisting of binary neurons and synapses (Willshaw model). We compute storage capacity, fault tolerance, and retrieval efficiency and point out problems of the classical Willshaw model such as limited fault tolerance and restriction to logarithmically sparse random patterns. Then we suggest possible solutions employing spiking neurons, compression of the memory structures, and additional cell layers. Finally, we discuss from a technical perspective whether distributed neural associative memories have any practical advantage over localized storage, e.g., in compressed look-up tables.
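A minimal sketch of the classical Willshaw model reviewed above: binary weights formed by clipped Hebbian (logical OR) storage of sparse pattern pairs, with retrieval by thresholding dendritic sums at the input activity level; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, num_pairs = 1000, 10, 50        # k-sparse patterns, k ~ log n

def sparse_pattern():
    x = np.zeros(n, dtype=np.uint8)
    x[rng.choice(n, size=k, replace=False)] = 1
    return x

pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(num_pairs)]

# Storage: binary weights, clipped Hebbian learning (logical OR).
W = np.zeros((n, n), dtype=np.uint8)
for x, y in pairs:
    W |= np.outer(y, x)

# Retrieval: threshold the dendritic sums at the input activity k.
x, y = pairs[0]
sums = W.astype(np.int32) @ x
y_hat = (sums >= k).astype(np.uint8)
print("retrieval errors:", int(np.sum(y_hat != y)))  # typically 0 at this load
```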

8.
We study a model of the cortical macrocolumn consisting of a collection of inhibitorily coupled minicolumns. The proposed system overcomes several severe deficits of systems based on single neurons as cerebral functional units, notably limited robustness to damage and unrealistically large computation time. Motivated by neuroanatomical and neurophysiological findings, the utilized dynamics is based on a simple model of a spiking neuron with refractory period, fixed random excitatory interconnection within minicolumns, and instantaneous inhibition within one macrocolumn. A stability analysis of the system's dynamical equations shows that minicolumns can act as monolithic functional units for purposes of critical, fast decisions and learning. Oscillating inhibition (in the gamma frequency range) leads to a phase-coupled population rate code and high sensitivity to small imbalances in minicolumn inputs. Minicolumns are shown to be able to organize their collective inputs without supervision by Hebbian plasticity into selective receptive field shapes, thereby becoming classifiers for input patterns. Using the bars test, we critically compare our system's performance with that of others and demonstrate its ability for distributed neural coding.

9.
Temporal information processing, for instance temporal association, plays an important role in many brain functions. Among the various dynamics of neural networks, dynamically depressing synapses and chaotic behavior have been regarded as intriguing characteristics of biological neurons. In this paper, temporal association based on dynamic synapses and chaotic neurons is proposed. Interestingly, by introducing dynamic synapses into a temporal association, we found that the sequence storage capacity can be enlarged, that the transition time between patterns in the sequence can be shortened, and that the stability of the sequence can be enhanced. Of particular interest, owing to the chaotic neurons, the steady-state period becomes shorter in the temporal association, and it can be adjusted by changing the parameter values of the chaotic neurons. Simulation results demonstrating the performance of the temporal association are presented.
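The abstract does not spell out its synapse model; one common formalization of a depressing dynamic synapse is the Tsodyks-Markram scheme, sketched here with illustrative constants:

```python
import numpy as np

def depressing_synapse(spike_times, U=0.5, tau_rec=0.8, dt=0.001, T=2.0):
    """Tsodyks-Markram-style depression: each spike consumes a fraction U
    of the available resources x, which recover with time constant tau_rec."""
    steps = int(T / dt)
    spikes = np.zeros(steps)
    spikes[np.rint(np.asarray(spike_times) / dt).astype(int)] = 1
    x, efficacy = 1.0, []
    for t in range(steps):
        x += dt * (1.0 - x) / tau_rec        # recovery toward 1
        if spikes[t]:
            efficacy.append(U * x)           # transmitted amplitude
            x -= U * x                       # resource depletion
    return efficacy

# A regular 20 Hz train: successive PSP amplitudes shrink, then level off.
print(depressing_synapse(spike_times=np.arange(0.05, 1.0, 0.05)))
```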

10.
Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. Typical ANN models are characterized by fixed connections with evolved weights, globalized representations, and globalized optimizations, all based on a mathematical approach. This makes those models deficient in robustness, learning efficiency, capacity, resistance to interference between training sets, and correlativity of samples. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations of neuron fan-in and fan-out, dynamic behavior of neurons, and samples represented through different sub-circuits of neurons specialized into different response patterns. At the end of this paper, some important aspects of error correction, capacity, learning efficiency, and soundness of structural representation are analyzed theoretically. This paper demonstrates the feasibility and advantages of structure learning and representation. The model can serve as a fundamental element of cognitive systems such as perception and associative memory.

11.
To improve the ability of echo state networks to extract features from and predict chaotic time series, a hierarchical plasticity echo state network model is proposed. The model connects multiple reservoirs in sequence and enhances the extraction of nonlinear multi-scale dynamic features through layer-by-layer feature transformation. Meanwhile, the intrinsic plasticity mechanism from neuroscience is introduced to imitate the firing-rate distributions of real biological neurons, and the reservoirs are pretrained with the goal of maximizing the information transmission of the neurons. The hierarchical plasticity echo state network not only increases model capacity and reduces the instability caused by random projections, but also offers a new way to understand the representation, processing, memory, and storage operations of reservoirs. Simulation results show that, compared with seven other improved echo state network models, the proposed model achieves the best prediction accuracy on chaotic time series prediction tasks built from both synthetic and real-world data.
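A minimal sketch of the hierarchical idea: reservoirs connected in sequence, with per-neuron gain and bias parameters standing in for what the intrinsic-plasticity pretraining would tune; the paper's actual update and pretraining rules may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

class Reservoir:
    """One reservoir layer; gain/bias stand in for intrinsic plasticity."""
    def __init__(self, n_in, n_res, spectral_radius=0.9):
        self.Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.gain = np.ones(n_res)    # intrinsic plasticity would tune these
        self.bias = np.zeros(n_res)   # to shape each neuron's rate profile
        self.x = np.zeros(n_res)

    def step(self, u):
        net = self.W @ self.x + self.Win @ u
        self.x = np.tanh(self.gain * net + self.bias)
        return self.x

# Reservoirs in sequence: each layer's state drives the next layer,
# giving layer-by-layer feature transformation of the input signal.
layers = [Reservoir(1, 50), Reservoir(50, 50), Reservoir(50, 50)]
states = []
for t in range(200):
    h = np.array([np.sin(0.2 * t)])   # toy input time series
    for layer in layers:
        h = layer.step(h)
    states.append(np.concatenate([l.x for l in layers]))
# A linear readout trained on `states` would produce the prediction.
```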

12.
Tanaka T, Aoyagi T, Kaneko T. Neural Computation, 2012, 24(10): 2700-2725.
We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed into another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of connections from first-layer neurons with similar orientation selectivity onto second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

13.
Quantum Hopfield Neural Networks and Image Recognition
The storage capacity of the traditional Hopfield network is 0.14 times the number of neurons (P = 0.14N). Because it runs into great difficulty when recognizing large numbers of images or patterns, researchers have been searching for new methods. Quantum neural networks, born from the combination of quantum computing and neural networks, are one of the emerging frontier disciplines. To increase the speed of image recognition and the number of images that can be recognized, this paper analyzes the quantum linear superposition property and proposes a probability-distribution-based quantum Hopfield neural network for storing matrix elements; its storage (memory) capacity rises to 2^N for N neurons, an exponential improvement over the traditional Hopfield neural network. Case studies on image recognition and simulation experiments show that this quantum Hopfield neural network can recognize images or patterns effectively, and that its operation is consistent with a quantum evolution process.
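The quantum network itself does not reduce to a short code example, but the classical P = 0.14N baseline it improves on can be checked empirically with a standard Hebbian Hopfield network (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200

def stable_bit_fraction(P):
    """Store P random bipolar patterns with the Hebbian rule and report
    the fraction of pattern bits stable under one update."""
    X = rng.choice([-1, 1], size=(P, N))
    W = (X.T @ X) / N
    np.fill_diagonal(W, 0)
    return np.mean(np.sign(X @ W) == X)

for P in (10, 28, 60):  # loads of 0.05N, 0.14N, 0.30N
    print(P, stable_bit_fraction(P))
# Errors proliferate as the load passes ~0.14N, the classical limit.
```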

14.
A previously developed method for efficiently simulating complex networks of integrate-and-fire neurons was specialized to the case in which the neurons have fast unitary postsynaptic conductances. However, inhibitory synaptic conductances are often slower than excitatory ones for cortical neurons, and this difference can have a profound effect on network dynamics that cannot be captured with neurons that have only fast synapses. We thus extend the model to include slow inhibitory synapses. In this model, neurons are grouped into large populations of similar neurons. For each population, we calculate the evolution of a probability density function (PDF), which describes the distribution of neurons over state-space. The population firing rate is given by the flux of probability across the threshold voltage for firing an action potential. In the case of fast synaptic conductances, the PDF was one-dimensional, as the state of a neuron was completely determined by its transmembrane voltage. An exact extension to slow inhibitory synapses increases the dimension of the PDF to two or three, as the state of a neuron now includes the state of its inhibitory synaptic conductance. However, by assuming that the expected value of a neuron's inhibitory conductance is independent of its voltage, we derive a reduction to a one-dimensional PDF and avoid increasing the computational complexity of the problem. We demonstrate that although this assumption is not strictly valid, the results of the reduced model are surprisingly accurate.
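A minimal sketch of the population-density idea in its simplest setting: a one-dimensional PDF over membrane potential for leaky integrate-and-fire neurons with a constant suprathreshold drive, where the firing rate is the probability flux across threshold. The paper's model, with synaptic conductance states, is considerably richer; everything below is illustrative:

```python
import numpy as np

def pdf_firing_rate(mu=1.5, tau=0.02, v_th=1.0, nv=200, dt=1e-5, T=0.2):
    """Evolve a 1D membrane-potential density for leaky IF neurons with
    constant suprathreshold drive mu; the firing rate is the probability
    flux crossing threshold, re-injected at the reset potential (v = 0)."""
    v = np.linspace(0.0, v_th, nv)
    dv = v[1] - v[0]
    p = np.zeros(nv)
    p[0] = 1.0 / dv                       # all mass starts at reset
    rates = []
    for _ in range(int(T / dt)):
        flux = (mu - v) / tau * p         # advection (velocity > 0), upwind
        p += dt * (-np.diff(flux, prepend=0.0) / dv)
        p[0] += dt * flux[-1] / dv        # threshold flux re-enters at reset
        rates.append(flux[-1])
    return np.mean(rates[len(rates) // 2:])  # time-averaged rate

# Compare against the deterministic LIF period tau * ln(mu / (mu - v_th)):
print(pdf_firing_rate(), 1 / (0.02 * np.log(1.5 / 0.5)))  # both ~45.5 Hz
```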

15.
Winner-take-all networks have been proposed to underlie many of the brain's fundamental computational abilities. However, not much is known about how to extend the grouping of potential winners in these networks beyond single neurons or uniformly arranged groups of neurons. We show that competition between arbitrary groups of neurons can be realized by organizing lateral inhibition in linear threshold networks. Given a collection of potentially overlapping groups (with the exception of some degenerate cases), the lateral inhibition results in network dynamics such that any permitted set of neurons that can be coactivated by some input at a stable steady state is contained in one of the groups. The information about the input is preserved in this operation. The activity level of a neuron in a permitted set corresponds to its stimulus strength, amplified by some constant. Sets of neurons that are not part of a group cannot be coactivated by any input at a stable steady state. We analyze the storage capacity of such a network for random groups: the number of random groups the network can store as permitted sets without creating too many spurious ones. In this framework, we calculate the optimal sparsity of the groups (maximizing group entropy). We find that for dense inputs, the optimal sparsity is unphysiologically small. However, when the inputs and the groups are equally sparse, we derive a more plausible optimal sparsity. We believe our results are the first steps toward attractor theories in hybrid analog-digital networks.
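A toy linear-threshold network in which lateral inhibition is arranged by groups: two neurons inhibit each other only if they share no group, so coactive stable sets fall within a single group. This is a simplification of the paper's construction (no amplification term), with illustrative inputs:

```python
import numpy as np

def simulate(groups, b, n, beta=2.0, dt=0.1, steps=500):
    """Linear threshold dynamics where neuron pairs sharing no group
    inhibit each other; coactive stable sets then lie within one group."""
    share = np.zeros((n, n), dtype=bool)
    for g in groups:
        idx = list(g)
        share[np.ix_(idx, idx)] = True    # pairs inside a common group
    W = np.where(share, 0.0, -beta)       # lateral inhibition elsewhere
    np.fill_diagonal(W, 0.0)
    x = np.zeros(n)
    for _ in range(steps):
        x += dt * (-x + np.maximum(0.0, b + W @ x))
    return x

groups = [{0, 1, 2}, {2, 3, 4}]           # overlapping groups
b = np.array([1.0, 0.8, 0.9, 0.2, 0.1])   # stimulus strengths
print(simulate(groups, b, n=5).round(2))  # winners lie within one group
```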

16.
Application of Wavelet Neural Networks to a Biped Walking Robot Climbing Slopes
Zhang K, Fu P, Qiang W. Robot, 2000, 22(5): 384-389.
To address the shortcomings of the neuron models of traditional neural networks in terms of structure and information storage capacity, this paper proposes a neuron aggregation model based on a generalized wavelet basis function network. This wavelet neural network not only converges quickly and offers better nonlinear approximation capability, but also has intelligent characteristics such as a variable-scale internal structure, adaptive adjustment, and generalized information storage, which better matches the actual biological prototype. Simulation experiments with static and quasi-dynamic learning demonstrate the effectiveness of this network structure.
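A generic wavelet basis function network, not necessarily the paper's neuron aggregation model: each hidden unit applies a dilated and shifted mother wavelet (a Mexican hat here), with hand-picked illustrative parameters:

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat mother wavelet (2nd derivative of a Gaussian)."""
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

def wavelet_net(x, weights, scales, shifts):
    """y = sum_i w_i * psi((x - b_i) / a_i): a one-input wavelet network.
    The scales a_i and shifts b_i play the role of adaptive structure."""
    return sum(w * mexican_hat((x - b) / a)
               for w, a, b in zip(weights, scales, shifts))

# Three wavelet neurons with hand-picked parameters approximate a bump.
xs = np.linspace(-3, 3, 7)
print(wavelet_net(xs, weights=[1.0, 0.5, 0.5],
                  scales=[1.0, 0.7, 0.7], shifts=[0.0, -1.5, 1.5]).round(2))
```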

17.
Wang J, Mao Z. Computer Engineering, 2004, 30(4): 16-18, 106.
A novel associative memory neural network is proposed in which the state of each neuron is a vector. In a network of N neurons, each stored pattern consists of N second-level patterns with M components each, and every pattern is stored in a "pattern ring" made up of N connections. A connection comprises a "connection state" and a "forbidden path"; the former stores the second-level patterns, while the latter eliminates spurious patterns. Incomplete patterns are allowed as input during recall, and the memory capacity is (N-1)!.

18.
Edge storage stores data directly at the point of collection, without transmitting it over the network to a central storage server. It is a critical technology supporting applications such as edge computing and 5G networks, offering lower network communication overhead, lower interaction delay, and lower bandwidth cost. However, with the explosion of data and rising real-time requirements, the traditional Internet of Things (IoT) storage architecture cannot meet the demands of low latency and large capacity. Non-volatile memory (NVM) presents new possibilities in this regard. This paper classifies the different storage architectures based on NVM and compares their system goals, architectures, features, and limitations to explore new research opportunities. Moreover, the existing solutions to reduce write latency and energy consumption and to increase the lifetime of NVM IoT storage devices are analyzed. Furthermore, we discuss the security and privacy issues of IoT devices and compare the mainstream solutions. Finally, we present the opportunities and challenges of building IoT storage systems based on NVM.

19.
With the rapid development of the mobile Internet, the limited hardware resources of mobile terminals, such as computing power, storage capacity, and battery life, have become a serious constraint on its growth. To address the insufficient storage capacity of mobile terminals, and considering that files on mobile terminals are mostly small, this paper combines current cluster and cloud storage technologies and proposes a cloud-storage-based network storage model for mobile terminals. Its architecture consists of four parts: an infrastructure layer, a basic management layer, an application interface layer, and an access layer. By migrating the storage and processing of information to the cloud, it provides a feasible solution to the resource constraints of mobile terminals.

20.
This paper first introduces active networks and their implementation principles. It then discusses, in the context of a file-based operating system over a virtual address space, the relationship between the hierarchy of computer storage systems and network interconnection, as well as the relationship between cross-domain calls and file operations. Finally, it studies how active networks can be implemented in such a file-based operating system and analyzes the advantages and remaining problems of this approach.
