Similar Documents
20 similar documents found (search time: 31 ms)
1.
Memory Capacity of Probabilistic Logic Neuron Networks   (Cited by: 1; self-citations: 0; by others: 1)
张钹  张铃 《计算机学报》1993,16(11):807-813
This paper studies the memory capacity of probabilistic logic neuron (PLN) networks. The main results are: (1) a method for constructing a network with a prescribed memory capacity; (2) a quantitative relationship between a network's independent memory capacity and its size, showing that the memory capacity of a PLN network depends not only on the number of input lines per node but also on the number of nodes in the network. These conclusions provide a new approach to the synthesis of PLN networks.
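Context for readers unfamiliar with the model: a PLN node is usually described as a RAM-based neuron whose 2^k addressable cells store 0, 1, or an undefined value u that resolves to a random bit when read. A minimal Python sketch of that behavior follows; the class name, the three-valued encoding, and the example associations are illustrative assumptions, not the paper's notation.

```python
import random

class PLNNode:
    """RAM-based probabilistic logic neuron: 2**k cells holding 0, 1, or 'u'."""

    def __init__(self, k):
        self.k = k                       # number of binary input lines
        self.memory = ['u'] * (2 ** k)   # every cell starts undefined

    def _address(self, inputs):
        # The k binary inputs are read as an address into the RAM.
        return int(''.join(str(b) for b in inputs), 2)

    def output(self, inputs):
        cell = self.memory[self._address(inputs)]
        # An undefined cell emits 0 or 1 with equal probability.
        return random.randint(0, 1) if cell == 'u' else cell

    def write(self, inputs, value):
        # Training pins the addressed cell to a definite value.
        self.memory[self._address(inputs)] = value

# A 2-input node taught two input-output associations; the untrained
# address (0, 0) still answers randomly.
node = PLNNode(2)
node.write((0, 1), 1)
node.write((1, 1), 0)
print(node.output((0, 1)), node.output((1, 1)), node.output((0, 0)))
```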

2.
张钹  张铃 《软件学报》1994,5(3):1-11
Building on an analysis of the shortcomings of the original probabilistic logic neuron (PLN) network, this paper proposes an improved PLN node model and shows that the original PLN network model, the Hopfield learning rule, and the evolution rule of the Boltzmann machine are all special cases of the improved model. Simulation results for the improved model applied to associative memory show that it substantially outperforms the original in both robustness and convergence speed.

3.
张铃  张钹 《软件学报》1994,5(8):9-13
This paper proves that, with a suitably chosen generalized A-learning rule, a PLN network can be made to satisfy the following conditions: (1) every training sample is a stable state; (2) each stable state has a maximal basin of attraction; (3) convergence is very fast. Such a PLN network is therefore an ideal associative memory.

4.
Convergence Analysis of Probabilistic Logic Neuron Networks   (Cited by: 2; self-citations: 0; by others: 2)
张钹  张铃 《计算机学报》1993,16(1):1-12
Using Markov chain theory as a tool, this paper studies quantitative properties of PLN (probabilistic logic neuron) networks. The main results are: for a given network, the probability that each state converges to a stable state, the mean number of convergence steps and its variance, and upper and lower bounds on the mean number of convergence steps for general PLN networks. Computer simulation results are presented and compared with the theoretical conclusions to verify their correctness.
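The quantities derived here are the standard ones for absorbing Markov chains: partitioning the transition matrix so that Q covers transient states and R the transitions into absorbing (stable) states, the fundamental matrix N = (I - Q)^(-1) yields expected absorption times, their variances, and the absorption probabilities B = NR. A minimal numpy sketch, with a made-up 3-transient, 2-absorbing chain standing in for a PLN state graph:

```python
import numpy as np

# Hypothetical chain: 3 transient states, 2 absorbing (stable) states.
Q = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.4],
              [0.0, 0.3, 0.2]])   # transient -> transient
R = np.array([[0.4, 0.0],
              [0.1, 0.3],
              [0.2, 0.3]])        # transient -> absorbing

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
steps = N @ np.ones(3)             # expected steps to convergence per start state
absorb = N @ R                     # probability of ending in each stable state
var = (2 * N - np.eye(3)) @ steps - steps ** 2   # variance of convergence time

print(steps, absorb, var, sep="\n")
```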

5.
Treating a single-layer PLN network with feedback connections as an associative memory, we discuss the complexity of recognition. The main result is: if the size of the network N is m, then the complexity of recognition is an exponential function of m. A necessary condition under which the complexity of recognition is polynomial is given.

6.
We introduce sparse encoding into the autoassociative memory model with replacement units. Using computer simulation, we search for the optimal number of replacement units with respect to two measures: the memory capacity and the information capacity of the network. We show that the optimal number of replacement units that maximizes the memory capacity and the information capacity decreases as the firing ratio decreases, and that the difference in memory capacity between sparse and non-sparse encoding becomes small as the number of replacement units increases.

7.
A memory capacity exists for artificial neural networks of associative memory. Adding new memories beyond this capacity overloads the network and makes all learned memories irretrievable (catastrophic forgetting) unless there is a provision for forgetting old memories. This article describes a property of associative memory networks in which a number of units are replaced as the network learns: every time the network learns a new item or pattern, a number of units are erased and the same number of units are added. It is shown that the memory capacity of the network depends on the number of replaced units, and that there exists an optimal number of replaced units at which the memory capacity is maximized. The optimal number of replaced units is small and appears to be independent of the network size. This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25-27, 2007.
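A minimal sketch of the replacement mechanism grafted onto a Hopfield-style network: after each newly learned pattern, r randomly chosen units have their connections wiped, as if erased and replaced by fresh units. The Hebbian substrate and all parameters are illustrative assumptions; the paper's model may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5                        # network size, units replaced per pattern

W = np.zeros((n, n))
for _ in range(30):                  # learn 30 patterns one at a time
    xi = rng.choice([-1, 1], size=n)
    W += np.outer(xi, xi) / n        # Hebbian increment for the new pattern
    np.fill_diagonal(W, 0)
    # Replacement: wipe r units' connections, as if swapped for fresh units.
    dead = rng.choice(n, size=r, replace=False)
    W[dead, :] = 0.0
    W[:, dead] = 0.0

# Recall the most recent pattern from a 10%-corrupted cue.
cue = xi * np.where(rng.random(n) < 0.1, -1, 1)
for _ in range(10):
    cue = np.where(W @ cue >= 0, 1, -1)   # synchronous threshold update
print("overlap with last pattern:", cue @ xi / n)
```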

8.
Recognition Complexity of Single-Layer Feedback PLN Networks   (Cited by: 1; self-citations: 0; by others: 1)
张钹  张铃 《计算机学报》1993,16(5):321-326
This paper discusses the complexity of recognition when a single-layer feedback PLN (probabilistic logic neuron) network is used as an associative memory, that is, the quantitative relationship between the average recognition speed and the network size. The main conclusion is: if the size of a single-layer feedback PLN network N is m, then the computational cost of recognition is an exponential function of m. A necessary condition under which the recognition cost is polynomial in m is given.

9.
A new type of model neuron is introduced as a building block of an associative memory. The neuron, which has a number of receptor zones, processes both the amplitude and the frequency of input signals, associating a small number of features encoded by those signals. Using this two-parameter input in our model, compared to the one-dimensional inputs of conventional model neurons (e.g., the McCulloch-Pitts neuron), offers an increased memory capacity. In our model, there is a competition among inputs in each zone with a subsequent cooperation of the winners to specify the output. The associative memory consists of a network of such neurons. A state-space model is used to define the neurodynamics. We explore properties of the neuron and the network and demonstrate its favorable capacity and recall capabilities. Finally, the network is used in an application designed to find trademarks that sound alike.

10.
The Hopfield model effectively stores a comparatively small number of initial patterns, about 15% of the size of the neural network. A greater value can be attained only in the Potts-glass associative memory model, in which neurons may exist in more than two states. Still greater memory capacity is exhibited by a parametric neural network based on the nonlinear optical signal transfer and processing principles. A formalism describing both the Potts-glass associative memory and the parametric neural network within a unified framework is developed. The memory capacity is evaluated by the Chebyshev-Chernov statistical method.

11.
Romani S  Amit DJ  Amit Y 《Neural computation》2008,20(8):1928-1950
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing the familiarity of thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computing the signal-to-noise ratio of the field difference between neurons selective and non-selective for learned stimuli. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicits a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
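A sketch of the stochastic binary-synapse learning at the heart of this analysis, in the spirit of Amit and Fusi (1994): synapses potentiate with probability q_pot when pre- and postsynaptic units are both active and depress with probability q_dep when exactly one is active. Network size, sparseness, and the familiarity readout below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, f = 1000, 0.05                 # neurons, coding sparseness
q_pot, q_dep = 1.0, 0.05          # "fast" learning: potentiation prob. ~ 1

J = np.zeros((n, n), dtype=np.int8)            # binary synapses
stimuli = rng.random((200, n)) < f             # once-seen sparse stimuli

for xi in stimuli:                             # one-shot presentation
    co = np.outer(xi, xi)                      # both active -> LTP candidate
    mis = np.outer(xi, ~xi) | np.outer(~xi, xi)  # exactly one active -> LTD candidate
    J[co & (rng.random((n, n)) < q_pot)] = 1   # stochastic potentiation
    J[mis & (rng.random((n, n)) < q_dep)] = 0  # stochastic depression

# Familiarity readout: the mean field a seen stimulus evokes vs. a novel one.
novel = rng.random(n) < f
print("seen: ", (J @ stimuli[-1].astype(int)).mean())
print("novel:", (J @ novel.astype(int)).mean())
```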

12.
This paper presents a further theoretical analysis of the asymptotic memory capacity of the generalized Hopfield network (GHN) under the perceptron learning scheme. It is proved that the asymptotic memory capacity of the GHN is exactly 2(n − 1), where n is the number of neurons in the network. That is, a GHN of n neurons can store 2(n − 1) bipolar sample patterns as its stable states when n is large, which significantly improves the existing results on the asymptotic memory capacity of the GHN.
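A sketch of perceptron-style training of Hopfield weights: each neuron i is treated as a perceptron that must reproduce ξ_i when the network is clamped to ξ, and its weight row is nudged until every stored pattern is a fixed point. The loading below is kept well under the 2(n − 1) limit; the non-symmetric, zero-diagonal weight convention is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 64, 64                            # n neurons; p patterns, under 2(n-1)
patterns = rng.choice([-1, 1], size=(p, n))

W = np.zeros((n, n))
for sweep in range(500):                 # sweep until every pattern is stable
    errors = 0
    for xi in patterns:
        wrong = np.sign(W @ xi) != xi    # neurons whose field disagrees with target
        for i in np.flatnonzero(wrong):
            W[i] += xi[i] * xi / n       # perceptron update on neuron i's row
            W[i, i] = 0.0                # keep zero self-coupling
            errors += 1
    if errors == 0:
        break

stable = all((np.sign(W @ xi) == xi).all() for xi in patterns)
print(f"converged after {sweep} sweeps; all patterns stable: {stable}")
```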

13.
We present an efficient feature selection algorithm for the general regression problem, which utilizes a piecewise linear orthonormal least squares (OLS) procedure. The algorithm 1) determines an appropriate piecewise linear network (PLN) model for the given data set, 2) applies the OLS procedure to the PLN model, and 3) searches for useful feature subsets using a floating search algorithm. The floating search prevents the "nesting effect." The proposed algorithm is computationally very efficient because only one data pass is required. Several examples are given to demonstrate the effectiveness of the proposed algorithm.
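A sketch of the floating search component (sequential forward selection with conditional backward removals, which is what prevents the nesting effect). The score function is a generic stand-in, not the paper's piecewise-linear OLS criterion.

```python
def floating_search(features, score, k):
    """Sequential floating forward selection: greedy additions with
    conditional backward removals to avoid the nesting effect."""
    selected = []
    while len(selected) < k:
        # Forward step: add the single feature that helps the score most.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Conditional backward steps: drop any earlier feature whose removal
        # now improves the score (the just-added feature is never dropped).
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                if f == best:
                    continue
                trial = [g for g in selected if g != f]
                if score(trial) > score(selected):
                    selected = trial
                    improved = True
                    break
    return selected

# Toy use: pick 3 of 6 features under a made-up, mildly redundant score.
weights = {0: 0.1, 1: 0.9, 2: 0.4, 3: 0.8, 4: 0.2, 5: 0.7}
redundancy = lambda s: 0.5 * max(0, len([f for f in s if f in (1, 3)]) - 1)
score = lambda s: sum(weights[f] for f in s) - redundancy(s)
print(floating_search(list(range(6)), score, 3))
```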

14.
Training of Associative Memory Neural Networks   (Cited by: 2; self-citations: 0; by others: 2)
张承福  赵刚 《自动化学报》1995,21(6):641-648
An optimized training scheme for associative memory neural networks is proposed, in which the basins of attraction of the stored samples can be controlled to some extent through a well-depth parameter, giving the network the best possible fault tolerance. Computations show that the trained network can reach α < 1 (α = M/N, where N is the number of neurons and M the number of stored samples) while retaining good fault tolerance, clearly outperforming common schemes such as the outer-product method, the orthogonalized outer-product method, and the pseudo-inverse method. The symmetry and convergence of the trained network are also discussed.

15.
We show that the memory capacity of the fully connected binary Hopfield network is significantly reduced by a small amount of noise in training patterns. Our analytical results obtained with the mean field method are supported by extensive computer simulations.
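A quick simulation sketch of the reported effect: Hebbian weights are built from corrupted copies of the patterns, and the retrieval overlap is measured as the training noise grows. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 20                     # network size, stored patterns (alpha = 0.1)
patterns = rng.choice([-1, 1], size=(p, n))

for noise in (0.0, 0.05, 0.1):
    # Train on corrupted patterns: each bit flips with probability `noise`.
    flips = rng.random((p, n)) < noise
    noisy = np.where(flips, -patterns, patterns)
    W = (noisy.T @ noisy).astype(float) / n
    np.fill_diagonal(W, 0)

    # Retrieval: start at the clean pattern, iterate, measure final overlap.
    s = patterns[0].copy()
    for _ in range(20):
        s = np.where(W @ s >= 0, 1, -1)
    print(f"training noise={noise:.2f}  retrieval overlap={s @ patterns[0] / n:+.2f}")
```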

16.
The CA3 region of the hippocampus is a recurrent neural network that is essential for the storage and replay of sequences of patterns that represent behavioral events. Here we present a theoretical framework to calculate a sparsely connected network's capacity to store such sequences. As in CA3, only a limited subset of neurons in the network is active at any one time, pattern retrieval is subject to error, and the resources for plasticity are limited. Our analysis combines an analytical mean field approach, stochastic dynamics, and cellular simulations of a time-discrete McCulloch-Pitts network with binary synapses. To maximize the number of sequences that can be stored in the network, we concurrently optimize the number of active neurons, that is, pattern size, and the firing threshold. We find that for one-step associations (i.e., minimal sequences), the optimal pattern size is inversely proportional to the mean connectivity c, whereas the optimal firing threshold is independent of the connectivity. If the number of synapses per neuron is fixed, the maximum number P of stored sequences in a sufficiently large, nonmodular network is independent of its number N of cells. On the other hand, if the number of synapses scales as the network size to the power of 3/2, the number of sequences P is proportional to N. In other words, sequential memory is scalable. Furthermore, we find that there is an optimal ratio r between silent and nonsilent synapses at which the storage capacity alpha = P/[c(1 + r)N] assumes a maximum. For long sequences, the capacity of sequential memory is about one order of magnitude below the capacity for minimal sequences, but otherwise behaves similarly to the case of minimal sequences. In a biologically inspired scenario, the information content per synapse is far below theoretical optimality, suggesting that the brain trades off error tolerance against information content in encoding sequential memories.
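A minimal sketch of the setting analyzed here, reduced to its simplest form: transitions between successive sparse binary patterns are imprinted in clipped-Hebbian binary synapses (Willshaw-style), and replay selects the most strongly driven cells at each step. This generic illustration omits the paper's dilution, silent-synapse ratio, and threshold optimization.

```python
import numpy as np

rng = np.random.default_rng(4)
n, active, length = 2000, 40, 30      # cells, active cells per pattern, sequence length

# A sequence of sparse binary patterns.
seq = np.zeros((length, n), dtype=np.int8)
for t in range(length):
    seq[t, rng.choice(n, size=active, replace=False)] = 1

# Clipped Hebbian storage of each transition t -> t+1 in binary synapses.
W = np.zeros((n, n), dtype=np.int8)
for t in range(length - 1):
    W |= np.outer(seq[t + 1], seq[t])

# Replay: from the first pattern, fire the `active` most strongly driven cells.
s = seq[0]
accuracy = []
for t in range(1, length):
    drive = W.astype(np.int32) @ s
    s = np.zeros(n, dtype=np.int8)
    s[np.argsort(drive)[-active:]] = 1
    accuracy.append((s & seq[t]).sum() / active)
print("mean replay accuracy:", float(np.mean(accuracy)))
```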

17.
The objective of this paper is to resolve important issues in artificial neural nets: exact recall and capacity in multilayer associative memories. These problems have imposed restrictions on coding strategies. We propose the following triple-layered hybrid neural network: the first synapse is a one-shot associative memory using the modified Kohonen's adaptive learning algorithm with arbitrary input patterns; the second one is Kosko's bidirectional associative memory consisting of orthogonal input/output basis vectors such as Walsh series satisfying the strict continuity condition; and finally, the third one is a simple one-shot associative memory with arbitrary output images. A mathematical framework based on the relationship between energy local minima (capacity of the neural net) and noise-free recall is established. The robust capacity conditions of this multilayer associative neural network that lead to forming the local minima of the energy function at the exact training pairs are derived. The chosen strategy not only maximizes the total number of stored images but also completely relaxes any code-dependent conditions of the learning pairs.
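A sketch of the middle synapse's mechanism, Kosko's bidirectional associative memory: the correlation matrix W = Σ y xᵀ recalls y from x and x from y by thresholded alternation, and recall is exact when the code vectors are orthogonal, as with the Walsh-like rows below. The specific basis and pairing are illustrative assumptions.

```python
import numpy as np

# Orthogonal bipolar code vectors (rows of a Walsh/Hadamard-like basis).
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

X, Y = H[:3], H[1:4]             # three input/output training pairs
W = sum(np.outer(y, x) for x, y in zip(X, Y))   # Kosko's correlation encoding

def bam_recall(x, steps=5):
    y = np.sign(W @ x)
    for _ in range(steps):       # bounce between layers until the pair settles
        x = np.sign(W.T @ y)
        y = np.sign(W @ x)
    return x, y

x, y = bam_recall(X[0])
print(np.array_equal(y, Y[0]))   # True: orthogonal codes give exact recall
```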

18.
Uniform memory multicore neural network accelerators (UNNAs) furnish huge computing power to emerging neural network applications. Meanwhile, as neural network architectures go deeper and wider, limited memory capacity has become a constraint on deploying models on UNNA platforms. Efficiently managing memory space and reducing workload footprints are therefore pressing concerns. In this paper, we propose Tetris: a heuristic static memory management framework for UNNA platforms. Tetris reconstructs execution flows and synchronization relationships among cores to analyze each tensor's liveness interval. The memory management problem is then converted to a sequence permutation problem. Tetris uses a genetic algorithm to explore the permutation space, optimizing the memory management strategy and reducing memory footprints. We evaluate several typical neural networks, and the experimental results demonstrate that Tetris outperforms state-of-the-art memory allocation methods, achieving average memory reduction ratios of 91.9% and 87.9% for a quad-core and a 16-core Cambricon-X platform, respectively.
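A sketch of the core reduction: given tensor liveness intervals, a permutation fixes the order in which a first-fit placer assigns offsets, and the resulting peak offset is the footprint that the search minimizes. Random permutations below stand in for Tetris's genetic algorithm; the interval data and placer are illustrative assumptions.

```python
import random

random.seed(5)

# (name, birth, death, size): hypothetical tensor liveness records.
tensors = [("a", 0, 4, 64), ("b", 1, 3, 32), ("c", 2, 6, 64),
           ("d", 4, 8, 128), ("e", 5, 9, 32), ("f", 3, 7, 96)]

def footprint(order):
    """First-fit offset assignment in the given order; returns peak memory."""
    placed = []                               # (birth, death, offset, size)
    peak = 0
    for name, birth, death, size in order:
        offset = 0
        for b, d, o, s in sorted(placed, key=lambda p: p[2]):
            lifetimes_overlap = birth < d and b < death
            ranges_collide = offset < o + s and o < offset + size
            if lifetimes_overlap and ranges_collide:
                offset = o + s                # slide past the conflicting block
        placed.append((birth, death, offset, size))
        peak = max(peak, offset + size)
    return peak

# Random permutations stand in for the genetic algorithm's search.
best = min((random.sample(tensors, len(tensors)) for _ in range(5000)),
           key=footprint)
print("best footprint:", footprint(best), "order:", [t[0] for t in best])
```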

19.
《Parallel Computing》2014,40(5-6):86-99
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long-time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load-balancing approach can be generalized to other lattice-based problems.
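A sketch of the load-balancing idea: slabs of the decomposed lattice are assigned to GPUs in proportion to each device's relative throughput, capped by its memory capacity. The device figures are invented for illustration.

```python
# Hypothetical devices: (name, relative speed, memory capacity in lattice slabs).
gpus = [("gpu0", 1.0, 600), ("gpu1", 0.6, 400), ("gpu2", 1.4, 800)]
total_slabs = 1200

def partition(gpus, total):
    """Assign contiguous slab ranges proportional to speed, capped by memory."""
    speed_sum = sum(s for _, s, _ in gpus)
    shares = [min(round(total * s / speed_sum), cap) for _, s, cap in gpus]
    shares[-1] += total - sum(shares)        # absorb rounding on the last device
    ranges, start = [], 0
    for (name, _, _), n in zip(gpus, shares):
        ranges.append((name, start, start + n))
        start += n
    return ranges

print(partition(gpus, total_slabs))
```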

20.
《Computer Networks》2008,52(15):2924-2946
In this paper, we propose a fast simulation framework, TranSim, that expedites simulation by reducing the rate of generating packet-events. In the framework, we transform an IP network into an alternate network that generates a smaller number of packet-events, conduct simulation in the "transformed" network, and extrapolate simulation results for the original network from those obtained in the "transformed" network. We formally prove that, as long as the network invariant (the bandwidth-delay product) is preserved, the network dynamics, such as the queue dynamics and the packet dropping probability at each link, and TCP dynamics, such as the congestion window, RTTs, and rate dynamics, are also preserved in the course of network transformation. We implement TranSim in ns-2, and carry out a simulation study to evaluate it against packet-level simulation, with respect to the capability of capturing transient, packet-level network dynamics, the reduction in the execution time and memory usage, and the discrepancy in the network throughput. The simulation results indicate up to two orders of magnitude improvement in the execution time, and the performance improvement becomes more prominent as the network size increases (in terms of the number of nodes, the number of flows, the complexity of topology, and link capacity) or as the degree of downsizing increases. The memory usage incurred in TranSim is comparable to that in packet-level simulation. The error discrepancy between TranSim and packet-level simulation, on the other hand, is between 1% and 10% across a wide variety of network configurations, including randomly generated topologies, traffic loads with different AQM strategies, and different combinations of operating systems and hardware.
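A sketch of the invariant the proof rests on: scaling a link's bandwidth down by a factor k while stretching its propagation delay by k leaves the bandwidth-delay product unchanged, so fewer packet-events are generated per simulated second. The function and parameter names are assumptions for illustration, not TranSim's API.

```python
def transform_link(bandwidth_bps, delay_s, k):
    """Downscale the event rate by k while preserving the bandwidth-delay product."""
    new_bw = bandwidth_bps / k      # fewer packets per second -> fewer events
    new_delay = delay_s * k         # stretched delay compensates
    # The invariant: bandwidth * delay is unchanged by the transformation.
    assert abs(new_bw * new_delay - bandwidth_bps * delay_s) < 1e-6
    return new_bw, new_delay

# A 100 Mb/s, 10 ms link downsized by a factor of 10:
print(transform_link(100e6, 0.010, 10))   # -> (10 Mb/s, 100 ms), same BDP
```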
