Similar Documents
20 similar documents found (search time: 698 ms)
1.
Permitted and forbidden sets in symmetric threshold-linear networks
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
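The convergence behavior described above can be illustrated with a minimal Euler simulation of the standard threshold-linear form dx/dt = -x + [Wx + b]_+. The following sketch uses an illustrative symmetric matrix with mutual inhibition (not taken from the paper) and checks that the trajectory settles onto a fixed point of the dynamics:

```python
import numpy as np

# Euler simulation of a symmetric threshold-linear network
#   dx/dt = -x + max(0, W x + b)
# W and b are illustrative values chosen for the demo, not from the paper.
def simulate(W, b, x0, dt=0.01, steps=20000):
    x = x0.astype(float).copy()
    for _ in range(steps):
        x += dt * (-x + np.maximum(0.0, W @ x + b))
    return x

W = np.array([[0.0, -0.5],
              [-0.5, 0.0]])   # symmetric mutual inhibition
b = np.array([1.0, 1.0])
x_final = simulate(W, b, np.array([0.2, 0.9]))

# At a fixed point, x = max(0, W x + b), so this residual should vanish
residual = np.linalg.norm(-x_final + np.maximum(0.0, W @ x_final + b))
```

With inhibition weaker than 1, both neurons remain coactive (a "permitted set" in the paper's terminology) and the network converges to the interior fixed point x = (2/3, 2/3).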

2.
Hebbian learning, the paradigm of memory formation, needs further mechanisms to guarantee creation and maintenance of a viable memory system. One such proposed mechanism is Hebbian unlearning, a process hypothesized to occur during sleep. It can remove spurious states and eliminate global correlations in the memory system. However, the problem of spurious states is unimportant in the biologically interesting case of memories that are sparsely coded on excitatory neurons. Moreover, if some memories are anomalously strong and have to be weakened to guarantee proper functioning of the network, we show that it is advantageous to do that by neuronal regulation (NR) rather than synaptic unlearning. Neuronal regulation can account for dynamical maintenance of memory systems that undergo continuous synaptic turnover. This neuronal-based mechanism, regulating all excitatory synapses according to neuronal average activity, has recently gained strong experimental support. NR achieves synaptic maintenance over short time scales by preserving the average neuronal input field. On longer time scales it acts to maintain memories by letting the stronger synapses grow to their upper bounds. In ageing, these bounds are increased to allow stronger values of remaining synapses to overcome the loss of synapses that have perished.
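The core NR idea, preserving each neuron's average input field under synaptic turnover, can be sketched as a multiplicative rescaling. All numbers below are illustrative, not from the paper:

```python
import numpy as np

# Sketch of neuronal regulation (NR): each neuron multiplicatively rescales
# its surviving excitatory synapses so that its total input field is restored
# after synaptic turnover. Sizes and the turnover rate are illustrative.
rng = np.random.default_rng(3)
W = rng.uniform(0.0, 1.0, size=(5, 50))      # rows: neurons, cols: input synapses
target = W.sum(axis=1).copy()                # each neuron's baseline input field

# Synaptic turnover: a random ~20% of synapses perish
mask = rng.random(W.shape) < 0.2
W_turn = np.where(mask, 0.0, W)

# NR: multiplicative rescaling of the surviving synapses restores the field
scale = target / W_turn.sum(axis=1)
W_reg = W_turn * scale[:, None]
```

Note that the rescaling is neuron-wide, matching the abstract's point that NR acts on all excitatory synapses of a neuron according to its average activity rather than on individual synapses.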

3.
Wang Tao, Yu Ruizhao. Chinese Journal of Computers, 1995, 18(6): 475-479
This paper proposes a steepest-descent dynamic evolution rule for associative memory networks: the neuron state whose change produces the largest decrease in system energy is updated first, which reduces the probability of falling into spurious attractors. The basins of attraction of the stored samples and the convergence of the association rule are analyzed theoretically, and computer experiments demonstrate the rule's advantages.
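A minimal sketch of this steepest-descent idea on a Hopfield-type network: at each step, flip the single neuron whose flip gives the largest decrease of the energy E(s) = -(1/2) sᵀWs. The patterns and details below are illustrative, not the paper's exact construction:

```python
import numpy as np

def store(patterns):
    """Hebbian weight matrix with zero diagonal."""
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall_steepest(W, s, max_steps=100):
    """Greedy recall: always flip the neuron with the largest energy decrease."""
    s = s.copy()
    for _ in range(max_steps):
        h = W @ s
        dE = 2.0 * s * h          # energy change if neuron i is flipped
        i = np.argmin(dE)         # most negative dE = steepest descent
        if dE[i] >= 0:            # no flip lowers the energy: stable state
            return s
        s[i] = -s[i]
    return s

p = np.array([[1,  1, 1, -1, -1, -1],
              [1, -1, 1, -1,  1, -1]], dtype=float)
W = store(p)
noisy = p[0].copy()
noisy[0] = -noisy[0]              # corrupt one bit of the first memory
out = recall_steepest(W, noisy)
```

Flipping neuron i changes the energy by ΔE = 2 s_i h_i with h = Ws, so choosing the most negative ΔE implements the "largest energy decrease first" rule of the abstract.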

4.
Neurophysiological experiments show that the strength of synaptic connections can undergo substantial changes on a short time scale. These changes depend on the history of the presynaptic input. Using mean-field techniques, we study how short-time dynamics of synaptic connections influence the performance of attractor neural networks in terms of their memory capacity and capability to process external signals. For binary discrete-time as well as for firing rate continuous-time neural networks, the fixed points of the network dynamics are shown to be unaffected by synaptic dynamics. However, the stability of patterns changes considerably. Synaptic depression turns out to reduce the storage capacity. On the other hand, synaptic depression is shown to be advantageous for processing of pattern sequences. The analytical results on stability, size of the basins of attraction and on the switching between patterns are complemented by numerical simulations.
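The short-time synaptic dynamics referred to above is commonly modeled (e.g., in the Tsodyks-Markram tradition; the paper's exact model may differ) by a resource variable d with dd/dt = (1 - d)/τ - U·d·r for presynaptic rate r. A quick check that an Euler simulation reaches the analytic steady state d* = 1/(1 + Uτr):

```python
import numpy as np

# Illustrative parameters, not from the paper
U, tau = 0.5, 0.3

def depression_steady_state(r, dt=1e-3, steps=50000):
    """Euler-integrate the synaptic resource variable to its steady state."""
    d = 1.0
    for _ in range(steps):
        d += dt * ((1.0 - d) / tau - U * d * r)
    return d

r = 10.0                                  # presynaptic rate, illustrative
d_sim = depression_steady_state(r)
d_analytic = 1.0 / (1.0 + U * tau * r)    # steady state of the same ODE
```

Because d settles to a rate-dependent steady value, the effective synaptic strength at a fixed point is simply rescaled, which is consistent with the abstract's statement that fixed points are unaffected while their stability changes.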

5.

The FastICA algorithm is one of the most popular algorithms in the domain of independent component analysis (ICA). Despite its success, it is observed that FastICA occasionally yields outcomes that do not correspond to any true solutions (known as demixing vectors) of the ICA problem. These outcomes are commonly referred to as spurious solutions. Although FastICA is a well-studied ICA algorithm, the occurrence of spurious solutions is not yet completely understood by the ICA community. In this contribution, we aim at addressing this issue. In the first part of this work, we are interested in characterizing the relationship between demixing vectors, local optimizers of the contrast function and (attractive or unattractive) fixed points of the FastICA algorithm. We will show that there exists an inclusion relationship between these sets. In the second part, we investigate the possible scenarios where spurious solutions occur. It will be shown that when certain bimodal Gaussian mixture distributions are involved, there may exist spurious solutions that are attractive fixed points of FastICA. In this case, popular nonlinearities such as "Gauss" or "tanh" tend to yield spurious solutions, whereas "kurtosis" gives much more reliable results.
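For reference, the one-unit FastICA fixed-point iteration discussed above, with the "tanh" nonlinearity, is w ← E[z·tanh(wᵀz)] − E[1 − tanh²(wᵀz)]·w followed by normalization, applied to whitened data z. The toy mixing setup below uses uniform (sub-Gaussian) sources, a benign case where the iteration recovers a true demixing vector rather than a spurious one:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(2, 20000))  # unit-variance sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # illustrative mixing matrix
x = A @ s

# Whiten the mixtures
x = x - x.mean(axis=1, keepdims=True)
evals, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(evals ** -0.5) @ E.T) @ x

# One-unit fixed-point iteration with the "tanh" nonlinearity
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    y = w @ z
    t = np.tanh(y)
    w_new = (z * t).mean(axis=1) - (1.0 - t ** 2).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(w_new @ w) > 1.0 - 1e-12   # fixed point up to sign
    w = w_new
    if converged:
        break

y = w @ z   # recovered component; should match one source up to sign/scale
corr = max(abs(np.corrcoef(y, s[0])[0, 1]),
           abs(np.corrcoef(y, s[1])[0, 1]))
```

The paper's point is precisely that this iteration is not always so well behaved: for certain bimodal Gaussian mixture sources, attractive fixed points exist that correspond to none of the demixing directions.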


6.
A theoretical investigation into the performance of the Hopfield model
An analysis is made of the behavior of the Hopfield model as a content-addressable memory (CAM) and as a method of solving the traveling salesman problem (TSP). The analysis is based on the geometry of the subspace set up by the degenerate eigenvalues of the connection matrix. The dynamic equation is shown to be equivalent to a projection of the input vector onto this subspace. In the case of content-addressable memory, it is shown that spurious fixed points can occur at any corner of the hypercube that is on or near the subspace spanned by the memory vectors. The analysis also explains why the network frequently converges to an invalid solution when applied to the traveling salesman problem energy function. With these expressions, the network can be made robust and can reliably solve the traveling salesman problem with tour sizes of 50 cities or more.
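The projection view above has a particularly clean special case: when the stored bipolar memories are mutually orthogonal, the Hebbian matrix W = (1/n)PᵀP acts on any vector exactly as the orthogonal projection onto the span of the memories. A small check (toy patterns chosen to be orthogonal; self-connections deliberately kept so that W is the projector):

```python
import numpy as np

P = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1]], dtype=float)   # 3 mutually orthogonal memories, n = 4
n = P.shape[1]
W = P.T @ P / n                 # Hebbian connection matrix (diagonal kept)

x = np.array([0.3, -1.2, 0.7, 2.0])            # arbitrary input vector
proj = P.T @ np.linalg.solve(P @ P.T, P @ x)   # orthogonal projection onto span(P)
Wx = W @ x
```

Since PPᵀ = nI for orthogonal rows, Wx = Pᵀ(Px)/n coincides with the projection; corners of the hypercube near this subspace are exactly where the abstract locates the spurious fixed points.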

7.
A modified Hopfield auto-associative memory with improved capacity
This paper describes a new procedure to implement a recurrent neural network (RNN), based on a new approach to the well-known Hopfield autoassociative memory. In our approach an RNN is seen as a complete graph G and the learning mechanism is also based on Hebb's law, but with a very significant difference: the weights, which control the dynamics of the net, are obtained by coloring the graph G. Once the training is complete, the synaptic matrix of the net will be the weight matrix of the graph. Each of these matrices fulfils certain spatial properties; for this reason they are referred to as tetrahedral matrices. The geometrical properties of these tetrahedral matrices may be used for classifying the n-dimensional state-vector space into n classes. In the recall stage, a parameter vector is introduced, which is related to the capacity of the network. It may be shown that the bigger the value of the ith component of the parameter vector, the lower the capacity of the ith class of the state-vector space. Once the capacity has been controlled, a new set of parameters is introduced that uses the statistical deviation of the prototypes to compare them with those that appear as fixed points, thus eliminating a great number of parasitic fixed points.

8.
The problem of optimal asymmetric Hopfield-type associative memory (HAM) design based on perceptron-type learning algorithms is considered. It is found that most of the existing methods considered the design problem as either 1) finding optimal hyperplanes according to normal distance from the prototype vectors to the hyperplane surface or 2) obtaining the weight matrix W = [w_ij] by solving a constrained optimization problem. In this paper, we show that since the state space of the HAM consists of only bipolar patterns, i.e., V = (v_1, v_2, ..., v_N)^T ∈ {-1,+1}^N, the basins of attraction around each prototype (training) vector should be expanded by using the Hamming distance measure. For this reason, in this paper, the design problem is considered from a different point of view. Our idea is to systematically increase the size of the training set according to the desired basin of attraction around each prototype vector. We name this concept higher-order Hamming stability and show that the conventional minimum-overlap algorithm can be modified to incorporate this concept. Experimental results show that the recall capability as well as the number of spurious memories are all improved by using the proposed method. Moreover, it is well known that setting all self-connections w_ii to zero has the effect of reducing the number of spurious memories in state space. From the experimental results, we find that the basin width around each prototype vector can be enlarged by allowing nonzero diagonal elements when learning the weight matrix W. If the magnitude of w_ii is small for all i, then the condition w_ii = 0 for all i can be relaxed without seriously affecting the number of spurious memories in the state space. Therefore, the method proposed in this paper can be used to increase the basin width around each prototype vector at the cost of slightly increasing the number of spurious memories in the state space.
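The training-set expansion idea above (enforce a basin of Hamming radius r by training on every vector within distance r of each prototype) can be sketched as follows; the perceptron-type learning step itself is omitted, and the prototype is an arbitrary example:

```python
import numpy as np
from itertools import combinations

def hamming_ball(v, r):
    """All bipolar vectors within Hamming distance r of v (including v)."""
    n = len(v)
    out = [v.copy()]
    for k in range(1, r + 1):
        for idx in combinations(range(n), k):
            u = v.copy()
            u[list(idx)] *= -1            # flip the chosen k components
            out.append(u)
    return out

proto = np.array([1, -1, 1, 1, -1])
ball = hamming_ball(proto, r=2)
# |ball| = C(5,0) + C(5,1) + C(5,2) = 1 + 5 + 10 = 16
```

Every vector in the ball would then be added to the training set with the requirement that it map back to `proto`, which is how the desired basin width enters the design as a Hamming-distance constraint.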

9.
The local identical index (LII) associative memory (AM) proposed by the authors in a previous paper is a one-shot feedforward structure designed to exhibit no spurious attractors. In this paper we relax the latter design constraint in exchange for enlarged basins of attraction, and we develop a family of modified LII AM networks that exhibit improved performance, particularly in memorizing highly correlated patterns. The new algorithm meets the requirement of no spurious attractors only in a local sense. Finally, we show that the modified LII family of networks can accommodate composite patterns of any size by storing (memorizing) only the basic (prime) prototype patterns. The latter property translates to low learning complexity and a simple network structure with significant memory savings. Simulation studies and comparisons illustrate and support the theoretical developments.

10.
Nonconvex optimization arises in many areas of computational science and engineering. However, most nonconvex optimization algorithms are only known to have local convergence or subsequence convergence properties. In this paper, we propose an algorithm for nonconvex optimization and establish its global convergence (of the whole sequence) to a critical point. In addition, we give its asymptotic convergence rate and numerically demonstrate its efficiency. In our algorithm, the variables of the underlying problem are either treated as one block or multiple disjoint blocks. It is assumed that each non-differentiable component of the objective function, or each constraint, applies only to one block of variables. The differentiable components of the objective function, however, can involve multiple blocks of variables together. Our algorithm updates one block of variables at a time by minimizing a certain prox-linear surrogate, along with an extrapolation to accelerate its convergence. The order of update can be either deterministically cyclic or randomly shuffled for each cycle. In fact, our convergence analysis only needs that each block be updated at least once in every fixed number of iterations. We show its global convergence (of the whole sequence) to a critical point under fairly loose conditions including, in particular, the Kurdyka–Łojasiewicz condition, which is satisfied by a broad class of nonconvex/nonsmooth applications. These results, of course, remain valid when the underlying problem is convex. We apply our convergence results to the coordinate descent iteration for non-convex regularized linear regression, as well as a modified rank-one residue iteration for nonnegative matrix factorization. We show that both applications have global convergence. Numerically, we tested our algorithm on nonnegative matrix and tensor factorization problems, where random shuffling clearly improves the chance to avoid low-quality local solutions.
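A minimal sketch of the block prox-linear update with extrapolation, on a toy nonnegative least-squares problem min_{x ≥ 0} (1/2)||Ax − b||² with the variables split into two blocks. The step size, extrapolation weight, and problem data are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 6))
b = rng.normal(size=8)
blocks = [np.arange(0, 3), np.arange(3, 6)]     # two disjoint variable blocks
L = np.linalg.norm(A, 2) ** 2                   # global Lipschitz bound, valid per block

x = np.zeros(6)
x_prev = np.zeros(6)
for k in range(2000):
    w = min(k / (k + 3.0), 0.9)                 # extrapolation weight (capped for safety)
    x_old = x.copy()
    for blk in blocks:                          # deterministic cyclic order
        xe = x[blk] + w * (x[blk] - x_prev[blk])   # extrapolated block point
        z = x.copy()
        z[blk] = xe
        g = A.T @ (A @ z - b)                   # gradient at the extrapolated point
        x[blk] = np.maximum(0.0, xe - g[blk] / L)  # prox-linear (projected gradient) step
    x_prev = x_old

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2
# Stationarity residual for the constrained problem (zero at a critical point)
kkt = np.linalg.norm(x - np.maximum(0.0, x - A.T @ (A @ x - b)))
```

Here the nonsmooth component (the nonnegativity constraint) is separable across blocks while the smooth coupling term involves both, matching the structural assumption in the abstract; since this toy instance is convex, the critical point reached is the global minimizer.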

11.
The exact dynamics of shallow loaded associative neural memories are generated and characterized. The Boolean matrix analysis approach is employed for the efficient generation of all possible state transition trajectories for parallel updated binary-state dynamic associative memories (DAMs). General expressions for the size of the basin of attraction of fundamental and oscillatory memories and the number of oscillatory and stable states are derived for discrete synchronous Hopfield DAMs loaded with one, two, or three even-dimensionality bipolar memory vectors having the same mutual Hamming distances between them. Spurious memories are shown to occur only if the number of stored patterns exceeds two in an even-dimensionality Hopfield memory. The effects of odd- versus even-dimensionality memory vectors on DAM dynamics and the effects of memory pattern encoding on DAM performance are tested. An extension of the Boolean matrix dynamics characterization technique to other, more complex DAMs is presented.
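For a small synchronous DAM the complete transition map can simply be enumerated over all 2ⁿ bipolar states, which mirrors the goal of the Boolean-matrix approach (though not its specific machinery). The example stores two memories in an even-dimensional net, where, per the abstract, no spurious memories should appear:

```python
import numpy as np
from itertools import product

def transition(W, s):
    """One parallel (synchronous) update; ties resolved to +1."""
    h = W @ s
    return np.where(h >= 0, 1, -1)

P = np.array([[1,  1, -1, -1],
              [1, -1,  1, -1]], dtype=float)   # two stored memories, n = 4
W = P.T @ P / 4
np.fill_diagonal(W, 0.0)

states = [np.array(s) for s in product([-1, 1], repeat=4)]
nxt = {tuple(s): tuple(transition(W, s)) for s in states}
fixed = [s for s, t in nxt.items() if s == t]
period2 = [s for s, t in nxt.items() if s != t and nxt[t] == s]
```

For this loading the only stable states are the two memories and their negatives (no spurious fixed points, consistent with the "more than two patterns" threshold), and every other state lies on a period-2 oscillation of the parallel dynamics.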

12.
A fundamental problem of associative memories is that, in addition to the stored samples appearing as stable states, spurious states also arise. To address this, a third-order output function with an additional parameter (a threshold) is adopted in place of the second-order output function. Mean-field theory is applied to study the effects of this threshold on system behavior and how to choose it so that the basins of attraction of the spurious states are minimized. Simulation results are given to verify the theoretical analysis and synthesis.

13.
Synthesis of Brain-State-in-a-Box (BSB) based associative memories
Presents a novel synthesis procedure to realize an associative memory using the Generalized-Brain-State-in-a-Box (GBSB) neural model. The implementation yields an interconnection structure that guarantees that the desired memory patterns are stored as asymptotically stable equilibrium points and that possesses very few spurious states. Furthermore, the interconnection structure is in general non-symmetric. Simulation examples are given to illustrate the effectiveness of the proposed synthesis method. The results obtained for the GBSB model are successfully applied to other neural network models.

14.
The fixed-time relative position tracking and attitude synchronization control problem of a spacecraft fly-around mission for a noncooperative target in the presence of parameter uncertainties and external disturbances is investigated. Firstly, a novel and coupled relative position and attitude motion model for a noncooperative fly-around mission is established. Subsequently, a novel nonsingular fast terminal sliding mode (NFTSM) surface is developed, and the explicit estimation of the convergence time independent of initial states is provided. Fair and systematic comparisons among several typical terminal sliding modes show that the designed NFTSM has faster convergence performance than the fast terminal sliding mode. Then, a robust integrated adaptive fixed-time NFTSM control law with no precise knowledge of the mass and inertia matrix and disturbances by combining the nonsingular terminal sliding mode technique with an adaptive methodology is proposed, which can eliminate the chattering phenomenon and guarantee that the relative position and attitude tracking errors can converge into the small regions containing the origin in fixed time. Finally, numerical simulations are performed to demonstrate the effectiveness of the proposed control schemes.

15.
Chen Lei, Fu Kun. Application Research of Computers, 2020, 37(4): 999-1003, 1024
When point-cloud registration relies only on a swarm-intelligence optimization algorithm and the spatial information of the point clouds, searching for corresponding points between the two clouds during optimization is time-consuming and convergence is slow. To address this, an artificial bee colony registration algorithm based on curvature information is proposed. The algorithm extracts feature points according to curvature and obtains, via an improved artificial bee colony algorithm that optimizes the objective function, the optimal transformation matrix that aligns the two point clouds. During population optimization, curvature information constrains the search range for corresponding points, reducing the number of points involved in the computation. Comparative experiments show that, relative to registration algorithms that use only random point selection or only the spatial coordinates of the point clouds, the proposed algorithm effectively accelerates convergence and significantly shortens registration time without reducing registration accuracy.

16.
A neural network consisting of a gallery of independent subnetworks is developed for associative memory which stores and recalls gray-scale images. Each original image is encoded by a unique stable state of one of the neural recurrent subnetworks. Compared with the Amari-Hopfield associative memory, our solution has no spurious states, is less sensitive to noise, and its network complexity is significantly lower. Computer simulations confirm that associative recall in this system for images of natural scenes is very robust. Colored additive and multiplicative noise with standard deviation up to σ = 2 can be removed perfectly from a normalized image. The same observations are valid for spiky noise distributed on up to 70% of the image area. Even if we remove up to 95% of the pixels from the original image in a deterministic or random way, the network still performs the correct association.

17.
Analysis of the global dynamics of genetic algorithms
Most existing analyses of the working mechanism of genetic algorithms focus on questions such as limit convergence, while the global dynamics of the algorithm has received little study. Starting from a representative, simplified 2-bit problem, the evolutionary operators commonly used in genetic algorithms, and their combinations, can be described formally, permitting a complete analysis of the GA's global dynamics. Four mathematical models are established for different parameter choices. By analyzing the attractivity of the fixed points in these models, the influence of the different evolutionary operators on the dynamics is revealed. For this problem, the global convergence of the algorithm is proved. It is further shown that when two mutually competing local optima exist, the models have only two attracting points and one saddle (or repelling) point, and no other fixed points or periodic points. The convergence outcome of the algorithm is determined entirely by where the initial condition lies in the state space, and the proportions of the corresponding convergence regions are determined entirely by the model parameters.
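In the spirit of the analysis above, a GA on a 2-bit problem can be treated as a deterministic dynamical system on genotype frequencies (an infinite-population model) with fitness-proportional selection and bitwise mutation. The fitness values and mutation rate below are illustrative, with two competing optima at 00 and 11:

```python
import numpy as np

f = np.array([1.0, 0.5, 0.5, 1.2])   # fitness of genotypes 00, 01, 10, 11
mu = 0.01                             # per-bit mutation probability

def flip_prob(i, j):
    """Probability that genotype j mutates into genotype i (2 independent bits)."""
    d = bin(i ^ j).count("1")         # Hamming distance between genotypes
    return (mu ** d) * ((1.0 - mu) ** (2 - d))

M = np.array([[flip_prob(i, j) for j in range(4)] for i in range(4)])

p = np.full(4, 0.25)                  # start from the uniform population
for _ in range(5000):
    p_new = M @ (f * p) / (f @ p)     # selection, then mutation
    if np.linalg.norm(p_new - p) < 1e-14:
        p = p_new
        break
    p = p_new
```

Iterating selection plus mutation drives the frequency vector to a fixed point (the Perron eigenvector of M·diag(f)); with the values above it is concentrated on the fitter optimum 11, while which basin wins in a finite population depends on the initial condition, as the abstract states.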

18.
This paper proposes an iterative PageRank computation method based on decomposing the transition matrix. The PageRank model is further derived and its Markov state-transition matrix is decomposed, which lowers the storage cost and computational complexity, reduces I/O demand, and makes an engineering implementation of the PageRank computation simpler. Experiments show that for more than 17 million web pages with 280 million links, one iteration completes within 30 seconds with a peak memory requirement of 585 MB, meeting the needs of engineering applications.
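For context, the baseline iteration that such decompositions accelerate is the standard PageRank power method, r ← d·Pᵀr + (1 − d)/N, run until the change is negligible. A sketch on a tiny illustrative link graph (the paper's decomposition itself is not reproduced here):

```python
import numpy as np

d = 0.85                                       # damping factor
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}    # page -> outlinks (toy graph)
N = 4

r = np.full(N, 1.0 / N)
for _ in range(200):
    r_new = np.full(N, (1.0 - d) / N)          # teleportation term
    for page, outs in links.items():
        share = d * r[page] / len(outs)        # rank mass passed along each outlink
        for q in outs:
            r_new[q] += share
    if np.abs(r_new - r).sum() < 1e-12:        # L1 convergence test
        r = r_new
        break
    r = r_new
```

Each sweep touches every link exactly once, so the per-iteration cost is linear in the number of links; the paper's contribution is reorganizing this sweep so that the matrix need not be held in memory at once.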

19.
The problem of spurious patterns in neural associative memory models is discussed. Some suggestions to solve this problem from the literature are reviewed and their inadequacies are pointed out. A solution based on the notion of neural self-interaction with a suitably chosen magnitude is presented for the Hebbian learning rule. For an optimal learning rule based on linear programming, asymmetric dilution of synaptic connections is presented as another solution to the problem of spurious patterns. With varying percentages of asymmetric dilution it is demonstrated numerically that this optimal learning rule leads to near total suppression of spurious patterns. For practical usage of neural associative memory networks a combination of the two solutions with the optimal learning rule is recommended to be the best proposition.

20.
Senn W, Fusi S. Neural Computation, 2005, 17(10): 2106-2138
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performances with unbounded weights are regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove in the form of a generalized perceptron convergence theorem that under these constraints, a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response, and therefore silence neurons that cannot provide any information.  相似文献
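A heavily simplified sketch of the learning rule described above: a perceptron-like unit with excitatory weights clipped to [0, w_max] and a fixed threshold, which modifies its synapses only when the response mismatches the desired output. The dataset, bounds, and learning rate are illustrative, and the abstract's global-inhibition and small-learning-rate conditions are only loosely mimicked:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
X = rng.integers(0, 2, size=(40, n)).astype(float)   # binary input patterns
w_true = rng.uniform(0.0, 1.0, size=n)               # a bounded "teacher" weight vector
scores = X @ w_true
theta = np.median(scores)                            # fixed neuronal threshold
keep = np.abs(scores - theta) > 0.5                  # keep a margin: cleanly separable set
X, y = X[keep], np.where(scores[keep] > theta, 1, -1)

w = np.full(n, 0.5)                                  # start mid-range
w_max, eta = 1.0, 0.02                               # bound and small learning rate
for epoch in range(2000):
    errors = 0
    for xi, yi in zip(X, y):
        if np.sign(xi @ w - theta) != yi:            # modify only on a mismatch
            errors += 1
            w = np.clip(w + eta * yi * xi, 0.0, w_max)   # bounded excitatory synapses
    if errors == 0:
        break

train_err = np.mean(np.sign(X @ w - theta) != y)
```

Because updates happen only on errors and the weights saturate at the bounds, repeated presentation of a separable pattern set leaves the weights at a classifying configuration instead of drifting, which is the forgetting-avoidance mechanism the abstract formalizes.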


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号