Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Associative neural memories are models of biological phenomena that allow for the storage of pattern associations and the retrieval of the desired output pattern upon presentation of a possibly noisy or incomplete version of an input pattern. In this paper, we introduce implicative fuzzy associative memories (IFAMs), a class of associative neural memories based on fuzzy set theory. An IFAM consists of a network of completely interconnected Pedrycz logic neurons with threshold whose connection weights are determined by the minimum of implications of presynaptic and postsynaptic activations. We present a series of results for autoassociative models, including one-pass convergence, unlimited storage capacity, and tolerance with respect to eroded patterns. Finally, we present some results on fixed points and discuss the relationship between implicative fuzzy associative memories and morphological associative memories.
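As a rough illustration of the learning rule described above, the following sketch implements an autoassociative IFAM using the Gödel implication. The function names and the choice of implication are assumptions for illustration; the paper covers a family of implications.

```python
import numpy as np

def godel_implication(a, b):
    # Goedel implication I(a, b) = 1 if a <= b, else b (elementwise)
    return np.where(a <= b, 1.0, b)

def ifam_train(X, Y):
    """Learn weights w_ij = min over patterns k of I(x_j^k, y_i^k)
    and thresholds theta_i = min_k y_i^k; columns of X, Y are patterns."""
    n_out, n_in = Y.shape[0], X.shape[0]
    W = np.ones((n_out, n_in))
    for k in range(X.shape[1]):
        # broadcast: entry (i, j) is I(x_j^k, y_i^k)
        W = np.minimum(W, godel_implication(X[:, k][None, :], Y[:, k][:, None]))
    theta = Y.min(axis=1)
    return W, theta

def ifam_recall(W, theta, x):
    # max-min composition, then the threshold applied as a maximum
    return np.maximum(np.minimum(W, x[None, :]).max(axis=1), theta)
```

In the autoassociative case (Y = X), every stored pattern is recalled perfectly in one pass, which matches the unlimited-storage-capacity claim.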

2.
Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employing binary synapses, or the BCPNN rule of Lansner and Ekeberg, for example. Here I show that all of these previous models are limit cases of a general optimal model where synaptic learning is determined by probabilistic Bayesian considerations. Asymptotically, for large networks and very sparse neuron activity, the Bayesian model becomes identical to an inhibitory implementation of the Willshaw and BCPNN-type models. For less sparse patterns, the Bayesian model becomes identical to Hopfield-type networks employing the covariance rule. For intermediate sparseness or finite networks, the optimal Bayesian learning rule differs from the previous models and can significantly improve memory performance. I also provide a unified analytical framework to determine memory capacity at a given output noise level that links approaches based on mutual information, Hamming distance, and signal-to-noise ratio.
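For concreteness, here is a minimal sketch of the Willshaw model with binary synapses that the Bayesian framework subsumes. The pattern sizes and the exact-sum firing threshold are illustrative assumptions, not the paper's Bayesian rule.

```python
import numpy as np

def willshaw_store(pairs, n_in, n_out):
    # Clipped Hebbian learning: a binary synapse switches on (and stays
    # on) once any stored pair activates both of its neurons.
    W = np.zeros((n_out, n_in), dtype=int)
    for x, y in pairs:
        W |= np.outer(y, x)
    return W

def willshaw_recall(W, x):
    # Fire exactly the units whose dendritic sum reaches the number of
    # active inputs (the classical Willshaw threshold).
    return (W @ x == x.sum()).astype(int)
```

With sparse patterns and low memory load, recall of the stored associations is exact; errors appear as the matrix fills with ones.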

3.
Hopfield associative memories with αn malfunctioning neurons are considered. Using some facts from the theory of exchangeable events, the asymptotic storage capacity of such a network is derived as a function of the parameter α under stability and attractivity requirements. It is shown that the asymptotic storage capacity is (1-α)^2 n/(4 log n) under the stability requirement and (1-α)^2 (1-2ρ)^2 n/(4 log n) under the attractivity requirement. Comparing these capacities with their maximum values, corresponding to the case of no malfunctioning neurons (α=0), shows the robustness of the retrieval mechanism of Hopfield associative memories with respect to the existence of malfunctioning neurons. This result also supports the claim that neural networks are fault tolerant.
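The two capacity expressions can be evaluated directly. A small sketch follows; reading ρ as the fractional attraction radius is an assumption based on the abstract's wording.

```python
import math

def capacity_stability(n, alpha):
    # (1 - alpha)^2 * n / (4 log n): capacity under the stability requirement
    return (1 - alpha) ** 2 * n / (4 * math.log(n))

def capacity_attractivity(n, alpha, rho):
    # Extra (1 - 2 rho)^2 factor when each stored pattern must also
    # attract inputs within Hamming distance rho * n (assumed reading).
    return (1 - alpha) ** 2 * (1 - 2 * rho) ** 2 * n / (4 * math.log(n))
```

For example, a 10% fraction of malfunctioning neurons (α = 0.1) costs only the factor (0.9)^2 ≈ 0.81 of the fault-free capacity, which is the robustness the abstract points out.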

4.
The recursive training algorithm for the optimal interpolative (OI) classification network is extended to include distributed fault tolerance. The conventional OI Net learning algorithm leads to network weights that are nonoptimally distributed (in the sense of fault tolerance). Fault tolerance is becoming an increasingly important factor in hardware implementations of neural networks. But fault tolerance is often taken for granted in neural networks rather than being explicitly accounted for in the architecture or learning algorithm. In addition, when fault tolerance is considered, it is often accounted for using an unrealistic fault model (e.g., neurons that are stuck on or off rather than small weight perturbations). Realistic fault tolerance can be achieved through a smooth distribution of weights, resulting in low weight salience and distributed computation. Results of trained OI Nets on the Iris classification problem show that fault tolerance can be increased with the algorithm presented in this paper.
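The "realistic fault model" of small weight perturbations can be made concrete with a toy evaluation loop. The data, the fixed weight vector, and the noise levels below are all illustrative assumptions; this is not the OI Net algorithm itself, only the style of fault injection the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data standing in for a trained classifier's task.
X = np.array([[2.0, 0.1], [1.5, -0.3], [-2.0, 0.2], [-1.7, -0.1]])
y = np.array([1, 1, 0, 0])
w = np.array([1.0, 0.0])          # a hypothetical "trained" weight vector

def accuracy(w, X, y):
    pred = (X @ w > 0).astype(int)
    return (pred == y).mean()

def faulted_accuracy(w, X, y, sigma, trials=200):
    # Realistic fault model: small Gaussian perturbations of the weights
    # rather than stuck-at neurons; report mean accuracy over many faults.
    accs = [accuracy(w + rng.normal(0, sigma, w.shape), X, y)
            for _ in range(trials)]
    return float(np.mean(accs))
```

A weight distribution is "fault tolerant" in this sense when `faulted_accuracy` degrades gracefully as `sigma` grows.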

5.
Maximally fault tolerant neural networks
An application of neural network modeling is described for generating hypotheses about the relationships between response properties of neurons and information processing in the auditory system. The goal is to study response properties that are useful for extracting sound localization information from directionally selective spectral filtering provided by the pinna. For studying sound localization based on spectral cues provided by the pinna, a feedforward neural network model with a guaranteed level of fault tolerance is introduced. Fault tolerance and uniform fault tolerance in a neural network are formally defined and a method is described to ensure that the estimated network exhibits fault tolerance. The problem of estimating weights for such a network is formulated as a large-scale nonlinear optimization problem. Numerical experiments indicate that solutions with uniform fault tolerance exist for the pattern recognition problem considered. Solutions derived by introducing fault tolerance constraints have better generalization properties than solutions obtained via unconstrained back-propagation.

6.
An Efficient Learning Algorithm for Associative Memory Neural Networks
An efficient learning algorithm for a new associative memory network model is proposed, with the following properties: (1) it can store any given set of training patterns in full, with no restriction on the number of training patterns or on the strength of the correlations between them; (2) the smallest basin of attraction among the training patterns is maximized; (3) subject to (2), each training pattern is given as large a basin of attraction as possible; and (4) the associative memory network is globally stable. Extensive computer simulations show that the proposed learning algorithm achieves stronger storage capacity and better associative error tolerance than existing algorithms.

7.
The remarkable processing capabilities of the nervous system must derive at least in part from the large number of neurons participating (roughly 10^10), since the timescales involved are of the order of milliseconds rather than the nanoseconds of modern computers. We summarise common features of the neural network models which attempt to capture this behaviour and describe the many levels of parallelism which they exhibit. A range of models has been implemented on the SIMD (ICL Distributed Array Processor) and MIMD (Meiko Computing Surface) hardware at Edinburgh. Examples include: (i) training algorithms in the context of the Hopfield net, with specific application to the storage of words and text with content-addressable memory; (ii) the back-propagation training algorithm for the multi-layer perceptron; (iii) image restoration with Hopfield and Tank analogue neurons; and (iv) the Durbin and Willshaw elastic net, as applied to the travelling salesman problem.

8.
This paper constructs a chaotic neural network model using coupled chaotic oscillators as individual chaotic neurons, with the connection weights designed by a modified Hebbian algorithm. On this basis, dynamic associative memory is realized in the chaotic neural network, and the model is applied to diagnosing inter-turn short-circuit faults in generator stator windings. The results show that the method aids the memorization of fault patterns.

9.
An Exponential Multi-Valued Bidirectional Associative Memory Model with Intraconnections
C. C. Wang's multi-valued exponential bidirectional associative memory (MVeBAM) is an associative neural network with high storage capacity. Building on MVeBAM, this paper introduces autocorrelation terms (i.e., intralayer connections) to propose a new multi-valued exponential bidirectional associative memory model with intraconnections, generalizing MVeBAM. By defining a simple energy function, the model is proven stable under both synchronous and asynchronous update modes, which guarantees that every learned pattern pair becomes a stable point of the extended model (EMVeBAM). Finally, computer simulations confirm that EMVeBAM has higher storage capacity and better error-correcting performance than MVeBAM.

10.

11.
Gray-scale morphological associative memories   总被引:4,自引:0,他引:4  
Neural models of associative memories are usually concerned with the storage and the retrieval of binary or bipolar patterns. Thus far, the emphasis in research on morphological associative memory systems has been on binary models, although a number of notable features of autoassociative morphological memories (AMMs) such as optimal absolute storage capacity and one-step convergence have been shown to hold in the general, gray-scale setting. In this paper, we make extensive use of minimax algebra to analyze gray-scale autoassociative morphological memories. Specifically, we provide a complete characterization of the fixed points and basins of attraction which allows us to describe the storage and recall mechanisms of gray-scale AMMs. Computer simulations using gray-scale images illustrate our rigorous mathematical results on the storage capacity and the noise tolerance of gray-scale morphological associative memories (MAMs). Finally, we introduce a modified gray-scale AMM model that yields a fixed point which is closest to the input pattern with respect to the Chebyshev distance and show how gray-scale AMMs can be used as classifiers.
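A minimal sketch of a gray-scale autoassociative morphological memory in minimax algebra, assuming Ritter-style min-of-differences learning with max-plus recall (one of the two dual memories discussed in this literature); function names are illustrative.

```python
import numpy as np

def amm_train(X):
    # Minimax-algebra learning: m_ij = min over stored patterns k of
    # (x_i^k - x_j^k); columns of X are the gray-scale patterns.
    return (X[:, None, :] - X[None, :, :]).min(axis=2)

def amm_recall(M, x):
    # Max-plus product: y_i = max_j (m_ij + x_j); recall is one-step.
    return (M + x[None, :]).max(axis=1)
```

The diagonal of M is zero and every cross-term is bounded by the stored differences, so each stored pattern is a fixed point, illustrating the optimal absolute storage capacity and one-step convergence mentioned above.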

12.
We study pulse-coupled neural networks that satisfy only two assumptions: each isolated neuron fires periodically, and the neurons are weakly connected. Each such network can be transformed by a piecewise continuous change of variables into a phase model, whose synchronization behavior and oscillatory associative properties are easier to analyze and understand. Using the phase model, we can predict whether a given pulse-coupled network has oscillatory associative memory, or what minimal adjustments should be made so that it can acquire memory. In the search for such minimal adjustments we obtain a large class of simple pulse-coupled neural networks that can memorize and reproduce synchronized temporal patterns the same way a Hopfield network does with static patterns. The learning occurs via modification of synaptic weights and/or synaptic transmission delays.

13.
14.
The Hopfield model effectively stores a comparatively small number of initial patterns, about 15% of the size of the neural network. A greater value can be attained only in the Potts-glass associative memory model, in which neurons may exist in more than two states. Still greater memory capacity is exhibited by a parametric neural network based on the nonlinear optical signal transfer and processing principles. A formalism describing both the Potts-glass associative memory and the parametric neural network within a unified framework is developed. The memory capacity is evaluated by the Chebyshev–Chernov statistical method.

15.
This paper proposes a novel associative memory neural network in which each pattern is stored in a loop passing through all neurons of the network. Connections carry a logic state together with a set of neuron indices, the signals processed and transmitted in the network are sequences of neuron indices, and the neurons perform a set of symbolic and logical operations on such sequences. The network's memory capacity is 2^N - 2N, spurious patterns are completely eliminated, and memory efficiency and reliability are improved.

16.
The approach combines the inference capability of fuzzy logic with the adaptive, self-learning capability of neural networks. A fast learning algorithm based on pattern clustering in the output space is adopted, compensatory fuzzy neurons are introduced, and the fuzzy operations use dynamic global optimization, giving the trained network higher fault tolerance while remedying the time-consuming training of neural networks and improving efficiency. Simulation analysis of fault detection is carried out, and the method achieves good results when applied to fault diagnosis of unmodeled systems.

17.
We have combined competitive and Hebbian learning in a neural network designed to learn and recall complex spatiotemporal sequences. In such sequences, a particular item may occur more than once, or the sequence may share states with another sequence. Processing of repeated/shared states is a hard problem that occurs very often in the domain of robotics. The proposed model consists of two groups of synaptic weights: competitive interlayer and Hebbian intralayer connections, which are responsible for encoding the spatial and temporal features of the input sequence, respectively. Three additional mechanisms allow the network to deal with shared states: context units, neurons disabled from learning, and redundancy used to encode sequence states. The network operates by determining the current and the next state of the learned sequences. The model is simulated over various sets of robot trajectories in order to evaluate its storage and retrieval abilities, its sequence sampling effects, its robustness to noise, and its fault tolerance.
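The role of context in disambiguating shared states can be sketched with a plain transition table keyed on a short history window. This stands in for the paper's competitive/Hebbian layers and context units, so the `depth` parameter and the dictionary representation are assumptions made purely for illustration.

```python
def learn_sequences(seqs, depth=2):
    # Map a context window (up to `depth` recent states) to the next state;
    # longer contexts let sequences that share a state stay distinguishable.
    table = {}
    for seq in seqs:
        for t in range(len(seq) - 1):
            context = tuple(seq[max(0, t - depth + 1): t + 1])
            table[context] = seq[t + 1]
    return table

def recall_sequence(table, start, depth=2, steps=10):
    # Replay a learned sequence from its first state(s).
    out = list(start)
    for _ in range(steps):
        context = tuple(out[-depth:])
        if context not in table:
            break
        out.append(table[context])
    return out
```

Here the sequences A-X-C and B-X-D share the state X, yet each is replayed correctly because the lookup context includes the state before X.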

18.
Fault detection in tolerance analog circuits is crucial for the stable operation of electronic equipment. To address the high computational cost, long training time, and high detection error rate of conventional detection algorithms, a fault detection algorithm for tolerance analog circuits based on a modular neural network is proposed. The functional modules of the neural-network detection model are partitioned, and the fault signal features of the tolerance analog circuit are extracted on a per-module basis. Using the Euclidean distance from the sample center to the fault feature points, the feature vectors of fault samples are compared, and faults in the tolerance analog circuit are accurately located and detected according to the modular neural network's decision classification function. Simulation data show that the proposed algorithm is advantageous under different sample sizes, with a minimum error of 0.382%.

19.
It is commonly assumed that neural networks have a built-in fault tolerance property, mainly due to their parallel structure. The neural network research community discussed these properties until about 1994, after which the subject was mostly ignored. Recently, the subject was brought back to discussion due to the possibility of using neural networks in areas where fault tolerance and graceful degradation properties would be an added value, such as medical applications of nano-electronics or space missions. Nevertheless, the evaluation of fault tolerance and graceful degradation characteristics has remained difficult because there were no systematic methods or tools that could easily be applied to a given artificial neural network application. The discussion of models is the first step toward developing the fault tolerance capability and building a tool that can evaluate and improve this characteristic. The present work proposes a fault tolerance model, presents solutions for improving it, and introduces the Fault Tolerance Simulation and Evaluation Tool for Artificial Neural Networks, which evaluates and improves fault tolerance.

20.
王涛, 王科俊, 贾诺. 《计算机应用》, 2011, 31(5): 1311-1313.
To improve the information-processing ability of chaotic neural networks, a parameter-modulation control method is applied: by controlling the coupling parameter of a delayed, symmetric, globally coupled chaotic neural network, the network's dynamic associative memory is studied. The controlled network reaches a stable output even when only some of its neurons enter a periodic state, and the stable output sequence contains only the stored pattern associated with the input pattern and its inverse pattern. Simulation experiments show that the network has good error tolerance and a high recall accuracy, making it well suited to information processing and pattern recognition.
