Similar Literature
20 similar records found (search time: 46 ms)
1.
This paper introduces a general class of neural networks with arbitrary constant delays in the neuron interconnections, and neuron activations belonging to the set of discontinuous monotone increasing and (possibly) unbounded functions. The discontinuities in the activations are an ideal model of the situation where the gain of the neuron amplifiers is very high and tends to infinity, while the delay accounts for the finite switching speed of the neuron amplifiers, or the finite signal propagation speed. It is known that the delay in combination with high-gain nonlinearities is a particularly harmful source of potential instability. The goal of this paper is to single out a subclass of the considered discontinuous neural networks for which stability is instead insensitive to the presence of a delay. More precisely, conditions are given under which there is a unique equilibrium point of the neural network, which is globally exponentially stable for the states, with a known convergence rate. The conditions are easily testable and independent of the delay. Moreover, global convergence in finite time of the state and output is investigated. In doing so, new interesting dynamical phenomena are highlighted with respect to the case without delay, which make the study of convergence in finite time significantly more difficult. The obtained results extend previous work on global stability of delayed neural networks with Lipschitz continuous neuron activations, and neural networks with discontinuous neuron activations but without delays.
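The delay-independent stability described above can be illustrated numerically. The following is a minimal simulation sketch (all parameters are ours, not the paper's): Euler integration of a delayed network x'(t) = -x(t) + A f(x(t)) + B f(x(t - tau)) + I with the discontinuous activation f = sign, where the dominant negative self-interconnections make the equilibrium attracting regardless of the delay.

```python
import numpy as np

# Euler simulation of x'(t) = -x(t) + A f(x(t)) + B f(x(t - tau)) + I
# with f = sign (the infinite-gain limit of the neuron amplifiers).
def simulate(A, B, I, tau=1.0, dt=0.01, T=20.0):
    n = len(I)
    steps = int(T / dt)
    delay = int(tau / dt)
    x = np.zeros((steps + delay, n))       # zero initial history
    f = np.sign                            # discontinuous activation
    for k in range(delay, steps + delay - 1):
        dx = -x[k] + A @ f(x[k]) + B @ f(x[k - delay]) + I
        x[k + 1] = x[k] + dt * dx
    return x[delay:]

A = np.array([[-2.0, 0.3], [0.2, -2.0]])   # dominant negative self-connections
B = np.array([[0.1, 0.1], [0.1, 0.1]])     # delayed interconnections
I = np.array([3.0, -3.0])                  # constant inputs
traj = simulate(A, B, I)
# the state settles at the unique equilibrium (0.7, -0.8) for any tau
```

With these matrices the equilibrium x* = A f(x*) + B f(x*) + I = (0.7, -0.8) is reached for any value of the delay, illustrating the delay-insensitivity the paper establishes.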

2.
We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information processing states coding potentially unbounded histories of processed inputs. We show that it is often possible to extract from trained RSNN the target MM by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in a RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.

3.
In contrast to the usual types of neural networks which utilize two states for each neuron, a class of synchronous discrete-time neural networks with multilevel threshold neurons is developed. A qualitative analysis and a synthesis procedure for the class of neural networks considered constitute the principal contributions of this paper. The applicability of the present class of neural networks is demonstrated by means of a gray level image processing example, where each neuron can assume one of sixteen values. When compared to the usual neural networks with two state neurons, networks which are endowed with multilevel neurons will, in general, for a given application, require fewer neurons and thus fewer interconnections. This is an important consideration in VLSI implementation.
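A multilevel threshold neuron of the kind described can be sketched as a quantizing activation. This is a hypothetical illustration (the paper's exact threshold placement may differ): each neuron maps its input into one of 16 states, matching the 16 gray levels of the image example.

```python
import numpy as np

# Multilevel threshold activation: quantize an input in [u_min, u_max]
# into one of `levels` integer states (0 .. levels-1).
def multilevel_threshold(u, levels=16, u_min=0.0, u_max=1.0):
    u = np.clip(u, u_min, u_max)
    step = (u_max - u_min) / levels
    state = np.floor((u - u_min) / step).astype(int)
    return np.minimum(state, levels - 1)    # u == u_max maps to the top level

states = multilevel_threshold(np.array([0.0, 0.49, 0.51, 1.0]))
# lowest level, the two middle levels, and the top level
```

A single such neuron thus carries log2(16) = 4 bits, which is why fewer neurons and interconnections suffice compared with two-state networks.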

4.
A widely used complex-valued activation function for complex-valued multistate Hopfield networks is revealed to be essentially based on a multilevel step function. By replacing the multilevel step function with other multilevel characteristics, we present two alternative complex-valued activation functions: one based on a multilevel sigmoid function, the other on a characteristic of a multistate bifurcating neuron. Numerical experiments show that both modifications to the complex-valued activation function bring about improvements in network performance for a multistate associative memory. The advantage of the proposed networks over the complex-valued Hopfield networks with the multilevel step function becomes more pronounced as each complex-valued neuron represents a larger number of multivalued states. Further, the performance of the proposed networks in reconstructing noisy 256-gray-level images is demonstrated in comparison with other recent associative memories to clarify their advantages and disadvantages.
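The multilevel step function mentioned above can be sketched as a phase quantizer (this formulation is our paraphrase of the standard complex-signum activation, not the paper's exact definition): the phase of the net input is partitioned into K sectors and the neuron outputs the midpoint state on the unit circle.

```python
import numpy as np

# Multistate step activation for a complex-valued Hopfield neuron:
# quantize the phase of z into K sectors and return the midpoint state.
def multistate_step(z, K=8):
    phase = np.angle(z) % (2 * np.pi)                 # phase in [0, 2*pi)
    k = np.floor(phase * K / (2 * np.pi)).astype(int)
    return np.exp(1j * 2 * np.pi * (k + 0.5) / K)     # sector-midpoint state

out = multistate_step(np.exp(1j * 0.1), K=8)          # phase 0.1 rad -> sector 0
```

The sigmoid variant the paper proposes would replace the hard floor with a smooth interpolation between neighboring states, which is what softens the dynamics near sector boundaries.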

5.
State estimation for delayed neural networks   Cited by: 4 (self-citations: 0; by others: 4)
In this letter, the state estimation problem is studied for neural networks with time-varying delays. The interconnection matrix and the activation functions are assumed to be norm-bounded. The problem addressed is to estimate the neuron states, through available output measurements, such that for all admissible time-delays, the dynamics of the estimation error is globally exponentially stable. An effective linear matrix inequality approach is developed to solve the neuron state estimation problem. In particular, we derive the conditions for the existence of the desired estimators for the delayed neural networks. We also parameterize the explicit expression of the set of desired estimators in terms of linear matrix inequalities (LMIs). Finally, it is shown that the main results can be easily extended to cope with the traditional stability analysis problem for delayed neural networks. Numerical examples are included to illustrate the applicability of the proposed design method.
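The estimation setup can be illustrated with a small simulation. This is a sketch only, with the observer gain K hand-tuned rather than obtained from the paper's LMI conditions: a Luenberger-type estimator driven by the output injection K(y - C x_hat) whose error decays despite the delay.

```python
import numpy as np

# Luenberger-type state estimator for a delayed neural network
# x' = -A x + W g(x(t - tau)); only y = C x is measured.
def estimation_error(steps=4000, dt=0.01, tau=0.5):
    A = np.diag([2.0, 2.0])                    # stable self-feedback
    W = np.array([[0.3, -0.2], [0.1, 0.2]])    # delayed interconnection matrix
    C = np.array([[1.0, 1.0]])                 # output map, y = C x
    K = np.array([[1.5], [1.5]])               # hand-tuned observer gain
    g = np.tanh                                # norm-bounded activation
    d = int(tau / dt)
    x = np.zeros((steps + d, 2))
    xh = np.zeros((steps + d, 2))              # estimator starts at the origin
    x[:d + 1] = 0.8                            # plant starts away from it
    for k in range(d, steps + d - 1):
        y = C @ x[k]
        x[k + 1] = x[k] + dt * (-A @ x[k] + W @ g(x[k - d]))
        xh[k + 1] = xh[k] + dt * (-A @ xh[k] + W @ g(xh[k - d]) + K @ (y - C @ xh[k]))
    return float(np.linalg.norm(x[-1] - xh[-1]))

err = estimation_error()
# err is essentially zero: the estimate has converged to the true state
```

In the paper the role of the LMIs is precisely to certify, for all admissible delays, that such a gain exists and that the error dynamics is globally exponentially stable.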

6.
This paper deals with the problem of state estimation for fuzzy cellular neural networks (FCNNs) with time delay in the leakage term and with discrete and unbounded distributed delays. The leakage delay tends to destabilize the neuron states, so it is challenging to develop a delay-dependent condition for estimating the unstable neuron states through available output measurements such that the error-state system is globally asymptotically stable. By constructing a Lyapunov-Krasovskii functional that contains a triple-integral term, an improved delay-dependent stability criterion is derived in terms of linear matrix inequalities (LMIs). Moreover, by using the free-weighting-matrices method, a simple and efficient estimation criterion is derived in terms of LMIs. The usual restrictions, namely that the time-varying delay be differentiable or that its time derivative be smaller than one, are removed; instead, the time-varying delay is only assumed to be bounded. Finally, numerical examples and simulations are given to demonstrate the effectiveness of the derived results.

7.
Chimera states have been found in neural systems and may play important roles in many neural processes such as neuronal rhythms, and sleep and memory in the brain. Considering electromagnetic induction in neuronal interactions, this paper builds a locally coupled two-layer memristive neuronal network with Hindmarsh-Rose neurons as nodes and studies its chimera-state spatiotemporal dynamics and the underlying mechanism. The results show that varying the intra-layer and inter-layer synaptic coupling strengths produces several types of chimera patterns, including traveling and imperfect traveling chimera states, in which the incoherent region of the imperfect traveling chimera expands into the coherent domain of the network. In particular, at specific coupling strengths there exists a new chimera activity pattern in which some neurons are in a chimera state while the others are in a traveling chimera state. Taking the memristive character of the neuronal synapses into account, it is found that increasing the memristor parameter can drive a network in a chimera state into synchronization, and the larger the coupling strength, the smaller the memristor parameter required to reach synchrony. Further investigation of the synchronization of the two-layer network shows that increasing the inter-layer coupling strength and the memristor parameter helps the network achieve better synchrony. The results indicate that interactions among neurons can excite various chimera patterns in a two-layer neuronal network and that electromagnetic induction can promote the transition from chimera states to synchronization. These findings help in understanding the complex neural firing and information processing mechanisms of the human brain and provide a reference for possible brain-inspired device applications.

8.
钱克昌, 谢永杰, 李小杰. 《控制工程》 (Control Engineering of China), 2012, 19(3): 435-437, 442
To improve the approximation accuracy and dynamic performance of neural networks in inverse-system modeling, a new PID neuron model with a dynamic activation function, the output-feedback PID neuron (OFPID), is proposed based on the working principle of PID neuron networks. The output activation uses a continuous sigmoidal function, giving the neuron an equivalent IIR synapse. Gradient descent is used to adjust the weights of the OFPID neuron network, which is then applied to neural-network inverse control of nonlinear systems, improving the decoupling and control performance of the nonlinear system. Simulation experiments show that the proposed neuron network is a good tool for modeling and controlling nonlinear systems.

9.
In a recent work, a new method was introduced to analyze complete stability of standard symmetric cellular neural networks (CNNs), which are characterized by local interconnections and neuron activations modeled by a three-segment piecewise-linear (PWL) function. By complete stability it is meant that each trajectory of the neural network converges toward an equilibrium point. The goal of this paper is to extend that method in order to address complete stability of the much wider class of symmetric neural networks with an additive interconnecting structure where the neuron activations are general PWL functions with an arbitrary number of straight segments. The main result obtained is that complete stability holds for any choice of the parameters within the class of symmetric additive neural networks with PWL neuron activations, i.e., such a class of neural networks enjoys the important property of absolute stability of global pattern formation. It is worth pointing out that complete stability is proved for generic situations where the neural network has finitely many (isolated) equilibrium points, as well as for degenerate situations where there are infinitely many (nonisolated) equilibrium points. The extension in this paper is of practical importance, since it includes neural networks useful for solving significant signal processing tasks (e.g., neural networks with multilevel neuron activations). It is of theoretical interest too, due to the possibility of approximating any continuous function (e.g., a sigmoidal function) using PWL functions. The results in this paper confirm the advantages of the method of Forti and Tesi, with respect to the LaSalle approach, for addressing complete stability of PWL neural networks.

10.
The silicon neuron is an analog electronic circuit that reproduces the dynamics of a neuron. It is a useful element for artificial neural networks that work in real time. Silicon neuron circuits have to be simple, and at the same time they must be able to realize rich neuronal dynamics in order to reproduce the various activities of neural networks with a compact, low-power, easy-to-configure circuit. We have been developing a silicon neuron circuit based on the Izhikevich model, which has rich dynamics in spite of its simplicity. In our previous work, we proposed a simple silicon neuron circuit with low power consumption by reconstructing the mathematical structure of the Izhikevich model using an analog electronic circuit. In this article, we propose an improved circuit in which all of the MOSFETs are operated in the sub-threshold region.
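For reference, the Izhikevich model that the circuit emulates is remarkably compact. Below is a minimal sketch with the standard "regular spiking" parameters (Euler steps, time in ms); the circuit realizes these two equations with analog components.

```python
# Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I, u' = a (b v - u),
# with reset v <- c, u <- u + d whenever v crosses 30 mV.
def izhikevich(I=10.0, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u, spikes = c, b * c, 0
    for _ in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike: reset v, bump the recovery variable
            v, u, spikes = c, u + d, spikes + 1
    return spikes

# a constant input current drives repetitive spiking; zero input leaves it at rest
```

The model's richness comes from varying (a, b, c, d): the same two-variable structure yields bursting, chattering, and other firing patterns, which is why it is attractive for a simple silicon implementation.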

11.
An engineering annealing method for finding optimal solutions of cellular neural networks is presented. Cellular neural networks are very promising for solving many scientific problems in image processing, pattern recognition, and optimization through stored programs with predetermined templates. Hardware annealing, a parallel version of mean-field annealing in analog networks, is a highly efficient method for finding optimal solutions of cellular neural networks. It does not require any stochastic procedure and hence can be very fast. The generalized energy function of the network is first increased by reducing the voltage gain of each neuron. Then, hardware annealing searches for the globally minimum energy state by continuously increasing the gain of the neurons. The process of global optimization by the proposed annealing can be described by eigenvalue problems in a time-varying dynamic system. In typical nonoptimization problems, it also provides enough stimulation to frozen neurons caused by ill-conditioned initial states.
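The gain-ramp idea can be sketched on a toy two-neuron network (all parameters are our own, not from the paper): starting with low amplifier gain keeps the energy landscape gentle, and continuously increasing the gain lets the outputs settle into a low-energy binary pattern.

```python
import numpy as np

# Hardware-annealing sketch: relax v' = tanh(gain * T v) - v while the
# voltage gain of the neurons is ramped up continuously.
def anneal(T, v0, steps=3000, dt=0.05):
    v = np.array(v0, dtype=float)
    for k in range(steps):
        gain = 0.1 + 10.0 * k / steps            # slowly increasing voltage gain
        v += dt * (np.tanh(gain * (T @ v)) - v)  # relax toward the sigmoid output
    return v

T = np.array([[0.0, 1.0], [1.0, 0.0]])           # reward agreement: E = -v1*v2
v = anneal(T, [0.01, -0.005])
# both outputs end on the same side, the global minimum of the energy
```

Note there is no randomness anywhere: the deterministic gain schedule plays the role that temperature plays in stochastic annealing, which is why the method can be fast in analog hardware.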

12.
Small-world neural networks are constructed from a map-based neuron model and the Hindmarsh-Rose neuron model, and an iterative learning control algorithm with a forgetting factor is applied to achieve synchronization control of the networks. Simulation results show that iterative learning control works for both discrete and continuous neural network models and can switch the networks between synchronized and desynchronized states. Its advantage is that the strength of the control signal gradually weakens as the number of iterations increases, so the intrinsic firing characteristics of the neurons remain unchanged. These results provide a new approach for applying nonlinear control theory to the control of neurological diseases such as Parkinson's disease.
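The forgetting-factor update at the heart of iterative learning control can be sketched on a toy scalar plant (our own choice, not the paper's neuron network): over repeated trials the control converges so the output tracks the reference, while the forgetting factor keeps the learned control bounded at the price of a small residual error.

```python
import numpy as np

# ILC with forgetting factor lam: u_{k+1} = (1 - lam) u_k + gain * e_k,
# applied trial after trial to the static plant y = 0.8 u.
def ilc_residual(ref, trials=200, lam=0.02, gain=0.5):
    u = np.zeros_like(ref)
    for _ in range(trials):
        e = ref - 0.8 * u                 # tracking error of this trial
        u = (1 - lam) * u + gain * e      # forgetting-factor ILC update
    return float(np.max(np.abs(ref - 0.8 * u)))

ref = np.sin(np.linspace(0, 2 * np.pi, 50))
final_err = ilc_residual(ref)
# fixed point: the residual is lam / (lam + 0.8 * gain) of the reference, about 4.8%
```

The forgetting term (1 - lam) is what makes the correction decay between iterations, mirroring the paper's observation that the control signal weakens as iterations accumulate.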

13.
The master stability function (MSF) is a tool that uses Lyapunov exponent theory to determine the stability of the synchronized state of a complex network of identical nodes; a negative MSF value indicates that the network can synchronize. We construct a two-variable Hindmarsh-Rose (HR) model to describe the synchronization behavior of neurons under an electric field, treating neuron size and the applied electric field as the factors regulating the field effect, and use a simplified MSF method to study how the MSF depends on the charge size and the applied field. The results show that the effects of the electric field on the synchronization of the neural network are rich: applying a strong constant field promotes synchronization, whereas applying an alternating field suppresses it. In addition, the neuron radius also influences the outcome of the field effect; with a larger neuron radius, the neural network synchronizes more easily.

14.
TMLNNs: Ternary/Multivalued Logic Neuron Networks   Cited by: 5 (self-citations: 0; by others: 5)
This paper proposes neuron models with ternary/multivalued logic expressive power, namely the ternary/multivalued "logical AND" neuron and the ternary/multivalued "logical OR" neuron. Multilayer neural networks built by connecting these logic neurons can implement ternary/multivalued logic inference systems. A learning algorithm for TMLNNs is also given. Ternary/multivalued logic rules are easy to extract from trained TMLNNs, so TMLNNs can be used for the automatic acquisition of ternary/multivalued logic rule knowledge. The TMLNN model provides a theoretical basis for representing logic knowledge in neural networks.
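Such logic neurons can be sketched on the truth-value set {-1, 0, 1} (false / unknown / true), realized here as min and max, which is a common choice in multivalued logic; the paper's exact neuron definitions may differ, so this only illustrates the logic layer.

```python
# Ternary logic neurons over {-1, 0, 1}: AND as minimum, OR as maximum.
def t_and(*xs):
    """Ternary 'logical AND' neuron: the minimum of its inputs."""
    return min(xs)

def t_or(*xs):
    """Ternary 'logical OR' neuron: the maximum of its inputs."""
    return max(xs)

# a two-layer inference: x OR (y AND z) with x=false, y=true, z=unknown
print(t_or(-1, t_and(1, 0)))   # 0 — true AND unknown is unknown
```

Because each neuron is a simple min or max, a trained network of them reads directly as a ternary rule, which is what makes rule extraction from TMLNNs straightforward.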

15.
This paper analyzes various earlier approaches to selecting the number of hidden neurons in artificial neural networks and proposes a novel criterion for selecting the number of hidden neurons in improved back-propagation networks for wind speed forecasting. Random selection of the number of hidden neurons causes either overfitting or underfitting; this paper presents a solution to both problems. To select the number of hidden neurons, 151 different criteria are tested by means of statistical errors. The simulation is performed on collected real-time wind data, and the results show that the proposed approach reduces the error to a minimal value and enhances forecasting accuracy. The sound construction of improved back-propagation networks employing the fixation criterion is substantiated by a convergence theorem. Comparative analyses show that the proposed selection of the number of hidden neurons in improved back-propagation networks is highly effective.
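The underfitting/overfitting trade-off behind hidden-neuron selection can be sketched as follows. This illustrates only the selection idea, not the paper's 151 criteria or its back-propagation networks: a random-hidden-layer network (ELM-style least squares) is fitted for several candidate hidden-neuron counts, and the count with the lowest validation error is kept.

```python
import numpy as np

# Select the hidden neuron count by validation error on a noisy sine task.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * x[:, 0]) + 0.05 * rng.standard_normal(200)
tr, va = np.arange(150), np.arange(150, 200)          # train / validation split

def val_error(n_hidden):
    W = 4.0 * rng.standard_normal((1, n_hidden))      # random input weights
    b = 4.0 * rng.standard_normal(n_hidden)
    H = np.tanh(x @ W + b)                            # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H[tr], y[tr], rcond=None)
    return float(np.mean((H[va] @ beta - y[va]) ** 2))

errors = {n: val_error(n) for n in (1, 2, 5, 10, 20)}
best = min(errors, key=errors.get)
# a single hidden neuron underfits the sine; the selected count does better
```

A principled criterion, as in the paper, replaces this brute-force sweep with a formula fixing the count from the data, avoiding both extremes without trying every candidate.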

16.
It is common practice to adjust the number of hidden neurons during training, and the removal of neurons plays an indispensable role in this architecture manipulation. In this paper, a succinct and unified mathematical form for removing neurons, based on orthogonal projection and crosswise propagation in a feedforward layer, is generalized and further developed for several neural networks with different architectures. For a trained neural network, the method proceeds in three stages. In the first stage, the output vectors of the feedforward observation layer are classified into clusters. In the second stage, orthogonal projection is performed to locate a neuron whose output vector can be approximated by the other output vectors in the same cluster with the least information loss. In the third stage, the located neuron is removed and crosswise propagation is implemented in each cluster. Upon completion of the three stages, the pruned neural network is retrained. If the number of clusters is one, the method degenerates into its special case of removing a single neuron. Applications to different neural network architectures, with an extension to the support vector machine, are exemplified. The methodology supports, in theory, large-scale real-world applications of neural networks. In addition, with minor modifications, the unified method is instructive for pruning other networks as long as they have a network structure similar to the ones in this paper. It is concluded that the unified pruning method equips us with an effective and powerful tool for simplifying the architecture of neural networks.
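The projection-and-propagation stages can be sketched in the single-cluster special case (our own toy numbers, not the paper's experiments): locate the hidden neuron whose output vector is best reproduced by the others, remove it, and fold the reconstruction into the outgoing weights so the network output barely changes.

```python
import numpy as np

# Projection-based pruning: find a redundant hidden neuron and absorb it.
rng = np.random.default_rng(1)
H = rng.standard_normal((100, 4))                      # hidden outputs, 100 samples
H = np.column_stack([H, H @ [0.5, -0.2, 0.1, 0.3]])    # 5th neuron is redundant
w_out = rng.standard_normal(5)
y = H @ w_out                                          # output before pruning

def projection(j):
    """Least-squares reconstruction of neuron j from the other neurons."""
    others = np.delete(H, j, axis=1)
    coef, *_ = np.linalg.lstsq(others, H[:, j], rcond=None)
    return coef, np.linalg.norm(others @ coef - H[:, j])

j = min(range(5), key=lambda j: projection(j)[1])      # smallest residual
coef, _ = projection(j)
w_new = np.delete(w_out, j) + coef * w_out[j]          # crosswise propagation
y_pruned = np.delete(H, j, axis=1) @ w_new
# y_pruned matches y to numerical precision: the neuron was absorbed losslessly
```

When the residual is nonzero, the same fold-in step gives the least-information-loss approximation, and the subsequent retraining the paper describes recovers the remainder.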

17.
In this paper, some improved results on the state estimation problem for recurrent neural networks with both time-varying and distributed time-varying delays are presented. Through available output measurements, an improved delay-dependent criterion is established to estimate the neuron states such that the dynamics of the estimation error is globally exponentially stable; the usual requirement that the derivative of the time delay be less than 1 is removed, which generalizes existing methods. Finally, two illustrative examples are given to demonstrate the effectiveness of the proposed results.

18.
This article deals with the problem of delay-dependent state estimation for discrete-time neural networks with time-varying delay. Our objective is to design a state estimator for the neuron states through available output measurements such that the error-state system is guaranteed to be globally exponentially stable. Based on the linear matrix inequality approach, a delay-dependent condition is developed for the existence of the desired state estimator via a novel Lyapunov functional. The obtained condition is less conservative than existing ones, as demonstrated by a numerical example.

19.
Searching of state transitions is an important subject in problem solving in artificial intelligence, computer science, engineering, and operations research. In artificial intelligence, breadth-first search is optimal under uniform cost, but it takes considerable time to obtain a solution. Neural networks process state transitions in parallel and have learning ability. The authors have developed a search procedure for state transitions that resembles breadth-first search, using neural networks. First, the input pattern states are self-organized in the neural network, which consists of a Kohonen layer followed by a state-planning layer. The state-planning layer makes lateral connections between the cells of transitions. Then, the initial and the target states are given as a problem. The network shows an optimal transition pathway of states in the neuron firings. Next, the state-transition procedure is developed for the formation of a concept for action planning. Here, as the action planning, an integration between the symbols and the action pattern is carried out in the extended neural network.
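For comparison, the classical procedure the network emulates is plain breadth-first search over a state-transition graph (toy transition table of our own making):

```python
from collections import deque

# Breadth-first search over a state-transition table; returns the shortest
# transition pathway from start to goal, or None if unreachable.
def bfs(transitions, start, goal):
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:                       # reconstruct the optimal pathway
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in transitions.get(s, []):
            if t not in parent:             # visit each state once
                parent[t] = s
                queue.append(t)
    return None

transitions = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(transitions, "A", "E"))   # ['A', 'B', 'D', 'E']
```

The neural version trades this sequential frontier expansion for parallel activation spreading through the lateral connections of the state-planning layer.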

20.

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号