Similar Documents
20 similar documents found.
1.
A novel approach to solving output contention in packet-switching networks with a synchronous switching mode is presented. A contention controller has been designed based on the K-winner-take-all neural-network technique with a speedup factor to achieve real-time computation for a nonblocking, high-speed, high-capacity packet switch without packet loss. Simulation results evaluating the performance of a K-winner network controller with 10 neurons are presented to study the constraints of the "frozen state" as well as those of identical initial states. An optoelectronic contention controller constructed from a K-winner neural network is proposed.
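The K-winner-take-all selection at the heart of such a contention controller can be illustrated outside the neural setting: given N output requests with priority scores, pick the K that win output ports. A minimal threshold-search sketch, not the paper's neural-network dynamics; the scores and K below are made up for illustration:

```python
def k_winner_take_all(scores, k):
    """Return a boolean mask marking the k largest scores.

    Mimics the selection computed by a K-WTA network: a global
    threshold is adjusted until exactly k units remain active.
    Assumes the scores are distinct.
    """
    lo, hi = min(scores) - 1.0, max(scores) + 1.0
    for _ in range(200):  # bisection on the activation threshold
        t = (lo + hi) / 2.0
        active = sum(s > t for s in scores)
        if active == k:
            return [s > t for s in scores]
        if active > k:   # threshold too low -> raise it
            lo = t
        else:            # threshold too high -> lower it
            hi = t
    raise ValueError("no threshold separates exactly k winners")

# e.g. 8 packets contending for k = 3 output ports
mask = k_winner_take_all([3, 1, 4, 1.5, 5, 9, 2, 6], 3)
```

A hardware K-WTA network computes the same selection in parallel rather than by explicit search.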

2.
Inhibitory grids and the assignment problem   (cited by 2; 0 self-citations, 2 by others)
A family of symmetric neural networks that solve a simple version of the assignment problem (AP) is analyzed. The authors analyze the suboptimal performance of these networks and compare the results to optimal answers obtained by linear programming techniques. They then use the interactive activation model, a model closely related to the Hopfield-Tank model, to define the network dynamics. A systematic analysis of hypercube corner stability and of the eigenspaces of the connection-strength matrix leads to network parameters that give feasible solutions 100% of the time and to a projection algorithm that significantly improves performance. Two formulations of the problem are discussed: (i) nearest corner: encode the assignment numbers as initial activations, and (ii) lowest-energy corner: encode the assignment numbers as external inputs.

3.
MAXNET is a common competitive architecture to select the maximum or minimum from a set of data. However, there are two major problems with the MAXNET. The first problem is its slow convergence rate if all the data have nearly the same value. The second one is that it fails when either nonunique extreme values exist or each initial value is smaller than or equal to the sum of initial inhibitions from other nodes. In this paper, a novel neural network model called SELECTRON is proposed to select the maxima or minima from a set of data. This model is able to select all the maxima or minima via competition among the processing units even when MAXNET fails. We then prove that SELECTRON converges to the correct state in every situation. In addition, the convergence rates of SELECTRON for three special data distributions are derived. Finally, simulation results indicate that SELECTRON converges much faster than MAXNET.
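For reference, the MAXNET competition that SELECTRON is compared against can be sketched in a few lines: every node inhibits all others by a small factor eps < 1/N, and activations are clipped at zero until a single survivor remains. The initial values below are illustrative:

```python
def maxnet(values, eps=0.2, max_iters=1000):
    """Classic MAXNET mutual-inhibition iteration.

    Each node subtracts eps times the sum of the other nodes'
    activations; negative activations are clipped to zero.
    Requires eps < 1/len(values) and a unique maximum.
    """
    x = list(values)
    for _ in range(max_iters):
        if sum(v > 0 for v in x) <= 1:
            break
        total = sum(x)
        x = [max(0.0, v - eps * (total - v)) for v in x]
    return x

x = maxnet([0.9, 0.5, 0.3])
winner = max(range(len(x)), key=lambda i: x[i])
```

With near-equal inputs the per-step differences shrink, which is exactly the slow-convergence failure mode the abstract describes; with tied maxima all survivors are annihilated together.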

4.
Delay-independent stability in bidirectional associative memory networks   (cited by 8; 0 self-citations, 8 by others)
It is shown that if the neuronal gains are small compared with the synaptic connection weights, then a bidirectional associative memory network with axonal signal transmission delays converges to the equilibria associated with exogenous inputs to the network. Both discrete and continuously distributed delays are considered; the asymptotic stability is global in the state space of neuronal activations and also is independent of the delays.
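The delayed BAM model referred to above is commonly written as the following pair of coupled systems; this is the standard form, sketched here for orientation, and the paper's exact assumptions on the gains and delays may differ:

```latex
\begin{aligned}
\dot{x}_i(t) &= -a_i\, x_i(t) + \sum_{j=1}^{m} w_{ij}\, f\bigl(y_j(t-\tau)\bigr) + I_i, \qquad i = 1,\dots,n,\\
\dot{y}_j(t) &= -b_j\, y_j(t) + \sum_{i=1}^{n} v_{ji}\, f\bigl(x_i(t-\sigma)\bigr) + J_j, \qquad j = 1,\dots,m.
\end{aligned}
```

Delay-independent stability means the equilibrium determined by the exogenous inputs $I_i, J_j$ attracts trajectories for every choice of the delays $\tau, \sigma \ge 0$.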

5.
Convergence analysis of probabilistic logic neuron networks   (cited by 2; 0 self-citations, 2 by others)
张钹, 张铃. 《计算机学报》 (Chinese Journal of Computers), 1993, 16(1):1-12
Using Markov chain theory, this paper studies the quantitative properties of PLN (probabilistic logic neuron) networks. The main results are, for a given network: the probability that each state converges to a stable state, the mean number of convergence steps and its variance, and upper and lower bounds on the mean number of convergence steps for general PLN networks. A computer simulation is presented and compared with the theoretical conclusions to verify their correctness.

6.
The high-conductance state of cortical networks   (cited by 3; 0 self-citations, 3 by others)
We studied the dynamics of large networks of spiking neurons with conductance-based (nonlinear) synapses and compared them to networks with current-based (linear) synapses. For systems with sparse and inhibition-dominated recurrent connectivity, weak external inputs induced asynchronous irregular firing at low rates. Membrane potentials fluctuated a few millivolts below threshold, and membrane conductances were increased by a factor of 2 to 5 with respect to the resting state. This combination of parameters characterizes the ongoing spiking activity typically recorded in the cortex in vivo. Many aspects of the asynchronous irregular state in conductance-based networks could be sufficiently well characterized with a simple numerical mean-field approach. In particular, it correctly predicted an intriguing property of conductance-based networks that does not appear to be shared by current-based models: they exhibit states of low-rate asynchronous irregular activity that persist for some period of time even in the absence of external inputs and without cortical pacemakers. Simulations of larger networks (up to 350,000 neurons) demonstrated that the survival time of self-sustained activity increases exponentially with network size.
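The factor-of-2-to-5 conductance increase quoted above directly shortens the effective membrane time constant, which is the defining feature of the high-conductance state. A back-of-the-envelope sketch with typical, assumed parameter values (not taken from the paper):

```python
C = 200e-12    # membrane capacitance: 200 pF (assumed typical value)
g_L = 10e-9    # leak conductance: 10 nS -> resting tau = C/g_L = 20 ms

tau_rest = C / g_L

# In the high-conductance state the synaptic conductances add to the
# leak; a total conductance of 3x rest sits in the reported 2-5x range.
g_syn = 2.0 * g_L
tau_eff = C / (g_L + g_syn)

# The effective time constant shrinks by the same factor, so the
# membrane tracks its synaptic input much faster than at rest.
speedup = tau_rest / tau_eff
```

This is why conductance-based (multiplicative) synapses behave qualitatively differently from current-based (additive) ones: injected currents leave the time constant untouched.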

7.

Quantum state engineering is a central task in Lyapunov-based quantum control. Given different initial states, better performance may be achieved if the control parameters, such as the Lyapunov function, are individually optimized for each initial state, however, at the expense of computing resources. To tackle this issue, we propose an initial-state-adaptive Lyapunov control strategy with machine learning. Specifically, artificial neural networks are used to learn the relationship between the optimal control parameters and initial states through supervised learning with samples. Two designs are presented where the feedforward neural network and the general regression neural network are used to select control schemes and design Lyapunov functions, respectively. We demonstrate the performance of the designs with a three-level quantum system for an eigenstate control problem. Since the sample generation and the training of neural networks are carried out in advance, the initial-state-adaptive Lyapunov control can be implemented for new initial states without much increase of computational resources.


8.
In a recent work, a new method has been introduced to analyze complete stability of the standard symmetric cellular neural networks (CNNs), which are characterized by local interconnections and neuron activations modeled by a three-segment piecewise-linear (PWL) function. By complete stability it is meant that each trajectory of the neural network converges toward an equilibrium point. The goal of this paper is to extend that method in order to address complete stability of the much wider class of symmetric neural networks with an additive interconnecting structure where the neuron activations are general PWL functions with an arbitrary number of straight segments. The main result obtained is that complete stability holds for any choice of the parameters within the class of symmetric additive neural networks with PWL neuron activations, i.e., such a class of neural networks enjoys the important property of absolute stability of global pattern formation. It is worth pointing out that complete stability is proved for generic situations where the neural network has finitely many (isolated) equilibrium points, as well as for degenerate situations where there are infinitely many (nonisolated) equilibrium points. The extension in this paper is of practical importance since it includes neural networks useful for significant signal processing tasks (e.g., neural networks with multilevel neuron activations). It is of theoretical interest too, due to the possibility of approximating any continuous function (e.g., a sigmoidal function) using PWL functions. The results in this paper confirm the advantages of the method of Forti and Tesi, with respect to the LaSalle approach, for addressing complete stability of PWL neural networks.

9.
K. Begain. Calcolo, 1995, 32(3-4):137-152
The paper addresses the analysis of a single multiplexing node in ATM networks. It presents analytical models for evaluating the performance parameters of a multiplexer that has N independent and identical ON-OFF type input sources, M independent Constant Bit Rate (CBR) inputs, and an output channel with a finite buffer. The channel speed is assumed to be an integer multiple of the source speed in the ON state, which equals the speed of the CBR sources. A two-dimensional homogeneous discrete-time Markov chain is introduced, whose two dimensions describe the number of ON sources and the number of cells in the finite buffer at a given time. Two time scales are defined in order to ensure accurate results when calculating the performance parameters, e.g., cell loss and cell delay. Three alternative models of the cell-arrival process are discussed and the performance parameters are derived.
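The machinery behind such models is the stationary distribution of a discrete-time Markov chain. As a minimal illustration, using a single ON-OFF source rather than the paper's two-dimensional chain, and with made-up transition probabilities, the stationary distribution can be obtained by power iteration:

```python
# Two-state ON-OFF source: state 0 = OFF, state 1 = ON.
# P[i][j] = probability of moving from state i to state j per slot.
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = [0.5, 0.5]                 # any initial distribution works
for _ in range(500):            # power iteration: pi <- pi P
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

# Balance gives pi = (5/6, 1/6): this source is ON 1/6 of the time.
p_on = pi[1]
```

The multiplexer model couples N such sources with a buffer-occupancy dimension, so its state space is the product of the two, but the stationary-distribution computation is the same in principle.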

10.
Parameter dynamics of RBF networks under the gradient algorithm   (cited by 2; 0 self-citations, 2 by others)
Analyzing how the parameters of a neural network evolve during learning helps in understanding the network's dynamical behavior and in improving its structure and performance. This paper studies the dynamics of the hidden-node parameters of an RBF network when the sum-of-squared-errors loss is minimized by a gradient algorithm, i.e., the possible values the hidden-node parameters can take after the algorithm converges. The main conclusions are: if the loss is nonzero after convergence, each hidden node lies at a weighted cluster centre of the sample inputs; if the loss is zero, redundant hidden nodes shrink, decay, drift outward, or coincide. Further experiments show that in oversized RBF networks, the shrinking, outward drift, decay, and coincidence of redundant hidden nodes occur frequently.

11.
This paper presents new theoretical results on global exponential stability of recurrent neural networks with bounded activation functions and time-varying delays. The stability conditions depend on external inputs, connection weights, and time delays of recurrent neural networks. Using these results, the global exponential stability of recurrent neural networks can be derived, and the estimated location of the equilibrium point can be obtained. As typical representatives, the Hopfield neural network (HNN) and the cellular neural network (CNN) are examined in detail.

12.
Recent improvements in the performance of deep neural networks are partly due to the introduction of rectified linear units (ReLUs). A ReLU outputs zero for negative inputs, inducing the death of some neurons and a bias shift of the outputs, which causes oscillations and impedes learning. Following the principle that zero-mean activations improve learning, the softplus linear unit (SLU) is proposed as an adaptive activation function that can speed up learning and improve performance in deep convolutional neural networks. First, to reduce the bias shift, negative inputs are processed using the softplus function, and a general form of the SLU function is proposed. Second, the parameters of the positive component are fixed to control vanishing gradients. Third, update rules for the parameters of the negative component are established to meet backpropagation requirements. Finally, we designed deep auto-encoder networks and evaluated them on the MNIST dataset for unsupervised learning; for supervised learning, we designed deep convolutional neural networks and evaluated them on the CIFAR-10 dataset. The experiments show that SLU-based networks converge faster and classify images better than networks with rectified activation functions.
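Based on the description above (softplus on the negative part, fixed linear positive part), one plausible form of such a unit is sketched below; the parameter alpha and the log(2) offset are assumptions chosen to make the function continuous at zero, not the paper's exact definition:

```python
import math

def slu(x, alpha=1.0):
    """Sketch of a softplus linear unit (assumed form, not the paper's).

    Positive inputs pass through linearly (fixed slope, so the gradient
    does not vanish); negative inputs go through a shifted softplus so
    that slu(0) == 0 and the unit saturates at -alpha * log(2) instead
    of dying at zero like ReLU.  The nonzero negative output pulls the
    mean activation toward zero, reducing the bias shift.
    """
    if x >= 0:
        return x
    return alpha * (math.log1p(math.exp(x)) - math.log(2.0))
```

In the paper's adaptive scheme the negative-component parameters (here, alpha) would be learned by backpropagation rather than held fixed.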

13.
In this paper, the conventional bidirectional associative memory (BAM) neural network with signal transmission delay is intervalized in order to study the bounded effect of deviations in network parameters and external perturbations. The resultant model is referred to as a novel interval dynamic BAM (IDBAM) model. By combining a number of different Lyapunov functionals with the Razumikhin technique, some sufficient conditions for the existence of a unique equilibrium and for robust stability are derived. These results are fairly general and can be verified easily. Going further, we extend our investigation to the time-varying delay case and derive some robust stability criteria for BAM with perturbations of time-varying delays. Moreover, our approach to the analysis allows us to consider several different types of activation functions, including piecewise-linear sigmoids with bounded activations as well as the usual C1-smooth sigmoids. We believe that the results obtained are of significance for the design and application of BAM neural networks.

14.
A new neural network model, Routron, which can handle dependent component failures of communication networks, is proposed. We prove that the proposed Routron has a stable solution. Moreover, useful upper and lower bounds for the design parameters are derived to help select them in implementations. Simulation results are included to illustrate the effectiveness of the algorithm.

15.
Haiping, Nong. Neurocomputing, 2008, 71(7-9):1388-1400
This paper presents a new encoding scheme for training radial basis function (RBF) networks by genetic algorithms (GAs). In general, it is very difficult to select the proper input variables and the exact number of nodes before training an RBF network. In the proposed encoding scheme, both the architecture (numbers and selections of nodes and inputs) and the parameters (centres and widths) of the RBF networks are represented in one chromosome and evolved simultaneously by GAs so that the selection of nodes and inputs can be achieved automatically. The performance and effectiveness of the presented approach are evaluated using two benchmark time series prediction examples and one practical application example, and are then compared with other existing methods. It is shown by the simulation tests that the developed evolving RBF networks are able to predict the time series accurately with the automatically selected nodes and inputs.
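The key idea is that one chromosome carries both the architecture (which candidate nodes and inputs are switched on) and the continuous parameters. A minimal sketch of such an encoding and its decoding step; the field names and the fixed candidate pool are illustrative assumptions, and the paper's actual encoding may differ:

```python
import random

# One chromosome over a fixed pool of 4 candidate hidden nodes and
# 3 candidate input variables: binary masks select the architecture,
# real-valued genes hold each candidate node's centre and width.
chromosome = {
    "node_mask":  [1, 0, 1, 1],        # nodes 0, 2, 3 are active
    "input_mask": [1, 1, 0],           # inputs 0 and 1 are used
    "centres":    [0.2, 1.5, -0.7, 3.1],
    "widths":     [0.5, 0.9, 1.2, 0.4],
}

def decode(chrom):
    """Extract the active RBF network described by a chromosome."""
    nodes = [(c, w) for bit, c, w in zip(chrom["node_mask"],
                                         chrom["centres"],
                                         chrom["widths"]) if bit]
    inputs = [i for i, bit in enumerate(chrom["input_mask"]) if bit]
    return nodes, inputs

def mutate_mask(mask, rate=0.1, rng=random.Random(0)):
    """Bit-flip mutation on an architecture mask."""
    return [b ^ (rng.random() < rate) for b in mask]

nodes, inputs = decode(chromosome)
```

Because the masks and the parameters evolve in the same chromosome, crossover and mutation explore structure and weights simultaneously, which is what lets the GA select nodes and inputs automatically.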

16.
Deep neural networks such as GoogLeNet, ResNet, and BERT have achieved impressive performance in tasks such as image and text classification. To understand how such performance is achieved, we probe a trained deep neural network by studying neuron activations, i.e., combinations of neuron firings, at various layers of the network in response to a particular input. With a large number of inputs, we aim to obtain a global view of what neurons detect by studying their activations. In particular, we develop visualizations that show the shape of the activation space, the organizational principle behind neuron activations, and the relationships of these activations within a layer. Applying tools from topological data analysis, we present TopoAct, a visual exploration system to study topological summaries of activation vectors. We present exploration scenarios using TopoAct that provide valuable insights into learned representations of neural networks. We expect TopoAct to give a topological perspective that enriches the current toolbox of neural network analysis, and to provide a basis for network architecture diagnosis and data anomaly detection.

17.
When centralized information about the total state is not available in a large-scale interconnected system with a known and arbitrary interconnection pattern, a simple condition for constructing a stable, local adaptive identification model is derived so that decentralized adaptive identification can be developed. In the proposed identification scheme, the state error between the system to be identified and its corresponding identification model converges to zero for any bounded inputs. Furthermore, the identification model with prescribed persistently exciting inputs ensures, for any interconnection pattern, that the parameter error converges to zero. Finally, an example illustrates the validity of the proposed scheme.

18.
We consider Jackson networks with unreliable nodes, which randomly break down and are under repair for a random time. The network is described by a Markov process which encompasses the availability status and the queue-lengths vector. Ergodicity conditions for many related networks are available in the literature and can often be expressed as rate conditions. For (reliable) nodes in Jackson networks the overall arrival rate has to be strictly less than the service rate. If for some nodes this condition is violated, the network process is not ergodic. Nevertheless, it is known that in such a situation, especially in large networks, the parts of the network where the rate condition is fulfilled stabilize in the long run. For standard Jackson networks without breakdown of nodes, the asymptotics of such stable subnetworks were derived by Goodman and Massey [J.B. Goodman, W.A. Massey, The non-ergodic Jackson network, Journal of Applied Probability 21 (1984) 860–869]. In this paper, we obtain the asymptotics of Jackson networks with unreliable nodes and show that the state distribution of the stable subnetworks converges to a Jackson-type product-form distribution. In such networks with breakdown and repair of nodes, the ergodicity condition is in general more involved. Because no stationary distribution for the network exists, steady-state availability and performance evaluation is not possible. We show that instead the quality of service in the long run for the stabilizing subnetwork can be assessed using limiting distributions. Additionally, we prove that time averages of cumulative rewards can be approximated by state-space averages.
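The rate condition mentioned above comes from the traffic equations: node i is stable when its effective arrival rate, the fixed point of lambda = gamma + lambda R, stays below its service rate mu_i. A minimal sketch with toy rates and reliable nodes only (breakdowns, as the abstract notes, make the condition more involved):

```python
def effective_rates(gamma, R, iters=100):
    """Solve the Jackson traffic equations lambda = gamma + lambda R
    by fixed-point iteration (converges when R is substochastic,
    i.e. every customer eventually leaves the network)."""
    n = len(gamma)
    lam = [0.0] * n
    for _ in range(iters):
        lam = [gamma[j] + sum(lam[i] * R[i][j] for i in range(n))
               for j in range(n)]
    return lam

gamma = [1.0, 0.0]          # external arrivals only at node 0
R = [[0.0, 0.5],            # node 0 routes half its output to node 1
     [0.0, 0.0]]            # node 1's output leaves the network
mu = [2.0, 0.4]             # service rates

lam = effective_rates(gamma, R)
stable = [i for i in range(2) if lam[i] / mu[i] < 1.0]
```

Here node 1 is overloaded (utilization 0.5/0.4 > 1), so the network as a whole is not ergodic, yet node 0 satisfies the rate condition; it is this stable subnetwork whose long-run behavior the paper characterizes.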

19.
Salinas E. Neural Computation, 2003, 15(7):1439-1475
A bright red light may trigger a sudden motor action in a driver crossing an intersection: stepping at once on the brakes. The same red light, however, may be entirely inconsequential if it appears, say, inside a movie theater. Clearly, context determines whether a particular stimulus will trigger a motor response, but what is the neural correlate of this? How does the nervous system enable or disable whole networks so that they are responsive or not to a given sensory signal? Using theoretical models and computer simulations, I show that networks of neurons have a built-in capacity to switch between two types of dynamic state: one in which activity is low and approximately equal for all units, and another in which different activity distributions are possible and may even change dynamically. This property allows whole circuits to be turned on or off by weak, unstructured inputs. These results are illustrated using networks of integrate-and-fire neurons with diverse architectures. In agreement with the analytic calculations, a uniform background input may determine whether a random network has one or two stable firing levels; it may give rise to randomly alternating firing episodes in a circuit with reciprocal inhibition; and it may regulate the capacity of a center-surround circuit to produce either self-sustained activity or traveling waves. Thus, the functional properties of a network may be drastically modified by a simple, weak signal. This mechanism works as long as the network is able to exhibit stable firing states, or attractors.

20.
We analyze a neural network implementation for puck state prediction in robotic air hockey. Unlike previous prediction schemes which used simple dynamic models and continuously updated an intercept state estimate, the neural network predictor uses a complex function, computed with data acquired from various puck trajectories, and makes a single, timely estimate of the final intercept state. Theoretically, the network can account for the complete dynamics of the table if all important state parameters are included as inputs, an accurate data training set of trajectories is used, and the network has an adequate number of internal nodes. To develop our neural networks, we acquired data from 1500 no‐bounce and 1500 one‐bounce puck trajectories, noting only translational state information. Analysis showed that performance of neural networks designed to predict the results of no‐bounce trajectories was better than the performance of neural networks designed for one‐bounce trajectories. Since our neural network input parameters did not include rotational puck estimates and recent work shows the importance of spin in impact analysis, we infer that adding a spin input to the neural network will increase the effectiveness of state estimates for the one‐bounce case. © 2001 John Wiley & Sons, Inc.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号