Similar documents
20 similar documents found (search time: 15 ms)
1.
One of the most basic pattern recognition problems is whether a certain local feature occurs in some linear array to the left of some other local feature. We construct in this article circuits that solve this problem with an asymptotically optimal number of threshold gates. Furthermore, it is shown that far fewer threshold gates are needed if one employs in addition a small number of winner-take-all gates. In either case the circuits that are constructed have linear or almost linear total wire length, and are therefore not unrealistic from the point of view of physical implementations.
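The two ingredients here, threshold gates and the "feature A occurs left of feature B" predicate, can be made concrete in a short sketch. This is an illustrative construction (one gate per position, with a large weight on the feature-B input), not the paper's asymptotically optimal circuit; the function names are ours.

```python
def threshold_gate(inputs, weights, theta):
    """McCulloch-Pitts threshold gate: fires iff the weighted sum reaches theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def a_left_of_b(xs, ys):
    """Does feature A occur strictly left of feature B?
    xs[i] = 1 means A occurs at position i; ys[j] = 1 means B occurs at j."""
    n = len(xs)
    # Gate j fires iff B is present at j AND at least one A occurs before j:
    # the weight n on ys[j] makes theta = n + 1 reachable only when ys[j] = 1.
    gates = [threshold_gate(xs[:j] + [ys[j]], [1] * j + [n], n + 1)
             for j in range(n)]
    # Output gate: an OR of the position gates, itself a threshold gate.
    return threshold_gate(gates, [1] * n, 1)
```

This naive version uses a linear number of gates; the point of the article is that the gate count (and total wire length) can be driven much lower, especially with winner-take-all gates.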

2.
Circuits composed of threshold gates (McCulloch-Pitts neurons, or perceptrons) are simplified models of neural circuits with the advantage that they are theoretically more tractable than their biological counterparts. However, when such threshold circuits are designed to perform a specific computational task, they usually differ in one important respect from computations in the brain: they require very high activity. On average every second threshold gate fires (sets a 1 as output) during a computation. By contrast, the activity of neurons in the brain is much sparser, with only about 1% of neurons firing. This mismatch between threshold and neuronal circuits is due to the particular complexity measures (circuit size and circuit depth) that have been minimized in previous threshold circuit constructions. In this letter, we investigate a new complexity measure for threshold circuits, energy complexity, whose minimization yields computations with sparse activity. We prove that all computations by threshold circuits of polynomial size with entropy O(log n) can be restructured so that their energy complexity is reduced to a level near the entropy of circuit states. This entropy of circuit states is a novel circuit complexity measure, which is of interest not only in the context of threshold circuits but for circuit complexity in general. As an example of how this measure can be applied, we show that any polynomial size threshold circuit with entropy O(log n) can be simulated by a polynomial size threshold circuit of depth 3. Our results demonstrate that the structure of circuits that result from a minimization of their energy complexity is quite different from the structure that results from a minimization of previously considered complexity measures, and potentially closer to the structure of neural circuits in the nervous system. In particular, different pathways are activated in these circuits for different classes of inputs. This letter shows that such circuits with sparse activity have surprisingly large computational power.
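Energy complexity counts how many gates fire during a computation. A minimal sketch (our own, assuming a synchronous layered circuit, not the letter's construction) evaluates such a circuit and tallies the firing gates, here for a standard two-layer threshold realization of XOR:

```python
def eval_circuit(layers, x):
    """Evaluate a layered threshold circuit; return (outputs, energy), where
    energy = number of gates that output 1 on input x."""
    acts, energy = list(x), 0
    for layer in layers:
        nxt = []
        for weights, theta in layer:
            y = int(sum(w * a for w, a in zip(weights, acts)) >= theta)
            energy += y
            nxt.append(y)
        acts = nxt
    return acts, energy

# XOR via (x OR y) AND NOT (x AND y): layer 1 holds an OR gate and an AND
# gate; layer 2 subtracts the AND output from the OR output.
xor_circuit = [[([1, 1], 1), ([1, 1], 2)], [([1, -1], 1)]]
```

Different inputs light up different numbers of gates; minimizing the worst-case count over all inputs is what the letter's energy measure formalizes.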

3.
In most neural network models, synapses are treated as static weights that change only with the slow time scales of learning. It is well known, however, that synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers the release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. We consider a simple model for dynamic stochastic synapses that can easily be integrated into common models for networks of integrate-and-fire neurons (spiking neurons). The parameters of this model have direct interpretations in terms of synaptic physiology. We investigate the consequences of the model for computing with individual spikes and demonstrate through rigorous theoretical results that the computational power of the network is increased through the use of dynamic synapses.
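The core idea, a release probability that depends on the recent spike history, can be sketched as follows. This is a simplified illustration of a dynamic stochastic synapse, not the paper's exact model; the parameters `p0` (baseline release probability), `f` (facilitation increment), and `tau_f` (facilitation time constant, ms) are illustrative.

```python
import math
import random

def simulate_synapse(spike_times, p0=0.4, f=0.2, tau_f=50.0, seed=1):
    """For each presynaptic spike, draw whether a vesicle is released.
    The release probability is transiently facilitated by preceding spikes,
    with the facilitation decaying exponentially with time constant tau_f."""
    rng = random.Random(seed)
    releases, history = [], []
    for t in spike_times:
        facil = sum(f * math.exp(-(t - s) / tau_f) for s in history)
        p = min(1.0, p0 + facil)
        releases.append(rng.random() < p)
        history.append(t)
    return releases
```

A depression term (depletion of the vesicle pool after a release) would be added in the same way; the point is that the synapse's response to a spike train is history dependent and stochastic, not a fixed weight.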

4.
A competitive-layer model for feature binding and sensory segmentation
We present a recurrent neural network for feature binding and sensory segmentation: the competitive-layer model (CLM). The CLM uses topographically structured competitive and cooperative interactions in a layered network to partition a set of input features into salient groups. The dynamics is formulated within a standard additive recurrent network with linear threshold neurons. Contextual relations among features are coded by pairwise compatibilities, which define an energy function to be minimized by the neural dynamics. Due to the use of dynamical winner-take-all circuits, the model gains more flexible response properties than spin models of segmentation by exploiting amplitude information in the grouping process. We prove analytic results on the convergence and stable attractors of the CLM, which generalize earlier results on winner-take-all networks, and incorporate deterministic annealing for robustness against local minima. The piecewise linear dynamics of the CLM allows a linear eigensubspace analysis, which we use to analyze the dynamics of binding in conjunction with annealing. For the example of contour detection, we show how the CLM can integrate figure-ground segmentation and grouping into a unified model.

5.
Synapses are crucial elements for computation and information transfer in both real and artificial neural systems. Recent experimental findings and theoretical models of pulse-based neural networks suggest that synaptic dynamics can play a crucial role for learning neural codes and encoding spatiotemporal spike patterns. Within the context of hardware implementations of pulse-based neural networks, several analog VLSI circuits modeling synaptic functionality have been proposed. We present an overview of previously proposed circuits and describe a novel analog VLSI synaptic circuit suitable for integration in large VLSI spike-based neural systems. The circuit proposed is based on a computational model that fits the real postsynaptic currents with exponentials. We present experimental data showing how the circuit exhibits realistic dynamics and show how it can be connected to additional modules for implementing a wide range of synaptic properties.

6.
Monotone circuits for monotone weighted threshold functions
Weighted threshold functions with positive weights are a natural generalization of unweighted threshold functions. These functions are clearly monotone. However, the naive way of computing them is adding the weights of the satisfied variables and checking if the sum is greater than the threshold; this algorithm is inherently non-monotone since addition is a non-monotone function. In this work we bypass this addition step and construct a polynomial-size, logarithmic-depth, unbounded-fan-in monotone circuit for every weighted threshold function, i.e., we show that weighted threshold functions are in mAC1. (To the best of our knowledge, prior to our work no polynomial-size monotone circuits were known for weighted threshold functions.) Our monotone circuits are applicable for the cryptographic tool of secret sharing schemes. Using general results for compiling monotone circuits (Yao, 1989) and monotone formulae (Benaloh and Leichter, 1990) into secret sharing schemes, we get secret sharing schemes for every weighted threshold access structure. Specifically, we get: (1) information-theoretic secret sharing schemes where the size of each share is quasi-polynomial in the number of users, and (2) computational secret sharing schemes where the size of each share is polynomial in the number of users.
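As a point of reference, the naive (addition-based) evaluation mentioned above is a one-liner. This sketch only defines the function itself; it does not reproduce the paper's monotone-circuit construction, whose point is precisely to avoid the addition step.

```python
def weighted_threshold(weights, t):
    """Return the monotone Boolean function f with f(x) = 1 iff the total
    weight of the satisfied variables reaches threshold t.
    All weights are assumed positive, so f is monotone: flipping any input
    from 0 to 1 can only increase the weighted sum."""
    def f(x):
        return int(sum(w for w, xi in zip(weights, x) if xi) >= t)
    return f
```

For example, with weights (3, 2, 1) and threshold 4, the function is the access structure in which user 1 together with any other user is authorized, but users 2 and 3 alone are not.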

7.
Complex real-time computations on multi-modal time-varying input streams are carried out by generic cortical microcircuits. Obstacles for the development of adequate theoretical models that could explain the seemingly universal power of cortical microcircuits for real-time computing are the complexity and diversity of their computational units (neurons and synapses), as well as the traditional emphasis on offline computing in almost all theoretical approaches towards neural computation. In this article, we initiate a rigorous mathematical analysis of the real-time computing capabilities of a new generation of models for neural computation, liquid state machines, that can be implemented with—in fact benefit from—diverse computational units. Hence, realistic models for cortical microcircuits represent special instances of such liquid state machines, without any need to simplify or homogenize their diverse computational units. We present proofs of two theorems about the potential computational power of such models for real-time computing, both on analog input streams and for spike trains as inputs.

8.
For ordinary circuits with a fixed upper bound on the fanin of their gates, it has been shown that logarithmic redundancy is necessary and sufficient to overcome random hardware faults (noise). Here, we consider the same question for unbounded-fanin circuits, which in the fault-free case can compute Boolean functions in sublogarithmic depth. Now the details of the fault model become more important. One may assume that only gates, or only wires, may deliver wrong values, or that both gates and wires may be faulty. The fault tolerance depends on the types of gates that are used, and on whether the error probabilities are known exactly or only up to an upper bound. Concerning the first distinction, the two most important models are circuits consisting of and- and or-gates with arbitrarily many inputs, and circuits built from the more general type of threshold gates. We show that for faulty and/or-circuits as well as threshold circuits, an increase of fanin and size cannot be traded for a depth reduction if the error probabilities are unknown. Gates with large fanin are of no use if errors may occur. Circuits of arbitrary size, but fixed depth, can compute only a tiny subset of all Boolean functions reliably. Only for threshold circuits with exactly known error probabilities is redundancy able to compensate for faults. We describe a transformation from fault-free to fault-tolerant circuits that is optimal with respect to depth while keeping the circuit size polynomial.
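The standard redundancy idea, replicating a noisy gate and taking a majority vote, can be sketched in a few lines. This is a generic illustration of fault masking under a known error rate, not the paper's depth-optimal transformation; `eps` and the number of copies are illustrative.

```python
import random

def noisy_gate(value, eps, rng):
    """A gate whose output bit is flipped with probability eps
    (the random-fault model)."""
    return value ^ (rng.random() < eps)

def redundant_and(x, y, eps=0.1, copies=31, seed=0):
    """Run `copies` independent noisy AND gates on the same inputs and take
    the majority of their outputs: with eps known and bounded away from 1/2,
    the majority is wrong only with exponentially small probability."""
    rng = random.Random(seed)
    votes = sum(noisy_gate(x & y, eps, rng) for _ in range(copies))
    return int(votes > copies // 2)
```

Applying this at every level of a circuit multiplies its size by the replication factor; the abstract's point is that for unbounded-fanin circuits with unknown error probabilities, no amount of such redundancy buys a depth reduction.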

9.
Computations by spiking neurons are performed using the timing of action potentials. We investigate the computational power of a simple model for such a spiking neuron in the Boolean domain by comparing it with traditional neuron models such as threshold gates (or McCulloch–Pitts neurons) and sigma-pi units (or polynomial threshold gates). In particular, we estimate the number of gates required to simulate a spiking neuron by a disjunction of threshold gates and we establish tight bounds for this threshold number. Furthermore, we analyze the degree of the polynomials that a sigma-pi unit must use for the simulation of a spiking neuron. We show that this degree cannot be bounded by any fixed value. Our results give evidence that the use of continuous time as a computational resource endows single-cell models with substantially larger computational capabilities. This revised version was published online in June 2006 with corrections to the Cover Date.

10.
A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such a recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
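The fading-memory idea can be illustrated with a tiny rate-based reservoir. This is a sketch of the principle only (the paper's "liquid" is a circuit of integrate-and-fire neurons, not tanh units); the network size, leak factor, and weight ranges are our own choices, picked so that the recurrent dynamics is contractive.

```python
import math
import random

def reservoir_states(inputs, n=20, leak=0.8, seed=0):
    """Drive a small random recurrent circuit with an input stream and record
    its state at each step. Because the recurrent coupling (scaled by `leak`
    and the small weight range) is contractive, the state is a fading memory
    of recent inputs; a separate linear readout would be trained on these
    states to extract task-relevant information in real time."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-1, 1) for _ in range(n)]
    w_rec = [[rng.uniform(-0.2, 0.2) for _ in range(n)] for _ in range(n)]
    state, states = [0.0] * n, []
    for u in inputs:
        state = [math.tanh(leak * sum(w_rec[i][j] * state[j] for j in range(n))
                           + w_in[i] * u)
                 for i in range(n)]
        states.append(state)
    return states
```

Feeding a single input impulse and watching the state magnitude decay over subsequent steps shows the "fading" part of fading memory: the trace of a past input is still present in the transient state, but shrinks over time.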

11.
Taking inspiration from some laws of Nature—energy transformation and chemical reactions—we consider two different paradigms of computation in the framework of Membrane Computing. We first study the computational power of energy-based P systems, a model of membrane systems where a fixed amount of energy is associated with each object and the rules transform objects by manipulating their energy. We show that if we assign local priorities to the rules, then energy-based P systems are as powerful as Turing machines; otherwise, they can be simulated by vector addition systems, and hence are not universal. Then, we consider stochastic membrane systems where computations are performed through chemical networks. We show how molecular species and chemical reactions can be used to describe and simulate the functioning of Fredkin gates and circuits. We conclude the paper with some research topics related to computing with energy-based P systems and with chemical reactions.

12.
The neuroidal tabula rasa (NTR) as a hypothetical device that is capable of performing tasks related to cognitive processes in the brain was introduced by L. G. Valiant in 1994. Neuroidal nets represent a computational model of the NTR. Their basic computational element is a kind of programmable neuron called a neuroid. Essentially, it is a combination of a standard threshold element with a mechanism that allows modification of the neuroid's computational behaviour. This is done by changing its state and the settings of its weights and of its threshold in the course of computation. The computational power of an NTR crucially depends both on the functional properties of the underlying update mechanism that allows changing of neuroidal parameters and on the universe of allowable weights. We will define instances of neuroids for which the computational power of the respective finite-size NTR ranges from that of finite automata, through Turing machines, up to that of a certain restricted type of BSS machines that possess super-Turing computational power. The latter two results are surprising since similar results were known to hold only for certain kinds of analog neural networks.

13.
Síma J, Orponen P. Neural Computation, 2003, 15(12): 2727-2778
We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important learning issues.

14.
This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN, which is known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.

15.
We propose a fast winner-take-all (WTA) neural network by dynamically accelerating the mutual inhibition among competitive neurons. The highest-threshold neural network (HITNET) with an accelerated factor is evolved from the general mean-based neural network, which adopts the mean of active neurons as the threshold of mutual inhibition. When the accelerated factor is optimally designed, the ideal HITNET statistically achieves the highest threshold for mutual inhibition. Both theoretical analyses and simulation results demonstrate that the practical HITNET converges faster than the existing WTA networks for a large number of competitors.
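The mean-based scheme that HITNET builds on can be sketched as an iteration. This is a generic illustration of mean-based mutual inhibition, not HITNET itself; the factor `alpha` stands in for the accelerated factor (alpha = 1 recovers the plain mean-based rule, larger alpha pushes the inhibition threshold toward the maximum and speeds convergence).

```python
def mean_based_wta(x, alpha=1.0, max_iter=100):
    """Winner-take-all by repeated mutual inhibition: at each step every
    active neuron is inhibited by alpha times the mean of the currently
    active neurons, until a single winner survives. Returns the winner's
    index (assuming a unique maximum)."""
    x = list(x)
    active = [i for i, v in enumerate(x) if v > 0]
    for _ in range(max_iter):
        if len(active) <= 1:
            break
        mean = sum(x[i] for i in active) / len(active)
        for i in active:
            x[i] = max(0.0, x[i] - alpha * mean)
        active = [i for i in active if x[i] > 0]
    return active[0] if active else None
```

Each iteration eliminates all below-mean competitors at once, which is why raising the effective inhibition threshold toward the maximum (as HITNET does) reduces the number of iterations needed for large competitor pools.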

16.
Building a convolutional neural network consumes substantial human effort, and training it consumes substantial computing resources. Replacing the pooling operations in a convolutional neural network with dilated (atrous) convolutions effectively enlarges the receptive field and reduces computational complexity, but dilated convolution loses spatial hierarchy and information continuity. This paper proposes a parallel asymmetric dilated convolution module that compensates for the information lost by dilated convolution and can be embedded into existing convolutional neural networks in place of 3×3 convolutions during training, accelerating convergence and improving performance. Experimental results show that the proposed parallel asymmetric dilated convolution module significantly improves the classification performance of various networks on CIFAR-10 and other datasets.
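The dilation mechanism itself, and the trade-off it creates, can be seen in a one-dimensional sketch. This shows plain dilated convolution only, not the paper's parallel asymmetric module; spacing the taps `dilation` apart enlarges the receptive field without extra parameters, but skips the intermediate samples, which is exactly the information loss the proposed module compensates for.

```python
def dilated_conv1d(x, kernel, dilation=1):
    """1-D dilated ('atrous') convolution with valid padding: kernel taps are
    spaced `dilation` samples apart, so a k-tap kernel covers a span of
    (k - 1) * dilation + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return [sum(kernel[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span + 1)]
```

With dilation 1 this is an ordinary convolution; with dilation 2 the same 3-tap kernel sees a 5-sample window but touches only every other sample inside it.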

17.
Networks of spiking neurons are very powerful and versatile models for biological and artificial information processing systems. They seem more appropriate than networks of threshold gates, or than models that encode analog values in average firing rates, especially for modelling, in a biologically plausible way, pattern analysis tasks that require short response times with high precision. We investigate the question of how neurons can learn on the basis of time differences between firing times. In particular, we provide learning rules of the Hebbian type in terms of single spiking events of the pre- and postsynaptic neuron, and show that the weights approach some value given by the difference between pre- and postsynaptic firing times with arbitrarily high precision.
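The convergence claim can be illustrated with a minimal sketch. This is our own toy rule, not the paper's: we assume, purely for illustration, that the target weight is 1 / (1 + |t_post - t_pre|), and use a simple relaxation update that moves the weight a fraction `eta` of the way toward that target on each spike pairing.

```python
def spike_hebbian(w, t_pre, t_post, eta=0.2):
    """One Hebbian-type update driven by a single pre/post spike pair: the
    weight moves toward a value determined solely by the firing-time
    difference (target function assumed for illustration)."""
    target = 1.0 / (1.0 + abs(t_post - t_pre))
    return w + eta * (target - w)

def converge(t_pre, t_post, steps=200, w=0.0):
    """Repeated pairings with the same timing drive w to the target value
    with arbitrarily high precision (geometric convergence)."""
    for _ in range(steps):
        w = spike_hebbian(w, t_pre, t_post)
    return w
```

The residual error shrinks by a factor (1 - eta) per pairing, so after enough identical pairings the weight encodes the timing difference to any desired precision, which is the shape of the result the abstract states.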

18.
The network of neurons with linear threshold (LT) transfer functions is a prominent model to emulate the behavior of cortical neurons. The analysis of dynamic properties for LT networks has attracted growing interest, such as multistability and boundedness. However, not much is known about how the connection strength and external inputs are related to oscillatory behaviors. Periodic oscillation is an important characteristic that relates to nondivergence, which shows that the network is still bounded although unstable modes exist. By concentrating on a general parameterized two-cell network, theoretical results for geometrical properties and existence of periodic orbits are presented. Although it is restricted to two-dimensional systems, the analysis can provide a useful contribution to analyze cyclic dynamics of some specific LT networks of high dimension. As an application, it is extended to an important class of biologically motivated networks of large scale: the winner-take-all model using local excitation and global inhibition.
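A two-cell LT network that oscillates without diverging is easy to exhibit in discrete time. This is a sketch of the phenomenon, not the paper's continuous-time parameterized model; the weight matrix and drive below are our own choice, picked so that two mutually inhibiting cells with a constant input alternate on a period-2 orbit.

```python
def lt_network_step(state, W, h):
    """One synchronous step of a linear-threshold network:
    x <- max(0, W x + h), applied componentwise."""
    return tuple(max(0.0, sum(wij * sj for wij, sj in zip(row, state)) + hi)
                 for row, hi in zip(W, h))

# Two cells with strong mutual inhibition and constant drive: from rest the
# trajectory stays bounded but never settles, alternating (0,0) <-> (1,1).
W = [[0.0, -2.0], [-2.0, 0.0]]
h = [1.0, 1.0]
```

Starting from (0, 0), both cells fire together, their mutual inhibition then silences both, and the constant drive revives them: an unstable but bounded (nondivergent) cycle of exactly the kind the abstract relates to periodic oscillation.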

19.
20.
In this paper we study small depth circuits that contain threshold gates (with or without weights) and parity gates. All circuits we consider are of polynomial size. We prove several results which complete the work on characterizing possible inclusions between many classes defined by small depth circuits. These results are the following:
1.  A single threshold gate with weights cannot in general be replaced by a polynomial fan-in unweighted threshold gate of parity gates.
2.  On the other hand it can be replaced by a depth 2 unweighted threshold circuit of polynomial size. An extension of this construction is used to prove that whatever can be computed by a depth d polynomial size threshold circuit with weights can be computed by a depth d+1 polynomial size unweighted threshold circuit, where d is an arbitrary fixed integer.
3.  A polynomial fan-in threshold gate (with weights) of parity gates cannot in general be replaced by a depth 2 unweighted threshold circuit of polynomial size.
