Similar Documents
20 similar documents found (search time: 31 ms)
1.
In classical computation, rational- and real-weighted recurrent neural networks were shown to be respectively equivalent to and strictly more powerful than the standard Turing machine model. Here, we study the computational power of recurrent neural networks in a more biologically oriented computational framework, capturing the aspects of sequential interactivity and persistence of memory. In this context, we prove that so-called interactive rational- and real-weighted neural networks have the same computational power as interactive Turing machines and interactive Turing machines with advice, respectively. A mathematical characterization of each of these computational powers is also provided. It follows from these results that interactive real-weighted neural networks can perform uncountably many more translations of information than interactive Turing machines, thereby endowing them with super-Turing capabilities.
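A minimal sketch of the kind of network update underlying these results: one synchronous step of a recurrent network with rational weights and a saturated-linear activation, computed exactly with Python fractions. The weights, biases, and input stream below are illustrative placeholders, not the interactive construction of the paper.

```python
from fractions import Fraction

def sat_linear(x):
    """Saturated-linear activation: 0 for x < 0, x for 0 <= x <= 1, 1 for x > 1."""
    return max(Fraction(0), min(Fraction(1), x))

def rnn_step(state, inp, W, U, b):
    """One synchronous update: x_i <- sigma(sum_j W[i][j]*x_j + sum_k U[i][k]*u_k + b[i])."""
    n = len(state)
    return [sat_linear(sum(W[i][j] * state[j] for j in range(n))
                       + sum(U[i][k] * inp[k] for k in range(len(inp)))
                       + b[i])
            for i in range(n)]

# Tiny 2-neuron network with rational weights and a single binary input channel.
W = [[Fraction(1, 2), Fraction(0)], [Fraction(1, 4), Fraction(1, 2)]]
U = [[Fraction(1)], [Fraction(0)]]
b = [Fraction(0), Fraction(-1, 4)]

x = [Fraction(0), Fraction(0)]
for u in ([Fraction(1)], [Fraction(0)], [Fraction(1)]):   # an input stream, one symbol per step
    x = rnn_step(x, u, W, U, b)
    print([str(v) for v in x])
```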

2.
Complex real-time computations on multi-modal time-varying input streams are carried out by generic cortical microcircuits. Obstacles for the development of adequate theoretical models that could explain the seemingly universal power of cortical microcircuits for real-time computing are the complexity and diversity of their computational units (neurons and synapses), as well as the traditional emphasis on offline computing in almost all theoretical approaches towards neural computation. In this article, we initiate a rigorous mathematical analysis of the real-time computing capabilities of a new generation of models for neural computation, liquid state machines, that can be implemented with—in fact benefit from—diverse computational units. Hence, realistic models for cortical microcircuits represent special instances of such liquid state machines, without any need to simplify or homogenize their diverse computational units. We present proofs of two theorems about the potential computational power of such models for real-time computing, both on analog input streams and for spike trains as inputs.
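To make the liquid state machine idea concrete, the following is a heavily simplified sketch: a fixed, randomly connected "liquid" is driven by an input stream, and only a memoryless linear readout is trained on the liquid states. A rate-based tanh reservoir stands in for the spiking microcircuit models analysed in the paper; the sizes, weights, and the delayed-recall target task are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                    # number of liquid (reservoir) units

# Fixed random recurrent "liquid"; only the linear readout below is trained.
W_in = rng.normal(0.0, 1.0, (N, 1))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

def run_liquid(u):
    """Drive the liquid with the input stream u and record its state trajectory."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W_rec @ x + W_in @ np.array([u_t]))
        states.append(x.copy())
    return np.array(states)

# A fading-memory target task: recall the input from two steps ago, y(t) = u(t-2).
u = rng.uniform(-1.0, 1.0, 500)
X, y = run_liquid(u)[2:], u[:-2]

W_out, *_ = np.linalg.lstsq(X, y, rcond=None)   # memoryless linear readout
print("readout MSE:", np.mean((X @ W_out - y) ** 2))
```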

3.
Manin, Feynman, and Deutsch have viewed quantum computing as a kind of universal physical simulation procedure. Much of the writing about quantum logic circuits and quantum Turing machines has shown how these machines can simulate an arbitrary unitary transformation on a finite number of qubits. The problem of universality has been addressed most famously in a paper by Deutsch, and later by Bernstein and Vazirani as well as Kitaev and Solovay. The quantum logic circuit model, developed by Feynman and Deutsch, has been more prominent in the research literature than Deutsch’s quantum Turing machines. Quantum Turing machines form a class closely related to deterministic and probabilistic Turing machines and one might hope to find a universal machine in this class. A universal machine is the basis of a notion of programmability. The extent to which universality has in fact been established by the pioneers in the field is examined and this key notion in theoretical computer science is scrutinised in quantum computing by distinguishing various connotations and concomitant results and problems.

4.
Continuous dynamical systems intuitively seem capable of more complex behavior than discrete systems. If analyzed in the framework of the traditional theory of computation, a continuous dynamical system with countably many quasistable states has at least the computational power of a universal Turing machine. Such an analysis assumes, however, the classical notion of measurement. If measurement is viewed nonclassically, a continuous dynamical system cannot, even in principle, exhibit behavior that cannot be simulated by a universal Turing machine.

5.
It is shown here that there is no standard spiking neural P system that simulates Turing machines with less than exponential time and space overheads. The spiking neural P systems considered here have a constant number of neurons that is independent of the input length. Following this, we construct a universal spiking neural P system with exhaustive use of rules that simulates Turing machines in linear time and has only 10 neurons.
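A minimal sketch of how a spiking neural P system evolves, under strong simplifying assumptions: spike counts per neuron, rules given as (guard, consume, emit) triples instead of regular expressions, no delays, and an arbitrary toy topology. It is not the 10-neuron universal system of the paper.

```python
# Each neuron holds a spike count and rules of the form (guard, consume, emit):
# a rule is enabled when the spike count equals guard; it removes `consume`
# spikes and, if emit == 1, sends one spike along every outgoing synapse.
system = {
    "n1": {"spikes": 2, "rules": [(2, 2, 1)], "out": ["n2", "n3"]},
    "n2": {"spikes": 0, "rules": [(1, 1, 1)], "out": ["n3"]},
    "n3": {"spikes": 0, "rules": [(2, 2, 0)], "out": []},   # a forgetting-style rule
}

def step(sys_):
    """One synchronous step: all enabled neurons fire, then spikes are delivered."""
    incoming = {name: 0 for name in sys_}
    for neuron in sys_.values():
        for guard, consume, emit in neuron["rules"]:
            if neuron["spikes"] == guard:
                neuron["spikes"] -= consume
                if emit:
                    for target in neuron["out"]:
                        incoming[target] += 1
                break                      # at most one rule fires per neuron per step
    for name, extra in incoming.items():
        sys_[name]["spikes"] += extra

for t in range(3):
    step(system)
    print(t, {name: neuron["spikes"] for name, neuron in system.items()})
```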

6.
In this paper we study the class of reaction-diffusion automata with three states (3-RDA). Namely, we prove that the dynamical behavior of a given neural network can be simulated by a 3-RDA. Since arbitrary neural networks may simulate Turing machines, the class of all 3-RDA is universal. This work was partially supported by FONDECYT 1950569 (M.M.), 1940520 (E.G.), under Grant CEE Marie Curie (M.M.), ECOS-94 (M.M. and E.G.), and DTI - U. Chile (E.G.).

7.
Circuits composed of threshold gates (McCulloch-Pitts neurons, or perceptrons) are simplified models of neural circuits with the advantage that they are theoretically more tractable than their biological counterparts. However, when such threshold circuits are designed to perform a specific computational task, they usually differ in one important respect from computations in the brain: they require very high activity. On average every second threshold gate fires (sets a 1 as output) during a computation. By contrast, the activity of neurons in the brain is much sparser, with only about 1% of neurons firing. This mismatch between threshold and neuronal circuits is due to the particular complexity measures (circuit size and circuit depth) that have been minimized in previous threshold circuit constructions. In this letter, we investigate a new complexity measure for threshold circuits, energy complexity, whose minimization yields computations with sparse activity. We prove that all computations by threshold circuits of polynomial size with entropy O(log n) can be restructured so that their energy complexity is reduced to a level near the entropy of circuit states. This entropy of circuit states is a novel circuit complexity measure, which is of interest not only in the context of threshold circuits but for circuit complexity in general. As an example of how this measure can be applied, we show that any polynomial size threshold circuit with entropy O(log n) can be simulated by a polynomial size threshold circuit of depth 3. Our results demonstrate that the structure of circuits that result from a minimization of their energy complexity is quite different from the structure that results from a minimization of previously considered complexity measures, and potentially closer to the structure of neural circuits in the nervous system. In particular, different pathways are activated in these circuits for different classes of inputs. This letter shows that such circuits with sparse activity have a surprisingly large computational power.
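The following sketch illustrates the energy measure on a toy example: a layered circuit of threshold (McCulloch-Pitts) gates is evaluated on an input, and the energy of the computation is counted as the number of gates that output 1. The circuit (an XOR of two bits) is an illustrative assumption, not a construction from the letter.

```python
def threshold_gate(weights, threshold, inputs):
    """McCulloch-Pitts gate: outputs 1 iff the weighted sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def run_circuit(layers, x):
    """Evaluate a layered threshold circuit; energy = number of gates that output 1.
    Each gate is a (weights, threshold) pair reading the full previous layer."""
    energy, values = 0, list(x)
    for layer in layers:
        values = [threshold_gate(w, t, values) for (w, t) in layer]
        energy += sum(values)
    return values, energy

# A two-layer circuit for XOR of two bits: x XOR y = (x OR y) AND NOT (x AND y).
circuit = [
    [([1, 1], 1), ([1, 1], 2)],   # layer 1: an OR gate and an AND gate
    [([1, -1], 1)],               # layer 2: fires iff OR = 1 and AND = 0
]
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    out, energy = run_circuit(circuit, x)
    print(x, "->", out, "energy =", energy)
```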

8.
Approximation and universality of fuzzy Turing machines (cited 2 times: 1 self-citation, 1 by others)
Fuzzy Turing machines are the formal models of fuzzy algorithms or fuzzy computations. In this paper we give several different formulations of fuzzy Turing machines, corresponding to nondeterministic fuzzy Turing machines using max-* composition for some t-norm * (NFTM*, for short), nondeterministic fuzzy Turing machines (NFTM), deterministic fuzzy Turing machines (DFTM), and multi-tape versions of fuzzy Turing machines. Some distinct results compared to those of ordinary Turing machines are obtained. First, it is shown that NFTM*, NFTM, and DFTM are not necessarily equivalent in their power of recognizing fuzzy languages if the t-norm * does not satisfy the finitely generated condition, but are equivalent in the approximation sense. That is to say, we can approximate an NFTM* by some NFTM with any given accuracy; the related constructions are also presented. The level characterizations of fuzzy recursively enumerable languages and fuzzy recursive languages are given in terms of ordinary r.e. languages and recursive languages. Second, we show that a universal fuzzy Turing machine exists in the approximate sense: there is a universal fuzzy Turing machine that can simulate any NFTM* with a given accuracy.
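The max-* composition that distinguishes the NFTM* variants can be illustrated directly: composing two fuzzy relations under different t-norms generally yields different degrees. The relations and membership degrees below are arbitrary illustrative values, not taken from the paper.

```python
# Max-* composition of fuzzy relations R and S over a finite set:
#   (R o S)(a, c) = max_b R(a, b) * S(b, c), with * a t-norm.
t_min = min                               # Goedel (minimum) t-norm
t_prod = lambda a, b: a * b               # product t-norm

def max_star_compose(R, S, t_norm, elems):
    return {(a, c): max(t_norm(R[(a, b)], S[(b, c)]) for b in elems)
            for a in elems for c in elems}

elems = ["p", "q"]
R = {("p", "p"): 0.9, ("p", "q"): 0.4, ("q", "p"): 0.2, ("q", "q"): 1.0}
S = {("p", "p"): 0.5, ("p", "q"): 0.8, ("q", "p"): 0.7, ("q", "q"): 0.3}

for name, t in (("min", t_min), ("product", t_prod)):
    print(name, max_star_compose(R, S, t, elems))
```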

9.
Li Yongming, Li Ping. Chinese Journal of Computers, 2012, 35(7): 1407-1420
Automata theory based on quantum logic is an important research direction in models of quantum computation. This paper studies Turing machines based on quantum logic (quantum Turing machines for short) and some of their variants, including nondeterministic quantum Turing machines (l-VTM), deterministic quantum Turing machines (l-VDTM), and the corresponding multi-tape quantum Turing machines. Two different ways of defining language recognition by quantum Turing machines, based on depth-first and breadth-first search respectively, are introduced, and the two definitions are proved not to be equivalent in the sense of quantum logic. The equivalence between l-VTM, l-VDTM, and the corresponding multi-tape quantum Turing machines is then established. Next, quantum recursively enumerable languages and quantum recursive languages are defined and given level characterizations; it is shown that l-VTM and l-VDTM are not equivalent, but that they are equivalent as recognizers of quantum recursive languages. Finally, the existence of a universal Turing machine based on quantum logic is discussed: a suitable coding system is given, and it is proved that a universal quantum-logic Turing machine does not exist when the underlying orthomodular lattice of truth values is infinite, but does exist when that lattice is finite.

10.
Universality, provability and simplicity are key notions in computability theory. There are various criteria of simplicity for universal Turing machines. Probably the most popular one is to count the number of states/symbols. This criterion is more complex than it may appear at first glance. In this note we propose three new criteria of simplicity for universal prefix-free Turing machines. These criteria refer to the possibility of proving various natural properties of such a machine (its universality, for example) in a formal theory, Peano arithmetic or Zermelo–Fraenkel set theory. In all cases some, but not all, machines are simple.

11.
Informally, the parallel Turing machine (PTM) proposed by Wiedermann is a set of identical usual sequential Turing machines (STMs) cooperating on two common tapes: storage tape and input tape. Moreover, STMs which represent the individual processors of a parallel computer can multiply themselves in the course of computation. On the other hand, during the past 25 years or so, automata on a three-dimensional tape have been proposed as computational models of three-dimensional pattern processing, and several properties of such automata have been obtained. We proposed a three-dimensional parallel Turing machine (3-PTM), and dealt with a hardware-bounded 3-PTM whose inputs are restricted to cubic ones. We believe that this machine is useful in measuring the parallel computational complexity of three-dimensional images. In this article, we continue the study of 3-PTM, whose inputs are restricted to cubic ones, and investigate some of its accepting powers. This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25–27, 2007.

12.
Computational capabilities of recurrent NARX neural networks (cited 11 times: 0 self-citations, 11 by others)
Recently, fully connected recurrent neural networks have been shown to be computationally rich: at least as powerful as Turing machines. This work focuses on another class of networks which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Psi(u(t-n_u), ..., u(t-1), u(t), y(t-n_y), ..., y(t-1)), where u(t) and y(t) represent the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Psi is the mapping performed by a multilayer perceptron. We constructively prove that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power.
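A sketch of one NARX network in operation, with a small multilayer perceptron playing the role of Psi and tapped delay lines supplying the past inputs and outputs; the orders, hidden size, and random weights are illustrative assumptions (the network is not trained here).

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(z, W1, b1, W2, b2):
    """A one-hidden-layer perceptron playing the role of Psi."""
    return float(W2 @ np.tanh(W1 @ z + b1)) + b2

def narx_run(u, n_u, n_y, params):
    """y(t) = Psi(u(t-n_u), ..., u(t), y(t-n_y), ..., y(t-1)); the feedback comes
    only from past outputs, never from hidden units."""
    y = [0.0] * n_y                        # initial values for the output taps
    for t in range(n_u, len(u)):
        z = np.array(u[t - n_u:t + 1] + y[-n_y:])
        y.append(mlp(z, *params))
    return y

n_u, n_y, hidden = 2, 2, 5
W1 = rng.normal(size=(hidden, n_u + 1 + n_y))
b1 = rng.normal(size=hidden)
W2 = rng.normal(size=hidden)
b2 = float(rng.normal())

u = list(rng.uniform(-1.0, 1.0, 20))
print(narx_run(u, n_u, n_y, (W1, b1, W2, b2))[:5])
```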

13.
We present an improved simulation of space and reversal bounded Turing machines by width and depth bounded uniform circuits. (All resource bounds hold simultaneously.) An S(n) space, R(n) reversal bounded deterministic k-tape Turing machine can be simulated by a uniform circuit of O(R(n) log^2 S(n)) depth and O(S(n)^k) width. Our proof is cleaner, and has slightly better resource bounds than the original proof due to Pippenger (1979). The improvement in resource bounds comes primarily from the use of a shared-memory machine instead of an oblivious Turing machine, and the concept of a ‘special situation’.

14.
In contrast to machine models like Turing machines or random access machines, circuits are a static computational model. The internal information flow of a computation is fixed in advance, independent of the actual input. Therefore, size and depth are natural and simple measures for circuits and provide a worst-case analysis. We consider a new model in which an internal gate is evaluated as soon as its result has been determined by a partial assignment of its inputs. This way, a dynamic notion of delay is obtained which gives rise to an average case measure for the time complexity of circuits. In a previous paper we have obtained tight upper and lower bounds for the average case complexity of several basic Boolean functions. This paper examines the asymptotic average case complexity for the set of all n-ary Boolean functions. In contrast to worst case analysis a simple counting argument does not work. We prove that with respect to the uniform probability distribution almost all Boolean functions require at least n − log n − log log n expected time. On the other hand, there is a significantly large subset of functions that can be computed with a constant average delay. Finally, for an arbitrary Boolean function we compare its worst case and average case complexity. It is shown that for each function that requires circuit depth d, i.e. of worst-case complexity d, the expected time complexity will be at least d − log n − log d with respect to an explicitly defined probability distribution. In addition, a nontrivial upper bound on the complexity of such a distribution will be obtained.
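The dynamic notion of delay can be illustrated on a small example: a gate is charged one time unit after the moment its output becomes determined, so an OR gate may finish as soon as one of its inputs is known to be 1. For a chain of OR gates this makes the worst-case delay equal to the circuit depth while the average delay over uniformly random inputs stays close to constant; the chain circuit below is an illustrative choice, not one of the paper's constructions.

```python
import itertools

def gate_delay(kind, child_results):
    """A gate is evaluated one time unit after its output is determined: an OR
    finishes right after its earliest 1-input (a controlling value), otherwise
    after its latest input; dually for AND."""
    ctrl = 1 if kind == "or" else 0
    values = [v for v, _ in child_results]
    if ctrl in values:
        return ctrl, 1 + min(d for v, d in child_results if v == ctrl)
    return 1 - ctrl, 1 + max(d for _, d in child_results)

def eval_tree(node, x):
    if isinstance(node, int):              # a leaf: input variable, available at time 0
        return x[node], 0
    kind, children = node
    return gate_delay(kind, [eval_tree(child, x) for child in children])

# A chain of OR gates over n inputs: depth (worst-case delay) n-1, but on most
# inputs a late 1 determines the output gate almost immediately.
n = 8
tree = 0
for i in range(1, n):
    tree = ("or", [tree, i])

delays = [eval_tree(tree, x)[1] for x in itertools.product((0, 1), repeat=n)]
print("worst-case delay:", max(delays), " average delay:", sum(delays) / len(delays))
```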

15.
We survey some work concerned with small universal Turing machines, cellular automata, tag systems, and other simple models of computation. For example, it has been an open question for some time as to whether the smallest known universal Turing machines of Minsky, Rogozhin, Baiocchi and Kudlek are efficient (polynomial time) simulators of Turing machines. These are some of the most intuitively simple computational devices and previously the best known simulations were exponentially slow. We discuss recent work that shows that these machines are indeed efficient simulators. As a related result, we also find that Rule 110, a well-known elementary cellular automaton, is also efficiently universal. We also review a large number of old and new universal program size results, including new small universal Turing machines and new weakly, and semi-weakly, universal Turing machines. We then discuss some ideas for future work arising out of these, and other, results.
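For reference, Rule 110 itself is easy to state and simulate; the sketch below implements the standard elementary cellular automaton update (the efficiency and universality arguments surveyed in the paper are, of course, far more involved).

```python
# Rule 110: the new value of a cell is looked up from its (left, centre, right)
# neighbourhood; the rule number 110 = 0b01101110 encodes the lookup table.
RULE = 110
TABLE = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def rule110_step(cells):
    n = len(cells)
    return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 30 + [1]                    # a single live cell on a cyclic tape
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = rule110_step(cells)
```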

16.
Some accepting powers of three-dimensional parallel Turing machines (cited 1 time: 1 self-citation, 0 by others)
Informally, the parallel Turing machine (PTM) proposed by Wiedermann is a set of identical usual sequential Turing machines (STMs) cooperating on two common tapes: storage tape and input tape. Moreover, STMs which represent the individual processors of a parallel computer can multiply themselves in the course of computation. On the other hand, during the past 25 years or so, automata on a three-dimensional tape have been proposed as computational models of three-dimensional pattern processing, and several properties of such automata have been obtained. We proposed a three-dimensional parallel Turing machine (3-PTM), and dealt with a hardware-bounded 3-PTM whose inputs are restricted to cubic ones. We believe that this machine is useful in measuring the parallel computational complexity of three-dimensional images. Here, we continue the study of 3-PTM, whose inputs are restricted to cubic ones, and investigate some of its accepting powers. This work was presented in part at the First European Workshop on Artificial Life and Robotics, Vienna, Austria, July 12–13, 2007.

17.
We describe experimental results to demonstrate the wide-ranging computational ability of quasiperiodic oscillators built from rings of differentiating Schmitt triggers. We describe a theoretical model based on necklace functions to compute the number of states supportable by a ring circuit of a given size. Experimental results are presented to demonstrate that probabilistic state machines can be built from these ring circuits. Other experimental results are given to demonstrate that the rings can model spiking neural network circuits.

18.
Recent work by Siegelmann has shown that the computational power of recurrent neural networks matches that of Turing machines. One important implication is that complex language classes (infinite languages with embedded clauses) can be represented in neural networks. Proofs are based on a fractal encoding of states to simulate the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned in recurrent neural networks from simple sequence prediction tasks. Two main types of network solutions are found and described qualitatively as dynamical systems: damped oscillation and entangled spiraling around fixed points. The potential and limitations of each solution type are established in terms of generalization on two different context-free languages. Both solution types constitute novel stack implementations, generally in line with Siegelmann's theoretical work, which supply insights into how embedded structures of languages can be handled in analog hardware.
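One common fractal stack encoding in the spirit of Siegelmann's construction maps a binary stack to a number in [0, 1) so that push and pop become affine maps and the top symbol is read off by a threshold; the particular base-4 constants below are one standard choice, not necessarily those used in the works discussed.

```python
from fractions import Fraction

# Encode a binary stack (top symbol first) as q = sum_i (2*b_i + 1) / 4**i, so
# that push and pop become affine maps on [0, 1) and the top symbol can be read
# off with a single threshold test.
def encode(stack):
    return sum(Fraction(2 * b + 1, 4 ** (i + 1)) for i, b in enumerate(stack))

def push(q, b):
    return q / 4 + Fraction(2 * b + 1, 4)

def top(q):
    return 1 if q >= Fraction(1, 2) else 0   # codes lie in [1/4, 1/2) or [3/4, 1)

def pop(q):
    return 4 * q - (2 * top(q) + 1)

q = encode([1, 0, 1])                        # stack with 1 on top
print(q, top(q))
q = push(q, 0)
print(q, top(q))                             # now 0 is on top
print(pop(q) == encode([1, 0, 1]))           # popping undoes the push
```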

19.
On the computational power of winner-take-all (cited 5 times: 0 self-citations, 0 by others)
Maass W. Neural Computation, 2000, 12(11): 2519-2535
This article initiates a rigorous theoretical analysis of the computational power of circuits that employ modules for computing winner-take-all. Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in computational brain models, artificial neural networks, and analog VLSI. Our theoretical analysis shows that winner-take-all is a surprisingly powerful computational module in comparison with threshold gates (also referred to as McCulloch-Pitts neurons) and sigmoidal gates. We prove an optimal quadratic lower bound for computing winner-take-all in any feedforward circuit consisting of threshold gates. In addition we show that arbitrary continuous functions can be approximated by circuits employing a single soft winner-take-all gate as their only nonlinear operation. Our theoretical analysis also provides answers to two basic questions raised by neurophysiologists in view of the well-known asymmetry between excitatory and inhibitory connections in cortical circuits: how much computational power of neural networks is lost if only positive weights are employed in weighted sums and how much adaptive capability is lost if only the positive weights are subject to plasticity.
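A sketch of the two gate types being compared, under the usual textbook formulations: hard winner-take-all returns a one-hot indicator of the maximal input, and a softmax-style normalisation stands in for the soft winner-take-all gate (the paper's gate is defined via competitive analog dynamics, so this is only an illustration of the idea).

```python
import numpy as np

def winner_take_all(x):
    """Hard WTA: 1 at the position of the largest input, 0 elsewhere."""
    out = np.zeros_like(x)
    out[np.argmax(x)] = 1.0
    return out

def soft_winner_take_all(x, beta=4.0):
    """Soft WTA: competition turns the inputs into a graded, normalised response."""
    e = np.exp(beta * (x - np.max(x)))       # shift by the maximum for stability
    return e / e.sum()

x = np.array([0.2, 1.3, 0.9, 1.1])
print(winner_take_all(x))
print(soft_winner_take_all(x).round(3))
```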

20.
We establish a fundamental result in the theory of computation by continuous-time dynamical systems by showing that systems corresponding to so-called continuous-time symmetric Hopfield nets are capable of general computation. As is well known, such networks have very constrained Lyapunov-function controlled dynamics. Nevertheless, we show that they are universal and efficient computational devices, in the sense that any convergent synchronous fully parallel computation by a recurrent network of n discrete-time binary neurons, with in general asymmetric coupling weights, can be simulated by a symmetric continuous-time Hopfield net containing only 18n + 7 units employing the saturated-linear activation function. Moreover, if the asymmetric network has maximum integer weight size w_max and converges in discrete time t*, then the corresponding Hopfield net can be designed to operate in continuous time Theta(t*/epsilon) for any epsilon > 0 such that w_max 2^(12n) …
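A sketch of the continuous-time symmetric Hopfield dynamics with the saturated-linear activation mentioned in the abstract, integrated with a simple Euler step; with symmetric weights the printed Lyapunov-style energy should be (numerically, approximately) non-increasing. The weights, network size, and step size are illustrative assumptions, and the 18n + 7-unit simulation itself is not reproduced.

```python
import numpy as np

def sat(x):
    """Saturated-linear activation: clip to [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def energy(y, W, b):
    """Lyapunov-style energy for the symmetric saturated-linear Hopfield net."""
    return -0.5 * y @ W @ y - b @ y + 0.5 * np.sum(y ** 2)

rng = np.random.default_rng(2)
n = 6
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                            # symmetric coupling weights
np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)

y, dt = rng.uniform(0.0, 1.0, n), 0.01
for step in range(2001):
    y = y + dt * (-y + sat(W @ y + b))       # dy/dt = -y + sigma(W y + b)
    if step % 500 == 0:
        print(f"step {step:4d}  energy {energy(y, W, b):+.4f}")
```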
