Similar documents
 Found 20 similar documents (search time: 46 ms)
1.
Consider Turing machines that read and write the symbols 1 and 0 on a one-dimensional tape that is infinite in both directions, and halt when started on a tape containing all 0's. Rado's busy beaver function ones(n) is the maximum number of 1's such a machine, with n states, may leave on its tape when it halts. The function ones(n) is noncomputable; in fact, it grows faster than any computable function. Other functions of a similar nature can also be defined. The function time(n) is the maximum number of moves such a machine may make before halting. The function num(n) is the largest number of 1's such a machine may leave on its tape in the form of a single run; and the function space(n) is the maximum number of tape squares such a machine may scan before it halts. This paper establishes a variety of bounds on these functions in terms of each other; for example, time(n) ≤ (2n − 1) × ones(3n + 3). In general, we compare the growth rates of such functions, and discuss the problem of characterizing their growth behavior in a more precise way than that given by Rado.
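To make the four measures concrete, here is a small Python sketch (our own illustration, not taken from the paper) that runs a two-symbol Turing machine, given as a transition table, on a two-way infinite all-0 tape and reports the quantities that ones(n), time(n), space(n) and num(n) maximize; the 2-state machine used as input is only a toy example.

# Minimal sketch: run a 2-symbol Turing machine on a two-way infinite all-0 tape
# and report the quantities maximized by time(n), ones(n), space(n) and num(n).
def run(delta, start="A"):
    tape = {}                       # sparse two-way infinite tape; default symbol is 0
    pos, state, steps = 0, start, 0
    scanned = {0}                   # tape squares visited by the head
    while state != "H":             # "H" denotes the halting state
        write, move, state = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        scanned.add(pos)
        steps += 1
    lo, hi = min(tape), max(tape)
    best = cur = 0                  # longest run of consecutive 1's left on the tape
    for i in range(lo, hi + 1):
        cur = cur + 1 if tape.get(i, 0) == 1 else 0
        best = max(best, cur)
    return {"time": steps, "ones": sum(tape.values()), "space": len(scanned), "num": best}

# Toy 2-state machine: halts after 6 moves, leaving four consecutive 1's.
delta = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
         ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H")}
print(run(delta))                   # {'time': 6, 'ones': 4, 'space': 4, 'num': 4}

Note that tabulating ones(n) this way for all n-state tables would require deciding which machines halt, which is exactly where the noncomputability enters.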

2.
We prove an O(t(n) · t(n)^(1/d) / log t(n)) time bound for the simulation of t(n) steps of a Turing machine using several one-dimensional work tapes on a Turing machine using one d-dimensional work tape. We prove a matching lower bound, which holds for the problem of recognizing languages on machines with a separate one-way input tape. Received: March 1995.

3.
This paper presents persistent Turing machines (PTMs), a new way of interpreting Turing-machine computation, based on dynamic stream semantics. A PTM is a Turing machine that performs an infinite sequence of “normal” Turing machine computations, where each such computation starts when the PTM reads an input from its input tape and ends when the PTM produces an output on its output tape. The PTM has an additional worktape, which retains its content from one computation to the next; this is what we mean by persistence. A number of results are presented for this model, including a proof that the class of PTMs is isomorphic to a general class of effective transition systems called interactive transition systems; and a proof that PTMs without persistence (amnesic PTMs) are less expressive than PTMs. As an analogue of the Church-Turing hypothesis, which relates Turing machines to algorithmic computation, it is hypothesized that PTMs capture the intuitive notion of sequential interactive computation.
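The stream semantics is easy to mimic in code. The following Python sketch is our own toy rendering of the idea (not the authors' formal construction): each macrostep is abstracted as a function from (input word, worktape contents) to (output word, new worktape contents), and the PTM turns an input stream into an output stream while the worktape persists between macrosteps.

from typing import Callable, Iterable, Iterator, Tuple

# One macrostep: a complete classical computation, abstracted as a function from
# (input word, persistent worktape contents) to (output word, new worktape contents).
Step = Callable[[str, str], Tuple[str, str]]

def ptm(step: Step, inputs: Iterable[str], worktape: str = "") -> Iterator[str]:
    """Dynamic stream semantics: one output per input, worktape carried across macrosteps."""
    for w in inputs:
        out, worktape = step(w, worktape)
        yield out

# Toy behaviour: answer whether the symbol '1' has appeared in any input so far.
def latch(w: str, tape: str) -> Tuple[str, str]:
    tape = "1" if ("1" in w or tape == "1") else tape
    return ("yes" if tape == "1" else "no", tape)

print(list(ptm(latch, ["0", "0", "1", "0"])))    # ['no', 'no', 'yes', 'yes']

An amnesic PTM, whose worktape is wiped between macrosteps, could not exhibit this behaviour, since each answer would then depend on the current input alone; this is the intuition behind the expressiveness gap mentioned in the abstract.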

4.
We make some observations concerning alternating Turing machines operating in small space. For example, we show that alternating Turing machines using o(log n) space are more powerful than nondeterministic Turing machines using the same space-bound. In fact, we show that there is a language over a unary alphabet that can be accepted by an on-line alternating Turing machine in log n space, but not by any off-line nondeterministic Turing machine in o(log n) space. We also investigate the weak vs. strong space bounds and on-line vs. off-line machines at these low tape bounds.

5.
Paul and Reischuk devised space-efficient simulations of logarithmic cost random access machines and multidimensional Turing machines. We simplify their general space reduction technique and extend it to other computational models, including pointer machines, which model computations on graphs and data structures. Every pointer machine of time complexity T(n) can be simulated by a pointer machine of space complexity O(T(n)/log T(n)).

6.
The main result of this paper is that, given a Turing machine M with k heads on a d-dimensional tape, one can effectively construct a Turing machine M′ with k d-dimensional tapes but only one head per tape, plus one additional linear single-head tape, which simulates M in linear time.

7.
We introduce the dual return complexity and prove that the return complexity classes and the dual return complexity classes of nondeterministic Turing machines coincide with the tape complexity classes of Turing machines with auxiliary pushdown tape, for resource functions ≥ id, id being the identity function.

8.
It is reasonable to assume that quantum computations take place under the control of the classical world. For modelling this standard situation, we introduce a Classically-controlled Quantum Turing Machine (CQTM), which is a Turing machine with a quantum tape for acting on quantum data, and a classical transition function for a formalized classical control. In a CQTM, unitary transformations and quantum measurements are allowed. We show that any classical Turing machine can be simulated by a CQTM without loss of efficiency. Furthermore, we show that any k-tape CQTM can be simulated by a 2-tape CQTM with a quadratic loss of efficiency. The gap between classical and quantum computations, which was already pointed out in the framework of measurement-based quantum computation (see [S. Perdrix, Ph. Jorrand, Measurement-Based Quantum Turing Machines and their Universality, arXiv, quant-ph/0404146, 2004]), is confirmed in the general case of classically-controlled quantum computation. In order to appreciate the similarity between programming classical Turing machines and programming CQTMs, some examples of CQTMs will be given in the full version of the paper. Proofs of lemmas and theorems are omitted in this extended abstract.
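The flavour of classical control over quantum data can be conveyed with a few lines of NumPy; the sketch below is our own toy illustration of that reading of the model, not the paper's formal definition. The quantum tape holds a single qubit as a state vector, one control state applies a unitary, the next performs a measurement, and the classical transition function branches on the classical outcome.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate (a unitary step)
X = np.array([[0, 1], [1, 0]])                  # NOT gate

def measure(psi, rng):
    """Projective measurement of a single-qubit state in the computational basis."""
    p0 = abs(psi[0]) ** 2
    outcome = 0 if rng.random() < p0 else 1
    post = np.zeros(2)
    post[outcome] = 1.0                         # post-measurement state
    return outcome, post

def toy_cqtm_run(seed=0):
    """One classically-controlled run on a one-cell quantum tape: prepare |0>,
    apply H, measure, then let the classical control choose the final gate."""
    rng = np.random.default_rng(seed)
    psi = np.array([1.0, 0.0])                  # quantum tape cell initialized to |0>
    psi = H @ psi                               # classical state q0: unitary step
    outcome, psi = measure(psi, rng)            # classical state q1: measurement step
    psi = (X if outcome == 1 else np.eye(2)) @ psi   # classical branching on the outcome
    return outcome, psi

print(toy_cqtm_run())   # whatever the measured outcome, the corrected tape ends in |0>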

9.
We prove the first superlinear lower bound for a concrete, polynomial-time recognizable decision problem on a Turing machine with one work tape and a two-way input tape (also called an off-line 1-tape Turing machine). In particular, for off-line Turing machines we show that two tapes are better than one and that three pushdown stores are better than two (both in the deterministic and in the nondeterministic case).

10.
On alternation     
Summary Every alternating t(n)-time bounded multitape Turing machine can be simulated by an alternating t(n)-time bounded 1-tape Turing machine. Every nondeterministic t(n)-time bounded 1-tape Turing machine can be simulated by an alternating (n + t(n)^(1/2))-time bounded 1-tape Turing machine. For well-behaved functions t(n), every nondeterministic t(n)-time bounded 1-tape Turing machine can be simulated by a deterministic ((n log n)^(1/2) + t(n)^(1/2))-tape bounded off-line Turing machine. These results improve or extend results by Chandra-Stockmeyer, Lipton-Tarjan and Paterson. A preliminary version of this paper was presented at the 19th IEEE-FOCS.

11.
A language is called (m,n)-verbose if there exists a Turing machine that enumerates for any n words at most m possibilities for their characteristic string. This notion is compared with (m,n)-fa-verboseness, where instead of a Turing machine a finite automaton is used. By use of a new diagonalisation method, where finite automata trick Turing machines, it is shown that all (m,n)-verbose languages are (h,k)-verbose iff all (m,n)-fa-verbose languages are (h,k)-fa-verbose. In other words, Turing machines and finite automata behave exactly the same way with respect to inclusion of verboseness classes. This identical behaviour implies that the nonspeedup theorem also holds for finite automata. As an application of the theoretical framework, a lower bound is derived on the number of bits that need to be communicated to finite automata protocol checkers for nonregular protocols.
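As a toy illustration of the definition (ours, not from the paper): the characteristic string of n words with respect to a language A is the n-bit vector of membership answers, and (m,n)-verboseness means some machine can always narrow the 2^n possible vectors down to at most m candidates. The language and the two enumerators below are deliberately trivial.

from itertools import product

def chi(A, words):
    """Characteristic string of the word tuple with respect to the language A."""
    return "".join("1" if A(w) else "0" for w in words)

# Toy language: words over {a, b} of even length (regular, so even a finite automaton decides it).
A = lambda w: len(w) % 2 == 0

def trivial_enumerator(words):
    """A decidable language is (1, n)-verbose: one candidate suffices for any n words."""
    return [chi(A, words)]

def blind_enumerator(words):
    """Knowing nothing about A forces all 2^n candidates - the trivial upper bound."""
    return ["".join(bits) for bits in product("01", repeat=len(words))]

words = ["ab", "a", "abba"]
print(chi(A, words))                  # '101'
print(trivial_enumerator(words))      # ['101']  (witnesses (1, 3)-verboseness)
print(len(blind_enumerator(words)))   # 8 = 2^3 candidates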

12.
13.
14.
Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as “the work-horse of hypercomputation” (Potgieter and Rosinger 2010: 853). But do they really compute beyond the “Turing limit”—e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term “accelerating Turing machine” is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of an epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception; and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.

15.
Informally, the parallel Turing machine (PTM) proposed by Wiedermann is a set of identical usual sequential Turing machines (STMs) cooperating on two common tapes: a storage tape and an input tape. Moreover, STMs, which represent the individual processors of a parallel computer, can multiply themselves in the course of computation. On the other hand, during the past 7 years or so, automata on a four-dimensional tape have been proposed as computational models of four-dimensional pattern processing, and several properties of such automata have been obtained. We proposed a four-dimensional parallel Turing machine (4-PTM), and dealt with a hardware-bounded 4-PTM in which all side-lengths of each input tape are equal. We believe that this machine is useful in measuring the parallel computational complexity of four-dimensional images. In this work, we continued the study of the 4-PTM, in which all side-lengths of each input tape are equal, and investigated some of its accepting powers.

16.
We present an improved simulation of space- and reversal-bounded Turing machines by width- and depth-bounded uniform circuits. (All resource bounds hold simultaneously.) An S(n) space, R(n) reversal bounded deterministic k-tape Turing machine can be simulated by a uniform circuit of O(R(n) log^2 S(n)) depth and O(S(n)^k) width. Our proof is cleaner and has slightly better resource bounds than the original proof due to Pippenger (1979). The improvement in resource bounds comes primarily from the use of a shared-memory machine instead of an oblivious Turing machine, and from the concept of a ‘special situation’.

17.
Summary Every deterministic t(n)-time bounded multitape Turing machine can be simulated by an alternating t(n) log log t(n)/log t(n)-time bounded Turing machine. If the depth of every directed acyclic graph with n edges can be reduced to log n by removing only o(n) edges, then in linear time nondeterministic multitape Turing machines can recognize more languages than deterministic multitape Turing machines. For some graphs, reduction of the depth to log n requires the removal of Ω(n/log log n) edges. A graph-theoretic condition is given which implies that obliviousness reduces the power of multitape Turing machines. A preliminary version of this paper was presented at the GI-conference on Theoretical Computer Science 1979. Part of this research was done while the author was visiting the Laboratoire de recherches en informatique de l'université de Paris sud under DAAD-grant 311-f-HSLA-soe.
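The graph-theoretic condition concerns lowering the depth (longest path length) of a DAG by deleting few edges. The short Python sketch below (our own illustration, not the paper's construction) simply computes the depth of a DAG given as an edge list, before and after removing a chosen set of edges, on a toy example.

from functools import lru_cache

def depth(edges):
    """Number of edges on a longest path in a DAG given as a list of (u, v) edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)

    @lru_cache(maxsize=None)
    def longest(u):
        return max((1 + longest(v) for v in adj.get(u, [])), default=0)

    return max((longest(u) for u in adj), default=0)

# Toy example: a path a -> b -> c -> d has depth 3; removing one middle edge cuts it to 1.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
removed = {("b", "c")}
print(depth(edges))                                    # 3
print(depth([e for e in edges if e not in removed]))   # 1

Even for a simple path with n edges, reducing the depth to log n requires removing about n/log n edges (still o(n)); the abstract's Ω(n/log log n) examples are graphs that need even more.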

18.
We study remembering Turing machines, that is, Turing machines with the capability to freely access the history of their computations. These devices can detect in one step, via the oracle mechanism, whether the storage tapes have exactly the same contents at the moment of inquiry as at some past moment in the computation. The s(n)-space-bounded remembering Turing machines are shown to be able to recognize exactly the languages in the time-complexity class determined by bounds exponential in s(n). This is proved for deterministic, non-deterministic, and alternating Turing machines.
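The oracle mechanism is easy to picture as a set of snapshots. The Python sketch below is our own toy illustration of that reading (not the authors' formal construction): the storage tape records its contents at every past moment, so the query "are the current contents identical to the contents at some past moment?" reduces to a single set lookup.

class RememberingTape:
    """Toy storage tape that answers, in one oracle step, whether its current
    contents are exactly the same as at some past moment of the computation."""

    def __init__(self):
        self.cells = {}        # sparse tape; a blank cell is simply absent
        self.history = set()   # snapshots of the tape contents at past moments

    def _frozen(self):
        return frozenset(self.cells.items())   # canonical, hashable tape contents

    def write(self, pos, sym):
        self.history.add(self._frozen())       # the current moment becomes a past moment
        if sym:
            self.cells[pos] = sym
        else:
            self.cells.pop(pos, None)          # writing the blank erases the cell

    def seen_before(self):
        """Oracle query: did the tape hold exactly these contents at some past moment?"""
        return self._frozen() in self.history

t = RememberingTape()
t.write(0, "1")
print(t.seen_before())   # False: no past moment had a 1 on the tape
t.write(0, "")           # erase it again
print(t.seen_before())   # True: the all-blank contents occurred at the start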

19.
Informally, the parallel Turing machine (PTM) proposed by Wiedermann is a set of identical usual sequential Turing machines (STMs) cooperating on two common tapes: a storage tape and an input tape. Moreover, STMs, which represent the individual processors of a parallel computer, can multiply themselves in the course of computation. On the other hand, during the past 25 years or so, automata on a three-dimensional tape have been proposed as computational models of three-dimensional pattern processing, and several properties of such automata have been obtained. We proposed a three-dimensional parallel Turing machine (3-PTM), and dealt with a hardware-bounded 3-PTM whose inputs are restricted to cubic ones. We believe that this machine is useful in measuring the parallel computational complexity of three-dimensional images. In this article, we continue the study of the 3-PTM, whose inputs are restricted to cubic ones, and investigate some of its accepting powers. This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25–27, 2007.

20.
A Turing machine with two storage tapes cannot simulate a queue in real time while keeping at least one storage-tape head always within o(n) squares of the start square. This fact may be useful for showing that a two-head tape unit is more powerful in real time than two one-head tape units, as is commonly conjectured.

