Similar Documents
20 similar documents found (search time: 78 ms)
1.
The theory of analog computation aims at modeling computational systems that evolve in a continuous space. Unlike the situation in the discrete setting, there is no unified theory of analog computation. There are several proposed theories, some of which seem quite orthogonal. Some theories can be considered as generalizations of Turing machine theory and classical recursion theory. Among these are recursive analysis and Moore’s class of recursive real functions. Recursive analysis was introduced by Turing (Proc Lond Math Soc 2(42):230–265, 1936), Grzegorczyk (Fundam Math 42:168–202, 1955), and Lacombe (Compt Rend l’Acad Sci Paris 241:151–153, 1955). Real computation in this context is viewed as effective (in the sense of Turing machine theory) convergence of sequences of rational numbers. In 1996 Moore introduced a function algebra that captures his notion of real computation; it consists of some basic functions and their closure under composition, integration and zero-finding. Though this class is inherently unphysical, much work has been directed at stratifying, restricting, and comparing it with other theories of real computation such as recursive analysis and the GPAC. In this article we give a detailed exposition of recursive analysis and Moore’s class and the relationships between them.
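Editorial illustration (not from the article): in the recursive-analysis view, a real x is computable if, given n, a machine can effectively output a rational q with |x − q| ≤ 2^(−n). The hypothetical helper sqrt2_approx below does exactly this for √2 by exact bisection over rationals; the point is that the error bound is computable from n, which is what "effective" convergence means.

    from fractions import Fraction

    def sqrt2_approx(n: int) -> Fraction:
        """Return a rational q with |sqrt(2) - q| <= 2**-n."""
        lo, hi = Fraction(1), Fraction(2)          # sqrt(2) lies in [1, 2]
        while hi - lo > Fraction(1, 2 ** n):
            mid = (lo + hi) / 2
            if mid * mid <= 2:                     # keep the half that still contains sqrt(2)
                lo = mid
            else:
                hi = mid
        return lo

    print(float(sqrt2_approx(30)))                 # ~1.41421356, within 2**-30 of sqrt(2)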

2.
This paper presents persistent Turing machines (PTMs), a new way of interpreting Turing-machine computation, based on dynamic stream semantics. A PTM is a Turing machine that performs an infinite sequence of “normal” Turing machine computations, where each such computation starts when the PTM reads an input from its input tape and ends when the PTM produces an output on its output tape. The PTM has an additional worktape, which retains its content from one computation to the next; this is what we mean by persistence. A number of results are presented for this model, including a proof that the class of PTMs is isomorphic to a general class of effective transition systems called interactive transition systems, and a proof that PTMs without persistence (amnesic PTMs) are less expressive than PTMs. As an analogue of the Church-Turing hypothesis, which relates Turing machines to algorithmic computation, it is hypothesized that PTMs capture the intuitive notion of sequential interactive computation.
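A minimal sketch (my own, not the authors' formalism) of the persistence idea: an unbounded stream of ordinary computations that share a worktape whose contents survive from one computation to the next. The class and method names are hypothetical.

    class PersistentMachine:
        def __init__(self):
            self.worktape = []                 # persists across computations

        def computation(self, inp: str) -> str:
            """One 'normal' computation: read an input, update the worktape, emit an output."""
            self.worktape.append(inp)          # carried-forward state = persistence
            return f"seen {len(self.worktape)} inputs, last={inp}"

    m = PersistentMachine()
    for x in ["a", "b", "a"]:                  # the (potentially infinite) input stream
        print(m.computation(x))                # an amnesic PTM would clear the worktape here

An amnesic PTM corresponds to resetting worktape at the start of every computation, which is why the stream of outputs it can produce is strictly poorer.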

3.
Continuous dynamical systems intuitively seem capable of more complex behavior than discrete systems. If analyzed in the framework of the traditional theory of computation, a continuous dynamical system with countably many quasistable states has at least the computational power of a universal Turing machine. Such an analysis assumes, however, the classical notion of measurement. If measurement is viewed nonclassically, a continuous dynamical system cannot, even in principle, exhibit behavior that cannot be simulated by a universal Turing machine.

4.
The production system is a theoretical model of computation relevant to the artificial intelligence field, allowing for problem-solving procedures such as hierarchical tree search. In this work we explore some of the connections between artificial intelligence and quantum computation by presenting a model for a quantum production system. Our approach focuses on initially developing a model for a reversible production system, which is a simple mapping of Bennett’s reversible Turing machine. We then expand on this result in order to accommodate the requirements of quantum computation. We present the details of how our proposition can be used alongside Grover’s algorithm in order to yield a speedup compared with its classical counterpart. We discuss the requirements associated with such a speedup and how it compares against a similar quantum hierarchical search approach.
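A rough back-of-the-envelope sketch (my own illustration, not the paper's construction) of where the claimed speedup comes from: Grover search over N candidate rule applications needs about (π/4)·√N oracle queries, versus roughly N/2 expected classically.

    import math

    def grover_queries(n_candidates: int) -> int:
        # Standard Grover iteration count for a single marked item among n_candidates.
        return math.ceil(math.pi / 4 * math.sqrt(n_candidates))

    for n in (16, 1024, 10**6):
        print(n, "classical ~", n // 2, "Grover ~", grover_queries(n))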

5.
Consider Turing machines that use a tape infinite in both directions, with the tape alphabet {0,1}. Rado's busy beaver function, ones(n), is the maximum number of 1's such a machine, with n states, started on a blank (all-zero) tape, may leave on its tape when it halts. The function ones(n) is non-computable; in fact, it grows faster than any computable function. Other functions of a similar nature can be defined as well. All involve machines of n states, started on a blank tape. The function time(n) is the maximum number of moves such a machine may make before halting. The function num(n) is the largest number of 1's such a machine may leave on its tape in the form of a single run; and the function space(n) is the maximum number of tape squares such a machine may scan before it halts. This paper establishes new bounds on these functions in terms of each other. Specifically, we bound time(n) by num(n+o(n)), improving on the previously known bound num(3n+6). This result is obtained using a kind of “self-interpreting” Turing machine. We also improve on the trivial relation space(n) ≤ time(n), using a technique of counting crossing sequences.
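To make the definitions of time(n) and ones(n) concrete, here is a small simulator (my own sketch, not the paper's machinery) that runs the well-known 2-state, 2-symbol busy-beaver champion on a blank two-way-infinite tape and reports the number of moves and the number of 1's left at the halt.

    def run(delta, state="A", halt="H"):
        tape, pos, steps = {}, 0, 0            # dict models the two-way-infinite tape
        while state != halt:
            write, move, state = delta[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += 1 if move == "R" else -1
            steps += 1
        return steps, sum(tape.values())       # (moves made, 1's on the tape)

    bb2 = {  # 2-state champion: witnesses time(2) = 6 and ones(2) = 4
        ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
        ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
    }
    print(run(bb2))   # (6, 4)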

6.
Consider Turing machines that read and write the symbols 1 and 0 on a one-dimensional tape that is infinite in both directions, and halt when started on a tape containing all 0's. Rado's busy beaver function ones(n) is the maximum number of 1's such a machine, with n states, may leave on its tape when it halts. The function ones(n) is noncomputable; in fact, it grows faster than any computable function. Other functions with a similar nature can also be defined. The function time(n) is the maximum number of moves such a machine may make before halting. The function num(n) is the largest number of 1's such a machine may leave on its tape in the form of a single run; and the function space(n) is the maximum number of tape squares such a machine may scan before it halts. This paper establishes a variety of bounds on these functions in terms of each other; for example, time(n) ≤ (2n − 1) × ones(3n + 3). In general, we compare the growth rates of such functions, and discuss the problem of characterizing their growth behavior in a more precise way than that given by Rado.

7.
In 1944 the computing machine known as Colossus became operational in support of British cryptanalysis and decryption of German High Command wireless traffic. This first electronic digital and very unconventional computer was not a stored-program general purpose computer in today’s terms, despite printed claims to the contrary. At least one of these asserts Colossus was a Turing machine. While an appropriate Turing machine can simulate the operation of Colossus, that is not an argument for generality of computation. Nor does the behavior of Colossus resemble that of a Turing machine, much less a universal Turing machine (UTM). Nonetheless, we shall see that a UTM could have been implemented on a clustering of the ten Colossus machines installed at Bletchley Park, England, by the end of WWII in 1945. Improvements require even fewer machines. Several advances in input, output, speed, processing, and applications—within the hardware capability of the time and respectful of the specification of Colossus—are also offered.

8.
By restricting the number of nondeterministic moves permitted during a real-time Turing machine computation, an infinite hierarchy can be exhibited between the real-time definable languages of Rosenberg and the quasi-real-time languages of Book and Greibach.

9.
We give an account of the basic determinants of the courses of computation of the Infinite Time Turing Machine model of Hamkins and Kidder, a model of computation which allows for transfinitely many steps of computation, and therefore may accept and output infinite strings of bits. We provide, inter alia, a Normal Form Theorem, and a characterisation of which ordinals start gaps in halting times of such machines.

10.
Infinite level-dependent quasi-birth-and-death (LDQBD) processes can be used to model Markovian systems with countably infinite multidimensional state spaces. Recently it has been shown that sums of Kronecker products can be used to represent the nonzero blocks of the transition rate matrix underlying an LDQBD process for models from stochastic chemical kinetics. This paper extends the form of the transition rates used recently so that a larger class of models, including those of call centers, can be analyzed for their steady-state behavior. The challenge in the matrix-analytic solution then is to compute conditional expected sojourn time matrices of the LDQBD model under low memory and time requirements after truncating its countably infinite state space judiciously. Results of numerical experiments are presented using a Kronecker-based matrix-analytic solution on models with two or more countably infinite dimensions, and rules of thumb regarding better implementations are derived. In doing this, a more recent approach that reduces memory requirements further by enabling the computation of steady-state expectations without having to obtain the steady-state distribution is also considered.
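A minimal numerical sketch (not the paper's model; the rates below are invented) of the representation being exploited: a block of the transition rate matrix is kept as a sum of Kronecker products of small per-dimension factors, so the full block never has to be stored explicitly.

    import numpy as np

    A1 = np.array([[0.0, 2.0], [0.0, 0.0]])      # e.g. a transition in dimension 1, rate 2
    B1 = np.eye(2)                               # dimension 2 unchanged
    A2 = np.eye(2)                               # dimension 1 unchanged
    B2 = np.array([[0.0, 3.0], [0.0, 0.0]])      # e.g. a transition in dimension 2, rate 3

    # block = A1 (x) B1 + A2 (x) B2, a 4x4 piece of the rate matrix
    block = np.kron(A1, B1) + np.kron(A2, B2)
    print(block)

In the matrix-analytic computation one never forms block; matrix-vector products are carried out factor by factor, which is where the memory savings come from.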

11.
As an alternative to previously studied models for space-bounded relative computation, an oracle Turing machine with a space bound on its worktape and an arbitrary number of oracle tapes is considered. Basic properties of the resulting reducibilities are examined.

12.
We introduce a model of computation based on the use of write-once memory. Write-once memory has the property that bits may be set but not reset. Our model consists of a RAM with a small amount of regular memory (such as logarithmic or n^α for α < 1, where n is the size of the problem) and a polynomial amount of write-once memory. Bounds are given on the time required to simulate on write-once memory algorithms which originally run on a RAM with a polynomial amount of regular memory. We attempt to characterize algorithms that can be simulated on our write-once memory model with very little slow-down. A persistent computation is one in which, at all times, the memory state of the computation at any previous point in time can be reconstructed. We show that any data structure or computation implemented on this write-once memory model can be made persistent without sacrificing much in the way of running time or space. The space requirements of algorithms running on the write-once model are studied. We show that general simulations of algorithms originally running on a RAM with regular memory by algorithms running on our write-once memory model require space proportional to the number of steps simulated. In order to study the space complexity further, we define an analogue of the pebbling game, called the pebble-sticker game. A sticker is different from a pebble in that it cannot be removed once placed on a node of the computation graph. As placing pebbles corresponds to writes to regular memory, placing stickers corresponds to writes to the write-once memory. Bounds are shown on pebble-sticker tradeoffs required to evaluate trees and planar graphs. Finally, we define the complexity class WO-PSPACE as the class of problems which can be solved with a polynomial amount of write-once memory, and show that it is equal to P.
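A toy sketch (mine, not the paper's formal model) of the write-once discipline: bits may be set, but any attempt to reset one is an error. This is the constraint that forces the time and space tradeoffs studied above.

    class WriteOnceMemory:
        def __init__(self, size: int):
            self.bits = [0] * size

        def set_bit(self, i: int) -> None:
            self.bits[i] = 1                   # 0 -> 1 is allowed (and idempotent)

        def clear_bit(self, i: int) -> None:
            raise RuntimeError("write-once memory: bits cannot be reset")

    m = WriteOnceMemory(8)
    m.set_bit(3)
    print(m.bits)                              # [0, 0, 0, 1, 0, 0, 0, 0]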

13.
The inherent ambiguity question for context-free languages is shown to be in the Turing degree of unsolvability 0″. The method used is to show that the inherent ambiguity question is equivalent to the finiteness question for r.e. sets, which is of degree 0″. The proof associates a context-free language with each Turing machine. The language has a natural closed-form definition in terms of the Turing machine. The associated language is inherently ambiguous precisely when the Turing machine accepts an infinite set.

14.
We present a simulation of Turing machines by peptide–antibody interactions. In contrast to an earlier simulation, this new technique simulates the computation steps automatically by the interaction between peptides and antibodies and does not rely on a “look-and-do” approach, in which the Turing machine program would be interpreted by an extraneous computing agent. We determine the resource requirements of the simulation. Towards a precise definition for peptide computing we construct a new theoretical model. We examine how the simulations presented in this paper fit this model. We also give conditions on the peptide computing model so that it can be simulated by a Turing machine.

15.
Shagrir, Oron. Minds and Machines, 2002, 12(2):221–240.
There is an intensive discussion nowadays about the meaning of effective computability, with implications for the status and provability of the Church–Turing Thesis (CTT). I begin by reviewing what has become the dominant account of the way Turing and Church viewed, in 1936, effective computability. According to this account, to which I refer as the Gandy–Sieg account, Turing and Church aimed to characterize the functions that can be computed by a human computer. In addition, Turing provided a highly convincing argument for CTT by analyzing the processes carried out by a human computer. I then contend that if the Gandy–Sieg account is correct, then the notion of effective computability has changed after 1936. Today computer scientists view effective computability in terms of finite machine computation. My contention is supported by the current formulations of CTT, which always refer to machine computation, and by the current argumentation for CTT, which is different from the main arguments advanced by Turing and Church. I finally turn to discuss Robin Gandy's characterization of machine computation. I suggest that there is an ambiguity regarding the types of machines Gandy was postulating. I offer three interpretations, which differ in their scope and limitations, and conclude that none provides the basis for claiming that Gandy characterized finite machine computation.

16.
Informally, the parallel Turing machine (PTM) proposed by Wiedermann is a set of identical usual sequential Turing machines (STMs) cooperating on two common tapes: a storage tape and an input tape. Moreover, the STMs, which represent the individual processors of a parallel computer, can multiply themselves in the course of the computation. On the other hand, during the past 25 years or so, automata on a three-dimensional tape have been proposed as computational models of three-dimensional pattern processing, and several properties of such automata have been obtained. We proposed a three-dimensional parallel Turing machine (3-PTM), and dealt with a hardware-bounded 3-PTM whose inputs are restricted to cubic ones. We believe that this machine is useful in measuring the parallel computational complexity of three-dimensional images. In this article, we continue the study of the 3-PTM, whose inputs are restricted to cubic ones, and investigate some of its accepting powers.
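A toy sketch (mine, not Wiedermann's formal model) of the two ingredients named above: several identical sequential machines share one storage tape, and a machine may spawn a copy of itself during the computation. The spawn condition here is arbitrary and purely illustrative.

    shared_storage = {}                      # the common storage tape

    class STM:
        def __init__(self, state="q0", pos=0):
            self.state, self.pos = state, pos

        def step(self, spawn):
            shared_storage[self.pos] = 1     # write to the shared tape
            self.pos += 1
            if self.pos == 2:                # some condition triggers multiplication
                spawn(STM(self.state, self.pos))

    machines = [STM()]
    for _ in range(3):                       # a few synchronous parallel steps
        new = []
        for m in machines:
            m.step(new.append)
        machines.extend(new)
    print(len(machines), sorted(shared_storage))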

17.
B. Fu, R. Beigel. Algorithmica, 1999, 24(2):87–95.
The number of molecular strands used by a molecular algorithm is an important measure of the algorithm's complexity. This measure is also called the volume used by the algorithm. We prove that three important polynomial-time models of molecular computation with bounded volume are equivalent to models of polynomial-time Turing machine computation with bounded nondeterminism. Without any assumption, we show that the Split operation does not increase the power of polynomial-time molecular computation. Assuming a plausible separation between Turing machine complexity classes, the Amplify operation does increase the power of polynomial-time molecular computation.

18.
Turing’s Imitation Game is often viewed as a test for theorised machines that could ‘think’ and/or demonstrate ‘intelligence’. However, contrary to Turing’s apparent intent, it can be shown that Turing’s Test is essentially a test for humans only. Such a test does not provide for theorised artificial intellects with human-like, but not human-exact, intellectual capabilities. As an attempt to bypass this limitation, I explore the notion of shifting the goal posts of the Turing Test, and related tests such as the Total Turing Test, away from the exact imitation of human capabilities, and towards communication with humans instead. While the continued philosophical relevance of such tests is open to debate, the outcome is a different class of tests which are, unlike the Turing Test, immune to failure by means of sub-cognitive questioning techniques. I suggest that attempting to instantiate such tests could potentially be more scientifically and pragmatically relevant to some Artificial Intelligence researchers than instantiating a Turing Test, due to the focus on producing a variety of goal-directed outcomes through communicative methods, as opposed to the Turing Test’s emphasis on ‘fooling’ an Examiner.

19.
Some accepting powers of three-dimensional parallel Turing machines
Informally, the parallel Turing machine (PTM) proposed by Wiedermann is a set of identical usual sequential Turing machines (STMs) cooperating on two common tapes: a storage tape and an input tape. Moreover, the STMs, which represent the individual processors of a parallel computer, can multiply themselves in the course of the computation. On the other hand, during the past 25 years or so, automata on a three-dimensional tape have been proposed as computational models of three-dimensional pattern processing, and several properties of such automata have been obtained. We proposed a three-dimensional parallel Turing machine (3-PTM), and dealt with a hardware-bounded 3-PTM whose inputs are restricted to cubic ones. We believe that this machine is useful in measuring the parallel computational complexity of three-dimensional images. Here, we continue the study of the 3-PTM, whose inputs are restricted to cubic ones, and investigate some of its accepting powers.

20.
Information security is perceived as an important and vital aspect of the survival of any business. Preserving user identity, limiting access to web resources to humans, and restricting ‘bots’ is an ever-challenging area of study. With the increase in computing power and the development of newer approaches towards circumvention and reverse-engineering, the recognition gap between machines and humans is said to be decreasing. The Turing test and its modified versions are in place to deal with such problems, for example through complex algorithms for bot-prevention systems like CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). This paper deals with the use of machine vision for judging the ability of machines to compete with humans in breaking the sequences of security systems like CAPTCHA; the reverse Turing test is put into practice here. Complex image recognition technologies and novel approaches towards using Human Interactive Proofs (HIP) are discussed. The progress of the Turing test over the past 60 years receives due attention at the end. After all this experimentation, it can be said that current machine vision is quite poor and far worse than it is expected to be.
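A hedged sketch (my own illustration, not the paper's method) of the first stage such a machine-vision attack typically needs: binarize the image and segment characters by looking for blank columns. The synthetic "A B 7" image below is a stand-in so the sketch is self-contained; a real attack would start from a captured CAPTCHA image.

    from PIL import Image, ImageDraw

    # Build a stand-in "captcha": white background, dark text.
    img = Image.new("L", (80, 20), 255)
    ImageDraw.Draw(img).text((5, 5), "A B 7", fill=0)

    bw = img.point(lambda p: 0 if p < 128 else 255)          # binarize
    w, h = bw.size
    ink = [sum(1 for y in range(h) if bw.getpixel((x, y)) == 0) for x in range(w)]

    # Runs of inked columns, separated by blank columns, are candidate character glyphs.
    segments, start = [], None
    for x, c in enumerate(ink + [0]):
        if c and start is None:
            start = x
        elif not c and start is not None:
            segments.append((start, x))
            start = None
    print(segments)    # each (x0, x1) slice would be fed to a character classifier

Real CAPTCHAs defeat exactly this pipeline by overlapping glyphs and adding clutter, which is why the paper's overall verdict on current machine vision is so pessimistic.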
