Similar Documents
 20 similar documents found (search time: 62 ms)
1.
What's computation? The received answer is that computation is a computer at work, and a computer at work is that which can be modelled as a Turing machine at work. Unfortunately, as John Searle has recently argued, and as others have agreed, the received answer appears to imply that AI and Cog Sci are a royal waste of time. The argument here is alarmingly simple: AI and Cog Sci (of the “Strong” sort, anyway) are committed to the view that cognition is computation (or brains are computers); but all processes are computations (or all physical things are computers); so AI and Cog Sci are positively silly. I refute this argument herein, in part by defining the locutions ‘x is a computer’ and ‘c is a computation’ in a way that blocks Searle's argument but exploits the hard-to-deny link between What's Computation? and the theory of computation. However, I also provide, at the end of this essay, an argument which, it seems to me, implies not that AI and Cog Sci are silly, but that they're based on a form of computation that is well “beneath” human persons.

2.
Virtual Symposium on Virtual Mind   (Total citations: 2; self-citations: 2; citations by others: 0)
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called ‘virtual’ systems. If such a virtual system is interpretable as if it had a mind, is such a ‘virtual mind’ real? This is the question addressed in this ‘virtual’ symposium, originally conducted electronically among four cognitive scientists. Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: a real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.

3.
In an effort to uncover fundamental differences between computers and brains, this paper identifies computation with a particular kind of physical process, in contrast to interpreting the behaviors of physical systems as one or more abstract computations. That is, whether or not a system is computing depends on how those aspects of the system we consider to be informational physically cause change rather than on our capacity to describe its behaviors in computational terms. A physical framework based on the notion of causal mechanism is used to distinguish different kinds of information processing in a physically-principled way; each information processing type is associated with a particular causal mechanism. The causal mechanism associated with computation is pattern matching, which is physically defined as the fitting of physical structures such that they cause a simple change. It is argued that information processing in the brain is based on a causal mechanism different from pattern matching so defined, implying that brains do not compute, at least not in the physical sense that digital computers do. This causal difference may also mean that computers cannot have mental states. The author can be reached at: Advanced Technology Group, Union Switch & Signal Inc., 5800 Corporate Drive, Pittsburgh, PA 15237 (email is cfboyle%atg@switch.com).

4.
Abstract

Searle (1980, 1989) has produced a number of arguments purporting to show that computer programs, no matter how intelligently they may act, lack ‘intentionality’. Recently, Harnad (1989) has accepted Searle's arguments as having ‘shaken the foundations of Artificial Intelligence’ (p. 5). To deal with Searle's arguments, Harnad has introduced the need for ‘noncomputational devices’ (e.g. transducers) to realize ‘symbol grounding’. This paper critically examines both Searle's and Harnad's arguments and concludes that the foundations of AI remain unchanged by these arguments, that the Turing Test remains adequate as a test of intentionality, and that the philosophical position of computationalism remains perfectly reasonable as a working hypothesis for the task of describing and embodying intentionality in brains and machines.

5.
Any attempt to explain the mind by building machines with minds must confront the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is “everything” a body with a mind can do? Turing's original “pen-pal” version of the Turing Test (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls instead for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.

6.
What are the limits of physical computation? In his ‘Church’s Thesis and Principles for Mechanisms’, Turing’s student Robin Gandy proved that any machine satisfying four idealised physical ‘principles’ is equivalent to some Turing machine. Gandy’s four principles in effect define a class of computing machines (‘Gandy machines’). Our question is: What is the relationship of this class to the class of all (ideal) physical computing machines? Gandy himself suggests that the relationship is identity. We do not share this view. We will point to interesting examples of (ideal) physical machines that fall outside the class of Gandy machines and compute functions that are not Turing-machine computable.

7.
Turing’s notion of human computability is exactly right not only for obtaining a negative solution of Hilbert’s Entscheidungsproblem that is conclusive, but also for achieving a precise characterization of formal systems that is needed for the general formulation of the incompleteness theorems. The broad intellectual context reaches back to Leibniz and requires a focus on mechanical procedures; these procedures are to be carried out by human computers without invoking higher cognitive capacities. The question whether there are strictly broader notions of effectiveness has of course been asked for both cognitive and physical processes. I address this question not in any general way, but rather by focusing on aspects of mathematical reasoning that transcend mechanical procedures. Section 1 discusses Gödel’s perspective on mechanical computability as articulated in his [193?], where he drew a dramatic conclusion from the undecidability of certain Diophantine propositions, namely, that mathematicians cannot be replaced by machines. That theme is taken up in the Gibbs Lecture of 1951; Gödel argues there in greater detail that the human mind infinitely surpasses the powers of any finite machine. An analysis of the argument is presented in Section 2 under the heading Beyond calculation. Section 3 is entitled Beyond discipline and gives Turing’s view of intelligent machinery; it is devoted to the seemingly sharp conflict between Gödel’s and Turing’s views on mind. Their deeper disagreement really concerns the nature of machines, and I’ll end with some brief remarks on (supra-) mechanical devices in Section 4.

8.
9.
From ‘virtual worlds’ to ‘artificial realities’, from ‘cyberspace’ to ‘multisensory synthetic environments’, there is no lack of colourful expressions to describe one of the most recent and the most promising developments of computer graphics. Indeed, this is a radically new tool for representing the world, capable of permanently changing our way of looking at things and the way we work, as well as the familiar concept of a show. What is the definition of a ‘virtual environment’? It is an artificial space, visualized through techniques of synthetic imagery, and in which we can ‘physically’ move about. This impression of ‘physical movement’ is produced by the concurrence of two sensory stimuli, one based on fully stereoscopic vision and the other on the so-called ‘proprioceptive’ sensation of muscular correlation between real bodily movements and apparent changes in the artificial space in which we are ‘immersed’.

10.
No computer that had not experienced the world as we humans had could pass a rigorously administered standard Turing Test. This paper will show that the use of ‘subcognitive’ questions allows the standard Turing Test to indirectly probe the human subcognitive associative concept network built up over a lifetime of experience with the world. Not only can this probing reveal differences in cognitive abilities, but crucially, even differences in physical aspects of the candidates can be detected. Consequently, it is unnecessary to propose even harder versions of the Test in which all physical and behavioural aspects of the two candidates had to be indistinguishable before allowing the machine to pass the Test. Any machine that passed the ‘simpler’ symbols-in symbols-out test as originally proposed by Turing would be intelligent. The problem is that, even in its original form, the Turing Test is already too hard and too anthropocentric for any machine that was not a physical, social and behavioural carbon copy of ourselves to actually pass it. Consequently, the Turing Test, even in its standard version, is not a reasonable test for general machine intelligence. There is no need for an even stronger version of the Test.

11.
In the first section of his celebrated 1936 paper A. Turing says of the machines he defines that at each stage of their operation they can ‘effectively remember’ some of the symbols they have scanned before. In this paper I explicate the motivation and content of this remark of Turing's, and argue that it reveals what could be labeled as a connectionist conception of the human mind.

12.
Kugel, Peter. Minds and Machines, 2002, 12(4): 563-579
According to the conventional wisdom, Turing (1950) said that computing machines can be intelligent. I don't believe it. I think that what Turing really said was that computing machines (computers limited to computing) can only fake intelligence. If we want computers to become genuinely intelligent, we will have to give them enough initiative (Turing, 1948, p. 21) to do more than compute. In this paper, I want to try to develop this idea. I want to explain how giving computers more ‘initiative’ can allow them to do more than compute. And I want to say why I believe (and believe that Turing believed) that they will have to go beyond computation before they can become genuinely intelligent.

13.
This paper argues that the idea of a computer is unique. Calculators and analog computers are not different ideas about computers, and nature does not compute by itself. Computers, once clearly defined in all their terms and mechanisms, rather than enumerated by behavioral examples, can be more than instrumental tools in science, and more than a source of analogies and taxonomies in philosophy. They can help us understand semantic content and its relation to form. This can be achieved because they have the potential to do more than calculators, which are computers that are designed not to learn. Today’s computers are not designed to learn; rather, they are designed to support learning; therefore, any theory of content tested by computers that currently exist must be of an empirical, rather than a formal nature. If they are designed someday to learn, we will see a change in roles, requiring an empirical theory about the Turing architecture’s content, using the primitives of learning machines. This way of thinking, which I call the intensional view of computers, avoids the problems of analogies between minds and computers. It focuses on the constitutive properties of computers, such as showing clearly how they can help us avoid the infinite regress in interpretation, and how we can clarify the terms of the suggested mechanisms to facilitate a useful debate. Within the intensional view, syntax and content in the context of computers become two ends of physically realizing correspondence problems in various domains.

14.
A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch–Jozsa and Simon’s problem, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch–Jozsa problem with probability 1 using only one oracle query, and Simon’s problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable in a classical probabilistic Turing machine. This shows that the Deutsch–Jozsa and Simon’s problem do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.
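For orientation, the following sketch sets up the standard Deutsch–Jozsa problem that the abstract refers to. It is not the paper's one-query framework; it is the baseline deterministic classical algorithm, which in the worst case needs 2^(n-1) + 1 oracle queries to decide whether a promised function is constant or balanced. All names here are illustrative.

```python
from itertools import product

def classify(f, n):
    """Decide whether f: {0,1}^n -> {0,1}, promised constant or balanced,
    is constant or balanced; also return the number of oracle queries used."""
    first = None
    for i, x in enumerate(product([0, 1], repeat=n)):
        v = f(x)                            # one oracle query
        if first is None:
            first = v
        elif v != first:
            return "balanced", i + 1        # any disagreement settles it
        if i + 1 == 2 ** (n - 1) + 1:
            return "constant", i + 1        # a majority of inputs agree
    return "constant", 2 ** n

const = lambda x: 0          # constant oracle
balanced = lambda x: x[0]    # balanced oracle: half of all inputs map to 1

print(classify(const, 3))     # ('constant', 5)
print(classify(balanced, 3))  # ('balanced', 5)
```

The point of the abstract's result is that its simulation framework matches the quantum algorithm's single query, rather than the 2^(n-1) + 1 worst case shown here.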

15.
In classical computation, a “write-only memory” (WOM) is little more than an oxymoron, and the addition of WOM to a (deterministic or probabilistic) classical computer brings no advantage. We prove that quantum computers that are augmented with WOM can solve problems that neither a classical computer with WOM nor a quantum computer without WOM can solve, when all other resource bounds are equal. We focus on realtime quantum finite automata, and examine the increase in their power effected by the addition of WOMs with different access modes and capacities. Some problems that are unsolvable by two-way probabilistic Turing machines using sublogarithmic amounts of read/write memory are shown to be solvable by these enhanced automata.

16.
The purpose of this article is to show why consciousness and thought are not manifested in digital computers. Analyzing the rationale for claiming that the formal manipulation of physical symbols in Turing machines would emulate human thought, the article attempts to show why this proved false. This is because the reinterpretation of designation and meaning to accommodate physical symbol manipulation eliminated their crucial functions in human discourse. Words have denotations and intensional meanings because the brain transforms the physical stimuli received from the microworld into a qualitative, macroscopic representation for consciousness. Lacking this capacity as programmed machines, computers have no representations for their symbols to designate and mean. Unlike human beings in which consciousness and thought, with their inherent content, have emerged because of their organic natures, serial processing computers or parallel distributed processing systems, as programmed electrical machines, lack these causal capacities.

17.
Fodor’s theory of concepts holds that the psychological capacities, beliefs or intentions which determine how we use concepts do not determine reference. Instead, causal relations of a specific kind between properties and our dispositions to token a concept are claimed to do so. Fodor does admit that there needs to be some psychological mechanisms mediating the property–concept tokening relations, but argues that they are purely accidental for reference. In contrast, I argue that the actual mechanisms that sustain the reference determining concept tokening relations are necessary for reference. Fodor’s atomism is thus undermined, since in order to refer with a concept it is necessary to possess some specific psychological capacities.

18.
Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. But even after computers and computation have been successfully distinguished from other kinds of things, mental states will not just be the implementations of the right symbol systems, because of the symbol grounding problem: The interpretation of a symbol system is not intrinsic to the system; it is projected onto it by the interpreter. This is not true of our thoughts. We must accordingly be more than just computers. My guess is that the meanings of our symbols are grounded in the substrate of our robotic capacity to interact with that real world of objects, events and states of affairs that our symbols are systematically interpretable as being about.

19.
This paper deals with the question: what are the key requirements for a physical system to perform digital computation? Time and again cognitive scientists are quick to employ the notion of computation simpliciter when asserting basically that cognitive activities are computational. They employ this notion as if there was or is a consensus on just what it takes for a physical system to perform computation, and in particular digital computation. Some cognitive scientists in referring to digital computation simply adhere to Turing's notion of computability. Classical computability theory studies what functions on the natural numbers are computable and what mathematical problems are undecidable. Whilst a mathematical formalism of computability may perform a methodological function of evaluating computational theories of certain cognitive capacities, concrete computation in physical systems seems to be required for explaining cognition as an embodied phenomenon. There are many non-equivalent accounts of digital computation in physical systems. I examine only a handful of those in this paper: (1) Turing's account; (2) The triviality ‘account’; (3) Reconstructing Smith's account of participatory computation; (4) The Algorithm Execution account. My goal in this paper is twofold. First, it is to identify and clarify some of the underlying key requirements mandated by these accounts. I argue that these differing requirements justify a demand that one commits to a particular account when employing the notion of computation in regard to physical systems. Second, it is to argue that despite the informative role that mathematical formalisms of computability may play in cognitive science, they do not specify the relationship between abstract and concrete computation.

20.
This paper presents persistent Turing machines (PTMs), a new way of interpreting Turing-machine computation, based on dynamic stream semantics. A PTM is a Turing machine that performs an infinite sequence of “normal” Turing machine computations, where each such computation starts when the PTM reads an input from its input tape and ends when the PTM produces an output on its output tape. The PTM has an additional worktape, which retains its content from one computation to the next; this is what we mean by persistence. A number of results are presented for this model, including a proof that the class of PTMs is isomorphic to a general class of effective transition systems called interactive transition systems; and a proof that PTMs without persistence (amnesic PTMs) are less expressive than PTMs. As an analogue of the Church-Turing hypothesis which relates Turing machines to algorithmic computation, it is hypothesized that PTMs capture the intuitive notion of sequential interactive computation.
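The macrostep structure described above can be sketched in a few lines, assuming each "normal" computation is abstracted as a function from (input, worktape) to (output, worktape). This is an illustrative toy, not the paper's formal model; the function names are invented here.

```python
def run_ptm(step, inputs, worktape=""):
    """Drive a PTM-style machine over an input stream: each macrostep reads
    one input, may consult and rewrite the worktape, and emits one output.
    The worktape persists from one computation to the next."""
    outputs = []
    for x in inputs:
        out, worktape = step(x, worktape)
        outputs.append(out)
    return outputs

def seen_before(x, tape):
    """Example step: report whether this input occurred earlier in the stream.
    An amnesic PTM, whose worktape is erased between computations, cannot
    compute this stream function."""
    seen = set(tape.split(",")) if tape else set()
    out = "yes" if x in seen else "no"
    seen.add(x)
    return out, ",".join(sorted(seen))

print(run_ptm(seen_before, ["a", "b", "a"]))  # ['no', 'no', 'yes']
```

The third answer depends on worktape content carried over from the first macrostep, which is exactly the expressiveness gap between persistent and amnesic PTMs that the abstract mentions.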


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号