Similar Documents

20 similar documents found (search time: 46 ms)
1.
Abstract

The powerful functional capacities (Turing computability) of digital computers are, in part, responsible for fostering the notion that understanding mind is simply a matter of determining the right algorithm—the so-called ‘strong AI’ position that cognition is computation, which holds that computers can have mental states and that behavioural equivalence is a sufficient test for the existence of mind (i.e. the Turing Test). John Searle's Chinese Room thought experiment (Searle 1980), however, raises the possibility that pure symbol manipulation does not capture the essence of mind (e.g. intentionality) because it lacks certain features—so-called ‘causal properties’. The current status of this debate borders on being a stalemate because, on the one hand, empirical verification based on behaviour is, at the very least, a long way from being decisive while, on the other hand, intuitive arguments about mind are subjective and, as such, untestable. This paper, in contrast to arguing intuitively or relying on behaviour, challenges the computational theory of mind on purely physical grounds, thereby avoiding such an impasse. In particular, it is shown that digital computers are physically limited by the very process that underlies their powerful functional capacities, pattern matching. This physical limitation, it is claimed, is fundamental to deciding whether computation is sufficient for understanding mind. Pattern matching is the physical process by which symbols in computers are causal and, therefore, serves as the physical basis for information processing in computers. To demonstrate the physical limitations of pattern matching, a general, causal framework based on how physical change is brought about is introduced, enabling an analysis of the ways in which objects physically embody and transmit information. Physical interactions are causally analysed into ‘informing’ categories which are hypothesized to be Searle's causal properties.
It is claimed that differences in these properties explain the cognitively relevant physical differences between computers and brains.

2.
Searle's Chinese Box: Debunking the Chinese Room Argument   Cited by: 4 (self-citations: 2, others: 2)
Hauser, Larry. Minds and Machines, 1997, 7(2): 199–226
John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper – claims that computers can think or do think – Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains." On a more carefully crafted understanding – understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper – the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church – "someday my prince of an AI program will come" – believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.

3.
Nicholas Agar. AI & Society, 2012, 27(4): 431–436
In a paper in this journal, Neil Levy challenges Nicholas Agar’s argument for the irrationality of mind-uploading. Mind-uploading is a futuristic process that involves scanning brains and recording relevant information which is then transferred into a computer. Its advocates suppose that mind-uploading transfers both human minds and identities from biological brains into computers. According to Agar’s original argument, mind-uploading is prudentially irrational. Success relies on the soundness of the program of Strong AI—the view that it may someday be possible to build a computer that is capable of thought. Strong AI may in fact be false, an eventuality with dire consequences for mind-uploading. Levy argues that Agar’s argument relies on mistakes about the probability of failed mind-uploading and underestimates what is to be gained from successfully mind-uploading. This paper clarifies Agar’s original claims about the likelihood of mind-uploading failure and offers further defense of a pessimistic evaluation of success.

4.
In their Minds and Machines essay How would you know if you synthesized a Thinking Thing? (Kary & Mahner, Minds and Machines, 12(1), 61–86, 2002), Kary and Mahner have chosen to occupy a high ground of materialism and empiricism from which to attack the philosophical and methodological positions of believers in artificial intelligence (AI) and artificial life (AL). In this review I discuss some of their main arguments as well as their philosophical foundations. Their central argument, ‘AI is Platonism’, which is based on a particular interpretation of the notion of ‘definition’ and used as a critique against AI, can be counter-criticized from two directions: first, anti-Platonism is not a necessary precondition for criticizing AI, because outspoken Platonist criticism of AI is already known (Penrose, The emperor’s new mind (with a foreword by M. Gardner), 1989). Second, even if AI were essentially ‘Platonism’, this would not be a sufficient argument for proving AI wrong. In my counter-criticism I assume a more or less Popperian position by emphasizing the openness of the future: not by quasi-Scholastic arguments (like Kary and Mahner’s), but only after being confronted with a novel ‘thinking thing’ by future AI engineers can we start to analyze its particular properties. (A historical analogy illustrates my position: in the 19th century, mechanized aviation was widely regarded as impossible—only natural organisms (such as birds or bees) could fly, and no science of aerodynamics or aviation existed. Only after some non-scientific technicians had confronted their astonished fellows with the first (obviously) flying machine did the science of ‘Artificial Aviation’ come into existence, motivated by the need to understand and master that challenging and puzzling new phenomenon.)

5.
The term the artificial can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of the dominant philosophy of logical positivism. By contrast, artificial intelligence (AI) adopted an anti-positivist position which left it marginalised until the 1980s when two camps emerged: technical AI which reverted to positivism, and strong AI which reified intelligence. Strong AI's commitment to the computer as a symbol processing machine and its use of models links it to late-modernism. The more directly experiential Virtual Reality (VR) more closely reflects the contemporary cultural climate of postmodernism. It is VR, rather than AI, that is more likely to form the basis of a culture of the artificial.

6.
Though it's difficult to agree on the exact date of their union, logic and artificial intelligence (AI) were married by the late 1950s, and, at least during their honeymoon, were happily united. What connubial permutation do logic and AI find themselves in now? Are they still (happily) married? Are they divorced? Or are they only separated, both still keeping alive the promise of a future in which the old magic is rekindled? This paper is an attempt to answer these questions via a review of six books. Encapsulated, our answer is that (i) logic and AI, despite tabloidish reports to the contrary, still enjoy matrimonial bliss, and (ii) only their future robotic offspring (as opposed to the children of connectionist AI) will mark real progress in the attempt to understand cognition.

7.
This experiment extended the Computers Are Social Actors (CASA) paradigm by examining how output modality (text plus cartoon character vs. synthetic speech), computer gender (male vs. female), and user gender (male vs. female) moderate the ways in which people respond to computers that flatter. Specifically, participants played a trivia game with a computer, which they knew might provide incorrect answers. Participants in the generic-comment condition received strictly factual feedback, whereas those in the flattery condition were given additional remarks praising their performance. Consistent with the study by Fogg and Nass [1997, Silicon sycophants: the effects of computers that flatter, International Journal of Human–Computer Studies 46, 551–561], flattery led to more positive overall impressions and performance evaluations of the computer, but such effects were found only in the text plus character condition and among women. In addition, flattery increased participants’ suspicion about the validity of the computer's feedback and lowered conformity to the computer's suggestions. Participants conformed more to the male than female computers when computer gender was manifested in gendered cartoon characters in the text condition, with no corresponding effects in the speech condition. Results suggest that synthetic speech output might suppress social responses to computers, such as flattery effects and gender stereotyping.

8.
The effect of explaining the value of text review was studied. Students (n = 136) were randomly assigned to read a text passage displayed by computer with or without an explanation and in three presentation modes: required or optional review when answers to adjunct questions were incorrect, or reading the text without questions. Review groups learned more than those merely reading the text, and an interaction between students' prior knowledge and explanations indicated that explanation facilitated the learning of students with little familiarity with the material, while slightly impairing knowledgeable students' performance. The implications of these findings for using computers for such training and for ATI research are discussed.

9.
Different worlds? A comparison of young people's home and school ICT use   Cited by: 1 (self-citations: 0, others: 1)
Abstract

This paper explores young people's access to and use of computers in the home and at school. Drawing on a questionnaire survey, conducted in 2001 and 2003 with over 1800 children in the South‐West of England, on group interviews in school with over 190 children, and on visits to 11 families, the paper discusses: (1) children's current use of computers in the home and in school; (2) changing patterns of computer use in home and school between 2001 and 2003; (3) the impact of age, gender and socio‐economic area on young people's computer use in home and school. The paper then goes on to discuss young people's perceptions of the differences between home and school use of computers and to address the question of whether young people's home and school use of information and communications technologies (ICTs) are really ‘different worlds’. Through analysis of both quantitative and qualitative data, the paper proposes that the boundaries between home and school are less distinct in terms of young people's ICT use than has previously been proposed, in particular through young people's production of virtual social networks through the use of instant messenger that seem to mirror young people's social school contexts. The paper concludes by suggesting that effective home–school link strategies might be adopted through the exploration of the permeability of home/school boundaries.

10.
The expression “artificial intelligence” (AI) was introduced by John McCarthy, and the official birth of AI is unanimously considered to be the 1956 Dartmouth Conference. Thus, AI turned fifty in 2006. How did AI begin? Several differently motivated analyses have been proposed as to its origins. In this paper a brief look at those that might be considered steps towards Dartmouth is attempted, with the aim of showing how a number of research topics and controversies that marked the short history of AI were touched on, or fairly well stated, during the year immediately preceding Dartmouth. The framework within which those steps were taken was the development of digital computers. Earlier computer applications in areas such as complex decision making and management, at that time dealt with by operations research techniques, were important in this story. The time was ripe for AI's intriguingly tumultuous development, marked as it has been by hopes and defeats, successes and difficulties.

11.

This paper uses the well-known PVM (Parallel Virtual Machine) software with several personal computers, running the widespread Microsoft Windows '98 operating system, to construct a heterogeneous PC cluster. Drawing on related research in PC cluster systems and cluster-computing theory, we apply this heterogeneous PC cluster computing system to generate more secure parameters for public key cryptosystems such as RSA. Because of the restrictions that each parameter's underlying mathematical theory imposes, enormous computational power is needed to achieve good performance in generating these parameters. In this paper, we combine heterogeneous PCs with the PVM software to generate cryptosystem parameters that conform to today's security specifications and requirements. We generate these data in practice to show that a computer cluster can effectively accumulate enormous computational power, and then demonstrate this cluster-computing application by finding the strong primes needed in some public key cryptosystems.
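The distributed prime search the abstract describes can be sketched in miniature. The following Python sketch is our own illustration, not the paper's PVM/Windows '98 code: `multiprocessing` stands in for the PC cluster, trial division for a production primality test, and the simpler "safe prime" condition (p and (p−1)/2 both prime) for the paper's full strong-prime criteria; the number sizes are toy values.

```python
# Sketch: dividing a search for cryptographic primes among workers,
# in the spirit of a PVM cluster. "Safe prime" is a simplified
# stand-in for the paper's strong-prime conditions; sizes are tiny
# for illustration only.
from multiprocessing import Pool

def is_prime(n: int) -> bool:
    """Deterministic trial division -- adequate for the demo sizes."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def find_safe_prime(start: int) -> int:
    """Scan upward from an odd start until p and (p - 1) // 2 are both prime."""
    p = start | 1  # force odd
    while not (is_prime(p) and is_prime((p - 1) // 2)):
        p += 2
    return p

if __name__ == "__main__":
    # Each worker searches its own region, mimicking the cluster's
    # division of labour among heterogeneous PCs.
    starts = [5_000, 6_000, 7_000, 8_000]
    with Pool(len(starts)) as pool:
        for s, p in zip(starts, pool.map(find_safe_prime, starts)):
            print(f"region {s}: safe prime {p}")
```

A real deployment would replace trial division with Miller–Rabin and use 1024-bit-plus candidates, but the division-of-labour pattern is the same.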

12.
In order to examine the impact of negative attitudes toward computer usage, a survey was administered that measured attitudes toward computers, the level of job satisfaction in the work environment, and general attitudes toward the organization. Twenty-nine employees at a real estate office completed a 24-item survey during a regularly scheduled employee meeting. Attitudes toward computers were generally positive; however, about one third of the sample felt incompetent in their ability to use computers, and 21% said that they avoid using computers altogether. Results also indicated that feelings of frustration and confusion about the use of computers were associated with lower job satisfaction. While negative attitudes towards computers were related to one's attitudes toward the job, these attitudes were unrelated to one's feelings toward the company. Thus, computerphobia may have a strong link to individual job satisfaction, with any consequence for overall attitudes toward the company operating through prolonged dissatisfaction with one's job.

13.
Advances in computer technology are now so profound that the arithmetic capability and repertoire of computers can and should be expanded. Nowadays the elementary floating-point operations +, −, ×, / give computed results that coincide with the rounded exact result for any operands. Advanced computer arithmetic extends this accuracy requirement to all operations in the usual product spaces of computation: the real and complex vector spaces as well as their interval correspondents. This enhances the mathematical power of the digital computer considerably. A new computer operation, the scalar product, is fundamental to the development of advanced computer arithmetic.

This paper studies the design of arithmetic units for advanced computer arithmetic. Scalar product units are developed for different kinds of computers like personal computers, workstations, mainframes, super computers or digital signal processors. The new expanded computational capability is gained at modest cost. The units put a methodology into modern computer hardware which was available on old calculators before the electronic computer entered the scene. In general the new arithmetic units increase both the speed of computation as well as the accuracy of the computed result. The circuits developed in this paper show that there is no way to compute an approximation of a scalar product faster than the correct result.

A collection of constructs in terms of which a source language may accommodate advanced computer arithmetic is described in the paper. The development of programming languages in the context of advanced computer arithmetic is reviewed. The simulation of the accurate scalar product on existing, conventional processors is discussed. Finally the theoretical foundation of advanced computer arithmetic is reviewed and a comparison with other approaches to achieving higher accuracy in computation is given. Shortcomings of existing processors and standards are discussed.
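The accuracy claim can be illustrated in software; this is our own sketch, not the hardware units the paper designs. Exact accumulation, here via Python's arbitrary-precision `Fraction` standing in for a long accumulator, with a single rounding at the end, recovers a dot product that naive left-to-right float accumulation destroys.

```python
# Illustration: an exact scalar product vs. naive float accumulation.
# Fraction gives exact rational arithmetic, playing the role of the
# long accumulator described in the paper; we round exactly once.
from fractions import Fraction

def dot_naive(xs, ys):
    """Left-to-right float accumulation, as ordinary FPUs do it."""
    acc = 0.0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def dot_exact(xs, ys):
    """Accumulate exactly in rationals; a single final rounding to float."""
    acc = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
    return float(acc)

# An ill-conditioned example: the 1.0 is swallowed by 1e16 in the
# naive sum, then cancelled away.
xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(dot_naive(xs, ys))  # catastrophic cancellation: not 1.0
print(dot_exact(xs, ys))  # exact accumulation recovers 1.0
```

A software emulation like this is slow; the paper's point is that a hardware long accumulator delivers the exact result at full speed.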

14.
Computer literacy and inquiry learning: when geeks learn less   Cited by: 2 (self-citations: 0, others: 2)
Abstract

A low level of computer literacy has often been hypothesized as constituting a disadvantage in knowledge acquisition. However, within the field of computer‐supported inquiry learning, systematic investigations of these purported relations have not been conducted. This classroom study investigates the role of computer literacy (procedural computer‐related knowledge, self‐confidence in using the computer, and familiarity with computers) as a learning prerequisite for knowledge acquisition, and analyses the learners' patterns of media use as processes that might explain this role. Thirty‐seven students from two final classes of a secondary school worked in pairs on the project ‘How far does light go?’ in the Web‐based Inquiry Science Environment. Findings indicated no significant relation of either procedural computer‐related knowledge or self‐confidence in using the computer to knowledge acquisition. However, students with greater familiarity with computers acquired significantly less knowledge. In the light of the patterns of media use, these findings might be explained by different navigation styles adopted by students with high and low familiarity with computers: students with high familiarity with computers exhibit more shallow processing strategies (‘browsing’) which are less functional for learning.

15.
Ergonomics, 2012, 55(10): 1611–1623
Children's computer use is rapidly growing, together with reports of related musculoskeletal outcomes. Models and theories of adult-related risk factors demonstrate multivariate risk factors associated with computer use. Children's use of computers is different from adults' computer use at work. This study developed and tested a child-specific model demonstrating multivariate relationships between musculoskeletal outcomes, computer exposure and child factors. Using pathway modelling, factors such as gender, age, television exposure, computer anxiety, sustained attention (flow), socio-economic status and somatic complaints (headache and stomach pain) were found to have effects on children's reports of musculoskeletal symptoms. The potential for children's computer exposure to follow a dose–response relationship was also evident. Developing a child-related model can assist in understanding risk factors for children's computer use and support the development of recommendations to encourage children to use this valuable resource in educational, recreational and communication environments in a safe and productive manner.

Practitioner Summary: Computer use is an important part of children's school and home life. Application of this developed model, which encapsulates related risk factors, enables practitioners, researchers, teachers and parents to develop strategies that assist young people to use information technology for school, home and leisure in a safe and productive manner.

16.
17.
The aim of the present paper is to study users' perception of computers and human beings as advice givers in problem-solving situations. It will be asked if people's self-confidence and their perception of the advice vary depending on the origin of the advice.

Two studies showed somewhat different results. In the first study, people were given advice either by a (putative) computer or by a human being. Their self-confidence did not vary with the origin of the advice, but with the correctness of their own answer as well as of the advice. The perception of this advice did not differ for the two situations. Their general trust in computers was, however, much less than their trust in human beings. In the second study, the subjects had to attribute advice to a computer or a human being, without being told from whom the advice emanated. For Swedish subjects, the ratings showed consistently higher attributions to human beings regarding knowledge and explanation value of advice and higher attributions to computers regarding trust and understanding. For Indian subjects, humans always received the higher attributions.

It was concluded that people's perception of computers seems to be related both to existing attitudes and to their experience of the advice given. Knowledge in the domain seems to be an important factor influencing the perception of the computer as trustworthy.

18.
The development of computers at the University of Manchester in the late 1940s is discussed. Scientific computation in Britain during and immediately after World War II is briefly described. Computers at the University were initially influenced by M.H.A. Newman and F.C. Williams. Biographies of these two men are given, and their wartime work is examined in the light of computer development at Manchester. The development at Manchester of the first prototype stored-program computer, the Manchester baby, is also discussed.

19.
As progress on building quantum computers continues to advance, first-generation practical quantum computers will be available to ordinary users in a cloud style similar to today's IBM Quantum Experience. Clients can remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to protect the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step toward constructing a framework of blind quantum computation for the hybrid system, which provides a more feasible way toward scalable blind quantum computation.

20.
James Fetzer criticizes the computational paradigm prevailing in cognitive science by questioning what he takes to be its most elementary ingredient: that cognition is computation across representations. He argues that if cognition is taken to be a purposive, meaningful, algorithmic problem solving activity, then computers are incapable of cognition. Instead, they appear to be signs of a special kind that can facilitate computation. He proposes the conception of minds as semiotic systems as an alternative paradigm for understanding mental phenomena, one that seems to overcome the difficulties of computationalism. Now, I argue that with computer systems dealing with scientific discovery, the matter is not so simple. The alleged superiority of humans using signs to stand for something other over computers being merely “physical symbol systems” or “automatic formal systems” is only easy to establish in everyday life, but becomes far from obvious when scientific discovery is at stake. In science, as opposed to everyday life, the meaning of symbols is, apart from very low-level experimental investigations, defined implicitly by the way the symbols are used in explanatory theories or experimental laws relevant to the field, and in consequence, human and machine discoverers are much more on a par. Moreover, the great practical success of the genetic programming method and recent attempts to apply it to the automatic generation of cognitive theories seem to show that computer systems are capable of very efficient problem solving activity in science which is neither purposive nor meaningful, nor algorithmic. This, I think, undermines Fetzer’s argument that computer systems are incapable of cognition because computation across representations is bound to be a purposive, meaningful, algorithmic problem solving activity.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号