Similar Documents
 20 similar documents found.
1.
2.
James Fetzer criticizes the computational paradigm prevailing in cognitive science by questioning what he takes to be its most elementary ingredient: that cognition is computation across representations. He argues that if cognition is taken to be a purposive, meaningful, algorithmic problem-solving activity, then computers are incapable of cognition. Instead, they appear to be signs of a special kind that can facilitate computation. He proposes the conception of minds as semiotic systems as an alternative paradigm for understanding mental phenomena, one that seems to overcome the difficulties of computationalism. I argue that with computer systems dealing with scientific discovery, the matter is not so simple. The alleged superiority of humans, who use signs to stand for something other than themselves, over computers, which are merely “physical symbol systems” or “automatic formal systems”, is easy to establish in everyday life but becomes far from obvious when scientific discovery is at stake. In science, as opposed to everyday life, the meaning of symbols is, apart from very low-level experimental investigations, defined implicitly by the way the symbols are used in the explanatory theories or experimental laws relevant to the field; in consequence, human and machine discoverers are much more on a par. Moreover, the great practical success of the genetic programming method and recent attempts to apply it to the automatic generation of cognitive theories seem to show that computer systems are capable of very efficient problem-solving activity in science that is neither purposive nor meaningful nor algorithmic. This, I think, undermines Fetzer’s argument that computer systems are incapable of cognition because computation across representations is bound to be a purposive, meaningful, algorithmic problem-solving activity.

3.
《Advanced Robotics》2013,27(3):271-287
An architecture that combines two types of processing, logical symbol processing and stimulus-reaction type parallel processing, seems promising for intelligent systems. Since symbol processing is constructed by a top-down approach while stimulus-reaction type processing is built up by a bottom-up approach, a discrepancy known as the 'symbol grounding problem' arises. This paper presents a framework for integrating symbol processing and stimulus-reaction type processing with a view to solving the symbol grounding problem. In this framework, designers or users employ conventional heuristic symbols, while the systems use self-organized symbols derived from their own characteristics and environment. Translation from one kind of symbol to the other fuses the two. The self-organized symbols are both grounded and manipulable. Navigation of an autonomous robot is simulated, and the acquisition of manipulable grounded symbols with the proposed framework is demonstrated. Since the constructed robot is equipped only with a stimulus-reaction type controller, it is robust against noise and temporary geometrical changes.
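A minimal sketch, not taken from the paper, of how a hybrid agent might pair a bottom-up stimulus-reaction controller with self-organized symbols that are then translated into designer-level symbols. The cluster labels, thresholds, and translation table are illustrative assumptions.

```python
import random

# Hypothetical self-organized symbols: coarse clusters over raw range-sensor readings.
def self_organized_symbol(distances):
    """Map raw (left, front, right) sensor readings to a cluster label."""
    left, front, right = distances
    if front < 0.3:
        return "cluster_0"   # emerged from situations with obstacles ahead
    if left < 0.3:
        return "cluster_1"   # emerged from walls on the left
    if right < 0.3:
        return "cluster_2"   # emerged from walls on the right
    return "cluster_3"       # open space

# Translation table linking self-organized symbols to the designer's heuristic symbols.
TRANSLATION = {
    "cluster_0": "OBSTACLE_AHEAD",
    "cluster_1": "WALL_LEFT",
    "cluster_2": "WALL_RIGHT",
    "cluster_3": "FREE_SPACE",
}

# Stimulus-reaction controller: raw readings map directly to motor commands.
def reactive_controller(distances):
    left, front, right = distances
    if front < 0.3:
        return "turn_left" if left > right else "turn_right"
    return "go_forward"

if __name__ == "__main__":
    for _ in range(5):
        readings = tuple(round(random.uniform(0.1, 1.0), 2) for _ in range(3))
        sym = self_organized_symbol(readings)
        print(readings, "->", TRANSLATION[sym], "| action:", reactive_controller(readings))
```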

4.
The symbol grounding problem (SGP), which remains difficult for AI and the philosophy of information, was recently scrutinised by Taddeo and Floridi (Solving the symbol grounding problem: A critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence, 17, 419–445, 2005; A praxical solution of the symbol grounding problem. Minds and Machines, 17, 369–389, doi:10.1007/s11023-007-9081-3, 2007). However, their own solution to the SGP, underwritten by Action-based Semantics, although different from other solutions, does not seem to be satisfactory. Moreover, it does not satisfy the authors' own principle, which they dub the 'Zero Semantic Commitment Condition'. In this paper, Taddeo and Floridi's solution is criticised in particular for the excessively liberal relationship between symbols and the internal states of agents, which is conceived in terms of levels of abstraction. The notion of action also seems seriously defective in their theory. Because the grounded symbols lack the possibility of misrepresenting, they remain useless for the cognitive system itself, and it is unclear why they should be grounded in the first place, as the role of grounded symbols is not specified by the proposed solution. At the same time, theirs is probably one of the best-developed attempts to solve the SGP, and it shows that naturalised semantics can benefit from taking artificial intelligence seriously.

5.
6.
The computational conception of the mind that dominates cognitive science assumes that thought processes involve the computation of algorithms or the execution of functions. Human minds turn out to be automatic formal systems or physical syntax-processing systems. The objection has often been posed that systems of this kind do not possess sufficient conditions for mentality, because the syntax they process may be meaningless for those systems. That problem concerns their semantic content. Here an additional objection is posed that systems of this kind, as normatively-directed, problem-solving causal systems, impose conditions that are not necessary for mentality, because many if not most human thought processes violate them. This problem concerns their causal character. The computational conception reflects an overgeneralization about human thought processes based on special kinds of thinking and thus seems to be trivial or false.

7.
I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new 'essentialist' reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed.

8.
Virtual Symposium on Virtual Mind
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer), their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called ‘virtual’ systems. If such a virtual system is interpretable as if it had a mind, is such a ‘virtual mind’ real? This is the question addressed in this ‘virtual’ symposium, originally conducted electronically among four cognitive scientists. Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: a real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.

9.
In this paper, we present recent cognitive robotics studies on the integration of language and cognition, demonstrating how the language acquired by robotic agents can be directly grounded in action representations. These studies are characterized by the hypothesis that symbols are directly grounded in the agents' own categorical representations while at the same time standing in logical (e.g. syntactic) relationships with other symbols. The two robotics studies combine cognitive robotics with neural modeling methodologies such as connectionist models and modeling field theory. Simulations demonstrate the efficacy of the mechanisms of action grounding of language and of symbol grounding transfer in agents that acquire a lexicon via imitation and linguistic instruction. The paper also discusses the scientific and technological implications of this approach.

10.
We consider the symbol grounding problem, and apply to it philosophical arguments against Cartesianism developed by Sellars and McDowell: the problematic issue is the dichotomy between inside and outside which the definition of a physical symbol system presupposes. Surprisingly, one can question this dichotomy and still do symbolic computation: a detailed examination of the hardware and software of serial ports shows this.

11.
12.
Services in ubiquitous computing are heterogeneous in nature. To be pervasive, these services should be defined in terms of their functionality and capabilities rather than by meaningless Universally Unique IDentifiers (UUIDs) or service types. Clients can then access the proper service through semantic requests rather than a pre-configured profile. In this paper, we study the requirements for making semantic queries feasible in service discovery processes. Current discovery protocols and the concept of semantics are brought together to construct a framework for realizing semantic service discovery in ubiquitous computing. Issues relating to service discovery topologies, ontology languages, and semantic query languages are discussed.
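A purely illustrative sketch of the underlying idea: discovering a service by the capabilities it advertises rather than by UUID or type. The registry entries, capability vocabulary, and Jaccard-overlap matching rule are assumptions, not part of the paper's framework.

```python
# Toy semantic service registry: services advertise capability terms,
# and clients query by desired capabilities instead of UUIDs or types.
REGISTRY = [
    {"endpoint": "printer-42.local", "capabilities": {"print", "color", "duplex", "a4"}},
    {"endpoint": "printer-07.local", "capabilities": {"print", "monochrome", "a4"}},
    {"endpoint": "scanner-11.local", "capabilities": {"scan", "color", "a4"}},
]

def discover(requested, min_score=0.5):
    """Return endpoints ranked by Jaccard overlap between requested and advertised capabilities."""
    matches = []
    for service in REGISTRY:
        caps = service["capabilities"]
        score = len(requested & caps) / len(requested | caps)
        if score >= min_score:
            matches.append((score, service["endpoint"]))
    return sorted(matches, reverse=True)

if __name__ == "__main__":
    # A semantic request: "something that prints in color on A4 paper".
    print(discover({"print", "color", "a4"}))
```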

13.
A new procedure for evaluating symbol comprehension, the phrase generation procedure, was assessed with 52 younger and 52 older adults. Participants generated as many phrases as came to mind when viewing 40 different safety symbols (hazard alerting, mandatory action, prohibition, and information symbols). Symbol familiarity was also assessed. Comprehension rates for both groups were lower than the 85% level recommended by the American National Standards Institute. Moreover, older participants' comprehension was significantly worse than younger participants', and the older adults also generated significantly fewer phrases. Generally, prohibition symbols were comprehended best and hazard alerting symbols worst. In addition, symbol familiarity was positively correlated with symbol comprehension. These findings indicate that important safety information depicted on signs and household products may be misunderstood if presented in symbolic form. Furthermore, certain types of symbols (e.g., prohibition symbols) may be better understood than others (e.g., hazard alerting symbols) by both younger and older individuals. These findings demonstrate the utility of the phrase generation procedure as a method for evaluating symbol comprehension, particularly when it is not possible or desirable to provide contextual information. Actual or potential applications of this research include using the phrase generation approach to identify poorly comprehended symbols, including identification of critical confusions that may arise when processing symbolic information.
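As a hypothetical illustration (the symbols and scores below are invented, not the study's data), per-symbol comprehension rates from a phrase-generation test could be tallied against the 85% ANSI criterion mentioned in the abstract like this:

```python
# Invented example data: for each symbol, a list of 1/0 scores indicating whether
# a participant's generated phrases conveyed the symbol's intended meaning.
SCORES = {
    "prohibition_no_entry":  [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    "hazard_corrosive":      [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    "mandatory_eye_protect": [1, 1, 0, 1, 1, 1, 1, 0, 1, 1],
}

ANSI_CRITERION = 0.85  # comprehension level recommended by ANSI

def comprehension_report(scores, criterion=ANSI_CRITERION):
    for symbol, results in scores.items():
        rate = sum(results) / len(results)
        verdict = "meets" if rate >= criterion else "below"
        print(f"{symbol:25s} {rate:5.0%}  ({verdict} the {criterion:.0%} criterion)")

if __name__ == "__main__":
    comprehension_report(SCORES)
```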

14.
A system named MAGELLAN (denoting Map Acquisition of GEographic Labels by Legend ANalysis) is described that utilizes the symbolic knowledge found in the legend of the map to drive geographic symbol (or label) recognition. MAGELLAN first scans the geographic symbol layer(s) of the map. The legend of the map is located and segmented. The geographic symbols (i.e., labels) are identified, and their semantic meaning is attached. An initial training set library is constructed based on this information. The training set library is subsequently used to classify geographic symbols in input maps using statistical pattern recognition. User interaction is required at first to assist in constructing the training set library to account for variability in the symbols. The training set library is built dynamically by entering only instances that add information to it. MAGELLAN then proceeds to identify the geographic symbols in the input maps automatically. MAGELLAN can be fine-tuned by the user to suit specific needs. Recognition rates of over 93% were achieved in an experimental study on a large amount of data.
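A minimal sketch of the dynamically grown training-set library idea, as a stand-in rather than MAGELLAN's actual algorithm: a nearest-neighbour classifier that only admits new exemplars when they add information, i.e. when the current library would misclassify them or match them too weakly. The feature vectors, labels, and threshold are invented for illustration.

```python
import numpy as np

class SymbolLibrary:
    """Toy nearest-neighbour library of symbol feature vectors, grown only when needed."""

    def __init__(self, distance_threshold=0.5):
        self.features = []   # list of 1-D feature vectors
        self.labels = []     # semantic label attached to each exemplar
        self.threshold = distance_threshold

    def classify(self, vec):
        if not self.features:
            return None, np.inf
        dists = np.linalg.norm(np.array(self.features) - vec, axis=1)
        i = int(np.argmin(dists))
        return self.labels[i], float(dists[i])

    def maybe_add(self, vec, label):
        """Add the exemplar only if it adds information to the library."""
        predicted, dist = self.classify(vec)
        if predicted != label or dist > self.threshold:
            self.features.append(np.asarray(vec, dtype=float))
            self.labels.append(label)
            return True
        return False

if __name__ == "__main__":
    lib = SymbolLibrary()
    # Invented 3-D feature vectors standing in for scanned legend symbols.
    lib.maybe_add([0.9, 0.1, 0.0], "road")
    lib.maybe_add([0.1, 0.9, 0.0], "river")
    lib.maybe_add([0.88, 0.12, 0.02], "road")   # redundant exemplar: rejected
    print(lib.classify([0.85, 0.2, 0.1]))       # -> ('road', distance)
```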

15.
Cognitive science has been dominated by the computational conception that cognition is computation across representations. To the extent that cognition as computation across representations is supposed to be a purposive, meaningful, algorithmic, problem-solving activity, however, computers appear to be incapable of cognition. As special kinds of signs, they are devices that can facilitate computation on the basis of semantic grounding relations. Even their algorithmic, problem-solving character arises from their interpretation by human users. Strictly speaking, computers as such, apart from human users, are not only incapable of cognition but even incapable of computation, properly construed. If we want to understand the nature of thought, then we have to study thinking, not computing, because they are not the same thing.

16.
Gene identifiers in biomedical databases often carry descriptive information that is neither rich nor complete enough to distinguish the different senses of ambiguous gene names. To address this problem, a gene name normalization method based on extended semantic similarity is presented. The method uses MEDLINE abstracts and Gene Ontology descriptions to generate extended semantic information for the gene identifiers in the database; it then compares the context of an ambiguous gene name with the different semantic descriptions to determine the unique gene identifier expressing the intended meaning. On the corpus of the BioCreative II gene normalization task, the method achieved a precision of 80%, a recall of 82.4%, and an F-measure of 81.2%. The results show that the extended semantic similarity method is well suited to named entity normalization in the biomedical domain.
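A minimal sketch, illustrative only, of the core disambiguation step: comparing the mention context of an ambiguous gene name with extended descriptions of candidate identifiers. The tokenisation, weighting, identifiers, and example descriptions are assumptions, not the paper's actual implementation.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def disambiguate(context: str, candidates: dict) -> str:
    """Pick the candidate identifier whose extended description best matches the context."""
    ctx = Counter(context.lower().split())
    scored = {gid: cosine(ctx, Counter(desc.lower().split())) for gid, desc in candidates.items()}
    return max(scored, key=scored.get)

if __name__ == "__main__":
    # Invented extended descriptions, e.g. built from MEDLINE abstracts and Gene Ontology terms.
    candidates = {
        "GENE:0001": "dna repair nucleotide excision pathway damage response",
        "GENE:0002": "immune response cytokine signaling inflammation receptor",
    }
    mention_context = "the protein participates in the cellular response to dna damage and repair"
    print(disambiguate(mention_context, candidates))   # -> GENE:0001
```

Precision, recall, and the F-measure reported in the abstract would then be computed by comparing such predictions against gold-standard identifier assignments.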

17.
Taddeo and Floridi [2007. A praxical solution of the symbol grounding problem. Minds and Machines, 17, 369–389. (This paper is reprinted in Floridi, L. (2011). The philosophy of information. Oxford: Oxford University Press)] propose a solution to the symbol grounding problem (SGP). Unfortunately, their proposal, while certainly innovative, interesting and – given the acute difficulty of SGP – brave, merely shows that a class of robots can in theory connect, in some sense, the symbols it manipulates with the external world it perceives, and can, on the strength of that connection, communicate in sub-human fashion.

18.
We have entered an era of "emotional (Kansei) consumption". Product designers therefore need not only to understand how Kansei (affective) cognition influences consumer behaviour, but also to master ways of obtaining the symbols that express a product's form semantics. Using a psychological description test and cluster analysis, this paper verifies that the emotion-matrix approach to classifying Kansei consumer groups is applicable in China. Taking the chair as an example, the semantic differential (SD) method and principal component factor analysis are then used to extract product symbols from users' Kansei cognitive experience and to verify their use. The extracted product symbols offer a useful reference for designers conceiving or validating future Kansei product designs.
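As a purely hypothetical sketch (the adjective pairs, ratings, and sample sizes are invented), the principal-component step over semantic differential ratings amounts to the following:

```python
import numpy as np

# Invented semantic differential ratings: rows = chair samples, columns = adjective
# pairs (e.g. soft-hard, traditional-modern, plain-ornate), scored 1..7 by participants.
ratings = np.array([
    [6.1, 2.3, 3.0],
    [5.8, 2.9, 3.4],
    [2.2, 6.0, 5.5],
    [2.5, 5.7, 6.1],
    [4.0, 4.1, 4.2],
], dtype=float)

# Centre the data and extract principal components via SVD.
centered = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

print("component loadings (rows = components, cols = adjective pairs):")
print(np.round(Vt, 2))
print("explained variance ratio:", np.round(explained, 2))
```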

19.
What I call semiotic brains are brains that make up a series of signs and that are engaged in making, manifesting, or reacting to a series of signs: through this semiotic activity they are at the same time engaged in “being minds” and so in thinking intelligently. An important effect of this semiotic activity of brains is a continuous process of disembodiment of mind that exhibits a new cognitive perspective on the mechanisms underlying the semiotic emergence of meaning processes. Indeed, at the roots of sophisticated thinking abilities there is a process of disembodiment of mind that presents a new cognitive perspective on the role of external models, representations, and various semiotic materials. Taking advantage of Turing’s comparison between “unorganized” brains and “logical” and “practical” machines, this paper illustrates the centrality to cognition of the disembodiment of mind from the point of view of the interplay between internal and external representations, both mimetic and creative. The last part of the paper describes the concept of the mimetic mind, which I have introduced to shed new cognitive and philosophical light on the role of computational modeling and on the decline of so-called Cartesian computationalism.

20.
Presence refers to the sensation of being inside a computer-simulated environment. We investigated whether presence and memory accuracy are affected by the meaningfulness of the information encountered in the virtual environment (VE). Non-chess players and three levels of chess players studied meaningful and meaningless chess positions in VEs. They rated the level of presence experienced in each and took an old-new recognition memory test. Non-chess players reported no difference in presence for meaningful compared with meaningless positions, yet even weak chess players reported feeling more present with meaningful than with meaningless positions. Thus, only modest levels of expertise were needed to enhance presence. In contrast, tournament-level chess-playing ability was required before meaningful chess positions were remembered significantly more accurately than meaningless chess positions. Tournament players' memory accuracy was very high for meaningful positions but was the same as non-chess players' for meaningless positions. Meaning did not significantly influence memory accuracy for weak chess players. Our memory results replicate and extend the findings of Chase and Simon (1973). Our presence results show how cognitive factors inherent in the user can influence the quality of the human-computer interface. Practical implications are discussed.
