20 similar documents found; search time: 0 ms
1.
This article compares the potential of classical and connectionist computational concepts for explanations of innate infant knowledge and of its development. It focuses on issues relating to: the perceptual process; the control and form(s) of perceptual-behavioural coordination; the role of environmental structure in the organization of action; and the construction of novel forms of activity. There is significant compatibility between connectionist and classical views of computation, though classical concepts are, at present, better able to provide a comprehensive computational view of the infant. However, Varela's enaction perspective poses a significant challenge for both approaches. An earlier version of this article was presented at the interdisciplinary seminar La Cognition, organized by the International College of Philosophy, Paris, and the Technological University of Compiègne, Chantilly, France, 1989.
2.
The introduction of massive parallelism and the renewed interest in neural networks create a new need to evaluate the relationship between symbolic processing and artificial intelligence. The physical symbol system hypothesis has encountered many difficulties coping with human concepts and common sense. Expert systems are showing more promise for the early stages of learning than for real expertise. There is a need to evaluate more fully the inherent limitations of symbol systems and the potential of programming compared with training. This can give more realistic goals for symbolic systems, particularly those based on logical foundations.
3.
The current renewal of connectionist techniques using networks of neuron-like units has started to have an influence on cognitive modelling. However, compared with classical artificial intelligence methods, the position of connectionism is still not clear. In this article artificial intelligence and connectionism are systematically compared as cognitive models so as to bring out the advantages and shortcomings of each. The problem of structured representations appears to be particularly important, suggesting likely research directions.
4.
Berkeley [Minds Machines 10 (2000) 1] described a methodology that showed the subsymbolic nature of an artificial neural network system that had been trained on a logic problem originally described by Bechtel and Abrahamsen [Connectionism and the mind. Blackwell, Cambridge, MA, 1991]. It was also claimed in the conclusion of that paper that the evidence suggested the network might, in fact, count as a symbolic system. Dawson and Piercey [Minds Machines 11 (2001) 197] took issue with this latter claim, describing lesioning studies that, they argued, showed Berkeley's (2000) conclusions were premature. In this paper, these lesioning studies are replicated, and it is shown that the effects Dawson and Piercey rely upon for their argument are merely an artifact of the threshold function they chose to employ. When a threshold function much closer to that deployed in the original studies is used, the significant effects disappear.
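The threshold artifact at issue can be illustrated with a toy unit. A minimal sketch, assuming a simple step activation; the weights, inputs and threshold values below are invented for illustration and are not taken from Berkeley's or Dawson and Piercey's networks:

```python
def unit_output(net_input, threshold=0.0):
    """Step activation: the unit fires iff net input exceeds the threshold."""
    return 1.0 if net_input > threshold else 0.0

def lesioned_response(weights, inputs, lesioned, threshold):
    # Zero out ("lesion") the selected connections, then apply the step.
    net = sum(w * x for i, (w, x) in enumerate(zip(weights, inputs))
              if i not in lesioned)
    return unit_output(net, threshold)

weights, inputs = [0.6, -0.4, 0.5], [1.0, 1.0, 1.0]
# Lesioning connection 2 leaves a net input of 0.2. With threshold 0.0
# the unit still fires; with threshold 0.5 it goes silent -- the
# apparent "lesion effect" tracks the threshold choice, not the lesion.
print(lesioned_response(weights, inputs, {2}, threshold=0.0))  # → 1.0
print(lesioned_response(weights, inputs, {2}, threshold=0.5))  # → 0.0
```

The same lesion thus looks significant or insignificant depending solely on where the threshold is placed, which is the shape of the artifact the replication reports.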
5.
The formulation of a problem may be defined as a process of acquisition and organization of knowledge related to a given situation on which a decision maker projects some action. The problem-formulation assistance we may expect within decision support systems is difficult to design and implement, mainly because of the frequent lack of a sufficiently formalized conceptual framework that approaches decision making from a cognitive-science perspective. In the first part, we present an instrumental model for the study of decision processes, an attempt to simulate the cognitive process of knowledge acquisition and organization carried out by a decision maker facing a problematic situation. Considering its epistemological foundations, this model can be named a “cognitivist model”. Within this model, the decision is defined as a cognitive construction which we call a “decisional construct”: the elaboration of one or several abstract representations of the problematic situation (formulation phase), followed by the design of operational models (solving phase). In the second part, we present the COGITA project, which consists of the design and realization of an environment for developing problem-formulation assistance systems. The modeling and simulation of cognitive processes call for techniques originating in either artificial intelligence or connectionism. We show the main characteristics, potentials, limits and complementarity of these techniques, and why their integration is fundamental and necessary for simulating the cognitive process associated with formulation. COGITA is a hybrid system currently under development that integrates symbolic artificial intelligence techniques and connectionist models in a cooperative hybridization whose general architecture is presented.
6.
One's model of skill determines what one expects from neural network modelling and how one proposes to go about enhancing expertise. We view skill acquisition as a progression from acting on the basis of a rough theory of a domain in terms of facts and rules to being able to respond appropriately to the current situation on the basis of neuron connections changed by the results of responses to the relevant aspects of many past situations. Viewing skill acquisition in this way suggests how one can avoid the problem currently facing AI of how to train a network to make human-like generalizations. In training a network one must progress, as the human learner does, from rules and facts to holistic responses. As to future work, from our perspective one should not try to enhance expertise as in traditional AI by attempting to construct improved theories of a domain, but rather by improving the learner's access to the relevant aspects of a domain so as to facilitate learning from experience.
7.
Two recent developments will be surveyed here which are pointing the way towards an input–output theory of H∞-l1 adaptive feedback: the solution of problems involving (1) feedback performance (exact) optimization under large plant uncertainty on the one hand (the two-disc problem of H∞); and (2) optimally fast identification in H∞ on the other. Taken together, these are yielding adaptive algorithms for slowly varying data in H∞-l1. At a conceptual level, these results motivate a general input–output theory linking identification, adaptation, and control learning. In such a theory, the definition of adaptation is based on system performance under uncertainty, and is independent of internal structure, presence or absence of variable parameters, or even feedback.
8.
The degree to which individuals leverage knowledge resources influences their effectiveness and may shape their organizations’ competitive advantage. We examine the ways in which tasks with different characteristics affect individuals’ use of internal and external knowledge and the outcomes of such behaviors. Our analysis reveals that interdependent and non-routine tasks drive internal knowledge sourcing, while complex tasks motivate external knowledge sourcing. Internal and external knowledge sourcing activities contribute to individuals’ cognitive adaptation and innovation, with a negative interaction between them, while cognitive replication benefits only from internal knowledge sourcing. These findings can help managers better satisfy individuals’ knowledge needs and achieve intended organizational outcomes.
9.
Transactive memory system is a term from group psychology that describes a system that helps small groups maintain and use personal directories to allocate and retrieve knowledge. Such systems have been observed at the level of whole organizations, suggesting that they provide a means for conceptualizing the exploitation of organizational memory. In this paper, I describe a longitudinal investigation of a global engineering consulting firm in which I used inductive analysis of interview data to map and then develop a conceptual entity-relationship model of organizational memory. This model formed the basis for a transactive directory to facilitate knowledge retrieval and allocation in the firm.
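The core of such a transactive directory is a who-knows-what mapping rather than a store of the knowledge itself. A minimal sketch; the class, person names and topics are invented for illustration and are not drawn from the firm studied:

```python
from collections import defaultdict

class TransactiveDirectory:
    """Maps topics to the people who hold knowledge about them."""

    def __init__(self):
        self._experts = defaultdict(set)  # topic -> set of people

    def allocate(self, person, topic):
        # Record that `person` holds knowledge about `topic`.
        self._experts[topic].add(person)

    def retrieve(self, topic):
        # Return the people to ask about `topic` (empty if none known).
        return sorted(self._experts[topic])

d = TransactiveDirectory()
d.allocate("Ana", "soil mechanics")
d.allocate("Ben", "soil mechanics")
d.allocate("Ana", "bridge design")
print(d.retrieve("soil mechanics"))  # → ['Ana', 'Ben']
```

The design choice mirrors the abstract's distinction: the directory supports allocation (who should hold new knowledge) and retrieval (who to ask), without itself containing the knowledge.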
10.
Pickering and Chater (P&C) maintain that folk psychology and cognitive science should neither compete nor cooperate. Each is an independent enterprise, with a distinct subject matter and characteristic modes of explanation. P&C's case depends upon their characterizations of cognitive science and folk psychology. We question the basis for their characterizations, challenge both the coherence and the individual adequacy of their contrasts between the two, and show that they waver in their views about the scope of each. We conclude that P&C do not so much discover as create the gap they find between folk psychology and cognitive science. It is an artifact of their implausible and unmotivated attempt to demarcate the two areas, and of the excessively narrow accounts they give of each.
11.
New subsystems in areas of knowledge and learning are flourishing in all parts of organisations. Over recent decades, new systems for the classification of jobs have emerged both at the level of organisations and at the macro-labour-market level. Recent developments in job evaluation systems make it possible to cope with new demands for equity at work (between, for example, genders, races and physical abilities). Other systems have emerged to describe job requirements in terms of skills, knowledge and competence. Systems for learning at work and web-based learning have created a demand for new ways to classify and understand the process of learning. Often these new systems have been taken from other areas of the organisation not directly concerned with facilitating workplace learning. All these new systems are closely interrelated but, in most organisations, a major problem is the severe lack of cohesion and compatibility between the different subsystems. The aim of this paper is to propose a basis on which different human resource systems can be integrated into the business development of an organisation. We discuss this problem and develop alternatives to integrated macro-systems. A key element in our proposition is a structure for the classification of knowledge and skill to be used in all parts of the process. This structure should be used as an added dimension, or overlay, on all other subsystems of the total process, facilitating the continued use of existing systems within different organisations. We develop Burge's (personal communication) model of learning to show that learning is not a successive linear process but an iterative one. In this way we emphasise the need for greater involvement of learners in the development of learning systems towards increased usability in a networked system. This paper is divided into two closely related parts. The first part gives an overview of the lack of compatibility between the different subsystems and notes two paradoxes which impact learning, for which we propose solutions. The second part deals with 'usability' aspects of these competency-related systems, in particular usability in e-learning systems, and describes an example of a new organisational structure. We conclude by discussing four key concepts that are necessary conditions for organisations to address when developing their human capital. Establishing these conditions helps ensure compatibility and usability in e-learning systems.
12.
This paper presents a methodology that uses evolutionary learning to train ‘A’ model networks, a topology based on Interactive Activation and Competition (IAC) neural networks. IAC networks exhibit local knowledge, with processing units clustered in pools, and their connections may assume only the values 1, 0 or −1. The ‘A’ model network, by contrast, uses values in the interval [−1, 1]. This feature provides a wider range of applications for the network, including problems that do not involve mutually exclusive concepts. However, there is no algorithm that adjusts the network weights while preserving the desired characteristics of the original network. Accordingly, we propose a new methodology that uses genetic algorithms to obtain the correct weight set for this network. Two examples illustrate the proposed method. The findings are considered consistent and generic enough to allow further applications to similar classes of problems suited to ‘A’ model IAC networks.
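A genetic algorithm of the general kind proposed can be sketched in a few lines. The toy fitness function, target vector, population size and mutation scheme below are illustrative assumptions, not the authors' actual training setup; the only constraint carried over from the abstract is that weights stay in [−1, 1]:

```python
import random

random.seed(0)  # reproducibility for this sketch

# Toy stand-in for the real objective: evolve a weight vector in
# [-1, 1] toward a desired weight set.
TARGET = [0.8, -0.3, 0.5, -0.9]

def fitness(weights):
    # Negative squared distance to the target (higher is better).
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, rate=0.2, step=0.1):
    out = []
    for w in weights:
        if random.random() < rate:
            w += random.uniform(-step, step)
        out.append(max(-1.0, min(1.0, w)))  # clamp weights to [-1, 1]
    return out

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200):
    pop = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # keep the better half unchanged
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Because selection and mutation operate only on clamped real-valued genes, the evolved weights remain in [−1, 1] throughout, which is the property that makes a GA attractive when no gradient-style weight-update rule preserves the network's characteristics.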
13.
Autonomous control systems are designed to perform well under significant uncertainties in the system and environment for extended periods of time, and they must be able to compensate for system failures without external intervention. Intelligent autonomous control systems use techniques from the field of artificial intelligence to achieve this autonomy. Such control systems evolve from conventional control systems by adding intelligent components, and their development requires interdisciplinary research. A hierarchical functional intelligent autonomous control architecture is introduced here and its functions are described in detail. The fundamental issues in autonomous control system modelling and analysis are discussed.
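Hierarchical architectures of this general kind are often decomposed into management, coordination and execution levels. The sketch below is an illustrative assumption about such a decomposition, not the paper's exact architecture; all class and method names are invented:

```python
class ExecutionLevel:
    """Lowest level: conventional control loops acting on the plant."""
    def act(self, command):
        return f"executing {command}"

class CoordinationLevel:
    """Middle level: translates tasks into sequences of commands."""
    def __init__(self, executor):
        self.executor = executor

    def dispatch(self, task):
        return [self.executor.act(c) for c in task["commands"]]

class ManagementLevel:
    """Highest level: plans tasks for a goal; the intelligent components
    (planning, failure compensation) would sit here."""
    def __init__(self, coordinator):
        self.coordinator = coordinator

    def plan(self, goal):
        # Trivial planner: split the goal into two fixed steps.
        task = {"commands": [f"{goal}:step{i}" for i in range(2)]}
        return self.coordinator.dispatch(task)

system = ManagementLevel(CoordinationLevel(ExecutionLevel()))
print(system.plan("dock"))  # → ['executing dock:step0', 'executing dock:step1']
```

The layering reflects the abstract's point that such systems evolve from conventional controllers (the execution level) by adding intelligent components above them.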
14.
There is a common misconception that the automobile industry is slow to adopt new technologies such as artificial intelligence (AI) and soft computing. The reality is that many new technologies are deployed and brought to the public through the vehicles that they drive. This paper provides an overview and a sampling of the many ways the automotive industry has utilized AI, soft computing and other intelligent system technologies in domains as diverse as manufacturing, diagnostics, on-board systems, warranty analysis and design.
Oleg Gusikhin received the Ph.D. degree from St. Petersburg Institute of Informatics and Automation of the Russian Academy of Sciences
and the M.B.A. degree from the University of Michigan, Ann Arbor, MI. Since 1993, he has been with the Ford Motor Company,
where he is a Technical Leader at the Ford Manufacturing and Vehicle Design Research Laboratory, and is engaged in different
functional areas including information technology, advanced electronics manufacturing, and research and advanced engineering.
He has also been involved in the design and implementation of intelligent control applications for manufacturing and vehicle
systems. He is the recipient of the 2004 Henry Ford Technology Award. He holds two U.S. patents and has published over 30
articles in refereed journals and conference proceedings. He is an Associate Editor of the International Journal of Flexible Manufacturing Systems. He is also a Certified Fellow of the American Production and Inventory Control Society and a member of IEEE and SME.
Nestor Rychtyckyj received the Ph.D. degree in computer science from Wayne State University, Detroit, MI. He is a technical expert in Artificial
Intelligence at Ford Motor Company, Dearborn, MI, in Advanced and Manufacturing Engineering Systems. His current research
interests include the application of knowledge-based systems for vehicle assembly process planning and scheduling. Currently,
his responsibilities include the development of automotive ontologies, intelligent manufacturing systems, controlled languages,
machine translation and corporate terminology management. He has published more than 30 papers in refereed journals and conference
proceedings. He is a member of AAAI, ACM and the IEEE Computer Society.
Dimitar P. Filev received the Ph.D. degree in electrical engineering from the Czech Technical University, Prague, in 1979. He is a Senior
Technical Leader, Intelligent Control and Information Systems with Ford Research and Advanced Engineering specializing in
industrial intelligent systems and technologies for control, diagnostics and decision making. He is conducting research in
systems theory and applications, modeling of complex systems, intelligent modeling and control, and has published 3 books
and over 160 articles in refereed journals and conference proceedings. He holds 14 granted U.S. patents and numerous foreign
patents in the area of industrial intelligent systems. He is the recipient of the 1995 Award for Excellence of MCB University
Press. He was awarded the Henry Ford Technology Award four times for development and implementation of advanced intelligent
control technologies. He is an Associate Editor of International Journal of General Systems and International Journal of Approximate Reasoning. He is a member of the Board of Governors of the IEEE Systems, Man and Cybernetics Society and President of the North American
Fuzzy Information Processing Society (NAFIPS).
15.
Two recent developments will be surveyed here which are pointing the way towards an input–output theory of H∞-l1 adaptive feedback: the solution of problems involving (1) feedback performance (exact) optimization under large plant uncertainty on the one hand (the two-disc problem of H∞); and (2) optimally fast identification in H∞ on the other. Taken together, these are yielding adaptive algorithms for slowly varying data in H∞-l1. At a conceptual level, these results motivate a general input–output theory linking identification, adaptation, and control learning. In such a theory, the definition of adaptation is based on system performance under uncertainty, and is independent of internal structure, presence or absence of variable parameters, or even feedback.
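For background, the two-disc problem referred to here is commonly stated as a mixed-sensitivity optimization. One standard form, assuming plant P, stabilizing controller C, sensitivity S = (1 + PC)^{-1}, complementary sensitivity T = 1 - S, and frequency weights W1, W2 (this is general background, not a formula quoted from the surveyed work):

```latex
\min_{C\ \mathrm{stabilizing}} \; \bigl\| \, |W_1 S| + |W_2 T| \, \bigr\|_{\infty}
```

Minimizing this weighted sum trades off disturbance rejection (through S) against robustness to plant uncertainty (through T), which is why it serves as the performance-under-uncertainty benchmark in the survey's first development.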
16.
In the practice of concurrent engineering, the factors considered early in the product design process include manufacturability, assembly, and cost. A set of issues not typically considered revolves around the operational requirements for human workers in the manufacturing system. What tasks will human workers accomplish? How will these tasks be organized and coordinated? What information and resources need to be shared? Will the workers have a coherent set of job responsibilities? How should the manufacturing environment be designed to support effective work practices? How can a manufacturing process be designed that also informs organizational structure and takes into account the quality of working life? The field of sociotechnical systems theory (STS) focuses on exactly these kinds of issues. Rather than subscribing to the usual view of technological determinism, in which a complex human-machine system is designed solely to optimize technical criteria, the goal of STS is to jointly optimize both human and technological considerations in system design and operation. The spirit of STS has much in common with recent work in cognitive systems engineering that advocates the design of joint cognitive systems in which machines serve as flexible, context-sensitive resources for human problem solving. Furthermore, a focus on design teams necessitates the study of the relationship between group work and technology as studied in the field of computer-supported cooperative work (CSCW). This paper briefly reviews current research in sociotechnical systems theory, computer-supported cooperative work, and cognitive systems engineering and proposes a framework for integrating operational concerns into the concurrent engineering process.
Relevance to industry
To be competitive, organizations need to effectively manage human and technological resources. A key issue is the nature of the information and technological infrastructure that both enables and supports 'best practice' across the enterprise. This paper describes such an approach in the context of the 'operational enterprise' and provides both a philosophical stance and specific examples of software support.
17.
Semantic Web Mining aims at combining the two fast-developing research areas of the Semantic Web and Web Mining. This survey analyzes the convergence of trends from both areas: more and more researchers are working on improving the results of Web Mining by exploiting semantic structures in the Web, and they make use of Web Mining techniques for building the Semantic Web. Last but not least, these techniques can be used for mining the Semantic Web itself. The Semantic Web is the second-generation WWW, enriched by machine-processable information which supports users in their tasks. Given the enormous size of even today's Web, it is impossible to manually enrich all of these resources. Therefore, automated schemes for learning the relevant information are increasingly being used. Web Mining aims at discovering insights about the meaning of Web resources and their usage. Given the primarily syntactic nature of the data being mined, the discovery of meaning is impossible based on these data alone. Therefore, formalizations of the semantics of Web sites and navigation behavior are becoming more and more common. Furthermore, mining the Semantic Web itself is another upcoming application. We argue that the two areas, Web Mining and the Semantic Web, need each other to fulfill their goals, but that the full potential of this convergence is not yet realized. This paper gives an overview of where the two areas meet today, and sketches ways in which a closer integration could be profitable.
18.
This essay discusses the trade-off between the opportunities and the dangers involved in technological change. It is argued that Artificial Intelligence technology, if properly used, could contribute substantially to coping with some of the major problems the world faces because of the highly complex interconnectivity of modern human society. In order to lay the foundation for the discussion, the symptoms of general unease which are associated with current technological progress, the concept of reality, and the field of Artificial Intelligence are very briefly discussed. In the main body of the essay, the dangers are contrasted with the potential benefits of such high technology. Besides discussing more well-known negative and positive aspects, we elaborate on the disadvantages of executive systems and the advantages of legislative systems. It is argued that only the latter might enable the re-establishment of the feedback mechanism which proved so successful in earlier phases of evolution. The German text of an earlier version of this essay appeared in Reference [10], pp. 47–63.
19.
The knowledge-based facility planning (KBFP) problem is reviewed. The aim of KBFP is to provide a more comprehensive planning package for users, so that their expertise can be augmented with proven knowledge, yielding significantly better plans. The categories reviewed include facilities equipment selection, software model selection, and the generative task of creating a facility planning solution. The problem representations and problem-solving techniques employed are reviewed. Finally, the development of an integrated framework for KBFP is discussed.
20.
The factors influencing KMS usage are of major concern to the MIS community. Among the diverse theories employed to help understand this is task technology fit (TTF), which considers the needed technological characteristics of the task as a major factor determining usage. This theory, however, ignores the personal cognition dimension, which has been found to affect the use of an IS. By integrating TTF and social cognitive theory (SCT), we attempted to determine the key factors affecting KMS usage in IT, the organizational task, and personal cognition. Through a survey of 192 KMS users, task interdependence, perceived task technology fit, KMS self-efficacy, and personal outcome expectations were found to have substantial influences on KMS usage. Among the key factors, KMS self-efficacy was found to be especially important as it was substantially and positively correlated to perceived task technology fit, personal and performance-related outcome expectations, and KMS usage.