Similar documents (20 results)
1.
LEONARD UHR, Connection Science, 1990, 2(3): 179-193
A crucial dilemma is how to increase the power of connectionist networks (CN), since simply increasing the size of today's relatively small CNs often slows down and worsens learning and performance. There are three possible ways: (1) use more powerful structures; (2) increase the amount of stored information, and the power and variety of the basic processes; (3) have the network modify itself (learn, evolve) in more powerful ways. Today's connectionist networks use only a few of the many possible topological structures, handle only numerical values using only very simple basic processes, and learn only by modifying weights associated with links. This paper examines the great variety of potentially much more powerful possibilities, focusing on what appear to be the most promising: appropriate brain-like structures (e.g. local connectivity, global convergence and divergence); matching, symbol-handling, and list-manipulating capabilities; and learning by extraction-generation-discovery.

2.
VISOR is a large connectionist system that shows how visual schemas can be learned, represented and used through mechanisms natural to neural networks. Processing in VISOR is based on cooperation, competition, and parallel bottom-up and top-down activation of schema representations. VISOR is robust against noise and variations in the inputs and parameters. It can indicate the confidence of its analysis, pay attention to important minor differences, and use context to recognize ambiguous objects. Experiments also suggest that the representation and learning are stable, and behavior is consistent with human processes such as priming, perceptual reversal and circular reaction in learning. The schema mechanisms of VISOR can serve as a starting point for building robust high-level vision systems, and perhaps for schema-based motor control and natural language processing systems as well.

3.
The paper demonstrates how algorithmic information theory can be elegantly used as a powerful tool for analyzing the dynamics in connectionist systems. It is shown that simple structures of connectionist systems, even if they are very large, are unable to significantly ease the problem of learning complex functions. Also, the development of new learning algorithms would not essentially change this situation. Lower and upper bounds are given for the number of examples needed to learn complex concepts. The bounds are proved with respect to the notion of probably approximately correct (PAC) learning. It is proposed to use algorithmic information theory for further studies of network dynamics.

4.
Current work on connectionist models has focused largely on artificial neural networks that are inspired by the networks of biological neurons in the human brain. However, there are also other connectionist architectures that differ significantly from this biological exemplar. We proposed a novel connectionist learning architecture inspired by the physics of optical coatings of multiple layers of thin films in a previous paper (Li and Purvis 1999, Annals of Mathematics and Artificial Intelligence, 26: 1-4). The proposed model differs significantly from the widely used neuron-inspired models. With thin-film layer thicknesses serving as adjustable parameters (analogous to connection weights in a neural network) for the learning system, the optical thin-film multilayer model (OTFM) is capable of approximating virtually any kind of highly nonlinear mapping. The OTFM is not a physical implementation using optical devices. Instead, it is proposed as a new connectionist learning architecture with optical properties distinct from those of neural networks. In this paper we focus on a detailed comparison of neural networks and the OTFM (Li 2001, Proceedings of INNS-IEEE International Joint Conference on Neural Networks, Washington, DC, pp. 1727-1732). We describe the architecture of the OTFM and show how it can be viewed as a connectionist learning model. We then present experimental results on solving a classification problem and a time series prediction problem typical of conventional connectionist architectures, to demonstrate the OTFM's learning capability.

5.
In this two-part series, we explore how a perceptually based foundation for natural language semantics might be acquired, via association of sensory/motor experiences with verbal utterances describing those experiences. In Part 1, we introduce a novel neural network architecture, termed Katamic memory, that is inspired by the neurocircuitry of the cerebellum and that exhibits (a) rapid, robust sequence learning/recognition and (b) integrated learning and performance. These capabilities are due to novel neural elements, which model dendritic structure and function in greater detail than standard connectionist models do. In Part 2, we describe the DETE system, a massively parallel procedural/neural hybrid model that utilizes over 50 Katamic memory modules to perform two associative learning tasks: (a) verbal-to-visual/motor association: given a verbal sequence, DETE learns to regenerate a neural representation of the visual sequence being described and/or to carry out motor commands; and (b) visual/motor-to-verbal association: given a visual/motor sequence, DETE learns to produce a verbal sequence describing the visual input. DETE can learn verbal sequences describing spatial relations and motions of 2D 'blob-like' objects; in addition, the system can generalize to novel inputs. DETE has been tested successfully on small, restricted subsets of English and Spanish, languages that differ in inflectional properties, word order and how they categorize perceptual reality.

6.
7.
Fodor and Pylyshyn argued that connectionist models could not be used to exhibit and explain a phenomenon that they termed systematicity, which they explained by possession of compositional syntax and semantics for mental representations and structure sensitivity of mental processes. This inability of connectionist models, they argued, was particularly serious since it meant that these models could not serve as alternatives to classical symbolic models in explaining cognition. In this paper, a connectionist model is used to identify some properties which collectively show that connectionist networks supply means for accomplishing a stronger version of systematicity than Fodor and Pylyshyn opted for. It is argued that 'context-dependent systematicity' is achievable within a connectionist framework. The arguments put forward rest on a particular formulation of content and context of connectionist representation, firmly and technically based on connectionist primitives in a learning environment. The perspective is motivated by the fundamental differences between the connectionist and classical architectures, in terms of prerequisites, lower-level functionality and inherent constraints. The claim is supported by a set of experiments using a connectionist architecture that demonstrate both an ability to enforce what Fodor and Pylyshyn term systematic and nonsystematic processing using a single mechanism, and how novel items can be handled without prior classification. The claim relies on extended learning feedback, which enforces representational context dependence.

8.
Recursive auto-associative memory (RAAM) has become established in the connectionist literature as a key contribution in the effort to develop connectionist representations of symbol structures. However, RAAMs use the backpropagation algorithm and can therefore be difficult to train and slow to learn. In addition, it is often hard to analyze exactly what a network has learnt and, therefore, difficult to state what composition mechanism a RAAM uses for constructing representations. In this paper, we present an analytical version of RAAM, denoted simplified RAAM or (S)RAAM. (S)RAAM models a RAAM very closely in that a single constructor matrix is derived which can be applied recursively to construct connectionist representations of symbol structures. The derivation, like RAAM, exhibits a moving-target effect because training patterns adjust during learning but, unlike RAAM, the training is very fast. The analytical model allows a clear statement to be made about generalization characteristics, and it can be shown that, in practice, the model will converge.
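The single-constructor-matrix idea can be illustrated with a toy linear sketch (all dimensions and the random matrix here are illustrative assumptions; a real (S)RAAM derives its constructor analytically from the training patterns, and decoding below is only approximate because the parent vector is half the width of the child pair it encodes):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical constructor matrix: maps a concatenated [left; right] pair
# (2*dim wide) down to one parent vector (dim wide). The (S)RAAM point is
# that a single such matrix, applied recursively, yields fixed-width codes
# for arbitrarily deep trees.
W = rng.standard_normal((dim, 2 * dim)) / np.sqrt(2 * dim)
# Decoding via the pseudo-inverse recovers only an approximation of the
# children, since the parent is narrower than the pair it encodes.
W_dec = np.linalg.pinv(W)

def encode(left, right):
    return W @ np.concatenate([left, right])

def decode(parent):
    pair = W_dec @ parent
    return pair[:dim], pair[dim:]

# Encode the binary tree ((A B) C) into a single fixed-width vector.
A, B, C = (rng.standard_normal(dim) for _ in range(3))
tree = encode(encode(A, B), C)
ab_hat, c_hat = decode(tree)
```

The recursion is the point: `tree` has the same width as `A`, however deep the structure grows.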

9.
10.
Standard feedforward and recurrent networks cannot support strong systematicity when constituents are presented as local input/output vectors. To explain systematicity connectionists must either: (1) develop alternative models, or (2) justify the assumption of similar (non-local) constituent representations prior to the learning task. I show that the second commonly presumed option cannot account for systematicity, in general. This option, termed first-order connectionism, relies upon established spatial relationships between common-class constituents to account for systematic generalization: inferences (functions) learnt over, for example, cats, extend systematically to dogs by virtue of both being nouns with similar internal representations, so that the function learnt to make inferences employing one simultaneously has the capacity to make inferences employing the other. But humans generalize beyond common-class constituents. Cross-category generalization (e.g. inferences that require treating mango as a colour, rather than a fruit) makes having had the necessary common context to learn similar constituent representations highly unlikely. At best, the constituent similarity proposal encodes one binary relationship between any two constituents at any one time. It cannot account for inferences, such as transverse patterning, that require identifying and applying one of many possible binary constituent relationships contingent on a third constituent (i.e. a ternary relationship). Connectionists are, therefore, left with the first option, which amounts to developing models with the symbol-like capacity to represent explicitly constituent relations independent of constituent contents, such as in tensor-related models. However, rather than simply implementing symbol systems, I suggest reconciling connectionist and classical frameworks to overcome their individual limitations.

11.
Unsupervised topological ordering, similar to Kohonen's (1982, Biological Cybernetics, 43: 59-69) self-organizing feature map, was achieved in a connectionist module for competitive learning (a CALM Map) by internally regulating the learning rate and the size of the active neighbourhood on the basis of input novelty. In this module, winner-take-all competition and the 'activity bubble' are due to graded lateral inhibition between units. It tends to separate representations as far apart as possible, which leads to interpolation abilities and an absence of catastrophic interference when the interfering set of patterns forms an interpolated set of the initial data set. More than the Kohonen maps, these maps provide an opportunity for building psychologically and neurophysiologically motivated multimodular connectionist models. As an example, the dual-pathway connectionist model for fear conditioning by Armony et al. (1997, Trends in Cognitive Science, 1: 28-34) was rebuilt and extended with CALM Maps. If the detection of novelty enhances memory encoding in a canonical circuit, such as the CALM Map, this could explain the finding of large distributed networks for novelty detection (e.g. Knight and Scabini, 1998, Journal of Clinical Neurophysiology, 15: 3-13) in the brain.
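The core mechanism, regulating the learning rate and neighbourhood size by input novelty, can be sketched as a toy 1-D competitive map (the regulation rule and all constants below are illustrative assumptions, not the CALM Map's actual equations):

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, dim = 10, 2
weights = rng.random((n_units, dim))  # units laid out on a 1-D map

def step(x, base_lr=0.5, base_radius=2.0):
    # Novelty = distance from the winning unit to the input; it scales
    # both the learning rate and the width of the 'activity bubble'
    # (a simplified stand-in for the module's internal regulation).
    d = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(d))
    novelty = d[winner]
    lr = base_lr * novelty
    radius = max(base_radius * novelty, 0.5)
    map_dist = np.abs(np.arange(n_units) - winner)
    bubble = np.exp(-(map_dist ** 2) / (2 * radius ** 2))
    weights[:] += lr * bubble[:, None] * (x - weights)
    return winner

for _ in range(200):
    step(rng.random(2))  # novel inputs drive large, broad updates
```

Familiar inputs produce low novelty, hence small, narrow updates, so established representations are disturbed little; this is the intuition behind the module's resistance to catastrophic interference.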

12.
I. Berkeley, R. Raine, Connection Science, 2011, 23(3): 209-218
In this paper, the problem of changing chords when playing Cajun music is introduced. A number of connectionist network simulations are then described, in which the networks attempted to learn to predict chord changes correctly in a particular Cajun song, ‘Bayou Pompon’. In the various sets of simulations, the amount of information provided to the network was varied. While the network had difficulty in solving the problem with six one-eighths of a bar of melody information, performance radically improved when the network was provided with seven one-eighths of a bar of melody information. A post-training analysis of a trained network revealed a ‘rule’ for solving the problem. In addition to providing useful insight for scholars interested in traditional Cajun music, the results described here also illustrate how a traditional connectionist network, trained with the familiar backpropagation learning algorithm, can be used to generate a theory of the task.

13.
In Part 1 of this two-part series, we introduced Katamic memory, a neural network architecture capable of robust sequence learning and recognition. In Part 2, we introduce the Blobs World task/domain for language learning and describe the DETE language learning system, which is composed of over 50 Katamic memory modules. DETE currently learns small subsets of English and Spanish via association with perceptual/motor inputs. In addition to Katamic memory, DETE employs several other novel features: (1) feature planes, used to encode visual shapes, spatial relationships and the motions of objects; (2) phase-locking of neural firing, to represent focus of attention and to bind objects across multiple feature planes; and (3) a method for encoding temporal relationships, so that DETE can learn utterances involving the immediate past and future. We compare DETE to related models and discuss the implications of this approach for language-learning research.

14.
This paper introduces a connectionist model of cognitive map formation and use which performs wayfinding tasks. This model is at a higher level of cognitive function than much connectionist work: its units are each at the level of an already trained backpropagation pattern recognizer. Although similar in certain respects to Hendler's work, the model described herein offers several additional features: first, it is a connectionist model; secondly, it learns relationships via a modified Hebbian learning rule and so does not need to input a database; thirdly, spreading activation is an integral part of the model. The model introduced here also differs from backpropagation models in two important respects. First, it does not require correct training input; rather, it learns from ordinary experience. Secondly, it does not converge to a fixed point or equilibrium state; thus, more sophisticated mechanisms are required to control the network's activity. In networks built through the use of a Hebbian learning rule, fatigue and three types of inhibition combine to cause activity to reliably coalesce in units that represent suitable subgoals, or partial solutions, for presented wayfinding problems.
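A minimal sketch of two of the ingredients named above: a Hebbian-style rule that learns link strengths from ordinary experience, and spreading activation over the learned links. The place names, the bounded update, and the decay constant are all hypothetical, not the paper's actual formulation:

```python
import numpy as np

# Hypothetical places; link weights strengthen when two places are
# experienced in succession (a bounded Hebbian co-occurrence rule).
places = ["home", "park", "shop", "office"]
idx = {p: i for i, p in enumerate(places)}
W = np.zeros((len(places), len(places)))

def experience(route, lr=0.2):
    # Learning from ordinary experience: no database of facts is input,
    # only sequences of visited places.
    for a, b in zip(route, route[1:]):
        W[idx[a], idx[b]] += lr * (1 - W[idx[a], idx[b]])
        W[idx[b], idx[a]] = W[idx[a], idx[b]]  # links are symmetric here

def spread(start, steps=3, decay=0.5):
    # Spreading activation from a start place over the learned links.
    a = np.zeros(len(places))
    a[idx[start]] = 1.0
    for _ in range(steps):
        a = np.maximum(a, decay * (W @ a))
    return a

for _ in range(10):
    experience(["home", "park", "shop"])  # a frequently travelled route
experience(["home", "office"])            # travelled once

act = spread("home")
```

Frequently co-experienced places end up with stronger links, so activation spreading from `home` favours the well-travelled route; the paper's fatigue and inhibition mechanisms (not sketched here) are what make such activity coalesce on subgoal units.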

15.
16.
A brief review of studies into the psychology of melody perception leads to the conclusion that melodies are represented in long-term memory as sequences of specific items, either intervals or scale notes; the latter representation is preferred. Previous connectionist models of musical-sequence learning are discussed and criticized as models of perception. The Cohen-Grossberg masking field (Cohen & Grossberg, 1987) is described and it is shown how it can be used to generate melodic expectations when incorporated within an adaptive resonance architecture. An improved formulation, the SONNET 1 network (Nigrin, 1990, 1992), is described in detail and modifications are suggested. The network is tested on its ability to learn short melodic phrases taken from a set of simple melodies, before being applied to the learning of the melodies themselves. Mechanisms are suggested for sequence recognition and sequence recall. The advantages of this approach to sequence learning are discussed.

17.
In this paper, we describe the Parallel Race Network (PRN), a race model with the ability to learn stimulus-response associations using a formal framework that is very similar to the one used by traditional connectionist networks. The PRN assumes that the connections represent abstract units of time rather than strengths of association. Consequently, the connections in the network indicate how rapidly the information should be sent to an output unit. The decision is based on a race between the outputs. To make learning functional and autonomous, the Delta rule was modified to fit the time-based assumption of the PRN. Finally, the PRN is used to simulate an identification task and the implications of its mode of representation are discussed.
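The time-based idea, connections as latencies that race rather than strengths that sum, can be sketched as follows (the learning rule below is an illustrative time-based variant of the Delta rule, not the PRN's published equations, and all constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 4, 3
# Connections hold latencies (abstract units of time), not association
# strengths: smaller values mean the signal reaches that output sooner.
latency = rng.uniform(1.0, 5.0, size=(n_out, n_in))

def race(x):
    # Each output's arrival time is the mean latency over the active
    # inputs; the earliest arrival wins the race and is the response.
    active = x.astype(bool)
    times = latency[:, active].mean(axis=1)
    return int(np.argmin(times)), times

def learn(x, target, lr=0.3):
    # Time-based Delta-rule variant (a sketch): shorten latencies from
    # active inputs to the correct output, lengthen them to an output
    # that won the race incorrectly.
    winner, _ = race(x)
    active = x.astype(bool)
    latency[target, active] -= lr
    if winner != target:
        latency[winner, active] += lr
    np.clip(latency, 0.1, None, out=latency)

# A toy identification task: each one-hot stimulus maps to a category.
stimuli = np.eye(n_in)
targets = [0, 1, 2, 0]
for _ in range(50):
    for x, t in zip(stimuli, targets):
        learn(x, t)
```

After training, the correct output wins every race; note that "knowledge" here lives entirely in arrival times, which is the representational shift the abstract describes.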

18.
Rationalism has been referred to as the tradition of explaining cognition in terms of logical structures. Much of the work in traditional AI can be seen within a rationalistic framework. Because of the problems with traditional AI, connectionist models have been proposed as an alternative. Connectionist models do solve a number of problems of AI in interesting ways, e.g. learning, generalization, and fault and noise tolerance. However, they do not automatically provide solutions to the basic conceptual problems which can be traced back to a neglect of the relation of AI systems with the real world. We will argue that if we are to make progress in the understanding of (intelligent) behavior the real issue is not whether connectionism is a better paradigm for cognitive science than traditional AI but whether a rationalistic perspective is appropriate and, if not, what the alternatives are. It is suggested that studying physically instantiated autonomous agents is an important step. However, we will show that building autonomous agents alone does not solve the problem either. What is needed is an appropriate embedding in a non-rationalistic framework. We will discuss a potential solution using an approach we have been developing in our group, called ‘distributed adaptive control’.

19.
RON SUN, Connection Science, 1992, 4(2): 93-124
This paper deals with the problem of variable binding in connectionist networks. Specifically, a more thorough solution to the variable binding problem based on the Discrete Neuron formalism is proposed, and a number of issues arising in the solution are examined in relation to logic: consistency checking, binding generation, unification and functions. We analyze what is needed in order to resolve these issues and, based on this analysis, a procedure is developed for systematically setting up connectionist networks for variable binding based on logic rules. This solution compares favorably to similar solutions in simplicity and completeness.

20.
This paper presents a modular connectionist network model of the development of seriation (sorting) in children. The model uses the cascade-correlation generative connectionist algorithm. These cascade-correlation networks do better than existing rule-based models at developing through soft stage transitions, sorting more correctly with larger stimulus size increments and showing variation in seriation performance within stages. However, the full generative power of cascade-correlation was not found to be a necessary component for successfully modelling the development of seriation abilities. Analysis of network weights indicates that improvements in seriation are due to continuous small changes instead of the radical restructuring suggested by Piaget. The model suggests that seriation skills are present early in development and increase in precision during later development. The required learning environment has a bias towards smaller and nearly ordered arrays. The variability characteristic of children's performance arises from sorting subsets of the total array. The model predicts better sorting moves with more array disorder, and a dissociation between which element should be moved and where it should be moved.
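The claim that variable performance arises from sorting subsets of the total array can be illustrated with a toy simulation (the window size, stick lengths, and step count are arbitrary choices for illustration, not the model's actual mechanism):

```python
import random

def seriate_step(arr, k=3):
    # Sort one randomly chosen window of k adjacent elements: a toy
    # version of operating on a subset of the array rather than the whole.
    i = random.randrange(len(arr) - k + 1)
    arr[i:i + k] = sorted(arr[i:i + k])

def n_inversions(arr):
    # Count out-of-order pairs; 0 means the array is fully seriated.
    return sum(a > b for i, a in enumerate(arr) for b in arr[i + 1:])

random.seed(0)
sticks = [5, 2, 7, 1, 6, 3, 4]
history = [n_inversions(sticks)]
for _ in range(40):
    seriate_step(sticks)
    history.append(n_inversions(sticks))
# Improvement is gradual: each local sort removes at most a few
# inversions, so progress looks like continuous small changes rather
# than a single radical restructuring.
```

Because a local sort never increases the total inversion count but which window gets sorted is random, the trajectory shows exactly the mix of steady improvement and trial-to-trial variability the abstract describes.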
