Similar Documents
20 similar documents found.
1.
This paper specifies the main features of connectionist and brain-like connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of networks exploiting such structures (e.g. local receptive fields, global convergence-divergence). The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g. houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation learning, i.e. the growth of new links and possibly nodes, subject to brain-like topological constraints. The information-processing transforms discovered through feedback-guided generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g. letters of the alphabet, cups, apples, bananas) through generation and reweighting of transforms. These show large improvements over networks that either lack brain-like structure and/or learn by reweighting of links alone. It is concluded that brain-like structures and generation learning can significantly increase the power of connectionist models.
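As a rough illustration of the successively converging, local-receptive-field structure that Recognition Cones exploit (a toy pooling pyramid, not the authors' model; the window size and averaging rule are assumptions):

```python
import numpy as np

def converge(layer, window=2):
    """One convergence step: each parent unit pools a window x window
    patch of its children, halving the resolution of the layer."""
    h, w = layer.shape
    h2, w2 = h // window, w // window
    patches = layer[:h2 * window, :w2 * window].reshape(h2, window, w2, window)
    return patches.mean(axis=(1, 3))    # local receptive field -> one parent

def recognition_pyramid(image, levels=4):
    """Successively smaller layers; the apex 'sees' the whole image."""
    layers = [image]
    for _ in range(levels - 1):
        layers.append(converge(layers[-1]))
    return layers

image = np.random.rand(32, 32)          # stand-in for a digitized photograph
for i, layer in enumerate(recognition_pyramid(image)):
    print(f"level {i}: {layer.shape}")  # 32x32 -> 16x16 -> 8x8 -> 4x4
```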

2.
Two experiments replicated the ‘inverse base-rate effect’ reported in categorization studies by Medin & Edelson. In Experiment 1, subjects were presented with the case histories of hypothetical medical patients and had to diagnose which illness they thought each patient was suffering from on the basis of the symptoms they had. On some trials (AB→1), patients had two symptoms, A and B, and the correct diagnosis was disease 1. On other trials (AC→2), patients had symptoms A and C and the correct diagnosis was disease 2. Feedback was provided on each trial about the correct diagnosis. Symptom A was common to both diseases, but subjects saw more AB→1 than AC→2 trials. On subsequent test trials, subjects were more likely to choose disease 1 than disease 2 as their diagnosis for patients who had just symptom A, in accordance with the base rates of the two diseases. However, on test trials where patients had both symptoms B and C, which had never previously occurred together, subjects were more likely to choose disease 2, contrary not only to the underlying base rates but also to the predictions of a well-established connectionist model of categorization. A variety of alternative connectionist models are considered. In Experiment 2 it was found that a necessary condition for the inverse base-rate effect is that symptom A is more strongly associated with disease 1 than disease 2, which is consistent with an associative learning account that appeals to the notion of competition between symptoms. A new connectionist learning model, using a learning algorithm based on Wagner's theory of associative learning, is shown to be able to reproduce the main results.
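To make the design concrete, here is a minimal delta-rule (Rescorla-Wagner-style) cue-competition learner run on the AB→1 / AC→2 schedule; this is a generic stand-in, not the paper's Wagner-based model, and the 3:1 trial ratio and learning rate are assumptions:

```python
import random
import numpy as np

random.seed(0)
alpha = 0.05                     # learning rate (assumption)
idx = {"A": 0, "B": 1, "C": 2}
V = np.zeros((3, 2))             # associative strength: symptom x disease

# 3:1 ratio of AB->disease 1 to AC->disease 2 trials (ratio is an assumption)
design = [("AB", 0), ("AB", 0), ("AB", 0), ("AC", 1)]

for _ in range(30):              # 30 shuffled passes over the design
    random.shuffle(design)
    for cues, disease in design:
        present = [idx[c] for c in cues]
        prediction = V[present].sum(axis=0)          # present cues pool predictions
        target = np.eye(2)[disease]
        V[present] += alpha * (target - prediction)  # shared error: cue competition

def strengths(cues):
    return V[[idx[c] for c in cues]].sum(axis=0)

print("A alone:", strengths("A"))   # favours disease 1, matching the base rates
print("B + C  :", strengths("BC"))  # comes out roughly equal in typical runs;
                                    # humans instead prefer disease 2, the
                                    # inverse base-rate effect this learner misses
```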

3.
The paper demonstrates how algorithmic information theory can be used elegantly as a powerful tool for analyzing the dynamics of connectionist systems. It is shown that simple structures of connectionist systems, even very large ones, are unable to significantly ease the problem of learning complex functions. Nor would the development of new learning algorithms essentially change this situation. Lower and upper bounds are given for the number of examples needed to learn complex concepts. The bounds are proved with respect to the notion of probably approximately correct (PAC) learning. It is proposed to use algorithmic information theory for further studies of network dynamics.
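For orientation, a standard sample-complexity bound from the PAC framework the paper works in (this is the generic finite-hypothesis-class bound, not necessarily the paper's algorithmic-information-theoretic bound): a consistent learner over a finite class $\mathcal{H}$ needs

$$ m \;\ge\; \frac{1}{\varepsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right) $$

examples to achieve error at most $\varepsilon$ with probability at least $1-\delta$. The paper's argument can be read in this light: with $\ln|\mathcal{H}|$ replaced by a measure of the target function's algorithmic complexity, complex functions force $m$ to be large regardless of the network's structure or learning algorithm.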

4.
This paper introduces a connectionist model of cognitive map formation and use which performs wayfinding tasks. This model operates at a higher level of cognitive function than much connectionist work: its units are each at the level of an already-trained backpropagation pattern recognizer. Although similar in certain respects to Hendler's work, the model described here offers several additional features: first, it is a connectionist model; second, it learns relationships via a modified Hebbian learning rule and so does not need a database as input; third, spreading activation is an integral part of the model. The model also differs from backpropagation models in two important respects. First, it does not require correct training input; rather, it learns from ordinary experience. Second, it does not converge to a fixed point or equilibrium state, so more sophisticated mechanisms are required to control the network's activity. In networks built with the Hebbian learning rule, fatigue and three types of inhibition combine to make activity reliably coalesce in units that represent suitable subgoals, or partial solutions, for the presented wayfinding problems.
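A loose sketch of the activity-control idea, spreading activation checked by fatigue and a crude stand-in for inhibition (the graph, the constants, and the normalization are all assumptions, not the paper's mechanisms):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# Symmetric link weights standing in for Hebbian-learned spatial relations
W = rng.random((n, n)) * (rng.random((n, n)) < 0.3)
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

act = np.zeros(n)
fatigue = np.zeros(n)
act[[0, 5]] = 1.0                           # clamp start and goal of a problem

for t in range(20):
    spread = W @ act                        # spreading activation along links
    act = np.clip(0.8 * act + 0.2 * spread - fatigue, 0.0, 1.0)
    fatigue = 0.9 * fatigue + 0.05 * act    # active units tire, freeing others
    act /= act.max() + 1e-9                 # crude stand-in for global inhibition

print("final activity:", np.round(act, 2))  # activity concentrates on a few units
```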

5.
This paper describes a hybrid model which integrates symbolic and connectionist techniques for the analysis of noun phrases. Our model consists of three levels: (1) a distributed connectionist level, (2) a localist connectionist level, and (3) a symbolic level. While most current systems in natural language processing use techniques from only one of these three levels, our model takes advantage of the virtues of all three processing paradigms. The distributed connectionist level provides a learned semantic memory model. The localist connectionist level integrates semantic and syntactic constraints. The symbolic level is responsible for restricted syntactic analysis and concept extraction. We conclude that a hybrid model is potentially stronger than models that rely on only one processing paradigm.

6.
Recurrent neural networks readily process, learn and generate temporal sequences. In addition, they have been shown to have impressive computational power. Recurrent neural networks can be trained with symbolic string examples, encoded as temporal sequences, to behave like sequential finite state recognizers. We discuss methods for extracting, inserting and refining symbolic grammatical rules for recurrent networks: how rules are inserted into recurrent networks, how they affect training and generalization, and how those rules can be checked and corrected. The capability of exchanging information between a symbolic representation (grammatical rules) and a connectionist representation (trained weights) has interesting implications. After partially known rules are inserted, recurrent networks can be trained to preserve the inserted rules that were correct and to correct, through training, inserted rules that were ‘incorrect’, i.e. inconsistent with the training data.
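A sketch of one common rule-insertion scheme from this literature, programming known DFA transitions into a second-order recurrent network with one-hot states (the encoding and the insertion strength H are assumptions, not necessarily this paper's): the untrained network already behaves like the partial automaton, and H remains adjustable by later training.

```python
import numpy as np

# Known (possibly partial) DFA over {0,1}: (state, symbol) -> next state
rules = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # parity automaton
n_states, n_symbols, H = 2, 2, 4.0      # H = insertion strength (assumption)

# Second-order weights W[next, current, symbol], biased against with -H
W = np.full((n_states, n_states, n_symbols), -H)
for (s, a), t in rules.items():
    W[t, s, a] = H                      # program the known transitions

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run(string):
    state = np.eye(n_states)[0]         # one-hot start state
    for ch in string:
        sym = np.eye(n_symbols)[int(ch)]
        state = sigmoid(np.einsum("tsa,s,a->t", W, state, sym))
        state = state / state.sum()     # keep the state vector near one-hot
    return state

print(run("1101"))   # strongest activation on state 1: odd number of 1s
```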

7.
Rumelhart et al. (1986b) proposed a model of how symbolic processing may be achieved by parallel distributed processing (PDP) networks. Their idea is tested by training two types of recurrent networks to learn to add two numbers of arbitrary lengths. This turned out to be a fruitful exercise. We demonstrate: (1) that networks can learn simple programming constructs such as sequences, conditional branches and while loops; (2) that by ‘going sequential’ in this manner, we are able to process arbitrarily long problems; (3) a manipulation of the training environment, called combined subset training (CST), that was found to be necessary to acquire a large training set; and (4) a power difference between simple recurrent networks and Jordan networks, demonstrated by a simple procedure that one can learn and the other cannot.
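To make ‘going sequential’ concrete (a framing sketch, not the paper's networks): arbitrary-length addition becomes a temporal task when digit pairs arrive least-significant first and one sum digit is emitted per step, with the carry held in recurrent state. The generator below produces the input/target sequences such a network would be trained on; the digit-pair encoding is an assumption.

```python
def addition_as_sequence(a: int, b: int):
    """Yield ((digit_a, digit_b), sum_digit) pairs, least-significant first.
    A recurrent network trained on such sequences must hold the carry in
    its state, which is what lets it generalize to any operand length."""
    carry = 0
    while a or b or carry:
        da, db = a % 10, b % 10
        total = da + db + carry
        yield (da, db), total % 10
        carry = total // 10
        a, b = a // 10, b // 10

for step in addition_as_sequence(907, 195):
    print(step)   # ((7, 5), 2), ((0, 9), 0), ((9, 1), 1), ((0, 0), 1) -> 1102
```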

8.
There is an apparent anomaly in the notion that connectionism, which is fundamentally a new technology, has considerable philosophical significance. Nonetheless, connectionism has been widely viewed as having implications for symbol grounding, notions of structured representation and compositionality, as well as the issue of nativism. In this paper, we consider each of these issues in detail and find that the current state of connectionism does not warrant the magnitude of many of the philosophical conclusions drawn from it. We argue that connectionist models are no more ‘grounded’ than their classical counterparts. In addition, since connectionist representations typically are ascribed content through semantic interpretation based on correlation, connectionism is prone to a number of well-known philosophical problems facing any kind of correlational semantics. However, we suggest that philosophy may be ill-advised to ignore the development of connectionism, particularly if connectionist systems prove able to learn to handle structured representations.

9.
Fodor and Pylyshyn argued that connectionist models could not be used to exhibit and explain a phenomenon that they termed systematicity, which they explained by possession of a compositional syntax and semantics for mental representations and structure sensitivity of mental processes. This inability of connectionist models, they argued, was particularly serious since it meant that these models could not serve as alternatives to classical symbolic models in explaining cognition. In this paper, a connectionist model is used to identify some properties which collectively show that connectionist networks supply the means for accomplishing a stronger version of systematicity than Fodor and Pylyshyn opted for. It is argued that ‘context-dependent systematicity’ is achievable within a connectionist framework. The arguments put forward rest on a particular formulation of the content and context of connectionist representation, firmly and technically based on connectionist primitives in a learning environment. The perspective is motivated by the fundamental differences between the connectionist and classical architectures, in terms of prerequisites, lower-level functionality and inherent constraints. The claim is supported by a set of experiments using a connectionist architecture that demonstrates both an ability to enforce what Fodor and Pylyshyn term systematic and nonsystematic processing using a single mechanism, and an ability to handle novel items without prior classification. The claim relies on extended learning feedback, which enforces representational context dependence.

10.
This paper presents a modular connectionist network model of the development of seriation (sorting) in children. The model uses the cascade-correlation generative connectionist algorithm. These cascade-correlation networks do better than existing rule-based models at developing through soft stage transitions, sorting more correctly with larger stimulus size increments and showing variation in seriation performance within stages. However, the full generative power of cascade-correlation was not found to be a necessary component for successfully modelling the development of seriation abilities. Analysis of network weights indicates that improvements in seriation are due to continuous small changes instead of the radical restructuring suggested by Piaget. The model suggests that seriation skills are present early in development and increase in precision during later development. The required learning environment has a bias towards smaller and nearly ordered arrays. The variability characteristic of children's performance arises from sorting subsets of the total array. The model predicts better sorting moves with more array disorder, and a dissociation between which element should be moved and where it should be moved.
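For reference, the quantity that cascade-correlation maximizes when recruiting a new hidden unit, in the standard Fahlman and Lebiere formulation (the paper may use a variant), is the magnitude of covariance between the candidate's activation $V_p$ and the residual error $E_{p,o}$ over patterns $p$ and outputs $o$:

$$ S \;=\; \sum_{o} \Bigl|\, \sum_{p} \bigl(V_p - \bar{V}\bigr)\bigl(E_{p,o} - \bar{E}_o\bigr) \Bigr| $$

The winning candidate is installed with its input weights frozen; this recruitment step is the ‘generative’ power that the analysis above found unnecessary for modelling seriation.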

11.
Classic barriers to using auto-associative neural networks to model mammalian memory include the unrealistically high synaptic connectivity of fully connected networks, and the relative paucity of information that has been stored in networks with realistic numbers of synapses per neuron and learning rules amenable to physiological implementation. We describe extremely large auto-associative networks with low synaptic density. The networks have no direct connections between neurons of the same layer. Rather, the neurons of one layer are ‘linked’ by connections to neurons of some other layer. Patterns of projection of one layer on to another which form projective planes, or other cognate geometries, confer considerable computational power on the network.
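A sketch of the projective-plane wiring idea (a standard construction over GF(p), assumed here rather than taken from the paper): with n = p² + p + 1 neurons per layer, each neuron links to only p + 1 neurons of the other layer, yet any two neurons of one layer share exactly one common neighbour, so connectivity stays sparse but well mixed.

```python
import numpy as np
from itertools import product

p = 3                                    # plane order (assumption); n = p^2+p+1 = 13

def normalize(v):
    """Canonical representative of a projective point over GF(p)."""
    for x in v:
        if x % p:
            inv = pow(int(x), -1, p)     # scale so first nonzero coord is 1
            return tuple((inv * y) % p for y in v)
    return None                          # the zero vector is not a point

points = sorted({normalize(v) for v in product(range(p), repeat=3)} - {None})
n = len(points)                          # 13 'neurons' in each of the two layers

# Layer-A neuron i links to layer-B neuron j iff <points[i], points[j]> = 0 mod p
A = np.array([[int(sum(a * b for a, b in zip(u, v)) % p == 0) for v in points]
              for u in points])

print(n, A.sum(axis=1))                  # each neuron has exactly p+1 = 4 links
print(int(np.dot(A[0], A[1])))           # any two neurons share exactly 1 neighbour
```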

12.
Current work on connectionist models has focused largely on artificial neural networks that are inspired by the networks of biological neurons in the human brain. However, there are also other connectionist architectures that differ significantly from this biological exemplar. In a previous paper (Li and Purvis 1999, Annals of Mathematics and Artificial Intelligence, 26: 1-4) we proposed a novel connectionist learning architecture inspired by the physics of optical coatings consisting of multiple layers of thin films. The proposed model differs significantly from the widely used neuron-inspired models. With thin-film layer thicknesses serving as the adjustable parameters (in place of the connection weights of a neural network), the optical thin-film multilayer model (OTFM) is capable of approximating virtually any kind of highly nonlinear mapping. The OTFM is not a physical implementation using optical devices; rather, it is proposed as a new connectionist learning architecture with distinct optical properties as compared with neural networks. In this paper we focus on a detailed comparison of neural networks and the OTFM (Li 2001, Proceedings of INNS-IEEE International Joint Conference on Neural Networks, Washington, DC, pp. 1727-1732). We describe the architecture of the OTFM and show how it can be viewed as a connectionist learning model. We then present experimental results on a classification problem and a time-series prediction problem, tasks typical of conventional connectionist architectures, to demonstrate the OTFM's learning capability.
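For a feel of the underlying physics (a textbook characteristic-matrix computation for a thin-film stack at normal incidence; the OTFM's actual input/output encoding is not shown, and all values here are assumptions): reflectance is a smooth, highly nonlinear function of the layer thicknesses, which play the role of the model's adjustable parameters.

```python
import numpy as np

def reflectance(thicknesses, indices, n0=1.0, ns=1.52, wavelength=550.0):
    """Normal-incidence reflectance of a multilayer via characteristic matrices.
    thicknesses in nm; one refractive index per layer; ns = substrate index."""
    M = np.eye(2, dtype=complex)
    for d, n in zip(thicknesses, indices):
        delta = 2 * np.pi * n * d / wavelength          # phase thickness of layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, ns])
    r = (n0 * B - C) / (n0 * B + C)                     # amplitude reflectance
    return abs(r) ** 2

# The output responds nonlinearly to each 'learnable' thickness parameter:
print(reflectance([100.0, 120.0], indices=[2.3, 1.38]))   # assumed stack values
print(reflectance([100.0, 140.0], indices=[2.3, 1.38]))
```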

13.
In this two-part series, we explore how a perceptually based foundation for natural language semantics might be acquired, via association of sensory/motor experiences with verbal utterances describing those experiences. In Part 1, we introduce a novel neural network architecture, termed Katamic memory, which is inspired by the neurocircuitry of the cerebellum and which (a) exhibits rapid, robust sequence learning and recognition and (b) allows integrated learning and performance. These capabilities are due to novel neural elements, which model dendritic structure and function in greater detail than standard connectionist models do. In Part 2, we describe the DETE system, a massively parallel procedural/neural hybrid model that utilizes over 50 Katamic memory modules to perform two associative learning tasks: (a) verbal-to-visual/motor association: given a verbal sequence, DETE learns to regenerate a neural representation of the visual sequence being described and/or to carry out motor commands; and (b) visual/motor-to-verbal association: given a visual/motor sequence, DETE learns to produce a verbal sequence describing the visual input. DETE can learn verbal sequences describing spatial relations and motions of 2D ‘blob-like’ objects; in addition, the system can generalize to novel inputs. DETE has been tested successfully on small, restricted subsets of English and Spanish, languages that differ in inflectional properties, word order and how they categorize perceptual reality.

14.
Highly recurrent neural networks can learn reverberating circuits called Cell Assemblies (CAs). These networks can be used to categorize input, and this paper explores the ability of CAs to learn hierarchical categories. A simulator, based on spiking, fatiguing leaky integrators, is presented with instances of base categories. Learning is done using a compensatory Hebbian learning rule. The model takes advantage of overlapping CAs, where neurons may participate in more than one CA. Using the unsupervised compensatory learning rule, the networks learn a hierarchy of categories that correctly categorizes 97% of the basic-level presentations of the input in our test, and 100% of the super-categories. A larger hierarchy is learned that correctly categorizes 100% of base categories and 89% of super-categories. It is also shown how novel subcategories gain default information from their super-category. These simulations show that networks containing CAs can be used to learn hierarchical categories, and that the resulting network can successfully categorize novel inputs.
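A minimal sketch of a spiking, fatiguing leaky integrator of the general kind such simulators use (the constants and the reset rule are assumptions): activity leaks each step, a spike fires above threshold, and every spike adds fatigue that raises the effective threshold, which in an assembly lets other neurons take over.

```python
import numpy as np

rng = np.random.default_rng(2)
steps, leak, theta = 60, 0.7, 1.0
fatigue_gain, fatigue_decay = 0.25, 0.95   # assumed constants

v, fatigue = 0.0, 0.0
spikes = []
for t in range(steps):
    v = leak * v + rng.uniform(0.2, 0.5)   # leaky integration of random drive
    if v > theta + fatigue:                # fatigue raises the firing threshold
        spikes.append(t)
        fatigue += fatigue_gain            # each spike tires the neuron
        v = 0.0                            # reset after spiking
    fatigue *= fatigue_decay               # fatigue slowly wears off

print("spike times:", spikes)              # fatigue spaces the spikes out
```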

15.
Recursive auto-associative memory (RAAM) has become established in the connectionist literature as a key contribution in the effort to develop connectionist representations of symbol structures. However, RAAMs use the backpropagation algorithm and can therefore be difficult to train and slow to learn. In addition, it is often hard to analyze exactly what a network has learnt, and therefore difficult to state what composition mechanism a RAAM uses to construct representations. In this paper, we present an analytical version of RAAM, denoted simplified RAAM or (S)RAAM. (S)RAAM models a RAAM very closely in that a single constructor matrix is derived which can be applied recursively to construct connectionist representations of symbol structures. The derivation, like RAAM, exhibits a moving-target effect, because training patterns adjust during learning; unlike RAAM, however, the training is very fast. The analytical model allows a clear statement to be made about generalization characteristics, and it can be shown that, in practice, the model will converge.
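A toy illustration of the single-constructor-matrix mechanics behind (S)RAAM (the matrix here is random and its pseudo-inverse serves as the decoder, unlike the paper's derived constructor, so reconstruction is only approximate): two child vectors are compressed into one parent of the same width, which can itself serve as a child at the next level.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 16                                      # representation width (assumption)
W = rng.standard_normal((d, 2 * d)) / np.sqrt(2 * d)   # constructor matrix
W_dec = np.linalg.pinv(W)                   # stand-in decoder (pseudo-inverse)

def compose(left, right):
    """One constructor step: two children -> one parent of the same width."""
    return W @ np.concatenate([left, right])

def decompose(parent):
    kids = W_dec @ parent
    return kids[:d], kids[d:]

# Encode the binary tree ((A B) C) by applying the constructor recursively
A, B, C = (rng.standard_normal(d) for _ in range(3))
tree = compose(compose(A, B), C)

ab_hat, c_hat = decompose(tree)             # approximate reconstruction only:
print(np.corrcoef(c_hat, C)[0, 1])          # compression to d dims is lossy
```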

16.
Unsupervised topological ordering, similar to Kohonen's (1982, Biological Cybernetics, 43: 59-69) self-organizing feature map, was achieved in a connectionist module for competitive learning (a CALM Map) by internally regulating the learning rate and the size of the active neighbourhood on the basis of input novelty. In this module, winner-take-all competition and the ‘activity bubble’ are due to graded lateral inhibition between units. It tends to separate representations as far apart as possible, which leads to interpolation abilities and an absence of catastrophic interference when the interfering set of patterns forms an interpolated set of the initial data set. More than Kohonen maps, these maps provide an opportunity for building psychologically and neurophysiologically motivated multimodular connectionist models. As an example, the dual-pathway connectionist model for fear conditioning by Armony et al. (1997, Trends in Cognitive Science, 1: 28-34) was rebuilt and extended with CALM Maps. If the detection of novelty enhances memory encoding in a canonical circuit such as the CALM Map, this could explain the finding of large distributed networks for novelty detection in the brain (e.g. Knight and Scabini, 1998, Journal of Clinical Neurophysiology, 15: 3-13).
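A sketch of novelty-regulated self-organization (a plain Kohonen-style update whose learning rate and neighbourhood size are scaled by input novelty; the CALM Map achieves this regulation through graded lateral inhibition, which is not modelled here):

```python
import numpy as np

rng = np.random.default_rng(4)
n_units, dim = 20, 2
W = rng.random((n_units, dim))              # 1D map of weight vectors

def step(x, base_lr=0.5, base_sigma=3.0):
    d = np.linalg.norm(W - x, axis=1)
    win = int(d.argmin())
    novelty = float(np.tanh(d[win]))        # novel input -> larger winner error
    lr, sigma = base_lr * novelty, 1.0 + base_sigma * novelty
    units = np.arange(n_units)
    h = np.exp(-((units - win) ** 2) / (2 * sigma ** 2))  # active neighbourhood
    W[:] += lr * h[:, None] * (x - W)       # novelty scales rate and bubble size

for _ in range(2000):
    step(rng.random(2))

print(np.round(W[:5], 2))   # neighbouring units end up with similar weights
```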

17.
With the breakthrough development of deep learning models, represented by deep neural networks, and benefiting from more powerful computers, larger data sets, and techniques for training deeper networks, deep learning has found wide application in intelligent manufacturing domains such as intelligent welding. This paper surveys research progress in applying deep learning to welding process control and weld defect detection. Current studies show that deep learning methods can improve the accuracy of real-time welding process control and the recognition accuracy of welding defects.

18.
This paper deals with the integration of neural and symbolic approaches. It focuses on associative memories, where a connectionist architecture tries to provide a storage and retrieval component for the symbolic level. In this light, the classic model for associative memory, the Hopfield network, is briefly reviewed. Then a new model for associative memory, the hybrid Hopfield-clique network, is presented in detail. Its application to a typically symbolic task, the post-processing of the output of an optical character recognizer, is also described. In the author's view, the hybrid Hopfield-clique network constitutes an example of a successful integration of the two approaches: it uses a symbolic learning scheme to train a connectionist network, and through this integration it can provide perfect storage and recall. As a conclusion, an analysis of what can be learned from this specific architecture is attempted. In the case of this model, a guarantee of perfect storage and recall can only be given because it was possible to analyze the problem using the well-defined symbolic formalism of graph theory. In general, we think that finding an adequate formalism for a given problem is an important step towards solving it.
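For context, a minimal classical Hopfield network with Hebbian outer-product storage, the model the paper reviews (sizes and load here are arbitrary); unlike the hybrid Hopfield-clique network, this plain version offers no guarantee of perfect storage and recall:

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_patterns = 64, 4
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian outer-product storage; zero diagonal (no self-connections)
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def recall(x, steps=10):
    x = x.copy()
    for _ in range(steps):                  # synchronous threshold updates
        x = np.where(W @ x >= 0, 1, -1)
    return x

probe = patterns[0].copy()
flip = rng.choice(n, size=8, replace=False)
probe[flip] *= -1                           # corrupt 8 of the 64 bits
print(np.array_equal(recall(probe), patterns[0]))   # usually True at this load
```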

19.
We have used connectionist simulations in an attempt to understand how orientation-tuned units similar to those found in the visual cortex can be used to perform psychophysical tasks involving absolute identification of stimulus orientation. In one task, the observer (or the network) was trained to identify which of two possible orientations had been presented, whereas in a second task there were 10 possible orientations to be identified. By determining asymptotic performance levels with stimuli separated to different extents, it is possible to generate a psychophysical function relating identification performance to stimulus separation. Comparing the performance functions of neural networks with those found for human subjects performing equivalent tasks led us to the following conclusions. First, the ‘psychometric functions’ generated for the networks could accurately mimic the performance of the human observers. Second, the most important orientation-selective units in such tasks are not the most active ones (as is often assumed); rather, the most important units were those selective for orientations offset 15° to 20° to either side of the test stimuli. Such data reinforce recent psychophysical and neurophysiological findings suggesting that orientation coding in the visual cortex should be thought of in terms of distributed coding. Finally, if the same set of input units was used in the two-orientation and the 10-orientation situations, it became apparent that, in order to explain the difference in performance between the two cases, it was necessary to use either a network without hidden units or one with a very small number of such units. If more hidden units were available, performance in the 10-orientation case was too good to fit the human data. Such results cast doubt on the hypothesis that hidden units need to be trained in order to account for simple perceptual learning in humans.
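A sketch of the input representation such simulations typically use, Gaussian orientation tuning curves (bandwidth and spacing are assumptions): for identifying stimuli near 0°, the units whose responses change fastest, and hence carry the most information, are tuned roughly one bandwidth to either side, not at the stimulus orientation itself.

```python
import numpy as np

prefs = np.arange(-90, 90, 10)           # preferred orientations, deg (assumed)
sigma = 18.0                             # tuning bandwidth, deg (assumed)

def responses(theta):
    d = (prefs - theta + 90) % 180 - 90  # wrap orientation difference to +/-90 deg
    return np.exp(-d**2 / (2 * sigma**2))

# Which units change most when the stimulus moves from 0 to 1 degree?
delta = responses(1.0) - responses(0.0)
order = np.argsort(-np.abs(delta))
print("most active unit:     ", prefs[int(np.argmax(responses(0.0)))])  # 0 deg
print("most informative unit:", prefs[order[0]])   # tuned ~sigma away from 0 deg
```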

20.
The study of numerical abilities, and how they are acquired, is being used to explore the continuity between ontogenesis and environmental learning. One technique that proves useful in this exploration is the artificial simulation of numerical abilities with neural networks, using different learning paradigms to explore development. A neural network simulation of subitization (sometimes referred to as visual enumeration) and of counting, a recurrent operation, has been developed using the so-called multi-net architecture. Our numerical ability simulations use two or more neural networks combining supervised and unsupervised learning techniques to model subitization and counting. Subitization has been simulated using networks employing unsupervised self-organizing learning, the results of which agree with infant subitization experiments and are comparable with supervised neural network simulations of subitization reported in the literature. Counting has been simulated using a multi-net system of supervised static and recurrent backpropagation networks that learn their individual tasks within an unsupervised, competitive framework. The developmental profile of the counting simulation shows similarities to that of children learning to count, and demonstrates how neural networks can learn to be combined, in a process that models development.
