Similar Documents
20 similar documents found (search time: 671 ms)
1.
The difference between integral and separable interaction of dimensions is a classic problem in cognitive psychology (Garner 1970, American Psychologist, 25: 350-358; Shepard 1964, Journal of Mathematical Psychology, 1: 54-87) and remains an essential component of most current experimental and theoretical analyses of category learning (e.g. Ashby and Maddox 1994, Journal of Mathematical Psychology, 38: 423-466; Goldstone 1994, Journal of Experimental Psychology: General, 123: 178-200; Kruschke 1993, Connection Science, 5: 3-36; Melara et al. 1993, Journal of Experimental Psychology: Human Perception & Performance, 19: 1082-1104; Nosofsky 1992, Multidimensional Models of Perception and Cognition, Hillsdale, NJ: Lawrence Erlbaum). So far the problem has been addressed through post hoc analysis, in which empirical evidence of integral and separable processing is used to fit human data, showing how the impact of a pair of dimensions interacting in an integral or a separable manner enters into later learning processes. In this paper, we argue that a mechanistic connectionist explanation for variations in dimensional interactions can provide a new perspective through exploration of how similarities between stimuli are transformed from physical to psychological space when learning to identify, discriminate and categorize them. We substantiate this claim by demonstrating how even a standard backpropagation network combined with a simple image-processing Gabor filter component provides limited but clear potential to process monochromatic stimuli composed of integral pairs of dimensions differently from monochromatic stimuli composed of separable pairs of dimensions. Interestingly, the responses from Gabor filters are shown already to capture most of the dimensional interaction, which in turn can be operated upon by the neural network during a given learning task.
In addition, we introduce a basic attention mechanism to back-propagation that gives it the ability to attend selectively to relevant dimensions, and illustrate how this serves the model in solving a filtration versus condensation task (Kruschke 1993, Connection Science, 5: 3-36). The model may serve as a starting point in characterizing the general properties of the human perceptual system that cause some pairs of physical dimensions to be treated as integrally interacting and other pairs as separable. An improved understanding of these properties will aid studies in perceptual and category learning, selective attention effects and influences of higher cognitive processes on initial perceptual representations.
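As an illustration of the Gabor filter front end described above, here is a minimal sketch (not the authors' implementation) of how a small bank of orientation-tuned Gabor filters re-codes a pixel image into a few energy values; the filter parameters (size, wavelength, sigma) are hypothetical choices for the example:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a sinusoid windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the filter's orientation
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_energies(image, thetas, size=9, wavelength=4.0, sigma=2.0):
    """Total squared response per orientation: a crude re-coding of the
    stimulus from pixel space into a small feature space."""
    h, w = image.shape
    energies = []
    for theta in thetas:
        k = gabor_kernel(size, wavelength, theta, sigma)
        e = 0.0
        for i in range(h - size + 1):            # valid-mode cross-correlation
            for j in range(w - size + 1):
                e += float((image[i:i + size, j:j + size] * k).sum()) ** 2
        energies.append(e)
    return np.array(energies)

# A grating varying along x responds most to the theta = 0 filter.
img = np.tile(np.cos(2.0 * np.pi * np.arange(16) / 4.0), (16, 1))
energy = gabor_energies(img, thetas=[0.0, np.pi / 2.0])
```

The two-number output already expresses a similarity structure over stimuli that a downstream network can learn on, which is the sense in which the filter stage "captures most of the dimensional interaction".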

2.
Current work on connectionist models has been focused largely on artificial neural networks that are inspired by the networks of biological neurons in the human brain. However, there are also other connectionist architectures that differ significantly from this biological exemplar. We proposed a novel connectionist learning architecture inspired by the physics associated with optical coatings of multiple layers of thin-films in a previous paper (Li and Purvis 1999, Annals of Mathematics and Artificial Intelligence, 26: 1-4). The proposed model differs significantly from the widely used neuron-inspired models. With thin-film layer thicknesses serving as adjustable parameters (as compared with connection weights in a neural network) for the learning system, the optical thin-film multilayer model (OTFM) is capable of approximating virtually any kind of highly nonlinear mapping. The OTFM is not a physical implementation using optical devices. Instead, it is proposed as a new connectionist learning architecture with its distinct optical properties as compared with neural networks. In this paper we focus on a detailed comparison of neural networks and the OTFM (Li 2001, Proceedings of INNS-IEEE International Joint Conference on Neural Networks, Washington, DC, pp. 1727-1732). We describe the architecture of the OTFM and show how it can be viewed as a connectionist learning model. We then present experimental results on solving a classification problem and a time series prediction problem that are typical of conventional connectionist architectures to demonstrate the OTFM's learning capability.
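The optical analogy can be made concrete. The sketch below is an assumption-laden illustration, not the published OTFM code: it computes the normal-incidence reflectance of a dielectric multilayer with the standard characteristic-matrix method, with the layer thicknesses playing the role of the adjustable parameters:

```python
import numpy as np

def stack_reflectance(thicknesses, indices, n_in=1.0, n_sub=1.52, wavelength=550.0):
    """Normal-incidence reflectance of a lossless dielectric multilayer
    (characteristic-matrix method). Thicknesses are in the same units
    as the wavelength (here nanometres)."""
    M = np.eye(2, dtype=complex)
    for d, n in zip(thicknesses, indices):
        phi = 2.0 * np.pi * n * d / wavelength            # phase thickness of the layer
        layer = np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
        M = M @ layer
    num = n_in * M[0, 0] + n_in * n_sub * M[0, 1] - M[1, 0] - n_sub * M[1, 1]
    den = n_in * M[0, 0] + n_in * n_sub * M[0, 1] + M[1, 0] + n_sub * M[1, 1]
    r = num / den
    return float(abs(r) ** 2)

# Bare glass substrate versus one quarter-wave high-index layer (TiO2-like).
bare = stack_reflectance([], [])
quarter = stack_reflectance([550.0 / (4.0 * 2.35)], [2.35])
```

Because reflectance is a smooth, highly nonlinear function of each thickness, adjusting the thickness vector can shape the stack's input-output mapping; that is the intuition behind treating a thin-film stack as a learning architecture.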

3.
The Symbolic Grounding Problem is viewed as a by-product of the classical cognitivist approach to studying the mind. In contrast, an epigenetic interpretation of connectionist approaches to studying the mind is shown to offer an account of symbolic skills as an emergent, developmental phenomenon. We describe a connectionist model of concept formation and vocabulary growth that auto-associates image representations and their associated labels. The image representations consist of clusters of random dot figures, generated by distorting prototypes. Any given label is associated with a cluster of random dot figures. The network model is tested on its ability to reproduce image representations given input labels alone (comprehension) and to identify labels given input images alone (production). The model implements several well-documented findings in the literature on early semantic development: the occurrence of over- and under-extension errors; a vocabulary spurt; a comprehension/production asymmetry; and a prototype effect. It is shown how these apparently disparate findings can be attributed to the operation of a single underlying mechanism, rather than requiring a separate explanation for each phenomenon. The model represents a first step in the direction of providing a formal explanation of the emergence of symbolic behaviour in young children.
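The stimulus construction is easy to mimic. Below is a hedged sketch of the prototype-distortion scheme (the dimensionality, item counts and noise level are invented for illustration); it also shows the prototype effect in miniature, in that the centroid of the trained distortions lies closer to the never-seen prototype than a typical training item does:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_category(prototype, n_items, noise):
    """Category members are random distortions of a shared prototype,
    in the spirit of random-dot figure categories."""
    return prototype + rng.normal(0.0, noise, size=(n_items, prototype.size))

proto = rng.normal(size=20)               # hypothetical dot-figure coordinates
items = make_category(proto, n_items=200, noise=0.5)

# Prototype effect in miniature: the category centroid sits nearer the
# (never presented) prototype than the average trained distortion does.
centroid = items.mean(axis=0)
d_centroid = float(np.linalg.norm(centroid - proto))
d_items = float(np.linalg.norm(items - proto, axis=1).mean())
```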

4.
Fodor and Pylyshyn argued that connectionist models could not be used to exhibit and explain a phenomenon that they termed systematicity, and which they explained by possession of a compositional syntax and semantics for mental representations and structure sensitivity of mental processes. This inability of connectionist models, they argued, was particularly serious since it meant that these models could not be used as alternative models to classical symbolic models to explain cognition. In this paper, a connectionist model is used to identify some properties which collectively show that connectionist networks supply means for accomplishing a stronger version of systematicity than Fodor and Pylyshyn called for. It is argued that 'context-dependent systematicity' is achievable within a connectionist framework. The arguments put forward rest on a particular formulation of content and context of connectionist representation, firmly and technically based on connectionist primitives in a learning environment. The perspective is motivated by the fundamental differences between the connectionist and classical architectures, in terms of prerequisites, lower-level functionality and inherent constraints. The claim is supported by a set of experiments using a connectionist architecture that demonstrates both an ability to enforce what Fodor and Pylyshyn term systematic and nonsystematic processing using a single mechanism, and how novel items can be handled without prior classification. The claim relies on extended learning feedback which enforces representational context dependence.

5.
This paper presents a modular connectionist network model of the development of seriation (sorting) in children. The model uses the cascade-correlation generative connectionist algorithm. These cascade-correlation networks do better than existing rule-based models at developing through soft stage transitions, sorting more correctly with larger stimulus size increments and showing variation in seriation performance within stages. However, the full generative power of cascade-correlation was not found to be a necessary component for successfully modelling the development of seriation abilities. Analysis of network weights indicates that improvements in seriation are due to continuous small changes instead of the radical restructuring suggested by Piaget. The model suggests that seriation skills are present early in development and increase in precision during later development. The required learning environment has a bias towards smaller and nearly ordered arrays. The variability characteristic of children's performance arises from sorting subsets of the total array. The model predicts better sorting moves with more array disorder, and a dissociation between which element should be moved and where it should be moved.
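Cascade-correlation recruits new hidden units by maximizing the covariance between a candidate unit's activation and the network's residual error. The toy sketch below illustrates that selection criterion only (it is simplified from Fahlman and Lebiere's formulation, and the data and candidate units are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def cascor_score(candidate_acts, residuals):
    """Cascade-correlation-style candidate score: the magnitude of the
    covariance between a candidate unit's activations and the residual
    error, summed over output units."""
    v = candidate_acts - candidate_acts.mean()
    e = residuals - residuals.mean(axis=0)
    return float(np.abs(v @ e).sum())

# Toy residual error that depends on input feature x0 only.
X = rng.normal(size=(100, 3))
residual = np.tanh(X[:, 0:1])        # structure the network has not yet explained
cand_good = np.tanh(X[:, 0])         # candidate tuned to the relevant feature
cand_bad = np.tanh(X[:, 1])          # candidate tuned to an irrelevant feature
score_good = cascor_score(cand_good, residual)
score_bad = cascor_score(cand_bad, residual)
```

The candidate with the higher score would be frozen and installed as a new hidden unit, which is how the algorithm grows its own topology during learning.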

6.
Recurrent neural networks readily process, learn and generate temporal sequences. In addition, they have been shown to have impressive computational power. Recurrent neural networks can be trained with symbolic string examples encoded as temporal sequences to behave like sequential finite state recognizers. We discuss methods for extracting, inserting and refining symbolic grammatical rules for recurrent networks. This paper discusses various issues: how rules are inserted into recurrent networks, how they affect training and generalization, and how those rules can be checked and corrected. The capability of exchanging information between a symbolic representation (grammatical rules) and a connectionist representation (trained weights) has interesting implications. After partially known rules are inserted, recurrent networks can be trained to preserve inserted rules that were correct and to correct, through training, inserted rules that were 'incorrect' (rules inconsistent with the training data).
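Rule insertion can be sketched concretely for a second-order recurrent network: known transitions delta(i, k) = j of a finite state recognizer are programmed into the weights before any training. The following is a minimal illustration, not the papers' exact scheme (the weight magnitude H and the renormalization step are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def insert_dfa(n_states, n_symbols, transitions, H=8.0):
    """Program known transitions delta(i, k) = j into second-order
    weights W[j, i, k]: strongly positive toward the prescribed next
    state, strongly negative everywhere else."""
    W = -H * np.ones((n_states, n_states, n_symbols))
    for (i, k), j in transitions.items():
        W[j, i, k] = +H
    return W

def run(W, symbols, start=0):
    """Drive the state vector with an input string; argmax reads out
    the (near one-hot) automaton state."""
    s = np.zeros(W.shape[0])
    s[start] = 1.0
    for k in symbols:
        s = sigmoid(W[:, :, k] @ s)   # second-order update, one slice per symbol
        s = s / s.sum()               # keep the state vector near one-hot
    return int(s.argmax())

# Parity automaton: state flips on symbol 0 ('a'), stays put on symbol 1 ('b').
delta = {(0, 0): 1, (1, 0): 0, (0, 1): 0, (1, 1): 1}
W = insert_dfa(n_states=2, n_symbols=2, transitions=delta)
```

Because the inserted rules live in ordinary weights, subsequent gradient training can preserve, refine or overwrite them, which is the information exchange the abstract describes.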

7.
The paper discusses a connectionist implementation of knowledge engineering concepts and concepts related to production systems in particular. Production systems are one of the most widely used artificial intelligence techniques as well as a widely explored model of cognition. The use of neural networks for building connectionist production systems opens the door for developing production systems with partial match and approximate reasoning. An architecture of a neural production system (NPS) and its third realization, NPS3, designed to facilitate approximate reasoning, are presented in the paper. NPS3 facilitates partial match between facts and rules, variable binding, different conflict resolution strategies and chain inference. Facts are represented in a working memory by so-called certainty degrees. Different inference control parameters are attached to every production rule. Some of them are known neuronal parameters, receiving an engineering meaning here. Others, which have their context in knowledge engineering, have been implemented in a connectionist way. The partial match implemented in NPS3 is demonstrated on the same test production system as used by other authors. The ability of NPS3 for approximate reasoning is illustrated by reasoning over a set of simple diagnostic productions and a set of decision support fuzzy rules.
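The flavour of partial match over certainty degrees can be sketched in a few lines. This is a deliberate simplification for illustration; NPS3's actual neuronal parameters and conflict resolution strategies are richer:

```python
def match_degree(conditions, working_memory, threshold=0.3):
    """Partial match with certainty degrees: a rule is satisfied to the
    degree of its weakest condition (min), gated by a noise threshold."""
    degree = min(working_memory.get(c, 0.0) for c in conditions)
    return degree if degree >= threshold else 0.0

def forward_chain(rules, working_memory, threshold=0.3):
    """One pass of chain inference: every sufficiently matched rule fires,
    writing its conclusion's certainty degree back into working memory."""
    for conditions, conclusion in rules:
        d = match_degree(conditions, working_memory, threshold)
        if d > working_memory.get(conclusion, 0.0):
            working_memory[conclusion] = d
    return working_memory

# Hypothetical diagnostic productions over graded facts.
rules = [(("fever", "cough"), "flu"),
         (("flu",), "rest")]
wm = {"fever": 0.9, "cough": 0.6}
wm = forward_chain(rules, wm)
```

A rule never has to match exactly: a fact held with certainty 0.6 still drives the chain, only more weakly, which is the essence of approximate reasoning in such systems.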

8.
In this two-part series, we explore how a perceptually based foundation for natural language semantics might be acquired, via association of sensory/motor experiences with verbal utterances describing those experiences. In Part 1, we introduce a novel neural network architecture, termed Katamic memory, that is inspired by the neurocircuitry of the cerebellum and that exhibits (a) rapid, robust sequence learning/recognition and (b) integrated learning and performance. These capabilities are due to novel neural elements, which model dendritic structure and function in greater detail than in standard connectionist models. In Part 2, we describe the DETE system, a massively parallel procedural/neural hybrid model that utilizes over 50 Katamic memory modules to perform two associative learning tasks: (a) verbal-to-visual/motor association—given a verbal sequence, DETE learns to regenerate a neural representation of the visual sequence being described and/or to carry out motor commands; and (b) visual/motor-to-verbal association—given a visual/motor sequence, DETE learns to produce a verbal sequence describing the visual input. DETE can learn verbal sequences describing spatial relations and motions of 2D 'blob-like' objects; in addition, the system can also generalize to novel inputs. DETE has been tested successfully on small, restricted subsets of English and Spanish—languages that differ in inflectional properties, word order and how they categorize perceptual reality.

9.
Unsupervised topological ordering, similar to Kohonen's (1982, Biological Cybernetics, 43: 59-69) self-organizing feature map, was achieved in a connectionist module for competitive learning (a CALM Map) by internally regulating the learning rate and the size of the active neighbourhood on the basis of input novelty. In this module, winner-take-all competition and the 'activity bubble' are due to graded lateral inhibition between units. It tends to separate representations as far apart as possible, which leads to interpolation abilities and an absence of catastrophic interference when the interfering set of patterns forms an interpolated set of the initial data set. More than Kohonen maps, these maps provide an opportunity for building psychologically and neurophysiologically motivated multimodular connectionist models. As an example, the dual pathway connectionist model for fear conditioning by Armony et al. (1997, Trends in Cognitive Science, 1: 28-34) was rebuilt and extended with CALM Maps. If the detection of novelty enhances memory encoding in a canonical circuit, such as the CALM Map, this could explain the finding of large distributed networks for novelty detection (e.g. Knight and Scabini, 1998, Journal of Clinical Neurophysiology, 15: 3-13) in the brain.
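The core regulation idea, a learning rate scaled by input novelty, can be sketched with plain winner-take-all competitive learning. This is a simplification of the CALM Map (no lateral inhibition dynamics, no neighbourhood), and the novelty index used here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_map(data, n_units, epochs=20, base_lr=0.5):
    """Competitive learning with a novelty-scaled learning rate: inputs
    far from every stored prototype (novel inputs) move the winning
    unit's weights further."""
    W = rng.normal(size=(n_units, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            d = np.linalg.norm(W - x, axis=1)
            win = int(d.argmin())
            novelty = d[win] / (1.0 + d[win])   # crude index in [0, 1)
            W[win] += base_lr * novelty * (x - W[win])
    return W

# Two well-separated clusters should end up owning one prototype each.
data = np.vstack([rng.normal(-3.0, 0.3, size=(50, 2)),
                  rng.normal(+3.0, 0.3, size=(50, 2))])
W = train_map(data, n_units=2)
```

Novelty-scaled learning means familiar inputs barely disturb stored representations, which is one way to avoid catastrophic interference while still encoding genuinely new material quickly.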

10.
This paper specifies the main features of connectionist and brain-like connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of networks exploiting such structures (e.g. local receptive fields, global convergence-divergence). The anatomy, physiology, behavior and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g. houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation learning, i.e. the growth of new links and, possibly, nodes, subject to brain-like topological constraints. The information processing transforms discovered through feedback-guided generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g. letters of the alphabet, cups, apples, bananas) through generation and reweighting of transforms. These show large improvements over networks that either lack brain-like structure and/or learn by reweighting of links alone. It is concluded that brain-like structures and generation learning can significantly increase the power of connectionist models.

11.
This paper analyses a three-layer connectionist network that solves a translation-invariance problem, offering a novel explanation for transposed letter effects in word reading. Analysis of the hidden unit encodings provides insight into two central issues in cognitive science: (1) What is the novelty of claims of “modality-specific” encodings? and (2) How can a learning system establish a complex internal structure needed to solve a problem? Although these topics (embodied cognition and learnability) are often treated separately, we find a close relationship between them: modality-specific features help the network discover an abstract encoding by causing it to break the initial symmetries of the hidden units in an effective way. While this neural model is extremely simple compared to the human brain, our results suggest that neural networks need not be black boxes and that carefully examining their encoding behaviours may reveal how they differ from classical ideas about the mind-world relationship.

12.
To recognize workers' assembly actions and to prevent product quality problems caused by non-standard assembly movements, this study investigates deep-learning-based assembly action recognition and proposes a channel-attention network model that fuses temporal and spatial information features to identify assembly actions. Surface electromyography signals are collected with a MYO armband sensor to build a dataset covering multiple assembly actions; a neural network for assembly action recognition is constructed, and the network model is evaluated…

13.
A connectionist architecture is developed that can be used for modeling choice probabilities and reaction times in identification tasks. The architecture consists of a feedforward network and a decoding module, and learning is by mean-variance back-propagation, an extension of the standard back-propagation learning algorithm. We suggest that the new learning procedure leads to a better model of human learning in simple identification tasks than does standard back-propagation. Choice probabilities are modeled by the input-output relations of the network and reaction times are modeled by the time taken for the network, particularly the decoding module, to achieve a stable state. In this paper, the model is applied to the identification of unidimensional stimuli; applications to the identification of multidimensional stimuli—visual displays and words—are mentioned and presented in more detail in other papers. The strengths and weaknesses of this connectionist approach vis-à-vis other approaches are discussed.

14.
Backpropagation (Rumelhart et al., 1986) was proposed as a general learning algorithm for multi-layer perceptrons. This article demonstrates that a standard version of backprop fails to attend selectively to input dimensions in the same way as humans, suffers catastrophic forgetting of previously learned associations when novel exemplars are trained, and can be overly sensitive to linear category boundaries. Another connectionist model, ALCOVE (Kruschke 1990, 1992), does not suffer those failures. Previous researchers identified these problems; the present article reports quantitative fits of the models to new human learning data. ALCOVE can be functionally approximated by a network that uses linear-sigmoid hidden nodes, like standard backprop. It is argued that models of human category learning should incorporate quasi-local representations and dimensional attention learning, as well as error-driven learning, to address simultaneously all three phenomena.
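ALCOVE's hidden layer is a set of exemplar nodes whose activation falls off with attention-weighted city-block distance to the stimulus. The minimal sketch below shows only that similarity computation (the specificity constant c, the exemplars and the stimulus are invented for illustration):

```python
import numpy as np

def alcove_hidden(stimulus, exemplars, attention, c=1.0):
    """ALCOVE-style exemplar activations: exp(-c * d), where d is the
    attention-weighted city-block distance to each stored exemplar."""
    d = np.abs(exemplars - stimulus) @ attention
    return np.exp(-c * d)

# Four exemplars on the corners of a 2D stimulus space.
exemplars = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
stim = np.array([1.0, 0.0])

# With all attention on dimension 0, exemplars that differ only on the
# ignored dimension become equally similar to the stimulus.
h_attend_dim0 = alcove_hidden(stim, exemplars, attention=np.array([1.0, 0.0]))
h_uniform = alcove_hidden(stim, exemplars, attention=np.array([0.5, 0.5]))
```

Learning the attention weights by gradient descent is what lets the model stretch relevant dimensions and shrink irrelevant ones, the capacity the article argues standard backprop lacks.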

15.
Many connectionist approaches to musical expectancy and music composition let the question of ‘What next?’ overshadow the equally important question of ‘When next?’. One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listener's internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase- and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy.
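Phase-locking can be sketched far more simply than the article's oscillator unit (frequency adaptation is omitted here, and the correction gain alpha is an invented parameter): the unit predicts the next beat and corrects a fraction of each timing error, so a phase-shifted isochronous rhythm is entrained geometrically fast:

```python
def entrain(onsets, period, alpha=0.5):
    """Minimal phase-entrainment unit: predict the next beat time, then
    absorb a fraction alpha of each observed timing error."""
    predicted = period          # first expected beat
    errors = []
    for t in onsets:
        err = t - predicted     # how far the event missed the prediction
        errors.append(abs(err))
        predicted = predicted + period + alpha * err
    return errors

# An isochronous rhythm whose phase is shifted 0.3 s from the unit's.
onsets = [1.3 + k * 1.0 for k in range(10)]
errors = entrain(onsets, period=1.0)
```

Each correction halves the remaining phase error (err shrinks by a factor of 1 - alpha per beat), which is the sense in which the internal clock synchronizes to the external pattern.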

16.
Berkeley et al. (1995, Connection Science, 7: 167–186) introduced a novel technique for analysing the hidden units of connectionist networks that had been trained using the backpropagation learning procedure. The literature concerning banding analysis is equivocal with respect to the kinds of processing units this technique can be used on. In this paper, it will be shown that, contrary to the claims in some published sources, banding analysis can be conducted on networks that use standard processing units that have a sigmoid activation function. The analytic process is then illustrated and the potential benefits of this kind of technique are discussed.
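The mechanics of banding analysis are simple to sketch: histogram one hidden unit's activations over the training set and read off the contiguous occupied intervals ('bands'). The implementation below is a crude illustration; the bin count, occupancy threshold and the bimodal test data are invented:

```python
import numpy as np

def find_bands(activations, n_bins=10, min_frac=0.05):
    """Crude banding analysis: return the contiguous activation
    intervals that hold at least min_frac of the observations."""
    hist, edges = np.histogram(activations, bins=n_bins, range=(0.0, 1.0))
    occupied = hist >= min_frac * len(activations)
    bands, start = [], None
    for i, occ in enumerate(occupied):
        if occ and start is None:
            start = i                              # a band opens
        if not occ and start is not None:
            bands.append((edges[start], edges[i]))  # a band closes
            start = None
    if start is not None:
        bands.append((edges[start], edges[-1]))
    return bands

# A unit whose activations cluster near 0.1 and 0.9 shows two bands,
# which is perfectly possible for a standard sigmoid unit.
acts = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
bands = find_bands(acts)
```

Interpreting what inputs fall into each band is then the analyst's job; the point relevant to this paper is only that sigmoid activations can band just as value-unit activations do.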

17.
Continuous-valued recurrent neural networks can learn mechanisms for processing context-free languages. The dynamics of such networks is usually based on damped oscillation around fixed points in state space and requires that the dynamical components are arranged in certain ways. It is shown that qualitatively similar dynamics with similar constraints hold for a^nb^nc^n, a context-sensitive language. The additional difficulty with a^nb^nc^n, compared with the context-free language a^nb^n, consists of 'counting up' and 'counting down' letters simultaneously. The network solution is to oscillate in two principal dimensions, one for counting up and one for counting down. This study focuses on the dynamics employed by the sequential cascaded network, in contrast to the simple recurrent network, and the use of backpropagation through time. Found solutions generalize well beyond training data; however, learning is not reliable. The contribution of this study lies in demonstrating how the dynamics in recurrent neural networks that process context-free languages can also be employed in processing some context-sensitive languages (traditionally thought of as requiring additional computation resources). This continuity of mechanism between language classes contributes to our understanding of neural networks in modelling language learning and processing.
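The 'counting up'/'counting down' reading of the network's dynamics corresponds to a two-counter algorithm. The sketch below renders that reading symbolically; it is not the network itself, only the computational skeleton the oscillatory dynamics are argued to implement:

```python
def accepts(s):
    """Two-counter recognizer for a^n b^n c^n (n >= 1): count up on 'a',
    then count down on 'b' while counting up a second counter, and
    count that one down on 'c'."""
    up, down = 0, 0
    phase = 'a'
    for ch in s:
        if ch == 'a':
            if phase != 'a':          # no 'a' after 'b' or 'c'
                return False
            up += 1
        elif ch == 'b':
            if phase == 'c':          # no 'b' after 'c'
                return False
            phase = 'b'
            up -= 1
            down += 1
        elif ch == 'c':
            phase = 'c'
            down -= 1
        else:
            return False
    return up == 0 and down == 0 and phase == 'c'
```

In the network, each counter is realized as a damped oscillation along one principal dimension of the state space rather than as an explicit integer.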

18.
This paper introduces a connectionist model of cognitive map formation and use which performs wayfinding tasks. This model is at a higher level of cognitive function than much connectionist work. Its units are each at the level of an already trained backpropagation pattern recognizer. Although similar in certain respects to Hendler's work, the model described herein offers several additional features: first, it is a connectionist model; secondly, it learns relationships via a modified Hebbian learning rule and so does not need a database as input; thirdly, spreading activation is an integral part of the model. The model introduced here also differs from backpropagation models in two important respects. First, it does not require correct training input; rather, it learns from ordinary experience. Secondly, it does not converge to a fixed point or equilibrium state; thus, more sophisticated mechanisms are required to control the network's activity. Fatigue and three types of inhibition combine to cause activity to reliably coalesce in units that represent suitable subgoals, or partial solutions, for presented wayfinding problems in networks built through the use of a Hebbian learning rule.
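The combination of Hebbian link learning and spreading activation can be shown in miniature. Everything below is invented for illustration (the bounded update rule, the decay constant and the corridor environment are not the paper's): places visited in succession strengthen their link, and activation spreading from both start and goal coalesces on a midpoint subgoal:

```python
import numpy as np

def hebbian_walks(n_places, walks, lr=0.1):
    """Modified Hebbian rule: co-visited places strengthen their link,
    bounded below 1.0. The map is learned from ordinary experience,
    not read in from a database."""
    W = np.zeros((n_places, n_places))
    for walk in walks:
        for a, b in zip(walk, walk[1:]):
            W[a, b] += lr * (1.0 - W[a, b])
            W[b, a] += lr * (1.0 - W[b, a])
    return W

def spread(W, source, steps=3, decay=0.5):
    """Spreading activation outward from one place."""
    act = np.zeros(W.shape[0])
    act[source] = 1.0
    for _ in range(steps):
        act = act + decay * (W @ act)
    return act

# A corridor 0-1-2-3-4 experienced repeatedly; activation from the start
# and from the goal overlaps most strongly at the midpoint.
W = hebbian_walks(5, [[0, 1, 2, 3, 4]] * 20)
coalesce = spread(W, 0) * spread(W, 4)
subgoal = int(coalesce.argmax())
```

The paper's fatigue and inhibition mechanisms play the role that the fixed number of spreading steps plays here: they keep activity from saturating so that a stable coalescence can be read out.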

19.
Can connectionist networks effectively represent and process structure? A technique called ‘tensor product representations’, which formalizes and generalizes the approaches of several previous connectionist models, was developed by Smolensky and shown to possess a number of desirable general properties. This paper shows how the technique can be effectively used to design a specific symbol-processing task: the serial execution of simple production rules requiring pattern matching, variable binding and structure manipulation. This ‘Tensor Product Production System’ is applied to one of the classes of production rules in Touretzky and Hinton's Distributed Connectionist Production System, and a number of comparisons are made between the two approaches. The mathematical simplicity and analyzability of the tensor product scheme allows the straightforward design of a simpler, more principled, and in some ways more efficient system.
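Tensor product binding itself is compact to illustrate: fillers are bound to roles by outer products, bindings are superimposed by addition, and unbinding is exact whenever the role vectors are orthonormal. The role basis and filler vectors below are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

def bind(filler, role):
    """Tensor product binding: the outer product of filler and role."""
    return np.outer(filler, role)

def unbind(structure, role):
    """Unbinding by inner product with the role; exact for orthonormal roles."""
    return structure @ role

roles = np.eye(2)                  # orthonormal role basis: agent, patient
john = rng.normal(size=4)          # distributed filler vectors
mary = rng.normal(size=4)

# 'John loves Mary': superimpose both bindings in a single matrix.
s = bind(john, roles[0]) + bind(mary, roles[1])
agent = unbind(s, roles[0])
patient = unbind(s, roles[1])
```

The matrix `s` is the kind of fixed-width distributed object a production rule can pattern-match and manipulate, which is what makes the scheme amenable to the serial rule execution described above.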

20.
The paper demonstrates how algorithmic information theory can be elegantly used as a powerful tool for analyzing the dynamics in connectionist systems. It is shown that simple structures of connectionist systems, even if they are very large, are unable to significantly ease the problem of learning complex functions. Also, the development of new learning algorithms would not essentially change this situation. Lower and upper bounds are given for the number of examples needed to learn complex concepts. The bounds are proved with respect to the notion of probably approximately correct learning. It is proposed to use algorithmic information theory for further studies on network dynamics.
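The flavour of such sample-complexity arguments can be sketched with the classic PAC bound for a finite hypothesis class: a consistent learner over a class H needs on the order of (ln|H| + ln(1/delta))/epsilon examples, and a network whose parameters fit in b bits indexes at most 2^b functions. The bound below is the standard textbook one, not this paper's specific bounds, and the numbers are illustrative:

```python
import math

def pac_sample_bound(log2_hypotheses, epsilon, delta):
    """Classic PAC bound for a finite hypothesis class and a consistent
    learner: m >= (ln|H| + ln(1/delta)) / epsilon examples suffice to get
    error below epsilon with probability at least 1 - delta."""
    ln_H = log2_hypotheses * math.log(2.0)
    return math.ceil((ln_H + math.log(1.0 / delta)) / epsilon)

# Description length of the weights caps |H|, so bigger nets need
# proportionally more examples regardless of the learning algorithm.
m_small = pac_sample_bound(log2_hypotheses=100, epsilon=0.1, delta=0.05)
m_big = pac_sample_bound(log2_hypotheses=10000, epsilon=0.1, delta=0.05)
```

The bound depends only on the size of the function class and the accuracy targets, which is why no clever new learning algorithm can escape it, echoing the paper's conclusion.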

