Similar documents
20 similar documents found (search time: 15 ms)
1.
In this two-part series, we explore how a perceptually based foundation for natural language semantics might be acquired via association of sensory/motor experiences with verbal utterances describing those experiences. In Part 1, we introduce a novel neural network architecture, termed Katamic memory, that is inspired by the neurocircuitry of the cerebellum and that exhibits (a) rapid, robust sequence learning/recognition and (b) integrated learning and performance. These capabilities are due to novel neural elements, which model dendritic structure and function in greater detail than standard connectionist models. In Part 2, we describe the DETE system, a massively parallel procedural/neural hybrid model that utilizes over 50 Katamic memory modules to perform two associative learning tasks: (a) verbal-to-visual/motor association: given a verbal sequence, DETE learns to regenerate a neural representation of the visual sequence being described and/or to carry out motor commands; and (b) visual/motor-to-verbal association: given a visual/motor sequence, DETE learns to produce a verbal sequence describing the visual input. DETE can learn verbal sequences describing spatial relations and motions of 2D 'blob-like' objects; in addition, the system can generalize to novel inputs. DETE has been tested successfully on small, restricted subsets of English and Spanish, two languages that differ in inflectional properties, word order, and how they categorize perceptual reality.

2.
This paper presents DENN, a dynamic neural network or neural substrate having a number of abilities that might allow it to play a useful role as a constituent of an artificial cognitive system, handling the task of low-level perceptual processing. DENN can adapt without supervision to new objects, can respond to patterns of activation from several objects presented to it simultaneously, and can automatically switch its perception between multiple objects. It is based on an ideal neural substrate as conjectured by Dimond (1980), having the twin capabilities of autonomous learning and memory, capabilities that emerge from the use of autonomous neurons. DENN has a pyramidal architecture and its neurons have topologically organized receptive fields. Through training, the neurons become feature detectors, with the higher-level neurons responding to more complex features. The neurons respond to a retinal input with an oscillatory output whose frequency depends only on their own input. Because of developing phase differences, the higher-level neurons can move out of phase relative to each other, so different inputs are recognized cyclically, a process we term 'automatic perception switching'. Experiments verified the system's ability to switch perception automatically, investigated its response to randomized images, and compared the performance of adaptive and non-adaptive versions of the neural substrate.

3.
Highly recurrent neural networks can learn reverberating circuits called Cell Assemblies (CAs). These networks can be used to categorize input, and this paper explores the ability of CAs to learn hierarchical categories. A simulator, based on spiking, fatiguing, leaky integrators, is presented with instances of base categories. Learning is done using a compensatory Hebbian learning rule. The model takes advantage of overlapping CAs, where neurons may participate in more than one CA. Using the unsupervised compensatory learning rule, the networks learn a hierarchy of categories that correctly categorizes 97% of the basic-level presentations of the input in our test, and 100% of the super-categories. A larger hierarchy is learned that correctly categorizes 100% of base categories and 89% of super-categories. It is also shown how novel subcategories gain default information from their super-category. These simulations show that networks containing CAs can be used to learn hierarchical categories and that the network can then successfully categorize novel inputs.
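As a rough, hedged illustration of the compensatory idea (a Hebbian update damped by how far a neuron's total synaptic strength is from a target), here is a minimal Python sketch. The target value, rates, and network size are hypothetical, and the paper's spiking, fatiguing leaky-integrator dynamics are reduced to a binary firing vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                    # number of neurons (hypothetical)
W = rng.uniform(0.0, 0.02, (n, n))        # weak initial excitatory weights
np.fill_diagonal(W, 0.0)

W_TOTAL = 2.0                             # target total incoming strength (hypothetical)
ETA = 0.05                                # learning rate (hypothetical)

def compensatory_hebbian(W, pre, post):
    """Hebbian co-firing term scaled by (target - current total incoming strength)
    of each post-synaptic neuron, so heavily connected cells learn less."""
    total = W.sum(axis=1, keepdims=True)
    dW = ETA * np.outer(post, pre) * (W_TOTAL - total)
    W = np.clip(W + dW, 0.0, 1.0)
    np.fill_diagonal(W, 0.0)
    return W

# repeatedly present one 'category instance': the first 15 neurons fire together
pattern = np.zeros(n)
pattern[:15] = 1.0
for _ in range(100):
    W = compensatory_hebbian(W, pre=pattern, post=pattern)

# weights inside the assembly grow until the compensation limits them;
# weights among the other neurons stay at their initial level
print(W[:15, :15].mean(), W[15:, 15:].mean())
```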

4.
Neural networks are currently one of the principal intelligent control technologies. They mimic the structure of the human brain and its memory and information-processing functions, and they excel at learning useful knowledge from input-output data. Engine performance prediction estimates an engine's various performance indicators from its structural and operating parameters, so the learning capability of neural networks can be exploited to express, in the form of a network model, the nonlinear influence on a gasoline engine of the main parameters affecting its combustion process. This paper discusses how to dispense with explicit mathematical modelling and instead use a generalized regression neural network (GRNN) to predict engine power and fuel economy. Programming with the MATLAB toolbox, a worked example of a power and fuel-economy prediction model for a two-cylinder, electronically controlled gasoline engine is given.
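A GRNN is essentially Nadaraya-Watson kernel regression, so a library-free sketch is short. The class below is a hedged stand-in for the MATLAB GRNN used in the paper; the operating points, output quantities, and spread value are purely hypothetical.

```python
import numpy as np

class GRNN:
    """Minimal generalized regression neural network (Nadaraya-Watson kernel
    regression); sigma is the spread parameter."""
    def __init__(self, sigma=0.3):
        self.sigma = sigma
    def fit(self, X, Y):
        self.X = np.asarray(X, float)
        self.Y = np.asarray(Y, float)
        self.lo, self.hi = self.X.min(0), self.X.max(0)
        self.Xn = (self.X - self.lo) / (self.hi - self.lo)   # normalize inputs
        return self
    def predict(self, Xq):
        Xq = (np.atleast_2d(np.asarray(Xq, float)) - self.lo) / (self.hi - self.lo)
        d2 = ((Xq[:, None, :] - self.Xn[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * self.sigma ** 2))
        return K @ self.Y / K.sum(axis=1, keepdims=True)      # kernel-weighted average

# hypothetical operating points: [speed_rpm, throttle] -> [torque_Nm, bsfc_g_per_kWh]
X = [[2000, 0.3], [3000, 0.5], [4000, 0.7], [5000, 0.9]]
Y = [[45, 320], [60, 300], [72, 290], [78, 295]]
print(GRNN(sigma=0.3).fit(X, Y).predict([[3500, 0.6]]))
```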

5.
To improve the diagnosis of bearing fault signals, a method combining wavelet analysis with an RBF neural network is used to classify faults in bearing vibration signals. The vibration signal is first wavelet-transformed and denoised by soft thresholding, and the signal is then arranged into matrix form. An RBF neural network is constructed, the feature vectors of the bearing vibration signal are fed in, and the weights and thresholds are initialized; a stable RBF fault-discrimination model is then obtained through repeated backward iteration. Experiments show that varying the number of hidden-layer neurons allows an appropriate RBF network size to be determined, that wavelet denoising effectively improves the accuracy of bearing fault discrimination, and that the algorithm achieves higher discrimination accuracy than common bearing fault classification algorithms.
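A compressed sketch of the two stages described above, assuming PyWavelets for the soft-threshold denoising and a hand-rolled Gaussian RBF layer with a least-squares read-out; the wavelet family, threshold rule, number of centres, and the synthetic two-class data are all assumptions rather than the paper's settings.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet='db4', level=4):
    """Soft-threshold the detail coefficients (universal threshold) and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

class RBFNet:
    """Gaussian RBF layer plus a linear read-out fitted by least squares."""
    def __init__(self, n_centers=10, gamma=1e-5):
        self.n_centers, self.gamma = n_centers, gamma
    def _phi(self, X):
        d2 = ((X[:, None, :] - self.C[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)
    def fit(self, X, y):
        rng = np.random.default_rng(0)
        self.C = X[rng.choice(len(X), self.n_centers, replace=False)]
        targets = np.eye(int(y.max()) + 1)[y]                # one-hot class targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), targets, rcond=None)
        return self
    def predict(self, X):
        return (self._phi(X) @ self.W).argmax(axis=1)

# tiny synthetic demo: two fault classes with different dominant frequencies
rng = np.random.default_rng(1)
t = np.arange(1024) / 1024
def window(freq):
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)
X = np.array([np.abs(np.fft.rfft(wavelet_denoise(window(f))))[:150]
              for f in [50] * 30 + [120] * 30])              # spectral feature vectors
y = np.array([0] * 30 + [1] * 30)
print("training accuracy:", (RBFNet().fit(X, y).predict(X) == y).mean())
```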

6.
We present an approach for recognition and clustering of spatio-temporal patterns based on networks of spiking neurons with active dendrites and dynamic synapses. We introduce a new model of an integrate-and-fire neuron with active dendrites and dynamic synapses (ADDS) and its synaptic plasticity rule. The neuron employs the dynamics of the synapses and the active properties of the dendrites as an adaptive mechanism for maximizing its response to a specific spatio-temporal distribution of incoming action potentials. The learning algorithm follows recent biological evidence on synaptic plasticity. It goes beyond current computational approaches, which are based only on the relative timing between single pre- and post-synaptic spikes, and implements a functional dependence based on the state of the dendritic and somatic membrane potentials around the pre- and post-synaptic action potentials. The learning algorithm is demonstrated to effectively train the neuron towards a selective response determined by the spatio-temporal pattern of the onsets of input spike trains. The model is used in the implementation of part of a robotic system for natural language instructions, and we test it with a robot whose goal is to recognize and execute language instructions. The research in this article demonstrates the potential of spiking neurons for processing spatio-temporal patterns, and the experiments present spiking neural networks as a paradigm that can be applied to modelling word-level sequence detectors for robot instructions.

7.
In this paper, analysis of the information content of discretely firing neurons in unsupervised neural networks is presented, where information is measured according to the network's ability to reconstruct its input from its output with minimum mean square Euclidean error. It is shown how this type of network can self-organize into multiple winner-take-all subnetworks, each of which tackles only a low-dimensional subspace of the input vector. This is a rudimentary example of a neural network that effectively subdivides a task into manageable subtasks.
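As a hedged toy of the reconstruction-error view of information described above, the sketch below trains a single winner-take-all layer by online competitive learning and measures how well the winning unit's weight vector reconstructs the input. This is plain vector quantization, not the paper's full multi-subnetwork model; sizes, rates, and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, dim, eta = 8, 2, 0.05
W = rng.normal(size=(n_units, dim))          # one weight (codebook) vector per unit

def step(x):
    """One winner-take-all update: the closest unit moves toward the input."""
    winner = np.argmin(((W - x) ** 2).sum(axis=1))
    W[winner] += eta * (x - W[winner])
    return winner

# data drawn from a few clusters; the reconstruction is the winner's weight vector
centres = rng.normal(scale=3.0, size=(4, dim))
data = centres[rng.integers(0, 4, 5000)] + 0.3 * rng.normal(size=(5000, dim))
for x in data:
    step(x)

recon_err = np.mean([((W[np.argmin(((W - x) ** 2).sum(1))] - x) ** 2).sum() for x in data])
print("mean squared reconstruction error:", recon_err)
```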

8.
On-line monitoring of tapping tools is essential for factory automation and productivity enhancement. Three indirect indexes, namely average peak-to-peak torque, modified RMS (root-mean-square) torque, and prediction errors of torque signals, were used to monitor the tap wear conditions. In order to further process these three indirect indexes, a multilayer, feedforward neural network with a total of 18 neurons was used. The three indirect indexes are the inputs of the neural network while the wear states are its outputs. The information contained in the inputs is recoded into an internal representation by the hidden neurons, which perform the mapping from input to output.
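A hedged sketch of such a 3-input feedforward classifier using scikit-learn. The 18-neuron total is read here as 3 inputs + 12 hidden units + 3 output classes, and the torque-index values and wear labels are invented for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# columns: [avg peak-to-peak torque, modified RMS torque, torque prediction error]
rng = np.random.default_rng(0)
fresh  = rng.normal([1.0, 0.8, 0.05], 0.05, size=(40, 3))
worn   = rng.normal([1.6, 1.3, 0.15], 0.05, size=(40, 3))
broken = rng.normal([2.4, 2.0, 0.40], 0.05, size=(40, 3))
X = np.vstack([fresh, worn, broken])
y = np.array([0] * 40 + [1] * 40 + [2] * 40)     # hypothetical wear states

clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([[1.55, 1.25, 0.14]]))          # expected: worn (class 1)
```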

9.
Most known learning algorithms for dynamic neural networks in non-stationary environments need global computations to perform credit assignment. These algorithms either are not local in time or not local in space. Those algorithms which are local in both time and space usually cannot deal sensibly with 'hidden units'. In contrast, as far as we can judge, learning rules in biological systems with many 'hidden units' are local in both space and time. In this paper we propose a parallel on-line learning algorithm that performs local computations only, yet is still designed to deal with hidden units and with units whose past activations are 'hidden in time'. The approach is inspired by Holland's idea of the bucket brigade for classifier systems, which is transformed to run on a neural network with fixed topology. The result is a feedforward or recurrent 'neural' dissipative system that consumes 'weight-substance' and permanently tries to distribute this substance onto its connections in an appropriate way. Simple experiments demonstrating the feasibility of the algorithm are reported.

10.
To address the complexity of traditional fault feature extraction and the limitations and poor accuracy of single diagnostic schemes, a bearing fault identification scheme based on multi-threshold wavelet packets and a deep belief network (DBN) is proposed. The raw vibration signal undergoes a three-level wavelet packet decomposition and denoising with an optimal wavelet basis and a combined soft/hard thresholding method, yielding eight signal components from the low- to the high-frequency band, which are recombined and reconstructed as input samples for the neural network. Exploiting the DBN's advantage in feature reconstruction, a DBN-BP neural network model for bearing fault identification is built and its parameters are determined. Repeated experiments investigate how different input samples affect the recognition rate, with a comparative analysis against traditional shallow neural network models. The results show that the trained DBN-BP model can accurately learn and classify bearing fault features from both raw data and wavelet-packet-decomposed signals; considering recognition rate and diagnosis time together, the wavelet-packet-decomposed input offers better diagnostic efficiency.
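For the decomposition stage described above, here is a hedged PyWavelets sketch that performs a three-level wavelet packet decomposition of a synthetic vibration signal and reconstructs the eight frequency-ordered band components that would be recombined as the network's input sample. The wavelet family and the signal are assumptions, and the multi-threshold denoising and the DBN-BP model itself are omitted.

```python
import numpy as np
import pywt

# synthetic stand-in for a raw bearing vibration signal
rng = np.random.default_rng(0)
t = np.arange(2048) / 2048
x = np.sin(2 * np.pi * 120 * t) + 0.5 * rng.normal(size=t.size)

wp = pywt.WaveletPacket(data=x, wavelet='db4', mode='symmetric', maxlevel=3)
bands = []
for node in wp.get_level(3, order='freq'):        # 8 nodes, low to high frequency
    single = pywt.WaveletPacket(data=None, wavelet='db4', mode='symmetric', maxlevel=3)
    single[node.path] = node.data                 # keep only this band's coefficients
    bands.append(single.reconstruct(update=False)[:len(x)])

sample = np.vstack(bands)         # 8 x N matrix: the recombined network input
print(sample.shape)
```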

11.
A method for predicting the soil corrosion of metals based on artificial neural networks (total citations: 15; self-citations: 5; citations by others: 15)
Neural networks are applied to the study of the soil corrosion of metals. Exploiting the learning capability and strong nonlinearity of neural networks, the physicochemical properties of the soil, the corrosion time, and the corrosion data of A3 steel after 1, 2, and 8 months of soil corrosion testing are used as training samples, and the corrosion rate of A3 steel coupons buried in soil for 24 months is predicted; the results are then analysed.

12.
Models of associative memory usually have full connectivity or, if diluted, random symmetric connectivity. In contrast, biological neural systems have predominantly local, non-symmetric connectivity. Here we investigate sparse networks of threshold units, trained with the perceptron learning rule. The units are given positions and are arranged in a ring. The connectivity graph varies from local to random via a small-world regime with short path lengths between any two neurons, and the connectivity may be symmetric or non-symmetric. The results show that it is the small-world networks with non-symmetric weights and non-symmetric connectivity that perform best as associative memories. It is also shown that in highly dilute networks, small-world architectures produce efficiently wired associative memories that still exhibit good pattern-completion abilities.
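A hedged miniature of this setup: threshold units on a ring, each with K incoming connections that are rewired to random units with probability p (giving non-symmetric, possibly small-world connectivity), trained with the perceptron rule so that each stored pattern becomes a fixed point. Network size, K, p, the learning rate, and the number of patterns are arbitrary choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, P_REWIRE, N_PATTERNS = 100, 10, 0.1, 5

# directed connectivity: each unit listens to its K nearest ring neighbours,
# and each incoming edge is rewired to a random unit with probability P_REWIRE
C = np.zeros((N, N), dtype=bool)
for i in range(N):
    for off in list(range(1, K // 2 + 1)) + list(range(-(K // 2), 0)):
        j = (i + off) % N
        if rng.random() < P_REWIRE:
            j = int(rng.integers(N))
        if j != i:
            C[i, j] = True

patterns = rng.choice([-1, 1], size=(N_PATTERNS, N))
W = np.zeros((N, N))

# perceptron rule on each unit's incoming weights, restricted to existing edges
for _ in range(200):
    for p in patterns:
        wrong = np.sign(W @ p) != p                      # units giving the wrong output
        W[wrong] += 0.1 * np.outer(p[wrong], p) * C[wrong]

# pattern completion from a corrupted cue
x = patterns[0].astype(float)
x[rng.choice(N, 5, replace=False)] *= -1                 # flip 5 bits
for _ in range(20):
    x = np.sign(W @ x + 1e-9)                            # synchronous threshold update
print("overlap with the stored pattern:", (x == patterns[0]).mean())
```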

13.
吕雪  吴轩 《机床与液压》2019,47(6):138-142
The digitization of images and documents has become an indispensable part of modern informatization. To address the image-correction problem arising when colour art images are digitized, an image processing algorithm based on an improved imaging model and a deep neural network is proposed; it effectively removes noise and increases image sharpness. First, an improved imaging model consisting of global illumination, local illumination, and reflectance is constructed, and the input RGB colour image is converted to an HSV colour image. Local features of the input image are then extracted and used to build a deep neural network that removes the noise. Simulation results show that, compared with a traditional BP neural network and a generalized regression neural network, the proposed algorithm has stronger image-correction ability, with a higher peak signal-to-noise ratio and contrast increment, verifying its feasibility and superiority.
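A hedged sketch of the decomposition stage only: convert RGB to HSV, estimate a global and a local illumination component from the V channel by Gaussian smoothing, and take reflectance as V divided by the local illumination. The smoothing scales and the synthetic image are assumptions, and the deep network used for denoising in the paper is not reproduced here.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from scipy.ndimage import gaussian_filter

def decompose(rgb):
    """Split an RGB image (floats in [0, 1]) into global illumination,
    local illumination, and reflectance estimates based on the HSV V channel."""
    hsv = rgb_to_hsv(rgb)
    v = hsv[..., 2]
    illum_global = gaussian_filter(v, sigma=50)     # very smooth, image-wide trend
    illum_local = gaussian_filter(v, sigma=5)       # neighbourhood-scale shading
    reflectance = v / (illum_local + 1e-6)
    return illum_global, illum_local, reflectance

# synthetic test image standing in for a scanned colour artwork
rng = np.random.default_rng(0)
img = np.clip(rng.random((128, 128, 3)) * np.linspace(0.3, 1.0, 128)[None, :, None], 0, 1)
g, l, r = decompose(img)
print(g.shape, l.shape, r.shape)
```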

14.
黄续芳  赵平  冯铃  张丽 《机床与液压》2023,51(11):224-232
To address the difficulty of identifying faults in aviation hydraulic pipelines when the fault signals are contaminated by noise, a deep learning fault diagnosis method based on a bidirectional gated recurrent unit (Bi-GRU) is proposed. The Bi-GRU neural network extracts temporal features from the hydraulic pipeline data; using the same noisy measured vibration data from hydraulic pipelines, five fault diagnosis models (Bi-GRU, GRU, RNN, SVM, and BPNN) are trained. Finally, to further illustrate the Bi-GRU model's ability to learn the features of different pipeline fault types, the t-SNE dimensionality-reduction algorithm is used to visualize the learned pipeline features. The results show that the Bi-GRU-based diagnosis method reaches an accuracy of 99.60%, clearly outperforming the other four neural network models; the Bi-GRU model has superior feature-extraction capability on noisy hydraulic pipeline data and can effectively extract the features of pipeline fault data, thereby achieving intelligent identification of hydraulic pipeline faults.
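A hedged PyTorch sketch of a Bi-GRU classifier for windowed vibration signals like those described above. The hidden size, window length, and the assumption of five fault classes are illustrative choices, and training, the noise handling, and the t-SNE visualization are omitted.

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    def __init__(self, n_features=1, hidden=64, n_classes=5):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)   # forward + backward states

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        out, _ = self.gru(x)              # out: (batch, time_steps, 2 * hidden)
        return self.fc(out.mean(dim=1))   # average over time, then classify

model = BiGRUClassifier()
windows = torch.randn(8, 1024, 1)         # a batch of hypothetical vibration windows
logits = model(windows)
print(logits.shape)                       # torch.Size([8, 5])
```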

15.
Predictive coding (PC) is a leading theory of cortical function that has previously been shown to explain a great deal of neurophysiological and psychophysical data. Here it is shown that PC can perform almost exact Bayesian inference when applied to computing with population codes. It is demonstrated that the proposed algorithm, based on PC, can: decode probability distributions encoded as noisy population codes; combine priors with likelihoods to calculate posteriors; perform cue integration and cue segregation; perform function approximation; be extended to perform hierarchical inference; simultaneously represent and reason about multiple stimuli; and perform inference with multi-modal and non-Gaussian probability distributions. PC thus provides a neural network-based method for performing probabilistic computation and provides a simple, yet comprehensive, theory of how the cerebral cortex performs Bayesian inference.
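To make the "PC performs Bayesian inference" claim concrete without reproducing the paper's population-code algorithm, here is a hedged linear-Gaussian sketch in the Rao-and-Ballard style: iterating on the prediction error converges to the posterior mean of the hidden causes, which is checked against the closed-form solution. All sizes, variances, and step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.normal(size=(20, 4))                  # generative weights: x = U r + noise
r_true = rng.normal(size=4)
x = U @ r_true + 0.5 * rng.normal(size=20)

sigma_x2, sigma_r2 = 0.25, 1.0                # observation and prior variances
r = np.zeros(4)                               # inferred causes
for _ in range(5000):
    err = x - U @ r                                        # prediction error
    r += 0.005 * (U.T @ err / sigma_x2 - r / sigma_r2)     # gradient ascent on log posterior

# closed-form Gaussian posterior mean for comparison
S_inv = U.T @ U / sigma_x2 + np.eye(4) / sigma_r2
r_map = np.linalg.solve(S_inv, U.T @ x / sigma_x2)
print(np.max(np.abs(r - r_map)))              # ~0: the PC iterates reach the posterior mean
```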

16.
Real-time control of a hydraulic elevator based on a CMAC neural network (total citations: 2; self-citations: 0; citations by others: 2)
Building on the CMAC, this paper proposes a real-time control algorithm that takes time as the CMAC input vector and is structurally combined with PD control to form a composite controller. Offline and online learning of the elevator speed curve are discussed and tested experimentally; the results show that the CMAC controller trained offline achieves better control quality than both the online-learning CMAC controller and a conventional PID controller.
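As a hedged, self-contained miniature of the CMAC idea with time as the input (the PD branch and the real elevator dynamics are left out), the sketch below uses tile coding over the time axis and trains the table offline on a hypothetical trapezoidal speed profile; all sizes, rates, and the profile itself are assumptions.

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC (tile coding over the time axis) with overlapping tilings."""
    def __init__(self, n_layers=8, n_tiles=50, t_max=5.0, alpha=0.2):
        self.n_layers, self.n_tiles = n_layers, n_tiles
        self.t_max, self.alpha = t_max, alpha
        self.w = np.zeros((n_layers, n_tiles + 1))        # one weight table per tiling

    def _active(self, t):
        pos = t / self.t_max * self.n_tiles               # continuous tile position
        offsets = np.arange(self.n_layers) / self.n_layers
        return np.clip((pos + offsets).astype(int), 0, self.n_tiles)

    def predict(self, t):
        idx = self._active(t)
        return self.w[np.arange(self.n_layers), idx].sum()

    def train(self, t, target):
        idx = self._active(t)
        err = target - self.predict(t)
        self.w[np.arange(self.n_layers), idx] += self.alpha * err / self.n_layers

# offline learning of a hypothetical trapezoidal elevator speed profile v_ref(t)
def v_ref(t):
    return min(t, 1.0, 5.0 - t)           # ramp up, cruise at 1 m/s, ramp down

cmac = CMAC()
for _ in range(200):
    for t in np.linspace(0.0, 5.0, 251):
        cmac.train(t, v_ref(t))

print(round(cmac.predict(2.5), 3), v_ref(2.5))   # learned vs reference speed at t = 2.5 s
# at run time, the CMAC feedforward output would be summed with a PD correction term
```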

17.
Selecting the learning rate when building ANN models with the BP algorithm (total citations: 2; self-citations: 0; citations by others: 2)
张忠典  李学军  杜涛  赵广辉 《焊接》2004,(12):14-16
For neural networks trained with the BP algorithm, neurons in different layers harm the learning process to different degrees once they enter saturation. Using a 3-3-1 network and the BP weight-update process, the effects of saturation of output-layer neurons and of hidden-layer neurons on learning are compared and analysed. Experiments confirm that using different learning rates in different layers improves the convergence speed of network learning.
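A hedged numerical illustration of the point above: a 3-3-1 sigmoid network trained by plain BP, with separate learning rates for the hidden and output layers so that the two can be tuned independently. The task (3-bit majority) and the rate values are assumptions for demonstration, not the paper's experiment.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = (X.sum(axis=1) >= 2).astype(float)        # 3-bit majority function

W1, b1 = rng.normal(scale=0.5, size=(3, 3)), np.zeros(3)   # hidden layer (3 units)
w2, b2 = rng.normal(scale=0.5, size=3), 0.0                # output unit

ETA_HIDDEN, ETA_OUT = 0.5, 0.1     # per-layer learning rates

for _ in range(5000):
    for x, t in zip(X, y):
        h = sigmoid(W1 @ x + b1)
        o = sigmoid(w2 @ h + b2)
        delta_o = (o - t) * o * (1 - o)            # output-layer error signal
        delta_h = delta_o * w2 * h * (1 - h)       # back-propagated to the hidden layer
        w2 -= ETA_OUT * delta_o * h
        b2 -= ETA_OUT * delta_o
        W1 -= ETA_HIDDEN * np.outer(delta_h, x)
        b1 -= ETA_HIDDEN * delta_h

pred = np.round(sigmoid(sigmoid(X @ W1.T + b1) @ w2 + b2))
print(pred, y)                      # predictions should match the majority targets
```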

18.
Various applications of the mean field theory (MFT) technique for obtaining solutions close to optimal minima in feedback networks are reviewed. Using this method in the context of the Boltzmann machine gives rise to a fast deterministic learning algorithm with a performance comparable with that of the backpropagation algorithm (BP) in feature recognition applications. Since MFT learning is bidirectional, its use can be extended from purely functional mappings to a content-addressable memory. The storage capacity of such a network grows like O(10–20)nH with the number of hidden units. The MFT learning algorithm is local and thus has an advantage over BP with respect to VLSI implementations. It is also demonstrated how MFT and BP are related in situations where the number of input units is much larger than the number of output units. In the context of finding good solutions to difficult optimization problems, the MFT technique again turns out to be extremely powerful. The quality of the solutions for large travelling salesman and graph partition problems is on a par with that obtained by optimally tuned simulated annealing methods. The algorithm employed here is based on multistate K-valued (K > 2) neurons rather than binary (K = 2) neurons. This method is also advantageous for more nested decision problems like scheduling. The MFT equations are isomorphic to resistance-capacitance equations and hence map naturally onto custom-made hardware. With this diversity of successful application areas, the MFT approach constitutes a convenient platform for hardware development.
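As a hedged illustration of the MFT machinery for optimization (not the Boltzmann-machine learning part), the sketch below runs mean-field annealing on a small graph-bipartition problem: each binary spin is replaced by a tanh "mean-field neuron", a penalty term keeps the two halves balanced, and the temperature is lowered gradually. The graph instance, penalty weight, and annealing schedule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# random graph with two dense clusters joined by a few edges (hypothetical instance)
A = (rng.random((n, n)) < 0.05).astype(float)
A[:10, :10] = (rng.random((10, 10)) < 0.6)
A[10:, 10:] = (rng.random((10, 10)) < 0.6)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no self-loops

GAMMA = 1.0                                  # balance-constraint strength
v = rng.uniform(-0.1, 0.1, n)                # mean-field spins in (-1, 1)
T = 2.0
while T > 0.05:
    for _ in range(50):                      # relax the MFT equations at this temperature
        u = A @ v - GAMMA * (v.sum() - v)    # local field minus balance penalty
        v = np.tanh(u / T)
    T *= 0.9                                 # anneal

s = np.sign(v)
cut = np.sum(A * (s[:, None] != s[None, :])) / 2
print("partition sizes:", (s > 0).sum(), (s < 0).sum(), "cut edges:", cut)
```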

19.
Objective: To address the time- and labour-consuming process trials involved in deciding grinding and polishing process parameters, and to enable machining quality to be estimated from process parameters during grinding and polishing. Methods: A BP neural network optimized by a genetic algorithm is used as the core algorithm to build an intelligent prediction model that establishes the mapping between input and output parameters in grinding. Valid input and output parameters are collected as training and test samples; the genetic algorithm optimizes the network's initial weights and biases, and the network is trained on the sample set. On the basis of decision-system theory, the neural network is then combined with a decision system, and its learning ability is used to build the database and rule base of an intelligent decision system, yielding the complete system. Results: Compared with a decision method based on an unimproved BP neural network, the genetic-algorithm-optimized network performs better in both prediction accuracy and learning speed, and the decision system makes better decisions. Conclusion: An intelligent decision system for the grinding and polishing process is feasible and provides a new approach to process decision-making for grinding.

20.
Recent research shows that some brain areas perform more than one task, with switching times between tasks that are incompatible with learning, and that parts of the brain are controlled by other parts of the brain, are “recycled”, or are used and reused for various purposes by other neural circuits in different task categories and cognitive domains. All of this is conducive to the notion of “programming in the brain”. In this paper, we describe a programmable neural architecture, biologically plausible at the neural level, and we implement, test, and validate it in order to support the programming interpretation of the above-mentioned phenomenology. A programmable neural network is a fixed-weight network that is endowed with auxiliary or programming inputs and behaves as any member of a specified class of neural networks when its programming inputs are fed with a code of the weight matrix of a network of that class. The construction is based on “pulling out” the multiplication between synaptic weights and neuron outputs and having it performed in “software” by specialised multiplicative-response fixed subnetworks. The construction has been tested for robustness with respect to various sources of noise. Theoretical underpinnings, analysis of related research, detailed construction schemes, and extensive testing results are given.
