Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
A model of a human neural knowledge processing system is presented that suggests the following. First, an entity in the outside world tends to be locally encoded in neural networks, so that the conceptual information structure is mirrored in its physical implementation. Second, problem-solving knowledge is implemented quite implicitly in the internal structure of the neural network (a functional group of associated hidden neurons and their connections to entity neurons), not in individual neurons or connections. Third, the knowledge system is organized and implemented in neural networks in a modular fashion according to the local specialization of problem solving, where a module of the neural network implements an inter-related group of knowledge such as a schema; different modules share similar processing mechanisms but differ in their input and output patterns. A neural network module can be tuned just as a schema structure can be adapted to a changing environment. Three experiments were conducted to validate the suggested cognitive-engineering-based knowledge structure in neural networks through computer simulation. The experiments, based on a modulo-arithmetic task, provided some insight into the plausibility of the suggested model of a neural knowledge processing system.

2.
Z. Zhu, H. He. Information Sciences, 2007, 177(5): 1180-1192
A new self-organizing learning array (SOLAR) system has been implemented in software. It is an information theory based learning machine capable of handling a wide variety of classification problems. It has self-reconfigurable processing cells (neurons) and an evolvable system structure. Entropy based learning is performed locally at each neuron, where neural functions and connections that correspond to the minimum entropy are adaptively learned. By choosing connections for each neuron, the system sets up the wiring and completes its self-organization. SOLAR classifies input data based on weighted statistical information from all neurons. Unlike artificial neural networks, its multi-layer structure scales well to large systems capable of solving complex pattern recognition and classification tasks. This paper shows its application in economic and financial fields. A reference to influence diagrams is also discussed. Several prediction and classification cases are studied. The results have been compared with the existing methods.
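The idea of entropy-based local learning can be sketched as follows. This is a hedged illustration only: the toy one-dimensional data and the candidate set of simple threshold functions are assumptions for this sketch, not SOLAR's actual neuron functions.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label array; 0 for an empty array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum() if counts.size else np.array([])
    return float(-(p * np.log2(p)).sum()) if p.size else 0.0

rng = np.random.default_rng(4)
# Two classes drawn from well-separated 1-D clusters (toy data).
x = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
y = np.array([0] * 50 + [1] * 50)

best = None
for t in np.linspace(x.min(), x.max(), 41):     # candidate thresholds
    left, right = y[x <= t], y[x > t]
    # Weighted class entropy of the partition induced by this threshold.
    h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
    if best is None or h < best[0]:
        best = (h, t)

print(f"min weighted entropy {best[0]:.3f} at threshold {best[1]:.2f}")
```

A neuron adopting the minimum-entropy threshold ends up separating the two clusters, which is the local selection principle the abstract describes.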

3.
The brain can be viewed as a complex modular structure with features of information processing through knowledge storage and retrieval. Modularity ensures that the knowledge is stored in a manner where complications in certain modules do not affect the overall functionality of the brain. Although artificial neural networks have been very promising in prediction and recognition tasks, they are limited in terms of learning algorithms that can provide modularity in knowledge representation, which could be helpful in using knowledge modules when needed. Multi-task learning enables learning algorithms to capture general knowledge representations from several related tasks. Little work has been done that incorporates multi-task learning for modular knowledge representation in neural networks. In this paper, we present multi-task learning for modular knowledge representation in neural networks via modular network topologies. In the proposed method, each task is defined by the selected regions in a network topology (module). Modular knowledge representation remains effective even if some of the neurons and connections are disrupted or removed from selected modules in the network. We demonstrate the effectiveness of the method using single hidden layer feedforward networks to learn selected n-bit parity problems of varying levels of difficulty. Furthermore, we apply the method to benchmark pattern classification problems. The simulation and experimental results, in general, show that the proposed method retains performance quality although the knowledge is represented as modules.
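As a minimal illustration of the robustness property described above, hidden neurons can be partitioned into per-task modules so that disrupting one task's module leaves another task's output untouched. All sizes, weights, and the masking scheme below are assumptions for this sketch, not the paper's trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 1
W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden weights
W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output weights

# Two disjoint modules over the hidden layer, one per task.
modules = {"task_a": np.arange(0, 4), "task_b": np.arange(4, 8)}

def forward(x, task):
    mask = np.zeros(n_hidden)
    mask[modules[task]] = 1.0
    h = np.tanh(W1 @ x) * mask           # only this task's module is active
    return W2 @ h

x = np.array([1.0, 0.0, 1.0, 1.0])
y_before = forward(x, "task_a")

# Disrupt all neurons in the *other* module: task_a's output is unchanged.
W1[modules["task_b"], :] = 0.0
y_after = forward(x, "task_a")
print("task_a unchanged:", np.allclose(y_before, y_after))
```

The same masking mechanics extend to the n-bit parity setting in the abstract, where each parity task would train only its own module's weights.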

4.
It is generally accepted among neuroscientists that the sensory cortex of the brain is arranged in a layered structure. Based on a unified quantum holographic approach to artificial neural network models implemented with coherent, hybrid optoelectronic, or analog electronic neurocomputer architectures, the present paper establishes a novel identity for the matching polynomials of complete bichromatic graphs which implement the intrinsic connections between neurons of local networks located in neural layers.

5.
Optimizing the structure of neural networks is an essential step for the discovery of knowledge from data. This paper deals with a new approach that determines the insignificant input and hidden neurons to detect the optimum structure of a feedforward neural network. The proposed pruning algorithm, called neural network pruning by significance (N2PS), is based on a new significance measure, calculated from the sigmoidal activation value of a node and all the weights of its outgoing connections. It considers all nodes with significance value below a threshold as insignificant and eliminates them. The advantages of this approach are illustrated by applying it to six different real datasets, namely iris, breast-cancer, hepatitis, diabetes, ionosphere and wave. The results show that the proposed algorithm is quite efficient in pruning a significant number of neurons from the neural network models without sacrificing the networks' performance.
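An N2PS-style significance measure can be sketched as below. The exact formula and the 5%-of-maximum threshold are assumptions for illustration; the abstract only specifies that significance combines a node's sigmoidal activation with its outgoing weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))          # toy input data
W_in = rng.normal(size=(8, 5))         # input -> hidden weights
W_out = rng.normal(size=(3, 8))        # hidden -> output weights
W_out[:, 2] *= 1e-4                    # hidden node 2 barely contributes

act = sigmoid(X @ W_in.T)              # (100, 8) hidden activations
# Significance: mean sigmoidal activation times total outgoing weight magnitude.
significance = act.mean(axis=0) * np.abs(W_out).sum(axis=0)

threshold = 0.05 * significance.max()  # assumed threshold rule
keep = significance >= threshold       # nodes below threshold are pruned
print("pruned hidden nodes:", np.where(~keep)[0].tolist())
```

Node 2, whose outgoing weights were deliberately shrunk, falls below the threshold and is eliminated, while the rest of the hidden layer survives.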

6.
Rough sets for adapting wavelet neural networks as a new classifier system
Classification is an important theme in data mining. Rough sets and neural networks are two techniques applied to data mining problems. Wavelet neural networks have recently attracted great interest because of their advantages over conventional neural networks: they are universal approximators and achieve faster convergence. This paper presents a hybrid system that efficiently extracts classification rules from a decision table. The neurons of the hybrid network instantiate approximate reasoning knowledge gleaned from the input data. The new model uses rough set theory to reduce the computational effort needed to build the network structure: the reduct algorithm is applied, and a rule set (knowledge) is generated from the decision table. By combining wavelets, frequency analysis, rough sets, and dynamic scaling with a neural network, a novel and reliable classifier architecture is obtained; its effectiveness is verified by experiments comparing it with traditional rough set and neural network approaches.

7.
Coherent-type artificial neural networks whose behavior is controlled by carrier-frequency modulation are proposed. The network learns teacher signals associated with an information-carrier frequency as a network parameter. The total network system forms a self-homodyne circuit. The learning process is realized by adjusting the delay time and conductance of neural connections. Experiments demonstrate that the network behavior is successfully controlled by the carrier-frequency modulation. This result will be applicable not only to signal processing but also to frequency-multiplexed optical neural computing and quantum neural devices such as carrier-energy-controlled neurons in the future.

8.
An associative neural network whose architecture is greatly influenced by biological data is described. The proposed neural network is significantly different in architecture and connectivity from previous models. Its emphasis is on high parallelism and modularity. The network connectivity is enriched by recurrent connections within the modules. Each module is, effectively, a Hopfield net. Connections within a module are plastic and are modified by associative learning. Connections between modules are fixed and thus not subject to learning. Although the network is tested with character recognition, it cannot be directly used as such for real-world applications. It must be incorporated as a module in a more complex structure. The architectural principles of the proposed network model can be used in the design of other modules of a whole system. Its architecture is such that it constitutes a good mathematical prototype to analyze the properties of modularity, recurrent connections, and feedback. The model does not make any contribution to the subject of learning in neural networks.

9.
Object detection using pulse coupled neural networks
This paper describes an object detection system based on pulse coupled neural networks. The system is designed and implemented to illustrate the power, flexibility, and potential of pulse coupled neural networks in real-time image processing. In the preprocessing stage, a pulse coupled neural network suppresses noise by smoothing the input image. In the segmentation stage, a second pulse coupled neural network iteratively segments the input image. During each iteration, with the help of a control module, the segmentation network deletes regions that do not satisfy the retention criteria from further processing and produces an improved segmentation of the retained image. In the final stage, each group of connected regions that satisfies the detection criteria is identified as an instance of the object of interest.
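A heavily simplified pulse coupled neural network iteration for segmentation can be sketched as follows. The unit-linking simplification, the 4-neighbourhood, and all parameter values are assumptions for this sketch, not the paper's exact model; the key mechanics are the dynamic threshold decay and the reset after a neuron fires.

```python
import numpy as np

def pcnn_segment(img, beta=0.2, v_theta=20.0, a_theta=0.3, steps=10):
    theta = np.full(img.shape, img.max() + 1.0)    # dynamic thresholds
    Y = np.zeros(img.shape)                        # pulse outputs
    fired_at = np.full(img.shape, -1)              # iteration of first firing
    for n in range(steps):
        # Linking input: sum of pulses in the 4-neighbourhood (wrap-around).
        L = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0)
             + np.roll(Y, 1, 1) + np.roll(Y, -1, 1))
        U = img * (1.0 + beta * L)                 # internal activity
        Y = (U > theta).astype(float)              # neurons fire
        fired_at[(Y > 0) & (fired_at < 0)] = n
        # Thresholds decay over time and jump after a pulse.
        theta = theta * np.exp(-a_theta) + v_theta * Y
    return fired_at                                # firing epoch ~ region label

img = np.array([[0.9, 0.9, 0.1],
                [0.9, 0.9, 0.1],
                [0.1, 0.1, 0.1]])
labels = pcnn_segment(img)
print(labels)
```

Pixels of similar intensity fire in the same iteration, so the firing epoch acts as a rough segment label: the bright block fires together, while the dim background does not fire within the run.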

10.
Systems based on artificial neural networks have high computational rates owing to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. Neural networks with feedback connections provide a computing model capable of solving a large class of optimization problems. This paper presents a novel approach for solving dynamic programming problems using artificial neural networks. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. Simulated examples are presented and compared with other neural networks. The results demonstrate that the proposed method gives a significant improvement.
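For background only, the classical discrete Hopfield dynamics the abstract builds on can be sketched as below: with symmetric zero-diagonal weights, the energy E = -0.5·sᵀWs - bᵀs never increases under asynchronous updates, so the network descends to an equilibrium. The paper's modified network and its valid-subspace parameters are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10
A = rng.normal(size=(n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)                # symmetric, zero-diagonal weights
b = rng.normal(size=n)

def energy(s):
    return float(-0.5 * s @ W @ s - b @ s)

s = rng.choice([-1.0, 1.0], size=n)     # random bipolar initial state
E = energy(s)
for _ in range(5):                      # asynchronous update sweeps
    for i in range(n):
        s[i] = 1.0 if (W[i] @ s + b[i]) >= 0 else -1.0
        E_new = energy(s)
        assert E_new <= E + 1e-12       # energy is non-increasing
        E = E_new
print(f"final energy: {E:.3f}")
```

The optimization use case encodes a problem's cost into W and b so that low-energy equilibria correspond to good solutions; the valid-subspace technique mentioned above is one principled way of computing those parameters.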

11.
To address the problem that some high-performance deep-learning neural networks perform poorly on embedded devices because of their high complexity and heavy computational load, a deep-learning-based OFDM channel-compensation technique was implemented on AIR-T, a compact integrated intelligent radio platform. On the FPGA, both the OFDM signal-transmission module and a conventional channel-estimation and equalization module were implemented; the latter preprocesses the data to lighten the neural network's workload, enabling an efficient implementation of the neural-network channel-compensation module on the platform's Jetson TX2 GPU. Measurements of computational complexity and parameter-fitting speed during training show that the conventional channel-estimation and equalization module effectively reduces the number of operations required for network training. In terms of test performance, the bit error rate after neural-network channel compensation is markedly lower than that obtained with conventional channel estimation and equalization alone.

12.
A growth-based algorithm for optimizing neural network structure is proposed. An improved BP algorithm (LMBP) is introduced into a BP network for function approximation; through second-order error descent and gradient descent, the pattern of error change is used to assess how well the structure is optimized, and hidden neurons or network layers are added adaptively until a suitable network structure is obtained. Simulation experiments, including a comparison with the RAN algorithm on function approximation, demonstrate the effectiveness of the algorithm.
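The growth strategy can be sketched as follows: train a small single-hidden-layer network briefly, and add a hidden neuron whenever the error curve stalls. The stall criterion, learning rate, and all other parameters below are assumptions for illustration; the paper's LMBP update and layer-growth rule are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
T = np.sin(X)                                  # toy target function

n_hidden = 1
W1 = rng.normal(scale=0.5, size=(n_hidden, 1)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(1, n_hidden)); b2 = np.zeros(1)

def mse():
    H = np.tanh(X @ W1.T + b1)
    return float(((H @ W2.T + b2 - T) ** 2).mean())

def gd_steps(k, lr=0.05):
    """Plain gradient descent on the MSE (stand-in for LMBP)."""
    global W1, b1, W2, b2
    for _ in range(k):
        H = np.tanh(X @ W1.T + b1)             # (N, h)
        E = H @ W2.T + b2 - T                  # (N, 1) output error
        gW2 = 2 * E.T @ H / len(X); gb2 = 2 * E.mean(0)
        dH = (E @ W2) * (1 - H ** 2)
        gW1 = 2 * dH.T @ X / len(X); gb1 = 2 * dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

loss0 = mse()
prev = loss0
for _ in range(30):
    gd_steps(50)
    cur = mse()
    if prev - cur < 0.01 * prev and n_hidden < 8:   # error stalled: grow
        W1 = np.vstack([W1, rng.normal(scale=0.5, size=(1, 1))])
        b1 = np.append(b1, 0.0)
        W2 = np.hstack([W2, rng.normal(scale=0.5, size=(1, 1))])
        n_hidden += 1
    prev = cur

print(f"hidden neurons: {n_hidden}, loss {loss0:.3f} -> {cur:.3f}")
```

Starting from a single hidden neuron, the network grows only when further training no longer reduces the error appreciably, which is the adaptive-structure idea the abstract describes.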

13.
This paper deals with modeling a power plant component with mild nonlinear characteristics using a modified neural network structure. The hidden layer of the proposed neural network combines neurons with linear and nonlinear activation functions. This approach is particularly suitable for nonlinear systems with a low grade of nonlinearity, which cannot be modeled satisfactorily by neural networks with purely nonlinear hidden layers or by the least-squares method (the classical approach for modeling linear systems). In this approach, two channels are installed in the hidden layer of the neural network to cover both the linear and the nonlinear behavior of the system. If the nonlinear characteristics of the system (i.e., the de-superheater) are not negligible, the nonlinear channel of the neural network is activated; that is, after training, the connections in the nonlinear channel acquire considerable weights. The approach was applied to a de-superheater of a 325 MW power generating plant. The actual plant response, obtained from field experiments, is compared with the response of the proposed model and with the responses of linear and neuro-fuzzy models as well as a neural network with a purely nonlinear hidden layer. Better accuracy is observed using the proposed approach.
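The two-channel hidden layer can be sketched as below: one group of hidden units passes its net input through unchanged (identity activation), while a parallel group applies a nonlinearity, and the output layer mixes both. The sizes, the tanh choice, and the weights are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_lin, n_nonlin = 3, 4, 4
W_lin = rng.normal(size=(n_lin, n_in))       # linear-channel weights
W_non = rng.normal(size=(n_nonlin, n_in))    # nonlinear-channel weights
W_out = rng.normal(size=(1, n_lin + n_nonlin))

def forward(x):
    h_lin = W_lin @ x                        # linear channel: identity activation
    h_non = np.tanh(W_non @ x)               # nonlinear channel
    return W_out @ np.concatenate([h_lin, h_non])

y = forward(np.array([0.5, -1.0, 0.2]))
print(y)
```

For a nearly linear plant, training would drive the nonlinear channel's output weights toward zero, leaving the linear channel to carry the model, which matches the behavior described in the abstract.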

14.
A brief overview is given of the nature of present-day artificial neural net computing, and the authors emphasize the theme that in this mode of computing, knowledge is not represented symbolically but in the form of distributed processing and localized decision rules. The authors propose that in neural-net computing, the processing is the representation. In other words, the very nature of the processing encodes the knowledge. There is no place and no need for a separate body of global rules to be used by the network for inferencing. If rules exist at all, they are in the nature of local processing steps carried out at individual processors in response to stimuli from other neurons. The authors develop this theme together with a closely related one: that neural networks may also be thought of and implemented in terms of heterogeneous networks, rather than always or only in terms of massive arrays of identical elemental processors.

15.
In executing tasks involving intelligent information processing, the human brain performs better than the digital computer. The human brain derives its power from a large number [O(10^11)] of neurons interconnected by a dense interconnection network [O(10^5) connections per neuron]. Artificial neural network (ANN) paradigms adopt the structure of the brain to try to emulate its intelligent information processing methods. ANN techniques are being employed to solve problems in areas such as pattern recognition and robotic processing. Simulation of ANNs involves implementing a large number of neurons and a massive interconnection network. In this paper, we discuss various simulation models of ANNs and their implementation on distributed memory systems. Our investigations reveal that communication-efficient networks of distributed memory systems perform better than other topologies in implementing ANNs.

16.
Neural solids are novel neural networks devised for solving optimization problems. They are dual to Hopfield networks, but with a quartic energy function. These solids are open architectures, in the sense that different choices of the basic elements and interfacings solve different optimization problems. The basic element is the neural resonator (a triangle in the three-dimensional case), composed of resonant neurons that undergo self-organizing learning. This module can solve elementary optimization problems such as the search for the orthonormal matrix nearest to a given one. An example of a more complex solid, the neural decomposer, whose architecture is composed of neural resonators and their mutual connections, is then given. This solid can solve more complex optimization problems such as the decomposition of the essential matrix, a very important technique in computer vision.

17.
Variable Hidden Layer Sizing in Elman Recurrent Neuro-Evolution
The relationship between the size of the hidden layer in a neural network and performance in a particular domain is currently an open research issue. Often, the number of neurons in the hidden layer is chosen empirically and subsequently fixed for the training of the network. Fixing the size of the hidden layer limits an inherent strength of neural networks: the ability to generalize experiences from one situation to another, to adapt to new situations, and to overcome the brittleness often associated with traditional artificial intelligence techniques. This paper proposes an evolutionary algorithm to search for network sizes along with weights and connections between neurons. This research builds upon the neuro-evolution tool SANE, developed by David Moriarty. SANE evolves neurons and networks simultaneously, and is modified in this work in several ways, including varying the hidden layer size and evolving Elman recurrent neural networks for non-Markovian tasks. These modifications allow the evolution of better performing and more consistent networks, and do so faster and more efficiently. SANE, modified with variable network sizing, learns to play modified casino blackjack and develops a successful card counting strategy. The contributions of this research are performance increases of up to 8.3% over fixed hidden layer size models while reducing hidden layer processing time by almost 10%, and a faster, more autonomous approach to scaling neuro-evolutionary techniques to larger and more difficult problems.

18.
Methods of stabilization as applied to Hopfield-type continuous neural networks with a unique equilibrium point are considered. These methods permit the design of stable networks where the elements of the interconnection matrix and nonlinear activation functions of separate neurons vary with time. For stabilization with a variable interconnection matrix it is suggested that a new second layer of neurons be introduced to the initial single-layer network and some additional connections be added between the new and old layers. This approach gives us a system with a unique equilibrium point that is globally asymptotically stable, i.e. the entire space serves as the domain of attraction of this point, and the stability does not depend on the interconnection matrix of the system. In the case of the variable activation functions, some results from a recent investigation of the absolute stability problem for neural networks are presented, along with some recommendations.

19.
Big data changes rapidly: both its content and its distribution characteristics are in constant flux. Current feedforward neural network models are static learners that do not support incremental updates, and thus struggle to learn the features of dynamically changing big data in real time. To address this problem, a big-data feature-learning model supporting incremental updates is proposed. An optimization objective function is designed for fast incremental parameter updates, and a squared-error function is minimized so that the network retains its original knowledge during updating. For data whose features change frequently, the network structure is updated by increasing the number of hidden-layer neurons, so that the updated network can learn the features of dynamically changing big data in real time. After the parameters and structure have been updated, the network structure is optimized via SVD of the weight matrix, deleting redundant network connections and strengthening the generalization ability of the network model. Experimental results show that the proposed model learns the features of dynamic big data in real time by continually updating the neural network's parameters and structure, while preserving the model's original knowledge as far as possible.
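The SVD step can be sketched as follows: factor a hidden-layer weight matrix and drop the numerically negligible singular directions, which removes redundant connections without losing information. The rank-selection tolerance below is an assumption for this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
# Construct a redundant 16x32 weight matrix of rank at most 8.
W = rng.normal(size=(16, 8)) @ rng.normal(size=(8, 32))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
keep = s > 1e-8 * s[0]                 # drop near-zero singular directions
W_pruned = (U[:, keep] * s[keep]) @ Vt[keep]

print("kept rank:", int(keep.sum()))
```

Here the factorization recovers the true rank (8), and the pruned matrix reproduces the original to numerical precision; in the model above, the truncated factors replace the full weight matrix, shrinking the network.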

20.
Objective: Drawing on the working mechanisms of the brain to develop artificial intelligence is one of the important current directions in AI. Attention and memory play important roles in human cognition and understanding. Since end-to-end deep learning has shown excellent performance in tasks such as recognition and classification, how to introduce attention mechanisms and external memory structures into deep-learning models, so as to mine the information of interest in data and make effective use of external information, is a current focus of AI research. Method: Centered on mechanisms such as memory and attention, this paper introduces three representative works in this area: the Neural Turing Machine, Memory Networks, and the Differentiable Neural Computer. On this basis, it further introduces research that applies memory networks, namely memory-driven question answering, memory-driven movie/video question answering, and memory-driven creativity (text-to-image generation), and compares the progress of memory-network research in China and abroad. Results: The survey shows that: 1) introducing attention mechanisms and external memory structures into deep-learning models is a current focus of AI research; 2) research on memory networks is increasing; work on memory networks is flourishing both in China and abroad, with the number of papers published each year at major machine-learning and AI conferences rising year by year; 3) interest in memory networks keeps growing; not only is the number of papers increasing each year, but the growth has not slowed: 9 more papers in 2015, 4 more in 2016, 9 more in 2017, and 14 more in 2018; 4) memory-driven methods are highly general; memory networks have been successfully applied to question answering, visual question answering, object detection, reinforcement learning, text-to-image generation, and other areas. Conclusion: Data-driven machine-learning methods have been successfully applied in natural language, multimedia, computer vision, speech, and other fields; combining data-driven and knowledge-guided approaches will be one of the future trends in the development of artificial intelligence.

