Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Self-organizing learning array   (cited 2 times: 0 self-citations, 2 by others)
A new machine learning concept, the self-organizing learning array (SOLAR), is presented. It is a sparsely connected, information-theory-based learning machine with a multilayer structure. Its reconfigurable processing units (neurons) and evolvable system structure make it an adaptive classification system for a variety of machine learning problems, and its multilayer structure can handle complex problems. Based on entropy estimation, information-theoretic learning is performed locally at each neuron: the neural parameters and connections corresponding to minimum entropy are set adaptively for each neuron. By choosing connections for each neuron, the system sets up its wiring and completes its self-organization. SOLAR classifies input data based on the weighted statistical information from all the neurons. The system's classification ability was simulated and experiments were conducted on benchmark data; the results show very good performance compared with other classification methods. An important advantage of this structure is its scalability to large systems and its ease of hardware implementation on regular arrays of cells.
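The minimum-entropy local learning described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: a candidate connection (input) and threshold are scored by the weighted class entropy of the resulting partition, and the minimum-entropy choice is kept. The names `partition_entropy` and `select_connection` are hypothetical.

```python
import numpy as np

def partition_entropy(feature, labels, threshold):
    """Weighted class entropy after splitting samples at a threshold."""
    total = 0.0
    for mask in (feature <= threshold, feature > threshold):
        if mask.sum() == 0:
            continue
        p = np.bincount(labels[mask]) / mask.sum()
        p = p[p > 0]
        total += (mask.sum() / len(labels)) * -(p * np.log2(p)).sum()
    return total

def select_connection(inputs, labels):
    """Pick the input index and threshold giving minimum entropy."""
    best = (None, None, np.inf)
    for j in range(inputs.shape[1]):
        for t in np.unique(inputs[:, j]):
            h = partition_entropy(inputs[:, j], labels, t)
            if h < best[2]:
                best = (j, t, h)
    return best

# Tiny two-class example: either feature separates the classes
X = np.array([[0.1, 2.0], [0.2, 1.9], [0.9, 0.1], [0.8, 0.2]])
y = np.array([0, 0, 1, 1])
j, t, h = select_connection(X, y)
```

On this toy data the search finds a zero-entropy split, i.e. a perfectly class-separating threshold on the first feature examined.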

2.
An optoelectronic spiking neuron based on a bispin device is described. The neuron has separate optical inputs for excitatory and inhibitory signals, which are represented by pulses of a single polarity. Experimental data are given that demonstrate the similarity, in output-pulse form and in function set, between the suggested neuron and a biological one. An example of a hardware implementation of an optoelectronic pulsed neural network (PNN) based on the proposed neurons is described. The main elements of the network are a line of pulsed neurons and a connection array, part of which is implemented as a spatial light modulator (SLM) with memory. The SLM allows the connection weights to be modified as the network learns. It is possible to create adaptive (capable of additional learning and relearning) optoelectronic PNNs with about 2000 neurons.

3.
Feng Jian, Janusz Starzyk, Qiu Wanhua, Control and Decision (控制与决策), 2012, 27(2): 211-215
A neural network data classification method based on information entropy is discussed, in which input data are classified by voting on the weighted statistical information of all neurons. The multilayer network structure and the information-based partitioning algorithm give it better performance on data classification problems than most existing neural networks. Its parallel, scalable structure is suitable for hardware implementation, which can raise practical computing speed, making it well suited to the high-dimensional, complex, massive data problems found in finance.

4.
Addressing the precisely timed multi-spike encoding of information in spiking neurons, a new supervised learning algorithm for multilayer spiking neural networks based on convolution computation is proposed. The algorithm uses kernel-function convolution to transform discrete spike trains into continuous functions; in a multilayer feedforward spiking network structure, gradient descent yields a learning rule expressed through the kernel convolution, which is used to adjust the synaptic weights of neuron connections. In the experiments, the algorithm's ability to learn spike trains is first verified, and the algorithm is then applied to classify the Iris dataset. The results show that it can learn complex spatio-temporal spike-train patterns and achieves high classification accuracy on nonlinear pattern classification problems.
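A minimal sketch of the kernel-convolution step described above, assuming a causal exponential-decay kernel (the paper's exact kernel is not specified here): each spike contributes a decaying continuous trace, so the discrete spike train becomes a differentiable function that gradient descent can work with.

```python
import numpy as np

def spike_to_continuous(spike_times, t_grid, tau=5.0):
    """Convolve a discrete spike train with a causal exponential-decay
    kernel, yielding a continuous function of time."""
    s = np.zeros_like(t_grid, dtype=float)
    for t_f in spike_times:
        dt = t_grid - t_f
        # kernel is zero before the spike, exp(-dt/tau) after it
        s += np.where(dt >= 0.0, np.exp(-np.maximum(dt, 0.0) / tau), 0.0)
    return s

t = np.linspace(0.0, 50.0, 501)          # time grid (units illustrative)
trace = spike_to_continuous([10.0, 20.0], t)
```

The trace jumps by one at each spike time and decays with time constant `tau` in between; summing kernels over spikes is exactly the convolution of the spike train (a sum of delta functions) with the kernel.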

5.
A model of a human neural knowledge processing system is presented that suggests the following. First, an entity in the outside world tends to be locally encoded in neural networks, so that the conceptual information structure is mirrored in its physical implementation. Second, problem-solving knowledge is implemented quite implicitly in the internal structure of the neural network (a functional group of associated hidden neurons and their connections to entity neurons), not in individual neurons or connections. Third, the knowledge system is organized and implemented in neural networks in a modular fashion according to the local specialization of problem solving, where a neural network module implements an interrelated group of knowledge such as a schema; different modules have similar processing mechanisms but differ in their input and output patterns. A neural network module can be tuned just as a schema structure can be adapted to a changing environment. Three experiments, based on a modulo-arithmetic task, were conducted to validate the suggested cognitive-engineering-based knowledge structure through computer simulation, and they provide some insight into the plausibility of the suggested model of a neural knowledge processing system.

6.
A novel analytical method based on information geometry was recently proposed, and this method may provide useful insights into the statistical interactions within neural groups. The link between information-geometric measures and the structure of neural interactions has not yet been elucidated, however, because of the ill-posed nature of the problem. Here, possible neural architectures underlying information-geometric measures are investigated using an isolated pair and an isolated triplet of model neurons. By assuming the existence of equilibrium states, we derive analytically the relationship between the information-geometric parameters and these simple neural architectures. For symmetric networks, the first- and second-order information-geometric parameters represent, respectively, the external input and the underlying connections between the neurons, provided that the number of neurons used for parameter estimation in the log-linear model equals the number of neurons in the network. For asymmetric networks, however, these parameters depend on both the intrinsic connections and the external inputs to each neuron. In addition, we derive the relation between the information-geometric parameter corresponding to the two-neuron interaction and a conventional cross-correlation measure. We also show that the information-geometric parameters vary with the number of neurons assumed for parameter estimation in the log-linear model. This finding suggests a need to apply the information-geometric method carefully. A possible criterion for choosing an appropriate orthogonal coordinate is also discussed. This article points out the importance of a model-based approach and sheds light on the possible neural structure underlying the application of information geometry to neural network analysis.
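For an isolated binary neuron pair, the log-linear model behind such information-geometric measures expands log p(x1, x2) = th1*x1 + th2*x2 + th12*x1*x2 - psi, so the parameters can be read off directly from the joint distribution. A small illustrative sketch (not the article's code):

```python
import numpy as np

def pairwise_theta(p):
    """Log-linear (information-geometric) parameters of two binary
    neurons from their joint distribution p[x1, x2]."""
    th1 = np.log(p[1, 0] / p[0, 0])          # first-order, neuron 1
    th2 = np.log(p[0, 1] / p[0, 0])          # first-order, neuron 2
    th12 = np.log(p[1, 1] * p[0, 0] / (p[1, 0] * p[0, 1]))  # interaction
    return th1, th2, th12

# Independent neurons: the second-order parameter th12 vanishes
p = np.outer([0.7, 0.3], [0.6, 0.4])         # p[x1, x2] = p(x1) * p(x2)
th1, th2, th12 = pairwise_theta(p)
```

For a product distribution th12 is zero, which is why the second-order parameter is read as a measure of interaction (and, per the abstract, of the underlying connections in the symmetric case).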

7.
Zhang Junying, Xu Jin, Bao Zheng, Acta Automatica Sinica (自动化学报), 2001, 27(5): 657-664
Starting from the robustness requirements of binary feedforward networks, the concept of robust classification is proposed, and on this basis the general form of a robust classification hyperplane is given. It follows that if every neuron of a binary feedforward network is a robust neuron, the network's connection weights are only -1, 0, or +1, and each neuron's threshold is one half of a base threshold plus an auxiliary threshold that is an integer in a finite range, the auxiliary threshold being the sum of the contributions of the neuron's inputs to it. These properties make robust binary feedforward networks not only strongly robust but also easy to realize with few hidden nodes and few connections.

8.
This paper presents new findings in the design and application of biologically plausible neural networks based on spiking neuron models, which more plausibly model real biological neurons in that time is treated as an important feature for information encoding and processing in the brain. The design approach consists of an evolutionary-strategy-based supervised training algorithm, newly developed by the authors, and the use of different biologically plausible neuronal models. A dynamic synapse (DS) based neuron model, a biologically more detailed model, and the spike response model (SRM) are investigated in order to demonstrate the efficacy of the proposed approach and to further our understanding of the computing capabilities of the nervous system. Unlike the conventional synapse, represented as a static entity with a fixed weight in conventional and SRM-based neural networks, a DS is weightless and its strength changes upon the arrival of incoming input spikes; its efficacy therefore depends on the temporal structure of the impinging spike trains. In the proposed approach, the network's free parameters are trained using an evolutionary strategy in which real values, rather than a binary encoding, encode the static and DS parameters that underlie the learning process. The results show that spiking neural networks based on both types of synapse are capable of learning non-linearly separable data by means of spatio-temporal encoding. Furthermore, a comparison of the obtained performance with classical neural networks (multi-layer perceptrons) is presented.

9.
A neural-dynamics-based approach is proposed for real-time motion planning with obstacle avoidance for a mobile robot in a nonstationary environment. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation or an additive equation. The real-time collision-free robot motion is planned through the dynamic neural activity landscape of the network, without any learning procedures and without any local collision-checking procedures at each step of the robot's movement; the algorithm is therefore computationally simple. There are only local connections among neurons, and the computational complexity depends linearly on the neural network size. The stability of the proposed neural network system is proved by qualitative analysis and Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.
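A rough sketch of shunting dynamics on a one-dimensional chain of neurons. The parameter values, grid size, and Euler integration are illustrative choices, not the authors': the target receives positive external input, the obstacle negative input, and activity propagates only through positive (excitatory) neighbor outputs, so the obstacle never attracts the robot and activity decays with distance from the target.

```python
import numpy as np

def step(x, I, w, A=10.0, B=1.0, D=1.0, dt=0.01):
    """One Euler step of the shunting dynamics:
    dx_i/dt = -A*x_i + (B - x_i)*(max(I_i,0) + sum_j w_ij*max(x_j,0))
              - (D + x_i)*max(-I_i,0)
    Activity is bounded in [-D, B]; only positive activity propagates."""
    excite = np.maximum(I, 0.0) + w @ np.maximum(x, 0.0)
    inhibit = np.maximum(-I, 0.0)
    return x + dt * (-A * x + (B - x) * excite - (D + x) * inhibit)

# 5 neurons in a chain; target at neuron 4 (I = +10), obstacle at neuron 2 (I = -10)
n = 5
I = np.zeros(n)
I[4], I[2] = 10.0, -10.0
w = np.zeros((n, n))
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0   # nearest-neighbor lateral connections
x = np.zeros(n)
for _ in range(2000):                  # integrate toward steady state
    x = step(x, I, w)
```

At steady state the activity landscape is highest at the target, decreases monotonically away from it, and is negative at the obstacle; a robot that always moves to its highest-activity neighbor climbs this landscape collision-free, with no learning phase.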

10.
Multi-aggregation process neuron networks and their learning algorithms   (cited 2 times: 0 self-citations, 2 by others)
For information processing problems whose inputs are multivariate process functions and multidimensional process signals, a multi-aggregation process neuron and a multi-aggregation process neural network model are proposed. Both the inputs and the connection weights of a multi-aggregation process neuron can be multivariate process functions; its aggregation operation includes spatially weighted aggregation over multiple input functions and accumulation of multidimensional process effects, so it can simultaneously reflect the joint influence of several multivariate process input signals over a multidimensional space and the accumulated result of the process effects. A multi-aggregation process neural network is composed of multi-aggregation process neurons and other types of neurons in a given structural relationship. According to whether the output is a multivariate process function, a general model of the feedforward multi-aggregation process neural network and a model whose inputs and outputs are both process functions are established, with the ability to directly map and model the input-output relationships of multivariate process signals. A learning algorithm combining gradient descent based on multivariate function-basis expansion with numerical computation is given. Simulation results demonstrate the suitability of the model and algorithm for multivariate process-signal classification and multidimensional dynamic process modeling.

11.
The neural network method, a relatively new method in reverse engineering (RE), has the potential to reconstruct 3D models accurately and quickly. A neural network (NN) is a set of interconnected neurons, in which each neuron is capable of making autonomous arithmetic and geometric calculations; moreover, each neuron is affected by its surrounding neurons through the structure of the network. This work proposes a new approach that utilizes growing neural gas (GNG) neural network techniques to reconstruct a triangular manifold mesh. The method has the advantage of reconstructing the surface of an n-genus freeform object without a priori knowledge of the original object, its topology, or its shape. The resulting mesh can be improved by extending the MGNG into an adaptive algorithm. The proposed method was also extended to micro-structure modeling. Its feasibility is demonstrated on several examples of freeform objects with complex topologies.

12.
A rational-fraction process neural network and its applications   (cited 3 times: 0 self-citations, 3 by others)
For pattern classification and system modeling problems involving complex time-varying signals with singular values, a rational-fraction process neural network is proposed. The model is built on the approximation properties of rational functions for complex process signals and on the nonlinear transformation mechanism of process neural networks for time-varying information. Its basic information processing unit consists of two process neurons forming a numerator-denominator pair, logically constituting one rational-fraction process neuron, and extends artificial neural networks in both structure and information processing mechanism. The continuity and functional approximation capability of the network are analyzed, and a learning algorithm based on orthogonal function-basis expansion is given. Experimental results show that, for samples of time-varying functions with singular values, the network's learning and generalization properties are superior to those of BP networks and ordinary process neural networks, the numbers of hidden layers and nodes can be greatly reduced, and the learning properties of the algorithm are the same as those of the traditional BP algorithm.

13.
This paper proposes a fuzzy min-max neural network classifier with compensatory neurons (FMCN). FMCN uses hyperbox fuzzy sets to represent the pattern classes; it is a supervised classification technique with a new compensatory-neuron architecture. The concept of the compensatory neuron is inspired by the reflex system of the human brain, which takes over control in hazardous conditions. Compensatory neurons (CNs) imitate this behavior by becoming active whenever a test sample falls in a region where different classes overlap, and they handle hyperbox overlap and containment more efficiently. Simpson used a contraction process, based on the principle of minimal disturbance, to solve the problem of hyperbox overlaps; FMCN eliminates this process, since it is found to be error-prone. FMCN can learn the data online in a single pass with reduced classification and gradation errors. One of its good features is that its performance depends less on the initialization of the expansion coefficient, i.e., the maximum hyperbox size. The paper demonstrates the performance of FMCN by comparing it with the fuzzy min-max neural network (FMNN) classifier and the general fuzzy min-max neural network (GFMN) classifier on several examples.
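The hyperbox fuzzy-set idea can be illustrated with a Simpson-style min-max membership function (a simplified ramp membership; FMCN's compensatory-neuron functions differ in detail, and the function name and parameters here are assumptions):

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Fuzzy membership of pattern x in the hyperbox with min corner v
    and max corner w: 1 inside the box, decaying linearly (slope gamma)
    with distance outside, averaged over dimensions."""
    over = np.maximum(0.0, np.minimum(1.0, gamma * (x - w)))   # beyond max corner
    under = np.maximum(0.0, np.minimum(1.0, gamma * (v - x)))  # below min corner
    return np.mean(1.0 - over - under)

v, w = np.array([0.2, 0.2]), np.array([0.4, 0.4])
inside = hyperbox_membership(np.array([0.3, 0.3]), v, w)    # fully contained
outside = hyperbox_membership(np.array([0.5, 0.3]), v, w)   # 0.1 beyond w in dim 0
```

A point inside the box gets membership 1; membership falls off with distance outside, which is what lets overlapping hyperboxes produce graded (and, in FMCN, compensated) class evidence.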

14.
Wei Hui, Luan Shangmin, Journal of Software (软件学报), 2004, 15(11): 1616-1628
From the viewpoint of computational cognitive neuroscience, a direct representation of information based on neural-system dynamics and connected graphs is proposed. It first defines the neural structures and dynamic patterns for the direct representation of perceptual information, and then presents a two-layer network computational model that records the feature information of external stimuli and the connection patterns of the corresponding specific neural circuits, which is achieved through structure learning. The connectivity structure established between the two layers of neurons also serves as an associative memory, whose reliability is determined by the connectivity of the neural circuit. This direct representation is significant for research on semantic representation and semantics-based reasoning in artificial intelligence.

15.
Yang Gang, Wang Le, Dai Lizhen, Yang Hui, Acta Automatica Sinica (自动化学报), 2019, 45(4): 808-818
To address the structure adjustment and parameter learning of the span-lateral inhibition neural network (S-LINN), and drawing on the sparse connectivity of neurons in biological nervous systems and the relationship between the intellectual development of children and adolescents and the development of the cerebral cortex, a design method is proposed for a self-organizing, developmentally sparse S-LINN whose connections are initially sparsified with a small-world connection pattern. Network connection sparsity and neuron output contribution rates are defined, growth-pruning rules for network connections are designed, and connection weights are adjusted and controlled according to the correspondence between cortical development and intelligence level observed in the intellectually gifted group, so that connections are dynamically adjusted to realize self-organized development of network intelligence. Solutions of benchmark nonlinear dynamic system identification and function approximation problems show that, at the same connection complexity, the sparsely connected S-LINN has better generalization ability.

16.
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented, and a simple neuron model is analyzed. A dynamic neural model containing both feed-forward and feedback connections between the input and the output has been adopted; the proposed learning algorithm could more correctly be called self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but to the presynaptic activity and the averaged value of the postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The additional decaying terms usually accepted for stabilizing the original Hebbian rule are avoided: thanks to the adopted network structure, implementing the basic Hebbian scheme does not lead to unrealistic growth of the synaptic strengths.
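The paper's averaged-postsynaptic rule is tied to its specific feedback structure, so as a plainly substituted stand-in, the sketch below uses Oja's rule, the standard stabilized Hebbian rule with the same headline behavior: the weight vector converges to the first principal component without an explicit decay term added by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
# 2-D input stream whose dominant variance lies along the [1, 1] direction
shared = rng.normal(size=(5000, 1))
data = shared * np.array([1.0, 1.0]) + 0.1 * rng.normal(size=(5000, 2))

w = np.array([1.0, 0.0])        # initial synaptic weights
eta = 0.01                      # learning rate
for x in data:
    y = w @ x                   # postsynaptic activity
    # Oja's rule: Hebbian term y*x with an implicit normalizing decay y^2*w
    w += eta * y * (x - y * w)

w_unit = w / np.linalg.norm(w)
```

After training, `w` has (approximately) unit norm and points, up to sign, along the leading eigenvector [1, 1]/sqrt(2) of the input covariance, i.e. the principal component.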

17.
The brain can be viewed as a complex modular structure that processes information through knowledge storage and retrieval. Modularity ensures that the knowledge is stored in a manner where complications in certain modules do not affect the overall functionality of the brain. Although artificial neural networks have been very promising in prediction and recognition tasks, they are limited in terms of learning algorithms that can provide modularity in knowledge representation, which could be helpful in using knowledge modules when needed. Multi-task learning enables learning algorithms to capture, in a shared representation, knowledge from several related tasks, but little work has incorporated multi-task learning for modular knowledge representation in neural networks. In this paper, we present multi-task learning for modular knowledge representation in neural networks via modular network topologies. In the proposed method, each task is defined by the selected regions in a network topology (a module). Modular knowledge representation remains effective even if some of the neurons and connections are disrupted or removed from selected modules in the network. We demonstrate the effectiveness of the method using single-hidden-layer feedforward networks to learn selected n-bit parity problems of varying levels of difficulty, and we further apply it to benchmark pattern classification problems. The simulation and experimental results show, in general, that the proposed method retains performance quality although the knowledge is represented as modules.

18.
Almost all applications of artificial neural networks (ANNs) depend mainly on their memory ability. Typical ANN models have fixed connections with evolved weights, globalized representations, and globalized optimizations, all based on a mathematical approach; this makes them deficient in robustness, learning efficiency, capacity, anti-jamming between training sets, correlativity of samples, and so on. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations on neuron fan-in and fan-out, dynamic neuron behavior, and samples represented through different sub-circuits of neurons specialized into different response patterns. Finally, some important aspects of error correction, capacity, learning efficiency, and soundness of structural representation are analyzed theoretically. The paper demonstrates the feasibility and advantages of structure learning and representation; the model can serve as a fundamental element of cognitive systems such as perception and associative memory.

19.
Context in time series is one of the most useful and interesting characteristics for machine learning; in some cases, the dynamic characteristics are the only basis on which classification is possible. A novel neural network, named the recurrent log-linearized Gaussian mixture network (R-LLGMN), is proposed in this paper for the classification of time series. The structure of this network is based on the hidden Markov model (HMM), which has been well developed in the area of speech recognition. R-LLGMN can also be interpreted as an extension of a probabilistic neural network using a log-linearized Gaussian mixture model, into which recurrent connections have been incorporated to make use of temporal information. Simulation experiments are carried out to compare R-LLGMN with a traditional HMM-based classifier, and pattern classification experiments on EEG signals are conducted. These experiments indicate that R-LLGMN can successfully classify not only artificial data but also real biological data such as EEG signals.

20.
Advances in the biophysics of computation and in neurocomputing models have brought to the fore the importance of the neuron's dendritic structure. Dendritic structures are regarded as the basic computational units of the neuron, capable of realizing various mathematical operations, and well-structured higher-order neurons have shown improved computational power and generalization ability. However, such models are difficult to train because of a combinatorial explosion of higher-order terms as the number of inputs to the neuron increases. In this paper we present a neural network using a new neuron architecture, the generalized mean neuron (GMN) model. This neuron model consists of an aggregation function based on the generalized mean of all the inputs applied to it. The resulting model has the same number of parameters as the existing multilayer perceptron (MLP) model but improved computational power. Its capability has been tested on classification and time-series prediction problems.
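The generalized-mean aggregation can be sketched as follows; the weighted form and parameter names below are assumptions for illustration, not taken from the paper. With exponent r the same unit interpolates between the harmonic mean (r = -1), the arithmetic mean (r = 1), and, in the limits, min and max, so one extra parameter buys a family of aggregation behaviors.

```python
import numpy as np

def generalized_mean(x, w, r):
    """Weighted generalized mean aggregation (assumed form):
    ((sum_i w_i * x_i**r) / sum_i w_i) ** (1/r)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    return (np.sum(w * x**r) / np.sum(w)) ** (1.0 / r)

x = [1.0, 4.0]
w = [1.0, 1.0]
arith = generalized_mean(x, w, 1)    # arithmetic mean
harm = generalized_mean(x, w, -1)    # harmonic mean
```

For positive inputs the generalized mean is non-decreasing in r, so the harmonic mean never exceeds the arithmetic mean; a trainable r lets the neuron pick the aggregation that suits the data.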


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号