Similar Documents
20 similar documents found; search time: 15 ms
1.
This study proposed a supervised learning probabilistic neural network (SLPNN) with three kinds of network parameters: variable weights representing the importance of input variables, the reciprocal of the kernel radius representing the effective range of the data, and data weights representing data reliability. All three kinds of parameters can be adjusted through training. We tested three artificial functions as well as 15 benchmark problems, and compared SLPNN with the multi-layer perceptron (MLP) and the probabilistic neural network (PNN). The results showed that SLPNN is slightly more accurate than MLP and much more accurate than PNN. In addition, the data weights can identify noisy data in the data set, and the variable weights can measure the importance of the input variables; of the three kinds of network parameters, the variable weights contribute most to the model's accuracy.
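A minimal sketch of how the three kinds of parameters described above could enter a PNN-style kernel decision; the function name, Gaussian-kernel form, and argument layout are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def slpnn_predict(x, X, y, var_w, data_w, inv_radius):
    """x: query vector; X, y: training samples and labels;
    var_w: per-variable importance weights; data_w: per-sample reliability weights;
    inv_radius: reciprocal of the kernel radius (all three trainable in the paper)."""
    d2 = (((X - x) * var_w) ** 2).sum(axis=1)        # variable-weighted squared distance
    k = data_w * np.exp(-(inv_radius ** 2) * d2)     # reliability-weighted Gaussian kernel
    classes = np.unique(y)
    scores = np.array([k[y == c].sum() for c in classes])
    return classes[scores.argmax()]
```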

2.
3.
This paper presents new findings in the design and application of biologically plausible neural networks based on spiking neuron models, which are a more plausible model of real biological neurons in which time is an important feature for information encoding and processing in the brain. The design approach consists of an evolutionary-strategy-based supervised training algorithm, newly developed by the authors, and the use of different biologically plausible neuronal models. A dynamic synapse (DS) based neuron model, a biologically more detailed model, and the spike response model (SRM) are investigated in order to demonstrate the efficacy of the proposed approach and to further our understanding of the computing capabilities of the nervous system. Unlike the conventional synapse, represented as a static entity with a fixed weight and employed in conventional and SRM-based neural networks, a DS is weightless and its strength changes upon the arrival of incoming input spikes; its efficacy therefore depends on the temporal structure of the impinging spike trains. In the proposed approach, the free parameters of the network are trained with an evolutionary strategy in which real values, rather than a binary encoding, encode the static and DS parameters that underlie the learning process. The results show that spiking neural networks based on both types of synapse can learn non-linearly separable data by means of spatio-temporal encoding. Furthermore, a comparison of the obtained performance with classical neural networks (multi-layer perceptrons) is presented.
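The training described above uses a real-valued evolutionary strategy rather than gradient descent. The sketch below shows the generic selection-plus-mutation loop such an approach could run over a real-valued parameter vector (e.g. static or dynamic-synapse parameters); the population sizes, mutation scheme, and fitness signature are assumptions for illustration.

```python
import numpy as np

def evolve(fitness, dim, pop=20, parents=5, gens=100, sigma=0.3, seed=0):
    """Evolve a real-valued parameter vector by truncation selection and Gaussian mutation."""
    rng = np.random.default_rng(seed)
    population = rng.normal(size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in population])
        best = population[np.argsort(scores)[-parents:]]     # keep the fittest parents
        picks = rng.integers(parents, size=pop)
        population = best[picks] + sigma * rng.normal(size=(pop, dim))
    scores = np.array([fitness(p) for p in population])
    return population[scores.argmax()]
```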

4.
Networks of spiking neurons are very powerful and versatile models for biological and artificial information processing systems. For modelling pattern analysis tasks in a biologically plausible way that requires short response times with high precision, they seem more appropriate than networks of threshold gates or models that encode analog values in average firing rates. We investigate the question of how neurons can learn on the basis of time differences between firing times. In particular, we provide learning rules of the Hebbian type formulated in terms of single spiking events of the pre- and postsynaptic neuron, and show that the weights approach a value given by the difference between pre- and postsynaptic firing times with arbitrarily high precision.
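A one-line sketch of the kind of Hebbian rule described above, in which repeated pre/post spike pairings pull the weight toward the firing-time difference; the learning-rate parameter and exact update form are illustrative assumptions.

```python
def update_weight(w, t_pre, t_post, eta=0.1):
    # Pull w toward (t_post - t_pre); iterating the update makes w converge to that difference.
    return w + eta * ((t_post - t_pre) - w)
```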

5.
This paper proposes a speech recognition method based on spiking neural networks (SNN). The method builds on the Hodgkin-Huxley (H-H) spiking neuron model and uses circle-map spike coding to convert spike trains into symbol sequences; a distance function between symbol sequences then serves as the similarity measure for speech recognition. Drawing on the computational power and strong real-time performance of SNNs, the paper gives a preliminary study of the speech recognition problem. Experimental results show that the proposed method is feasible and merits further investigation.

6.
Spiking neural P systems (SN P systems) are a new class of distributed, parallel computing models inspired by the way neurons transmit information via electrical spikes; they have strong computational power and the potential to solve computationally hard problems. Spiking neural P systems with anti-spikes (SN PA systems) are a variant containing two kinds of objects, spikes and anti-spikes, and are particularly well suited to encoding balanced (symmetric) ternary numbers. This paper uses SN PA systems to simulate universal balanced-ternary AND, OR, and NOT logic gates, and to implement balanced-ternary integer addition and subtraction. The present work is the first theoretical attempt at a ternary CPU design based on SN PA systems, and it also provides a practical case for an open problem posed by Linqiang Pan and Gheorghe Păun.
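The SN PA encoding maps a spike to +1, an anti-spike to -1, and no object to 0, i.e. balanced-ternary digits. The sketch below shows balanced-ternary addition on such digit lists, written as plain Python rather than as a membrane system, to illustrate the arithmetic the gates have to realise.

```python
def bt_add(a, b):
    """Add two balanced-ternary numbers given as digit lists (-1, 0, +1),
    least-significant digit first, e.g. 5 = [-1, -1, 1] since -1 - 3 + 9 = 5."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    out, carry = [], 0
    for i in range(n):
        s, carry = a[i] + b[i] + carry, 0
        if s > 1:
            s, carry = s - 3, 1      # e.g. digit sum 2 becomes digit -1 with carry +1
        elif s < -1:
            s, carry = s + 3, -1
        out.append(s)
    if carry:
        out.append(carry)
    return out

def bt_value(digits):
    return sum(d * 3 ** i for i, d in enumerate(digits))
```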

7.
Three neural-based methods for the extraction of logical rules from data are presented. These methods facilitate conversion of graded-response neural networks into networks performing logical functions. The MLP2LN method converts a standard MLP into a network performing logical operations (LN). C-MLP2LN is a constructive algorithm that creates such MLP networks. Logical interpretation is assured by adding constraints to the cost function, forcing the weights to ±1 or 0; skeletal networks emerge, ensuring that a minimal number of logical rules is found. In both methods, rules covering many training examples are generated before more specific rules covering exceptions. The third method, FSM2LN, is based on probability density estimation. Several examples of the performance of these methods are presented.
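One common way to write the kind of constraint term mentioned above, which pushes each weight toward 0 or ±1 so that a skeletal, rule-like network emerges; the coefficients and exact functional form used in MLP2LN may differ, so treat this as an illustrative sketch.

```python
import numpy as np

def ln_penalty(w, lam1=1e-4, lam2=1e-3):
    """Regularizer added to the cost: the lam1 term drives weights to 0,
    the lam2 term drives the surviving weights to +1 or -1."""
    w = np.asarray(w, dtype=float)
    value = lam1 * np.sum(w ** 2) + lam2 * np.sum(w ** 2 * (w - 1) ** 2 * (w + 1) ** 2)
    grad = 2 * lam1 * w + lam2 * 2 * w * (w ** 2 - 1) * (3 * w ** 2 - 1)
    return value, grad
```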

8.
Medical image classification is a vital part of the design of computer-aided diagnosis (CAD) models. Conventional CAD models depend mainly on problem-oriented shape, color, and/or texture features of medical images. Recently developed deep learning (DL) approaches offer an efficient way to construct dedicated classification models, but the high resolution of medical images and the small size of available datasets lead to high computation cost. In this respect, this paper presents a deep convolutional neural network with a hierarchical spiking neural network (DCNN-HSNN) for medical image classification. The proposed DCNN-HSNN technique aims to detect and classify diseases in medical images. A region-growing segmentation technique determines the infected regions in the medical image, a NADAM-optimised DCNN-based Capsule Network (CapsNet) extracts a collection of feature vectors, and a shark smell optimization algorithm (SSA) based HSNN performs the classification. To validate the performance of the DCNN-HSNN technique, a wide range of simulations is carried out on the HIS2828 and ISIC2017 datasets. The experimental results highlight the effectiveness of the DCNN-HSNN technique over recent techniques in terms of different measures.

9.
《软件》2016,(5):77-80
Evolutionary neural networks combine evolutionary algorithms with artificial neural networks. The evolutionary algorithm allows the network to adaptively adjust its topology and connection weights during evolutionary development, remedying the lack of autonomous adaptation of neural networks in simulation and improving their biological plausibility. As research has deepened, many different types of evolutionary neural networks have appeared; according to how the genome is encoded, they can be divided into direct-encoding and indirect-encoding approaches. This paper reviews and analyses the genome encoding schemes used in neural networks and concludes by summarising the application areas of indirect encoding methods.
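As an illustration of the direct-encoding idea mentioned above, the sketch below flattens an MLP's weight matrices into a single genome vector and restores them, so a standard evolutionary algorithm can mutate and recombine the genome directly; the layer shapes are assumed for the example.

```python
import numpy as np

def encode(weights):
    # Direct encoding: concatenate every weight matrix into one flat genome.
    return np.concatenate([w.ravel() for w in weights])

def decode(genome, shapes):
    # Recover the weight matrices from the flat genome.
    weights, i = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        weights.append(genome[i:i + size].reshape(shape))
        i += size
    return weights

shapes = [(4, 8), (8, 3)]                        # assumed 4-8-3 network
genome = encode([np.zeros(s) for s in shapes])   # genome length = 4*8 + 8*3 = 56
```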

10.
Evolutionary Learning of Modular Neural Networks with Genetic Programming
Evolutionary design of neural networks has shown great potential as a powerful optimization tool. However, most evolutionary neural networks have not taken advantage of the fact that they can evolve from modules. This paper presents a hybrid of modular neural networks and genetic programming as a promising model for evolutionary learning. It describes the concepts and methodologies of this evolvable model of modular neural networks, which may not only develop new functionality spontaneously but also grow and evolve its own structure autonomously. We show the potential of the method by applying an evolved modular network to a visual categorization task with handwritten digits. Sophisticated network architectures as well as functional subsystems emerge from an initial set of randomly connected networks. Moreover, the evolved neural network reproduces some characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways.

11.
Neural networks have been widely applied in pattern recognition, automatic control, data mining, and other fields, but the speed of their learning methods cannot meet practical demands. The traditional back-propagation (BP) method is based on gradient descent and requires many iterations; all network parameters must be determined iteratively during training, so the computational cost and the search space of the algorithm are large. The extreme learning machine (ELM) adopts a one-shot learning idea that greatly accelerates learning, avoids repeated iterations and local minima, and offers good generalisation, robustness, and controllability. However, whether ELM is used for classification or regression, the algorithm still has problems on different datasets and in different application domains, so this paper makes an in-depth comparative analysis of existing methods and points out future directions for extreme learning methods.
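A minimal sketch of the one-shot ELM training step referred to above: the input weights and biases are drawn at random and never trained, and the output weights are obtained in a single least-squares solve; the function names and the tanh activation are assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """X: (n_samples, n_features); T: (n_samples, n_outputs) targets."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights, never updated
    b = rng.normal(size=n_hidden)                  # random hidden biases
    H = np.tanh(X @ W + b)                         # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                   # output weights in one step, no iteration
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```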

12.
Associative memory is one of the important functions of artificial neural networks, and the Hopfield network is an important network applied to associative memory. To realise the memory function, we want training to make the stored patterns stable states of the network. The Hopfield network is trained with the Hebb rule.
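A minimal sketch of the Hebb-rule storage and recall that the abstract refers to; the synchronous update and sign convention are simplifying assumptions.

```python
import numpy as np

def hebb_store(patterns):
    """patterns: array of shape (p, n) with entries in {-1, +1}."""
    p, n = patterns.shape
    W = patterns.T @ patterns / n     # Hebb (outer-product) rule
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W

def recall(W, x, steps=20):
    x = x.copy()
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)   # threshold update toward a stable state
    return x
```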

13.
Research on adaptive learning methods for neural networks
This paper discusses the learning problem of artificial neural networks from several perspectives: connection weights, network topology, learning parameters, and neuron activation characteristics, and gives concrete implementations for the popular BP model. Experiments show that these methods are markedly effective in accelerating network convergence and optimising network topology; the paper provides examples, approaches, and a summary of ideas for improving and designing ANN learning algorithms.
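One concrete instance of adapting a learning parameter during BP training, in the spirit of the methods discussed above: a "bold driver"-style rule grows the learning rate while the error keeps falling and shrinks it when the error rises; the factors are illustrative assumptions, not the paper's values.

```python
def adapt_learning_rate(lr, prev_error, error, grow=1.05, shrink=0.5):
    """Increase lr slightly after a successful epoch, cut it sharply otherwise."""
    return lr * grow if error < prev_error else lr * shrink
```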

14.
Research on extreme learning methods for neural networks
Single-hidden-layer feedforward neural networks (SLFNs) have been widely applied in pattern recognition, automatic control, data mining, and other fields, but the speed of traditional learning methods falls far short of practical needs and has become the main bottleneck to their development. There are two main reasons: (1) the traditional back-propagation (BP) method is based on gradient descent and needs many iterations; (2) all parameters of the network must be determined iteratively during training. The computational cost and the search space of the algorithm are therefore large. To address these problems, this paper borrows the one-shot learning idea of ELM and, based on structural risk minimisation, proposes a fast learning method (RELM) that avoids repeated iterations and local minima and has good generalisation, robustness, and controllability. Experiments show that the overall performance of RELM is better than that of ELM, BP, and SVM.
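The regularised variant changes only the output-weight solve of ELM. A sketch under the usual ridge-style formulation (the exact structural-risk term used in RELM may differ):

```python
import numpy as np

def relm_output_weights(H, T, C=1.0):
    """H: hidden-layer output matrix; T: targets; C: regularisation trade-off.
    Solves (H'H + I/C) beta = H'T instead of using the plain pseudo-inverse."""
    L = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)
```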

15.
A novel FLIR image segmentation technique based on a wavelet neural network architecture is proposed, combining the time-frequency localisation property of the wavelet transform with the self-learning ability of neural networks, so that the FLIR segmentation algorithm has strong approximation and fault-tolerance capabilities. The algorithm has been applied in a FLIR-ATR system and achieves good results in extracting the contours of FLIR targets and suppressing background clutter.

16.
Ensemble learning has gained considerable attention in tasks including regression, classification, and clustering. Adaboost and Bagging are two popular approaches used to train these models: the former provides accurate estimates in regression settings but is computationally expensive because of its inherently sequential structure, while the latter is less accurate but highly efficient. One drawback of ensemble algorithms is the high computational cost of the training stage. To address this issue, we propose a parallel implementation of the Resampling Local Negative Correlation (RLNC) algorithm for training a neural network ensemble, in order to achieve accuracy competitive with Adaboost and efficiency comparable to Bagging. We test our approach on synthetic and real regression datasets from the UCI and StatLib repositories. In particular, our fine-grained parallel approach achieves a satisfactory balance between accuracy and parallel efficiency.
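A sketch of the negative-correlation penalty that underlies RLNC-style ensemble training: each member's loss trades its own squared error against its deviation from the ensemble mean, encouraging diverse members; the symbol names and the lambda value are assumptions.

```python
import numpy as np

def nc_losses(preds, y, lam=0.5):
    """preds: (m, n) predictions of m ensemble members on n points; y: (n,) targets.
    Returns one loss per member: squared error minus lam times the squared
    deviation from the ensemble mean (the negative-correlation term)."""
    f_ens = preds.mean(axis=0)
    err = ((preds - y) ** 2).mean(axis=1)
    div = ((preds - f_ens) ** 2).mean(axis=1)
    return err - lam * div
```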

17.
The learning capability and efficiency of neural networks are an important research direction. Based on orthogonal transforms, this paper proposes an orthogonal learning algorithm for networks; it learns quickly, can obtain the globally optimal solution, and can effectively handle abnormal situations arising during learning, so it has good general applicability. Moreover, learning of new samples can continue on top of previous learning, giving the network an incremental, step-by-step learning character and improving learning efficiency.

18.
Wu, Cathy; Berry, Michael; Shivakumar, Sailaja; McLarty, Jerry. Machine Learning, 1995, 21(1-2): 177-193
A neural network classification method has been developed as an alternative approach to the search/organization problem of protein sequence databases. The neural networks used are three-layered, feed-forward, back-propagation networks. The protein sequences are encoded into neural input vectors by a hashing method that counts occurrences of n-gram words. A new SVD (singular value decomposition) method, which compresses the long and sparse n-gram input vectors and captures semantics of n-gram words, has improved the generalization capability of the network. A full-scale protein classification system has been implemented on a Cray supercomputer to classify unknown sequences into 3311 PIR (Protein Identification Resource) superfamilies/families at a speed of less than 0.05 CPU second per sequence. The sensitivity is close to 90% overall, and approaches 100% for large superfamilies. The system could be used to reduce the database search time and is being used to help organize the PIR protein sequence database.
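A sketch of the n-gram encoding and SVD compression described above; the bigram size, alphabet handling, and truncation rank are illustrative assumptions.

```python
import numpy as np
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard amino acids

def ngram_vector(seq, n=2):
    """Count occurrences of every length-n word over the amino-acid alphabet."""
    index = {"".join(g): i for i, g in enumerate(product(AA, repeat=n))}
    v = np.zeros(len(index))
    for i in range(len(seq) - n + 1):
        g = seq[i:i + n]
        if g in index:
            v[index[g]] += 1
    return v

def svd_compress(M, k=100):
    """Project the long, sparse n-gram count vectors (rows of M) onto the
    top-k right singular vectors to obtain short, dense network inputs."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return M @ Vt[:k].T
```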

19.
20.
Incremental learning is the process of learning new information on top of previous learning results in order to acquire new knowledge, while preserving the previous results as much as possible. This paper first outlines covering-based constructive neural networks and then proposes a fast incremental learning algorithm built on them. Starting from the classification ability of the existing network, the algorithm further improves it through fast incremental learning of new samples. Experimental results show that the algorithm is effective.
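A rough sketch of one way a covering-based network can absorb a new sample without disturbing what it has already learned: existing covers are kept, and a new sphere is added only when the sample is not yet covered by its own class; the radius heuristic is an assumption, not the paper's rule.

```python
import numpy as np

def incremental_update(covers, x, label, shrink=0.99):
    """covers: list of (center, radius, label) spheres forming the current network."""
    for c, r, lab in covers:
        if lab == label and np.linalg.norm(x - c) <= r:
            return covers                                   # already covered: nothing to learn
    rivals = [np.linalg.norm(x - c) for c, _, lab in covers if lab != label]
    radius = shrink * min(rivals) if rivals else 1.0        # stay clear of other classes
    covers.append((np.asarray(x, dtype=float), radius, label))
    return covers
```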
