Similar Literature
Found 20 similar documents (search time: 140 ms)
1.
郝燕玲  王众 《自动化学报》2008,34(12):1475-1482
A kernel-based scene matching algorithm for downward-looking, equal-resolution imagery is proposed. By simulating a charge-attraction model, an SNN kernel function is introduced for computing the similarity of high-dimensional data of unequal dimensions. Feature points in the image are mapped into a radial basis vector (RBV) space, and the SNN kernel is used to compute the similarity between two feature-point sets and the transition matrix. A permutation-test module strengthens the stability of the SNN kernel to guarantee the reliability of the output solution. Experiments show that the SNN-kernel scene matching algorithm is highly robust to image distortion, noise, and signal loss while maintaining high accuracy and real-time performance.

2.
Spiking neural networks (SNNs) are asynchronous and event-driven, support massively parallel computation, and hold great promise for improving the computational efficiency of synchronous analog neural networks. However, SNNs still cannot be trained directly. Inspired by neuroscience research on the response mechanism of LIF (leaky integrate-and-fire) neurons, this paper proposes a new...

3.
To meet the computational demands of large-scale spiking neural networks (SNNs), brain-inspired computing systems usually require large-scale parallel platforms. A key problem for such systems is therefore how to quickly determine a reasonable number of compute nodes for an SNN workload (i.e., how to map the workload onto the platform) to obtain the best performance and power figures. This paper first analyzes the characteristics of SNN workloads and builds a computational model for them; it then instantiates the memory, computation, and communication load models of SNNs for the NEST brain simulator; finally, it designs and implements SWAM, an automatic SNN workload mapper based on NEST. SWAM computes and applies the mapping automatically, avoiding the extremely time-consuming manual trial-and-error mapping process. Typical SNN applications were run on three platforms (ARM+FPGA, ARM-only, and a PC cluster), comparing the mappings produced by SWAM, by LM-algorithm fitting, and by direct measurement. Experiments show that SWAM achieves an average mapping accuracy of 98.833% and holds an overwhelming time-cost advantage over LM fitting and direct measurement.
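The abstract above does not give SWAM's actual load models, but the idea of picking a node count from a workload model can be sketched as follows. All constants (bytes per neuron/synapse, node memory) are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of model-based node-count selection in the spirit of
# SWAM: estimate the memory footprint of an SNN workload and pick the
# smallest node count whose per-node share fits the platform's memory.
# The per-neuron and per-synapse costs below are made-up placeholders.

def min_nodes(num_neurons, num_synapses,
              bytes_per_neuron=1_000, bytes_per_synapse=50,
              node_mem_bytes=4 * 2**30):
    """Smallest node count whose per-node memory footprint fits."""
    total = num_neurons * bytes_per_neuron + num_synapses * bytes_per_synapse
    nodes = 1
    while total / nodes > node_mem_bytes:
        nodes += 1
    return nodes
```

A real mapper would also model computation and communication load, as the paper describes; this sketch covers only the memory term.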

4.
Stochastic computing encodes binary numbers as probabilistic pulse sequences, offering low power consumption and low resource usage; applying it to the hardware circuit design of spiking neural networks (SNNs) facilitates brain-inspired computation. To achieve low-power edge computing with neural networks, this paper proposes an SNN asynchronous architecture based on stochastic computing, using a crossbar array controlled by asynchronous micropipelines to implement LIF...

5.
Spiking neural networks (SNNs) represent and transmit information with spike trains and are more biologically interpretable than conventional artificial neural networks, but the feature-extraction ability of a typical SNN is limited by its structure: on multi-class tasks such as image data, its recognition accuracy is low and cannot rival convolutional neural networks. This paper proposes a new adaptive-coding spiking neural network (SCSNN) that combines the feature-extraction power of CNNs with the biological interpretability of SNNs, builds the network structure from the dynamic spike-triggering behavior of biological neurons, and designs a new surrogate-gradient backpropagation method to train the network parameters directly. The proposed SCSNN was validated on MNIST and Fashion-MNIST, achieving good recognition results: 99.62% accuracy on MNIST and 93.52% on Fashion-MNIST, confirming its effectiveness.
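The two ingredients named above, a dynamically triggered spiking neuron and a surrogate gradient for training, can be illustrated with a minimal sketch. This is not the paper's SCSNN; the rectangular surrogate and all parameter values are common textbook choices assumed here for illustration.

```python
# Minimal sketch: a discrete-time LIF neuron plus a rectangular surrogate
# gradient, the standard trick that makes the non-differentiable spike
# function trainable by backpropagation.

def lif_forward(currents, tau=2.0, v_th=1.0):
    """Simulate one LIF neuron over the input sequence; return its spike train."""
    v, spikes = 0.0, []
    for i in currents:
        v = v + (i - v) / tau          # leaky integration toward the input
        s = 1.0 if v >= v_th else 0.0  # fire when the threshold is crossed
        v = v * (1.0 - s)              # hard reset after a spike
        spikes.append(s)
    return spikes

def surrogate_grad(v, v_th=1.0, width=0.5):
    """Rectangular surrogate for d(spike)/d(v): constant near the threshold."""
    return (abs(v - v_th) < width) / (2 * width)
```

During training, the forward pass uses the hard spike while the backward pass substitutes `surrogate_grad` for the spike function's zero-almost-everywhere true derivative.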

6.
王浩杰  刘闯 《计算机应用研究》2024,41(1):177-182+187
Spiking neural networks (SNNs) have attracted wide attention for their uniquely low power consumption and fast computation on neuromorphic chips. Converting a deep neural network (DNN) to an SNN is one effective way to train SNNs, but the conversion introduces approximation errors, and the converted SNN suffers severe performance degradation at short time steps. Through a detailed analysis, this paper decomposes the conversion error into quantization error, clipping error, and unevenness error, and proposes an adaptive-threshold algorithm that improves SNN threshold balancing: minimum mean squared error (MMSE) is used to better balance quantization and clipping error, and a dual-threshold memory mechanism based on the IF neuron model effectively resolves the unevenness error. Experiments show strong performance on CIFAR-10, CIFAR-100, and the MIT-BIH arrhythmia database; on CIFAR-10 the method reaches 93.22% accuracy with only 16 time steps, confirming its effectiveness.
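The quantization/clipping trade-off described above can be made concrete with a toy sketch. This is an assumed illustration, not the paper's algorithm: with T time steps a converted layer can only represent activations on a grid {0, th/T, ..., th}, so raising the threshold th coarsens the grid (more quantization error) while lowering it clips large activations (more clipping error); scanning candidate thresholds for minimum MSE captures the MMSE idea.

```python
# Hypothetical sketch of MMSE threshold balancing for DNN-to-SNN conversion.

def quantize(a, th, t_steps):
    """Snap an activation onto the T-step firing-rate grid, then clip at th."""
    step = th / t_steps
    return min(round(a / step) * step, th)

def mmse_threshold(acts, t_steps, candidates):
    """Pick the candidate threshold with minimum mean squared representation error."""
    def mse(th):
        return sum((a - quantize(a, th, t_steps)) ** 2 for a in acts) / len(acts)
    return min(candidates, key=mse)
```

A real implementation would scan thresholds per layer over recorded activation statistics; the dual-threshold memory mechanism for unevenness error is a separate component not sketched here.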

7.
In nearest-neighbor algorithms, the absolute distance and similarity between neighbor samples and the target sample provide the key evidence for deciding the target's class, and the value of K directly determines prediction quality. The SNN algorithm, however, predicts target samples of different local densities with a single fixed empirical K, which is one-sided. To tune K sensibly within SNN and improve the algorithm's prediction accuracy and stability, this paper proposes an adaptive SNN algorithm based on local density and similarity (AK-SNN). Evaluated on UCI datasets, it achieves better prediction performance and robustness than KNN and SNN.
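The core idea of adapting K to local density can be sketched as follows. The density rule (count of training points within a fixed radius) and all parameter names are assumptions for illustration, not the AK-SNN formulation.

```python
# Illustrative sketch: denser neighborhoods get a larger K, sparse ones a
# smaller K, instead of one fixed empirical K for every query.

def distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def adaptive_k(query, points, k_min=1, k_max=7, radius=1.0):
    """Derive K from how many training points fall within `radius` of the query."""
    density = sum(distance(query, p) <= radius for p in points)
    return max(k_min, min(k_max, density))

def predict(query, data, labels, **kw):
    k = adaptive_k(query, data, **kw)
    nearest = sorted(range(len(data)), key=lambda i: distance(query, data[i]))[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)   # majority vote among K neighbors
```

In a dense cluster the vote draws on more neighbors and is more stable; near an isolated point it falls back to the single nearest neighbor.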

8.
Approximation capability of T-S fuzzy descriptor systems
This paper studies the approximation capability of T-S fuzzy descriptor systems, giving an approximation theorem and proving that such systems can approximate a broad class of nonlinear descriptor systems to arbitrary accuracy; the MISO (multi-input single-output) case is also extended to MIMO (multi-input multi-output). Building on the approximation theorem, nonlinear descriptor systems are modeled with neural networks, and the network structure and learning algorithm are given. Two neural-network training strategies are proposed, the strengths and weaknesses of each are analyzed, and a numerical example verifies the effectiveness of the algorithm.

9.
For analyzing the relation between population neural signals in the motor cortex and motor behavior, a spiking neural network (SNN) classification algorithm is proposed. The network's connection weights and synaptic delay parameters are trained with an improved particle swarm optimization (PSO) method. Simulations show that the SNN classifies better than the population vector (PV) method, which helps realize higher-performance brain-computer interface systems for neural rehabilitation.
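Since the abstract does not specify the SNN model, the trainable part, a standard particle swarm optimizer, can be sketched on a toy objective. The sphere function stands in for the network's classification loss; all hyperparameters are common defaults assumed here, not the paper's improved PSO.

```python
import random

# Generic PSO sketch: each particle tracks a velocity, its personal best,
# and the swarm's global best, and is pulled stochastically toward both.

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - x[d])
                            + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest

# Minimizing the sphere function drives all coordinates toward 0:
best = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting, `f` would evaluate classification error as a function of the SNN's weights and delays rather than a toy function.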

10.
曹子宁  董红斌  石纯一 《软件学报》2001,12(9):1366-1374
First, a multi-agent belief logic MBL is established by adding an everyone-believes operator and a common-belief operator to classical belief logic; Kripke semantics and generalized Aumann semantics for MBL are given, their equivalence is discussed, and the soundness and completeness of MBL with respect to both semantics are proved. Second, a multi-agent probabilistic belief logic MPBL is established: by introducing a probability space on top of the generalized Aumann semantics, a probabilistic Aumann semantics for MPBL is given, its soundness is proved, and some corollaries of MPBL are presented.

11.
The extreme learning machine (ELM) is a novel single-hidden-layer feedforward network algorithm: training only requires choosing a suitable number of hidden nodes, with input weights and hidden biases assigned randomly, and completes in a single pass without iteration. Exploiting the strength of genetic algorithms in tuning prediction-model parameters, the optimal ELM parameter values are found and a passenger-throughput prediction model for Chengdu Shuangliu International Airport is built. Comparison against support vector machines and BP neural networks analyzes the feasibility and advantages of the genetic-ELM algorithm for throughput prediction. Simulations show that genetic-ELM is not only feasible but also clearly superior to the original ELM in prediction accuracy and training speed.

12.
In this paper, a new learning algorithm is proposed for simultaneously learning a function and its derivatives, extending the study of the error-minimized extreme learning machine for single-hidden-layer feedforward neural networks. The formulation reduces to solving a system of linear equations, whose solution is obtained by the Moore-Penrose generalized pseudo-inverse. The number of hidden nodes is determined automatically by repeatedly adding new hidden nodes to the network, one by one or group by group, and updating the output weights incrementally in an efficient manner until the network output error falls below the given expected learning accuracy. To verify the efficiency of the proposed method, a number of interesting examples are considered and the results are compared with those of two other popular methods. The proposed method is observed to be fast and to produce similar or better generalization performance on the test data.
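The grow-until-accurate loop with a pseudo-inverse solve can be sketched as follows. This is not the paper's method: derivatives are omitted, and the efficient incremental weight update is replaced by a full pseudo-inverse re-solve at each step for clarity.

```python
import numpy as np

# Sketch of an error-minimized ELM core: add random hidden nodes until the
# training RMSE drops below a target; output weights come from the
# Moore-Penrose pseudo-inverse of the hidden-layer output matrix.

rng = np.random.default_rng(0)

def elm_fit(X, y, target_rmse=0.05, max_nodes=200):
    n, d = X.shape
    W = np.empty((0, d))
    b = np.empty(0)
    while W.shape[0] < max_nodes:
        # add one hidden node with random input weights and bias
        W = np.vstack([W, rng.normal(size=(1, d))])
        b = np.append(b, rng.normal())
        H = np.tanh(X @ W.T + b)          # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ y      # Moore-Penrose least-squares solution
        rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
        if rmse < target_rmse:
            break
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W.T + b) @ beta
```

Because the hidden parameters are never retrained, each step only re-solves a linear least-squares problem, which is the source of ELM's speed.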

13.
T.  S. 《Neurocomputing》2009,72(16-18):3915
The major drawbacks of the backpropagation algorithm are local minima and slow convergence. This paper presents an efficient technique, ANMBP, for training single-hidden-layer neural networks that improves convergence speed and escapes local minima. The algorithm is a modified backpropagation for neighborhood-based neural networks in which fixed learning parameters are replaced with adaptive ones. The developed learning algorithm was applied to several problems, on all of which it performed well.

14.
Motivated by the slow learning properties of multilayer perceptrons (MLPs) which utilize computationally intensive training algorithms, such as the backpropagation learning algorithm, and can get trapped in local minima, this work deals with ridge polynomial neural networks (RPNN), which maintain fast learning properties and powerful mapping capabilities of single layer high order neural networks. The RPNN is constructed from a number of increasing orders of Pi–Sigma units, which are used to capture the underlying patterns in financial time series signals and to predict future trends in the financial market. In particular, this paper systematically investigates a method of pre-processing the financial signals in order to reduce the influence of their trends. The performance of the networks is benchmarked against the performance of MLPs, functional link neural networks (FLNN), and Pi–Sigma neural networks (PSNN). Simulation results clearly demonstrate that RPNNs generate higher profit returns with fast convergence on various noisy financial signals.

15.
In this paper, a new efficient learning procedure for training single-hidden-layer feedforward networks is proposed. The procedure trains the output layer and the hidden layer separately, and a new optimization criterion for the hidden layer is proposed. Existing methods for finding a fictitious teacher signal for the output of each hidden neuron, a modified standard backpropagation algorithm, and the new optimization criterion are combined to train the feedforward neural networks. The effectiveness of the proposed procedure is shown by simulation results. *The work of P. Thangavel is partially supported by UGC, Government of India sponsored project.

16.
Noise-robust speech recognition based on RBF neural networks
To address the poor performance of current speech recognition systems in noisy environments, a noise-robust recognizer based on RBF neural networks is implemented, exploiting their best-approximation property and fast training; both a clustering algorithm and a fully supervised training algorithm are used. In the clustering variant, the hidden layer is trained with K-means and the output layer is learned by linear least squares; in the fully supervised variant, all parameters are adjusted by gradient descent, a supervised scheme able to select well-performing parameters. Experiments show that, across signal-to-noise ratios, the fully supervised algorithm achieves higher recognition rates than the clustering algorithm.
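The clustering variant described above (K-means centers, least-squares output layer) can be sketched on a toy 1-D regression standing in for the speech features; the data, widths, and sizes are illustrative assumptions.

```python
import numpy as np

# Sketch of the two-stage RBF training: K-means fixes the centers, then the
# output weights are solved in closed form by linear least squares.

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)
        centers = np.array([X[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def rbf_design(X, centers, width=0.2):
    """Gaussian design matrix: one column per RBF center."""
    return np.exp(-((X[:, None] - centers[None, :]) / width) ** 2)

X = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * X)
centers = kmeans(X, k=8)
Phi = rbf_design(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares output layer
```

The fully supervised variant from the abstract would instead adjust centers, widths, and weights jointly by gradient descent.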

17.
《Applied Soft Computing》2008,8(1):166-173
Almost all current training algorithms for neural networks are based on gradient-descent techniques, which cause long training times. In this paper, we propose a novel fast training algorithm called the Fast Constructive-Covering Algorithm (FCCA) for neural network construction based on geometrical expansion. Parameters are updated according to the geometrical location of the training samples in the input space, and each sample in the training set is learned only once. By doing this, FCCA avoids iterative computing and is much faster than traditional training algorithms. Given an input sequence in an arbitrary order, FCCA learns "easy" samples first, and "confusing" samples are easily learned after these "easy" samples. This sample reordering is done on the fly based on geometrical concepts. In addition, FCCA begins with an empty hidden layer and adds new hidden neurons only when necessary. This constructive learning avoids blind selection of the neural network structure. Experimental work on classification problems illustrates the advantages of FCCA, especially in learning speed.
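A one-pass constructive covering scheme in the spirit of FCCA can be sketched as follows. The details (hypersphere units, the half-distance-to-enemy radius, the nearest-sphere fallback) are assumptions for illustration, not the paper's exact construction.

```python
# Each hidden unit is a hypersphere; a sample already covered by a sphere of
# its own class is learned "for free", otherwise a new sphere is created
# whose radius stops short of the nearest opposite-class sample. Every
# training sample is visited exactly once, with no iteration.

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def fcca_train(samples, labels, default_r=1.0):
    spheres = []                     # list of (center, radius, label)
    for x, y in zip(samples, labels):
        if any(dist(x, c) <= r and lbl == y for c, r, lbl in spheres):
            continue                 # already covered: nothing to learn
        enemies = [dist(x, s) for s, l in zip(samples, labels) if l != y]
        r = 0.5 * min(enemies) if enemies else default_r
        spheres.append((x, r, y))
    return spheres

def fcca_predict(x, spheres):
    # fall back to the "most covering" (nearest relative to radius) sphere
    c, r, lbl = min(spheres, key=lambda s: dist(x, s[0]) - s[1])
    return lbl
```

Easy samples deep inside a class region never spawn units, so the hidden layer grows only where class boundaries require it, matching the constructive behavior described above.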

18.
In order to improve the learning ability of a forward neural network, in this article we incorporate feedback back-propagation (FBBP) and grey system theory to approach the learning and training of a neural network from a new perspective. By reducing the input grey degree, we optimize the input of the neural network to make it more suitable for learning and training. Simulation results verified the efficiency of the proposed algorithm by comparing its performance with that of FBBP and classic back-propagation (BP). The results showed that the proposed algorithm trains quickly and generalizes strongly, making it an effective learning method.

19.
This paper proposes a new hybrid approach for recurrent neural networks (RNN). The basic idea is to train the input layer by unsupervised learning and the output layer by supervised learning: the Kohonen algorithm is used for the unsupervised part and a dynamic gradient-descent method for the supervised part. The performance of the proposed algorithm is compared with backpropagation through time (BPTT) on three benchmark problems. Simulation results show that the new algorithm outperforms standard BPTT in both the total number of iterations and the training time required.

20.
Corner classification is a family of fast classification algorithms, and feedforward neural networks that use it as their learning algorithm have important applications in information retrieval, especially online retrieval. Through an analysis of the CC4 learning algorithm, the significance of the generalization distance in corner-classification networks is revealed. For fast classification of text data, a new corner-classification network, TextCC, is proposed, together with a new algorithm for learning the connection matrix between the hidden and output layers that resolves multi-class decisions. Experiments show that the new learning algorithm is effective and that TextCC's classification accuracy improves significantly over CC4's.

