Similar Documents
20 similar documents found (search time: 33 ms)
1.
Existing network representation learning algorithms fall mainly into two categories: those based on shallow neural networks and those based on neural matrix factorization. Network representation learning based on shallow neural networks has also been shown to be equivalent to factorizing a feature matrix of the network structure. Moreover, most existing methods learn features only from the network structure, i.e., single-view representation learning, even though a network inherently contains multiple views. This paper therefore proposes a network representation learning algorithm based on multi-view ensembles (MVENR). The algorithm dispenses with neural-network training and instead incorporates the ideas of matrix information fusion and factorization into network representation learning. It effectively fuses the network-structure view, the edge-weight view, and the node-attribute view, remedying the common neglect of edge weights in existing methods and alleviating the feature-sparsity problem that arises when training on a single view. Experimental results show that MVENR outperforms several commonly used joint learning algorithms and structure-based network representation learning algorithms, making it a simple and efficient network representation learning method.
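As a rough illustration of the fuse-then-factorize idea described above, the sketch below linearly combines a structure view, an edge-weight view, and a node-attribute similarity view and factorizes the fused matrix with a truncated SVD. The fusion weights, the SVD step, and all names are illustrative assumptions, not the paper's actual MVENR formulation.

```python
# Illustrative sketch (not the paper's exact MVENR formulation): fuse three
# network views into one matrix and factorize it to obtain node embeddings.
import numpy as np

def multi_view_embedding(adj, weights, attr_sim, alpha=(0.4, 0.3, 0.3), dim=16):
    """adj: binary structure view; weights: edge-weight view;
    attr_sim: node-attribute similarity view; all of shape (n, n)."""
    fused = alpha[0] * adj + alpha[1] * weights + alpha[2] * attr_sim  # linear fusion
    # Truncated SVD of the fused view matrix plays the role of factorization.
    u, s, _ = np.linalg.svd(fused, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])          # node embeddings, one row per node

# toy usage with a random symmetric graph
rng = np.random.default_rng(0)
n = 50
adj = (rng.random((n, n)) < 0.1).astype(float); adj = np.maximum(adj, adj.T)
weights = adj * rng.random((n, n))
attrs = rng.random((n, 8))
attr_sim = attrs @ attrs.T / 8.0
emb = multi_view_embedding(adj, weights, attr_sim)
print(emb.shape)   # (50, 16)
```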

2.
Neural networks have strong nonlinear learning ability, and neural-network-based multi-frame super-resolution reconstruction has received preliminary study. However, these methods generally apply only to controlled imaging where the inter-frame displacements are standardized, and they are hard to extend to other practical settings. To bring the learning power of neural networks to multi-frame super-resolution under uncontrolled imaging and obtain better super-resolution results, this paper proposes a deblurring algorithm based on a radial basis function (RBF) neural network and combines it with multi-frame non-uniform interpolation to form a new two-step super-resolution algorithm. Simulation results show that the algorithm achieves structural similarity values of 0.55 to 0.7. The algorithm not only extends the application range of RBF neural networks but also achieves better super-resolution performance.

3.
程龙  刘洋 《控制与决策》2018,33(5):923-937
Spiking neural networks are currently the most biologically plausible artificial neural networks and a core component of brain-inspired intelligence. This survey first introduces the commonly used spiking neuron models and the feedforward and recurrent spiking network architectures; it then describes temporal coding schemes for spiking networks and, on that basis, systematically reviews learning algorithms for spiking neural networks, including unsupervised and supervised learning. The supervised algorithms are reviewed and summarized in three categories: gradient-descent-based algorithms, algorithms combined with the STDP rule, and algorithms based on spike-train convolution kernels. The survey then lists applications of spiking neural networks in control, pattern recognition, and brain-inspired intelligence research, and introduces cases in national brain projects where spiking neural networks are combined with neuromorphic processors. Finally, it analyzes the current difficulties and challenges facing spiking neural networks.
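Since the survey covers spiking neuron models, a minimal leaky integrate-and-fire (LIF) simulation illustrates the kind of model it refers to; the time constants and threshold below are arbitrary illustrative values, not taken from any surveyed paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch; parameters are
# illustrative, not drawn from the surveyed models.
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Euler integration of dv/dt = (-(v - v_rest) + I) / tau; emits a spike
    and resets the membrane potential whenever v crosses the threshold."""
    v, spikes, trace = v_rest, [], []
    for i_t in input_current:
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_th:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(trace), np.array(spikes)

current = np.concatenate([np.zeros(50), 1.5 * np.ones(200), np.zeros(50)])
v_trace, spike_train = lif_simulate(current)
print("spike count:", spike_train.sum())
```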

4.
A principal component analysis (PCA) neural network is developed for online extraction of the multiple minor directions of an input signal. The neural network can extract the multiple minor directions in parallel by computing the principal directions of the transformed input signal, so that the stability-speed problem of directly computing the minor directions can be avoided to a certain extent. On the other hand, the learning algorithms for updating the network weights use constant learning rates. This overcomes the shortcoming of learning rates that approach zero. In addition, the proposed algorithms are globally convergent, so it is very simple to choose the initial values of the learning parameters. This paper presents the convergence analysis of the proposed algorithms by studying the corresponding deterministic discrete time (DDT) equations. A rigorous mathematical proof of the global convergence is given. The theoretical results are further confirmed via simulations.
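The transformation idea, recovering minor directions by running a principal-direction method on a modified signal, can be illustrated with a simple covariance-domain sketch; this uses a direct batch eigendecomposition of a shifted covariance rather than the paper's online, DDT-analyzed algorithm.

```python
# Sketch of the transformation idea: the minor directions of C are the
# principal directions of (sigma * I - C) for sigma larger than C's largest
# eigenvalue. A batch eigendecomposition stands in for the paper's online rule.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 5))
C = A.T @ A / 500.0                       # sample covariance of the input signal

sigma = np.trace(C) + 1.0                 # any value above the largest eigenvalue
C_t = sigma * np.eye(5) - C               # "covariance" of the transformed signal

vals_t, vecs_t = np.linalg.eigh(C_t)
principal_of_transformed = vecs_t[:, -2:]  # top-2 directions of transformed signal

vals_c, vecs_c = np.linalg.eigh(C)
minor_of_original = vecs_c[:, :2]          # bottom-2 (minor) directions of C

# The two subspaces coincide (up to sign and ordering).
overlap = np.abs(principal_of_transformed.T @ minor_of_original)
print(np.round(overlap, 3))
```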

5.
In this paper, we propose an extreme learning machine (ELM) with tunable activation functions (TAF-ELM), whose activation functions are determined dynamically from the input data by means of a differential evolution algorithm. The main objective is to overcome the ELM's dependence on an activation function with a fixed slope. We mainly consider benchmark problems in function approximation and pattern classification. Compared with the ELM and E-ELM learning algorithms under the same network size or a compact network configuration, the proposed algorithm achieves improved generalization performance with good accuracy. In addition, the proposed algorithm also performs very well among learning algorithms for TAF neural networks.
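A hedged sketch of the underlying idea: hidden weights stay random as in a standard ELM, output weights come from a least-squares solve, and a global optimizer (scipy's differential evolution here) tunes the sigmoid slope. The synthetic data, single shared slope, and optimizer choice are illustrative assumptions, not the paper's exact TAF-ELM.

```python
# Tunable-slope ELM sketch: the slope of the sigmoid activation is optimized
# globally while the output weights are solved analytically.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, size=(300, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(300)

n_hidden = 30
W = rng.standard_normal((1, n_hidden))          # random input weights (fixed)
b = rng.standard_normal(n_hidden)               # random biases (fixed)

def elm_mse(params):
    slope = params[0]
    z = np.clip(slope * (X @ W + b), -50, 50)             # avoid overflow
    H = 1.0 / (1.0 + np.exp(-z))                          # sigmoid hidden layer
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)          # analytic output weights
    return np.mean((H @ beta - y) ** 2)

result = differential_evolution(elm_mse, bounds=[(0.1, 10.0)], seed=1)
print("tuned slope:", result.x[0], "training MSE:", result.fun)
```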

6.
A fast BP learning algorithm based on error amplification
The gradient-descent-based BP learning algorithm tends to slow down when weights enter the saturation regions of the activation function. To eliminate the influence of the saturation regions on the later stages of training, a new fast BP learning algorithm based on error amplification is proposed. By adaptively amplifying the error term in the weight-update function, the algorithm keeps the weight updates from stagnating under the influence of the saturation regions, so that BP converges quickly to the desired accuracy. Simulation experiments on the 3-parity problem and the Soybean classification problem show that, compared with commonly used methods such as Delta-bar-Delta, the momentum method, and Prime Offset, the proposed method converges to the target accuracy faster without increasing the algorithm's complexity or requiring extra CPU time.
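The error-amplification idea can be sketched with a modified delta rule in which the sigmoid derivative term, the factor that vanishes in saturation, is kept away from zero. The specific floor used below is an illustrative guess, not the paper's amplification formula.

```python
# Sketch of error-amplified BP: in saturation the sigmoid derivative o*(1-o)
# is nearly zero and stalls standard BP, so the update uses an amplified error
# term with a lower bound on that factor.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def amplified_deriv(o, floor=0.1):
    return np.maximum(o * (1.0 - o), floor)        # keeps updates alive in saturation

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR (2-bit parity)
Xb = np.hstack([X, np.ones((4, 1))])               # bias as a constant input

W1 = rng.standard_normal((3, 4))                   # 2 inputs + bias -> 4 hidden
W2 = rng.standard_normal((5, 1))                   # 4 hidden + bias -> 1 output
lr = 0.5
for epoch in range(10000):
    h = sigmoid(Xb @ W1)
    hb = np.hstack([h, np.ones((4, 1))])
    o = sigmoid(hb @ W2)
    delta_o = (y - o) * amplified_deriv(o)          # amplified output-layer error
    delta_h = (delta_o @ W2[:4].T) * amplified_deriv(h)
    W2 += lr * hb.T @ delta_o
    W1 += lr * Xb.T @ delta_h

print("network outputs on XOR:", o.ravel().round(3))
```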

7.
A new approach to color image watermarking based on a genetic algorithm and a neural network is proposed. In watermark detection, a genetic algorithm is used to optimize the weight matrices and initial values of a BP neural network so as to capture the underlying implicit relationship, and the trained BP network is then used to extract the watermark. Experiments show that the algorithm preserves imperceptibility and that the watermark is more robust than with a plain BP neural network.

8.
There is no exact method to determine the optimal topology of a multi-layer neural network for a given problem. Usually the designer selects a topology for the network and then trains it. Since determining the optimal topology of neural networks belongs to the class of NP-hard problems, most existing algorithms for topology determination are approximate. These algorithms can be classified into four main groups: pruning algorithms, constructive algorithms, hybrid algorithms and evolutionary algorithms. They can produce near-optimal solutions, but most of them use hill climbing and may become stuck at local minima. In this article, we first introduce a learning automaton and study its behaviour, and then present an algorithm based on the proposed learning automaton, called the survival algorithm, for determining the number of hidden units of three-layer neural networks. The survival algorithm uses learning automata as a global search method to increase the probability of obtaining the optimal topology. It treats the optimization of network topology as object partitioning rather than as searching or parameter optimization, as in existing algorithms. In the survival algorithm, training begins with a large network, and a near-optimal topology is then obtained by adding and deleting hidden units. The algorithm has been tested on a number of problems, and simulations show that the generated networks are near optimal.
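As a rough illustration of how a learning automaton can steer a discrete architecture choice, the sketch below runs a generic linear reward-inaction automaton whose actions are candidate hidden-layer sizes and whose reward is a noisy synthetic "validation accuracy". This shows the automaton mechanism only; it is not the paper's survival algorithm or its object-partitioning formulation.

```python
# Generic linear reward-inaction (L_RI) learning automaton sketch.
import numpy as np

rng = np.random.default_rng(2)
actions = [2, 4, 8, 16, 32]                        # candidate hidden-layer sizes
probs = np.ones(len(actions)) / len(actions)       # action probability vector

def noisy_accuracy(n_hidden):
    # stand-in for "train a network with n_hidden units and measure accuracy"
    return 0.95 - 0.01 * abs(n_hidden - 8) + 0.02 * rng.standard_normal()

alpha, baseline = 0.05, 0.92                       # automaton step size, reward threshold
for step in range(2000):
    i = rng.choice(len(actions), p=probs)
    if noisy_accuracy(actions[i]) > baseline:      # L_RI: update only on reward
        probs = (1 - alpha) * probs
        probs[i] += alpha

print("final action probabilities:", dict(zip(actions, probs.round(3))))
```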

9.
A globally convergent learning algorithm for PCA neural networks
Principal component analysis (PCA), also known as the Karhunen-Loève (K-L) transform, is an important method for feature extraction. In recent years, many PCA neural networks based on Hebbian learning have been proposed to handle massive data. Traditional algorithms usually cannot guarantee convergence, or converge slowly. Based on the CRLS neural network, this paper proposes a new learning algorithm that guarantees convergence of the weight vector and requires no normalization of the weight vector during computation. It is also proved that the algorithm makes the weight vector converge to the eigenvector corresponding to the largest eigenvalue. Experiments show that, compared with the traditional CRLS neural network, the accuracy of the proposed algorithm is greatly improved.
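For reference, the sketch below runs the textbook Oja rule, a standard Hebbian PCA learning rule that drives the weight vector to the principal eigenvector without an explicit normalization step. It is used purely for illustration and is not the CRLS-based algorithm proposed in the paper.

```python
# Oja's rule: Hebbian update plus an implicit decay term; the weight vector
# converges to the eigenvector of the data covariance with the largest
# eigenvalue, with no explicit normalization.
import numpy as np

rng = np.random.default_rng(3)
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
L = np.linalg.cholesky(cov)
w = rng.standard_normal(2)

eta = 0.01
for step in range(20000):
    x = L @ rng.standard_normal(2)          # zero-mean sample with covariance `cov`
    y = w @ x
    w += eta * y * (x - y * w)              # Oja's rule

vals, vecs = np.linalg.eigh(cov)
principal = vecs[:, -1]
print("learned w:", w.round(3))
print("|cos angle| to principal eigenvector:",
      abs(w @ principal / np.linalg.norm(w)).round(4))
```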

10.
This paper proposes a framework for constructing and training radial basis function (RBF) neural networks. The proposed growing radial basis function (GRBF) network begins with a small number of prototypes, which determine the locations of the radial basis functions. During training, the GRBF network grows by splitting one of the prototypes at each growing cycle. Two splitting criteria are proposed to determine which prototype to split in each growing cycle. The proposed hybrid learning scheme provides a framework for incorporating existing algorithms into the training of GRBF networks, including unsupervised algorithms for clustering and learning vector quantization, as well as learning algorithms for training single-layer linear neural networks. A supervised learning scheme based on minimization of the localized class-conditional variance is also proposed and tested. GRBF neural networks are evaluated on a variety of data sets with very satisfactory results.

11.
The projection pursuit technique optimized by a genetic algorithm is first used to reduce the dimensionality of the neural network's training matrix. Bagging and different neural network learning algorithms are then used to generate ensemble members, and the genetic-algorithm-evolved projection pursuit technique is applied again to combine the individual networks into an ensemble, yielding a neural network ensemble model based on genetic-algorithm-optimized projection pursuit. A case study on the opening and closing prices of the Shanghai Composite Index shows that the method has good learning and generalization ability, with high prediction accuracy and good stability in stock market forecasting.

12.
A knowledge refinement method based on structural learning of neural networks
Knowledge refinement is an indispensable step in knowledge acquisition. The main limitation of the existing KBANN (knowledge-based artificial neural network) method for knowledge refinement is that the network topology cannot be changed during training. This paper proposes a knowledge refinement method based on structural learning of neural networks: a rule set is first converted into an initial neural network, the initial network is then trained with training samples and a structural learning algorithm, and refined rule knowledge is extracted from the trained network. The network topology is changed during training by a structural learning algorithm that dynamically adds hidden nodes and prunes the network. Extensive examples show that the method is effective.

13.
Gradient-descent type supervised learning is the most commonly used algorithm for designing the standard sigmoid perceptron (SP). However, it is computationally expensive (slow) and suffers from the local-minima problem. Moody and Darken (1989) proposed an input-clustering based hierarchical algorithm for fast learning in networks of locally tuned neurons in the context of radial basis function networks. We propose and analyze input clustering (IC) and input-output clustering (IOC) based algorithms for fast learning in networks of globally tuned neurons in the context of the SP. It is shown that "localizing" the input-layer weights of the SP by the IC and the IOC minimizes an upper bound on the SP output error. The proposed algorithms could also be used to initialize the SP weights for conventional gradient-descent learning. Simulation results show that the SPs designed by the IC and the IOC yield performance comparable to their radial basis function network counterparts.
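A minimal sketch of the input-clustering idea: cluster the training inputs and use the cluster centres to seed the first-layer weights of a sigmoid perceptron. The seeding and bias rule below is one plausible reading for illustration, not the paper's exact IC/IOC procedure.

```python
# Input clustering (IC) sketch: k-means centres of the inputs become the
# hidden-unit weight vectors, "localizing" each globally tuned neuron around
# one cluster; output weights come from a ridge-regularized least-squares fit.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

n_hidden = 8
km = KMeans(n_clusters=n_hidden, n_init=10, random_state=0).fit(X)

W1 = km.cluster_centers_.copy()                  # one hidden unit per cluster centre
b1 = -0.5 * np.sum(W1 ** 2, axis=1)              # bias so activation peaks near the centre

def hidden_layer(X):
    return 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))

H = hidden_layer(X)
lam = 1e-3
w_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ (2 * y - 1))
acc = np.mean(np.sign(H @ w_out) == (2 * y - 1))
print("training accuracy with IC-initialized weights:", round(acc, 3))
```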

14.
Supplying industrial firms with an accurate method of forecasting the production value of the mechanical industry, so that decision makers can plan precisely, is highly desirable. Numerous methods, including the autoregressive integrated moving average (ARIMA) model and artificial neural networks, can make accurate forecasts based on historical data. The seasonal ARIMA (SARIMA) model and artificial neural networks can also handle data involving trends and seasonality. Although neural networks can make predictions, deciding on the most appropriate input data, network structure and learning parameters is difficult. Therefore, this article presents a hybrid forecasting method that combines the SARIMA model and neural networks with genetic algorithms. Analytical results generated by the SARIMA model are used as the input data of a neural network. Subsequently, the number of neurons in the hidden layer and the learning parameters of the neural network architecture are globally optimized using genetic algorithms. This model is then adopted to forecast seasonal time series data of the production value of the mechanical industry in Taiwan. The results presented here provide a valuable reference for decision makers in industry.
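A hedged sketch of the pipeline described here: SARIMA fitted values feed a small neural network, and a search over hidden-layer size (a grid search standing in for the paper's genetic algorithm) picks the architecture. The library choices (statsmodels, scikit-learn), model orders, and synthetic series are illustrative assumptions.

```python
# SARIMA -> neural network hybrid sketch on a synthetic seasonal series.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
t = np.arange(120)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + 0.5 * rng.standard_normal(120)

# Stage 1: the seasonal ARIMA model captures trend and seasonality.
sarima = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
sarima_fit = np.asarray(sarima.fittedvalues)

# Stage 2: the SARIMA output becomes the neural network input; a small grid
# search over hidden-layer size stands in for the paper's genetic algorithm.
X, y = sarima_fit[:-1].reshape(-1, 1), series[1:]
best = None
for n_hidden in (2, 4, 8, 16):
    mlp = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                       random_state=0).fit(X, y)
    mse = np.mean((mlp.predict(X) - y) ** 2)
    if best is None or mse < best[1]:
        best = (n_hidden, mse)
print("selected hidden units:", best[0], "in-sample MSE:", round(best[1], 4))
```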

15.
A separable-parameter learning algorithm for multilayer feedforward networks
Most current neural network learning algorithms learn all of the network's parameters simultaneously, which is often time-consuming when the network is large. Many networks, such as perceptrons, radial basis function networks, probabilistic and generalized regression networks, and fuzzy neural networks, are multilayer feedforward networks whose input-output mappings can be written as linear combinations of a set of variable basis functions. Their parameters thus fall into two classes: the parameters inside the variable bases are nonlinear, while the combination coefficients are linear. This paper proposes an algorithm that learns these two classes of parameters separately. Simulation results show that the algorithm speeds up learning and improves the network's approximation performance.
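A variable-projection style sketch of the separation idea: for a model written as a linear combination of variable bases, the linear coefficients are solved exactly by least squares inside each evaluation, while only the nonlinear basis parameters are optimized iteratively. The Gaussian-basis toy model and optimizer below are illustrative assumptions, not the paper's specific algorithm.

```python
# Separable learning sketch: solve the linear combination coefficients exactly,
# optimize only the nonlinear basis parameters (centers, widths).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(-3, 3, 200)
y = (np.exp(-(x - 1.0) ** 2) + 0.5 * np.exp(-(x + 1.5) ** 2 / 0.5)
     + 0.02 * rng.standard_normal(200))

def basis(x, centers, widths):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / widths[None, :])

def residual_norm(theta):
    centers, widths = theta[:3], np.abs(theta[3:]) + 1e-3
    Phi = basis(x, centers, widths)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # linear part: exact solve
    return np.sum((Phi @ c - y) ** 2)               # objective for nonlinear part

theta0 = np.array([-2.0, 0.0, 2.0, 1.0, 1.0, 1.0])  # initial centers and widths
result = minimize(residual_norm, theta0, method="Nelder-Mead")
print("optimized centers/widths:", result.x.round(3), "residual:", round(result.fun, 4))
```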

16.
Based on deep learning theory, image denoising is treated as the fitting process of a neural network, a compact and efficient composite convolutional neural network is constructed, and an image denoising algorithm based on this composite convolutional neural network is proposed. The first stage of the algorithm consists of two 2-layer convolutional networks, which are used to pre-train some of the initial convolution kernels of the 3-layer convolutional network in the second stage, shortening the training time of the second-stage network and enhancing the robustness of the algorithm. The second-stage convolutional network is then applied to effectively denoise new noisy images. Experiments show that the algorithm is comparable to current state-of-the-art image denoising algorithms in terms of peak signal-to-noise ratio, structural similarity, and root-mean-square error, performs even better as the noise becomes stronger, and requires a short training time.

17.
To further improve the convergence speed and recognition accuracy of convolutional neural network algorithms, an image recognition algorithm based on a doubly optimized convolutional neural network is proposed. In constructing the convolutional neural network, a dual optimization model is built for feature extraction and regression classification, achieving integrated optimization of the convolution and fully connected stages; the recognition rate and convergence speed are then compared against locally optimized algorithms. Experiments on a handwritten digit dataset and a face dataset show that the dual optimization model can considerably improve the convergence speed and recognition accuracy of convolutional neural networks, and that this optimization strategy can be further extended to other deep learning algorithms related to convolutional neural networks.

18.
This paper discusses the learning problem of neural networks with self-feedback connections and shows that, when such a network is used as an associative memory, the learning problem can be transformed into a programming (optimization) problem. Thus, the mature optimization techniques of mathematical programming can be used to solve the learning problem of neural networks with self-feedback connections. Two learning algorithms based on programming techniques are presented; their complexity is only polynomial. The optimization of the radius of attraction of the training samples is then discussed using quadratic programming techniques, and the corresponding algorithm is given. Finally, a comparison is made between the given learning algorithms and some other known algorithms.
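A hedged sketch of casting associative-memory weight design as a programming problem: for each neuron, find weights so that every stored pattern is reproduced with a positive margin, solved here with scipy's linear programming routine. This margin formulation is a common textbook reduction used for illustration, not necessarily the exact program formulated in the paper.

```python
# Designing the weights of a self-feedback (Hopfield-type) associative memory
# as a linear program: for each neuron i, maximize the margin kappa subject to
# x_i * (w_i . x) >= kappa for every stored pattern x, with bounded weights.
import numpy as np
from scipy.optimize import linprog

patterns = np.array([[1, -1, 1, -1, 1],
                     [1, 1, -1, -1, 1],
                     [-1, 1, 1, 1, -1]], dtype=float)   # stored bipolar patterns
m, n = patterns.shape

W = np.zeros((n, n))
for i in range(n):
    # variables: the n weights of neuron i followed by the margin kappa
    c = np.zeros(n + 1)
    c[-1] = -1.0                                         # maximize kappa
    A_ub = np.zeros((m, n + 1))
    for mu, x in enumerate(patterns):
        A_ub[mu, :n] = -x[i] * x                         # -x_i * (w_i . x) + kappa <= 0
        A_ub[mu, -1] = 1.0
    bounds = [(-1.0, 1.0)] * n + [(0.0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), bounds=bounds)
    W[i] = res.x[:n]

recalled = np.sign(patterns @ W.T)       # one synchronous update of each pattern
print("all stored patterns are fixed points:", bool(np.all(recalled == patterns)))
```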

19.
In this paper, a novel approach to adjusting the weightings of fuzzy neural networks using a real-coded chaotic quantum-inspired genetic algorithm (RCQGA) is proposed. Fuzzy neural networks are traditionally trained using gradient-based methods, which may fall into local minima during the learning process. To overcome the problems encountered by conventional learning methods, the RCQGA is adopted because of its capability of directed random search for global optimization. It is well known, however, that the search speed of conventional quantum genetic algorithms (QGA) is not satisfactory. The RCQGA proposed here is therefore based on the chaotic and coherent characteristics of Q-bits: real chromosomes are inversely mapped to Q-bits in the solution space, and Q-bit probability-guided real crossover and chaotic mutation are applied to the evolution and search of real chromosomes. Chromosomes consisting of the weightings of the fuzzy neural network are coded as an adjustable vector with real-number components that are searched by the RCQGA. Simulation results show that faster convergence of the evolutionary search for an optimal fuzzy neural network can be achieved. Examples of nonlinear functions approximated using the fuzzy neural network via the RCQGA demonstrate the effectiveness of the proposed method.

20.
In the field of time series prediction, neural networks are widely used and have proven useful and practical. To improve prediction ability and reduce the time consumption of neural networks, researchers and practitioners typically develop them along the lines of learning algorithms, network architectures, and so on. A local-recurrent-global-feedforward (LRGF) network trained with a learning algorithm called the optimization layer-by-layer (OLL) method is proposed. In addition, two representative LRGF networks are introduced: the finite impulse response (FIR) network and the FGS network (proposed by Frasconi, Gori and Soda), and a comparative simulation predicting several financial time series with both methods is performed. According to the simulation results, the FGS-OLL method gives better prediction performance than the FIR-OLL method.
