Found 20 similar documents (search time: 485 ms)
1.
2.
3.
The influence of improved BP parameters on network recognition ability is discussed from three aspects: the learning step size of the BP network, the parameters of the adaptive learning-rate adjustment algorithm, and the parameters of the algorithm that combines the momentum method with an adaptive learning rate. In determining the number of hidden-layer nodes of the BP network, an adaptive learning algorithm for BP neural networks is proposed, so that the selection of hidden-layer nodes is carried out dynamically. Simulation experiments show that the improvement is feasible.
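The combination of the momentum method with an adaptive learning rate described above can be sketched as follows (a minimal illustration on a toy one-dimensional quadratic, not the paper's network; the growth/shrink factors and the quadratic loss are illustrative assumptions):

```python
def train_momentum_adaptive(loss, grad, w0, lr=0.02, beta=0.5,
                            lr_up=1.02, lr_down=0.7, steps=300):
    """Gradient descent combining the momentum method with an adaptive
    learning rate: the rate grows while the loss keeps falling and
    shrinks when a step overshoots."""
    w, v = w0, 0.0
    prev = loss(w)
    for _ in range(steps):
        v = beta * v - lr * grad(w)   # momentum-smoothed step
        w += v
        cur = loss(w)
        # adaptive rule: reward progress, penalize overshoot
        lr = lr * lr_up if cur < prev else lr * lr_down
        prev = cur
    return w

# toy quadratic loss with minimum at w = 3, starting far away at w = -5
w_star = train_momentum_adaptive(lambda w: (w - 3.0) ** 2,
                                 lambda w: 2.0 * (w - 3.0), -5.0)
```

The same accept-and-adjust rule carries over unchanged when `w` is a weight matrix and `loss` is the network's training error.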
4.
To predict the degree of alcoholysis in polyvinyl alcohol production, a neural network model is established. The factors influencing the degree of alcoholysis are studied; the configuration of the input-, output-, and hidden-layer neurons and the network training parameters are discussed; and three training algorithms (gradient-descent BP, momentum with adaptive learning-rate adjustment, and Levenberg-Marquardt BP) are compared on this problem, as well as against an RBF network. Considering training time, training accuracy, and generalization ability together, the momentum with adaptive learning-rate adjustment algorithm is the most suitable for alcoholysis-degree prediction, and a neural network model is built on it. The model is applied to an alcoholysis-degree prediction system, and actual operation of the system shows that predicting the degree of alcoholysis with a neural network model is feasible and effective.
5.
An adaptive particle swarm optimization algorithm based on distance measurement   Total citations: 3 (self: 1, others: 2)
The inertia weight plays an important role in the performance of the Particle Swarm Optimization (PSO) algorithm. The basic PSO algorithm ignores the differences among particles and applies one fixed inertia weight to all particles within an iteration. To reflect each particle's difference relative to the known optimum, an adaptive PSO algorithm based on distance measurement, DMAPSO (Distance Measurement-based Adaptive PSO), is proposed. The algorithm uses the Euclidean distance to measure each particle's difference from the known global best particle, and then adaptively adjusts each particle's inertia weight according to that difference. Experiments on benchmark test functions show that, for continuous function optimization problems, the proposed DMAPSO outperforms the classical PSO; the number of iterations DMAPSO needs to converge to the optimum is on average about 60% lower than that of PSO.
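The distance-based inertia rule can be sketched as below (a minimal illustration on the 2-D sphere function; the weight bounds, acceleration coefficients, and swarm size are illustrative assumptions, not the paper's settings):

```python
import math
import random

def dmapso(f, dim=2, n_particles=20, iters=200,
           w_min=0.4, w_max=0.9, c1=1.5, c2=1.5, seed=0):
    """PSO sketch where each particle's inertia weight is set from its
    Euclidean distance to the known global best: far particles get a
    large weight (exploration), near ones a small weight (refinement)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal bests
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]
    for _ in range(iters):
        dists = [math.dist(x, G) for x in X]   # distance to global best
        d_max = max(dists) or 1.0
        for i, x in enumerate(X):
            # distance-based adaptive inertia weight
            w = w_min + (w_max - w_min) * dists[i] / d_max
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - x[d])
                           + c2 * rng.random() * (G[d] - x[d]))
                x[d] += V[i][d]
            fx = f(x)
            if fx < pbest[i]:
                pbest[i], P[i] = fx, x[:]
                if fx < gbest:
                    gbest, G = fx, x[:]
    return G, gbest

sphere = lambda x: sum(v * v for v in x)
best, best_val = dmapso(sphere)
```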
6.
7.
A five-layer BP neural network is constructed in C++, which fits a specified sample function to within a given relative-error tolerance. The algorithm is improved with respect to the learning rate and the weight-correction constant, which effectively accelerates convergence. Finally, the normalization of samples is discussed for the case where the y values of the sample function fall outside the [0,1] interval.
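The normalization step for targets outside [0,1] can be sketched as below (a minimal sketch; the margin [0.1, 0.9] inside the unit interval is a common choice to avoid saturating a sigmoid output, not a detail from the abstract):

```python
def normalize(ys, lo=0.1, hi=0.9):
    """Map sample targets into [lo, hi] so they suit a sigmoid output
    layer; keeping a margin inside [0, 1] avoids activation saturation."""
    y_min, y_max = min(ys), max(ys)
    scale = (hi - lo) / (y_max - y_min)
    return [lo + (y - y_min) * scale for y in ys], (y_min, y_max)

def denormalize(ys_n, bounds, lo=0.1, hi=0.9):
    """Invert the mapping to recover predictions on the original scale."""
    y_min, y_max = bounds
    return [y_min + (y - lo) * (y_max - y_min) / (hi - lo) for y in ys_n]

ys = [-3.0, 0.0, 4.5, 12.0]        # targets outside [0, 1]
ys_n, bounds = normalize(ys)
ys_back = denormalize(ys_n, bounds)
```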
8.
A discussion of robust stability conditions for nonlinear time-delay systems   Total citations: 1 (self: 0, others: 1)
1. Introduction. In issue 6 (1999) of Acta Automatica Sinica, the short paper "Stability analysis and robust stability analysis of nonlinear time-delay systems" [1] used the Lyapunov function method to discuss the stability of deterministic and of uncertain nonlinear time-delay systems. For deterministic systems, an LMI-based sufficient condition for asymptotic stability was obtained, and the robust stability problem for uncertain systems was studied. However, because that paper misinterprets the Razumikhin theorem, its conclusions on robust stability (Theorem 1, Corollary 1) are incorrect. Regarding the Razumikhin theorem, [2] gives a so-called improved Razumikhin theorem, but this has been questioned; the issue lies in a failure to correctly understand the Razumikhin theorem [3,4]. Although [1] does not directly mention R…
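For reference, the classical Razumikhin-type condition that such discussions turn on can be stated as follows (a standard textbook formulation, not quoted from [1]; here $u, v, w$ are class-$K$ functions bounding $V$, $\tau$ is the delay, and $p$ is continuous and nondecreasing with $p(s) > s$ for $s > 0$):

```latex
% Razumikhin condition for asymptotic stability of x'(t) = f(t, x_t):
% V must decrease whenever the current state dominates the recent history.
\dot V\bigl(t, x(t)\bigr) \le -w\bigl(\lVert x(t)\rVert\bigr)
\quad \text{whenever} \quad
V\bigl(t+\theta,\, x(t+\theta)\bigr) \le p\bigl(V(t, x(t))\bigr)
\;\; \text{for all } \theta \in [-\tau, 0].
```

The strict-majorization requirement $p(s) > s$ is exactly the kind of hypothesis that is easy to weaken incorrectly, which is the nature of the error the note attributes to [1].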
9.
An adaptive particle swarm optimization algorithm is proposed. By adaptively adjusting the flight time and the inertia weight, it overcomes the decline in the search ability of the particle swarm algorithm in the late stage of evolution; it also makes full use of information from the objective function, improving the stability of the algorithm and accelerating its convergence. Experiments on test functions show that the algorithm has good stability and convergence speed.
10.
11.
Spectral clustering based on matrix perturbation theory   Total citations: 5 (self: 1, others: 5)
This paper exposes some intrinsic characteristics of the spectral clustering method by using tools from matrix perturbation theory. We construct a weight matrix of a graph and study its eigenvalues and eigenvectors. It shows that the number of clusters is equal to the number of eigenvalues that are larger than 1, and the number of points in each of the clusters can be approximated by the associated eigenvalue. It also shows that the eigenvectors of the weight matrix can be used directly to perform clustering; that is, the directional angle between two row vectors of the matrix derived from the eigenvectors is a suitable distance measure for clustering. As a result, an unsupervised spectral clustering algorithm based on the weight matrix (USCAWM) is developed. The experimental results on a number of artificial and real-world data sets show the correctness of the theoretical analysis.
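The eigenvalue criterion can be checked on an idealized weight matrix (a sketch: two perfect clusters give one all-ones block per cluster, so the eigenvalues larger than 1 count the clusters and equal the cluster sizes; the Jacobi routine below is a generic symmetric eigensolver, not the USCAWM algorithm itself):

```python
import math

def jacobi_eigenvalues(A, tol=1e-12, max_rot=500):
    """Classical Jacobi method: repeatedly rotate away the largest
    off-diagonal entry of a symmetric matrix until it is diagonal."""
    n = len(A)
    a = [list(map(float, row)) for row in A]
    for _ in range(max_rot):
        p, q, off = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(a[i][j]) > off:
                    off, p, q = abs(a[i][j]), i, j
        if off < tol:
            break
        theta = 0.5 * math.atan2(2.0 * a[p][q], a[p][p] - a[q][q])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                      # A <- A J
            akp, akq = a[k][p], a[k][q]
            a[k][p], a[k][q] = c * akp + s * akq, -s * akp + c * akq
        for k in range(n):                      # A <- J^T A
            apk, aqk = a[p][k], a[q][k]
            a[p][k], a[q][k] = c * apk + s * aqk, -s * apk + c * aqk
    return sorted(a[i][i] for i in range(n))

# Idealized weight matrix: two perfect clusters of sizes 3 and 2.
W = [[1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0],
     [0, 0, 0, 1, 1],
     [0, 0, 0, 1, 1]]
eigs = jacobi_eigenvalues(W)
n_clusters = sum(e > 1.0 for e in eigs)
```

For this matrix the spectrum is {3, 2, 0, 0, 0}: two eigenvalues exceed 1 (two clusters), and they equal the cluster sizes 3 and 2.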
12.
We describe symmetries of feedforward networks in terms of their corresponding groups, which naturally act on and partition weight space. This leads to an algorithm that generates representative weight vectors in a specific fundamental domain. The closure of this domain turns out to be a manifold with singular points. We derive a canonical metric for the manifold that can be implemented efficiently even for large networks. One application would be the clustering of resulting weight vectors of an experiment in order to identify inadequate models or learning methods.
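The two generating symmetries of such groups (for odd activations like tanh: hidden-unit sign flips and hidden-unit permutations) can be verified directly; a minimal sketch with a hand-built two-hidden-unit network, all weights illustrative:

```python
import math

def mlp(x, W1, b1, w2):
    """Single-hidden-layer tanh network: y = w2 . tanh(W1 x + b1)."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(v * hi for v, hi in zip(w2, h))

x = [0.3, -1.2]
W1 = [[0.5, -0.7], [1.1, 0.2]]
b1 = [0.1, -0.4]
w2 = [0.9, -0.3]

# Sign-flip symmetry: tanh is odd, so negating a hidden unit's incoming
# weights and bias together with its outgoing weight leaves y unchanged.
W1_f = [[-w for w in W1[0]], W1[1]]
b1_f = [-b1[0], b1[1]]
w2_f = [-w2[0], w2[1]]

# Permutation symmetry: swapping the two hidden units leaves y unchanged.
W1_p, b1_p, w2_p = [W1[1], W1[0]], [b1[1], b1[0]], [w2[1], w2[0]]

y = mlp(x, W1, b1, w2)
```

Weight vectors related by these operations compute the same function, which is why the paper factors them out via a fundamental domain before clustering.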
13.
To address the weight design problem of the discrete Hopfield neural network (DHNN), an improved learning algorithm is proposed and designed on the basis of a dynamical analysis of the DHNN. A matrix decomposition (MD) method is used to obtain an orthogonal matrix, which is then used to directly compute the DHNN weight matrix. The weight matrix obtained by this learning algorithm stores the information of the training samples well and makes test samples converge to stable points. The algorithm requires no block-wise computation, which reduces the number of computational steps, the computational cost, and the number of network iterations, thereby speeding up the network. Finally, the learning algorithm is applied to water quality assessment, verifying its effectiveness and feasibility.
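The idea of computing DHNN weights directly from an orthogonal basis can be sketched as follows (an illustrative sketch, not the paper's MD algorithm: here the orthonormal basis comes from Gram-Schmidt on the bipolar patterns, and W is the projector onto their span, so each stored pattern is exactly a fixed point of sign(Wx)):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dhnn_weights(patterns):
    """Orthonormalize the bipolar training patterns (Gram-Schmidt) and
    take W = sum_k u_k u_k^T, the projector onto their span. Every
    stored pattern p then satisfies W p = p, hence sign(W p) = p."""
    basis = []
    for p in patterns:
        u = [float(v) for v in p]
        for b in basis:
            c = dot(u, b)
            u = [ui - c * bi for ui, bi in zip(u, b)]
        norm = dot(u, u) ** 0.5
        basis.append([ui / norm for ui in u])
    n = len(patterns[0])
    return [[sum(b[i] * b[j] for b in basis) for j in range(n)]
            for i in range(n)]

def recall(W, x):
    """One synchronous DHNN update: x <- sign(W x)."""
    return [1 if dot(row, x) >= 0 else -1 for row in W]

p1 = [1, 1, 1, 1]
p2 = [1, -1, 1, -1]
W = dhnn_weights([p1, p2])
```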
14.
A fully connected continuous-time recurrent neural network, trained by means of Real-Time Recurrent Learning (RTRL), is investigated. A theoretical analysis of the output vector of the network during the training stage is performed. We point out the necessity of applying an additional constraint to the synaptic weight matrix with the intention of reducing the learning time while also reducing forgetting. This constraint consists of updating the weights of the output cells using the output error gradient within the RTRL and a matrix of learning rates calculated from an average vector computed from the vectors previously memorized. In this first approach to the problem, only fixed-point attractors have been investigated. Some simple computational simulations validate the method.
15.
16.
In recent years, the theory of deep learning has been booming again. It is widely used in machine learning, visual recognition and auditory recognition. The Boltzmann machine is a typical deep learning neural network. There are many training algorithms for its network weights, the classical one being the contrastive divergence (CD) algorithm. However, the current algorithm cannot accurately obtain the expected value of the network's thermal equilibrium state; only approximate gradient values can be calculated. At the same time, the algorithm is computationally expensive and has a long running time. In this paper, a method of RBM weight calculation is proposed. Firstly, the RBM is shown to be equivalent to a Hopfield network. Then the weight matrix is designed by the DHNN weight design method. Finally, the RBM weight-solving problem is transformed into the eigenvalue and eigenvector problem of the DHNN weight matrix. An example is given to illustrate the calculation process, and the correctness of the algorithm is verified on data.
17.
Existing network representation learning algorithms are mainly of two kinds: those based on shallow neural networks and those based on neural matrix factorization, and shallow-neural-network-based representation learning has been shown to be equivalent to factorizing the feature matrix of the network structure. Moreover, most existing methods learn features only from the network structure, i.e., single-view representation learning, even though a network inherently contains multiple views. This paper therefore proposes a network representation learning algorithm based on multi-view ensembles (MVENR). The algorithm dispenses with neural network training and incorporates the ideas of matrix information fusion and matrix factorization into network representation learning. It effectively fuses the network's structure view, edge-weight view, and node-attribute view, remedying the neglect of edge weights in existing network representation learning and alleviating the feature sparsity that arises when training on a single view. Experimental results show that the proposed MVENR outperforms several commonly used joint learning algorithms and structure-based network representation learning algorithms, and is a simple and efficient network representation learning algorithm.
18.
A recognition method for systematic cyclic codes based on code-weight distribution   Total citations: 1 (self: 0, others: 1)
The definition and matrix description of systematic cyclic codes are introduced, and their code-weight distribution properties are analyzed. A code-weight-distribution distance is defined from the probabilistic distance between vectors, and the theoretical code-weight distribution of a random sequence is derived. A method is proposed for estimating the empirical code-weight distribution of an actual sequence, and for estimating the code length and starting point from the distance between the empirical distribution of the actual sequence and the theoretical distribution of a random sequence. On this basis, the generator matrix and parity-check matrix are estimated by Gaussian elimination, and a recognition method for the case of channel errors is also presented. Finally, simulation experiments on codes of different lengths show that the method can effectively recognize short and medium-length codes at a bit error rate of 10^-3.
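The block-length estimation step can be sketched as below (a minimal illustration using the systematic Hamming (7,4) code and an L1 distance between weight distributions; the code choice, distance metric, and candidate range are illustrative assumptions, and the starting-point search from the abstract is omitted by assuming an aligned stream):

```python
import math
import random

def hamming74_stream(n_words, seed=1):
    """Bit stream of random systematic Hamming (7,4) codewords."""
    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 0, 1, 1],
         [0, 0, 1, 0, 1, 1, 1],
         [0, 0, 0, 1, 1, 0, 1]]
    rng = random.Random(seed)
    bits = []
    for _ in range(n_words):
        m = [rng.randint(0, 1) for _ in range(4)]
        bits.extend(sum(m[i] * G[i][j] for i in range(4)) % 2
                    for j in range(7))
    return bits

def weight_dist_distance(bits, n):
    """L1 distance between the empirical block-weight distribution at
    candidate length n and the binomial law of a random sequence."""
    n_blocks = len(bits) // n
    counts = [0] * (n + 1)
    for b in range(n_blocks):
        counts[sum(bits[b * n:(b + 1) * n])] += 1
    emp = [c / n_blocks for c in counts]
    theo = [math.comb(n, w) / 2 ** n for w in range(n + 1)]
    return sum(abs(e - t) for e, t in zip(emp, theo))

bits = hamming74_stream(2000)
# The true block length stands out: codeword weights are constrained
# (here only 0, 3, 4, 7), so the empirical distribution at n = 7 is far
# from the binomial law, while wrong lengths blend codewords together.
best_n = max(range(4, 11), key=lambda n: weight_dist_distance(bits, n))
```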
19.
An adaptive learning algorithm for principal component analysis   Total citations: 2 (self: 0, others: 2)
Liang-Hwa Chen, Shyang Chang. IEEE Transactions on Neural Networks, 1995, 6(5): 1255-1263
Principal component analysis (PCA) is one of the most general-purpose feature extraction methods. A variety of learning algorithms for PCA have been proposed. Many conventional algorithms, however, will either diverge or converge very slowly if the learning rate parameters are not properly chosen. In this paper, an adaptive learning algorithm (ALA) for PCA is proposed. By adaptively selecting the learning rate parameters, we show that the m weight vectors in the ALA converge to the first m principal component vectors at almost the same rates. Compared with Sanger's generalized Hebbian algorithm (GHA), the ALA can quickly find the desired principal component vectors where the GHA fails to do so. Finally, simulation results are included to illustrate the effectiveness of the ALA.
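The flavor of such Hebbian PCA learners can be sketched with Oja's single-unit rule (a sketch, not the ALA itself: the decaying rate schedule and the synthetic 2-D data are illustrative assumptions; the ALA's contribution is precisely a smarter adaptive choice of this rate):

```python
import math
import random

def oja_first_component(samples, lr0=0.05):
    """Oja's rule: w <- w + lr * y * (x - y * w) with y = w . x.
    The weight vector converges to the first principal direction."""
    w = [1.0, 0.0]
    for t, x in enumerate(samples, start=1):
        lr = lr0 / (1.0 + 0.001 * t)          # illustrative decaying rate
        y = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# Synthetic data: large variance along direction (0.8, 0.6), small
# orthogonal variance, so (0.8, 0.6) is the first principal direction.
rng = random.Random(0)
samples = []
for _ in range(4000):
    a = rng.gauss(0.0, 1.0)
    b = rng.gauss(0.0, 0.1)
    samples.append((0.8 * a - 0.6 * b, 0.6 * a + 0.8 * b))

w = oja_first_component(samples)
cosine = abs((w[0] * 0.8 + w[1] * 0.6) / math.hypot(*w))
```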
20.