20 similar documents found (search time: 125 ms)
1.
A Training Sample Selection Method for Neural Networks Based on the Nearest Neighbor Rule    Total citations: 5 (self-citations: 0, by others: 5)
Training sets usually contain many similar samples, which increases network training time and degrades learning. To address this problem, this paper combines the simplicity and speed of the nearest neighbor (NN) rule with the high accuracy of neural networks, and proposes a nearest-neighbor-rule-based method for selecting neural network training samples. Considering the strong influence that training samples have on network performance, the method uses an improved nearest neighbor rule to select the most representative samples as the network's training set. Experimental results show that the proposed method effectively removes redundant information from the training set, achieves a higher recognition rate with fewer samples, shortens training time, and strengthens the generalization ability of the network.
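The abstract does not spell out the improved nearest neighbor rule itself; as a rough illustration of the general idea, a Hart-style condensed nearest neighbor pass retains only the samples that a 1-NN classifier over the retained set would misclassify (the function names and stopping rule below are illustrative, not the paper's):

```python
import math

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def condense(samples, labels):
    """Hart-style condensed nearest neighbor: start from one sample and
    repeatedly add any sample that the current retained set misclassifies,
    until a full pass adds nothing. Returns indices of retained samples."""
    keep = [0]  # seed the retained set with the first sample
    changed = True
    while changed:
        changed = False
        for i, (x, y) in enumerate(zip(samples, labels)):
            if i in keep:
                continue
            nearest = min(keep, key=lambda j: euclidean(samples[j], x))
            if labels[nearest] != y:  # misclassified -> sample is informative
                keep.append(i)
                changed = True
    return keep
```

On two well-separated clusters, one representative per cluster survives, which is the redundancy-removal effect the abstract describes.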
2.
3.
The vertex cover problem on graphs is a hard NP-complete problem with many useful applications. Building on prior work that applies the Hopfield neural network model to the vertex cover problem, this paper incorporates ideas from human decision-making and establishes what is called a decision neural network model for the graph vertex cover problem. The approach not only simplifies earlier work in this area, but also speeds up network operation by adding a decision constraint term.
4.
5.
An Algorithm for Optimizing the Generalization Performance of Neural Networks    Total citations: 3 (self-citations: 0, by others: 3)
With the goal of improving the generalization performance of neural networks, this paper introduces the concept of a generalization loss rate and analyzes the trend of the current network error relative to the previous epoch; on this basis it derives a neural network training objective function based on the generalization loss rate. Combining the new objective function with a training method based on quantum-behaved particle swarm optimization yields a new algorithm for optimizing generalization performance. Experimental results show that, compared with the same algorithm without the generalization loss rate, both the convergence and the generalization performance of the network improve markedly.
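The abstract does not give the exact formula for the generalization loss rate; one common definition of generalization loss (Prechelt's early-stopping criterion, used here purely as a sketch of the idea) measures how far the current validation error has risen above its best value so far:

```python
def generalization_loss(val_errors):
    """Generalization loss after the latest epoch: the percentage by which
    the current validation error exceeds the minimum observed so far.
    (A common definition from the early-stopping literature; the paper's
    exact formula may differ.)"""
    e_min = min(val_errors)
    return 100.0 * (val_errors[-1] / e_min - 1.0)
```

A loss of 0 means validation error is still at its best; a growing value signals that further training is hurting generalization, which is the trend the derived objective function penalizes.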
6.
This paper analyzes the structure of an improved Elman network and discusses neural network learning algorithms. To address the shortcomings of the BP algorithm, a learning algorithm that uses a genetic algorithm to adjust the network weights is proposed. The improved Elman network trained with the genetic algorithm is applied to modeling a resistance-furnace heating system; in addition, a neural network PID adaptive controller with a prediction model is proposed for the characteristics of that system. Finally, the controller is applied to resistance-furnace temperature control, achieving good control results.
7.
A convolutional neural network, the temporal convolutional network, is proposed and applied to language modeling. Its basic structure consists of an input layer, a dilated convolution layer, a causal convolution layer, a ReLU layer, a Dropout layer, and an output layer, thereby applying dilated convolution to the language model. Experimental results show that the language model's perplexity drops to 83.21 and its error to 3.87: compared with an RNN, perplexity falls by 14% and error by 0.69; compared with an LSTM, perplexity falls by 13% and error by 0.4. Taking both perplexity and error into account, the temporal convolutional network outperforms the other baseline models.
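A minimal sketch of the dilated causal convolution that the abstract applies to language modeling (plain Python, no framework; layer stacking, ReLU, and Dropout are omitted):

```python
def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution: output[t] depends only on
    x[t], x[t - d], x[t - 2d], ... -- future positions never leak in.
    Positions before the start of the sequence are treated as zero."""
    out = []
    for t in range(len(x)):
        s = 0.0
        for i, wi in enumerate(w):
            idx = t - i * dilation
            if idx >= 0:  # left-pad with zeros: skip out-of-range history
                s += wi * x[idx]
        out.append(s)
    return out
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is what lets a convolutional language model cover long contexts.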
8.
To raise the facial expression recognition rate while reducing the power consumption of expression recognition, an expression recognition method based on an improved deep residual network is proposed. Residual learning alleviates the degradation problem of deep convolutional neural networks and allows networks to grow much deeper, but it further increases power consumption. To address this, a biologically realistic activation function is introduced in place of the existing rectified linear unit (ReLU) as the activation function of the convolutional layers, improving the deep residual network. The method not only raises the accuracy of the residual network, but also produces trained weights that can be used directly as the weights of a deep spiking neural network with the same structure as the residual network. When that deep spiking neural network is deployed on neuromorphic hardware, it performs expression recognition with a high recognition rate and low energy consumption.
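The abstract does not name the biologically realistic activation function; one candidate of that kind is a smoothed leaky integrate-and-fire (soft-LIF) rate function, sketched below with illustrative parameter values. Treat both the choice of function and the constants as assumptions, not the paper's method:

```python
import math

def soft_lif_rate(x, tau=0.02, t_ref=0.002, gamma=0.03):
    """Smoothed LIF firing rate: differentiable near threshold, so it can
    stand in for ReLU during gradient training, and the trained weights
    transfer to a spiking network with matching neuron parameters.
    tau, t_ref, gamma are illustrative membrane/refractory/smoothing values."""
    # softplus-smoothed super-threshold input current
    j = gamma * math.log1p(math.exp(x / gamma))
    if j <= 0.0:  # exp underflow for very negative inputs: neuron is silent
        return 0.0
    return 1.0 / (t_ref + tau * math.log1p(1.0 / j))
```

Like ReLU, the rate is near zero for negative inputs and grows monotonically for positive ones, which is what makes the weight transfer to a spiking network plausible.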
9.
10.
11.
12.
13.
14.
15.
For the sensor fault diagnosis problem in flight control systems, a wavelet neural network based on the Mexican hat wavelet is constructed, the network parameters of the constructed wavelet network are optimized with an improved adaptive genetic algorithm, and the trained wavelet network is applied to diagnosing sensor faults in the flight control system. A digital simulation model of the flight-control-system sensors is built in Matlab/Simulink, and computer simulation shows that the constructed wavelet network diagnoses all three classes of flight-control sensor faults well. The simulation results show that training the wavelet network with the genetic algorithm converges quickly and does not get trapped in local optima, and that the trained wavelet neural network outperforms the traditional BP neural network in convergence, fault diagnosis capability, and generalization.
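A minimal sketch of a wavelet-network output built on the Mexican hat mother wavelet the abstract names (the genetic-algorithm training loop and normalization constants are omitted; the function names are illustrative):

```python
import math

def mexican_hat(t):
    """Mexican hat (Ricker) mother wavelet, proportional to the negative
    second derivative of a Gaussian (normalization constant dropped)."""
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def wnn_output(x, weights, translations, dilations):
    """Single-input, single-output wavelet network: a weighted sum of
    translated (b) and dilated (a) copies of the mother wavelet."""
    return sum(w * mexican_hat((x - b) / a)
               for w, b, a in zip(weights, translations, dilations))
```

In the paper's setup, the genetic algorithm would search over the weights, translations, and dilations of such hidden units instead of using gradient descent.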
16.
This paper presents a Wavelet Neural Network (WNN) approach for approximating the discrete tabular data of Global Navigation Satellite System (GNSS) precise ephemerides to obtain a continuous orbit function. The orbit function is essential in positioning and navigation tasks; the advantage of continuity is that the function can also be used during GNSS signal interruptions. The essence of the WNN continuous-orbit construction is the determination of a single function for the entire interval, whereas interpolation methods establish several discrete functions. Specifically, we investigate the performance of the WNN continuous orbit approximation by comparing it with well-known polynomial and trigonometric interpolations. The experimental results show that the proposed method is superior to the traditional methods, especially near the ends of intervals, because it is not subject to the large-scale function oscillations that affect polynomial constructions. We propose a WNN construction using different mother functions, namely the Mexican hat, Morlet, Gaussian, and Daubechies (D4) wavelets. Furthermore, the best algorithm for regression estimation is described; the selection of neurons in the hidden layer of the WNN is based on the orthogonal least squares algorithm. The main objective of this article is to show that the presented method of orbit-function construction could be used for GNSS ephemerides distribution and short-time prediction in Assisted-GNSS networks.
17.
18.
To address the slow convergence and local minima of the gradient learning algorithm of traditional wavelet neural networks (WNN) based on the mean-squared-error criterion, and exploiting the fact that an entropy criterion outperforms the mean-squared-error criterion and can improve network convergence, a character recognition method based on a wavelet network algorithm with a relative-entropy criterion is studied. Segmented license-plate character images are first put through a series of preprocessing steps such as binarization and normalization; invariant moments are then extracted with a new invariant-moment algorithm and used as the feature vectors of the character images; finally, a wavelet network based on the new optimization algorithm performs classification and recognition. Computer simulation results show that the method achieves good character recognition.
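The relative-entropy criterion that the abstract contrasts with mean squared error can be sketched, for binary targets, as a cross-entropy sum (the `eps` guard and the function name are illustrative additions, not from the paper):

```python
import math

def relative_entropy_loss(targets, outputs, eps=1e-12):
    """Entropy-style criterion for outputs in (0, 1): unlike squared error,
    its gradient with respect to a sigmoid's pre-activation does not vanish
    when the unit saturates, which is what speeds up convergence.
    eps guards the logarithms against outputs of exactly 0 or 1."""
    return -sum(t * math.log(o + eps) + (1.0 - t) * math.log(1.0 - o + eps)
                for t, o in zip(targets, outputs))
```

The loss approaches zero only as each output approaches its target, and it penalizes a confident wrong answer far more heavily than squared error does.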
19.
Adenilton J. da Silva, Wilson R. de Oliveira, Teresa B. Ludermir 《Neurocomputing》2012, 75(1): 52-60
A supervised learning algorithm for quantum neural networks (QNN), based on a novel quantum neuron node implemented as a very simple quantum circuit, is proposed and investigated. In contrast to the QNNs published in the literature, the proposed model can both perform quantum learning and simulate the classical models. This is partly due to the neural model used elsewhere, which has weights and non-linear activation functions. Here a quantum weightless neural network model is proposed as a quantisation of classical weightless neural networks (WNN). The theoretical and practical results on WNNs can be inherited by these quantum weightless neural networks (qWNN). In the quantum learning algorithm proposed here, the patterns of the training set are presented concurrently in superposition. This superposition-based learning algorithm (SLA) has computational cost polynomial in the number of patterns in the training set.
20.
In the present world of 'Big Data,' communication channels are always busy and overloaded, transferring quintillions of bytes of information. Designing an effective equalizer to prevent inter-symbol interference in such a scenario is a challenging task. In this paper, we develop equalizers based on a nonlinear neural structure (a wavelet neural network, WNN) and train its weights with a recently developed meta-heuristic (the symbiotic organisms search algorithm). The performance of the proposed equalizer is compared with that of WNNs trained by cat swarm optimization (CSO), the clonal selection algorithm (CLONAL), particle swarm optimization (PSO), and the least mean square (LMS) algorithm. The performance is also compared with that of equalizers whose structure is based on a functional link artificial neural network (trigonometric FLANN), a radial basis function network (RBF), and a finite impulse response (FIR) filter. The superior performance is demonstrated on the equalization of two nonlinear three-tap channels and a linear twenty-three-tap telephonic channel. It is observed that the gradient-algorithm-based equalizers fail in the presence of burst errors. The robustness of the proposed equalizers under burst-error conditions is also demonstrated.