20 similar documents found; search time: 12 ms
1.
Binary classification is typically achieved by supervised learning methods. Nevertheless, it is also possible with unsupervised schemes. This paper describes a connectionist unsupervised approach to binary classification and compares its performance to that of its supervised counterpart. The approach consists of training an autoassociator to reconstruct the positive class of a domain at the output layer. After training, the autoassociator is used for classification, relying on the idea that if the network generalizes to a novel instance, then this instance must be positive, but that if generalization fails, then the instance must be negative. When tested on three real-world domains, the autoassociator proved more accurate at classification than its supervised counterpart, MLP, on two of these domains and as accurate on the third (Japkowicz, Myers, & Gluck, 1995). The paper seeks to generalize these results and concludes that, in addition to learning a concept in the absence of negative examples, 1) autoassociation is more efficient than MLP in multi-modal domains, and 2) it is more accurate than MLP in multi-modal domains for which the negative class creates a particularly strong need for specialization or the positive class creates a particularly weak need for specialization. In multi-modal domains for which the positive class creates a particularly strong need for specialization, on the other hand, MLP is more accurate than autoassociation.
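The decision rule this abstract describes can be sketched in a few lines. The sketch below is illustrative, not the paper's network: the trained autoassociator is replaced by a stand-in reconstruction map (the centroid of the positive class), and the threshold scheme is an assumption. An instance is labeled positive iff its reconstruction error stays below a threshold calibrated on the positive training data.

```python
# Sketch of the autoassociator decision rule (hypothetical names; a
# stand-in reconstruction map replaces the trained network).

def reconstruction_error(x, reconstruct):
    """Squared error between an instance and its reconstruction."""
    r = reconstruct(x)
    return sum((xi - ri) ** 2 for xi, ri in zip(x, r))

def calibrate_threshold(positives, reconstruct, slack=1.5):
    """Threshold = slack * worst reconstruction error on the positive class."""
    worst = max(reconstruction_error(x, reconstruct) for x in positives)
    return slack * worst

def classify(x, reconstruct, threshold):
    """Positive if the 'network' generalizes (low error), negative otherwise."""
    return reconstruction_error(x, reconstruct) <= threshold

# Stand-in for a trained autoassociator: reconstruct every input as the
# centroid of the positive class (a real network would do far better).
positives = [(0.9, 1.1), (1.0, 1.0), (1.1, 0.9)]
centroid = tuple(sum(c) / len(positives) for c in zip(*positives))
reconstruct = lambda x: centroid

t = calibrate_threshold(positives, reconstruct)
print(classify((1.0, 1.05), reconstruct, t))  # near the positive class
print(classify((5.0, -3.0), reconstruct, t))  # far from the positive class
```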
2.
Chaos appears in many natural and artificial systems; accordingly, we propose a method that injects chaos into a supervised feedforward neural network (NN). The chaos is injected simultaneously into the learnable temperature coefficient of the sigmoid activation function and into the weights of the NN. This is functionally different from the idea of noise injection (NI), which is relatively distant from biological realism. We investigate whether chaos injection is more efficient than standard backpropagation, the adaptive neuron model, and NI algorithms by applying these techniques to different benchmark classification problems such as heart disease, glass, breast cancer, and diabetes identification, and to time series prediction. In each case chaos injection is superior to the standard approaches in terms of generalization ability and convergence rate. The performance of the proposed method is also statistically different from that of noise injection.
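A minimal sketch of the chaos-injection idea, with assumed details (the paper's exact injection schedule and scaling are not reproduced here): a logistic map in its chaotic regime supplies a deterministic, bounded perturbation that is added to each weight during a training step, in place of random noise.

```python
# Chaos injection sketch: perturb weights with a logistic-map sequence.
import random

def logistic_map(x):
    """One step of the logistic map at r = 4 (fully chaotic regime)."""
    return 4.0 * x * (1.0 - x)

def inject_chaos(weights, state, scale=0.01):
    """Perturb each weight with the next chaotic value, centred on zero."""
    new_w = []
    for w in weights:
        state = logistic_map(state)              # iterate the chaotic source
        new_w.append(w + scale * (state - 0.5))  # zero-mean perturbation
    return new_w, state

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(4)]
perturbed, state = inject_chaos(weights, state=0.3)
print([round(p - w, 4) for w, p in zip(weights, perturbed)])
```

Because the logistic map is deterministic, the same initial state reproduces the same perturbation sequence, unlike noise injection.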
3.
With the arrival of the intelligent era, applications of intelligent systems that deploy deep neural networks have permeated every aspect of human life. However, because neural networks are black boxes and very large in scale, their predictions are hard to fully trust; when they are applied in safety-critical fields such as autonomous driving, guaranteeing their safety remains a major challenge for both academia and industry. To this end, researchers have studied a particular safety property of neural networks, robustness, and have proposed many methods for analyzing and verifying it. To date, verification methods for feedforward neural networks, comprising both exact and approximate approaches, have developed fairly maturely, whereas research on robustness verification for other kinds of networks, such as recurrent neural networks, is still in its infancy. This paper reviews the development of deep neural networks and the challenges of deploying them in daily life; surveys in detail the robustness verification methods for feedforward and recurrent neural networks, analyzing and comparing the intrinsic connections among them; surveys safety verification methods for recurrent neural networks in real-world application scenarios; and outlines directions for future in-depth research on neural network robustness verification.
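One family of approximate verification methods for feedforward networks can be sketched concretely: interval bound propagation pushes a box of possible inputs through each layer and returns sound output bounds. The weights and the input box below are illustrative, not taken from any work surveyed above.

```python
# Interval bound propagation through one affine layer followed by a ReLU.

def affine_bounds(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b, per output unit."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # a positive weight maps the lower input bound to the lower output bound
        lo_sum = bias + sum(w * (l if w >= 0 else h) for w, l, h in zip(row, lo, hi))
        hi_sum = bias + sum(w * (h if w >= 0 else l) for w, l, h in zip(row, lo, hi))
        out_lo.append(lo_sum)
        out_hi.append(hi_sum)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Input box: x in [0.9, 1.1] x [-0.1, 0.1] (e.g. a perturbed input).
lo, hi = [0.9, -0.1], [1.1, 0.1]
W, b = [[1.0, -2.0], [0.5, 1.0]], [0.0, -1.0]
lo, hi = affine_bounds(lo, hi, W, b)
lo, hi = relu_bounds(lo, hi)
print(lo, hi)
```

If the resulting output box cannot cross the decision boundary, the network is certified robust on that input region; the bounds are sound but can be loose, which is the usual trade-off of approximate methods.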
4.
Approximation of Multivariate Polynomial Functions by Three-Layer Feedforward Neural Networks (Cited in total: 4; self-citations: 0; by others: 4)
This paper first proves, by a constructive method, that for any multivariate polynomial of degree r there exists a three-layer feedforward neural network, with determined weights and a determined number of hidden units, that approximates the polynomial to arbitrary accuracy. The weights are determined by the coefficients of the given polynomial and the activation function, while the number of hidden units is determined by r and the dimension of the input variables. The authors give an algorithm and numerical examples showing that the constructed network approximates multivariate polynomial functions very efficiently. Specialized to univariate polynomials, the results are simpler and more efficient than the network and algorithm proposed by Cao Feilong et al. The results are of theoretical and practical significance for the construction of feedforward networks approximating the class of multivariate polynomial functions, and provide one route toward a theory and method of network construction for neural network approximation of arbitrary functions.
5.
A perceptron-like adaptive algorithm is proposed for feedforward neural networks with time-varying and/or nonlinear inputs. Its essence is to force the error between the actual and desired outputs to satisfy an asymptotically stable difference equation, rather than to minimize an error function by backpropagation. Singularity of the algorithm can be avoided by suitably arranging the expanded inputs.
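The core idea, enforcing a stable error dynamic instead of minimizing a loss, can be shown in a toy form. Everything below is an assumption for illustration (a trivial identity "plant" and a scalar parameter), not the paper's algorithm: the update is chosen so that the error obeys e[k+1] = a * e[k] with |a| < 1, so it decays geometrically by construction.

```python
# Force the output error to satisfy a stable difference equation.

def run(desired, a=0.5, steps=8):
    u = 0.0                      # adjustable parameter (e.g. a weight)
    errors = []
    for _ in range(steps):
        y = u                    # trivial 'plant': output equals the parameter
        e = desired - y
        errors.append(e)
        u = u + (1.0 - a) * e    # enforces e[k+1] = a * e[k]
    return errors

errs = run(desired=2.0)
print([round(e, 4) for e in errs])
```

Substituting the update into the error gives e[k+1] = desired - u[k] - (1 - a) e[k] = a e[k], so stability is guaranteed by the choice of a, not by the shape of any loss surface.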
6.
Fast Learning Algorithms for Feedforward Neural Networks (Cited in total: 7; self-citations: 0; by others: 7)
In order to improve the training speed of multilayer feedforward neural networks (MLFNN), we propose and explore two new fast backpropagation (BP) algorithms obtained (1) by changing the error functions, using the exponent-attenuation (or bell-impulse) function and the Fourier kernel function as alternatives, and (2) by introducing a hybrid conjugate-gradient algorithm of global optimization with a dynamic learning rate, to overcome the conventional BP learning problems of getting stuck in local minima and slow convergence. Our experimental results demonstrate the effectiveness of the modified error functions, since training is faster than with existing fast methods. In addition, on real speech data our hybrid algorithm has a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and has less training time, less complication, and stronger robustness than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms.
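The error-function idea in (1) can be illustrated with one plausible form (the paper's exact definition may differ): a bell-shaped "exponent attenuation" cost of the residual. What changes in BP is the backpropagated derivative, which is the usual residual rescaled by the bell factor.

```python
# An assumed bell-shaped alternative to the squared error, with its gradient.
import math

def squared_error(d, y):
    return 0.5 * (d - y) ** 2

def bell_error(d, y, sigma=1.0):
    """Bell-impulse style cost: 1 - exp(-(d - y)^2 / (2 sigma^2))."""
    return 1.0 - math.exp(-((d - y) ** 2) / (2.0 * sigma ** 2))

def bell_error_grad(d, y, sigma=1.0):
    """dE/dy, the term backpropagated in place of (y - d)."""
    r = d - y
    return -(r / sigma ** 2) * math.exp(-(r ** 2) / (2.0 * sigma ** 2))

print(round(bell_error(1.0, 0.0), 4), round(bell_error_grad(1.0, 0.0), 4))
```

Unlike the squared error, the gradient magnitude saturates and then shrinks for very large residuals, which changes how outliers drive the weight updates.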
7.
Pattern Analysis and Applications
8.
A novel multistage feedforward network is proposed for efficient solving of difficult classification tasks. The standard Radial Basis Functions (RBF) architecture is modified in order to alleviate two potential drawbacks, namely the curse of dimensionality and the limited discriminatory capacity of the linear output layer. The first goal is accomplished by feeding the hidden layer output to the input of a module performing Principal Component Analysis (PCA). The second one is met by substituting the simple linear combiner in the standard architecture with a Multilayer Perceptron (MLP). Simulation results for the 2-spirals problem and Peterson-Barney vowel classification are reported, showing high classification accuracy using fewer parameters than existing solutions.
9.
Mehmet Önder Efe, Neural Processing Letters, 2008, 28(2): 63-79
Feedforward neural network structures have been considered extensively in the literature. A significant volume of research and development studies has utilized a hyperbolic-tangent type of neuronal nonlinearity. This paper dwells on the widely used neuronal activation functions as well as two new ones composed of sines and cosines, and a sinc function characterizing the firing of a neuron. The viewpoint here is to consider the hidden layer(s) as transforming blocks composed of nonlinear basis functions, which may assume different forms. The paper considers 8 different activation functions, all differentiable, and utilizes the Levenberg-Marquardt algorithm for parameter tuning. The studies carried out have a guiding quality based on empirical results on several training data sets.
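The activation functions named in this abstract can be written out directly; the exact parameterizations used in the paper may differ, these are the standard forms. The only non-obvious one is sinc, which needs a continuous extension at zero.

```python
# Standard forms of the activation functions discussed above.
import math

def tanh_act(x):
    return math.tanh(x)

def sin_act(x):
    return math.sin(x)

def cos_act(x):
    return math.cos(x)

def sinc_act(x):
    """sinc(x) = sin(x)/x, continuously extended with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(x) / x

for f in (tanh_act, sin_act, cos_act, sinc_act):
    print(f.__name__, round(f(0.5), 4))
```

All four are differentiable everywhere (sinc included, via its series expansion at 0), which is what makes them usable with the Levenberg-Marquardt algorithm.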
10.
Feedforward neural networks (FNN) have been proposed to solve complex problems in pattern recognition, classification and function approximation. Despite the general success of learning methods for FNN, such as the backpropagation (BP) algorithm and second-order algorithms, the long learning time needed for convergence remains a problem to be overcome. In this paper, we propose a new hybrid algorithm for an FNN that combines unsupervised training for the hidden neurons (Kohonen algorithm) and supervised training for the output neurons (gradient descent method). Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.
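The hybrid scheme can be sketched end to end on toy data. Everything below (learning rates, the Gaussian width, the deterministic prototype initialization, and the data) is an assumption for illustration: hidden prototypes are fitted with winner-take-all Kohonen updates, then only the linear output weights are trained by gradient descent.

```python
# Unsupervised hidden layer (Kohonen) + supervised linear output (gradient descent).
import math

def kohonen_fit(data, protos, lr=0.3, epochs=30):
    """Winner-take-all updates: move the nearest prototype toward each sample."""
    protos = [list(p) for p in protos]
    for _ in range(epochs):
        for x in data:
            w = min(protos, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)))
            for i in range(len(w)):
                w[i] += lr * (x[i] - w[i])
    return protos

def hidden(x, protos, width=1.0):
    """Gaussian activation of the distance to each prototype."""
    return [math.exp(-sum((a - b) ** 2 for a, b in zip(p, x)) / width) for p in protos]

def train_output(data, targets, protos, lr=0.2, epochs=500):
    """Supervised gradient descent on the linear output weights only."""
    w = [0.0] * len(protos)
    for _ in range(epochs):
        for x, t in zip(data, targets):
            h = hidden(x, protos)
            y = sum(wi * hi for wi, hi in zip(w, h))
            for i in range(len(w)):
                w[i] += lr * (t - y) * h[i]   # plain gradient step on squared error
    return w

data = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.1)]
targets = [0.0, 0.0, 1.0, 1.0]
protos = kohonen_fit(data, protos=[data[0], data[-1]])  # deterministic init
w = train_output(data, targets, protos)
preds = [sum(wi * hi for wi, hi in zip(w, hidden(x, protos))) for x in data]
print([round(p, 2) for p in preds])
```

The speed-up argument is that only the small linear output layer sees the supervised gradient loop; the hidden layer is positioned cheaply and without labels.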
11.
12.
Constructing Feedforward Neural Networks to Approximate Polynomial Functions (Cited in total: 1; self-citations: 0; by others: 1)
We first prove, by a constructive method, that for any multivariate polynomial function of degree n, a three-layer feedforward neural network can be constructed that approximates the polynomial to arbitrary accuracy, and the number of hidden-layer nodes of the constructed network depends only on the dimension d and the degree n of the polynomial. We then give a concrete algorithm realizing this approximation. Finally, two numerical examples further verify the theoretical results. The results provide guidance on the concrete network construction, and the method of realizing it, for neural network approximation of multivariate polynomial functions.
13.
The role of activation functions in feedforward artificial neural networks has not been investigated to the desired extent. The commonly used sigmoidal functions appear as discrete points in the sigmoidal functional space. This makes comparison difficult. Moreover, these functions can be interpreted as the (suitably scaled) integral of some probability density function (generally taken to be symmetric/bell shaped). Two parameterization methods are proposed that allow us to construct classes of sigmoidal functions based on any given sigmoidal function. The suitability of the members of the proposed class is investigated. It is demonstrated that all members of the proposed class(es) satisfy the requirements to act as an activation function in feedforward artificial neural networks.
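One concrete way to turn a single sigmoid into a parameterized class, offered here as an assumed illustration rather than the paper's construction: given any sigmoidal s, the family s_p(x) = s(p*x) for p > 0 is again sigmoidal, monotone increasing with the same limits at positive and negative infinity, so comparisons can be made along a continuum rather than at isolated points.

```python
# A one-parameter family of sigmoids built from a base sigmoid.
import math

def base_sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_sigmoid(p):
    """Return the family member s_p(x) = s(p * x) for a slope parameter p > 0."""
    if p <= 0:
        raise ValueError("slope parameter must be positive")
    return lambda x: base_sigmoid(p * x)

steep, shallow = make_sigmoid(5.0), make_sigmoid(0.5)
print(round(steep(1.0), 4), round(shallow(1.0), 4))
```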
14.
15.
16.
A Three-Stage Learning Method for Feedforward Neural Networks Based on Complementary Genetic Operators (Cited in total: 1; self-citations: 0; by others: 1)
Yang Huizhi, 《计算机工程与应用》 (Computer Engineering and Applications), 2005, 41(17): 88-89, 104
This paper proposes a new three-stage learning method for feedforward neural networks based on complementary genetic operators, dividing the learning process into three stages. The first stage is structure identification: a genetic algorithm selects the number of hidden-layer nodes and sets the initial parameters, and efficient complementary genetic operators are designed based on the discovered complementary effect among genetic operators. The second stage is parameter identification: a more efficient neural network algorithm, such as the Levenberg-Marquardt (L-M) algorithm, further trains the network parameters. The third stage is pruning: a minimal-structure network is obtained in order to improve generalization. Throughout the learning process, a satisfactory balance is reached between the controllability of learning and the approximation accuracy, complexity, and generalization ability of the network. Simulation results demonstrate the effectiveness of the method.
17.
18.
Based on the Hooke-Jeeves pattern search method from optimization theory, a fast training algorithm for multilayer feedforward neural networks, HJPS, is proposed. The algorithm alternates between two steps, "exploratory search" and "pattern move". The basic idea is that the exploratory search proceeds along each coordinate axis in turn, to determine a new base point and a direction favorable for decreasing the network error function; the pattern move then advances along the line connecting two adjacent base points, further reducing the error function and achieving faster convergence. Experimental results show that, compared with the BP algorithm and several other fast algorithms, HJPS improves convergence speed and run time very significantly, and its generalization ability is also strong.
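The two alternating steps can be sketched as the standard, generic Hooke-Jeeves method; the HJPS specifics for weight training are not reproduced here, and the demo objective is a smooth stand-in for a network error function.

```python
# Generic Hooke-Jeeves pattern search: exploratory moves + pattern moves.

def explore(f, x, step):
    """Probe +/- step along each axis, keeping improving moves."""
    x = list(x)
    for i in range(len(x)):
        for d in (step, -step):
            trial = list(x)
            trial[i] += d
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    base = list(x0)
    for _ in range(max_iter):
        new = explore(f, base, step)
        if f(new) < f(base):
            # pattern move: extrapolate along base -> new, then re-explore
            pattern = [2 * n - b for n, b in zip(new, base)]
            candidate = explore(f, pattern, step)
            base = candidate if f(candidate) < f(new) else new
        else:
            step *= shrink          # no improvement: refine the mesh
            if step < tol:
                break
    return base

# Demo on a smooth surrogate for a network error function.
loss = lambda v: (v[0] - 1.0) ** 2 + 3.0 * (v[1] + 2.0) ** 2
x = hooke_jeeves(loss, [0.0, 0.0])
print([round(c, 3) for c in x])  # -> [1.0, -2.0]
```

Because it uses only function evaluations, no gradients, this kind of search sidesteps BP's derivative computation entirely, which is the source of the claimed speed advantage.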
19.
Network Structure and Generalization Ability of Feedforward Process Neural Networks (Cited in total: 1; self-citations: 0; by others: 1)
From the perspective of improving the generalization ability of process neural networks, this paper studies how the network structure of a feedforward process neural network affects its generalization ability, and reaches the following conclusions: the process-neuron hidden layer (the time-varying hidden layer) plays the main role; an ordinary-neuron hidden layer (a time-invariant hidden layer) is not necessary; and, for samples with the same features, the ability of process neurons to extract sample features far exceeds that of traditional neurons. A construction algorithm for feedforward process neural network structures aimed at improving generalization ability is given, and an example is used to verify its effectiveness.
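A single process neuron can be sketched in a common formulation (the paper's exact model is not reproduced, so treat the details as assumptions): both the input x(t) and the weight w(t) are functions of time, and the neuron fires on the time-aggregated product, f(∫ w(t) x(t) dt − θ), discretized here with a simple Riemann sum.

```python
# Discretized process neuron: sigmoid( sum_i w_i * x_i * dt - theta ).
import math

def process_neuron(x_samples, w_samples, dt, theta=0.0):
    """Aggregate the weight-input product over time, then apply a sigmoid."""
    z = sum(w * x for w, x in zip(w_samples, x_samples)) * dt - theta
    return 1.0 / (1.0 + math.exp(-z))

# Time grid on [0, 1] and example time-varying input/weight functions.
n = 100
dt = 1.0 / n
ts = [i * dt for i in range(n)]
x = [math.sin(2 * math.pi * t) for t in ts]   # input signal x(t)
w = [math.sin(2 * math.pi * t) for t in ts]   # matched weight function w(t)
print(round(process_neuron(x, w, dt), 4))
```

The time integral is what lets a single process neuron respond to the shape of a whole signal, which is the intuition behind the feature-extraction advantage over traditional neurons claimed above.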