Similar Documents
20 similar documents found (search time: 203 ms)
1.
Weights and structure determination of a two-input power-activation feedforward neural network (Total cited: 1, self-citations: 0, citations by others: 1)
Based on multivariate function approximation and bivariate power-series expansion theory, a two-input power-activation feedforward neural network model is constructed, with a sequence of bivariate power functions as the hidden-neuron activation functions. Building on this model, and on the direct weight-determination method together with the relationship between the number of hidden neurons and the approximation error, a weights-and-structure determination algorithm is proposed. Computer simulations and numerical experiments verify that the constructed network has superior approximation and denoising performance, and that the proposed algorithm quickly and effectively determines the network's weights and optimal structure, guaranteeing the network's best approximation ability.

2.
A direct weight-determination method for the Chebyshev orthogonal-basis neural network (Total cited: 2, self-citations: 0, citations by others: 2)
The classical BP neural network learning algorithm is based on error back-propagation. For certain network models, however, a pseudoinverse approach can determine the weights directly, avoiding the usual process of repeated iterative correction. Based on polynomial interpolation and approximation theory, a Chebyshev orthogonal-basis neural network is constructed: a three-layer model whose hidden-layer activation functions are a set of Chebyshev orthogonal polynomials. Following the error back-propagation (BP) idea, an iterative weight-update formula for this model can be derived, and iterative training with it yields the optimal weights. Departing from this classical approach, a pseudoinverse-based direct weight-determination method is proposed for the Chebyshev orthogonal-basis network, avoiding the lengthy training process in which the weights are obtained only after repeated iteration. Simulation results show that the method computes faster at no loss of accuracy, verifying its superiority.
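The one-step pseudoinverse idea described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the hidden-layer size, and the target function `exp(x)` are all chosen for the example.

```python
import numpy as np

def chebyshev_design_matrix(x, n_hidden):
    # Hidden-layer activations: Chebyshev polynomials T_0..T_{n_hidden-1},
    # built with the recurrence T_k(x) = 2x*T_{k-1}(x) - T_{k-2}(x), x in [-1, 1].
    T = np.zeros((len(x), n_hidden))
    T[:, 0] = 1.0
    if n_hidden > 1:
        T[:, 1] = x
    for k in range(2, n_hidden):
        T[:, k] = 2 * x * T[:, k - 1] - T[:, k - 2]
    return T

def direct_weights(x, y, n_hidden):
    # Direct weight determination: one pseudoinverse solve, no iteration.
    T = chebyshev_design_matrix(x, n_hidden)
    return np.linalg.pinv(T) @ y

# Usage: approximate f(x) = exp(x) on [-1, 1] with 8 hidden neurons.
x = np.linspace(-1, 1, 200)
y = np.exp(x)
w = direct_weights(x, y, 8)
y_hat = chebyshev_design_matrix(x, 8) @ w
print(np.max(np.abs(y_hat - y)))  # small approximation error
```

The contrast with BP training is that the output weights are obtained in a single linear-algebra step rather than by gradient iteration.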

3.
Based on Fourier-series approximation theory, an orthogonal trigonometric function system is used for the hidden-neuron activation functions; by choosing the period parameters of these activations appropriately, a single-input multiple-output (SIMO) Fourier trigonometric-basis neural network model is constructed. Exploiting the network's properties, a pseudoinverse-based direct weight-determination method is derived that computes the optimal weights in a single step, and on this basis a self-determination algorithm for the hidden-layer structure is designed. Simulation results show that, compared with the traditional BP (back-propagation) neural network and a least-squares-based SIMO Fourier neural network, this model achieves higher accuracy and faster computation.

4.
A three-layer radial basis function (RBF) neural network is introduced whose learning algorithm is orthogonal least squares (OLS). First, the structure of the RBF network is obtained by OLS; then the network's weights are trained so that it approximates a given function. To verify the RBF network's ability to approximate arbitrary nonlinear mappings and its self-learning and adaptive capabilities, experiments were carried out with a two-joint manipulator as the identification object. The results show that the RBF network learns and approximates the model well, with fast learning, good convergence, and strong robustness, making it especially suitable for the real-time control requirements of complex systems with continuous linear and nonlinear plants.
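A minimal sketch of the least-squares step in this approach (the OLS algorithm of the paper additionally selects centers one at a time, which is omitted here; the centers, width, and target function below are illustrative assumptions):

```python
import numpy as np

def rbf_features(x, centers, width):
    # Gaussian radial basis activations, one column per hidden unit.
    d = x[:, None] - centers[None, :]
    return np.exp(-(d ** 2) / (2 * width ** 2))

# Fit the output-layer weights by least squares with fixed centers.
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x)
centers = np.linspace(0, 1, 10)
Phi = rbf_features(x, centers, 0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.max(np.abs(Phi @ w - y)))  # residual of the fit
```

With the hidden layer fixed, the output weights are a linear least-squares problem, which is what makes the RBF training fast and well-conditioned.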

5.
Approximation ability and global-convergence analysis of basis-function neural networks (Total cited: 1, self-citations: 0, citations by others: 1)
A new class of basis-function neural networks is constructed. Following the gradient-descent idea, a weight-update formula for the network is given, and the iterative sequence is proven to converge globally to the network's optimal weights; from this, a one-step pseudoinverse formula for the optimal weights is derived, called the direct weight-determination method for short. Theoretical analysis shows that the new network has best mean-square approximation ability and global convergence, and that the direct weight-determination method avoids difficulties that traditional BP neural networks cannot resolve, such as lengthy iterative computation, entrapment in local minima, and the difficulty of choosing a learning rate. Simulations show that, relative to various improved BP algorithms, it is faster and more accurate, and it filters noise well.

6.
Adaptive neural variable-structure control for robot trajectory tracking (Total cited: 3, self-citations: 0, citations by others: 3)
A control strategy combining a neural network with variable-structure control is proposed for nonlinear robot control. The scheme uses the neural network to adaptively compensate for the uncertain model and a variable-structure controller to eliminate the approximation error. To address the limitations of locally generalizing networks, the state space is partitioned into three regions, each controlled by a hierarchical and integrated combination of the neural network and variable-structure control. The scheme keeps both controllers active in the early control phase and outside the network's approximation region, maintaining strong robustness, and global stability of the closed-loop system is proven via Lyapunov theory. Simulation results further demonstrate the method's advantages.

7.
Fan Li. 《福建电脑》 (Fujian Computer), 2010, 26(10): 97-99
For the structure-identification problem of fuzzy neural networks, an adaptive neural network algorithm based on ant-colony clustering is proposed. Ant-colony clustering is used to determine the initial structure of the fuzzy neural network; during learning, the error back-propagation algorithm optimizes the network's parameters and structure, achieving adaptive adjustment of both the weights and the structure. Finally, the method is validated on a nonlinear function approximation problem. Simulation experiments show that the method performs well: the system's approximation accuracy improves markedly, the network has strong adaptive capability, and it can be applied effectively to fuzzy modeling and control problems.

8.
To overcome the inherent defects of the BP neural network model and its learning algorithm, a multi-input feedforward neural network model is constructed based on polynomial interpolation and approximation theory, with Laguerre orthogonal polynomials as the hidden-neuron activation functions. For this network model, a weights-and-structure determination method is proposed so that the network's optimal weights and optimal structure can be determined quickly and automatically. Computer simulations and experimental results show that the algorithm is effective and that the resulting network has good approximation performance and denoising capability.

9.
A knowledge-acquisition method for uncertain data based on algebraic neural networks (Total cited: 1, self-citations: 0, citations by others: 1)
Algebraic neurons and algebraic neural networks are defined, the mathematical mechanism of knowledge acquisition from uncertain data is discussed, and a single-input single-output three-layer feedforward algebraic neural network model for knowledge acquisition is designed. A knowledge-acquisition method based on this network is given; through learning, the network can determine an approximation to the objective function of any given set of data.

10.
A three-layer feedforward neural network approach to approximating multivariate polynomial functions (Total cited: 4, self-citations: 0, citations by others: 4)
The paper first proves constructively that for any multivariate polynomial of order r there exists a three-layer feedforward neural network, with determined weights and a determined number of hidden units, that approximates the polynomial to arbitrary accuracy. The weights are determined by the polynomial's coefficients and the activation function, while the number of hidden units is determined by r and the input dimension. An algorithm and worked examples are given, showing that networks constructed this way approximate multivariate polynomial functions very efficiently. Specialized to univariate polynomials, the results are simpler and more efficient than the networks and algorithms proposed by Cao Feilong et al. The results are of theoretical and practical significance for constructing feedforward networks that approximate the class of multivariate polynomial functions, and they offer a route toward network constructions for approximating arbitrary functions.

11.
Neural networks have been applied to classification problems such as medical diagnosis, handwriting recognition, and product inspection, with good classification performance. The performance of a neural network is characterized by its structure, transfer function, and learning algorithm; a neural network classifier tends to perform poorly if it uses an inappropriate structure. The appropriate structure depends on the complexity of the relationship between the input and the output, and there are no exact rules for determining it. Improving neural network classification performance without changing the network's structure is therefore a challenging issue. This paper proposes a method to improve classification performance by constructing a linear model based on the Kalman filter as a post-processing step. The linear model transforms the predicted output of the neural network to a value close to the desired output, using a linear combination of the object features and the predicted output. This simple transformation reduces the error of the neural network and improves classification performance. The Kalman filter iteration is used to estimate the parameters of the linear model. Five datasets from various domains, with varying attribute types, numbers of attributes, numbers of samples, and numbers of classes, were used for empirical validation. The validation results show that the Kalman-filter-based linear model can improve the performance of the original neural network.
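The post-processing idea can be sketched as follows. This is an illustrative sketch, not the paper's implementation: for a static parameter vector the Kalman-filter iteration reduces to a recursive least-squares update, and all names, data, and dimensions below are assumptions for the example.

```python
import numpy as np

def kalman_update(theta, P, phi, target, r=1.0):
    # One scalar-measurement Kalman/RLS step for the linear model
    # target ~= phi @ theta, with measurement noise variance r.
    k = P @ phi / (r + phi @ P @ phi)          # gain vector
    theta = theta + k * (target - phi @ theta)  # parameter correction
    P = P - np.outer(k, phi @ P)                # covariance update
    return theta, P

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # object features
true_w = np.array([0.5, -1.0, 2.0])
y_pred = X @ true_w + rng.normal(scale=0.3, size=200)  # noisy "network output"
target = X @ true_w                                   # desired output

# Linear model input: [features, predicted output, bias].
theta = np.zeros(5)
P = np.eye(5) * 100.0
for i in range(200):
    phi = np.concatenate([X[i], [y_pred[i], 1.0]])
    theta, P = kalman_update(theta, P, phi, target[i])

corrected = np.array([np.concatenate([X[i], [y_pred[i], 1.0]]) @ theta
                      for i in range(200)])
print(np.mean((corrected - target) ** 2),
      np.mean((y_pred - target) ** 2))
```

The corrected outputs should have a lower mean-squared error than the raw predictions, which is the effect the paper exploits for classification.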

12.
A probabilistic interpretation is presented for two important issues in neural-network-based classification: the interpretation of discriminative training criteria and of the neural network outputs, and the interpretation of the structure of the neural network. The problem of finding a suitable network structure can be linked to a number of well-established techniques in statistical pattern recognition. Discriminative training of neural network outputs amounts to approximating the class posterior probabilities of the classical statistical approach. This paper extends these links by introducing and analyzing novel criteria such as maximizing the class probability and minimizing the smoothed error rate. These criteria are defined in the framework of class-conditional probability density functions. We show that these criteria can be interpreted in terms of weighted maximum-likelihood estimation. In particular, this approach covers widely used techniques such as corrective training, learning vector quantization, and linear discriminant analysis.

13.
Through a simulation of the tracking method for a neural network weight change on a 2D plane, we noticed that in some cases it was hard for untrained users to observe the neural network weight performance. To overcome this problem, we applied a transformation of the neural network weight trajectories on a 2D plane to the direct controller of a learning-type neural network. The simulation results confirmed that if the trajectory of the neural network weight change on a 2D plane had a simple structure, we could easily determine whether the learning of the neural network had terminated or not. However, if it had a more complex structure, we could not make this determination. The proposed transformation of the neural network weight trajectories to one-dimensional values will be useful for such cases.

14.
Implementing case-based reasoning systems with neural networks (Total cited: 9, self-citations: 1, citations by others: 9)
Case-based reasoning has a natural connection with neural networks, and neural networks have many advantages; implementing case-based reasoning with neural networks can achieve very good results. The paper first examines in detail the neural network models and techniques used in case-based reasoning, and then gives the search and learning algorithms built on them, as well as data-mining algorithms, aiming to improve the robustness of case-based reasoning systems and the degree of automation of knowledge acquisition.

15.
A data-mining algorithm based on a fuzzy neural network is proposed: fuzzy theory and neural networks are combined to construct and train a fuzzy neural network, remedying shortcomings of neural networks such as complex structure, long training time, and results that are hard to interpret. Once the fuzzy neural network is built and trained to the required accuracy, the goal of extracting knowledge from databases with the fuzzy neural network method is achieved.

16.
Li Liangjun, Zhang Bin, Yang Ming. 《计算机工程》 (Computer Engineering), 2007, 33(12): 63-64, 6
A data-mining algorithm based on a fuzzy neural network is proposed: fuzzy theory and neural networks are combined to construct and train a fuzzy neural network, remedying shortcomings of neural networks such as complex structure, long training time, and results that are hard to interpret. Once the fuzzy neural network is built and trained to the required accuracy, the goal of extracting knowledge from databases with the fuzzy neural network method is achieved.

17.
The spline weight-function neural network is a new type of neural network that overcomes many defects of traditional networks (such as BP and RBF networks), for example local minima and slow convergence. It has a simple topology, memorizes trained samples exactly, reflects the samples' information features, and attains the global minimum. Exploiting these advantages, a P2P traffic identification method based on the spline weight-function neural network is proposed: P2P traffic features are extracted and the spline weight-function network is used to identify P2P flows. Matlab simulations and experiments show that the scheme is feasible; compared with traditional neural networks, the spline weight-function network has a clear advantage in time efficiency.

18.
A model of a human neural knowledge processing system is presented that suggests the following. First, an entity in the outside world tends to be locally encoded in neural networks, so that the conceptual information structure is mirrored in its physical implementation. Second, problem-solving knowledge is implemented quite implicitly in the internal structure of the neural network (a functional group of associated hidden neurons and their connections to entity neurons), not in individual neurons or connections. Third, the knowledge system is organized and implemented in a modular fashion in neural networks according to the local specialization of problem solving, where a module of the neural network implements an inter-related group of knowledge such as a schema; different modules have similar processing mechanisms but differ in their input and output patterns. A neural network module can be tuned just as a schema structure can be adapted to changing environments. Three experiments were conducted to validate the suggested cognitive-engineering-based knowledge structure in neural networks through computer simulation. The experiments, based on a modulo-arithmetic task, provided some insight into the plausibility of the suggested model of a neural knowledge processing system.

19.
Recently, a projection neural network for solving monotone variational inequalities and constrained optimization problems was developed. In this paper, we propose a general projection neural network for solving a wider class of variational inequalities and related optimization problems. In addition to its simple structure and low complexity, the proposed neural network includes existing neural networks for optimization, such as the projection neural network, the primal-dual neural network, and the dual neural network, as special cases. Under various mild conditions, the proposed general projection neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable. Furthermore, several improved stability criteria on two special cases of the general projection neural network are obtained under weaker conditions. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network.

20.
Dynamic neural networks (DNNs) have important properties that make them convenient to use with nonlinear control approaches based on state-space models and differential geometry, such as feedback linearisation. However, the mapping capability of DNNs is quite limited by their fixed structure, that is, the number of layers and the number of hidden units; an example in this paper demonstrates this limitation. The development of novel DNN structures with good mapping capability is the challenge addressed here: although only a minor structural change is made, the mapping capability of the newly designed DNN is improved dramatically. Previous work [J. Deng et al., 2005. The dynamic neural network of a hybrid structure for nonlinear system identification. In: 16th IFAC World Congress, Prague.] presented a dynamic neural network structure suitable for the identification of highly nonlinear systems, which needs the outputs of the real system for training and operation. This paper presents a hybrid dynamic neural network structure based on a similar serial-parallel hybrid idea, but it uses the output of another neural network for training and operation, and is classified as a serial-parallel model. This type of DNN does not require the output of the plant to be used as an input to the model. Compared with existing DNNs, this network combines good mapping capability with flexibility in training on complicated systems. A theoretical proof is given showing how this hybrid dynamic neural network can approximate finite trajectories of general nonlinear dynamic systems. To illustrate the capabilities of the new structure, neural networks are trained to identify a real nonlinear 3D crane system.
