Similar Literature
 20 similar records retrieved, search time 46 ms
1.
Function Approximation Capability of Monolithic Fuzzy Neural Networks   Cited by: 14 (self-citations: 1, by others: 13)
The function approximation capability of monolithic fuzzy neural networks (MFNNs) is studied. Because the basic neuron operation in MFNNs is changed from the usual product-sum operation to a min-max operation, the function approximation properties of the network change substantially. An order-monotonicity property for MFNNs with monotone transfer functions, a continuous-mapping theorem, and a non-uniform-approximation theorem are given, showing that although MFNNs preserve continuous mappings, they do not possess the function approximation capability of conventional neural networks.
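A minimal sketch of the operational difference this abstract turns on, written in Python/NumPy for illustration (the transfer function and the input/weight values are assumptions of this sketch): a conventional neuron aggregates its inputs with a product-sum, whereas an MFNN-style neuron replaces this with a min-max composition.

```python
import numpy as np

def product_sum_neuron(x, w, f=np.tanh):
    """Conventional neuron: weighted sum of the inputs passed through f."""
    return f(np.dot(w, x))

def min_max_neuron(x, w, f=np.tanh):
    """MFNN-style neuron: pairwise minima of weights and inputs,
    aggregated by a maximum instead of a sum."""
    return f(np.max(np.minimum(w, x)))

x = np.array([0.2, 0.7, 0.5])
w = np.array([0.9, 0.4, 0.6])
print(product_sum_neuron(x, w))  # smooth in every input
print(min_max_neuron(x, w))      # piecewise, order-dependent response
```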

2.
Multilayer Feedforward Small-World Neural Networks and Their Function Approximation   Cited by: 1 (self-citations: 0, by others: 1)
Drawing on results from research on complex networks, this paper investigates a network model whose structure lies between regular and randomly connected neural networks: the multilayer feedforward small-world neural network. First, the connections of a regular multilayer feedforward neural network are rewired with probability p to construct the new model; analysis of its characteristic parameters shows that for 0 < p < 1 its clustering coefficient differs from that of the Watts-Strogatz model. The network is then described by a six-tuple model. Finally, small-world neural networks with different values of p are applied to function approximation. Simulation results show that the network achieves its best approximation performance at p = 0.1, and comparative convergence experiments show that at this value the network outperforms regular and random networks of the same size in convergence behaviour and approximation speed.
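A rough sketch of the rewiring step the abstract describes, start from a regular multilayer feedforward wiring and rewire each connection with probability p; the layer sizes and the rule that a rewired edge may target any downstream unit are assumptions of this sketch, not taken from the paper.

```python
import random

def rewire_feedforward(layer_sizes, p, seed=0):
    """Start from a regular multilayer feedforward wiring (every unit in
    layer l feeds every unit in layer l+1) and, with probability p,
    rewire each connection's target to a randomly chosen unit in any
    later layer, giving a small-world-like topology for 0 < p < 1."""
    rng = random.Random(seed)
    units = [(l, i) for l, n in enumerate(layer_sizes) for i in range(n)]
    edges = []
    for l in range(len(layer_sizes) - 1):
        for i in range(layer_sizes[l]):
            for j in range(layer_sizes[l + 1]):
                src, dst = (l, i), (l + 1, j)
                if rng.random() < p:
                    # pick a new target anywhere downstream of the source
                    dst = rng.choice([u for u in units if u[0] > l])
                edges.append((src, dst))
    return edges

edges = rewire_feedforward([3, 5, 5, 1], p=0.1)
print(len(edges), "connections, e.g.", edges[:3])
```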

3.
Functional Networks   Cited by: 19 (self-citations: 0, by others: 19)
In this letter we present functional networks. Unlike neural networks, in these networks there are no weights associated with the links connecting neurons, and the internal neuron functions are not fixed but learnable. These functions are not arbitrary, but subject to strong constraints to satisfy the compatibility conditions imposed by the existence of multiple links going from the last input layer to the same output units. In fact, writing the values of the output units in different forms, by considering these different links, a system of functional equations is obtained. When this system is solved, the number of degrees of freedom of these initially multidimensional functions is considerably reduced. One example illustrates the process and shows that multidimensional functions can be reduced to functions with a single argument. To learn the resulting functions, a method based on minimizing a least squares error function is used, which, unlike the functions used in neural networks, has a single minimum.
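As a small, hedged illustration of the learning step mentioned at the end of the abstract, the sketch below assumes the functional units have already been reduced to single-argument functions and are expressed as linear combinations of a fixed polynomial basis, so that fitting them is an ordinary least-squares problem with a single minimum; the target function and the basis are invented for this example.

```python
import numpy as np

# Assume the functional-equation analysis has reduced the problem to
# z ≈ g1(x) + g2(y), each unit a linear combination of the basis {1, t, t^2}.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = np.sin(x) + y**2                      # unknown function to learn

basis = lambda t: np.column_stack([np.ones_like(t), t, t**2])
# The design matrix stacks the bases of both units; fitting the coefficients
# is ordinary least squares, hence a single (global) minimum.
A = np.hstack([basis(x), basis(y)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
print("unit coefficients:", np.round(coef, 3))
print("RMS error:", np.sqrt(np.mean((A @ coef - z) ** 2)))
```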

4.
Sigma-Pi (Σ-Π) neural networks (SPNNs) are known to provide more powerful mapping capability than traditional feed-forward neural networks. A unified convergence analysis for the batch gradient algorithm for SPNN learning is presented, covering three classes of SPNNs: Σ-Π-Σ, Σ-Σ-Π and Σ-Π-Σ-Π. The monotonicity of the error function in the iteration is also guaranteed.
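A minimal sketch of the Σ-Π building block and of batch (full-gradient) learning on it; the monomial set, the data, and the learning rate are assumptions of this sketch, and only a single Σ-Π unit is shown rather than the three network classes analysed in the paper.

```python
import numpy as np

def sigma_pi_forward(X, w, subsets):
    """Sigma-Pi unit: weighted sum of products over input subsets.
    X: (n_samples, n_inputs); subsets: index tuples defining each monomial."""
    monomials = np.column_stack([np.prod(X[:, s], axis=1) for s in subsets])
    return monomials @ w, monomials

# Toy batch-gradient learning of y = x0*x1 + 0.5*x2 with squared error.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 3))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2]

subsets = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]   # candidate monomials
w = np.zeros(len(subsets))
lr = 0.1
for epoch in range(500):                 # full-batch gradient descent
    pred, monomials = sigma_pi_forward(X, w, subsets)
    grad = monomials.T @ (pred - y) / len(y)
    w -= lr * grad                       # the error decreases monotonically
                                         # for a small enough step size
print(np.round(w, 3))
```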

5.
Turing Equivalence of Hopfield Networks   Cited by: 1 (self-citations: 0, by others: 1)
孟祥武 (Meng Xiangwu), 程虎 (Cheng Hu). 《软件学报》 (Journal of Software), 1998, 9(1): 43-46
This paper gives a constructive proof that Hopfield networks can compute the partial recursive functions. Since the partial recursive functions are equivalent to Turing machines, Hopfield networks are equivalent to Turing machines.

6.
To address the heavy computational cost common to image restoration methods, a new image restoration method based on cellular neural networks (CNNs) is proposed. A regularized restoration method based on an edge-direction criterion, which is easy to implement in hardware, is presented first; suitable network parameters are then designed through the CNN energy function so that this regularization functional is realized by a cellular neural network. Simulation results show that the new method is effective and restores images better than constrained least-squares restoration and existing CNN-based restoration methods; moreover, thanks to the parallelism and ease of hardware implementation of cellular neural networks, the method can restore images in real time.

7.
On the Energy Functions of Neural Networks   Cited by: 5 (self-citations: 0, by others: 5)
Energy functions play a very important role in the study of neural networks. It is widely believed that if the energy function decreases along the solutions of the network, every point where its derivative vanishes is an equilibrium of the network, and the energy function is bounded below, then the network is stable and its equilibria are minima of the energy function. This paper gives a counterexample showing that these conditions do not guarantee stability, and a further example showing that even when the network is stable its equilibria need not be minima of the energy function. It is then proved that, when the network has an energy function satisfying the above conditions, the network is stable if and only if its solutions are bounded. The paper also discusses the network's …

8.
The basic principles and characteristics of reinforcement learning are described, and the problem of approximating the evaluation (value) function with neural networks is discussed, with emphasis on learning with multiple neural networks to approximate the evaluation function. This realizes automatic decomposition of the state space or task and improves the generalization ability of the evaluation function. The networks are trained offline and then applied online as feedback controllers. Taking A-learning as an example, reinforcement learning is applied to the missile guidance problem; simulation results demonstrate the effectiveness and promise of reinforcement learning in missile guidance and control problems.
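The abstract gives no implementation details; the following toy sketch only illustrates the general idea of approximating a value function with several approximators, each owning part of the state space, trained offline and then usable online. It uses TD(0) evaluation of a random walk, with two linear modules standing in for the neural networks; all sizes and constants are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, gamma, alpha = 11, 0.95, 0.05
weights = [np.zeros(2), np.zeros(2)]      # module 0: s < 5, module 1: s >= 5

def features(s):
    return np.array([1.0, s / (N - 1)])   # simple linear features per module

def module(s):
    return 0 if s < N // 2 else 1         # crude state-space decomposition

def value(s):
    return weights[module(s)] @ features(s)

for episode in range(2000):               # offline training phase
    s = N // 2
    while 0 < s < N - 1:
        s_next = s + rng.choice([-1, 1])
        reward = 1.0 if s_next == N - 1 else 0.0
        bootstrap = 0.0 if s_next in (0, N - 1) else value(s_next)
        td_error = reward + gamma * bootstrap - value(s)
        weights[module(s)] += alpha * td_error * features(s)
        s = s_next

# after offline training, the approximate values would be used online
print([round(float(value(s)), 2) for s in range(1, N - 1)])
```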

9.
Methods of construction of structural models of fast two-layer neural networks are considered. The methods are based on the criteria of minimum computing operations and maximum degrees of freedom. Optimal structural models of two-layer neural networks are constructed. Illustrative examples are given. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 47–56, July–August, 2000.

10.
There is no method to determine the optimal topology of multi-layer neural networks for a given problem. Usually the designer selects a topology for the network and then trains it. Since determination of the optimal topology of neural networks belongs to the class of NP-hard problems, most of the existing algorithms for determining the topology are approximate. These algorithms can be classified into four main groups: pruning algorithms, constructive algorithms, hybrid algorithms and evolutionary algorithms, and they can produce near-optimal solutions. Most of them use a hill-climbing method and may get stuck at local minima. In this article, we first introduce a learning automaton and study its behaviour, and then present an algorithm based on the proposed learning automaton, called the survival algorithm, for determining the number of hidden units of three-layer neural networks. The survival algorithm uses learning automata as a global search method to increase the probability of obtaining the optimal topology. The algorithm treats the optimization of the topology of neural networks as object partitioning rather than as searching or parameter optimization, as in existing algorithms. In the survival algorithm, training begins with a large network, and then, by adding and deleting hidden units, a near-optimal topology is obtained. The algorithm has been tested on a number of problems, and simulations show that the generated networks are near optimal.
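This is not the authors' survival algorithm (which treats topology optimization as object partitioning); it is only a loose learning-automaton sketch of the underlying idea, with a synthetic evaluation function standing in for training a three-layer network and measuring its validation error. All names and constants are assumptions of this sketch.

```python
import random

def evaluate(hidden_units):
    """Stand-in for training a three-layer network and measuring validation
    error; here a synthetic bowl with its optimum at 12 hidden units."""
    return (hidden_units - 12) ** 2 / 100.0 + random.gauss(0, 0.02)

def automaton_search(candidates, steps=500, a=0.05, seed=0):
    """Linear reward-inaction automaton over candidate hidden-unit counts:
    an action whose error beats the running best is rewarded (its selection
    probability increases); otherwise the probabilities are left unchanged."""
    random.seed(seed)
    probs = [1.0 / len(candidates)] * len(candidates)
    best_err = float("inf")
    for _ in range(steps):
        i = random.choices(range(len(candidates)), weights=probs)[0]
        err = evaluate(candidates[i])
        if err < best_err:                     # "reward" signal
            best_err = err
            probs = [p * (1 - a) for p in probs]
            probs[i] += a                      # shift probability toward i
    return candidates[max(range(len(candidates)), key=lambda j: probs[j])]

print("selected hidden units:", automaton_search([2, 4, 8, 12, 16, 24, 32]))
```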

11.
Turing Equivalence of Neural Networks with Linear Threshold Units   Cited by: 6 (self-citations: 0, by others: 6)
Regarding the computational power of neural networks, their founders already held that neural networks are equivalent to Turing machines. In 1991, Sun et al. gave a constructive proof of this equivalence, but their network was a fully connected recurrent network with second-order weights, unlike neural networks in the usual sense. This paper gives a constructive proof that neural networks built from linear threshold units can compute the partial recursive functions; since the partial recursive functions are equivalent to Turing machines, such neural networks are equivalent to Turing machines.

12.
Digit Recognition Based on Wavelet Networks and Multi-Module Networks   Cited by: 2 (self-citations: 0, by others: 2)
This paper studies a new digit recognition method that uses a wavelet neural network for feature extraction and a multi-module neural network as the pattern classifier. The wavelet neural network, which combines the function approximation power of wavelet decomposition with the learning ability of artificial neural networks, describes features well and serves as the feature extraction tool. The multi-module network converts a k-class classification problem into k independent two-class problems, decomposing a complex classification task into several simple ones; the modules run in parallel, each responsible for recognizing one class. A modified BP training method for this multi-module architecture speeds up training and improves accuracy, and the modules can be trained independently of one another. Training and testing on the US NIST digit samples gave good results, and the method can be applied to planar pattern recognition more generally.
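A hedged sketch of the modular one-vs-rest decomposition described above, with logistic units standing in for the per-module networks and a small synthetic three-class dataset standing in for the NIST digits; everything here is an assumption of this sketch.

```python
import numpy as np

def train_modules(X, labels, k, epochs=200, lr=0.5):
    """One module per class: k independent logistic units, each trained on
    its own two-class problem (class j vs. the rest)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # add bias column
    modules = []
    for j in range(k):
        t = (labels == j).astype(float)                # binary target
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):                        # independent training
            p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
            w -= lr * Xb.T @ (p - t) / len(t)
        modules.append(w)
    return np.array(modules)

def classify(X, modules):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    scores = Xb @ modules.T                            # modules run in parallel
    return scores.argmax(axis=1)                       # most confident module wins

# tiny synthetic 3-class demo
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in ([0, 0], [2, 0], [0, 2])])
y = np.repeat([0, 1, 2], 30)
W = train_modules(X, y, k=3)
print("training accuracy:", (classify(X, W) == y).mean())
```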

13.
The approximation properties of the RBF neural networks are investigated in this paper. A new approach is proposed, which is based on approximations with orthogonal combinations of functions. An orthogonalization framework is presented for the Gaussian basis functions. It is shown how to use this framework to design efficient neural networks. Using this method we can estimate the necessary number of the hidden nodes, and we can evaluate how appropriate the use of the Gaussian RBF networks is for the approximation of a given function.
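The paper's orthogonalization framework is not reproduced here; the sketch below uses the singular values of a Gaussian design matrix as a simple proxy for "how many hidden nodes are effectively needed", with the target function, centers, and width invented for illustration.

```python
import numpy as np

# Sample the target function and a pool of candidate Gaussian basis functions.
x = np.linspace(-3, 3, 200)
target = np.sinc(x)
centers = np.linspace(-3, 3, 40)                 # candidate hidden nodes
width = 0.5
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Orthogonal view of the basis: the singular values tell how many directions
# of the Gaussian dictionary are effectively independent at this sampling.
s = np.linalg.svd(Phi, compute_uv=False)
n_effective = int((s > 1e-8 * s[0]).sum())
print("effective number of basis directions:", n_effective)

# Fit by least squares and check the achievable approximation error.
coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
err = np.sqrt(np.mean((Phi @ coef - target) ** 2))
print("RMS approximation error with the full dictionary:", round(err, 4))
```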

14.
In previous works, a neural network based technique to analyze multilayered shielded microwave circuits was developed. The method is based on the approximation of the shielded media Green's functions by radial-basis-function neural networks (RBFNNs). The trained neural networks substitute for the original Green's functions during the application of the integral equation approach, allowing a faster analysis than the direct solution. In this article, new and important improvements are applied to the training of the RBFNNs, which permit a reduction in the approximation error introduced by the neural networks. Furthermore, outstanding time reductions in the analysis of printed circuits are achieved, clearly outperforming the former technique. The main improvement consists of better processing of the Green's function singularity near the source. The singularity produces rapid variations near the source that make the neural network training difficult. In this work, the singularity is extracted in a more suitable fashion than in previous works. The functions resulting from the singularity extraction present a smooth behavior, so they can be easily approximated by neural networks. In addition, a new subdivision strategy for the input space is proposed to efficiently train the neural networks. Two practical microwave filters are analyzed using the new techniques. Comparisons with measured results are also presented for validation. © 2010 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2010.
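A toy illustration of why singularity extraction helps: subtracting a known 1/r-type term leaves a smooth remainder that a small Gaussian RBF model fits far better than the raw function. The specific functions, centers, and widths here are invented, not taken from the article.

```python
import numpy as np

# A Green's-function-like response with a 1/r singularity near the source.
r = np.linspace(0.01, 1.0, 300)
G = 1.0 / r + np.cos(4 * r)                 # "full" response
smooth = G - 1.0 / r                        # remainder after extraction

centers = np.linspace(0, 1, 10)
Phi = np.exp(-((r[:, None] - centers) ** 2) / (2 * 0.1 ** 2))

coef, *_ = np.linalg.lstsq(Phi, smooth, rcond=None)
recon = Phi @ coef + 1.0 / r                # add the singular term back
print("max error with extraction   :", np.abs(recon - G).max())

coef_raw, *_ = np.linalg.lstsq(Phi, G, rcond=None)
print("max error fitting G directly:", np.abs(Phi @ coef_raw - G).max())
```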

15.
In this paper, a class of interval general bidirectional associative memory (BAM) neural networks with delays is introduced and studied, which includes many well-known neural networks as special cases. By using a fixed-point technique, we prove the existence and uniqueness of the equilibrium point for the interval general BAM neural networks with delays. By using a proper Lyapunov function, we obtain a sufficient condition ensuring the global robust exponential stability of the interval general BAM neural networks with delays, requiring only that the activation function is globally Lipschitz continuous, which is less conservative and less restrictive than the monotonicity assumption in previous results. In the last section, we also give an example to demonstrate the validity of our stability result for interval neural networks with delays.

16.
Structure Optimization of Feedforward Neural Networks Based on Genetic Algorithms   Cited by: 2 (self-citations: 0, by others: 2)
王宏刚 (Wang Honggang), 钱锋 (Qian Feng). 《控制工程》 (Control Engineering of China), 2007, 14(4): 387-390
This paper reviews recent research on using genetic algorithms (GAs) to optimize the structure of feedforward neural networks. It points out the importance of structure optimization and the shortcomings of existing methods, introduces the principles of neural network structure design, and discusses the two issues that deserve particular attention when applying a GA: the structure-encoding strategy and the design of the fitness function. Recent work on GA-based structure design of multilayer perceptrons, radial basis function neural networks, and radial basis probabilistic neural networks is reviewed and analyzed in detail, and the limitations of current work and directions for future research are identified.
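To make the two design issues concrete (encoding strategy and fitness function), here is a minimal GA sketch in which a bit string encodes which candidate hidden units are kept and the fitness is a synthetic accuracy-minus-complexity score standing in for training and validating the decoded network; none of this is taken from the reviewed papers.

```python
import random

def fitness(mask):
    """Stand-in for 'train the decoded network and score it': a saturating
    accuracy proxy minus a complexity penalty, so mid-sized structures win."""
    n_hidden = sum(mask)
    accuracy = 1.0 - 1.0 / (1 + n_hidden)
    return accuracy - 0.02 * n_hidden

def ga_structure_search(n_max=20, pop=30, gens=40, p_mut=0.05, seed=0):
    random.seed(seed)
    # Encoding strategy: one bit per candidate hidden unit (1 = keep it).
    popn = [[random.randint(0, 1) for _ in range(n_max)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness, reverse=True)
        parents = scored[: pop // 2]              # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_max)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        popn = parents + children
    return sum(max(popn, key=fitness))

print("selected hidden-layer size:", ga_structure_search())
```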

17.
Function Approximation Theory and Learning Algorithms for Feedforward Algebraic Neural Networks   Cited by: 12 (self-citations: 0, by others: 12)
The MP neuron model is generalized: polynomial algebraic neurons and polynomial algebraic neural networks are defined, and polynomial algebra is incorporated into algebraic neural networks. The function approximation capability of feedforward polynomial algebraic neural networks and its theoretical basis are analyzed, and a class of two-input, single-output, four-layer feedforward polynomial algebraic network models is designed; networks built from this model can approximate a given bivariate polynomial to a prescribed accuracy. A global learning algorithm for function approximation by polynomial algebraic neural networks in the p-adic sense is given, in which no local minima arise during learning. Examples show that the algorithm is effective and …

18.
Abstract: A key problem of modular neural networks is finding the optimal aggregation of the different subtasks (or modules) of the problem at hand. Functional networks provide a partial solution to this problem, since the inter-module topology is obtained from domain knowledge (functional relationships and symmetries). However, the learning process may be too restrictive in some situations, since the resulting modules (functional units) are assumed to be linear combinations of selected families of functions. In this paper, we present a non-parametric learning approach for functional networks using feedforward neural networks for approximating the functional modules of the resulting architecture; we also introduce a genetic algorithm for finding the optimal intra-module topology (the appropriate balance of neurons for the different modules according to the complexity of their respective tasks). Some benchmark examples from nonlinear time-series prediction are used to illustrate the performance of the algorithm for finding optimal modular network architectures for specific problems.

19.
Single-layer, continuous-time cellular neural/nonlinear networks (CNN) are considered with linear templates. The networks are programmed by the template-parameters. A fundamental question in template training or adaptation is the gradient computation or approximation of the error as a function of the template parameters. Exact equations are developed for computing the gradients. These equations are similar to the CNN network equations, i.e. they have the same neighborhood and connectivity as the original CNN network. It is shown that a CNN network, with a modified output function, can compute the gradients. Thus, fast on-line gradient computation is possible via the CNN Universal Machine, which allows on-line adaptation and training. The method for computing the gradient on-chip is investigated and demonstrated.

20.
The system energy consumption of a wireless sensor network constrains the overall capability of the whole network, and the limited energy of the nodes fundamentally affects network performance. To address the global energy consumption problem of wireless sensor networks, a systematic modeling method based on radial basis function (RBF) neural networks and a state-space representation is proposed. Taking the topology and hierarchy of the network into account, an RBF neural network is used to plan the system adaptively in real time. Since the different ways in which sensor nodes process data are closely related to energy consumption, a systematic matrix model of the whole system's energy consumption is established. Simulation analysis shows that the model can be tuned to the actual application to achieve global optimization.
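A toy sketch of the kind of state-space energy bookkeeping the abstract describes: the nodes' residual energies form the state vector, and each round subtracts a cost matrix applied to per-node activity counts. All coefficients and the random activity pattern here are invented for illustration.

```python
import numpy as np

n_nodes = 5
energy = np.full(n_nodes, 100.0)                 # initial residual energy (J)
# per-activity cost matrix B: columns = [sense, relay, aggregate] (invented values)
B = np.array([[0.05, 0.20, 0.10]] * n_nodes)

rng = np.random.default_rng(0)
for k in range(100):                             # simulate 100 rounds
    # how often each node senses / relays / aggregates this round; in a real
    # model this would follow from the network's topology and hierarchy
    u = rng.integers(0, 4, (n_nodes, 3))
    energy = energy - (B * u).sum(axis=1)        # x[k+1] = x[k] - B·u[k], per node

print("residual energy per node after 100 rounds:", np.round(energy, 1))
```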

