Similar Documents
 20 similar documents found (search time: 46 ms)
1.
周永权  赵斌 《计算机科学》2008,35(7):122-125
Functional networks are an effective generalization of neural networks proposed in recent years. Unlike neural networks, they handle general functional models: there are no weights on the connections between neurons, and the neuron functions are not fixed but are typically combinations of given basis functions. The goal of functional-network learning is to find exact or approximate expressions for the neuron functions. To date, the existence of neuron basis functions for functional networks and methods for selecting them have lacked theoretical justification. Based on partial-order theory in Banach spaces, this paper analyzes the existence of neuron basis functions for functional networks and gives a method for selecting them, providing a useful reference for completing the theoretical foundations of functional networks.
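As a minimal illustration of the idea that a functional-network neuron computes a combination of given basis functions, learning the neuron's function can reduce to solving for the combination coefficients. The basis set and function names below are assumptions made for the sketch, not the paper's selection method:

```python
import numpy as np

# Hypothetical sketch: a functional-network neuron whose function is a linear
# combination of chosen basis functions, so "learning" reduces to solving for
# the combination coefficients. The basis {1, x, x^2, sin x} is an assumption
# made for illustration.
def fit_neuron_function(x, y, basis):
    # Design matrix: one column per basis function evaluated at the samples.
    Phi = np.column_stack([phi(x) for phi in basis])
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coeffs

basis = [np.ones_like, lambda x: x, lambda x: x**2, np.sin]
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 0.5 * x**2 + np.sin(x)   # target built from the same basis
c = fit_neuron_function(x, y, basis)
# c recovers approximately [2.0, 0.0, 0.5, 1.0]
```

When the target truly lies in the span of the basis, the least-squares solve recovers the exact expression; otherwise it yields the best approximation in that basis.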

2.
Abstract: A key problem of modular neural networks is finding the optimal aggregation of the different subtasks (or modules) of the problem at hand. Functional networks provide a partial solution to this problem, since the inter-module topology is obtained from domain knowledge (functional relationships and symmetries). However, the learning process may be too restrictive in some situations, since the resulting modules (functional units) are assumed to be linear combinations of selected families of functions. In this paper, we present a non-parametric learning approach for functional networks using feedforward neural networks for approximating the functional modules of the resulting architecture; we also introduce a genetic algorithm for finding the optimal intra-module topology (the appropriate balance of neurons for the different modules according to the complexity of their respective tasks). Some benchmark examples from nonlinear time-series prediction are used to illustrate the performance of the algorithm for finding optimal modular network architectures for specific problems.

3.
Global Exponential Stability of Hopfield Neural Networks
Studies of Hopfield neural networks usually assume that the output response functions are smooth increasing functions, yet most functions encountered in practice are non-smooth. This paper therefore weakens the usual assumption that the output response functions are continuously differentiable to a Lipschitz condition. By introducing a Lyapunov function, a sufficient theorem for global exponential convergence of Hopfield neural networks is proved, and several criteria for the global exponential stability of this class of networks are derived from it. The theorem and criteria substantially improve on the main results of the recent related literature.

4.
In this paper, the state estimation problem is investigated for neural networks with time-varying delays and general activation functions. By applying Finsler's Lemma and constructing an appropriate Lyapunov-Krasovskii functional based on delay partitioning, several improved delay-dependent conditions are developed to estimate the neuron state from the available output measurements such that the error-state system is globally asymptotically stable. It is established theoretically that one special case of the obtained criteria is equivalent to an existing result with the same conservatism but fewer LMI variables. As the present conditions involve no free-weighting matrices, the computational burden is largely reduced. Two examples are provided to demonstrate the effectiveness of the theoretical results.

5.
In fields such as neuroscience and computer science, researchers use statistical models, deep learning, and other methods to explore how functional brain networks differ between states. Existing tools for studying functional brain networks, however, are mostly built to find evidence supporting a particular hypothesis or to communicate scientific findings, and suffer from limited functionality. To address this, this paper designs and implements BrainDVis, an interactive visual analysis system for functional MRI data that helps researchers find multi-faceted differences between functional brain networks in different states. BrainDVis links difference analysis of functional brain networks with analyses of network feature parameters, modular structure, and functional connectivity, and provides coordinated multi-view interaction to support autonomous exploration for differences. Experiments on a public dataset verify the feasibility and effectiveness of the system.

6.
Current analyses of complex biological networks focus either on their global statistical connectivity properties (e.g. topological path lengths and node connectivity ranks) or on the statistics of specific local connectivity circuits (motifs). Here we present a different approach, Functional Topology, to enable identification of the hidden topological and geometrical fingerprints of biological computing networks that afford their functioning: the form-function fingerprints. To do so we represent the network structure in terms of three matrices: 1. Topological connectivity matrix: each row (i) holds the shortest topological path lengths from node i to all other nodes; 2. Topological correlation matrix: element (i,j) is the correlation between the topological connectivity of nodes (i) and (j); and 3. Weighted graph matrix: here the links represent the conductance between nodes, which can simply be one over the geometrical length, the synaptic strengths in the case of neural networks, or another quantity representing connection strength. Various methods (e.g. clustering algorithms, random matrix theory, eigenvalue spectra) can be used to analyze these matrices; here we use the newly developed functional holography approach, which is based on clustering of the matrices following their collective normalization. We illustrate the approach by analyzing networks with different topological and geometrical properties: 1. artificial networks, including random, regular 4-fold and 5-fold lattice, and tree-like structures; 2. cultured neural networks: a single network and a network composed of three linked sub-networks; and 3. a model neural network composed of two overlapping sub-networks. Using these special networks, we demonstrate the method's ability to reveal functional topology features of the networks.

7.
Random vector functional link (RVFL) networks belong to a class of single-hidden-layer neural networks in which some parameters are randomly selected. Their network structure, which contains direct links between inputs and outputs, is distinctive, and stability analysis and real-time performance are two difficulties of control systems based on neural networks. In this paper, combining the advantages of RVFL with the ideas of the online sequential extreme learning machine (OS-ELM) and initial-trainin...

8.
Fourier neurons and Fourier neural networks are defined, taking a set of trigonometric Fourier basis functions as the activation functions of the hidden-layer units. A class of single-input single-output three-layer feedforward Fourier neural networks and two-input single-output four-layer feedforward Fourier neural networks, as well as odd and even Fourier neural networks, are designed. Based on trigonometric approximation theory, the trigonometric interpolation mechanism and system approximation theory of feedforward Fourier neural networks are discussed on a rigorous mathematical footing, and a learning algorithm for feedforward Fourier neural networks is given; after learning, the networks can approximate a given Fourier function to a prescribed accuracy. Simulation experiments show that the learning algorithm is efficient and has significant theoretical and practical value.
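A minimal sketch of this construction: a single-input single-output network whose hidden units are fixed trigonometric basis functions, so that training reduces to a least-squares solve for the output-layer weights. The basis order N and the function names are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

# Sketch of a "Fourier network": hidden units are the fixed trigonometric
# basis [1, cos(kx), sin(kx)], k = 1..N; only output weights are learned.
def fourier_net_fit(x, y, N):
    cols = [np.ones_like(x)]
    for k in range(1, N + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    H = np.column_stack(cols)                      # hidden-layer outputs
    w, *_ = np.linalg.lstsq(H, y, rcond=None)      # output-layer weights
    # Return a predictor that rebuilds the same basis columns.
    return lambda t: np.column_stack(
        [np.ones_like(t)]
        + [f(k * t) for k in range(1, N + 1) for f in (np.cos, np.sin)]
    ) @ w

x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
y = 1.0 + 2.0 * np.sin(x) - 0.5 * np.cos(3.0 * x)  # a Fourier-type target
net = fourier_net_fit(x, y, N=3)
max_err = np.max(np.abs(net(x) - y))               # essentially zero here
```

Because the target itself is a trigonometric polynomial of order at most N, the fit reproduces it to machine precision, matching the "prescribed accuracy" property in the abstract.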

9.
Wavelet networks
A wavelet network concept, which is based on wavelet transform theory, is proposed as an alternative to feedforward neural networks for approximating arbitrary nonlinear functions. The basic idea is to replace the neurons by 'wavelons', i.e., computing units obtained by cascading an affine transform and a multidimensional wavelet. Then these affine transforms and the synaptic weights must be identified from possibly noise-corrupted input/output data. An algorithm of backpropagation type is proposed for wavelet network training, and experimental results are reported.
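A single 'wavelon' as described above can be sketched as an affine transform cascaded with a mother wavelet; the Mexican-hat wavelet and the one-dimensional, diagonal parameterization below are illustrative assumptions, not necessarily the paper's exact choices:

```python
import numpy as np

# Sketch of a "wavelon" computing unit: an affine transform of the input
# followed by a wavelet. The Mexican-hat (second derivative of a Gaussian)
# mother wavelet is an illustrative choice.
def mexican_hat(u):
    return (1.0 - u**2) * np.exp(-u**2 / 2.0)

def wavelon(x, dilation, translation):
    # Affine transform of the input, then the mother wavelet.
    return mexican_hat((x - translation) / dilation)

def wavelet_network(x, weights, dilations, translations):
    # Network output: weighted sum of wavelon responses.
    return sum(w * wavelon(x, d, t)
               for w, d, t in zip(weights, dilations, translations))

x = np.linspace(-5.0, 5.0, 101)
y = wavelet_network(x, weights=[1.0, -0.5], dilations=[1.0, 2.0],
                    translations=[-1.0, 2.0])
```

In training, the dilations, translations, and weights would all be adjusted by the backpropagation-type algorithm the abstract mentions.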

10.
This article presents a novel classification of wavelet neural networks based on the orthogonality/non-orthogonality of neurons and the type of nonlinearity employed. On the basis of this classification different network types are studied and their characteristics illustrated by means of simple one-dimensional nonlinear examples. For multidimensional problems, which are affected by the curse of dimensionality, the idea of spherical wavelet functions is considered. The behaviour of these networks is also studied for modelling of a low-dimensional map.

11.
Research on Hidden-Layer Construction for Three-Layer Feedforward Neural Networks
For the best-square-approximation three-layer feedforward neural network model, the roles of the representation and error spaces, and of the target and loss spaces, associated with hidden-unit performance are analyzed, and selection criteria and evaluation methods for hidden units are proposed for building the hidden layer by network growth. The results show that hidden-unit selection should follow the principle that the effective component of a unit's output vector lies in the error space, avoids the loss space, and points as far as possible toward the direction of maximal energy; this conclusion is independent of which activation function the hidden units use, and even allows different hidden units to use different activation functions. Hidden-layer performance can be evaluated through a hidden-layer quality factor, a hidden-layer effectiveness coefficient, and hidden-unit redundancy, with an overall hidden-layer evaluation factor for the aggregate result.

12.
Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Pade approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.

13.
Motivated by the slow learning properties of multilayer perceptrons (MLPs) which utilize computationally intensive training algorithms, such as the backpropagation learning algorithm, and can get trapped in local minima, this work deals with ridge polynomial neural networks (RPNN), which maintain fast learning properties and powerful mapping capabilities of single layer high order neural networks. The RPNN is constructed from a number of increasing orders of Pi-Sigma units, which are used to capture the underlying patterns in financial time series signals and to predict future trends in the financial market. In particular, this paper systematically investigates a method of pre-processing the financial signals in order to reduce the influence of their trends. The performance of the networks is benchmarked against the performance of MLPs, functional link neural networks (FLNN), and Pi-Sigma neural networks (PSNN). Simulation results clearly demonstrate that RPNNs generate higher profit returns with fast convergence on various noisy financial signals.
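The building block here, a Pi-Sigma unit, is a product of linear "sigma" sums, and a ridge polynomial sums such units of increasing order. The sketch below uses random placeholder weights rather than trained values, and the names are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of a Pi-Sigma unit and a ridge polynomial network
# output; weights here are random placeholders, not trained values.
rng = np.random.default_rng(0)

def pi_sigma(x, W, b):
    # Each row of W defines one linear summing unit; the unit output is the
    # product of all summing-unit outputs (hence "Pi-Sigma").
    return np.prod(W @ x + b)

def ridge_polynomial(x, units):
    # RPNN output: sum of Pi-Sigma units of orders 1, 2, ..., k.
    return sum(pi_sigma(x, W, b) for W, b in units)

dim, max_order = 3, 3
units = [(rng.normal(size=(k, dim)), rng.normal(size=k))
         for k in range(1, max_order + 1)]
x = rng.normal(size=dim)
out = ridge_polynomial(x, units)   # scalar polynomial response
```

An order-k Pi-Sigma unit realizes a degree-k polynomial of the input with only k linear summing units, which is what gives these networks their fast-learning, high-order mapping character.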

14.
The role of activation functions in feedforward artificial neural networks has not been investigated to the desired extent. The commonly used sigmoidal functions appear as discrete points in the sigmoidal functional space. This makes comparison difficult. Moreover, these functions can be interpreted as the (suitably scaled) integral of some probability density function (generally taken to be symmetric/bell shaped). Two parameterization methods are proposed that allow us to construct classes of sigmoidal functions based on any given sigmoidal function. The suitability of the members of the proposed class is investigated. It is demonstrated that all members of the proposed class(es) satisfy the requirements to act as an activation function in feedforward artificial neural networks.

15.
The robust stability of a class of Hopfield neural networks with multiple delays and parameter perturbations is analyzed. The sufficient conditions for the global robust stability of equilibrium point are given by way of constructing a suitable Lyapunov functional. The conditions take the form of linear matrix inequality (LMI), so they are computable and verifiable efficiently. Furthermore, all the results are obtained without assuming the differentiability and monotonicity of activation functions. From the viewpoint of system analysis, our results provide sufficient conditions for the global robust stability in a manner that they specify the size of perturbation that Hopfield neural networks can endure when the structure of the network is given. On the other hand, from the viewpoint of system synthesis, our results can answer how to choose the parameters of neural networks to endure a given perturbation.

16.
In this paper, performance oriented control laws are synthesized for a class of single-input-single-output (SISO) n-th order nonlinear systems in a normal form by integrating neural network (NN) techniques and the adaptive robust control (ARC) design philosophy. All unknown but repeatable nonlinear functions in the system are approximated by the outputs of NNs to achieve a better model compensation for an improved performance. While all NN weights are tuned on-line, discontinuous projections with fictitious bounds are used in the tuning law to achieve a controlled learning. Robust control terms are then constructed to attenuate model uncertainties for a guaranteed output tracking transient performance and a guaranteed final tracking accuracy. Furthermore, if the unknown nonlinear functions are in the functional ranges of the NNs and the ideal NN weights fall within the fictitious bounds, asymptotic output tracking is achieved to retain the perfect learning capability of NNs. The precision motion control of a linear motor drive system is used as a case study to illustrate the proposed NNARC strategy.

17.
Optimizing Neuron Function Types in Functional Networks via Genetic Programming
Functional networks are a recently proposed, effective generalization of neural networks. Unlike neural networks, they handle general functional models: the neuron functions are not fixed but learnable, and there are no weights between the processing units. As with neural networks, no systematic design method yet exists that can produce a near-optimal structure for a given problem. In view of this, the design of a whole functional network is decomposed into the design of individual neurons one by one; within this framework, a genetic-programming-based design method for individual neurons is proposed that can optimize the type of the neuron functions. Simulation experiments show that the method is effective and feasible, achieving more satisfactory generalization with a smaller network.

18.
Function Approximation Theory and Learning Algorithms for Feedforward Algebraic Neural Networks
This paper generalizes the MP neuron model, defining polynomial algebraic neurons and polynomial algebraic neural networks and incorporating polynomial algebra into algebraic neural networks. The function approximation capability of feedforward polynomial algebraic neural networks and its theoretical basis are analyzed, and a class of two-input single-output four-layer feedforward polynomial algebraic neural network models is designed; networks built from this model can approximate a given bivariate polynomial to a prescribed accuracy. A global learning algorithm for polynomial algebraic neural network function approximation in the p-adic sense is given, in which no local minima arise during learning. Examples show that the algorithm is effective...

19.
Application of Artificial Neural Networks to Nonlinearity Correction of Two-Dimensional PSD Devices
A method for correcting the nonlinearity of two-dimensional PSD devices using an artificial neural network is presented. For lateral displacements of a light spot on the PSD's two-dimensional photosensitive surface, the set of true two-dimensional spot coordinates is used as the network's desired output, and the set of two-dimensional coordinates output by the PSD as the training samples. Exploiting the nonlinear mapping capability of the neural network, an approximately linear relationship between PSD input and output is established once training is complete. The results show that the corrected PSD device can perform real-time nonlinearity correction for arbitrary inputs.

20.
This study presents nonlinear system and function learning using wavelet networks. Wavelet networks resemble neural networks in training and structure, but their training algorithms require fewer iterations than those of neural networks. A Gaussian-based mother wavelet function is used as the activation function. Wavelet networks have three main kinds of parameters: dilation, translation, and connection parameters (weights), whose initial values are normally selected at random and optimized during the training (learning) phase. Because wavelet functions vanish rapidly, random selection of all initial values may be unsuitable for process modeling, so a heuristic initialization procedure has been used. In this study a serial-parallel identification model, which does not utilize feedback, is applied to system modeling: real system outputs are used for prediction of the future system outputs, so that stability and approximation of the network are guaranteed. Gradient methods with a momentum term are applied for parameter updating, and a quadratic cost function is used for error minimization. Three example problems are examined in simulation: static nonlinear functions and a discrete dynamic nonlinear system.
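The training scheme described above can be sketched for a one-wavelon network: a Gaussian-based mother wavelet, parameters (weight, dilation, translation) updated by gradient descent with a momentum term on a quadratic cost. The learning rate, iteration count, and deterministic initial values are assumptions for illustration:

```python
import numpy as np

# One-wavelon wavelet network trained by gradient descent with momentum.
def wavelet(u):
    return -u * np.exp(-u**2 / 2.0)       # first derivative of a Gaussian

def d_wavelet(u):
    return (u**2 - 1.0) * np.exp(-u**2 / 2.0)

x = np.linspace(-2.0, 2.0, 64)
target = 0.8 * wavelet((x - 0.3) / 1.5)   # synthetic "system" to identify

w, d, t = 0.0, 1.0, 0.0                   # heuristic initial parameters
vel = np.zeros(3)                         # momentum accumulator
lr, mom = 0.1, 0.9

for _ in range(2000):
    u = (x - t) / d
    err = w * wavelet(u) - target
    # Gradients of the quadratic cost 0.5 * mean(err^2)
    gw = np.mean(err * wavelet(u))
    gd = np.mean(err * w * d_wavelet(u) * (-(x - t) / d**2))
    gt = np.mean(err * w * d_wavelet(u) * (-1.0 / d))
    vel = mom * vel - lr * np.array([gw, gd, gt])
    w, d, t = w + vel[0], d + vel[1], t + vel[2]

mse = np.mean((w * wavelet((x - t) / d) - target) ** 2)
```

Because the wavelet vanishes rapidly away from its center, an initial translation far from the data would leave the gradients near zero, which is the motivation for the heuristic initialization the abstract mentions.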
