Similar Documents
20 similar documents found.
1.
An exact method is proposed for computing the design sensitivities and Hessian matrices of the seismic forces acting on planar frame structures. Based on the finite element method and the Newmark-β method, formulas for the design sensitivities and Hessian matrices of the seismic forces of planar frame structures are derived, and a program implementing them is written in MATLAB, realizing exact computation of these quantities. Finally, a two-story planar frame example is given; the numerical results show that the proposed method for computing the design sensitivities and Hessian matrices of the seismic forces is both effective and efficient.
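The abstract leaves the derivation to the paper itself; as context for the dynamics that the sensitivity formulas build on, here is a minimal sketch of Newmark-β time integration for a linear multi-degree-of-freedom system. It is a generic integrator, not the paper's sensitivity or Hessian computation, and all matrices and parameters are placeholders.

```python
import numpy as np

def newmark_beta(M, C, K, F, dt, beta=0.25, gamma=0.5):
    """Newmark-beta integration of M u'' + C u' + K u = F(t) from rest.

    F: (n_steps, n_dof) array of force samples; returns displacement history.
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration rule.
    """
    n_steps, n_dof = F.shape
    u = np.zeros((n_steps, n_dof))
    v = np.zeros(n_dof)
    a = np.linalg.solve(M, F[0])  # initial acceleration from equilibrium
    # Effective stiffness is constant for a linear system; form it once.
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    for i in range(1, n_steps):
        rhs = (F[i]
               + M @ (u[i-1] / (beta * dt**2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + C @ (gamma / (beta * dt) * u[i-1] + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u[i] = np.linalg.solve(K_eff, rhs)
        a_new = ((u[i] - u[i-1]) / (beta * dt**2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        a = a_new
    return u
```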

2.
Smooth function approximation using neural networks
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
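As a rough illustration of the "linear weight equations" idea, the sketch below matches a batch of input-output data exactly by fixing the hidden-layer parameters and solving a single linear system for the output weights. The random input weights and tanh activation are assumptions made for the sketch; the paper's four algorithms also cover gradient matching, which is omitted here.

```python
import numpy as np

def algebraic_output_weights(X, y, W_in, b):
    """Exact input-output matching via a linear solve, assuming the hidden-layer
    parameters (W_in, b) are held fixed. With one hidden sigmoid per training
    point and a well-conditioned hidden-layer matrix, the fit is exact."""
    S = np.tanh(X @ W_in.T + b)  # hidden-layer responses, shape (p, m)
    w_out, *_ = np.linalg.lstsq(S, y, rcond=None)
    return w_out

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 2))       # 20 samples of a 2-D input (illustrative)
y = np.sin(X[:, 0]) + X[:, 1] ** 2    # smooth target function
W_in = rng.normal(size=(20, 2))       # one hidden node per sample, fixed weights
b = rng.normal(size=20)
w = algebraic_output_weights(X, y, W_in, b)
print(np.max(np.abs(np.tanh(X @ W_in.T + b) @ w - y)))  # residual ~ 0
```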

3.
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD, and principal component analysis (PCA) as special cases. The analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation matrix and therefore cannot be applied to this case.
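A hedged sketch of the reduced-rank least-squares solution the analysis centers on, computed directly with a pseudoinverse and an SVD rather than by training a two-layer linear network; all variable names and shapes are illustrative only.

```python
import numpy as np

def reduced_rank_ls(X, T, r):
    """Rank-r minimizer of ||T - W X||_F, with inputs X (n x p, samples as
    columns) and teachers T (m x p). Uses the pseudoinverse, so the input
    autocorrelation X X^T need not be invertible."""
    W_full = T @ np.linalg.pinv(X)   # full-rank least-squares map
    Y_hat = W_full @ X               # fitted outputs
    U, _, _ = np.linalg.svd(Y_hat, full_matrices=False)
    Ur = U[:, :r]                    # top-r components of the fitted signal
    return Ur @ (Ur.T @ W_full)      # project the map onto that subspace
```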

4.
The essential order of approximation for neural networks
There have been various studies on the approximation ability of feedforward neural networks (FNNs). Most existing studies, however, are concerned only with density or upper-bound estimation of how well a multivariate function can be approximated by an FNN, and consequently the essential approximation ability of an FNN cannot be revealed. In this paper, by establishing both upper- and lower-bound estimates on the approximation order, the essential approximation ability (namely, the essential approximation order) of a class of FNNs is clarified in terms of the modulus of smoothness of the functions to be approximated. The involved FNNs can not only approximate any continuous or integrable function defined on a compact set arbitrarily well, but also provide an explicit lower bound on the number of hidden units required. By making use of multivariate approximation tools, it is shown that when the functions to be approximated are Lipschitzian with order up to 2, the approximation speed of the FNNs is uniquely determined by the modulus of smoothness of those functions.

5.
Let $SF_d$ be the set of periodic, Lebesgue square-integrable functions and $\Pi_{\psi,n,d} = \{\sum_{j=1}^{n} b_j\,\psi(\omega_j \cdot x + \theta_j) : b_j, \theta_j \in \mathbb{R},\ \omega_j \in \mathbb{R}^d\}$ the set of feedforward neural network (FNN) functions. Denote by $\mathrm{dist}(SF_d, \Pi_{\psi,n,d})$ the deviation of the set $SF_d$ from the set $\Pi_{\psi,n,d}$. A main purpose of this paper is to estimate this deviation. In particular, based on Fourier transforms and the theory of approximation, a lower estimate for $\mathrm{dist}(SF_d, \Pi_{\psi,n,d})$ is proved, of the form $\mathrm{dist}(SF_d, \Pi_{\psi,n,d}) \geq C (n \log_2 n)^{-1/2}$. …

6.
Optimized approximation algorithm in neural networks without overfitting
In this paper, an optimized approximation algorithm (OAA) is proposed to address the overfitting problem in function approximation using neural networks (NNs). The optimized approximation algorithm avoids overfitting by means of a novel and effective stopping criterion based on the estimation of the signal-to-noise-ratio figure (SNRF). Using SNRF, which checks the goodness-of-fit in the approximation, overfitting can be automatically detected from the training error only, without use of a separate validation set. The algorithm has been applied to problems of optimizing the number of hidden neurons in a multilayer perceptron (MLP) and optimizing the number of learning epochs in MLP's backpropagation training, using both synthetic and benchmark data sets. The OAA can also be utilized in the optimization of other parameters of NNs. In addition, it can be applied to the problem of function approximation using any kind of basis functions, or to the problem of learning model selection when overfitting needs to be considered.
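The SNRF itself is defined in the paper; as a stand-in, the sketch below uses the lag-one autocorrelation of the training residual to illustrate the underlying idea: stop training, or stop adding neurons, once the remaining error looks statistically like noise, with no validation set required. The threshold here is an arbitrary placeholder, not the paper's noise-derived bound.

```python
import numpy as np

def residual_snr_figure(residual):
    """Stand-in for the paper's SNRF: ratio of correlated ('signal-like') to
    total energy in the training residual, estimated from the lag-one
    autocorrelation. A value near zero means the residual looks like noise."""
    e = residual - residual.mean()
    num = np.dot(e[:-1], e[1:])
    den = np.dot(e, e) + 1e-12
    return num / den

def should_stop(residual, threshold=0.1):
    """Stop growing the model or training further once the figure falls
    below a noise threshold, instead of consulting a validation set."""
    return abs(residual_snr_figure(residual)) < threshold
```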

7.
The ability of a neural network to learn from experience can be viewed as closely related to its approximating properties. By assuming that the environment is essentially stochastic, it follows that neural networks should be able to approximate stochastic processes. The aim of this paper is to show that some classes of artificial neural networks exist such that they are capable of providing the approximation, in the mean-square sense, of prescribed stochastic processes with arbitrary accuracy. The networks so defined constitute a new model for neural processing and extend previous results concerning the approximating capabilities of artificial neural networks.

8.
In this paper, we propose the approximate transformable technique, which includes direct and indirect transformation, to obtain Chebyshev-Polynomials-Based (CPB) unified model neural networks for feedforward/recurrent neural networks via Chebyshev polynomial approximation. Based on this approximate transformable technique, we derive the relationship between single-layer neural networks and multilayer perceptron neural networks. It is shown that the CPB unified model neural networks can be represented as functional-link networks based on Chebyshev polynomials, and those networks use the recursive least-squares method with forgetting factor as the learning algorithm. It turns out that the CPB unified model neural networks not only have the same capability as a universal approximator, but also have faster learning speed than conventional feedforward/recurrent neural networks. Furthermore, we derive the condition under which the unified model generated by Chebyshev polynomials is optimal in the sense of least-squares error approximation in the single-variable case. Computer simulations show that the proposed method does have the capability of a universal approximator for some function approximation tasks, with a considerable reduction in learning time.
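A minimal sketch of a Chebyshev functional-link model trained by recursive least squares with a forgetting factor, which is the learning scheme the abstract names. The scalar-input setting, the initialization constants, and the forgetting-factor value are assumptions for illustration.

```python
import numpy as np

def chebyshev_features(x, order):
    """Chebyshev polynomials T_0..T_order of a scalar input in [-1, 1],
    via the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(order - 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:order + 1], axis=-1)

class RLSFunctionalLink:
    """Single-layer functional-link net on Chebyshev features, trained by
    recursive least squares with forgetting factor lam."""
    def __init__(self, order, lam=0.99, delta=100.0):
        self.w = np.zeros(order + 1)
        self.P = delta * np.eye(order + 1)   # inverse-correlation estimate
        self.lam = lam
        self.order = order

    def update(self, x, y):
        phi = chebyshev_features(np.asarray(x, dtype=float), self.order).ravel()
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        err = y - self.w @ phi                              # a-priori error
        self.w += k * err
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return err
```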

9.
On global-local artificial neural networks for function approximation
We present a hybrid radial basis function (RBF) sigmoid neural network with a three-step training algorithm that utilizes both global search and gradient descent training. The algorithm used is intended to identify global features of an input-output relationship before adding local detail to the approximating function. It aims to achieve efficient function approximation through the separate identification of aspects of a relationship that are expressed universally from those that vary only within particular regions of the input space. We test the effectiveness of our method using five regression tasks; four use synthetic datasets while the last problem uses real-world data on the wave overtopping of seawalls. It is shown that the hybrid architecture is often superior to architectures containing neurons of a single type in several ways: lower mean square errors are often achievable using fewer hidden neurons and with less need for regularization. Our global-local artificial neural network (GL-ANN) is also seen to compare favorably with both perceptron radial basis net and regression tree derived RBFs. A number of issues concerning the training of GL-ANNs are discussed: the use of regularization, the inclusion of a gradient descent optimization step, the choice of RBF spreads, model selection, and the development of appropriate stopping criteria.

10.
Function approximation capability of intuitionistic fuzzy neural networks
Using intuitionistic fuzzy set theory, a control model of an adaptive neuro-intuitionistic fuzzy inference system (ANIFIS) is established and proved to be a universal approximator. First, a Zadeh fuzzy inference neural network is converted into an intuitionistic fuzzy inference network, and a multi-input single-output T-S-type ANIFIS model is built. Next, membership functions for the system variables and the inference rules are designed, the input-output relations of each layer are determined, and the composite expression for the system output is derived. Finally, the universal approximation property is established by proving that the model's output expression satisfies the three hypotheses of the Stone-Weierstrass theorem.
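For orientation, here is a minimal first-order Takagi-Sugeno inference step of the kind such a model composes its output from; an intuitionistic system additionally carries non-membership degrees, which this sketch omits, and all shapes and names are hypothetical.

```python
import numpy as np

def ts_fuzzy_output(x, centers, widths, A, b):
    """First-order Takagi-Sugeno inference: Gaussian rule firing strengths,
    normalized and used to blend per-rule linear consequents A_i x + b_i.
    x: (d,); centers, widths, A: (R, d); b: (R,) for R rules."""
    w = np.exp(-np.sum((x - centers) ** 2 / (2 * widths ** 2), axis=1))
    w = w / (w.sum() + 1e-12)     # normalized firing strengths
    rule_outputs = A @ x + b      # (R,) linear consequents
    return w @ rule_outputs       # weighted blend = system output
```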

11.
Back AD, Chen T. Neural Computation, 2002, 14(11): 2561-2566.
Recently, there has been interest in the observed capabilities of some classes of neural networks with fixed weights to model multiple nonlinear dynamical systems. While this property has been observed in simulations, open questions exist as to how this property can arise. In this article, we propose a theory that provides a possible mechanism by which this multiple modeling phenomenon can occur.

12.
The use of computer-readable visual codes has become common in our everyday life, both in industrial environments and in private use. The reading process for visual codes consists of two steps: localization and data decoding. In this paper we examine the localization step using conventional and deep rectifier neural networks. These are also evaluated in the discrete cosine transform (DCT) domain and shown to be efficient there, which makes full decompression unnecessary for setups involving JPEG images. This approach is also efficient from the storage and computational-cost viewpoints, since camera hardware can in many cases provide a JPEG stream directly as output. The use of neural networks implemented on graphics processing units allows real-time automatic localization of code objects. In our earlier studies, the proposed approach was evaluated on the most popular code type, the quick response (QR) code, as well as some other 2D codes. Here, we show that deep rectifier networks are also suitable for 1D barcode localization, and we present an extensive evaluation and comparison with state-of-the-art approaches.

13.
This paper presents a computationally efficient algorithm for function approximation with piecewise linear sigmoidal nodes. A one-hidden-layer network is constructed one node at a time using the well-known method of fitting the residual. The task of fitting an individual node is accomplished using a new algorithm that searches for the best fit by solving a sequence of quadratic programming problems. This approach offers significant advantages over derivative-based search algorithms (e.g., backpropagation and its extensions). Unique characteristics of this algorithm include: finite step convergence, a simple stopping criterion, solutions that are independent of initial conditions, good scaling properties and a robust numerical implementation. Empirical results are included to illustrate these characteristics.
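A hedged sketch of the overall cascade: nodes are added one at a time to fit the current residual. The paper fits each node by solving a sequence of quadratic programs; this sketch substitutes a simple screen over random candidate nodes with a closed-form output weight, so it illustrates the construction, not the paper's search.

```python
import numpy as np

def pwl_sigmoid(z):
    """Piecewise-linear sigmoidal node: a clipped ramp."""
    return np.clip(z, -1.0, 1.0)

def greedy_residual_fit(X, y, n_nodes, n_candidates=200, rng=None):
    """Build a one-hidden-layer net node by node, each node fitting the
    residual left by its predecessors. Candidate directions are screened
    at random (a stand-in for the paper's QP search)."""
    rng = np.random.default_rng(rng)
    residual = y.astype(float).copy()
    nodes = []
    for _ in range(n_nodes):
        best = None
        for _ in range(n_candidates):
            w = rng.normal(size=X.shape[1])
            b = rng.normal()
            h = pwl_sigmoid(X @ w + b)
            denom = h @ h
            if denom < 1e-12:
                continue
            c = (h @ residual) / denom  # optimal output weight for this node
            drop = c * c * denom        # squared-error reduction achieved
            if best is None or drop > best[0]:
                best = (drop, w, b, c)
        _, w, b, c = best
        residual -= c * pwl_sigmoid(X @ w + b)
        nodes.append((w, b, c))
    return nodes
```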

14.
The paper describes a novel application of B-spline membership functions (BMFs) and fuzzy neural networks to function approximation with outliers in the training data. Based on a robust objective function, we use the gradient descent method to derive new learning rules for the weighting values and BMFs of the fuzzy neural network for robust function approximation. During the learning process, the robust objective function takes effect and the approximated function gradually becomes unaffected by the erroneous training data. As a result, the robust function approximation rapidly converges to within the desired tolerable error; in other words, the number of learning iterations decreases greatly. We realize function approximation not only in one dimension (curves), but also in two dimensions (surfaces). Several examples are simulated in order to confirm the efficiency and feasibility of the proposed approach.
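As a generic illustration of robust learning, the sketch below runs gradient descent on a Huber objective for a linear-in-weights model, so outlier residuals exert only bounded influence on each update. The Huber loss is a stand-in for the paper's robust objective, and the basis-function matrix is a placeholder for the BMF network outputs.

```python
import numpy as np

def huber_grad(e, delta=1.0):
    """Gradient of the Huber loss: quadratic near zero, linear in the tails,
    so large outlier residuals get only a bounded influence on the update."""
    return np.where(np.abs(e) <= delta, e, delta * np.sign(e))

def robust_fit(Phi, y, lr=0.01, epochs=2000, delta=1.0):
    """Gradient descent on a Huber objective for a linear-in-weights model
    y ~ Phi @ w (Phi: basis-function outputs, one row per sample)."""
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        e = Phi @ w - y
        w -= lr * Phi.T @ huber_grad(e, delta) / len(y)
    return w
```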

15.
We present a type of single-hidden-layer feedforward wavelet neural network. First, we give a new, quantitative proof of the fact that a single-hidden-layer wavelet neural network with n + 1 hidden neurons can interpolate n + 1 distinct samples with zero error. Then, without training, we construct a wavelet neural network $X_a(x, A)$ that can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. The given wavelet neural network can uniformly approximate any continuous function of one variable.
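A minimal sketch of zero-error interpolation with one wavelet neuron per sample: center a translated wavelet at each sample and solve the square linear system for the output weights. The Mexican-hat wavelet, the common scale, and the assumption that the system is nonsingular are all choices made for the sketch; the paper's constructive network needs no solve at all.

```python
import numpy as np

def mexican_hat(t):
    """'Mexican hat' mother wavelet (negative second derivative of a Gaussian,
    up to normalization)."""
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

def wavelet_interpolant(x_samples, y_samples, scale=1.0):
    """Interpolate n+1 distinct samples with n+1 hidden wavelet neurons, one
    centered at each sample. Assumes the resulting matrix is nonsingular."""
    G = mexican_hat((x_samples[:, None] - x_samples[None, :]) / scale)
    coeffs = np.linalg.solve(G, y_samples)
    def f(x):
        X = np.atleast_1d(x)
        return mexican_hat((X[:, None] - x_samples[None, :]) / scale) @ coeffs
    return f
```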

16.
Multilayer feedforward small-world neural networks and their function approximation
Drawing on results from complex network research, we study a network model that lies structurally between regular and randomly connected neural networks: the multilayer feedforward small-world neural network. First, the connections of a regular multilayer feedforward network are rewired with probability p to construct the new model; analysis of its characteristic parameters shows that for 0 < p < 1 its clustering coefficient differs from that of the Watts-Strogatz model. The network is then described by a six-tuple model. Finally, small-world networks with different values of p are applied to function approximation. Simulations show that the network attains its best approximation performance at p = 0.1, and convergence comparisons show that at this value it outperforms regular and random networks of the same size in convergence behavior and approximation speed.
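As a sketch of the rewiring step, the code below starts from a regular layered topology and rewires each edge with probability p to a neuron in a later layer, creating small-world shortcut connections. Treating shortcuts as strictly forward links and using an edge-list representation are assumptions made for illustration.

```python
import numpy as np

def rewire_feedforward(layer_sizes, p, rng=None):
    """Start from a regular multilayer feedforward net (each neuron wired to
    every neuron in the next layer) and, Watts-Strogatz style, rewire each
    edge with probability p to a randomly chosen neuron in a *later* layer.
    Neurons are indexed globally; returns the rewired edge list."""
    rng = np.random.default_rng(rng)
    offsets = np.cumsum([0] + list(layer_sizes))
    edges = []
    for l in range(len(layer_sizes) - 1):
        for src in range(offsets[l], offsets[l + 1]):
            for dst in range(offsets[l + 1], offsets[l + 2]):
                # Shortcuts are possible only if layers beyond l+1 exist.
                if l + 2 < len(layer_sizes) and rng.random() < p:
                    dst = int(rng.integers(offsets[l + 2], offsets[-1]))
                edges.append((src, dst))
    return edges

print(len(rewire_feedforward([3, 5, 5, 1], p=0.1, rng=0)))  # 45 edges total
```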

17.
A feedforward Sigma-Pi neural network with a single hidden layer of $m$ neurons computes $\sum_{j=1}^{m} c_j\, g\!\left(\prod_{k=1}^{n} \frac{x_k - \theta_k^j}{\lambda_k^j}\right)$, where $c_j, \theta_k^j, \lambda_k^j \in \mathbb{R}$. We investigate the approximation of arbitrary functions $f: \mathbb{R}^n \to \mathbb{R}$ by a Sigma-Pi neural network in the $L^p$ norm. An $L^p$ locally integrable function $g(t)$ can approximate any given function if and only if $g(t)$ cannot be written in the form $\sum_{j=1}^{n}\sum_{k=0}^{m} \alpha_{jk} (\ln|t|)^{j-1} t^k$.
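A direct transcription of the network's output formula; the tanh activation and the array shapes are illustrative assumptions, since the paper characterizes which activations $g$ work rather than fixing one.

```python
import numpy as np

def sigma_pi(x, c, theta, lam, g=np.tanh):
    """Output of a Sigma-Pi network:
    sum_j c_j * g( prod_k (x_k - theta_k^j) / lam_k^j ).
    x: (n,) input; c: (m,) output weights; theta, lam: (m, n) node parameters."""
    return c @ g(np.prod((x - theta) / lam, axis=1))
```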

18.
This paper is aimed at exposing the reader to certain aspects of the design of best approximants with Gaussian radial basis functions (RBFs). The class of functions to which this approach applies consists of those compactly supported in frequency. The approximation properties of uniqueness and existence are restricted to this class. Functions that are smooth enough can be expanded in Gaussian series converging uniformly to the objective function. The uniqueness of these series is demonstrated in the context of an orthonormal basis in a Hilbert space. Furthermore, the best approximation to a given band-limited function by a truncated Gaussian series is analyzed by an energy-based argument. This analysis not only gives a theoretical proof of the existence of best approximations but also addresses the problem of architectural selection. Specifically, guidance for selecting the variance and the oversampling parameters is provided for practitioners.
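As a practical illustration, the sketch below fits a truncated Gaussian series to samples of a band-limited function by least squares, exposing the two design knobs the paper analyzes: the common variance and the oversampling factor. The default heuristic for sigma is a placeholder, not the paper's selection rule.

```python
import numpy as np

def gaussian_series_fit(f, band_limit, n_terms, oversample=2.0, sigma=None):
    """Least-squares fit of a truncated Gaussian series to samples of a
    band-limited function f on [-1, 1]. Returns centers, sigma, coefficients."""
    centers = np.linspace(-1, 1, n_terms)
    if sigma is None:
        # Heuristic tying the width to the band limit and oversampling;
        # illustrative only, not the paper's guidance.
        sigma = 1.0 / (oversample * band_limit)
    x = np.linspace(-1, 1, int(oversample * 10 * n_terms))  # dense sample grid
    Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))
    coeffs, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
    return centers, sigma, coeffs
```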

19.
Based on fuzzy arithmetic between polygonal fuzzy numbers and a new extension principle, a new fuzzy neural network model is established. It is proved that, when the inputs are negative fuzzy numbers, the corresponding three-layer feedforward polygonal fuzzy network can serve as a universal approximator of continuous fuzzy functions, and the equivalent conditions that such continuous fuzzy functions must satisfy are given. Finally, a simulation example is presented.

20.
Applied Soft Computing, 2008, 8(1): 488-498.
The main purpose of this paper is to develop fuzzy polynomial neural networks (FPNN) to predict the compressive strength of concrete. Two different FPNN architectures are addressed (Type 1 and Type 2) and their training methods are discussed. In this research, the proposed FPNN is a combination of fuzzy neural networks (FNNs) and polynomial neural networks (PNNs): the FNN implements the premises (If part) of the fuzzy model, while the PNN is implemented as its consequence (Then part). To enhance the performance of the network, backpropagation (BP) and least-squares error (LSE) algorithms are utilized to tune the system. Six different FPNN architectures are constructed, trained, and tested using experimental data from 458 different concrete mix designs collected from three distinct sources. The data are organized in a format of six input parameters (the concrete ingredients) and one output (the 28-day compressive strength of the mix design). Using root mean square (RMS) error and correlation factors (CFs), the models are evaluated and compared on training and testing data pairs. The results show that FPNN-Type 1 has strong potential as a feasible tool for predicting the compressive strength of concrete mix designs, whereas FPNN-Type 2 is found to be infeasible for this purpose.
