Similar Documents
20 similar documents found (search time: 31 ms)
1.
There have been a number of contributions to the literature on a class of structural analysis methods referred to as nonlinear flexibility methods. These methods appear to perform very well compared to classical stiffness approaches for problems with constitutive nonlinearities. Although most of these methods appeal to variational principles, the exact variational basis of these methods has not been entirely clear. Some of them even seem not to be variationally consistent. We show in this paper that, because the equations of equilibrium and kinematics are directly integrable, a nonlinear flexibility method (in the spirit of those presented in the literature) can be derived without appeal to variational principles. The method does not involve interpolation of the displacement field, and the accuracy of the method is limited only by the numerical scheme used to perform element integrals. There is no need for h-refinement to improve accuracy. Further, we show that this nonlinear flexibility method is essentially identical, with some subtle algorithmic differences, to a two-field (Hellinger-Reissner) variational principle when the stress interpolation is exact (which is possible for this class of problems). We demonstrate the utility of the nonlinear flexibility method by applying it to a problem involving cyclic inelastic loading wherein the strain fields evolve into functions that are difficult to capture through interpolation.

2.
Describes a procedure that enables researchers to estimate nonlinear and interactive effects of latent variables in structural equation models. Given that the latent variables are normally distributed, the parameters of such models can be estimated. To do this, products of the measured variables are used as indicators of latent product variables. Estimation must be done using a procedure that allows nonlinear constraints on parameters. The procedure is demonstrated in 3 examples. The first two examples use artificial data with known parameter values. These parameters are successfully recovered by the procedure. The final complex example uses national election survey data. (14 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
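A minimal numerical sketch of the product-indicator idea: products of measured indicators track the latent product term. All data, loadings, and noise levels below are hypothetical; actual use requires an SEM estimator supporting nonlinear parameter constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical latent variables (normally distributed, as the method assumes).
xi1 = rng.normal(size=n)
xi2 = rng.normal(size=n)

# Measured indicators: latent value plus measurement error.
x1 = xi1 + rng.normal(scale=0.3, size=n)
x2 = xi2 + rng.normal(scale=0.3, size=n)

# The product of measured indicators serves as an indicator of the
# latent product variable xi1 * xi2.
x1x2 = x1 * x2

# The product indicator tracks the latent product (up to measurement error).
r = np.corrcoef(x1x2, xi1 * xi2)[0, 1]
print(round(r, 2))
```

With modest measurement error, the correlation between the product indicator and the latent product is high, which is what makes the indicator usable in the structural model.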

3.
A thermoelastic model for analyzing laminated composite plates under both mechanical and thermal loadings is constructed by the variational asymptotic method. The original three-dimensional nonlinear thermoelasticity problem is formulated based on a set of intrinsic variables defined on the reference plane and for arbitrary deformation of the normal line. Then the variational asymptotic method is used to rigorously split the three-dimensional problem into two problems: a nonlinear, two-dimensional plate analysis over the reference plane to obtain the global deformation, and a linear analysis through the thickness to provide the two-dimensional generalized constitutive law and the recovering relations that approximate the original three-dimensional results. The nonuniqueness of asymptotic theory correct up to a certain order is used to cast the obtained asymptotically correct second-order free energy into a Reissner–Mindlin type model to account for transverse shear deformation. The present theory is implemented in the computer program Variational Asymptotic Plate and Shell Analysis (VAPAS). Results from VAPAS for several cases have been compared with exact thermoelasticity solutions, classical lamination theory, and first-order shear-deformation theory to demonstrate the accuracy and power of the proposed theory.

4.
An analytical approach is presented for determining the response of a neuron, or of the activity in a network of connected neurons, represented by systems of nonlinear ordinary stochastic differential equations: the FitzHugh-Nagumo system with Gaussian white noise current. For a single neuron, five equations hold for the first- and second-order central moments of the voltage and recovery variables. From this system we obtain, under certain assumptions, five differential equations for the means, variances, and covariance of the two components. One may use these quantities to estimate the probability that a neuron is emitting an action potential at any given time. The differential equations are solved by numerical methods. We also perform simulations on the stochastic FitzHugh-Nagumo system and compare the results with those obtained from the differential equations for both sustained and intermittent deterministic current inputs with superimposed noise. For intermittent currents, which mimic synaptic input, the agreement between the analytical and simulation results for the moments is excellent. For sustained input, the analytical approximations perform well for small noise, as there is excellent agreement for the moments. In addition, the probability that a neuron is spiking as obtained from the empirical distribution of the potential in the simulations gives a result almost identical to that obtained using the analytical approach. However, when there is sustained large-amplitude noise, the analytical method is only accurate for short time intervals. Using the simulation method, we study the distribution of the interspike interval directly from simulated sample paths. We confirm that noise extends the range of input currents over which (nonperiodic) spike trains may exist and investigate the dependence of such firing on the magnitude of the mean input current and the noise amplitude.
For networks we find the differential equations for the means, variances, and covariances of the voltage and recovery variables and show how solving them leads to an expression for the probability that a given neuron, or given set of neurons, is firing at time t. Using such expressions one may implement dynamical rules for changing synaptic strengths directly without sampling. The present analytical method applies equally well to temporally nonhomogeneous input currents and is expected to be useful for computational studies of information processing in various nervous system centers.
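The simulation side of this comparison can be sketched with an Euler-Maruyama scheme for the stochastic FitzHugh-Nagumo system, estimating the first- and second-order moments from an ensemble of sample paths. The parameter values below are common textbook choices, not those used in the paper.

```python
import numpy as np

def simulate_fhn(I=0.5, sigma=0.2, a=0.7, b=0.8, eps=0.08,
                 dt=0.01, T=50.0, n_paths=200, seed=1):
    """Euler-Maruyama simulation of the stochastic FitzHugh-Nagumo system
    with Gaussian white noise on the input current (illustrative
    parameters, not the paper's)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    v = np.full(n_paths, -1.0)   # voltage variable
    w = np.full(n_paths, -0.5)   # recovery variable
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
        v = v + (v - v**3 / 3 - w + I) * dt + sigma * dW
        w = w + eps * (v + a - b * w) * dt
    return v, w

v, w = simulate_fhn()
# Ensemble estimates of the mean, variance, and covariance: the
# quantities the paper's moment differential equations approximate.
print(v.mean(), v.var(), np.cov(v, w)[0, 1])
```

Comparing these ensemble moments against solutions of the five moment ODEs is exactly the kind of check the abstract describes for sustained input with noise.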

5.
Top-down learning of low-level vision tasks
Perceptual tasks such as edge detection, image segmentation, lightness computation and estimation of three-dimensional structure are considered to be low-level or mid-level vision problems and are traditionally approached in a bottom-up, generic and hard-wired way. An alternative to this would be to take a top-down, object-class-specific and example-based approach. In this paper, we present a simple computational model implementing the latter approach. The results generated by our model when tested on edge-detection and view-prediction tasks for three-dimensional objects are consistent with human perceptual expectations. The model's performance is highly tolerant to the problems of sensor noise and incomplete input image information. Results obtained with conventional bottom-up strategies show much less immunity to these problems. We interpret the encouraging performance of our computational model as evidence in support of the hypothesis that the human visual system may learn to perform supposedly low-level perceptual tasks in a top-down fashion.

6.
Structural equation models are commonly used to analyze 2-mode data sets, in which a set of objects is measured on a set of variables. The underlying structure within the object mode is evaluated using latent variables, which are measured by indicators coming from the variable mode. Additionally, when the objects are measured under different conditions, 3-mode data arise, and with this, the simultaneous study of the correlational structure of 2 modes may be of interest. In this article the authors present a model with a simultaneous latent structure for 2 of the 3 modes of such a data set. They present an empirical illustration of the method using a 3-mode data set (person by situation by response) exploring the structure of anger and irritation across different interpersonal situations as well as across persons.

7.
Abrupt transitions are a widespread phenomenon in engineering practice. When the state of a system changes discontinuously, traditional calculus-based mathematical modeling methods have low accuracy, and machine-learning algorithms such as artificial neural networks cannot offer a reasonable explanation of the abrupt change. The cusp catastrophe model, grounded in catastrophe theory, can explain discontinuous changes in system state; however, when the input variables are high-dimensional, the traditional cusp catastrophe model becomes complex and inaccurate. To address this problem, a two-step construction method for a cusp catastrophe model based on variable selection is proposed. In the first step, a multi-model ensemble important-variable selection algorithm (MEIVS) is used to quantify the importance of the candidate variables and extract the important ones; in the second step, a cusp catastrophe model is built from the extracted variables using maximum likelihood estimation (MLE). Simulation results show that, on data sets with catastrophe characteristics, the cusp catastrophe model reduced in dimension by MEIVS outperforms the linear model, the logistic model, and cusp catastrophe models reduced by other methods on the evaluation metrics, and it can be used to explain discontinuous changes in the object under study.
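The discontinuous behavior that the cusp model captures can be illustrated by counting the real equilibria of the canonical cusp dynamics. This is an illustrative sketch of the underlying catastrophe geometry, not the MEIVS/MLE procedure of the abstract.

```python
import numpy as np

def cusp_equilibria(a, b, tol=1e-9):
    """Real equilibria of the canonical cusp model dx/dt = -(x**3 + a*x + b),
    i.e. the real roots of x**3 + a*x + b = 0 (canonical form, not a
    fitted model)."""
    roots = np.roots([1.0, 0.0, a, b])
    return sorted(r.real for r in roots if abs(r.imag) < tol)

# Inside the bifurcation set (4*a**3 + 27*b**2 < 0) there are three
# equilibria, so the state can jump discontinuously between the outer two.
print(len(cusp_equilibria(-3.0, 0.0)))   # bistable region
print(len(cusp_equilibria(3.0, 0.0)))    # single-equilibrium region
```

As the control parameters (a, b) cross the bifurcation set, two equilibria merge and vanish, which is the mechanism behind the sudden state jumps the model is used to explain.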

8.
One of the oldest problems in visual perception is the definition of the basic elements of form, shape, and texture. In the past 20 years the question has been focused by the observation that human vision can be separated into two systems. Neisser first made popular the terms preattentive and attentive to characterize this division. The psychophysical experiments I have conducted over the past two years have probed the preattentive system to determine if it is as simple-minded as present theories suggest. One implication of this research involves rethinking theories of preattentive vision. To date, this system has been thought to be directly linked to what Marr called the "primal sketch" (a retinal-based image-intensity map of visual features such as blobs and blob intersections). My research suggests that it may be more closely tied to Marr's "2 1/2D sketch" (a retinally-based relief map of object features such as surfaces slanted in depth). A second implication of this research concerns the neural implementation of detectors for slant based on shading and texture. So far we know a great deal about the tilt (two-dimensional orientation) sensitivity of single units in visual cortex, but the sensitivity of neurons to slant remains to be investigated.

9.
At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower-variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters of simple compartmental models.
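A toy version of this idea can be sketched as follows: synthetic monoexponential kinetic curves are generated and a tiny feed-forward network is trained to map each sampled curve to its rate constant. The curve model, network size, and training settings are all hypothetical; the paper's networks and data differ.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.1, 5.0, 20)

def make_batch(n):
    # Synthetic monoexponential kinetic curves y = exp(-k*t) + noise,
    # with the rate constant k as the regression target.
    k = rng.uniform(0.2, 2.0, size=n)
    y = np.exp(-np.outer(k, t)) + rng.normal(scale=0.01, size=(n, len(t)))
    return y, k

# Tiny feed-forward network (one tanh hidden layer), trained by plain
# gradient descent: a toy stand-in for the networks in the paper.
W1 = rng.normal(scale=0.5, size=(len(t), 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

X, k = make_batch(512)
losses = []
for _ in range(500):
    h, pred = forward(X)
    err = pred - k
    losses.append(float((err ** 2).mean()))
    # Backpropagate the mean-squared error through both layers.
    g2 = 2 * err[:, None] / len(k)
    gW2 = h.T @ g2
    gh = (g2 @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ gh
    W2 -= lr * gW2; b2 -= lr * g2.sum(0)
    W1 -= lr * gW1; b1 -= lr * gh.sum(0)
print(losses[0], losses[-1])
```

Once trained, the network estimates the rate constant with a single forward pass, which is the source of the speed advantage over repeated iterative regression.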

10.
This article presents a new incremental learning algorithm for classification tasks, called NetLines, which is well adapted for both binary and real-valued input patterns. It generates small, compact feedforward neural networks with one hidden layer of binary units and binary output units. A convergence theorem ensures that solutions with a finite number of hidden units exist for both binary and real-valued input patterns. An implementation for problems with more than two classes, valid for any binary classifier, is proposed. The generalization error and the size of the resulting networks are compared to the best published results on well-known classification benchmarks. Early stopping is shown to decrease overfitting, without improving the generalization performance.

11.
Neural Modeling of Square Surface Aerators
Applications of artificial neural networks to aeration phenomena in surface aerators that are not geometrically similar are explored to predict reaeration rates under varying dynamic as well as geometric conditions. The primary network for prediction is a feed-forward network with nonlinear elements. The network consists of an input layer, an output layer, a hidden layer, and a nonlinear transfer function in each processing element. The network requires supervised learning, and the learning algorithm is back-propagation. Because back-propagation learning is affected by local minima, various modifications have been suggested to overcome this, such as the Levenberg-Marquardt, quasi-Newton, and conjugate-gradient methods. The present study suggests that the Levenberg-Marquardt modification is a very efficient algorithm in comparison with others such as quasi-Newton and conjugate-gradient. When the input vector is high-dimensional and highly correlated, it is useful to reduce its dimension; an effective procedure for performing this operation is principal component analysis. The best prediction performance is achieved when the data are preprocessed using principal component analysis before they are fed to a back-propagation neural network, but at the cost of losing the physical significance of the experimental data. The model thus developed can be used to predict the reaeration rate for different sizes of geometric elements (such as rotor diameter, rotor size, aerator geometry, and water depth) under various dynamic conditions, i.e., the speed of the rotor.

12.
We propose a method for estimating probability density functions and conditional density functions by training on data produced by such distributions. The algorithm employs new stochastic variables that amount to a coding of the input, using a principle of entropy maximization. It is shown to be closely related to the maximum likelihood approach. The encoding step of the algorithm provides an estimate of the probability distribution. The decoding step serves as a generative model, producing an ensemble of data with the desired distribution. The algorithm is readily implemented by neural networks, using stochastic gradient ascent to achieve entropy maximization.

13.
This paper presents a method for predicting the nonlinear response of torsionally loaded piles in a two-layer soil profile, such as a clay or sand layer underlain by rock. The shear modulus of the upper soil is assumed to vary linearly with depth and the shear modulus of the lower soil is assumed to vary linearly with depth and then stay constant below the pile tip. The method uses the variational principle to derive the governing differential equations of a pile in a two-layer continuum and the elastic response of the pile is then determined by solving the derived differential equations. To consider the effect of soil yielding on the behavior of piles, the soil is assumed to behave linearly elastically at small strain levels and yield when the shear stress on the pile-soil interface exceeds the corresponding maximum shear resistance. To determine the maximum pile-soil interface shear resistance, methods that are available in the literature can be used. The proposed method is verified by comparing its results with existing elastic solutions and published small-scale model pile test results. Finally, the proposed method is used to analyze two full-scale field test piles and the predictions are in reasonable agreement with the measurements.

14.
A methodology for optimal spacing in an array of ditches fully penetrating a homogeneous and isotropic porous medium of finite depth over an impervious layer is presented. The cost function includes the depth-dependent earthwork cost and the capitalized cost of pumping the drain discharge. Essentially, it is a problem of minimizing a nonlinear objective function of a single variable. The input variables consist of rainfall intensity, hydraulic conductivity of the porous medium, width and depth of ditches, earthwork cost, cost of pumps and pumping energy, efficiency of the pumping unit, and rate of interest. Using a nonlinear data-fitting method, an explicit equation is proposed for computing the optimal spacing between the ditches.
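The single-variable minimization can be sketched with a golden-section search. The cost form and coefficients below are hypothetical stand-ins, not the paper's actual cost function, which depends on rainfall, conductivity, and the other inputs listed above.

```python
# Illustrative sketch only: the functional form and coefficients are
# hypothetical, not the equations derived in the paper.

def annual_cost(spacing, earthwork=120.0, pumping=0.8):
    """Cost per unit area versus ditch spacing S: earthwork amortized
    over the spacing, plus a pumping cost assumed to grow with the
    discharge collected per ditch (taken proportional to S)."""
    return earthwork / spacing + pumping * spacing

def golden_section(f, lo, hi, tol=1e-6):
    # Golden-section search for the minimum of a unimodal function.
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

s_opt = golden_section(annual_cost, 1.0, 100.0)
print(round(s_opt, 2))
```

For this two-term cost the optimum has the closed form S* = sqrt(earthwork / pumping), which makes it easy to verify the numerical search.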

15.
Computer analysis of structures has traditionally been carried out using the displacement method combined with an incremental iterative scheme for nonlinear problems. In this paper, a Lagrangian approach is developed, which is a mixed method, where besides displacements, the stress resultants and other variables of state are primary unknowns. The method can potentially be used for the analysis of collapse of structures subjected to severe vibrations resulting from shocks or dynamic loads. The evolution of the structural state in time is given a weak formulation using Hamilton’s principle. It is shown that a certain class of structures, known as reciprocal structures, has a mixed Lagrangian formulation in terms of displacements and internal forces. The form of the Lagrangian is invariant under finite displacements and can be used in geometric nonlinear analysis. For numerical solution, a discrete variational integrator is derived starting from the weak formulation. This integrator inherits the energy and momentum conservation characteristics for conservative systems and the contractivity of dissipative systems. The integration of each step is a constrained minimization problem and it is solved using an augmented Lagrangian algorithm. In contrast to the traditional displacement-based method, the Lagrangian method provides a generalized formulation which clearly separates the modeling of components from the numerical solution. Phenomenological models of components, essential to simulate collapse, can be incorporated without having to implement model-specific incremental state determination algorithms. The state variables are determined at the global level by the optimization method.

16.
Demand Forecasting for Irrigation Water Distribution Systems
One of the main problems in the management of large water supply and distribution systems is the forecasting of daily demand in order to schedule pumping effort and minimize costs. This paper examines methodologies for consumer demand modeling and prediction in a real-time environment for an on-demand irrigation water distribution system. Approaches based on linear multiple regression, univariate time series models (exponential smoothing and ARIMA models), and computational neural networks (CNNs) are developed to predict the total daily volume demand. A set of templates is then applied to the daily demand to produce the diurnal demand profile. The models are established using actual data from an irrigation water distribution system in southern Spain. The input variables used in various CNN and multiple regression models are (1) water demands from previous days; (2) climatic data from previous days (maximum temperature, minimum temperature, average temperature, precipitation, relative humidity, wind speed, and sunshine duration); (3) crop data (surfaces and crop coefficients); and (4) water demands and climatic and crop data. In CNN models, the training method used is a standard back-propagation variation known as extended-delta-bar-delta. Different neural architectures are compared whose learning is carried out by controlling several threshold determination coefficients. The nonlinear CNN model approach is shown to provide a better prediction of daily water demand than linear multiple regression and univariate time series analysis. The best results were obtained when water demand and maximum temperature variables from the two previous days were used as input data.
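The simplest of the univariate time-series baselines mentioned, simple exponential smoothing, can be sketched as below. The demand values and smoothing constant are made-up illustrations, not data or fitted parameters from the study.

```python
def exp_smooth_forecast(demand, alpha=0.3):
    """One-step-ahead simple exponential smoothing: the smoothed level
    after the last observation is the forecast for the next day
    (alpha is an illustrative smoothing constant, not a fitted value)."""
    level = demand[0]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level
    return level

# Hypothetical daily volume demands (arbitrary units).
history = [100, 104, 98, 107, 111, 109]
print(round(exp_smooth_forecast(history), 2))
```

More recent observations receive geometrically larger weights, which is why such models adapt to trends in daily demand while damping day-to-day noise.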

17.
In the design of water distribution networks, there are several constraints that need to be satisfied, supplying water at an adequate pressure being the main one. In this paper, a self-adaptive fitness formulation is presented for solving constrained optimization of water distribution networks. The method has been formulated to ensure that slightly infeasible solutions with a low objective function value remain fit. This is seen as a benefit in solving highly constrained problems that have solutions on one or more of the constraint bounds. In contrast, solutions well outside the constraint bounds are seen as containing little genetic information that is of use and are therefore penalized. In this method, the dimensionality of the problem is reduced by representing the constraint violations by a single infeasibility measure. The infeasibility measure is used to form a two-stage penalty that is applied to infeasible solutions. The performance of the method has been examined by its application to two water distribution networks from the literature. The results have been compared with previously published results. It is shown that the method is able to find optimum solutions with less computational effort. The proposed method is easy to implement, requires no parameter tuning, and can be used as a fitness evaluator with any evolutionary algorithm. The approach is also robust in its handling of both linear and nonlinear equality and inequality constraint functions. Furthermore, the method does not require an initial feasible solution, this being an advantage in real-world applications having many optimization variables.
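A generic sketch of collapsing many constraint violations into a single infeasibility measure that feeds a two-stage penalty. The aggregation rule, threshold, and scale factor here are illustrative assumptions, not the paper's exact self-adaptive formulation.

```python
def infeasibility(violations):
    """Collapse all constraint violations into one non-negative
    infeasibility measure (a simple sum of positive violations here;
    the paper's aggregation may differ)."""
    return sum(max(0.0, v) for v in violations)

def penalized_cost(cost, violations, worst_feasible_cost, scale=1.0):
    """Two-stage penalty sketch: slightly infeasible, low-cost solutions
    stay competitive, while strongly infeasible ones are pushed behind
    every feasible solution."""
    inf = infeasibility(violations)
    if inf == 0.0:
        return cost                      # feasible: no penalty
    # Stage 1: mild penalty proportional to the infeasibility measure.
    penalized = cost + scale * inf
    # Stage 2 (hypothetical threshold): rank strongly infeasible
    # solutions behind the worst feasible one.
    if inf > 1.0:
        penalized = max(penalized, worst_feasible_cost) + scale * inf
    return penalized

print(penalized_cost(50.0, [0.1, 0.0], worst_feasible_cost=80.0))
print(penalized_cost(50.0, [2.0, 1.5], worst_feasible_cost=80.0))
```

The key design point survives the simplification: a cheap solution sitting just outside a constraint bound keeps a fitness close to its raw cost, so the evolutionary search can exploit optima lying on the bounds.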

18.
For several years, there has been an ongoing discussion about appropriate methodological tools to be applied to observational data in pharmacoepidemiological studies. Our research group now suggests that artificial neural networks (ANN) might be advantageous in some cases for classification purposes when compared with discriminant analysis. This is due to their inherent capability to detect complex linear and nonlinear functions in multivariate data sets, the possibility of including data on different scales in the same model, and their relative resistance to "noisy" input. In this paper, a short introduction is given to the basics of neural networks and possible applications. For demonstration, artificial neural networks and discriminant analysis were compared on a multivariate data set consisting of observational data on 19,738 patients treated with fluoxetine. We tested which of the two statistical tools better predicts therapeutic response from the clinical input data. Essentially, we found that neither discriminant analysis nor ANN is able to predict the clinical outcome on the basis of the employed clinical variables. Applying ANN, we were able to rule out the possibility of undetected suppressor effects to a greater extent than would have been possible by the exclusive application of discriminant analysis.

19.
A genetic-fuzzy learning from examples (GFLFE) approach is presented for determining fuzzy rule bases generated from input/output data sets. The method is less computationally intensive than existing fuzzy rule base learning algorithms as the optimization variables are limited to the membership function widths of a single rule, which is equal to the number of input variables to the fuzzy rule base. This is accomplished by primary width optimization of a fuzzy learning from examples algorithm. The approach is demonstrated by a case study in masonry bond strength prediction. This example is appropriate as theoretical models to predict masonry bond strength are not available. The GFLFE method is compared to a similar learning method using constrained nonlinear optimization. The writers’ results indicate that the use of a genetic optimization strategy as opposed to constrained nonlinear optimization provides significant improvement in the fuzzy rule base as indicated by a reduced fitness (objective) function and reduced root-mean-squared error of an evaluation data set.

20.
Much of the research in "New Connectionism" has studied "multiple-layer" perceptrons. Such a perceptron is a network of simple processing units, and can detect whether or not some property is true of a presented pattern. A multiple-layer perceptron has a layer of input units to which patterns are presented, as well as a layer of output units to represent an elicited response. Between these two layers are one or more layers of "hidden" processing units, which typically act as feature detectors. An important difficulty faced when dealing with such a network is the problem of credit assignment: how can the difference between the desired and observed activation in the output layer be used to change properties of "hidden" network components? The credit assignment problem needs to be solved if the network is to be trained to detect some property of interest. The generalized delta rule offers one solution to the credit assignment problem. While the generalized delta rule has provided a viable solution to the credit assignment problem, it has led to other difficulties. Processing units with monotonic activation functions cannot themselves make very sophisticated distinctions. Our approach to limiting the proliferation of hidden units has led to an important extension of the generalized delta rule. We have developed a technique to train networks with much more powerful processors. This research provides several advances to the Connectionist programme. First, we have shown how to train networks containing a new (and more powerful) kind of processing unit. Second, the use of such units reduces the number of hidden units required to make pattern discriminations. Third, our new learning rule can be used to train networks with two different kinds of processing units, which is a small but important step to the biological plausibility of connectionist networks.
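The generalized delta rule's answer to credit assignment can be checked numerically: the analytic hidden-layer gradients below are compared against finite differences. This uses a generic two-layer sigmoid network with random weights, not the extended processing units the authors propose.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=3)           # input pattern
t = 1.0                          # desired output activation
W1 = rng.normal(size=(3, 4))     # input -> hidden weights
W2 = rng.normal(size=4)          # hidden -> output weights

def loss(W1_):
    h = sigmoid(x @ W1_)
    y = sigmoid(h @ W2)
    return 0.5 * (y - t) ** 2

# Generalized delta rule: a delta at the output unit, propagated back
# through W2 and the hidden units' sigmoid derivatives, assigns credit
# to each hidden weight.
h = sigmoid(x @ W1)
y = sigmoid(h @ W2)
delta_out = (y - t) * y * (1 - y)
delta_hidden = delta_out * W2 * h * (1 - h)
grad_W1 = np.outer(x, delta_hidden)

# Central-difference check that the credit assignment is correct.
eps = 1e-6
num = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wp = W1.copy(); Wp[i, j] += eps
        Wm = W1.copy(); Wm[i, j] -= eps
        num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)
print(np.max(np.abs(grad_W1 - num)))
```

The analytic and numerical gradients agree to numerical precision, which is the formal sense in which the backward-propagated deltas solve the credit assignment problem for hidden units.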
