Similar Literature
A total of 20 similar documents were found (search time: 15 ms).
1.
In this work, an explicit family of time marching procedures with adaptive dissipation control is introduced. The proposed technique is conditionally stable, second-order accurate, and has controllable algorithmic dissipation, which adapts according to the properties of the governing system of equations. Thus, spurious modes can be dissipated more effectively and accuracy is improved. Because this is an explicit time integration technique, the new family is quite efficient, requiring no system of equations to be solved at each time step. Moreover, the technique is simple and very easy to implement. Numerical results are presented throughout the paper, illustrating the good performance of the proposed method as well as its potential. Copyright © 2014 John Wiley & Sons, Ltd.

2.
In this work, a new, unconditionally stable time marching procedure for dynamic analyses is presented. The scheme is derived from the standard central difference approximation, with stabilization provided by a consistent perturbation of the original problem. Because the method only involves constitutive variables that are already available from computations at previous time steps, iterative procedures are not required to establish equilibrium when nonlinear models are considered, allowing more efficient analyses. The theoretical properties of the proposed scheme are discussed through standard stability and accuracy analyses, indicating the excellent performance of the new technique. At the end of the contribution, representative nonlinear numerical examples are studied, further illustrating the effectiveness of the new technique. Numerical results obtained with the standard central difference procedure and the implicit constant average acceleration method are also presented throughout the text for comparison. Copyright © 2015 John Wiley & Sons, Ltd.
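The abstract does not detail the stabilizing perturbation itself; the sketch below only shows the standard central difference marching that the scheme starts from, applied to a linear single-degree-of-freedom system. The parameters and loading are illustrative assumptions, not values from the paper.

```python
import numpy as np

def central_difference(m, c, k, f, u0, v0, dt, n_steps):
    """Classical explicit central difference marching for m*u'' + c*u' + k*u = f(t).
    The consistent perturbation that stabilizes the scheme in the paper is not
    reproduced here."""
    u = np.zeros(n_steps + 1)
    u[0] = u0
    a0 = (f(0.0) - c * v0 - k * u0) / m           # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0      # fictitious value u_{-1}
    lhs = m / dt**2 + c / (2.0 * dt)              # effective "mass" coefficient
    for n in range(n_steps):
        rhs = (f(n * dt)
               - (k - 2.0 * m / dt**2) * u[n]
               - (m / dt**2 - c / (2.0 * dt)) * u_prev)
        u_prev, u[n + 1] = u[n], rhs / lhs
    return u

# usage: undamped oscillator; dt must stay below the stability limit 2/omega
omega = 2.0 * np.pi
u_hist = central_difference(m=1.0, c=0.0, k=omega**2, f=lambda t: 0.0,
                            u0=1.0, v0=0.0, dt=0.01, n_steps=1000)
```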

3.
This paper presents a p-version least-squares finite element formulation for unsteady fluid dynamics problems where the effects of space and time are coupled. The dimensionless form of the differential equations describing the problem is first cast into a set of first-order differential equations by introducing auxiliary variables. This permits the use of C⁰ element approximation. The element properties are derived by utilizing p-version approximation functions in both space and time and then minimizing the error functional given by the space–time integral of the sum of squares of the errors resulting from the set of first-order differential equations. This results in a true space–time coupled least-squares minimization procedure. A time marching procedure is developed in which the solution for the current time step provides the initial conditions for the next time step. The space–time coupled p-version approximation functions provide the ability to control truncation error, which in turn permits very large time steps. What literally requires hundreds of time steps in uncoupled conventional time marching procedures can be accomplished in a single time step using the present space–time coupled approach. For non-linear problems the non-linear algebraic equations resulting from the least-squares process are solved using Newton's method with a line search. This procedure results in a symmetric Hessian matrix. Equilibrium iterations are carried out for each time step until the error functional and each component of the gradient of the error functional with respect to nodal degrees of freedom are below a prespecified tolerance. The generality, success and superiority of the present formulation are demonstrated by presenting specific formulations and examples for the advection–diffusion and Burgers equations. The results are compared with the analytical solutions and those reported in the literature. The formulation presented here is ideally suited for space–time adaptive procedures. The element error functional values provide a mechanism for adaptive h, p or hp refinements. The work presented in this paper provides the basis for the extension of the space–time coupled least-squares minimization concept to two- and three-dimensional unsteady fluid flow.

4.
In this paper, the non-linear seismic response of arch dams is presented using the concept of Continuum Damage Mechanics (CDM). The analysis is performed using the finite element technique and appropriate non-linear material and damage models in conjunction with the α-algorithm for time marching. Because of the non-linear nature of the discretized equations of motion, a modified Newton–Raphson approach is used at each time step. Damage evolution based on the tensile principal strain, using a mesh-dependent hardening modulus technique, is adopted to ensure mesh objectivity and to calculate the accumulated damage. The methodology employed is shown to be computationally efficient and consistent in its treatment of both damage growth and damage propagation. As an application of the proposed formulation, a double-curvature arch dam is analysed, the results are compared with the solutions from linear analysis, and it is shown that the structural response of arch dams varies significantly in terms of damage evolution. Copyright © 1999 John Wiley & Sons, Ltd.

5.
The numerical modelling of interacting acoustic media by boundary element method–finite element method (BEM–FEM) coupling procedures is discussed here, taking into account time-domain approaches. In this study, the global model is divided into different sub-domains and each sub-domain is analysed independently (considering BEM or FEM discretizations): the interaction between the different sub-domains of the global model is accomplished by interface procedures. Numerical formulations based on FEM explicit and implicit time-marching schemes are discussed, resulting in direct and optimized iterative BEM–FEM coupling techniques. A multi-level time-step algorithm is considered in order to improve the flexibility, accuracy and stability (especially when conditionally stable time-marching procedures are employed) of the coupled analysis. At the end of the paper, numerical examples are presented, illustrating the potential and robustness of the proposed methodologies. Copyright © 2008 John Wiley & Sons, Ltd.

6.
An adaptive remeshing procedure is proposed for discontinuous finite element limit analysis. The procedure proceeds by iteratively adjusting the element sizes in the mesh to distribute local errors uniformly over the domain. To facilitate the redefinition of element sizes in the new mesh, the inter-element discontinuous field of elemental bound gaps is converted into a continuous field, i.e., the bound-gap intensity, using a patch-based approximation technique. An analogous technique is subsequently used for the approximation of element sizes in the old mesh. With this information, an optimized distribution of element sizes in the new mesh is defined and then scaled to match the total number of elements specified for each iteration of the adaptive remeshing process. Finally, a new mesh is generated using the advancing front technique. This adaptive remeshing procedure is repeated several times until an optimal mesh is found. Additionally, for problems involving discontinuous boundary loads, a novel algorithm for the generation of fan-type meshes around singular points is proposed and incorporated into the main adaptive remeshing procedure. To demonstrate the feasibility of the proposed method, some classical examples from the existing literature are studied in detail.

7.
A generalized scheme for the fabrication of high-performance photodetectors consisting of a p-type channel material and n-type nanoparticles is proposed. The high performance of the proposed hybrid photodetector is achieved through enhanced photoabsorption and the photocurrent gain arising from its effective charge transfer mechanism. In this paper, the realization of this design is presented in a hybrid photodetector consisting of 2D p-type black phosphorus (BP) and n-type molybdenum disulfide nanoparticles (MoS2 NPs), and it is demonstrated that it exhibits enhanced photoresponsivity and detectivity compared to pristine BP photodetectors. It is found that the performance of the hybrid photodetector depends on the density of NPs on the BP layer and that the response time can be reduced by increasing the density of MoS2 NPs. The rise and fall times of this photodetector are smaller than those of BP photodetectors without NPs. This proposed scheme is expected to work equally well for a photodetector with an n-type channel material and p-type nanoparticles.

8.
Adaptive control techniques can be applied to dynamical systems whose parameters are unknown. We propose a technique based on control and numerical analysis approaches to the study of the stability and accuracy of adaptive control algorithms affected by time delay. In particular, we consider the adaptive minimal control synthesis (MCS) algorithm applied to linear time-invariant plants, for which the overall controlled system, generated from the state and control equations discretized by zero-order-hold (ZOH) sampling, is nonlinear. Hence, we propose two linearization procedures for it: the first via what we term physical insight and the second via Taylor series expansion. The physical-insight scheme yields useful methods for a priori selection of the controller parameters and of the discrete-time step. As there is an inherent sampling delay in the process, a fixed one-step delay in the discrete-time MCS controller is introduced. This results in a reduction of both the absolute stability regions and the controller performance. Owing to the shortcomings of ZOH sampling in coping with high-frequency disturbances, a linearly implicit L-stable integrator is also used within a two-degree-of-freedom controlled system. The effectiveness of the methodology is confirmed both by simulations and by experimental tests. Copyright © 2009 John Wiley & Sons, Ltd.
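As a side illustration of the sampling step mentioned in the abstract, the sketch below performs exact zero-order-hold discretization of a linear time-invariant plant via the augmented matrix exponential; the MCS adaptive law and the two linearization procedures of the paper are not reproduced, and the plant matrices are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, T):
    """Exact zero-order-hold (ZOH) discretization of x' = A x + B u with sample
    time T, using expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]], so that A
    does not need to be invertible."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * T)
    return Md[:n, :n], Md[:n, n:]      # Ad, Bd

# usage: a lightly damped second-order plant sampled at 100 Hz
A = np.array([[0.0, 1.0], [-4.0, -0.2]])
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, T=0.01)
```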

9.
10.
This work introduces a time-adaptive strategy that uses a refinement estimator based on the first Frenet curvature. In dynamics, a time-adaptive strategy is a mechanism that interactively proposes changes to the time step used in iterative solution methods. These changes aim to improve the relation between the quality of the response and the computational cost. The proposed method is suitable for a variety of numerical time integration problems, for example, the study of bodies subjected to dynamic loads. The motion equation in its space-discrete form is used as the reference to derive the formulation presented in this paper. Our method is contrasted with others based on local error estimators and apparent frequencies. We check the performance of our proposal when employed with the central difference, the explicit generalized-α and the Chung–Lee integration methods. The proposed refinement estimator demands low computational resources and is easily applied to several direct integration methods. Copyright © 2012 John Wiley & Sons, Ltd.
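The paper's estimator is only named in the abstract; the toy sketch below merely illustrates the general idea of using the first (Frenet) curvature of the sampled response curve (t, u(t)) as a refinement indicator that halves or doubles the time step. The thresholds and bounds are hypothetical, not the paper's values.

```python
import numpy as np

def response_curvature(u_prev, u_curr, u_next, dt):
    """Discrete first (Frenet) curvature of the plane curve (t, u(t)),
    kappa = |u''| / (1 + u'^2)**1.5, from three consecutive samples."""
    du = (u_next - u_prev) / (2.0 * dt)
    d2u = (u_next - 2.0 * u_curr + u_prev) / dt**2
    return abs(d2u) / (1.0 + du**2) ** 1.5

def adapt_time_step(dt, kappa, kappa_refine=10.0, kappa_coarsen=1.0,
                    dt_min=1e-5, dt_max=1e-2):
    """Toy adaptation rule: refine where the response curve bends sharply,
    coarsen where it is nearly straight (thresholds are illustrative only)."""
    if kappa > kappa_refine:
        return max(0.5 * dt, dt_min)
    if kappa < kappa_coarsen:
        return min(2.0 * dt, dt_max)
    return dt
```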

11.
An adaptive Finite Point Method (FPM) for solving shallow water problems is presented. The numerical methodology we propose, which is based on weighted least-squares approximations on clouds of points, adopts an upwind-biased discretization for dealing with the convective terms in the governing equations. The viscous and source terms are discretized in a pointwise manner and the semi-discrete equations are integrated explicitly in time by means of a multi-stage scheme. Moreover, with the aim of exploiting meshless capabilities, an adaptive h-refinement technique is coupled to the described flow solver. The success of this approach in solving typical shallow water flows is illustrated by means of several numerical examples, and special emphasis is placed on the performance of the adaptive technique. This has been assessed by carrying out a numerical simulation of the 26 December 2004 Indian Ocean tsunami with highly encouraging results. Overall, the adaptive FPM is presented as a sufficiently accurate, cost-effective tool for solving practical shallow water problems. Copyright © 2011 John Wiley & Sons, Ltd.

12.
Ensemble methods are proposed as a means to extend Adaptive One-Factor-at-a-Time (aOFAT) experimentation. The proposed method executes multiple aOFAT experiments on the same system with minor differences in experimental setup, such as the starting points. Experimental conclusions are arrived at by aggregating the multiple, individual aOFATs. A comparison is made to test the performance of the new method against a traditional form of experimentation, namely a single fractional factorial design that is equally resource intensive. The comparisons between the two experimental algorithms are conducted using a hierarchical probability meta-model and an illustrative case study. The case is a wet clutch system with the goal of minimizing drag torque. In this study, the proposed procedure was consistently superior to fractional factorial arrays across various experimental settings. At best, the proposed algorithm provides an expected value of improvement that is 15% higher than the traditional approach; at worst, the two methods are equally effective; and on average the improvement is about 10% higher with the new method. These findings suggest that running multiple adaptive experiments in parallel can be an effective way to improve the quality and performance of engineering systems, and they also provide a reasonable aggregation procedure by which to bring together the results of the many separate experiments. Copyright © 2011 John Wiley & Sons, Ltd.
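A minimal sketch of the idea, assuming a discrete factor space and a per-factor majority vote as the aggregation rule (one simple choice; the paper's hierarchical probability meta-model and the wet clutch study are not reproduced). The `objective`, `starts`, and `levels` arguments are hypothetical.

```python
import numpy as np

def aofat(objective, start, levels, rng):
    """One adaptive one-factor-at-a-time (aOFAT) pass: visit the factors in a
    random order and keep a new level only if it improves the response."""
    x, best = list(start), objective(list(start))
    for i in rng.permutation(len(x)):
        for level in levels[i]:
            trial = list(x)
            trial[i] = level
            y = objective(trial)
            if y > best:                 # maximization; negate to minimize
                x, best = trial, y
    return x, best

def ensemble_aofat(objective, starts, levels, seed=0):
    """Run several aOFATs from different starting points and aggregate the
    recommended settings by a per-factor majority vote."""
    rng = np.random.default_rng(seed)
    settings = [aofat(objective, s, levels, rng)[0] for s in starts]
    voted = []
    for i in range(len(levels)):
        vals, counts = np.unique([s[i] for s in settings], return_counts=True)
        voted.append(vals[np.argmax(counts)])
    return voted
```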

13.
A numerical technique for non-planar three-dimensional linear elastic crack growth simulations is proposed. This technique couples the extended finite element method (X-FEM) and the fast marching method (FMM). In crack modeling using X-FEM, the framework of partition of unity is used to enrich the standard finite element approximation with a discontinuous function and the two-dimensional asymptotic crack-tip displacement fields. The initial crack geometry is represented by two level set functions, and subsequently signed distance functions are used to maintain the location of the crack and to compute the enrichment functions that appear in the displacement approximation. Crack modeling is performed without the need to mesh the crack, and crack propagation is simulated without remeshing. Crack growth is conducted using the FMM; unlike a level set formulation for interface capturing, no iterations or time step restrictions are imposed in the FMM. Planar and non-planar quasi-static crack growth simulations are presented to demonstrate the robustness and versatility of the proposed technique. Copyright © 2008 John Wiley & Sons, Ltd.
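A minimal sketch of the fast marching step only: solving |∇T| = 1 on a uniform grid to obtain the distance from a set of seed nodes (such as nodes flagged on a crack front). The X-FEM enrichment and the crack-growth update of the paper are not reproduced; the grid size, spacing, and seed segment are illustrative.

```python
import heapq
import numpy as np

def fast_marching(seeds, shape, h=1.0):
    """Fast marching solution of |grad T| = 1 on a uniform grid: T approximates
    the distance from the seed nodes. The front is advanced in a single pass,
    with no iterations and no time-step restriction."""
    INF = np.inf
    T = np.full(shape, INF)
    accepted = np.zeros(shape, dtype=bool)
    heap = []
    for i, j in seeds:
        T[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def tentative(i, j):
        # use only accepted (finalized) neighbour values, as in the standard FMM
        tx = ty = INF
        if i > 0 and accepted[i - 1, j]:
            tx = T[i - 1, j]
        if i + 1 < shape[0] and accepted[i + 1, j]:
            tx = min(tx, T[i + 1, j])
        if j > 0 and accepted[i, j - 1]:
            ty = T[i, j - 1]
        if j + 1 < shape[1] and accepted[i, j + 1]:
            ty = min(ty, T[i, j + 1])
        a, b = min(tx, ty), max(tx, ty)
        if b == INF or b - a >= h:                    # one-sided update
            return a + h
        return 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < shape[0] and 0 <= nj < shape[1] and not accepted[ni, nj]:
                t_new = tentative(ni, nj)
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T

# usage: distance to a short straight "crack" segment on a 50 x 50 grid
dist = fast_marching(seeds=[(25, j) for j in range(10, 25)], shape=(50, 50))
```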

14.
This work focuses on devising an efficient hierarchy of higher-order methods for linear transient analysis, equipped with an effective dissipative action on the spurious high modes of the response. The proposed strategy stems from the Nørsett idea and is based on a multi-stage algorithm designed to hierarchically improve accuracy while retaining the desired dissipative behaviour. Computational efficiency is pursued by requiring that each stage involve just one set of implicit equations of the size of the problem to be solved (as in standard time integration methods) and, in addition, that all stages share the same coefficient matrix. This target is achieved by rationally formulating the methods on the basis of the discontinuous collocation approach. The resultant procedure is shown to be well suited for adaptive solution strategies. In particular, it embeds two natural tools to control error propagation effectively: one estimates the local error through the next-stage solution, which is one order more accurate, and the other through the solution discontinuity at the beginning of the current time step, which is permitted by the present formulation. The performance of the procedure and the quality of the two error estimators are verified experimentally on different classes of problems. Some typical numerical tests in transient heat conduction and elasto-dynamics are presented. Copyright © 2006 John Wiley & Sons, Ltd.

15.
In this paper we present a postprocessed a posteriori error estimate and an h-version adaptive procedure for the semidiscrete finite element method in dynamic analysis. In space, the superconvergent patch recovery technique is used to determine higher-order accurate stresses and, thus, a spatial error estimate. In time, a postprocessing technique is developed to obtain a local error estimate for one-step time integration schemes (the HHT-α method). Coupling the error estimate with a mesh generator, an h-version adaptive finite element procedure is presented for two-dimensional dynamic analysis. It updates the spatial mesh and the time step automatically so that the discretization errors are controlled within specified tolerances. Numerical studies on different problems are presented to demonstrate the performance of the proposed adaptive procedure.

16.
A comprehensive study of the two-sub-step composite implicit time integration scheme for structural dynamics is presented in this paper. A framework is proposed for the convergence and accuracy analysis of the generalized composite scheme. The local truncation errors of the acceleration, velocity, and displacement are evaluated in a rigorous procedure. The presented and proved accuracy condition enables the displacement, velocity, and acceleration to achieve second-order accuracy simultaneously, which avoids the drawback that the acceleration accuracy may not reach second order. The different influences of numerical frequencies and time step on the accuracy of displacement, velocity, and acceleration are clarified. The numerical dissipation and dispersion and the initial magnitude errors are investigated physically; these measure the errors arising from the eigenvalues and eigenvectors of the algorithmic amplification matrix, respectively. The loaded and physically undamped/damped cases are naturally accounted for. An optimal algorithm, the Bathe composite method (Bathe and Baig, 2005; Bathe, 2007; Bathe and Noh, 2012), is identified, with unconditional stability, no overshooting in displacement, velocity, and acceleration, and excellent performance compared with many other algorithms. The proposed framework can also be used for the accuracy analysis and design of other multi-sub-step composite schemes and single-step methods under physical damping and/or loading. Copyright © 2016 John Wiley & Sons, Ltd.
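For concreteness, the sketch below implements the widely documented Bathe two-sub-step composite scheme with equal sub-steps (trapezoidal rule over the first half step, three-point backward difference over the full step) for a linear single-degree-of-freedom system; the paper's generalized framework and accuracy analysis are not reproduced, and the parameters are illustrative.

```python
import numpy as np

def bathe_two_substep(m, c, k, f, u0, v0, dt, n_steps):
    """Standard Bathe composite scheme with equal sub-steps for
    m*u'' + c*u' + k*u = f(t): trapezoidal rule on [t, t+dt/2],
    three-point backward difference on [t, t+dt]."""
    u, v = np.zeros(n_steps + 1), np.zeros(n_steps + 1)
    u[0], v[0] = u0, v0
    a = (f(0.0) - c * v0 - k * u0) / m
    k1 = 16.0 * m / dt**2 + 4.0 * c / dt + k     # effective stiffness, sub-step 1
    k2 = 9.0 * m / dt**2 + 3.0 * c / dt + k      # effective stiffness, sub-step 2
    for n in range(n_steps):
        t = n * dt
        # --- sub-step 1: trapezoidal rule over dt/2 ---
        rhs1 = (f(t + dt / 2)
                + m * (16.0 / dt**2 * u[n] + 8.0 / dt * v[n] + a)
                + c * (4.0 / dt * u[n] + v[n]))
        uh = rhs1 / k1
        vh = 4.0 / dt * (uh - u[n]) - v[n]
        # --- sub-step 2: three-point backward difference over dt ---
        rhs2 = (f(t + dt)
                + m * (4.0 * vh - v[n]) / dt
                - 3.0 * m * (u[n] - 4.0 * uh) / dt**2
                - c * (u[n] - 4.0 * uh) / dt)
        u[n + 1] = rhs2 / k2
        v[n + 1] = (u[n] - 4.0 * uh + 3.0 * u[n + 1]) / dt
        a = (v[n] - 4.0 * vh + 3.0 * v[n + 1]) / dt
    return u, v

# usage: damped oscillator under a suddenly applied constant load
u_hist, v_hist = bathe_two_substep(m=1.0, c=0.1, k=100.0, f=lambda t: 1.0,
                                   u0=0.0, v0=0.0, dt=0.01, n_steps=2000)
```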

17.
In most realistic situations, machines may be unavailable due to maintenance, pre-schedules and so on. The availability constraints are non-fixed in that the completion time of the maintenance task is not fixed and has to be determined during the scheduling procedure. In this paper a greedy randomised adaptive search procedure (GRASP) algorithm is presented to solve the flexible job-shop scheduling problem with non-fixed availability constraints (FJSSP-nfa). GRASP is a metaheuristic characterised by multiple initialisations. Basically, it operates in the following manner: first a feasible solution is obtained, which is then further improved by a local search technique. The main objective is to repeat these two phases in an iterative manner and to preserve the best solution found. Representative FJSSP-nfa benchmark problems are solved in order to test the effectiveness and efficiency of the proposed algorithm.
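A minimal sketch of the generic GRASP loop (greedy randomized construction with a restricted candidate list, followed by local search, repeated from multiple initialisations), shown on a toy single-machine sequencing objective rather than the FJSSP-nfa model of the paper; the job data and parameters are illustrative.

```python
import random

def grasp(jobs, n_iters=100, alpha=0.3, seed=0):
    """Generic GRASP skeleton on a toy problem: sequence jobs (proc_time, weight)
    on one machine to minimize total weighted completion time. The FJSSP-nfa
    construction and neighbourhoods of the paper are not reproduced."""
    rng = random.Random(seed)

    def cost(seq):
        t, total = 0.0, 0.0
        for p, w in seq:
            t += p
            total += w * t
        return total

    def construct():
        remaining, seq = list(jobs), []
        while remaining:
            scores = [p / w for p, w in remaining]        # weighted-SPT greedy score
            lo, hi = min(scores), max(scores)
            rcl = [j for j, s in zip(remaining, scores) if s <= lo + alpha * (hi - lo)]
            pick = rng.choice(rcl)                        # restricted candidate list
            seq.append(pick)
            remaining.remove(pick)
        return seq

    def local_search(seq):
        improved = True
        while improved:
            improved = False
            for i in range(len(seq) - 1):                 # adjacent pairwise swaps
                trial = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
                if cost(trial) < cost(seq):
                    seq, improved = trial, True
        return seq

    best, best_cost = None, float("inf")
    for _ in range(n_iters):                              # multiple initialisations
        seq = local_search(construct())
        c = cost(seq)
        if c < best_cost:
            best, best_cost = seq, c
    return best, best_cost

# usage: five jobs given as (processing_time, weight)
best_seq, best_val = grasp([(3, 1), (2, 4), (7, 2), (4, 3), (5, 5)])
```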

18.
We introduce a nonparametric smoothing procedure for nonparametric factor analysis of multivariate time series. Our main objective is to develop an adaptive method for estimating a time-varying covariance matrix. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel of the U.S. economy. We find substantial evidence of time-varying second moments and of breaks in the contemporaneous correlation structure from the mid-1970s to the early 1980s.
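A minimal sketch of one common nonparametric estimator of a time-varying covariance matrix: a Gaussian-kernel-weighted local sample covariance. The paper's adaptive bandwidth choice and factor analysis are not reproduced; the bandwidth and the simulated correlation break are illustrative assumptions.

```python
import numpy as np

def kernel_cov(X, h):
    """Time-varying covariance estimate for a (T x d) series X: at each time t,
    a Gaussian-kernel weighted sample covariance centred at t. The bandwidth h
    (in observations) is illustrative; the paper's adaptive choice is not used."""
    T, d = X.shape
    idx = np.arange(T)
    covs = np.empty((T, d, d))
    for t in range(T):
        w = np.exp(-0.5 * ((idx - t) / h) ** 2)
        w /= w.sum()
        mu = w @ X                                   # local weighted mean
        Xc = X - mu
        covs[t] = (w[:, None] * Xc).T @ Xc           # local weighted covariance
    return covs

# usage: a bivariate series with a correlation break halfway through the sample
rng = np.random.default_rng(0)
T = 400
z = rng.standard_normal((T, 2))
z[T // 2:, 1] = 0.8 * z[T // 2:, 0] + 0.6 * z[T // 2:, 1]
cov_path = kernel_cov(z, h=30.0)
```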

19.
The simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) recording technique has recently received considerable attention and has been used in many studies of cognition and neurological disease. Simultaneous EEG-fMRI recording has the advantage of enabling the monitoring of brain activity with both high temporal resolution and high spatial resolution in real time. The successful removal of the ballistocardiographic (BCG) artifact from the EEG signal recorded during an MRI scan is an important prerequisite for real-time EEG-fMRI joint analysis. We have developed a new framework dedicated to BCG artifact removal in real time. This framework includes a new real-time R-peak detection method combining a k-Teager energy operator, a thresholding detector, and a correlation detector, as well as a real-time BCG artifact reduction procedure combining average artifact template subtraction with a new multi-channel referenced adaptive noise cancelling method. Our results demonstrate that this new framework is efficient in the real-time removal of the BCG artifact. The multi-channel adaptive noise cancellation (mANC) method performs better than the traditional ANC method in eliminating the residual BCG artifact. In addition, the computational speed of the mANC method fulfills the requirements of real-time EEG-fMRI analysis. © 2016 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 26, 209–215, 2016
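A minimal sketch of the generic referenced adaptive-noise-cancelling idea behind such frameworks: a multi-reference normalized-LMS filter that predicts the artifact in the primary channel from reference channels and subtracts it. The authors' mANC design, R-peak detection, and template subtraction are not reproduced; the filter length, step size, and simulated signals are illustrative.

```python
import numpy as np

def lms_anc(primary, references, n_taps=8, mu=0.5):
    """Multi-reference normalized-LMS adaptive noise canceller:
    e[n] = d[n] - w.x[n], with the error kept as the cleaned signal."""
    n_ref = references.shape[0]
    w = np.zeros(n_ref * n_taps)
    cleaned = np.zeros(len(primary))
    for n in range(len(primary)):
        parts = []
        for r in range(n_ref):                       # tapped delay line per reference
            seg = references[r, max(0, n - n_taps + 1):n + 1][::-1]
            parts.append(np.pad(seg, (0, n_taps - len(seg))))
        x = np.concatenate(parts)
        e = primary[n] - w @ x                       # cleaned sample
        w += mu * e * x / (x @ x + 1e-8)             # normalized LMS update
        cleaned[n] = e
    return cleaned

# usage: a 10 Hz "EEG" tone contaminated by a slow cardiac-like artifact
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 250.0)
artifact = np.sin(2 * np.pi * 1.2 * t) ** 3
primary = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.8 * artifact
references = np.vstack([artifact + 0.05 * rng.standard_normal(t.size)])
clean = lms_anc(primary, references)
```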

20.
Simulation-based engineering usually requires the construction of computational vademecums to take the multiparametric aspect into account. One example concerns the optimization and inverse identification problems encountered in welding processes. This paper presents a nonintrusive a posteriori strategy for constructing quasi-optimal space-time computational vademecums using the higher-order proper generalized decomposition (PGD) method. Contrary to conventional tensor decomposition methods based on full grids (e.g., parallel factor analysis/higher-order singular value decomposition), the proposed method is adapted to sparse grids, which allows efficient adaptive sampling in the multidimensional parameter space. In addition, a residual-based accelerator is proposed to speed up the higher-order proper generalized decomposition procedure and improve the optimality of the computational vademecum. Based on a simplified welding model, different examples of computational vademecums of dimension up to 6, taking into account both geometry and material parameters, are presented. These vademecums lead to real-time parametric solutions and can serve as a handbook for engineers dealing with optimization, identification, or other problems related to repetitive tasks.
