Similar articles
20 similar articles found
1.
A computational model is developed, by implementing the damage models previously proposed by the authors into a finite element code, for simulating the damage evolution and crushing behavior of chopped random fiber composites. Material damage induced by fiber debonding and by crack nucleation and growth is considered. Systematic computational algorithms are developed to incorporate the damage models into the constitutive relation. Based on the implemented computational model, a range of simulations is carried out to probe the behavior of the composites and to validate the proposed methodology. Numerical examples show that the present computational model is capable of capturing the progressive deterioration of effective stiffness and the softening behavior after the peak load. The crushing behavior of a composite tube is also simulated, demonstrating the applicability of the proposed computational model to crashworthiness simulations.
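The progressive stiffness degradation and post-peak softening described above can be illustrated with a minimal one-dimensional scalar damage law. This is not the authors' actual constitutive model; the exponential evolution law and all parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative 1D scalar damage model: stress = (1 - D) * E * strain,
# with exponential damage evolution beyond a strain threshold.
E = 30e3        # undamaged Young's modulus (MPa), assumed value
eps0 = 1e-3     # damage-initiation strain, assumed value
eps_f = 2e-3    # softening-control parameter, assumed value

def damage(eps):
    """Exponential damage evolution; D = 0 below the threshold strain."""
    if eps <= eps0:
        return 0.0
    return 1.0 - (eps0 / eps) * np.exp(-(eps - eps0) / eps_f)

strains = np.linspace(0.0, 6e-3, 200)
stresses = np.array([(1.0 - damage(e)) * E * e for e in strains])

peak_stress = stresses.max()     # peak load, reached at the damage threshold
final_stress = stresses[-1]      # softened stress well past the peak
```

The stress rises linearly to a peak at the threshold strain and then softens exponentially, mimicking the progressive loss of effective stiffness the abstract describes.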

2.
Discontinuous Galerkin finite element schemes exhibit attractive features for accurate large-scale wave-propagation simulations on modern parallel architectures. For many applications, these schemes must be coupled with nonreflective boundary treatments to limit the size of the computational domain without losing accuracy or computational efficiency, which remains a challenging task. In this paper, we present a combination of a nodal discontinuous Galerkin method with high-order absorbing boundary conditions for cuboidal computational domains. Compatibility conditions are derived for high-order absorbing boundary conditions intersecting at the edges and corners of a cuboidal domain. We propose a GPU implementation of the computational procedure, which results in a multidimensional solver with equations to be solved on 0D, 1D, 2D, and 3D spatial regions. Numerical results demonstrate both the accuracy and the computational efficiency of our approach.

3.
余波, 陈冰, 吴然立. 《工程力学》(Engineering Mechanics), 2017, 34(7): 136-145
Most existing models for the shear capacity of reinforced concrete (RC) columns are deterministic and cannot effectively account for the uncertainties in geometry, material properties, and external loads, so their predictions scatter widely and their accuracy and applicability are limited. This paper combines the variable-angle truss-arch model with Bayesian theory to establish a probabilistic model for the shear capacity of shear-critical RC columns. First, a deterministic modified model is derived from variable-angle truss-arch theory, accounting for the effect of axial compression on the inclination of the critical diagonal crack. Second, considering both subjective and objective sources of uncertainty, a probabilistic shear capacity model is built by combining Bayesian theory with the Markov chain Monte Carlo (MCMC) method. Finally, comparisons with test data and with existing models verify the validity and practicality of the proposed model. The analysis shows that the model not only reasonably describes the probability distribution of the shear capacity of shear-critical RC columns, but also calibrates the confidence level of existing deterministic models and yields characteristic capacity values at different confidence levels.

4.
张伟, 韩旭, 刘杰, 杨刚. 《工程力学》(Engineering Mechanics), 2013, 30(3): 58-65
A model-validation method for numerical simulations of explosions in soil, based on orthogonal experimental design, is proposed. Using data from corresponding physical experiments, the method recasts model validation as an optimization problem: finding the best combination of the factors, and of their levels, that influence the numerical results. With orthogonal experimental design as the optimization tool, the best factor-level combination is obtained from a relatively small number of trials. Range analysis is then used to study the influence of each factor and level on the simulated soil-explosion results under different validation criteria. The results show that when minimum computing time or maximum accuracy alone is taken as the validation criterion, mesh size is the dominant factor, whereas when a combined criterion is used, the computational method is the dominant factor. The proposed approach offers a new way to study the validation of complex numerical models.
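The range-analysis step can be sketched as follows. An L4(2^3) orthogonal array with made-up response values stands in for the explosion simulations; the factor labels and responses are illustrative assumptions, not the paper's data.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors
# (e.g. mesh size, time step, computational method -- illustrative labels).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Made-up response (e.g. an error measure) for each of the 4 runs.
response = np.array([10.0, 12.0, 20.0, 22.0])

def range_analysis(array, y):
    """Per-factor range R = max - min of the level-mean responses;
    a larger range means the factor influences the response more."""
    ranges = []
    for col in range(array.shape[1]):
        level_means = [y[array[:, col] == lv].mean()
                       for lv in np.unique(array[:, col])]
        ranges.append(max(level_means) - min(level_means))
    return ranges

R = range_analysis(L4, response)
dominant = int(np.argmax(R))   # index of the most influential factor
```

With these made-up responses, factor 0 dominates; in the paper the same comparison of ranges identifies mesh size or computational method as dominant, depending on the validation criterion.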

5.
A methodology is presented for analyzing the large static deformations of geometrically nonlinear structural systems in the presence of both system-parameter uncertainties and model uncertainties. It is carried out in the context of the identification of stochastic nonlinear reduced-order computational models using simulated experiments. The methodology requires a reference calculation from the mean nonlinear computational model in order to determine the POD (Proper Orthogonal Decomposition) basis used for the mean nonlinear reduced-order model. The construction of this mean reduced-order nonlinear model is carried out explicitly in the context of three-dimensional solid finite elements, and it allows the stochastic nonlinear reduced-order model to be constructed in the general case with the nonparametric probabilistic approach. A numerical example of a curved beam is then presented, in which the various steps are described in detail.
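The extraction of a POD basis from a reference (snapshot) computation can be sketched with an SVD. The snapshot matrix below is synthetic, with a known low-dimensional structure standing in for the mean nonlinear model's solutions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: 200 DOFs, 30 snapshots lying (up to tiny noise)
# in a 3-dimensional subspace -- a stand-in for nonlinear static solutions.
modes_true = rng.normal(size=(200, 3))
amplitudes = rng.normal(size=(3, 30))
snapshots = modes_true @ amplitudes + 1e-8 * rng.normal(size=(200, 30))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3                                    # truncation rank (known rank of the synthetic data)
basis = U[:, :r]
energy_r = float(np.sum(s[:r]**2) / np.sum(s**2))   # fraction of "energy" captured

# Relative projection error of the snapshots onto the POD basis.
recon = basis @ (basis.T @ snapshots)
rel_err = float(np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots))
```

Three modes capture essentially all of the snapshot energy here; in the methodology above, the resulting basis defines the mean reduced-order model onto which the nonparametric probabilistic approach is applied.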

6.
A new unified theory underlying the theoretical design of linear computational algorithms for time-dependent first-order systems is presented. Providing new perspectives and fresh ideas, and unlike the various formulations existing in the literature, the present unified theory: (i) opens new avenues for designing computational algorithms, fostering the notion of algorithms by design while recovering existing algorithms in the literature; (ii) describes the evolution of time operators via a unified mathematical framework; and (iii) places into context and contrasts future developments against existing designs and the relationships among the different classes of algorithms in the literature, such as linear multi-step methods, sub-stepping methods, Runge-Kutta type methods, and higher-order time-accurate methods. It then provides design criteria and guidelines for contrasting and evaluating time-dependent computational algorithms. Linear computational algorithms for first-order systems are classified as pertaining to Type 1, Type 2, or Type 3 time-discretized operators. This distinct classification provides, for the first time, new avenues for designing computational algorithms not existing in the literature and for recovering existing algorithms of arbitrary order of time accuracy, including an overall assessment of their stability and other algorithmic attributes. Consequently, it enables the evaluation of computational algorithms for time-dependent problems, and of the relationships among them, via a standardized measure based on computational effort and memory usage in terms of the resulting number of equation systems and the corresponding number of system solves.
A generalized stability and accuracy limitation barrier theorem underlies the generic design of computational algorithms with arbitrary order of accuracy and establishes guidelines that cannot be circumvented. In summary, unlike the traditional approaches customarily employed in the theoretical development of computational algorithms, the unified theory for time-dependent first-order systems serves as a viable avenue to foster the notion of algorithms by design. Copyright © 2004 John Wiley & Sons, Ltd.
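A concrete member of the family of one-step linear algorithms for first-order systems is the generalized trapezoidal (theta) family for du/dt = f(u). The sketch below applies it to the scalar test problem du/dt = -lambda*u, a standard illustration that is not taken from the paper itself.

```python
import math

def theta_step(u, lam, dt, theta):
    """One step of the theta family for du/dt = -lam*u:
    (1 + theta*dt*lam) u_{n+1} = (1 - (1-theta)*dt*lam) u_n.
    theta = 0: forward Euler; 0.5: trapezoidal (2nd order); 1: backward Euler."""
    return u * (1.0 - (1.0 - theta) * dt * lam) / (1.0 + theta * dt * lam)

lam, dt, n = 2.0, 0.01, 100               # integrate u' = -2u to t = 1
exact = math.exp(-lam * dt * n)

errors = {}
for theta in (0.0, 0.5, 1.0):
    u = 1.0
    for _ in range(n):
        u = theta_step(u, lam, dt, theta)
    errors[theta] = abs(u - exact)        # global error at t = 1
```

The trapezoidal member (theta = 0.5) is second-order accurate and beats both Euler variants by orders of magnitude at the same cost per step, illustrating the kind of accuracy/effort trade-off the unified theory quantifies.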

7.
This paper is concerned with the production smoothing problem that arises in the context of just-in-time manufacturing systems. The production smoothing problem can be solved by employing a two-phase solution methodology, where optimal batch sizes for the products and a sequence for these batches are specified in the first and second phases, respectively. In this paper, we focus on the problem of selecting optimal batch sizes for the products. We propose a dynamic programming (DP) algorithm for the exact solution of the problem. Our computational experiments demonstrate that the DP approach requires significant computational effort, rendering its use in a real environment impractical. We develop three meta-heuristics for the near-optimal solution of the problem, namely strategic oscillation, scatter search and path relinking. The efficiency and efficacy of the methods are tested via a computational study. The computational results show that the meta-heuristic methods considered in this paper provide near-optimal solutions for the problem within several minutes. In particular, the path relinking method can be used for the planning of mixed-model manufacturing systems in real time with its negligible computational requirement and high solution quality.

8.
This paper provides a review of the state of the art in computational dynamic fracture mechanics. The following essential ingredients are covered: (i) fundamental aspects of dynamic fracture mechanics, (ii) types of fracture simulation, (iii) computational models of dynamic crack propagation, and (iv) use of the dynamic J-integral in computational models. In item (i), special attention is given to the asymptotic eigenfields for various states of dynamic crack tips, which provide the foundation of dynamic fracture mechanics just as Williams' asymptotic eigensolutions provided the foundation of static linear fracture mechanics. In item (ii), a new concept of mixed-phase simulation is presented for general non-self-similar crack propagation, in addition to generation-phase and application-phase simulations. A comprehensive summary of computational models for dynamic crack propagation is given in item (iii). Finally, in item (iv), several attractive features of the dynamic J-integral are presented. This revised version was published online in July 2006 with corrections to the Cover Date.

9.
Computational and experimental crash analysis of a road safety barrier (cited 4 times: 0 self-citations, 4 by others)
The paper describes the computational analysis and experimental crash tests of a new road safety barrier. The purpose of this research was to develop and evaluate a full-scale computational model of the road safety barrier for use in crash simulations, and to compare the computational results with real crash test data. The impact severity and stiffness of the new design were evaluated through dynamic nonlinear elasto-plastic analysis of the three-dimensional road safety barrier, within the framework of the finite element method using the LS-DYNA code. Comparison of computational and experimental results confirmed the correctness of the computational model. The tests also showed that the new safety barrier provides controllable crash energy absorption, which in turn increases the safety of vehicle occupants.

10.
Structural topology optimization aims to enhance the mechanical performance of a structure while satisfying functional constraints. Nearly all approaches proposed in the literature are iterative, with the optimal solution found by repeatedly solving a finite element analysis (FEA); the bottleneck is therefore the high computational effort of solving the FEA a large number of times. In this work, we address the need to reduce the computational time by proposing a reduced basis method that relies on functional principal component analysis (FPCA). The methodology has been validated using a simulated annealing approach for compliance minimization in two classical variable-thickness problems. Results show that FPCA provides good results while reducing the computational time: the cost of an FEA in the reduced FPCA space is about one order of magnitude lower.
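The simulated-annealing loop used for compliance minimization can be sketched on a toy surrogate problem. The quadratic objective below merely stands in for the FEA-based compliance evaluation; the cooling schedule and all parameters are illustrative assumptions.

```python
import math
import random

random.seed(42)

def objective(x):
    """Toy stand-in for FEA-based compliance: minimized at x = (1, 2)."""
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

x = [0.0, 0.0]
fx = objective(x)
best_x, best_f = list(x), fx
T = 1.0
for _ in range(5000):
    cand = [xi + random.gauss(0.0, 0.1) for xi in x]   # random design perturbation
    fc = objective(cand)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if fc < fx or random.random() < math.exp(-(fc - fx) / T):
        x, fx = cand, fc
        if fc < best_f:
            best_x, best_f = list(cand), fc
    T *= 0.999   # geometric cooling schedule
```

In the actual method, each `objective` call would be an FEA solve, which is exactly why projecting onto an FPCA basis to cheapen that call pays off so strongly.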

11.
An SPH method for static problems in elasticity (cited 1 time: 0 self-citations, 1 by others)
Smoothed Particle Hydrodynamics (SPH) is a fully Lagrangian method that can be used to simulate static and dynamic problems in fluids and solids. Spatial derivatives are computed without any mesh, which avoids the tangling and distortion that Lagrangian meshes suffer in large structural-deformation computations. However, the classical SPH method tends to break down when computing second- and higher-order derivatives. This paper proposes an improved SPH method that both avoids this breakdown and improves the accuracy of second derivatives. The method is applied to the deflection of a beam fixed at both ends under a uniformly distributed load; comparison with ANSYS results shows that the method is sufficiently accurate. Although the example is a small-deformation elasticity problem, the conclusions can be extended to large deformations.
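The basic SPH interpolation underlying the method can be sketched in 1D with the standard cubic-spline kernel; reproducing a constant field on a uniform particle distribution is a classic consistency check. This sketch shows plain kernel interpolation only, not the paper's improved scheme for second derivatives.

```python
import numpy as np

def cubic_spline_1d(q, h):
    """Standard 1D cubic-spline SPH kernel with normalization 2/(3h)."""
    sigma = 2.0 / (3.0 * h)
    q = abs(q)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

h = 0.1
x = np.arange(0.0, 2.0 + 1e-9, h)   # uniform particles, spacing dx = h
f = np.ones_like(x)                  # constant field f = 1
dx = h

# SPH approximation f(x_i) ~ sum_j f_j W((x_i - x_j)/h) dx at an interior particle.
i = len(x) // 2
f_sph = sum(f[j] * cubic_spline_1d((x[i] - x[j]) / h, h) * dx
            for j in range(len(x)))
```

On this uniform arrangement the kernel sum reproduces the constant field essentially exactly; the difficulties the paper addresses appear when such sums are differentiated twice.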

12.
Dopamine (DA) is an important neurotransmitter for multiple brain functions, and dysfunctions of the dopaminergic system are implicated in neurological and neuropsychiatric disorders. Although the dopaminergic system has been studied at multiple levels, an integrated and efficient computational model that bridges the molecular and neuronal-circuit levels is still lacking. In this study, the authors aim to develop a realistic yet efficient computational model of a dopaminergic pre-synaptic terminal. They first systematically perturb the variables/substrates of an established computational model of DA synthesis, release and uptake, and based on their relative dynamical timescales and steady-state changes, approximate and reduce the model into two versions: one for simulating hourly timescales, and another for millisecond timescales. They show that the original and reduced models exhibit rather similar steady and perturbed states, whereas the reduced models are more computationally efficient and illuminate the underlying key mechanisms. They then incorporate the reduced fast model into a spiking neuronal model that can realistically simulate the spiking behaviour of dopaminergic neurons, and they successfully include an explicit autoreceptor-mediated inhibitory current in the neuronal model. This integrated computational model provides a first step toward an efficient computational platform for realistic multiscale simulation of dopaminergic systems in in silico neuropharmacology.

13.
A computational method for the failure probability of nonlinear safety-margin equations is presented. It is known that Hasofer and Lind's method for this problem is not very accurate, especially for safety boundary surfaces with high curvature, because a single hyperplane is used to approximate the hypersurface. The method presented here instead uses several tangent hyperplanes to approximate the hypersurface, so its accuracy is high while its computational cost remains relatively small. The new method therefore has useful practical applications, and illustrative examples demonstrate these advantages.
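For context, a crude Monte Carlo estimate of failure probability for a simple limit state g = R - S with independent normal variables can be checked against the exact value Pf = Phi(-beta), where beta is the Hasofer-Lind reliability index. This is a textbook linear example, not the author's multi-hyperplane method, and all distribution parameters are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# Limit state g = R - S, with R ~ N(5, 1) (resistance), S ~ N(2, 1) (load);
# failure occurs when g < 0.  Illustrative parameters.
mu_R, sd_R, mu_S, sd_S = 5.0, 1.0, 2.0, 1.0

# Hasofer-Lind reliability index; exact for this linear Gaussian case.
beta = (mu_R - mu_S) / math.sqrt(sd_R**2 + sd_S**2)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)

# Crude Monte Carlo estimate of the failure probability.
n = 200_000
R = rng.normal(mu_R, sd_R, n)
S = rng.normal(mu_S, sd_S, n)
pf_mc = float(np.mean(R - S < 0.0))
```

For a curved limit-state surface the single-hyperplane (FORM) value is no longer exact, which is precisely the gap the multiple-tangent-hyperplane method addresses.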

14.
The introduction of composite materials is having a profound effect on the design process. Because these materials permit the designer to tailor material properties to improve structural, aerodynamic, and acoustic performance, they require a more integrated multidisciplinary design process, and the complexity of that process makes numerical optimization methods necessary. The present paper focuses on a major difficulty associated with multidisciplinary design optimization: its enormous computational cost. We consider two approaches for reducing this computational burden: (i) development of efficient methods for cross-sensitivity calculation using perturbation methods; and (ii) the use of approximate numerical optimization procedures. Our efforts are concentrated on combined aerodynamic-structural optimization. Results are presented for the integrated design of a sailplane wing, and the impact of our computational procedures on the cost of integrated designs is discussed.
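The perturbation-based sensitivity calculation can be sketched with simple forward differences on a toy coupled response. The two-variable response function below is a made-up stand-in for the coupled aerodynamic-structural analysis, not the paper's formulation.

```python
def response(t, span):
    """Toy coupled response: a structural weight-like term plus an
    aerodynamic drag-like term (illustrative stand-in only)."""
    return 2.0 * t * span + 0.5 / (t * span**2)

def forward_diff_sensitivities(f, x, h=1e-6):
    """Forward-difference gradient df/dx_i ~ (f(x + h e_i) - f(x)) / h."""
    f0 = f(*x)
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grads.append((f(*xp) - f0) / h)
    return grads

x0 = (0.01, 10.0)   # skin thickness t and span b (illustrative values)
g = forward_diff_sensitivities(response, x0)
# Analytic check: df/dt = 2b - 0.5/(t^2 b^2) = -30, df/db = 2t - 1/(t b^3) = -0.08
```

Each perturbation here costs one extra "analysis"; the paper's perturbation methods aim to obtain such cross-sensitivities far more cheaply than repeated full analyses.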

15.
This study reports on the computational analysis and experimental calibration of the whole-body counting (WBC) detection equipment at the Nuclear and Technological Institute (ITN) in Portugal. Two state-of-the-art Monte Carlo simulation programmes were used for this purpose: PENELOPE and MCNPX. This computational work was undertaken as part of a new set of experimental calibrations, which improved the quality standards of the WBC system. The calibrations used a BOMAB phantom, one of the industry-standard phantoms for WBC calibrations in internal dosimetry applications. Both the BOMAB phantom and the detection system were accurately implemented in the Monte Carlo codes. The whole-body counter at ITN uses a moving detector, which poses a challenge for Monte Carlo simulations, as most codes accept only static configurations. The continuous detector movement was therefore approximated in the simulations by averaging over several discrete positions of the detector along its path. The computational efficiency values obtained with the two Monte Carlo codes deviate from each other by less than 3.2 %, and the deviations between experimental and computational efficiencies are less than 5 %. This work helps demonstrate the effectiveness of computational tools for understanding the calibration of radiation detection systems used for in vivo monitoring.

16.
17.
Residual stress in a welded plate is computed in the first part of the paper using the weld analysis software VrWeld, which computes the 3D transient temperature field and the evolution of the microstructure and of the stress-strain fields. The computed residual stress is compared with the distribution measured by Paradowska (J Mater Process 164-165:1099-1105, 2005) using neutron diffraction, showing that the computational model captures the physics well. In the second part, two uncertainty analyses investigate how parameter variations contribute to the result of the first part, given that the computational model predicts the residual stress well. The two analyses differ in the number of parameters. The first has a single parameter, and the computational model itself is used for perturbation analysis to quantify the uncertainty due to that parameter; in this case, the number of samples required to approximate normality via the central limit theorem is computationally feasible, which is no longer true for a larger number of interrelated parameters. The second analysis therefore uses four highly interrelated parameters to show that an alternative approach can be employed in such cases. Both uncertainty analyses are based on the Monte Carlo method. The underlying idea is that when the numerical model is valid but the number of Monte Carlo samples required makes running the computational model directly infeasible, extracting a regression model from the computational model and working with that surrogate is an effective alternative.
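The regression-surrogate idea can be sketched as follows: fit a cheap polynomial model to a handful of runs of an "expensive" model, then push the Monte Carlo samples through the surrogate. The expensive model here is a made-up analytic function, not VrWeld, and the parameter distribution is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(p):
    """Stand-in for a costly weld simulation: a residual-stress-like
    response versus a heat-input-like parameter (illustrative only)."""
    return 300.0 + 40.0 * p - 5.0 * p**2

# Step 1: a small number of "expensive" runs over the parameter range.
p_train = np.linspace(0.0, 4.0, 9)
y_train = expensive_model(p_train)

# Step 2: quadratic regression surrogate fitted to those runs.
coeffs = np.polyfit(p_train, y_train, 2)
surrogate = np.poly1d(coeffs)

# Step 3: Monte Carlo through the cheap surrogate, with p ~ N(2, 0.3).
p_samples = rng.normal(2.0, 0.3, 100_000)
y_samples = surrogate(p_samples)
mc_mean, mc_std = float(y_samples.mean()), float(y_samples.std())

# Analytic mean of the true quadratic under N(2, 0.3) for comparison:
# E[y] = 300 + 40*mu - 5*(mu^2 + sigma^2)
exact_mean = 300.0 + 40.0 * 2.0 - 5.0 * (2.0**2 + 0.3**2)
```

Nine model runs replace a hundred thousand, which is exactly the trade the paper makes when the four-parameter Monte Carlo analysis becomes infeasible against the full computational model.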

18.
Scientific Workflow Applications (SWFAs) can deliver collaborative tools useful to researchers in executing large and complex scientific processes. In particular, Scientific Workflow Scheduling (SWFS) accelerates the computational procedures between the available computational resources and the dependent workflow jobs based on the researchers' requirements. However, cost optimization is one of the SWFS challenges in handling massive and complicated tasks, and it requires determining an approximate (near-optimal) solution within polynomial computational time. Motivated by this, the current work proposes a novel SWFS cost optimization model effective in solving this challenge. The proposed model contains three main stages: (i) the scientific workflow application, (ii) the targeted computational environment, and (iii) the cost optimization criteria. The model has been used to optimize the completion time (makespan) and the overall computational cost of SWFS in cloud computing for all scenarios considered in this research context. This ultimately reduces the cost for service consumers; at the same time, reducing the cost has a positive impact on the profitability of service providers, who can utilize all computational resources to achieve a competitive advantage over other cloud service providers. To evaluate the effectiveness of the proposed model, an empirical comparison was conducted employing three core types of heuristic approaches: single-based (Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Invasive Weed Optimization (IWO)), hybrid-based (Hybrid Heuristic Algorithm (HIWO)), and hyper-based (Dynamic Hyper-Heuristic Algorithm (DHHA)). Additionally, a simulation-based implementation was used for the SIPHT SWFA, considering three different dataset sizes. The proposed model provides an efficient platform to optimally schedule workflow tasks by handling the data-intensiveness and computational-intensiveness of SWFAs.
The results reveal that the proposed cost optimization model attained optimal job completion time (makespan) and total computational cost for the small and large sizes of the considered dataset, whereas the hybrid- and hyper-based approaches consistently achieved better results for the medium-sized dataset.
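The makespan objective being optimized can be illustrated with a tiny workflow DAG scheduled on two machines under a fixed task-to-machine assignment. The task graph, durations, and assignments below are made up; this is neither the SIPHT workflow nor any of the paper's heuristics.

```python
# Tiny workflow: task -> (duration, list of predecessor tasks).  Illustrative data.
tasks = {
    "A": (3.0, []),
    "B": (2.0, ["A"]),
    "C": (4.0, ["A"]),
    "D": (1.0, ["B", "C"]),
}

def makespan(assignment, tasks):
    """Earliest-finish-time schedule for a fixed task->machine assignment.
    A task starts when all its predecessors are done AND its machine is free.
    Tasks are processed in a topological order (insertion order here)."""
    finish = {}
    machine_free = {m: 0.0 for m in set(assignment.values())}
    for t, (dur, preds) in tasks.items():
        ready = max([finish[p] for p in preds], default=0.0)
        start = max(ready, machine_free[assignment[t]])
        finish[t] = start + dur
        machine_free[assignment[t]] = finish[t]
    return max(finish.values())

# Compare a fully serial assignment with one that runs B and C in parallel.
serial = makespan({"A": 0, "B": 0, "C": 0, "D": 0}, tasks)
parallel = makespan({"A": 0, "B": 1, "C": 0, "D": 0}, tasks)
```

The GA/PSO/IWO-style heuristics in the paper search over exactly such assignments (and resource choices), scoring each candidate by its makespan and computational cost.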

19.
Geometry and Grid/Mesh Generation Issues for CFD and CSM Shape Optimization (cited 1 time: 0 self-citations, 1 by others)
This paper discusses geometry and grid generation issues for automated shape optimization using computational fluid dynamics and computational structural mechanics. Special attention is given to five major steps of shape optimization: shape parameterization, automation of model abstraction, automation of grid generation, calculation of analytical sensitivities, and robust grid deformation.

20.
To be feasible for computationally intensive applications such as parametric studies, optimization, and control design, large-scale finite element analysis requires model order reduction. This is particularly true in nonlinear settings, which tend to increase computational complexity dramatically. Although significant progress has been achieved in the development of computational approaches for the reduction of nonlinear computational mechanics models, the issue of contact remains a major hurdle. To this effect, this paper introduces a projection-based model reduction approach for both static and dynamic contact problems. It features the application of a non-negative matrix factorization scheme to the construction of a positive reduced-order basis for the contact forces, and a greedy sampling algorithm coupled with an error indicator for achieving robustness with respect to model parameter variations. The proposed approach is successfully demonstrated on several two-dimensional, simple but representative contact and self-contact computational models. Copyright © 2015 John Wiley & Sons, Ltd.
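The non-negative matrix factorization step used to build a positive basis for the contact forces can be sketched with classic multiplicative updates (Lee-Seung type) on synthetic non-negative snapshot data. The data, rank, and iteration count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic non-negative "contact force" snapshots with low non-negative rank:
# 50 contact DOFs, 20 snapshots, exact non-negative rank 3.
W_true = rng.uniform(0.0, 1.0, (50, 3))
H_true = rng.uniform(0.0, 1.0, (3, 20))
V = W_true @ H_true

# Multiplicative-update NMF: V ~ W @ H with W >= 0 and H >= 0 preserved.
r = 3
W = rng.uniform(0.1, 1.0, (50, r))
H = rng.uniform(0.1, 1.0, (r, 20))
eps = 1e-12   # guards against division by zero
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = float(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
nonneg = bool((W >= 0).all() and (H >= 0).all())
```

Unlike an SVD basis, the columns of W stay element-wise non-negative, which is what makes the reduced contact forces physically admissible (non-adhesive) in the reduced-order model.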
