20 similar documents found; search took 15 ms
1.
Solution of train-tunnel entry flow using parallel computing  Total citations: 1 (self-citations: 0, others: 1)
B. S. Holmes, J. Dias, S. M. Rifai, J. C. Buell, Z. Johan, T. Sassa, T. Sato 《Computational Mechanics》1999,23(2):124-129
A solution to the problem of predicting the airflow over a train entering a tunnel is presented using parallel processing
and a novel moving boundary condition scheme. The moving boundary condition approach avoids some of the topological problems
of traditional approaches to this problem such as ALE techniques and contact surfaces. The method is demonstrated using both
incompressible and compressible flow solvers based on the GLS finite element formulation. Flow solutions are compared with
experiment for a simple geometry and the method is demonstrated on an actual train geometry.
2.
Summary. In this article, an efficient algorithm is developed for the decomposition of large-scale finite element models. A weighted incidence graph with N nodes is used to transform the connectivity properties of finite element meshes into those of graphs. A graph G0 constructed in this manner is then reduced to a graph Gn of desired size by a sequence of contractions G0 → G1 → G2 → … → Gn. For G0, two pseudoperipheral nodes s0 and t0 are selected and two shortest route trees are expanded from these nodes. For each starting node, a vector is constructed with N entries, each entry being the shortest distance of a node ni of G0 from the corresponding starting node. Hence two vectors v1 and v2 are formed as Ritz vectors for G0. A similar process is repeated for Gi (i = 1, 2, …, n), and the sizes of the vectors obtained are then extended to N. A Ritz matrix consisting of 2(n+1) normalized Ritz vectors, each having N entries, is constructed. This matrix is then used in the formation of an eigenvalue problem. The first eigenvector is calculated, and an approximate Fiedler vector is constructed for the bisection of G0. The performance of the method is illustrated by some practical examples.
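The Ritz-vector stage of the algorithm in entry 2 can be sketched as follows. This is a minimal illustration under stated assumptions: the contraction sequence G0 → … → Gn is omitted, the "mesh" is a toy path graph, and the pseudoperipheral nodes are simply the two endpoints; none of these choices come from the paper itself.

```python
import numpy as np
from collections import deque

def bfs_distances(adj, start):
    # shortest-path distance (in edges) from `start` to every node
    dist = [-1] * len(adj)
    dist[start] = 0
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] < 0:
                dist[v] = dist[u] + 1
                queue.append(v)
    return np.array(dist, dtype=float)

# Toy stand-in mesh graph: a path 0-1-...-7
n = 8
adj = [[] for _ in range(n)]
for i in range(n - 1):
    adj[i].append(i + 1)
    adj[i + 1].append(i)

# Graph Laplacian of the path
L = np.zeros((n, n))
for u in range(n):
    L[u, u] = len(adj[u])
    for v in adj[u]:
        L[u, v] = -1.0

# Ritz vectors: shortest-route distances from two (pseudo)peripheral nodes
v1 = bfs_distances(adj, 0)
v2 = bfs_distances(adj, n - 1)
V = np.column_stack([v1, v2])

# Rayleigh-Ritz: project L onto span{v1, v2}; the nonzero Ritz eigenvalue's
# eigenvector gives an approximate Fiedler vector
Q, _ = np.linalg.qr(V)
_, Y = np.linalg.eigh(Q.T @ L @ Q)
fiedler_approx = Q @ Y[:, -1]

# Bisect the graph at the median of the approximate Fiedler vector
order = np.argsort(fiedler_approx)
part_a = set(order[: n // 2].tolist())
part_b = set(order[n // 2 :].tolist())
```

For the path graph the two halves {0,1,2,3} and {4,5,6,7} are recovered, which is the exact minimum bisection.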
3.
Since computing environments with many workstations on networks have recently become available, these workstations can be regarded as a virtual parallel computer, or workstation cluster, for performing finite element analyses. The parallel performance of finite element algorithms, such as the Gaussian elimination method, the conjugate gradient method and the domain decomposition method, depends strongly on the parallel parameters of the cluster system. Methods to evaluate the parallel computational time and to optimize the parallel parameters are presented. Parallel computing systems are developed based on these techniques and applied to a large-scale problem with 350,000 degrees of freedom using twenty-one workstations.
4.
5.
Tunnels with complex structures and multiple uses are now in wide service. To analyze the response of a tunnel structure under multiple load types, a coupled three-dimensional train-tunnel-soil dynamic finite element model was built in the LS-DYNA environment, and a load-balancing-based parallel computing technique was adopted to overcome the difficulty of solving this large-scale nonlinear finite element model. The results show that applying the static stress field by the dynamic relaxation method works well; with the static stress field applied, the tunnel-soil contact computation is more accurate and the stress response distribution differs. The influence of train and highway-vehicle loads far exceeds that of track irregularity: the vertical displacement of the tunnel lining is about 1.15 mm under quasi-static highway-lane loads, and the peak vertical displacement under dynamic train loads is about 0.8 mm. The contact-load-balancing parallel method improved computational efficiency by about 15%, while the choice of CPU count must jointly consider the model's size and spatial topology.
6.
The authors propose a combination of the coupled method and the extrapolation method as a numerical technique suitable for
calculation of an incompressible flow on a massively parallel computer. In the coupled method, the momentum equations and
the continuity equation are directly coupled, and velocity components and pressure values are simultaneously updated. It is
very simple and efficiently parallelized. The extrapolation method is an accelerative technique predicting a converged solution
from a sequence of intermediate solutions generated by an iterative procedure. When it is implemented on a parallel computer,
it is expected to retain good accelerative property even for fine granularity in contrast to the multigrid method. In this
paper three existing versions of the extrapolation method, ROLE, MPE and ROGE, are reviewed, and LWE, a new version developed
by the authors, is presented. Then, ROLE and LWE are applied to numerical analysis of Poisson's equation on a Fujitsu AP1000
and its results are shown. The mathematical proof that the extrapolation method, which is based on the linear theory, is applicable
to an iterative procedure solving nonlinear equations is presented. Then the code consisting of the coupled method and the
extrapolation method is implemented on a Fujitsu AP1000 to solve two simple 2-D steady flows. The accelerative property of the
extrapolation method is discussed, and the suitability of the code for massively parallel computing is demonstrated.
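Of the extrapolation variants entry 6 reviews, MPE (minimal polynomial extrapolation) is the most widely documented. The sketch below applies one textbook MPE step to a Jacobi iteration for a small linear system; the matrix, right-hand side and iteration counts are illustrative assumptions, and the paper's coupled-method context and its LWE variant are not reproduced.

```python
import numpy as np

def mpe(X):
    # X: columns are the iterates x_0 .. x_{k+1} of a fixed-point iteration.
    # Solve the least-squares problem for the minimal-polynomial coefficients
    # and return the extrapolated (accelerated) solution.
    U = np.diff(X, axis=1)                    # u_j = x_{j+1} - x_j
    k = U.shape[1] - 1
    c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()                      # normalize so the weights sum to 1
    return X[:, : k + 1] @ gamma

# Small 1-D Poisson-like system: tridiagonal Laplacian, arbitrary RHS
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.arange(1.0, n + 1.0)
x_exact = np.linalg.solve(A, b)

# Jacobi iteration (diagonal is 2I) starting from zero: slowly convergent
x = np.zeros(n)
iterates = [x]
for _ in range(6):
    x = x + (b - A @ x) / 2.0
    iterates.append(x)
X = np.column_stack(iterates)

x_mpe = mpe(X)   # one MPE step over the stored iterates
```

Because the iteration is linear and the stored sequence spans the full Krylov space, the single MPE step recovers the exact solution, while the last plain Jacobi iterate is still far from it.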
7.
A method for kinematic and dynamic modeling and analysis of a novel 3-RRRT high-speed parallel robot is studied. Body-fixed coordinate frames are established for each member using the D-H method; on this basis a kinematic model of the 3-RRRT parallel robot is built and its analytical position solution is given. The system dynamic model is constructed with the recursive Newton-Euler method, and kinematic and dynamic numerical simulations in MATLAB yield the drive torques required for the system to track a prescribed trajectory; the results are analyzed. The advantages of this modeling approach are its small computational cost and its ability to recover the forces on the members, which facilitates real-time control; it provides analysis data for research on the 3-RRRT parallel robot and, in turn, a reference and basis for improving its control strategy.
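The D-H frame construction used above is standard; a minimal sketch of the building block is the homogeneous transform between consecutive link frames, chained to get a position solution. The two-link planar chain below is purely illustrative and is not the 3-RRRT robot's actual D-H table.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive link frames (standard D-H convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Illustrative chain: a planar 2R arm with unit link lengths,
# joint 1 at +90 deg and joint 2 at -90 deg
T = dh_transform(np.pi / 2, 0.0, 1.0, 0.0) @ dh_transform(-np.pi / 2, 0.0, 1.0, 0.0)
tip = T[:3, 3]     # end-effector position of the toy chain
```

Chaining one such transform per joint, with the robot's actual D-H parameters, gives the forward position solution that the inverse analysis inverts.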
8.
Woods CJ, Ng MH, Johnston S, Murdock SE, Wu B, Tai K, Fangohr H, Jeffreys P, Cox S, Frey JG, Sansom MS, Essex JW 《Philosophical transactions. Series A, Mathematical, physical, and engineering sciences》2005,363(1833):2017-2035
Biomolecular computer simulations are now widely used not only in an academic setting to understand the fundamental role of molecular dynamics on biological function, but also in the industrial context to assist in drug design. In this paper, two applications of Grid computing to this area will be outlined. The first, involving the coupling of distributed computing resources to dedicated Beowulf clusters, is targeted at simulating protein conformational change using the Replica Exchange methodology. In the second, the rationale and design of a database of biomolecular simulation trajectories is described. Both applications illustrate the increasingly important role modern computational methods are playing in the life sciences.
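The Replica Exchange methodology named in entry 8 hinges on a Metropolis-style accept/reject step for swapping configurations between replicas held at different temperatures. The sketch below is the generic criterion, not code from the paper; the injectable random source is only for testability.

```python
import math
import random

def swap_accept(beta_i, beta_j, energy_i, energy_j, rng=random.random):
    """Metropolis criterion for exchanging configurations between two replicas.

    Accept the swap with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]),
    where beta = 1/(kT) and E is the current potential energy of each replica.
    """
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0.0 or rng() < math.exp(delta)
```

In a full Replica Exchange run this test is applied periodically to neighboring temperature pairs, letting low-temperature replicas escape local minima via excursions through high temperatures.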
9.
10.
The next generation of manufacturing systems is assumed to be intelligent enough to make decisions and automatically adjust to variations in production demand, shop-floor breakdowns etc. Auction-based manufacturing is a control strategy in which the various intelligent entities in the manufacturing system submit bids, accept bids and select among the available bids based on a heuristic. This paper deals with the simulation modelling and performance evaluation of a push-type auction (negotiation) based manufacturing system embedded in a pull-type production system using coloured Petri nets. Three different models of an auction-based manufacturing system have been discussed. This methodology helps in developing systems for real-time control, anticipation of deadlocks, and evaluation of various performance metrics like machine utilization, automated guided vehicle (AGV) utilization, waiting times, work in process (WIP) etc. Various decision-making rules were identified for the real-time control of auction-based manufacturing systems.
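The core of an auction heuristic like the one described above can be sketched in a few lines: each capable machine bids its estimated completion time for a job, and the job awards itself to the lowest bidder. The machine names, backlogs and processing times below are hypothetical, and the paper's coloured-Petri-net machinery is not reproduced.

```python
def run_auction(queue_time, proc_time):
    """Each machine bids its estimated completion time; the job picks the lowest bid."""
    bids = {m: queue_time[m] + proc_time[m] for m in queue_time}
    winner = min(bids, key=bids.get)
    return winner, bids

# Hypothetical shop floor: current queue backlog per machine (time units)
queue_time = {"M1": 3.0, "M2": 0.0, "M3": 5.0}
# This job's processing time on each capable machine
proc_time = {"M1": 2.0, "M2": 4.0, "M3": 1.0}

winner, bids = run_auction(queue_time, proc_time)
queue_time[winner] += proc_time[winner]   # the winner's backlog grows; the next auction sees it
```

Note the heuristic trades off raw speed against congestion: M3 processes the job fastest, but its backlog makes M2 the winning bidder.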
11.
12.
According to the characteristics of vortexes with different frequencies in atmospheric turbulence, a rational hypothesis is proposed in the present paper that the time history of fluctuating wind speeds can be viewed as the integration of a series of harmonic waves with the same initial zero-phase. A univariate model of the phase spectrum is then developed which relies upon a single argument associated with the concept of the starting-time of phase evolution. The identification procedure for the starting-time of phase evolution is detailed and its probabilistic structure is investigated through estimation from measured wind-speed data. The univariate phase spectrum model is proved to be valid, bypassing the need for the classical spectral representation techniques in modeling the phase spectrum, where hundreds of variables are required. In conjunction with the Fourier amplitude spectrum, a new simulation scheme for fluctuating wind speeds, based on stochastic Fourier functions, is developed. Numerical and experimental investigations indicate that the proposed scheme efficiently produces accurate simulations of fluctuating wind speeds that match well with measured wind-field data by revealing the essential relationship among the individual harmonic waves. The univariate phase spectrum model shows potential for application in the accurate analysis and reliability evaluation of random wind-induced responses of engineering structures.
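The central idea above, every harmonic sharing one phase parameter tied to a "starting time", can be sketched as follows. The spectrum shape, the value of tau and all numerical settings are illustrative assumptions, not the paper's identified model.

```python
import numpy as np

fs, T = 10.0, 600.0                 # sampling rate (Hz) and record length (s) -- illustrative
tau = 12.0                          # hypothetical "starting-time of phase evolution" (s)
t = np.arange(int(T * fs)) / fs
f = np.arange(1, 301) / T           # harmonic frequencies at multiples of 1/T

# Stand-in one-sided spectrum (von Karman-like roll-off), NOT the paper's spectrum
S = 1.0 / (1.0 + (f / 0.05) ** 2) ** (5.0 / 6.0)
amp = np.sqrt(2.0 * S / T)          # harmonic amplitudes, frequency step df = 1/T

# Univariate phase model: a single parameter tau fixes every harmonic's phase
phase = 2.0 * np.pi * f * tau

# Superpose the harmonics into a fluctuating wind-speed record
u = (amp[:, None] * np.cos(2.0 * np.pi * f[:, None] * t[None, :] - phase[:, None])).sum(axis=0)
```

At t = tau every cosine passes through zero phase simultaneously, so the record attains its global maximum exactly there; this is the "same initial zero-phase" hypothesis made concrete.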
13.
Eric Fahrenthold, Ravishankar Shivarama 《International Journal of Impact Engineering》2001,26(1-10):179-188
Simulations of three-dimensional orbital debris impact problems, using a parallel hybrid particle-finite element code, show good agreement with experiment and good speedup in parallel computation. The simulations included single and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.
14.
Constructive interference between coherent waves traveling time-reversed paths in a random medium gives rise to the enhancement of light scattering observed in directions close to backscattering. This phenomenon is known as enhanced backscattering (EBS). According to diffusion theory, the angular width of an EBS cone is proportional to the ratio of the wavelength of light λ to the transport mean-free-path length ls* of a random medium. In biological media a large ls* ≈ 0.5-2 mm > λ results in an extremely small (≈0.001°) angular width of the EBS cone, making the experimental observation of such narrow peaks difficult. Recently, the feasibility of observing EBS under low spatial coherence illumination (spatial coherence length Lsc < ls*) was demonstrated. Low spatial coherence behaves as a spatial filter rejecting longer path lengths and thus resulting in an increase of more than 100 times in the angular width of low-coherence EBS (LEBS) cones. However, a conventional diffusion approximation-based model of EBS has not been able to explain such a dramatic increase in LEBS width. We present a photon random walk model of LEBS by using Monte Carlo simulation to elucidate the mechanism accounting for the unprecedented broadening of the LEBS peaks. Typically, the exit angles of the scattered photons are not considered in modeling EBS in the diffusion regime. We show that small exit angles are highly sensitive to low-order scattering, which is crucial for accurate modeling of LEBS. Our results show that the predictions of the model are in excellent agreement with the experimental data.
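The path-length-filtering mechanism described above can be illustrated with a crude photon random walk. This is a toy sketch, not the paper's Monte Carlo model: scattering is isotropic, the medium is a half-space, the coherence length and photon count are made-up values, and only the correlation between short path length and low scattering order is demonstrated.

```python
import numpy as np

rng = np.random.default_rng(0)
ls = 1.0                        # scattering mean free path (arbitrary units)
Lsc = 5.0 * ls                  # hypothetical coherence length acting as a path-length filter
n_photons = 5000

orders, paths = [], []
for _ in range(n_photons):
    z = path = 0.0
    n_scat = 0
    mu = 1.0                    # direction cosine w.r.t. +z; photon enters heading into the medium
    escaped = False
    for _ in range(1000):       # guard against rare very long walks
        step = rng.exponential(ls)
        z += mu * step
        path += step
        if z < 0.0:             # photon re-crosses the entry surface: backscattered
            escaped = True
            break
        n_scat += 1
        mu = rng.uniform(-1.0, 1.0)   # isotropic scattering: cos(theta) uniform on (-1, 1)
    if escaped:
        orders.append(n_scat)
        paths.append(path)

orders = np.array(orders)
paths = np.array(paths)
kept = paths < Lsc              # low-coherence "filter": keep only short total path lengths
```

The filtered (short-path) photons have a markedly lower mean scattering order than the full backscattered population, which is the low-order-scattering sensitivity the paper identifies as crucial for LEBS.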
15.
Flow simulation and high performance computing  Total citations: 3 (self-citations: 0, others: 3)
T. Tezduyar, S. Aliabadi, M. Behr, A. Johnson, V. Kalro, M. Litke 《Computational Mechanics》1996,18(6):397-412
Flow simulation is a computational tool for exploring science and technology involving flow applications. It can provide cost-effective
alternatives or complements to laboratory experiments, field tests and prototyping. Flow simulation relies heavily on high
performance computing (HPC). We view HPC as having two major components. One is advanced algorithms capable of accurately
simulating complex, real-world problems. The other is advanced computer hardware and networking with sufficient power, memory
and bandwidth to execute those simulations. While HPC enables flow simulation, flow simulation motivates development of novel
HPC techniques. This paper focuses on demonstrating that flow simulation has come a long way and is being applied to many
complex, real-world problems in different fields of engineering and applied sciences, particularly in aerospace engineering
and applied fluid mechanics. Flow simulation has come a long way because HPC has come a long way. This paper also provides
a brief review of some of the recently developed HPC methods and tools that have played a major role in bringing flow simulation
where it is today. A number of 3D flow simulations are presented in this paper as examples of the level of computational capability
reached with recent HPC methods and hardware. These examples are: flow around a fighter aircraft, flow around two trains passing
in a tunnel, large ram-air parachutes, flow over hydraulic structures, contaminant dispersion in a model subway station, airflow
past an automobile, multiple spheres falling in a liquid-filled tube, and dynamics of a paratrooper jumping from a cargo aircraft.
Sponsored by ARO, ARPA, NASA-JSC, and by the Army High Performance Computing Research Center under the auspices of the Department
of the Army, Army Research Laboratory cooperative agreement number DAAH04-95-2-0003/ contract number DAAH04-95-C-0008. The
content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
CRAY C90 time and support for the second author was provided in part by the Minnesota Supercomputer Institute.
Dedicated to the 10th anniversary of Computational Mechanics
16.
Modeling and computing parameters in nonlinear finite element simulations significantly affect simulation accuracy and efficiency even when the simulation is carried out using commercial software, such as ABAQUS, ANSYS, etc. Yet the comprehensive effects of these parameters on simulation results have seldom been reported. In this article, we explore the effects of several important parameters, such as mass scaling type and value, element type and size, and loading velocity, on the accuracy and efficiency of nonlinear finite element simulation of metallic foams based on three-dimensional Voronoi mesostructures. Analysis indicated that these parameters did affect simulation accuracy and efficiency, and three optimized nondimensional parameters were recommended. Based on the verified model and optimized parameters, the effects of cell-wall thickness distribution on the uniaxial properties of metallic foams were also investigated. Simulation results showed that different distributions of cell-wall thickness in modeling can induce varied elastic moduli and yield stresses of metal foams. Our analysis showed that modeling and computing parameters must be chosen with care in nonlinear FE simulation, and that the recommended parameters constitute a good reference for numerical simulation of metallic foams in predicting mechanical behaviors.
17.
Advanced manufacturing technology requires high-precision capability in multi-axis computer numerical control (CNC) machine tools. At present, the modeling and identification for the drive system of CNC machine tools has some defects. In order to solve the problem, some interdisciplinary theories and methods, such as support vector machines, granular computing, artificial immune algorithms, and particle swarm optimization algorithms, have been used to model and identify multi-axis drive systems for CNC machine tools. An identification method using a support vector machine, based on granular computing, is presented to identify a multi-axis servo drive system model for improving the precision of model identification, and an immune particle swarm optimization algorithm, based on crossover and mutation functions, is proposed to optimize the structure parameters of the support vector machine based on granular computing. The proposed identification method was evaluated by experiments using the multi-axis servo drive system. The experimental results showed that the proposed approach is capable of improving modeling and identification precision.
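The particle swarm optimization component named above can be sketched in its plain (non-immune) form: a swarm searches a parameter space by blending each particle's velocity with attraction to its own best point and the swarm's best point. The objective below is a stand-in sphere function; in the paper's setting it would be the SVM structure-parameter fitness, and the gains w, c1, c2 are conventional values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # stand-in for the SVM structure-parameter fitness (sphere function)
    return (x ** 2).sum(axis=-1)

n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights (assumed)

pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), objective(pos)   # personal bests
g = pbest[pbest_val.argmin()].copy()            # global best

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    g = pbest[pbest_val.argmin()].copy()

best_val = pbest_val.min()
```

The paper's immune variant adds crossover and mutation operators on top of this update to keep the swarm diverse; those are omitted here.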
18.
《工程设计学报》2015,(4)
Taking the boom-deployment drive control system of a ship-borne oil-spill skimmer as the research object, and addressing the problem of synchronizing the speeds of two motors, a master-slave speed synchronization control scheme for electro-hydraulic proportional valve-controlled motors, with a programmable logic controller (PLC) as the control core, is proposed. A mathematical model of the speed synchronization control system is established, and stability, output speed and speed synchronization error are analyzed with MATLAB/Simulink; to improve system stability while reducing the synchronization error, a PID control element is introduced. Simulation results show that after the PID tuning element is introduced, the steady-state performance of the control system improves markedly, and the stability and synchrony of the motor output speeds achieve a satisfactory design result. Where synchronization accuracy requirements are modest, the PLC-based master-slave synchronization control strategy is practical and feasible.
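The master-slave idea above, where the slave motor is driven by the master's speed plus a PID correction on the synchronization error, can be sketched with a toy discrete-time simulation. The first-order motor lags, the PID gains and the command value are all illustrative assumptions, not the paper's identified model.

```python
dt, T = 0.001, 2.0                  # time step and simulated duration (s) -- illustrative
Kp, Ki, Kd = 8.0, 40.0, 0.0         # PID gains (assumed values, not from the paper)
tau_m, tau_s = 0.05, 0.08           # first-order lags of master and slave motor loops (s)
w_cmd = 100.0                       # commanded deployment speed (arbitrary units)

w_m = w_s = 0.0                     # master and slave motor speeds
integ = prev_e = 0.0
for _ in range(int(T / dt)):
    w_m += dt * (w_cmd - w_m) / tau_m          # master tracks the command on its own
    e = w_m - w_s                              # synchronization error seen by the PLC
    integ += e * dt
    u = w_m + Kp * e + Ki * integ + Kd * (e - prev_e) / dt
    prev_e = e
    w_s += dt * (u - w_s) / tau_s              # slave driven by master speed + PID correction

sync_error = abs(w_m - w_s)
```

With the PID correction active, the slave's steady-state speed locks onto the master's despite the mismatched lags, which is the qualitative behavior the simulation study reports.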
19.
20.
In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinate, PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous-energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous-energy cross sections, and the fact that the CASK library is based on the old ENDF/B-II library. Both MC and SN calculations were run in parallel on a BEOWULF PC-cluster (eight processors). Timing results indicate that the SN calculation yielded a detailed dose distribution at over 318,426 points in ~164 h. Unbiased continuous-energy MC required 214 h to calculate dose rates with a 1% relative error in only 18 regions on the surface of the cask. The biased A3MCNP calculations yield dose rates with ~0.8% relative error in only 2.5 h on one processor. This study demonstrates that a parallel code, such as the 3-D parallel SN transport code PENTRAN, can solve a complex large problem, such as the storage cask, accurately and efficiently. Moreover, this calculation was performed on a relatively inexpensive PC-cluster. Possible inadequacies of the CASK cross section library still need to be evaluated.