Similar Documents
20 similar documents found
1.
2.
The lattice Boltzmann method (LBM) and traditional finite difference methods have separate strengths when solving the incompressible Navier–Stokes equations. The LBM is an explicit method with a highly local computational nature that uses floating-point operations that involve only local data and thereby enables easy cache optimization and parallelization. However, because the LBM is an explicit method, smaller grid spacing requires smaller numerical time steps during both transient and steady state computations. Traditional implicit finite difference methods can take larger time steps as they are not limited by the CFL condition, but only by the need for time accuracy during transient computations. To take advantage of the strengths of both methods, a multiple solver, multiple grid block approach was implemented and validated for the 2-D Burgers’ equation in Part I of this work. Part II implements the multiple solver, multiple grid block approach for the 2-D backward step flow problem. The coupled LBM–VSM solver is found to be faster by a factor of 2.90 (2.87 and 2.93 for Re = 150 and Re = 500, respectively) on a single processor than the VSM for the 2-D backward step flow problem while maintaining similar accuracy.
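To illustrate the highly local, explicit update that the abstract refers to, here is a minimal D2Q9 BGK lattice Boltzmann sketch in Python (NumPy). It is a generic single-relaxation-time example on a periodic grid, not the coupled LBM–VSM, multiple-grid-block solver of the paper; the grid size, relaxation time tau and initial state are arbitrary illustrative choices.

```python
import numpy as np

# D2Q9 lattice: discrete velocities e_i and weights w_i (standard values)
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def equilibrium(rho, u):
    """Equilibrium distributions f_eq_i = w_i*rho*(1 + 3 e.u + 4.5 (e.u)^2 - 1.5 u^2)."""
    eu = np.einsum('id,dxy->ixy', e, u)       # e_i . u at every node
    usq = np.einsum('dxy,dxy->xy', u, u)      # |u|^2 at every node
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One explicit BGK collide-and-stream update: purely local collision,
    nearest-neighbour streaming (periodic boundaries via np.roll)."""
    rho = f.sum(axis=0)
    u = np.einsum('id,ixy->dxy', e, f) / rho
    f_post = f - (f - equilibrium(rho, u)) / tau
    for i, (ex, ey) in enumerate(e):
        f_post[i] = np.roll(np.roll(f_post[i], ex, axis=0), ey, axis=1)
    return f_post

# tiny demo on a 32 x 32 periodic grid at rest
nx = ny = 32
f = equilibrium(np.ones((nx, ny)), np.zeros((2, nx, ny)))
for _ in range(100):
    f = lbm_step(f, tau=0.6)
```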

3.
Applied Soft Computing, 2007, 7(1): 343–352
This paper reports how the genetic programming paradigm, in conjunction with pattern recognition principles, can be used to evolve classifiers capable of recognizing epileptic patterns in human electroencephalographic signals. The procedure for feature extraction from the raw signal is detailed, as well as the genetic programming system that properly selects the features and evolves the classifiers. Based on the data sets used, two different epileptic patterns were detected: 3 Hz spike-and-slow-wave-complex (SASWC) and spike-or-sharp-wave (SOSW). After training, classifiers for both patterns were tested with unseen instances, and achieved sensitivity = 1.00 and specificity = 0.93 for SASWC patterns, and sensitivity = 0.94 and specificity = 0.89 for SOSW patterns. Results are very promising and suggest that the methodology presented can be applied to other pattern recognition tasks in complex signals.
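The reported figures are the standard sensitivity (true positive rate) and specificity (true negative rate). A small sketch of how they are computed from test labels and predictions, assuming hypothetical label vectors rather than the paper's EEG data or evolved classifiers:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP); 1 = epileptic pattern."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# example: hypothetical unseen test labels vs. predictions of some evolved classifier
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0])
```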

4.
Computers & Fluids, 2006, 35(8–9): 863–871
Following the work of Lallemand and Luo [Lallemand P, Luo L-S. Theory of the lattice Boltzmann method: acoustic and thermal properties in two and three dimensions. Phys Rev E 2003;68:036706] we validate, apply and extend the hybrid thermal lattice Boltzmann scheme (HTLBE) by a large-eddy approach to simulate turbulent convective flows. For the mass and momentum equations, a multiple-relaxation-time LBE scheme is used while the heat equation is solved numerically by a finite difference scheme. We extend the hybrid model by a Smagorinsky subgrid scale model for both the fluid flow and the heat flux. Validation studies are presented for laminar and turbulent natural convection in a cavity at various Rayleigh numbers up to 5 × 10¹⁰ for Pr = 0.71 using a serial code in 2D and a parallel code in 3D, respectively. Correlations of the Nusselt number are discussed and compared to benchmark data. As an application we simulated forced convection in a building with inner courtyard at Re = 50 000.
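The Smagorinsky subgrid-scale model mentioned above closes the filtered equations with an eddy viscosity ν_t = (C_s Δ)² |S̄|. A minimal 2-D sketch of that formula, assuming velocity components stored as NumPy arrays on a uniform grid and an illustrative Smagorinsky constant; this is not the HTLBE implementation of the paper:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.1):
    """Eddy viscosity nu_t = (Cs*dx)^2 * |S|, with |S| = sqrt(2 S_ij S_ij)
    built from the resolved strain rate of a 2-D velocity field (u, v)."""
    dudx, dudy = np.gradient(u, dx)
    dvdx, dvdy = np.gradient(v, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag
```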

5.
The paper presents an application of two domain repartitioning methods to solving the hopper discharge problem simulated by the discrete element method. A quantitative comparison of the parallel speed-up obtained by using the multilevel k-way graph partitioning method and the recursive coordinate bisection method is presented. A detailed investigation of load balance, interprocessor communication and repartitioning is performed. Speed-up of the parallel computations based on the dynamic domain decomposition is investigated by a series of benchmark tests simulating granular visco-elastic frictional media in hoppers containing 0.3 × 10⁶ and 5.1 × 10⁶ spherical particles. A soft-particle approach is adopted, in which the simulation advances by small time increments and the contact forces between the particles are calculated from a contact law. A parallel efficiency of 0.87 was achieved on 2048 cores when modelling the hopper filled with 5.1 × 10⁶ particles.
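The speed-up and parallel-efficiency figures quoted above follow the usual definitions S = T₁/T_p and E = S/p. A small sketch with hypothetical timings chosen to be consistent with an efficiency of 0.87 on 2048 cores:

```python
def speedup_and_efficiency(t_serial, t_parallel, n_cores):
    """Classical definitions: speed-up S = T1/Tp, efficiency E = S/p."""
    s = t_serial / t_parallel
    return s, s / n_cores

# hypothetical timings (seconds) consistent with E = 0.87 on 2048 cores
s, e = speedup_and_efficiency(t_serial=100_000.0,
                              t_parallel=100_000.0 / (0.87 * 2048),
                              n_cores=2048)
```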

6.
The implicit Colebrook–White equation has been widely used to estimate the friction factor for turbulent flow in rough pipes. This paper presents a state-of-the-art review of the currently available explicit alternatives to the Colebrook–White equation. An extensive comparison test was carried out on a 20 × 500 grid covering a wide range of relative roughness (ε/D) and Reynolds number (R) values (1 × 10⁻⁶ ≤ ε/D ≤ 5 × 10⁻²; 4 × 10³ ≤ R ≤ 10⁸), which spans a large portion of the turbulent flow zone in Moody’s diagram. Based on this comprehensive error analysis, the pairs of ε/D and R values at which the maximum absolute and maximum relative errors occur are identified. In the limit, most of these approximations provide friction factor estimates characterized by a mean absolute error of 5 × 10⁻⁴ and a maximum absolute error of 4 × 10⁻³, and by a mean relative error of 1.3% and a maximum relative error of 5.8%, over the entire range of ε/D and R values. For practical purposes, the complete results for the maximum and mean relative errors versus the 20 sets of ε/D values are also presented in two comparative figures. The error analysis of these approximations makes it possible to identify the most accurate formula among all previous explicit models and demonstrates its flexibility for estimating the turbulent flow friction factor. Comparative analysis of the mean relative error profile revealed that the ranking of the six best-fitted equations examined was in good agreement with the best model selection criterion reported in the recent literature, for all performed simulations.
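For context, the implicit Colebrook–White equation is 1/√f = −2 log₁₀(ε/(3.7D) + 2.51/(R√f)), and explicit alternatives remove the √f from the right-hand side. A sketch of a simple fixed-point solution and of one widely used explicit approximation (Swamee–Jain), which is only an example of the class of formulas compared in the paper, not its recommended model:

```python
import math

def colebrook_white(rel_rough, Re, iters=50):
    """Implicit Colebrook-White friction factor via fixed-point iteration
    on x = 1/sqrt(f):  x = -2 log10(eps/(3.7 D) + 2.51 x / Re)."""
    x = 7.0  # initial guess, corresponds to f ~ 0.02
    for _ in range(iters):
        x = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
    return 1.0 / x**2

def swamee_jain(rel_rough, Re):
    """A widely used explicit approximation to Colebrook-White."""
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / Re**0.9) ** 2

# relative error of the explicit formula at one (eps/D, Re) point
f_cw = colebrook_white(1e-4, 1e6)
rel_err = abs(swamee_jain(1e-4, 1e6) - f_cw) / f_cw
```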

7.
Computer Networks, 2007, 51(11): 3172–3196
A search-based heuristic is presented for the optimisation of communication networks where traffic forecasts are uncertain and the problem is NP-complete. While algorithms such as genetic algorithms (GA) and simulated annealing (SA) are often used for this class of problem, this work applies a combination of newer optimisation techniques, specifically fast local search (FLS) as an improved hill climbing method and guided local search (GLS) to allow escape from local minima. The GLS + FLS combination is compared with optimised GA and SA approaches. It is found that, in terms of implementation, the parameterisation of the GLS + FLS technique is significantly simpler than that of a GA or SA. Also, the self-regularisation feature of the GLS + FLS approach provides a distinctive advantage over the other techniques, which require manual parameterisation. To compare numerical performance, the three techniques were tested over a number of network sets varying in size, number of switch circuit demands (network bandwidth demands) and levels of uncertainty on the switch circuit demands. The results show that GLS + FLS outperforms the GA and SA techniques in terms of both solution quality and optimisation speed, but even more importantly GLS + FLS requires significantly less parameterisation time.
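Guided local search escapes local minima by penalising solution features and minimising an augmented objective g(s) + λ Σᵢ pᵢ Iᵢ(s). A generic sketch of those two ingredients, with abstract feature arrays standing in for the paper's network-design features (which the abstract does not specify):

```python
import numpy as np

def augmented_cost(base_cost, feature_present, penalties, lam):
    """GLS augmented objective: g(s) + lambda * sum_i p_i * I_i(s),
    where I_i(s) = 1 if feature i is present in solution s."""
    return base_cost + lam * np.sum(penalties * feature_present)

def update_penalties(feature_present, feature_costs, penalties):
    """At a local minimum, increment the penalty of the present feature(s)
    with maximum utility u_i = I_i(s) * c_i / (1 + p_i)."""
    util = feature_present * feature_costs / (1.0 + penalties)
    penalties[util == util.max()] += 1
    return penalties
```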

8.
9.
Earth's core is believed to consist of a solid inner core and an outer liquid core. Since the inner core is mostly solid iron, most geophysical work has focused on the melting of pure iron at core conditions. The inner core density is well matched with seismological data if some S is added to iron. The available phase equilibrium experimental data in the binary Fe–S system, to pressures as high as ~200 GPa, are used to create a thermodynamic database extending to core pressures that can be used to calculate the inner core density if S were the only other constituent. Such a calculation gives the maximum temperature of the solid inner core as 4428 (±500) K (363.85 GPa, density = 13.09 g/cm³) with a sulfur content of ~15 wt%. To be consistent with the seismically determined density, the outer liquid core requires mixing of yet another light element or elements; both oxygen and carbon are suitable.

10.
In general, to achieve high compression efficiency, a 2D image or a 2D block is used as the compression unit. However, 2D compression requires a large memory and long latency when input data are received in the raster scan order that is common in existing TV systems. To address this problem, a 1D compression algorithm that uses a 1D block as the compression unit is proposed. 1D set partitioning in hierarchical trees (SPIHT) is an effective compression algorithm that fits the encoded bit length to the target bit length precisely. However, 1D SPIHT can have low compression efficiency because the 1D discrete wavelet transform (DWT) cannot exploit the redundancy in the vertical direction. This paper proposes two schemes for improving the compression efficiency of 1D SPIHT. First, a hybrid coding scheme that uses different coding algorithms for the low- and high-frequency bands is proposed: differential pulse code modulation–variable length coding (DPCM–VLC) is adopted for the low-pass band, whereas 1D SPIHT is used for the high-pass band. Second, a scheme that determines the target bit length of each block by using spatial correlation, with a minimal increase in complexity, is proposed. Experimental results show that the proposed algorithm improves the average peak signal-to-noise ratio (PSNR) by 2.97 dB compared with the conventional 1D SPIHT algorithm. In the hardware implementation, the throughputs of both the encoder and decoder designs are 6.15 Gbps, with gate counts of 42.8 K and 57.7 K, respectively.
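Two of the quantities used above are easy to state directly: first-order DPCM (applied to the low-pass band) and PSNR (the quality metric). A short sketch of both, assuming generic integer samples and 8-bit imagery rather than the paper's data:

```python
import numpy as np

def dpcm_residuals(samples):
    """First-order DPCM: encode each sample as the difference from its predecessor."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```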

11.
As part of a research project aimed at developing a thermodynamic database of the La–Sr–Co–Fe–O system for applications in Solid Oxide Fuel Cells (SOFCs), the Co–Fe–O subsystem was thermodynamically re-modeled in the present work using the CALPHAD methodology. The solid phases were described using the Compound Energy Formalism (CEF) and the ionized liquid was modeled with the ionic two-sublattice model based on the CEF. A set of self-consistent thermodynamic parameters was eventually obtained. Calculated phase diagrams and thermodynamic properties are presented and compared with experimental data. The modeling covers a temperature range from 298 K to 3000 K and oxygen partial pressures from 10⁻¹⁶ to 10² bar. Good agreement with the experimental data was shown, and improvements were made compared with previous modeling results.
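As a much-simplified analogue of the CALPHAD-type models used here, the molar Gibbs energy of a single-sublattice binary substitutional solution can be written G = Σ xᵢ°Gᵢ + RT Σ xᵢ ln xᵢ + x_A x_B L⁰. A sketch of that expression only; the actual Compound Energy Formalism and ionic two-sublattice liquid of the paper involve site fractions on several sublattices and many more interaction parameters:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def gibbs_binary(x_b, T, g_a, g_b, L0=0.0):
    """Molar Gibbs energy of a simple A-B substitutional solution (0 < x_b < 1):
    G = x_A*G_A + x_B*G_B + RT*(x_A ln x_A + x_B ln x_B) + x_A*x_B*L0."""
    x_a = 1.0 - x_b
    return (x_a * g_a + x_b * g_b
            + R * T * (x_a * np.log(x_a) + x_b * np.log(x_b))
            + x_a * x_b * L0)
```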

12.
Parallel Computing, 2014, 40(5–6): 144–158
One of the main difficulties in using multi-point statistical (MPS) simulation based on annealing techniques or genetic algorithms is the excessive amount of time and memory that must be spent in order to achieve convergence. In this work we propose code optimizations and parallelization schemes for a genetic-based MPS code with the aim of speeding up the execution time. The code optimizations involve reducing cache misses in array accesses, avoiding branching instructions and increasing the locality of the accessed data. The hybrid parallelization scheme involves a fine-grain parallelization of loops using a shared-memory programming model (OpenMP) and a coarse-grain distribution of load among several computational nodes using a distributed-memory programming model (MPI). Convergence, execution time and speed-up results are presented using 2D training images of sizes 100 × 100 × 1 and 1000 × 1000 × 1 on a distributed-shared memory supercomputing facility.
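The coarse-grain MPI level of such a hybrid scheme can be illustrated with mpi4py: each rank evaluates its own share of candidate realizations and the best misfit is reduced across ranks. The objective below is a dummy placeholder, and the fine-grain OpenMP loop parallelism of the paper would live inside each rank's compiled kernels, so this sketches only the distribution pattern:

```python
# Run with e.g. `mpirun -n 4 python mps_coarse_grain.py`; requires mpi4py and NumPy.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)
local_candidates = rng.random((10, 100))                     # hypothetical individuals
local_best = min(float(np.sum((c - 0.5) ** 2)) for c in local_candidates)  # dummy misfit

global_best = comm.allreduce(local_best, op=MPI.MIN)         # coarse-grain reduction
if rank == 0:
    print("best misfit over", size, "ranks:", global_best)
```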

13.
Compressive strength and splitting tensile strength are both mechanical properties of concrete that are utilized in structural design. This study presents gene expression programming (GEP) as a new tool for formulating the splitting tensile strength of concrete from its compressive strength. For the purpose of building the GEP-based formulations, 536 experimental data points were gathered from the existing literature. The GEP-based formulations are developed for the splitting tensile strength of concrete as a function of the age of the specimen and the cylinder compressive strength. In the experimental part of this study, cylindrical specimens of 150 × 300 mm and 100 × 200 mm are utilized. The training and testing sets of the GEP-based formulations are randomly separated from the complete experimental data. The GEP-based formulations are also validated with 173 additional experimental results beyond the data used in the training and testing sets. All of the results obtained from the GEP-based formulations are compared with the results obtained from the experimental data, the developed regression-based formulation and formulas given by some national building codes. These comparisons showed that the GEP-based formulations agree well with the experimental data and were found to be quite reliable.
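A conventional regression-based baseline of the kind the GEP formulations are compared against is a power-law fit f_st = a·f_c^b. A sketch using SciPy's curve_fit on a handful of hypothetical (f_c, f_st) pairs that stand in for the 536 experimental records; the fitted constants here carry no meaning beyond the illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(fc, a, b):
    """Regression-style baseline of the common form f_st = a * f_c**b (MPa)."""
    return a * fc ** b

# hypothetical (compressive strength, splitting tensile strength) pairs in MPa
fc = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 80.0])
fst = np.array([2.2, 2.9, 3.4, 3.9, 4.3, 5.0])

(a, b), _ = curve_fit(power_law, fc, fst, p0=(0.5, 0.5))
predicted = power_law(fc, a, b)
```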

14.
15.
A method is presented to generate metal-artefact-free finite element (FE) models based on in vivo micro-CT images of bone–implant structures, applied here to the guinea pig tibia. A bone–implant composite FE model was constructed based on both pre-operative (just before implant insertion) and post-operative (4 days after implant insertion) sets of micro-CT scans. Definition of the bone geometry and attribution of material properties to the volumetric elements were based on the pre-operative images, while the post-operative scans were mainly used for registration. Standard Triangulation Language (STL) representations of the implant and bone were generated after segmentation of the CT images in MIMICS® (Materialise). Registration was performed in 3-matic® (Materialise). By taking two sets of scans at a 4-day interval, both the undisturbed bone geometry before implant insertion and the implant position after insertion were recorded. After adequate validation, FE models constructed with this method can be used to study the adaptive response of bone to controlled mechanical loading.

16.
By combining embedded passive sensing technologies from both a smartphone and a smartwatch, it is possible to obtain high-quality detection of sedentary activities (sitting, reclining posture…), movements (walking…) and periods of more intense body movement (running…). Our research encompasses the definition of an energy-saving function for total energy expenditure (TEE) estimation using accelerometry data. This topic is clearly at the crossroads of computer science and medical research. The present contribution proposes an intelligent wearable system which combines the use of two complementary devices, a smartphone and a smartwatch, to collect accelerometry data. Together they can precisely discriminate real-world human sedentary and active behaviors and their duration, and estimate energy expenditure in real time and in free-living conditions. The results of the study are expected to help subjects manage their daily-living physical activity, notably to comply with the international physical activity guidelines (150 min of moderate-intensity activity per week). It is also expected that physical activity feedback from these popular devices can demonstrate the effectiveness of such wearable objects in promoting individually adapted healthy behavioral changes. The performance of the proposed function was evaluated by comparing the energy expenditure given by the smartphone and smartwatch with that produced by an Armband®. The mean TEE error between the proposed function and the Armband® was less than 4% for an average 6 h period of daily-living activities. The main theoretical contribution is the definition of a new predictive mathematical function of energy expenditure, which competes with the non-public function used in dedicated, costly devices such as the Armband®. In addition, this work demonstrates the potential of wearable technologies.
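As an illustration of what a predictive energy-expenditure function can look like, here is a deliberately simple hypothetical linear model mapping per-minute accelerometer counts and body weight to kilocalories. The functional form and the coefficient are illustrative assumptions only; the paper's actual function is not given in the abstract:

```python
def estimate_tee_kcal(counts_per_min, weight_kg, rest_kcal_per_min, k=2.5e-4):
    """Hypothetical linear activity model (illustrative only, NOT the paper's function):
    TEE = resting EE over the period + k * counts * body weight summed per minute."""
    active = sum(k * c * weight_kg for c in counts_per_min)
    resting = rest_kcal_per_min * len(counts_per_min)
    return resting + active

# one hour of synthetic per-minute accelerometer counts
tee = estimate_tee_kcal(counts_per_min=[300] * 60, weight_kg=70, rest_kcal_per_min=1.2)
```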

17.
Deregulated epigenetic activity of histone deacetylase 1 (HDAC1) in tumor development and carcinogenesis marks it as a promising therapeutic target for cancer treatment. HDAC1 has recently captured the attention of researchers owing to its decisive role in multiple types of cancer. In the present study, a multistep framework combining ligand-based 3D-QSAR, molecular docking and molecular dynamics (MD) simulation studies was applied to explore potential compounds with good HDAC1 binding affinity. Four different pharmacophore hypotheses, Hypo1 (AADR), Hypo2 (AAAH), Hypo3 (AAAR) and Hypo4 (ADDR), were obtained. The hypothesis Hypo1 (AADR), with two hydrogen bond acceptors (A), one hydrogen bond donor (D) and one aromatic ring (R), was selected to build the 3D-QSAR model on the basis of its statistical parameters. The pharmacophore hypothesis produced a statistically significant QSAR model, with a correlation coefficient r² = 0.82 and a cross-validated correlation coefficient q² = 0.70. External validation results display high predictive power, with an r²(o) value of 0.88 and an r²(m) value of 0.58, supporting further in silico studies. Virtual screening identified ZINC70450932 as the most promising lead, where HDAC1 interacts with residues Asp99, His178, Tyr204, Phe205 and Leu271, forming seven hydrogen bonds. A high docking score (−11.17 kcal/mol) and low docking energy (−37.84 kcal/mol) display the binding efficiency of the ligand. Binding free energy calculations were performed using MM/GBSA to assess the affinity of the ligands towards the protein. Density Functional Theory was employed to explore the electronic features of the ligands, describing the intramolecular charge transfer reaction. Molecular dynamics simulation studies over 50 ns display the metal ion (Zn)–ligand interaction that is vital to inhibiting the enzymatic activity of the protein.
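The r² and q² statistics quoted above are the conventional coefficient of determination and the leave-one-out cross-validated q² = 1 − PRESS/SS_tot. A sketch of both, with a simple linear model standing in for the QSAR learner (the fit/predict callables and the data are placeholders, not the paper's pharmacophore-based model):

```python
import numpy as np

def r_squared(y, y_pred):
    """Conventional coefficient of determination."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    return 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

def q_squared_loo(X, y, fit, predict):
    """Leave-one-out cross-validated q2 = 1 - PRESS / SS_tot."""
    y = np.asarray(y, float)
    press = 0.0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        model = fit(X[keep], y[keep])
        press += (y[i] - predict(model, X[i:i + 1])[0]) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

# usage with a simple linear model as the stand-in learner
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
fit = lambda X, y: np.polyfit(X[:, 0], y, 1)
predict = lambda m, X: np.polyval(m, X[:, 0])
r2, q2 = r_squared(y, predict(fit(X, y), X)), q_squared_loo(X, y, fit, predict)
```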

18.
The dramatic increase in space-borne sensors over the past two decades is presenting unique opportunities for new and enhanced applications in various scientific disciplines. Using these data sets, hydrogeologists can now address and understand the partitioning of water systems on regional and global scales, yet such applications present mounting challenges in data retrieval, assimilation, and analysis for scientists attempting to process the relevant large temporal remote sensing data sets (e.g., TRMM, SSM/I, AVHRR, MODIS, QuikSCAT, and AMSR-E). We describe solutions to these problems through the development of an interactive data language (IDL)-based computer program, the remote sensing data extraction model (RESDEM), for integrated processing and analysis of a suite of remote sensing data sets. RESDEM imports, calibrates, and georeferences scenes, and subsets global data sets for the purpose of extracting and verifying precipitation over areas and time periods of interest. Verification of precipitation events is accomplished by integrating other long-term satellite-based data sets. RESDEM includes modules that process data for cloud detection and others that detect changes in soil moisture, vegetative water capacity and vegetation intensity following targeted precipitation events. Using the arid Sinai Peninsula (SP; area: 61,000 km²) and the Eastern Desert (ED; area: 220,000 km²) of Egypt as test sites, we demonstrate how RESDEM outputs (verified precipitation events) are now enabling regional-scale applications of continuous (1998–2006) rainfall-runoff and groundwater recharge computations.
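Change detection of vegetation response after a precipitation event is commonly based on indices such as NDVI = (NIR − RED)/(NIR + RED). A Python sketch of that kind of post-event comparison, assuming two pairs of co-registered NIR/red arrays and an arbitrary threshold; RESDEM itself is an IDL program and its actual modules are not reproduced here:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, a standard greenness measure."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-12)   # small epsilon avoids divide-by-zero

def vegetation_change(nir_before, red_before, nir_after, red_after, threshold=0.05):
    """Flag pixels whose NDVI increased by more than `threshold` after a rain event."""
    delta = ndvi(nir_after, red_after) - ndvi(nir_before, red_before)
    return delta > threshold
```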

19.
A methodology is presented for the generation and meshing of large-scale three-dimensional random polycrystals. Voronoi tessellations are used and are shown to include morphological properties that make them particularly challenging to mesh with high element quality. Original approaches are presented to solve these problems: (i) “geometry regularization”, which consists in removing the geometrical details of the polycrystal morphology, (ii) “multimeshing”, which consists in using several meshing algorithms simultaneously to optimize mesh quality, and (iii) remeshing, by which a new mesh is constructed over a deformed mesh and the state variables are transported, for large-strain applications. Detailed statistical analyses are conducted on the polycrystal morphology and mesh quality. The results are mainly illustrated by the high-quality meshing of polycrystals with large numbers of grains (up to 10⁵), and the finite element simulation of plane strain compression to ε = 1.4 of a 3000-grain polycrystal. The presented algorithms are implemented and distributed in a free (open-source) software package: Neper.
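The starting point of such a workflow, generating a random 3-D Voronoi tessellation from seed points, can be sketched with scipy.spatial.Voronoi. Neper additionally performs the regularization and meshing steps described above, which are not shown; the number of seeds here is arbitrary:

```python
import numpy as np
from scipy.spatial import Voronoi

# 100 random seed points in a unit cube stand in for grain centres
rng = np.random.default_rng(0)
seeds = rng.random((100, 3))

vor = Voronoi(seeds)  # 3-D Voronoi tessellation of the seeds

# each bounded region (no vertex at infinity, index -1) is one grain's polyhedral cell
n_finite_cells = sum(1 for r in vor.regions if r and -1 not in r)
```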

20.
In this paper we present a computer program written in FORTRAN 90 specifically designed to determine the Bouguer anomaly from publicly available global gridded free-air anomaly and elevation data sets. FA2BOUG computes the complete Bouguer correction (i.e. Bullard A, B and C corrections) for both land and sea points in several spatial domains according to the distance between the topography and the calculation point. In each zone a different algorithm is used. In the distant zone we consider the harmonic spherical expansion of the potential of each right rectangular prism representing an elevation grid point. In the intermediate zone we compute the gravitational attraction produced by each prism using the analytic formula. Finally, the inner zone contribution is divided into two parts: a flat-topped prism with a height equal to the elevation of the calculation point, and four quadrants of a conic prism sloping continuously from each square of the inner zone to the calculation point. The program has been applied to the Atlantic–Mediterranean transition zone to obtain a complete Bouguer anomaly map of the area, integrating the available onshore Bouguer anomaly with satellite-derived free-air anomaly data. Positive Bouguer anomalies are found in the Atlantic oceanic domain (240–300 mGal), the central and eastern Alboran Basin (40–160 mGal) and the SW Iberian Peninsula (>40 mGal). Major negative Bouguer anomalies are located beneath the west Alboran Basin (< −40 mGal), the Rif, the Rharb Basin and the Atlas Mountains (< −120 mGal). An isostatic residual anomaly map of the study area has been computed and compared with the crustal and lithospheric structure inferred from previous work.
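The simplest member of the family of corrections computed by FA2BOUG is the infinite-slab (Bullard A) term, BA = FA − 2πGρh. A sketch of that reduction for a land point with the conventional 2670 kg/m³ reduction density; the Bullard B and C (curvature and terrain) corrections and the marine case handled by the program are not included:

```python
import math

def simple_bouguer_anomaly(free_air_mgal, elevation_m, density=2670.0):
    """Simple (infinite-slab) Bouguer anomaly for a land point: BA = FA - 2*pi*G*rho*h.
    2*pi*G*rho is ~0.1119 mGal per metre for 2670 kg/m^3."""
    G = 6.674e-11                                               # m^3 kg^-1 s^-2
    slab_mgal = 2.0 * math.pi * G * density * elevation_m * 1e5  # m/s^2 -> mGal
    return free_air_mgal - slab_mgal

# e.g. a 20 mGal free-air anomaly at 500 m elevation
ba = simple_bouguer_anomaly(free_air_mgal=20.0, elevation_m=500.0)
```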
