Similar Documents

20 similar documents found (search time: 31 ms).
1.
An algorithm has been designed to search for escape paths with the lowest activation barriers starting from a local minimum-energy configuration of a many-atom system. The pathfinder algorithm combines: (1) a steered eigenvector-following method that guides a constrained escape from the convex region and subsequently climbs to a transition state tangentially to the eigenvector corresponding to the lowest negative Hessian eigenvalue; (2) discrete abstraction of the atomic configuration to systematically enumerate concerted events as linear combinations of atomistic events; (3) evolutionary control of the population dynamics of low activation-barrier events; and (4) hybrid task and spatial decompositions to implement a massive search for complex events on parallel computers. The program exhibits good scalability on parallel computers and has been used to study concerted bond-breaking events in the fracture of alumina.
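A minimal sketch of the min-mode-following idea behind ingredient (1), assuming a hypothetical `grad` callable (gradient of the potential) and a small finite-difference Hessian; this illustrates the generic technique, not the authors' pathfinder code:

```python
import numpy as np

def climb_step(x, grad, step=0.01, h=1e-4):
    """One eigenvector-following step: reverse the force along the
    lowest Hessian eigenmode so the system climbs toward a saddle."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):                       # finite-difference Hessian
        e = np.zeros(n); e[i] = h
        H[:, i] = (grad(x + e) - grad(x - e)) / (2.0 * h)
    H = 0.5 * (H + H.T)                      # symmetrize
    w, V = np.linalg.eigh(H)
    v = V[:, 0]                              # eigenvector of the lowest eigenvalue
    F = -grad(x)
    F_climb = F - 2.0 * np.dot(F, v) * v     # invert the force along that mode
    return x + step * F_climb
```

Repeating such steps drives the configuration uphill along the softest unstable mode while relaxing in all other directions, which is the "climb to a transition state" the abstract describes.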

2.
The tunable dimension cluster-cluster aggregation (tdCCA) [R. Thouy, R. Jullien, J. Phys. A: Math. Gen. 27 (1994) 2953] provides a computational model for creating fractal aggregates with a tunable fractal dimension. A straightforward implementation of this model requires a computational effort scaling as O(Ntotal⁴) in the number of particles Ntotal. By applying two minor changes to the algorithm, the computational effort can be reduced to O(Ntotal²), which allows an efficient parallel implementation of the tdCCA. On a modern parallel computer, a fractal aggregate of one million particles has been built in less than 24 h.

3.
We present a randomized parallel algorithm that computes the greatest common divisor of two n-bit integers with probability 1 − o(1) in O(n log log n / log n) time using O(n^(6+ε)) processors, for any ε > 0, on the EREW PRAM parallel model of computation. The algorithm either gives a correct answer or reports failure. We believe this to be the first randomized sublinear-time algorithm on the EREW PRAM for this problem.

4.
A scalable and portable code named Atomsviewer has been developed to interactively visualize large atomistic datasets consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field of view. Probabilistic and depth-based occlusion-culling algorithms then select the atoms that have a high probability of being visible. Finally, a multiresolution algorithm renders the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of platforms including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms.
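The hierarchical frustum culling described above can be sketched as a recursive octree walk that prunes whole subtrees whose bounding boxes lie behind any frustum plane. The `Node` fields (`lo`, `hi`, `children`, `atoms`) and the `(normal, offset)` plane representation below are hypothetical illustrations, not Atomsviewer's actual data structures:

```python
from collections import namedtuple
import numpy as np

Node = namedtuple("Node", "lo hi children atoms")   # hypothetical octree node

def box_outside_plane(lo, hi, normal, d):
    """True if the box [lo, hi] lies entirely behind the plane normal·x + d = 0."""
    p = np.where(normal > 0, hi, lo)    # box vertex furthest along the normal
    return np.dot(normal, p) + d < 0.0

def cull(node, planes, visible):
    """Append atoms of octree nodes intersecting the view frustum to `visible`."""
    for normal, d in planes:            # six (normal, offset) frustum planes
        if box_outside_plane(node.lo, node.hi, normal, d):
            return                      # whole subtree invisible: prune it
    if not node.children:               # leaf: keep its atoms
        visible.extend(node.atoms)
    else:
        for child in node.children:
            cull(child, planes, visible)
```

Because an entire subtree is rejected by a single box test, the cost scales with the visible portion of the scene rather than the total atom count.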

Program summary

Title of program: Atomsviewer
Catalogue identifier: ADUM
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card
Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5
Programming languages used: C++, C and OpenGL
Memory required to execute with typical data: 1 gigabyte of RAM
High speed storage required: 60 gigabytes
No. of lines in the distributed program including test data, etc.: 550 241
No. of bytes in the distributed program including test data, etc.: 6 258 245
Number of bits in a word: Arbitrary
Number of processors used: 1
Has the code been vectorized or parallelized: No
Distribution format: tar gzip file
Nature of physical problem: Scientific visualization of atomic systems
Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data minimization, and levels-of-detail for minimal rendering
Restrictions on the complexity of the problem: None
Typical running time: The program is interactive in its execution
Unusual features of the program: None
References: The conceptual foundation and subsequent implementation of the algorithms are found in [A. Sharma, A. Nakano, R.K. Kalia, P. Vashishta, S. Kodiyalam, P. Miller, W. Zhao, X.L. Liu, T.J. Campbell, A. Haas, Presence—Teleoperators and Virtual Environments 12 (1) (2003)].

5.
A scalable parallel algorithm has been designed to study the long-time dynamics of many-atom systems based on the nudged elastic band method, which performs mutually constrained molecular dynamics simulations for a sequence of atomic configurations (or states) to obtain a minimum-energy path between initial and final local minimum-energy states. A directionally heated nudged elastic band method is introduced to search for thermally activated events without knowledge of the final states; it is then applied to an ensemble of bands in a path ensemble method for long-time simulation within the framework of transition state theory. The resulting molecular kinetics (MK) simulation method is parallelized with a space-time-ensemble parallel nudged elastic band (STEP-NEB) algorithm, which employs spatial decomposition within each state, while temporal parallelism across the states within each band and band-ensemble parallelism are implemented using a hierarchy of communicator constructs in the Message Passing Interface library. The STEP-NEB algorithm exhibits good scalability with respect to spatial, temporal and ensemble decompositions on massively parallel computers. The MK simulation method is used to study low strain-rate deformation of amorphous silica.
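For orientation, the force projection at the core of any nudged elastic band calculation: each interior image feels the perpendicular component of the true force plus a spring force along the local tangent. A minimal serial sketch under a simple tangent estimate (the paper's directionally heated, space-time-ensemble parallel version is far more involved):

```python
import numpy as np

def neb_forces(images, grad, k=1.0):
    """Nudged forces for the interior images of a band R_0 .. R_N.
    `images` is a list of coordinate arrays, `grad` returns dV/dR."""
    F = [np.zeros_like(r) for r in images]
    for i in range(1, len(images) - 1):
        tau = images[i + 1] - images[i - 1]          # simple tangent estimate
        tau /= np.linalg.norm(tau)
        g = grad(images[i])
        f_perp = -g + np.dot(g, tau) * tau           # true force, perpendicular part
        f_spring = k * (np.linalg.norm(images[i + 1] - images[i])
                        - np.linalg.norm(images[i] - images[i - 1])) * tau
        F[i] = f_perp + f_spring                     # nudged total force
    return F
```

Relaxing the images under these forces keeps them evenly spread along the band while the band itself settles onto the minimum-energy path.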

6.
A domain decomposition algorithm for molecular dynamics simulation of atomic and molecular systems with arbitrary shape and non-periodic boundary conditions is described. The molecular dynamics program uses the cell multipole method for efficient calculation of long-range electrostatic interactions and a multiple time step method to allow larger time steps. The system is enclosed in a cube, and the cube is divided into a hierarchy of cells. The deepest-level cells are assigned to processors such that each processor has contiguous cells, and static load balancing is achieved by redistributing the cells so that each processor has approximately the same number of atoms. The resulting domains have irregular shapes and may have more than 26 neighbors. Atoms constituting bond angles and torsion angles may straddle more than two processors, so an efficient strategy is devised for the initial assignment and subsequent reassignment of such multiple-atom potentials to processors. At each step, computation is overlapped with communication, greatly reducing the effect of communication overhead on parallel performance. The algorithm is tested on a spherical cluster of water molecules, a hexasaccharide and an enzyme, the latter two solvated by a spherical cluster of water molecules. In each case a spherical boundary containing oxygen atoms with only repulsive interactions is used to prevent evaporation of water molecules. The algorithm shows excellent parallel efficiency even for a small number of cells/atoms per processor.
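The static load-balancing step, redistributing contiguous cells so that each processor holds approximately the same number of atoms, can be illustrated by a prefix-sum split over an ordered list of deepest-level cells. This is a simplified sketch, not the paper's exact reassignment strategy:

```python
def assign_cells(cell_atom_counts, nproc):
    """Split an ordered list of cells into nproc contiguous chunks
    with approximately equal total atom counts."""
    total = sum(cell_atom_counts)
    target = total / nproc
    owner, rank, acc = [], 0, 0.0
    for count in cell_atom_counts:
        # advance to the next processor once its share of atoms is met
        if acc >= target * (rank + 1) and rank < nproc - 1:
            rank += 1
        owner.append(rank)              # owner[i] = processor of cell i
        acc += count
    return owner
```

Ordering the cells along a space-filling path before the split keeps each processor's cells spatially contiguous, which is what bounds the communication volume.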

7.
What is the relationship between the macroscopic parameters of the constitutive equation for a granular soil and the microscopic forces between grains? To investigate this connection, we have simulated by molecular dynamics the oedometric compression of a granular soil (a dry, poorly graded sand) and computed the hypoplastic parameters hs (the granular skeleton hardness) and η (the exponent in the compression law) by following the same procedure as in experiments, that is, by fitting Bauer's law e/e0 = exp(−(3p/hs)^η), where p is the pressure and e0 and e are the initial and present void ratios. The micro-mechanical simulation includes elastic and dissipative normal forces plus slip, rolling and static friction between grains. In this way we have explored how the macroscopic parameters change as we modify the grain stiffness, V; the dissipation coefficient, γn; the static friction coefficient, μs; and the dynamic friction coefficient, μk. Combining all simulations, we obtained an unexpected result: the two macroscopic parameters seem to be related by a power law, hs = 0.068(4) η^9.88(3). Moreover, the experimental result for a Guamo sand with the same granulometry fits perfectly onto this power law. Is this relation real? What is the final ground of Bauer's law? We conclude by exploring some hypotheses.
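A worked sketch of the fitting step, with synthetic (p, e) data and scipy's curve_fit standing in for whatever fitting routine the authors used; the numbers and the initial guess are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def bauer(p, hs, n, e0=0.9):
    """Bauer's compression law: e = e0 * exp(-(3p/hs)**n)."""
    return e0 * np.exp(-(3.0 * p / hs) ** n)

# Illustrative data only (pressures in kPa, void ratios dimensionless).
p = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
e = np.array([0.89, 0.88, 0.86, 0.83, 0.78])

(hs, n), _ = curve_fit(bauer, p, e, p0=(1e5, 0.4))  # fit hs and the exponent
print(f"hs = {hs:.3g} kPa, exponent = {n:.3g}")
```

Repeating such a fit for every micro-mechanical parameter set yields the (hs, η) pairs from which the reported power law was extracted.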

8.
The software described in this paper uses the Maple algebraic computing environment to calculate an analytic form for the matrix element of the plane-wave Born approximation of the electron-impact ionisation of an atomic orbital, with arbitrary orbital and angular momentum quantum numbers. The atomic orbitals are approximated by Hartree-Fock Slater functions, and the ejected electron is modelled by a hydrogenic Coulomb wave, made orthogonal to all occupied orbitals of the target atom. Clenshaw-Curtis integration techniques are then used to calculate the total ionisation cross-section. For improved performance, the numerical integrations are performed using FORTRAN by automatically converting the analytic matrix element for each orbital into a FORTRAN subroutine. The results compare favourably with experimental data for a wide range of elements, including the transition metals, with excellent convergence at high energies.
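Clenshaw-Curtis integration, named above, interpolates the integrand at Chebyshev points and integrates the interpolant exactly; a one-dimensional sketch (the program applies a three-dimensional version):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def clenshaw_curtis(f, a, b, deg=32):
    """Integrate f over [a, b] via Chebyshev interpolation of degree deg."""
    g = lambda t: f(0.5 * (b - a) * t + 0.5 * (b + a))   # map [a,b] -> [-1,1]
    c = C.chebinterpolate(g, deg)       # Chebyshev coefficients of g
    w = np.zeros(deg + 1)
    j = np.arange(0, deg + 1, 2)        # odd-degree T_j integrate to zero
    w[j] = 2.0 / (1.0 - j ** 2)         # integral of T_j over [-1, 1], even j
    return 0.5 * (b - a) * np.dot(c, w)

print(clenshaw_curtis(np.sin, 0.0, np.pi))   # should print approximately 2.0
```

Because the weights follow from exact integrals of Chebyshev polynomials, the rule converges rapidly for smooth integrands, which is why it suits the cross-section integrals here.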

Program summary

Title of program: BIX
Catalogue identifier: ADRZ
Program summary URL: http://www.cpc.cs.qub.ac.uk/cpc/summaries/ADRZ
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computers: Platform independent
Operating systems: Tested on DEC Alpha Unix, Windows NT 4.0 and Windows XP Professional Edition
Programming language used: Maple V Release 5.1 and FORTRAN 90
Memory required: 256 MB
No. of processors used: 1
No. of bytes in distributed program, including test data, etc.: 61 754
Distribution format: tar gzip file
Keywords: Born approximation, electron-impact ionisation cross-section, Maple, Hartree-Fock
Nature of physical problem: Calculates the total electron impact ionisation cross-section for neutral and ionised atomic species using the first-Born approximation. The scattered electron is modelled by a plane wave, and the ejected electron is modelled by a hydrogenic Coulomb wave, which is made orthogonal to all occupied atomic orbitals, and the atomic orbitals are approximated by Hartree-Fock Slater functions.
Method of solution: An analytic form of the matrix element is evaluated using the Maple algebraic computing software. The total ionisation cross-section is then calculated using a three-dimensional Clenshaw-Curtis numerical integration algorithm.
Restrictions on the complexity of the problem: There is no theoretical limit on the quantum state of the target orbital that can be solved with this methodology, subject to the availability of Hartree-Fock coefficients. However, computing resource limitations will place a practical limit at approximately n ≤ 7 and l ≤ 4. The precision of results close to the ionisation threshold of larger atoms (< 1 eV for Z > 48) is limited to ≈ 5%.
Typical running time: 5 to 40 minutes for the initial calculation for an atomic orbital, then 5 to 300 seconds for subsequent energies of the same orbital.
Unusual features of the program: To reduce calculation time, FORTRAN source code is generated and compiled automatically by the Maple procedures, based upon the analytic form of the matrix element. Numerical evaluation is then passed to the FORTRAN executable and the results are retrieved automatically.

9.
A numerical program is presented which facilitates the computation of the full set of one-gluon loop diagrams (including ghost loop contributions) with M external gluon lines attached in all possible ways. The feasibility of such a task rests on a suitably defined master formula, which is expressed in terms of a set of Grassmann and a set of Feynman parameters. The program carries out the Grassmann integration and performs the Lorentz trace on the involved functions, expressing the result as a compact sum of parametric integrals. The computation is based on tracing the structure of the final result, thus avoiding unnecessary intermediate calculations and directly writing the output. Similar terms entering the final result are grouped together. The running time of the program demonstrates its effectiveness, especially for large M.

Program summary

Program title: DILOG2
Program identifier: ADXN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXN_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Programming language: FORTRAN 90
Computer(s) for which the program has been designed: Personal Computer
Operating system(s) for which the program has been designed: Windows 98, XP, LINUX
Number of processors used: one
No. of lines in distributed program, including test data, etc.: 2000
No. of bytes in distributed program, including test data, etc.: 16 249
Distribution format: tar.gz
External routines/libraries used: none
CPC Program Library subprograms used: none
Nature of problem: The computation of one gluon/ghost loop diagrams in QCD with many external gluon lines is a time consuming task, practically beyond reasonable reach of analytic procedures. We apply recently proposed master formulas towards the computation of such diagrams with an arbitrary number (M) of external gluon lines, achieving a final result which reduces the problem to one involving integrals over the standard set, for given M, of Feynman parameters.
Solution method: The structure of the master expressions is analyzed from a numerical computation point of view. Using the properties of Grassmann variables we identify all the different forms of terms that appear in the final result. Each form is called a "structure". We calculate theoretically the number of terms belonging to every "structure". We carry out the calculation organizing the whole procedure into separate calculations of the terms belonging to every "structure". Terms which do not contribute to the final result are thereby avoided. The final result, extending to large values of M, is also presented with terms belonging to the same "structure" grouped together.
Restrictions: M is coded as a 2-digit integer. Overflow in the dimension of the used arrays is expected to appear for M ≥ 20 on a processor that uses 4-byte integers, or for M ≥ 34 on a processor with 8-byte integers.
Running time: Depends on M; see the enclosed figures.

10.
We present a software library for numerically estimating first- and second-order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used when bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared-memory architectures, and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters as well as modern multi-core systems; owing to the independent character of the derivative computations, the speedup scales almost linearly with the number of available processors/cores.
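A sketch of the central-difference gradient in the spirit of the library, though not NDL's actual interface: the step size balances truncation against round-off error, and one-sided formulas are substituted when a bound blocks the symmetric stencil:

```python
import numpy as np

def central_gradient(f, x, lo=None, hi=None):
    """O(h^2) central-difference gradient; one-sided O(h) formulas at bounds."""
    x = np.asarray(x, dtype=float)
    # eps**(1/3) balances truncation and round-off for the central formula
    h = np.cbrt(np.finfo(float).eps) * np.maximum(1.0, np.abs(x))
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h[i]
        ok_fwd = hi is None or x[i] + h[i] <= hi[i]   # forward point feasible?
        ok_bwd = lo is None or x[i] - h[i] >= lo[i]   # backward point feasible?
        if ok_fwd and ok_bwd:
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h[i])   # central, O(h^2)
        elif ok_fwd:
            g[i] = (f(x + e) - f(x)) / h[i]               # forward, O(h)
        else:
            g[i] = (f(x) - f(x - e)) / h[i]               # backward, O(h)
    return g
```

Since each component uses independent function evaluations, the loop parallelizes trivially, which is the property the OpenMP and MPI versions exploit.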

Program summary

Program title: NDL (Numerical Differentiation Library)
Catalogue identifier: AEDG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 73 030
No. of bytes in distributed program, including test data, etc.: 630 876
Distribution format: tar.gz
Programming language: ANSI FORTRAN-77, ANSI C, MPI, OpenMP
Computer: Distributed systems (clusters), shared memory systems
Operating system: Linux, Solaris
Has the code been vectorised or parallelized?: Yes
RAM: The library uses O(N) internal storage, N being the dimension of the problem
Classification: 4.9, 4.14, 6.5
Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems.
Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries.
Restrictions: The library uses only double precision arithmetic.
Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed.
Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.

11.
Many string manipulations can be performed efficiently on suffix trees. In this paper a CRCW parallel RAM algorithm is presented that constructs the suffix tree associated with a string of n symbols in O(log n) time with n processors. The algorithm requires Θ(n²) space. However, the space needed can be reduced to O(n^(1+ε)) for any 0 < ε ≤ 1, with a corresponding slow-down proportional to 1/ε. Efficient parallel procedures are also given for some string problems that can be solved with suffix trees.

12.
A parallel implementation of an algorithm for solving the one-dimensional, Fourier-transformed Vlasov-Poisson system of equations is documented, together with the code structure, file formats and settings needed to run the code. The properties of the Fourier-transformed Vlasov-Poisson system are discussed in connection with its numerical solution. The Fourier method in velocity space is used to treat numerical problems arising from the filamentation of the solution in velocity space, and outflow boundary conditions in the Fourier-transformed velocity space remove the highest oscillations. A fourth-order compact Padé scheme is used to calculate derivatives in the Fourier-transformed velocity space, and spatial derivatives are calculated with a pseudo-spectral method. The parallel algorithms used are described in detail, in particular the parallel solver for the tri-diagonal systems occurring in the Padé scheme.
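The tri-diagonal systems come from the fourth-order compact Padé scheme, which at interior points couples neighbouring derivative values: f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h. A serial Thomas-algorithm kernel of the kind the paper parallelizes (a sketch, not the program's actual solver):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a, b, c are the sub-, main and
    super-diagonals (a[0] and c[-1] are unused), d is the right-hand side."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The recurrence is inherently sequential, which is exactly why a dedicated parallel tri-diagonal solver is needed on distributed machines.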

Program summary

Title of program: vlasov
Catalogue identifier: ADVQ
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVQ
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Operating systems under which the program has been tested: Sun Solaris; HP-UX; Red Hat Linux
Programming language used: FORTRAN 90 with Message Passing Interface (MPI)
Computers: Sun Ultra Sparc; HP 9000/785; HP IPF (Itanium Processor Family) ia64 cluster; PC clusters
Number of lines in distributed program, including test data, etc.: 3737
Number of bytes in distributed program, including test data, etc.: 18 772
Distribution format: tar.gz
Nature of physical problem: Kinetic simulations of collisionless electron-ion plasmas.
Method of solution: A Fourier method in velocity space, a pseudo-spectral method in space and a fourth-order Runge-Kutta scheme in time.
Memory required to execute with typical data: Uses typically of the order of 10^5-10^6 double precision numbers.
Restriction on the complexity of the problem: The program uses periodic boundary conditions in space.
Typical running time: Depends strongly on the problem size; typically a few hours if only electron dynamics is considered, and longer if both ion and electron dynamics are important.
Unusual features of the program: None

13.
We present a FORTRAN 90 program, GCFP, for the calculation of the generalized coefficients of fractional parentage (generalized CFPs or GCFPs). The approach is based on the observation that multi-shell CFPs can be expressed in terms of single-shell CFPs, while the latter can be readily calculated employing a simple enumeration scheme for antisymmetric A-particle states and an efficient method of constructing the idempotent matrix eigenvectors. The program provides fast calculation of GCFPs for a given particle number and produces results with numerical uncertainties below the desired tolerance. A single j-shell is defined by four quantum numbers, (e, l, j, t). A supplemental C++ program, parGCFP, allows calculations to be done in batches and/or in parallel.

Program summary

Program title: GCFP, parGCFP
Catalogue identifier: AEBI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 17 199
No. of bytes in distributed program, including test data, etc.: 88 658
Distribution format: tar.gz
Programming language: FORTRAN 77/90 (GCFP), C++ (parGCFP)
Computer: Any computer with suitable compilers. The program GCFP requires a FORTRAN 77/90 compiler. The auxiliary program parGCFP requires a GNU-C++ compatible compiler, while its parallel version additionally requires MPI-1 standard libraries.
Operating system: Linux (Ubuntu, Scientific) (all programs); also checked on Windows XP (GCFP, serial version of parGCFP)
RAM: The memory demand depends on the computation and output mode. If this mode is not 4, the program GCFP demands the following amounts of memory on a computer with a Linux operating system: around 2 MB of RAM for the A=12 system at Ex ≤ 2; computation of the A=50 particle system requires around 60 MB of RAM at Ex=0 and ∼70 MB at Ex=2 (note, however, that the calculation of this system will take a very long time). If the computation and output mode is set to 4, the memory demands of GCFP are significantly larger: calculation of the GCFPs of the A=12 system at Ex=1 requires 145 MB. The program parGCFP requires an additional 2.5 and 4.5 MB of memory for the serial and parallel versions, respectively.
Classification: 17.18
Nature of problem: The program GCFP generates a list of two-particle coefficients of fractional parentage for several j-shells with isospin.
Solution method: The method is based on the observation that multishell coefficients of fractional parentage can be expressed in terms of single-shell CFPs [1]. The latter are calculated using the algorithm [2,3] for a spectral decomposition of an antisymmetrization operator matrix Y. The coefficients of fractional parentage are those eigenvectors of the antisymmetrization operator matrix Y that correspond to unit eigenvalues. A computer code for these coefficients is available [4]. The program GCFP offers computation of two-particle multishell coefficients of fractional parentage. The program parGCFP allows a batch calculation using one input file. Sets of GCFPs are independent and can be calculated in parallel.
Restrictions: A < 86 when Ex=0 (due to memory constraints); small numbers of particles allow significantly higher excitations, though a shell with j ≥ 11/2 cannot become full (an implementation constraint).
Unusual features: Using the program GCFP it is possible to determine the allowed particle configurations without computing the GCFPs. The GCFPs can be calculated either for all particle configurations at once or for a specified particle configuration. The values of the GCFPs can be printed with a complete specification either in one file or with the parent and daughter configurations printed in separate files. The latter output mode requires additional time and RAM. It is possible to restrict the (J,T) values of the considered particle configurations. (Here J is the total angular momentum and T is the total isospin of the system.) The program parGCFP produces several result files, the number of which equals the number of particle configurations. To work correctly, the program GCFP needs to be compiled to read parameters from the standard input (the default setting).
Running time: Depends on the size of the problem. The minimum time is required if the computation and output mode (CompMode) is not 4, but the resulting file is larger. A system of A=12 particles at Ex=0 (all 9411 GCFPs) took around 1 s on a Pentium 4 2.8 GHz processor with 1 MB L2 cache. The program required about 14 min to calculate all 1.3×10^6 GCFPs at Ex=1. The time for all 5.5×10^7 GCFPs at Ex=2 was about 53 hours. For this number of particles, the calculation time of both Ex=0 and Ex=1 with CompMode = 1 and 4 is nearly the same, when no other processes are running. The case of Ex=2 could not be calculated with CompMode = 4, because the RAM was insufficient. In general, the latter CompMode requires a longer computation time, although the resulting files are smaller in size. The program parGCFP adds virtually no time overhead, and its parallel version speeds up the calculation; however, the results need to be collected from the several files created, one per configuration.
References:
[1] J. Levinsonas, Works of Lithuanian SSR Academy of Sciences 4 (1957) 17.
[2] A. Deveikis, A. Bon?kus, R. Kalinauskas, Lithuanian Phys. J. 41 (2001) 3.
[3] A. Deveikis, R.K. Kalinauskas, B.R. Barrett, Ann. Phys. 296 (2002) 287.
[4] A. Deveikis, Comput. Phys. Comm. 173 (2005) 186. (CPC Catalogue ID: ADWI_v1_0)

14.
We describe the implementation of the MSSM in the diagram generator FeynArts and the calculational tool FormCalc. This extension allows one to perform loop calculations of MSSM processes almost fully automatically. The implementation has two aspects: the MSSM Feynman rules are specified in a new model file for FeynArts, and the computation of the parameters in the MSSM Lagrangian from the input parameters is realized as a Fortran subroutine in the framework of FormCalc. The model file does not depend on the latter, however, and can be used even if one does not want to continue the calculation with FormCalc. The Feynman rules have been entered in a very generic way to allow, e.g., scenarios with complex parameters, and have been tested extensively by reproducing known results for several non-trivial scattering processes.

15.
S. G. Akl, Computing 36 (3) (1986) 271-277
A parallel algorithm is described for computing the minimum spanning tree of an undirected, connected and weighted graph with n vertices. We assume a shared-memory, single-instruction-stream, multiple-data-stream model of computation which does not allow read or write conflicts. The algorithm is adaptive in the sense that it uses n^(1-ε) processors and runs in O(n^(1+ε)) time, where ε lies between 0 and 1 and depends on the number of available processors. In view of the obvious Ω(n²) lower bound on the number of operations required to compute a minimum spanning tree, the algorithm is also cost-optimal.
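For context, the cost n^(1-ε) processors × O(n^(1+ε)) time = O(n²) operations matches the serial lower bound. A plain O(n²) Prim implementation on a dense weight matrix is the single-processor baseline (a sketch for comparison, not Akl's algorithm):

```python
import numpy as np

def prim_mst(W):
    """O(n^2) Prim on a dense weight matrix W (np.inf for missing edges)."""
    n = W.shape[0]
    in_tree = np.zeros(n, dtype=bool)
    best = np.full(n, np.inf)          # cheapest known edge weight into the tree
    parent = np.full(n, -1, dtype=int)
    best[0] = 0.0                      # start the tree at vertex 0
    edges = []
    for _ in range(n):
        u = np.argmin(np.where(in_tree, np.inf, best))   # cheapest outside vertex
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u, W[parent[u], u]))
        closer = ~in_tree & (W[u] < best)                # relax edges from u
        parent[closer] = u
        best[closer] = W[u][closer]
    return edges
```

Each of the n iterations scans all vertices once, giving the Θ(n²) total work against which the parallel algorithm's processor-time product is measured.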

16.
In this article, we describe a general-purpose coarse-grained molecular dynamics program, COGNAC (COarse Grained molecular dynamics program by NAgoya Cooperation). COGNAC has been developed for general molecular dynamics simulation, especially for coarse-grained polymer chain models. COGNAC can deal with general molecular models in which each molecule consists of coarse-grained atomic units connected by chemical bonds. The chemical bonds are specified by bonding potentials for the stretching, bending and twisting of the bonds, which are functions of the position coordinates of two, three and four atomic units, respectively. COGNAC can deal with both isotropic and anisotropic interactions between the non-bonded atomic units; as an example, the Gay-Berne potential is implemented, and users can add new potential functions to the list of existing ones. COGNAC can run simulations under various conditions, such as constant temperature, constant pressure, and shear or elongational deformation. Some new methods are implemented in COGNAC for modeling multiphase structures of polymer blends and block copolymers: a density-biased Monte Carlo method and a density-biased potential method can generate equilibrium chain configurations from the results of self-consistent field calculations, and staggered reflective boundary conditions can generate interfacial structures with smaller system sizes than periodic boundary conditions require.
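The two-, three- and four-body bonded terms described above have the generic shapes sketched below; the harmonic/cosine forms and the constants are illustrative placeholders, since COGNAC lets users supply their own potential functions:

```python
import numpy as np

def bond_energy(r1, r2, k=100.0, r0=1.0):
    """Harmonic stretching between two bonded units."""
    return 0.5 * k * (np.linalg.norm(r2 - r1) - r0) ** 2

def angle_energy(r1, r2, r3, k=50.0, theta0=np.pi):
    """Harmonic bending over three units, with r2 at the vertex."""
    u, v = r1 - r2, r3 - r2
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return 0.5 * k * (theta - theta0) ** 2

def torsion_energy(r1, r2, r3, r4, k=5.0, n=3, phi0=0.0):
    """Cosine twisting over four units along the r2-r3 bond."""
    b1, b2, b3 = r2 - r1, r3 - r2, r4 - r3
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)   # normals of the two planes
    cos_p = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    phi = np.arccos(np.clip(cos_p, -1.0, 1.0))
    return k * (1.0 + np.cos(n * phi - phi0))
```

A chain's bonded energy is then a sum of such terms over consecutive pairs, triplets and quadruplets of connected units.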

17.
This paper presents a deterministic parallel algorithm to solve the data path allocation problem in high-level synthesis. The algorithm is driven by a motion equation that determines the neurons' firing conditions, based on a modified Hopfield neural network model of computation. The method formulates the allocation problem as the clique partitioning problem, an NP-complete problem, and handles multicycle functional units as well as structural pipelining. The algorithm has a running time complexity of O(1) for a circuit with n operations and c shared resources. A sequential simulator was implemented on a Linux Pentium PC under X-Windows. Several benchmark examples have been implemented, and favorable design comparisons to other synthesis systems are reported.

18.
19.
We present a general-purpose parallel molecular dynamics simulation code. The code can handle NVE, NVT, and NPT ensemble molecular dynamics, Langevin dynamics, and dissipative particle dynamics. Long-range interactions are handled using the smooth particle mesh Ewald method. An implicit solvent model using the solvent-accessible surface area is also implemented. Benchmark results for molecular dynamics, Langevin dynamics, and dissipative particle dynamics are given.

Program summary

Title of program: MM_PAR
Catalogue identifier: ADXP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: any UNIX machine. The code has been tested on a Linux cluster and an IBM p690
Operating systems or monitors under which the program has been tested: Linux, AIX
Programming language used: C
Memory required to execute with typical data: ∼60 MB for a system of atoms
Has the code been vectorized or parallelized?: parallelized with MPI using atom decomposition and domain decomposition
No. of lines in distributed program, including test data, etc.: 171 427
No. of bytes in distributed program, including test data, etc.: 4 558 773
Distribution format: tar.gz
External routines/libraries used: FFTW free software (http://www.fftw.org)
Nature of physical problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic scales to mesoscopic scales.
Method of solution: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles, Langevin dynamics simulation, dissipative particle dynamics simulation.
Typical running time: The table below shows the typical run times for the four test programs.
Benchmark results; the values in parentheses are the number of processors used.

System                          Method   Timing for 100 steps (seconds)
256 TIP3P                       MD       23.8 (1)
64 DMPC + 1645 TIP3P            MD       890 (1), 528 (2), 326 (4), 209 (8)
8 Aβ16-22                       LD       1.02 (1)
23760 Groot-Warren particles    DPD      22.16 (1)

20.