Similar Literature
20 similar documents retrieved.
1.
We present a patch code for LAMMPS that implements a coarse-grained (CG) model of poly(vinyl alcohol) (PVA). LAMMPS is a powerful molecular dynamics (MD) simulator developed at Sandia National Laboratories. Our patch code implements a tabulated angular potential and a Lennard-Jones 9-6 (LJ96) style interaction for PVA. Benefiting from the excellent parallel efficiency of LAMMPS, the patch is suitable for large-scale simulations.

This CG-PVA code is used to study polymer crystallization, a long-standing unsolved problem in polymer physics. By using parallel computing, cooling and heating processes for long chains are simulated. The results show that chain-folded structures resembling the lamellae of polymer crystals form during cooling. The evolution of the static structure factor during the crystallization transition indicates that long-range density order appears before local crystalline packing, consistent with some experimental observations by small/wide angle X-ray scattering (SAXS/WAXS). During heating, the crystalline regions continue to grow until they are fully melted, as confirmed by the evolution of both the static structure factor and the average stem length formed by the chains. This two-stage behavior indicates that the melting of polymer crystals is far from thermodynamic equilibrium. Our results concur with various experiments. This is the first time that such growth/reorganization behavior has been clearly observed in MD simulations.

Our code can easily be used to model other types of polymers by providing a file containing the tabulated angle potential data and a set of appropriate parameters.

Program summary

Program title: lammps-cgpva
Catalogue identifier: AEDE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL
No. of lines in distributed program, including test data, etc.: 940 798
No. of bytes in distributed program, including test data, etc.: 12 536 245
Distribution format: tar.gz
Programming language: C++/MPI
Computer: Tested on Intel x86 and AMD64 architectures; should run on any architecture providing a C++ compiler
Operating system: Tested under Linux; any other OS with a C++ compiler and an MPI library should suffice
Has the code been vectorized or parallelized?: Yes
RAM: Depends on system size and how many CPUs are used
Classification: 7.7
External routines: LAMMPS (http://lammps.sandia.gov/), FFTW (http://www.fftw.org/)
Nature of problem: Implementing special tabulated angle potentials and Lennard-Jones 9-6 style interactions of a coarse-grained polymer model for the LAMMPS code.
Solution method: Cubic spline interpolation of the input tabulated angle potential data.
Restrictions: The code is based on a former version of LAMMPS.
Unusual features: Any special angular potential can be used if it can be tabulated.
Running time: Seconds to weeks, depending on system size, CPU speed and how many CPUs are used. The test run provided with the package takes about 5 minutes on four AMD Opteron (2.6 GHz) CPUs.
References:
[1] D. Reith, H. Meyer, F. Müller-Plathe, Macromolecules 34 (2001) 2335-2345.
[2] H. Meyer, F. Müller-Plathe, J. Chem. Phys. 115 (2001) 7807.
[3] H. Meyer, F. Müller-Plathe, Macromolecules 35 (2002) 1241-1252.
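
The solution method above (cubic spline interpolation of a tabulated angle potential) can be illustrated with a short, hedged Python sketch; it assumes NumPy and SciPy are available, and the tabulated values are placeholders rather than the actual CG-PVA potential.

# Sketch: evaluate a tabulated bending-angle potential by cubic-spline
# interpolation, returning the energy and the generalized force -dE/dtheta.
import numpy as np
from scipy.interpolate import CubicSpline

theta_deg = np.linspace(60.0, 180.0, 25)        # tabulated angles (degrees)
energy = 0.01 * (theta_deg - 120.0) ** 2        # placeholder energies

spline = CubicSpline(theta_deg, energy)         # E(theta)
dspline = spline.derivative()                   # dE/dtheta

def angle_energy_and_force(theta):
    """Interpolated energy and generalized force at angle theta (degrees)."""
    return float(spline(theta)), float(-dspline(theta))

print(angle_energy_and_force(130.0))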

2.
The three-dimensional Mercedes-Benz model was recently introduced to account for the structural and thermodynamic properties of water. It treats water molecules as point-like particles with four dangling bonds in tetrahedral coordination, representing the H-bonds of water. Its conceptual simplicity makes the model attractive in studies where complex behaviors emerge from H-bond interactions in water, e.g., the hydrophobic effect. A molecular dynamics (MD) implementation of the model is non-trivial, and we outline here the mathematical framework of its force field. Useful routines written in modern Fortran are also provided. This open-source code is free and can easily be modified to account for different physical contexts. The provided code allows both serial and MPI-parallelized execution.

Program summary

Program title: CASHEW (Coarse Approach Simulator for Hydrogen-bonding Effects in Water)
Catalogue identifier: AEKM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 20 501
No. of bytes in distributed program, including test data, etc.: 551 044
Distribution format: tar.gz
Programming language: Fortran 90
Computer: The program has been tested on desktop workstations and a Cray XT4/XT5 supercomputer.
Operating system: Linux, Unix, OS X
Has the code been vectorized or parallelized?: The code has been parallelized using MPI.
RAM: Depends on the size of the system, about 5 MB for 1500 molecules.
Classification: 7.7
External routines: A random number generator, Mersenne Twister (http://www.math.sci.hiroshima-u.ac.jp/m-mat/MT/VERSIONS/FORTRAN/mt95.f90), is used. A copy of the code is included in the distribution.
Nature of problem: Molecular dynamics simulation of a new geometric water model.
Solution method: New force field for water molecules, velocity-Verlet integration, representation of molecules as rigid particles with rotations described using quaternion algebra.
Restrictions: Memory and CPU time limit the size of simulations.
Additional comments: Software web site: https://gitorious.org/cashew/.
Running time: Depends on the size of the system. The sample tests provided take only a few seconds.
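
A minimal sketch of the rigid-body orientation update named in the summary ("rotations described using quaternion algebra"); this is a generic first-order quaternion step written in Python for illustration, not code taken from CASHEW.

import numpy as np

def quat_mul(a, b):
    """Hamilton product a*b of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def advance_orientation(q, omega_world, dt):
    """Integrate dq/dt = 0.5*(0, omega)*q for one step and renormalize."""
    omega_quat = np.array([0.0, *omega_world])
    q_new = q + 0.5 * dt * quat_mul(omega_quat, q)
    return q_new / np.linalg.norm(q_new)

q = np.array([1.0, 0.0, 0.0, 0.0])              # identity orientation
print(advance_orientation(q, np.array([0.0, 0.0, 1.0]), 1e-3))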

3.
The Green's function molecular dynamics (GFMD) method, which enables one to study the elastic response of a three-dimensional solid to an external stress field while taking into consideration only the surface atoms, was implemented as an extension to the open-source classical molecular dynamics simulation code LAMMPS. This was done in the style of fixes. The first fix, FixGFC, measures the elastic stiffness coefficients for a (small) solid block of a given material by making use of the fluctuation-dissipation theorem. With the help of the second fix, FixGFMD, the coefficients obtained from FixGFC can then be used to compute the elastic forces for a (large) block of the same material. Both fixes are designed to run in parallel and to exploit the functions provided by LAMMPS.

Program summary

Program title: FixGFC/FixGFMD
Catalogue identifier: AECW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: yes
No. of lines in distributed program, including test data, etc.: 33 469
No. of bytes in distributed program, including test data, etc.: 1 383 631
Distribution format: tar.gz
Programming language: C++
Computer: All
Operating system: Linux
Has the code been vectorized or parallelized?: Parallelized via MPI
RAM: Depends on the problem
Classification: 7.7
External routines: MPI, FFTW 2.1.5 (http://www.fftw.org/), LAMMPS version May 21, 2008 (http://lammps.sandia.gov/)
Nature of problem: Using molecular dynamics to study elastically deforming solids imposes very high computational costs, because portions of the solid far away from the interface or contact points need to be included in the simulation to reproduce the effects of long-range elastic deformation. Green's function molecular dynamics (GFMD) incorporates the full elastic response of semi-infinite solids, so that only surface atoms have to be considered in molecular dynamics simulations, thus reducing the problem from three dimensions to two without compromising its physical essence.
Solution method: See "Nature of problem".
Restrictions: The mean equilibrium positions of the GFMD surface atoms must lie in a plane and be periodic in that plane, so that the Born-von Karman boundary condition can be used. In addition, only deformation within the harmonic regime is expected in the surface layer during Green's function molecular dynamics.
Running time: FixGFC varies from minutes to days, depending on the system size, the number of processors used, and the complexity of the force field. FixGFMD varies from seconds to days, depending on the system size and the number of processors used.
References: [1] C. Campañá, M.H. Müser, Phys. Rev. B 74 (2006) 075420.
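
A hedged sketch of the fluctuation-dissipation idea behind FixGFC: in reciprocal space the effective stiffness follows from the covariance of the thermal surface displacements, Phi(q) = kB*T*<u(q) u(q)*>^(-1). The Python below uses synthetic data and illustrative array shapes only.

import numpy as np

def stiffness_from_fluctuations(u_q, kB_T):
    """u_q: complex array (n_snapshots, n_dof) of Fourier-transformed
    displacements at one wavevector q; returns the (n_dof, n_dof)
    effective stiffness matrix kB*T * <u u*>^(-1)."""
    u_q = u_q - u_q.mean(axis=0)                 # remove the mean displacement
    G = (u_q.conj().T @ u_q) / u_q.shape[0]      # covariance <u u*>
    return kB_T * np.linalg.inv(G)

rng = np.random.default_rng(0)
samples = rng.normal(size=(5000, 3)) + 1j * rng.normal(size=(5000, 3))
print(stiffness_from_fluctuations(samples, kB_T=0.025))   # arbitrary units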

4.
We present a driver program for performing replica-exchange molecular dynamics simulations with the Tinker package. Parallelization is based on the Message Passing Interface, with every replica assigned to a separate process. The algorithm is not communication intensive, which makes the program suitable for running even on loosely coupled cluster systems. Particular attention is paid to the practical aspects of analyzing the program output.

Program summary

Program title: TiReX
Catalogue identifier: AEEK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 43 385
No. of bytes in distributed program, including test data, etc.: 502 262
Distribution format: tar.gz
Programming language: Fortran 90/95
Computer: Most UNIX machines
Operating system: Linux
Has the code been vectorized or parallelized?: Parallelized with MPI
Classification: 16.13
External routines: TINKER version 4.2 or 5.0, built as a library
Nature of problem: Replica-exchange molecular dynamics.
Solution method: Each replica is assigned to a separate process; temperatures are swapped between replicas at regular time intervals.
Running time: The sample run may take up to a few minutes.
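
A sketch of the standard Metropolis criterion applied when temperatures are swapped between neighboring replicas; TiReX's actual bookkeeping and unit conventions may differ, and the energies used here are invented.

import math, random

def try_swap(E_i, E_j, T_i, T_j, kB=0.0019872041):   # kB in kcal/(mol K)
    """Accept or reject a temperature swap between replicas i and j
    with potential energies E_i, E_j at temperatures T_i, T_j."""
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
    return delta >= 0.0 or random.random() < math.exp(delta)

random.seed(1)
print(try_swap(E_i=-1200.0, E_j=-1185.0, T_i=300.0, T_j=310.0))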

5.
6.
We present a suite of Mathematica-based computer-algebra packages, termed "Kranc", which comprises a toolbox to convert certain (tensorial) systems of partial differential evolution equations to parallelized C or Fortran code for solving initial boundary value problems. Kranc can be used as a "rapid prototyping" system for physicists or mathematicians handling very complicated systems of partial differential equations, but through integration into the Cactus computational toolkit we can also produce efficient parallelized production codes. Our work is motivated by the field of numerical relativity, where Kranc is used as a research tool by the authors. In this paper we describe the design and implementation of both the Mathematica packages and the resulting code, discuss some example applications, and provide results on the performance of an example numerical code for the Einstein equations.

Program summary

Title of program: Kranc
Catalogue identifier: ADXS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Distribution format: tar.gz
Computer for which the program is designed and others on which it has been tested: General computers which run Mathematica (for code generation) and Cactus (for numerical simulations); tested under Linux
Programming language used: Mathematica, C, Fortran 90
Memory required to execute with typical data: This depends on the number of variables and the grid size; the included ADM example requires 4308 KB
Has the code been vectorized or parallelized?: The code is parallelized based on the Cactus framework.
Number of bytes in distributed program, including test data, etc.: 1 578 142
Number of lines in distributed program, including test data, etc.: 11 711
Nature of physical problem: Solution of partial differential equations in three space dimensions, formulated as an initial value problem. In particular, the program is geared towards handling very complex tensorial equations as they appear, e.g., in numerical relativity. The worked-out examples comprise the Klein-Gordon equations, the Maxwell equations, and the ADM formulation of the Einstein equations.
Method of solution: The method of numerical solution is finite differencing with method-of-lines time integration; the numerical code is generated through a high-level Mathematica interface.
Restrictions on the complexity of the program: Typical numerical relativity applications will contain up to several dozen evolution variables and thousands of source terms; Cactus applications have shown scaling up to several thousand processors and grid sizes exceeding 500^3.
Typical running time: This depends on the number of variables and the grid size; the included ADM example takes approximately 100 seconds on a 1600 MHz Intel Pentium M processor.
Unusual features of the program: Based on Mathematica and Cactus
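
A toy illustration of the solution strategy named above (finite differencing in space with method-of-lines time integration), applied in Python to a 1-D advection equation rather than to the tensorial systems Kranc targets.

import numpy as np

N, L, c = 200, 1.0, 1.0
dx = L / N
x = np.arange(N) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)             # initial Gaussian pulse

def rhs(u):
    """Semi-discrete right-hand side of u_t = -c u_x (centered, periodic)."""
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def rk4_step(u, dt):
    """One classical Runge-Kutta step of the method-of-lines system."""
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.4 * dx / c
for _ in range(int(0.5 / dt)):
    u = rk4_step(u, dt)
print(u.max())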

7.
Massively parallel computers now permit the molecular dynamics (MD) simulation of multi-million-atom systems on time scales up to the microsecond. However, the subsequent analysis of the resulting simulation trajectories has become a high-performance computing problem in itself. Here, we present software for calculating X-ray and neutron scattering intensities from MD simulation data that scales well on massively parallel supercomputers. The calculation and data staging schemes used maximize the degree of parallelism and minimize the IO bandwidth requirements. The strong scaling tested on the Jaguar petaflop Cray XT5 at Oak Ridge National Laboratory exhibits virtually linear scaling up to 7000 cores for most benchmark systems. Since both MPI and thread parallelism are supported, the software is flexible enough to cover scaling demands for different types of scattering calculations. The result is a high-performance tool capable of unifying large-scale supercomputing and a wide variety of neutron/synchrotron technology.

Program summary

Program title: Sassena
Catalogue identifier: AELW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 1 003 742
No. of bytes in distributed program, including test data, etc.: 798
Distribution format: tar.gz
Programming language: C++, OpenMPI
Computer: Distributed memory; clusters of computers with a high-performance network; supercomputers
Operating system: UNIX, LINUX, OSX
Has the code been vectorized or parallelized?: Yes, the code has been parallelized using MPI directives. Tested with up to 7000 processors.
RAM: Up to 1 Gbyte/core
Classification: 6.5, 8
External routines: Boost Library, FFTW3, CMAKE, GNU C++ Compiler, OpenMPI, LibXML, LAPACK
Nature of problem: Recent developments in supercomputing allow molecular dynamics simulations to generate large trajectories spanning millions of frames and thousands of atoms. The structural and dynamical analysis of these trajectories requires analysis algorithms which use parallel computation and IO schemes to solve the computational task in a practical amount of time. The particular computational and IO requirements depend very much on the particular analysis algorithm. In scattering calculations, a very frequent pattern is that the trajectory data is used multiple times to compute different projections, which are then aggregated into a single scattering function. Thus, for good performance the trajectory data has to be kept in memory, and the parallel computer has to have enough RAM to store a volatile version of the whole trajectory. In order to achieve high performance and good scalability, the mapping of the physical equations to a parallel computer needs to consider data locality and reduce the amount of inter-node communication.
Solution method: The physical equations for scattering calculations were analyzed and two major calculation schemes were developed to support any type of scattering calculation (all/self). Certain hardware aspects were taken into account, e.g. high-performance computing clusters and supercomputers usually feature a two-tier network system, with Ethernet providing the file storage and InfiniBand the inter-node communication via MPI calls. The time spent loading the trajectory data into memory is minimized by letting each core read only the trajectory data it requires. The performance of inter-node communication is maximized by exclusively utilizing the appropriate MPI calls to exchange the necessary data, resulting in excellent scalability. The partitioning scheme developed to map the calculation onto a parallel computer covers a wide variety of use cases without negatively affecting the achieved performance. This is done through a 2D partitioning scheme in which independent scattering vectors are assigned to independent parallel partitions and all communication is local to the partition.
Additional comments: The distribution file for this program is approximately 36 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead, an html file giving details of how the program can be obtained is sent.
Running time: Usual runtime spans from 1 min on 20 nodes to 2 h on 2000 nodes, i.e. 0.5-4000 CPU hours per execution.
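
A hedged sketch of the core quantity such scattering codes evaluate, the frame-averaged coherent intensity I(q) = <|sum_j b_j exp(i q.r_j)|^2>; the coordinates and scattering lengths below are placeholders, and the code is not taken from Sassena.

import numpy as np

def coherent_intensity(frames, b, q):
    """frames: (n_frames, n_atoms, 3) coordinates; b: (n_atoms,) scattering
    lengths; q: (3,) scattering vector.  Returns the frame-averaged I(q)."""
    phases = np.exp(1j * (frames @ q))           # per-atom phase factors
    amplitude = phases @ b                       # coherent sum for each frame
    return float(np.mean(np.abs(amplitude) ** 2))

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 30.0, size=(10, 500, 3))   # fake 10-frame trajectory
b = np.ones(500)                                     # unit scattering lengths
print(coherent_intensity(coords, b, q=np.array([0.5, 0.0, 0.0])))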

8.
The purpose of this paper is (i) to present a generic and fully functional implementation of the density-matrix renormalization group (DMRG) algorithm, and (ii) to describe how to write additional strongly-correlated electron models and geometries by using templated classes. Besides considering general models and geometries, the code implements Hamiltonian symmetries in a generic way and parallelization over symmetry-related matrix blocks.

Program summary

Program title: DMRG++
Catalogue identifier: AEDJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: See file LICENSE
No. of lines in distributed program, including test data, etc.: 15 795
No. of bytes in distributed program, including test data, etc.: 83 454
Distribution format: tar.gz
Programming language: C++, MPI
Computer: PC, HP cluster
Operating system: Any, tested on Linux
Has the code been vectorized or parallelized?: Yes
RAM: 1 GB (256 MB is enough to run the included test)
Classification: 23
External routines: BLAS and LAPACK
Nature of problem: Strongly correlated electron systems display a broad range of important phenomena, and their study is a major area of research in condensed matter physics. In this context, model Hamiltonians are used to simulate the relevant interactions of a given compound and the relevant degrees of freedom. These studies rely on the use of tight-binding lattice models that consider electron localization, where states on one site can be labeled by spin and orbital degrees of freedom. The calculation of properties from these Hamiltonians is a computationally intensive problem, since the Hilbert space over which these Hamiltonians act grows exponentially with the number of sites on the lattice.
Solution method: The DMRG is a numerical variational technique to study quantum many-body Hamiltonians. For one-dimensional and quasi-one-dimensional systems, the DMRG is able to truncate, with bounded errors and in a general and efficient way, the underlying Hilbert space to a constant size, making the problem tractable.
Running time: The test program runs in 15 seconds.
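
A minimal sketch of the truncation at the heart of the DMRG, as described under "Solution method": keep the m eigenvectors of the reduced density matrix with the largest weights. DMRG++ works block by block and exploits symmetries; the Python below only illustrates the idea.

import numpy as np

def truncate_basis(psi, m):
    """psi: (dim_system, dim_env) normalized wavefunction coefficients.
    Returns (projector onto the kept states, truncation error)."""
    # The left singular vectors of psi are the eigenvectors of
    # rho_S = psi psi^dagger, with eigenvalues s**2.
    U, s, _ = np.linalg.svd(psi, full_matrices=False)
    weights = s ** 2
    keep = np.argsort(weights)[::-1][:m]
    return U[:, keep], float(1.0 - weights[keep].sum())

rng = np.random.default_rng(0)
psi = rng.normal(size=(16, 16))
psi /= np.linalg.norm(psi)
P, err = truncate_basis(psi, m=8)
print(P.shape, err)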

9.
We describe the Breit–Pauli distorted wave (BPDW) approach for the electron-impact excitation of atomic ions that we have implemented within the autostructure code.

Program summary

Program title: autostructure
Catalogue identifier: AEIV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 130 987
No. of bytes in distributed program, including test data, etc.: 1 031 584
Distribution format: tar.gz
Programming language: Fortran 77/95
Computer: General
Operating system: Unix
Has the code been vectorized or parallelized?: Yes, a parallel version, with MPI directives, is included in the distribution.
RAM: From several kbytes to several Gbytes
Classification: 2, 2.4
Nature of problem: Collision strengths for the electron-impact excitation of atomic ions are calculated using a Breit–Pauli distorted wave approach with the optional inclusion of two-body non-fine-structure and fine-structure interactions.
Solution method: General multi-configuration Breit–Pauli atomic structure. A jK-coupling partial wave expansion of the collision problem. Slater state angular algebra. Various model potential non-relativistic or kappa-averaged relativistic radial orbital solutions; the continuum distorted wave orbitals are not required to be orthogonal to the bound ones.
Additional comments: Documentation is provided in the distribution file along with the test case.
Running time: From a few seconds to a few hours.

10.
11.
12.
We present a general-purpose parallel molecular dynamics simulation code. The code can handle molecular dynamics in the NVE, NVT, and NPT ensembles, Langevin dynamics, and dissipative particle dynamics. Long-range interactions are handled using the smooth particle mesh Ewald method. An implicit solvent model using the solvent-accessible surface area has also been implemented. Benchmark results for molecular dynamics, Langevin dynamics, and dissipative particle dynamics are given.

Program summary

Title of program: MM_PAR
Catalogue identifier: ADXP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: Any UNIX machine. The code has been tested on a Linux cluster and an IBM p690
Operating systems or monitors under which the program has been tested: Linux, AIX
Programming language used: C
Memory required to execute with typical data: ~60 MB for a system of atoms
Has the code been vectorized or parallelized?: Parallelized with MPI using atom decomposition and domain decomposition
No. of lines in distributed program, including test data, etc.: 171 427
No. of bytes in distributed program, including test data, etc.: 4 558 773
Distribution format: tar.gz
External routines/libraries used: FFTW free software (http://www.fftw.org)
Nature of physical problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic to mesoscopic scales.
Method of solution: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles, Langevin dynamics simulation, dissipative particle dynamics simulation.
Typical running time: The table below shows typical run times for the four test programs.

Benchmark results (the values in parentheses are the number of processors used):

System                          Method   Timing for 100 steps (seconds)
256 TIP3P                       MD       23.8 (1)
64 DMPC + 1645 TIP3P            MD       890 (1), 528 (2), 326 (4), 209 (8)
8 Aβ16-22                       LD       1.02 (1)
23760 Groot-Warren particles    DPD      22.16 (1)
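
A hedged sketch of a Langevin-dynamics step of the kind listed among the simulation methods above, using a simple Euler-Maruyama discretization for one particle in a harmonic well; MM_PAR's actual integrators and force fields are more general.

import numpy as np

def langevin_step(x, v, dt, mass, gamma, kB_T, force, rng):
    """One Euler-Maruyama step of m dv = F dt - m*gamma*v dt + random force."""
    noise = np.sqrt(2.0 * gamma * kB_T * dt / mass) * rng.normal(size=x.shape)
    v = v + dt * (force(x) / mass - gamma * v) + noise
    x = x + dt * v
    return x, v

rng = np.random.default_rng(0)
x, v = np.array([1.0]), np.array([0.0])
spring = lambda x: -x                            # harmonic restoring force, k = 1
for _ in range(1000):
    x, v = langevin_step(x, v, 1e-3, 1.0, 1.0, 0.5, spring, rng)
print(x, v)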

13.
A program that uses the time-dependent wavepacket method to study the motion of structureless particles in a force field of quasi-cylindrical symmetry is presented here. The program utilises cylindrical polar coordinates to express the wavepacket, which is subsequently propagated using a Chebyshev expansion of the Schrödinger propagator. Time-dependent exit flux as well as energy-dependent S matrix elements can be obtained for all states of the particle (describing its angular momentum component along the nanotube axis and the excitation of the radial degree of freedom in the cylinder). The program has been used to study the motion of an H atom across a carbon nanotube.

Program summary

Program title: CYLWAVE
Catalogue identifier: AECL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3673
No. of bytes in distributed program, including test data, etc.: 35 237
Distribution format: tar.gz
Programming language: Fortran 77
Computer: RISC workstations
Operating system: UNIX
RAM: 120 MBytes
Classification: 16.7, 16.10
External routines: SUNSOFT performance library (not essential), TFFT2D.F (Temperton Fast Fourier Transform), BESSJ.F (from Numerical Recipes, for the calculation of Bessel functions) (included in the distribution file).
Nature of problem: Time evolution of the state of a structureless particle in a quasi-cylindrical potential.
Solution method: Time-dependent wavepacket propagation.
Running time: 50000 secs. The test run supplied with the distribution takes about 10 minutes to complete.
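
A sketch of Chebyshev propagation, the scheme named in the abstract, written in Python with a small dense Hamiltonian standing in for the cylindrical-grid Hamiltonian; the expansion coefficients involve Bessel functions, and hbar is set to 1.

import numpy as np
from scipy.special import jv

def chebyshev_propagate(H, psi, t, n_terms=64):
    """Apply exp(-i H t) to psi via a Chebyshev expansion of the propagator."""
    evals = np.linalg.eigvalsh(H)
    e_min, e_max = evals[0], evals[-1]
    half_span, center = (e_max - e_min) / 2.0, (e_max + e_min) / 2.0
    Hn = (H - center * np.eye(len(H))) / half_span   # spectrum mapped to [-1, 1]
    alpha = half_span * t
    phi_prev, phi = psi, Hn @ psi                    # T_0 psi and T_1 psi
    result = jv(0, alpha) * phi_prev + 2.0 * (-1j) * jv(1, alpha) * phi
    for n in range(2, n_terms):
        phi_prev, phi = phi, 2.0 * Hn @ phi - phi_prev   # Chebyshev recurrence
        result += 2.0 * (-1j) ** n * jv(n, alpha) * phi
    return np.exp(-1j * center * t) * result

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2.0                                  # toy Hermitian Hamiltonian
psi = np.zeros(6, complex); psi[0] = 1.0
print(np.vdot(psi, chebyshev_propagate(H, psi, t=2.0)))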

14.
We present a cross-language C++/Python program for simulations of quantum mechanical systems using Quantum Monte Carlo (QMC) methods. We describe a system to which QMC is applied, the variational Monte Carlo and diffusion Monte Carlo algorithms, and how to implement these methods in pure C++ and in mixed C++/Python. Furthermore, we compare the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.

Program summary

Program title: MontePython
Catalogue identifier: ADZP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 49 519
No. of bytes in distributed program, including test data, etc.: 114 484
Distribution format: tar.gz
Programming language: C++, Python
Computer: PC, IBM RS6000/320, HP, ALPHA
Operating system: LINUX
Has the code been vectorized or parallelized?: Yes, parallelized with MPI
Number of processors used: 1-96
RAM: Depends on the physical system to be simulated
Classification: 7.6, 16.1
Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb.
Solution method: Quantum Monte Carlo.
Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on one AMD Opteron 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
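
A toy variational Monte Carlo example in the spirit of the methods described above, for a 1-D harmonic oscillator with trial wavefunction exp(-alpha*x^2); MontePython itself targets interacting bosons in traps, so this only illustrates the Metropolis/local-energy machinery.

import numpy as np

def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=0):
    """Metropolis sampling of |psi|^2 and averaging of the local energy."""
    rng = np.random.default_rng(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # acceptance ratio |psi(x_new)/psi(x)|^2 for psi = exp(-alpha*x^2)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        # local energy for H = -0.5 d^2/dx^2 + 0.5 x^2
        e_sum += alpha + x**2 * (0.5 - 2.0 * alpha**2)
    return e_sum / n_steps

print(vmc_energy(0.5))        # alpha = 0.5 is the exact ground state: E -> 0.5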

15.
Las Palmeras Molecular Dynamics (LPMD) is a highly modular and extensible molecular dynamics (MD) code using interatomic potential functions. LPMD is able to perform equilibrium MD simulations of bulk crystalline solids, amorphous solids and liquids, as well as non-equilibrium MD (NEMD) simulations such as shock wave propagation, projectile impacts, cluster collisions, shearing, deformation under load, heat conduction and heterogeneous melting, among others, which involve unusual MD features like non-moving atoms and walls, atoms moving at constant velocity, and external forces such as electric fields. LPMD is written in C++ as a compromise between efficiency and clarity of design, and its architecture is based on separate components, or plug-ins, implemented as modules which are loaded on demand at runtime. The advantage of this architecture is the ability to link together the desired components of a simulation in different ways at runtime, using a user-friendly control-file language which describes the simulation workflow.

As an added bonus, the plug-in API (Application Programming Interface) makes it possible to use the LPMD components to analyze data coming from other simulation packages, convert between input file formats, apply different transformations to saved MD atomic trajectories, and visualize dynamical processes either in real time or as a post-processing step.

Individual components, such as a new potential function, a new integrator, a new file format, new properties to calculate, new real-time visualizers, or even a new algorithm for handling neighbor lists, can easily be coded, compiled and tested within LPMD by virtue of its object-oriented API, without the need to modify the rest of the code.

LPMD already includes several pair potential functions such as Lennard-Jones, Morse, Buckingham, MCY and the harmonic potential, as well as embedded-atom model (EAM) functions such as the Sutton-Chen and Gupta potentials. Available integrators include Euler (for demonstration purposes only), Verlet, velocity Verlet, leapfrog and Beeman, among others. Electrostatic forces are treated as another potential function, by default using the plug-in implementing the Ewald summation method.

Program summary

Program title: LPMD
Catalogue identifier: AEHG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 509 490
No. of bytes in distributed program, including test data, etc.: 6 814 754
Distribution format: tar.gz
Programming language: C++
Computer: 32-bit and 64-bit workstations
Operating system: UNIX
RAM: Minimum 1024 bytes
Classification: 7.7
External routines: zlib, OpenGL
Nature of problem: Study of the statistical mechanics and thermodynamics of condensed matter systems, as well as the kinetics of non-equilibrium processes in the same systems.
Solution method: Equilibrium and non-equilibrium molecular dynamics methods, Monte Carlo methods.
Restrictions: Rigid molecules are not supported; neither are polarizable atoms or chemical bonds (proteins).
Unusual features: The program is able to change the temperature of the simulation cell, change the pressure, cut regions of the cell and color the atoms by properties, even during the simulation. It is also possible to fix the positions and/or velocities of groups of atoms. Atoms and some physical properties can be visualized during the simulation.
Additional comments: The program not only performs molecular dynamics and Monte Carlo simulations; it can also filter and manipulate atomic configurations, read and write different file formats, convert between them, and evaluate different structural and dynamical properties.
Running time: 50 seconds for a 1000-step simulation of 4000 argon atoms, running on a single 2.67 GHz Intel processor.
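
A hedged sketch of the plug-in idea behind LPMD's architecture, with pair potentials registering under a name and being instantiated from a control-file-like keyword; the registry API and parameter names are invented for illustration, and LPMD's real plug-ins are C++ shared objects.

import math

plugin_registry = {}

def register(name):
    """Class decorator that records a component under a control-file keyword."""
    def wrap(cls):
        plugin_registry[name] = cls
        return cls
    return wrap

@register("lennardjones")
class LennardJones:
    def __init__(self, epsilon=1.0, sigma=1.0):
        self.epsilon, self.sigma = epsilon, sigma
    def energy(self, r):
        sr6 = (self.sigma / r) ** 6
        return 4.0 * self.epsilon * (sr6 * sr6 - sr6)

@register("morse")
class Morse:
    def __init__(self, D=1.0, a=1.0, r0=1.0):
        self.D, self.a, self.r0 = D, a, r0
    def energy(self, r):
        return self.D * (1.0 - math.exp(-self.a * (r - self.r0))) ** 2 - self.D

def load_potential(keyword, **params):
    """Mimic 'use <plugin>' in a control file: select a component by name."""
    return plugin_registry[keyword](**params)

pot = load_potential("lennardjones", epsilon=0.0103, sigma=3.4)   # argon-like
print(pot.energy(3.8))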

16.
A method to measure the phonon dispersion of a crystal based on molecular dynamics simulation is proposed and implemented as an extension to the open-source classical molecular dynamics simulation code LAMMPS. In the proposed method, the dynamical matrix is constructed by observing the displacements of atoms during the molecular dynamics simulation, making use of the fluctuation-dissipation theory. The dynamical matrix can then be employed to compute the phonon spectra by evaluating its eigenvalues. The proposed method is found to yield the phonon dispersion accurately while simultaneously taking into account the anharmonic effect on phonons. The implementation is done in the style of a LAMMPS fix, designed to run in parallel and to exploit the functions provided by LAMMPS; the measured dynamical matrices can be passed to an auxiliary postprocessing code to evaluate the phonons.

Program summary

Program title: FixPhonon, version 1.0
Catalogue identifier: AEJB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public license
No. of lines in distributed program, including test data, etc.: 105 393
No. of bytes in distributed program, including test data, etc.: 3 231 800
Distribution format: tar.gz
Programming language: C++
Computer: All
Operating system: Linux
Has the code been vectorized or parallelized?: Yes. 1 to N processors may be used
RAM: Depends on the problem, ≈1 kB to several MB
Classification: 7.8
External routines: MPI, FFT, LAMMPS version 15 January 2010 (http://lammps.sandia.gov/)
Nature of problem: Atoms in solids make ceaseless vibrations about their equilibrium positions, and a collective vibration forms a wave of allowed wavelength and amplitude. The quantum of such lattice vibration is called the phonon, and "lattice dynamics" is the field of study that finds the normal modes of these vibrations. In other words, lattice dynamics examines the relationship between the frequencies of phonons and their wave vectors, i.e., the phonon dispersion. The evaluation of the phonon dispersion requires the construction of the dynamical matrix. In atomic-scale modeling, dynamical matrices are usually constructed from the derivatives of the force field employed, which cannot account for the effect of temperature on phonons, with the exception of the tedious "quasi-harmonic" procedure.
Solution method: We propose a method to construct the dynamical matrix directly from molecular dynamics simulations, simply by observing the displacements of the atoms in the system, thus making the construction of the dynamical matrix a straightforward task. Moreover, the anharmonic effect is taken into account naturally in the molecular dynamics simulations; the resultant phonons therefore also reflect the finite-temperature effect.
Restrictions: A well-defined lattice is necessary to employ the proposed method, as well as the implemented code, to evaluate the phonon dispersion. In other words, the system under study should be in the solid state, where atoms vibrate about their equilibrium positions. Besides, no drifting of the lattice is expected. The method is best suited for periodic systems; non-periodic systems are also possible with a supercell approach, although this becomes inefficient when the unit cell contains too many atoms.
Additional comments: Readers are encouraged to visit http://code.google.com/p/fix-phonon for subsequent updates of the code as well as the associated postprocessing code, so as to keep up with the latest version of LAMMPS.
Running time: Running time depends on the system size, the number of processors used, and the complexity of the force field, as in a typical molecular dynamics simulation. For the third example shown in this paper, it took about 2.5 hours on an Intel Xeon X3220 (2.4 GHz, quad-core).
References:
[1] C. Campañá, M.H. Müser, Phys. Rev. B 74 (2006) 075420.
[2] L.T. Kong, G. Bartels, C. Campañá, C. Denniston, M.H. Müser, Comput. Phys. Commun. 180 (2009) 1004-1010.
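
A hedged sketch of how phonon frequencies follow from the recorded displacement fluctuations: Phi(q) = kB*T*<u(q) u(q)*>^(-1), and the frequencies at q are sqrt(eig(Phi)/m); the array shapes and synthetic data below are illustrative only.

import numpy as np

def phonon_frequencies(u_q, mass, kB_T):
    """u_q: (n_samples, n_branches) complex Fourier amplitudes of the atomic
    displacements at one wavevector; returns frequencies in rad/time."""
    u_q = u_q - u_q.mean(axis=0)
    G = (u_q.conj().T @ u_q) / u_q.shape[0]      # Green's function <u u*>
    Phi = kB_T * np.linalg.inv(G)                # dynamical-matrix block at q
    omega2 = np.linalg.eigvalsh(Phi) / mass
    return np.sqrt(np.clip(omega2, 0.0, None))

rng = np.random.default_rng(1)
samples = 0.05 * (rng.normal(size=(4000, 3)) + 1j * rng.normal(size=(4000, 3)))
print(phonon_frequencies(samples, mass=1.0, kB_T=0.025))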

17.
18.
The MDVRY classical molecular dynamics package is presented for the study of biomolecules in the gas and liquid phases. Electrostatic polarization is implemented in the formalism of induced point dipoles following the model of Thole. Two schemes have been implemented for the calculation of the induced dipoles: resolution of the self-consistent equations, and a ‘Car-Parrinello’ dynamical approach. In the latter, the induced dipoles are calculated at each time step through the dynamics of additional degrees of freedom associated with the dipoles. This method saves computer time and allows polarized solvated proteins to be studied at very low CPU cost. The program is written in C and runs on Linux machines. A detailed manual of the code is given. The main features of the package are illustrated with examples of proteins in the gas phase and immersed in liquid water.

Program summary

Program title: MDVRY
Catalogue identifier: AEBY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 39 156
No. of bytes in distributed program, including test data, etc.: 277 197
Distribution format: tar.bz2
Programming language: C
Computer: Linux machines with the FFTW Fourier transform package installed
Operating system: Linux machines, SUSE & RedHat distributions
Classification: 3, 16.13, 23
External routines: FFTW (http://www.fftw.org/)
Nature of problem: Molecular dynamics software package.
Solution method: Velocity Verlet algorithm. The implemented force field is composed of intra-molecular interactions and inter-molecular interactions (electrostatics, polarization, van der Waals). Polarization is accounted for through induced point dipoles at each atomic site. Supplementary degrees of freedom are associated with the induced dipoles, so that a modified Hamiltonian of the dynamics is written. This allows the induced dipoles to be calculated with a very fast ‘Car-Parrinello’ type of dynamics.
Running time: The test run provided takes approximately 6 minutes to run.
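
A minimal sketch of the self-consistent induced-dipole scheme that the abstract contrasts with the ‘Car-Parrinello’ one, iterating mu_i = alpha_i (E0_i + sum_j T_ij mu_j) to convergence; plain point-dipole tensors are used here, and the Thole damping of the real model is omitted.

import numpy as np

def dipole_tensor(r):
    """Point-dipole interaction tensor T = (3 r r^T / r^2 - I) / |r|^3."""
    d = np.linalg.norm(r)
    return (3.0 * np.outer(r, r) / d**2 - np.eye(3)) / d**3

def induced_dipoles(positions, alphas, E0, tol=1e-8, max_iter=200):
    """Fixed-point iteration of mu_i = alpha_i * (E0_i + sum_j T_ij mu_j)."""
    n = len(positions)
    mu = alphas[:, None] * E0                    # zeroth-order guess
    for _ in range(max_iter):
        field = E0.copy()
        for i in range(n):
            for j in range(n):
                if i != j:
                    field[i] += dipole_tensor(positions[i] - positions[j]) @ mu[j]
        mu_new = alphas[:, None] * field
        if np.max(np.abs(mu_new - mu)) < tol:
            return mu_new
        mu = mu_new
    return mu

pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0]])
alphas = np.array([1.0, 1.0])
E0 = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, 0.1]])   # uniform external field
print(induced_dipoles(pos, alphas, E0))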

19.
We describe a Scheme implementation of an interactive environment for the analytic calculation of the Clebsch-Gordan coefficients, Wigner 6j and 9j symbols, and general recoupling coefficients used in the quantum theory of angular momentum. The orthogonality conditions for the considered coefficients are implemented. The program provides fast and exact calculation of the coefficients for large values of the quantum angular momenta.

Program summary

Title of program: Scheme2Clebsch
Catalogue number: ADWC
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWC
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed: Any Scheme-capable platform
Operating systems under which the program has been tested: Windows 2000
Programming language used: Scheme
Memory required to execute with typical data: 50 MB (≈ size of DrScheme, version 204)
No. of lines in distributed program, including test data, etc.: 2872
No. of bytes in distributed program, including test data, etc.: 109 396
Distribution format: tar.gz
Nature of physical problem: The accurate and fast calculation of angular momentum coupling and recoupling coefficients is required in various branches of quantum many-particle physics. The presented code provides a fast and exact calculation of angular momentum coupling and recoupling coefficients for large values of quantum angular momenta and is based on the GNU Library General Public License PLT software http://www.plt-scheme.org/.
Method of solution: A direct evaluation of sum formulas. A general angular momentum recoupling coefficient for an arbitrary number of (integer or half-integer) angular momenta is expressed as a sum over products of Clebsch-Gordan coefficients.
Restrictions on the complexity of the problem: Limited only by the DrScheme implementation used to run the program. No limitation inherent in the code.
Typical running time: Clebsch-Gordan coefficients, Wigner 6j and 9j symbols, and general recoupling coefficients with small angular momenta are computed almost instantaneously. The running time for large-scale calculations depends strongly on the number and magnitude of the arguments' values (i.e., of the angular momenta).
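
A hedged sketch of the direct sum-formula evaluation described under "Method of solution", here in Python with exact rational arithmetic; the Scheme program's interface and conventions are not reproduced, and only the Clebsch-Gordan coefficient is shown.

from fractions import Fraction
from math import factorial

def fact(x):
    """Factorial of a value that must equal a nonnegative integer."""
    if x != int(x) or x < 0:
        raise ValueError("factorial argument must be a nonnegative integer")
    return factorial(int(x))

def clebsch_gordan_sq(j1, m1, j2, m2, J, M):
    """Return (sign, exact square) of <j1 m1; j2 m2 | J M> as a Fraction,
    from the standard algebraic sum formula; half-integer momenta may be
    passed as Fraction(n, 2)."""
    j1, m1, j2, m2, J, M = map(Fraction, (j1, m1, j2, m2, J, M))
    if m1 + m2 != M or not abs(j1 - j2) <= J <= j1 + j2:
        return 1, Fraction(0)
    pref = (2 * J + 1) * Fraction(
        fact(J + j1 - j2) * fact(J - j1 + j2) * fact(j1 + j2 - J),
        fact(j1 + j2 + J + 1))
    pref *= (fact(J + M) * fact(J - M) * fact(j1 - m1) * fact(j1 + m1)
             * fact(j2 - m2) * fact(j2 + m2))
    k_min = int(max(0, j1 + m2 - J, j2 - m1 - J))
    k_max = int(min(j1 + j2 - J, j1 - m1, j2 + m2))
    total = Fraction(0)
    for k in range(k_min, k_max + 1):
        denom = (fact(k) * fact(j1 + j2 - J - k) * fact(j1 - m1 - k)
                 * fact(j2 + m2 - k) * fact(J - j2 + m1 + k) * fact(J - j1 - m2 + k))
        total += Fraction((-1) ** k, denom)
    sign = -1 if total < 0 else 1
    return sign, pref * total * total

# <1/2 1/2; 1/2 -1/2 | 1 0> = +1/sqrt(2), so the exact square is 1/2.
print(clebsch_gordan_sq(Fraction(1, 2), Fraction(1, 2),
                        Fraction(1, 2), Fraction(-1, 2), 1, 0))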

20.
We investigate performance improvements for the discrete element method (DEM) used in ppohDEM. First, we use OpenMP and MPI to parallelize the DEM for efficient operation on many types of memory, including shared memory, and at any scale, from small PC clusters to supercomputers. We also describe a new algorithm for the descending storage method (DSM), based on a sort technique, that makes the creation of contact candidate pair lists more efficient. Finally, we measure the performance of ppohDEM with the proposed improvements and confirm that the computational time is significantly reduced. We also show that the parallel performance of ppohDEM can be improved by reducing the number of OpenMP threads per MPI process.

Program summary

Program title: ppohDEM
Catalogue identifier: AESI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AESI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 39007
No. of bytes in distributed program, including test data, etc.: 2482843
Distribution format: tar.gz
Programming language: Fortran
Computer: CPU-based workstations and parallel computers
Operating system: Linux, Windows
Has the code been vectorized or parallelized?: Yes, using MPI. Tested with up to 8 processors.
RAM: Depends on the numbers of particles and contact particle pairs (1 GB for the example program supplied with the package)
Classification: 6.5, 13
External routines: MPI-2, OpenMP
Nature of problem: Collision dynamics of viscoelastic particles with friction in powder engineering and soil mechanics.
Solution method: Parallelized DEM running on shared and/or distributed memory systems. The DEM is a particle-based model in which geometrical size and shape attributes are provided for each element. In the DEM, the Voigt model and the Coulomb friction model are considered at each contact point between particles.
Running time: 10 min for the example program supplied with the package using 2 CPUs (each with 10 cores) of an Intel Xeon E7-4870.
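
A hedged sketch of a sort-based construction of contact pairs in the spirit of the sort technique described above; the actual descending storage method in ppohDEM differs in detail, and the particle data below are invented.

import numpy as np

def contact_pairs(centers, radii):
    """Find overlapping sphere pairs with a sort-and-sweep along the x axis."""
    order = np.argsort(centers[:, 0])            # particles sorted by x
    r_max = float(radii.max())
    pairs = []
    for a, i in enumerate(order):
        x_end = centers[i, 0] + radii[i]
        for j in order[a + 1:]:
            if centers[j, 0] - r_max > x_end:    # no later sphere can reach i
                break
            if np.linalg.norm(centers[i] - centers[j]) <= radii[i] + radii[j]:
                pairs.append((int(i), int(j)))
    return pairs

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 10.0, size=(200, 3))
radii = np.full(200, 0.3)
print(len(contact_pairs(centers, radii)))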
