Similar documents
20 similar documents were retrieved (search time: 140 ms).
1.
We present a driver program for performing replica-exchange molecular dynamics simulations with the Tinker package. Parallelization is based on the Message Passing Interface, with every replica assigned to a separate process. The algorithm is not communication intensive, which makes the program suitable for running even on loosely coupled cluster systems. Particular attention is paid to the practical aspects of analyzing the program output.
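The temperature-exchange step at the heart of such a scheme is compact enough to sketch. The following Python fragment (not taken from TiReX, whose driver is Fortran/MPI; the energies and temperatures are placeholder values) shows the standard Metropolis criterion for swapping temperatures between neighbouring replicas:

```python
import math
import random

def attempt_swap(E_i, E_j, T_i, T_j, k_B=1.0):
    """Metropolis criterion for exchanging temperatures T_i, T_j
    between two replicas with potential energies E_i, E_j."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_j - E_i)
    return delta <= 0.0 or random.random() < math.exp(-delta)

# Neighbouring replicas attempt an exchange at a regular interval.
temperatures = [300.0, 320.0, 340.0, 360.0]
energies = [-105.2, -101.7, -98.3, -95.0]   # placeholder values
for i in range(len(temperatures) - 1):
    if attempt_swap(energies[i], energies[i + 1],
                    temperatures[i], temperatures[i + 1]):
        temperatures[i], temperatures[i + 1] = temperatures[i + 1], temperatures[i]
```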

Program summary

Program title: TiReX
Catalogue identifier: AEEK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 43 385
No. of bytes in distributed program, including test data, etc.: 502 262
Distribution format: tar.gz
Programming language: Fortran 90/95
Computer: Most UNIX machines
Operating system: Linux
Has the code been vectorized or parallelized?: Parallelized with MPI
Classification: 16.13
External routines: TINKER version 4.2 or 5.0, built as a library
Nature of problem: Replica-exchange molecular dynamics.
Solution method: Each replica is assigned to a separate process; temperatures are swapped between replicas at regular time intervals.
Running time: The sample run may take up to a few minutes.

2.
Nowadays, state-of-the-art Density Functional Theory (DFT) codes are based on local (LDA) or semilocal (GGA) energy functionals. Recently the theory of a truly nonlocal energy functional has been developed. It has been used mostly as a post-DFT approach, i.e. by applying the functional to the charge density calculated with any standard DFT code, thus obtaining a new, improved value for the total energy of the system. The nonlocal calculation is computationally quite expensive and scales as N², where N is the number of points on which the density is defined, so a massively parallel implementation is desirable for a wider applicability of the new approach. In this article we present a code which accomplishes this goal.
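As an illustration of why the cost grows as N², a brute-force evaluation of a nonlocal double sum over grid points can be sketched as follows. This is a serial Python toy with a placeholder kernel, not the vdW-DF kernel or the parallel Fortran 90 implementation in JuNoLo:

```python
import numpy as np

def nonlocal_energy(points, density, kernel, dV):
    """Brute-force O(N^2) evaluation of a nonlocal correlation-type term,
    E ~ 1/2 * sum_{i,j} n_i * phi(|r_i - r_j|) * n_j * dV^2,
    with the i == j self term dropped and 'kernel' standing in for phi."""
    E = 0.0
    for i in range(len(points)):
        r = np.linalg.norm(points - points[i], axis=1)
        vals = kernel(r)
        vals[i] = 0.0                      # drop the self term
        E += density[i] * np.sum(vals * density)
    return 0.5 * E * dV * dV

# Placeholder kernel and a tiny random "density" grid (illustration only).
phi = lambda r: np.exp(-r) / (1.0 + r)
grid = np.random.default_rng(0).random((200, 3))
rho = np.random.default_rng(1).random(200)
print(nonlocal_energy(grid, rho, phi, dV=1.0e-3))
```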

Program summary

Program title: JuNoLo
Catalogue identifier: AEFM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 176 980
No. of bytes in distributed program, including test data, etc.: 2 126 072
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Any architecture with a Fortran 90 compiler
Operating system: Linux, AIX
Has the code been vectorised or parallelized?: Yes, from 1 to 65536 processors may be used.
RAM: Depends strongly on the problem's size.
Classification: 7.3
External routines:
• FFTW (http://www.fftw.org/)
• MPI (http://www.mcs.anl.gov/research/projects/mpich2/ or http://www.lam-mpi.org/)
Nature of problem: Obtaining the value of the nonlocal vdW-DF energy based on the charge density distribution obtained from some Density Functional Theory code.
Solution method: Numerical calculation of the double sum is implemented in a parallel F90 code. Calculation of this sum yields the required nonlocal vdW-DF energy.
Unusual features: Binds to virtually any DFT program.
Additional comments: Excellent parallelization features.
Running time: Depends strongly on the size of the problem and the number of CPUs used.

3.
We describe an implementation to solve Poisson's equation for an isolated system on a unigrid mesh using FFTs. The method solves the equation globally on mesh blocks distributed across multiple processes on a distributed-memory parallel computer. Test results to demonstrate the convergence and scaling properties of the implementation are presented. The solver is offered to interested users as the library PSPFFT.
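The spectral idea behind an FFT-based Poisson solve can be sketched in a few lines. The example below is a serial, periodic-boundary Python version for illustration only; PSPFFT itself targets isolated boundary conditions on a mesh distributed over many processes:

```python
import numpy as np

def poisson_fft_periodic(f, L):
    """Solve laplacian(phi) = f on a periodic N^3 box of side L with FFTs:
    divide the transformed source by -k^2 and transform back."""
    N = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    f_hat = np.fft.fftn(f)
    phi_hat = np.zeros_like(f_hat)
    nz = k2 != 0.0
    phi_hat[nz] = -f_hat[nz] / k2[nz]      # zero mode fixed to zero
    return np.real(np.fft.ifftn(phi_hat))

# Check against an analytic mode: f = -sin(x) gives phi = sin(x).
N, L = 32, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
X = np.meshgrid(x, x, x, indexing="ij")[0]
phi = poisson_fft_periodic(-np.sin(X), L)
print(np.max(np.abs(phi - np.sin(X))))     # should be near machine precision
```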

Program summary

Program title: PSPFFT
Catalogue identifier: AEJK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 110 243
No. of bytes in distributed program, including test data, etc.: 16 332 181
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Any architecture with a Fortran 95 compiler, distributed memory clusters
Operating system: Linux, Unix
Has the code been vectorized or parallelized?: Yes, using MPI. An arbitrary number of processors may be used (subject to some constraints). The program has been tested on 1 up to ∼13 000 processors.
RAM: Depends on the problem size, approximately 170 MBytes for 48³ cells per process.
Classification: 4.3, 6.5
External routines: MPI (http://www.mcs.anl.gov/mpi/), FFTW (http://www.fftw.org), Silo (https://wci.llnl.gov/codes/silo/) (only necessary for running the test problem).
Nature of problem: Solving Poisson's equation globally on a unigrid mesh distributed across multiple processes on a distributed memory system.
Solution method: Numerical solution using a multidimensional discrete Fourier Transform in a parallel Fortran 95 code.
Unusual features: This code can be compiled as a library to be readily linked and used as a black-box Poisson solver with other codes.
Running time: Depends on the size of the problem, but typically less than 1 second per solve.

4.
We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which QMC can be applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in the serial and parallel cases to show that the overhead of using Python can be negligible.
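A minimal variational Monte Carlo loop, of the kind generalized by MontePython, can be sketched on a toy problem. The Python example below treats the 1D harmonic oscillator with a Gaussian trial wave function; it is a textbook toy, not the interacting boson systems handled by the package:

```python
import numpy as np

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=0):
    """Variational Monte Carlo for the 1D harmonic oscillator with the trial
    wave function psi_T(x) = exp(-alpha x^2): Metropolis sampling of
    |psi_T|^2 and averaging of the local energy
        E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(alpha + x * x * (0.5 - 2.0 * alpha * alpha))
    return np.mean(energies[n_steps // 10:])   # discard 10% as burn-in

print(vmc_energy(0.5))   # exact ground-state energy is 0.5
```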

Program summary

Program title: MontePython
Catalogue identifier: ADZP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 49 519
No. of bytes in distributed program, including test data, etc.: 114 484
Distribution format: tar.gz
Programming language: C++, Python
Computer: PC, IBM RS6000/320, HP, ALPHA
Operating system: LINUX
Has the code been vectorised or parallelized?: Yes, parallelized with MPI
Number of processors used: 1-96
RAM: Depends on physical system to be simulated
Classification: 7.6; 16.1
Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of ⁸⁷Rb
Solution method: Quantum Monte Carlo
Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on 1 AMD Opteron™ 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.

5.
We document our Fortran 77 code for multicanonical simulations of 4D U(1) lattice gauge theory in the neighborhood of its phase transition. This includes programs and routines for canonical simulations using biased Metropolis heatbath updating and overrelaxation, determination of multicanonical weights via a Wang-Landau recursion, and multicanonical simulations with fixed weights supplemented by overrelaxation sweeps. Measurements are performed for the action, Polyakov loops and some of their structure factors. Many features of the code transcend the particular application and are expected to be useful for other lattice gauge theory models as well as for systems in statistical physics.
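The Wang-Landau recursion used to determine the multicanonical weights can be illustrated on a toy system with a known density of states. The Python sketch below applies it to N non-interacting spins (binomial density of states); the U(1) gauge action, logarithmic coding and jackknife analysis of the Fortran 77 package are not reproduced here:

```python
import numpy as np
from math import comb

def wang_landau_spins(N=10, f_final=1e-4, flat=0.8, seed=1):
    """Wang-Landau recursion for ln g(E) of N non-interacting Ising spins
    (E = -sum_i s_i, so g is binomial); multicanonical weights follow as
    w(E) ~ 1/g(E)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=N)
    levels = np.arange(-N, N + 1, 2)             # possible energies
    ln_g = np.zeros(len(levels))
    hist = np.zeros(len(levels))
    idx = lambda E: (E + N) // 2
    i, ln_f = idx(-spins.sum()), 1.0
    while ln_f > f_final:
        k = rng.integers(N)                       # propose one spin flip
        E_new = -(spins.sum() - 2 * spins[k])
        j = idx(E_new)
        if rng.random() < np.exp(ln_g[i] - ln_g[j]):
            spins[k] *= -1
            i = j
        ln_g[i] += ln_f
        hist[i] += 1
        if hist.min() > flat * hist.mean():       # histogram "flat enough"
            hist[:] = 0.0
            ln_f /= 2.0                           # halve the modification factor
    return levels, ln_g - ln_g[0]

levels, ln_g = wang_landau_spins()
exact = [np.log(comb(10, k)) for k in range(11)]
print(np.round(ln_g, 2))      # approximately ln C(10, k)
print(np.round(exact, 2))
```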

Program summary

Program title: STMC_U1MUCA
Catalogue identifier: AEET_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEET_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 18 376
No. of bytes in distributed program, including test data, etc.: 205 183
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Any capable of compiling and executing Fortran code
Operating system: Any capable of compiling and executing Fortran code
Classification: 11.5
Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors.
Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars.
Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.

6.
A computational approach is presented for efficient solution of two-dimensional few-body problems, such as quantum dots or excitonic complexes, using the stochastic variational method. The computer program can be used to calculate the energies and wave functions of various two-dimensional systems.
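The stochastic selection of basis parameters can be illustrated on a one-dimensional toy problem. The Python sketch below (assuming analytic Gaussian matrix elements for the 1D harmonic oscillator) keeps a trial width only if it lowers the lowest generalized eigenvalue; the actual svm-2d code works with Correlated Gaussians for several electrons in two dimensions:

```python
import numpy as np
from scipy.linalg import eigh

def matrices(a):
    """Overlap S and Hamiltonian H for Gaussians exp(-a_k x^2) in the 1D
    harmonic oscillator H = -1/2 d^2/dx^2 + 1/2 x^2 (analytic elements)."""
    s = a[:, None] + a[None, :]
    S = np.sqrt(np.pi / s)
    H = S * (np.outer(a, a) / s + 0.25 / s)
    return S, H

def svm_ground_state(n_basis=6, n_trials=200, seed=0):
    """Stochastic basis selection: among random candidate widths, keep the one
    giving the lowest generalized eigenvalue of H c = E S c."""
    rng = np.random.default_rng(seed)
    widths = [rng.uniform(0.1, 5.0)]
    for _ in range(n_basis - 1):
        best_E, best_a = np.inf, None
        for _ in range(n_trials):
            trial = np.array(widths + [rng.uniform(0.1, 5.0)])
            S, H = matrices(trial)
            E = eigh(H, S, eigvals_only=True)[0]
            if E < best_E:
                best_E, best_a = E, trial[-1]
        widths.append(best_a)
    return best_E

print(svm_ground_state())   # converges towards the exact value 0.5
```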

Program summary

Program title: svm-2d
Catalogue identifier: AEBE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5091
No. of bytes in distributed program, including test data, etc.: 130 963
Distribution format: tar.gz
Programming language: Fortran 90
Computer: The program should work on any system with a Fortran 90 compiler
Operating system: The program should work on any system with a Fortran 90 compiler
Classification: 7.3
Nature of problem: Variational calculation of energies and wave functions using Correlated Gaussian basis.
Solution method: Two-dimensional few-electron problems are solved by the variational method. The ground state wave function is expanded into Correlated Gaussian basis functions and the parameters of the basis states are optimized by a stochastic selection procedure. Accurate results can be obtained for 2-6 electron systems.
Running time: A couple of hours for a typical system.

7.
Fortran 77 code is presented for a hybrid of the Metropolis Monte Carlo (MMC) and Reverse Monte Carlo (RMC) methods for the simulation of amorphous silicon and carbon structures. In addition to the usual constraints on the pair correlation functions and average coordination, the code also incorporates an optional energy constraint. This energy constraint takes the form of either the Environment Dependent Interatomic Potential (applicable to silicon and carbon) or the original and modified Stillinger-Weber potentials (applicable to silicon). The code also allows porous systems to be modeled via a constraint on porosity and internal surface area, using a novel restriction on the available simulation volume.
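Schematically, a hybrid move is accepted or rejected on a combined cost built from the fit to experimental data and the interatomic potential energy. The Python sketch below uses placeholder chi-square and energy functions and an assumed equal weighting of the two terms; it is not the HRMC code's actual cost function or input conventions:

```python
import numpy as np

def hrmc_accept(chi2_old, chi2_new, E_old, E_new, kT, rng):
    """Hybrid (R)MC acceptance: combine the change in the experimental
    chi^2 constraint and the change in potential energy into one cost and
    apply a Metropolis test to it (schematic weighting)."""
    d_cost = 0.5 * (chi2_new - chi2_old) + (E_new - E_old) / kT
    return d_cost <= 0.0 or rng.random() < np.exp(-d_cost)

# Schematic move loop with placeholder chi^2 / energy evaluators.
rng = np.random.default_rng(0)
chi2 = lambda cfg: float(np.sum(cfg**2))         # stand-in for the fit to g(r)
energy = lambda cfg: float(np.sum(np.cos(cfg)))  # stand-in for EDIP / SW energy
cfg = rng.normal(size=30)
for _ in range(1000):
    trial = cfg.copy()
    k = rng.integers(cfg.size)
    trial[k] += 0.1 * rng.normal()               # displace one "atom"
    if hrmc_accept(chi2(cfg), chi2(trial), energy(cfg), energy(trial), 0.5, rng):
        cfg = trial
print(chi2(cfg), energy(cfg))
```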

Program summary

Program title: HRMC version 1.0
Catalogue identifier: AEAO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 200 894
No. of bytes in distributed program, including test data, etc.: 907 557
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Any computer capable of running executables produced by the g77 Fortran compiler
Operating system: Unix, Windows
RAM: Depends on the type of empirical potential used, the number of atoms and which constraints are employed
Classification: 7.7
Nature of problem: Atomic modeling using empirical potentials and experimental data
Solution method: Monte Carlo
Additional comments: The code is not standard FORTRAN 77 but includes some additional features and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html).
Running time: Depends on the type of empirical potential used, the number of atoms and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.

8.
Computer-generated holograms are usually produced using commercial software such as MATLAB, MATHCAD or Mathematica. This work is an approach to doing the same using freely distributed open-source packages and an open-source operating system. A Fourier hologram is generated using this method and tested by simulated and optical reconstruction. The reconstructed images are in good agreement with the objects chosen. The significance of using such a system is also discussed.
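The core of a computer-generated Fourier hologram is the Fourier transform of the object field interfered with a reference wave. The Python/NumPy sketch below (with an assumed off-axis carrier frequency and a random object phase) illustrates the idea; FHOLO's own C++ implementation and parameters are not reproduced:

```python
import numpy as np

def fourier_hologram(obj, carrier=0.25, seed=0):
    """Fourier-transform hologram of a real object: attach a random phase
    (diffuser), take the 2D FFT, interfere it with an off-axis plane
    reference wave, and record the intensity."""
    rng = np.random.default_rng(seed)
    field = obj * np.exp(2j * np.pi * rng.random(obj.shape))
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    x = np.arange(obj.shape[1])
    reference = np.exp(2j * np.pi * carrier * x)[np.newaxis, :]  # tilted reference
    reference = np.max(np.abs(spectrum)) * reference
    return np.abs(spectrum + reference) ** 2

# Simulated reconstruction: Fourier transforming the hologram yields the object
# and its twin, displaced by the carrier frequency, plus a zero-order term.
obj = np.zeros((256, 256))
obj[100:150, 100:150] = 1.0
recon = np.abs(np.fft.ifft2(fourier_hologram(obj)))
```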

Program summary

Program title: FHOLO
Catalogue identifier: AEDS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 176 336
No. of bytes in distributed program, including test data, etc.: 4 294 872
Distribution format: tar.gz
Programming language: C++
Computer: Any x86 microcomputer
Operating system: Linux (Debian Etch)
RAM: 512 MB
Classification: 18
Nature of problem: To generate a Fourier hologram on a microcomputer using only an open-source operating system and packages.
Running time: Depends on the matrix size; 10 sec for a matrix of size 256×256.

9.
We describe a program for computing the abundances of light elements produced during Big Bang Nucleosynthesis, which is publicly available at http://parthenope.na.infn.it/. Starting from nuclear statistical equilibrium conditions, the program solves a set of coupled ordinary differential equations, follows the departure from chemical equilibrium of the nuclear species, and determines their asymptotic abundances as functions of several input cosmological parameters, such as the baryon density, the effective number of neutrinos, the value of the cosmological constant and the neutrino chemical potential. The program requires the commercial NAG library routines.
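The numerical task is the integration of a stiff system of rate equations until the abundances freeze out. The Python sketch below integrates a toy single-species freeze-out with SciPy's BDF solver; it only illustrates why a stiff (BDF-type) integrator is used and is unrelated to PArthENoPE's actual nuclear network or its NAG routines:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    """Toy stiff 'freeze-out' equation: species A is driven towards a
    time-dependent equilibrium abundance by a rate that drops rapidly,
    after which its abundance freezes out (placeholder rates)."""
    y_eq = np.exp(-t)                  # placeholder equilibrium abundance
    rate = 1.0e6 * np.exp(-3.0 * t)    # interaction rate, switching off rapidly
    return [-rate * (y[0] - y_eq)]

# A BDF method handles the early stiff phase without tiny explicit steps.
sol = solve_ivp(rhs, (0.0, 10.0), [1.0], method="BDF", rtol=1e-8, atol=1e-12)
print("asymptotic abundance:", sol.y[0, -1])
```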

Program summary

Program title: PArthENoPE
Catalogue identifier: AEAV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 033
No. of bytes in distributed program, including test data, etc.: 46 002
Distribution format: tar.gz
Programming language: Fortran 77
Computer: PC-compatible running Fortran on Unix, MS Windows or Linux
Operating system: Windows 2000, Windows XP, Linux
Classification: 1.2, 1.9, 17.8
External routines: NAG Libraries
Nature of problem: Computation of yields of light elements synthesized in the primordial universe.
Solution method: BDF method for the integration of the ODEs, implemented in a NAG routine.
Running time: 90 sec with default parameters on a Dual Xeon Processor 2.4 GHz with 2 GB RAM.

10.
We describe the Breit–Pauli distorted wave (BPDW) approach for the electron-impact excitation of atomic ions that we have implemented within the autostructure code.

Program summary

Program title: autostructure
Catalogue identifier: AEIV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 130 987
No. of bytes in distributed program, including test data, etc.: 1 031 584
Distribution format: tar.gz
Programming language: Fortran 77/95
Computer: General
Operating system: Unix
Has the code been vectorized or parallelized?: Yes, a parallel version, with MPI directives, is included in the distribution.
RAM: From several kbytes to several Gbytes
Classification: 2, 2.4
Nature of problem: Collision strengths for the electron-impact excitation of atomic ions are calculated using a Breit–Pauli distorted wave approach with the optional inclusion of two-body non-fine-structure and fine-structure interactions.
Solution method: General multi-configuration Breit–Pauli atomic structure. A jK-coupling partial wave expansion of the collision problem. Slater state angular algebra. Various model potential non-relativistic or kappa-averaged relativistic radial orbital solutions; the continuum distorted wave orbitals are not required to be orthogonal to the bound ones.
Additional comments: Documentation is provided in the distribution file along with the test case.
Running time: From a few seconds to a few hours.

11.
We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more.

Program summary

Title of program: SMMP
Catalogue identifier: ADOJ_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
Programming language used: FORTRAN, Python
No. of lines in distributed program, including test data, etc.: 52 105
No. of bytes in distributed program, including test data, etc.: 599 150
Distribution format: tar.gz
Computer: Platform independent
Operating system: OS independent
RAM: 2 Mbytes
Classification: 3
Does the new version supersede the previous version?: Yes
Nature of problem: Molecular mechanics computations and Monte Carlo simulation of proteins.
Solution method: Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical, as well as for generalized ensembles.
Reasons for new version: API changes and increased functionality.
Summary of revisions: Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings.
Restrictions: The consumed CPU time increases with the size of protein molecule.
Running time: Depends on the size of the simulated molecule.

12.
We present a program for the numerical evaluation of form factors entering the calculation of one-loop amplitudes with up to six external legs. The program is written in Fortran95 and performs the reduction to a certain set of basis integrals numerically, using a formalism where inverse Gram determinants can be avoided. It can be used to calculate one-loop amplitudes with massless internal particles in a fast and numerically stable way.

Program summary

Program title: golem95_v1.0
Catalogue identifier: AEEO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 50 105
No. of bytes in distributed program, including test data, etc.: 241 657
Distribution format: tar.gz
Programming language: Fortran95
Computer: Any computer with a Fortran95 compiler
Operating system: Linux, Unix
RAM: RAM used per form factor is insignificant, even for a rank six six-point form factor
Classification: 4.4, 11.1
External routines: Perl programming language (http://www.perl.com/)
Nature of problem: Evaluation of one-loop multi-leg tensor integrals occurring in the calculation of next-to-leading order corrections to scattering amplitudes in elementary particle physics.
Solution method: Tensor integrals are represented in terms of form factors and a set of basic building blocks (“basis integrals”). The reduction to the basis integrals is performed numerically, thus avoiding the generation of large algebraic expressions.
Restrictions: The current version contains basis integrals for massless internal particles only. Basis integrals for massive internal particles will be included in a future version.
Running time: Depends on the nature of the problem. A rank 6 six-point form factor at a randomly chosen kinematic point takes 0.13 seconds on an Intel Core 2 Q9450 2.66 GHz processor, without any optimisation. With compiler optimisation flag -O3 the same point takes 0.09 seconds. Timings for lower point form factors are: all form factors for five-point functions from rank 0 to rank 4: 0.04 s; all form factors for rank 5 five-point functions: 0.05 s; all form factors for four-point functions from rank 0 to rank 4: 0.01 s.

13.
We investigate performance improvements for the discrete element method (DEM) used in ppohDEM. First, we use OpenMP and MPI to parallelize the DEM for efficient operation on many types of memory architecture, including shared memory, and at any scale, from small PC clusters to supercomputers. We also describe a new algorithm for the descending storage method (DSM), based on a sort technique, that makes the creation of contact candidate pair lists more efficient. Finally, we measure the performance of ppohDEM with the proposed improvements and confirm that the computational time is significantly reduced. We also show that the parallel performance of ppohDEM can be improved by reducing the number of OpenMP threads per MPI process.

Program summary

Program title: ppohDEM
Catalogue identifier: AESI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AESI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 39007
No. of bytes in distributed program, including test data, etc.: 2482843
Distribution format: tar.gz
Programming language: Fortran
Computer: CPU based workstations and parallel computers
Operating system: Linux, Windows
Has the code been vectorized or parallelized?: Yes, using MPI. Tested with up to 8 processors.
RAM: Dependent upon the numbers of particles and contact particle pairs (1 GB for the example program supplied with the package)
Classification: 6.5, 13
External routines: MPI-2, OpenMP
Nature of problem: Collision dynamics of viscoelastic particles with friction in powder engineering and soil mechanics.
Solution method: Parallelized DEM running on shared and/or distributed memory systems. The DEM is a particle-based model in which geometrical size and shape attributes are provided for each element; the Voigt model and the Coulomb friction model are considered at each contact point between particles.
Running time: 10 min for the example program supplied with the package using 2 CPUs (each with 10 cores) of Intel Xeon E7-4870.
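Sort-based construction of contact-candidate pair lists, the general idea behind such a method, can be sketched as follows. This Python sort-and-sweep example only illustrates sort-accelerated pair searching, not the DSM bookkeeping implemented in ppohDEM:

```python
import numpy as np

def candidate_pairs(positions, radii, margin=0.1):
    """Sort-and-sweep pair search: sort particles along x and check each one
    only against neighbours whose x-interval can still overlap its own."""
    order = np.argsort(positions[:, 0])
    pairs = []
    for a in range(len(order)):
        i = order[a]
        reach = positions[i, 0] + 2.0 * radii.max() + margin
        for b in range(a + 1, len(order)):
            j = order[b]
            if positions[j, 0] > reach:
                break                     # later particles are even farther in x
            if np.linalg.norm(positions[i] - positions[j]) < radii[i] + radii[j] + margin:
                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
rad = np.full(500, 0.2)
print(len(candidate_pairs(pos, rad)))
```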

14.
We describe the Monte Carlo event generator for black hole production and decay in proton-proton collisions, QBH version 1.02. The generator implements a model for quantum black hole production and decay based on the conservation of local gauge symmetries and democratic decays. The code is written entirely in C++ and interfaces to the PYTHIA 8 Monte Carlo code for fragmentation and decays.

Program summary

Program title: QBH
Catalogue identifier: AEGU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 048
No. of bytes in distributed program, including test data, etc.: 118 420
Distribution format: tar.gz
Programming language: C++
Computer: x86
Operating system: Scientific Linux, Mac OS X
RAM: 1 GB
Classification: 11.6
External routines: PYTHIA 8130 (http://home.thep.lu.se/~torbjorn/pythiaaux/present.html) and LHAPDF (http://projects.hepforge.org/lhapdf/)
Nature of problem: Simulate black hole production and decay in proton-proton collisions.
Solution method: Monte Carlo simulation using importance sampling.
Running time: Eight events per second.

15.
When one deals with data drawn from continuous variables, a histogram is often inadequate to display their probability density. It deals inefficiently with statistical noise, and bin sizes are free parameters. In contrast, the empirical cumulative distribution function (obtained after sorting the data) is parameter free. But it is a step function, so that its differentiation does not give a smooth probability density. Based on Fourier series expansion and Kolmogorov tests, we introduce a simple method that overcomes this problem. Error bars on the estimated probability density are calculated using a jackknife method. We give several examples and provide computer code reproducing them. You may want to look at the corresponding Figures 4 to 9 first.
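The essence of the method is to expand the deviation of the empirical cumulative distribution function from a reference CDF in a short Fourier series and differentiate the truncated series. The Python sketch below fixes the number of terms by hand, whereas the published method selects it with Kolmogorov tests and attaches jackknife error bars:

```python
import numpy as np

def smooth_pdf(data, n_terms=10, grid=512):
    """Smooth density estimate from the empirical CDF: map the data range to
    [0, 1], expand the deviation of the empirical CDF from the uniform CDF in
    a sine Fourier series, and differentiate the truncated series."""
    x = np.sort(np.asarray(data, dtype=float))
    u = (x - x[0]) / (x[-1] - x[0])                        # support mapped to [0, 1]
    t = np.linspace(0.0, 1.0, grid)
    F_emp = np.searchsorted(u, t, side="right") / len(u)
    resid = F_emp - t                                      # deviation from uniform CDF
    pdf = np.ones_like(t)
    for m in range(1, n_terms + 1):
        d_m = 2.0 * np.mean(resid * np.sin(m * np.pi * t))   # Fourier coefficient
        pdf += d_m * m * np.pi * np.cos(m * np.pi * t)
    return x[0] + t * (x[-1] - x[0]), pdf / (x[-1] - x[0])

# Example: smooth density of a Gaussian sample, with no histogram binning.
xs, ps = smooth_pdf(np.random.default_rng(0).normal(size=2000))
print(xs[np.argmax(ps)])   # peak location, close to 0 for this sample
```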

Program summary

Program title: cdf_to_pd
Catalogue identifier: AEBC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2758
No. of bytes in distributed program, including test data, etc.: 18 594
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Any capable of compiling and executing Fortran code
Operating system: Any capable of compiling and executing Fortran code
Classification: 4.14, 9
Nature of problem: When one deals with data drawn from continuous variables, a histogram is often inadequate to display the probability density. It deals inefficiently with statistical noise, and bin sizes are free parameters. In contrast, the empirical cumulative distribution function (obtained after sorting the data) is parameter free. But it is a step function, so that its differentiation does not give a smooth probability density.
Solution method: Based on Fourier series expansion and Kolmogorov tests, we introduce a simple method that overcomes this problem. Error bars on the estimated probability density are calculated using a jackknife method. Several examples are included in the distribution file.
Running time: The test runs provided take only a few seconds to execute.

16.
We developed a software package (CAVE), written in Fortran, to detect internal cavities in proteins; it can also be applied to an arbitrary system of balls. The volume, the surface area and other quantitative characteristics of the cavities can be calculated. The code is based on the recently suggested enveloping triangulation algorithm [J. Buša et al., J. Comp. Chem. 30 (2009) 346] for computing the volume and surface area of a cavity from analytical equations. Different standard sets of atomic radii can be used. The PDB-compatible file containing the atomic coordinates must be stored on disk in advance. Testing of the code on different proteins and artificial ball systems showed the efficiency and accuracy of the algorithm. The program is fast: it can handle a system of several thousand balls in a matter of seconds on contemporary PCs. The code is open source and free.

Program summary

Program title: CAVE
Catalogue identifier: AEHC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 8670
No. of bytes in distributed program, including test data, etc.: 100 131
Distribution format: tar.gz
Programming language: Fortran
Computer: PC Pentium and Core
Operating system: Linux system and Windows XP system
Classification: 16.1
Nature of problem: Molecular structure analysis.
Solution method: Analytical method for cavity detection, and numerical algorithm for volume and surface area calculation based on the analytical formulas, after using the stereographic transformation.
Running time: Depends on the size of the molecule under consideration. The test example included in the distribution takes about 1 minute to run.

17.
The Parallel Programming Interface for Distributed Data (PPIDD) library provides an interface, suitable for use in parallel scientific applications, that delivers communications and global data management. The library can be built either using the Global Arrays (GA) toolkit, or a standard MPI-2 library. This abstraction allows the programmer to write portable parallel codes that can utilise the best, or only, communications library that is available on a particular computing platform.

Program summary

Program title: PPIDD
Catalogue identifier: AEEF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEF_1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 17 698
No. of bytes in distributed program, including test data, etc.: 166 173
Distribution format: tar.gz
Programming language: Fortran, C
Computer: Many parallel systems
Operating system: Various
Has the code been vectorised or parallelized?: Yes. 2–256 processors used
RAM: 50 Mbytes
Classification: 6.5
External routines: Global Arrays or MPI-2
Nature of problem: Many scientific applications require management and communication of data that is global, and the standard MPI-2 protocol provides only low-level methods for the required one-sided remote memory access.
Solution method: The Parallel Programming Interface for Distributed Data (PPIDD) library provides an interface, suitable for use in parallel scientific applications, that delivers communications and global data management. The library can be built either using the Global Arrays (GA) toolkit, or a standard MPI-2 library. This abstraction allows the programmer to write portable parallel codes that can utilise the best, or only, communications library that is available on a particular computing platform.
Running time: Problem dependent. The test provided with the distribution takes only a few seconds to run.

18.
19.
An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple-interference dielectric filters is presented. The most commonly used interference filters, as well as state-of-the-art ones, are embedded in this platform-independent applet, which can serve research and educational purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk.
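The underlying computation is the standard characteristic-matrix (transfer-matrix) method for a stack of dielectric layers. The Python sketch below implements it for normal incidence with assumed layer indices and thicknesses; it illustrates the physics the applet visualizes rather than the applet's own Java code:

```python
import numpy as np

def transmittance(layers, n_inc, n_sub, wavelength):
    """Normal-incidence transmittance of a dielectric multilayer by the
    characteristic-matrix method; layers is a list of (refractive index,
    physical thickness) pairs, traversed from the incident-medium side."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength        # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    return 4.0 * n_inc * n_sub / abs(n_inc * B + C) ** 2

# Quarter-wave high/low stack designed for 550 nm: the transmittance collapses
# at the design wavelength (high reflector) and recovers away from it.
lam0, nH, nL = 550.0, 2.35, 1.46
stack = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * 8
for lam in (450.0, 550.0, 650.0):
    print(lam, transmittance(stack, 1.0, 1.52, lam))
```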

Program summary

Program title: Transmittance
Catalogue identifier: AEBQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5778
No. of bytes in distributed program, including test data, etc.: 90 474
Distribution format: tar.gz
Programming language: Java
Computer: Developed on PC-Pentium platform
Operating system: Any Java-enabled OS. Applet was tested on Windows ME, XP, Sun Solaris, Mac OS
RAM: Variable
Classification: 18
Nature of problem: Sophisticated wavelength-selective multiple interference filters can include some tens or even hundreds of dielectric layers. The spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students to get a quick insight into the properties of a given filter.
Solution method: A Java applet was developed for the computation and the visualization of the transmittance of multilayer interference filters. It is simple to use and the embedded filter library can serve educational purposes. Also, its ability to handle complex structures will be appreciated as a useful research and development tool.
Running time: Real-time simulations

20.
The routine Milne provides accurate numerical values for the classical Milne problem of neutron transport in the planar, one-speed, isotropic-scattering case. The solution is based on the Case eigenfunction formalism. The relevant X functions are evaluated accurately by double-exponential quadrature. The calculated quantities are the extrapolation distance and the scalar and angular fluxes. The H function needed in astrophysical calculations is also evaluated as a byproduct.
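Double-exponential (tanh-sinh) quadrature is a generic scheme, so its essence can be sketched independently of the Milne code. The Python example below applies the standard tanh-sinh rule to an endpoint-singular integrand; the step size and truncation are chosen for illustration, not taken from the Fortran 77 routine:

```python
import numpy as np

def tanh_sinh(f, a, b, h=0.05, k_max=60):
    """Double-exponential (tanh-sinh) quadrature on [a, b]: the substitution
    x = tanh(pi/2 * sinh(t)) clusters nodes towards the endpoints, where the
    weights decay double-exponentially, so endpoint singularities are tamed."""
    t = h * np.arange(-k_max, k_max + 1)
    x = np.tanh(0.5 * np.pi * np.sinh(t))                        # nodes in (-1, 1)
    w = 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * h * np.sum(w * f(mid + half * x))

# Integrand singular at both endpoints: integral of 1/sqrt(1 - x^2) over (-1, 1) is pi.
print(tanh_sinh(lambda x: 1.0 / np.sqrt(1.0 - x * x), -1.0, 1.0))
```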

Program summary

Program title: Milne
Catalogue identifier: AEGS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 701
No. of bytes in distributed program, including test data, etc.: 6845
Distribution format: tar.gz
Programming language: Fortran 77
Computer: PC under Linux or Windows
Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows XP
Classification: 4.11, 21.1, 21.2
Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy principal value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to the standard Gauss quadrature.
Running time: The test included in the distribution takes a few seconds to run.
