Similar documents
20 similar documents were retrieved.
1.
A new stable version (“production version”) v5.28.00 of ROOT [1] has been published [2]. It features several major improvements in many areas, most notably in data storage performance as well as in statistics and graphics features. Some of these improvements were already foreseen in the original publication, Antcheva et al. (2009) [3]. This version will be maintained for at least 6 months; new minor revisions (“patch releases”) will be published [4] to solve problems reported with this version.

New version program summary

Program title: ROOT
Catalogue identifier: AEFA_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFA_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Lesser Public License v.2.1
No. of lines in distributed program, including test data, etc.: 2 934 693
No. of bytes in distributed program, including test data, etc.: 1009
Distribution format: tar.gz
Programming language: C++
Computer: Intel i386, Intel x86-64, Motorola PPC, Sun Sparc, HP PA-RISC
Operating system: GNU/Linux, Windows XP/Vista/7, Mac OS X, FreeBSD, OpenBSD, Solaris, HP-UX, AIX
Has the code been vectorized or parallelized?: Yes
RAM: > 55 Mbytes
Classification: 4, 9, 11.9, 14
Catalogue identifier of previous version: AEFA_v1_0
Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 2499
Does the new version supersede the previous version?: Yes
Nature of problem: Storage, analysis and visualization of scientific data
Solution method: Object store, wide range of analysis algorithms and visualization methods
Reasons for new version: Added features and corrections of deficiencies
Summary of revisions: The release notes at http://root.cern.ch/root/v528/Version528.news.html give a module-oriented overview of the changes in v5.28.00. Highlights include:
  • File format: Reading of TTrees has been improved dramatically with respect to CPU time (30%) and notably with respect to disk space.
  • Histograms: A new TEfficiency class has been provided to handle the calculation of efficiencies and their uncertainties, TH2Poly for polygon-shaped bins (e.g. maps), TKDE for kernel density estimation, and TSVDUnfold for singular value decomposition (a usage sketch follows this list).
  • Graphics: Kerning is now supported in TLatex, PostScript and PDF; a table of contents can be added to PDF files. A new font provides italic symbols. A TPad containing GL can be stored in a binary (i.e. non-vector) image file; support for full-scene anti-aliasing has been added. Usability enhancements to EVE.
  • Math: New interfaces were added for generating random numbers according to a given distribution, goodness-of-fit tests of unbinned data, binning multidimensional data, and several advanced statistical functions.
  • RooFit: Introduction of HistFactory; major additions to RooStats.
  • TMVA: Updated to version 4.1.0, adding e.g. support for simultaneous classification of multiple output classes for several multivariate methods.
  • PROOF: Many new features adding to PROOF's usability, plus improvements and fixes.
  • PyROOT: Support for Python 3 has been added.
  • Tutorials: Several new tutorials were provided for the above new features (notably RooStats).
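As a brief illustration of the new TEfficiency class listed under Histograms above, a minimal ROOT macro could look like the sketch below. It relies only on the documented public TEfficiency interface (constructor, Fill(passed, x), Draw); the binning, the toy turn-on curve and the output file name are invented for this example.

    // Sketch: fill a TEfficiency with toy data and draw it.
    #include "TEfficiency.h"
    #include "TCanvas.h"
    #include "TRandom3.h"

    void efficiency_sketch()
    {
       // 20 bins in pT between 0 and 100 GeV
       TEfficiency *eff = new TEfficiency("eff", "trigger efficiency;p_{T} [GeV];#varepsilon",
                                          20, 0., 100.);
       TRandom3 rng(0);
       for (int i = 0; i < 10000; ++i) {
          double pt = rng.Uniform(0., 100.);
          bool passed = rng.Rndm() < pt / 100.;   // toy turn-on curve
          eff->Fill(passed, pt);                  // Fill(bPassed, x)
       }
       TCanvas *c = new TCanvas("c", "TEfficiency sketch");
       eff->Draw("AP");                           // Clopper-Pearson uncertainties by default
       c->SaveAs("efficiency.png");
    }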
A detailed list of all the changes is available at http://root.cern.ch/root/htmldoc/examples/V5.
Additional comments: For an up-to-date author list see http://root.cern.ch/drupal/content/root-development-team and http://root.cern.ch/drupal/content/former-root-developers. The distribution file for this program is over 30 Mbytes and therefore is not delivered directly when a download or E-mail is requested. Instead an html file giving details of how the program can be obtained is sent.
Running time: Depends on the data size and the complexity of the analysis algorithms.
References:
  • [1] 
    http://root.cern.ch.
  • [2] 
    http://root.cern.ch/drupal/content/production-version-528.
  • [3] 
    I. Antcheva, M. Ballintijn, B. Bellenot, M. Biskup, R. Brun, N. Buncic, Ph. Canal, D. Casadei, O. Couet, V. Fine, L. Franco, G. Ganis, A. Gheata, D. Gonzalez Maline, M. Goto, J. Iwaszkiewicz, A. Kreshuk, D. Marcos Segura, R. Maunder, L. Moneta, A. Naumann, E. Offermann, V. Onuchin, S. Panacek, F. Rademakers, P. Russo, M. Tadel, ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization, Comput. Phys. Commun. 180 (2009) 2499.
  • [4] 
    http://root.cern.ch/drupal/content/root-version-v5-28-00-patch-release-notes.

2.
A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian and Poissonian uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python, or interactively via the ROOT analysis framework.
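As an illustration, a confidence interval for one of the supported models (Poisson signal with a Poisson sideband background and known efficiency) could be obtained with a few calls, following the TRolke interface described in the ROOT class documentation. The counts, sideband ratio and efficiency below are invented, and exact method signatures should be checked against the installed ROOT version.

    // Sketch: 90% C.L. interval with TRolke (Poisson background, known efficiency).
    #include "TRolke.h"
    #include <cstdio>

    int main()
    {
       TRolke rolke;
       rolke.SetCL(0.90);   // 90% confidence level
       // x = observed events, y = sideband counts,
       // tau = sideband/signal region ratio, e = known signal efficiency
       rolke.SetPoissonBkgKnownEff(/*x=*/8, /*y=*/15, /*tau=*/5.0, /*e=*/0.85);
       double low = 0., high = 0.;
       rolke.GetLimits(low, high);
       std::printf("90%% C.L. interval: [%g, %g]\n", low, high);
       return 0;
    }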

Program summary

Program title: TRolke version 2.0
Catalogue identifier: AEFT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: MIT license
No. of lines in distributed program, including test data, etc.: 3431
No. of bytes in distributed program, including test data, etc.: 21 789
Distribution format: tar.gz
Programming language: ISO C++
Computer: Unix, GNU/Linux, Mac
Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac OS X 10.5.8)
RAM: ∼20 MB
Classification: 14.13
External routines: ROOT (http://root.cern.ch/drupal/)
Nature of problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background.
Solution method: Profile likelihood method, analytical
Running time: < 10⁻⁴ seconds per extracted limit.

3.
4.
5.
MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data produced by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's Graphical User Interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR makes it possible to take into account the detection system's time resolution (which cannot be modeled with MCNP), as well as the detectors' energy response functions and counting statistics, in a straightforward way.
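The time-resolution correction amounts to redistributing the contents of the MCNP time-energy histograms with a Gaussian kernel. The sketch below shows the idea on a ROOT TH2D; the helper name SmearTimeAxis is hypothetical and this is not MODAR's own routine, only an illustration of the kind of post-processing described here.

    // Illustrative sketch: Gaussian smearing of the time axis of a time-energy histogram.
    #include "TH2D.h"
    #include "TMath.h"

    // Returns a new histogram whose time axis (x) has been convolved with a
    // Gaussian of width sigma_t (same units as the axis). Content falling
    // outside the axis range is simply lost, which is acceptable for a sketch.
    TH2D *SmearTimeAxis(const TH2D &in, double sigma_t)
    {
       TH2D *out = static_cast<TH2D *>(in.Clone("smeared"));
       out->Reset();
       const TAxis *tAxis = in.GetXaxis();
       for (int it = 1; it <= in.GetNbinsX(); ++it) {
          for (int ie = 1; ie <= in.GetNbinsY(); ++ie) {
             const double content = in.GetBinContent(it, ie);
             if (content == 0.) continue;
             const double t0 = tAxis->GetBinCenter(it);
             // redistribute the bin content over neighbouring time bins
             for (int jt = 1; jt <= in.GetNbinsX(); ++jt) {
                const double lo = tAxis->GetBinLowEdge(jt);
                const double hi = tAxis->GetBinUpEdge(jt);
                const double frac = 0.5 * (TMath::Erf((hi - t0) / (TMath::Sqrt2() * sigma_t)) -
                                           TMath::Erf((lo - t0) / (TMath::Sqrt2() * sigma_t)));
                out->AddBinContent(out->GetBin(jt, ie), content * frac);
             }
          }
       }
       return out;
    }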

Program summary

Program title: MODAR
Catalogue identifier: AEGA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 155 373
No. of bytes in distributed program, including test data, etc.: 14 815 461
Distribution format: tar.gz
Programming language: C++
Computer: Most Unix workstations and PCs
Operating system: Most Unix systems, Linux and Windows, provided the ROOT package has been installed. Examples were tested under SUSE Linux and Windows XP.
RAM: Depends on the size of the MCNP output file. The example presented in the article, which involves three two-dimensional 139×740-bin histograms, allocates about 60 MB. These data are held in memory under ROOT, and the figure includes the consumption of ROOT itself.
Classification: 17.6
External routines: ROOT version 5.24.00 (http://root.cern.ch/drupal/)
Nature of problem: The output of an MCNP simulation is an ASCII file. The data processing is usually performed by copying and pasting the relevant parts of the ASCII file into Microsoft Excel. Such an approach is satisfactory when the quantity of data is small, but it is not efficient when the size of the simulated data is large, for example when time-energy correlations are studied in detail, as in problems involving the associated particle technique. In addition, since the finite time resolution of the simulated detector cannot be modeled with MCNP, systems in which time-energy correlation is crucial cannot be described in a satisfactory way. Finally, realistic particle energy deposit in detectors is calculated with MCNP in a two-step process involving type-5 then type-8 tallies. In the first step, the photon flux energy spectrum associated with a time region is selected and serves as a source energy distribution for the second step. Thus, several files must be manipulated before getting the result, which can be time consuming if one needs to study several time regions or different detector performances. In the same way, modeling the counting statistics obtained in a limited acquisition time requires several steps and can also be time consuming.
Solution method: In order to overcome the previous limitations, the MODAR C++ code has been written to make use of CERN's ROOT data analysis software. MCNP output data are read from the MCNP output file with dedicated routines. Two-dimensional histograms are filled and can be handled efficiently within the ROOT framework. To keep the analysis tool user-friendly, all processing and data display can be done by means of the ROOT Graphical User Interface. Specific routines have been written to include the detectors' finite time resolution and energy response function, as well as counting statistics, in a straightforward way.
Additional comments: The possibility of adding tallies has also been incorporated in MODAR in order to describe systems in which the signal from several detectors can be summed. Moreover, MODAR can be adapted to handle other problems involving two-dimensional data.
Running time: The CPU time needed to smear a two-dimensional histogram depends on the size of the histogram. In the presented example, the time-energy smearing of one of the 139×740 two-dimensional histograms takes 3 minutes on a DELL computer equipped with an Intel Core 2 processor.

6.
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.

Program summary

Program title: Application Hosting Environment 2.0
Catalogue identifier: AEEJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence, Version 2
No. of lines in distributed program, including test data, etc.: not applicable
No. of bytes in distributed program, including test data, etc.: 1 685 603 766
Distribution format: tar.gz
Programming language: Perl (server), Java (client)
Computer: x86
Operating system: Linux (server), Linux/Windows/MacOS (client)
RAM: 134 217 728 bytes (server), 67 108 864 bytes (client)
Classification: 6.5
External routines: VirtualBox (server), Java (client)
Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.
Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.
Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.
Running time: Not applicable
References:
  • [1] 
    J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.
  • [2] 
    P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406–418.

7.
We present HONEI, an open-source collection of libraries offering a hardware-oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and a further 3–4 and 4–16 times faster execution on the Cell processor and a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for the development and evaluation of such kernels, significantly simplifying their development.
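The hardware abstraction described above is essentially a compile-time dispatch on a backend tag. The following self-contained C++ sketch illustrates that pattern; the tag and operation names here are hypothetical and do not reproduce HONEI's actual API.

    // Illustrative backend-tag dispatch: application code is written once,
    // and the hardware backend is selected by a template tag at compile time.
    #include <vector>
    #include <cstddef>

    namespace tags { struct CPU {}; struct SSE {}; struct GPU {}; }

    // Generic operation, specialised per hardware tag.
    template <typename Tag_> struct ScaledSum;

    // Reference CPU backend: y <- y + alpha * x
    template <> struct ScaledSum<tags::CPU> {
        static void value(std::vector<double> &y, const std::vector<double> &x, double alpha) {
            for (std::size_t i = 0; i < y.size(); ++i)
                y[i] += alpha * x[i];
        }
    };

    // An optimised backend (SSE intrinsics, CUDA, Cell SPE code, ...) would
    // specialise the same template, e.g. template <> struct ScaledSum<tags::SSE> { ... };

    int main() {
        std::vector<double> y(1000, 1.0), x(1000, 2.0);
        ScaledSum<tags::CPU>::value(y, x, 0.5);   // swap the tag to target other hardware
        return 0;
    }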

Program summary

Program title: HONEI
Catalogue identifier: AEDW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPLv2
No. of lines in distributed program, including test data, etc.: 216 180
No. of bytes in distributed program, including test data, etc.: 1 270 140
Distribution format: tar.gz
Programming language: C++
Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3
Operating system: Linux
RAM: at least 500 MB free
Classification: 4.8, 4.3, 6.1
External routines: SSE: none; [1] for the GPU backend, [2] for the Cell backend
Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the underlying hardware towards heterogeneity and parallelism. This is particularly relevant for data-intensive problems stemming from discretisations with local support, such as finite differences, volumes and elements.
Solution method: To address these issues, we present a hardware-aware collection of libraries combining the advantages of modern software techniques and hardware-oriented programming. Applications built on top of these libraries can be configured trivially to execute on CPUs, GPUs or the Cell processor. In order to evaluate the performance and accuracy of our approach, we provide two domain-specific applications: a multigrid solver for the Poisson problem and a fully explicit solver for the 2D shallow water equations.
Restrictions: HONEI is actively being developed, and its feature list is continuously expanded. Not all combinations of operations and architectures might be supported in earlier versions of the code. Obtaining snapshots from http://www.honei.org is recommended.
Unusual features: The considered applications as well as all library operations can be run on NVIDIA GPUs and the Cell BE.
Running time: Depends on the application and the input sizes. The Poisson solver executes in a few seconds, while the SWE solver requires up to 5 minutes for large spatial discretisations or small timesteps.
References:
  • [1] 
    http://www.nvidia.com/cuda.
  • [2] 
    http://www.ibm.com/developerworks/power/cell.

8.
We discuss a program suite for simulating Quantum Chromodynamics on a 4-dimensional space–time lattice. The basic Hybrid Monte Carlo algorithm is introduced and a number of algorithmic improvements are explained. We then discuss the implementations of these concepts as well as our parallelisation strategy in the actual simulation code. Finally, we provide a user guide to compile and run the program.
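For readers unfamiliar with the method, the basic Hybrid Monte Carlo step (momentum refresh, molecular-dynamics trajectory, Metropolis accept/reject) is illustrated below on a one-dimensional toy action S(q) = q²/2. This is only a schematic of the algorithm named in the abstract; tmLQCD applies the same pattern to lattice gauge and pseudofermion fields with the algorithmic improvements described in the paper.

    // Toy Hybrid Monte Carlo for a 1-D Gaussian action.
    #include <cmath>
    #include <cstdio>
    #include <random>

    double S(double q)  { return 0.5 * q * q; }   // action
    double dS(double q) { return q; }             // its derivative

    int main() {
        std::mt19937 rng(12345);
        std::normal_distribution<double> gauss(0.0, 1.0);
        std::uniform_real_distribution<double> uni(0.0, 1.0);

        double q = 0.0;
        const int nsteps = 10, ntraj = 10000;
        const double dt = 0.1;
        int accepted = 0;

        for (int traj = 0; traj < ntraj; ++traj) {
            double p = gauss(rng);                       // refresh momentum
            double q_new = q, p_new = p;
            const double H_old = 0.5 * p * p + S(q);

            // leapfrog molecular-dynamics evolution
            p_new -= 0.5 * dt * dS(q_new);
            for (int s = 0; s < nsteps; ++s) {
                q_new += dt * p_new;
                if (s != nsteps - 1) p_new -= dt * dS(q_new);
            }
            p_new -= 0.5 * dt * dS(q_new);

            const double H_new = 0.5 * p_new * p_new + S(q_new);
            if (uni(rng) < std::exp(H_old - H_new)) {    // Metropolis accept/reject
                q = q_new;
                ++accepted;
            }
        }
        std::printf("acceptance = %.3f\n", double(accepted) / ntraj);
        return 0;
    }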

Program summary

Program title: tmLQCD
Catalogue identifier: AEEH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence (GPL)
No. of lines in distributed program, including test data, etc.: 122 768
No. of bytes in distributed program, including test data, etc.: 931 042
Distribution format: tar.gz
Programming language: C and MPI
Computer: any
Operating system: any with a standard C compiler
Has the code been vectorised or parallelised?: Yes. One or optionally any even number of processors may be used. Tested with up to 32 768 processors
RAM: no typical values available
Classification: 11.5
External routines: LAPACK [1] and LIME [2] library
Nature of problem: Quantum Chromodynamics
Solution method: Markov Chain Monte Carlo using the Hybrid Monte Carlo algorithm with mass preconditioning and multiple time scales [3]. Iterative solver for large systems of linear equations.
Restrictions: Restricted to an even number of (not necessarily mass degenerate) quark flavours in the Wilson or Wilson twisted mass formulation of lattice QCD.
Running time: Depending on the problem size, the architecture and the input parameters, from a few minutes to weeks.
References:
  • [1] 
    http://www.netlib.org/lapack/.
  • [2] 
    USQCD, http://usqcd.jlab.org/usqcd-docs/c-lime/.
  • [3] 
    C. Urbach, K. Jansen, A. Shindler, U. Wenger, Comput. Phys. Commun. 174 (2006) 87, hep-lat/0506011.

9.
The R-matrix method has proved to be a remarkably stable, robust and efficient technique for solving the close-coupling equations that arise in electron and photon collisions with atoms, ions and molecules. During the last thirty-four years a series of related R-matrix program packages have been published periodically in CPC. These packages are primarily concerned with low-energy scattering where the incident energy is insufficient to ionise the target. In this paper we describe 2DRMP, a suite of two-dimensional R-matrix propagation programs aimed at creating virtual experiments on high performance and grid architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies.
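For orientation, the one-sector R-matrix propagation step that 2DRMP generalises to two dimensions can be written schematically as follows. This is the standard propagator relation from the general R-matrix literature, quoted only as background (it is not transcribed from the 2DRMP paper), and sign and normalisation conventions vary between implementations:

\[ \mathbf{R}_{i} \;=\; \mathbf{r}^{(i)}_{OO} \;-\; \mathbf{r}^{(i)}_{OI}\left(\mathbf{r}^{(i)}_{II} + \mathbf{R}_{i-1}\right)^{-1}\mathbf{r}^{(i)}_{IO}, \]

where the \(\mathbf{r}^{(i)}\) blocks are the local R-matrix of subregion i evaluated on its inner (I) and outer (O) boundaries, and \(\mathbf{R}_{i-1}\) is the global R-matrix propagated up to the inner boundary.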

Program summary

Program title: 2DRMP
Catalogue identifier: AEEA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 196 717
No. of bytes in distributed program, including test data, etc.: 3 819 727
Distribution format: tar.gz
Programming language: Fortran 95, MPI
Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3]
Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3]
Has the code been vectorised or parallelised?: Yes. 16 cores were used for the small test run
Classification: 2.4
External routines: BLAS, LAPACK, PBLAS, ScaLAPACK
Subprograms used: ADAZ_v1_1
Nature of problem: 2DRMP is a suite of programs aimed at creating virtual experiments on high performance architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies.
Solution method: Two-dimensional R-matrix propagation theory. The (r1,r2) space of the internal region is subdivided into a number of subregions. Local R-matrices are constructed within each subregion and used to propagate a global R-matrix, ℜ, across the internal region. On the boundary of the internal region ℜ is transformed onto the IERM target state basis. Thus, the two-dimensional R-matrix propagation technique transforms an intractable problem into a series of tractable problems, enabling the internal region to be extended far beyond that which is possible with the standard one-sector codes. A distinctive feature of the method is that both electrons are treated identically and the R-matrix basis states are constructed to allow for both electrons to be in the continuum. The subregion size is flexible and can be adjusted to accommodate the number of cores available.
Restrictions: The implementation is currently restricted to electron scattering from H-like atoms and ions.
Additional comments: The programs have been designed to operate on serial computers and to exploit the distributed memory parallelism found on tightly coupled high performance clusters and supercomputers. 2DRMP has been systematically and comprehensively documented using ROBODoc [4], which is an API documentation tool that works by extracting specially formatted headers from the program source code and writing them to documentation files.
Running time: The wall clock running time for the small test run using 16 cores and performed on [3] is as follows: bp (7 s); rint2 (34 s); newrd (32 s); diag (21 s); amps (11 s); prop (24 s).
References:
  • [1] 
    HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, accessed 22 July, 2009.
  • [2] 
    HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, accessed 22 July, 2009.
  • [3] 
    HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, accessed 22 July, 2009.
  • [4] 
    Automating Software Documentation with ROBODoc, http://www.xs4all.nl/~rfsber/Robo/, accessed 22 July, 2009.

10.
We present the program EvolFMC v.2, which solves the evolution equations in QCD for the parton momentum distributions by means of a Monte Carlo technique based on a Markovian process. The program solves DGLAP-type evolution as well as modified-DGLAP-type evolutions. In both cases the evolution can be performed in the LO or NLO approximation. The quarks are treated as massless. The overall technical precision of the code has been established at 5×10⁻⁴. This way, for the first time ever, we demonstrate that with the Monte Carlo method one can solve the evolution equations with a precision comparable to that of other numerical methods.
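For reference, the LO DGLAP equation that such a Markovian algorithm solves has the standard textbook form (quoted here as general background, not transcribed from the EvolFMC paper):

\[ \frac{\partial q_i(x,Q^2)}{\partial \ln Q^2} \;=\; \frac{\alpha_s(Q^2)}{2\pi} \sum_j \int_x^1 \frac{dz}{z}\, P_{ij}(z)\, q_j\!\left(\frac{x}{z},Q^2\right), \]

where the \(P_{ij}(z)\) are the splitting functions; the Markovian Monte Carlo generates the successive parton emissions implied by the iterative solution of this equation.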

New version program summary

Program title: EvolFMC v.2
Catalogue identifier: AEFN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFN_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including binary test data, etc.: 66 456 (7407 lines of C++ code)
No. of bytes in distributed program, including test data, etc.: 412 752
Distribution format: tar.gz
Programming language: C++
Computer: PC, Mac
Operating system: Linux, Mac OS X
RAM: Less than 256 MB
Classification: 11.5
External routines: ROOT (http://root.cern.ch/drupal/)
Nature of problem: Solution of the QCD evolution equations for the parton momentum distributions of the DGLAP and modified-DGLAP type in the LO and NLO approximations.
Solution method: Monte Carlo simulation of the Markovian process of multiple emission of partons.
Restrictions:
1. Limited to the case of massless partons.
2. Implemented in the LO and NLO approximations only.
3. Weighted events only.
Unusual features: Modified-DGLAP evolutions included up to the NLO level.
Additional comments: Technical precision established at 5×10⁻⁴.
Running time: For 10⁶ events at 100 GeV: DGLAP NLO: 27 s; C-type modified DGLAP NLO: 150 s (MacBook Pro with Mac OS X v.10.5.5, 2.4 GHz Intel Core 2 Duo, gcc 4.2.4, single thread).

11.
This paper presents two coupled software packages which receive widespread use in the field of numerical simulations of Quantum Chromodynamics. These consist of the BAGEL library and the BAGEL fermion sparse-matrix library, BFM. The BAGEL library can generate assembly code for a number of architectures and is configurable, supporting several precision and memory pattern options to allow architecture-specific optimisation. It provides high performance on the QCDOC, BlueGene/L and BlueGene/P parallel computer architectures that are popular in the field of lattice QCD. The code includes a complete conjugate gradient implementation for the Wilson and domain wall fermion actions, making it easy to use for third-party codes including the Jefferson Laboratory's CHROMA, UKQCD's UKhadron, and the Riken–Brookhaven–Columbia Collaboration's CPS packages.
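The conjugate gradient solver mentioned above is the standard Krylov method for Hermitian positive-definite systems. The sketch below is a generic, dense-matrix illustration of that algorithm in plain C++; it is not Bagel/BFM code, and in lattice QCD the operator is applied matrix-free (typically as CG on the normal equations for the Dirac operator).

    // Generic conjugate-gradient sketch for a symmetric positive-definite system A x = b.
    #include <vector>
    #include <cmath>
    #include <cstddef>

    using Vec = std::vector<double>;
    using Mat = std::vector<Vec>;

    double dot(const Vec &a, const Vec &b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    }

    Vec matvec(const Mat &A, const Vec &x) {
        Vec y(x.size(), 0.0);
        for (std::size_t i = 0; i < A.size(); ++i)
            for (std::size_t j = 0; j < x.size(); ++j) y[i] += A[i][j] * x[j];
        return y;
    }

    // Solves A x = b starting from x = 0; returns the number of iterations used.
    int conjugate_gradient(const Mat &A, const Vec &b, Vec &x, double tol, int maxit) {
        x.assign(b.size(), 0.0);
        Vec r = b, p = b;                       // residual and search direction
        double rr = dot(r, r);
        for (int k = 0; k < maxit; ++k) {
            if (std::sqrt(rr) < tol) return k;
            Vec Ap = matvec(A, p);
            const double alpha = rr / dot(p, Ap);
            for (std::size_t i = 0; i < x.size(); ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            const double rr_new = dot(r, r);
            const double beta = rr_new / rr;
            for (std::size_t i = 0; i < p.size(); ++i) p[i] = r[i] + beta * p[i];
            rr = rr_new;
        }
        return maxit;
    }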

Program summary

Program title: Bagel
Catalogue identifier: AEFE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public License V2
No. of lines in distributed program, including test data, etc.: 109 576
No. of bytes in distributed program, including test data, etc.: 892 841
Distribution format: tar.gz
Programming language: C++, assembler
Computer: Massively parallel message passing. BlueGene/QCDOC/others.
Operating system: POSIX, Linux and compatible
Has the code been vectorised or parallelised?: Yes. 16 384 processors used.
Classification: 11.5
External routines: QMP, QDP++
Nature of problem: Quantum Chromodynamics sparse matrix inversion for Wilson and domain wall fermion formulations.
Solution method: Optimised Krylov linear solver.
Unusual features: Domain specific compiler generates optimised assembly code.
Running time: 1 h per matrix inversion; multi-year simulations.

12.
13.
The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are also used to calculate the derivatives of Feynman diagrams.
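Automated (automatic) differentiation evaluates derivatives exactly alongside the value of an expression, rather than by symbolic manipulation or finite differences. The minimal forward-mode sketch below, written with a dual-number type in C++, only illustrates that idea; HPsrc implements its own machinery in Fortran95, and the example function here is invented.

    // Minimal forward-mode automatic differentiation with dual numbers.
    #include <cmath>
    #include <cstdio>

    struct Dual {
        double val;  // value
        double der;  // derivative with respect to the chosen variable
    };

    Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.der + b.der}; }
    Dual operator*(Dual a, Dual b) { return {a.val * b.val, a.der * b.val + a.val * b.der}; }
    Dual sin(Dual a) { return {std::sin(a.val), std::cos(a.val) * a.der}; }

    int main() {
        Dual k{0.3, 1.0};                       // independent variable: der = 1
        Dual f = sin(k) * sin(k) + k;           // f(k) = sin^2(k) + k
        // exact derivative carried along: f'(k) = 2 sin(k) cos(k) + 1
        std::printf("f = %.6f, df/dk = %.6f\n", f.val, f.der);
        return 0;
    }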

Program summary

Program title: HiPPy, HPsrc
Catalogue identifier: AEDX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPLv2 (see Additional comments below)
No. of lines in distributed program, including test data, etc.: 513 426
No. of bytes in distributed program, including test data, etc.: 4 893 707
Distribution format: tar.gz
Programming language: Python, Fortran95
Computer: HiPPy: single-processor workstations. HPsrc: single-processor workstations and MPI-enabled multi-processor systems
Operating system: HiPPy: any for which Python v2.5.x is available. HPsrc: any for which a standards-compliant Fortran95 compiler is available
Has the code been vectorised or parallelised?: Yes
RAM: Problem specific, typically less than 1 GB for either code
Classification: 4.4, 11.5
Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions.
Solution method: An automated expansion method implemented in Python (HiPPy) and code to use the expansions to generate Feynman rules in Fortran95 (HPsrc).
Restrictions: No general restrictions. Specific restrictions are discussed in the text.
Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code, or of modifications of it, cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us.
Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run.
References:
  • [1] 
    A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340–353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026.
  • [2] 
    M. Lüscher, P. Weisz, Efficient Numerical Techniques for Perturbative Lattice Gauge Theory Computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.

14.
15.
We discuss in this work a new software tool, named E-SpiReS (Electron Spin Resonance Simulations), aimed at the interpretation of dynamical properties of molecules in fluids from electron spin resonance (ESR) measurements. The code implements an integrated computational approach (ICA) for the calculation of the relevant molecular properties that are needed in order to obtain spectral lines. The protocol encompasses information from the atomistic level (quantum mechanical) to the coarse-grained level (hydrodynamical), and evaluates ESR spectra for rigid or flexible, single- or multi-labeled paramagnetic molecules in isotropic and ordered phases, based on a numerical solution of a stochastic Liouville equation. E-SpiReS automatically interfaces all the computational methodologies scheduled in the ICA in a way completely transparent to the user, who controls the whole calculation flow via a graphical interface. Parallelized algorithms are employed in order to allow running on computing clusters, and a Java web applet has been developed with which it is possible to work from any operating system, avoiding recompilation problems. E-SpiReS has been used in the study of a number of different systems, and two relevant cases are reported to underline the promising applicability of the ICA to complex systems and the importance of similar software tools in handling a laborious protocol.
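The stochastic Liouville equation referred to above can be written, in its generic form from the ESR lineshape literature (quoted as background rather than transcribed from the E-SpiReS paper), as

\[ \frac{\partial}{\partial t}\,\rho(\Omega,t) \;=\; -\left[\, i\,\hat{\hat{L}}(\Omega) + \hat{\hat{\Gamma}}(\Omega)\,\right]\rho(\Omega,t), \]

where \(\rho\) is the spin density matrix, \(\hat{\hat{L}}\) the Liouville superoperator built from the orientation-dependent spin Hamiltonian with the quantum-mechanically computed magnetic tensors, and \(\hat{\hat{\Gamma}}\) the rotational diffusion operator built from the hydrodynamically computed diffusion tensor.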

Program summary

Program title: E-SpiReS
Catalogue identifier: AEEM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.0
No. of lines in distributed program, including test data, etc.: 311 761
No. of bytes in distributed program, including test data, etc.: 10 039 531
Distribution format: tar.gz
Programming language: C (core programs) and Java (graphical interface)
Computer: PC and Macintosh
Operating system: Unix and Windows
Has the code been vectorized or parallelized?: Yes
RAM: 2 048 000 000
Classification: 7.2
External routines: Babel-1.1, CLAPACK, BLAS, CBLAS, SPARSEBLAS, CQUADPACK, LEVMAR
Nature of problem: Ab initio simulation of cw-ESR spectra of radicals in solution
Solution method: E-SpiReS uses a hydrodynamic approach to calculate the diffusion tensor of the molecule, DFT methodologies to evaluate the magnetic tensors, and linear algebra techniques to solve numerically the stochastic Liouville equation to obtain an ESR spectrum.
Running time: Variable depending on the task, from seconds for small molecules in the fast motional regime to hours for big molecules in viscous and/or ordered media.

16.
To complete the 2DRMP package an asymptotic program, such as FARM, is needed. The original version of FARM is designed to construct the physical R-matrix, R, from surface amplitudes contained in the H-file. However, in 2DRMP, R has already been constructed for each scattering energy during propagation. Therefore, this modified version of FARM, known as FARM_2DRMP, has been developed solely for use with 2DRMP.

New version program summary

Program title: FARM_2DRMP
Catalogue identifier: ADAZ_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADAZ_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 13 806
No. of bytes in distributed program, including test data, etc.: 134 462
Distribution format: tar.gz
Programming language: Fortran 95 and MPI
Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3]
Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3]
Has the code been vectorized or parallelized?: Yes. 16 cores were used for the small test run
Classification: 2.4
External routines: BLAS, LAPACK
Does the new version supersede the previous version?: No
Nature of problem: The program solves the scattering problem in the asymptotic region of R-matrix theory where exchange is negligible.
Solution method: A radius is determined at which the wave function, calculated as a Gailitis expansion [4] with accelerated summing [5] over terms, converges. The R-matrix is propagated from the boundary of the internal region to this radius and the K-matrix calculated. Collision strengths or cross sections may be calculated.
Reasons for new version: To complete the 2DRMP package [6] an asymptotic program, such as FARM [7], is needed. The original version of FARM is designed to construct the physical R-matrix, R, from surface amplitudes contained in the H-file. However, in 2DRMP, R has already been constructed for each scattering energy during propagation, and each R is stored in one of the RmatT files described in Fig. 8 of [6]. Therefore, this modified version of FARM, known as FARM_2DRMP, has been developed solely for use with 2DRMP. Instructions on its use and corresponding test data are provided with 2DRMP [6].
Summary of revisions: FARM_2DRMP contains two codes, farm.f and farm_par.f90. The former is a serial code while the latter is a parallel F95 code that employs an MPI harness to enable the nenergy energies to be computed simultaneously across ncore cores, with each core processing either ⌊nenergy/ncore⌋ or ⌈nenergy/ncore⌉ energies. The input files, input.d and H, and the output file farm.out are as described in [7]. Both codes read R directly from RmatT.
Restrictions: FARM_2DRMP is for use solely with 2DRMP and for a specified L, S and Π combination. The energy range specified in input.d must match that specified in energies.data.
Running time: The wall clock running time for the small test run using 16 cores and performed on [3] is 9 seconds.
References:
  • [1] 
    HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, visited 22 July, 2009.
  • [2] 
    HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, visited 22 July, 2009.
  • [3] 
    HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, visited 22 July, 2009.
  • [4] 
    M. Gailitis, J. Phys. B 9 (1976) 843.
  • [5] 
    C.J. Noble, R.K. Nesbet, Comput. Phys. Comm. 33 (1984) 399.
  • [6] 
    N.S. Scott, M.P. Scott, P.G. Burke, T. Stitt, V. Faro-Maza, C. Denis, A. Maniopoulou, Comput. Phys. Comm. 180 (12) (2009) 2424–2449, this issue.
  • [7] 
    V.M. Burke, C.J. Noble, Comput. Phys. Comm. 85 (1995) 471.

17.
18.
19.
20.
A new nonlinear gyro-kinetic flux tube code (GKW) for the simulation of micro-instabilities and turbulence in magnetic confinement plasmas is presented in this paper. The code incorporates all physics effects that can be expected from a state-of-the-art gyro-kinetic simulation code in the local limit: kinetic electrons, electromagnetic effects, collisions, full general geometry with a coupling to an MHD equilibrium code, and E×B shearing. In addition, the physics of plasma rotation has been implemented through a formulation of the gyro-kinetic equation in the co-moving system. The gyro-kinetic model is five-dimensional and requires a massively parallel approach. GKW has been parallelised using MPI and scales well up to 8192+ cores. The paper presents the set of equations solved, the numerical methods, the code structure, and the essential benchmarks.
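The solution method quoted in the summary below combines pseudo-spectral and finite-difference discretisation with explicit time integration. Purely to illustrate what an explicit step looks like, here is a generic classical Runge-Kutta (RK4) update in C++ for du/dt = f(u, t); GKW's actual right-hand side is the discretised five-dimensional gyro-kinetic operator, and its real scheme is not specified here, so this is only a sketch.

    // Generic explicit RK4 step for du/dt = f(u, t); f is supplied by the caller.
    #include <vector>
    #include <cstddef>

    using State = std::vector<double>;

    template <typename RHS>
    State rk4_step(const State &u, double t, double dt, RHS f) {
        // helper: r = a + c * b (element-wise)
        auto axpy = [](const State &a, double c, const State &b) {
            State r(a.size());
            for (std::size_t i = 0; i < a.size(); ++i) r[i] = a[i] + c * b[i];
            return r;
        };
        State k1 = f(u, t);
        State k2 = f(axpy(u, 0.5 * dt, k1), t + 0.5 * dt);
        State k3 = f(axpy(u, 0.5 * dt, k2), t + 0.5 * dt);
        State k4 = f(axpy(u, dt, k3), t + dt);
        State out(u.size());
        for (std::size_t i = 0; i < u.size(); ++i)
            out[i] = u[i] + dt / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
        return out;
    }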

Program summary

Program title: GKW
Catalogue identifier: AEES_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEES_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL v3
No. of lines in distributed program, including test data, etc.: 29 998
No. of bytes in distributed program, including test data, etc.: 206 943
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Not computer specific
Operating system: Any for which a Fortran 95 compiler is available
Has the code been vectorised or parallelised?: Yes. The program can efficiently utilise 8192+ processors, depending on the problem and the available computer. 128 processors is reasonable for a typical nonlinear kinetic run on the latest x86-64 machines.
RAM: ∼128 MB–1 GB for a linear run; 25 GB for a typical nonlinear kinetic run (30 million grid points)
Classification: 19.8, 19.9, 19.11
External routines: None required, although the functionality of the program is somewhat limited without an MPI implementation (preferably MPI-2) and the FFTW3 library.
Nature of problem: Five-dimensional gyro-kinetic Vlasov equation in general flux tube tokamak geometry with kinetic electrons, electromagnetic effects and collisions.
Solution method: Pseudo-spectral and finite difference with explicit time integration.
Additional comments: The MHD equilibrium code CHEASE [1] is used for the general geometry calculations. This code has been developed at CRPP Lausanne and is not distributed together with GKW, but can be downloaded separately. The geometry module of GKW is based on version 7.1 of CHEASE, which includes the output for Hamada coordinates.
Running time: (On recent x86-64 hardware) ∼10 minutes for a short linear problem; 48 hours for a typical nonlinear kinetic run.
References:
  •  
    [1] H. Lütjens, A. Bondeson, O. Sauter, Comput. Phys. Comm. 97 (1996) 219, http://cpc.cs.qub.ac.uk/summaries/ADDH_v1_0.html.
