Similar literature
 20 similar records found
1.
The program FIESTA has been completely rewritten. It can now be used not only as a tool to evaluate Feynman integrals numerically, but also to expand Feynman integrals automatically in limits of momenta and masses, using sector decompositions and Mellin–Barnes representations. Other important improvements to the code are complete parallelization (even across multiple computers), high-precision arithmetic (allowing the calculation of integrals that were previously intractable), new integrators, Speer sectors as a decomposition strategy, and the possibility to evaluate more general parametric integrals.

Program summary

Program title: FIESTA 2
Catalogue identifier: AECP_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECP_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL version 2
No. of lines in distributed program, including test data, etc.: 39 783
No. of bytes in distributed program, including test data, etc.: 6 154 515
Distribution format: tar.gz
Programming language: Wolfram Mathematica 6.0 (or higher) and C
Computer: From a desktop PC to a supercomputer
Operating system: Unix, Linux, Windows, Mac OS X
Has the code been vectorised or parallelized?: Yes. The code has been parallelized for use on multi-kernel computers as well as on clusters, via MathLink over the TCP/IP protocol. The program works with a single processor, but it is ready for a parallel environment: the use of multi-kernel processors and multi-processor computers significantly speeds up the calculation, and on clusters the calculation speed can be improved even further.
RAM: Depends on the complexity of the problem
Classification: 4.4, 4.12, 5, 6.5
Catalogue identifier of previous version: AECP_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 735
External routines: QLink [1], Cuba library [2], MPFR [3]
Does the new version supersede the previous version?: Yes
Nature of problem: The sector decomposition approach to evaluating Feynman integrals falls into three stages: the sector decomposition itself, where the number of sectors has to be minimized; the pole resolution and epsilon expansion; and the numerical integration of the resulting expression.
Solution method: The sector decomposition is based on a new strategy as well as on classical strategies such as Speer sectors. The sector decomposition, pole resolution and epsilon expansion are performed in Wolfram Mathematica 6.0 or, preferably, 7.0 (enabling parallelization) [4]. The data is stored on hard disk via a special program, QLink [1]. The expression for integration is passed to the C part of the code, which parses the string and performs the integration with one of the algorithms in the Cuba library [2]. This part of the evaluation is fully parallelized on multi-kernel computers.
Reasons for new version:
  • 1. 
    The first version of FIESTA had problems related to numerical instability, so for some classes of integrals it could not produce a result.
  • 2. 
    The sector decomposition method can be applied not only to the numerical calculation of integrals, but also to other tasks, such as the expansion of integrals in limits and the discovery of poles.
Summary of revisions:
  • 1. 
    A new integrator library is used.
  • 2. 
    New methods to deal with numerical instability (MPFR library).
  • 3. 
    Parallelization in Mathematica.
  • 4. 
    Parallelization on multiple computers via TCP/IP.
  • 5. 
    New sector decomposition strategy (Speer sectors).
  • 6. 
    Possibility of using FIESTA for integral expansion.
  • 7. 
    Possibility of using FIESTA to discover poles in d.
  • 8. 
    New negative terms resolution strategies.
Restrictions: The complexity of the problem is mostly restricted by CPU time required to perform the evaluation of the integralRunning time: Depends on the complexity of the problemReferences:
  • [1] 
    http://qlink08.sourceforge.net, open source.
  • [2] 
    http://www.feynarts.de/cuba/, open source.
  • [3] 
    http://www.mpfr.org/, open source.
  • [4] 
    http://www.wolfram.com/products/mathematica/index.html.
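The three stages named above (sector decomposition, pole resolution with epsilon expansion, numerical integration of the finite remainder) can be illustrated on a textbook two-parameter integral. The following is a minimal Python sketch of the idea on I(ε) = ∫₀¹∫₀¹ dx dy (x+y)^(−2+ε); it is not FIESTA's algorithm, and the quadrature routine is only for illustration:

```python
import math

def midpoint(f, a, b, n=200_000):
    """Simple midpoint rule; accurate enough for these smooth integrands."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Toy integral I(eps) = Int_0^1 Int_0^1 dx dy (x+y)^(-2+eps), singular at x=y=0.
# Sector decomposition: split the domain into x>y and y>x; in the sector x>y
# substitute y = x*t (Jacobian x), which factorizes the singularity as x^(-1+eps):
#   I = 2 * Int_0^1 dx x^(-1+eps) * Int_0^1 dt (1+t)^(-2+eps)
# The x-integral is resolved analytically (= 1/eps); the remaining t-integral
# F(eps) is finite and is expanded in eps: F(eps) = F(0) + eps*F'(0) + ...
F0  = midpoint(lambda t: (1 + t) ** -2, 0.0, 1.0)
F0p = midpoint(lambda t: math.log(1 + t) * (1 + t) ** -2, 0.0, 1.0)

pole_coefficient = 2 * F0     # coefficient of 1/eps
finite_part      = 2 * F0p    # eps^0 term
print(pole_coefficient, finite_part)
```

The printed values can be checked against the exact expansion I(ε) = (2^ε − 2)/(ε(ε − 1)) = 1/ε + (1 − ln 2) + O(ε).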

2.
3.
A new stable version (“production version”), v5.28.00, of ROOT [1] has been published [2]. It features several major improvements in many areas, most notably in data storage performance as well as in statistics and graphics features. Some of these improvements were already announced in the original publication, Antcheva et al. (2009) [3]. This version will be maintained for at least 6 months; new minor revisions (“patch releases”) will be published [4] to solve problems reported with this version.

New version program summary

Program title: ROOT
Catalogue identifier: AEFA_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFA_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Lesser Public License v.2.1
No. of lines in distributed program, including test data, etc.: 2 934 693
No. of bytes in distributed program, including test data, etc.: 1009
Distribution format: tar.gz
Programming language: C++
Computer: Intel i386, Intel x86-64, Motorola PPC, Sun Sparc, HP PA-RISC
Operating system: GNU/Linux, Windows XP/Vista/7, Mac OS X, FreeBSD, OpenBSD, Solaris, HP-UX, AIX
Has the code been vectorized or parallelized?: Yes
RAM: > 55 Mbytes
Classification: 4, 9, 11.9, 14
Catalogue identifier of previous version: AEFA_v1_0
Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 2499
Does the new version supersede the previous version?: Yes
Nature of problem: Storage, analysis and visualization of scientific data
Solution method: Object store, wide range of analysis algorithms and visualization methods
Reasons for new version: Added features and corrections of deficiencies
Summary of revisions: The release notes at http://root.cern.ch/root/v528/Version528.news.html give a module-oriented overview of the changes in v5.28.00. Highlights include:
  • • 
    File format: Reading of TTrees has been improved dramatically with respect to CPU time (30%) and notably with respect to disk space.
  • • 
    Histograms: A new TEfficiency class has been provided to handle the calculation of efficiencies and their uncertainties, TH2Poly for polygon-shaped bins (e.g. maps), TKDE for kernel density estimation, and TSVDUnfold for unfolding based on singular value decomposition.
  • • 
    Graphics: Kerning is now supported in TLatex, PostScript and PDF; a table of contents can be added to PDF files. A new font provides italic symbols. A TPad containing GL content can be stored in a binary (i.e. non-vector) image file, and support for full-scene anti-aliasing has been added. Usability enhancements to EVE.
  • • 
    Math: New interfaces were added for generating random numbers according to a given distribution, goodness-of-fit tests of unbinned data, binning multidimensional data, and several advanced statistical functions.
  • • 
    RooFit: Introduction of HistFactory; major additions to RooStats.
  • • 
    TMVA: Updated to version 4.1.0, adding e.g. support for simultaneous classification of multiple output classes for several multivariate methods.
  • • 
    PROOF: Many new features adding to PROOF's usability, plus improvements and fixes.
  • • 
    PyROOT: Support for Python 3 has been added.
  • • 
    Tutorials: Several new tutorials were provided for the new features above (notably RooStats).
A detailed list of all the changes is available at http://root.cern.ch/root/htmldoc/examples/V5.
Additional comments: For an up-to-date author list see http://root.cern.ch/drupal/content/root-development-team and http://root.cern.ch/drupal/content/former-root-developers. The distribution file for this program is over 30 Mbytes and therefore is not delivered directly when a download or E-mail is requested. Instead, an html file giving details of how the program can be obtained is sent.
Running time: Depends on the data size and the complexity of the analysis algorithms.
References:
  • [1] 
    http://root.cern.ch.
  • [2] 
    http://root.cern.ch/drupal/content/production-version-528.
  • [3] 
    I. Antcheva, M. Ballintijn, B. Bellenot, M. Biskup, R. Brun, N. Buncic, Ph. Canal, D. Casadei, O. Couet, V. Fine, L. Franco, G. Ganis, A. Gheata, D. Gonzalez Maline, M. Goto, J. Iwaszkiewicz, A. Kreshuk, D. Marcos Segura, R. Maunder, L. Moneta, A. Naumann, E. Offermann, V. Onuchin, S. Panacek, F. Rademakers, P. Russo, M. Tadel, ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization, Comput. Phys. Commun. 180 (2009) 2499.
  • [4] 
    http://root.cern.ch/drupal/content/root-version-v5-28-00-patch-release-notes.
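As an illustration of the kind of binomial uncertainty treatment that a class like TEfficiency provides, here is a standalone sketch of the Wilson score interval for an efficiency passed/total. The Wilson interval is one of several statistical options for such efficiencies; this sketch is not ROOT's API or implementation:

```python
import math

def wilson_interval(passed, total, z=1.96):
    """Wilson score interval for an efficiency p = passed/total at
    confidence level given by the normal quantile z (1.96 ~ 95%).
    Illustrative only; not ROOT's TEfficiency implementation."""
    if total == 0:
        return (0.0, 1.0)
    p = passed / total
    denom = 1.0 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total)) / denom
    return (max(0.0, center - half), min(1.0, center + half))

# 80 passing events out of 100 trials:
lo, hi = wilson_interval(80, 100)
print(lo, hi)
```

Unlike the naive normal (Wald) interval, the Wilson interval stays inside [0, 1] and behaves sensibly for efficiencies near 0 or 1.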

4.
The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion–atom collisions. It strongly relies on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for the symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up the creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems.

Program summary

Program title: TXINT
Catalogue identifier: AEHS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 12 332
No. of bytes in distributed program, including test data, etc.: 157 086
Distribution format: tar.gz
Programming language: Fortran 95
Computer: All with a Fortran 95 compiler
Operating system: All with a Fortran 95 compiler
RAM: Depends heavily on input, usually less than 100 MiB
Classification: 16.10
Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model.
Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we greatly speed up the calculation using a library for symbolic manipulation of polynomials.
Restrictions: We restrict ourselves to a defined collision system in the impact parameter model.
Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code.
Additional comments: This program makes heavy use of the features introduced by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, as well as a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program:
  • • 
    GNU Fortran Compiler “gfortran” from version 4.3.0
  • • 
    GNU Fortran 95 Compiler “g95” from version 4.2.0
  • • 
    Intel Fortran Compiler “ifort” from version 11.0
Running time: Heavily dependent on input, usually less than one CPU second.
References:
  • [1] 
    J.-P. Hansen, A. Dubois, Comput. Phys. Commun. 67 (1992) 456.
  • [2] 
    R. Shakeshaft, J. Phys. B: At. Mol. Opt. Phys. 8 (1975) L134.
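The left-child right-sibling storage mentioned under "Unusual features" can be sketched in a few lines. In this representation an arbitrary-arity expression tree costs only two links per node, which is what makes it space-saving. The sketch below is illustrative Python; the actual TXINT library is Fortran 95 and far more elaborate:

```python
class Node:
    """Left-child right-sibling node: only two links per node, yet any
    number of children can be attached via the sibling chain."""
    def __init__(self, value):
        self.value = value
        self.child = None    # leftmost child
        self.sibling = None  # next sibling to the right

def add_child(parent, value):
    """Prepend a new child to parent's child list (O(1))."""
    node = Node(value)
    node.sibling = parent.child
    parent.child = node
    return node

def preorder(node):
    """Depth-first traversal using only the two per-node links."""
    while node is not None:
        yield node.value
        yield from preorder(node.child)
        node = node.sibling

# Store the terms of the (hypothetical) polynomial 3*x^2*y + 5*y.
root = Node('+')
t2 = add_child(root, '*')              # term 5*y (children are prepended)
add_child(t2, 'y'); add_child(t2, '5')
t1 = add_child(root, '*')              # term 3*x^2*y
add_child(t1, 'y'); add_child(t1, 'x^2'); add_child(t1, '3')
print(list(preorder(root)))
```

Because children are prepended, the most recently added term and factor come first in the traversal.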

5.
We discuss a program suite for simulating Quantum Chromodynamics on a 4-dimensional space–time lattice. The basic Hybrid Monte Carlo algorithm is introduced and a number of algorithmic improvements are explained. We then discuss the implementations of these concepts as well as our parallelisation strategy in the actual simulation code. Finally, we provide a user guide to compile and run the program.

Program summary

Program title: tmLQCD
Catalogue identifier: AEEH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence (GPL)
No. of lines in distributed program, including test data, etc.: 122 768
No. of bytes in distributed program, including test data, etc.: 931 042
Distribution format: tar.gz
Programming language: C and MPI
Computer: any
Operating system: any with a standard C compiler
Has the code been vectorised or parallelised?: Yes. One or, optionally, any even number of processors may be used. Tested with up to 32 768 processors
RAM: no typical values available
Classification: 11.5
External routines: LAPACK [1] and LIME [2] library
Nature of problem: Quantum Chromodynamics
Solution method: Markov chain Monte Carlo using the Hybrid Monte Carlo algorithm with mass preconditioning and multiple time scales [3]. Iterative solvers for large systems of linear equations.
Restrictions: Restricted to an even number of (not necessarily mass-degenerate) quark flavours in the Wilson or Wilson twisted mass formulation of lattice QCD.
Running time: Depending on the problem size, the architecture and the input parameters, from a few minutes to weeks.
References:
  • [1] 
    http://www.netlib.org/lapack/.
  • [2] 
    USQCD, http://usqcd.jlab.org/usqcd-docs/c-lime/.
  • [3] 
    C. Urbach, K. Jansen, A. Shindler, U. Wenger, Comput. Phys. Commun. 174 (2006) 87, hep-lat/0506011.
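The basic Hybrid Monte Carlo step (momentum refreshment, leapfrog molecular dynamics, Metropolis accept/reject) can be demonstrated on a trivial one-dimensional "action" S(q) = q²/2, whose target distribution is a standard normal. This is a toy Python sketch of the algorithm, not the lattice-QCD implementation:

```python
import math, random

random.seed(12345)

def S(q):                 # toy "action": negative log of a standard normal
    return 0.5 * q * q

def grad_S(q):
    return q

def hmc_step(q, n_md=10, dt=0.2):
    """One HMC trajectory: refresh the momentum, integrate the
    molecular-dynamics equations with leapfrog, then accept/reject
    on the energy violation dH of the integrator."""
    p = random.gauss(0.0, 1.0)
    h_old = 0.5 * p * p + S(q)
    qn, pn = q, p
    pn -= 0.5 * dt * grad_S(qn)          # initial half kick
    for step in range(n_md):
        qn += dt * pn                    # drift
        if step < n_md - 1:
            pn -= dt * grad_S(qn)        # full kick
    pn -= 0.5 * dt * grad_S(qn)          # final half kick
    h_new = 0.5 * pn * pn + S(qn)
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return qn                        # accept
    return q                             # reject: keep old configuration

q, samples = 0.0, []
for i in range(6000):
    q = hmc_step(q)
    if i >= 1000:                        # discard thermalization
        samples.append(q)

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(mean, 3), round(var, 3))
```

Because leapfrog is reversible and area-preserving, the accept/reject step makes the chain exact despite the finite step size; the sample mean and variance approach 0 and 1.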

6.
7.
To complete the 2DRMP package, an asymptotic program such as FARM is needed. The original version of FARM is designed to construct the physical R-matrix, R, from surface amplitudes contained in the H-file. In 2DRMP, however, R has already been constructed for each scattering energy during propagation. Therefore, this modified version of FARM, known as FARM_2DRMP, has been developed solely for use with 2DRMP.

New version program summary

Program title: FARM_2DRMP
Catalogue identifier: ADAZ_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADAZ_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 13 806
No. of bytes in distributed program, including test data, etc.: 134 462
Distribution format: tar.gz
Programming language: Fortran 95 and MPI
Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3]
Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3]
Has the code been vectorized or parallelized?: Yes. 16 cores were used for the small test run
Classification: 2.4
External routines: BLAS, LAPACK
Does the new version supersede the previous version?: No
Nature of problem: The program solves the scattering problem in the asymptotic region of R-matrix theory, where exchange is negligible.
Solution method: A radius is determined at which the wave function, calculated as a Gailitis expansion [4] with accelerated summing [5] over terms, converges. The R-matrix is propagated from the boundary of the internal region to this radius and the K-matrix is calculated. Collision strengths or cross sections may then be calculated.
Reasons for new version: To complete the 2DRMP package [6], an asymptotic program such as FARM [7] is needed. The original version of FARM is designed to construct the physical R-matrix, R, from surface amplitudes contained in the H-file. In 2DRMP, however, R has already been constructed for each scattering energy during propagation, and each R is stored in one of the RmatT files described in Fig. 8 of [6]. Therefore, this modified version of FARM, known as FARM_2DRMP, has been developed solely for use with 2DRMP. Instructions on its use and corresponding test data are provided with 2DRMP [6].
Summary of revisions: FARM_2DRMP contains two codes, farm.f and farm_par.f90. The former is a serial code, while the latter is a parallel F95 code that employs an MPI harness to enable the nenergy energies to be computed simultaneously across ncore cores, with each core processing either ⌊nenergy/ncore⌋ or ⌈nenergy/ncore⌉ energies. The input files, input.d and H, and the output file, farm.out, are as described in [7]. Both codes read R directly from RmatT.
Restrictions: FARM_2DRMP is for use solely with 2DRMP and for a specified L, S and Π combination. The energy range specified in input.d must match that specified in energies.data.
Running time: The wall clock running time for the small test run, using 16 cores and performed on [3], is 9 s.
References:
  • [1] 
    HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, visited 22 July, 2009.
  • [2] 
    HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, visited 22 July, 2009.
  • [3] 
    HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, visited 22 July, 2009.
  • [4] 
    M. Gailitis, J. Phys. B 9 (1976) 843.
  • [5] 
    C.J. Noble, R.K. Nesbet, Comput. Phys. Comm. 33 (1984) 399.
  • [6] 
    N.S. Scott, M.P. Scott, P.G. Burke, T. Stitt, V. Faro-Maza, C. Denis, A. Maniopoulou, Comput. Phys. Comm. 180 (12) (2009) 2424–2449, this issue.
  • [7] 
    V.M. Burke, C.J. Noble, Comput. Phys. Comm. 85 (1995) 471.
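The ⌊nenergy/ncore⌋ or ⌈nenergy/ncore⌉ split described in the summary of revisions amounts to a standard block distribution. The names nenergy and ncore are taken from the text; this is an illustrative Python sketch, not the F95/MPI code:

```python
def distribute(nenergy, ncore):
    """Split nenergy energy indices across ncore cores as evenly as
    possible: the first (nenergy % ncore) cores receive one extra
    energy, so every core gets floor(nenergy/ncore) or
    ceil(nenergy/ncore) of them."""
    base, extra = divmod(nenergy, ncore)
    chunks, start = [], 0
    for core in range(ncore):
        size = base + (1 if core < extra else 0)
        chunks.append(list(range(start, start + size)))
        start += size
    return chunks

chunks = distribute(10, 4)
print([len(c) for c in chunks])
```

For 10 energies on 4 cores the chunk sizes are 3, 3, 2, 2, and the chunks partition the full index range without gaps or overlap.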

8.
The new version of the Motion4D library now also includes the integration of a Sachs basis and of the Jacobi equation to determine the gravitational lensing of pointlike sources in arbitrary spacetimes.

New version program summary

Program title: Motion4D-library
Catalogue identifier: AEEX_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEX_v3_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 219 441
No. of bytes in distributed program, including test data, etc.: 6 968 223
Distribution format: tar.gz
Programming language: C++
Computer: All platforms with a C++ compiler
Operating system: Linux, Windows
RAM: 61 Mbytes
Classification: 1.5
External routines: Gnu Scientific Library (GSL) (http://www.gnu.org/software/gsl/)
Catalogue identifier of previous version: AEEX_v2_0
Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 703
Does the new version supersede the previous version?: Yes
Nature of problem: Solve the geodesic equation as well as parallel and Fermi–Walker transport in four-dimensional Lorentzian spacetimes. Determine gravitational lensing by integration of the Jacobi equation and parallel transport of a Sachs basis.
Solution method: Integration of ordinary differential equations.
Reasons for new version: The main novelty of the current version is the extension to integrate the Jacobi equation and the parallel transport of the Sachs basis along null geodesics. In combination, the change of the cross section of a light bundle, and thus the gravitational lensing effect of a spacetime, can be determined. Furthermore, we have implemented several new metrics.
Summary of revisions: The main novelty of the current version is the integration of the Jacobi equation and the parallel transport of the Sachs basis along null geodesics.
The corresponding set of equations reads

(1) \ddot{x}^{\mu} + \Gamma^{\mu}_{\nu\rho}\,\dot{x}^{\nu}\dot{x}^{\rho} = 0,
(2) \dot{s}^{\mu}_{1,2} + \Gamma^{\mu}_{\nu\rho}\,s^{\nu}_{1,2}\,\dot{x}^{\rho} = 0,
(3) \frac{D^{2}Y^{\mu}_{1,2}}{d\lambda^{2}} + R^{\mu}{}_{\nu\rho\sigma}\,\dot{x}^{\nu}\,Y^{\rho}_{1,2}\,\dot{x}^{\sigma} = 0,

where (1) is the geodesic equation, (2) represents the parallel transport of the two Sachs basis vectors s_{1,2}, and (3) is the Jacobi equation for the two Jacobi fields Y_{1,2}. The initial directions of the Sachs basis vectors are defined perpendicular to the initial direction of the light ray, see also Fig. 1 and Eqs. (4a), (4b). A congruence of null geodesics with central null geodesic γ, which starts at the observer O with an infinitesimal circular cross section, is defined by the above-mentioned two Jacobi fields with initial conditions Y_{1,2}(0) = 0 and \dot{Y}_{1,2}(0) = s_{1,2}. The cross section of this congruence along γ is described by the Jacobian J_{OS}. However, to determine the gravitational lensing of a pointlike source S that is connected to the observer via γ, we need the reverse Jacobian J_{SO}. Fortunately, the reverse Jacobian is just the negative transpose of the original Jacobian J_{OS},

(5) J := J_{SO} = -(J_{OS})^{T}.

The Jacobian J transforms the circular shape of the congruence into an ellipse whose shape parameters (M_{\pm}: major/minor axis, ψ: angle of the major axis, ε: ellipticity) are given by Eqs. (6a)–(6c), e.g.

(6b) ψ = \arctan2(J_{21}\cos\zeta_{+} + J_{22}\sin\zeta_{+},\; J_{11}\cos\zeta_{+} + J_{12}\sin\zeta_{+}),

with ζ_{±} defined in Eq. (7) and the parameter α = J_{11}J_{12} + J_{21}J_{22}. The magnification factor is given by Eq. (8). These shape parameters can be easily visualized in the new version of the GeodesicViewer, see Ref. [1]. A detailed discussion of gravitational lensing can be found, for example, in Schneider et al. [2].

In the following, a list of newly implemented metrics is given:
  • • 
    BertottiKasner: see Rindler [3].
  • • 
    BesselGravWaveCart: gravitational Bessel wave from Kramer [4].
  • • 
    DeSitterUniv, DeSitterUnivConf: de Sitter universe in Cartesian and conformal coordinates.
  • • 
    Ernst: Black hole in a magnetic universe by Ernst [5].
  • • 
    ExtremeReissnerNordstromDihole: see Chandrasekhar [6].
  • • 
    HalilsoyWave: see Ref. [7].
  • • 
    JaNeWi: Janis–Newman–Winicour metric, see Ref. [8].
  • • 
    MinkowskiConformal: Minkowski metric in conformally rescaled coordinates.
  • • 
    PTD_AI, PTD_AII, PTD_AIII, PTD_BI, PTD_BII, PTD_BIII, PTD_C: Petrov type D (Levi-Civita) spacetimes, see Ref. [7].
  • • 
    PainleveGullstrand: Schwarzschild metric in Painlevé–Gullstrand coordinates, see Ref. [9].
  • • 
    PlaneGravWave: Plane gravitational wave, see Ref. [10].
  • • 
    SchwarzschildIsotropic: Schwarzschild metric in isotropic coordinates, see Ref. [11].
  • • 
    SchwarzschildTortoise: Schwarzschild metric in tortoise coordinates, see Ref. [11].
  • • 
    Sultana-Dyer: A black hole in the Einstein–de Sitter universe by Sultana and Dyer [12].
  • • 
    TaubNUT: see Ref. [13].
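Returning to the lensing part of this version: the mapping of the circular bundle cross section to an ellipse by the 2×2 Jacobian J is, in essence, a singular value decomposition. The sketch below assumes the common conventions that the major/minor axes M± are the singular values of J, the ellipticity is ε = 1 − M−/M+ and the magnification is 1/|det J|; these conventions may differ in detail from Eq. (6) of this entry:

```python
import math

def bundle_ellipse(J):
    """Shape of the image of a unit circle under a 2x2 Jacobian J.
    M_plus/M_minus are the singular values of J (major/minor axis),
    psi the orientation of the major axis, eps the ellipticity and
    mu the magnification (assumed conventions, see above)."""
    (a, b), (c, d) = J
    t = a*a + b*b + c*c + d*d          # sum of squared matrix entries
    det = a*d - b*c
    # Closed-form singular values from s+^2 + s-^2 = t, s+ * s- = |det|:
    ssum = math.sqrt(max(t + 2.0 * abs(det), 0.0))   # s+ + s-
    sdiff = math.sqrt(max(t - 2.0 * abs(det), 0.0))  # s+ - s-
    M_plus, M_minus = 0.5 * (ssum + sdiff), 0.5 * (ssum - sdiff)
    # Major-axis direction: principal eigenvector of J * J^T.
    psi = 0.5 * math.atan2(2.0 * (a*c + b*d), a*a + b*b - c*c - d*d)
    eps = 1.0 - M_minus / M_plus
    mu = 1.0 / abs(det)                # magnification (assumed convention)
    return M_plus, M_minus, psi, eps, mu

# A bundle stretched by a factor 2 along the first Sachs direction:
M_plus, M_minus, psi, eps, mu = bundle_ellipse([[2.0, 0.0], [0.0, 1.0]])
print(M_plus, M_minus, psi, eps, mu)
```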
The Christoffel symbols and the natural local tetrads of these new metrics are given in the Catalogue of Spacetimes, Ref. [14].

To study the behavior of geodesics, it is often useful to determine an effective potential, as in classical mechanics. For several metrics, we followed the Euler–Lagrange approach as described by Rindler [10] and implemented an effective potential for a specific situation. As an example, consider the Lagrangian for timelike geodesics in the θ = π/2 hypersurface of the Schwarzschild spacetime, with α = 1 − 2m/r. The Euler–Lagrange equations lead to the energy balance equation \dot{r}^{2} = k^{2} − V(r) with the effective potential V(r) = (r − 2m)(r^{2} + h^{2})/r^{3} and the constants of motion k and h. The constants of motion for a timelike geodesic that starts at (r = 10m, φ = 0) with initial direction ξ = π/4 with respect to the black hole direction and with initial velocity β = 0.7 read k ≈ 1.252 and h ≈ 6.931. Then, from the energy balance equation we immediately obtain the radius of closest approach, r_min ≈ 5.927.

Besides a standard fourth-order Runge–Kutta integrator and the integrators of the Gnu Scientific Library (GSL), we have also implemented a standard Bulirsch–Stoer integrator.
Running time: The test runs provided with the distribution require only a few seconds to run.
References:
  • [1] 
    T. Müller, New version announcement to the GeodesicViewer, http://cpc.cs.qub.ac.uk/summaries/AEFP_v2_0.html.
  • [2] 
    P. Schneider, J. Ehlers, E. E. Falco, Gravitational Lenses, Springer, 1992.
  • [3] 
    W. Rindler, Phys. Lett. A 245 (1998) 363.
  • [4] 
    D. Kramer, Ann. Phys. 9 (2000) 331.
  • [5] 
    F.J. Ernst, J. Math. Phys. 17 (1976) 54.
  • [6] 
    S. Chandrasekhar, Proc. R. Soc. Lond. A 421 (1989) 227.
  • [7] 
    H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers, E. Herlt, Exact Solutions of the Einstein Field Equations, Cambridge University Press, 2009.
  • [8] 
    A.I. Janis, E.T. Newman, J. Winicour, Phys. Rev. Lett. 20 (1968) 878.
  • [9] 
    K. Martel, E. Poisson, Am. J. Phys. 69 (2001) 476.
  • [10] 
    W. Rindler, Relativity – Special, General, and Cosmology, Oxford University Press, Oxford, 2007.
  • [11] 
    C.W. Misner, K.S. Thorne, J.A. Wheeler, Gravitation, W.H. Freeman, 1973.
  • [12] 
    J. Sultana, C.C. Dyer, Gen. Relativ. Gravit. 37 (2005) 1349.
  • [13] 
    D. Bini, C. Cherubini, Robert T. Jantzen, Class. Quantum Grav. 19 (2002) 5481.
  • [14] 
    T. Muller, F. Grave, arXiv:0904.4184 [gr-qc].
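The quoted radius of closest approach can be reproduced directly from the effective potential V(r) = (r − 2m)(r² + h²)/r³ and the constants k ≈ 1.252, h ≈ 6.931 given above. This is a minimal Python sketch in units m = 1; the energy-balance form ṙ² = k² − V(r) is assumed from the standard Schwarzschild treatment, not quoted from the library:

```python
def V(r, m=1.0, h=6.931):
    """Effective potential V(r) = (r - 2m)(r^2 + h^2)/r^3 for timelike
    geodesics in the theta = pi/2 hypersurface of Schwarzschild."""
    return (r - 2.0 * m) * (r * r + h * h) / r**3

def r_min(k, lo=3.0, hi=9.0, tol=1e-10):
    """Radius of closest approach: the turning point rdot = 0, i.e. the
    root of f(r) = k^2 - V(r) found by bisection. For these parameters
    f(lo) < 0 (inside the potential barrier) and f(hi) > 0."""
    f = lambda r: k * k - V(r)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rmin = r_min(1.252)
print(rmin)
```

With the rounded constants from the text, the bisection returns roughly 5.93, consistent with the quoted r_min ≈ 5.927.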

9.
The R-matrix method has proved to be a remarkably stable, robust and efficient technique for solving the close-coupling equations that arise in electron and photon collisions with atoms, ions and molecules. During the last thirty-four years, a series of related R-matrix program packages has been published periodically in CPC. These packages are primarily concerned with low-energy scattering, where the incident energy is insufficient to ionise the target. In this paper we describe 2DRMP, a suite of two-dimensional R-matrix propagation programs aimed at creating virtual experiments on high performance and grid architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies.

Program summary

Program title: 2DRMP
Catalogue identifier: AEEA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 196 717
No. of bytes in distributed program, including test data, etc.: 3 819 727
Distribution format: tar.gz
Programming language: Fortran 95, MPI
Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3]
Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3]
Has the code been vectorised or parallelised?: Yes. 16 cores were used for the small test run
Classification: 2.4
External routines: BLAS, LAPACK, PBLAS, ScaLAPACK
Subprograms used: ADAZ_v1_1
Nature of problem: 2DRMP is a suite of programs aimed at creating virtual experiments on high performance architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies.
Solution method: Two-dimensional R-matrix propagation theory. The (r1, r2) space of the internal region is subdivided into a number of subregions. Local R-matrices are constructed within each subregion and used to propagate a global R-matrix, ℜ, across the internal region. On the boundary of the internal region, ℜ is transformed onto the IERM target state basis. Thus, the two-dimensional R-matrix propagation technique transforms an intractable problem into a series of tractable problems, enabling the internal region to be extended far beyond what is possible with the standard one-sector codes. A distinctive feature of the method is that both electrons are treated identically, and the R-matrix basis states are constructed to allow both electrons to be in the continuum. The subregion size is flexible and can be adjusted to accommodate the number of cores available.
Restrictions: The implementation is currently restricted to electron scattering from H-like atoms and ions.
Additional comments: The programs have been designed to operate on serial computers and to exploit the distributed-memory parallelism found on tightly coupled high performance clusters and supercomputers. 2DRMP has been systematically and comprehensively documented using ROBODoc [4], an API documentation tool that works by extracting specially formatted headers from the program source code and writing them to documentation files.
Running time: The wall clock running time for the small test run, using 16 cores and performed on [3], is as follows: bp (7 s); rint2 (34 s); newrd (32 s); diag (21 s); amps (11 s); prop (24 s).
References:
  • [1] 
    HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, accessed 22 July, 2009.
  • [2] 
    HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, accessed 22 July, 2009.
  • [3] 
    HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, accessed 22 July, 2009.
  • [4] 
    Automating Software Documentation with ROBODoc, http://www.xs4all.nl/~rfsber/Robo/, accessed 22 July, 2009.
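The subdivision of the (r1, r2) internal region into subregions, with local R-matrices feeding a propagated global R-matrix, suggests a wavefront pattern of work. The sketch below enumerates a triangular subregion grid (exploiting the r1 ↔ r2 symmetry) in anti-diagonal waves; this ordering is only a plausible illustration of such a propagation pattern, not the actual 2DRMP ordering:

```python
def propagation_order(nsub):
    """Anti-diagonal sweep over the triangular set of subregions
    {(i, j): 0 <= i <= j < nsub} of the (r1, r2) internal region.
    Subregions on diagonal d = i + j depend only on diagonals < d,
    so each returned wave could, in principle, be processed
    concurrently (illustrative scheduling only)."""
    order = []
    for d in range(2 * nsub - 1):
        wave = [(i, d - i) for i in range(nsub)
                if 0 <= d - i < nsub and i <= d - i]
        if wave:
            order.append(wave)
    return order

for wave in propagation_order(3):
    print(wave)
```

For a 3x3 subdivision the waves are [(0,0)], [(0,1)], [(0,2),(1,1)], [(1,2)], [(2,2)]: the wave width grows and shrinks, which is one reason a flexible subregion size helps match the available core count.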

10.
The QCDMAPT program package facilitates computations in the framework of the dispersive approach to Quantum Chromodynamics. The QCDMAPT_F version of this package enables one to perform such computations in Fortran, whereas the previous version was developed for use with the Maple system. The QCDMAPT_F package possesses the same basic features as its previous version. Namely, it embodies the calculated explicit expressions for the relevant spectral functions up to the four-loop level and the subroutines for the necessary integrals.

New version program summary

Program title: QCDMAPT_F
Catalogue identifier: AEGP_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGP_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 786
No. of bytes in distributed program, including test data, etc.: 332 329
Distribution format: tar.gz
Programming language: Fortran 77 and higher
Computer: Any which supports Fortran 77
Operating system: Any which supports Fortran 77
Classification: 11.1, 11.5, 11.6
External routines: MATHLIB routine RADAPT (D102) from the CERNLIB Program Library [1]
Catalogue identifier of previous version: AEGP_v1_0
Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1769
Does the new version supersede the previous version?: No. This version provides an alternative to the previous, Maple, version.
Nature of problem: A central object of the dispersive (or “analytic”) approach to Quantum Chromodynamics [2,3] is the so-called spectral function, which can be calculated by making use of the strong running coupling. At the one-loop level the latter has a quite simple form and the relevant spectral function can easily be calculated. However, at higher loop levels the strong running coupling has a rather cumbersome structure. There, the explicit calculation of the corresponding spectral functions represents a somewhat complicated task (see Section 3 and Appendix B of Ref. [4]), whereas their numerical evaluation requires a lot of computational resources and essentially slows down the overall computation process.
Solution method: The developed package includes the calculated explicit expressions for the relevant spectral functions up to the four-loop level and the subroutines for the necessary integrals.
Reasons for new version: The previous version of the package (Ref. [4]) was developed for use with the Maple system. The new version is developed for the Fortran programming language.
Summary of revisions: The QCDMAPT_F package consists of the main program (QCDMAPT_F.f) and two samples of the file containing the values of the input parameters (QCDMAPT_F.i1 and QCDMAPT_F.i2). The main program includes the definitions of the relevant spectral functions and the subroutines for the necessary integrals. The main program also provides an example of the computation of the values of the (M)APT spacelike/timelike expansion functions for the specified set of input parameters and (as an option) generates output data files with the values of these functions over the given kinematic intervals.
Additional comments: For the proper functioning of the QCDMAPT_F package, the “MATHLIB” CERNLIB library [1] has to be installed.
Running time: The running time of the main program with the sample set of input parameters specified in the file QCDMAPT_F.i2 is about a minute (depending on the CPU).
References:
  • [1] 
    Subroutine D102 of the “MATHLIB” CERNLIB library, URL addresses: http://cernlib.web.cern.ch/cernlib/mathlib.html, http://wwwasdoc.web.cern.ch/wwwasdoc/shortwrupsdir/d102/top.html.
  • [2] 
    D.V. Shirkov, I.L. Solovtsov, Phys. Rev. Lett. 79 (1997) 1209;
    K.A. Milton, I.L. Solovtsov, Phys. Rev. D 55 (1997) 5295;
    K.A. Milton, I.L. Solovtsov, Phys. Rev. D 59 (1999) 107701;
    I.L. Solovtsov, D.V. Shirkov, Theor. Math. Phys. 120 (1999) 1220;
    D.V. Shirkov, I.L. Solovtsov, Theor. Math. Phys. 150 (2007) 132.
  • [3] 
    A.V. Nesterenko, Phys. Rev. D 62 (2000) 094028;
    A.V. Nesterenko, Phys. Rev. D 64 (2001) 116009;
    A.V. Nesterenko, Int. J. Mod. Phys. A 18 (2003) 5475;
    A.V. Nesterenko, J. Papavassiliou, J. Phys. G 32 (2006) 1025;
    A.V. Nesterenko, Nucl. Phys. B (Proc. Suppl.) 186 (2009) 207.
  • [4] 
    A.V. Nesterenko, C. Simolo, Comput. Phys. Comm. 181 (2010) 1769.
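The one-loop case described above as "quite simple" can be sketched explicitly. The snippet below is an illustration only, not part of QCDMAPT_F; the function names, the value of Λ² and nf = 3 are assumed toy choices. It evaluates the one-loop spectral function ρ(σ) = Im α_s(−σ − i0) = (4π/β0)·π/(ln²(σ/Λ²) + π²) and the spacelike analytic coupling through the dispersion relation α_an(Q²) = (1/π)∫₀^∞ ρ(σ)/(σ + Q²) dσ.

```python
import math

def beta0(nf=3):
    # One-loop QCD beta-function coefficient: beta0 = 11 - 2*nf/3.
    return 11.0 - 2.0 * nf / 3.0

def rho1(sigma, lam2=0.16, nf=3):
    # One-loop spectral function rho(sigma) = Im alpha_s(-sigma - i0):
    # rho = (4*pi/beta0) * pi / (ln^2(sigma/Lambda^2) + pi^2).
    L = math.log(sigma / lam2)
    return (4.0 * math.pi / beta0(nf)) * math.pi / (L * L + math.pi ** 2)

def alpha_an(q2, lam2=0.16, nf=3, n=20000, span=300.0):
    # Spacelike analytic coupling from the dispersion relation
    #   alpha_an(Q^2) = (1/pi) * Int_0^inf rho(s) / (s + Q^2) ds,
    # evaluated with the trapezoidal rule in t = ln(s/Lambda^2), ds = s dt.
    h = 2.0 * span / n
    total = 0.0
    for i in range(n + 1):
        t = -span + i * h
        s = lam2 * math.exp(t)
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho1(s, lam2, nf) * s / (s + q2)
    return total * h / math.pi
```

A useful sanity check is the universal infrared limit of the analytic coupling, α_an(0) = 4π/β0, which the dispersion integral must reproduce.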

11.
The GeodesicViewer realizes exocentric two- and three-dimensional illustrations of lightlike and timelike geodesics in the general theory of relativity. By means of an intuitive graphical user interface, all parameters of a spacetime as well as the initial conditions of the geodesics can be modified interactively.

New version program summary

Program title: GeodesicViewerCatalogue identifier: AEFP_v2_0Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFP_v2_0.htmlProgram obtainable from: CPC Program Library, Queen's University, Belfast, N. IrelandLicensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.htmlNo. of lines in distributed program, including test data, etc.: 76 202No. of bytes in distributed program, including test data, etc.: 1 722 290Distribution format: tar.gzProgramming language: C++, OpenGLComputer: All platforms with a C++ compiler, Qt, OpenGLOperating system: Linux, Mac OS X, WindowsRAM: 24 MBytesClassification: 1.5External routines:
  • Motion4D (included in the package)
  • Gnu Scientific Library (GSL) (http://www.gnu.org/software/gsl/)
  • Qt (http://qt.nokia.com/downloads)
  • OpenGL (http://www.opengl.org/)
Catalogue identifier of previous version: AEFP_v1_0Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 413Does the new version supersede the previous version?: YesNature of problem: Illustrate geodesics in four-dimensional Lorentzian spacetimes.Solution method: Integration of ordinary differential equations. 3D-Rendering via OpenGL.Reasons for new version: The main reason for the new version was to visualize the parallel transport of the Sachs legs and to show the influence of curved spacetime on a bundle of light rays as is realized in the new version of the Motion4D library (http://cpc.cs.qub.ac.uk/summaries/AEEX_v3_0.html).Summary of revisions:
  • By choosing the new geodesic type “lightlike_sachs”, the parallel transport of the Sachs basis and the integration of the Jacobi equation can be visualized.
  • The 2D representation via Qwt was replaced by an OpenGL 2D implementation to speed up the visualization.
  • Viewing parameters can now be stored in a configuration file (.cfg).
  • Several new objects can be used in the 3D and 2D representations.
  • Several predefined local tetrads can be chosen.
  • There are some minor modifications: new mouse control (rotate on sphere); line smoothing; the current last point in coordinates is shown; mutual-coordinate representation extended; current cursor position in 2D; colors for the 2D view.
Running time: Interactive. The examples given take milliseconds.
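The core numerical task named in the solution method, integrating the geodesic equation ẍ^μ + Γ^μ_{αβ} ẋ^α ẋ^β = 0 as a first-order ODE system, can be sketched independently of the GeodesicViewer code. The example below is an illustration with assumed names, using the unit 2-sphere instead of a relativistic spacetime so that the geodesics are great circles; a classical fixed-step RK4 integrator stands in for the program's actual ODE machinery.

```python
import math

def sphere_christoffels(theta, phi):
    # Nonzero Christoffel symbols of the unit 2-sphere metric
    # ds^2 = d(theta)^2 + sin^2(theta) d(phi)^2:
    #   Gamma^theta_{phi phi}  = -sin(theta) cos(theta)
    #   Gamma^phi_{theta phi}  =  Gamma^phi_{phi theta} = cot(theta)
    g = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    g[0][1][1] = -math.sin(theta) * math.cos(theta)
    g[1][0][1] = g[1][1][0] = math.cos(theta) / math.sin(theta)
    return g

def geodesic_rhs(state):
    # state = (x^0, x^1, v^0, v^1); returns d(state)/d(lambda) from
    # the geodesic equation a^mu = -Gamma^mu_{ab} v^a v^b.
    x, v = state[:2], state[2:]
    gam = sphere_christoffels(*x)
    acc = [-sum(gam[m][a][b] * v[a] * v[b]
                for a in range(2) for b in range(2)) for m in range(2)]
    return [v[0], v[1], acc[0], acc[1]]

def rk4_step(state, h):
    # One classical Runge-Kutta (RK4) step.
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = geodesic_rhs([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = geodesic_rhs([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def integrate(state, h=0.01, steps=314):
    for _ in range(steps):
        state = rk4_step(state, h)
    return state
```

Starting on the equator (θ = π/2) with purely azimuthal velocity, the trajectory must stay on the equator, which gives a quick correctness check of the integrator.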

12.
The CIF2Cell program generates the geometrical setup for a number of electronic structure programs based on the crystallographic information in a Crystallographic Information Framework (CIF) file. The program will retrieve the space group number, Wyckoff positions and crystallographic parameters, make a sensible choice for Bravais lattice vectors (primitive or principal cell) and generate all atomic positions. Supercells can be generated and alloys are handled gracefully. The code currently has output interfaces to the electronic structure programs ABINIT, CASTEP, CPMD, Crystal, Elk, Exciting, EMTO, Fleur, RSPt, Siesta and VASP.

Program summary

Program title: CIF2CellCatalogue identifier: AEIM_v1_0Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIM_v1_0.htmlProgram obtainable from: CPC Program Library, Queen's University, Belfast, N. IrelandLicensing provisions: GNU GPL version 3No. of lines in distributed program, including test data, etc.: 12 691No. of bytes in distributed program, including test data, etc.: 74 933Distribution format: tar.gzProgramming language: Python (versions 2.4–2.7)Computer: Any computer that can run Python (versions 2.4–2.7)Operating system: Any operating system that can run Python (versions 2.4–2.7)Classification: 7.3, 7.8, 8External routines: PyCIFRW [1]Nature of problem: Generate the geometrical setup of a crystallographic cell for a variety of electronic structure programs from data contained in a CIF file.Solution method: The CIF file is parsed using routines contained in the library PyCIFRW [1], and crystallographic as well as bibliographic information is extracted. The program then generates the principal cell from symmetry information, crystal parameters, space group number and Wyckoff sites. Reduction to a primitive cell is then performed, and the resulting cell is output to suitably named files along with documentation of the information source generated from any bibliographic information contained in the CIF file. If the space group symmetries are not present in the CIF file, the program falls back on internal tables, so only the minimal input of space group, crystal parameters and Wyckoff positions is required. Additional key features are the handling of alloys and supercell generation.Additional comments: Currently implements support for the following general purpose electronic structure programs: ABINIT [2,3], CASTEP [4], CPMD [5], Crystal [6], Elk [7], exciting [8], EMTO [9], Fleur [10], RSPt [11], Siesta [12] and VASP [13–16].Running time: The examples provided in the distribution take only seconds to run.References:
  • [1] 
    J.R. Hester, A validating CIF parser: PyCIFRW, Journal of Applied Crystallography 39 (4) (2006) 621–625, doi:10.1107/S0021889806015627, URL http://dx.doi.org/10.1107/S0021889806015627
  • [2] 
    X. Gonze, G.-M. Rignanese, M. Verstraete, J.-M. Beuken, Y. Pouillon, R. Caracas, F. Jollet, M. Torrent, G. Zerah, M. Mikami, P. Ghosez, M. Veithen, J.-Y. Raty, V. Olevano, F. Bruneval, L. Reining, R. Godby, G. Onida, D.R. Hamann, D.C. Allan, A brief introduction to the abinit software package, Zeitschrift für Kristallographie 220 (12) (2005) 558–562.
  • [3] 
X. Gonze, B. Amadon, P.-M. Anglade, J.-M. Beuken, F. Bottin, P. Boulanger, F. Bruneval, D. Caliste, R. Caracas, M. Côté, T. Deutsch, L. Genovese, P. Ghosez, M. Giantomassi, S. Goedecker, D. Hamann, P. Hermet, F. Jollet, G. Jomard, S. Leroux, M. Mancini, S. Mazevet, M. Oliveira, G. Onida, Y. Pouillon, T. Rangel, G.-M. Rignanese, D. Sangalli, R. Shaltaf, M. Torrent, M. Verstraete, G. Zerah, J. Zwanziger, Abinit: First-principles approach to material and nanosystem properties, Computer Physics Communications 180 (12) (2009) 2582–2615 (40 years of CPC: A celebratory issue focused on quality software for high performance, grid and novel computing architectures), doi:10.1016/j.cpc.2009.07.007; http://www.sciencedirect.com/science/article/B6TJ5-4WTRSCM-3/2/20edf8da70cd808f10fe352c45d0c0be.
  • [4] 
    S.J. Clark, M.D. Segall, C.J. Pickard, P.J. Hasnip, M.J. Probert, K. Refson, M.C. Payne, First principles methods using CASTEP, Zeitschrift für Kristallographie 220 (12) (2005) 567–570.
  • [5] 
    URL http://www.cpmd.org.
  • [6] 
    R. Dovesi, R. Orlando, B. Civalleri, C. Roetti, V.R. Saunders, C.M. Zicovich-Wilson, Crystal: a computational tool for the ab initio study of the electronic properties of crystals, Zeitschrift für Kristallographie 220 (2005) 571–573. URL http://dx.doi.org/10.1524/zkri.220.5.571.65065.
  • [7] 
    URL http://elk.sourceforge.net.
  • [8] 
    URL http://exciting-code.org.
  • [9] 
    L. Vitos, Computational Quantum Mechanics for Materials Engineers; The EMTO Method and Applications, Springer, London, 2007, doi:10.1007/978-1-84628-951-4.
  • [10] 
    URL http://www.flapw.de.
  • [11] 
    J.M. Wills, O. Eriksson, M. Alouani, D.L. Price, Full-potential LMTO total energy and force calculations, in: H. Dreussé (Ed.), Electronic Structure and Physical Properties of Solids; The Uses of the LMTO Method, Springer, 1996, pp. 148–167.
  • [12] 
J.M. Soler, E. Artacho, J.D. Gale, A. García, J. Junquera, P. Ordejón, D. Sánchez-Portal, The SIESTA method for ab initio order-N materials simulation, Journal of Physics: Condensed Matter 14 (11) (2002) 2745. URL http://stacks.iop.org/0953-8984/14/i=11/a=302
  • [13] 
    G. Kresse, J. Hafner, Ab initio molecular dynamics for liquid metals, Phys. Rev. B 47 (1) (1993) 558–561, doi:10.1103/PhysRevB.47.558.
  • [14] 
    G. Kresse, J. Hafner, Ab initio molecular-dynamics simulation of the liquid–metal amorphous-semiconductor transition in germanium, Phys. Rev. B 49 (20) (1994) 14251–14269, doi:10.1103/PhysRevB.49.14251.
  • [15] 
    G. Kresse, J. Furthmüller, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Computational Materials Science 6 (1) (1996) 15–50, doi:10.1016/0927-0256(96)00008-0. URL http://www.sciencedirect.com/science/article/B6TWM-3VRVTBF-3/2/88689b1eacfe2b5fe57f09d37eff3b74.
  • [16] 
    G. Kresse, J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54 (16) (1996) 11169–11186, doi:10.1103/PhysRevB.54.11169.
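One of the steps the solution method describes, generating atomic positions from the crystal parameters, rests on the standard crystallographic construction of Cartesian lattice vectors from a, b, c, α, β, γ. The sketch below is illustrative only; the function names and the convention of aligning the a vector with the x-axis are assumptions, not CIF2Cell's actual internals.

```python
import math

def lattice_vectors(a, b, c, alpha, beta, gamma):
    # Cartesian lattice vectors for cell parameters (angles in degrees),
    # with vector a along x and vector b in the xy-plane.
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    sg = math.sin(math.radians(gamma))
    # v^2 = 1 - ca^2 - cb^2 - cg^2 + 2*ca*cb*cg  (cell volume / abc, squared)
    v = math.sqrt(1.0 - ca * ca - cb * cb - cg * cg + 2.0 * ca * cb * cg)
    return [
        [a, 0.0, 0.0],
        [b * cg, b * sg, 0.0],
        [c * cb, c * (ca - cb * cg) / sg, c * v / sg],
    ]

def frac_to_cartesian(frac, vectors):
    # r_cart = sum_i frac_i * vector_i
    return [sum(frac[i] * vectors[i][j] for i in range(3)) for j in range(3)]
```

For a cubic cell the construction collapses to a scaled identity matrix, which makes for an easy check of the formulas.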

13.
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.

Program summary

Program title: Application Hosting Environment 2.0Catalogue identifier: AEEJ_v1_0Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.htmlProgram obtainable from: CPC Program Library, Queen's University, Belfast, N. IrelandLicensing provisions: GNU Public Licence, Version 2No. of lines in distributed program, including test data, etc.: not applicableNo. of bytes in distributed program, including test data, etc.: 1 685 603 766Distribution format: tar.gzProgramming language: Perl (server), Java (Client)Computer: x86Operating system: Linux (Server), Linux/Windows/MacOS (Client)RAM: 134 217 728 (server), 67 108 864 (client) bytesClassification: 6.5External routines: VirtualBox (server), Java (client)Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.Solution method: To address the above problem, we have developed AHE, a lightweight interface, designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.Running time: Not applicableReferences:
  • [1] 
    J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.
  • [2] 
    P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406–418.

15.
There are many reconstruction algorithms for tomography (raft, for short), and some of them are considered “classic” by researchers. The so-called raft library provides a set of useful and basic tools, usually needed in many inverse problems related to medical imaging. The subroutines in raft are free software, written in the C language, and portable to any system with a working C compiler. This paper presents source codes written according to raft routines, applied to a new imaging modality called X-ray fluorescence tomography.

Program summary

Program title: raftCatalogue identifier: AEJY_v1_0Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJY_v1_0.htmlProgram obtainable from: CPC Program Library, Queen's University, Belfast, N. IrelandLicensing provisions: GNU General Public Licence, version 2No. of lines in distributed program, including test data, etc.: 218 844No. of bytes in distributed program, including test data, etc.: 3 562 902Distribution format: tar.gzProgramming language: Standard C.Computer: Any with a standard C compilerOperating system: Linux and WindowsClassification: 2.4, 2.9, 3, 4.3, 4.7External routines:
  • raft:
    • autoconf 2.60 or later – http://www.gnu.org/software/autoconf/
    • GSL scientific library – http://www.gnu.org/software/gsl/
    • Confuse parser library – http://www.nongnu.org/confuse/
  • raft-fun: gengetopt – http://www.gnu.org/software/gengetopt/gengetopt.html
Nature of problem: Reconstruction algorithms for tomography, especially in X-ray fluorescence tomography.Solution method: As a library, raft covers the standard reconstruction algorithms like filtered backprojection, Novikov's inversion, Hogan's formula, among others. The input data set is represented by a complete sinogram covering a determined angular range. Users are allowed to set the solid angle range for fluorescence emission in each algorithm.Running time: 1 second to 15 minutes, depending on the data size.
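The principle behind the backprojection-type algorithms the library covers can be sketched with plain NumPy: forward-project a phantom into a sinogram, then smear each projection back across the image. The sketch below shows unfiltered backprojection only, with hypothetical function names; raft's actual routines implement the filtered and more advanced variants in C.

```python
import numpy as np

def project(img, thetas):
    # Parallel-beam forward projection with nearest-neighbour sampling:
    # p(t, theta) = sum over s of img(t*cos - s*sin, t*sin + s*cos).
    n = img.shape[0]
    c = n // 2
    t, s = np.meshgrid(np.arange(n) - c, np.arange(n) - c, indexing="ij")
    sino = np.zeros((len(thetas), n))
    for k, th in enumerate(thetas):
        x = np.rint(t * np.cos(th) - s * np.sin(th)).astype(int) + c
        y = np.rint(t * np.sin(th) + s * np.cos(th)).astype(int) + c
        ok = (x >= 0) & (x < n) & (y >= 0) & (y < n)
        vals = img[np.clip(y, 0, n - 1), np.clip(x, 0, n - 1)]
        sino[k] = np.where(ok, vals, 0.0).sum(axis=1)
    return sino

def backproject(sino, thetas, n):
    # Unfiltered backprojection: b(x, y) = sum over theta of
    # p(x*cos(theta) + y*sin(theta), theta).
    c = n // 2
    xx, yy = np.meshgrid(np.arange(n) - c, np.arange(n) - c, indexing="xy")
    rec = np.zeros((n, n))
    for k, th in enumerate(thetas):
        t = np.rint(xx * np.cos(th) + yy * np.sin(th)).astype(int) + c
        rec += sino[k][np.clip(t, 0, n - 1)]
    return rec
```

Backprojecting the sinogram of a single bright pixel concentrates the smeared rays at the original pixel position, which is the easiest way to verify the geometry conventions.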

16.
We present a new module of micrOMEGAs devoted to the computation of indirect signals from dark matter annihilation in any new model with a stable weakly interacting particle. The code provides the mass spectrum, cross-sections, relic density and exotic fluxes of gamma rays, positrons and antiprotons. The propagation of charged particles in the Galactic halo is handled with a new module that makes it easy to modify the propagation parameters.

Program summary

Program title: micrOMEGAs2.4Catalogue identifier: ADQR_v2_3Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADQR_v2_3.htmlProgram obtainable from: CPC Program Library, Queen's University, Belfast, N. IrelandLicensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.htmlNo. of lines in distributed program, including test data, etc.: 401 126No. of bytes in distributed program, including test data, etc.: 6 583 596Distribution format: tar.gzProgramming language: C and FortranComputer: PC, Alpha, Mac, SunOperating system: UNIX (Linux, OSF1, SunOS, Darwin, Cygwin)RAM: 50 MB depending on the number of processes requiredClassification: 1.9, 11.6Catalogue identifier of previous version: ADQR_v2_2Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 747Does the new version supersede the previous version?: YesNature of problem: Calculation of the relic density and detection rates of the lightest stable particle in a generic new model of particle physics.Solution method: In numerically solving the evolution equation for the density of dark matter, relativistic formulas for the thermal average are used. All tree-level processes for annihilation and coannihilation of new particles in the model are included. The cross-sections for all processes are calculated exactly with CalcHEP after definition of a model file. The propagation of the charged cosmic rays is solved within a semi-analytical two-zone model.Reasons for new version: There are many experiments that are currently searching for the remnants of dark matter annihilation. In this version we perform the computation of indirect signals from dark matter annihilation in any new model with a stable weakly interacting particle. We include the propagation of charged particles in the Galactic halo.Summary of revisions:
  • Annihilation cross-sections for all 2-body tree-level processes and for radiative emission of a photon for all models.
  • Annihilation cross-sections into polarised gauge bosons.
  • Annihilation cross-sections for the loop induced processes γγ and γZ0 in the MSSM.
  • Modelling of the DM halo with a general parameterization and with the possibility of including DM clumps.
  • Computation of the propagation of charged particles through the Galaxy, including the possibility of modifying the propagation parameters.
  • Effect of solar modulation on the charged particle spectrum.
  • Model independent predictions of the indirect detection signals.
Unusual features: Depending on the parameters of the model, the program generates additional new code, compiles it and loads it dynamically. Running time: 3 seconds
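The evolution-equation step of the solution method can be illustrated with a toy freeze-out calculation. In terms of the yield Y(x) with x = m/T, the standard equation reads dY/dx = −(λ/x²)(Y² − Y_eq²). The sketch below is illustrative only: λ, the Y_eq normalization and the step size are assumed toy values, and micrOMEGAs uses the full relativistic thermal averages rather than this non-relativistic form. An implicit (backward-Euler) step keeps the integration stable in the stiff early phase.

```python
import math

def y_eq(x, a=0.145):
    # Non-relativistic equilibrium yield, Y_eq ~ a * x^(3/2) * exp(-x).
    return a * x ** 1.5 * math.exp(-x)

def relic_yield(lam=1.0e4, x0=1.0, x1=1000.0, h=0.01):
    # Backward-Euler step for dY/dx = -(lam/x^2) * (Y^2 - Y_eq^2):
    # at each step solve  k*Y^2 + Y - (Y_n + k*Y_eq^2) = 0,  k = lam*h/x^2,
    # and keep the positive root.
    y = y_eq(x0)
    x = x0
    while x < x1:
        x += h
        k = lam * h / (x * x)
        rhs = y + k * y_eq(x) ** 2
        y = (-1.0 + math.sqrt(1.0 + 4.0 * k * rhs)) / (2.0 * k)
    return y
```

After freeze-out the yield flattens out at roughly x_f/λ (with x_f around 20 in realistic units and single digits for these toy values), so checking that the result has stopped evolving at large x is a cheap consistency test.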

17.
We present HONEI, an open-source collection of libraries offering a hardware oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and an additional 3–4 and 4–16 times faster execution on the Cell and a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all the necessary infrastructure for the development and evaluation of such kernels, significantly simplifying their development.

Program summary

Program title: HONEICatalogue identifier: AEDW_v1_0Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.htmlProgram obtainable from: CPC Program Library, Queen's University, Belfast, N. IrelandLicensing provisions: GPLv2No. of lines in distributed program, including test data, etc.: 216 180No. of bytes in distributed program, including test data, etc.: 1 270 140Distribution format: tar.gzProgramming language: C++Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3Operating system: LinuxRAM: at least 500 MB freeClassification: 4.8, 4.3, 6.1External routines: SSE: none; [1] for GPU, [2] for Cell backendNature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the underlying hardware towards heterogeneity and parallelism. This is particularly relevant for data-intensive problems stemming from discretisations with local support, such as finite differences, volumes and elements.Solution method: To address these issues, we present a hardware aware collection of libraries combining the advantages of modern software techniques and hardware oriented programming. Applications built on top of these libraries can be configured trivially to execute on CPUs, GPUs or the Cell processor. In order to evaluate the performance and accuracy of our approach, we provide two domain specific applications; a multigrid solver for the Poisson problem and a fully explicit solver for 2D shallow water equations.Restrictions: HONEI is actively being developed, and its feature list is continuously expanded. Not all combinations of operations and architectures might be supported in earlier versions of the code. 
Obtaining snapshots from http://www.honei.org is recommended.Unusual features: The considered applications as well as all library operations can be run on NVIDIA GPUs and the Cell BE.Running time: Depending on the application and the input sizes. The Poisson solver executes in a few seconds, while the SWE solver requires up to 5 minutes for large spatial discretisations or small timesteps.References:
  • [1] 
    http://www.nvidia.com/cuda.
  • [2] 
    http://www.ibm.com/developerworks/power/cell.
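The backend abstraction described above can be pictured as dispatch on a hardware tag: the same operation name resolves to whichever implementation is registered for the chosen architecture, with a generic fallback. The toy sketch below is illustrative only; HONEI itself is C++ and realizes this with template tags, and all names here are assumptions.

```python
# Registry mapping (operation name, backend tag) -> implementation.
_REGISTRY = {}

def register(op, backend):
    # Decorator that records an implementation under (op, backend).
    def wrap(fn):
        _REGISTRY[(op, backend)] = fn
        return fn
    return wrap

def dispatch(op, backend, *args):
    # Use the requested backend if it provides the operation,
    # otherwise fall back to the generic CPU implementation.
    fn = _REGISTRY.get((op, backend)) or _REGISTRY[(op, "cpu")]
    return fn(*args)

@register("scaled_sum", "cpu")
def scaled_sum_cpu(a, b, alpha):
    # y_i = a_i + alpha * b_i, plain Python loop as the generic version.
    return [x + alpha * y for x, y in zip(a, b)]

@register("scaled_sum", "sse")
def scaled_sum_sse(a, b, alpha):
    # Stand-in for a vectorised kernel; same contract as the CPU version.
    return [x + alpha * y for x, y in zip(a, b)]
```

The design point is that application code calls `dispatch("scaled_sum", backend, ...)` and never mentions a concrete kernel, so retargeting to new hardware means registering new implementations, not rewriting the application.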

18.
We present a program for the numerical evaluation of scalar integrals and tensor form factors entering the calculation of one-loop amplitudes which supports the use of complex masses in the loop integrals. The program is built on an earlier version of the golem95 library, which performs the reduction to a certain set of basis integrals using a formalism where inverse Gram determinants can be avoided. It can be used to calculate one-loop amplitudes with arbitrary masses in an algebraic approach as well as in the context of unitarity-inspired numerical reconstruction of the integrand.

Program summary

Program title: golem95-1.2.0Catalogue identifier: AEEO_v2_0Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEO_v2_0.htmlProgram obtainable from: CPC Program Library, Queen's University, Belfast, N. IrelandLicensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.htmlNo. of lines in distributed program, including test data, etc.: 182 492No. of bytes in distributed program, including test data, etc.: 950 549Distribution format: tar.gzProgramming language: Fortran95Computer: Any computer with a Fortran95 compilerOperating system: Linux, UnixRAM: RAM used per integral/form factor is insignificantClassification: 4.4, 11.1External routines: Some finite scalar integrals are called from OneLOop [1,2], the option to call them from LoopTools [3,4] is also implemented.Catalogue identifier of previous version: AEEO_v1_0Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2317Does the new version supersede the previous version?: YesNature of problem: Evaluation of one-loop multi-leg integrals occurring in the calculation of next-to-leading order corrections to scattering amplitudes in elementary particle physics. In the presence of massive particles in the loop, propagators going on-shell can cause singularities which should be regulated to allow for a successful evaluation.Solution method: Complex masses can be used in the loop integrals to stand for a width of an unstable particle, regulating the singularities by moving the poles away from the real axis.Reasons for new version: The previous version was restricted to massless particles in the loop.Summary of revisions: Real and complex masses are supported, a general μ parameter for the renormalization scale is introduced, improvements in the caching system and the user interface.Running time: Depends on the nature of the problem. 
A single call to a rank 6 six-point form factor at a randomly chosen kinematic point, using complex masses, takes 0.06 seconds on an Intel Core 2 Q9450 2.66 GHz processor.References:
  • [1] 
    A. van Hameren, C.G. Papadopoulos, R. Pittau, Automated one-loop calculations: a proof of concept, JHEP 0909 (2009) 106, arXiv:0903.4665.
  • [2] 
    A. van Hameren, OneLOop: for the evaluation of one-loop scalar functions, arXiv:1007.4716.
  • [3] 
    T. Hahn, M. Perez-Victoria, Automatized one-loop calculations in four and D dimensions, Comput. Phys. Commun. 118 (1999) 153–165, hep-ph/9807565.
  • [4] 
    T. Hahn, Feynman diagram calculations with FeynArts, FormCalc, and LoopTools, arXiv:1006.2231.
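The regulating effect of a complex mass can be seen already in a one-dimensional Feynman-parameter toy integral. With a real mass and s > 4m², the integrand 1/(m² − x(1−x)s) has poles on the integration path; replacing m² → m² − imΓ moves them into the complex plane, and an ordinary quadrature converges. The sketch is illustrative only; golem95 applies this idea inside its form-factor reduction, not through this toy integral, and the function name and parameter values are assumptions.

```python
import math

def bubble_toy(s, m2, m_gamma, n=200000):
    # Trapezoidal evaluation of  Int_0^1 dx / (m2 - i*m_gamma - x(1-x)*s).
    # For m_gamma > 0 the denominator never vanishes on [0, 1], so the
    # quadrature is well defined even above threshold (s > 4*m2).
    h = 1.0 / n
    total = 0.0 + 0.0j
    for i in range(n + 1):
        x = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w / complex(m2 - x * (1.0 - x) * s, -m_gamma)
    return total * h
```

Shrinking the width should change the result only mildly once the quadrature resolves the near-pole region, and the imaginary part acquires the positive sign dictated by the −iΓ prescription; both properties make convenient checks.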

19.
A B-spline version of a Hartree–Fock program is described. The usual differential equations are replaced by systems of non-linear equations and generalized eigenvalue problems of the form (H^a − ε_aa B)P_a = 0, where a designates the orbital. When orbital a is required to be orthogonal to a fixed orbital, this form assumes that a projection operator has been applied to eliminate the Lagrange multiplier. When two orthogonal orbitals are both varied, the energy must also be stationary with respect to orthogonal transformations. At such a stationary point, the matrix of Lagrange multipliers, ε_ab = (P_b|H^a|P_a), is symmetric, and the off-diagonal Lagrange multipliers may again be eliminated through projection operators. For multiply occupied shells, convergence problems are avoided by the use of a single-orbital Newton–Raphson method. A self-consistent field procedure based on these two possibilities exhibits excellent convergence. A Newton–Raphson method for updating all orbitals simultaneously has better numerical properties and a more rapid rate of convergence, but requires more computer processing time. Both ground and excited states may be computed using a default universal grid. Output from a calculation for Al 3s²3p shows the improvement in accuracy that can be achieved by mapping results from low-order splines on a coarse grid to higher-order splines on a refined grid. The program distribution contains output from additional test cases.

Program summary

Program title: SPHF version 1.00Catalogue identifier: AEIJ_v1_0Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIJ_v1_0.htmlProgram obtainable from: CPC Program Library, Queen's University, Belfast, N. IrelandLicensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.htmlNo. of lines in distributed program, including test data, etc.: 13 925No. of bytes in distributed program, including test data, etc.: 714 254Distribution format: tar.gzProgramming language: Fortran 95Computer: Any system with a Fortran 95 compiler. Tested on Intel Xeon CPU X5355, 2.66 GHzOperating system: Any system with a Fortran 95 compilerClassification: 2.1External routines: LAPACK (http://www.netlib.org/lapack/)Nature of problem: Non-relativistic Hartree–Fock wavefunctions are determined for atoms in a bound state that may be used to predict a variety of atomic properties.Solution method: The radial functions are expanded in a B-spline basis [1]. The variational principle applied to an energy functional that includes Lagrange multipliers for orthonormal constraints defines the Hartree–Fock matrix for each orbital. Orthogonal transformations symmetrize the matrix of Lagrange multipliers and projection operators eliminate the off-diagonal Lagrange multipliers to yield a generalized eigenvalue problem. For multiply occupied shells, a single-orbital Newton–Raphson (NR) method is used to speed convergence with very little extra computational effort. In a final step, all orbitals are updated simultaneously by a Newton–Raphson method to improve numerical accuracy.Restrictions: There is no restriction on calculations for the average energy of a configuration. As in the earlier HF96 program [2], only one or two open shells are allowed when results are required for a specific LS coupling. These include:
  • 1. 
    N(nl)ns, where l=0,1,2,3
  • 2. 
    N(np)nl, where l=0,1,2,3,…
  • 3. 
    (nd)(nf)
Unusual features: Unlike HF96, the present program is a Fortran 90/95 program without the use of COMMON. It is assumed that LAPACK libraries are available.Running time: For Ac 7s²7p the execution time varied from 6.9 s to 9.1 s depending on the iteration method.References:
  • [1] 
    C. Froese Fischer, Adv. At. Mol. Phys. 55 (2008) 235.
  • [2] 
    G. Gaigalas, C. Froese Fischer, Comput. Phys. Commun. 98 (1996) 255.
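The generalized eigenvalue problem (H^a − ε_aa B)P_a = 0 at the heart of the solution method can be solved, for symmetric H and positive-definite B, by a Cholesky reduction to an ordinary symmetric eigenproblem. The small NumPy sketch below is illustrative only; SPHF builds H and the B-spline overlap matrix B from the actual basis and solves the problem with LAPACK.

```python
import numpy as np

def generalized_eigh(H, B):
    # Solve H p = eps B p for symmetric H and SPD B:
    # factor B = L L^T, solve the standard symmetric problem for
    # A = L^-1 H L^-T, then map the eigenvectors back via p = L^-T y.
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    A = Linv @ H @ Linv.T
    eps, Y = np.linalg.eigh(A)
    P = Linv.T @ Y
    return eps, P
```

The returned pairs satisfy H P[:, i] = eps[i] B P[:, i] and the eigenvectors are B-orthonormal, mirroring the orthonormality constraints imposed on the orbitals.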
