Similar Articles
20 similar articles found.
1.
2.
A library for reading and writing data in the SUSY Les Houches Accord 2 format is presented. The implementation is in native Fortran 77. The data are contained in a single array conveniently indexed by preprocessor statements (an illustrative sketch of this container design follows the program summary below).

Program summary

Program title: SLHA2Lib
Catalogue identifier: AEDY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7550
No. of bytes in distributed program, including test data, etc.: 160 123
Distribution format: tar.gz
Programming language: Fortran
Computer: For the build process, a Fortran 77 compiler in a Unixish environment (make, shell) is required
Operating system: Linux, Mac OS, Windows (Cygwin), Tru64 Unix
RAM: The SLHA record is currently 88 944 bytes long
Classification: 4.14, 11.6
Nature of problem: Exchange of SUSY parameters and decay information in an ASCII file format.
Solution method: The SLHA2Lib provides routines for reading and writing files in the SUSY Les Houches Accord 2 format, a common interchange format for SUSY parameters and decay data.
Restrictions: The fixed-size array that holds the SLHA2 data necessarily limits the amount of decay data that can be stored. This limit can be enlarged by editing and re-running the SLHA2.m program.
Unusual features: Data are transported in a single "double complex" array in Fortran, indexed through preprocessor macros. This is about the simplest conceivable container and needs neither dynamic memory allocation nor Fortran extensions such as structures.
Running time: Reading and writing an SLHA file each typically take a few milliseconds.
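The single-array container described above can be illustrated with a small sketch. This is not the SLHA2Lib source; all block names and offsets below are invented, and C++ with the preprocessor stands in for the Fortran 77 plus preprocessor combination actually used by the library.

```cpp
#include <complex>
#include <cstdio>

// Hypothetical offsets into one flat record; the real SLHA2Lib layout differs.
#define OffsetMinPar   0
#define MinPar(i)      slhadata[OffsetMinPar + (i) - 1]      // e.g. MINPAR-like entries
#define OffsetMass     10
#define Mass(slot)     slhadata[OffsetMass + (slot) - 1]     // e.g. MASS-like entries
#define RecordLength   100

int main() {
    // One contiguous array of complex doubles plays the role of the whole record:
    // no dynamic allocation, no derived types, just macro-computed indices.
    std::complex<double> slhadata[RecordLength] = {};

    MinPar(1) = 100.0;   // write "parameter 1"
    Mass(3)   = 125.0;   // write "mass slot 3"

    std::printf("MinPar(1) = %g, Mass(3) = %g\n",
                MinPar(1).real(), Mass(3).real());
    return 0;
}
```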

3.
The CERN SPS experiment COMPASS has integrated a Conditions Database system into its off-line software. The system is used to manage time-dependent information such as detector conditions, calibration constants, and geometrical alignment, using a package provided by CERN IT/DB. This integrated system consists of administration tools, a data handling library, and software that transfers data from the detector control system to the Conditions Database. In this paper, the status of the Conditions Database project is described, and the results of performance tests on the COMPASS computing farm are given.

4.
A fitting procedure for the one-trap, one-recombination-centre kinetic model is described. The procedure uses a grid in parameter space, obtained by changing each parameter back and forth, and calculates robust cost functions on the surfaces of this grid. The lengths of the changes are determined empirically. The best set of parameters is given by the projection onto the grid surface with the smallest cost function. The procedure is analyzed for fits of one, two and three parameters of the kinetic model. In all cases the optimization shows reliable fitting within a feasible amount of processing time.
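As a rough sketch of the grid-based search described above (not the authors' code), the following C++ example perturbs each parameter back and forth by an empirically chosen step, evaluates a placeholder robust cost at each grid point, keeps the best point, and halves the steps when no neighbour improves. The cost function and its target values are invented for the illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Placeholder robust cost (sum of absolute residuals against a toy target);
// a real application would compare a simulated glow curve with measured data.
double cost(const std::vector<double>& p) {
    const double target[3] = {2.0, -1.0, 0.5};
    double c = 0.0;
    for (int i = 0; i < 3; ++i) c += std::fabs(p[i] - target[i]);
    return c;
}

int main() {
    std::vector<double> p    = {1.0, 0.0, 0.0};   // initial guess
    std::vector<double> step = {0.5, 0.5, 0.5};   // empirically chosen step lengths
    const double sign[2] = {+1.0, -1.0};

    for (int iter = 0; iter < 200; ++iter) {
        bool improved = false;
        // Change each parameter back and forth and keep the best grid point.
        for (int k = 0; k < (int)p.size(); ++k) {
            for (int s = 0; s < 2; ++s) {
                std::vector<double> trial = p;
                trial[k] += sign[s] * step[k];
                if (cost(trial) < cost(p)) { p = trial; improved = true; }
            }
        }
        if (!improved)                                   // no neighbouring grid point is better:
            for (int k = 0; k < (int)step.size(); ++k)   // refine the grid and continue
                step[k] *= 0.5;
    }
    std::printf("best parameters: %g %g %g  (cost %g)\n", p[0], p[1], p[2], cost(p));
    return 0;
}
```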

5.
In experimental data acquisition and evaluation, the need arises for some kind of “expert system” to support sophisticated instruments and data evaluation applications. Previous attempts to develop an expert system for such purposes in X-ray Photoelectron Spectroscopy (XPS) were based on various external expert system shells. This paper presents a simple reasoning expert system engine that can be built directly into data acquisition and evaluation software. Some problems arising from the lack of human intelligence in the inference process are also discussed. The feasibility of the realized system is demonstrated by implementing a real-life rule set, an example (the carbon contamination rules) taken from the field of XPS. Apart from the field-specific rules, the package can be used in any field.
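To make the idea of an embeddable reasoning engine concrete, here is a minimal forward-chaining sketch in C++; it is not the engine described in the paper, and the facts and rules (loosely modelled on a carbon-contamination style rule) are invented.

```cpp
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Rule {
    std::vector<std::string> conditions; // all must be known facts
    std::string conclusion;              // fact added when the rule fires
};

int main() {
    std::set<std::string> facts = {"C1s_peak_present", "sample_stored_in_air"};

    std::vector<Rule> rules = {
        {{"C1s_peak_present", "sample_stored_in_air"}, "carbon_contamination_likely"},
        {{"carbon_contamination_likely"},              "suggest_surface_cleaning"},
    };

    // Forward chaining: keep firing rules until no new fact can be derived.
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Rule& r : rules) {
            bool all = true;
            for (const std::string& c : r.conditions)
                if (!facts.count(c)) { all = false; break; }
            if (all && facts.insert(r.conclusion).second) changed = true;
        }
    }

    for (const std::string& f : facts) std::cout << f << "\n";
    return 0;
}
```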

6.
Simulations of crystal deformation and structural transformation may generate complex datasets involving networks with millions to billions of chemical bonds, which makes local structure analysis a challenge. An ideal analysis method must recognize perfect crystal structures, such as face-centered cubic, body-centered cubic and hexagonal close packed, and differentiate structural defects such as dislocations, stacking faults, grain boundaries, cracks and surfaces. Currently a few methods are used for this purpose, e.g., the Common Neighbor Analysis (CNA) and the Centrosymmetry Parameter (CSP). This paper proposes an alternative method based on the calculation of a single parameter that depends on the common atomic neighborhood. We validate the method by characterizing local structures in complex molecular-dynamics datasets and clarify its advantages over the CNA and CSP methods.
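The single-parameter idea can be illustrated with a much simpler neighbourhood measure than the one proposed in the paper: a greedy, centrosymmetry-like sum over nearly opposite neighbour pairs, which is close to zero in a centrosymmetric environment and grows near defects. The neighbour lists below are hand-made toy data.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

static double norm2(const Vec3& v) { return v.x * v.x + v.y * v.y + v.z * v.z; }

// Greedy, centrosymmetry-like parameter for one atom: for every neighbour
// vector r_i, find the most nearly opposite neighbour r_j and accumulate
// |r_i + r_j|^2. Near zero for a centrosymmetric environment, large near
// surfaces, stacking faults and other defects.
double local_structure_parameter(const std::vector<Vec3>& neigh) {
    double p = 0.0;
    for (int i = 0; i < (int)neigh.size(); ++i) {
        double best = 1e300;
        for (int j = 0; j < (int)neigh.size(); ++j) {
            if (i == j) continue;
            Vec3 s = {neigh[i].x + neigh[j].x,
                      neigh[i].y + neigh[j].y,
                      neigh[i].z + neigh[j].z};
            best = std::min(best, norm2(s));
        }
        p += best;
    }
    return 0.5 * p;   // each opposite pair is found twice in the loop above
}

int main() {
    // Ideal simple-cubic-like neighbourhood: six neighbours along +/- x, y, z.
    std::vector<Vec3> perfect = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    // "Defective" neighbourhood: one neighbour missing, as near a surface.
    std::vector<Vec3> surface(perfect.begin(), perfect.begin() + 5);

    std::printf("perfect: %g   defective: %g\n",
                local_structure_parameter(perfect),
                local_structure_parameter(surface));
    return 0;
}
```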

7.
In this paper, efficient nonlinear fitting algorithms that avoid matrix inversion are described. The algorithms were applied to the analysis of two- and three-fold coincidence γ-ray spectra and were used to process coincidence matrices of fission data from the multidetector GAMMASPHERE spectrometer.
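The specific algorithms are not reproduced here; as a generic illustration of inversion-free fitting, the sketch below minimizes a least-squares objective for a single Gaussian peak by plain gradient descent, which needs only function and gradient evaluations and never builds or inverts a matrix. The model, data and step size are invented for the example.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Model: a single Gaussian peak, f(x) = A * exp(-(x - mu)^2 / (2 sigma^2)).
double model(double x, double A, double mu, double sigma) {
    double d = (x - mu) / sigma;
    return A * std::exp(-0.5 * d * d);
}

int main() {
    // Synthetic "spectrum" generated from known parameters A=1, mu=5, sigma=1.5.
    std::vector<double> xs, ys;
    for (double x = 0.0; x <= 10.0; x += 0.1) {
        xs.push_back(x);
        ys.push_back(model(x, 1.0, 5.0, 1.5));
    }

    // Gradient descent on the sum of squared residuals: only function and
    // gradient evaluations are needed, no matrix is ever built or inverted.
    double A = 0.8, mu = 4.0, sigma = 2.0;
    const double rate = 0.005;                     // conservative fixed step size
    for (int it = 0; it < 20000; ++it) {
        double gA = 0.0, gmu = 0.0, gs = 0.0;
        for (int i = 0; i < (int)xs.size(); ++i) {
            double d = (xs[i] - mu) / sigma;
            double f = A * std::exp(-0.5 * d * d);
            double r = f - ys[i];                  // residual
            gA  += 2.0 * r * f / A;                // d(chi^2)/dA
            gmu += 2.0 * r * f * d / sigma;        // d(chi^2)/dmu
            gs  += 2.0 * r * f * d * d / sigma;    // d(chi^2)/dsigma
        }
        A -= rate * gA;  mu -= rate * gmu;  sigma -= rate * gs;
    }
    std::printf("fitted A=%.3f  mu=%.3f  sigma=%.3f\n", A, mu, sigma);
    return 0;
}
```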

8.
Computer simulation techniques have found extensive use in establishing empirical relationships between three-dimensional (3d) and two-dimensional (2d) projected properties of particles produced by growth through the agglomeration of smaller particles (monomers). In this paper, we describe a package, FracMAP, which has been written to simulate 3d quasi-fractal agglomerates and create their 2d pixelated projection images by restricting them to stable orientations, as commonly encountered for quasi-fractal agglomerates collected on filter media for electron microscopy. The resulting 2d images are analyzed for their projected morphological properties (a toy version of the projection step is sketched after the program summary below).

Program summary

Program title: FracMAP
Catalogue identifier: AEDD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 4722
No. of bytes in distributed program, including test data, etc.: 27 229
Distribution format: tar.gz
Programming language: C++
Computer: PC
Operating system: Windows, Linux
RAM: 2.0 Megabytes
Classification: 7.7
Nature of problem: Solving for a suitable fractal agglomerate construction under constraints of typical morphological parameters.
Solution method: Monte Carlo approximation.
Restrictions: Problem complexity is not representative of run-time, since Monte Carlo iterations are of a constant complexity.
Additional comments: The distribution file contains two versions of the FracMAP code, one for Windows and one for Linux.
Running time: 1 hour for a fractal agglomerate of size 25 on a single processor.
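As a toy version of the projection step mentioned in the abstract (not FracMAP itself), the following sketch drops the z coordinate of a few monomer spheres and rasterizes the result onto a pixel grid, from which a 2d property such as the projected area can be read off. The agglomerate geometry and image parameters are invented.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Sphere { double x, y, z, r; };   // one monomer of the agglomerate

int main() {
    // A tiny hand-made "agglomerate" of touching unit spheres (illustrative only).
    std::vector<Sphere> agg = {{0, 0, 0, 1}, {2, 0, 0.5, 1}, {1, 1.7, 1, 1}};

    const int    N  = 64;        // pixels per side
    const double L  = 8.0;       // physical size of the image window
    const double px = L / N;     // pixel size

    std::vector<std::vector<int>> image(N, std::vector<int>(N, 0));
    int filled = 0;

    // Project along z: a pixel is set if its centre lies inside any sphere's
    // projected disc (x, y, r), i.e. the z coordinate is simply ignored.
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            double x = (i + 0.5) * px - L / 2.0;
            double y = (j + 0.5) * px - L / 2.0;
            for (const Sphere& s : agg) {
                double dx = x - s.x, dy = y - s.y;
                if (dx * dx + dy * dy <= s.r * s.r) { image[i][j] = 1; ++filled; break; }
            }
        }
    }
    std::printf("projected area: %d pixels = %.2f area units\n", filled, filled * px * px);
    return 0;
}
```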

9.
A simulation study to evaluate the computing resources required for the research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements of a specific physics analysis as a yardstick. Other input parameters were an assumed distribution of researchers across the institutions involved, an analysis model, and two different functional structures of the computing resources.

10.
A vertex reconstruction algorithm based on the Gaussian-sum filter (GSF) was developed and implemented in the framework of the CMS reconstruction program. While linear least-squares estimators are optimal when all observation errors are Gaussian distributed, the GSF offers a better treatment of non-Gaussian distributions of track parameter errors when these are modeled by Gaussian mixtures. The algorithm has been verified and evaluated with simulated data, and the results are compared to the Kalman filter and to an adaptive vertex estimator.
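To give a feel for the Gaussian-sum idea, here is a deliberately simplified one-dimensional measurement update, not the CMS implementation: the measurement error is modelled as a two-component Gaussian mixture, each component is updated with its own Kalman gain and reweighted by its likelihood, and the posterior mixture is then collapsed to a single mean and variance. All numbers are invented.

```cpp
#include <cmath>
#include <cstdio>

// Gaussian density, used to weight mixture components by their likelihood.
double gauss(double x, double mean, double var) {
    const double PI = 3.14159265358979323846;
    return std::exp(-0.5 * (x - mean) * (x - mean) / var) / std::sqrt(2.0 * PI * var);
}

int main() {
    // Prior estimate of a 1-D "track parameter".
    double m0 = 0.0, P0 = 4.0;

    // Measurement z whose error is a two-component Gaussian mixture
    // (a narrow core plus a wide tail), modelling non-Gaussian errors.
    double z = 1.5;
    double w[2] = {0.9, 0.1};     // component weights
    double R[2] = {0.25, 4.0};    // component variances

    double wp[2], m[2], P[2], wsum = 0.0;
    for (int k = 0; k < 2; ++k) {
        double K = P0 / (P0 + R[k]);              // Kalman gain for this component
        m[k]  = m0 + K * (z - m0);                // component posterior mean
        P[k]  = (1.0 - K) * P0;                   // component posterior variance
        wp[k] = w[k] * gauss(z, m0, P0 + R[k]);   // prior weight * likelihood
        wsum += wp[k];
    }

    // Collapse the posterior mixture to its overall mean and variance.
    double mean = 0.0, var = 0.0;
    for (int k = 0; k < 2; ++k) mean += (wp[k] / wsum) * m[k];
    for (int k = 0; k < 2; ++k)
        var += (wp[k] / wsum) * (P[k] + (m[k] - mean) * (m[k] - mean));

    std::printf("posterior mean %.3f, variance %.3f\n", mean, var);
    return 0;
}
```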

11.
In this paper, a class of fast adaptive Fourier-based transforms is used for spectroscopic data compression. These transforms are based on adaptive modification of the Cooley-Tukey signal flow graph. Adaptive versions of the cosine, cosine-Haar and cosine-Walsh transforms of various degrees were taken as the basis for the experiments. The transform kernels are modified according to reference vectors representing a given class of processed data. The results obtained when these transforms are applied to the compression of γ-γ ray coincidence spectra are presented and compared with those obtained using classical transforms. Both the classical and the adaptive transforms can be used for off-line as well as on-line compression.
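The adaptive transforms themselves are not reproduced here; the sketch below illustrates the general principle of transform-domain spectrum compression with a plain, non-adaptive DCT-II: transform the spectrum, keep only the largest coefficients, and reconstruct. The synthetic spectrum and the number of retained coefficients are arbitrary choices for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int N = 64;
    const double PI = 3.14159265358979323846;

    // Synthetic smooth "spectrum" with two broad peaks.
    std::vector<double> x(N);
    for (int n = 0; n < N; ++n)
        x[n] = 100.0 * std::exp(-0.01 * (n - 20) * (n - 20))
             +  60.0 * std::exp(-0.02 * (n - 45) * (n - 45));

    // Forward DCT-II (naive O(N^2); fast variants use Cooley-Tukey-like flow graphs).
    std::vector<double> X(N, 0.0);
    for (int k = 0; k < N; ++k)
        for (int n = 0; n < N; ++n)
            X[k] += x[n] * std::cos(PI * (n + 0.5) * k / N);

    // Compression: keep only the M largest-magnitude coefficients.
    const int M = 16;
    std::vector<int> idx(N);
    for (int k = 0; k < N; ++k) idx[k] = k;
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return std::fabs(X[a]) > std::fabs(X[b]); });
    std::vector<double> Xc(N, 0.0);
    for (int i = 0; i < M; ++i) Xc[idx[i]] = X[idx[i]];

    // Inverse transform (DCT-III) and reconstruction error.
    double err = 0.0;
    for (int n = 0; n < N; ++n) {
        double r = 0.5 * Xc[0];
        for (int k = 1; k < N; ++k) r += Xc[k] * std::cos(PI * (n + 0.5) * k / N);
        r *= 2.0 / N;
        err += (r - x[n]) * (r - x[n]);
    }
    std::printf("kept %d of %d coefficients, rms error %.3f\n", M, N, std::sqrt(err / N));
    return 0;
}
```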

12.
Modern high energy physics experiments have to process terabytes of input data produced in particle collisions. The core of many data reconstruction algorithms in high energy physics is the Kalman filter, so the speed of Kalman filter based algorithms is of crucial importance in on-line data processing. This is especially true for the combinatorial track finding stage, where the Kalman filter based track fit is used very intensively. Developing fast reconstruction algorithms that use the maximum available processing power is therefore important, in particular for the initial selection of events which carry signals of interesting physics.

One such powerful feature, supported by almost all up-to-date PC processors, is the SIMD instruction set, which allows several data items to be packed in one register and operated on simultaneously, thus achieving more operations per clock cycle. The novel Cell processor extends the parallelization further by combining a general-purpose PowerPC processor core with eight streamlined coprocessing elements which greatly accelerate vector processing applications.

In the investigation described here, after a significant memory optimization and a comprehensive numerical analysis, the Kalman filter based track fitting algorithm of the CBM experiment has been vectorized using inline operator overloading. The algorithm thus remains flexible with respect to the CPU family used for data reconstruction.

With all these changes, the SIMDized Kalman filter based track fitting algorithm takes 1 μs per track, which is 10000 times faster than the initial version. Porting the algorithm to a Cell Blade computer gives another factor of 10 in speedup. Finally, we compare the performance of the tracking algorithm running on three different CPU architectures: Intel Xeon, AMD Opteron and Cell Broadband Engine.
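The key software device mentioned above, keeping the algorithm source unchanged while switching between scalar and packed arithmetic, can be sketched as follows. This is an illustration, not the CBM code: the packed type is emulated with a plain four-element array, where a production version would map the overloaded operators onto SSE or Cell intrinsics.

```cpp
#include <cstdio>

// A packed "vector float" holding four track candidates side by side.
// In production code the array would be an SSE/AltiVec register and the
// operators would call the corresponding intrinsics.
struct fvec {
    float v[4];
    fvec() {}
    fvec(float a) { for (int i = 0; i < 4; ++i) v[i] = a; }
};
inline fvec operator+(const fvec& a, const fvec& b) {
    fvec r; for (int i = 0; i < 4; ++i) r.v[i] = a.v[i] + b.v[i]; return r;
}
inline fvec operator*(const fvec& a, const fvec& b) {
    fvec r; for (int i = 0; i < 4; ++i) r.v[i] = a.v[i] * b.v[i]; return r;
}

// The "algorithm" is written once, for any arithmetic type T.
template <typename T>
T predict(T x, T slope, T dz) { return x + slope * dz; }

int main() {
    // Scalar use: one track.
    float xs = predict(1.0f, 0.1f, 5.0f);

    // SIMD-style use: four tracks propagated with identical source code.
    fvec x, slope, dz(5.0f);
    for (int i = 0; i < 4; ++i) { x.v[i] = (float)i; slope.v[i] = 0.1f * i; }
    fvec xv = predict(x, slope, dz);

    std::printf("scalar: %.2f  packed: %.2f %.2f %.2f %.2f\n",
                xs, xv.v[0], xv.v[1], xv.v[2], xv.v[3]);
    return 0;
}
```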

13.
We present a data mining technique for the analysis of multichannel oscillatory time-series data and show an application using poloidal arrays of magnetic sensors installed in the H-1 heliac. The procedure is highly automated and scales well to large datasets. The time-series data are split into short time segments to provide time resolution, and each segment is represented by a singular value decomposition (SVD). By comparing power spectra of the temporal singular vectors, related singular values are grouped into subsets which define fluctuation structures. Thresholds for the normalised energy of the fluctuation structure and the normalised entropy of the SVD can be used to filter the dataset. We assume that distinct classes of fluctuations are localised in the space of phase differences Δψ(n,n+1) between each pair of nearest-neighbour channels. An expectation-maximisation clustering algorithm is used to locate the distinct classes of fluctuations and assign mode numbers where possible, and a cluster tree mapping is used to visualise the results.
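The per-segment SVD step can be illustrated in a self-contained way (this is not the authors' code): one short segment is stored as a channels-by-samples matrix and its dominant temporal singular vector is obtained by power iteration on A^T A. A full analysis would keep several singular triples, compare their power spectra and then cluster the phase differences; the synthetic segment below is invented.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int C = 8, T = 128;                  // channels x time samples in one segment
    const double PI = 3.14159265358979323846;

    // Synthetic segment: one coherent mode with a channel-dependent phase.
    std::vector<std::vector<double>> A(C, std::vector<double>(T));
    for (int c = 0; c < C; ++c)
        for (int t = 0; t < T; ++t)
            A[c][t] = std::sin(2.0 * PI * 0.05 * t + 2.0 * PI * c / C);

    // Power iteration on A^T A gives the dominant temporal singular vector v
    // and the corresponding singular value sigma = |A v|.
    std::vector<double> v(T, 1.0), Av(C), w(T);
    double sigma = 0.0;
    for (int it = 0; it < 200; ++it) {
        for (int c = 0; c < C; ++c) {          // Av = A v
            Av[c] = 0.0;
            for (int t = 0; t < T; ++t) Av[c] += A[c][t] * v[t];
        }
        sigma = 0.0;
        for (int c = 0; c < C; ++c) sigma += Av[c] * Av[c];
        sigma = std::sqrt(sigma);
        for (int t = 0; t < T; ++t) {          // w = A^T (A v), then normalise
            w[t] = 0.0;
            for (int c = 0; c < C; ++c) w[t] += A[c][t] * Av[c];
        }
        double n = 0.0;
        for (int t = 0; t < T; ++t) n += w[t] * w[t];
        n = std::sqrt(n);
        for (int t = 0; t < T; ++t) v[t] = w[t] / n;
    }
    std::printf("dominant singular value of this segment: %.3f\n", sigma);
    return 0;
}
```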

14.
State-of-the-art molecular dynamics (MD) simulations generate massive datasets involving billion-vertex chemical bond networks, which makes data mining based on graph algorithms such as K-ring analysis a challenge. This paper proposes an algorithm to improve the efficiency of ring analysis of large graphs, exploiting properties of K-rings and spatial correlations of vertices in the graph. The algorithm uses dual-tree expansion (DTE) and spatial hash-function tagging (SHAFT) to optimize computation and memory access. Numerical tests show nearly perfect linear scaling of the algorithm, and a parallel implementation of the DTE + SHAFT algorithm also achieves high scalability. The algorithm has been successfully employed to analyze large MD simulations involving up to 500 million atoms.
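The spatial-hashing ingredient can be sketched independently of the ring analysis itself. The code below is illustrative and not the paper's SHAFT implementation: each atom is tagged with the integer cell it falls into, so candidate neighbours are looked up only in the 27 surrounding cells rather than over the whole system. The cell size, hash function and coordinates are invented.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

struct Atom { double x, y, z; };

// Hash a 3-D cell index (ix, iy, iz) into a single 64-bit key.
long long cell_key(int ix, int iy, int iz) {
    return (static_cast<long long>(ix) << 42) ^
           (static_cast<long long>(iy) << 21) ^
            static_cast<long long>(iz);
}

int main() {
    const double cell = 3.0;                    // cell size ~ bond-length cutoff
    std::vector<Atom> atoms = {{0.1, 0.2, 0.3}, {1.0, 1.1, 0.9}, {8.0, 8.0, 8.0}};

    // Tag every atom with its cell; neighbour searches then only visit
    // the 3 x 3 x 3 block of cells around an atom.
    std::unordered_map<long long, std::vector<int>> grid;
    for (int i = 0; i < (int)atoms.size(); ++i) {
        int ix = (int)(atoms[i].x / cell);
        int iy = (int)(atoms[i].y / cell);
        int iz = (int)(atoms[i].z / cell);
        grid[cell_key(ix, iy, iz)].push_back(i);
    }

    // Candidate neighbours of atom 0: atoms in the surrounding 27 cells.
    int ix = (int)(atoms[0].x / cell), iy = (int)(atoms[0].y / cell), iz = (int)(atoms[0].z / cell);
    int candidates = 0;
    for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dz = -1; dz <= 1; ++dz) {
                auto it = grid.find(cell_key(ix + dx, iy + dy, iz + dz));
                if (it != grid.end()) candidates += (int)it->second.size();
            }
    std::printf("atom 0 has %d candidate neighbours (including itself)\n", candidates);
    return 0;
}
```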

15.
A standard file format is proposed to store process and event information, primarily output from parton-level event generators for further use by general-purpose ones. The information content is identical to what was defined by the Les Houches Accord five years ago, but there it was expressed in terms of Fortran common blocks. Here the information is embedded in a minimal XML-style structure, for clarity and to simplify parsing.

16.
In this paper we present an inversion algorithm for ill-posed problems arising in atmospheric remote sensing. The proposed method is an iterative Runge-Kutta type regularization method. Such methods are better known for solving differential equations; we adapt them to the solution of ill-posed inverse problems. The numerical performance of the algorithm is studied by means of simulations of the retrieval of aerosol particle size distributions from lidar observations.

17.
In this paper we discuss some computational problems associated with matched filtering of experimental signals from gravitational-wave interferometric detectors in a parallel-processing environment. We then specialize our discussion to the use of the APEmille and apeNEXT processors for this task. Finally, we accurately estimate the performance of an APEmille system on a computational load appropriate for the LIGO and VIRGO experiments, and extrapolate our results to apeNEXT.
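The computational core being ported to the APE machines is the matched filter; as a minimal scalar reference (not the APE implementation), the sketch below correlates a noisy data stream with a known template at every time lag and reports the lag with the largest correlation. Production codes evaluate this with FFTs over large template banks; the template, noise model and offsets here are invented.

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    const int N = 2048, M = 128;
    const double PI = 3.14159265358979323846;

    // Known template: a short chirp-like oscillation.
    std::vector<double> tmpl(M);
    for (int i = 0; i < M; ++i)
        tmpl[i] = std::sin(2.0 * PI * (0.02 + 0.0003 * i) * i);

    // Data: uniform noise with the template buried at a known offset.
    const int true_offset = 700;
    std::vector<double> data(N);
    std::srand(1);
    for (int i = 0; i < N; ++i)
        data[i] = 0.5 * (2.0 * std::rand() / RAND_MAX - 1.0);
    for (int i = 0; i < M; ++i) data[true_offset + i] += tmpl[i];

    // Matched filter: correlate the template against every lag (time-domain,
    // O(N*M); production codes do this with FFTs and over template banks).
    int best_lag = 0;
    double best_corr = -1e300;
    for (int lag = 0; lag + M <= N; ++lag) {
        double c = 0.0;
        for (int i = 0; i < M; ++i) c += data[lag + i] * tmpl[i];
        if (c > best_corr) { best_corr = c; best_lag = lag; }
    }
    std::printf("best lag %d (template injected at %d), correlation %.2f\n",
                best_lag, true_offset, best_corr);
    return 0;
}
```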

18.
A modification of the standard Simulated Annealing (SA) algorithm is presented for finding the global minimum of a continuous multidimensional, multimodal function. We report results of computational experiments with a set of test functions and compare with methods of similar structure. The accompanying software accepts objective functions coded in either Fortran 77 or C++ (a bare-bones version of the classical SA loop is sketched after the program summary below).

Program summary

Title of program: GenAnneal
Catalogue identifier: ADXI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXI_v1_0
Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: The tool is designed to be portable to all systems running the GNU C++ compiler
Installation: University of Ioannina, Greece, on Linux-based machines
Programming language used: GNU-C++, GNU-C, GNU Fortran 77
Memory required to execute with typical data: 200 KB
No. of bits in a word: 32
No. of processors used: 1
Has the code been vectorized or parallelized?: No
No. of bytes in distributed program, including test data, etc.: 84 885
No. of lines in distributed program, including test data, etc.: 14 896
Distribution format: tar.gz
Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution, and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima, so global optimization is the appropriate tool. For example, when solving a non-linear system of equations via optimization with a “least squares” type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero).
Method of solution: The step-selection process of traditional Simulated Annealing is replaced by a global technique based on grammatical evolution.
Typical running time: Depends on the objective function.
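For orientation, a bare-bones version of the classical Simulated Annealing loop that the paper modifies is sketched below; the grammatical-evolution step selection itself is not shown. The test function (2-D Rastrigin), cooling schedule and move distribution are arbitrary choices for the example.

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Test objective: 2-D Rastrigin, many local minima, global minimum 0 at the origin.
double f(const std::vector<double>& x) {
    const double PI = 3.14159265358979323846;
    double s = 20.0;
    for (double xi : x) s += xi * xi - 10.0 * std::cos(2.0 * PI * xi);
    return s;
}

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::normal_distribution<double>       move(0.0, 0.5);

    std::vector<double> x = {3.0, -2.0};         // starting point
    double fx = f(x), T = 10.0;                  // initial temperature

    for (int it = 0; it < 20000; ++it) {
        // Classical step selection: a random Gaussian move of every coordinate.
        std::vector<double> y = x;
        for (double& yi : y) yi += move(gen);
        double fy = f(y);

        // Metropolis acceptance: always accept improvements, sometimes accept
        // uphill moves, with a probability that shrinks as T decreases.
        if (fy < fx || uni(gen) < std::exp((fx - fy) / T)) { x = y; fx = fy; }

        T *= 0.9995;                             // geometric cooling schedule
    }
    std::printf("found minimum %.4f at (%.3f, %.3f)\n", fx, x[0], x[1]);
    return 0;
}
```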

19.
A new method that employs grammatical evolution and a stopping rule for finding the global minimum of a continuous multidimensional, multimodal function is considered. The genetic algorithm used is a hybrid one, combined with a local search procedure. We list results from numerical experiments with a series of test functions and compare with other established global optimization methods. The accompanying software accepts objective functions coded either in Fortran 77 or in C++.

Program summary

Program title: GenMin
Catalogue identifier: AEAR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 35 810
No. of bytes in distributed program, including test data, etc.: 436 613
Distribution format: tar.gz
Programming language: GNU-C++, GNU-C, GNU Fortran 77
Computer: The tool is designed to be portable to all systems running the GNU C++ compiler
Operating system: The tool is designed to be portable to all systems running the GNU C++ compiler
RAM: 200 KB
Word size: 32 bits
Classification: 4.9
Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution, and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima, so global optimization is the appropriate tool. For example, when solving a nonlinear system of equations via optimization with a least squares type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero).
Solution method: Grammatical evolution and a stopping rule.
Running time: Depends on the objective function. The test example given takes only a few seconds to run.

20.
A new stochastic method for locating the global minimum of a multidimensional function inside a rectangular hyperbox is presented. A sampling technique is employed that makes use of the procedure known as grammatical evolution. The method can be considered as a “genetic” modification of the Controlled Random Search procedure due to Price. The user may code the objective function either in C++ or in Fortran 77. We offer a comparison of the new method with others of similar structure, by presenting results of computational experiments on a set of test functions (a minimal sketch of Price's original procedure follows the program summary below).

Program summary

Title of program: GenPrice
Catalogue identifier: ADWP
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWP
Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: The tool is designed to be portable in all systems running the GNU C++ compiler
Installation: University of Ioannina, Greece
Programming language used: GNU-C++, GNU-C, GNU Fortran-77
Memory required to execute with typical data: 200 KB
No. of bits in a word: 32
No. of processors used: 1
Has the code been vectorized or parallelized?: No
No. of lines in distributed program, including test data, etc.: 13 135
No. of bytes in distributed program, including test data, etc.: 78 512
Distribution format: tar.gz
Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances that a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, solving a nonlinear system of equations via optimization, employing a “least squares” type of objective, one may encounter many local minima that do not correspond to solutions, i.e. minima with values far from zero.
Method of solution: Grammatical Evolution is used to accelerate the process of finding the global minimum of a multidimensional, multimodal function, in the framework of the original “Controlled Random Search” algorithm.
Typical running time: Depending on the objective function.
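For orientation, a minimal version of Price's original Controlled Random Search loop, without the grammatical-evolution sampling that GenPrice adds, can be written as follows. The objective (2-D Rosenbrock), box and population size are placeholders for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Placeholder objective: 2-D Rosenbrock, global minimum 0 at (1, 1).
double f(const std::vector<double>& x) {
    return 100.0 * std::pow(x[1] - x[0] * x[0], 2) + std::pow(1.0 - x[0], 2);
}

int main() {
    const int    n = 2, N = 50;                  // dimension, population size
    const double lo = -5.0, hi = 5.0;            // rectangular hyperbox

    std::mt19937 gen(1);
    std::uniform_real_distribution<double> box(lo, hi);
    std::uniform_int_distribution<int>     pick(0, N - 1);

    // Initial population: N random points in the box.
    std::vector<std::vector<double>> P(N, std::vector<double>(n));
    std::vector<double> Pf(N);
    for (int i = 0; i < N; ++i) {
        for (int d = 0; d < n; ++d) P[i][d] = box(gen);
        Pf[i] = f(P[i]);
    }

    for (int it = 0; it < 20000; ++it) {
        // Pick n+1 distinct population members and reflect the last one
        // through the centroid of the first n (Price's trial-point rule).
        std::vector<int> idx;
        while ((int)idx.size() < n + 1) {
            int j = pick(gen);
            if (std::find(idx.begin(), idx.end(), j) == idx.end()) idx.push_back(j);
        }
        std::vector<double> trial(n, 0.0);
        bool inside = true;
        for (int d = 0; d < n; ++d) {
            double centroid = 0.0;
            for (int k = 0; k < n; ++k) centroid += P[idx[k]][d] / n;
            trial[d] = 2.0 * centroid - P[idx[n]][d];
            if (trial[d] < lo || trial[d] > hi) inside = false;
        }
        if (!inside) continue;

        // Replace the current worst point if the trial improves on it.
        int worst = (int)(std::max_element(Pf.begin(), Pf.end()) - Pf.begin());
        double ft = f(trial);
        if (ft < Pf[worst]) { P[worst] = trial; Pf[worst] = ft; }
    }

    int best = (int)(std::min_element(Pf.begin(), Pf.end()) - Pf.begin());
    std::printf("best value %.6f at (%.3f, %.3f)\n", Pf[best], P[best][0], P[best][1]);
    return 0;
}
```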
