Similar Documents (20 results)
1.
A vertex reconstruction algorithm based on the Gaussian-sum filter (GSF) was developed and implemented in the framework of the CMS reconstruction program. While linear least-squares estimators are optimal when all observation errors are Gaussian, the GSF offers a better treatment of non-Gaussian distributions of track parameter errors, provided these are modeled by Gaussian mixtures. The algorithm has been verified and evaluated with simulated data, and the results are compared to the Kalman filter and to an adaptive vertex estimator.
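The abstract gives no code; as a minimal illustration of the idea, the following sketch performs a one-dimensional Gaussian-sum measurement update, in which every pairing of a prior mixture component with a noise mixture component is updated by the ordinary Kalman formulas and reweighted by its marginal likelihood. All names and numbers are hypothetical and unrelated to the CMS implementation.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979323846;

// One component of a Gaussian mixture: weight w, mean m, variance v.
struct Component { double w, m, v; };

// Gaussian density; used to reweight components by the marginal
// likelihood of the measurement z.
double gauss(double z, double m, double v) {
    return std::exp(-0.5 * (z - m) * (z - m) / v) / std::sqrt(2.0 * PI * v);
}

// GSF measurement update for a 1-D state x with measurement z = x + e:
// every (prior, noise) component pair gets a standard Kalman update,
// and its weight is the product of the input weights times the
// marginal likelihood of z under that pair.
std::vector<Component> gsfUpdate(const std::vector<Component>& prior,
                                 const std::vector<Component>& noise,
                                 double z) {
    std::vector<Component> post;
    double norm = 0.0;
    for (const Component& p : prior)
        for (const Component& n : noise) {
            double vTot = p.v + n.v;                 // innovation variance
            double K = p.v / vTot;                   // 1-D Kalman gain
            Component c;
            c.m = p.m + K * (z - n.m - p.m);         // updated mean
            c.v = (1.0 - K) * p.v;                   // updated variance
            c.w = p.w * n.w * gauss(z, p.m + n.m, vTot);
            norm += c.w;
            post.push_back(c);
        }
    for (Component& c : post) c.w /= norm;           // renormalize weights
    return post;
}

int main() {
    // Toy numbers, not the CMS setup.  Prior: one Gaussian.  Noise:
    // narrow core plus wide tail, i.e. a non-Gaussian error
    // distribution modeled as a two-component mixture.
    std::vector<Component> prior = {{1.0, 0.0, 4.0}};
    std::vector<Component> noise = {{0.9, 0.0, 1.0}, {0.1, 0.0, 25.0}};
    for (const Component& c : gsfUpdate(prior, noise, 1.5))
        std::printf("w=%.3f  m=%.3f  v=%.3f\n", c.w, c.m, c.v);
}
```

The number of components grows multiplicatively with each update, which is why practical GSF implementations prune or merge components; this sketch omits that step.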

2.
Efficient nonlinear fitting algorithms that avoid matrix inversion are described. The algorithms were applied to the analysis of two- and three-fold coincidence γ-ray spectra and used to process coincidence matrices of fission data from the GAMMASPHERE multidetector spectrometer.
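The paper's algorithms are not reproduced here; the sketch below only illustrates the general principle of nonlinear least-squares fitting without matrix inversion, using plain steepest descent on χ² for a toy one-peak spectrum. The data, starting values and step sizes are invented for the example.

```cpp
#include <cmath>
#include <cstdio>

// Model: a single Gaussian peak with amplitude A and position x0
// (unit width), fitted to a spectrum by steepest descent on chi^2.
// No normal-equations matrix is ever built or inverted.
double model(double x, double A, double x0) {
    return A * std::exp(-0.5 * (x - x0) * (x - x0));
}

int main() {
    // Synthetic toy "spectrum": a peak of amplitude 50 centred at 10.
    double y[32];
    for (int i = 0; i < 32; ++i) y[i] = model(i, 50.0, 10.0);

    double A = 40.0, x0 = 9.0;                // starting values
    const double rateA = 2e-3, rateX = 1e-4;  // step sizes, tuned by hand
    for (int it = 0; it < 3000; ++it) {
        double gA = 0.0, gX = 0.0;
        for (int i = 0; i < 32; ++i) {
            double e = std::exp(-0.5 * (i - x0) * (i - x0));
            double r = A * e - y[i];          // residual
            gA += 2.0 * r * e;                // d(chi^2)/dA
            gX += 2.0 * r * A * e * (i - x0); // d(chi^2)/dx0
        }
        A  -= rateA * gA;                     // descent step
        x0 -= rateX * gX;
    }
    std::printf("A = %.3f  x0 = %.3f\n", A, x0);  // converges to ~50, ~10
}
```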

3.
4.
In the analysis of measured data one is often faced with the task of differentiating data numerically. This typically occurs when measured data must be processed, or when data are evaluated numerically during the solution of partial or ordinary differential equations. Usually little attention is paid to the accuracy of the resulting derivative estimates, because modern computers are assumed to be accurate to many digits. But measurements carry intrinsic errors, often far larger than the machine precision, and there is the effect of "loss of significance", well known in numerical mathematics and computational physics. The problem arises primarily in numerical subtraction, and the estimation of derivatives clearly involves the approximation of differences. In this article we discuss several techniques for the estimation of derivatives. As a novel aspect, we divide them into local and global methods and explain their respective shortcomings. We have developed a general scheme for global methods and illustrate our ideas with spline smoothing and spectral smoothing. The results of these less-known techniques are confronted with those of local methods; as typical representatives of the latter we chose Savitzky-Golay filtering and finite differences. Two basic quantities are used to characterize the results: the variance of the difference between the true derivative and its estimate, and, as an important new characteristic, the smoothness of the estimate. We apply the different techniques to numerically generated data and demonstrate the application to data from an aeroacoustic experiment. We find that global methods are generally preferable when a smooth process is considered; for rough estimates, local methods work acceptably well.
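As a concrete illustration of the two local methods named above, the following sketch differentiates noisy samples with a plain central difference and with the 5-point, quadratic-fit Savitzky-Golay first-derivative filter, whose weights are (-2, -1, 0, 1, 2)/(10h). The smoothing filter damps the noise amplification of the bare difference. The test signal and noise model are invented for the example.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const int N = 100;
    const double h = 0.1;        // sample spacing
    double y[N];
    // Noisy samples of sin(x); the true derivative is cos(x).  The
    // high-frequency term stands in for measurement noise.
    for (int i = 0; i < N; ++i)
        y[i] = std::sin(i * h) + 1e-2 * std::sin(1000.0 * i);

    for (int i = 2; i < N - 2; ++i) {
        // (a) Central difference: noise is amplified by ~1/h.
        double central = (y[i + 1] - y[i - 1]) / (2.0 * h);
        // (b) Savitzky-Golay first derivative, quadratic fit, 5 points.
        double savgol = (-2.0 * y[i - 2] - y[i - 1]
                         + y[i + 1] + 2.0 * y[i + 2]) / (10.0 * h);
        if (i % 20 == 0)
            std::printf("x=%4.1f  true=%+.4f  central=%+.4f  sav-gol=%+.4f\n",
                        i * h, std::cos(i * h), central, savgol);
    }
}
```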

5.
A class of fast adaptive Fourier-based transforms was used for spectroscopic data compression. These transforms are based on adaptive modification of the Cooley-Tukey signal flow graph; adaptive versions of the cosine, cosine-Haar and cosine-Walsh transforms of various degrees were taken as the basis for the experiments. The transform kernels are modified according to reference vectors representing a given class of processed data. The results obtained with these transforms for the compression of γ-γ coincidence spectra are presented and compared with those obtained using the classical transforms. Both the classical and the adaptive transforms can be used for off-line as well as on-line compression.
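The adaptive cosine-Haar and cosine-Walsh transforms themselves are not reproduced here; the sketch below shows only the common principle of transform-based compression, using a plain (non-adaptive) DCT-II: transform, discard most coefficients, inverse-transform, and check the reconstruction error. All data are synthetic.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979323846;

// Orthonormal DCT-II of x (naive O(N^2) version for clarity).
std::vector<double> dct(const std::vector<double>& x) {
    int N = (int)x.size();
    std::vector<double> X(N, 0.0);
    for (int k = 0; k < N; ++k) {
        double s = std::sqrt((k == 0 ? 1.0 : 2.0) / N);
        for (int n = 0; n < N; ++n)
            X[k] += s * x[n] * std::cos(PI * (2 * n + 1) * k / (2.0 * N));
    }
    return X;
}

// Inverse transform (DCT-III with the same scaling).
std::vector<double> idct(const std::vector<double>& X) {
    int N = (int)X.size();
    std::vector<double> x(N, 0.0);
    for (int n = 0; n < N; ++n)
        for (int k = 0; k < N; ++k) {
            double s = std::sqrt((k == 0 ? 1.0 : 2.0) / N);
            x[n] += s * X[k] * std::cos(PI * (2 * n + 1) * k / (2.0 * N));
        }
    return x;
}

int main() {
    // A smooth synthetic "spectrum" slice of 64 channels.
    std::vector<double> y(64);
    for (int i = 0; i < 64; ++i)
        y[i] = 100.0 * std::exp(-0.01 * (i - 30) * (i - 30));

    std::vector<double> coef = dct(y);
    for (size_t k = 8; k < coef.size(); ++k) coef[k] = 0.0;  // keep 8 of 64
    std::vector<double> r = idct(coef);

    double maxErr = 0.0;
    for (size_t i = 0; i < y.size(); ++i)
        maxErr = std::max(maxErr, std::fabs(r[i] - y[i]));
    std::printf("max reconstruction error with 8/64 coefficients: %.3f\n", maxErr);
}
```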

6.
We present a compact library for the analysis of nuclear spectra. The library consists of sophisticated functions for background elimination, smoothing, peak searching, deconvolution, and peak fitting. The functions can process one- and two-dimensional spectra. The software comprises a number of conventional as well as newly developed methods needed to analyze experimental data.

Program summary

Program title: SpecAnalysLib 1.1
Catalogue identifier: AEDZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDZ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 42 154
No. of bytes in distributed program, including test data, etc.: 2 379 437
Distribution format: tar.gz
Programming language: C++
Computer: Pentium 3 PC 2.4 GHz or higher, Borland C++ Builder v. 6. A precompiled Windows version is included in the distribution package
Operating system: Windows 32-bit versions
RAM: 10 MB
Word size: 32 bits
Classification: 17.6
Nature of problem: The demand for advanced, highly effective experimental data analysis functions is enormous. The library package represents one approach to giving physicists the possibility to use advanced routines simply by calling them from their own programs. SpecAnalysLib is a collection of functions for the analysis of one- and two-parameter γ-ray spectra, but they can be used for other types of data as well. The library consists of sophisticated functions for background elimination, smoothing, peak searching, deconvolution, and peak fitting.
Solution method: The background-estimation algorithms are based on the Sensitive Non-linear Iterative Peak (SNIP) clipping algorithm (a minimal sketch of the clipping step follows this summary). The smoothing algorithms are based on the convolution of the original data with several types of filters and on algorithms based on discrete Markov chains. The peak-searching algorithms use smoothed second differences and can search for peaks of general form. The deconvolution (decomposition, unfolding) functions use the Gold iterative algorithm, its improved high-resolution version, and the Richardson-Lucy algorithm. For peak fitting we have implemented two approaches. The first is based on the algorithm without matrix inversion (AWMI), which allows fitting large blocks of data and a large number of parameters. The second is based on solving the system of linear equations with the Stiefel-Hestenes method; it converges faster than AWMI but is not suitable for fitting a large number of parameters.
Restrictions: Dimensionality of the analyzed data is limited to two.
Unusual features: A dynamically loadable library (DLL) of processing functions that users can call from their own programs.
Running time: Most processing routines execute interactively or in a few seconds. Computationally intensive routines (deconvolution, fitting) take longer, depending on the number of iterations specified and the volume of the processed data.
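A minimal sketch of the SNIP clipping step mentioned under "Solution method", assuming the basic variant in which each channel is replaced by the smaller of itself and the average of its two neighbours at distance p, for increasing window p. This is not the library's code; real implementations add refinements such as operating on transformed counts.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Basic SNIP background estimate: after m clipping passes, peaks
// narrower than about m channels are clipped away and the baseline
// under them remains.
std::vector<double> snip(std::vector<double> y, int m) {
    int n = (int)y.size();
    for (int p = 1; p <= m; ++p) {
        std::vector<double> v = y;
        for (int i = p; i < n - p; ++i)
            v[i] = std::min(y[i], 0.5 * (y[i - p] + y[i + p]));
        y = v;
    }
    return y;
}

int main() {
    // Toy data: linear baseline plus a narrow peak around channel 50.
    std::vector<double> y(100);
    for (int i = 0; i < 100; ++i) {
        y[i] = 20.0 + 0.1 * i;
        if (i >= 48 && i <= 52) y[i] += 80.0;   // the peak
    }
    std::vector<double> bg = snip(y, 8);
    std::printf("channel 50: data=%.1f  background=%.1f\n", y[50], bg[50]);
}
```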

7.
8.
Modern high-energy physics experiments have to process terabytes of input data produced in particle collisions, and the core of many data reconstruction algorithms in high-energy physics is the Kalman filter. The speed of Kalman-filter-based algorithms is therefore of crucial importance in on-line data processing. This is especially true for the combinatorial track-finding stage, where the Kalman-filter-based track fit is used very intensively. Developing fast reconstruction algorithms that use the maximum available power of the processors is thus important, in particular for the initial selection of events that carry signals of interesting physics.

One such powerful feature, supported by almost all up-to-date PC processors, is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. The novel Cell processor extends the parallelization further by combining a general-purpose PowerPC processor core with eight streamlined coprocessing elements that greatly accelerate vector-processing applications.

In the investigation described here, after a significant memory optimization and a comprehensive numerical analysis, the Kalman-filter-based track-fitting algorithm of the CBM experiment has been vectorized using inline operator overloading, so the algorithm remains flexible with respect to the CPU family used for data reconstruction. As a result of all these changes, the SIMDized Kalman-filter-based track-fitting algorithm takes 1 μs per track, which is 10 000 times faster than the initial version; porting the algorithm to a Cell Blade computer gives another factor of 10 of speedup. Finally, we compare the performance of the tracking algorithm on three different CPU architectures: Intel Xeon, AMD Opteron and Cell Broadband Engine.
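A minimal sketch of the operator-overloading idea described above: a packed four-lane float type whose arithmetic operators act on all lanes, so that a scalar-looking expression such as a Kalman gain runs on four tracks at once. The portable loop bodies stand in for the SSE or AltiVec intrinsics a real implementation would dispatch to; all names are hypothetical.

```cpp
#include <cstdio>

// A toy "SIMD vector": four tracks packed into one value.  Only the
// operators needed below are defined; the rest are analogous.
struct F32x4 {
    float v[4];
    F32x4 operator+(const F32x4& o) const {
        F32x4 r; for (int i = 0; i < 4; ++i) r.v[i] = v[i] + o.v[i]; return r;
    }
    F32x4 operator/(const F32x4& o) const {
        F32x4 r; for (int i = 0; i < 4; ++i) r.v[i] = v[i] / o.v[i]; return r;
    }
};

// One scalar-looking Kalman gain computation, K = P / (P + R),
// executed for four tracks simultaneously.
F32x4 gain(F32x4 P, F32x4 R) { return P / (P + R); }

int main() {
    F32x4 P = {{1.0f, 2.0f, 4.0f, 8.0f}};   // four track covariances
    F32x4 R = {{1.0f, 1.0f, 1.0f, 1.0f}};   // four measurement variances
    F32x4 K = gain(P, R);
    for (int i = 0; i < 4; ++i) std::printf("K[%d] = %.3f\n", i, K.v[i]);
}
```

Because the fitter source only ever uses the overloaded operators, switching between a scalar build, an SSE build and a Cell build reduces to swapping the vector type, which is the flexibility the abstract refers to.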

9.
An open-source software system called GaussDal for the management of results from quantum chemical computations is presented. Chemical data contained in output files from different quantum chemistry programs are automatically extracted and incorporated into a relational database (PostgreSQL). The Structured Query Language (SQL) is used to extract combinations of chemical properties (e.g., molecules, orbitals, thermo-chemical properties, basis sets, etc.) into data tables for further analysis, processing and visualization. This type of data management is particularly suited for projects involving a large number of molecules. The current version of GaussDal supports parsers for Gaussian and Dalton output files; future versions may also include parsers for other quantum chemistry programs. For visualization and analysis of the data tables generated by GaussDal we have used the locally developed open-source software SciCraft.

Program summary

Title of program: GaussDal
Catalogue identifier: ADVT
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVT
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computers: Any
Operating system under which the system has been tested: Linux
Programming language used: Python
Memory required to execute with typical data: 256 MB
No. of bits in word: 32 or 64
No. of processors used: 1
Has the code been vectorized or parallelized?: No
No. of lines in distributed program, including test data, etc.: 543 531
No. of bytes in distributed program, including test data, etc.: 7 718 121
Distribution format: tar.gzip file
Nature of physical problem: Handling of large amounts of data from quantum chemistry computations.
Method of solution: Use of an SQL-based database and parsers specific to each quantum chemistry program.
Restriction on the complexity of the problem: The program is currently limited to Gaussian and Dalton output, but is expandable to other formats. It generates subsets of multiple data tables from output files.

10.
In experimental data acquisition and evaluation, the need arises for some kind of "expert system" to support sophisticated instruments and data evaluation applications. Previous attempts to develop an expert system for such purposes in X-ray Photoelectron Spectroscopy (XPS) were based on various external expert system shells. This paper presents a simple reasoning expert system engine that can be built directly into data acquisition and evaluation software. Some problems arising from the lack of human intelligence in the inferencing process are also discussed. The feasibility of the system is demonstrated by implementing a real-life rule set, an example (the carbon contamination rules) taken from the field of XPS. Apart from the field-specific rules, the package can be used in any field.
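The paper's engine is not reproduced here; the sketch below shows only the generic forward-chaining loop such an embedded reasoning engine needs, firing rules until the fact base reaches a fixed point. The facts and rules are invented placeholders, loosely inspired by the carbon-contamination example mentioned above.

```cpp
#include <cstdio>
#include <set>
#include <string>
#include <vector>

// A rule fires when all its premises are present in the fact base,
// adding its conclusion as a new fact.
struct Rule { std::vector<std::string> premises; std::string conclusion; };

int main() {
    std::set<std::string> facts = {"C1s peak present", "sample not cleaned"};
    std::vector<Rule> rules = {
        {{"C1s peak present", "sample not cleaned"},
         "carbon contamination likely"},
        {{"carbon contamination likely"},
         "apply binding-energy correction"},
    };

    bool changed = true;
    while (changed) {                        // iterate to a fixed point
        changed = false;
        for (const Rule& r : rules) {
            bool allHold = true;
            for (const std::string& p : r.premises)
                if (!facts.count(p)) { allHold = false; break; }
            if (allHold && facts.insert(r.conclusion).second)
                changed = true;              // a new fact was derived
        }
    }
    for (const std::string& f : facts) std::printf("fact: %s\n", f.c_str());
}
```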

11.
A library for reading and writing data in the SUSY Les Houches Accord 2 format is presented. The implementation is in native Fortran 77. The data are contained in a single array, conveniently indexed by preprocessor statements (a sketch of this indexing scheme follows the program summary below).

Program summary

Program title: SLHA2Lib
Catalogue identifier: AEDY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7550
No. of bytes in distributed program, including test data, etc.: 160 123
Distribution format: tar.gz
Programming language: Fortran
Computer: For the build process, a Fortran 77 compiler in a Unix-like environment (make, shell) is required
Operating system: Linux, Mac OS, Windows (Cygwin), Tru64 Unix
RAM: The SLHA record is currently 88 944 bytes long
Classification: 4.14, 11.6
Nature of problem: Exchange of SUSY parameters and decay information in an ASCII file format.
Solution method: The SLHA2Lib provides routines for reading and writing files in the SUSY Les Houches Accord 2 format, a common interchange format for SUSY parameters and decay data.
Restrictions: The fixed-size array that holds the SLHA2 data necessarily limits the amount of decay data that can be stored. This limit can be enlarged by editing and re-running the SLHA2.m program.
Unusual features: Data are transported in a single "double complex" array in Fortran, indexed through preprocessor macros. This is about the simplest conceivable container and needs neither dynamic memory allocation nor Fortran extensions such as structures.
Running time: Both reading and writing an SLHA file typically take a few milliseconds.
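A sketch of the container design described above, a single flat "double complex" array addressed through preprocessor macros, transcribed into C++ for illustration. The offsets and macro names are invented; the real index macros are generated for the library and differ from these.

```cpp
#include <complex>
#include <cstdio>

typedef std::complex<double> dcomplex;

// The single flat container; the real record is much larger.
static dcomplex slhaData[64];

// Hypothetical block offsets and accessor macros.  The point of the
// design: no dynamic allocation, no derived types, just one array and
// compile-time index arithmetic.
#define OffsetMass 0
#define OffsetNMix 8
#define Mass_Mf(i)    slhaData[OffsetMass + (i)]
#define NMix_ZN(i, j) slhaData[OffsetNMix + 4 * ((i) - 1) + ((j) - 1)]

int main() {
    Mass_Mf(1) = 125.0;                     // store a mass entry
    NMix_ZN(1, 2) = dcomplex(0.1, 0.0);     // store a mixing-matrix element
    std::printf("Mf(1) = %.1f, ZN(1,2) = %.2f\n",
                Mass_Mf(1).real(), NMix_ZN(1, 2).real());
}
```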

12.
The computing cluster built at Bologna to provide the LHCb Collaboration with a powerful Monte Carlo production tool is presented. It is a performance-oriented Beowulf-class cluster, made of rack-mounted commodity components, designed to minimize operational support requirements and to provide full and continuous availability of the computing resources. We describe the architecture of the cluster and discuss the technical solutions adopted for each specialized sub-system.

13.
A standard file format is proposed to store process and event information, primarily output from parton-level event generators for further use by general-purpose ones. The information content is identical to what was defined by the Les Houches Accord five years ago, but then in terms of Fortran common blocks. Here this information is embedded in a minimal XML-style structure, for clarity and to simplify parsing.
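A schematic writer for the proposed structure, assuming the accord's <LesHouchesEvents>, <init> and <event> tags; the placeholder lines stand for the run and event records that were formerly exchanged through the HEPRUP and HEPEUP Fortran common blocks, and are not real data.

```cpp
#include <cstdio>

// Emit the minimal XML-style skeleton of an event file.  The numeric
// content of the init and event blocks is deliberately left as
// placeholder text here.
int main() {
    std::FILE* f = std::fopen("events.lhe", "w");
    if (!f) return 1;
    std::fprintf(f, "<LesHouchesEvents version=\"1.0\">\n");
    std::fprintf(f, "<!-- generator-specific header may go here -->\n");
    std::fprintf(f, "<init>\n");
    std::fprintf(f, "  ... run-level information (one HEPRUP-like record) ...\n");
    std::fprintf(f, "</init>\n");
    std::fprintf(f, "<event>\n");
    std::fprintf(f, "  ... one event record (HEPEUP-like particle lines) ...\n");
    std::fprintf(f, "</event>\n");
    std::fprintf(f, "</LesHouchesEvents>\n");
    std::fclose(f);
}
```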

14.
The paper elucidates, with an analytic example, a subtle mistake in the application of the extended likelihood method to the problem of determining the fractions of pure samples in a mixed sample from the shape of the distribution of a random variable. This mistake, which affects two widely used software packages, leads to a misestimation of the errors.
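For context, the standard form of the extended likelihood for estimating species yields ν_j from the normalized shapes p_j(x) of an observable x is recalled below; the paper's point concerns a subtle error in how such fits are set up, which this display does not attempt to reproduce.

```latex
% N observed events x_1..x_N; k species with yields nu_j and
% normalized densities p_j(x); nu = sum_j nu_j is the total yield.
\mathcal{L}(\nu_1,\dots,\nu_k)
  = \frac{e^{-\nu}\,\nu^{N}}{N!}\,
    \prod_{i=1}^{N} \sum_{j=1}^{k} \frac{\nu_j}{\nu}\, p_j(x_i),
\qquad \nu = \sum_{j=1}^{k} \nu_j .
```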

15.
A crucial issue in many complex experiments is the flexibility and ease of the online data analysis. We present an easy-to-learn and intuitive-to-operate method of interactive online analysis for use in projectile-fragmentation-induced γ-ray spectroscopy experiments at the GSI facility (the RISING experiments). Through a sequence of dialogue boxes the experimenter can create a complex definition, which produces a conditional spectrum. These definitions can be applied immediately by the online analysis, which runs in parallel as a separate program. Some problems regarding the logic of gating conditions are discussed.

16.
We describe a C++ implementation of the Optimal Jet Definition for identification of jets in hadronic final states of particle collisions. We explain interface subroutines and provide a usage example. The source code is available from http://www.inr.ac.ru/~ftkachov/projects/jets/.

Program summary

Title of program: Optimal Jet Finder (v1.0 C++)
Catalogue identifier: ADSB_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSB_v2_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: any computer with a standard C++ compiler
Tested with:
(1) GNU gcc 3.4.2, Linux Fedora Core 3, Intel i686;
(2) Forte Developer 7 C++ 5.4, SunOS 5.9, UltraSPARC III+;
(3) Microsoft Visual C++ Toolkit 2003 (compiler 13.10.3077, linker 7.10.30777, option /EHsc), Windows XP, Intel i686.
Programming language used: C++
Memory required: ∼1 MB (or more, depending on the settings)
No. of lines in distributed program, including test data, etc.: 3047
No. of bytes in distributed program, including test data, etc.: 17 884
Distribution format: tar.gz
Nature of physical problem: Analysis of hadronic final states in high-energy particle collision experiments often involves the identification of hadronic jets. A large number of hadrons detected in the calorimeter is reduced to a few jets by means of a jet-finding algorithm. The jets are used in further analysis, which would be difficult or impossible when applied directly to the hadrons. Grigoriev et al. [D.Yu. Grigoriev, E. Jankowski, F.V. Tkachov, Phys. Rev. Lett. 91 (2003) 061801] provide a brief introduction to the subject of jet-finding algorithms, and a general review of the physics of jets can be found in [R. Barlow, Rep. Prog. Phys. 36 (1993) 1067].
Method of solution: The software we provide is an implementation of the so-called Optimal Jet Definition (OJD). The theory of OJD was developed in [F.V. Tkachov, Phys. Rev. Lett. 73 (1994) 2405; Erratum, Phys. Rev. Lett. 74 (1995) 2618; F.V. Tkachov, Int. J. Modern Phys. A 12 (1997) 5411; F.V. Tkachov, Int. J. Modern Phys. A 17 (2002) 2783]. The desired jet configuration is obtained as the one that minimizes Ω, a certain function of the input particles and jet configuration. A FORTRAN 77 implementation of OJD is described in [D.Yu. Grigoriev, E. Jankowski, F.V. Tkachov, Comput. Phys. Comm. 155 (2003) 42].
Restrictions on the complexity of the program: The memory required by the program is proportional to the number of particles in the input times the number of jets in the output. For example, for 650 particles and 20 jets, ∼300 KB of memory is required.
Typical running time: In the running mode with a fixed number of jets, the running time is proportional to the number of particles in the input, times the number of jets in the output, times the number of different random initial configurations tried (ntries); a sketch of this multi-start scheme follows this summary. For example, for 65 particles in the input and 4 jets in the output, the running time is ∼4×10^-3 s per try (Pentium 4, 2.8 GHz).

17.
We describe a FORTRAN 77 implementation of the optimal jet definition for identification of jets in hadronic final states of particle collisions. We discuss details of the implementation, explain interface subroutines and provide a usage example. The source code is available from http://www.inr.ac.ru/~ftkachov/projects/jets/.

Program summary

Title of program: Optimal Jet Finder (OJF_014)
Catalogue identifier: ADSB
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSB
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: Any computer with a FORTRAN 77 compiler
Tested with: g77/Linux on Intel, Alpha and Sparc; Sun f77/Solaris (thwgs.cern.ch); xlf/AIX (rsplus.cern.ch); MS Fortran PowerStation 4.0/Win98
Programming language used: FORTRAN 77
Memory required: ∼1 MB (or more, depending on the settings)
Number of bytes in distributed program, including examples and test data: 251 463
Distribution format: tar gzip file
Keywords: Hadronic jets, jet finding algorithms
Nature of physical problem: Analysis of hadronic final states in high energy particle collision experiments often involves identification of hadronic jets. A large number of hadrons detected in the calorimeter is reduced to a few jets by means of a jet finding algorithm. The jets are used in further analysis which would be difficult or impossible when applied directly to the hadrons. Grigoriev et al. [hep-ph/0301185] provide a brief introduction to the subject of jet finding algorithms and a general review of the physics of jets can be found in [Rep. Prog. Phys. 36 (1993) 1067].
Method of solution: The software we provide is an implementation of the so-called optimal jet definition (OJD). The theory of OJD was developed by Tkachov [Phys. Rev. Lett. 73 (1994) 2405; 74 (1995) 2618; Int. J. Mod. Phys. A 12 (1997) 5411; 17 (2002) 2783]. The desired jet configuration is obtained as the one that minimizes Ω, a certain function of the input particles and jet configuration.
Restrictions on the complexity of the program: The size of the largest data structure the program uses is (maximal number of particles in the input) × (maximal number of jets in the output) × 8 bytes (for the standard settings, <1 MB). Therefore, there is no memory restriction for any conceivable application for which the program was designed.
Typical running time: The running time depends strongly on the physical process being analyzed and the parameters used. For the benchmark process we studied, with an average of ∼80 particles in the input, the running time was <10^-2 s on a modest PC (per event, with ntries=1). For a fixed number of jets the complexity of the algorithm grows linearly with the number of particles (cells) in the input, in contrast with other known jet finding algorithms for which this dependence is cubic. The reader is referred to Grigoriev et al. [hep-ph/0301185] for a more detailed discussion of this issue.

18.
The performance of programming approaches and languages used to develop software for numerical simulation of granular material dynamics by the discrete element method (DEM) is investigated. The granular material considered represents a space filled with discrete spherical visco-elastic particles, and the behaviour of the material under imposed conditions is simulated using the DEM. The object-oriented programming approach (implemented in C++) was compared with the procedural approach (using FORTRAN 90 and OBJECT PASCAL) in order to test their efficiency. Identical neighbour-searching algorithms, contact-force models and time-integration methods were implemented in all versions of the code.

Two identical representative examples of the dynamic behaviour of granular material were solved on a personal computer (IBM PC compatible). The results show that the software based on the procedural approach runs faster than the software based on OOP, and that the software developed in FORTRAN 90 runs faster than the software developed in OBJECT PASCAL.
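A minimal sketch of the DEM ingredients named above (a visco-elastic contact force and explicit time integration) for the simplest possible case of two spheres on a line, with a linear spring-dashpot normal force. All parameter values are invented toy numbers, not taken from the paper.

```cpp
#include <cstdio>

int main() {
    const double R = 0.5, m = 1.0;   // particle radius and mass
    const double k = 1e4, c = 5.0;   // spring stiffness, damping constant
    double x1 = 0.0, v1 =  1.0;      // particle 1 moving right
    double x2 = 1.2, v2 = -1.0;      // particle 2 moving left
    const double dt = 1e-4;          // explicit time step

    for (int step = 0; step < 20000; ++step) {
        double overlap = 2.0 * R - (x2 - x1);    // positive means contact
        double F = 0.0;
        if (overlap > 0.0)
            F = k * overlap - c * (v2 - v1);     // spring + dashpot (normal)
        // Equal and opposite contact forces, explicit Euler update.
        v1 -= (F / m) * dt;  v2 += (F / m) * dt;
        x1 += v1 * dt;       x2 += v2 * dt;
    }
    // After the collision the velocities are roughly reversed, with
    // some energy dissipated by the dashpot.
    std::printf("final velocities: v1=%.3f  v2=%.3f\n", v1, v2);
}
```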

19.
20.
This paper describes a package for calculations of expressions with Dirac matrices, and its advantages over existing similar packages. The MatrixExp package is intended for the simplification of complex expressions involving γ-matrices, providing such tools as automatic Feynman parameterization, integration in d-dimensional space, and sorting and grouping of results in a given order. In comparison with the existing similar package Tracer, MatrixExp offers more flexible input. The user-accessible functions of the MatrixExp package are described in detail, and an example is presented of the calculation of the Feynman diagram for the process b → sγg using the functions of the package.

Program summary

Title of program: MatrixExp
Catalogue identifier: ADWB
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWB
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Programming language: MATHEMATICA
Computer: PC Pentium
Operating system: Windows
No. of lines in distributed program, including test data, etc.: 1551
No. of bytes in distributed program, including test data, etc.: 16 040
Distribution format: tar.gz
RAM: Loading the package uses approx. 3 500 000 bytes of RAM. However, the memory required for calculations depends heavily on the expressions in view, as the package uses recursive functions and MATHEMATICA allocates memory dynamically. The package has been tested on a PC Pentium II 233 MHz with 128 MB of memory, calculating typical diagrams of contemporary calculations.
Nature of problem: Feynman diagram calculation, simplification of expressions with γ-matrices.
Solution method: Analytic transformations, dimensional regularization, Feynman parameterization.
Restrictions: The MatrixExp package works only with a single line of γ-matrices (G[l1,…]), in contrast to the Tracer package, which works with multiple lines; i.e., the following is possible in Tracer but not in MatrixExp: G[l1,…]**G[l2,…]**G[l3,…].
Unusual features: none
Running time: Seconds for expressions with several different γ-matrices on a Pentium IV 1.8 GHz, and of the order of a minute on a Pentium II 233 MHz. Calculation times rise with the number of matrices.
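As a small numeric cross-check of the kind of γ-matrix algebra such packages manipulate symbolically, the following sketch builds the Dirac-representation γ-matrices and verifies the textbook identity Tr(γ^μ γ^ν) = 4 g^{μν}. It is independent of MatrixExp and written in C++ for consistency with the other sketches here.

```cpp
#include <complex>
#include <cstdio>

typedef std::complex<double> C;

// g[mu][row][col]: the four Dirac matrices, zero-initialized globally.
C g[4][4][4];

// Plain 4x4 complex matrix product r = a * b.
void mul(const C a[4][4], const C b[4][4], C r[4][4]) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            r[i][j] = 0;
            for (int k = 0; k < 4; ++k) r[i][j] += a[i][k] * b[k][j];
        }
}

int main() {
    const C I(0, 1);
    // Dirac representation: gamma^0 = diag(1,1,-1,-1),
    // gamma^k = [[0, sigma_k], [-sigma_k, 0]].
    g[0][0][0] = 1;  g[0][1][1] = 1;  g[0][2][2] = -1; g[0][3][3] = -1;
    g[1][0][3] = 1;  g[1][1][2] = 1;  g[1][2][1] = -1; g[1][3][0] = -1;
    g[2][0][3] = -I; g[2][1][2] = I;  g[2][2][1] = I;  g[2][3][0] = -I;
    g[3][0][2] = 1;  g[3][1][3] = -1; g[3][2][0] = -1; g[3][3][1] = 1;

    // Check Tr(gamma^mu gamma^nu) = 4 g^{mu nu}: only the diagonal
    // entries survive, with values 4, -4, -4, -4.
    for (int mu = 0; mu < 4; ++mu)
        for (int nu = 0; nu < 4; ++nu) {
            C p[4][4];
            mul(g[mu], g[nu], p);
            C tr = p[0][0] + p[1][1] + p[2][2] + p[3][3];
            if (std::abs(tr) > 1e-12)
                std::printf("Tr(g%d g%d) = %.0f\n", mu, nu, tr.real());
        }
}
```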
