Similar Articles — 20 results found
1.
This paper describes a new algorithm for Monte Carlo integration, based on the Field Estimator for Arbitrary Spaces (FiEstAS). The algorithm is discussed in detail, and its performance is evaluated in the context of Bayesian analysis, with emphasis on multimodal distributions with strong parameter degeneracies. Source code is available upon request.

2.
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
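The standard estimator criticized above is easy to state concretely: it is the sample standard deviation of the integrand values divided by the square root of the number of points, which is only valid for independent samples. A minimal sketch (function names are illustrative, not from the paper):

```python
import math
import random

def mc_integrate(f, n, a=0.0, b=1.0, seed=0):
    """Standard Monte Carlo estimate of the integral of f on [a, b],
    with the usual sigma/sqrt(n) error estimate. The error formula
    assumes the sample points are i.i.d., which is exactly the
    assumption that fails for Quasi-Monte Carlo point sets."""
    rng = random.Random(seed)
    total = 0.0
    total_sq = 0.0
    for _ in range(n):
        y = f(a + (b - a) * rng.random())
        total += y
        total_sq += y * y
    mean = total / n
    var = total_sq / n - mean * mean        # sample variance of f
    estimate = (b - a) * mean
    error = (b - a) * math.sqrt(var / n)    # not valid for QMC points
    return estimate, error

est, err = mc_integrate(lambda x: x * x, 100_000)
# the exact integral of x^2 on [0, 1] is 1/3
```

Applied to a low-discrepancy sequence, the same `error` formula would typically overestimate the true error, which is the motivation for the ensemble-based estimator the abstract proposes.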

3.
The paper elucidates, with an analytic example, a subtle mistake in the application of the extended likelihood method to the problem of determining the fractions of pure samples in a mixed sample from the shape of the distribution of a random variable. This mistake, which affects two widely used software packages, leads to a misestimate of the errors.

4.
The Monte Carlo generator MERADGEN 1.0 has been developed for the simulation of radiative events in parity-conserving doubly-polarized Møller scattering. Analytical integration wherever possible provides fast and accurate generation. Some numerical tests and histograms are presented.

Program summary

Program title: MERADGEN 1.0
Catalogue identifier: ADYM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYM_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Programming language: FORTRAN 77
Computer(s) for which the program has been designed: all
Operating system(s) for which the program has been designed: Linux
RAM required to execute with typical data: 1 MB
No. of lines in distributed program, including test data, etc.: 2196
No. of bytes in distributed program, including test data, etc.: 23 501
Distribution format: tar.gz
Has the code been vectorized or parallelized?: no
Number of processors used: 1
Supplementary material: none
External routines/libraries used: none
CPC Program Library subprograms used: none
Nature of problem: Simulation of radiative events in parity-conserving doubly-polarized Møller scattering.
Solution method: Monte Carlo simulation within QED; analytical integration wherever possible provides fast and accurate generation.
Restrictions: none
Unusual features: none
Additional comments: none
Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 45 seconds on an AMD Athlon 2.80 GHz processor.

5.
The deterministic method based on ray tracing is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably for simulating tomographic acquisition or, even more demanding, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs. As a result, simulated radiographs can typically be obtained in a fraction of a second on a simple personal computer.

Program summary

Program title: X-ray
Catalogue identifier: AEAD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 416 257
No. of bytes in distributed program, including test data, etc.: 6 018 263
Distribution format: tar.gz
Programming language: C (Visual C++)
Computer: Any PC. Tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
Operating system: Windows XP
Classification: 14, 21.1
Nature of problem: Radiographic simulation of voxelized objects based on the ray-tracing technique.
Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
Restrictions: Memory constraints. There are three programs in all:
A. Program for test 3.1(1): Object and detector have axis-aligned orientation. Memory required with typical data: 207 MB, depending on the size of the input file. Typical running time: 2.30 s (tested in release mode, the same below).
B. Program for test 3.1(2), the main program: Object in arbitrary orientation. Memory required with typical data: 114 MB, depending on the size of the input file. Typical running time: 1.60 s.
C. Program for test 3.2: Simulation of X-ray video recordings. Memory required with typical data: 215 MB, depending on the size of the input file. Typical computation time: 27.26 s for cast-5, 101.87 s for cast-6.
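The ray-box intersection routine named in the solution method is conventionally implemented with the slab method: intersect the ray with each pair of axis-aligned planes and keep the overlapping parameter interval. A self-contained sketch (not the distributed C code):

```python
def ray_box_intersect(origin, direction, box_min, box_max):
    """Slab-method intersection of a ray with an axis-aligned box.
    Returns (t_near, t_far) ray-parameter values, or None on a miss.
    A zero direction component means the ray is parallel to that slab."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:
                return None              # parallel to the slab and outside it
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return None                  # slab intervals do not overlap: miss
    return t_near, t_far

# a ray along +z from z = -5 enters the unit cube at t = 4, leaves at t = 6
hit = ray_box_intersect((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1))
```

In a voxel-based simulator, this test is applied to each object's bounding box before the per-voxel projection work, which is where the minimum bounding rectangles come in.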

6.
7.
Consideration was given to the main issues in developing a flexible modeling complex for the operator support system of an NPP power-generating unit with a VVER reactor, as well as to a model for calculating the dynamics of the neutron-physical parameters of the VVER-1000 core, with the aim of using it in the flexible modeling complex.

8.
This paper focuses on the implementation and performance analysis of a smooth particle mesh Ewald method on several parallel computers. We present the details of the algorithms and of our implementation that are used to optimize parallel efficiency on such machines.

9.
Thanks to the dramatic decrease in computer costs, the no less dramatic increase in those same computers' capabilities, and the availability of free software and libraries for setting up small parallel computing installations, the scientific community is now in a position where parallel computation is within easy reach even of moderately budgeted research groups. The software package PMCD (Parallel Monte Carlo Driver) was developed to drive the Monte Carlo simulation of a wide range of user-supplied models in parallel computing environments. A typical Monte Carlo simulation uses a software implementation of a function to repeatedly generate function values, and such implementations are usually developed for sequential runs. Our driver enables the Monte Carlo simulation to run in parallel with minimal changes to the original code that implements the function of interest to the researcher. In this communication we present the main goals and characteristics of our software, together with a simple study of its expected performance. Monte Carlo simulations are informally classified as "embarrassingly parallel", meaning that the gains from parallelizing a Monte Carlo run should be close to ideal, i.e. with speed-ups close to linear. Our simple study shows that, without compromising ease of use and implementation, one can get performance very close to the ideal.
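The "embarrassingly parallel" structure means independent chunks of samples can be farmed out to workers and averaged, with no communication during the run. A minimal sketch using Python's standard multiprocessing (illustrative only, not PMCD's actual interface):

```python
import random
from multiprocessing import Pool

def mc_chunk(args):
    """One independent chunk of a Monte Carlo run: estimate pi by
    sampling points in the unit square. Each chunk gets its own seed,
    so chunks are statistically independent."""
    n, seed = args
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return hits / n

def parallel_pi(total_samples=400_000, workers=4):
    """Split the samples across worker processes and average the
    per-chunk estimates; no inter-worker communication is needed."""
    chunk = total_samples // workers
    with Pool(workers) as pool:
        fractions = pool.map(mc_chunk, [(chunk, s) for s in range(workers)])
    return 4.0 * sum(fractions) / len(fractions)

if __name__ == "__main__":
    print(parallel_pi())
```

Because each worker only returns a single number, the speed-up is limited mainly by process start-up cost, which is why near-linear scaling is achievable.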

10.
Biophysical techniques such as single-molecule FRET, fluorescence microscopy, single ion-channel patch clamping, and optical tweezers often yield data that are noisy time series containing discrete steps. Here we present a method enabling objective identification of nonuniform steps present in such noisy data. Our method does not require the assumption of any underlying kinetic or state model and is thus particularly useful for the analysis of novel and poorly understood systems. In contrast to other model-independent methods, no parameters or other information are required from the user. We find that, at high noise levels, our method exceeds the performance of other model-independent methods in accurately locating steps in simulated noisy data.
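The core idea of model-independent step finding can be illustrated with the simplest possible version: place a single step at the index that minimizes the summed squared residuals of a two-level fit. This is a toy sketch of the general approach, not the authors' algorithm:

```python
def best_step(data):
    """Find the index that best splits `data` into two constant levels,
    by minimizing the total squared residual of a one-step fit.
    Returns (index, left_mean, right_mean)."""
    n = len(data)
    best = None
    for i in range(1, n):
        left, right = data[:i], data[i:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        cost = (sum((x - ml) ** 2 for x in left)
                + sum((x - mr) ** 2 for x in right))
        if best is None or cost < best[0]:
            best = (cost, i, ml, mr)
    _, i, ml, mr = best
    return i, ml, mr

# noiseless example: a step from level 0 to level 1 at index 5
i, ml, mr = best_step([0.0] * 5 + [1.0] * 5)
```

Full step-finding methods apply this kind of split recursively and must then decide which candidate steps are statistically significant in the presence of noise, which is where the methods differ.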

11.
Consideration was given to the combinatorial hierarchical design (composition) of the structure of an application telemetry system consisting of the on-board, radio, and ground communication equipment including the operator working place. The principal use was of the three-stage hierarchical morphological multicriterial design: (a) design of the tree model of the system in the form of the AND-OR tree and generation of the design alternatives for the hanging vertices of the constructed model (system parts/components), (b) multicriterial selection of the design alternatives for the parts of the designed system, (c) generation of the resulting combination of the selected alternatives taking into consideration their ordinal quality and compatibility. The process of hierarchical modular design is illustrated by the example of an application.

12.
The computing cluster built at Bologna to provide the LHCb Collaboration with a powerful Monte Carlo production tool is presented. It is a performance oriented Beowulf-class cluster, made of rack mounted commodity components, designed to minimize operational support requirements and to provide full and continuous availability of the computing resources. In this paper we describe the architecture of the cluster, and discuss the technical solutions adopted for each specialized sub-system.

13.
Consideration was given to control of a complex object whose motion obeys a multivariable nonlinear nonstationary mathematical model. Rigid constraints were imposed on the object's dynamic precision. The paper considered computer-aided generation of the current equations of object motion with regard for the actuators, which differ from subsystem to subsystem. The object is controlled adaptively with regard for the computer-based realization. The algorithms of control system operation that maintain the guaranteed precision of object motion were constructed. Conditions for problem solvability were formulated. The free-flying space robot was discussed by way of example.

14.
Modeling of the natural and technogenic processes in diverse geomorphological environments is one of the basic tools for forecasting and preventing unfavorable development of the urban ecology. One of the causes of its deterioration lies in pollution. The paper considers mathematical modeling of the spread of pollutants transported with water. The complicated process of pollutant spread was modeled as an aggregate of four simpler models: overland water flow, influent seepage, pollutant transport with surface runoff, and pollutant deposition (accumulation) on the land surface. The model relies on a diffusion equation with two supplementary terms on the right-hand side: one reflects the effect of the terrain relief, and the other, which depends on the lithologic structure of the territory, defines the intensity of pollutant uptake by the land surface. This equation is satisfied in the two-dimensional domain corresponding physically to an area covered with water. Both the form of the boundary and the topology of this area are time-dependent because of the appearance of dry "islands" surrounded by water.
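The structure of such a model (diffusion plus a source term and a linear uptake term) can be shown with one explicit finite-difference time step on a 2D grid. All coefficients here are illustrative placeholders, not values from the paper:

```python
def diffusion_step(c, D, dt, dx, source, sink_rate):
    """One explicit Euler step of
        dc/dt = D * laplacian(c) + source - sink_rate * c
    on a 2D grid stored as a list of lists. `source` stands in for the
    terrain-driven term and `sink_rate` for surface uptake. Boundary
    cells are held fixed. Stability requires dt <= dx^2 / (4*D)."""
    ny, nx = len(c), len(c[0])
    new = [row[:] for row in c]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            lap = (c[j][i - 1] + c[j][i + 1] + c[j - 1][i] + c[j + 1][i]
                   - 4.0 * c[j][i]) / (dx * dx)
            new[j][i] = c[j][i] + dt * (D * lap + source[j][i]
                                        - sink_rate * c[j][i])
    return new

grid = [[0.0] * 5 for _ in range(5)]
src = [[0.0] * 5 for _ in range(5)]
src[2][2] = 1.0                       # point source at the center cell
grid = diffusion_step(grid, D=1.0, dt=0.1, dx=1.0, source=src, sink_rate=0.1)
```

The time-dependent wet/dry boundary described in the abstract would enter such a scheme by restricting the update to the currently flooded cells.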

15.
The stochastic optimization method ALOPEX IV is successfully applied to the problem of estimating the time dependency of the physiological demand in response to exercise. This is a fundamental and unsolved problem in the area of exercise physiology, where the lack of appropriate tools and techniques forces the assumption and the use of a constant demand during exercise. By the use of an appropriate partition of the physiological time series and by means of stochastic optimization, the time dependency of the physiological demand during heavy intensity exercise and its subsequent recovery is, for the first time, revealed.

16.
The Particle Flow Analysis (PFA) is currently under intense study as the most promising way to achieve the precision jet energy measurements required at the future linear e+e− collider. In order to optimize detector configurations and to tune the PFA, it is crucial to identify factors that limit the PFA performance and to clarify the fundamental limits on the jet energy resolution that remain even with a perfect PFA and an infinitely granular calorimeter. This necessitates a tool to connect each calorimeter hit in particle showers to its parent charged track, if any, and eventually all the way back to its corresponding primary particle, while identifying possible interactions and decays along the way. In order to realize this within a realistic memory space, we have developed a set of C++ classes that facilitates history keeping of particle tracks within the framework of Geant4. This software tool, hereafter called J4HistoryKeeper, comes in handy in particular when one needs to stop the history keeping, for the sake of memory economy, at multiple geometrical boundaries beyond which a particle shower is expected to start. In this paper this software tool is described and applied to a generic detector model to demonstrate its functionality.
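The bookkeeping idea behind such a history keeper is a parent-linked chain of track records: each secondary stores a reference to the track that created it, so any hit can be walked back to its primary. A plain-Python sketch of the data structure only (J4HistoryKeeper itself is C++ inside Geant4, and these class and field names are invented for illustration):

```python
class TrackRecord:
    """Minimal parent-linked record of a particle track: enough to walk
    any calorimeter hit back to its primary particle."""

    def __init__(self, track_id, parent=None, process=""):
        self.track_id = track_id
        self.parent = parent      # None marks a primary particle
        self.process = process    # creation process label, e.g. "brem"

    def primary(self):
        """Follow parent links all the way back to the primary."""
        rec = self
        while rec.parent is not None:
            rec = rec.parent
        return rec

primary = TrackRecord(1)
pair_electron = TrackRecord(2, parent=primary, process="conv")
shower_track = TrackRecord(3, parent=pair_electron, process="brem")
# walking the chain from a hit's track recovers the primary particle
```

Stopping history keeping at a geometrical boundary, as the abstract describes, amounts to no longer allocating new records beyond that surface while keeping the chain built so far.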

17.
The performance of the method of angular moments for the ΔΓs determination from the analysis of untagged decays is examined using the SIMUB generator. The results of Monte Carlo studies with evaluation of measurement errors are presented. The method of angular moments gives stable results for the estimate of ΔΓs and is found to be an efficient and flexible tool for the quantitative investigation of the B_s^0 → J/ψφ decay. The statistical error of the ratio ΔΓs/Γs, for values of this ratio in the interval [0.03, 0.3], was found to be independent of this value, being 0.015 for 10^5 events.

18.
In this paper, a programming model is presented which enables scalable parallel performance on multi-core shared memory architectures. The model has been developed for application to a wide range of numerical simulation problems. Such problems involve time stepping or iteration algorithms where synchronization of multiple threads of execution is required. It is shown that traditional approaches to parallelism including message passing and scatter-gather can be improved upon in terms of speed-up and memory management. Using spatial decomposition to create orthogonal computational tasks, a new task management algorithm called H-Dispatch is developed. This algorithm makes efficient use of memory resources by limiting the need for garbage collection and takes optimal advantage of multiple cores by employing a "hungry" pull strategy. The technique is demonstrated on a simple finite difference solver and results are compared to traditional MPI and scatter-gather approaches. The H-Dispatch approach achieves near linear speed-up with results for efficiency of 85% on a 24-core machine. It is noted that the H-Dispatch algorithm is quite general and can be applied to a wide class of computational tasks on heterogeneous architectures involving multi-core and GPGPU hardware.
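The "hungry" pull strategy contrasts with static scatter-gather: instead of assigning each worker a fixed slice up front, idle workers pull the next spatial task from a shared queue the moment they finish, so faster workers automatically do more. A threaded sketch of the pull pattern (not the H-Dispatch implementation itself):

```python
import queue
import threading

def pull_worker(tasks, results, lock):
    """Each worker 'hungrily' pulls the next task as soon as it is idle,
    and exits when the shared queue is empty."""
    while True:
        try:
            i = tasks.get_nowait()
        except queue.Empty:
            return
        value = i * i                  # stand-in for one spatial subdomain
        with lock:
            results.append(value)

tasks = queue.Queue()
for i in range(100):
    tasks.put(i)

results, lock = [], threading.Lock()
workers = [threading.Thread(target=pull_worker, args=(tasks, results, lock))
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
# all 100 tasks are completed exactly once, regardless of per-worker speed
```

Reusing a fixed pool of per-worker buffers in this pattern is what limits allocation churn and, in a managed runtime, garbage-collection pressure, as the abstract notes.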

19.
Performance of the programming approaches and languages used to develop software for the numerical simulation of granular material dynamics by the discrete element method (DEM) is investigated. The granular material considered represents a space filled with discrete spherical visco-elastic particles, and the behaviour of the material under imposed conditions is simulated using the DEM. The object-oriented programming approach (implemented in C++) was compared with the procedural approach (using FORTRAN 90 and OBJECT PASCAL) in order to test their efficiency. The identical neighbour-searching algorithm, contact force model, and time integration method were implemented in all versions of the code. Two identical representative examples of the dynamic behaviour of granular material were solved on a personal computer (compatible with IBM PC). The results show that the software based on the procedural approach runs faster than the software based on OOP, and that the software developed in FORTRAN 90 runs faster than the software developed in OBJECT PASCAL.
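A visco-elastic DEM contact model of the kind shared by all the code versions is commonly a linear spring-dashpot normal force: a repulsive spring proportional to particle overlap plus viscous damping proportional to the relative normal velocity. A one-dimensional sketch with illustrative constants (not the paper's parameters):

```python
def contact_force(overlap, rel_velocity, k=1.0e4, c=5.0):
    """Linear spring-dashpot normal contact force between two spheres:
    F = k * overlap + c * rel_velocity while in contact, 0 otherwise.
    `overlap` is the geometric interpenetration depth; `rel_velocity`
    is the normal component of the relative velocity (negative when
    the spheres are approaching in this sign convention)."""
    if overlap <= 0.0:
        return 0.0                     # spheres are not in contact
    return k * overlap + c * rel_velocity

# 0.01 overlap, closing at 0.5: 1e4 * 0.01 + 5.0 * (-0.5) = 97.5
f = contact_force(0.01, -0.5)
```

In a full DEM code this force evaluation sits inside the neighbour-search loop and feeds the time integrator, so its cost, and the language's handling of it, dominates the runtime differences the study measures.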

20.
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
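The ALOPEX family of algorithms updates each parameter using the correlation between its previous change and the previous change in the cost, plus a random exploration term. A generic sketch of that idea for minimization (a basic ALOPEX-style scheme, not the ALOPEX IV variant used in the paper; all constants are illustrative):

```python
import random

def alopex_minimize(f, x0, gamma=0.1, noise=0.1, steps=3000, seed=1):
    """Correlation-based stochastic minimization: each parameter is
    nudged opposite to the product of its last change and the last
    change in cost (so changes that raised the cost are reversed),
    plus a uniform random perturbation for exploration. Tracks and
    returns the best point visited."""
    rng = random.Random(seed)
    x = list(x0)
    prev_x = [xi + rng.uniform(-noise, noise) for xi in x]
    prev_f, cur_f = f(prev_x), f(x)
    best_x, best_f = list(x), cur_f
    for _ in range(steps):
        dE = cur_f - prev_f
        new_x = [xi - gamma * (xi - pxi) * dE + rng.uniform(-noise, noise)
                 for xi, pxi in zip(x, prev_x)]
        prev_x, prev_f, x, cur_f = x, cur_f, new_x, f(new_x)
        if cur_f < best_f:
            best_x, best_f = list(x), cur_f
    return best_x, best_f

# toy quadratic cost with its minimum at p[0] = 3
best_x, best_f = alopex_minimize(lambda p: (p[0] - 3.0) ** 2, [0.0])
```

The noise term is essential: without it the correlation update can stall, and the convergence proof mentioned in the abstract applies precisely to the noise-free limit.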
