Similar Documents
20 similar documents found (search time: 15 ms)
1.
For magnetically confined plasmas in tokamaks, we have numerically investigated how Lagrangian chaos at the plasma edge affects plasma confinement. First, we considered the chaotic motion of particles in an equilibrium electric field with a monotonic radial profile perturbed by drift waves, and showed that an effective transport barrier may be created at the plasma edge by modifying the radial profile of the electric field. Second, we obtained escape patterns and magnetic footprints of chaotic magnetic field lines in the region near a tokamak wall with resonant modes due to the action of an ergodic magnetic limiter. For monotonic plasma current density profiles we obtained distributions of field-line connections to the wall and field-line escape channels with the same spatial pattern as the magnetic footprints on the tokamak walls.
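The onset of chaotic transport described above is commonly illustrated with simple area-preserving maps. The sketch below iterates the Chirikov standard map; it is an illustrative analogue only, not the drift-wave or field-line model used in the paper.

```python
import math

def standard_map(theta, p, K, n_steps):
    """Iterate the Chirikov standard map, a paradigmatic area-preserving
    model of Lagrangian chaos (illustrative analogue; the paper uses
    drift-wave and ergodic-limiter field-line maps, not this one)."""
    orbit = [(theta, p)]
    for _ in range(n_steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        orbit.append((theta, p))
    return orbit

# Below K ~ 1 invariant circles act as transport barriers; above it,
# chaotic regions connect and trajectories can wander radially.
orbit = standard_map(0.5, 0.5, K=1.5, n_steps=1000)
```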

2.
3.
A periodic data-mining algorithm has been developed and used to extract distinct plasma fluctuations from multichannel oscillatory time-series data. The technique uses the Expectation Maximisation algorithm to solve for the maximum-likelihood estimates and cluster assignments of a mixture of multivariate independent von Mises distributions (EM-VMM). The algorithm shows significant benefits over a periodic k-means algorithm and over clustering with non-periodic techniques on several artificial datasets and on real experimental data. Additionally, a new technique for identifying interesting features in multichannel oscillatory time-series data is described (STFT-clustering). STFT-clustering identifies the coincidence of spectral features over most channels of a multichannel array using the averaged short-time Fourier transform of the signals; these features are then filtered using clustering to remove noise. The method is particularly good at identifying weaker features and complements existing methods of feature extraction. Results from applying the STFT-clustering and EM-VMM algorithms to the extraction and clustering of plasma wave modes in time-series data from a helical magnetic probe array on the H-1NF heliac are presented.
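The core EM-VMM idea can be sketched in one dimension: alternate between computing responsibilities under von Mises densities and updating circular means and concentrations. This is a minimal univariate sketch, not the paper's multivariate algorithm; the concentration update uses the well-known Best-Fisher approximation.

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of order 0

def vonmises_em(angles, k=2, n_iter=100, seed=0):
    """Fit a mixture of 1-D von Mises distributions by EM.
    Minimal sketch of the EM-VMM idea; the paper's algorithm is
    multivariate and more elaborate."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(0, 2 * np.pi, k)   # component mean directions
    kappa = np.ones(k)                  # concentration parameters
    w = np.full(k, 1.0 / k)             # mixture weights
    for _ in range(n_iter):
        # E-step: responsibilities from von Mises densities
        dens = np.exp(kappa * np.cos(angles[:, None] - mu)) / (2 * np.pi * i0(kappa))
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted circular means and resultant lengths
        nk = np.maximum(r.sum(axis=0), 1e-9)
        c = (r * np.cos(angles[:, None])).sum(axis=0)
        s = (r * np.sin(angles[:, None])).sum(axis=0)
        mu = np.arctan2(s, c) % (2 * np.pi)
        R = np.clip(np.sqrt(c ** 2 + s ** 2) / nk, 1e-6, 1 - 1e-6)
        # Best-Fisher approximation for inverting A(kappa) = R
        kappa = np.clip(R * (2 - R ** 2) / (1 - R ** 2), 1e-3, 100.0)
        w = nk / len(angles)
    return mu, kappa, w

# Two synthetic angular clusters
rng = np.random.default_rng(1)
angles = np.concatenate([rng.vonmises(0.5, 8.0, 200),
                         rng.vonmises(3.0, 8.0, 200)]) % (2 * np.pi)
mu, kappa, w = vonmises_em(angles, k=2)
```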

4.
Visualization in spherical geometry is ubiquitous in geophysical data processing. For spherical visualization, the commonly used spherical polar coordinate system is not ideal because its grid converges near the poles. We propose to use a spherical overset grid system called the Yin-Yang grid as the base grid system for spherical visualization. The convergence-free nature of the Yin-Yang grid leads to a balanced data distribution and effective visualization processing in a sphere. The Yin-Yang grid is already used in various geophysical simulations in spherical geometry, including the geodynamo and mantle convection. Data produced on the Yin-Yang grid can be, and should be, visualized directly on the same Yin-Yang grid system without any data remapping. Since each component grid of the Yin-Yang grid is a part (the low-latitude region) of the standard spherical polar coordinate system, it is straightforward to convert an existing spherical visualization tool based on spherical polar coordinates into a tool based on the Yin-Yang grid.
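The two component grids are related by a fixed rotation; a form commonly quoted for the Yin-Yang relation is the Cartesian swap (x, y, z) → (−x, z, y). The sketch below converts spherical coordinates between the two grids under that assumption; it is a geometry illustration, not code from any particular visualization tool.

```python
import math

def yin_to_yang(theta, phi):
    """Convert colatitude/longitude on the Yin grid to Yang-grid
    coordinates using the commonly quoted Yin-Yang relation
    (x, y, z) -> (-x, z, y). Geometry sketch only."""
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    xe, ye, ze = -x, z, y          # express the point in Yang axes
    theta_e = math.acos(ze)
    phi_e = math.atan2(ye, xe)
    return theta_e, phi_e

# The relation is an involution: applying it twice returns the point.
t2, p2 = yin_to_yang(*yin_to_yang(1.0, 0.5))
```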

5.
A new method that employs grammatical evolution and a stopping rule for finding the global minimum of a continuous multidimensional, multimodal function is considered. The genetic algorithm used is a hybrid genetic algorithm combined with a local search procedure. We list results from numerical experiments on a series of test functions and compare them with other established global optimization methods. The accompanying software accepts objective functions coded in either Fortran 77 or C++.

Program summary

Program title: GenMin
Catalogue identifier: AEAR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 35 810
No. of bytes in distributed program, including test data, etc.: 436 613
Distribution format: tar.gz
Programming language: GNU-C++, GNU-C, GNU Fortran 77
Computer: The tool is designed to be portable to all systems running the GNU C++ compiler
Operating system: Any system running the GNU C++ compiler
RAM: 200 KB
Word size: 32 bits
Classification: 4.9
Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution, and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima, so global optimization is the appropriate tool. For example, when solving a nonlinear system of equations via optimization with a least-squares objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero).
Solution method: Grammatical evolution and a stopping rule.
Running time: Depends on the objective function. The test example given takes only a few seconds to run.
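The interplay of global restarts, local search, and a stopping rule can be sketched generically. The code below is an illustrative multistart stochastic search with a hypothetical stagnation-based stopping rule; GenMin itself evolves solutions by grammatical evolution and applies its own, different rule.

```python
import math, random

def multistart_minimize(f, bounds, local_steps=200, max_starts=50, seed=1):
    """Multistart stochastic local search with a simple stopping rule:
    quit after several consecutive starts bring no improvement.
    Illustrative sketch only, not GenMin's actual algorithm."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    stale = 0
    for _ in range(max_starts):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        scale = 1.0
        for _ in range(local_steps):
            # Gaussian trial moves with a slowly shrinking step size
            y = [min(max(xi + rng.gauss(0, 0.1 * (hi - lo) * scale), lo), hi)
                 for xi, (lo, hi) in zip(x, bounds)]
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
            scale *= 0.98
        if fx < best_f - 1e-6:
            best_x, best_f, stale = x, fx, 0
        else:
            stale += 1
            if stale >= 5:          # stopping rule: repeated stagnation
                break
    return best_x, best_f

# 2-D Rastrigin: highly multimodal, global minimum 0 at the origin
rastrigin = lambda v: sum(10 + xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in v)
x_best, f_best = multistart_minimize(rastrigin, [(-5.0, 5.0)] * 2)
```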

6.
A vertex reconstruction algorithm based on the Gaussian-sum filter (GSF) was developed and implemented in the framework of the CMS reconstruction program. While linear least-squares estimators are optimal when all observation errors are Gaussian distributed, the GSF offers a better treatment of non-Gaussian distributions of track parameter errors when these are modeled by Gaussian mixtures. The algorithm has been verified and evaluated with simulated data, and the results are compared to the Kalman filter and to an adaptive vertex estimator.
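A standard building block in Gaussian-sum filtering is collapsing a Gaussian mixture into a single moment-matched Gaussian to limit component growth. The 1-D sketch below illustrates that step; it is generic textbook material, not CMS code.

```python
def collapse_mixture(weights, means, variances):
    """Moment-match a 1-D Gaussian mixture with a single Gaussian,
    as used e.g. in Gaussian-sum filters to bound the number of
    components carried between steps."""
    wsum = sum(weights)
    w = [wi / wsum for wi in weights]
    mean = sum(wi * mi for wi, mi in zip(w, means))
    # total variance = within-component + between-component spread
    var = sum(wi * (vi + (mi - mean) ** 2)
              for wi, mi, vi in zip(w, means, variances))
    return mean, var

m, v = collapse_mixture([0.7, 0.3], [0.0, 2.0], [1.0, 4.0])
```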

7.
A new modular code called BOUT++ is presented, which simulates 3D fluid equations in curvilinear coordinates. Although aimed at simulating Edge Localised Modes (ELMs) in tokamak x-point geometry, the code is able to simulate a wide range of fluid models (magnetised and unmagnetised) involving an arbitrary number of scalar and vector fields, in a wide range of geometries. Time evolution is fully implicit, and 3rd-order WENO schemes are implemented. Benchmarks are presented for linear and non-linear problems (the Orszag-Tang vortex), showing good agreement. Performance of the code is tested by scaling with problem size and processor number, showing efficient scaling to thousands of processors.

Linear initial-value simulations of ELMs using reduced ideal MHD are presented, and the results compared to the ELITE linear MHD eigenvalue code. The resulting mode structures and growth rates are found to be in good agreement (γBOUT++ = 0.245ωA, γELITE = 0.239ωA, with Alfvénic timescale 1/ωA = R/VA). To our knowledge, this is the first time dissipationless, initial-value simulations of ELMs have been successfully demonstrated.
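Growth rates such as γBOUT++ are typically extracted from an initial-value run by a log-linear fit to a mode amplitude during the linear phase. The sketch below shows that generic diagnostic on synthetic data; it is not taken from the BOUT++ source.

```python
import math

def growth_rate(times, amplitudes):
    """Estimate gamma from A(t) ~ A0 * exp(gamma * t) by a
    least-squares fit of log A against t (generic diagnostic)."""
    logs = [math.log(a) for a in amplitudes]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    num = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

ts = [0.1 * i for i in range(50)]
amps = [2.0 * math.exp(0.245 * t) for t in ts]   # synthetic linear-phase amplitude
gamma = growth_rate(ts, amps)
```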

8.
In the field of experimental data acquisition and evaluation, the need arises for some kind of “expert system” to provide support for sophisticated instruments and data-evaluation applications. Different external expert-system shells served as the basis for previous attempts to develop an expert system for such goals in X-ray Photoelectron Spectroscopy (XPS). This paper presents a simple reasoning expert-system engine that can be built directly into data acquisition and evaluation software. Some problems arising from the lack of human intelligence in the inferencing process are also discussed. The feasibility of the realized system is demonstrated by implementing a real-life rule set, an example (the carbon contamination rules) taken from the field of XPS. Apart from the field-specific rules, the package can be used in any field.
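The heart of such an embedded engine is forward chaining over a rule base. The sketch below is a minimal illustration of that idea; the rule texts are hypothetical stand-ins, not the paper's actual carbon-contamination rule set.

```python
def forward_chain(facts, rules, max_passes=100):
    """Tiny forward-chaining inference engine: each rule is
    (premises, conclusion); fire rules until no new fact appears.
    Minimal sketch of an embeddable expert-system engine, not the
    XPS package's actual implementation."""
    facts = set(facts)
    for _ in range(max_passes):
        added = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                added = True
        if not added:
            break
    return facts

# Hypothetical carbon-contamination-style rules
rules = [
    ({"C 1s peak present", "sample not carbon-based"}, "surface contamination likely"),
    ({"surface contamination likely"}, "recommend sputter cleaning"),
]
derived = forward_chain({"C 1s peak present", "sample not carbon-based"}, rules)
```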

9.
A fitting procedure for the one-trap, one-recombination-centre kinetic model is described. The procedure uses a grid in parameter space, obtained by changing each parameter back and forth, and calculates robust cost functions on the surfaces of this grid; the lengths of the changes are determined empirically. The best set of parameters is given by the projection onto the grid surface with the smallest cost function. The procedure is analyzed for fits of one, two and three parameters of the kinetic model. In all cases the optimization shows reliable fitting within a feasible amount of processing time.
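The back-and-forth grid probing can be sketched as a coordinate-wise search with shrinking step lengths. This is a generic illustration of the idea; the paper's empirical rule for choosing the step lengths is not reproduced.

```python
def grid_refine(cost, x0, steps, n_rounds=40, shrink=0.5):
    """Coordinate-wise grid search: probe each parameter back and
    forth, keep any improvement, and shrink the step lengths when a
    full round brings no improvement. Generic sketch of grid-based
    fitting, not the paper's exact procedure."""
    x = list(x0)
    best = cost(x)
    steps = list(steps)
    for _ in range(n_rounds):
        improved = False
        for i in range(len(x)):
            for d in (-steps[i], steps[i]):
                trial = list(x)
                trial[i] += d
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            steps = [s * shrink for s in steps]
    return x, best

# Quadratic toy cost with minimum at (1, -2)
x, c = grid_refine(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                   [0.0, 0.0], [1.0, 1.0])
```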

10.
A standard file format is proposed to store process and event information, primarily output from parton-level event generators for further use by general-purpose ones. The information content is identical to what was defined by the Les Houches Accord five years ago, but then in terms of Fortran common blocks. This information is embedded in a minimal XML-style structure, for clarity and to simplify parsing.
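The minimal XML-style wrapping makes such files trivially parseable with any XML library. The snippet below is a schematic of the layout only; the numbers are placeholders, and the real record contents and ordering are defined by the accord itself.

```python
import xml.etree.ElementTree as ET

# Schematic sketch of an XML-style event file (simplified; consult the
# Les Houches accord for the actual init/event record definitions).
doc = """<LesHouchesEvents version="1.0">
<init>
2212 2212 7000.0 7000.0 0 0 10042 10042 3 1
</init>
<event>
4 661 -1.0 91.2 0.0078 0.118
</event>
</LesHouchesEvents>"""

root = ET.fromstring(doc)
events = [e.text.strip() for e in root.findall("event")]
```

Because the structure is flat and minimal, a parser needs only to walk the `event` blocks and tokenize their whitespace-separated payload.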

11.
A semi-Lagrangian code for the solution of the electrostatic drift-kinetic equations in a straight-cylinder configuration is presented. The code, CYGNE, is part of a project with the long-term aim of studying microturbulence in fusion devices. The code has been constructed so as to preserve good control of the constants of motion possessed by the drift-kinetic equations until the nonlinear saturation of the ion-temperature-gradient modes occurs. Studies of convergence with phase-space resolution and time step are presented and discussed, and the code is benchmarked against electrostatic particle-in-cell codes.
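The semi-Lagrangian idea, trace characteristics backwards and interpolate onto the fixed grid, is easiest to see in 1-D constant-speed advection. The sketch below shows that method family in its simplest form; a drift-kinetic solver like CYGNE is of course far more involved.

```python
import math

def semi_lagrangian_step(f, u, dx, dt):
    """One semi-Lagrangian step for constant-speed 1-D advection on a
    periodic grid: trace each grid point back a distance u*dt and
    interpolate linearly at the departure point. Method-family sketch,
    not code from CYGNE."""
    n = len(f)
    shift = u * dt / dx
    out = []
    for i in range(n):
        pos = i - shift                 # departure point in grid units
        j = math.floor(pos)
        frac = pos - j
        out.append((1 - frac) * f[j % n] + frac * f[(j + 1) % n])
    return out

n, dx, dt, u = 64, 1.0 / 64, 0.01, 1.0
f = [math.sin(2 * math.pi * i / n) for i in range(n)]
f1 = semi_lagrangian_step(f, u, dx, dt)
```

Note that with linear interpolation the total integral (a discrete invariant) is preserved exactly on a periodic grid, a small analogue of the constants-of-motion control discussed above.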

12.
In this paper we present a compact library for the analysis of nuclear spectra. The library consists of sophisticated functions for background elimination, smoothing, peak searching, deconvolution, and peak fitting, and can process one- and two-dimensional spectra. The software comprises a number of conventional as well as newly developed methods needed to analyze experimental data.

Program summary

Program title: SpecAnalysLib 1.1
Catalogue identifier: AEDZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDZ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 42 154
No. of bytes in distributed program, including test data, etc.: 2 379 437
Distribution format: tar.gz
Programming language: C++
Computer: Pentium 3 PC 2.4 GHz or higher, Borland C++ Builder v. 6. A precompiled Windows version is included in the distribution package
Operating system: Windows, 32-bit versions
RAM: 10 MB
Word size: 32 bits
Classification: 17.6
Nature of problem: The demand for advanced, highly effective experimental data analysis functions is enormous. The library package represents one approach to giving physicists the possibility to use advanced routines simply by calling them from their own programs. SpecAnalysLib is a collection of functions for the analysis of one- and two-parameter γ-ray spectra, but they can be used for other types of data as well. The library consists of sophisticated functions for background elimination, smoothing, peak searching, deconvolution, and peak fitting.
Solution method: The background-estimation algorithms are based on the Sensitive Non-linear Iterative Peak (SNIP) clipping algorithm. The smoothing algorithms are based on the convolution of the original data with several types of filters, and on algorithms based on discrete Markov chains. The peak-searching algorithms use smoothed second differences and can search for peaks of general form. The deconvolution (decomposition, unfolding) functions use the Gold iterative algorithm, its improved high-resolution version, and the Richardson-Lucy algorithm. For peak fitting we have implemented two approaches. The first is based on the algorithm without matrix inversion (AWMI), which allows fitting of large blocks of data and large numbers of parameters. The other is based on solving the system of linear equations using the Stiefel-Hestenes method; it converges faster than AWMI, but is not suitable for fitting a large number of parameters.
Restrictions: The dimensionality of the analyzed data is limited to two.
Unusual features: A dynamically loadable library (DLL) of processing functions that users can call from their own programs.
Running time: Most processing routines execute interactively or in a few seconds. Computationally intensive routines (deconvolution, fitting) take longer, depending on the number of iterations specified and the volume of the processed data.
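The SNIP background estimator named in the summary is short enough to sketch in full: at each clipping window p, every bin is replaced by the minimum of itself and the mean of its neighbours at distance p. This 1-D sketch shows the basic clipping loop; the library's production version (and the common LLS-transformed variant) adds refinements.

```python
def snip_background(spectrum, m):
    """Estimate a spectrum background with basic SNIP clipping:
    for p = 1..m, replace each bin by min(value, mean of the two
    neighbours at distance p). Minimal 1-D sketch of the approach."""
    v = list(spectrum)
    n = len(v)
    for p in range(1, m + 1):
        w = v[:]
        for i in range(p, n - p):
            w[i] = min(v[i], 0.5 * (v[i - p] + v[i + p]))
        v = w
    return v

# Flat background of 10 counts with a narrow peak on top
spec = [10.0] * 101
for i, a in zip(range(48, 53), [5, 20, 40, 20, 5]):
    spec[i] += a
bg = snip_background(spec, m=8)
```

Once m exceeds the peak half-width, the peak is clipped away and the estimate settles onto the flat background, so `spec[i] - bg[i]` isolates the peak.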

13.
A simulation study to evaluate the computing resources required for research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yardstick. Other input parameters were an assumed distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

14.
In the analysis of measured data one is often faced with the task of differentiating data numerically. Typically this occurs when measured data are concerned, or when data are evaluated numerically during the evolution of partial or ordinary differential equations. Usually one does not worry about the accuracy of the resulting derivative estimates, because modern computers are assumed to be accurate to many digits. But measurements carry intrinsic errors, which are often much larger than the precision limit of the machine used, and there is the effect of “loss of significance”, well known in numerical mathematics and computational physics. The problem occurs primarily in numerical subtraction, and the estimation of derivatives inherently involves the approximation of differences. In this article we discuss several techniques for the estimation of derivatives. As a novel aspect, we divide the techniques into local and global methods and explain their respective shortcomings. We have developed a general scheme for global methods, and illustrate our ideas with spline smoothing and spectral smoothing. The results from these less well-known techniques are confronted with those from local methods; as typical representatives of the latter we chose Savitzky-Golay filtering and finite differences. Two basic quantities are used to characterize the results: the variance of the difference between the true derivative and its estimate, and, as an important new characteristic, the smoothness of the estimate. We apply the different techniques to numerically produced data and demonstrate the application to data from an aeroacoustic experiment. We find that global methods are generally preferable when a smooth process is considered; for rough estimates, local methods work acceptably well.
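The noise amplification of plain differencing versus a smoothing local method is easy to demonstrate. The sketch below compares finite differences with Savitzky-Golay filtering on noisy synthetic data (a small illustration of the paper's comparison, with arbitrarily chosen noise level and window).

```python
import numpy as np
from scipy.signal import savgol_filter

# Differentiating noisy data: central differences amplify the noise by
# ~1/dx, while a Savitzky-Golay derivative filter averages it down.
rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 400)
dx = x[1] - x[0]
y = np.sin(x) + rng.normal(0, 1e-3, x.size)    # noisy "measurement"

d_fd = np.gradient(y, dx)                       # central finite differences
d_sg = savgol_filter(y, window_length=31, polyorder=3, deriv=1, delta=dx)

true = np.cos(x)
err_fd = np.max(np.abs(d_fd - true)[20:-20])    # ignore edge effects
err_sg = np.max(np.abs(d_sg - true)[20:-20])
```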

15.
The paper elucidates, with an analytic example, a subtle mistake in the application of the extended likelihood method to the problem of determining the fractions of pure samples in a mixed sample from the shape of the distribution of a random variable. This mistake, which affects two widely used software packages, leads to a misestimate of the errors.

16.
The performance of programming approaches and languages used in the development of software for numerical simulation of granular material dynamics by the discrete element method (DEM) is investigated. The granular material considered represents a space filled with discrete spherical visco-elastic particles, and the behaviour of the material under imposed conditions is simulated using the DEM. The object-oriented programming approach (implemented in C++) was compared with the procedural approach (using FORTRAN 90 and OBJECT PASCAL) in order to test their efficiency. Identical neighbour-searching algorithms, contact-force models and time-integration methods were implemented in all versions of the codes.

Two identical representative examples of the dynamic behaviour of granular material were solved on a personal computer (IBM PC compatible). The results show that software based on the procedural approach runs faster than software based on OOP, and that software developed in FORTRAN 90 runs faster than software developed in OBJECT PASCAL.
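The contact-force model for visco-elastic spheres is the kernel that dominates such benchmarks. The sketch below is a generic spring-dashpot normal-force kernel for equal spheres; it illustrates the model class only and is not code from the study.

```python
import math

def contact_force(p1, p2, r, k, c):
    """Normal visco-elastic (spring-dashpot) contact force on particle
    p1 from particle p2, for equal spheres of radius r. Particles are
    dicts with position "x" and velocity "v". Generic DEM kernel
    sketch; stiffness k and damping c are illustrative parameters."""
    dvec = [b - a for a, b in zip(p1["x"], p2["x"])]
    dist = math.sqrt(sum(d * d for d in dvec))
    overlap = 2 * r - dist
    if overlap <= 0:
        return [0.0, 0.0, 0.0]              # spheres not in contact
    n = [d / dist for d in dvec]            # unit normal, p1 -> p2
    # closing speed along the normal (positive when approaching)
    approach = sum((v1 - v2) * ni for v1, v2, ni in zip(p1["v"], p2["v"], n))
    mag = k * overlap + c * approach        # elastic + viscous parts
    return [-mag * ni for ni in n]          # repulsive force on p1

p1 = {"x": [0.0, 0.0, 0.0], "v": [1.0, 0.0, 0.0]}
p2 = {"x": [1.8, 0.0, 0.0], "v": [0.0, 0.0, 0.0]}
f = contact_force(p1, p2, r=1.0, k=100.0, c=5.0)
```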

17.
A computer package (CNMS) is presented, aimed at the solution of finite-level quantum optimal control problems. This package is based on a recently developed computational strategy known as monotonic schemes.

Quantum optimal control problems arise in particular in quantum optics, where the optimization of a control representing laser pulses is required. The purpose of the external control field is to channel the system's wavefunction between given states in the most efficient way. Physically motivated constraints, such as limited laser resources, are accommodated through appropriately chosen cost functionals.

Program summary

Program title: CNMS
Catalogue identifier: ADEB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 770
No. of bytes in distributed program, including test data, etc.: 7098
Distribution format: tar.gz
Programming language: MATLAB 6
Computer: AMD Athlon 64 × 2 Dual, 2.21 GHz, 1.5 GB RAM
Operating system: Microsoft Windows XP
Word size: 32
Classification: 4.9
Nature of problem: Quantum control
Solution method: Iterative
Running time: 60-600 sec

18.
A time-saving algorithm for the Metropolis Monte Carlo method is presented. The technique is tested with different potential models and numbers of particles. The coupling of the method with neighbour lists, linked lists, the Ewald sum and reaction-field techniques is also analyzed. It is shown that the proposed algorithm is particularly suitable for computationally heavy intermolecular potentials.
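For reference, the baseline the paper accelerates is the textbook Metropolis sweep: displace one particle, compute the energy change against all others, and accept with probability min(1, exp(−βΔE)). The sketch below shows that baseline with a minimum-image Lennard-Jones potential; the paper's time-saving variant is not reproduced.

```python
import math, random

def metropolis_sweep(positions, beta, delta, box, pair_energy, rng):
    """One textbook Metropolis sweep over all particles: trial-displace
    each one and accept with probability min(1, exp(-beta * dE)).
    Baseline scheme only, not the paper's accelerated algorithm."""
    n = len(positions)
    accepted = 0
    for i in range(n):
        old = positions[i]
        trial = tuple((c + rng.uniform(-delta, delta)) % box for c in old)
        dE = sum(pair_energy(trial, positions[j], box) -
                 pair_energy(old, positions[j], box)
                 for j in range(n) if j != i)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            positions[i] = trial
            accepted += 1
    return accepted / n

def lj(a, b, box):
    """Lennard-Jones pair energy with the minimum-image convention
    (distance clamped below to avoid overflow at random overlaps)."""
    r2 = sum(min(abs(x - y), box - abs(x - y)) ** 2 for x, y in zip(a, b))
    r2 = max(r2, 0.64)
    inv6 = 1.0 / r2 ** 3
    return 4.0 * (inv6 * inv6 - inv6)

rng = random.Random(7)
box = 5.0
pos = [tuple(rng.uniform(0, box) for _ in range(3)) for _ in range(10)]
acc = metropolis_sweep(pos, beta=1.0, delta=0.2, box=box, pair_energy=lj, rng=rng)
```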

19.
20.
A deterministic method based on the ray-tracing technique is a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when hundreds of images must be simulated, notably for simulating tomographic acquisition or, even more demanding, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs, which can typically be obtained in a split second on a simple personal computer.

Program summary

Program title: X-ray
Catalogue identifier: AEAD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 416 257
No. of bytes in distributed program, including test data, etc.: 6 018 263
Distribution format: tar.gz
Programming language: C (Visual C++)
Computer: Any PC. Tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
Operating system: Windows XP
Classification: 14, 21.1
Nature of problem: Radiographic simulation of voxelized objects based on the ray-tracing technique.
Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
Restrictions: Memory constraints. There are three programs in all:
A. Program for test 3.1(1): object and detector have axis-aligned orientation;
B. Program for test 3.1(2): object in arbitrary orientation;
C. Program for test 3.2: simulation of X-ray video recordings.
Running time and memory (tested in release mode, depending on the size of the input file):
1. Program A: memory required with typical data, 207 MB; typical running time, 2.30 s.
2. Program B (the main program): memory required with typical data, 114 MB; typical running time, 1.60 s.
3. Program C: memory required with typical data, 215 MB; typical computation time, 27.26 s for cast-5, 101.87 s for cast-6.
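The ray-box intersection named in the solution method is classically done with the slab technique. The sketch below shows that generic kernel for an axis-aligned box; it illustrates the idea only and is not the distributed program's routine.

```python
def ray_box_intersect(origin, direction, box_min, box_max):
    """Slab-method intersection of a ray with an axis-aligned box,
    the kind of kernel at the core of voxel-driven X-ray simulation
    (generic sketch). Returns (t_near, t_far) along the ray, or None
    if the ray misses the box."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return None                 # parallel and outside the slab
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far or t_far < 0:
            return None                     # slabs do not overlap
    return t_near, t_far

# Ray along +x passing through a unit voxel
hit = ray_box_intersect((-1.0, 0.5, 0.5), (1.0, 0.0, 0.0),
                        (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```

The (t_near, t_far) interval gives the path length of the ray inside the voxel, which is exactly what an attenuation (Beer-Lambert) accumulation needs.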
