20 similar documents found (search time: 15 ms)
1.
Stochastic optimization for the calculation of the time dependency of the physiological demand during exercise and recovery
The stochastic optimization method ALOPEX IV is successfully applied to the problem of estimating the time dependency of the physiological demand in response to exercise. This is a fundamental and unsolved problem in exercise physiology, where the lack of appropriate tools and techniques has forced the assumption of a constant demand during exercise. By means of an appropriate partition of the physiological time series and stochastic optimization, the time dependency of the physiological demand during heavy-intensity exercise and its subsequent recovery is revealed for the first time.
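ALOPEX belongs to the family of correlation-based stochastic optimizers: each parameter is nudged according to the correlation between its last change and the last change in the cost, plus exploratory noise. The Python sketch below illustrates that core update; it is a minimal generic version, not the paper's ALOPEX IV variant (which adds adaptive control of the noise), and all names are illustrative.

```python
import numpy as np

def alopex_minimize(cost, x0, gamma=0.01, sigma=0.05, iters=5000, rng=None):
    """Generic ALOPEX-style correlation update (a minimal sketch; the ALOPEX IV
    variant used in the paper adds adaptive noise control not reproduced here)."""
    rng = np.random.default_rng() if rng is None else rng
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev + sigma * rng.standard_normal(x_prev.shape)   # initial random step
    c_prev, c = cost(x_prev), cost(x)
    best_x, best_c = (x.copy(), c) if c < c_prev else (x_prev.copy(), c_prev)
    for _ in range(iters):
        # push each parameter against the correlation of its last change
        # with the last change in cost, plus exploratory noise
        step = -gamma * (x - x_prev) * (c - c_prev) + sigma * rng.standard_normal(x.shape)
        x_prev, c_prev = x, c
        x = x + step
        c = cost(x)
        if c < best_c:
            best_x, best_c = x.copy(), c
    return best_x, best_c
```

For instance, `alopex_minimize(lambda v: float(np.sum((v - 3.0) ** 2)), np.zeros(4))` drifts toward the minimum at (3, 3, 3, 3).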
2.
M.S. Zakynthinaki J.R. Stirling C.A. Cordente Martínez J. Sampedro Molinuevo 《Computer Physics Communications》2008,179(8):562-568
We demonstrate the successful application of ALOPEX stochastic optimization to the problem of calculating the optimal critical curve in a dynamical systems model of the process of regaining balance after perturbation from quiet stance. Experimental data provide the time series of angles for which the subjects were able to regain balance after an initial perturbation. The optimal critical curve encloses all data points and has a minimum distance from the border points of the data set. We first demonstrate the results of the optimization using the traditional chi-square distance as the cost function. We then introduce a modified cost function that fits the model to the experimental data while taking into account the specific requirements of the model. Using the proposed cost function, combined with the efficiency of our optimization method, an optimal critical curve is calculated even for very asymmetric data sets, provided they lie within the capabilities of the existing model.
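The key requirement is that the fitted curve both stays close to the border points and encloses every data point. A cost of the following shape would encode that; this is a hypothetical stand-in for the paper's model-specific cost, written in a polar parametrization (radial distances sampled at matching angles), and the penalty weight is an assumption.

```python
import numpy as np

def containment_cost(curve_r, data_r, penalty=1e6):
    """Chi-square-style misfit between the curve and the border data, plus a
    large penalty whenever a data point falls outside the curve (residual < 0).
    `curve_r` and `data_r` are radial distances from the equilibrium point,
    sampled at the same set of angles (illustrative parametrization)."""
    residual = curve_r - data_r                      # > 0 when the curve encloses the point
    chi2 = np.sum(residual**2 / np.maximum(curve_r, 1e-12))
    violation = np.sum(np.clip(-residual, 0.0, None))  # total containment violation
    return chi2 + penalty * violation
```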
3.
Marco D. Mazzeo 《Computer Physics Communications》2010,181(2):355-3999
Recent algorithm and hardware developments have significantly improved our capability to interactively visualise time-varying flow fields. However, when visualising very large dynamically varying datasets interactively there are still limitations in the scalability and efficiency of these methods. Here we present a rendering pipeline which employs an efficient in situ ray tracing technique to visualise flow fields as they are simulated. The ray casting approach is particularly well suited for the visualisation of large and sparse time-varying datasets, where it is capable of rendering fluid flow fields at high image resolutions and at interactive frame rates on a single multi-core processor using OpenMP. The parallel implementation of our in situ visualisation method relies on MPI, requires no specialised hardware support, and employs the same underlying spatial decomposition as the fluid simulator. The visualisation pipeline allows the user to operate on a commodity computer and explore the simulation output interactively. Our simulation environment incorporates numerous features that can be utilised in a wide variety of research contexts.
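As a concrete picture of what such a ray caster computes, the sketch below composites axis-aligned rays front-to-back through a 3D scalar field, with the early termination that makes sparse data cheap. It is a toy serial stand-in under stated assumptions; the paper's renderer traces general rays through the fluid simulator's own spatial decomposition and parallelises with OpenMP and MPI.

```python
import numpy as np

def composite_render(field, absorb=0.05, axis=0):
    """Front-to-back emission/absorption compositing of a 3D scalar field
    along one axis (one ray per pixel). Rays that saturate stop early,
    which is what makes sparse, time-varying fields cheap to render."""
    vol = np.moveaxis(np.asarray(field, dtype=float), axis, 0)
    image = np.zeros(vol.shape[1:])
    transmittance = np.ones(vol.shape[1:])
    for slab in vol:                       # march all rays one slab at a time
        image += transmittance * absorb * slab
        transmittance *= 1.0 - absorb
        if transmittance.max() < 1e-3:     # every ray saturated: terminate early
            break
    return image
```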
4.
We present computer simulations of a tip-tilt adaptive optics system in which stochastic optimization is applied to the problem of dynamic compensation of atmospheric turbulence. The system uses a simple measure of the light intensity that passes through a mask and is recorded on the image plane to generate signals for the tip-tilt mirror. A feedback system rotates the mirror adaptively and in phase with the rapidly changing atmospheric conditions. Computer simulations and a series of numerical experiments investigate the implementation of the method in the presence of a drifting atmosphere. In particular, the study examines the system's sensitivity to the rate of change of the atmospheric conditions and investigates the optimal size of the mirror's masking area and the algorithm's optimal degree of stochasticity.
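The same correlation update shown for entry 1 can drive the mirror online: perturb the two tilt angles, measure the masked intensity, and reinforce changes that increased it. A minimal sketch, assuming a user-supplied `measure_intensity(angles)` function (hypothetical name) that returns the flux passing the image-plane mask:

```python
import numpy as np

def tip_tilt_loop(measure_intensity, steps=1000, gamma=0.5, sigma=1e-3, rng=None):
    """Toy closed-loop tip-tilt control: a correlation (ALOPEX-like) update
    that chases a drifting intensity maximum. `measure_intensity` maps a
    2-vector of mirror angles to the masked image-plane flux (assumed API)."""
    rng = np.random.default_rng() if rng is None else rng
    a_prev = np.zeros(2)
    a = sigma * rng.standard_normal(2)
    f_prev, f = measure_intensity(a_prev), measure_intensity(a)
    for _ in range(steps):
        # reinforce angle changes that increased the measured intensity
        step = gamma * (a - a_prev) * (f - f_prev) + sigma * rng.standard_normal(2)
        a_prev, f_prev = a, f
        a = a + step
        f = measure_intensity(a)
    return a
```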
5.
Rui P.S. Fartaria Pedro C.R. Rodrigues Fernando M.S. Silva Fernandes 《Computer Physics Communications》2006,175(2):116-121
A time-saving algorithm for the Metropolis Monte Carlo method is presented. The technique is tested with different potential models and numbers of particles. The coupling of the method with neighbor lists, linked lists, the Ewald sum and reaction field techniques is also analyzed. It is shown that the proposed algorithm is particularly suitable for computationally heavy intermolecular potentials.
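For context, a bare-bones Metropolis sweep for a pair potential is sketched below: only the moved particle's interactions are recomputed, the standard O(N)-per-move shortcut that neighbor and linked lists then reduce further. This is textbook Metropolis in Python under stated assumptions (Lennard-Jones reduced units, cubic periodic box), not the paper's specific time-saving variant.

```python
import numpy as np

def pair_energy(pos, i, L, rc2):
    """Lennard-Jones energy of particle i with all others (minimum image)."""
    d = pos - pos[i]
    d -= L * np.round(d / L)                 # minimum-image convention
    r2 = np.einsum('ij,ij->i', d, d)
    r2[i] = np.inf                           # exclude self-interaction
    r2 = r2[r2 < rc2]                        # apply the cutoff
    inv6 = (1.0 / r2) ** 3
    return np.sum(4.0 * (inv6**2 - inv6))

def metropolis_sweep(pos, L, beta, dmax=0.1, rc=2.5, rng=None):
    """One Metropolis sweep with single-particle trial moves; only the moved
    particle's energy is recomputed before and after the move."""
    rng = np.random.default_rng() if rng is None else rng
    rc2 = rc * rc
    for i in range(len(pos)):
        e_old = pair_energy(pos, i, L, rc2)
        old = pos[i].copy()
        pos[i] = old + dmax * rng.uniform(-1, 1, 3)
        e_new = pair_energy(pos, i, L, rc2)
        if rng.random() >= np.exp(-beta * (e_new - e_old)):
            pos[i] = old                     # reject: restore the old position
    return pos
```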
6.
We present two sequential and one parallel global optimization codes that belong to the stochastic class, and an interface routine that enables the use of the Merlin/MCL environment as a non-interactive local optimizer. This interface proved extremely important, since it provides flexibility, effectiveness and robustness to the local search task that is in turn employed by the global procedures. We demonstrate the use of the parallel code on a molecular conformation problem.
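The pattern of "stochastic global search plus a strong local optimizer" can be illustrated in a few lines. Below is a minimal multistart in Python, with scipy's L-BFGS-B standing in for the Merlin/MCL local search (an assumption for illustration; PANMIN's two stochastic algorithms are more elaborate than plain multistart).

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=50, rng=None):
    """Minimal multistart global minimization: random starting points inside
    the box, each refined by a local search; the best local minimum wins."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                      # random start in the box
        res = minimize(f, x0, method='L-BFGS-B', bounds=list(zip(lo, hi)))
        if best is None or res.fun < best.fun:
            best = res
    return best
```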
Program summary
Title of program: PANMIN
Catalogue identifier: ADSU
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSU
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: PANMIN is designed for UNIX machines. The parallel code runs on either shared-memory architectures or on a distributed system. The code has been tested on a SUN Microsystems ENTERPRISE 450 with four CPUs, and on a 48-node cluster under Linux, with both the GNU g77 and the Portland Group compilers. The parallel implementation is based on MPI and has been tested with LAM MPI and MPICH.
Installation: University of Ioannina, Greece
Programming language used: Fortran-77
Memory required to execute with typical data: Approximately O(n²) words, where n is the number of variables
No. of bits in a word: 64
No. of processors used: 1 or many
Has the code been vectorised or parallelized?: Parallelized using MPI
No. of bytes in distributed program, including test data, etc.: 147163
No. of lines in distributed program, including the test data, etc.: 14366
Distribution format: gzipped tar file
Nature of physical problem: A multitude of problems in science and engineering reduce to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be trapped in any local minimum. Global optimization is then the appropriate tool. For example, when solving a non-linear system of equations via optimization, one may encounter many local minima that do not correspond to solutions, i.e. they are far from zero.
Method of solution: PANMIN is a suite of programs for global optimization that take advantage of the Merlin/MCL optimization environment [1,2]. We offer implementations of two algorithms that belong to the stochastic class and use local searches either as intermediate steps or as solution refinement.
Restrictions on the complexity of the problem: The only restriction is set by the available memory of the hardware configuration. The software can handle bound-constrained problems. The Merlin optimization environment must be installed. An MPI installation is necessary for executing the parallel code.
Typical running time: Depending on the objective function.
References: [1] D.G. Papageorgiou, I.N. Demetropoulos, I.E. Lagaris, Merlin-3.0. A multidimensional optimization environment, Comput. Phys. Commun. 109 (1998) 227-249. [2] D.G. Papageorgiou, I.N. Demetropoulos, I.E. Lagaris, The Merlin Control Language for strategic optimization, Comput. Phys. Commun. 109 (1998) 250-275.
7.
Yiming Li 《Computer Physics Communications》2003,153(3):359-372
Various self-consistent semiconductor device simulation approaches require the solution of the Poisson equation that describes the potential distribution for a specified doping profile (or charge density). In this paper, we solve the multi-dimensional semiconductor nonlinear Poisson equation numerically with the finite volume method and the monotone iterative method on a Linux cluster. Owing to the nonlinear property of the Poisson equation, the proposed method converges monotonically for arbitrary initial guesses. Compared with Newton's iterative method, it is easy to implement, relatively robust and fast, with much less computation time, and its algorithm is inherently parallel in large-scale computing. The presented method has been successfully implemented; the developed parallel nonlinear Poisson solver, tested on a variety of devices, shows good efficiency and robustness. Benchmarks are also included to demonstrate the excellent parallel performance of the method.
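The essence of the monotone iteration can be shown in one dimension: rewrite −u″ + f(u) = 0 as (−u″ + λu)ₙₑw = λu − f(u) with λ at least as large as max f′, so each sweep converges monotonically from any starting guess. A minimal dense-matrix Python sketch under those assumptions (homogeneous Dirichlet boundaries, f non-decreasing); the paper's solver is multi-dimensional, finite-volume and parallel.

```python
import numpy as np

def monotone_poisson_1d(f, df_max, u0, h, tol=1e-10, max_iter=200):
    """Monotone (Picard-type) iteration for -u'' + f(u) = 0 on a uniform grid
    with homogeneous Dirichlet BCs. Choosing lam >= max f' makes the sweep
    monotone, so it converges for arbitrary initial guesses."""
    n = len(u0)
    lam = df_max
    A = (np.diag(np.full(n, 2.0 / h**2 + lam))
         + np.diag(np.full(n - 1, -1.0 / h**2), 1)
         + np.diag(np.full(n - 1, -1.0 / h**2), -1))
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        u_new = np.linalg.solve(A, lam * u - f(u))   # linear solve per sweep
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u
```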
8.
P.M. Jordan 《Mathematics and computers in simulation》2010,81(1):18-25
We point out and examine two nonlinear, hyperbolic equations, both of which arise in kinematic-wave theory, that can be solved exactly using a conditional application of the Cole-Hopf transformation. Both of these equations are based on flux relations that were originally proposed as models of thermal wave phenomena, also known as second sound. We then show how this method can be extended and used to obtain a particular type of exact solution to a class of nonlinear, hyperbolic PDEs.
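For reference, the classical (unconditional) Cole-Hopf substitution, of which the paper uses a conditional variant, linearizes Burgers' equation into the heat equation:

```latex
% Classical Cole--Hopf transformation: u = -2\nu\,\phi_x/\phi
u_t + u\,u_x = \nu\,u_{xx},
\qquad
u = -2\nu\,\frac{\phi_x}{\phi}
\quad\Longrightarrow\quad
\phi_t = \nu\,\phi_{xx}.
```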
9.
Joe Pitt-Francis Pras Pathmanathan Miguel O. Bernabeu Rafel Bordas Jonathan Cooper Alexander G. Fletcher Gary R. Mirams Philip Murray James M. Osborne Alex Walter S. Jon Chapman Alan Garny Ingeborg M.M. van Leeuwen Philip K. Maini Blanca Rodríguez Sarah L. Waters Jonathan P. Whiteley Helen M. Byrne David J. Gavaghan 《Computer Physics Communications》2009,180(12):2452-2471
10.
Grid computing is distributed computing performed transparently across multiple administrative domains. Grid middleware, which is meant to enable access to grid resources, is currently widely seen as being too heavyweight and, in consequence, unwieldy for general scientific use. Its heavyweight nature, especially on the client side, has severely restricted the uptake of grid technology by computational scientists. In this paper, we describe the Application Hosting Environment (AHE) which we have developed to address some of these problems. The AHE is a lightweight, easily deployable environment designed to allow the scientist to quickly and easily run legacy applications on distributed grid resources. It provides a higher-level abstraction of a grid than is offered by existing grid middleware schemes such as the Globus Toolkit. As a result, the computational scientist does not need to know the details of any particular underlying grid middleware and is isolated from any changes to it on the distributed resources. The functionality provided by the AHE is 'application-centric': applications are exposed as web services with a well-defined standards-compliant interface. This allows the computational scientist to start and manage application instances on a grid in a transparent manner, thus greatly simplifying the user experience. We describe how a range of computational science codes have been hosted within the AHE and how the design of the AHE allows us to implement complex workflows for deployment on grid infrastructure.
11.
We describe the improved properties of the NMHDECAY program, which is designed to compute Higgs and sparticle masses and Higgs decay widths in the NMSSM. In version 2.0, Higgs decays into squarks and sleptons are included, accompanied by a calculation of the squark, gluino and slepton spectrum and tests against constraints from LEP and the Tevatron. Further radiative corrections are included in the Higgs mass calculation. A link to MicrOMEGAs allows one to compute the dark matter relic density, and a rough (lowest order) calculation of BR(b→sγ) is performed. Finally, version 2.1 allows one to integrate the RGEs for the soft terms up to the GUT scale.
Program summary
Title of program: NMHDECAY_SCAN, NMHDECAY_SLHA
Catalogue identifier: ADXW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXW_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Programming language used: Fortran
Computer: Mac, PC, Sun, Dec, Alpha
Operating system: Mac OSX, Linux, Unix, Windows
No. of lines in distributed program, including test data, etc.: 20 060
No. of bytes in distributed program, including test data, etc.: 133 644
RAM: 2 Mbytes
Distribution format: tar.gz
Number of processors used: 1
Classification: 11.6
Journal reference of previous version: JHEP 0502:066, 2005
Does the new version supersede the previous version?: Yes
Nature of problem: Computation of the Higgs and sparticle spectrum in the NMSSM and check of theoretical and experimental constraints.
Solution method: Mass matrices including up to 2-loop radiative corrections for the Higgs bosons and all sparticles are computed and diagonalized. All Higgs decay widths are computed and branching ratios are compared to experimental bounds. Renormalisation group equations are integrated up to the GUT scale using a modified Runge-Kutta method, in order to check for the absence of a Landau pole. A modified version of MicrOmegas_1.3 can be called in order to compute the relic density of the lightest sparticle.
Reasons for the new version: Higgs to sparticle decays added, computation of dark matter relic density added.
Summary of revisions: Treatment of RGEs and radiative corrections improved, Higgs to sparticle decays added, new link to MicrOmegas_1.3.
Restrictions: none
Unusual features: none
Running time: < 1 s per point in parameter space
12.
In order to model complex heterogeneous biophysical macrostructures with non-trivial charge distributions, such as globular proteins in water, it is important to evaluate the long-range forces present in these systems accurately and efficiently. The Smooth Particle Mesh Ewald (SPME) summation technique is commonly used to determine the long-range part of the electrostatic energy in large-scale molecular simulations. While the SPME technique does not give rise to a performance bottleneck on a single processor, current implementations of SPME on massively parallel supercomputers become problematic at large processor counts, limiting the time and length scales that can be reached. Here, a synergistic investigation involving method improvement, parallel programming and novel architectures is employed to address this difficulty. A relatively simple modification of the SPME technique is described which gives rise to both improved accuracy and efficiency on both massively parallel and scalar computing platforms. Our fine-grained parallel implementation of the modified SPME method for the novel QCDOC supercomputer with its 6D-torus architecture is then given. Numerical tests of algorithm performance on up to 1024 processors of the QCDOC machine at BNL are presented for two systems of interest: a β-hairpin solvated in explicit water (1142 water molecules and a 20-residue protein, 3579 atoms in total), and the HIV-1 protease solvated in explicit water (9331 water molecules and a 198-residue protein, 29508 atoms in total).
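For orientation, the Ewald splitting that SPME accelerates separates the Coulomb energy into a short-range sum evaluated directly, a smooth reciprocal-space sum (the part SPME computes by interpolating charges onto a mesh with B-splines and applying FFTs), and a self-energy correction. In Gaussian units and up to convention-dependent prefactors:

```latex
% Ewald splitting underlying (SP)ME; \alpha is the splitting parameter.
E = \frac{1}{2}\sum_{i \ne j} q_i q_j \,\frac{\operatorname{erfc}(\alpha r_{ij})}{r_{ij}}
  + \frac{1}{2V}\sum_{\mathbf{k}\ne 0} \frac{4\pi}{k^2}\,
    e^{-k^2/4\alpha^2}\,\Bigl|\sum_j q_j e^{i\mathbf{k}\cdot\mathbf{r}_j}\Bigr|^2
  - \frac{\alpha}{\sqrt{\pi}}\sum_j q_j^2 .
```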
13.
Andrei Afanasev Alexander Ilyichev Vladimir Zykunov 《Computer Physics Communications》2007,176(3):218-231
The Monte Carlo generator MERADGEN 1.0 for the simulation of radiative events in parity-conserving doubly-polarized Møller scattering has been developed. Analytical integration, wherever possible, provides rather fast and accurate generation. Some numerical tests and histograms are presented.
Program summary
Program title: MERADGEN 1.0
Catalogue identifier: ADYM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYM_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Programming language: FORTRAN 77
Computer(s) for which the program has been designed: all
Operating system(s) for which the program has been designed: Linux
RAM required to execute with typical data: 1 MB
No. of lines in distributed program, including test data, etc.: 2196
No. of bytes in distributed program, including test data, etc.: 23 501
Distribution format: tar.gz
Has the code been vectorized or parallelized?: no
Number of processors used: 1
Supplementary material: none
External routines/libraries used: none
CPC Program Library subprograms used: none
Nature of problem: Simulation of radiative events in parity-conserving doubly-polarized Møller scattering.
Solution method: Monte Carlo method for simulation within QED; analytical integration wherever possible provides rather fast and accurate generation.
Restrictions: none
Unusual features: none
Additional comments: none
Running time: The simulation of 10⁸ radiative events for itest:=1 takes up to 45 seconds on an AMD Athlon 2.80 GHz processor.
14.
M.S. Zakynthinaki R.O. Barakat C.A. Cordente Martínez J. Sampedro Molinuevo 《Computer Physics Communications》2011,(3):683-691
The stochastic optimization method ALOPEX IV has been successfully applied to the problem of detecting possible changes in maternal heart rate kinetics during pregnancy. To this end, maternal heart rate data were recorded before, during and after gestation, during sessions of exercise of constant mild intensity; ALOPEX IV stochastic optimization was used to calculate the parameter values that optimally fit a dynamical systems model to the experimental data. The results not only demonstrate the effectiveness of ALOPEX IV stochastic optimization, but also have important implications for exercise physiology, as they reveal important changes in maternal cardiovascular dynamics as a result of pregnancy.
15.
In this article we consider the problem of detecting unusual values, or outliers, from time series data where the process by which the data are created is difficult to model. The main consideration is the fact that data closer in time are more correlated to each other than those farther apart. We propose two variations of a method that uses the median from a neighborhood of a data point and a threshold value to compare the difference between the median and the observed data value. Both variations of the method are fast and can be used for data streams that occur in quick succession, such as sensor data on an airplane.
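One plausible reading of the method in code, as a minimal sketch: flag a point when it differs from the median of its time neighborhood by more than a threshold. The MAD-based threshold scaling and the window half-width below are assumptions; the paper's two variations differ in exactly these choices.

```python
import numpy as np

def median_outliers(x, k=5, tau=3.0):
    """Flag x[t] as an outlier when it deviates from the median of its
    2k-point neighborhood by more than tau robust-scale units (MAD).
    Both k and the MAD scaling are illustrative choices."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    flags = np.zeros(n, dtype=bool)
    for t in range(n):
        lo, hi = max(0, t - k), min(n, t + k + 1)
        window = np.delete(x[lo:hi], t - lo)            # neighborhood without x[t]
        med = np.median(window)
        scale = np.median(np.abs(window - med)) or 1.0  # MAD, guarded against 0
        flags[t] = abs(x[t] - med) > tau * scale
    return flags
```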
Martin Meckesheimer has been a member of the Applied Statistics Group at Phantom Works, Boeing since 2001. He received a Bachelor of Science degree in Industrial Engineering from the University of Pittsburgh in 1997, and a Master's degree in Industrial and Systems Engineering from Ecole Centrale Paris in 1999. He earned a doctorate in Industrial Engineering from The Pennsylvania State University in August 2001, as a student of Professor Russell R. Barton and Dr. Timothy W. Simpson. His primary research interests are in the areas of design of experiments and surrogate modeling.
Sabyasachi Basu received his Ph.D. in Statistics from the University of Wisconsin at Madison in 1990. Since then, he has worked in both academia and industry. He has taught and guided Ph.D. students in the Department of Statistics at Southern Methodist University, and has worked as a senior marketing statistician at the J. C. Penney Company. Dr. Basu is an American Society for Quality certified Six Sigma Black Belt. He is currently an Associate Technical Fellow in Statistics and Data Mining at the Boeing Company. In this capacity, he works as a researcher and a technical consultant within Boeing for data mining, statistics and process improvements. He has published more than 20 papers and technical reports, served as a referee for several journals, organized conferences, and been invited to present at conferences.
16.
This paper presents ϑ-SHAKE, an extension to SHAKE, an algorithm for the resolution of holonomic constraints in molecular dynamics simulations, which allows for the explicit treatment of angular constraints. We show that this treatment is more efficient than the use of fictitious bonds, significantly reducing the overlap between the individual constraints and thus accelerating convergence. The new algorithm is compared with SHAKE, M-SHAKE, the matrix-based approach described by Ciccotti and Ryckaert, and P-SHAKE for rigid water and octane.
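For reference, the baseline SHAKE iteration that all these variants build on is sketched below for distance constraints only (angles handled the classical way, via a fictitious 1-3 bond). This is a minimal Python version of textbook SHAKE under stated assumptions, not the paper's ϑ-SHAKE.

```python
import numpy as np

def shake(pos, pos_ref, bonds, lengths, inv_mass, tol=1e-8, max_sweeps=500):
    """Bare-bones SHAKE: relax pairwise distance constraints one at a time,
    correcting along the reference (pre-move) bond directions, until all
    constraints are satisfied. `bonds` is a list of (i, j) index pairs."""
    for _ in range(max_sweeps):
        done = True
        for (i, j), d in zip(bonds, lengths):
            s = pos[i] - pos[j]
            diff = s @ s - d * d                 # constraint violation
            if abs(diff) > tol:
                done = False
                s_ref = pos_ref[i] - pos_ref[j]  # old bond direction
                g = diff / (2.0 * (s @ s_ref) * (inv_mass[i] + inv_mass[j]))
                pos[i] -= g * inv_mass[i] * s_ref
                pos[j] += g * inv_mass[j] * s_ref
        if done:
            return pos
    raise RuntimeError("SHAKE did not converge")
```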
17.
High dimensionality in real-world multi-reservoir systems greatly hinders the application and popularity of evolutionary algorithms, especially for systems with heterogeneous units. An efficient hierarchical optimization framework is presented for search space reduction, determining the best water distributions not only between cascade reservoirs, but also among different types of hydropower units. The framework is applied to the Three Gorges Project (TGP) system, and the results demonstrate that the difficulties of multi-reservoir optimization caused by high dimensionality can be effectively solved by the proposed hierarchical method. For the day studied, power output could be increased by 6.79 GWh using an optimal decision with the same amount of water actually used, while the same amount of power could be generated with 2.59 × 10⁷ m³ less water compared to the historical policy. The methodology proposed is general in that it can be used for other reservoir systems and other types of heterogeneous unit generators.
18.
Time-delay neural networks for time series prediction: an application to the monthly wholesale price of oilseeds in India
Agricultural price forecasting is one of the challenging areas of time series forecasting. The feed-forward time-delay neural network (TDNN) is one of the promising methods for time series prediction. However, empirical comparisons of TDNN with the autoregressive integrated moving average (ARIMA) model often yield mixed results in terms of forecasting performance. In this paper, the price forecasting capabilities of the TDNN model, which can capture nonlinear relationships, are compared with the ARIMA model using monthly wholesale price series of oilseed crops traded in different markets in India. Most earlier studies of forecast accuracy for TDNN versus ARIMA do not consider pretesting for nonlinearity. This study shows that a nonlinearity test of the price series provides a reliable guide to post-sample forecast accuracy for the neural network model. The TDNN model in general provides better forecast accuracy in terms of conventional root mean square error values compared to the ARIMA model for nonlinear patterns. The study also reveals that the neural network models have a clear advantage over linear models for predicting the direction of monthly price change for the different series. Such direction-of-change forecasts are particularly important in economics for capturing the business cycle movements relating to turning points.
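A time-delay network is simply a feed-forward net applied to a lagged embedding of the series. A minimal sketch using scikit-learn's MLPRegressor as the network (an assumption for illustration; the paper's architecture, lag order and preprocessing for the oilseed series are not specified here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def delay_embed(series, p):
    """Build the lagged design matrix a TDNN sees:
    row t is (y[t], ..., y[t+p-1]) with target y[t+p]."""
    y = np.asarray(series, dtype=float)
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    return X, y[p:]

def fit_tdnn(series, p=12, hidden=8):
    """Fit a small one-hidden-layer net on the delay embedding."""
    X, target = delay_embed(series, p)
    return MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=5000).fit(X, target)
```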
19.
Optimization of scale and parametrization for terrain segmentation: An application to soil-landscape modeling
Lucian Drăguț Thomas Schauppenlehner Andreas Muhar Josef Strobl Thomas Blaschke 《Computers & Geosciences》2009,35(9):1875-1883
This paper presents a procedure to optimize parametrization and scale for terrain-based environmental modeling. The workflow is exemplified on crop yield data, which is assumed to represent a proxy for soil productivity. Focal mean statistics were used to generate different scale levels of terrain derivatives by increasing the neighborhood size in the calculation. The degree of association between each terrain derivative and the crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be retained. To select the best combination of terrain parameters that explains the variation of crop yield, we ran stepwise multiple regressions with appropriately scaled terrain parameters as independent variables. These techniques showed that the mean curvature, filtered over a neighborhood of 55 m, together with slope, made up the optimal combination to account for patterns of soil productivity. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R² from a value of 0.01 when the curvature was not filtered, to 0.16 when the curvature was filtered within a 55 × 55 m neighborhood. The results were further used in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield. Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling, besides existing per-cell methods. At the level of segments, R² improved up to a value of 0.47.
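The scale-selection step has a direct computational reading: smooth each derivative with growing focal-mean windows, correlate each level with yield, and keep the first peak. A minimal raster sketch in Python (the window radii in cells and the peak rule are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def first_peak_scale(deriv, yield_map, radii):
    """Smooth a terrain derivative with growing focal-mean neighborhoods,
    correlate each scale level with crop yield, and return the radius at the
    first local peak of |r|. Both grids are 2D arrays on the same raster."""
    y = np.asarray(yield_map, dtype=float).ravel()
    corrs = []
    for r in radii:
        smoothed = uniform_filter(np.asarray(deriv, dtype=float), size=2 * r + 1)
        corrs.append(abs(np.corrcoef(smoothed.ravel(), y)[0, 1]))
    for k in range(1, len(corrs) - 1):       # first interior local maximum
        if corrs[k] >= corrs[k - 1] and corrs[k] >= corrs[k + 1]:
            return radii[k], corrs[k]
    return radii[int(np.argmax(corrs))], max(corrs)
```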
20.
Ludovic Giet 《Computational statistics & data analysis》2008,52(6):2945-2965
A minimum disparity estimator minimizes a φ-divergence between the marginal density of a parametric model and its non-parametric estimate. This principle is applied to the estimation of stochastic differential equation models, choosing the Hellinger distance as the particular φ-divergence. Under a stationarity hypothesis, the parametric marginal density is obtained by solving the Kolmogorov forward equation. Particular emphasis is put on the non-parametric estimation of the sample marginal density, which has to take into account sample dependence and kurtosis. A new window-size determination is provided. The classical estimator is presented alternatively as a distance minimizer and as a pseudo-likelihood maximizer. The latter presentation opens the way to Bayesian inference. The method is applied to continuous-time models of the interest rate. In particular, various models are tested and their results are discussed.
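In outline, the estimator compares a kernel estimate of the sample's marginal density with the model's stationary density and minimizes the squared Hellinger distance H² = 1 − ∫ √(f g) dx. A minimal sketch, assuming a user-supplied model_pdf(x, theta) for the stationary marginal (hypothetical name) and using the default KDE bandwidth in place of the paper's dependence- and kurtosis-aware window size:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

def min_hellinger_fit(sample, model_pdf, theta0, n_grid=512):
    """Minimum-Hellinger-distance estimation in outline: KDE of the sample
    marginal vs. the parametric marginal, H^2 minimized over theta.
    `model_pdf(x, theta)` is an assumed, vectorized user-supplied density."""
    sample = np.asarray(sample, dtype=float)
    kde = gaussian_kde(sample)                    # default bandwidth stands in
    grid = np.linspace(sample.min(), sample.max(), n_grid)
    f = kde(grid)
    dx = grid[1] - grid[0]

    def h2(theta):
        g = np.clip(model_pdf(grid, theta), 0.0, None)
        return 1.0 - np.sum(np.sqrt(f * g)) * dx  # squared Hellinger distance

    return minimize(h2, theta0, method='Nelder-Mead')
```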