Similar Documents
20 similar documents found (search time: 15 ms).
1.
A new version of XtalOpt, a user-friendly GPL-licensed evolutionary algorithm for crystal structure prediction, is available for download from the CPC library or the XtalOpt website, http://xtalopt.openmolecules.net. The new version now supports four external geometry optimization codes (VASP, GULP, PWSCF, and CASTEP), as well as four queuing systems: PBS, SGE, SLURM, and “Local”. The “Local” queuing system allows the geometry optimizations to be performed on the user's workstation if an external computational cluster is unavailable. Support for the Windows operating system has been added, and a Windows installer is provided. Numerous bugfixes and feature enhancements have been made in the new release as well.
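
The evolutionary search that XtalOpt performs can be pictured as a loop: generate random candidate cells, relax each one with an external optimizer, rank by enthalpy, and breed new candidates from the fittest. The sketch below illustrates that general idea only; it is not XtalOpt's C++ implementation, the `relax` stub merely stands in for a call to VASP/GULP/PWSCF/CASTEP, and all names and parameters are invented for illustration.

```python
import random

def random_structure(natoms):
    """Hypothetical candidate: cell lengths (a, b, c) plus fractional coordinates."""
    cell = [random.uniform(2.0, 10.0) for _ in range(3)]
    frac = [[random.random() for _ in range(3)] for _ in range(natoms)]
    return {"cell": cell, "frac": frac}

def relax(structure):
    """Stub for an external geometry optimization (VASP/GULP/PWSCF/CASTEP in XtalOpt).
    Here we just invent a toy 'enthalpy' so the loop runs."""
    a, b, c = structure["cell"]
    structure["enthalpy"] = (a - 4.0) ** 2 + (b - 4.0) ** 2 + (c - 4.0) ** 2
    return structure

def crossover(p1, p2):
    """Cut-and-splice style mix of two parents (schematic)."""
    child_cell = [random.choice(pair) for pair in zip(p1["cell"], p2["cell"])]
    half = len(p1["frac"]) // 2
    return {"cell": child_cell, "frac": p1["frac"][:half] + p2["frac"][half:]}

def mutate(structure, amplitude=0.3):
    """Random strain mutation (schematic)."""
    structure["cell"] = [x * (1.0 + random.uniform(-amplitude, amplitude))
                         for x in structure["cell"]]
    return structure

def evolve(natoms=4, pop_size=10, generations=20):
    population = [relax(random_structure(natoms)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda s: s["enthalpy"])
        parents = population[: pop_size // 2]          # keep the most stable half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + [relax(c) for c in children]
    return min(population, key=lambda s: s["enthalpy"])

print(evolve()["enthalpy"])
```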

New version program summary

Program title: XtalOpt
Catalogue identifier: AEGX_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.1 or later [1]
No. of lines in distributed program, including test data, etc.: 125 383
No. of bytes in distributed program, including test data, etc.: 11 607 415
Distribution format: tar.gz
Programming language: C++
Computer: PCs, workstations, or clusters
Operating system: Linux, MS Windows
Classification: 7.7
External routines: Qt [2], Open Babel [3], Avogadro [4], and one of: VASP [5], PWSCF [6], GULP [7], CASTEP [8]
Catalogue identifier of previous version: AEGX_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 372
Does the new version supersede the previous version?: Yes
Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum of a crystalline structure on its potential energy surface. Our evolutionary algorithm, XtalOpt, is freely available for use and collaboration under the GNU Public License. See the original publication on XtalOpt's implementation [11] for more information on the method.
Reasons for new version: Since XtalOpt's initial release in June 2010, support for additional optimizers, queuing systems, and an operating system has been added. XtalOpt can now use VASP, GULP, PWSCF, or CASTEP to perform local geometry optimizations. The queue submission code has been rewritten, and now supports running any of the above codes on ssh-accessible computer clusters that use the Portable Batch System (PBS), Sun Grid Engine (SGE), or SLURM queuing systems for managing the optimization jobs. Alternatively, geometry optimizations may be performed on the user's workstation using the new internal “Local” queuing system if high performance computing resources are unavailable. XtalOpt has been built and tested on the Microsoft Windows operating system (XP or later) in addition to Linux, and a Windows installer is provided. The installer includes a development version of Avogadro that contains expanded crystallography support [12] that is not available in the mainline Avogadro releases. Other notable new developments include:
  • LIBSSH [10] is distributed with the XtalOpt sources and is used for communication with remote clusters, eliminating the previous requirement to set up public-key authentication;
  • Plotting enthalpy (or energy) vs. structure number in the plot tab now traces out the history of the most stable structure as the search progresses;
  • A read-only mode has been added to allow inspection of previous searches through the user interface without connecting to a cluster or submitting new jobs;
  • The tutorial [13] has been rewritten to reflect the changes to the interface and the newly supported codes. Expanded sections on optimization schemes and save/resume have been added;
  • The included version of SPGLIB [9] has been updated. An option has been added to set the Cartesian tolerance of the space group detection. A new option has been added to the Progress table's right-click menu that copies the selected structure's POSCAR-formatted representation to the clipboard;
  • Numerous other small bugfixes and enhancements.
Summary of revisions: See “Reasons for new version” above.
Running time: User dependent. The program runs until stopped by the user.
References:
  [1] http://www.gnu.org/licenses/gpl.html
  [2] http://www.trolltech.com/
  [3] http://openbabel.org/
  [4] http://avogadro.openmolecules.net
  [5] http://cms.mpi.univie.ac.at/vasp
  [6] http://www.quantum-espresso.org
  [7] https://www.ivec.org/gulp
  [8] http://www.castep.org
  [9] http://spglib.sourceforge.net
  [10] http://www.libssh.org
  [11] D. Lonie, E. Zurek, Comput. Phys. Comm. 182 (2011) 372–387, doi:10.1016/j.cpc.2010.07.048.
  [12] http://davidlonie.blogspot.com/2011/03/new-avogadro-crystallography-extension.html
  [13] http://xtalopt.openmolecules.net/globalsearch/docs/tut-xo.html

2.
The algorithm and testing of the Multi-algorithm-collaborative Universal Structure-prediction Environment (Muse) are detailed. In Muse, I have combined the evolutionary, simulated annealing, and basin hopping algorithms to realize high-efficiency structure prediction of materials under given conditions. Muse is kept open, and other algorithms can be added in the future. I introduced two new operators, slip and twist, to increase the diversity of structures. To realize the self-adaptive evolution of structures, I also introduced a competition scheme among the ten variation operators, which is shown to further increase the diversity of structures. The symmetry constraints in the first generation, the multi-algorithm collaboration, the ten variation operators, and the self-adaptive scheme are all key to enhancing the performance of Muse. To study the search ability of Muse, I performed extensive tests on different systems, including metallic, covalent, and ionic systems. All of these tests show that Muse has very high efficiency and a 100% success rate.
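
The self-adaptive competition among variation operators can be pictured as a roulette whose weights track each operator's recent success. The sketch below is a generic version of that idea with invented operator names and an assumed reward rule; it is not Muse's actual scheme.

```python
import random

class OperatorRoulette:
    """Pick variation operators with probability proportional to their accumulated success."""
    def __init__(self, operators):
        self.operators = list(operators)
        self.scores = {name: 1.0 for name in self.operators}  # start unbiased

    def choose(self):
        total = sum(self.scores.values())
        r, acc = random.uniform(0.0, total), 0.0
        for name in self.operators:
            acc += self.scores[name]
            if r <= acc:
                return name
        return self.operators[-1]

    def reward(self, name, improved):
        # Operators that produced a better structure gain weight; others decay slightly.
        self.scores[name] = 0.9 * self.scores[name] + (1.0 if improved else 0.1)

# Example: "slip" and "twist" (the two new operators) among a larger operator pool.
roulette = OperatorRoulette(["heredity", "mutation", "permutation", "slip", "twist"])
for step in range(100):
    op = roulette.choose()
    improved = random.random() < 0.3          # placeholder for "child beat its parent"
    roulette.reward(op, improved)
print(sorted(roulette.scores.items(), key=lambda kv: -kv[1]))
```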

3.
Learning how to classify sensor data is one of the basic learning tasks in engineering. Data from sensors are usually made available over time, and are classified according to the behavior they exhibit in specific time intervals. This paper addresses the problem of classifying finite, univariate time series that are governed by unknown deterministic processes contaminated by noise. Time series in the same class are allowed to follow different processes. In this context, the appropriateness of using induction algorithms not specifically designed for temporal data is investigated. The paper presents Calchas, a simple supervised induction algorithm that uses serial correlation as its inductive bias in a Bayesian framework, and compares it empirically to a popular general-purpose classifier in a NASA telemetry monitoring application. Two comparisons were performed: one in which the general-purpose classifier was applied directly to the data, and another in which features that capture serial correlations were extracted before induction. Serial correlation appeared to be an important form of inductive bias, most effectively utilized as an integral part of the learning algorithm; feature extraction occurs too early in the training process to utilize correlation knowledge effectively.
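
As a concrete illustration of serial correlation as a feature (the second of the two comparisons above), the snippet below computes the lag-1 autocorrelation of each series and feeds it, together with simple statistics, to a hand-rolled Gaussian naive Bayes classifier. This is a generic sketch on synthetic data, not the Calchas algorithm or the NASA telemetry setting.

```python
import numpy as np

def lag1_autocorr(x):
    """Serial-correlation feature: Pearson correlation between x[t] and x[t-1]."""
    x = np.asarray(x, dtype=float)
    x0, x1 = x[:-1] - x[:-1].mean(), x[1:] - x[1:].mean()
    return float((x0 * x1).sum() / np.sqrt((x0 ** 2).sum() * (x1 ** 2).sum()))

def features(series):
    return np.array([series.mean(), series.std(), lag1_autocorr(series)])

# Two synthetic classes: a smooth (strongly autocorrelated) process vs. near-white noise.
rng = np.random.default_rng(0)
def smooth(n=200): return np.cumsum(rng.normal(size=n)) * 0.1 + rng.normal(size=n) * 0.1
def noisy(n=200):  return rng.normal(size=n)

train = [(features(smooth()), 0) for _ in range(50)] + [(features(noisy()), 1) for _ in range(50)]
X = np.array([f for f, _ in train]); y = np.array([c for _, c in train])

# Minimal Gaussian naive Bayes: per-class feature means/variances, pick the max log-likelihood.
stats = {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9) for c in (0, 1)}

def classify(f):
    def loglik(c):
        mu, var = stats[c]
        return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (f - mu) ** 2 / var))
    return max((0, 1), key=loglik)

test = [(features(smooth()), 0) for _ in range(20)] + [(features(noisy()), 1) for _ in range(20)]
print("accuracy:", np.mean([classify(f) == c for f, c in test]))
```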

4.
A well-known computational approach to finite presentations of infinite groups is the kbmag procedure of Epstein, Holt and Rees. We describe some efficiency issues relating to the procedure and detail two asymptotic improvements: an index for the rewriting system that uses generalized suffix trees and the use of dynamic programming to reduce the number of verification steps.
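
The basic operation such an index accelerates is word reduction: repeatedly locate a left-hand side in the word and replace it with its right-hand side. The sketch below performs that reduction naively, with a toy confluent system for the free abelian group on two generators; kbmag's generalized-suffix-tree index exists precisely to avoid this repeated rescanning, and the rule set here is only an example.

```python
def reduce_word(word, rules):
    """Apply rewrite rules until no left-hand side occurs (naive rescanning)."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = word.find(lhs)
            if i != -1:
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
                break
    return word

# Toy confluent system: A = a^-1, B = b^-1; free abelian group <a, b | ab = ba>.
rules = [("aA", ""), ("Aa", ""), ("bB", ""), ("Bb", ""),
         ("ba", "ab"), ("bA", "Ab"), ("Ba", "aB"), ("BA", "AB")]

print(reduce_word("abAbBa", rules))   # reduces to the normal form "ab"
```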

5.
6.
We describe the Breit–Pauli distorted wave (BPDW) approach for the electron-impact excitation of atomic ions that we have implemented within the AUTOSTRUCTURE code.

Program summary

Program title: AUTOSTRUCTURE
Catalogue identifier: AEIV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 130 987
No. of bytes in distributed program, including test data, etc.: 1 031 584
Distribution format: tar.gz
Programming language: Fortran 77/95
Computer: General
Operating system: Unix
Has the code been vectorized or parallelized?: Yes, a parallel version, with MPI directives, is included in the distribution.
RAM: From several kbytes to several Gbytes
Classification: 2, 2.4
Nature of problem: Collision strengths for the electron-impact excitation of atomic ions are calculated using a Breit–Pauli distorted wave approach with the optional inclusion of two-body non-fine-structure and fine-structure interactions.
Solution method: General multi-configuration Breit–Pauli atomic structure. A jK-coupling partial wave expansion of the collision problem. Slater state angular algebra. Various model potential non-relativistic or kappa-averaged relativistic radial orbital solutions — the continuum distorted wave orbitals are not required to be orthogonal to the bound.
Additional comments: Documentation is provided in the distribution file along with the test-case.
Running time: From a few seconds to a few hours.

7.
OneLOop is a program to evaluate the one-loop scalar 1-point, 2-point, 3-point and 4-point functions, for all kinematical configurations relevant for collider-physics, and for any non-positive imaginary parts of the internal squared masses. It deals with all UV and IR divergences within dimensional regularization. Furthermore, it provides routines to evaluate these functions using straightforward numerical integration.
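
For a feel of what "straightforward numerical integration" of a scalar function can look like, the sketch below evaluates the finite part of the one-loop 2-point function from its one-dimensional Feynman-parameter representation, B0_fin = -∫_0^1 dx ln[(x m1^2 + (1-x) m2^2 - x(1-x) p^2 - iε)/μ^2]. This is a textbook illustration in my own conventions and normalization, not OneLOop's Fortran routines.

```python
import numpy as np
from scipy.integrate import quad

def b0_finite(p2, m1sq, m2sq, mu2=1.0, eps=1e-30):
    """Finite part of the one-loop scalar 2-point function via its Feynman-parameter
    integral; the -i*eps prescription selects the physical branch of the logarithm."""
    def integrand(x, part):
        arg = x * m1sq + (1.0 - x) * m2sq - x * (1.0 - x) * p2 - 1j * eps
        val = -np.log(arg / mu2)
        return val.real if part == "re" else val.imag
    re, _ = quad(integrand, 0.0, 1.0, args=("re",), limit=200)
    im, _ = quad(integrand, 0.0, 1.0, args=("im",), limit=200)
    return complex(re, im)

# Above threshold (p^2 > (m1 + m2)^2) the function develops an imaginary part:
print(b0_finite(p2=10.0, m1sq=1.0, m2sq=1.0))   # complex value
print(b0_finite(p2=1.0,  m1sq=1.0, m2sq=1.0))   # purely real below threshold
```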

Program summary

Program title: OneLOop
Catalogue identifier: AEJO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 12 061
No. of bytes in distributed program, including test data, etc.: 74 163
Distribution format: tar.gz
Programming language: Fortran
Computer: Workstations
Operating system: Linux, Unix
RAM: Negligible
Classification: 4.4, 11.1
Nature of problem: In order to reach next-to-leading order precision in the calculation of cross sections of hard scattering processes, one-loop amplitudes have to be evaluated. This is done by expressing them as linear combinations of one-loop scalar functions. In a concrete calculation, these functions eventually have to be evaluated. If the scattering process involves unstable particles, consistency requires the evaluation of these functions with complex internal masses.
Solution method: Expressions for the one-loop scalar functions in terms of single-variable analytic functions existing in the literature have been implemented.
Restrictions: The applicability is restricted to the kinematics occurring in collider physics.
Running time: The evaluation of the most general 4-point function with 4 complex masses takes about 180 μs, and the evaluation of the 4-point function with 4 real masses takes about 18 μs on a 2.80 GHz Intel Xeon processor.

8.
The challenge for the metaobject protocol designer is to balance the conflicting demands of efficiency, simplicity, and extensibility. It is impossible to know all desired extensions in advance; some of them will require greater functionality, while others require greater efficiency. In addition, the protocol itself must be sufficiently simple that it can be fully documented and understood by those who need to use it. This paper presents the framework of a metaobject protocol for EuLisp which provides expressiveness by a multi-leveled protocol and achieves efficiency by static semantics for predefined metaobjects and modularizing their operations. The EuLisp module system supports global optimizations of metaobject applications. The metaobject system itself is structured into modules, taking into account the consequences for the compiler. It provides introspective operations as well as extension interfaces for various functionalities, including new inheritance, allocation, and slot access semantics. While the overall goals and functionality are close to those of Kiczales et al. [9], the approach shows different emphases. As a result, time and space efficiency as well as robustness have been improved. This article is a revised and extended version of [4]. The work of this paper was supported by the joint project APPLY, Ilog SA, the University of Bath, the British Council/DAAD ARC program, and the EuLisp working group. The joint project APPLY is funded by the German Federal Ministry for Research and Technology (BMFT). The partners in this project are the University of Kiel, the Fraunhofer Institute for Software Engineering and Systems Engineering (ISST), the German National Research Center for Computer Science (GMD), and VW-Gedas.
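
A rough analogue of what a metaobject protocol lets one extend, namely the semantics of slot access fixed at class-definition time, can be shown with a Python metaclass. This is only an analogy within Python's object model, with invented names; it is not EuLisp or its module-structured MOP.

```python
class CountedSlot:
    """Descriptor implementing an alternative slot-access semantics: every read is counted."""
    def __set_name__(self, owner, name):
        self.storage = "_" + name
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        obj.__dict__["reads"] = obj.__dict__.get("reads", 0) + 1
        return getattr(obj, self.storage)
    def __set__(self, obj, value):
        setattr(obj, self.storage, value)

class CountingMeta(type):
    """'Metaobject' hook: at class-creation time, wrap every annotated slot with CountedSlot."""
    def __new__(mcls, name, bases, namespace):
        for slot in namespace.get("__annotations__", {}):
            namespace[slot] = CountedSlot()
        return super().__new__(mcls, name, bases, namespace)

class Point(metaclass=CountingMeta):
    x: float
    y: float

p = Point()
p.x, p.y = 1.0, 2.0
_ = p.x + p.x + p.y
print(p.reads)   # 3 slot reads recorded by the customized access protocol
```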

9.
On-line hand-drawn structured document interpretation is a complex pattern recognition problem. This paper deals with eager strategies for this purpose, i.e. strategies that update the analyzed document after each input stroke and provide corresponding feedback to the user. We have designed a new class of visual grammars for modeling structured document composition: the context-driven constraint multiset grammars (CD-CMG). Their main originality is to model the structural context in which a production can be reduced and to take into account the hand-drawn nature of the data. The associated parser exploits the formalized knowledge for predictive purposes and couples bottom-up and top-down strategies. The context-sensitiveness of the grammars helps to significantly reduce the combinatorics of the analysis process. We use the fuzzy set framework to evaluate each possible interpretation in a qualitative way. Reject options are exploited to increase the robustness of the decision making and to detect the need for stroke segmentation. The parser is also able to wait for more information before making a decision by using a branch and bound algorithm. In this paper, we provide experimental results showing that the method is efficient enough to be used in real-time applications. We illustrate this point by focusing on a commercialized pen-based system that is based on the DALI method presented in this paper.

10.
The final fragment energy distribution in fast photodissociation reactions is often close to an impulsive limit, since there is little time for intramolecular vibrational relaxation to occur. The computer program Zh?Kè implements a new vibrationally-adiabatic impulsive dissociation model [K.F. Lim, in: IQEC '96 Technical Digest (Optical Soc. of America, 1996), paper WL117], in which holonomic constraints are used to decouple vibrations from the dissociation reaction coordinate. Final photofragment vibrational, translational, and rotational energies and the associated angular momentum quantum numbers are calculated. The “soft” impulsive dissociation model of Busch and Wilson (J. Chem. Phys. 56 (1972) 3626) is included for comparison. The FORTRAN 77 code has been tested on a DEC ALPHA 300 workstation, a Fujitsu VP supercomputer, and a Solbourne 5e computer (with 3 CPUs).

11.
With modern data acquisition devices that work fast and with high precision, scientists often face the task of dealing with huge amounts of data. These data need to be rapidly processed and stored to a hard disk. We present a LabVIEW program which reliably streams analog time series sampled at MHz rates. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle, and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile, as the input can be any type of analog signal. The data streaming software is also simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification.
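
The core pattern of such a tool (acquire blocks in one loop, write them to disk in another, with a bounded buffer in between so no samples are dropped) carries over to other languages. Below is a minimal Python sketch of that producer/consumer streaming pattern on a simulated analog signal; the actual program is a LabVIEW VI using DAQmx and the TDMS format, so the file format, block size, and signal source here are illustrative assumptions.

```python
import queue, threading, struct, math, random

BLOCK = 4096                     # samples per block
NBLOCKS = 100                    # total blocks to stream
buf = queue.Queue(maxsize=64)    # bounded buffer between acquisition and disk writer

def acquire():
    """Producer: simulates an analog channel (sine + noise) delivered block by block."""
    t = 0
    for _ in range(NBLOCKS):
        block = [math.sin(2 * math.pi * 1e3 * (t + i) / 1e6) + 0.01 * random.gauss(0, 1)
                 for i in range(BLOCK)]
        t += BLOCK
        buf.put(block)            # blocks if the writer falls behind, so nothing is dropped
    buf.put(None)                 # sentinel: acquisition finished

def write(path="stream.bin"):
    """Consumer: appends raw float64 samples to disk as they arrive."""
    with open(path, "wb") as f:
        while True:
            block = buf.get()
            if block is None:
                break
            f.write(struct.pack(f"{len(block)}d", *block))

threads = [threading.Thread(target=acquire), threading.Thread(target=write)]
for th in threads: th.start()
for th in threads: th.join()
print("wrote", NBLOCKS * BLOCK, "samples")
```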

Program summary

Program title: TimeSeriesStreaming.VI
Catalogue identifier: AEHT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 250
No. of bytes in distributed program, including test data, etc.: 63 259
Distribution format: tar.gz
Programming language: LabVIEW (http://www.ni.com/labview/)
Computer: Any machine running LabVIEW 8.6 or higher
Operating system: Windows XP and Windows 7
RAM: 60–360 Mbyte
Classification: 3
Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods.
Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses.
Restrictions: Only tested in Windows-operating LabVIEW environments; must use the TDMS format; acquisition cards must be LabVIEW compatible; the DAQmx driver must be installed.
Running time: As desired: microseconds to hours

12.
The architectural layout design problem, which is concerned with finding the best adjacencies between functional spaces among many possible ones under given constraints, can be formulated as a combinatorial optimization problem and solved with an Evolutionary Algorithm (EA). We represent functional spaces and their adjacencies in the form of graphs and propose an EA, called EvoArch, that works with a graph-encoding scheme. EvoArch encodes topological configurations in the adjacency matrices of the graphs they represent, and its reproduction operators operate on these adjacency matrices. In order to explore the large search space of graph topologies, these reproduction operators are designed to be unbiased, so that all nodes in a graph have equal chances of being selected to be swapped or mutated. To evaluate the fitness of a graph, EvoArch makes use of a fitness function that takes into consideration preferences for adjacencies between different functional spaces, budget, and other design constraints. By means of different experiments, we show that EvoArch can be a very useful tool for architectural layout design tasks.
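
To make the graph encoding concrete, the sketch below represents a layout as a symmetric adjacency matrix over functional spaces and applies an unbiased node swap and edge flip in the spirit of the operators described above. The fitness function is a placeholder that only rewards an assumed list of preferred adjacencies; it is not EvoArch's budget-aware objective, and all names are invented.

```python
import random

SPACES = ["living", "kitchen", "bedroom", "bath", "garage"]
PREFERRED = {("living", "kitchen"), ("bedroom", "bath")}   # assumed adjacency preferences

def random_layout(n=len(SPACES), p=0.4):
    """Symmetric 0/1 adjacency matrix: A[i][j] = 1 means spaces i and j are adjacent."""
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j] = a[j][i] = int(random.random() < p)
    return a

def swap_nodes(a):
    """Unbiased node swap: every node has the same chance of being selected."""
    i, j = random.sample(range(len(a)), 2)
    for row in a:
        row[i], row[j] = row[j], row[i]
    a[i], a[j] = a[j], a[i]
    return a

def flip_edge(a):
    """Unbiased edge mutation: toggle a uniformly chosen adjacency."""
    i, j = random.sample(range(len(a)), 2)
    a[i][j] = a[j][i] = 1 - a[i][j]
    return a

def fitness(a):
    """Placeholder objective: count satisfied adjacency preferences."""
    idx = {name: k for k, name in enumerate(SPACES)}
    return sum(a[idx[u]][idx[v]] for u, v in PREFERRED)

layout = random_layout()
for _ in range(200):                                   # simple hill climb as a stand-in
    candidate = flip_edge(swap_nodes([row[:] for row in layout]))
    if fitness(candidate) >= fitness(layout):
        layout = candidate
print("satisfied preferences:", fitness(layout), "of", len(PREFERRED))
```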

13.
An efficient evolutionary algorithm for accurate polygonal approximation
An optimization problem for polygonal approximation of 2-D shapes is investigated in this paper. The optimization problem for a digital contour of N points with the approximating polygon of K vertices has a search space of C(N, K) instances, i.e., the number of ways of choosing K vertices out of N points. A genetic-algorithm-based method has been proposed for determining the optimal polygons of digital curves, and its performance is better than that of several existing methods for the polygonal approximation problems. This paper proposes an efficient evolutionary algorithm (EEA) with a novel orthogonal array crossover for obtaining the optimal solution to the polygonal approximation problem. It is shown empirically that the proposed EEA outperforms the existing genetic-algorithm-based method under the same cost conditions in terms of the quality of the best solution, average solution, variance of solutions, and the convergence speed, especially in solving large polygonal approximation problems.
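
A minimal version of the underlying optimization problem is easy to state in code: a chromosome is a set of K indices into the N contour points, and its fitness is the approximation error of the induced polygon. The sketch below evaluates that error on a toy contour and uses plain random search as a stand-in for the evolutionary search; the paper's EEA replaces this with selection and its orthogonal-array crossover, neither of which is reproduced here.

```python
import math, random

def contour_circle(n=60, r=10.0):
    """Toy digital contour: n points on a circle."""
    return [(r * math.cos(2 * math.pi * i / n), r * math.sin(2 * math.pi * i / n))
            for i in range(n)]

def seg_dist(p, a, b):
    """Distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def approximation_error(points, vertices):
    """Sum of squared distances from each contour point to its approximating polygon edge."""
    err, n, verts = 0.0, len(points), sorted(vertices)
    for s in range(len(verts)):
        i, j = verts[s], verts[(s + 1) % len(verts)]
        arc = range(i, j) if i < j else list(range(i, n)) + list(range(0, j))
        for p in arc:
            err += seg_dist(points[p], points[i], points[j]) ** 2
    return err

points, K = contour_circle(), 6
best = sorted(random.sample(range(len(points)), K))
for _ in range(2000):                       # random-search baseline (the EEA would do better)
    cand = sorted(random.sample(range(len(points)), K))
    if approximation_error(points, cand) < approximation_error(points, best):
        best = cand
print("best vertex indices:", best, "error:", round(approximation_error(points, best), 3))
```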

14.
15.
In the second article of the series, we present the Gibbs2 code, a Fortran90 reimplementation of the original Gibbs program [Comput. Phys. Commun. 158 (2004) 57] for the calculation of pressure–temperature dependent thermodynamic properties of solids under the quasiharmonic approximation. We have taken advantage of the detailed analysis carried out in the first paper to implement robust fitting techniques. In addition, new models to introduce temperature effects have been incorporated, from the simple Debye model contained in the original article to a full quasiharmonic model that requires the phonon density of states at each calculated volume. Other interesting novel features include the empirical energy corrections, that rectify systematic errors in the calculation of equilibrium volumes caused by the choice of the exchange-correlation functional, the electronic contributions to the free energy and the automatic computation of phase diagrams. Full documentation in the form of a user's guide and a complete set of tests and sample data are provided along with the source code.
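
As a small illustration of the simplest temperature model mentioned above, the snippet below evaluates the Debye-model vibrational Helmholtz free energy, F_vib = n k_B [ (9/8) Θ_D + 3 T ln(1 - e^(-Θ_D/T)) - T D(Θ_D/T) ], with D the third-order Debye function. This is the textbook expression in my own units and variable names, intended only as a sketch of the physics; it is not taken from the Gibbs2 source.

```python
import math
from scipy.integrate import quad

KB = 8.617333262e-5   # Boltzmann constant in eV/K

def debye_function(x):
    """Third-order Debye function D(x) = (3/x^3) * integral_0^x t^3/(e^t - 1) dt."""
    if x <= 0:
        return 1.0
    val, _ = quad(lambda t: t**3 / math.expm1(t) if t > 0 else 0.0, 0.0, x)
    return 3.0 * val / x**3

def f_vib_debye(temperature, theta_d, natoms):
    """Debye-model vibrational Helmholtz free energy (in eV) for natoms atoms per cell."""
    if temperature == 0.0:
        return natoms * KB * (9.0 / 8.0) * theta_d       # zero-point term only
    x = theta_d / temperature
    return natoms * KB * temperature * (
        (9.0 / 8.0) * x + 3.0 * math.log(-math.expm1(-x)) - debye_function(x)
    )

# Example: a 2-atom cell with an assumed Debye temperature of 400 K.
for T in (0.0, 300.0, 1000.0):
    print(f"T = {T:6.1f} K   F_vib = {f_vib_debye(T, 400.0, 2):+.4f} eV")
```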

Program summary

Program title: Gibbs2
Catalogue identifier: AEJI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, v3
No. of lines in distributed program, including test data, etc.: 936 087
No. of bytes in distributed program, including test data, etc.: 8 596 671
Distribution format: tar.gz
Programming language: Fortran90
Computer: Any running Unix/Linux
Operating system: Unix, GNU/Linux
Classification: 7.8
External routines: Part of the minpack, pppack and slatec libraries (downloaded from www.netlib.org) are distributed along with the program.
Nature of problem: Given the static E(V) curve, and possibly vibrational information such as the phonon density of states, calculate the equilibrium volume and thermodynamic properties of a solid at arbitrary temperatures and pressures in the framework of the quasiharmonic approximation.
Additional comments: A detailed analysis concerning the fitting of equations of state has been carried out in the first part of this article, and implemented in the code presented here.
Running time: The tests provided only take a few seconds to run.

16.
This note describes a method of fitting κ straight lines to a set of data points using an algorithm analogous to the Isodata, or κ-means, clustering technique for partitioning a set of data points into κ compact clusters.
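
The κ-lines analogue of κ-means alternates two steps: assign each point to the line with the smallest residual, then refit each line to its assigned points by least squares. A compact sketch of that alternation is given below, with synthetic data and the usual caveats about initialization and empty clusters; the details are mine, not the note's.

```python
import numpy as np

def fit_line(x, y):
    """Least-squares fit y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return a, b

def fit_k_lines(x, y, k=2, iters=20, seed=1):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(x))            # random initial partition
    lines = [(0.0, 0.0)] * k
    for _ in range(iters):
        # Refit step: one least-squares line per cluster (skip nearly empty clusters).
        for j in range(k):
            mask = labels == j
            if mask.sum() >= 2:
                lines[j] = fit_line(x[mask], y[mask])
        # Assignment step: each point goes to the line with the smallest residual.
        residuals = np.stack([np.abs(y - (a * x + b)) for a, b in lines])
        labels = residuals.argmin(axis=0)
    return lines, labels

# Synthetic data drawn from two noisy lines.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.where(rng.random(200) < 0.5, 2.0 * x + 1.0, -1.0 * x + 8.0) + rng.normal(0, 0.3, 200)
lines, _ = fit_k_lines(x, y, k=2)
print([(round(a, 2), round(b, 2)) for a, b in lines])   # roughly (2, 1) and (-1, 8), in some order
```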

17.
It is assumed that a host processor computes the corner coordinates of surfaces and outputs these sequentially, in ranked order, to the components described in an OCCAM program. The data is precomputed and stored in a sequential file. A scheduler controls the activity of a number of zone management processors (ZMPs), all running in parallel, and a special memory buffer. Each ZMP handles only one surface at a time. A processor can pick up a new surface for display when the previous surface has been completed. Only one ZMP can write into a given raster scanline at one time; others may be writing into the same column of other lines at the same time. Hidden surface elimination is achieved by processing the surfaces in an order ranked on distance from the viewing point. This ranking is done in the host processor. The ranked data is held on a file, which is read sequentially in 512-byte blocks. The data has been previously computed and stored as a sequence of double-byte integers in the required order for a series of picture frames, one frame per 512-byte block. The occam implementation on the Apple II europlus running under UCSD version 4 is very slow. It is postulated that an implementation using separate occam processor hardware units for each appropriate process would run in real time. There is considerable communication between the processors. The activity of each processor is generally sequential and all the processors run in parallel. Comments are made about some of the problems and advantages of programming in occam in an appendix.

18.
We present teraflop-scale calculations of biomolecular electrostatics enabled by the combination of algorithmic and hardware acceleration. The algorithmic acceleration is achieved with the fast multipole method (FMM) in conjunction with a boundary element method (BEM) formulation of the continuum electrostatic model, as well as the BIBEE approximation to BEM. The hardware acceleration is achieved through graphics processors (GPUs). We demonstrate the power of our algorithms and software for the calculation of the electrostatic interactions between biological molecules in solution. The applications demonstrated include the electrostatics of protein–drug binding and several multi-million atom systems consisting of hundreds to thousands of copies of lysozyme molecules. The parallel scalability of the software was studied in a cluster at the Nagasaki Advanced Computing Center, using 128 nodes, each with 4 GPUs. Delicate tuning has resulted in strong scaling with parallel efficiency of 0.8 for 256 and 0.5 for 512 GPUs. The largest application run, with over 20 million atoms and one billion unknowns, required only one minute on 512 GPUs. We are currently adapting our BEM software to solve the linearized Poisson–Boltzmann equation for dilute ionic solutions, and it is also designed to be flexible enough to be extended for a variety of integral equation problems, ranging from Poisson problems to Helmholtz problems in electromagnetics and acoustics to high Reynolds number flow.
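
For reference, the strong-scaling parallel efficiency quoted above is the usual ratio E(N) = T_ref * N_ref / (T_N * N) relative to a baseline run. The short example below shows that bookkeeping with made-up timings; they are not the Nagasaki measurements.

```python
def strong_scaling_efficiency(timings, ref_gpus):
    """timings: {gpu_count: wall_time_seconds}; efficiency is relative to the ref_gpus run."""
    t_ref = timings[ref_gpus]
    return {n: (t_ref * ref_gpus) / (t * n) for n, t in sorted(timings.items())}

# Hypothetical wall-clock times for one solve (seconds); only the trend is meaningful.
timings = {64: 480.0, 128: 250.0, 256: 150.0, 512: 120.0}
for n, eff in strong_scaling_efficiency(timings, ref_gpus=64).items():
    print(f"{n:4d} GPUs: efficiency {eff:.2f}")
```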

19.
In this article, we present the formal verification of a Common Lisp implementation of Buchberger’s algorithm for computing Gröbner bases of polynomial ideals. This work is carried out in ACL2, a system which provides an integrated environment where programming (in a pure functional subset of Common Lisp) and formal verification of programs, with the assistance of a theorem prover, are possible. Our implementation is written in a real programming language and it is directly executable within the ACL2 system or any compliant Common Lisp system. We provide here snippets of real verified code, discuss the formalization details in depth, and present quantitative data about the proof effort.
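
To see the kind of object the verified program computes, here is the same task carried out with SymPy's groebner routine on a small ideal. This is a usage illustration in Python, not the ACL2/Common Lisp code discussed in the article, and the example polynomials are arbitrary.

```python
from sympy import groebner, symbols, expand

x, y, z = symbols("x y z")

# A small polynomial ideal; Buchberger's algorithm completes it to a Groebner basis.
F = [x**2 + y**2 + z**2 - 1, x - y, y - z]
G = groebner(F, x, y, z, order="lex")
print(list(G.exprs))                 # the reduced lex Groebner basis

# Ideal membership: a polynomial is in the ideal iff its remainder on division by G is zero.
p = expand((x - y) * (x + y) + z * (y - z))
quotients, remainder = G.reduce(p)
print(remainder == 0)                # True: p lies in the ideal generated by F
```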

20.
We consider the problem MAX CSP over multi-valued domains, with variables ranging over sets of size s_i ≤ s and constraints involving k_j ≤ k variables. We study two algorithms with approximation ratios A and B, respectively, so we obtain a solution with approximation ratio max(A, B). The first algorithm is based on the linear programming algorithm of Serna, Trevisan, and Xhafa [Proc. 15th Annual Symp. on Theoret. Aspects of Comput. Sci., 1998, pp. 488-498] and gives a ratio A which is bounded below by s^{1−k}. For k=2, our bound in terms of the individual set sizes is the minimum over all constraints involving two variables of …, where s_1 and s_2 are the set sizes for the two variables. We then give a simple combinatorial algorithm which has approximation ratio B, with B > A/e. The bound is greater than s^{1−k}/e in general, and greater than s^{1−k}(1−(s−1)/(2(k−1))) for s ≤ k−1, thus close to the s^{1−k} linear programming bound for large k. For k=2, the bound is … if s=2, and 1/(2(s−1)) if s ≥ 3, and in general greater than the minimum of 1/(4s_1) + 1/(4s_2) over constraints with set sizes s_1 and s_2, thus within a factor of two of the linear programming bound. For the case of k=2 and s=2 we prove an integrality gap of …. This shows that our analysis is tight for any method that uses the linear programming upper bound.
