Similar Literature
 20 similar documents found.
1.
2.
We present the Mathematica package HypExp, which allows hypergeometric functions to be expanded around integer parameters to arbitrary order. To this end, we apply two methods: the first is based on an integral representation, the second on the nested sums approach. The expansion works both for symbolic argument z and for unit argument. We have also implemented new classes of integrals that appear in the first method and that are, in part, not yet known to Mathematica.
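HypExp itself is Mathematica code; as a language-agnostic illustration of what such an expansion means, the following Python sketch (using mpmath, entirely separate from HypExp; the chosen test function and argument are illustrative assumptions) numerically extracts the first few coefficients of the expansion of 2F1(eps, 1; 2; z) around eps = 0 at a fixed argument z:

    # Illustrative sketch only (not part of HypExp): numerically expand
    # 2F1(eps, 1; 2; z) around eps = 0 at fixed z via mpmath's Taylor helper.
    from mpmath import mp, hyp2f1, taylor

    mp.dps = 30                          # working precision
    z = mp.mpf('0.3')                    # fixed value of the symbolic argument

    f = lambda eps: hyp2f1(eps, 1, 2, z)
    coeffs = taylor(f, 0, 3)             # coefficients of eps^0 .. eps^3

    for n, c in enumerate(coeffs):
        print(f"eps^{n}: {c}")
    # The eps^0 coefficient must be exactly 1, since 2F1(0, 1; 2; z) = 1.

A package such as HypExp would return these coefficients in closed form; the numerical Taylor coefficients above only serve as a cross-check.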

Program summary

Title of program: HypExp
Catalogue identifier: ADXF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXF_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licence: none
Computers: Computers running Mathematica under Linux or Windows
Operating system: Linux, Windows
Program language: Mathematica
No. of bytes in distributed program, including test data, etc.: 739 410
No. of lines in distributed program, including test data, etc.: 89 747
Distribution format: tar.gz
Other package needed: the package HPL, included in the distribution
External file required: none
Nature of the physical problem: Expansion of hypergeometric functions around integer-valued parameters. These are needed in the context of dimensional regularization for loop and phase space integrals.
Method of solution: Algebraic manipulation of nested sums and integral representation.
Restrictions on complexity of the problem: Limited by the memory available.
Typical running time: Strongly depending on the problem and the availability of libraries.

3.
Hidden semi-Markov models are a generalization of the well-known hidden Markov model. They allow for greater flexibility in the sojourn time distributions, which implicitly follow a geometric distribution in the case of a hidden Markov chain. The aim of this paper is to describe hsmm, a new software package for the statistical computing environment R. This package allows for the simulation and maximum likelihood estimation of hidden semi-Markov models. The implemented Expectation Maximization algorithm assumes that the time spent in the last visited state is subject to right-censoring. It is therefore not subject to the common limitation that the last visited state must terminate at the last observation. Additionally, hsmm permits the user to make inferences about the underlying state sequence via the Viterbi algorithm and smoothing probabilities.
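hsmm is an R package; the sketch below is a Python illustration of the Viterbi step mentioned above for an ordinary hidden Markov model (the two-state parameters are invented here). In the semi-Markov case the same recursion additionally maximizes over sojourn lengths using the explicit duration distribution, which hsmm handles internally.

    import numpy as np

    # Toy Viterbi decoding for an ordinary HMM (hedged sketch; hsmm generalizes
    # this to explicit sojourn-time distributions).
    def viterbi(log_pi, log_A, log_B, obs):
        """log_pi: (S,) initial log-probs; log_A: (S,S) transition log-probs;
        log_B: (S,K) emission log-probs; obs: sequence of symbol indices."""
        S, T = len(log_pi), len(obs)
        delta = np.full((T, S), -np.inf)       # best log-prob ending in each state
        psi = np.zeros((T, S), dtype=int)      # back-pointers
        delta[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_A      # (previous, current)
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]

    # Hypothetical two-state example.
    pi = np.log([0.6, 0.4])
    A = np.log([[0.7, 0.3], [0.2, 0.8]])
    B = np.log([[0.9, 0.1], [0.3, 0.7]])
    print(viterbi(pi, A, B, [0, 0, 1, 1, 1]))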

4.
In this article, we describe a new algorithm for the expansion of hypergeometric functions about half-integer parameters. The implementation of this algorithm for certain classes of hypergeometric functions in the already existing Mathematica package HypExp is described. Examples of applications in Feynman diagrams with up to four loops are given.

New version program summary

Program title: HypExp 2
Catalogue identifier: ADXF_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXF_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 106 401
No. of bytes in distributed program, including test data, etc.: 2 668 729
Distribution format: tar.gz
Programming language: Mathematica
Computer: Computers running Mathematica
Operating system: Linux, Windows, Mac
RAM: Depending on the complexity of the problem
Supplementary material: Library files which contain the expansion of certain hypergeometric functions around their parameters are available
Classification: 4.7, 5
Does the new version supersede the previous version?: Yes
Nature of problem: Expansion of hypergeometric functions about parameters that are integer and/or half-integer valued.
Solution method: New algorithm implemented in Mathematica.
Reasons for new version: Expansion about half-integer parameters.
Summary of revisions: Ability to expand about half-integer valued parameters added.
Restrictions: The classes of hypergeometric functions with half-integer parameters that can be expanded are listed below.
Additional comments: The package uses the package HPL included in the distribution.
Running time: Depending on the expansion.

5.
Let f be a univariate polynomial with real coefficients, f ∈ R[X]. Subdivision algorithms based on algebraic techniques (e.g., Sturm or Descartes methods) are widely used for isolating the real roots of f in a given interval. In this paper, we consider a simple subdivision algorithm whose primitives are purely numerical (e.g., function evaluation). The complexity of this algorithm is adaptive because the algorithm makes decisions based on local data. The complexity analysis of adaptive algorithms (and this algorithm in particular) is a new challenge for computer science. In this paper, we compute the size of the subdivision tree for the SqFreeEVAL algorithm. The SqFreeEVAL algorithm is an evaluation-based numerical algorithm which is well known in several communities. The algorithm itself is simple, but prior attempts to compute its complexity have proven to be quite technical and have yielded sub-optimal results. Our main result is a simple O(d(L + ln d)) bound on the size of the subdivision tree for the SqFreeEVAL algorithm on the benchmark problem of isolating all real roots of an integer polynomial f of degree d whose coefficients can be written with at most L bits. Our proof uses two amortization-based techniques: first, we use the algebraic amortization technique of the standard Mahler-Davenport root bounds to interpret the integral in terms of d and L. Second, we use a continuous amortization technique based on an integral to bound the size of the subdivision tree. This paper is the first to use the novel analysis technique of continuous amortization to derive state-of-the-art complexity bounds.
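A minimal Python sketch of the evaluation-based tests that drive an EVAL-style subdivision (this is not the authors' SqFreeEVAL implementation; the test form and interface are standard textbook choices used here for illustration): an interval is discarded when the C0 test certifies that f has no root there, accepted when the C1 test applied to f' certifies at most one root and a sign change confirms exactly one, and bisected otherwise.

    import math
    import numpy as np

    # Hedged sketch of an EVAL-style subdivision root isolator for a square-free
    # polynomial. C0(f, m, r): |f(m)| > sum_{k>=1} |f^(k)(m)| r^k / k!  => no root.
    def tail(poly, m, r):
        """sum_{k>=1} |poly^(k)(m)| r^k / k! for a numpy.poly1d polynomial."""
        s, p, k = 0.0, np.polyder(poly), 1
        while p.order > 0 or p.coeffs[0] != 0:
            s += abs(np.polyval(p, m)) * r ** k / math.factorial(k)
            if p.order == 0:
                break
            p, k = np.polyder(p), k + 1
        return s

    def isolate(poly, a, b, out):
        m, r = (a + b) / 2, (b - a) / 2
        if abs(np.polyval(poly, m)) > tail(poly, m, r):
            return                                   # C0: no root in [a, b]
        dp = np.polyder(poly)
        if abs(np.polyval(dp, m)) > tail(dp, m, r):
            # C1: at most one root; a sign change certifies exactly one
            if np.polyval(poly, a) * np.polyval(poly, b) < 0:
                out.append((a, b))
            return
        isolate(poly, a, m, out)                     # otherwise bisect
        isolate(poly, m, b, out)

    f = np.poly1d([1.0, 0.0, -2.0])                  # x^2 - 2, roots at +-sqrt(2)
    boxes = []
    isolate(f, -3.0, 3.0, boxes)
    print(boxes)                                     # two isolating intervals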

6.
We present the new version of the Mathematica code FIRE and the ideas behind it. It can be applied together with the recently developed code LiteRed by Lee in order to provide an integration-by-parts reduction to master integrals for quite complicated families of Feynman integrals. As an example, we consider four-loop massless propagator integrals, for which LiteRed provides reduction rules and FIRE assists in applying them. As a by-product one obtains a four-loop variant of the well-known algorithm MINCER. The existence of these explicit reduction rules shows, in the sense of a mathematical theorem, that any four-loop massless propagator integral can be reduced to a linear combination of master integrals. We also describe various algebraic ways to find additional relations between master integrals associated with several families of Feynman integrals.

7.
The objective of this paper is to develop a robust maximum likelihood estimation (MLE) procedure for the stochastic state space model via the expectation maximisation algorithm, in order to cope with observation outliers. Two types of outliers and their influence are studied in this paper: the additive outlier (AO) and the innovative outlier (IO). Owing to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement, with weights estimated from the data; however, it remains sensitive to IO and to patches of AO outliers. The TMLE, on the other hand, reduces to a combinatorial optimisation problem and is hard to implement, but it is robust to both types of outliers considered here. To overcome this difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation study shows the efficiency of the proposed algorithms.
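A toy Python illustration of the trimming idea behind the TMLE, on a simple Gaussian location model rather than the paper's state-space model (data, trimming fraction and iteration count are assumptions): the estimate is refitted on the h observations with the best likelihood contributions, so a few gross outliers cannot dominate.

    import numpy as np

    # Hedged toy sketch of trimmed maximum likelihood estimation (TMLE idea):
    # keep only the h observations with the smallest negative log-likelihood
    # contributions at the current estimate, re-estimate, and iterate.
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(5.0, 1.0, 95),
                        rng.normal(50.0, 1.0, 5)])   # 5 AO-type outliers

    def trimmed_mean_mle(y, trim_frac=0.1, iters=20):
        h = int(np.ceil((1.0 - trim_frac) * len(y)))
        mu = np.median(y)                       # robust starting value
        for _ in range(iters):
            nll = 0.5 * (y - mu) ** 2           # Gaussian NLL up to constants
            keep = np.argsort(nll)[:h]          # h best-fitting observations
            mu = y[keep].mean()                 # MLE on the trimmed sample
        return mu

    print("plain MLE (mean):", y.mean())        # pulled toward the outliers
    print("trimmed MLE     :", trimmed_mean_mle(y))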

8.
There exist algorithms, also called “fast” algorithms, which exploit the special structure of Toeplitz matrices and thereby allow, e.g., a linear system of equations to be solved in O(n²) flops. However, some implementations of classical algorithms that do not use this structure (O(n³) flops) greatly reduce the time to solution when several cores are available. That is why it is necessary to adapt “fast” algorithms so that they do not lose the benefits of new hardware and software. In this work, we propose a new approach to the Generalized Schur Algorithm, a well-known algorithm for the solution of Toeplitz systems, so that it works on a Block–Toeplitz matrix. Our algorithm is based on matrix–matrix multiplications, thus allowing an efficient implementation of this operation to be exploited where one exists. Our algorithm also makes use of the thread-level parallelism offered by multicore processors to decrease execution time.
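For orientation, the sketch below contrasts a classical structure-exploiting O(n²) Toeplitz solve with a generic O(n³) dense solve in Python/SciPy; this is the scalar Levinson-type baseline, not the blocked Generalized Schur algorithm proposed in the paper.

    import numpy as np
    from scipy.linalg import solve_toeplitz, toeplitz, solve

    # Hedged illustration: a structure-exploiting Toeplitz solve versus a generic
    # dense solve. Problem data are random and made diagonally dominant.
    n = 500
    rng = np.random.default_rng(1)
    c = rng.standard_normal(n); c[0] += n         # first column
    r = c.copy()                                  # symmetric Toeplitz: row = column
    b = rng.standard_normal(n)

    x_fast = solve_toeplitz((c, r), b)            # Levinson-type fast solver
    x_dense = solve(toeplitz(c, r), b)            # generic dense LU solve
    print(np.max(np.abs(x_fast - x_dense)))       # agreement up to rounding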

9.
Although the problem of data server placement in parallel and distributed systems has been studied extensively, most of the existing work assumes there is no competition between servers; hence, the goal is to minimize read, update and storage cost. In this paper, we study the server placement problem in which a new server has to compete with existing servers for user requests. Therefore, in addition to minimizing cost, we also need to maximize the benefit of building a new server. Our major results include three parts. First, for tree-structured systems, we propose an O(|V|³k) time dynamic programming algorithm to find the optimal placement of k extra servers that maximizes the benefit in a tree with |V| nodes. We also propose an O(|V|³) time dynamic programming algorithm to find the optimal placement of extra servers that maximizes the benefit, without any constraint on the number of extra servers. Second, for general connected graphs, we prove that the server placement problems are NP-complete, and present three greedy heuristic algorithms, called Greedy Add, Greedy Remove and Greedy Add-Remove, to solve them. Third, we show that if the number of requests a server can handle (i.e., the server capacity) is bounded, the server placement problem is NP-complete even for tree networks. We then derive a variation of the same set of greedy heuristic algorithms, with consideration of the server capacity constraint, to solve the problem. Our experimental results demonstrate that the greedy algorithms achieve good results when compared with the upper bounds found by a linear programming algorithm. Greedy Add performs best in the unconstrained model, yielding a benefit within 12% of the theoretical upper bound on average. For the constrained model, Greedy Remove performs best for smaller network sizes, while Greedy Add-Remove performs best for larger network sizes. On average, the heuristic algorithms yield a benefit within 13% of the theoretical upper bound in the constrained model.
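A minimal sketch of the Greedy Add idea (the benefit function, node set and tie-breaking below are invented placeholders, not the paper's request/cost model): repeatedly place the next server at the node with the largest marginal benefit until k servers are placed or no candidate improves the objective.

    # Hedged sketch of a Greedy Add placement heuristic. The benefit() function
    # is a hypothetical stand-in for the paper's request/cost model.
    def greedy_add(nodes, k, benefit):
        """Pick up to k new server locations, one at a time, each maximizing the
        marginal gain in benefit over the current placement."""
        placed = set()
        for _ in range(k):
            best_node, best_gain = None, 0.0
            for v in nodes - placed:
                gain = benefit(placed | {v}) - benefit(placed)
                if gain > best_gain:
                    best_node, best_gain = v, gain
            if best_node is None:              # no candidate improves the objective
                break
            placed.add(best_node)
        return placed

    # Toy benefit: each node "captures" a fixed demand, with diminishing returns.
    demand = {"a": 5.0, "b": 3.0, "c": 3.0, "d": 1.0}
    benefit = lambda S: sum(demand[v] for v in S) - 0.5 * max(len(S) - 1, 0)
    print(greedy_add(set(demand), 2, benefit))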

10.
Current publicly available computer programs calculate the spectrum and couplings of the minimal supersymmetric standard model under the assumption of R-parity conservation. Here, we describe an extension to the SOFTSUSY program which includes R-parity violating effects. The user provides a theoretical boundary condition upon the high-scale supersymmetry breaking R-parity violating couplings. Successful radiative electroweak symmetry breaking, electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual to the R-parity violating mode of the program, detailing the approximations and conventions used.

Program summary

Program title: SOFTSUSY v3.0
Catalogue identifier: ADPM_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPM_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 75 927
No. of bytes in distributed program, including test data, etc.: 570 916
Distribution format: tar.gz
Programming language: C++, Fortran
Computer: Personal computer
Operating system: Tested on Linux 4.x
Word size: 32 bits
Classification: 11.6
Catalogue identifier of previous version: ADPM_v1_0
Journal reference of previous version: Comput. Phys. Comm. 143 (2002) 305
Does the new version supersede the previous version?: Yes
Nature of problem: Calculating supersymmetric particle spectrum and mixing parameters in the R-parity violating minimal supersymmetric standard model. The solution to the renormalisation group equations must be consistent with a high-scale boundary condition on supersymmetry breaking parameters and Rp parameters, as well as a weak-scale boundary condition on gauge couplings, Yukawa couplings and the Higgs potential parameters.
Solution method: Nested iterative algorithm.
Reasons for new version: This is an extension to the SOFTSUSY program which includes R-parity violating effects. The user provides a theoretical boundary condition upon the high-scale supersymmetry breaking R-parity violating couplings. Successful radiative electroweak symmetry breaking, electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. The paper serves as a manual to the R-parity violating mode of the program, detailing the approximations and conventions used.
Restrictions: SOFTSUSY3.0 will provide a solution only in the perturbative regime and it assumes that all couplings of the MSSM are real (i.e. CP-conserving). The iterative SOFTSUSY algorithm will not converge if parameters are too close to a boundary of successful electroweak symmetry breaking, but a warning flag will alert the user to this fact.
Running time: A few seconds per parameter point.

11.
12.
In this paper we study a novel parametrization for state-space systems, namely separable least squares data driven local coordinates (slsDDLC). The parametrization by slsDDLC has recently been successfully applied to maximum likelihood estimation of linear dynamic systems. In a simulation study, the use of slsDDLC has led to numerical advantages in comparison to the use of more conventional parametrizations, including data driven local coordinates (DDLC). However, an analysis of properties of slsDDLC, which are relevant to identification, has not been performed up to now. In this paper, we provide insights into the geometry and topology of the slsDDLC construction and show a number of results which are important for actual identification, in particular for maximum likelihood estimation. We also prove that the separable least squares methodology is indeed guaranteed to be applicable to maximum likelihood estimation of linear dynamic systems in typical situations.

13.
We introduce the Mathematica package MT, which can be used to compute, both analytically and numerically, convolutions involving harmonic polylogarithms, polynomials or generalized functions. As applications, contributions to next-to-next-to-next-to-leading order Higgs boson production and the Drell–Yan process are discussed.
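MT is a Mathematica package; as a numerical cross-check of the kind of object it returns, the following Python sketch evaluates one Mellin convolution, (f ⊗ g)(x) = ∫_x^1 dz/z f(z) g(x/z), by quadrature (the convolution convention and the trivial test functions are assumptions made for illustration):

    import math
    from scipy.integrate import quad

    # Hedged sketch: numerical Mellin convolution (f ⊗ g)(x) = ∫_x^1 dz/z f(z) g(x/z),
    # the kind of integral a package like MT evaluates analytically in terms of
    # harmonic polylogarithms. The functions below are simple illustrative choices.
    def mellin_conv(f, g, x):
        val, _ = quad(lambda z: f(z) * g(x / z) / z, x, 1.0)
        return val

    f = lambda z: 1.0          # trivial test function
    g = lambda z: 1.0
    x = 0.3
    print(mellin_conv(f, g, x), -math.log(x))   # (1 ⊗ 1)(x) = -ln x, a quick sanity check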

14.
This paper addresses the problem of scheduling non-preemptive moldable tasks to minimize the stretch of the tasks in an online, non-clairvoyant setting. To the best of the authors’ knowledge, this problem has never been studied before. To tackle it, the sequential subproblem is first studied through the lens of approximation theory. An algorithm, called DASEDF, is proposed and, through simulations, is shown to outperform the first-come, first-served scheme. Furthermore, it is observed that machine availability is the key to obtaining good stretch values. The moldable task scheduling problem is then considered and, by leveraging the results from the sequential case, another algorithm, DBOS, is proposed to optimize the stretch while scheduling moldable tasks. This work is motivated by a task scheduling problem in the context of parallel short sequence mapping, which has important applications in biology and genetics. The proposed DBOS algorithm is evaluated both on synthetic data sets that represent short sequence mapping requests and on data sets generated from log files of real production clusters. The results show that the DBOS algorithm significantly outperforms the two state-of-the-art task scheduling algorithms on stretch optimization.
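For readers unfamiliar with the stretch metric (flow time divided by processing time), the short Python sketch below computes the maximum stretch of two schedules on a single machine, first-come-first-served versus shortest-processing-time-first; the job data are invented and neither order is the DASEDF or DBOS algorithm itself.

    # Hedged sketch: max stretch of a schedule on one machine, where
    # stretch = (completion time - release time) / processing time.
    def max_stretch(jobs, order):
        """jobs: list of (release, processing); order: indices giving run sequence."""
        t, worst = 0.0, 0.0
        for i in order:
            r, p = jobs[i]
            t = max(t, r) + p                   # start no earlier than release, then run
            worst = max(worst, (t - r) / p)
        return worst

    jobs = [(0.0, 10.0), (1.0, 1.0), (2.0, 1.0)]   # one long job, two short late arrivals
    fcfs = [0, 1, 2]
    spt = sorted(range(len(jobs)), key=lambda i: jobs[i][1])   # shortest first
    print("FCFS max stretch:", max_stretch(jobs, fcfs))
    print("SPT  max stretch:", max_stretch(jobs, spt))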

15.
We present the new version of the Mathematica package SARAH, which provides the same features for a non-supersymmetric model as previous versions did for supersymmetric models. This includes an easy and straightforward definition of the model, and the calculation of all vertices, mass matrices, tadpole equations, and self-energies. The two-loop renormalization group equations for a general gauge theory are now also included and have been validated against the independent Python code PyR@TE. Model files for FeynArts, CalcHep/CompHep, WHIZARD and the UFO format can be written, and source code for SPheno can be generated for the calculation of the mass spectrum, a set of precision observables, and the decay widths and branching ratios of all states. Furthermore, the new version includes routines to output model files for Vevacious for both supersymmetric and non-supersymmetric models. Global symmetries are also supported in this version, and by linking Susyno the handling of Lie groups has been improved and extended.

16.
In this paper, we present an algorithm for the systematic calculation of Lie point symmetries for fractional order differential equations (FDEs) using the method described by Buckwar & Luchko (1998) and Gazizov, Kasatkin & Lukashchuk (2007, 2009, 2011). The method has been generalised here to allow for the determination of symmetries for FDEs with n independent variables and for systems of partial FDEs. The algorithm has been implemented in the new MAPLE package FracSym (Jefferson and Carminati, 2013), which uses routines from the MAPLE symmetry packages DESOLVII (Vu, Jefferson and Carminati, 2012) and ASP (Jefferson and Carminati, 2013). We introduce FracSym by investigating the symmetries of a number of FDEs; specific forms of any arbitrary functions, which may extend the symmetry algebras, are also determined. For each of the FDEs discussed, selected invariant solutions are then presented.

17.
The qrnn package for R implements the quantile regression neural network, which is an artificial neural network extension of linear quantile regression. The model formulation follows from previous work on the estimation of censored regression quantiles. The result is a nonparametric, nonlinear model suitable for making probabilistic predictions of mixed discrete-continuous variables like precipitation amounts, wind speeds, or pollutant concentrations, as well as continuous variables. A differentiable approximation to the quantile regression error function is adopted so that gradient-based optimization algorithms can be used to estimate model parameters. Weight penalty and bootstrap aggregation methods are used to avoid overfitting. For convenience, functions for quantile-based probability density, cumulative distribution, and inverse cumulative distribution functions are also provided. Package functions are demonstrated on a simple precipitation downscaling task.
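The differentiable approximation mentioned above can be sketched as follows: the quantile (pinball) error function is smoothed near zero with a Huber-type function so that gradients exist everywhere. The exact form inside qrnn may differ; this Python sketch only illustrates the idea.

    import numpy as np

    # Hedged sketch: the quantile (pinball) loss and a Huber-smoothed version
    # that is differentiable at zero, enabling gradient-based fitting.
    def pinball(u, tau):
        return np.where(u >= 0, tau * u, (tau - 1.0) * u)

    def huber(u, eps):
        return np.where(np.abs(u) <= eps, u ** 2 / (2.0 * eps), np.abs(u) - eps / 2.0)

    def smooth_pinball(u, tau, eps=0.01):
        return np.where(u >= 0, tau * huber(u, eps), (1.0 - tau) * huber(u, eps))

    u = np.linspace(-1, 1, 5)
    print(pinball(u, 0.9))
    print(smooth_pinball(u, 0.9))   # close to pinball away from 0, smooth at 0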

18.
19.
The prevalence of dynamic-content web services, exemplified by search and online social networking, has motivated an increasingly wide web-facing front end. Horizontal scaling in the Cloud is favored for its elasticity, and a distributed design of load balancers is highly desirable. Existing algorithms with a centralized design, such as Join-the-Shortest-Queue (JSQ), incur high communication overhead for distributed dispatchers. We propose a novel class of algorithms called Join-Idle-Queue (JIQ) for distributed load balancing in large systems. Unlike algorithms such as Power-of-Two, the JIQ algorithm incurs no communication overhead between the dispatchers and processors at job arrivals. We analyze the JIQ algorithm in the large-system limit and find that it effectively results in a reduced system load, which produces a 30-fold reduction in queueing overhead compared to Power-of-Two at medium to high load. An extension of the basic JIQ algorithm deals with very high loads using only local information of server load.
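A toy Python simulation of the JIQ dispatch rule (slotted time, unit service times, a single dispatcher and Bernoulli arrivals are simplifying assumptions, not the paper's model): processors report to an I-queue when they become idle, and an arriving job goes to a queued idle processor if one exists and to a uniformly random processor otherwise, so no processors are polled at arrival time.

    import random

    # Hedged toy simulation of Join-Idle-Queue (JIQ) dispatch with one dispatcher.
    random.seed(0)
    N, T, load = 50, 2000, 0.9
    queues = [0] * N               # jobs waiting or in service at each processor
    i_queue = []                   # processors that reported themselves idle

    for t in range(T):
        # arrivals: each of N sources generates a job with probability `load`
        for _ in range(N):
            if random.random() < load:
                if i_queue:
                    target = i_queue.pop()        # use the I-queue: no polling needed
                else:
                    target = random.randrange(N)  # fall back to a random processor
                queues[target] += 1
        # service: each busy processor completes one job per slot
        for p in range(N):
            if queues[p] > 0:
                queues[p] -= 1
            if queues[p] == 0 and p not in i_queue:
                i_queue.append(p)                 # report idleness to the dispatcher

    print("mean backlog per processor:", sum(queues) / N)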

20.
The statistical analysis of mixed effects models for binary and count data is investigated. In the statistical computing environment R, there are a few packages that estimate models of this kind. The package lme4 is a de facto standard for mixed effects models. The package glmmML allows non-normal distributions in the specification of random intercepts. It also allows for the estimation of a fixed effects model, assuming that all cluster intercepts are distinct fixed parameters; moreover, a bootstrapping technique is implemented to replace asymptotic analysis. The random intercepts model is fitted using a maximum likelihood estimator with adaptive Gauss-Hermite and Laplace quadrature approximations of the likelihood function. The fixed effects model is fitted through a profiling approach, which is necessary when the number of clusters is large. In a simulation study, the two approaches are compared. The fixed effects model has severe bias when the mixed effects variance is positive and the number of clusters is large.
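The (non-adaptive) Gauss-Hermite step can be illustrated directly: the likelihood contribution of one cluster in a random-intercept logistic model is an integral over the normal random effect, approximated by a weighted sum over Hermite nodes. The Python sketch below uses toy data and plain rather than adaptive quadrature.

    import numpy as np
    from numpy.polynomial.hermite import hermgauss

    # Hedged sketch: approximate the likelihood contribution of one cluster in a
    # random-intercept logistic model,
    #   L_i = integral over b of  prod_j p(y_ij | x_ij, b) * Normal(b; 0, sigma^2),
    # with Gauss-Hermite quadrature via the substitution b = sqrt(2) * sigma * node.
    def cluster_loglik(y, x, beta, sigma, n_nodes=15):
        nodes, weights = hermgauss(n_nodes)
        total = 0.0
        for node, w in zip(nodes, weights):
            b = np.sqrt(2.0) * sigma * node
            eta = beta * x + b
            p = 1.0 / (1.0 + np.exp(-eta))
            total += w * np.prod(np.where(y == 1, p, 1.0 - p))
        return np.log(total / np.sqrt(np.pi))

    # Toy cluster: three binary responses with a single covariate.
    print(cluster_loglik(np.array([1, 0, 1]), np.array([0.2, -0.5, 1.0]),
                         beta=0.8, sigma=1.2))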
