Similar Documents
20 similar documents found (search time: 672 ms).
1.
We present the new version of the Mathematica code FIRE and the ideas behind it. It can be applied together with the recently developed code LiteRed by Lee to provide an integration-by-parts reduction to master integrals for quite complicated families of Feynman integrals. As an example, we consider four-loop massless propagator integrals, for which LiteRed provides reduction rules and FIRE assists in applying them; as a by-product, one obtains a four-loop variant of the well-known algorithm MINCER. The existence of these explicit reduction rules shows, in the sense of a mathematical theorem, that any four-loop massless propagator integral can be reduced to a linear combination of master integrals. We also describe various algebraic ways to find additional relations between master integrals associated with several families of Feynman integrals.
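To make the notion of integration-by-parts (IBP) reduction concrete, here is the textbook one-loop analogue of what FIRE automates at four loops; the derivation below is a standard illustration, not output from the package.

```latex
% One-loop massless propagator family:
%   F(a_1, a_2) = \int d^dk \, (k^2)^{-a_1} \, ((k-q)^2)^{-a_2}
% The IBP identity
%   \int d^dk \; \frac{\partial}{\partial k^\mu}
%     \left[ k^\mu (k^2)^{-a_1} ((k-q)^2)^{-a_2} \right] = 0
% expands into the reduction rule
(d - 2a_1 - a_2)\, F(a_1, a_2)
  \;=\; a_2 \left[ F(a_1 - 1,\, a_2 + 1) \;-\; q^2\, F(a_1,\, a_2 + 1) \right].
% Solving for F(a_1, a_2 + 1) lowers a_1 step by step; since
% F(0, n) is a scaleless integral that vanishes in dimensional
% regularization, every F(a_1, a_2) reduces to the single master
% integral F(1, 1).
```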

2.
We present an algorithm to compute the Wedderburn decomposition of semisimple group algebras, based on a computational approach to the Brauer–Witt theorem. The algorithm was implemented in the GAP package wedderga.
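As a reminder of what such a decomposition looks like, a standard textbook instance (not computed with wedderga here) is the rational group algebra of the symmetric group S_3:

```latex
% The three simple components correspond to the trivial, sign, and
% two-dimensional standard representations of S_3; the dimensions
% check out as 1 + 1 + 4 = 6 = |S_3|.
\mathbb{Q}S_3 \;\cong\; \mathbb{Q} \,\oplus\, \mathbb{Q} \,\oplus\, M_2(\mathbb{Q})
```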

3.
To broaden the scope of decision procedures for linear arithmetic, they have to be integrated into theorem provers. Successful approaches, e.g. in NQTHM or ACL2, suggest a close integration scheme that augments the decision procedures with lemmas about user-defined operators. We propose an even closer integration that provides feedback about the state of the decision procedure in terms of entailed formulas, for three reasons: first, to provide detailed proof objects for proof checking and archiving; second, to analyze and improve the interaction between the decision procedure and the theorem prover; third, to investigate whether communicating the state of a failed proof attempt to the human user through the comprehensible standard GUI mechanisms of the theorem prover can enhance the speculation of auxiliary lemmas.

4.
We introduce SubPolyhedra (SubPoly), a new family of numerical abstract domains to infer and propagate linear inequalities. The key insight is that the reduced product of linear equalities and intervals produces powerful yet scalable analyses. Abstract domains in SubPoly are as expressive as Polyhedra, but they drop some deductive power to achieve scalability. The cost/precision ratio of abstract domains in the SubPoly family can be fine-tuned according to the precision one wants to retain at join points and the algorithm used to infer tighter bounds on intervals. We implemented SubPoly on top of Clousot, a generic abstract interpreter for .Net. Clousot with SubPoly analyzes very large and complex code bases in a few minutes. SubPoly can efficiently capture linear inequalities among hundreds of variables, a result well beyond state-of-the-art implementations of Polyhedra.
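A minimal sketch of that reduced-product step, in Python: one linear equality is used to tighten the interval of each variable that occurs in it. The `Interval`/`reduce_equality` names are hypothetical illustrations, not Clousot's API.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float  # may be float('-inf')
    hi: float  # may be float('inf')

def reduce_equality(coeffs, const, box):
    """Tighten interval bounds using one linear equality
    sum_v coeffs[v] * v == const, given a box of per-variable intervals.
    For each variable x, solve x = (const - sum of the other terms) / coeffs[x]
    with interval arithmetic and intersect with x's current interval."""
    for x, cx in coeffs.items():
        lo, hi = const, const
        for y, cy in coeffs.items():
            if y == x:
                continue
            iy = box[y]
            terms = (cy * iy.lo, cy * iy.hi)  # cy * [lo, hi]
            lo -= max(terms)
            hi -= min(terms)
        lo, hi = lo / cx, hi / cx
        if lo > hi:                 # dividing by a negative coefficient
            lo, hi = hi, lo         # flips the bounds
        box[x] = Interval(max(box[x].lo, lo), min(box[x].hi, hi))
    return box

# Example: the equality x - y == 0 with x in [0, 5], y in [3, 10]
box = {"x": Interval(0, 5), "y": Interval(3, 10)}
reduce_equality({"x": 1, "y": -1}, 0, box)
print(box)  # both x and y tightened to [3, 5]
```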

5.
Esterel is a design language for the specification of real-time embedded systems. Based on the synchronous concurrency paradigm, its semantics describes execution as a succession of instants of computation. In this work, we consider the introduction of a new gotopause instruction in the language, which acts as a non-instantaneous jump instruction compatible with concurrency. It allows the programmer to activate state control points anywhere in the program, from which execution is resumed in the next instant. In order to provide the formal semantics of the extended language, we first define a state semantics of Esterel, which we prove observationally equivalent to the original logical behavioral semantics. Including gotopause in the state semantics is then straightforward. We sketch two key applications of our new primitive: a direct encoding of automata and a quasi-linear rewriting of programs that eliminates schizophrenic behaviors.

6.
We present PubKey-Wiki, a public key-based wiki group collaboration system. PubKey-Wiki allows users to authenticate themselves using public-key cryptography and to gain authorizations using digital certificates. With public key-based user authentication, users' passwords are neither sent across the network nor stored on the web server's host machine. Using digital certificates to authorize access to protected files facilitates delegation of authority and simpler access control list (ACL) management, and allows a user to pass authorizations on to other users without connecting to the wiki's server. The paper introduces a new approach to revocation in which revocation of certificates and revocation of public keys are handled separately and take effect immediately. The paper also introduces an algorithm, CertClosure, that computes the transitive closure of a set of certificates containing authorization information. When a user adds or removes a certificate from his certificate directory in PubKey-Wiki, PubKey-Wiki uses the CertClosure algorithm to derive authorization rules. PubKey-Wiki stores these authorization rules in a lookup table where they can be easily referenced. When a user tries to access a protected file, PubKey-Wiki looks up and applies the relevant authorization rules to make an access control decision efficiently.
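A toy sketch of the closure computation: a fixpoint over delegation certificates rooted at resource owners. The tuple layout and rule shape are illustrative assumptions of this sketch, not the paper's actual CertClosure data model.

```python
def cert_closure(certs, owners):
    """certs: set of (issuer, subject, resource, may_delegate) tuples.
    owners: {resource: owner} -- owners are the roots of authority.
    Returns the set of (user, resource) authorizations reachable by
    chains of delegation starting at the resource owner."""
    rules = set()  # derived (user, resource, may_delegate) rules
    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for issuer, subject, resource, may_delegate in certs:
            issuer_authorized = (
                owners.get(resource) == issuer            # the owner, or
                or (issuer, resource, True) in rules      # a delegatee who
            )                                             # may re-delegate
            if issuer_authorized:
                rule = (subject, resource, may_delegate)
                if rule not in rules:
                    rules.add(rule)
                    changed = True
    return {(u, r) for (u, r, _) in rules}

certs = {("alice", "bob", "notes.txt", True),
         ("bob", "carol", "notes.txt", False)}
print(cert_closure(certs, {"notes.txt": "alice"}))
# {('bob', 'notes.txt'), ('carol', 'notes.txt')}
```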

7.
Learning complex action models with quantifiers and logical implications
Automated planning requires action models, described in languages such as the Planning Domain Definition Language (PDDL), as input, but building action models from scratch is a very difficult and time-consuming task, even for experts, because it is hard to formally describe all the conditions and changes that must be reflected in the preconditions and effects of action models. In the past, there have been algorithms that can automatically learn simple action models from plan traces. However, there are many cases in the real world where we need more complicated expressions, based on universal and existential quantifiers as well as logical implications, to precisely describe the underlying mechanisms of the actions; such complex action models cannot be learned by many previous algorithms. In this article, we present a novel algorithm called LAMP (Learning Action Models from Plan traces) that learns action models with quantifiers and logical implications from a set of observed plan traces with only partially observed intermediate state information. The LAMP algorithm generates candidate formulas that are passed to a Markov Logic Network (MLN) for selecting the most likely subsets of candidate formulas. The selected subset of formulas is then transformed into learned action models, which can be tweaked by domain experts to arrive at the final models. We evaluate our approach in four planning domains to demonstrate that LAMP is effective in learning complex action models, and we analyze the human effort saved by using LAMP to help create action models through a user study. Finally, we apply LAMP to a real-world application domain in software requirements engineering to help engineers acquire software requirements, and show that LAMP can indeed help experts a great deal in real-world knowledge-engineering applications.

8.
We present GMC2, a software model checker for GCC, the open-source compiler from the Free Software Foundation (FSF). GMC2, which is part of the GMC static-analysis and model-checking tool suite for GCC under development at SUNY Stony Brook, can be seen as an extension of Monte Carlo model checking to the setting of concurrent, procedural programming languages. Monte Carlo model checking is a newly developed technique that utilizes the theory of geometric random variables, statistical hypothesis testing, and random sampling of lassos in Büchi automata to realize a one-sided-error, randomized algorithm for LTL model checking. To handle the function call/return mechanisms inherent in procedural languages such as C/C++, the version of Monte Carlo model checking implemented in GMC2 is optimized for pushdown-automaton models. Our experimental results demonstrate that this approach yields an efficient and scalable software model checker for GCC.
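The hypothesis-testing argument behind Monte Carlo model checking fixes the number of lasso samples in advance; the bound below is the standard geometric-trials calculation, stated here as a reminder rather than quoted from the paper.

```latex
% If each randomly sampled lasso is a counterexample with probability
% at least \varepsilon, then after M independent samples with no
% counterexample found, the probability of having missed one is
% (1 - \varepsilon)^M.  Requiring (1 - \varepsilon)^M \le \delta
% gives the sample bound
M \;=\; \left\lceil \frac{\ln \delta}{\ln(1 - \varepsilon)} \right\rceil .
% For example, \varepsilon = \delta = 0.01 requires M = 459 samples.
```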

9.
Datatype specialization is a form of subtyping that captures program invariants on data structures expressed using the convenient and intuitive datatype notation. Of particular interest are structural invariants such as well-formedness. We investigate the use of phantom types for describing datatype specializations. We show that it is possible to express statically checked specializations within the type system of Standard ML, and that this can be done without losing useful programming facilities such as pattern matching in case expressions.
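To convey the flavor of phantom types, here is a rough analogue in Python's static type system (the paper itself works in Standard ML): the phantom parameter exists only for the type checker and has no runtime representation. All names here are illustrative.

```python
from typing import Generic, TypeVar

State = TypeVar("State")

class Empty: ...       # phantom tags: never instantiated,
class NonEmpty: ...    # they exist only at the type level

class Stack(Generic[State]):
    def __init__(self, items: list[int]) -> None:
        self._items = items

def empty() -> "Stack[Empty]":
    return Stack([])

def push(s: "Stack[State]", x: int) -> "Stack[NonEmpty]":
    return Stack(s._items + [x])

def top(s: "Stack[NonEmpty]") -> int:
    # Safe by construction: the phantom tag guarantees an element.
    return s._items[-1]

s = push(empty(), 42)
print(top(s))        # ok: s has type Stack[NonEmpty]
# top(empty())       # rejected by mypy: Stack[Empty] != Stack[NonEmpty]
```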

10.
The prevalence of dynamic-content web services, exemplified by search and online social networking, has motivated an increasingly wide web-facing front end. Horizontal scaling in the Cloud is favored for its elasticity, and a distributed design of load balancers is highly desirable. Existing algorithms with a centralized design, such as Join-the-Shortest-Queue (JSQ), incur high communication overhead for distributed dispatchers. We propose a novel class of algorithms called Join-Idle-Queue (JIQ) for distributed load balancing in large systems. Unlike algorithms such as Power-of-Two, the JIQ algorithm incurs no communication overhead between the dispatchers and processors at job arrivals. We analyze the JIQ algorithm in the large-system limit and find that it effectively results in a reduced system load, which produces a 30-fold reduction in queueing overhead compared to Power-of-Two at medium to high load. An extension of the basic JIQ algorithm deals with very high loads using only local information about server load.
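A minimal simulation sketch of the JIQ idea: idle servers report themselves to one random dispatcher's idle queue, so dispatching a job requires no probing of servers at all. Class and method names are illustrative, not the paper's code.

```python
import random
from collections import deque

class JIQ:
    def __init__(self, n_dispatchers: int, n_servers: int):
        self.idle_queues = [deque() for _ in range(n_dispatchers)]
        self.queue_len = [0] * n_servers
        for s in range(n_servers):
            self.report_idle(s)   # initially every server is idle

    def report_idle(self, server: int) -> None:
        # An idle server enqueues itself at ONE random dispatcher;
        # this is the only load-balancing communication that occurs.
        random.choice(self.idle_queues).append(server)

    def dispatch(self) -> int:
        # Job arrives at a random dispatcher: use a known idle server
        # if there is one, otherwise fall back to a random server.
        q = random.choice(self.idle_queues)
        server = q.popleft() if q else random.randrange(len(self.queue_len))
        self.queue_len[server] += 1
        return server

    def complete(self, server: int) -> None:
        self.queue_len[server] -= 1
        if self.queue_len[server] == 0:
            self.report_idle(server)
```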

11.
We present a new software tool called CDN (Collaborative Data Network) for sharing and querying clinical documents modeled using the HL7 v3 standard (e.g., Clinical Document Architecture (CDA), Continuity of Care Document (CCD)). Similar to the caBIG initiative, CDN aims to foster innovations in cancer treatment and diagnosis through large-scale sharing of clinical data; we focus on cancer because it is the second leading cause of death in the US. CDN is based on the synergistic combination of peer-to-peer technology with XML and XQuery. Using CDN, a user can pose both structured queries and keyword queries on the HL7 v3 documents hosted by data providers. CDN is unique in its design: it supports location-oblivious queries in a large-scale network, wherein a user does not explicitly provide the location of the data for a query; a location service in CDN discovers data of interest in the network at query time. CDN uses standard cryptographic techniques to provide security to data providers and protect the privacy of patients. Using CDN, a user can pose clinical queries pertaining to cancer that contain aggregations and joins across data hosted by multiple data providers. CDN is implemented with open-source software for web application development and XML query processing. We ran CDN in a distributed environment using Amazon EC2 as a testbed and report its performance on real and synthetic datasets of discharge summaries, showing that CDN can achieve good performance in a setup with a large number of data providers and documents.

12.
Although the problem of data server placement in parallel and distributed systems has been studied extensively, most of the existing work assumes there is no competition between servers; hence, the goal is to minimize read, update, and storage cost. In this paper, we study the server placement problem in which a new server has to compete with existing servers for user requests, so in addition to minimizing cost we also need to maximize the benefit of building a new server. Our major results come in three parts. First, for tree-structured systems, we propose an O(|V|³k)-time dynamic programming algorithm to find the optimal placement of k extra servers that maximizes the benefit in a tree with |V| nodes, and an O(|V|³)-time dynamic programming algorithm for the same problem without any constraint on the number of extra servers. Second, for general connected graphs, we prove that the server placement problems are NP-complete, and present three greedy heuristic algorithms, called Greedy Add, Greedy Remove, and Greedy Add-Remove, to solve them. Third, we show that if the number of requests a server can handle (i.e., the server capacity) is bounded, the server placement problem is NP-complete even for tree networks; we then derive a variation of the same set of greedy heuristics, taking the server capacity constraint into account, to solve the problem. Our experimental results demonstrate that the greedy algorithms achieve good results when compared with the upper bounds found by a linear programming algorithm. Greedy Add performs best in the unconstrained model, yielding a benefit within 12% of the theoretical upper bound on average. For the constrained model, Greedy Remove performs best for smaller network sizes, while Greedy Add-Remove performs best for larger network sizes; on average, the heuristic algorithms yield a benefit within 13% of the theoretical upper bound in the constrained model.
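For the general-graph case, the shape of a heuristic like Greedy Add is easy to sketch: repeatedly place the candidate server with the largest marginal benefit. The sketch below is a generic rendering under an assumed benefit() oracle, not the paper's implementation.

```python
def greedy_add(candidates, k, benefit):
    """candidates: set of possible server locations; k: number of
    extra servers to place; benefit(placed): total benefit of a
    placement set (the cost/benefit model of the problem at hand)."""
    placed = set()
    for _ in range(k):
        best, best_gain = None, 0
        for v in candidates - placed:
            # marginal benefit of adding v to the current placement
            gain = benefit(placed | {v}) - benefit(placed)
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:      # no location adds positive benefit
            break
        placed.add(best)
    return placed
```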

13.
We prove that entailment for RDF Schema (RDFS) is decidable, NP-complete, and in P if the target graph does not contain blank nodes. We show that the standard set of entailment rules for RDFS is incomplete and that this can be corrected by allowing blank nodes in predicate position. We define semantic extensions of RDFS that involve datatypes and a subset of the OWL vocabulary that includes the property-related vocabulary (e.g. FunctionalProperty), the comparisons (e.g. sameAs and differentFrom) and the value restrictions (e.g. allValuesFrom). These semantic extensions are in line with the ‘if-semantics’ of RDFS and weaker than the ‘iff-semantics’ of D-entailment and OWL (DL or Full). For these semantic extensions we present entailment rules, prove completeness results, prove that consistency is in P and that, just as for RDFS, entailment is NP-complete, and in P if the target graph does not contain blank nodes. There are no restrictions on use to obtain decidability: classes can be used as instances.
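A toy forward-chaining sketch of two of the standard RDFS entailment rules (rdfs9 and rdfs11, named as in the RDF Semantics specification); this sketch ignores blank nodes, which is precisely where the paper's completeness analysis gets subtle.

```python
SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

def rdfs_closure(triples):
    """Apply rdfs9/rdfs11 to a set of (subject, predicate, object)
    triples until no new triples are derivable."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in triples:
            for s2, p2, o2 in triples:
                # rdfs11: subClassOf is transitive
                if p == SUBCLASS and p2 == SUBCLASS and o == s2:
                    new.add((s, SUBCLASS, o2))
                # rdfs9: instances inherit superclass membership
                if p == TYPE and p2 == SUBCLASS and o == s2:
                    new.add((s, TYPE, o2))
        if new - triples:
            triples |= new
            changed = True
    return triples

g = {("ex:Cat", SUBCLASS, "ex:Mammal"),
     ("ex:Mammal", SUBCLASS, "ex:Animal"),
     ("ex:tom", TYPE, "ex:Cat")}
assert ("ex:tom", TYPE, "ex:Animal") in rdfs_closure(g)
```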

14.
Discrete classification problems abound in pattern recognition and data mining applications. One of the most common discrete rules is the discrete histogram rule. This paper presents exact formulas for the computation of the bias, variance, and RMS of the resubstitution and leave-one-out error estimators for the discrete histogram rule. We also describe an algorithm to compute the exact probability distribution of resubstitution and leave-one-out, as well as their deviations from the true error rate. Using a parametric Zipf model, we compute the exact performance of resubstitution and leave-one-out for varying expected true error, number of samples, and classifier complexity (number of bins). We compare this to approximate performance measures, computed by Monte Carlo sampling, of 10-repeated 4-fold cross-validation and the 0.632 bootstrap error estimator. Our results show that resubstitution is low-biased but much less variable than leave-one-out, and is effectively the superior of the two error estimators, provided classifier complexity is low. In addition, our results indicate that the overall performance of resubstitution, as measured by the RMS, can be substantially better than that of the 10-repeated 4-fold cross-validation estimator, and even comparable to the 0.632 bootstrap estimator, provided that classifier complexity is low and the expected error rates are moderate. In addition to the results discussed in the paper, we provide an extensive set of plots that can be accessed on a companion website, at the URL http://ee.tamu.edu/edward/exact_discrete.
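As a concrete reminder of the two estimators being analyzed, here is a small sketch for the discrete histogram rule (majority label per bin) on binary labels; the tie-breaking convention and data layout are assumptions of this illustration, not the paper's.

```python
from collections import Counter

def histogram_rule(train):
    """train: list of (bin, label) pairs with labels in {0, 1}.
    Returns a classifier predicting the majority label per bin
    (ties and unseen bins default to label 0)."""
    counts = Counter()
    for b, y in train:
        counts[b] += 1 if y == 1 else -1
    return lambda b: 1 if counts[b] > 0 else 0

def resubstitution_error(train):
    # Train and test on the same data: optimistically (low-)biased.
    clf = histogram_rule(train)
    return sum(clf(b) != y for b, y in train) / len(train)

def loo_error(train):
    # Hold each point out in turn: nearly unbiased but more variable.
    errs = 0
    for i in range(len(train)):
        clf = histogram_rule(train[:i] + train[i + 1:])
        b, y = train[i]
        errs += clf(b) != y
    return errs / len(train)

data = [(0, 1), (0, 1), (0, 0), (1, 0), (1, 1)]
print(resubstitution_error(data), loo_error(data))  # 0.4 vs 1.0
```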

15.
We present a software package that guesses formulae for sequences of, for example, rational numbers or rational functions, given the first few terms. We implement an algorithm due to Bernhard Beckermann and George Labahn, together with some enhancements to render our package efficient. We thereby extend and complement Christian Krattenthaler’s program Rate.m, the guessing parts of Bruno Salvy and Paul Zimmermann’s GFUN, the univariate case of Manuel Kauers’ Guess.m, and Manuel Kauers’ and Christoph Koutschan’s qGeneratingFunctions.m.
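In its simplest form, "guessing" means fitting an ansatz to the initial terms and verifying it on the rest. The sketch below does this for a constant-coefficient linear recurrence, a much smaller ansatz class than the Hermite–Padé machinery of Beckermann and Labahn that the package implements.

```python
from fractions import Fraction

def solve(rows, rhs):
    """Exact Gauss-Jordan elimination over the rationals;
    returns None if the system is singular."""
    n = len(rows)
    a = [[Fraction(x) for x in row] + [Fraction(b)]
         for row, b in zip(rows, rhs)]
    for col in range(n):
        piv = next((r for r in range(col, n) if a[r][col]), None)
        if piv is None:
            return None
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

def guess_recurrence(terms, order):
    """Guess a(n+order) = sum_j c_j * a(n+j) from the first windows,
    then verify the candidate against every remaining term."""
    rows = [terms[i:i + order] for i in range(order)]
    rhs = [terms[i + order] for i in range(order)]
    c = solve(rows, rhs)
    if c is None:
        return None
    ok = all(sum(cj * terms[i + j] for j, cj in enumerate(c))
             == terms[i + order]
             for i in range(len(terms) - order))
    return c if ok else None

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(guess_recurrence(fib, 2))  # [1, 1]: a(n+2) = a(n) + a(n+1)
```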

16.
This paper is an in-depth study of qualitative physical reasoning about one particular scenario: using a box to carry a collection of objects from one place to another. Specifically, we consider the plan plan1: “Load objects uCargo into box oBox one by one; carry oBox from location l1 to location l2.” We present qualitative constraints on the shape, starting position, and material properties of uCargo and oBox, and on the characteristics of the motion, that suffice to make it virtually certain that plan1 can be successfully executed. We develop a theory, consisting mostly of first-order statements together with two default rules, that supports an inference of the form “If conditions XYZ hold and the agent attempts to carry out plan1, then presumably he will succeed”. Our theory is elaboration tolerant in the sense that carrying out the analogous inference for carrying objects in boxes with lids, in boxes with small holes, or on trays can reuse much of the same knowledge. The theory integrates reasoning about continuous time, Euclidean space, the commonsense dynamics of solid objects, and the semantics of partially specified plans.

17.
Most large software applications rely on an external relational database for storing and managing persistent data. Typically, such applications interact with the database by first constructing strings that represent SQL statements and then submitting these for execution by the database engine. The fact that these statements are only checked for correctness at runtime is a source of many potential defects, including type and syntax errors and vulnerability to injection attacks. The AraRat system presented here offers a method for dealing with these difficulties by coercing the host C++ compiler to do the necessary checks of the generated strings. A library of templates and preprocessor directives is used to embed in C++ a little language representing an augmented relational algebra formalism. Type checking of this embedded language, carried out by our template library, assures, at compile time, the correctness and safety of the generated SQL strings. All SQL statements constructed by AraRat are guaranteed to be syntactically correct and type safe with respect to the database schema; moreover, AraRat statically ensures that the generated statements are immune to all injection attacks. The standard techniques of “expression templates” and “compile-time symbolic derivation” for compile-time representation of symbolic structures are enhanced in our system: we demonstrate the support of a type system and a symbol-table lookup over the symbolic structure. A key observation of this work is that type equivalence of instantiated nominally typed generics in C++ (as well as other languages, e.g., Java) is structural rather than nominal. This makes it possible to embed the structural type system, characteristic of persistent data management, in the nominal type system of C++. For some of its advanced features, AraRat relies on two small extensions to the standard C++ language: the typeof pseudo-operator and the __COUNTER__ preprocessor macro.

18.
This paper addresses the problem of scheduling non-preemptive moldable tasks to minimize the stretch of the tasks in an online, non-clairvoyant setting; to the best of the authors’ knowledge, this problem has never been studied before. To tackle it, the sequential subproblem is first studied through the lens of approximation theory. An algorithm called DASEDF is proposed and, through simulations, shown to outperform the first-come, first-served scheme; it is also observed that machine availability is the key to good stretch values. The moldable task scheduling problem is then considered and, by leveraging the results from the sequential case, another algorithm, DBOS, is proposed to optimize the stretch while scheduling moldable tasks. This work is motivated by a task scheduling problem in the context of parallel short-sequence mapping, which has important applications in biology and genetics. The proposed DBOS algorithm is evaluated both on synthetic data sets that represent short-sequence mapping requests and on data sets generated from log files of real production clusters. The results show that the DBOS algorithm significantly outperforms the two state-of-the-art task scheduling algorithms on stretch optimization.
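For reference, the stretch of a task is its slowdown relative to running alone on an otherwise empty system; the formula below is the standard definition, with notation chosen here rather than taken from the paper.

```latex
% For a task j with release time r_j, completion time C_j, and
% processing time p_j (for a moldable task, p_j depends on the
% number of processors allotted), the stretch is
s_j \;=\; \frac{C_j - r_j}{p_j} ,
% i.e. the flow time normalized by the task's own processing time.
```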

19.
We present the new version of the Mathematica package SARAH, which provides the same features for a non-supersymmetric model as previous versions do for supersymmetric models. This includes an easy and straightforward definition of the model and the calculation of all vertices, mass matrices, tadpole equations, and self-energies. The two-loop renormalization group equations for a general gauge theory are now also included and have been validated against the independent Python code PyR@TE. Model files for FeynArts, CalcHep/CompHep, WHIZARD, and the UFO format can be written, and source code for SPheno can be generated for the calculation of the mass spectrum, a set of precision observables, and the decay widths and branching ratios of all states. Furthermore, the new version includes routines to output model files for Vevacious, for both supersymmetric and non-supersymmetric models. Global symmetries are also supported in this version, and by linking Susyno the handling of Lie groups has been improved and extended.
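As a sanity check of what "renormalization group equations for a general gauge theory" means, the familiar one-loop gauge-coupling RGE is shown below, specialized to SU(3) with n_f Dirac fermions; this is textbook material, not a formula taken from SARAH's output.

```latex
% One-loop running of a gauge coupling g:
\mu \frac{dg}{d\mu} \;=\; -\,\frac{g^3}{16\pi^2}\, b_0 \;+\; O(g^5),
\qquad
b_0\big|_{SU(3),\; n_f \text{ Dirac fermions}} \;=\; 11 - \tfrac{2}{3}\, n_f .
% Codes such as SARAH and PyR@TE generate b_0 (and the two-loop
% coefficients) automatically from the field content of a model.
```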

20.
In this paper we study a novel parametrization for state-space systems, namely separable least squares data driven local coordinates (slsDDLC). The slsDDLC parametrization has recently been applied successfully to maximum likelihood estimation of linear dynamic systems; in a simulation study, its use led to numerical advantages over more conventional parametrizations, including data driven local coordinates (DDLC). However, an analysis of the properties of slsDDLC relevant to identification has not been performed until now. In this paper, we provide insights into the geometry and topology of the slsDDLC construction and show a number of results that are important for actual identification, in particular for maximum likelihood estimation. We also prove that the separable least squares methodology is indeed guaranteed to be applicable to maximum likelihood estimation of linear dynamic systems in typical situations.
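The separable least squares step that the parametrization exploits is the classical variable-projection idea: parameters entering linearly are eliminated in closed form. The notation below is generic, not the paper's.

```latex
% If the residual is linear in \beta, i.e.  y - \Phi(\theta)\beta,
% the inner minimization over \beta has the closed-form solution
\hat{\beta}(\theta) \;=\; \Phi(\theta)^{+}\, y
% (\Phi^{+} denoting the Moore-Penrose pseudoinverse), which reduces
% the problem to a nonlinear least squares problem in \theta alone:
\min_{\theta}\; \bigl\| \bigl( I - \Phi(\theta)\,\Phi(\theta)^{+} \bigr) y \bigr\|^2 .
```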

