Similar Documents
20 similar documents found (search time: 466 ms)
1.
The F-cardinality of ℱ (= F-card(ℱ)) is introduced, where F: ℝⁿ → ℝⁿ is a partial function and ℱ is a set of partial functions f: ℝⁿ → ℝⁿ. The F-cardinality yields a lower bound for the worst-case complexity of computing F if only functions f ∈ ℱ can be evaluated by the underlying abstract automaton without conditional jumps. This complexity bound is independent of the oracles available for the abstract machine. Thus it is shown that any automaton which can only apply the four basic arithmetic operations needs Ω(n log n) worst-case time to sort n numbers; this result holds even if conditional jumps with arbitrary conditions are possible. The main result of this paper is the following: Given a total function F: ℝⁿ → ℝⁿ and a natural number k, it is almost always possible to construct a set ℱ such that its F-cardinality has the value k; in addition, ℱ can be required to be closed under composition of functions f, g ∈ ℱ. Moreover, if F is continuous, then ℱ consists of continuous functions.

2.
Quality evaluations in optimization processes are frequently noisy. In particular, evolutionary algorithms have been shown to cope with such stochastic variations better than other optimization algorithms. So far, mostly additive noise models have been assumed for the analysis. However, we argue in this paper that this restriction must be relaxed for a large class of applied optimization problems. We suggest systematic noise as an alternative scenario, where the noise term is added to the objective parameters or to environmental parameters inside the fitness function. We thoroughly analyze the sphere function with systematic noise for the (μ/μ, λ) evolution strategy with global intermediate recombination. The progress rate formula and a measure for the efficiency of the evolutionary progress lead to a recommended ratio between μ and λ. Furthermore, analysis of the dynamics identifies limited regions of convergence dependent on the normalized noise strength and the normalized mutation strength. A residual localization error R∞ can be quantified, and a second μ-to-λ ratio is derived by minimizing R∞.
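To make the scenario concrete, here is a minimal sketch (not the paper's analysis) of a (μ/μ, λ)-ES with global intermediate recombination on the sphere function, where systematic noise is modeled by perturbing the object parameters inside the fitness evaluation; all parameter values (μ, λ, σ, ε) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sphere(x, eps):
    """Sphere fitness with systematic (actuator) noise added to the parameters."""
    return np.sum((x + eps * rng.standard_normal(x.shape)) ** 2)

def es_step(parent, mu, lam, sigma, eps):
    """One generation: lam offspring, select the mu best, recombine by averaging."""
    offspring = parent + sigma * rng.standard_normal((lam, parent.size))
    fitness = np.array([noisy_sphere(y, eps) for y in offspring])
    best = offspring[np.argsort(fitness)[:mu]]
    return best.mean(axis=0)          # global intermediate recombination

parent = np.ones(30)
for g in range(200):
    parent = es_step(parent, mu=3, lam=12, sigma=0.1, eps=0.05)
# progress stagnates at a residual distance from the optimum, as the paper predicts
print("residual distance:", np.linalg.norm(parent))
```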

3.
Learning to Play Chess Using Temporal Differences
Baxter, Jonathan; Tridgell, Andrew; Weaver, Lex. Machine Learning, 2000, 40(3): 243-263.
In this paper we present TDLEAF(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program KnightCap used TDLEAF(λ) to learn its evaluation function while playing on Internet chess servers. The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line play, rather than self-play. We also investigate whether TDLEAF(λ) can yield better results in the domain of backgammon, where TD(λ) has previously yielded striking success.
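For illustration, a minimal sketch of the TDLEAF(λ) weight update for a linear evaluation function v(s) = w · φ(s), applied to the principal-variation leaves of the positions searched in one game; the feature vectors, learning rate, and λ value are assumptions, not values from the paper.

```python
import numpy as np

def tdleaf_update(w, phi_leaves, alpha=0.01, lam=0.7):
    """phi_leaves: feature vectors of the PV leaf of each searched position."""
    v = phi_leaves @ w                      # leaf evaluations v_1 .. v_N
    d = v[1:] - v[:-1]                      # temporal differences d_t
    for t in range(len(d)):
        # lambda-discounted sum of future temporal differences
        g = sum(lam ** (j - t) * d[j] for j in range(t, len(d)))
        w = w + alpha * g * phi_leaves[t]   # gradient of v_t is phi_t (linear eval)
    return w
```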

4.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast. On a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation. In this representation variables are represented by pointers to a binder. The program can compile an Automath text into one big Automath single-line-style λ-term. It outputs such a term using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn are different from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web at the address .
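The binder-pointer representation can be sketched in a few lines: a variable node holds a direct pointer to the abstraction node that binds it, so no names or indices are needed. This is an illustration of the idea only, not aut's actual C data structures.

```python
# Graph representation of lambda-terms: variables point at their binders.
class Lam:
    def __init__(self):
        self.body = None          # filled in once the binder object exists

class Var:
    def __init__(self, binder):
        self.binder = binder      # pointer to the Lam node that binds this variable

class App:
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

# identity applied to identity: (\x. x) (\y. y)
f = Lam(); f.body = Var(f)
g = Lam(); g.body = Var(g)
term = App(f, g)
# alpha-equivalence needs no renaming: variables are compared by
# checking whether their binder pointers correspond.
```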

5.
A text is a triple τ = (λ, ρ1, ρ2) such that λ is a labeling function, and ρ1 and ρ2 are linear orders on the domain of λ; hence τ may be seen as a word (λ, ρ1) together with an additional linear order ρ2 on the domain of λ. The order ρ2 is used to give to the word (λ, ρ1) its individual hierarchical representation (syntactic structure), which may be a tree but may also be more general than a tree. In this paper we introduce context-free grammars for texts and investigate their basic properties. Since each text has its own individual structure, the role of such a grammar should be that of a definition of a pattern common to all individual texts. This leads to the notion of a shapely context-free text grammar, also investigated in this paper.
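For concreteness, a minimal sketch of a text as data: a labeling function over a finite domain together with two linear orders, each given as a permutation of the domain. The domain, labels, and orders below are invented examples, not taken from the paper.

```python
domain = [1, 2, 3, 4]
labeling = {1: 'a', 2: 'b', 3: 'a', 4: 'c'}   # the labeling function
rho1 = [1, 2, 3, 4]                            # first linear order: yields the word
rho2 = [2, 4, 1, 3]                            # second order: the hierarchical structure

word = ''.join(labeling[i] for i in rho1)      # the word (labeling, rho1)
print(word)                                    # -> "abac"
```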

6.
Continuation passing style (CPS) translations of typed λ-calculi have numerous applications. However, the range of these applications has been confined by the fact that CPS translations are known for non-dependent type systems only, thus excluding well-known systems like the calculus of constructions (CC) and the logical frameworks (LF). This paper presents techniques for CPS translating systems with dependent types, with an emphasis on pure type-theoretical applications.

In the first part of the paper we review several lines of work in which the need for CPS translations of dependent type systems has arisen, and discuss the difficulties involved with CPS translating such systems. One way of overcoming these difficulties is to work with so-called domain-free type systems. Thus, instead of Barendregt's λ-cube we shall consider the domain-free λ-cube, and instead of traditional pure type systems, we shall consider domain-free pure type systems.

We therefore begin the second part by reviewing the domain-free λ-cube, which includes domain-free versions of CC and LF, and then present CPS translations for all the systems of the domain-free λ-cube. We also introduce Direct Style (DS) (i.e., inverse CPS) translations for all the systems of the domain-free λ-cube; such DS translations, which have been used in a number of applications, were previously formulated for untyped and simply-typed languages only.

In the third part we review domain-free pure type systems and generalize the CPS translations of the domain-free λ-cube to a large class of domain-free pure type systems which includes most of the systems that appear in the literature, including those of the domain-free λ-cube. Many translations that appear in the literature arise as special cases of ours.

In the fourth part of the paper we present two approaches to CPS translations of traditional pure type systems. The first, indirect, technique lifts the CPS translation of domain-free pure type systems to the analogous class of traditional pure type systems by using results that relate derivations in domain-free and traditional pure type systems. The second, direct, approach translates derivations, requiring a certain order on derivations to be well-founded. Both techniques yield translations for most of the systems that appear in the literature, including those of Barendregt's λ-cube.
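As background for the translations discussed here, a minimal sketch of Plotkin's call-by-value CPS translation on untyped λ-terms, the base case that the paper generalizes to systems with dependent types; the tuple encoding of terms and the fresh-name scheme are assumptions of this sketch.

```python
import itertools

# fresh continuation/value names; assumed not to clash with source variables
fresh = (f"k{i}" for i in itertools.count())

def cps(t):
    """t is ('var', x) | ('lam', x, body) | ('app', f, a).
    [x]      = \k. k x
    [\x. M]  = \k. k (\x. [M])
    [M N]    = \k. [M] (\m. [N] (\n. m n k))
    """
    k = next(fresh)
    if t[0] == 'var':
        return ('lam', k, ('app', ('var', k), t))
    if t[0] == 'lam':
        _, x, body = t
        return ('lam', k, ('app', ('var', k), ('lam', x, cps(body))))
    _, f, a = t
    m, n = next(fresh), next(fresh)
    return ('lam', k,
            ('app', cps(f),
             ('lam', m,
              ('app', cps(a),
               ('lam', n,
                ('app', ('app', ('var', m), ('var', n)), ('var', k)))))))

print(cps(('lam', 'x', ('var', 'x'))))   # CPS translation of the identity
```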

7.
The method described in Ref. 1 does not always correctly establish the bonds between the variables. In fact, during the reduction to normal form of the λ-formula ((λt.y)(y y)), and of all those in which a variable that is free in the left subformula of an application occurs bound in the right subformula, this variable is wrongly considered as bound. To prevent this, it is necessary to modify the levels assigned to the formulas in the β-generation. We therefore give the correct β-generation statements and the correct algorithm of the β-generation. See Ref. 1.

8.
This paper is an informal introduction to the theory of types which uses a connective ∧ for the intersection of two types and a constant ω for a universal type, besides the usual connective → for function types. This theory was first devised in about 1977 by Coppo, Dezani and Sallé in the context of λ-calculus, and its main development has been by Coppo and Dezani and their collaborators in Turin. With suitable axioms and rules to assign types to λ-calculus terms, they obtained a system in which (i) the set of types given to a term does not change under β-conversion, (ii) some interesting sets of terms, for example the solvable terms and the terms with normal form, can be characterised exactly by the types of their members, and (iii) the type apparatus is not so complex as polymorphic systems with quantifier-containing types and therefore probably not so expensive to implement mechanically as these systems. There are in fact several variant systems with different detailed properties. This paper defines and motivates the simplest one from which the others are derived, and describes its most basic properties. No proofs are given but the motivation is shown by examples. A comprehensive bibliography is included.
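A standard folklore example (not quoted from the paper) of what the ∧/ω apparatus buys: the self-application λx.x x, which has no simple type, receives an intersection type, and ω types even unsolvable terms.

```latex
% Self-application typed with an intersection: both uses of x draw on
% different components of x's type sigma /\ (sigma -> tau).
\[
  \frac{x : \sigma \wedge (\sigma \to \tau) \vdash x : \sigma \to \tau
        \qquad
        x : \sigma \wedge (\sigma \to \tau) \vdash x : \sigma}
       {x : \sigma \wedge (\sigma \to \tau) \vdash x\,x : \tau}
  \qquad\Rightarrow\qquad
  \vdash \lambda x.\,x\,x : (\sigma \wedge (\sigma \to \tau)) \to \tau
\]
% The universal type types every term, e.g. the unsolvable term Omega:
% \vdash (\lambda x.\,x\,x)(\lambda x.\,x\,x) : \omega
```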

9.
A loss queueing system GI/G/m/0 is considered. Let a(x) be the p.d.f. of the interarrival intervals. Assume that this function behaves like cx^(α−1) for small x. Further, let B(x) be the d.f. of the service time and 1/μ be the mean service time. Conditions are derived for the light-traffic insensitivity of the loss probability to the form of B(x) as λ/μ → 0. In particular, the condition α = 1 is necessary. Estimates for the loss probability are obtained.

10.
We present a new definition of optimality intervals for the parametric right-hand side linear programming (parametric RHS LP) problem φ(λ) = min{c^T x | Ax = b + λb̄, x ≥ 0}. We then show that an optimality interval consists either of a breakpoint or of the open interval between two consecutive breakpoints of the continuous piecewise linear convex function φ(λ). As a consequence, the optimality intervals form a partition of the closed interval {λ : |φ(λ)| < ∞}. Based on these optimality intervals, we also introduce an algorithm for solving the parametric RHS LP problem which requires an LP solver as a subroutine. If a polynomial-time LP solver is used to implement this subroutine, we obtain a substantial improvement on the complexity of those parametric RHS LP instances which exhibit degeneracy. When the number of breakpoints of φ(λ) is polynomial in terms of the size of the parametric problem, we show that the latter can be solved in polynomial time.

This research was partially funded by the United States Navy-Office of Naval Research under Contract N00014-87-K-0202. Its financial support is gratefully acknowledged.
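A minimal sketch of how φ(λ) can be probed using an off-the-shelf LP solver as the subroutine mentioned above; the data A, b, b̄, c are toy values, and this grid evaluation only traces the piecewise linear function, it is not the paper's breakpoint algorithm.

```python
import numpy as np
from scipy.optimize import linprog

A    = np.array([[1.0, 1.0, 1.0]])   # toy equality constraints Ax = b + lam*bbar
b    = np.array([1.0])
bbar = np.array([1.0])
c    = np.array([1.0, 2.0, 3.0])

for lam in np.linspace(0.0, 2.0, 5):
    res = linprog(c, A_eq=A, b_eq=b + lam * bbar, bounds=[(0, None)] * 3)
    if res.success:
        print(f"lambda={lam:.2f}  phi(lambda)={res.fun:.4f}")
    else:
        print(f"lambda={lam:.2f}  infeasible")
```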

11.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work:
- Computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given. This problem is reduced to a one-dimensional optimization problem.
- Computing the MSE estimate. This function is defined at each point x as ∫ f(x) Pr(f | D) df. This problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate.
- Computing the pointwise uncertainty associated with the MSE solution. This problem is reduced to computing three one-dimensional integrals.
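A minimal sketch of the per-weight building block f_λ in a discrete setting: for a fixed λ, the regularized interpolant minimizes ||f − d||² + λ||D₂f||², where D₂ is a second-difference (smoothness) matrix, and is obtained by one linear solve. The data here is synthetic; the paper's contribution is then to average these f_λ over λ, weighted by their probabilities, rather than committing to a single "optimal" λ.

```python
import numpy as np

n = 50
x = np.linspace(0, 1, n)
d = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).standard_normal(n)

D2 = np.zeros((n - 2, n))                    # discrete second derivative
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

def f_lam(lam):
    # minimizer of ||f - d||^2 + lam * ||D2 f||^2: (I + lam * D2'D2) f = d
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d)

smooth = f_lam(100.0)       # heavily smoothed restoration
faithful = f_lam(0.1)       # restoration close to the noisy data
```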

12.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.

13.
The proofs of the Church–Rosser theorems for β, η, and βη reduction in untyped λ-calculus are formalized in Isabelle/HOL, an implementation of Higher Order Logic in the generic theorem prover Isabelle. For β-reduction, both the standard proof and Takahashi's are given and compared. All proofs are based on a general theory of commuting relations that supports an almost geometric style of reasoning about confluence diagrams.

14.
Reasoning about programs in continuation-passing style
Plotkin's λv-calculus for call-by-value programs is weaker than the λβη-calculus for the same programs in continuation-passing style (CPS). To identify the call-by-value axioms that correspond to βη on CPS terms, we define a new CPS transformation and an inverse mapping, both of which are interesting in their own right. Using the new CPS transformation, we determine the precise language of CPS terms closed under βη-transformations, as well as the call-by-value axioms that correspond to the so-called administrative βη-reductions on CPS terms. Using the inverse mapping, we map the remaining β and η equalities on CPS terms to axioms on call-by-value terms. On the pure (constant-free) set of λ-terms, the resulting set of axioms is equivalent to Moggi's computational λ-calculus. If the call-by-value language includes the control operators abort and call-with-current-continuation, the axioms are equivalent to an extension of Felleisen et al.'s λv-C-calculus and to the equational subtheory of Talcott's logic IOCC.

This article is a revised and extended version of the conference paper with the same title [42]. The technical report of the same title contains additional material. The authors were supported in part by NSF grant CCR 89-17022 and by Texas ATP grant 91-003604014.

15.
Let H be a separable Hilbert space. We consider the manifold M consisting of density operators ρ on H such that ρ^p is of trace class for some p ∈ (0, 1). We say σ ∈ M is nearby ρ if there exists C > 1 such that C⁻¹ρ ≤ σ ≤ Cρ. We show that the space of nearby points to ρ can be furnished with the two flat connections known as the (±)-affine structures, which are dual relative to the BKM metric. We furnish M with a norm making it into a Banach manifold.

16.
The postal network is an interconnection network that possesses many desirable properties in networking applications. It includes hypercubes and Fibonacci cubes as its special cases. Basically, the postal network forms a series (with series number ℓ) that is based on the recurrence N_ℓ(n) = N_ℓ(n−1) + N_ℓ(n−ℓ), where n is the dimension and N_ℓ(n) represents the number of nodes in an n-dimensional postal network in series ℓ. In this paper, we study topological properties of postal networks and relationships between different postal networks. One application of postal networks is also shown in implementing barrier synchronization using a special spanning tree called a postal tree. The postal network can also be considered as a flexible version of the hypercube, obtained by relaxing the restriction on the number of nodes, which makes it possible to construct multicomputers of arbitrary size.
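A minimal sketch of the node-count recurrence above. The base cases are an assumption of this sketch, chosen so that ℓ = 1 reproduces the hypercube count 2ⁿ and ℓ = 2 gives Fibonacci-like growth; the paper's own initial conditions may differ.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def N(l, n):
    """Node count N_l(n) = N_l(n-1) + N_l(n-l) for the series-l postal network."""
    if n < l:
        return 1 if n >= 0 else 0    # assumed base cases for small dimensions
    return N(l, n - 1) + N(l, n - l)

print([N(1, n) for n in range(8)])   # l = 1: 1, 2, 4, 8, ... (hypercube sizes)
print([N(2, n) for n in range(8)])   # l = 2: 1, 1, 2, 3, 5, ... (Fibonacci cubes)
print([N(3, n) for n in range(8)])   # l = 3: a slower-growing postal series
```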

17.
For compact Euclidean bodies P, Q, we define σ(P, Q) to be the smallest ratio r/s where r > 0, s > 0 satisfy sQ′ ⊆ P ⊆ rQ″. Here sQ denotes a scaling of Q by the factor s, and Q′, Q″ are some translates of Q. This function gives us a new distance function between bodies which, unlike previously studied measures, is invariant under affine transformations. If homothetic bodies are identified, the logarithm of this function is a metric. (Two bodies are homothetic if one can be obtained from the other by scaling and translation.)

For integer k ≥ 3, define σ(k) to be the minimum value such that for each convex polygon P there exists a convex k-gon Q with σ(P, Q) ≤ σ(k). Among other results, we prove that 2.118… ≤ σ(3) ≤ 2.25 and σ(k) = 1 + Θ(k^(−2)). We give an O(n² log² n)-time algorithm which, for any input convex n-gon P, finds a triangle T that minimizes σ(T, P) among triangles. However, in linear time we can find a triangle t with σ(t, P) ≤ 2.25.

Our study is motivated by the attempt to reduce the complexity of the polygon containment problem, and also the motion-planning problem. In each case we describe algorithms which run faster when certain implicit slackness parameters of the input are bounded away from 1. These algorithms illustrate a new algorithmic paradigm in computational geometry for coping with complexity.

Work of all authors was partially supported by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (project ALCOM). Rudolf Fleischer and Kurt Mehlhorn acknowledge also DFG (Grant SPP Me 620/6). Chee Yap acknowledges also DFG (Grant Be 142/46-1) and NSF (Grants DCR-84-01898 and CCR-87-03458). This research was performed when Günter Rote and Chee Yap were at the Freie Universität Berlin.

18.
Technical Update: Least-Squares Temporal Difference Learning
Boyan, Justin A. Machine Learning, 2002, 49(2-3): 233-246.
TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it may make inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto (1996, Machine Learning, 22:1–3, 33–57) eliminates all stepsize parameters and improves data efficiency.

This paper updates Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting new algorithm is shown to be a practical, incremental formulation of supervised linear regression. Third, it presents a novel and intuitive interpretation of LSTD as a model-based reinforcement learning technique.
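A minimal sketch of LSTD(λ) with linear features, assuming episodes given as (φ, r, φ′) transitions and illustrative values of γ and λ; it shows the defining property that the weights are obtained by one linear solve, with no stepsize schedule to tune.

```python
import numpy as np

def lstd(episodes, n_features, gamma=0.95, lam=0.5):
    """episodes: lists of (phi, reward, phi_next); phi_next is None at episode end."""
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for episode in episodes:
        z = np.zeros(n_features)               # eligibility trace
        for phi, r, phi_next in episode:
            z = gamma * lam * z + phi          # decay and accumulate the trace
            nxt = phi_next if phi_next is not None else np.zeros(n_features)
            A += np.outer(z, phi - gamma * nxt)
            b += z * r
    return np.linalg.pinv(A) @ b               # weights theta of V(s) ~ phi(s).theta
```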

19.
The concept of information is virtually ubiquitous in contemporary cognitive science. It is claimed to be processed (in cognitivist theories of perception and comprehension), stored (in cognitivist theories of memory and recognition), and otherwise manipulated and transformed by the human central nervous system. Fred Dretske's extensive philosophical defense of a theory of informational content (semantic information) based upon the Shannon-Weaver formal theory of information is subjected to critical scrutiny. A major difficulty is identified in Dretske's equivocations in the use of the concept of a signal bearing informational content. Gibson's alternative conception of information (construed as analog by Dretske), while avoiding many of the problems located in the conventional use of signal, raises different but equally serious questions. It is proposed that, taken literally, the human CNS does not extract or process information at all; rather, whatever information is construed as locatable in the CNS is information only for an observer-theorist and only for certain purposes. "Blood courses through our veins, and information through our central nervous system." — A Neuropsychology Textbook.

20.
We consider the half-space range-reporting problem: Given a set S of n points in ℝᵈ, preprocess it into a data structure so that, given a query half-space γ, all k points of S lying in γ can be reported efficiently. We extend previously known static solutions to dynamic ones, supporting insertions and deletions of points of S. For a given parameter m, n ≤ m ≤ n^⌊d/2⌋, and an arbitrarily small positive constant ε, we achieve O(m^(1+ε)) space and preprocessing time, O((n/m^(1/⌊d/2⌋)) log n + k) query time, and O(m^(1+ε)/n) amortized update time (d ≥ 3). We present, among others, the following applications: an O(n^(1+ε))-time algorithm for computing convex layers in ℝ³, and an output-sensitive algorithm for computing a level in an arrangement of planes in ℝ³, whose time complexity is O((b + n) n^ε), where b is the size of the level.

Work by the first author has been supported by National Science Foundation Grant CCR-91-06514. A preliminary version of this paper appeared in Agarwal et al. [2], which also contains the results of [20] on dynamic bichromatic closest pair and minimum spanning trees.
