Similar Documents
20 similar documents found (search time: 31 ms)
1.
Unification algorithms have been constructed for semigroups and commutative semigroups. This paper considers the intermediate case of partially commutative semigroups. We introduce two classes of such semigroups, N and a related class, and justify their use. We present an equation-solving algorithm for any member of the class N. This algorithm is relative to having an algorithm that determines all non-negative solutions of a certain class of Diophantine equations of degree 2. The difficulties arising when attempting to solve equations in members of the second class are discussed, and we present arguments that strongly suggest that unification in these semigroups is undecidable.

2.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.

3.
The relation between an operational interleaving semantics for TCSP based on a transition system and a compositional true-concurrency semantics based on event structures is studied. In particular, we extend the consistency result of Goltz and Loogen [15] for TCSP processes without recursion to the general case. Thus we obtain, for every TCSP process P, that its operational meaning O(P) and the interleaving behaviour derived from the event structure associated with P are bisimilar.

4.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.

5.
Learning to Play Chess Using Temporal Differences (cited 4 times in total: 0 self-citations, 4 citations by others)
Baxter, Jonathan; Tridgell, Andrew; Weaver, Lex. Machine Learning, 2000, 40(3): 243-263
In this paper we present TDLEAF(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program KnightCap used TDLEAF(λ) to learn its evaluation function while playing on Internet chess servers. The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line play rather than self-play. We also investigate whether TDLEAF(λ) can yield better results in the domain of backgammon, where TD(λ) has previously yielded striking success.
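To make the idea behind TDLEAF(λ) concrete, the following Python sketch performs one weight update from the principal-variation leaves of a finished game, assuming a linear evaluation function. The function and parameter names (td_leaf_update, leaf_features, alpha, lam) are illustrative placeholders and do not reproduce KnightCap's actual implementation; in a real engine the evaluation of the final position would also be replaced by the game outcome, a detail omitted here.

```python
import numpy as np

def td_leaf_update(leaf_features, weights, alpha=1e-3, lam=0.7):
    """One TDLEAF(lambda)-style update after a game.

    leaf_features[t] is the feature vector of the principal-variation leaf
    reached by searching from the t-th position of the game; the evaluation
    is assumed (as a simplification) to be linear in these features.
    """
    evals = [float(np.dot(weights, f)) for f in leaf_features]
    # Temporal differences between successive principal-variation leaves.
    deltas = [evals[t + 1] - evals[t] for t in range(len(evals) - 1)]
    update = np.zeros_like(weights)
    for t in range(len(deltas)):
        # lambda-discounted sum of the future temporal differences.
        discounted = sum(lam ** (j - t) * deltas[j] for j in range(t, len(deltas)))
        # For a linear evaluation the gradient is just the feature vector.
        update = update + leaf_features[t] * discounted
    return weights + alpha * update
```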

6.
This paper uses Thiele rational interpolation to derive a simple method for computing the Randles–Sevcik function π^{1/2}χ(x), with relative error at most 1.9 × 10⁻⁵ for –∞ < x < ∞. We develop a piecewise approximation method for the numerical computation of this function on the union (–∞, –10) ∪ [–10, 10] ∪ (10, ∞). This approximation is particularly convenient to employ in electrochemical applications, where four significant digits of accuracy are usually sufficient. Although this paper is primarily concerned with the approximation of the Randles–Sevcik function, some examples are included that illustrate how Thiele rational interpolation can be employed to generate useful approximations to other functions of interest in scientific work.
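As background for the method named above, here is a minimal Python sketch of Thiele continued-fraction interpolation built from reciprocal differences. It is a generic textbook construction, not the paper's piecewise scheme: the choice of nodes, the absence of any safeguard against vanishing reciprocal differences, and the example function are all assumptions made for illustration.

```python
def thiele_coefficients(xs, fs):
    """Reciprocal differences -> coefficients of Thiele's continued fraction."""
    n = len(xs)
    rho = [list(fs)]                          # rho[0][i] = f(x_i)
    for k in range(1, n):
        row = []
        for i in range(n - k):
            diff = rho[k - 1][i] - rho[k - 1][i + 1]   # assumed nonzero
            term = (xs[i] - xs[i + k]) / diff
            if k >= 2:
                term += rho[k - 2][i + 1]
            row.append(term)
        rho.append(row)
    coeffs = [rho[0][0], rho[1][0]] if n > 1 else [rho[0][0]]
    for k in range(2, n):
        coeffs.append(rho[k][0] - rho[k - 2][0])
    return coeffs

def thiele_eval(x, xs, coeffs):
    """Evaluate a0 + (x-x0)/(a1 + (x-x1)/(a2 + ...)) from the inside out."""
    v = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        v = coeffs[k] + (x - xs[k]) / v
    return v

# Example: three nodes of f(x) = 1/x are enough to recover it exactly,
# since the target is itself a low-degree rational function.
xs, fs = [1.0, 2.0, 4.0], [1.0, 0.5, 0.25]
print(thiele_eval(3.0, xs, thiele_coefficients(xs, fs)))   # -> 0.333... (= 1/3)
```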

7.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast. On a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation, in which variables are represented by pointers to their binder. The program can compile an Automath text into one big Automath single-line-style λ-term, which it outputs using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn are different from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web.

8.
The term F-cardinality of ℱ (= F-card(ℱ)) is introduced, where F: ℝⁿ → ℝⁿ is a partial function and ℱ is a set of partial functions f: ℝⁿ → ℝⁿ. The F-cardinality yields a lower bound for the worst-case complexity of computing F if only functions f ∈ ℱ can be evaluated by the underlying abstract automaton without conditional jumps. This complexity bound is independent of the oracles available for the abstract machine. Thus it is shown that any automaton which can only apply the four basic arithmetic operations needs Ω(n log n) worst-case time to sort n numbers; this result is even true if conditional jumps with arbitrary conditions are possible. The main result of this paper is the following: given a total function F: ℝⁿ → ℝⁿ and a natural number k, it is almost always possible to construct a set ℱ such that its F-cardinality has the value k; in addition, ℱ can be required to be closed under composition of functions f, g ∈ ℱ. Moreover, if F is continuous, then ℱ consists of continuous functions.

9.
In this paper, we consider the class of Boolean functions definable by read-once expressions, i.e., Boolean expressions in which no variable occurs more than once. We present an algorithm which transforms a Boolean formula E into an equivalent read-once expression – if possible – in time linear in the size of E times a factor depending only on the number of variables that occur more than once in E. As an application, we obtain a polynomial-time algorithm for Mundici's problem of recognizing such functions from k-formulas [17]. Furthermore, we show that the recognition problem is co-NP-complete for functions essentially dependent on all variables, and we give a bound close to co-NP for the general case.

10.
We develop a theory of communication within branching programs that provides exponential lower bounds on the size of branching programs that are bounded alternating. Our theory is based on the algebraic concept of branching programs over a semiring homomorphism, which generalizes ordinary branching programs, ⊕-branching programs [M2] and MOD_p-branching programs [DKMW]. Due to certain exponential lower and polynomial upper bounds on the size of such bounded alternating branching programs, we are able to separate the corresponding complexity classes N_ba, co-N_ba, ⊕_ba, and MOD_p-ba, p prime, from each other, and from the classes corresponding to the oblivious linear length-bounded branching programs investigated in the past.

11.
Agent-based technology has been identified as an important approach for developing next-generation manufacturing systems. One of the key techniques needed for implementing such advanced systems will be learning. This paper first discusses learning issues in agent-based manufacturing systems and reviews related approaches, then describes how to enhance the performance of an agent-based manufacturing system through learning from history (based on distributed case-based learning and reasoning) and learning from the future (through system forecasting simulation). Learning from history is used to enhance coordination capabilities by minimizing communication and processing overheads. Learning from the future is used to adjust promissory schedules through forecasting simulation, taking into account shop-floor interactions and production and transportation times. Detailed learning and reasoning mechanisms are described, and partial experimental results are presented.

12.
We present an O(n³)-time type inference algorithm for a type system with a largest type, a smallest type, and the usual ordering between function types. The algorithm infers type annotations of least shape, and it works equally well for recursive types. For the problem of typability, our algorithm is simpler than the one of Kozen, Palsberg, and Schwartzbach for type inference without the smallest type. This may be surprising, especially because the system with the smallest type is strictly more powerful.

13.
We show that if a complexity class C is closed downward under polynomial-time majority truth-table reductions (≤p_mtt), then practically every other polynomial closure property it enjoys is inherited by the corresponding bounded two-sided error class BP[C]. For instance, the Arthur-Merlin game class AM [B1] enjoys practically every closure property of NP. Our main lemma shows that, for any relativizable class D which meets two fairly transparent technical conditions, we have C ⊆ BP[C] ⊆ BP[D·C]. Among our applications, we simplify the proof by Toda [To1], [To2] that the polynomial hierarchy PH is contained in BP[⊕P]. We also show that relative to a random oracle R, PH^R is properly contained in ⊕P^R. The first author was supported in part by NSF Grant CCR-9011248 and the second author was supported in part by NSF Grant CCR-89011154.

14.
Given a finite set E ⊂ ℝⁿ, the problem is to find clusters (subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.
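The abstract does not give the algorithm's details, so the Python sketch below only illustrates the general pattern of clustering with multi-point cores: alternate between assigning points to the core whose members are nearest on average and re-electing the most typical members of each cluster as its new core. The core size, the distance criterion, and all names are assumptions, not the paper's formulation.

```python
import numpy as np

def dynamic_clusters(points, k, core_size=3, n_iter=20, rng=None):
    """Rough sketch of clustering with multi-point cores ("samplings")."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Initialise each core with core_size points drawn at random.
    cores = [points[rng.choice(n, core_size, replace=False)] for _ in range(k)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assignment step: mean distance from each point to the members of a core.
        dists = np.stack([
            np.linalg.norm(points[:, None, :] - core[None, :, :], axis=2).mean(axis=1)
            for core in cores
        ], axis=1)                                     # shape (n, k)
        labels = dists.argmin(axis=1)
        # Representation step: keep the most central points of each cluster.
        for j in range(k):
            members = points[labels == j]
            if len(members) == 0:
                continue                               # keep the old core
            within = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2).sum(axis=1)
            cores[j] = members[np.argsort(within)[:core_size]]
    return labels, cores
```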

15.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and consider the free orthogonality monoid over (X, #). Let C0 denote the subset of its lattice of closed sets consisting of all closures of bounded orthogonal sets. We show that C0 is a suborthomodular lattice, and we provide a necessary and sufficient condition for C0 to carry a full set of dispersion-free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

16.
Adaptive control is considered for a two-dimensional linear discrete-time plant with randomly drifting parameters. A certainty-equivalence minimum variance control law is used together with a projection-like identification algorithm. Stability of the parameter estimates and exponential stability of the closed-loop system are proved in the absence of any persistent excitation assumption.
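For orientation, the standard textbook projection algorithm updates the parameter estimate from the newest regressor and plant output as in the Python sketch below; the gains a and c and the linear-regression form are generic assumptions, not the specific projection-like law analysed in the paper. Under certainty equivalence, the controller then computes the minimum variance input as if the current estimate were the true plant parameter.

```python
import numpy as np

def projection_update(theta, phi, y, a=1.0, c=1.0):
    """One step of the (normalised) projection identification algorithm.

    theta : current parameter estimate (numpy vector)
    phi   : regressor vector observed at the previous step
    y     : newly measured plant output
    a, c  : tuning constants (typically 0 < a < 2, c > 0)
    """
    error = y - phi @ theta                       # prediction error
    return theta + (a * phi / (c + phi @ phi)) * error
```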

17.
Games such as CHESS, GO and OTHELLO can be represented by minimax game trees. Among the various search procedures for solving such game trees, α-β and SSS* are perhaps the best known. Although it is proved that SSS* explores only a subset of the nodes explored by α-β, α-β is commonly believed to be faster in real applications, since it requires very little memory space and hence its storage-management cost is low. Contrary to this folklore, however, this paper reports, using the OTHELLO game as an example, that SSS* is much faster than α-β. It is also demonstrated that SSS* can be modified to make the required memory space controllable to some extent, while retaining the high efficiency of the original SSS*. This research was partially supported by the Ministry of Education, Science and Culture of Japan, under a Scientific Grant-in-Aid.
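For readers unfamiliar with the baseline being compared against, a minimal α-β search looks roughly like the Python sketch below. The evaluate and children callbacks are caller-supplied placeholders, as is the fixed-depth cutoff; nothing here reflects SSS* or the memory-management modifications the paper proposes.

```python
def alpha_beta(node, depth, alpha, beta, evaluate, children, maximizing=True):
    """Plain fail-soft alpha-beta search over an explicit game tree.

    evaluate(node) scores a leaf from the root player's point of view;
    children(node) lists the successor positions of node.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta,
                                        evaluate, children, False))
            alpha = max(alpha, best)
            if alpha >= beta:          # beta cutoff: opponent avoids this line
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alpha_beta(child, depth - 1, alpha, beta,
                                        evaluate, children, True))
            beta = min(beta, best)
            if alpha >= beta:          # alpha cutoff
                break
        return best
```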

18.
The mainstream of legal theory tends to incorporate unwritten principles into the law. Weighing of principles plays a great role in legal argumentation, inter alia in statutory interpretation. A weighing and balancing of principles and other prima facie reasons is a jump: the inference is not conclusive. To deal with defeasibility and weighing, a jurist needs both belief-revision logic and nonmonotonic logic. The systems of nonmonotonic logic included in the present volume provide logical tools enabling one to speak precisely about various kinds of rules about rules, dealing with such things as applicability of rules, what is assumed by rules, priority between rules, and the burden of proof. Nonmonotonic logic is an example of an extension of the domain of logic. But the more far-reaching the extension is, the greater the problems it meets. It seems impossible to make a logical reconstruction of the totality of legal argumentation. The lawyers' search for reasons has no obvious end point. Ideally, the search for reasons may end when one arrives at a coherent totality of knowledge; in other words, coherence is the termination condition of reasoning. Both scientific knowledge and knowledge of legal and moral norms progress by trial and error, and one must resort to a certain convention to define what error means. The main difference is, however, that the conventions of science are much more precise than those of legal scholarship. Consequently, determination of error in legal science is often holistic and circular: the reasons determining that a legal theory is erroneous are not more certain than the contested theory itself. A strict and formal logical analysis cannot give us the full grasp of legal rationality. A weaker logical theory, allowing for nonmonotonic steps, comes closer, at the expense of an inevitable loss of computational efficiency. Coherentist epistemology grasps even more of this rationality, at the expense of a loss of preciseness.

19.
When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems. If quantitative temporal information (expressed by clock ticks) is considered, state explosion is even more serious. We present a notion of abstraction of transition systems, where the abstraction is driven by the formulae of a quantitative temporal logic, called the qu-mu-calculus, defined in the paper. The abstraction is based on a notion of bisimulation equivalence parameterized by a set of actions and a natural number n. It is proved that two transition systems are equivalent in this sense iff they give the same truth value to all qu-mu-calculus formulae in which the actions occurring in the modal operators are contained in the given set and the time constraints have values less than or equal to n. We present a non-standard (abstract) semantics for a timed process algebra able to produce reduced transition systems for checking formulae. The abstract semantics, parametric with respect to a set of actions and a natural number n, produces a reduced transition system equivalent, in the above sense, to the standard one. A transformational method is also defined, by means of which it is possible to syntactically transform a program into a smaller one that still preserves this equivalence.

20.
Since the earliest formalisation of default logic by Reiter, many contributions to this appealing approach to nonmonotonic reasoning have been given. The different formalisations are here presented in a general framework that gathers the basic notions, concepts and constructions underlying default logic. Our view is to interpret defaults as special rules that impose a restriction on the juxtaposition of monotonic Hilbert-style proofs of a given logic L. We propose to describe default logic as a logic where the juxtaposition of default proofs is subordinate to a restriction condition. Hence a default logic is a pair consisting of the underlying logic L and a restriction condition, and properties of the default logic, like compactness, can be interpreted through the restriction condition. Different default systems are then given a common characterization through a specific condition on the logic L. We also prove cumulativity for any such default logic by slightly modifying the notion of default proof. We extend, in fact, the language of L in a way close to that followed by Brewka in the formulation of his cumulative default system. Finally we show the existence of infinitely many intermediary default logics, depending on the restriction condition and called linear logics, which lie between Reiter's and Łukaszewicz' versions of default logic. Work carried out in the framework of the agreement between the Italian PT Administration and FUB. Laforia, Université Paris VI Pierre et Marie Curie, 4 Place Jussieu, Tour 46, 75252 Paris, France.
