Similar Articles
20 similar articles found (search time: 24 ms).
1.
Indecomposable local maps of one-dimensional tessellation automata are studied. The main results of this paper are the following. (1) For any alphabet Σ containing two or more symbols and for any n ≥ 1, there exist indecomposable scope-n local maps over Σ. (2) If Σ is a finite field of prime order, then a linear scope-n local map over Σ is indecomposable if and only if its associated polynomial is an irreducible polynomial of degree n − 1 over Σ, except for a trivial case. (3) Result (2) is no longer true if Σ is a finite field whose order is not prime.
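Result (2) turns indecomposability of a linear scope-n local map into an irreducibility test over a prime field. A minimal sketch of such a test, assuming the associated polynomial is given as a coefficient list (lowest degree first) over GF(p); trial division is adequate for the small degrees involved:

```python
def poly_mul(a, b, p):
    """Multiply coefficient lists (lowest degree first) over GF(p)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_mod(a, m, p):
    """Remainder of a modulo the monic polynomial m over GF(p)."""
    a = a[:]
    while len(a) >= len(m) and any(a):
        if a[-1]:
            c, shift = a[-1], len(a) - len(m)
            for i, mi in enumerate(m):
                a[shift + i] = (a[shift + i] - c * mi) % p
        a.pop()
    return a

def monic_polys(deg, p):
    """All monic polynomials of exactly the given degree over GF(p)."""
    from itertools import product
    for tail in product(range(p), repeat=deg):
        yield list(tail) + [1]

def is_irreducible(f, p):
    """Trial division of a monic polynomial f (degree >= 1) over GF(p)."""
    d = len(f) - 1
    for k in range(1, d // 2 + 1):
        for g in monic_polys(k, p):
            if not any(poly_mod(f, g, p)):
                return False
    return True

# x^2 + 1 factors as (x+1)^2 over GF(2), but is irreducible over GF(3)
print(is_irreducible([1, 0, 1], 2))  # False
print(is_irreducible([1, 0, 1], 3))  # True
```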

2.
We show that a number of geometric problems can be solved on an n × n mesh-connected computer (MCC) in O(n) time, which is optimal to within a constant factor, since a nontrivial data movement on an MCC requires Ω(n) time. The problems studied here include multipoint location, planar point location, trapezoidal decomposition, intersection detection, intersection of two convex polygons, Voronoi diagram, the largest empty circle, the smallest enclosing circle, etc. The O(n) algorithms for all of the above problems are based on the classical divide-and-conquer problem-solving strategy. This work was supported in part by the National Science Foundation under Grant DCR 8420814. A preliminary version was presented in the 1987 FJCC, Dallas, TX.
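One of the listed problems, the smallest enclosing circle, is easy to state in code. The sketch below is a plain sequential Welzl-style randomized algorithm, not the paper's O(n) mesh algorithm, and assumes no three boundary points are collinear:

```python
import random

def circle_from(points):
    """Circle determined by 0-3 boundary points: (cx, cy, r)."""
    if not points:
        return (0.0, 0.0, 0.0)
    if len(points) == 1:
        (x, y), = points
        return (x, y, 0.0)
    if len(points) == 2:
        (x1, y1), (x2, y2) = points
        return ((x1 + x2) / 2, (y1 + y2) / 2,
                ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 / 2)
    (ax, ay), (bx, by), (cx, cy) = points  # circumcircle of a triangle
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, ((ux - ax) ** 2 + (uy - ay) ** 2) ** 0.5)

def inside(c, p, eps=1e-9):
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= (c[2] + eps) ** 2

def welzl(pts, boundary=()):
    """Expected-linear-time smallest enclosing circle (recursive Welzl)."""
    if not pts or len(boundary) == 3:
        return circle_from(list(boundary))
    p = pts[0]
    c = welzl(pts[1:], boundary)
    if inside(c, p):
        return c
    return welzl(pts[1:], boundary + (p,))  # p must lie on the boundary

pts = [(0, 0), (1, 0), (0, 1), (0.2, 0.2)]
random.shuffle(pts)  # randomization gives the expected-time bound
print(welzl(pts))    # center (0.5, 0.5), radius sqrt(2)/2
```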

3.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c,p) or ist(c, ¬ p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.
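A toy rendering of the nonbivalent ist predicate, with contexts reduced to partial truth assignments (the paper's contexts are richer, structured pieces of the world; names here are illustrative):

```python
# Three-valued ist: True, False, or None when the context settles
# neither p nor its negation (the nonbivalent case).
def ist(context, prop):
    """context: dict mapping atomic proposition names to True/False."""
    return context.get(prop)  # None when the context is silent on prop

blocks = {"on(A, B)": True, "clear(A)": True}
print(ist(blocks, "on(A, B)"))  # True
print(ist(blocks, "on(B, A)"))  # None: neither ist(c,p) nor ist(c,not p)
```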

4.
Games such as CHESS, GO and OTHELLO can be represented by minimax game trees. Among various search procedures to solve such game trees, α-β and SSS* are perhaps the most well known. Although it is proved that SSS* explores only a subset of the nodes explored by α-β, α-β is commonly believed to be faster in real applications, since it requires very little memory space and hence its storage management cost is low. Contrary to this folklore, however, this paper reports, using the OTHELLO game as an example, that SSS* is much faster than α-β. It is also demonstrated that SSS* can be modified to make the required memory space controllable to some extent, while retaining the high efficiency of the original SSS*. This research was partially supported by the Ministry of Education, Science and Culture of Japan, under a Scientific Grant-in-Aid.
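For reference, the α-β baseline being compared against, in minimal negamax form on an explicit toy tree (SSS*'s best-first OPEN-list machinery is substantially longer and omitted here):

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf):
    """Negamax alpha-beta: a node is a leaf score or a list of children."""
    if isinstance(node, (int, float)):  # leaf, scored for the side to move
        return node
    best = -math.inf
    for child in node:
        best = max(best, -alphabeta(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:  # cutoff: remaining children cannot matter
            break
    return best

# depth-2 tree: max at the root, three min nodes below
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # 3
```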

5.
Unification algorithms have been constructed for semigroups and commutative semigroups. This paper considers the intermediate case of partially commutative semigroups. We introduce two classes of such semigroups, one of them called N, and justify their use. We present an equation-solving algorithm for any member of the class N. This algorithm is relative to having an algorithm to determine all non-negative solutions of a certain class of diophantine equations of degree 2. The difficulties arising when attempting to solve equations in members of the other class are discussed, and we present arguments that strongly suggest that unification in these semigroups is undecidable.

6.
When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems. If quantitative temporal information (expressed by clock ticks) is considered, state explosion is even more serious. We present a notion of abstraction of transition systems, where the abstraction is driven by the formulae of a quantitative temporal logic, called qu-mu-calculus, defined in the paper. The abstraction is based on a notion of bisimulation equivalence, called (A, n)-equivalence, where A is a set of actions and n is a natural number. It is proved that two transition systems are (A, n)-equivalent iff they give the same truth value to all qu-mu-calculus formulae such that the actions occurring in the modal operators are contained in A, and with time constraints whose values are less than or equal to n. We present a non-standard (abstract) semantics for a timed process algebra able to produce reduced transition systems for checking formulae. The abstract semantics, parametric with respect to a set of actions A and a natural number n, produces a reduced transition system (A, n)-equivalent to the standard one. A transformational method is also defined, by means of which it is possible to syntactically transform a program into a smaller one, still preserving (A, n)-equivalence.
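For orientation, a naive greatest-fixpoint check of plain strong bisimulation; the paper's (A, n)-equivalence additionally restricts attention to actions in A and bounds time constraints by n, which this sketch omits:

```python
def bisimilar(trans, s0, t0):
    """Naive greatest-fixpoint strong bisimulation on one LTS.
    trans: dict state -> set of (action, successor) pairs."""
    states = list(trans)
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # every move of p must be matched by q, and vice versa
            forth = all(any(b == a and (p2, q2) in rel for (b, q2) in trans[q])
                        for (a, p2) in trans[p])
            back = all(any(b == a and (p2, q2) in rel for (b, p2) in trans[p])
                       for (a, q2) in trans[q])
            if not (forth and back):
                rel.discard((p, q))
                changed = True
    return (s0, t0) in rel

# u loops on action 'a'; v alternates between two states on 'a': bisimilar
trans = {"u": {("a", "u")}, "v": {("a", "w")}, "w": {("a", "v")}}
print(bisimilar(trans, "u", "v"))  # True
```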

7.
This paper uses Thiele rational interpolation to derive a simple method for computing the Randles–Sevcik function π^(1/2)χ(x), with relative error at most 1.9 × 10⁻⁵ for −∞ < x < ∞. We develop a piecewise approximation method for the numerical computation of π^(1/2)χ(x) on the union (−∞, −10) ∪ [−10, 10] ∪ (10, ∞). This approximation is particularly convenient to employ in electrochemical applications, where four significant digits of accuracy are usually sufficient. Although this paper is primarily concerned with the approximation of the Randles–Sevcik function, some examples are included that illustrate how Thiele rational interpolation can be employed to generate useful approximations to other functions of interest in scientific work.
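Thiele rational interpolation itself is compact: build inverted-difference coefficients, then evaluate the continued fraction from the inside out. A sketch assuming distinct nodes and no zero denominators (breakdown cases are not handled):

```python
import math

def thiele_coefficients(xs, ys):
    """Inverted-difference coefficients for Thiele's continued fraction."""
    n = len(xs)
    phi = [ys[:]]  # phi[k][i], defined for i >= k
    for k in range(1, n):
        prev = phi[k - 1]
        row = [None] * n
        for i in range(k, n):
            row[i] = (xs[i] - xs[k - 1]) / (prev[i] - prev[k - 1])
        phi.append(row)
    return [phi[k][k] for k in range(n)]

def thiele_eval(xs, coeffs, x):
    """Evaluate a0 + (x-x0)/(a1 + (x-x1)/(a2 + ...)) from the inside out."""
    v = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        v = coeffs[k] + (x - xs[k]) / v
    return v

xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.tan(x) for x in xs]  # a pole near pi/2: rational-friendly target
a = thiele_coefficients(xs, ys)
print(thiele_eval(xs, a, 1.4), math.tan(1.4))
```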

8.
Learning to Play Chess Using Temporal Differences
Baxter, Jonathan; Tridgell, Andrew; Weaver, Lex. Machine Learning, 2000, 40(3): 243–263.
In this paper we present TDLeaf(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program KnightCap used TDLeaf(λ) to learn its evaluation function while playing on Internet chess servers. The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line play rather than self-play. We also investigate whether TDLeaf(λ) can yield better results in the domain of backgammon, where TD(λ) has previously yielded striking success.
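A schematic of the TDLeaf(λ) update, assuming a linear evaluation over feature vectors of the principal-variation leaves returned by search (the linear form and all names are illustrative, not KnightCap's actual evaluator; terminal-reward handling is omitted):

```python
import numpy as np

def tdleaf_update(w, leaf_features, alpha=0.01, lam=0.7):
    """One offline TDLeaf(lambda) pass over a game.
    leaf_features[t]: feature vector of the PV leaf found by search
    at move t. Evaluation is linear: eval(s_t) = w . x_t."""
    evals = [w @ x for x in leaf_features]
    grads = leaf_features  # gradient of a linear evaluation is x_t
    deltas = [evals[t + 1] - evals[t] for t in range(len(evals) - 1)]
    for t in range(len(deltas)):
        # lambda-weighted sum of future temporal differences
        g = sum(lam ** (j - t) * deltas[j] for j in range(t, len(deltas)))
        w = w + alpha * g * grads[t]
    return w

rng = np.random.default_rng(0)
w = np.zeros(4)
game = [rng.standard_normal(4) for _ in range(10)]  # fake PV-leaf features
print(tdleaf_update(w, game))
```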

9.
In recent years, constraint satisfaction techniques have been successfully applied to disjunctive scheduling problems, i.e., scheduling problems where each resource can execute at most one activity at a time. Less significant and less generally applicable results have been obtained in the area of cumulative scheduling. Multiple constraint propagation algorithms have been developed for cumulative resources but they tend to be less uniformly effective than their disjunctive counterparts. Different problems in the cumulative scheduling class seem to have different characteristics that make them either easy or hard to solve with a given technique. The aim of this paper is to investigate one particular dimension along which problems differ. Within the cumulative scheduling class, we distinguish between highly disjunctive and highly cumulative problems: a problem is highly disjunctive when many pairs of activities cannot execute in parallel, e.g., because many activities require more than half of the capacity of a resource; on the contrary, a problem is highly cumulative if many activities can effectively execute in parallel. New constraint propagation and problem decomposition techniques are introduced with this distinction in mind. This includes an O(n²) edge-finding algorithm for cumulative resources (where n is the number of activities requiring the same resource) and a problem decomposition scheme which applies well to highly disjunctive project scheduling problems. Experimental results confirm that the impact of these techniques varies from highly disjunctive to highly cumulative problems. In the end, we also propose a refined version of the edge-finding algorithm for cumulative resources which, despite its worst-case complexity of O(n³), performs very well on highly cumulative instances.
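The highly-disjunctive/highly-cumulative distinction can be quantified directly: for a single resource of capacity C, measure the fraction of activity pairs whose combined demand exceeds C. A sketch with made-up data:

```python
def disjunction_ratio(demands, capacity):
    """Fraction of activity pairs that can never execute in parallel."""
    n = len(demands)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    conflicting = sum(demands[i] + demands[j] > capacity for i, j in pairs)
    return conflicting / len(pairs)

# demands above C/2 make every pair conflict -> highly disjunctive
print(disjunction_ratio([3, 3, 3, 2], capacity=4))  # 1.0
print(disjunction_ratio([1, 1, 1, 1], capacity=4))  # 0.0: highly cumulative
```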

10.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.
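A small sketch of the classical three-valued reading of E|H that the paper starts from (true when E and H hold, false when H holds without E, undetermined when H fails); the paper's refinement is to let the third value t(E|H) vary with the conditional event:

```python
def conditional_event(e, h):
    """de Finetti's three-valued E|H: True, False, or None (undetermined)."""
    if not h:
        return None  # conditioning event failed: undetermined
    return bool(e)

print(conditional_event(True, True))   # True
print(conditional_event(False, True))  # False
print(conditional_event(True, False))  # None
```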

11.
The ongoing integration of LANs and WANs to support global communications and businesses, and the emergence of integrated broadband communication services, has created an increased demand for cooperation between customers, network and service providers to achieve end-to-end service management. Such cooperation between autonomous authorities, each defining their own administrative management domains, requires the application of an open standardized framework to facilitate and regulate interworking. Such a framework is given by the ITU-T recommendations on TMN, where the so-called X interface is of particular importance for inter-domain management. In this paper, we explain the role of the TMN X interface within an inter-domain TMN architecture supporting end-to-end communications management. We identify the important issues that need to be addressed for the definition and realization of TMN X interfaces and report on our practical experiences with the implementation of TMN X interfaces in the PREPARE project.

12.
The computational reconstruction of surface topographies from scanning electron microscope (SEM) images has been extensively investigated in the past, but fundamental image processing problems still exist. Since conventional approaches adapted from general-purpose image processing have not sufficiently met the requirements in terms of resolution and reliability, we have explored combining different methods to obtain better results. This paper presents a least-squares combination of conventional stereoscopy with shape from shading, and a way of obtaining self-consistent surface profiles from stereoscopy and stereo-intrinsic shape from shading using dynamic programming techniques. Results are presented showing how this combined analysis of multi-sensorial data yields improvements of the reconstructed surface topography that cannot be obtained from individual sensor signals alone.
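A minimal 1-D illustration of the least-squares combination idea: fuse noisy absolute heights (stereo-like) with accurate relative slopes (shading-like) by solving one linear system. The weight mu and all data are invented for the sketch:

```python
import numpy as np

def fuse_profiles(z_stereo, slope_sfs, mu=10.0):
    """Minimize ||z - z_stereo||^2 + mu * ||D z - slope_sfs||^2,
    where D is the forward-difference operator."""
    n = len(z_stereo)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = np.eye(n) + mu * D.T @ D          # normal equations
    b = z_stereo + mu * D.T @ slope_sfs
    return np.linalg.solve(A, b)

x = np.linspace(0, 1, 50)
true_z = np.sin(2 * np.pi * x)
rng = np.random.default_rng(1)
z_stereo = true_z + 0.3 * rng.standard_normal(50)   # noisy absolute heights
slope_sfs = np.diff(true_z) + 0.01 * rng.standard_normal(49)  # good slopes
z = fuse_profiles(z_stereo, slope_sfs)
print(float(np.abs(z - true_z).mean()),       # fused error ...
      float(np.abs(z_stereo - true_z).mean()))  # ... vs stereo alone
```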

13.
The concept of information is virtually ubiquitous in contemporary cognitive science. It is claimed to be processed (in cognitivist theories of perception and comprehension), stored (in cognitivist theories of memory and recognition), and otherwise manipulated and transformed by the human central nervous system. Fred Dretske's extensive philosophical defense of a theory of informational content (semantic information) based upon the Shannon–Weaver formal theory of information is subjected to critical scrutiny. A major difficulty is identified in Dretske's equivocations in the use of the concept of a signal bearing informational content. Gibson's alternative conception of information (construed as analog by Dretske), while avoiding many of the problems located in the conventional use of signal, raises different but equally serious questions. It is proposed that, taken literally, the human CNS does not extract or process information at all; rather, whatever information is construed as locatable in the CNS is information only for an observer-theorist and only for certain purposes. "Blood courses through our veins, and information through our central nervous system." — A Neuropsychology Textbook.

14.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and let (M, #) denote the free orthogonality monoid over (X, #). Let C0(M, #) be the subset of C(M, #) consisting of all closures of bounded orthogonal sets. We show that C0(M, #) is a suborthomodular lattice of C(M, #), and we provide a necessary and sufficient condition for C0(M, #) to carry a full set of dispersion-free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

15.
Summary: This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.
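A toy version of such a cost function, assuming an encoding is an injection of guest vertices into host vertices, guest edges are charged their usage-weighted host shortest-path distance, and host implementation costs are uniform (all names hypothetical):

```python
from itertools import permutations

def shortest_dist(host_adj, s, t):
    """BFS distance in the host graph (adjacency dict)."""
    frontier, seen, d = {s}, {s}, 0
    while frontier:
        if t in frontier:
            return d
        frontier = {v for u in frontier for v in host_adj[u]} - seen
        seen |= frontier
        d += 1
    return float("inf")

def encoding_cost(guest_edges, usage, host_adj, placement):
    """Sum over guest edges of usage-weighted host distance."""
    return sum(usage[e] * shortest_dist(host_adj,
                                        placement[e[0]], placement[e[1]])
               for e in guest_edges)

def best_encoding(guest_nodes, guest_edges, usage, host_adj):
    """Brute force over injections: fine only for tiny graphs."""
    hosts = list(host_adj)
    return min(({g: h for g, h in zip(guest_nodes, perm)}
                for perm in permutations(hosts, len(guest_nodes))),
               key=lambda pl: encoding_cost(guest_edges, usage, host_adj, pl))

# guest: path a-b-c with heavily used edge (a,b); host: 4-cycle 0-1-2-3
host = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
usage = {("a", "b"): 10, ("b", "c"): 1}
print(best_encoding(["a", "b", "c"], list(usage), usage, host))
```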

16.
This paper deals with distributed transactions that employ self-stabilization, in the sense that the transaction is possibly extended by corrective actions (stabilizers) to repair violated integrity constraints. The implementation of such an extended distributed transaction is discussed in both macro (inter-transaction) and micro (intra-transaction) aspects. At the macro level we discuss the distributed stabilizers execution protocol, the relationship with the 2PC algorithm (the reinfection problem), and cooperative constraint evaluation and transaction termination. The micro-level issues discussed relate to the interaction among different stabilizers. Three alternative protocols are discussed: the basic S-Lock protocol, the extended S-Lock protocol, and the cluster-graph-based protocol. These protocols are compared and discussed. The paper concludes with a discussion of some general properties such as determinism and termination.

17.
Summary: A proof rule for the procedure call is derived for procedures with value, result and value-result parameters. It is extended to procedures with unrestricted global variables and to recursive procedures. Like D. Gries's proof rule, it is based on the substitution rule for assignment. However, it is more general and much simpler to apply. Assume that {U} S {V} has been proved about the procedure body S. The proof rule for determining whether a call establishes predicate E is based on finding an adaptation A satisfying A ∧ V ⇒ E′, where E′ is derived from E by some substitution of parameters.

18.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ (f_uu)² du) (for the case of a function with one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work: (1) computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given; this problem is reduced to a one-dimensional optimization problem. (2) Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f | D) df; this problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate. (3) Computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
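A discrete 1-D sketch of the averaging idea: compute the regularized solution f_λ for a grid of weights λ and average them under schematic posterior weights (here, plain data likelihoods; the paper's exact one-dimensional reductions and marginalization are not reproduced):

```python
import numpy as np

def restore(d, lam):
    """Regularized solution: argmin ||f - d||^2 + lam * ||D2 f||^2."""
    n = len(d)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]  # discrete second derivative
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d)

def averaged_estimate(d, lams, sigma2=0.05):
    """Average f_lam over lambda under schematic posterior weights
    (illustrative data likelihoods, not the paper's marginalization)."""
    fs = [restore(d, lam) for lam in lams]
    logw = np.array([-np.sum((d - f) ** 2) / (2 * sigma2) for f in fs])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return sum(wi * fi for wi, fi in zip(w, fs))

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
d = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(40)
f_avg = averaged_estimate(d, lams=np.logspace(-2, 2, 20))
print(float(np.abs(f_avg - np.sin(2 * np.pi * x)).mean()))
```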

19.
In this paper, we define what we call a unitary immersion of a nonlinear system. We observe that, for classical Hamiltonian systems, this notion contains, in some sense, the concept of quantization. We restrict our attention to degree-zero unitary immersions, where all observation functions must be represented by operators of the type multiplication by a function. We show that the problem of classifying such degree-zero unitary immersions of a given nonlinear system is not obvious. In some cases, we solve this problem. Research Fellow (Chargé de Recherche) at CNRS. Lecturer (Maître de Conférences).

20.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee–Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski–Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs–Lovasz) are exponential. By model approximation, both the Klee–Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
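The Klee–Minty example mentioned above has a standard explicit form. A small generator for one common normalization (maximize Σ_j 2^(n−j) x_j subject to 2 Σ_{j<i} 2^(i−j) x_j + x_i ≤ 5^i and x ≥ 0), convenient for experimenting with pivot rules:

```python
def klee_minty(n):
    """Constraint matrix A, rhs b, objective c for the n-dimensional
    Klee-Minty cube (max c.x s.t. A x <= b, x >= 0)."""
    A = [[0] * n for _ in range(n)]
    b = []
    for i in range(n):
        for j in range(i):
            A[i][j] = 2 ** (i - j + 1)  # doubled carry-over coefficients
        A[i][i] = 1
        b.append(5 ** (i + 1))
    c = [2 ** (n - 1 - j) for j in range(n)]
    return A, b, c

A, b, c = klee_minty(3)
for row, bi in zip(A, b):
    print(row, "<=", bi)
print("maximize", c)
# Dantzig's most-negative-reduced-cost rule visits all 2^n vertices here.
```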
