Similar Documents (20 results)
1.
We provide techniques to integrate resolution logic with equality in type theory. The results may be rendered as follows: a clausification procedure in type theory, equipped with a correctness proof, all encoded using higher-order primitive recursion; a novel representation of clauses in minimal logic such that the λ-representation of resolution steps is linear in the size of the premisses; a translation of resolution proofs into lambda terms, yielding a verification procedure for those proofs; and availability of the power of resolution theorem provers in interactive proof construction systems based on type theory.
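As background, the resolution step that such a clause representation must encode is the usual binary resolution rule (a textbook statement, not the paper's type-theoretic encoding):

```latex
% Binary resolution: two clauses with complementary literals p and
% \neg p yield the resolvent formed from the remaining literals.
\[
\frac{C \vee p \qquad D \vee \neg p}{C \vee D}
\quad\text{(resolution)}
\]
```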

2.
We formalize natural deduction for first-order logic in the proof assistant Coq, using de Bruijn indices for variable binding. The main judgment we model is of the form Γ ⊢ d : φ, stating that d is a proof term of formula φ under hypotheses Γ; by the Curry–Howard isomorphism it can be viewed as a typing relation. This relation is proved sound with respect to Coq's native logic and is amenable to the manipulation of formulas and of derivations. As an illustration, we define a reduction relation on proof terms with permutative conversions and prove the property of subject reduction.
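The de Bruijn encoding the abstract relies on can be illustrated generically (a sketch in Python, not the paper's Coq development; the class and function names are invented for illustration):

```python
from dataclasses import dataclass

# First-order terms with de Bruijn indices: a variable is the
# number of binders between its occurrence and its binder.
@dataclass
class Var:
    index: int

@dataclass
class App:
    fun: str          # function symbol
    args: tuple       # argument terms

def shift(t, d, cutoff=0):
    """Add d to every variable index >= cutoff (used when a term
    is moved under an additional binder)."""
    if isinstance(t, Var):
        return Var(t.index + d) if t.index >= cutoff else t
    return App(t.fun, tuple(shift(a, d, cutoff) for a in t.args))

# Moving f(x0, x1) under one new binder increments both free
# variable indices, yielding Var(1) and Var(2):
print(shift(App("f", (Var(0), Var(1))), 1))
```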

3.
The proofs of the Church–Rosser theorems for β, η, and βη reduction in untyped λ-calculus are formalized in Isabelle/HOL, an implementation of Higher Order Logic in the generic theorem prover Isabelle. For β-reduction, both the standard proof and Takahashi's are given and compared. All proofs are based on a general theory of commuting relations that supports an almost geometric style of reasoning about confluence diagrams.
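The notion of commuting relations behind these proofs can be stated in one line (standard confluence terminology, not quoted from the formalization):

```latex
% R and S commute when every divergence can be closed; confluence
% of a single relation is the special case S = R.
\[
\forall a\, b\, c:\;
a \mathrel{R^{*}} b \;\wedge\; a \mathrel{S^{*}} c
\;\Longrightarrow\;
\exists d:\; b \mathrel{S^{*}} d \;\wedge\; c \mathrel{R^{*}} d
\]
```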

4.
Gérard Ligozat, Constraints, 1998, 3(2-3): 165-177
This paper proves a key result in the maximality proof of ORD-Horn relations, namely, the fact that any subclass of Allen's algebra which contains all atomic relations, is closed under conversion, intersection and composition, and contains a relation which is not ORD-Horn must contain at least one (in fact, at least two) of four specific relations, the corner relations. Our proof uses the structural properties of ORD-Horn relations, whereas the original proof was by machine enumeration.

5.
Pizer and Eberly introduced the core as the analogue of the medial axis for greyscale images. For two-dimensional images, it is obtained as the ridge of a medial function defined on (2+1)-dimensional scale space. The medial function is defined using Gaussian blurring and measures the extent to which a point is in the center of the object as measured at a given scale. Numerical calculations indicate the core has properties quite different from the medial axis. In this paper we give the generic properties of ridges and cores for two-dimensional images and explain the discrepancy between core and medial axis properties. We place cores in a larger relative critical set structure, which coherently relates disjoint pieces of core. We also give the generic transitions which occur for sequences of images varying with a parameter such as time. The genericity implies the stability of the full structure in any compact viewing area of scale space under sufficiently small L2 perturbations of the image intensity function. We indicate consequences for finding cores and also for adding markings to completely determine the structure of the medial function.
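The scale-space setting can be made concrete with the standard Gaussian-blur definition (a textbook form; the paper's medial function is built from such blurred values, with specifics beyond this abstract):

```latex
% Gaussian blurring at scale sigma; the blurred image lives on
% (2+1)-dimensional scale space with coordinates (x, y, sigma).
\[
I(x,y,\sigma) = (G_{\sigma} * I)(x,y),
\qquad
G_{\sigma}(x,y) = \frac{1}{2\pi\sigma^{2}}
\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)
\]
```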

6.
This paper is an informal introduction to the theory of types which uses a connective for the intersection of two types and a constant for a universal type, besides the usual connective for function types. This theory was first devised in about 1977 by Coppo, Dezani and Sallé in the context of λ-calculus, and its main development has been by Coppo and Dezani and their collaborators in Turin. With suitable axioms and rules to assign types to λ-calculus terms, they obtained a system in which (i) the set of types given to a term does not change under β-conversion, (ii) some interesting sets of terms, for example the solvable terms and the terms with normal form, can be characterised exactly by the types of their members, and (iii) the type apparatus is not so complex as polymorphic systems with quantifier-containing types and therefore probably not so expensive to implement mechanically as those systems.

There are in fact several variant systems with different detailed properties. This paper defines and motivates the simplest one, from which the others are derived, and describes its most basic properties. No proofs are given, but the motivation is shown by examples. A comprehensive bibliography is included.
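The two characteristic rules of such systems can be shown schematically (standard presentations of intersection-type systems, not quoted from the paper):

```latex
% Intersection introduction and the universal type omega:
\[
\frac{\Gamma \vdash M : \sigma \qquad \Gamma \vdash M : \tau}
     {\Gamma \vdash M : \sigma \wedge \tau}\;(\wedge\text{I})
\qquad\qquad
\frac{}{\Gamma \vdash M : \omega}\;(\omega)
\]
```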

7.
ART: A Hybrid Classification Model
This paper presents a new family of decision list induction algorithms based on ideas from the association rule mining context. ART, which stands for Association Rule Tree, builds decision lists that can be viewed as degenerate, polythetic decision trees. Our method is a generalized Separate and Conquer algorithm suitable for Data Mining applications because it makes use of efficient and scalable association rule mining techniques.
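A generic separate-and-conquer loop, of which ART is described as a generalization, looks roughly like this (an illustrative sketch only; `find_best_rule` and `rule.covers` stand in for the paper's association-rule mining machinery):

```python
def separate_and_conquer(examples, find_best_rule, default_label):
    """Grow a decision list: repeatedly learn one rule, then remove
    ('separate') the examples it covers and recurse on the rest."""
    decision_list = []
    while examples:
        rule = find_best_rule(examples)   # e.g. a mined association rule
        if rule is None:                  # no acceptable rule remains
            break
        decision_list.append(rule)
        examples = [e for e in examples if not rule.covers(e)]
    decision_list.append(("default", default_label))
    return decision_list
```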

8.
Summary In this paper we give a general definition of what is meant by the total-step, single-step and successive relaxation iterative methods, and we apply these concepts to systems of linear equations. In the special case of a matrix with zero diagonal entries we obtain the well-known Jacobi, Gauss-Seidel and relaxation iterative methods. Theorem 1 gives conditions for the convergence of the single-step iterative method for general, non-negative matrices. The proof is similar to that given by Stein and Rosenberg [2] (1948) for a special case. A corollary gives conditions for the convergence of the relaxation iterative method for non-negative matrices. We further prove Theorem 2 on the convergence of the relaxation iterative method for diagonally dominant matrices.
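The two classical iterations differ only in whether updated components are used immediately. A minimal NumPy sketch for Ax = b (illustrative; assumes a nonzero diagonal, and is not taken from the paper):

```python
import numpy as np

def jacobi_step(A, b, x):
    """Total-step method: every component uses the old iterate."""
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def gauss_seidel_step(A, b, x):
    """Single-step method: each component uses the freshest values."""
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(25):
    x = gauss_seidel_step(A, b, x)
print(x, np.linalg.solve(A, b))          # both approach [1/6, 1/3]
```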

9.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and let (X*, #*) denote the free orthogonality monoid over (X, #). Let C0(X*, #*) be the subset of C(X*, #*) consisting of all closures of bounded orthogonal sets. We show that C0(X*, #*) is a suborthomodular lattice of C(X*, #*) and we provide a necessary and sufficient condition for C0(X*, #*) to carry a full set of dispersion-free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

10.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast. On a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation. In this representation variables are represented by pointers to a binder. The program can compile an Automath text into one big Automath single-line-style λ-term. It outputs such a term using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn are different from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web.
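The graph representation mentioned here can be sketched in a few lines: instead of naming variables or numbering them, each variable node holds a direct reference to its binder (illustrative Python, not the C implementation):

```python
class Lam:
    """Binder node; variables point at it directly."""
    def __init__(self):
        self.body = None

class Var:
    def __init__(self, binder):
        self.binder = binder          # pointer to a Lam node

class App:
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

# Build  \x. \y. x y  with no names and no de Bruijn indices:
x, y = Lam(), Lam()
y.body = App(Var(x), Var(y))
x.body = y

# Alpha-equivalence comes for free: identity of binder pointers is
# all that distinguishes one variable from another.
```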

11.
This paper initiates a study of the quantitative aspects of randomness in interactive proofs. Our main result, which applies to the equivalent form of IP known as Arthur-Merlin (AM) games, is a randomness-efficient technique for decreasing the error probability. Given an AM proof system for L which achieves error probability 1/3 at the cost of Arthur sending l(n) random bits per round, and given a polynomial k = k(n), we show how to construct an AM proof system for L which, in the same number of rounds as the original proof system, achieves error 2^{-k(n)} at the cost of Arthur sending only O(l+k) random bits per round. Underlying the transformation is a novel sampling method for approximating the average value of an arbitrary function f: {0,1}^l → [0,1]. The method evaluates the function on O(ε^{-2} log δ^{-1}) sample points generated using only O(l + log δ^{-1}) coin tosses to get an estimate which, with probability at least 1 − δ, is within ε of the average value of the function.
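For orientation, the naive (randomness-inefficient) sample bound follows from Hoeffding's inequality, a standard calculation included here for comparison:

```latex
% Hoeffding's inequality for N independent samples of f with range
% [0,1]: the empirical mean \hat{\mu} satisfies
% Pr[|\hat{\mu}-\mu| > eps] <= 2 exp(-2 N eps^2), so taking
\[
N \;\ge\; \frac{1}{2\varepsilon^{2}}\,\ln\frac{2}{\delta}
\qquad\text{gives}\qquad
\Pr\bigl[\,|\hat{\mu}-\mu| > \varepsilon\,\bigr] \;\le\; \delta .
\]
% Independent sampling, however, costs N * l random bits; the
% paper's sampler achieves the same guarantee with only
% O(l + log(1/delta)) coin tosses.
```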

12.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.

13.
We present a deep X-ray mask with integrated bent-beam electrothermal actuator for the fabrication of 3D microstructures with curved surfaces. The mask absorber is electroplated on the shuttle mass, which is supported by a pair of 20-μm-thick single crystal silicon bent-beam electrothermal actuators and oscillated in a rectilinear direction due to the thermal expansion of the bent-beams. The width of each bent-beam is 10 μm or 20 μm, the length and bending angle are 1 mm and 0.1 rad, respectively, and the shuttle mass size is 1 mm × 1 mm. For 10-μm-wide bent-beams, the shuttle mass displacement is around 15 μm at 180 mW (3.6 V) dc input power. For 20-μm-wide bent-beams, the shuttle mass displacement is around 19 μm at 336 mW (4.2 V) dc input power. Sinusoidal cross-sectional PMMA microstructures with a pitch of 40 μm and a height of 20 μm are fabricated by 0.5 Hz, 20-μm-amplitude sinusoidal shuttle mass oscillation. This research, under the contract project code MS-02-338-01, has been supported by the Intelligent Microsystem Center, which carries out one of the 21st century's Frontier R&D Projects sponsored by the Korea Ministry of Science & Technology. Experiments at PLS were supported in part by MOST and POSCO.

14.
This paper shows how proof nets can be used to formalize the notion of incomplete dependency used in psycholinguistic theories of the unacceptability of center embedded constructions. Such theories of human language processing can usually be restated in terms of geometrical constraints on proof nets. The paper ends with a discussion of the relationship between these constraints and incremental semantic interpretation.

15.
Personalization and adaptation techniques are an interesting opportunity for designing new services on board vehicles. In this context, in fact, the need of an individual user to receive the right service at the right time and in the right way is more critical than in other settings where personalization and adaptation have already shown interesting advantages. At the same time, this context of application can provide new interesting insights for user modeling and adaptation. In the paper we present an architecture for providing personalized services on board vehicles and we discuss an application to the case of tourist information. We focus on the choices we made to design an on-board system that was as unobtrusive and undistracting as possible and that could adapt its recommendations, the way it presents them, and its own behavior to the user's preferences/interests and to the context of interaction (especially the driving conditions).

16.
Summary Making use of the fact that two-level grammars (TLGs) may be thought of as finite specifications of context-free grammars (CFGs) with infinite sets of productions, known techniques for parsing CFGs are applied to TLGs by first specifying a canonical CFG G', called the skeleton grammar, obtained from the cross-reference of the TLG G. Under very natural restrictions it can be shown that for these grammar pairs (G, G') there exists a 1-1 correspondence between leftmost derivations in G and leftmost derivations in G'. With these results a straightforward parsing algorithm for restricted TLGs is given.

17.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced in the hypothesis testing process. The entire decision space is partitioned into distinct regions of correct, uncertain and incorrect decisions. The first layer of decisions is made by each sensor independently, based on a set of optimal decision rules. The fusion process is performed by treating the fusion center as an additional virtual sensor in the system. This virtual sensor makes its decision based on the decisions reached by the set of sensors in the system. The optimal decision rules are derived by minimizing the Bayes risk function. As a consequence, the performance of the system as well as of individual sensors can be quantified by the probabilities of correct, incorrect and uncertain decisions. Numerical examples of three-hypothesis, two- and four-sensor systems are presented to illustrate the proposed scheme.
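One simple way to realize the uncertain-region idea is a posterior threshold per sensor, with the fusion center voting over the sensor decisions (an illustrative toy sketch; the paper derives its rules by minimizing a Bayes risk function, which this version does not do):

```python
import numpy as np

UNCERTAIN = -1

def sensor_decide(likelihoods, priors, threshold=0.7):
    """First layer: pick a hypothesis only if its posterior is
    confident enough; otherwise report UNCERTAIN."""
    post = likelihoods * priors
    post /= post.sum()
    k = int(np.argmax(post))
    return k if post[k] >= threshold else UNCERTAIN

def fuse(decisions, num_hypotheses):
    """Second layer: the fusion center acts as a virtual sensor,
    majority-voting over the non-uncertain sensor decisions."""
    votes = [d for d in decisions if d != UNCERTAIN]
    if not votes:
        return UNCERTAIN
    counts = np.bincount(votes, minlength=num_hypotheses)
    return int(np.argmax(counts))

priors = np.array([1/3, 1/3, 1/3])            # three hypotheses
readings = [np.array([0.8, 0.1, 0.1]),        # per-sensor likelihoods
            np.array([0.4, 0.35, 0.25]),
            np.array([0.7, 0.2, 0.1])]
decisions = [sensor_decide(r, priors) for r in readings]
print(decisions, "->", fuse(decisions, 3))    # [0, -1, 0] -> 0
```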

18.
This paper is concerned with the structure of texts in which a proof is presented. Some parts of such a text are assumptions, other parts are conclusions. We show how the structural organisation of the text into assumptions and conclusions helps to check the validity of the proof. Then we go on to use the structural information for the formulation of proof rules, i.e., rules for the (re-)construction of proof texts. The running example is intuitionistic propositional logic with connectives →, ∧ and ∨. We give new proofs of some familiar results about the proof theory of this logic to indicate how the new techniques work out.

19.
We investigate the problem of estimating the proportion vector which maximizes the likelihood of a given sample for a mixture of given densities. We adapt a framework developed for supervised learning and give simple derivations for many of the standard iterative algorithms like gradient projection and EM. In this framework, the distance between the new and old proportion vectors is used as a penalty term. The squared distance leads to the gradient projection update, and the relative entropy to a new update which we call the exponentiated gradient update EG_η. Curiously, when a second-order Taylor expansion of the relative entropy is used, we arrive at an update EM_η which, for η = 1, gives the usual EM update. Experimentally, both the EM_η-update and the EG_η-update for η > 1 outperform the EM algorithm and its variants. We also prove a polynomial bound on the rate of convergence of the EG_η algorithm.
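For mixture proportions w over fixed component densities, the two updates can be written side by side (a small NumPy sketch based on the standard forms of these updates; the learning rate eta and the density matrix here are illustrative, not the paper's experiments):

```python
import numpy as np

def gradient(w, P):
    """Per-component gradient of the average log-likelihood.
    P[t, i] = density of component i at sample t."""
    mix = P @ w                         # mixture density at each sample
    return (P / mix[:, None]).mean(axis=0)

def em_update(w, P):
    """Classical EM for mixture proportions: w_i <- w_i * grad_i."""
    return w * gradient(w, P)

def eg_update(w, P, eta=1.5):
    """Exponentiated gradient: scale by exp(eta * grad), renormalize."""
    v = w * np.exp(eta * gradient(w, P))
    return v / v.sum()

rng = np.random.default_rng(0)
P = rng.uniform(0.1, 2.0, size=(500, 3))   # synthetic fixed densities
w = np.ones(3) / 3
for _ in range(100):
    w = eg_update(w, P)
print(w, w.sum())                           # proportion vector, sums to 1
```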

20.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion, or the minimal square error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function with one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work: computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given (this is reduced to a one-dimensional optimization problem); computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df (this is reduced to computing a one-dimensional integral; in the general setting, the MAP estimate is not equal to the MSE estimate); and computing the pointwise uncertainty associated with the MSE solution (this is reduced to computing three one-dimensional integrals).
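The fixed-weight version of the error functional is easy to make concrete: discretizing the smoothness term with second differences turns the minimization into a linear solve (a sketch for one fixed λ; the paper's point is precisely to go beyond a single λ):

```python
import numpy as np

def regularized_interpolant(data, lam):
    """Minimize ||f - data||^2 + lam * ||D2 f||^2 on a uniform grid,
    where D2 is the discrete second difference (smoothness term).
    Setting the gradient to zero gives (I + lam * D2^T D2) f = data."""
    n = len(data)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, data)

noisy = np.sin(np.linspace(0, np.pi, 30)) + 0.1 * np.random.randn(30)
smooth = regularized_interpolant(noisy, lam=10.0)  # larger lam -> smoother
```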
