Similar documents
20 similar documents found (search time: 93 ms).
1.
Summary In a simple language for finite automata based on SCCS we introduce three different delay operators. Two of these are different versions of an unbounded but finite delay operator. It is argued that the usual notion of bisimulation is inadequate, and two generalisations are proposed. In both cases we give a complete axiomatisation for finite terms of the language and prove that certain forms of induction are sound; in one case we give a complete axiomatisation.

2.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and consider the free orthogonality monoid over (X, #). Let C0 denote the subset of its lattice of closed sets consisting of all closures of bounded orthogonal sets. We show that C0 is a suborthomodular lattice, and we provide a necessary and sufficient condition for C0 to carry a full set of dispersion-free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

3.
Modeling and programming tools for neighborhood search often support invariants, i.e., data structures specified declaratively and automatically maintained incrementally under changes. This paper considers invariants for longest paths in directed acyclic graphs, a fundamental abstraction for many applications. It presents bounded incremental algorithms for arc insertion and deletion which run in O(‖δ‖ + |δ| log |δ|) time and O(‖δ‖) time respectively, where |δ| and ‖δ‖ are measures of the change in the input and output. The paper also shows how to generalize the algorithm to various classes of multiple insertions/deletions encountered in scheduling applications. Preliminary experimental results show that the algorithms behave well in practice.
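As a point of reference for what such an invariant maintains, here is a minimal from-scratch computation of longest-path lengths in a DAG via topological-order dynamic programming; it is only an illustration of the underlying abstraction (the graph encoding and function name are assumptions), not the paper's bounded incremental algorithm, which avoids recomputing this after each arc change.

```python
from collections import defaultdict, deque

def longest_path_lengths(n, arcs):
    """Longest-path length ending at each node of a DAG with nodes 0..n-1.

    arcs: list of (u, v, w) with w >= 0.
    Illustrative from-scratch version; an invariant would maintain the
    result incrementally under arc insertions and deletions.
    """
    succ = defaultdict(list)
    indeg = [0] * n
    for u, v, w in arcs:
        succ[u].append((v, w))
        indeg[v] += 1

    dist = [0] * n
    queue = deque(v for v in range(n) if indeg[v] == 0)
    while queue:
        u = queue.popleft()          # all predecessors of u are already final
        for v, w in succ[u]:
            dist[v] = max(dist[v], dist[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return dist

# Example: a small precedence graph as in a scheduling application.
print(longest_path_lengths(4, [(0, 1, 3), (0, 2, 2), (1, 3, 4), (2, 3, 7)]))
# -> [0, 3, 2, 9]
```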

4.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different conditional measures besides probability.

5.
A problem of estimating a functional parameter and functionals of it, based on observation of a solution u(t, x) of a stochastic partial differential equation, is considered. The asymptotic setting, as the noise intensity tends to 0, is investigated.

6.
Summary It is shown how the weakest precondition approach to proving total correctness of nondeterministic programs can be formalized in infinitary logic. The weakest precondition technique is extended to hierarchically structured programs by adding a new primitive statement for operational abstraction, the nondeterministic assignment statement, to the guarded commands of Dijkstra. The infinitary logic L_{ω1ω} is shown to be strong enough to express the weakest preconditions for Dijkstra's guarded commands, but too weak for the extended guarded commands. Two possible solutions are considered: going to the essentially stronger infinitary logic L_{ω1ω1}, and restricting the power of the nondeterministic assignment statement in a way which allows the weakest preconditions to be expressed in L_{ω1ω}.
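For orientation, the standard weakest-precondition rules for Dijkstra's guarded commands are sketched below, together with one common reading of a nondeterministic assignment x := x'.Q (choose any x' satisfying Q, aborting if none exists); the notation and the rule for the extended statement are assumptions here, not quoted from the paper.

```latex
% Standard wp rules for guarded commands, plus one common (assumed) reading
% of the nondeterministic assignment used for operational abstraction.
\begin{align*}
  wp(x := E,\; R) &= R[x := E] \\
  wp(S_1 ; S_2,\; R) &= wp(S_1,\; wp(S_2, R)) \\
  wp(\mathbf{if}\ g_1 \to S_1 \;[\!]\; g_2 \to S_2\ \mathbf{fi},\; R)
     &= (g_1 \lor g_2) \land (g_1 \Rightarrow wp(S_1, R)) \land (g_2 \Rightarrow wp(S_2, R)) \\
  wp(x := x'.Q,\; R) &= (\exists x'.\, Q) \land (\forall x'.\, Q \Rightarrow R[x := x'])
\end{align*}
```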

7.
The term F-cardinality of ℱ (= F-card(ℱ)) is introduced, where F: ℝⁿ → ℝⁿ is a partial function and ℱ is a set of partial functions f: ℝⁿ → ℝⁿ. The F-cardinality yields a lower bound for the worst-case complexity of computing F if only functions f ∈ ℱ can be evaluated by the underlying abstract automaton without conditional jumps. This complexity bound is independent from the oracles available for the abstract machine. Thus it is shown that any automaton which can only apply the four basic arithmetic operations needs Ω(n log n) worst-case time to sort n numbers; this result is even true if conditional jumps with arbitrary conditions are possible. The main result of this paper is the following: given a total function F: ℝⁿ → ℝⁿ and a natural number k, it is almost always possible to construct a set ℱ such that its F-cardinality has the value k; in addition, ℱ can be required to be closed under composition of functions f, g ∈ ℱ. Moreover, if F is continuous, then ℱ consists of continuous functions.

8.
On improving the accuracy of the Hough transform
The subject of this paper is very high precision parameter estimation using the Hough transform. We identify various problems that adversely affect the accuracy of the Hough transform and propose a new, high accuracy method that consists of smoothing the Hough array H(ρ, θ) prior to finding its peak location and interpolating about this peak to find a final sub-bucket peak. We also investigate the effect of the quantizations Δρ and Δθ of H(ρ, θ) on the final accuracy. We consider in detail the case of finding the parameters of a straight line. Using extensive simulation and a number of experiments on calibrated targets, we compare the accuracy of the method with results from the standard Hough transform method of taking the quantized peak coordinates, with results from taking the centroid about the peak, and with results from least squares fitting. The largest set of simulations covers a range of line lengths and Gaussian zero-mean noise distributions. This noise model is ideally suited to the least squares method, and yet the results from our method compare favorably. Compared to the centroid or to standard Hough estimates, the results are significantly better: better than the standard Hough estimates by a factor of 3 to 10. In addition, the simulations show that as Δρ and Δθ are increased (i.e., made coarser), the sub-bucket interpolation maintains a high level of accuracy. Experiments using real images are also described, and in these the new method has errors smaller by a factor of 3 or more compared to the standard Hough estimates.
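As an illustration of the general idea (not the paper's exact procedure), the following sketch smooths a Hough accumulator and refines the quantized peak to a sub-bucket location by parabolic interpolation; the function name, the 3x3 smoothing kernel, and the wrap-around edge handling are assumptions.

```python
import numpy as np

def subbucket_peak(H):
    """Estimate a sub-bucket peak location in a Hough accumulator H[rho_bin, theta_bin].

    Illustrative sketch: smooth the array, take the quantized peak, then refine
    each coordinate by fitting a parabola through the peak and its two neighbours.
    """
    # Light smoothing with a 3x3 box filter (edges handled by wrap-around for brevity).
    Hs = np.zeros_like(H, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            Hs += np.roll(np.roll(H, di, axis=0), dj, axis=1)
    Hs /= 9.0

    i, j = np.unravel_index(np.argmax(Hs), Hs.shape)

    def refine(fm, f0, fp):
        # Vertex offset of the parabola through (-1, fm), (0, f0), (1, fp).
        denom = fm - 2.0 * f0 + fp
        return 0.0 if denom == 0 else 0.5 * (fm - fp) / denom

    di = refine(Hs[i - 1, j], Hs[i, j], Hs[i + 1, j]) if 0 < i < Hs.shape[0] - 1 else 0.0
    dj = refine(Hs[i, j - 1], Hs[i, j], Hs[i, j + 1]) if 0 < j < Hs.shape[1] - 1 else 0.0
    return i + di, j + dj   # fractional (rho_bin, theta_bin) indices
```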

9.
In Pāṇini's grammar of Sanskrit one finds the Śivasūtras, a table which defines the natural classes of phonological segments in Sanskrit by intervals. We present a formal argument which shows that, using his representation method, Pāṇini's way of ordering the phonological segments to represent the natural classes is optimal. The argument is based on a strictly set-theoretical point of view depending only on the set of natural classes, and does not explicitly take into account the phonological features of the segments, which are, however, implicitly given in the way a language clusters its phonological inventory. The key idea is to link the graph of the Hasse diagram of the set of natural classes closed under intersection to Śivasūtra-style representations of the classes. Moreover, the argument is so general that it allows one to decide for each set of sets whether it can be represented with Pāṇini's method. Actually, Pāṇini had to modify the set of natural classes to define it by the Śivasūtras (the segment h plays a special role). We show that this modification was necessary and, in fact, the best possible modification. We discuss how every set of classes can be modified in such a way that it can be defined in a Śivasūtra-style representation.

10.
This paper is a discussion of how the Application Perspective works in practice. We talk about values and attitudes towards system development and computer systems, and we illustrate how they have been carried out in practice with examples from the Florence project. The metaphors "utensil" and "epaulet" refer to questions about how we conceive of the computer system we are to design in the system development process. Our experience is that, in the scientific community, technical challenges mean making computer systems that may be characterised as epaulets: they have fancy technical features, but are not particularly useful. Making small, simple, but useful computer systems, more like utensils, does not earn as much credit, even if the development process may be just as challenging.

11.
Kulpa, Zenon. Reliable Computing, 2003, 9(3): 205-228
Using the results obtained for the one-dimensional case in Part I of the paper (Reliable Computing 9(1) (2003), pp. 1-20), an analysis of the two-dimensional relational expression a1 x1 + a2 x2 ⋄ b, where ⋄ is one of the order or equality relations, is conducted with the help of a midpoint-radius diagram and other auxiliary diagrams. The solution sets are obtained with a simple boundary-line selection rule derived using these tools, and are characterized by the types of one-dimensional cuts through the solution space. A classification of the basic possible solution types is provided in detail. A generalization of the approach to n-dimensional interval systems and avenues for further research are also outlined.

12.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast. On a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation, in which variables are represented by pointers to a binder. The program can compile an Automath text into one big Automath single-line-style λ-term, which it outputs using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn are different from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web.

13.
Summary Tsokos [12] showed the existence of a unique random solution of the random Volterra integral equation (*) x(t; ω) = h(t; ω) + ∫0t k(t, τ; ω) f(τ, x(τ; ω)) dτ, where ω ∈ Ω, the supporting set of a probability measure space (Ω, A, P). It was required that f satisfy a Lipschitz condition in a certain subset of a Banach space. By using an extension of Banach's contraction-mapping principle, it is shown here that a unique random solution of (*) exists when f is uniformly locally Lipschitz in the same subset of the Banach space considered in [12].
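For intuition about the contraction-mapping argument, here is a small deterministic sketch (ω held fixed) that solves a Volterra equation of this form by successive approximations; the discretization, the function names, and the test equation are illustrative assumptions, not material from the paper.

```python
import numpy as np

def volterra_fixed_point(h, k, f, T=1.0, n=200, iters=30):
    """Successive approximations for x(t) = h(t) + int_0^t k(t, s) f(s, x(s)) ds.

    Deterministic illustration of the Banach fixed-point idea behind the
    existence/uniqueness argument; the paper treats the random version x(t; omega).
    """
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    x = h(t).astype(float)                 # initial guess x_0 = h
    for _ in range(iters):
        fx = f(t, x)
        new = h(t).copy()
        for i in range(n):                 # simple Riemann-sum quadrature on [0, t_i]
            new[i] += dt * np.sum(k(t[i], t[: i + 1]) * fx[: i + 1])
        x = new
    return t, x

# Example with a Lipschitz f: for x(t) = 1 + int_0^t x(s) ds the solution is exp(t).
t, x = volterra_fixed_point(h=lambda t: np.ones_like(t),
                            k=lambda ti, s: np.ones_like(s),
                            f=lambda s, xs: xs)
print(float(x[-1]))   # close to e ~ 2.718, up to quadrature error
```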

14.
We show that a number of geometric problems can be solved on an n × n mesh-connected computer (MCC) in O(n) time, which is optimal to within a constant factor, since a nontrivial data movement on an MCC requires Ω(n) time. The problems studied here include multipoint location, planar point location, trapezoidal decomposition, intersection detection, intersection of two convex polygons, Voronoi diagram, the largest empty circle, the smallest enclosing circle, etc. The O(n) algorithms for all of the above problems are based on the classical divide-and-conquer problem-solving strategy. This work was supported in part by the National Science Foundation under Grant DCR 8420814. A preliminary version was presented at the 1987 FJCC, Dallas, TX.

15.
In a model for a measure of computational complexity, for a partial recursive function t, let R_t denote the set of all partial recursive functions having the same domain as t and computable within time t. Consider the family {R_t | t is recursive} together with its subfamily consisting of those R_t for which t is actually the running time function of a computation; both are partially ordered under set-theoretic inclusion. These partial orderings have been extensively investigated by Borodin, Constable and Hopcroft in [3]. In this paper we present a simple uniform proof of some of their results. For example, we give a procedure for easily constructing a measure of computational complexity for which one of these families is not dense while the other is dense. In our opinion, our technique is so transparent that it indicates that certain questions of density are not intrinsically interesting for general abstract measures of computational complexity. (This is not to say that similar questions are necessarily uninteresting for specific models.) Supported by NSF Research Grants GP6120 and GJ27127.

16.
The asymptotic behavior of the ε-entropy of ellipsoids in an n-dimensional Hamming space whose coefficients take only two different values is investigated as n → ∞. Explicit expressions for the main terms of the asymptotic expansion of the ε-entropy of such ellipsoids are obtained under various relations between ε and the parameters that define these ellipsoids.

17.
This paper is a study of the existence of polynomial-time Boolean connective functions for languages. A language L has an AND function if there is a polynomial-time f such that f(x, y) ∈ L ⟺ x ∈ L and y ∈ L. L has an OR function if there is a polynomial-time g such that g(x, y) ∈ L ⟺ x ∈ L or y ∈ L. While all NP-complete sets have these functions, Graph Isomorphism, which is probably not complete, is also shown to have both AND and OR functions. The results in this paper characterize the complete sets for the classes D^p and P^{SAT[O(log n)]} in terms of AND and OR, and relate these functions to the structure of the Boolean hierarchy and the query hierarchies. Also, this paper shows that the complete sets for the levels of the Boolean hierarchy above the second level cannot have AND or OR unless the polynomial hierarchy collapses. Finally, most of the structural properties of the Boolean hierarchy and query hierarchies are shown to depend only on the existence of AND and OR functions for the NP-complete sets. The first author was supported in part by NSF Research Grants DCR-8520597 and CCR-88-23053, and by an IBM Graduate Fellowship.
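As a concrete and well-known instance of such a connective, SAT has an AND function: rename the variables of one formula apart from the other and conjoin. The sketch below assumes a DIMACS-style clause-list representation; the helper name is illustrative, not the paper's.

```python
def sat_and(phi, psi):
    """AND function for SAT (illustrative): the result is satisfiable iff both inputs are.

    Formulas are CNF clause lists over positive/negative integer literals.
    Shifting psi's variables to a fresh range makes the two formulas independent,
    so their conjunction (clause-list concatenation) is satisfiable exactly when
    each formula is satisfiable on its own.
    """
    offset = max((abs(l) for clause in phi for l in clause), default=0)
    shifted = [[l + offset if l > 0 else l - offset for l in clause] for clause in psi]
    return phi + shifted

# (x1 or x2) AND-combined with (y1) -> [[1, 2], [3]]
print(sat_and([[1, 2]], [[1]]))
```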

18.
A dialectical model of assessing conflicting arguments in legal reasoning
Inspired by legal reasoning, this paper presents a formal framework for assessing conflicting arguments. Its use is illustrated with applications to realistic legal examples, and the potential for implementation is discussed. The framework has the form of a logical system for defeasible argumentation. Its language, which is of a logic-programming-like nature, has both weak and explicit negation, and conflicts between arguments are decided with the help of priorities on the rules. An important feature of the system is that these priorities are not fixed, but are themselves defeasibly derived as conclusions within the system. Thus debates on the choice between conflicting arguments can also be modelled. The proof theory of the system is stated in dialectical style, where a proof takes the form of a dialogue between a proponent and an opponent of an argument. An argument is shown to be justified if the proponent can make the opponent run out of moves in whatever way the opponent attacks. Despite this dialectical form, the system reflects a declarative, or relational, approach to modelling legal argument. A basic assumption of this paper is that this approach complements two other lines of research in AI and Law: investigations of precedent-based reasoning and the development of procedural, or dialectical, models of legal argument. Supported by a research fellowship of the Royal Netherlands Academy of Arts and Sciences, and by Esprit WG 8319 Modelage.

19.
The computation of the geometrical parameter identification model of a robot arm leads to an intrinsic extended Jacobian matrix. This matrix is often singular, and its manual computation is tedious and may lead to error. A computer-assisted procedure is developed to compute this matrix symbolically and to check its singularity. The program GPIM is developed in Pascal and runs on PCs for this purpose. Jacobian singularity is checked using its symbolic expressions with the aid of a proposed table of robot arm parameters, in order to obtain a minimum set of identified parameters. The program can be used for simple-chain robot arms having revolute (R) and/or prismatic (P) joints, including the case of consecutive near-parallel axes. The TH8 robot arm of type RPPRRR, which has two pairs of consecutive near-parallel axes, is studied to show the program's efficiency and how singularity is eliminated.
Nomenclature:
- a_i: Denavit-Hartenberg parameter.
- C_i, C_i and C_12: cosines of the corresponding angles and of the sum of the first two.
- J_0, J: basic Jacobian and Jacobian matrices.
- J_a, J_r and the remaining J submatrices: sub-Jacobian matrices.
- J_0a, J_0r and the remaining J_0 submatrices: intrinsic sub-Jacobian matrices.
- m: workspace dimensions.
- n: number of moving links of the robot arm.
- p: number of pairs of consecutive near-parallel axes.
- P_{i,i+1} and P_{1i}: [3×1] position vectors.
- Q_pi, Q_ai, Q_ri and the remaining Q matrices: [4×4] differential operator matrices.
- r_i: Denavit-Hartenberg parameter.
- R_1, R_i, R_{i+1} and R_{n+1}: base, link i-1, link i and link n coordinate frames.
- R_{i,i+1} and R_{1i}: [3×3] orientation matrices.
- S_i, S_i and S_12: sines of the corresponding angles and of the sum of the first two.
- T_{i,i+1} and T_{1i}: [4×4] homogeneous transformation matrices.
- X: [m×1] vector of robot arm operational coordinates.
- 1st, 2nd and 3rd columns of the orientation matrices, and the associated tensors.
- Denavit-Hartenberg angle parameters and the twist angle parameter.
- ΔX: changes in robot arm operational coordinates.
- Changes in robot arm geometrical parameters, and their least-squares and ridge-regression estimated changes.
- Robot geometrical parameters and their correct and nominal values.
- [4×4] differential operator matrices for the individual parameters.
- [3×3] conversion matrix and its inverse.

20.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally, we give a sufficient condition on an arbitrary class of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from this class. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.
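To make the function class concrete, the sketch below builds the pointwise maximum of k affine functions from R^d into R, which is exactly a convex piecewise linear function; the names and the tiny example are illustrative, and the learning algorithm itself is not reproduced here.

```python
import numpy as np

def piecewise_linear_max(weights, biases):
    """Return f(x) = max_j (w_j . x + b_j), the pointwise maximum of k affine
    functions R^d -> R; such an f is always convex and piecewise linear."""
    W = np.asarray(weights, dtype=float)   # shape (k, d)
    b = np.asarray(biases, dtype=float)    # shape (k,)
    def f(x):
        return float(np.max(W @ np.asarray(x, dtype=float) + b))
    return f

# Example in d = 1: max(x, 2 - x) has a kink at x = 1.
f = piecewise_linear_max([[1.0], [-1.0]], [0.0, 2.0])
print(f([0.0]), f([1.0]), f([3.0]))   # 2.0 1.0 3.0
```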
