Similar Literature
20 similar documents found (search time: 31 ms)
1.
Summary Making use of the fact that two-level grammars (TLGs) may be thought of as finite specifications of context-free grammars (CFGs) with infinite sets of productions, known techniques for parsing CFGs are applied to TLGs by first specifying a canonical CFG G′, called the skeleton grammar, obtained from the cross-reference of the TLG G. Under very natural restrictions it can be shown that for these grammar pairs (G′, G) there exists a one-to-one correspondence between leftmost derivations in G′ and leftmost derivations in G. With these results a straightforward parsing algorithm for restricted TLGs is given.
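To make the skeleton-grammar idea concrete, here is a minimal sketch (the grammar below is invented for illustration; in the paper the skeleton CFG G′ is derived from the cross-reference of the TLG G): it enumerates leftmost derivations of a target string in a toy skeleton CFG by breadth-first search over sentential forms.

```python
from collections import deque

# Hypothetical skeleton CFG (invented for illustration); in the paper the
# skeleton grammar G' is derived from the cross-reference of a TLG G.
SKELETON = {
    "S": [["N", "+", "N"]],
    "N": [["digit"], ["digit", "N"]],
}

def leftmost_derivations(start, target, max_len=8):
    """Enumerate leftmost derivations of `target` (a list of terminals)
    by breadth-first search over sentential forms."""
    queue = deque([((start,), [])])
    while queue:
        form, steps = queue.popleft()
        if len(form) > max_len:            # prune forms that grew too long
            continue
        if all(sym not in SKELETON for sym in form):
            if list(form) == target:
                yield steps
            continue
        i = next(j for j, sym in enumerate(form) if sym in SKELETON)
        for rhs in SKELETON[form[i]]:      # expand the *leftmost* nonterminal
            queue.append((form[:i] + tuple(rhs) + form[i + 1:],
                          steps + [(form[i], rhs)]))

for d in leftmost_derivations("S", ["digit", "+", "digit"]):
    print(d)   # one derivation: expand S, then each N leftmost-first
```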

2.
The cross ratio of four collinear points is of fundamental importance in model based vision, because it is the simplest numerical property of an object that is invariant under projection to an image. It provides a basis for algorithms to recognise objects from images without first estimating the position and orientation of the camera.

A quantitative analysis of the effectiveness of the cross ratio in model based vision is made. A given image I of four collinear points is classified by making comparisons between the measured cross ratio of the four image points and the cross ratios stored in the model database. The image I is accepted as a projection of an object O with cross ratio τ if |τ̂ − τ| ≤ nσtu, where σ is the standard deviation of the image noise, t is a threshold and u = …. The performance of the cross ratio is described quantitatively by the probability of rejection R, the probability of false alarm F and the probability of misclassification p(τ₁, τ₂), defined for two model cross ratios τ₁, τ₂. The trade off between these different probabilities is determined by t.

It is assumed that in the absence of an object the image points have identical Gaussian distributions, and that in the presence of an object the image points have the appropriate conditional densities. The measurements of the image points are subject to small random Gaussian perturbations. Under these assumptions the trade offs between R, F and p(τ₁, τ₂) are given to a good approximation by R = 2(1 − Φ(t)), F = r_F εt, p(τ₁, τ₂) ≈ eεt|τ₁ − τ₂|⁻¹, where ε is the relative noise level, Φ is the cumulative distribution function for the normal distribution, r_F is constant, and e is a function of τ₁ only. The trade off between R and F is obtained in Maybank (1994). In this paper the trade off between R and p(τ₁, τ₂) is obtained.

It is conjectured that the general form of the above trade offs between R, F and p(τ₁, τ₂) is the same for a range of invariants useful in model based vision. The conjecture prompts the following definition: an invariant which has trade offs between R, F, p(τ₁, τ₂) of the above form is said to be non-degenerate for model based vision.

The consequences of the trade off between R and p(τ₁, τ₂) are examined. In particular, it is shown that for a fixed overall probability of misclassification there is a maximum possible model cross ratio τ_m, and there is a maximum possible number N of models. Approximate expressions for τ_m and N are obtained. They indicate that in practice a model database containing only cross ratio values can have a size of order at most ten, for a physically plausible level of image noise, and for a probability of misclassification of the order 0.1.
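A small sketch of the classification rule described above, assuming one common ordering convention for the cross ratio; the sensitivity factor u is not recoverable from this abstract, so it is left as a placeholder parameter.

```python
import math

def cross_ratio(x1, x2, x3, x4):
    """One common convention: ((x3-x1)(x4-x2)) / ((x3-x2)(x4-x1)),
    with the xi the 1-D coordinates of four collinear points."""
    return ((x3 - x1) * (x4 - x2)) / ((x3 - x2) * (x4 - x1))

def classify(tau_measured, models, noise_sigma, t, u=1.0):
    """Accept model tau if |tau_measured - tau| <= noise_sigma * t * u.
    `u` stands in for the sensitivity factor whose definition is lost in
    this abstract; u = 1 is only a placeholder."""
    return [tau for tau in models
            if abs(tau_measured - tau) <= noise_sigma * t * u]

# Probability of rejection R = 2(1 - Phi(t)) for threshold t:
phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
t = 2.0
print("R =", 2.0 * (1.0 - phi(t)))                    # ~0.0455 for t = 2
print(classify(cross_ratio(0.0, 1.0, 2.0, 4.0),       # measured tau = 1.5
               [1.5, 1.33, 2.0], noise_sigma=0.01, t=t))
```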

3.
Summary Let L(f) be the network complexity of a Boolean function f. For any n-ary Boolean function f let TC(f) = min{…}, where p ranges over all relative Turing programs and B over all oracles such that, given the oracle B, the restriction of p to inputs of length n is a program for f; |p| is the number of instructions of p, and T_p(n) and S_p(n) are the time bound and the space bound of the program p relative to the oracle B on inputs of length n. Our main results are (1) L(f) ≤ O(TC(f)), (2) TC(f) ≤ O(L(f)^{2+ε}) for every ε > 0. The results of this paper have been reported in a main lecture at the 1975 annual meeting of the GAMM, April 2-5, Göttingen.

4.
The plane with parallel coordinates
By means of Parallel Coordinates, planar graphs of multivariate relations are obtained. Certain properties of the relationship correspond to the geometrical properties of its graph. On the plane a point-line duality with several interesting properties is induced. A new duality between bounded and unbounded convex sets and hstars (a generalization of hyperbolas), and between Convex Unions and Intersections, is found. This motivates some efficient Convexity algorithms and other results in Computational Geometry. There is also a surprising cusp-inflection-point duality. The narrative ends with a preview of the corresponding results in R^N.
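A short sketch of both constructions (data and axis spacing are invented): multivariate points drawn as polylines across parallel axes, followed by a numerical check of the point-line duality, under which all points of a planar line x2 = m·x1 + b map to segments through one common point of height b/(1 − m) for m ≠ 1.

```python
import numpy as np
import matplotlib.pyplot as plt

# Each row of `data` is a point in R^4; in parallel coordinates it becomes
# a polyline across four vertical axes placed at x = 0, 1, 2, 3.
data = np.array([[0.1, 0.8, 0.3, 0.5],
                 [0.9, 0.2, 0.7, 0.4],
                 [0.5, 0.5, 0.5, 0.5]])
axes_x = np.arange(data.shape[1])
for row in data:
    plt.plot(axes_x, row)
for x in axes_x:
    plt.axvline(x, color="k", lw=0.5)
plt.title("Parallel coordinates")
plt.show()

# Point-line duality check: points on x2 = m*x1 + b (axes at x=0 and x=d)
# give segments that all cross at height b/(1-m), independent of x1.
d, m, b = 1.0, -0.5, 0.4
for x1 in (0.0, 0.3, 0.9):
    x2 = m * x1 + b
    xs = d / (1.0 - m)                 # x-coordinate of the dual point
    print(x1 + (x2 - x1) * (xs / d))   # always b/(1-m) = 0.2666...
```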

5.
For the equation ẋ(t) = λx(t)(1 − (1/τ)∫_{t−σ−τ}^{t−σ} x(u) du), with λ > 0, σ > 0, τ > 0, conditions for the stability of a nonzero stationary solution under small perturbations are determined.
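A hedged numerical sketch of the reconstructed equation (the parameter names λ, σ, τ and all values below are assumptions): a forward Euler simulation with a constant initial history, which should settle near the nonzero stationary solution x* = 1 when that solution is stable for the chosen parameters.

```python
import numpy as np

# Assumed parameters: growth rate lam, delay sigma, averaging window tau.
lam, sigma, tau = 1.0, 0.5, 1.0
dt, T = 0.001, 40.0
n = int(T / dt)
delay = int((sigma + tau) / dt)       # history length needed by the integral
x = np.ones(n + delay) * 1.2          # constant initial history near x* = 1

for k in range(delay, n + delay - 1):
    lo = k - int((sigma + tau) / dt)  # window [t - sigma - tau, t - sigma]
    hi = k - int(sigma / dt)
    integral = np.sum(x[lo:hi]) * dt
    x[k + 1] = x[k] + dt * lam * x[k] * (1.0 - integral / tau)

print(x[-1])   # near the stationary solution x* = 1 if it is stable
```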

6.
This paper presents a new method of partition, named α-splitting, of a point set in d-dimensional space. Given a point G in a d-dimensional simplex T, T(G;i) is the subsimplex spanned by G and the ith facet of T. Let S be a set of n points in T, and let α be a sequence of nonnegative integers α₁, …, α_{d+1} satisfying Σ_{i=1}^{d+1} αᵢ = n. The α-splitter of (T, S) is a point G in T such that T(G;i) contains at least αᵢ points of S in its closure for every i = 1, 2, …, d + 1. The associated dissection is the α-splitting. The existence of an α-splitting is shown for any (T, S) and α, and two efficient algorithms for finding such a splitting are given. One runs in O(d²n log n + d³n) time, and the other runs in O(n) time if the dimension d can be considered a constant. Applications of α-splitting to mesh generation, polygonal-tour generation, and a combinatorial assignment problem are given.
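A brute-force checker for the α-splitter condition in the plane (d = 2); this is not one of the paper's efficient algorithms, the centroid below is merely a plausible candidate point G, and the point set is random.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p in triangle tri (3x2 array)."""
    a, b, c = tri
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return np.array([1 - u - v, u, v])

def is_alpha_splitter(G, tri, pts, alpha):
    """Check the alpha-splitter condition: the closed subsimplex T(G;i),
    spanned by G and the facet opposite vertex i, must contain at least
    alpha[i] of the points (boundary points count for both sides)."""
    for i in range(3):
        sub = tri.copy()
        sub[i] = G                       # replace vertex i by G -> T(G;i)
        count = sum(np.all(barycentric(p, sub) >= -1e-9) for p in pts)
        if count < alpha[i]:
            return False
    return True

rng = np.random.default_rng(0)
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w = rng.dirichlet(np.ones(3), size=30)   # 30 random points inside T
pts = w @ tri
G = pts.mean(axis=0)                     # candidate splitter: centroid of S
print(is_alpha_splitter(G, tri, pts, alpha=(10, 10, 10)))
```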

7.
A solution to the N-bit parity problem employing a single multiplicative neuron model, called translated multiplicative neuron (t-neuron), is proposed. The t-neuron presents the following advantages: (a) for any N ≥ 1, only one t-neuron is necessary, with a threshold activation function and parameters defined within a specific interval; (b) no learning procedures are required; and (c) the computational cost is the same as the one associated with a simple McCulloch-Pitts neuron. Therefore, the t-neuron solution to the N-bit parity problem has the lowest computational cost among the neural solutions presented to date.
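One plausible single-neuron construction in the spirit of the t-neuron (the exact translation and threshold values used in the paper are not given in this abstract): translate each input bit by −0.5, multiply the translated inputs, and threshold the sign of the product.

```python
from itertools import product
import numpy as np

def t_neuron_parity(bits):
    """Sketch of a single multiplicative neuron for N-bit parity:
    translate inputs by -0.5, multiply, threshold. (The paper's exact
    translation/threshold parameters are assumptions here.)"""
    prod = np.prod(np.asarray(bits, dtype=float) - 0.5)
    # sign(prod) = (-1)^(#zeros), so #ones is odd iff (-1)^N * prod < 0
    return 1 if ((-1) ** len(bits)) * prod < 0 else 0

assert all(t_neuron_parity(b) == sum(b) % 2
           for n in range(1, 9) for b in product([0, 1], repeat=n))
print("parity reproduced for N = 1..8 with a single multiplicative unit")
```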

8.
This paper presents enhancements for robust two- and three-quarter-dimensional meshing, including: (1) automated interval assignment by integer programming for submapped surfaces and volumes, (2) surface submapping, and (3) volume submapping. An introduction to the simplex method, an optimization technique of integer programming, is presented. Simplification of complex geometry is required for the formulation of the integer programming problem. A method of i-j unfolding is defined which explains how irregular geometry can be realigned into a simplified form that is suitable for submap interval assignment solutions. Also presented are the processes by which submapping eliminates the decomposition of surface geometry, through a pseudodecomposition process, producing suitable mapped meshes. The process of submapping involves the creation of interpolated virtual edges, user-defined vertex types and i-j-k space traversals. The creation of interpolated virtual edges is the method by which submapping automatically subdivides surface geometry. The interpolated virtual edge is formulated according to an interpolation scheme using the node discretization of curves on the surface. User-defined vertex types allow direct user control of surface decomposition and interval assignment by modifying i-j-k space traversals. Volume submapping takes the geometry decomposition to a higher level by using mapped virtual surfaces to eliminate decomposition of complex volumes.
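A toy sketch of interval assignment as an integer program for a single mapped quadrilateral surface, solved here by brute force rather than by the simplex method discussed in the paper; the curve names and desired interval counts are invented.

```python
from itertools import product

# Constraint for a mapped quad: opposite curves must carry equal interval
# counts. Objective: stay close to each curve's desired count, which would
# come from curve length / target element size.
goal = {"bottom": 4, "top": 6, "left": 3, "right": 3}

best, best_cost = None, float("inf")
for nb, nl in product(range(1, 11), range(1, 11)):
    # bottom == top == nb and left == right == nl by construction
    cost = (abs(nb - goal["bottom"]) + abs(nb - goal["top"])
            + abs(nl - goal["left"]) + abs(nl - goal["right"]))
    if cost < best_cost:
        best, best_cost = (nb, nl), cost
print(best)   # (4, 3): a compromise between the conflicting desired counts
```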

9.
Let F and G be elements of a C*-algebra A. Assume that, for each irreducible *-representation π of A on a Hilbert space H_π, there is a bounded linear operator L_π ∈ B(H_π) such that the spectrum of π(F) − π(G)L_π is contained in the open left half plane. We prove that there is then an element L ∈ A such that the spectrum of F − GL is contained in the open left half plane. That is, if the system (F, G) is locally stabilizable, then it is stabilizable. We also consider the analogous problem with the open left half plane replaced by the open unit disk. This paper was supported in part by the National Science Foundation under Grant NSF-MCS-8002138.

10.
Summary Tsokos [12] showed the existence of a unique random solution of the random Volterra integral equation (*) x(t; ω) = h(t; ω) + ∫₀ᵗ k(t, τ; ω) f(τ, x(τ; ω)) dτ, where ω ∈ Ω, the supporting set of a probability measure space (Ω, A, P). It was required that f satisfy a Lipschitz condition in a certain subset of a Banach space. By using an extension of Banach's contraction-mapping principle, it is shown here that a unique random solution of (*) exists when f is (·, ·)-uniformly locally Lipschitz in the same subset of the Banach space considered in [12].
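A hedged numerical sketch of the successive-approximation (contraction mapping) idea behind the existence proof, for one fixed sample ω; the functions h, k and f below are invented, with f Lipschitz in x with constant 0.5.

```python
import numpy as np

# Picard iteration for x(t) = h(t) + int_0^t k(t,s) f(s, x(s)) ds
ts = np.linspace(0.0, 1.0, 201)
dt = ts[1] - ts[0]
h = np.cos(ts)
k = lambda t, s: np.exp(-(t - s))
f = lambda s, x: 0.5 * np.sin(x)       # Lipschitz in x with constant 0.5

x = h.copy()
for _ in range(50):
    x_new = h.copy()
    for i, t in enumerate(ts):
        s = ts[: i + 1]
        g = k(t, s) * f(s, x[: i + 1])
        # trapezoid rule for the integral from 0 to t
        x_new[i] += dt * (np.sum(g) - 0.5 * (g[0] + g[-1]))
    if np.max(np.abs(x_new - x)) < 1e-10:   # contraction => convergence
        break
    x = x_new
print(x[-1])
```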

11.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs, or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the polynomial fringe property, this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the linear and piecewise linear classes of logic programs are in NC. Then we examine several nonlinear classes in which the program has a single recursive rule that is an elementary chain. We show that certain nonlinear programs are related to GSM mappings of a balanced parentheses language, and that this relationship implies the polynomial fringe property; hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log-space complete for P, and apply it to both elementary single rule programs and nonelementary programs. Supported by NSF Grant IST-84-12791, a grant of IBM Corporation, and ONR contract N00014-85-C-0731.
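An illustration of why linear programs sit in NC (not code from the paper): the linear transitive-closure program path(X,Y) :- edge(X,Y); path(X,Y) :- edge(X,Z), path(Z,Y) can be evaluated in logarithmically many rounds of boolean matrix squaring, each round being a highly parallel operation.

```python
import numpy as np

n = 6
edge = np.zeros((n, n), dtype=bool)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    edge[u, v] = True

path = edge.copy()
for _ in range(int(np.ceil(np.log2(n)))):   # O(log n) rounds suffice
    # boolean "squaring": new paths are unions of two known half-paths
    path = path | ((path.astype(int) @ path.astype(int)) > 0)
print(path[0, 5])                           # True: 0 reaches 5
```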

12.
The AI methodology of qualitative reasoning furnishes useful tools to scientists and engineers who need to deal with incomplete system knowledge during design, analysis, or diagnosis tasks. Qualitative simulators have a theoretical soundness guarantee; they cannot overlook any concrete equation implied by their input. On the other hand, the basic qualitative simulation algorithms have been shown to suffer from the incompleteness problem; they may allow non-solutions of the input equation to appear in their output. The question of whether a simulator with purely qualitative input which never predicts spurious behaviors can ever be achieved by adding new filters to the existing algorithm has remained unanswered. In this paper, we show that, if such a sound and complete simulator exists, it will have to be able to handle numerical distinctions with such a high precision that it must contain a component that would better be called a quantitative, rather than qualitative reasoner. This is due to the ability of the pure qualitative format to allow the exact representation of the members of a rich set of numbers.

13.
The factorisation problem is to construct the specification of a submodule X when the specifications of the system and all submodules but X are given. It is usually described by the equation P | X ≈ Q, where P and X are submodules of system Q, | is a composition operator, and ≈ is the equivalence criterion. In this paper we use a finite state machine (FSM) model consistent with CCS and study two factorisation problems: P ||| X ~ Q and P ||| X ≈ Q, where ||| is a derived CCS composition operator, and ~ and ≈ represent strong and observational equivalence. Algorithms are presented, and proved correct, that find the most general specification of submodule X for P ||| X ~ Q with Q ~-deterministic and for P ||| X ≈ Q with Q ≈-deterministic. Conditions are given on the submachines of the most general solutions that remain solutions to P ||| X ~ Q (respectively P ||| X ≈ Q). This paper extends and is based on the work of M. W. Shields.

14.
I discuss the attitude of Jewish law sources from the 2nd-5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as impossible reduction. This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what is the pertinent legal rule. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values.

I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning.

If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures." I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty boundary set (the upper approximation minus the lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably captured by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log[1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut A_α of A, defined by the Lebesgue integral of its characteristic function.
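A small numerical sketch of the U-uncertainty formula above for a triangular fuzzy number (the example and grid are invented; log base 2 is assumed, following the usual convention for U-uncertainty, while the abstract writes log without a base).

```python
import numpy as np

def u_uncertainty(membership, xs, levels=200):
    """U(A) = (1/h) * integral_0^h log2(1 + m(A_alpha)) d(alpha), where
    m(A_alpha) is the Lebesgue measure (total length) of the alpha-cut,
    approximated on the grid xs with a midpoint rule in alpha."""
    mu = membership(xs)
    h = mu.max()
    dx = xs[1] - xs[0]
    alphas = np.linspace(0.0, h, levels, endpoint=False) + h / (2 * levels)
    cut_lengths = np.array([(mu >= a).sum() * dx for a in alphas])
    return np.mean(np.log2(1.0 + cut_lengths))   # mean = (1/h) * integral

# Triangular fuzzy number "about 5" with support [4, 6] (invented example)
tri = lambda x: np.clip(1.0 - np.abs(x - 5.0), 0.0, None)
xs = np.linspace(3.0, 7.0, 4001)
print(u_uncertainty(tri, xs))   # ~0.935 = (3 ln 3 - 2) / (2 ln 2)
```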

15.
Given a finite set E ⊂ R^n, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation is given for the problem. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.

16.
This paper presents algorithms for multiterminal net channel routing where multiple interconnect layers are available. Major improvements are possible if wires are able to overlap, and our generalized main algorithm allows overlap, but only on every Kth (K ≥ 2) layer. Our algorithm will, for a problem with density d on L layers, L ≥ K + 3, provably use at most three tracks more than optimal: at most ⌈(d + 1)/⌊L/K⌋⌉ + 2 tracks, compared with the lower bound of ⌈d/⌊L/K⌋⌉. Our algorithm is simple, has few vias, tends to minimize wire length, and could be used if different layers have different grid sizes. Finally, we extend our algorithm in order to obtain improved results for adjacent (K = 1) overlap: ⌈(d + 2)/⌊2L/3⌋⌉ + 5 tracks for L ≥ 7. This work was supported by the Semiconductor Research Corporation under Contract 83-01-035, by a grant from the General Electric Corporation, and by a grant at the University of the Saarland.

17.
The results of the application of potential theory to optimization are used to extend the use of (Helmholtz) diffusion and diffraction equations for optimization of their solutions u(x, ε) with respect to both x and ε. If the aim function is modified so that the optimal point does not change, then the function u(x, ε) is convex in (x, ε) for small ε. The possibility of using the heat conduction equation with a simple boundary layer for global optimization is investigated. A method is designed for making the solution U(x, t) of such equations have a positive-definite matrix of second mixed derivatives with respect to x for any x in the optimization domain and any small t < 0 (when the point is remote from the extremum), or a negative-definite matrix in x (when the point is close to the extremum). For the functions u(x, ε) and U(x, t) having these properties, the gradient method and the Newton-Kantorovich method are used in the first and second stages of optimization, respectively.
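A hedged sketch of the smoothing idea, not the paper's exact equations: convolve a multimodal objective with a heat kernel, minimize the smoothed function by gradient descent, then shrink the smoothing parameter and refine; all functions and constants below are invented.

```python
import numpy as np

f = lambda x: x**2 + 2.0 * np.sin(5.0 * x)         # multimodal toy objective

def smoothed(f, x, t, n=2001, width=6.0):
    """U(x, t) = (f * heat kernel)(x), by numerical quadrature; the kernel
    width grows like sqrt(t), washing out the oscillations for large t."""
    s = np.linspace(-width, width, n)
    kernel = np.exp(-s**2 / (4.0 * t))
    kernel /= kernel.sum()
    return np.sum(f(x + s) * kernel)

x = 3.0                                            # start far from the minimum
for t in [2.0, 0.5, 0.1, 0.02]:                    # gradually remove smoothing
    for _ in range(200):                           # gradient-descent stage
        g = (smoothed(f, x + 1e-4, t) - smoothed(f, x - 1e-4, t)) / 2e-4
        x -= 0.05 * g
print(x, f(x))                                     # near the global minimum
```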

18.
It is shown that the translation of an open default into the modal formula ∀x(Lα(x) ∧ LMβ₁(x) ∧ … ∧ LMβₘ(x) → w(x)) gives rise to an embedding of open default systems into non-monotonic logics.

19.
The time complexity of the normalization problem of context-sensitive grammars
Summary Let G be a context-sensitive grammar, and let G_cf denote the context-free core of G. In this paper the time complexity of the following normalization problem is discussed: given a derivation tree t with respect to G_cf, is t also a derivation tree with respect to G? It is shown that in general the normalization problem is NP-complete. On the other hand, for every context-sensitive language L there is a corresponding context-sensitive grammar G for which this normalization problem is solvable in polynomial time.

20.
We present a new definition of optimality intervals for the parametric right-hand side linear programming (parametric RHS LP) problem φ(λ) = min{cᵀx | Ax = b + λb̄, x ≥ 0}. We then show that an optimality interval consists either of a breakpoint or of the open interval between two consecutive breakpoints of the continuous piecewise linear convex function φ(λ). As a consequence, the optimality intervals form a partition of the closed interval {λ : |φ(λ)| < ∞}. Based on these optimality intervals, we also introduce an algorithm for solving the parametric RHS LP problem which requires an LP solver as a subroutine. If a polynomial-time LP solver is used to implement this subroutine, we obtain a substantial improvement on the complexity of those parametric RHS LP instances which exhibit degeneracy. When the number of breakpoints of φ(λ) is polynomial in terms of the size of the parametric problem, we show that the latter can be solved in polynomial time. This research was partially funded by the United States Navy-Office of Naval Research under Contract N00014-87-K-0202. Its financial support is gratefully acknowledged.
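A numerical probe of φ(λ) using an off-the-shelf LP solver as the subroutine (the instance is invented; the paper's algorithm finds the optimality intervals exactly rather than by sampling): slope changes of the sampled piecewise linear convex function mark its breakpoints.

```python
import numpy as np
from scipy.optimize import linprog

# phi(lam) = min{ c^T x | A x = b + lam*b_bar, x >= 0 (x1 also capped at 1) }
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b, b_bar = np.array([1.0]), np.array([1.0])
bounds = [(0.0, 1.0), (0.0, None)]   # cap x1 so the optimal basis changes

lams = np.linspace(-1.0, 1.0, 21)
phi = []
for lam in lams:
    res = linprog(c, A_eq=A, b_eq=b + lam * b_bar, bounds=bounds)
    phi.append(res.fun if res.success else np.inf)

slopes = np.diff(phi) / np.diff(lams)
print(np.round(slopes, 3))   # slope jumps from 1 to 2 at the breakpoint lam = 0
```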
