Similar Articles
A total of 20 similar articles were found.
1.
The basic problem of interval computations is: given a function f(x_1, ..., x_n) and n intervals [x_i^-, x_i^+], find the (interval) range y of the given function on the given intervals. It is known that even for quadratic polynomials f(x_1, ..., x_n), this problem is NP-hard. In this paper, following the advice of A. Neumaier, we analyze the complexity of asymptotic range estimation, when the bound Δ on the width of the input intervals tends to 0. We show that for small c > 0, if we want to compute the range with an accuracy c·Δ^2, then the problem is still NP-hard; on the other hand, for every δ > 0, there exists a feasible algorithm which asymptotically estimates the range with an accuracy c·Δ^(2−δ).
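As a rough illustration of the range-estimation task discussed above (not of the asymptotic algorithm analyzed in the paper), the following Python sketch encloses the range of a quadratic polynomial over a box by naive interval arithmetic; the function names and example coefficients are made up.

```python
# Naive interval-arithmetic enclosure of the range of a quadratic polynomial
# over a box.  This is only a rough illustration of the range-estimation task;
# it is not the asymptotic algorithm analyzed in the paper, and the function
# names and example coefficients are made up.
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def range_quadratic(q, l, c, boxes):
    """Enclose the range of f(x) = sum_i l[i]*x_i + sum_{i,j} q[i][j]*x_i*x_j + c
    over boxes[i] = (x_i_lo, x_i_hi).  The result may overestimate the true range."""
    acc = (c, c)
    n = len(boxes)
    for i in range(n):
        acc = interval_add(acc, interval_mul((l[i], l[i]), boxes[i]))
        for j in range(n):
            acc = interval_add(acc, interval_mul((q[i][j], q[i][j]),
                                                 interval_mul(boxes[i], boxes[j])))
    return acc

# f(x1, x2) = x1*x2 on [0, 1] x [0, 1]: enclosure (0.0, 1.0)
print(range_quadratic([[0, 0.5], [0.5, 0]], [0, 0], 0.0, [(0.0, 1.0), (0.0, 1.0)]))
```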

2.
Modeling and programming tools for neighborhood search often support invariants, i.e., data structures specified declaratively and automatically maintained incrementally under changes. This paper considers invariants for longest paths in directed acyclic graphs, a fundamental abstraction for many applications. It presents bounded incremental algorithms for arc insertion and deletion which run in O(‖δ‖ + |δ| log |δ|) time and O(‖δ‖) time respectively, where |δ| and ‖δ‖ are measures of the change in the input and output. The paper also shows how to generalize the algorithm to various classes of multiple insertions/deletions encountered in scheduling applications. Preliminary experimental results show that the algorithms behave well in practice.
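For context, here is a from-scratch (non-incremental) computation of the longest-path invariant over a DAG in topological order; it is only a baseline sketch with illustrative names, not the bounded incremental algorithms of the paper.

```python
# From-scratch longest-path lengths in a DAG, computed in topological order.
# This is the quantity whose incremental maintenance the paper studies; the
# baseline below recomputes it entirely and uses made-up names, it is not the
# bounded incremental algorithm of the paper.  Non-negative arc lengths assumed.
from collections import defaultdict, deque

def longest_paths(n, arcs):
    """arcs is a list of (u, v, w): an arc u -> v of length w, nodes 0..n-1.
    Returns dist, where dist[v] is the length of a longest path ending at v."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in arcs:
        adj[u].append((v, w))
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    dist = [0] * n
    while queue:                          # Kahn's topological order
        u = queue.popleft()
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return dist

# Example: 0 -> 1 -> 3 and 0 -> 2 -> 3 with lengths 2, 4, 3, 1.
print(longest_paths(4, [(0, 1, 2), (1, 3, 4), (0, 2, 3), (2, 3, 1)]))  # [0, 2, 3, 6]
```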

3.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.

4.
We consider the half-space range-reporting problem: Given a set S of n points in R^d, preprocess it into a data structure, so that, given a query half-space γ, all k points of S ∩ γ can be reported efficiently. We extend previously known static solutions to dynamic ones, supporting insertions and deletions of points of S. For a given parameter m, n ≤ m ≤ n^⌊d/2⌋, and an arbitrarily small positive constant ε, we achieve O(m^(1+ε)) space and preprocessing time, O((n/m^(1/⌊d/2⌋)) log n + k) query time, and O(m^(1+ε)/n) amortized update time (d ≥ 3). We present, among others, the following applications: an O(n^(1+ε))-time algorithm for computing convex layers in R^3, and an output-sensitive algorithm for computing a level in an arrangement of planes in R^3, whose time complexity is O((b + n)·n^ε), where b is the size of the level. Work by the first author has been supported by National Science Foundation Grant CCR-91-06514. A preliminary version of this paper appeared in Agarwal et al. [2], which also contains the results of [20] on dynamic bichromatic closest pair and minimum spanning trees.
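To fix the notation, a brute-force Python version of a half-space reporting query is shown below (linear time per query, with made-up data); the whole point of the paper's data structure is to answer such queries much faster.

```python
# Brute-force half-space range reporting in R^3, O(n) per query, shown only to
# fix the notation; the data structure in the paper answers the same queries
# far faster.  Point coordinates and the query half-space are made up.
def report_halfspace(points, a, b):
    """Return every point p with <a, p> <= b, i.e. the points of S that lie
    in the query half-space {x : a.x <= b}."""
    return [p for p in points
            if sum(ai * pi for ai, pi in zip(a, p)) <= b]

pts = [(0, 0, 0), (1, 2, 3), (-1, -1, 4)]
print(report_halfspace(pts, a=(1, 1, 0), b=1))   # [(0, 0, 0), (-1, -1, 4)]
```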

5.
Viscous Lattices     
Let E be an arbitrary space, and δ an extensive dilation of P(E) into itself, with an adjoint erosion ε. Then, the image δ[P(E)] of P(E) by δ is a complete lattice L where the sup is the union and the inf is the opening of the intersection according to δ. The lattice L, named viscous, is not distributive, nor complemented. Any dilation on P(E) admits the same expression in L. However, the erosion in L is the opening according to δ of the erosion in P(E). Given a connection C on P(E), the image of C under δ turns out to be a connection C' on L as soon as δ(C) ⊆ C. Moreover, the elementary connected openings γ_x of C and γ'_δ(x) of C' are linked by the relation γ'_δ(x) = δγ_x. A comprehensive class of connection-preserving closings is constructed. Two examples, binary and numerical (the latter comes from heart imaging), prove the relevance of viscous lattices in interpolation and in segmentation problems.
Jean Serra obtained the degree of Mining Engineer in 1962 in Nancy, France, and in 1967 his Ph.D. for a work dealing with the estimation of the iron ore body of Lorraine by geostatistics. In cooperation with Georges Matheron, he laid the foundations of a new method that he called Mathematical Morphology (1964). Its purpose was to describe quantitatively shapes and textures of natural phenomena, at micro and macro scales. In 1967, he founded with G. Matheron the Centre de Morphologie Mathematique at the School of Mines of Paris, on the campus of Fontainebleau. Since this time, he has been working in this framework as a Directeur de Recherches. His main book is a two-volume treatise entitled Image Analysis and Mathematical Morphology (Academic Press, 1982, 1988). He was Vice President for Europe of the International Society for Stereology from 1979 to 1983. He founded the International Society for Mathematical Morphology in 1993, and was elected its first president. His achievements include several patents of devices for image processing and various awards and titles, such as the first AFCET award, in 1988, or Doctor Honoris Causa of the University of Barcelona (Spain) in 1993. He recently developed a new theory of segmentation based on set connections (2001–2004), and currently works on colour image processing.

6.
The cross ratio of four collinear points is of fundamental importance in model based vision, because it is the simplest numerical property of an object that is invariant under projection to an image. It provides a basis for algorithms to recognise objects from images without first estimating the position and orientation of the camera.
A quantitative analysis of the effectiveness of the cross ratio in model based vision is made. A given image I of four collinear points is classified by making comparisons between the measured cross ratio θ of the four image points and the cross ratios stored in the model database. The image I is accepted as a projection of an object O with cross ratio τ if |θ − τ| ≤ ntu, where n is the standard deviation of the image noise, t is a threshold and u = …. The performance of the cross ratio is described quantitatively by the probability of rejection R, the probability of false alarm F and the probability of misclassification p(τ₁, τ₂), defined for two model cross ratios τ₁, τ₂. The trade-off between these different probabilities is determined by t.
It is assumed that in the absence of an object the image points have identical Gaussian distributions, and that in the presence of an object the image points have the appropriate conditional densities. The measurements of the image points are subject to small random Gaussian perturbations. Under these assumptions the trade-offs between R, F and p(τ₁, τ₂) are given to a good approximation by R = 2(1 − Φ(t)), F = r_F σt and p(τ₁, τ₂) ≈ eσt|τ₁ − τ₂|^(−1), where σ is the relative noise level, Φ is the cumulative distribution function for the normal distribution, r_F is a constant, and e is a function of τ only. The trade-off between R and F is obtained in Maybank (1994). In this paper the trade-off between R and p(τ₁, τ₂) is obtained.
It is conjectured that the general form of the above trade-offs between R, F and p(τ₁, τ₂) is the same for a range of invariants useful in model based vision. The conjecture prompts the following definition: an invariant which has trade-offs between R, F, p(τ₁, τ₂) of the above form is said to be non-degenerate for model based vision.
The consequences of the trade-off between R and p(τ₁, τ₂) are examined. In particular, it is shown that for a fixed overall probability of misclassification there is a maximum possible model cross ratio τ_m, and there is a maximum possible number N of models. Approximate expressions for τ_m and N are obtained. They indicate that in practice a model database containing only cross ratio values can have a size of order at most ten, for a physically plausible level of image noise, and for a probability of misclassification of the order 0.1.
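The Python sketch below computes the cross ratio of four collinear points (using one common convention) and applies a simplified threshold test of the kind described above; the noise level sigma, the threshold t and the sample coordinates are illustrative values, not quantities taken from the paper.

```python
# Cross ratio of four collinear points (one common convention) and a simplified
# acceptance test of the kind sketched in the abstract.  The noise level sigma,
# the threshold t and the sample coordinates are illustrative values only.
def cross_ratio(p1, p2, p3, p4):
    """Cross ratio of four collinear points given by coordinates along the line:
    ((p1 - p3)(p2 - p4)) / ((p1 - p4)(p2 - p3))."""
    return ((p1 - p3) * (p2 - p4)) / ((p1 - p4) * (p2 - p3))

def accept(measured, model, sigma, t):
    """Accept the image as a projection of the model if the measured cross ratio
    lies within t*sigma of the model cross ratio."""
    return abs(measured - model) <= t * sigma

theta = cross_ratio(0.0, 1.0, 2.0, 3.02)             # slightly perturbed points
tau = cross_ratio(0.0, 1.0, 2.0, 3.0)                # model cross ratio
print(theta, accept(theta, tau, sigma=0.01, t=3.0))  # accepted
```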

7.
One useful generalization of the convex hull of a set S of n points is the ε-strongly convex δ-hull. It is defined to be a convex polygon with vertices taken from S such that no point of S lies farther than δ outside it and such that, even if its vertices are perturbed by as much as ε, it remains convex. It was an open question as to whether an ε-strongly convex O(ε)-hull existed for all positive ε. We give here an O(n log n) algorithm for constructing it (which thus proves its existence). This algorithm uses exact rational arithmetic. We also show how to construct an ε-strongly convex O(ε + μ)-hull in O(n log n) time using rounded arithmetic with rounding unit μ. This is the first rounded-arithmetic convex-hull algorithm which guarantees a convex output and which has error independent of n.
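For comparison, here is a standard exact-arithmetic convex hull in Python (Andrew's monotone chain over rationals); it does not construct the ε-strongly convex hull of the paper, but it illustrates the exact-rational-arithmetic setting the paper compares against. The sample points are made up.

```python
# Standard exact-arithmetic convex hull (Andrew's monotone chain over rationals).
# It does not construct the epsilon-strongly convex hull of the paper; it only
# illustrates the exact-rational-arithmetic setting the paper compares against.
from fractions import Fraction

def cross(o, a, b):
    # > 0 for a left turn o -> a -> b, < 0 for a right turn, 0 if collinear
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted({(Fraction(x), Fraction(y)) for x, y in points})
    if len(pts) <= 2:
        return list(pts)
    lower, upper = [], []
    for p in pts:                              # lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                    # upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]             # counterclockwise, no repeats

hull = convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)])
print([(float(x), float(y)) for x, y in hull])  # the interior point (1, 1) is dropped
```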

8.
Modular Control and Coordination of Discrete-Event Systems
In the supervisory control of discrete-event systems based on controllable languages, a standard way to handle state explosion in large systems is by modular supervision: either horizontal (decentralized) or vertical (hierarchical). However, unless all the relevant languages are prefix-closed, a well-known potential hazard with modularity is that of conflict. In decentralized control, modular supervisors that are individually nonblocking for the plant may nevertheless produce blocking, or even deadlock, when operating on-line concurrently. Similarly, a high-level hierarchical supervisor that predicts nonblocking at its aggregated level of abstraction may inadvertently admit blocking in a low-level implementation. In two previous papers, the authors showed that nonblocking hierarchical control can be guaranteed provided high-level aggregation is sufficiently fine; the appropriate conditions were formalized in terms of control structures and observers. In this paper we apply the same technique to decentralized control, when specifications are imposed on local models of the global process; in this way we remove the restriction in some earlier work that the plant and specification (marked) languages be prefix-closed. We then solve a more general problem of coordination: namely, how to determine a high-level coordinator that forestalls conflict in a decentralized architecture when it potentially arises, but is otherwise minimally intrusive on low-level control action. Coordination thus combines both vertical and horizontal modularity. The example of a simple production process is provided as a practical illustration. We conclude with an appraisal of the computational effort involved.
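As a small illustration of the nonblocking property that modular supervision can violate, the Python sketch below checks whether every reachable state of a finite automaton can still reach a marked state; the state names and transition encoding are made up, and this is not the synthesis or coordination procedure of the paper.

```python
# Nonblocking check for a finite automaton: every reachable state must still
# be able to reach a marked state.  This is the property that modular or
# coordinated supervisors are meant to preserve; the sketch below is only an
# illustration with made-up state names, not the synthesis method of the paper.
def nonblocking(states, init, marked, trans):
    """trans maps a state to an iterable of successor states."""
    # Forward reachability from the initial state.
    reach, stack = {init}, [init]
    while stack:
        for v in trans.get(stack.pop(), ()):
            if v not in reach:
                reach.add(v)
                stack.append(v)
    # Backward reachability from the marked states.
    rev = {s: set() for s in states}
    for u, succs in trans.items():
        for v in succs:
            rev[v].add(u)
    coreach, stack = set(marked), list(marked)
    while stack:
        for u in rev.get(stack.pop(), ()):
            if u not in coreach:
                coreach.add(u)
                stack.append(u)
    return reach <= coreach

# The deadlocked state 'd' is reachable, so the automaton is blocking.
print(nonblocking({'a', 'b', 'd'}, 'a', {'b'}, {'a': ['b', 'd'], 'b': ['a']}))  # False
```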

9.
Exact algorithms for detecting all rotational and involutional symmetries in point sets, polygons and polyhedra are described. The time complexities of the algorithms are shown to be Θ(n) for polygons and Θ(n log n) for two- and three-dimensional point sets. Θ(n log n) time is also required for general polyhedra, but for polyhedra with connected, planar surface graphs Θ(n) time can be achieved. All algorithms are optimal in time complexity, within constants.
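The quadratic-time Python sketch below detects the rotational symmetries of a simple polygon by encoding it as a cyclic sequence of edge-length and turn invariants; it is only an illustrative baseline (and covers rotations only, not the involutional symmetries), whereas the paper's algorithms achieve optimal complexity.

```python
# O(n^2) check for the rotational symmetries of a simple polygon: encode each
# vertex by (squared edge length, cross product, dot product of consecutive
# edges) and look for cyclic shifts that leave the encoding unchanged.  The
# paper achieves this optimally; this quadratic version is only illustrative.
def rotational_symmetries(poly):
    n = len(poly)
    def edge(i):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        return (x1 - x0, y1 - y0)
    def code(i):
        (ax, ay), (bx, by) = edge(i), edge((i + 1) % n)
        return (ax * ax + ay * ay,   # squared length of edge i
                ax * by - ay * bx,   # cross product: turn direction
                ax * bx + ay * by)   # dot product: turn angle
    codes = [code(i) for i in range(n)]
    return [s for s in range(n)
            if all(codes[i] == codes[(i + s) % n] for i in range(n))]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(rotational_symmetries(square))   # [0, 1, 2, 3]: four-fold symmetry
```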

10.
We present a new definition of optimality intervals for the parametric right-hand side linear programming (parametric RHS LP) problem φ(λ) = min{c^T x | Ax = b + λb̄, x ≥ 0}. We then show that an optimality interval consists either of a breakpoint or of the open interval between two consecutive breakpoints of the continuous piecewise linear convex function φ(λ). As a consequence, the optimality intervals form a partition of the closed interval {λ : |φ(λ)| < ∞}. Based on these optimality intervals, we also introduce an algorithm for solving the parametric RHS LP problem which requires an LP solver as a subroutine. If a polynomial-time LP solver is used to implement this subroutine, we obtain a substantial improvement on the complexity of those parametric RHS LP instances which exhibit degeneracy. When the number of breakpoints of φ(λ) is polynomial in terms of the size of the parametric problem, we show that the latter can be solved in polynomial time. This research was partially funded by the United States Navy-Office of Naval Research under Contract N00014-87-K-0202. Its financial support is gratefully acknowledged.
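A quick way to see the piecewise-linear convex shape of φ(λ) is to sample it with an off-the-shelf LP solver; the Python sketch below does this with scipy.optimize.linprog. The data and the λ grid are illustrative, and this is not the breakpoint-based algorithm proposed in the paper.

```python
# Sampling phi(lambda) = min{c^T x | Ax = b + lambda*b_bar, x >= 0} on a grid
# with an off-the-shelf LP solver.  This merely evaluates the piecewise linear
# convex function at a few points; it is not the breakpoint-based algorithm of
# the paper, and the data below are illustrative.
import numpy as np
from scipy.optimize import linprog

def sample_phi(c, A, b, b_bar, lambdas):
    values = []
    for lam in lambdas:
        res = linprog(c, A_eq=A, b_eq=b + lam * b_bar,
                      bounds=(0, None), method="highs")
        values.append(res.fun if res.success else None)   # None where infeasible
    return values

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b, b_bar = np.array([1.0]), np.array([1.0])
print(sample_phi(c, A, b, b_bar, [0.0, 0.5, 1.0]))   # approximately [1.0, 1.5, 2.0]
```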

11.
We present an algorithm for computing L_1 shortest paths among polygonal obstacles in the plane. Our algorithm employs the continuous Dijkstra technique of propagating a wavefront and runs in time O(E log n) and space O(E), where n is the number of vertices of the obstacles and E is the number of events. By using bounds on the density of certain sparse binary matrices, we show that E = O(n log n), implying that our algorithm is nearly optimal. We conjecture that E = O(n), which would imply our algorithm to be optimal. Previous bounds for our problem were quadratic in time and space.
Our algorithm generalizes to the case of fixed orientation metrics, yielding an O(nε^(−1/2) log^2 n) time and O(nε^(−1/2)) space approximation algorithm for finding Euclidean shortest paths among obstacles. The algorithm further generalizes to the case of many sources, allowing us to compute an L_1 Voronoi diagram for source points that lie among a collection of polygonal obstacles.
Partially supported by a grant from Hughes Research Laboratories, Malibu, California and by NSF Grant ECSE-8857642. Much of this work was done while the author was a Ph.D. student at Stanford University, under the support of a Howard Hughes Doctoral Fellowship, and an employee of Hughes Research Laboratories.

12.
Conditions are presented under which the maximum of the Kolmogorov complexity (algorithmic entropy) K(ω_1 ... ω_N) is attained, given the cost Σ_i f(ω_i) of a message ω_1 ... ω_N. Various extremal relations between the message cost and the Kolmogorov complexity are also considered; in particular, the minimization problem for the function Σ_i f(ω_i) − T·K(ω_1 ... ω_N) is studied. Here, T is a parameter, called the temperature by analogy with thermodynamics. We also study domains of small variation of this function.

13.
Optimal shape design problems for an elastic body made from physically nonlinear material are presented. Sensitivity analysis is done by differentiating the discrete equations of equilibrium. Numerical examples are included.
Notation:
- U_ad: set of admissible continuous design parameters
- U_ad^h: set of admissible discrete design parameters
- α: function from U_ad^h defining the shape of the body
- α_h: function from U_ad^h defining the approximated shape of the body
- vector of nodal values of α_h
- {α_n}: sequence of functions tending to α
- Ω(α): domain defined by α
- K: bulk modulus
- shear modulus
- penalty parameter for the contact condition
- V(α): space of virtual displacements in Ω(α)
- V_h(α_h): finite element approximation of V(α)
- J: cost functional
- J_h: discretized cost functional
- algebraic form of J_h
- σ(u): stress tensor
- e(u): strain tensor
- K: stiffness matrix
- f: force vector
- b(q): term arising from nonlinear boundary conditions
- q: vector of nodal degrees of freedom
- p: vector of adjoint state variables
- J: Jacobian of the isoparametric mapping
- |J|: determinant of J
- N: vector of shape function values on the parent element
- L: matrix of shape function derivatives on the parent element
- G: matrix of Cartesian derivatives of shape functions
- X: matrix of nodal coordinates of the element
- D: matrix of elastic coefficients
- B: strain-displacement matrix
- Γ_P: part of the boundary where tractions are prescribed
- Γ_u: part of the boundary where displacements are prescribed
- variable part of the boundary
- strain invariant

14.
In terms of Groenendijk and Stokhof's (1984) formalization of exhaustive interpretation, many conversational implicatures can be accounted for. In this paper we justify and generalize this approach. Our justification proceeds by relating their account, via Halpern and Moses' (1984) non-monotonic theory of "only knowing", to the Gricean maxims of Quality and the first sub-maxim of Quantity. The approach of Groenendijk and Stokhof (1984) is generalized such that it can also account for implicatures that are triggered in subclauses not entailed by the whole complex sentence.

15.
Meer, Klaus. Reliable Computing, 2004, 10(3): 209-225.
We study some problems in interval arithmetic treated in Kreinovich et al. [13]. First, we consider the best linear approximation of a quadratic interval function. Whereas this problem (as a decision problem) is known to be NP-hard in the Turing model, we analyze its complexity in the real number model and the analogous class NP_R. Our results substantiate that most likely it does not any longer capture the difficulty of NP in such a real number setting. More precisely, we give upper complexity bounds for the approximation problem for interval functions by locating it in (a real analogue of) …. This result allows several conclusions: the problem is not (any more) NP_R-hard under so-called weak polynomial time reductions and likely not to be NP_R-hard under (full) polynomial time reductions; for fixed dimension the problem is polynomial time solvable; this extends the results in Koshelev et al. [12] and answers a question left open in [13]. We also study several versions of interval linear systems and show results similar to those for the approximation problem. Our methods combine structural complexity theory with issues from semi-infinite optimization theory.

16.
A text is a triple τ = (λ, ρ_1, ρ_2) such that λ is a labeling function, and ρ_1 and ρ_2 are linear orders on the domain of λ; hence τ may be seen as a word (λ, ρ_1) together with an additional linear order ρ_2 on the domain of λ. The order ρ_2 is used to give to the word (λ, ρ_1) its individual hierarchical representation (syntactic structure), which may be a tree but may also be more general than a tree. In this paper we introduce context-free grammars for texts and investigate their basic properties. Since each text has its own individual structure, the role of such a grammar should be that of a definition of a pattern common to all individual texts. This leads to the notion of a shapely context-free text grammar, also investigated in this paper.

17.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and let (M, #) denote the free orthogonality monoid over (X, #). Let C_0(M, #) be the subset of C(M, #) consisting of all closures of bounded orthogonal sets. We show that C_0(M, #) is a suborthomodular lattice of C(M, #), and we provide a necessary and sufficient condition for C_0(M, #) to carry a full set of dispersion-free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

18.
A problem of estimating a functional parameter θ(x) and functionals Φ(θ) based on observation of a solution u_ε(t, x) of the stochastic partial differential equation is considered. The asymptotic problem setting, as the noise intensity ε → 0, is investigated.

19.
We present an O(n^3) time type inference algorithm for a type system with a largest type, a smallest type ⊥, and the usual ordering between function types. The algorithm infers type annotations of least shape, and it works equally well for recursive types. For the problem of typability, our algorithm is simpler than the one of Kozen, Palsberg, and Schwartzbach for type inference without ⊥. This may be surprising, especially because the system with ⊥ is strictly more powerful.
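The Python sketch below shows the subtype ordering that such a system is built on, with a largest type (Top), a smallest type (Bot), and function types that are contravariant in the argument and covariant in the result. It handles finite (non-recursive) types only and does not reproduce the paper's O(n^3) inference algorithm; the class names are made up.

```python
# Subtyping for the simple type language in the abstract: a largest type (Top),
# a smallest type (Bot), and function types, contravariant in the argument and
# covariant in the result.  A sketch for finite (non-recursive) types only; the
# paper's O(n^3) inference algorithm itself is not reproduced here.
from dataclasses import dataclass

class Top: pass      # the largest type
class Bot: pass      # the smallest type

@dataclass
class Arrow:         # function type: arg -> res
    arg: object
    res: object

def subtype(s, t):
    if isinstance(s, Bot) or isinstance(t, Top):
        return True
    if isinstance(s, Arrow) and isinstance(t, Arrow):
        return subtype(t.arg, s.arg) and subtype(s.res, t.res)
    return False

# (Top -> Bot) is a subtype of (Bot -> Top); the converse fails.
print(subtype(Arrow(Top(), Bot()), Arrow(Bot(), Top())))   # True
print(subtype(Arrow(Bot(), Top()), Arrow(Top(), Bot())))   # False
```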

20.
The primary purpose of parallel computation is the fast execution of computational tasks that are too slow to perform sequentially. However, it was shown recently that a second, equally important motivation for using parallel computers exists: within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class computed in parallel is better than the one obtained on a sequential computer. What represents a better solution depends on the problem under consideration. Thus, for optimization problems, better means closer to optimal. Similarly, for numerical problems, a solution is better than another one if it is more accurate. The present paper continues this line of inquiry by exploring another class enjoying the aforementioned property, namely, cryptographic problems in a real-time setting. In this class, better means more secure. A real-time cryptographic problem is presented for which the parallel solution is provably, considerably, and consistently better than a sequential one. It is important to note that the purpose of this paper is not to demonstrate merely that a parallel computer can obtain a better solution to a computational problem than one derived sequentially. The latter is an interesting (and often surprising) observation in its own right, but we wish to go further. It is shown here that the improvement in quality can be arbitrarily high (and certainly superlinear in the number of processors used by the parallel computer). This result is akin to superlinear speedup, a phenomenon itself originally thought to be impossible.
