Similar Documents
20 similar documents found (search time: 31 ms)
1.
The cross ratio of four collinear points is of fundamental importance in model-based vision, because it is the simplest numerical property of an object that is invariant under projection to an image. It provides a basis for algorithms that recognise objects from images without first estimating the position and orientation of the camera. A quantitative analysis of the effectiveness of the cross ratio in model-based vision is made. A given image I of four collinear points is classified by comparing the measured cross ratio of the four image points with the cross ratios stored in the model database. The image I is accepted as a projection of an object O with cross ratio τ if |τ̂ − τ| ≤ ntu, where n is the standard deviation of the image noise, t is a threshold and u a normalising factor. The performance of the cross ratio is described quantitatively by the probability of rejection R, the probability of false alarm F and the probability of misclassification p(τ₁, τ₂), defined for two model cross ratios τ₁, τ₂. The trade-off between these different probabilities is determined by t.

It is assumed that in the absence of an object the image points have identical Gaussian distributions, and that in the presence of an object the image points have the appropriate conditional densities. The measurements of the image points are subject to small random Gaussian perturbations. Under these assumptions the trade-offs between R, F and p(τ₁, τ₂) are given to a good approximation by R = 2(1 − Φ(t)), F = r_F εt, p(τ₁, τ₂) ≈ eεt|τ₁ − τ₂|⁻¹, where ε is the relative noise level, Φ is the cumulative distribution function of the normal distribution, r_F is a constant, and e is a function of τ₁ only. The trade-off between R and F is obtained in Maybank (1994). In this paper the trade-off between R and p(τ₁, τ₂) is obtained.

It is conjectured that the general form of the above trade-offs between R, F and p(τ₁, τ₂) is the same for a range of invariants useful in model-based vision. The conjecture prompts the following definition: an invariant which has trade-offs between R, F and p(τ₁, τ₂) of the above form is said to be non-degenerate for model-based vision.

The consequences of the trade-off between R and p(τ₁, τ₂) are examined. In particular, it is shown that for a fixed overall probability of misclassification there is a maximum possible model cross ratio τ_m, and a maximum possible number N of models. Approximate expressions for τ_m and N are obtained. They indicate that in practice a model database containing only cross ratio values can have a size of order at most ten, for a physically plausible level of image noise and a probability of misclassification of the order of 0.1.
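As a concrete illustration (not from the paper), here is a minimal Python sketch of the measurement and acceptance test described above; the cross-ratio convention, the function names and the sample values are illustrative assumptions.

```python
def cross_ratio(x0, x1, x2, x3):
    """Cross ratio (x0, x1; x2, x3) of four collinear points, given scalar
    coordinates along their common line (one common convention)."""
    return ((x2 - x0) * (x3 - x1)) / ((x2 - x1) * (x3 - x0))

def accept(tau_measured, tau_model, n, t, u):
    """Acceptance test from the abstract: the image is taken as a projection
    of the model with cross ratio tau_model iff |tau^ - tau| <= n*t*u, where
    n is the image-noise standard deviation, t a threshold, u a scale factor."""
    return abs(tau_measured - tau_model) <= n * t * u

# Points 0, 1, 2, 4 on a line have cross ratio (2*3)/(1*4) = 1.5.
tau_hat = cross_ratio(0.0, 1.0, 2.0, 4.0)
print(tau_hat, accept(tau_hat, 1.5, n=0.01, t=3.0, u=1.0))
```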

2.
An accurate acceptance-rejection algorithm is devised and tested. The procedure requires an average of fewer than 3 uniform deviates whenever the standard deviation σ of the distribution is at least 4, and this number decreases monotonically to 2.63 as σ → ∞. Variable parameters are permitted, and no subroutines for sampling from other statistical distributions are needed. This research was supported by the Austrian Research Council (Fonds zur Förderung der wissenschaftlichen Forschung).
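The paper's tuned algorithm is not reproduced here, but the generic acceptance-rejection template it refines is easy to sketch. In this classic normal-via-Cauchy example (an illustration, not the paper's target distribution) each trial consumes two uniforms and the expected number of trials is M ≈ 1.52, i.e. about 3 uniforms per sample, the same ballpark the abstract reports.

```python
import math, random

def rejection_sample(f, g_sampler, g_pdf, M):
    """Generic acceptance-rejection: draw Y ~ g and U ~ Uniform(0,1); accept Y
    when U * M * g(Y) <= f(Y).  Needs the envelope property f(x) <= M * g(x)."""
    while True:
        y = g_sampler()
        if random.random() * M * g_pdf(y) <= f(y):
            return y

# Standard normal target under a Cauchy envelope; the optimal bound is
# M = sqrt(2*pi/e) ~ 1.52, so a sample costs about 2*M ~ 3.04 uniforms.
f = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
g_pdf = lambda x: 1.0 / (math.pi * (1.0 + x * x))
g_sampler = lambda: math.tan(math.pi * (random.random() - 0.5))
print(rejection_sample(f, g_sampler, g_pdf, math.sqrt(2.0 * math.pi / math.e)))
```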

3.
A central component of the analysis of panel-clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity, so that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η > 0 is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel-clustering algorithms.
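A minimal sketch of how the η-admissibility condition might be checked for two clusters; representing clusters by axis-aligned bounding boxes and the helper name are assumptions for illustration, not part of the paper.

```python
import numpy as np

def eta_admissible(box_t, box_s, eta):
    """Check min(diam(t), diam(s)) <= 2*eta*dist(t, s) for two clusters
    represented by axis-aligned bounding boxes (lower, upper corners)."""
    lo_t, hi_t = (np.asarray(v, float) for v in box_t)
    lo_s, hi_s = (np.asarray(v, float) for v in box_s)
    diam = min(np.linalg.norm(hi_t - lo_t), np.linalg.norm(hi_s - lo_s))
    # Componentwise gap between the boxes; zero in overlapping dimensions.
    gap = np.maximum(0.0, np.maximum(lo_t - hi_s, lo_s - hi_t))
    return diam <= 2.0 * eta * np.linalg.norm(gap)

# Two unit squares two units apart are admissible even for a small eta.
print(eta_admissible(([0, 0], [1, 1]), ([3, 0], [4, 1]), eta=0.5))   # True
```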

4.
The past decade has given rise to a number of explicit substitution calculi. An important question for explicit substitution calculi is the termination of the underlying calculus of substitution. Proofs of termination of substitutions fall into two categories: those that are easy because a decreasing measure can be established, and those that are difficult because such a decreasing measure is not easy to establish. This paper considers two styles of explicit substitution, λσ and λs, to which different termination proof methods apply. The termination of s is guaranteed by a decreasing weight, while a decreasing weight for showing the termination of σ has not yet been found. These termination methods for σ and s are formalized in the proof checker ALF. During our process of formally checking the termination of σ and s we comment on what is needed to make a proof formally checkable.

5.
We show that the simple universal adaptive control law u(t) = N(k(t))y(t), k̇(t) = |y(t)|², with N(k) = (log k)^α cos((log k)^β) and 3β + α < 1, stabilizes all detectable and stabilizable infinite-dimensional systems of Pritchard–Salamon type which are externally stabilized by some scalar output feedback. The same controller is also shown to stabilize time-varying systems satisfying the same type of output-feedback stabilizability.
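A toy simulation of a universal adaptive law of this type on a scalar plant. Illustration only: it uses the classic Nussbaum gain N(k) = k² cos k rather than the paper's logarithmic gain, and plain forward-Euler integration; the plant and all constants are assumptions.

```python
import math

# Simulate u = N(k)y, k' = y^2 on the scalar plant y' = a*y + u, with the
# growth rate a unknown to the controller.
a = 2.0            # unstable open-loop pole (unknown to the controller)
y, k = 1.0, 1.0    # output and adaptation parameter
dt = 1e-4
for _ in range(int(20.0 / dt)):
    u = (k * k) * math.cos(k) * y    # Nussbaum gain N(k(t)) times y(t)
    y += dt * (a * y + u)
    k += dt * y * y                  # adaptation law k'(t) = |y(t)|^2
print(f"|y(T)| = {abs(y):.2e}, k(T) = {k:.3f}")   # y -> 0, k settles
```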

6.
Among the major developments in Mathematical Morphology in the last two decades are the interrelated subjects of connectivity classes and connected operators. Braga-Neto and Goutsias have proposed an extension of the theory of connectivity classes to a multiscale setting, whereby one can assign connectivity to an object observed at different scales. In this paper, we study connected operators in the context of multiscale connectivity. We propose the notion of a σ-connected operator, that is, an operator connected at scale σ. We devote some attention to the study of binary σ-grain operators. In particular, we show that families of σ-grain openings and σ-grain closings, indexed by the connectivity scale parameter, are granulometries and anti-granulometries, respectively. We demonstrate the use of multiscale connected operators with image analysis applications. The first is the scale-space representation of grayscale images using multiscale levelings, where the role of scale is played by the connectivity scale. Then we discuss the application of multiscale connected openings in granulometric analysis, where both size and connectivity information are summarized. Finally, we describe an application of multiscale connected operators to an automatic target recognition problem.

Ulisses Braga-Neto received the Baccalaureate degree in Electrical Engineering from the Universidade Federal de Pernambuco (UFPE), Brazil, in 1992, the Master's degree in Electrical Engineering from the Universidade Estadual de Campinas, Brazil, in 1994, the M.S.E. degree in Electrical and Computer Engineering and the M.S.E. degree in Mathematical Sciences, both from The Johns Hopkins University, in 1998, and the Ph.D. degree in Electrical and Computer Engineering from The Johns Hopkins University, in 2001. He was a Post-Doctoral Fellow at the University of Texas MD Anderson Cancer Center and a Visiting Scholar at Texas A&M University from 2002 to 2004. He is currently an Associate Researcher at the Aggeu Magalhães Research Center of the Oswaldo Cruz Foundation, Brazilian Ministry of Health. His research interests include bioinformatics, pattern recognition, image analysis, and mathematical morphology.
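For intuition, the simplest binary grain operator at a single, ordinary connectivity scale is an area opening: it keeps or removes whole connected components and never deforms them. The paper's σ-grain operators generalize this by replacing ordinary connectivity with connectivity at scale σ; the scipy-based sketch below covers only the single-scale special case.

```python
import numpy as np
from scipy import ndimage

def grain_opening(img, min_area):
    """Binary grain (area) opening: a basic connected operator that keeps a
    connected component of `img` iff its area is >= min_area."""
    labels, n = ndimage.label(img)
    if n == 0:
        return np.zeros_like(img, dtype=bool)
    areas = ndimage.sum(img, labels, index=np.arange(1, n + 1))
    keep_ids = 1 + np.flatnonzero(areas >= min_area)   # labels start at 1
    return np.isin(labels, keep_ids)

img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True      # grain of area 4 -> kept
img[6, 6] = True          # grain of area 1 -> removed
print(grain_opening(img, min_area=2).sum())   # 4
```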

7.
Linear Least-Squares Algorithms for Temporal Difference Learning
We introduce two new temporal difference (TD) algorithms based on the theory of linear least-squares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear in the adjustable parameters. We then define a recursive version of this algorithm, Recursive Least-Squares TD (RLS TD). Although these new TD algorithms require more computation per time step than do Sutton's TD(λ) algorithms, they are more efficient in a statistical sense because they extract more information from training experiences. We describe a simulation experiment showing the substantial improvement in learning rate achieved by RLS TD in an example Markov prediction problem. To quantify this improvement, we introduce the TD error variance of a Markov chain, σ_TD, and experimentally conclude that the convergence rate of a TD algorithm depends linearly on σ_TD. In addition to converging more rapidly, LS TD and RLS TD do not have control parameters, such as a learning rate parameter, thus eliminating the possibility of achieving poor performance by an unlucky choice of parameters.
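A minimal batch sketch of the LS TD estimator described above: it solves the least-squares fixed point A w = b accumulated from observed transitions. The episode format and feature map are assumptions; RLS TD would maintain the same solution incrementally instead of solving at the end.

```python
import numpy as np

def lstd(episodes, phi, gamma):
    """Batch LS TD for a linear value function V(s) = w . phi(s): accumulate
    A = sum phi(s)(phi(s) - gamma*phi(s'))^T and b = sum r*phi(s) over all
    transitions, then solve A w = b."""
    d = phi(episodes[0][0][0]).shape[0]
    A, b = np.zeros((d, d)), np.zeros(d)
    for episode in episodes:
        for s, r, s_next in episode:   # s_next is None at episode end
            f = phi(s)
            f_next = np.zeros(d) if s_next is None else phi(s_next)
            A += np.outer(f, f - gamma * f_next)
            b += r * f
    return np.linalg.solve(A, b)

# Tiny demo: chain 0 -> 1 -> terminal, rewards 0 then 1, tabular features;
# the exact values are V(1) = 1 and V(0) = gamma.
phi = lambda s: np.eye(2)[s]
episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 10
print(lstd(episodes, phi, gamma=0.9))   # approximately [0.9, 1.0]
```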

8.
Let H be a separable Hilbert space. We consider the manifold M consisting of density operators ρ on H such that ρ^p is of trace class for some p ∈ (0, 1). We say σ ∈ M is nearby ρ if there exists C > 1 such that C⁻¹ρ ≤ σ ≤ Cρ. We show that the space of points nearby ρ can be furnished with the two flat connections known as the (±)-affine structures, which are dual relative to the BKM metric. We furnish M with a norm making it into a Banach manifold.

9.
Let L ⊆ Σ* be accepted in time f(n) by a nondeterministic Turing machine. Then there is a monadic existential second-order sentence σ in the language of + such that for every x ∈ Σ*, x ∈ L if and only if a certain structure U_x^f of cardinality f(|x|) satisfies σ. It follows that if L is accepted in nondeterministic time n^d, for d a natural number, then there is a sentence whose relational symbols are d-ary or less and whose finite spectrum is L. This research was partially supported by NSF Grants MCS78-01832 and MCS-8002695.

10.
In a previous study (P. B. Slater, Eur. Phys. J. B 17, 471 (2000)), several remarkably simple exact results were found, in certain specialized m-dimensional scenarios (m ≤ 4), for the a priori probability that a pair of qubits is unentangled/separable. The measure used was the volume element of the Bures metric (identically one-fourth the statistical distinguishability [SD] metric). Here, making use of a newly developed (Euler angle) parameterization of the 4 × 4 density matrices of Tilma, Byrd and Sudarshan, we extend the analysis to the complete 15-dimensional convex set (C) of arbitrarily paired qubits, the total SD volume of which is known to be π⁸/1680 = π⁸/(2⁴·3·5·7) ≈ 5.64794. Using advanced quasi-Monte Carlo procedures (scrambled Halton sequences) for numerical integration in this high-dimensional space, we approximately (5.64851) reproduce that value, while obtaining an estimate of 0.416302 for the SD volume of separable states. We conjecture that this is but an approximation to π⁶/2310 = π⁶/(2·3·5·7·11) ≈ 0.416186. The ratio of the two volumes, 8/(11π²) ≈ 0.0736881, would then constitute the exact Bures/SD probability of separability. The SD area of the 14-dimensional boundary of C is 142π⁷/12285 = 2·71·π⁷/(3³·5·7·13) ≈ 34.911, while we obtain a numerical estimate of 1.75414 for the SD area of the boundary of separable states. PACS: 03.67.-a; 03.65.Ud; 02.60.Jh; 02.40.Ky
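The scrambled-Halton quasi-Monte Carlo integration used in the abstract is easy to demonstrate on a toy problem; the sketch below estimates the volume of the unit ball in R⁵ (exact value 8π²/15 ≈ 5.2638), not the 15-dimensional separability integral.

```python
import numpy as np
from scipy.stats import qmc

# Scrambled Halton quasi-Monte Carlo on a 5-dimensional toy integral.
sampler = qmc.Halton(d=5, scramble=True, seed=0)
pts = 2.0 * sampler.random(1 << 16) - 1.0      # map [0,1)^5 onto [-1,1)^5
inside = (pts ** 2).sum(axis=1) <= 1.0         # hit test for the unit ball
estimate = 2.0 ** 5 * inside.mean()            # cube volume times hit rate
print(f"QMC estimate {estimate:.4f} vs exact {8 * np.pi ** 2 / 15:.4f}")
```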

11.
Optimal shape design problems for an elastic body made from physically nonlinear material are presented. Sensitivity analysis is done by differentiating the discrete equations of equilibrium. Numerical examples are included.

Notation:
- U_ad: set of admissible continuous design parameters
- U_h^ad: set of admissible discrete design parameters
- α: function from U_ad defining the shape of the body
- α_h: function from U_h^ad defining the approximated shape of the body
- α: vector of nodal values of α_h
- {α_n}: sequence of functions tending to α
- Ω(α): domain defined by α
- K: bulk modulus
- μ: shear modulus
- ε: penalty parameter for the contact condition
- V(α): space of virtual displacements in Ω(α)
- V_h(α_h): finite element approximation of V(α)
- J: cost functional
- J_h: discretized cost functional
- J: algebraic form of J_h
- σ(u): stress tensor
- e(u): strain tensor
- K: stiffness matrix
- f: force vector
- b(q): term arising from nonlinear boundary conditions
- q: vector of nodal degrees of freedom
- p: vector of adjoint state variables
- J: Jacobian of the isoparametric mapping
- |J|: determinant of J
- N: vector of shape function values on the parent element
- L: matrix of shape function derivatives on the parent element
- G: matrix of Cartesian derivatives of the shape functions
- X: matrix of nodal coordinates of the element
- D: matrix of elastic coefficients
- B: strain-displacement matrix
- Γ_P: part of the boundary where tractions are prescribed
- Γ_u: part of the boundary where displacements are prescribed
- Γ: variable part of the boundary
- strain invariant

12.
In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection starts with some normal values x₁, …, xₙ, computes the sample average E and the sample standard deviation σ, and then marks a value x as an outlier if x is outside the k₀-sigma interval [E − k₀σ, E + k₀σ] (for some pre-selected parameter k₀). In real life, we often have only interval ranges [x̲ᵢ, x̄ᵢ] for the normal values x₁, …, xₙ. In this case, we only have intervals of possible values for the bounds E and σ. We can therefore identify outliers as values that are outside all k₀-sigma intervals. Once we identify a value as an outlier for a fixed k₀, it is also desirable to find out to what degree the value is an outlier, i.e., what is the largest value k₀ for which it is an outlier. In this paper, we analyze the computational complexity of these outlier detection problems, provide efficient algorithms that solve some of them (under reasonable conditions), and list related open problems.
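A sketch of the classical point-data test that the paper generalizes: the degree to which x is an outlier is the largest k₀ for which x falls outside the k₀-sigma interval, i.e. |x − E|/σ. The interval-data version (with bounds on E and σ) is the subject of the paper and is not reproduced here.

```python
import numpy as np

def outlier_degree(normal_values, x):
    """Classical point-data version: returns |x - E| / s, the largest k0 for
    which x lies outside [E - k0*s, E + k0*s]."""
    v = np.asarray(normal_values, dtype=float)
    return abs(x - v.mean()) / v.std()

def is_outlier(normal_values, x, k0=3.0):
    return outlier_degree(normal_values, x) > k0

normals = [9.8, 10.1, 10.0, 9.9, 10.2]
print(outlier_degree(normals, 11.0), is_outlier(normals, 11.0))
```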

13.
Semantics connected to some information-based metaphor are well known in the logic literature: a paradigmatic example is Kripke semantics for Intuitionistic Logic. In this paper we start from the concrete problem of providing suitable logic-algebraic models for the calculus of attribute dependencies in Formal Contexts with information gaps, and we obtain an intuitive model based on the notion of passage of information, showing that Kleene algebras, semi-simple Nelson algebras, three-valued Łukasiewicz algebras and Post algebras of order three are, in a sense, naturally and directly connected to partially defined information systems. In this way we can provide for these logic-algebraic structures a raison d'être different from the original motivations concerning, for instance, computability theory.

14.
The notion of the rational closure of a positive knowledge base K of conditional assertions θ |∼ φ (standing for "if θ then normally φ") was first introduced by Lehmann (1989) and developed by Lehmann and Magidor (1992). Following those authors, we would also argue that the rational closure is, in a strong sense, the minimal-information, or simplest, rational consequence relation satisfying K. In practice, however, one might expect a knowledge base to consist not just of positive conditional assertions θ |∼ φ, but also of negative conditional assertions θ |≁ φ (standing for "not: if θ then normally φ"). Restricting ourselves to a finite language, we show that the rational closure still exists for satisfiable knowledge bases containing both positive and negative conditional assertions, and has properties similar to those exhibited in Lehmann and Magidor (1992). In particular, an algorithm in Lehmann and Magidor (1992) which constructs the rational closure can be adapted to this case and yields, in turn, completeness theorems for the conditional assertions entailed by such a mixed knowledge base.

15.
A loss queueing system GI/G/m/0 is considered. Let a(x) be the p.d.f. of the interarrival intervals. Assume that this function behaves like cx^(α−1) for small x. Further, let B(x) be the d.f. of the service time, and let 1/μ be the mean service time. Conditions are derived for the light-traffic insensitivity of the loss probability to the form of B(x) as λ/μ → 0. In particular, the condition α = 1 is necessary. Estimates for the loss probability are obtained.
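A small Monte Carlo simulator for a GI/G/m/0 loss system, useful for checking such estimates numerically; the particular arrival and service distributions below are illustrative assumptions (for this M/M/2/0 special case the Erlang-B formula gives ≈ 0.0045).

```python
import heapq, random

def loss_probability(interarrival, service, m, n_arrivals=200_000, seed=1):
    """Monte Carlo estimate of the loss probability of a GI/G/m/0 system:
    an arrival finding all m servers busy is lost (there is no waiting room)."""
    rng = random.Random(seed)
    busy_until = []                            # heap of service completion times
    t, lost = 0.0, 0
    for _ in range(n_arrivals):
        t += interarrival(rng)
        while busy_until and busy_until[0] <= t:
            heapq.heappop(busy_until)          # release servers that finished
        if len(busy_until) >= m:
            lost += 1                          # blocked: customer is lost
        else:
            heapq.heappush(busy_until, t + service(rng))
    return lost / n_arrivals

# Light-traffic example: mean interarrival 10, mean service 1, two servers.
p = loss_probability(lambda r: r.expovariate(0.1), lambda r: r.expovariate(1.0), m=2)
print(f"estimated loss probability: {p:.4f}")
```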

16.
In this paper we use a free-fall approach to develop a high-level control/command strategy for a bipedal robot called BIPMAN, based on a multi-chain mechanical model with a general control architecture. The strategy is composed of three levels: the Legs and Arms level, the Coordinator level and the Supervisor level. The Coordinator level is devoted to controlling leg movements and to ensuring the stability of the whole biped. Perturbation effects threaten the equilibrium of the humanoid robot and can only be compensated using a dynamic control strategy, based on dynamic stability studies with control of the center-of-mass acceleration and a force distribution on each leg and arm. Free fall in the gravity field is assumed to be deeply involved in human locomotor control. From studies of this specific motion through a direct dynamic model, the notion of equilibrium classes is introduced. These classes define time intervals in which the biped is able to maintain its posture, and this notion is used to define a reconfigurable high-level control of the robot.

17.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but instead make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable, in a sense compulsory, choice of the relevant operations among conditional events) the natural axioms for many different conditional measures besides probability.

18.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) to be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum-likelihood criterion or the minimal-square-error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work:
- Computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given. This problem is reduced to a one-dimensional optimization problem.
- Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df. This problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate.
- Computing the pointwise uncertainty associated with the MSE solution. This problem is reduced to computing three one-dimensional integrals.
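A toy discrete Gaussian instance of this idea: compute the per-weight restorations f_λ on a grid and average them with evidence weights Pr(D|λ). The sine test signal, the second-difference smoothness term, the ridge term and the λ grid are all illustrative assumptions, and the noise variance is treated as known for brevity (the paper averages over it as well).

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 100, 0.3
u = np.linspace(0.0, 1.0, n)
d = np.sin(2 * np.pi * u) + sigma * rng.normal(size=n)   # noisy samples of f

D2 = np.diff(np.eye(n), n=2, axis=0)        # second-difference operator
K = D2.T @ D2 + 1e-6 * np.eye(n)            # tiny ridge keeps the prior proper

def f_lam(lam):
    """Restoration for one weight: argmin ||f - d||^2 + lam * ||D2 f||^2,
    i.e. the posterior mean under the prior f ~ N(0, sigma^2 (lam K)^-1)."""
    return np.linalg.solve(np.eye(n) + lam * K, d)

def log_evidence(lam):
    """log Pr(D | lam) for the Gaussian model D = f + N(0, sigma^2 I)."""
    S = sigma ** 2 * (np.linalg.inv(lam * K) + np.eye(n))   # marginal covariance
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (d @ np.linalg.solve(S, d) + logdet)

lams = np.logspace(-2.0, 4.0, 25)
w = np.array([log_evidence(l) for l in lams])
w = np.exp(w - w.max()); w /= w.sum()                      # normalized weights
f_avg = sum(wi * f_lam(li) for wi, li in zip(w, lams))     # evidence-weighted estimate
print("evidence-weighted mean of lambda:", float((w * lams).sum()))
```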

19.
Shape optimization of interior cutouts in composite panels
This paper presents validated results for the optimization of cutouts in laminated carbon-fibre composite panels, obtained by adapting a recently developed optimization procedure known as Evolutionary Structural Optimization (ESO). An initial small cutout was introduced into each finite element model, and elements were removed from around this cutout based on a predefined rejection criterion. In the examples presented, the limiting ply within each plate element around the cutout was determined from the Tsai–Hill failure index. Plates with values below the product of the average Tsai–Hill number and a rejection ratio (RR) were subsequently removed. This process was iterated until a steady state was reached, and the RR was then incremented by an evolutionary rate (ER). The above steps were repeated until a cutout of the desired area was achieved.

Nomenclature:
- σ₁: stress in the fibre direction
- σ₂: stress in the transverse direction
- σ₁₂: shear stress
- X: tensile/compressive strength in the fibre direction
- Y: tensile/compressive strength in the transverse direction
- S: shear strength
- e: number of a plate element lying on the edge of the cutout
- p: plate element number
- P: total number of plates
- TH: Tsai–Hill number
- TH_l(e): limiting Tsai–Hill number of plate e
- TH_l(p): limiting Tsai–Hill number of plate p
- RR: rejection ratio
- [K]: global stiffness matrix
- {δ}: displacement vector
- {F}: force vector
- {σ}: stress vector
- l: ply number
- j: number of plies per plate
- FI: failure index
- T: specified condition on which to terminate evolution (e.g. area of cutout)
- σ_A: stress at the major vertex of the ellipse
- σ_B: stress at the minor vertex of the ellipse
- a: ellipse major axis
- b: ellipse minor axis
- i: increment number
- ER: evolutionary rate
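A skeleton of the ESO loop described above, with the finite-element Tsai–Hill computation stubbed out by random numbers purely so the loop runs; in the paper that callback would come from a laminate FE analysis, and all parameter values here are illustrative.

```python
import numpy as np

def eso_cutout(tsai_hill, active, rr0=0.01, er=0.01, target_removed=50):
    """ESO iteration: remove every active element whose limiting-ply Tsai-Hill
    index falls below RR * (current average), iterate to a steady state, then
    increment RR by the evolutionary rate ER, until enough elements (the
    desired cutout area) have been removed."""
    rr, removed = rr0, 0
    while removed < target_removed:
        th = tsai_hill(active)               # would wrap a real FE analysis
        cutoff = rr * th[active].mean()
        doomed = active & (th < cutoff)
        if not doomed.any():
            rr += er                         # steady state: evolve the ratio
            continue
        active = active & ~doomed
        removed += int(doomed.sum())
    return active

# Dummy stand-in for the FE + failure-index computation, just to run the loop.
rng = np.random.default_rng(0)
dummy_th = lambda active: rng.random(active.size)
design = eso_cutout(dummy_th, np.ones(400, dtype=bool))
print(400 - design.sum(), "elements removed")
```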

20.
A sublinear algorithm for approximate keyword searching
E. W. Myers, Algorithmica, 1994, 12(4–5):345–374
Given a relatively short query string W of length P, a long subject string A of length N, and a threshold D, the approximate keyword search problem is to find all substrings of A that align with W with not more than D insertions, deletions, and mismatches. In typical applications, such as searching a DNA sequence database, the size of the database A is much larger than that of the query W; e.g., N is on the order of millions or billions and P is a hundred to a thousand. In this paper we present an algorithm that, given a precomputed index of the database A, finds rare matches in time that is sublinear in N, i.e., O(N^c) for some c < 1. The sequence A must be over a finite alphabet Σ. More precisely, our algorithm requires O(D N^pow(ε) log N) expected time, where ε = D/P is the maximum number of differences as a percentage of query length, and pow(ε) is an increasing and concave function that is 0 when ε = 0. Thus the algorithm is superior to current O(DN) algorithms when ε is small enough to guarantee that pow(ε) < 1. As seen in the paper, this is true for a wide range of ε, e.g., up to 33% for DNA sequences (|Σ| = 4) and 56% for protein sequences (|Σ| = 20). In preliminary practical experiments, the approach gives a 50- to 500-fold improvement over previous algorithms for problems of interest in molecular biology. This work was supported in part by the National Institutes of Health under Grant R01 LM04960-01 and the Aspen Center for Physics.
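For contrast with the sublinear indexed method, here is the baseline O(PN) dynamic-programming scan (Sellers' algorithm) that defines the problem: report every end position in A where some substring aligns with W using at most D errors. The O(DN) competitors mentioned in the abstract refine this same column computation; this sketch is not Myers' algorithm.

```python
def approximate_matches(W, A, D):
    """Report all positions j such that some substring of A ending at j
    aligns with W with <= D insertions, deletions, and mismatches."""
    P = len(W)
    col = list(range(P + 1))           # column for the empty prefix of A
    hits = []
    for j, a in enumerate(A, start=1):
        prev_diag, col[0] = col[0], 0  # row 0 stays 0: a match may start anywhere
        for i in range(1, P + 1):
            cur = min(col[i] + 1,                    # gap in W
                      col[i - 1] + 1,                # gap in A
                      prev_diag + (W[i - 1] != a))   # match / mismatch
            prev_diag, col[i] = col[i], cur
        if col[P] <= D:
            hits.append(j)             # substring of A ending at j matches W
    return hits

print(approximate_matches("ACGT", "TTACGATT", 1))   # -> [5, 6, 7]
```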

