Similar Documents
20 similar documents found (search time: 31 ms)
1.
Many low-level visual computation problems, such as focus, stereo, and optical flow, can be formulated as problems of extracting one or more parameters of a non-stationary transformation between two images. Finite-width windows are widely used in various algorithms to extract spatially local information from images. While the choice of window width has a profound impact on the quality of algorithmic results, there has been no quantitative way to measure or eliminate the negative effects of finite-width windows. To address this problem and the foreshortening problem caused by non-stationarity, we introduce two novel sets of filters: moment filters and hypergeometric filters. The recursive properties of these filters allow the effects of finite-width windows and foreshortening to be explicitly analyzed and eliminated. We apply the moment filter approach to the focus and stereo problems, in which one parameter is extracted at every pixel location. We apply the hypergeometric approach to the optical flow problem, in which two parameters are extracted. We demonstrate that algorithms based on moment filters and hypergeometric filters achieve much higher precision than other state-of-the-art techniques.

2.
In this paper the problem of routing messages along shortest paths in a distributed network without using complete routing tables is considered. In particular, the complexity of deriving minimum (in terms of number of intervals) interval routing schemes is analyzed under different requirements. For all the cases considered, NP-hardness proofs are given, while some approximability results are provided. Moreover, relations among the different cases considered are studied. This work was supported by the EEC ESPRIT II Basic Research Action Program under Contract No. 7141 "Algorithms and Complexity II", by the EEC Human Capital and Mobility MAP project, and by the Italian MURST 40% project "Algoritmi, Modelli di Calcolo e Strutture Informative".

3.
Ward Elliott (from 1987) and Robert Valenza (from 1989) set out to find the true Shakespeare from among 37 anti-Stratfordian Claimants. As directors of the Claremont Shakespeare Authorship Clinic, Elliott and Valenza developed novel attributional tests, from which they concluded that most Claimants are not-Shakespeare. From 1990-4, Elliott and Valenza developed tests purporting further to reject much of the Shakespeare canon as not-Shakespeare (1996a). Foster (1996b) details extensive and persistent flaws in the Clinic's work: data were collected haphazardly; canonical and comparative text-samples were chronologically mismatched; procedural controls for genre, stanzaic structure, and date were lacking. Elliott and Valenza counter by estimating maximum erosion of the Clinic's findings "to include five of our 54 tests, which can amount, at most, to half of one percent" (1998). This essay provides a brief history, showing why the Clinic foundered. Examining several of the Clinic's representative tests, I evaluate claims that Elliott and Valenza continue to make for their methodology. A final section addresses doubts about accuracy, validity and replicability that have dogged the Clinic's work from the outset.

4.
A first-order system F has the Kreisel length-of-proof property if the following statement is true for all formulas φ(x): if there is a k ≥ 1 such that for all n ≥ 0 there is a proof of φ(n̄) in F with at most k lines, then there is a proof of ∀x φ(x) in F. We consider this property for Parikh systems, which are first-order axiomatic systems that contain a finite number of axiom schemata (including individual axioms) and a finite number of rules of inference. We prove that any usual Parikh system formulation of Peano arithmetic has the Kreisel length-of-proof property if the underlying logic of the system is formulated without a schema for universal instantiation in either one of two ways. (In one way, the formula to be instantiated is built up in steps, and in the other way, the term to be substituted is built up in steps.) Our method of proof uses techniques and ideas from unification theory.
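In symbols, the property restated in the abstract above reads as follows (a restatement only, using the notation introduced there):

    \bigl(\exists k \ge 1\;\forall n \ge 0:\; F \vdash \varphi(\bar n)\ \text{with a proof of at most } k \text{ lines}\bigr)
    \;\Longrightarrow\; F \vdash \forall x\,\varphi(x).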

5.
A well-known problem in default logic is the ability of naive reasoners to explain both g and ¬g from a set of observations. This problem is treated in at least two different ways within that camp. One approach is examination of the various explanations and choosing among them on the basis of various explanation comparators. A typical comparator is choosing the explanation that depends on the most specific observation, similar to the notion of narrowest reference class. Others examine default extensions of the observations and choose whatever is true in any extension, or what is true in all extensions, or what is true in preferred extensions. Default extensions are sometimes thought of as acceptable models of the world that are discarded as more knowledge becomes available. We argue that the notions of specificity and extension lack clear semantics. Furthermore, we show that the problems these ideas were supposed to solve can be handled easily within a probabilistic framework.

6.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.

7.
Let H be a separable Hilbert space. We consider the manifold M consisting of density operators ρ on H such that ρ^p is of trace class for some p ∈ (0, 1). We say σ is nearby ρ if there exists C > 1 such that C⁻¹ρ ≤ σ ≤ Cρ. We show that the space of points nearby ρ can be furnished with the two flat connections known as the (±)-affine structures, which are dual relative to the BKM metric. We furnish M with a norm making it into a Banach manifold.

8.
A central component of the analysis of panel clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel clustering algorithms.
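For orientation, the interpolation idea can be sketched as follows (a generic formulation, not necessarily the paper's exact notation; the interpolation points ξ_ν and Lagrange polynomials L_ν on the cluster τ are assumed for illustration):

    \min\{\operatorname{diam}(\tau),\,\operatorname{diam}(\sigma)\} \le 2\eta\,\operatorname{dist}(\tau,\sigma),
    \qquad
    k(x,y) \;\approx\; \sum_{\nu=1}^{m} L_\nu(x)\,k(\xi_\nu, y)
    \quad\text{for } x \in \tau,\ y \in \sigma,

i.e. on an admissible block the kernel is replaced by its polynomial interpolant in one variable, which (unlike a Taylor expansion) does not require η to be small.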

9.
We consider the half-space range-reporting problem: given a set S of n points in ℝ^d, preprocess it into a data structure so that, given a query half-space γ, all k points of S lying in γ can be reported efficiently. We extend previously known static solutions to dynamic ones, supporting insertions and deletions of points of S. For a given parameter m, n ≤ m ≤ n^⌊d/2⌋, and an arbitrarily small positive constant ε, we achieve O(m^(1+ε)) space and preprocessing time, O((n/m^(1/⌊d/2⌋)) log n + k) query time, and O(m^(1+ε)/n) amortized update time (d ≥ 3). We present, among others, the following applications: an O(n^(1+ε))-time algorithm for computing convex layers in ℝ³, and an output-sensitive algorithm for computing a level in an arrangement of planes in ℝ³, whose time complexity is O((b + n) n^ε), where b is the size of the level. Work by the first author has been supported by National Science Foundation Grant CCR-91-06514. A preliminary version of this paper appeared in Agarwal et al. [2], which also contains the results of [20] on dynamic bichromatic closest pair and minimum spanning trees.
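For intuition, the query being answered is the following brute-force O(n) report (a baseline sketch, not the paper's data structure; the half-space is written as a·x ≤ b):

import numpy as np

def report_halfspace(points, a, b):
    # Report all points p with a . p <= b (the query half-space).
    # Brute force takes O(n) per query; the dynamic structure described
    # above answers the same query within the stated sublinear bounds.
    points = np.asarray(points, dtype=float)   # shape (n, d)
    a = np.asarray(a, dtype=float)             # normal vector, shape (d,)
    mask = points @ a <= b
    return points[mask]

# Example: random points in R^3, query half-space x + y + z <= 1
pts = np.random.rand(1000, 3)
print(len(report_halfspace(pts, [1.0, 1.0, 1.0], 1.0)))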

10.
Let B be a Banach space of ℝⁿ-valued continuous functions on [0, ∞) with f ∈ B. Consider the nonlinear Volterra integral equation (*) x(t) + ∫₀ᵗ K(t, s, x(s)) ds = f(t). We use the implicit function theorem to give sufficient conditions on B and K(t, s, x) for the existence of a unique solution x ∈ B to (*) for each f ∈ B with ‖f‖_B sufficiently small. Moreover, there is a constant M > 0 independent of f with ‖x‖_B ≤ M‖f‖_B. Part of this work was done while the author was visiting at Wright State University.
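A minimal numerical sketch of equation (*), assuming the second-kind form written above and solving it by successive approximation with the trapezoidal rule; the particular K and f below are illustrative choices only, not from the paper:

import numpy as np

def solve_volterra(K, f, T=1.0, n=200, iters=50):
    # Successive approximations for  x(t) + int_0^t K(t, s, x(s)) ds = f(t)
    t = np.linspace(0.0, T, n + 1)
    h = t[1] - t[0]
    x = np.asarray(f(t), dtype=float)          # initial guess x_0 = f
    for _ in range(iters):
        x_new = np.empty_like(x)
        for i in range(len(t)):
            vals = np.asarray(K(t[i], t[:i + 1], x[:i + 1]), dtype=float)
            # composite trapezoidal rule on the uniform grid t[0..i]
            integral = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
            x_new[i] = f(t[i]) - integral
        x = x_new
    return t, x

# Illustrative data: small ||f|| so a unique small solution is expected.
t, x = solve_volterra(K=lambda t, s, xs: 0.1 * np.sin(xs),
                      f=lambda t: 0.05 * np.asarray(t, dtype=float))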

11.
Prequential model selection and delete-one cross-validation are data-driven methodologies for choosing between rival models on the basis of their predictive abilities. For a given set of observations, the predictive ability of a model is measured by the model's accumulated prediction error and by the model's average-out-of-sample prediction error, respectively, for prequential model selection and for cross-validation. In this paper, given i.i.d. observations, we propose nonparametric regression estimators—based on neural networks—that select the number of hidden units (or neurons) using either prequential model selection or delete-one cross-validation. As our main contributions: (i) we establish rates of convergence for the integrated mean-squared errors in estimating the regression function using off-line or batch versions of the proposed estimators and (ii) we establish rates of convergence for the time-averaged expected prediction errors in using on-line versions of the proposed estimators. We also present computer simulations (i) empirically validating the proposed estimators and (ii) empirically comparing the proposed estimators with certain novel prequential and cross-validated mixture regression estimators.
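A rough sketch of prequential selection of the number of hidden units (illustrative only; the data, the MLPRegressor settings, and the warm-up length are assumptions, not the estimators studied in the paper):

import numpy as np
from sklearn.neural_network import MLPRegressor

def prequential_error(X, y, hidden_units, warmup=20):
    # Accumulated one-step-ahead squared prediction error:
    # fit on the past, predict the next observation, accumulate the error.
    err = 0.0
    for i in range(warmup, len(y)):
        model = MLPRegressor(hidden_layer_sizes=(hidden_units,),
                             max_iter=2000, random_state=0)
        model.fit(X[:i], y[:i])
        err += (model.predict(X[i:i + 1])[0] - y[i]) ** 2
    return err

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(80)

# Select the architecture with the smallest accumulated prediction error.
best = min([1, 2, 4, 8], key=lambda h: prequential_error(X, y, h))
print("selected hidden units:", best)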

12.
Continuation-passing style (CPS) is a good abstract representation to use for compilation and optimization: it has a clean semantics and is easily manipulated. We examine how CPS expresses the saving and restoring of registers in source-language procedure calls. In most CPS-based compilers, the context of the calling procedure is saved in a continuation closure—a single variable that is passed as an argument to the function being called. This closure is a record containing bindings of all the free variables of the continuation; that is, registers that hold values needed by the caller after the call are written to memory in the closure, and fetched back after the call. Consider the procedure-call mechanisms used by conventional compilers. In particular, registers holding values needed after the call must be saved and later restored. The responsibility for saving registers can lie with the caller (a caller-saves convention) or with the called function (callee-saves). In practice, to optimize memory traffic, compilers find it useful to have some caller-saves registers and some callee-saves registers. Conventional CPS-based compilers that pass a pointer to a record containing all the variables needed after the call (i.e., the continuation closure) are using a caller-saves convention. We explain how to express callee-save registers in Continuation-Passing Style, and give measurements showing the resulting improvement in execution time. Supported in part by NSF Grant CCR-8914570.
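A toy illustration of the caller-saves effect described above, with Python standing in for a CPS intermediate language (the function names are invented for illustration): every value the caller still needs after the call becomes a free variable of the continuation and is therefore stored in its closure.

def add_cps(a, b, k):
    # CPS-style addition: instead of returning, pass the result to k.
    k(a + b)

def caller(x, y, k):
    # x is live across the call, so it is a free variable of the
    # continuation and ends up saved in cont's closure -- the CPS
    # analogue of a caller-saved register.
    def cont(sum_xy):
        k(sum_xy * x)        # x was "saved" in cont's closure
    add_cps(x, y, cont)

caller(3, 4, print)          # prints 21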

13.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs, or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the polynomial fringe property, this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the linear and piecewise linear classes of logic programs are in NC. Then we examine several nonlinear classes in which the program has a single recursive rule that is an elementary chain. We show that certain nonlinear programs are related to GSM mappings of a balanced parentheses language, and that this relationship implies the polynomial fringe property; hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log-space complete for P, and apply it to both elementary single rule programs and nonelementary programs. Supported by NSF Grant IST-84-12791, a grant of IBM Corporation, and ONR contract N00014-85-C-0731.
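As a concrete instance of a linear logical query program, the classic transitive-closure program tc(X,Y) :- edge(X,Y). tc(X,Y) :- edge(X,Z), tc(Z,Y). can be evaluated bottom-up to its minimum model. The sketch below is a plain sequential fixpoint computation, not the PRAM algorithm of the paper; the paper's point is that such linear programs admit NC-parallel evaluation.

def transitive_closure(edges):
    # Naive bottom-up evaluation of the linear Datalog program
    #   tc(X,Y) :- edge(X,Y).
    #   tc(X,Y) :- edge(X,Z), tc(Z,Y).
    tc = set(edges)
    changed = True
    while changed:                       # iterate to the minimum model
        new = {(x, y) for (x, z) in edges for (z2, y) in tc if z == z2}
        changed = not new <= tc
        tc |= new
    return tc

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))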

14.
In recent years, constraint satisfaction techniques have been successfully applied to disjunctive scheduling problems, i.e., scheduling problems where each resource can execute at most one activity at a time. Less significant and less generally applicable results have been obtained in the area of cumulative scheduling. Multiple constraint propagation algorithms have been developed for cumulative resources but they tend to be less uniformly effective than their disjunctive counterparts. Different problems in the cumulative scheduling class seem to have different characteristics that make them either easy or hard to solve with a given technique. The aim of this paper is to investigate one particular dimension along which problems differ. Within the cumulative scheduling class, we distinguish between highly disjunctive and highly cumulative problems: a problem is highly disjunctive when many pairs of activities cannot execute in parallel, e.g., because many activities require more than half of the capacity of a resource; on the contrary, a problem is highly cumulative if many activities can effectively execute in parallel. New constraint propagation and problem decomposition techniques are introduced with this distinction in mind. This includes an O(n²) edge-finding algorithm for cumulative resources (where n is the number of activities requiring the same resource) and a problem decomposition scheme which applies well to highly disjunctive project scheduling problems. Experimental results confirm that the impact of these techniques varies from highly disjunctive to highly cumulative problems. In the end, we also propose a refined version of the edge-finding algorithm for cumulative resources which, despite its worst-case complexity of O(n³), performs very well on highly cumulative instances.
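The disjunctive/cumulative distinction can be made concrete by counting how many pairs of activities cannot run in parallel. This is a minimal sketch of that idea under the assumption that each activity has a single resource requirement and the resource a single capacity; it is not the paper's formal measure.

def disjunction_ratio(requirements, capacity):
    # Fraction of activity pairs whose combined requirement exceeds the
    # capacity, i.e. pairs that cannot execute in parallel. Values near 1
    # indicate a highly disjunctive problem, values near 0 a highly
    # cumulative one.
    n = len(requirements)
    if n < 2:
        return 0.0
    conflicting = sum(1
                      for i in range(n)
                      for j in range(i + 1, n)
                      if requirements[i] + requirements[j] > capacity)
    return conflicting / (n * (n - 1) / 2)

print(disjunction_ratio([3, 3, 2, 4], capacity=5))   # 4 of 6 pairs conflict -> 0.666...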

15.
The adaptiveness of agents is one of the basic conditions for autonomy. This paper describes an approach to adaptiveness for Monitoring Cognitive Agents based on the notion of generic spaces. This notion allows the definition of virtual generic processes, so that any particular actual process is then a simple configuration of the generic process, that is to say a set of parameter values. Consequently, a generic domain ontology containing the generic knowledge for solving problems concerning the generic process can be developed. This leads to the design of the Generic Monitoring Cognitive Agent, a class of agents in which the whole knowledge corpus is generic. In other words, modeling a process within a generic space becomes configuring a generic process, and adaptiveness becomes genericity, that is to say independence with regard to technology. In this paper, we present an application of this approach to Sachem, a Generic Monitoring Cognitive Agent designed to help operators in operating a blast furnace. Specifically, the NeuroGaz module of Sachem will be used to present the notion of a generic blast furnace. The adaptiveness of Sachem can then be seen in the low cost of deploying a Sachem instance on different blast furnaces and in the ability of NeuroGaz to solve problems and learn from various top-gas instrumentation.

16.
Kulpa, Zenon. Reliable Computing, 2003, 9(3): 205–228
Using the results obtained for the one-dimensional case in Part I of the paper (Reliable Computing 9(1) (2003), pp. 1–20), an analysis of the two-dimensional relational expression a₁x₁ + a₂x₂ σ b, where σ is a relational operator (such as ≤, ≥, or =), is conducted with the help of a midpoint-radius diagram and other auxiliary diagrams. The solution sets are obtained with a simple boundary-line selection rule derived using these tools, and are characterized by types of one-dimensional cuts through the solution space. A classification of the basic possible solution types is provided in detail. The generalization of the approach to n-dimensional interval systems and avenues for further research are also outlined.
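For orientation, a tiny midpoint–radius interval sketch (illustrative only, not Kulpa's diagrammatic method): it evaluates a₁x₁ + a₂x₂ over interval values and checks whether the relation ≤ b holds for every choice of values inside the intervals, or only for some choice.

from dataclasses import dataclass

@dataclass
class Interval:
    mid: float
    rad: float                      # midpoint-radius form: [mid - rad, mid + rad]
    @property
    def lo(self): return self.mid - self.rad
    @property
    def hi(self): return self.mid + self.rad

def mul(a, x):
    # Interval product via endpoint combinations, returned in midpoint-radius form.
    corners = [a.lo * x.lo, a.lo * x.hi, a.hi * x.lo, a.hi * x.hi]
    lo, hi = min(corners), max(corners)
    return Interval((lo + hi) / 2, (hi - lo) / 2)

def add(p, q):
    return Interval(p.mid + q.mid, p.rad + q.rad)

def le_b(a1, x1, a2, x2, b):
    # Check a1*x1 + a2*x2 <= b over the given intervals.
    s = add(mul(a1, x1), mul(a2, x2))
    return {"for_all_values": s.hi <= b, "for_some_value": s.lo <= b}

print(le_b(Interval(1, 0.5), Interval(2, 0.1),
           Interval(-1, 0.2), Interval(1, 0.0), b=1.0))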

17.
The cross ratio of four collinear points is of fundamental importance in model-based vision, because it is the simplest numerical property of an object that is invariant under projection to an image. It provides a basis for algorithms to recognise objects from images without first estimating the position and orientation of the camera. A quantitative analysis of the effectiveness of the cross ratio in model-based vision is made. A given image I of four collinear points is classified by comparing the measured cross ratio of the four image points with the cross ratios stored in the model database. The image I is accepted as a projection of an object O with cross ratio τ if |τ′ − τ| ≤ ntu, where τ′ is the measured cross ratio, n is the standard deviation of the image noise, t is a threshold and u is a scale factor. The performance of the cross ratio is described quantitatively by the probability of rejection R, the probability of false alarm F and the probability of misclassification p(τ₁, τ₂), defined for two model cross ratios τ₁, τ₂. The trade-off between these different probabilities is determined by t. It is assumed that in the absence of an object the image points have identical Gaussian distributions, and that in the presence of an object the image points have the appropriate conditional densities. The measurements of the image points are subject to small random Gaussian perturbations. Under these assumptions the trade-offs between R, F and p(τ₁, τ₂) are given to a good approximation by R = 2(1 − Φ(t)), F = r_F εt, and p(τ₁, τ₂) ≈ e εt|τ₁ − τ₂|⁻¹, where ε is the relative noise level, Φ is the cumulative distribution function for the normal distribution, r_F is constant, and e is a function of τ only. The trade-off between R and F is obtained in Maybank (1994). In this paper the trade-off between R and p(τ₁, τ₂) is obtained. It is conjectured that the general form of the above trade-offs between R, F and p(τ₁, τ₂) is the same for a range of invariants useful in model-based vision. The conjecture prompts the following definition: an invariant which has trade-offs between R, F and p(τ₁, τ₂) of the above form is said to be non-degenerate for model-based vision. The consequences of the trade-off between R and p(τ₁, τ₂) are examined. In particular, it is shown that for a fixed overall probability of misclassification there is a maximum possible model cross ratio τ_m, and there is a maximum possible number N of models. Approximate expressions for τ_m and N are obtained. They indicate that in practice a model database containing only cross ratio values can have a size of order at most ten, for a physically plausible level of image noise, and for a probability of misclassification of the order 0.1.
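A small sketch of the underlying invariant and acceptance test (illustrative only; the cross-ratio convention and the simplified tolerance below are assumptions, not the paper's analysis):

def cross_ratio(x1, x2, x3, x4):
    # Cross ratio of four collinear points given by signed coordinates
    # along the line; one common convention is
    # ((x3 - x1)(x4 - x2)) / ((x3 - x2)(x4 - x1)).
    return ((x3 - x1) * (x4 - x2)) / ((x3 - x2) * (x4 - x1))

def accept(measured, model_tau, noise_sd, t=3.0, u=1.0):
    # Simplified form of the acceptance rule |tau' - tau| <= n*t*u
    # discussed above: accept if the measured value is within a
    # noise-scaled tolerance of the model value.
    return abs(measured - model_tau) <= noise_sd * t * u

tau = cross_ratio(0.0, 1.0, 2.0, 4.0)
print(tau, accept(tau + 0.01, tau, noise_sd=0.01))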

18.
Referential transparency, definiteness and unfoldability
Summary. The term referential transparency is frequently used to indicate that a programming language has certain useful substitution properties. We observe, however, that the formal and informal definitions given in the literature are not equivalent, and we investigate their relationships. To this end, we study the definitions in the context of a simple expression language and show that in the presence of non-determinism, the differences between the definitions are manifest. We propose a definition of referential transparency, based on Quine's, as well as of the related notions: definiteness and unfoldability. We demonstrate that these notions are useful to characterize programming languages.
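A toy illustration of how non-determinism separates these notions (not the paper's expression language; Python randomness stands in for a non-deterministic expression): naming an expression once and unfolding it into two copies give different behaviour.

import random

def named_once():
    e = random.random()          # the non-deterministic expression, named once
    return e + e                 # always of the form 2*e

def unfolded():
    # Unfolding the name into the expression changes the meaning:
    # two independent draws instead of one.
    return random.random() + random.random()

random.seed(0); print(named_once())
random.seed(0); print(unfolded())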

19.
The concept of information is virtually ubiquitous in contemporary cognitive science. It is claimed to be processed (in cognitivist theories of perception and comprehension), stored (in cognitivist theories of memory and recognition), and otherwise manipulated and transformed by the human central nervous system. Fred Dretske's extensive philosophical defense of a theory of informational content (semantic information) based upon the Shannon-Weaver formal theory of information is subjected to critical scrutiny. A major difficulty is identified in Dretske's equivocations in the use of the concept of a signal bearing informational content. Gibson's alternative conception of information (construed as analog by Dretske), while avoiding many of the problems located in the conventional use of "signal", raises different but equally serious questions. It is proposed that, taken literally, the human CNS does not extract or process information at all; rather, whatever information is construed as locatable in the CNS is information only for an observer-theorist and only for certain purposes. "Blood courses through our veins, and information through our central nervous system." — A Neuropsychology Textbook.

20.
Pizer and Eberly introduced the core as the analogue of the medial axis for greyscale images. For two-dimensional images, it is obtained as the ridge of a medial function defined on 2 + 1-dimensional scale space. The medial function is defined using Gaussian blurring and measures the extent to which a point is in the center of the object measured at a scale. Numerical calculations indicate the core has properties quite different from the medial axis. In this paper we give the generic properties of ridges and cores for two-dimensional images and explain the discrepancy between core and medial axis properties. We place cores in a larger relative critical set structure, which coherently relates disjoint pieces of core. We also give the generic transitions which occur for sequences of images varying with a parameter such as time. The genericity implies the stability of the full structure in any compact viewing area of scale space under sufficiently small L2 perturbations of the image intensity function. We indicate consequences for finding cores and also for adding markings to completely determine the structure of the medial function.
