Similar Documents (20 found)
1.
Pseudo-Linear Scale-Space Theory
It has been observed that linear, Gaussian scale-space and nonlinear, morphological erosion and dilation scale-spaces generated by a quadratic structuring function have a lot in common. Indeed, far-reaching analogies have been reported, which seem to suggest the existence of an underlying isomorphism; however, an actual mapping has been missing. In the present work a one-parameter isomorphism is constructed in closed form, which encompasses linear and both types of morphological scale-spaces as (non-uniform) limiting cases. The unfolding of the one-parameter family provides a means to transfer known results from one domain to the other. Moreover, for any fixed and non-degenerate parameter value one obtains a novel type of pseudo-linear multiscale representation that is, in a precise way, in between the familiar ones. This is of interest in its own right, as it enables one to balance the pros and cons of linear versus morphological scale-space representations in any particular situation.
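A rough illustration of the two families being unified, on invented toy data: linear Gaussian smoothing versus morphological dilation by a quadratic structuring function, sketched in Python. This shows the ingredients only, not the paper's closed-form isomorphism.

```python
# Illustrative sketch (not the paper's isomorphism): the two scale-space
# families the abstract compares, side by side on a 1-D signal: linear
# Gaussian smoothing, and morphological dilation with the quadratic
# structuring function q_t(z) = -z^2 / (4t).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_scale_space(f, t):
    """Linear scale-space: convolution with a Gaussian of variance t."""
    return gaussian_filter1d(f, sigma=np.sqrt(t), mode="nearest")

def quadratic_dilation(f, t):
    """Morphological scale-space: (f (+) q_t)(x) = sup_y f(y) - (x-y)^2/(4t)."""
    x = np.arange(len(f), dtype=float)
    # Brute-force sup over y; fine for a small demo signal.
    return np.array([np.max(f - (xi - x) ** 2 / (4 * t)) for xi in x])

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
for t in (1.0, 4.0, 16.0):
    g, d = gaussian_scale_space(signal, t), quadratic_dilation(signal, t)
    print(f"t={t:5.1f}  linear range={np.ptp(g):.3f}  dilation range={np.ptp(d):.3f}")
```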

2.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable, in a sense compulsory, choice of the relevant operations among conditional events) the natural axioms for many different conditional measures besides probability.
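To make the three-valued reading concrete, here is a minimal Python sketch of a conditional event E|H that is true, false, or undetermined, with the third value t(E|H) modelled as an ordinary conditional probability over a finite sample space. The encoding is our own illustration, not the paper's formalism.

```python
# A three-valued conditional event E|H: true when E and H both hold,
# false when H holds but E fails, and undetermined (None) when H fails.
# Here t(E|H) is modelled as P(E and H) / P(H) over a finite sample space.
from itertools import product

def conditional_event(e, h, world):
    """Truth value of E|H in one world: True, False, or None (undetermined)."""
    if not h(world):
        return None
    return e(world)

def t(e, h, worlds, p):
    """t(E|H) as a conditional uncertainty measure (here: probability)."""
    ph = sum(p[w] for w in worlds if h(w))
    peh = sum(p[w] for w in worlds if h(w) and e(w))
    return peh / ph

# Two coin flips; E = "first is heads", H = "at least one heads".
worlds = list(product("HT", repeat=2))
p = {w: 0.25 for w in worlds}
E = lambda w: w[0] == "H"
H = lambda w: "H" in w
print([conditional_event(E, H, w) for w in worlds])  # [True, True, False, None]
print(t(E, H, worlds, p))                            # 2/3
```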

3.
The computational reconstruction of surface topographies from scanning electron microscope (SEM) images has been extensively investigated in the past, but fundamental image processing problems still exist. Since conventional approaches adapted from general-purpose image processing have not sufficiently met the requirements in terms of resolution and reliability, we have explored combining different methods to obtain better results. This paper presents a least-squares combination of conventional stereoscopy with shape from shading, and a way of obtaining self-consistent surface profiles from stereoscopy and stereo-intrinsic shape from shading using dynamic programming techniques. Results are presented showing how this combined analysis of multi-sensorial data yields improvements of the reconstructed surface topography that cannot be obtained from the individual sensor signals alone.
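A hedged sketch of the least-squares idea on invented 1-D data: fuse absolute but noisy stereo heights with accurate shape-from-shading slopes by minimizing a combined quadratic error. The paper's actual pipeline, including the dynamic-programming step, is more involved.

```python
# Minimal least-squares fusion sketch (not the paper's algorithm):
# combine a noisy stereo height profile with a shape-from-shading slope
# estimate by minimizing ||z - z_stereo||^2 + w * ||Dz - g_sfs||^2.
import numpy as np

def fuse_profiles(z_stereo, g_sfs, w=10.0):
    n = len(z_stereo)
    D = np.diff(np.eye(n), axis=0)      # forward-difference operator
    A = np.eye(n) + w * D.T @ D         # normal equations
    b = z_stereo + w * D.T @ g_sfs
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
true_z = np.sin(2 * np.pi * x)
z_stereo = true_z + 0.2 * rng.standard_normal(100)        # noisy heights
g_sfs = np.diff(true_z) + 0.01 * rng.standard_normal(99)  # accurate slopes
z = fuse_profiles(z_stereo, g_sfs)
print("rms error stereo:", np.sqrt(np.mean((z_stereo - true_z) ** 2)))
print("rms error fused :", np.sqrt(np.mean((z - true_z) ** 2)))
```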

4.
Agent-based technology has been identified as an important approach for developing next generation manufacturing systems. One of the key techniques needed for implementing such advanced systems will be learning. This paper first discusses learning issues in agent-based manufacturing systems and reviews related approaches, then describes how to enhance the performance of an agent-based manufacturing system through learning from history (based on distributed case-based learning and reasoning) and learning from the future (through system forecasting simulation). Learning from history is used to enhance coordination capabilities by minimizing communication and processing overheads. Learning from the future is used to adjust promissory schedules through forecasting simulation, taking into account shop-floor interactions and production and transportation times. Detailed learning and reasoning mechanisms are described and partial experimental results are presented.
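As a rough illustration of the case-based side of "learning from history", a minimal nearest-neighbour case base in Python; the feature vectors and outcomes are invented, and the paper's distributed mechanism is considerably richer.

```python
# Minimal case-based retrieval sketch (invented features and outcomes):
# store past coordination episodes and reuse the outcome of the nearest
# past case instead of re-negotiating from scratch.
import math

class CaseBase:
    def __init__(self):
        self.cases = []  # list of (features, outcome)

    def learn(self, features, outcome):
        self.cases.append((features, outcome))

    def retrieve(self, features):
        """Return the stored outcome of the most similar past case."""
        return min(self.cases,
                   key=lambda c: math.dist(c[0], features))[1]

cb = CaseBase()
cb.learn((3, 0.8), "contract supplier A")   # (queue length, machine load)
cb.learn((9, 0.2), "contract supplier B")
print(cb.retrieve((8, 0.3)))                # -> "contract supplier B"
```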

5.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and let (M, #) denote the free orthogonality monoid over (X, #). Let C0(M, #) be the subset of C(M, #) consisting of all closures of bounded orthogonal sets. We show that C0(M, #) is a suborthomodular lattice of C(M, #), and we provide a necessary and sufficient condition for C0(M, #) to carry a full set of dispersion-free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

6.
This article presents a senior-executive perspective on the grand vision of a Next Generation Enterprise (NGE), its organizational and individual impacts, and the organizational and technological challenges associated with implementing this vision. It is based upon discussions that took place at a CIO panel, consisting of nine senior executives from a wide cross-section of organizations, organized during the Academia/Industry Working Conference on Research Challenges 2000 (AIWoRC '00 Conference) to address the central theme of the conference: The Next Generation Enterprise: Virtual Organizations and Mobile and Pervasive Technologies. The paper first presents theoretical notions about virtuality, virtual organizations, pervasive and mobile information technologies (PMITs), and the authors' conceptualization of a prototypical NGE. This is followed by a synthesis and summary of the panel discussion under five major thematic areas: (1) the shaping of the emerging NGE vision as enabled by modern-day PMITs, (2) the significant organizational impacts and changes occasioned by the NGE vision, (3) the crucial impacts that the NGE model may have on individual work and personal lives, (4) the critical organizational issues and challenges in implementing the NGE vision, and (5) the major technological issues that, if left unattended, may hamper translation of the NGE vision into a reality. The paper concludes with remarks about the crucial requirements for making the NGE vision a reality.

7.
The ongoing integration of LANs and WANs to support global communications and businesses, and the emergence of integrated broadband communication services, have created an increased demand for cooperation between customers, network providers, and service providers to achieve end-to-end service management. Such cooperation between autonomous authorities, each defining their own administrative management domains, requires the application of an open, standardized framework to facilitate and regulate interworking. Such a framework is given by the ITU-T recommendations on TMN, where the so-called X interface is of particular importance for inter-domain management. In this paper, we explain the role of the TMN X interface within an inter-domain TMN architecture supporting end-to-end communications management. We identify the important issues that need to be addressed for the definition and realization of TMN X interfaces and report on our practical experience with the implementation of TMN X interfaces in the PREPARE project.

8.
When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems. If quantitative temporal information (expressed by clock ticks) is considered, state explosion is even more serious. We present a notion of abstraction of transition systems, where the abstraction is driven by the formulae of a quantitative temporal logic, called qu-mu-calculus, defined in the paper. The abstraction is based on a notion of bisimulation equivalence, called (A, n)-equivalence, where A is a set of actions and n is a natural number. It is proved that two transition systems are (A, n)-equivalent iff they give the same truth value to all qu-mu-calculus formulae such that the actions occurring in the modal operators are contained in A, and with time constraints whose values are less than or equal to n. We present a non-standard (abstract) semantics for a timed process algebra able to produce reduced transition systems for checking formulae. The abstract semantics, parametric with respect to a set of actions A and a natural number n, produces a reduced transition system (A, n)-equivalent to the standard one. A transformational method is also defined, by means of which it is possible to syntactically transform a program into a smaller one, still preserving (A, n)-equivalence.
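A small Python sketch of the underlying idea, with an invented transition system: quotient states by a bisimulation in which only actions from a chosen set A are observable, so abstracting actions away shrinks the state space. The paper's (A, n)-equivalence additionally bounds the clock-tick constraints by n, which this sketch omits.

```python
# Partition refinement for strong bisimulation restricted to actions in A:
# states are merged when their A-labelled behaviour is indistinguishable.
def bisim_partition(states, trans, A):
    """trans: dict mapping state -> set of (action, successor)."""
    block = {s: 0 for s in states}
    while True:
        sig = {s: frozenset((a, block[t]) for a, t in trans[s] if a in A)
               for s in states}
        renum, new = {}, {}
        for s in states:
            key = (block[s], sig[s])
            new[s] = renum.setdefault(key, len(renum))
        if new == block:
            return block
        block = new

states = ["p", "q", "r"]
trans = {"p": {("tick", "q")},
         "q": {("tick", "r"), ("beep", "p")},
         "r": {("tick", "q")}}
print(bisim_partition(states, trans, A={"tick", "beep"}))  # p and r merge: {'p': 0, 'q': 1, 'r': 0}
print(bisim_partition(states, trans, A={"tick"}))          # all merge:     {'p': 0, 'q': 0, 'r': 0}
```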

9.
This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.
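A minimal sketch of one plausible cost model in this spirit, with invented example graphs: charge each guest edge the host-graph distance between the images of its endpoints, weighted by its intended usage frequency.

```python
# Invented illustrative cost model: cost of encoding a guest graph in a
# host graph = sum over guest edges of host distance between the images
# of the endpoints, weighted by intended access frequency.
from collections import deque

def host_distance(host, u, v):
    """BFS distance in the host graph (adjacency-list dict)."""
    seen, frontier = {u}, deque([(u, 0)])
    while frontier:
        x, k = frontier.popleft()
        if x == v:
            return k
        for y in host[x]:
            if y not in seen:
                seen.add(y)
                frontier.append((y, k + 1))
    raise ValueError("host graph is not connected")

def encoding_cost(guest_edges, usage, host, phi):
    """phi maps guest vertices to host vertices."""
    return sum(usage[e] * host_distance(host, phi[e[0]], phi[e[1]])
               for e in guest_edges)

# Guest: a 3-node path a-b-c; host: a 4-cycle 0-1-2-3-0.
host = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
phi = {"a": 0, "b": 1, "c": 3}
edges = [("a", "b"), ("b", "c")]
usage = {("a", "b"): 5, ("b", "c"): 1}  # intended access frequencies
print(encoding_cost(edges, usage, host, phi))  # 5*1 + 1*2 = 7
```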

10.
In this paper, we consider the class of Boolean μ-functions, which are the Boolean functions definable by μ-expressions (Boolean expressions in which no variable occurs more than once). We present an algorithm which transforms a Boolean formula E into an equivalent μ-expression, if one exists, in time linear in ‖E‖ times 2^(n_m), where ‖E‖ is the size of E and n_m is the number of variables that occur more than once in E. As an application, we obtain a polynomial-time algorithm for Mundici's problem of recognizing μ-functions from k-formulas [17]. Furthermore, we show that recognizing Boolean μ-functions is co-NP-complete for functions essentially dependent on all variables, and we give a bound close to co-NP for the general case.
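The syntactic half of the definition is easy to make concrete. The minimal Python sketch below (AST encoding invented for illustration) checks whether a given expression is a μ-expression by counting variable occurrences; the paper's actual problem, deciding whether a function merely has such an expression, is the hard part.

```python
# An expression is a mu-expression exactly when no variable occurs more
# than once in it; formulas are nested tuples such as ("and", e1, e2).
from collections import Counter

def occurrences(node, counts):
    op = node[0]
    if op == "var":
        counts[node[1]] += 1
    elif op == "not":
        occurrences(node[1], counts)
    else:  # "and" / "or"
        occurrences(node[1], counts)
        occurrences(node[2], counts)

def is_mu_expression(node):
    counts = Counter()
    occurrences(node, counts)
    return all(c == 1 for c in counts.values())

# x AND (y OR NOT z) is read-once; x AND (y OR NOT x) is not.
e1 = ("and", ("var", "x"), ("or", ("var", "y"), ("not", ("var", "z"))))
e2 = ("and", ("var", "x"), ("or", ("var", "y"), ("not", ("var", "x"))))
print(is_mu_expression(e1), is_mu_expression(e2))  # True False
```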

11.
Indecomposable local maps of one-dimensional tessellation automata are studied. The main results of this paper are the following. (1) For any alphabet Σ containing two or more symbols and for any n ≥ 1, there exist indecomposable scope-n local maps over Σ. (2) If Σ is a finite field of prime order, then a linear scope-n local map over Σ is indecomposable if and only if its associated polynomial is an irreducible polynomial of degree n − 1 over Σ, except for a trivial case. (3) Result (2) is no longer true if Σ is a finite field whose order is not prime.
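A hedged Python sketch of the standard association between a linear local map over GF(p) and a polynomial, with a brute-force irreducibility test suitable only for small degrees and small p.

```python
# A linear scope-n local map f(x_0,...,x_{n-1}) = sum a_i x_i over GF(p)
# is associated with the polynomial a_0 + a_1 t + ... + a_{n-1} t^(n-1);
# here we test irreducibility over GF(p) by trial division.
def poly_mod(a, b, p):
    """Remainder of polynomial a modulo b over GF(p); coefficient lists,
    lowest degree first, with nonzero leading coefficient in b."""
    a = a[:]
    while len(a) >= len(b):
        if a[-1] != 0:
            c = a[-1] * pow(b[-1], -1, p) % p
            for i in range(len(b)):
                a[len(a) - len(b) + i] = (a[len(a) - len(b) + i] - c * b[i]) % p
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def all_polys(deg, p):
    """All polynomials of exact degree deg over GF(p)."""
    def rec(coeffs, k):
        if k == deg:
            for lead in range(1, p):
                yield coeffs + [lead]
            return
        for c in range(p):
            yield from rec(coeffs + [c], k + 1)
    yield from rec([], 0)

def irreducible(f, p):
    deg = len(f) - 1
    if deg <= 0:
        return False
    for d in range(1, deg // 2 + 1):
        for g in all_polys(d, p):
            if not poly_mod(f, g, p):   # empty remainder: g divides f
                return False
    return True

# Over GF(2): t^2 + t + 1 is irreducible, t^2 + 1 = (t + 1)^2 is not.
print(irreducible([1, 1, 1], 2), irreducible([1, 0, 1], 2))  # True False
```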

12.
A formal model of atomicity in asynchronous systems
We propose a generalisation of occurrence graphs as a formal model of computational structure. The model is used to define the atomic occurrence of a program, to characterise interference freeness between programs, and to model error recovery in a decentralised system.

13.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term. The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights. The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu^2 du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work: (1) computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given; this problem is reduced to a one-dimensional optimization problem. (2) Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df; this problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate. (3) Computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
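A minimal numerical sketch of the averaging idea on an invented discrete 1-D problem: compute the regularized solution f_λ for a grid of weights λ and average them under an unnormalized posterior weight. The scoring used here is a crude proxy, not the paper's one-dimensional reductions.

```python
# For each weight lam, the discrete regularized interpolant solves
# (I + lam * D2'D2) f = d; instead of picking one lam, average the f_lam
# weighted by a simple (unnormalized) posterior proxy for Pr(lam | D).
import numpy as np

def f_lambda(d, lam):
    n = len(d)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d)

def posterior_weight(d, f, lam, sigma2=0.04):
    """Unnormalized score: data fit times smoothness prior (a crude proxy)."""
    D2 = np.diff(np.eye(len(d)), n=2, axis=0)
    return np.exp(-np.sum((f - d) ** 2) / (2 * sigma2)
                  - lam * np.sum((D2 @ f) ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 60)
d = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(60)

lams = np.logspace(-2, 3, 40)
fs = np.array([f_lambda(d, lam) for lam in lams])
w = np.array([posterior_weight(d, f, lam) for f, lam in zip(fs, lams)])
f_avg = (w[:, None] * fs).sum(axis=0) / w.sum()   # MSE-style average
print("residual of averaged estimate:", np.linalg.norm(f_avg - d))
```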

14.
In this paper we first give a general formulation of the total-step, single-step, and successive relaxation iterative methods, and then apply these concepts to general systems of linear equations. In the special case of a matrix with zero diagonal entries one obtains the well-known Jacobi, Gauss-Seidel, and relaxation iterative methods. Theorem 1 gives conditions for the convergence of the single-step iterative method for general non-negative matrices; the proof is similar to the one given by Stein and Rosenberg [2] (1948) for a special case. A corollary gives conditions for the convergence of the relaxation iterative method for non-negative matrices. We further prove Theorem 2 on the convergence of the relaxation iterative method for diagonally dominant matrices.
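A minimal Python sketch of the two iterations the theorems concern, on a strictly diagonally dominant system where both converge.

```python
# Jacobi (total-step) updates all components from the previous iterate;
# Gauss-Seidel (single-step) uses newly computed components immediately.
import numpy as np

def jacobi(A, b, iters=50):
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diagflat(D)
    for _ in range(iters):
        x = (b - R @ x) / D              # all components updated at once
    return x

def gauss_seidel(A, b, iters=50):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):               # new values used immediately
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])          # strictly diagonally dominant
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(jacobi(A, b), np.linalg.solve(A, b)))        # True
print(np.allclose(gauss_seidel(A, b), np.linalg.solve(A, b)))  # True
```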

15.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms, and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed, together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability and efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
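For concreteness, a hedged generator for one common formulation of the Klee-Minty cube (formulations vary across the literature): a perturbed unit cube on which a simplex method with a naive pivot rule can visit all 2^n vertices.

```python
# One common Klee-Minty formulation (a perturbed unit cube): maximize
# x_n subject to 0 <= x_1 <= 1 and
# eps * x_{i-1} <= x_i <= 1 - eps * x_{i-1} for i = 2..n, with 0 < eps < 1/2.
import numpy as np

def klee_minty(n, eps=0.3):
    """Return (A, b, c) for: maximize c @ x subject to A @ x <= b, x >= 0."""
    A = np.zeros((2 * n, n))
    b = np.zeros(2 * n)
    for i in range(n):
        A[2 * i, i] = 1.0            #  x_i + eps * x_{i-1} <= 1
        A[2 * i + 1, i] = -1.0       # -x_i + eps * x_{i-1} <= 0
        if i > 0:
            A[2 * i, i - 1] = eps
            A[2 * i + 1, i - 1] = eps
        b[2 * i] = 1.0
    c = np.zeros(n)
    c[-1] = 1.0
    return A, b, c

A, b, c = klee_minty(3)
print(A); print(b); print(c)
```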

16.
The first proposals for various component tools of what is now called the translator's workstation or translator's workbench are traced back to the 1970s and early 1980s in various, often independent, proposals at different stages in the development of computers and in their use by translators.

17.
This paper uses Thiele rational interpolation to derive a simple method for computing the Randles–Sevcik function π^(1/2)χ(x), with relative error at most 1.9 × 10^(−5) for −∞ < x < ∞. We develop a piecewise approximation method for the numerical computation of π^(1/2)χ(x) on the union (−∞, −10) ∪ [−10, 10] ∪ (10, ∞). This approximation is particularly convenient to employ in electrochemical applications, where four significant digits of accuracy are usually sufficient. Although this paper is primarily concerned with the approximation of the Randles–Sevcik function, some examples are included that illustrate how Thiele rational interpolation can be employed to generate useful approximations to other functions of interest in scientific work.
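A minimal sketch of generic Thiele continued-fraction interpolation via inverse differences; the node set and test function are invented, and the paper's specific coefficients for the Randles–Sevcik function are not reproduced.

```python
# Thiele rational interpolation: build the continued fraction
# b0 + (x - x0)/(b1 + (x - x1)/(b2 + ...)) from inverse differences.
def thiele_coeffs(xs, ys):
    """Coefficients b_k = phi_k(x_0, ..., x_k) via inverse differences."""
    n = len(xs)
    phi = list(ys)                      # level 0: phi[i] = f(x_i)
    b = [ys[0]]
    for k in range(1, n):
        # level k holds entries for i = k..n-1; old phi[0] is for i = k-1
        phi = [(xs[i] - xs[k - 1]) / (phi[i - (k - 1)] - phi[0])
               for i in range(k, n)]
        b.append(phi[0])
    return b

def thiele_eval(xs, b, x):
    """Evaluate the continued fraction from the innermost term outward."""
    val = b[-1]
    for k in range(len(b) - 2, -1, -1):
        val = b[k] + (x - xs[k]) / val
    return val

# Interpolate f(x) = 1/(1 + x^2) at five nodes; a rational interpolant of
# matching degree reproduces it exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1 / (1 + x * x) for x in xs]
b = thiele_coeffs(xs, ys)
print(thiele_eval(xs, b, 0.5), 1 / (1 + 0.5 ** 2))  # 0.8 0.8
```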

18.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast; on a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation, in which variables are represented by pointers to their binder. The program can compile an Automath text into one big Automath single-line-style λ-term, which it outputs using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn are different from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web.
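A small Python sketch (class names invented) of the binder-pointer representation: a variable node holds a direct reference to the node that binds it, so alpha-equivalence needs no renaming machinery.

```python
# Graph representation of lambda-terms: no names, no de Bruijn indices;
# each variable points at its binder, so alpha-equivalence is structural.
class Lam:
    def __init__(self):
        self.body = None        # filled in after variables are created

class Var:
    def __init__(self, binder):
        self.binder = binder    # pointer to the Lam node that binds it

class App:
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

def alpha_eq(s, t, env=None):
    """Structural equality; bound variables compared via their binders."""
    env = env or {}
    if isinstance(s, Lam) and isinstance(t, Lam):
        return alpha_eq(s.body, t.body, {**env, id(s): t})
    if isinstance(s, Var) and isinstance(t, Var):
        return env.get(id(s.binder)) is t.binder
    if isinstance(s, App) and isinstance(t, App):
        return alpha_eq(s.fun, t.fun, env) and alpha_eq(s.arg, t.arg, env)
    return False

def identity():                 # builds  lambda x. x
    lam = Lam()
    lam.body = Var(lam)
    return lam

print(alpha_eq(identity(), identity()))  # True, with no renaming needed
```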

19.
A method to automatically select locally appropriate scales for feature detection, proposed by Lindeberg (1993b, 1998), involves choosing a so-called γ-parameter. The implications of the choice of γ-parameter are studied, and it is demonstrated that different values of γ can lead to qualitatively different features being detected. As an example, the range of γ-values is determined such that a second-derivative-of-Gaussian filter kernel detects ridges but not edges. Some results of this relatively simple ridge detector are shown for two-dimensional images.
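A minimal 1-D illustration of how the γ-parameter shifts the selected scale: maximize the γ-normalized second-derivative response t^γ |L_xx| over scales t for a Gaussian ridge profile. Lindeberg's detectors work on 2-D images; this toy shows only the γ-dependence.

```python
# Gamma-normalized scale selection in 1-D: the scale t maximizing
# t^gamma * |d^2/dx^2 of the smoothed signal| depends on gamma, so
# different gamma values make different structures win.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def selected_scale(f, center, scales, gamma):
    """Scale t maximizing the gamma-normalized second-derivative response."""
    responses = [t ** gamma * abs(gaussian_filter1d(f, np.sqrt(t), order=2)[center])
                 for t in scales]
    return scales[int(np.argmax(responses))]

x = np.arange(400)
f = np.exp(-(x - 200) ** 2 / (2 * 12.0 ** 2))   # Gaussian ridge, variance 144
scales = np.linspace(1.0, 400.0, 200)
for gamma in (0.5, 0.75, 1.0):
    # roughly 72, 144, 288: gamma = 3/4 picks the ridge's own variance here
    print(gamma, selected_scale(f, 200, scales, gamma))
```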

20.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c, p) or ist(c, ¬p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.
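A minimal Python sketch (representation invented) of a nonbivalent ist: a context settles only some propositions, so for a proposition it is silent about, neither ist(c, p) nor ist(c, ¬p) holds.

```python
# A context assigns truth values only to some propositions, so
# ist(c, p) can be True, False, or undefined (None): neither ist(c, p)
# nor ist(c, not-p) need hold, which blocks the paradox.
class Context:
    def __init__(self, facts):
        self.facts = dict(facts)    # proposition -> bool, possibly partial

def ist(c, p):
    """True/False if c settles p; None if c is silent about p."""
    if p in c.facts:
        return c.facts[p]
    if p.startswith("not "):
        q = p[4:]
        return (not c.facts[q]) if q in c.facts else None
    return None

blocks = Context({"on(A, B)": True, "clear(A)": True})
print(ist(blocks, "on(A, B)"))        # True
print(ist(blocks, "not clear(A)"))    # False
print(ist(blocks, "on(B, C)"))        # None: the context is silent
print(ist(blocks, "not on(B, C)"))    # None: nonbivalence, no paradox
```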
