Similar Documents
20 similar documents found.
1.
Summary In a simple language for finite automata based on SCCS we introduce three different delay operators, two of which are different versions of an unbounded but finite delay operator. It is argued that the usual notion of bisimulation is inadequate for these operators, and two generalisations are proposed. In both cases we give a complete axiomatisation for finite terms of the language and prove that certain forms of induction are sound; in one of the two cases we also give a complete axiomatisation for the full language.

2.
Viscous Lattices     
Let E be an arbitrary space and δ an extensive dilation of P(E) into itself, with adjoint erosion ε. Then the image δ[P(E)] of P(E) under δ is a complete lattice L in which the sup is the union and the inf is the opening of the intersection according to δ. The lattice L, named viscous, is neither distributive nor complemented. Any dilation on P(E) admits the same expression in L; the erosion in L, however, is the opening according to δ of the erosion in P(E). Given a connection C on P(E), the image of C under δ turns out to be a connection C′ on L as soon as δ(C) ⊆ C. Moreover, the elementary connected openings γx of C and γδ(x) of C′ are linked by the relation γδ(x) = δγx. A comprehensive class of connection-preserving closings is constructed. Two examples, one binary and one numerical (the latter from heart imaging), prove the relevance of viscous lattices in interpolation and in segmentation problems.

Jean Serra obtained the degree of Mining Engineer in 1962 in Nancy, France, and in 1967 his Ph.D. for work dealing with the estimation of the iron ore body of Lorraine by geostatistics. In cooperation with Georges Matheron, he laid the foundations of a new method that he called Mathematical Morphology (1964). Its purpose was to describe quantitatively the shapes and textures of natural phenomena, at micro and macro scales. In 1967 he founded, with G. Matheron, the Centre de Morphologie Mathematique at the School of Mines of Paris, on the campus of Fontainebleau, and has been working in this framework since as a Directeur de Recherches. His main book is a two-volume treatise entitled Image Analysis and Mathematical Morphology (Academic Press, 1982, 1988). He was Vice President for Europe of the International Society for Stereology from 1979 to 1983, founded the International Society for Mathematical Morphology in 1993, and was elected its first president. His achievements include several patents for image-processing devices and various awards and titles, such as the first AFCET award in 1988 and a Doctor Honoris Causa of the University of Barcelona (Spain) in 1993. He recently developed a new theory of segmentation based on set connections (2001–2004) and currently works on colour image processing.
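A compact way to state the lattice structure just described (a sketch in our notation, reading "the opening according to δ" as the composition δε of the adjoint pair):

```latex
% Sup and inf in the viscous lattice L = delta[P(E)], as described above:
% sup is plain set union; inf is the opening gamma = delta . epsilon applied
% to the set intersection. This is our rendering of the abstract's wording.
\[
  A \vee B = A \cup B,
  \qquad
  A \wedge B = \delta\varepsilon\,(A \cap B),
  \qquad A, B \in L .
\]
```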

3.
The aim of this paper is to analyze the solution to the problem of the time evolution of a Gaussian wave packet of a quantum particle moving in the field of a linear force F(x) = λx, with continuously observed position. Particular attention is paid to the case of a harmonic oscillator (λ < 0). In contrast to the case of an unobserved particle, for which the dispersion of the packet oscillates, the oscillations for the observed particle decay and tend to a finite limit. It is shown that, as for a free quantum particle (λ = 0) [Open Syst. Information Dyn. 5, 391 (1998); Phys. Rev. A 60, 687 (1999)], the position dispersion function (for all λ, negative or positive) always decreases at the beginning of the observation. Next, the dispersion function oscillates, at first irregularly, and, passing through regular, rapidly decaying oscillations, reaches its asymptotic value. The same is true for a coherent state, for which the position dispersion of the unobserved particle is constant. It is also shown that for an initial coherent state the asymptotic position dispersion is always smaller than the initial one.
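To make the role of λ explicit (a detail the abstract leaves implicit): the linear force derives from a quadratic potential that is confining exactly when λ is negative, which is why λ < 0 is the harmonic-oscillator case:

```latex
% F(x) = lambda x  follows from  V(x) = -(lambda/2) x^2  via  F = -dV/dx.
% For lambda < 0 this is a harmonic oscillator of frequency sqrt(-lambda/m);
% lambda = 0 recovers the free particle.
\[
  V(x) = -\tfrac{\lambda}{2}\, x^{2},
  \qquad
  \omega = \sqrt{-\lambda/m} \quad (\lambda < 0).
\]
```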

4.
The first proposals for various component tools of what is now called the translator's workstation or translator's workbench are traced back to the 1970s and early 1980s, through a series of often independent proposals made at different stages in the development of computers and of their use by translators.

5.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and let (M, #) denote the free orthogonality monoid over (X, #). Let C0(M, #) be the subset of C(M, #) consisting of all closures of bounded orthogonal sets. We show that C0(M, #) is a suborthomodular lattice of C(M, #) and we provide a necessary and sufficient condition for C0(M, #) to carry a full set of dispersion-free states. (The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.)

6.
When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems. If quantitative temporal information (expressed by clock ticks) is considered, state explosion is even more serious. We present a notion of abstraction of transition systems, where the abstraction is driven by the formulae of a quantitative temporal logic, called qu-mu-calculus, defined in the paper. The abstraction is based on a notion of bisimulation equivalence, called (A, n)-equivalence, where A is a set of actions and n is a natural number. It is proved that two transition systems are (A, n)-equivalent iff they assign the same truth value to all qu-mu-calculus formulae in which the actions occurring in the modal operators are contained in A and the time constraints have values less than or equal to n. We present a non-standard (abstract) semantics for a timed process algebra able to produce reduced transition systems for checking formulae. The abstract semantics, parametric with respect to a set A of actions and a natural number n, produces a reduced transition system (A, n)-equivalent to the standard one. A transformational method is also defined, by means of which it is possible to syntactically transform a program into a smaller one, still preserving (A, n)-equivalence.

7.
The concept of information is virtually ubiquitous in contemporary cognitive science. It is claimed to be processed (in cognitivist theories of perception and comprehension), stored (in cognitivist theories of memory and recognition), and otherwise manipulated and transformed by the human central nervous system. Fred Dretske's extensive philosophical defense of a theory of informational content (semantic information), based upon the Shannon-Weaver formal theory of information, is subjected to critical scrutiny. A major difficulty is identified in Dretske's equivocations in the use of the concept of a signal bearing informational content. Gibson's alternative conception of information (construed as analog by Dretske), while avoiding many of the problems located in the conventional use of signal, raises different but equally serious questions. It is proposed that, taken literally, the human CNS does not extract or process information at all; rather, whatever information is construed as locatable in the CNS is information only for an observer-theorist and only for certain purposes. "Blood courses through our veins, and information through our central nervous system." (A Neuropsychology Textbook)

8.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable, in a sense compulsory, choice of the relevant operations among conditional events) the natural axioms for many different conditional measures besides probability.
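For orientation, the three-valued reading of a conditional event underlying the abstract can be sketched as follows (our rendering; the paper's contribution is precisely to let the third value depend on E|H):

```latex
% De Finetti-style conditional event: determined when the conditioning
% event H occurs, and otherwise taking the third value, which here is
% t(E|H) rather than a single undetermined value u for all events.
\[
  E \mid H =
  \begin{cases}
    \text{true}  & \text{if } E \wedge H \text{ occurs},\\
    \text{false} & \text{if } \lnot E \wedge H \text{ occurs},\\
    t(E \mid H)  & \text{if } \lnot H \text{ occurs}.
  \end{cases}
\]
```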

9.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast; on a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation, in which variables are represented by pointers to their binder. The program can compile an Automath text into one big Automath single-line-style λ-term, and it outputs such a term using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn are different from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web.
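The binder-pointer representation is easy to picture. The sketch below is our own illustration in Python, not aut's C internals: a variable node carries a direct reference to the λ-node that binds it, and printing with de Bruijn indices amounts to measuring the distance to that binder on the current binder stack.

```python
# Illustrative sketch (not aut's actual C code): lambda-terms as a graph in
# which a variable points directly at its binder, plus a printer that emits
# de Bruijn indices (1 = innermost binder).

class Lam:
    def __init__(self):
        self.body = None          # set after construction so Var can point here

class Var:
    def __init__(self, binder):
        self.binder = binder      # direct pointer to the binding Lam node

class App:
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

def to_de_bruijn(term, binders=()):
    if isinstance(term, Var):
        # index = distance from the top of the binder stack to the binder
        return str(len(binders) - binders.index(term.binder))
    if isinstance(term, Lam):
        return "\\ " + to_de_bruijn(term.body, binders + (term,))
    return "(%s %s)" % (to_de_bruijn(term.fun, binders),
                        to_de_bruijn(term.arg, binders))

# \x. \y. x y   prints as   \ \ (2 1)
outer, inner = Lam(), Lam()
inner.body = App(Var(outer), Var(inner))
outer.body = inner
print(to_de_bruijn(outer))
```

Because the identity of a variable is the pointer itself, alpha-equivalence checks and substitution never involve renaming, which is one plausible reason the checker is as fast as reported.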

10.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.

11.
In this paper, we define what we call a unitary immersion of a nonlinear system. We observe that, for classical Hamiltonian systems, this notion contains, in some sense, the concept of quantization. We restrict our attention to degree-zero unitary immersions, where all observation functions must be represented by operators of the type "multiplication by a function". We show that the problem of classifying such degree-zero unitary immersions of a given nonlinear system is not obvious. In some cases, we solve this problem. (Chargé de Recherche au CNRS. Maître de Conférences.)

12.
On improving the accuracy of the Hough transform
The subject of this paper is very high precision parameter estimation using the Hough transform. We identify various problems that adversely affect the accuracy of the Hough transform and propose a new, high-accuracy method that consists of smoothing the Hough array H(ρ, θ) prior to finding its peak location, and then interpolating about this peak to find a final sub-bucket peak. We also investigate the effect of the quantizations Δρ and Δθ of H(ρ, θ) on the final accuracy. We consider in detail the case of finding the parameters of a straight line. Using extensive simulation and a number of experiments on calibrated targets, we compare the accuracy of the method with results from the standard Hough transform method of taking the quantized peak coordinates, with results from taking the centroid about the peak, and with results from least-squares fitting. The largest set of simulations covers a range of line lengths and Gaussian zero-mean noise distributions. This noise model is ideally suited to the least-squares method, and yet the results from our method compare favorably. Compared to the centroid or to standard Hough estimates, the results are significantly better: better than the standard Hough estimates by a factor of 3 to 10. In addition, the simulations show that as Δρ and Δθ are increased (i.e., made coarser), the sub-bucket interpolation maintains a high level of accuracy. Experiments using real images are also described, and in these the new method has errors smaller by a factor of 3 or more compared to the standard Hough estimates.
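A minimal sketch of the two accuracy steps, in Python with NumPy/SciPy. The 3 × 3 mean smoothing and the three-point parabolic fits are plausible stand-ins for the paper's exact choices, and the indexing convention (H[i, j] ≈ H(ρ₀ + iΔρ, θ₀ + jΔθ)) is ours:

```python
# Smooth the accumulator, find the integer peak, then refine each coordinate
# to sub-bucket precision with a parabolic (three-point) fit.
import numpy as np
from scipy.ndimage import uniform_filter

def sub_bucket_peak(H, d_rho, d_theta, rho0=0.0, theta0=0.0):
    """Return a sub-bucket (rho, theta) estimate of the accumulator peak."""
    Hs = uniform_filter(H.astype(float), size=3)   # smooth before peak search
    i, j = np.unravel_index(np.argmax(Hs), Hs.shape)
    i = min(max(i, 1), Hs.shape[0] - 2)            # keep a 3-point neighborhood
    j = min(max(j, 1), Hs.shape[1] - 2)

    def vertex_offset(a, b, c):
        # Vertex of the parabola through (-1, a), (0, b), (1, c).
        denom = a - 2.0 * b + c
        return 0.0 if denom == 0 else 0.5 * (a - c) / denom

    di = vertex_offset(Hs[i - 1, j], Hs[i, j], Hs[i + 1, j])
    dj = vertex_offset(Hs[i, j - 1], Hs[i, j], Hs[i, j + 1])
    return rho0 + (i + di) * d_rho, theta0 + (j + dj) * d_theta
```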

13.
Agent-based technology has been identified as an important approach for developing next generation manufacturing systems. One of the key techniques needed for implementing such advanced systems will be learning. This paper first discusses learning issues in agent-based manufacturing systems and reviews related approaches, then describes how to enhance the performance of an agent-based manufacturing system through learning from history (based on distributed case-based learning and reasoning) and learning from the future (through system forecasting simulation). Learning from history is used to enhance coordination capabilities by minimizing communication and processing overheads. Learning from the future is used to adjust promissory schedules through forecasting simulation, by taking into account the shop floor interactions, production and transportation time. Detailed learning and reasoning mechanisms are described and partial experimental results are presented.

14.
A variotherm mold for micro metal injection molding
In this paper, a variotherm mold was designed and fabricated for the production of 316L stainless steel microstructures by micro metal injection molding (MIM). The variotherm mold incorporated a rapid heating/cooling system, a vacuum unit, a hot sprue, and a cavity pressure transducer. The design of the variotherm mold and the process cycle of MIM using the variotherm mold are described. Experiments were conducted to evaluate the molded microstructures produced using the variotherm mold and a conventional mold. The experiments showed that microstructures of higher aspect ratio, such as 60 μm × height 191 μm and 40 μm × height 174 μm microstructures, could be injection molded with complete filling and demolded successfully using the variotherm mold. Molded microstructures with dimensions of 60 μm × height 191 μm were successfully debound and sintered without visual defects.

15.
The ongoing integration of LANs and WANs to support global communications and businesses, and the emergence of integrated broadband communication services, have created an increased demand for cooperation between customers, network providers, and service providers to achieve end-to-end service management. Such cooperation between autonomous authorities, each defining their own administrative management domains, requires the application of an open, standardized framework to facilitate and regulate interworking. Such a framework is given by the ITU-T recommendations on TMN, where the so-called X interface is of particular importance for inter-domain management. In this paper, we explain the role of the TMN X interface within an inter-domain TMN architecture supporting end-to-end communications management. We identify the important issues that need to be addressed for the definition and realization of TMN X interfaces and report on our practical experiences with the implementation of TMN X interfaces in the PREPARE project.

16.
We show that the simple universal adaptive control law u(t) = N(k(t))y(t), k̇(t) = |y(t)|², with N(k) = (log k)^α cos((log k)^β) and 3α + β < 1, stabilizes all detectable and stabilizable infinite-dimensional systems of Pritchard-Salamon type which are externally stabilized by some scalar output feedback. The same controller is also shown to stabilize time-varying systems satisfying the same type of output-feedback stabilizability.

17.
A formal model of atomicity in asynchronous systems
Summary We propose a generalisation of occurrence graphs as a formal model of computational structure. The model is used to define the atomic occurrence of a program, to characterise interference freeness between programs, and to model error recovery in a decentralised system.

18.
We study a variant of the on-line scheduling problem on two parallel processors. The size of the items is unknown and, as soon as an item is released, it must be immediately assigned to a processor; the assignment cannot be changed later. Optimal algorithms (with respect to competitive ratio) are known for some variants of this problem in which some partial information about the instance is given: the sum of the items is known, or a buffer is available to store a finite number of items. In these cases the best possible competitive ratio of the algorithms is 4/3. In this paper we assume that the sum of the items is known in advance (and normalized to 2) and also that the size of the items does not exceed a fixed upper bound α < 1. We provide, for all possible values of α, a lower bound on the competitive ratio of any algorithm, and we propose different algorithms, for different ranges of the upper bound, for which a worst-case analysis is provided. The proposed algorithms are optimal for 1/2 ≤ α ≤ 3/5, for α = 3/4, and for 16/17 ≤ α < 1.
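To make the setting concrete, here is a small Python simulation. The greedy rule below is only the classical baseline, not one of the paper's optimal algorithms, which exploit the known sum and the bound α:

```python
# On-line setting: each item must be assigned to one of two processors the
# moment it arrives, irrevocably. Greedy assigns to the less-loaded processor.
def greedy_makespan(items):
    loads = [0.0, 0.0]
    for size in items:
        loads[loads.index(min(loads))] += size   # irrevocable assignment
    return max(loads)

# Instance with sum 2 and alpha = 0.5; a lower bound on the optimum makespan
# is max(sum/2, largest item) = 1.0, attained here by {0.5, 0.5} / {0.5, 0.3, 0.2}.
items = [0.5, 0.3, 0.5, 0.2, 0.5]
print(greedy_makespan(items))   # 1.2, so greedy is a factor 1.2 off optimal
```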

19.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed, together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability and efficiency of computation. (This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.)

20.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work:
- Computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given. This problem is reduced to a one-dimensional optimization problem.
- Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df. This problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate.
- Computing the pointwise uncertainty associated with the MSE solution. This problem is reduced to computing three one-dimensional integrals.
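Written out in formulas (our rendering of the construction described in the text, with f_λ the minimizer for a fixed weight λ):

```latex
% Prior induced by a weight lambda (one-variable case), and the MSE estimate
% obtained by averaging over functions AND over the unknown weight; the same
% marginalization applies to the unknown noise variance.
\[
  \Pr(f \mid \lambda) \;\propto\; \exp\!\Big( -\lambda \int f_{uu}^{2}\, du \Big),
\]
\[
  \hat{f}_{\mathrm{MSE}}(x)
  = \int f(x)\, \Pr(f \mid D)\, df
  = \int\!\!\Big[ \int f(x)\, \Pr(f \mid D, \lambda)\, df \Big] \Pr(\lambda \mid D)\, d\lambda .
\]
```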
