Similar documents
1.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but let it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable, and in a sense compulsory, choice of the relevant operations among conditional events) the natural axioms for many different conditional measures besides probability.
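For orientation, here is the standard three-valued reading that this construction generalizes (a hedged sketch; the paper's own axioms are developed from its choice of operations, not from this display):

```latex
% Generalized indicator of a conditional event E|H: the classical
% fixed third value u is replaced by a value depending on E|H itself.
\[
  \mathbb{1}(E \mid H) \;=\;
  \begin{cases}
    1 & \text{if } E \wedge H \text{ is true},\\[2pt]
    0 & \text{if } \neg E \wedge H \text{ is true},\\[2pt]
    t(E \mid H) & \text{if } H \text{ is false},
  \end{cases}
\]
% Taking t(E|H) = P(E|H) recovers conditional probability as one
% instance of the general conditional uncertainty measure.
```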

2.
An induction principle, called context induction, is presented which is appropriate for the verification of behavioural properties of abstract data types. The usefulness of the proof principle is documented by several applications: the verification of behavioural theorems over a behavioural specification, the verification of behavioural implementations, and the verification of forget-restrict-identify implementations. In particular, it is shown that behavioural implementations and forget-restrict-identify implementations (under certain assumptions) can be characterised by the same condition on contexts, i.e. (under the given assumptions) both concepts are equivalent. This leads to the suggestion to use context induction as a uniform proof method for correctness proofs of algebraic implementations.
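To fix intuition: behavioural equality of two data-type implementations is a statement quantified over all observable contexts, and context induction proves it by induction on the structure of those contexts. The sketch below (illustrative names, bounded depth, Python standing in for the algebraic framework) enumerates exactly the contexts such an induction argues over:

```python
# Hedged illustration, not the paper's formal system: two stack
# implementations are behaviourally equal iff no context of push/pop
# operations followed by the observer `top` distinguishes them.
from itertools import product

class ListStack:
    def __init__(self, items=()): self.items = list(items)
    def push(self, x): return ListStack(self.items + [x])
    def pop(self): return ListStack(self.items[:-1])
    def top(self): return self.items[-1] if self.items else None  # observer

class FuncStack:
    def __init__(self, cell=None): self.cell = cell  # (value, rest) pairs
    def push(self, x): return FuncStack((x, self.cell))
    def pop(self): return FuncStack(self.cell[1]) if self.cell else FuncStack()
    def top(self): return self.cell[0] if self.cell else None  # observer

def apply_context(stack, ops):
    for op, arg in ops:
        stack = stack.push(arg) if op == "push" else stack.pop()
    return stack.top()  # observation at the end of the context

def behaviourally_equal_up_to(depth):
    steps = [("push", 0), ("push", 1), ("pop", None)]
    for n in range(depth + 1):
        for ops in product(steps, repeat=n):  # all contexts of length n
            if apply_context(ListStack(), ops) != apply_context(FuncStack(), ops):
                return False
    return True

print(behaviourally_equal_up_to(6))  # True: no context of depth <= 6 distinguishes them
```

Context induction replaces this finite enumeration by an inductive argument over context structure, covering contexts of every depth at once.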

3.
Let H be a separable Hilbert space. We consider the manifold M consisting of density operators ρ on H such that ρ^p is of trace class for some p ∈ (0, 1). We say σ is nearby ρ if there exists C > 1 such that C^{-1}ρ ≤ σ ≤ Cρ. We show that the space of points nearby ρ can be furnished with the two flat connections known as the (±)-affine structures, which are dual relative to the BKM metric. We furnish M with a norm making it into a Banach manifold.
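For reference, the standard form of the BKM (Bogoliubov-Kubo-Mori) metric named in the abstract, together with the "nearby" relation (textbook forms, not quoted from the paper):

```latex
% Standard form of the BKM inner product at a state \rho, under which
% the (\pm)-affine connections are dual to each other.
\[
  \langle A, B \rangle_{\rho}^{\mathrm{BKM}}
  \;=\; \int_{0}^{1} \operatorname{Tr}\!\left( \rho^{s} A \, \rho^{1-s} B \right) \mathrm{d}s .
\]
% The "nearby" relation used in the abstract:
\[
  \sigma \text{ nearby } \rho
  \quad\Longleftrightarrow\quad
  \exists\, C > 1 :\; C^{-1}\rho \;\le\; \sigma \;\le\; C\rho .
\]
```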

4.
Pizer and Eberly introduced the core as the analogue of the medial axis for greyscale images. For two-dimensional images, it is obtained as the ridge of a medial function defined on (2+1)-dimensional scale space. The medial function is defined using Gaussian blurring and measures the extent to which a point is at the center of the object as measured at a given scale. Numerical calculations indicate the core has properties quite different from the medial axis. In this paper we give the generic properties of ridges and cores for two-dimensional images and explain the discrepancy between core and medial axis properties. We place cores in a larger relative critical set structure, which coherently relates disjoint pieces of core. We also give the generic transitions which occur for sequences of images varying with a parameter such as time. The genericity implies the stability of the full structure in any compact viewing area of scale space under sufficiently small L2 perturbations of the image intensity function. We indicate consequences for finding cores and also for adding markings to completely determine the structure of the medial function.
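A minimal sketch of the scale-space setup, assuming scipy and using the negated Laplacian of Gaussian as a stand-in medialness measure (the paper's exact medial function may differ; the ridge detector here is deliberately crude):

```python
# Hedged sketch: a (2+1)-dimensional scale-space "medialness" stack in
# the spirit of the abstract. Not the paper's exact medial function.
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def medialness_stack(image, scales):
    """Return M(sigma, x, y): one blurred medialness map per scale."""
    # -sigma^2 * LoG is a standard scale-normalized blob/medialness response.
    return np.stack([-s**2 * gaussian_laplace(image.astype(float), s)
                     for s in scales])

def ridge_points(stack):
    """Crude ridge/core candidates: local maxima over (scale, x, y)."""
    local_max = stack == maximum_filter(stack, size=3)
    return np.argwhere(local_max & (stack > 0))

img = np.zeros((64, 64)); img[24:40, 16:48] = 1.0   # a bright rectangle
pts = ridge_points(medialness_stack(img, scales=[1, 2, 3, 4, 6]))
print(pts[:5])  # (scale-index, row, col) candidates near the rectangle's middle axis
```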

5.
Ward Elliott (from 1987) and Robert Valenza (from 1989) set out to find the true Shakespeare from among 37 anti-Stratfordian Claimants. As directors of the Claremont Shakespeare Authorship Clinic, Elliott and Valenza developed novel attributional tests, from which they concluded that most Claimants are not-Shakespeare. From 1990-4, Elliott and Valenza developed tests purporting further to reject much of the Shakespeare canon as not-Shakespeare (1996a). Foster (1996b) details extensive and persistent flaws in the Clinic's work: data were collected haphazardly; canonical and comparative text-samples were chronologically mismatched; procedural controls for genre, stanzaic structure, and date were lacking. Elliott and Valenza counter by estimating the maximum erosion of the Clinic's findings to include "five of our 54 tests," which "can amount, at most, to half of one percent" (1998). This essay provides a brief history, showing why the Clinic foundered. Examining several of the Clinic's representative tests, I evaluate claims that Elliott and Valenza continue to make for their methodology. A final section addresses doubts about accuracy, validity and replicability that have dogged the Clinic's work from the outset.

6.
In this paper the problem of routing messages along shortest paths in a distributed network without using complete routing tables is considered. In particular, the complexity of deriving minimum (in terms of number of intervals) interval routing schemes is analyzed under different requirements. For all the cases considered, NP-hardness proofs are given, and some approximability results are provided. Moreover, relations among the different cases considered are studied. This work was supported by the EEC ESPRIT II Basic Research Action Program under Contract No. 7141 "Algorithms and Complexity II", by the EEC Human Capital and Mobility MAP project, and by the Italian MURST 40% project "Algoritmi, Modelli di Calcolo e Strutture Informative".
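A minimal sketch of the object in question: an interval routing scheme labels each edge with an interval of destination addresses instead of storing a full routing table. The toy example below (my illustration, not from the paper) uses one interval per edge on a path graph:

```python
# Hedged sketch: a 1-interval routing scheme on the path 0-1-...-n-1.
# Each edge carries one interval of destinations; routing forwards
# along the unique incident edge whose interval contains the target.
def make_interval_scheme(n):
    scheme = {}
    for i in range(n):
        labels = {}
        if i > 0:
            labels[(i, i - 1)] = range(0, i)        # everything to the left
        if i < n - 1:
            labels[(i, i + 1)] = range(i + 1, n)    # everything to the right
        scheme[i] = labels
    return scheme

def route(scheme, src, dst):
    hops = [src]
    while hops[-1] != dst:
        here = hops[-1]
        # Follow the unique edge whose interval contains dst.
        (_, nxt), = [e for e, ival in scheme[here].items() if dst in ival]
        hops.append(nxt)
    return hops

scheme = make_interval_scheme(6)
print(route(scheme, 4, 1))  # [4, 3, 2, 1] -- a shortest path
```

The paper's hardness results concern minimizing the number of such intervals per edge on general networks, where one interval per edge no longer suffices.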

7.
We develop a theory of communication within branching programs that provides exponential lower bounds on the size of branching programs that are bounded alternating. Our theory is based on the algebraic concept of ω-branching programs, with ω a semiring homomorphism, which generalizes ordinary branching programs, ⊕-branching programs [M2] and MOD_p-branching programs [DKMW]. Due to certain exponential lower and polynomial upper bounds on the size of bounded alternating ω-branching programs we are able to separate the corresponding complexity classes N_ba, co-N_ba, ⊕_ba, and MOD_p-ba, p prime, from each other, and from the classes corresponding to oblivious linear length-bounded branching programs investigated in the past.
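A hedged sketch of the path-counting view underlying ⊕- and MOD_p-branching programs: acceptance is decided not by the existence of an accepting path but by the number of accepting paths evaluated in a semiring (the encoding below is illustrative, not the paper's formalism):

```python
# Hedged sketch: evaluate a branching program by counting accepting
# source-to-sink paths, optionally modulo p (MOD_p acceptance).
from functools import lru_cache

def count_accepting_paths(nodes, source, sink, x, mod=None):
    """nodes: {id: ("var", i, lo, hi)} for internal nodes, {id: None}
    for sinks. Counts paths consistent with the input bits x."""
    @lru_cache(maxsize=None)
    def paths(v):
        if v == sink:
            return 1
        if nodes[v] is None:          # non-accepting sink
            return 0
        _, i, lo, hi = nodes[v]
        nxt = hi if x[i] else lo      # follow the edge selected by x_i
        r = paths(nxt)
        return r % mod if mod else r
    return paths(source)

# x0 XOR x1 as a small branching program; MOD_2 acceptance: accept iff
# the number of accepting paths is nonzero mod 2.
bp = {0: ("var", 0, 1, 2), 1: ("var", 1, "rej", "acc"),
      2: ("var", 1, "acc", "rej"), "acc": None, "rej": None}
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, count_accepting_paths(bp, 0, "acc", x, mod=2))
```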

8.
Since Aristotle it has been recognised that a valid syllogism cannot have two particular premises. However, that is not how a lay person sees it, at least as long as the premises read "many", "most", etc., instead of a plain "some". The lay people are right if one considers that these syllogisms have not strict but approximate (Zadeh) validity. Typically, only particular premises are available in everyday life, and one is dependent on such syllogisms. Some rules on the usage of particular premises are given below.
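One worked example of the distinction at stake (my illustration, not drawn from the paper): two "most" premises do license a conclusion, where two plain "some" premises would not.

```latex
% Premises: "most A are B" and "most A are C", read as
\[
  |A \cap B| > \tfrac{1}{2}|A|, \qquad |A \cap C| > \tfrac{1}{2}|A| .
\]
% Since (A \cap B) \cup (A \cap C) \subseteq A, inclusion-exclusion gives
\[
  |A \cap B \cap C| \;\ge\; |A \cap B| + |A \cap C| - |A| \;>\; 0 ,
\]
% so "some B are C" follows; from "some A are B" and "some A are C"
% alone, nothing follows.
```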

9.
Summary The author's inquiry [1] on learning systems is generalized in the following respects: the process of learning, instead of coming to an end when the learning goal has been reached, is supposed to last forever, so that definitive learning in the above sense, as well as phenomena such as forgetting, re-learning, changing the goal, etc., become describable. We take over the notion of semi-uniform solvability of a set of learning problems (2.2), but now (trivial cases excluded) the whole capacity of a learning system is never s.u. solvable; finite such sets are. The notion of a solving-basis of such a set is introduced, and we state necessary conditions for a set to possess such a basis (2.14), so that examples of sets without a basis can be provided. On the other hand, any s.u. solvable set has a basis. The notion of uniform solvability (3.1) reinforces that of s.u. solvability, and sufficient conditions are given for a set to be uniformly solvable (3.6). In some finite cases, s.u. solvability, existence of a basis and uniform solvability coincide (3.7–3.9). Finally, we give the construction of the weakest learning system solving a uniformly solvable problem set (3.12–3.19). (A German version was submitted on 30 May 1972.)

10.
We consider the half-space range-reporting problem: given a set S of n points in R^d, preprocess it into a data structure so that, given a query half-space, all k points of S in it can be reported efficiently. We extend previously known static solutions to dynamic ones, supporting insertions and deletions of points of S. For a given parameter m, n ≤ m ≤ n^{⌊d/2⌋}, and an arbitrarily small positive constant ε, we achieve O(m^{1+ε}) space and preprocessing time, O((n/m^{1/⌊d/2⌋}) log n + k) query time, and O(m^{1+ε}/n) amortized update time (d ≥ 3). We present, among others, the following applications: an O(n^{1+ε})-time algorithm for computing convex layers in R^3, and an output-sensitive algorithm for computing a level in an arrangement of planes in R^3, whose time complexity is O((b+n) n^ε), where b is the size of the level. Work by the first author has been supported by National Science Foundation Grant CCR-91-06514. A preliminary version of this paper appeared in Agarwal et al. [2], which also contains the results of [20] on dynamic bichromatic closest pair and minimum spanning trees.

11.
Data structures are interpreted as sets which may contain repeated elements (multisets). An algebraic system close to the ring of integers is constructed on these sets. Translated from Kibernetika, No. 4, pp. 82–88, July–August, 1990.
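One hedged reading of such a construction, with multisets as integer-valued multiplicity functions and pointwise ring-like operations (the paper's exact operations may differ):

```python
# Hedged sketch: multisets as multiplicity functions, with pointwise
# operations mimicking integer-ring arithmetic.
from collections import Counter

def madd(a, b):
    """Pointwise sum of multiplicities (plays the role of ring +)."""
    out = Counter(a)
    for k, v in b.items():
        out[k] += v
    return out

def mmul(a, b):
    """Pointwise product of multiplicities (plays the role of ring *)."""
    return Counter({k: a[k] * b[k] for k in a.keys() & b.keys()})

x = Counter("aabbc")          # {a: 2, b: 2, c: 1}
y = Counter("abccc")          # {a: 1, b: 1, c: 3}
print(madd(x, y))             # Counter({'c': 4, 'a': 3, 'b': 3})
print(mmul(x, y))             # Counter({'c': 3, 'a': 2, 'b': 2})
```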

12.
We construct a nearest-neighbor interaction whose ground states encode the solutions to the NP-complete problem INDEPENDENT SET for cubic planar graphs. The important difference to Hamiltonians previously used in adiabatic quantum computing is that our Hamiltonian is spatially local. Due to its special structure, our Hamiltonian can be easily simulated by Ising interactions between adjacent particles on a 2D rectangular lattice; we describe the required pulse sequences. Our methods could help to implement adiabatic quantum computing with physically reasonable Hamiltonians such as short-range interactions. Moreover, this universal resource Hamiltonian can be used for different graphs by applying suitable control operations, in contrast to a previous proposal where the Hamiltonians have to be wired in hardware for each graph. PACS: 03.67.Lx
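For concreteness, one standard Ising-style encoding of independent set (a hedged illustration; the paper's specific construction differs in additionally arranging nearest-neighbor locality on the 2D lattice):

```latex
% With n_i = (1 - Z_i)/2 marking vertex i as chosen, a standard
% penalty Hamiltonian for INDEPENDENT SET on a graph (V, E) is
\[
  H \;=\; -\sum_{i \in V} n_i \;+\; J \sum_{(i,j) \in E} n_i n_j ,
  \qquad J > 1 .
\]
% Each edge penalty J outweighs the reward of a single vertex, so the
% ground states of H are exactly the maximum independent sets.
```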

13.
Given a finite set E ⊂ R^n, the problem is to find clusters (subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation is given to the problem. The proposed algorithm operates on groups of points, called samplings (samplings may be thought of as multiple centers, or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.

14.
A central component of the analysis of panel clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel clustering algorithms.
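A minimal sketch of the admissibility test itself, assuming axis-aligned bounding boxes as the concrete subdomains (a common instantiation; the paper works with general clusters τ, σ):

```python
# Hedged sketch: check min{diam(t), diam(s)} <= 2 * eta * dist(t, s)
# for two axis-aligned boxes -- the test deciding where a panel
# clustering method may replace the kernel by a separable expansion.
import math

def diam(box):
    lo, hi = box
    return math.dist(lo, hi)

def dist(b1, b2):
    # Euclidean distance between two axis-aligned boxes.
    (lo1, hi1), (lo2, hi2) = b1, b2
    gaps = [max(l2 - h1, l1 - h2, 0.0)
            for l1, h1, l2, h2 in zip(lo1, hi1, lo2, hi2)]
    return math.hypot(*gaps)

def admissible(b1, b2, eta):
    return min(diam(b1), diam(b2)) <= 2 * eta * dist(b1, b2)

near = (((0, 0), (1, 1)), ((1.5, 0), (2.5, 1)))   # boxes 0.5 apart
far  = (((0, 0), (1, 1)), ((4, 0), (5, 1)))       # boxes 3 apart
print(admissible(*near, eta=0.5))  # False: too close, keep subdividing
print(admissible(*far,  eta=0.5))  # True:  far field, approximate here
```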

15.
Summary Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate-logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate-logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions, and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

16.
This paper describes a machine for reducing a λ-formula (explicitly given, or implicitly by a system of recursive equations) to principal β-η-normal form, with particular attention to the memory structures needed for the purpose, and with some important features: (1) any kind of collision is permitted; (2) the processing of subformulas which will be thrown away is avoided; (3) there is no need to introduce any fixed-point operator. The machine structure entails: (1) a store to memorize, as side-effects, assignment statements with the r.h.s. of a given shape; (2) a number of stacks, one for every λ in the initial formula, partitioned naturally into classes (chains); these stacks admit as entries only words representing variables, and they are peculiar in that the operations admitted on the top are writing and erasing, while the operations admitted on the pseudo-top are reading, read-protecting, and resetting readability (the last two being chain operations). This structure is critically motivated. (3) A workstack. (4) A pointerstack. The computation runs through four phases: two generation phases and two run phases. Every generation (run) phase is rather recognition- (transformation-) oriented, but we found it more stimulating to emphasize technical similarities rather than methodological differences. Every phase is described, and four examples are extensively developed.
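To make feature (2) concrete, here is a minimal normal-order β-reducer (an illustration in the spirit of the machine, not its actual store/stack architecture): because arguments are substituted unreduced, a subformula that the function discards is never processed.

```python
# Hedged sketch: tiny normal-order beta-reduction. Terms are
# ("var", x), ("lam", x, body), ("app", f, a).
import itertools
fresh = (f"v{i}" for i in itertools.count())

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s]."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    _, y, body = t
    if y == x:
        return t
    z = next(fresh)  # rename the bound variable to avoid capture
    return ("lam", z, subst(subst(body, y, ("var", z)), x, s))

def normalize(t):
    tag = t[0]
    if tag == "app":
        f = normalize(t[1])
        if f[0] == "lam":                              # beta step: the
            return normalize(subst(f[2], f[1], t[2]))  # argument is passed unreduced
        return ("app", f, normalize(t[2]))
    if tag == "lam":
        return ("lam", t[1], normalize(t[2]))
    return t

# K = \x.\y.x discards its second argument; OMEGA-like terms would
# loop forever under eager evaluation, but normal order never touches them.
K = ("lam", "x", ("lam", "y", ("var", "x")))
OMEGA_ish = ("app", ("lam", "w", ("app", ("var", "w"), ("var", "w"))),
             ("lam", "w", ("app", ("var", "w"), ("var", "w"))))
print(normalize(("app", ("app", K, ("var", "a")), OMEGA_ish)))  # ('var', 'a')
```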

17.
Workflow Management Systems (WFMSs) are often used in the context of B2B integration as a base technology for implementing business-to-business (B2B) integration processes across enterprises. In this context the notion of distributed inter-organizational workflows is introduced to indicate the collaboration of enterprises on a process level. This notion requires the thorough examination presented in this article, since WFMSs were not designed with inter-enterprise distribution as one of their design goals. At a closer look, the proposed use of WFMSs in the context of B2B integration is often naïve and inappropriate, and consequently does not address the real requirements found in enterprises. Enterprises do not share common workflow definitions, let alone common workflow instance execution state, and have no intention of doing so, owing to competitive knowledge protection. Furthermore, trading-partner-specific business rules within enterprises are not accounted for, leading to an unwanted explosion of workflow definitions. This article clarifies the notion of distributed inter-organizational workflows as well as private and public processes. Based on this definition, the appropriate use of WFMSs is shown in the context of an overall B2B integration solution that allows enterprises to protect their competitive knowledge while participating in B2B integration.
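A hedged illustration of the public/private split the article argues for (all names hypothetical): partners agree only on a public process of message exchanges, while each enterprise binds it to a private workflow it never shares.

```python
# Hedged sketch: a public process (agreed message exchanges) bound to
# one enterprise's private workflow, which stays opaque to partners.
PUBLIC_PO_PROCESS = ["send:PurchaseOrder", "recv:POAcknowledgement"]

class PrivateWorkflow:
    """One enterprise's internal realization of the public process."""
    def __init__(self, partner, steps):
        self.partner, self.steps = partner, steps

    def run_step(self, public_step):
        # Trading-partner-specific rules live here, inside one private
        # binding, instead of in N copies of a full workflow definition.
        return self.steps.get(public_step, [])

buyer = PrivateWorkflow("ACME", {
    "send:PurchaseOrder": ["check_budget", "sign_off", "emit_PO"],
    "recv:POAcknowledgement": ["update_ERP"],
})
for step in PUBLIC_PO_PROCESS:
    print(step, "->", buyer.run_step(step))
```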

18.
Games such as CHESS, GO and OTHELLO can be represented by minimax game trees. Among the various search procedures for solving such game trees, α-β and SSS* are perhaps the best known. Although it is proved that SSS* explores only a subset of the nodes explored by α-β, α-β is commonly believed to be faster in real applications, since it requires very little memory space and hence its storage-management cost is low. Contrary to this folklore, however, this paper reports, using the OTHELLO game as an example, that SSS* is much faster than α-β. It is also demonstrated that SSS* can be modified to make the required memory space controllable to some extent, while retaining the high efficiency of the original SSS*. This research was partially supported by the Ministry of Education, Science and Culture of Japan, under a Scientific Grant-in-Aid.
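For contrast, a minimal α-β implementation, the low-memory baseline SSS* is measured against (illustrative sketch; SSS* itself maintains a sorted OPEN list of solution-tree bounds, which is where its memory cost goes):

```python
# Hedged sketch: plain alpha-beta minimax on an explicit game tree.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        v = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, v); alpha = max(alpha, v)
        else:
            best = min(best, v); beta = min(beta, v)
        if beta <= alpha:                   # cut-off: prune remaining children
            break
    return best

# Depth-2 tree: a MAX node over three MIN nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))     # 3
```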

19.
Continuation-passing style (CPS) is a good abstract representation to use for compilation and optimization: it has a clean semantics and is easily manipulated. We examine how CPS expresses the saving and restoring of registers in source-language procedure calls. In most CPS-based compilers, the context of the calling procedure is saved in a continuation closure—a single variable that is passed as an argument to the function being called. This closure is a record containing bindings of all the free variables of the continuation; that is, registers that hold values needed by the caller after the call are written to memory in the closure, and fetched back after the call. Consider the procedure-call mechanisms used by conventional compilers. In particular, registers holding values needed after the call must be saved and later restored. The responsibility for saving registers can lie with the caller (a caller-saves convention) or with the called function (callee-saves). In practice, to optimize memory traffic, compilers find it useful to have some caller-saves registers and some callee-saves. Conventional CPS-based compilers that pass a pointer to a record containing all the variables needed after the call (i.e., the continuation closure) are using a caller-saves convention. We explain how to express callee-save registers in continuation-passing style, and give measurements showing the resulting improvement in execution time. Supported in part by NSF Grant CCR-8914570.
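A hedged sketch of the caller-saves reading described above, with Python closures standing in for continuation-closure records (names are illustrative; the paper's callee-saves variant instead threads such values through the callee as extra arguments):

```python
# Hedged sketch: in CPS, values live across a call exactly when they
# are free variables of the continuation; capturing them in the
# continuation closure is the caller-saves discipline.
def mul_cps(a, b, k):
    k(a * b)

def area_of_two_rects(w1, h1, w2, h2, k):
    # w2 and h2 are needed after the first call, so they are captured
    # in the continuation closure ("saved by the caller").
    def after_first(area1):                 # continuation closure #1
        def after_second(area2):            # continuation closure #2
            k(area1 + area2)                # area1 also lived across a call
        mul_cps(w2, h2, after_second)
    mul_cps(w1, h1, after_first)

area_of_two_rects(2, 3, 4, 5, print)        # prints 26
```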
