Similar Documents
20 similar documents found (search time: 31 ms)
1.
Summary Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

2.
In this paper I consider how the computer can or should be accepted in Japanese schools. The concept of teaching in Japan stresses learning from a long-term perspective, whereas the instructional technology on which CAI and tutoring systems depend emphasizes step-by-step attainment in a relatively short time. The former is reluctant to use the computer, but both share a Platonic, goal-oriented perspective. The Socratic teacher, however, who intends to activate students' innate disposition to become better, would find another way of teaching and of using the computer.

3.
Since Aristotle it has been recognised that a valid syllogism cannot have two particular premises. That, however, is not how a lay person sees it, at least as long as the premises read "many", "most", etc., instead of a plain "some". Lay people are right if one considers that such syllogisms have not strict but approximate (Zadeh) validity. Typically only particular premises are available in everyday life, and one is dependent on such syllogisms. Some rules on the usage of particular premises are given below.
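To make the notion of approximate validity concrete, here is a minimal sketch (our own illustration, not from the paper): it reads "most" as "at least a threshold proportion", samples small random models, and measures how often the conclusion of a two-particular-premise syllogism holds when both premises do. The threshold and model sizes are assumptions.

```python
import random

def most(xs, ys, threshold=0.8):
    """'Most X are Y': at least `threshold` of X lies in Y (X nonempty)."""
    return bool(xs) and len(xs & ys) / len(xs) >= threshold

def conclusion_frequency(trials=10_000, universe=20):
    """How often 'Most A are C' holds given 'Most A are B' and 'Most B are C'."""
    holds = total = 0
    for _ in range(trials):
        A, B, C = ({x for x in range(universe) if random.random() < 0.5}
                   for _ in range(3))
        if most(A, B) and most(B, C):  # two particular premises
            total += 1
            holds += most(A, C)        # the tentative conclusion
    return holds / total if total else float("nan")

print(f"estimated approximate validity: {conclusion_frequency():.2f}")
```

The measured frequency, rather than a yes/no verdict, is what quantifies the approximate validity of the pattern.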

4.
The first proposals for various component tools of what is now called the translator's workstation or translator's workbench are traced back to the 1970s and early 1980s, in various, often independent, initiatives at different stages in the development of computers and of their use by translators.

5.
We analyze four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type (part of speech) information, sentence length in terms of words, word length in text, and word length in vocabulary. For the analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are the most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators of each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multivariate analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
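As a sketch of how such markers can be extracted (an illustrative reconstruction, not the authors' code; the top-k cutoff and the tokenization are assumptions), the following computes most-frequent-word rates and mean word length per five-thousand-word block:

```python
from collections import Counter

BLOCK_SIZE = 5000  # words per text block, as in the study

def blocks(words, size=BLOCK_SIZE):
    """Split a novel into non-overlapping blocks of `size` words."""
    return [words[i:i + size] for i in range(0, len(words) - size + 1, size)]

def block_features(block, vocab):
    """Rates of the chosen most-frequent words, plus mean word length."""
    counts = Counter(block)
    features = [counts[w] / len(block) for w in vocab]
    features.append(sum(map(len, block)) / len(block))  # word length in text
    return features

def feature_matrix(text, top_k=30):
    """One feature vector per block; `vocab` holds the corpus-wide top words."""
    words = [w.lower().strip('.,;:!?"()') for w in text.split()]
    vocab = [w for w, _ in Counter(words).most_common(top_k)]
    return [block_features(b, vocab) for b in blocks(words)], vocab
```

Feature vectors like these are what would then feed the stepwise discriminant analysis described above.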

6.
Summary Distributed mutual exclusion algorithms have mainly been compared by the number of messages exchanged per critical section execution. In such algorithms, no attention has been paid to the serialization order of the requests: they adopt the FCFS discipline. Conversely, the insertion of priority serialization disciplines, such as Shortest-Job-First, Head-Of-Line, Shortest-Remaining-Job-First, etc., can be useful in many applications to optimize some performance indices. However, such priority disciplines are prone to starvation. The goal of this paper is to investigate and evaluate the impact of the insertion of a priority discipline in Maekawa-type algorithms. Priority serialization disciplines are inserted by means of a gated batch mechanism which avoids starvation. In a distributed algorithm, such a mechanism needs synchronizations among the processes. In order to highlight the usefulness of the priority-based serialization discipline, we show how it can be used to improve the average response time compared to the FCFS discipline. The gated batch approach exhibits other advantages: algorithms are inherently deadlock-free and messages do not need to piggyback timestamps. We also show that, under heavy demand, algorithms using gated batches exchange fewer messages than Maekawa-type algorithms per critical section execution. Roberto Baldoni was born in Rome on February 1, 1965. He received the Laurea degree in electronic engineering in 1990 and the Ph.D. degree in Computer Science in 1994, both from the University of Rome "La Sapienza". Currently, he is a researcher in computer science at IRISA, Rennes (France). His research interests include operating systems, distributed algorithms, network protocols and real-time multimedia applications. Bruno Ciciani received the Laurea degree in electronic engineering in 1980 from the University of Rome "La Sapienza". From 1983 to 1991 he was a researcher at the University of Rome "Tor Vergata". He is currently full professor in Computer Science at the University of Rome "La Sapienza". His research activities include distributed computer systems, fault-tolerant computing, languages for parallel processing, and computer system performance and reliability evaluation. He has published in IEEE Trans. on Computers, IEEE Trans. on Knowledge and Data Engineering, IEEE Trans. on Software Engineering and IEEE Trans. on Reliability. He is the author of a book titled Manufacturing Yield Evaluation of VLSI/WSI Systems, to be published by IEEE Computer Society Press. This research was supported in part by the Consiglio Nazionale delle Ricerche under grant 93.02294.CT12. This author is also supported by a grant of the Human Capital and Mobility project of the European Community under contract No. 3702 CABERNET.
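The gated batch idea can be sketched in a few lines. The toy below is centralized and sequential, whereas the paper works with distributed Maekawa-type algorithms; it only illustrates why gating avoids starvation: requests arriving while a batch is in service wait behind the gate, batches are served FCFS, and the priority discipline (here Shortest-Job-First) reorders requests only within a batch.

```python
import heapq

class GatedBatchQueue:
    """Toy gated-batch serializer: SJF inside a batch, FCFS across batches."""

    def __init__(self):
        self._current = []  # heap of (job_length, req_id) being served
        self._gated = []    # requests accumulated behind the gate

    def request(self, req_id, job_length):
        """A new request always joins the *next* batch."""
        self._gated.append((job_length, req_id))

    def grant(self):
        """Return the id of the next request admitted to the critical section."""
        if not self._current:            # current batch done: open the gate
            heapq.heapify(self._gated)
            self._current, self._gated = self._gated, []
        return heapq.heappop(self._current)[1] if self._current else None

q = GatedBatchQueue()
q.request("p1", 9); q.request("p2", 3)
print(q.grant())      # p2 -- shorter job served first within the batch
q.request("p3", 1)    # arrives late: gated into the next batch, so p1 cannot starve
print(q.grant(), q.grant())  # p1 p3
```

Because no request can jump into the batch currently in service, every request is served after at most one full batch, which is the starvation-freedom argument in miniature.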

7.
A weaving W is a simple arrangement of lines (or line segments) in the plane together with a binary relation specifying which line is above the other. A system of lines (or line segments) in 3-space is called a realization of W if its projection into the plane is W and the above-below relations between the lines respect the specifications. Two weavings are equivalent if the underlying arrangements of lines are combinatorially equivalent and the above-below relations are the same. An equivalence class of weavings is said to be a weaving pattern. A weaving pattern is realizable if at least one element of the equivalence class has a three-dimensional realization. A weaving (pattern) W is called perfect if, along each line (line segment) of W, the lines intersecting it are alternately above and below. We prove that (i) a perfect weaving pattern of n lines is realizable if and only if n ≤ 3, (ii) a perfect m by n weaving pattern of line segments (in a grid-like fashion) is realizable if and only if min(m, n) ≤ 3, and (iii) if n is sufficiently large, then almost all weaving patterns of n lines are nonrealizable. János Pach has been supported in part by Hungarian NFSR Grant 1812, NSF Grant CCR-8901484, and the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), a National Science Foundation Science and Technology Center, under NSF Grant STC88-09648. Richard Pollack has been supported in part by NSA Grant MDA904-89-H-2030, NSF Grants DMS-85-01947 and CCR-8901484, and DIMACS. Emo Welzl has been supported in part by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (project ALCOM) and DIMACS.
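The alternation condition has a simple operational reading. A minimal checker (our own toy encoding, not from the paper): represent a weaving by listing, for each line, the above/below flags of its crossings in the order they occur along that line.

```python
def is_perfect(weaving):
    """True iff, along every line, the crossing lines are alternately
    above ('a') and below ('b') -- the defining property of a perfect weaving."""
    return all(x != y for flags in weaving.values()
               for x, y in zip(flags, flags[1:]))

# Three pairwise-crossing lines, woven cyclically (n = 3, hence realizable):
W = {"l1": ["a", "b"], "l2": ["b", "a"], "l3": ["a", "b"]}
print(is_perfect(W))  # True
```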

8.
Today, documents and data are likely to be encountered in electronic form. This creates a challenge for the legal system since its rules of evidence evolved to deal with tangible (physical) evidence. Digital evidence differs from tangible evidence in various respects, which raise important issues as to how digital evidence is to be authenticated, ascertained to be reliable and determined to be admissible in criminal or civil proceedings. This article explains how digital evidence differs from traditional physical evidence and reviews the current state of the law with regard to the processes of authentication, reliability and admissibility.

9.
We construct equivalent localized versions of a formula, adding assumptions simultaneously to various locations, where the particular location determines what is added. Inference rules that take advantage of localized formulas are presented for sequent calculi in which the left-hand side of sequents can be used to accumulate the background assumptions (or contexts) of assertions. The intended application is the automatic generation of tractable justifying lemmas for substitution operations in interactive proof development systems, especially those concerned with mathematical topics where manipulation of deeply embedded terms is desirable.

10.
A New Class of Depth-Size Optimal Parallel Prefix Circuits
Given n values x1, x2, ..., xn and an associative binary operation ∘, the prefix problem is to compute x1 ∘ x2 ∘ ··· ∘ xi for 1 ≤ i ≤ n. Many combinational circuits for solving the prefix problem, called prefix circuits, have been designed. It has been proved that the size s(D(n)) and the depth d(D(n)) of an n-input prefix circuit D(n) satisfy the inequality d(D(n)) + s(D(n)) ≥ 2n − 2; thus, a prefix circuit is depth-size optimal if d(D(n)) + s(D(n)) = 2n − 2. In this paper, we construct a new depth-size optimal prefix circuit SL(n). In addition, we can build depth-size optimal prefix circuits whose depth can be any integer between d(SL(n)) and n − 1. SL(n) has the same maximum fan-out lg n + 1 as Snir's SN(n), but the depth of SL(n) is smaller; thus, SL(n) is faster. Compared with another optimal prefix circuit LYD(n), d(LYD(n)) + 2 ≥ d(SL(n)) ≥ d(LYD(n)). However, LYD(n) may have a fan-out of at most 2 lg n − 2, and the fan-out of LYD(n) is greater than that of SL(n) for almost all n ≥ 12. Because an operation node with greater fan-out occupies more chip area and is slower in VLSI implementation, in most cases SL(n) needs less area and may be faster than LYD(n). Moreover, it is much easier to design SL(n) than LYD(n).
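For contrast, the trivial serial circuit already attains the depth-size bound with equality: it uses n − 1 operation nodes in a chain of depth n − 1, so d + s = 2n − 2. Circuits such as SL(n) keep the sum at 2n − 2 while reducing the depth. A runnable sketch of the serial baseline (illustrative only, not the paper's construction):

```python
from operator import add

def serial_prefix(xs, op):
    """Serial prefix circuit: size n-1, depth n-1, so depth + size = 2n - 2
    (depth-size optimal, but the slowest member of the family)."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(op(out[-1], x))  # one operation node per further output
    return out

print(serial_prefix([1, 2, 3, 4, 5], add))  # [1, 3, 6, 10, 15]
```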

11.
Given a finite set E ⊂ R^n, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation is given to the problem. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.
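A toy rendering of the idea (our own sketch under simplifying assumptions: squared Euclidean distance, fixed core size, random initialization), in which each cluster is represented by a small sampling rather than a single center:

```python
import random

def dist2(p, q):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def cluster_by_samplings(points, k, core_size=3, iters=10):
    """Assign each point to the sampling (core) nearest on average, then
    refresh each core with its cluster's most typical members (those with
    the smallest summed distance to the rest of the cluster)."""
    cores = [random.sample(points, core_size) for _ in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum(dist2(p, c) for c in cores[i]))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if len(members) >= core_size:
                typical = sorted(members,
                                 key=lambda m: sum(dist2(m, q) for q in members))
                cores[i] = typical[:core_size]
    return cores, clusters
```

Note how the returned cores answer both questions at once: they delimit the clusters and they are, by construction, the most typical elements of each.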

12.
The mainstream of legal theory tends to incorporate unwritten principles into the law. Weighing of principles plays a great role in legal argumentation, inter alia in statutory interpretation. A weighing and balancing of principles and other prima facie reasons is a jump: the inference is not conclusive. To deal with defeasibility and weighing, a jurist needs both belief-revision logic and nonmonotonic logic. The systems of nonmonotonic logic included in the present volume provide logical tools enabling one to speak precisely about various kinds of rules about rules, dealing with such things as applicability of rules, what is assumed by rules, priority between rules and the burden of proof. Nonmonotonic logic is an example of an extension of the domain of logic, but the more far-reaching the extension is, the greater the problems it meets. It seems impossible to make a logical reconstruction of the totality of legal argumentation. The lawyers' search for reasons has no obvious end point. Ideally, the search for reasons may end when one arrives at a coherent totality of knowledge; in other words, coherence is the termination condition of reasoning. Both scientific knowledge and knowledge of legal and moral norms progress by trial and error, and one must resort to a certain convention to define what "error" means. The main difference is, however, that the conventions of science are much more precise than those of legal scholarship. Consequently, determination of error in legal science is often holistic and circular: the reasons determining that a legal theory is erroneous are not more certain than the contested theory itself. A strict and formal logical analysis cannot give us the full grasp of legal rationality. A weaker logical theory, allowing for nonmonotonic steps, comes closer, at the expense of an inevitable loss of computational efficiency. Coherentist epistemology grasps even more of this rationality, at the expense of a loss of preciseness.

13.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term "model" is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from the empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of "model" in the area of Software Engineering from a philosophical point of view, an important topic since the confusion that arises directly affects other debates where "model" is a key concept. We also suggest that the need for a philosophical discussion on the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called the Philosophy of Engineering.

14.
A central component of the analysis of panel clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity, so that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel clustering algorithms.
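In code, checking the condition for a pair of cells is a one-liner. A sketch for 1-D intervals (our own example; diam and dist generalize to boxes in the obvious way):

```python
def diam(cell):
    """Diameter of a closed interval (lo, hi)."""
    lo, hi = cell
    return hi - lo

def dist(c1, c2):
    """Distance between two closed intervals (0 if they overlap)."""
    (a, b), (c, d) = c1, c2
    return max(0.0, max(a, c) - min(b, d))

def eta_admissible(tau, sigma, eta):
    """The condition from the abstract:
    min(diam(tau), diam(sigma)) <= 2 * eta * dist(tau, sigma)."""
    return min(diam(tau), diam(sigma)) <= 2 * eta * dist(tau, sigma)

# Well-separated cells pass even for a small eta: 1.0 <= 2 * 0.5 * 2.0
print(eta_admissible((0.0, 1.0), (3.0, 4.0), eta=0.5))  # True
```

Taylor-based schemes need η below a kernel-dependent constant for the approximation on an admissible pair to converge; the paper's point is that interpolation lifts that restriction.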

15.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c, p) or ist(c, ¬p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.
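The nonbivalence of ist has a direct computational reading. A toy model (our own sketch, far short of the paper's full first-order language): a context stores truth values only for the propositions it settles, and ist returns None where neither ist(c, p) nor ist(c, ¬p) holds.

```python
from typing import Optional

class Context:
    """A context as a partial assignment of truth values to propositions."""

    def __init__(self, facts):
        self.facts = facts  # proposition (str) -> bool

    def ist(self, p) -> Optional[bool]:
        """Three-valued: True, False, or None (the problematic cases)."""
        return self.facts.get(p)

c = Context({"on(A, B)": True, "clear(B)": False})
print(c.ist("on(A, B)"))  # True
print(c.ist("red(A)"))    # None: neither ist(c, p) nor ist(c, not p)
```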

16.
Summary Directed node-label controlled graph grammars (DNLC grammars) are sequential graph rewriting systems. In a direct derivation step of a DNLC grammar a single node is rewritten. Both the rewriting of a node and the embedding of a daughter graph in a host graph are controlled by the labels of nodes only. We study the use of those grammars to define string languages. In particular we provide a characterization of the class of context-free string languages in terms of DNLC grammars.
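A single direct derivation step can be sketched concretely. The encoding below is our own toy (directed adjacency sets, with hypothetical label-indexed connection relations cin and cout), not the paper's formalism: the rewritten node is removed, the daughter graph is inserted, and edges to the former neighbours are re-established purely by comparing node labels.

```python
def dnlc_step(edges, labels, node, d_edges, d_labels, cin, cout):
    """Rewrite `node`: replace it by the daughter graph and embed the daughter
    using node labels only. `cin` holds (pred_label, daughter_label) pairs that
    get an incoming edge; `cout` holds (daughter_label, succ_label) pairs."""
    preds = {v for v, out in edges.items() if node in out}
    succs = edges.pop(node)
    labels.pop(node)
    for out in edges.values():
        out.discard(node)
    edges.update({d: set(out) for d, out in d_edges.items()})
    labels.update(d_labels)
    for d in d_edges:
        for p in preds:
            if (labels[p], d_labels[d]) in cin:
                edges[p].add(d)
        for s in succs:
            if (d_labels[d], labels[s]) in cout:
                edges[d].add(s)
    return edges, labels

g, lab = {1: {2}, 2: set()}, {1: "S", 2: "a"}
# Rewrite node 1 by a two-node daughter graph; embedding is driven by labels.
g, lab = dnlc_step(g, lab, 1, {3: {4}, 4: set()}, {3: "b", 4: "S"},
                   cin=set(), cout={("b", "a"), ("S", "a")})
print(g)  # {2: set(), 3: {2, 4}, 4: {2}}
```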

17.
A note on dimensions and factors
In this short note, we discuss several aspects of dimensions and the related construct of factors. We concentrate on those aspects that are relevant to articles in this special issue, especially those dealing with the analysis of the wild animal cases discussed in Berman and Hafner's 1993 ICAIL article. We review the basic ideas about dimensions, as used in HYPO, and point out differences with factors, as used in subsequent systems like CATO. Our goal is to correct certain misconceptions that have arisen over the years.

18.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.

19.
A technique to model and to verify distributed algorithms is suggested. This technique (based on Petri nets) reduces the modelling and analysis effort to a reasonable level. The paper outlines the technique using the example of a typical network algorithm, the echo algorithm. Supported by the DFG projects "Verteilte Algorithmen" and "Konsensalgorithmen".
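For readers unfamiliar with it, the echo algorithm floods "explorer" messages from an initiator and collects "echo" messages back along a spanning tree. A compact message-passing simulation (a sketch of the algorithm itself, not of the paper's Petri net model):

```python
from collections import deque

def echo(graph, initiator):
    """Simulate the echo algorithm on an undirected graph (adjacency lists).
    Each non-initiator adopts its first sender as parent, forwards to its
    other neighbours, and echoes to the parent once every incident edge has
    delivered a message. Returns the spanning tree as a parent map."""
    parent = {initiator: None}
    received = {v: 0 for v in graph}
    queue = deque((initiator, w) for w in graph[initiator])
    while queue:
        u, v = queue.popleft()        # a message in transit from u to v
        received[v] += 1
        if v == initiator:
            continue                  # the initiator just counts returning echoes
        if v not in parent:
            parent[v] = u             # first contact: u becomes the parent
            queue.extend((v, w) for w in graph[v] if w != u)
        if received[v] == len(graph[v]):
            queue.append((v, parent[v]))   # all edges heard: echo upwards
    return parent

g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(echo(g, "a"))  # {'a': None, 'b': 'a', 'c': 'a'}
```

The run terminates when the initiator has received a message over every incident edge, which is exactly the property a Petri net model of the algorithm is used to verify.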

20.
A well-known problem in default logic is the ability of naive reasoners to explain both g and ¬g from a set of observations. This problem is treated in at least two different ways within that camp. One approach is examination of the various explanations and choosing among them on the basis of various explanation comparators. A typical comparator is choosing the explanation that depends on the most specific observation, similar to the notion of the narrowest reference class. Others examine default extensions of the observations and choose whatever is true in any extension, or what is true in all extensions, or what is true in preferred extensions. Default extensions are sometimes thought of as acceptable models of the world that are discarded as more knowledge becomes available. We argue that the notions of specificity and extension lack clear semantics. Furthermore, we show that the problems these ideas were supposed to solve can be handled easily within a probabilistic framework.
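To illustrate the probabilistic alternative, a minimal reference-class sketch (our own construction, with made-up numbers): specificity needs no separate principle once one simply conditions on the most specific class for which a conditional probability is known.

```python
def most_specific(conditionals, evidence):
    """Pick the probability attached to the most specific known reference
    class that the evidence satisfies (classes are frozensets of properties)."""
    applicable = [cls for cls in conditionals if cls <= evidence]
    return conditionals[max(applicable, key=len)]

# P(flies | class), with assumed illustrative numbers:
P_flies = {
    frozenset({"bird"}): 0.9,
    frozenset({"bird", "penguin"}): 0.01,
}
print(most_specific(P_flies, frozenset({"bird"})))             # 0.9
print(most_specific(P_flies, frozenset({"bird", "penguin"})))  # 0.01
```

Here the conflict between "birds fly" and "penguins don't" never arises: the two conditionals coexist, and conditioning on all available evidence selects the right one.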
