Similar articles
20 similar articles found (search time: 15 ms).
1.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A question that is often raised is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or of equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of the two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty boundary set (upper approximation minus lower approximation) in which the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably grasped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log[1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut A_α, defined by the Lebesgue integral of its characteristic function.
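A hedged numerical illustration (not from the article): for a triangular fuzzy number the α-cuts are intervals whose lengths shrink linearly with α, so the U-uncertainty integral above can be approximated directly. The triangular membership function, the base-2 logarithm, and the midpoint-rule integration below are assumptions made for this sketch.

```python
import numpy as np

def u_uncertainty(alpha_cut_measure, h=1.0, steps=1000):
    """U(A) = (1/h(A)) * integral over [0, h(A)] of log2[1 + mu(A_alpha)] d(alpha)."""
    alphas = np.linspace(0.0, h, steps, endpoint=False) + h / (2 * steps)  # midpoints
    values = np.log2(1.0 + np.array([alpha_cut_measure(a) for a in alphas]))
    return values.mean()  # midpoint rule over [0, h], already divided by h

# Triangular fuzzy number with support [a, c] and peak b: the alpha-cut is
# [a + alpha*(b - a), c - alpha*(c - b)], whose Lebesgue measure is (c - a)*(1 - alpha).
a, b, c = 0.0, 5.0, 10.0
print(u_uncertainty(lambda alpha: (c - a) * (1.0 - alpha)))
```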

2.
The first part of the paper introduces a new exclusion function without derivatives and describes its fundamental characteristics. The second part uses the new notion for finding the global minimum of a real function of a few variables g : [a, b] ⊂ R^m → R, g(x) = max{g₁(x), ..., gₙ(x)}, x ∈ [a, b], where g₁(x), ..., gₙ(x) are special multivariate real functions. The structure of the Maple program and some numerical examples are also presented in this part.
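The paper's derivative-free exclusion function is not reproduced here; as a hedged sketch of the underlying problem only, the following minimizes g(x) = max{g₁(x), ..., gₙ(x)} over a box by repeated grid refinement. The component functions, the box, and the refinement parameters are invented for the example.

```python
import numpy as np

def minimax_grid_search(components, lo, hi, levels=4, pts=11):
    """Crude global search for the minimum over [lo, hi]^m of g(x) = max_i g_i(x).
    A refinement heuristic standing in for the paper's exclusion-function method."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best_x, best_val = None, np.inf
    for _ in range(levels):
        grids = [np.linspace(l, h, pts) for l, h in zip(lo, hi)]
        mesh = np.stack(np.meshgrid(*grids, indexing="ij"), axis=-1).reshape(-1, len(lo))
        vals = np.max([[g(x) for x in mesh] for g in components], axis=0)
        i = int(np.argmin(vals))
        if vals[i] < best_val:
            best_x, best_val = mesh[i], vals[i]
        span = (hi - lo) / (pts - 1)
        lo, hi = best_x - span, best_x + span  # zoom in around the incumbent
    return best_x, best_val

# Example with two assumed component functions on [-2, 2]^2; the minimax point is (0, 0).
g1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
g2 = lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2
print(minimax_grid_search([g1, g2], [-2, -2], [2, 2]))
```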

3.
The language of standard propositional modal logic has one operator (□ or ◇) that can be thought of as being determined by the quantifier ∀ or ∃, respectively: for example, a formula of the form □φ is true at a point s just in case all the immediate successors of s verify φ. This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized-quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of ∀ and ∃.
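A hedged sketch, not taken from the paper: on a finite Kripke model, a box-like operator "determined by" a generalized quantifier Q holds at a world when Q relates the successors satisfying the argument formula to the set of all successors. The model, the valuation, and the quantifier "most" below are assumptions made for illustration.

```python
# Kripke model: successor relation R and a valuation for atomic propositions.
R = {"s": {"t", "u", "v"}, "t": set(), "u": set(), "v": set()}
val = {"p": {"t", "u"}}

def box_Q(quantifier, phi, world):
    """[Q]phi holds at `world` iff Q(successors satisfying phi, all successors)."""
    succ = R[world]
    return quantifier({w for w in succ if phi(w)}, succ)

forall_Q = lambda sat, succ: sat == succ             # the standard box
exists_Q = lambda sat, succ: len(sat) > 0            # the standard diamond
most_Q = lambda sat, succ: 2 * len(sat) > len(succ)  # a genuinely generalized quantifier

p = lambda w: w in val["p"]
print(box_Q(forall_Q, p, "s"))  # False: successor v does not satisfy p
print(box_Q(most_Q, p, "s"))    # True: 2 of the 3 successors satisfy p
```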

4.
The number of virtual connections in the nodal space of an ATM network of arbitrary structure and topology is computed by a method based on a new concept, a covering domain having a concrete physical meaning. The method rests on a "network information sources – boundary switches" model developed for an ATM transfer network by the entropy approach. The computations involve solving systems of linear equations. The optimization model used to compute the number of virtual connections for multi-category traffic in an ATM network component is useful in estimating the resources of nodal equipment and communication channels. The variable parameters of the model are the transmission bands for the different traffic categories.

5.
The capability of large, data-intensive expert systems is determined not only by the cleverness and expertise of their knowledge-manipulation algorithms and methods but also by the fundamental speeds of the computer systems on which they are implemented. To date, logical inferences per second (LIPS) has been used as the power metric of the knowledge-processing capacity of an expert system implementation. We show why this simplistic metric is misleading. We relate the power metrics for conventional computer systems to LIPS and demonstrate wide discrepancies. We review the power of today's largest conventional mainframes, such as the IBM 3090/400 and the Cray Research Cray-2, and forecast the expected power of mainframes and specialized processors in the coming decade.

6.
Dr. W. Liniger, Computing, 1968, 3(4): 280–285
Summary: Two easy-to-check conditions are given which, together, are sufficient and almost necessary for A-stability of linear multistep integration formulae. As an example, these conditions are applied to the two-parameter family of all two-step methods which are at least second-order accurate.
Zusammenfassung: Two easily verifiable conditions are derived which are sufficient and almost necessary for A-stability of linear multistep methods. As an example, these conditions are applied to the two-parameter family of all two-step methods of at least second order.


With 1 Figure
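A hedged numerical companion, not the paper's two algebraic conditions: one can spot-check A-stability of a concrete two-step formula by sampling points hλ in the open left half-plane and verifying the root condition for ρ(ζ) − hλ·σ(ζ). The BDF2 and explicit-midpoint coefficients below are standard examples chosen for this sketch.

```python
import numpy as np

def a_stable_sample_check(rho, sigma, n=200):
    """Sample z = h*lambda with Re(z) < 0 and check that every root of
    rho(zeta) - z*sigma(zeta) lies in the closed unit disk.
    A numerical spot check, not a proof and not the paper's conditions."""
    rng = np.random.default_rng(0)
    for _ in range(n):
        z = complex(-rng.uniform(0.01, 100.0), rng.uniform(-100.0, 100.0))
        coeffs = np.array(rho, complex) - z * np.array(sigma, complex)
        if np.any(np.abs(np.roots(coeffs)) > 1.0 + 1e-9):
            return False
    return True

# BDF2, a two-step second-order method known to be A-stable:
#   y_{n+2} - (4/3) y_{n+1} + (1/3) y_n = (2/3) h f_{n+2}
print(a_stable_sample_check(rho=[1.0, -4/3, 1/3], sigma=[2/3, 0.0, 0.0]))  # True

# Explicit midpoint (leapfrog) rule, not A-stable:
#   y_{n+2} - y_n = 2 h f_{n+1}
print(a_stable_sample_check(rho=[1.0, 0.0, -1.0], sigma=[0.0, 2.0, 0.0]))  # False
```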

7.
The AI methodology of qualitative reasoning furnishes useful tools to scientists and engineers who need to deal with incomplete system knowledge during design, analysis, or diagnosis tasks. Qualitative simulators have a theoretical soundness guarantee: they cannot overlook any concrete solution implied by their input. On the other hand, the basic qualitative simulation algorithms have been shown to suffer from the incompleteness problem: they may allow non-solutions of the input equation to appear in their output. The question of whether a simulator with purely qualitative input which never predicts spurious behaviors can ever be achieved by adding new filters to the existing algorithm has remained unanswered. In this paper, we show that, if such a sound and complete simulator exists, it will have to be able to handle numerical distinctions with such a high precision that it must contain a component that would better be called a quantitative, rather than qualitative, reasoner. This is due to the ability of the pure qualitative format to allow the exact representation of the members of a rich set of numbers.

8.
We formalize natural deduction for first-order logic in the proof assistant Coq, using de Bruijn indices for variable binding. The main judgment we model is of the form Γ ⊢ d [:] φ, stating that d is a proof term of formula φ under hypotheses Γ; it can be viewed as a typing relation by the Curry–Howard isomorphism. This relation is proved sound with respect to Coq's native logic and is amenable to the manipulation of formulas and of derivations. As an illustration, we define a reduction relation on proof terms with permutative conversions and prove the property of subject reduction.
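The Coq development itself is not reproduced here; as a hedged illustration of the de Bruijn-index technique it relies on, below is a small Python sketch of de Bruijn-indexed first-order terms with the usual lifting (shift) operation. The datatype and function names are invented for the example.

```python
from dataclasses import dataclass

# First-order terms with de Bruijn indices: Var(0) is the variable bound by the
# innermost quantifier, Var(1) the next one out, and so on.
@dataclass
class Var:
    index: int

@dataclass
class Fun:
    name: str
    args: list

def lift(term, by, cutoff=0):
    """Shift free variables (indices >= cutoff) up by `by`; needed whenever a
    term is moved underneath an additional binder."""
    if isinstance(term, Var):
        return Var(term.index + by) if term.index >= cutoff else term
    return Fun(term.name, [lift(a, by, cutoff) for a in term.args])

# f(x0, g(x1)) moved under one extra quantifier: both free indices shift by 1.
t = Fun("f", [Var(0), Fun("g", [Var(1)])])
print(lift(t, 1))
```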

9.
This paper presents a detailed study of Eurotra Machine Translation engines, namely the mainstream Eurotra software known as the E-Framework, and two unofficial spin-offs – the C,A,T and Relaxed Compositionality translator notations – with regard to how these systems handle hard cases, and in particular their ability to handle combinations of such problems. In the C,A,T translator notation, some cases of complex transfer are "wild", meaning roughly that they interact badly when presented with other complex cases in the same sentence. The effect of this is that each combination of a wild case and another complex case needs ad hoc treatment. The E-Framework is the same as the C,A,T notation in this respect. In general, the E-Framework is equivalent to the C,A,T notation for the task of transfer. The Relaxed Compositionality translator notation is able to handle each wild case (bar one exception) with a single rule, even where it appears in the same sentence as other complex cases.

10.
This paper considers the problem of quantifying literary style and looks at several variables which may be used as stylistic fingerprints of a writer. A review of work done on the statistical analysis of change over time in literary style is then presented, followed by a look at a specific application area, the authorship of Biblical texts. David Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol, with specific responsibility for co-ordinating the research programmes in the Department of Mathematical Sciences. He has taught literary style analysis to humanities students since 1983 and has published articles on the statistical analysis of literary style in the Journal of the Royal Statistical Society, History and Computing, and Literary and Linguistic Computing. He presented papers at the ACH/ALLC conferences in 1991 and 1993.

11.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term "model" is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in the formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from the empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of "model" in the area of Software Engineering from a philosophical point of view, an important topic since the confusion that arises directly affects other debates in which "model" is a key concept. We also suggest that the need for a philosophical discussion on the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called "Philosophy of Engineering".

12.
We analyze the four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type – or part of speech – information, sentence length in terms of words, word length in text, and word length in vocabulary. For the analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are the most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators of each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multivariate analysis of variance (MANOVA) reveals how the attributes of each style-marker group distinguish among the volumes.
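A hedged sketch of the general workflow, not the authors' exact analysis (they report stepwise discriminant analysis and MANOVA): cross-validated discriminant analysis on per-block style-marker features, with a synthetic feature matrix standing in for the real block counts.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 4 novels x 10 blocks each, 12 style-marker features per block
# (e.g. relative frequencies of frequent words, mean sentence length).
n_novels, blocks_per_novel, n_features = 4, 10, 12
X = rng.normal(size=(n_novels * blocks_per_novel, n_features))
X += np.repeat(rng.normal(scale=0.8, size=(n_novels, n_features)), blocks_per_novel, axis=0)
y = np.repeat(np.arange(n_novels), blocks_per_novel)  # which novel each block came from

# Cross-validated discriminant analysis: how well do the markers separate the novels?
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print("mean cross-validation accuracy:", scores.mean())
```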

13.
Personalization and adaptation techniques are an interesting opportunity for designing new services on board vehicles. In this context, in fact, the need of an individual user to receive the right service at the right time and in the right way is more critical than in other settings where personalization and adaptation have already shown interesting advantages. At the same time, this application context can provide new interesting insights for user modeling and adaptation. In the paper we present an architecture for providing personalized services on board vehicles and we discuss an application to the case of tourist information. We focus on the choices we made to design an on-board system that is as unintrusive and undistracting as possible and that can adapt its recommendations, the way it presents them, and its own behavior to the user's preferences/interests and to the context of interaction (especially the driving conditions).

14.
Pushing Convertible Constraints in Frequent Itemset Mining
Recent work has highlighted the importance of the constraint-based mining paradigm in the context of frequent itemsets, associations, correlations, sequential patterns, and many other interesting patterns in large databases. Constraint-pushing techniques have been developed for mining frequent patterns and associations with antimonotonic, monotonic, and succinct constraints. In this paper, we study constraints which cannot be handled with existing theory and techniques in frequent pattern mining. For example, avg(S) θ v, median(S) θ v, and sum(S) θ v (where S can contain items of arbitrary values, θ ∈ {<, ≤, ≥, >}, and v is a real number) are customarily regarded as tough constraints in that they cannot be pushed inside an algorithm such as Apriori. We develop a notion of convertible constraints and systematically analyze, classify, and characterize this class. We also develop techniques which enable them to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining. Results from our detailed experiments show the effectiveness of the techniques developed.
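A hedged illustration of the convertibility idea, not the paper's FP-growth integration: once items are listed in value-descending order, the average of a prefix never increases as the prefix grows, so a constraint such as avg(S) ≥ v can prune whole prefixes much like an antimonotone constraint. The item values and the threshold below are invented for the example.

```python
# Convertible antimonotone idea: in value-descending order, each new item's value
# is no larger than the running average, so the prefix average is non-increasing;
# once a prefix violates avg(S) >= v, every further extension in this order does too.
values = {"a": 9.0, "b": 6.0, "c": 3.0, "d": 1.0}
threshold = 6.5

order = sorted(values, key=values.get, reverse=True)  # value-descending order
prefix, running_sum = [], 0.0
for item in order:
    running_sum += values[item]
    prefix.append(item)
    avg = running_sum / len(prefix)
    status = "keep" if avg >= threshold else "prune (and all further extensions)"
    print(prefix, round(avg, 2), status)
    if avg < threshold:
        break
```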

15.
A nonlinear stochastic integral equation of the Hammerstein type in the form x(t; ω) = h(t, x(t; ω)) + ∫_S k(t, s; ω) f(s, x(s; ω); ω) dμ(s) is studied, where t ∈ S, a measure space with certain properties, ω ∈ Ω, the supporting set of a probability measure space (Ω, A, P), and the integral is a Bochner integral. A random solution of the equation is defined to be an almost surely continuous m-dimensional vector-valued stochastic process on S which is bounded with probability one for each t ∈ S and which satisfies the equation almost surely. Several theorems are proved which give conditions such that a unique random solution exists. AMS (MOS) subject classifications (1970): Primary: 60H20, 45G99. Secondary: 60G99.

16.
Data structures are interpreted as sets which may contain repeating elements. An algebraic system close to the integer ring is constructed on these sets. Translated from Kibernetika, No. 4, pp. 82–88, July–August, 1990.

17.
ART: A Hybrid Classification Model
This paper presents a new family of decision list induction algorithms based on ideas from the association rule mining context. ART, which stands for Association Rule Tree, builds decision lists that can be viewed as degenerate, polythetic decision trees. Our method is a generalized Separate and Conquer algorithm suitable for Data Mining applications because it makes use of efficient and scalable association rule mining techniques.

18.
Suppose a directed graph has its arcs stored in secondary memory, and we wish to compute its transitive closure, also storing the result in secondary memory. We assume that an amount of main memory capable of holding s values is available, and that s lies between n, the number of nodes of the graph, and e, the number of arcs. The cost measure we use for algorithms is the I/O complexity of Kung and Hong, where we count 1 every time a value is moved into main memory from secondary memory, or vice versa. In the dense case, where e is close to n², we show that I/O equal to O(n³/√s) is sufficient to compute the transitive closure of an n-node graph, using main memory of size s. Moreover, it is necessary for any algorithm that is "standard", in a sense to be defined precisely in the paper. Roughly, "standard" means that paths are constructed only by concatenating arcs and previously discovered paths. For the sparse case, we show that I/O equal to O(n²√(e/s)) is sufficient, although the algorithm we propose meets our definition of standard only if the underlying graph is acyclic. We also show that Ω(n²√(e/s)) is necessary for any standard algorithm in the sparse case. That settles the I/O complexity of the sparse/acyclic case for standard algorithms. It is unknown whether this complexity can be achieved in the sparse, cyclic case by a standard algorithm, and it is unknown whether the bound can be beaten by nonstandard algorithms. We then consider a special kind of standard algorithm, in which paths are constructed only by concatenating arcs and old paths, never by concatenating two old paths. This restriction seems essential if we are to take advantage of sparseness. Unfortunately, we show that almost another factor of n I/O is necessary. That is, there is an algorithm in this class using I/O O(n³√(e/s)) for arbitrary sparse graphs, including cyclic ones. Moreover, every algorithm in the restricted class must use Ω(n³√(e/s)/log³ n) I/O on some cyclic graphs. The work of this author was partially supported by NSF grant IRI-87-22886, IBM contract 476816, Air Force grant AFOSR-88-0266 and a Guggenheim fellowship.
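A hedged in-memory toy, not the paper's out-of-core algorithms: it follows the restricted "standard" discipline described above, building new paths only by concatenating an already-discovered path with a single arc, never two old paths. The blocking of I/O that the paper actually analyzes is ignored here.

```python
def closure_path_plus_arc(arcs):
    """Transitive closure built only by extending known paths with single arcs
    (the restricted 'standard' discipline); a semi-naive in-memory sketch."""
    succ = {}
    for u, v in arcs:
        succ.setdefault(u, set()).add(v)
    paths = set(arcs)          # every arc is a path
    frontier = set(arcs)       # paths discovered in the previous round
    while frontier:
        new = {(u, w)
               for (u, v) in frontier
               for w in succ.get(v, ())
               if (u, w) not in paths}
        paths |= new
        frontier = new
    return paths

print(sorted(closure_path_plus_arc([(1, 2), (2, 3), (3, 1), (3, 4)])))
```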

19.
Schedulers for larger classes of pinwheel instances
The pinwheel is a hard-real-time scheduling problem for scheduling satellite ground stations to service a number of satellites without data loss. Given a multiset of positive integers (an instance) A = {a₁, ..., aₙ}, the problem is to find an infinite sequence (schedule) of symbols from {1, 2, ..., n} such that there is at least one symbol i within any interval of aᵢ symbols (slots). Not all instances A can be scheduled; for example, no successful schedule exists for instances whose density, ρ(A) = Σᵢ (1/aᵢ), is larger than 1. It has been shown that all instances whose densities are less than a 0.5 density threshold can always be scheduled. If a schedule exists, another concern is the design of a fast on-line scheduler (FOLS) which can generate each symbol of the schedule in constant time. Based on the idea of integer reduction, two new FOLSs which can schedule different classes of pinwheel instances are proposed in this paper. One uses single-integer reduction and the other uses double-integer reduction. They both improve the previous 0.5 result and have density thresholds of 13/20 and 2/3, respectively. In particular, if the elements in A are large, the density thresholds will asymptotically approach ln 2 and 1/√2, respectively. This research was supported in part by ONR Grant N00014-87-K-0833, and was done while Francis Chin was visiting the Computer Science Program, The University of Texas at Dallas, Richardson, TX 75083, USA.
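A hedged brute-force companion, not the paper's single- or double-integer-reduction schedulers: it computes the density ρ(A) and decides schedulability of very small instances by searching for a cycle of safe deadline states. All helper names and the example instances are invented for this sketch.

```python
from fractions import Fraction
from functools import lru_cache

def density(instance):
    """rho(A) = sum_i 1/a_i; no schedule can exist when this exceeds 1."""
    return sum(Fraction(1, a) for a in instance)

def schedulable(instance):
    """Brute-force check for small instances: track, for each symbol, how many
    slots remain before it must be serviced.  Because the state space is finite,
    a schedule exists iff the search can revisit a state on its current path
    (a safe cycle) while keeping every deadline positive."""
    n = len(instance)

    @lru_cache(maxsize=None)
    def survives(deadlines, seen):
        if deadlines in seen:
            return True                      # safe cycle found: loop forever
        for j in range(n):                   # choose which symbol to emit now
            nxt = tuple(instance[i] if i == j else deadlines[i] - 1 for i in range(n))
            if min(nxt) >= 1 and survives(nxt, seen | frozenset([deadlines])):
                return True
        return False

    return survives(tuple(instance), frozenset())

A = (2, 4, 8)    # density 7/8; schedulable, e.g. by repeating 1,2,1,3
print(density(A), schedulable(A))   # 7/8 True
B = (2, 3, 12)   # density 11/12 < 1, yet no schedule exists
print(density(B), schedulable(B))   # 11/12 False
```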

20.
We propose an approach to the verification of programs written in a graphical language within the R-technology of programming, and introduce an axiomatic semantics of R-schemas and of graphical Pascal. We justify the advantages of the graphical representation of programs for correctness proofs and describe a support system for this approach under RAFOS and MS-DOS. Translated from Kibernetika, No. 1, pp. 21–27, January–February, 1990.
