Similar Documents
20 similar documents found; search time: 46 ms
1.
A central component of the analysis of panel clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η > 0 is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel clustering algorithms.
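As an illustration of the condition (not code from the paper), the following sketch checks η-admissibility for two clusters represented as axis-aligned bounding boxes; the box representation and the helper names are assumptions made for the example.

```python
import math

def diameter(box):
    """Diameter of an axis-aligned bounding box given as (lows, highs)."""
    lo, hi = box
    return math.sqrt(sum((h - l) ** 2 for l, h in zip(lo, hi)))

def distance(box_a, box_b):
    """Euclidean distance between two axis-aligned bounding boxes."""
    (alo, ahi), (blo, bhi) = box_a, box_b
    # Per-axis gap is zero where the projections overlap.
    gaps = (max(bl - ah, al - bh, 0.0)
            for al, ah, bl, bh in zip(alo, ahi, blo, bhi))
    return math.sqrt(sum(g * g for g in gaps))

def is_admissible(tau, sigma, eta):
    """Eta-admissibility: min{diam(tau), diam(sigma)} <= 2*eta*dist(tau, sigma)."""
    return min(diameter(tau), diameter(sigma)) <= 2.0 * eta * distance(tau, sigma)

# Two unit squares one unit apart: min diameter sqrt(2), distance 1.
tau = ((0.0, 0.0), (1.0, 1.0))
sigma = ((2.0, 0.0), (3.0, 1.0))
print(is_admissible(tau, sigma, eta=1.0))   # True: sqrt(2) <= 2
```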

2.
A well-known problem in default logic is the ability of naive reasoners to explain both g and ¬g from a set of observations. This problem is treated in at least two different ways within that camp. One approach is to examine the various explanations and choose among them on the basis of various explanation comparators. A typical comparator chooses the explanation that depends on the most specific observation, similar to the notion of the narrowest reference class. Others examine default extensions of the observations and choose whatever is true in any extension, what is true in all extensions, or what is true in preferred extensions. Default extensions are sometimes thought of as acceptable models of the world that are discarded as more knowledge becomes available. We argue that the notions of specificity and extension lack clear semantics. Furthermore, we show that the problems these ideas were supposed to solve can be handled easily within a probabilistic framework.
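To make the probabilistic point concrete, here is a toy sketch with invented numbers (not the authors' formalism): conditioning on the most specific reference class for which a conditional probability is available dissolves the g/¬g conflict that default extensions leave open.

```python
# Toy sketch: instead of picking among default extensions that support
# g ("flies") and ¬g ("does not fly"), condition on all observations.
# All probabilities below are invented for illustration.
p_fly_given_bird = 0.9        # "birds fly" as a high conditional probability
p_fly_given_penguin = 0.01    # "penguins fly": the more specific class

observations = {"bird", "penguin"}

# Conditioning on the most specific reference class with a known
# conditional probability subsumes the "specificity" comparator.
if "penguin" in observations:
    p_g = p_fly_given_penguin
elif "bird" in observations:
    p_g = p_fly_given_bird

print(f"P(flies | observations) = {p_g}")   # 0.01: no oscillation between g and ¬g
```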

3.
We analyze four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type (part of speech) information, sentence length in terms of words, word length in text, and word length in vocabulary. For the analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators of each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multivariate analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
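For concreteness, two of the six markers can be computed per text block as in the sketch below (illustrative only, not the authors' code; the tokenization is deliberately crude).

```python
from collections import Counter
import statistics

def style_markers(text, top_n=10):
    """Two of the paper's six markers for one text block: most frequent
    words and mean sentence length in words. Tokenization is a crude
    stand-in for real preprocessing."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.lower().split()
    return {
        "most_frequent_words": Counter(words).most_common(top_n),
        "mean_sentence_length": statistics.mean(len(s.split()) for s in sentences),
    }

block = "Memed looked at the thistles. The thistles burned. Memed ran."
print(style_markers(block, top_n=3))
```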

4.
In this paper, we investigate the numerical solution of a model equation u_xx = exp(−u/ε) (and several slightly more general problems) when ε ≪ 1, using the standard central difference scheme on nonuniform grids. In particular, we are interested in the error behaviour in two limiting cases: (i) the total mesh point number N is fixed while the regularization parameter ε → 0, and (ii) ε is fixed while N → ∞. Using a formal analysis, we show that a generalized version of a special piecewise uniform mesh [12] and an adaptive grid based on the equidistribution principle share some common features. The optimal meshes give rates of convergence bounded by |log(ε)| as ε → 0 for given N, which are shown to be sharp by numerical tests.
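The equidistribution principle itself is compact enough to sketch: choose the grid points so that each cell carries the same integral of a monitor function. The code below is illustrative (the boundary-layer monitor is a stand-in, not the paper's).

```python
import numpy as np

def equidistribute(monitor, n, a=0.0, b=1.0, samples=10001):
    """Grid x_0..x_n on [a,b] with equal integral of `monitor` per cell
    (the equidistribution principle), via the inverse of the cumulative
    integral of the monitor function."""
    s = np.linspace(a, b, samples)
    cells = (monitor(s[1:]) + monitor(s[:-1])) / 2.0 * np.diff(s)
    cumulative = np.concatenate([[0.0], np.cumsum(cells)])
    targets = np.linspace(0.0, cumulative[-1], n + 1)
    return np.interp(targets, cumulative, s)

eps = 1e-2
# Assumed monitor concentrating points near a boundary layer at x = 0.
grid = equidistribute(lambda x: 1.0 + np.exp(-x / eps) / eps, n=16)
print(np.round(grid[:5], 4))   # clustered near 0, then coarsening
```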

5.
Ward Elliott (from 1987) and Robert Valenza (from 1989) set out to find the true Shakespeare from among 37 anti-Stratfordian Claimants. As directors of the Claremont Shakespeare Authorship Clinic, Elliott and Valenza developed novel attributional tests, from which they concluded that most Claimants are not-Shakespeare. From 1990-94, Elliott and Valenza developed tests purporting further to reject much of the Shakespeare canon as not-Shakespeare (1996a). Foster (1996b) details extensive and persistent flaws in the Clinic's work: data were collected haphazardly; canonical and comparative text samples were chronologically mismatched; procedural controls for genre, stanzaic structure, and date were lacking. Elliott and Valenza counter by estimating the maximum erosion of the Clinic's findings to include five of our 54 tests, which can amount, at most, to half of one percent (1998). This essay provides a brief history, showing why the Clinic foundered. Examining several of the Clinic's representative tests, I evaluate claims that Elliott and Valenza continue to make for their methodology. A final section addresses doubts about accuracy, validity and replicability that have dogged the Clinic's work from the outset.

6.
For a given polynomial-time computable honest function, the complexity of its max inverse function is compared with that of the other inverse functions. Two structural results are shown which suggest that the max inverse function is not the easiest. The preliminary version of this paper was presented at the International Symposium SIGAL 90 [WT]. Osamu Watanabe was supported in part by a Grant-in-Aid for Scientific Research of the Ministry of Education, Science and Culture of Japan, Grant-in-Aid for Co-operative Research (A) 02302047 (1990).

7.
The Gelfond-Lifschitz operator associated with a logic program (and likewise the operator associated with default theories by Reiter) exhibits oscillating behavior. In the case of logic programs, there is always at least one finite, nonempty collection of Herbrand interpretations around which the Gelfond-Lifschitz operator bounces. The same phenomenon occurs with default logic when Reiter's operator is considered. Based on this, a stable class semantics and an extension class semantics have been proposed. The main advantage of this semantics is that it is defined for all logic programs (and default theories), and that the definition is modelled using standard operators existing in the literature, such as Reiter's operator. In this paper our primary aim is to prove that there is a very interesting duality between stable class theory and the well-founded semantics for logic programming. In the stable class semantics, classes that were minimal with respect to Smyth's power-domain ordering were selected. We show that the well-founded semantics precisely corresponds to a class that is minimal w.r.t. Hoare's power-domain ordering, the well-known dual of Smyth's ordering. Besides this elegant duality, this immediately suggests how to define a well-founded semantics for default logic in such a way that the dualities that hold for logic programming continue to hold for default theories. We show how the same technique may be applied to strong autoepistemic logic: the logic of strong expansions proposed by Marek and Truszczynski.
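The oscillation is easy to reproduce. The sketch below (an illustration, not the authors' formalism) implements the Gelfond-Lifschitz operator for ground normal programs and iterates it on the classic two-rule program with no stable model; the iterates bounce between two interpretations, which together form a stable class.

```python
def gl_operator(program, interpretation):
    """Gelfond-Lifschitz operator for a ground normal logic program.
    `program` is a list of rules (head, positive_body, negative_body);
    the reduct w.r.t. `interpretation` drops rules whose negative body
    is contradicted, then we take the least model of the reduct."""
    reduct = [(h, pos) for h, pos, neg in program if not (neg & interpretation)]
    model = set()
    while True:
        new = {h for h, pos in reduct if pos <= model}
        if new == model:
            return frozenset(model)
        model = new

# p :- not q.   q :- not p.   -- no stable model; GL oscillates on a 2-cycle.
program = [("p", set(), {"q"}), ("q", set(), {"p"})]
m = frozenset()
for _ in range(4):
    m = gl_operator(program, m)
    print(sorted(m))   # ['p','q'], [], ['p','q'], [] ... the oscillating pair
```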

8.
Summary A framework is proposed for the structured specification and verification of database dynamics. In this framework, the conceptual model of a database is a many-sorted first-order linear tense theory whose proper axioms specify the update and triggering behaviour of the database. The use of conceptual modelling approaches for structuring such a theory is analysed. Semantic primitives based on the notions of event and process are adopted for modelling the dynamic aspects. Events are used to model both atomic database operations and communication actions (input/output). Nonatomic operations to be performed on the database (transactions) are modelled by processes in terms of trigger/reaction patterns of behaviour. The correctness of the specification is verified by proving that the desired requirements on the evolution of the database are theorems of the conceptual model. Besides the traditional data integrity constraints, requirements of the form "Under condition W, it is guaranteed that the database operation Z will be successfully performed" are also considered. Such liveness requirements have been ignored in the database literature, although they are essential to a complete definition of database dynamics.

Notation

Classical Logic Symbols (Appendix 1): ∀ for all (universal quantifier) - ∃ exists at least once (existential quantifier) - ¬ no (negation) - → implies (implication) - ↔ is equivalent to (equivalence) - ∧ and (conjunction) - ∨ or (disjunction)

Tense Logic Symbols (Appendix 1): G always in the future - G0 always in the future and now - F sometime in the future - F0 sometime in the future or now - H always in the past - H0 always in the past and now - P sometime in the past - P0 sometime in the past or now - X in the next moment - Y in the previous moment - L always - M sometime

Event Specification Symbols (Sects. 3 and 4.1): (x) means immediately after the occurrence of x - (x) means immediately before the occurrence of x - (x) means x is enabled, i.e., x may occur next - { } ({w1} x {w2}) states that if w1 holds before the occurrence of x, then w2 will hold after the occurrence of x (change rule) - [ ] ([oa1, ..., oan] x) states that only the object attributes oa1, ..., oan are modifiable by x (scope rule) - {{ }} ({{w}} x) states that if x may occur next, then w holds (enabling rule)

Process Specification Symbols (Sects. 5.3 and 5.4): :: for causal rules - for behavioural rules

Transition-Pattern Composition Symbols (Sects. 5.2 and 5.3): ; sequential composition - ¦ choice composition - parallel composition - :| guarded alternative composition

Location Predicates (Sect. 5.2): (z) means immediately after the occurrence of the last event of z (after) - (z) means immediately before the occurrence of the first event of z (before) - (z) means after the beginning of z and before the end of z (during) - (z) means before the occurrence of an event of z (at)

9.
In this paper the problem of routing messages along shortest paths in a distributed network without using complete routing tables is considered. In particular, the complexity of deriving minimum (in terms of number of intervals) interval routing schemes is analyzed under different requirements. For all the cases considered, NP-hardness proofs are given, while some approximability results are provided. Moreover, relations among the different cases considered are studied. This work was supported by the EEC ESPRIT II Basic Research Action Program under Contract No. 7141 "Algorithms and Complexity II", by the EEC Human Capital and Mobility MAP project, and by the Italian MURST 40% project "Algoritmi, Modelli di Calcolo e Strutture Informative".
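For intuition about interval routing (a textbook example, not taken from the paper), a single interval per edge suffices to route along shortest paths in a ring:

```python
def ring_interval_routing(n):
    """One-interval routing scheme for the ring C_n: at node i, the
    clockwise edge is labelled with the ~n/2 nodes ahead of i, the
    counter-clockwise edge with the rest. One interval per edge suffices,
    and routing follows shortest paths."""
    half = n // 2
    def next_hop(i, dest):
        # Destinations in the cyclic interval (i+1 .. i+half) go clockwise.
        if (dest - i) % n <= half:
            return (i + 1) % n
        return (i - 1) % n
    return next_hop

n = 8
route = ring_interval_routing(n)
node, dest, hops = 0, 6, 0
while node != dest:
    node = route(node, dest)
    hops += 1
print(hops)   # 2: shortest path 0 -> 7 -> 6
```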

10.
Summary In a simple language for finite automata based on SCCS we introduce three different delay operators, two of which are different versions of an unbounded but finite delay operator. It is argued that the usual notion of bisimulation is inadequate, and two generalisations are proposed. In both cases we give a complete axiomatisation for finite terms of the language and prove that certain forms of induction are sound. In one case we give a complete axiomatisation.

11.
A powerful methodology for scenario-based specification of reactive systems is described, in which the behavior is "played in" directly from the system's GUI or some abstract version thereof, and can then be "played out". The approach is supported and illustrated by a tool, which we call the play-engine. As the behavior is played in, the play-engine automatically generates a formal version in an extended version of the language of live sequence charts (LSCs). As the behavior is played out, it causes the application to react according to the universal ("must") parts of the specification; the existential ("may") parts can be monitored to check their successful completion. Play-in is a user-friendly high-level way of specifying behavior, and play-out is a rather surprising way of working with a fully operational system directly from its inter-object requirements. The ideas appear to be relevant to many stages of system development, including requirements engineering, specification, testing, analysis and implementation.

12.
I discuss the attitude of Jewish law sources from the 2nd-5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions are only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty boundary region (the upper approximation minus the lower approximation) to which the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably captured by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^h(A) log[1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut A_α of A, defined by the Lebesgue integral of its characteristic function.
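Under the stated definition, U-uncertainty is straightforward to evaluate numerically. The sketch below is illustrative: the triangular fuzzy number is an assumed example, and the base-2 logarithm is the standard nonspecificity convention (the abstract writes only "log").

```python
import numpy as np

def u_uncertainty(alpha_cut_width, height, levels=10001):
    """U-uncertainty of a fuzzy set A on the reals whose alpha-cuts are
    intervals: U(A) = (1/h(A)) * int_0^h(A) log2(1 + mu(A_alpha)) d(alpha),
    where mu(A_alpha) is the Lebesgue measure (width) of the alpha-cut.
    The integral is approximated with the trapezoid rule."""
    alphas = np.linspace(0.0, height, levels)
    y = np.log2(1.0 + np.array([alpha_cut_width(a) for a in alphas]))
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(alphas)) / height)

# Triangular fuzzy number "about 5" with support [3, 7] and height 1:
# the alpha-cut [3 + 2a, 7 - 2a] has width 4*(1 - a).
print(round(u_uncertainty(lambda a: 4.0 * (1.0 - a), height=1.0), 3))  # ~1.46
```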

13.
Kumiko Ikuta, AI & Society, 1990, 4(2): 137-146
The role of craft language in the process of teaching (learning) Waza (skill) will be discussed from the perspective of human intelligence. It may be said that the ultimate goal of learning Waza in any Japanese traditional performance is not the perfect reproduction of the teaching (learning) process of Waza. In fact, a special metaphorical language (craft language) is used, which has the effect of encouraging the learner to activate his creative imagination. It is through this activity that he learns his own habitus (Kata). It is suggested that, in considering the difference in function between natural human intelligence and artificial intelligence, attention should be paid to the imaginative activity of the learner as an essential factor for mastering Kata. This article is a modified English version of Chapter 5 of my book Waza kara shiru (Learning from Skill), Tokyo University Press, 1987, pp. 93-105.

14.
We consider systems of smooth nonlinear differential and algebraic equations in which some of the variables are distinguished as external variables. The realization problem is to replace the higher-order implicit differential equations by first-order explicit differential equations and the algebraic equations by mappings to the external variables. This involves the introduction of state variables. We show that under general conditions there exist realizations containing a set of auxiliary variables, called driving variables. We give sufficient conditions for the existence of realizations involving only state variables and external variables, which can then be split into input and output variables. It is shown that in general there are structural obstructions for the existence of such realizations. We give a constructive procedure to obtain realizations with or without driving variables. The realization procedure is also applied to systems defined by interconnections of state space systems. Finally, a theory of equivalence transformations of systems of higher-order differential equations is developed.
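A minimal worked instance of the realization step (an illustration constructed for this listing, not taken from the paper):

```latex
% Illustrative instance (not from the paper); requires amsmath.
% The second-order implicit equation  \ddot w + w\,\dot w - u = 0  with
% external variables w and u is realized by introducing the state
% variables x_1 = w, x_2 = \dot w :
\begin{align*}
  \dot x_1 &= x_2,         & &\text{(first-order explicit dynamics)}\\
  \dot x_2 &= u - x_1 x_2, & &\\
        w  &= x_1.         & &\text{(mapping to the external variables)}
\end{align*}
% Here the external variables split into the input u and the output w.
% When the implicit equation cannot be solved for the highest
% derivatives, auxiliary driving variables take the place of u.
```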

15.
The language of standard propositional modal logic has one operator (□ or ◇) that can be thought of as being determined by the quantifiers ∀ or ∃, respectively: for example, a formula of the form □φ is true at a point s just in case all the immediate successors of s verify φ. This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of ∀ and ∃.
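The quantifier-as-modality reading is easy to make concrete. In the sketch below (an illustration, not the paper's formalism), the standard □ and ◇ and a "most"-based modality differ only in the quantifier applied to the successor set.

```python
def box(successors, phi, world):
    """Standard box: phi holds at *all* immediate successors (quantifier: forall)."""
    return all(phi(t) for t in successors[world])

def diamond(successors, phi, world):
    """Standard diamond: phi holds at *some* successor (quantifier: exists)."""
    return any(phi(t) for t in successors[world])

def q_most(successors, phi, world):
    """A modality built from the generalized quantifier 'most'."""
    succ = successors[world]
    return sum(map(phi, succ)) > len(succ) / 2

successors = {"s": ["t1", "t2", "t3"]}
p = lambda w: w in {"t1", "t2"}   # valuation of the atom p
print(box(successors, p, "s"),
      diamond(successors, p, "s"),
      q_most(successors, p, "s"))  # False True True
```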

16.
Positive solutions to the decision problem are obtained for a class of quantified formulae of the first-order set-theoretic language based on ∈ and =, involving particular occurrences of restricted universal quantifiers, and for the unquantified formulae of ∈, =, {...}, and a general choice operator, where {...} is the tuple operator. To that end a method is developed which also provides strong reflection principles over the hereditarily finite sets. As far as finite satisfiability is concerned, such results apply also to the unquantified extension of this language obtained by adding the operators of binary union, intersection and difference and the relation of inclusion, provided no nested term involving the choice operator is allowed.

17.
The adaptiveness of agents is one of the basic conditions for their autonomy. This paper describes an approach to adaptiveness for Monitoring Cognitive Agents based on the notion of generic spaces. This notion allows the definition of virtual generic processes, so that any particular actual process is then a simple configuration of the generic process, that is to say, a set of parameter values. Consequently, a generic domain ontology containing the generic knowledge for solving problems concerning the generic process can be developed. This leads to the design of the Generic Monitoring Cognitive Agent, a class of agent in which the whole knowledge corpus is generic. In other words, modeling a process within a generic space becomes configuring a generic process, and adaptiveness becomes genericity, that is to say, independence regarding technology. In this paper, we present an application of this approach in Sachem, a Generic Monitoring Cognitive Agent designed to help operators in operating a blast furnace. Specifically, the NeuroGaz module of Sachem will be used to present the notion of a generic blast furnace. The adaptiveness of Sachem can then be seen in the low cost of deploying a Sachem instance on different blast furnaces and in the ability of NeuroGaz to solve problems and learn from various top-gas instrumentation.

18.
Pizer and Eberly introduced the core as the analogue of the medial axis for greyscale images. For two-dimensional images, it is obtained as the ridge of a medial function defined on (2+1)-dimensional scale space. The medial function is defined using Gaussian blurring and measures the extent to which a point is in the center of the object as measured at a given scale. Numerical calculations indicate the core has properties quite different from those of the medial axis. In this paper we give the generic properties of ridges and cores for two-dimensional images and explain the discrepancy between core and medial axis properties. We place cores in a larger relative critical set structure, which coherently relates disjoint pieces of core. We also give the generic transitions which occur for sequences of images varying with a parameter such as time. The genericity implies the stability of the full structure in any compact viewing area of scale space under sufficiently small L² perturbations of the image intensity function. We indicate consequences for finding cores and also for adding markings to completely determine the structure of the medial function.

19.
Summary The main concern of this paper is to scale a given system of differential equations in such a way that, in the associated analogue computer set-up, the voltages at the outputs of the integrators neither exceed the bound set by the reference voltage nor fall below the bound set by the resolution of the elements. Estimation theorems are derived which settle this question a priori, i.e., without knowing the solution of the differential system. Norms are used first for the estimates, then Kamke norms. The theorem of Perron mentioned in the title results from a special choice of norms and from dispensing with the lower estimate. The analysis is complicated by the relative weakness of the requirement that the right-hand side of the system dx/dt = f(x,t) satisfy the condition that ‖x‖ ≥ a implies ‖f(x,t)‖ ≤ v(t)‖x‖ (where ‖·‖ is a norm and a is a positive real number). As a consequence, when estimating with Kamke norms it no longer seems possible to use the methods customary in the literature on existence proofs and estimation theorems. To resolve this question, a conditional form of the well-known theorem of Gronwall (also called the theorem of Bellman) is developed.
A conditional version of the integral inequality of Gronwall, a slight generalization of a stability theorem of Perron, and overflow-free scaling of analogue computer set-ups
Summary The main subject of this paper is the scaling of a given set of differential equations in such a way that the output voltages of the integrators of the associated analogue computer set-up do not exceed certain upper and lower bounds imposed by the reference voltage and the limited power of resolution of the elements of the analogue computer. The paper gives a priori bounds on the solution of the differential set. Some of these bounds work with norms, others with Kamke norms. Perron's stability theorem mentioned in the title of this paper results by inserting special norms and neglecting lower bounds. A difficulty arises from the relative weakness of the condition "‖x‖ ≥ a implies ‖f(x,t)‖ ≤ v(t)‖x‖" on the right-hand side of the set dx/dt = f(x,t), where ‖·‖ is any norm and a is a positive real constant. As a consequence of this, it seems no longer possible to use the usual techniques known from the literature on existence theorems and bounds for the solutions of differential equations. To cope with this situation, a conditional version of the well-known theorem of Gronwall (also known by the name of the Lemma of Bellman) will be derived.
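For orientation, the classical, unconditional Gronwall-Bellman lemma reads as follows; the paper's conditional generalization weakens its hypothesis and is not reproduced here.

```latex
% Classical Gronwall--Bellman lemma (requires amsmath):
% if u, v are continuous, v >= 0, c >= 0, and
\[
  u(t) \;\le\; c + \int_{t_0}^{t} v(s)\,u(s)\,\mathrm{d}s
  \qquad \text{for all } t \ge t_0,
\]
% then
\[
  u(t) \;\le\; c\,\exp\!\left( \int_{t_0}^{t} v(s)\,\mathrm{d}s \right)
  \qquad \text{for all } t \ge t_0.
\]
% The paper's "conditional" version weakens this to a hypothesis that is
% only required to hold under a side condition (cf. ||x|| >= a above).
```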

This work is part of a dissertation prepared at the Institute of Applied Mathematics of the Technische Hochschule München under the supervision of Prof. Dr. rer. nat. habil. J. Heinhold.

20.
We investigate three-dimensional visibility problems for scenes that consist of n non-intersecting spheres. The viewing point moves on a flightpath that is part of a circle at infinity given by a plane P and a range of angles {φ(t) | t ∈ [0,1]} ⊆ [0,2π]. At time t, the lines of sight are parallel to the ray in P which starts in the origin of P and represents the angle φ(t) (orthographic views of the scene). We give an algorithm that computes the visibility graph at the start of the flight, all time parameters at which the topology of the scene changes, and the corresponding topology changes. The algorithm has running time O((n + k + p) log n), where n is the number of spheres in the scene; p is the number of transparent topology changes (the number of different scene topologies visible along the flight path, assuming that all spheres are transparent); and k denotes the number of vertices (conflicts) which are in the (transparent) visibility graph at the start and do not disappear during the flight. The second author was supported by the ESPRIT II Basic Research Actions Program under Contract No. 3075 (project ALCOM).
