Similar Documents
 20 similar documents found (search time: 26 ms)
1.
In this paper, we address a fundamental problem related to the induction of Boolean logic: given a set of data, represented as a set of binary "true n-vectors" (or "positive examples") and a set of "false n-vectors" (or "negative examples"), we establish a Boolean function (or an extension) f, so that f is true (resp., false) in every given true (resp., false) vector. We shall further require that such an extension belong to a certain specified class of functions, e.g., the class of positive functions, the class of Horn functions, and so on. The class of functions represents our a priori knowledge or hypothesis about the extension f, which may be obtained from experience or from the analysis of mechanisms that may or may not cause the phenomena under consideration. Real-world data may contain errors, e.g., measurement and classification errors might come in when obtaining data, or there may be some other influential factors not represented as variables in the vectors. In such situations, we have to give up the goal of establishing an extension that is perfectly consistent with the given data, and we are satisfied with an extension f having the minimum number of misclassifications. Both problems, i.e., the problem of finding an extension within a specified class of Boolean functions and the problem of finding a minimum-error extension in that class, are extensively studied in this paper. For certain classes we provide polynomial algorithms, and for other cases we prove their NP-hardness.
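For the class of positive (monotone) functions mentioned above, the extension question has a simple concrete form: a positive extension exists if and only if no true vector lies componentwise below a false one, and the minimal such extension sets f(x) = 1 exactly when x dominates some positive example. A minimal Python sketch of this criterion (the data and function names are illustrative, not from the paper):

```python
def leq(a, b):
    """Componentwise order on binary vectors."""
    return all(x <= y for x, y in zip(a, b))

def positive_extension(true_vecs, false_vecs):
    """Return the minimal positive (monotone) extension as a predicate,
    or None if no positive extension of the data exists."""
    # A positive extension exists iff no true vector lies below a false one.
    if any(leq(t, s) for t in true_vecs for s in false_vecs):
        return None
    return lambda x: any(leq(t, x) for t in true_vecs)

T = [(1, 0, 1), (0, 1, 1)]   # positive examples
F = [(0, 0, 1), (1, 0, 0)]   # negative examples
f = positive_extension(T, F)
assert f is not None
assert all(f(t) for t in T) and not any(f(s) for s in F)
```

The minimum-error variant studied in the paper replaces the existence test with an optimization over which examples to misclassify.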

2.
The aim of this paper is to raise some questions, and partly also to answer them, in connection with two important problem groups of fuzzy mathematics: n-fuzzy objects and the sigma-properties of different interactive fuzzy structures. These questions are suggested by the analysis of natural languages and common-sense thinking, which are typical fields where the most adequate mathematical model is a fuzzy one, especially by complex adjectival structures and subjective "verifying" processes, respectively. They also have real practical significance in the field of engineering, e.g., in learning-machine problems.

In the first part we try to point out the practical importance of the concept of fuzzy objects of type n (or n-fuzzy objects) from the aspect of modeling natural languages. A useful way to define n-fuzzy algebras, i.e., to generalize ordinary fuzzy algebras to n-fuzzy objects, is also given, by introducing an isomorphism mapping from the fuzzy object space to the n-fuzzy object space. As an example, an R-n-fuzzy algebra is defined. Because this mapping is an isomorphism, the later studies can be restricted to ordinary fuzzy objects.

In the second part some very basic concepts in connection with the sigma-properties of fuzzy algebras are given and some simple theorems are proved. These are quite important from the aspect of fuzzy learning processes, as their probability-theoretic interpretation leads to several convergence theorems, which, however, are not dealt with here.

In this part we introduce the concept of the quantification of a fuzzy algebra, and by means of this concept a close relation between interactive fuzzy algebras and Boolean algebras is proved, a relation very different from that between Zadeh's original, noninteractive system and Boolean algebra.

Although this paper does not intend to present complete application examples, some aspects of the application of the above results, especially in learning control algorithms, are finally given, with the statements backed up by the experience of a simulation experiment currently in progress.

3.
Weakly dicomplemented lattices are bounded lattices equipped with two unary operations to encode a negation on concepts. They have been introduced to capture the equational theory of concept algebras (Wille 2000; Kwuida 2004). They generalize Boolean algebras. Concept algebras are concept lattices, thus complete lattices, with a weak negation and a weak opposition. A special case of the representation problem for weakly dicomplemented lattices, posed in Kwuida (2004), is whether complete weakly dicomplemented lattices are isomorphic to concept algebras. In this contribution we give a negative answer to this question (Theorem 4). We also provide a new proof of a well-known result due to M.H. Stone (Trans Am Math Soc 40:37–111, 1936), saying that each Boolean algebra is a field of sets (Corollary 4). Before these results, we prove that the boundedness condition in the initial definition of weakly dicomplemented lattices (Definition 1) is superfluous (Theorem 1; see also Kwuida (2009)).
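In a concept algebra the weak negation sends an extent A to the closure of its complement, which need not be disjoint from A; this is one way to see that concept algebras generalize, rather than coincide with, Boolean algebras. A toy sketch over an invented formal context (all names are illustrative):

```python
# A small invented formal context: objects, attributes, incidence relation.
G = {"g1", "g2", "g3"}
M = {"m1", "m2"}
I = {("g1", "m1"), ("g2", "m1"), ("g2", "m2"), ("g3", "m2")}

def up(A):
    """A': the attributes shared by all objects in A."""
    return {m for m in M if all((g, m) in I for g in A)}

def down(B):
    """B': the objects having all attributes in B."""
    return {g for g in G if all((g, m) in I for m in B)}

def weak_neg_extent(A):
    """Extent of the weak negation of the concept with extent A:
    the closure of the complement of A."""
    return down(up(G - A))

A = down(up({"g1"}))            # extent of the concept generated by g1
assert A == {"g1", "g2"}
N = weak_neg_extent(A)
assert N == {"g2", "g3"}
assert A & N == {"g2"}          # not disjoint: the negation is only "weak"
```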

4.
We consider a language for reasoning about probability which allows us to make statements such as “the probability of E1 is less than ” and “the probability of E1 is at least twice the probability of E2,” where E1 and E2 are arbitrary events. We consider the case where all events are measurable (i.e., represent measurable sets) and the more general case, which is also of interest in practice, where they may not be measurable. The measurable case is essentially a formalization of (the propositional fragment of) Nilsson's probabilistic logic. As we show elsewhere, the general (nonmeasurable) case corresponds precisely to replacing probability measures by Dempster-Shafer belief functions. In both cases, we provide a complete axiomatization and show that the problem of deciding satisfiability is NP-complete, no worse than that of propositional logic. As a tool for proving our complete axiomatizations, we give a complete axiomatization for reasoning about Boolean combinations of linear inequalities, which is of independent interest. This proof and others make crucial use of results from the theory of linear programming. We then extend the language to allow reasoning about conditional probability and show that the resulting logic is decidable and completely axiomatizable, by making use of the theory of real closed fields.
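Statements in this language are linear constraints on a probability measure, which is why linear programming is the natural tool. A minimal sketch with an invented measure over four worlds; the bound 1/2 in the first constraint is our own illustrative value (the threshold is elided in the abstract above):

```python
from fractions import Fraction as Fr

# An invented probability measure on four "possible worlds".
mu = {"w1": Fr(1, 6), "w2": Fr(1, 6), "w3": Fr(1, 3), "w4": Fr(1, 3)}

def P(event):
    """Probability of a (measurable) event, i.e., a set of worlds."""
    return sum(mu[w] for w in event)

E1 = {"w1", "w2"}   # P(E1) = 1/3
E2 = {"w1"}         # P(E2) = 1/6

# "the probability of E1 is less than 1/2"  (illustrative threshold)
assert P(E1) < Fr(1, 2)
# "the probability of E1 is at least twice the probability of E2"
assert P(E1) >= 2 * P(E2)
```

Deciding satisfiability of such a formula amounts to asking whether some measure mu makes all its linear constraints true at once, which is where the NP-completeness result bites.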

5.
This paper proposes two semantics of a probabilistic variant of the π-calculus: an interleaving semantics in terms of Segala automata, and a truly concurrent semantics in terms of probabilistic event structures. The key technical point is the use of types to identify a good class of non-deterministic probabilistic behaviours which preserves the compositionality of the parallel operator in the event structures and the calculus. We show an operational correspondence between the two semantics. This allows us to prove a “probabilistic confluence” result, which generalises the confluence of the linearly typed π-calculus.

6.
Given a timed automaton M, a linear temporal logic formula φ, and a bound k, bounded model checking for timed automata determines if there is a falsifying path of length k to the hypothesis that M satisfies the specification φ. This problem can be reduced to the satisfiability problem for Boolean constraint formulas over linear arithmetic constraints. We show that bounded model checking for timed automata is complete, and we give lower and upper bounds for the length k of counterexamples. Moreover, we define bounded model checking for networks of timed automata in a compositional way.
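The core idea of bounded model checking — search for a falsifying path of length at most k — can be illustrated on an untimed finite transition system by explicit path enumeration; real BMC instead encodes this search as a satisfiability problem over Boolean and linear-arithmetic constraints, as described above. The system below is invented:

```python
# Toy finite transition system (an untimed stand-in for a timed automaton).
trans = {("s0", "s1"), ("s1", "s0"), ("s1", "bad")}
init = "s0"

def safe(s):
    """Specification: the state "bad" is never reached."""
    return s != "bad"

def bmc(k):
    """Return a falsifying path of length at most k, or None if none exists."""
    frontier = [[init]]
    for _ in range(k):
        nxt = []
        for path in frontier:
            for (a, b) in trans:
                if a == path[-1]:
                    p = path + [b]
                    if not safe(b):
                        return p          # counterexample found
                    nxt.append(p)
        frontier = nxt
    return None

assert bmc(1) is None                     # too short to reach "bad"
assert bmc(2) == ["s0", "s1", "bad"]      # counterexample of length 2
```

The completeness result quoted above says that for timed automata a sufficiently large k can be computed, so that bmc-style search is not merely a bug-finding heuristic.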

7.
The first half is a tutorial on orderings, lattices, Boolean algebras, operators on Boolean algebras, Tarski's fixed point theorem, and relation algebras.

In the second half, elements of a complete relation algebra are used as “meanings” for program statements. The use of relation algebras for this purpose was pioneered by de Bakker and de Roever in [10–12]. For a class of programming languages with program schemes, single μ-recursion, while-statements, if-then-else, sequential composition, and nondeterministic choice, a definition of “correct interpretation” is given which properly reflects the intuitive (or operational) meanings of the program constructs. A correct interpretation includes for each program statement an element serving as “input/output relation” and a domain element specifying that statement's “domain of nontermination”. The derivative of Hitchcock and Park [17] is defined and a relation-algebraic version of the extension by de Bakker [8, 9] of the Hitchcock-Park theorem is proved. The predicate transformers wps(-) and wlps(-) are defined and shown to obey all the standard laws in [15]. The “law of the excluded miracle” is shown to hold for an entire language if it holds for that language's basic statements (assignment statements and so on). Determinism is defined and characterized for all the program constructs. A relation-algebraic version of the invariance theorem for while-statements is given. An alternative definition of interpretation, called “demonic”, is obtained by using “demonic union” in place of ordinary union, and “demonic composition” in place of ordinary relational composition. Such interpretations are shown to arise naturally from a special class of correct interpretations, and to obey the laws of wps(-).
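The contrast between ordinary and demonic composition can be made concrete with relations as sets of pairs. A small sketch using one common definition of demonic composition (which may differ in detail from the relation-algebraic one used in the paper):

```python
def comp(R, S):
    """Ordinary relational composition R;S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def dom(R):
    """Domain of a relation."""
    return {a for (a, _) in R}

def demonic_comp(R, S):
    """Demonic composition: keep (a, c) only when every R-successor of a
    lies in the domain of S, so no execution of R from a can strand S."""
    return {(a, c) for (a, c) in comp(R, S)
            if all(b in dom(S) for (a2, b) in R if a2 == a)}

R = {(1, 2), (1, 3)}                  # from 1, R may go to 2 or 3
S = {(2, 4)}                          # S is undefined at 3
assert comp(R, S) == {(1, 4)}         # angelic view: 1 can reach 4
assert demonic_comp(R, S) == set()    # demonic view: 1 may get stuck at 3
```

This is exactly the intuition behind using demonic interpretations for total-correctness reasoning: a nondeterministic choice that may lead to a stuck or nonterminating branch counts as failure.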


8.
On Full Abstraction for PCF: I, II, and III

9.
We observe that if R := (I, ρ, J) is an incidence structure, viewed as a matrix, then the topological closure of the set of columns is the Stone space of the Boolean algebra generated by the rows. As a consequence, we obtain that the topological closure of the collection of principal initial segments of a poset P is the Stone space of the Boolean algebra Tailalg(P) generated by the collection of principal final segments of P, the so-called tail-algebra of P. Similar results concerning Priestley spaces and distributive lattices are given. A generalization to incidence structures valued by abstract algebras is considered.
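For a finite incidence structure the Boolean algebra generated by the rows can be computed directly, by closing the row sets under union, intersection, and complement; in the finite case the Stone space is just the (discrete) set of its atoms. An illustrative sketch with invented data:

```python
from itertools import combinations

J = ["c1", "c2", "c3"]                  # columns
rows = [{"c1", "c2"}, {"c2", "c3"}]     # rows viewed as sets of columns

def boolean_algebra(gens, universe):
    """Field of sets generated by gens: close under union, intersection,
    and complement relative to the universe."""
    U = frozenset(universe)
    algebra = {U, frozenset()} | {frozenset(g) for g in gens}
    changed = True
    while changed:
        changed = False
        for a, b in list(combinations(algebra, 2)):
            for new in (a & b, a | b):
                if new not in algebra:
                    algebra.add(new)
                    changed = True
        for a in list(algebra):
            if U - a not in algebra:
                algebra.add(U - a)
                changed = True
    return algebra

B = boolean_algebra(rows, J)
assert frozenset({"c2"}) in B           # intersection of the two rows
assert len(B) == 8                      # atoms {c1},{c2},{c3} -> 2^3 sets
```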

10.
Some computationally hard problems, e.g., deduction in logical knowledge bases, are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess this known part off-line so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them, preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for others, compilability implies consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, “completeness” meaning they are “the least likely to be compilable.” We also investigate preprocessing that does not yield polynomial-time on-line algorithms, but generically “decreases” complexity. This leads us to define “hierarchies of compilability,” which are the analog of the polynomial hierarchy. A detailed comparison of our framework to the idea of “parameterized tractability” shows the differences between the two approaches.

11.
We present a meta-logic that contains a new quantifier (for encoding “generic judgments”) and inference rules for reasoning within fixed points of a given specification. We then specify the operational semantics and bisimulation relations for the finite π-calculus within this meta-logic. Since we restrict to the finite case, the ability of the meta-logic to reason within fixed points becomes a powerful and complete tool since simple proof search can compute this one fixed point. The quantifier helps with the delicate issues surrounding the scope of variables within π-calculus expressions and their executions (proofs). We shall illustrate several merits of the logical specifications we write: they are natural and declarative; they contain no side conditions concerning names of variables while maintaining a completely formal treatment of such variables; differences between late and open bisimulation relations are easy to see declaratively; and proof search involving the application of inference rules, unification, and backtracking can provide complete proof systems for both one-step transitions and for bisimulation.

12.
We introduce economical tests of an economical system which lead to effect algebras, structures more general than Boolean algebras or λ-systems. The latter were recently used to describe unambiguous events and to model situations, for example in decision theory, where the Kolmogorov axiomatics is not applicable.
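The motivating example of an effect algebra is the real interval [0, 1] with a partial sum defined only when it does not exceed 1. A minimal sketch of that standard structure (not the paper's test construction), using exact rationals to avoid floating-point noise:

```python
from fractions import Fraction as Fr

def oplus(a, b):
    """Partial sum of the standard effect algebra on [0, 1]:
    a (+) b is defined only when a + b <= 1."""
    return a + b if a + b <= 1 else None   # None marks "undefined"

def orth(a):
    """Orthosupplement: the unique b with a (+) b = 1."""
    return 1 - a

assert oplus(Fr(3, 10), Fr(2, 5)) == Fr(7, 10)
assert oplus(Fr(7, 10), Fr(7, 10)) is None   # not every pair is summable
assert oplus(Fr(3, 10), orth(Fr(3, 10))) == 1
```

The partiality of the sum is precisely what distinguishes effect algebras from Boolean algebras, where the join of any two elements always exists.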

13.
14.
In this paper the data structures and algorithms which were used to implement hyper-resolution are presented. The algorithms, which do not generate hyper-resolvents by creating sequences of P1-resolvents, have been used to obtain proofs of
THEOREM 1. Let G be a group such that x³ = e for all x ∈ G. If h is defined as h(x, y) = xyx′y′ for x, y ∈ G, then for all x, y ∈ G, h(h(x, y), y) = e (the identity).
THEOREM 2. Let R be a ring such that x² = x for all x ∈ R. Then R is commutative.
THEOREM 3. Every subgroup of index 2 is normal.
The data structures have been designed so that only a single copy of any literal or term is retained, no matter how often it occurs in the clauses kept. The main advantage of this approach is not the resulting savings in storage, but the fact that simultaneously matching a set of literals generates an entire set of hyper-resolvents.

A method of extracting a set of “candidates for unification with a given literal” from the data structures is also presented. Using this method substantially reduces the number of times a complete unification of two literals must be attempted.

The initial results obtained from the program suggest that many resolution algorithms besides hyper-resolution could be enhanced by the use of similar data structures and algorithms.
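The single-copy discipline for literals and terms described above is essentially what is now called hash-consing: intern each term in a table keyed by its structure, so that structurally equal terms are physically identical and can be compared by pointer. An illustrative Python sketch (not the paper's actual data structures):

```python
class TermPool:
    """Keep a single shared copy of every term, in the spirit of the
    single-copy literal/term structures described above (hash-consing)."""
    def __init__(self):
        self._pool = {}

    def make(self, head, *args):
        """Build (or retrieve) the unique interned term head(args...)."""
        key = (head,) + args
        if key not in self._pool:
            self._pool[key] = key    # the interned term is the key itself
        return self._pool[key]

pool = TermPool()
x = pool.make("x")
t1 = pool.make("h", pool.make("f", x), x)
t2 = pool.make("h", pool.make("f", x), x)
assert t1 is t2        # structurally equal terms share one copy in memory
```

With sharing in place, an index from each interned term to the clauses containing it yields the "candidates for unification" sets cheaply, which is the effect the paragraph above describes.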

15.
We study remarkable sub-lattice effect algebras of Archimedean atomic lattice effect algebras E, namely their blocks M, centers C(E), compatibility centers B(E), and sets S(E) of all sharp elements of E. We show that in every such effect algebra E, every atomic block M and the set S(E) are bifull sub-lattice effect algebras of E. Consequently, if E is moreover sharply dominating, then every atomic block M is again sharply dominating, and the basic decompositions of elements (BDE of x) in E and in M coincide. Thus in the compatibility center B(E) of E, nonzero elements are dominated by central elements and their basic decompositions coincide with those in all atomic blocks and in E. Some further details which may be helpful in answering questions about the existence and properties of states are shown. Namely, we prove the existence of an (o)-continuous state on every sharply dominating Archimedean atomic lattice effect algebra E with B(E) ≠ C(E). Moreover, for compactly generated Archimedean lattice effect algebras, the equivalence of (o)-continuity of states with their complete additivity is proved. Further, we prove a “State smearing theorem” for these lattice effect algebras.

16.
We characterize polynomials having the same set of nonzero cyclic resultants. Generically, for a polynomial f of degree d, there are exactly 2^(d−1) distinct degree-d polynomials with the same set of cyclic resultants as f. However, in the generic monic case, degree-d polynomials are uniquely determined by their cyclic resultants. Moreover, two reciprocal (“palindromic”) polynomials giving rise to the same set of nonzero cyclic resultants are equal. In the process, we also prove a unique factorization result in semigroup algebras involving products of binomials. Finally, we discuss how our results yield algorithms for explicit reconstruction of polynomials from their cyclic resultants.
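For a monic polynomial f with roots λ_i, the m-th cyclic resultant is Res(f, xᵐ − 1) = ∏_i (λ_iᵐ − 1), up to sign conventions. A sketch computing the first few cyclic resultants of an invented polynomial with integer roots, so the arithmetic stays exact:

```python
def cyclic_resultants(roots, n):
    """r_m = Res(f, x^m - 1) = prod over roots lam of (lam^m - 1),
    for monic f with the given roots (sign conventions vary)."""
    out = []
    for m in range(1, n + 1):
        r = 1
        for lam in roots:
            r *= lam**m - 1
        out.append(r)
    return out

# f(x) = (x - 2)(x - 3): r_m = (2^m - 1)(3^m - 1)
assert cyclic_resultants([2, 3], 3) == [2, 24, 182]
```

The reconstruction algorithms mentioned above invert this map: given enough values r_m, recover f within the ambiguity the paper quantifies.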

17.
Given a monoid string rewriting system M, one way of obtaining a complete rewriting system for M is to use the classical Knuth–Bendix critical pairs completion algorithm. It is well-known that this algorithm is equivalent to computing a noncommutative Gröbner basis for M. This article develops an alternative approach, using noncommutative involutive basis methods to obtain a complete involutive rewriting system for M.
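A complete (terminating and confluent) rewriting system decides the word problem: every word reduces to a unique normal form, and two words represent the same monoid element exactly when their normal forms coincide. A minimal sketch with the one-rule complete system for the free commutative monoid on two generators (an illustrative system, not one produced by the article's involutive method):

```python
def normal_form(word, rules):
    """Rewrite to normal form with a complete (terminating, confluent)
    string rewriting system given as (lhs, rhs) pairs."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = word.find(lhs)
            if i != -1:
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
    return word

# Complete system for the free commutative monoid on {a, b}: sort letters.
rules = [("ba", "ab")]
assert normal_form("bab", rules) == "abb"
assert normal_form("bba", rules) == normal_form("abb", rules)  # same element
```

Knuth–Bendix completion, and the involutive alternative developed in the article, are ways of producing such a `rules` list from a presentation that is not yet confluent.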

18.
In this paper, we characterize factor congruences in the quasivariety of BCK-algebras. As an application we prove that the free algebra over an infinite set of generators is indecomposable in any subvariety of BCK-algebras. We also study the decomposability of free algebras in the variety of hoop residuation algebras and its subvarieties. We prove that free algebras in a non-k-potent subvariety of this variety are indecomposable, while finitely generated free algebras in k-potent subvarieties have a unique non-trivial decomposition into a direct product of two factors, one of which is the two-element implication algebra. This paper is partially supported by Universidad Nacional del Sur and CONICET.

19.
Stone Coalgebras

20.
In this paper we consider fuzzy subsets of a universe as L-fuzzy subsets instead of [0, 1]-valued ones, where L is a complete lattice. We enrich the lattice L by adding some suitable operations to make it into a pseudo-BL-algebra. Since BL-algebras are the main frameworks of fuzzy logic, we propose to consider non-commutative BL-algebras, which are more natural for modeling fuzzy notions. Based on reasoning within non-commutative fuzzy logic, we model linguistic modifiers such as very and more or less, and give an appropriate membership function for each one by taking into account the context of the given fuzzy notion by means of resemblance L-fuzzy relations.
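The classical power-modifier treatment of very (concentration, μ²) and more or less (dilation, √μ) gives a feel for what the paper refines with pseudo-BL reasoning and resemblance relations; the membership function below is invented for illustration:

```python
import math

def tall(height_cm):
    """Toy membership function for the fuzzy notion 'tall'."""
    return max(0.0, min(1.0, (height_cm - 160) / 40))

def very(mu):
    """Concentration: the classical Zadeh modifier mu^2."""
    return lambda x: mu(x) ** 2

def more_or_less(mu):
    """Dilation: the classical Zadeh modifier mu^(1/2)."""
    return lambda x: math.sqrt(mu(x))

h = 180
assert tall(h) == 0.5
assert very(tall)(h) == 0.25
assert more_or_less(tall)(h) > tall(h) > very(tall)(h)
```

Unlike these fixed powers, the paper's modifiers are context-dependent: the resemblance L-fuzzy relation lets very tall mean different things for different underlying fuzzy notions.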


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号