20 similar documents retrieved (search time: 0 ms)
1.
Unified full implication algorithms of fuzzy reasoning    Cited by: 1 (self-citations: 0, other: 1)
Daowu Pei, Information Sciences, 2008, 178(2): 520-530
This paper discusses the full implication inference of fuzzy reasoning. For all residuated implications induced by left-continuous t-norms, unified α-triple I algorithms are constructed, generalizing the known results. As corollaries of the main results, some special algorithms can easily be derived based on four important residuated implications. These algorithms should be beneficial in applications of fuzzy reasoning. Based on properties of residuated implications, the proofs of many conclusions are greatly simplified.
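For orientation, the core of the full implication (triple I) method for fuzzy modus ponens can be stated in one formula. With ⊗ a left-continuous t-norm and → its residuated implication, the triple I solution of FMP (from A → B and input A*, infer B*) is the classical expression below; the paper's unified α-triple I algorithms generalize this basic case:

    B^*(y) = \sup_{x \in X} \Big( A^*(x) \otimes \big( A(x) \rightarrow B(y) \big) \Big), \qquad y \in Y,

where ⊗ and → are linked by the residuation property a ⊗ b ≤ c ⟺ a ≤ (b → c).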
2.
J. Drewniak, J. Sobera, Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2006, 10(6): 514-520
We examine sup-min compositions in a finite family of fuzzy implications. Since the composition of invariant fuzzy implications is an invariant function, we obtain a kind of "multiplication table" for such implications. A multistage proof of this table is presented. As a result, we obtain examples of finite semigroups of fuzzy implications.
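For reference, the sup-min composition under study combines two fuzzy implications I, J : [0,1]² → [0,1] pointwise:

    (I \circ J)(x, y) = \sup_{z \in [0,1]} \min\big( I(x, z),\; J(z, y) \big).

Closing a finite family of implications under this operation is what produces the "multiplication table" and, in turn, the finite semigroups mentioned above.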
3.
Incidence calculus is a mechanism for uncertain reasoning originally introduced by Bundy. He suggested a set of inference rules for deriving new incidence bounds from a given set of lower and upper bounds on some propositions. For any axiomatization, however, it is important to demonstrate that the inference axioms are complete. This paper proves that the inference rules used by Bundy are indeed complete.
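As background (standard incidence calculus, not quoted from this paper): each proposition φ is assigned an incidence i(φ) ⊆ W, the set of possible worlds in which it holds, and probabilities are derived from incidences rather than composed directly:

    i(\varphi \wedge \psi) = i(\varphi) \cap i(\psi), \qquad i(\neg\varphi) = W \setminus i(\varphi), \qquad p(\varphi) = \frac{|i(\varphi)|}{|W|}.

The inference rules whose completeness is proved propagate lower and upper bounds i_*(φ) ⊆ i(φ) ⊆ i^*(φ) through equations of this kind.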
4.
We study in this paper the use of consistency techniques and local propagation methods, originally developed for constraints over finite domains, for solving boolean constraints in Constraint Logic Programming (CLP). To this end, we first present a boolean CLP language clp(B/FD) built upon a CLP language over finite domains, clp(FD), which uses a propagation-based constraint solver. It is based on a single primitive constraint that allows the boolean solver to be encoded at a low level. The boolean solver obtained in this way is both very simple and very efficient: on average it is eight times faster than the CHIP propagation-based boolean solver, i.e. nearly an order of magnitude faster, and infinitely better than the CHIP boolean unification solver. It also performs on average several times faster than special-purpose stand-alone boolean solvers. We then present several simplifications of this approach, leading to the design of a very simple and compact dedicated boolean solver. This solver can be implemented in a WAM-based logical engine with a minimal extension limited to four new abstract instructions. This clp(B) system provides a further factor-of-two speedup over clp(B/FD).
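The following sketch illustrates propagation-based boolean solving over 0/1 domains in the general style described above; it is an illustration only (all names here are invented), and it does not reproduce clp(B/FD)'s single primitive constraint or its WAM integration:

    # Minimal propagation-based boolean solver sketch (illustrative only).
    # Variables range over the domain {0, 1}; constraints prune domains
    # until a fixpoint is reached, as in clp(FD)-style solvers.

    def and_prune(d, x, y, z):
        """Pruning rules for z = x AND y on domains d[var] <= {0, 1}."""
        dx, dy, dz = d[x], d[y], d[z]
        updates = []
        if dx == {1} and dy == {1}:
            updates.append((z, dz & {1}))
        if dx == {0} or dy == {0}:
            updates.append((z, dz & {0}))
        if dz == {1}:
            updates += [(x, dx & {1}), (y, dy & {1})]
        if dz == {0} and dx == {1}:
            updates.append((y, dy & {0}))
        if dz == {0} and dy == {1}:
            updates.append((x, dx & {0}))
        return updates

    def propagate(domains, constraints):
        """Run all pruners to a fixpoint; return False on a domain wipe-out."""
        changed = True
        while changed:
            changed = False
            for prune in constraints:
                for var, dom in prune(domains):
                    if dom != domains[var]:
                        if not dom:
                            return False
                        domains[var] = dom
                        changed = True
        return True

    def solve(domains, constraints):
        """Propagation interleaved with labeling (depth-first search)."""
        if not propagate(domains, constraints):
            return None
        for v, dom in domains.items():
            if len(dom) > 1:
                for val in sorted(dom):
                    trial = {u: set(s) for u, s in domains.items()}
                    trial[v] = {val}
                    solution = solve(trial, constraints)
                    if solution is not None:
                        return solution
                return None
        return {v: next(iter(s)) for v, s in domains.items()}

    doms = {'x': {0, 1}, 'y': {0, 1}, 'z': {1}}        # require x AND y = 1
    print(solve(doms, [lambda d: and_prune(d, 'x', 'y', 'z')]))
    # -> {'x': 1, 'y': 1, 'z': 1}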
5.
Jian Zhang, Journal of Automated Reasoning, 1996, 17(1): 1-22
The generation of models and counterexamples is an important form of reasoning. In this paper, we give a formal account of a system, called FALCON, for constructing finite algebras from given equational axioms. The abstract algorithms, as well as some implementation details and sample applications, are presented. The generation of finite models is viewed as a constraint satisfaction problem, with ground instances of the axioms as constraints. One feature of the system is that it employs a very simple technique, called the least number heuristic, to eliminate isomorphic (partial) models, thus reducing the size of the search space. The correctness of the heuristic is proved. Some experimental data are given to show the performance and applications of the system.
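To make the least number heuristic concrete, here is a small illustrative sketch (hypothetical code, not FALCON's): when filling the operation table of a candidate finite algebra cell by cell, a fresh cell is only assigned values 0, ..., m+1, where m is the largest element that has appeared so far (as a value or a cell index), since any larger value could be renamed to m+1 by an isomorphism:

    # Search for a binary operation table on {0, ..., n-1} satisfying a
    # ground axiom set, using the least number heuristic (LNH).
    from itertools import product

    def find_model(n, ok):
        """Fill an n x n Cayley table cell by cell; `ok` checks partial tables."""
        cells = list(product(range(n), range(n)))
        table = {}

        def extend(k, max_used):
            if k == len(cells):
                return dict(table)
            i, j = cells[k]
            m = max(max_used, i, j)      # largest element seen so far
            # LNH: values above m + 1 are interchangeable with m + 1
            # by renaming, so trying them only revisits isomorphic models.
            for v in range(min(m + 2, n)):
                table[(i, j)] = v
                if ok(table, n):
                    result = extend(k + 1, max(m, v))
                    if result is not None:
                        return result
                del table[(i, j)]
            return None

        return extend(0, -1)

    def associative_so_far(table, n):
        """Check every instance of (x*y)*z = x*(y*z) decided by the partial table."""
        for x, y, z in product(range(n), repeat=3):
            xy, yz = table.get((x, y)), table.get((y, z))
            if xy is None or yz is None:
                continue
            left, right = table.get((xy, z)), table.get((x, yz))
            if left is not None and right is not None and left != right:
                return False
        return True

    print(find_model(3, associative_so_far))   # some associative operation on {0,1,2}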
6.
Monadic second order (MSO) logic has proved to be a useful tool in many areas of application, ranging from decidability and complexity to picture processing, correctness of programs, and parallel processes. Characterizing the structural borderline between decidability and undecidability is a classical research problem here. This problem is related to questions in computational complexity, especially to the model checking problem, for which many tools developed in the area of decidability have proved useful. For more than two decades it has been conjectured [D. Seese, The structure of the models of decidable monadic theories of graphs, Ann. Pure Appl. Logic 53 (1991) 169-195] that decidability of monadic theories of countable structures implies that the theory can be reduced via interpretability to a theory of trees.
7.
8.
The concept of -fuzzy implicative filters of lattice implication algebras is introduced and some related properties are investigated. In particular, the relationships among the ordinary fuzzy implicative filters, the (∈,∈∨q)-fuzzy implicative filters and the -fuzzy implicative filters of lattice implication algebras are studied.
9.
Several important non-standard cut sets of lattice-valued fuzzy sets are investigated: strong cuts, "not less" cuts and "neither less nor equal" cuts. In each case it is proved that the collection of all cuts of any lattice-valued fuzzy set forms a complete lattice under inclusion. A decomposition theorem (representation by cuts) is proved for "neither less nor equal" cuts. Necessary and sufficient conditions under which two lattice-valued fuzzy sets with the same domain have equal families of corresponding cut sets are given.
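Writing μ : X → L for a lattice-valued fuzzy set and p ∈ L, one natural reading of the cut families named above is the following (the paper's precise definitions should be consulted):

    \mu_p = \{x : p \le \mu(x)\} \ \text{(ordinary cut)}, \qquad \{x : p < \mu(x)\} \ \text{(strong cut)},
    \{x : \mu(x) \not< p\} \ \text{("not less")}, \qquad \{x : \mu(x) \not\le p\} \ \text{("neither less nor equal")};

in a lattice that is not a chain, the last two genuinely differ from the first two, which is why these non-standard cuts carry extra information.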
10.
PCS (Proving-Computing-Solving), introduced by B. Buchberger in 2001, and S-Decomposition, introduced by T. Jebelean in 2001, are strategies for handling proof problems by combining logic inference steps (e.g., modus ponens, Skolemization, instantiation) with rewriting steps (application of definitions) and solving procedures based on algebraic techniques (e.g., Groebner Bases, Cylindrical Algebraic Decomposition).
11.
Aytar has introduced the concepts of statistical limit and cluster points of a sequence of fuzzy numbers based on the definitions given in Fridy's study for sequences of real numbers. In this paper, we define λ-statistical limit and λ-statistical cluster points of sequences of fuzzy numbers and discuss the relations among the sets of ordinary limit points, λ-statistical limit points and λ-statistical cluster points of sequences of fuzzy numbers.
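For context, the standard ingredients (following the usual λ-statistical convergence setup, not quoted from the paper): let λ = (λ_n) be nondecreasing with λ_n → ∞, λ_{n+1} ≤ λ_n + 1 and λ_1 = 1, and put I_n = [n − λ_n + 1, n]. The λ-density of an index set K ⊆ ℕ is

    \delta_\lambda(K) = \lim_{n \to \infty} \frac{1}{\lambda_n}\,\big|\{k \in I_n : k \in K\}\big|,

and λ-statistical limit points (respectively cluster points) of a sequence are defined from subsequences whose index sets do not have λ-density zero (respectively from neighbourhoods that are hit by index sets of non-zero λ-density).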
12.
N. S. Sapidis, Computing, 2007, 79(2-4): 337-352
Robust Product Lifecycle Management (PLM) technology requires the availability of informationally complete models for all parts of a design project, including spatial constraints. This is the subject of the present investigation, leading to a new model for spatial constraints, the "virtual solid", which generalizes a similar concept used by Sapidis and Theodosiou to model "required free-spaces" in plants [14]. The present research focuses on the solid-modeling aspects of the virtual-solid methodology, and derives new solid-modeling problems (related to object definition and to object processing), whose robust treatment is a prerequisite for developing efficient models for complex spatial constraints.
13.
This paper considers the relationship between fuzzy sets and algebraic hyperstructures. A generalization of the ideas presented by Davvaz [B. Davvaz, Fuzzy Hv-submodules, Fuzzy Sets Syst. 117 (2001) 477-484] and by Bhakat and Das [S.K. Bhakat, P. Das, (∈, ∈ ∨ q)-fuzzy subgroup, Fuzzy Sets Syst. 80 (1996) 359-368] is presented. In fact, the main purpose of this paper is to introduce and study a new sort of fuzzy Hv-submodule of an Hv-module, called the (∈, ∈ ∨ q)-fuzzy Hv-submodule. These fuzzy Hv-submodules are characterized by their level Hv-submodules. The concept of a fuzzy Hv-submodule with thresholds is introduced, and some interesting properties are investigated.
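The Bhakat-Das apparatus referred to above rests on fuzzy points and quasi-coincidence; in the standard notation (a sketch of the general idea, not the paper's Hv-module definitions):

    x_t \in \mu \iff \mu(x) \ge t, \qquad x_t\, q\, \mu \iff \mu(x) + t > 1, \qquad x_t \in\!\vee q\, \mu \iff x_t \in \mu \text{ or } x_t\, q\, \mu.

An (∈, ∈∨q)-fuzzy substructure requires that ∈-membership of the generating fuzzy points forces ∈∨q-membership of the composed one; the threshold 1/2 that appears in characterizations of such structures comes from the q-part.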
14.
The heterogeneous nature of the data types and computational structures involved in Computer Vision algorithms makes the design and implementation of massively parallel image processing systems a problem that is not yet fully solved. It is a common belief that in the near future MIMD architectures, with their high degree of flexibility, will play a very important role in this research area, using a limited number of identical but powerful processing elements. The aim of this paper is to show how a selected list of algorithms, into which a unique Image Understanding process can be decomposed, can be mapped onto a distributed-memory MIMD architecture. The operative modalities we adopt are SPMD for the low-level processing and MIMD for the intermediate and high levels of processing. Both efficient parallel formulations of the algorithms with respect to the interconnection topology of the processors and their optimized implementations on a target transputer-based architecture are reported.
15.
Conventional methods of storing a K-dimensional array allow easy extension only along one dimension. We present a technique of allocating a linear sequence of contiguous storage locations for a K-dimensional extendible array by adjoining blocks of (K-1)-dimensional subarrays. Element access is by determination of the block header location and then the displacement within the block. For cubical and all practical cases of rectangular arrays considered, the storage requirement is O(N), where N is the array size. The element access cost is O(K) for the 2-step computed access function used.
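A minimal two-dimensional illustration of the block-adjoining scheme (hypothetical code, not from the paper, which treats general K): every row or column extension adjoins one contiguous block to the same storage sequence and records a small header; access locates the header, then adds the displacement:

    # 2-D extendible array stored as one contiguous sequence of cells.
    class ExtendibleArray2D:
        def __init__(self):
            self.store = []      # single contiguous storage sequence
            self.rows = 0
            self.cols = 0
            self.row_hdr = []    # per added row: (block offset, #cols at that time)
            self.col_hdr = []    # per added column: (block offset, #rows at that time)

        def add_row(self, fill=0):
            self.row_hdr.append((len(self.store), self.cols))
            self.store.extend([fill] * self.cols)
            self.rows += 1

        def add_col(self, fill=0):
            self.col_hdr.append((len(self.store), self.rows))
            self.store.extend([fill] * self.rows)
            self.cols += 1

        def _address(self, i, j):
            # 2-step access: locate the block header, then the displacement.
            off, width = self.row_hdr[i]
            if j < width:            # column j existed when row i was adjoined
                return off + j
            off, _ = self.col_hdr[j]  # else row i existed when column j was adjoined
            return off + i

        def get(self, i, j):
            return self.store[self._address(i, j)]

        def set(self, i, j, value):
            self.store[self._address(i, j)] = value

    a = ExtendibleArray2D()
    a.add_row(); a.add_col(); a.add_row(); a.add_col()   # grow to 2 x 2
    a.set(1, 1, 42)
    print(a.get(1, 1))   # -> 42

Every cell is stored exactly once, so the storage requirement is O(N); access here costs two table lookups, matching the paper's O(K) 2-step access function for K = 2.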
16.
W. Händler, Computing, 1992, 48(1): 5-20
A memory-coupled multiprocessor, well suited to bit-wise operation, can be utilized to operate as a 1024-item cellular processing unit. Each processor works on 32 bits, and 32 such processors are combined into a multiprocessor. The information is stored in the vertical direction, as defined and described in earlier papers on vertical processing [1]. The two-dimensional array (32 × 32 bits) is composed of the 32-bit machine words of the coupled processors on the one hand and of the 32 processors in a nearest-neighbour topology on the other. The bit-wise cellular operation at one of the 1024 points is realized by the program of the processor, possibly assisted by appropriate microprogram sequences. Dedicated to Professor Willard L. Miranker on the occasion of his 60th birthday.
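As a flavour of vertical processing, the sketch below performs one bit-wise cellular step on a 32 × 32 toroidal grid stored as 32 machine words, so all 1024 cells are updated with a handful of word operations per row; it is a single-process illustration, whereas the paper distributes the 32 words over 32 memory-coupled processors:

    # One step of a parity cellular automaton on a 32 x 32 torus,
    # stored vertically: rows[i] is a 32-bit word, bit j = cell (i, j).
    WORD = 32
    MASK = (1 << WORD) - 1

    def parity_step(rows):
        """Each cell becomes the XOR of its four neighbours."""
        n = len(rows)
        out = []
        for i, w in enumerate(rows):
            up    = rows[(i - 1) % n]
            down  = rows[(i + 1) % n]
            left  = ((w << 1) | (w >> (WORD - 1))) & MASK   # rotate within the row
            right = ((w >> 1) | (w << (WORD - 1))) & MASK
            out.append(up ^ down ^ left ^ right)
        return out

    grid = [0] * WORD
    grid[16] = 1 << 16            # a single live cell in the middle
    grid = parity_step(grid)
    print(sum(bin(w).count("1") for w in grid))   # -> 4: the neighbours light up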
17.
This paper is concerned with the complexity of proofs and of searching for proofs in resolution. We show that the recently proposed proof-search algorithm of Ben-Sasson & Wigderson for resolution cannot give better than weakly exponential performance. This is a consequence of our main result: we show the optimality of the general relationship, called the size-width tradeoff, of Ben-Sasson & Wigderson. Moreover, we obtain the optimality of the size-width tradeoffs for the widely used restrictions of resolution: regular, Davis-Putnam, negative, and positive.
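The size-width relationship in question reads, in the Ben-Sasson & Wigderson form (for an unsatisfiable CNF F over n variables, with w(F ⊢ ⊥) the minimal width of a resolution refutation, w(F) the maximal clause width of F, and S the minimal size of a general resolution refutation):

    w(F \vdash \bot) \;\le\; w(F) + O\big(\sqrt{n \ln S}\big),

so small refutations imply narrow ones; proving this tradeoff optimal is what yields the weakly exponential lower bound for the width-based proof-search algorithm.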
18.
Mahaney and others have shown that sparse self-reducible sets have time-efficient algorithms, and have concluded that it is unlikely that NP has sparse complete sets. Mahaney's work, intuition, and a 1978 conjecture of Hartmanis notwithstanding, nothing has been known about the density of complete sets for feasible classes until now. This paper shows that sparse self-reducible sets have space-efficient algorithms, and in many cases even have time-space-efficient algorithms. We conclude that NL, NC^k, AC^k, LOG(DCFL), LOG(CFL), and P lack complete (or even Turing-hard) sets of low density unless implausible complexity class inclusions hold. In particular, if NL (respectively P, NC^k, or NP) has a polylog-sparse logspace-hard set, then NL ⊆ SC (respectively P ⊆ SC, NC^k ⊆ SC, or PH ⊆ SC), and if P has subpolynomially sparse logspace-hard sets, then P ≠ PSPACE.
19.
David W. Juedes, Computational Complexity, 1995, 5(3-4): 267-283
Certain natural decision problems are known to be intractable because they are complete for E, the class of all problems decidable in exponential time. Lutz recently conjectured that many other seemingly intractable problems are not complete for E, but are intractable nonetheless because they are weakly complete for E (i.e., they are in E and hard for more than a measure 0 subset of E). The main result of this paper shows that Lutz's intuition is at least partially correct: many more problems are weakly complete for E than are complete for E. Specifically, weakly complete problems are not rare in the sense that they form a non-measure 0 subset of E. This extends a recent result of Lutz that establishes the existence of problems that are weakly complete, but not complete, for E. The proof of Lutz's original result employs a sophisticated martingale diagonalization argument. Here, we simplify and extend Lutz's argument to prove the main result. This simplified martingale diagonalization argument may be applicable to other questions involving individual weakly complete problems.
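For context (standard Lutz-style definitions, not quoted from the paper): resource-bounded measure is defined via martingales, functions d : {0,1}* → [0, ∞) that bet fairly on the successive bits of a language's characteristic sequence,

    d(w) = \frac{d(w0) + d(w1)}{2}, \qquad d \text{ succeeds on } A \iff \limsup_{n\to\infty} d(A \upharpoonright n) = \infty;

a class X has measure 0 in E when a single suitably time-bounded martingale succeeds on every language in X, and martingale diagonalization constructs languages on which all such martingales fail.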
20.
Stephen Bloch, Computational Complexity, 1994, 4(2): 175-205
The main results of this paper are recursion-theoretic characterizations of two parallel complexity classes: the functions computable by uniform bounded fan-in circuit families of log and polylog depth (or, equivalently, the functions bitwise computable by alternating Turing machines in log and polylog time). The present characterizations avoid the complex base functions, function constructors, and a priori size or depth bounds typical of previous work on these classes. This simplicity is achieved by extending the tiered recursion techniques of Leivant and of Bellantoni & Cook.
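The tiered discipline being extended separates each function's arguments into normal ones (written before a semicolon), which may drive recursion, and safe ones, which may only be consumed. Schematically, Bellantoni-Cook safe recursion on notation is

    f(0, \bar{x}; \bar{a}) = g(\bar{x}; \bar{a}), \qquad f(2y + b, \bar{x}; \bar{a}) = h_b\big(y, \bar{x}; \bar{a}, f(y, \bar{x}; \bar{a})\big), \quad b \in \{0, 1\},

where the recursive value occurs only in a safe position and therefore can never drive a further recursion; this discipline is what replaces the explicit size and depth bounds of earlier characterizations.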