Similar literature
20 similar documents found.
1.
Fuzzy information granules indicate sufficiently interpretable fuzzy sets for achieving a high level of human cognitive abstraction. Furthermore, granularity, complexity, and accuracy are associated with fuzzy information granules. Measuring granularity is a promising means of verifying the effectiveness of the fuzzy granular model. Higher granularity indicates fine partitions, whereas coarser partitions suggest lower granularity. Therefore, accuracy is directly proportional to granularity: the higher the granularity, the more accurate and the more complex the model. Consequently, the granularity-simplicity tradeoff is also a significant criterion in considering the interpretability-accuracy tradeoff. This paper thoroughly reviews diverse ideas to understand the fuzzy information granule and addresses a sensible compromise between interpretability-accuracy and granularity-simplicity. These requirements contradict each other, so certain conceptual and mathematical considerations are necessary when designing a granular framework. Moreover, a double-axis taxonomy is introduced in this paper: “complexity-based granularity versus semantic-based granularity” (which considers granularity measures) and “granular partition level versus granular rule base level” (regarding knowledge base stages). However, several constraints should be considered in designing a granular framework, such as the granularity-accuracy dilemma, the overfitting/underfitting situation, the granular rule base level conflict, the interpretability constraint threshold, the stability-plasticity dilemma, and parameter optimization. This paper primarily aims to present a conceptual framework to better understand existing methods, as well as how these methods can inspire future research.

2.
Granular computing is a new theory and methodology of intelligent computing that is currently attracting the attention of many researchers. However, concrete and workable models for representing granules, and methods for reasoning across different granules, remain relatively scarce. This paper incorporates fuzzy rough sets into the new theoretical framework of granular computing, which is clearly significant for handling complex information systems and solving complex problems. First, information granules under a fuzzy relation are constructed using the Cartesian product; then, representations of the fuzzy rough operators at different granularities are given, forming a hierarchical structure; finally, the problem of choosing the coarseness of granulation for a fuzzy information system is considered and an example is given, thereby providing a concrete and practical framework for granular computing.
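As an illustration of how fuzzy rough operators combine a fuzzy relation with a fuzzy set, the sketch below applies the standard (textbook) fuzzy rough lower and upper approximation operators to a toy similarity relation. This is an assumption-level illustration, not the paper's specific construction; the relation R, the set A, and the use of numpy are illustrative choices.

```python
import numpy as np

def fuzzy_rough_approximations(R, A):
    """Standard fuzzy rough lower/upper approximations of a fuzzy set A
    under a fuzzy relation R (n x n membership matrix):
        lower(x) = min_y max(1 - R(x, y), A(y))
        upper(x) = max_y min(R(x, y), A(y))
    """
    lower = np.min(np.maximum(1.0 - R, A[None, :]), axis=1)
    upper = np.max(np.minimum(R, A[None, :]), axis=1)
    return lower, upper

# Toy example: three objects, a fuzzy similarity relation, and a fuzzy set A.
R = np.array([[1.0, 0.7, 0.2],
              [0.7, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
A = np.array([0.9, 0.5, 0.1])
print(fuzzy_rough_approximations(R, A))
```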

3.
For over a decade, researchers in formal methods have tried to create formalisms that permit natural specification of systems and allow mathematical reasoning about their correctness. The availability of fully automated reasoning tools enables non-experts to use formal methods effectively—their responsibility reduces to specifying the model and expressing the desired properties. Thus, it is essential that these properties be represented in a language that is easy to use, sufficiently expressive and succinct. Linear-time temporal logic (LTL) is a formalism that has been used extensively by researchers for program specification and verification. One of the desired properties of LTL formulas is closure under stuttering. That is, we do not want the interpretation of formulas to change over traces where some states are repeated. This property is important from both practical and theoretical perspectives; all properties which are closed under stuttering can be expressed in LTL–X—a fragment of LTL without the next operator. However, it is often difficult to express properties in this fragment of LTL. Further, determining whether a given LTL property is closed under stuttering is PSPACE-complete. In this paper, we introduce a notion of edges of LTL formulas and present a formal theory of closure under stuttering. Edges allow natural modelling of systems with events. Our theory enables syntactic reasoning about whether the resulting properties are closed under stuttering. Finally, we apply the theory to the pattern-based approach of specifying temporal formulas.
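To make the notion of stuttering concrete, the small sketch below (an illustration under our own assumptions, not material from the paper) collapses repeated states in a trace; a property closed under stuttering must evaluate identically on any two traces with the same destuttered form.

```python
from itertools import groupby

def destutter(trace):
    """Collapse maximal runs of identical states, e.g. a a b b b c -> a b c."""
    return tuple(state for state, _ in groupby(trace))

t1 = ("idle", "idle", "busy", "busy", "busy", "done")
t2 = ("idle", "busy", "done")

# A stuttering-closed property cannot distinguish t1 from t2,
# since they destutter to the same trace.
print(destutter(t1) == destutter(t2))  # True
```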

4.
Probabilistic logic programming
Of all scientific investigations into reasoning with uncertainty and chance, probability theory is perhaps the best understood paradigm. Nevertheless, all studies conducted thus far into the semantics of quantitative logic programming have restricted themselves to non-probabilistic semantic characterizations. In this paper, we take a few steps towards rectifying this situation. We define a logic programming language that is syntactically similar to the annotated logics of Blair et al. (1987) and Blair and Subrahmanian (1988, pp. 45–73), but in which the truth values are interpreted probabilistically. A probabilistic model theory and fixpoint theory are developed for such programs. This probabilistic model theory satisfies the requirements proposed by Fenstad (in “Studies in Inductive Logic and Probabilities” (R. C. Jeffrey, Ed.), Vol. 2, pp. 251–262, Univ. of California Press, Berkeley, 1980) for a function to be called probabilistic. The logical treatment of probabilities is complicated by two facts: first, that the connectives cannot be interpreted truth-functionally when truth values are regarded as probabilities; second, that negation-free definite-clause-like sentences can be inconsistent when interpreted probabilistically. We address these issues here and propose a formalism for probabilistic reasoning in logic programming. To our knowledge, this is the first probabilistic characterization of logic programming semantics.
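The non-truth-functional point can be seen in a one-line example (our own illustration, not the paper's formalism): P(A ∧ B) is not determined by P(A) and P(B), only constrained to an interval by the Fréchet bounds.

```python
def conjunction_bounds(p_a, p_b):
    """Frechet bounds: P(A and B) is only pinned to an interval by P(A), P(B),
    so probabilistic connectives cannot be evaluated truth-functionally."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

print(conjunction_bounds(0.7, 0.6))  # (0.3, 0.6): any value in this range is consistent
```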

5.
Inference mechanisms about spatial relations constitute an important aspect of spatial reasoning as they allow users to derive unknown spatial information from a set of known spatial relations. When formalized in the form of algebras, spatial-relation inferences represent a mathematically sound definition of the behavior of spatial relations, which can be used to specify constraints in spatial query languages. Current spatial query languages utilize spatial concepts that are derived primarily from geometric principles, which do not necessarily match the concepts people use when they reason and communicate about spatial relations. This paper presents an alternative approach to spatial reasoning by starting with a small set of spatial operators that are derived from concepts closely related to human cognition. This cognitive foundation comes from the behavior of image schemata, which are cognitive structures for organizing people's experiences and comprehension. From the operations and spatial relations of a small-scale space, a container–surface algebra is defined with nine basic spatial operators: inside, outside, on, off; their respective converse relations contains, excludes, supports, separated_from; and the identity relation equal. The container–surface algebra was applied to spaces with objects of different sizes and its inferences were assessed through human-subject experiments. Discrepancies between the container–surface algebra and the human-subject testing appear for combinations of spatial relations that result in more than one possible inference depending on the relative size of objects. For configurations with small- and large-scale objects, larger discrepancies were found because people use relations such as “part of” and “at” in lieu of “in”. Basic concepts such as containers and surfaces seem to be a promising approach to define and derive inferences among spatial relations that are close to human reasoning.
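The nine operators and their converse pairing, as named in the abstract, can be written down as a small look-up table; the snippet below is only a data-structure sketch, not the algebra's inference rules.

```python
# The nine basic relations of the container-surface algebra, paired with their
# converses (equal is its own converse).
CONVERSE = {
    "inside": "contains",
    "outside": "excludes",
    "on": "supports",
    "off": "separated_from",
    "equal": "equal",
}
# Make the table symmetric so lookups work in both directions.
CONVERSE.update({v: k for k, v in list(CONVERSE.items())})

print(CONVERSE["contains"])  # inside
```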

6.
We consider a language for reasoning about probability which allows us to make statements such as “the probability of E1 is less than 1/3” and “the probability of E1 is at least twice the probability of E2,” where E1 and E2 are arbitrary events. We consider the case where all events are measurable (i.e., represent measurable sets) and the more general case, which is also of interest in practice, where they may not be measurable. The measurable case is essentially a formalization of (the propositional fragment of) Nilsson's probabilistic logic. As we show elsewhere, the general (nonmeasurable) case corresponds precisely to replacing probability measures by Dempster-Shafer belief functions. In both cases, we provide a complete axiomatization and show that the problem of deciding satisfiability is NP-complete, no worse than that of propositional logic. As a tool for proving our complete axiomatizations, we give a complete axiomatization for reasoning about Boolean combinations of linear inequalities, which is of independent interest. This proof and others make crucial use of results from the theory of linear programming. We then extend the language to allow reasoning about conditional probability and show that the resulting logic is decidable and completely axiomatizable, by making use of the theory of real closed fields.
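The linear-programming connection can be illustrated by checking whether probability constraints are jointly satisfiable by some distribution over possible worlds. The sketch below is our own assumption-level example (two atoms, four worlds, scipy's LP solver), not the paper's decision procedure; the strict inequality is relaxed for the LP.

```python
import numpy as np
from scipy.optimize import linprog

# Worlds over atoms p, q, in the order: ~p~q, ~pq, p~q, pq.
# E1 = "p holds" -> worlds 2, 3;   E2 = "q holds" -> worlds 1, 3.
A_ub = np.array([
    [0, 0, 1, 1],    # P(E1) <= 1/3  (strict "<" relaxed for feasibility check)
    [0, 2, -1, 1],   # 2*P(E2) - P(E1) <= 0, i.e. P(E1) >= 2*P(E2)
])
b_ub = np.array([1 / 3, 0.0])
A_eq = np.array([[1, 1, 1, 1]])   # world weights sum to 1
b_eq = np.array([1.0])

res = linprog(c=np.zeros(4), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 4)
print("satisfiable:", res.success)  # True: put most of the mass on the ~p worlds
```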

7.
In modern functional logic languages like Curry or Toy, programs are possibly non-confluent and non-terminating rewrite systems, defining possibly non-deterministic non-strict functions. Therefore, equational reasoning is not valid for deriving properties of such programs. In a previous work we showed how a mapping from CRWL (a well-known logical framework for functional logic programming) into logic programming could in principle be used as a logical conceptual tool for proving properties of functional logic programs. A severe problem faced in practice is that simple properties, even if they do not involve non-determinism, require difficult proofs when compared to those obtained using equational specifications and methods. In this work we improve our approach by taking into account determinism of (part of) the considered programs. This results in significant shortenings of proofs when we put our methods into practice using standard systems supporting equational reasoning such as Isabelle.

8.
We introduce a non-uniform subdivision algorithm that partitions the neighborhood of an extraordinary point in the ratio σ:1−σ, where σ ∈ (0,1). We call σ the speed of the non-uniform subdivision and verify C1 continuity of the limit surface. For σ=1/2, the Catmull–Clark limit surface is recovered. Other speeds are useful to vary the relative width of the polynomial spline rings generated from extraordinary nodes.
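As a simplified one-dimensional stand-in for the σ:1−σ split (our own illustration, not the surface scheme itself), the sketch below inserts a point on each edge of a control polygon at parameter σ; σ = 1/2 reproduces plain midpoint insertion.

```python
def split_edges(points, sigma=0.5):
    """Insert a new point on every edge at the ratio sigma : 1 - sigma.
    A 1D analogy for the non-uniform split; sigma = 0.5 gives midpoints."""
    out = []
    for a, b in zip(points, points[1:]):
        out.append(a)
        out.append(tuple((1 - sigma) * ai + sigma * bi for ai, bi in zip(a, b)))
    out.append(points[-1])
    return out

poly = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(split_edges(poly, sigma=0.25))
```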

9.
Many formalisms for reasoning about knowing commit an agent to be logically omniscient. Logical omniscience is an unrealistic principle for us to use to build a real-world agent, since it commits the agent to knowing infinitely many things. A number of formalizations of knowledge have been developed that do not ascribe logical omniscience to agents. With few exceptions, these approaches are modifications of the possible-worlds semantics. In this paper we use a combination of several general techniques for building non-omniscient reasoners. First we provide for the explicit representation of notions such as problems, solutions, and problem solving activities, notions which are usually left implicit in the discussions of autonomous agents. A second technique is to take explicitly into account the notion of resource when we formalize reasoning principles. We use the notion of resource to describe interesting principles of reasoning that are used for ascribing knowledge to agents. For us, resources are abstract objects. We make extensive use of ordering and inaccessibility relations on resources, but we do not find it necessary to define a metric. Using principles about resources without using a metric is one of the strengths of our approach. We describe the architecture of a reasoner, built from a finite number of components, who solves a puzzle, involving reasoning about knowing, by explicitly using the notion of resource. Our approach allows the use of axioms about belief ordinarily used in problem solving – such as axiom K of modal logic – without being forced to attribute logical omniscience to any agent. In particular we address the issue of how we can use resource-unbounded (e.g., logically omniscient) reasoning to attribute knowledge to others without introducing contradictions. We do this by showing how omniscient reasoning can be introduced as a conservative extension over resource-bounded reasoning.

10.
We show that for any behavioral Σ-specification ß there is an ordinary algebraic specification ß̃ over a larger signature, such that a model behaviorally satisfies ß iff it satisfies, in the ordinary sense, the Σ-theorems of ß̃. The idea is to add machinery for contexts and experiments (sorts, operations and equations), use it, and then hide it. We develop a procedure, called unhiding, which takes a finite ß and produces a finite ß̃. The practical aspect of this procedure is that one can use any standard equational inductive theorem prover to derive behavioral theorems, even if neither equational reasoning nor induction is sound for behavioral satisfaction.

11.
This paper compares the performance of three clustering tests––Rogerson R, Getis-Ord G, and Lin-Zeng LR-T––using a range of simulated sample distributions from rare to common spatial events. It is shown that all of the tests are sensitive to high-value clustering, and all but G are sensitive to low-value clustering. For a spatial pattern exhibiting negative spatial autocorrelation, R is likely to associate the autocorrelation with clustering when the sample size is greater than 20, while LR-T and G are unlikely to accept any presence of negative autocorrelation as clustering.
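For orientation, the snippet below computes the global Getis-Ord G statistic in its commonly cited textbook form; the formula, weights matrix, and data are our own assumptions for illustration and are not taken from this paper's simulation design.

```python
import numpy as np

def general_g(x, w):
    """Global Getis-Ord G in its textbook form:
    G = sum_{i != j} w_ij * x_i * x_j / sum_{i != j} x_i * x_j."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    np.fill_diagonal(w, 0.0)          # exclude i == j terms
    outer = np.outer(x, x)
    np.fill_diagonal(outer, 0.0)
    return np.sum(w * outer) / np.sum(outer)

# Toy example: 4 locations on a line with binary contiguity weights.
x = [1.0, 9.0, 8.0, 1.0]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(general_g(x, w))
```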

12.
To address the limited capacity for multi-level analysis and the lack of fuzzy treatment in product function modeling for conceptual design, the theory of quotient space and the universal triple-I fuzzy reasoning method are introduced, and a function modeling algorithm based on universal triple-I fuzzy reasoning is proposed. First, a product function granular model based on quotient space theory is built, and its function granular representation and computing rules are defined. Second, to derive the function granular model quickly from function requirements, a function modeling method based on universal triple-I fuzzy reasoning is put forward; within it, a small-distance-activating method is proposed as the kernel of the fuzzy reasoning, and the conversion of function requirements into fuzzy ones, the fuzzy computing methods, and the fuzzy reasoning strategy are each investigated, yielding the complete function modeling algorithm. Finally, the validity of the function granular model and of the modeling algorithm is verified. With this method, a reasonable function granular model can be obtained quickly from function requirements and the fuzzy character of conceptual design is handled well, which substantially improves conceptual design.
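As a generic illustration of the "crisp requirement to fuzzy requirement" step mentioned above, the sketch below fuzzifies a normalized requirement value with triangular membership functions; the term labels and membership shapes are hypothetical, not the paper's definitions, and this is not the universal triple-I inference itself.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms for a normalized requirement value in [0, 1].
TERMS = {"low": (-0.01, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.01)}

def fuzzify(value):
    """Map a crisp requirement value to membership degrees over the terms."""
    return {name: round(triangular(value, *p), 3) for name, p in TERMS.items()}

print(fuzzify(0.7))  # {'low': 0.0, 'medium': 0.6, 'high': 0.4}
```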

13.
14.
Parallelization of deduction strategies: An analytical study
In this paper we present a general analysis of the parallelization of deduction strategies. We classify strategies as subgoal-reduction strategies, expansion-oriented strategies, and contraction-based strategies. For each class we analyze how and what types of parallelism can be utilized. Since the operational semantics of deduction-based programming languages can be construed as subgoal-reduction strategies, our analysis encompasses, at the abstract level, both strategies for deduction-based programming and those for theorem proving. We distinguish different types of parallel deduction based on the granularity of parallelism. These two criteria — the classification of strategies and of types of parallelism — provide us with a framework to treat problems and with a grid to classify approaches to parallel deduction. Within this framework, we analyze many issues, including the dynamicity and size of the database of clauses during the derivation, the possibility of conflicts between parallel inferences, and duplication versus sharing of clauses. We also suggest the type of architectures that may be suitable for each class of strategies. We substantiate our analysis by describing existing methods, emphasizing parallel expansion-oriented strategies and parallel contraction-based strategies for theorem proving. The most interesting, and the least explored by existing approaches, are the contraction-based strategies. The presence of contraction rules — rules that delete clauses — and especially the application of backward contraction emerges as a key issue for the parallelization of these strategies. Backward contraction is the main reason for the impressive experimental success of contraction-based strategies. Our analysis shows that backward contraction makes efficient parallelization much more difficult. In our analysis, coarse-grain parallelism appears to be the best choice for parallelizing contraction-based reasoning. Accordingly, we propose a notion of parallelism at the search level as coarse-grain parallelism for deduction.
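The flavor of coarse-grain, search-level parallelism can be conveyed with a minimal sketch (our own generic example, not the paper's method): whole branches of a search are farmed out to worker processes, so workers interact rarely and no shared clause database has to be kept consistent per inference.

```python
from concurrent.futures import ProcessPoolExecutor

def explore_branch(seed_clause):
    """Stand-in for independently exploring one branch of the search space;
    here it just reports a trivial 'result' for the seed clause."""
    return f"closed branch starting from {seed_clause}"

if __name__ == "__main__":
    branches = ["C1", "C2", "C3", "C4"]
    # Coarse-grain parallelism: whole branches, not single inferences,
    # are distributed across workers.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(explore_branch, branches):
            print(result)
```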

15.
This paper presents a framework for giving a compositional theory of Petri nets using category theory. An integral part of our approach is the use of linear logic in specifying and reasoning about Petri nets. We construct categories of nets based on V. C. V. de Paiva's dialectica category models of linear logic [in "Proc. Category Theory and Computer Science, Manchester" (D. H. Pitt, D. E. Rydeheard, P. Dybjer, A. M. Pitts, and A. Poigné, Eds.), Lecture Notes in Computer Science, Vol. 389, Springer-Verlag, Berlin/New York, 1989] and exploit the structure of de Paiva's models to give constructions on nets. We compare our categories of nets with others in the literature, and show how some of the most widely-studied categories can be expressed within our framework. Taking a category of elementary nets as an example we show how this approach yields both existing and novel constructions on nets and discuss their computational interpretation.

16.
In this paper, we describe a granular algorithm for translating information between two granular worlds, represented as fuzzy rulebases. These granular worlds are defined on the same universe of discourse, but employ different granulations of this universe. In order to translate information from one granular world to the other, we must regranulate the information so that it matches the information granularity of the target world. This is accomplished through the use of a first-order interpolation algorithm, implemented using linguistic arithmetic, a set of elementary granular computing operations. We first demonstrate this algorithm by studying the common “fuzzy-PD” rulebase at several different granularities, and conclude that the “3 × 3” granulation may be too coarse for this objective. We then examine the question of what the “natural” granularity of a system might be; this is studied through a 10-fold cross-validation experiment involving three different granulations of the same underlying mapping. For the problem under consideration, we find that a 7 × 7 granulation appears to be the minimum necessary precision.
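As a crude numeric stand-in for regranulation (our own assumption, using plain linear interpolation rather than the paper's linguistic arithmetic), the sketch below refines a 3×3 rule surface to a 7×7 grid.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A coarse 3x3 rule surface (e.g. consequent centers of a fuzzy-PD-style
# rulebase) over normalized error / change-of-error axes.
coarse_axis = np.array([-1.0, 0.0, 1.0])
coarse_rules = np.array([[-1.0, -0.5, 0.0],
                         [-0.5,  0.0, 0.5],
                         [ 0.0,  0.5, 1.0]])

# Regranulate to a 7x7 grid by first-order (linear) interpolation.
interp = RegularGridInterpolator((coarse_axis, coarse_axis), coarse_rules)
fine_axis = np.linspace(-1.0, 1.0, 7)
pts = np.array([[e, de] for e in fine_axis for de in fine_axis])
fine_rules = interp(pts).reshape(7, 7)
print(np.round(fine_rules, 2))
```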

17.
Qualitative temporal and spatial reasoning is in many cases based on binary relations such as before, after, starts, contains, contact, part of, and others derived from these by relational operators. The calculus of relation algebras is an equational formalism; it tells us which relations must exist, given several basic operations, such as Boolean operations on relations, relational composition and converse. Each equation in the calculus corresponds to a theorem, and, for a situation where there are only finitely many relations, one can construct a composition table which can serve as a look-up table for the relations involved. Since the calculus handles relations, no knowledge about the concrete geometrical objects is necessary. In this sense, relational calculus is pointless. Relation algebras were introduced into temporal reasoning by Allen (1983, Communications of the ACM 26(11), 832–843) and into spatial reasoning by Egenhofer and Sharma (1992, Fifth International Symposium on Spatial Data Handling, Charleston, SC). The calculus of relation algebras is also well suited to handle binary constraints, as demonstrated, e.g., by Ladkin and Maddux (1994, Journal of the ACM 41(3), 435–469). In the present paper I will give an introduction to relation algebras, and an overview of their role in qualitative temporal and spatial reasoning.
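The "composition table as look-up table" idea can be sketched directly in code. The fragment below (our own illustration) records a few standard Allen interval compositions, such as before;before = before and meets;meets = before, and looks them up; it is not a complete table.

```python
# A tiny fragment of an Allen-style composition table, used as a look-up:
# given r1(A, B) and r2(B, C), the table gives the possible relations r(A, C).
COMPOSE = {
    ("before", "before"): {"before"},
    ("meets", "meets"): {"before"},
    ("during", "during"): {"during"},
    ("equal", "before"): {"before"},
}

def infer(r1, r2):
    """Look up the composition r1 ; r2, if this fragment covers it."""
    return COMPOSE.get((r1, r2), "not covered by this fragment")

print(infer("meets", "meets"))  # {'before'}
```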

18.
The Murnaghan–Nakayama rule is the classical formula for computing the character table of Sn. Y. Roichman (Adv. Math. 129 (1997) 25) has recently discovered a rule for the Kazhdan–Lusztig characters of q Hecke algebras of type A, which can also be used for the character table of Sn. For each of the two rules, we give an algorithm for computing entries in the character table of Sn. We then analyze the computational complexity of the two algorithms, and in the case of characters indexed by partitions in the (k,ℓ) hook, compare their complexities to each other. It turns out that the algorithm based on the Murnaghan–Nakayama rule requires far fewer operations than the other algorithm. We note the relation of the algorithms' complexities to two enumeration problems of Young diagrams and Young tableaux.
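For concreteness, the sketch below evaluates characters of Sn with the Murnaghan–Nakayama recursion phrased in terms of beta-sets (first-column hook lengths); this is a standard formulation offered as an illustration, not the specific algorithm whose complexity the paper analyzes.

```python
def mn_character(lam, mu):
    """Character chi^lam of S_n on the class mu, via the Murnaghan-Nakayama
    recursion: removing a k-rim-hook corresponds to lowering one beta number
    by k, with sign (-1)^(number of beta numbers jumped over)."""
    n = len(lam)
    beta = tuple(sorted(lam[i] + (n - 1 - i) for i in range(n)))
    return _mn(beta, tuple(mu))

def _mn(beta, mu):
    if not mu:
        return 1
    k, rest = mu[0], mu[1:]
    bset, total = set(beta), 0
    for b in beta:
        if b >= k and (b - k) not in bset:
            height = sum(1 for c in beta if b - k < c < b)
            new_beta = tuple(sorted((bset - {b}) | {b - k}))
            total += (-1) ** height * _mn(new_beta, rest)
    return total

print(mn_character((2, 1), (1, 1, 1)))  # 2  (the dimension of the (2,1) module)
print(mn_character((2, 1), (3,)))       # -1
```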

19.
Temporal Granularity: Completing the Puzzle
Granularity is an integral feature of both anchored (e.g., 25 October 1995, July 1996) and unanchored (e.g., 3 minutes, 6 hours 20 minutes, 5 days) temporal data. In supporting temporal data that is specified in different granularities, numerous approaches have been proposed to deal with the issues of converting temporal data from one granularity to another. The emphasis, however, has only been on granularity conversions with respect to anchored temporal data. In this paper we provide a novel approach to the treatment of granularity in temporal data. A granularity is modeled as a special kind of unanchored temporal primitive that can be used as a unit of time. That is, a granularity is modeled as a unit unanchored temporal primitive. We show how unanchored temporal data is represented, give procedures for converting the data to a given granularity, provide canonical forms for the data, and describe how operations between the data are performed. We also show how anchored temporal data is represented at different granularities and give the semantics of operations on anchored temporal data.
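A minimal sketch of the "granularity as a unit" view follows, converting unanchored durations between granularities; the unit table and the integer-division rounding are our own assumptions, not the paper's canonical forms.

```python
# Each granularity is treated as a unit unanchored primitive, expressed here
# by its length in a common base granularity (minutes).
MINUTES_PER = {"minute": 1, "hour": 60, "day": 24 * 60}

def to_granularity(amount, src, dst):
    """Convert an unanchored duration between granularities. Converting to a
    coarser granularity loses precision (hence the remainder), which is why
    such conversions need care in a temporal data model."""
    in_minutes = amount * MINUTES_PER[src]
    return in_minutes // MINUTES_PER[dst], in_minutes % MINUTES_PER[dst]

print(to_granularity(380, "minute", "hour"))   # (6, 20): 6 hours 20 minutes
print(to_granularity(5, "day", "hour"))        # (120, 0)
```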

20.
Soft computing is an interdisciplinary area that focuses on the design of intelligent systems to process uncertain, imprecise and incomplete information. It mainly builds on fuzzy set theory, fuzzy logic, neural computing, optimization, evolutionary algorithms, and approximate reasoning, among others. Information granularity is in general regarded as a crucial design asset, which helps establish a better rapport of the resulting granular model with the system under modeling. Human centricity is an inherent property of people's view on a system, a process, a machine or a model. Information granularity can be used to reflect people's level of uncertainty, and this gives it a pivotal role in soft computing. Indeed, the concept of information granularity facilitates the development of the theory and applications of soft computing immensely. A number of papers pertaining to recent advances in the theoretical development and practical application of information granularity in soft computing are highlighted in this special issue. The main objective of this study is to collect as much research as possible on human centricity and information granularity in the agenda of theories and applications of soft computing, review the main ideas of this literature, compare the advantages and disadvantages of the methods, and identify the relationships and relevance among these theories and applications.
