Similar documents (20 results)
1.
Sugeno-type fuzzy models are used frequently in system modeling. The idea of information granulation inherently arises in the design process of the Sugeno-type fuzzy model, and information granulation is closely related to the information granules being developed. In this paper, a design method for the Sugeno-type granular model is proposed on the basis of an optimal allocation of information granularity. The overall design process starts with a well-established Sugeno-type numeric fuzzy model (the original Sugeno-type model). By soundly assigning information granularity to the related parameters of the antecedents and conclusions of the fuzzy rules of the original Sugeno-type model (i.e., granulating these parameters so that an optimal allocation of information granularity is realized), the original Sugeno-type model is extended to its granular counterpart (the granular model). Several protocols of optimal allocation of information granularity are also discussed. The obtained granular model is applied to forecast three real-world time series. The experimental results show that the method of designing the Sugeno-type granular model offers advantages, yielding models of good prediction capabilities. Furthermore, the results also show the merits of the Sugeno-type granular model: (1) the output of the model is an information granule (an interval granule) rather than a specific numeric entity, which facilitates further interpretation; (2) the model provides much more flexibility than the original Sugeno-type model; (3) the construction approach is general in nature, as it could be applied to various fuzzy models and realized by invoking different formalisms of information granules.

2.
Fuzzy rule-based models are widely used in many fields, yet existing models rely mainly on numeric performance indices and neglect the evaluation of the fuzzy sets themselves. A new method for assessing the performance of fuzzy rule-based models is therefore proposed, one that can effectively evaluate the non-numeric (granular) nature of the model's outputs. Unlike the commonly used numeric performance indices (such as the mean squared error, MSE), the method characterizes the quality of the granular outputs through features of information granules, and this index is then used in the performance optimization of the fuzzy model. Granule quality is quantified by two basic indices, coverage (of the data) and specificity (of the information granules themselves), and the granular output quality, expressed as the product of coverage and specificity, is maximized by means of particle swarm optimization. In addition, the method optimizes the distribution of the information granules formed by fuzzy clustering. Experimental results demonstrate the effectiveness of this index for evaluating the performance of fuzzy rule-based models.
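The coverage–specificity product used above as the granular quality index can be sketched for a simple interval granule. This is a minimal illustration: the function names and the linear decay of specificity with interval width are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def coverage(interval, data):
    """Fraction of data points falling inside the interval granule."""
    lo, hi = interval
    return np.mean((data >= lo) & (data <= hi))

def specificity(interval, data):
    """Assumed model: specificity decays linearly with interval width;
    1 for a degenerate (single-point) interval, 0 for the full data range."""
    lo, hi = interval
    rng = data.max() - data.min()
    return max(0.0, 1.0 - (hi - lo) / rng)

def granule_quality(interval, data):
    """Product of coverage and specificity -- the objective that the
    paper maximizes (there, via particle swarm optimization)."""
    return coverage(interval, data) * specificity(interval, data)

data = np.array([1.0, 2.0, 2.5, 3.0, 4.0, 9.0])
print(granule_quality((1.0, 4.0), data))
```

Widening the interval raises coverage but lowers specificity, so the product rewards granules that are both well supported by the data and tight.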

3.
4.
Linguistic models and linguistic modeling
The study is concerned with a linguistic approach to the design of a new category of fuzzy (granular) models. In contrast to numerically driven identification techniques, we concentrate on building meaningful linguistic labels (granules) in the space of experimental data and forming the ensuing model as a web of associations between such granules. As such models are designed at the level of information granules and generate results in the same granular rather than purely numeric format, we refer to them as linguistic models. Furthermore, as there are no detailed numeric estimation procedures involved in the construction of the linguistic models carried out in this way, their design mode can be viewed as one of rapid prototyping. The underlying algorithm used in the development of the models utilizes an augmented version of the clustering technique (context-based clustering) that is centered around a notion of linguistic contexts: a collection of fuzzy sets or fuzzy relations defined in the data space (more precisely, a space of input variables). The detailed design algorithm is provided and contrasted with the standard modeling approaches commonly encountered in the literature. The usefulness of the linguistic mode of system modeling is discussed and illustrated with the aid of numeric studies involving both synthetic data and time series dealing with modeling traffic intensity over a broadband telecommunication network.

5.
A lot of research has resulted in many time series models with high-precision forecasting realized at the numerical level. However, in the real world, high numerical precision may not be necessary for human perception, reasoning, and decision-making. A time series model endowed with the human ability to perceive and process abstract entities (rather than numeric ones) is more adaptable for some decision-making problems. In this regard, information granules and granular computing play a primordial role. For example, if the change range (interval) of stock prices for a certain period in the future is regarded as an information granule, a model that forecasts change ranges (intervals) of stock prices is better able to help stock investors make reasonable decisions than one based on a specific forecast numerical value of the stock price. In this paper, we propose a new modeling approach to realize interval prediction, in which the idea of information granules and granular computing is integrated with the classical Chen's method. The proposed method first segments an original numeric time series into a collection of time windows, and then builds fuzzy granules, expressed as fuzzy sets, over each time window by exploiting the principle of justifiable granularity. Finally, a fuzzy granular model is constructed by mining fuzzy logical relationships of adjacent granules. The constructed model can carry out interval prediction by a degranulation operation. Two benchmark time series are used to validate the feasibility and effectiveness of the proposed approach, and the obtained results demonstrate its effectiveness. Besides, for the modeling and prediction of large-scale time series, the proposed approach exhibits a clear advantage in reducing the computational overhead of modeling and simplifying forecasting.
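The principle of justifiable granularity invoked above can be sketched for interval granules: choose bounds around a representative value of the window so that the coverage × specificity product is maximized. The brute-force scan over candidate bounds, the linear specificity model, and the function name below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def justifiable_interval(window):
    """Build an interval granule [a, b] around the window's median by
    scanning candidate bounds drawn from the data and maximizing
    coverage * specificity (a sketch of justifiable granularity)."""
    med = np.median(window)
    rng = window.max() - window.min()
    if rng == 0:
        return med, med
    best, best_q = (med, med), -1.0
    for a in np.unique(window[window <= med]):
        for b in np.unique(window[window >= med]):
            cov = np.mean((window >= a) & (window <= b))   # data covered
            spec = 1.0 - (b - a) / rng                      # tightness
            q = cov * spec
            if q > best_q:
                best_q, best = q, (a, b)
    return best

window = np.array([10.0, 11.0, 11.5, 12.0, 15.0])
print(justifiable_interval(window))
```

The outlier 15.0 is excluded: covering it would drive specificity (and hence the product) to zero.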

6.
In this paper, we explore the distributivity of implication operators (especially residuated (R)- and strong (S)-implications) over t-norms and t-conorms (s-norms). The motivation behind this work is the ongoing discussion of the law [(p ∧ q) → r] ≡ [(p → r) ∨ (q → r)] in fuzzy logic, as given in the title of the paper by Trillas and Alsina. The above law is only one of the four basic distributive laws; its general form is J(T(p, q), r) ≡ S(J(p, r), J(q, r)). Similarly, the other three basic distributive laws can be generalized to give equations concerning the distribution of fuzzy implications J over t-norms and s-norms. In this paper, we study the validity of these equations under various conditions on the implication operator J. We also propose some sufficient conditions on a binary operator under which the general distributive equations reduce to the basic distributive equations and are satisfied. In addition, we solve one of the open problems posed by M. Baczyński (2002).
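The generalized law J(T(p, q), r) ≡ S(J(p, r), J(q, r)) can be checked numerically for one concrete choice of operators. The sketch below uses the Łukasiewicz implication with T = min and S = max; this particular instance holds because the implication is nonincreasing in its first argument (the function names are illustrative, not from the paper).

```python
import itertools

def luk_implication(a, b):
    """Łukasiewicz implication: I(a, b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def t_min(a, b):   # minimum t-norm
    return min(a, b)

def s_max(a, b):   # maximum s-norm (t-conorm)
    return max(a, b)

# Check J(T(p, q), r) == S(J(p, r), J(q, r)) on a grid of truth values.
grid = [i / 10 for i in range(11)]
for p, q, r in itertools.product(grid, repeat=3):
    lhs = luk_implication(t_min(p, q), r)
    rhs = s_max(luk_implication(p, r), luk_implication(q, r))
    assert abs(lhs - rhs) < 1e-9
print("distributive law holds on the grid")
```

For other pairings of T, S, and J the law can fail, which is exactly what makes the conditions studied in the paper nontrivial.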

7.
Recursive information granulation: aggregation and interpretation issues
This paper contributes to the conceptual and algorithmic framework of information granulation. We revisit the role of information granules that are relevant to several main classes of technical pursuits involving temporal and spatial granulation. A detailed algorithm of information granulation, regarded as an optimization problem reconciling two conflicting design criteria, namely, a specificity of information granules and their experimental relevance (coverage of numeric data), is provided in the paper. The resulting information granules are formalized in the language of set theory (interval analysis). The uniform treatment of data points and data intervals (sets) allows for a recursive application of the algorithm. We assess the quality of information granules through application of the fuzzy c-means (FCM) clustering algorithm. Numerical studies deal with two-dimensional (2D) synthetic data and experimental traffic data.

8.
Owing to their inherent nature, terrorist activities can be highly diversified. Risk assessment becomes a crucial component, as it helps us weigh the pros and cons of possible actions or planning pursuits. The recognition of threats and their relevance/seriousness is an integral part of the overall process of classification, recognition, and assessment of eventual actions undertaken in the presence of acts of chem-bio terrorism. In this study, we introduce an overall scheme of risk assessment realized on the basis of classification results produced for experimental data capturing the history of previous threat cases. The structural relationships in these experimental data are first revealed with the help of information granulation, specifically fuzzy clustering. We introduce two criteria with which information granules are evaluated: (a) representation capabilities, which concern the quality of representation of numeric data by abstract constructs such as information granules, and (b) interpretation aspects, which are essential in the process of risk evaluation. For the representation facet of information granules, we demonstrate how a reconstruction criterion quantifies their quality. Three ways in which interpretability is enhanced are studied. First, we show how to construct information granules with extended cores (where the uncertainty associated with risk evaluation can be reduced) and shadowed sets, which provide a three-valued-logic perspective on information granules given in the form of fuzzy sets. Subsequently, we show a way of interpreting fuzzy sets via an optimized set of their α-cuts.

9.
On the representation of intuitionistic fuzzy t-norms and t-conorms
Intuitionistic fuzzy sets form an extension of fuzzy sets: while fuzzy sets give a degree to which an element belongs to a set, intuitionistic fuzzy sets give both a membership degree and a nonmembership degree. The only constraint on those two degrees is that their sum must be smaller than or equal to 1. In fuzzy set theory, an important class of triangular norms and conorms is the class of continuous Archimedean nilpotent triangular norms and conorms. It has been shown that for such t-norms T there exists a permutation φ of [0, 1] such that T is the φ-transform of the Łukasiewicz t-norm. In this paper we introduce the notion of intuitionistic fuzzy t-norm and t-conorm, and investigate under which conditions a similar representation theorem can be obtained.
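The φ-transform representation mentioned above can be sketched directly: for an increasing bijection φ of [0, 1], the transform of the Łukasiewicz t-norm T_L(a, b) = max(0, a + b − 1) is T(a, b) = φ⁻¹(T_L(φ(a), φ(b))). A minimal sketch with the illustrative choice φ(x) = x² (the function names are assumptions for this example):

```python
import math

def phi_transform_tnorm(phi, phi_inv):
    """Return the phi-transform of the Łukasiewicz t-norm:
    T(a, b) = phi_inv(max(0, phi(a) + phi(b) - 1)),
    where phi is an increasing bijection of [0, 1]."""
    def T(a, b):
        return phi_inv(max(0.0, phi(a) + phi(b) - 1.0))
    return T

# Example: phi(x) = x**2, so phi_inv(x) = sqrt(x).
T = phi_transform_tnorm(lambda x: x * x, math.sqrt)

# The transform preserves the t-norm properties, e.g.:
assert abs(T(0.8, 1.0) - 0.8) < 1e-9       # 1 is the neutral element
assert abs(T(0.6, 0.7) - T(0.7, 0.6)) < 1e-9  # commutativity
```

Different choices of φ generate the whole class of continuous Archimedean nilpotent t-norms from the single Łukasiewicz prototype.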

10.
Information granules form an abstract and efficient characterization of large volumes of numeric data, and fuzzy clustering is a commonly encountered information granulation approach. Reconstruction (degranulation) is about decoding information granules back into numeric data. In this study, to enhance the quality of reconstruction, we augment the generic data reconstruction approach by introducing a transformation mapping of the originally produced partition matrix and setting up an adjustment mechanism modifying the localization of the prototypes. We engage several population-based search algorithms to optimize the interaction matrices and prototypes. A series of experimental results dealing with both synthetic and publicly available data sets are reported to show the enhancement of data reconstruction performance provided by the proposed method.
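The generic degranulation step that this work augments is the standard FCM reconstruction formula, x̂ₖ = Σᵢ uᵢₖᵐ vᵢ / Σᵢ uᵢₖᵐ. The sketch below shows only this baseline, not the paper's augmented variant with transformation mappings and prototype adjustment.

```python
import numpy as np

def fcm_reconstruct(U, V, m=2.0):
    """Generic FCM degranulation: reconstruct data from a partition
    matrix U (c x N) and prototypes V (c x d) as the weighted mean
    x_hat_k = sum_i u_ik^m * v_i / sum_i u_ik^m."""
    W = U ** m                                   # fuzzified memberships
    return (W.T @ V) / W.sum(axis=0, keepdims=True).T

# Toy example: two 1-D prototypes, three data points.
V = np.array([[0.0], [10.0]])
U = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
print(fcm_reconstruct(U, V))
```

Points assigned fully to one cluster decode exactly to its prototype; the ambiguous middle point decodes to the membership-weighted midpoint, which is precisely the reconstruction error the paper seeks to reduce.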

11.
This study elaborates on the role of information granularity in the development of fuzzy controllers. As opposed to the numeric data commonly accepted by fuzzy controllers, we discuss a general processing framework involving data-information granules exhibiting various levels of information granularity. The paper analyzes the impact of information granularity on the performance of the controller. We study the way in which information granules arise in control problems, elaborate on a way of describing these granules, and provide a way of quantifying the level of information granularity. A number of analysis and design issues are studied, including the robustness of the fuzzy controller, the representation of linguistic information, and the quantification of its granularity. Nonlinear characteristics of the compiled version of the fuzzy controller operating in the presence of granular information are discussed in detail. Illustrative numerical examples are provided as well. ©1999 John Wiley & Sons, Inc.

12.
In this study, we introduce a concept of a granular input space in system modeling, in particular in fuzzy rule-based modeling. The underlying problem can be succinctly formulated in the following way: given a numeric model, develop an efficient way of forming granular input variables so that the corresponding granular outputs of the model achieve the highest level of specificity. The rationale behind this formulation is offered along with several illustrative examples. In conjunction with the underlying idea, an algorithmic framework is developed that supports the optimization of the specificity of the model exposed to granular inputs (data). It dwells upon one of the principles of Granular Computing, namely an optimal allocation of information granularity. For illustrative purposes, the study focuses on information granules formalized in terms of intervals (however, the proposed approach is equally relevant for other formalisms of information granules). A comparative analysis with the existing idea of global sensitivity analysis is also carried out by contrasting the essential differences between the two approaches and analyzing the results of computational experiments.

13.
In this study, we introduce a concept of granular worlds and elaborate on various representation and communication issues arising therein. A granular world embodies a collection of information granules regarded as generic conceptual entities used to represent knowledge and handle problem solving. Granular computing is a paradigm supporting knowledge representation, coping with complexity, and facilitating the interpretation of processing; in this sense, it is crucial to all man-machine pursuits, and to data mining and intelligent data analysis in particular. Two essential facets are inherently associated with any granular world: the formalism used to describe and manipulate information granules, and the granularity of the granules themselves (roughly speaking, by granularity we mean a "size" of such information granules; its detailed definition depends upon the formal setting of the granular world). There are numerous formal models of granular worlds, ranging from set-theoretic developments (including sets, fuzzy sets, and rough sets) to probabilistic counterparts (random sets, random variables, and the like). In light of the evident diversity of granular worlds (occurring both in terms of the underlying formal settings and the levels of granularity), we elaborate on their possible interaction and identify the implications of such communication. More specifically, we cast these in the form of the interoperability problem associated with the representation of information granules. © 2000 John Wiley & Sons, Inc.

14.
As the use of nonclassical logics becomes increasingly important in computer science, artificial intelligence, and logic programming, the development of efficient automated theorem proving based on nonclassical logic is currently an active area of research. This paper is aimed at the resolution principle for Pavelka-type fuzzy logic (1979). Pavelka showed that the only natural way of formalizing fuzzy logic for truth values in the unit interval [0, 1] is by using the Łukasiewicz implication operator a → b = min{1, 1 - a + b} or some isomorphic form of it. Hence, we first focus on the resolution principle for the Łukasiewicz logic Lℵ with [0, 1] as the truth-value set. Some limitations of classical resolution and of resolution procedures for fuzzy logic with Kleene implication are analyzed. Then some preliminary ideas about combining the resolution procedure with the implication connectives in Lℵ are given. Moreover, a resolution-like principle in Lℵ is proposed and the soundness theorem of this resolution procedure is proved. Second, we apply this resolution-like principle to Horn clauses with truth values in an enriched residuated lattice and consider L-type fuzzy Prolog.

15.
Feature analysis and feature selection are fundamental pursuits in pattern recognition. We revisit and generalize the issue of feature selection by introducing a mechanism of soft (fuzzy) feature selection. The underlying idea is to consider features to be granular rather than numeric. By varying the level of granularity, we modify the level of contribution of a specific feature to the overall feature space. We admit an interval model of the features, meaning that their values assume the form of numeric intervals. This intervalization of the features exhibits a clear-cut interpretation. Moreover, the contribution of the features to the formation of the feature space can be easily controlled: the broader the interval, the less essential the contribution of the feature to the entire feature space. In the limit, when the interval gets broad enough, one may view the feature as completely eliminated (dropped) from the feature space. The quantification of the features in terms of their importance is realized in the setting of the FCM clustering model (namely, a process of binary or fuzzy feature selection is carried out and numerically quantified in the space of membership values generated by fuzzy clusters). As the focal point of this study concerns an interval-like form of information granules, we reveal how such feature intervalization helps approximate fuzzy sets described by any type of membership function. Detailed computations give rise to a detailed quantification of such granular features. Numerical experiments provide a comprehensive illustration of the problem.

16.
In this paper, we develop a granular input space for neural networks, especially for multilayer perceptrons (MLPs). Unlike a conventional neural network, a neural network with granular input is an augmented construct built on the basis of a well-learned numeric neural network. We explore an efficient way of forming granular input variables so that the corresponding granular outputs of the neural network achieve the highest values of the criteria of specificity (and support). When we augment neural networks by distributing information granularity across input variables, the output of the network exhibits different levels of sensitivity to different input variables. Capturing the relationship between input variables and the output is of great help in mining knowledge from the data; in this way, important features of the data can be easily found. As an essential design asset, information granules are considered in this construct. The quantification of information granules is viewed in terms of levels of granularity, which are given by the expert. The detailed optimization procedure for the allocation of information granularity is realized by an improved partheno-genetic algorithm (IPGA). The proposed algorithm is shown to be effective through numeric studies completed for synthetic data and data coming from the machine learning and StatLib repositories. Moreover, the experimental studies offer a deep insight into the specificity of input features.

17.
18.
In this paper, we introduce a model of generalization and specialization of information granules. The information granules themselves are modeled as fuzzy sets or fuzzy relations. Generalization is realized by or-ing fuzzy sets, while specialization is completed through the logical and operation. These two logic operators are realized using triangular norms (that is, t-norms and s-norms). We elaborate on two (top-down and bottom-up) strategies of constructing information granules that arise as results of generalization and specialization. Various triangular norms are experimented with, and some conclusions based on numeric studies are derived.

19.
Information granules, such as fuzzy sets, capture essential knowledge about data and the key dependencies between them. Quite commonly, information granules (fuzzy sets) come about as a result of fuzzy clustering and can therefore be succinctly represented in the form of fuzzy partition matrices. Interestingly, the same data set can be represented from various standpoints, and this multifaceted view yields a collection of different partition matrices reflective of higher-order granular knowledge about the data. The levels of specificity of the clusters the data are organized into can be quite different: the larger the number of clusters, the more detailed the insight into the structure of the data. Given the granularity of the resulting constructs (rather than the plain data themselves), one can view a collection of partition matrices as a certain type of network of knowledge. Considering the variety of sources of knowledge encountered across the network, we are interested in forming a consensus between them. In a nutshell, this leads to the construction of fuzzy partition matrices which "reconcile" the knowledge captured by the individual partition matrices. Given that the granularity of the sources of knowledge under consideration can vary quite substantially, we develop a unified optimization perspective by introducing fuzzy proximity matrices induced by the corresponding partition matrices. In the sequel, the optimization is realized on the basis of these proximity matrices. We offer a detailed algorithm and illustrate its performance using a series of numeric experiments.
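A common way to induce a proximity matrix from a partition matrix, so that partitions with different numbers of clusters become comparable, is prox(k, l) = Σᵢ min(uᵢₖ, uᵢₗ). Whether this is the exact induction used in the paper is an assumption; the sketch below is illustrative only.

```python
import numpy as np

def induced_proximity(U):
    """Proximity matrix induced by a partition matrix U (c x N):
    prox[k, l] = sum_i min(u_ik, u_il).  The diagonal equals 1
    whenever the columns of U sum to 1 (a fuzzy partition)."""
    c, N = U.shape
    P = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            P[k, l] = np.minimum(U[:, k], U[:, l]).sum()
    return P

# Two clusters, three data points: the middle point is shared equally.
U = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
print(induced_proximity(U))
```

Because the proximity matrix is N × N regardless of the number of clusters c, proximities induced from partitions of different granularity can be compared and reconciled within a single optimization.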

20.