Similar Documents
1.
In this study, we introduce and discuss a concept of knowledge transfer in system modeling. In a nutshell, knowledge transfer is about forming ways in which a source of knowledge (namely, an existing model) can be used in the presence of new, very limited experimental evidence. Owing to the nature of the problem at hand (a situation encountered quite commonly, e.g., in project cost estimation), the new data could be very limited, and this scarcity makes them insufficient to construct a new model. At the same time, the new data originate from a similar (but not the same) phenomenon (process) as the one for which the original model has been constructed, so the existing model, even though it could be applied, has to be treated with a certain level of reservation. Such situations are encountered, e.g., in software engineering where, in spite of existing similarities, each project, process, or product exhibits its own unique characteristics. Taking this into consideration, the existing model is generalized (abstracted) by forming its granular counterpart – a granular model whose parameters are regarded as information granules rather than numeric entities, viz. their non-numeric (granular) version is formed on the basis of the values of the numeric parameters present in the original model. The results produced by the granular model are also granular and in this manner become reflective of the differences between the current phenomenon and the process for which the previous model was formed. In this study on knowledge transfer and reusability, information granularity is viewed as an important design asset and as such is subject to optimization. We formulate an optimal allocation problem for information granularity: assuming a certain level of granularity, distribute it optimally among the parameters of the model (making them granular) so that a certain data coverage criterion is maximized.
While the underlying concept is general and applicable to a variety of models, in this study we discuss its use with fuzzy neural networks with the intent to clearly visualize the advantages of the approach and to emphasize various ways of forming granular versions of the weights (parameters) of the connections of the network. Several granularity allocation protocols (ranging from a uniform distribution of granularity to symmetric and asymmetric schemes of allocation) are discussed, and the effectiveness of each of them is quantified. The use of Particle Swarm Optimization (PSO) as the underlying optimization tool to realize the optimal granularity allocation is discussed.
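The allocation idea described in this abstract can be sketched for a toy linear model. This is a minimal illustration under my own naming: the model, the interval arithmetic, the coverage criterion, and the crude random search standing in for PSO are all simplifications, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.5, -0.8, 0.3])          # numeric parameters of an existing model
X = rng.uniform(-1, 1, size=(200, 3))   # new (limited) experimental data
y = X @ w + rng.normal(0, 0.1, 200)     # targets from a similar, not identical, process

def coverage(eps):
    """Fraction of targets covered by the granular (interval) output.

    eps[i] is the half-width of the interval wrapped around parameter w[i];
    interval arithmetic gives the output bounds for each input x.
    """
    lo = X @ w - np.abs(X) @ eps
    hi = X @ w + np.abs(X) @ eps
    return np.mean((y >= lo) & (y <= hi))

budget = 0.3                             # overall level of granularity to distribute
uniform = np.full(3, budget / 3)         # protocol 1: uniform allocation

# Protocol 2: optimized allocation; a random search over the simplex stands in
# for the PSO used in the paper.
best, best_cov = uniform, coverage(uniform)
for _ in range(2000):
    cand = rng.dirichlet(np.ones(3)) * budget
    c = coverage(cand)
    if c > best_cov:
        best, best_cov = cand, c
print(coverage(uniform), best_cov)
```

The optimized allocation can never do worse than the uniform protocol here, since the search starts from it; the interesting question, as in the abstract, is how much coverage is gained for a fixed granularity budget.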

2.
This study elaborates on the role of information granularity in the development of fuzzy controllers. As opposed to the numeric data commonly accepted by fuzzy controllers, we discuss a general processing framework involving data–information granules exhibiting various levels of information granularity. The paper analyzes the impact of information granularity on the performance of the controller. We study the way in which information granules arise in control problems, elaborate on a way of describing these granules, and provide a way of quantifying the level of information granularity. A number of analysis and design issues are studied, including robustness of the fuzzy controller, representation of linguistic information, and quantification of its granularity. Nonlinear characteristics of the compiled version of the fuzzy controller operating in the presence of granular information are discussed in detail. Illustrative numerical examples are provided as well. © 1999 John Wiley & Sons, Inc.

3.
4.
Data imputation is a common practice encountered when dealing with incomplete data. Irrespective of the existing spectrum of techniques, the results of imputation are commonly numeric, meaning that once the data have been imputed they are indistinguishable from the data initially available prior to imputation. In this study, the crux of the proposed approach is to develop a way of representing imputed (missing) entries as information granules and in this manner quantify the quality of the imputation process and of the ensuing data. We establish a two-stage imputation mechanism in which we start with any method of numeric imputation and then form a granular representative of the missing value. In this sense, the approach can be regarded as an enhancement of existing imputation techniques. Proceeding with the detailed imputation schemes, we discuss two ways of imputation. In the first, imputation is realized for the individual variables of the data sets and afterwards enhanced by the buildup of information granules. In the second, we are concerned with the use of fuzzy clustering, Fuzzy C-Means (FCM), which helps establish a structure in the data that is then used in the imputation process. The design of information granules invokes the fundamentals of Granular Computing, namely the principle of justifiable granularity and the allocation of information granularity. Numeric experiments concerned with a suite of publicly available data sets offer detailed insight into the main facets of the overall design process and deliver a parametric analysis of the methods.
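The two-stage mechanism outlined above can be illustrated for a single variable: a numeric imputation first, then an interval granule chosen by a coverage-times-specificity criterion in the spirit of justifiable granularity. All names are mine, and the simple grid search is an assumption, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
col = rng.normal(10.0, 2.0, 100)         # one variable of the data set
observed = col[:90]                      # pretend the last 10 entries are missing

center = observed.mean()                 # stage 1: numeric (mean) imputation
spread = observed.max() - observed.min()

def quality(h):
    """Coverage (data inside [center-h, center+h]) times specificity (narrowness)."""
    cov = np.mean(np.abs(observed - center) <= h)
    spec = max(0.0, 1.0 - h / spread)
    return cov * spec

# Stage 2: pick the half-width maximizing the composite criterion over a grid.
hs = np.linspace(1e-6, spread, 500)
h_star = hs[np.argmax([quality(h) for h in hs])]
granule = (center - h_star, center + h_star)
print(granule)
```

The imputed entry is now an interval rather than a bare number: a wide granule signals that the imputation is poorly supported by the data, which is precisely the quality information the abstract argues numeric imputation discards.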

5.
Sugeno-type fuzzy models are used frequently in system modeling. The idea of information granulation arises naturally in the design of Sugeno-type fuzzy models, as it is closely related to the information granules being developed. In this paper, a design method for a Sugeno-type granular model is proposed on the basis of an optimal allocation of information granularity. The overall design process starts with a well-established Sugeno-type numeric fuzzy model (the original Sugeno-type model). By soundly assigning information granularity to the relevant parameters of the antecedents and conclusions of the fuzzy rules of the original Sugeno-type model (i.e., granulating these parameters so that an optimal allocation of information granularity is realized), the original Sugeno-type model is extended to its granular counterpart (the granular model). Several protocols of optimal allocation of information granularity are also discussed. The obtained granular model is applied to forecast three real-world time series. The experimental results show that the design method for the Sugeno-type granular model offers advantages, yielding models of good prediction capabilities. Furthermore, they also show the merits of the Sugeno-type granular model: (1) the output of the model is an information granule (an interval) rather than a specific numeric entity, which facilitates further interpretation; (2) the model provides much more flexibility than the original Sugeno-type model; (3) the construction approach is of a general nature, as it could be applied to various fuzzy models and realized by invoking different formalisms of information granules.

6.
In this paper, we develop a granular input space for neural networks, especially for multilayer perceptrons (MLPs). Unlike conventional neural networks, a neural network with granular inputs is an augmented construct built on the basis of a well-trained numeric neural network. We explore an efficient way of forming granular input variables so that the corresponding granular outputs of the neural network achieve the highest values of the criteria of specificity (and support). When we augment neural networks by distributing information granularity across the input variables, the output of a network exhibits different levels of sensitivity to different input variables. Capturing the relationship between the input variables and the output is of great help in mining knowledge from the data, and in this way important features of the data can easily be found. As an essential design asset, information granules are considered in this construct. The quantification of information granules is viewed in terms of levels of granularity provided by the expert. The detailed optimization procedure for the allocation of information granularity is realized by an improved partheno-genetic algorithm (IPGA). The proposed algorithm is shown to be effective through numeric studies completed for synthetic data and data coming from the machine learning and StatLib repositories. Moreover, the experimental studies offer deep insight into the specificity of the input features.

7.
In this study, we introduce a concept of granular worlds and elaborate on various representation and communication issues arising therein. A granular world embodies a collection of information granules regarded as generic conceptual entities used to represent knowledge and handle problem solving. Granular computing is a paradigm supporting knowledge representation, coping with complexity, and facilitating interpretation of processing. In this sense, it is crucial to all man-machine pursuits, and to data mining and intelligent data analysis in particular. Two essential facets are inherently associated with any granular world: the formalism used to describe and manipulate information granules, and the granularity of the granules themselves (roughly speaking, by granularity we mean the "size" of such information granules; its detailed definition depends on the formal setting of the granular world). There are numerous formal models of granular worlds, ranging from set-theoretic developments (including sets, fuzzy sets, and rough sets) to probabilistic counterparts (random sets, random variables, and the like). In light of the evident diversity of granular worlds (occurring both in terms of the underlying formal settings and the levels of granularity), we elaborate on their possible interaction and identify the implications of such communication. More specifically, we cast these in the form of the interoperability problem associated with the representation of information granules. © 2000 John Wiley & Sons, Inc.

8.
Fuzzy rule-based models are widely used in many fields, yet existing fuzzy rule-based models rely mainly on numeric performance indices and neglect the evaluation of the fuzzy sets themselves. To address this, a new method for evaluating the performance of fuzzy rule-based models is proposed. The method effectively evaluates the non-numeric (granular) nature of the model's output. Unlike the commonly used numeric performance indices (such as the mean squared error, MSE), it characterizes the quality of the granular output through the features of information granules, and this index is used in the performance optimization of the fuzzy model. The performance of an information granule is quantified by two fundamental indices, coverage (of the data) and specificity (of the granule itself), and Particle Swarm Optimization is employed to maximize the quality of the granular output, expressed as the product of coverage and specificity. In addition, the method optimizes the distribution of the information granules formed by fuzzy clustering. Experimental results demonstrate the effectiveness of the index for evaluating the performance of fuzzy rule-based models.

9.
In this study, we introduce a concept of a granular input space in system modeling, in particular in fuzzy rule-based modeling. The underlying problem can be succinctly formulated in the following way: given a numeric model, develop an efficient way of forming granular input variables so that the corresponding granular outputs of the model achieve the highest level of specificity. The rationale behind this formulation is offered along with several illustrative examples. In conjunction with the underlying idea, an algorithmic framework is developed that supports the optimization of the specificity of the model exposed to granular inputs (data). It dwells upon one of the principles of Granular Computing, namely an optimal allocation of information granularity. For illustrative purposes, the study focuses on information granules formalized in terms of intervals (however, the proposed approach is equally relevant for other formalisms of information granules). A comparative analysis with the existing idea of global sensitivity analysis is also carried out by contrasting the essential differences between the two approaches and analyzing the results of computational experiments.
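The interval formalism mentioned in this abstract can be sketched as follows. This is my own illustration, not the paper's algorithm: each numeric input is replaced by an interval, the granular output is bracketed by evaluating the model at the box corners (which is valid only when the model is monotone over the box), and specificity is taken as one minus the output width.

```python
import numpy as np
from itertools import product

def model(x):                           # any given numeric model (illustrative choice)
    return np.sin(x[0]) + 0.5 * x[1] ** 2

def granular_output(x, e):
    """Bracket the output over the box [x-e, x+e] by evaluating all 2^n corners."""
    corners = [model(np.array(c))
               for c in product(*[(xi - ei, xi + ei) for xi, ei in zip(x, e)])]
    return min(corners), max(corners)

x = np.array([0.3, 1.0])
lo, hi = granular_output(x, e=np.array([0.05, 0.05]))
specificity = 1.0 - (hi - lo)           # assumes outputs normalized to a unit range
print(lo, hi, specificity)
```

Widening the input intervals (larger `e`) widens `[lo, hi]` and lowers specificity; the optimization problem in the abstract is the trade-off of distributing a granularity budget across inputs so the output stays as specific as possible.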

10.
In this study, we are concerned with the construction of granular neural networks (GNNs)—architectures formed as a direct result of a reconciliation of the results produced by a collection of local neural networks, each constructed on the basis of an individual data set. Being cognizant of the diversity of the results produced by the collection of networks, we arrive at the concept of a granular neural network, producing results in the form of information granules (rather than plain numeric entities) that become reflective of the diversity of the results generated by the contributing networks. The design of a granular neural network exploits the concept of justifiable granularity. A performance index quantifying the quality of the information granules generated by the granular neural network is introduced. The study is illustrated with the aid of machine learning data sets. The experimental results provide detailed insight into the developed granular neural networks.

11.
Granular computing is a new theory and methodology of intelligent computing that has attracted the attention of many researchers. However, concrete, workable models of granule representation and methods of reasoning across different granules remain relatively underexplored. This paper brings fuzzy rough sets into the new theoretical framework of granular computing, which is of evident significance for handling complex information systems and solving complex problems. First, information granules under fuzzy relations are constructed by means of the Cartesian product. Then, representations of fuzzy rough operators at different levels of granularity are given, forming a hierarchical structure. Finally, the problem of selecting the coarseness of granularity for fuzzy information systems is considered and illustrated with an example, thereby providing a concrete and practical framework for granular computing.

12.
In this study, we propose a model of granular data emerging through the summarization and processing of numeric data. This model supports data analysis and contributes to further interpretation activities. The structure of the data is revealed through FCM equipped with the Tchebyschev (l∞) metric. The paper offers a novel contribution of gradient-based learning of the prototypes developed in the l∞-based FCM. The l∞ metric promotes the development of easily interpretable information granules, namely hyperboxes. A detailed discussion of their geometry is provided. In particular, we discuss a deformation effect on the hyperbox shape of the granules due to an interaction between the granules, and show how this deformation effect can be quantified. Subsequently, we show how the clustering gives rise to a two-level topology of information granules: the core part of the topology comes in the form of hyperbox information granules, while a residual structure is expressed through detailed, yet difficult to interpret, membership grades. Illustrative examples including synthetic data are studied.
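The l∞-based clustering step can be sketched with the standard FCM membership update, swapping the usual Euclidean distance for the Chebyshev (l∞) distance. This is a sketch of the membership computation only; the paper's gradient-based prototype learning is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(50, 2))      # data
V = X[rng.choice(50, 3, replace=False)]  # 3 prototypes (e.g., an FCM snapshot)
m = 2.0                                  # fuzzification coefficient

def memberships(X, V):
    """Standard FCM membership update with the l-infinity (Chebyshev) metric."""
    # d[i, k] = max over coordinates of |X[i] - V[k]|
    d = np.max(np.abs(X[:, None, :] - V[None, :, :]), axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

U = memberships(X, V)
print(U.shape)
```

Because the l∞ distance measures the largest coordinate-wise deviation, the resulting iso-distance contours are axis-aligned boxes, which is what makes the granules interpretable as hyperboxes.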

13.
Existing user-profiling models process heterogeneous, multi-source raw data with a single model at a single granularity, which limits the performance of the analysis and prevents it from presenting multi-level, multi-perspective profile features in full. To address this problem, a multi-granularity user-profiling model based on the idea of granular computing is proposed. First, a multi-granularity representation structure of the data is constructed and the raw data are granulated. Then, in accordance with this granularity structure, an ensemble-learning-based granularity lifting algorithm is proposed to fuse information from lower granular layers into data representations at higher layers. Finally, user-profile analysis is carried out on the data representations at multiple granular layers, yielding a fairly comprehensive user profile. Experiments show that, compared with single-granularity profiles, multi-granularity user profiles are more comprehensive, multi-dimensional, and rich.

14.
We are concerned with the granular representation of mappings (or experimental data) coming in the form R: R → [0,1] (for one-dimensional cases) and R: R^n → [0,1] (for multivariable cases), with R being the set of real numbers. As the name implies, a granular mapping is defined over information granules and maps them into a collection of granules expressed in some output space. The design of the granular mapping is discussed in the case of set- and fuzzy-set-based granulation. The proposed development is regarded as a two-phase process that comprises: 1) a definition of an interaction between information granules and experimental evidence or an existing numeric mapping, and 2) the use of these measures of interaction in building an explicit expression for the granular mapping. We show how to develop information granules in the case of multidimensional numeric data by resorting to fuzzy clustering (Fuzzy C-Means). Experimental results serve as an illustration of the proposed approach.

15.
Information granules form an abstract and efficient characterization of large volumes of numeric data, and fuzzy clustering is a commonly encountered approach to information granulation. Reconstruction (degranulation) is about decoding information granules back into numeric data. In this study, to enhance the quality of reconstruction, we augment the generic data reconstruction approach by introducing a transformation mapping of the originally produced partition matrix and setting up an adjustment mechanism modifying the localization of the prototypes. We engage several population-based search algorithms to optimize the interaction matrices and prototypes. A series of experimental results dealing with both synthetic and publicly available data sets is reported to show the enhancement of the data reconstruction performance provided by the proposed method.
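The generic granulation–degranulation loop this abstract builds on can be sketched with the standard FCM formulas: data are encoded as a partition matrix and prototypes, then decoded back to numeric form, and the reconstruction error measures the quality of the granular description. The paper's transformation of the partition matrix and its prototype adjustment are not reproduced here; this is only the baseline being enhanced.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(0, 1, size=(100, 2))
V = X[rng.choice(100, 4, replace=False)]     # prototypes, e.g., produced by FCM
m = 2.0                                      # fuzzification coefficient

# Granulation: standard FCM membership computation (Euclidean distance).
d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
inv = d ** (-2.0 / (m - 1.0))
U = inv / inv.sum(axis=1, keepdims=True)     # partition matrix

# Degranulation: each point is reconstructed as the membership-weighted
# average of the prototypes, x_hat = sum_k u_k^m v_k / sum_k u_k^m.
W = U ** m
X_hat = (W @ V) / W.sum(axis=1, keepdims=True)
error = np.mean(np.sum((X - X_hat) ** 2, axis=1))
print(error)
```

A lower reconstruction error means the granules describe the data more faithfully, which is the criterion the population-based search in the abstract is driving down.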

16.
17.
Information granules, such as fuzzy sets, capture essential knowledge about data and the key dependencies between them. Quite commonly, we may envision that information granules (fuzzy sets) have arisen as the result of fuzzy clustering and can therefore be succinctly represented in the form of fuzzy partition matrices. Interestingly, the same data set can be represented from various standpoints, and this multifaceted view yields a collection of different partition matrices reflective of higher-order granular knowledge about the data. The levels of specificity of the clusters into which the data are organized can be quite different—the larger the number of clusters, the more detailed the insight into the structure of the data. Given the granularity of the resulting constructs (rather than the plain data themselves), one can view a collection of partition matrices as a certain type of network of knowledge. Considering the variety of sources of knowledge encountered across the network, we are interested in forming a consensus between them. In a nutshell, this leads to the construction of certain fuzzy partition matrices that "reconcile" the knowledge captured by the individual partition matrices. Given that the granularity of the sources of knowledge under consideration can vary quite substantially, we develop a unified optimization perspective by introducing fuzzy proximity matrices induced by the corresponding partition matrices. The optimization is then realized on the basis of these proximity matrices. We offer a detailed algorithm and illustrate its performance using a series of numeric experiments.

18.
Linguistic models and linguistic modeling
The study is concerned with a linguistic approach to the design of a new category of fuzzy (granular) models. In contrast to numerically driven identification techniques, we concentrate on building meaningful linguistic labels (granules) in the space of experimental data and forming the ensuing model as a web of associations between such granules. As such models are designed at the level of information granules and generate results in the same granular, rather than purely numeric, format, we refer to them as linguistic models. Furthermore, as there are no detailed numeric estimation procedures involved in the construction of linguistic models carried out in this way, their design mode can be viewed as one of rapid prototyping. The underlying algorithm used in the development of the models utilizes an augmented version of the clustering technique (context-based clustering) that is centered around a notion of linguistic contexts—a collection of fuzzy sets or fuzzy relations defined in the data space (more precisely, the space of input variables). The detailed design algorithm is provided and contrasted with the standard modeling approaches commonly encountered in the literature. The usefulness of the linguistic mode of system modeling is discussed and illustrated with the aid of numeric studies including both synthetic data and time series concerned with modeling traffic intensity over a broadband telecommunication network.

19.
To investigate the problem of data association, the data set is granulated hierarchically, producing a hierarchically structured granulation tree. Using the hierarchical information of the granulation tree and the numeric representation of granularity, together with the data connections induced by associated data, a definition of data association between two granulation trees is given. Treating the upper approximation as an operator, and with the aid of the granules corresponding to the upper-approximation operation, a decision theorem for data association is obtained; the closeness of the association is judged on the basis of the numeric information of granularity, yielding a granulation-tree description of data association. The hierarchical granulation and the numeric representation of granularity presented here can be regarded as one form of research into granular computing. The discussion of a worked example demonstrates the practical value of the granulation-tree method.

20.
For complex data with overlapping clusters and imbalanced distributions, a two-stage information granulation algorithm combining interval type-2 fuzzy rough C-means (IT2FRCM) clustering with a hybrid measure is proposed on the basis of the principle of justifiable granularity. In the first stage, the IT2FRCM algorithm clusters the original data to obtain the initial information granules. In the second stage, taking into account the spatial distribution of the data, the sample size, and the properties of the granules, a hybrid measure is used to design a granulation function balancing experimental justifiability and semantic distinctiveness; the composite function of coverage and specificity is then optimized under the justifiable granularity criterion to determine the optimal granule boundaries. Experimental results on synthetic and UCI data sets show that the algorithm effectively improves the quality of information granulation and the representativeness of the granules for imbalanced data, achieving satisfactory performance on measures such as the number of correctly categorized samples and granule characteristics.
