Similar Documents
Found 18 similar documents (search time: 62 ms)
1.
A Fuzzy-Rough Reduction for Continuous Attribute Spaces   Cited by 3 (self-citations: 1, others: 2)
This paper briefly reviews several important concepts of fuzzy rough set theory related to attribute reduction, studies methods for fuzzifying attributes, and proposes an attribute reduction algorithm that combines genetic algorithms with fuzzy rough set theory; the algorithm can quickly find an attribute reduct that completely preserves the information of the original data set.

2.
For fuzzy information systems, this paper analyzes the looseness of the approximation operators defined by R. Jensen and constructs a stricter approximation operator, which guarantees that the lower approximation grows monotonically as attributes are added to the information system. On this basis, the concept of a relative reduct for fuzzy information systems is defined, and a heuristic knowledge reduction algorithm based on the defined dependency degree is proposed. The method is applied to knowledge reduction of an information system for assessing target threat levels, and the computed results verify its effectiveness.

3.
The real world often contains massive, incomplete, fuzzy, and imprecise data or objects, which has made fuzzy information granulation a research trend in recent years. Using a fuzzy equivalence relation on the universe, this paper defines the fuzzy knowledge granularity of the fuzzy granular world and gives new attribute reduction conditions and a method for computing core attributes, so as to better mine latent, valuable information. To address the facts that rough sets tend to lose information when reducing continuous attributes and cannot handle fuzzy attributes, a heuristic reduction algorithm for hybrid decision systems based on fuzzy knowledge granularity is proposed; it dispenses with the discretization of continuous attributes, reduces computation, and provides a unified reduction method for discrete and hybrid value domains. Finally, an example verifies its effectiveness.

4.
Studying sets from a macroscopic perspective makes it hard to uncover relationships among elements and inevitably introduces human arbitrariness and uncertainty. Starting from the similarity of individual elements, this paper first establishes equivalence classes under a fuzzy equivalence relation and rough sets over fuzzy equivalence relations, studies the reasonable value range of the similarity-degree parameter, and proposes and proves computation theorems for the rough set operators. It then discusses the relationship between rough sets over fuzzy equivalence relations and classical rough sets, identifies and studies the inconsistent-classification problem that arises when classical rough set theory handles identical elements, and finally gives a computation method for the classical rough set operators.

5.
Traditional reduction algorithms based on fuzzy rough sets have high time cost, take too long on large-scale data, and on many real large-scale data sets fail to converge within a bounded time. This paper therefore introduces weights into the definition of attribute reduction, where an attribute's weight is a numerical index of its significance. The attribute weights are obtained by solving an optimization problem, and it is proved that the attribute dependency degree is the optimal solution for the attribute weight. Accordingly, a reduction algorithm based on ranking attributes by weight is proposed, which greatly speeds up reduction and makes the algorithm applicable to large-scale, and especially high-dimensional, data sets.
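As a minimal, hypothetical illustration of the rank-by-weight idea above (the paper derives weights from fuzzy rough dependency; the sketch below uses the classical crisp dependency degree instead, and the toy decision table is invented):

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices into equivalence classes by their values on attrs."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, labels, attrs):
    """gamma(B) = |POS_B(D)| / |U|: the fraction of objects whose
    B-equivalence class is consistent on the decision label."""
    pos = sum(len(b) for b in partition(rows, attrs)
              if len({labels[i] for i in b}) == 1)
    return pos / len(rows)

# toy decision table: 3 condition attributes, binary decision
rows = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1)]
labels = [0, 0, 1, 1]

# rank single attributes by dependency degree, used here as the weight
ranking = sorted(range(3), key=lambda a: -dependency(rows, labels, [a]))
```

Sorting once by such weights yields the fixed scan order a weight-ranked reduction algorithm uses, instead of re-evaluating candidate attribute subsets repeatedly.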

6.
7.
This paper defines the similarity relation between objects with fuzzy attributes and the degree of attribute significance, and then uses an extended fuzzy rough set model based on similarity relations to construct an attribute reduction algorithm for decision tables with continuous-valued attributes. The algorithm yields a significance-based reduct of the condition attribute set. Example analysis and comparative study show that the algorithm is effective and has low time complexity.

8.
陈俞  赵素云  李雪峰  陈红  李翠平 《软件学报》2017,28(11):2825-2835
Traditional attribute reduction is almost inapplicable to large-scale data sets because of its high time and space complexity. This paper introduces random sampling into traditional fuzzy rough sets, greatly improving the efficiency of attribute reduction. First, a definition of statistical attribute reduction is proposed on the basis of the statistical lower approximation. Such a reduct is not a reduct in the original sense, but an attribute subset that keeps the statistical discernibility, defined via the statistical lower approximation, unchanged. Then, sampling is used to compute a sample estimate of the statistical discernibility; based on this estimate, attributes can be ranked by statistical significance, allowing a fast ordered reduction algorithm suitable for large-scale data. Thanks to the random sample set and the statistical approximation concept, the algorithm lowers the computational complexity of reduction in both time and space while keeping the information content of the data set almost unchanged. Finally, numerical experiments compare the sampling-based ordered reduction algorithm with two traditional attribute reduction algorithms in three respects: time consumed, space consumed, and reduction quality. The comparison verifies the time and space advantages of the sampling-based ordered reduction algorithm.

9.
An Improved Attribute Reduction Algorithm Based on Fuzzy-Rough Sets   Cited by 4 (self-citations: 1, others: 4)
The problem facing traditional rough set theory is that it applies only to discrete data sets, and important information may be lost after discretization. This paper improves on an existing algorithm by using fuzzy-rough level sets to reduce the amount of computation.

10.
《微型机与应用》2016,(12):55-58
Data stream mining is a current hot topic in data mining research, and data streams come in various types. Using the basic principles and methods of fuzzy rough sets and F-rough sets, this paper proposes a method for fuzzy parallel reduction of fuzzy data streams that removes redundant attributes, and uses changes in attribute significance within the fuzzy parallel reducts to detect fuzzy concept drift. Unlike traditional methods, this approach detects fuzzy concept drift from the intrinsic characteristics of the fuzzy data itself, and examples verify its feasibility and effectiveness.

11.
One of the key problems of knowledge discovery is knowledge reduction. This paper proposes a new method for knowledge reduction in information systems. First, two families of closed sets C_r and C_R are defined, where r and R are equivalence relations defined on the attribute set and its power set, respectively. The properties of C_r and C_R are also discussed. The necessary and sufficient condition for C_r = C_R is then given and employed to construct an approach to attribute reduction in information systems. It is also proved that under the condition C_r = C_R, the proposed approach to knowledge reduction is equivalent to the well-accepted one in reference [W.X. Zhang, Y. Leung, W.Z. Wu, Information Systems and Knowledge Discovery, Science Publishing Company, Beijing, 2003].

12.
In this paper we introduce a new type of fuzzy modifiers (i.e. mappings that transform a fuzzy set into a modified fuzzy set) based on fuzzy relations. We show how they can be applied for the representation of weakening adverbs (more or less, roughly) and intensifying adverbs (very, extremely) in the inclusive and the non-inclusive interpretation. We illustrate their use in an approximate reasoning scheme.
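For orientation, the classical power-based modifiers commonly used for the same adverbs can be sketched as below. This is the standard concentration/dilation approach, not the relation-based construction of the paper, and the membership values are invented:

```python
mu = [0.0, 0.25, 0.81, 1.0]   # toy membership degrees of "tall"

def very(mu):
    """Concentration (squaring): models the intensifying adverb "very"."""
    return [m ** 2 for m in mu]

def more_or_less(mu):
    """Dilation (square root): models the weakening adverb "more or less"."""
    return [m ** 0.5 for m in mu]

very_tall = very(mu)             # degrees shrink, low ones fastest
roughly_tall = more_or_less(mu)  # degrees are pulled up toward 1
```

Both modifiers leave full members (degree 1) and non-members (degree 0) unchanged and only reshape the graded middle, which is the behavior a relation-based modifier must also reproduce.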

13.
Rough sets and fuzzy rough sets serve as important approaches to granular computing, but the granular structure of fuzzy rough sets is not as clear as that of classical rough sets since lower and upper approximations in fuzzy rough sets are defined in terms of membership functions, while lower and upper approximations in classical rough sets are defined in terms of union of some basic granules. This limits further investigation of the existing fuzzy rough sets. To bring to light the innate granular structure of fuzzy rough sets, we develop a theory of granular computing based on fuzzy relations in this paper. We propose the concept of granular fuzzy sets based on fuzzy similarity relations, investigate the properties of the proposed granular fuzzy sets using constructive and axiomatic approaches, and study the relationship between granular fuzzy sets and fuzzy relations. We then use the granular fuzzy sets to describe the granular structures of lower and upper approximations of a fuzzy set within the framework of granular computing. Finally, we characterize the structure of attribute reduction in terms of granular fuzzy sets, and two examples are also employed to illustrate our idea in this paper.
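The membership-function definitions of the approximations referred to above, in their common form with the min t-norm and the Kleene-Dienes implicator, can be sketched as follows (the similarity relation R and fuzzy set A are invented toy data):

```python
import numpy as np

def fuzzy_lower(R, A):
    """Lower approximation: (R_low A)(x) = min_y max(1 - R[x,y], A[y])."""
    return np.min(np.maximum(1.0 - R, A[None, :]), axis=1)

def fuzzy_upper(R, A):
    """Upper approximation: (R_up A)(x) = max_y min(R[x,y], A[y])."""
    return np.max(np.minimum(R, A[None, :]), axis=1)

# toy fuzzy similarity relation over 3 objects (reflexive, symmetric)
R = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
A = np.array([1.0, 0.6, 0.1])   # a fuzzy set on the same universe

low, up = fuzzy_lower(R, A), fuzzy_upper(R, A)
```

Because R is reflexive, the sandwich low <= A <= up holds pointwise, mirroring the inclusion between approximations in the classical granule-based picture.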

14.
Causes (diagnoses) are retrieved and identified using observed effects (symptoms) based on fuzzy relations and Zadeh’s compositional rule of inference. An approach to designing adaptive fuzzy diagnostic systems is proposed. It allows solving fuzzy logic equations and designing and adjusting fuzzy relations using expert and experimental information. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 135–150, July–August 2009.

15.
Pasi Luukka 《Knowledge》2009,22(1):57-62
This paper examines a classifier based on similarity measures originating from probabilistic equivalence relations with a generalized mean. The equivalences are weighted, and weight optimization is carried out with differential evolution algorithms. A similarity measure based on the Łukasiewicz structure has previously been used in this classifier, but this paper concentrates on weighted similarity measures defined in a probabilistic framework, applied variable by variable and aggregated across the features using a generalized mean. The weights for these measures are determined with a differential evolution process. The classification accuracy with these measures is tested on several medical data sets, and the results are compared to those of other classifiers. The results presented in this paper are promising, and in several cases better results were achieved.

16.

The fuzzy c-means algorithm (FCM) is aimed at computing the membership degree of each data point to its corresponding cluster center. This computation needs the distance matrix between the cluster centers and the data points. The main bottleneck of the FCM algorithm is computing the membership matrix for all data points. This work presents a new clustering method, bdrFCM (boundary data reduction fuzzy c-means). Our algorithm is based on the original FCM proposal, adapted to detect and remove the boundary regions of clusters. Our implementation efforts are directed at two aspects: processing large datasets in less time and reducing the data volume while maintaining the quality of the clusters. A real-data application with a significant volume (>10^6 records) was used, and we found that the bdrFCM implementation scales well to datasets with millions of data points.

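The membership computation that the abstract identifies as FCM's bottleneck is, in its standard form, the update sketched below. This shows plain FCM memberships only, not the boundary detection and removal of bdrFCM, and the points and centers are invented:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard FCM membership update:
    u[i,k] = 1 / sum_j (d(x_i, c_k) / d(x_i, c_j)) ** (2 / (m - 1))."""
    # pairwise distances: d[i,k] = ||x_i - c_k||
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at a center
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
C = np.array([[0.0, 0.0], [10.0, 0.0]])
U = fcm_memberships(X, C)   # each row sums to 1
```

The cost is dominated by the n-by-c distance and ratio computations over all n points, which is exactly what shrinking n by discarding boundary data reduces.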

17.
Dubois and Prade (1990) [1] introduced the notion of fuzzy rough sets as a fuzzy generalization of rough sets, which was originally proposed by Pawlak (1982) [8]. Later, Radzikowska and Kerre introduced the so-called (I,T)-fuzzy rough sets, where I is an implication and T is a triangular norm. In the present paper, by using a pair of implications (I,J), we define the so-called (I,J)-fuzzy rough sets, which generalize the concept of fuzzy rough sets in the sense of Radzikowska and Kerre, and that of Mi and Zhang. Basic properties of (I,J)-fuzzy rough sets are investigated in detail.

18.
Frequent itemset mining is a key problem in data mining applications, but the huge number of frequent itemsets is an obstacle in practice. To address this, this paper proposes a lattice-based condensed model of frequent itemsets and proves the bound on the support error the method introduces. On top of the model, a fuzzy-equivalence-class condensed representation algorithm, FEC, is proposed. Experimental results show that the method greatly reduces the number of frequent itemsets without introducing excessive support error; compared with the Index-Meta algorithm, it produces smaller support errors and has high practical value.
