Similar Documents
18 similar documents were found.
1.
The full-granularity rough set is a rough set model that can represent both explicit and implicit knowledge, and it captures the complexity, diversity, and uncertainty of human cognition more faithfully. Building on classical rough set theory, this paper defines uncertainty measures such as the full-granularity membership degree, full-granularity roughness, the full-granularity attribute dependency of a concept, and the full-granularity attribute dependency of a decision system, investigates the properties of these measures, and identifies their connections with full-granularity absolute reducts, full-granularity attribute reducts of a concept, and full-granularity Pawlak reducts, which aids attribute reduction and the practical application of full-granularity rough sets.
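For reference, a minimal sketch of the classical (single-granulation) rough membership degree and roughness that such uncertainty measures generalize; the toy information system, the attribute `colour`, and the target concept below are illustrative and are not taken from the paper.

```python
def equivalence_classes(universe, key):
    """Partition the universe by an attribute projection (an equivalence relation)."""
    classes = {}
    for x in universe:
        classes.setdefault(key(x), []).append(x)
    return list(classes.values())

def rough_membership(x, target, classes):
    """Classical rough membership degree: |[x] ∩ X| / |[x]|."""
    block = next(c for c in classes if x in c)
    return len([y for y in block if y in target]) / len(block)

def roughness(target, classes):
    """Roughness of X: 1 - |lower approximation| / |upper approximation|."""
    lower = sum(len(c) for c in classes if set(c) <= set(target))
    upper = sum(len(c) for c in classes if set(c) & set(target))
    return 1 - lower / upper if upper else 0.0

# toy information system: six objects described by one symbolic attribute
U = ["x1", "x2", "x3", "x4", "x5", "x6"]
colour = {"x1": "r", "x2": "r", "x3": "g", "x4": "g", "x5": "b", "x6": "b"}
X = {"x1", "x2", "x3"}                                   # target concept
cls = equivalence_classes(U, key=lambda x: colour[x])
print(rough_membership("x3", X, cls))                    # 0.5
print(roughness(X, cls))                                 # 0.5
```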

2.
Full-granularity rough sets have high time and space complexity, which makes attribute reduction hard to compute. To address this problem, this paper uses equivalence classes to define a discernibility degree for information systems, studies its properties, and proves that attribute reduction based on the discernibility degree is equivalent to absolute reduction. It then defines a positive-region discernibility degree for decision systems, investigates its properties, and proves that reduction based on the positive-region discernibility degree is a superset of the full-granularity Pawlak reduct but equals it in the vast majority of cases, so it can serve as an approximation of the full-granularity Pawlak reduct. Theoretical analysis and experiments show that, compared with other attribute reduction algorithms, reduction based on the positive-region discernibility degree has clear advantages in computational complexity and classification accuracy.

3.
杨春亮 《数字社区&智能家居》2009,5(4):2704-2705,2711
This paper studies attribute reduction in rough set theory from the perspective of granular computing, defines the concept of granularity, and on this basis proposes a new attribute reduction algorithm. Experimental analysis shows that this granular computing method can obtain a minimal reduct of an information system.

4.
This paper studies attribute reduction in rough set theory from the perspective of granular computing, defines the concept of granularity, and on this basis proposes a new attribute reduction algorithm. Experimental analysis shows that this granular computing method can obtain a minimal reduct of an information system.

5.
Attribute reduction in rough set theory is studied from the perspective of granular computing. The concepts of granularity difference and granularity entropy are defined based on the algebraic and information-theoretic approaches respectively, and two new attribute reduction algorithms are proposed on this basis. Experimental analysis shows that both of these reliable and effective granular computing methods can obtain a minimal reduct of an information table, providing a feasible approach for further research on the granular computing of knowledge.
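A minimal sketch, assuming the standard algebraic knowledge granularity GK(P) = Σ|Xi|²/|U|² and the standard information-theoretic entropy of a partition; the paper's exact definitions of granularity difference and granularity entropy may differ, and the partitions below are illustrative.

```python
import math

def knowledge_granularity(partition, n):
    """Algebraic knowledge granularity: GK(P) = sum(|Xi|^2) / n^2 (smaller means finer)."""
    return sum(len(block) ** 2 for block in partition) / (n * n)

def granularity_entropy(partition, n):
    """Information-theoretic granularity entropy: E(P) = -sum((|Xi|/n) * log2(|Xi|/n))."""
    return -sum((len(b) / n) * math.log2(len(b) / n) for b in partition)

# toy partitions of a six-object universe induced by two attribute subsets
coarse = [["x1", "x2", "x3"], ["x4", "x5", "x6"]]
fine = [["x1"], ["x2", "x3"], ["x4"], ["x5", "x6"]]
n = 6
print(knowledge_granularity(coarse, n), knowledge_granularity(fine, n))  # 0.5  ~0.278
print(granularity_entropy(coarse, n), granularity_entropy(fine, n))      # 1.0  ~1.918
```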

6.
This paper defines attribute reduction based on generalized multi-granulation rough sets, studies some basic properties of the reducts, gives a MATLAB computation procedure, and provides computational examples. It defines strict consistency, soft inconsistency, granulation consistency, and granulation inconsistency of information systems, and it defines reducts under generalized multi-granulation, granulation reducts, (lower/upper approximation) distribution consistent reducts, and (lower/upper approximation) quality consistent reducts, together with some conclusions. The reducts of generalized multi-granulation rough sets apply to both optimistic and pessimistic multi-granulation rough sets. The results enrich multi-granulation rough set theory and lay a foundation for theoretical research and applications.

7.
Attribute reduction is a common data preprocessing method in machine learning and related fields. Most attribute reduction algorithms based on rough set theory measure attribute significance with a single criterion. To evaluate attributes more effectively from multiple perspectives, this paper first defines an attribute dependency measure in the existing fuzzy neighborhood rough set model, and then, drawing on the concept of knowledge granularity in granular computing, proposes a fuzzy neighborhood granularity measure under the fuzzy neighborhood rough set model. Since attribute dependency and knowledge granularity evaluate attributes from different viewpoints, the two measures are combined to assess attribute significance in information systems, and a heuristic attribute reduction algorithm is given. Experimental results show that the proposed algorithm achieves good attribute reduction performance.
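A simplified sketch of the general idea on crisp symbolic data: greedy forward selection driven by a significance that combines dependency gain with granularity decrease. It does not implement the paper's fuzzy neighborhood model; the equal weighting alpha=0.5, the toy table, and all function names are illustrative assumptions.

```python
def partition(data, attrs):
    """Equivalence classes induced by an attribute subset (crisp, symbolic data)."""
    blocks = {}
    for i, row in enumerate(data):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(blocks.values())

def dependency(data, attrs, decision):
    """gamma(attrs -> d): fraction of objects whose block is decision-consistent."""
    if not attrs:
        return 0.0
    pos = sum(len(b) for b in partition(data, attrs)
              if len({data[i][decision] for i in b}) == 1)
    return pos / len(data)

def granularity(data, attrs):
    """GK of the partition induced by attrs; 1.0 for the empty set (one big block)."""
    n = len(data)
    if not attrs:
        return 1.0
    return sum(len(b) ** 2 for b in partition(data, attrs)) / (n * n)

def combined_significance(data, red, a, decision, alpha=0.5):
    """Significance of adding a: weighted dependency gain plus granularity drop.
    The 50/50 weighting is an illustrative choice, not taken from the paper."""
    gain_dep = dependency(data, red + [a], decision) - dependency(data, red, decision)
    gain_gran = granularity(data, red) - granularity(data, red + [a])
    return alpha * gain_dep + (1 - alpha) * gain_gran

def heuristic_reduct(data, conditional, decision):
    """Greedy forward selection until the subset reaches the dependency of all attributes."""
    target = dependency(data, conditional, decision)
    red = []
    while dependency(data, red, decision) < target:
        best = max((a for a in conditional if a not in red),
                   key=lambda a: combined_significance(data, red, a, decision))
        red.append(best)
    return red

# toy decision table: columns 0-2 are conditional attributes, column 3 is the decision
table = [
    [1, 0, 0, "yes"],
    [1, 1, 0, "yes"],
    [0, 1, 1, "no"],
    [0, 0, 1, "no"],
    [1, 0, 1, "yes"],
]
print(heuristic_reduct(table, [0, 1, 2], decision=3))   # [0]
```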

8.
The classical multi-granulation rough set model uses multiple equivalence relations (a multi-granulation structure) to approximate the target set. Based on optimistic and pessimistic strategies, common multi-granulation rough sets fall into two types: optimistic and pessimistic multi-granulation rough sets. However, these two models lack practicality: one is too strict and the other too loose. In addition, multi-granulation rough set models are time-consuming because all objects must be traversed when approximating a concept. To remedy these shortcomings and broaden the applicability of multi-granulation rough sets, this paper first introduces an adjustable multi-granulation rough set model in incomplete information systems and then defines a local adjustable multi-granulation rough set model. It is proved that the local adjustable model and the adjustable model have the same lower and upper approximations. By defining concepts such as lower approximation consistent sets, lower approximation reducts, lower approximation quality, lower approximation quality reducts, and inner/outer significance, an attribute reduction method based on local adjustable multi-granulation rough sets is proposed, and a heuristic reduction algorithm based on granulation significance is constructed. Finally, an example illustrates the effectiveness of the method. Experimental results show that the local adjustable multi-granulation rough set model can accurately handle data in incomplete information systems while reducing algorithmic complexity.
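A minimal sketch of the two classical strategies contrasted above: the optimistic lower approximation requires the equivalence class of x under at least one granulation to be contained in the target, while the pessimistic one requires containment under every granulation. The universe, partitions, and target below are illustrative; the adjustable and local adjustable models of the paper are not implemented here.

```python
def block_of(x, partition):
    """Equivalence class of x under one granulation, given as a partition of the universe."""
    return next(set(b) for b in partition if x in b)

def optimistic_lower(universe, target, partitions):
    """Optimistic MGRS lower approximation: [x]_Ai ⊆ X for at least one granulation Ai."""
    return {x for x in universe if any(block_of(x, p) <= target for p in partitions)}

def pessimistic_lower(universe, target, partitions):
    """Pessimistic MGRS lower approximation: [x]_Ai ⊆ X for every granulation Ai."""
    return {x for x in universe if all(block_of(x, p) <= target for p in partitions)}

# toy example: one universe, two granulations (two partitions), one target concept
U = {"x1", "x2", "x3", "x4"}
P1 = [["x1", "x2"], ["x3"], ["x4"]]
P2 = [["x1"], ["x2", "x3"], ["x4"]]
X = {"x1", "x2"}
print(sorted(optimistic_lower(U, X, [P1, P2])))   # ['x1', 'x2']
print(sorted(pessimistic_lower(U, X, [P1, P2])))  # ['x1']
```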

9.
To extend the covering rough set model, the generalized covering decision information system model proposed by Zhang Yanlan et al. is studied with a multi-granulation approach; covering lower and upper approximations in the multi-granulation sense are defined, and a multi-granulation attribute reduction algorithm is proposed. An example is used to compare the multi-granulation covering rough set attribute reduction method with the single-granulation method proposed by Hu Qinghua et al.

10.
Classical rough set theory holds that knowledge has granularity, but it does not quantify the amount of information represented by information granules. This paper defines the concepts of the number of knowledge granules, the number of micro-granules in a knowledge granule, the average number of micro-granules, the near-1 coefficient, and knowledge granularity; proposes a quantitative method for computing knowledge granularity; and proposes a knowledge-granularity-based attribute reduction algorithm to avoid blindness in selecting reduct subsets. The time complexity is proved, and a decision-making example on maritime traffic accidents shows that the proposed granularity computation method is feasible and effective.

11.
Attribute reduction is viewed as an important preprocessing step for pattern recognition and data mining. Most research has focused on attribute reduction using rough sets. Recently, Tsang et al. discussed attribute reduction with covering rough sets (Tsang et al., 2008), where an approach based on the discernibility matrix was presented to compute all attribute reducts. In this paper, we provide a new method for constructing a simpler discernibility matrix with covering-based rough sets, and we improve some characterizations of attribute reduction provided by Tsang et al. It is proved that the improved discernibility matrix is equivalent to the old one, but its computational complexity is reduced. We then further study attribute reduction in decision tables based on a different strategy of identifying objects. Finally, the proposed reduction method is compared with some existing feature selection methods in numerical experiments, and the results show that it is efficient and effective.
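A minimal sketch of the classical Pawlak-style discernibility matrix for a decision table, not the covering-based construction of Tsang et al.; the toy table and column indices are illustrative.

```python
def discernibility_matrix(data, conditional, decision):
    """Classical discernibility matrix of a decision table:
    entry (i, j) holds the conditional attributes on which objects i and j differ,
    recorded only when their decision values differ."""
    n = len(data)
    matrix = {}
    for i in range(n):
        for j in range(i + 1, n):
            if data[i][decision] != data[j][decision]:
                matrix[(i, j)] = frozenset(a for a in conditional
                                           if data[i][a] != data[j][a])
    return matrix

# toy decision table: columns 0-2 conditional, column 3 decision
table = [
    [1, 0, 0, "yes"],
    [0, 0, 1, "no"],
    [1, 1, 1, "yes"],
    [0, 1, 1, "no"],
]
for pair, attrs in discernibility_matrix(table, [0, 1, 2], 3).items():
    print(pair, sorted(attrs))
# attributes that appear alone in some entry form the core;
# reducts are the prime implicants of the conjunction of all entries
```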

12.
Attribute selection with fuzzy decision reducts
Rough set theory provides a methodology for data analysis based on the approximation of concepts in information systems. It revolves around the notion of discernibility: the ability to distinguish between objects based on their attribute values. It allows one to infer data dependencies that are useful in the fields of feature selection and decision model construction. In many cases, however, it is more natural, and more effective, to consider a gradual notion of discernibility. Therefore, within the context of fuzzy rough set theory, we present a generalization of the classical rough set framework for data-based attribute selection and reduction using fuzzy tolerance relations. The paper unifies existing work in this direction and introduces the concept of fuzzy decision reducts, dependent on an increasing attribute subset measure. Experimental results demonstrate the potential of fuzzy decision reducts to discover shorter attribute subsets, leading to decision models with better coverage and with comparable, or even higher, accuracy.
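A minimal sketch of the gradual notion of discernibility, assuming one common fuzzy tolerance relation on a numerical attribute, R(a, b) = 1 - |a - b| / range, and a simple fuzzy positive-region dependency built from it; the paper's attribute subset measures may differ, and the data below are illustrative.

```python
def fuzzy_similarity(a, b, attr_range):
    """A common fuzzy tolerance relation on one numerical attribute (clipped at 0)."""
    return max(0.0, 1.0 - abs(a - b) / attr_range)

def relation_matrix(values, attr_range):
    """Pairwise fuzzy tolerance degrees for a single attribute."""
    return [[fuzzy_similarity(x, y, attr_range) for y in values] for x in values]

def fuzzy_dependency(values, labels, attr_range):
    """A simple fuzzy positive-region dependency: each object's membership to the
    positive region is the minimum, over objects of a different class, of 1 - R(x, y);
    the dependency is the average membership over the universe."""
    R = relation_matrix(values, attr_range)
    n = len(values)
    memberships = []
    for i in range(n):
        others = [1.0 - R[i][j] for j in range(n) if labels[j] != labels[i]]
        memberships.append(min(others) if others else 1.0)
    return sum(memberships) / n

# toy numerical attribute with two decision classes
values = [0.1, 0.2, 0.8, 0.9]
labels = ["a", "a", "b", "b"]
print(fuzzy_dependency(values, labels, attr_range=1.0))   # 0.65
```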

13.
陈亚菲  王霞 《计算机科学》2015,42(6):107-110, 134
Promoting and implementing the concept of effective teaching is one of the key projects of the new curriculum reform, yet problems such as unclear teaching outcomes keep emerging as the effective-teaching policy is carried out. This paper first gives a reduction algorithm based on rough set reduction theory. Then, based on a field survey of the factors influencing effective teaching, an analysis table of these factors is obtained by random sampling and converted into a decision information system. The reduction algorithm is then used to analyze this example, yielding distribution reducts, maximum distribution reducts, assignment reducts, lower approximation reducts, and upper approximation reducts. Finally, the reduction results of the decision information system of factors influencing effective teaching are interpreted, so as to guide the design of effective teaching plans and improve the effectiveness of teaching.

14.
Neighborhood rough sets can handle numerical data directly, and F-rough sets were the first dynamic rough set model. For dynamically changing numerical data, this paper combines the advantages of neighborhood rough sets and F-rough sets and proposes F-neighborhood rough sets and F-neighborhood parallel reduction. First, the lower and upper approximations and the boundary region of F-neighborhood rough sets are defined; second, the F-attribute dependency and the attribute significance matrix are introduced within F-neighborhood rough sets; based on the F-attribute dependency and the attribute significance matrix, it then proposes, respectively, ...

15.
Attribute reduction is considered an important preprocessing step for pattern recognition, machine learning, and data mining. This paper provides a systematic study of attribute reduction with rough sets based on general binary relations. We define a relation information system, a consistent relation decision system, and a relation decision system, together with their attribute reductions. Furthermore, we present a judgment theorem and a discernibility matrix associated with attribute reduction in each type of system; based on the discernibility matrix, all reducts can be computed. Finally, experimental results on UCI data sets show that the proposed reduction methods are an effective technique for dealing with complex data sets.
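A minimal sketch of relation-based lower and upper approximations using successor neighborhoods R(x) = {y : (x, y) ∈ R}; the binary relation and target set below are illustrative, and the paper's relation information and decision systems add structure not shown here.

```python
def successor(x, relation):
    """Successor neighborhood of x under a general binary relation R."""
    return {y for (a, y) in relation if a == x}

def lower_approx(universe, target, relation):
    """Relation-based lower approximation: R(x) ⊆ X."""
    return {x for x in universe if successor(x, relation) <= target}

def upper_approx(universe, target, relation):
    """Relation-based upper approximation: R(x) ∩ X ≠ ∅."""
    return {x for x in universe if successor(x, relation) & target}

# toy universe and a binary relation that is not an equivalence relation
U = {"x1", "x2", "x3", "x4"}
R = {("x1", "x1"), ("x1", "x2"), ("x2", "x2"),
     ("x3", "x2"), ("x3", "x4"), ("x4", "x4")}
X = {"x1", "x2"}
print(sorted(lower_approx(U, X, R)))   # ['x1', 'x2']
print(sorted(upper_approx(U, X, R)))   # ['x1', 'x2', 'x3']
```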

16.
Multi-granulation rough sets are inherently heterogeneous, but they have not yet been applied to heterogeneous data processing. From the perspective of absolute reduction, this paper proposes two-level absolute reducts for multi-granulation rough sets: the multi-granulation absolute reduct and the multi-granulation absolute granulation reduct. The properties of the two-level absolute reducts are analyzed, especially from the viewpoint of heterogeneous data reduction, and an algorithm for computing them is proposed. Theoretical analysis and examples demonstrate the feasibility of the algorithm.

17.
Induction of multiple fuzzy decision trees based on rough set technique
The integration of fuzzy sets and rough sets leads to a hybrid soft-computing technique that has been applied successfully to many fields such as machine learning, pattern recognition, and image processing. The key to this technique is how to construct and use fuzzy attribute reducts in fuzzy rough set theory. Given a fuzzy information system, many fuzzy attribute reducts may be found, and each can contribute differently to decision-making. If only one of them, perhaps the most important one, is selected to induce decision rules, useful information hidden in the other reducts will unavoidably be lost. To make full use of the information provided by every individual fuzzy attribute reduct in a fuzzy information system, this paper presents a novel induction of multiple fuzzy decision trees based on rough set technique. The induction consists of three stages. First, several fuzzy attribute reducts are found by a similarity-based approach; then a fuzzy decision tree is generated for each fuzzy attribute reduct according to the fuzzy ID3 algorithm. The fuzzy integral is finally used as a fusion tool that combines the outputs of the multiple fuzzy decision trees into the final decision result. An illustration is given to show the proposed fusion scheme. A numerical experiment on real data indicates that the proposed multiple-tree induction is superior to single-tree induction based on an individual reduct or on the entire feature set for learning problems with many attributes.

18.
Traditional rough set theory is mainly used to extract rules from and reduce attributes in databases whose attributes are characterized by partitions, while covering rough set theory, a generalization of traditional rough set theory, does the same for attributes characterized by covers. In this paper, we propose a way to reduce the attributes of covering decision systems, which are databases characterized by covers. First, we define consistent and inconsistent covering decision systems and their attribute reductions. Then, we state the sufficient and necessary conditions for reduction. Finally, we use a discernibility matrix to design algorithms that compute all the reducts of consistent and inconsistent covering decision systems. Numerical tests on four public data sets show that the proposed attribute reductions of covering decision systems achieve better classification performance than those of traditional rough sets.
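A minimal sketch, assuming one common pair of element-based covering approximation operators in which the neighborhood N(x) is the intersection of all cover blocks containing x; the covering literature defines several non-equivalent operators, so the paper's choice may differ, and the cover below is illustrative.

```python
def neighborhood(x, cover):
    """N(x): intersection of all blocks of the cover that contain x."""
    blocks = [set(b) for b in cover if x in b]
    nb = blocks[0]
    for b in blocks[1:]:
        nb = nb & b
    return nb

def covering_lower(universe, target, cover):
    """Element-based covering lower approximation: N(x) ⊆ X."""
    return {x for x in universe if neighborhood(x, cover) <= target}

def covering_upper(universe, target, cover):
    """Element-based covering upper approximation: N(x) ∩ X ≠ ∅."""
    return {x for x in universe if neighborhood(x, cover) & target}

# toy cover: blocks may overlap, unlike the blocks of a partition
U = {"x1", "x2", "x3", "x4"}
C = [["x1", "x2"], ["x2", "x3"], ["x3", "x4"]]
X = {"x2", "x3"}
print(sorted(covering_lower(U, X, C)))   # ['x2', 'x3']
print(sorted(covering_upper(U, X, C)))   # ['x1', 'x2', 'x3', 'x4']
```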
