Similar Literature (20 results found)
1.
Traditional rough set theory is mainly used to extract rules from, and reduce attributes in, databases whose attributes are characterized by partitions, while covering rough set theory, a generalization of traditional rough set theory, does the same for databases whose attributes are characterized by covers. In this paper, we propose a way to reduce the attributes of covering decision systems, which are databases characterized by covers. First, we define consistent and inconsistent covering decision systems and their attribute reductions. Then, we state the sufficient and necessary conditions for reduction. Finally, we use a discernibility matrix to design algorithms that compute all the reducts of consistent and inconsistent covering decision systems. Numerical tests on four public data sets show that the proposed attribute reductions of covering decision systems achieve better classification performance than those of traditional rough sets.
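To make the discernibility-matrix idea above concrete, here is a minimal Python sketch in the classical, partition-based setting (not the covering-based definitions proposed in the paper); the toy decision table and attribute names are hypothetical. Each matrix entry collects the condition attributes that distinguish a pair of objects whose decision values differ, and reducts are exactly the minimal attribute sets that intersect every entry.

def discernibility_matrix(table, condition_attrs, decision_attr):
    # Entry (i, j) holds the condition attributes on which objects i and j
    # differ, recorded only when their decision values also differ.
    matrix = {}
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if table[i][decision_attr] != table[j][decision_attr]:
                matrix[(i, j)] = {a for a in condition_attrs
                                  if table[i][a] != table[j][a]}
    return matrix

# Hypothetical toy decision table with two condition attributes.
table = [
    {"a1": 0, "a2": 1, "d": "yes"},
    {"a1": 0, "a2": 0, "d": "no"},
    {"a1": 1, "a2": 1, "d": "no"},
]
print(discernibility_matrix(table, ["a1", "a2"], "d"))
# {(0, 1): {'a2'}, (0, 2): {'a1'}} -- so {'a1', 'a2'} is the only reduct here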

2.
Covering generalized rough sets are an improvement of the traditional rough set model that deals with more complex practical problems the traditional one cannot handle. It is well known that any generalization of traditional rough set theory should first have a practical applied background, and two important theoretical issues must be addressed: the first is to present reasonable definitions of the set approximations, and the second is to develop reasonable algorithms for attribute reduction. Existing work on covering generalized rough sets, however, mainly pays attention to constructing approximation operators: the ideas for constructing lower approximations are similar, but the ideas for constructing upper approximations differ and all seem unreasonable. Furthermore, less effort has been devoted to discussing the applied background and attribute reduction of covering generalized rough sets. In this paper we concentrate on these two issues. We first discuss the applied background of covering generalized rough sets by proposing three kinds of datasets which traditional rough sets cannot handle, and we improve the definition of the upper approximation for covering generalized rough sets to make it more reasonable than the existing ones. We then study attribute reduction with covering generalized rough sets and present an algorithm that uses a discernibility matrix to compute all attribute reducts. These discussions lay a basic foundation for covering generalized rough set theory and broaden its applications.

3.
Attribute reduction is considered an important preprocessing step for pattern recognition, machine learning, and data mining. This paper provides a systematic study of attribute reduction with rough sets based on general binary relations. We define a relation information system, a consistent relation decision system, and a relation decision system, together with their attribute reductions. Furthermore, we present a judgment theorem and a discernibility matrix associated with attribute reduction in each type of system; based on the discernibility matrix, all reducts can be computed. Finally, experimental results on UCI data sets show that the proposed reduction methods are an effective technique for dealing with complex data sets.
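As a hedged illustration of how approximations can be built from a general binary relation (one common successor-neighborhood construction; the paper's exact definitions may differ), the sketch below uses a made-up universe and relation:

def successor(x, relation):
    # Successor neighborhood of x: all y with (x, y) in the relation.
    return {y for (a, y) in relation if a == x}

def lower_upper(universe, relation, target):
    # Generalized lower/upper approximations of `target` via successor
    # neighborhoods (empty neighborhoods are not treated specially here).
    lower = {x for x in universe if successor(x, relation) <= target}
    upper = {x for x in universe if successor(x, relation) & target}
    return lower, upper

U = {1, 2, 3}
R = {(1, 1), (1, 2), (2, 2), (3, 3)}   # an arbitrary, non-equivalence relation
print(lower_upper(U, R, {1, 2}))        # ({1, 2}, {1, 2})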

4.
Fuzzy rough sets are considered an effective tool for dealing with uncertainty in data analysis, and fuzzy similarity relations are used in fuzzy rough sets to calculate the similarity between objects. On the other hand, in kernel methods a kernel maps data into a higher-dimensional feature space in which the structure of the learning task becomes linearly separable; the kernel is the inner product of that feature space and can also be viewed as a similarity function. It has been reported that there is an overlap between the family of kernels and the collection of fuzzy similarity relations. This fact motivates the idea of this paper: to use some kernels as fuzzy similarity relations and develop kernel-based fuzzy rough sets. First, we consider the Gaussian kernel and propose Gaussian kernel based fuzzy rough sets. Second, we introduce parameterized attribute reduction with the derived model of fuzzy rough sets. The structure of attribute reduction is investigated, and an algorithm based on the discernibility matrix to find all reducts is developed. Finally, a heuristic algorithm is designed to compute reducts with Gaussian kernel fuzzy rough sets. Several experiments are provided to demonstrate the effectiveness of the idea.
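A minimal sketch (not the authors' implementation) of the starting point described here: the Gaussian kernel computed over all object pairs is reflexive and symmetric, so its value matrix can be read as a fuzzy similarity relation. The bandwidth sigma and the toy data below are assumptions.

import numpy as np

def gaussian_kernel_relation(X, sigma=1.0):
    # Pairwise k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)); the diagonal is 1
    # (reflexive) and the matrix is symmetric, as a fuzzy similarity relation
    # requires.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1]])   # hypothetical objects
print(np.round(gaussian_kernel_relation(X, sigma=0.5), 3))
# nearby objects get similarity close to 1, distant ones close to 0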

5.
An improved attribute reduction method based on rough sets and information gain (cited 2 times: 0 self-citations, 2 by others)
Having too many attributes hinders effective data mining, and generating the discernibility matrix during reduction consumes considerable storage. To address these problems, an improved attribute reduction algorithm based on rough sets and information gain is proposed. The algorithm first uses information gain to analyze the relevance of the attributes in the decision table and deletes some redundant attributes, reducing the complexity of attribute reduction; it then extracts the discernibility function directly from the decision table and computes the attribute reducts. Because the generation of the discernibility matrix is avoided, the algorithm saves both time and space and improves efficiency.
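A hedged sketch of the pre-filtering step described above: compute the information gain of each condition attribute with respect to the decision and drop attributes whose gain falls below some threshold before building the discernibility function. The toy table and the threshold are assumptions, not the paper's data.

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of decision labels.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(table, attr, decision):
    # IG(decision; attr) = H(decision) - H(decision | attr).
    base = entropy([row[decision] for row in table])
    cond = 0.0
    for value in {row[attr] for row in table}:
        subset = [row[decision] for row in table if row[attr] == value]
        cond += len(subset) / len(table) * entropy(subset)
    return base - cond

table = [                                   # hypothetical decision table
    {"a1": 0, "a2": 1, "d": "yes"},
    {"a1": 0, "a2": 0, "d": "no"},
    {"a1": 1, "a2": 1, "d": "no"},
    {"a1": 1, "a2": 0, "d": "no"},
]
gains = {a: information_gain(table, a, "d") for a in ["a1", "a2"]}
kept = [a for a, g in gains.items() if g > 0.05]   # drop low-gain attributes
print(gains, kept)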

6.
A fast rough set attribute reduction algorithm based on the discernibility matrix (cited 1 time: 0 self-citations, 1 by others)
When the Core Searching algorithm proposed by Karno Bozi inserts a candidate attribute into the reduct, it must repeatedly scan all remaining entries of the discernibility matrix according to attribute occurrence counts until the matrix is empty, which incurs a heavy computational load and may leave redundant attributes in the result. Building on the Core Searching algorithm, a fast discernibility-matrix-based attribute reduction algorithm is proposed that maintains a counter for each attribute. Case analysis shows that, compared with the Core Searching algorithm, the proposed algorithm reduces both the amount of computation and the number of iterations while producing a more compact result, making it a fast and efficient attribute reduction algorithm.

7.
Attribute reduction is one of the most important issues in research on rough set theory. Numerous significance-measure-based heuristic attribute reduction algorithms have been presented to achieve the optimal reduct. However, how to handle the situation in which multiple attributes have equally large significance remains largely unknown. In this regard, an enhancement for heuristic attribute reduction (EHAR) in rough sets is proposed. In some rounds of the attribute-adding process, attributes sharing the same largest significance are not selected at random; instead, attribute combinations are built and their significances compared, and the most significant combination, rather than a randomly selected single attribute, is added to the reduct. With the application of EHAR, two representative heuristic attribute reduction algorithms are improved. Several experiments illustrate the proposed EHAR. The experimental results show that the algorithms enhanced with EHAR perform better at achieving the optimal reduct.
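The tie-breaking step could look roughly like the sketch below (a guess at the structure, not the paper's code): when several attributes share the largest significance, small combinations of them are scored and the best-scoring combination is added to the reduct. The significance function is a placeholder supplied by the caller, and the toy scorer is hypothetical.

from itertools import combinations

def add_best_combination(reduct, tied_attrs, significance, max_size=2):
    # Score combinations of the tied attributes (up to max_size) together with
    # the current reduct, and add the most significant combination.
    best_combo, best_score = None, float("-inf")
    for size in range(1, max_size + 1):
        for combo in combinations(tied_attrs, size):
            score = significance(reduct | set(combo))
            if score > best_score:
                best_combo, best_score = set(combo), score
    return reduct | best_combo

# Hypothetical significance (e.g. a dependency degree) used only for the demo.
sig = lambda attrs: len(attrs & {"a2", "a3"})
print(sorted(add_best_combination({"a1"}, ["a2", "a3", "a4"], sig)))
# ['a1', 'a2', 'a3']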

8.
An attribute reduction algorithm based on the discernibility matrix (cited 5 times: 3 self-citations, 5 by others)
Attribute reduction is one of the key problems in rough set theory research. Taking the frequency with which attributes appear in the discernibility matrix as a heuristic, this paper makes several improvements to the HORAFA algorithm. Starting from the core, the algorithm repeatedly adds the attribute with the greatest significance until no more can be added. To find the optimal reduct of an information system, a backward elimination step is then applied on this basis, removing attributes until no more can be deleted. Finally, the method is fully demonstrated on an example, confirming its effectiveness.

9.
A support vector machine image interpolation method based on rough set reduction is proposed, aiming to improve the efficiency of learning-based interpolation methods and to reduce edge blurring in enlarged images. First, a training sample set is constructed on the original image using the known pixel gray values and the correlation between pixels within a neighborhood. Then, a rough set reduction algorithm removes the features of low significance, and the reduced sample set is used to train a support vector machine. The trained SVM and the test samples are then used to estimate the gray values of pixels in even rows and even columns, and finally to estimate the remaining unknown pixel gray values. Simulations show that the proposed method effectively improves interpolation efficiency, achieves good objective metrics, and produces satisfactory interpolated images.

10.
A heuristic knowledge reduction algorithm based on the discernibility matrix is proposed. Using the frequency with which attributes occur in the discernibility matrix as heuristic information, the algorithm constructs a new decision table and repeatedly selects the most frequently occurring attribute until the selected attributes preserve the classification ability of the original decision table; the resulting set is then a reduct. Experimental results show that in most cases the algorithm finds a minimal reduct or a satisfactory suboptimal solution.
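A minimal sketch of this kind of frequency heuristic (a greedy stand-in, not the paper's exact algorithm): repeatedly add the attribute that occurs in the most still-uncovered discernibility entries until every entry is covered. The example entries are made up.

from collections import Counter

def frequency_reduct(disc_entries):
    # Greedy cover of the discernibility entries by attribute frequency.
    uncovered = [set(e) for e in disc_entries if e]
    reduct = set()
    while uncovered:
        counts = Counter(a for entry in uncovered for a in entry)
        best = counts.most_common(1)[0][0]
        reduct.add(best)
        uncovered = [entry for entry in uncovered if best not in entry]
    return reduct

print(frequency_reduct([{"a1", "a2"}, {"a2", "a3"}, {"a2"}]))   # {'a2'}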

11.
Attribute reduction with variable precision rough sets (VPRS) attempts to select the most information-rich attributes from a dataset by incorporating a controlled degree of misclassification into the approximations of rough sets. However, the existing attribute reduction algorithms with VPRS have no incremental mechanism for handling dynamic datasets with increasing samples, so they are computationally time-consuming on such datasets. Therefore, this paper presents an incremental algorithm for attribute reduction with VPRS, in order to address the time complexity of current algorithms. First, two Boolean row vectors are introduced to characterize the discernibility matrix and the reduct in VPRS. Then, an incremental manner is employed to update the minimal elements of the discernibility matrix when an incremental sample arrives. Based on this, a deeper insight into the attribute reduction process reveals which attributes should be added to and/or deleted from the current reduct, and our incremental algorithm is designed accordingly. Finally, experimental comparisons validate the effectiveness of the proposed incremental algorithm.

12.
Attribute reduction in decision-theoretic rough set models (cited 6 times: 0 self-citations, 6 by others)
Yiyu Yao, Information Sciences, 2008, 178(17): 3356-3373
Rough set theory can be applied to rule induction. There are two different types of classification rules, positive and boundary rules, leading to different decisions and consequences. They can be distinguished not only by syntactic measures such as confidence, coverage and generality, but also by semantic measures such as decision-monotonicity, cost and risk. The classification rules can be evaluated locally for each individual rule, or globally for a set of rules. Both types of classification rules can be generated from, and interpreted by, a decision-theoretic model, which is a probabilistic extension of the Pawlak rough set model. As an important concept of rough set theory, an attribute reduct is a subset of attributes that are jointly sufficient and individually necessary for preserving a particular property of a given information table. This paper addresses attribute reduction in decision-theoretic rough set models with respect to different classification properties, such as decision-monotonicity, confidence, coverage, generality and cost. It is important to note that many of these properties can be truthfully reflected by a single measure γ in the Pawlak rough set model. In probabilistic models, on the other hand, they need to be considered separately, and a straightforward extension of the γ measure is unable to evaluate them. This study provides new insight into the problem of attribute reduction.
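For reference, the single measure γ mentioned here is the Pawlak dependency degree: the fraction of objects whose equivalence class under the chosen attributes falls entirely inside one decision class. A small sketch with a hypothetical table:

from collections import defaultdict

def gamma_measure(table, attrs, decision):
    # gamma = |positive region| / |U|: count objects whose block under `attrs`
    # is consistent (contains a single decision value).
    blocks = defaultdict(list)
    for row in table:
        blocks[tuple(row[a] for a in attrs)].append(row[decision])
    positive = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return positive / len(table)

table = [
    {"a1": 0, "a2": 1, "d": "yes"},
    {"a1": 0, "a2": 1, "d": "no"},   # conflicts with the first object
    {"a1": 1, "a2": 0, "d": "no"},
]
print(gamma_measure(table, ["a1", "a2"], "d"))   # 1/3: only the last object is positive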

13.
Generally speaking, there are four fuzzy approximation operators defined within a general triangular norm (t-norm) framework in fuzzy rough sets, and different types of t-norms specify different approximation operators. The question of whether and how these different fuzzy approximation operators affect the result of attribute reduction then arises. This paper addresses the issue from a theoretical viewpoint by reviewing attribute reduction with fuzzy rough sets and then stating and proving theorems that demonstrate the effects of the fuzzy approximation operators on the results of attribute reduction. First, we review some notions of attribute reduction with fuzzy rough sets, such as the positive region, dependency degree and attribute reduction. We then present and prove theorems that describe how, and to what degree, the fuzzy approximation operators impact the performance of attribute reduction. Finally, we report experimental simulation results that demonstrate the effectiveness and correctness of the theoretical contributions. One main contribution of this paper is the proof that each attribute reduct obtained using one type of fuzzy lower approximation operator always contains a reduct obtained using the other type of fuzzy lower approximation operator.
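To illustrate how the choice of operator enters the definitions (a sketch only; the Kleene-Dienes implicator used here is one common option, not necessarily among the operators the paper compares), the fuzzy lower approximation of a fuzzy set A under a fuzzy relation R can be computed as (R↓A)(x) = min over y of max(1 - R(x, y), A(y)). The relation and decision set below are hypothetical.

import numpy as np

def fuzzy_lower_approx(R, A):
    # Lower approximation with the Kleene-Dienes implicator max(1 - a, b);
    # choosing a different implicator / t-norm family changes the memberships
    # and hence, potentially, the computed reducts.
    return np.min(np.maximum(1.0 - R, A[None, :]), axis=1)

R = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])     # hypothetical fuzzy similarity relation
A = np.array([1.0, 0.9, 0.0])       # hypothetical fuzzy decision set
print(fuzzy_lower_approx(R, A))      # [0.9 0.8 0. ]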

14.
Efficient attribute reduction in large, incomplete decision systems is a challenging problem; existing approaches have time complexities no less than O(|C|²|U|²). This paper derives some important properties of incomplete information systems, then constructs a positive region-based algorithm to solve the attribute reduction problem with a time complexity no more than O(|C|²|U|log|U|). Furthermore, our approach does not change the size of the original incomplete system. Numerical experiments show that the proposed approach is indeed efficient, and therefore of practical value to many real-world problems. The proposed algorithm can be applied to both consistent and inconsistent incomplete decision systems.

15.
Building on system entropy, a new measure of attribute significance is defined and a rough set attribute reduction algorithm based on improved system entropy is proposed. Experimental analysis shows that attribute reduction using this significance measure as heuristic information achieves satisfactory results.
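The paper's improved system entropy is not reproduced here; as a hedged stand-in, the sketch below uses ordinary Shannon conditional entropy and defines the significance of an attribute as the drop in conditional entropy when it is added to the already-selected subset. The table and names are hypothetical.

import math
from collections import Counter, defaultdict

def conditional_entropy(table, attrs, decision):
    # H(decision | attrs) over the partition induced by the attribute subset.
    blocks = defaultdict(list)
    for row in table:
        blocks[tuple(row[a] for a in attrs)].append(row[decision])
    h = 0.0
    for labels in blocks.values():
        w = len(labels) / len(table)
        h -= w * sum(c / len(labels) * math.log2(c / len(labels))
                     for c in Counter(labels).values())
    return h

def significance(table, attr, selected, decision):
    # Entropy-based significance of `attr` relative to the selected attributes.
    return (conditional_entropy(table, selected, decision)
            - conditional_entropy(table, selected + [attr], decision))

table = [
    {"a1": 0, "a2": 1, "d": "yes"},
    {"a1": 0, "a2": 0, "d": "no"},
    {"a1": 1, "a2": 1, "d": "no"},
]
print(round(significance(table, "a2", ["a1"], "d"), 3))   # 0.667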

16.
Rough set theory has drawn great attention in recent decades, and multi-granulation rough sets (MGRS) are a striking direction within it, providing a formal theoretical framework for solving complex problems under multiple binary relations. However, the fusion of multi-granulation rough sets with grey systems for knowledge acquisition remains a gap. Toward this end, we devise grey multi-granulation rough sets (GMGRS) by taking multiple grey relational relations into consideration within the MGRS framework. In a grey information system, the constructed grey relational relation, which measures the relationship among objects, can be used to establish multiple binary relations. Based on two different approximation strategies (seeking commonality while reserving differences, and seeking commonality while eliminating differences), two types of GMGRS are presented. After discussing several important properties of GMGRS, we find that the properties of the proposed GMGRS are consistent with those of classical MGRS. Meanwhile, to obtain attribute reductions under GMGRS, we reconstruct the significance measure and termination criterion based on the θ-precision pessimistic GMGRS. Finally, theoretical studies and practical examples demonstrate that the proposed GMGRS greatly enriches MGRS theory and provides a new technique for knowledge discovery that is practical in real-world scenarios.

17.
The risk factors affecting underground construction are numerous and complex. To eliminate unnecessary or unimportant factors, a new immune-based rough set attribute reduction algorithm, IRSAR, is proposed. The algorithm initializes the antibody population from the relative core of the decision table and defines new affinity and clonal proliferation functions, which effectively accelerate the convergence of affinity. A memory array stores the antibodies satisfying the condition in each generation, and an update strategy for the memory array is designed to obtain the optimal antibody. Experimental results show that the IRSAR algorithm can quickly produce reasonable and effective reduction results.

18.
A continuous attribute reduction algorithm based on interval type-2 fuzzy rough sets (cited 1 time: 0 self-citations, 1 by others)
Type-1 fuzzy rough sets can handle continuous attribute sets directly but cannot cope with highly uncertain data, whereas interval type-2 fuzzy sets strengthen a system's ability to handle uncertainty. To improve the accuracy of processing noisy data, interval type-2 fuzzy rough sets are defined on the basis of type-1 fuzzy rough sets. Attribute reduction for decision information systems over continuous domains is studied under the interval type-2 fuzzy rough set model, and a new reduction algorithm is given via a compact computation domain. Owing to the rejection variable set, the proposed reduction algorithm converges in finite time and yields more reasonable results. Numerical simulations verify the feasibility and effectiveness of the reduction algorithm.

19.
Attribute reduction is a key problem in rough set theory research. Since computing all reducts of a decision system is an NP problem, an optimized attribute reduction algorithm for decision systems based on the discernibility matrix is proposed. Discernibility sets are obtained by improving the discernibility matrix; on the basis of the core and the candidate reduct information, attribute frequency is used as heuristic information to compute all reducts of the decision system quickly and effectively. Analysis shows the feasibility and effectiveness of the algorithm.

20.