Similar Articles
1.
Attribute reduction is one of the most important issues in rough set theory research. Numerous significance-measure-based heuristic attribute reduction algorithms have been presented to achieve the optimal reduct. However, how to handle the situation in which multiple attributes have equally large significances is still largely unknown. In this regard, an enhancement for heuristic attribute reduction (EHAR) in rough sets is proposed. In rounds of the attribute-adding process where several attributes share the largest significance, one of them is not selected at random; instead, combinations of these attributes are built and their significances are compared, and the most significant combination, rather than a randomly chosen single attribute, is added to the reduct. With the application of EHAR, two representative heuristic attribute reduction algorithms are improved. Several experiments illustrate the proposed EHAR. The experimental results show that the enhanced algorithms with EHAR achieve superior performance in obtaining the optimal reduct.
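A minimal Python sketch of this tie-breaking idea, not the authors' code: it uses positive-region dependency as the significance measure on an illustrative toy table, and when several attributes tie for the largest significance it evaluates combinations of the tied attributes and adds the best one.

```python
from itertools import combinations

# Toy decision table: columns 0-2 are condition attributes, column 3 the decision.
U = [
    (0, 0, 0, 'no'),  (0, 1, 1, 'yes'), (1, 0, 1, 'yes'),
    (1, 1, 0, 'no'),  (0, 0, 1, 'no'),  (1, 1, 1, 'no'),
]
COND, DEC = [0, 1, 2], 3

def dependency(attrs):
    """gamma(attrs) = |POS_attrs(D)| / |U| in the Pawlak model."""
    blocks = {}
    for row in U:
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(row[DEC])
    pos = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return pos / len(U)

def ehar_reduct():
    reduct, remaining = [], list(COND)
    while remaining and dependency(reduct) < dependency(COND):
        sig = {a: dependency(reduct + [a]) - dependency(reduct) for a in remaining}
        best = max(sig.values())
        tied = [a for a in remaining if sig[a] == best]
        if len(tied) == 1:
            chosen = [tied[0]]                      # ordinary heuristic step
        else:
            # EHAR-style step: compare combinations of the tied attributes and
            # add the most significant combination instead of an arbitrary one.
            combos = [list(c) for r in range(1, len(tied) + 1)
                      for c in combinations(tied, r)]
            chosen = max(combos, key=lambda c: dependency(reduct + c))
        reduct += chosen
        remaining = [a for a in remaining if a not in chosen]
    return reduct

# -> [2, 0, 1]: the tie between attributes 0 and 1 is resolved by adding the pair.
print(ehar_reduct())
```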

2.
Adaptive harmony search algorithm and its application to attribute reduction in rough sets
To address the shortcomings of the improved harmony search algorithm (IHS), an adaptive harmony search algorithm (AHS) is proposed. The algorithm adjusts the pitch adjustment rate (PAR) and bandwidth (bw) according to the maximum difference of the variables in the harmony memory, thereby improving search efficiency on multi-dimensional problems. AHS is evaluated on five standard benchmark functions and then applied to attribute reduction in rough sets. Simulation results demonstrate the effectiveness and practicality of the algorithm.
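A rough Python sketch of the adaptation idea as read from this abstract, on a simple continuous minimization problem; the spread-driven rules for PAR and bw below are illustrative placeholders, not the paper's exact formulas.

```python
import random

def sphere(x):                       # benchmark objective to minimize
    return sum(v * v for v in x)

def adaptive_harmony_search(dim=5, hms=10, hmcr=0.9, iters=2000,
                            lo=-10.0, hi=10.0, seed=0):
    random.seed(seed)
    hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        # Adapt PAR and bw from the spread of each variable in the harmony
        # memory (hedged reading: larger spread -> larger adjustments).
        spread = [max(h[j] for h in hm) - min(h[j] for h in hm) for j in range(dim)]
        par = 0.3 + 0.4 * (max(spread) / (hi - lo))        # illustrative rule
        new = []
        for j in range(dim):
            if random.random() < hmcr:
                v = random.choice(hm)[j]                   # memory consideration
                if random.random() < par:
                    bw = 0.01 + spread[j] * 0.1            # spread-driven bandwidth
                    v += random.uniform(-bw, bw)           # pitch adjustment
            else:
                v = random.uniform(lo, hi)                 # random selection
            new.append(min(hi, max(lo, v)))
        # Replace the worst harmony if the new improvisation is better.
        worst = max(range(hms), key=lambda i: sphere(hm[i]))
        if sphere(new) < sphere(hm[worst]):
            hm[worst] = new
    return min(hm, key=sphere)

best = adaptive_harmony_search()
print(round(sphere(best), 6))
```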

3.
To find an effective method for minimal attribute reduction, an ordering of attribute significance over the condition attribute set is defined, and a set-enumeration tree over the attribute set is built on this ordering. A fast minimal attribute reduction algorithm is then proposed that searches the set-enumeration tree with a top-down, level-wise strategy to find the minimal reduct. To improve performance, the algorithm prunes the search space using the core and a superset-pruning strategy, and uses an optimized computation to ensure that the positive region of any given attribute subset is computed only once. Experimental results on UCI data sets show that the algorithm is effective.
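An illustrative Python sketch of a level-wise minimal-reduct search in this spirit, assuming a toy table; it caches positive-region computations so each subset is evaluated once and seeds the search with the core, but it does not reproduce the paper's exact enumeration-tree ordering or pruning rules.

```python
from itertools import combinations
from functools import lru_cache

# Toy decision table: attributes 2 and 3 duplicate attributes 0 and 1, so the
# core is empty and the level-wise search must find a two-attribute reduct.
U = [
    (0, 0, 0, 0, 'a'), (0, 1, 0, 1, 'b'),
    (1, 0, 1, 0, 'b'), (1, 1, 1, 1, 'a'),
]
COND, DEC = (0, 1, 2, 3), 4

@lru_cache(maxsize=None)              # positive region of each subset computed once
def pos_size(attrs):
    blocks = {}
    for row in U:
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(row[DEC])
    return sum(len(v) for v in blocks.values() if len(set(v)) == 1)

def minimal_reduct():
    full = pos_size(COND)
    # core: attributes whose removal from COND lowers the positive region
    core = tuple(a for a in COND
                 if pos_size(tuple(x for x in COND if x != a)) < full)
    if pos_size(core) == full:
        return core
    rest = [a for a in COND if a not in core]
    # level-wise (breadth-first) search: smaller candidates first, so the first
    # subset that preserves the positive region is a minimal reduct
    for k in range(1, len(rest) + 1):
        for extra in combinations(rest, k):
            cand = tuple(sorted(core + extra))
            if pos_size(cand) == full:
                return cand
    return COND

print(minimal_reduct())    # -> (0, 1)
```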

4.
Feature selection (attribute reduction) from large-scale incomplete data is a challenging problem in areas such as pattern recognition, machine learning and data mining. In rough set theory, feature selection from incomplete data aims to retain the discriminatory power of the original features. To address this issue, many feature selection algorithms have been proposed; however, these algorithms are often computationally time-consuming. To overcome this shortcoming, we introduce in this paper a theoretic framework based on rough set theory, called positive approximation, which can be used to accelerate a heuristic process for feature selection from incomplete data. As an application of the proposed accelerator, a general feature selection algorithm is designed. By integrating the accelerator into a heuristic algorithm, we obtain several modified representative heuristic feature selection algorithms in rough set theory. Experiments show that these modified algorithms outperform their original counterparts. It is worth noting that the performance gain of the modified algorithms becomes more visible when dealing with larger data sets.
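A hedged Python sketch of the acceleration idea as described here: after each selection step, objects already in the positive region are dropped, so later significance evaluations run on a shrinking universe. The paper targets incomplete data via tolerance relations; for brevity this toy version uses plain equivalence classes on a complete table, and all names and data are illustrative.

```python
def pos_objects(rows, attrs, dec):
    """Objects lying in the positive region of the decision w.r.t. attrs."""
    blocks = {}
    for row in rows:
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(row)
    pos = []
    for block in blocks.values():
        if len({r[dec] for r in block}) == 1:
            pos.extend(block)
    return pos

def accelerated_forward_selection(rows, cond, dec):
    reduct, universe = [], list(rows)
    target = len(pos_objects(rows, cond, dec))
    while universe and len(pos_objects(rows, reduct, dec)) < target:
        # significance is evaluated only on the remaining (smaller) universe
        best = max((a for a in cond if a not in reduct),
                   key=lambda a: len(pos_objects(universe, reduct + [a], dec)))
        reduct.append(best)
        solved = pos_objects(universe, reduct, dec)
        universe = [r for r in universe if r not in solved]   # shrink the universe
    return reduct

table = [
    (0, 0, 1, 'yes'), (0, 1, 1, 'no'), (1, 0, 0, 'yes'),
    (1, 1, 0, 'no'),  (0, 0, 0, 'yes'), (1, 1, 1, 'no'),
]
print(accelerated_forward_selection(table, [0, 1, 2], 3))   # -> [1]
```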

5.
Analysis on Attribute Reduction Strategies of Rough Set
Several strategies for minimal attribute reduction with polynomial time complexity (O(n^k)) have been developed in rough set theory. Are they complete? While investigating the attribute reduction strategy based on the discernibility matrix (DM), a counterexample is constructed theoretically, which demonstrates that these strategies are all incomplete with respect to minimal reduction.

6.
An incremental attribute reduction algorithm for decision tables
胡峰  代劲  王国胤 《控制与决策》2007,22(3):268-272
To perform attribute reduction on dynamically changing decision tables, an incremental attribute reduction algorithm is proposed on the basis of an improved discernibility matrix. When new records are added to a decision table, all reducts and the minimal reduct of the new table can be obtained quickly. In addition, by decomposing the original decision table according to the decision values of the positive region and the boundary region of an inconsistent decision table, a distributed incremental attribute reduction model is obtained. Simulation studies demonstrate the correctness and efficiency of the algorithm.

7.
Attribute reduction is a core problem in rough set theory. To obtain more, and more stable, minimal attribute reducts, the minimal attribute reduction problem is transformed into a decision-risk minimization problem based on the decision-theoretic rough set model, and a new way of computing the fitness function is given. On this basis, exploiting the strong global search capability of the backtracking search algorithm, an attribute reduction algorithm for decision-theoretic rough sets based on backtracking search is proposed. Experimental results on UCI data sets, together with comparisons against other reduction algorithms, show that the algorithm obtains more minimal reducts and keeps the number of reducts stable across multiple runs.

8.
Attribute reduction in decision-theoretic rough set models   总被引:6,自引:0,他引:6  
Yiyu Yao 《Information Sciences》2008,178(17):3356-3373
Rough set theory can be applied to rule induction. There are two different types of classification rules, positive and boundary rules, leading to different decisions and consequences. They can be distinguished not only by syntactic measures such as confidence, coverage and generality, but also by semantic measures such as decision-monotonicity, cost and risk. Classification rules can be evaluated locally for each individual rule, or globally for a set of rules. Both types of classification rules can be generated from, and interpreted by, a decision-theoretic model, which is a probabilistic extension of the Pawlak rough set model. As an important concept of rough set theory, an attribute reduct is a subset of attributes that are jointly sufficient and individually necessary for preserving a particular property of a given information table. This paper addresses attribute reduction in decision-theoretic rough set models with regard to different classification properties, such as decision-monotonicity, confidence, coverage, generality and cost. It is important to note that many of these properties can be truthfully reflected by a single measure γ in the Pawlak rough set model. In probabilistic models, on the other hand, they need to be considered separately, and a straightforward extension of the γ measure is unable to evaluate these properties. This study provides a new insight into the problem of attribute reduction.
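To make the contrast concrete, here is a small Python illustration of the two quantities involved: the Pawlak γ measure, which counts only fully consistent equivalence classes, and a straightforward probabilistic extension in which a class joins the positive region once its dominant decision probability reaches a threshold α. The threshold value and the data are illustrative only.

```python
from collections import Counter

def blocks(rows, attrs, dec):
    groups = {}
    for row in rows:
        groups.setdefault(tuple(row[a] for a in attrs), []).append(row[dec])
    return list(groups.values())

def gamma_pawlak(rows, attrs, dec):
    """Only decision-pure equivalence classes count toward the positive region."""
    pos = sum(len(b) for b in blocks(rows, attrs, dec) if len(set(b)) == 1)
    return pos / len(rows)

def gamma_probabilistic(rows, attrs, dec, alpha=0.75):
    """A class counts once its dominant decision frequency reaches alpha."""
    pos = sum(len(b) for b in blocks(rows, attrs, dec)
              if Counter(b).most_common(1)[0][1] / len(b) >= alpha)
    return pos / len(rows)

table = [
    (0, 0, 'yes'), (0, 0, 'yes'), (0, 0, 'yes'), (0, 0, 'no'),
    (0, 1, 'no'),  (1, 0, 'yes'), (1, 1, 'no'),  (1, 1, 'no'),
]
print(gamma_pawlak(table, [0, 1], 2))          # 0.5: only pure blocks count
print(gamma_probabilistic(table, [0, 1], 2))   # 1.0: the 75%-pure block counts too
```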

9.
In this paper, a new approach is presented to find the reference set for the nearest neighbor classifier. The optimal reference set, which has minimum sample size and satisfies a certain error-rate threshold, is obtained through a Tabu search algorithm. When the error-rate threshold is set to zero, the algorithm obtains a near-minimal consistent subset of a given training set. When the threshold is set to a small appropriate value, the obtained reference set may compensate for the bias of the nearest neighbor estimate. An aspiration criterion for Tabu search is introduced, which aims to prevent the search process from wandering inefficiently between the feasible and infeasible regions of the search space and to speed up convergence. Experimental results based on a number of typical data sets are presented and analyzed to illustrate the benefits of the proposed method. Compared to conventional methods, such as CNN and Dasarathy's algorithm, the size of the reduced reference sets is much smaller, and the nearest neighbor classification performance is better, especially when the error-rate thresholds are set to appropriate nonzero values. The experimental results also illustrate that the MCS (minimal consistent set) of Dasarathy's algorithm is not minimal, and that its candidate consistent set is not guaranteed to shrink monotonically. A counterexample is also given to confirm this claim.

10.
Attribute reduction is one of the important topics in rough set research. To obtain the minimal relative reduct of a decision table effectively, an attribute reduction algorithm based on GA-PSO is proposed. The algorithm computes the core attributes from the support degree of the condition attributes with respect to the decision attribute, places all condition attributes (excluding the core) into the initial population of the particle swarm optimization algorithm, and applies genetic crossover and mutation operations to particles that do not satisfy the fitness condition. Experimental results show that the algorithm strengthens local search while preserving global optimization capability, and obtains the minimal relative attribute set quickly and effectively.
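A small Python sketch of two ingredients this abstract mentions, under illustrative assumptions: computing the core attributes from the dependency (support) of the condition attributes on the decision, and a fitness function for a binary-encoded particle that rewards preserving the dependency while penalizing subset size. The GA-PSO loop itself, and the weight used in the fitness, are not taken from the paper.

```python
def dependency(rows, attrs, dec):
    """|POS_attrs(D)| / |U| -- the support of attrs for the decision."""
    blocks = {}
    for row in rows:
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(row[dec])
    pos = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return pos / len(rows)

def core_attributes(rows, cond, dec):
    """Attributes whose removal from the full set lowers the dependency."""
    full = dependency(rows, cond, dec)
    return [a for a in cond
            if dependency(rows, [x for x in cond if x != a], dec) < full]

def fitness(particle, rows, cond, dec, weight=0.9):
    """Binary particle -> attribute subset; reward dependency, penalize size."""
    attrs = [a for a, bit in zip(cond, particle) if bit]
    return (weight * dependency(rows, attrs, dec)
            + (1 - weight) * (1 - len(attrs) / len(cond)))

table = [
    (0, 0, 1, 'yes'), (0, 1, 1, 'no'), (1, 0, 0, 'yes'),
    (1, 1, 0, 'no'),  (0, 0, 0, 'yes'), (1, 1, 1, 'no'),
]
print(core_attributes(table, [0, 1, 2], 3))    # -> [1] for this toy table
print(fitness([0, 1, 0], table, [0, 1, 2], 3))
```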

11.
陈鑫影  李雄飞 《计算机应用》2007,27(8):1964-1966
Starting from rough set theory, a parallel reduction algorithm is constructed on the basis of the discernibility relation and the object difference matrix. The algorithm first partitions the original system into multiple subsystems, then solves the resulting subsystems in parallel using an evaluation index, and finally derives a reduct of the original system from the local reduction results of the subsystems. The algorithm has good time and space performance and is suitable for processing large-scale data sets.

12.
Efficient attribute reduction in large, incomplete decision systems is a challenging problem; existing approaches have time complexities no less than O(|C|²|U|²). This paper derives some important properties of incomplete information systems, then constructs a positive region-based algorithm to solve the attribute reduction problem with a time complexity no more than O(|C|²|U|log|U|). Furthermore, our approach does not change the size of the original incomplete system. Numerical experiments show that the proposed approach is indeed efficient, and therefore of practical value to many real-world problems. The proposed algorithm can be applied to both consistent and inconsistent incomplete decision systems.
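The abstract does not show the algorithm itself; the |U|log|U| factor suggests a sort-based grouping of objects rather than pairwise comparison. Below is an illustrative sort-based positive-region computation in Python that achieves O(|U|log|U|) per attribute-subset evaluation on a complete table; the paper additionally handles incomplete systems (tolerance classes), which this sketch omits.

```python
def positive_region_size(rows, attrs, dec):
    """Sort objects by their attribute values, then scan consecutive groups:
    grouping costs O(|U| log |U|) instead of O(|U|^2) pairwise comparisons."""
    keyed = sorted((tuple(r[a] for a in attrs), r[dec]) for r in rows)
    pos, i, n = 0, 0, len(keyed)
    while i < n:
        j, consistent = i, True
        while j < n and keyed[j][0] == keyed[i][0]:
            consistent = consistent and keyed[j][1] == keyed[i][1]
            j += 1
        if consistent:
            pos += j - i          # every object in a decision-pure block is positive
        i = j
    return pos

table = [
    (0, 0, 1, 'yes'), (0, 1, 1, 'no'), (1, 0, 0, 'yes'),
    (1, 1, 0, 'no'),  (0, 0, 0, 'yes'), (1, 1, 1, 'no'),
]
print(positive_region_size(table, [0, 1], 3))   # 6: attributes {0, 1} are consistent here
```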

13.
杨胜  施鹏飞  顾钧 《控制与决策》2004,19(11):1208-1212
The attribute reduction problem of rough set theory is analyzed from the perspective of the mutual information of attribute sets. First, a new measure of the redundancy and synergy of an attribute subset, the redundancy-synergy coefficient, is defined on the basis of mutual information. Then, taking it as the reduction measure, a rough set attribute reduction algorithm based on beam search is proposed. Experiments show that the attribute reduction algorithm performs well.
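The redundancy-synergy coefficient itself is defined in the paper and not reproduced here; the Python snippet below only shows the underlying quantity, the mutual information I(B; D) between an attribute subset and the decision, estimated from an illustrative table.

```python
from collections import Counter
from math import log2

def mutual_information(rows, attrs, dec):
    """I(B; D) = H(D) + H(B) - H(B, D), estimated from the table's frequencies."""
    n = len(rows)
    def entropy(counts):
        return -sum(c / n * log2(c / n) for c in counts.values())
    h_d = entropy(Counter(r[dec] for r in rows))
    h_b = entropy(Counter(tuple(r[a] for a in attrs) for r in rows))
    h_bd = entropy(Counter((tuple(r[a] for a in attrs), r[dec]) for r in rows))
    return h_d + h_b - h_bd

table = [
    (0, 0, 'yes'), (0, 1, 'no'), (1, 0, 'yes'),
    (1, 1, 'no'),  (0, 0, 'yes'), (1, 1, 'no'),
]
print(round(mutual_information(table, [1], 2), 4))   # 1.0: attr 1 determines the decision
print(round(mutual_information(table, [0], 2), 4))   # small: attr 0 carries little information
```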

14.
Attribute reduction is viewed as an important preprocessing step for pattern recognition and data mining. Most research focuses on attribute reduction using rough sets. Recently, Tsang et al. discussed attribute reduction with covering rough sets (Tsang et al., 2008), where an approach based on the discernibility matrix was presented to compute all attribute reducts. In this paper, we provide a new method for constructing a simpler discernibility matrix with covering-based rough sets, and improve some characterizations of attribute reduction provided by Tsang et al. It is proved that the improved discernibility matrix is equivalent to the old one, but the computational complexity of the discernibility matrix is relatively reduced. We then further study attribute reduction in decision tables based on a different strategy of identifying objects. Finally, the proposed reduction method is compared with some existing feature selection methods in numerical experiments, and the results show that the proposed reduction method is efficient and effective.

15.
With the rapid development of network and communication technologies, society has entered the era of big data. How to quickly find attribute reducts in massive data is currently a hot research topic. Traditional attribute reduction methods require enormous computation time when applied to big data and cannot effectively handle the attribute reduction problem for ever-growing data. To improve the efficiency of traditional attribute reduction algorithms, and to address the problem of updating attribute reducts in large decision information systems, a matrix-based attribute reduction algorithm built on the multi-granulation rough set model is proposed using multi-granulation rough set theory. The performance of the proposed multi-granulation matrix-based attribute reduction algorithm is tested on two groups of UCI data sets, and the results verify that the algorithm is reasonable and effective.

16.
Computing the core and reducts in rough set theory
Reducts and the core are two important concepts in rough set theory, and computing them directly from their definitions is a typical NP-hard problem. Several useful properties of the discernibility matrix are discovered, and with these properties the problems of computing the core and the reducts in rough set theory are solved. The reduction problems for information systems without decision attributes and for decision information systems are then discussed separately. Finally, examples are given to illustrate the effectiveness of the results.
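A hedged Python illustration of one well-known discernibility-matrix property of the kind this abstract refers to: an attribute belongs to the core exactly when some non-empty matrix entry contains that attribute alone. The toy decision table is illustrative, not from the paper.

```python
def discernibility_matrix(rows, cond, dec):
    """Entry (i, j): attributes distinguishing objects i and j whose decisions differ."""
    entries = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if rows[i][dec] != rows[j][dec]:
                entries.append({a for a in cond if rows[i][a] != rows[j][a]})
    return entries

def core_from_matrix(entries):
    """Well-known property: attribute a is in the core iff some entry equals {a}."""
    return sorted({next(iter(e)) for e in entries if len(e) == 1})

table = [
    (0, 0, 1, 'yes'), (0, 1, 1, 'no'), (1, 0, 0, 'yes'),
    (1, 1, 0, 'no'),  (0, 0, 0, 'yes'), (1, 1, 1, 'no'),
]
m = discernibility_matrix(table, [0, 1, 2], 3)
print(core_from_matrix(m))    # -> [1]: singleton entries reveal the core attributes
```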

17.
Traditional rough set classification methods are too strict and overly sensitive to noise. For decision systems with uncertainty factors, a reduction algorithm based on attribute dependency degree is proposed, which simplifies the attributes of systems containing uncertain information and data noise, finds a latent data format with broad expressive power, removes redundant rules, and preserves the original purpose and performance of the system. The algorithm is demonstrated with an example.

18.
A rough set attribute reduction algorithm combining fuzzy set theory
Combining the theory of fuzzy relations, attribute reduction algorithms in rough set theory are studied, a new attribute reduction algorithm is proposed, and an application example is given.

19.
Attribute reduction is an important problem in rough set theory. Many researchers have proposed attribute reduction methods for neighborhood rough sets, including the most widely used heuristic algorithms. Building on multi-radius neighborhood rough sets, and addressing the drawback that current heuristic reduction algorithms often retain some redundant attributes, an improved reduction method that incorporates the influence of attribute weights is proposed; by setting a threshold according to the weight of each attribute, redundant attributes can be eliminated from the reduction result. Experiments on UCI data sets compare the method against several commonly used heuristic reduction algorithms. The results show that the proposed method obtains better reducts, retains more of the knowledge in the decision table itself, and has higher classification ability.
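A hedged Python sketch of one possible reading of the weight-threshold idea: run a standard neighborhood rough set forward selection, record each attribute's significance gain as its weight, then drop attributes whose weight falls below a threshold derived from the weights. The radius, the threshold rule, and the data are illustrative assumptions, not the paper's definitions.

```python
def neighborhood_dependency(rows, attrs, dec, radius=0.2):
    """Fraction of objects whose delta-neighborhood (on attrs) is decision-pure."""
    if not attrs:
        return 0.0
    pos = 0
    for x in rows:
        neigh = [y for y in rows
                 if max(abs(x[a] - y[a]) for a in attrs) <= radius]
        if len({y[dec] for y in neigh}) == 1:
            pos += 1
    return pos / len(rows)

def weighted_reduction(rows, cond, dec):
    reduct, weights = [], {}
    while True:
        rest = [a for a in cond if a not in reduct]
        if not rest:
            break
        gains = {a: neighborhood_dependency(rows, reduct + [a], dec)
                    - neighborhood_dependency(rows, reduct, dec) for a in rest}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        reduct.append(best)
        weights[best] = gains[best]
    # illustrative threshold rule: keep attributes with at least half the top weight
    threshold = 0.5 * max(weights.values()) if weights else 0.0
    return [a for a in reduct if weights[a] >= threshold]

data = [
    (0.10, 0.90, 0.50, 0), (0.15, 0.85, 0.40, 0), (0.20, 0.80, 0.60, 0),
    (0.80, 0.20, 0.55, 1), (0.85, 0.15, 0.45, 1), (0.90, 0.10, 0.50, 1),
]
print(weighted_reduction(data, [0, 1, 2], 3))   # -> [0] on this toy data
```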

20.
Existing risk evaluation indicator systems for enterprise resource planning (ERP) system implementation are highly subjective, contain redundancy, and lack a sound basis for their construction. To address this, the attribute reduction method of rough set theory is introduced and, drawing on cases from 15 enterprises that have implemented ERP systems, attribute reduction is applied to the risk control indicators of ERP implementation. The results show that rough set attribute reduction is effective for reducing the ERP implementation risk control indicator system, and seven important risk factors are identified, including human resource management, enterprise modeling, and project management.
