Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Ensemble methods aim at combining multiple learning machines to improve the efficacy in a learning task in terms of prediction accuracy, scalability, and other measures. These methods have been applied to evolutionary machine learning techniques including learning classifier systems (LCSs). In this article, we first propose a conceptual framework that allows us to appropriately categorize ensemble-based methods for fair comparison and highlights the gaps in the corresponding literature. The framework is generic and consists of three sequential stages: a pre-gate stage concerned with data preparation; the member stage to account for the types of learning machines used to build the ensemble; and a post-gate stage concerned with the methods to combine ensemble output. A taxonomy of LCS-based ensembles is then presented using this framework. The article then focuses on comparing LCS ensembles that use feature selection in the pre-gate stage. An evaluation methodology is proposed to systematically analyze the performance of these methods. Specifically, random feature sampling and rough set feature selection-based LCS ensemble methods are compared. Experimental results show that the rough set-based approach performs significantly better than the random subspace method in terms of classification accuracy in problems with high numbers of irrelevant features. The performance of the two approaches is comparable in problems with high numbers of redundant features.

2.
The effectiveness of automatic text classification depends largely on the selection of attribute features. Traditional feature selection methods that filter by frequency thresholds discard useful information and thus degrade classification accuracy. To address this, a rough set-based automatic text classification algorithm is proposed. The method discretizes the weighted feature attributes to build a decision table, screens the condition attributes of the decision table according to dependency-based attribute significance, and then applies a heuristic algorithm based on conditional information entropy to reduce the text feature attributes. Experimental results show that the method removes a large number of redundant feature attributes and improves the running efficiency of text classification without reducing classification accuracy.
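As a minimal sketch (not the paper's code) of the dependency-based attribute significance mentioned above, the following illustrates a positive-region dependency measure and a plain dependency-driven greedy reduct; it assumes a discretized decision table held in a pandas DataFrame with a 'decision' column, and replaces the paper's conditional-entropy heuristic with dependency alone.

```python
import pandas as pd

def positive_region_size(df, attrs, decision="decision"):
    # Objects whose equivalence class under `attrs` is pure in the decision.
    if not attrs:
        return 0
    purity = df.groupby(list(attrs))[decision].transform("nunique")
    return int((purity == 1).sum())

def dependency(df, attrs, decision="decision"):
    # gamma(attrs -> decision) = |POS_attrs(decision)| / |U|
    return positive_region_size(df, attrs, decision) / len(df)

def greedy_reduct(df, decision="decision"):
    # Forward selection driven by the gain in dependency (attribute significance).
    conds = [c for c in df.columns if c != decision]
    full = dependency(df, conds, decision)
    reduct, best = [], 0.0
    while best < full:
        gains = {a: dependency(df, reduct + [a], decision)
                 for a in conds if a not in reduct}
        attr, value = max(gains.items(), key=lambda kv: kv[1])
        if value <= best:
            break
        reduct.append(attr)
        best = value
    return reduct
```

For example, calling `greedy_reduct(df)` on a weighted-and-discretized term table would return a reduced set of term attributes that keeps the positive region of the full table.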

3.
Combining the advantages of variable precision rough sets (VPRS) and the optimized extreme learning machine (OMELM) diagnosis algorithm, VPRS is introduced into OMELM to form a VPRS-OMELM fault diagnosis method for the flight control system of a tiltrotor aircraft in transition mode. First, based on the characteristics of the collected multi-fault output signals of the tiltrotor, an attribute reduction algorithm is proposed and a new variable precision rough entropy is defined. Then attribute significance is defined from the perspective of mutual information increment in information theory, and an OMELM classifier is constructed to perform multi-class fault classification on the reduced attribute features. Finally, the method is validated on the XV-15; the results show a high average identification rate and a short diagnosis time.

4.
A feature reduction algorithm based on variable precision rough information entropy
To overcome the limitations of the uncertainty measures used in classical rough sets, variable precision rough information entropy is proposed as the measure. It inherits both the good noise tolerance of variable precision rough sets and the ability of information-theoretic rough entropy to reflect system uncertainty more comprehensively. A feature reduction algorithm based on variable precision rough information entropy is given, and experimental results show that the algorithm performs well.

5.
Neighbourhood rough set theory has already proven to be an efficient tool for knowledge discovery from heterogeneous data. However, in practical environments such as signal analysis and fault diagnosis, data are often incomplete and noisy. To solve this problem, a universal neighbourhood rough set model (the variable precision tolerance neighbourhood rough set [VPTNRS] model) is proposed based on a tolerance neighbourhood relation and probability theory. The proposed model can induce a family of more comprehensive information granules to characterize arbitrary concepts in a complex universe. The properties of the model are discussed, and several important theorems are introduced and proved. Furthermore, a heuristic heterogeneous feature selection algorithm is given based on the model. Experimental results on 10 University of California Irvine (UCI) standard data sets show that the model performs well in both feature selection and classification, especially in incomplete environments.
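The two ingredients named above lend themselves to a short illustration: a tolerance neighbourhood for possibly-incomplete numeric data, and a variable-precision lower approximation based on inclusion degree. The sketch below is illustrative only; the thresholds `delta` and `beta` are assumptions and this is not the paper's exact VPTNRS formulation.

```python
import numpy as np

def tolerance_neighbourhood(X, i, delta=0.2):
    # j is tolerant with i if, on every attribute where both values are
    # present (not NaN), the two values differ by at most delta.
    diffs = np.abs(X - X[i])                       # NaN wherever either value is missing
    ok = np.all((diffs <= delta) | np.isnan(diffs), axis=1)
    return np.flatnonzero(ok)

def vp_lower_approximation(X, concept_mask, beta=0.8, delta=0.2):
    # x is in the beta-lower approximation of a concept if at least a fraction
    # beta of its tolerance neighbourhood lies inside the concept.
    members = []
    for i in range(len(X)):
        nb = tolerance_neighbourhood(X, i, delta)
        if concept_mask[nb].mean() >= beta:
            members.append(i)
    return members
```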

6.
In multi-label learning, dimensionality reduction is an important and challenging task, and feature selection is an efficient dimensionality reduction technique. A label-specific feature selection method for multi-label data is proposed on the basis of neighbourhood rough set theory; it theoretically ensures that the selected label-specific features are strongly correlated with the corresponding labels, thereby improving the reduction results. First, the method uses a rough set reduction algorithm to remove redundant attributes and obtain label-specific features while keeping the classification ability unchanged. Then, based on the concepts of neighbourhood accuracy and neighbourhood roughness, the computation of dependency and significance under neighbourhood rough sets is redefined, and the relevant properties of the model are discussed. Finally, a neighbourhood rough set-based multi-label label-specific feature selection model is constructed and a feature selection algorithm for multi-label classification is implemented. Simulation experiments on several public data sets show that the algorithm is effective.

7.
The degree of malignancy in brain glioma is assessed based on magnetic resonance imaging (MRI) findings and clinical data before operation. These data contain irrelevant features, while uncertainties and missing values also exist. Rough set theory can deal with vagueness and uncertainty in data analysis, and can efficiently remove redundant information. In this paper, a rough set method is applied to predict the degree of malignancy. As feature selection can improve the classification accuracy effectively, rough set feature selection algorithms are employed to select features. The selected feature subsets are used to generate decision rules for the classification task. A rough set attribute reduction algorithm that employs a search method based on particle swarm optimization (PSO) is proposed in this paper and compared with other rough set reduction algorithms. Experimental results show that reducts found by the proposed algorithm are more efficient and can generate decision rules with better classification performance. The rough set rule-based method can achieve higher classification accuracy than other intelligent analysis methods such as neural networks, decision trees and a fuzzy rule extraction algorithm based on Fuzzy Min-Max Neural Networks (FRE-FMMNN). Moreover, the decision rules induced by the rough set rule induction algorithm can reveal regular and interpretable patterns of the relations between glioma MRI features and the degree of malignancy, which are helpful for medical experts.
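A swarm-based search over feature subsets of the kind mentioned above can be sketched with a generic binary PSO skeleton. This is an illustrative sketch, not the authors' algorithm: `fitness` is assumed to map a 0/1 feature mask to a quality score, for example the rough-set dependency of the selected attributes minus a small penalty on subset size.

```python
import numpy as np

def binary_pso_reduct(n_features, fitness, n_particles=20, iters=50,
                      w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, n_features))   # bit = feature kept
    vel = rng.uniform(-1.0, 1.0, size=(n_particles, n_features))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))                       # sigmoid transfer function
        pos = (rng.random(pos.shape) < prob).astype(int)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return np.flatnonzero(gbest)                                # indices of selected features
```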

8.
Attribute selection is one of the important problems encountered in pattern recognition, machine learning, data mining, and bioinformatics. It refers to the problem of selecting those input attributes or features that are most effective to predict the sample categories. In this regard, rough set theory has been shown to be successful for selecting relevant and nonredundant attributes from a given data set. However, classical rough sets are unable to handle real-valued noisy features. This problem can be addressed by fuzzy-rough sets, which are a generalization of classical rough sets. A feature selection method is presented here based on fuzzy-rough sets by maximizing both the relevance and the significance of the selected features. This paper also presents different feature evaluation criteria such as dependency, relevance, redundancy, and significance for the attribute selection task using fuzzy-rough sets. The performance of different rough set models is compared with that of some existing feature evaluation indices based on the predictive accuracy of the nearest neighbor rule, support vector machine, and decision tree. The effectiveness of the fuzzy-rough set based attribute selection method, along with a comparison with existing feature evaluation indices and different rough set models, is demonstrated on a set of benchmark and microarray gene expression data sets.

9.
许召召  申德荣  聂铁铮  寇月 《软件学报》2022,33(3):1128-1140
With the application of information technology, electronic medical records and case files in medical institutions, hospital databases generate large volumes of medical data. Decision trees are widely used in medical data analysis because of their high classification accuracy, fast computation, and simple, interpretable classification rules. However, the inherently high-dimensional feature space and high feature redundancy of medical data make the classification accuracy of conventional decision trees on medical data unsatisfactory. To address this, a method is proposed that fuses information gain ratio-based ranking and grouping with group-wise ...

10.
Facial expression feature selection based on rough sets
To address the problem that the extracted feature vectors are of excessively high dimension, an improved rough set attribute reduction algorithm is proposed. Local feature vectors of facial expressions are obtained with a geometric feature point method; rough set theory is then introduced, and the improved attribute reduction algorithm is used to optimally select the extracted expression features, removing redundant features and irrelevant information that is useless for expression classification. Experimental results show that the method is easy to implement, achieves a high recognition rate, and greatly reduces recognition time, demonstrating its effectiveness.

11.
Semantics-preserving dimensionality reduction refers to the problem of selecting those input features that are most predictive of a given outcome, a problem encountered in many areas such as machine learning, pattern recognition, and signal processing. This has found successful application in tasks that involve data sets containing huge numbers of features (in the order of tens of thousands), which would be impossible to process further. Recent examples include text processing and Web content classification. One of the many successful applications of rough set theory has been to this feature selection area. This paper reviews those techniques that preserve the underlying semantics of the data, using crisp and fuzzy rough set-based methodologies. Several approaches to feature selection based on rough set theory are experimentally compared. Additionally, a new area in feature selection, feature grouping, is highlighted and a rough set-based feature grouping technique is detailed.

12.
Feature selection (attribute reduction) from large-scale incomplete data is a challenging problem in areas such as pattern recognition, machine learning and data mining. In rough set theory, feature selection from incomplete data aims to retain the discriminatory power of the original features. To address this issue, many feature selection algorithms have been proposed; however, these algorithms are often computationally time-consuming. To overcome this shortcoming, we introduce in this paper a theoretic framework based on rough set theory, called positive approximation, which can be used to accelerate a heuristic process for feature selection from incomplete data. As an application of the proposed accelerator, a general feature selection algorithm is designed. By integrating the accelerator into a heuristic algorithm, we obtain several modified representative heuristic feature selection algorithms in rough set theory. Experiments show that these modified algorithms outperform their original counterparts. It is worth noting that the performance of the modified algorithms becomes more visible when dealing with larger data sets.
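The acceleration idea can be illustrated with a small sketch: once objects enter the positive region they are removed from the universe, so later attribute evaluations run on a shrinking table. The code below is an illustrative simplification for complete, discretized data (the paper itself targets incomplete data), assuming a pandas DataFrame with a 'decision' column.

```python
import pandas as pd

def consistent_rows(df, attrs, decision="decision"):
    # Rows whose equivalence class under `attrs` is pure in the decision.
    return df.groupby(list(attrs))[decision].transform("nunique") == 1

def accelerated_forward_selection(df, decision="decision"):
    conds = [c for c in df.columns if c != decision]
    reduct, universe = [], df
    while len(universe) > 0 and len(reduct) < len(conds):
        # Evaluate each remaining attribute only on the shrunken universe.
        gains = {a: int(consistent_rows(universe, reduct + [a], decision).sum())
                 for a in conds if a not in reduct}
        best, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain == 0 and reduct:
            break                                  # no attribute adds discernibility
        reduct.append(best)
        # Drop the newly consistent objects before the next round.
        universe = universe[~consistent_rows(universe, reduct, decision)]
    return reduct
```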

13.
Zhanquan Sun, Chaoli Wang, Engang Tian, Zhong Yin. Multimedia Tools and Applications, 2022, 81(10): 13467-13488

The electrocardiogram (ECG) has been proven to be the most common and effective approach to investigating cardiovascular diseases because it is simple, noninvasive and inexpensive. However, the differences among ECG signals are difficult to distinguish. In this paper, hand-engineered ECG features and automatic ECG features extracted with deep neural networks are combined to generate high-dimensional features. First, rich hand-engineered features were extracted using several extraction methods for common ECG features. Second, a convolutional neural network model was designed to extract ECG features automatically. A high-dimensional feature set is obtained by combining the hand-engineered and automatic features. To find the most informative ECG feature combination, a feature selection method based on mutual information is proposed. An ensemble learning method is then used to build the classification model for abnormal ECG types. ECG signals of six atrial arrhythmia subtypes from the Chinese cardiovascular disease database were analyzed with the proposed method. The precision of the classification results reaches 98.41%, which is higher than the results of other current methods.
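One mRMR-style reading of such a mutual-information selection step is sketched below. This is an illustrative interpretation, not the paper's exact criterion; it assumes a combined feature matrix X (hand-engineered plus CNN features) with labels y and uses scikit-learn's mutual information estimators.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mi_select(X, y, k=20, seed=0):
    relevance = mutual_info_classif(X, y, random_state=seed)    # I(feature; label)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean MI between candidate j and already-chosen features.
            red = np.mean([mutual_info_regression(X[:, [j]], X[:, s],
                                                  random_state=seed)[0]
                           for s in selected])
            score = relevance[j] - red              # max-relevance, min-redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```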


14.
Neighborhood rough set based heterogeneous feature subset selection
Feature subset selection is viewed as an important preprocessing step for pattern recognition, machine learning and data mining. Most research has focused on homogeneous feature selection, namely data with only numerical or only categorical features. In this paper, we introduce a neighborhood rough set model to deal with the problem of heterogeneous feature subset selection. As the classical rough set model can only evaluate categorical features, we generalize this model with neighborhood relations and introduce a neighborhood rough set model. The proposed model degrades to the classical one if the neighborhood size is set to zero. The neighborhood model is used to reduce numerical and categorical features by assigning different thresholds to different kinds of attributes. In this model the sizes of the neighborhood lower and upper approximations of the decisions reflect the discriminating capability of feature subsets. The size of the lower approximation is computed as the dependency between the decision and condition attributes. We use the neighborhood dependency to evaluate the significance of a subset of heterogeneous features and construct forward feature subset selection algorithms. The proposed algorithms are compared with some classical techniques. Experimental results show that the neighborhood model based method is more flexible for dealing with heterogeneous data.
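The neighborhood dependency described above can be sketched compactly. The code below is an illustrative simplification: it uses a single radius `delta` for all numeric attributes (assumed scaled to [0, 1]) and exact matching for categorical ones.

```python
import numpy as np

def neighbourhood(X_num, X_cat, i, delta=0.15):
    # Numeric attributes: within delta of sample i; categorical: exact match.
    num_ok = np.all(np.abs(X_num - X_num[i]) <= delta, axis=1) if X_num.size else True
    cat_ok = np.all(X_cat == X_cat[i], axis=1) if X_cat.size else True
    return np.flatnonzero(num_ok & cat_ok)

def neighbourhood_dependency(X_num, X_cat, y, delta=0.15):
    # Size of the neighbourhood lower approximation of the decision over |U|:
    # the fraction of samples whose whole neighbourhood shares their class.
    n = len(y)
    in_pos = sum(np.all(y[neighbourhood(X_num, X_cat, i, delta)] == y[i])
                 for i in range(n))
    return in_pos / n
```

A forward selection wrapper can then add, at each step, the feature whose inclusion most increases this dependency, mirroring the significance-driven algorithms compared in the paper.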

15.
In medical information systems, the data that describe patient health records are often time-stamped. These data are subject to complexities such as missing values, observations at irregular time intervals and large attribute sets. Owing to these complexities, mining clinical time-series data remains a challenging area of research. This paper proposes a bio-statistical mining framework, named statistical tolerance rough set induced decision tree (STRiD), which handles these complexities and builds an effective classification model. The constructed model is used in developing a clinical decision support system (CDSS) to assist the physician in clinical diagnosis. The STRiD framework provides the following functionalities: temporal pre-processing, attribute selection and classification. In temporal pre-processing, an enhanced fuzzy-inference-based double exponential smoothing method is presented to impute missing values and to derive the temporal patterns for each attribute. In attribute selection, relevant attributes are selected using the tolerance rough set. A classification model is constructed with the selected attributes using a temporal pattern induced decision tree classifier. For experimentation, this work uses clinical time-series datasets of hepatitis and thrombosis patients. The constructed classification model demonstrates the effectiveness of the proposed framework with a classification accuracy of 91.5% for hepatitis and 90.65% for thrombosis.
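The temporal pre-processing step can be illustrated with plain double exponential (Holt) smoothing used to fill missing observations. The paper's fuzzy-inference tuning of the smoothing parameters is omitted here, and the values of `alpha` and `beta` below are illustrative.

```python
import numpy as np

def holt_impute(series, alpha=0.5, beta=0.3):
    # Forward pass of Holt's linear smoothing; missing points (NaN) are
    # replaced by the one-step-ahead forecast (level + trend).
    values = np.asarray(series, dtype=float)
    out = values.copy()
    level, trend = values[~np.isnan(values)][0], 0.0   # assumes at least one observation
    for t in range(len(values)):
        forecast = level + trend
        obs = forecast if np.isnan(values[t]) else values[t]
        if np.isnan(values[t]):
            out[t] = forecast                          # impute with the forecast
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return out
```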

16.
程玉胜, 陈飞, 王一宾. 《计算机应用》, 2018, 38(11): 3105-3111
Traditional feature selection algorithms cannot handle streaming features, involve complex redundancy computations, and describe instances insufficiently accurately. To address these problems, a rough set-based multi-label distribution feature selection algorithm for data streams is proposed. First, an online streaming feature selection framework is introduced into multi-label learning. Second, the dependency measure of rough sets replaces the original conditional probability, so that only the information in the data itself is used, making the streaming feature selection algorithm more efficient. Finally, since in the real world each label describes an instance to a different degree, the traditional logical labels are characterized in the form of label distributions so as to describe instances more accurately. Experiments on multiple data sets show that the proposed algorithm retains features that are highly correlated with the label space and improves classification accuracy to a certain extent compared with no feature selection.

17.
Rough set theory is an effective method for feature selection that preserves the meaning of the features. The essence of the rough set approach to feature selection is to find a subset of the original features. Since finding a minimal subset of the features is an NP-hard problem, it is necessary to investigate effective and efficient heuristic algorithms. Ant colony optimization (ACO) has been successfully applied to many difficult combinatorial problems such as quadratic assignment, traveling salesman and scheduling. It is particularly attractive for feature selection since there is no heuristic information that can guide the search to the optimal minimal subset every time; however, ants can discover the best feature combinations as they traverse the graph. In this paper, we propose a new rough set approach to feature selection based on ACO, which adopts mutual information based feature significance as heuristic information. A novel feature selection algorithm is also given. Jensen and Shen proposed an ACO-based feature selection approach which starts from a random feature; our approach starts from the feature core, which changes the complete graph to a smaller one. To verify the efficiency of our algorithm, experiments are carried out on some standard UCI datasets. The results demonstrate that our algorithm can provide an efficient solution for finding a minimal subset of the features.
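Two ingredients named above can be sketched briefly: the mutual-information feature significance used as the ants' heuristic information, and the feature core used as the common starting point. The sketch is illustrative, not the authors' code; `dependency_fn` stands for any rough-set dependency measure (such as the positive-region dependency sketched earlier), and scikit-learn's estimator is used for the mutual information.

```python
from sklearn.feature_selection import mutual_info_classif

def mi_significance(X, y):
    # Heuristic information eta_j = I(feature_j ; decision).
    return mutual_info_classif(X, y)

def feature_core(df, dependency_fn, decision="decision"):
    # Attributes whose removal lowers the dependency of the full attribute set.
    conds = [c for c in df.columns if c != decision]
    full = dependency_fn(df, conds, decision)
    return [a for a in conds
            if dependency_fn(df, [c for c in conds if c != a], decision) < full]
```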

18.
With the development of Internet and Internet-of-Things technologies, data collection has become increasingly easy. However, high-dimensional data contain many redundant and irrelevant features; using them directly only increases the computational load of a model and may even degrade its performance, so dimensionality reduction of high-dimensional data is necessary. Feature selection reduces the feature dimension, lowering computational cost and removing redundant features to improve the performance of machine learning models, while preserving the original features and thus offering good interpretability; it has become an important data pre-processing step in machine learning. Rough set theory is an effective method for feature selection that removes redundant information while preserving the characteristics of the original features. However, because evaluating all feature subset combinations is expensive, traditional rough set-based feature selection methods struggle to find the globally optimal feature subset. To address this problem, this paper proposes a feature selection method based on rough sets and an improved whale optimization algorithm. To prevent the whale algorithm from falling into local optima, an improved whale algorithm with population optimization and a perturbation strategy is proposed. The algorithm first randomly initializes a set of feature subsets, then evaluates each subset with an objective function based on the rough set attribute dependency, and finally applies the improved whale optimization algorithm, iterating until an acceptable approximately optimal feature subset is found. Experimental results on UCI data sets show that, with a support vector machine as the evaluation classifier, the proposed algorithm finds feature subsets with little information loss and high classification accuracy, demonstrating its advantages for feature selection.

19.

To address the problems of the K-means clustering algorithm, namely its heavy dependence on the initial cluster centers, local convergence and poor stability, a bee colony clustering algorithm based on mutation-driven precise search is proposed. The algorithm initializes the bee colony using density and distance, and computes the selection probability P of the onlooker bees from the fitness and density of the employed bees; scout bees are then updated with new solutions generated by the mutation-based precise search to avoid local optima; finally, the bee colony is combined with rough sets to optimize K-means. Experimental results show that the algorithm not only effectively suppresses local convergence and reduces dependence on the initial cluster centers, but also considerably improves accuracy and stability.


20.
Xu Ruohao, Li Mengmeng, Yang Zhongliang, Yang Lifang, Qiao Kangjia, Shang Zhigang. Applied Intelligence, 2021, 51(10): 7233-7244

Feature selection is a technique to improve the classification accuracy of classifiers and a convenient data visualization method. As an incremental, task-oriented, and model-free learning algorithm, Q-learning is suitable for feature selection. This study proposes a dynamic feature selection algorithm that combines feature selection and Q-learning in one framework. First, Q-learning is used to construct discriminant functions for each class of the data. Next, features are ranked by comprehensively considering the discriminant function vectors of all classes, and the ranking is performed while the discriminant function vectors are updated. Finally, experiments compare the proposed algorithm with four feature selection algorithms. The results on benchmark data sets verify its effectiveness: its classification performance is better than that of the other feature selection algorithms, it also performs well in removing redundant features, and experiments on the effect of the learning rate show that parameter selection for the algorithm is simple.

