Similar Documents
A total of 20 similar documents were retrieved (search time: 187 ms).
1.
A Two-Stage Unsupervised Sequential Forward Fractal Attribute Reduction Algorithm (cited 3 times: 0 self-citations, 3 by others)
Using the multifractal dimension of individual attributes and the change in fractal dimension after attributes are merged as the measure of attribute relevance, and the difference between the fractal dimension of a candidate attribute subset and that of the full attribute set as the criterion for judging subset quality, the fractal attribute reduction problem is recast as a search for the maximal uncorrelated fractal attribute subset under a constraint on the number of attributes. To cope with the combinatorial explosion that arises when searching a high-dimensional attribute space, a two-stage sequential forward unsupervised fractal attribute reduction algorithm combining relevance analysis with redundancy analysis is designed. The time and space complexity of the algorithm is analyzed, and experiments on benchmark and synthetic data sets show that it obtains good attribute subsets at a relatively low cost of fractal dimension computation.
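A minimal sketch of the two ingredients described above, not the authors' implementation: a box-counting estimate of a subset's fractal dimension (assuming the data are min-max normalized to [0, 1]) and a greedy sequential forward search; the function names, grid resolutions, and the subset-size limit k are illustrative.

```python
import numpy as np

def box_counting_dimension(X, resolutions=(2, 4, 8, 16, 32)):
    """Slope of log N(r) versus log r, where N(r) is the number of occupied
    grid cells when [0, 1]^d is partitioned into r cells per dimension."""
    logs_r, logs_n = [], []
    for r in resolutions:
        cells = np.clip(np.floor(X * r).astype(int), 0, r - 1)
        logs_r.append(np.log(r))
        logs_n.append(np.log(len({tuple(c) for c in cells})))
    slope, _ = np.polyfit(logs_r, logs_n, 1)
    return slope

def forward_fractal_selection(X, k):
    """Greedily add the attribute whose inclusion raises the subset's fractal
    dimension the most (one reading of "closest to the full-set dimension"),
    until k attributes have been chosen."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        gains = {f: box_counting_dimension(X[:, selected + [f]]) for f in remaining}
        best = max(gains, key=gains.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```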

2.
Dimensionality reduction not only improves the efficiency and accuracy of pattern recognition and machine learning, but as an effective data preprocessing technique it has also attracted close attention from many researchers; fractal-based feature selection is a recent direction in this area. Borrowing the idea of the Z-ordering index, this paper designs and implements an improved fractal attribute selection method, ZBFDR (Z-ordering based FDR), which scans the data set only once to build a bottom-level grid structure and then computes the fractal dimension and performs attribute selection on that structure. ZBFDR avoids the repeated data-set scans of FDR (fractal dimensionality reduction) and requires less space than OptFDR (optimized FDR). Experiments on synthetic and real data sets show that ZBFDR achieves good overall performance.
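A rough illustration of the Z-ordering idea mentioned above (our assumption of how the bottom-level grid could be built, not the ZBFDR code): each point is mapped to its grid cell in one pass and the cell is encoded as a Morton key by interleaving the bits of the per-dimension indices; the bit depth and the [0, 1] normalization are assumptions.

```python
def morton_key(cell, bits=8):
    """Interleave the bits of the per-dimension cell indices into one key."""
    key = 0
    for bit in range(bits):
        for dim, c in enumerate(cell):
            key |= ((c >> bit) & 1) << (bit * len(cell) + dim)
    return key

def occupied_cells_one_pass(X, r, bits=8):
    """Single scan of the data: record the distinct Morton keys of the grid
    cells (resolution r per dimension) that contain at least one point."""
    keys = set()
    for x in X:
        cell = [min(int(v * r), r - 1) for v in x]   # v assumed to lie in [0, 1]
        keys.add(morton_key(cell, bits))
    return keys
```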

3.
Attribute selection is an effective data preprocessing method that preserves both the temporal relationships among the important variables of a multivariate time series and their physical meaning. Because many real data sets carry no class information, this paper proposes an unsupervised attribute selection algorithm and analyzes its complexity. First, a method for computing the fractal dimension of a multivariate time series without phase-space reconstruction is designed; the fractal dimension is treated as the intrinsic dimension, and the change in fractal dimension and attribute count of a subset serves as the criterion for judging its quality. A discrete particle swarm optimization algorithm is then refined to tackle the combinatorial explosion of searching a high-dimensional attribute space. Finally, simulations on multivariate time series generated by typical chaotic dynamical systems and on five data sets from the UCI repository show that the algorithm finds good attribute subsets within a short time and has good overall performance.
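A simplified binary PSO sketch for the subset search step (not the paper's refined variant): positions are 0/1 attribute masks, velocities pass through a sigmoid transfer function, and evaluate is a hypothetical stand-in for the fractal-dimension-based subset criterion; all parameter values are illustrative.

```python
import numpy as np

def binary_pso(n_features, evaluate, n_particles=20, n_iters=50,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, n_features))
    vel = rng.normal(size=(n_particles, n_features))
    pbest = pos.copy()
    pbest_val = np.array([evaluate(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))              # sigmoid transfer function
        pos = (rng.random(pos.shape) < prob).astype(int)
        vals = np.array([evaluate(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest                                        # best 0/1 attribute mask found
```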

4.
Attribute selection is widely used as a major preprocessing step in machine learning and data mining. Selecting an attribute subset that captures the fractal characteristics of a data set is valuable for studying its fractal behaviour. Starting from those fractal characteristics, this paper introduces density analysis, points out shortcomings of existing fractal-dimension-based attribute selection methods, and proposes an attribute selection method based on fractals and the density change of adjacent spaces. To verify the results, the classification performance before and after attribute selection is measured on three data sets by combining an SVM classifier with K-fold cross-validation. The experiments show that the method performs well for attribute selection and yields good attribute subsets.
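A short sketch of how such an evaluation step could look (using scikit-learn; the kernel, fold count, and data layout are assumptions, not the paper's protocol): compare K-fold SVM accuracy on the full attribute set with that on the selected subset.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_subsets(X, y, selected, k=10):
    """Mean K-fold accuracy on all attributes vs. on the selected columns."""
    acc_full = cross_val_score(SVC(), X, y, cv=k).mean()
    acc_subset = cross_val_score(SVC(), X[:, selected], y, cv=k).mean()
    return acc_full, acc_subset
```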

5.
牟琦  毕孝儒  厍向阳 《计算机工程》2011,37(14):103-105
Irrelevant and redundant attributes in high-dimensional network data slow down the network intrusion detection performed by classification algorithms and lower the detection rate. To address this, a network intrusion feature selection method based on genetic quantum particle swarm optimization (GQPSO) is proposed. The method combines the selection and mutation strategies of the genetic algorithm with QPSO to form the GQPSO algorithm, and uses the normalized mutual information between network data attributes as its fitness function to guide attribute reduction, yielding an optimized subset of network intrusion features. Simulations on the KDD CUP 1999 data set show that, compared with QPSO and PSO, the method prunes the network data features more effectively and improves both the detection speed and the detection rate of the classification algorithms.
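A sketch of the mutual-information ingredient described above (the GQPSO search itself is omitted): mean pairwise normalized mutual information inside a candidate subset, computed here with scikit-learn on discretized attribute columns; treating it as a redundancy term is our reading, not the paper's exact fitness function.

```python
from sklearn.metrics import normalized_mutual_info_score

def subset_redundancy(X_discrete, subset):
    """Mean pairwise normalized MI among the chosen (discretized) attributes;
    lower values indicate a less redundant subset."""
    pairs, total = 0, 0.0
    for i, a in enumerate(subset):
        for b in subset[i + 1:]:
            total += normalized_mutual_info_score(X_discrete[:, a], X_discrete[:, b])
            pairs += 1
    return total / pairs if pairs else 0.0
```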

6.
To address the poor clustering quality and excessive computation time encountered when clustering high-dimensional, complex symbolic data sets, an unsupervised feature selection algorithm based on neighborhood distance is first proposed, and clustering is then rerun on the selected feature subset, which effectively improves the accuracy of the clustering results and reduces the computational cost. Experimental results show that the algorithm finds effective feature subsets, improves clustering accuracy, and lowers the cost of clustering high-dimensional complex data sets.

7.
张乐园  李佳烨  李鹏清 《计算机应用》2018,38(12):3444-3449
High-dimensional data often exhibit nonlinearity, low-rank structure, and attribute redundancy. To address these problems, a kernel-based self-representation unsupervised attribute selection algorithm, the low-rank-constrained nonlinear feature selection algorithm (LRNFS), is proposed. First, each attribute is mapped into a high-dimensional kernel space, so that linear attribute selection in the kernel space realizes nonlinear attribute selection in the original low-dimensional space. A bias term is then introduced into the self-representation formulation, and the coefficient matrix is constrained to be low-rank and sparse. Finally, a sparse regularization term on the coefficient vectors of the kernel matrix is used to carry out the attribute selection. In this algorithm the kernel matrix captures the nonlinear relationships, the low-rank constraint performs subspace learning using the global information in the data, and the self-representation form determines the importance of each attribute. Experimental results show that, compared with the semi-supervised feature selection algorithm based on rescaled linear square regression (RLSR), classification accuracy after attribute selection with the proposed algorithm improves by 2.34%. The algorithm resolves the linear inseparability of data in the low-dimensional feature space and improves the accuracy of attribute selection.

8.
Attribute Reduction Based on Fractal Dimension (cited 1 time: 0 self-citations, 1 by others)
Many attribute reduction algorithms have been proposed, among them those based on rough sets, but these algorithms are inefficient and do not necessarily yield a minimal reduct. This paper discusses the discernibility-matrix-based attribute frequency algorithm (BDMF) and proposes a backward-elimination attribute reduction algorithm based on fractal dimension (FDR). Simulation experiments show that FDR runs faster than BDMF and produces better reductions.
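A backward-elimination sketch in the spirit of the idea described above (our paraphrase, not the FDR pseudocode): repeatedly drop the attribute whose removal changes the current fractal dimension the least; fractal_dim is any estimator of a subset's fractal dimension, e.g. the box-counting sketch under entry 1, and the stopping threshold max_drop is an assumption.

```python
def backward_fractal_elimination(X, fractal_dim, max_drop=0.05):
    """Remove attributes one at a time while the fractal dimension of the
    remaining subset stays within max_drop of the full-set dimension."""
    selected = list(range(X.shape[1]))
    base = fractal_dim(X)
    while len(selected) > 1:
        best_f, best_diff = None, float("inf")
        for f in selected:
            rest = [g for g in selected if g != f]
            diff = abs(fractal_dim(X[:, rest]) - base)
            if diff < best_diff:
                best_f, best_diff = f, diff
        if best_diff > max_drop:
            break
        selected.remove(best_f)
    return selected
```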

9.
In semi-supervised learning, the random selection of unlabeled samples frequently degrades classifier performance and stability, and traditional semi-supervised algorithms also perform poorly on high-dimensional data that contain only a small number of labeled samples. To solve these problems, this paper works from both the sample space and the feature space and proposes a safe semi-supervised learning algorithm combining stochastic subspace technology and ensemble technology (S3LSE) for classifying high-dimensional data with very few labeled samples. First, S3LSE uses the random subspace technique to decompose the high-dimensional data set into B feature subsets and optimizes each subset according to the implicit information among samples, producing B optimal feature subsets. Each optimal feature subset is then sampled to form G sample subsets; in each sample subset a safe sample-labeling procedure enlarges the labeled set, G classifiers are trained, and the G classifiers are combined into an ensemble. Finally, the B ensemble classifiers built from the B optimal feature subsets are combined once more to classify the high-dimensional data. Experiments that simulate the semi-supervised learning process on high-dimensional data sets show that S3LSE performs well.

10.
To counter the curse of dimensionality of software metrics in software reliability prediction, a software metric attribute selection method combining an adaptive genetic algorithm with the KNN algorithm is proposed to pick out the key attributes most closely related to software reliability. The method uses a genetic algorithm for randomized search over attribute subsets, and evaluates each subset using KNN classification accuracy together with subset size. Experimental results show that the algorithm effectively finds attribute subsets with good separability, thereby reducing dimensionality and improving the accuracy of software reliability prediction.
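A compact GA-plus-KNN wrapper sketch of the kind of search and evaluation loop described above (the selection scheme, crossover, and all parameter values are illustrative assumptions, and the adaptive part of the GA is omitted): fitness trades off KNN cross-validation accuracy against subset size.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, alpha=0.01):
    """KNN cross-validation accuracy penalized by the fraction of kept attributes."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=5).mean()
    return acc - alpha * mask.sum() / mask.size

def ga_knn_select(X, y, pop=20, gens=30, p_mut=0.02, seed=0):
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(pop, X.shape[1]))
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in masks])
        parents = masks[scores.argsort()[::-1][:pop // 2]]   # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])
            child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
            flip = rng.random(X.shape[1]) < p_mut            # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        masks = np.vstack([parents, children])
    scores = np.array([fitness(m, X, y) for m in masks])
    return masks[scores.argmax()].astype(bool)               # best attribute mask
```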

11.
Gene expression data are expected to be a significant aid in the development of efficient cancer diagnosis and classification platforms. However, gene expression data are high-dimensional, the number of samples is small in comparison to the dimensionality, and the data are inherently noisy. Therefore, to improve the accuracy of the classifiers, it pays to reduce the dimensionality of the data first. Two families of approaches have previously been proposed for this purpose: feature selection and dimensionality reduction. Feature selection is a feedback method that incorporates the classifier algorithm in the feature selection process. Dimensionality reduction refers to algorithms and techniques that create new attributes as combinations of the original attributes in order to reduce the dimensionality of a data set. In this article, we compared the feature selection methods and the dimensionality reduction methods, and verified the effectiveness of both types. For the feature selection methods we used one previously known method and three proposed methods, and for the dimensionality reduction methods we used one previously known method and one proposed method. An experiment on a benchmark data set confirmed the effectiveness of our proposed method of each type.

12.
Multi-label learning deals with data associated with a set of labels simultaneously. Dimensionality reduction is an important but challenging task in multi-label learning, and feature selection is an efficient technique for it: the goal is an optimal feature subset preserving the most relevant information. In this paper, we propose an effective feature evaluation criterion for multi-label feature selection, called the neighborhood relationship preserving score. The criterion is inspired by similarity preservation, which is widely used in single-label feature selection. It evaluates each feature subset by measuring its capability to preserve the neighborhood relationship among samples. Unlike similarity preservation, we address the order of sample similarities, which expresses the neighborhood relationship among samples better than the pairwise similarities alone. With this criterion, we also design one ranking algorithm and one greedy algorithm for the feature selection problem. The proposed algorithms are validated on six publicly available data sets from a machine learning repository. Experimental results demonstrate their superiority over the compared state-of-the-art methods.
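A sketch of our reading of the "preserve the order of sample similarities" idea (not the authors' exact score): for each sample, compare the ranking of its distances to all other samples computed on the full feature set with the ranking computed on a candidate subset, using Spearman correlation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

def neighborhood_order_score(X, subset):
    """Average per-sample Spearman correlation between full-feature and
    subset-feature distance rankings (higher means order is better preserved)."""
    D_full = cdist(X, X)
    D_sub = cdist(X[:, subset], X[:, subset])
    scores = []
    for i in range(X.shape[0]):
        mask = np.arange(X.shape[0]) != i          # exclude the sample itself
        rho, _ = spearmanr(D_full[i, mask], D_sub[i, mask])
        scores.append(rho)
    return float(np.mean(scores))
```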

13.
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for decreasing the dimensionality of the data and increasing the efficiency of learning algorithms. In particular, feature selection performed in the absence of class labels, namely unsupervised feature selection, is challenging and interesting. In this paper, we propose a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, which is treated as a matrix factorization problem. The advantages of this work are four-fold. First, building on matrix factorization, a unified framework is established for feature selection, feature extraction and clustering. Second, an iterative update algorithm is provided via matrix factorization, which is an efficient way to deal with high-dimensional data. Third, an effective method for feature selection with numeric data is put forward that does not rely on a discretization process. Fourth, the new criterion provides a sound foundation for embedding kernel tricks into feature selection; in this regard, an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods on six publicly available datasets. Experimental results demonstrate that, in terms of clustering results, the two proposed algorithms perform better than the others on almost all of the datasets tested.
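A loose sketch of the "feature selection as matrix factorization" viewpoint (illustrative only; NMF is used here as a stand-in, not the paper's objective function): factorize X ≈ WH and rank each feature by the norm of its column in H, i.e. how strongly it loads on the learned subspace. Assumes nonnegative data; the component count is arbitrary.

```python
import numpy as np
from sklearn.decomposition import NMF

def mf_feature_ranking(X, n_components=5):
    """Rank features by how strongly they load on a factorized subspace."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    W = model.fit_transform(X)            # (n_samples, n_components)
    H = model.components_                 # (n_components, n_features)
    scores = np.linalg.norm(H, axis=0)    # one loading score per feature
    return np.argsort(scores)[::-1]       # feature indices, most important first
```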

14.
Dimensionality reduction is a great challenge in processing high-dimensional unlabelled data. Existing dimensionality reduction methods tend to employ a similarity matrix together with a spectral clustering algorithm. However, noise in the original data often makes the similarity matrix unreliable and degrades clustering performance. In addition, existing spectral clustering methods focus only on local structures and ignore global discriminative information, which may lead to overfitting in some cases. To address these issues, this paper proposes a novel unsupervised 2-dimensional dimensionality reduction method that incorporates similarity matrix learning and global discriminant information into the dimensionality reduction procedure. In particular, the number of connected components in the learned similarity matrix equals the number of clusters. We compare the proposed method with several 2-dimensional unsupervised dimensionality reduction methods and evaluate clustering performance with K-means on several benchmark data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

15.
Dimensionality reduction is an essential data preprocessing technique for large-scale and streaming data classification tasks. It can be used to improve both the efficiency and the effectiveness of classifiers. Traditional dimensionality reduction approaches fall into two categories: feature extraction and feature selection. Techniques in the feature extraction category are typically more effective than those in the feature selection category, but they may break down when processing large-scale data sets or data streams because of their high computational complexity. The solutions provided by feature selection approaches, in turn, are mostly obtained by greedy strategies and hence are not guaranteed to be optimal with respect to the optimized criteria. In this paper, we give an overview of commonly used feature extraction and selection algorithms under a unified framework. Moreover, we propose two novel dimensionality reduction algorithms based on the orthogonal centroid algorithm (OC). The first is an incremental OC (IOC) algorithm for feature extraction. The second is an orthogonal centroid feature selection (OCFS) method that provides optimal solutions according to the OC criterion. Both are designed under the same optimization criterion. Experiments on the Reuters Corpus Volume 1 data set and several public large-scale text data sets indicate that the two algorithms compare favorably with other state-of-the-art algorithms in both effectiveness and efficiency.

16.
In pattern classification and recognition for large-scale, complex data, most classifiers run into the intractable problem of the curse of dimensionality. Before high-dimensional data are classified, nonlinear dimensionality reduction based on supervised manifold learning can provide an effective remedy. This paper uses multinomial logistic regression for classification and prediction and combines it with unsupervised manifold learning for nonlinear dimensionality reduction, forming a new classification and recognition method that handles both image and non-image data. Extensive experiments and comparative analysis verify the advantages of the proposed method.
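A minimal pipeline sketch matching the overall recipe described above (the choice of Isomap as the manifold learner, and the component and neighbor counts, are our assumptions): unsupervised nonlinear dimensionality reduction followed by multinomial logistic regression.

```python
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def manifold_logreg(n_components=10, n_neighbors=15):
    """Nonlinear dimensionality reduction (Isomap) + softmax classifier."""
    return make_pipeline(
        Isomap(n_neighbors=n_neighbors, n_components=n_components),
        LogisticRegression(max_iter=1000),
    )

# Usage sketch: clf = manifold_logreg(); clf.fit(X_train, y_train); clf.score(X_test, y_test)
```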

17.
李海林  杨丽彬 《控制与决策》2013,28(11):1718-1722

Dimensionality reduction and feature representation are key techniques for overcoming the curse of dimensionality in time series and play a fundamental role in time series data mining. This paper therefore proposes a new method for time series dimensionality reduction and feature representation: an orthogonal polynomial regression model extracts features from each series, the fit is analyzed with respect to the length of the feature sequence, and singular value decomposition then reduces the feature sequence further, producing a lower-dimensional feature sequence that retains most of the information. Numerical experiments show that the new method achieves good clustering and classification results for data mining in a feature space of lower dimensionality.
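A sketch of the two-step representation described above (the polynomial basis, degree, and number of retained singular directions are illustrative assumptions): fit each series with an orthogonal (Legendre) polynomial basis, then compress the stacked coefficient matrix with a truncated SVD.

```python
import numpy as np

def polynomial_svd_features(series_list, degree=8, k=3):
    """Per-series orthogonal polynomial coefficients, then SVD compression."""
    coeffs = []
    for s in series_list:
        x = np.linspace(-1, 1, len(s))                    # domain of the Legendre basis
        coeffs.append(np.polynomial.legendre.legfit(x, s, degree))
    C = np.vstack(coeffs)                                  # (n_series, degree + 1)
    U, S, Vt = np.linalg.svd(C, full_matrices=False)
    return C @ Vt[:k].T                                    # (n_series, k) feature matrix
```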


18.
A Semi-Supervised Dimensionality Reduction Method Based on Sparse Representation (cited 1 time: 1 self-citation, 0 by others)
A semi-supervised dimensionality reduction method based on sparse representation (SpSSDR) is proposed. Unlike other graph-based semi-supervised dimensionality reduction methods that construct the graph in separate steps, SpSSDR uses the sparse reconstruction coefficients to define edge connectivity and edge weights simultaneously, and then incorporates the pairwise constraint information to perform dimensionality reduction. Experiments on high-dimensional face data show that SpSSDR is not only robust to noise but also makes more effective use of the side information.

19.
A Two-Stage Feature Selection Method Based on Mutual Information and a Genetic Algorithm (cited 2 times: 0 self-citations, 2 by others)
To obtain better feature subsets during feature selection, a new two-stage feature selection method combining normalized mutual information and a genetic algorithm is proposed. The method first ranks the features by normalized mutual information, then uses the top-ranked features to initialize part of the population of the second-stage genetic algorithm, so that the initial population contains good starting points for the search; the genetic algorithm therefore needs only a small number of generations to find a good feature subset. Experiments show that the proposed method performs well in terms of feature reduction and classification.
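A sketch of the seeding step (the genetic algorithm itself is omitted; scikit-learn's mutual_info_classif stands in for the paper's normalized mutual information ranking, and the population size and seeding fraction are assumptions): the top-ranked features initialize part of the second-stage GA population.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def seeded_population(X, y, pop_size=30, top_k=15, seed_fraction=0.3, seed=0):
    """Random 0/1 chromosomes, with a fraction of them seeded from the
    top-ranked features so the GA starts from good candidate subsets."""
    rng = np.random.default_rng(seed)
    ranking = np.argsort(mutual_info_classif(X, y))[::-1]
    n_features = X.shape[1]
    population = rng.integers(0, 2, size=(pop_size, n_features))
    for i in range(int(seed_fraction * pop_size)):
        mask = np.zeros(n_features, dtype=int)
        mask[ranking[:top_k]] = 1
        population[i] = mask
    return population
```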

20.
Unsupervised image clustering partitions a whole image collection into subsets based only on the image data, without any prior information. Because the intrinsic dimensionality of images is high, image processing runs into the curse of dimensionality. Exploiting the characteristics of unsupervised image clustering, a diffuse-interface unsupervised clustering algorithm for images is proposed: images are encoded as points in a high-dimensional observation space and mapped by a projection transform into a low-dimensional feature space, where a diffuse-interface clustering model with a dimensionality reduction operator is built and an iterative algorithm optimizes the model's energy functional. Based on the optimal diffuse interface, the image collection is clustered into subsets. Experimental results show that the algorithm outperforms the classical K-means, DBSCAN, and Spectral Clustering algorithms, achieving better unsupervised image clustering and higher accuracy under the same conditions.
