Similar Literature
20 similar documents found.
1.
The widely used Savitzky-Golay filter fitting method and the Fourier harmonic analysis method were applied to filter NDVI time series, in order to assess how well vegetation-index time-series reconstruction methods suit Beijing-1 small satellite data. The experiments show that, compared with the Savitzky-Golay method, the reconstruction produced by the improved Fourier harmonic algorithm better reflects the phenological dynamics of land cover and disturbs the original data less, which makes it more useful for quantitative applications based on vegetation-index time series, such as land-cover classification and crop yield estimation, and better suited to reconstructing Beijing-1 NDVI time series.
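Below is a minimal sketch, not the authors' implementation, of the two reconstruction approaches the abstract compares: Savitzky-Golay smoothing and a harmonic (truncated Fourier) fit of an annual NDVI series. It assumes Python with NumPy/SciPy; the window length, polynomial order, number of harmonics and the synthetic series are illustrative assumptions, and the paper's improved harmonic algorithm is not reproduced here.

```python
import numpy as np
from scipy.signal import savgol_filter

def savitzky_golay_reconstruct(ndvi, window=7, polyorder=3):
    """Smooth a 1-D NDVI series with a Savitzky-Golay filter (illustrative parameters)."""
    return savgol_filter(ndvi, window_length=window, polyorder=polyorder)

def fourier_harmonic_reconstruct(ndvi, n_harmonics=3):
    """Keep only the mean term and the first few harmonics; higher frequencies
    (noise, cloud-contaminated spikes) are discarded."""
    spectrum = np.fft.rfft(ndvi)
    spectrum[n_harmonics + 1:] = 0
    return np.fft.irfft(spectrum, n=len(ndvi))

if __name__ == "__main__":
    t = np.arange(36)                                   # e.g. 36 ten-day composites per year
    ndvi = 0.4 + 0.3 * np.sin(2 * np.pi * t / 36) + np.random.normal(0, 0.05, t.size)
    ndvi[10] -= 0.3                                     # simulated cloud-contaminated dip
    print(savitzky_golay_reconstruct(ndvi)[:5])
    print(fourier_harmonic_reconstruct(ndvi)[:5])
```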

2.
To address the difficulties of large-area land-cover classification from remote sensing imagery, including costly and complex data acquisition, insufficient accuracy, and sensitivity to seasonal change, a method (LandUTime) is proposed that uses Landsat time-series data to generate annual temporal features and performs land-cover classification with a dedicated ensemble algorithm (UniBagging). The method defines a feature-generation scheme for time-series data and, based on the characteristics of such data, designs an ensemble classification algorithm built on feature subspaces. The procedure has two stages: first, regression analysis is applied to the Landsat time-series images at the pixel level with a specific model to generate pattern features; then all features are assembled into a "feature block", base classifiers are grouped into mutually disjoint ensembles according to feature subspaces, and the classification result is produced by weighted voting. Experimental results and quantitative analysis show that, compared with traditional feature extraction and classification methods, the approach improves classification accuracy and is robust to high-dimensional data; it effectively mitigates the effects of cloud cover, data striping, and phenological change in large-area land-cover classification, and is both accurate and practical.

3.
To address imbalanced classification and the "curse of dimensionality" in web spam detection, a binary classifier algorithm based on random forests (RF) and undersampling ensembles is proposed. First, the majority class of the training set is undersampled into several subsets, each of which is merged with the minority-class samples to form multiple balanced training subsets; a random forest is then trained on each balanced subset; finally, the random forests classify the test set and the label of each test sample is decided by voting. Experiments on the WEBSPAM UK-2006 dataset show that, for web spam detection, this ensemble outperforms a single random forest and its Bagging and AdaBoost ensembles, improving accuracy, the F1 measure, and the area under the ROC curve (AUC) by at least 14%, 13%, and 11%, respectively. Compared with the winning team of the Web Spam Challenge 2007, the ensemble improves the F1 measure by at least 1% and achieves the best AUC.
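A minimal scikit-learn sketch of the procedure the abstract describes: undersample the majority class into several subsets, pair each with the full minority class, train one random forest per balanced subset, and vote. Dataset loading, the number of subsets and the forest size are placeholder assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def undersample_rf_ensemble(X, y, n_subsets=5, seed=0):
    """Train one random forest per balanced subset (full minority class +
    an equal-sized random sample of the majority class)."""
    rng = np.random.default_rng(seed)
    minority = 0 if (y == 0).sum() < (y == 1).sum() else 1
    min_idx = np.where(y == minority)[0]
    maj_idx = np.where(y != minority)[0]
    forests = []
    for i in range(n_subsets):
        sampled = rng.choice(maj_idx, size=len(min_idx), replace=False)
        idx = np.concatenate([min_idx, sampled])
        forests.append(RandomForestClassifier(n_estimators=100, random_state=i).fit(X[idx], y[idx]))
    return forests

def ensemble_vote(forests, X_test):
    """Majority vote of the member random forests (binary labels assumed)."""
    votes = np.stack([rf.predict(X_test) for rf in forests])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```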

4.
Because high-dimensional data usually contain redundancy and noise, covering models built directly on such data cannot adequately reflect the data distribution, which degrades classifier performance. A multi-tree ensemble classification method based on pruned random subspaces is therefore proposed. The method first generates several random subspaces and builds an independent minimum-spanning-tree covering model in each subspace. The one-class classifiers built in the subspaces are then pruned according to an evaluation criterion (the AUC value). Finally, the remaining classifiers are fused into one ensemble classifier by mean combination. Experimental results show that, compared with other direct covering classification models and the bagging algorithm, the multi-tree ensemble covering classifier achieves higher classification accuracy.

5.
李英成  薛艳丽  王广亮  王莉  伍菲 《遥感信息》2006,(6):43-46,I0004
Supported by a national 863 Program research project and guided by the National Land Classification (transitional version) issued by the Ministry of Land and Resources, three scenes of Beijing-1 small satellite multispectral data acquired in May 2006 were used to evaluate land-use information extraction capability in Shandong Province. The study first determined, through visual interpretation, the land-use classification level achievable with Beijing-1 multispectral data alone; after training-sample selection and supervised classification, the results were analysed qualitatively and quantitatively against auxiliary materials such as high-resolution satellite imagery of typical areas and 1:10,000 land-use status maps, yielding land-use classification criteria that meet the requirements of macro-scale remote sensing monitoring. On this basis, a demonstration application producing a 1:100,000 land-use map of the whole of Shandong Province was completed.

6.
杨显飞  张健沛  杨静 《计算机工程》2011,37(20):180-182
Although selective ensemble algorithms can raise the performance of an ensemble over a whole dataset, the subset of individual classifiers they select is not necessarily the optimal combination for any specific sample. From a data-adaptive perspective, a two-stage dynamic fusion method for selective ensembles on data streams is therefore proposed: the position of the sample to be classified in feature space is used to dynamically select the subset of individual classifiers that then classifies it. Theoretical analysis and experimental results show that the method achieves higher classification accuracy than the GASEN algorithm.
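The abstract does not spell out its two-stage selection rule; one common way to realise "select the classifiers according to where the query lies in feature space" is dynamic selection by local accuracy over the query's nearest validation neighbours, sketched below under that assumption (scikit-learn; neighbourhood size and number of selected members are illustrative, not the paper's procedure).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dynamic_select_and_classify(classifiers, X_val, y_val, x_query, k=7, top_m=3):
    """Stage 1: find the query's k nearest validation samples and rank the member
    classifiers by their accuracy in that neighbourhood. Stage 2: let the top_m
    locally best classifiers vote on the query."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    region_X, region_y = X_val[idx[0]], y_val[idx[0]]
    local_acc = np.array([clf.score(region_X, region_y) for clf in classifiers])
    selected = np.argsort(local_acc)[-top_m:]
    votes = [classifiers[i].predict(x_query.reshape(1, -1))[0] for i in selected]
    return max(set(votes), key=votes.count)
```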

7.
Remote sensing data from the Beijing-1 small satellite and the environment and disaster monitoring and forecasting small satellites were used to analyse changes in land use/cover and landscape pattern in the western Xuzhou mining area from 2008 to 2009. Four classification methods were compared, and the most accurate, the decision-tree classification map, was used for the analysis. The experiments show that mining activity caused a marked decline in vegetated area in the western Xuzhou mining area, most of it converted to built-up land; meanwhile, land reclamation reduced the area of subsidence ponding. Landscape fragmentation in the region decreased, patch shapes became more regular, and landscape diversity declined. An assessment of the change-detection accuracy indicates that Chinese small-satellite remote sensing data can be applied effectively to monitoring land use/cover and landscape-pattern change in mining areas.

8.
To improve object classification performance, a neural-network-pool feature classification method is proposed and combined with SIFT features to achieve reliable object classification. The method first extracts SIFT feature vectors from the samples and randomly selects subsets from the feature-vector set; a radial basis function neural network is then built as a base classifier for each subset. Repeating this process yields many sets of base classifiers, which are combined with boosting techniques to form the neural network pool. Finally, a naive Bayes model fuses the classification results of the base-classifier sets in the pool to predict the final class of a feature. Experimental results show that the new method is computationally efficient and achieves high classification accuracy on the VOC-2007 dataset.

9.
This study systematically reviews progress in land-cover classification using satellite laser altimetry data, both alone and jointly with other data sources, compares the strengths and weaknesses of waveform-processing and land-cover classification methods, analyses the prospects of different methods in different regions, and summarises the shortcomings of current research. It finds that when the number of classes is large, the waveform-feature-parameter and curve-matching methods give low classification accuracy, whereas when few classes are involved, combining waveform matching with a support vector machine achieves high accuracy and performs particularly well in urban areas. This summary of land-cover classification methods for spaceborne laser altimetry data is intended as a reference for applying laser altimetry data from China's GF-7 satellite to land-cover classification.

10.
徐菲菲  魏莱  杜海洲  王文欢 《计算机科学》2013,40(7):216-221,235
Support vector machine (SVM) techniques for remote sensing information extraction are commonly constrained by inaccurate parameter selection and small-sample problems. To address these issues, a new semi-supervised ensemble SVM classification model (EPS3VM) is proposed. On one hand, the model tunes the SVM parameters with an adaptive-mutation particle swarm optimisation algorithm to improve base-classifier accuracy (PSVM); on the other hand, it uses self-training to exploit the large number of cheap unlabelled samples and generate semi-supervised classifiers with differing behaviour (PS3VM), introducing the Gustafson-Kessel fuzzy clustering algorithm during the labelling of unlabelled samples to limit the injection of incorrect classes. Finally, the individual classifiers are combined with a weighted ensemble strategy to further improve the generalisation ability of the model. The model was tested on land-cover classification of multispectral remote sensing imagery and compared with PSVM and PS3VM: classification accuracy rose from 88.48% (PSVM) to 96.88%, and the Kappa coefficient from 0.8546 to 0.9606. The results show that EPS3VM overcomes the inaccurate parameter selection of conventional SVMs while effectively handling the small-sample problem, giving better classification performance.
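Of the several components in EPS3VM, the self-training step is the most generic; the sketch below shows plain self-training of an SVM with scikit-learn, iteratively pseudo-labelling the most confident unlabelled samples. The PSO parameter tuning and the Gustafson-Kessel clustering check described in the abstract are not reproduced, and the SVM parameters are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def self_train_svm(X_labeled, y_labeled, X_unlabeled, rounds=5, per_round=20):
    """Repeatedly fit an SVM, pseudo-label the most confident unlabelled samples,
    and add them to the labelled pool."""
    X_l, y_l, X_u = X_labeled.copy(), y_labeled.copy(), X_unlabeled.copy()
    clf = SVC(C=10.0, gamma="scale", probability=True)     # illustrative parameters
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        clf.fit(X_l, y_l)
        proba = clf.predict_proba(X_u)
        take = np.argsort(proba.max(axis=1))[-per_round:]   # most confident samples
        X_l = np.vstack([X_l, X_u[take]])
        y_l = np.concatenate([y_l, clf.classes_[proba[take].argmax(axis=1)]])
        X_u = np.delete(X_u, take, axis=0)
    return clf.fit(X_l, y_l)
```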

11.
张丹  杨斌  张瑞禹 《遥感信息》2009,(5):41-43,55
In remote sensing image classification, different classifiers achieve different accuracies, and the same classifier achieves different accuracies on different classes. The idea behind multiple classifier combination is to exploit the complementarity of existing classifiers: combining them in an appropriate way lets their strengths compensate for one another, and often yields better results than any single classifier. This paper studies how to classify imagery in Matlab with a minimum-distance classifier, a Bayesian classifier, and a BP neural network classifier, combines the classifiers by voting, and then performs post-classification processing. The experimental results show that remote sensing image classification with combined classifiers is more accurate than classification with a single classifier.

12.
Remote sensing image classification is a common application of remote sensing imagery. To improve classification performance, multiple classifier combination is used to classify Landsat-8 Operational Land Imager (Landsat-8 OLI) images. Several techniques and classifier combination algorithms are investigated. A classifier ensemble consisting of five member classifiers is constructed, and the results of each member classifier are evaluated. Voting strategies are tested for combining the classification results of the member classifiers. The results show that the classifiers perform differently and that the multiple classifier combination outperforms any single classifier, achieving higher overall classification accuracy. The experiments also show that the combinations using producer's accuracy as the voting weight (MCCmod2 and MCCmod3) give higher classification accuracy than the combination using overall accuracy as the voting weight (MCCmod1), and that combinations using different voting weights affect different land-cover types differently. The combination algorithm presented in this article, which derives voting weights from the accuracies of the member classifiers, may have stability problems that need to be addressed in future studies.
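A small NumPy sketch of the voting idea behind MCCmod2/MCCmod3: each member's vote for a class is weighted by that member's producer's accuracy for the class, estimated from a validation confusion matrix. The convention assumed here (reference classes in rows, predictions in columns) and the weighting formula are illustrative, not the article's exact definition.

```python
import numpy as np

def producers_accuracy(conf):
    """Producer's accuracy per class from a confusion matrix with reference
    (ground-truth) classes in rows and predicted classes in columns."""
    ref_totals = conf.sum(axis=1)
    return np.divide(np.diag(conf), ref_totals,
                     out=np.zeros(conf.shape[0]), where=ref_totals > 0)

def weighted_vote(member_preds, member_confusions, n_classes):
    """Each member votes for its predicted class with a weight equal to its
    producer's accuracy for that class."""
    scores = np.zeros(n_classes)
    for pred, conf in zip(member_preds, member_confusions):
        scores[pred] += producers_accuracy(conf)[pred]
    return int(np.argmax(scores))
```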

13.
Each type of classifier has its own advantages as well as certain shortcomings. In this paper, we take advantage of the associative classifier and the Naïve Bayes classifier so that each compensates for the other's shortcomings, thereby improving the accuracy of text classification. We classify the training cases with the Naïve Bayes classifier and set a different confidence threshold for the class association rules (CARs) of each class, according to the classification accuracy the Naïve Bayes classifier achieves on that class. Because the accuracy of every selected CAR of a class is higher than that obtained by the Naïve Bayes classifier, the classification result can be further optimized through these selected CARs. Cases left unclassified by the CARs are classified with the Naïve Bayes classifier. The experimental results show that, by combining the advantages of these two different classifiers, a better classification result can be obtained than with a single classifier.

14.
A classifier combination algorithm based on fuzzy integrals and genetic algorithms
Combining multiple classifiers can improve classification accuracy. The Sugeno and Choquet integrals, which are based on fuzzy measures, have desirable properties and are therefore used here for classifier combination. In practice, however, the fuzzy measure is difficult to determine. Two methods are used to obtain it: one based on each classifier's classification ability on the sample data, and one based on a genetic algorithm. Both methods incorporate the prior knowledge that each classifier performs differently on different classes. Several databases from the UCI repository were tested, and the combination method was also applied to a multi-sensor fusion workpiece recognition system. The test results show that the algorithm is a computationally simple and relatively accurate classifier combination method.
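A minimal sketch of the Choquet-integral combination step the paper builds on: member confidence scores for one class are aggregated with respect to a fuzzy measure over classifier subsets. Here the measure is simply passed in as a dict (the paper derives it from each classifier's per-class ability or from a genetic algorithm); all names and values are illustrative.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of per-classifier scores with respect to a fuzzy
    measure mu: scores maps classifier name -> confidence, mu maps frozensets of
    classifier names -> measure value (monotone, mu(empty)=0, mu(all)=1)."""
    order = sorted(scores, key=scores.get)       # classifiers by ascending score
    total, previous = 0.0, 0.0
    for i, name in enumerate(order):
        coalition = frozenset(order[i:])         # members whose score is >= the current one
        total += (scores[name] - previous) * mu[coalition]
        previous = scores[name]
    return total

# Three classifiers scoring one candidate class; repeat per class and take the argmax.
scores = {"svm": 0.7, "rf": 0.6, "bp": 0.9}
mu = {frozenset(s): v for s, v in [
    ((), 0.0), (("svm",), 0.4), (("rf",), 0.3), (("bp",), 0.5),
    (("svm", "rf"), 0.6), (("svm", "bp"), 0.8), (("rf", "bp"), 0.7),
    (("svm", "rf", "bp"), 1.0)]}
print(choquet_integral(scores, mu))              # 0.78 for this example
```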

15.
The aim of bankruptcy prediction in data mining and machine learning is to develop an effective model that provides high prediction accuracy. In the prior literature, various classification techniques have been developed and studied, among which classifier ensembles, built by combining multiple classifiers, have been shown to outperform many single classifiers. In constructing classifier ensembles, however, three critical issues affect performance: the classification technique actually adopted, the combination method used to combine the classifiers, and the number of classifiers to be combined. Since few relevant studies have examined these issues, this paper conducts a comprehensive comparison of classifier ensembles built with three widely used classification techniques (multilayer perceptron (MLP) neural networks, support vector machines (SVM), and decision trees (DT)), two well-known combination methods (bagging and boosting), and different numbers of combined classifiers. Our experimental results on three public datasets show that DT ensembles composed of 80 to 100 classifiers using the boosting method perform best. The Wilcoxon signed rank test also demonstrates that boosted DT ensembles perform significantly differently from the other classifier ensembles. Moreover, a further study on a real-world case, a Taiwan bankruptcy dataset, likewise demonstrates the superiority of boosted DT ensembles over the others.
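A scikit-learn sketch of the kind of comparison the paper runs: decision-tree ensembles built with bagging and with boosting over a range of ensemble sizes, scored by cross-validation. The dataset is a stand-in (the paper uses bankruptcy datasets) and the tree settings are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)        # placeholder for a bankruptcy dataset

for n in (20, 40, 60, 80, 100):                   # number of combined classifiers
    bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n, random_state=0)
    boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                 n_estimators=n, random_state=0)
    print(n,
          round(cross_val_score(bagged, X, y, cv=5).mean(), 4),
          round(cross_val_score(boosted, X, y, cv=5).mean(), 4))
```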

16.
Classifier combination falls within the data mining area. Its aim is to combine paradigms from supervised classification, sometimes with a prior unsupervised data-division phase, in order to improve on the individual accuracy of the component classifiers. Forming classifier hierarchies is one of several approaches to classifier combination. In this paper we present a novel method for finding good hierarchies of classifiers for given databases. In this proposal, a search is performed by means of genetic algorithms, returning the best individual according to classification accuracy over the dataset, estimated through 10-fold cross-validation. Experiments have been carried out on 14 databases from the UCI repository, showing an improvement in performance compared with the single classifiers. Moreover, results similar to or better than other approaches, such as decision tree bagging and boosting, have been obtained.

17.
Precipitation prediction based on an improved AdaBoost-BP model
王军  费凯  程勇 《计算机应用》2017,37(9):2689-2693
To address the low generalisation ability and insufficient accuracy of current classification algorithms for precipitation prediction, a combined classification model is proposed in which an improved AdaBoost algorithm ensembles back-propagation (BP) neural networks. The model constructs several neural-network weak classifiers, assigns each a weight, and combines them linearly into a strong classifier. The improved AdaBoost algorithm takes minimising the normalisation factor as its objective and adjusts the sample-weight update strategy during boosting accordingly, so that adding weak classifiers also lowers the upper bound on the error estimate; the final ensembled strong classifier thus improves the generalisation ability and classification accuracy of the model. Daily meteorological data from six stations in Jiangsu Province were used to build forecast models for seven precipitation levels, and 12 attributes strongly correlated with precipitation were selected as predictors from the many factors that influence rainfall. Repeated experiments show that the improved AdaBoost-BP combined model performs well, adapting particularly well to station 58259, with an overall classification accuracy of 81%; among the seven levels, level-0 precipitation is predicted most accurately, and prediction accuracy for the other levels improves to varying degrees. Theoretical derivation and the experimental results demonstrate that the improvement raises prediction accuracy.
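The improvement described above centres on the normalisation factor of the boosting weight update. As a reference point, the sketch below shows one round of the standard AdaBoost update for ±1 labels, in which the factor Z_t that normalises the sample weights also bounds the training error; the paper's modified update strategy is not reproduced.

```python
import numpy as np

def adaboost_round(sample_w, y_true, y_pred):
    """One AdaBoost round: weak-classifier weight alpha, updated sample weights,
    and the normalisation factor Z_t (the product of Z_t over all rounds
    upper-bounds the training error)."""
    wrong = (y_pred != y_true)
    eps = np.sum(sample_w[wrong])                       # weighted error of the weak classifier
    alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))
    new_w = sample_w * np.exp(np.where(wrong, alpha, -alpha))   # up-weight mistakes
    z = new_w.sum()                                     # normalisation factor Z_t
    return alpha, new_w / z, z

# toy example: six samples, uniform initial weights, one weak classifier's output
y_true = np.array([1, 1, -1, -1, 1, -1])
y_pred = np.array([1, -1, -1, -1, 1, 1])
alpha, w, z = adaboost_round(np.full(6, 1 / 6), y_true, y_pred)
print(alpha, z, w)
```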

18.
Automatic emotion recognition from speech signals is an important research area that adds value to machine intelligence. Pitch, duration, energy and Mel-frequency cepstral coefficients (MFCC) are the widely used features in speech emotion recognition. A single classifier or a combination of classifiers is used to recognize emotions from the input features. The present work investigates the performance of features derived from autoregressive (AR) parameters, which include gain and reflection coefficients, in addition to the traditional linear prediction coefficients (LPC), for recognizing emotions from speech signals. The classification performance of the AR-parameter features is studied using discriminant, k-nearest neighbor (KNN), Gaussian mixture model (GMM), back-propagation artificial neural network (ANN) and support vector machine (SVM) classifiers, and we find that the reflection-coefficient features recognize emotions better than the LPC. To improve emotion recognition accuracy, we propose a class-specific multiple classifiers scheme, designed as multiple parallel classifiers, each optimized for one class. The classifier for each emotional class is built from a feature identified from a pool of features and a classifier identified from a pool of classifiers that together optimize the recognition of that particular emotion. The outputs of the classifiers are combined by a decision-level fusion technique. The experimental results show that the proposed scheme improves emotion recognition accuracy, and further improvement is obtained when the scheme is built with MFCC features included in the pool of features.

19.
Generalized rules for combination and joint training of classifiers
Classifier combination has repeatedly been shown to provide significant improvements in performance for a wide range of classification tasks. In this paper, we focus on the problem of combining probability distributions generated by different classifiers. Specifically, we present a set of new combination rules that generalize the most commonly used combination functions, such as the mean, product, min, and max operations. These new rules have continuous and differentiable forms, and can thus not only be used for combination of independently trained classifiers, but also as objective functions in a joint classifier training scheme. We evaluate both of these schemes by applying them to the combination of phone classifiers in a speech recognition system. We find a significant performance improvement over previously used combination schemes when jointly training and combining multiple systems using a generalization of the product rule.
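One standard family with exactly this property is the (power) generalized mean of the member posteriors: p = 1 gives the arithmetic mean, p → 0 the geometric mean (which ranks classes the same way as the product rule), p → -∞ the min and p → +∞ the max, and the form is continuous and differentiable. The sketch below shows this textbook family as an assumption, not the paper's specific rules.

```python
import numpy as np

def power_mean_combination(posteriors, p):
    """Combine per-classifier posteriors (rows: classifiers, columns: classes)
    with the generalized (power) mean of exponent p, then renormalise."""
    posteriors = np.asarray(posteriors, dtype=float)
    if p == 0:                                          # limit case: geometric mean
        combined = np.exp(np.log(np.clip(posteriors, 1e-12, None)).mean(axis=0))
    else:
        combined = np.power(posteriors, p).mean(axis=0) ** (1.0 / p)
    return combined / combined.sum()

# three classifiers, four classes
P = [[0.6, 0.2, 0.1, 0.1],
     [0.5, 0.3, 0.1, 0.1],
     [0.2, 0.5, 0.2, 0.1]]
for p in (-50, 0, 1, 50):                               # ~min, geometric, mean, ~max
    print(p, power_mean_combination(P, p).round(3))
```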

20.
This article proposes a new approach to improve the classification performance of remotely sensed images with an aggregative model based on classifier ensemble (AMCE). AMCE is a multi-classifier system with two procedures, namely ensemble learning and predictions combination. Two ensemble algorithms (Bagging and AdaBoost.M1) were used in the ensemble learning process to stabilize and improve the performance of single classifiers (i.e. maximum likelihood classifier, minimum distance classifier, back propagation neural network, classification and regression tree, and support vector machine (SVM)). Prediction results from single classifiers were integrated according to a diversity measurement with an averaged double-fault indicator and different combination strategies (i.e. weighted vote, Bayesian product, logarithmic consensus, and behaviour knowledge space). The suitability of the AMCE model was examined using a Landsat Thematic Mapper (TM) image of Dongguan city (Guangdong, China), acquired on 2 January 2009. Experimental results show that the proposed model was significantly better than the most accurate single classification (i.e. SVM) in terms of classification accuracy (i.e. from 88.83% to 92.45%) and kappa coefficient (i.e. from 0.8624 to 0.9088). A stepwise comparison illustrates that both ensemble learning and predictions combination with the AMCE model improved classification.
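A small sketch of the averaged double-fault indicator that AMCE uses as its diversity measurement: for every pair of member classifiers, compute the fraction of validation samples both get wrong, then average over all pairs (lower values indicate more diverse members). The exact formula used in the article may differ; this is the common definition.

```python
import numpy as np
from itertools import combinations

def averaged_double_fault(member_preds, y_true):
    """member_preds: (n_classifiers, n_samples) array of predicted labels.
    Returns the pairwise double-fault rate averaged over all classifier pairs."""
    errors = np.asarray(member_preds) != np.asarray(y_true)     # per-member error indicator
    pair_rates = [np.mean(errors[i] & errors[j])                # both members wrong
                  for i, j in combinations(range(len(errors)), 2)]
    return float(np.mean(pair_rates))

y_true = np.array([0, 1, 1, 0, 1, 0])
preds = np.array([[0, 0, 0, 0, 1, 1],     # member 1
                  [1, 0, 1, 0, 1, 0],     # member 2
                  [1, 1, 1, 1, 0, 0]])    # member 3
print(averaged_double_fault(preds, y_true))   # ~0.111 for this toy example
```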
