Similar Literature
A total of 20 similar documents were retrieved.
1.
Data classification is an important topic in the field of data mining due to its wide applications. A number of related methods have been proposed based on well-known learning models such as decision trees or neural networks. Although data classification has been widely discussed, relatively few studies have explored temporal data classification. Most existing research has focused on improving classification accuracy using statistical models, neural networks, or distance-based methods; however, these approaches cannot interpret the classification results for users. In many settings, such as gene-expression microarray analysis, users prefer interpretable classification information over a classifier that merely achieves high accuracy. In this paper, we propose a novel pattern-based data mining method, namely classify-by-sequence (CBS), for classifying large temporal datasets. The main methodology behind CBS is the integration of sequential pattern mining with probabilistic induction. CBS has the merit of simplicity in implementation, and its pattern-based architecture can supply clear classification information to users. Experimental evaluation on two real time-series datasets showed that CBS delivers classification results with high accuracy. In addition, we designed a simulator to evaluate the performance of CBS on datasets with different characteristics. The experimental results show that CBS can discover hidden patterns and classify data effectively by utilizing the mined sequential patterns.
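The abstract above gives no pseudocode, so the following is a minimal sketch of the general classify-by-sequence idea under simplifying assumptions: class-conditional sequential patterns are mined first (here reduced to ordered symbol pairs above a support threshold), and a new sequence is assigned the class whose patterns it matches most strongly. The pattern miner, the threshold, the scoring rule and the toy data are illustrative choices, not the authors' exact method.

```python
from collections import Counter, defaultdict
from itertools import combinations

def mine_patterns(sequences, min_support=0.5):
    """Count ordered symbol pairs (a crude stand-in for sequential patterns)
    that appear in at least min_support of the training sequences."""
    counts = Counter()
    for seq in sequences:
        counts.update(set(combinations(seq, 2)))   # each pair counted once per sequence
    n = len(sequences)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

def train_cbs(labelled_sequences):
    """labelled_sequences: iterable of (sequence, class_label) pairs."""
    by_class = defaultdict(list)
    for seq, label in labelled_sequences:
        by_class[label].append(seq)
    return {label: mine_patterns(seqs) for label, seqs in by_class.items()}

def classify(model, sequence):
    """Score each class by the summed support of its patterns present in the
    query sequence and return the best-scoring class."""
    present = set(combinations(sequence, 2))
    scores = {label: sum(s for p, s in patterns.items() if p in present)
              for label, patterns in model.items()}
    return max(scores, key=scores.get)

# Toy usage on symbolic (e.g. SAX-discretised) time series with two classes.
data = [("aabbc", "up"), ("abbcc", "up"), ("ccbba", "down"), ("cbbaa", "down")]
model = train_cbs([(list(s), y) for s, y in data])
print(classify(model, list("abbc")))   # expected output: up
```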

2.
A Survey of Deep Convolutional Neural Network Models for Image Classification    (Cited by: 3; self-citations: 0; other citations: 3)
Image classification is an important task in computer vision, and traditional image classification methods have certain limitations. With the development of artificial intelligence, deep learning techniques have matured, and image classification using deep convolutional neural networks (CNNs) has become a research hotspot; the network architectures used for image classification have grown increasingly diverse, and their performance far exceeds that of traditional methods. Focusing on the architectures of deep CNN models for image classification, and following the course of model development and model optimization, this paper divides deep CNNs into four categories: classical deep CNN models, attention-mechanism deep CNN models, lightweight deep CNN models, and neural architecture search models. The construction methods and characteristics of each category are comprehensively reviewed, and the performance of the various classification models is compared and analysed. Although the structural design of deep CNN models has become ever more refined and optimization methods ever more powerful, with image classification accuracy continually being pushed higher while parameter counts gradually fall and training and inference become faster, deep CNN models still have certain limitations. This paper outlines the remaining problems and possible future research directions: deep CNN models mainly perform image classification in a supervised manner and are constrained by the quality and scale of datasets, so unsupervised and semi-supervised deep CNN models will be one of the key future research directions; the speed and resource consumption of deep CNN models remain unsatisfactory, making deployment on mobile devices challenging; model optimization methods and metrics for judging model quality need further study; and manually designing deep CNN architectures is time- and labour-intensive, so neural architecture search will be the direction of future deep CNN model design.

3.
Hyperspectral and thermal infrared (TIR) multispectral remote sensing have great potential for surface geological mapping. This paper investigates the potential impact of combining these data on the comparative accuracy of different classification methods. A series of simulated datasets based on the characteristics of the Airborne Visible/InfraRed Imaging Spectrometer (AVIRIS) and MODIS/ASTER Airborne Simulator (MASTER) sensors was created from surface reflectance and emissivity data derived from library spectra of 16 common minerals and rocks occurring in Cuprite, Nevada. System noise, illumination effects, the presence of vegetation, and spectral mixing were added to create the simulated data. Five commonly used classification algorithms, minimum distance, maximum likelihood classification, binary encoding, spectral angle mapper (SAM) and spectral feature fitting (SFF), were applied to all datasets. All the classification methods, excluding binary encoding, achieved nominal to significant improvement in overall accuracy when applied to the combined datasets in comparison to using only the AVIRIS dataset. Furthermore, certain classification methods of the combined datasets show a marked increase in individual rock or mineral class accuracies. Limestone, silicified and muscovite, for instance, show an improvement of almost 30% or greater in either producer's or user's accuracy using the combined datasets with SAM. SFF provides a great improvement in accuracy for limestone, quartz and muscovite. In terms of overall comparative accuracy for the individual and the combined datasets, maximum likelihood classification shows the best performance. For the simulated AVIRIS data, SFF was generally superior to SAM, although the accuracy of SAM applied to the combined datasets was slightly better than that of SFF. SAM applied to the combined datasets increases classification accuracy for some minerals and rocks which do not exhibit distinct absorption features in the TIR region, while for SFF, only the accuracy of minerals and rocks with characteristic absorption features in the TIR region is improved.
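As a small illustration of one of the five classifiers compared above, the snippet below implements the spectral angle mapper in a few lines of NumPy: each pixel is assigned the library spectrum with which it forms the smallest spectral angle. The library spectra and band count are random placeholders, not the Cuprite library data used in the study.

```python
import numpy as np

def spectral_angle_mapper(pixels, library):
    """pixels: (n_pixels, n_bands); library: (n_classes, n_bands).
    Returns, for each pixel, the index of the library spectrum with the
    smallest spectral angle."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ lib.T, -1.0, 1.0))   # (n_pixels, n_classes)
    return angles.argmin(axis=1)

rng = np.random.default_rng(0)
library = rng.random((16, 224))          # e.g. 16 minerals/rocks, 224 AVIRIS-like bands
pixels = library[[3, 7, 11]] + 0.01 * rng.standard_normal((3, 224))
print(spectral_angle_mapper(pixels, library))   # expected: [ 3  7 11]
```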

4.

Medical data classification is applied in intelligent medical decision support systems to classify diseases into different categories. Several classification methods are commonly used in various healthcare settings. These techniques are suitable for improving the quality of prediction, the early identification of illnesses, and disease classification. The classification difficulties in the healthcare area centre on the results of healthcare data analysis or the description of medicines by healthcare professionals. This study concentrates on applying uncertainty-based (i.e. rough set) pattern classification techniques to UCI healthcare data for the diagnosis of diseases in different patients. In this study, covering-based rough set classification (CRS, the proposed pattern classification approach) is applied to UCI healthcare data. The proposed CRS gives more effective results than the delicate pattern classifier model. The results of applying the CRS classification method to UCI healthcare data analysis cover a variety of disease diagnoses. The performance of the proposed covering-based rough set classification is contrasted with other approaches, such as rough set (RS)-based classification methods, Kth nearest neighbour, improved bijective soft set, support vector machine, modified soft rough set and back propagation neural network methodologies, using different evaluation measures.


5.
The decision tree method has grown fast in the past two decades and its performance in classification is promising. Tree-based ensemble algorithms have been used to improve the performance of an individual tree. In this study, we compared four basic ensemble methods, that is, bagging tree, random forest, AdaBoost tree and AdaBoost random tree, in terms of the tree size, ensemble size, band selection (BS), random feature selection, classification accuracy and efficiency in ecological zone classification in Clark County, Nevada, through multi-temporal multi-source remote-sensing data. Furthermore, two BS schemes based on feature importance of the bagging tree and AdaBoost tree were also considered and compared. We conclude that random forest or AdaBoost random tree can achieve accuracies at least as high as bagging tree or AdaBoost tree with higher efficiency; and although bagging tree and random forest can be more efficient, AdaBoost tree and AdaBoost random tree can provide a significantly higher accuracy. All ensemble methods provided significantly higher accuracies than the single decision tree. Finally, our results showed that the classification accuracy could increase dramatically by combining multi-temporal and multi-source data sets.

6.
The availability of a large amount of medical data leads to the need for intelligent disease prediction and analysis tools to extract hidden information. A large number of data mining and statistical analysis tools are used for disease prediction. Single data-mining techniques show an acceptable level of accuracy for heart disease diagnosis. This article focuses on prediction and analysis of heart disease using a weighted vote-based classifier ensemble technique. The proposed ensemble model overcomes the limitations of conventional data-mining techniques by employing an ensemble of five heterogeneous classifiers: naive Bayes, decision tree based on the Gini index, decision tree based on information gain, instance-based learner, and support vector machines. We have used five benchmark heart disease data sets taken from the UCI repository. Each data set contains a different feature space that ultimately leads to the prediction of heart disease. The effectiveness of the proposed ensemble classifier is investigated by comparing its performance with different researchers' techniques. Tenfold cross-validation is used to handle the class imbalance problem. Moreover, confusion matrices and analysis of variance statistics are used to show the prediction results of all classifiers. The experimental results verify that the proposed ensemble classifier can deal with all types of attributes and has achieved a high diagnosis accuracy of 87.37%, sensitivity of 93.75%, specificity of 92.86%, and F-measure of 82.17%. An F-ratio higher than the F-critical value and a p-value less than 0.01 for a 95% confidence interval indicate that the results are statistically significant for all the data sets.
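A weighted-vote ensemble of the five heterogeneous classifier types named above can be sketched with scikit-learn's VotingClassifier, as below. The dataset and the voting weights are placeholders: the paper works with five UCI heart disease data sets and derives its weights from classifier performance, neither of which is reproduced here.

```python
from sklearn.datasets import load_breast_cancer            # stand-in for a UCI heart dataset
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("dt_gini", DecisionTreeClassifier(criterion="gini")),
        ("dt_info", DecisionTreeClassifier(criterion="entropy")),   # information gain
        ("ibl", KNeighborsClassifier(n_neighbors=5)),               # instance-based learner
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ],
    voting="soft",
    weights=[1, 1, 1, 2, 2],   # illustrative weights only
)

# Ten-fold cross-validation, matching the evaluation protocol in the abstract.
print(cross_val_score(ensemble, X, y, cv=10, scoring="accuracy").mean())
```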

7.
This paper explores the potential of an artificial immune-based supervised classification algorithm for land-cover classification. This classifier is inspired by the human immune system and possesses properties such as nonlinear classification, self/non-self identification, and negative selection. Landsat ETM+ data of an area lying in Eastern England near the town of Littleport are used to study the performance of the artificial immune-based classifier. A univariate decision tree and a maximum likelihood classifier were used to compare its performance in terms of classification accuracy and computational cost. Results suggest that the artificial immune-based classifier works well in comparison with the maximum likelihood and decision-tree classifiers in terms of classification accuracy. Its computational cost is higher than that of the decision tree but lower than that of the maximum likelihood classifier. Another data set from an area in Spain is also used to compare the performance of the immune-based supervised classifier with the maximum likelihood and decision-tree classification algorithms. Results suggest an improved performance with the immune-based classifier in terms of classification accuracy with this data set, too. The design of an artificial immune-based supervised classifier requires several user-defined parameters to be set, so this work is extended to study the effect of varying the values of six parameters on classification accuracy. Finally, a comparison with a backpropagation neural network suggests that the neural network classifier provides higher classification accuracies with both data sets, but the results are not statistically significant.

8.
Due to technological improvements, the number and volume of datasets are increasing considerably, bringing about the need for additional memory and computational complexity. To work with massive datasets in an efficient way, feature selection, data reduction, rule-based and exemplar-based methods have been introduced. This study presents a method, which may be called joint generalized exemplar (JGE), for classification of massive datasets. The method aims to enhance the computational performance of NGE by working against the nesting and overlapping of hyper-rectangles: overlapping parts are reassessed repeatedly with the same procedure, and non-overlapping hyper-rectangle sections that fall within the same class are joined. This provides an opportunity to have adaptive decision boundaries, and also to employ batch data searching instead of incremental searching. Classification is then done according to the distance between each particular query and the generalized exemplars. The accuracy and time requirements for classification of synthetic datasets and a benchmark dataset obtained by JGE, NGE and other popular machine learning methods were compared, and the results achieved by JGE were found acceptable.

9.
To address the low recognition rate of minority-class samples caused by imbalanced data, this paper proposes an algorithm that improves oversampling and random forests through weighting strategies, reducing the impact of data imbalance on the classifier from two aspects: data preprocessing and the algorithm itself. In the data preprocessing stage, the Synthetic Minority Oversampling Technique (SMOTE) is applied to reduce the degree of imbalance; each minority-class sample is assigned a weight according to its Euclidean distance to the remaining samples, so that each sample synthesises a different number of new samples. In the algorithm improvement stage, the Kappa coefficient is used to evaluate the classification performance of each decision tree in the random forest after training, and each tree is assigned a corresponding weight, so that trees with better classification ability carry more weight in the voting stage, improving the overall classification performance of the random forest on imbalanced data. Experiments on KEEL datasets show that, compared with the unimproved algorithms, the improved algorithm raises both the classification accuracy of minority-class samples and the overall classification performance.
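The Kappa-weighting idea in this abstract can be sketched as follows: each tree of a trained random forest is scored with Cohen's Kappa on a held-out split, and the trees' votes are weighted accordingly. Plain SMOTE from imbalanced-learn stands in for the paper's distance-weighted oversampling, and a synthetic dataset replaces the KEEL benchmarks, so this is an illustration of the scheme rather than a reproduction.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, stratify=y_tr, random_state=0)

# Oversample the minority class before fitting (plain SMOTE as a stand-in for
# the paper's distance-weighted variant).
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_fit, y_fit)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)

# Score every tree with Cohen's Kappa on the validation split; negative scores
# are clipped so poorly performing trees simply get no vote.
kappas = np.array([max(cohen_kappa_score(y_val, t.predict(X_val)), 0.0)
                   for t in rf.estimators_])

# Weighted majority vote over the trees on the test split.
votes = np.stack([t.predict(X_te) for t in rf.estimators_])        # (n_trees, n_samples)
y_pred = (np.average(votes, axis=0, weights=kappas + 1e-12) >= 0.5).astype(int)

print("plain random forest F1:  ", f1_score(y_te, rf.predict(X_te)))
print("kappa-weighted voting F1:", f1_score(y_te, y_pred))
```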

10.
A Fast New Algorithm for Multi-Source Cross-Domain Data Classification    (Cited by: 1; self-citations: 0; other citations: 1)
顾鑫 (GU Xin), 王士同 (WANG Shitong), 许敏 (XU Min). 《自动化学报》 (Acta Automatica Sinica), 2014, 40(3): 531-547
Cross-domain learning and classification aim to effectively transfer supervised learning results from multiple source domains to a target domain, so that the target domain can be classified without labels. Current cross-domain learning generally focuses on transferring from a single source domain to the target domain and on rather small sample sizes; such methods have poor domain adaptability and cannot cope with large-sample data, which directly affects the accuracy and efficiency of cross-domain classification. To exploit as much useful data from related domains as possible, this paper proposes a multiple sources cross-domain classification algorithm (MSCC), which builds several source-domain classifiers based on the logistic regression model and a consensus method, both of which have been validated by numerous experiments, and uses them jointly to guide the classification of the target-domain data. To make full and efficient use of large-sample source-domain data and to satisfy the need for fast computation on large samples, this paper further combines MSCC with the recent CDdual (dual coordinate descent method) algorithm, proposes MSCC-CDdual, a fast version of MSCC, and provides the corresponding theoretical analysis. Experimental results on artificial datasets, text datasets and image datasets show that the algorithm achieves high classification accuracy, fast running speed and good domain adaptability for large-sample datasets. The main contributions of this paper are threefold: 1) a new consensus method for multi-source cross-domain classification, which makes it possible to develop MSCC into the fast MSCC-CDdual algorithm; 2) the MSCC-CDdual fast algorithm, which is suitable both for datasets with few samples and for large-sample datasets; and 3) a demonstration that MSCC-CDdual shows unique advantages over other algorithms on high-dimensional datasets.
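A much-simplified sketch of the multi-source setup described above: one logistic regression is fitted per source domain and the target domain is labelled by a consensus (here a plain average) of the source classifiers' predicted probabilities. The paper's consensus weighting and its CDdual-based fast solver are not reproduced; LIBLINEAR's dual coordinate descent solver is used only as a loose analogue, and the shifted synthetic domains are stand-ins for real cross-domain data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Three source domains and one unlabelled target domain: the same synthetic
# problem with increasing feature shifts stands in for related domains.
sources = [make_classification(n_samples=1000, n_features=20, shift=s, random_state=0)
           for s in (0.0, 0.2, 0.4)]
X_target, y_target = make_classification(n_samples=1000, n_features=20, shift=0.3,
                                          random_state=0)

# One classifier per source; dual=True selects LIBLINEAR's dual coordinate
# descent solver (a loose analogue of CDdual, not the paper's solver).
clfs = [LogisticRegression(solver="liblinear", dual=True).fit(Xs, ys)
        for Xs, ys in sources]

# Equal-weight consensus over the source classifiers (the paper weights the
# sources; equal weights are an assumption here).
proba = np.mean([c.predict_proba(X_target)[:, 1] for c in clfs], axis=0)
pseudo_labels = (proba >= 0.5).astype(int)
print("agreement with the true target labels:", (pseudo_labels == y_target).mean())
```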

11.
Information related to land cover is immensely important to global change science. In the past decade, data sources and methodologies for creating global land cover maps from remote sensing have evolved rapidly. Here we describe the datasets and algorithms used to create the Collection 5 MODIS Global Land Cover Type product, which is substantially changed relative to Collection 4. In addition to using updated input data, the algorithm and ancillary datasets used to produce the product have been refined. Most importantly, the Collection 5 product is generated at 500-m spatial resolution, providing a four-fold increase in spatial resolution relative to the previous version. In addition, many components of the classification algorithm have been changed. The training site database has been revised, land surface temperature is now included as an input feature, and ancillary datasets used in post-processing of ensemble decision tree results have been updated. Further, methods used to correct classifier results for bias imposed by training data properties have been refined, techniques used to fuse ancillary data based on spatially varying prior probabilities have been revised, and a variety of methods have been developed to address limitations of the algorithm for the urban, wetland, and deciduous needleleaf classes. Finally, techniques used to stabilize classification results across years have been developed and implemented to reduce year-to-year variation in land cover labels not associated with land cover change. Results from a cross-validation analysis indicate that the overall accuracy of the product is about 75% correctly classified, but that the range in class-specific accuracies is large. Comparison of Collection 5 maps with Collection 4 results shows substantial differences arising from increased spatial resolution and changes in the input data and classification algorithm.

12.
In practice, there are many binary classification problems, such as credit risk assessment and medical testing to determine whether a patient has a certain disease. However, different problems have different characteristics that may lead to different difficulties, and one important characteristic is the degree of imbalance between the two classes in the data set. For data sets with different degrees of imbalance, are the commonly used binary classification methods still feasible? In this study, various binary classification models, including traditional statistical methods and newly emerged methods from artificial intelligence, such as linear regression, discriminant analysis, decision trees, neural networks, support vector machines, etc., are reviewed, and their performance in terms of classification accuracy and area under the Receiver Operating Characteristic (ROC) curve is tested and compared on fourteen data sets with different imbalance degrees. The results help to select the appropriate methods for problems with different degrees of imbalance.

13.
Hybrid models based on feature selection and machine learning techniques have significantly enhanced the accuracy of standalone models. This paper presents a feature selection-based hybrid-bagging algorithm (FS-HB) for improved credit risk evaluation. Two feature selection methods, chi-square and principal component analysis, were used for ranking and selecting the important features from the datasets. The classifiers were built on five training and test data partitions of the input data set. The performance of the hybrid algorithm was compared with that of the standalone classifiers: feature selection-based classifiers and bagging. The hybrid FS-HB algorithm performed best for qualitative datasets with fewer features and a tree-based unstable base classifier. Its performance on numeric data was also better than that of the other standalone classifiers, and comparable to bagging with only selected features. Its performance was better on the 70:30 data partition, and the type II error, which is very significant in risk evaluation, was also reduced substantially. The improved performance of FS-HB is attributed to the important features used for developing the classifier, thereby reducing the complexity of the algorithm, and to the use of ensemble methodology, which addressed the classical bias-variance trade-off and performed better than standalone classifiers.

14.
Robust classification approaches are required for accurate classification of complex land-use/land-cover categories of desert landscapes using remotely sensed data. Machine-learning ensemble classifiers have proved to be powerful for the classification of remotely sensed data. However, they have not been evaluated for classifying land-cover categories in desert regions. In this study, the performance of two machine-learning ensemble classifiers – random forests (RF) and boosted artificial neural networks – is explored in the context of classification of land use/land cover of desert landscapes. The evaluation is based on the accuracy of classification of remotely sensed data, with and without integration of ancillary data. Landsat-5 Thematic Mapper data captured for a desert landscape in the north-western coastal desert of Egypt are used with ancillary variables derived from a digital terrain model to classify 13 different land-use/land-cover categories. Results show that the two ensemble methods produce accurate land-cover classifications, with and without integrating spectral data with ancillary data. In general, the overall accuracy exceeded 85% and the kappa coefficient (κ) attained values over 0.83. The integration of ancillary data improved the performance of the boosted artificial neural networks by approximately 5% and the random forests by 9%. The latter showed overall higher accuracy; however, boosted artificial neural networks showed better generalization ability and lower overfitting tendencies. The results reveal the merit of applying ensemble methods to integrated spectral and ancillary data of similar desert landscapes for achieving high classification accuracies.

15.
As the credit industry has been growing rapidly, credit scoring models have been widely used by the financial industry to improve cash flow and credit collections. However, a large amount of redundant information and features is involved in credit datasets, which leads to lower accuracy and higher complexity of the credit scoring model. Effective feature selection methods are therefore necessary for credit datasets with huge numbers of features. In this paper, a novel approach to feature selection based on rough sets and scatter search, called RSFS, is proposed. In RSFS, conditional entropy is used as the heuristic to search for the optimal solutions. Two credit datasets in the UCI database are selected to demonstrate the competitive performance of RSFS within three credit models: a neural network model, a J48 decision tree and logistic regression. The experimental results show that RSFS has superior performance in saving computational costs and improving classification accuracy compared with the base classification methods.

16.
As a classic classification algorithm, the decision tree is widely used in medical data analysis because its classification rules are simple and easy to understand. However, the class imbalance of medical data reduces the classification performance of decision tree algorithms. Data resampling is a common way to address class imbalance, improving the classification performance on minority-class samples by changing the sample distribution. Existing resampling methods are usually independent of the subsequent learning algorithm, so the resampled data are not necessarily effective for building weak classifiers. In view of this, a hybrid sampling algorithm based on C4.5 is proposed. The algorithm uses C4.5 as the evaluation criterion of iterative sampling to control the iterative process of oversampling and undersampling, dynamically updates the oversampling rate according to the imbalance ratio of the data, and finally combines the predictions of multiple weak classifiers through a voting mechanism. Comparative experiments on nine UCI datasets demonstrate the effectiveness of the proposed algorithm, which also achieves accurate prediction on missed abortion data.
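The hybrid sampling loop described above can be sketched roughly as follows: SMOTE oversampling at several minority-to-majority ratios is followed by random undersampling, an entropy-criterion decision tree (a C4.5 stand-in, since scikit-learn implements CART) monitors each resampled set on a validation split, and the resulting weak trees are combined by majority vote. The fixed ratios, the stopping rule and the voting details are simplifications of the paper's dynamically updated sampling rate.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

trees = []
for ratio in (0.3, 0.5, 0.7, 1.0):   # increasing minority-to-majority ratios
    X_os, y_os = SMOTE(sampling_strategy=ratio, random_state=0).fit_resample(X_tr, y_tr)
    X_hs, y_hs = RandomUnderSampler(sampling_strategy=1.0,
                                    random_state=0).fit_resample(X_os, y_os)
    tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_hs, y_hs)
    print(f"ratio={ratio:.1f}  validation F1={f1_score(y_val, tree.predict(X_val)):.3f}")
    trees.append(tree)

# Combine the weak trees by simple majority vote.
votes = np.stack([t.predict(X_val) for t in trees])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("voted ensemble F1:", f1_score(y_val, y_pred))
```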

17.
Feature selection is an important data preprocessing step for the construction of an effective bankruptcy prediction model. The prediction performance can be affected by the employed feature selection and classification techniques. However, there have been very few studies of bankruptcy prediction that identify the best combination of feature selection and classification techniques. In this study, two types of feature selection methods, including filter- and wrapper-based methods, are considered, and two types of classification techniques, including statistical and machine learning techniques, are employed in the development of the prediction methods. In addition, bagging and boosting ensemble classifiers are also constructed for comparison. The experimental results based on three related datasets that contain different numbers of input features show that the genetic algorithm as the wrapper-based feature selection method performs better than the filter-based one by information gain. It is also shown that the lowest prediction error rates for the three datasets are provided by combining the genetic algorithm with the naïve Bayes and support vector machine classifiers without bagging and boosting.

18.
A Feature Selection Method Based on Maximum Mutual Information and Maximum Correlation Entropy    (Cited by: 5; self-citations: 1; other citations: 4)
Feature selection algorithms fall mainly into two categories, filters and wrappers, and algorithm models based on different theories have been proposed, but problems remain, such as limited processing capability and low classification accuracy of the selected subsets. Based on an information entropy model of fuzzy rough sets, this paper proposes a maximum mutual information, maximum correlation entropy criterion and, according to this criterion, designs a new feature selection method that can simultaneously handle mixed information such as discrete data, continuous data and fuzzy data. Experiments on UCI datasets show that, compared with other algorithms, the proposed algorithm achieves higher accuracy and better stability, and is therefore effective.
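The fuzzy rough set entropy model is not reproduced here; as a rough stand-in, the sketch below ranks features by their mutual information with the class label (capturing only the "maximum mutual information" half of the criterion) and checks how a classifier performs on the selected subsets.

```python
from sklearn.datasets import load_wine                      # a UCI benchmark placeholder
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)

for k in (3, 5, 8, X.shape[1]):
    # Scale, keep the k features with the highest mutual information, classify.
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=k),
        KNeighborsClassifier(),
    )
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"top-{k} features by mutual information: accuracy = {acc:.3f}")
```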

19.
While extensive research in data mining has been devoted to developing better feature selection techniques, none of this research has examined the intrinsic relationship between dataset characteristics and a feature selection technique's performance. Thus, our research examines experimentally how dataset characteristics affect both the accuracy and the time complexity of feature selection. To evaluate the performance of various feature selection techniques on datasets of different characteristics, extensive experiments with five feature selection techniques, three types of classification algorithms, seven types of dataset characterization methods and all possible combinations of dataset characteristics are conducted on 128 publicly available datasets. We apply the decision tree method to evaluate the interdependencies between dataset characteristics and performance. The results of the study reveal the intrinsic relationship between dataset characteristics and feature selection techniques' performance. Additionally, our study contributes to research in data mining by providing a roadmap for future research on feature selection and a significantly wider framework for comparative analysis.

20.

In the fields of pattern recognition and machine learning, the use of data preprocessing algorithms has been increasing in recent years to achieve high classification performance. In particular, it has become inevitable to use the data preprocessing method prior to classification algorithms in classifying medical datasets with the nonlinear and imbalanced data distribution. In this study, a new data preprocessing method has been proposed for the classification of Parkinson, hepatitis, Pima Indians, single proton emission computed tomography (SPECT) heart, and thoracic surgery medical datasets with the nonlinear and imbalanced data distribution. These datasets were taken from UCI machine learning repository. The proposed data preprocessing method consists of three steps. In the first step, the cluster centers of each attribute were calculated using k-means, fuzzy c-means, and mean shift clustering algorithms in medical datasets including Parkinson, hepatitis, Pima Indians, SPECT heart, and thoracic surgery medical datasets. In the second step, the absolute differences between the data in each attribute and the cluster centers are calculated, and then, the average of these differences is calculated for each attribute. In the final step, the weighting coefficients are calculated by dividing the mean value of the difference to the cluster centers, and then, weighting is performed by multiplying the obtained weight coefficients by the attribute values in the dataset. Three different attribute weighting methods have been proposed: (1) similarity-based attribute weighting in k-means clustering, (2) similarity-based attribute weighting in fuzzy c-means clustering, and (3) similarity-based attribute weighting in mean shift clustering. In this paper, we aimed to aggregate the data in each class together with the proposed attribute weighting methods and to reduce the variance value within the class. Thus, by reducing the value of variance in each class, we have put together the data in each class and at the same time, we have further increased the discrimination between the classes. To compare with other methods in the literature, the random subsampling has been used to handle the imbalanced dataset classification. After attribute weighting process, four classification algorithms including linear discriminant analysis, k-nearest neighbor classifier, support vector machine, and random forest classifier have been used to classify imbalanced medical datasets. To evaluate the performance of the proposed models, the classification accuracy, precision, recall, area under the ROC curve, κ value, and F-measure have been used. In the training and testing of the classifier models, three different methods including the 50–50% train–test holdout, the 60–40% train–test holdout, and tenfold cross-validation have been used. The experimental results have shown that the proposed attribute weighting methods have obtained higher classification performance than random subsampling method in the handling of classifying of the imbalanced medical datasets.
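The three-step weighting described above can be sketched for the k-means variant as follows. The abstract's wording of the final ratio is ambiguous, so the weight used here, the attribute mean divided by the mean absolute distance to the attribute's nearest cluster centre, is one possible reading rather than a confirmed reconstruction of the paper's formula, and the dataset is a placeholder for the UCI medical sets.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer   # placeholder for the UCI medical sets

def kmeans_attribute_weights(X, n_clusters=2, random_state=0):
    """Return one weight per attribute (column) of X, following the three steps
    described in the abstract for the k-means variant."""
    weights = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        col = X[:, [j]]                              # cluster each attribute on its own
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(col)
        nearest_center = km.cluster_centers_[km.labels_, 0]   # assigned centre per value
        mean_abs_diff = np.abs(col[:, 0] - nearest_center).mean()
        # One possible reading of the ratio in the abstract (an assumption):
        weights[j] = col[:, 0].mean() / (mean_abs_diff + 1e-12)
    return weights

X, _ = load_breast_cancer(return_X_y=True)
w = kmeans_attribute_weights(X)
X_weighted = X * w          # final step: multiply each attribute by its coefficient
print(np.round(w, 2))
```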

