Similar Articles
 20 similar articles found (search time: 31 ms)
1.
A Nearest Hyperrectangle Learning Method   Total citations: 5 (self-citations: 0, citations by others: 5)
Salzberg, Steven. Machine Learning, 1991, 6(3): 251-276
This paper presents a theory of learning called nested generalized exemplar (NGE) theory, in which learning is accomplished by storing objects in Euclidean n-space, E^n, as hyperrectangles. The hyperrectangles may be nested inside one another to arbitrary depth. In contrast to generalization processes that replace symbolic formulae by more general formulae, the NGE algorithm modifies hyperrectangles by growing and reshaping them in a well-defined fashion. The axes of these hyperrectangles are defined by the variables measured for each example. Each variable can have any range on the real line; thus the theory is not restricted to symbolic or binary values. This paper describes some advantages and disadvantages of NGE theory, positions it as a form of exemplar-based learning, and compares it to other inductive learning theories. An implementation has been tested in three different domains, for which results are presented below: prediction of breast cancer, classification of iris flowers, and prediction of survival times for heart attack patients. The results in these domains support the claim that NGE theory can be used to create compact representations with excellent predictive accuracy.
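The prediction step at the heart of NGE reduces to a point-to-hyperrectangle distance: zero when the query falls inside a rectangle, otherwise the norm of the per-axis overshoot beyond the nearest face. A minimal Python sketch of that step follows (an illustration only; the nesting, growing, and reshaping machinery of the full algorithm is omitted):

```python
import numpy as np

def rect_distance(x, lower, upper):
    """Distance from point x to an axis-parallel hyperrectangle: zero
    inside it, otherwise the Euclidean norm of the per-axis overshoot
    beyond the nearest face."""
    overshoot = np.maximum(lower - x, 0) + np.maximum(x - upper, 0)
    return np.linalg.norm(overshoot)

def nge_predict(x, exemplars):
    """Classify x by the label of the nearest hyperrectangle.
    `exemplars` is a list of (lower, upper, label) triples; a stored
    point is a degenerate rectangle with lower == upper."""
    return min(exemplars, key=lambda e: rect_distance(x, e[0], e[1]))[2]

# Toy usage: two rectangles and one point exemplar.
exemplars = [
    (np.array([0.0, 0.0]), np.array([1.0, 1.0]), "A"),
    (np.array([2.0, 2.0]), np.array([3.0, 3.0]), "B"),
    (np.array([1.5, 0.5]), np.array([1.5, 0.5]), "A"),
]
print(nge_predict(np.array([0.5, 0.5]), exemplars))  # -> "A" (inside a rectangle)
```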

2.
This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the known Discriminative Common Vectors method with kernels. Our method combines the advantages of kernel methods, which model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing, and pattern recognition applications. Two different approaches to this generalization are proposed: the first based on the Kernel Trick and the second based on the Nonlinear Projection Trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects, and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, whether linear or kernel-based. In addition, the KGDCV-NPT approach offers a considerable computational gain without compromising the accuracy of the model.

3.
Text categorization presents unique challenges to traditional classification methods due to the large number of features inherent in datasets from real-world applications and the large number of training samples. In high-dimensional document data, the classes are typically characterized only by subsets of features, which usually differ across topics. This paper presents a simple but effective classifier for text categorization based on class-dependent projection. By projecting onto a set of individual subspaces, samples belonging to different document classes are separated so that they can be easily classified. This is achieved by developing a new supervised feature weighting algorithm that learns the optimized subspaces for all document classes. Experiments carried out on common benchmark corpora show that the proposed method achieves both higher classification accuracy and lower computational cost than several well-known text classifiers, especially for datasets containing document categories with overlapping topics.
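As a rough sketch of the class-dependent projection idea, one can learn one feature-weight vector per class and assign a document to the class whose weighted subspace matches it best. The weighting below (class-mean profiles, L1-normalized) is an illustrative stand-in, not the paper's supervised weighting algorithm:

```python
import numpy as np

def fit_class_weights(X, y, eps=1e-12):
    """One non-negative weight vector per class: here simply the class-mean
    feature profile, normalized to unit L1 norm (a stand-in for the
    paper's supervised feature weighting)."""
    weights = {}
    for c in np.unique(y):
        mean = X[y == c].mean(axis=0)
        weights[c] = mean / (mean.sum() + eps)
    return weights

def predict(X, weights):
    """Assign each document to the class whose weighted projection
    (weighted inner product with the document vector) is largest."""
    classes = list(weights)
    scores = np.stack([X @ weights[c] for c in classes], axis=1)
    return np.array(classes)[scores.argmax(axis=1)]

# Toy bag-of-words usage: 4 documents, 3 terms, 2 topics.
X = np.array([[3, 0, 1], [2, 1, 0], [0, 4, 1], [1, 3, 0]], dtype=float)
y = np.array([0, 0, 1, 1])
print(predict(X, fit_class_weights(X, y)))  # -> [0 0 1 1] on this toy data
```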

4.
Healthcare data analysis is currently a challenging and crucial research issue for the development of robust disease diagnosis and prediction systems. Many specific and a few common methods have been discussed in the literature for healthcare data classification. The present study implements 32 classification methods from six categories (Bayes, function-based, lazy, meta, rule-based, and tree-based) with the objective of identifying the best and most broadly applicable categories and methods in healthcare data mining. The performance of each classification method has been evaluated on analysis time, classification accuracy, precision, recall, F-measure, area under the receiver operating characteristic curve, root mean square error, kappa coefficient, Kulczynski's measure, and the Fowlkes-Mallows index, and compared with more than 90 classification methods used in past studies. Seventeen healthcare datasets related to thyroid, cancer, skin disease, heart disease, hepatitis, lymphography, audiology, diabetes, surgery, arrhythmia, postsurvival, liver, and tumour conditions have been used in the performance assessment of the classification methods. The tree-based classification methods show better performance (an average classification accuracy of 79.92% and a maximum accuracy of 99.50%; an analysis time of 3.91 s for the logistic model tree classifier) than the other methods. Furthermore, the association between datasets and classification methods is discussed.
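The shape of such a comparison is easy to reproduce with scikit-learn: one representative classifier per category, scored by cross-validated accuracy. The sketch below uses a single stand-in medical dataset and omits rule-based learners and the full metric suite:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# One representative per category (rule-based learners are skipped:
# scikit-learn ships no direct RIPPER-style equivalent).
candidates = {
    "Bayes": GaussianNB(),
    "function-based": make_pipeline(StandardScaler(), LogisticRegression()),
    "lazy": KNeighborsClassifier(),
    "meta": RandomForestClassifier(random_state=0),
    "tree-based": DecisionTreeClassifier(random_state=0),
}

X, y = load_breast_cancer(return_X_y=True)  # stand-in medical dataset
for name, clf in candidates.items():
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name:15s} mean 10-fold accuracy = {acc:.3f}")
```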

5.
In supervised classification, we often encounter real-world problems in which the data are not equitably distributed among the different classes; in such cases we are dealing with so-called imbalanced data sets. One of the most widely used techniques for this problem consists of preprocessing the data prior to the learning process. This paper proposes a method belonging to the family of nested generalized exemplar methods, which accomplishes learning by storing objects in Euclidean n-space. Classification of new data is performed by computing their distance to the nearest generalized exemplar. The method is optimized by selecting the most suitable generalized exemplars using evolutionary algorithms. An experimental analysis is carried out over a wide range of highly imbalanced data sets, using the statistical tests suggested in the specialized literature. The results show that our evolutionary proposal outperforms other classic and recent models in accuracy and requires storing fewer generalized exemplars.

6.
Algorithms based on Nested Generalized Exemplar (NGE) theory (Salzberg, 1991) classify new data points by computing their distance to the nearest generalized exemplar (i.e., either a point or an axis-parallel rectangle). They combine the distance-based character of nearest neighbor (NN) classifiers with the axis-parallel rectangle representation employed in many rule-learning systems. An implementation of NGE was compared to the k-nearest neighbor (kNN) algorithm in 11 domains and found to be significantly inferior to kNN in 9 of them. Several modifications of NGE were studied to understand the cause of its poor performance. These show that its performance can be substantially improved by preventing NGE from creating overlapping rectangles, while still allowing complete nesting of rectangles. Performance can be further improved by modifying the distance metric to allow weights on each of the features (Salzberg, 1991). Best results were obtained in this study when the weights were computed using mutual information between the features and the output class. The best version of NGE developed is a batch algorithm (BNGE FWMI) that has no user-tunable parameters. BNGE FWMI's performance is comparable to the first-nearest neighbor algorithm (also incorporating feature weights). However, the k-nearest neighbor algorithm is still significantly superior to BNGE FWMI in 7 of the 11 domains, and inferior to it in only 2. We conclude that, even with our improvements, the NGE approach is very sensitive to the shape of the decision boundaries in classification problems. In domains where the decision boundaries are axis-parallel, the NGE approach can produce excellent generalization with interpretable hypotheses. In all domains tested, NGE algorithms require much less memory to store generalized exemplars than is required by NN algorithms.
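The feature-weighting component (FWMI) computes weights from the mutual information between each feature and the output class, which then scale the axes of the distance computation. A hedged sketch using scikit-learn's mutual information estimator (an illustration, not the paper's code):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)

# Weight each feature by its estimated mutual information with the class,
# normalized to sum to 1.
mi = mutual_info_classif(X, y, random_state=0)
w = mi / mi.sum()

def weighted_rect_distance(x, lower, upper, w):
    """Feature-weighted distance from x to an axis-parallel rectangle:
    per-axis overshoot beyond the nearest face, scaled by the weights."""
    overshoot = np.maximum(lower - x, 0) + np.maximum(x - upper, 0)
    return np.sqrt(np.sum(w * overshoot ** 2))

# Example: distance from the first sample to the bounding box of class 0.
lower, upper = X[y == 0].min(axis=0), X[y == 0].max(axis=0)
print(weighted_rect_distance(X[0], lower, upper, w))  # 0.0 (inside the box)
```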

7.
To address the problems of existing oversampling methods, namely the tendency to introduce noise points and the overlap of synthetic samples, an oversampling method for imbalanced data based on natural nearest neighbors is proposed. The natural nearest neighbors of each minority-class sample are determined, with the number of neighbors computed adaptively by the algorithm, reflecting how densely or sparsely the samples are distributed. The minority-class samples are clustered according to the natural-neighbor relation, and new samples are generated from the core points in dense regions and the non-core points in sparse regions of each cluster. On two-dimensional synthetic datasets and UCI...
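A simplified sketch of the natural-neighbor idea: the neighborhood size r grows until every point appears in some other point's r-NN list, and a point's natural neighbors are its mutual neighbors at that r; synthetic samples are then interpolated between natural neighbors, SMOTE-style. The core/non-core distinction and the clustering step of the paper are not reproduced here:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def natural_neighbors(X, max_r=20):
    """Simplified natural-neighbor search: grow the neighborhood size r
    until every point appears in some other point's r-NN list, then
    return each point's mutual neighbors at that r."""
    n = len(X)
    k = min(max_r + 1, n)
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
    for r in range(1, k):
        in_degree = np.zeros(n, dtype=int)
        for i in range(n):
            in_degree[idx[i, 1:r + 1]] += 1
        if np.all(in_degree > 0):          # every point is someone's neighbor
            break
    return [[j for j in idx[i, 1:r + 1] if i in idx[j, 1:r + 1]]
            for i in range(n)]

def oversample_minority(X_min, rng=None):
    """Generate synthetic minority samples by interpolating between each
    minority point and one of its natural neighbors (a SMOTE-style step
    with adaptive neighborhood sizes)."""
    rng = rng or np.random.default_rng(0)
    synthetic = []
    for i, neigh in enumerate(natural_neighbors(X_min)):
        if neigh:
            j = rng.choice(neigh)
            synthetic.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_min = np.random.default_rng(1).normal(0, 1, (15, 2))
print(oversample_minority(X_min).shape)
```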

8.
To address the problems of high dimensionality, low accuracy, and partially overlapping clusters in image clustering, an efficient multi-label image clustering method based on link hierarchical clustering is proposed. The method computes similarity from image distances and detects overlapping clusters through link clustering, so that each image may belong to several clusters and the cluster labels carry clearer meaning. To verify its effectiveness, the method was used to cluster image datasets returned by a search engine for specific keywords. The results show that it can effectively discover clusters with overlapping partitions, and that the resulting clusters are semantically clear.

9.
An intrusion detection method combining clustering-based learning with incremental SVM training is proposed. It integrates cluster analysis, sample pruning, and incremental learning: similar training samples are aggregated to support multi-class classification, and redundant similar samples are discarded in favor of their representative points, reducing the number of training samples and improving learning efficiency. In addition, an incremental learning method based on a generalized KKT criterion is adopted, which effectively mitigates the problems that arise in multi-class intrusion detection, where the sample dataset is excessively large, learning is slow, and it is difficult to keep the SVM's detection capability continuously optimized.
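One way to picture the sample-pruning loop is to retrain on the current support vectors plus whichever new samples violate the margin; the margin test below is a rough stand-in for the generalized KKT criterion named in the abstract, and the clustering stage is omitted:

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm(batches):
    """Train on the first batch, then for each new batch keep only the
    current support vectors plus the new samples that violate the margin
    (a rough stand-in for generalized-KKT selection). Binary labels
    in {-1, +1} are assumed."""
    X_keep, y_keep = batches[0]
    model = SVC(kernel="rbf").fit(X_keep, y_keep)
    for X_new, y_new in batches[1:]:
        violate = y_new * model.decision_function(X_new) < 1
        X_keep = np.vstack([model.support_vectors_, X_new[violate]])
        y_keep = np.concatenate([y_keep[model.support_], y_new[violate]])
        model = SVC(kernel="rbf").fit(X_keep, y_keep)
    return model

rng = np.random.default_rng(0)
def make_batch(n):
    y = rng.choice([-1, 1], size=n)
    X = rng.normal(0.0, 1.0, (n, 2)) + 2.0 * y[:, None]
    return X, y

model = incremental_svm([make_batch(200) for _ in range(3)])
X_test, y_test = make_batch(500)
print("held-out accuracy:", (model.predict(X_test) == y_test).mean())
```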

10.
Hyperspectral image classification algorithms usually iterate over the pixels of an image point by point, and they differ considerably in computational complexity and degree of parallelism. As the spatial, spectral, and radiometric resolutions of hyperspectral remote sensing images keep increasing, these algorithms cannot meet the demand for real-time processing of massive remote sensing data. By analyzing the NPU's integrated storage-compute model and the implementation steps of remote sensing image classification algorithms, a low-rank sparse subspace clustering (LRSSC) algorithm is designed for a low-power CPU+NPU heterogeneous computing architecture, offloading data-intensive computation to the NPU and exploiting the NPU's data-driven parallelism and built-in AI acceleration to classify massive remote sensing data in real time with machine learning algorithms. Inspired by the big.LITTLE computing paradigm, the CPU+NPU heterogeneous architecture combines 8-bit and lower-precision NPUs to raise overall throughput while reducing energy consumption during graph-network inference. Experimental results show that, compared with the LRSSC algorithm on a CPU architecture and on a CPU+GPU heterogeneous architecture, the LRSSC algorithm on the CPU+NPU heterogeneous architecture achieves a 3x to 14x speedup on the Pavia University remote sensing dataset.

11.
In the past decades, we have witnessed a revolution in information technology. Routine collection of systematically generated data is now commonplace. Databases with hundreds of fields (variables) and billions of records (observations) are not unusual. This presents a difficulty for classical data analysis methods, mainly owing to limitations of computer memory and computational cost (in time, for example). In this paper, we propose an intelligent regression analysis methodology suitable for modeling massive datasets. The basic idea is to split the entire dataset into several blocks, apply classical regression techniques to the data in each block, and finally combine the block-level regression results via weighted averages. Theoretical justification of the soundness of the proposed method is given, and its empirical performance, based on an extensive simulation study, is discussed.
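The block-and-combine idea is straightforward to sketch: fit ordinary least squares on each block and average the coefficient vectors. The size-proportional weights used here are an assumption for illustration; the paper derives its own weighting:

```python
import numpy as np

def blockwise_ols(X, y, n_blocks):
    """Fit OLS on each block and combine the coefficient estimates by a
    weighted average (weights proportional to block size here)."""
    idx = np.array_split(np.arange(len(X)), n_blocks)
    coefs, weights = [], []
    for block in idx:
        beta, *_ = np.linalg.lstsq(X[block], y[block], rcond=None)
        coefs.append(beta)
        weights.append(len(block))
    return np.average(np.stack(coefs), axis=0, weights=np.asarray(weights, float))

# Toy usage: recover beta = [2, -1] from 1e5 rows without ever solving
# the full least-squares problem in one pass.
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=100_000)
print(blockwise_ols(X, y, n_blocks=10))  # ~ [ 2. -1.]
```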

12.
Survey data are often incomplete, and classification with incomplete survey data is a relatively new subject. This study proposes a Hopfield neural network based classification model for incomplete survey data. Using this model, an incomplete pattern is translated into fuzzy patterns. These fuzzy patterns, along with the patterns without missing values, are then used as the exemplar set for training the Hopfield neural network. The classifier also retains fuzzy class-membership information for each exemplar pattern. When a test sample is presented, the neural network finds the exemplar that best matches the test pattern and returns the classification result. Compared with other classification techniques, the proposed method can exploit more of the information provided by data with missing values, and it can reveal the risk attached to the classification result on an individual-observation basis.

13.
When a phenomenon is described by a parametric model and multiple datasets are available, a key problem in statistics is to discover which datasets are characterized by the same parameter values. Equivalently, one is interested in partitioning the family of datasets into blocks collecting data that are described by the same parameters. Because of noise, different partitions can be consistent with the data, in the sense that they are accepted by generalized likelihood ratio tests at a given confidence level. Since testing all possible partitions is computationally unaffordable, we propose an algorithm for finding all acceptable partitions while avoiding testing unnecessary ones. The core of our method is an efficient procedure, based on partial order relations on partitions, for computing all partitions that satisfy an upper bound on a monotone function. The reduction of the computational burden brought about by the algorithm is analyzed both theoretically and experimentally. Applications to the identification of switched systems are also presented.

14.
Although the k-NN classifier is a popular classification method, it suffers from high computational cost and storage requirements. This paper proposes two effective cluster-based data reduction algorithms for efficient k-NN classification. Both have low preprocessing cost and can achieve high data reduction rates while keeping k-NN classification accuracy at high levels. The first proposed algorithm, called reduction through homogeneous clusters (RHC), is based on a fast preprocessing clustering procedure that creates homogeneous clusters; the centroids of these clusters constitute the reduced training set. The second proposed algorithm is a dynamic version of RHC that retains all of its properties and, in addition, can manage datasets that do not fit in main memory, making it appropriate for dynamic environments where new training data gradually become available. Experimental results on fourteen datasets illustrate that both algorithms are faster and achieve higher reduction rates than four known methods, while maintaining high classification accuracy.
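In outline, RHC splits any non-homogeneous cluster with k-means (k set to the number of classes present, seeded at the class means) and emits one centroid prototype per homogeneous cluster. A hedged sketch of that recursion, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def rhc_reduce(X, y):
    """Reduction through Homogeneous Clusters, in outline: recursively
    split non-homogeneous clusters with k-means (k = number of classes
    present, seeded at the class means) and emit one (centroid, label)
    prototype per homogeneous cluster."""
    prototypes, stack = [], [(X, y)]
    while stack:
        Xc, yc = stack.pop()
        classes = np.unique(yc)
        if len(classes) == 1:                       # homogeneous: keep centroid
            prototypes.append((Xc.mean(axis=0), classes[0]))
            continue
        means = np.stack([Xc[yc == c].mean(axis=0) for c in classes])
        km = KMeans(n_clusters=len(classes), init=means, n_init=1).fit(Xc)
        if len(np.unique(km.labels_)) == 1:         # unsplittable: majority label
            maj = classes[np.argmax([(yc == c).sum() for c in classes])]
            prototypes.append((Xc.mean(axis=0), maj))
            continue
        for k in range(len(classes)):
            mask = km.labels_ == k
            if mask.any():
                stack.append((Xc[mask], yc[mask]))
    return prototypes

# Toy usage: the prototypes then serve as the k-NN training set.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
print(len(X), "training points ->", len(rhc_reduce(X, y)), "prototypes")
```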

15.
Associative classification (AC), which is based on association rules, has shown great promise over many other classification techniques. To implement AC effectively, we need to tackle the problem of the very large search space of candidate rules during rule discovery and to incorporate the discovered association rules into the classification process. This paper proposes a new approach, artificial immune system-associative classification (AIS-AC), which is based on AIS, for mining association rules effectively for classification. Instead of exhaustively searching for all possible association rules, AIS-AC finds, in an evolutionary manner, only a subset of association rules suitable for effective AC. We also evaluate the performance of the proposed AIS-AC approach on large datasets. The results show that the approach deals efficiently with the complexity of the rule search space while achieving good classification accuracy. This is especially important when mining association rules from large datasets, in which the search space of rules is huge.

16.
Sentiment classification plays an important role in various research areas, including e-commerce applications, and a number of advanced computational intelligence techniques drawing on machine learning and computational linguistics have been proposed in the literature for improved sentiment classification results. While such studies focus on improving performance with new techniques or on extending existing algorithms over previously used datasets, few provide practitioners with insight into which techniques are better suited to datasets with particular properties. This paper applies four sentiment classification techniques, three from machine learning (Naïve Bayes, SVM, and Decision Tree) plus a sentiment orientation approach, to datasets from various sources (IMDB, Twitter, hotel review, and Amazon review datasets) to learn how data properties, including dataset size, length of target documents, and subjectivity of the data, affect the performance of those techniques. The results of computational experiments confirm the sensitivity of the techniques to data properties, including training data size and the document length and subjectivity of the training/test data. The theoretical and practical implications of the findings are discussed.

17.
Label Distribution Learning (LDL) is a general learning framework that assigns an instance a distribution over a set of labels rather than a single label or multiple labels. Current LDL methods have proven their effectiveness in many real-life machine learning applications. However, LDL is a generalization of the classification task and as such is exposed to the same problems as standard classification algorithms, including class imbalance, noise, overlap, and other irregularities. The purpose of this paper is to mitigate these effects by using decomposition strategies. The proposed technique, called Decomposition-Fusion for LDL (DF-LDL), is based on one of the most renowned decomposition strategies, the One-vs-One scheme, which we adapt to handle LDL datasets. In addition, we propose a competent fusion method that allows us to discard non-competent classifiers when their output is unlikely to be of interest. The effectiveness of the proposed DF-LDL method is verified on several real-world LDL datasets through two types of experiments: first, comparing our proposal with the base learners and, second, comparing it with state-of-the-art LDL algorithms. DF-LDL shows significant improvements in both experiments.

18.
Linear discriminant analysis (LDA) is a dimension reduction method that finds an optimal linear transformation maximizing class separability. However, in undersampled problems, where the number of data samples is smaller than the dimension of the data space, it is difficult to apply LDA because the scatter matrices become singular due to high dimensionality. To make LDA applicable, several generalizations of LDA have been proposed. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performances in text classification and face recognition. Towards a practical dimension reduction method for high-dimensional data, an efficient algorithm is proposed that greatly reduces the computational complexity while achieving competitive prediction accuracies. We also present nonlinear extensions of these LDA algorithms based on kernel methods. It is shown that a generalized eigenvalue problem can be formulated in the kernel-based feature space, and generalized LDA algorithms are applied to solve it, resulting in nonlinear discriminant analysis. Performances of these linear and nonlinear discriminant analysis algorithms are compared extensively.
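The generalized eigenvalue problem underlying LDA, S_b v = λ S_w v, can be solved directly once S_w is regularized to stay invertible in undersampled settings. A minimal sketch with SciPy (the ridge regularization is an assumption for illustration; the paper's generalizations are more refined):

```python
import numpy as np
from scipy.linalg import eigh

def lda_directions(X, y, reg=1e-6):
    """Solve the generalized eigenvalue problem S_b v = lambda S_w v that
    underlies LDA; a small ridge keeps S_w positive definite even in
    undersampled problems."""
    classes, mean, d = np.unique(y), X.mean(axis=0), X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        diff = Xc - Xc.mean(axis=0)
        Sw += diff.T @ diff                          # within-class scatter
        m = (Xc.mean(axis=0) - mean)[:, None]
        Sb += len(Xc) * (m @ m.T)                    # between-class scatter
    evals, evecs = eigh(Sb, Sw + reg * np.eye(d))    # symmetric-definite pair
    order = np.argsort(evals)[::-1]
    return evecs[:, order[: len(classes) - 1]]       # at most C-1 directions

# Toy usage: project three 4-D Gaussian classes onto 2 discriminants.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (50, 4)) for m in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)
print((X @ lda_directions(X, y)).shape)  # (150, 2)
```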

19.
A hybrid decision tree training method using data streams   Total citations: 1 (self-citations: 1, citations by others: 0)
Classical classification methods usually assume that pattern recognition models do not depend on the timing of the data. However, this assumption does not hold in cases where new data frequently become available. Such situations are common in practice, for example in spam filtering or fraud detection, where the dependencies between feature values and class labels are continually changing. Unfortunately, most classical machine learning methods (such as decision trees) do not take into account the possibility of the model changing as a result of so-called concept drift, and they cannot adapt to a new classification model. This paper focuses on the problem of concept drift, which is a very important issue, especially for data mining methods that use complex structures (such as decision trees) for making decisions. We propose an algorithm that is able to co-train decision trees using a modified NGE (Nested Generalized Exemplar) algorithm. The adaptability of the proposed algorithm and the quality thereof are evaluated through computer experiments carried out on benchmark datasets from the UCI Machine Learning Repository.

20.
Traditional classification algorithms generally assume a balanced class distribution in the dataset, whereas in practice the class distribution is often imbalanced. Existing data-level and model-level algorithms attempt to address this problem from different angles, but they face issues such as parameter selection and the extra computation caused by repeated sampling. To address this, a method for adaptively balancing the per-sample losses within a mini-batch is proposed. The algorithm dynamically learns the loss function, adjusting the loss weight of each sample according to the label information within the mini-batch, so that the total loss of each class within the mini-batch is balanced. Experiments on the Caltech101 and ILSVRC2014 datasets show that the algorithm effectively reduces computational cost and improves classification accuracy, while to some extent avoiding the overfitting risk introduced by oversampling methods.
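One reading of the in-batch balancing rule is to rescale per-sample losses so that every class present in the mini-batch contributes an equal share of the total loss. The sketch below implements that reading in NumPy; the paper's exact weighting scheme may differ:

```python
import numpy as np

def balanced_batch_loss(losses, labels, eps=1e-12):
    """Reweight per-sample losses inside a mini-batch so that each class
    present contributes an equal share of the total batch loss."""
    weights = np.ones_like(losses)
    classes = np.unique(labels)
    for c in classes:
        mask = labels == c
        # Scale this class so its summed loss equals total / n_classes.
        weights[mask] = losses.sum() / (len(classes) * losses[mask].sum() + eps)
    return float((weights * losses).sum() / len(losses))

# Toy mini-batch: the single rare-class sample no longer dominates, and
# both classes end up contributing the same share of the loss.
losses = np.array([0.2, 0.3, 0.1, 0.25, 2.0])
labels = np.array([0, 0, 0, 0, 1])
print(balanced_batch_loss(losses, labels))
```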
