20 similar documents found (search time: 31 ms)
1.
Hao Du. Pattern Recognition, 2007, 40(5): 1486-1497
This paper points out and analyzes the advantages and drawbacks of the nearest feature line (NFL) classifier. To overcome the shortcomings, a new feature subspace with two simple and effective improvements is built to represent each class. The proposed method, termed rectified nearest feature line segment (RNFLS), is shown to possess a novel property of concentration as a result of the added line segments (features), which significantly enhances the classification ability. Another remarkable merit is that RNFLS is applicable to complex tasks such as the two-spiral distribution, which the original NFL cannot handle properly. Finally, experimental comparisons with NFL, NN (nearest neighbor), k-NN and NNL (nearest neighbor line) on both artificial and real-world data sets demonstrate that RNFLS offers the best performance.
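The perpendicular NFL distance that RNFLS builds on can be sketched in a few lines. This is an illustrative reconstruction of the basic NFL distance only, not of the RNFLS rectification itself; the function name and the NumPy formulation are assumptions of this sketch.

```python
import numpy as np

def nfl_distance(q, x1, x2):
    """Perpendicular distance from query q to the feature line
    passing through two same-class prototypes x1 and x2."""
    d = x2 - x1
    t = np.dot(q - x1, d) / np.dot(d, d)  # projection parameter along the line
    p = x1 + t * d                        # foot of the perpendicular
    return np.linalg.norm(q - p)
```

Classification then assigns the query to the class owning the feature line with the smallest such distance; RNFLS restricts and rectifies the set of segments considered.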
2.
Anomaly detection based on PCA and an improved nearest neighbor rule
Nie Fangyan. Computer Engineering and Design, 2008, 29(10): 2502-2504
A novel anomaly detection method based on feature extraction is proposed. The preprocessed data are first standardized, principal component analysis (PCA) is then applied to extract intrusion features, and finally an improved nearest neighbor classifier, the center-based nearest neighbor (CNN) rule, is used to detect intrusions. On the KDD Cup'99 data set, PCA+CNN is compared with PCA+NN, PCA+SVM and standard SVM. The results show that the feature extraction step effectively reduces the dimensionality of the input data without degrading classifier performance, and that among the compared methods the combination of PCA and CNN achieves the best intrusion detection performance.
3.
A new approach called shortest feature line segment (SFLS) is proposed in this paper to implement pattern classification; it retains the ideas and advantages of nearest feature line (NFL) while counteracting NFL's drawbacks. Instead of the perpendicular distance defined in NFL, SFLS uses the length of the feature line segment that satisfies a given geometric relation with the query point. SFLS has a clear geometric foundation and is relatively simple. Experimental results on artificial and real-world data sets are provided, together with comparisons between SFLS and other neighborhood-based classification methods, including nearest neighbor (NN), k-NN, NFL and some refined NFL methods. It can be concluded that SFLS is a simple yet effective classification approach.
4.
In this paper, two novel classifiers based on a locally nearest neighborhood rule, called nearest neighbor line and nearest neighbor plane, are presented for pattern classification. Compared with nearest feature line and nearest feature plane, the proposed methods incur much lower computational cost while achieving competitive performance.
8.
P. Viswanath, Shalabh Bhatnagar. Pattern Recognition, 2005, 38(8): 1187-1195
The nearest neighbor (NN) classifier is the most popular non-parametric classifier. It is a simple classifier with no design phase and shows good performance. Important factors affecting the efficiency and performance of the NN classifier are: (i) the memory required to store the training set; (ii) the classification time required to search for the nearest neighbor of a given test pattern; and (iii) the curse of dimensionality, which makes the number of training patterns needed to achieve a given classification accuracy prohibitively large when the dimensionality of the data is high. In this paper, we propose novel techniques to improve the performance of the NN classifier and at the same time reduce its computational burden. These techniques are broadly based on: (i) overlap-based pattern synthesis, which can generate a larger number of artificial patterns than the number of input patterns and thus reduce the effect of the curse of dimensionality; (ii) a compact representation of the given set of training patterns, called the overlap pattern graph (OLP-graph), which can be built incrementally by scanning the training set only once; and (iii) an efficient NN classifier called OLP-NNC, which works directly with the OLP-graph and performs implicit overlap-based pattern synthesis. A comparison based on experimental results is given between some of the relevant classifiers. The proposed schemes are suitable for applications dealing with large and high-dimensional data sets, like those in data mining.
10.
Yong-Sheng Chen, Yi-Ping Hung, Ting-Fang Yen. Pattern Recognition, 2007, 40(2): 360-375
In this paper, we present a fast and versatile algorithm which can rapidly perform a variety of nearest neighbor searches. Efficiency improvement is achieved by utilizing a distance lower bound to avoid calculating the distance itself when the lower bound is already larger than the global minimum distance. At the preprocessing stage, the proposed algorithm constructs a lower bound tree (LB-tree) by agglomeratively clustering all the sample points to be searched. Given a query point, the lower bound of its distance to each sample point can be calculated using the internal nodes of the LB-tree. To reduce the number of lower bounds actually calculated, the winner-update search strategy is used for traversing the tree. For further efficiency improvement, data transformation can be applied to the sample and query points. In addition to finding the nearest neighbor, the proposed algorithm can also (i) provide the k-nearest neighbors progressively; (ii) find the nearest neighbors within a specified distance threshold; and (iii) identify neighbors whose distances to the query are sufficiently close to the minimum distance of the nearest neighbor. Our experiments have shown that the proposed algorithm can save substantial computation, particularly when the distance of the query point to its nearest neighbor is relatively small compared with its distance to most other samples (which is the case for many object recognition problems).
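The lower-bound pruning idea at the heart of the LB-tree can be illustrated with a single pivot point and the triangle inequality: |d(q,p) - d(x,p)| is a lower bound on d(q,x), so the full distance need not be computed when that bound already exceeds the current best. This is a one-pivot simplification for illustration, not the agglomerative LB-tree or the full winner-update search; all names here are assumptions.

```python
import numpy as np

def nn_with_pivot_pruning(query, samples, pivot):
    """Nearest neighbor search that skips distance computations whose
    triangle-inequality lower bound already exceeds the current best."""
    d_qp = np.linalg.norm(query - pivot)
    d_sp = np.linalg.norm(samples - pivot, axis=1)  # precomputable offline
    best_i, best_d = -1, np.inf
    skipped = 0
    # visit candidates with the smallest lower bound first (winner-update flavour)
    for i in np.argsort(np.abs(d_sp - d_qp)):
        lb = abs(d_sp[i] - d_qp)
        if lb >= best_d:
            skipped += 1
            continue
        d = np.linalg.norm(query - samples[i])
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d, skipped
```

The real LB-tree organizes many such bounds hierarchically, so most candidates are pruned before any exact distance is evaluated.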
11.
One-class information extraction from remote sensing imagery is a special kind of classification that aims to train on and extract a single class of interest. This paper studies one-class information extraction based on the nearest neighbor classifier, including the problems of class partitioning and sample selection. It is first shown that one-class extraction with the nearest neighbor rule depends only on the selected samples and is independent of how the remaining classes are partitioned, so the task can be treated as a two-class classification problem. For this two-class problem, training samples of the non-interest class are then selected according to spatial and spectral proximity, which simplifies the classification process. Experimental results show that the proposed method can effectively extract single-class information from remote sensing imagery.
13.
In this paper, we propose a novel method to measure the dissimilarity of categorical data. The key idea is to consider the dissimilarity between two categorical values of an attribute as a combination of dissimilarities between the conditional probability distributions of other attributes given these two values. Experiments with real data show that our dissimilarity estimation method improves the accuracy of the popular nearest neighbor classifier.
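The key idea above, comparing the conditional distributions of the other attributes given two categorical values, can be sketched as follows. Total-variation distance is used here as one plausible choice for comparing distributions; the paper's exact combination rule may differ, and the function name is an assumption.

```python
from collections import Counter

def value_dissimilarity(rows, attr, a, b):
    """Dissimilarity between values a and b of column `attr`, measured as
    the summed total-variation distance between the conditional
    distributions of every other column given attr == a vs attr == b."""
    rows_a = [r for r in rows if r[attr] == a]
    rows_b = [r for r in rows if r[attr] == b]
    assert rows_a and rows_b, "both values must occur in the data"
    na, nb = len(rows_a), len(rows_b)
    total = 0.0
    for j in range(len(rows[0])):
        if j == attr:
            continue  # compare only the *other* attributes
        pa = Counter(r[j] for r in rows_a)
        pb = Counter(r[j] for r in rows_b)
        support = set(pa) | set(pb)
        total += 0.5 * sum(abs(pa[v] / na - pb[v] / nb) for v in support)
    return total
```

Identical conditional distributions yield a dissimilarity of zero, so values that behave alike across the other attributes are treated as close by the nearest neighbor classifier.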
14.
Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery
Using traditional digital classification algorithms, a researcher typically encounters serious difficulties in identifying urban land cover classes from high resolution data. A common approach uses spectral information alone, ignoring spatial information and the fact that groups of pixels often need to be considered together as objects. We used QuickBird image data over a central region of the city of Phoenix, Arizona, to examine whether an object-based classifier can accurately identify urban classes. To test whether spectral information alone is adequate for urban classification, we examined whether spectra of the selected classes, taken from randomly selected points, can be effectively discriminated. The overall accuracy based on spectral information alone reached only about 63.33%. We then employed five classification procedures within the object-based paradigm, which separates spatially and spectrally similar pixels at different scales. The classifiers used to assign land covers to segmented objects include membership functions and the nearest neighbor classifier. The object-based classifier achieved a high overall accuracy (90.40%), whereas the most commonly used decision rule, the maximum likelihood classifier, produced a lower overall accuracy (67.60%). This study demonstrates that the object-based classifier is a significantly better approach than classical per-pixel classifiers. Further, the study reviews the application of different parameters for segmentation and classification, the combined use of composite and original bands, the selection of different scale levels, and the choice of classifiers. Strengths and weaknesses of the object-based prototype are presented, and we provide suggestions for avoiding or minimizing the uncertainties and limitations associated with the approach.
15.
Qinghua Hu, Pengfei Zhu, Yongbin Yang, Daren Yu. Neurocomputing, 2011, 74(4): 656-660
Nearest neighbor classification is a simple yet effective technique for pattern recognition. Its performance depends significantly on the distance function used to compute similarity between examples. Some techniques have been developed to learn feature weights that change the distance structure of samples in nearest neighbor classification. In this paper, we propose an approach to learning sample weights that enlarge the margin, using a gradient descent algorithm to minimize a margin-based classification loss. Experimental analysis shows that the distances trained in this way reduce the margin loss and enlarge the hypothesis margin on several data sets. Moreover, the proposed approach consistently outperforms nearest neighbor classification and some other state-of-the-art methods.
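The hypothesis margin being enlarged can be sketched for per-sample weights w that scale the distances to each training prototype; a gradient method would then minimize some loss of these margins (e.g. a logistic loss). This is an illustrative reconstruction under those assumptions, not the authors' exact formulation.

```python
import numpy as np

def hypothesis_margin(x, y, X, Y, w):
    """Hypothesis margin of sample (x, y): weighted distance to the nearest
    different-class prototype (nearmiss) minus weighted distance to the
    nearest same-class prototype (nearhit), excluding x itself."""
    d = w * np.linalg.norm(X - x, axis=1)       # per-prototype weighted distances
    d[np.all(np.isclose(X, x), axis=1)] = np.inf  # exclude the sample itself
    nearhit = d[Y == y].min()
    nearmiss = d[Y != y].min()
    return nearmiss - nearhit
```

A positive margin means the sample is closer (under the learned weights) to its own class than to any other; training adjusts w to keep these margins large.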
16.
Tensor decompositions have applications in several domains; one key application is revealing relational structure across multiple dimensions simultaneously, thus enabling the compression of relational data. In this paper, we propose Discriminative Tensor Decomposition with Large Margin (Large Margin Tensor Decomposition, LMTD), which can be viewed as a tensor-to-tensor projection operation. It is a novel method for calculating the mutual projection matrices that map tensors into a lower-dimensional space such that nearest neighbor classification accuracy is improved. LMTD aims to find mutual discriminative projection matrices that minimize the misclassification rate by minimizing the Frobenius distance between same-class instances (in-class neighbors) and maximizing the distance between different-class instances (impostor neighbors). Two versions of LMTD are proposed, in which the nearest neighbor classification error is computed in the feature (latent) or input (observation) space. We evaluate the proposed models on real data sets and provide a comparison with alternative decomposition methods from the literature in terms of classification accuracy and mean average precision.
17.
Application of independent component analysis in pattern recognition
Two key steps in pattern recognition are feature extraction and classification with a classifier. This work uses independent component analysis (ICA) for feature extraction and compares the recognition performance of the nearest neighbor classifier and the cosine (cos) classifier. Experiments on the ORL face image database show that combining ICA with the cos classifier gives the better recognition results.
18.
To address the sensitivity of the local mean-based pseudo nearest neighbor algorithm (LMPNN) to outliers and noise, a pseudo nearest neighbor algorithm based on bidirectional selection (BS-PNN) is proposed. The k nearest neighbors are selected with a proximity measure, and the test sample and its neighbor samples are filtered through a bidirectional (mutual nearest neighbor) selection. The Euclidean distance from the test sample to its pseudo nearest neighbor is then obtained by counting the mutual neighbors in each class and computing the weighted distance of their local means, and an improved class-credibility measure is used as the voting rule to classify the test sample. On complex classification tasks, BS-PNN can accurately identify noise points, reduces sensitivity to the number of neighbors k, and improves classification accuracy. Simulation experiments on 15 real data sets from UCI and KEEL, with comparisons against the KNN, WKNN, LMKNN, PNN, LMPNN, DNN and P-KNN algorithms, show that the classification performance of the bidirectional-selection pseudo nearest neighbor algorithm is clearly superior to that of the other nearest neighbor classifiers.
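The local-mean building block that BS-PNN (and LMPNN before it) rests on can be sketched as follows: each class is scored by the distance from the query to the mean of its k nearest samples in that class. The mutual-neighbor bidirectional selection and the class-credibility voting of BS-PNN are omitted; this is a simplified illustration whose names are assumptions.

```python
import numpy as np

def local_mean_distance(q, class_samples, k):
    """Distance from query q to the local mean of its k nearest
    samples within one class."""
    d = np.linalg.norm(class_samples - q, axis=1)
    nearest = class_samples[np.argsort(d)[:k]]
    return np.linalg.norm(q - nearest.mean(axis=0))

def classify(q, classes, k=2):
    """Assign q to the class whose local mean of k nearest samples is closest."""
    return min(classes, key=lambda c: local_mean_distance(q, classes[c], k))
```

Averaging the k nearest samples per class smooths over individual noisy prototypes, which is why outlier handling in this family of methods centers on how those k samples are chosen.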
19.
The paper introduces a novel adaptive local hyperplane (ALH) classifier and demonstrates its superior performance on face recognition tasks. Four different feature extraction methods (2DPCA, (2D)2PCA, 2DLDA and (2D)2LDA) have been used in combination with five classifiers: K-nearest neighbor (KNN), support vector machine (SVM), nearest feature line (NFL), nearest neighbor line (NNL) and ALH. All the classifiers and feature extraction methods were applied to two renowned benchmark face databases, the Cambridge ORL database and the Yale database, and the ALH classifier with an LDA-based extractor outperforms all the other methods on both. The results of the ALH algorithm on these two databases are very promising, but further study on larger databases is still needed to establish all the advantages of the proposed algorithm.
20.
Chang Yin Zhou. Pattern Recognition, 2006, 39(4): 635-645
Nearest neighbor (NN) classification assumes locally constant class-conditional probabilities and suffers from bias in high dimensions with a small sample set. In this paper, we propose a novel cam weighted distance to ameliorate the curse of dimensionality. Unlike existing neighborhood-based methods, which only analyze a small space emanating from the query sample, the proposed nearest neighbor classification using the cam weighted distance (CamNN) optimizes the distance measure based on an analysis of inter-prototype relationships. Our motivation comes from the observation that the prototypes are not isolated: prototypes with different surroundings should have different effects on the classification. The proposed cam weighted distance is orientation and scale adaptive, taking advantage of the relevant inter-prototype information so that better classification performance can be achieved. Experiments show that CamNN significantly outperforms one-nearest-neighbor (1-NN) and k-nearest-neighbor (k-NN) classification on most benchmarks, while its computational complexity is comparable to that of 1-NN classification.