Similar Documents
20 similar documents found.
1.
In this paper, we investigate practical implementation issues of the real-time constrained linear discriminant analysis (CLDA) approach for remotely sensed image classification. Specifically, two issues are resolved: (1) what implementation scheme yields the lowest chip design complexity with comparable classification performance, and (2) how to extend the CLDA algorithm to multispectral image classification. Two limitations on data dimensionality have to be relaxed. The first arises in real-time hyperspectral image classification, where the number of linearly independent pixels received for classification must be larger than the data dimensionality (i.e., the number of spectral bands) in order to generate a non-singular sample correlation matrix R for the classifier; relaxing this limitation helps resolve the first issue. The second arises in multispectral image classification, where the number of classes to be classified cannot be greater than the data dimensionality; relaxing this limitation helps resolve the second issue. The former is solved by introducing a pseudo-inverse initialization of the sample correlation matrix for R^-1 adaptation, and the latter by expanding the data dimensionality via band multiplication. Classification experiments using these modifications demonstrate their feasibility. These investigations lead to a detailed ASIC chip design scheme for the real-time CLDA algorithm suitable for both hyperspectral and multispectral images. The proposed techniques for resolving the two dimensionality limitations are instructive for the real-time implementation of several popular detection and classification approaches in remote sensing image exploitation.
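The band-multiplication expansion mentioned above can be sketched as follows. This is a minimal illustration that assumes the expansion appends all pairwise band products (including squares) to each pixel vector; the abstract does not specify the exact product set, so the function name and choice of products are assumptions:

```python
import numpy as np

def expand_bands(pixels):
    """Expand multispectral dimensionality via band multiplication.

    Appends all pairwise band products (including squares) to each
    pixel vector, turning B bands into B + B*(B+1)/2 features.
    Illustrative sketch; the paper's exact expansion may differ.
    """
    n, b = pixels.shape
    i, j = np.triu_indices(b)               # unordered band pairs, with repeats
    products = pixels[:, i] * pixels[:, j]  # (n, B*(B+1)/2) product features
    return np.hstack([pixels, products])

# A 4-band image of 10 pixels becomes 4 + 10 = 14 features per pixel.
x = np.random.rand(10, 4)
y = expand_bands(x)
```

With more features than classes, the class-count limitation of the multispectral case no longer binds.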

2.
We have proposed a constrained linear discriminant analysis (CLDA) approach for classifying remotely sensed hyperspectral images. Its basic idea is to design an optimal linear transformation operator that maximizes the ratio of inter-class to intra-class distance while satisfying the constraint that the class centers after transformation are aligned along different directions. Its major advantage over traditional Fisher's linear discriminant analysis is that classification is achieved simultaneously with the transformation. CLDA is a supervised approach, i.e., the class spectral signatures need to be known a priori. In practice, however, this information may be difficult or even impossible to obtain. In this paper we therefore extend the CLDA algorithm into an unsupervised version, where the class spectral signatures are generated directly from an unknown image scene. Computer simulation is used to evaluate how well the algorithm performs in finding the pure signatures. We also discuss how to implement the unsupervised CLDA algorithm in real time for critical situations where immediate data analysis results are required.
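A minimal sketch of the supervised constrained classifier described above, assuming the constrained-filter form W = (S^T R^-1 S)^-1 S^T R^-1, in which each known class signature is mapped onto a distinct unit vector so that the transformation itself performs the classification. The function name and demo data are illustrative, not from the paper:

```python
import numpy as np

def clda_classify(pixels, signatures):
    """Constrained linear discriminant classification (sketch).

    pixels:     (N, L) array of spectral vectors
    signatures: (C, L) array of class mean spectra, known a priori

    Builds the sample correlation matrix R from the data, then a
    constrained operator W with W @ s_k = e_k (the k-th unit vector),
    so each class center is aligned along its own direction and a
    pixel is assigned to the direction with the largest response.
    """
    r = pixels.T @ pixels / len(pixels)      # sample correlation matrix R
    r_inv = np.linalg.inv(r)
    s = signatures.T                         # (L, C) signature matrix
    w = np.linalg.inv(s.T @ r_inv @ s) @ s.T @ r_inv   # (C, L) operator
    return np.argmax(pixels @ w.T, axis=1)   # class index per pixel
```

Because classification falls out of a single linear operator, the transform and the decision are computed together, which is the advantage over Fisher's LDA noted in the abstract.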

3.
Remotely sensed hyperspectral imagery has many important applications, since its high spectral resolution enables more accurate object detection and classification. To support immediate decision-making in critical circumstances, real-time onboard implementation is greatly desired. This paper investigates real-time implementation of several popular detection and classification algorithms for image data in different formats. An effective approach to speeding up real-time implementation is proposed, using a small portion of pixels in the evaluation of data statistics. An empirical rule for the appropriate percentage of pixels to use is investigated, resulting in reduced computational complexity and simplified hardware implementation. An overall system architecture is also provided.
Qian Du

4.
Target detection is one of the most important applications of hyperspectral imagery in both civilian and military fields. In this letter, we first propose a new spectral matching method for target detection in hyperspectral imagery, which uses a pre-whitening procedure and defines a regularized spectral angle between the spectra of the test sample and the targets. The regularized spectral angle, which has an explicit geometric meaning in the multidimensional spectral vector space, provides a measure that makes target detection more effective. Furthermore, a kernel realization of the angle-regularized spectral matching (KAR-SM), based on kernel mapping, improves detection further. To demonstrate the detection performance of the proposed method and its kernel version, experiments are conducted on real hyperspectral images. The tests show that the proposed detector outperforms the conventional spectral matched filter and its kernel version.
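The pre-whitening plus spectral-angle idea can be sketched as below. The covariance estimate, whitening by the inverse matrix square root, and the omission of the letter's exact regularization are all assumptions of this illustration:

```python
import numpy as np

def whitened_spectral_angle(x, target, background):
    """Spectral angle between a test spectrum and a target after
    pre-whitening (sketch of the idea, not the letter's exact detector).

    background: (N, L) pixels used to estimate the covariance. Both the
    test spectrum x and the target are whitened by the inverse square
    root of this covariance before the angle is measured, so correlated
    background directions are deflated.
    """
    cov = np.cov(background, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    w = evecs @ np.diag(evals ** -0.5) @ evecs.T   # inverse matrix square root
    xw, tw = w @ x, w @ target
    cosang = xw @ tw / (np.linalg.norm(xw) * np.linalg.norm(tw))
    return np.arccos(np.clip(cosang, -1.0, 1.0))  # radians; small = match
```

A detection rule would then threshold this angle: pixels below the threshold are declared targets.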

5.
In this paper, a modified Fisher linear discriminant analysis (FLDA) is proposed that aims not only to overcome the rank limitation of FLDA (that is, finding at most one discriminant vector for a 2-class problem under the Fisher discriminant criterion), but also to relax the singularity of the within-class scatter matrix, thereby improving the classification performance of FLDA. Experiments on nine publicly available datasets show that the proposed method performs better than, or comparably to, FLDA on all datasets.

6.
A novel model for Fisher discriminant analysis is developed in this paper. The new model simultaneously requires maximal Fisher criterion values for the discriminant vectors and minimal statistical correlation between the feature components they extract. The model is then transformed into an extreme value problem in the form of an evaluation function, from which the optimal discriminant vectors are worked out. Experiments show that the method presented in this paper is comparable to the better of FSLDA and ULDA.

7.
In this paper, a kernelized version of clustering-based discriminant analysis, named KCDA, is proposed. The main idea is to first map the original data into a high-dimensional feature space and then perform clustering-based discriminant analysis there. The kernel fuzzy c-means algorithm is used to cluster each class. Tests on two standard UCI benchmarks show that the proposed method is very promising.

8.
The main goal of this paper is to propose an innovative technique for anomaly detection in hyperspectral imagery. The technique identifies anomalies whose signatures are spectrally distinct from their surroundings, without any a priori knowledge of the target spectral signature. It is based on a one-dimensional projection pursuit with the Legendre index as the measure of interest. The index optimization is performed by simulated annealing over a simplex in order to bypass local optima, which could be sub-optimal in certain cases. It is argued that the proposed technique can be viewed as seeking a projection that departs from the normal distribution, unfolding the outliers as a consequence. The algorithm is tested on AHS and HYDICE hyperspectral imagery, where the results show the benefits of the approach in detecting a great variety of objects whose spectral signatures deviate sufficiently from the background. The technique proves to be automatic in the sense that no parameter tuning is needed, giving meaningful results in all cases. Even objects of sub-pixel size, which cannot be made out by the naked eye in the original image, can be detected as anomalies. Furthermore, a comparison with the popular RX technique is given: the proposed approach outperforms RX, demonstrating its ability to reduce the proportion of false alarms.
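The RX baseline used for comparison above can be sketched as the Mahalanobis distance of each pixel from the global scene statistics. This is a minimal global-RX variant; windowed/local RX, which estimates statistics per neighborhood, differs:

```python
import numpy as np

def rx_scores(image):
    """Global RX anomaly detector.

    image: (N, L) array of pixel spectra. Returns one score per pixel:
    the squared Mahalanobis distance of the pixel from the scene mean
    under the scene covariance. Larger means more anomalous.
    """
    mu = image.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(image, rowvar=False))
    centered = image - mu
    # Per-pixel quadratic form  (x - mu)^T C^{-1} (x - mu)
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
```

Thresholding these scores yields the anomaly map; the projection-pursuit approach above is reported to produce fewer false alarms than this baseline.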

9.
Dong Yan. Computer Engineering and Design, 2012, 33(4): 1591-1594, 1681
To address the shortcomings of Fisherface-style discriminant analysis in classifying high-dimensional small-sample data, a method is proposed, based on the maximum scatter difference criterion, that describes each class separately using multilinear subspace techniques; this more accurately reflects the within-class and between-class distribution of the samples. Classification is based not on distance but on a membership confidence obtained from the Bayesian decision rule. Experimental results demonstrate the effectiveness of the method, which achieves a higher recognition rate than comparable methods.

10.
This paper presents a new approach to the analysis of hyperspectral images, a class of image data mainly used in remote sensing applications. The method is based on generalizing concepts from mathematical morphology to multi-channel imagery. A new vector organization scheme is described, and fundamental morphological vector operations are defined by extension. Theoretical definitions of the extended morphological operations are used in the formal definition of the extended morphological profile, which is used for multi-scale analysis of hyperspectral data. This approach is particularly well suited to image scenes where most of the pixels collected by the sensor are mixed in nature, i.e. formed by a combination of multiple underlying responses produced by spectrally distinct materials. Experimental results demonstrate the applicability of the proposed technique to mixed-pixel analysis of simulated and real hyperspectral data collected by the NASA/Jet Propulsion Laboratory Airborne Visible/Infrared Imaging Spectrometer and the DLR Digital Airborne (DAIS 7915) and Reflective Optics System Imaging Spectrometers. The proposed method works effectively in the presence of noise and low spatial resolution. A quantitative and comparative performance study with regard to other standard hyperspectral analysis methodologies reveals that the combined use of spatial and spectral information in the proposed technique produces classification results superior to those obtained from spectral information alone.
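A simplified, single-band stand-in for the morphological profile underlying the extended profile above: the stack of openings and closings at increasing structuring-element sizes. The paper's vector-ordering scheme for multi-channel data is not reproduced, and the sizes are illustrative:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_profile(band, sizes=(3, 5, 7)):
    """Morphological profile of a single band (e.g. a principal component).

    Stacks grayscale openings (which remove bright structures smaller
    than the structuring element) and closings (which remove dark ones)
    at increasing sizes around the original band, giving a multi-scale
    description of each pixel's spatial context.
    """
    layers = [grey_opening(band, size=s) for s in sizes]   # anti-extensive
    layers += [band]                                        # original image
    layers += [grey_closing(band, size=s) for s in sizes]   # extensive
    return np.stack(layers)                                 # (2*k+1, H, W)
```

In the extended profile, this per-band stack is built on the leading principal components and concatenated with the spectral features before classification.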

11.
Hyperspectral imaging, which records a detailed spectrum of the light arriving at each pixel, has many potential uses in remote sensing as well as other application areas. Practical applications typically require real-time processing of the large data volumes recorded by a hyperspectral imager. This paper investigates the use of graphics processing units (GPUs) for such real-time processing. In particular, the paper studies a hyperspectral anomaly detection algorithm based on normal mixture modelling of the background spectral distribution, a computationally demanding task relevant to military target detection and numerous other applications. The algorithm's parts are analysed with respect to complexity and potential for parallelization. The computationally dominant parts are implemented on an Nvidia GeForce 8800 GPU using the Compute Unified Device Architecture programming interface. GPU computing performance is compared to a multi-core central processing unit implementation. Overall, the GPU implementation runs significantly faster, particularly for highly data-parallel and arithmetically intensive algorithm parts. For the parts related to covariance computation, the speed gain is less pronounced, probably due to a smaller ratio of arithmetic to memory access. Detection results on an actual data set demonstrate that the total speedup provided by the GPU is sufficient to enable real-time anomaly detection with normal mixture models, even for an airborne hyperspectral imager with high spatial and spectral resolution.

12.
13.
In the last decade, many variants of classical linear discriminant analysis (LDA) have been developed to tackle the under-sampled problem in face recognition. However, choosing among the variants is not easy, since these methods involve eigenvalue decompositions that make cross-validation computationally expensive. In this paper, we propose to solve this problem by unifying these LDA variants in one framework: principal component analysis (PCA) plus constrained ridge regression (CRR). In CRR, one selects a target (also called a class indicator) for each class and finds a transform that locates the class centers at their class targets while minimizing the within-class distances, with a penalty on the transform norm as in ridge regression. Under this framework, many existing LDA methods can be viewed as PCA+CRR with particular regularization parameters and class indicators, so choosing the best LDA method becomes choosing the best member of the CRR family. The latter can be done by comparing leave-one-out (LOO) errors, and we present an efficient algorithm, requiring computations similar to the training process of CRR, to evaluate the LOO errors. Experiments on the Yale Face B, Extended Yale B and CMU-PIE databases demonstrate the effectiveness of the proposed methods.
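The claim that LOO model selection costs little more than one training pass rests on a classical ridge shortcut: the LOO residual equals the training residual divided by (1 - h_ii), where h_ii is a diagonal entry of the hat matrix. A sketch for plain single-target ridge regression is below; the paper's multi-target CRR formulation is not reproduced:

```python
import numpy as np

def ridge_loo_errors(x, y, lam):
    """Exact leave-one-out residuals for ridge regression without refitting.

    x: (n, d) design matrix, y: (n,) targets, lam: ridge penalty.
    Uses e_loo_i = e_i / (1 - h_ii) with the ridge hat matrix
    H = X (X^T X + lam*I)^{-1} X^T, so all n LOO errors cost about
    the same as one training fit.
    """
    n, d = x.shape
    hat = x @ np.linalg.inv(x.T @ x + lam * np.eye(d)) @ x.T
    residuals = y - hat @ y
    return residuals / (1.0 - np.diag(hat))
```

Comparing these LOO errors across regularization values (and, in the paper's setting, class-indicator choices) selects the best family member without n separate refits.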

14.
This paper describes a new methodology to detect small anomalies in high-resolution hyperspectral imagery, which involves successively: (1) a multivariate statistical analysis (principal component analysis, PCA) of all spectral bands; (2) a geostatistical filtering of noise and regional background in the first principal components using factorial kriging; and finally (3) the computation of a local indicator of spatial autocorrelation to detect local clusters of high or low reflectance values and anomalies. The approach is illustrated using 1 m resolution data collected in and near northeastern Yellowstone National Park. Ground validation data for tarps and for disturbed soils on mine tailings demonstrate the ability of the filtering procedure to reduce the proportion of false alarms (i.e., pixels wrongly classified as target) and its robustness under low signal-to-noise ratios. In almost all scenarios, the proposed approach outperforms traditional anomaly detectors (i.e., the RX detector, which computes the Mahalanobis distance between the vector of spectral values and the vector of global means), and fewer false alarms are obtained when using a novel statistic S2 (the average absolute deviation of p-values from 0.5 across all spectral bands) to summarize information across bands. Image degradation through the addition of noise or reduction of spectral resolution tends to blur the detection of anomalies, increasing false alarms, in particular for the identification of the least pure pixels. Results from a mine tailings site demonstrate that the approach performs reasonably well for a highly complex landscape with multiple targets of various sizes and shapes. By leveraging both spectral and spatial information, the technique requires little or no input from the user, and hence can be readily automated.

15.
In this paper we present a new implementation of null-space-based linear discriminant analysis. The main features of our implementation are: (i) the optimal transformation matrix is obtained easily by orthogonal transformations alone, without computing any eigendecomposition or singular value decomposition (SVD), so the new implementation is eigendecomposition-free and SVD-free; (ii) its main computational cost comes from an economic QR factorization of the data matrix and an economic QR factorization, with column pivoting, of an n×n matrix, where n is the sample size, so the new implementation is fast. Its effectiveness is demonstrated on several real-world data sets.

16.
In this paper, we give a theoretical analysis of kernel uncorrelated discriminant analysis (KUDA) and point out the drawbacks of the current KUDA algorithm, which was recently introduced by Liang and Shi [Pattern Recognition 38(2) (2005) 307-310]. We then propose an effective algorithm to overcome these drawbacks. The effectiveness of the proposed method is confirmed by experiments.

17.
This paper develops a new methodology for pattern classification using k concurrently determined piecewise linear and convex discriminant functions. Toward this end, we design a new l1-norm distance metric for measuring misclassification errors and use it to develop a mixed 0-1 integer and linear program (MILP) for the k-piecewise linear and convex separation of data. The proposed model is meritorious in that it considers the synergy as well as the individual role of the k hyperplanes in constructing a decision surface, and it exploits advances in MILP theory and algorithms and the availability of powerful MILP software for its solution. With artificially created data, we illustrate the pros and cons of pattern classification by the proposed methodology. On six benchmark classification datasets, we demonstrate that the proposed approach is effective and competitive with well-established learning methods: the classifiers constructed by the proposed approach obtain the best prediction rates on three of the six datasets and the second-best on two of the remaining three.

18.
A method combining uncorrelated linear discriminant analysis (ULDA) with the statistical chi-square test (CHI2) is proposed for classifying proteomic mass spectrometry data and selecting features. The chi-square test is first used as a filter to remove variables with no between-class difference, and ULDA is then used for sample classification and feature selection. On two datasets, the finally selected feature variables achieved specificities of 98.2% and 95.74%, respectively, with 100% sensitivity in both. The results show that the proposed method handles proteomic data with very many variables well, and that the selected feature variables may serve as potential biomarkers, providing clues for the early diagnosis of the related diseases.

19.
Perturbation theory provides a useful tool for sensitivity analysis in linear discriminant analysis (LDA). Although some influence functions based on single perturbation and local influence in LDA have been discussed in the literature, we propose another influence function, inspired by Critchley [1985. Influence in principal component analysis. Biometrika 72, 627-636], called the deleted empirical influence function, as an alternative approach to influence analysis in LDA. It is well known that single-perturbation diagnostics can suffer from the masking effect, so in this paper we also develop pair-perturbation influence functions to detect masked influential points. Comparisons between the pair-perturbation influence functions and local influences in pairs in LDA are also investigated. Finally, two examples illustrate the results of these approaches.

20.
To improve face recognition accuracy and efficiency, a two-dimensional combined complex discriminant analysis (2DCCDA) method is proposed, built on row- and column-direction two-dimensional linear discriminant analysis ((2D)2LDA). The method fuses the row and column feature matrices extracted in parallel by (2D)2LDA into a complex feature matrix via complex two-dimensional discriminant analysis (C2DLDA), and extracts the most discriminative coefficients from the complex feature matrix to form the feature vector. Compared with 2DLDA and (2D)2LDA, 2DCCDA needs fewer feature coefficients to represent an image and achieves a correspondingly higher recognition rate.
