Similar Documents
20 similar documents found.
1.
Pixel mapping is one of the basic processes in color quantization. In this paper, we propose a new algorithm that uses principal component analysis as a faster approach to pixel mapping. With a much shorter search time, the new scheme finds the same nearest color as a full search. The idea behind the proposed method is quite simple. First, we compute two principal component directions (PCDs) for the palette. Then, the projected values on the PCDs are computed for each color in the palette. Finally, the projected values, by the triangle inequality, help reduce the computation time for finding the nearest color. The experimental results reveal that the proposed scheme is more efficient than previous work.
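The pruning idea in this abstract can be sketched as follows. This is a minimal one-direction variant (the paper uses two principal component directions): since the absolute difference of projections onto a unit direction is a lower bound on the Euclidean distance, palette entries whose projected gap already exceeds the best distance found so far can be skipped. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def nearest_color_pca(palette, query):
    """Nearest palette color, pruned by a projection lower bound."""
    palette = np.asarray(palette, dtype=float)
    query = np.asarray(query, dtype=float)
    # First principal direction of the palette colors.
    centered = palette - palette.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    d = vt[0]                              # unit-length principal direction
    proj = palette @ d
    order = np.argsort(proj)               # palette sorted along the direction
    sproj = proj[order]
    qp = query @ d

    # Expand outward from the entry closest in projection.
    start = int(np.searchsorted(sproj, qp))
    best_idx, best_dist = -1, np.inf
    lo, hi = start - 1, start
    while lo >= 0 or hi < len(order):
        gap_lo = qp - sproj[lo] if lo >= 0 else np.inf
        gap_hi = sproj[hi] - qp if hi < len(order) else np.inf
        # |d.(p - q)| <= ||p - q||, so a projected gap >= best_dist
        # means every remaining candidate on that side can be pruned.
        if min(gap_lo, gap_hi) >= best_dist:
            break
        if gap_lo <= gap_hi:
            i, lo = order[lo], lo - 1
        else:
            i, hi = order[hi], hi + 1
        dist = np.linalg.norm(palette[i] - query)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx
```

Because the projected gaps grow monotonically as the search moves outward, the break is safe: the result always matches a brute-force full search.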

2.
The classical analysis of a stochastic signal into principal components compresses the signal using an optimal selection of linear features. Noisy Principal Component Analysis (NPCA) is an extension of PCA under the assumption that the extracted features are unreliable, with the unreliability modeled by additive noise. Applications of this assumption appear, for instance, in communication problems with noisy channels. The level of noise in the NPCA features affects the reconstruction error in a way that resembles the water-filling analogy in information theory. Robust neural network models for noisy PCA can be defined with respect to certain synaptic weight constraints. In this paper we present the NPCA theory related to a particularly simple and tractable constraint, which allows us to evaluate the robustness of older PCA Hebbian learning rules. It turns out that those algorithms are not optimally robust, in the sense that they produce a zero solution when the noise power reaches half the limit set by NPCA; in fact, they are not NPCA-optimal for any noise level except zero. Finally, we propose new NPCA-optimal robust Hebbian learning algorithms for multiple adaptive noisy principal component extraction.

3.
This paper addresses the problem of face recognition using independent component analysis (ICA). More specifically, we address two issues in face representation using ICA. First, as the independent components (ICs) are independent but not orthogonal, images outside a training set cannot be projected onto these basis functions directly. In this paper, we propose a least-squares solution method using the Householder transformation to find a new representation. Second, we demonstrate that not all ICs are useful for recognition. Along this direction, we design and develop an IC selection algorithm to find a subset of ICs for recognition. Three publicly available databases, namely, MIT AI Laboratory, Yale University, and Olivetti Research Laboratory, are selected to evaluate the performance, and the results are encouraging.

4.
Syed A., Nasser M. Pattern Recognition, 2002, 35(12): 2895-2904
A modular clutter-rejection technique that uses region-based principal component analysis (PCA) is proposed. A major problem in FLIR ATR is the poorly centered targets generated by the preprocessing stage. Our modular clutter-rejection system uses static as well as dynamic region of interest (ROI) extraction to overcome the problem of poorly centered targets. In static ROI extraction, the center of the representative ROI coincides with the center of the potential target image. In dynamic ROI extraction, a representative ROI is moved in several directions with respect to the center of the potential target image to extract a number of ROIs. Each module in the proposed system applies region-based PCA to generate the feature vectors, which are subsequently used to make a decision about the identity of the potential target. Region-based PCA uses topological features of the targets to reject false alarms. In this technique, a potential target is divided into several regions and a PCA is performed on each region to extract regional feature vectors. We propose using regional feature vectors of arbitrary shapes and dimensions that are optimized for the topology of a target in a particular region. These regional feature vectors are then used by a two-class classifier based on learning vector quantization to decide whether a potential target is a false alarm or a real target. We also present experimental results using real-life data to evaluate and compare the performance of the clutter-rejection systems with static and dynamic ROI extraction.

5.
Collaborative filtering based on iterative principal component analysis
Collaborative filtering (CF) is one of the most popular recommender system technologies, and utilizes the known preferences of a group of users to predict the unknown preferences of a new user. However, existing CF techniques have the drawback that they require the entire data set to be maintained and analyzed repeatedly whenever new user ratings are added. To avoid this problem, Eigentaste, a CF approach based on principal component analysis (PCA), has been proposed. However, Eigentaste requires that each user rate every item in the so-called gauge set in order to execute PCA, which may not always be feasible in practice. Developed in this article is an iterative PCA approach in which no gauge set is required, and singular value decomposition is employed for estimating missing ratings and for dimensionality reduction. Principal component values for users in the reduced dimension are used for clustering users. The proposed approach is then compared to Eigentaste in terms of the mean absolute error of prediction using the Jester, MovieLens, and EachMovie data sets. Experimental results show that the proposed approach, even without a gauge set, performs slightly better than Eigentaste regardless of the data set and clustering method employed, implying that it can be used as a useful alternative when defining a gauge set is impossible or impractical.
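The core loop described here — SVD-based estimation of missing ratings — can be sketched with a simple alternating scheme: fill missing cells with column means, fit a low-rank SVD approximation, replace only the missing cells with the approximation, and repeat. This is a generic sketch under assumed parameter names (`rank`, `n_iter`), not the article's exact algorithm.

```python
import numpy as np

def iterative_svd_impute(R, mask, rank=2, n_iter=50):
    """Estimate missing ratings by alternating mean-fill and low-rank SVD.

    R    : user-by-item rating matrix (missing entries arbitrary)
    mask : boolean matrix, True where a rating is observed
    """
    R = np.asarray(R, dtype=float)
    filled = R.copy()
    # Initialize missing cells with per-item (column) means.
    col_means = np.nanmean(np.where(mask, R, np.nan), axis=0)
    filled[~mask] = np.take(col_means, np.nonzero(~mask)[1])
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
        # Keep observed ratings; update only the missing cells.
        filled = np.where(mask, R, approx)
    return filled
```

The rows of the rank-`rank` factorization can then be clustered to group users, as the abstract describes.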

6.
A simple linear identification algorithm is presented in this paper. The last principal component (LPC), the eigenvector corresponding to the smallest eigenvalue of a non-negative symmetric matrix, contains an optimal linear relation among the column vectors of the data matrix. This traditional, well-known principal component analysis is extended to generalized last principal component analysis (GLPC). For processes with colored measurement noise or disturbances, consistency of the GLPC estimator is achieved without iteration or non-linear numerical optimization. The proposed algorithm is illustrated by a simulated example and by application to a pilot-scale process.
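The last-principal-component idea is easy to demonstrate: the unit vector minimizing ||Xv|| is the right singular vector with the smallest singular value, and when the columns of X satisfy an exact linear relation, that vector recovers its coefficients. This is a minimal sketch of the LPC concept only, not the GLPC estimator for colored noise.

```python
import numpy as np

def last_principal_component(X):
    """Unit vector v minimising ||X v||: the eigenvector of X^T X with
    the smallest eigenvalue, i.e. the best linear relation among the
    columns of X (X v ~ 0)."""
    _, _, vt = np.linalg.svd(np.asarray(X, dtype=float))
    return vt[-1]
```

For example, if the third column is built as 2*x1 - x2, the recovered vector is proportional to (2, -1, -1).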

7.
In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. In contrast to 2DPCA, DiaPCA preserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.
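A minimal sketch of the two steps this abstract implies: form a "diagonal image" by cyclically shifting each row so the original diagonals become columns, then compute a 2DPCA-style image covariance over the diagonal images and project the original images onto its top eigenvectors. The shift convention and function names are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def diagonal_image(A):
    """Row i of the output is row i of A cyclically shifted left by i,
    so the original diagonals of A become columns."""
    A = np.asarray(A)
    return np.stack([np.roll(A[i], -i) for i in range(A.shape[0])])

def diapca(images, k):
    """Simplified DiaPCA: covariance from diagonal images, projection
    of the original images onto the top-k eigenvectors."""
    D = np.stack([diagonal_image(a).astype(float) for a in images])
    mean = D.mean(axis=0)
    G = sum((d - mean).T @ (d - mean) for d in D) / len(D)
    w, v = np.linalg.eigh(G)          # eigenvalues in ascending order
    X = v[:, ::-1][:, :k]             # top-k eigenvectors
    return [np.asarray(a, float) @ X for a in images]   # feature matrices
```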

8.
Face recognition using kernel entropy component analysis
In this letter, we report a new face recognition algorithm based on Renyi entropy component analysis. In the proposed model, a kernel-based methodology is integrated with entropy analysis to choose the best principal component vectors, which are subsequently used to project patterns into a lower-dimensional space. Extensive experimentation on the Yale and UMIST face databases reveals the performance of the entropy-based principal component analysis method, and a comparative analysis with kernel principal component analysis highlights the importance of selecting principal component vectors based on entropy information rather than only on the magnitude of eigenvalues.
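The selection rule that distinguishes entropy component analysis from kernel PCA can be sketched as follows: rank kernel eigenpairs by their contribution lam * (1^T e)^2 to the Renyi entropy estimate instead of by eigenvalue magnitude alone. This is a generic sketch with an assumed Gaussian kernel and illustrative parameter names; it is not claimed to be the letter's exact pipeline.

```python
import numpy as np

def kernel_eca(X, n_components, gamma=1.0):
    """Project data onto the kernel components that contribute most to
    the Renyi entropy estimate, rather than the largest eigenvalues."""
    X = np.asarray(X, dtype=float)
    # Gaussian kernel matrix (assumed kernel choice).
    sq = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    lam, E = np.linalg.eigh(K)
    # Entropy contribution of each eigenpair: lam_i * (1^T e_i)^2.
    contrib = lam * (E.sum(axis=0) ** 2)
    idx = np.argsort(contrib)[::-1][:n_components]
    # Training projections, scaled as in kernel PCA.
    return E[:, idx] * np.sqrt(np.maximum(lam[idx], 0))
```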

9.
This paper presents a unified theory of a class of learning neural nets for principal component analysis (PCA) and minor component analysis (MCA). First, some fundamental properties that all neural nets in the class have in common are addressed. Second, a subclass called the generalized asymmetric learning algorithm is investigated, and the kind of asymmetric structure that is required in general to obtain the individual eigenvectors of the correlation matrix of a data sequence is clarified. Third, focusing on a single-neuron model, a systematic way of deriving both PCA and MCA learning algorithms is shown, through which a relation between the normalization in PCA algorithms and that in MCA algorithms is revealed. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19-21, 1998.

10.
Thermal infrared remote sensing can quickly and accurately detect volcanic ash clouds. However, remote sensing data exhibit strong inter-band correlation and data redundancy, both of which reduce the detection accuracy of volcanic ash clouds to some degree. Principal component analysis (PCA) can compress a large amount of complex information into a few principal components, overcoming this correlation and redundancy. Taking the Eyjafjallajokull volcanic ash cloud formed on April 19, 2010 as an example, this paper uses PCA to detect the volcanic ash cloud in moderate resolution imaging spectroradiometer (MODIS) remote sensing images. The results show that PCA can successfully extract the volcanic ash cloud from the MODIS image, and that the detected ash cloud agrees well with the spatial distribution, SO2 concentration, and absorbing aerosol index (AAI).
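The decorrelation step the abstract relies on — compressing correlated spectral bands into a few principal-component images — can be sketched generically. This is a standard band-wise PCA, not the authors' MODIS processing chain; the cube layout and names are assumptions.

```python
import numpy as np

def band_pca(cube, n_components):
    """Compress a (bands, rows, cols) image cube into principal-component
    images, removing inter-band correlation and redundancy."""
    bands, r, c = cube.shape
    X = cube.reshape(bands, -1).T.astype(float)   # pixels x bands
    X -= X.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)                  # band covariance matrix
    w, v = np.linalg.eigh(cov)                    # ascending eigenvalues
    comps = v[:, ::-1][:, :n_components]          # top eigenvectors
    pcs = (X @ comps).T.reshape(n_components, r, c)
    explained = w[::-1][:n_components] / w.sum()  # variance fractions
    return pcs, explained
```

For strongly correlated bands, the first component typically captures nearly all the variance, which is what makes the subsequent thresholding or classification of the component images effective.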

11.
This paper presents an efficient image denoising scheme using principal component analysis (PCA) with local pixel grouping (LPG). For better preservation of local image structures, a pixel and its nearest neighbors are modeled as a vector variable, whose training samples are selected from the local window by block-matching-based LPG. Such an LPG procedure guarantees that only sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that local image features can be well preserved after coefficient shrinkage in the PCA domain to remove the noise. The LPG-PCA denoising procedure is iterated one more time to further improve the denoising performance, and the noise level is adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in fine structure preservation, compared with state-of-the-art denoising algorithms.
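A simplified single-pass sketch of the LPG-PCA idea: for each patch, gather the most similar patches in a local window (the local pixel grouping), estimate a PCA basis from that group, and shrink the PCA coefficients with a Wiener-like rule before reconstructing. Parameter names, the candidate-sampling stride, and the exact shrinkage rule are illustrative assumptions, not the paper's scheme (which also runs a second adaptive stage).

```python
import numpy as np

def lpg_pca_denoise(img, sigma, patch=4, window=12, n_group=24):
    """Simplified LPG-PCA denoising sketch (single stage)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for i in range(0, h - patch + 1):
        for j in range(0, w - patch + 1):
            ref = img[i:i+patch, j:j+patch].ravel()
            # Local pixel grouping: nearest patches by squared distance.
            cands, dists = [], []
            for di in range(max(0, i - window), min(h - patch, i + window) + 1, 2):
                for dj in range(max(0, j - window), min(w - patch, j + window) + 1, 2):
                    p = img[di:di+patch, dj:dj+patch].ravel()
                    cands.append(p)
                    dists.append(np.sum((p - ref) ** 2))
            order = np.argsort(dists)[:n_group]
            G = np.stack([cands[k] for k in order])
            mean = G.mean(axis=0)
            Gc = G - mean
            # PCA basis estimated from the grouped similar patches.
            _, v = np.linalg.eigh(Gc.T @ Gc / len(G))
            coef = (ref - mean) @ v
            # Wiener-like shrinkage of the PCA coefficients.
            var = np.maximum(((Gc @ v) ** 2).mean(axis=0) - sigma ** 2, 0)
            coef *= var / (var + sigma ** 2)
            den = (v @ coef + mean).reshape(patch, patch)
            out[i:i+patch, j:j+patch] += den
            weight[i:i+patch, j:j+patch] += 1
    return out / weight      # average the overlapping patch estimates
```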

12.
This paper shows that current multivariate statistical monitoring technology may not detect incipient changes in the variable covariance structure or changes in the geometry of the underlying variable decomposition. To overcome these deficiencies, the local approach is incorporated into the multivariate statistical monitoring framework to define two new univariate statistics for fault detection. Fault isolation is achieved by constructing a fault diagnosis chart which reveals changes in the covariance structure resulting from the presence of a fault. A theoretical analysis is presented, and the proposed monitoring approach is exemplified using application studies involving recorded data from two complex industrial processes.

13.
A complete Bayesian framework for principal component analysis (PCA) is proposed. Previous model-based approaches to PCA were often based upon a factor analysis model with isotropic Gaussian noise. In contrast to PCA, these approaches do not impose orthogonality constraints. A new model with orthogonality restrictions is proposed. Its approximate Bayesian solution, using the variational approximation and results from directional statistics, is developed. The Bayesian solution provides two notable results in relation to PCA. The first is uncertainty bounds on principal components (PCs), and the second is an explicit distribution on the number of relevant PCs. The posterior distribution of the PCs is found to be of the von Mises-Fisher type. This distribution and its associated hypergeometric function are studied. Numerical reductions are revealed, leading to a stable and efficient orthogonal variational PCA (OVPCA) algorithm. OVPCA provides the required inferences. Its performance is illustrated in simulation, and for a sequence of medical scintigraphic images.

14.
A new subspace identification approach based on principal component analysis
Principal component analysis (PCA) has been widely used for monitoring complex industrial processes with multiple variables and for diagnosing process and sensor faults. The objective of this paper is to develop a new subspace identification algorithm that gives consistent model estimates under the errors-in-variables (EIV) situation. In this paper, we propose a new subspace identification approach using principal component analysis. PCA naturally falls into the category of EIV formulations, which resemble total least squares and allow for errors in both process input and output. We propose to use PCA to determine the system observability subspace, the A, B, C, and D matrices, and the system order for an EIV formulation. Standard PCA is modified with instrumental variables in order to achieve consistent estimates of the system matrices. The proposed subspace identification method is demonstrated using a simulated process and a real industrial process for model identification and order determination. For comparison, the MOESP and N4SID algorithms are used as benchmarks to demonstrate the advantages of the proposed PCA-based subspace model identification (SMI) algorithm.

15.
The statistical analysis of tree-structured data is a new topic in statistics with wide application areas. Some Principal Component Analysis (PCA) ideas have previously been developed for binary tree spaces. These ideas are extended to the more general space of rooted and ordered trees. Concepts such as tree-line and forward principal component tree-line are redefined for this more general space, and the optimal algorithm that finds them is generalized. An analog of the classical dimension reduction technique in PCA for tree spaces is developed. To do this, backward principal components, the components that carry the least amount of information in a tree data set, are defined. An optimal algorithm to find them is presented. Furthermore, their relationship to the forward principal components is investigated, and a path-independence property between the forward and backward techniques is proven. These methods are applied to a brain artery data set of 98 subjects. Using these techniques, the effects of aging on the brain artery structure of males and females are investigated. A second data set, the organization structure of a large US company, is also analyzed, and the structural differences across different types of departments within the company are explored.

16.
Pattern recognition techniques have been widely used in a variety of scientific disciplines including computer vision, artificial intelligence, biology, and so forth. Although many methods present satisfactory performances, they still have several weak points, thus leaving a lot of space for further improvements. In this paper, we propose two performance-driven subspace learning methods by extending the principal component analysis (PCA) and the kernel PCA (KPCA). Both methods adopt a common structure where genetic algorithms are employed to pursue optimal subspaces. Because the proposed feature extractors aim at achieving high classification accuracy, enhanced generalization ability can be expected. Extensive experiments are designed to evaluate the effectiveness of the proposed algorithms in real-world problems including object recognition and a number of machine learning tasks. Comparative studies with other state-of-the-art techniques show that the methods in this paper are capable of enhancing generalization ability for pattern recognition systems.

17.
Inspired by the conviction that the successful model employed for face recognition [M. Turk, A. Pentland, Eigenfaces for recognition, J. Cogn. Neurosci. 3(1) (1991) 71-86] should be extendable to object recognition [H. Murase, S.K. Nayar, Visual learning and recognition of 3-D objects from appearance, International J. Comput. Vis. 14(1) (1995) 5-24], in this paper a new technique called two-dimensional principal component analysis (2D-PCA) [J. Yang et al., Two-dimensional PCA: a new approach to appearance based face representation and recognition, IEEE Trans. Patt. Anal. Mach. Intell. 26(1) (2004) 131-137] is explored for 3D object representation and recognition. 2D-PCA is based on 2D image matrices rather than 1D vectors, so the image matrix need not be transformed into a vector prior to feature extraction. The image covariance matrix is computed directly from the original image matrices, and its eigenvectors are derived for feature extraction. The experimental results indicate that 2D-PCA is computationally more efficient than conventional PCA (1D-PCA) [H. Murase, S.K. Nayar, Visual learning and recognition of 3-D objects from appearance, International J. Comput. Vis. 14(1) (1995) 5-24]. It is also revealed through experimentation that the proposed method is more robust to noise and occlusion.
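The 2D-PCA computation the abstract describes fits in a few lines: the image covariance matrix is accumulated directly from the (mean-centered) image matrices, and each image is projected onto the top-k eigenvectors to yield an m-by-k feature matrix. A minimal sketch, with illustrative names:

```python
import numpy as np

def twod_pca(images, k):
    """2D-PCA sketch: image covariance from image matrices directly
    (no image-to-vector transformation), then row-wise projection."""
    A = np.stack([np.asarray(a, dtype=float) for a in images])
    mean = A.mean(axis=0)
    # Image covariance matrix: average of (A_i - mean)^T (A_i - mean).
    G = sum((a - mean).T @ (a - mean) for a in A) / len(A)
    w, v = np.linalg.eigh(G)          # ascending eigenvalues
    X = v[:, ::-1][:, :k]             # top-k projective vectors
    return [a @ X for a in A], X      # feature matrices and the basis
```

Because G is only n-by-n for m-by-n images (versus mn-by-mn for vectorized PCA), the eigendecomposition is far cheaper, which is the efficiency advantage the abstract reports.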

18.
Vertices Principal Component Analysis (V-PCA) and Centers Principal Component Analysis (C-PCA) generalize Principal Component Analysis (PCA) in order to summarize interval-valued data. Neural Network Principal Component Analysis (NN-PCA) represents an extension of PCA for fuzzy interval data. The first two methods can also be used for analyzing fuzzy interval data, but they then ignore the spread information. In the literature, the V-PCA method is usually considered computationally cumbersome because it requires the transformation of the interval-valued data matrix into a single-valued data matrix whose number of rows depends exponentially on the number of variables and linearly on the number of observation units. However, it has been shown that this problem can be overcome by considering the cross-products matrix, which is easy to compute. A review of C-PCA, V-PCA (which hence also includes the computational short-cut to V-PCA), and NN-PCA is provided. Furthermore, the three methods are compared by means of a simulation study and an application to an empirical data set. In the simulation study, fuzzy interval data are generated according to various models, and it is reported under which conditions each method performs best.

19.
An artificial data matrix of element concentrations at sampling locations was created which included six simulated gradients of correlated variables (Ca+Mg, Ni+V, Pb+Cu+Zn, Cd, Fe and K), representing a simplified model of a national survey. The data matrix model was used to explore the efficiency with which Principal Components Analysis (PCA), without and with Varimax rotation, could recover the imposed gradients. The dependence of PCA on outliers was decreased by log-transformation of the data. The Components derived from non-rotated PCA were confounded by bipolar clusters and oblique gradients, both resulting in the superimposition of two independent gradients on one Component. Therefore, erroneous interpretation could result from assessment of variable loadings on Components without assessment of coupled independent gradients. Varimax rotation greatly improved the results, but rotation of too few Components led to the same problems, and rotation of too many Components led to fragmentation of correlated variables onto single-element Components. The best configuration matching the original model could be selected after investigation of element concentrations superimposed on sample ordinations.

20.
The aim of this paper is to develop a fuzzy classifier from the point of view of a fuzzy information retrieval system. A genetic algorithm is employed to find useful fuzzy concepts with high classification performance; each class and pattern can then be represented by a fuzzy set of useful fuzzy concepts. Each fuzzy concept is linguistically interpreted, and the corresponding membership functions remain fixed during the evolution. A pattern is assigned to the class with which it has the maximum degree of similarity. To keep the proposed classifier useful for high-dimensional problems, principal component analysis is incorporated into the classifier to reduce dimensionality. The generalization ability of the proposed classifier is examined through computer simulations on well-known data sets, such as the breast cancer data and the wine classification data. The results demonstrate that the proposed classifier works well in comparison with other classification methods.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号