20 similar documents found.
1.
International Journal of Computer Mathematics, 2012, 89(1-2): 5-24
The classical analysis of a stochastic signal into principal components compresses the signal using an optimal selection of linear features. Noisy Principal Component Analysis (NPCA) is an extension of PCA under the assumption that the extracted features are unreliable, with the unreliability modeled by additive noise. This assumption arises, for instance, in communication problems with noisy channels. The level of noise in the NPCA features affects the reconstruction error in a way resembling the water-filling analogy in information theory. Robust neural network models for noisy PCA can be defined with respect to certain synaptic weight constraints. In this paper we present the NPCA theory related to a particularly simple and tractable constraint which allows us to evaluate the robustness of old PCA Hebbian learning rules. It turns out that those algorithms are not optimally robust, in the sense that they produce a zero solution when the noise power level reaches half the limit set by NPCA; in fact, they are NPCA-optimal at no noise level other than zero. Finally, we propose new NPCA-optimal robust Hebbian learning algorithms for multiple adaptive noisy principal component extraction.
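As a concrete reference point for the "old PCA Hebbian learning rules" evaluated in this abstract, the following is a minimal NumPy sketch of Oja's single-unit Hebbian rule. It is not the NPCA-optimal algorithm the paper proposes; the learning rate, epoch count, and toy data are arbitrary choices.

```python
import numpy as np

def oja_pca(X, n_epochs=100, lr=0.005, seed=0):
    """Estimate the first principal direction of zero-mean data X (n_samples x dim)
    with Oja's Hebbian rule: w <- w + lr * y * (x - y * w), where y = w.x."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = w @ x                      # neuron output
            w += lr * y * (x - y * w)      # Hebbian term with implicit normalization
    return w / np.linalg.norm(w)

# toy check against batch PCA
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) @ np.diag([2.0, 1.0, 0.5, 0.3, 0.2])
X -= X.mean(axis=0)
w = oja_pca(X)
_, V = np.linalg.eigh(np.cov(X.T))
print(abs(w @ V[:, -1]))  # close to 1 once the rule has converged
```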
2.
We propose a method for non-uniform reconstruction of 3D scalar data. Typically, radial basis functions, trigonometric polynomials or shift-invariant functions are used in the functional approximation of 3D data. We adopt a variational approach for the reconstruction and rendering of 3D data. The principal idea is data fitting via thin-plate splines; an approximation by B-splines offers more compact support for fast reconstruction. We adapt this method to large datasets by introducing a block-based reconstruction approach, which makes it practical for such data while keeping the reconstruction smooth across block boundaries. We report reconstruction error measurements for different parameter settings and give insight into the computational effort, showing that the block size used in reconstruction has a negligible effect on the reconstruction error. Finally, we show rendering results to emphasize the quality of this 3D reconstruction technique.
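As a rough illustration of the data-fitting step, the sketch below fits a thin-plate-spline approximant to scattered samples inside a single block using SciPy and resamples it on a regular grid. It ignores the paper's B-spline acceleration and cross-block smoothness handling; the block extent, sample count, and smoothing value are arbitrary assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# scattered (non-uniform) samples of a scalar field inside one block [0,1]^3
pts = rng.random((400, 3))
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1] + 0.1 * pts[:, 2]

# thin-plate-spline fit for this block; smoothing > 0 gives approximation, not interpolation
f = RBFInterpolator(pts, vals, kernel='thin_plate_spline', smoothing=1e-3)

# resample the block on a regular grid, e.g. for rendering
g = np.linspace(0.0, 1.0, 16)
gx, gy, gz = np.meshgrid(g, g, g, indexing='ij')
grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
recon = f(grid).reshape(16, 16, 16)
print(recon.shape)
```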
3.
We introduce a method for surface reconstruction from point sets that is able to cope with noise and outliers. First, a splat-based representation is computed from the point set. A robust local 3D RANSAC-based procedure is used to filter the point set for outliers, then a local jet surface – a low-degree surface approximation – is fitted to the inliers. Second, we extract the reconstructed surface in the form of a surface triangle mesh through Delaunay refinement. The Delaunay refinement meshing approach requires computing intersections between line segment queries and the surface to be meshed. In the present case, intersection queries are solved from the set of splats through a 1D RANSAC procedure.
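A minimal sketch of the local-RANSAC idea follows: for the neighborhood of a point, repeatedly fit a surface to random minimal samples and keep the inliers. The paper fits low-degree jet surfaces; the sketch uses planes instead, and the iteration count and distance threshold are arbitrary assumptions.

```python
import numpy as np

def ransac_plane(nbhd, n_iter=100, tol=0.01, seed=0):
    """Fit a plane to a local neighborhood (k x 3 array) with RANSAC and return
    the boolean inlier mask -- a stand-in for the splat/jet fitting step."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(nbhd), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = nbhd[rng.choice(len(nbhd), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                     # degenerate sample, skip
            continue
        n /= norm
        dist = np.abs((nbhd - p0) @ n)       # point-to-plane distances
        mask = dist < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# toy neighborhood: points near the plane z = 0 plus a few outliers
rng = np.random.default_rng(1)
pts = np.column_stack([rng.random((60, 2)), 0.003 * rng.normal(size=60)])
pts[:5, 2] += 0.5                            # outliers
print(ransac_plane(pts).sum(), "inliers of", len(pts))
```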
4.
5.
Fuzzy auto-associative neural networks for principal component extraction of noisy data (total citations: 1, self-citations: 0, other citations: 1)
In this paper, we propose a fuzzy auto-associative neural network for principal component extraction. The objective function is based on reconstructing the inputs from the corresponding outputs of the auto-associative neural network. Unlike traditional approaches, the proposed criterion is a fuzzy mean squared error. We prove that the proposed objective function is an appropriate fuzzy formulation of the auto-associative neural network for principal component extraction. Simulations are given to show the performance of the proposed neural network in comparison with the existing method.
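The paper defines its own fuzzy objective; as a loose illustration of the general idea of weighting the squared reconstruction error of an auto-associative network, the sketch below trains a tied-weight linear auto-associative network by gradient descent on a membership-weighted error. The membership values, architecture, and training parameters here are arbitrary assumptions, not the paper's formulation.

```python
import numpy as np

def weighted_autoassoc(X, u, k=2, lr=0.005, n_epochs=300, seed=0):
    """Tied-weight linear auto-associative network x_hat = W.T @ W @ x trained by
    gradient descent on the membership-weighted squared reconstruction error
    sum_i u_i * ||x_i - W.T W x_i||^2.  Rows of W end up spanning a principal subspace."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.normal(size=(k, X.shape[1]))
    for _ in range(n_epochs):
        for x, ui in zip(X, u):
            y = W @ x
            e = x - W.T @ y                                   # reconstruction error
            W += lr * ui * (np.outer(y, e) + np.outer(W @ e, x))  # exact gradient step
    return W

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4)) @ np.diag([2.0, 1.0, 0.3, 0.1])
X[:10] += 6.0 * rng.normal(size=(10, 4))     # a few noisy samples
X -= X.mean(axis=0)
u = np.ones(len(X)); u[:10] = 0.05           # arbitrary fuzzy memberships: down-weight noisy samples
W = weighted_autoassoc(X, u)
print(np.round(W @ W.T, 2))                  # rows become approximately orthonormal as training converges
```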
6.
International Journal of Remote Sensing, 2012, 33(6): 2303-2325
Due to the signal-to-noise ratio (SNR) of sensors, as well as atmospheric absorption, illumination conditions and other factors, hyperspectral data at some bands are of poor quality. Data restoration for noisy bands is important for many remote sensing applications. In this paper, we present a novel data-driven Principal Component Analysis (PCA) approach for restoring leaf reflectance spectra at noisy bands using the spectra at effective bands. The technique decomposes the leaf reflectance spectra into their principal components (PCs), selects the leading PCs that describe the most variance in the data, and restores the data from these components. First, the first 10 PCs were determined from a training dataset simulated by the leaf optical properties model (PROSPECT-5); they contained 99.998% of the total information in the 3636 training samples. Second, the performance of the PCA method for restoring the reflectance at noisy bands was investigated using the ANGERS leaf optical properties dataset; the results showed that the spectral root mean squared error (RMSE) is in the range 6.46 × 10⁻⁴ to 6.44 × 10⁻², about 3 to 34 times more accurate than the stepwise regression and partial least squares regression (PLSR) methods on the ANGERS dataset. The results also showed that if the noisy bands are far away from the effective bands, the accuracy of the restored leaf reflectance spectra decreases. Third, the reliability of the restored reflectance spectra for retrieving leaf biochemical contents was assessed using the ANGERS dataset and the leaf optical properties dataset established by the Beijing Academy of Agriculture and Forestry Sciences (BAAFS). Three water-sensitive vegetation indices, the normalized difference water index (NDWI), the normalized difference infrared index (NDII) and the Datt water index (DWI), derived from the restored leaf spectra, were employed to retrieve the equivalent water thickness (EWT). The results showed that the leaf water content can be accurately retrieved from the restored leaf reflectance spectra. In addition, the PCA method for restoring vegetation spectral reflectance can be applied at the canopy level as well; the spectral RMSE is then in the range 8.22 × 10⁻⁴ to 1.87 × 10⁻². The performance of the restored canopy spectra was investigated through retrieval of the canopy equivalent water thickness (CEWT) using the five spectral indices NDWI, NDWI1370, NDWI1890, NDII and DWI. The results indicated that the restored canopy spectra can be used to retrieve CEWT reliably and improve the retrieval accuracy compared to the original canopy reflectance spectra.
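A small NumPy sketch of the restoration idea: learn PCs from clean training spectra, then, for a new spectrum, estimate the PC scores by least squares using only the effective (clean) bands and reconstruct the noisy bands from those scores. The synthetic spectra, band indices, and band count below are placeholders; the paper trains on PROSPECT-5 simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for a training set of clean reflectance spectra (n_samples x n_bands)
n_bands = 200
basis = rng.normal(size=(3, n_bands))
train = rng.normal(size=(500, 3)) @ basis + 0.5

# learn the leading principal components from the training spectra
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 10                       # number of retained PCs (the paper keeps 10)
P = Vt[:k]                   # k x n_bands loading matrix

# a test spectrum whose bands 60..89 are treated as noisy/unusable
clean = rng.normal(size=3) @ basis + 0.5
noisy_idx = np.arange(60, 90)                  # placeholder "noisy band" positions
good_idx = np.setdiff1d(np.arange(n_bands), noisy_idx)

# estimate PC scores from the effective bands only, then restore the noisy bands
scores, *_ = np.linalg.lstsq(P[:, good_idx].T, clean[good_idx] - mean[good_idx], rcond=None)
restored = mean[noisy_idx] + scores @ P[:, noisy_idx]
print(np.sqrt(np.mean((restored - clean[noisy_idx]) ** 2)))   # RMSE on the restored bands
```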
7.
Coupled principal component analysis (total citations: 1, self-citations: 0, other citations: 1)
A framework for a class of coupled principal component learning rules is presented. In coupled rules, eigenvectors and eigenvalues of a covariance matrix are estimated simultaneously in coupled equations. Coupled rules can mitigate the stability-speed problem affecting non-coupled learning rules, since the convergence speed in all eigendirections of the Jacobian becomes largely independent of the eigenvalues of the covariance matrix. A number of coupled learning rule systems for principal component analysis, two of them new, are derived by applying Newton's method to an information criterion. The relations to other systems of this class, namely the adaptive learning algorithm (ALA), the robust recursive least squares algorithm (RRLSA), and a rule with explicit renormalization of the weight vector length, are established.
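The paper derives its coupled systems via Newton's method on an information criterion; as a rough flavor of coupled estimation, the sketch below runs a simple online rule in which an eigenvalue estimate is updated together with the weight vector and is used to rescale the step size, in the spirit of ALA-like rules. This specific rule and its parameters are illustrative assumptions, not one of the paper's derived systems.

```python
import numpy as np

def coupled_pca(X, n_epochs=50, gamma=0.002, seed=0):
    """Jointly track the leading eigenvector w and eigenvalue lam of the data
    covariance: lam is updated from the output power y^2, and the weight update
    is rescaled by 1/lam so the effective step size adapts to the eigenvalue."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1]); w /= np.linalg.norm(w)
    lam = 1.0
    for _ in range(n_epochs):
        for x in X:
            y = w @ x
            lam += gamma * (y * y - lam)              # coupled eigenvalue estimate
            w += (gamma / lam) * (y * x - y * y * w)  # Oja-style step, rescaled by 1/lam
    return w / np.linalg.norm(w), lam

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6)) @ np.diag([2.0, 1.2, 0.8, 0.5, 0.3, 0.2])
X -= X.mean(axis=0)
w, lam = coupled_pca(X)
vals, vecs = np.linalg.eigh(np.cov(X.T))
print(abs(w @ vecs[:, -1]), lam, vals[-1])   # direction alignment and eigenvalue estimate
```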
8.
Pekka J. Korhonen, Computational Statistics & Data Analysis, 1984, 2(3): 243-255
In this study we deal with the problem of finding subjective principal components for a given set of variables in a data matrix. The principal components are not determined by maximizing their variances; they are specified by a user, who can maximize the absolute values of the correlations between the principal components and the variables important to him. The correlation matrix of the variables is the basic information needed in the analysis. The problem is formulated as a multiple criteria problem and solved by using an interactive procedure. The procedure is convenient to use and easy to implement, and we have implemented an experimental version on an APPLE III microcomputer. A graphical display is used as an aid in finding the principal components. An illustrative application is also presented.
9.
Point-based graphics avoids the generation of a polygonal approximation of sampled geometry and uses algorithms that work directly with the point set. Basic ingredients of point-based methods are algorithms to compute nearest neighbors, to estimate surface properties such as normals, and to smooth the point set. In this paper we report on the results of an experimental study that compared different methods for these subtasks.
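A short sketch of two of the basic ingredients named above: nearest-neighbor queries via a k-d tree and normal estimation as the smallest-eigenvalue direction of the local covariance. SciPy's cKDTree is used here; the neighborhood size and toy point set are arbitrary assumptions, and the estimated normals are unoriented.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """For each point, take its k nearest neighbors and use the eigenvector of the
    local covariance with the smallest eigenvalue as the (unoriented) surface normal."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbhd = points[nb] - points[nb].mean(axis=0)
        _, vecs = np.linalg.eigh(nbhd.T @ nbhd)   # eigenvalues in ascending order
        normals[i] = vecs[:, 0]                   # smallest-eigenvalue direction
    return normals

# toy point set sampled from the plane z = 0 with slight noise
rng = np.random.default_rng(0)
pts = np.column_stack([rng.random((200, 2)), 0.002 * rng.normal(size=200)])
n = estimate_normals(pts)
print(np.round(np.abs(n[:3]), 2))   # normals should be close to (0, 0, 1) up to sign
```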
10.
In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without an image-to-vector transformation. In contrast to 2DPCA, DiaPCA preserves the correlations between variations of the rows and those of the columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.
11.
In this paper we treat the problem of determining optimally (in the least-squares sense) the 3D coordinates of a point, given its noisy images formed by any number of cameras of known geometry. The optimality criterion is determined by the covariance matrices associated with the images of the point. The covariance matrices are not restricted to be positive definite but are allowed to be singular; thus, image points constrained to lie along straight lines can be handled as well. An estimate of the covariance of the reconstructed point is also provided. The frequently occurring two-camera stereo case is treated in detail; it is shown that, under reasonable conditions, the main step of the reconstruction reduces to finding the unique zero of a sixth-degree polynomial in the interval (0, 1).
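As background for the reconstruction problem, the sketch below implements plain (unweighted) linear triangulation of a 3D point from its images in several cameras via an SVD. The paper's method additionally weights the residuals by the image-point covariance matrices and, for two cameras, reduces the key step to a sixth-degree polynomial; neither refinement is shown here, and the toy camera matrices are assumptions.

```python
import numpy as np

def triangulate(Ps, uvs):
    """Linear (DLT) triangulation: Ps is a list of 3x4 camera matrices, uvs the
    corresponding 2D image points.  Returns the 3D point minimizing the algebraic error."""
    A = []
    for P, (u, v) in zip(Ps, uvs):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                      # right singular vector of the smallest singular value
    return X[:3] / X[3]

# two toy cameras with the same intrinsics, translated along the x-axis
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 5.0, 1.0])
uvs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], uvs))   # approximately (0.2, -0.1, 5.0)
```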
12.
A class of learning algorithms for principal component analysis and minor component analysis (total citations: 1, self-citations: 0, other citations: 1)
Principal component analysis (PCA) and minor component analysis (MCA) are powerful techniques for a wide variety of applications such as pattern recognition and signal processing. In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of PCA and MCA learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, and that this class includes some new and simpler MCA learning algorithms. Our results show that all the learning algorithms of this class have the same order of convergence speed and are robust to implementation error.
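The paper's generalized differential equation covers a family of rules; as a concrete classical special case, the sketch below discretizes Oja's flow dw/dt = Cw − (wᵀCw)w for PCA, and reuses the same flow on the shifted matrix σI − C (with σ above the largest eigenvalue) to obtain a minor component, which is one simple way to derive MCA behavior from a PCA rule. This construction and its parameters are our illustration, not necessarily the class derived in the paper.

```python
import numpy as np

def oja_flow(C, n_steps=5000, dt=0.01, seed=0):
    """Euler discretization of dw/dt = C w - (w^T C w) w; converges to the
    eigenvector of C associated with the largest eigenvalue."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_steps):
        Cw = C @ w
        w += dt * (Cw - (w @ Cw) * w)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
C = A @ A.T                                   # symmetric positive semi-definite "covariance"
vals, vecs = np.linalg.eigh(C)

w_pca = oja_flow(C)                           # principal component of C
sigma = vals[-1] + 1.0
w_mca = oja_flow(sigma * np.eye(6) - C)       # minor component of C via the shifted matrix
print(abs(w_pca @ vecs[:, -1]), abs(w_mca @ vecs[:, 0]))   # both close to 1
```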
13.
A class of learning algorithms for principal component analysis and minor component analysis (total citations: 1, self-citations: 0, other citations: 1)
Qingfu Zhang, Yiu Wing Leung, IEEE Transactions on Neural Networks, 2000, 11(1): 200-204
In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of principal component analysis (PCA) and minor component analysis (MCA) learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, and that this class includes some new and simpler MCA learning algorithms. Our results show that all the learning algorithms of this class have the same order of convergence speed and are robust to implementation error.
14.
A complete Bayesian framework for principal component analysis (PCA) is proposed. Previous model-based approaches to PCA were often based upon a factor analysis model with isotropic Gaussian noise; in contrast to PCA, these approaches do not impose orthogonality constraints. A new model with orthogonality restrictions is proposed, and its approximate Bayesian solution using the variational approximation and results from directional statistics is developed. The Bayesian solution provides two notable results in relation to PCA: the first is uncertainty bounds on the principal components (PCs), and the second is an explicit distribution on the number of relevant PCs. The posterior distribution of the PCs is found to be of the von Mises-Fisher type. This distribution and its associated hypergeometric function are studied. Numerical reductions are revealed, leading to a stable and efficient orthogonal variational PCA (OVPCA) algorithm. OVPCA provides the required inferences. Its performance is illustrated in simulation and on a sequence of medical scintigraphic images.
15.
A model for improving the robustness of sparse principal component analysis (PCA) is proposed in this paper. Instead of the l2-norm variance utilized in the conventional sparse PCA model, the proposed model maximizes the l1-norm variance, which is less sensitive to noise and outliers. To ensure sparsity, an lp-norm (0 < p ≤ 1) constraint, which is more general and effective than the l1-norm, is considered. A simple yet efficient algorithm is developed for the proposed model. The complexity of the algorithm increases approximately linearly with both the size and the dimensionality of the given data, which is comparable to or better than current sparse PCA methods. The proposed algorithm is also proved to converge to a reasonable local optimum of the model. The efficiency and robustness of the algorithm are verified by a series of experiments on both synthetic data and digit image data.
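For context on l1-norm variance maximization, the sketch below implements the simple fixed-point iteration w ← normalize(Σᵢ sign(wᵀxᵢ) xᵢ), which locally maximizes Σᵢ |wᵀxᵢ|. It shows only the robust-variance part of the story and omits the paper's lp sparsity constraint; the toy data and iteration limit are assumptions.

```python
import numpy as np

def pca_l1(X, n_iter=100, seed=0):
    """Fixed-point iteration that locally maximizes the l1-norm 'variance'
    sum_i |w^T x_i| subject to ||w|| = 1 (a robust alternative to the l2 variance)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0                 # tie-breaking for samples orthogonal to w
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5)) @ np.diag([3.0, 1.0, 0.5, 0.3, 0.2])
X[:8] += 25.0 * rng.normal(size=(8, 5))      # gross outliers
X -= np.median(X, axis=0)
w = pca_l1(X)
print(np.round(w, 2))   # leading direction; less pulled by the outliers than the l2 solution
```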
16.
17.
Iterative kernel principal component analysis for image modeling (total citations: 6, self-citations: 0, other citations: 6)
Kim K I, Franz M O, Schölkopf B, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(9): 1351-1366
In recent years, kernel principal component analysis (KPCA) has been suggested for various image processing tasks requiring an image model, such as denoising or compression. The original form of KPCA, however, can only be applied to strongly restricted image classes due to the limited number of training examples that can be processed. We therefore propose a new iterative method for performing KPCA, the Kernel Hebbian Algorithm, which iteratively estimates the kernel principal components with only linear-order memory complexity. In our experiments, we compute models for complex image classes such as faces and natural images which require a large number of training examples. The resulting image models are tested in single-frame super-resolution and denoising applications. The KPCA model is not specifically tailored to these tasks; in fact, the same model can be used in super-resolution with variable input resolution, or in denoising with unknown noise characteristics. In spite of this, both super-resolution and denoising performance are comparable to existing methods.
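A compact sketch of the Kernel Hebbian Algorithm idea: apply the generalized Hebbian update in feature space, storing each component only through its expansion coefficients over the training samples. Feature-space centering is omitted for brevity, and the kernel, learning rate, and component count below are arbitrary assumptions rather than the paper's settings.

```python
import numpy as np

def kernel_hebbian(K, r=3, n_epochs=200, eta=0.05, seed=0):
    """Kernel Hebbian Algorithm (GHA in feature space) on a precomputed kernel
    matrix K (n x n).  The i-th kernel principal component is represented by the
    expansion coefficients in row i of A: w_i = sum_j A[i, j] * phi(x_j)."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    A = 0.1 * rng.normal(size=(r, n))
    for _ in range(n_epochs):
        for t in rng.permutation(n):
            y = A @ K[:, t]                               # outputs for presented sample t
            A -= eta * np.tril(np.outer(y, y)) @ A        # Gram-Schmidt-like decay term
            A[:, t] += eta * y                            # Hebbian term (adds phi(x_t))
    return A

# toy data and an RBF kernel
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 2))
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq / 2.0)
A = kernel_hebbian(K)
proj = A @ K                    # projections of the training data onto the estimated KPCs
print(proj.shape)
```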
18.
Haixian Wang, Machine Vision and Applications, 2011, 22(2): 433-438
In this paper, a new technique called structural two-dimensional principal component analysis (S2DPCA) is proposed for image recognition. S2DPCA is a subspace learning method that identifies structural information for discrimination. Different from conventional two-dimensional principal component analysis (2DPCA), which reflects only the within-row information of images, the goal of S2DPCA is to discover structural discriminative information contained both within rows and between rows of the images. By contrast with 2DPCA, S2DPCA is directly based on augmented images encoding the corresponding row membership, and the projection directions of S2DPCA are obtained by solving an eigenvalue problem of the augmented image covariance matrix. Computationally, S2DPCA is straightforward and comparable with 2DPCA. Like 2DPCA, S2DPCA completely avoids the singularity problem. Experiments on face recognition and handwritten digit recognition are presented to show the effectiveness of the proposed approach.
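For reference, the sketch below shows plain 2DPCA, the baseline that S2DPCA extends: the image covariance matrix is accumulated from (A − M)ᵀ(A − M) over training images A with mean image M, and images are projected row-wise onto its leading eigenvectors. The S2DPCA augmentation of images with row-membership encoding is not reproduced here, and the image sizes are placeholders.

```python
import numpy as np

def two_d_pca(images, k=5):
    """Plain 2DPCA: images has shape (n, h, w).  Returns the k leading eigenvectors
    (w x k) of the image covariance matrix G = mean_i (A_i - M)^T (A_i - M); each
    image is then projected row-wise as A @ V, giving an h x k feature matrix."""
    M = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - M
        G += D.T @ D
    G /= len(images)
    vals, vecs = np.linalg.eigh(G)            # ascending eigenvalue order
    return vecs[:, ::-1][:, :k]               # k leading eigenvectors

rng = np.random.default_rng(0)
imgs = rng.random((40, 32, 24))               # toy "face" images of size 32 x 24
V = two_d_pca(imgs, k=5)
features = imgs[0] @ V                        # 32 x 5 feature matrix for one image
print(V.shape, features.shape)
```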
19.
We propose a constrained EM algorithm for principal component analysis (PCA) using a coupled probability model derived from single standard factor analysis models with an isotropic noise structure. A single probabilistic PCA model, especially in the case where there is no noise, can find only a vector set that is a linear superposition of the principal components and requires postprocessing, such as diagonalization of symmetric matrices. By contrast, the proposed algorithm finds the actual principal components, which are sorted in descending order of eigenvalue, and requires no additional calculation or postprocessing. The method is easily applied to kernel PCA. It is also shown that the new EM algorithm is derived from a generalized least-squares formulation.
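For contrast with the constrained algorithm described above, here is a sketch of the standard zero-noise EM algorithm for PCA, which recovers only a basis of the principal subspace (an arbitrary linear superposition of the principal components) and therefore needs the kind of postprocessing step the paper's coupled, constrained EM avoids. Variable names, the toy data, and the postprocessing choice are ours.

```python
import numpy as np

def em_pca(X, k=2, n_iter=100, seed=0):
    """Zero-noise EM for PCA: alternately infer the latent coordinates Z and
    re-estimate the loading matrix W.  X has shape (dim, n_samples) and must be
    zero-mean.  W converges to a basis of the principal subspace, not to the
    ordered principal components themselves."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[0], k))
    for _ in range(n_iter):
        Z = np.linalg.solve(W.T @ W, W.T @ X)          # E-step: latent coordinates
        W = X @ Z.T @ np.linalg.inv(Z @ Z.T)           # M-step: loading matrix
    return W

rng = np.random.default_rng(1)
X = (rng.normal(size=(500, 4)) @ np.diag([3.0, 2.0, 0.3, 0.1])).T
X -= X.mean(axis=1, keepdims=True)
W = em_pca(X, k=2)

# postprocessing needed to get the actual ordered PCs: orthonormalize, then rotate
Q, _ = np.linalg.qr(W)
vals, vecs = np.linalg.eigh(Q.T @ np.cov(X) @ Q)
pcs = Q @ vecs[:, ::-1]
print(pcs.shape)
```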
20.
An adaptive learning algorithm for principal component analysis (total citations: 2, self-citations: 0, other citations: 2)
Liang-Hwa Chen, Shyang Chang, IEEE Transactions on Neural Networks, 1995, 6(5): 1255-1263
Principal component analysis (PCA) is one of the most general-purpose feature extraction methods. A variety of learning algorithms for PCA have been proposed. Many conventional algorithms, however, will either diverge or converge very slowly if the learning rate parameters are not properly chosen. In this paper, an adaptive learning algorithm (ALA) for PCA is proposed. By adaptively selecting the learning rate parameters, we show that the m weight vectors in the ALA converge to the first m principal component vectors at almost the same rate. Compared with Sanger's generalized Hebbian algorithm (GHA), the ALA can quickly find the desired principal component vectors while the GHA fails to do so. Finally, simulation results are included to illustrate the effectiveness of the ALA.
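The GHA baseline mentioned above can be written in a few lines; this is a minimal NumPy sketch of Sanger's rule with a fixed learning rate (the very setting in which convergence can be slow or fail), not of the ALA itself. The learning rate, epoch count, and toy data are arbitrary assumptions.

```python
import numpy as np

def gha(X, m=3, lr=0.005, n_epochs=100, seed=0):
    """Sanger's generalized Hebbian algorithm with a fixed learning rate:
    W <- W + lr * (y x^T - LT(y y^T) W), where LT keeps the lower triangle.
    The rows of W converge to the first m principal component vectors."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.normal(size=(m, X.shape[1]))
    for _ in range(n_epochs):
        for x in X:
            y = W @ x
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5)) @ np.diag([2.0, 1.5, 1.0, 0.3, 0.1])
X -= X.mean(axis=0)
W = gha(X, m=3)
vals, vecs = np.linalg.eigh(np.cov(X.T))
print(np.round(np.abs(W @ vecs[:, ::-1][:, :3]), 2))   # near-identity once converged
```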