Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
In this paper, a novel data projection method, local and global principal component analysis (LGPCA), is proposed for process monitoring. LGPCA is a linear dimensionality reduction technique that preserves both local and global information in the observation data. Besides preserving the global variance information of Euclidean space, as principal component analysis (PCA) does, LGPCA captures a linear embedding that preserves local structure, revealing meaningful low-dimensional information hidden in high-dimensional process data. LGPCA-based T2 (D) and squared prediction error (Q) statistic control charts are developed for on-line process monitoring. The validity and effectiveness of the LGPCA-based monitoring method are illustrated through simulation processes and the Tennessee Eastman process (TEP). The experimental results demonstrate that the proposed method effectively captures meaningful information hidden in the observations and shows superior process monitoring performance compared to regular monitoring methods.
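The T2 and Q statistics that these control charts monitor are standard in PCA-based process monitoring. A minimal sketch of the classical PCA version, which LGPCA extends with local-structure preservation, might look like the following (`pca_monitoring_stats` is a hypothetical helper name, not from the paper):

```python
import numpy as np

def pca_monitoring_stats(X_train, X_test, n_components=2):
    """Classical PCA-based Hotelling T2 and Q (SPE) monitoring statistics.

    A sketch of the standard baseline only; LGPCA additionally preserves
    local neighborhood structure when choosing the projection.
    """
    # Standardize using training-set statistics
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Xt = (X_train - mu) / sigma
    # Principal directions from the covariance eigendecomposition
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xt, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    lam = eigvals[order][:n_components]       # retained variances
    P = eigvecs[:, order][:, :n_components]   # loading matrix
    # Score new samples against the training model
    Xs = (X_test - mu) / sigma
    T = Xs @ P                                 # principal-component scores
    T2 = np.sum(T**2 / lam, axis=1)            # Hotelling T2 statistic
    residual = Xs - T @ P.T                    # part not explained by the model
    Q = np.sum(residual**2, axis=1)            # squared prediction error (SPE)
    return T2, Q
```

In practice, control limits for T2 and Q are set from the training data (e.g. via F- and chi-squared approximations), and a test sample exceeding either limit is flagged as a fault.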

2.
张洪祥  毛志忠 《控制工程》2011,18(2):244-247
For evaluation and decision problems in which attribute weights are completely unknown and attribute values are multidimensional time series, a hybrid evaluation and decision model combining an accelerated genetic algorithm with projection pursuit and multi-attribute decision making is proposed. The method first applies projection pursuit to reduce the dimensionality of the multidimensional time-series data attribute by attribute, mitigating the effects of the "curse of dimensionality" during data processing, and uses the accelerated genetic algorithm to determine the optimal projection direction as the attribute weights; for the resulting decisions with time-series characteristics...

3.
We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

4.
In this work, we discuss a recently proposed approach for supervised dimensionality reduction, the Supervised Distance Preserving Projection (SDPP), and we investigate its applicability to monitoring material properties from spectroscopic observations. Motivated by continuity preservation, the SDPP is a linear projection method where the proximity relations between points in the low-dimensional subspace mimic the proximity relations between points in the response space. Such a projection facilitates the design of efficient regression models and may also uncover useful information for visualisation. An experimental evaluation is conducted to show the performance of the SDPP and compare it with a number of state-of-the-art approaches for unsupervised and supervised dimensionality reduction. The regression step after projection is performed using computationally light models with low maintenance cost, such as Multiple Linear Regression and Locally Linear Regression with k-NN neighbourhoods. For the evaluation, a benchmark and a full-scale calibration problem are discussed. The case studies pertain to the estimation of a number of chemico-physical properties in diesel fuels and in light cycle oils, starting from near-infrared spectra. Based on the experimental results, we found that the SDPP leads to parsimonious projections that can be used to design light and yet accurate estimation models.

5.
Many problems in information processing involve some form of dimensionality reduction, such as face recognition, image/text retrieval, data visualization, etc. Typical linear dimensionality reduction algorithms include principal component analysis (PCA), random projection, locality-preserving projection (LPP), etc. These techniques are generally unsupervised, which allows them to model data in the absence of labels or categories. In this paper, we propose a semi-supervised subspace learning algorithm for image retrieval. In a relevance feedback-driven image retrieval system, the user-provided information can be used to better describe the intrinsic semantic relationships between images. Our algorithm is fundamentally based on LPP and can incorporate the user's relevance feedback. As the user's feedback accumulates, we can ultimately obtain a semantic subspace in which different semantic classes are best separated and retrieval performance is enhanced. We compared our proposed algorithm to PCA and the standard LPP. Experimental results on a large collection of images have shown the effectiveness and efficiency of our proposed algorithm.
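The LPP machinery this algorithm builds on reduces to a generalized eigenproblem over a neighborhood graph. A minimal unsupervised sketch, without the relevance-feedback extension described above (`lpp` is a hypothetical helper name; the heat-kernel affinity and regularization constant are illustrative choices):

```python
import numpy as np

def lpp(X, n_neighbors=5, n_components=2, t=1.0):
    """Minimal locality-preserving projection sketch.

    Builds a kNN graph with heat-kernel weights and solves the
    generalized eigenproblem  X^T L X a = lam X^T D X a  for the
    smallest eigenvalues, where L = D - W is the graph Laplacian.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances
    d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    # kNN adjacency with heat-kernel weights
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]  # skip self at position 0
        W[i, idx] = np.exp(-d2[i, idx] / t)
    W = np.maximum(W, W.T)                 # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                              # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])  # small ridge for stability
    # Generalized eigenproblem via B^{-1} A; smallest eigenvalues are kept
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(eigvals.real)
    P = eigvecs[:, order[:n_components]].real
    return X @ P                           # low-dimensional embedding
```

The semi-supervised variant in the abstract would additionally modify the graph weights W using the user's relevance feedback before solving the same eigenproblem.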

6.
Many of the computational intelligence techniques currently used do not scale well in data type or computational performance, so selecting the right dimensionality reduction technique for the data is essential. By employing a dimensionality reduction technique called representative dissimilarity to create an embedded space, large spaces of complex patterns can be simplified to a fixed‐dimensional Euclidean space of points. The only current suggestions as to how the representatives should be selected are principal component analysis, projection pursuit, and factor analysis. Several alternative representative strategies are proposed and empirically evaluated on a set of term vectors constructed from HTML documents. The results indicate that using a representative dissimilarity representation with at least 50 representatives can achieve a significant increase in classification speed, with a minimal sacrifice in accuracy, and when the representatives are selected randomly, the time required to create the embedded space is significantly reduced, also with a small penalty in accuracy. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 1093–1109, 2006.
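The representative dissimilarity embedding itself is simple: each pattern is mapped to its vector of distances to a fixed set of representatives. A sketch under the random-selection strategy the abstract reports as cheapest to build (`representative_dissimilarity` is a hypothetical name):

```python
import numpy as np

def representative_dissimilarity(X, n_representatives=50, rng=None):
    """Embed each sample as its vector of Euclidean distances to a
    randomly chosen set of representative samples, yielding a
    fixed-dimensional point for every input pattern."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=n_representatives, replace=False)
    R = X[idx]                                 # representative patterns
    # Distance from every sample to every representative
    return np.linalg.norm(X[:, None, :] - R[None, :, :], axis=2)
```

Any standard classifier can then be trained on the resulting fixed-dimensional embedding, which is where the reported classification speedup comes from.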

7.
Research on fault diagnosis based on an improved structure-preserving data dimensionality reduction method (cited by: 1; self-citations: 0; citations by others: 1)
韩敏  李宇  韩冰 《自动化学报》2021,47(2):338-348
Traditional dimensionality reduction methods based on kernel principal component analysis (KPCA) consider only global structure preservation when extracting effective feature information, neglecting the preservation of local neighborhood structure among samples. This paper proposes a feature extraction and dimensionality reduction method that improves the global-structure-preserving algorithm. The improved method incorporates the idea of kernel locality preserving projection (KLPP) from manifold learning into the objective function of kernel principal component analysis, so that the feature space after projection preserves not only the overall structure of the original sample space but also its similar local neighborhood structure, and thus contains richer feature information. The method applies simultaneous orthogonalization to avoid distortion of the local subspace structure and can display the low-dimensional results intuitively. The low-dimensional data are fed to a nearest-neighbor classifier, with recognition rate and clustering analysis results as evaluation metrics, and the method is applied to fault diagnosis. Simulations on diesel engine fault data generated with the AVL Boost software and on the Tennessee Eastman (TE) chemical process data verify the effectiveness of the proposed method.

8.
To effectively address the curse of dimensionality and the singular-value problem in traditional vector-representation-based document dimensionality reduction algorithms, a Web document classification algorithm based on tensor maximum margin projection is proposed. During dimensionality reduction, the algorithm fully exploits the structural and relational information of documents to improve its discriminative power for classification. Experimental results on the WebKB and 20NG datasets show that the algorithm outperforms other commonly used document classification algorithms.

9.
Existing supervised and semi-supervised dimensionality reduction methods utilize training data only with class labels associated with the data samples for classification. In this paper, we present a new algorithm called locality preserving and global discriminant projection with prior information (LPGDP) for dimensionality reduction and classification, considering both the manifold structure and the prior information, where the prior information includes not only the class label but also the misclassification of marginal samples. In the LPGDP algorithm, the overlap among the class-specific manifolds is discriminated by a global class graph, and a locality preserving criterion is employed to obtain the projections that best preserve the within-class local structures. The feasibility of the LPGDP algorithm has been evaluated in face recognition, object categorization and handwritten Chinese character recognition experiments. Experimental results show superior data modeling and classification performance compared to other techniques, such as linear discriminant analysis, locality preserving projection, discriminant locality preserving projection and marginal Fisher analysis.

10.
石松  陈云 《计算机工程》2014,(2):171-174
Projection pursuit can effectively address the curse of dimensionality in text classification, and optimizing the projection direction is the key problem that projection pursuit must solve. Traditional projection pursuit methods treat projection index optimization as a single-objective optimization problem, which degrades solution quality. To address this, a projection pursuit method based on multi-objective optimization is proposed. The distance between classes and the clustering compactness of data within classes are taken as two optimization objectives, the projection is extended to multiple dimensions, and a chaotic particle swarm optimization algorithm is used to find the optimal projection direction. Experiments on common text datasets determine the optimal projection index and dimensionality, and the classification results of different classification models are compared; the results show that the method effectively improves text classification performance.

11.
Recently, many dimensionality reduction algorithms, including local methods and global methods, have been presented. The representative local linear methods are locally linear embedding (LLE) and locality preserving projections (LPP), which seek to find an embedding space that preserves local information in order to explore the intrinsic characteristics of high-dimensional data. However, both of them still fail to deal well with sparsely sampled or noise-contaminated datasets, where the local neighborhood structure is critically distorted. On the contrary, principal component analysis (PCA), the most frequently used global method, preserves the total variance by maximizing the trace of the feature variance matrix. But PCA cannot preserve local information because it pursues maximal variance. In order to integrate locality and globality together and avoid the drawbacks of LLE and PCA, in this paper, inspired by the dimensionality reduction methods LLE and PCA, we propose a new dimensionality reduction method for face recognition, namely, unsupervised linear difference projection (ULDP). This approach can be regarded as the integration of a local approach (LLE) and a global approach (PCA), so that it has better performance and robustness in applications. Experimental results on the ORL, YALE and AR face databases show the effectiveness of the proposed method for face recognition.

12.
The particle swarm optimization algorithm and projection pursuit are first used to construct the learning matrix of a neural network, and neural network ensemble members are generated by a sample-reconstruction method based on negative correlation learning; the members are then combined using particle swarm optimization and projection pursuit regression to produce the ensemble output, yielding a sample-reconstruction neural network ensemble model based on particle swarm optimization and projection pursuit. The method is applied to monthly precipitation forecasting for the whole Guangxi region. Results show that it can effectively construct the neural network learning matrix from numerous weather factors, and that the ensemble achieves high prediction accuracy and good stability, with a degree of generalizability.

13.
Dimensionality reduction via orthogonal projection better preserves information related to the metric structure and improves face recognition performance. Building on spectral regression discriminant analysis (SRDA) and spectral regression kernel discriminant analysis (SRKDA), orthogonal SRDA (OSRDA) and orthogonal SRKDA (OSRKDA) dimensionality reduction algorithms are proposed. First, a method for solving the orthogonal discriminant vector set based on Cholesky decomposition is given; the SRDA and SRKDA projection vectors are then orthogonalized by this method. The approach is simple and easy to implement, and it overcomes the drawback that iterative methods for computing orthogonal discriminant vector sets are unsuitable for spectral regression (SR) dimensionality reduction. Experimental results on the ORL, Yale and PIE databases demonstrate the effectiveness and efficiency of the algorithms, which further improve discriminative power while reducing dimensionality effectively.
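The Cholesky route to an orthogonal vector set can be illustrated generically: for any full-column-rank projection matrix W, writing W^T W = L L^T makes the columns of W L^{-T} orthonormal, with no iteration required. A sketch under that assumption (`orthogonalize_cholesky` is a hypothetical name, not the paper's exact procedure):

```python
import numpy as np

def orthogonalize_cholesky(W):
    """Orthonormalize the columns of a full-column-rank projection
    matrix W via Cholesky decomposition.

    With G = W^T W = L L^T, the matrix Q = W L^{-T} satisfies
    Q^T Q = L^{-1} (L L^T) L^{-T} = I, and Q spans the same subspace
    as W (a Cholesky-based thin QR factorization).
    """
    G = W.T @ W                    # Gram matrix of the projection vectors
    L = np.linalg.cholesky(G)      # lower-triangular Cholesky factor
    return W @ np.linalg.inv(L).T  # orthonormal columns, same span
```

Because Q and W span the same subspace, applying this step after a spectral-regression solve changes only the basis of the projection, not the subspace it projects onto.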

14.
Similarity search usually encounters a serious problem in the high-dimensional space, known as the "curse of dimensionality." In order to speed up retrieval, most previous approaches reduce the dimensionality of the entire data set to a fixed lower value before building indexes (referred to as global dimensionality reduction (GDR)). More recent works focus on locally reducing the dimensionality of data to different values (called local dimensionality reduction (LDR)). In addition, random projection has been proposed as an approximate dimensionality reduction (ADR) technique to answer the approximate similarity search instead of the exact one. However, so far little work has formally evaluated the effectiveness and efficiency of GDR, LDR, and ADR for the range query. Motivated by this, in this paper, we propose general cost models for evaluating the query performance over the data sets reduced by GDR, LDR, and ADR, in light of which we introduce a novel (A)LDR method, Partitioning based on RANdomized Search (PRANS). It can achieve high retrieval efficiency with the guarantee of optimality given by the formal models. Finally, a B+-tree index is constructed over the reduced partitions for fast similarity search. Extensive experiments validate the correctness of our cost models on both real and synthetic data sets and demonstrate the efficiency and effectiveness of the proposed PRANS method.
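The ADR baseline discussed above, Gaussian random projection, can be sketched in a few lines; by the Johnson-Lindenstrauss lemma, it approximately preserves pairwise distances with high probability when the target dimension is large enough (`random_projection` is a hypothetical name):

```python
import numpy as np

def random_projection(X, target_dim, rng=None):
    """Project n x d data onto target_dim dimensions with a random
    Gaussian matrix scaled by 1/sqrt(target_dim), so that expected
    squared pairwise distances are preserved."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    R = rng.standard_normal((d, target_dim)) / np.sqrt(target_dim)
    return X @ R
```

Because the projection is data-independent, it is the cheapest of the three reduction families to construct, at the price of answering range queries only approximately.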

15.
Principal component analysis (PCA) is an unsupervised dimensionality reduction method. However, existing methods do not account for differences among samples and cannot jointly extract the important information in the samples, which limits their performance. To address these problems, a self-paced sparse optimal-mean principal component analysis method is proposed. The model defines the loss function with the L2,1 norm, constrains the projection matrix with the L2,1 norm as a regularization term, and treats the mean as a variable optimized during iteration, so that important features can be selected consistently, improving...

16.
Fisher discriminant analysis gives unsatisfactory results if points in the same class exhibit within-class multimodality, and it fails to produce non-negative projection vectors. In this paper, we focus on the newly formulated supervised locality preserving dimensionality reduction problem based on within- and between-class scatters, and propose an effective dimensionality reduction algorithm, namely, Multiplicative Updates based non-negative Discriminative Learning (MUNDL), which optimally seeks two non-negative embedding transformations with high preservation and discrimination powers for two data sets in different classes, such that nearby sample pairs in the original space become compact in the learned embedding space, under which the projections of the original data in different classes can be appropriately separated from each other. We also show that MUNDL can be easily extended to nonlinear dimensionality reduction scenarios by employing the standard kernel trick. We verify the feasibility and effectiveness of MUNDL through extensive data visualization and classification experiments. Numerical results on benchmark UCI and real-world datasets show that the MUNDL method tends to capture the intrinsic local and multimodal structure characteristics of the given data and outperforms some established dimensionality reduction methods, while being much more efficient.
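MUNDL's multiplicative updates follow the same style as the classic Lee-Seung rules for non-negative matrix factorization, which keep factors non-negative without any explicit projection step. As an illustration of that update style only, not of the MUNDL objective itself:

```python
import numpy as np

def nmf(V, rank, n_iter=200, rng=None):
    """Lee-Seung multiplicative updates for V ~= W H with W, H >= 0.

    Each factor is multiplied elementwise by a ratio of non-negative
    terms, so non-negativity is preserved automatically at every step;
    this is the update mechanism that 'multiplicative updates based'
    methods such as MUNDL build on.
    """
    rng = np.random.default_rng(rng)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3   # strictly positive init
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update H, stay >= 0
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # update W, stay >= 0
    return W, H
```

The small epsilon in each denominator is a standard numerical guard against division by zero; the updates monotonically decrease the Frobenius reconstruction error.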

17.
To address image degradation caused by raindrops adhering to a lens or glass surface, a multi-stage progressive image raindrop removal method is proposed. The entire raindrop removal process is decomposed into several more tractable stages. In each stage, a multi-scale fusion encoder-decoder network is designed to learn raindrop features, and multi-scale dilated convolutions with gated recurrent units are built to refine the internally propagated spatial features. A channel attention mechanism without dimensionality reduction is then introduced to extract channel information under specific spatial features. Finally, to strengthen information exchange between the parts of each stage, a cross-stage feature fusion mechanism is adopted: lateral connections are added between the encoder-decoder networks of each stage to pass feature information laterally, and a supervised attention module is inserted between stages to enhance inter-stage information transfer, progressively achieving raindrop removal. Experiments show that the method removes raindrops effectively.

18.
Locality-preserved maximum information projection (cited by: 3; self-citations: 0; citations by others: 3)
Dimensionality reduction is commonly involved in the domains of artificial intelligence and machine learning. Linear projection of features is of particular interest for dimensionality reduction since it is simple to calculate and to analyze analytically. In this paper, we propose an essentially linear projection technique, called locality-preserved maximum information projection (LPMIP), to identify the underlying manifold structure of a data set. LPMIP considers both the within-locality and the between-locality in manifold learning. Equivalently, the goal of LPMIP is to preserve the local structure while maximizing the out-of-locality (global) information of the samples simultaneously. Different from principal component analysis (PCA), which aims to preserve global information, and locality-preserving projections (LPP), which favors preserving the local structure of the data set, LPMIP seeks a tradeoff between the global and local structures, adjusted by a parameter alpha, so as to find a subspace that detects the intrinsic manifold structure for classification tasks. Computationally, by constructing the adjacency matrix, LPMIP is formulated as an eigenvalue problem. LPMIP yields orthogonal basis functions and completely avoids the singularity problem that exists in LPP. Further, we develop an efficient and stable LPMIP/QR algorithm for implementing LPMIP, especially on high-dimensional data sets. Theoretical analysis shows that conventional linear projection methods such as (weighted) PCA, maximum margin criterion (MMC), linear discriminant analysis (LDA), and LPP can be derived from the LPMIP framework by setting different graph models and constraints. Extensive experiments on face, digit, and facial expression recognition show the effectiveness of the proposed LPMIP method.

19.
Within the Semi-Supervised Dimensionality Reduction (SSDR) framework, a semi-supervised dimensionality reduction algorithm based on pairwise constraints, SCSSDR, is proposed. Pairwise samples are used to construct the graph, preserving local structure while also accounting for the global structure of the data. By optimizing the objective function, samples of the same class become more compact and samples of different classes more dispersed. Quantitative analysis on UCI datasets shows that the method outperforms PCA and traditional manifold learning algorithms; further classification experiments on UCI and hyperspectral datasets show that the method is well suited to feature extraction for classification purposes.

20.
Two discriminative feature fusion methods are proposed: principal component discriminant analysis and kernel principal component discriminant analysis. Based on principal component analysis and the maximum margin criterion, a multi-objective programming model is constructed as the objective of feature fusion. The model is then converted into a single-objective programming problem and solved by eigendecomposition. In addition, an approximately block-diagonal kernel matrix K is partitioned into c submatrices (where c is the number of classes in the dataset), and their eigenvalues and eigenvectors are computed. On this basis, a mapping matrix α is obtained through vector algebra; when the kernel matrix K is projected onto α, the similarity information of same-class samples is preserved to the greatest extent. The experiments in this paper confirm the effectiveness of both methods.

