Similar Documents
1.
2.
Principal Component Analysis (PCA) is a well-known technique whose aim is to synthesize large amounts of numerical data by means of a small number of unobserved variables, called components. In this paper, an extension of PCA to interval-valued data is proposed. The method, called Midpoint Radius Principal Component Analysis (MR-PCA), recovers the underlying structure of interval-valued data by using both the midpoint (center) and radius (a measure of the interval width) information. To show how MR-PCA works, the results of a simulation study and two applications to chemical data are presented.
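A minimal sketch of the midpoint/radius idea, assuming interval data supplied as lower and upper bounds: PCA is fitted to the midpoint matrix and the radii are projected onto the same components. This is an illustrative simplification, not the exact MR-PCA formulation of the paper; the data are made up.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative sketch only: interval data given as (lower, upper) bounds.
rng = np.random.default_rng(0)
lower = rng.normal(size=(50, 4))
upper = lower + rng.uniform(0.1, 1.0, size=(50, 4))

midpoints = (lower + upper) / 2.0   # interval centers
radii = (upper - lower) / 2.0       # half of the interval widths

pca = PCA(n_components=2).fit(midpoints)
mid_scores = pca.transform(midpoints)      # component scores of the centers
rad_scores = radii @ pca.components_.T     # radii expressed in the same basis

print(mid_scores[:3])
print(rad_scores[:3])
```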

3.
4.
The purpose of this paper was to evaluate a multivariate strategy for handling time-dependent kinetic data during formulation development. Dissolution profiles were evaluated by the Weibull equation, multiple linear regression (MLR), and principal component analysis (PCA), alone and in combination. In addition, soft independent modeling of class analogy (SIMCA) was performed. Employing a typical kinetic model for solid formulations (here the Weibull equation) revealed difficulties with model adaptation, resulting in increased model standard deviation and thus failure to identify significant variables. In general, the selection of a kinetic model is crucial for finding the significant formulation variables. MLR models of the individual time points described the dissolution rates as functions of the formulation variables with good precision, and the resulting prediction models made it easy to evaluate effects on the entire dissolution profile. The use of PCA/MLR (PCR) reduced the influence of noise from single measurements in a kinetic profile, since these models derive statistical parameters representing the profile without depending on a physicochemically modeled profile. PCA reduced the eight time-point variables to two latent variables (principal components), simplifying the classification of formulations and new samples as well as avoiding unwanted effects of model non-linearities between the factors and responses (model error). The group membership of new samples was demonstrated by SIMCA.
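As a hedged illustration of the kinetic-model route mentioned above, the sketch below fits a common two-parameter Weibull release model to a hypothetical dissolution profile; the function name, parameter values and data are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull dissolution model in a common form:
#   F(t) = Fmax * (1 - exp(-(t / td) ** b))
# td is the time scale, b the shape parameter, Fmax the plateau.
def weibull_release(t, fmax, td, b):
    return fmax * (1.0 - np.exp(-(t / td) ** b))

# Hypothetical dissolution profile (time in minutes, % released).
t = np.array([5, 10, 15, 20, 30, 45, 60, 90], dtype=float)
f = np.array([12, 25, 37, 47, 62, 78, 86, 94], dtype=float)

params, _ = curve_fit(weibull_release, t, f, p0=[100.0, 30.0, 1.0])
print("Fmax=%.1f  td=%.1f  b=%.2f" % tuple(params))
```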

5.
In this study, an efficient method for extracting and selecting features of raw electroencephalogram (EEG) signals based on the one-dimensional local binary pattern (1D-LBP) is presented. Because correct decisions, particularly in diagnosing diseases such as epilepsy, are of paramount importance, a practical approach is designed to extract the optimal features of EEG signals. The proposed method comprises two main steps. First, features are extracted and selected with a novel improved 1D-LBP model and then normalized through principal component analysis (PCA); this combination of 1D-LBP neighboring models and PCA is denoted 1D-LBPc2p. Second, classification is performed with two strong ensemble algorithms, random forest and rotation forest. The proposed methods are compared with 13 previously reported approaches, including uniform and non-uniform 1D-LBP. The results demonstrate that the proposed combined method is both efficient and superior, providing higher accuracy than the other models and classifiers. The method can also be regarded as a new feature extraction and selection technique for other kinds of EEG signals and data sets.
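For orientation, here is a sketch of the plain 1D-LBP feature extraction (neighbour-versus-centre comparisons turned into binary codes, summarized as a histogram). The improved neighbouring model (1D-LBPc2p) and the PCA step of the paper are not reproduced, and the radius and signal below are assumptions.

```python
import numpy as np

def lbp_1d(signal, radius=4):
    """Basic one-dimensional local binary pattern.

    Each sample's 2*radius neighbours are compared with the centre value;
    neighbours >= the centre contribute a 1 to the binary code.  This is
    the plain 1D-LBP, not the improved 1D-LBPc2p model of the paper."""
    codes = []
    for i in range(radius, len(signal) - radius):
        neighbours = np.r_[signal[i - radius:i], signal[i + 1:i + 1 + radius]]
        bits = (neighbours >= signal[i]).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    # Histogram of codes is the feature vector for this signal segment.
    n_bins = 2 ** (2 * radius)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist

eeg_segment = np.random.default_rng(1).normal(size=256)  # stand-in EEG segment
features = lbp_1d(eeg_segment)
print(features.shape)  # (256,) when radius = 4
```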

6.
Functional data and profiles are characterized by complex relationships between a response and several predictor variables. Fortunately, statistical process control methods provide a solid ground for monitoring the stability of these relationships over time. This study focuses on the monitoring of 2-dimensional geometric specifications. Existing approaches deploy regression models with spatially autoregressive error terms combined with control charts to monitor the parameters, but they are designed around idealistic assumptions that are easily violated in practice. In this paper, independent component analysis (ICA) is used in combination with a statistical process control method as an alternative scheme for phase II monitoring of geometric profiles when the error term is non-normal. The performance of this method is evaluated and compared with a regression- and PCA-based approach through simulation of the average run length criterion. The results reveal that the proposed ICA-based approach is robust against non-normality in the in-control analysis, and its out-of-control performance is on par with that of the PCA-based method in the case of normal and near-normal error terms.
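A rough sketch of the general idea, assuming profile observations stored row-wise: independent components are estimated from in-control data, and the component scores of new profiles are charted with simple 3-sigma limits. The data, shift and limits are hypothetical and far simpler than the spatially correlated profiles and ARL study of the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
phase1 = rng.standard_t(df=5, size=(200, 30))            # non-normal in-control profiles
shift = np.r_[np.zeros(10), 2.0 * np.ones(10)][:, None]  # hypothetical mean shift
phase2 = rng.standard_t(df=5, size=(20, 30)) + shift     # shift after observation 10

ica = FastICA(n_components=3, random_state=0).fit(phase1)
scores1 = ica.transform(phase1)
mu, sigma = scores1.mean(axis=0), scores1.std(axis=0)

scores2 = ica.transform(phase2)
out_of_control = np.abs(scores2 - mu) > 3 * sigma         # flag per component
print(np.where(out_of_control.any(axis=1))[0])            # flagged observations
```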

7.
Technometrics, 2013, 55(4): 392–403
Principal components analysis (PCA) is often used in the analysis of multivariate process data to identify important combinations of the original variables on which to focus for more detailed study. However, PCA and other related projection techniques from the standard multivariate repertoire are not explicitly designed to address or to exploit the strong autocorrelation and temporal cross-correlation structures that are often present in multivariate process data. Here we propose two alternative projection techniques that do focus on the temporal structure in such data and that therefore produce components that may have some analytical advantages over those resulting from more conventional multivariate methods. As in PCA, both of our suggested methods linearly transform the original p-variate time series into uncorrelated components; however, unlike PCA, they concentrate on deriving components with particular temporal correlation properties, rather than those with maximal variance. The first technique finds components that exhibit distinctly different autocorrelation structures via modification of a signal-noise decomposition method used in image analysis. The second method draws on ideas from common PCA to produce components that are not only uncorrelated as in PCA, but that also have approximately zero temporally lagged cross-correlations for all time lags. We present the technical details for these two methods, assess their performance through simulation studies, and illustrate their use on multivariate output measures from a fluidized catalytic cracking unit used in petrochemical production, contrasting the results obtained with those from standard PCA.

8.
This paper proposes a new method for exploratory analysis and the interpretation of latent structures. The approach is named missing-data methods for exploratory data analysis (MEDA). MEDA can be applied in combination with several models, including Principal Components Analysis (PCA), Factor Analysis (FA) and Partial Least Squares (PLS). It can be seen as a substitute for rotation methods with better associated properties: it is more accurate than rotation methods in detecting relationships between pairs of variables, it is robust to overestimation of the number of PCs, and it does not depend on the normalization of the loadings. MEDA is useful for inferring the structure in the data and for interpreting the contribution of each latent variable. The interpretation of PLS models with MEDA, including variable selection, may be especially valuable for the chemometrics community. The use of MEDA with PCA and PLS models is demonstrated with several simulated and real examples.

9.
Given the limited research on simulation algorithms for variable-refrigerant-flow refrigeration systems and their lack of generality, this paper proposes an iterative algorithm with a clear physical interpretation and general applicability (referred to as ALG-I), together with a variant (ALG-II). Operational criteria and procedures are given for selecting the iteration variables and determining the convergence criteria, and the key steps of the algorithm flow are described in detail. ALG-I and ALG-II have similar characteristics but require different simulation times. Simulation results show that the proposed algorithms can handle variable-flow refrigeration systems with an arbitrary number of evaporators, and that the simulation time does not increase sharply with the number of evaporators, indicating that the algorithms are general with respect to the evaporator count. From the standpoint of the fast response required for control analysis, ALG-II outperforms ALG-I for single-evaporator (one-to-one) systems, whereas ALG-I outperforms ALG-II for multi-evaporator (one-to-many) systems. Finally, the system's reasonable response to continuously varying control variables (including expansion valve opening and compressor speed) shows that the proposed algorithms can be used effectively for energy-consumption and control simulation of VRF systems.

10.
The theory and an algorithm for uncorrelated linear discriminant analysis (ULDA) are introduced and applied to explore metabolomics data. ULDA is a supervised method for feature extraction (FE), discriminant analysis (DA) and biomarker screening based on the Fisher criterion function. While principal component analysis (PCA) searches for directions of maximum variance in the data, ULDA seeks linearly combined variables called uncorrelated discriminant vectors (UDVs). The UDVs maximize the separation among different classes in terms of the Fisher criterion. The performance of ULDA is evaluated and compared with PCA, partial least squares discriminant analysis (PLS-DA) and target projection discriminant analysis (TP-DA) for two datasets, one simulated and one real from a metabolomic study. ULDA showed better discriminatory ability than PCA, PLS-DA and TP-DA. The shortcomings of PCA, PLS-DA and TP-DA are attributed to interference from linear correlations in the data. PLS-DA and TP-DA performed successfully for the simulated data, but PLS-DA was slightly inferior to ULDA for the real data. ULDA successfully extracted optimal features for discriminant analysis and revealed potential biomarkers. Furthermore, by means of cross-validation, the classification model obtained by ULDA showed better predictive ability than those from PCA, PLS-DA and TP-DA. In conclusion, ULDA is a powerful tool for revealing discriminatory information in metabolomics data.
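To make the Fisher-criterion idea concrete, here is a plain Fisher discriminant sketch (generalized eigenproblem of between-class versus within-class scatter) on synthetic two-class data; the additional uncorrelatedness constraint that defines ULDA is deliberately omitted, and all data are made up.

```python
import numpy as np

def fisher_directions(X, y, n_dirs=1):
    """Discriminant directions from the Fisher criterion, i.e. leading
    eigenvectors of pinv(Sw) @ Sb.  ULDA additionally constrains the
    extracted vectors to be mutually uncorrelated; omitted here."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_dirs]]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(1.5, 1, (40, 5))])
y = np.r_[np.zeros(40), np.ones(40)]
print(fisher_directions(X, y).ravel())
```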

11.
W. Tong, Strain, 2013, 49(4): 313–334
Digital image correlation (DIC) metrology has been increasingly used in a wide range of experimental mechanics research and applications. The DIC algorithms used so far are, however, limited mostly to the classic forward additive Lucas–Kanade type. In this paper, a survey is given of the formulation of other types of Lucas–Kanade DIC algorithms that have appeared in the computer vision, robotics and medical image analysis literature. Concise notations consistent with the finite deformation kinematics analysis in continuum mechanics are used to describe all Lucas–Kanade DIC algorithms. An intermediate image is introduced as a frame of reference to clarify the so-called compositional algorithms in a two-frame DIC analysis. Explicit examples of the additive and compositional updating of deformation parameters are given for affine deformation mapping. Extensions of these algorithms to the so-called consistent or symmetric types are also presented. The equivalence of the final numerical solutions obtained with additive, compositional and inverse compositional algorithms is shown analytically for the case of affine deformation mapping. In particular, the inverse compositional algorithm for affine image subset deformation is highlighted for its superior computational efficiency. While computationally less efficient, consistent and symmetric algorithms may be more robust and less biased, and their potential in experimental mechanics applications remains to be explored. The unified formulation of these Lucas–Kanade DIC algorithms collected together in this paper can serve as a useful guide for researchers in experimental mechanics to further evaluate the merits as well as the limitations of these non-classic algorithms for image-based precision displacement measurement applications.
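The contrast between additive and compositional updating can be illustrated with the affine warp alone. The sketch below composes homogeneous 3x3 warp matrices for an inverse-compositional parameter update; the parameter values are made up, and image gradients and the optimization loop are left out.

```python
import numpy as np

def affine_matrix(p):
    """Affine warp parameters p = (a11, a12, a21, a22, tx, ty) as a 3x3
    homogeneous matrix, so that warps compose by matrix products."""
    a11, a12, a21, a22, tx, ty = p
    return np.array([[1 + a11, a12,     tx],
                     [a21,     1 + a22, ty],
                     [0.0,     0.0,     1.0]])

def inverse_compositional_update(p, dp):
    """One inverse-compositional update for an affine warp:
    W(p) <- W(p) o W(dp)^(-1).  Additive updating would use p + dp."""
    W_new = affine_matrix(p) @ np.linalg.inv(affine_matrix(dp))
    return np.array([W_new[0, 0] - 1, W_new[0, 1],
                     W_new[1, 0],     W_new[1, 1] - 1,
                     W_new[0, 2],     W_new[1, 2]])

p  = np.array([0.02, 0.01, -0.01, 0.03, 1.5, -0.7])   # current warp estimate
dp = np.array([0.001, 0.0, 0.0, 0.001, 0.05, 0.02])   # increment from the solver
print(inverse_compositional_update(p, dp))
```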

12.
The FEM is the main tool used for structural analysis. When the design of the mechanical system involves uncertain parameters, coupling the FEM with reliability analysis algorithms makes it possible to compute the failure probability of the system. However, this coupling leads to successive finite element analyses of parametric models and hence to a high computational effort. Over the past years, model reduction techniques have been developed in order to reduce the computational requirements of the numerical simulation of complex models. The objective of this work is to propose an efficient methodology to compute the failure probability for a multi-material elastic structure, where the Young moduli are considered as uncertain variables. A proper generalized decomposition algorithm is developed to compute the solution of the parametric multi-material model. This parametrized solution is used in conjunction with a first-order reliability method to compute the failure probability of the structure. Applications to multilayered structures in two-dimensional plane elasticity are presented.
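To recall what the first-order reliability method computes, here is a small sketch of the HL-RF iteration for a hypothetical limit state written directly in standard normal space. In the paper the limit state would instead be evaluated through the PGD parametric solution of the structure, so the function g below is purely an assumption.

```python
import numpy as np
from scipy.stats import norm

def g(u):
    # Hypothetical linear limit state in standard normal space; failure when g <= 0.
    return 3.0 - u[0] - 0.5 * u[1]

def grad(fun, u, h=1e-6):
    # Central finite-difference gradient.
    return np.array([(fun(u + h * e) - fun(u - h * e)) / (2 * h)
                     for e in np.eye(len(u))])

u = np.zeros(2)
for _ in range(50):                       # HL-RF fixed-point iteration
    gu, dg = g(u), grad(g, u)
    u = dg * (dg @ u - gu) / (dg @ dg)    # project onto the linearised surface
beta = np.linalg.norm(u)                  # reliability index
print("beta = %.3f, Pf ~ %.4g" % (beta, norm.cdf(-beta)))
```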

13.
Owing to the variability and unknowns in both material properties and predictive models for creep crack growth (CCG) rates, it is difficult to predict the failure of a component precisely. A failure-strain-constraint-based transient and steady-state CCG model (the NSW model), modified using probabilistic techniques, has been employed to predict CCG using uniaxial data as the basic material property. In this paper the influence of scatter in the uniaxial creep properties, the parameter C*, and the creep crack initiation and growth rates is examined using probabilistic methods. Using uniaxial and CCG properties of C-Mn steel at 360 °C, a method is developed which takes into account the scatter of the data and its sensitivity to the correlating parameters employed. It is shown that, for improved prediction in components containing cracks, the NSW crack growth model would benefit from a probabilistic analysis. This should be performed by considering the experimental scatter in failure strain and in the creep stress index, and the uncertainty in estimating the C* parameter.
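A hedged illustration of propagating data scatter through a crack growth correlation: the sketch uses a generic power-law relation da/dt = D·(C*)^phi with a lognormally scattered coefficient as a stand-in for the probabilistic NSW treatment of the paper; all numbers and distributions are hypothetical.

```python
import numpy as np

# Generic power-law CCG correlation with a scattered coefficient D,
# used only to illustrate Monte Carlo propagation of material scatter.
rng = np.random.default_rng(9)
phi = 0.85                                                      # assumed exponent
D = rng.lognormal(mean=np.log(2e-2), sigma=0.4, size=10_000)    # scattered coefficient
c_star = 5e-4                                                   # assumed C*, MPa.m/h

da_dt = D * c_star ** phi                                       # crack growth rate, mm/h
print("median %.3e, 95th percentile %.3e mm/h"
      % (np.median(da_dt), np.percentile(da_dt, 95)))
```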

14.
Principal components regression (PCR) is applied to the dynamic inferential estimation of plant outputs from highly correlated data. A genetic algorithm (GA) approach is developed for the optimal selection of subsets from the available measurement variables, thereby providing a method of identifying nonessential elements. The theoretical link between principal components analysis (PCA) and state-space modelling is employed to identify a measurement equation involving the GA-selected subset, which is then used for inferential estimation of the omitted variables. These techniques are successfully demonstrated for the inferential estimation of outputs from a validated industrial benchmark simulation of an overheads condenser and reflux drum model (OCRD).
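A minimal sketch of the PCR building block, with the GA replaced by a single hypothetical candidate subset: the cross-validated R² of the PCR model built on that subset is the kind of fitness a GA would maximise. The data, subset and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 12))                               # stand-in measurements
y = X[:, [0, 3, 7]] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

subset = [0, 1, 3, 7, 9]                                     # e.g. one GA chromosome
pcr = make_pipeline(PCA(n_components=3), LinearRegression()) # PCA scores -> regression
fitness = cross_val_score(pcr, X[:, subset], y, cv=5).mean() # fitness a GA would rank
print("CV R^2 for this subset: %.3f" % fitness)
```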

15.
Clustering and feature selection using sparse principal component analysis (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, we study the application of sparse principal component analysis (PCA) to clustering and feature selection problems. Sparse PCA seeks sparse factors, or linear combinations of the data variables, explaining a maximum amount of variance in the data while having only a limited number of nonzero coefficients. PCA is often used as a simple clustering technique, and sparse factors allow us here to interpret the clusters in terms of a reduced set of variables. We begin with a brief introduction to and motivation for sparse PCA and detail our implementation of the algorithm in d'Aspremont et al. (SIAM Rev. 49(3):434–448, 2007). We then apply these results to some classic clustering and feature selection problems arising in biology.
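A small sketch of the sparse-factor idea on synthetic data, using scikit-learn's l1-penalised SparsePCA rather than the semidefinite relaxation of d'Aspremont et al. referenced in the abstract; the data and penalty value are assumptions.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 20))
X[:, :3] += 3 * rng.normal(size=(100, 1))     # a small block of correlated variables

# Each sparse component loads on only a few variables, which makes the
# resulting "clusters" of variables easier to interpret than dense PCA loadings.
spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)
print(np.round(spca.components_, 2))          # many loadings shrink to exactly zero
```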

16.
This paper describes the development of a data-driven advance warning system for the onset of loss of separation in an industrial distillation column. The system would enable preventive actions to avoid several hours of bad operation and subsequent recovery of the process. Data of more than 2 years of process operation were used to identify and validate various monitoring systems based on both static principal component analysis (PCA) and dynamic PCA. Despite the presence of autocorrelation in the data, only minor differences in advance warning were observed between PCA and dynamic PCA. The developed system provides warnings for 35% to 45% of the observed periods of bad column operation, with respective advance warning times of 16 and 6 minutes. It proves a valuable additional tool to monitor the operation of the distillation column and avoid losses of product, with the potential of reducing bad operation (and the associated costs) by up to 45% and substantially improving overall process reliability.
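For context, this is roughly what a basic static-PCA monitoring scheme looks like: a model fitted on normal-operation data, with Hotelling's T² and the squared prediction error (SPE/Q) computed for new observations. The data, limits and thresholds below are assumptions and are much simpler than the industrial system described.

```python
import numpy as np
from scipy.stats import f as f_dist
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
X_noc = rng.normal(size=(1000, 8))            # stand-in normal operating data
mu, sd = X_noc.mean(0), X_noc.std(0)
Z = (X_noc - mu) / sd                         # autoscaled training data

pca = PCA(n_components=3).fit(Z)
lam = pca.explained_variance_                 # variances of the retained scores

def monitor(x_new):
    """Return (T^2, SPE) for one new observation."""
    z = (x_new - mu) / sd
    t = pca.transform(z[None, :])[0]
    t2 = np.sum(t ** 2 / lam)                                     # Hotelling's T^2
    spe = np.sum((z - pca.inverse_transform(t[None, :])[0]) ** 2) # residual (Q) statistic
    return t2, spe

n, a = X_noc.shape[0], 3
t2_lim = a * (n - 1) * (n + 1) / (n * (n - a)) * f_dist.ppf(0.99, a, n - a)
print(monitor(rng.normal(size=8) + np.r_[2.5, np.zeros(7)]), "T2 limit:", round(t2_lim, 2))
```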

17.
Given the relevance of principal component analysis (PCA) to the treatment of spectrometric data, we have evaluated the potential and limitations of this useful statistical approach for harvesting information from large sets of X-ray photoelectron spectroscopy (XPS) spectra. Examples highlight the contribution of PCA to data treatment by comparing the results of this analysis with those obtained by the usual XPS quantification methods. PCA was shown to improve the identification of chemical shifts of interest and to reveal correlations between peak components. First attempts to use the method led to poor results, which mainly reflected the separation between series of samples analyzed at different times. To weaken the effect of such variations of minor interest, a data normalization strategy was developed and tested. A second issue was encountered with spectra suffering from even a slightly inaccurate binding energy scale correction: minor shifts of energy channels lead to the PCA being performed on incorrect variables and consequently to misleading information. In order to improve the energy scale correction and to speed up this step of data pretreatment, a data processing method based on PCA was used. Finally, the overlap of different sources of variation was studied. Since the intensity of a given energy channel consists of electrons from several origins, having suffered inelastic collisions (background) or not (peaks), the PCA approach cannot compare them separately, which may lead to confusion or loss of information. By extracting the peaks from the background and treating them as new variables, the effect of the elemental composition could be taken into account in the case of spectra with very different backgrounds. In conclusion, PCA is a very useful diagnostic tool for the interpretation of XPS spectra, but it requires careful and appropriate data pretreatment.
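A minimal sketch of one of the pretreatment ideas discussed, assuming spectra stored row-wise: each spectrum is normalised to unit total intensity before PCA so that session-to-session intensity differences do not dominate the first components. The simulated spectra are hypothetical, and the energy-scale realignment step is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# Hypothetical spectra: one row per spectrum, one column per binding-energy channel.
spectra = rng.gamma(shape=2.0, scale=50.0, size=(30, 400))

spectra_norm = spectra / spectra.sum(axis=1, keepdims=True)  # area (total-intensity) normalisation
scores = PCA(n_components=3).fit_transform(spectra_norm)
print(scores.shape)   # (30, 3)
```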

18.
This article describes an effective human face recognition algorithm. Even though principal component analysis (PCA) is one of the most common feature extraction methods, it is not suitable for implementing a real-time embedded system for face recognition because a large computational load and memory capacity are required. To overcome this problem, we employ the incremental two-directional two-dimensional PCA (I(2D)2PCA), which combines the (2D)2PCA, demanding much less computational complexity than conventional PCA, with the incremental PCA (IPCA), which adapts the eigenspace using only each new incoming sample without reusing all of the previously trained data. Furthermore, the modified census transform (MCT), a local normalization method useful for real-world application and implementation in an embedded system, is adopted to provide robustness to illumination variations. To achieve better recognition accuracy with less computational load, the processed features are classified by the compressive sensing approach using ℓ2-minimization. Experimental results on the Yale Face Database B show that the described system, using the ℓ2-minimization-based classification method on input data processed by the I(2D)2PCA and the MCT, provides efficient and robust face recognition. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 133–139, 2013
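The (2D)2PCA core is easy to state: projection matrices come from the row- and column-direction image covariance matrices, and each image A is reduced to U^T A V. The sketch below shows just that step on random stand-in images; the incremental update (I(2D)2PCA), the MCT illumination normalisation and the classifier are omitted.

```python
import numpy as np

def two_directional_2dpca(images, k_rows=5, k_cols=5):
    """(2D)^2 PCA sketch: projection matrices from row- and column-direction
    image covariance matrices; each image A is reduced to U^T A V."""
    mean = images.mean(axis=0)
    centred = images - mean
    G_row = sum(a.T @ a for a in centred) / len(images)   # (width, width)
    G_col = sum(a @ a.T for a in centred) / len(images)   # (height, height)
    _, V = np.linalg.eigh(G_row)
    _, U = np.linalg.eigh(G_col)
    V = V[:, ::-1][:, :k_cols]     # leading right-projection directions
    U = U[:, ::-1][:, :k_rows]     # leading left-projection directions
    features = np.stack([U.T @ a @ V for a in images])
    return features, U, V

faces = np.random.default_rng(8).normal(size=(40, 32, 32))  # hypothetical face images
feats, U, V = two_directional_2dpca(faces)
print(feats.shape)   # (40, 5, 5)
```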

19.
This paper presents the formulation of numerical algorithms for the solution of the closest-point projection equations that appear in typical implementations of return mapping algorithms in elastoplasticity. The main motivation behind this work is to avoid the poor global convergence properties of a straight application of a Newton scheme in the solution of these equations, the so-called Newton-CPPM. The mathematical structure behind the closest-point projection equations identified in Part I of this work delineates clearly different strategies for the successful solution of these equations. In particular, primal and dual closest-point projection algorithms are proposed, in non-augmented and augmented Lagrangian versions for the imposition of the consistency condition. The primal algorithms involve a direct solution of the original closest-point projection equations, whereas the dual schemes involve a two-level structure by which the original system of equations is staggered, with the imposition of the consistency condition alone driving the iterative process. Newton schemes in combination with appropriate line search strategies are considered, resulting in the desired asymptotically quadratic local rate of convergence and the sought global convergence character of the iterative schemes. These properties, together with the computational performance of the different schemes, are evaluated through representative numerical examples involving different models of finite-strain plasticity. In particular, the avoidance of the large regions of no convergence in the trial state observed in the standard Newton-CPPM is clearly illustrated.
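As a toy illustration of the closest-point projection with safeguarded Newton iterations, the sketch below solves the scalar consistency equation of von Mises plasticity with nonlinear isotropic hardening, adding a simple backtracking line search on the residual. The material data are hypothetical, and this scalar setting is far simpler than the finite-strain, multi-equation systems treated in the paper.

```python
import numpy as np

G, sig_y0, H, delta = 80e3, 250.0, 1e3, 50.0        # hypothetical material data (MPa)

def yield_stress(alpha):
    # Saturation-type isotropic hardening law (assumed for illustration).
    return sig_y0 + H * alpha + delta * (1 - np.exp(-alpha))

def residual(dg, q_trial, alpha_n):
    # Scalar consistency condition f(dg) = 0 of the radial return mapping.
    return q_trial - 3.0 * G * dg - yield_stress(alpha_n + dg)

def return_mapping(q_trial, alpha_n, tol=1e-10):
    dg = 0.0
    for _ in range(50):
        f = residual(dg, q_trial, alpha_n)
        if abs(f) < tol:
            break
        dfd = (residual(dg + 1e-8, q_trial, alpha_n) - f) / 1e-8   # numerical derivative
        step, t = -f / dfd, 1.0
        while abs(residual(dg + t * step, q_trial, alpha_n)) > abs(f) and t > 1e-4:
            t *= 0.5                                               # backtracking line search
        dg += t * step
    return dg

print(return_mapping(q_trial=400.0, alpha_n=0.0))   # plastic multiplier increment
```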

20.
Analogously to the classical return-mapping algorithm, so-called variational constitutive updates are numerical methods that compute the unknown state variables, such as the plastic strains and the stresses, for material models showing an irreversible mechanical response. In sharp contrast to standard approaches in computational inelasticity, in variational constitutive updates the state variables follow naturally and jointly from energy minimization. This leads to significant advantages from a numerical and mathematical as well as a physical point of view. However, while the classical return-mapping algorithm has been under development for several decades, and has thus already reached a certain maturity, variational constitutive updates have drawn attention only relatively recently. This is particularly manifested in the numerical performance of such algorithms. Within the present paper, the numerical efficiency of variational constitutive updates is critically analyzed. It is shown that a naive approximation of the flow rule causes a singular Hessian within the respective Newton–Raphson scheme. However, by developing a novel parameterization of the flow rule, an efficient algorithm is derived. Its performance is carefully compared with that of the classical return-mapping scheme. This comparison clearly shows that the novel variationally consistent implementation is at least as efficient as the classical return-mapping algorithm.
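To show what "the state variables follow from energy minimization" means in the simplest possible setting, the sketch below obtains the plastic increment of a one-dimensional model with linear isotropic hardening by minimizing an incremental potential (stored energy plus dissipation) rather than solving the return-mapping equations. The material data and the scalar setting are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

E, H, sig_y = 200e3, 5e3, 250.0   # hypothetical 1D material data (MPa)

def incremental_potential(dg, eps, eps_p_n, alpha_n):
    # Stored elastic energy + dissipation + hardening energy for the step.
    return (0.5 * E * (eps - eps_p_n - dg) ** 2
            + sig_y * abs(dg)
            + 0.5 * H * (alpha_n + abs(dg)) ** 2)

eps, eps_p_n, alpha_n = 0.004, 0.0, 0.0
res = minimize_scalar(lambda dg: incremental_potential(dg, eps, eps_p_n, alpha_n),
                      bounds=(-0.01, 0.01), method="bounded")
dg = res.x
stress = E * (eps - eps_p_n - dg)   # stress consistent with the minimiser
print("plastic increment %.5f, stress %.1f MPa" % (dg, stress))
```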

