Similar Articles
20 similar articles found (search time: 46 ms)
1.
Multivariate statistical methods for the analysis, monitoring and diagnosis of process operating performance are becoming more important because of the availability of on-line process computers which routinely collect measurements on large numbers of process variables. Traditional univariate control charts have been extended to multivariate quality control situations using the Hotelling T2 statistic. Recent approaches to multivariate statistical process control, which utilize not only product quality data (Y) but also all of the available process variable data (X), are based on multivariate statistical projection methods (principal component analysis (PCA), partial least squares (PLS), multi-block PLS and multi-way PCA). An overview of these methods and their use in the statistical process control of multivariate continuous and batch processes is presented. Applications are provided on the analysis of historical data from the catalytic cracking section of a large petroleum refinery, on the monitoring and diagnosis of a continuous polymerization process and on the monitoring of an industrial batch process.
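The T2-on-PCA-scores idea in the abstract above can be sketched in a few lines of numpy; the data, the number of retained components, and the fault sample below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "in-control" data: 200 samples of 10 correlated process variables.
base = rng.normal(size=(200, 3))
X = base @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(200, 10))

# PCA model on mean-centered data, retaining k components.
mu = X.mean(axis=0)
k = 3
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:k].T                     # loadings, shape (10, k)
T = (X - mu) @ P                 # scores, shape (200, k)
lam = T.var(axis=0, ddof=1)      # score variances

def hotelling_t2(x):
    """Hotelling T2 of one sample, computed in the PCA score space."""
    t = (x - mu) @ P
    return float(np.sum(t ** 2 / lam))

# A sample lying 10 standard deviations out along the first component
# gives T2 = 100; a control chart would compare this to an F-based limit.
x_fault = mu + 10.0 * np.sqrt(lam[0]) * P[:, 0]
```

In practice the T2 statistic is complemented by a residual (SPE/Q) statistic to catch deviations outside the model subspace.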

2.
In this paper, a new method to approximate a data set by another data set with a constrained covariance matrix is proposed. The method is termed Approximation of a DIstribution for a given COVariance (ADICOV). The approximation is solved in any projection subspace, including those of Principal Component Analysis (PCA) and Partial Least Squares (PLS). Given the direct relationship between covariance matrices and projection models, ADICOV is useful for testing whether a data set satisfies the covariance structure in a projection model. This idea is broadly applicable in chemometrics. ADICOV can also be used to simulate data with a specific covariance structure and data distribution. Some applications are illustrated in an industrial case study.
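ADICOV itself solves a constrained approximation problem in a projection subspace; as a loose sketch of the simulation use case only (not the ADICOV algorithm), one can whiten an arbitrary data set and re-color it with a Cholesky factor so that its sample covariance matches a target exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target covariance structure (e.g. taken from a fitted projection model).
C = np.array([[4.0, 1.2, 0.6],
              [1.2, 2.0, 0.3],
              [0.6, 0.3, 1.0]])

# Whiten an arbitrary centered data set, then re-color it with the
# Cholesky factor of C; the simulated data then has sample covariance C.
Z = rng.normal(size=(500, 3))
Zc = Z - Z.mean(axis=0)
L_emp = np.linalg.cholesky(np.cov(Zc, rowvar=False))
white = Zc @ np.linalg.inv(L_emp).T      # empirical covariance = identity
L = np.linalg.cholesky(C)
Xsim = white @ L.T                       # empirical covariance = C exactly
```

Unlike this generic recipe, ADICOV additionally keeps the simulated data as close as possible to a reference distribution.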

3.
A method is proposed for diagnosing bolt-loosening faults by extracting time-domain indicator features and applying principal component analysis (PCA). The raw data are first decomposed by empirical mode decomposition (EMD), and five dimensionless factors are computed for the corresponding IMFs. PCA is then used to reduce the dimensionality of the data vectors and to project them onto the residual subspace, from which the prediction error of each data sample is computed. Bolt-loosening fault-diagnosis experiments were carried out on an intelligent water-supply system test rig; the results show that the established PCA model can...

4.
A multivariate data matrix containing a number of missing values was obtained from a study on the changes in colour and phenolic composition during the ageing of port. Two approaches were taken in the analysis of the data. The first involved the use of multiple imputation (MI) followed by principal component analysis (PCA). The second examined the use of maximum likelihood principal component analysis (MLPCA). The use of multiple imputation allows missing-value uncertainty to be incorporated into the analysis of the data. Initial estimates of missing values were first calculated using the Expectation Maximization (EM) algorithm, followed by Data Augmentation (DA), in order to generate five imputed data matrices. Each completed data matrix was subsequently analysed by PCA, and the principal component (PC) scores and loadings were averaged to give an estimate of the errors. The first three PCs accounted for 93.3% of the explained variance. Changes to colour and monomeric anthocyanin composition were explained on PC1 (79.63% explained variance), phenolic composition and hue mainly on PC2 (8.61% explained variance), and phenolic composition and the formation of polymeric pigment on PC3 (5.04% explained variance). In MLPCA, estimates of measurement uncertainty are incorporated in the decomposition step, with missing values being assigned large measurement uncertainties. PC scores on the first two PCs after multiple imputation and PCA (MI+PCA) were comparable to the maximum likelihood scores on the first two PCs extracted by MLPCA.
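A toy, single-imputation sketch of the impute-then-refit idea (the DA step that produces five imputed matrices is omitted, and all data here are synthetic): start from column means, then repeatedly refit a low-rank PCA model and refill the missing cells from its reconstruction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Complete low-rank data, then knock out 10% of the entries at random.
scores = rng.normal(size=(60, 2))
load = rng.normal(size=(2, 6))
X_true = scores @ load + 0.05 * rng.normal(size=(60, 6))
X = X_true.copy()
mask = rng.random(X.shape) < 0.10            # True = missing
X[mask] = np.nan

# EM-style imputation: refit rank-2 PCA, refill missing cells, repeat.
filled = np.where(mask, np.nanmean(X, axis=0), X)
for _ in range(50):
    mu = filled.mean(axis=0)
    U, s, Vt = np.linalg.svd(filled - mu, full_matrices=False)
    recon = mu + (U[:, :2] * s[:2]) @ Vt[:2]
    filled = np.where(mask, recon, X)

# Error on the imputed cells, versus plain column-mean imputation.
mean_fill = np.where(mask, np.nanmean(X, axis=0), X)
err_pca = np.abs(filled - X_true)[mask].mean()
err_mean = np.abs(mean_fill - X_true)[mask].mean()
```

The PCA-based refill exploits the correlation structure and therefore beats column-mean imputation on low-rank data.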

5.
Dimensionality reduction is an important technique for preprocessing high-dimensional data. Because only one side of the original data is represented in a low-dimensional subspace, useful information may be lost. In the present study, novel dimensionality reduction methods were developed that are suitable for metabolome data where the observations vary with time. Metabolomics deals with this type of data, which is often obtained in microorganism fermentation processes. However, no dimensionality reduction method that actively exploits this time-course information has been reported to date. The ordinary dimensionality reduction methods of principal component analysis (PCA), partial least squares (PLS), orthonormalized PLS (OPLS), and regularized Fisher discriminant analysis (RFDA) were extended by introducing differential penalties on the latent variables in each class. A nonlinear extension of this approach using kernel methods was also proposed, in the form of kernel-smoothed PCA, PLS, OPLS, and FDA. Since all of these methods are formulated as generalized eigenvalue problems, the solutions can be computed easily. These methods were then applied to intracellular metabolite data of a xylose-fermenting yeast in ethanol fermentation. Visualization in the low-dimensional subspace suggests that smoothed PCA successfully preserves the information about the time course of observations during fermentation, and that RFDA can produce high separation among different strains.

6.
Statistical procedures enable a multivariate analysis of the measurements to identify specific characteristics of the dissolved organic matter (DOM) fractions in raw natural water, including their concentrations. In this work, three established models were used to predict the concentrations of DOM fractions from spectral fluorescent signatures (SFSs): a general linear regression (GLR), the loadings and scores of a principal component analysis (PCA), and a partial least squares (PLS) regression. Details of the method used to prepare the fractions are given. Water samples from surface water treatment plants in New Jersey were used for the testing. In all cases, PLS showed much better bias and accuracy than the GLR and PCA models. The hydrophilic neutral fraction, however, showed poor performance (33% bias) due to the isolation technique used. Recommendations are provided for improving DOM characterization through SFSs, which, coupled with PLS, make a powerful and cost-effective surrogate parameter for characterizing DOM.
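The PLS regression step can be sketched with a plain NIPALS PLS1 implementation on synthetic spectra (not the SFS data of the study; the latent structure and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "spectra": 80 samples x 50 wavelengths driven by 2 latent factors.
T_lat = rng.normal(size=(80, 2))
X = T_lat @ rng.normal(size=(2, 50)) + 0.05 * rng.normal(size=(80, 50))
y = T_lat @ np.array([1.0, -0.5]) + 0.05 * rng.normal(size=80)

def pls1_fit(X, y, n_comp):
    """PLS1 regression via the NIPALS algorithm (mean-centers internally)."""
    xm, ym = X.mean(axis=0), y.mean()
    E, f = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = E.T @ f
        w /= np.linalg.norm(w)          # weight vector
        t = E @ w                       # score vector
        p = E.T @ t / (t @ t)           # loading vector
        qa = f @ t / (t @ t)            # y-loading
        E = E - np.outer(t, p)          # deflate X
        f = f - qa * t                  # deflate y
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.inv(P.T @ W) @ q  # regression vector
    return B, xm, ym

B, xm, ym = pls1_fit(X, y, n_comp=2)
y_hat = (X - xm) @ B + ym
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

With two latent factors and low noise, two PLS components recover nearly all of the variance in y.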

7.
An example of combining self-modeling curve resolution (SMCR) methods and partial least squares (PLS) to construct a quantitative model using minimal reference data is presented. The objective was to construct a quantitative calibration model to allow real-time in situ ultraviolet-attenuated total reflection (UV/ATR) measurements to determine the end-point during a chlorination reaction. Time restrictions for development combined with difficult reaction sampling conditions required the method to be developed using only a few key reference measurements. Utilizing evolving factor analysis (EFA) and the orthogonal projection approach (OPA), initial estimates of the concentration and spectral profiles for the intermediate and product were obtained. Further optimization by multivariate curve resolution-alternating least squares (MCR-ALS) led to refined estimates of the concentration profiles. A PLS2 model was then constructed using the calculated concentration profiles and the preprocessed UV spectra. Using a standard PLS model compatible with the spectrometer's standard process software facilitated real-time predictions for new batches. This method was applied to five 45 liter batches in a large-scale laboratory facility. The method successfully predicted the product concentration of batch 1 but exhibited larger prediction error for subsequent batches. The largest prediction error occurred during batch 3, for which a final concentration of 0.22 mol L(-1) was predicted, while the true measured value was 0.271 mol L(-1) (an error of 18.8%). However, the qualitative real-time profiles proved to be extremely useful as they allowed the end-point to be determined without sampling or performing off-line analysis. Furthermore, the concentration profile of the intermediate species, which could not be observed by the off-line method, could also be observed in real time and gave further confidence that the process was approaching the end-point.
Another benefit of real-time reaction profiles was encountered during the manufacture when the formation of product in batch 3 appeared to be progressing slower than was observed in previous batches. This prompted a check of the batch temperature and it was found to be 10 degrees C lower than the required set-point. The temperature was corrected and the batch successfully reached completion in the expected time.
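The MCR-ALS refinement step can be sketched in numpy with nonnegativity enforced by clipping; the EFA/OPA initialization used in the paper is replaced here by crude Gaussian guesses, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Bilinear data D = C S^T: two species with Gaussian concentration profiles
# over time and nonnegative spectra over 40 channels, plus a little noise.
t = np.linspace(0, 1, 50)
C_true = np.column_stack([np.exp(-((t - 0.3) / 0.10) ** 2),
                          np.exp(-((t - 0.7) / 0.15) ** 2)])
S_true = np.abs(rng.normal(size=(40, 2)))
D = C_true @ S_true.T + 0.01 * rng.normal(size=(50, 40))

# MCR-ALS: alternate least-squares solves for the spectra S and the
# concentrations C, clipping at zero to enforce nonnegativity.
C = np.column_stack([np.exp(-((t - 0.25) / 0.2) ** 2),   # rough initial
                     np.exp(-((t - 0.75) / 0.2) ** 2)])  # concentration guess
for _ in range(100):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)

lof = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)    # lack of fit
```

In real applications further constraints (unimodality, closure, known spectra) are commonly added to reduce rotational ambiguity.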

8.
Typical process measurements are usually correlated with each other and compounded with various phenomena occurring in different time and frequency domains. To take this multivariate and multi-scale nature of process dynamics into account, a multi-scale PLS (MSPLS) algorithm combining PLS and wavelet analysis is proposed. The MSPLS first decomposes the process measurements into separate multi-scale components using an on-line wavelet transform, and the resulting multi-scale data blocks are then modeled in the framework of a multi-block PLS algorithm, which can describe the global relationships across the entire set of scale blocks as well as the localized features within each sub-block at detailed resolutions. To demonstrate the feasibility of the MSPLS method, its process monitoring abilities were tested not only on simulated data sets containing several fault scenarios but also on a real industrial data set, and compared with the monitoring abilities of the standard PLS method on a quantitative basis. The results clearly showed that MSPLS was superior to standard PLS in all cases, especially in that it could provide additional scale-level information about the fault characteristics as well as more sensitive fault detection.
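The multiscale decomposition step can be sketched with a hand-rolled one-level Haar transform applied twice (the paper's on-line wavelet transform and the multi-block PLS modeling are beyond a snippet); the resulting blocks would each feed one sub-block of the multi-block model:

```python
import numpy as np

rng = np.random.default_rng(5)

# A process measurement with a slow drift plus high-frequency noise.
n = 256
x = np.cumsum(rng.normal(scale=0.05, size=n)) + rng.normal(scale=0.2, size=n)

def haar_level(x):
    """One level of the Haar wavelet transform: approximation + detail."""
    pairs = x.reshape(-1, 2)
    a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # coarse scale
    d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # fine scale
    return a, d

def haar_inverse(a, d):
    """Invert one Haar level, interleaving the reconstructed samples."""
    out = np.empty(2 * a.size)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

# Two-level multiscale split into the blocks (a2, d2, d1).
a1, d1 = haar_level(x)
a2, d2 = haar_level(a1)
x_rec = haar_inverse(haar_inverse(a2, d2), d1)
```

Because the transform is orthogonal, the decomposition is lossless and the scale blocks can be modeled independently.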

9.
We present a methodology for solving a non‐linear inverse geometry heat transfer problem where the observations are temperature measurements at points inside the object and the unknown is the geometry of the volume where the problem is defined. The representation of the geometry is based on radial basis functions (RBFs) and the non‐linear inverse problem is solved using the iteratively regularized Gauss–Newton method. In our work, we consider not only the problem with no geometry restrictions but also the bound‐constrained problem. The methodology is used for the industrial application of estimating the location of the 1150°C isotherm in a blast furnace hearth, based on measurements of the thermocouples located inside it. We validate the solution of the algorithm against simulated measurements with different levels of noise and study its behaviour on different regularization matrices. Finally, we analyse the error behaviour of the solution.
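The iteratively regularized Gauss-Newton idea can be sketched on a toy one-dimensional inverse problem (the RBF geometry parametrization and the heat-transfer forward model are not reproduced; the model, data, and schedule below are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy nonlinear inverse problem: recover (p0, p1) in f(p) = exp(p0*s) + p1
# from noisy observations, via a Tikhonov-regularized Gauss-Newton loop.
s = np.linspace(0, 1, 30)
p_true = np.array([1.5, -0.5])
y = np.exp(p_true[0] * s) + p_true[1] + 0.01 * rng.normal(size=s.size)

def residual(p):
    return np.exp(p[0] * s) + p[1] - y

def jacobian(p):
    return np.column_stack([s * np.exp(p[0] * s), np.ones_like(s)])

p0 = np.array([0.5, 0.0])            # initial guess
p = p0.copy()
alpha = 1e-3                         # regularization weight
for _ in range(20):
    J, r = jacobian(p), residual(p)
    step = np.linalg.solve(J.T @ J + alpha * np.eye(2), -J.T @ r)
    p = p + step
    alpha *= 0.5                     # iteratively relax the regularization

err = np.linalg.norm(p - p_true)
```

The decreasing regularization weight is the "iteratively regularized" part: early iterations are stabilized, later ones approach plain Gauss-Newton.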

10.
An analytical technique based on kernel matrix representation is demonstrated to provide further chemically meaningful insight into partial least squares (PLS) regression models. The kernel matrix condenses essential information about scores derived from PLS or principal component analysis (PCA). Thus, it becomes possible to establish the proper interpretation of the scores. A PLS model for the total nitrogen (TN) content in multiple Thai fish sauces is built with a set of near-infrared (NIR) transmittance spectra of the fish sauce samples. The kernel analysis of the scores effectively reveals that the variation of the spectral feature induced by the change in protein content is substantially associated with the total water content and the protein hydration. Kernel analysis is also carried out on a set of time-dependent infrared (IR) spectra representing transient evaporation of ethanol from a binary mixture solution of ethanol and oleic acid. A PLS model to predict the elapsed time is built with the IR spectra and the kernel matrix is derived from the scores. The detailed analysis of the kernel matrix provides penetrating insight into the interaction between the ethanol and the oleic acid.
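The score-derived kernel matrix is just the Gram matrix of the samples in score space; with all PCA components retained it equals the Gram matrix of the original mean-centered data exactly, which is what makes score-space kernel analysis interpretable. A quick numerical check on toy data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Mean-centered spectra-like data and its full-rank PCA scores.
X = rng.normal(size=(30, 12))
X = X - X.mean(axis=0)
U, sv, Vt = np.linalg.svd(X, full_matrices=False)
T = U * sv                     # PCA scores, all components retained

# Kernel (Gram) matrix from the scores vs. from the original variables.
K_scores = T @ T.T
K_full = X @ X.T
```

Truncating to the leading components gives a low-rank approximation of the same kernel, ordered by explained variance.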

11.
The dynamic error sources of a coordinate measuring machine (CMM) are intricate and mutually coupled, so it is difficult to build an accurate prediction model for the dynamic error by analysing the error sources themselves. In this paper, the three-dimensional coordinates of the measured spatial position and the direct computer control (DCC) parameters of the machine during measurement, namely the traverse speed, the approach distance and the probing speed, are taken as the original independent variables of the CMM dynamic measurement error model. A 3B-spline transform is applied to obtain the nonlinear relationship function between each original variable and the dynamic measurement error; an orthogonal projection is then used to remove from the explanatory matrix the components unrelated to the dependent variable, and partial least squares (PLS) regression is applied to the new explanatory matrix for dimensionality reduction and parameter estimation. This yields the CMM dynamic measurement error model, the 3B-spline orthogonal-projection partial least squares (3BS-OPPLS) model. The approach avoids analysing the intricate and mutually coupled error sources, captures the nonlinear influence of each variable on the dynamic measurement error, and overcomes the multicollinearity caused by an excessive number of explanatory variables. Experimental results show that the 3BS-OPPLS model predicts better than the 3B-spline partial least squares (3BS-PLS) model without orthogonal projection, with significantly improved prediction accuracy.

12.
Model order reduction (MOR) methods are employed in almost all fields of engineering to reduce the processing time of complex computational simulations. At the same time, interior point methods (IPMs), which handle inequality-constrained problems and are still little explored in engineering, can be applied in many fields such as plasticity theory, contact mechanics, micromechanics, and topology optimization. In this work, a MOR method based on Galerkin projection is coupled with the infeasible primal-dual IPM. The research concentrates on how to combine a Galerkin projection with the interior point method; the combination of both methods, together with the Schur complement, makes it possible to solve the reduced problem much like an unconstrained one, leading to new approaches to adaptive strategies. Moreover, an analysis of the error of the Galerkin projection with respect to the primal and dual variables is developed. Finally, an adaptive strategy is suggested that alternates the Galerkin projection operator between the primal and dual variables according to the error observed during the solution of a problem.
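A minimal sketch of the Galerkin projection half of the method (the interior point coupling and the Schur complement treatment are omitted; system sizes and data are invented). When the new solution lies in the span of the snapshot basis, the reduced solve reproduces the full-order solution:

```python
import numpy as np

rng = np.random.default_rng(8)

# Full-order SPD system A u = f with n = 200 unknowns.
n = 200
A = 0.01 * rng.normal(size=(n, n))
A = A + A.T + 22.0 * np.eye(n)            # comfortably positive definite

# Snapshot solutions for a few training right-hand sides.
G = rng.normal(size=(n, 5))
snapshots = np.linalg.solve(A, G)
V, _ = np.linalg.qr(snapshots)            # orthonormal reduced basis, n x 5

# A new right-hand side whose solution lies in the span of the snapshots.
f = G @ np.array([1.0, -0.3, 0.5, 0.0, 2.0])

# Galerkin projection: assemble and solve the 5 x 5 reduced system.
Ar = V.T @ A @ V
fr = V.T @ f
u_reduced = V @ np.linalg.solve(Ar, fr)

u_full = np.linalg.solve(A, f)            # reference full-order solution
rel_err = np.linalg.norm(u_reduced - u_full) / np.linalg.norm(u_full)
```

Outside the snapshot span the reduced solution is only an approximation, which is what the paper's error analysis and adaptive basis switching address.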

13.
Liu HB, Yang JC, Yi WJ, Wang JQ, Yang JK, Li XJ, Tan JC. Applied Optics, 2012, 51(16):3590-3598
In most spacecraft there is a need to know the craft's angular rate. Approaches based on least squares and an adaptive Kalman filter are proposed for estimating the angular rate directly from star tracker measurements. In these approaches, only knowledge of the vector measurements and the sampling interval is required. The designed adaptive Kalman filter can filter out noise without information about the dynamic model or the inertia dyadic. To verify the proposed estimation approaches, simulations based on the orbit data of the CHAllenging Minisatellite Payload (CHAMP) satellite and experimental tests with night-sky observations were performed. Both the simulation and experimental results demonstrate that the proposed approach performs well in terms of accuracy and robustness.
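The least-squares half of the idea can be sketched with toy numbers (the adaptive Kalman filter is not reproduced): under a small-angle assumption, star vectors observed one sampling interval apart give a linear system in the angular rate.

```python
import numpy as np

rng = np.random.default_rng(9)

def skew(v):
    """Cross-product matrix: skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation(omega, dt):
    """Rodrigues formula: rotation by omega * dt."""
    theta = np.linalg.norm(omega) * dt
    K = skew(omega / np.linalg.norm(omega))
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

omega_true = np.array([0.01, -0.02, 0.015])    # body rate, rad/s
dt = 0.1                                       # star tracker sampling interval

# Unit star vectors at time k and the same stars one sample later.
V1 = rng.normal(size=(6, 3))
V1 /= np.linalg.norm(V1, axis=1, keepdims=True)
V2 = V1 @ rotation(omega_true, dt).T
V2 += 1e-6 * rng.normal(size=V2.shape)         # measurement noise

# Small-angle model: v2 - v1 ~ (omega x v1) * dt = -skew(v1) @ omega * dt.
A = np.vstack([-skew(v) * dt for v in V1])
b = (V2 - V1).ravel()
omega_hat = np.linalg.lstsq(A, b, rcond=None)[0]
```

Stacking several star vectors makes the 3-unknown system overdetermined, so the rate is recovered in a least-squares sense despite noise.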

14.
Choi K, Schulz TJ. Applied Optics, 2008, 47(10):B104-B116
Thin observation module by bounded optics (TOMBO) is an optical system that achieves compactness and thinness by replacing a conventional large full aperture with a lenslet array of several smaller apertures. This array allows us to collect diverse low-resolution measurements. Finding an efficient way of combining these diverse measurements to make a high-resolution image is an important research problem. We focus on finding a computational method for performing the resolution restoration and on evaluating the method via simulations. Our approach is based on advanced signal-processing concepts: we construct a computational data model based on Fourier optics and propose restoration algorithms based on minimization of an information-theoretic measure, Csiszár's I-divergence, between two nonnegative quantities: the measured data and the hypothetical images induced by our algorithms through the computational data model. We also incorporate Poisson and Gaussian noise processes to model the physical measurements. To solve the optimization problem, we adapt the popular expectation-maximization method. These iterative algorithms, in multiplicative form, preserve powerful nonnegativity constraints. We further incorporate a regularization based on minimization of total variation to suppress artifacts such as roughness on the surfaces of the estimates. Two sets of simulation examples show that the algorithms can produce very high-quality estimates from noiseless measurements and reasonably good estimates from noisy measurements, even when the measurements are incomplete. Several interesting and useful avenues for future work, such as the effects of measurement selection, are suggested in our concluding remarks.

15.
Existing sensitivity analyses of motion error usually focus on the trajectory deviation of the mechanism, which inevitably introduces an intractable time-dependent problem. For efficiently and accurately assessing the motion error of a planar mechanism with dimension and clearance uncertainties by global sensitivity analysis (GSA), a novel method is proposed in this work. By applying principal component analysis (PCA), the motion error is transformed into a new vector output, which neatly avoids the time-dependent problem. To ensure the accuracy of PCA in the case of small samples, the Bootstrap method is introduced. Based on the PCA results, an artificial neural network (ANN) surrogate model is established between the input variables and the vector output. The classical variance-based GSA method is then applied to obtain the variable importance ranking for the different PCs, and synthesized GSA indices are introduced. Four representative examples are studied to demonstrate the versatility and effectiveness of the proposed method.

16.
Shao Y, He Y, Mao J. Applied Optics, 2007, 46(25):6391-6396
Visible and near-infrared (Vis/NIR) reflectance spectroscopy has been investigated for its ability to nondestructively detect acidity in bayberry juice. What we believe to be a new, better mathematical model is put forward, which we have named principal component analysis-stepwise regression analysis-backpropagation neural network (PCA-SRA-BPNN), to build a correlation between the spectral reflectivity data and the acidity of bayberry juice. In this model, the optimum network parameters, such as the number of input nodes, number of hidden nodes, learning rate, and momentum, are chosen by the value of the root-mean-square (rms) error. The results show that its prediction statistical parameters are a correlation coefficient (r) of 0.9451 and a root-mean-square error of prediction (RMSEP) of 0.1168. Partial least-squares (PLS) regression was also established for comparison with this model. Before doing this, the influences of various spectral pretreatments (standard normal variate, multiplicative scatter correction, Savitzky-Golay first derivative, and wavelet packet transform) were compared. The PLS approach with wavelet packet transform preprocessing of the spectra was found to provide the best results, with prediction statistical parameters of a correlation coefficient (r) of 0.9061 and an RMSEP of 0.1564. Hence, these two models are both suitable for analyzing data from Vis/NIR spectroscopy and for solving the problem of acidity prediction in bayberry juice. This provides basic research toward ultimately realizing on-line measurement of the juice's internal quality through this Vis/NIR spectroscopy technique.

17.
Principal component transform — Outer product analysis in the PCA context
Outer product analysis is a method that combines two spectral domains with the aim of emphasizing co-evolutions of spectral regions. This data fusion technique consists of the product of all combinations of the variables that define each spectral domain. The main issue with applying this technique is the very wide data matrix obtained, which can be very hard to handle with multivariate techniques such as PCA or PLS due to computational resource constraints. The present work presents an alternative way to perform outer product analysis in the PCA context without incurring high demands on computational resources. This work shows that by decomposing each spectral domain with PCA and performing the outer product on the recovered scores, one obtains the same results as if the outer product were calculated in the original variable space, but using far fewer computational resources. The results show that this approach makes it possible to apply outer product analysis to very wide domains.
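The equivalence claimed above is easy to check numerically on toy data: with all components retained, the outer product computed on the scores differs from the outer product in the original variable space only by the fixed orthogonal matrix kron(Px, Py), so PCA of either matrix yields the same singular values.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two "spectral domains" measured on the same 15 samples.
X = rng.normal(size=(15, 8)); X -= X.mean(axis=0)
Y = rng.normal(size=(15, 6)); Y -= Y.mean(axis=0)

def pca(M):
    """Full-rank PCA: scores and loadings with all components kept."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U * s, Vt.T

Tx, Px = pca(X)
Ty, Py = pca(Y)

# Row i of the outer-product matrix is kron(x_i, y_i): 8*6 = 48 columns.
OP_full = np.vstack([np.kron(X[i], Y[i]) for i in range(15)])
# The same outer product computed on the scores, then rotated back.
OP_scores = np.vstack([np.kron(Tx[i], Ty[i]) for i in range(15)])
OP_back = OP_scores @ np.kron(Px, Py).T
```

Since x_i = Px t_i and y_i = Py s_i, kron(x_i, y_i) = kron(Px, Py) kron(t_i, s_i), which is exactly the rotation checked below.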

18.
This article aimed to model the effects of raw material properties and roller compactor operating parameters (OPs) on the properties of roller-compacted ribbons and granules with the aid of principal component analysis (PCA) and partial least squares (PLS) projection. A database of raw material properties was established through extensive physical and mechanical characterization of several microcrystalline cellulose (MCC) and lactose grades and their blends. A design of experiments (DoE) was used for ribbon production. PLS models constructed with the OPs alone modeled the roller compaction (RC) responses poorly. Inclusion of the raw material properties markedly improved the goodness of fit (R2 = 0.897) and model predictability (Q2 = 0.72).

19.
Technometrics, 2013, 55(4):328-337
The formulation and evaluation of environmental policy depend on receptor models that are used to assess the number and nature of the pollution sources affecting the air quality of a region of interest. Different approaches have been developed for situations in which no information is available about the number and nature of these sources (e.g., exploratory factor analysis) and for situations in which the composition of these sources is assumed known (e.g., regression and measurement error models). We propose a flexible approach for fitting the receptor model when only partial pollution source information is available. The use of latent variable modeling allows the direct incorporation of subject-matter knowledge into the model, including known physical constraints and partial pollution source information obtained from laboratory measurements or past studies. Because air quality data often exhibit temporal and/or spatial dependence, we consider the importance of accounting for such correlation in estimating model parameters and making valid statistical inferences. We propose an approach for incorporating dependence structure directly into estimation and inference procedures via a new nested block bootstrap method that adjusts for bias in estimating moment matrices. A goodness-of-fit test that is valid in the presence of such dependence is proposed. The application of the approach is facilitated by a new multivariate extension of an existing block size determination algorithm. The proposed approaches are evaluated by simulation and illustrated with an analysis of hourly measurements of volatile organic compounds in the El Paso, Texas/Ciudad Juarez, Mexico area.
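A minimal moving-block bootstrap sketch (not the paper's nested, bias-adjusted variant) showing why blocks matter for dependent data: for a positively autocorrelated AR(1) series, an i.i.d. bootstrap understates the standard error of the sample mean, while resampling blocks preserves the short-range dependence.

```python
import numpy as np

rng = np.random.default_rng(12)

# An AR(1) series: observations are temporally dependent.
n, phi = 500, 0.6
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

def block_bootstrap_mean(x, block_len, n_boot, rng):
    """Moving-block bootstrap replicates of the sample mean."""
    n = x.size
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=(n_boot, n_blocks))
    reps = np.empty(n_boot)
    for b in range(n_boot):
        pieces = [x[s:s + block_len] for s in starts[b]]
        reps[b] = np.concatenate(pieces)[:n].mean()
    return reps

reps = block_bootstrap_mean(x, block_len=25, n_boot=500, rng=rng)
se_block = reps.std(ddof=1)

# Naive i.i.d. bootstrap for comparison: ignores the autocorrelation.
iid = x[rng.integers(0, n, size=(500, n))].mean(axis=1)
se_iid = iid.std(ddof=1)
```

For phi = 0.6 the true standard error of the mean is roughly twice the i.i.d. value, which the block version approximately recovers.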

20.
We consider the problem of estimating the 2D vector displacement field in a heterogeneous elastic solid deforming under plane stress conditions. The problem is motivated by applications in quasistatic elastography. From precise and accurate measurements of one component of the 2D vector displacement field and very limited information about the second component, the method reconstructs the second component quite accurately. No a priori knowledge of the heterogeneous distribution of material properties is required. The method relies on using a special form of the momentum equations to filter ultrasound displacement measurements into more precise estimates. We verify the method with applications to simulated displacement data, and validate it with applications to displacement data measured from a tissue-mimicking phantom and to in vivo data; significant improvements are observed in the filtered displacements recovered from all the tests. In the verification studies, the error in the lateral displacement estimates decreased from about 50% to about 2%, and the strain error decreased from more than 250% to below 2%.
