Similar Literature
20 similar documents retrieved.
1.
Prediction of chemical composition of flowing liquids using passive acoustic measurements and multivariate regression (acoustic chemometrics) has been reported as a promising in-line measurement method. However, the passive acoustic measurement results are also affected, directly or indirectly, by factors other than the composition of the liquid, e.g. the physical conditions of the flow and equipment/pipe properties. The present study focuses on the effects of flow rate, accelerometer location and temperature on the acoustic spectra and on the prediction of liquid composition. The studied liquids were two-component mixtures of sucrose and water, and three-component mixtures of ethanol, sucrose and water. Multivariate models were estimated using both local and global calibration on full spectra, and on augmented frequency and amplitude matrices derived from the full spectra. Flow rate and accelerometer location had the most pronounced effect on the acoustic spectra and on prediction results from recalibrated local models. Temperature had a minor effect on the acoustic spectra and prediction results. The prediction error for determination of ethanol, sucrose and water increased with increasing flow rate. Changes in flow rate resulted in considerable spectral variation, causing the resulting local calibration model to perform poorly when predicting new samples taken at other flow conditions. Global models performed well in predicting liquid composition at all studied flow and temperature levels. The global models, however, required a higher number of PLS factors and gave higher prediction errors than the local models. Using the augmented frequency and amplitude matrices in PLS/PPLS global regression models also led to higher prediction errors than the full-spectra models. However, the augmented frequency and amplitude models were more parsimonious (4–6 PLS factors) than the full-spectra models (10–12 PLS factors).
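A minimal sketch of the local-versus-global contrast described above, using simulated acoustic-style spectra (all data, dimensions and factor counts are invented for illustration): a local PLS model calibrated at one flow rate degrades at a new flow rate, while a pooled global model with more factors remains usable.

```python
# Sketch: local vs. global PLS calibration across flow rates (simulated data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_per_rate, n_wavelengths = 30, 200
flow_rates = [1.0, 2.0, 3.0]                    # assumed flow-rate levels

sig = rng.standard_normal(n_wavelengths)        # composition-related signature
flow_sig = rng.standard_normal(n_wavelengths)   # flow-related signature
y = {fr: rng.uniform(0, 20, n_per_rate) for fr in flow_rates}   # sucrose content, %
X = {fr: 0.01 * np.outer(y[fr], sig)
         + 0.05 * fr * flow_sig
         + 0.01 * rng.standard_normal((n_per_rate, n_wavelengths))
     for fr in flow_rates}

# Local model: calibrated at one flow rate, applied at another.
local = PLSRegression(n_components=4).fit(X[1.0], y[1.0])
rmsep_local = np.sqrt(mean_squared_error(y[3.0], local.predict(X[3.0]).ravel()))

# Global model: pools all flow rates (typically needs more PLS factors).
X_all = np.vstack([X[fr] for fr in flow_rates])
y_all = np.concatenate([y[fr] for fr in flow_rates])
glob = PLSRegression(n_components=8).fit(X_all, y_all)
rmsep_global = np.sqrt(mean_squared_error(y[3.0], glob.predict(X[3.0]).ravel()))

print(f"local model applied at a new flow rate: RMSEP = {rmsep_local:.2f}")
print(f"global model at the same flow rate:     RMSEP = {rmsep_global:.2f}")
```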

2.
Single-beam spectra were collected over the combination region of the near-infrared spectrum for 80 samples collected from 15 people over a two-week period. Partial least-squares (PLS) regression was used to generate an optimized calibration model for urea. PLS calibration models accurately measure urea in the spent dialysate matrix. Prediction errors are on the order of 0.15 mM, which is sufficient for clinical assessment of the dialysis process. In addition, the feasibility of a global calibration model is demonstrated by generating a calibration model from samples and spectra obtained from 12 people to predict the level of urea in samples collected from 3 different people. In this case, the standard error of prediction is 0.09 mM. Spectra were modified in order to systematically examine the impact of resolution and noise. Little impact is observed when altering the spectral resolution from 4 to 32 cm-1. Spectral noise, however, plays an important role in the accuracy of these calibration models. Increasing the magnitude of the spectral noise increases the prediction errors and increases the width of the spectral range necessary for extracting the analytical information. The utility of the method is demonstrated by analyzing dialysate samples collected during actual dialysis treatments. In addition, the resolution and spectral quality necessary for reliable on-line urea monitoring are identified. These findings indicate that a dedicated, on-line urea spectrometer must possess a resolution of 16 cm-1 coupled with a sample thickness of 1.5 mm and spectral noise levels on the order of 25 micro-absorbance units when measured as the root-mean-square (RMS) noise of 100% lines.
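A small sketch of the noise study described above, assuming simulated spectra with an invented urea band and invented noise levels: the standard error of prediction (SEP) of a PLS model is recomputed as the RMS spectral noise increases.

```python
# Sketch: effect of added spectral noise on PLS prediction error (simulated data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_cal, n_val, n_points = 60, 20, 300
urea_cal = rng.uniform(1.0, 30.0, n_cal)      # mM, assumed range
urea_val = rng.uniform(1.0, 30.0, n_val)
pure_urea = np.exp(-0.5 * ((np.arange(n_points) - 150) / 20.0) ** 2)  # toy band shape

def spectra(conc, noise_rms):
    base = 1e-3 * np.outer(conc, pure_urea)                 # Beer-Lambert-like signal
    return base + rng.normal(0.0, noise_rms, (conc.size, n_points))

for noise_rms in (25e-6, 100e-6, 400e-6):                   # RMS noise, absorbance units
    X_cal, X_val = spectra(urea_cal, noise_rms), spectra(urea_val, noise_rms)
    model = PLSRegression(n_components=5).fit(X_cal, urea_cal)
    sep = np.sqrt(np.mean((model.predict(X_val).ravel() - urea_val) ** 2))
    print(f"noise {noise_rms*1e6:.0f} micro-AU -> SEP = {sep:.2f} mM")
```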

3.
The traditional way of handling temperature shifts and other perturbations in calibration situations is to incorporate the non-relevant spectral variation into the calibration set by measuring the samples at various conditions. The present paper proposes two low-cost approaches based on simulation and prior knowledge about the perturbations, and these are compared to traditional methods. The first approach is based on augmenting the calibration matrix by adding simulated noise to the spectra. The second approach is a correction method that removes the non-relevant variation from new spectra. Neither method demands exact knowledge of the perturbation levels. Using the augmentation method it was found that a few, in this case four, selected samples run under different conditions gave approximately the same robustness as running all the calibration samples under different conditions. For the carbohydrate data set, all robustification methods investigated worked well, including the use of pure water spectra for temperature compensation. For the more complex meat data set, only the augmentation method gave results comparable to the full global model.
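A minimal sketch of the augmentation idea under stated assumptions: copies of the calibration spectra, carrying random amounts of a simulated perturbation direction (here a made-up temperature-difference shape), are appended with unchanged reference values so that the regression learns to ignore that direction.

```python
# Sketch: robustifying a calibration by augmenting it with simulated perturbations.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_cal, n_points = 40, 250
y_cal = rng.uniform(0, 50, n_cal)                       # analyte concentration
analyte_band = np.exp(-0.5 * ((np.arange(n_points) - 100) / 15.0) ** 2)
X_cal = 0.01 * np.outer(y_cal, analyte_band) + 0.002 * rng.standard_normal((n_cal, n_points))

# Assumed prior knowledge: spectral direction affected by a temperature change,
# e.g. the difference between pure-water spectra at two temperatures.
temp_direction = np.gradient(np.exp(-0.5 * ((np.arange(n_points) - 160) / 25.0) ** 2))

# Append copies of the calibration spectra with random amounts of the perturbation;
# the reference values are unchanged, so the model learns to ignore this direction.
levels = rng.uniform(-1.0, 1.0, (3 * n_cal, 1))
X_aug = np.vstack([X_cal, np.tile(X_cal, (3, 1)) + 0.02 * levels * temp_direction])
y_aug = np.concatenate([y_cal, np.tile(y_cal, 3)])

model = PLSRegression(n_components=4).fit(X_aug, y_aug)
```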

4.
The performance prediction models in the Pavement-ME design software are nationally calibrated using in-service pavement material properties, pavement structure, climate and truck loadings, and performance data obtained from the Long-Term Pavement Performance programme. The nationally calibrated models may not perform well if the inputs and performance data used to calibrate them do not represent the local design and construction practices. Therefore, before implementing the new M-E design procedure, each state highway agency (SHA) should evaluate how well the nationally calibrated performance models predict the measured field performance. Local calibration of the Pavement-ME performance models is recommended to improve the performance prediction capabilities so that they reflect the unique conditions and design practices. During the local calibration process, the traditional calibration techniques (split sampling) may not necessarily provide adequate results when only a limited number of pavement sections is available. Consequently, there is a need to employ statistical and resampling methodologies that are more efficient and robust for model calibration given the data-related challenges encountered by SHAs. The main objectives of the paper are to demonstrate the local calibration of rigid pavement performance models and to compare the calibration results based on different resampling techniques. The bootstrap is a non-parametric and robust resampling technique for estimating standard errors and confidence intervals of a statistic. The main advantage of bootstrapping is that model parameters can be estimated without making distributional assumptions. This paper presents the use of bootstrapping and jackknifing to locally calibrate the transverse cracking and IRI performance models for newly constructed and rehabilitated rigid pavements. The results of the calibration show that the standard error of estimate and the bias are lower than with the traditional sampling methods. In addition, the validation statistics are similar to those of the locally calibrated model, especially for the IRI model, which indicates the robustness of the local model coefficients.
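A hedged sketch of the bootstrap step described above, simplified to a single multiplicative calibration factor applied to predicted transverse cracking; the section data are hypothetical and the model form is not the actual Pavement-ME equation.

```python
# Sketch: bootstrapping a local calibration factor over pavement sections.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data: nationally calibrated predictions vs. measured distress.
predicted = rng.uniform(2, 15, 25)                    # % slabs cracked (model)
measured = 1.3 * predicted + rng.normal(0, 1.0, 25)   # % slabs cracked (field)

def calib_factor(pred, meas):
    # Least-squares multiplicative calibration factor (no intercept).
    return np.sum(pred * meas) / np.sum(pred ** 2)

n_boot = 2000
factors = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, predicted.size, predicted.size)  # resample sections with replacement
    factors[b] = calib_factor(predicted[idx], measured[idx])

print(f"calibration factor: {calib_factor(predicted, measured):.3f}")
print(f"bootstrap standard error: {factors.std(ddof=1):.3f}")
print(f"95% percentile CI: {np.percentile(factors, [2.5, 97.5])}")
```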

5.
A comparative study involving a global linear method (partial least squares), a local linear method (locally weighted regression), and a nonlinear method (neural networks) has been performed in order to implement a calibration model on an industrial process. The models were designed to predict the water content in a reactor during a distillation process, using in-line measurements from a near-infrared analyzer. Curvature effects due to changes in temperature and variations between the different batches make the problem particularly challenging. The influence of spectral range selection and data preprocessing has been studied. With each calibration method, specific procedures have been applied to promote model robustness. In particular, the use of a monitoring set with neural networks does not always prevent overfitting. Therefore, we developed a model selection criterion based on the median of the monitoring error over replicate trials. The back-propagation neural network models selected were found to outperform the other methods on independent test data.

6.
Comparisons of prediction models from the new augmented classical least squares (ACLS) and partial least squares (PLS) multivariate spectral analysis methods were conducted using simulated data containing deviations from the idealized model. The simulated data were based on pure spectral components derived from real near-infrared spectra of multicomponent dilute aqueous solutions. Simulated uncorrelated concentration errors, uncorrelated and correlated spectral noise, and nonlinear spectral responses were included to evaluate the methods in situations representative of experimental data. The statistical significance of differences in prediction ability was evaluated using the Wilcoxon signed rank test. The prediction differences were found to depend on the type of noise added, the number of calibration samples, and the component being predicted. For analyses applied to simulated spectra with noise-free nonlinear response, PLS was shown to be statistically superior to ACLS in most of the cases. With added uncorrelated spectral noise, both methods performed comparably. Using 50 calibration samples with simulated correlated spectral noise, PLS showed an advantage in 3 out of 9 cases, but the advantage dropped to 1 out of 9 cases with 25 calibration samples. For cases with different noise distributions between calibration and validation, ACLS predictions were statistically better than PLS for two of the four components. Also, when experimentally derived correlated spectral error was added, ACLS gave better predictions, statistically significant in 15 out of 24 cases simulated. On data sets with nonuniform noise, neither method was statistically better, although ACLS usually had smaller standard errors of prediction (SEPs). The varying results emphasize the need to use realistic simulations when making comparisons between multivariate calibration methods. Even when the differences between the standard errors of prediction were statistically significant, in most cases the differences in SEP were small. This study demonstrated that, unlike CLS, ACLS is competitive with PLS in modeling nonlinearities in spectra without knowledge of all the component concentrations. This competitiveness is important when maintaining and transferring models for system drift, spectrometer differences, and unmodeled components, since ACLS models can be rapidly updated during prediction when used in conjunction with the prediction-augmented classical least squares (PACLS) method, while PLS requires full recalibration.
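The statistical comparison above rests on the Wilcoxon signed rank test applied to paired prediction results; a minimal sketch with SciPy, using invented absolute-error vectors for the two methods.

```python
# Sketch: paired comparison of two calibration methods with the Wilcoxon signed rank test.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(4)
# Hypothetical absolute prediction errors of two methods on the same validation samples.
err_pls = np.abs(rng.normal(0.0, 0.20, 50))
err_acls = np.abs(rng.normal(0.0, 0.25, 50))

stat, p_value = wilcoxon(err_pls, err_acls)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("prediction errors differ significantly between the two methods")
else:
    print("no statistically significant difference detected")
```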

7.
A rapid assessment of product quality can often be made using a combination of near-infrared spectroscopy (NIR) and multivariate calibration. The robustness of such a method is determined by the sensitivity of the multivariate calibration model to variations in the spectral data. An approach is described that uses a combination of experimental design methodology and principal component analysis to identify the main sources of variation in the spectra and to estimate their influence on the quantitative predictions. This is accomplished by comparing variations in a set of measured, replicate spectra to spectra with simulated variations. The approach was applied to the hydroxyl number determination of polyols by NIR spectroscopy and partial least-squares calibration. The results indicated that the most significant sources of variation were due to a variable cell path length and a variable curved background. Correction for these errors resulted in a 58% reduction in the standard deviation of the hydroxyl number predictions, indicating that a substantial improvement in the method precision is possible.

8.
Using fructose solutions as the study system, the slope/bias (S/B) algorithm was applied to transfer fructose-content calibration models, built at different temperatures, between two Fourier transform near-infrared spectrometers from different manufacturers, and the effect of sample temperature variation on model transfer was examined. When models were built directly on the master and slave instruments, the root mean square error of prediction (RMSEP) for the prediction set tended to increase with rising temperature, although the overall change was not significant. With direct prediction, the RMSEP of the prediction set exceeded 0.86 in all cases, which was unsatisfactory; after transfer with the S/B algorithm, the predictions improved, and transfer between master and slave spectra measured at the same temperature outperformed transfer between spectra measured at different temperatures, with RMSEP values of 0.317, 0.389 and 0.416 for same-temperature transfer at 15 °C, 22 °C and 28 °C, respectively, a significant difference. The results show that sample temperature variation has a considerable effect on model transfer; with the sample temperature kept consistent, choosing a suitable temperature helps achieve the best transfer performance.
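A minimal sketch of the slope/bias (S/B) correction used above, in the common formulation where master-model predictions for a set of transfer samples measured on both instruments are regressed against each other; the prediction values are invented.

```python
# Sketch: slope/bias (S/B) correction for calibration transfer between instruments.
import numpy as np

# Hypothetical predictions of the master-instrument model applied to the same
# transfer samples measured on the master and on the slave instrument.
pred_master = np.array([5.1, 10.2, 15.0, 19.8, 25.1, 30.3])   # reference behaviour
pred_slave = np.array([6.0, 11.5, 16.9, 22.0, 27.8, 33.1])    # biased on slave spectra

# Fit pred_master ~ slope * pred_slave + bias.
slope, bias = np.polyfit(pred_slave, pred_master, 1)

def correct(p_slave):
    """Apply S/B correction to predictions made from slave-instrument spectra."""
    return slope * p_slave + bias

new_slave_prediction = 20.5
print(f"slope = {slope:.3f}, bias = {bias:.3f}")
print(f"corrected prediction: {correct(new_slave_prediction):.2f}")
```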

9.
In polyether acrylate (PEA) production, it is important to monitor three process parameters in order to assure a high quality of the final product: hydroxyl (OH) number, viscosity and acidity (acid number). Owing to their high resolution and sensitivity, Fourier transform near-infrared (FT-NIR) process measurements have been shown in the past to yield spectra carrying precise information about these process parameters. In order to perform automatic supervision and to reduce the off-line laboratory analysis effort required of experts and operators, chemometric quantification models have to be used. In this paper, we investigate the use of a specific type of fuzzy system, the so-called Takagi-Sugeno fuzzy system, for calibrating the chemometric models. This type of model architecture supports piecewise local linear predictors, which can flexibly model the different degrees of non-linearity implicitly contained in the mapping between NIR spectra and reference values. The training of these models is conducted by an evolving clustering method (adding new local linear models on demand) and a local (weighted) least squares estimation of the consequent parameters, combined with a wavelength (dimensionality) reduction mechanism. Results on a concrete data set show that this approach can outperform state-of-the-art calibration methods as well as support vector regression as an alternative non-linear model.

10.
Process analytical technologies (PAT) are identified as an essential element in the Quality by Design framework, providing the cornerstone for implementing continuous pharmaceutical manufacturing. This study is concerned with employing three in-line PAT tools: the Eyecon™ 3D imaging system, near-infrared spectroscopy (NIRS) and Raman spectroscopy (RS), in a continuous twin-screw granulation process to enable real-time monitoring and prediction of critical quality attributes of granules. The Thermo Scientific™ Pharma 11 twin-screw granulator was used to manufacture granules from a low-dose formulation with caffeine anhydrous as the model drug. A 30-run full factorial design including three critical process parameters (liquid-to-solid ratio, barrel temperature and throughput) was conducted to evaluate the performance of each analytical tool. Eyecon™ successfully captured the granule size and shape variation from different experimental conditions and demonstrated sufficient sensitivity to the fluctuation of the size parameter D10 in the presence of process perturbations. The partial least squares regression (PLSR) models developed using NIRS showed small relative standard errors of prediction (less than 5%) for most granule physical properties. In contrast, the RS-based PLSR models showed higher prediction errors for granule drug concentration, potentially due to inhomogeneous premixing of the raw materials during calibration model development.

11.
Raman spectroscopy has been widely used to monitor various aspects of the crystallization process. Although it has long been known that particle size can influence Raman signal, relatively little research has been conducted in this area, in particular for mixtures of organic materials. The aim of this study was to investigate the effect of particle size on quantification of polymorphic mixtures. Several sets of calibration samples containing different particle size fractions were prepared and Raman spectra were collected with different probes. Calibration models were built using both univariate and multivariate analysis. It was found that, for a single component system, Raman intensity decreased with increasing particle size. For mixtures, calibration models generated from the same particle size distribution as the sample yielded relatively good predictions of the actual sample composition. However, if the particle sizes of the calibration and unknown samples were different, prediction errors resulted. For extreme differences in particle sizes, prediction errors of up to 20% were observed. Prediction errors could be minimized by changing the sampling optics employed.

12.
Multivariate calibration models are sensitive to wavelength shifts in calibration spectra, as such disturbances are linearly independent of the unshifted spectra and increase the calibration model's dimension. However, if the wavelength shifts included in the calibration model are random, the predictability of the model is not improved; on the contrary, overfitting is introduced, thereby increasing the prediction error. Because calibration spectra are defined to be error free and are the only available data at that point, there is no analytical way to find out that the calibration model is erroneous. This study gives a mathematical explanation of how the model's dimension is increased by wavelength shifts and shows that the additional basis vectors, principal components for instance, possess derivative-shaped features. It is also demonstrated by means of an example that the reverse is not necessarily true. Hence, derivative-shaped features found in principal components are no indication of wavelength-shifted calibration spectra. A method is presented for analyzing calibration spectra for such shifts. The algorithm takes advantage of the fact that artificial compensation of true shifts increases the similarity, i.e., correlation, of shifted spectra with respect to the remaining, unshifted spectra. Synthetic and experimental data are used to demonstrate and assess the performance of the algorithm. It is shown that wavelength shifts in calibration spectra can be detected and corrected if a small number of spectra are disturbed. Significant improvements in the prediction errors of chemometric calibration models can be achieved by means of this shift-correction algorithm.
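A simplified sketch of the shift-analysis idea: for one suspect calibration spectrum, the integer-pixel shift that maximizes its correlation with the mean of the remaining spectra is taken as the correction. The actual algorithm presumably handles sub-pixel shifts by interpolation; this toy version does not.

```python
# Sketch: detecting a wavelength shift in one calibration spectrum by maximizing
# its correlation with the mean of the remaining spectra (integer-pixel shifts only).
import numpy as np

def estimate_correction(spectrum, reference_spectra, max_shift=10):
    """Return the integer shift that best aligns `spectrum` with the other spectra."""
    reference = reference_spectra.mean(axis=0)
    best_shift, best_corr = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(spectrum, shift)
        sl = slice(max_shift, spectrum.size - max_shift)   # ignore wrapped-around edges
        corr = np.corrcoef(shifted[sl], reference[sl])[0, 1]
        if corr > best_corr:
            best_shift, best_corr = shift, corr
    return best_shift

# Toy example: one spectrum shifted by 4 points relative to the others.
x = np.linspace(0, 10, 500)
band = np.exp(-0.5 * ((x - 5) / 0.3) ** 2)
X = np.tile(band, (6, 1)) + 0.01 * np.random.default_rng(5).standard_normal((6, 500))
X[0] = np.roll(X[0], 4)

correction = estimate_correction(X[0], X[1:])
X[0] = np.roll(X[0], correction)          # undo the detected shift
print(f"estimated correction: {correction} points")   # expect -4 here
```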

13.
Preprocessing of near-infrared spectra to remove unwanted (i.e., non-related) spectral variation, together with the selection of informative wavelengths, is considered a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing techniques and selecting different wavelengths is to compare prediction statistics computed with an independent set of data not used to build the actual calibration model. When the errors of the reference values are large, when no such values are available at all, or when only a limited number of samples is available, other methods are needed to evaluate the preprocessing method and wavelength selection. In this work we present a new indicator (SE) that only requires blank sample spectra, i.e., spectra of samples that are mixtures of the interfering constituents (everything except the analyte), a pure analyte spectrum, or alternatively a sample spectrum where the analyte is present. The indicator is based on computing the net analyte signal of the analyte and the total error, i.e., instrumental noise and bias. By comparing the indicator values when different preprocessing techniques and wavelength selections are applied to the spectra, the optimal preprocessing technique and the optimal wavelength selection can be determined without knowledge of reference values, i.e., the non-related spectral variation is minimized. The SE indicator is compared to two other indicators that also use net analyte signal computations. To demonstrate the feasibility of the SE indicator, two near-infrared spectral data sets from the pharmaceutical industry were used: diffuse reflectance spectra of powder samples and transmission spectra of tablets. Especially in pharmaceutical spectroscopic applications, the non-related spectral variation is expected to be rather large and it is important to remove it. The indicator gave excellent results with respect to wavelength selection and optimal preprocessing. The SE indicator performs better than the two other indicators, and it is also applicable to other situations where the Beer-Lambert law is valid.
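A minimal sketch of the net analyte signal (NAS) computation underlying the indicator: the pure analyte spectrum is projected onto the orthogonal complement of the space spanned by the blank spectra, and its norm is compared with an assumed noise level. The exact SE definition in the paper may differ; the ratio printed here is only an illustrative figure of merit.

```python
# Sketch: net analyte signal (NAS) of a pure analyte spectrum with respect to blanks.
import numpy as np

def net_analyte_signal(pure_analyte, blank_spectra):
    """Project the pure analyte spectrum onto the orthogonal complement
    of the space spanned by the (preprocessed) blank spectra."""
    A = blank_spectra.T                               # wavelengths x blanks
    projection = A @ np.linalg.pinv(A)                # projector onto blank space
    return pure_analyte - projection @ pure_analyte

rng = np.random.default_rng(6)
wavelengths = np.arange(400)
analyte = np.exp(-0.5 * ((wavelengths - 150) / 12.0) ** 2)
interferent = np.exp(-0.5 * ((wavelengths - 165) / 30.0) ** 2)
blanks = np.outer(rng.uniform(0.5, 1.5, 10), interferent) \
         + 0.001 * rng.standard_normal((10, 400))

nas = net_analyte_signal(analyte, blanks)
noise_rms = 0.001                                     # assumed instrumental noise level
# A signal-to-error style figure of merit: larger is better, so preprocessing and
# wavelength ranges can be ranked without reference values.
print(f"||NAS|| = {np.linalg.norm(nas):.3f}, ||NAS|| / noise = {np.linalg.norm(nas) / noise_rms:.1f}")
```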

14.
Predictions obtained from a multivariate calibration model are sensitive to variations in the spectra such as baseline shifts, multiplicative effects, etc. Many spectral pretreatment methods have been developed to reduce these distortions, and the best method is usually the one that minimizes the prediction error for an independent test set. This paper shows how multivariate sensitivity can be used to interpret spectral pretreatment results. Understanding why a particular pretreatment method gives good or bad results is important for ruling out chance effects in the conventional process of "trial and error", thus obtaining more confidence in the finally selected model. The principles are exemplified using the transmission near-infrared spectroscopic prediction of oxygenates in ampules of the standard reference material gasoline. The pretreatment methods compared are multiplicative signal correction and the first- and second-derivative methods. It is shown that for this application the first- and second-derivative methods are successful in removing the background. However, differentiating the spectra substantially reduces the multivariate net analyte signal (in the worst case by a factor of 21). Consequently, a significantly smaller multivariate sensitivity is obtained, which leads to increased spectral error propagation, resulting in a larger uncertainty in the regression vector estimate and larger prediction errors. Differentiating the spectra also increases the spectral noise (each time by a factor of 2^(1/2)), but this well-known effect is of minor importance for the current application.
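For reference, a short sketch of the three pretreatments compared above (multiplicative signal correction and Savitzky-Golay first and second derivatives) applied to a simulated spectral matrix; the window length and polynomial order are arbitrary choices.

```python
# Sketch: MSC and Savitzky-Golay derivative pretreatments applied to a spectral matrix X.
import numpy as np
from scipy.signal import savgol_filter

def msc(X, reference=None):
    """Multiplicative signal correction against a reference (default: mean spectrum)."""
    ref = X.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(X)
    for i, spectrum in enumerate(X):
        slope, intercept = np.polyfit(ref, spectrum, 1)   # spectrum ~ slope*ref + intercept
        corrected[i] = (spectrum - intercept) / slope
    return corrected

rng = np.random.default_rng(7)
base = np.exp(-0.5 * ((np.arange(500) - 250) / 40.0) ** 2)
X = np.outer(rng.uniform(0.8, 1.2, 20), base) + rng.uniform(-0.05, 0.05, (20, 1))

X_msc = msc(X)                                                             # MSC
X_d1 = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)   # 1st derivative
X_d2 = savgol_filter(X, window_length=15, polyorder=2, deriv=2, axis=1)   # 2nd derivative
```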

15.
In process analytical applications it is not always possible to keep the measurement conditions constant. However, fluctuations in external variables such as temperature can have a strong influence on measurement results. For example, nonlinear temperature effects on near-infrared (NIR) spectra may lead to a strongly biased prediction result from multivariate calibration models such as PLS. A new method, called Continuous Piecewise Direct Standardization (CPDS), has been developed for the correction of such external influences. It represents a generalization of the discrete PDS calibration transfer method and is able to adjust for continuous nonlinear influences such as the temperature effects on spectra. It was applied to shortwave NIR spectra of ethanol/water/2-propanol mixtures measured at different temperatures in the range 30-70 degrees C. The method was able to remove, almost completely, the temperature effects on the spectra, and prediction of the mole fractions of the chemical components was close to the results obtained at constant temperature.
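CPDS generalizes piecewise direct standardization (PDS); a minimal sketch of classic PDS is given below, in which each wavelength of the spectra measured under the reference condition is regressed on a small window of wavelengths measured under the perturbed condition, using transfer samples measured under both. The data and window width are invented.

```python
# Sketch: classic piecewise direct standardization (PDS), the method CPDS generalizes.
import numpy as np

def pds_transform(X_master, X_slave, window=5):
    """Build a banded transformation matrix F such that X_slave @ F ~ X_master,
    from transfer samples measured under both conditions."""
    n_wl = X_master.shape[1]
    F = np.zeros((n_wl, n_wl))
    for j in range(n_wl):
        lo, hi = max(0, j - window), min(n_wl, j + window + 1)
        # Local least-squares regression of master channel j on a slave window.
        b, *_ = np.linalg.lstsq(X_slave[:, lo:hi], X_master[:, j], rcond=None)
        F[lo:hi, j] = b
    return F

rng = np.random.default_rng(8)
n_transfer, n_wl = 15, 120
X_master = rng.standard_normal((n_transfer, n_wl)).cumsum(axis=1) * 0.01
X_slave = np.roll(X_master, 2, axis=1) * 1.05 + 0.001 * rng.standard_normal((n_transfer, n_wl))

F = pds_transform(X_master, X_slave, window=5)
X_new_corrected = X_slave @ F        # spectra measured under the perturbed condition
```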

16.
A spectrum simulation method is described for use in the development and transfer of multivariate calibration models from near-infrared spectra. By use of previously measured molar absorptivities and solvent displacement factors, synthetic calibration spectra are computed using only background spectra collected with the spectrometer for which a calibration model is desired. The resulting synthetic calibration set is used with partial least squares regression to form the calibration model. This methodology is demonstrated for use in the analysis of physiological levels of glucose (1-30 mM) in an aqueous matrix containing variable levels of alanine, ascorbate, lactate, urea, and triacetin. Experimentally measured data from two different Fourier transform spectrometers with different noise levels and stabilities are used to evaluate the simulation method. With the more stable instrument (A), well-performing calibration models are obtained, producing a standard error of prediction (SEP) of 0.70 mM. With the less stable instrument (B), the calibration based solely on synthetic spectra is less successful, producing an SEP value of 1.58 mM. For cases in which the synthetic spectra do not describe enough spectral variance, an augmentation protocol is evaluated in which the synthetic calibration spectra are augmented with the spectra of a small number of experimentally measured calibration samples. For instruments A and B, respectively, augmentation with measured spectra of nine samples lowers the SEP values to 0.64 and 0.85 mM.
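A hedged sketch of the spectrum-simulation idea under Beer-Lambert assumptions: synthetic calibration spectra are built from assumed molar absorptivities, solvent displacement factors and background spectra of the target instrument; all numerical values below are invented.

```python
# Sketch: synthesizing calibration spectra from molar absorptivities and measured backgrounds.
import numpy as np

rng = np.random.default_rng(9)
n_samples, n_points = 75, 300
path_length_mm = 1.5

# Assumed library quantities (per mM and per mm path), plus water displacement factors.
analytes = ["glucose", "alanine", "lactate", "urea"]
molar_abs = {a: 1e-4 * np.abs(rng.standard_normal(n_points)) for a in analytes}
displacement = {a: 1e-5 * np.abs(rng.standard_normal(n_points)) for a in analytes}
water_abs = 1e-2 * np.abs(rng.standard_normal(n_points))

# Background spectra measured on the target instrument (here: simulated).
background_absorbance = np.tile(water_abs * path_length_mm, (n_samples, 1)) \
                        + 1e-4 * rng.standard_normal((n_samples, n_points))

concentrations = rng.uniform(1.0, 30.0, (n_samples, len(analytes)))      # mM
synthetic = background_absorbance.copy()
for k, a in enumerate(analytes):
    c = concentrations[:, [k]]
    # Add analyte absorbance and subtract the water it displaces (Beer-Lambert).
    synthetic += c * (molar_abs[a] - displacement[a]) * path_length_mm

# `synthetic` and `concentrations[:, 0]` can now feed a PLS calibration for glucose.
```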

17.
Process modelling is the foundation of developing process controllers for monitoring and improving process/system health. Modelling process behaviours using a purely empirical approach might not be feasible due to limitations in collecting large amounts of data. Engineering models provide valuable information about processes' general behaviours, but they might not capture distinct characteristics of the particular process studied. Many recent publications have presented ideas for using limited experimental data to adjust engineering models to make them suitable for certain applications. However, the focus there is on global adjustments, where modification of the engineering model affects the entire model-application region. In practice, some engineering models are only valid in part of the experimental data domain. Moreover, many discrepancies between engineering models and experimental data occur in local regions. For example, in a chemical vapour deposition process, at high temperatures the process may be described by a diffusion-limited model, while at low temperatures it may be described by a reaction-limited model. To address these problems, this article proposes two approaches for integrating engineering and data models: local model calibration and local model averaging. Through local model calibration, the discrepancies between first-principles engineering models and experimental data are resolved locally based on expert feedback. To combine models adjusted locally in some regions with models requiring little adjustment in other regions, a model averaging procedure based on local kernel weights is proposed. The effectiveness of the proposed method is demonstrated on simulated examples and compared against a well-known existing global-adjustment method.

18.
Watari M, Ozaki Y. Applied Spectroscopy 2004, 58(10): 1210-1218.
This paper reports the prediction of the ethylene content (C2 content) in random polypropylene (RPP) and block polypropylene (BPP) in the melt state by near-infrared (NIR) spectroscopy and chemometrics. NIR spectra of RPP and BPP in the melt state were measured by a Fourier transform near-infrared (FT-NIR) on-line monitoring system, and the NIR spectra of RPP and BPP were compared. Partial least-squares (PLS) regression calibration models for the C2 content, developed separately from the RPP and BPP spectral sets, yielded good results (standard error of cross-validation (SECV): RPP, 0.16%; BPP, 0.31%; correlation coefficient: RPP, 0.998; BPP, 0.996). We also built a common PLS calibration model using both the RPP and the BPP spectral sets. The results showed that the common calibration model has larger SECV values than the models based on the RPP or BPP spectral sets individually and is not practical for the prediction of the C2 content. We further investigated whether a calibration model developed from the BPP spectral set can predict the C2 contents of the RPP sample set; if this were possible, it would save a significant amount of work and cost. The results showed that the use of the BPP model for the RPP sample set is difficult, and vice versa, because there are some differences in the molar absorption coefficients between the RPP and BPP spectra. To solve this problem, a method for transferring from one sample spectral set (BPP) to the other (RPP) was studied. A difference spectrum between an RPP spectrum and a BPP spectrum was used to transfer the BPP calibration set to the RPP calibration set. The prediction results for RPP samples obtained with the transferred calibration set and model (standard error of prediction (SEP), 0.23%; correlation coefficient, 0.994) showed that it is possible to transfer the BPP calibration set to the RPP calibration set. We also studied the transfer from the RPP calibration set (C2 content range 0-4.3%) to the BPP calibration set. The prediction of the C2 content (range 0-7.7%) in BPP using the calibration model based on the BPP spectra transferred from the RPP spectra showed that, because the peak intensities change nonlinearly with C2 content, the transfer method is effective only for interpolation within the C2 content range of the original calibration.

19.
An updating procedure is described for improving the robustness of multivariate calibration models based on near-infrared spectroscopy. Employing a single blank sample containing no analyte, repeated spectra are acquired during the instrumental warm-up period. These spectra are used to capture the instrumental profile on the analysis day in a way that can be used to update a previously computed calibration model. By augmenting the original spectra of the calibration samples with a group of spectra collected from the blank sample, an updated model can be computed that incorporates any instrumental drift that has occurred. This protocol is evaluated in the context of an analysis of physiological levels of glucose in a simulated biological matrix designed to mimic blood plasma. Employing data from calibration and prediction samples acquired over approximately six months, procedures are studied for implementing the algorithm in conjunction with calibration models based on partial least squares (PLS) regression. Over the range of 1-20 mM glucose, the final algorithm achieves a standard error of prediction (SEP) of 0.79 mM when the augmented PLS model is applied to data collected 176 days after the collection of the calibration spectra. Without updating, the original PLS model produces a seriously degraded SEP of 13.4 mM.
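A minimal sketch of the updating protocol, assuming the blank spectra are assigned an analyte reference value of zero (the blank contains no analyte): the day's blank warm-up spectra are appended to the original calibration set and the PLS model is refit so that the current instrumental profile enters the model.

```python
# Sketch: updating a PLS calibration with blank-sample spectra collected on the analysis day.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(10)
n_cal, n_points = 60, 300
glucose_band = np.exp(-0.5 * ((np.arange(n_points) - 120) / 15.0) ** 2)

y_cal = rng.uniform(1.0, 20.0, n_cal)                              # mM glucose
X_cal = 1e-3 * np.outer(y_cal, glucose_band) + 1e-4 * rng.standard_normal((n_cal, n_points))

# Blank spectra acquired during warm-up months later: no analyte, but new drift/baseline.
drift = 5e-4 * np.linspace(0, 1, n_points)
X_blank = drift + 1e-4 * rng.standard_normal((20, n_points))
y_blank = np.zeros(20)                # blank contains no analyte (assumed reference value)

# The augmented model incorporates the current instrumental profile.
X_aug = np.vstack([X_cal, X_blank])
y_aug = np.concatenate([y_cal, y_blank])
updated_model = PLSRegression(n_components=6).fit(X_aug, y_aug)
```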

20.
The ability to quantify lysozyme is demonstrated for a series of aqueous samples with different degrees of scattering. Near-infrared spectra are collected for two sets of lysozyme/scattering solutions. In both sets of samples, the solutions are composed of lysozyme dissolved in acetate buffer with suspended monodisperse latex microspheres of polystyrene. The diameter of the microspheres is 6.4 microm for the first set and 0.6 microm for the second. For each set, the amount of microspheres ranges from 0.005 to 0.998 wt %, the lysozyme concentrations range from 0.834 to 28.6 mg/mL, and the solution compositions are designed to minimize correlations between the concentration of lysozyme and the percentage of microspheres. Near-infrared spectra are collected individually for each set of solutions. Single-beam spectra are collected over the combination spectral range (5000-4000 cm-1, 2.0-2.5 microm) by transmitting the incident radiation through a 1.5-mm-thick sample maintained at 21 degrees C. Partial least-squares calibration models are evaluated individually for each data set, both with and without wavelength optimization. Results indicate that models from raw, unmodified single-beam spectra are incapable of extracting the lysozyme concentration from these highly scattering solutions. Accurate concentration measurements are possible, however, by implementing either a multiplicative scatter correction of the single-beam spectra or by taking the ratio of these single-beam spectra to an appropriate reference spectrum. In addition, digital Fourier filtering of these spectra enhances model performance. The best calibration model in the presence of the 6.4-microm microspheres is obtained from multiplicative scatter corrected single-beam spectra over the 4550-4190 cm-1 spectral range. The mean percent error of prediction (MPEP) and standard error of prediction (SEP) for this model are 2.2% and 0.28 mg/mL, respectively. Likewise, the multiplicative scatter corrected spectra with wavelength optimization provided the best calibration model for the 0.6-microm data set; in this case, the MPEP and SEP are 2.3% and 0.44 mg/mL, respectively. In addition, the ability to predict lysozyme concentrations is evaluated for the situation where the degree of scattering is greater in the prediction samples than in the calibration samples. Differences in the prediction ability are noted between the 6.4- and 0.6-microm data sets.
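A short sketch of the scatter-handling steps mentioned above: single-beam spectra are ratioed to a reference spectrum and then passed through a Gaussian bandpass filter in the Fourier (digital-frequency) domain; the filter centre and width used here are arbitrary choices, not the paper's settings.

```python
# Sketch: ratioing single-beam spectra to a reference and Gaussian Fourier filtering.
import numpy as np

def fourier_bandpass(spectra, center=0.03, width=0.01):
    """Gaussian bandpass filter applied in the Fourier (digital frequency) domain.
    `center` and `width` are in cycles/point and are arbitrary values here."""
    n = spectra.shape[-1]
    freqs = np.fft.rfftfreq(n)                         # digital frequencies, cycles/point
    gauss = np.exp(-0.5 * ((freqs - center) / width) ** 2)
    return np.fft.irfft(np.fft.rfft(spectra, axis=-1) * gauss, n=n, axis=-1)

rng = np.random.default_rng(11)
n_samples, n_points = 12, 512
reference = 1.0 + 0.1 * rng.standard_normal(n_points)          # reference single-beam
single_beam = reference * np.exp(-0.02 * rng.uniform(0.8, 1.2, (n_samples, 1))) \
              + 0.001 * rng.standard_normal((n_samples, n_points))

absorbance_like = -np.log10(single_beam / reference)            # ratio to reference
filtered = fourier_bandpass(absorbance_like)                    # suppress baseline and noise
```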

