Similar Documents
20 similar documents found (search time: 734 ms)
1.
The application of fluorescence excitation-emission matrix (EEM) spectroscopy to the quantitative analysis of complex, aqueous solutions of cell culture media components was investigated. These components, yeastolate, phytone, recombinant human insulin, eRDF basal medium, and four different chemically defined (CD) media, are used for the formulation of basal and feed media employed in the production of recombinant proteins using a Chinese hamster ovary (CHO) cell-based process. The comprehensive analysis (either identification or quality assessment) of these materials using chromatographic methods is time-consuming and expensive and is not suitable for high-throughput quality control. The use of EEM in conjunction with multiway chemometric methods provided a rapid, nondestructive analytical method suitable for the screening of large numbers of samples. Here we used multiway robust principal component analysis (MROBPCA) in conjunction with n-way partial least squares discriminant analysis (NPLS-DA) to develop a robust routine for both the identification and quality evaluation of these important cell culture materials. These methods are applicable to a wide range of complex mixtures because they do not rely on any predetermined compositional or property information, which makes them potentially very useful for sample handling, tracking, and quality assessment in the biopharmaceutical industry.
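
A minimal sketch (not the authors' code) of the common preprocessing step behind such methods: unfolding a stack of EEM landscapes into a samples-by-variables matrix and screening it with plain PCA. All shapes and data are hypothetical, and ordinary PCA stands in for the robust multiway variants (MROBPCA, NPLS-DA) used in the paper.

```python
import numpy as np

# Hypothetical shapes: 40 samples, 31 excitation x 41 emission wavelengths.
# A real EEM study would fit multiway models; PCA on the unfolded matrix is
# a simplified stand-in for sample screening.
rng = np.random.default_rng(0)
eem = rng.random((40, 31, 41))           # stack of EEM landscapes

X = eem.reshape(40, -1)                  # unfold each landscape to a row vector
X = X - X.mean(axis=0)                   # mean-center across samples

U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :3] * s[:3]                # sample scores on first 3 PCs
explained = s[:3] ** 2 / (s ** 2).sum()  # variance captured by each PC
print(scores.shape, explained.round(3))
```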

2.
We have developed a new data acquisition approach, followed by a suitable data analysis, for laser-induced breakdown spectroscopy. It provides absolute concentrations of elements in particulate materials (e.g., industrial dusts and soils). In contrast to the known calibration procedures (based on the ratio of spectral lines), which are applicable only when one component is constant, this approach requires no constant constituent and yields absolute (rather than relative) concentrations. Thus, the major drawback of this analytical method, namely signal instability (especially when particulate materials are concerned), is partially overcome. Unlike the commonly used integrated data acquisition, we use a sequence of signals from single breakdown events. We compensate for pulse-to-pulse fluctuations in an intrinsic way, and the final results do not depend on the presence of any constant component. Extended linear calibration curves are obtained, and limits of detection are improved by an order of magnitude relative to previous methods applied to the same samples (e.g., a detection limit of 10⁻¹² g of Zn in aerosol samples). The proposed compensation for pulse variations is based on the assumption that they can be described as a multiplicative effect for both the spectral peaks and a component of the baseline. In other words, we assume that the same fluctuation pattern observed in the spectral peaks is present in the baseline as well. This assumption is shown to hold and is utilized in the proposed method. In addition, a proper data-filtering process, which eliminates ill-conditioned spectra, is shown to partially compensate for problems due to the nature of the analysis of particulate materials.
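
The multiplicative-compensation assumption can be illustrated with simulated single-shot data (all values hypothetical): dividing each shot's peak signal by a baseline component that carries the same fluctuation pattern cancels the common pulse-to-pulse factor.

```python
import numpy as np

# Sketch of per-shot multiplicative compensation (illustrative only).
# Assumption: each single-shot spectrum is a common pattern scaled by a random
# shot-to-shot factor f, and that factor also scales a baseline component.
rng = np.random.default_rng(1)
n_shots, true_peak, true_base = 200, 50.0, 400.0
f = rng.lognormal(0.0, 0.4, n_shots)           # multiplicative pulse fluctuations

peak_area = f * true_peak + rng.normal(0, 2, n_shots)
baseline = f * true_base + rng.normal(0, 5, n_shots)

raw = peak_area                                 # integrated signal: very noisy
compensated = peak_area / baseline              # ratio cancels the common factor f
print(raw.std() / raw.mean())                   # large relative spread
print(compensated.std() / compensated.mean())   # much smaller
```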

3.
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where this property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte signal theory. The results have strong implications for the planning of multiway analytical experiments.
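
The Monte Carlo idea, propagating synthetic noise through the calibration many times and comparing the output scatter to the input noise, can be sketched on a simple least-squares calibration; a full PARAFAC implementation is beyond a short example, so the linear model below (hypothetical profiles and concentrations) is only a stand-in.

```python
import numpy as np

# Monte Carlo estimate of noise amplification (a variance inflation factor) for
# a simple least-squares calibration -- a stand-in for the same procedure
# applied to PARAFAC scores in the multiway setting.
rng = np.random.default_rng(2)
S = rng.random((100, 3))                  # hypothetical pure-component profiles
c_true = np.array([1.0, 0.5, 0.2])        # hypothetical concentrations
x = S @ c_true

sigma, n_trials = 0.01, 2000
c_hat = np.empty((n_trials, 3))
for i in range(n_trials):
    noisy = x + rng.normal(0.0, sigma, x.shape)
    c_hat[i], *_ = np.linalg.lstsq(S, noisy, rcond=None)

# Sensitivity for analyte k ~ sigma / std(c_hat[:, k]): lower output scatter
# relative to input noise means higher sensitivity.
print(sigma / c_hat.std(axis=0))
```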

4.
5.
The Micronutrients Measurement Quality Assurance Program (M2QAP) at the National Institute of Standards and Technology was created in 1984 with the goal of improving among-participant measurement comparability for fat-soluble vitamin-related compounds in human serum. We recently described improved tools for evaluating comparison exercise data; here we extend and apply these tools to the evaluation of the measurement community's performance over the entire 15-year history of the M2QAP. We display measurement performance characteristics for the 14 measurands most commonly reported by the M2QAP community. We confirm that among-participant comparability for total beta-carotene cannot be much improved without improving average long-term within-participant measurement stability. We demonstrate that improved measurand definition and/or identification of interferences may help participants improve comparability for many of the M2QAP's other commonly reported measurands. The reported measurement performance characteristics may be of interest to clinical, nutritional, and epidemiological studies involving any of these measurands. The data analysis techniques utilized may be applicable to other programs.

6.
7.
Sufficient dimension reduction (SDR) techniques have proven to be very useful data analysis tools in various applications. Underlying many SDR techniques is a critical assumption that the predictors are elliptically contoured. When this assumption appears to be violated, practitioners usually try a variable transformation such that the transformed predictors become (nearly) normal. The transformation function is often chosen from the log and power transformation family, as suggested in the celebrated Box–Cox model. However, any parametric transformation can be too restrictive, raising the danger of model misspecification. We suggest a nonparametric variable transformation method after which the predictors become normal. To demonstrate the main idea, we combine this flexible transformation method with two well-established SDR techniques, sliced inverse regression (SIR) and the inverse regression estimator (IRE). The resulting SDR techniques are referred to as TSIR and TIRE, respectively. Both simulation and real data results show that TSIR and TIRE have very competitive performance. Asymptotic theory is established to support the proposed method. The technical proofs are available as supplementary materials.
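
A sketch of the two ingredients under discussion, on hypothetical data: a rank-based "normal score" transform of each predictor, followed by a bare-bones SIR estimate of the dimension-reduction directions. This illustrates the general idea, not the TSIR implementation.

```python
import numpy as np
from scipy.stats import norm, rankdata

# Nonparametric transform + sliced inverse regression, illustrative only.
rng = np.random.default_rng(3)
n, p = 500, 4
X = rng.exponential(1.0, (n, p))          # skewed, non-elliptical predictors
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)

# Rank-based transform: each column becomes (nearly) standard normal.
Z = norm.ppf(rankdata(X, axis=0) / (n + 1))

# SIR: slice y, average standardized predictors within slices, eigendecompose.
Zc = Z - Z.mean(axis=0)
Sigma = Zc.T @ Zc / n
W = np.linalg.cholesky(np.linalg.inv(Sigma))   # whitening matrix
Zs = Zc @ W
order = np.argsort(y)
slices = np.array_split(order, 10)
M = sum(len(s) / n * np.outer(Zs[s].mean(0), Zs[s].mean(0)) for s in slices)
vals, vecs = np.linalg.eigh(M)
directions = W @ vecs[:, ::-1][:, :2]          # leading SDR directions
print(vals[::-1][:2], directions.shape)
```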

8.
Tolerance design affects the quality, cost, and cycle time of a product. Most of the literature on tolerance design problems has focused on developing exact methods to minimize manufacturing cost or quality loss. The inherent assumption in this approach is that the assembly function is known before a tolerance design problem is analysed. With current developments in CAD (computer-aided design) software, design engineers can proceed with tolerance design problems without knowing assembly functions in advance. In this study, Monte Carlo simulation is performed using VSA-3D/Pro software to obtain experimental data. The design of experiments (DOE) approach is then adopted for data analysis in order to select critical components for cost reduction and quality improvement. By implementing the computer experiments discussed here, a tolerance design analysis that improves quality and reduces cost can be performed for any complex assembly via computer during the early stages of design.
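
The Monte Carlo step can be sketched for a one-dimensional stack-up with a known (hypothetical) assembly function; VSA-3D/Pro performs the same kind of sampling for full 3D assemblies where the assembly function is not available in closed form.

```python
import numpy as np

# Monte Carlo tolerance stack-up for a simple linear assembly gap.
# All dimensions, tolerances, and spec limits are hypothetical.
rng = np.random.default_rng(4)
n = 100_000
nominal = np.array([25.0, 40.0, 15.0])    # nominal sizes of stacked components
tol = np.array([0.05, 0.10, 0.05])        # +/- 3-sigma tolerances

parts = rng.normal(nominal, tol / 3, (n, len(nominal)))
gap = 80.4 - parts.sum(axis=1)            # assembly function: housing - stack

spec_lo, spec_hi = 0.2, 0.6
yield_rate = np.mean((gap > spec_lo) & (gap < spec_hi))
print(gap.mean().round(4), gap.std().round(4), yield_rate)
```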

9.
A multivariate statistical process control (MSPC) method using dynamic multiway neighborhood preserving embedding (DMNPE) is proposed for fed-batch process monitoring. Unlike principal component analysis (PCA), which aims at preserving the global Euclidean structure of the data set, neighborhood preserving embedding (NPE) aims to preserve its local neighborhood structure. The neighborhood preserving property enables NPE to find more meaningful intrinsic information hidden in high-dimensional observations than PCA, and NPE is also more robust than PCA. In addition, a dynamic monitoring approach based on a moving-window technique is employed to deal with the time-variant nature of dynamic processes. An industrial cephalosporin fed-batch fermentation process is used to demonstrate the performance of DMNPE. The results show the advantages of DMNPE over methods such as dynamic multiway PCA (DMPCA), static multiway NPE (SMNPE), and static multiway PCA (SMPCA) in fed-batch process monitoring. Finally, the robustness of DMNPE monitoring is tested by adding noise to the original data sets.
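
A much simplified sketch of the moving-window monitoring idea, with a PCA-based T² statistic standing in for the NPE projection (the data, shift size, window length, and control limit are all hypothetical):

```python
import numpy as np

# Moving-window monitoring sketch: PCA T^2 on a sliding window, used here as
# a stand-in for the NPE projection. Illustrative only.
rng = np.random.default_rng(5)
data = rng.normal(0, 1, (300, 8))
data[200:] += 3.0                          # simulated process shift at t = 200

window, limit, t2_trace = 50, 20.0, []
for t in range(window, data.shape[0]):
    W = data[t - window:t]                 # window of recent observations
    mu, sd = W.mean(0), W.std(0) + 1e-9
    Wn = (W - mu) / sd
    _, s, Vt = np.linalg.svd(Wn, full_matrices=False)
    lam = s[:2] ** 2 / (window - 1)        # variance along top 2 components
    score = ((data[t] - mu) / sd) @ Vt[:2].T
    t2_trace.append((score ** 2 / lam).sum())

first = next((i + window for i, v in enumerate(t2_trace) if v > limit), None)
print("first alarm at sample:", first)
```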

10.
In a previous article, a new method allowing the treatment of large Markovian problems was presented. Based on a graph describing the influences between the components of the system, it performs successive approximate aggregations on the exact Markovian system to reduce its size. The main drawback of this method, as of any approximate method, is assessing its validity. That is why we develop a new presentation of the method here and define, from this presentation, error bounds for the approximate results. They are then tested on two applications, one being very large, with more than 10³⁰ states in the exact Markovian system. We also extend the method, initially defined for availability problems, to reliability problems.

11.
F. J. Giron, S. Rios. TEST, 1980, 31(1): 17-38
Summary. In this paper the theoretical and practical implications of dropping, from the basic Bayesian coherence principles, the assumption of comparability of every pair of acts are examined. The resulting theory is shown to be still perfectly coherent and to include Bayesian theory as a particular case. In particular, we question the need to weaken or rule out some of the axioms that constitute the coherence principles; what their practical implications are; how this leads to the notion of partial information or partial uncertainty in a certain sense; how this partial information is combined with sample information; and how this relates to Bayesian methods. We also point out the relation of this approach to rational behaviour with the more general (and apparently unrelated) notion of domination structures as applied to multicriteria decision making.

12.
In this paper, we study the rich class of formulations that arise in the limit analysis and design of elastic/plastic structures in the presence of contact constraints. It is well known that in the absence of contacts, both the limit analysis and limit design problems can be written as linear programs. However, when contact constraints are present, the structure effectively exhibits both softening and stiffening behaviour under monotonically increasing loading. The resulting limit analysis and limit design problems are non-convex and are difficult to solve due to the presence of complementarity-type equality constraints. We show that by using a mixed form of the minimum principle, we can restate the limit analysis and limit design problems as two- and three-level formulations, respectively. Further, under a strong assumption on the problem and solution data, we can take advantage of the underlying convexity to reduce both of these multilevel formulations to equivalent linear programs. While it may not be possible to always verify this assumption in practice, we show that a two-step iterative procedure is effective in reaching a solution to the limit design problem.
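
For the contact-free case the abstract mentions, the classical lower-bound limit analysis LP can be written out directly. The toy problem below (hypothetical member capacities and load) computes the collapse load factor of three parallel bars sharing one load, using scipy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Lower-bound limit analysis as a linear program, for a toy statically
# indeterminate system. Variables: member forces N1..N3 and load factor lam.
yield_force = np.array([10.0, 15.0, 10.0])   # member capacities |N_i| <= c_i

# maximize lam  <=>  minimize -lam
c = np.array([0.0, 0.0, 0.0, -1.0])
# Equilibrium: N1 + N2 + N3 - lam * P = 0, with applied load P = 20.
A_eq = np.array([[1.0, 1.0, 1.0, -20.0]])
b_eq = np.array([0.0])
bounds = [(-f, f) for f in yield_force] + [(0.0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("collapse load factor:", res.x[-1])    # (10 + 15 + 10) / 20 = 1.75
```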

13.
R. Toscano, S. B. Amouri. Engineering Optimization, 2013, 45(12): 1425-1446
This article introduces an extension of standard geometric programming (GP) problems, called quasi-geometric programming (QGP) problems. The idea behind QGP is simple: a QGP problem becomes a GP problem when some of its variables are held constant. The consideration of this particular kind of nonlinear, and possibly non-smooth, optimization problem is motivated by the fact that many engineering problems can be formulated, or well approximated, as QGP problems. However, solving a QGP problem remains a difficult task due to its intrinsically nonconvex nature. This is why this article introduces some simple approaches for solving this kind of nonconvex problem. Notably, the proposed methods do not require the development of a customized solver and work well with any existing solver able to solve conventional GP problems. Some considerations on the robustness issue are also presented. Various optimization problems are considered to illustrate the ability of the proposed methods to solve QGP problems. Comparison with previously published work is also given.
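
A sketch of the QGP decomposition on a hypothetical toy problem: once the scalar a is fixed, the inner problem is an ordinary GP that any GP-capable solver handles (here cvxpy with gp=True); an outer search then covers the fixed variable. This illustrates the "no customized solver" point, not the article's specific algorithms.

```python
import numpy as np
import cvxpy as cp

# Inner problem: for fixed a > 0, a standard GP in positive variables x, y.
def inner_gp(a):
    x = cp.Variable(pos=True)
    y = cp.Variable(pos=True)
    objective = cp.Minimize(a * x * y + x / y + y)   # posynomial for fixed a
    constraints = [x * y >= 2.0, x <= 3.0]
    prob = cp.Problem(objective, constraints)
    prob.solve(gp=True)
    return prob.value

# Outer search over the "non-GP" variable a (here a crude grid; the total
# cost function is hypothetical).
grid = np.linspace(0.5, 2.0, 16)
vals = [inner_gp(a) + 1.0 / a for a in grid]
best = grid[int(np.argmin(vals))]
print("best a:", best, "cost:", min(vals))
```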

14.
Robust Parameter Estimation in Dynamic Systems
In this paper we present a practical method for robust parameter estimation in dynamic systems. We follow the very successful approach for solving optimization problems in dynamic systems, namely the boundary value problem (BVP) approach. The suggested method combines multiple shooting for parameterizing the dynamics, a flexible realization of the BVP principle, with a fast Gauss-Newton algorithm for solving the resulting constrained l1 problem. We give an overview of the theoretical background as well as the details of a numerical implementation. We discuss why the Gauss-Newton algorithm, which is known to perform well mainly on well-conditioned problems, is appropriate for parameter estimation problems, while quasi-Newton methods are of only limited use for parameter estimation. The method is implemented on the basis of the direct multiple shooting method as implemented in PARFIT, thus inheriting all of PARFIT's basic properties, such as numerical stability, reliability, and efficiency. The new code has been successfully applied to real-life parameter estimation problems in enzyme and chemical kinetics.
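
A minimal multiple-shooting sketch (not PARFIT): the trajectory is parameterized per shooting interval, data mismatch and continuity conditions form one residual vector, and scipy's soft-l1 loss approximates the robust l1 objective. Model, data, and intervals are all hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Estimate the rate k in x' = -k*x from noisy data with one gross outlier.
rng = np.random.default_rng(6)
t_obs = np.linspace(0.0, 4.0, 21)
x_obs = np.exp(-0.7 * t_obs) + rng.normal(0, 0.02, t_obs.size)
x_obs[5] += 0.5                             # gross outlier

segs = [(0.0, 2.0), (2.0, 4.0)]             # shooting intervals

def residuals(z):
    k, s0, s1 = z                           # rate + initial state per segment
    res, ends = [], []
    for (t0, t1), s in zip(segs, (s0, s1)):
        mask = (t_obs >= t0) & (t_obs <= t1)
        sol = solve_ivp(lambda t, x: -k * x, (t0, t1), [s],
                        t_eval=t_obs[mask], rtol=1e-8)
        res.append(sol.y[0] - x_obs[mask])
        ends.append(sol.y[0][-1])
    # continuity (matching) condition between neighbouring segments
    res.append(np.array([ends[0] - s1]))
    return np.concatenate(res)

fit = least_squares(residuals, x0=[0.3, 1.0, 0.5], loss="soft_l1", f_scale=0.05)
print("estimated k:", fit.x[0])
```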

15.
Hojo, Hiroshi. Behaviormetrika, 1986, 13(20): 1-12

The present paper provides two response models: one for binary ranking and the other for sorting. The former describes the behavior of choosing, in a random order, only those comparison stimuli that are judged to be very similar to a standard stimulus; the latter describes the selection of stimuli judged very similar to each other to form them into clusters. The key assumption of these models is that the subject perceives any two stimuli as very similar when their dissimilarity, which varies over time, falls below response thresholds associated with those stimuli. Maximum likelihood procedures are used to estimate the parameters of these models. The proposed models are applied, for illustrative purposes, to similarity data collected by the binary ranking and sorting methods. We discuss some advantages of the binary ranking method for collecting similarity data and a practical limitation of our response model for sorting.


16.
An arbitrary Lagrangian-Eulerian boundary element method for large-amplitude sloshing problems
This paper proposes an arbitrary Lagrangian-Eulerian boundary element method for moving-boundary problems. With this method, the control points on the boundary track the free surface accurately while element distortion is avoided, maintaining good numerical stability; worked examples show that this is particularly effective for solving large-amplitude sloshing problems. In addition, the concept of a critical amplitude is introduced, the conditions under which the small-amplitude assumption may be used in sloshing analysis are examined, and an order-of-magnitude bound on this amplitude is given for the first time.

17.
Liu, Z., Cao, Y. IET Systems Biology, 2008, 2(5): 334-341
Morton-Firth and Bray's stochastic simulator (StochSim) and Gillespie's stochastic simulation algorithm (SSA) are two important methods for stochastic modelling and simulation of biochemical systems. They have been widely applied to different biological problems. A key question is discussed here: are these two methods equivalent? The two methods are compared using fundamental probability analysis. The analysis clearly shows that, when the time step in StochSim is chosen very small, StochSim can be viewed as a first-order approximation to the SSA. The analysis also explains why the SSA is usually much more efficient than StochSim for biochemical systems. However, when multistate species are present in a system, StochSim clearly shows its advantage, and complexity analysis is used to explain this advantage. A hybrid SSA (HSSA) is proposed to combine the advantages of both StochSim and the SSA. When the populations of the multistate species are small, the HSSA is very efficient. Numerical experiments are presented to verify the analysis.
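
For reference, the direct-method SSA that the comparison refers to reduces, for a single reaction channel, to the few lines below (hypothetical rate and population; real systems have many channels and propensities):

```python
import numpy as np

# Bare-bones Gillespie SSA for the single decay reaction A -> 0 with rate k.
rng = np.random.default_rng(7)

def ssa(a0=100, k=0.5, t_end=10.0):
    t, a = 0.0, a0
    times, counts = [0.0], [a0]
    while t < t_end and a > 0:
        propensity = k * a                       # total reaction propensity
        t += rng.exponential(1.0 / propensity)   # time to next reaction event
        a -= 1                                   # fire the single channel
        times.append(t)
        counts.append(a)
    return np.array(times), np.array(counts)

times, counts = ssa()
print(f"A decayed to {counts[-1]} after {times[-1]:.2f} time units")
```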

18.
Resolving factor analysis
Bilinear data matrices may be resolved into abstract factors by factor analysis. The underlying chemical processes that generated the data may be deduced from the abstract factors by hard (model-fitting) or soft (model-free) analyses. We propose a novel approach that combines the advantages of both hard and soft methods, in that only a few parameters have to be fitted, while the assumptions made about the system are very general and common to a range of possible models: the true chemical factors span the same space as the abstract factors and may be mapped onto the abstract factors by a transformation matrix T, since they are a linear combination of the abstract factors. The difference between the true factors and any other linear combination of the abstract factors is that the true factors conform to known chemical constraints (for instance, nonnegativity of absorbance spectra or monomodality of chromatographic peaks). Thus, by excluding linear combinations of the abstract factors that are not physically possible (assuming a unique solution), we can find the true chemical factors. This is achieved by passing the elements of a transformation matrix to a nonlinear optimization routine, to find the best estimate of T that fits the criteria. The optimization routine usually converges to the correct minimum with random starting parameters, but more difficult problems require starting parameters based on some other method, for instance evolving factor analysis (EFA). We call the new method resolving factor analysis (RFA). The use of RFA is demonstrated with computer-generated kinetic and chromatographic data and with real chromatographic (HPLC) data. RFA produces correct solutions with data sets that are refractory to other methods, for instance, data with an embedded nonconstant baseline.
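
The core rotation idea can be sketched in a few lines: map abstract SVD factors onto candidate chemical factors through a matrix T and fit T by penalizing constraint violations (here plain nonnegativity, on toy bilinear data). Real applications add more constraints and better starting values, as the abstract notes.

```python
import numpy as np
from scipy.optimize import minimize

# Toy resolving-factor-analysis-style rotation: illustrative only.
rng = np.random.default_rng(8)
C_true = np.abs(rng.random((30, 2)))       # true concentration profiles
S_true = np.abs(rng.random((2, 50)))       # true spectra
D = C_true @ S_true                        # bilinear data matrix

U, s, Vt = np.linalg.svd(D)
A = U[:, :2] * s[:2]                       # abstract factors (scores)
B = Vt[:2]                                 # abstract loadings

def penalty(t_flat):
    T = t_flat.reshape(2, 2)
    C, S = A @ T, np.linalg.solve(T, B)    # candidate true factors: C @ S = D
    return np.sum(np.minimum(C, 0) ** 2) + np.sum(np.minimum(S, 0) ** 2)

res = minimize(penalty, rng.normal(size=4), method="Nelder-Mead")
print("residual negativity penalty:", res.fun)
```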

19.
In flat glass manufacturing, glass products of various dimensions are cut from a glass ribbon that runs continuously on a conveyor belt. Placement of glass products on the glass ribbon is restricted by defects of varying severity located on the ribbon as well as by the quality grades of the products to be cut. In addition to cutting products, a common practice is to remove defective parts of the glass ribbon as scrap glass. As the glass ribbon moves continuously, cutting decisions need to be made within seconds, which makes this online problem very challenging. A simplifying assumption is to limit scrap cuts to those made immediately behind a defect (a cut-behind-fault, or CBF). We propose an online algorithm for the glass cutting problem that solves a series of static cutting problems over a rolling horizon. We solve the static problem using two methods: a dynamic programming algorithm (DP) that exploits the CBF assumption, and a mixed integer programming (MIP) formulation with no CBF restriction. While both methods improve the process yield substantially, the results indicate that MIP significantly outperforms DP, which suggests that the computational benefit of the CBF assumption comes at the cost of inferior solution quality.
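
A toy version of the static subproblem, under strong simplifications (unit-step scrap, point defects, no quality grades or CBF logic), shows the dynamic-programming structure:

```python
# Toy dynamic program for 1-D cutting of a ribbon with point defects: at each
# position, either scrap one unit or cut a product whose span avoids every
# defect. A stand-in for the static subproblem; real instances add grades,
# widths, and CBF rules. All data are hypothetical.
def cut_ribbon(length, products, defects):
    # products: list of (size, value); defects: set of defect positions
    best = [0.0] * (length + 1)
    for i in range(length - 1, -1, -1):
        best[i] = best[i + 1]                       # scrap one unit
        for size, value in products:
            if i + size <= length and not any(i <= d < i + size for d in defects):
                best[i] = max(best[i], value + best[i + size])
    return best[0]

print(cut_ribbon(20, [(5, 4.0), (8, 7.0)], {6, 13}))   # -> 12.0
```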

20.
The assessment of structural integrity data requires statistical analysis. However, most statistical analysis methods make some assumption regarding the underlying distribution. Here, a new distribution-free statistical assessment method based on a combination of Rank and Bimodal probability estimates is presented and shown to give consistent estimates of different probability quantiles. The method is applicable to any data set expressed as a function of two parameters, and data with more than two parameters can always be expressed as different subsets varying only two parameters; in principle, this makes the method applicable to the analysis of more complex data sets. The strength of the statistical analysis method presented lies in the objectivity of the result: there is no need to make any subjective assumptions regarding the underlying distribution or the relationship between the parameters considered.
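
Rank-based plotting positions are the simplest distribution-free ingredient of this kind of analysis; the sketch below (hypothetical strength data, Benard's median-rank approximation) estimates a quantile without any distributional assumption. The paper's Rank/Bimodal combination is more elaborate.

```python
import numpy as np

# Distribution-free quantile estimation from ranks (median-rank style
# plotting positions). Illustrative data only.
rng = np.random.default_rng(9)
data = np.sort(rng.weibull(2.0, 25) * 400)       # hypothetical strength data

n = data.size
ranks = np.arange(1, n + 1)
prob = (ranks - 0.3) / (n + 0.4)                 # cumulative probability estimate

# 10% quantile: first observation whose plotting position reaches 0.10 --
# no assumption about the underlying distribution.
q10 = data[np.searchsorted(prob, 0.10)]
print(f"estimated 10% quantile: {q10:.1f}")
```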
