Similar Articles (20 results)
1.
Consensus Principal Component Analysis (CPCA) is a multiblock method designed to reveal covariant patterns between and within several multivariate data sets. The computation of the parameters of this method, namely block scores, block loadings, global loadings and global scores, is based on an iterative procedure. However, very few properties are known regarding the convergence of this iterative procedure. The paper discloses a monotonicity property of CPCA and exhibits an optimisation criterion for which the CPCA algorithm provides a monotonically convergent solution. This makes it possible to highlight new properties of this method of analysis and to pinpoint its connections to existing methods such as Generalized Canonical Correlation Analysis and Multiple Co-inertia Analysis.
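
The iterative procedure referred to above can be sketched with a simple NIPALS-style loop that alternates between block-level and global quantities. The code below is a minimal illustration of that kind of scheme for a single global component, not the exact algorithm analysed in the paper; the function name and the convergence tolerance are arbitrary.

```python
# Minimal sketch of one CPCA-style component extracted by an iterative
# (NIPALS-like) scheme; illustrative only, not the paper's exact algorithm.
import numpy as np

def cpca_one_component(blocks, n_iter=500, tol=1e-10):
    """blocks: list of (n_samples, n_vars_b) arrays sharing the same rows."""
    t = blocks[0][:, [0]].copy()          # initialise the global score
    for _ in range(n_iter):
        t_old = t
        block_scores, block_loadings = [], []
        for Xb in blocks:
            pb = Xb.T @ t / (t.T @ t)     # block loadings
            pb = pb / np.linalg.norm(pb)
            block_scores.append(Xb @ pb)  # block scores
            block_loadings.append(pb)
        T = np.hstack(block_scores)       # super-matrix of block scores
        w = T.T @ t / (t.T @ t)           # global (super) weights
        w = w / np.linalg.norm(w)
        t = T @ w                         # updated global score
        if np.linalg.norm(t - t_old) < tol * np.linalg.norm(t):
            break
    return t, w, block_scores, block_loadings

# toy usage with two random blocks
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(20, 6)), rng.normal(size=(20, 4))
t, w, bs, bl = cpca_one_component([X1, X2])
```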

2.
Principal component analysis (PCA) is the most commonly used dimensionality reduction technique for detecting and diagnosing faults in chemical processes. Although PCA has certain optimality properties in terms of fault detection and has been widely applied to fault diagnosis, it is not best suited for fault diagnosis. Discriminant partial least squares (DPLS) has been shown to improve fault diagnosis for small-scale classification problems compared with PCA. Fisher's discriminant analysis (FDA) has advantages from a theoretical point of view. In this paper, we develop an information criterion that automatically determines the order of the dimensionality reduction for FDA and DPLS, and show that FDA and DPLS are more proficient than PCA for diagnosing faults, both theoretically and by applying these techniques to simulated data collected from the Tennessee Eastman chemical plant simulator.
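
As a rough illustration of why a supervised projection can out-perform PCA for diagnosis, the sketch below contrasts FDA (via scikit-learn's LinearDiscriminantAnalysis) with a PCA-plus-classifier pipeline on synthetic multi-class "fault" data; it does not use the Tennessee Eastman data or the information criterion developed in the paper.

```python
# FDA vs. PCA for a toy multi-class fault-diagnosis problem; synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 300, 10
X_normal = rng.normal(size=(n, p))
X_fault1 = rng.normal(size=(n, p)) + np.r_[1.5, np.zeros(p - 1)]
X_fault2 = rng.normal(size=(n, p)) + np.r_[0.0, 1.5, np.zeros(p - 2)]
X = np.vstack([X_normal, X_fault1, X_fault2])
y = np.repeat([0, 1, 2], n)

# FDA: supervised projection onto discriminant directions, then classify
fda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
print("FDA accuracy:", fda.score(X, y))

# PCA baseline: unsupervised projection followed by a simple classifier
scores = PCA(n_components=2).fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(scores, y)
print("PCA + classifier accuracy:", clf.score(scores, y))
```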

3.
We study the two-parameter maximum likelihood estimation (MLE) problem for the Weibull distribution with consideration of interval data. Without interval data, the problem can be solved easily by regular MLE methods because the restricted MLE of the scale parameter β for a given shape parameter α has an analytical form, so α can be solved efficiently from its profile score function by traditional numerical methods. In the presence of interval data, however, the analytical form for the restricted MLE of β does not exist, and directly applying regular MLE methods can be less efficient and effective. To improve efficiency and effectiveness in handling interval data in the MLE problem, a new approach is developed in this paper. The new approach combines the Weibull-to-exponential transformation technique with the equivalent failure and lifetime technique. The concept of equivalence is developed to estimate exponential failure rates from uncertain data, including interval data. Since the definition of equivalent failures and lifetimes follows the EM algorithm, convergence of the failure rate estimation based on equivalent failures and lifetimes is proved mathematically. The new approach is demonstrated and validated through two published examples, and its performance under different conditions is studied by Monte Carlo simulation. The simulations indicate that the profile score function for α has only one maximum in most cases; this characteristic enables an efficient search for the optimal value of α.
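
For the complete-data case described above, the restricted MLE of β given α is available in closed form, so α can be found by a one-dimensional search over the profile log-likelihood. The sketch below shows that standard route only; the interval-data extension that is the subject of the paper is not implemented here.

```python
# Profile-likelihood sketch for complete Weibull data: beta-hat is analytic
# given alpha, so alpha is found by a one-dimensional bounded search.
import numpy as np
from scipy.optimize import minimize_scalar

def profile_loglik(alpha, x):
    n = len(x)
    beta_hat = np.mean(x**alpha) ** (1.0 / alpha)   # restricted MLE of the scale
    return (n * np.log(alpha) - n * alpha * np.log(beta_hat)
            + (alpha - 1) * np.sum(np.log(x)) - n)

rng = np.random.default_rng(2)
x = rng.weibull(a=2.0, size=200) * 3.0              # shape 2, scale 3 (synthetic)

res = minimize_scalar(lambda a: -profile_loglik(a, x),
                      bounds=(0.1, 10.0), method="bounded")
alpha_hat = res.x
beta_hat = np.mean(x**alpha_hat) ** (1.0 / alpha_hat)
print(alpha_hat, beta_hat)
```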

4.
Robust principal component analysis for functional data
A method for exploring the structure of populations of complex objects, such as images, is considered. The objects are summarized by feature vectors. The statistical backbone is Principal Component Analysis in the space of feature vectors. Visual insights come from representing the results in the original data space. In an ophthalmological example, endemic outliers motivate the development of a bounded influence approach to PCA.
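
The bounded influence estimator developed in the paper is not reproduced here; as a loosely related illustration of robustifying PCA against outliers, the sketch below replaces the sample covariance with scikit-learn's Minimum Covariance Determinant estimate before extracting principal directions. The data are synthetic.

```python
# Not the paper's bounded-influence estimator: a simple robust alternative
# that builds principal directions from the MCD covariance estimate.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
X[:10] += 8.0                                   # a few gross outliers

mcd = MinCovDet(random_state=0).fit(X)
eigval, eigvec = np.linalg.eigh(mcd.covariance_)
order = np.argsort(eigval)[::-1]
robust_loadings = eigvec[:, order[:2]]          # first two robust PCs
robust_scores = (X - mcd.location_) @ robust_loadings
```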

5.
Artificial weathering tests on ethylene-propylene-diene monomer (EPDM) containing 5-ethylidene-2-norbornene (ENB) as the diene were conducted in a xenon-lamp light-exposure and weathering apparatus for different time periods. Principal component analysis (PCA) was used to evaluate 12 degradation parameters of EPDM, including surface properties, crosslink density and mechanical properties. The results showed that the combined evaluation parameter Z of EPDM degradation increased quickly in the first 12 days of exposure and then leveled off. After 45 days, it began to increase rapidly again. Among the 12 degradation parameters, the crosslink density is strongly associated with the tensile stress at 300% elongation and the tear strength. The surface tension is also well correlated with the color aberration.

6.
In a previous paper we proposed a mixed least squares method for solving problems in linear elasticity. The solution to the equations of linear elasticity was obtained via minimization of a least squares functional depending on displacements and stresses. The performance of the method was tested numerically for low-order elements on classical examples with well-known analytical solutions. In this paper we derive a condition for the existence and uniqueness of the solution of the discrete problem for both the compressible and incompressible cases, and verify the uniqueness of the solution analytically for two low-order piecewise polynomial FEM spaces. Received: 20 January 2001 / Accepted: 14 June 2002. The authors gratefully acknowledge the financial support provided by NASA George C. Marshall Space Flight Center under contract number NAS8-38779.

7.
Multi-way data analysis techniques are becoming ever more widely used to extract information from data, such as 3D excitation-emission fluorescence spectra, that are structured in (hyper-)cubic arrays. Parallel Factor Analysis (PARAFAC) is very commonly applied to resolve 3D fluorescence data and to recover the signals corresponding to the various fluorescent constituents of the sample. The choice of the appropriate number of factors to use in PARAFAC is one of the crucial steps in the analysis. When the signals in the data come from a relatively small number of easily distinguished constituents, the choice of the appropriate number of factors is usually easy, and mathematical diagnostic tools such as the core consistency generally give good results. However, when the data come from a set of natural samples, the core consistency may not be a good indicator of the appropriate number of factors. In this work, Multi-way Principal Component Analysis (MPCA) and the Durbin-Watson criterion (DW) are utilized to choose the number of factors to use in the PARAFAC decomposition. This is demonstrated in a case where 3D front-face fluorescence spectroscopy is used to monitor the evolution of naturally occurring and neo-formed fluorescent components in oils during thermal treatment.
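
The Durbin-Watson criterion mentioned above can be computed directly from a residual spectrum: values near 2 indicate noise-like residuals, while values well below 2 indicate remaining structure and hence too few factors. The sketch below shows only this diagnostic on synthetic residuals; the PARAFAC fit itself is not included, and the variable names are illustrative.

```python
# Durbin-Watson criterion applied to residual spectra after a (hypothetical)
# PARAFAC fit; `noise_only` mimics a well-fitted case, `structured` a case
# with a leftover spectral band (too few factors).
import numpy as np

def durbin_watson(residual):
    d = np.diff(residual)
    return np.sum(d**2) / np.sum(residual**2)

rng = np.random.default_rng(4)
wavelength = np.linspace(0, 1, 200)
noise_only = rng.normal(scale=0.01, size=200)
structured = np.exp(-((wavelength - 0.5) / 0.05) ** 2) + noise_only

print(durbin_watson(noise_only))   # close to 2
print(durbin_watson(structured))   # much smaller than 2
```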

8.
S. S. Dhillon, J. C. Thompson, Strain, 1990, 26(4): 141–144
This paper demonstrates that highly accurate predictions of the stress fields, including the peak stress of a stress concentration region, can be obtained easily by a least squares asymptotic analysis (LSAA) of even a relatively sparse set of displacement data from points in this zone lying sufficiently far from the boundary to avoid 'edge effects'.

9.
Traditional statistical theory is the most common tool used for trend analysis of accident data. In this paper, we point out some serious problems with using this theory in a practical safety management setting. An alternative approach is presented and discussed in which the focus is on observable quantities and on expressing uncertainties about these quantities, rather than on hypothetical probability distributions.

10.
Various conflicting proposals for the degrees of freedom associated with the residuals of a principal component analysis have been published in the chemometrics-oriented literature. Here, a detailed derivation is given of the 'standard' formula from statistics. This derivation is intended to be more accessible to chemometricians than, for example, the impeccable but condensed proof published by John Mandel in a relatively unknown paper (J. Res. Nat. Bur. Stand., 74B (1970) 149–154). The derivation is presented in the form of a two-stage recipe that also appears to apply to more complex multiway models, such as the ones considered by Ceulemans and Kiers (Br. J. Math. Stat. Psych., 59 (2006) 133–150).
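
A commonly quoted parameter-counting argument assigns (I − A)(J − A) residual degrees of freedom to an A-component model of an I × J matrix (without centering). The sketch below is only a Monte Carlo illustration, not the derivation discussed in the paper: it contrasts that count with the average residual sum of squares obtained when A components are fitted to pure noise, the gap between the two being the kind of discrepancy that fuels the conflicting proposals.

```python
# Monte Carlo contrast between the parameter-counting residual degrees of
# freedom (I - A)(J - A) and the average residual SS when A principal
# components are fitted to pure-noise matrices.
import numpy as np

I, J, A, sigma, n_rep = 30, 15, 3, 1.0, 500
rng = np.random.default_rng(5)

ss_res = []
for _ in range(n_rep):
    X = rng.normal(scale=sigma, size=(I, J))
    s = np.linalg.svd(X, compute_uv=False)
    ss_res.append(np.sum(s[A:] ** 2))          # residual SS after A components

print("parameter-counting df:", (I - A) * (J - A))
print("E[SS_res]/sigma^2 (MC):", np.mean(ss_res) / sigma**2)
```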

11.
Strategies Analysis, the third phase of Cognitive Work Analysis, helps investigators consider the range of ways in which workers can perform control tasks. Most existing approaches to Strategies Analysis identify a limited number of domain-specific strategies. We present a two-phase formative Strategies Analysis method intended to expose the range of strategies possible within a work system and the likelihood that different types of strategies will be selected in different contexts. The first phase, the preparatory phase, identifies generalised constraints that affect the range and selection of strategies, and the categories of strategies that may be applied to any domain. In the second phase, the application phase, investigators use the outputs of the preparatory phase to explore the impact that different work situations, tasks and workers have on the categories of strategies most likely to be adopted.

12.
A general and complete methodology is presented to facilitate systematic modeling and design of polymer processes during the early development period. To capture and handle the subjective type of uncertainty embedded in preliminary process development, fuzzy theories are used as a basis for modeling and designing the process in the presence of ambiguity and vagueness. Physical membership functions are developed for mapping the relation between process variables and the associated fuzzy uncertainties. Based on the qualitative results generated using our previously proposed “linguistic based preliminary design method,” the process modeling can proceed even in the absence of any process governing equations. The modeling is carried out by establishing an appropriate fuzzy reasoning system which provides a specific functional mapping that relates the input process variables to one or more output performance parameters. A reduced yet feasible domain is generated by our qualitative design scheme to constrain the process variables, and any optimization routine can then be employed to search for a proper process design. We demonstrate the effectiveness of the proposed methodology by applying it to a typical compression molding process.
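
A minimal sketch of the kind of fuzzy reasoning system described above is given below: triangular membership functions fuzzify a single process variable and a three-rule base maps it to a quality index by centroid defuzzification. The membership parameters, the rule base and the variable names are illustrative assumptions, not those of the paper.

```python
# Mamdani-style sketch: triangular memberships map a process variable
# (e.g. a mold temperature) to a performance score; rules and parameters
# are illustrative only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def infer(temp):
    # fuzzify the input
    low = tri(temp, 100, 120, 140)
    med = tri(temp, 130, 150, 170)
    high = tri(temp, 160, 180, 200)
    # rules: low -> poor quality, medium -> good, high -> fair
    y = np.linspace(0, 1, 101)                     # output universe (quality index)
    poor, good, fair = tri(y, 0.0, 0.2, 0.4), tri(y, 0.6, 0.8, 1.0), tri(y, 0.3, 0.5, 0.7)
    aggregate = np.maximum.reduce([np.minimum(low, poor),
                                   np.minimum(med, good),
                                   np.minimum(high, fair)])
    return np.sum(y * aggregate) / (np.sum(aggregate) + 1e-12)  # centroid defuzzification

print(infer(150.0))   # a medium temperature maps to a quality index near 0.8
```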

13.
Multiway methods are tested for their ability to explore and model near-infrared (NIR) spectra from a pharmaceutical batch process. The study reveals that blocking data that show nonlinear behaviour into a higher-order array can improve the predictive ability. The variation at each control point is modelled independently, and the N-way techniques overcome the nonlinearity problem. Important issues such as variable selection and how to fill in missing values are discussed. Variable selection was shown to be essential for multiway modelling. For spectra not yet monitored, using the mean spectra from the calibration set gave close to the best results. Decomposing the spectra by N-way techniques gave additional information about the chemical system. Simulated data sets were used to support the results.
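
The missing-value strategy mentioned above (using mean spectra from the calibration set for control points not yet monitored) can be sketched in a few lines; the array names and dimensions below are illustrative, not those of the pharmaceutical data set.

```python
# Fill control points that have not yet been measured with the mean spectrum
# of the calibration batches before any N-way modelling; synthetic arrays.
import numpy as np

rng = np.random.default_rng(6)
n_batches, n_points, n_wavelengths = 8, 5, 50
X = rng.normal(size=(n_batches, n_points, n_wavelengths))    # calibration batches

X_new = rng.normal(size=(n_points, n_wavelengths))           # batch being monitored
X_new[3:, :] = np.nan                                        # later control points missing

mean_spectra = X.mean(axis=0)                                # mean spectrum per control point
missing = np.isnan(X_new)
X_new[missing] = mean_spectra[missing]                       # fill with calibration means
```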

14.
The 'ensemble' up-crossing rate technique consists of averaging the rate at which a random load process up-crosses a deterministic barrier level over the resistance distribution at successive time points. Averaging over the resistance makes the assumption of independent up-crossings less appropriate. As a result, first-passage failure probabilities may become excessively conservative in problems with other than extremely low failure probabilities. The ensemble up-crossing rate technique has significant potential for simplifying the solution of time-variant reliability problems under resistance degradation; however, little is known about the quality of this approximation or its limits of application. In this paper, a Monte Carlo simulation-based methodology is developed to predict the error in the approximation. An error parameter is identified and error functions are constructed. The methodology is applied to a range of time-invariant and time-variant random barriers, and it is shown that the error in the original ensemble up-crossing rate approximation is largely reduced. The study provides unprecedented insight into the characteristics of the ensemble up-crossing rate approximation.
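
A toy illustration of why the ensemble up-crossing rate can be conservative is sketched below: for a discrete-time Gaussian load and a time-invariant Gaussian resistance, the Poisson-type estimate built from resistance-averaged crossing probabilities is compared with a direct Monte Carlo first-passage estimate. This is only meant to reproduce the qualitative effect; it is not the error-function methodology developed in the paper.

```python
# Ensemble up-crossing approximation vs. direct Monte Carlo first-passage
# probability for a toy discrete-time problem; model and numbers illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_steps, n_mc = 50, 100_000
mu_R, sig_R = 3.5, 0.5                         # time-invariant random resistance

S = rng.normal(size=(n_mc, n_steps))           # discrete-time Gaussian load
R = rng.normal(mu_R, sig_R, size=(n_mc, 1))

# direct Monte Carlo first-passage probability
pf_mc = np.mean(S.max(axis=1) > R[:, 0])

# ensemble up-crossing approximation: average the per-step up-crossing
# indicator over load AND resistance, then assume independent crossings
nu = np.mean((S[:, :-1] <= R) & (S[:, 1:] > R), axis=0)
p0 = np.mean(S[:, 0] > R[:, 0])                # failure already at the first step
pf_ucr = 1.0 - (1.0 - p0) * np.exp(-nu.sum())

print("direct Monte Carlo       :", pf_mc)
print("ensemble up-crossing rate:", pf_ucr)    # noticeably conservative here
```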

15.
As the first phase of quality function deployment (QFD) and the only interface between the customers and the product development team, the house of quality (HOQ) plays the most important role in developing quality products that satisfy customer needs. Whatever shape or form the HOQ takes, the key to this process is to uncover the hidden relationships between customer requirements and product design specifications. This paper presents a general rough set based data mining approach for HOQ analysis. It utilises historical information on customer needs and the design specifications of the products that were purchased, and employs basic rough set notions to reveal the interrelationships between customer needs and design specifications automatically. Due to the data reduction nature of the approach, a minimal set of customer needs that are crucial for the decision on the correlated design specifications is derived. The end result of the approach is a minimal rule set, which not only fulfils the goal of the HOQ but can also be used as supporting data for marketing purposes. A case study on electrically powered bicycles is included to illustrate the approach and its efficiency.
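
The rough set machinery referred to above can be illustrated with a very small sketch: indiscernibility classes are built over customer-need attributes, and attribute subsets are checked for whether they still determine the decision (a design-specification level). The toy decision table below is an assumption for illustration, not the electric-bicycle case study.

```python
# Tiny rough-set sketch: which customer-need attributes can be dropped while
# still determining the design-specification decision? Toy data only.
from itertools import combinations

# rows: (customer-need attribute values..., decision)
table = [
    (("low",  "yes"), "specA"),
    (("low",  "no"),  "specB"),
    (("high", "yes"), "specA"),
    (("high", "no"),  "specA"),
]
attrs = [0, 1]

def consistent(attr_subset):
    """True if equal condition values always imply an equal decision."""
    seen = {}
    for cond, dec in table:
        key = tuple(cond[i] for i in attr_subset)
        if seen.setdefault(key, dec) != dec:
            return False
    return True

# attribute subsets that still determine the decision (candidates for reducts)
candidates = [s for r in range(1, len(attrs) + 1)
              for s in combinations(attrs, r) if consistent(s)]
print(candidates)   # [(0, 1)] here: both attributes are needed
```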

16.
A computational model is developed, by implementing the damage models previously proposed by the authors into a finite element code, for simulating the damage evolution and crushing behavior of chopped random fiber composites. Material damage induced by fiber debonding and by crack nucleation and growth is considered. Systematic computational algorithms are developed to combine the damage models into the constitutive relation. Based on the implemented computational model, a range of simulations is carried out to probe the behavior of the composites and to validate the proposed methodology. Numerical examples show that the present computational model is capable of modeling the progressive deterioration of effective stiffness and the softening behavior after the peak load. The crushing behavior of a composite tube is also simulated, which shows the applicability of the proposed computational model to crashworthiness simulations.

17.
Milling is the most practical machining (corrective) operation for removing excess material to produce a well-defined, high-quality surface. However, milling composite materials presents a number of problems, such as surface delamination, associated with the characteristics of the material and the cutting parameters used. In order to minimize these problems, this study evaluates the cutting parameters (cutting velocity and feed rate) in relation to the machining force in the workpiece, the delamination factor, surface roughness and international dimensional precision for two GFRP composite materials (Viapal VUP 9731 and ATLAC 382-05). A plan of experiments, based on an orthogonal array, was established for milling with prefixed cutting parameters. Finally, an analysis of variance (ANOVA) was performed to investigate the cutting characteristics of the GFRP composite materials machined with a cemented carbide (K10) end mill.
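
The ANOVA step can be sketched with a standard two-factor analysis of cutting velocity and feed rate against one response, here taken to be the machining force. The data frame below is synthetic and only illustrates the layout; it does not reproduce the orthogonal array or the measurements of the study.

```python
# Two-factor ANOVA sketch (cutting velocity and feed rate vs. machining force)
# on synthetic data laid out like a balanced designed experiment.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(8)
velocity = np.repeat([50, 100, 150], 6)              # m/min, three levels
feed = np.tile([0.05, 0.10, 0.15], 6)                # mm/rev, three levels
force = 20 + 0.1 * velocity + 300 * feed + rng.normal(scale=2, size=18)

df = pd.DataFrame({"velocity": velocity, "feed": feed, "force": force})
model = ols("force ~ C(velocity) + C(feed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))               # F-tests for each factor
```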

18.
Second-order instrumental signals showing non-linear behaviour with respect to analyte concentration can still be adequately processed in order to achieve the important second-order advantage. The combination of unfolded principal component analysis with residual bilinearization, followed by application of a variety of neural network models, allows one to obtain the second-order advantage. While principal component analysis models the training data, residual bilinearization models the contribution of the potential interferents that may be present in the test samples. Neural networks such as the multilayer perceptron, radial basis function networks and support vector machines are all able to model the non-linear relationship between analyte concentrations and sample principal component scores. Three different experimental systems, all requiring the second-order advantage, have been analyzed: 1) pH–UV absorbance matrices for the determination of two active principles in pharmaceutical preparations, 2) fluorescence excitation–emission matrices for the determination of polycyclic aromatic hydrocarbons, and 3) UV-induced fluorescence excitation–emission matrices for the determination of amoxicillin in the presence of salicylate. In all cases, reasonably accurate predictions can be made with the proposed techniques, predictions that cannot be reached using traditional methods for processing second-order data.
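
The scores-to-concentration part of the strategy can be sketched as follows: each sample matrix is unfolded to a vector, compressed by PCA, and the scores are fed to a small multilayer perceptron. The residual bilinearization step for unexpected interferents is deliberately omitted, and the data are synthetic.

```python
# Unfolded PCA scores fed to a small MLP regressor; synthetic excitation-
# emission matrices, no residual bilinearization step.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
n_samples, n_ex, n_em = 60, 20, 30
conc = rng.uniform(0.1, 1.0, size=n_samples)

profile = np.outer(np.exp(-np.linspace(-2, 2, n_ex) ** 2),
                   np.exp(-np.linspace(-1, 3, n_em) ** 2))       # one EEM profile
X = conc[:, None, None] * profile + rng.normal(scale=0.01, size=(n_samples, n_ex, n_em))
X_unfolded = X.reshape(n_samples, -1)                            # unfold to 2-D

scores = PCA(n_components=3).fit_transform(X_unfolded)
mlp = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0).fit(scores, conc)
print(mlp.score(scores, conc))                                   # R^2 on the training scores
```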

19.
Systems, structures, and components of nuclear power plants are subject to Technical Specifications (TSs) that establish operational limitations and maintenance and test requirements, with the objective of keeping the risk associated with the plant within the limits imposed by the regulatory agencies. Recently, in an effort to improve the competitiveness of nuclear energy in a deregulated market, modifications to maintenance policies and TSs are being considered from a risk-informed viewpoint, which judges the effectiveness of a TS, e.g. a particular maintenance policy, with respect to its implications for the safety and economics of system operation. In this regard, a recent policy statement of the US Nuclear Regulatory Commission declares appropriate the use of Probabilistic Risk Assessment models to evaluate the effects of a particular TS on the system. These models rely on a set of parameters at the component level (failure rates, repair rates, frequencies of failure on demand, human error rates, inspection durations, and others) whose values are typically affected by uncertainties. Thus, the estimate of the system performance parameters corresponding to a given TS value must be supported by some measure of the associated uncertainty. In this paper we propose an approach, based on the effective coupling of genetic algorithms and Monte Carlo simulation, for the multiobjective optimization of the TSs of nuclear safety systems. The method transparently and explicitly accounts for the uncertainties in the model parameters by attempting to minimize both the expected value of the system unavailability and its associated variance. The costs of the alternative TS solutions are included as constraints in the optimization. An application to the Reactor Protection Instrumentation System of a Pressurized Water Reactor is demonstrated.
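
The ingredients of the approach can be caricatured in a short sketch: a candidate surveillance test interval is scored by Monte Carlo sampling of an uncertain failure rate (giving the mean and variance of the unavailability), a test-cost constraint is imposed, and a small genetic algorithm searches over the interval. The unavailability model, the numbers and the weighted-sum scalarisation of the two objectives are illustrative assumptions; the paper treats the problem as a genuine multiobjective optimization.

```python
# GA + Monte Carlo caricature: pick a surveillance test interval that trades
# off the mean and spread of unavailability under a test-cost constraint.
import numpy as np

rng = np.random.default_rng(10)

def unavailability_stats(test_interval, n_mc=2000):
    lam = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=n_mc)  # uncertain failure rate (1/h)
    q = lam * test_interval / 2 + 1e-3          # standby unavailability + per-demand term
    return q.mean(), q.var()

def fitness(test_interval):
    cost = 8760.0 / test_interval * 50.0        # yearly test cost (illustrative)
    if cost > 2000.0:                           # cost constraint
        return np.inf
    m, v = unavailability_stats(test_interval)
    return m + 10.0 * np.sqrt(v)                # one scalarisation of the two objectives

# small real-coded genetic algorithm over the test interval (hours)
pop = rng.uniform(100, 5000, size=20)
for _ in range(30):
    f = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(f)[:10]]                        # selection
    children = (parents[rng.integers(0, 10, 10)] +
                parents[rng.integers(0, 10, 10)]) / 2        # crossover
    children += rng.normal(scale=100, size=10)               # mutation
    pop = np.clip(np.concatenate([parents, children]), 100, 5000)

best = min(pop, key=fitness)
print("selected test interval (h):", round(float(best), 1))
```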

20.
This study investigates the feasibility of enhancing steam-driven ejector performance. Initially, a one-dimensional ejector theory is used to examine the effects on ejector performance of three isentropic efficiencies: the nozzle efficiency ηn, the mixing efficiency ηm, and the diffuser efficiency ηd. The theoretical analysis demonstrates that the mixing efficiency profoundly affects ejector performance, whereas the other two efficiencies have only a slight influence on it. This finding suggests that efficient mixing can promote ejector performance. The study also attempts to improve mixing efficiency using a petal nozzle. The behavior and characteristics of the petal nozzle are investigated by testing the nozzle under various operating conditions, i.e. primary pressure, secondary pressure, and back pressure. In addition, the study compares the experimental and theoretical results. These results show that using a petal nozzle can improve ejector performance. The shadowgraph method was used to visualize the inner flow field of the ejector; the flow patterns observed should help to further improve ejector performance.
