Similar Literature
1.
Three prominent multivariate statistical analyses, canonical correlation analysis (CCA), principal component analysis (PCA), and CAS-Regression analysis (CAS-R), are applied to the formulation optimization data for Product-T to determine a set of key excipient/process variables and a set of key response variables for monitoring the future performance of the optimized formula. CCA, which considers both sets of variables simultaneously in a single analysis, successfully delineated two key parameters, one from each set. PCA, which considers only the response variables, concurred with the CCA results, and CAS-R, which considers each response variable separately, also concurred. Even though CCA is the predominant technique, the adjunct results of PCA and CAS-R can supplement it for a comprehensive interpretation. It is recommended that all three analyses be carried out and interpreted appropriately.
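
A minimal sketch of this three-pronged screening, assuming synthetic data in place of the Product-T formulation data (which is not reproduced here) and using per-response least-squares regression as a stand-in for CAS-R, whose exact formulation the abstract does not give:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 4))          # excipient/process variables
# Two response variables driven mainly by X[:, 0], plus noise
Y = np.column_stack([
    2.0 * X[:, 0] + rng.normal(scale=0.5, size=n),
    1.5 * X[:, 0] + rng.normal(scale=0.5, size=n),
])

# CCA: weights on both variable sets at once; large loadings flag key variables
cca = CCA(n_components=1).fit(X, Y)
print("CCA X-weights:", cca.x_weights_.ravel())
print("CCA Y-weights:", cca.y_weights_.ravel())

# PCA on the responses alone, playing the role the abstract assigns to PCA
pca = PCA(n_components=1).fit(Y)
print("PCA loadings on responses:", pca.components_.ravel())

# Regression of each response separately (illustrative stand-in for CAS-R)
for j in range(Y.shape[1]):
    r2 = LinearRegression().fit(X, Y[:, j]).score(X, Y[:, j])
    print(f"response {j}: R^2 = {r2:.3f}")
```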

2.
We apply partial least squares (PLS) regression to predict three-dimensional (3D) face shape from a single image. PLS describes the relationship between independent (intensity image) and dependent (3D shape) variables by seeking directions in the space of independent variables that are associated with large variations in the space of dependent variables. We use this idea to construct statistical models of intensity and 3D shape that capture strongly linked variations in both spaces. This decomposition leads to the construction of two different models that capture common variations in 3D shape and intensity. Using the intensity model, a set of parameters is obtained from out-of-training intensity examples. These intensity parameters can then be used directly in the 3D shape model to approximate facial shape. Experiments show that prediction is achieved with reasonable accuracy, improving on results obtained through canonical correlation analysis.
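
A minimal sketch of the intensity-to-shape prediction, assuming flattened image and shape vectors driven by a shared latent source; scikit-learn's PLSRegression stands in for the paper's coupled intensity and shape models:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n, p_img, p_shape = 120, 64, 30
latent = rng.normal(size=(n, 3))                       # shared variation
A, B = rng.normal(size=(3, p_img)), rng.normal(size=(3, p_shape))
X = latent @ A + 0.1 * rng.normal(size=(n, p_img))     # intensity vectors
Y = latent @ B + 0.1 * rng.normal(size=(n, p_shape))   # 3D shape vectors

# Fit on 100 training faces, then predict shape for out-of-training images
pls = PLSRegression(n_components=3).fit(X[:100], Y[:100])
Y_pred = pls.predict(X[100:])
err = np.linalg.norm(Y_pred - Y[100:]) / np.linalg.norm(Y[100:])
print(f"relative shape error: {err:.3f}")
```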

3.
Toyoda, Hideki & Maeda, Tadahiko. Behaviormetrika, 1992, 19(2): 117-126

The purpose of the present study is to propose a procedure for correlation analysis of several (especially two) sets of variables, which includes canonical correlation analysis, principal component analysis, and multiple regression analysis as special cases. The proposed method derives components from each set of variables that maximize the weighted geometric mean of two types of indicators: the contribution rate of the components to their original variables, and the squared correlation between the components. In terms of test theory, the former are indicators of reliability and the latter are indicators of concurrent validity. Through numerical examples applying this method to data from two Japanese-language personality inventories, the method is shown to be particularly useful for determining the weights of test items.


4.
Shaojun Xie & Xiaoping Du. Engineering Optimization, 2013, 45(12): 2109-2126
In practical design problems, interval variables exist. Many existing methods can handle only independent interval variables; some interval variables, however, are dependent. In this work, dependent interval variables constrained within a multi-ellipsoid convex set are considered and incorporated into reliability-based design optimization (RBDO). An efficient RBDO method is proposed by employing a sequential single-loop procedure, which separates the coupled reliability analysis from the deterministic optimization. In the reliability analysis, a single-loop optimization for the inverse reliability analysis is performed, and an efficient inverse reliability analysis method for searching for the worst-case most probable point (WMPP) is developed. The search method has two stages: the first deals with the situation where the WMPP lies on the boundary of the feasible region, while the second accommodates, by interpolation, the situation where the WMPP lies inside the feasible region. Three examples are used for demonstration.
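
As a rough illustration only (a single ellipsoid and a made-up linear limit state, not the paper's two-stage WMPP search), the worst case over a convex set of dependent interval variables can be found by constrained minimization of the limit-state function:

```python
import numpy as np
from scipy.optimize import minimize

W = np.diag([4.0, 1.0])                 # illustrative ellipsoid shape matrix

def g(y):                               # illustrative limit state: safe if g > 0
    return 3.0 - y[0] - 2.0 * y[1]

# Worst case of g over the ellipsoid y^T W y <= 1
res = minimize(g, x0=np.zeros(2),
               constraints=[{"type": "ineq",
                             "fun": lambda y: 1.0 - y @ W @ y}])
print("worst-case y:", res.x, " g_min:", res.fun)
# g_min > 0 means the constraint holds for every y in the ellipsoid.
```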

5.
Non-probabilistic convex models need only the variation bounds of parameters rather than their exact probability distributions; thus, such models can be applied to uncertainty analysis of complex structures when experimental information is lacking. The interval model and the ellipsoidal model are the two most commonly used non-probabilistic convex models. However, the former can deal only with independent variables, while the latter can deal only with dependent variables. This paper presents a more general non-probabilistic convex model, the multidimensional parallelepiped model. This model includes independent and dependent uncertain variables in a unified framework and can effectively deal with complex 'multi-source uncertainty' problems in which dependent and independent variables coexist. For any two parameters, the concepts of the correlation angle and the correlation coefficient are defined. From the marginal intervals of all the parameters and their correlation coefficients, a multidimensional parallelepiped can easily be built as the uncertainty domain. Through the introduction of affine coordinates, the parallelepiped model in the original parameter space is converted to an interval model in the affine space, greatly facilitating subsequent structural uncertainty analysis. The parallelepiped model is applied to structural uncertainty propagation analysis, and the response interval of the structure is obtained for uncertain initial parameters. Finally, the method is applied to several numerical examples.
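
A minimal sketch of the affine-coordinate construction, assuming two parameters with illustrative marginal intervals and a made-up correlation angle; the parallelepiped is the image of the unit interval box under an affine map:

```python
import numpy as np

mid  = np.array([10.0, 5.0])       # interval midpoints
half = np.array([2.0, 1.0])        # interval half-widths
theta = np.deg2rad(60.0)           # illustrative correlation angle between axes

# Columns are the (non-orthogonal) edge directions of the parallelepiped
A = np.column_stack([[1.0, 0.0],
                     [np.cos(theta), np.sin(theta)]])

def to_parameter_space(u):
    """Map affine coordinates u in [-1, 1]^2 to physical parameters."""
    return mid + A @ (half * u)

# Propagating the corners of the unit box gives the parallelepiped's vertices
corners = np.array([[s1, s2] for s1 in (-1, 1) for s2 in (-1, 1)])
print(np.array([to_parameter_space(u) for u in corners]))
```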

6.
A methodology for combining automobile crash investigation case studies into an overall statistical analysis is presented. The method treats each case study as an experiment identified by a set of independent variables. For each experiment a dependent variable, such as occupant injury, is measured. The relationship between the independent variables and the dependent variable can then be analyzed. A specific example using multiple regression analysis to fit an injury prediction model is presented. Residuals from this model are used to show the injury-reducing effect of restraint systems.
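
A minimal sketch of this pooling idea on synthetic cases: an injury model is fitted on impact speed alone, and the residuals, grouped by restraint use, expose the injury-reduction effect (the variables and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
speed  = rng.uniform(20, 80, n)          # impact speed per case study
belted = rng.integers(0, 2, n)           # restraint used?
injury = 0.08 * speed - 1.2 * belted + rng.normal(scale=0.5, size=n)

# Fit injury on speed only; restraint use is deliberately left out
X = np.column_stack([np.ones(n), speed])
beta, *_ = np.linalg.lstsq(X, injury, rcond=None)
residuals = injury - X @ beta
print("coefficients:", beta)

# Belted cases sit below the fitted line, unbelted cases above it
print("mean residual, unbelted vs belted:",
      residuals[belted == 0].mean(), residuals[belted == 1].mean())
```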

7.
The sure independence screening procedure based on ranking marginal Pearson correlations is well documented in the literature and works satisfactorily in the ultra-high-dimensional case. However, marginal Pearson correlation learning can easily miss a variable that is marginally uncorrelated with the response but correlated with it jointly with some other variables. Such an important variable is missed because the marginal Pearson correlation does not use the joint information of the response and a set of covariates. In this paper, we introduce a new screening method that places a variable in the active set if, jointly with some other variables, it has a high canonical correlation with the response. This is accomplished by ranking the canonical correlations between the response and all possible sets of k variables. Our results show that the procedure has the sure screening property and substantially reduces the dimensionality to a moderate size relative to the sample size. Extensive simulations demonstrate that the new method performs substantially better than existing sure independence screening approaches based on the marginal Pearson correlation or Kendall's tau rank correlation. A real data set is also analyzed using our approach.
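
A minimal sketch of the subset-ranking rule, assuming a scalar response (for which the canonical correlation with a set of covariates reduces to the multiple correlation coefficient) and brute-force enumeration, which is only feasible for small p and k:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, p, k = 500, 6, 2
X = rng.normal(size=(n, p))
X[:, 0] = 0.5 * X[:, 1] + np.sqrt(0.75) * rng.normal(size=n)  # corr(x0, x1) = 0.5
# cov(y, x0) = 1 - 2 * 0.5 = 0: x0 is marginally uncorrelated with y,
# so marginal Pearson screening would drop it despite its joint importance
y = X[:, 0] - 2.0 * X[:, 1] + 0.5 * rng.normal(size=n)

def canon_corr(S):
    # For a scalar response, the canonical correlation with X[:, S]
    # is the multiple correlation coefficient sqrt(R^2)
    Xs = np.column_stack([np.ones(n), X[:, list(S)]])
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return np.sqrt(1.0 - resid.var() / y.var())

ranked = sorted(combinations(range(p), k), key=canon_corr, reverse=True)
print("top subsets:", ranked[:3])   # subsets containing {0, 1} rank first
```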

8.
A deterministic activity network (DAN) is a collection of activities, each with some duration, along with a set of precedence constraints, which specify that activities begin only when certain others have finished. One critical performance measure for an activity network is its makespan, which is the minimum time required to complete all activities. In a stochastic activity network (SAN), the durations of the activities and the makespan are random variables. The analysis of SANs is quite involved, but can be carried out numerically by Monte Carlo analysis. This paper concerns the optimization of a SAN, i.e., the choice of some design variables that affect the probability distributions of the activity durations. We concentrate on the problem of minimizing a quantile (e.g., 95%) of the makespan, subject to constraints on the variables. This problem has many applications, ranging from project management to digital integrated circuit (IC) sizing (the latter being our motivation). While there are effective methods for optimizing DANs, the SAN optimization problem is much more difficult; the few existing methods cannot handle large-scale problems. In this paper we introduce a heuristic method for approximately optimizing a SAN, by forming a related DAN optimization problem which includes extra margins in each of the activity durations to account for the variation. Since the method is based on optimizing a DAN, it readily handles large-scale problems. To assess the quality of the resulting suboptimal designs, we describe two widely applicable lower bounds on achievable performance in optimal SAN design. We demonstrate the method on a simplified statistical digital circuit sizing problem, in which the device widths affect both the mean and variance of the gate delays. Numerical experiments show that the resulting design is often substantially better than one in which the variation in delay is ignored, and is often quite close to the global optimum (as verified by the lower bounds).
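
A minimal sketch of the Monte Carlo analysis of a small SAN, assuming an illustrative four-activity network with lognormal durations; the paper's heuristic would instead pad each duration with a margin and optimize the resulting DAN:

```python
import numpy as np

rng = np.random.default_rng(4)
# precedence: activity -> list of predecessors (listed in topological order)
preds = {0: [], 1: [0], 2: [0], 3: [1, 2]}
mean  = {0: 2.0, 1: 4.0, 2: 5.0, 3: 3.0}

def makespan(sample):
    """Longest-path pass over the topologically ordered DAG."""
    finish = {}
    for a in preds:
        start = max((finish[p] for p in preds[a]), default=0.0)
        finish[a] = start + sample[a]
    return max(finish.values())

runs = [makespan({a: rng.lognormal(np.log(mean[a]), 0.3) for a in preds})
        for _ in range(10_000)]
print("95% makespan quantile:", np.quantile(runs, 0.95))
```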

9.
Conventional two-group differential item functioning (DIF) analysis for dichotomous items is extended to factorial DIF analysis for polytomous items, where multiple grouping factors, each with multiple groups, are analyzed jointly. Adopting the formulation of general linear models, item parameters across all possible groups are treated as a dependent variable and the grouping factors as independent variables. These item parameters are then reparameterized as a set of grand item parameters and sets of DIF parameters representing main and interaction effects of the factors on the items. Simulation studies show that the parameters of the proposed model can be satisfactorily recovered. A real data set of 10 polytomous items and 1924 subjects was analyzed. Applications and implications of the proposed model are addressed.

10.
The design of buildings, bridges, offshore platforms and other civil infrastructure systems is controlled by specifications whose purpose is to provide the engineering principles and procedures required for evaluating the safety of structural systems. The calibration of these codes and specifications is a continuous process, necessary to maintain a safe national and global infrastructure while keeping abreast of new developments in engineering principles and new data on materials and applied loads. The common approach to specification calibration is to use probabilistic tools to deal with the random behavior of materials and to account for the uncertainties associated with determining environmental and other load effects. This paper presents a procedure to calibrate load factors for a structural design specification based on cost and safety optimization. The procedure is illustrated by determining load factors that may be applicable for incorporation in a bridge design specification. Traditional code calibration requires a set of pre-determined target safety levels that each load combination case should satisfy. The procedure in this paper instead deduces the failure cost implied in present designs and provides consistent safety levels for all load combination cases. For greater accuracy, load effects that vary in time are modeled by separating them into two random variables: a time-dependent variable (wind speed, vehicular loads, etc.) and a time-independent variable (modeling uncertainties). The total expected lifetime cost is used in the optimization to account for both initial construction cost and future equivalent failure costs.

11.
This article provides methods for constructing simultaneous prediction intervals to contain the means of the dependent variable in a regression model for each of k future samples at k sets of values of the independent variables, some or all of which may differ. The methods are compared with previously proposed approximate procedures. The construction of simultaneous confidence intervals to contain the true regression at all k vectors of independent variables is also presented.
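
A minimal sketch of one standard construction, a Bonferroni adjustment of the usual interval for the mean of m future observations at each of k design points; the article's exact procedures may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=1.0, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n - X.shape[1]
s2 = ((y - X @ beta) ** 2).sum() / dof       # residual variance estimate
XtX_inv = np.linalg.inv(X.T @ X)

X_new = np.array([[1.0, 2.5], [1.0, 7.5]])   # k = 2 future design points
k, m, alpha = len(X_new), 5, 0.05            # m observations per future sample
t = stats.t.ppf(1 - alpha / (2 * k), dof)    # Bonferroni-adjusted t quantile
for x in X_new:
    se = np.sqrt(s2 * (1.0 / m + x @ XtX_inv @ x))
    print(f"[{x @ beta - t * se:.2f}, {x @ beta + t * se:.2f}]")
```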

12.
This paper offers a method for weight optimization of multilayer fiber composite plates under lateral loadings. The objective is to design a fiber composite plate of minimum thickness that can sustain multiple static loadings applied normal to its surface without exhibiting failure, based on the Tsai-Hill criterion, in any of its layers. Fiber orientation angles are treated as discrete variables, which can vary only by pre-assigned increments, while layer thicknesses are treated as continuous variables. The optimization procedure is based on a two-stage strategy: in the first stage only the fiber orientation angles are treated as variables, and in the second only the layer thicknesses. A criterion based on a load factor is defined to find the best angle for a new layer in the first stage, and the method of center points is used for thickness optimization in the second stage. After each angle and thickness optimization, a new layer is added and the procedure is repeated for further layers. The two-stage procedure ends when the thickness of the new layer approaches zero, meaning that no new layer would improve the set already found. In this way, at the end of the optimization the plate consists of a minimum number of layers whose fibers are optimally oriented and whose thicknesses are minimal. A poor choice of layer in the stack produces a near-zero thickness for that layer, which is thus deleted from the set. A repeat pass after each cycle modifies the layer angles to compensate for errors due to the approximations involved. The priorities exercised in including new layers and excluding unnecessary ones allow an optimal stacking sequence to be achieved. Several examples demonstrate the operation of the algorithm.
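
A minimal sketch of the per-layer failure constraint, the Tsai-Hill index for a unidirectional ply under in-plane stresses, with illustrative strength values:

```python
def tsai_hill_index(s1, s2, t12, X=1500.0, Y=40.0, S=68.0):
    """Tsai-Hill failure index (stresses and strengths in MPa).

    s1, s2: normal stresses along/across the fibers; t12: in-plane shear.
    X, Y, S: longitudinal, transverse and shear strengths (illustrative).
    The ply is predicted safe while the index stays below 1.
    """
    return (s1 / X) ** 2 - (s1 * s2) / X ** 2 + (s2 / Y) ** 2 + (t12 / S) ** 2

print(tsai_hill_index(s1=800.0, s2=20.0, t12=30.0))   # < 1: layer survives
```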

13.
In this large-scale project, the design tasks were divided into two parts, which increased efficiency: the modelling and generation of optimal candidate missile designs were separated from the preference-laden design selection process. A set of performance indices including range, cost, trajectory error, system susceptibility, and reliability was established as the system measures against which all subsystems were subsequently optimized. A team of technical designers used computer-based submodels for propulsion, life-cycle cost, and system susceptibility, along with analytical models for trajectory analysis and system reliability, to construct a state-space model. The system state space included a set of established constants and mathematically linked the various subsystems to the performance measures. This model was then nested inside a vector optimization algorithm on a digital computer. The output of this interactive computer program is a set of efficient, or nondominated, missile designs. Each design is defined by its set of state variables and accompanied by a set of performance index scores and the control variables that define its particular trajectory. These nondominated designs show explicitly the tradeoffs among the performance indices as one moves along the efficient frontier of designs. Additional sensitivity analysis is provided by the optimization software for each efficient design. The second phase of the design process consisted of identifying one missile design from the efficient set for further development, by ranking the efficient designs according to a scalar scoring function that encodes the decision maker's preferences.
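
A minimal sketch of extracting the nondominated (efficient) designs from a scored candidate set, assuming all performance indices are to be maximized; the missile submodels themselves are of course not reproduced:

```python
import numpy as np

def nondominated(scores):
    """Return indices of rows not dominated by any other row (maximization)."""
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(t >= s) and np.any(t > s) for t in scores)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(6)
designs = rng.uniform(size=(20, 3))      # 20 candidates, 3 performance indices
print("efficient frontier:", nondominated(designs))
```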

14.
A data set of 97 samples of olive oil characterized by 36 chemical compounds, collected in Jaen (Spain), was used to investigate whether the information obtained by chemometric analysis of all of the variables, considered as a whole, could also be achieved by independent and parallel studies of its subsets. Each one of the subsets introduced independent and uncorrelated information, which can be used to build decision rules which may be implemented in expert systems.

15.
Least squares estimates of the parameters of a multiple linear regression model are known to be highly variable when the matrix of independent variables is nearly singular. Using the latent roots and latent vectors of the “correlation matrix” of the dependent and independent variables, a modified least squares estimation procedure is introduced. This technique enables one to determine whether the near-singularity has predictive value and to examine alternative prediction equations in which the effect of the near-singularity has been removed from the estimates of the regression coefficients. In addition, a method for performing backward elimination of variables using standard least squares or the modified procedure is presented.
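
A minimal sketch of the diagnostic step, assuming a synthetic near-singular pair of predictors: eigendecompose the correlation matrix of [y, X] and inspect the y-loading of each small latent root (full latent root regression then re-estimates the coefficients with non-predictive roots removed):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # near-singular predictor pair
y = x1 + rng.normal(scale=0.5, size=n)

Z = np.column_stack([y, x1, x2])
R = np.corrcoef(Z, rowvar=False)        # "correlation matrix" of y and X
roots, vectors = np.linalg.eigh(R)      # latent roots (ascending) and vectors
for lam, v in zip(roots, vectors.T):
    # A tiny root whose y-component (first entry) is also tiny flags a
    # non-predictive near-singularity that the modified estimator discards
    print(f"latent root {lam:.4f}, y-loading {v[0]:+.4f}")
```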

16.
This paper deals with the simultaneous statistical process control of several Poisson variables. The practitioner may employ a multiple scheme, i.e. one control chart for each variable, or a multivariate scheme that monitors all the variables with a single control chart. For the multivariate scheme there are, for example, three options: (i) a control chart on the sum of the Poisson variables; (ii) a control chart on the maximum of the Poisson variables; and (iii) in the case of only two variables, a chart that monitors the difference between them. These control charts are studied when applied to the control of p = 2, 3 and 4 variables. In addition, the optimization of a set of univariate Poisson control charts (the multiple scheme) is studied. The main purpose of this paper is to help the practitioner select the most adequate scheme for his or her production process. Towards this goal, a user-friendly Windows computer program has been developed. The program returns the best control limits for each control chart and makes a complete performance comparison among all the schemes.
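
A minimal sketch of option (i), a c-type chart on the sum of p independent Poisson counts, using conventional 3-sigma limits rather than the optimized limits the cited program computes:

```python
import numpy as np

lambdas = [4.0, 2.5, 1.5]            # in-control means of p = 3 variables
lam_sum = sum(lambdas)               # the summed count is Poisson(lam_sum)
ucl = lam_sum + 3 * np.sqrt(lam_sum)
lcl = max(0.0, lam_sum - 3 * np.sqrt(lam_sum))
print(f"sum chart: LCL = {lcl:.2f}, UCL = {ucl:.2f}")

# Monitor ten in-control samples of the three counts
rng = np.random.default_rng(8)
counts = rng.poisson(lambdas, size=(10, 3)).sum(axis=1)
print("signals:", (counts > ucl) | (counts < lcl))
```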

17.
The air-side heat transfer from wire-and-tube heat exchangers of the kind widely used in small refrigeration appliances has been studied. Radiation and free-convection components have been separately investigated. The radiation component was theoretically computed using a diffuse, gray-body network with interactions between each part of the heat exchanger and the surroundings. For the free-convection heat transfer component, a semiempirical correlation was developed on the basis of experimental tests conducted on a set of 42 low-emittance exchangers with various geometrical characteristics. Comparisons between overall heat transfer predictions and a second, independent set of experiments on eight high-emittance exchangers showed satisfactory agreement. The proposed analysis is suitable either to determine the heat transfer performance of an existing (already sized) exchanger or to design a new one for prescribed heat duty and working temperatures.

18.
The objective of this study was to determine human physiological capabilities for prolonged lifting tasks performed from the floor to table height. Frequency and weight of load were the independent variables. Oxygen consumption, minute ventilation, and heart rate were the dependent variables. Physiological responses were monitored continuously for each frequency-load combination. Eleven male subjects participated in the experiments. The duration of each experimental session was controlled by the subject. Each subject was instructed to perform the lifting task continuously until he could not maintain it any longer due to complete physical exhaustion. Each subject was given 10 minutes of rest every 50 minutes of work and 1 hour for lunch after the fourth hour of work. The upper limit of lifting duration was set to 8 hours. One of the main findings obtained from this study was that a physiological fatigue limit (PFL) is a function of lifting task parameters (frequency, weight of load, and task duration). Thus, one cannot recommend a single PFL value such as 1 liter/min for lifting tasks of varied work durations.

19.
Ship fenders often take large crash loads during berthing. This paper presents a crashworthiness design of a regular ship fender structure with varying geometric dimensions. In the optimization, specific energy absorption (SEA) and maximum crushing force (Pm) are set as the two objectives, and the thickness of the outer skin, the thickness of the frames, the thickness of the longitudinally reinforced stiffener and the height of the fender are selected as the four design variables. Nonlinear finite element analysis is first carried out to capture the crash responses of 196 samples. Parametric studies are then performed to investigate the influence of the different variables on the design objectives. A back-propagation neural network is constructed as a surrogate model mapping the variables to the objectives. Once the network is validated, a multi-objective genetic algorithm is applied to obtain the Pareto optimal solutions. The optimum design is defined as the solution with the maximum SEA-to-Pm ratio, and its design variables are tested and verified using a sensitivity analysis technique.
