Similar Literature
20 similar records retrieved
1.
2.
Compares component and common factor analysis using 3 levels of population factor pattern loadings (.40, .60, .80) for each of the 3 levels of variables (9, 18, 36). Common factor analysis was significantly more accurate than components in reproducing the population pattern in each of the conditions examined. The differences decreased as the number of variables and the size of the population pattern loadings increased. The common factor analysis loadings were unbiased, had a smaller standard error than component loadings, and presented no boundary problems. Component loadings were significantly and systematically inflated even with 36 variables and loadings of .80. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
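As a rough illustration of the kind of comparison this abstract describes, the following Python sketch generates data from a one-factor population model (loadings of .60 on 9 variables, one of the conditions mentioned) and contrasts principal component loadings with maximum-likelihood common factor loadings. The sample size, seed, and use of scikit-learn are assumptions, not details taken from the study.

```python
# Minimal sketch (not the original study's code): compare how well principal
# component loadings and common factor loadings recover a known population
# pattern. Population model: 1 factor, 9 variables, all loadings = .60.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
p, n, lam = 9, 150, 0.60                      # variables, sample size, population loading

loadings = np.full(p, lam)
factor = rng.standard_normal(n)
unique = rng.standard_normal((n, p)) * np.sqrt(1 - lam**2)
x = np.outer(factor, loadings) + unique       # data generated from the factor model

# Principal component loadings: eigenvector scaled by sqrt(eigenvalue) of R
r = np.corrcoef(x, rowvar=False)
evals, evecs = np.linalg.eigh(r)
pc_load = np.abs(evecs[:, -1] * np.sqrt(evals[-1]))

# Common (ML) factor loadings estimated from the standardized variables
z = (x - x.mean(0)) / x.std(0)
fa_load = np.abs(FactorAnalysis(n_components=1).fit(z).components_.ravel())

print("mean PC loading:", pc_load.mean())     # tends to exceed .60 (inflation)
print("mean FA loading:", fa_load.mean())     # tends to sit near .60
```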

3.
To determine the stability of regression equations, researchers have typically employed a cross-validation design in which weights are developed on an estimation subset of the sample and then applied to the members of a holdout sample. The present study used a Monte Carlo simulation to ascertain the accuracy with which the shrinkage in R² could be estimated by 3 formulas developed for this purpose. Results indicate that R. B. Darlington's (see record 1968-08053-001) and F. M. Lord's (1950) and G. E. Nicholson's (1960) formulas yielded mean estimates approximately equal to actual cross-validation values, but with smaller standard errors. Although the Wherry formula is a good estimate of the population multiple correlation, it overestimates the population cross-validity. It is advised that the researcher estimate weights on the total sample to maximize the stability of the regression equation and then estimate the shrinkage in R² to be expected in a new sample with either the Lord-Nicholson or the Darlington estimation formula. (17 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
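For reference, the sketch below implements commonly cited forms of the Wherry and Lord-Nicholson corrections. The exact expressions used in the article are not reproduced in the abstract, so these should be read as the standard textbook versions rather than the article's own.

```python
# Commonly cited correction formulas (assumed forms; not reproduced from the article).
# n = sample size, k = number of predictors, r2 = observed R^2.

def wherry(r2, n, k):
    """Wherry estimate of the squared population multiple correlation."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def lord_nicholson(r2, n, k):
    """Lord-Nicholson estimate of the squared population cross-validity."""
    return 1 - (1 - r2) * (n + k + 1) / (n - k - 1)

r2, n, k = 0.40, 100, 5
print(f"observed R^2    : {r2:.3f}")
print(f"Wherry estimate : {wherry(r2, n, k):.3f}")          # shrinks a little
print(f"Lord-Nicholson  : {lord_nicholson(r2, n, k):.3f}")  # shrinks more (cross-validity)
```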

4.
The precision achieved in measuring bone mineral density (BMD) by commercial dual-energy x-ray absorptiometry (DXA) machines is typically better than 1%, but accuracy is considerably worse. Errors, due to inhomogeneous distributions of fat, of up to 10% have been reported. These errors arise because the DXA technique assumes a two-component model for the human body, i.e. bone mineral and soft tissue. This paper describes an extended DXA technique that uses a three-component model of human tissue and significantly reduces errors due to inhomogeneous fat distribution. In addition to two x-ray transmission measurements, a measurement of the path length of the x-ray beam within the patient is required. This provides a third equation, i.e. T = t_s + t_b + t_f, where T, t_s, t_b and t_f are the total, lean soft tissue, bone mineral and fatty tissue thicknesses respectively. Monte Carlo modelling was undertaken to compare the standard and extended DXA techniques in the presence of inhomogeneous fat distribution. Two geometries of varying complexity were simulated. In each case the extended DXA technique produced BMD measurements that were independent of soft tissue composition, whereas the standard technique produced BMD measurements that were strongly dependent on soft tissue composition. For example, in one case, the gradients of the plots of BMD versus fractional fat content were (-0.183 +/- 0.037) g cm^-2 for standard DXA and (0.027 +/- 0.044) g cm^-2 for extended DXA. In all cases the extended DXA method produced more accurate but less precise results than the standard DXA technique.
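A minimal sketch of the three-component decomposition described here: taking logarithms of the two transmission measurements yields two linear equations in the lean, bone and fat thicknesses, and the measured total path length T supplies the third. The attenuation coefficients and thicknesses below are illustrative values only, not those used in the paper.

```python
# Sketch of the extended (three-component) DXA decomposition: solve a 3x3 linear
# system for the lean (t_s), bone (t_b) and fat (t_f) thicknesses. Coefficient
# values are illustrative assumptions, not the paper's.
import numpy as np

# linear attenuation coefficients (cm^-1) at the low and high x-ray energies
mu_lo = {"lean": 0.26, "bone": 1.00, "fat": 0.22}
mu_hi = {"lean": 0.20, "bone": 0.45, "fat": 0.18}

# simulate "measurements" from known thicknesses (cm)
t_true = np.array([15.0, 1.5, 5.0])                     # [lean, bone, fat]
A = np.array([
    [mu_lo["lean"], mu_lo["bone"], mu_lo["fat"]],       # ln(I0/I) at low energy
    [mu_hi["lean"], mu_hi["bone"], mu_hi["fat"]],       # ln(I0/I) at high energy
    [1.0,           1.0,           1.0          ],      # T = t_s + t_b + t_f
])
b = A @ t_true

t_solved = np.linalg.solve(A, b)
print("recovered thicknesses (lean, bone, fat):", t_solved)   # matches t_true
```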

5.
Several alternative procedures have been advocated for analyzing nonorthogonal ANOVA data. Two in particular, J. E. Overall and D. K. Spiegel's (see record 1970-01534-001) Methods 1 and 2, have been the focus of controversy. A Monte Carlo study was undertaken to explore the relative sensitivity and error rates of these 2 methods, in addition to M. I. Appelbaum and E. M. Cramer's (see record 1974-28956-001) procedure. Results of 2,250 3 × 3 ANOVAs conducted with each method and involving 3 underlying groups of population effects supported 3 hypotheses raised in the study: (a) Method 2 was more powerful than Method 1 in the absence of interaction; (b) Method 2 was biased upwards in the presence of interaction; and (c) Methods 1 and 2 both had Type I error rates close to those expected in the absence of interaction. In addition, it was found that in the absence of interaction, the Appelbaum and Cramer procedure was more powerful than Method 2 but slightly increased the Type I error rate. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
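Overall and Spiegel's Method 1 and Method 2 are commonly identified with what modern software labels Type III and Type II sums of squares. The hedged statsmodels sketch below applies both to an illustrative unbalanced 3 × 3 data set; the data and effect sizes are invented, not the study's simulated conditions.

```python
# Hedged sketch: Method 1 is usually identified with Type III sums of squares and
# Method 2 with Type II. Illustrative unbalanced 3 x 3 data with a main effect of
# A only (nonorthogonal because cell sizes are unequal).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
rows = []
for a in range(3):
    for b in range(3):
        n_cell = rng.integers(4, 10)                 # unequal cell sizes
        y = 0.5 * a + rng.normal(size=n_cell)        # effect of factor A only
        rows += [{"a": a, "b": b, "y": v} for v in y]
df = pd.DataFrame(rows)

fit = smf.ols("y ~ C(a, Sum) * C(b, Sum)", data=df).fit()
print(anova_lm(fit, typ=2))   # ~ Method 2: main effects adjusted for each other
print(anova_lm(fit, typ=3))   # ~ Method 1: each effect adjusted for all others
```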

6.
7.
8.
The exchange of energy in biochemical reactions involves, in a majority of cases, the hydrolysis of phosphoanhydrides (P-O-P). This discovery has led to a long discussion about the origin of the high energy of such bonds, and to the proposal that hydration plays a major role in the energetics of the hydrolysis. This idea was supported by recent ab initio quantum mechanical calculations (Saint-Martin et al. (1991) Biochim. Biophys. Acta 1080, 205-214), which predicted that the hydrolysis of pyrophosphate is exothermic in the gas phase. This exothermicity can account for only about half of the total energy release measured in aqueous solutions. Here we address the problem of hydration of the reactants and products of the pyrophosphate hydrolysis by means of Monte Carlo simulations, employing polarizable potentials whose parameters are fitted to energy surfaces computed at the SCF/6-31G** level of theory. The present results show that the hydration enthalpies of the reactants and products contribute significantly to the total energy output of the pyrophosphate hydrolysis. The study predicts that both the orthophosphate and the pyrophosphate have hydration spheres with the water molecules acting as proton acceptors in P-OH...O(water) hydrogen bonds. These water molecules weakly repel the water molecules in the outer hydration spheres. The perturbation of the structure of the solvent caused by the presence of the solute molecules is short ranged: beyond ca. 5 Å from the P atoms, the energy and the structure of water correspond to bulk water. Due mainly to nonadditive effects, the molecular structure of the hydrated pyrophosphate is quite different from that of two fused hydrated orthophosphates. The hydration sphere of pyrophosphate is very loose and has a limited effect on the water network, whereas that of orthophosphate has a well developed shell structure. Hence, upon hydration there will be both a gain in hydration enthalpy and a gain in entropy because of the distortion of the water molecular network.

9.
10.
We have investigated the application of Monte Carlo significance tests to the verification of reference ranges in the context of the transfer of an established range from one laboratory to another. Here we present an introduction to the Monte Carlo technique, outline a procedure for performing these tests using a commercially available software program, and demonstrate some of the operating characteristics of the tests when they are used to compare samples of different sizes and variances.
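The sketch below is a generic Monte Carlo significance test for this kind of reference-range transfer, assuming a normal reference distribution; it is not the specific procedure or commercial software described in the article.

```python
# Illustrative Monte Carlo significance test (not the article's procedure): is the
# receiving laboratory's sample consistent with a reference range established
# elsewhere? The reference population is modelled here as a normal distribution.
import numpy as np

rng = np.random.default_rng(2)
ref_mean, ref_sd = 140.0, 4.0          # established reference distribution (assumed)
new_lab = rng.normal(141.5, 4.0, 20)   # 20 specimens measured in the new laboratory

observed = new_lab.mean()
n_iter = 10_000
sim_means = rng.normal(ref_mean, ref_sd, (n_iter, new_lab.size)).mean(axis=1)

# two-sided Monte Carlo p-value for the observed difference in means
p = (np.abs(sim_means - ref_mean) >= abs(observed - ref_mean)).mean()
print(f"observed mean {observed:.2f}, Monte Carlo p = {p:.3f}")
```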

11.
It is well known that the dose calibrator response per unit exposure rate depends significantly upon source energy. However, investigation of 137Cs, 192Ir, and 226Ra brachytherapy sources by empirical, analytical, and Monte Carlo techniques shows that source filtration also significantly affects the conversion factor from calibrator reading to exposure rate. The results demonstrate that for each clinically used filtration thickness an exposure-calibrated standard source is required to establish the response of the well chamber. An interesting consequence of this analysis is that the Sievert point-dose algorithm for clinical sources overestimates the dose by on the order of 3% at distances of approximately 3.5 cm from the source.
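The Sievert point-dose algorithm mentioned above integrates the filtered contributions along the active length of the source. The following numerical sketch evaluates the classical Sievert line-source integral for an illustrative geometry and filtration; the values are assumptions, not taken from the paper.

```python
# Minimal numerical sketch of the classical Sievert line-source integral (the
# point-dose algorithm referred to above). Geometry and filtration values are
# illustrative only.
import numpy as np

def sievert_exposure(L, h, mu, t, n=2000):
    """Relative exposure rate on the perpendicular bisector of an active line of
    length L (cm), at distance h (cm), behind a filter of thickness t (cm) with
    effective attenuation coefficient mu (cm^-1)."""
    theta_max = np.arctan((L / 2) / h)
    theta = np.linspace(-theta_max, theta_max, n)
    integrand = np.exp(-mu * t / np.cos(theta))        # oblique filter path t*sec(theta)
    return integrand.mean() * (2 * theta_max) / (L * h)

unfiltered = sievert_exposure(L=3.0, h=3.5, mu=0.0, t=0.0)
filtered   = sievert_exposure(L=3.0, h=3.5, mu=2.0, t=0.05)   # e.g. 0.5 mm of cladding
print(f"filtration reduces the point exposure by {100 * (1 - filtered / unfiltered):.1f}%")
```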

12.
13.
The value of an attenuation equalizing filter (equalizer) has been examined by calculation using a Monte Carlo technique. The contrast enhancement caused by the equalizer is the net result of three factors: (1) the contrast change caused by the difference in detector characteristics at the different energies absorbed in the detector with and without the equalizer; (2) the contrast change caused by the different ratios of primary to total radiation reaching the detector; and (3) the change in the transparency of the object produced by the different primary filtration with the equalizer. An example is given of the reduction in the radiation energy absorbed by the patient, and the question of how far attenuation equalization can be pushed is discussed.

14.
Explored the use of transformations to improve power in within-subject designs in which multiple observations are collected from each subject in each condition, such as reaction time (RT) and psychophysiological experiments. Often the multiple measures within a treatment are simply averaged to yield a single number, but other transformations have been proposed. Monte Carlo simulations were used to investigate the influence of those transformations on the probabilities of Type I and Type II errors. With normally distributed data, Z and range-correction transformations led to substantial increases in power over simple averages. With highly skewed distributions, the optimal transformation depended on several variables, but Z and range correction performed well across conditions. Correction for outliers was useful in increasing power, and trimming was more effective than eliminating all points beyond a criterion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
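The following sketch shows plausible per-subject implementations of the transformations discussed here (Z-scoring, range correction, and trimming of outlying reaction times); the exact definitions used in the study may differ, so treat these as assumptions.

```python
# Sketch (assumed implementations) of per-subject transformations applied to the
# trials of a single condition: Z-scoring, range correction, and trimming.
import numpy as np

def z_transform(rt):
    return (rt - rt.mean()) / rt.std(ddof=1)

def range_correct(rt):
    return (rt - rt.min()) / (rt.max() - rt.min())

def trim(rt, k=2.5):
    """Drop trials more than k within-subject SDs from the subject's mean."""
    z = np.abs(z_transform(rt))
    return rt[z <= k]

rng = np.random.default_rng(3)
rt = rng.lognormal(mean=6.2, sigma=0.25, size=40)   # skewed RT-like data, one subject
print(f"raw mean      : {rt.mean():.1f}")
print(f"trimmed mean  : {trim(rt).mean():.1f}")
print(f"Z-scored mean : {z_transform(rt).mean():.3f}")   # ~0 by construction
```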

15.
16.
Application of the Monte Carlo simulation method to slope reliability analysis
Shi Yan (石岩), 《包钢科技》, 2001, 27(1): 8-11, 40
This paper gives a systematic account of how the Monte Carlo simulation method is applied to slope reliability analysis, together with a worked example, making the procedure practical and straightforward to apply.
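As a sketch of the kind of analysis the paper describes, the Python example below estimates the failure probability of an infinite slope by Monte Carlo sampling of cohesion and friction angle. The slope model and all parameter values are illustrative assumptions, not the paper's worked example.

```python
# Monte Carlo slope reliability sketch (illustrative infinite-slope model).
# Failure probability = P(factor of safety < 1) under random soil strength.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# random soil strength parameters (assumed means / standard deviations)
c   = rng.normal(15.0, 3.0, n)                 # cohesion, kPa
phi = np.radians(rng.normal(28.0, 3.0, n))     # friction angle, rad

# fixed slope geometry and unit weight (assumed)
gamma, z, beta = 18.0, 5.0, np.radians(35.0)   # kN/m^3, slip depth m, slope angle

fs = (c + gamma * z * np.cos(beta)**2 * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta))
print(f"mean FS = {fs.mean():.2f},  P(failure) = {(fs < 1.0).mean():.4f}")
```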

17.
18.
The article reports the findings from a Monte Carlo investigation examining the impact of faking on the criterion-related validity of Conscientiousness for predicting supervisory ratings of job performance. Based on a review of faking literature, 6 parameters were manipulated in order to model 4,500 distinct faking conditions (5 [magnitude] × 5 [proportion] × 4 [variability] × 3 [faking-Conscientiousness relationship] × 3 [faking-performance relationship] × 5 [selection ratio]). Overall, the results indicated that validity change is significantly affected by all 6 faking parameters, with the relationship between faking and performance, the proportion of fakers in the sample, and the magnitude of faking having the strongest effect on validity change. Additionally, the association between several of the parameters and changes in criterion-related validity was conditional on the faking-performance relationship. The results are discussed in terms of their practical and theoretical implications for using personality testing for employee selection. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
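A simplified sketch of one cell of such a simulation design (one setting of faking proportion, magnitude, variability and selection ratio; all numbers are invented) shows how faking can change criterion-related validity, particularly in the selected group:

```python
# Simplified single-condition sketch of the faking simulation described above.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
true_c = rng.standard_normal(n)                         # true Conscientiousness
perf   = 0.25 * true_c + np.sqrt(1 - 0.25**2) * rng.standard_normal(n)  # job performance

prop, magnitude, variability = 0.30, 1.0, 0.3           # proportion / size / spread of faking
fakers = rng.random(n) < prop
observed = true_c + fakers * rng.normal(magnitude, variability, n)      # faked scores

sel_ratio = 0.20
selected = observed >= np.quantile(observed, 1 - sel_ratio)             # top-down selection

print(f"validity, honest scores    : {np.corrcoef(true_c, perf)[0, 1]:.3f}")
print(f"validity, faked applicants : {np.corrcoef(observed, perf)[0, 1]:.3f}")
print(f"validity, selected group   : {np.corrcoef(observed[selected], perf[selected])[0, 1]:.3f}")
```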

19.
The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3 taxometric procedures—mean above minus below a cut (MAMBAC), maximum covariance (MAXCOV), and latent mode factor analysis (L-Mode)—accurately identified 1-class (dimensional) and 2-class (taxonic) samples and produced taxonic results when applied to 3-class (polytomous) samples. From these results it is concluded that using the simulated data curve approach and averaging across procedures is an effective way of distinguishing between dimensional (1-class) and categorical (2 or more classes) latent structure. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Monte Carlo studies provide the information needed to help researchers select appropriate analytical procedures under design conditions in which the underlying assumptions of the procedures are not met. In Monte Carlo studies, the 2 errors that one could commit involve (a) concluding that a statistical procedure is robust when it is not or (b) concluding that it is not robust when it is. In previous attempts to apply standard statistical design principles to Monte Carlo studies, the less severe of these errors has been wrongly designated the Type I error. In this article, a method is presented for controlling the appropriate Type I error rate; the determination of the number of iterations required in a Monte Carlo study to achieve desired power is described; and a confidence interval for a test's true Type I error rate is derived. A robustness criterion is also proposed that is a compromise between W. G. Cochran's (1952) and J. V. Bradley's (1978) criteria. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
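The sketch below uses the standard binomial and normal-approximation results for an empirical rejection rate: a confidence interval for a test's true Type I error rate, and the number of iterations needed to estimate it to a given precision. These are generic formulas, not necessarily the exact expressions derived in the article.

```python
# Standard binomial/normal-approximation calculations for a Monte Carlo robustness
# study (assumed generic forms, not the article's derivations).
import math

def type1_ci(rejections, n_iter, conf_z=1.96):
    """Normal-approximation CI for the true Type I error rate."""
    p_hat = rejections / n_iter
    half = conf_z * math.sqrt(p_hat * (1 - p_hat) / n_iter)
    return p_hat - half, p_hat + half

def iterations_needed(alpha=0.05, margin=0.005, conf_z=1.96):
    """Iterations so the CI half-width around a nominal alpha is at most `margin`."""
    return math.ceil(conf_z**2 * alpha * (1 - alpha) / margin**2)

print(type1_ci(rejections=620, n_iter=10_000))   # empirical rate 0.062 with its 95% CI
print(iterations_needed())                       # ~7300 iterations for +/-0.005 at 95%
```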
