1.
Confidence intervals of proposed individual bioequivalence metrics are difficult to determine in closed form because their stochastic distributions are unknown. In this article, it is shown that, with slightly modified weights, the Relative Individual Risks (RIR) moment-based scaled statistic for individual bioequivalence that was presented by Schall and Williams has an exact noncentral Fisher's F distribution with noncentrality parameter given by a scaled squared difference in formulation means. This can be approximated by a central F with adjusted degrees of freedom, from which it follows that an upper (1-alpha) confidence bound for RIR is given by [formula: see text], where [formula: see text] is the least squares estimate of RIR, dfER is the degrees of freedom associated with the reference intrasubject variance estimate, v is the subject-by-formulation degrees of freedom adjusted for noncentrality, and alpha is the significance level. Individual bioequivalence is concluded if the upper bound UL does not exceed the regulatory limit. The performance of this confidence interval was investigated by comparing its empirical bioequivalence rate with that of the unweighted metric, under known parameter values, in simulations of two formulations in a fully replicated study design. Results showed that the proposed metric is slightly less biased and more precise than the unweighted metric.
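The central-F approximation with adjusted degrees of freedom that the bound above relies on can be illustrated with standard moment matching (a Patnaik-type approximation). The sketch below is a generic Python illustration assuming SciPy; it is not the paper's RIR-specific bound, and the parameter values are arbitrary.

```python
# Minimal sketch: approximating a noncentral F distribution by a central F with
# adjusted numerator degrees of freedom (Patnaik-type moment matching).
# Generic illustration only, not the RIR-specific confidence bound from the paper.
from scipy import stats

def patnaik_central_f_cdf(x, dfn, dfd, nc):
    """Approximate P(F'_{dfn,dfd}(nc) <= x) with a scaled central F."""
    # Match the first two moments of the noncentral chi-square numerator:
    # chi2_dfn(nc) ~ c * chi2_h with h = (dfn+nc)^2/(dfn+2nc), c = (dfn+2nc)/(dfn+nc)
    h = (dfn + nc) ** 2 / (dfn + 2 * nc)
    # The scaled numerator turns F'(dfn, dfd; nc) into ((dfn+nc)/dfn) * F(h, dfd)
    scale = (dfn + nc) / dfn
    return stats.f.cdf(x / scale, h, dfd)

# Compare with SciPy's exact noncentral F for one illustrative parameter setting
x, dfn, dfd, nc = 2.5, 3, 20, 4.0
print(stats.ncf.cdf(x, dfn, dfd, nc))          # exact noncentral F CDF
print(patnaik_central_f_cdf(x, dfn, dfd, nc))  # central-F approximation
```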
2.
I. Olkin and J. D. Finn (1995) presented 2 methods for comparing squared multiple correlation coefficients for 2 independent samples. In 1 method, the researcher constructs a confidence interval for the difference between 2 population squared coefficients; in the 2nd method, a Fisher-type transformation of the sample squared correlation coefficient is used to obtain a test statistic. Both methods are based on asymptotic theory and use approximations to the sampling variance. The approximations are incorrect when the population multiple correlation coefficient is zero. The 2 procedures were examined for equal and unequal population multiple correlation coefficients in combination with equal and unequal sample sizes. As expected, the procedures were inaccurate when the population multiple correlation coefficients were zero or very small and, in some conditions, were inaccurate when sample sizes and coefficients were unequal.
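As a rough illustration of the first (confidence-interval) approach, the sketch below uses the common large-sample approximation Var(R²) ≈ 4ρ²(1 − ρ²)²/n, evaluated at the sample values. It is not Olkin and Finn's exact expression, and, as the abstract notes, this approximation breaks down when the population coefficient is at or near zero; the inputs are made up.

```python
# Minimal sketch: asymptotic confidence interval for the difference between two
# independent squared multiple correlations, using the common large-sample
# approximation Var(R^2) ~= 4*rho^2*(1 - rho^2)^2 / n evaluated at the sample R^2.
# Illustrative only; it degenerates when the population coefficient is near zero.
import math
from scipy import stats

def r2_diff_ci(r2_1, n1, r2_2, n2, alpha=0.05):
    var1 = 4 * r2_1 * (1 - r2_1) ** 2 / n1
    var2 = 4 * r2_2 * (1 - r2_2) ** 2 / n2
    se = math.sqrt(var1 + var2)
    z = stats.norm.ppf(1 - alpha / 2)
    diff = r2_1 - r2_2
    return diff - z * se, diff + z * se

print(r2_diff_ci(0.40, 200, 0.25, 180))   # made-up sample values
```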
3.
Used a hierarchical multiple regression analysis to predict the number of sessions completed by 90 patients in a headache-treatment program. Demographic variables accounted for 18% of the variance. …
4.
This paper deals with analysis of data from longitudinal studies where the rate of a recurrent event characterizing morbidity is the primary criterion for treatment evaluation. We consider clinical trials which require patients to visit their clinical center at successive scheduled times as part of follow-up. At each visit, the patient reports the number of events that occurred since the previous visit, or an examination reveals the number of accumulated events, such as skin cancers. The exact occurrence times of the events are unavailable and the actual patient visit times typically vary randomly about the scheduled follow-up times. Each patient's record thus consists of a sequence of clinic visit dates, event counts corresponding to the successive time intervals between clinic visits, and baseline covariates. We propose a semiparametric regression model, extending the fully parametric model of Thall (1988, Biometrics 44, 197-209), to estimate and test for covariate effects on the rate of events over time while also accounting for the possibly time-varying nature of the underlying event rate. Covariate effects enter the model parametrically, while the underlying time-varying event rate is modelled nonparametrically. The method of Severini and Wong (1992, Annals of Statistics 20, 1768-1802) is used to construct asymptotically efficient estimators of the parametric component and to specify their asymptotic distribution. A simulation study and application to a data set are provided.
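As a simplified, fully parametric point of reference for the data layout described above (event counts per between-visit interval, varying interval lengths, baseline covariates), the sketch below fits a plain Poisson regression with a log interval-length offset using statsmodels. It is not the semiparametric estimator of the paper and omits the nonparametric time-varying baseline rate; all names and values are illustrative.

```python
# Minimal sketch of the data layout: event counts per between-visit interval with a
# log(interval length) offset and a baseline covariate. Plain Poisson regression,
# i.e. a simplified fully parametric analogue, not the paper's semiparametric model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs = 500
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n_obs),        # baseline covariate
    "interval_len": rng.uniform(0.5, 1.5, n_obs),  # time between clinic visits
})
rate = np.exp(0.2 - 0.5 * df["treatment"])            # true event rate per unit time
df["count"] = rng.poisson(rate * df["interval_len"])  # events reported at each visit

model = sm.GLM(df["count"], sm.add_constant(df[["treatment"]]),
               family=sm.families.Poisson(),
               offset=np.log(df["interval_len"]))
print(model.fit().summary())
```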
5.
JK Lindsey. Lifetime Data Analysis, 1998, 4(4): 329-354
Parametric models for interval censored data can now easily be fitted with minimal programming in certain standard statistical software packages. Regression equations can be introduced, both for the location and for the dispersion parameters. Finite mixture models can also be fitted, with a point mass on right (or left) censored observations, to allow for individuals who cannot have the event (or already have it). This mixing probability can also be allowed to follow a regression equation. Here, models based on nine different distributions are compared for three examples of heavily censored data as well as a set of simulated data. We find that, for parametric models, interval censoring can often be ignored and that the density, at centres of intervals, can be used instead in the likelihood function, although the approximation is not always reliable. In the context of heavily interval censored data, the conclusions from parametric models are remarkably robust with changing distributional assumptions and generally more informative than the corresponding non-parametric models.
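The comparison described above, between the exact interval-censored likelihood and the shortcut of evaluating the density at interval midpoints, can be sketched for a single Weibull model. This is an assumption-laden illustration (one distribution, simulated unit-width intervals, no right censoring or cure fraction), not the analyses reported in the paper.

```python
# Minimal sketch contrasting the exact interval-censored log-likelihood,
# sum log[F(b_i) - F(a_i)], with the midpoint-density approximation,
# sum log f((a_i + b_i)/2), for a Weibull model fitted by direct optimisation.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = stats.weibull_min.rvs(1.5, scale=2.0, size=300, random_state=rng)
a = np.floor(t)          # interval-censored observation: event occurred in (a, a+1]
b = a + 1.0

def negloglik_interval(params):
    shape, scale = np.exp(params)        # log-parameterisation keeps both positive
    dist = stats.weibull_min(shape, scale=scale)
    return -np.sum(np.log(dist.cdf(b) - dist.cdf(a) + 1e-300))

def negloglik_midpoint(params):
    shape, scale = np.exp(params)
    dist = stats.weibull_min(shape, scale=scale)
    return -np.sum(np.log(dist.pdf((a + b) / 2) + 1e-300))

for nll in (negloglik_interval, negloglik_midpoint):
    fit = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    print(nll.__name__, np.exp(fit.x))   # estimated (shape, scale) under each likelihood
```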
6.
Describes how an apparent contradiction between the methods of coding dummy variables proposed by J. Cohen (see record 1969-06106-001) and those by J. Overall and D. Spiegel (see record 1970-01534-001) led to the discovery of a general formula for such coding, based on demonstrating a theoretical connection between multiple comparison and dummy multiple regression. Examples are given for various cases of orthogonal and nonorthogonal designs, which explicitly include assumptions about sample size.
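A minimal illustration of the coding issue: the same one-way design can be entered into a regression with treatment (dummy) or deviation (effect) contrasts; the omnibus fit is identical and only the meaning of the individual coefficients changes. The sketch below assumes statsmodels/patsy contrast notation and made-up data; it does not reproduce Cohen's or Overall and Spiegel's specific schemes.

```python
# Minimal sketch: one three-group design coded two ways, dummy (treatment) coding
# and effect (deviation) coding. Both yield the same R^2; coefficients differ in meaning.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({"group": np.repeat(["a", "b", "c"], 20)})
df["y"] = rng.normal(loc=df["group"].map({"a": 0, "b": 1, "c": 3}), scale=1.0)

dummy = smf.ols("y ~ C(group, Treatment)", data=df).fit()
effect = smf.ols("y ~ C(group, Sum)", data=df).fit()
print(dummy.rsquared, effect.rsquared)   # identical omnibus fit
print(dummy.params)                      # contrasts against reference group "a"
print(effect.params)                     # deviations from the grand mean
```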
7.
Comments on an article by F. J. Landy et al (see record 1981-00274-001) that suggests a method for excluding halo variance in rating scales. It is argued that this approach may result in excluding true variance. The present article presents a conceptualization of the halo effect in terms of a suppressor variable. Accordingly, a multiple regression approach for the treatment of halo variance is suggested. (13 ref)
8.
This article develops equations for determining the asymptotic confidence limits for the difference between 2 squared multiple correlation coefficients. The present procedure uses the delta method described by I. Olkin and J. D. Finn (1995) but does not require the variance-covariance matrix and the partial derivatives for all the zero-order correlations that enter into the expression for the difference, as does their procedure. This simplified approach can lead to an extreme reduction in the calculations required, as well as a reduction in the mathematical complexity of the solution. This approach also demonstrates clearly that in some cases, it may be inappropriate to use the asymptotic confidence limits in tests of significance.
9.
The method of moderated multiple regression is increasingly being applied in the search for moderator variables in industrial and organizational psychology. Because of frequent failures of the method in revealing moderator effects in empirical studies—in which such effects are strongly expected—it has been suggested that the procedure may lack statistical power with respect to hypothesis tests about moderating effects and, therefore, is inappropriate for the purposes of conventional moderator analyses. We evaluated this conclusion with computer simulation data. Our study indicated that the method is not overly conservative and that the Type I error rate of moderated multiple regression is approximately .05 at α = .05. Moreover, a proposed alternative multivariate procedure, principal component regression, is shown to have a Type I error rate that approaches unity under ordinary conditions when applied to the evaluation of moderator effects.
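A small simulation in the spirit of the study is sketched below: data are generated with no population interaction, a moderated multiple regression with the product term x·z is fitted, and the rejection rate of the interaction test is tallied. Sample size and replication count are arbitrary choices, not those of the original simulations.

```python
# Minimal simulation sketch: Type I error of the moderated-multiple-regression
# interaction test when the true moderator effect is zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, reps, alpha = 100, 2000, 0.05
rejections = 0
for _ in range(reps):
    x = rng.normal(size=n)
    z = rng.normal(size=n)
    y = 0.3 * x + 0.3 * z + rng.normal(size=n)   # no x*z interaction in the population
    X = sm.add_constant(np.column_stack([x, z, x * z]))
    p_interaction = sm.OLS(y, X).fit().pvalues[3]  # p value of the product term
    rejections += p_interaction < alpha
print(rejections / reps)   # should be close to the nominal .05
```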
10.
Combined significance tests (combined p values) and tests of the weighted mean effect size are used to combine information across studies in meta-analysis. A combined significance test (Stouffer test) is compared with a test based on the weighted mean effect size as tests of the same null hypothesis. The tests are compared analytically in the case in which the within-group variances are known and compared through large-sample theory in the more usual case in which the variances are unknown. Generalizations suggested are then explored through a simulation study. This work demonstrates that the test based on the average effect size is usually more powerful than the Stouffer test unless there is a substantial negative correlation between within-study sample size and effect size. Thus, the test based on the average effect size is generally preferable, and there is little reason to also calculate the Stouffer test.
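The two pooling strategies being compared can be written down directly. The sketch below computes both the Stouffer combined test and the z test of the precision-weighted mean effect size for a handful of made-up studies; it is illustrative only and does not reproduce the paper's analytic comparison.

```python
# Minimal sketch of the two pooling strategies:
# (1) Stouffer's combined test, Z = sum(z_i) / sqrt(k), from one-sided study p values;
# (2) a z test of the precision-weighted mean effect size, with weights w_i = 1 / v_i.
# Effect sizes, variances, and hence p values are made-up illustrative inputs.
import numpy as np
from scipy import stats

d = np.array([0.30, 0.45, 0.10, 0.25])       # study effect-size estimates
v = np.array([0.040, 0.055, 0.030, 0.050])   # their sampling variances
p_one_sided = stats.norm.sf(d / np.sqrt(v))  # study-level one-sided p values

# (1) Stouffer combined significance test
z_stouffer = stats.norm.isf(p_one_sided).sum() / np.sqrt(len(p_one_sided))

# (2) test of the weighted mean effect size
w = 1 / v
d_bar = np.sum(w * d) / np.sum(w)
z_mean = d_bar * np.sqrt(np.sum(w))

print(z_stouffer, stats.norm.sf(z_stouffer))
print(z_mean, stats.norm.sf(z_mean))
```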
11.
Despite the development of procedures for calculating sample size as a function of relevant effect size parameters, rules of thumb tend to persist in designs of multiple regression studies. One explanation for their persistence may be the difficulty in formulating a reasonable a priori value of an effect size to be detected. This article presents methods for calculating effect sizes in multiple regression from a variety of perspectives and also introduces a new method based on an exchangeability structure among predictor variables. No single method is deemed superior, but rather examples show that a combination of methods is likely to be most valuable in many situations. A simulation provides a 2nd explanation for why rules of thumb for choosing sample size have persisted but also shows that the outcome of such underpowered studies will be a literature consisting of seemingly contradictory results.
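One concrete way to turn an a priori effect size into a sample-size calculation is via Cohen's f² = R²/(1 − R²) and the noncentral F distribution, with noncentrality f²(u + v + 1), which for the overall regression test equals f²·n. The sketch below tabulates power for a few sample sizes under an assumed f² and number of predictors; the specific values are illustrative, not taken from the article.

```python
# Minimal sketch: power of the overall multiple-regression F test for a given
# effect size f^2 = R^2 / (1 - R^2), using the noncentral F with lambda = f^2 * n.
from scipy import stats

def regression_power(f2, n, k, alpha=0.05):
    dfn, dfd = k, n - k - 1
    crit = stats.f.isf(alpha, dfn, dfd)          # critical value of the central F
    return stats.ncf.sf(crit, dfn, dfd, f2 * n)  # P(reject) under the noncentral F

for n in (30, 50, 100, 200):
    print(n, round(regression_power(f2=0.10, n=n, k=5), 3))
```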
12.
A general method is presented for comparing the relative importance of predictors in multiple regression. Dominance analysis (D. V. Budescu, 1993), a procedure that is based on an examination of the R² values for all possible subset models, is refined and extended by introducing several quantitative measures of dominance that differ in the strictness of the dominance definition. These are shown to be intuitive, meaningful, and informative measures that can address a variety of research questions pertaining to predictor importance. The bootstrap is used to assess the stability of dominance results across repeated sampling, and it is shown that these methods provide the researcher with more insights into the pattern of importance in a set of predictors than were previously available.
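The core computation behind dominance analysis, fitting all possible subset models and summarising each predictor's average increment in R², can be sketched compactly. The code below computes a general-dominance-style summary on simulated data; it omits the bootstrap stability assessment and the stricter dominance definitions discussed in the article.

```python
# Minimal sketch of the all-subsets computation behind dominance analysis: fit every
# subset model, record its R^2, and summarise each predictor by its average increment
# in R^2 within each subset size and then across sizes ("general dominance" style).
from itertools import combinations
import numpy as np

rng = np.random.default_rng(4)
n = 200
X = rng.normal(size=(n, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)
names = ["x1", "x2", "x3"]

def r2(cols):
    if not cols:
        return 0.0
    Z = np.column_stack([np.ones(n)] + [X[:, i] for i in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return 1.0 - resid.var() / y.var()

subset_r2 = {s: r2(list(s)) for k in range(4) for s in combinations(range(3), k)}

for j, name in enumerate(names):
    by_size = {}
    for s, base in subset_r2.items():
        if j in s:
            continue
        inc = subset_r2[tuple(sorted(s + (j,)))] - base   # R^2 gain from adding predictor j
        by_size.setdefault(len(s), []).append(inc)
    general = np.mean([np.mean(v) for v in by_size.values()])
    print(name, round(float(general), 3))
```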
13.
Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on feature repetition and/or integration effects (e.g., B. Hommel, R. W. Proctor, & K.-P. Vu, 2004; U. Mayr, E. Awh, & P. Laurey, 2003). Previous attempts to dissociate feature integration from conflict adaptation focused on a particular subset of the data in which feature transitions were held constant (J. G. Kerns et al., 2004) or in which congruency transitions were held constant (C. Akcay & E. Hazeltine, in press), but this has a number of disadvantages. In this article, the authors present a multiple regression solution for this problem and discuss its possibilities and pitfalls.
14.
Studied the relationship between grades professors give their students and ratings those students give their professors, using multivariate techniques and a large sample size (2,360 of 2,449 course sections taught in the spring semester of 1973) to avoid weaknesses of previous studies. Results show the following: (a) Only one factor was present among the 8 rating items; (b) the correlation between average student grade in each course section and average student rating of the teacher of that section was .35; (c) average grade was the best predictor of average rating; and (d) when average grade was added to several other available predictors, it significantly improved the multiple correlation from .25 to .39. Findings suggest that students' grades probably do influence their ratings of faculty, accounting for about 9% of the total variance (.39² − .25² ≈ .09).
15.
Missing effect-size estimates pose a particularly difficult problem in meta-analysis. Rather than discarding studies with missing effect-size estimates or setting missing effect-size estimates equal to 0, the meta-analyst can supplement effect-size procedures with vote-counting procedures if the studies report the direction of results or the statistical significance of results. By combining effect-size and vote-counting procedures, the meta-analyst can obtain a less biased estimate of the population effect size and a narrower confidence interval for the population effect size. This article describes 3 vote-counting procedures for estimating the population correlation coefficient in studies with missing sample correlations. Easy-to-use tables, based on equal sample sizes, are presented for the 3 procedures. More complicated vote-counting procedures also are given for unequal sample sizes.
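One simple version of such a vote-counting estimator, assuming a common sample size across studies and using the Fisher-z normal approximation, is sketched below. It conveys the idea of recovering a correlation from the proportion of positive results but is not one of the specific procedures tabulated in the article.

```python
# Minimal sketch of a vote-counting estimator of the population correlation from the
# proportion of studies reporting a positive result, assuming a common sample size n.
# Uses the Fisher-z approximation z(r) ~ N(z(rho), 1/(n-3)),
# so P(r > 0) = Phi(z(rho) * sqrt(n - 3)), which is inverted at the observed proportion.
import numpy as np
from scipy import stats

def vote_count_rho(k_positive, k_total, n):
    p_hat = k_positive / k_total
    z_rho = stats.norm.ppf(p_hat) / np.sqrt(n - 3)
    return np.tanh(z_rho)   # back-transform Fisher z to a correlation

# e.g. 17 of 20 studies (each with n = 40) reported r > 0
print(round(vote_count_rho(17, 20, 40), 3))
```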
16.
The effect of sorting procedures on ranking error was investigated. Different groups of Ss ranked a series of 50 stimulus cards using 5 different sorting methods. Significant differences in ranking errors among the 5 methods were observed, with a "free" procedure showing less error than "structured" procedures.
17.
Comments on J. Cohen's (see record 1995-12080-001) suggestion that experimenters should calculate confidence intervals. Two different ways of interpreting a confidence interval are discussed. The author asks how Cohen can criticize the logic of null hypothesis significance testing and then recommend reporting a statistic that relies on this logic.
18.
Presents improved procedures to approximate confidence intervals for ρ² and ρc² in both fixed and random predictor models. These approximations require neither point estimates nor variance estimates and are analytically shown to be precise enough for most practical prediction purposes. An application of confidence intervals in regression model development is also given. (16 ref)
19.
20.
Discusses multivariate analysis of variance as a general case of familiar multiple regression analysis. A consequence of this approach is a unified treatment of multivariate analysis of variance which can be used by psychologists who are generally familiar with multiple regression approaches to univariate analysis of variance. It is suggested that the generality of the approach permits solutions consistent with any of the several available strategies for dealing with problems of unequal and disproportionate cell frequencies. Inherent in the multiple regression formulation is the otherwise not so obvious fact that univariate analysis of variance results are an integral part of the multivariate solution and that both are important for understanding complex data. Methods of interpreting multivariate analysis of variance results in complex factorial experimental designs are discussed. (32 ref)
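The point that the univariate analyses are embedded in the multivariate solution can be seen by fitting the same design both ways. The sketch below, assuming statsmodels, runs a one-factor MANOVA on two outcomes and the corresponding univariate regressions; the data and factor levels are made up.

```python
# Minimal sketch: the same one-way design analysed as a MANOVA and as per-outcome
# univariate regressions (ANOVAs). Data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(5)
df = pd.DataFrame({"group": np.repeat(["a", "b", "c"], 30)})
shift = df["group"].map({"a": 0.0, "b": 0.5, "c": 1.0})
df["y1"] = rng.normal(loc=shift)
df["y2"] = rng.normal(loc=0.5 * shift)

# Multivariate tests (Wilks' lambda, Pillai's trace, etc.)
print(MANOVA.from_formula("y1 + y2 ~ C(group)", data=df).mv_test())

# The univariate results come from the same regression formulation, one outcome at a time
for outcome in ("y1", "y2"):
    fit = smf.ols(f"{outcome} ~ C(group)", data=df).fit()
    print(outcome, round(fit.rsquared, 3), round(fit.f_pvalue, 4))
```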