Similar Literature
20 similar documents found.
1.
An experiment to assess the efficacy of a particular treatment or process often produces dichotomous responses, either favourable or unfavourable. When we administer the treatment on two occasions to the same subjects, we often use McNemar's test to investigate the hypothesis of no difference in the proportions on the two occasions, that is, the hypothesis of marginal homogeneity. A disadvantage of McNemar's statistic is that it estimates the variance of the sample difference under the restriction that the marginal proportions are equal. A competitor is a Wald statistic that uses an unrestricted estimator of the variance. Because the Wald statistic tends to reject too often in small samples, we investigate an adjusted form that is useful for constructing confidence intervals. We adapt the construction methods discussed by Quesenberry and Hurst and by Goodman to build confidence intervals for differences in correlated proportions. We empirically compare the coverage probabilities and average interval lengths of the competing methods through simulation and give recommendations based on the results.
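
As a concrete illustration, the sketch below computes McNemar's statistic and a Wald-type interval built from the unrestricted variance. The add-0.5-to-each-discordant-cell adjustment is one simple small-sample choice assumed here for illustration; the paper's own adjustment may differ, as do the invented counts.

```python
import math

def mcnemar_stat(b, c):
    """McNemar's chi-squared statistic: the variance is estimated under
    the restriction that the marginal proportions are equal."""
    return (b - c) ** 2 / (b + c)

def adjusted_wald_ci(b, c, n, z=1.96):
    """Wald-type interval for the difference in correlated proportions
    using the unrestricted variance.  Adding 0.5 to each discordant cell
    is an assumed illustrative adjustment, not necessarily the paper's."""
    b, c = b + 0.5, c + 0.5
    d = (b - c) / n                               # estimated difference
    var = ((b + c) - (b - c) ** 2 / n) / n ** 2   # unrestricted variance
    half = z * math.sqrt(var)
    return d - half, d + half

# Hypothetical paired 2x2 table: 15 vs. 6 discordant pairs among n = 100.
print(mcnemar_stat(15, 6))           # about 3.86
print(adjusted_wald_ci(15, 6, 100))  # about (-0.000, 0.180)
```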

2.
A shared parameter model with logistic link is presented for longitudinal binary response data to accommodate informative drop-out. The model consists of observed longitudinal and missing response components that share random effects parameters. To our knowledge, this is the first presentation of such a model for longitudinal binary response data. Comparisons are made to an approximate conditional logit model using a clinical trial dataset and simulations. The naive mixed effects logit model, which does not account for informative drop-out, is also compared. The simulation-based differences among the models with respect to coverage of confidence intervals, bias, and mean squared error (MSE) depend on at least two factors: whether an effect is a between- or within-subject effect and the amount of between-subject variation as exhibited by the variance components of the random effects distributions. When the shared parameter model holds, the approximate conditional model provides confidence intervals with good coverage for within-cluster factors but not for between-cluster factors. The converse is true for the naive model. Under a different drop-out mechanism, in which the probability of drop-out depends only on the current unobserved observation, all three models behave similarly, providing between-subject confidence intervals with good coverage and comparable MSE and bias but poor within-subject confidence intervals, MSE, and bias. The naive model does more poorly with respect to the within-subject effects than do the shared parameter and approximate conditional models. The data analysis, which entails a comparison of two pain relievers and a placebo with respect to pain relief, conforms to the simulation results based on the shared parameter model but not to those based on the outcome-driven drop-out process. This agreement between the data analysis and the simulation results may provide evidence that the shared parameter model holds for the pain data.
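
A minimal simulation sketch makes the mechanism concrete: a single random intercept b_i enters both the logistic response model and the logistic drop-out model, which is exactly what renders the missingness informative. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

n_subj, n_times = 200, 5
beta0, beta_trt, sigma_b = -0.5, 1.0, 1.2   # response model (hypothetical)
gamma0, gamma_b = -2.0, 0.8                 # drop-out model shares b_i

trt = rng.integers(0, 2, n_subj)            # between-subject treatment
b = rng.normal(0.0, sigma_b, n_subj)        # shared random intercept

records = []
for i in range(n_subj):
    for t in range(n_times):
        # The drop-out probability depends on the same random effect b_i,
        # so subjects with extreme b_i leave the study preferentially.
        if t > 0 and rng.random() < expit(gamma0 + gamma_b * b[i]):
            break
        y = rng.random() < expit(beta0 + beta_trt * trt[i] + b[i])
        records.append((i, t, trt[i], int(y)))

print(f"{len(records)} of {n_subj * n_times} planned observations retained")
```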

3.
Cost-effectiveness ratios usually appear as point estimates without confidence intervals, since the numerator and denominator are both stochastic and one cannot estimate the variance of the estimator exactly. The recent literature, however, stresses the importance of presenting confidence intervals for cost-effectiveness ratios in the analysis of health care programmes. This paper compares several methods of obtaining confidence intervals for the cost-effectiveness of a randomized intervention to increase the use of Medicaid's Early and Periodic Screening, Diagnosis and Treatment (EPSDT) programme. Comparisons of the intervals show that methods that account for skewness in the distribution of the ratio estimator may be substantially preferable in practice to methods that assume the estimator is normally distributed. We show that non-parametric bootstrap methods that are mathematically less complex but computationally more intensive yield confidence intervals similar to those from a parametric method that adjusts for skewness in the distribution of the ratio. The analyses also show that the modest sample sizes needed to detect statistically significant effects in a randomized trial may result in confidence intervals for estimates of cost-effectiveness that are much wider than the boundaries obtained from deterministic sensitivity analyses.
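
The nonparametric bootstrap compared in the paper can be outlined in a few lines: resample each arm with replacement, recompute the ratio of the mean cost difference to the mean effect difference, and take percentiles. The data below are synthetic placeholders, not the EPSDT trial data, and percentile intervals can misbehave when the effect difference is near zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def icer_bootstrap_ci(cost_t, eff_t, cost_c, eff_c, n_boot=5000, alpha=0.05):
    """Percentile bootstrap interval for the cost-effectiveness ratio;
    a minimal sketch, not the paper's exact procedure."""
    ratios = np.empty(n_boot)
    for i in range(n_boot):
        t = rng.integers(0, len(cost_t), len(cost_t))   # resample treatment arm
        c = rng.integers(0, len(cost_c), len(cost_c))   # resample control arm
        d_cost = cost_t[t].mean() - cost_c[c].mean()
        d_eff = eff_t[t].mean() - eff_c[c].mean()
        ratios[i] = d_cost / d_eff
    return np.quantile(ratios, [alpha / 2, 1 - alpha / 2])

# Synthetic illustration data (invented, not from the EPSDT study).
cost_t, eff_t = rng.gamma(2.0, 150.0, 120), rng.binomial(1, 0.55, 120)
cost_c, eff_c = rng.gamma(2.0, 100.0, 120), rng.binomial(1, 0.40, 120)
print(icer_bootstrap_ci(cost_t, eff_t, cost_c, eff_c))
```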

4.
In survival analysis, estimates of median survival times in homogeneous samples are often based on the Kaplan-Meier estimator of the survivor function. Confidence intervals for quantiles, such as median survival, are typically constructed via large sample theory or the bootstrap. The former has suspect accuracy for small sample sizes under moderate censoring, and the latter is computationally intensive. In this paper, improvements on so-called test-based intervals and reflected intervals (cf. Slud, Byar, and Green, 1984, Biometrics 40, 587-600) are sought. Using the Edgeworth expansion for the distribution of the studentized Nelson-Aalen estimator derived in Strawderman and Wells (1997, Journal of the American Statistical Association 92), we propose a method for producing more accurate confidence intervals for quantiles with randomly censored data. The intervals are very simple to compute, and numerical results using simulated data show that our new test-based interval outperforms commonly used methods for small sample sizes and/or heavy censoring, especially with regard to maintaining the specified coverage.
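
For orientation, the sketch below implements the classic test-based interval that the paper refines: the interval is the set of event times t at which the studentized Kaplan-Meier estimate does not reject S(t) = 0.5. The Edgeworth-corrected construction itself is not reproduced, and the censored sample is simulated.

```python
import numpy as np

def km_with_greenwood(time, event):
    """Kaplan-Meier estimate with Greenwood standard errors at event times."""
    uniq = np.unique(time[event == 1])
    surv, se, s, gw = [], [], 1.0, 0.0
    for t in uniq:
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / n_risk
        gw = gw + d / (n_risk * (n_risk - d)) if n_risk > d else np.inf
        surv.append(s)
        se.append(s * np.sqrt(gw))
    return uniq, np.array(surv), np.array(se)

def test_based_median_ci(time, event, z=1.96):
    """All event times t with |S(t) - 0.5| <= z * se(t): the baseline
    test-based interval for the median."""
    t, s, se = km_with_greenwood(time, event)
    ok = np.abs(s - 0.5) <= z * se
    return (t[ok].min(), t[ok].max()) if ok.any() else (None, None)

rng = np.random.default_rng(2)
t_true, t_cens = rng.exponential(10.0, 60), rng.exponential(15.0, 60)
obs, event = np.minimum(t_true, t_cens), (t_true <= t_cens).astype(int)
print(test_based_median_ci(obs, event))
```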

5.
When estimating a survival time distribution, the loss of information due to right censoring results in a loss of efficiency in the estimator. In many circumstances, however, repeated measurements on a longitudinal process associated with survival time are made throughout the observation period, and these measurements may be used to recover information lost to censoring. For example, patients in an AIDS clinical trial may be measured at regular intervals on CD4 count and viral load. We describe a model for the joint distribution of a survival time and a repeated measures process. The joint distribution is specified by linking the survival time to subject-specific random effects characterizing the repeated measures, and is similar in form to the pattern mixture model for multivariate data with nonignorable nonresponse. We also describe an estimator of survival derived from this model. We apply the methods to a long-term AIDS clinical trial and study the properties of the survival estimator. Monte Carlo simulation is used to estimate the gains in efficiency when the survival time is related to the location and scale of the random effects distribution. Under relatively light censoring (20%), the methods yield a modest gain in efficiency for estimating three-year survival in the AIDS clinical trial. Our simulation study, which mimics characteristics of the clinical trial, indicates that much larger gains in efficiency can be realized under heavier censoring or in studies designed for long-term follow-up on survival.
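
The linkage between survival and the longitudinal process can be sketched as a simulation: subject-specific intercepts and slopes generate the repeated measures, and the hazard of the survival time depends on those same effects. Every numeric value below is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(3)

n, times = 300, np.arange(0.0, 3.0, 0.5)
b0 = rng.normal(5.0, 1.0, n)     # subject-specific intercepts (location)
b1 = rng.normal(-0.3, 0.15, n)   # subject-specific slopes

# Repeated measures (e.g., a CD4-like marker) built from the random effects.
Y = b0[:, None] + b1[:, None] * times + rng.normal(0, 0.5, (n, len(times)))

# The survival hazard is linked to the location and slope of the process.
rate = np.exp(-2.0 - 0.3 * b0 - 2.0 * b1)
T = rng.exponential(1.0 / rate)
C = rng.exponential(np.quantile(T, 0.8), n)   # independent right censoring
obs, event = np.minimum(T, C), (T <= C).astype(int)
print("observed censoring fraction:", round(1 - event.mean(), 2))
```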

6.
Indices of positive and negative agreement for observer reliability studies, in which neither observer can be regarded as the standard, have been proposed. In this article, it is demonstrated by means of an example and a small simulation study that a recently published method for constructing confidence intervals for these indices leads to intervals that are too wide. Appropriate asymptotic (i.e., large sample) variance estimates and confidence intervals for the positive and negative agreement indices are presented and compared with bootstrap confidence intervals. We also discuss an alternative method of interval estimation motivated from a Bayesian viewpoint. The asymptotic intervals performed adequately for sample sizes of 200 or more. For smaller samples, alternative confidence intervals such as bootstrap intervals or Bayesian intervals should be considered.
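
For a 2x2 table [[a, b], [c, d]] with a = both raters positive and d = both negative, the indices are p_pos = 2a/(2a + b + c) and p_neg = 2d/(2d + b + c). The sketch below computes them together with a multinomial bootstrap interval, one of the alternatives the article considers; the counts are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def agreement_indices(a, b, c, d):
    """Positive and negative agreement for a 2x2 reliability table."""
    return 2 * a / (2 * a + b + c), 2 * d / (2 * d + b + c)

def bootstrap_ci(a, b, c, d, n_boot=5000, alpha=0.05):
    """Percentile bootstrap intervals obtained by resampling the whole
    table from a multinomial -- a sketch, not the article's asymptotic
    or Bayesian intervals."""
    n = a + b + c + d
    cells = rng.multinomial(n, np.array([a, b, c, d]) / n, size=n_boot)
    ppos = 2 * cells[:, 0] / (2 * cells[:, 0] + cells[:, 1] + cells[:, 2])
    pneg = 2 * cells[:, 3] / (2 * cells[:, 3] + cells[:, 1] + cells[:, 2])
    q = [alpha / 2, 1 - alpha / 2]
    return np.quantile(ppos, q), np.quantile(pneg, q)

print(agreement_indices(40, 8, 6, 46))   # about (0.851, 0.868)
print(bootstrap_ci(40, 8, 6, 46))
```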

7.
A nonparametric estimator is presented for the joint distribution of a survival time and a surrogate response time, which may occur earlier during follow-up. In the absence of the surrogate response variable, the estimator reduces to the Kaplan-Meier nonparametric estimator for the survival time alone. The estimator is derived in a novel way, using an exchangeable process (reinforced random walks) to model individual observations. The methodology introduced in the paper is readily extended to modelling multiple state processes.

8.
In a meta-analysis of a set of clinical trials, a crucial but problematic component is providing an estimate and confidence interval for the overall treatment effect θ. Since in the presence of heterogeneity a fixed effect approach yields an artificially narrow confidence interval for θ, the random effects method of DerSimonian and Laird, which incorporates a moment estimator of the between-trial variance component σ_B^2, has been advocated. With the additional distributional assumptions of normality, a confidence interval for θ may be obtained. However, this method provides neither a confidence interval for σ_B^2 nor a confidence interval for θ that takes account of the fact that σ_B^2 has to be estimated from the data. We show how a likelihood based method can be used to overcome these problems, and use profile likelihoods to construct likelihood based confidence intervals. This approach yields an appropriately widened confidence interval compared with the standard random effects method. Examples of application to a published meta-analysis and a multicentre clinical trial are discussed. It is concluded that likelihood based methods are preferred to the standard method in undertaking random effects meta-analysis when the value of σ_B^2 has an important effect on the overall estimated treatment effect.
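
The moment estimator and the resulting random-effects interval are easy to state; the sketch below implements them so the contrast with the paper's profile-likelihood interval (which is wider, and is not reproduced here) is clear. The six trial estimates are invented log odds ratios.

```python
import numpy as np

def dersimonian_laird(theta_i, var_i, z=1.96):
    """DerSimonian-Laird pooling with the moment estimator of the
    between-trial variance.  The resulting interval for theta ignores the
    uncertainty in the estimated variance component, which is precisely
    the defect the paper's profile-likelihood intervals address."""
    theta_i, var_i = np.asarray(theta_i), np.asarray(var_i)
    w = 1.0 / var_i                             # fixed-effect weights
    theta_fe = np.sum(w * theta_i) / np.sum(w)
    Q = np.sum(w * (theta_i - theta_fe) ** 2)   # Cochran's Q
    k = len(theta_i)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (var_i + tau2)                 # random-effects weights
    theta_re = np.sum(w_re * theta_i) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, tau2, (theta_re - z * se, theta_re + z * se)

# Hypothetical log odds ratios and within-trial variances from six trials.
print(dersimonian_laird([0.1, 0.3, -0.2, 0.5, 0.4, 0.0],
                        [0.04, 0.09, 0.05, 0.12, 0.07, 0.06]))
```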

9.
[Correction Notice: An erratum for this article was reported in Vol 12(4) of Psychological Methods (see record 2007-18729-004). The sentence describing Equation 1 is incorrect. The corrected sentence is presented in the erratum.] The point estimate of sample coefficient alpha may provide a misleading impression of the reliability of the test score. Because sample coefficient alpha is consistently biased downward, it is more likely to yield a misleading impression of poor reliability. The magnitude of the bias is greatest precisely when the variability of sample alpha is greatest (small population reliability and small sample size). Taking into account the variability of sample alpha with an interval estimator may lead to retaining reliable tests that would otherwise be rejected. Here, the authors performed simulation studies to investigate the behavior of asymptotically distribution-free (ADF) versus normal-theory interval estimators of coefficient alpha under varied conditions. Normal-theory intervals were found to be less accurate when item skewness > 1 or excess kurtosis > 1. For sample sizes over 100 observations, ADF intervals are preferable, regardless of item skewness and kurtosis. A formula for computing ADF confidence intervals for coefficient alpha for tests of any size is provided, along with its implementation as an SAS macro.
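
For context, the sketch below computes sample coefficient alpha and a classical normal-theory interval (the F-based interval usually attributed to Feldt), not the ADF interval derived in the article. The simulated items are parallel and normal, the benign case for normal-theory intervals.

```python
import numpy as np
from scipy.stats import f

def coefficient_alpha(X):
    """Cronbach's alpha for an n x k matrix of item scores."""
    n, k = X.shape
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

def feldt_ci(alpha_hat, n, k, gamma=0.05):
    """Classical normal-theory F interval for alpha, based on
    (1 - alpha)/(1 - alpha_hat) ~ F(n - 1, (n - 1)(k - 1))."""
    df1, df2 = n - 1, (n - 1) * (k - 1)
    lo = 1.0 - (1.0 - alpha_hat) * f.ppf(1 - gamma / 2, df1, df2)
    hi = 1.0 - (1.0 - alpha_hat) * f.ppf(gamma / 2, df1, df2)
    return lo, hi

rng = np.random.default_rng(5)
true_score = rng.normal(0.0, 1.0, (200, 1))
X = true_score + rng.normal(0.0, 1.0, (200, 8))   # 8 parallel items
a = coefficient_alpha(X)
print(a, feldt_ci(a, 200, 8))
```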

10.
The bootstrap is a nonparametric technique for estimating standard errors and approximate confidence intervals. Rasmussen used a simulation experiment to suggest that bootstrap confidence intervals perform very poorly in the estimation of a correlation coefficient. Part of Rasmussen's simulation is repeated here. A careful look at the results shows the bootstrap intervals performing quite well. Some remarks are made concerning the virtues and defects of bootstrap intervals in general.
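
Rasmussen's setting is straightforward to reproduce in outline: resample (x, y) pairs with replacement, recompute Pearson's r, and take percentiles. A minimal sketch on a simulated bivariate-normal sample with true correlation 0.5:

```python
import numpy as np

rng = np.random.default_rng(6)

def bootstrap_corr_ci(x, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap interval for Pearson's r."""
    n = len(x)
    rs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)   # resample pairs, keeping x-y pairing
        rs[i] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.quantile(rs, [alpha / 2, 1 - alpha / 2])

cov = [[1.0, 0.5], [0.5, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=50).T
print(np.corrcoef(x, y)[0, 1], bootstrap_corr_ci(x, y))
```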

11.
Reports an error in "Asymptotically distribution-free (ADF) interval estimation of coefficient alpha" by Alberto Maydeu-Olivares, Donna L. Coffman and Wolfgang M. Hartmann (Psychological Methods, 2007[Jun], Vol 12[2], 157-176). The sentence describing Equation 1 is incorrect. The corrected sentence is presented in the erratum. (The following abstract of the original article appeared in record 2007-07830-003.) The point estimate of sample coefficient alpha may provide a misleading impression of the reliability of the test score. Because sample coefficient alpha is consistently biased downward, it is more likely to yield a misleading impression of poor reliability. The magnitude of the bias is greatest precisely when the variability of sample alpha is greatest (small population reliability and small sample size). Taking into account the variability of sample alpha with an interval estimator may lead to retaining reliable tests that would otherwise be rejected. Here, the authors performed simulation studies to investigate the behavior of asymptotically distribution-free (ADF) versus normal-theory interval estimators of coefficient alpha under varied conditions. Normal-theory intervals were found to be less accurate when item skewness > 1 or excess kurtosis > 1. For sample sizes over 100 observations, ADF intervals are preferable, regardless of item skewness and kurtosis. A formula for computing ADF confidence intervals for coefficient alpha for tests of any size is provided, along with its implementation as an SAS macro.

12.
Randomly right censored data often arise in industrial life testing and clinical trials. Several authors have proposed asymptotic confidence bands for the survival function when data are randomly censored on the right, all of them based on the empirical estimator of the survival function. In this paper, families of asymptotic (1 − α)100% confidence bands are developed from a smoothed estimate of the survival function under the general random censorship model. The new bands are compared to the empirical bands, and it is shown that for small sample sizes the smooth bands have higher coverage probability than their empirical counterparts.

13.
We show that the nonparametric maximum likelihood estimator (NPMLE) of a survival function may severely underestimate the survival probabilities at very early times for left-truncated and interval-censored data. As an alternative, we propose to compute the (nonparametric) MLE under a nondecreasing hazard assumption, the monotone MLE, by a gradient projection algorithm when the assumption holds. The projection step is accomplished via an isotonic regression algorithm, the pool-adjacent-violators algorithm. This gradient projection algorithm is computationally efficient and converges globally. Monte Carlo simulations show superior performance of the monotone MLE over the NPMLE in terms of either bias or variance, even for large samples. The methodology is illustrated with an application to the Wisconsin Epidemiological Study of Diabetic Retinopathy data to estimate the probability of incidence of retinopathy.
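
The projection step is ordinary isotonic regression, and the pool-adjacent-violators algorithm fits in a dozen lines; below is a self-contained sketch applied to an invented, non-monotone raw hazard sequence.

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares projection of y
    onto nondecreasing sequences, i.e. the projection step of the
    gradient projection algorithm described above."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, sizes = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); sizes.append(1)
        # Merge adjacent blocks while monotonicity is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            merged = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]
            sizes[-2] += sizes[-1]
            vals[-2] = merged
            vals.pop(); wts.pop(); sizes.pop()
    return np.repeat(vals, sizes)

# An invented raw hazard estimate projected onto nondecreasing hazards.
print(pava([0.02, 0.05, 0.03, 0.04, 0.10, 0.08]))
```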

14.
Bootstrapping is introduced as a method for approximating the standard errors of validity generalization (VG) estimates. A Monte Carlo study was conducted to evaluate the accuracy of bootstrap validity-distribution parameter estimates, bootstrap standard error estimates, and nonparametric bootstrap confidence intervals. In the simulation study the authors manipulated the sample sizes per correlation coefficient, the number of coefficients per VG analysis, and the variance of the distribution of true correlation coefficients. The results indicate that the standard error estimates produced by the bootstrapping procedure were very accurate. It is recommended that the bootstrap standard-error estimates and confidence intervals be used in the interpretation of the results of VG analyses.
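
In outline, the procedure resamples whole studies and recomputes the VG quantities each time. The sketch below uses bare-bones Hunter-Schmidt-style estimates (N-weighted mean correlation and observed-minus-sampling-error variance) with invented study data; the article's VG model may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(7)

def vg_estimates(r, n):
    """Bare-bones VG quantities: N-weighted mean r, and the variance of
    true correlations estimated as observed variance minus the expected
    sampling-error variance."""
    rbar = np.average(r, weights=n)
    var_obs = np.average((r - rbar) ** 2, weights=n)
    var_err = np.average((1 - rbar ** 2) ** 2 / (n - 1), weights=n)
    return rbar, max(0.0, var_obs - var_err)

def bootstrap_se(r, n, n_boot=2000):
    """Bootstrap standard errors obtained by resampling whole studies."""
    k = len(r)
    draws = np.array([vg_estimates(r[idx], n[idx])
                      for idx in (rng.integers(0, k, k)
                                  for _ in range(n_boot))])
    return draws.std(axis=0, ddof=1)

r = np.array([0.21, 0.35, 0.18, 0.40, 0.27, 0.31, 0.15, 0.29])  # invented
n = np.array([80, 120, 60, 150, 90, 110, 70, 100])
print(vg_estimates(r, n), bootstrap_se(r, n))
```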

15.
Techniques that test for linkage between a marker and a trait locus based on the regression methods proposed by Haseman and Elston [1972] involve testing a null hypothesis of no linkage by examination of the regression coefficient. Modified Haseman-Elston methods accomplish this using ordinary least squares (OLS), weighted least squares (WLS), in which weights are reciprocals of estimated variances, and generalized estimating equations (GEE). Methods implementing WLS and GEE currently use a diagonal covariance matrix, thus incorrectly treating the squared trait differences of two sib pairs within a family as uncorrelated. Correctly specifying the correlations between sib pairs in a family yields the best linear unbiased estimator of the regression coefficient [Scheffe, 1959]. This estimator will be referred to as the generalized least squares (GLS) estimator. We determined the null variance of the GLS estimator and the null variance of the WLS/OLS estimator. The correct null variance of the WLS/OLS estimate of the Haseman-Elston (H-E) regression coefficient may be either larger or smaller than the variance of the WLS/OLS estimate calculated assuming that the squared sib-pair differences are uncorrelated. For a fully informative marker locus, the gain in efficiency using GLS rather than WLS/OLS under the null hypothesis is approximately 11% in a large multifamily study with three siblings per family and 25% for families with four siblings each.
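
The GLS estimator is one line of matrix algebra, beta = (X'V^{-1}X)^{-1}X'V^{-1}y; the sketch below applies it to a single toy family of three siblings (three sib pairs) with an assumed exchangeable correlation among the squared pair differences. All numbers are hypothetical.

```python
import numpy as np

def gls(X, y, V):
    """Generalized least squares: the best linear unbiased estimator of
    the regression coefficients when the responses have covariance V."""
    Vi = np.linalg.inv(V)
    cov = np.linalg.inv(X.T @ Vi @ X)
    beta = cov @ X.T @ Vi @ y
    return beta, cov

# Toy family: three sibs give three sib pairs whose squared trait
# differences are correlated; rho is a hypothetical within-family value.
rho, s2 = 0.25, 1.0
V = s2 * (rho * np.ones((3, 3)) + (1 - rho) * np.eye(3))
X = np.column_stack([np.ones(3), [0.5, 0.3, 0.7]])   # intercept + IBD share
y = np.array([1.8, 2.4, 1.1])                        # squared differences
print(gls(X, y, V))
```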

16.
The authors describe a new simple noniterative, yet efficient method to estimate the risk ratio in studies using the case-parental control design. The new method is compared with two other noniterative methods, Khoury's method and Flanders and Khoury's method, and with the maximum likelihood-based method of Schaid and Sommer. The authors found that the variance of the new estimation method is usually smaller than that of Khoury's method or Flanders and Khoury's method, and slightly larger than that of the maximum likelihood-based method of Schaid and Sommer. Despite this slightly larger variance, the simplicity of the new estimator and its variance makes the new method appealing. When genotypic information is available for only one parent, the authors also describe a method to estimate the risk ratio without assuming Hardy-Weinberg equilibrium or random mating. A simple formula for the variance of the estimator is given.

17.
This study used Monte Carlo simulations to evaluate the performance of alternative models for the analysis of group-randomized trials having more than two time intervals for data collection. The major distinction among the models tested was the sampling variance of the intervention effect. In the mixed-model ANOVA, the sampling variance of the intervention effect is based on the variance among the group × time-interval means. In the random coefficients model, it is based on the variance among the group-specific slopes. These models are equivalent when the design includes only two time intervals, but not when there are more. The results indicate that the mixed-model ANOVA yields unbiased estimates of sampling variation and nominal type I error rates when the group-specific time trends are homogeneous. However, when the group-specific time trends are heterogeneous, the mixed-model ANOVA yields downwardly biased estimates of sampling variance and inflated type I error rates. In contrast, the random coefficients model yields unbiased estimates of sampling variance and the nominal type I error rate regardless of the pattern among the groups. We discuss implications for the analysis of group-randomized trials with more than two time intervals.
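
A sketch of the contrast using statsmodels: the random-intercept fit assumes homogeneous group trends (in the spirit of the mixed-model ANOVA), while the random-coefficients fit gives each group its own slope. The simulation settings below are hypothetical, with deliberately heterogeneous group-specific slopes so the two standard errors for the intervention-by-time effect diverge.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)

# Simulated group-randomized trial: 20 groups, 4 time intervals,
# 25 members per group x time cell, heterogeneous group trends.
rows = []
for g in range(20):
    arm = g % 2
    slope_g = rng.normal(0.0, 0.3)   # group-specific time trend
    for t in range(4):
        mu = 1.0 + 0.2 * t + 0.5 * arm * t + slope_g * t
        for yi in mu + rng.normal(0.0, 1.0, 25):
            rows.append((g, t, arm, yi))
df = pd.DataFrame(rows, columns=["group", "time", "arm", "y"])

ri = smf.mixedlm("y ~ time * arm", df, groups=df["group"]).fit()
rc = smf.mixedlm("y ~ time * arm", df, groups=df["group"],
                 re_formula="~time").fit()
print("SE(time:arm), random intercept:   ", ri.bse["time:arm"])
print("SE(time:arm), random coefficients:", rc.bse["time:arm"])
```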

18.
This article deals with the semiparametric analysis of multivariate survival data with random block (group) effects. Survival times within the same group are correlated as a consequence of a frailty random block effect. The standard approaches assume either a parametric or a completely unknown baseline hazard function. This paper considers an intermediate solution, that is, a nonparametric function that is reasonably smooth. This is accomplished by a Bayesian model in which the conditional proportional hazards model is used with a correlated prior process for the baseline hazard. The posterior likelihood based on data, as well as the prior process, is similar to the discretized penalized likelihood for the frailty model. The methodology is exemplified with the recurrent kidney infections data of McGilchrist and Aisbett (1991, Biometrics 47, 461-466), in which the times to infections within the same patients are expected to be correlated. The reanalysis of the data has shown that the estimates of the parameters of interest and the associated standard errors depend on the prior knowledge about the smoothness of the baseline hazard.

19.
Predicting survival from out-of-hospital cardiac arrest: a graphic model
STUDY OBJECTIVE: To develop a graphic model that describes survival from sudden out-of-hospital cardiac arrest as a function of time intervals to critical prehospital interventions. PARTICIPANTS: From a cardiac arrest surveillance system in place since 1976 in King County, Washington, we selected 1,667 cardiac arrest patients with a high likelihood of survival: they had underlying heart disease, were in ventricular fibrillation, and had arrested before arrival of emergency medical services (EMS) personnel. METHODS: For each patient, we obtained the time intervals from collapse to CPR, to first defibrillatory shock, and to initiation of advanced cardiac life support (ACLS). RESULTS: A multiple linear regression model fitted to the data gave the following equation: survival rate = 67% − 2.3% per minute to CPR − 1.1% per minute to defibrillation − 2.1% per minute to ACLS, which was significant at P < .001. The first term, 67%, represents the survival rate if all three interventions were to occur immediately on collapse. Without treatment (CPR, defibrillatory shock, or definitive care), the decline in survival rate is the sum of the three coefficients, or 5.5% per minute. Survival rates predicted by the model for given EMS response times approximated published observed rates for EMS systems in which paramedics respond with or without emergency medical technicians. CONCLUSION: The model is useful in planning community EMS programs, comparing EMS systems, and showing how different arrival times within a system affect survival rate.
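
The fitted equation translates directly into a small prediction function; the example response times are arbitrary.

```python
def predicted_survival(min_to_cpr, min_to_defib, min_to_acls):
    """Predicted survival rate (%) from the article's regression model:
    67% - 2.3%/min to CPR - 1.1%/min to defibrillation - 2.1%/min to ACLS.
    Clamped at 0; meaningful only within the range of the study data."""
    rate = 67.0 - 2.3 * min_to_cpr - 1.1 * min_to_defib - 2.1 * min_to_acls
    return max(rate, 0.0)

# E.g., CPR at 2 min, first shock at 8 min, ACLS at 12 min after collapse:
print(predicted_survival(2, 8, 12))   # 67 - 4.6 - 8.8 - 25.2 = 28.4 (%)
```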

20.
WN Rida 《Canadian Metallurgical Quarterly》1996,15(21-22):2393-404; discussion 2405-12
Traditionally, measures of vaccine efficacy have focused on a vaccine's ability to prevent infection or disease. HIV vaccination, however, may have important indirect effects by reducing the level of infectiousness of vaccinees who become infected. This latter effect is not captured by the usual estimators of vaccine efficacy. To obtain an estimate of a vaccine's effect on infectiousness, Koopman and Little have proposed a trial design in which HIV-uninfected couples are randomized to the vaccine or control arm of the study. At least one member is assumed to be at risk of HIV infection from outside the partnership. Using this design, we formulate martingales from counting processes that record the number of infected participants over the course of the trial. An alternative estimator of a vaccine's effect on infectiousness, along with an estimate of its variance, is derived from these martingales. The precision of the estimate is shown to depend on the secondary attack rate within the couple. High secondary attack rates are required for narrow confidence intervals unless very large studies are contemplated.
