Similar Articles
20 similar articles retrieved.
1.
Data on the incubation period of AIDS patients are often fragmented and censored. Parametric models have been proposed in the literature to impute the missing segment of the incubation period. The numerical results vary widely with the parametric models used. We propose a nonparametric conditional bootstrap (CB) procedure for imputation. The quality of the CB data is studied by checking the asymptotic accuracy of the CB estimators. We establish the asymptotic accuracy of the CB procedure for two basic nonparametric estimators: the empirical distribution function and a kernel-type conditional empirical distribution function. The rates of convergence of the CB approximation are obtained. The results for the kernel-type estimators also hold for the nearest-neighbor-type estimators.
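As a rough illustration of the simpler of the two estimators involved, the empirical distribution function and an ordinary (unconditional) bootstrap of it can be sketched in plain Python. The function names are hypothetical, and this is the naive bootstrap, not the paper's conditional procedure for censored data:

```python
import random

def ecdf(sample):
    """Empirical distribution function of a sample, returned as a callable."""
    xs = sorted(sample)
    n = len(xs)
    return lambda t: sum(1 for x in xs if x <= t) / n

def bootstrap_se(sample, t, reps=1000, seed=1):
    """Bootstrap standard error of the ECDF evaluated at the point t."""
    rng = random.Random(seed)
    n = len(sample)
    vals = [ecdf([rng.choice(sample) for _ in range(n)])(t) for _ in range(reps)]
    mean = sum(vals) / reps
    return (sum((v - mean) ** 2 for v in vals) / (reps - 1)) ** 0.5
```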

2.
In survival analysis, estimates of median survival times in homogeneous samples are often based on the Kaplan-Meier estimator of the survivor function. Confidence intervals for quantiles, such as median survival, are typically constructed via large sample theory or the bootstrap. The former has suspect accuracy for small sample sizes under moderate censoring and the latter is computationally intensive. In this paper, improvements on so-called test-based intervals and reflected intervals (cf., Slud, Byar, and Green, 1984, Biometrics 40, 587-600) are sought. Using the Edgeworth expansion for the distribution of the studentized Nelson-Aalen estimator derived in Strawderman and Wells (1997, Journal of the American Statistical Association 92), we propose a method for producing more accurate confidence intervals for quantiles with randomly censored data. The intervals are very simple to compute, and numerical results using simulated data show that our new test-based interval outperforms commonly used methods for computing confidence intervals for small sample sizes and/or heavy censoring, especially with regard to maintaining specified coverage.
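The Kaplan-Meier estimator and the median survival time read off from it, which the intervals above are built around, can be sketched as follows. This is a minimal illustration with hypothetical helper names, using the usual convention that censorings tied with an event time are treated as occurring just after the event:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survivor function for right-censored data.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns the step function as a list of (event_time, S) pairs."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    S = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for (tt, e) in data if tt == t]
        d = sum(tied)              # events at time t
        if d > 0:
            S *= 1 - d / n_at_risk
            steps.append((t, S))
        n_at_risk -= len(tied)     # events and censorings leave the risk set
        i += len(tied)
    return steps

def km_median(steps):
    """Median survival: first event time at which S(t) falls to 0.5 or below."""
    for t, s in steps:
        if s <= 0.5:
            return t
    return None  # median not reached (heavy censoring)
```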

3.
Parametric models for interval censored data can now easily be fitted with minimal programming in certain standard statistical software packages. Regression equations can be introduced, both for the location and for the dispersion parameters. Finite mixture models can also be fitted, with a point mass on right (or left) censored observations, to allow for individuals who cannot have the event (or already have it). This mixing probability can also be allowed to follow a regression equation. Here, models based on nine different distributions are compared for three examples of heavily censored data as well as a set of simulated data. We find that, for parametric models, interval censoring can often be ignored and that the density, at centres of intervals, can be used instead in the likelihood function, although the approximation is not always reliable. In the context of heavily interval censored data, the conclusions from parametric models are remarkably robust with changing distributional assumptions and generally more informative than the corresponding non-parametric models.

4.
Epidemiologists sometimes study the association between two measurements of exposure on the same subjects by grouping the original bivariate continuous data into categories that are defined by the empirical quantiles of the two marginal distributions. Although such grouped data are presented in a two-way contingency table, the cell counts in this table do not have a multinomial distribution. We describe the joint distribution of counts in such a table by the term empirical bivariate quantile-partitioned (EBQP) distribution. Blomqvist (1950, Annals of Mathematical Statistics 21, 539-600) gave an asymptotic EBQP theory for bivariate data partitioned by the sample medians. We demonstrate that his asymptotic theory is not correct, however, except in special cases. We present a general asymptotic theory for tables of arbitrary dimensions and apply this theory to construct confidence intervals for the kappa statistic. We show by simulations that the confidence interval procedures we propose have near nominal coverage for sample sizes exceeding 60 for both 2 x 2 and 3 x 3 tables. These simulations also illustrate that the asymptotic theory of Blomqvist (1950) and the methods that Fleiss, Cohen, and Everitt (1969, Psychological Bulletin 72, 323-327) give for multinomial tables can yield subnominal coverage for kappa calculated from EBQP tables, although in some cases the coverage for these procedures is near nominal levels.
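The kappa point estimate itself is computed exactly as in the multinomial case; what changes under EBQP sampling is its variance, and hence the interval. A minimal sketch of the familiar point estimate from a square table (hypothetical function name):

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table given as a list of rows:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n                 # observed agreement
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    pe = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)
```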

5.
In the presence of dependent competing risks in survival analysis, the Cox model can be utilized to examine the covariate effects on the cause-specific hazard function for the failure type of interest. For this situation, the cumulative incidence function provides an intuitively appealing summary curve for marginal probabilities of this particular event. In this paper, we show how to construct confidence intervals and bands for such a function under the Cox model for future patients with certain covariates. Our proposals are illustrated with data from a prostate cancer trial.
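For illustration, the covariate-free nonparametric cumulative incidence estimator combines the all-cause Kaplan-Meier survivor function with the cause-specific event counts: at each event time, the increment is the probability of still being event-free times the conditional probability of failing from the cause of interest. A sketch under hypothetical names:

```python
def cumulative_incidence(times, causes):
    """Nonparametric cumulative incidence of cause 1 under competing risks.
    causes: 0 = censored, 1 = event of interest, 2 = competing event.
    Returns the estimated curve as (time, CIF) pairs at cause-1 event times."""
    data = sorted(zip(times, causes))
    n_at_risk = len(data)
    S_prev = 1.0   # all-cause Kaplan-Meier just before the current time
    cif = 0.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = [c for (tt, c) in data if tt == t]
        d1 = sum(1 for c in ties if c == 1)      # cause-1 events at t
        d_all = sum(1 for c in ties if c != 0)   # all events at t
        cif += S_prev * d1 / n_at_risk
        S_prev *= 1 - d_all / n_at_risk
        if d1 > 0:
            curve.append((t, cif))
        n_at_risk -= len(ties)
        i += len(ties)
    return curve
```

With no censoring, the final value reduces to the observed fraction failing from cause 1, which is a quick sanity check on the estimator.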

6.
Inferences for survival curves based on right censored continuous or grouped data are studied. Testing homogeneity with an ordered restricted alternative and testing the order restriction as the null hypothesis are considered. Under a proportional hazards model, the ordering on the survival curves corresponds to an ordering on the regression coefficients. Approximate likelihood methods are obtained by applying order restricted procedures to the estimates of the regression coefficients. Ordered analogues of the log rank test based on the score statistics are also considered. Chi-bar-squared distributions, which have been studied extensively, are shown to provide reasonable approximations to the null distributions of these test statistics. Using Monte Carlo techniques, the powers of these two types of tests are compared with those that are available in the literature.

7.
The p-value evidence for an alternative to a null hypothesis regarding the mean lifetime can be unreliable if based on asymptotic approximations when there is only a small sample of right-censored exponential data. However, a guarded weight of evidence for the alternative can always be obtained without approximation, no matter how small the sample, and has some other advantages over p-values. Weights of evidence are defined as estimators of 0 when the null hypothesis is true and 1 when the alternative is true, and they are judged on the basis of the ensuing risks, where risk is mean squared error of estimation. The evidence is guarded in that a pre-assigned bound is placed on the risk under the hypothesis. Practical suggestions are given for choosing the bound and for interpreting the magnitude of the weight of evidence. Acceptability profiles are obtained by inversion of a family of guarded weights of evidence for two-sided alternatives to point hypotheses, just as confidence intervals are obtained from tests; these profiles are arguably more informative than confidence intervals, and are easily determined for any level and any sample size, however small. They can help in understanding the effects of different amounts of censoring. They are found for several small data sets, including a sample of size 12 of post-operative cancer patients. Both singly Type I and Type II censored examples are included. An examination of the risk functions of these guarded weights of evidence suggests that if the censoring time is of the same magnitude as the mean lifetime, or larger, then the risks in using a guarded weight of evidence based on a likelihood ratio are not much larger than they would be if the parameter were known.

8.
Flexible modelling in survival analysis can be useful both for exploratory and predictive purposes. Feed forward neural networks were recently considered for flexible non-linear modelling of censored survival data through the generalization of both discrete and continuous time models. We show that by treating the time interval as an input variable in a standard feed forward network with logistic activation and entropy error function, it is possible to estimate smoothed discrete hazards as conditional probabilities of failure. We considered an easily implementable approach with a fast selection criterion for the best configurations. Examples on data sets from two clinical trials are provided. The proposed artificial neural network (ANN) approach can be applied for the estimation of the functional relationships between covariates and time in survival data to improve model predictivity in the presence of complex prognostic relationships.
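The discrete-time setup that such models generalize starts from a person-period expansion: each subject contributes one row per interval at risk, with the interval index as an input feature and a 0/1 failure indicator as the target, after which any logistic model (including a network) estimates the conditional failure probability. A minimal sketch of that expansion, with hypothetical names:

```python
def person_period(records):
    """Expand right-censored survival records into person-period rows for a
    discrete-time hazard model. records: list of (last_interval, event) pairs,
    where last_interval is the last 1-based interval the subject was at risk
    and event is 1 if failure occurred in that interval, 0 if censored."""
    rows = []
    for last, event in records:
        for j in range(1, last + 1):
            failed = 1 if (event == 1 and j == last) else 0
            rows.append({"interval": j, "failed": failed})
    return rows
```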

9.
10.
A comparison was made among breeding values of sires for longevity that were obtained by different methods: phenotypic averages of daughters using only uncensored records, BLUP using only uncensored records, survival analysis using only uncensored records, and survival analysis using both censored and uncensored records. Two data files were used: one contained data from small herds, and the other contained data from large herds. The results from both data files were similar. Different methods of predicting breeding values resulted in different rankings of sires. The results obtained using phenotypic averages were weakly correlated (≤0.46) with those obtained using the other methods of prediction. The REML BLUP had strong correlations (≤ -0.91) with the survival analysis predictor if the same data were used, and correlations weakened (≤ -0.60) when censored records were included in the survival analysis. The correlations are negative because the linear method analyzed longevity, and survival analysis measured the risk of being culled, which has an antagonistic relationship with longevity. The results from REML BLUP and survival analysis methods differed mainly because of the different data that were used (uncensored only versus both censored and uncensored).

11.
Survival tree methods are nonparametric alternatives to the semiparametric Cox regression in survival analysis. In this paper, a tree-based method for censored survival data with time-dependent covariates is proposed. The proposed method assumes a very general model for the hazard function and is fully nonparametric. The recursive partitioning algorithm uses the likelihood estimation procedure to grow trees under a piecewise exponential structure that handles time-dependent covariates in a parallel way to time-independent covariates. In general, the estimated hazard at a node gives the risk for a group of individuals during a specific time period. Both cross-validation and bootstrap resampling techniques are implemented in the tree selection procedure. The performance of the proposed survival trees method is shown to be good through simulation and application to real data.
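Under a piecewise exponential structure, the hazard estimate in a node over a period is simply events divided by total time at risk, and the exponential log-likelihood at that estimate is the kind of quantity a likelihood-based splitting rule accumulates. A sketch (hypothetical names; assumes at least one event in the node):

```python
import math

def node_hazard(exposure, events):
    """Constant-hazard estimate for one node over one time period:
    observed events divided by total exposure time (piecewise exponential)."""
    return sum(events) / sum(exposure)

def node_loglik(exposure, events):
    """Exponential log-likelihood of the node at its fitted hazard.
    Assumes sum(events) > 0 so that the log is defined."""
    T, d = sum(exposure), sum(events)
    lam = d / T
    return d * math.log(lam) - lam * T
```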

12.
As the incidence of tuberculosis (TB) has increased in the United States, occupationally acquired TB has increased among health care workers (HCWs). This paper describes a model developed in response to an outbreak of multidrug-resistant TB. One of the goals of the outbreak investigation was to estimate the risk of tuberculin skin test (TST) conversion as a function of HCW job type and the period during which persons were employed over the study period. TST conversions were evaluated at periodic examinations, so the data are interval-censored. We present a generalized linear model that extends Efron's survival model for censored survival data to the case of interval-censored data.

13.
We review currently known results concerning the estimation of an 'immune' or 'cured' proportion, and testing for the presence of immunes, in censored survival data, suggesting that a firm theoretical foundation now exists for analysis. Two types of estimators, parametric and non-parametric, are discussed and compared with respect to their theoretical properties, and, by simulation, with respect to their small sample behaviour. Both estimators have advantages and drawbacks, but together provide powerful tools for the perceptive analysis of survival data with, or even without, immune individuals.

14.
Span-Dependent Distributions of the Bending Strength of Spruce Timber
Test data on the bending strengths of a large number of timber beams of different spans, obtained at the Swedish Institute for Wood Technology Research, reveal a statistical structure that can be represented in a simple probabilistic model of series system type. A particular feature of the data from one of the large test series is that, unintentionally, the data became randomly censored upwards. This censoring of the data rules out both the moment estimation method and the maximum likelihood method. Instead, valid parameter estimates can be obtained by maximizing the posterior density defined as the likelihood function multiplied by a suitably chosen noninformative prior density (MP method). Bias factors assessed by simulation are then used to ensure that the corrected MP estimates are unbiased. A closed form analytical expression for the distribution function of the bending strength of a beam with any given number of defect clusters follows from the obtained distribution model for the bending strength of the random single defect cluster. The empirical distribution function of bending test results for a sample of beams with two defect clusters is well predicted, and for long beams with several defect clusters the same is the case in the lower tail up to at least about the 50% probability level.

15.
This paper applies White's (1982, Econometrica 50, 1-25) information matrix (IM) test for correct model specification to proportional hazards models of univariate and multivariate censored survival data. Several alternative estimators of the test statistic are presented and their size performance examined. White also suggested an estimator of the parameter covariance matrix that was robust to certain forms of model misspecification. This has been subsequently proposed by others (e.g., Royall, 1986, International Statistical Review 54, 221-226) and applied by Huster, Brookmeyer, and Self (1989, Biometrics 45, 145-156) as part of an independence working model (IWM) approach to multivariate censored survival data. We illustrate how the IM test can be used for both univariate data and as part of the IWM approach to multivariate data.

16.
A Bayesian variable selection method for censored data is proposed in this paper. Based on the sufficiency and asymptotic normality of the maximum partial likelihood estimator, we approximate the posterior distribution of the parameters in a proportional hazards model. We consider a parsimonious model as the full model with some covariates unobserved and replaced by their conditional expected values. A loss function based on the posterior expected estimation error of the log-risk for the proportional hazards model is used to select a parsimonious model. We derive computational expressions for this loss function for both continuous and binary covariates. This approach provides an extension of Lindley's (1968, Journal of the Royal Statistical Society, Series B 30, 31-66) variable selection criterion for the linear case. Data from a randomized clinical trial of patients with primary biliary cirrhosis of the liver (PBC) (Fleming and Harrington, 1991, Counting Processes and Survival Analysis) are used to illustrate the proposed method, and a simulation study compares it with the backward elimination procedure.

17.
The two-point method is one of the best known procedures for estimating empirical infiltration parameters from surface irrigation evaluation data and mass balance, mainly because of its limited data requirements and mathematical simplicity. However, past research has shown that the method can produce inaccurate results. This paper examines the limitations of the method, reviews alternatives for improving two-point method results based on data that are collected or can easily be collected as part of a two-point evaluation, and suggests strategies for estimation and validation of results for different levels of evaluation data. Results show the limitations of formulating the estimation problem with advance data only and the benefits of using instead an advance and a postadvance mass balance relationship in the analysis. Because different combinations of parameters can satisfy the mass balance equations, the estimated function cannot be extrapolated reliably beyond the times used in formulating those relationships. While results can be used with confidence to characterize the performance of the evaluated irrigation event, they need to be used carefully for operational analysis and design purposes.

18.
D. C. Rubin and A. E. Wenzel (see record 1996-06397-006) fitted many simple functions to a large collection of retention data sets. Their search for the mathematical form of the retention function can be simplified by (a) attending to the failures of simple functions, (b) considering the constraints and process assumptions that any psychological theory must obey, and (c) drawing on results from survival theory. Three sets of psychologically plausible assumptions to interpret the form of a retention function are described. These representations converge on a single functional form, demonstrating the impossibility of determining process purely from empirical fits. A candidate form for an empirical retention function whose parameters separate the various aspects of retention is proposed. These parameters can be used to compare results from different studies.
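One of the simple functions commonly fitted in this literature is the power function R(t) = a * t**(-b), which is linear on the log-log scale. A sketch of fitting it by ordinary least squares, purely for illustration (hypothetical function name; not the candidate form the authors propose):

```python
import math

def fit_power_retention(times, retention):
    """Fit the power retention function R(t) = a * t**(-b) by ordinary
    least squares on the log-log scale: log R = log a - b * log t."""
    xs = [math.log(t) for t in times]
    ys = [math.log(r) for r in retention]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - slope * mx)
    b = -slope
    return a, b
```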

19.
Existing methods for setting confidence intervals for the difference theta between binomial proportions based on paired data perform inadequately. The asymptotic method can produce limits outside the range of validity. The 'exact' conditional method can yield an interval which is effectively only one-sided. Both these methods also have poor coverage properties. Better methods are described, based on the profile likelihood obtained by conditionally maximizing the proportion of discordant pairs. A refinement (methods 5 and 6) which aligns 1-alpha with an aggregate of tail areas produces appropriate coverage properties. A computationally simpler method based on the score interval for the single proportion also performs well (method 10).
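The score interval for a single proportion that method 10 builds on is the standard Wilson interval, which can be sketched directly from its closed form (hypothetical function name):

```python
import math

def wilson_interval(x, n, z=1.959964):
    """Score (Wilson) confidence interval for a single binomial proportion,
    with x successes out of n trials and normal quantile z (default ~95%)."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

Unlike the simple Wald interval, this never produces limits outside [0, 1], which is the kind of range-of-validity failure the paper criticizes in the asymptotic method for differences.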

20.
Indices of positive and negative agreement for observer reliability studies, in which neither observer can be regarded as the standard, have been proposed. In this article, it is demonstrated by means of an example and a small simulation study that a recently published method for constructing confidence intervals for these indices leads to intervals that are too wide. Appropriate asymptotic (i.e., large sample) variance estimates and confidence intervals for the positive and negative agreement indices are presented and compared with bootstrap confidence intervals. We also discuss an alternative method of interval estimation motivated from a Bayesian viewpoint. The asymptotic intervals performed adequately for sample sizes of 200 or more. For smaller samples, alternative confidence intervals such as bootstrap intervals or Bayesian intervals should be considered.
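The point estimates of positive and negative agreement are simple functions of the 2x2 table; a sketch with the usual cell labels (a = both raters positive, d = both negative, b and c = discordant; hypothetical function name):

```python
def agreement_indices(a, b, c, d):
    """Positive and negative agreement for a 2x2 rater-by-rater table:
    p_pos = 2a / (2a + b + c), p_neg = 2d / (2d + b + c)."""
    p_pos = 2 * a / (2 * a + b + c)
    p_neg = 2 * d / (2 * d + b + c)
    return p_pos, p_neg
```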
