Found 20 similar documents (search took 0 ms)
1.
In this article, a class of transformed hazards models is proposed for recurrent gap time data, including both the proportional and additive hazards models as special cases. An estimating equation-based inference procedure is developed for the model parameters, and the asymptotic properties of the resulting estimators are established. In addition, a lack-of-fit test is presented to assess the adequacy of the model. The finite sample behavior of the proposed estimators is evaluated through simulation studies, and an application to a clinical study on chronic granulomatous disease (CGD) is illustrated.
2.
Jie Fan, Somnath Datta 《Computational statistics & data analysis》2011,55(12):3295-3303
Methods for analyzing clustered survival data are gaining popularity in biomedical research. Naive attempts at fitting marginal models to such data may lead to biased estimators and misleading inference when the size of a cluster is statistically correlated with some cluster-specific latent factors or one or more cluster-level covariates. A simple adjustment to correct for potentially informative cluster size is achieved through inverse cluster size reweighting. We give a methodology that incorporates this technique in fitting an accelerated failure time marginal model to clustered survival data. Furthermore, right censoring is handled by inverse probability of censoring reweighting through the use of a flexible model for the censoring hazard. The resulting methodology is examined through a thorough simulation study. An illustrative example using a real dataset is also provided, examining the effects of age at enrollment and smoking on tooth survival.
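The inverse cluster-size reweighting idea in this entry can be illustrated with a minimal sketch: each cluster contributes total weight one, so a large cluster cannot dominate a marginal estimate. The toy data and the simple weighted mean below are illustrative only, not the paper's actual accelerated failure time estimating equations.

```python
# Minimal sketch of inverse cluster-size reweighting: every observation
# in a cluster of size n_i gets weight 1/n_i, so each cluster carries
# total weight 1 regardless of its size. This guards against bias from
# informative cluster size in a marginal analysis.

def weighted_marginal_mean(clusters):
    """clusters: list of lists of observed values (e.g. log failure times)."""
    num = 0.0
    den = 0.0
    for members in clusters:
        w = 1.0 / len(members)  # inverse cluster-size weight
        for y in members:
            num += w * y
            den += w
    return num / den

# One cluster of four identical values and one singleton cluster:
# the reweighted mean is the average of the cluster means, 5.0,
# whereas the naive pooled mean would be 16/5 = 3.2.
clusters = [[2.0, 2.0, 2.0, 2.0], [8.0]]
print(weighted_marginal_mean(clusters))  # 5.0
```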
3.
Xingqiu Zhao, Xingwei Tong 《Computational statistics & data analysis》2011,55(1):291-300
This paper discusses regression analysis of panel count data that arise naturally when recurrent events are considered. For the analysis of panel count data, most of the existing methods have assumed that observation times are completely independent of recurrent events or given covariates, which may not be true in practice. We propose a joint modeling approach that uses an unobserved random variable and a completely unspecified link function to characterize the correlations between the response variable and the observation times. For inference about regression parameters, estimating equation approaches are developed without involving any estimation for latent variables, and the asymptotic properties of the resulting estimators are established. In addition, a technique is provided for assessing the adequacy of the model. The performance of the proposed estimation procedures is evaluated by means of Monte Carlo simulations, and a data set from a bladder tumor study is analyzed as an illustrative example.
4.
Xuerong Meggie Wen 《Computational statistics & data analysis》2010,54(8):1975-1982
The requirement of a constant censoring parameter β in the Koziol-Green (KG) model is too restrictive. When covariates are present, the conditional KG model (Veraverbeke and Cadarso-Suárez, 2000), which allows β to depend on the covariates, is more realistic. In this paper, using sufficient dimension reduction methods, we provide a model-free diagnostic tool to test whether β is a function of the covariates. Our method also allows us to conduct a model-free selection of the related covariates. A simulation study and a real data analysis are included to illustrate our approach.
5.
Jukka Jokinen 《Computational statistics & data analysis》2006,51(3):1509-1522
Likelihood-based marginal regression modelling for repeated, or otherwise clustered, categorical responses is computationally demanding. This is because the number of measures needed to describe the associations within a cluster increases geometrically with cluster size. The proposed estimation methods typically describe the associations using odds ratios, which result in computationally unfeasible solutions for large cluster sizes. An alternative method for joint modelling of the regression, association, and dropout mechanism for clustered categorical responses is presented. The joint distribution of a multivariate categorical response is described by utilizing the mean parameterization, which facilitates maximum likelihood estimation in two important respects. The models are illustrated by analyses of the presence and absence of schizophrenia symptoms in 86 patients at 12 repeated time points, and a survey of the opinions of 607 adults regarding government spending on nine different targets, measured on a common 3-level ordinal scale. Free software is available.
6.
D. Y. Lin 《Computer methods and programs in biomedicine》1993,40(4):279-293
Multivariate failure time data are commonly encountered in biomedicine, because each study subject may experience multiple events or because subjects are clustered such that failure times within the same cluster are correlated. MULCOX2 implements a general statistical methodology for analyzing such data. This approach formulates the marginal distributions of multivariate failure times by Cox proportional hazards models without specifying the nature of the dependence among related failure times. The baseline hazard functions for the marginal models may be identical or different. A variety of statistical inferences can be made regarding the effects of (possibly time-dependent) covariates on the failure rates. Although designed primarily for the marginal approach, MULCOX2 is general enough to implement several alternative methods. The program runs on any computer with a FORTRAN compiler, and the running time is minimal. Two illustrative examples are provided.
7.
Because of increased manufacturing competitiveness, new methods for reliability estimation are being developed. Intelligent manufacturing relies upon accurate component and product reliability estimates for determining warranty costs, as well as optimal maintenance, inspection, and replacement schedules. Accelerated life testing is one approach that is used for shortening the life of products or components or hastening their performance degradation with the purpose of obtaining data that may be used to predict device life or performance under normal operating conditions. The proportional hazards (PH) model is a non-parametric multiple regression approach for reliability estimation, in which a baseline hazard function is modified multiplicatively by covariates (i.e. applied stresses). While the PH model is a distribution-free approach, specific assumptions need to be made about the time behavior of the hazard rates. A neural network (NN) is particularly useful in pattern recognition problems that involve capturing and learning complex underlying (but consistent) trends in the data. Neural networks are highly non-linear, and in some cases are capable of producing better approximations than multiple regression. This paper reports on the comparison of PH and NN models for the analysis of time-dependent dielectric breakdown data for a metal-oxide-semiconductor integrated circuit. In this case, the NN model results in a better fit to the data based upon minimizing the mean square error of the predictions when using failure data from an elevated temperature and voltage to predict reliability at a lower temperature and voltage.
8.
In biomedical, genetic and social studies, there may exist a fraction of individuals not experiencing the event of interest, such that the survival curves eventually level off to nonzero proportions. These people are referred to as "cured" or "nonsusceptible" individuals. Models that have been developed to address this issue are known as cure models. The mixture model, which consists of a model for the binary cure status and a survival model for the event times of the noncured individuals, is one of the widely used cure models. In this paper, we propose a class of semiparametric transformation cure models for multivariate survival data with a surviving fraction by fitting a logistic regression model to the cure status and a semiparametric transformation model to the event times of the noncured individuals. Both models allow incorporating covariates and do not require any assumption on the association structure. The statistical inference is based on the marginal approach by constructing a system of estimating equations. The asymptotic properties of the proposed estimators are proved, and the performance of the estimation is demonstrated via simulations. In addition, the approach is illustrated by analyzing the smoking cessation data.
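The mixture cure structure described in this entry can be sketched in a few lines: the population survival curve is a mixture of a cured fraction (survival identically 1) and the survival of the noncured individuals. The logistic cure model follows the entry; the exponential latency and the coefficients b0, b1, rate below are illustrative stand-ins, not the paper's semiparametric transformation model.

```python
import math

# Mixture cure sketch: S_pop(t | x) = pi(x) + (1 - pi(x)) * S_u(t | x),
# where pi(x) is the probability of being cured (logistic in x) and
# S_u is the survival function of the noncured individuals.
# b0, b1, rate are hypothetical illustration values.

def cure_prob(x, b0=-1.0, b1=0.5):
    """Logistic regression model for P(cured | covariate x)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def pop_survival(t, x, rate=0.3):
    pi = cure_prob(x)
    s_uncured = math.exp(-rate * t)  # exponential latency (stand-in)
    return pi + (1.0 - pi) * s_uncured

# The curve levels off at the cure fraction instead of dropping to zero:
print(round(pop_survival(1e6, x=2.0), 4))  # 0.5, the cure fraction at x = 2
```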
9.
Frailty models based on the proportional hazards model are useful to analyze correlated or clustered failure time data. However, there has been some recent interest in a frailty model based on the popular accelerated failure time model for correlated failure time data ([Pan, W., 2001. Using frailties in the accelerated failure time model. Lifetime Data Anal. 7(1), 55-64], for example). In this paper we review recent advances on this model in the literature. We propose a general estimation method based on M-estimators and the EM algorithm to estimate the parameters in the model. To evaluate the performance of the proposed method, we carry out a simulation study. The results of the simulation study show that the proposed method performs well in comparison with existing estimation methods. As an illustration, the model and the proposed method are applied to analyze the effects of sublingual nitroglycerin and oral isosorbide dinitrate on angina pectoris of coronary heart disease patients [Danahy, D.T., Burwell, D.T., Aronow, W.S., Prakash, R., 1977. Sustained hemodynamic and antianginal effect of high dose oral isosorbide dinitrate. Circulation 55, 381-387].
10.
Due to advances in medical research, more and more diseases can now be cured, which greatly increases the need for easy-to-use software for calculating the sample size of clinical trials with cure fractions. Currently available sample size software, such as PROC POWER in SAS, the Survival Analysis module in PASS, and the powerSurvEpi package in R, is all based on the standard proportional hazards (PH) model, which is not appropriate for designing a clinical trial with cure fractions. Instead of the standard PH model, the PH mixture cure model is an important tool for handling survival data with possible cure fractions. However, no tools are available to help design a trial with cure fractions. Therefore, we develop an R package, NPHMC, to determine the sample size needed for such a study design.
11.
Ruggero Bellio 《Computational statistics & data analysis》2007,51(5):2531-2541
Bounded-influence estimation is a well developed and useful theory. It provides fairly efficient estimators which are robust to outliers and local model departures. However, its use has been limited thus far, mainly because of computational difficulties. A careful implementation in modern statistical software can effectively overcome the numerical problems of bounded-influence estimators. The proposed approach is based on general methods for solving estimating equations, together with suitable methods developed in the statistical literature, such as the delta algorithm and nested iterations. The focus is on Mallows estimation in generalized linear models and on optimal bias-robust estimation in models for independent data, such as regression models with asymmetrically distributed errors.
12.
Joly P, Letenneur L, Alioum A, Commenges D 《Computer methods and programs in biomedicine》1999,60(3):414-231
The Cox model is the model of choice when analyzing right-censored and possibly left-truncated survival data. The present paper proposes a program to estimate the hazard function in a proportional hazards model and also to treat more complex observation schemes involving general censored and left-truncated data. The hazard function estimator is defined non-parametrically as the function which maximizes a penalized likelihood, and the solution is approximated using splines. The smoothing parameter is chosen using approximate cross-validation. Confidence bands for the estimator are given. As an illustration, the age-specific incidence of dementia is estimated and one of its risk factors is studied.
13.
In some retrospective observational studies, the subject is asked to recall the age at a particular landmark event. The resulting data may be partially incomplete because of the inability of the subject to recall. This type of incompleteness may be regarded as interval censoring, where the censoring is likely to be informative. The problem of fitting Cox’s relative risk regression model to such data is considered. While a partial likelihood is not available, a method of semi-parametric inference of the regression parameters as well as the baseline distribution is proposed. Monte Carlo simulations show reasonable performance of the regression parameters, compared to Cox estimators of the same parameters computed from the complete version of the data. The proposed method is illustrated through the analysis of data on age at menarche from an anthropometric study of adolescent and young adult females in Kolkata, India.
14.
Chi-Chung Wen, Yi-Hau Chen 《Computational statistics & data analysis》2011,55(2):1053-1060
The Cox model with frailties has been popular for regression analysis of clustered event time data under right censoring. However, due to the lack of reliable computation algorithms, the frailty Cox model has been rarely applied to clustered current status data, where the clustered event times are subject to a special type of interval censoring such that we only observe for each event time whether it exceeds an examination (censoring) time or not. Motivated by the cataract dataset from a cross-sectional study, where bivariate current status data were observed for the occurrence of cataracts in the right and left eyes of each study subject, we develop a very efficient and stable computation algorithm for nonparametric maximum likelihood estimation of gamma-frailty Cox models with clustered current status data. The algorithm proposed is based on a set of self-consistency equations and the contraction principle. A convenient profile-likelihood approach is proposed for variance estimation. Simulation and real data analysis exhibit the nice performance of our proposal.
15.
There has been increasing interest in the joint analysis of repeated measures and time-to-event data. In many studies, heterogeneous subgroups may also exist. Thus a new model is proposed for the joint analysis of longitudinal and survival data, with underlying subpopulations identified by a latent class model. Within each latent class, a joint model of longitudinal and survival data with shared random effects is adopted. The proposed model is applied to the Terry Beirn Community Programs for Clinical Research on AIDS (CPCRA) study to characterize the underlying heterogeneity of the cohort and to study the relation between longitudinal CD4 measures and time to death. The proposed model is desirable when the heterogeneity among subjects cannot be ignored and both the longitudinal and survival outcomes are of interest.
16.
Constrained estimation in Cox’s model for right-censored survival data is studied, and the asymptotic properties of the constrained estimators are derived using the Lagrangian method based on the Karush–Kuhn–Tucker conditions. A novel minorization–maximization (MM) algorithm is developed for calculating the maximum likelihood estimates of the regression coefficients subject to box or linear inequality restrictions in the proportional hazards model. The first M-step of the proposed MM algorithm constructs a surrogate function with a diagonal Hessian matrix, obtained by exploiting the convexity of the exponential and negative logarithm functions. The second M-step maximizes this surrogate function subject to the box constraints, which is equivalent to separately maximizing several one-dimensional concave functions, each with a lower-bound and an upper-bound constraint, yielding an explicit solution via a median function. The ascent property of the proposed MM algorithm under constraints is theoretically justified. Standard error estimation is presented via a non-parametric bootstrap approach. Simulation studies compare the estimates with and without constraints, and two real data sets illustrate the proposed methods.
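The "explicit solution via a median function" in this entry has a simple shape worth spelling out: with a diagonal-Hessian concave surrogate, each coordinate's box-constrained maximizer is the median of the lower bound, the unconstrained maximizer, and the upper bound. The quadratic surrogate below is an illustrative stand-in for the paper's actual surrogate of the partial log-likelihood.

```python
# One coordinate-wise MM update under box constraints. For a concave
# diagonal quadratic surrogate expanded at x (gradient g, negative
# diagonal Hessian entries h), the unconstrained maximizer of each
# coordinate is x_i - g_i / h_i, and the box constraint is imposed by
# taking the median of (lower bound, unconstrained maximizer, upper bound).

def median3(a, b, c):
    """Median of three numbers."""
    return sorted([a, b, c])[1]

def box_constrained_max(grad, hess_diag, x, lower, upper):
    """One MM update maximizing a concave diagonal surrogate over a box."""
    new_x = []
    for g, h, xi, lo, hi in zip(grad, hess_diag, x, lower, upper):
        unconstrained = xi - g / h  # h < 0 for a concave surrogate
        new_x.append(median3(lo, unconstrained, hi))
    return new_x

# The unconstrained step lands at 2.0, outside the box [-1, 1],
# so the median clips the update to the upper bound.
print(box_constrained_max(grad=[4.0], hess_diag=[-2.0], x=[0.0],
                          lower=[-1.0], upper=[1.0]))  # [1.0]
```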
17.
Xiaobing Zhao, Xian Zhou 《Computational statistics & data analysis》2012,56(2):370-383
Gap times between recurrent events are often encountered in longitudinal follow-up studies in medical science, biostatistics, econometrics, reliability, criminology, demography, and other areas. Many models have been proposed to fit such data, such as the proportional hazards (PH) model and the additive hazards (AH) model, among others, and standard partial likelihood can be employed for their statistical inference. Inference from a direct PH or AH assumption on the gap times, however, is less intuitive and straightforward than that from marginal rate models, which are often preferred by practitioners for their more direct interpretation in identifying risk factors. In addition, the existing models have not adequately considered the zero-recurrence subjects often encountered in recurrent event data. To overcome these shortcomings, we propose an alternative gap time model using an additive marginal rate function that accounts for zero-recurrence subjects. Local profile likelihood is applied to estimate the model attributes, and the asymptotic properties of the estimators are established as well. The performance of the proposed estimators is evaluated by a simulation study, and the proposed model is applied to a set of data on pulmonary exacerbations and rhDNase treatment.
18.
19.
Multivariate recurrent event data arise in many clinical and observational studies, in which subjects may experience multiple types of recurrent events. In some applications, event times can be always observed, but types for some events may be missing. In this article, a semiparametric additive rates model is proposed for analyzing multivariate recurrent event data when event categories are missing at random. A weighted estimating equation approach is developed to estimate parameters of interest, and the resulting estimators are shown to be consistent and asymptotically normal. In addition, a lack-of-fit test is presented to assess the adequacy of the model. Simulation studies demonstrate that the proposed method performs well for practical settings. An application to a platelet transfusion reaction study is provided.
20.
A remote sensing model for estimating aboveground grassland biomass in Shandan County
Shandan County, a representative grassland area of the Heihe River basin, was selected as the study region. Using aboveground biomass data from 50 field plots measured in August 2003, together with concurrent Landsat TM imagery, the relationship between vegetation indices and aboveground grassland biomass was analyzed, and an estimation model based on the remote-sensing vegetation index DVI was established. The results show that when the relationship between aboveground biomass and TM-derived vegetation indices is weak, so that building an estimation model directly from the TM imagery is not feasible, correcting the imagery with ground-measured reflectance spectra of the grassland vegetation can compensate for the shortcomings of the traditional "point-to-area" modeling approach and yield a satisfactory estimation model. The vegetation index DVI correlates well with aboveground biomass, and the fitted model is Y = 2477X - 77.598 (R2 = 0.7589). Validation against field measurements gave an overall accuracy above 80%, which is generally adequate for meso-scale estimation of aboveground grassland biomass.
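The fitted model reported in this entry, Y = 2477X - 77.598, is a plain linear regression of biomass on the DVI vegetation index and can be applied directly; the example DVI value below is illustrative, and units follow the original study.

```python
# Apply the reported regression model Y = 2477*X - 77.598 (R^2 = 0.7589),
# where X is the DVI vegetation index from the corrected TM imagery and
# Y is the estimated aboveground grassland biomass.

def biomass_from_dvi(dvi):
    """Estimated aboveground biomass for a given DVI value."""
    return 2477.0 * dvi - 77.598

# Example with a hypothetical DVI reading of 0.1:
print(round(biomass_from_dvi(0.1), 3))  # 170.102
```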