Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Patient survival is one of the most important measures of cancer patient care (the diagnosis and treatment of cancer). The optimal method for monitoring the progress of patient care across the full spectrum of provider settings is the population-based study of cancer patient survival, which is only possible using data collected by population-based cancer registries. The probability of cure, or "statistical cure", is defined for a cohort of cancer patients as the percentage of patients whose annual death rate equals the death rate of the general cancer-free population. Mixture cure models have been widely used to model failure time data. These models provide simultaneous estimates of the proportion of patients cured of cancer and the distribution of the failure times for the uncured patients (the latency distribution). CANSURV (CAN-cer SURVival) is a Windows program that fits both standard survival models and cure models to population-based cancer survival data. CANSURV can analyze cause-specific survival data and, especially, relative survival data, the standard measure of net survival in population-based cancer studies. It can also fit parametric (cure) survival models to individual data. The program is available at http://srab.cancer.gov/cansurv. Colorectal cancer survival data from the Surveillance, Epidemiology and End Results (SEER) program [Surveillance, Epidemiology and End Results Program, The Portable Survival System/Mainframe Survival System, National Cancer Institute, Bethesda, 1999] of the National Cancer Institute, NIH, is used to demonstrate the CANSURV program.
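As an illustration of what a mixture cure model estimates, the sketch below fits the simplest parametric version, with survival function S(t) = pi + (1 - pi) * S_u(t) and an exponential latency distribution, to simulated (not SEER) data by maximum likelihood. All parameter values are illustrative assumptions; CANSURV itself additionally supports relative survival and richer latency families.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a cohort from a mixture cure model: a fraction pi is cured
# (never fails); uncured failure times are Exp(rate); administrative
# censoring at t = 10. (Illustrative parameters, not SEER values.)
pi_true, rate_true, n = 0.3, 0.5, 2000
cured = rng.random(n) < pi_true
t_event = np.where(cured, np.inf, rng.exponential(1 / rate_true, n))
t_obs = np.minimum(t_event, 10.0)
d = (t_event <= 10.0).astype(float)        # 1 = failure observed

def negloglik(theta):
    """Mixture cure likelihood: density (1-pi)*f_u(t) for failures,
    survival pi + (1-pi)*S_u(t) for censored observations."""
    pi = 1 / (1 + np.exp(-theta[0]))       # logit-transformed cure fraction
    rate = np.exp(theta[1])                # log-transformed latency rate
    f_u = rate * np.exp(-rate * t_obs)
    S_u = np.exp(-rate * t_obs)
    ll = d * np.log((1 - pi) * f_u) + (1 - d) * np.log(pi + (1 - pi) * S_u)
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-res.x[0]))
rate_hat = np.exp(res.x[1])
print(pi_hat, rate_hat)   # point estimates; true values are 0.3 and 0.5
```

The logit/log reparameterization keeps the optimization unconstrained, so a generic optimizer can be used.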

2.
Cure models have been developed to analyze failure time data with a cured fraction. For such data, standard survival models are usually not appropriate because they do not account for the possibility of cure. Mixture cure models assume that the studied population is a mixture of susceptible individuals, who may experience the event of interest, and non-susceptible individuals, who will never experience it. The aim of this paper is to propose a SAS macro to estimate parametric and semiparametric mixture cure models with covariates. The cure fraction can be modelled by various binary regression models. Parametric and semiparametric models can be used to model the survival of uncured individuals. The likelihood is maximized using SAS PROC NLMIXED for parametric models and through an EM algorithm for the Cox proportional hazards mixture cure model. Indications and limitations of the proposed macro are discussed, and an example from the field of cancer clinical trials is shown.
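The EM idea behind such macros can be shown outside SAS for the simplest parametric case. In this hedged sketch (exponential latency, no covariates, simulated data), the E-step computes the posterior probability that each censored subject is uncured, and the M-step has closed-form updates; the macro's actual EM targets the Cox proportional hazards mixture cure model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cohort: 30% cured, uncured times ~ Exp(rate 0.5), censoring at 10.
n = 2000
cured = rng.random(n) < 0.3
t_event = np.where(cured, np.inf, rng.exponential(2.0, n))
t = np.minimum(t_event, 10.0)
d = (t_event <= 10.0).astype(float)

pi, rate = 0.5, 1.0                      # initial guesses
for _ in range(200):
    S_u = np.exp(-rate * t)
    # E-step: posterior probability each subject is uncured
    # (observed failures are uncured with certainty).
    w = np.where(d == 1, 1.0, (1 - pi) * S_u / (pi + (1 - pi) * S_u))
    # M-step: closed-form updates for the exponential latency model.
    pi = 1.0 - w.mean()
    rate = d.sum() / (w * t).sum()

print(pi, rate)   # should approach the simulation's 0.3 and 0.5
```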

3.
In biomedical, genetic and social studies, there may exist a fraction of individuals who never experience the event of interest, so that the survival curves eventually level off at nonzero proportions. These people are referred to as "cured" or "nonsusceptible" individuals. Models developed to address this issue are known as cure models. The mixture model, which consists of a model for the binary cure status and a survival model for the event times of the noncured individuals, is one of the most widely used cure models. In this paper, we propose a class of semiparametric transformation cure models for multivariate survival data with a surviving fraction, fitting a logistic regression model to the cure status and a semiparametric transformation model to the event times of the noncured individuals. Both models allow the incorporation of covariates and do not require any assumption about the association structure. Statistical inference is based on the marginal approach, via a system of estimating equations. The asymptotic properties of the proposed estimators are proved, and the performance of the estimation is demonstrated via simulations. In addition, the approach is illustrated by analyzing smoking cessation data.

4.
5.
Over the years many efficient algorithms for the multiplierless design of multiple constant multiplications (MCM) have been introduced. These algorithms primarily focus on finding the fewest addition/subtraction operations that generate the MCM. Although the complexity of an MCM design decreases as the number of operations is reduced, such solutions may not yield an MCM design with optimal area at gate level, since they do not consider the hardware implementation cost of each operation. This article introduces two approximate algorithms that aim to optimize the area of the MCM operation by taking into account the gate-level implementation of each addition and subtraction operation that realizes a constant multiplication. To find the optimal tradeoff between area and delay, the proposed algorithms are further extended to find an MCM design with optimal area under a delay constraint. Experimental results clearly indicate that the solutions of the proposed algorithms lead to significantly better MCM designs at gate level than those obtained by algorithms designed to minimize the number of operations.
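As background to the operation counts these algorithms minimize, a single constant multiplication can be realized with shifts and adds, and the canonical signed-digit (CSD) form gives a standard per-constant adder count. The sketch below (hypothetical helper names) handles one constant at a time; the actual MCM problem, and the article's contribution, lies in sharing intermediate terms across constants and costing each operation at gate level.

```python
def csd_digits(c: int) -> list[int]:
    """Canonical signed-digit (CSD) representation of a positive integer:
    digits in {-1, 0, +1}, least significant first, no two adjacent nonzeros."""
    digits = []
    while c:
        if c & 1:
            d = 2 - (c & 3)        # +1 if c % 4 == 1, -1 if c % 4 == 3
            digits.append(d)
            c -= d
        else:
            digits.append(0)
        c >>= 1
    return digits

def adder_cost(c: int) -> int:
    """Adders/subtractors needed to compute x*c by shifts and adds alone
    (single constant, no sharing): nonzero CSD digits minus one."""
    return sum(d != 0 for d in csd_digits(c)) - 1

# 7x = 8x - x needs 1 operation; 23x = 32x - 8x - x needs 2.
for c in (7, 23):
    digs = csd_digits(c)
    value = sum(d << i for i, d in enumerate(digs))   # sanity: digits encode c
    print(c, digs, value, adder_cost(c))
```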

6.
Clustered failure time data often arise in biomedical studies, and a marginal regression modeling approach is often preferred to avoid assumptions about the dependence structure within clusters. A novel estimating equation approach based on a semiparametric marginal proportional hazards model is proposed to take the correlation within clusters into account. Unlike traditional marginal methods for clustered failure time data, our method explicitly models the correlation structure within clusters through a pre-specified working correlation matrix. The resulting estimates are proved to be consistent and asymptotically normal. Simulation studies show that the proposed method is more efficient than existing marginal methods. Finally, the model and the proposed method are applied to a kidney infection study.

7.
A generalization of the semiparametric Cox’s proportional hazards model by means of a random effect or frailty approach to accommodate clustered survival data with a cure fraction is considered. The frailty serves as a quantification of the health condition of the subjects under study and may depend on some observed covariates like age. One single individual-specific frailty that acts on the hazard function is adopted to determine the cure status of an individual and the heterogeneity on the time to event if the individual is not cured. Under this formulation, an individual who has a high propensity to be cured would tend to have a longer time to event if he is not cured. Within a cluster, both the cure statuses and the times to event of the individuals would be correlated. In contrast to some models proposed in the literature, the model accommodates the correlations among the observations in a more natural way. A multiple imputation estimation method is proposed for both right-censored and interval-censored data. Simulation studies show that the performance of the proposed estimation method is highly satisfactory. The proposed model and method are applied to the National Aeronautics and Space Administration’s hypobaric decompression sickness data to investigate the factors associated with the occurrence and the time to onset of grade IV venous gas emboli under hypobaric environments.

8.
9.
Manatunga and Chen [A.K. Manatunga, S. Chen, Sample size estimation for survival outcomes in cluster-randomized studies with small cluster sizes, Biometrics 56 (2000) 616-621] proposed a method to estimate sample size and power for cluster-randomized studies in which the primary outcome is survival time. The sample size formula is constructed from a bivariate marginal distribution (the Clayton-Oakes model) with univariate exponential margins. This paper provides a user-friendly FORTRAN 90 program implementing the method, and a simple example illustrates the program's features.

10.
To identify switched systems with unknown switching rules and an unknown number of subsystems, a two-stage identification method is proposed, consisting of mode detection and parameter identification. In the mode-detection stage, a Gaussian mixture model is first built to represent the distribution of the sampled data, with suitable initial model parameters chosen by roulette-wheel selection. Next, the posterior probability that each sample belongs to each subsystem is computed, and the model parameters are updated iteratively by maximum-likelihood estimation so that the Gaussian mixture model best fits the distribution of the sampled data. On this basis, the number of subsystems is determined by the Bayesian information criterion, and the switching rule is estimated under the maximum a posteriori criterion. In the parameter-identification stage, the parameter vector of each subsystem is estimated by recursive extended least squares. Finally, simulation results verify the effectiveness of the proposed method.
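The mode-detection stage can be sketched on simulated data: fit Gaussian mixtures for several candidate subsystem counts, let BIC pick the count, assign each sample to a mode by maximum posterior probability, then fit each mode separately. This hedged sketch uses scikit-learn (assumed available) and ordinary least squares per mode instead of the paper's recursive extended least squares, and the two linear submodes below are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Data from a hypothetical switched system with two linear submodes
# operating around different points: y = 2x in mode 0, y = -3x + 1 in mode 1.
n = 300
x0 = rng.normal(-1.0, 0.3, n); y0 = 2 * x0 + 0.05 * rng.standard_normal(n)
x1 = rng.normal(1.0, 0.3, n);  y1 = -3 * x1 + 1 + 0.05 * rng.standard_normal(n)
x_all = np.concatenate([x0, x1])
y_all = np.concatenate([y0, y1])
X = np.column_stack([x_all, y_all])

# Mode detection: fit GMMs with k = 1..4 components; the Bayesian
# information criterion selects the number of subsystems.
fits = [GaussianMixture(k, n_init=5, random_state=0).fit(X) for k in range(1, 5)]
k_hat = int(np.argmin([g.bic(X) for g in fits])) + 1
labels = fits[k_hat - 1].predict(X)      # MAP mode assignment per sample

# Parameter identification: per-mode least squares fit of y = a*x + b.
slopes = []
for m in range(k_hat):
    idx = labels == m
    A = np.column_stack([x_all[idx], np.ones(idx.sum())])
    coef, *_ = np.linalg.lstsq(A, y_all[idx], rcond=None)
    slopes.append(coef[0])

print(k_hat, sorted(slopes))
```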

11.
The cure fraction models have been widely used to analyze survival data in which a proportion of the individuals is not susceptible to the event of interest. In this article, we introduce a bivariate model for survival data with a cure fraction based on the three-parameter generalized Lindley distribution. The joint distribution of the survival times is obtained by using copula functions. We consider three types of copula function models, the Farlie–Gumbel–Morgenstern (FGM), Clayton and Gumbel–Barnett copulas. The model is implemented under a Bayesian framework, where the parameter estimation is based on Markov Chain Monte Carlo (MCMC) techniques. To illustrate the utility of the model, we consider an application to a real data set related to an invasive cervical cancer study.

12.
Research on Interconnect Delay in Tree-Structured Multichip Module Networks
In circuit models of multichip module (MCM) interconnect transmission lines, both line inductance and line resistance must be considered simultaneously, so the study of interconnect delay is more complex than for traditional PCB and IC interconnects. This paper studies the delay of MCM interconnect networks with a tree topology: after identifying the distinctive features of MCM interconnect delay, it presents a method for computing the moments of the impulse response of tree-structured interconnect networks, and derives an effective delay-estimation method from the close relationship between moments and delay.

13.
A unified scheme for developing Box-Jenkins (BJ) type models from input-output plant data, by combining an orthonormal basis filter (OBF) model with conventional time series models, and the corresponding multi-step-ahead prediction procedure are presented. The models have a deterministic part with an OBF structure and an explicit stochastic part with either an AR or an ARMA structure. The proposed models combine all the advantages of an OBF model over conventional linear models with an explicit noise model. The parameters of the OBF-AR model are easily estimated by linear least squares. The OBF-ARMA model structure leads to a pseudo-linear regression whose parameters can be estimated using either a two-step linear least squares method or an extended least squares method. Models for MIMO systems are easily developed using multiple MISO models. The advantages of the proposed models over BJ models are that the parameters can be determined easily and accurately without nonlinear optimization, prior knowledge of time delays is not required, and the identification and prediction schemes extend easily to MIMO systems. The proposed methods are illustrated with two SISO simulation case studies and one MIMO real-plant study of a pilot-scale distillation column.

14.
This paper presents a variational Bayes expectation maximization algorithm for time series based on Attias' variational Bayesian theory. The proposed algorithm is applied to the blind source separation (BSS) problem to estimate both the source signals and the mixing matrix for the optimal model structure. The distribution of the mixing matrix is assumed to be matrix Gaussian, owing to the correlation among its elements, and the inverse covariance of the sensor noise is assumed to be Wishart distributed to capture the correlation between sensor noises. A mixture of Gaussians is used to approximate the distribution of each independent source. Update rules for the posterior hyperparameters and for the posterior of the model structure are derived, and the optimal model structure is selected as the one with the largest posterior probability. The source signals and mixing matrix are estimated by applying LMS and MAP estimators, respectively, to the posterior distributions of the hidden variables and the model parameters for the optimal structure. The proposed algorithm is tested on synthetic data. The results show that (1) the log posterior of the model structure increases with the accuracy of the posterior mixing matrix, and (2) the accuracies of the prior mixing matrix, the estimated mixing matrix, and the estimated source signals increase with the log posterior of the model structure. The algorithm is also applied to magnetoencephalography (MEG) data to localize the sources of equivalent current dipoles.

15.
In reliability-based design optimization (RBDO), input uncertainty models such as marginal and joint cumulative distribution functions (CDFs) are needed. However, only limited data exist in industrial applications, so identification of the input uncertainty model is challenging, especially when the input variables are correlated. Since input random variables, such as fatigue material properties, are correlated in many industrial problems, the joint CDF of correlated input variables needs to be correctly identified from the given data. In this paper, a Bayesian method is proposed to identify the marginal and joint CDFs from given data, where a copula, which requires only marginal CDFs and correlation parameters, is used to model the joint CDF of the input variables. Using simulated data sets, the performance of the Bayesian method is tested for different numbers of samples and compared with the goodness-of-fit (GOF) test. Two examples demonstrate how the Bayesian method identifies the correct marginal CDFs and copula.

16.
To improve the performance of speaker recognition, an embedded linear transformation is used to integrate both the transformation and a diagonal-covariance Gaussian mixture into a unified framework. In that case, the mixture number of the GMM must be fixed during model training. The cluster expectation-maximization (EM) algorithm is a well-known technique in which the mixture number is treated as an estimated parameter. This paper presents a new model structure that integrates a multi-step cluster algorithm into the estimation process of a GMM with the embedded transformation. In this approach, the transformation matrix, the mixture number and the model parameters are estimated simultaneously according to a maximum likelihood criterion. The proposed method is evaluated on a database of three data sessions for text-independent speaker identification. The experiments show that this method outperforms the traditional GMM with the cluster EM algorithm.

17.
The original ARMarkov identification method explicitly determines the first μ Markov parameters from plant input-output data and approximates the slower dynamics of the process by an ARX model structure. In this paper, the method is extended to include a disturbance model, and an ARIMAX structure is used to approximate the slower dynamics. This extended ARMarkov model is then used to formulate a predictive controller. As the number of Markov parameters in the model varies from one to P + 1 (where P is the prediction horizon), the controller changes from generalized predictive control (GPC) to dynamic matrix control (DMC). The advantages of the proposed ARM-MPC are the consistency of the Markov parameters estimated by the ARMarkov method, independent tuning of the controller for servo and regulatory responses, and the ability to combine the characteristics of GPC and DMC. The theoretical results are illustrated through simulation examples.

18.
For widely distributed systems with complex structure, an ordered-tree model is proposed to implement fault detection effectively and in real time. By analyzing the layout characteristics of complex-structured systems, an ordered-tree model is constructed that effectively represents the correlations among detection data. Using the correlations between ordered-tree nodes together with the sensor measurements, computed estimates are derived for the corresponding nodes; from the relationship between the measured and estimated values at those nodes, the operating condition of the pipeline is inferred, achieving fault detection for the complex system. Simulations verify the effectiveness of the method for fault detection and provide a theoretical basis for system maintenance.

19.
Exploiting the correlations among coefficients within the wavelet subbands of an image, a locally adaptive wavelet denoising method is proposed. First, under the Bayesian maximum a posteriori (MAP) criterion, a MAP estimator based on a Laplacian prior and a subband MapShrink threshold are derived. To obtain a locally adaptive MapShrink threshold and denoising algorithm, each wavelet coefficient within a subband is modeled as Laplacian with its own marginal standard deviation, and the marginal standard deviations are in turn assumed to be strongly locally correlated random variables that can be estimated from a local neighborhood window. Experimental results show that, compared with classical subband-adaptive denoising algorithms, the method achieves a clear gain in peak signal-to-noise ratio and improved subjective visual quality.
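The thresholding idea can be sketched on a 1-D signal with a hand-rolled one-level Haar transform: under a Laplacian prior and Gaussian noise of known standard deviation sigma_n, the MAP estimate is soft thresholding with threshold T = sqrt(2) * sigma_n^2 / sigma, where sigma is the signal's marginal standard deviation estimated here in a sliding window. The window size, test signal, and 1-D setting are illustrative assumptions; the paper works on 2-D image subbands.

```python
import numpy as np

rng = np.random.default_rng(3)

# Piecewise-constant test signal plus Gaussian noise of known sd.
n, sigma_n = 1024, 0.5
clean = np.repeat([0.0, 4.0, -2.0, 3.0], n // 4)
noisy = clean + sigma_n * rng.standard_normal(n)

# One-level orthonormal Haar transform: approximation and detail halves.
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# Locally adaptive MAP threshold: estimate the coefficient sd sigma in a
# sliding window (E[d^2] = sigma^2 + sigma_n^2), then T = sqrt(2)*sigma_n^2/sigma.
win = 15
e2 = np.convolve(d ** 2, np.ones(win) / win, mode="same")
sigma = np.sqrt(np.maximum(e2 - sigma_n ** 2, 1e-8))
T = np.sqrt(2) * sigma_n ** 2 / sigma
d_hat = np.sign(d) * np.maximum(np.abs(d) - T, 0.0)       # soft threshold

# Inverse Haar reconstruction from (a, d_hat).
denoised = np.empty(n)
denoised[0::2] = (a + d_hat) / np.sqrt(2)
denoised[1::2] = (a - d_hat) / np.sqrt(2)

print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

In pure-noise regions the window estimate of sigma is tiny, the threshold becomes huge, and the detail coefficients are zeroed, which is exactly the locally adaptive behavior the abstract describes.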

20.
To reduce energy consumption and maximize network lifetime, this paper proposes an efficient approximate data-collection algorithm that operates within a prescribed error bound. First, a local estimation model is generated for each node by exploiting the temporal correlation of its sensed data; the nodes are then clustered according to the spatial correlation of their estimated data, with correlation tested at the cluster heads and the cluster structure adjusted dynamically, and the cluster heads upload their model parameters to the sink node; finally, global approximate data collection is performed at the sink. Simulation results show that the algorithm fully exploits the spatio-temporal correlation of sensor data to remove redundancy and significantly reduces communication cost within the given error bound.
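The temporal half of this idea, suppressing transmissions whose value the sink can already predict within the error bound, can be sketched with a simple cache-based filter. Function names and the readings are hypothetical; the paper's algorithm additionally builds per-node estimation models and clusters nodes by spatial correlation.

```python
def collect_approx(readings, eps):
    """Sensor-side filter: transmit a reading only when it differs from
    the last transmitted value by more than eps."""
    sent = []                 # (index, value) pairs actually transmitted
    last = None
    for i, v in enumerate(readings):
        if last is None or abs(v - last) > eps:
            sent.append((i, v))
            last = v
    return sent

def reconstruct(sent, n):
    """Sink-side reconstruction: hold the last received value, so every
    reconstructed reading is within eps of the true one."""
    out, j, cur = [], 0, None
    for i in range(n):
        if j < len(sent) and sent[j][0] == i:
            cur = sent[j][1]
            j += 1
        out.append(cur)
    return out

readings = [20.0, 20.1, 20.4, 21.2, 21.3, 25.0, 25.2]
sent = collect_approx(readings, eps=0.5)
rec = reconstruct(sent, len(readings))
print(sent)   # far fewer transmissions than readings
```

With eps = 0.5 only three of the seven readings are transmitted, yet every reconstructed value stays within the error bound, which is the redundancy-removal effect the abstract reports.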


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)