Similar literature
20 similar records found (search time: 46 ms)
1.
A variety of approaches have been proposed to provide formal and informal validation of proposed surrogate markers. To achieve true clinical impact, the validation must convince both the statistical and clinical communities. In this paper, we argue that the best approach is not a single method but a multi-faceted exploration using multiple approaches, including those that appeal directly to clinicians but have less statistical foundation, and those that arise from statistical considerations but are harder to interpret clinically. We illustrate our approach using data from clinical trials in both early and advanced colorectal cancer.

2.
Input-output data sets are ubiquitous in chemical process engineering. We introduce a real-time interactive navigation framework that provides several capabilities to the decision maker (DM). Once a surrogate model is trained, the DM can perform what-if analyses in both the input and output spaces by manipulating sliders. An approximated convex hull spanned in the input space supports both reliable surrogate prediction and navigation close to the data set. The framework has been tested on data sets obtained with a flowsheet simulator modeling a real steam methane reforming process.

3.
The last two decades have seen substantial development in the area of surrogate marker validation. One of these approaches places the evaluation in a meta-analytic framework, leading to definitions in terms of trial- and individual-level association. A drawback of this methodology is that different settings have led to different measures at the individual level. Using information theory, Alonso et al. proposed a unified framework, leading to a new definition of surrogacy that offers interpretational advantages and is applicable in a wide range of situations. In this work, we illustrate how this information-theoretic approach can be used to evaluate surrogacy when both endpoints are of a time-to-event type. Two meta-analyses, in early and advanced colon cancer, respectively, are then used to evaluate the performance of time to cancer recurrence as a surrogate for overall survival.
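The information-theoretic definition sketched above can be made concrete in the simplest case. The code below is a hypothetical illustration (not the authors' software): it uses the measure R²_h = 1 − exp(−2·I(S,T)), and for jointly normal endpoints, where the mutual information has a closed form, R²_h reduces to the squared correlation.

```python
from math import log, exp

def r2h_from_mutual_information(I):
    """Information-theoretic surrogacy measure of Alonso et al.:
    R^2_h = 1 - exp(-2 * I(S, T)), where I is the mutual information
    between the surrogate S and the true endpoint T."""
    return 1.0 - exp(-2.0 * I)

def mutual_information_bivariate_normal(rho):
    # For jointly normal (S, T) with correlation rho:
    # I(S, T) = -0.5 * ln(1 - rho^2)
    return -0.5 * log(1.0 - rho ** 2)

rho = 0.8
R2h = r2h_from_mutual_information(mutual_information_bivariate_normal(rho))
# In the bivariate normal case, R^2_h collapses to rho^2 = 0.64
```

This closed-form check is one reason the measure is attractive: it agrees with familiar correlation-based quantities in the normal case while remaining defined for other endpoint types.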

4.
So far, most Phase II trials have been designed and analysed under a frequentist framework, in which a trial is designed so that its overall Type I and Type II errors are controlled at desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed to stop when the posterior probability that the treatment is effective crosses certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates, and we introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities, and show that our method can control these Bayesian-type errors as well. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. A clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences among the designs.
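As a rough illustration of how a two-stage design controls frequentist error rates, the sketch below computes the probability of declaring a treatment promising under a Simon-type stopping rule. The design parameters (n1 = 10, r1 = 1, n = 29, r = 5) are illustrative values for p0 = 0.10 versus p1 = 0.30; this is a generic frequentist sketch, not the article's Bayesian version.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of exactly k responses among n patients."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def reject_prob(p, n1, r1, n, r):
    """P(declare treatment promising) under true response rate p for a
    Simon-type two-stage design: stop for futility after stage 1 if
    <= r1 responses among n1 patients; otherwise enroll to n patients
    total and declare promising if total responses exceed r."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):          # trials that pass stage 1
        stage1 = binom_pmf(x1, n1, p)
        # need total responses > r, i.e. stage-2 responses > r - x1
        stage2 = sum(binom_pmf(x2, n - n1, p)
                     for x2 in range(max(0, r - x1 + 1), n - n1 + 1))
        total += stage1 * stage2
    return total

# Illustrative design for p0 = 0.10, p1 = 0.30:
alpha = reject_prob(0.10, 10, 1, 29, 5)   # frequentist Type I error
power = reject_prob(0.30, 10, 1, 29, 5)   # 1 - Type II error
```

Evaluating `reject_prob` at p0 and p1 is exactly how the overall error levels of such a design are checked, whether the stopping boundaries come from a frequentist or a Bayesian construction.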

5.
There are clear advantages to using biomarkers and surrogate endpoints, but concerns about clinical and statistical validity, and the lack of systematic methods to evaluate these aspects, hinder their efficient application. Section 2 is a systematic, historical review of the biomarker-surrogate endpoint literature, with special reference to the nomenclature, the systems of classification and the statistical methods developed for their evaluation. In Section 3, an explicit, criterion-based, quantitative, multidimensional hierarchical levels-of-evidence schema (the Biomarker-Surrogacy Evaluation Schema) is proposed to evaluate and coordinate the multiple dimensions (biological, epidemiological, statistical, clinical trial and risk-benefit evidence) of biomarker-clinical endpoint relationships. The schema systematically evaluates and ranks the surrogacy status of biomarkers and surrogate endpoints using defined levels of evidence. It incorporates three independent domains, Study Design, Target Outcome and Statistical Evaluation, each with items ranked from zero to five. An additional category, Penalties, incorporates further considerations of biological plausibility, risk-benefit and generalizability. The total score (0-15) determines the level of evidence, with Level 1 the strongest and Level 5 the weakest. The term 'surrogate' is restricted to markers attaining Levels 1 or 2 only. The surrogacy status of markers can then be directly compared within and across different areas of medicine to guide individual, trial-based or drug-development decisions. This schema would facilitate the communication between clinical, research, regulatory, industry and consumer participants necessary for evaluating the biomarker-surrogate-clinical endpoint relationship in their different settings.

6.
Persistent genital infection with human papillomavirus (HPV) is a natural candidate as a surrogate marker for cervical cancer because of the strong epidemiologic and molecular evidence that HPV infection is the causative agent for almost all cervical cancers. However, while infection with high-risk types of HPV appears to be necessary for the development of cervical cancer, most infections are controlled by the host immune response and do not lead to cancer in the vast majority of infected women. Because diagnostic tests cannot distinguish a persistent infection in the pathogenesis of cervical cancer from a transient infection, it is difficult to describe the disease mechanism as a progressive process based on observations. Therefore, the disease pathogenesis pathway does not fit into the usual surrogate marker framework, raising practical concerns about using HPV infection as a surrogate for a clinical endpoint in vaccine trials. In this paper, we describe the challenges in defining HPV infection as a surrogate endpoint in an HPV vaccine trial that is aimed at reducing cervical cancer rates and examine potential effects of the vaccine. We then outline some issues in the design and analysis of HPV vaccine trials, including the use of operationally defined HPV infection events meant to capture persistent infections. We conclude with a recommendation for a multistate model that uses HPV infection to help explain the mechanisms of vaccine action rather than validate it as an endpoint substitute.

7.
A surrogate endpoint is an endpoint that is observed before a true endpoint and is used to draw conclusions about the effect of an intervention on the true endpoint. To gauge confidence in the use of a surrogate endpoint, it must be validated. Two simple validation methods using data from multiple trials with surrogate and true endpoints are discussed: an estimation method extending previous work, and a new method based on hypothesis tests. The validation methods were applied to two data sets, each involving 10 randomized trials: one for patients with early colon cancer, where the true endpoint was survival status at eight years and the surrogate endpoint was cancer recurrence status at three years, and one for patients with advanced colorectal cancer, where the true endpoint was survival status at 12 months and the surrogate endpoint was cancer recurrence status at six months. The estimation method uses the surrogate endpoint in the new trial and a model relating surrogate and true endpoints in previous trials to predict the effect of the intervention on the true endpoint in the new trial. For validation, each trial was successively treated as the 'new' trial and a comparison was made between predicted and observed effects of the intervention on the true endpoint. Performance of the surrogate endpoint was good in both data sets. The hypothesis testing method involves the z-statistic for the surrogate endpoint, which is the estimated effect of the intervention on the surrogate endpoint divided by its standard error. To use this z-statistic to test a null hypothesis of no effect of the intervention on the true endpoint, the critical value is increased above the standard level of 1.96 to a level determined by the relationships between surrogate and true endpoints in the data sets. This elevated critical value could be used for accelerated approval.
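A minimal sketch of the hypothesis-testing idea follows. The elevated critical value of 2.8 is a hypothetical stand-in for the value that would actually be derived from the surrogate/true-endpoint relationship in prior trials.

```python
def surrogate_z(effect_on_surrogate, se):
    """z-statistic: estimated treatment effect on the surrogate
    endpoint divided by its standard error."""
    return effect_on_surrogate / se

def reject_true_endpoint_null(z, critical_value=2.8):
    """Reject 'no effect on the true endpoint' only if the surrogate
    z-statistic clears an elevated critical value.  The 2.8 here is a
    hypothetical placeholder for a value calibrated from prior trials;
    it is deliberately above the standard 1.96."""
    return abs(z) > critical_value

z = surrogate_z(0.45, 0.12)            # z = 3.75 for these inputs
decision = reject_true_endpoint_null(z)
```

Raising the critical value builds the surrogate's imperfection into the test, which is what makes the rule a candidate for accelerated-approval decisions.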

8.
A new approach using computationally cheap surrogate models for efficient optimization of simulated moving bed (SMB) chromatography is presented. Two different types of surrogate models are developed to replace the detailed but expensive full-order SMB model for optimization purposes. The first type of surrogate is built through a coarse spatial discretization of the first-principles process model. The second one falls into the category of reduced-order modeling. The proper orthogonal decomposition (POD) method is employed to derive cost-efficient reduced-order models (ROMs) for the SMB process. The trust-region optimization framework is proposed to implement an efficient and reliable management of both types of surrogates. The framework restricts the amount of optimization performed with one surrogate and provides an adaptive model update mechanism during the course of optimization. The convergence to an optimum of the original optimization problem can be guaranteed with the help of this model management method. The potential of the new surrogate-based solution algorithm is evaluated by examining a separation problem characterized by nonlinear bi-Langmuir adsorption isotherms. By addressing the feed throughput maximization problem, the performance of each surrogate is compared to that of the standard full-order model based approach in terms of solution accuracy, CPU time and number of iterations. The quantitative results prove that the proposed scheme not only converges to the optimum obtained with the full-order system, but also provides significant computational advantages.
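POD itself is standard and easy to illustrate: the reduced basis consists of the leading left singular vectors of a snapshot matrix. The sketch below uses synthetic rank-2 "profiles" rather than SMB simulation output, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column plays the role of a full-order model
# state at one time point (here synthetic rank-2 data on 50 grid points).
x = np.linspace(0.0, 1.0, 50)
modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)], axis=1)  # 50 x 2
coeffs = rng.normal(size=(2, 30))                                      # 30 snapshots
snapshots = modes @ coeffs

# POD: left singular vectors of the snapshot matrix form the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 2                          # dimension of the reduced-order model
basis = U[:, :r]

# Project a new full-order state onto the POD basis and reconstruct it.
state = modes @ rng.normal(size=2)
reduced = basis.T @ state      # r coefficients instead of 50 values
reconstructed = basis @ reduced
error = np.linalg.norm(state - reconstructed) / np.linalg.norm(state)
```

Because the synthetic state lies in the span of the snapshots, a rank-2 basis reconstructs it essentially exactly; in a real SMB ROM the truncation rank is chosen from the decay of the singular values `s`.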

9.
This article presents an integrated, simulation‐based optimization procedure that can determine the optimal process conditions for injection molding without user intervention. The idea is to use a nonlinear statistical regression technique and design of computer experiments to establish an adaptive surrogate model with short turn‐around time and adequate accuracy for substituting time‐consuming computer simulations during system‐level optimization. A special surrogate model based on the Gaussian process (GP) approach, which has not been employed previously for injection molding optimization, is introduced. GP is capable of giving both a prediction and an estimate of the confidence (variance) for the prediction simultaneously, thus providing direction as to where additional training samples could be added to improve the surrogate model. While the surrogate model is being established, a hybrid genetic algorithm is employed to evaluate the model to search for the global optimal solutions in a concurrent fashion. The examples presented in this article show that the proposed adaptive optimization procedure helps engineers determine the optimal process conditions more efficiently and effectively. POLYM. ENG. SCI., 47:684–694, 2007. © 2007 Society of Plastics Engineers.
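The key GP property used above, a prediction accompanied by a variance that grows away from the training data, can be shown with a bare-bones GP regressor. This is a generic sketch with a squared-exponential kernel and made-up 1-D data, not the article's injection-molding model.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Training data: a simulated "process response" at a few design points.
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)

jitter = 1e-8                                  # numerical stabilizer
K = rbf(X, X) + jitter * np.eye(len(X))
Kinv_y = np.linalg.solve(K, y)

def gp_predict(xs):
    """GP posterior mean and variance at query points xs
    (zero prior mean, unit prior variance)."""
    Ks = rbf(xs, X)                            # cross-covariance
    mean = Ks @ Kinv_y
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)       # diag of posterior covariance
    return mean, var

m, v = gp_predict(np.array([1.0, 10.0]))
# Near a training point the variance is ~0; far from the data it
# approaches the prior variance, flagging where to add samples.
```

The variance output is exactly what an adaptive sampling loop consumes: new training simulations are placed where `var` is largest.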

10.
Arctic sea ice extent has been of considerable interest to scientists in recent years, mainly due to its decreasing temporal trend over the past 20 years. In this article, we propose a hierarchical spatio‐temporal generalized linear model for binary Arctic sea‐ice‐extent data, where statistical dependencies in the data are modeled through a latent spatio‐temporal linear mixed effects model. By using a fixed number of spatial basis functions, the resulting model achieves both dimension reduction and non‐stationarity for spatial fields at different time points. An EM algorithm is proposed to estimate model parameters, and an empirical–hierarchical‐modeling approach is applied to obtain the predictive distribution of the latent spatio‐temporal process. We illustrate the accuracy of the parameter estimation through a simulation study. The hierarchical model is applied to spatial Arctic sea‐ice‐extent data in the month of September for 20 years in the recent past, where several posterior summaries are obtained to detect the changes of Arctic sea ice cover. In particular, we consider a time series of latent 2 × 2 tables to infer the spatial changes of Arctic sea ice over time.

11.
With liquefied natural gas becoming increasingly prevalent as a flexible source of energy, the design and optimization of industrial refrigeration cycles becomes even more important. In this article, we propose an integrated surrogate modeling and optimization framework to model and optimize the complex CryoMan Cascade refrigeration cycle. Dimensionality reduction techniques are used to reduce the large number of process decision variables which are subsequently supplied to an array of Gaussian processes, modeling both the process objective as well as feasibility constraints. Through iterative resampling of the rigorous model, this data-driven surrogate is continually refined and subsequently optimized. This approach was not only able to improve on the results of directly optimizing the process flow sheet but also located the set of optimal operating conditions in only 2 h as opposed to the original 3 weeks, facilitating its use in the operational optimization and enhanced process design of large-scale industrial chemical systems.

12.
13.
Many statisticians have contributed to studies of the HIV epidemic and progression to AIDS. They have developed new statistical methodology, where needed, to address HIV-related issues. The transfer of methods from one area to another often involves a substantial delay. This paper points to methods that were developed in the HIV context and have either already found applications in other areas of medical research or have the potential for such applications, with the hope that this will promote a speedier transfer of the research methods. Among the new tools that HIV studies have placed firmly into the pool of statistical methods for medical research are the methods of back-calculation, methods for the analysis of retrospective ascertainment data and methods of analysis for the combined data from clinical trials and associated longitudinal studies. Notions that have been stimulated substantially are use of surrogate endpoints in clinical trials and screening blood products by the use of pooled serum samples. Research activity in many other areas has been boosted substantially through contributions motivated by HIV/AIDS studies. Noteworthy examples are analyses for doubly-censored lifetime data and methods for assessing vaccines for transmissible diseases.
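Back-calculation rests on a convolution identity: expected diagnoses in a period are past infections convolved with the incubation-time distribution. A toy forward model and its naive sequential inversion are sketched below; all the numbers are hypothetical, and real back-calculation replaces the exact inversion with smoothed EM-type estimation.

```python
def expected_diagnoses(infections, incubation):
    """Forward model: expected AIDS diagnoses in period t are past
    infections convolved with the incubation-time distribution f,
    where f[k] = P(diagnosis k periods after infection)."""
    T = len(infections)
    return [sum(infections[s] * incubation[t - s]
                for s in range(t + 1) if t - s < len(incubation))
            for t in range(T)]

def back_calculate(diagnoses, incubation):
    """Naive sequential inversion of the convolution (assumes
    incubation[0] > 0); illustrates the identity, not a robust fit."""
    h = []
    for t, d in enumerate(diagnoses):
        past = sum(h[s] * incubation[t - s]
                   for s in range(t) if t - s < len(incubation))
        h.append((d - past) / incubation[0])
    return h

f = [0.1, 0.3, 0.4, 0.2]               # hypothetical incubation distribution
true_h = [5.0, 8.0, 12.0, 6.0, 3.0]    # hypothetical infection curve
d = expected_diagnoses(true_h, f)
recovered = back_calculate(d, f)
```

In practice the diagnosis counts are noisy, so the infection curve is estimated by penalized likelihood rather than this exact deconvolution, but the convolution structure is the same.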

14.
Two conditions must be fulfilled for an intermediate endpoint to be an acceptable surrogate for a true clinical endpoint: (1) there must be a strong association between the surrogate and the true endpoint, and (2) there must be a strong association between the effects of treatment on the surrogate and on the true endpoint. We test whether these conditions are fulfilled for disease-free survival (DFS) and progression-free survival (PFS) using data from 20 clinical trials comparing experimental treatments with standard treatments for early and advanced colorectal cancer. The effects of treatment on DFS (or PFS in advanced disease) and overall survival (OS) were quantified through log hazard ratios (log HR), estimated through a Weibull model stratified by trial. The rank correlation coefficients between DFS and OS, and the trial-specific treatment effects, were estimated using a bivariate copula distribution for these endpoints. A linear regression model between the estimated log hazard ratios was used to compute the "surrogate threshold effect", the minimum treatment effect on DFS required to predict a non-zero treatment effect on OS in a future trial. In early disease, the rank correlation coefficient between DFS and OS was 0.96 (CI 0.95-0.97), and the correlation coefficient between the log hazard ratios was 0.94 (CI 0.87-1.01). The risk reductions were approximately 3% smaller on OS than on DFS, and the surrogate threshold effect corresponded to a DFS hazard ratio of 0.93. In advanced disease, the rank correlation coefficient between PFS and OS was 0.82 (CI 0.82-0.83), and the correlation coefficient between the log hazard ratios was 0.99 (CI 0.94-1.04). The risk reductions were approximately 19% smaller on OS than on PFS, and the surrogate threshold effect corresponded to a PFS hazard ratio of 0.86. One trial with a large treatment effect on PFS and OS had a strong influence on the results in advanced disease. DFS (and PFS in advanced disease) are acceptable surrogates for OS in colorectal cancer.
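The regression step behind the surrogate threshold effect can be sketched as follows. The per-trial log hazard ratios below are fabricated illustrative numbers, and the x-intercept of the fitted line is only a crude stand-in for the published STE, which additionally requires the 95% prediction interval for the OS effect to exclude zero.

```python
def ols(xs, ys):
    """Ordinary least squares slope and intercept.  Real surrogacy
    analyses weight trials by size and adjust for estimation error
    in the per-trial effects."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical per-trial treatment effects: log HR on DFS (x), OS (y).
log_hr_dfs = [-0.30, -0.22, -0.15, -0.10, -0.05, 0.02]
log_hr_os  = [-0.24, -0.168, -0.105, -0.06, -0.015, 0.048]

slope, intercept = ols(log_hr_dfs, log_hr_os)
# Crude stand-in for the surrogate threshold effect: the DFS log HR at
# which the fitted line predicts a zero OS effect.
ste_log_hr = -intercept / slope
```

With these illustrative numbers the fitted slope is below one (OS effects systematically smaller than DFS effects, as in the abstract), and the threshold sits at a DFS hazard ratio just under 1, mirroring how the published threshold of 0.93 is read off the regression.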

15.
A comparison of Bayesian spatial models for disease mapping
With the advent of routine health data indexed at a fine geographical resolution, small area disease mapping studies have become an established technique in geographical epidemiology. The specific issues posed by the sparseness of the data and possibility for local spatial dependence belong to a generic class of statistical problems involving an underlying (latent) spatial process of interest corrupted by observational noise. These are naturally formulated within the framework of hierarchical models, and over the past decade, a variety of spatial models have been proposed for the latent level(s) of the hierarchy. In this article, we provide a comprehensive review of the main classes of such models that have been used for disease mapping within a Bayesian estimation paradigm, and report a performance comparison between representative models in these classes, using a set of simulated data to help illustrate their respective properties. We also consider recent extensions to model the joint spatial distribution of multiple disease or health indicators. The aim is to help the reader choose an appropriate structural prior for the second level of the hierarchical model and to discuss issues of sensitivity to this choice.

16.
In randomized clinical trials comparing treatment effects on diseases such as cancer, a multicentre trial is usually conducted to accrue the required number of patients within a reasonable period of time. The fundamental point of conducting a multicentre trial is that all participating investigators must agree to follow a common study protocol. However, even when every attempt has been made to standardize the methods for diagnosing disease severity and evaluating response to treatment, these methods might be applied differently at different centres, which may range from comprehensive cancer centres to university hospitals to community hospitals. Therefore, in multicentre trials there is likely to be some degree of variation (heterogeneity) among centres in both the baseline risks and the treatment effects. While we estimate the overall treatment effect using a summary measure such as the hazard ratio, and usually interpret it as an average treatment effect over the centres, it is necessary to examine the homogeneity of the observed treatment effects across centres, that is, treatment-by-centre interaction. If the data are reasonably consistent with homogeneity of the observed treatment effects across centres, a single summary measure is adequate to describe the trial results, and those results will contribute to scientific generalization, the process of synthesizing knowledge from observations. On the other hand, if heterogeneity of treatment effects is found, we should interpret the trial results carefully and investigate why the variation is seen. In the analyses of multicentre trials, a random effects approach is often used to model the centre effects. In this article, we focus on proportional hazards models with random effects to examine centre variation in the treatment effects as well as in the baseline risks, and review the parameter estimation procedures: the frequentist approach (penalized maximum likelihood) and the Bayesian approach (Gibbs sampling). We also briefly review models for bivariate responses and present a few real data examples from the biometrical literature to highlight the issues.

17.
Following several attempts to achieve a molecular stratification of bladder cancer (BC) over the last decade, a "consensus" classification has recently been developed to provide a common base for the molecular classification of BC, encompassing a six-cluster scheme with distinct prognostic and predictive characteristics. In order to implement molecular subtyping (MS) as a risk stratification tool in routine practice, immunohistochemistry (IHC) has been explored as a readily accessible, relatively inexpensive, standardized surrogate method, achieving promising results in different clinical settings. The second part of this review deals with the pathological and clinical features of the molecular clusters, both in conventional and divergent urothelial carcinoma, with a focus on the role of IHC-based subtyping.

18.
The attachment ability of insects and lizards is well known. The Tokay gecko, in particular, has the most complex adhesion structures: its pads are covered by a large number of small hairs (setae), each carrying many branches tipped with spatulae. This hierarchical morphology of the setae allows a large number of spatulae to adapt to rough surfaces, and van der Waals attraction between the many spatulae in contact with a surface is the primary mechanism for high adhesion. To investigate the effect of this hierarchical structure, a two-level hierarchical model has been developed for the first time. We consider one- and two-level hierarchically structured spring models for simulating setae in contact with random rough surfaces, and demonstrate the effect of the two-level structure on the adhesion force, the number of contacts and the adhesion energy. The spatula tip in a single contact was assumed to be spherical. Rough surfaces were generated with various roughness parameters covering the common range of natural and artificial surfaces at the scale of the gecko's pad. It was found that the two-level structure yields significant adhesion enhancement up to a certain roughness, which appears to be related to the maximum spring deformation. We conclude that the hierarchical morphology of a gecko seta is a necessary part of the gecko's 'smart adhesion', its ability to cling to and detach from smooth as well as rough surfaces.

19.
Cost-effectiveness analysis is now an integral part of health technology assessment and addresses the question of whether a new treatment or other health care program offers good value for money. In this paper we introduce the basic framework for decision making with cost-effectiveness data and then review recent developments in statistical methods for analysis of uncertainty when cost-effectiveness estimates are based on observed data from a clinical trial. Although much research has focused on methods for calculating confidence intervals for cost-effectiveness ratios using bootstrapping or Fieller's method, these calculations can be problematic with a ratio-based statistic where numerator and/or denominator can be zero. We advocate plotting the joint density of cost and effect differences, together with cumulative density plots known as cost-effectiveness acceptability curves (CEACs) to summarize the overall value-for-money of interventions. We also outline the net-benefit formulation of the cost-effectiveness problem and show that it has particular advantages over the standard incremental cost-effectiveness ratio formulation.
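The net-benefit formulation and the CEAC are straightforward to compute from bootstrap replicates, which is one of their advantages over the ratio-based ICER. The replicates below are simulated placeholders, not trial data.

```python
import random

random.seed(1)

def ceac(delta_costs, delta_effects, lam):
    """Cost-effectiveness acceptability curve at willingness-to-pay lam:
    the fraction of (bootstrap) replicates whose incremental net benefit
    NB = lam * dE - dC is positive.  Working on the net-benefit scale
    avoids the zero-denominator problems of the ICER ratio."""
    n = len(delta_costs)
    return sum(1 for dc, de in zip(delta_costs, delta_effects)
               if lam * de - dc > 0) / n

# Hypothetical bootstrap replicates of incremental cost (currency units)
# and incremental effect (QALYs) for a new treatment.
dC = [random.gauss(1000.0, 300.0) for _ in range(2000)]
dE = [random.gauss(0.05, 0.02) for _ in range(2000)]

low = ceac(dC, dE, 5000.0)     # rarely cost-effective at a low threshold
high = ceac(dC, dE, 50000.0)   # usually cost-effective at a high threshold
```

Sweeping `lam` over a grid and plotting `ceac` against it produces the acceptability curve described in the abstract; the curve is monotone in the probability sense that higher willingness-to-pay makes a cost-increasing, effect-increasing treatment more acceptable.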

20.
This paper is based on a conference presentation in which several authors presented results from analyses of the same dataset concerning the evaluation of progression-free survival (PFS) as a surrogate endpoint for overall survival in advanced colorectal cancer clinical trials. In evaluating a potential surrogate endpoint, there is a hierarchy of information that is usually considered desirable: 1) a biological rationale for surrogacy, 2) demonstration of the prognostic value of the surrogate endpoint in untreated patients, 3) demonstration of its prognostic value in treated patients, and 4) demonstration across randomized comparisons that differences in the effect of randomized treatments on the surrogate endpoint are associated with corresponding differences in the effect on the clinical endpoint of interest. Results from analyses that might be used to address the third and fourth requirements are presented, and some of the practical issues that arise in evaluating a surrogate endpoint, which would be relevant to many diseases, are illustrated. Although the results presented should not be seen as a definitive analysis of the value of PFS as a surrogate endpoint, concerns are identified about the potential lack of standardization of the definition of PFS and of the frequency of evaluation of disease progression, and about the high leverage of one study in the evaluation of the association underlying the fourth requirement.
