Similar Articles
20 similar articles found
1.
Efficacy, which we define as the effect of receiving intervention on health outcomes among a group of subjects, is the quantity of interest for many investigators. In contrast, intent-to-treat analyses in randomized trials and their analogue for observational before-and-after studies compare outcomes between randomization groups or before-and-after time periods. When there is switching of interventions, estimates based on intent-to-treat are biased for estimating efficacy. By constructing a model based on potential outcomes, one can make reasonable assumptions to estimate efficacy under 'all-or-none' switching of interventions in which switching occurs immediately after randomization or at the start of the time period. This paper reviews the basic methodology, with emphasis on simple maximum likelihood estimates that arise with completely observed outcomes, partially missing binary outcomes, and discrete-time survival outcomes. Particular attention is paid to estimating efficacy in meta-analysis, where the interpretation is much more straightforward than with intent-to-treat analyses.
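As a rough illustration of the all-or-none switching setting described in this abstract, the standard moment-based (ratio) estimator of efficacy can be sketched as follows; the data and function names are hypothetical, and the usual identifying assumptions (randomisation, exclusion restriction, no defiers) are taken for granted:

```python
# Hypothetical sketch: efficacy under all-or-none switching is the
# intent-to-treat effect divided by the difference in treatment-receipt rates.

def efficacy_estimate(y_treat, y_ctrl, received_treat, received_ctrl):
    """All-or-none switching estimator (Wald/ratio form)."""
    itt = sum(y_treat) / len(y_treat) - sum(y_ctrl) / len(y_ctrl)
    uptake = (sum(received_treat) / len(received_treat)
              - sum(received_ctrl) / len(received_ctrl))
    return itt / uptake

# Toy binary outcomes: 70% uptake in the treatment arm, none in control.
y_t = [1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
y_c = [1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
r_t = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]
r_c = [0] * 10
print(efficacy_estimate(y_t, y_c, r_t, r_c))
```

With switching, the efficacy estimate (here 0.4/0.7) exceeds the raw intent-to-treat difference of 0.4, which is the bias the abstract refers to.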

2.
This review covers a number of the many design and analytic issues associated with clinical trials that incorporate patient reported outcomes as primary or secondary endpoints. We use a clinical trial designed to evaluate a new therapy for the prevention of migraines to illustrate how endpoints are defined by the objectives of the study, the methods for handling longitudinal assessments with multiple scales or outcomes, and the methods of analysis in the presence of missing data.

3.
Motivated by problems encountered in studying treatments for drug dependence, where repeated binary outcomes arise from monitoring biomarkers for recent drug use, this article discusses a statistical strategy using Markov transition models for analyzing incomplete binary longitudinal data. When the mechanism giving rise to missing data can be assumed to be 'ignorable', standard Markov transition models can be applied to observed data to draw likelihood-based inference on transition probabilities between outcome events. Illustration of this approach is provided using binary results from urine drug screening in a clinical trial of baclofen for cocaine dependence. When longitudinal data have 'nonignorable' missingness mechanisms, random-effects Markov transition models can be used to model the joint distribution of the binary data matrix and the matrix of missingness indicators. Categorizing missingness patterns into occasional or 'intermittent' missingness and monotonic missingness or 'missingness due to dropout', the random-effects Markov transition model was applied to a data set containing repeated breath samples analyzed for expired carbon monoxide levels among opioid-dependent, methadone-maintained cigarette smokers in a smoking cessation trial. Markov transition models provide a novel reconceptualization of treatment outcomes, offering both intuitive statistical values and relevant clinical insights.
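A minimal sketch of the ignorable-missingness case described above: for a first-order Markov chain observed at discrete visits, the maximum likelihood estimates of the transition probabilities are simply the observed transition fractions over consecutively observed pairs. The toy data below are invented, not the trial's:

```python
from collections import Counter

def transition_mle(sequences):
    """ML transition probabilities P(Y_t = 1 | Y_{t-1}) from observed pairs;
    None marks a missed visit, and pairs touching a miss are skipped."""
    counts = Counter()
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            if prev is not None and cur is not None:
                counts[(prev, cur)] += 1
    probs = {}
    for a in (0, 1):
        total = counts[(a, 0)] + counts[(a, 1)]
        if total:
            probs[a] = counts[(a, 1)] / total
    return probs

# Toy urine-screen histories (1 = positive) with intermittent missingness.
data = [[0, 0, 1, None, 1, 0],
        [1, 1, 0, 0, None, 0],
        [0, None, 0, 1, 1, 1]]
print(transition_mle(data))
```

Under a nonignorable mechanism this simple pooling is no longer valid, which is where the random-effects joint model described in the abstract comes in.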

4.
There has been much debate about the relative merits of mixed effects and population-averaged logistic models. We present a different perspective on this issue by noting that the investigation of the relationship between these models for a given dataset offers a type of sensitivity analysis that may reveal problems with assumptions of the mixed effects and/or population-averaged models for clustered binary response data in general and longitudinal binary outcomes in particular. We present several datasets in which the following violations of assumptions are associated with departures from the expected theoretical relationship between these two models: 1) negative intra-cluster correlations; 2) confounding of the response-covariate relationship by cluster effects; and 3) confounding of autoregressive relationships by the link between baseline outcomes and subject effects. Under each of these conditions, the expected theoretical attenuation of the population-averaged odds ratio relative to the cluster-specific odds ratio does not necessarily occur. In all cases, the naive fitting of a random intercept logistic model appears to lead to bias. In response, the random intercept model is modified to accommodate negative intra-cluster correlations, confounding due to clusters, or baseline correlations with random effects. Comparisons are made with GEE estimation of population-averaged models and conditional likelihood estimation of cluster-specific models. Several examples, including a cross-over trial, a multicentre nonrandomized treatment study, and a longitudinal observational study are used to illustrate these modifications.
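The "expected theoretical attenuation" referred to above is often quantified with a well-known approximation for logistic models with normally distributed random intercepts; a quick illustration (the formula is the standard Zeger-type approximation, used here only as a sketch):

```python
import math

# For a random-intercept logistic model with subject-specific log-odds ratio
# beta and random-intercept variance sigma2, the population-averaged
# coefficient is approximately beta / sqrt(1 + c^2 * sigma2),
# with c = 16 * sqrt(3) / (15 * pi).

def attenuated_beta(beta_ss, sigma2):
    c = 16 * math.sqrt(3) / (15 * math.pi)
    return beta_ss / math.sqrt(1 + c * c * sigma2)

beta_ss = math.log(2.0)  # cluster-specific odds ratio of 2
for sigma2 in (0.0, 1.0, 4.0):
    print(sigma2, math.exp(attenuated_beta(beta_ss, sigma2)))
```

The population-averaged odds ratio shrinks toward 1 as the random-intercept variance grows; the datasets in the article are interesting precisely because this monotone attenuation fails when the model assumptions are violated.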

5.
Response-adaptive designs have become popular for allocating entering patients among two or more competing treatments in a phase III clinical trial. Although many such designs exist for binary treatment responses, very few involve covariates. Patients sometimes give repeated responses. The only available response-adaptive allocation design for repeated binary responses is the urn design of Biswas and Dewanji [Biswas A and Dewanji A. A randomized longitudinal play-the-winner design for repeated binary data. ANZJS 2004; 46: 675-684; Biswas A and Dewanji A. Inference for a RPW-type clinical trial with repeated monitoring for the treatment of rheumatoid arthritis. Biometr J 2004; 46: 769-779.], although it does not account for patient covariates in the allocation. In this article, a covariate-adjusted response-adaptive randomisation procedure is developed using the log-odds ratio within the Bayesian framework for longitudinal binary responses. The small-sample performance of the proposed allocation procedure is assessed through a simulation study, and the procedure is illustrated using a real data set.

6.
Mendelian randomisation analyses use genetic variants as instrumental variables (IVs) to estimate causal effects of modifiable risk factors on disease outcomes. Genetic variants typically explain a small proportion of the variability in risk factors; hence Mendelian randomisation analyses can require large sample sizes. However, an increasing number of genetic variants have been found to be robustly associated with disease-related outcomes in genome-wide association studies. Use of multiple instruments can improve the precision of IV estimates, and also permit examination of underlying IV assumptions. We discuss the use of multiple genetic variants in Mendelian randomisation analyses with continuous outcome variables where all relationships are assumed to be linear. We describe possible violations of IV assumptions, and how multiple instrument analyses can be used to identify them. We present an example using four adiposity-associated genetic variants as IVs for the causal effect of fat mass on bone density, using data on 5509 children enrolled in the ALSPAC birth cohort study. We also use simulation studies to examine the effect of different sets of IVs on precision and bias. When each instrument independently explains variability in the risk factor, use of multiple instruments increases the precision of IV estimates. However, inclusion of weak instruments could increase finite sample bias. Missing data on multiple genetic variants can diminish the available sample size, compared with single instrument analyses. In simulations with additive genotype-risk factor effects, IV estimates using a weighted allele score had similar properties to estimates using multiple instruments. Under the correct conditions, multiple instrument analyses are a promising approach for Mendelian randomisation studies. Further research is required into multiple imputation methods to address missing data issues in IV estimation.
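A hedged sketch of one standard way to combine multiple genetic instruments, the inverse-variance weighted average of per-variant ratio (Wald) estimates; the summary statistics below are invented for illustration, not the ALSPAC results:

```python
# Each variant j gives a ratio estimate beta_y[j] / beta_x[j] of the causal
# effect; the IVW estimate weights these by the precision of beta_y.

def ivw_estimate(beta_x, beta_y, se_y):
    """Inverse-variance weighted combination of per-variant ratio estimates."""
    num = sum(bx * by / se ** 2 for bx, by, se in zip(beta_x, beta_y, se_y))
    den = sum(bx ** 2 / se ** 2 for bx, se in zip(beta_x, se_y))
    return num / den

# Four hypothetical adiposity variants: variant-risk factor associations,
# variant-outcome associations, and standard errors of the latter.
bx = [0.30, 0.25, 0.10, 0.40]
by = [0.15, 0.12, 0.06, 0.21]
se = [0.02, 0.03, 0.05, 0.02]
print(ivw_estimate(bx, by, se))
```

When the per-variant ratio estimates disagree markedly, that disagreement is itself evidence against the IV assumptions, which is the diagnostic use of multiple instruments the abstract describes.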

7.
In randomized clinical trials comparing treatment effects on diseases such as cancer, a multicentre trial is usually conducted to accrue the required number of patients within a reasonable period of time. The fundamental point of conducting a multicentre trial is that all participating investigators must agree to follow the common study protocol. However, even with every attempt having been made to standardize the methods for diagnosing severity of disease and evaluating response to treatment, for example, they might be applied differently at different centres, which may vary from comprehensive cancer centres to university hospitals to community hospitals. Therefore, in multicentre trials there is likely to be some degree of variation (heterogeneity) among centres in both the baseline risks and the treatment effects. While we estimate the overall treatment effect using a summary measure such as the hazard ratio and usually interpret it as an average treatment effect over the centres, it is necessary to examine the homogeneity of the observed treatment effects across centres, that is, treatment-by-centre interaction. If the data are reasonably consistent with homogeneity of the observed treatment effects across centres, a single summary measure is adequate to describe the trial results and those results will contribute to scientific generalization, the process of synthesizing knowledge from observations. On the other hand, if heterogeneity of treatment effects is found, we should interpret the trial results carefully and investigate the reason for the variation. In the analyses of multicentre trials, a random effects approach is often used to model the centre effects.
In this article, we focus on proportional hazards models with random effects to examine centre variation in the treatment effects as well as the baseline risks, and review the parameter estimation procedures: the frequentist approach (the penalized maximum likelihood method) and the Bayesian approach (the Gibbs sampling method). We also briefly review models for bivariate responses. We present a few real-data examples from the biometrical literature to highlight the issues.

8.
Throughout the 1980s and 1990s, cluster randomization trials were increasingly used to evaluate the effectiveness of health care interventions. Such trials raise several methodologic challenges in analysis. Meta-analyses involving cluster randomization trials are becoming common in the area of health care intervention, yet there has been no empirical review of current practice in such meta-analyses. A review was therefore performed to identify and examine the synthesis approaches of published meta-analyses involving cluster randomization trials. Electronic databases were searched for meta-analyses involving cluster randomization trials from the earliest date available to 2000. Once a meta-analysis was identified, the papers on the relevant cluster randomization trials it included were also requested. Each original trial paper was examined for its randomized design and unit, and for adjustment for the clustering effect in analysis. Each selected meta-analysis was then evaluated for how its synthesis handled the clustering effect. In total, 25 eligible meta-analyses were reviewed. Of these, 15 reported simple conventional fixed-effect methods of analysis, while six did not incorporate the cluster randomization trial results in the synthesis but described the trial results individually. Three meta-analyses attempted to account for the clustering effect in the synthesis, but the approaches were arbitrary. Fifteen meta-analyses included more than one cluster randomization trial, with a mixture of randomized designs, randomization units, and units of analysis; such mixtures may increase heterogeneity, but this was not considered in any meta-analysis. Some methods for dealing with a binary outcome in specific situations are discussed.
In conclusion, some difficulties in the quantitative synthesis procedures were found in the meta-analyses involving cluster randomization trials. Recommendations on approaches for some specific situations with a binary outcome variable are also provided. Several methodologic issues in meta-analyses involving cluster randomization trials still need further investigation.
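One standard way to account for clustering when synthesizing such trials is to deflate each trial's sample size by the design effect before meta-analysis; a minimal sketch with made-up numbers (this is the textbook adjustment, not one of the reviewed meta-analyses' methods):

```python
# The design effect DE = 1 + (m - 1) * ICC inflates the variance of a cluster
# randomized trial relative to individual randomization, where m is the
# average cluster size and ICC the intra-cluster correlation coefficient.

def effective_sample_size(n, mean_cluster_size, icc):
    design_effect = 1 + (mean_cluster_size - 1) * icc
    return n / design_effect

# A hypothetical trial randomizing 20 clinics of 30 patients each, ICC = 0.05:
print(effective_sample_size(600, 30, 0.05))
```

Even a small ICC substantially shrinks the effective sample size here (600 patients behave like roughly 245 individually randomized ones), which is why naive fixed-effect pooling of unadjusted cluster trials overstates precision.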

9.
Two types of modules are most common in gas permeation: the hollow fibre module and the spiral wound module. With some simplifying assumptions regarding the flow pattern, the separation characteristics of such modules can be calculated for binary and ternary mixtures. More important in practice, however, is the separation of multicomponent mixtures. This paper discusses the design of multicomponent systems including cases where non-permeating components or carrier gases at the permeate side are present. The results of some calculations are discussed and compared with the usual short-cut method based on the assumption of a pseudo-binary mixture. The results demonstrate that the reduction of a multicomponent mixture to a pseudo-binary mixture is only reasonable when components of similar permeability are lumped together. Serious deviations with respect to membrane area or product composition must be expected for larger differences in permeabilities.

11.
Estimating causal effects from incomplete data requires additional and inherently untestable assumptions regarding the mechanism giving rise to the missing data. We show that using causal diagrams to represent these additional assumptions both complements and clarifies some of the central issues in missing data theory, such as Rubin's classification of missingness mechanisms (as missing completely at random (MCAR), missing at random (MAR) or missing not at random (MNAR)) and the circumstances in which causal effects can be estimated without bias by analysing only the subjects with complete data. In doing so, we formally extend the back-door criterion of Pearl and others for use in incomplete data examples. These ideas are illustrated with an example drawn from an occupational cohort study of the effect of cosmic radiation on skin cancer incidence.

12.
Perfectly implemented randomized clinical trials, particularly of complex interventions, are extremely rare. Almost always they are characterized by imperfect adherence to the randomly allocated treatment and variable amounts of missing outcome data. Here we start by describing a wide variety of examples and then introduce instrumental variable methods for the analysis of such trials. We concentrate mainly on situations in which compliance is all or nothing: either the patient receives the allocated treatment or they do not (in the latter case they may receive no treatment or a treatment other than the one allocated). The main purpose of the review is to illustrate the use of latent class (finite mixture) models, fitted by maximum likelihood, for complier-average causal effect estimation under varying assumptions about the mechanism of the missing outcome data.

13.
New drugs, including immune checkpoint inhibitors and targeted therapies, have changed the prognosis for a subset of patients with advanced lung cancer and are now being actively investigated in a number of trials of neoadjuvant and adjuvant regimens. However, no phase III randomized studies have been published yet. The current narrative review indicates that targeted therapies are safe in the neoadjuvant approach, with an acceptable toxicity profile. A rate of severe adverse events that rarely compromises outcomes in advanced lung cancer is less acceptable in early lung cancer, as it may cost patients the chance of curative surgery. Among such complications, the factors most likely to limit the use of targeted therapies are severe respiratory adverse events precluding resection, occurring after treatment with some anaplastic lymphoma kinase inhibitors and, rarely, after epidermal growth factor receptor tyrosine kinase inhibitors. In the literature assessing the feasibility of neoadjuvant therapy with anaplastic lymphoma kinase and epidermal growth factor receptor tyrosine kinase inhibitors, we did not find any unexpected intraoperative events of special interest to the thoracic surgeon, and the postoperative course was associated with a typical rate of complications.

14.
The surface properties of binary nonpolar fluid mixtures were studied using density functional theory. Molecules are treated as chains of spherical segments, with interactions between segments of different molecules represented by a hard-core Yukawa potential. A suitable long-range correction was applied to avoid errors caused by truncating the numerical integration of the potential. An equation of state based on perturbation theory was established to calculate vapour-liquid equilibria. The segment interaction parameters ε/k, d and ms were regressed from pure-fluid vapour-liquid equilibrium data and gave good predictions of pure-fluid surface tension. A binary interaction parameter kij was then introduced for segment interactions between unlike molecules, and the vapour-liquid equilibria, surface tensions and surface density profiles of six nonpolar fluid mixtures were calculated. The results show that the calculated vapour-liquid surface tensions of binary nonpolar mixtures agree well with experimental data, and that relative enrichment of one component may occur in the surface region of some binary mixtures.

15.
Film formation of acrylic polyurethane coatings (II): model and simulation
夏正斌, 涂伟萍, 陈焕钦. 《化工学报》 2003, 54(10): 1446-1449
The two-component acrylic polyurethane coating system was simplified as a binary polymer-solvent system to establish the continuity equation for the solvent component in the film, a time-dependent equation for film thickness, and the corresponding boundary and initial conditions; the model equations were solved using the finite element method. The simulation results show that measured macroscopic data agree very well with the model predictions, and the model also explains the skinning phenomenon observed during film drying, in which the surface layer dries faster than the interior.

16.
It is now widely accepted that multiple imputation (MI) methods handle the uncertainty of missing data more properly than single imputation methods. Several standard statistical software packages, such as SAS, R and Stata, have standard procedures or user-written programs to perform MI. The performance of these packages is generally acceptable for most types of data. However, it is unclear whether these applications are appropriate for imputing data with a large proportion of zero values, which result in a semi-continuous distribution. It is also unclear whether these applications are suitable when the distribution of the data needs to be preserved for subsequent analysis. This article reports the findings of a simulation study carried out to evaluate the performance of the MI procedures for handling semi-continuous data within these statistical packages. Complete resource use data on 1060 participants from a large randomized clinical trial were used as the simulation population, from which 500 bootstrap samples were obtained and missing data imposed. The findings of this study showed differences in the performance of the MI programs when imputing semi-continuous data. Caution should be exercised when deciding which program should perform MI on this type of data.
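As a toy stand-in for the imputation routines compared in the study (not any specific package's algorithm), a two-part scheme that imputes the zero/positive indicator and the positive value separately, so as to preserve the spike at zero, can be sketched as:

```python
import random

# Illustrative two-part imputation for semi-continuous cost data: a large
# spike at zero plus a continuous positive part. None marks a missing value.

def impute_semicontinuous(values, rng):
    """Fill in None entries: first draw zero vs positive at the observed
    zero rate, then hot-deck a positive value from the observed positives."""
    observed = [v for v in values if v is not None]
    p_zero = sum(1 for v in observed if v == 0) / len(observed)
    positives = [v for v in observed if v > 0]
    out = []
    for v in values:
        if v is not None:
            out.append(v)
        elif rng.random() < p_zero:
            out.append(0.0)
        else:
            out.append(rng.choice(positives))  # draw preserves the shape
    return out

rng = random.Random(7)
costs = [0, 0, 120.5, None, 340.0, 0, None, 85.0, None, 0]
print(impute_semicontinuous(costs, rng))
```

A naive normal-model imputation, by contrast, would produce negative and near-zero continuous values and distort the distribution, which is the failure mode the simulation study probes.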

17.
Non-inferiority trials are motivated in the context of clinical research where a proven active treatment exists and placebo-controlled trials are no longer acceptable for ethical reasons. Instead, active-controlled trials are conducted in which a treatment is compared to an established treatment, with the objective of demonstrating that it is non-inferior to that treatment. We review and compare the methodologies for calculating sample sizes and suggest appropriate methods to use. We demonstrate that the simplest method, based on the anticipated response, is largely consistent with simulations. For trials with binary outcomes and expected high proportions of positive responses, we show that the sample size is quite sensitive to assumptions about the control response. We recommend that, when designing such a study, sensitivity analyses be performed with respect to the underlying assumptions, and that the Bayesian methods described in this article be adopted to assess sample size.
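The sensitivity to the assumed control response can be illustrated with the simplest "anticipated response" formula for a binary non-inferiority trial; the formula and defaults below are the standard normal-approximation sketch, not necessarily the exact methods compared in the article:

```python
from statistics import NormalDist

# Per-group sample size for non-inferiority with binary outcome, one-sided
# alpha and margin delta, assuming response proportions p_ctrl and p_test.

def ni_sample_size(p_ctrl, p_test, delta, alpha=0.025, power=0.9):
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha), z(power)
    var = p_ctrl * (1 - p_ctrl) + p_test * (1 - p_test)
    return (z_a + z_b) ** 2 * var / (p_test - p_ctrl + delta) ** 2

# High control response: note how sensitive n is to the assumed control rate.
for p in (0.85, 0.90, 0.95):
    print(p, round(ni_sample_size(p, p, delta=0.10)))
```

Moving the assumed response rate from 0.85 to 0.95 more than halves the required sample size, which is exactly the sensitivity the abstract warns about.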

18.
A good extrusion die must distribute the polymer melt in the flow channel such that the material exits the die with uniform velocity and temperature. Coathanger dies are commonly used for the extrusion of plastic sheets and films. The die is usually provided with a straining bar allowing regulation of the flowrate in the case of a poor design, but this, in turn, can affect temperature uniformity. The design of a coathanger die is therefore a complex task, mainly accomplished by trial and error in industry. Analyses of the flow in coathanger dies have been reported in the literature, using both analytical and numerical approaches. The analytical approach involves many simplifying assumptions, the most important being unidirectional, isothermal flow of the polymer. Most numerical methods deal with a 2-D geometry, but only a few have considered non-isothermal flow. A new model has been developed using a modified FAN method (Flow Analysis Network, introduced by Tadmor) for the calculation of the 2-D flow, coupled with a finite-difference scheme for the calculation of temperature. The overall model runs on a PC in only a few minutes. Good agreement was obtained between experimental data and simulations.

19.
Reciprocating compressors are common rotating equipment in oil refinery units, and their piping is prone to vibration. Sensible pipe routing and support placement are the keys to reciprocating compressor piping design, as they keep the natural frequencies of the piping system away from resonance. This paper presents a preliminary discussion of reciprocating compressor piping design.

20.
Abstract

Bioequivalence (BE) trials are sometimes preceded by a pilot relative bioavailability (BA) trial to investigate whether the test formulation is sufficiently similar to the reference. The geometric mean ratio and its confidence bounds provide guidance on how the BE trial can be appropriately sized to attain sufficient power. The aim of this work is to optimize the sample size of a pilot BA trial in order to minimize the overall sample size for the combination of pilot and pivotal trials. This is done through specification of a gain function associated with either of the two possible outcomes of the pilot trial: abandon further development of the test formulation, or proceed to a pivotal BE trial. The gain functions are constructed on the basis of sample size considerations only, because subject numbers are indicative of both the cost and the feasibility of a clinical trial. Using simulations, it is demonstrated that for drugs with high intrasubject variability, the BA trial should be sufficiently sized to avoid erroneous decision making and to control the overall cost. In contrast, when the intrasubject variability of the pharmacokinetic (PK) parameters is low, not conducting the BA trial should be considered. It is concluded that the rather typical practice of conducting small pilot trials is unlikely to be a cost-effective approach.
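The intrasubject-variability trade-off driving the pilot-sizing question can be illustrated with the standard crude approximation for sizing a 2x2 crossover BE trial under TOST with 80-125% limits; this is a sketch under stated assumptions (GMR close to 1, normal approximation), not the authors' gain-function method:

```python
import math
from statistics import NormalDist

# Total sample size approximation for a 2x2 crossover BE trial: the
# intrasubject CV sets the log-scale within-subject variance, and the
# effective margin shrinks as the assumed GMR moves away from 1.

def be_total_sample_size(cv, gmr=0.95, alpha=0.05, power=0.8):
    z = NormalDist().inv_cdf
    sigma_w2 = math.log(cv ** 2 + 1)          # intrasubject log-scale variance
    margin = math.log(1.25) - abs(math.log(gmr))
    n = 2 * (z(1 - alpha) + z(power)) ** 2 * sigma_w2 / margin ** 2
    return math.ceil(n)

for cv in (0.15, 0.30, 0.45):
    print(cv, be_total_sample_size(cv))
```

The pivotal trial grows rapidly with intrasubject CV, so a misleading CV estimate from an undersized pilot is far more costly for highly variable drugs, which is the trade-off the simulations in this work quantify.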
