Similar Documents
20 similar documents found.
1.
2.
When comparing an experimental treatment with a standard treatment in a randomized clinical trial (RCT), we often use the risk difference (RD) to measure the efficacy of the experimental treatment. In this paper, we develop four asymptotic interval estimators for the RD in a stratified RCT with noncompliance: an estimator based on the weighted-least-squares (WLS) estimator of the RD, an estimator using the tanh⁻¹(x) transformation with the WLS optimal weight, an estimator derived from Fieller's Theorem, and an estimator using a randomization-based approach. Based on Monte Carlo simulations, we compare these four interval estimators with an asymptotic interval estimator recently proposed elsewhere. We find that when the probability of compliance is high, the randomization-based interval estimator is probably the most accurate, especially when the stratum size is not large. When the probability of compliance is moderate, the interval estimator using the tanh⁻¹(x) transformation is likely the best among those considered here. We note that the interval estimator proposed elsewhere can be of use when the underlying RD is small, but it loses accuracy when the RD is large. We also note that when the number of patients per assigned treatment is large, the four asymptotic interval estimators developed here are essentially equivalent, and all are appropriate for use. Finally, to illustrate the use of these interval estimators, we consider data taken from a large field trial studying the effect of a multifactor intervention program on reducing mortality from coronary heart disease in middle-aged men.
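As a concrete illustration of one of these approaches, the sketch below computes a confidence interval for the RD on the tanh⁻¹ (Fisher z) scale via the delta method. It is a minimal two-sample version with hypothetical counts, ignoring the stratification and noncompliance adjustments developed in the paper.

```python
import numpy as np
from scipy.stats import norm

def rd_ci_atanh(x1, n1, x0, n0, alpha=0.05):
    """CI for the risk difference on the tanh^-1 (Fisher z) scale --
    a simplified two-sample sketch that ignores the stratification and
    noncompliance adjustments of the paper."""
    p1, p0 = x1 / n1, x0 / n0
    d = p1 - p0                                   # point estimate of RD
    var_d = p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0
    z = norm.ppf(1 - alpha / 2)
    # delta method: Var(atanh(d)) ~= Var(d) / (1 - d^2)^2
    t = np.arctanh(d)
    se_t = np.sqrt(var_d) / (1 - d**2)
    return np.tanh(t - z * se_t), np.tanh(t + z * se_t)

print(rd_ci_atanh(45, 100, 30, 100))   # hypothetical counts, RD = 0.15
```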

3.
4.
It is not uncommon to encounter a randomized clinical trial (RCT) in which we need to account for both patients' noncompliance with their assigned treatment and confounders to avoid making a misleading inference. In this paper, we focus on estimating the relative treatment efficacy, measured by the odds ratio (OR), in large strata for a stratified RCT with noncompliance. We develop five asymptotic interval estimators for the OR and employ Monte Carlo simulation to evaluate their finite-sample performance in a variety of situations. We note that the interval estimator using the weighted least squares (WLS) method may perform well when the number of strata is small, but tends to be liberal when the number of strata is large. We find that an interval estimator using weights that are not functions of unknown parameters estimated from the data can improve the accuracy of the WLS-based estimator, but at a loss of precision. We note that the estimator using the logarithmic transformation of the WLS point estimator and the interval estimator using the logarithmic transformation of the Mantel-Haenszel (MH) type of point estimator perform well with respect to both coverage probability and average length in all the situations considered here. We further note that the interval estimator derived from a quadratic equation using a randomization-based method can be of use when the number of strata is large. Finally, we use data taken from a multiple risk factor intervention trial to illustrate the interval estimators appropriate when the number of strata is small or moderate.
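A minimal sketch of the log-transformed MH-type interval follows, using the Robins-Breslow-Greenland variance for log(OR_MH) on hypothetical stratified 2x2 tables; the paper's noncompliance adjustment is not included.

```python
import numpy as np
from scipy.stats import norm

def mh_or_ci(tables, alpha=0.05):
    """Mantel-Haenszel common odds ratio over strata with the
    Robins-Breslow-Greenland variance for log(OR_MH). A sketch on
    2x2 tables [[a, b], [c, d]]; no noncompliance adjustment."""
    t = np.asarray(tables, dtype=float)           # shape (strata, 2, 2)
    a, b, c, d = t[:, 0, 0], t[:, 0, 1], t[:, 1, 0], t[:, 1, 1]
    n = a + b + c + d
    R, S = a * d / n, b * c / n
    P, Q = (a + d) / n, (b + c) / n
    or_mh = R.sum() / S.sum()
    var_log = (np.sum(P * R) / (2 * R.sum() ** 2)
               + np.sum(P * S + Q * R) / (2 * R.sum() * S.sum())
               + np.sum(Q * S) / (2 * S.sum() ** 2))
    half = norm.ppf(1 - alpha / 2) * np.sqrt(var_log)
    return or_mh, or_mh * np.exp(-half), or_mh * np.exp(half)

print(mh_or_ci([[[10, 20], [5, 25]], [[15, 15], [8, 22]]]))
```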

5.
Clustered or correlated samples with binary data are frequently encountered in biomedical studies. The clustering may be due to repeated measurements of individuals over time or to subsampling of the primary sampling units. Individuals in the same cluster tend to behave more alike than individuals belonging to different clusters, and this positive intracluster correlation decreases the amount of information about the effect of the intervention. In the analysis of randomized cluster trials, one must therefore adjust the variance of the estimator of the mean for the positive intraclass correlation ρ. We review selected alternatives to the typical Pearson chi-square analysis, illustrate these alternatives, and outline an alternative analysis algorithm. We have written and tested a FORTRAN program that produces the statistics outlined in this paper; the program is available in executable format from the author on request.
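The sketch below illustrates the standard design-effect correction of Pearson's chi-square for clustered binary data, assuming a common cluster size m and intraclass correlation ρ; it is a simplified illustration of the idea, not the authors' FORTRAN program.

```python
from scipy.stats import chi2

def adjusted_chi2(x1, n1, x0, n0, m, rho):
    """Pearson chi-square for two proportions deflated by the design
    effect 1 + (m - 1)*rho; m = common cluster size, rho = intraclass
    correlation. A sketch assuming equal cluster sizes."""
    deff = 1 + (m - 1) * rho
    p = (x1 + x0) / (n1 + n0)
    stat = ((x1 / n1 - x0 / n0) ** 2
            / (p * (1 - p) * (1 / n1 + 1 / n0)))
    stat_adj = stat / deff                  # correct for clustering
    return stat_adj, chi2.sf(stat_adj, df=1)

print(adjusted_chi2(60, 200, 40, 200, m=10, rho=0.05))
```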

6.
Disease-modifying (DM) trials on chronic diseases such as Alzheimer's disease (AD) require a randomized start or withdrawal design. The analysis and optimization of such trials remain poorly understood, even for the simplest scenario in which only three repeated efficacy assessments are planned for each subject: one at baseline, one at the end of the trial, and one at the time when the treatments are switched. Under the assumption that the repeated measures across subjects follow a trivariate distribution whose mean and covariance matrix exist, the DM efficacy hypothesis is formulated by comparing the change in efficacy outcome between treatment arms with and without a treatment switch. Using a minimax criterion, a methodology is developed to optimally determine the sample size allocations to individual treatment arms as well as the optimum time at which treatments are switched, and the sensitivity of the optimum designs to various model parameters is assessed. An intersection-union test (IUT) is proposed to test the DM hypothesis, and its asymptotic size and power are determined. Finally, the proposed methodology is demonstrated using reported statistics on the placebo arms of several recently published symptomatic trials on AD to estimate the necessary parameters and then derive the optimum sample sizes and time of treatment switch for future DM trials on AD.
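The decision rule of an intersection-union test is easy to state: reject the DM null only if every component test rejects, so the overall p-value is the maximum of the component p-values. The sketch below illustrates this logic with two hypothetical one-sided z-statistics; the paper's actual components are built from the trivariate repeated-measures model.

```python
from scipy.stats import norm

def iut_dm_test(z_early, z_end, alpha=0.05):
    """Intersection-union test sketch: declare disease modification only
    if BOTH one-sided component tests reject at level alpha (the IUT
    then has level alpha, with p-value = max of component p-values).
    z_early / z_end are illustrative component z-statistics."""
    p_early = norm.sf(z_early)
    p_end = norm.sf(z_end)
    p_iut = max(p_early, p_end)
    return p_iut, p_iut < alpha

print(iut_dm_test(2.4, 1.9))   # rejects only if both components do
```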

7.
When we have difficulty recruiting patients into a randomized clinical trial (RCT), we may consider taking more than one measurement per patient to reduce the number of patients needed to achieve a desired power. In this paper, we consider a double-blind RCT with two courses of treatment per patient. At each course, a patient assigned to the experimental treatment could switch to the placebo upon declining the assigned (experimental) treatment, and a patient assigned to the placebo could likewise switch to the experimental treatment upon refusing the assigned placebo. A sample size calculation that does not account for this noncompliance can be inadequate when the standard intention-to-treat analysis is applied to test for no treatment effect. Based on the simple additive risk model proposed elsewhere, we incorporate into the sample size determination the initial probability of compliance, the dependence of a patient's selection of a treatment on his or her previous response, and the variation of response probabilities between patients. We include a quantitative discussion that provides insight into the effect of the various parameters on the minimum required sample size, and we note the situations in which taking repeated measurements per patient is most effective in reducing the number of patients needed to maintain a given power.
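To see why ignoring noncompliance is costly, note that if only a fraction c of patients receive their assigned treatment, the intention-to-treat effect is diluted roughly by c, inflating the required sample size by about 1/c². The sketch below illustrates this with a simple one-way noncompliance adjustment to the usual two-proportion formula; it is not the paper's additive risk model with repeated courses.

```python
import math
from scipy.stats import norm

def n_per_arm(p1, p0, compliance=1.0, alpha=0.05, power=0.8):
    """Per-arm sample size for two proportions when only a fraction
    `compliance` of the experimental arm actually receives treatment
    (one-way noncompliance dilutes the ITT risk difference); a
    simplified sketch, not the paper's model."""
    delta = (p1 - p0) * compliance            # diluted ITT effect
    p1_itt = p0 + delta
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n = ((z_a + z_b) ** 2
         * (p1_itt * (1 - p1_itt) + p0 * (1 - p0)) / delta ** 2)
    return math.ceil(n)

print(n_per_arm(0.5, 0.3))                    # full compliance
print(n_per_arm(0.5, 0.3, compliance=0.8))    # noncompliance inflates n
```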

8.
The biased coin randomization approach is frequently adopted in randomized clinical trials to control the balance of overall treatment assignments. In this article, an algorithm is developed to determine theoretically the exact allocation ratios for all patients enrolled in a randomized clinical trial with biased coin randomization, based on the order in which they are randomized. Our results show that the exact allocation ratios can deviate significantly from the ratio specified for the trial, which poses challenges both for enrollment and for the interpretation of results. To maintain a constant allocation ratio throughout the trial, a modification of the widely adopted permuted block randomization is proposed; it is shown to achieve better balance not only for the overall treatment assignments but also for the baseline stratification variables that should be balanced at the end of the trial.
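For intuition about how a biased coin steers the allocation, the sketch below simulates Efron's classical biased coin, assigning the under-represented arm with probability p; it illustrates the general scheme rather than the exact allocation-ratio algorithm of the article.

```python
import random

def efron_biased_coin(n_patients, p=2/3, seed=1):
    """Efron's biased coin: when arms are imbalanced, assign the next
    patient to the under-represented arm with probability p; use a fair
    coin when the arms are balanced. A sketch of the general scheme."""
    random.seed(seed)
    counts = {"A": 0, "B": 0}
    assignments = []
    for _ in range(n_patients):
        if counts["A"] == counts["B"]:
            arm = random.choice("AB")
        else:
            lagging = "A" if counts["A"] < counts["B"] else "B"
            other = "B" if lagging == "A" else "A"
            arm = lagging if random.random() < p else other
        counts[arm] += 1
        assignments.append(arm)
    return assignments

print("".join(efron_biased_coin(20)))
```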

9.
Cluster randomization trials are increasingly popular among healthcare researchers. Intact groups (called 'clusters') of subjects are randomized to receive different interventions, and all subjects within a cluster receive the same intervention. In cluster randomized trials, the cluster is the unit of randomization and the subject is the unit of analysis. Variation in cluster sizes can affect the sample size estimate or the power of the study. Guittet, Ravaud, and Giraudeau [2006. Planning a cluster randomized trial with unequal cluster sizes: Practical issues involving continuous outcomes. BMC Medical Research Methodology 6 (17), 1-15] investigated, through simulations, the impact of an imbalance in cluster size on the power of trials with continuous outcomes. In this paper, we examine through simulations the impact of cluster size variation and intracluster correlation on the power of the study for binary outcomes. Because the sample size formula for cluster randomization trials is based on a large-sample approximation, we also evaluate the performance of the formula with small sample sizes. The simulation findings show that the sample size formula accounting for unequal cluster sizes (m_p) yields empirical powers closer to the nominal power than the formula based on the average cluster size (m_a). The differences in sample size estimates and empirical powers between m_a and m_p shrink as the imbalance in cluster sizes decreases.
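The sketch below contrasts the two approaches in one common parameterization: with cv the coefficient of variation of cluster sizes, the design effect 1 + ((cv² + 1)·m̄ − 1)ρ reduces to the average-cluster-size formula when cv = 0. The notation, and possibly the exact adjustment, differs from the paper's m_a and m_p.

```python
import math
from scipy.stats import norm

def clusters_needed(p1, p0, mbar, rho, cv=0.0, alpha=0.05, power=0.8):
    """Clusters per arm for a binary outcome. cv = coefficient of
    variation of cluster sizes; cv = 0 gives the average-cluster-size
    formula, cv > 0 a common unequal-size adjustment. A sketch in the
    spirit of m_a vs m_p, not the paper's exact formulas."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ind = z**2 * (p1 * (1 - p1) + p0 * (1 - p0)) / (p1 - p0) ** 2
    deff = 1 + ((cv**2 + 1) * mbar - 1) * rho   # design effect
    return math.ceil(n_ind * deff / mbar)

print(clusters_needed(0.4, 0.3, mbar=20, rho=0.02))          # equal sizes
print(clusters_needed(0.4, 0.3, mbar=20, rho=0.02, cv=0.6))  # unequal
```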

10.
Extra-dispersion (overdispersion or underdispersion) is a common phenomenon in practice, occurring when the variance of count data differs from that of a Poisson model. It can arise when the data come from different subpopulations or when the assumption of independence is violated. This paper develops a procedure for testing the equality of the means of several groups of counts when the extra-dispersions among the treatment groups are unequal, based on counts adjusted using the concept of design and size effects employed by Rao and Scott [Rao, J.N.K., Scott, A.J., 1999. A simple method for analyzing overdispersion in clustered Poisson data. Statistics in Medicine 18, 1373-1385]. We also obtain score-type test statistics based on quasi-likelihoods using the mean-variance structure of the negative binomial model, and we study their properties and performance characteristics. The simulation results indicate that the statistic based on the adjusted count data, which has a very simple form and does not require estimates of the extra-dispersion parameters, performs best among all the statistics considered in this paper. Finally, the proposed test statistic and the score-type statistic based on double-extended quasi-likelihood are illustrated with an analysis of fetal implants in mice arising from a developmental toxicity study.
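A minimal sketch of the adjusted-count idea follows: each group's total count is deflated by an estimated design effect (sample variance over sample mean), and the usual Poisson chi-square for equality of means is applied to the adjusted data. This is an illustration of the concept, not the paper's exact statistic.

```python
import numpy as np
from scipy.stats import chi2

def rao_scott_poisson_test(groups):
    """Test equality of mean counts across groups with unequal
    extra-dispersion via Rao-Scott-style adjusted counts: deflate each
    group's total by its estimated design effect, then apply a standard
    Poisson chi-square. A sketch of the idea only."""
    stats = []
    for y in groups:                       # y: array of counts per group
        y = np.asarray(y, float)
        d = max(y.var(ddof=1) / y.mean(), 1e-8)   # design effect
        stats.append((y.sum() / d, len(y) / d))   # adjusted total, size
    tot, size = (np.array(v) for v in zip(*stats))
    lam0 = tot.sum() / size.sum()                 # pooled rate under H0
    x2 = np.sum((tot - size * lam0) ** 2 / (size * lam0))
    return x2, chi2.sf(x2, df=len(groups) - 1)

rng = np.random.default_rng(0)
print(rao_scott_poisson_test([rng.poisson(5, 30), rng.poisson(6, 30)]))
```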

11.
Designing clinical trials with a time-to-event endpoint is a major and difficult task: one must compute the number of events and, in a second step, the required number of patients. Several commercial software packages can compute sample sizes for clinical trials with sequential designs and time-to-event endpoints, but few R functions are available. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on to the gsDesign package, which in a single run calculates the number of events and the required sample size, as well as the boundaries and corresponding p-values for a group sequential design. The use of plansurvct.func is illustrated with several examples and validated against East software.
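For orientation, the fixed-design number of events that such a program starts from is given by Schoenfeld's formula; the sketch below computes it for an assumed hazard ratio. It is not a port of plansurvct.func, which additionally handles the group sequential boundaries.

```python
import math
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.8, ratio=1.0):
    """Required number of events for a log-rank comparison
    (Schoenfeld's formula) under an assumed hazard ratio `hr`; the
    fixed-design quantity that a group sequential design then inflates.
    A sketch, not the R function described above."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    p = ratio / (1 + ratio)                 # allocation proportion
    return math.ceil(z**2 / (p * (1 - p) * math.log(hr) ** 2))

print(schoenfeld_events(0.7))               # ~247 events for HR = 0.7
```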

12.
When two interventions are randomized to multiple sub-clusters within a whole cluster, the within-sub-cluster (intra-cluster) and between-sub-cluster (inter-cluster) correlations must be accounted for to produce valid analyses of the effect of the interventions. With the growing interest in copulas and their applications in statistical research, we demonstrate through applications how copula functions may be used to account for the correlation among responses across sub-clusters. Copulas with asymmetric dependence properties may prove useful for modeling the relationship between random functions, especially in the clinical, health, and environmental sciences, where response data are generally skewed. Copulas can be used to study scale-free measures of dependence, and they serve as a starting point for constructing families of bivariate distributions with a view to simulation. The core contribution of this paper is an alternative approach that uses a copula to estimate the inter-cluster correlation and thereby accurately estimate the treatment effect when the outcome variable is measured on a dichotomous scale. Two data sets are used to illustrate the proposed methodology.
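A minimal sketch of the copula idea for binary outcomes follows: a Gaussian copula induces within-sub-cluster dependence through a latent exchangeable multivariate normal that is then dichotomized. The latent correlation value and cut-point below are hypothetical, and the paper also considers copulas with asymmetric dependence.

```python
import numpy as np
from scipy.stats import norm

def correlated_binary_clusters(n_clusters, m, p, rho_latent, seed=0):
    """Simulate clustered binary outcomes via a Gaussian copula: latent
    exchangeable multivariate normal within each sub-cluster,
    dichotomized at the p-th quantile. A sketch of the copula idea."""
    rng = np.random.default_rng(seed)
    cut = norm.ppf(p)                       # marginal P(Y = 1) = p
    shared = rng.standard_normal((n_clusters, 1))
    unique = rng.standard_normal((n_clusters, m))
    latent = (np.sqrt(rho_latent) * shared
              + np.sqrt(1 - rho_latent) * unique)
    return (latent < cut).astype(int)       # n_clusters x m matrix

y = correlated_binary_clusters(100, 5, p=0.3, rho_latent=0.2)
print(y.mean(), y.shape)
```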

13.
The evaluation of surrogate endpoints is generally considered to have been first studied by Prentice, who presented a definition of a surrogate as well as a set of criteria. After some drawbacks in Prentice's approach were noted, these criteria were supplemented with the so-called proportion explained. Subsequently, the evaluation exercise was framed within a meta-analytic setting, thereby overcoming difficulties that necessarily surround evaluation efforts based on a single trial. The meta-analytic approach for continuous outcomes is briefly reviewed, and its advantages and problems are highlighted by means of two case studies, one in schizophrenia and one in ophthalmology, and a simulation study. One critical issue for the broad adoption of methodology such as that presented here is the availability of flexible implementations in standard statistical software; generically applicable SAS macros and R functions are developed and made available to the reader.
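At its simplest, trial-level surrogacy in the meta-analytic framework reduces to asking how well per-trial treatment effects on the surrogate predict those on the true endpoint. The sketch below computes this R² from hypothetical per-trial effect estimates, ignoring estimation error and the full mixed-model machinery behind the SAS macros and R functions mentioned above.

```python
import numpy as np

def trial_level_r2(alpha_i, beta_i):
    """Trial-level surrogacy, reduced to its simplest form: the squared
    correlation between per-trial treatment effects on the surrogate
    (alpha_i) and on the true endpoint (beta_i). A sketch ignoring
    estimation error in the per-trial effects."""
    r = np.corrcoef(np.asarray(alpha_i), np.asarray(beta_i))[0, 1]
    return r**2

# hypothetical per-trial effect estimates from a meta-analysis
print(trial_level_r2([0.1, 0.3, 0.2, 0.5], [0.15, 0.35, 0.18, 0.55]))
```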

14.
Scanning spherical lens antennas are typically constructed from concentric dielectric shells with an external primary feed, and a cover layer may also be used. A modal expansion method is reported that derives the scattering matrix of this structure and accounts for both internal and external dielectric layers, allowing the radome to be designed in an integrated fashion. Measured radiation patterns at 28 GHz are reported for a 35-dBi antenna. © 2007 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2007.

15.
We define four different properties of relational databases related to the notion of homogeneity in classical model theory. The central question in their definition is, for any given database, to determine the minimum integer k such that whenever two k-tuples satisfy the same properties expressible in first-order logic with up to k variables (FO^k), there is an automorphism mapping each of these k-tuples onto the other. We study these four properties as a means to increase the computational power of subclasses of the reflective relational machines (RRMs) of bounded variable complexity, which were introduced by S. Abiteboul, C. Papadimitriou, and V. Vianu and are known to be incomplete. To this end, we first give a semantic characterization of the subclasses of total RRMs with variable complexity k (RRM^k) for every natural number k. This leads to the definition of classes of queries denoted QCQ^k, which we believe to be of interest in their own right. For each k>0, we define QCQ^k as the total queries in the class CQ of computable queries that preserve realization of properties expressible in FO^k; the nature of these classes is implicit in the work of S. Abiteboul, M. Vardi, and V. Vianu. We prove QCQ^k = total(RRM^k) for every k>0. We also prove that these classes form a strict hierarchy within a strict subclass of total(CQ), a hierarchy orthogonal to the usual classification of computable queries into time-space complexity classes. We prove that the computational power of RRM^k machines is much greater when working with classes of databases that are homogeneous with respect to three of the properties we define; as to the fourth, we prove that the computational power of RRMs with sublinear variable complexity also increases when working on databases satisfying that property. The strongest notion, pairwise k-homogeneity, allows RRM^k machines to achieve completeness.

16.
In social choice voting, majorities based on difference of votes and their extension, majorities based on difference in support, implement respectively the crisp preference values (votes) and the intensities of preference provided by voters when comparing pairs of alternatives. The aim of these rules is to declare which alternative is socially preferred; to do so, they require the winning alternative to exceed the losing alternative's social valuation by a certain positive difference. This paper introduces a new aggregation rule that extends majorities based on difference of votes from the context of crisp preferences to the framework of linguistic preferences. Under linguistic majorities with difference in support, voters express their intensities of preference between pairs of alternatives using linguistic labels, and one alternative defeats another when a specific support, fixed before the election process, is reached. There are two main methodologies for representing linguistic preferences: a cardinal one based on fuzzy sets and an ordinal one based on 2-tuples. Linguistic majorities with difference in support are formalized in both representation settings, and conditions are given under which fuzzy linguistic majorities and 2-tuple linguistic majorities are mathematically isomorphic. Finally, linguistic majorities based on difference in support are shown to satisfy relevant normative properties: anonymity, neutrality, monotonicity, weak Pareto, and cancellativeness.
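The crisp rule that is being extended can be stated in a few lines: alternative x socially defeats y only when its support exceeds y's by a pre-fixed threshold. The sketch below encodes this crisp version with hypothetical support values; the paper's contribution is the linguistic (fuzzy and 2-tuple) generalization.

```python
def difference_majority(support_x, support_y, threshold):
    """Majority based on difference in support: x socially defeats y
    when its aggregate support exceeds y's by at least `threshold`,
    fixed before the election. A crisp sketch of the rule the paper
    extends to linguistic labels."""
    diff = support_x - support_y
    if diff >= threshold:
        return "x wins"
    if diff <= -threshold:
        return "y wins"
    return "tie"

print(difference_majority(0.7, 0.4, threshold=0.2))   # x wins
```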

17.
18.
Human genetic linkage studies test whether disease genes are linked to genetic markers based on family genetic data. Such studies may require many years of recruiting informative families and large amounts of funding. One way to reduce the required sample size is to use sequential testing procedures. In this paper, we investigate two group sequential tests for homogeneity in binomial mixture models, which are commonly used in genetic linkage analysis. We conduct Monte Carlo simulations to examine the performance of the group sequential procedures. The results show that the proposed procedures can, on average, save substantial sample size and detect linkage with almost the same power as their nonsequential counterparts.
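To convey why sequential monitoring saves sample size, the sketch below simulates a toy group sequential test with a constant (Pocock-style) boundary at each of three looks and reports the empirical power and average sample size. It uses a simple binomial test, not the paper's mixture-model likelihood ratio statistic.

```python
import numpy as np

def group_sequential_demo(k=3, n_per_stage=50, c=2.289, reps=5000,
                          p=0.3, seed=0):
    """Toy group sequential test of H0: p = 0.5, with a constant
    (Pocock-style) boundary c at each of k looks; returns empirical
    power and average sample size. An illustration of the savings from
    sequential monitoring, not the paper's procedure."""
    rng = np.random.default_rng(seed)
    n_used, rejected = [], 0
    for _ in range(reps):
        x_total, n_total = 0, 0
        for _stage in range(k):
            x_total += rng.binomial(n_per_stage, p)
            n_total += n_per_stage
            z = (x_total - 0.5 * n_total) / np.sqrt(0.25 * n_total)
            if abs(z) > c:                  # stop early and reject
                rejected += 1
                break
        n_used.append(n_total)
    return rejected / reps, np.mean(n_used)

print(group_sequential_demo())   # power and mean N under p = 0.3
```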

19.
An accelerated testing method for safety-critical software under test-resource constraints
Based on a Markov-chain usage model, a heuristic accelerated testing method is proposed for safety-critical software under test-resource constraints. The method applies a recent stochastic optimization technique, the cross-entropy method, with the objective of minimizing the expected loss from software failures after release: guided by the failure-risk loss, it revises the operational profile and automatically generates the test data set. Experimental results show that the method effectively reduces software failure risk and improves testing efficiency, making it a fast and effective approach to accelerated testing.
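The optimization engine named in the abstract, the cross-entropy method, iteratively samples from a parameterized distribution, keeps an elite fraction of samples, and re-fits the distribution to them. The sketch below applies this loop to tilt a categorical operational profile toward high-loss operations; the loss oracle and smoothing constants are hypothetical stand-ins for the paper's failure-risk objective.

```python
import numpy as np

def cross_entropy_profile(loss, n_ops, iters=50, n_samples=200,
                          elite_frac=0.1, seed=0):
    """Generic cross-entropy sketch: tilt a categorical operational
    profile toward operations with the highest loss. `loss(op)` is a
    hypothetical per-operation loss oracle, standing in for the
    paper's post-release failure-risk loss."""
    rng = np.random.default_rng(seed)
    probs = np.full(n_ops, 1.0 / n_ops)          # initial profile
    n_elite = int(elite_frac * n_samples)
    for _ in range(iters):
        ops = rng.choice(n_ops, size=n_samples, p=probs)
        scores = np.array([loss(op) for op in ops])
        elite = ops[np.argsort(scores)[-n_elite:]]    # highest-loss ops
        counts = np.bincount(elite, minlength=n_ops)
        probs = 0.7 * probs + 0.3 * counts / n_elite  # smoothed update
    return probs

# hypothetical loss: operations far from op 7 are riskiest
print(cross_entropy_profile(lambda op: (op - 7) ** 2, n_ops=10).round(2))
```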

20.
In this paper we perform a computationally intensive empirical investigation of interday homogeneity in the intraday rate of trading for six NYSE-traded stocks. For each stock, we test the homogeneity of the k-th trading day against the remainder of the sample using a likelihood ratio test, for each of the forty trading days in the sample. At the α = 0.01 level, we find that about one-half of all trading days considered are homogeneous with the remainder of the sample, although this proportion varies across individual samples.
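A simplified version of such a test: treat each day's trade count as Poisson and compare, by likelihood ratio, a model giving day k its own rate against a common rate for all days. The sketch below implements this reduced test on simulated counts; the paper's test concerns the full intraday rate profile rather than daily totals.

```python
import numpy as np
from scipy.stats import chi2

def lr_test_day_k(counts, k):
    """Likelihood ratio test that day k's Poisson trading rate equals
    the common rate of the remaining days -- a simplified sketch; the
    paper tests homogeneity of the intraday rate profile."""
    counts = np.asarray(counts, float)
    xk, rest = counts[k], np.delete(counts, k)
    lam_k, lam_r, lam0 = xk, rest.mean(), counts.mean()

    def ll(x, lam):                    # Poisson log-lik, dropping x! term
        return np.sum(x * np.log(lam) - lam)

    lr = 2 * (ll(np.array([xk]), lam_k) + ll(rest, lam_r)
              - ll(counts, lam0))
    return lr, chi2.sf(lr, df=1)

rng = np.random.default_rng(1)
days = rng.poisson(1000, 40)
days[5] = 1200                         # make day 5 inhomogeneous
print(lr_test_day_k(days, 5))
```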
