Similar Documents
20 similar documents found.
1.
Mixed-effects linear regression models have become increasingly widely used for the analysis of repeatedly measured outcomes in clinical trials over the past decade. Formulae and tables exist for estimating the sample sizes required to detect the main effects of treatment and the treatment-by-time interactions for those models. A formula is proposed to estimate the sample size required to detect an interaction between two binary variables in a factorial design with repeated measures of a continuous outcome. The formula is based, in part, on the fact that the variance of an interaction is fourfold that of the main effect. A simulation study examines the statistical power associated with the resulting sample sizes in a mixed-effects linear regression model with a random intercept. The simulation varies the magnitude (Δ) of the standardized main effects and interactions, the intraclass correlation coefficient (ρ), and the number (k) of repeated measures within subject. The results of the simulation study verify that the sample size required to detect a 2×2 interaction in a mixed-effects linear regression model is four times that required to detect a main effect of the same magnitude.
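As a rough planning aid, the fourfold rule from this abstract can be wired into a standard two-sample normal approximation. The sketch below is illustrative only: it assumes compound symmetry (a random intercept inducing intraclass correlation ρ across the k repeated measures), a test based on subject-level means, and hypothetical parameter values; it is not the paper's formula.

```python
from scipy.stats import norm

def n_per_group_main(delta, rho, k, alpha=0.05, power=0.80):
    """Per-group n to detect a standardized main effect `delta` on the
    average of k repeated measures with intraclass correlation rho
    (compound symmetry), via the usual two-sample normal approximation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_mean = (1 + (k - 1) * rho) / k   # variance of a subject's mean outcome
    return 2 * z**2 * var_mean / delta**2

delta, rho, k = 0.5, 0.4, 4              # hypothetical design values
n_main = n_per_group_main(delta, rho, k)
# The variance of the 2x2 interaction contrast is fourfold that of a
# main-effect contrast, so the required n scales by four as well.
n_interaction = 4 * n_main
print(round(n_main), round(n_interaction))
```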

2.
In comparing the mean counts of two independent samples, some practitioners use the t-test or the Wilcoxon rank sum test, while others use methods based on a Poisson model. It is not uncommon to encounter count data that exhibit overdispersion, where the Poisson model is no longer appropriate. This paper deals with methods for overdispersed data using the negative binomial distribution that results from a Poisson-Gamma mixture. We investigate the small-sample properties of the likelihood-based tests and compare their performance to that of the t-test and the Wilcoxon test. We also illustrate how these procedures may be used to compute power and sample sizes when designing studies whose response variables are overdispersed count data. Although the methods are based on inferences about two independent samples, the sample size calculations may also be applied to problems comparing more than two independent samples. We show that there is a gain in efficiency when using the likelihood-based methods rather than the t-test or the Wilcoxon test. In studies where each observation is very costly, the ability to derive smaller sample size estimates with the appropriate tests is not only statistically but also financially appealing.
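A minimal simulation in this spirit, assuming the Poisson-Gamma (NB2) mixture named above: it estimates the power of a likelihood-ratio test from a negative binomial regression against the t-test and the Wilcoxon test. The group means, dispersion, and sample size are hypothetical, and this is not the paper's exact procedure.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)

def nb_sample(n, mean, disp):
    # Poisson-Gamma mixture: Gamma-distributed rates, then Poisson counts
    lam = rng.gamma(shape=1 / disp, scale=mean * disp, size=n)
    return rng.poisson(lam)

def lrt_pvalue(y0, y1):
    """Likelihood-ratio test of a group effect in an NB2 regression."""
    y = np.concatenate([y0, y1])
    x = np.concatenate([np.zeros_like(y0), np.ones_like(y1)])
    full = sm.NegativeBinomial(y, sm.add_constant(x.astype(float))).fit(disp=0)
    null = sm.NegativeBinomial(y, np.ones((len(y), 1))).fit(disp=0)
    return stats.chi2.sf(2 * (full.llf - null.llf), df=1)

def power(n, mean0=2.0, ratio=1.5, disp=0.5, nsim=200, alpha=0.05):
    hits = {"lrt": 0, "t": 0, "wilcoxon": 0}
    for _ in range(nsim):
        y0, y1 = nb_sample(n, mean0, disp), nb_sample(n, mean0 * ratio, disp)
        hits["lrt"] += lrt_pvalue(y0, y1) < alpha
        hits["t"] += stats.ttest_ind(y0, y1).pvalue < alpha
        hits["wilcoxon"] += stats.mannwhitneyu(y0, y1).pvalue < alpha
    return {k: v / nsim for k, v in hits.items()}

print(power(n=50))
```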

3.
OBJECTIVE: The performance costs associated with cell phone use while driving were assessed meta-analytically using standardized measures of effect size along five dimensions. BACKGROUND: There have been many studies on the impact of cell phone use on driving, showing some mixed findings. METHODS: Twenty-three studies (contributing 47 analysis entries) met the appropriate conditions for the meta-analysis. The statistical results from each of these studies were converted into effect sizes and combined in the meta-analysis. RESULTS: Overall, there were clear costs to driving performance when drivers were engaged in cell phone conversations. However, subsequent analyses indicated that these costs were borne primarily by reaction time tasks, with far smaller costs associated with tracking (lane-keeping) performance. Hands-free and handheld phones revealed similar patterns of results for both measures of performance. Conversation tasks tended to show greater costs than did information-processing tasks (e.g., word games). There was a similar pattern of results for passenger and remote (cell phone) conversations. Finally, there were some small differences between simulator and field studies, though both exhibited costs in performance for cell phone use. CONCLUSION: We suggest that (a) there are significant costs to driver reactions to external hazards or events associated with cell phone use, (b) hands-free cell phones do not eliminate or substantially reduce these costs, and (c) different research methodologies or performance measures may underestimate these costs. APPLICATION: Potential applications of this research include the assessment of performance costs attributable to different types of cell phones, cell phone conversations, experimental measures, or methodologies.
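For readers unfamiliar with the mechanics, the core combination step in such a meta-analysis is inverse-variance weighting of per-study effect sizes. The sketch below uses hypothetical placeholder numbers, not values from the 23 studies.

```python
import numpy as np

def fixed_effect_meta(effects, variances):
    """Inverse-variance-weighted mean effect size and its standard error:
    the standard fixed-effect meta-analytic combination."""
    w = 1.0 / np.asarray(variances)
    d = np.asarray(effects)
    pooled = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# Hypothetical per-study standardized costs of phone use on reaction time
effects = [0.6, 0.4, 0.7, 0.5]
variances = [0.02, 0.05, 0.04, 0.03]
d, se = fixed_effect_meta(effects, variances)
print(f"pooled d = {d:.2f} +/- {1.96 * se:.2f}")
```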

4.
Repeated measurements arising from longitudinal studies occur frequently in applied research. Methods to calculate power in the context of repeated measures are available for experimental settings where the covariate of interest is a discrete treatment indicator. However, no closed-form expression exists to calculate power for generalized linear models with non-zero within-cluster correlation, which are common in epidemiological and observational studies where the covariate of interest varies over time, is often measured on a continuous scale, and where researchers control for several potential confounders. We describe a Monte Carlo simulation approach for calculating power and illustrate its application in two models frequently encountered in practice, the normal linear mixed model and the logistic regression model, both with repeated measurements and non-zero within-cluster correlation. This approach can be used to calculate the effect on power of changing various simulation conditions controlled by the researcher, such as sample size, within-cluster correlation structure, smallest meaningful difference to detect, and distributional assumptions.
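A stripped-down sketch of this simulation approach for the normal linear mixed model case, assuming a random intercept, a time-varying continuous covariate, and hypothetical variance components; confounders and alternative correlation structures are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

def simulate_power(n_subj=100, k=4, beta=0.3, sd_b=1.0, sd_e=1.0,
                   nsim=200, alpha=0.05):
    """Monte Carlo power for a time-varying continuous covariate in a
    random-intercept linear mixed model."""
    rejections = 0
    for _ in range(nsim):
        subj = np.repeat(np.arange(n_subj), k)
        x = rng.normal(size=n_subj * k)           # time-varying covariate
        b = rng.normal(scale=sd_b, size=n_subj)   # random intercepts
        y = beta * x + b[subj] + rng.normal(scale=sd_e, size=n_subj * k)
        df = pd.DataFrame({"y": y, "x": x, "subj": subj})
        fit = smf.mixedlm("y ~ x", df, groups=df["subj"]).fit(reml=True)
        rejections += fit.pvalues["x"] < alpha
    return rejections / nsim

print(simulate_power())
```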

5.
By definition, CSCL research deals with data on individuals nested in groups and with the influence of a specific learning setting on the collaborative process of learning. Most well-established statistical methods cannot analyze such nested data adequately. This article describes the problems that arise when standard methods are applied and introduces multilevel modelling (MLM) as an alternative and adequate statistical approach in CSCL research. MLM enables testing interactional effects of predictor variables varying within groups (for example, the activity of group members in a chat) and predictors varying between groups (for example, the group homogeneity created by group members' prior knowledge). It thus allows taking into account that an instruction, tool, or learning environment has different but systematic effects on the members within groups on the one hand and on the groups themselves on the other. The underlying statistical model of MLM is described using an example from CSCL. Attention is drawn to the fact that MLM requires large sample sizes, which most CSCL research does not provide, and analyses that remain useful under this constraint are proposed.
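A minimal illustration of such a two-level model, using statsmodels and a fabricated CSCL-style data set with a within-group predictor (chat activity), a between-group predictor (group homogeneity), and their cross-level interaction; all variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, size = 40, 5
group = np.repeat(np.arange(n_groups), size)
activity = rng.normal(size=n_groups * size)        # varies within groups
homogeneity = rng.normal(size=n_groups)[group]     # constant within a group
u = rng.normal(scale=0.8, size=n_groups)[group]    # group-level residual
learning = (0.4 * activity + 0.3 * homogeneity + u
            + rng.normal(size=n_groups * size))

df = pd.DataFrame({"learning": learning, "activity": activity,
                   "homogeneity": homogeneity, "group": group})
# Random-intercept model with predictors at both levels plus the
# cross-level interaction between activity and homogeneity.
model = smf.mixedlm("learning ~ activity * homogeneity", df,
                    groups=df["group"])
print(model.fit().summary())
```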

6.
Sample size and power calculations in repeated measurement analysis
Controlled clinical trials in neuropsychopharmacology, as in numerous other clinical research domains, tend to employ a conventional parallel-groups design with repeated measurements. The hypothesis of primary interest in these relatively short-term, double-blind trials concerns the difference between patterns or magnitudes of change from baseline. A simple two-stage approach to the analysis of such data involves calculating an index or coefficient of change in stage 1 and testing the significance of the difference between group means on the derived measure of change in stage 2. This article introduces formulas and a computer program for sample size and/or power calculations for such two-stage analyses involving each of three definitions of change, with or without baseline scores entered as a covariate, in the presence of homogeneous or heterogeneous (autoregressive) patterns of correlation among the repeated measurements. Empirical adjustments of sample size for projected dropout rates are also provided in the computer program.
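The simplest of the three change definitions, the raw change score, admits a compact sample size formula. The sketch below assumes equal variances at both time points, correlation ρ between baseline and endpoint, and a crude dropout inflation; it does not reproduce the article's covariate-adjusted or autoregressive cases.

```python
from scipy.stats import norm

def n_per_group_change(delta, sigma=1.0, rho=0.5, alpha=0.05, power=0.80,
                       dropout=0.0):
    """Per-group n for a two-sample comparison of simple change scores
    (endpoint minus baseline), optionally inflated for projected dropout."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_change = 2 * sigma**2 * (1 - rho)      # Var(post - pre)
    n = 2 * z**2 * var_change / delta**2
    return n / (1 - dropout)                   # crude dropout adjustment

print(round(n_per_group_change(delta=0.5, rho=0.6, dropout=0.15)))
```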

7.
Lower bounds on sample size in structural equation modeling
Computationally intensive structural equation modeling (SEM) approaches have been in development over much of the 20th century, initiated by the seminal work of Sewall Wright. To this day, sample size requirements remain a vexing question in SEM-based studies. The information demanded for structural model estimation grows with the number of potential combinations of latent variables, while the information supplied grows with the number of measured parameters times the number of observations in the sample; both relationships are non-linear. This alone implies that the requisite sample size is not a linear function solely of indicator count, even though such heuristics are widely invoked to justify SEM sample sizes. This paper develops two lower bounds on sample size in SEM: the first as a function of the ratio of indicator variables to latent variables, and the second as a function of minimum effect, power, and significance. The algorithm is applied in a meta-study of research published in five of the top MIS journals. The study shows a systematic bias towards choosing sample sizes that are significantly too small: actual sample sizes averaged only 50% of the minimum needed to draw the conclusions the studies claimed, and overall, 80% of the research articles in the meta-study drew conclusions from insufficient samples. Lacking accurate sample size information, researchers are inclined to economize on sample collection with inadequate samples that hurt the credibility of research conclusions. Guidelines are provided for applying the algorithms developed in this study, and companion software encapsulating the paper's formulae is made available for download.
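The second, effect-size-driven bound can be illustrated with a standard Fisher z-based sample size formula for detecting a minimum correlation (or standardized path) at a given power and significance level. This is a generic stand-in for that style of bound, not the paper's exact algorithm.

```python
import math
from scipy.stats import norm

def n_lower_bound_effect(r_min, alpha=0.05, power=0.80):
    """Sample size needed to detect a minimum absolute correlation/path of
    size r_min at the given significance and power, via Fisher's
    z-transformation (a generic effect-size lower bound)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil((z / math.atanh(r_min)) ** 2 + 3)

for r in (0.1, 0.2, 0.3):
    print(r, n_lower_bound_effect(r))
```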

8.
9.
Simulation studies of active queue management (AQM) algorithms commonly suffer from issues that are easy to overlook, such as unreasonable buffer-capacity settings and a lack of statistical analysis of simulation results. For some of these issues this paper offers recommendations; for others, it introduces the treatments or improved methods adopted at the forefront of the field and recommends them to researchers in the area. These suggestions are intended to help the academic community establish research norms for the simulation study of AQM algorithms, making simulation results more reliable and providing sounder evidence and a firmer basis for theoretical analysis and further research.

10.
Consider clustered matched-pair studies for non-inferiority, where clusters are independent but units within a cluster are correlated. An inexpensive new procedure and the expensive standard one are applied to each unit, and outcomes are binary responses. Appropriate statistics for testing non-inferiority of a new procedure have been developed recently by several investigators. In this paper, we investigate the power and sample size requirements of the clustered matched-pair study for non-inferiority. The power of a test is related primarily to the number of clusters; the effect of cluster size on power is secondary. The efficiency of a clustered matched-pair design is inversely related to the intra-class correlation coefficient within a cluster. We present an explicit formula for obtaining the number of clusters for a given cluster size, and the cluster size for a given number of clusters, for a specific power. We also provide alternative sample size calculations when available information regarding the parameters is limited. The formulas can be useful in designing a clustered matched-pair study for non-inferiority. An example of determining the sample size to establish non-inferiority in a clustered matched-pair study is illustrated.
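One common way to structure such a calculation: size an unclustered matched-pair design by normal approximation, then inflate by the design effect 1 + (m − 1)ρ for clusters of m pairs. The sketch below follows that pattern with hypothetical discordant-pair probabilities; it is a planning sketch, not the paper's exact formula.

```python
import math
from scipy.stats import norm

def clusters_needed(p10, p01, margin, m, icc, alpha=0.05, power=0.80):
    """Approximate number of clusters for a clustered matched-pair
    non-inferiority test of two proportions, via a design-effect
    inflation of the unclustered paired-design sample size."""
    d = p10 - p01                       # true difference, new - standard
    z = norm.ppf(1 - alpha) + norm.ppf(power)   # one-sided test
    n_pairs = z**2 * (p10 + p01 - d**2) / (d + margin)**2
    deff = 1 + (m - 1) * icc            # design effect for m pairs/cluster
    return math.ceil(n_pairs * deff / m)

print(clusters_needed(p10=0.10, p01=0.10, margin=0.05, m=4, icc=0.2))
```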

11.
Power and sample size determination has been a challenging issue for multiple testing procedures, especially stepwise procedures, mainly because (1) there are several power definitions, (2) power calculation usually requires multivariate integration involving order statistics, and (3) expansion of these power expressions in terms of ordinary statistics, instead of order statistics, is generally a difficult task. Traditionally, power and sample size calculations rely on either simulations or some recursive algorithm; neither is straightforward or computationally economical. In this paper we develop explicit formulas for the minimal power and r-power of stepwise procedures, as well as the complete power of single-step procedures, for exchangeable and non-exchangeable bivariate and trivariate test statistics. With the explicit power expressions, we can directly calculate the desired power given the sample size and correlation. Numerical examples are presented to illustrate the relationship among power, sample size, and correlation.
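For the bivariate single-step case, "complete power" (rejecting both false nulls) reduces to a bivariate normal probability and needs no simulation. A sketch, assuming two one-sided correlated z statistics tested against a Bonferroni critical value, with hypothetical noncentrality means:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def complete_power_bonferroni(mu, rho, alpha=0.05):
    """Complete power (reject both false nulls) of a single-step Bonferroni
    procedure for two one-sided correlated z statistics with noncentrality
    means mu = (mu1, mu2) and correlation rho."""
    c = norm.ppf(1 - alpha / 2)          # Bonferroni critical value, m = 2
    cov = np.array([[1.0, rho], [rho, 1.0]])
    mvn = multivariate_normal(mean=np.asarray(mu), cov=cov)
    p1 = norm.cdf(c, loc=mu[0])          # P(Z1 <= c)
    p2 = norm.cdf(c, loc=mu[1])          # P(Z2 <= c)
    p12 = mvn.cdf(np.array([c, c]))      # P(Z1 <= c, Z2 <= c)
    return 1 - p1 - p2 + p12             # P(Z1 > c, Z2 > c)

# Complete power rises with correlation for fixed noncentrality:
for rho in (0.0, 0.3, 0.6, 0.9):
    print(rho, round(complete_power_bonferroni((3.0, 3.0), rho), 3))
```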

12.
Statistical power is an inherent part of empirical studies that employ significance testing and is essential for the planning of studies, for the interpretation of study results, and for the validity of study conclusions. This paper reports a quantitative assessment of the statistical power of empirical software engineering research based on the 103 papers on controlled experiments (of a total of 5,453 papers) published in nine major software engineering journals and three conference proceedings in the decade 1993–2002. The results show that the statistical power of software engineering experiments falls substantially below accepted norms as well as the levels found in the related discipline of information systems research. Given this study's findings, additional attention must be directed to the adequacy of sample sizes and research designs to ensure acceptable levels of statistical power. Furthermore, the current reporting of significance tests should be enhanced by also reporting effect sizes and confidence intervals.
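The norms at issue are easy to check with off-the-shelf tools. For example, the achieved power of a hypothetical two-group experiment with 15 subjects per group and a medium effect (d = 0.5), and the per-group sample size the conventional 0.80 level would require:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical small software engineering experiment: n = 15 per group,
# medium standardized effect d = 0.5, two-sided alpha = 0.05.
analysis = TTestIndPower()
achieved = analysis.power(effect_size=0.5, nobs1=15, alpha=0.05)
print(f"achieved power: {achieved:.2f}")        # well below the 0.80 norm

# Per-group sample size needed to reach the conventional 0.80 level:
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for 0.80 power: {n_needed:.0f}")
```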

13.
14.
The main objective of this study was to develop a simulation program to determine the sample size for a clinical study to confirm a genetic-disease association observed in a retrospective exploratory study. The effect of misclassification of a binary response variable on power is also investigated. A general expression for the magnitude of the decrease in statistical power due to misclassification is obtained based on the Pitman asymptotic relative efficiency. The simulation program provides an estimate of the exact power when misclassification exists. Running the program under different parameter settings revealed that the effect of even low misclassification rates is serious; response misclassification should therefore be taken into consideration when determining the sample size. The program can be used on the Internet.
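A small Monte Carlo sketch of the phenomenon, assuming a chi-square test of a 2×2 genotype-by-response table with hypothetical prevalence, effect, and misclassification rates; it is not the paper's program, but it exhibits the same power erosion:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)

def power(n, p_geno=0.3, p0=0.2, p1=0.35, fp=0.0, fn=0.0,
          nsim=2000, alpha=0.05):
    """Monte Carlo power of a chi-square test for a genotype-disease
    association when the binary response is misclassified with false
    positive rate fp and false negative rate fn."""
    hits = 0
    for _ in range(nsim):
        g = rng.random(n) < p_geno                   # genotype carrier
        y = rng.random(n) < np.where(g, p1, p0)      # true response
        flip = np.where(y, rng.random(n) < fn, rng.random(n) < fp)
        y_obs = y ^ flip                             # observed response
        table = [[np.sum(g & y_obs), np.sum(g & ~y_obs)],
                 [np.sum(~g & y_obs), np.sum(~g & ~y_obs)]]
        hits += chi2_contingency(table)[1] < alpha
    return hits / nsim

print("no misclassification:", power(300))
print("5% misclassification:", power(300, fp=0.05, fn=0.05))
```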

15.
While the active shape model (ASM) has been increasingly adopted in the medical domain, several issues need to be addressed before it is applicable in practice; among them, the small sample size problem and how to represent the variation of cluttered surroundings are two of the challenges. In this paper, to overcome these problems, we propose a novel multi-resolution statistical deformable model and the associated techniques for the reconstruction of soft-tissue organs such as livers. To address the small sample size problem, we define a multi-resolution integrated model for soft-tissue organs, called MISTO, that is able to capture the most significant deformations from a small training set as well as to generate representative variation modes of the organ shapes. To deal with the complex surroundings of the model surface or landmark points in the underlying medical images during model deformation, we propose to apply multi-resolution appearance models, which allow the surrounding visual context of the model surface points to be learnt and characterized automatically from the training samples. By combining the powerful shape models and the resulting context constraints, the object segmentation and reconstruction process can be carried out very robustly. Furthermore, to avoid local minima during model optimization, we develop an adaptive deformation strategy such that the more stable parts of the surface are moved prior to the rest of the model surface. The experimental and validation results verify that our proposed approaches can be successfully and robustly applied to the reconstruction of soft-tissue organs such as the human liver. The major contributions of our approaches are that we extend the traditional ASM to address open problems associated with reconstructing significantly deformable three-dimensional anatomies in cluttered surroundings, and that we propose effective ways to formulate perceptual knowledge of the anatomies and make use of it in the process of model construction and deformation for medical reconstruction.
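The statistical core that the proposed model extends is the classical point-distribution model: PCA on aligned training shapes, retaining the leading modes and clamping mode weights to plausible ranges. A toy sketch on random stand-in data (the multi-resolution and appearance-model machinery of the paper is not reproduced):

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.98):
    """Build a point-distribution (statistical shape) model from aligned
    training shapes: mean shape plus principal modes of variation.
    `shapes` is (n_samples, n_points * 3) for 3D landmark sets."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    var = s**2 / (len(shapes) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean, Vt[:k], np.sqrt(var[:k])

def synthesize(mean, modes, sd, b):
    """Generate a plausible shape from mode weights b, clamped to +-3 sd."""
    b = np.clip(b, -3 * sd, 3 * sd)
    return mean + b @ modes

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 300))      # 20 toy shapes, 100 3D landmarks
mean, modes, sd = build_shape_model(train)
shape = synthesize(mean, modes, sd, rng.normal(scale=sd))
```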

16.
A large number of problems involve making decisions in an uncertain environment and, hence, with unknown outcomes. Optimization models aimed at controlling the trade-off between risk and return in finance have been widely studied since the seminal work by Markowitz in 1952. In financial applications, shortfall or quantile risk measures are receiving ever-increasing attention. Conditional value-at-risk (CVaR) is arguably the most popular of such measures. In the last decades, optimization models aimed at controlling risk have been applied to several application domains different from financial optimization. This survey provides an overview of the main contributions where CVaR is incorporated into an optimization approach and applied to a context different from financial engineering. The literature is classified following an application-oriented perspective. The applications cover classical areas studied in operational research—such as supply chain management, scheduling, and networks—and less classical areas such as energy and medicine. For each area, concise paper excerpts are provided that convey the main ideas of the problems studied, and analyze how the CVaR has been used to cope with different sources of uncertainty. Finally, some open research directions are outlined.
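For concreteness, the empirical CVaR of a loss sample is simply the mean of the worst (1 − α) fraction of losses, sitting at or beyond the value-at-risk. A minimal sketch with simulated standard-normal losses:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: the mean of the worst
    (1 - alpha) fraction of losses (larger loss = worse)."""
    losses = np.sort(np.asarray(losses))
    tail_start = int(np.ceil(alpha * len(losses)))
    return losses[tail_start:].mean()

rng = np.random.default_rng(3)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)
print(f"VaR_0.95  ~ {np.quantile(losses, 0.95):.3f}")  # ~1.645 for N(0,1)
print(f"CVaR_0.95 ~ {cvar(losses):.3f}")               # ~2.063 for N(0,1)
```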

17.
A mathematical model, for which rigorous methods of statistical inference are available, is described, and techniques for image enhancement and linear discriminant analysis of groups are developed. Since the gray values of neighboring pixels in tomographically produced medical images are spatially correlated, the calculations are carried out in the Fourier domain to ensure statistical independence of the variables. Furthermore, to increase the power of the statistical tests, the known spatial covariance was used to specify constraints in the spectral domain. These methods were compared to statistical procedures carried out in the spatial domain. Positron emission tomography (PET) images of alcoholics with organic brain disorders were compared by these techniques to those of age-matched normal volunteers. Although these techniques are employed here to analyze group characteristics of functional images, they provide a comprehensive set of mathematical and statistical procedures in the spectral domain that can also be applied to images of other modalities, such as computed tomography (CT) or magnetic resonance imaging (MRI).
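The idea of moving a group comparison into the Fourier domain can be sketched as follows, using toy images as stand-ins for PET slices and a per-coefficient two-sample t-test; the paper's covariance-based spectral constraints are not reproduced:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def group_compare_fourier(group_a, group_b):
    """Compare two groups of images coefficient-by-coefficient in the
    Fourier domain, where spatially correlated pixels become
    (approximately) independent variables. Returns a map of two-sample
    t-test p-values on the log-magnitude spectra."""
    fa = np.log(np.abs(np.fft.fft2(group_a, axes=(1, 2))) + 1e-12)
    fb = np.log(np.abs(np.fft.fft2(group_b, axes=(1, 2))) + 1e-12)
    _, pvals = stats.ttest_ind(fa, fb, axis=0)
    return pvals

# Toy stand-in for PET data: 12 patients vs 12 controls, 32x32 slices
patients = rng.normal(size=(12, 32, 32)) + 0.5
controls = rng.normal(size=(12, 32, 32))
pvals = group_compare_fourier(patients, controls)
print((pvals < 0.05).mean())   # fraction of frequencies flagged at 0.05
```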

18.
A new architecture and a statistical model for a pulse-mode digital multilayer neural network (DMNN) are presented. Algebraic neural operations are replaced by stochastic processes using pseudo-random pulse sequences. Synaptic weights and neuron states are represented as probabilities and estimated as average rates of pulse occurrences in the corresponding pulse sequences. A statistical model of error (or noise) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. The stochastic computing technique is implemented with simple logic gates as basic computing elements, leading to a high neuron density on a chip. Furthermore, the use of simple logic gates for neural operations, the pulse-mode signal representation, and the modular design techniques lead to a massively parallel yet compact and flexible network architecture well suited for VLSI implementation. A feedforward network of any size can be configured, with processing speed independent of the network size. Multilayer feedforward networks are modeled and applied to pattern classification problems such as encoding and character recognition.
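The key primitive is that an AND gate multiplies the probabilities carried by two independent pulse streams, with accuracy governed by binomial variance, matching the mean/variance error model described above. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(9)

def to_pulses(p, n_bits):
    """Encode a probability p as a pseudo-random pulse sequence whose
    average rate of 1s estimates p."""
    return rng.random(n_bits) < p

# An AND gate on two independent pulse streams multiplies probabilities:
# P(a AND b) = P(a) * P(b).
w, x = 0.8, 0.6
product = np.mean(to_pulses(w, 10_000) & to_pulses(x, 10_000))
print(f"stochastic {product:.3f}  vs  exact {w * x:.3f}")

# Accuracy follows the binomial variance p(1-p)/n with p = w*x:
# longer pulse streams give less noise.
for n in (100, 1_000, 10_000):
    est = [np.mean(to_pulses(w, n) & to_pulses(x, n)) for _ in range(200)]
    print(n, f"sd = {np.std(est):.4f}")
```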

19.
Replicating and comparing computational experiments in applied evolutionary computing may sound like a trivial task. Unfortunately, it is not. Many papers do not document experimental settings in sufficient detail, making replication of their experiments nearly impossible. Some work also fails to satisfy the rules of thumb for experimentation common to all disciplines, such as the rule that experiments should be conducted and compared under the same or stricter conditions. Moreover, because of the stochastic nature of evolutionary algorithms (EAs), experimental results should always be reported with sufficient statistical detail, comparisons should be based on suitable performance measures, and the statistical significance of one approach over another should be demonstrated; otherwise, the derived conclusions may lack scientific merit. The primary objective of this paper is to offer preliminary guidelines and reminders to assist researchers in replicating and comparing computational experiments when solving practical problems with EAs. Common pitfalls are explained using concrete examples found in papers that solve economic load dispatch problems with EAs.
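In that spirit, a minimal template for reporting a statistically sound comparison of two EAs, assuming 30 independent runs of each under identical budgets and settings (the fitness values below are fabricated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical best-fitness values from 30 independent runs of two EAs
# on the same economic load dispatch instance, identical budgets.
ea_a = rng.normal(loc=100.0, scale=2.0, size=30)
ea_b = rng.normal(loc=98.5, scale=2.5, size=30)

# Report distributional summaries, not just means...
for name, runs in (("A", ea_a), ("B", ea_b)):
    iqr = np.percentile(runs, 75) - np.percentile(runs, 25)
    print(f"{name}: median {np.median(runs):.2f}, IQR {iqr:.2f}")

# ...and a non-parametric significance test, since EA results are
# stochastic and rarely normally distributed.
stat, p = stats.mannwhitneyu(ea_a, ea_b, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```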

20.
This article develops 11 hypotheses on the impacts of six customer characteristics on an individual's willingness to use mobile location-based services (LBS). The hypotheses are tested on a sample of 217 mobile communications customers in Germany who participated in a standardized online survey. PLS analysis suggests that the reported frequency of 'on the move' information needs, the perceived assessment of LBS in a customer's social environment, and the extent of past use of other mobile data services have statistically as well as practically significant effects on adoption intentions for pull LBS. Data privacy risks and cost/bill-size concerns are only weakly or not at all related to such intentions.
