Similar Literature
20 similar records found (search time: 15 ms)
1.
Repeated measures allow additional tests of common assumptions in twin correlation analysis. Analysis of log serum triglyceride level in NHLBI male twins using generalized estimating equations disclosed that the mean and variance shifted across exams, presumably because of changes in laboratory practice.

2.
Correlation functions in large sets of non-homologous protein sequences are analysed. Finite size corrections are applied and fluctuations are estimated. As symbol sequences have to be mapped to sequences of numbers to calculate correlation functions, several property codes are tested as such mappings. We found hydrophobicity autocorrelation functions to be strongly oscillating. Another strong signal is the monotonically decaying alpha-helix propensity autocorrelation function. Furthermore, we detected signals corresponding to an alternation of positively and negatively charged residues at a distance of 3-4 amino acids. To look beyond the property codes gained by the methods of physical chemistry, mappings yielding a strong correlation signal are sought using a Monte Carlo simulation. The mappings leading to strong signals are found to be related to hydrophobicity or alpha-helix propensity. A cluster analysis of the top scoring mappings leads to two novel property codes. These two property codes are gained from sequence data only. They turn out to be similar to known property codes for hydrophobicity or polarity.
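The property-code autocorrelation analysis described above can be sketched in a few lines. The snippet below maps a sequence through the Kyte-Doolittle hydropathy scale (a standard hydrophobicity code; this mapping and the `autocorrelation` helper are illustrative assumptions, not the authors' code) and computes normalized lagged autocorrelations:

```python
import numpy as np

# Kyte-Doolittle hydropathy values, a widely used hydrophobicity property code.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def autocorrelation(seq, mapping, max_lag=10):
    """Normalized autocorrelation of a property-coded sequence at lags 1..max_lag."""
    x = np.array([mapping[aa] for aa in seq], dtype=float)
    x = x - x.mean()                      # center so correlations are about the mean
    var = np.dot(x, x) / len(x)           # per-residue variance
    return [np.dot(x[:-k], x[k:]) / ((len(x) - k) * var)
            for k in range(1, max_lag + 1)]
```

An alternating hydrophobic/hydrophilic sequence gives the strongly oscillating signal the abstract describes: the lag-1 value is negative and the lag-2 value positive.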

3.
A common measure in clinical trials and epidemiologic studies is the number of events such as seizures, hospitalizations, or bouts of disease. Frequently, a binary measure of severity for each event is available but is not incorporated in the analysis. This paper proposes methodology for jointly modeling the number of events and the vector of correlated binary severity measures. Our formulation exploits the notion that a given covariate may affect both outcomes in a similar way. We functionally link the regression parameters for the counts and binary means and discuss a generalized estimating equation (GEE) approach for parameter estimation. We discuss conditions under which the proposed joint modeling approach provides marked gains in efficiency relative to the common procedure of simply modeling the counts, and we illustrate the methodology with epilepsy clinical trial data.

4.
Experimental studies of prevention programs often randomize clusters of individuals rather than individuals to treatment conditions. When the correlation among individuals within clusters is not accounted for in statistical analysis, the standard errors are biased, potentially resulting in misleading conclusions about the significance of treatment effects. This study demonstrates the generalized estimating equations (GEE) method, focusing specifically on the GEE-independent method, to control for within-cluster correlation in regression models with either continuous or binary outcomes. The GEE-independent method yields consistent and robust variance estimates. Data from Project DARE, a youth substance abuse prevention program, are used for illustration.
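In the linear case, the GEE-independent estimator described above amounts to ordinary least squares point estimates with a cluster-robust sandwich variance. A minimal numpy sketch (the function name and data layout are assumptions; this is not the Project DARE analysis itself):

```python
import numpy as np

def gee_independent(y, X, groups):
    """GEE with independence working correlation for a linear model:
    point estimates equal OLS; standard errors use the cluster-robust
    sandwich estimator, consistent under within-cluster correlation."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y                 # OLS point estimates
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):            # sum score outer products per cluster
        idx = np.asarray(groups) == g
        s = X[idx].T @ resid[idx]
        meat += np.outer(s, s)
    cov = bread @ meat @ bread             # sandwich covariance
    return beta, np.sqrt(np.diag(cov))
```

With correlated clusters the sandwich standard errors are typically wider than the naive OLS ones, which is exactly the bias correction the abstract refers to.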

5.
The emotional content of stimuli can enhance memory for those stimuli. This process may occur via an interaction with systems responsible for perception and memory or via the addition of distinct brain regions specialized for emotion that augment mnemonic processing. We performed a ¹⁵O PET study to identify neuroanatomical systems that encode visual stimuli with strong negative emotional valence compared to stimuli with neutral valence. Subjects also performed a recognition memory task for these same images, mixed with distractors of similar emotional valence. The experimental design permitted us to independently test effects of emotional content and recognition memory on regional activity. We found activity in the left amygdaloid complex associated with the encoding of emotional stimuli, although this activation appeared early in the scanning session and was not detectable during recognition memory. Visual recognition memory recruited the right middle frontal gyrus and the superior anterior cingulate cortex for both negative and neutral stimuli. An interaction occurred between emotional content and recognition in the lingual gyrus, where greater activation occurred during recognition of negative images compared to recognition of neutral images. Instead of distinct neuroanatomical systems for emotion augmenting memory, we found that emotionally salient stimuli appeared to enhance processing of early sensory input during visual recognition.

6.
During flushing and disposal of reservoir sediments it is necessary to know the turbidity values in the river downstream under natural conditions, in the absence of dams or river training works. The paper shows that under these conditions the ratio of the average sediment discharge to the annual maximum water discharge is a function of the average annual turbidity. Turbidity can be considered a representative synthetic index of the climatic conditions, the lithological features and land cover of the basin, and the geometric characteristics of the river network. The proposed relationships of sediment discharge as a function of water discharge were validated against data collected from Italian regions with very different morphological, geo-lithological and rainfall features, and with basin areas ranging from a few dozen to thousands of square kilometres. The results can be considered satisfactory.

7.
A semi-continuous relaxation model is constructed using sums of gamma functions and non-negative least squares for the inversion of Carr-Purcell-Meiboom-Gill (CPMG) echo data. No regularization is necessary for this approach, and yet the solution is stable even with noisy data. Test results derived from 60 echo trains are presented, and the computational advantages of the method are discussed.

9.
Classic explanations of the "group polarization phenomenon" emphasize interpersonal processes such as informational influence and social comparison (Myers & Lamm, 1976). Based on earlier research, we hypothesized that at least part of the polarization observed during group discussion might be due to repeated attitude expression. Two studies provide support for this hypothesis. In Study 1, we manipulated how often each group member talked about an issue and how often he or she heard other group members talk about the issue. We found that repeated expression produced a reliable shift in extremity. A detailed coding of the groups' discussions showed that the effect of repeated expression on attitude polarization was enhanced in groups where the group members repeated each other's arguments and used them in their own line of reasoning. Study 2 tested for this effect experimentally. The results showed that the effect of repeated expression was augmented in groups where subjects were instructed to use each other's arguments compared to groups where instructions were given to avoid such repetitions.

10.
Scale score measures are ubiquitous in the psychological literature and can be used as both dependent and independent variables in data analysis. Poor reliability of scale score measures leads to inflated standard errors and/or biased estimates, particularly in multivariate analysis. Reliability estimation is usually an integral step to assess data quality in the analysis of scale score data. Cronbach's α is a widely used indicator of reliability but, due to its rather strong assumptions, can be a poor estimator (L. J. Cronbach, 1951). For longitudinal data, an alternative approach is the simplex method; however, it too requires assumptions that may not hold in practice. One effective approach is an alternative estimator of reliability that relaxes the assumptions of both Cronbach's α and the simplex estimator and thus generalizes both estimators. Using data from a large-scale panel survey, the benefits of the statistical properties of this estimator are investigated, and its use is illustrated and compared with the more traditional estimators of reliability.
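For reference, Cronbach's α itself is simple to compute from a subjects-by-items score matrix; a minimal sketch (hypothetical helper, not the generalized estimator the abstract proposes):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly parallel items (every column identical) give α = 1, the upper bound; weakly correlated items pull α toward zero, which is why low α inflates standard errors downstream.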

11.
We present a program, mhghseq, for designing a clinical trial to compare two treatments. For a user-specified hypothesis and a user-specified alternative, mhghseq finds the number of subjects required to achieve a given size and power. It permits each treatment to have an arbitrary survival curve and it allows for group sequential testing. It provides solutions for small samples as well as for large samples. There may be a follow-up period after accrual is completed. Tests may be one- or two-sided; either the log-rank or the Gehan test may be used; and either of two commonly used boundary forms (O'Brien-Fleming or Pocock) may be specified. Although mhghseq relies primarily on simulation to produce results, the large sample sequential boundary for the log-rank test optionally may be computed with an algorithm given here that is different from ones used in the past for this purpose.

13.
Dawson and Lagakos (1993, Biometrics 49, 1022-1032) proposed a stratified test for repeated measures data that contain missing observations. They recommended stratification based on missing data patterns and considered sufficient conditions under which the size of the test is properly retained. In this paper, we point out some practical problems with these conditions and illustrate them with their CD4 count example as well as a simulation study. We give a less stringent condition and delineate its merit. We also discuss what to do when none of the conditions are met.

14.
Repeated measures data often occur in practice. This has led to considerable progress in the development of methods for inference in models for such data. In this paper, projection methods are proposed for examining goodness-of-fit in regression models for repeated measures. Rao's (1959, Biometrika 46, 49-58) F-test for testing a postulated mean structure using an independent, identically distributed normal random sample is extended to a broad class of models including both fixed and random effects. The paper also shows how projection methods may be utilized for checking multivariate normality. In addition, application of projection to test the adequacy of extremely unbalanced models is considered. Two examples are given to demonstrate the underlying techniques.

15.
Evaluates 4 statistical tests of treatment effect for the nonequivalent control group design. This design consists of pre- and posttreatment measures of a dependent variable with biased assignments to treatment groups. The biased assignment creates a treatment-pretest confounding for which different statistical techniques adjust. The different statistical tests discussed are the analysis of covariance, analysis of covariance with reliability correction, raw change score analysis, and standardized change score analysis. If assignment to treatment groups is based on the pretest score (a very infrequent event), analysis of covariance is the appropriate mode of analysis. Selection based on the pretest true scores necessitates a reliability correction procedure. Selection based on stable group differences, and selection that occurs midway between the pre- and posttest, necessitate change score analysis.
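The contrast between the first and third of those tests, ANCOVA versus raw change-score analysis, can be made concrete with a small sketch (hypothetical helpers on illustrative data; the reliability-corrected and standardized variants are not shown):

```python
import numpy as np

def ancova_effect(pre, post, group):
    """ANCOVA treatment effect: regress post on [1, group, pre]
    and report the coefficient on the group indicator."""
    X = np.column_stack([np.ones_like(pre), group, pre])
    coef, *_ = np.linalg.lstsq(X, post, rcond=None)
    return coef[1]

def change_score_effect(pre, post, group):
    """Raw change-score effect: mean(post - pre) in treatment minus control."""
    change = post - pre
    g = np.asarray(group, bool)
    return change[g].mean() - change[~g].mean()
```

When groups are balanced on the pretest the two estimates coincide; under biased assignment they can diverge, which is why the appropriate test depends on the selection mechanism, as the abstract argues.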

16.
Data are analysed from a longitudinal psychiatric study in which dropouts occur that are not completely at random. A marginal proportional odds model is fitted that relates the response (severity of side effects) to various covariates. Two methods of estimation are used: generalized estimating equations (GEE) and maximum likelihood (ML). Both the complete set of data and the data from only those subjects completing the study are analysed. For the completers-only data, the GEE and ML analyses produce very similar results. These results differ considerably from those obtained from the analyses of the full data set. There are also marked differences between the results obtained from the GEE and ML analysis of the full data set. The occurrence of such differences is consistent with the presence of a non-completely-random dropout process, and it can be concluded in this example that both the analyses of the completers only and the GEE analysis of the full data set produce misleading conclusions about the relationships between the response and covariates.

17.
In a meta-analysis of a set of clinical trials, a crucial but problematic component is providing an estimate and confidence interval for the overall treatment effect θ. Since in the presence of heterogeneity a fixed effect approach yields an artificially narrow confidence interval for θ, the random effects method of DerSimonian and Laird, which incorporates a moment estimator of the between-trial variance component σB², has been advocated. With the additional distributional assumption of normality, a confidence interval for θ may be obtained. However, this method provides neither a confidence interval for σB² nor a confidence interval for θ that takes account of the fact that σB² has to be estimated from the data. We show how a likelihood based method can be used to overcome these problems, and use profile likelihoods to construct likelihood based confidence intervals. This approach yields an appropriately widened confidence interval compared with the standard random effects method. Examples of application to a published meta-analysis and a multicentre clinical trial are discussed. It is concluded that likelihood based methods are preferred to the standard method in undertaking random effects meta-analysis when the value of σB² has an important effect on the overall estimated treatment effect.
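The DerSimonian-Laird moment estimator referred to above is short enough to sketch directly (illustrative implementation; the profile-likelihood intervals the paper advocates are not reproduced here):

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects meta-analysis.
    y: per-trial effect estimates; v: their within-trial variances.
    Returns (pooled effect, moment estimate of between-trial variance tau2)."""
    y = np.asarray(y, float)
    v = np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    theta_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - theta_fe) ** 2)           # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    theta_re = np.sum(w_re * y) / np.sum(w_re)
    return theta_re, tau2
```

When the trials are homogeneous the moment estimate is truncated to zero and the method reduces to the fixed-effect analysis; heterogeneity inflates the weights' denominators and widens the interval, which is the behaviour the likelihood approach refines.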

19.
The authors present a technique for correcting for exposure measurement error in the analysis of case-control data when subjects have a variable number of repeated measurements, and the average is used as the subject's measure of exposure. The true exposure as well as the measurement error are assumed to be normally distributed. The method transforms each subject's observed average by a factor which is a function of the measurement error parameters, prior to fitting the logistic regression model. The resulting logistic regression coefficient estimate based on the transformed average is corrected for error. A bootstrap method for obtaining confidence intervals for the true regression coefficient, which takes into account the variability due to estimation of the measurement error parameters, is also described. The method is applied to data from a nested case-control study of hormones and breast cancer.
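A regression-calibration-style shrinkage illustrates the kind of per-subject transformation described (a sketch under the stated normality assumptions; the paper's exact factor, its estimation, and the bootstrap interval are not reproduced, and all names here are hypothetical):

```python
import numpy as np

def calibrated_exposure(w_bar, n_reps, mu_x, var_x, var_u):
    """Shrink each subject's observed average toward the population mean:
    E[X | Wbar] = mu_x + lam * (Wbar - mu_x), with lam = var_x / (var_x + var_u / n).
    Subjects with more repeats (larger n) are shrunk less, because their
    average carries less measurement-error variance."""
    n = np.asarray(n_reps, float)
    lam = var_x / (var_x + var_u / n)             # per-subject attenuation factor
    return mu_x + lam * (np.asarray(w_bar, float) - mu_x)
```

Fitting the logistic model to these calibrated values, rather than the raw averages, corrects the first-order attenuation of the exposure coefficient.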

20.
Although the pharmacologic treatment of somatoform disorders has scarcely been investigated, there is reason to believe that antidepressants might be useful. We examined the response of 29 patients with somatoform disorders from a general medicine clinic to a selective serotonin reuptake inhibitor, fluvoxamine. The drug was administered in doses of up to 300 mg daily for 8 weeks. Sixty-one percent of the patients who took medication for at least 2 weeks were at least moderately improved. In addition to antidepressant effects, fluvoxamine had other beneficial effects and was well-tolerated. The benefits of drug therapy were modest but appear to warrant a placebo-controlled trial.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号