Similar Documents
20 similar documents found (search time: 31 ms)
1.
Partial Fourier (PF) methods take advantage of data symmetry to allow for either faster image acquisition or increased image resolution. Faster acquisition and increased spatial resolution are advantageous for fMRI because of increased temporal resolution and/or reduced partial volume effects, respectively. Standard PF methods, which use a phase reference obtained from a low resolution image, are adequate for the reconstruction of time-stationary images acquired using either spin echoes or short TE gradient echoes. In fMRI, however, multiple images are acquired using long TE gradient echoes, which introduces possible phase drifts in the fMRI data and high spatial frequencies in the phase reference. This work investigates several techniques developed to reconstruct fMRI data obtained with PF acquisitions. PF methods that account for both high-frequency spatial variations and time-dependent drifts in the phase reference are discussed and are quantitatively evaluated using receiver operating characteristic (ROC) curve analysis.

2.
Biologically based markers (biomarkers) are currently used to provide information on exposure, health effects, and individual susceptibility to chemical and radiological wastes. However, the development and validation of biomarkers are expensive and time consuming. To determine whether biomarker development and use offer potential improvements to risk models based on predictive relationships or assumed values, we explore the use of uncertainty analysis applied to exposure models for dietary methyl mercury intake. We compare exposure estimates based on self-reported fish intake and measured fish mercury concentrations with biomarker-based exposure estimates (i.e., hair or blood mercury concentrations) using a published data set covering 1 month of exposure. Such a comparison of exposure model predictions allowed estimation of bias and random error associated with each exposure model. From these analyses, both bias and random error were found to be important components of uncertainty regarding biomarker-based exposure estimates, while the diary-based exposure estimate was susceptible to bias. Application of the proposed methods to a simple case study demonstrates their utility in estimating the contribution of population variability and measurement error in specific applications of biomarkers to environmental exposure and risk assessment. Such analyses can guide risk analysts and managers in the appropriate validation, use, and interpretation of exposure biomarker information.
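The bias/random-error decomposition described in the abstract can be sketched with a small Monte Carlo comparison of two hypothetical exposure models. All distributions and parameters below are illustrative assumptions, not the paper's data:

```python
import random
import statistics

random.seed(1)

# Hypothetical true weekly methyl mercury intakes (micrograms) for a cohort.
true_intake = [random.lognormvariate(2.0, 0.5) for _ in range(5000)]

# Diary-based model: systematic under-reporting (bias) plus modest noise.
diary = [x * 0.85 + random.gauss(0.0, 0.5) for x in true_intake]

# Biomarker-based model: unbiased on average but with larger random error.
biomarker = [x + random.gauss(0.0, 2.0) for x in true_intake]

def bias(est, truth):
    """Mean error: systematic over/under-estimation."""
    return statistics.mean(e - t for e, t in zip(est, truth))

def random_error(est, truth):
    """Standard deviation of the errors: scatter around the bias."""
    return statistics.stdev([e - t for e, t in zip(est, truth)])

print(f"diary      bias={bias(diary, true_intake):+.2f}  sd={random_error(diary, true_intake):.2f}")
print(f"biomarker  bias={bias(biomarker, true_intake):+.2f}  sd={random_error(biomarker, true_intake):.2f}")
```

Under these assumed error structures, the diary estimate shows a clear negative bias while the biomarker estimate shows little bias but larger scatter, mirroring the qualitative pattern the abstract reports.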

3.
A shared parameter model with logistic link is presented for longitudinal binary response data to accommodate informative drop-out. The model consists of observed longitudinal and missing response components that share random effects parameters. To our knowledge, this is the first presentation of such a model for longitudinal binary response data. Comparisons are made to an approximate conditional logit model in terms of a clinical trial dataset and simulations. The naive mixed effects logit model that does not account for informative drop-out is also compared. The simulation-based differences among the models with respect to coverage of confidence intervals, bias, and mean squared error (MSE) depend on at least two factors: whether an effect is a between- or within-subject effect and the amount of between-subject variation as exhibited by variance components of the random effects distributions. When the shared parameter model holds, the approximate conditional model provides confidence intervals with good coverage for within-cluster factors but not for between-cluster factors. The converse is true for the naive model. Under a different drop-out mechanism, when the probability of drop-out is dependent only on the current unobserved observation, all three models behave similarly by providing between-subject confidence intervals with good coverage and comparable MSE and bias but poor within-subject confidence intervals, MSE, and bias. The naive model does more poorly with respect to the within-subject effects than do the shared parameter and approximate conditional models. The data analysis, which entails a comparison of two pain relievers and a placebo with respect to pain relief, conforms to the simulation results based on the shared parameter model but not on the simulation based on the outcome-driven drop-out process. This comparison between the data analysis and simulation results may provide evidence that the shared parameter model holds for the pain data.

4.
Calculation of reference limits by regression analysis makes it unnecessary to partition the reference data into subgroups, and age-dependent limits can be estimated within as narrow age intervals as necessary. However, the reliability of the regression-based reference limits has not been considered before. To get valid regression-based confidence intervals (CIs) for reference limits, one must evaluate the convolution of two distributions. In this study, age-dependent reference limits with corresponding CIs were produced for blood hemoglobin concentrations over the age interval from newborns to 12 months. We describe how the variance associated with the reference limits can be estimated, and present a Table from which appropriate values can be chosen for the calculation of regression-based reference limits and exact CIs. Also, an equation for the calculation of approximate CIs is given. The data were modeled by linear regression in several cumulative age groups to find the transition zone where the slope changed. After defining this cutoff point, piecewise linear regression was applied. Reference limits and their CIs calculated by conventional and piecewise linear regression methods were almost the same in older age groups but differed significantly during the period of most rapid age-dependent changes, i.e., during the 2 months after birth.
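A minimal sketch of the piecewise-regression idea, with a hypothetical cutoff at 2 months and invented hemoglobin values; the paper's exact variance and CI formulas are not reproduced here, only a crude parametric limit from the residual SD:

```python
# Hypothetical (age in months, blood hemoglobin g/L) pairs; the slope
# changes sharply around 2 months of age, as in the abstract.
data = [(0.25, 180), (0.5, 172), (1.0, 150), (1.5, 132), (2.0, 118),
        (4.0, 115), (6.0, 118), (8.0, 120), (10.0, 122), (12.0, 124)]

def ols(points):
    """Least-squares fit y = a + b*x; returns (a, b, residual SD)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    b = sxy / sxx
    a = my - b * mx
    sd = (sum((y - a - b * x) ** 2 for x, y in points) / (n - 2)) ** 0.5
    return a, b, sd

cutoff = 2.0   # transition zone located by fitting cumulative age groups
early = ols([p for p in data if p[0] <= cutoff])
late = ols([p for p in data if p[0] >= cutoff])

def reference_limits(fit, age, z=1.96):
    """Crude age-dependent reference interval: centre ± z * residual SD."""
    a, b, sd = fit
    centre = a + b * age
    return centre - z * sd, centre + z * sd

print(reference_limits(early, 1.0), reference_limits(late, 6.0))
```

Fitting the two segments separately lets the limits track the steep neonatal decline without averaging it away, which is the point of choosing the cutoff first.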

5.
A real-time estimation of water distribution system state variables such as nodal pressures and chlorine concentrations can lead to savings in time and money and provide better customer service. While a good knowledge of nodal demands is a prerequisite for pressure and water quality prediction, little effort has been placed in real-time demand estimation. This study presents a real-time demand estimation method using field measurements provided by supervisory control and data acquisition (SCADA) systems. For real-time demand estimation, a recursive state estimator based on a weighted least-squares scheme and a Kalman filter is applied. Furthermore, based on estimated demands, real-time nodal pressures and chlorine concentrations are predicted. The uncertainties in demand estimates and predicted state variables are quantified in terms of confidence limits. Approximate methods such as first-order second-moment analysis and Latin hypercube sampling are used for uncertainty quantification and verified by Monte Carlo simulation. Application to a real network with synthetically generated data gives good demand estimations and reliable predictions of nodal pressure and chlorine concentration. Alternative measurement data sets are compared to assess the value of measurement types for demand estimation. With the defined measurement error magnitudes, pipe flow data are significantly more important than pressure head measurements in estimating demands with a high degree of confidence.
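As a toy illustration of the recursive estimation step, here is a scalar Kalman filter tracking a single nodal demand from noisy flow readings. The random-walk demand model, the noise variances, and all numbers are assumptions for the sketch, not the study's network:

```python
def kalman_step(x, P, z, Q=0.5, R=4.0):
    """One predict/update cycle for a random-walk demand model.
    x: demand estimate, P: its variance, z: new measurement,
    Q: process noise variance, R: measurement noise variance."""
    x_pred, P_pred = x, P + Q          # predict: demand persists, uncertainty grows
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # update with the measurement innovation
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 10.0, 100.0                  # vague prior on the demand (L/s)
for z in [12.1, 11.8, 12.3, 12.0]:  # SCADA-style flow readings
    x, P = kalman_step(x, P, z)

half_width = 1.96 * P ** 0.5        # approximate 95% confidence limit
print(f"demand ~ {x:.2f} +/- {half_width:.2f} L/s")
```

The posterior variance `P` shrinks with each reading, which is how the confidence limits on the demand estimate tighten as more SCADA data arrive.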

6.
Cost-effectiveness ratios usually appear as point estimates without confidence intervals, since the numerator and denominator are both stochastic and one cannot estimate the variance of the estimator exactly. The recent literature, however, stresses the importance of presenting confidence intervals for cost-effectiveness ratios in the analysis of health care programmes. This paper compares the use of several methods to obtain confidence intervals for the cost-effectiveness of a randomized intervention to increase the use of Medicaid's Early and Periodic Screening, Diagnosis and Treatment (EPSDT) programme. Comparisons of the intervals show that methods that account for skewness in the distribution of the ratio estimator may be substantially preferable in practice to methods that assume the cost-effectiveness ratio estimator is normally distributed. We show that non-parametric bootstrap methods that are mathematically less complex but computationally more rigorous result in confidence intervals that are similar to the intervals from a parametric method that adjusts for skewness in the distribution of the ratio. The analyses also show that the modest sample sizes needed to detect statistically significant effects in a randomized trial may result in confidence intervals for estimates of cost-effectiveness that are much wider than the boundaries obtained from deterministic sensitivity analyses.  
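The non-parametric bootstrap approach can be sketched as follows, resampling subjects and recomputing the ratio each time. The cost/effect distributions are simulated stand-ins, not the EPSDT trial data:

```python
import random

random.seed(42)

# Hypothetical per-subject (incremental cost, incremental effect) pairs.
costs = [random.gauss(120.0, 30.0) for _ in range(200)]
effects = [random.gauss(0.4, 0.15) for _ in range(200)]

def icer(cost, eff):
    """Ratio of means: cost per unit of health effect."""
    return sum(cost) / sum(eff)

boot = []
n = len(costs)
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]   # resample subjects with replacement
    boot.append(icer([costs[i] for i in idx], [effects[i] for i in idx]))
boot.sort()
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
point = icer(costs, effects)
print(f"ICER = {point:.0f}, 95% percentile CI ({lo:.0f}, {hi:.0f})")
```

Because the interval is read off the empirical percentiles of the resampled ratios, it inherits any skewness in the ratio's distribution instead of forcing symmetry around the point estimate.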

7.
Correlational analysis is a cornerstone method of statistical analysis, yet most presentations of correlational techniques deal primarily with tests of significance. The focus of this article is obtaining explicit expressions for confidence intervals for functions of simple, partial, and multiple correlations. Not only do these permit tests of hypotheses about differences but they also allow a clear statement about the degree to which correlations differ. Several important differences of correlations for which tests and confidence intervals are not widely known are included among the procedures discussed. Among these is the comparison of 2 multiple correlations based on independent samples.
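For the simple-correlation case, the standard Fisher z construction gives explicit confidence intervals; the article covers many more functions of correlations, so this is only the most familiar instance:

```python
import math

def fisher_ci(r, n, z=1.96):
    """95% CI for a simple correlation via the Fisher z transform."""
    zr = math.atanh(r)               # variance-stabilising transform
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

def diff_ci(r1, n1, r2, n2, z=1.96):
    """95% CI for z(r1) - z(r2) from two independent samples
    (limits on the Fisher-z scale)."""
    d = math.atanh(r1) - math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return d - z * se, d + z * se

print(fisher_ci(0.5, 100))
print(diff_ci(0.6, 150, 0.4, 120))
```

An interval for the difference that excludes zero is the "clear statement about the degree to which correlations differ" that a bare significance test does not provide.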

8.
Assessment of body composition remains a goal for the routine assessment of nutritional status of patients on long-term dialysis. Methods generally available for estimation of body fat in healthy individuals are limited by practicality and availability for use in this patient population. Anthropometry, which is cost effective and easy to perform, is limited by the lack of appropriate reference standards for patients on dialysis and artifact caused by hydration status. Bioelectrical impedance affords new opportunities for non-invasive assessment of fluid volume, its distribution, and body cell mass; estimation of fat-free mass and body fat can be affected by hydration status. Dual x-ray absorptiometry permits estimation of bone status and fat mass because changes in hydration status are reflected in estimates of fat-free mass. Evaluation of validity of techniques for fluid status and body composition assessment requires the use of appropriate reference methods and proper statistical procedures to examine error, not only between groups, but by individual. Use of body composition assessment methods together with biochemical measurements will enhance the nutritional assessment of end-stage renal disease patients on long-term hemodialysis.

9.
Currently available software for nonlinear regression does not account for errors in both the independent and the dependent variables. In pharmacodynamics, measurement errors are involved in the drug concentrations as well as in the effects. Instead of minimizing the sum of squared vertical errors (OLS), a Fortran program was written to find the closest distance from a measured data point to the tangent line of an estimated nonlinear curve and to minimize the sum of squared perpendicular distances (PLS). A Monte Carlo simulation was conducted with the sigmoidal Emax model to compare the OLS and PLS methods. The area between the true pharmacodynamic relationship and the fitted curve was compared as a measure of goodness of fit. The PLS demonstrated an improvement over the OLS by 20.8% with small differences in the parameter estimates when the random noise level had a standard deviation of five for both concentration and effect. Consideration of errors in both concentrations and effects with the PLS could lead to a more rational estimation of pharmacodynamic parameters.
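For a straight line with equal error variances on both axes, the perpendicular-distance fit has a closed form (Deming regression with unit variance ratio). This sketch contrasts it with OLS on simulated data rather than the paper's sigmoidal Emax model, for which the fit must be done iteratively:

```python
import math
import random

random.seed(7)

# Noisy in x (concentration) AND y (effect): both measured with error.
xs, ys = [], []
for _ in range(300):
    t = random.uniform(0.0, 10.0)            # true concentration
    xs.append(t + random.gauss(0.0, 1.0))    # measured with error
    ys.append(2.0 * t + 1.0 + random.gauss(0.0, 1.0))

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))

b_ols = sxy / sxx   # vertical distances only: attenuated by error in x
# Perpendicular (total) least-squares slope, equal error variances:
b_pls = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

print(f"OLS slope {b_ols:.3f}, perpendicular LS slope {b_pls:.3f} (true 2.0)")
```

OLS treats the x-error as if it were in y and so biases the slope toward zero; minimizing perpendicular distances removes that attenuation, which is the rationale the abstract gives for PLS.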

10.
A procedure is described for estimating the rate constants of a two-compartment stochastic model for which the covariance structure over time of the observations is known. The proposed estimation procedure, by incorporating the known (as a function of the parameters to be estimated) covariance structure of the observations, produces regular best asymptotically normal (RBAN) estimators for the parameters. In addition, the construction of approximate confidence intervals and regions for the parameters is made possible by identification of the asymptotic covariance matrix of the estimators. The explicit form of the inverse of the covariance matrix, which is required in the estimation procedure, is presented. The procedure is illustrated by application to real as well as simulated data, and a comparison is made to the widely used nonlinear least squares procedure, which does not account for correlations over time.

11.
Owing to the complexities involved in obtaining direct measures of in vivo muscle forces, validation of predictive models of muscle activity has been difficult. An artificial neural network (ANN) model had been previously developed for the estimation of lumbar muscle activity during moderate levels of static exertion. The predictive ability of this model is evaluated in this study using several techniques, including comparison of response surfaces and composite statistical tests of values derived from model output, with multiple EMG experimental datasets. ANN-predicted activation levels were accurately modelled to within 3% across a range of experiments and levels of combined flexion/extension and lateroflexion loadings. The results indicate both a high degree of consistency in the averaged muscle activity measured in several different experiments, and substantiate the ability of the ANN model to predict generalized recruitment patterns. It is also suggested that the use of multiple comparison methods provides a better indication of model behaviour and prediction accuracy than a single evaluation criterion.

12.
Statistical methods to map quantitative trait loci (QTL) in outbred populations are reviewed, extensions and applications to human and plant genetic data are indicated, and areas for further research are identified. Simple and computationally inexpensive methods include (multiple) linear regression of phenotype on marker genotypes and regression of squared phenotypic differences among relative pairs on estimated proportions of identity-by-descent at a locus. These methods are less suited for genetic parameter estimation in outbred populations but allow the determination of test statistic distributions via simulation or data permutation; however, further inferences including confidence intervals of QTL location require the use of Monte Carlo or bootstrap sampling techniques. A method which is intermediate in computational requirements is residual maximum likelihood (REML) with a covariance matrix of random QTL effects conditional on information from multiple linked markers. Testing for the number of QTLs on a chromosome is difficult in a classical framework. The computationally most demanding methods are maximum likelihood and Bayesian analysis, which take account of the distribution of multilocus marker-QTL genotypes on a pedigree and permit investigators to fit different models of variation at the QTL. The Bayesian analysis includes the number of QTLs on a chromosome as an unknown.

13.
The traceability requirements for certification analysis of reference materials are far more demanding than those for routine analysis. In routine analysis, certified reference materials (CRMs) can be used for calibration and to establish traceability. Although the ISO Guides on the production and certification of reference materials do not address calibration very explicitly, many reference material producers do not accept primary reference materials for calibration in certification analysis. These requirements restrict the choice of certification methods to those that can be calibrated with substances of known high purity, stoichiometric methods, or mixtures of substances of known high purity. Effective solid-sampling techniques, such as glow discharge mass spectrometry (GD-MS) or spark optical emission spectrometry (spark OES), which are commonly calibrated with bulk reference materials, therefore appear less suitable for certification analysis. GD-MS in particular is a powerful tool for the determination of trace elements in solid samples; besides metallic impurities, relevant non-metallic impurities such as sulfur and phosphorus can also be analysed when specific gas mixtures are used. We have developed a calibration approach that uses mixtures of substances of known high purity together with stoichiometric methods, and this approach can likewise be applied to GD-MS. Analogous to matrix-matching in solution, the calibration is based on doped pressed-powder pellets. Results from the certification analysis of copper and steel reference materials are also presented.

14.
Application of the EM algorithm for estimation in the generalized mixed model has been largely unsuccessful because the E-step cannot be determined in most instances. The E-step computes the conditional expectation of the complete data log-likelihood and when the random effect distribution is normal, this expectation remains an intractable integral. The problem can be approached by numerical or analytic approximations; however, the computational burden imposed by numerical integration methods and the absence of an accurate analytic approximation have limited the use of the EM algorithm. In this paper, Laplace's method is adapted for analytic approximation within the E-step. The proposed algorithm is computationally straightforward and retains much of the conceptual simplicity of the conventional EM algorithm, although the usual convergence properties are not guaranteed. The proposed algorithm accommodates multiple random factors and random effect distributions besides the normal, e.g., the log-gamma distribution. Parameter estimates obtained for several data sets and through simulation show that this modified EM algorithm compares favorably with other generalized mixed model methods.
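Laplace's method replaces an intractable integral with a Gaussian approximation around the integrand's mode. A classic self-contained illustration, not the E-step itself, is Stirling's approximation to the Gamma function, where the same expansion is applied to h(u) = n log u - u:

```python
import math

def laplace_gamma(n):
    """Laplace approximation to Gamma(n+1) = integral of u^n * exp(-u).
    h(u) = n*log(u) - u peaks at u_hat = n with h''(u_hat) = -1/n, so the
    Gaussian approximation gives the familiar Stirling formula:
    sqrt(2*pi*n) * (n/e)^n."""
    return math.sqrt(2.0 * math.pi * n) * (n / math.e) ** n

exact = math.factorial(10)            # Gamma(11)
approx = laplace_gamma(10)
print(exact, approx, approx / exact)  # relative error shrinks as n grows
```

The relative error decays like 1/(12n), which is why the approximation becomes very accurate for large n; the paper's contribution is applying the same expansion inside the E-step rather than to a known special function.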

15.
Fecal output estimates derived from a one-compartment, Gamma-2, age-dependent model were compared with estimates derived algebraically by computing the area under the marker excretion curve for lambs given a pulse dose of ytterbium-labeled forage. Lambs were fed one of four diets (as-fed basis): 100% alfalfa hay, 100% prairie hay, 50:50 alfalfa:sorghum grain, and 50:50 prairie hay:sorghum grain. For the one-compartment model, fecal output was calculated as the dose of Yb (micrograms) divided by the initial concentration in the compartment (micrograms of Yb/gram of DM) multiplied by the age-dependent rate constant (hours-1). For the algebraic method, fecal output was calculated as the dose of Yb divided by the area under the marker excretion curve ([micrograms of Yb/gram of fecal DM].hours), both with the full complement of fecal samples and with fecal samples collected at 12-h intervals. Fecal output estimated by the three methods did not differ (P > .15) from measured fecal output (total collection). Marker retention time calculated from the one-compartment, age-dependent model was numerically greater (P > .10) than retention time calculated algebraically (sum of concentration x time divided by sum of concentrations weighted for collection interval) for lambs fed all four diets. These results suggest that the area under the marker excretion curve generated from a pulse dose of Yb-labeled forage will provide estimates of fecal output that do not differ from those calculated from a one-compartment, age-dependent model.
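The algebraic dose-over-AUC calculation can be sketched with the trapezoidal rule. The marker concentrations, sampling times, and dose below are invented for illustration, not the study's measurements:

```python
# Fecal output from a pulse marker dose: dose / area under the excretion curve.
# Times in hours; concentrations in micrograms Yb per gram of fecal DM.
times = [0, 12, 24, 36, 48, 60, 72, 96, 120]
conc  = [0.0, 2.0, 14.0, 18.0, 12.0, 7.0, 4.0, 1.5, 0.3]

def trapezoid_auc(t, c):
    """Area under the marker excretion curve by the trapezoidal rule."""
    return sum((t[i + 1] - t[i]) * (c[i + 1] + c[i]) / 2.0
               for i in range(len(t) - 1))

dose_ug = 250_000.0                # pulse dose of Yb, micrograms (assumed)
auc = trapezoid_auc(times, conc)   # units: (ug Yb / g fecal DM) * h
fecal_output_g_per_h = dose_ug / auc
print(f"AUC = {auc:.1f}, fecal DM output = {fecal_output_g_per_h:.1f} g DM/h")
```

Dividing the known dose by the AUC recovers the fecal DM flow because the entire dose must eventually be excreted, so dose = output rate x AUC when output is roughly constant.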

16.
Estimates of brief time intervals (ranging from 2 to 120 sec.) were obtained from 24 young offenders and 48 controls by the methods of production and verbal estimation. The verbal estimations were obtained of "empty" intervals as well as of intervals "filled" with a buzzer tone. Intelligence estimates were obtained on all Ss. The results indicate that brief time intervals appear longer to delinquents than to nondelinquents. The controls, but not the delinquents, gave shorter verbal estimates of the "filled" than of the "empty" intervals. Intelligence was not a significant source of variance in either the delinquents' or the controls' verbal estimation scores, but correlated significantly with the delinquents' production scores of the relatively longer intervals (15-120 sec.).

17.
Although the capabilities of engineering surface wave methods have improved in recent years due to several advances, a number of issues still require study to further improve the capabilities of modern surface wave methods. Near-field effects, which are one of these issues, have been studied for traditional surface wave methods with two receivers, and several criteria to mitigate the effects have been recommended. However, these criteria are not applicable to surface wave methods with multiple receivers. Moreover, the criteria are not quantitatively based and do not account for different types of soil profiles, which strongly influence near-field effects. A new study of near-field effects on surface wave methods with multiple receivers was conducted via numerical simulations, laboratory simulations, and field tests. Quantitatively based criteria for near-field effects in different soil profiles are presented using two normalized parameters developed in this study. There was excellent agreement between numerical and experimental results, and it was found that underestimation of measured Rayleigh phase velocities was the major symptom of near-field effects.

18.
OBJECTIVE: To present an application of logistic regression modelling to estimate ratios of proportions, such as prevalence ratio or relative risk, and the Delta Method to estimate confidence intervals. METHOD: The Delta Method was used because it is appropriate for the estimation of variance of non-linear functions of random variables. The method is based on Taylor's series expansion and provides a good approximation of variance estimates. A computer program, utilizing the matrix module of SAS, was developed to compute the variance estimates. A practical demonstration is presented with data from a cross-sectional study carried out on a sample of 611 women, to test the hypothesis that the lack of housework sharing is associated with high scores of psychological symptoms as measured by a validated questionnaire. RESULTS: Crude and adjusted prevalence ratio estimated by logistic regression were similar to those estimated by tabular analysis. Also, ranges of the confidence intervals of the prevalence ratio according to the Delta Method were nearly equal to those obtained by the Mantel-Haenszel approach. CONCLUSIONS: The results give support to the use of the Delta Method for the estimation of confidence intervals for ratios of proportions. The method should be seen as an alternative for situations in which the need to control a large number of potential confounders limits the use of stratified analysis.
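A sketch of the delta-method interval for a crude prevalence ratio from a 2x2 table; the counts are invented, and the paper applies the same idea to adjusted logistic-regression estimates rather than this crude case:

```python
import math

def prevalence_ratio_ci(a, n1, c, n0, z=1.96):
    """Delta-method 95% CI for PR = (a/n1) / (c/n0).
    A first-order Taylor expansion of log(PR) in the two binomial
    proportions gives var(log PR) ~ 1/a - 1/n1 + 1/c - 1/n0."""
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1.0 / a - 1.0 / n1 + 1.0 / c - 1.0 / n0)
    return pr, pr * math.exp(-z * se), pr * math.exp(z * se)

# Hypothetical counts: 90/300 exposed vs 60/311 unexposed with high scores.
pr, lo, hi = prevalence_ratio_ci(90, 300, 60, 311)
print(f"PR = {pr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Building the interval on the log scale and exponentiating keeps the limits positive and asymmetric around the ratio, matching how ratio estimators actually behave.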

19.
Physicians commonly prescribe drugs in a multiple dosage regimen for prolonged therapeutic activity. To study the effect of multiple dosing on drug concentration in blood, researchers often use deterministic models with the assumption that drugs are administered at a fixed dosage, with equal or unequal (fixed) dosing intervals. In practice, many patients do not comply with such a rigid schedule. Hence, two possible scenarios might occur: patients might not take the prescribed dosing amount, resulting in erratic dosing sizes; they might not adhere to the dosing schedule, resulting in erratic dosing times. We propose separate statistical models for these two scenarios and study their impact on blood serum/plasma concentration. With non-compliance, some basic concepts such as steady state need new definition. We provide a rigorous formulation for the principle of superposition which enables us to generalize the concept of steady state. Applying the proposed models, we demonstrate that non-compliance causes the drug concentration time curve to exhibit an increase in fluctuation. The increase in fluctuation due to non-compliance cannot be explained with use of the classical deterministic multiple dose model.
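The superposition principle and the effect of erratic dosing times can be illustrated with a deterministic one-compartment IV-bolus sketch. The dose size, elimination rate, volume, and both schedules are assumptions chosen to make the fluctuation difference visible, not the paper's stochastic models:

```python
import math

def concentration(t, dose_times, dose=100.0, ke=0.1, v=50.0):
    """Superposition for repeated IV bolus doses in a one-compartment
    model: each past dose decays independently and contributions add."""
    return sum(dose / v * math.exp(-ke * (t - td))
               for td in dose_times if t >= td)

regular = [0.0, 12.0, 24.0, 36.0]   # prescribed q12h schedule
erratic = [0.0, 18.0, 20.0, 36.0]   # late second dose, then a catch-up dose

grid = [12.0 + 0.5 * i for i in range(73)]   # sample 12 h to 48 h

def swing(times):
    """Peak-trough fluctuation of the concentration curve over the grid."""
    cs = [concentration(t, times) for t in times and grid]
    return max(cs) - min(cs)

print(f"peak-trough swing: regular {swing(regular):.2f}, erratic {swing(erratic):.2f}")
```

Bunching the late and catch-up doses produces a higher peak and a deeper pre-dose trough than the regular schedule, the "increase in fluctuation" the abstract attributes to non-compliance.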

20.
Formulas for premorbid intelligence estimates are typically derived by linear regression and are therefore biased in individual cases because of regression to the mean. It is shown that it is inappropriate to compare such IQ estimates with current IQ scores to determine whether a decline from premorbid levels has occurred. This widespread practice grossly overestimates the probability of an IQ decline in the below-average range and grossly underestimates it in the above-average range, with serious implications for clinical practice. The authors present a formula for computing unbiased estimates of IQ decline as well as a test of the null hypothesis of no decline. Corresponding tables for several combinations of test indices and estimation methods are included for practical reference.

