Similar Documents
20 similar documents found.
1.
Existing methods for setting confidence intervals for the difference theta between binomial proportions based on paired data perform inadequately. The asymptotic method can produce limits outside the range of validity. The 'exact' conditional method can yield an interval which is effectively only one-sided. Both these methods also have poor coverage properties. Better methods are described, based on the profile likelihood obtained by conditionally maximizing the proportion of discordant pairs. A refinement (methods 5 and 6) which aligns 1-alpha with an aggregate of tail areas produces appropriate coverage properties. A computationally simpler method based on the score interval for the single proportion also performs well (method 10).

2.
An experiment to assess the efficacy of a particular treatment or process often produces dichotomous responses, either favourable or unfavourable. When we administer the treatment on two occasions to the same subjects, we often use McNemar's test to investigate the hypothesis of no difference in the proportions on the two occasions, that is, the hypothesis of marginal homogeneity. A disadvantage in using McNemar's statistic is that we estimate the variance of the sample difference under the restriction that the marginal proportions are equal. A competitor to McNemar's statistic is a Wald statistic that uses an unrestricted estimator of the variance. Because the Wald statistic tends to reject too often in small samples, we investigate an adjusted form that is useful for constructing confidence intervals. Quesenberry and Hurst and Goodman discussed methods of construction that we adapt for constructing confidence intervals for the differences in correlated proportions. We empirically compare the coverage probabilities and average interval lengths for the competing methods through simulation and give recommendations based on the simulation results.
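As a rough illustration of the quantities involved, here is a minimal Python sketch of the McNemar statistic and a Wald-type interval for the difference in correlated proportions. The 0.5-per-cell adjustment shown is one common small-sample correction and is an assumption here; it is not necessarily the adjusted form studied in the paper.

```python
import math

def mcnemar_statistic(b, c):
    """McNemar chi-square for marginal homogeneity; b and c are the
    two discordant-pair counts of the paired 2x2 table."""
    return (b - c) ** 2 / (b + c)

def paired_diff_wald_ci(a, b, c, d, z=1.96, adj=0.5):
    """Wald-type CI for the difference in correlated proportions.

    a, b, c, d are the paired 2x2 table counts (b, c discordant).
    `adj` adds a small constant to every cell; 0.5 is one common
    small-sample adjustment, assumed here for illustration.
    """
    a, b, c, d = (x + adj for x in (a, b, c, d))
    n = a + b + c + d
    delta = (b - c) / n                       # estimated difference
    var = ((b + c) - (b - c) ** 2 / n) / n ** 2  # unrestricted variance
    se = math.sqrt(var)
    return delta - z * se, delta + z * se
```

The unrestricted variance is what distinguishes the Wald approach from McNemar's restricted estimate.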

3.
Thirteen methods for computing binomial confidence intervals are compared based on their coverage properties, widths and errors relative to exact limits. The use of the standard textbook method, x/n ± 1.96√[(x/n)(1 − x/n)/n], or its continuity-corrected version, is strongly discouraged. A commonly cited rule of thumb stating that alternatives to exact methods may be used when the estimated proportion p is such that np and n(1 − p) both exceed 5 does not ensure adequate accuracy. Score limits are easily calculated from closed-form solutions to quadratic equations and can be used at all times. Based on coverage functions, the continuity-corrected score method is recommended over exact methods. Its conservative nature should be kept in mind, as should the wider fluctuation of actual coverage that accompanies omission of the continuity correction.
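The closed-form score (Wilson) limits mentioned above can be sketched in a few lines of Python. The continuity-corrected limits below follow Newcombe's commonly quoted closed-form expressions; this is an assumption, since the abstract does not reproduce the paper's exact formulas.

```python
import math

def score_interval(x, n, z=1.96, cc=False):
    """Wilson score interval for a binomial proportion, from the
    closed-form solution of the quadratic (p_hat - p)^2 = z^2 p(1-p)/n.

    cc=True applies a standard continuity correction.
    """
    p = x / n
    if not cc:
        center = (x + z * z / 2) / (n + z * z)
        half = z * math.sqrt(p * (1 - p) * n + z * z / 4) / (n + z * z)
        return center - half, center + half
    # continuity-corrected limits, pinned to 0 or 1 at the boundaries
    if x == 0:
        lo = 0.0
    else:
        lo = (2 * n * p + z * z - 1
              - z * math.sqrt(z * z - 2 - 1 / n + 4 * p * (n * (1 - p) + 1))
              ) / (2 * (n + z * z))
    if x == n:
        hi = 1.0
    else:
        hi = (2 * n * p + z * z + 1
              + z * math.sqrt(z * z + 2 - 1 / n + 4 * p * (n * (1 - p) - 1))
              ) / (2 * (n + z * z))
    return lo, hi
```

The corrected interval always contains the uncorrected one, reflecting the conservatism noted in the abstract.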

4.
Multistage models are frequently applied in carcinogenic risk assessment. In their simplest form, these models relate the probability of tumor presence to some measure of dose. These models are then used to project the excess risk of tumor occurrence at doses frequently well below the lowest experimental dose. Upper confidence limits on the excess risk associated with exposures at these doses are then determined. A likelihood-based method is commonly used to determine these limits. We compare this method to two computationally intensive "bootstrap" methods for determining the 95% upper confidence limit on extra risk. The coverage probabilities and bias of likelihood-based and bootstrap estimates are examined in a simulation study of carcinogenicity experiments. The coverage probabilities of the nonparametric bootstrap method fell below 95% more frequently and by wider margins than those of the better-performing parametric bootstrap and likelihood-based methods. The relative bias of all estimators is affected by the amount of curvature in the true underlying dose-response function. In general, the likelihood-based method has the best coverage probability properties, while the parametric bootstrap is less biased and less variable than the likelihood-based method. Ultimately, neither method is entirely satisfactory for highly curved dose-response patterns.

5.
A study was carried out to determine whether recombinant human erythropoietin can induce newborn-like hemoglobin synthesis in adult rats. A fixed dose of recombinant erythropoietin was administered intravenously to each rat over a total of 5 weeks. Blood samples drawn at 7-day intervals were analyzed by DEAE-cellulose chromatography. Hematological parameters such as red blood cell counts, hematocrit values and reticulocyte counts were evaluated and compared. A significant change in the pattern of certain hemoglobin components was measured in red cells of erythropoietin-treated rats relative to baseline values. However, aspirin (a prostaglandin synthesis inhibitor) taken along with recombinant erythropoietin administration totally abolished the reversion of hemoglobin proportions toward newborn values, but not the increase in hemoglobin synthesis. These data reveal that concurrent prostaglandin synthesis is needed for reversing hemoglobin proportions in adult rats, but not for hemoglobin synthesis per se.

6.
We evaluated the effect of the image acquisition parameters on the accuracy of the principal axes and surface-fitting techniques for three-dimensional image registration. Using two types of phantom objects, an MR brain image and a mathematically defined ellipsoid, we simulated pairs of scans with known acquisition parameters, including longitudinal coverage, magnitude of mis-registration, number of sections and section thickness. Both methods are sensitive to systematic deformation of contours. The principal axes method is also sensitive to incomplete scan coverage and to misangulation about the x- and y-axes. Both methods are insensitive to the number of sections, section thickness and the number of points per section. Surface fitting performed well without user supervision. There is no need for routine inclusion of the scaling factors as search parameters. The results confirm the feasibility of three-dimensional multimodality registration of brain scans with an accuracy of 1-2 mm, with surface fitting being the method of choice.

7.
A method is proposed as an alternative to existing methods of determining the surface tension of solids; it is based on a generalization of the Rittinger law and the Gibbs thermodynamic equations, as well as on a diffraction analysis of the granulometric characteristics of finely dispersed powders of templates (Fe, Co) and originals (W, AlNi). The surface tension coefficients of tungsten and AlNi are evaluated, and confidence intervals for their values are determined.

8.
We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts. We examine how individual predictions of future CD4 counts are affected by the covariance structure. We consider four covariance structures: one based on an integrated Ornstein-Uhlenbeck stochastic process, one based on Brownian motion, and two derived from standard linear and quadratic random-effects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study, we show that there is a noticeable deterioration in the coverage rate of confidence intervals if we assume the wrong covariance. There is also a loss in efficiency. The quadratic random-effects model is found to be the best in terms of correctly calibrated prediction intervals but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives prediction intervals that are too narrow, with poor coverage rates. The model based on the integrated Ornstein-Uhlenbeck stochastic process is the preferred one of the four considered because of its efficiency and robustness properties. We also use the difference between the future predicted and observed CD4 counts to assess an appropriate transformation of CD4 counts; a fourth root, cube root and square root all appear reasonable choices.
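For reference, the integrated Ornstein-Uhlenbeck covariance used in such comparisons has a simple closed form. The Python sketch below assumes the standard parameterization (variance parameter sigma² and mean-reversion rate alpha), which may differ from the paper's notation.

```python
import math

def iou_cov(s, t, alpha, sigma2):
    """Covariance of an integrated Ornstein-Uhlenbeck process W(t):

        Cov(W(s), W(t)) = sigma^2 / (2 alpha^3) *
            (2 alpha min(s, t) + e^{-alpha s} + e^{-alpha t}
             - 1 - e^{-alpha |t - s|})

    As alpha grows with sigma^2 / alpha^2 held fixed, this approaches
    the Brownian-motion covariance min(s, t), one of the other
    structures compared in the study."""
    m = min(s, t)
    return sigma2 / (2 * alpha ** 3) * (
        2 * alpha * m + math.exp(-alpha * s) + math.exp(-alpha * t)
        - 1 - math.exp(-alpha * abs(t - s)))
```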

9.
Statistical methods to map quantitative trait loci (QTL) in outbred populations are reviewed, extensions and applications to human and plant genetic data are indicated, and areas for further research are identified. Simple and computationally inexpensive methods include (multiple) linear regression of phenotype on marker genotypes and regression of squared phenotypic differences among relative pairs on estimated proportions of identity-by-descent at a locus. These methods are less suited for genetic parameter estimation in outbred populations but allow the determination of test statistic distributions via simulation or data permutation; however, further inferences including confidence intervals of QTL location require the use of Monte Carlo or bootstrap sampling techniques. A method which is intermediate in computational requirements is residual maximum likelihood (REML) with a covariance matrix of random QTL effects conditional on information from multiple linked markers. Testing for the number of QTLs on a chromosome is difficult in a classical framework. The computationally most demanding methods are maximum likelihood and Bayesian analysis, which take account of the distribution of multilocus marker-QTL genotypes on a pedigree and permit investigators to fit different models of variation at the QTL. The Bayesian analysis includes the number of QTLs on a chromosome as an unknown.
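The "regression of squared phenotypic differences among relative pairs on estimated identity-by-descent proportions" is the Haseman-Elston idea; a minimal sketch with made-up data follows. The pair format and names here are illustrative assumptions, not the review's notation.

```python
def haseman_elston(pairs):
    """Least-squares regression of squared sib-pair trait differences on
    the estimated proportion of alleles shared identical-by-descent (IBD).

    `pairs` is a list of (ibd_proportion, trait1, trait2) tuples.
    Returns (intercept, slope); a negative slope suggests linkage of a
    QTL to the marker."""
    xs = [p for p, _, _ in pairs]
    ys = [(y1 - y2) ** 2 for _, y1, y2 in pairs]
    n = len(pairs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope
```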

10.
RATIONALE AND OBJECTIVES: The authors performed this study to address two practical questions. First, how large does the sample size need to be for confidence intervals (CIs) based on the usual asymptotic methods to be appropriate? Second, when the sample size is smaller than this threshold, what alternative method of CI construction should be used? MATERIALS AND METHODS: The authors performed a Monte Carlo simulation study where 95% CIs were constructed for the receiver operating characteristic (ROC) area and for the difference between two ROC areas, for rating and continuous test results and for ROC areas of moderate and high accuracy, by using both parametric and nonparametric estimation methods. Alternative methods evaluated included several bootstrap CIs and CIs based on the Student t distribution. RESULTS: For the difference between two ROC areas, CIs based on the asymptotic theory provided adequate coverage even when the sample size was very small (20 patients). In contrast, for a single ROC area, the asymptotic methods do not provide adequate CI coverage for small samples; for ROC areas of high accuracy, the sample size must be large (more than 200 patients) for the asymptotic methods to be applicable. The recommended alternative (bootstrap percentile, bootstrap t, or bootstrap bias-corrected accelerated method) depends on the estimation approach, format of the test results, and ROC area. CONCLUSION: Currently, there is not a single best alternative for constructing CIs for a single ROC area for small samples.
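One of the alternatives named above, the bootstrap percentile interval for a single ROC area, can be sketched with the nonparametric (Mann-Whitney) estimator. The data and the resampling scheme (cases and controls resampled separately) are illustrative assumptions, not the study's exact design.

```python
import random

def auc(pos, neg):
    """Mann-Whitney estimate of the ROC area:
    P(score_pos > score_neg) + 0.5 * P(tie)."""
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_percentile_ci(pos, neg, level=0.95, reps=2000, seed=0):
    """Percentile bootstrap CI for the ROC area, resampling diseased and
    non-diseased scores separately."""
    rng = random.Random(seed)
    stats = sorted(
        auc([rng.choice(pos) for _ in pos],
            [rng.choice(neg) for _ in neg])
        for _ in range(reps))
    alpha = (1 - level) / 2
    return stats[int(alpha * reps)], stats[int((1 - alpha) * reps) - 1]
```

With real rating data one would typically use far more than the handful of observations shown in the test below.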

11.
In survival analysis, estimates of median survival times in homogeneous samples are often based on the Kaplan-Meier estimator of the survivor function. Confidence intervals for quantiles, such as median survival, are typically constructed via large sample theory or the bootstrap. The former has suspect accuracy for small sample sizes under moderate censoring and the latter is computationally intensive. In this paper, improvements on so-called test-based intervals and reflected intervals (cf., Slud, Byar, and Green, 1984, Biometrics 40, 587-600) are sought. Using the Edgeworth expansion for the distribution of the studentized Nelson-Aalen estimator derived in Strawderman and Wells (1997, Journal of the American Statistical Association 92), we propose a method for producing more accurate confidence intervals for quantiles with randomly censored data. The intervals are very simple to compute, and numerical results using simulated data show that our new test-based interval outperforms commonly used methods for computing confidence intervals for small sample sizes and/or heavy censoring, especially with regard to maintaining specified coverage.
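A minimal Kaplan-Meier sketch shows how the median estimate that these intervals target arises. The usual convention that censorings at a tied time occur after the deaths at that time is assumed.

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Kaplan-Meier survivor-function estimate.

    times:  observed times; events: 1 = death, 0 = censored.
    Returns a list of (time, S(t)) at each distinct death time."""
    data = sorted(zip(times, events))
    n = len(data)          # number at risk
    s = 1.0
    curve = []
    for t, grp in groupby(data, key=lambda te: te[0]):
        grp = list(grp)
        d = sum(e for _, e in grp)   # deaths at time t
        if d:
            s *= 1 - d / n
            curve.append((t, s))
        n -= len(grp)                # deaths and censorings leave risk set
    return curve

def km_median(curve):
    """Smallest death time at which S(t) falls to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached
```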

12.
Cost-effectiveness ratios usually appear as point estimates without confidence intervals, since the numerator and denominator are both stochastic and one cannot estimate the variance of the estimator exactly. The recent literature, however, stresses the importance of presenting confidence intervals for cost-effectiveness ratios in the analysis of health care programmes. This paper compares the use of several methods to obtain confidence intervals for the cost-effectiveness of a randomized intervention to increase the use of Medicaid's Early and Periodic Screening, Diagnosis and Treatment (EPSDT) programme. Comparisons of the intervals show that methods that account for skewness in the distribution of the ratio estimator may be substantially preferable in practice to methods that assume the cost-effectiveness ratio estimator is normally distributed. We show that non-parametric bootstrap methods that are mathematically less complex but computationally more rigorous result in confidence intervals that are similar to the intervals from a parametric method that adjusts for skewness in the distribution of the ratio. The analyses also show that the modest sample sizes needed to detect statistically significant effects in a randomized trial may result in confidence intervals for estimates of cost-effectiveness that are much wider than the boundaries obtained from deterministic sensitivity analyses.
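A nonparametric bootstrap for an incremental cost-effectiveness ratio, of the kind compared in the paper, can be sketched as follows. The per-patient (cost, effect) pair format and the percentile form of the interval are illustrative assumptions; the data are made up.

```python
import random

def ce_ratio_ci(trt, ctl, reps=2000, level=0.95, seed=1):
    """Percentile-bootstrap CI for the incremental cost-effectiveness
    ratio (difference in mean cost) / (difference in mean effect).

    trt, ctl: lists of per-patient (cost, effect) pairs; patients are
    resampled whole so the cost-effect pairing is preserved."""
    rng = random.Random(seed)

    def means(sample):
        c = sum(x for x, _ in sample) / len(sample)
        e = sum(y for _, y in sample) / len(sample)
        return c, e

    ratios = []
    for _ in range(reps):
        bt = [rng.choice(trt) for _ in trt]
        bc = [rng.choice(ctl) for _ in ctl]
        (c1, e1), (c0, e0) = means(bt), means(bc)
        ratios.append((c1 - c0) / (e1 - e0))
    ratios.sort()
    a = (1 - level) / 2
    return ratios[int(a * reps)], ratios[int((1 - a) * reps) - 1]
```

Note that the percentile interval can behave badly when the effect difference is close to zero, which is one reason skewness-adjusted methods are of interest.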

13.
Heterophase salt fluxes are mixtures of liquid salts and solid phases. The liquid phase provides full surface coverage of the metal, protecting it from aggressive components of the gas phase. Solid phases retard reagent delivery and product withdrawal through the flux layer, which reduces salt consumption and improves the protective properties of the flux.

Technological features of heterophase fluxes depend greatly on their structure. There are several approaches to structure formation:

- mechanically mixing salt mixtures and solid phases of definite coarseness in certain proportions, the salt mixtures being inert both to the solid phases and to the metal being protected;

- making a mechanical heterophase flux as a porous plate of a material inert to metals and soaked with a liquid salt;

- introducing into the flux salts that are easily hydrolyzed and vaporized and that readily interact either with the gas atmosphere or with the protected metal.

The composition and structure obtained by the first two methods remain stable over time, while in the last method they change with time and can be formed according to a program.

A heterophase flux production technology based on the use of waste electrolyte from magnesium electrolyzers has been designed. The flux has been tested; flux consumption at high temperature has been reduced tenfold and a considerable amount of metallic zinc has been saved, both factors contributing to environmental protection.

14.
Several methods can be used to bring iron-based powder-metallurgy parts to higher density. Double pressing/double sintering can achieve densities above 7.3 g/cm³, but it is constrained by cost and part geometry. A new method was evaluated with which high-performance material can be obtained in a single compaction, and it was compared with other production processes. The comparison was carried out with Ancorsteel 85HP and Distaloy 4800A base materials. Various green and sintered properties were evaluated, including green strength, transverse rupture strength, tensile properties and impact values. The data clearly demonstrate that the patented [1] ANCORDENSE™ process (warm compaction) provides service properties comparable to those obtained by double pressing/double sintering. A single compaction achieved green densities of 98.5% of the pore-free density limit.

15.
A shared parameter model with logistic link is presented for longitudinal binary response data to accommodate informative drop-out. The model consists of observed longitudinal and missing response components that share random effects parameters. To our knowledge, this is the first presentation of such a model for longitudinal binary response data. Comparisons are made to an approximate conditional logit model in terms of a clinical trial dataset and simulations. The naive mixed effects logit model that does not account for informative drop-out is also compared. The simulation-based differences among the models with respect to coverage of confidence intervals, bias, and mean squared error (MSE) depend on at least two factors: whether an effect is a between- or within-subject effect and the amount of between-subject variation as exhibited by variance components of the random effects distributions. When the shared parameter model holds, the approximate conditional model provides confidence intervals with good coverage for within-cluster factors but not for between-cluster factors. The converse is true for the naive model. Under a different drop-out mechanism, when the probability of drop-out is dependent only on the current unobserved observation, all three models behave similarly by providing between-subject confidence intervals with good coverage and comparable MSE and bias but poor within-subject confidence intervals, MSE, and bias. The naive model does more poorly with respect to the within-subject effects than do the shared parameter and approximate conditional models. The data analysis, which entails a comparison of two pain relievers and a placebo with respect to pain relief, conforms to the simulation results based on the shared parameter model but not on the simulation based on the outcome-driven drop-out process. This comparison between the data analysis and simulation results may provide evidence that the shared parameter model holds for the pain data.  

16.
OBJECTIVE: To present an application of logistic regression modelling to estimate ratios of proportions, such as prevalence ratio or relative risk, and the Delta Method to estimate confidence intervals. METHOD: The Delta Method was used because it is appropriate for the estimation of variance of non-linear functions of random variables. The method is based on Taylor's series expansion and provides a good approximation of variance estimates. A computer program, utilizing the matrix module of SAS, was developed to compute the variance estimates. A practical demonstration is presented with data from a cross-sectional study carried out on a sample of 611 women, to test the hypothesis that the lack of housework sharing is associated with high scores of psychological symptoms as measured by a validated questionnaire. RESULTS: Crude and adjusted prevalence ratio estimated by logistic regression were similar to those estimated by tabular analysis. Also, ranges of the confidence intervals of the prevalence ratio according to the Delta Method were nearly equal to those obtained by the Mantel-Haenszel approach. CONCLUSIONS: The results give support to the use of the Delta Method for the estimation of confidence intervals for ratios of proportions. The method should be seen as an alternative for situations in which the need to control a large number of potential confounders limits the use of stratified analysis.
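For an unadjusted 2x2 table the delta-method interval reduces to a well-known closed form on the log scale. The sketch below shows only that special case; the paper's SAS program handles the general regression-adjusted setting.

```python
import math

def prevalence_ratio_ci(a, n1, c, n0, z=1.96):
    """Delta-method (Taylor-series) CI for a ratio of proportions.

    a of n1 exposed and c of n0 unexposed have the outcome.  A
    first-order Taylor expansion gives
        Var(log PR) = 1/a - 1/n1 + 1/c - 1/n0,
    and the interval is exponentiated back to the ratio scale."""
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    return pr, pr * math.exp(-z * se), pr * math.exp(z * se)
```

Working on the log scale keeps the interval within the valid (positive) range for the ratio, which is why the delta method is applied to log(PR) rather than to PR directly.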

17.
Immunoassay techniques yield estimates of concentrations of analytes based on comparison to known concentrations of a reference solution. The use of the nonlinear logistic model makes the error estimates and confidence levels approximate. When the goal of such a study is estimation of several unknowns, methods in common usage do not account for 'simultaneous' inference, i.e. the repeated use of the standard curve for estimating several concentrations. Alternative methods are described which take multiple use of the reference curve into account. Simulations using normally distributed data with variance proportional to a power of the mean compare different methods of obtaining calibration intervals and illustrate the approximate nature of all such techniques. Calibration intervals based on simple, commonly used methods do not provide the coverage promised, even for one-at-a-time estimation, and are not suited for multiple estimation and comparison.

18.
Murine acute myeloid leukemia is characterized by chromosome 2 aberrations, and genesis of the marker chromosome 2 by radiation is suspected to be an initiating event of radiation leukemogenesis. A detailed analysis of the type and frequency of chromosome 2 aberrations in murine bone marrow cells at an early stage after irradiation is provided here. A total of 40 male C3H/He mice were exposed to 137Cs gamma rays at a dose of 1, 2 or 3 Gy and sacrificed 24 hours after irradiation. Metaphase samples prepared from bone marrow cells were Q-banded for karyotyping or painted with DNA probes specific to chromosome 2. Of the 5 mice analyzed by karyotyping, one showed a high frequency of the marker aberrations as well as other chromosome 2 aberrations. Chromosome painting analysis of the remaining mice also detected 3 animals showing significantly high frequencies of chromosome 2 aberrations. Dose dependence of the frequencies was observed even among those mice that tended to be sensitive. The results indicate that there was a subgroup of mice carrying a hypersensitive chromosome 2. This subgroup could be leukemia-sensitive if radiation-induced chromosome aberrations are responsible for an early change in myeloid leukemogenesis.

19.
Gu Kai, Wu Shengli, Kou Mingyin. Iron and Steel (《钢铁》), 2019, 54(2): 20-25
Coke is one of the important fuels of the blast furnace. To study the influence of coke thermal properties in different ranges on blast-furnace productivity and quality indices, actual blast-furnace production data were first compiled and the coke thermal properties were divided into intervals; the data in each interval were analyzed statistically with SPSS software, and linear or nonlinear fitting yielded the influence patterns and suitable values of reactivity and post-reaction strength. Given the strong negative correlation between reactivity and post-reaction strength, both factors were considered together to obtain a suitable comprehensive thermal performance of coke, thereby guiding blast-furnace and coking production and providing a new approach to studying the thermal properties of coke.

20.
Sperm chromosomes from 15 fertile men were analyzed after fusion of their spermatozoa with zona-free hamster eggs. The total proportion of abnormal metaphases as well as the proportions of aneuploidy and structural aberrations were calculated for every man and examined for linear correlations with [1] sperm morphology and [2] the age of the persons studied. A positive correlation between the cytogenetic parameters and the percentage of abnormal sperm morphology was not evident, suggesting that assessment of sperm morphology cannot be used as an indicator of chromosomal damage in human spermatozoa. In contrast, there was a more distinct positive correlation between the age of donors and the three cytogenetic parameters studied.
