Similar Literature
20 similar documents found.
1.
OBJECTIVES: This paper describes 2 statistical methods designed to correct for bias from exposure measurement error in point and interval estimates of relative risk. METHODS: The first method takes the usual point and interval estimates of the log relative risk obtained from logistic regression and corrects them for nondifferential measurement error using an exposure measurement error model estimated from validation data. The second, likelihood-based method fits an arbitrary measurement error model suitable for the data at hand and then derives the model for the outcome of interest. RESULTS: Data from Valanis and colleagues' study of the health effects of antineoplastics exposure among hospital pharmacists were used to estimate the prevalence ratio of fever in the previous 3 months from this exposure. For an interdecile increase in weekly number of drugs mixed, the prevalence ratio, adjusted for confounding, changed from 1.06 to 1.17 (95% confidence interval [CI] = 1.04, 1.26) after correction for exposure measurement error. CONCLUSIONS: Exposure measurement error is often an important source of bias in public health research. Methods are available to correct such biases.
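Below is a minimal sketch of the first, regression-calibration-style correction described in this abstract: the naive log relative risk from a logistic model is divided by an attenuation factor estimated from validation data. The function and variable names are hypothetical, and the interval ignores uncertainty in the attenuation factor; this illustrates the general technique, not the authors' exact procedure.

```python
import numpy as np

def corrected_log_rr(beta_naive, se_naive, x_true, x_measured):
    """Regression-calibration-style correction of a naive log relative risk.

    beta_naive, se_naive : estimate and standard error from a logistic model
                           fit with the error-prone exposure.
    x_true, x_measured   : paired exposures from a validation substudy.
    Returns the corrected relative risk and an approximate 95% CI
    (the CI ignores uncertainty in the attenuation factor).
    """
    # Attenuation factor: slope of the regression of true exposure on its
    # error-prone surrogate, estimated from the validation data.
    lam = np.cov(x_true, x_measured)[0, 1] / np.var(x_measured, ddof=1)
    beta_corr = beta_naive / lam
    se_corr = se_naive / lam  # first-order approximation
    lo, hi = np.exp(beta_corr - 1.96 * se_corr), np.exp(beta_corr + 1.96 * se_corr)
    return np.exp(beta_corr), (lo, hi)
```

Under classical nondifferential error the attenuation factor is below one, so the corrected relative risk moves away from the null, in the direction of the change reported in the abstract (1.06 to 1.17).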

2.
The issue of the relative merit of biomarkers and alternative measures of exposure arises most commonly in the context of epidemiological studies aimed at hazard detection and quantification. When exposures are from biological agents, biomarkers are usually the first and often the only justifiable choice. In general, however, the relative merit of different types of exposure measurements needs to be evaluated on a case-by-case basis. Biomarkers may be affected by random errors, time-related sampling errors, physiological confounding and disease-induced differential error, all of which need to be explicitly evaluated before embarking on the use of a biomarker in a full-scale epidemiological study. Random errors affecting biomarkers may be reduced by replication or combination of measurements, or both. Alternative measurements of exposure can be evaluated against a biomarker when there is adequate evidence for regarding the marker as the true measure of a biologically relevant exposure.

3.
A model of daily-average inhalation exposures and total-absorbed doses of benzene to members of large populations was developed as part of a series of multimedia exposure and absorbed dose models. The benzene exposure and dose model is based upon probabilistic rather than sequential simulation of time-activity patterns, a simpler approach to modeling personal benzene exposures than other existing models. An important innovation of the benzene model is the incorporation of an anthropometric module for generating correlated exposure factors used to estimate absorbed doses occurring from inhalation, ingestion, and dermal absorption of benzene. A preliminary validation exercise indicates that the benzene model produces reasonable estimates of the distribution of benzene personal air concentrations expected for a large population. Uncertainty about specific percentiles of the predicted distributions of personal air concentrations was found to be dominated by uncertainty about microenvironmental benzene concentrations rather than time-activity patterns, and uncertainty about total absorbed doses was dominated by a lack of knowledge about the true absorption coefficient for benzene in the lung rather than knowledge gaps about microenvironmental concentrations or intake rates. The results of this modeling effort have implications for environmental control decisions, including evaluation of source control options, characterization of population and individual risk, and allocation of resources for future studies.
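The abstract describes a probabilistic (Monte Carlo) treatment of time-activity patterns and microenvironmental concentrations. The sketch below illustrates that general idea only: daily-average exposure is a time-weighted sum of sampled microenvironmental concentrations. The microenvironments, distributions, and parameter values are illustrative assumptions, not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people = 100_000

# Illustrative microenvironments with lognormal benzene concentrations (ug/m3);
# all parameter values are assumed for the sketch.
conc = {
    "home":    rng.lognormal(mean=np.log(3.0), sigma=0.6, size=n_people),
    "outdoor": rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n_people),
    "vehicle": rng.lognormal(mean=np.log(15.0), sigma=0.7, size=n_people),
}
# Fractions of the day spent in each microenvironment (~16, 6, 2 hours on average).
time_frac = rng.dirichlet([16.0, 6.0, 2.0], size=n_people)

# Daily-average inhalation exposure concentration for each simulated person.
exposure = (time_frac * np.column_stack(list(conc.values()))).sum(axis=1)
print(np.percentile(exposure, [50, 90, 99]))  # population distribution summary
```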

4.
We explore the effects of measurement error in a time-varying covariate for a mixed model applied to a longitudinal study of plasma levels and dietary intake of beta-carotene. We derive a simple expression for the bias of large sample estimates of the variance of random effects in a longitudinal model for plasma levels when dietary intake is treated as a time-varying covariate subject to measurement error. In general, estimates for these variances made without consideration of measurement error are biased positively, unlike estimates for the slope coefficients, which tend to be 'attenuated'. If we can assume that the residuals from a longitudinal fit for the time-varying covariate behave like measurement errors, we can estimate the original parameters without the need for additional validation or reliability studies. We propose a method to test this assumption and show that the assumption is reasonable for the example data. We then use a likelihood-based method of estimation that involves a simple extension of existing methods for fitting mixed models. Simulations illustrate the properties of the estimators.
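As a rough illustration of the two biases discussed here, the simulation below fits a random-intercept model to data with a noisy time-varying covariate: the fixed slope is attenuated and the estimated random-intercept variance is typically inflated. It assumes statsmodels is available; the data-generating values are made up, and this is not the beta-carotene analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_obs = 200, 4
subj = np.repeat(np.arange(n_subj), n_obs)

# True time-varying dietary intake with a between-subject component.
diet_true = rng.normal(0, 1, n_subj)[subj] + rng.normal(0, 0.5, n_subj * n_obs)
b0 = rng.normal(0, 1, n_subj)[subj]                    # random intercepts (var 1.0)
plasma = 1.0 + 0.8 * diet_true + b0 + rng.normal(0, 0.5, n_subj * n_obs)

# Observed intake = true intake + measurement error.
diet_obs = diet_true + rng.normal(0, 1.0, n_subj * n_obs)

df = pd.DataFrame({"plasma": plasma, "diet": diet_obs, "subj": subj})
fit = smf.mixedlm("plasma ~ diet", df, groups=df["subj"]).fit()
print(fit.params["diet"])  # attenuated relative to the true slope of 0.8
print(fit.cov_re)          # random-intercept variance, typically above the true 1.0
```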

5.
Residual error models, traditionally used in population pharmacokinetic analyses, have been developed as if all sources of error have properties similar to those of assay error. Since assay error often is only a minor part of the difference between predicted and observed concentrations, other sources, with potentially other properties, should be considered. We have simulated three complex error structures. The first model acknowledges two separate sources of residual error, replication error plus pure residual (assay) error. Simulation results for this case suggest that ignoring these separate sources of error does not adversely affect parameter estimates. The second model allows serially correlated errors, as may occur with structural model misspecification. Ignoring this error structure leads to biased random-effect parameter estimates. A simple autocorrelation model, where the correlation between two errors is assumed to decrease exponentially with the time between them, provides more accurate estimates of the variability parameters in this case. The third model allows time-dependent error magnitude. This may be caused, for example, by inaccurate sample timing. A time-constant error model fit to time-varying error data can lead to bias in all population parameter estimates. A simple two-step time-dependent error model is sufficient to improve parameter estimates, even when the true time dependence is more complex. Using a real data set, we also illustrate the use of the different error models to facilitate the model building process, to provide information about error sources, and to provide more accurate parameter estimates.
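A short sketch of the serially correlated error structure mentioned here, assuming the simple exponential autocorrelation form in which the correlation between two residuals decays with the time between them. The code only builds the implied residual covariance matrix for one subject's sampling times; the parameterization (a correlation half-life) is an assumption for illustration, not tied to any particular population PK software.

```python
import numpy as np

def autocorr_residual_cov(times, sigma, t_half):
    """Residual covariance under an exponential autocorrelation model:
    corr(eps_i, eps_j) = exp(-ln(2) * |t_i - t_j| / t_half),
    i.e. the correlation halves every `t_half` time units (assumed form).
    """
    times = np.asarray(times, dtype=float)
    dt = np.abs(times[:, None] - times[None, :])
    corr = np.exp(-np.log(2.0) * dt / t_half)
    return sigma**2 * corr

# Example: residual covariance for one subject sampled at 0.5, 1, 2, 4, and 8 h.
print(autocorr_residual_cov([0.5, 1, 2, 4, 8], sigma=0.2, t_half=1.5))
```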

6.
Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
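The first simulation problem described (sampling error in the unit-level exposure estimate) can be illustrated with a toy ecologic regression: each county's mean radon is estimated from a handful of homes, and the estimated slope is attenuated relative to the true one. All distributions and parameter values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_counties, n_homes = 200, 10        # only a few homes measured per county
true_slope = 0.5
sigma_w = 0.8                        # assumed within-county lognormal spread

county_mean = rng.lognormal(np.log(50), 0.4, n_counties)       # true county mean radon
rate = 100 + true_slope * county_mean + rng.normal(0, 5, n_counties)

# Each county's exposure is estimated from a small sample of homes
# (lognormal homes whose arithmetic mean equals the county mean).
sampled = rng.lognormal(np.log(county_mean)[:, None] - sigma_w**2 / 2, sigma_w,
                        (n_counties, n_homes))
est_mean = sampled.mean(axis=1)

print(np.polyfit(county_mean, rate, 1)[0])  # ~0.5 with the true county means
print(np.polyfit(est_mean, rate, 1)[0])     # smaller: attenuation from sampling error
```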

7.
The exposure of an individual to an air pollutant can be assessed indirectly, with a "microenvironmental" approach, or directly with a personal sampler. Both methods of assessment are subject to measurement error, which can cause considerable bias in estimates of health effects. If the exposure estimates are unbiased and the measurement error is nondifferential, the bias in a linear model can be corrected when the variance of the measurement error is known. Unless the measurement error is quite large, estimates of health effects based on individual exposures appear to be more accurate than those based on ambient levels.

8.
Biological dosimeters are useful for epidemiologic risk assessment in populations exposed to catastrophic nuclear events and as a means of validating physical dosimetry in radiation workers. Application requires knowledge of the magnitude of uncertainty in the biological dose estimates and an understanding of potential statistical pitfalls arising from their use. This paper describes the statistical aspects of biological dosimetry in general and presents a detailed analysis in the specific case of dosimetry for risk assessment using stable chromosome aberration frequency. Biological dose estimates may be obtained from a dose-response curve, but negative estimates can result and adjustment must be made for regression bias due to imprecise estimation when the estimates are used in regression analyses. Posterior-mean estimates, derived as the mean of the distribution of true doses compatible with a given value of the biological endpoint, have several desirable properties: they are nonnegative, less sensitive to extreme skewness in the true dose distribution, and implicitly adjusted to avoid regression bias. The methods necessitate approximating the true-dose distribution in the population in which biological dosimetry is being applied, which calls for careful consideration of this distribution through other information. An important question addressed here is to what extent the methods are robust to misspecification of this distribution, because in many applications of biological dosimetry it cannot be characterized well. The findings suggest that dosimetry based solely on stable chromosome aberration frequency may be useful for population-based risk assessment.
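A schematic sketch of the posterior-mean dose idea: given an assumed true-dose distribution in the population and a Poisson-type linear-quadratic dose response for stable chromosome aberration counts, the posterior mean of dose given an observed count is computed on a grid and is never negative. The prior, the yield coefficients, and the counts are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import poisson

def posterior_mean_dose(y, cells, dose_grid, prior_pdf, alpha, beta, bg):
    """Posterior-mean dose estimate from an observed aberration count.

    y, cells  : observed aberrations and number of cells scored
    dose_grid : grid of candidate true doses (Gy)
    prior_pdf : assumed prior density of true dose evaluated on dose_grid
    bg, alpha, beta : background and linear-quadratic yield per cell (assumed)
    """
    mu = cells * (bg + alpha * dose_grid + beta * dose_grid**2)
    post = poisson.pmf(y, mu) * prior_pdf
    post /= np.trapz(post, dose_grid)
    return np.trapz(dose_grid * post, dose_grid)   # nonnegative by construction

dose_grid = np.linspace(0, 5, 501)
prior = np.exp(-dose_grid / 0.5) / 0.5   # assumed exponential prior, mean 0.5 Gy
print(posterior_mean_dose(y=12, cells=500, dose_grid=dose_grid,
                          prior_pdf=prior, alpha=0.04, beta=0.06, bg=0.001))
```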

9.
One of the possible uses of biomarkers in epidemiological research is as early-outcome measures to predict the occurrence of clinical disease and to elucidate the biological mechanism of pathogenesis. This use is conceptually less straightforward than the well established use of biomarkers to improve or extend exposure assessment or to study interindividual variations in disease susceptibility. In principle, this form of use could accelerate or otherwise facilitate etiological research. However, in practice, the recent review literature suggests that this mode of biomarker use, especially in cancer epidemiology, is the least clear-cut and the least well developed. The recurrent problem is identifying biomarkers that: (1) are on the causal pathway, (2) have a high probability of progression to clinical disease, and (3) account for all or most of the cases of the specified clinical outcome. Such biomarkers would be most useful if they conferred a long lead-time relative to clinical disease occurrence.

10.
We combine two major approaches currently used in human air pollution exposure assessment, the direct approach and the indirect approach. The direct approach measures exposures directly using personal monitoring. Despite its simplicity, this approach is costly and is also vulnerable to sample selection bias because it usually imposes a substantial burden on the respondents, making it difficult to recruit a representative sample of respondents. The indirect approach predicts exposures using the activity pattern model to combine activity pattern data with microenvironmental concentration data. This approach is lower in cost and imposes less respondent burden, and is thus less vulnerable to sample selection bias. However, it is vulnerable to systematic measurement error in the predicted exposures because the microenvironmental concentration data might need to be "grafted" from other data sources. The combined approach combines the two approaches to remedy the problems in each. A dual sample provides both the direct measurements of exposures based on personal monitoring and the indirect estimates based on the activity pattern model. An indirect-only sample provides additional indirect estimates. The dual sample is used to calibrate the indirect estimates to correct the systematic measurement error. If both the dual sample and the indirect-only sample are representative, the indirect estimates from the indirect-only sample are used to improve the precision of the overall estimates. If the dual sample is vulnerable to sample selection bias, the indirect-only sample is used to correct the sample selection bias. We discuss the allocation of resources between the two subsamples and provide algorithms that can be used to determine the optimal sample allocation. The theory is illustrated with applications to the empirical data obtained from the Washington, DC, Carbon Monoxide (CO) Study.
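A minimal sketch of the calibration step described here, under the assumption that the dual sample provides paired personal-monitor (direct) and activity-pattern (indirect) exposure estimates and that a simple linear calibration is adequate. The function and the simulated CO-like values are illustrative; the study's actual estimators and sample-allocation algorithms are not reproduced.

```python
import numpy as np

def calibrate_indirect(direct_dual, indirect_dual, indirect_only):
    """Correct systematic measurement error in indirect exposure estimates.

    direct_dual, indirect_dual : paired measurements from the dual sample
    indirect_only              : indirect estimates lacking personal monitoring
    Returns calibrated exposures for the indirect-only sample.
    """
    # Fit direct = a + b * indirect on the dual sample (simple linear calibration).
    b, a = np.polyfit(indirect_dual, direct_dual, 1)
    return a + b * indirect_only

# Illustrative use with simulated CO-like exposures (ppm); all values are made up.
rng = np.random.default_rng(3)
true = rng.lognormal(1.0, 0.5, 300)
direct = true * rng.lognormal(0, 0.1, 300)          # personal monitor, small error
indirect = 0.7 * true * rng.lognormal(0, 0.3, 300)  # systematically biased estimate
print(calibrate_indirect(direct[:100], indirect[:100], indirect[100:]).mean())
```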

11.
Hydropic vacuolation (HydVac) of biliary epithelial cells and hepatocytes is described for 3 species of U.S. West Coast bottom fishes. Risk assessment analyses are also conducted to determine if the prevalence of this lesion increases in association with contaminant exposure and site of capture. The morphology of HydVac in starry flounder Platichthys stellatus, white croaker Genyonemus lineatus and rock sole Lepidopsetta bilineata was similar to that described in winter flounder Pleuronectes americanus from the U.S. Atlantic Coast, especially in that HydVac most commonly affected biliary epithelial cells. Hydropic vacuolation was the most prevalent liver lesion in starry flounder and white croaker captured from contaminated environments. Risk assessment analyses confirmed that the relative risk for HydVac increased with the presence of aromatic and chlorinated hydrocarbons in sediment, fish bile, and fish liver for these species. Hydropic vacuolation also frequently occurred in rock sole, but the lesion showed no clear association with contaminant exposure in this species. The types of liver lesions that were useful biomarkers of contaminant effects in fish depended on the species, and this factor must be taken into account when evaluating histopathological biomarkers of response to contaminant exposure.

12.
In order to develop a population pharmacokinetic model for ciprofloxacin after single oral dosing in patients with liver impairment, a retrospective population analysis of already published data was undertaken. The purpose of the study was to compare the population model parameter estimates for ciprofloxacin obtained with the non-parametric expectation maximization (NPEM2) algorithm based on a full data set (NPEM2-FULL) with those based on a set of 3 randomly chosen time/concentration data points (NPEM2-3RPs). Parameter values generated by the standard two-stage (STS) approach in a traditional data-rich situation were used as a "gold standard" for comparative purposes. There was no significant difference between parameter means at p < 0.05 for the Gauss-Newton and maximum a posteriori Bayesian (MAPB) estimators. The values of k(s) (min/ml/h) as estimated by the STS and NPEM2-FULL models on the one hand, and by the STS and NPEM2-3RPs population models on the other (0.001, 0.00095, and 0.001, respectively), were not significantly different (p = 0.1457 and p = 0.6276, respectively). The population model values of k(s) suggest that a good approximation between ciprofloxacin renal clearance and creatinine clearance could be expected for most of the patients and support previous observations that creatinine clearance is a meaningful predictor of ciprofloxacin elimination from the body. The 3 population models estimated Vs/F (l/kg) without significant difference. The predictive performance of these population models was subsequently assessed using an internal validation approach. The 3 population models demonstrated comparable accuracy and precision in Bayesian forecasting of drug plasma levels in validation group patients based on 1 random and 2 suboptimal prior drug concentrations. There was, however, a one-order-of-magnitude decrease in population model bias when 2 suboptimal data points were used as Bayesian priors.

13.
As part of a soil lead regulation process, this review was conducted to determine the association between lead in soil and established human health effects of lead or validated biomarkers of lead exposure. We reviewed only studies where soil exposure could be distinguished from other sources of lead and whose design could reasonably be used to infer a causal relationship between soil lead and either biomarkers or health effects. No such studies of health effects were found. Studies describing a quantitative relationship between soil lead and blood lead did meet our criteria: 22 cross-sectional studies in areas with polluted soil; and three prospective studies of soil lead pollution abatement trials. The cross-sectional studies indicated that, compared to children exposed to soil lead levels of 100 ppm, those exposed to levels of 1000 ppm had mean blood lead concentrations 1.10-1.86 times higher and those exposed to soil lead levels of 2000 ppm had blood lead concentrations 1.13-2.25 times higher. The prospective studies showed effects within the ranges predicted by the cross-sectional studies. Differences in results between studies were surprisingly modest and likely explainable by random sampling error, different explanatory variables included in data analyses and differences in methods of measuring lead in environmental specimens.

14.
For selected priority pollutants, like organochlorine pesticides, PAHs and PCBs, and mercury and cadmium, the transfer along marine food chains was assessed based on monitoring data. Comparison of the acquired body burden for marine fish with the toxicity thresholds for predating marine birds and mammals provides evidence for the relevance of contaminant uptake via food and the potential for secondary poisoning. As a consequence, contaminant residues in prey organisms (critical body burden) should be used for marine hazard and risk assessments. Evaluations based solely on aquatic exposure concentrations are not adequate to account for potential secondary effects in marine ecosystems.

15.
Uncertainty and risk are central features of geotechnical and geological engineering. Engineers can deal with uncertainty by ignoring it, by being conservative, by using the observational method, or by quantifying it. In recent years, reliability analysis and probabilistic methods have found wide application in geotechnical engineering and related fields. The tools are well known, including methods of reliability analysis and decision trees. Analytical models for deterministic geotechnical applications are also widely available, even if their underlying reliability is sometimes suspect. The major issues involve input and output. In order to develop appropriate input, the engineer must understand the nature of uncertainty and probability. Most geotechnical uncertainty reflects lack of knowledge, and probability based on the engineer’s degree of belief comes closest to the profession’s practical approach. Bayesian approaches are especially powerful because they provide probabilities on the state of nature rather than on the observations. The first point in developing a model from geotechnical data is that the distinction between the trend or systematic error and the spatial error is a modeling choice, not a property of nature. Second, properties estimated from small samples may be seriously in error, whether they are used probabilistically or deterministically. Third, experts generally estimate mean trends well but tend to underestimate uncertainty and to be overconfident in their estimates. In this context, engineering judgment should be based on a demonstrable chain of reasoning and not on speculation. One difficulty in interpreting results is that most people, including engineers, have difficulty establishing an allowable probability of failure or dealing with low values of probability. The F–N plot is one useful vehicle for comparing calculated probabilities with observed frequencies of failure of comparable facilities. In any comparison it must be noted that a calculated probability is a lower bound because it necessarily omits the factors that are ignored in the analysis. It is useful to compare probabilities of failure for alternative designs, and the reliability methods reveal the contributions of different components to the uncertainty in the probability of failure. Probability is not a property of the world but a state of mind; geotechnical uncertainty is primarily epistemic, Bayesian, and belief based. The current challenges to the profession are to make use of probabilistic methods in practice and to sharpen our investigations and analyses so that each additional data point provides maximal information.

16.
Probabilistic models are developed to predict the deformation and shear demands due to seismic excitation on reinforced concrete (RC) columns in bridges with two-column bents. A Bayesian methodology is used to develop the models. The models are unbiased and properly account for the predominant uncertainties, including model errors, arising from a potentially inaccurate model form or missing variables, measurement errors, and statistical uncertainty. The probabilistic models developed are akin to deterministic demand models and procedures commonly used in practice, but they have additional correction terms that explicitly describe the inherent systematic and random errors. Through the use of a set of “explanatory” functions, terms that correct the bias in the existing deterministic demand models are identified. These explanatory functions provide insight into the underlying behavioral phenomena and provide a means to select ground motion parameters that are most relevant to the seismic demands. The approach takes into account information gained from scientific/engineering laws, observational data from laboratory experiments, and simulated data from numerical dynamic responses. The demand models are combined with previously developed probabilistic capacity models for RC bridge columns to objectively estimate the seismic vulnerability of bridge components and systems. The vulnerability is expressed in terms of the conditional probability (or fragility) that a demand quantity (deformation or shear) will be greater than or equal to the corresponding capacity. Fragility estimates are developed for an example RC bridge with two-column bents, designed based on the current specifications for California. Fragility estimates are computed at the individual column, bent, and bridge system levels, as a function of the spectral acceleration and the ratio between the peak ground velocity and the peak ground acceleration.
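As a toy illustration of the fragility concept in this abstract, the Monte Carlo sketch below evaluates the conditional probability that a demand quantity exceeds capacity over a range of spectral accelerations. The lognormal capacity model, the power-law median demand model, and all parameter values are assumptions for illustration and are unrelated to the paper's calibrated probabilistic models.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sim = 100_000
sa_grid = np.linspace(0.1, 2.0, 20)                  # spectral acceleration (g)

# Assumed lognormal column drift capacity and power-law median demand model.
capacity = rng.lognormal(np.log(0.04), 0.35, n_sim)  # drift ratio capacity
fragility = []
for sa in sa_grid:
    median_demand = 0.02 * sa**0.9                   # assumed demand model
    demand = rng.lognormal(np.log(median_demand), 0.4, n_sim)
    fragility.append(np.mean(demand >= capacity))    # P(demand >= capacity | Sa)

for sa, p in zip(sa_grid, fragility):
    print(f"Sa = {sa:.2f} g  ->  fragility = {p:.3f}")
```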

17.
Random error (misclassification) in exposure measurements usually biases a relative risk, regression coefficient, or other effect measure towards the null value (no association). The most important exception is Berkson type error, which causes little or no bias. Berkson type error arises, in particular, due to use of group average exposure in place of individual values. Random error in exposure measurements, Berkson or otherwise, reduces the power of a study, making it more likely that real associations are not detected. Random error in confounding variables compromises the control of their effect, leaving residual confounding. Random error in a variable that modifies the effect of exposure on health--for example, an indicator of susceptibility--tends to diminish the observed modification of effect, but error in the exposure can create a spurious appearance of modification. Methods are available to correct for bias (but not generally power loss) due to measurement error, if information on the magnitude and type of error is available. These methods can be complicated to use, however, and should be used cautiously as "correction" can magnify confounding if it is present.
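A short simulation contrasting the two error types named here: classical error (the measurement scatters around the true exposure) attenuates the regression slope toward the null, while Berkson-type error (the true exposure scatters around an assigned group value) leaves the slope roughly unbiased. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
true_slope = 0.3

# Berkson setup: each person is assigned a group-level exposure; their true
# exposure scatters around that assigned value.
assigned = rng.normal(10, 2, n)
true_x = assigned + rng.normal(0, 2, n)
y = 1.0 + true_slope * true_x + rng.normal(0, 1, n)
print(np.polyfit(assigned, y, 1)[0])      # ~0.30: little or no bias

# Classical setup: the measurement scatters around the true exposure.
true_x2 = rng.normal(10, 2, n)
y2 = 1.0 + true_slope * true_x2 + rng.normal(0, 1, n)
measured = true_x2 + rng.normal(0, 2, n)
print(np.polyfit(measured, y2, 1)[0])     # ~0.15: attenuated toward the null
```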

18.
James Bay Quebec Crees are exposed to methylmercury (MM) through fish consumption. Hair mercury concentrations were measured in women of child-bearing age and men and women 40 years of age and above in a small Cree community of James Bay (with traditionally low exposure to MM) before and after fishing expeditions to inland lakes where fish were contaminated with methylmercury. Median hair mercury concentrations in persons 40 years and above increased from 4.1 mg/kg to 9.9 mg/kg and the highest value from 17.4 to 47.2 mg/kg. A similar increase was seen after a second fishing expedition where the median hair concentration increased from 3.4 mg/kg to 7.2 mg/kg and the highest value from 17.7 to 49.9 mg/kg. Populations with traditionally low exposure to MM can become highly exposed with changes in sources or quantities of fish consumed.

19.
Clinical cancer prevention studies that use disease as an endpoint are, of necessity, large, lengthy, and extremely costly. Development of the field of cancer chemoprevention is being accelerated by the application of intermediate markers to preclinical and clinical studies. Sensitive and specific analytic methods have been developed for detecting and quantifying levels of covalent adducts of aflatoxins with cellular DNA and blood proteins at ambient levels of exposure. Such biomarkers can be applied to the preselection of exposed individuals for study cohorts, thereby reducing study size requirements. Levels of these aflatoxin-DNA and albumin adducts can be modulated by chemopreventive agents such as oltipraz and chlorophyllin in experimental models. Overall, a good concordance is seen between diminution of biomarkers and reductions in tumor incidence and/or multiplicity in these settings. Thus, these markers can also be used to rapidly assess the efficacy of preventive interventions. However, the successful application of these biomarkers to clinical prevention trials will be dependent upon prior determination of the associative or causal role of the marker in the carcinogenic process, establishment of the relationship between dose and response, and appreciation of the kinetics of adduct formation and removal. The general approach that has been utilized for the development, validation and application of aflatoxin-DNA and protein adduct biomarkers to cancer chemoprevention trials is summarized.

20.
The population risk, for example the control group mortality rate, is an aggregate measurement of many important attributes of a clinical trial, such as the general health of the patients treated and the experience of the staff performing the trial. Plotting measurements of the population risk against the treatment effect estimates for a group of clinical trials may reveal an apparent association, suggesting that differences in the population risk might explain heterogeneity in the results of clinical trials. In this paper we consider using estimates of population risk to explain treatment effect heterogeneity, and show that using these estimates as fixed covariates will result in bias. This bias depends on the treatment effect and population risk definitions chosen, and the magnitude of measurement errors. To account for the effect of measurement error, we represent clinical trials in a bivariate two-level hierarchical model, and show how to estimate the parameters of the model by both maximum likelihood and Bayes procedures. We use two examples to demonstrate the method.
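The bias described here can be sketched with a small simulation: when the treatment effect is the difference of arm-level log odds, the sampling error of the control arm enters both the effect estimate and the "fixed covariate", producing a spurious slope even when the true effect is unrelated to control risk. This does not reproduce the bivariate two-level hierarchical model; the trial sizes and risks are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_arm = 60, 100
true_log_or = -0.4                            # same in every trial: no true relation

p_ctrl = rng.uniform(0.15, 0.45, n_trials)    # true control-group risks
odds_trt = np.exp(np.log(p_ctrl / (1 - p_ctrl)) + true_log_or)
p_trt = odds_trt / (1 + odds_trt)

# Observed event counts in each arm.
x_c = rng.binomial(n_arm, p_ctrl)
x_t = rng.binomial(n_arm, p_trt)
logit_c = np.log((x_c + 0.5) / (n_arm - x_c + 0.5))   # continuity-corrected log odds
logit_t = np.log((x_t + 0.5) / (n_arm - x_t + 0.5))
est_log_or = logit_t - logit_c

# Naive regression of the effect estimate on the estimated control risk:
print(np.polyfit(logit_c, est_log_or, 1)[0])  # spuriously negative; true slope is 0
```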
