Similar literature
20 similar documents found
1.
The use of multiple imputation for the analysis of missing data.   Cited by 1 (0 self-citations, 1 external)
This article provides a comprehensive review of multiple imputation (MI), a technique for analyzing data sets with missing values. Formally, MI is the process of replacing each missing data point with a set of m > 1 plausible values to generate m complete data sets. These complete data sets are then analyzed by standard statistical software, and the results combined, to give parameter estimates and standard errors that take into account the uncertainty due to the missing data values. This article introduces the idea behind MI, discusses the advantages of MI over existing techniques for addressing missing data, describes how to do MI for real problems, reviews the software available to implement MI, and discusses the results of a simulation study aimed at finding out how assumptions regarding the imputation model affect the parameter estimates provided by MI. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
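A minimal sketch of the generic MI workflow summarized above (not the authors' software or simulation code): impute the data m times, fit the analysis model to each completed data set, and pool with Rubin's rules. The use of scikit-learn's chained-equations imputer and a statsmodels OLS analysis model is an assumption for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import statsmodels.api as sm

def multiple_imputation_ols(data, m=20, seed=0):
    """Impute `data` (2-D array with NaNs) m times, regress column 0 on the rest,
    and pool coefficients and standard errors with Rubin's rules."""
    estimates, variances = [], []
    for i in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed + i)
        completed = imputer.fit_transform(data)
        y, X = completed[:, 0], sm.add_constant(completed[:, 1:])
        fit = sm.OLS(y, X).fit()
        estimates.append(fit.params)
        variances.append(fit.bse ** 2)
    Q = np.mean(estimates, axis=0)          # pooled point estimates
    W = np.mean(variances, axis=0)          # within-imputation variance
    B = np.var(estimates, axis=0, ddof=1)   # between-imputation variance
    T = W + (1 + 1 / m) * B                 # total variance (Rubin's rules)
    return Q, np.sqrt(T)
```

The pooled standard error reflects both the ordinary sampling variance and the extra uncertainty due to the missing values, which is the point of MI over single imputation.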

2.
Average change in list recall was evaluated as a function of missing data treatment (Study 1) and dropout status (Study 2) over ages 70 to 105 in Asset and Health Dynamics of the Oldest-Old data. In Study 1 the authors compared results of full-information maximum likelihood (FIML) and the multiple imputation (MI) missing-data treatments with and without independent predictors of missingness. Results showed declines in all treatments, but declines were larger for FIML and MI treatments when predictors were included in the treatment of missing data, indicating that attrition bias was reduced. In Study 2, models that included dropout status had better fits and reduced random variance compared with models without dropout status. The authors conclude that change estimates are most accurate when independent predictors of missingness are included in the treatment of missing data with either MI or FIML and when dropout effects are modeled. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
A 2-step approach for obtaining internal consistency reliability estimates with item-level missing data is outlined. In the 1st step, a covariance matrix and mean vector are obtained using the expectation maximization (EM) algorithm. In the 2nd step, reliability analyses are carried out in the usual fashion using the EM covariance matrix as input. A Monte Carlo simulation examined the impact of 6 variables (scale length, response categories, item correlations, sample size, missing data, and missing data technique) on 3 different outcomes: estimation bias, mean errors, and confidence interval coverage. The 2-step approach using EM consistently yielded the most accurate reliability estimates and produced coverage rates close to the advertised 95% rate. An easy method of implementing the procedure is outlined. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
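A rough sketch of the two-step procedure described above, assuming multivariate-normal items (a hand-rolled EM routine for illustration, not the authors' code): estimate the item covariance matrix by EM, then compute Cronbach's alpha from that matrix.

```python
import numpy as np

def em_mvnormal(X, n_iter=200, tol=1e-6):
    """EM estimates of the mean vector and covariance matrix of data X
    (rows = cases, columns = items, NaN = missing), assuming multivariate normality."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        sum_x, sum_xx = np.zeros(p), np.zeros((p, p))
        for row in X:
            x = row.copy()
            miss, obs = np.isnan(x), ~np.isnan(x)
            C = np.zeros((p, p))
            if miss.any():
                s_oo = sigma[np.ix_(obs, obs)]
                s_mo = sigma[np.ix_(miss, obs)]
                # E-step: conditional mean and covariance of the missing items
                x[miss] = mu[miss] + s_mo @ np.linalg.solve(s_oo, x[obs] - mu[obs])
                C[np.ix_(miss, miss)] = sigma[np.ix_(miss, miss)] - s_mo @ np.linalg.solve(s_oo, s_mo.T)
            sum_x += x
            sum_xx += np.outer(x, x) + C
        mu_new = sum_x / n
        sigma_new = sum_xx / n - np.outer(mu_new, mu_new)   # M-step
        done = np.max(np.abs(sigma_new - sigma)) < tol
        mu, sigma = mu_new, sigma_new
        if done:
            break
    return mu, sigma

def cronbach_alpha(cov):
    """Cronbach's alpha computed directly from an item covariance matrix."""
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())
```

Step 2 is then simply `cronbach_alpha(em_mvnormal(X)[1])`, i.e., the usual reliability formula applied to the EM covariance matrix instead of a listwise-deleted one.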

4.
A Monte Carlo simulation examined full information maximum-likelihood estimation (FIML) in structural equation models with nonnormal indicator variables. The impact of 4 independent variables (missing data algorithm, missing data rate, sample size, and distribution shape) was examined on 4 outcome measures (parameter estimate bias, parameter estimate efficiency, standard error coverage, and model rejection rates). Across missing completely at random and missing at random patterns, FIML parameter estimates involved less bias and were generally more efficient than those of ad hoc missing data techniques. However, similar to complete-data maximum-likelihood estimation in structural equation modeling, standard errors were negatively biased and model rejection rates were inflated. Simulation results suggest that recently developed correctives for missing data (e.g., rescaled statistics and the bootstrap) can mitigate problems that stem from nonnormal data. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
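To make the FIML idea concrete, here is a generic sketch (not the study's simulation code) of the casewise log-likelihood that FIML maximizes: each case contributes the normal density of its observed variables only, evaluated under the model-implied mean and covariance. In an SEM, `mu` and `sigma` would be functions of the model parameters and this quantity would be maximized numerically.

```python
import numpy as np

def fiml_loglik(X, mu, sigma):
    """Full-information ML log-likelihood of X (NaN = missing) under a
    multivariate-normal model with mean `mu` and covariance `sigma`."""
    total = 0.0
    for row in X:
        obs = ~np.isnan(row)
        if not obs.any():
            continue                                  # a fully missing case adds nothing
        d = row[obs] - mu[obs]
        s = sigma[np.ix_(obs, obs)]
        _, logdet = np.linalg.slogdet(s)
        k = obs.sum()
        # log N(x_obs | mu_obs, Sigma_obs), i.e. the density of the observed part only
        total += -0.5 * (k * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(s, d))
    return total
```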

5.
OBJECTIVES: We sought to determine the cost advantage of a strategy of same-sitting diagnostic catheterization and percutaneous transluminal coronary angioplasty (PTCA) (ad hoc) in comparison with staged PTCA. BACKGROUND: It is widely assumed that an ad hoc strategy lowers costs by reducing the length of hospital stay (LOS). However, this assumption has not been examined in a contemporary data set. METHODS: We studied 395 patients undergoing PTCA during 6 consecutive months. Cost analysis was performed using standard cost-accounting methods and a mature cost-accounting system. Costs were examined within three clinical strata based on the indication for PTCA (stable angina, unstable angina and after myocardial infarction [MI]). RESULTS: For the entire patient cohort, there was no significant cost advantage of an ad hoc approach within any of the strata, although there was a nonsignificant trend favoring an ad hoc approach in patients with stable angina. For patients treated with conventional balloon PTCA alone, the lack of a significant difference between ad hoc and staged strategies persisted. For patients who received stents, there was a significant cost advantage of an ad hoc approach in all three clinical strata. An important cost driver was the occurrence of complications. Differences in the rates of complications did not reach statistical significance between ad hoc and staged strategies, but even a small trend toward greater complications in patients who had the ad hoc strategy negated cost and LOS advantages. Our study had the power to detect significant cost differences of $1,300 for patients with stable angina, $2,100 for patients with unstable angina and $2,500 for post-MI patients. It is possible that we failed to detect smaller cost advantages as significant. CONCLUSIONS: A cost savings with an ad hoc strategy of PTCA could not be consistently demonstrated. The cost advantage of an ad hoc approach may be most readily realized in clinical settings where the intrinsic risks are low (e.g., stable angina) or in which the device used carries a reduced risk of complications (e.g., stenting), because even a small increase in the complication rate will negate any financial advantage of an ad hoc approach.

6.
Missing data: Our view of the state of the art.   Cited by 5 (0 self-citations, 5 external)
Statistical procedures for missing data have vastly improved, yet misconception and unsound practice still abound. The authors frame the missing-data problem, review methods, offer advice, and raise issues that remain unresolved. They clear up common misunderstandings regarding the missing at random (MAR) concept. They summarize the evidence against older procedures and, with few exceptions, discourage their use. They present, in both technical and practical language, 2 general approaches that come highly recommended: maximum likelihood (ML) and Bayesian multiple imputation (MI). Newer developments are discussed, including some for dealing with missing data that are not MAR. Although not yet in the mainstream, these procedures may eventually extend the ML and MI methods that currently represent the state of the art. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
We devised a diagnostic approach based on screening plasma for an Aspergillus antigen with use of a sandwich enzyme-linked immunosorbent assay (ELISA), thoracic computed tomographic scanning, and radionuclide imaging for managing patients at risk for invasive aspergillosis. We used a decision analytic model to compare this alternative strategy with the conventional strategy, which relies only on the presence of clinical symptoms, persistent fever, and chest roentgenographic findings. Use of the alternative strategy reduced the number of patients who would receive antifungal treatment empirically, but this strategy was more expensive. The specificity of the sandwich ELISA had a significant impact on cost, but the sensitivity did not. A 13% prevalence of infection resulted in equal costs for both strategies. As much as 43.3% of the patients treated empirically could be given liposomal amphotericin B (L-AmB) before the conventional strategy became the most expensive. The costs of the alternative strategy were less than those of the conventional strategy when >5.3% of all patients, irrespective of strategy, were treated with L-AmB.

8.
The past decade has seen a noticeable shift in missing data handling techniques that assume a missing at random (MAR) mechanism, where the propensity for missing data on an outcome is related to other analysis variables. Although MAR is often reasonable, there are situations where this assumption is unlikely to hold, leading to biased parameter estimates. One such example is a longitudinal study of substance use where participants with the highest frequency of use also have the highest likelihood of attrition, even after controlling for other correlates of missingness. There is a large body of literature on missing not at random (MNAR) analysis models for longitudinal data, particularly in the field of biostatistics. Because these methods allow for a relationship between the outcome variable and the propensity for missing data, they require a weaker assumption about the missing data mechanism. This article describes 2 classic MNAR modeling approaches for longitudinal data: the selection model and the pattern mixture model. To date, these models have been slow to migrate to the social sciences, in part because they required complicated custom computer programs. These models are now quite easy to estimate in popular structural equation modeling programs, particularly Mplus. The purpose of this article is to describe these MNAR modeling frameworks and to illustrate their application on a real data set. Despite their potential advantages, MNAR-based analyses are not without problems and also rely on untestable assumptions. This article offers practical advice for implementing and choosing among different longitudinal models. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
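A deliberately simplified sketch of the pattern-mixture idea for monotone dropout (not the Mplus selection or pattern-mixture models from the article): group cases by dropout pattern and identify the unobserved waves by borrowing, wave by wave, a regression fitted among cases still under observation (a complete-case-type identifying restriction). Function and variable names are illustrative.

```python
import numpy as np

def pattern_mixture_means(Y):
    """Wave-specific means of a monotone-dropout outcome Y (n x t, NaN after dropout).
    For cases that dropped out before wave j, wave-j values are predicted from a
    regression of wave j on the earlier waves, fitted among cases observing wave j,
    then applied sequentially; the marginal means average over all dropout patterns."""
    Y = np.asarray(Y, dtype=float)
    n, t = Y.shape
    n_obs = (~np.isnan(Y)).sum(axis=1)        # waves observed per case (monotone)
    filled = Y.copy()
    for j in range(1, t):
        donors = n_obs > j                    # cases with wave j observed
        Xd = np.column_stack([np.ones(donors.sum()), Y[donors][:, :j]])
        beta, *_ = np.linalg.lstsq(Xd, Y[donors, j], rcond=None)
        need = n_obs <= j                     # cases that dropped out before wave j
        Xn = np.column_stack([np.ones(need.sum()), filled[need][:, :j]])
        filled[need, j] = Xn @ beta           # sequential prediction of unobserved waves
    return filled.mean(axis=0)
```

The key difference from an MAR analysis is that the unobserved waves are identified by an explicit, untestable restriction linking dropouts to cases still observed; changing that restriction is how sensitivity analyses are built.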

9.
Highly selective inhibitors of cyclooxygenase-2 (COX-2i) were introduced to minimize peptic ulcers and their complications caused by dual COX inhibitors (COXi). Co-prescribing a (generally cheap) dual COXi with a gastroprotectant is an alternative strategy, proven to reduce the incidence of NSAID-associated endoscopic ulcers. This review compares the efficacies of these two strategies and makes some estimates of their relative cost-effectiveness. In standard risk patients, endoscopic ulcers are reduced to about the same extent (around 70-80%) by either co-prescribing omeprazole or lansoprazole with a dual COXi or preferring a COX-2i alone. COX-2i reduced ulcer complications by a weighted mean of around 60% in comparative studies with dual COXi. There is little information about the influence of PPI on this endpoint, although one study using H. pylori treatment as a possible surrogate for placebo intervention found 77% protection against recurrent upper gastrointestinal bleeding by co-administered omeprazole. One direct comparison of the two strategies in high-risk patients (recent ulcer bleed) found quite high rates of re-presentation with bleeding ulcer using either strategy, and the differences between them were not significant. Drug costs in four Western countries were compared for each strategy. In one, the costs were similar, but in the others the combination of a cheap dual COXi with omeprazole was usually more expensive than using a COX-2i. The safest strategy in highest risk patients may be to co-prescribe a gastroprotectant with a COX-2i, with resulting higher drug costs but possibly offset by savings in other health costs. The efficacy and cost-benefit of this alternative approach warrants investigation.

10.
Hollow shapes, particularly the shafts of road-vehicle power trains, offer major potential for technical innovation, since these components must be especially strong while component weight and unit costs are highly restrictive. This innovation strategy spans a wide range of product- and technology-development disciplines and demands intensive, advanced interdisciplinary development of the required production technology. Hollow shapes can be produced with the spin extrusion process. Spin extrusion is a flexible rotary pressure-forming process that can be applied in dimensional ranges that comparable processes cannot reach, cannot reach cost-effectively, or can reach only with very expensive materials. Because spin extrusion is an incremental technique, it also faces the well-known difficulties of analytically describing the process and of targeted process control. The energetic approach has proved to be the most practicable because it makes it possible to differentiate between a wide range of factors and parameters affecting the process while applying them to machine and process control. The result of the process development was a new processing principle for manufacturing hollow shaft components by forming, tailored to the specific needs of the automobile industry and other industries with major accuracy and safety requirements.

11.
This article urges counseling psychology researchers to recognize and report how missing data are handled, because consumers of research cannot accurately interpret findings without knowing the amount and pattern of missing data or the strategies that were used to handle those data. Patterns of missing data are reviewed, and some of the common strategies for dealing with them are described. The authors provide an illustration in which data were simulated and evaluate 3 methods of handling missing data: mean substitution, multiple imputation, and full information maximum likelihood. Results suggest that mean substitution is a poor method for handling missing data, whereas both multiple imputation and full information maximum likelihood are recommended alternatives to this approach. The authors suggest that researchers fully consider and report the amount and pattern of missing data and the strategy for handling those data in counseling psychology research and that editors advise researchers of this expectation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Matching methods such as nearest neighbor propensity score matching are increasingly popular techniques for controlling confounding in nonexperimental studies. However, simple k:1 matching methods, which select k well-matched comparison individuals for each treated individual, are sometimes criticized for being overly restrictive and discarding data (the unmatched comparison individuals). The authors illustrate the use of a more flexible method called full matching. Full matching makes use of all individuals in the data by forming a series of matched sets in which each set has either 1 treated individual and multiple comparison individuals or 1 comparison individual and multiple treated individuals. Full matching has been shown to be particularly effective at reducing bias due to observed confounding variables. The authors illustrate this approach using data from the Woodlawn Study, examining the relationship between adolescent marijuana use and adult outcomes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
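For readers unfamiliar with the terminology, a simplified illustration of propensity score matching follows; it implements greedy k:1 nearest-neighbor matching, not full matching itself, which instead forms variable-ratio sets via an optimal assignment (e.g., R's optmatch) and discards no one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_ps_match(X, treated, k=1):
    """Greedy k:1 nearest-neighbor matching on the estimated propensity score.
    X: covariate matrix; treated: 0/1 indicator. Returns matched sets as
    (treated_index, [comparison_indices]) pairs, matching without replacement."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    treated_idx = np.where(treated == 1)[0]
    pool = list(np.where(treated == 0)[0])
    sets = []
    for i in treated_idx:
        pool.sort(key=lambda j: abs(ps[j] - ps[i]))   # closest comparisons first
        sets.append((i, pool[:k]))
        pool = pool[k:]                               # remove the used comparisons
    return sets
```

The restriction criticized in the abstract is visible here: unmatched comparison individuals left in `pool` at the end are simply discarded, which full matching avoids.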

13.
Describes a new theory of propositional reasoning, that is, deductions depending on if, or, and, and not. The theory proposes that reasoning is a semantic process based on mental models. It assumes that people are able to maintain models of only a limited number of alternative states of affairs, and they accordingly use models representing as much information as possible in an implicit way. They represent a disjunctive proposition, such as "There is a circle or there is a triangle," by imagining initially 2 alternative possibilities: one in which there is a circle and the other in which there is a triangle. This representation can, if necessary, be fleshed out to yield an explicit representation of an exclusive or an inclusive disjunction. The theory elucidates all the robust phenomena of propositional reasoning. It also makes several novel predictions, which were corroborated by the results of 4 experiments. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
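As a toy rendering of how the implicit disjunction above can be "fleshed out" into fully explicit models (purely illustrative; the theory itself is psychological, not computational):

```python
from itertools import product

def explicit_models(connective):
    """Fully explicit models (possible states of affairs) for
    'there is a circle or there is a triangle'."""
    keep = {
        "inclusive_or": lambda circle, triangle: circle or triangle,
        "exclusive_or": lambda circle, triangle: circle != triangle,
    }[connective]
    return [(c, t) for c, t in product([True, False], repeat=2) if keep(c, t)]

# explicit_models("exclusive_or") -> [(True, False), (False, True)]
# explicit_models("inclusive_or") additionally includes (True, True).
```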

14.
MOTIVATION: The simulation of biochemical kinetic systems is a powerful approach that can be used for: (i) checking the consistency of a postulated model with a set of experimental measurements, (ii) answering 'what if?' questions and (iii) exploring possible behaviours of a model. Here we describe a generic approach to combine numerical optimization methods with biochemical kinetic simulations, which is suitable for use in the rational design of improved metabolic pathways with industrial significance (metabolic engineering) and for solving the inverse problem of metabolic pathways, i.e. the estimation of parameters from measured variables. RESULTS: We discuss the suitability of various optimization methods, focusing especially on their ability or otherwise to find global optima. We recommend that a suite of diverse optimization methods should be available in simulation software as no single one performs best for all problems. We describe how we have implemented such a simulation-optimization strategy in the biochemical kinetics simulator Gepasi and present examples of its application. AVAILABILITY: The new version of Gepasi (3.20), incorporating the methodology described here, is available on the Internet at http://gepasi.dbs.aber.ac.uk/softw/Gepasi.html. CONTACT: prm@aber.ac.uk
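In the same spirit as the simulation-plus-optimization strategy described above (a generic SciPy sketch, not Gepasi itself): simulate a single irreversible Michaelis-Menten step and estimate its parameters from noisy measurements. The reaction, parameter values, and noise level are invented for illustration; a global method, as the abstract recommends, would restart from several initial guesses.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T_EVAL = np.linspace(0.0, 10.0, 25)

def simulate(vmax, km, s0=10.0):
    """Substrate time course of one irreversible Michaelis-Menten reaction."""
    rhs = lambda t, s: [-vmax * s[0] / (km + s[0])]
    return solve_ivp(rhs, (T_EVAL[0], T_EVAL[-1]), [s0], t_eval=T_EVAL).y[0]

def estimate(observed):
    """Inverse problem: fit (vmax, km) to measured substrate concentrations
    by least squares (local optimizer; restart from several x0 for a crude
    global search)."""
    sse = lambda p: np.sum((simulate(p[0], p[1]) - observed) ** 2)
    return minimize(sse, x0=[1.0, 1.0], bounds=[(1e-6, None), (1e-6, None)]).x

# Example: recover parameters from data simulated at vmax=2.0, km=3.0 plus noise.
rng = np.random.default_rng(1)
data = simulate(2.0, 3.0) + rng.normal(0, 0.05, T_EVAL.size)
print(estimate(data))
```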

15.
The peroral administration of (poly)peptide drugs requires the development of delivery systems that protect against gastrointestinal enzymatic attack. A promising strategy for such systems is the use of polymer-enzyme inhibitor conjugates, in which the embedded therapeutic agent is protected. However, the practical use of polymer-inhibitor conjugates has so far been limited by the high production costs of these auxiliary agents. To solve this problem for delivery systems that shield against degradation by pepsin, structurally simplified analogues of the pepsin inhibitor pepstatin A have been synthesized. The synthesis of tripeptide analogues, described by McConnell et al., led us to pursue further modifications varying the C-terminus. Our goal of attaching a spacer moiety (enabling free access of pepsin to the inhibitor) had to be combined with an attractive synthetic approach providing low production costs in large-scale preparation. Structural modifications comprised either the side chain of the third amino acid, which served as the starting compound for designing the C-terminus (L-leucine, L-isoleucine, L-norvaline), or the length of the spacer link, simulated by a linear alkyl group (n-butyl, n-hexyl, and n-octyl). The inhibitory activities, evaluated by an enzyme assay, depended significantly on the nature of the side chain, whereas the length of the spacer had no influence on the inhibitory effect. Analogues bearing the isobutyl or n-propyl moiety as the side chain displayed a strong inhibitory effect comparable to that of pepstatin A. These congeners represent promising auxiliary agents for the peroral administration of (poly)peptide drugs.

16.
OBJECTIVES: We sought to determine the clinical, angiographic, treatment and outcome correlates of the intermediate-term cost of caring for patients with suspected coronary artery disease (CAD). BACKGROUND: To adequately predict medical costs and to compare different treatment and cost reduction strategies, the determinants of cost must be understood. However, little is known about the correlates of costs of treatment of CAD in heterogeneous patient populations that typify clinical practice. METHODS: From a consecutive series of 781 patients undergoing cardiac catheterization in 1992 to 1994, we analyzed 44 variables as potential correlates of total (direct and indirect) in-hospital, 12- and 36-month cardiac costs. RESULTS: Mean (+/-SD) patient age was 65+/-10 years; 71% were men, and 45% had multiple vessel disease. The initial treatment strategy was medical therapy alone in 47% of patients, percutaneous intervention (PI) in 30% and coronary artery bypass graft surgery (CABG) in 24%. The 36-month survival and event-free (death, infarction, CABG, PI) survival rates were 89.6+/-0.2% and 68.4+/-0.4%, respectively. Median hospital and 36-month costs were $8,301 and $28,054, respectively, but the interquartile ranges for both were wide and skewed. Models for log(e) costs were superior to those for actual costs. The variances accounted for by the all-inclusive models of in-hospital, 12- and 36-month costs were 57%, 60% and 71%, respectively. Baseline cardiac variables accounted for 38% of the explained in-hospital costs, whereas in-hospital treatment and complication variables accounted for 53% of the actual costs. Noncardiac variables accounted for only 9% of the explained costs. Over time, complications (e.g., late hospital admission, PI, CABG) and drug use to prevent complications of heart transplantation became more important, but many baseline cardiac variables retained their importance. CONCLUSIONS: 1) Variables readily available from a comprehensive cardiovascular database explained 57% to 71% of cardiac costs from a hospital perspective over 3 years of care; 2) the initial revascularization strategy was a key determinant of in-hospital costs, but over 3 years, the initial treatment became somewhat less important, and late complications became more important determinants of costs.

17.
A genetic-fuzzy learning from examples (GFLFE) approach is presented for determining fuzzy rule bases generated from input/output data sets. The method is less computationally intensive than existing fuzzy rule base learning algorithms as the optimization variables are limited to the membership function widths of a single rule, which is equal to the number of input variables to the fuzzy rule base. This is accomplished by primary width optimization of a fuzzy learning from examples algorithm. The approach is demonstrated by a case study in masonry bond strength prediction. This example is appropriate as theoretical models to predict masonry bond strength are not available. The GFLFE method is compared to a similar learning method using constrained nonlinear optimization. The writers’ results indicate that the use of a genetic optimization strategy as opposed to constrained nonlinear optimization provides significant improvement in the fuzzy rule base as indicated by a reduced fitness (objective) function and reduced root-mean-squared error of an evaluation data set.

18.
In the management of unstable angina and non-Q-wave acute myocardial infarction (AMI), there is considerable debate regarding the use of invasive strategy versus conservative strategy. The Thrombolysis In Myocardial Infarction (TIMI) III B trial found similar clinical outcomes for the 2 strategies, but the Veterans Administration Non-Q-Wave Infarction Strategies in-Hospital trial found a higher mortality with the invasive strategy. Both these trials were conducted before platelet glycoprotein IIb/IIIa inhibition and coronary stenting, both of which improve clinical outcome. Thus, there is a need to reexamine the question of which management strategy is optimal in the current era of platelet glycoprotein IIb/IIIa inhibition and new coronary interventions. The Treat Angina with Aggrastat and determine Cost of Therapy with an Invasive or Conservative Strategy (TACTICS-TIMI 18) trial is an international, multicenter, randomized trial that is evaluating the clinical efficacy of early invasive and early conservative treatment strategies in patients with unstable angina or non-Q-wave AMI treated with tirofiban, heparin, and aspirin. Patients are randomized to an invasive strategy, involving cardiac catheterization within 4 to 48 hours and revascularization with angioplasty or bypass surgery if feasible, versus a conservative strategy, where patients are referred for catheterization only for recurrent pain at rest or provokable ischemia. The primary end point is death, MI, or rehospitalization for acute coronary syndromes through a 6-month follow-up. The trial is also testing the "troponin hypothesis," that baseline troponins T and I will be useful in selecting an optimal management strategy.

19.
Optimal Design with Probabilistic Objective and Constraints   Cited by 1 (0 self-citations, 1 external)
Significant challenges are associated with solving optimal structural design problems involving the failure probability in the objective and constraint functions. In this paper, we develop gradient-based optimization algorithms for estimating the solution of three classes of such problems in the case of continuous design variables. Our approach is based on a sequence of approximating design problems, which is constructed and then solved by a semi-infinite optimization algorithm. The construction consists of two steps: First, the failure probability terms in the objective function are replaced by auxiliary variables resulting in a simplified objective function. The auxiliary variables are determined automatically by the optimization algorithm. Second, the failure probability constraints are replaced by a parametrized first-order approximation. The parameter values are determined in an adaptive manner based on separate estimations of the failure probability. Any computational reliability method, including first-order reliability and second-order reliability methods and Monte Carlo simulation, can be used for this purpose. After repeatedly solving the approximating problem, an approximate solution of the original design problem is found, which satisfies the failure probability constraints at a precision level corresponding to the selected reliability method. The approach is illustrated by a series of examples involving optimal design and maintenance planning of a reinforced concrete bridge girder.
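A toy instance of this problem class (not the authors' semi-infinite algorithm): estimate the failure probability of a simple resistance-versus-load limit state by Monte Carlo and locate the cheapest design that meets a target probability. The limit state, cost model, bounds, and target are all invented for illustration; monotonicity of cost and failure probability in the single design variable is what makes plain bisection sufficient here.

```python
import numpy as np

rng = np.random.default_rng(0)
LOAD = rng.lognormal(mean=1.0, sigma=0.3, size=200_000)      # Monte Carlo load sample
NOISE = rng.lognormal(mean=0.0, sigma=0.1, size=LOAD.size)   # resistance uncertainty

def failure_probability(d):
    """Monte Carlo estimate of P(resistance < load) for design variable d,
    with resistance modeled as 5*d times a lognormal factor (illustrative)."""
    return np.mean(5.0 * d * NOISE < LOAD)

def cheapest_feasible_design(p_target=1e-3, lo=0.1, hi=10.0, tol=1e-4):
    """Cost (here d**2) increases with d while the failure probability decreases,
    so the optimum is the smallest d whose estimated failure probability meets
    the target; locate it by bisection on the Monte Carlo estimate."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if failure_probability(mid) <= p_target:
            hi = mid
        else:
            lo = mid
    return hi, hi ** 2, failure_probability(hi)

print(cheapest_feasible_design())   # (design, cost, estimated failure probability)
```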

20.
Maintenance management is becoming increasingly important in the building industry. Some of the reasons for this trend are: the large variety of uses for which buildings are constructed, the increase in the number of tall buildings, the increased use of electro-mechanical systems in buildings, and the higher performance of buildings. Because the financial resources for the maintenance of buildings and infrastructures are always limited, there is a need to find ways to allocate them among the various projects suggested for rehabilitation, renovation, and upgrading of existing buildings. The model developed in the present research to solve the problem of resource allocation is implemented in two stages, namely: (1) elimination of infeasible solutions and (2) identification of five solution configurations that are close to the optimum. The model may be implemented in either of two ways: (1) maximization of benefits while adhering to a fixed budget or (2) minimization of costs while putting the emphasis on the performance of the buildings. The first approach is suitable for organizations interested in reducing the costs of maintenance. The second approach is suitable for organizations that wish to achieve the highest performance possible. The model developed was tested in a computer decision support system. The resultant solutions were evaluated using a large number of representative cases. The development of the model was finalized after subjecting it to sensitivity analyses. In this way, the effects of variations in conditions on the overall performance and costs were examined.
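The "maximize benefits under a fixed budget" variant described above maps naturally onto a 0/1 knapsack problem; the brief dynamic-programming sketch below is only a generic illustration of that mapping (project costs, benefits, and the budget are invented), not the authors' two-stage model.

```python
def allocate_budget(costs, benefits, budget):
    """Choose the subset of maintenance projects that maximizes total benefit
    without exceeding the budget (0/1 knapsack with integer costs)."""
    n = len(costs)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                       # skip project i-1
            if costs[i - 1] <= b:                             # or take it if affordable
                best[i][b] = max(best[i][b],
                                 best[i - 1][b - costs[i - 1]] + benefits[i - 1])
    chosen, b = [], budget
    for i in range(n, 0, -1):                                 # trace back the selection
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

# Example: five candidate renovation projects and a budget of 100 cost units.
print(allocate_budget([30, 45, 25, 50, 15], [12, 20, 10, 24, 6], 100))
```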
