Similar articles
Found 20 similar articles (search time: 31 ms)
1.
This article describes how and why I became involved in consulting for the tobacco industry. I briefly discuss the four relatively distinct statistical topics that were the primary focus of my work, all of which have been central to my published academic research for over three decades: missing data; causal inference; adjustment for covariates in observational studies; and meta-analysis. To me, it is entirely appropriate to present the application of this academic work in a legal setting.

2.
3.
Abstract. Classical statistical inference for integer-valued time series has primarily been restricted to the integer-valued autoregressive (INAR) process. Markov chain Monte Carlo (MCMC) methods have been shown to be a useful tool in many branches of statistics and are particularly well suited to integer-valued time series, where statistical inference is greatly assisted by data augmentation. In this article, we therefore outline an efficient MCMC algorithm for a wide class of integer-valued autoregressive moving-average (INARMA) processes. Furthermore, we consider noise-corrupted integer-valued processes and also models with change points. Finally, in order to assess the MCMC algorithm's inferential and predictive capabilities, we use a range of simulated and real data sets.
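The data-augmentation idea can be illustrated on the simplest member of this model class, the Poisson INAR(1) process. The sketch below is not the authors' algorithm but a minimal random-walk Metropolis sampler for the thinning parameter α and innovation mean λ, with the transition probability computed exactly by summing over the latent binomial thinning count; all tuning constants and sample sizes are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def simulate_inar1(n, alpha, lam):
    """Simulate a Poisson INAR(1) series: X_t = alpha o X_{t-1} + Pois(lam)."""
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))          # near-stationary start
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def loglik(x, alpha, lam):
    """Exact log-likelihood, summing over the latent binomial thinning count."""
    ll = 0.0
    for t in range(1, len(x)):
        xp, xt = int(x[t - 1]), int(x[t])
        p = 0.0
        for k in range(min(xp, xt) + 1):
            p += (math.comb(xp, k) * alpha**k * (1 - alpha)**(xp - k)
                  * math.exp(-lam) * lam**(xt - k) / math.factorial(xt - k))
        ll += math.log(p)
    return ll

def mh_sampler(x, n_iter=2000, burn=500):
    """Joint random-walk Metropolis on (alpha, lambda) with flat priors."""
    alpha, lam, cur = 0.5, 1.0, loglik(x, 0.5, 1.0)
    draws = []
    for i in range(n_iter):
        a_new = alpha + rng.normal(0, 0.05)
        l_new = lam + rng.normal(0, 0.1)
        if 0 < a_new < 1 and l_new > 0:
            new = loglik(x, a_new, l_new)
            if math.log(rng.uniform()) < new - cur:
                alpha, lam, cur = a_new, l_new, new
        if i >= burn:
            draws.append((alpha, lam))
    return np.array(draws)

x = simulate_inar1(200, alpha=0.4, lam=2.0)
post = mh_sampler(x)
print("posterior means (alpha, lambda):", post.mean(axis=0))
```

For INARMA models with moving-average terms, the likelihood is no longer available in closed form, which is where augmenting the latent innovation counts (as the article describes) becomes essential.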

4.
Prospective studies of reproductive outcomes frequently record data at multiple cycles. For example, studies of in vitro fertilization and embryo transfer (IVF-ET) follow women or couples for possibly several IVF cycles and record outcomes such as pregnancy status and embryo implantation. Several time-varying covariates, such as age and diagnostic markers, typically are available as well. When attention is focused on measurement of exposure effects, the use of multiple cycle data poses several complications. If the study is observational, the exposure probability may depend on subject characteristics. Moreover, attrition rates in IVF-ET can be substantial, and the attrition process can be expected to depend heavily on prior outcome. In fact, both success (pregnancy) and failure (lack of embryo implantation) can be prognostic of dropout. In this paper, we illustrate the use of causal modeling for multiple cycle data. Key assumptions are reviewed, and inference based on weighted estimating equations is described in detail. The methods are applied to a study of the effects of hydrosalpinx among women with tubal disease undergoing IVF-ET.

5.
Making statistical inference on quantities defining various characteristics of a temporally measured biochemical process and analyzing its variability across different experimental conditions is a core challenge in various branches of science. This problem is particularly difficult when the amount of data that can be collected is limited in terms of both the number of replicates and the number of time points per process trajectory. We propose a method for analyzing the variability of smooth functionals of the growth or production trajectories associated with such processes across different experimental conditions. Our modeling approach is based on a spline representation of the mean trajectories. We also develop a bootstrap-based inference procedure for the parameters while accounting for possible multiple comparisons. This methodology is applied to study two types of quantities—the “time to harvest” and “maximal productivity”—in the context of an experiment on the production of recombinant proteins. We complement the findings with extensive numerical experiments comparing the effectiveness of different types of bootstrap procedures for various tests of hypotheses. These numerical experiments convincingly demonstrate that the proposed method yields reliable inference on complex characteristics of the processes even in a data-limited environment where more traditional methods for statistical inference are typically not reliable.
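The general pattern of bootstrap inference on a smooth functional of a fitted mean trajectory can be sketched as follows: estimate a "maximal productivity"-style functional (here, the peak time of a noisy production curve) and build a residual-bootstrap interval for it. The quadratic stand-in for the spline model, the noise level, and the chosen functional are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(6)

# noisy "production trajectory" peaking at t = 6 (simulation truth)
t = np.linspace(0.0, 10.0, 40)
y = -0.5 * (t - 6.0) ** 2 + 20.0 + 0.8 * rng.normal(size=t.size)

def peak_time(tt, yy):
    """Smooth functional of the fitted curve: argmax of a quadratic fit."""
    c2, c1, _ = np.polyfit(tt, yy, 2)
    return -c1 / (2 * c2)

est = peak_time(t, y)
fit = np.polyval(np.polyfit(t, y, 2), t)
resid = y - fit

# residual bootstrap: refit the functional on resampled-residual curves
boot = np.array([peak_time(t, fit + rng.choice(resid, resid.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [0.5, 99.5])    # 99% percentile interval
print(est, (lo, hi))
```

With multiple functionals or conditions, the same bootstrap replicates can feed a multiplicity adjustment, as the paper's procedure does.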

6.
Causality inference and root cause analysis are important for fault diagnosis in the chemical industry. Due to the increasing scale and complexity of chemical processes, data-driven methods have become indispensable in causality inference. This paper proposes an approach based on the concept of transfer entropy, introduced by Schreiber in 2000, to generate a causal map. To achieve better performance in estimating the time delay of causal relations, a modified form of the transfer entropy is presented. Case studies on two simulated chemical processes, including the benchmark Tennessee Eastman process, are performed to illustrate the effectiveness of this approach.
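For orientation, a minimal plug-in transfer entropy estimator on binarized series (not the paper's modified form) looks like the following; the coupled toy system and the lag are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def transfer_entropy(x, y, delay=1):
    """Plug-in transfer entropy x -> y (in nats) on median-binarized series,
    conditioning y_t on y_{t-1} and x_{t-delay}."""
    xb = (x > np.median(x)).astype(int)
    yb = (y > np.median(y)).astype(int)
    c = np.zeros((2, 2, 2))
    for t in range(max(1, delay), len(y)):
        c[yb[t], yb[t - 1], xb[t - delay]] += 1
    p = c / c.sum()
    p_bc = p.sum(axis=0)       # p(y_{t-1}, x_{t-delay})
    p_ab = p.sum(axis=2)       # p(y_t, y_{t-1})
    p_b = p.sum(axis=(0, 2))   # p(y_{t-1})
    te = 0.0
    for a in range(2):
        for b in range(2):
            for d in range(2):
                if p[a, b, d] > 0:
                    te += p[a, b, d] * np.log(
                        p[a, b, d] * p_b[b] / (p_bc[b, d] * p_ab[a, b]))
    return te

# toy coupled system: y is driven by x with a lag of 2 samples
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
y[2:] = 0.9 * x[:-2] + 0.4 * rng.normal(size=n - 2)

te_xy = transfer_entropy(x, y, delay=2)
te_yx = transfer_entropy(y, x, delay=2)
print(te_xy, te_yx)
```

Scanning `delay` over a grid and locating where the x-to-y estimate peaks is the usual way such estimators are used to recover the time delay of a causal edge.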

7.
In epidemiological research, the causal effect of a modifiable phenotype or exposure on a disease is often of public health interest. Randomized controlled trials to investigate this effect are not always possible and inferences based on observational data can be confounded. However, if we know of a gene closely linked to the phenotype without direct effect on the disease, it can often be reasonably assumed that the gene is not itself associated with any confounding factors - a phenomenon called Mendelian randomization. These properties define an instrumental variable and allow estimation of the causal effect, despite the confounding, under certain model restrictions. In this paper, we present a formal framework for causal inference based on Mendelian randomization and suggest using directed acyclic graphs to check model assumptions by visual inspection. This framework allows us to address limitations of the Mendelian randomization technique that have often been overlooked in the medical literature.
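The core instrumental-variable calculation behind Mendelian randomization can be sketched with the simple Wald ratio estimator on simulated data; the effect sizes, genotype frequency, and linear structural model below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
beta = 0.5                          # true causal effect (simulation truth)

g = rng.binomial(2, 0.3, n)         # genotype used as instrument
u = rng.normal(size=n)              # unobserved confounder
x = 0.4 * g + u + rng.normal(size=n)           # exposure (phenotype)
y = beta * x + u + rng.normal(size=n)          # outcome (disease trait)

# naive regression of y on x is confounded by u
ols = np.cov(x, y)[0, 1] / np.var(x)
# Wald ratio IV estimator: effect of g on y divided by effect of g on x
wald = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]
print("OLS:", ols, "IV:", wald)
```

The ratio recovers the causal slope because g shifts x without touching u or y directly, which is exactly the instrumental-variable condition the paper's DAG checks are designed to probe.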

8.
A novel networked process monitoring, fault propagation identification, and root cause diagnosis approach is developed in this study. First, process network structure is determined from prior process knowledge and analysis. The network model parameters including the conditional probability density functions of different nodes are then estimated from process operating data to characterize the causal relationships among the monitored variables. Subsequently, the Bayesian inference‐based abnormality likelihood index is proposed to detect abnormal events in chemical processes. After the process fault is detected, the novel dynamic Bayesian probability and contribution indices are further developed from the transitional probabilities of monitored variables to identify the major faulty effect variables with significant upsets. With the dynamic Bayesian contribution index, the statistical inference rules are, thus, designed to search for the fault propagation pathways from the downstream backwards to the upstream process. In this way, the ending nodes in the identified propagation pathways can be captured as the root cause variables of process faults. Meanwhile, the identified fault propagation sequence provides an in‐depth understanding as to the interactive effects of faults throughout the processes. The proposed approach is demonstrated using the illustrative continuous stirred tank reactor system and the Tennessee Eastman chemical process with the fault propagation identification results compared against those of the transfer entropy‐based monitoring method. The results show that the novel networked process monitoring and diagnosis approach can accurately detect abnormal events, identify the fault propagation pathways, and diagnose the root cause variables. © 2013 American Institute of Chemical Engineers AIChE J, 59: 2348–2365, 2013

9.
Many computational methods have been developed to infer causality among genes using cross-sectional gene expression data, such as single-cell RNA sequencing (scRNA-seq) data. However, due to the limitations of scRNA-seq technologies, time-lagged causal relationships may be missed by existing methods. In this work, we propose a method, called causal inference with time-lagged information (CITL), to infer time-lagged causal relationships from scRNA-seq data by assessing the conditional independence between the changing and current expression levels of genes. CITL estimates the changing expression levels of genes by “RNA velocity”. We demonstrate the accuracy and stability of CITL for inferring time-lagged causality on simulation data against other leading approaches. We have applied CITL to real scRNA-seq data and inferred 878 pairs of time-lagged causal relationships. Furthermore, we showed that the number of regulatory relationships identified by CITL was significantly more than that expected by chance. We provide an R package and a command-line tool of CITL for different usage scenarios.

10.
Using the one-way effect extraction method, this paper presents a set of partial causal measures that quantitatively represent the interdependence between a pair of vector-valued processes in the presence of a third process. These measures are defined for stationary as well as for a class of non-stationary time series. In contrast to conventional conditioning methods, the partial concept defined in the paper is largely free of feedback distortion by the third process. The paper also discusses statistical inference on the proposed measures.

11.
The effect of missing data in causal inference problems is widely recognized. In malaria drug efficacy studies, it is often difficult to distinguish between new and old infections after treatment, resulting in indeterminate outcomes. Methods that adjust for possible bias from missing data include a variety of imputation procedures (extreme case analysis, hot-deck, single and multiple imputation), weighting methods, and likelihood based methods (data augmentation, EM procedures and their extensions). In this article, we focus our discussion on multiple imputation and two weighting procedures (the inverse probability weighted and the doubly robust (DR) extension), comparing the methods' applicability to the efficient estimation of malaria treatment effects. Simulation studies indicate that DR estimators are generally preferable because they offer protection against misspecification of either the outcome model or the missingness model. We apply the methods to analyze malaria efficacy studies from Uganda.
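The contrast between inverse probability weighting and its doubly robust extension can be sketched on a toy mean-estimation problem with outcomes missing at random; the data-generating model, propensity form, and sample size below are illustrative, not the Uganda study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
x = rng.normal(size=n)
y = 2.0 + x + 0.5 * rng.normal(size=n)       # true population mean E[Y] = 2
p_true = 1 / (1 + np.exp(-2 * x))            # MAR: missingness depends on x only
r = rng.uniform(size=n) < p_true             # r = True means y is observed

cc = y[r].mean()                             # complete-case mean (biased)

def fit_logistic(z, t, iters=15):
    """Logistic regression of t on (1, z) by Newton-Raphson."""
    Z = np.column_stack([np.ones_like(z), z])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Z @ b))
        b += np.linalg.solve(Z.T @ (Z * (p * (1 - p))[:, None]), Z.T @ (t - p))
    return b

b0, b1 = fit_logistic(x, r.astype(float))
pi = 1 / (1 + np.exp(-(b0 + b1 * x)))        # estimated propensity of observation

# inverse probability weighting (y enters only where r is True)
ipw = np.mean(r * y / pi)

# outcome regression fitted on the observed cases only
A = np.column_stack([np.ones(r.sum()), x[r]])
a0, a1 = np.linalg.lstsq(A, y[r], rcond=None)[0]
m = a0 + a1 * x

# doubly robust estimator: consistent if either the pi-model or m-model is right
dr = np.mean(m + r * (y - m) / pi)
print(cc, ipw, dr)
```

Here both working models are correctly specified, so IPW and DR agree; the DR estimator's advantage, as the simulations in the article show, appears when one of the two models is wrong.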

12.
This paper reviews recent statistical advances in HIV/AIDS therapy trials. Our emphasis is on three emerging areas that address key challenges in AIDS research: the determination of optimal treatment sequences, estimating efficacy of intended treatment, and inference for repeated measures with dependent censoring. A common theme of these topics is the use of observational data within clinical trials to answer questions not addressed by the conventional intent-to-treat analysis. We also give a brief overview of some recent contributions to other topics relevant to AIDS clinical trials, including modelling of treatment compliance data, modelling of repeated measures, and group sequential testing.

13.
This paper deals with an application of partial least squares (PLS) methods to an industrial terephthalic acid (TPA) manufacturing process to identify and remove the major causes of variability in the product quality. Multivariate statistical analyses were performed to find the major causes of variability in the product quality, using the PLS models built from historical data measured on the process and quality variables. It was found from the PLS analyses that the variations in the catalyst concentrations and the process throughput significantly affect the product quality, and that the quality variations are propagated from the oxidation unit to the digestion units of the TPA process. A simulation-based approach was developed to roughly estimate the effects of eliminating the major causes on the product quality using the PLS models. Based on the results that considerable amounts of the variations in the product quality could be reduced, we have proposed practical approaches for removing the major causes of product quality variations in the TPA manufacturing process.
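The kind of PLS model used in such studies can be sketched with a textbook NIPALS PLS1 implementation on simulated collinear process data; this is a generic illustration, not the software or data of the TPA study.

```python
import numpy as np

rng = np.random.default_rng(4)

def pls1(X, y, n_comp):
    """NIPALS PLS1; returns B so that yhat = (X - xmean) @ B + ymean."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # weight vector
        t = Xk @ w                       # score
        tt = t @ t
        p = Xk.T @ t / tt                # X loading
        qk = (yk @ t) / tt               # y loading
        Xk = Xk - np.outer(t, p)         # deflate X
        yk = yk - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, xm, ym

# ten collinear "process variables" driven by two latent factors
n = 500
T = rng.normal(size=(n, 2))
X = T @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(n, 10))
y = T @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=n)

B, xm, ym = pls1(X, y, n_comp=2)
yhat = (X - xm) @ B + ym
r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
print("R2:", r2)
```

Because PLS compresses the collinear predictors into a few latent scores, the loading and weight vectors are what analysts inspect to attribute quality variation to specific process variables, as in the study's analysis of catalyst concentrations and throughput.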

14.
BAYESIAN THRESHOLD AUTOREGRESSIVE MODELS FOR NONLINEAR TIME SERIES
Abstract. This paper provides a Bayesian approach to statistical inference in the threshold autoregressive model for time series. The exact posterior distribution of the delay and threshold parameters is derived, as is the multi-step-ahead predictive density. The proposed methods are applied to Wolf's sunspot and Canadian lynx data sets.

15.
This paper investigates the challenging problem of diagnosing novel faults whose fault mechanisms and relevant historical data are not available. Most existing fault diagnosis systems are incapable of explaining root causes for unanticipated, novel faults, because they rely either on models or on historical data of known faulty conditions. To address this issue, we propose a new framework for novel fault diagnosis, which integrates causal reasoning on signed digraph models with multivariate statistical process monitoring. The prerequisites for our approach include historical data of normal process behavior and qualitative cause–effect relationships that can be derived from process flow diagrams. In this new approach, a set of candidate root nodes is identified first via qualitative reasoning on the signed digraph; then quantitative local consistency tests are implemented for each candidate based on multivariate statistical process monitoring techniques; finally, using the resulting multiple local residuals, diagnosis is performed based on the exoneration principle. The cause–effect relationships in the digraph enable automatic variable selection and the local residual interpretations for statistical monitoring. The effectiveness of this new approach is demonstrated using numerical examples based on the Tennessee Eastman process data.
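The first step, qualitative root-candidate identification on a signed digraph, can be sketched as sign propagation: a node (with an assumed deviation sign) is a candidate root if propagating that sign along the edges reproduces every observed deviation. The toy process graph and variable names below are hypothetical, and the single-path propagation is a simplification of the paper's reasoning.

```python
from collections import deque

# toy signed digraph (hypothetical process): feed flow raises level,
# level raises pressure, pressure raises temperature, outlet valve lowers level
edges = [("F_in", "L", +1), ("L", "P", +1),
         ("V_out", "L", -1), ("P", "T", +1)]

def propagate(root, root_sign, edges):
    """Predicted deviation sign of every node reachable from a deviating root."""
    adj = {}
    for s, t, sg in edges:
        adj.setdefault(s, []).append((t, sg))
    pred, queue = {root: root_sign}, deque([root])
    while queue:
        u = queue.popleft()
        for v, sg in adj.get(u, []):
            if v not in pred:            # first-reached sign wins in this toy
                pred[v] = pred[u] * sg
                queue.append(v)
    return pred

def candidate_roots(observed, edges):
    """(node, assumed sign) pairs whose propagation matches all observations."""
    nodes = {n for e in edges for n in e[:2]}
    return [(root, s0) for root in sorted(nodes) for s0 in (+1, -1)
            if all(propagate(root, s0, edges).get(v) == s
                   for v, s in observed.items())]

observed = {"L": +1, "P": +1, "T": +1}   # high level, pressure, temperature
cands = candidate_roots(observed, edges)
print(cands)
```

In the paper's framework, each surviving candidate is then subjected to a quantitative local consistency test against the monitoring residuals before a diagnosis is declared.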

16.
A novel real‐time soft sensor based on a sparse Bayesian probabilistic inference framework is proposed for the prediction of melt index in an industrial polypropylene process. The Bayesian framework consists of a relevance vector machine for predicting melt index and a particle filtering algorithm for soft sensor optimization. An online correcting strategy is also developed for improving the performance of real‐time melt index prediction. The method takes advantage of probabilistic inference and uses prior statistical knowledge of the polymerization process. The developed soft sensors are validated with ten public data sets from the UCI machine learning repository and with real data from an industrial polypropylene process. Experimental results indicate the effectiveness of the proposed method and show improvement in both prediction precision and generalization capability compared with models reported in the literature. © 2017 Wiley Periodicals, Inc. J. Appl. Polym. Sci. 2017 , 134, 45384.

17.
Liquid composite molding is broadly used for manufacturing composite parts. Apart from the preforming of the dry fibrous material, mold filling and curing of the resin are the main steps in the manufacturing process. For process simulation, numerical methods such as finite element methods are applied. Flow models describing the flow behavior through a porous medium are well established. The ability to predict and monitor the curing process in liquid composite molding is crucial for manufacturing process optimization in the case of rapid-curing resin systems. Based on differential scanning calorimetry and rheological experiments, the cure kinetics and viscosity of a resin system were characterized. A new kinetic and complex viscosity model is proposed to predict epoxy resin properties in numerical modeling of liquid composite molding. The semi-empirical models are simple to use and therefore suitable for process optimization in an industrial environment. Both models were validated by fitting to the experimental data using the Levenberg-Marquardt method. A procedure to determine the initial values for the fitting is also proposed. The predictions of the validated models were in good agreement with the measured data, and are therefore applicable for numerical process optimization. Polym. Compos. 25:255–269, 2004. © 2004 Society of Plastics Engineers.
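A Levenberg-Marquardt fit of a cure-kinetics model can be sketched with `scipy.optimize.curve_fit` (whose `method="lm"` uses Levenberg-Marquardt); the autocatalytic rate form, parameter values, and noise level below are generic illustrations, not the paper's specific kinetic or viscosity model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

def cure_rate(alpha, k, m, n):
    """Autocatalytic cure-kinetics form: d(alpha)/dt = k * alpha^m * (1-alpha)^n."""
    return k * alpha**m * (1 - alpha)**n

# synthetic rate-vs-conversion data with 2% multiplicative noise
alpha = np.linspace(0.05, 0.95, 60)
true_params = (2.0, 0.5, 1.5)
rate = cure_rate(alpha, *true_params) * (1 + 0.02 * rng.normal(size=alpha.size))

p0 = (1.0, 1.0, 1.0)    # crude start values; the paper proposes its own procedure
popt, _ = curve_fit(cure_rate, alpha, rate, p0=p0, method="lm")
print("fitted (k, m, n):", popt)
```

Because Levenberg-Marquardt is a local optimizer, the quality of the start values matters, which is why the paper devotes a procedure to determining them.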

18.
A prototype expert system, called MODEX, for locating the cause(s) of a set of abnormalities in a chemical process is described. We discuss a methodology that aids the development of expert systems which are process-independent, transparent in their reasoning, and capable of diagnosing a wide diversity of faults. The domain knowledge of the system is based on qualitative reasoning principles and captures physical interconnections between equipment units as well as causal relationships among process state variables. The inference strategy uses model-based reasoning for analyzing the plant behavior. Using a variant of the technique adopted from fault tree synthesis, an initially observed abnormal symptom is considered to be a top level event and a tree structure is constructed as the system searches for a basic event to which the fault can be traced. The diagnostic reasoning process is driven by a problem reduction strategy. The knowledge base is process-independent, thereby enhancing the generality of the expert system. Reasoning from first principles with the aid of causal and fault models facilitates the diagnosis of novel or unanticipated faults. The system does not assume a single causal origin for all initially observed faults in the chemical process. Moreover, the system has the ability to locate multiple basic causes of a fault. The methodology also permits one to investigate the causal origins of multiple, unrelated faults. The system provides explanations to user queries at various degrees of detail. Two test cases are discussed in detail.

19.
Additive manufacturing describes technologies that translate virtual computer-aided design data into physical models in a fast process. While industries such as automotive and aerospace are adopting this manufacturing technique rapidly, it is rarely applied within process engineering. Additive manufacturing offers freedom of design, which gives access to novel shapes and geometries with fast production times. This review first analyses the most important layer fabrication principles and then shows applications of additive manufacturing in fluid process engineering. The review focuses on applications where liquids and gases are involved, and it showcases the potential of additive manufacturing within the process engineering of functional devices. Examples of current research projects show the potential of the technology for advanced process engineering.

20.
General principles of systems analysis of self-compaction technology are considered. A group of factors responsible for functioning of individual blocks and the process as a whole is established. Analytical dependences relating the characteristic factors of the manufacturing process are derived by methods of statistical analysis and interpreted graphically. The results of the study are used to solve practical problems of choosing raw materials for refractory compositions and the process regimes. Translated from Ogneupory i Tekhnicheskaya Keramika, No. 4, pp. 2–7, April, 1998.
