Similar Articles (20 results)
1.
Based on the concept of a Lévy copula to describe the dependence structure of a multivariate Lévy process, we present a new estimation procedure. We consider a parametric model for the marginal Lévy processes as well as for the Lévy copula and estimate the parameters by a two-step procedure. We first estimate the parameters of the marginal processes and then, in a second step, estimate only the dependence structure parameter. For infinite Lévy measures, we truncate the small jumps and base our statistical analysis on the large jumps of the model. A prominent example is a bivariate stable Lévy process, which allows for analytic calculations and, hence, for a comparison of different methods. We prove asymptotic normality of the parameter estimates from the two-step procedure and, in particular, derive the Godambe information matrix, whose inverse is the covariance matrix of the normal limit law. A simulation study investigates the loss of efficiency due to the two-step procedure and the truncation.
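The two-step principle (fit the marginals first, then estimate a single dependence parameter with the margins held fixed) can be sketched as follows. This is a generic illustration, not the authors' estimator for Lévy processes: the simulated data, the exponential margins, and the Clayton-copula Kendall-tau inversion used in the second step are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a dependent bivariate sample standing in for the
# large jumps of a bivariate process (a shared exponential factor induces
# positive dependence).
n = 400
z = rng.exponential(1.0, n)
x = z + rng.exponential(0.5, n)
y = z + rng.exponential(0.5, n)

# Step 1: marginal parameters only (the exponential-scale MLE is the sample mean).
scale_x, scale_y = x.mean(), y.mean()

# Step 2: with the margins fixed, estimate one dependence parameter.
# Here: Kendall-tau inversion for a Clayton copula, theta = 2*tau / (1 - tau),
# a simple moment-type stand-in for the second-step estimating equation.
def kendall_tau(a, b):
    m = len(a)
    s = 0.0
    for i in range(m - 1):
        s += np.sum(np.sign(a[i] - a[i + 1:]) * np.sign(b[i] - b[i + 1:]))
    return 2.0 * s / (m * (m - 1))

tau = kendall_tau(x, y)
theta = 2.0 * tau / (1.0 - tau)
print(scale_x, scale_y, tau, theta)
```

Because step 2 treats the marginal estimates as fixed, its sampling error propagates into the dependence estimate, which is exactly the efficiency loss the Godambe information matrix quantifies in the abstract above.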

2.
Chemical processes are becoming increasingly complicated, leading to an increase in process variables and more complex relationships among them. The vine copula has a significant advantage in portraying the dependence of high-dimensional variables. However, as the dimension increases, the vine copula model incurs a high computational load, which greatly reduces model efficiency. Relationships among variables in an industrial process are complex: different variables may be strongly or weakly associated, or even independent. This paper proposes a process monitoring method based on correlated-variable classification and the vine copula. A weighted correlation measure is first used to divide the variables into a correlated subspace and a weakly correlated subspace. Then two vine structures, C-vine and D-vine, are applied to the correlated and weakly correlated subspaces, respectively. This method takes advantage of the C-vine for correlated variables and the flexibility of the D-vine for weakly correlated variables. Finally, comprehensive statistics are established based on the different subspaces. Monitoring results for a numerical system and the Tennessee Eastman process demonstrate the effectiveness and validity of the proposed method.
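The first stage, splitting the variables into a correlated and a weakly correlated subspace by a correlation score, can be sketched as follows. The score used here (average absolute correlation with the other variables) and the threshold are simplified assumptions, not the paper's exact weighted correlation measure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process data: 6 variables, the first three driven by a common factor.
n = 500
f = rng.normal(size=n)
X = np.column_stack([
    f + 0.1 * rng.normal(size=n),
    f + 0.1 * rng.normal(size=n),
    f + 0.1 * rng.normal(size=n),
    rng.normal(size=n),
    rng.normal(size=n),
    rng.normal(size=n),
])

# Score each variable by its average absolute correlation with the others
# (a simple stand-in for the weighted correlation measure).
C = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(C, 0.0)
score = C.sum(axis=0) / (C.shape[0] - 1)

threshold = 0.25                                  # assumed cutoff for illustration
correlated = np.where(score >= threshold)[0]      # candidate C-vine subspace
weak = np.where(score < threshold)[0]             # candidate D-vine subspace
print(list(correlated), list(weak))
```

Each subspace would then get its own vine model and monitoring statistic, as the abstract describes.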

3.
Competing risks data arise naturally in medical research, when subjects under study are at risk of more than one mutually exclusive event such as death from different causes. The competing risks framework also includes settings where different possible events are not mutually exclusive but the interest lies on the first occurring event. For example, in HIV studies where seropositive subjects are receiving highly active antiretroviral therapy (HAART), treatment interruption and switching to a new HAART regimen act as competing risks for the first major change in HAART. This article introduces competing risks data and critically reviews the widely used statistical methods for estimation and modelling of the basic (estimable) quantities of interest. We discuss the increasingly popular Fine and Gray model for subdistribution hazard of interest, which can be readily fitted using standard software under the assumption of administrative censoring. We present a simulation study, which explores the robustness of inference for the subdistribution hazard to the assumption of administrative censoring. This shows a range of scenarios within which the strictly incorrect assumption of administrative censoring has a relatively small effect on parameter estimates and confidence interval coverage. The methods are illustrated using data from HIV-1 seropositive patients from the collaborative multicentre study CASCADE (Concerted Action on SeroConversion to AIDS and Death in Europe).
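The basic estimable quantity in this setting, the cumulative incidence function, can be estimated nonparametrically. Below is a minimal sketch of the Aalen-Johansen estimator for two competing events; it assumes fully observed (uncensored) data and uses made-up illustration times, and it is not the Fine and Gray subdistribution model discussed above.

```python
import numpy as np

# Made-up illustration data: event time and event type (1 or 2) per subject.
times = np.array([2.0, 3.0, 3.0, 5.0, 7.0, 8.0, 9.0, 11.0])
cause = np.array([1,   2,   1,   1,   2,   1,   2,   1])

def cumulative_incidence(times, cause, k, t):
    """P(T <= t, cause = k), Aalen-Johansen form with no censoring.

    Ties are processed in sorted order, a simplification for this sketch.
    """
    order = np.argsort(times)
    times, cause = times[order], cause[order]
    n = len(times)
    surv = 1.0   # overall (all-cause) survival just before the current time
    cif = 0.0
    for i, (ti, ci) in enumerate(zip(times, cause)):
        if ti > t:
            break
        at_risk = n - i
        if ci == k:
            cif += surv / at_risk     # increment by S(t-) * cause-k hazard
        surv *= 1.0 - 1.0 / at_risk
    return cif

# With no censoring, the cause-specific incidences sum to the overall
# event probability.
total = (cumulative_incidence(times, cause, 1, 20.0)
         + cumulative_incidence(times, cause, 2, 20.0))
print(total)
```

Note that 1 minus the Kaplan-Meier estimator applied to one cause (treating the other as censoring) would overestimate these probabilities; that distinction is what motivates the competing risks machinery reviewed in the article.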

4.
Abstract. The dependence structure in multivariate financial time series is of great importance in portfolio management. By studying daily return histories of 17 exchange-traded index funds, we identify important features of the data, and we propose two new models to capture these features. The first is an extension of the multivariate BEKK (Baba, Engle, Kraft, Kroner) model, which includes a multivariate t-type error distribution with different degrees of freedom. We demonstrate that this error distribution is able to accommodate different levels of heavy-tailed behaviour and thus provides a better fit than models based on a multivariate t with a common degree of freedom. The second model is copula based, and can be regarded as an extension of the standard and the generalized dynamic conditional correlation models [Engle, Journal of Business & Economic Statistics (2002) Vol. 17, 425–446; Cappiello et al. (2003) Working paper, UCSD] to a Student copula. Model comparison is carried out using criteria including the Akaike information criterion and the Bayesian information criterion. We also evaluate the two models from an asset-allocation perspective using a three-asset portfolio as an example, constructing optimal portfolios based on Markowitz theory. Our results indicate that, for our data, the proposed models both outperform the standard BEKK model, with the copula model performing better than the extension of the BEKK model.

5.
This paper deals with the competing risks model as a special case of a multi-state model. The properties of the model are reviewed and contrasted to the so-called latent failure time approach. The relation between the competing risks model and right-censoring is discussed and regression analysis of the cumulative incidence function briefly reviewed. Two real data examples are presented and a guide to the practitioner is given.

6.
This article develops asymptotic theory for estimation of parameters in regression models for binomial response time series where serial dependence is present through a latent process. Use of generalized linear model estimating equations leads to asymptotically biased estimates of regression coefficients for binomial responses. An alternative is to use marginal likelihood, in which the variance of the latent process, but not its serial dependence, is accounted for. In practice, this is equivalent to using generalized linear mixed model estimation procedures, treating the observations as independent with a random effect on the intercept term in the regression model. We prove that this method leads to consistent and asymptotically normal estimates even if there is an autocorrelated latent process. Simulations suggest, however, that marginal likelihood estimation can collapse to the generalized linear model estimates, with the latent variance estimated as zero. The chance of this problem decreases rapidly with an increasing number of binomial trials at each time point, but for binary data it can remain over 45% even in very long time series. We provide a combination of theoretical and heuristic explanations for this phenomenon in terms of the properties of the regression component of the model, and these can be used to guide application of the method in practice.

7.
Abstract. A nonparametric test statistic based on the distance between the joint and marginal densities is developed to test for serial dependence in a given sequence of time series data. The key idea lies in observing that, under the null hypothesis of independence, the joint density of the observations is equal to the product of their individual marginals. Histograms are used in constructing the statistic, which is nonparametric and consistent, and it possesses high power in capturing subtle or diffuse dependence structure. A bilinear time series model is used to compare its performance with the classical correlation approach.
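The construction (compare the lag-1 joint histogram with the product of the marginal histograms) can be sketched as follows. The L1 distance, the bin count, and the AR(1) example series are illustrative choices on my part, not the article's exact statistic or its calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

def dependence_stat(x, bins=8):
    """L1 distance between the lag-1 joint histogram and the product of marginals."""
    a, b = x[:-1], x[1:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    joint = joint / joint.sum()          # joint relative frequencies
    px = joint.sum(axis=1)               # marginal of x_t
    py = joint.sum(axis=0)               # marginal of x_{t+1}
    return np.abs(joint - np.outer(px, py)).sum()

# An i.i.d. series versus a serially dependent AR(1) series.
iid = rng.normal(size=2000)
ar = np.empty(2000)
ar[0] = 0.0
for t in range(1, 2000):
    ar[t] = 0.8 * ar[t - 1] + rng.normal()

s_iid = dependence_stat(iid)
s_ar = dependence_stat(ar)
print(s_iid, s_ar)
```

Under independence the joint histogram should factor into the product of the marginals, so the statistic hovers near zero up to sampling noise; serial dependence inflates it.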

8.
This article considers the reliability analysis of a hybrid system with dependent components, which are linked by a copula function. Based on Type I progressive hybrid censored and masked system lifetime data, we derive some probability results for the hybrid system, and then the maximum likelihood estimates as well as the asymptotic confidence intervals and bootstrap confidence intervals of the unknown parameters are obtained. The effects of different dependence structures on the estimates of the parameters and the reliability function are investigated. Finally, Monte Carlo simulations are implemented to compare the performance of the estimates when the components are dependent with that when the components are independent.

9.
Multi-state models for event history analysis
An introduction to event history analysis via multi-state models is given. Examples include the two-state model for survival analysis, the competing risks and illness-death models, and models for bone marrow transplantation. Statistical model specification via transition intensities and likelihood inference is introduced. Consequences of observational patterns are discussed, and a real example concerning mortality and bleeding episodes in a liver cirrhosis trial is presented.

10.
The first-order nonnegative integer-valued autoregressive process has been applied to model counts of events at consecutive points in time. It is known that, if the innovations are assumed to follow a Poisson distribution, then the marginal model is also Poisson. This model may, however, not be suitable for overdispersed count data. One frequent manifestation of overdispersion is that the incidence of zero counts is greater than expected under a Poisson model. In this paper, we introduce a new stationary first-order integer-valued autoregressive process with zero-inflated Poisson innovations. We derive some structural properties such as the mean, variance, and marginal and joint distribution functions of the process. We consider estimation of the unknown parameters by conditional or approximate full maximum likelihood. We use simulation to study the limiting marginal distribution of the process and the performance of our fitting algorithms. Finally, we demonstrate the usefulness of the proposed model by analyzing some real time series on animal health laboratory submissions.
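The process itself is straightforward to simulate via binomial thinning, which is useful for checking fitted models. A minimal sketch with made-up parameter values follows; the stationary-mean check uses E[X] = (1 - pi0) * lambda / (1 - alpha), which follows from taking expectations in X_t = alpha ∘ X_{t-1} + e_t.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_zip_inar1(n, alpha, lam, pi0, burn=200):
    """INAR(1) with zero-inflated Poisson innovations.

    X_t = alpha o X_{t-1} + e_t, where 'o' is binomial thinning and
    e_t is 0 with probability pi0, else Poisson(lam).
    """
    x = np.zeros(n + burn, dtype=int)
    for t in range(1, n + burn):
        survivors = rng.binomial(x[t - 1], alpha)          # binomial thinning
        innov = 0 if rng.random() < pi0 else rng.poisson(lam)
        x[t] = survivors + innov
    return x[burn:]

x = simulate_zip_inar1(5000, alpha=0.4, lam=2.0, pi0=0.3)
theoretical_mean = (1 - 0.3) * 2.0 / (1 - 0.4)
print(x.mean(), theoretical_mean)
```

The burn-in discards the transient from the zero start so that the sample approximates the stationary distribution studied in the paper.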

11.
There has recently been an upsurge of interest in time series models for count data. Many papers focus on the model with first-order (Markov) dependence and Poisson innovations. Our paper considers practical models that can capture higher-order dependence based on the work of Joe (1996). In this framework we are able to model both equidispersed and overdispersed marginal distributions of data. The latter is approached using generalized Poisson innovations. Central to the models is the use of the property of closure under convolution of certain families of random variables. The models can be thought of as stationary Markov chains of finite order. Parameter estimation is undertaken by maximum likelihood, inference procedures are considered and means of assessing model adequacy employed. Applications to two new data sets are provided.

12.
This paper presents an extension of a general parametric class of transitional models of order p. In these models, the conditional distribution of the current observation, given the present and past history, is a mixture of conditional distributions, each of them corresponding to the current observation given one of the p lagged observations. Such conditional distributions are constructed using bivariate copula models, which allow for a rich range of dependence suitable for modelling non-Gaussian time series. Fixed and time-varying covariates can be included in the models. These models have the advantage of straightforward construction and estimation for the analysis of time series and more general longitudinal data. A poliomyelitis incidence data set is used to illustrate the proposed methods. Contrary to the conclusions of other researchers, whose methods are mainly based on linear models, we find significant evidence of a decreasing trend in polio infection after accounting for seasonality.

13.
Abstract. The analysis of liquidity in financial markets is generally performed by means of the dynamics of the observed intertrade durations (possibly weighted by price or volume). Various dynamic models for duration data have been considered in the literature, such as the Autoregressive Conditional Duration (ACD) model. These models are often excessively constrained, introducing, for example, a deterministic link between conditional expectation and variance in the case of the ACD model. Moreover, the stationarity properties and the patterns of the stationary distributions are often unknown. The aim of this article is to solve these difficulties by considering a duration time series satisfying the proportional hazard property. We describe in detail this class of dynamic models, discuss its various representations and provide the ergodicity conditions. The proportional hazard copula can be specified either parametrically, or nonparametrically. We discuss estimation methods in both contexts, and explain why they are efficient, that is, why they reach the parametric (respectively, nonparametric) efficiency bound.

14.
Five mechanistic models of mixing and chemical reaction having an analogy with isotropic turbulent mixing are evaluated. The turbulence analogies, based on matching the variance decay laws of the models and of turbulence theory, provide a physical basis for the models and a means of estimating their micromixing parameters a priori. Experimental data for single second-order liquid-phase reactions provide strong support for the analogies. However, it is demonstrated that in spite of their success for single reactions, the models may predict grossly different selectivities in the case of competing reactions in a plug flow reactor. This emphasizes the importance of certain structural features of the models which are independent of the existence of a turbulence analogy, such as: (i) reacting regions which are rich in each of the reactants of a two-feedstream reactor, and (ii) unmixed regions in the reaction mixture. The importance of obtaining data for competing reactions in a highly segregated plug flow reactor for the purpose of model discrimination is made apparent.

15.
Two conditions must be fulfilled for an intermediate endpoint to be an acceptable surrogate for a true clinical endpoint: (1) there must be a strong association between the surrogate and the true endpoint, and (2) there must be a strong association between the effects of treatment on the surrogate and the true endpoint. We test whether these conditions are fulfilled for disease-free survival (DFS) and progression-free survival (PFS) on data from 20 clinical trials comparing experimental treatments with standard treatments for early and advanced colorectal cancer. The effects of treatment on DFS (or PFS in advanced disease) and overall survival (OS) were quantified through log hazard ratios (log HR), estimated through a Weibull model stratified by trial. The rank correlation coefficients between DFS and OS, and the trial-specific treatment effects, were estimated using a bivariate copula distribution for these endpoints. A linear regression model between the estimated log hazard ratios was used to compute the "surrogate threshold effect", which is the minimum treatment effect on DFS required to predict a non-zero treatment effect on OS in a future trial. In early disease, the rank correlation coefficient between DFS and OS was equal to 0.96 (CI 0.95-0.97). The correlation coefficient between the log hazard ratios was equal to 0.94 (CI 0.87-1.01). The risk reductions were approximately 3% smaller on OS than on DFS, and the surrogate threshold effect corresponded to a DFS hazard ratio of 0.93. In advanced disease, the rank correlation coefficient between PFS and OS was equal to 0.82 (CI 0.82-0.83). The correlation coefficient between the log hazard ratios was equal to 0.99 (CI 0.94-1.04). The risk reductions were approximately 19% smaller on OS than on PFS, and the surrogate threshold effect corresponded to a PFS hazard ratio of 0.86. One trial with a large treatment effect on PFS and OS had a strong influence on the results in advanced disease.
DFS (and PFS in advanced disease) are acceptable surrogates for OS in colorectal cancer.

16.
We present a theoretical model for the nucleation of amyloid fibrils. In our model, we use helix-coil theory to describe the equilibrium between a soluble native state and an aggregation-prone unfolded state. We then extend the theory to include oligomers with β-sheet cores, and calculate the free energy of these states using estimates for the energies of H-bonds, steric-zipper interactions, and the conformational entropy cost of forming secondary structure. We find that states with fewer than ∼10 β-strands are unstable, relative to the dissociated state, and three β-strands is the highest free-energy state. We then use a modified version of classical nucleation theory to compute the nucleation rate of fibrils from a supersaturated solution of monomers, dimers, and trimers. The nucleation rate has a nonmonotonic dependence on denaturant concentration, reflecting the competing effects of destabilizing the fibril and increasing the concentration of unfolded monomers. We estimate heterogeneous nucleation rates, and discuss the application of our model to secondary nucleation.

17.
Integrated safety analysis of hazardous process facilities calls for an understanding of both stochastic and topological dependencies, going beyond traditional Bayesian Network (BN) analysis to study cause-effect relationships among major risk factors. This paper presents a novel model based on the Copula Bayesian Network (CBN) for multivariate safety analysis of process systems. The innovation of the proposed CBN model is in integrating the advantage of copula functions in modelling complex dependence structures with the cause-effect relationship reasoning of process variables using BNs. This offers great flexibility in the probabilistic analysis of individual risk factors while considering their uncertainty and stochastic dependence. Methods based on maximum likelihood evaluation and information theory are presented to learn the structure of CBN models. The superior performance of the CBN model and its advantages compared to traditional BN models are demonstrated by application to an offshore managed pressure drilling case study.

18.
High-dose chemotherapy followed by stem cell recovery, more commonly called a bone marrow transplant, is a common treatment for a number of diseases. This article examines four problems commonly encountered when dealing with bone marrow transplant studies. First, we look at the problem of competing causes of failure and at methods based on a multi-state model to estimate meaningful probabilities for these risks. Second, we examine methods for estimating the prevalence of an intermediate condition, here the prevalence of chronic GVHD (graft-versus-host disease). Third, we look at the problem of modeling the post-transplant recovery process, and we provide two examples of how these estimates can be used to assess a patient's prognosis dynamically, or how these probabilities can be used to design trials of new therapies. Finally, we present an estimate of a new measure of treatment efficiency, the current leukemia-free survival function, which is derived from a multi-state model approach.

19.
We develop a likelihood ratio (LR) test procedure for discriminating between a short-memory time series with a change-point (CP) and a long-memory (LM) time series. Under the null hypothesis, the time series consists of two segments of short-memory time series with different means and possibly different covariance functions; the location of the shift in the mean is unknown. Under the alternative, the time series has no shift in mean but rather is long memory. The LR statistic is defined as the normalized log-ratio of the Whittle likelihood between the CP model and the LM model, and is asymptotically normally distributed under the null. The LR test provides a parametric alternative to the CUSUM test proposed by Berkes et al. (2006). Moreover, the LR test is more general than the CUSUM test in the sense that it is applicable to changes in marginal or dependence features other than the mean. We show its good performance in simulations and apply it to two data examples.

20.
The reversible proton dissociation and geminate recombination of photoacids was studied as a function of temperature in neat water, in a binary water mixture containing 0.6 mol% glycerol, and in doped ice containing 0.6 mol% glycerol. The deuterium isotope effect in both condensed phases was also studied. 8-Hydroxypyrene-1,3,6-trisulfonate trisodium salt was used as the electronically excited-state proton emitter. The experimental data are analyzed by the Debye–Smoluchowski equation, solved numerically with boundary conditions that account for the reversibility of the reaction. We propose a qualitative model to describe the unusual temperature dependence of the proton-transfer rate in the liquid phase. We also propose a model for proton transfer in solid ice based on the transport of L-defects as proton acceptors. While in the liquid phase at T > 10°C the proton dissociation rate constant is almost temperature independent, in glycerol-doped ice we find a large temperature dependence.
