1.
In this paper we propose a new procedure for detecting additive outliers in a univariate time series, based on a bootstrap implementation of the test of Perron and Rodríguez (2003, Journal of Time Series Analysis 24, 193–220). This procedure is used to test the null hypothesis that a time series is uncontaminated by additive outliers against the alternative that one or more additive outliers are present. We demonstrate that the existing tests of, inter alia, Vogelsang (1999, Journal of Time Series Analysis 20, 237–252), Perron and Rodríguez (2003) and Burridge and Taylor (2006, Journal of Time Series Analysis 27, 685–701) are unable to strike a balance between size and power when the order of integration of a time series is unknown and the time series is driven by innovations drawn from an unknown distribution. We show that the proposed bootstrap testing procedure is able to control size to such an extent that its size properties are comparable with those of the robust test of Burridge and Taylor (2006) when the distribution of the innovations is not assumed known, whilst maintaining power in the Gaussian environment close to that of the test of Perron and Rodríguez (2003).
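The bootstrap logic behind such a test can be sketched as follows. This is a minimal illustration of the general pattern (a test statistic compared against a robustly calibrated resampled null distribution), not the actual Perron–Rodríguez statistic or the authors' bootstrap scheme; the max-difference statistic and the MAD-based scale are simplified stand-ins.

```python
import numpy as np

def ao_stat(y):
    # Illustrative statistic: the largest first difference, standardized by a
    # robust (MAD-based) scale.  An additive outlier at time t produces two
    # large opposite-signed jumps in diff(y), so the maximum is sensitive to it.
    d = np.diff(y)
    s = 1.4826 * np.median(np.abs(d - np.median(d)))
    return np.max(np.abs(d - np.median(d))) / s

def bootstrap_ao_pvalue(y, n_boot=999, seed=0):
    """Bootstrap p-value for H0: no additive outliers.  The null distribution
    is approximated by Gaussian resamples calibrated to a robust scale
    estimate, so contamination in the data does not inflate the null."""
    rng = np.random.default_rng(seed)
    d = np.diff(y)
    s = 1.4826 * np.median(np.abs(d - np.median(d)))
    stat = ao_stat(y)
    count = 0
    for _ in range(n_boot):
        d_star = rng.normal(0.0, s, size=d.size)
        y_star = np.concatenate(([y[0]], y[0] + np.cumsum(d_star)))
        count += ao_stat(y_star) >= stat
    return (1 + count) / (1 + n_boot)

rng = np.random.default_rng(1)
clean = np.cumsum(rng.normal(size=200))       # random walk, no outliers
contaminated = clean.copy()
contaminated[100] += 15.0                     # one large additive outlier
p_clean = bootstrap_ao_pvalue(clean)
p_contam = bootstrap_ao_pvalue(contaminated)
```

The robust scale is what keeps size under control here: the outlier inflates neither the null resamples nor the denominator of the statistic.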
2.
We discuss robust M‐estimation of INARCH models for count time series. These models assume the observation at each point in time to follow a Poisson distribution conditionally on the past, with the conditional mean being a linear function of previous observations. This simple linear structure allows us to transfer M‐estimators for autoregressive models to this situation, with some simplifications being possible because the conditional variance given the past equals the conditional mean. We investigate the performance of the resulting generalized M‐estimators using simulations. The usefulness of the proposed methods is illustrated by real data examples.
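A minimal sketch of this idea for an INARCH(1) model, using a Huber-type objective on Pearson residuals; because the conditional variance equals the conditional mean, the residual scale is simply the square root of the conditional mean. The tuning constant, starting values and optimizer below are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_inarch1(beta0, beta1, n, rng):
    """INARCH(1): Y_t | past ~ Poisson(beta0 + beta1 * Y_{t-1})."""
    y = np.zeros(n, dtype=int)
    y[0] = rng.poisson(beta0 / (1.0 - beta1))   # start near the stationary mean
    for t in range(1, n):
        y[t] = rng.poisson(beta0 + beta1 * y[t - 1])
    return y

def huber_rho(u, k=1.345):
    # Huber loss: quadratic near zero, linear in the tails.
    a = np.abs(u)
    return np.where(a <= k, 0.5 * u ** 2, k * a - 0.5 * k ** 2)

def m_estimate_inarch1(y, k=1.345):
    """Huber-type M-estimator: Pearson residuals divide by sqrt(lambda_t)
    since the conditional variance equals the conditional mean."""
    y_prev, y_cur = y[:-1].astype(float), y[1:].astype(float)
    def objective(theta):
        b0, b1 = theta
        if b0 <= 0 or not (0 <= b1 < 1):
            return 1e12                          # stay in the stationary region
        lam = b0 + b1 * y_prev
        return float(huber_rho((y_cur - lam) / np.sqrt(lam), k).sum())
    return minimize(objective, x0=[max(y.mean(), 1.0) * 0.5, 0.3],
                    method="Nelder-Mead").x

rng = np.random.default_rng(7)
y = simulate_inarch1(beta0=2.0, beta1=0.5, n=2000, rng=rng)
b0_hat, b1_hat = m_estimate_inarch1(y)
```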
3.
Fani Boukouvala, Fernando J. Muzzio, Marianthi G. Ierapetritou. AIChE Journal, 2010, 56(11): 2860–2872
Lack of knowledge of the first principles that describe the behavior of processed particulate mixtures has drawn significant attention to data‐driven models for characterizing the performance of pharmaceutical processes, which are often treated as black‐box operations. Uncertainty contained in the experimental data sets, however, can decrease the quality of the resulting predictive models. In this work, the effect of missing and noisy data on the predictive capability of surrogate modeling methodologies such as Kriging and the Response Surface Method (RSM) is evaluated. The key areas that affect the final prediction error and the computational efficiency of the algorithm were found to be: (a) the method used to assign initial estimates to the missing elements and (b) the iterative procedure used to further improve these initial estimates. The proposed approach combines the most appropriate initialization technique with the Expectation Maximization Principal Component Analysis algorithm to impute missing elements and minimize noise. Comparative analysis of different initial imputation techniques, such as the mean, a matching procedure, and a Kriging‐based approach, shows that the first two of these give more accurate, “warm‐start” estimates of the missing data points, which can significantly reduce computational time requirements. Experimental data from two case studies of different unit operations of the pharmaceutical powder tablet production process (feeding and mixing) are used as examples to illustrate the performance of the proposed methodology. Results show that by introducing an extra imputation step, the pseudo‐complete data sets created produce very accurate predictive responses, whereas discarding incomplete observations leads to loss of valuable information and distortion of the predictive response. Results are also given for different percentages of missing data and different missing patterns.
© 2010 American Institute of Chemical Engineers AIChE J, 2010
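The two-step structure described above, a "warm-start" initialization followed by iterative refinement, can be sketched with a simple EM-PCA-style imputation loop: fill missing entries with column means, then alternate rank-k SVD reconstruction with refilling. The rank, iteration count and stopping rule are illustrative, not the paper's settings.

```python
import numpy as np

def em_pca_impute(X, n_components=2, n_iter=50, tol=1e-6):
    """Impute NaN entries by iterating: fill -> rank-k SVD reconstruction ->
    refill only the missing entries.  Column means provide the warm-start
    initial estimates (step a); the SVD loop refines them (step b)."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X_filled = np.where(mask, col_means, X)       # (a) initialization
    for _ in range(n_iter):                       # (b) iterative refinement
        mu = X_filled.mean(axis=0)
        U, s, Vt = np.linalg.svd(X_filled - mu, full_matrices=False)
        approx = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        new = np.where(mask, approx, X)           # observed entries stay fixed
        if np.max(np.abs(new - X_filled)) < tol:
            X_filled = new
            break
        X_filled = new
    return X_filled

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 6))
X_true = scores @ loadings + 0.05 * rng.normal(size=(100, 6))   # low-rank + noise
X_obs = X_true.copy()
miss = rng.random(X_obs.shape) < 0.1              # 10% missing at random
X_obs[miss] = np.nan

X_imputed = em_pca_impute(X_obs, n_components=2)
rmse_em = np.sqrt(np.mean((X_imputed[miss] - X_true[miss]) ** 2))
X_meanfill = np.where(miss, np.nanmean(X_obs, axis=0), X_obs)
rmse_mean = np.sqrt(np.mean((X_meanfill[miss] - X_true[miss]) ** 2))
```

On low-rank data the refined estimates beat the plain mean fill by a wide margin, which mirrors the paper's finding that the iterative step is what makes the pseudo-complete data set usable.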
4.
We study inference and diagnostics for count time series regression models that include a feedback mechanism. In particular, we are interested in negative binomial processes for count time series. We study probabilistic properties and quasi‐likelihood estimation for this class of processes, and show that the resulting estimators are consistent and asymptotically normally distributed. These facts enable us to construct probability integral transformation plots for assessing the assumed distributional form. The key observation in developing the theory is a mean‐parameterized form of the negative binomial distribution. For transactions data, the negative binomial distribution is seen to offer a better fit than the Poisson distribution. This is an immediate consequence of the fact that transactions can be represented as a collection of individual activities that correspond to different trading strategies.
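The mean parameterization and the PIT diagnostic can be sketched briefly. Below, the negative binomial is written with mean mu and dispersion nu (so Var = mu + mu²/nu), mapped onto scipy's (n, p) convention, and a randomized PIT (a close relative of the plots the abstract describes) is used to compare a correctly specified negative binomial model against a misspecified Poisson one; the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import nbinom, poisson

def nb_mean_param(mu, nu):
    """Mean-parameterized negative binomial: E[Y] = mu, Var(Y) = mu + mu^2/nu.
    In scipy's (n, p) convention this is n = nu, p = nu / (nu + mu)."""
    return nbinom(nu, nu / (nu + mu))

def randomized_pit(y, cdf, rng):
    """Randomized PIT for discrete data: draw uniformly on [F(y-1), F(y)].
    Under a correctly specified model the values are exactly Uniform(0, 1),
    so a flat histogram indicates an adequate distributional fit."""
    lo, hi = cdf(y - 1), cdf(y)
    return lo + rng.uniform(size=y.size) * (hi - lo)

rng = np.random.default_rng(3)
mu, nu = 5.0, 3.0
y = nb_mean_param(mu, nu).rvs(size=5000, random_state=rng)

u_nb = randomized_pit(y, nb_mean_param(mu, nu).cdf, rng)   # correct model
u_pois = randomized_pit(y, poisson(mu).cdf, rng)           # ignores overdispersion
var_nb, var_pois = u_nb.var(), u_pois.var()
```

The Poisson PIT values pile up near 0 and 1 (a U-shaped histogram) because the data are overdispersed relative to the model, so their variance exceeds the uniform value 1/12.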
5.
Abstract. We show that changes in the innovation covariance matrix of a vector of series can generate spurious rejections of the null hypothesis of co‐integration when applying standard residual‐based co‐integration tests. A bootstrap solution to the inference problem is suggested which is shown to perform well in practice, redressing the size problems associated with the standard test but not losing power relative to the standard test under the alternative.
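For reference, the standard residual-based statistic the abstract refers to can be sketched as an Engle–Granger-type test: regress one series on the other and compute the Dickey–Fuller t-ratio of the residuals. Critical values, augmentation lags and the bootstrap correction are omitted; this is only the building block, not the paper's procedure.

```python
import numpy as np

def engle_granger_stat(y, x):
    """Residual-based co-integration statistic: OLS of y on x, then the
    Dickey-Fuller t-ratio of the residuals (no lag augmentation).  Large
    negative values indicate stationary residuals, i.e. co-integration."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    du, ul = np.diff(u), u[:-1]
    rho = (ul @ du) / (ul @ ul)
    resid = du - rho * ul
    se = np.sqrt(resid.var(ddof=1) / (ul @ ul))
    return rho / se

rng = np.random.default_rng(2)
n = 500
x = np.cumsum(rng.normal(size=n))              # random walk
y_coint = 2.0 * x + rng.normal(size=n)         # co-integrated with x
y_indep = np.cumsum(rng.normal(size=n))        # independent random walk

t_coint = engle_granger_stat(y_coint, x)
t_indep = engle_granger_stat(y_indep, x)
```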
6.
Abstract. Quasi‐likelihood ratio tests for autoregressive moving‐average (ARMA) models are examined. The ARMA models are stationary and invertible with white‐noise terms that are not restricted to be normally distributed. The white‐noise terms are instead subject to the weaker assumption that they are independently and identically distributed with an unspecified distribution. Bootstrap methods are used to improve control of the finite sample significance levels. The bootstrap is used in two ways: first, to approximate a Bartlett‐type correction; and second, to estimate the p‐value of the observed test statistic. Some simulation evidence is provided. The bootstrap p‐value test emerges as the best performer in terms of controlling significance levels.
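The bootstrap p-value idea can be sketched with a simpler pair of nested models. The example below tests AR(1) against AR(2) (both fit by closed-form OLS, avoiding an ARMA optimizer): fit the null model, resample its centred residuals i.i.d., rebuild series under the null, and take the p-value as the fraction of bootstrap statistics at least as large as the observed one. The model pair and statistic are simplified stand-ins for the paper's ARMA quasi-LR setting.

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p) without intercept; returns (coefficients, residuals)."""
    Y = y[p:]
    X = np.column_stack([y[p - j: len(y) - j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef, Y - X @ coef

def lr_stat(y):
    """Quasi-LR statistic for AR(1) against AR(2), compared on a common sample."""
    _, e1 = fit_ar(y, 1)
    _, e2 = fit_ar(y, 2)
    return e2.size * np.log((e1[1:] ** 2).sum() / (e2 ** 2).sum())

def bootstrap_lr_pvalue(y, n_boot=499, seed=0):
    """Bootstrap p-value: resample the null model's centred residuals i.i.d.
    (no distributional assumption), regenerate series under the null, and
    compare the regenerated statistics with the observed one."""
    rng = np.random.default_rng(seed)
    stat = lr_stat(y)
    (phi,), resid = fit_ar(y, 1)
    resid = resid - resid.mean()
    burn, count = 50, 0
    for _ in range(n_boot):
        e = rng.choice(resid, size=len(y) + burn, replace=True)
        y_star = np.zeros(len(y) + burn)
        for t in range(1, y_star.size):
            y_star[t] = phi * y_star[t - 1] + e[t]
        count += lr_stat(y_star[burn:]) >= stat
    return (1 + count) / (1 + n_boot)

rng = np.random.default_rng(5)
e = rng.normal(size=300)
y_null, y_alt = np.zeros(300), np.zeros(300)
for t in range(2, 300):
    y_null[t] = 0.5 * y_null[t - 1] + e[t]                     # AR(1): H0 true
    y_alt[t] = 0.3 * y_alt[t - 1] + 0.5 * y_alt[t - 2] + e[t]  # AR(2): H0 false
p_null = bootstrap_lr_pvalue(y_null)
p_alt = bootstrap_lr_pvalue(y_alt)
```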
7.
Many empirical findings show that volatility in financial time series exhibits high persistence. Some researchers argue that such persistence is due to volatility shifts in the market, while others believe it is a natural fluctuation explained by stationary long‐range dependence models. These two views confuse many practitioners, and forecasts of future volatility differ dramatically depending on which model is used. In this article, therefore, we consider a statistical testing procedure to distinguish volatility shifts in a generalized autoregressive conditional heteroskedasticity (GARCH) model from long‐range dependence. Our testing procedure is based on the residual‐based cumulative sum (CUSUM) test, which is designed to correct the size distortion observed for GARCH models. We establish the validity of our method by deriving the asymptotic distribution of the test statistic. A Monte Carlo simulation study also shows that our proposed method achieves good size control while providing reasonable power against long‐range dependence. The test is further observed to be robust to misspecified GARCH models.
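The core of a residual-based CUSUM test can be sketched as follows. The paper applies the statistic to GARCH residuals to correct size distortion; here, for brevity, it is applied directly to i.i.d. shocks, where the normalized partial sums of squares behave like a Brownian bridge (sup-norm 5% critical value about 1.358).

```python
import numpy as np

def cusum_of_squares(e):
    """CUSUM statistic on squared residuals: the maximal standardized
    deviation of the partial sums of e_t^2 from their fitted trend.  Under
    constant variance it converges to the supremum of a Brownian bridge."""
    s2 = e ** 2
    n = s2.size
    dev = np.cumsum(s2) - np.arange(1, n + 1) * s2.mean()
    tau = s2.std()                    # long-run scale (adequate in the iid case)
    return np.max(np.abs(dev)) / (tau * np.sqrt(n))

rng = np.random.default_rng(4)
e_const = rng.normal(0.0, 1.0, size=2000)                  # constant variance
e_shift = np.concatenate([rng.normal(0.0, 1.0, size=1000), # variance doubles
                          rng.normal(0.0, 2.0, size=1000)]) # at the midpoint
stat_const = cusum_of_squares(e_const)
stat_shift = cusum_of_squares(e_shift)
```

A variance shift makes the partial sums bow away from the straight-line trend, so the statistic blows past the critical value, which is exactly the behavior the test exploits to separate shifts from long-range dependence.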
8.
Dong Wan Shin. Journal of Time Series Analysis, 2011, 32(3): 292–303
We consider a stationary bootstrap approximation of the non‐parametric kernel estimator in a general kth‐order nonlinear autoregressive model, under conditions ensuring that the nonlinear autoregressive process is a geometrically Harris ergodic stationary Markov process. We show that the stationary bootstrap procedure properly estimates the distribution of the non‐parametric kernel estimator. A simulation study illustrates the theory, constructs confidence intervals, and compares the proposed method favorably with some other bootstrap methods.
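The resampling scheme itself, the stationary bootstrap of Politis and Romano, can be sketched compactly: concatenate blocks with uniformly random start points and geometrically distributed lengths, wrapping circularly around the series. For brevity the sketch builds a confidence interval for the lag-1 autocorrelation rather than for a kernel estimator.

```python
import numpy as np

def stationary_bootstrap(y, mean_block=10, rng=None):
    """One stationary-bootstrap resample: blocks with uniform random starts
    and Geometric(1/mean_block) lengths, wrapping circularly, so local
    dependence within blocks is preserved and the resample is stationary."""
    rng = rng or np.random.default_rng()
    n = len(y)
    out = np.empty(n)
    i = 0
    while i < n:
        start = rng.integers(n)
        length = min(rng.geometric(1.0 / mean_block), n - i)
        for j in range(length):
            out[i + j] = y[(start + j) % n]
        i += length
    return out

def lag1_acf(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

rng = np.random.default_rng(6)
e = rng.normal(size=500)
y = np.empty(500)
y[0] = e[0]
for t in range(1, 500):                        # AR(1) with phi = 0.6
    y[t] = 0.6 * y[t - 1] + e[t]

reps = np.array([lag1_acf(stationary_bootstrap(y, rng=rng)) for _ in range(200)])
ci = np.percentile(reps, [2.5, 97.5])          # bootstrap confidence interval
```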
9.
Antonio Bódalo, José L. Gómez, Elisa Gómez, M. Fuensanta Máximo, Asunción M. Hidalgo. Journal of Chemical Technology and Biotechnology, 2001, 76(9): 978–984
A simple model, based on kinetic studies, is shown to be capable of describing the transient state of an ultrafiltration membrane reactor (UFMR) for the continuous resolution of DL-valine. The reactor has been modelled as a perfectly mixed tank reactor. The model has been validated under different conditions (five flow rates and three feed concentrations) using the asymmetric hydrolysis of N-acetyl-DL-valine catalysed by L-aminoacylase as the experimental system, with an overall mean error of 4.08%. © 2001 Society of Chemical Industry
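A perfectly mixed tank model of this kind reduces to one substrate balance per species. The sketch below integrates such a balance with Michaelis–Menten-type enzyme kinetics by explicit Euler; all parameter values (feed concentration, flow rate, volume, Vmax, Km) are hypothetical illustrations, not the paper's fitted values.

```python
import numpy as np

def ufmr_transient(c_in=50.0, q=1.0, v=10.0, vmax=8.0, km=20.0,
                   t_end=100.0, dt=0.01):
    """Substrate balance for a perfectly mixed enzymatic membrane reactor:
        dC/dt = (Q/V) * (C_in - C) - Vmax * C / (Km + C)
    i.e. inflow minus outflow minus enzymatic consumption, integrated by
    explicit Euler from an initially substrate-free reactor."""
    n = int(t_end / dt)
    c = np.empty(n + 1)
    c[0] = 0.0
    for t in range(n):
        r = vmax * c[t] / (km + c[t])          # Michaelis-Menten rate
        c[t + 1] = c[t] + dt * ((q / v) * (c_in - c[t]) - r)
    return c

c = ufmr_transient()
steady = c[-1]          # transient settles to the steady-state concentration
```

Setting dC/dt = 0 with these numbers gives the quadratic C² + 50C − 1000 = 0, whose positive root is about 15.3, so the simulated trajectory should level off there, well below the feed concentration.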
10.
Andrea Baldini, Alberto Greco, Mirko Lomi, Roberta Giannelli, Paola Canale, Andrea Diana, Cristina Dolciotti, Renata Del Carratore, Paolo Bongioanni. International Journal of Molecular Sciences, 2022, 23(21)
Alzheimer’s disease (AD) is the leading cause of dementia, but its pathogenetic factors are not yet well known, and the relationships between brain and systemic biochemical derangements and disease onset and progression are unclear. We aim to focus on blood biomarkers for an accurate prognosis of the disease. We used a dataset of longitudinal findings collected over the past 10 years from 90 AD patients. The dataset included 277 observations (both clinical and biochemical, with blood analytes covering routine profiles for different organs together with immunoinflammatory and oxidative markers). Subjects were grouped into four severity classes according to the Clinical Dementia Rating (CDR) Scale: mild (CDR = 0.5 and CDR = 1), moderate (CDR = 2), severe (CDR = 3) and very severe (CDR = 4 and CDR = 5). Statistical models were used to identify potential blood markers of AD progression. Moreover, we employed the Pathfinder tool of the Reactome database to investigate the biological pathways in which the analytes of interest could be involved. The statistical results reveal a significant inverse relation between four analytes (high-density lipoprotein cholesterol, total cholesterol, iron and ferritin) and AD severity. In addition, the Reactome database suggests that these analytes could be involved in pathways that are altered in AD progression. Indeed, the identified blood markers include molecules that reflect the heterogeneous pathogenetic mechanisms of AD. The combination of such blood analytes might be an early indicator of AD progression and constitute useful therapeutic targets.
11.
M. Grosso, O. Galan, R. Baratti, J. A. Romagnoli. AIChE Journal, 2010, 56(8): 2077–2087
A stochastic approach to describing the dynamics of the crystal size distribution in antisolvent-based crystal growth processes is introduced here. Fluctuations in the process dynamics are taken into account by embedding a deterministic model into a Fokker‐Planck equation, which describes the evolution in time of the particle size distribution. The deterministic model used in this application is the logistic model, which proves adequate for capturing the characteristic dynamics of the growth process. Validations against experimental data are presented for the NaCl–water–ethanol antisolvent crystallization system in a bench‐scale fed‐batch crystallization unit. © 2009 American Institute of Chemical Engineers AIChE J, 2010 相似文献
12.
Carsten Jentsch. Journal of Time Series Analysis, 2012, 33(2): 177–192
In modelling seasonal time series data, periodically (non‐)stationary processes have become quite popular in recent years, and it is well known that these models may be represented as higher‐dimensional stationary models. In this article, it is shown that the spectral density matrix of this higher‐dimensional process exhibits a certain structure if and only if the observed process is covariance stationary. By exploiting this relationship, a new L2‐type test statistic is proposed for testing whether a multivariate periodically stationary linear process is in fact covariance stationary. Moreover, it is shown that this test may also be used to test for periodic stationarity. The asymptotic normal distribution of the test statistic under the null is derived, and the test is shown to have an omnibus property. The article concludes with a simulation study, in which the small-sample performance of the test procedure is improved by using a suitable bootstrap scheme.
13.
In considering the rounding impact on an autoregressive (AR) process, two different models are available. The first assumes that the dynamic system follows an underlying AR model and only the observations are rounded to a certain precision. The second assumes that each updated observation is a rounded version of an autoregression on the previous rounded observations. This article considers the second model and examines the impact of rounding on statistical inference. Conditional maximum‐likelihood estimates for the model are proposed and their asymptotic properties are established, including strong consistency and asymptotic normality. Furthermore, neither the classical AR model nor the ordinary rounded AR model remains reliable when dealing with accumulated rounding errors. The three models are also applied to fit the Ocean Wave data. It turns out that the estimates under the distinct models are significantly different. Based on our findings, we strongly recommend that models for dealing with rounded data be chosen in accordance with how the rounding errors act.
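The distinction between the two rounding schemes can be sketched in a few lines. Below, dynamics_rounded=True corresponds to the article's second model (rounding enters the dynamics and errors accumulate) and False to the first (a latent AR(1) evolves exactly, only the observations are rounded). Plain OLS is used as a naive stand-in for the conditional maximum-likelihood estimator the paper develops, just to show that it remains close to the truth only when the rounding precision is fine.

```python
import numpy as np

def simulate_rounded_ar(phi, n, precision=1.0, dynamics_rounded=True,
                        sigma=1.0, seed=0):
    """Two rounding schemes for an AR(1).  dynamics_rounded=True: each new
    value is an autoregression on the previous *rounded* observation and is
    itself rounded (rounding errors feed back into the dynamics).
    dynamics_rounded=False: a latent AR(1) evolves exactly and only the
    observations are rounded."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, size=n)
    y = np.zeros(n)
    x = 0.0
    for t in range(1, n):
        if dynamics_rounded:
            y[t] = np.round((phi * y[t - 1] + e[t]) / precision) * precision
        else:
            x = phi * x + e[t]
            y[t] = np.round(x / precision) * precision
    return y

def ols_phi(y):
    # Naive OLS slope; a stand-in for the paper's conditional MLE.
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

phi_fine = ols_phi(simulate_rounded_ar(0.6, 5000, precision=0.01))
phi_coarse = ols_phi(simulate_rounded_ar(0.6, 5000, precision=2.0))
```

With fine rounding the naive estimate is essentially the usual AR(1) estimate; with coarse rounding the discretization distorts it, which is the situation in which the paper argues that inference must model the rounding explicitly.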