Similar Articles
15 similar articles found.
1.
We consider changes in the degree of persistence of a process, where persistence is characterized by the order of integration of a strongly dependent process. To avoid the risk of misspecifying the data-generating process, we employ local Whittle estimates, which use only frequencies local to zero. The limit distribution of the test statistic under the null is nonstandard but well known in the literature. A Monte Carlo study shows that this inference procedure performs well in finite samples. We demonstrate the practical utility of these results with an empirical example, in which we analyze the inflation rate in Germany for the period 1986–2017.
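For concreteness, here is a minimal sketch of the local Whittle estimator of the memory parameter d on which such inference is built (the function name, bandwidth argument m, and optimizer choice are ours; the article's change-in-persistence statistic itself is not reproduced):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Local Whittle estimate of the memory parameter d, using only the
    first m periodogram ordinates, i.e. frequencies local to zero."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n      # Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)         # periodogram

    def R(d):
        # Concentrated Whittle objective, local to the origin
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x
```

A change in persistence could then be assessed informally by comparing estimates across subsamples; the formal test statistic and its nonstandard limit distribution are developed in the article.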

2.
Abstract. This article studies the estimation of a nonhomogeneous Wiener process model for degradation data. A pseudo-likelihood method is proposed to estimate the unknown parameters, and a computationally attractive algorithm is established to compute the estimator under this pseudo-likelihood formulation. We establish the asymptotic properties of the estimator, including consistency, convergence rate and asymptotic distribution. Random effects can be incorporated into the model to represent the heterogeneity of degradation paths by letting the mean function be random. The Wiener process model extends naturally to a normal inverse Gaussian process model, for which similar pseudo-likelihood inference is developed. A score test is used to test for the presence of the random effects. Simulation studies are conducted to validate the method, and we apply it to a real data set from structural health monitoring.
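A minimal simulation sketch of the kind of nonhomogeneous Wiener degradation path the model describes (the power-law time scale Lambda(t) = t**power and all parameter names are illustrative assumptions, not the article's specification):

```python
import numpy as np

def simulate_wiener_degradation(t, mu=1.0, sigma=0.5, power=0.8, rng=None):
    """Simulate one path of a nonhomogeneous Wiener degradation process
    X(t) = mu * Lambda(t) + sigma * B(Lambda(t)), where B is standard
    Brownian motion and Lambda(t) = t**power is an illustrative transformed
    time scale. t must be an increasing grid of inspection times starting at 0."""
    rng = np.random.default_rng() if rng is None else rng
    lam = np.asarray(t, dtype=float) ** power
    dlam = np.diff(lam)
    # Independent Gaussian increments with variance proportional to dLambda
    increments = rng.normal(mu * dlam, sigma * np.sqrt(dlam))
    return np.concatenate([[0.0], np.cumsum(increments)])
```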

3.
Abstract. We consider semiparametric estimation in time-series regression in the presence of long-range dependence in both the errors and the stochastic regressors. A central limit theorem is established for a class of semiparametric frequency-domain weighted least squares estimates, which includes both narrow-band ordinary least squares and narrow-band generalized least squares as special cases. The estimates are semiparametric in the sense that the focus is on the neighbourhood of the origin, and only periodogram ordinates in a degenerating band around the origin are used. This setting differs from earlier studies of time-series regression with long-range dependence, where a fully parametric approach has been employed. The generalized least squares estimate is infeasible when the degree of long-range dependence is unknown and must be estimated in an initial step. In that case, we show that a feasible estimate exists that has the same asymptotic properties as the infeasible one. By Monte Carlo simulation, we evaluate the finite-sample performance of both the generalized least squares estimate and the feasible estimate.
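A minimal sketch of the narrow-band OLS idea for a single regressor, using only periodogram ordinates in a band around the origin (the function name and normalization are ours):

```python
import numpy as np

def narrow_band_ols(y, x, m):
    """Narrow-band OLS slope estimate using only the first m Fourier
    frequencies around the origin (one-regressor sketch)."""
    n = len(y)
    wx = np.fft.fft(x - x.mean())[1:m + 1] / np.sqrt(2 * np.pi * n)
    wy = np.fft.fft(y - y.mean())[1:m + 1] / np.sqrt(2 * np.pi * n)
    # Averaged (co-)periodogram over the degenerating band
    f_xy = np.real(np.conj(wx) * wy).sum()
    f_xx = (np.abs(wx) ** 2).sum()
    return f_xy / f_xx
```

The generalized least squares variant would reweight the ordinates by estimated memory parameters, which is where the feasibility issue discussed above arises.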

4.
We consider a fractional exponential (FEXP) estimator of the memory parameter of a stationary Gaussian long-memory time series. The estimator is constructed by fitting a FEXP model of slowly increasing dimension to the log periodogram at all Fourier frequencies by ordinary least squares, and retaining the corresponding estimated memory parameter. We do not assume that the data were necessarily generated by a FEXP model, or by any other finite-parameter model. We do, however, impose a global differentiability assumption on the spectral density except at the origin. Because of this, and because of its use of all Fourier frequencies, we refer to the FEXP estimator as a broadband semiparametric estimator. We demonstrate the consistency of the FEXP estimator and obtain expressions for its asymptotic bias and variance. If the true spectral density is sufficiently smooth, the FEXP estimator can strongly outperform existing semiparametric estimators, such as the Geweke–Porter-Hudak (GPH) and Gaussian semiparametric (GSE) estimators, attaining an asymptotic mean squared error proportional to (log n)/n, where n is the sample size. In a simulation study, we demonstrate the merits of using a finite-sample correction to the asymptotic variance, and we also explore the possibility of automatically selecting the dimension of the exponential model using Mallows' C_L criterion.
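A minimal sketch of the broadband FEXP fit described above: OLS regression of the log periodogram on the long-memory regressor plus p cosine terms at all Fourier frequencies (the fixed-p interface is a simplification; the article lets the dimension grow slowly with n):

```python
import numpy as np

def fexp_memory_estimate(x, p):
    """Broadband FEXP estimate of the memory parameter d: OLS fit of the
    log periodogram on -2*log|2 sin(lam/2)| plus p cosine terms, using
    all Fourier frequencies up to pi."""
    n = len(x)
    m = (n - 1) // 2
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    # Design matrix: intercept, long-memory regressor, cosine FEXP terms
    X = np.column_stack(
        [np.ones(m), -2 * np.log(np.abs(2 * np.sin(lam / 2)))]
        + [np.cos(k * lam) for k in range(1, p + 1)]
    )
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return coef[1]  # estimated memory parameter d
```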

5.
Classical least squares can be strongly affected by the inevitable departures from its model assumptions, most notably the distributional ones. Robust estimators, on the other hand, resist such departures. Unfortunately, the multiplicity of alternative robust regression estimators suggested in the literature over the years is a source of confusion for practitioners of regression analysis. Moreover, little is known about their small-sample performance in the nonlinear regression setting, in particular in the field of chemical engineering. A simulation study comparing six such estimators (namely LMS, LTS, LTD, MM-, τ-, and Lp-norm) together with the usual least squares estimator is presented. The results provide guidance on the choice of an appropriate estimator.
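None of the six estimators in the study is reproduced here, but the following sketch conveys the flavor of robust nonlinear fitting, contrasting plain least squares with a robustified loss via SciPy (the model, data, and soft-L1 loss are our own illustrative choices):

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative nonlinear model (our own choice): y = a * (1 - exp(-b * t))
def residuals(theta, t, y):
    a, b = theta
    return y - a * (1.0 - np.exp(-b * t))

rng = np.random.default_rng(0)
t = np.linspace(0.1, 10, 50)
y = 2.0 * (1 - np.exp(-0.7 * t)) + rng.normal(0, 0.05, t.size)
y[::10] += 1.5  # inject gross outliers

ols = least_squares(residuals, x0=[1.0, 1.0], args=(t, y))      # plain LS
robust = least_squares(residuals, x0=[1.0, 1.0], args=(t, y),
                       loss="soft_l1", f_scale=0.1)             # robustified
print("LS:", ols.x, " robust:", robust.x)
```

With the injected outliers, the plain fit drifts away from (2.0, 0.7) while the robust fit stays close, which is the behavior the simulation study above quantifies for the dedicated robust estimators.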

6.
Li Peisheng, Xiong Youhui, Yu Dunxi, Sun Xuexin. Fuel 2005, 84(18): 2384–2388
The grindability of coal is usually characterized by the Hardgrove Grindability Index (HGI). The correlation between the proximate analysis of Chinese coals and HGI was studied. Statistical analysis showed that the higher the moisture and volatile matter content of a coal, the lower its HGI; conversely, the higher the ash and fixed carbon content, the higher the HGI. The correlation between proximate analysis and HGI is, however, nonlinear. The HGI prediction equation reported in the literature, based on proximate analysis and linear regression, is therefore not accurate for Chinese coals. In this paper, the generalized regression neural network (GRNN) method was used to predict the HGI, and a higher prediction precision was obtained. With this method, the HGI can be estimated indirectly from the proximate analysis of a coal when HGI measurement equipment is not available.
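A GRNN reduces to Gaussian-kernel-weighted averaging of the training targets, so a minimal sketch is short (the interface and standardization advice are ours; the smoothing parameter sigma would be tuned, e.g., by cross-validation):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network prediction: each output is a
    Gaussian-kernel distance-weighted average of the training targets."""
    X_train = np.asarray(X_train, float)
    X_query = np.atleast_2d(np.asarray(X_query, float))
    preds = []
    for q in X_query:
        d2 = np.sum((X_train - q) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

# Hypothetical usage: predict HGI from proximate analysis features
# (moisture, volatile matter, ash, fixed carbon), standardized beforehand.
```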

7.
A short review of work on parameter estimation in kinetic experiments is given. An adaptive random search algorithm was applied to evaluate the parameters and parameter confidence intervals of Langmuir–Hinshelwood models. The kinetic parameters were determined using the Box–Draper criterion, and the parameter confidence regions were found using the likelihood ratio. Three heterogeneously catalyzed hydrogenations were studied.
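A generic sketch of an adaptive random search minimizer, not the specific algorithm of the paper (all names and the radius-shrinking rule are our own choices):

```python
import numpy as np

def adaptive_random_search(obj, x0, lo, hi, n_iter=2000, shrink=0.95, rng=None):
    """Minimal adaptive random search: perturb the incumbent uniformly,
    keep improvements, and shrink the search radius on failures."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = np.asarray(x0, float)
    fx = obj(x)
    radius = (hi - lo) / 2.0
    for _ in range(n_iter):
        cand = np.clip(x + rng.uniform(-radius, radius), lo, hi)
        fc = obj(cand)
        if fc < fx:
            x, fx = cand, fc
        else:
            radius *= shrink
    return x, fx

# Example objective: sum of squared residuals of a Langmuir-Hinshelwood
# rate law, r = k*K*p / (1 + K*p)**2, against observed rates (hypothetical data).
```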

8.
A novel prediction and optimization method based on an improved generalized regression neural network (GRNN) and the particle swarm optimization (PSO) algorithm is proposed to optimize the process conditions for styrene epoxidation and achieve higher yields. The model optimizes five input parameters: reaction temperature, reaction time, and catalyst, solvent, and oxidant dosages. The output of the improved GRNN is passed to the PSO algorithm to optimize the process conditions; the optimal smoothing parameter σ of the GRNN is chosen from the training sample with minimum cross-validation error. Under the five optimized process conditions, the maximum yield reached 95.76%. This hybrid improved-GRNN/PSO model proved to be a useful tool for optimizing the process conditions of styrene epoxidation.
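A minimal particle swarm optimizer that could be pointed at a trained surrogate such as the GRNN sketched under entry 6 (all parameter names and defaults are our own; the article's improved GRNN is not reproduced):

```python
import numpy as np

def pso_maximize(f, lo, hi, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimal particle swarm optimizer maximizing a black-box objective f
    (e.g., a trained surrogate model of process yield) over box bounds."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmax(pbest_val)].copy()   # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmax(pbest_val)].copy()
    return g, pbest_val.max()
```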

9.
This article develops asymptotic theory for the estimation of parameters in regression models for binomial response time series in which serial dependence is present through a latent process. Using generalized linear model estimating equations leads to asymptotically biased estimates of the regression coefficients. An alternative is marginal likelihood, in which the variance of the latent process, but not its serial dependence, is accounted for. In practice, this is equivalent to using generalized linear mixed model estimation procedures, treating the observations as independent with a random effect on the intercept term of the regression model. We prove that this method yields consistent and asymptotically normal estimates even when the latent process is autocorrelated. Simulations suggest, however, that the marginal likelihood fit can collapse to the ordinary generalized linear model fit, with the random-effect variance estimated as zero. This problem diminishes rapidly as the number of binomial trials at each time point increases, but for binary data the chance of such a collapse can remain above 45% even in very long time series. We provide a combination of theoretical and heuristic explanations for this phenomenon in terms of the properties of the regression component of the model, which can be used to guide application of the method in practice.
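An illustrative simulation of the setting (all names and parameter values are ours): a binary series driven by a latent AR(1) process, with a plain GLM fit that ignores the latent term and therefore attenuates the slope, as the theory above predicts:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, rho, tau = 500, 0.8, 1.0

# Latent AR(1) process with stationary standard deviation tau
alpha = np.zeros(n)
for t in range(1, n):
    alpha[t] = rho * alpha[t - 1] + rng.normal(0, tau * np.sqrt(1 - rho ** 2))

x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x + alpha)))
y = rng.binomial(1, p)

# Plain GLM ignoring the latent process: slope attenuated below the true 1.0
glm = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
print(glm.params)
```

The marginal-likelihood approach studied in the article would instead fit a random-intercept mixed model to the same data, which recovers the conditional slope but can collapse to this GLM fit when the variance component is estimated as zero.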

10.
Abstract. In this article, we consider the problem of testing between two separate families of hypotheses via a generalization of the sequential probability ratio test. In particular, the generalized likelihood ratio statistic is considered, and the stopping rule is the first boundary crossing of this statistic. We show that the sequential test is asymptotically optimal in the sense that it achieves the shortest expected sample size, asymptotically, as the maximal type I and type II error probabilities tend to zero.
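With simple representatives of the two families, the boundary-crossing rule reduces to Wald's SPRT; a minimal sketch (the thresholds a, b and all names are ours; the full GLR version would maximize the likelihood over each family at every stage):

```python
import numpy as np

def sequential_lr_test(xs, loglik0, loglik1, a, b):
    """Stop the first time the cumulative log likelihood ratio leaves
    (log b, log a); loglik0/loglik1 are per-observation log-likelihoods
    under representatives of the two hypothesis families."""
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        llr += loglik1(x) - loglik0(x)
        if llr >= np.log(a):
            return n, "accept H1"
        if llr <= np.log(b):
            return n, "accept H0"
    return len(xs), "no decision"

# Example: N(0,1) vs N(1,1); a=19, b=1/19 targets roughly 5% error rates
rng = np.random.default_rng(2)
data = rng.normal(1.0, 1.0, size=1000)
print(sequential_lr_test(
    data,
    loglik0=lambda x: -0.5 * x ** 2,
    loglik1=lambda x: -0.5 * (x - 1.0) ** 2,
    a=19, b=1 / 19))
```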

11.
While most performance metrics of high-explosive (HE) based devices, such as detonation velocity, detonation pressure, and energy output, are expected to degrade over time, the evolution of the initiation threshold appears less clear, with claims of both increasing and decreasing trends having been made in the literature. This work analyzes D-optimally designed sequential binary test data for several thermally conditioned porous-powder and polymer-bonded HE initiator systems, using a Bayesian likelihood method employing the probit regression model. We find that in most cases the initiation threshold decreases (i.e., sensitivity increases) upon accelerated thermal conditioning. Such results are nuanced, however, and are influenced by factors such as the contact area of the initiating stimulus, HE characteristics like density and specific surface area, and possible thermally induced changes to other materials and interfaces involved.
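A minimal maximum-likelihood probit fit for go/no-go data of this kind (the names and log-stimulus scale are our own assumptions; the article's Bayesian treatment would place priors on the coefficients rather than maximize):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_probit(stimulus, response):
    """Probit fit for binary go/no-go data: P(go | s) = Phi(b0 + b1*log s).
    The 50% initiation threshold is s50 = exp(-b0 / b1)."""
    ls = np.log(np.asarray(stimulus, float))
    y = np.asarray(response, float)

    def negloglik(beta):
        p = np.clip(norm.cdf(beta[0] + beta[1] * ls), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(negloglik, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
    b0, b1 = res.x
    return b0, b1, np.exp(-b0 / b1)
```

Comparing the fitted s50 before and after thermal conditioning is the kind of threshold-shift comparison the abstract describes.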

12.
In this paper, a probabilistic combination of local independent component regression (ICR) models is proposed for quality prediction in chemical processes with multiple operation modes. Through a Bayesian inference strategy, the posterior probabilities of a data sample belonging to the different operation modes are calculated from two monitoring statistics of the independent component analysis (ICA) model. A probabilistic multiple ICR (MICR) model is then developed by combining the local ICR models of the different operation modes; at the same time, the operation mode of each data sample is identified through posterior analysis under the new model. Two case studies are provided to evaluate the multimode quality prediction performance of the proposed method.
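A minimal sketch of the Bayesian combination step: turning per-mode likelihoods (e.g., derived from the ICA monitoring statistics) into posterior mode probabilities that weight the local ICR predictions (the interface is hypothetical):

```python
import numpy as np

def posterior_mode_weights(likelihoods, priors=None):
    """Bayes combination of local models: given per-mode likelihoods of a
    new sample, return posterior mode probabilities."""
    L = np.asarray(likelihoods, float)
    pri = np.full(L.shape, 1.0 / L.size) if priors is None else np.asarray(priors)
    post = L * pri
    return post / post.sum()

# Hypothetical weighted prediction from K local models:
#   w = posterior_mode_weights(likes)
#   y_hat = sum(w_k * model_k.predict(x) for w_k, model_k in zip(w, models))
```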

13.
In this article, a robust modeling strategy for mixture probabilistic principal component analysis (PPCA) is proposed. Unlike traditional Gaussian-distribution-driven models such as PPCA, the multivariate Student's t-distribution is adopted for probabilistic modeling to reduce the negative effect of outliers, which are common in the process industry. Furthermore, to handle the missing data problem, a partial-updating algorithm is developed for parameter learning in the robust mixture PPCA model, so the new robust model can deal simultaneously with outliers and missing data. For process monitoring, a Bayesian soft decision fusion strategy is developed and combined with the robust local monitoring models under different operating conditions. Two case studies demonstrate that the new robust model shows enhanced modeling and monitoring performance in both the outlier and the missing data cases, compared to the mixture probabilistic principal component analysis model. © 2014 American Institute of Chemical Engineers AIChE J, 60: 2143–2157, 2014
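For orientation, here is the closed-form maximum-likelihood solution of standard (Gaussian) PPCA, the building block the article robustifies (the robust mixture model replaces the Gaussian with a multivariate t and fits by EM, which is not reproduced here):

```python
import numpy as np

def ppca_fit(X, q):
    """Closed-form ML fit of probabilistic PCA (Tipping & Bishop): returns
    the loading matrix W and the isotropic noise variance sigma2, with the
    noise variance taken as the mean of the discarded eigenvalues."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending
    sigma2 = vals[q:].mean()
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
    return W, sigma2
```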

14.
Abstract. We propose a general and flexible procedure for testing multiple hypotheses about sequential (or streaming) data that simultaneously controls both the false discovery rate (FDR) and the false nondiscovery rate (FNR) under minimal assumptions about the data streams, which may differ in distribution and dimension and may be dependent. All that is needed is a test statistic for each data stream that controls its conventional type I and type II error probabilities; no information or assumptions are required about the joint distribution of the statistics or data streams. The procedure can be used with sequential, group sequential, truncated, or other sampling schemes. It is a natural extension of Benjamini and Hochberg's (1995) widely used fixed-sample-size procedure to the domain of sequential data, with the added benefit of the simultaneous FDR and FNR control that sequential sampling affords. We prove the procedure's error control and give some tips for implementation in commonly encountered testing situations.
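The fixed-sample-size building block that the procedure extends, Benjamini and Hochberg's step-up rule, in a minimal sketch (the sequential, FDR-and-FNR-controlling extension itself is not reproduced):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Classic fixed-sample BH step-up procedure: reject the hypotheses
    with the k smallest p-values, where k is the largest index i such
    that p_(i) <= alpha * i / m."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected
```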

15.