Similar Literature
20 similar records found (search time: 31 ms)
1.
We propose two approximate dynamic programming (ADP)-based strategies for control of nonlinear processes using input-output data. In the first strategy, which we term ‘J-learning,’ one builds an empirical nonlinear model using closed-loop test data and performs dynamic programming with it to derive an improved control policy. In the second strategy, called ‘Q-learning,’ one tries to learn an improved control policy in a model-less manner. Compared to the conventional model predictive control approach, the new approach offers some practical advantages in using nonlinear empirical models for process control. Besides the potential reduction in the on-line computational burden, it offers a convenient way to control the degree of model extrapolation in the calculation of optimal control moves. One major difficulty associated with using an empirical model within the multi-step predictive control setting is that the model can be excessively extrapolated into regions of the state space where identification data were scarce or nonexistent, leading to performance far worse than predicted by the model. Within the proposed ADP-based strategies, this problem is handled by imposing a penalty term designed on the basis of local data distribution. A CSTR example is provided to illustrate the proposed approaches.
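The penalty idea in this abstract is easy to illustrate. The following is a minimal sketch, not the authors' implementation: a crude nearest-neighbour Q-function learned from closed-loop data is combined with a penalty that grows with the distance of a (state, input) query from the identification data, so the greedy input avoids regions where data are scarce. The toy dynamics and every name in the code are illustrative assumptions.

```python
# Sketch (not the paper's code): penalizing Q-value queries that fall far from the
# identification data, so the controller avoids regions where the empirical model
# would be extrapolating. Toy dynamics; all names are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical closed-loop test data: state x, input u, next state, stage cost.
n = 300
x = rng.uniform(-1.0, 1.0, n)
u = rng.uniform(-1.0, 1.0, n)
x_next = 0.8 * x + 0.3 * u + 0.02 * rng.standard_normal(n)   # toy process
cost = x_next**2 + 0.1 * u**2

data = np.column_stack([x, u])
nn = NearestNeighbors(n_neighbors=10).fit(data)               # local data-density model

def penalty(query, scale=5.0):
    """Large when a (state, input) query lies in a sparsely sampled region."""
    dist, _ = nn.kneighbors(query)
    return scale * dist.mean(axis=1)

def q_func(queries):
    """Crude nearest-neighbour stand-in for a learned Q-function."""
    _, idx = nn.kneighbors(queries, n_neighbors=1)
    return cost[idx.ravel()]

def greedy_input(x0, q_func, u_grid=np.linspace(-1, 1, 41)):
    """Pick the input minimizing learned cost-to-go plus the extrapolation penalty."""
    queries = np.column_stack([np.full_like(u_grid, x0), u_grid])
    return u_grid[np.argmin(q_func(queries) + penalty(queries))]

print(greedy_input(0.5, q_func))
```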

2.
The Fuzzy k-Means clustering model (FkM) is a powerful tool for classifying objects into a set of k homogeneous clusters by means of the membership degrees of an object in a cluster. In FkM, for each object, the sum of the membership degrees in the clusters must be equal to one. Such a constraint may cause meaningless results, especially when noise is present. To avoid this drawback, it is possible to relax the constraint, leading to the so-called Possibilistic k-Means clustering model (PkM). In particular, attention is paid to the case in which the empirical information is affected by imprecision or vagueness. This is handled by means of LR fuzzy numbers. An FkM model for LR fuzzy data is firstly developed and a PkM model for the same type of data is then proposed. The results of a simulation experiment and of two applications to real world fuzzy data confirm the validity of both models, while providing indications as to some advantages connected with the use of the possibilistic approach.
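To make the relaxed constraint concrete, here is a minimal sketch of the two membership updates for crisp data (the paper works with LR fuzzy data and a distance for fuzzy numbers, which is not reproduced here); the bandwidth choice and all names are illustrative.

```python
# Sketch of the membership updates that distinguish fuzzy from possibilistic k-means
# (shown for crisp data; the paper extends both to LR fuzzy data).
import numpy as np

def fkm_memberships(dist2, m=2.0):
    """FkM: memberships of each object sum to one across the k clusters."""
    ratio = dist2[:, :, None] / dist2[:, None, :]          # d2_ic / d2_ic'
    return 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(axis=2)

def pkm_memberships(dist2, eta, m=2.0):
    """PkM: the sum-to-one constraint is dropped; eta_c is a cluster 'bandwidth'."""
    return 1.0 / (1.0 + (dist2 / eta) ** (1.0 / (m - 1.0)))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # squared distances

U_fkm = fkm_memberships(d2)            # rows sum to 1
eta = d2.mean(axis=0)                  # a simple choice of cluster bandwidths
U_pkm = pkm_memberships(d2, eta)       # rows need not sum to 1 (robust to noise)
print(U_fkm.sum(axis=1)[:3], U_pkm.sum(axis=1)[:3])
```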

3.
The forward search provides data-driven flexible trimming of a Cp statistic for the choice of regression models that reveals the effect of outliers on model selection. An informed robust model choice follows. Even in small samples, the statistic has a null distribution indistinguishable from an F distribution. Limits on acceptable values of the Cp statistic follow. Two examples of widely differing size are discussed. A powerful graphical tool is the generalized candlestick plot, which summarizes the information on all forward searches and on the choice of models. A comparison is made with the use of M-estimation in robust model choice.
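For reference, the quantity that the forward search trims and monitors is Mallows' Cp; a minimal sketch of its computation for one candidate subset model is shown below, with illustrative data. The search itself, which recomputes the statistic over a sequence of outlier-free subsets, is not reproduced.

```python
# Plain Mallows' Cp for a candidate subset model. Illustrative data and names.
import numpy as np

def mallows_cp(X_full, X_sub, y):
    """Cp = RSS_p / s^2 - n + 2p, with s^2 taken from the full model."""
    n = len(y)
    _, rss_full = np.linalg.lstsq(X_full, y, rcond=None)[:2]
    s2 = rss_full[0] / (n - X_full.shape[1])           # full-model error variance
    rss_p = np.linalg.lstsq(X_sub, y, rcond=None)[1][0]
    p = X_sub.shape[1]
    return rss_p / s2 - n + 2 * p

rng = np.random.default_rng(2)
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal((n, 4))])
y = X[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(n)
print(mallows_cp(X, X[:, :3], y))        # roughly p = 3 for a correctly specified subset
```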

4.
Optimal design for generalized linear models has primarily focused on univariate data. Often experiments are performed that have multiple dependent responses described by regression type models, and it is of interest and of value to design the experiment for all these responses. This requires a multivariate distribution underlying a pre-chosen model for the data. Here, we consider the design of experiments for bivariate binary data which are dependent. We explore Copula functions, which provide a rich and flexible class of structures to derive joint distributions for bivariate binary data. We present methods for deriving optimal experimental designs for dependent bivariate binary data using Copulas, and demonstrate that, by including the dependence between responses in the design process, more efficient parameter estimates are obtained than by the usual practice of designing for a single variable only. Further, we investigate the robustness of designs with respect to initial parameter estimates and Copula function, and also show the performance of compound criteria within this bivariate binary setting.
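A copula turns two marginal success probabilities into a full dependent bivariate binary distribution; the sketch below uses the FGM copula purely as a simple illustrative choice (the paper studies a richer class and embeds this construction in the design criterion).

```python
# Turning marginal probabilities (p1, p2) into a dependent bivariate binary
# distribution through a copula; the FGM copula is an illustrative choice only.
import numpy as np

def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula, theta in [-1, 1]."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def bivariate_binary_probs(p1, p2, theta):
    """Cell probabilities for (Y1, Y2) with P(Yj = 1) = pj and FGM dependence."""
    p00 = fgm_copula(1.0 - p1, 1.0 - p2, theta)     # P(Y1=0, Y2=0) = C(F1(0), F2(0))
    p01 = (1.0 - p1) - p00                          # P(Y1=0, Y2=1)
    p10 = (1.0 - p2) - p00                          # P(Y1=1, Y2=0)
    p11 = 1.0 - p00 - p01 - p10
    return np.array([[p00, p01], [p10, p11]])

probs = bivariate_binary_probs(p1=0.4, p2=0.7, theta=0.8)
print(probs, probs.sum())    # cells sum to 1; positive theta raises p00 and p11
```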

5.
This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the setup of the one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of Type I error probability under various sample size and parameter combinations. In fact, Type I errors can be highly inflated for some of the commonly used tests, a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests: the Welch test, the James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best among the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples, while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.
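The parametric bootstrap idea can be sketched in a few lines: simulate each group from a normal distribution with a common mean and the observed group variances, and compare a Welch-type statistic with its simulated null distribution. The statistic and recipe below follow the spirit of the abstract rather than its exact prescription, and all names are illustrative.

```python
# A sketch of a parametric bootstrap (PB) test for equal means under unequal variances.
import numpy as np

def test_statistic(means, variances, sizes):
    w = sizes / variances
    grand = np.sum(w * means) / np.sum(w)
    return np.sum(w * (means - grand) ** 2)

def pb_test(groups, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    sizes = np.array([len(g) for g in groups], dtype=float)
    means = np.array([g.mean() for g in groups])
    variances = np.array([g.var(ddof=1) for g in groups])
    t_obs = test_statistic(means, variances, sizes)

    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        # Under H0 all means are equal (taken as 0); variances stay group-specific.
        sim = [rng.normal(0.0, np.sqrt(v), int(n)) for v, n in zip(variances, sizes)]
        t_boot[b] = test_statistic(np.array([s.mean() for s in sim]),
                                   np.array([s.var(ddof=1) for s in sim]), sizes)
    return np.mean(t_boot >= t_obs)          # PB p-value

rng = np.random.default_rng(3)
groups = [rng.normal(0, 1, 8), rng.normal(0, 3, 12), rng.normal(0.5, 0.5, 6)]
print(pb_test(groups))
```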

6.
The problem of constructing optimal designs when some of the factors are not under the control of the experimenters is considered. Their values can be known or unknown before the experiment is carried out. Several criteria are taken into consideration to find optimal conditional designs given some prior information on the factors. In order to determine these optimal conditional designs, a class of multiplicative algorithms is provided. Optimal designs are computed for illustrative, but simplistic, examples. Two real-life problems in production models and a physical test for predicting morbidity in lung cancer surgery motivate the procedures provided.
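As a reminder of how the class of multiplicative algorithms operates, the sketch below runs the basic multiplicative weight update for a D-optimal design on a candidate grid for a simple quadratic regression model; it does not implement the conditional-design criteria of the paper, and all names are illustrative.

```python
# Basic multiplicative algorithm for D-optimal design weights on a candidate grid,
# for a quadratic regression model. Illustrative of the class of algorithms only.
import numpy as np

x_grid = np.linspace(-1, 1, 21)
F = np.column_stack([np.ones_like(x_grid), x_grid, x_grid**2])   # f(x) = (1, x, x^2)
p = F.shape[1]

w = np.full(len(x_grid), 1.0 / len(x_grid))      # start from the uniform design
for _ in range(500):
    M = F.T @ (w[:, None] * F)                   # information matrix M(w)
    d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)   # variance function d(x, w)
    w *= d / p                                   # multiplicative update
    w /= w.sum()

print(np.round(w, 3))   # mass concentrates at -1, 0, 1, the known D-optimal support here
```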

7.
The semiparametric reproductive dispersion mixed-effects model (SPRDMM) is an extension of the reproductive dispersion model and the semiparametric mixed model, and it includes many commonly encountered models as its special cases. A Bayesian procedure is developed for analyzing SPRDMMs on the basis of P-spline estimates of nonparametric components. A hybrid algorithm combining the Gibbs sampler and the Metropolis-Hastings algorithm is used to simultaneously obtain the Bayesian estimates of unknown parameters, smoothing function and random effects, as well as their standard error estimates. The Bayes factor for model comparison is employed to select a better approximation of the smoothing function via path sampling. Several simulation studies and a real example are used to illustrate the proposed methodologies.
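The hybrid sampler alternates Gibbs updates with Metropolis-Hastings steps for blocks whose full conditionals are not of standard form. A generic random-walk Metropolis update of that kind is sketched below against a stand-in bivariate normal target; it is not the SPRDMM posterior, and all names are illustrative.

```python
# Generic random-walk Metropolis step of the kind used inside a hybrid Gibbs/MH sampler.
import numpy as np

def rw_metropolis_step(theta, log_post, rng, step=0.3):
    """One random-walk MH update for a parameter block theta."""
    proposal = theta + step * rng.standard_normal(theta.shape)
    log_ratio = log_post(proposal) - log_post(theta)
    if np.log(rng.uniform()) < log_ratio:
        return proposal, True
    return theta, False

# Stand-in target: a correlated bivariate normal log-density.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_post = lambda th: -0.5 * th @ cov_inv @ th

rng = np.random.default_rng(4)
theta = np.zeros(2)
draws = []
for _ in range(2000):
    theta, _ = rw_metropolis_step(theta, log_post, rng)
    draws.append(theta.copy())
print(np.cov(np.array(draws[500:]).T))   # roughly recovers the target covariance
```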

8.
Although there has been much research on cluster analysis considering feature (or variable) weights, little effort has been made regarding sample weights in clustering. In practice, not every sample in a data set has the same importance in cluster analysis. Therefore, it is of interest to obtain proper sample weights for clustering a data set. In this paper, we consider a probability distribution over a data set to represent its sample weights. We then apply the maximum entropy principle to automatically compute these sample weights for clustering. Such a method can generate sample-weighted versions of most clustering algorithms, such as k-means, fuzzy c-means (FCM) and expectation-maximization (EM). The proposed sample-weighted clustering algorithms will be robust for data sets with noise and outliers. Furthermore, we also analyze the convergence properties of the proposed algorithms. The study uses both numerical and real data sets for demonstration and comparison. Experimental results and comparisons demonstrate that the proposed sample-weighted clustering algorithms are effective and robust clustering methods.
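One way to read the maximum entropy principle here is that the sample weights form a Gibbs-type distribution in the clustering cost, so points far from every centre receive little weight. The sketch below builds such weights into a k-means loop; it is an illustration of the idea rather than the paper's algorithms, and all names are assumptions.

```python
# Sketch of sample-weighted k-means with maximum-entropy (Gibbs-type) sample weights:
# points far from every centre (noise, outliers) receive small weight.
import numpy as np

def sample_weighted_kmeans(X, k, beta=1.0, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        w = np.exp(-d2.min(axis=1) / beta)     # maximum-entropy sample weights
        w /= w.sum()
        for c in range(k):                     # weighted centroid update
            mask = labels == c
            if mask.any():
                centers[c] = np.average(X[mask], axis=0, weights=w[mask])
    return centers, w

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2)),
               rng.uniform(-10, 10, (5, 2))])          # two clusters plus 5 'outliers'
centers, w = sample_weighted_kmeans(X, k=2, beta=2.0)
print(np.round(centers, 2))
print(np.round(w[-5:] / w.max(), 4))   # relative weights of the outliers, typically near zero
```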

9.
Consider the situation where the Structuration des Tableaux à Trois Indices de la Statistique (STATIS) methodology is applied to a series of studies, each study being represented by data and weight matrices. Relations between studies may be captured by the Hilbert-Schmidt product of these matrices. Specifically, the eigenvalues and eigenvectors of the Hilbert-Schmidt matrix S may be used to obtain a geometrical representation of the studies. The studies in a series may further be considered to have a common structure whenever their corresponding points lie along the first axis. The matrix S can be expressed as the sum of a rank-1 matrix λuu^T and an error matrix E. Therefore, the components of the vector u are sufficient to locate the points associated with the studies. Earlier models for S, specified through assumptions on vec(E), are mathematically tractable and yet do not take into account the symmetry of the matrix S. Thus a new symmetric model is proposed, as well as the corresponding tests for a common structure. It is further shown how to assess the goodness of fit of such models. An application to human immunodeficiency virus (HIV) infection data is used to assess the proposed model.
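A minimal sketch of the construction described above: build S from Hilbert-Schmidt inner products trace(W_i W_j) of symmetric study matrices, extract the leading eigen-pair, and measure how much of S is left outside the rank-1 part λuu^T. The random "studies" and all names are illustrative.

```python
# Sketch of a STATIS-type inter-study matrix and its leading eigen-pair.
import numpy as np

rng = np.random.default_rng(6)

def random_study(n=10, p=4):
    X = rng.standard_normal((n, p))
    return X @ X.T / p                       # a symmetric "study" matrix

studies = [random_study() for _ in range(5)]
S = np.array([[np.trace(Wi @ Wj) for Wj in studies] for Wi in studies])

eigvals, eigvecs = np.linalg.eigh(S)         # S is symmetric
lam, u = eigvals[-1], eigvecs[:, -1]         # leading eigen-pair
rank1 = lam * np.outer(u, u)                 # the lambda * u u^T part of S
print(np.round(u, 3))                        # coordinates locating the studies
print(np.round(np.linalg.norm(S - rank1) / np.linalg.norm(S), 3))  # relative size of E
```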

10.
Stochastic volatility (SV) models have been considered a real alternative to the time-varying volatility models of the ARCH family. Existing asymmetric SV (ASV) models treat volatility asymmetry via the leverage effect hypothesis. Generalised ASV models that account for both volatility asymmetry and departures from normality, expressed simultaneously through skewness and excess kurtosis, are introduced. The new generalised ASV models are estimated using the Bayesian Markov chain Monte Carlo approach for parameter and log-volatility estimation. By using simulated and real financial data series, the new models are compared to existing SV models with respect to their statistical properties and their estimation performance in within-sample and out-of-sample periods. Results show that there is much to gain from the introduction of the generalised ASV models.
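For orientation, the sketch below simulates a basic asymmetric SV model in which the return and log-volatility innovations are negatively correlated (the leverage effect); the generalised ASV models of the abstract additionally allow skewness and excess kurtosis in the return errors, which this sketch does not. Parameter values and names are illustrative.

```python
# Simulating a basic asymmetric SV model with leverage (correlated innovations).
import numpy as np

def simulate_asv(T=1000, mu=-0.5, phi=0.97, sigma_eta=0.15, rho=-0.6, seed=7):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho * sigma_eta],
                    [rho * sigma_eta, sigma_eta**2]])      # corr(eps_t, eta_t) = rho
    shocks = rng.multivariate_normal(np.zeros(2), cov, size=T)
    h = np.empty(T)
    y = np.empty(T)
    h[0] = mu
    for t in range(T):
        y[t] = np.exp(h[t] / 2.0) * shocks[t, 0]               # return
        if t + 1 < T:
            h[t + 1] = mu + phi * (h[t] - mu) + shocks[t, 1]   # log-volatility
    return y, h

y, h = simulate_asv()
print(np.corrcoef(y[:-1], np.diff(h))[0, 1])   # negative: the leverage effect
```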

11.
The R package flexmix provides flexible modelling of finite mixtures of regression models using the EM algorithm. Several new features of the software, such as fixed and nested varying effects for mixtures of generalized linear models and multinomial regression for the a priori probabilities given concomitant variables, are introduced. The use of the software, as well as model selection, is demonstrated on a logistic regression example.

12.
A studentized range test using two-stage and one-stage sampling procedures, respectively, is proposed for testing the null hypothesis that the average deviation of the normal means falls within a practical indifference zone. Both the level and the power of the proposed test associated with the hypotheses are controllable, and they are completely independent of the unknown variances. The two-stage procedure is a design-oriented procedure that satisfies certain probability requirements and simultaneously determines the required sample sizes for an experiment, while the one-stage procedure is a data-analysis procedure applied after the data have been collected, which can supplement the two-stage procedure when the latter has to end its experiment before the required experimental process is completed. Tables needed for implementing these procedures are given.

13.
Two-level supersaturated designs (SSDs) are designs that examine more than n−1 factors in n runs. Although SSD literature for both construction and analysis is plentiful, the dearth of actual applications suggests that SSDs are still an unproven tool. Whether using forward selection or all-subsets regression, it is easy to select simple models from SSDs that explain a very large percentage of the total variation. Hence, naive p-values can persuade the user that included factors are indeed active. We propose the use of a global model randomization test in conjunction with all-subsets (or a shrinkage method) to more appropriately select candidate models of interest. For settings where the large number of factors makes repeated use of all-subsets expensive, we propose a short-cut approximation for the p-values. Two state-of-the-art model selection methods that have received considerable attention in recent years, Least Angle Regression and the Dantzig Selector, were likewise supplemented with the global randomization test. Finally, we propose a randomization test for reducing the number of terms in candidate models with small global p-values. Randomization tests effectively emphasize the limitations of SSDs, especially those with a large factor-to-run-size ratio.
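A global model randomization test can be sketched as follows: permute the response, re-run the same selection procedure, and compare the best R² found on the real data with its permutation distribution. Greedy forward selection with a fixed number of terms stands in for all-subsets below, and the toy design and all names are illustrative assumptions.

```python
# Sketch of a global model randomization test for a supersaturated design.
import numpy as np

def best_r2_forward(X, y, n_terms=3):
    """Greedy forward selection; returns the R^2 of the selected n_terms model."""
    selected, remaining = [], list(range(X.shape[1]))
    sst = (y - y.mean()) @ (y - y.mean())
    for _ in range(n_terms):
        scores = []
        for j in remaining:
            Xs = np.column_stack([np.ones(len(y)), X[:, selected + [j]]])
            resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
            scores.append(1.0 - resid @ resid / sst)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return max(scores)

def global_randomization_pvalue(X, y, n_perm=500, seed=8):
    rng = np.random.default_rng(seed)
    r2_obs = best_r2_forward(X, y)
    r2_perm = np.array([best_r2_forward(X, rng.permutation(y)) for _ in range(n_perm)])
    return np.mean(r2_perm >= r2_obs)

rng = np.random.default_rng(9)
X = rng.choice([-1.0, 1.0], size=(12, 22))      # a toy 12-run, 22-factor SSD-like matrix
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(12)
print(global_randomization_pvalue(X, y))
```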

14.
Statistical models are often based on normal distributions, and procedures for testing such a distributional assumption are needed. Many goodness-of-fit tests are available. However, most of them are quite insensitive in detecting non-normality when the alternative distribution is symmetric. On the other hand, all the procedures are quite powerful against skewed alternatives. A new test for normality based on a polynomial regression is presented. It is very effective in detecting non-normality when the alternative distribution is symmetric. A comparison between well-known tests and this new procedure is performed by a simulation study. Other properties are also investigated.

15.
An adaptive controller based on multi-input fuzzy rules emulated networks (MIFRENs) is introduced for omni-directional mobile robot systems in the discrete-time domain without any kinematic or dynamic models. An approximated model for the unknown system is developed using two MIFRENs with an online learning algorithm, together with a stability analysis. A main theorem is proposed to guarantee closed-loop performance and system robustness for all adjustable parameters inside the MIFRENs. The system is validated by an experimental setup with a FESTO omni-directional mobile robot called Robotino®. The proposed algorithm is shown to have superior performance compared to that of an algorithm that uses only an embedded controller. The advantage of the MIFREN initial setting is verified by comparing its results with those of a controller based on neural networks.

16.
17.
Penalized B-splines combined with the composite link model are used to estimate a bivariate density from a histogram with wide bins. The goals are multiple: they include the visualization of the dependence between the two variates as well as the estimation of derived quantities such as Kendall’s tau, conditional moments and quantiles. Two strategies are proposed: the first one is semiparametric, with flexible margins modeled using B-splines and a parametric copula for the dependence structure; the second one is nonparametric and is based on Kronecker products of the marginal B-spline bases. Frequentist and Bayesian estimations are described. A large simulation study quantifies the performance of the two methods under different dependence structures and for varying strengths of dependence, sample sizes and amounts of grouping. It suggests that Schwarz’s BIC is a good tool for classifying the competing models. The density estimates are used to evaluate conditional quantiles in two applications in the social and medical sciences.

18.
Markov chains provide a flexible model for dependent random variables, with applications in such disciplines as physics, environmental science and economics. In the applied study of Markov chains, it may be of interest to assess whether the transition probability matrix changes during an observed realization of the process. If such changes occur, it would be of interest to estimate the transitions where the changes take place and the transition probability matrix before and after each change. For the case when the number of changes is known, standard likelihood theory is developed to address this problem. The bootstrap is used to aid in the computation of p-values. When the number of changes is unknown, the AIC and BIC measures are used for model selection. The proposed methods are studied empirically and are applied to example sets of data.
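For the single-change case, the profile-likelihood idea can be sketched directly: fit maximum likelihood transition matrices before and after each candidate change time and keep the split with the largest log-likelihood. The simulated chain and all names below are illustrative; the bootstrap p-values and AIC/BIC comparisons of the paper are not reproduced.

```python
# Sketch: locating a single change in a Markov chain's transition matrix by profile likelihood.
import numpy as np

def transition_mle(seq, n_states):
    counts = np.ones((n_states, n_states)) * 1e-6          # tiny smoothing for empty rows
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def log_lik(seq, P):
    return sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:]))

def best_single_change(seq, n_states):
    best_t, best_ll = None, -np.inf
    for t in range(10, len(seq) - 10):                      # keep both segments non-trivial
        P1, P2 = transition_mle(seq[:t], n_states), transition_mle(seq[t:], n_states)
        ll = log_lik(seq[:t], P1) + log_lik(seq[t:], P2)
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t, best_ll

def simulate(P, n, start, rng):
    s = [start]
    for _ in range(n - 1):
        s.append(rng.choice(len(P), p=P[s[-1]]))
    return s

rng = np.random.default_rng(10)
P_a = np.array([[0.9, 0.1], [0.1, 0.9]])
P_b = np.array([[0.3, 0.7], [0.7, 0.3]])
seq = simulate(P_a, 150, 0, rng) + simulate(P_b, 150, 0, rng)
print(best_single_change(seq, n_states=2))     # estimated change time, typically near 150
```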

19.
A novel method for the robust identification of interpretable fuzzy models, based on the criterion that the identification errors are least sensitive to data uncertainties and modelling errors, is suggested. The robustness of the identification errors to unknown disturbances (data uncertainties, modelling errors, etc.) is achieved by bounding (i.e., minimizing) the maximum possible value of the energy gain from the disturbances to the identification errors. The solution of the energy-gain bounding problem, being robust, improves the performance of the identification method. The flexibility of the proposed framework is shown by designing variable learning-rate identification algorithms in both deterministic and stochastic frameworks.

20.
In conjoint experiments, each respondent receives a set of profiles to rate. Sometimes, the profiles are expensive prototypes that respondents have to test before rating them. Designing these experiments involves determining how many and which profiles each respondent has to rate and how many respondents are needed. To that end, the set of profiles offered to a respondent is treated as a separate block in the design and a random respondent effect is used in the model because profile ratings from the same respondent are correlated. Optimal conjoint designs are then obtained by means of an adapted version of an algorithm for finding D-optimal split-plot designs. A key feature of the design construction algorithm is that it returns the optimal number of respondents and the optimal number of profiles each respondent has to evaluate for a given number of profiles. The properties of the optimal designs are described in detail and some practical recommendations are given.
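A hedged sketch of the kind of criterion involved: with a random respondent effect, each respondent's set of profiles is a block with covariance sigma²·I + sigma_b²·J, and candidate allocations can be compared through the log-determinant of the resulting information matrix. This is only the evaluation step, not the paper's construction algorithm, and the variance values, designs and names are illustrative assumptions.

```python
# Sketch of the D-optimality criterion for a blocked (per-respondent) conjoint design
# with a random respondent effect.
import numpy as np

def information_matrix(blocks, sigma2=1.0, sigma_b2=1.0):
    """blocks: list of per-respondent model matrices X_i (s_i profiles x p)."""
    p = blocks[0].shape[1]
    M = np.zeros((p, p))
    for X in blocks:
        s = X.shape[0]
        V = sigma2 * np.eye(s) + sigma_b2 * np.ones((s, s))   # within-respondent covariance
        M += X.T @ np.linalg.solve(V, X)
    return M

def log_d_criterion(blocks, **kwargs):
    sign, logdet = np.linalg.slogdet(information_matrix(blocks, **kwargs))
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(11)
# Two candidate allocations of +/-1 profiles (3 attributes) to 10 respondents, 4 profiles each.
design_a = [rng.choice([-1.0, 1.0], size=(4, 3)) for _ in range(10)]
design_b = [np.tile(rng.choice([-1.0, 1.0], size=(1, 3)), (4, 1)) for _ in range(10)]  # no within-respondent variation
print(log_d_criterion(design_a), log_d_criterion(design_b))   # design_a is typically far more informative
```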
