Similar Documents (20 results)
1.
This paper studies system transformation using generalized orthonormal basis functions that include the Laguerre basis as a special case. For deterministic systems, this transformation has been studied in the literature, where it is called the Hambo transform. The aim of this paper is to develop a transformation theory for stochastic systems. The paper establishes the equivalence of continuous-time and transformed discrete-time stochastic systems in terms of their solutions. The method is applied to the continuous-time system identification problem. It is shown that, using the transformed signals, the PO-MOESP subspace identification algorithm yields consistent estimates of the system matrices. An example is included to illustrate the efficacy of the proposed identification method and to make a comparison with the method using the Laguerre filter.

2.
3.
RRL is a relational reinforcement learning system based on Q-learning in relational state-action spaces. It aims to enable agents to learn how to act in an environment that has no natural representation as a tuple of constants. For relational reinforcement learning, the learning algorithm used to approximate the mapping between state-action pairs and their so-called Q(uality)-values has to be very reliable, and it has to be able to handle the relational representation of state-action pairs. In this paper we investigate the use of Gaussian processes to approximate the Q-values of state-action pairs. In order to employ Gaussian processes in a relational setting we propose graph kernels as a covariance function between state-action pairs. The standard prediction mechanism for Gaussian processes requires a matrix inversion, which can become unstable when the kernel matrix has low rank. These instabilities can be avoided by employing QR-factorization, which leads to better and more stable performance of the algorithm and a more efficient incremental update mechanism. Experiments conducted in the blocks world and with the Tetris game show that Gaussian processes with graph kernels can compete with, and often improve on, regression trees and instance-based regression as a generalization algorithm for RRL. Editors: David Page and Akihiro Yamamoto
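The QR-based prediction step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: a toy RBF kernel over scalar inputs stands in for the graph kernel, and all data are invented.

```python
import numpy as np

def gp_predict_qr(K, y, k_star, noise=1e-6):
    """GP posterior mean at a test point, solving (K + noise*I) alpha = y
    via QR factorization rather than an explicit inverse; QR remains
    numerically stable when the kernel matrix K is near rank-deficient."""
    A = K + noise * np.eye(K.shape[0])
    Q, R = np.linalg.qr(A)
    alpha = np.linalg.solve(R, Q.T @ y)  # triangular back-substitution
    return k_star @ alpha

# Toy RBF kernel over three scalar "states" (stand-in for a graph kernel)
X = np.array([0.0, 1.0, 2.0])
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
y = np.array([0.0, 1.0, 0.0])                # invented Q-value targets
k_star = np.exp(-0.5 * (1.0 - X) ** 2)       # covariances with test point x* = 1.0
q_value = gp_predict_qr(K, y, k_star)
```

Since the test point coincides with a training point and the noise is tiny, the prediction essentially reproduces the training target.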

4.
This work is a contribution towards the understanding of certain features of mathematical models of single neurons. Emphasis is placed on neuronal firing, for which the first passage time (FPT) problem bears a fundamental relevance. We focus attention on modeling the change of the neuron membrane potential between two consecutive spikes by Gaussian stochastic processes, both of Markov and of non-Markov types. Methods to solve the FPT problem, both of a theoretical and of a computational nature, are sketched, including the case of random initial values. Significant similarities and differences between computational and theoretical results are pointed out, disclosing the role played by the correlation time that has been used to characterize the neuronal activity. It is highlighted that any conclusion on this matter is strongly model-dependent. Finally, an outline of the asymptotic behavior of FPT densities is provided, which is particularly useful for discussing neuronal firing under certain slow activity conditions.

5.
A novel Bayesian paradigm for the identification of output error models has recently been proposed in which, in place of postulating finite-dimensional models of the system transfer function, the system impulse response is searched for within an infinite-dimensional space. In this paper, such a nonparametric approach is applied to the design of optimal predictors and discrete-time models based on prediction error minimization by interpreting the predictor impulse responses as realizations of Gaussian processes. The proposed scheme describes the predictor impulse responses as the convolution of an infinite-dimensional response with a low-dimensional parametric response that captures possible high-frequency dynamics. Overparameterization is avoided because the model involves only a few hyperparameters that are tuned via marginal likelihood maximization. Numerical experiments, with data generated by ARMAX and infinite-dimensional models, show the definite advantages of the new approach over standard parametric prediction error techniques and subspace methods both in terms of predictive capability on new data and accuracy in reconstruction of system impulse responses.
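The nonparametric idea above, treating an impulse response as the realization of a Gaussian process, can be sketched with a simple regularized estimate. Everything in the snippet (the first-order stable-spline prior, the decay rate 0.8, the data sizes) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 30, 200
g_true = 0.8 ** np.arange(n)                     # invented decaying impulse response
u = rng.standard_normal(T)                       # input signal
# Toeplitz regressor matrix: Phi[t, k] = u[t - k]
Phi = np.array([[u[t - k] if t >= k else 0.0 for k in range(n)]
                for t in range(T)])
y = Phi @ g_true + 0.1 * rng.standard_normal(T)  # noisy output

# First-order stable-spline ("TC") kernel as a GP prior on the response
i = np.arange(n)
K = 0.8 ** np.maximum(i[:, None], i[None, :])
sigma2 = 0.01
# Posterior mean: g_hat = K Phi^T (Phi K Phi^T + sigma^2 I)^{-1} y
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(T), y)
rel_err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
```

The prior encodes both smoothness and exponential decay, so the posterior mean recovers the response from a modest amount of noisy data.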

6.
Multi-model modeling method based on affinity propagation clustering and Gaussian processes
To address the poor generalization ability of single-model approaches, a multi-model modeling method based on affinity propagation clustering and Gaussian processes is proposed. The method defines a new similarity measure so that the affinity propagation clustering algorithm partitions the sample data according to different operating points; Gaussian processes are then used to build a sub-model for each resulting cluster, and the sub-models are combined through a switching scheme to produce the final model output. The method is applied to soft-sensor modeling of the acetone content at the outlet of a bisphenol-A reactor; simulation results show that it achieves high estimation accuracy and has practical value.

7.
Remarkably fast methods for generating normal and exponential random variables have been developed for conventional computers; their average times are little more than that needed to generate the uniform variable used to produce the result. But for supercomputers with vector and/or parallel operations, and particularly for massively parallel machines with hundreds or thousands of processors, average time is not the proper measure of the speed of a generating procedure. For them, the worst case applies: the next step in a simulation cannot begin until all of the processors have generated their particular normal (or exponential, gamma, Poisson, and such) variable. So, for such new or anticipated (SIMD) architectures we must consider efficient constant-time methods for generating the important random variables of Monte Carlo studies. We describe one here, for normal (Gaussian) random variables. It is, in effect, a very fast method for inverting the normal distribution function. Research supported by the National Science Foundation, Grant DM880976.
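A constant-time generator of the kind motivated above can be built directly from an inverse normal CDF: every call costs the same, so no processor in a lock-step simulation waits on a slow rejection loop. The sketch below uses the Python standard library's NormalDist.inv_cdf as the inverter; it illustrates the idea only and is not the (much faster) method the paper develops:

```python
import random
from statistics import NormalDist

def normal_from_uniform(u):
    """Map a uniform(0,1) variate to a standard normal by inverting the
    normal CDF -- a fixed amount of work per call, with no rejection loop."""
    return NormalDist().inv_cdf(u)

rng = random.Random(42)
samples = [normal_from_uniform(rng.random()) for _ in range(10000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With 10,000 samples the empirical mean and variance should be close to 0 and 1, as expected for a standard normal.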

8.
We give a general overview of the state of the art in subspace system identification methods. We have restricted ourselves to the most important ideas and developments since the methods appeared in the late eighties. First, the basics of linear subspace identification are summarized. Different algorithms found in the literature (such as N4SID, IV-4SID, MOESP, CVA) are discussed and put into a unifying framework. Furthermore, a comparison between subspace identification and prediction error methods is made on the basis of computational complexity and precision, by applying them to 10 industrial data sets.

9.
Gaussian Processes (GP) comprise a powerful kernel-based machine learning paradigm which has recently attracted the attention of the nonlinear system identification community, especially due to its inherent Bayesian-style treatment of uncertainty. However, since standard GP models assume a Gaussian distribution for the observation noise, i.e., a Gaussian likelihood, the learning and predictive capabilities of such models can be severely degraded when outliers are present in the data. In this paper, motivated by our previous work on GP learning with data containing outliers and recent advances in hierarchical (deep GP) and recurrent GP (RGP) approaches, we introduce an outlier-robust recurrent GP model, the RGP-t. Our approach explicitly models the observation layer, which includes a heavy-tailed Student-t likelihood, and allows for a hierarchy of multiple transition layers to learn the system dynamics directly from estimation data contaminated by outliers. In addition, we modify the original variational framework of the standard RGP in order to perform inference with the new RGP-t model. The proposed approach is comprehensively evaluated using six artificial benchmarks, within several outlier contamination levels, and two datasets related to process industry systems (pH neutralization and heat exchanger), whose estimation data undergo large contamination rates. The simulation results obtained by the RGP-t model indicate an impressive resilience to outliers and a superior capability to learn nonlinear dynamics directly from highly outlier-contaminated data in comparison to existing GP models.

10.
11.
Reliability-based design optimization (RBDO) using the performance measure approach (PMA) for problems with correlated input variables requires a transformation from the correlated input random variables into independent standard normal variables. For this transformation, the two most representative transformations, the Rosenblatt and Nataf transformations, are investigated. The Rosenblatt transformation requires a joint cumulative distribution function (CDF); thus, it can be used only if the joint CDF is given or the input variables are independent. In the Nataf transformation, the joint CDF is approximated using the Gaussian copula, the marginal CDFs, and the covariance of the correlated input variables. Using the generated CDF, the correlated input variables are transformed into correlated normal variables, and the correlated normal variables are then transformed into independent standard normal variables through a linear transformation. Thus, the Nataf transformation can accurately estimate joint normal and some lognormal CDFs of the input variables, which covers broad engineering applications. This paper develops a PMA-based RBDO method for problems with correlated random input variables using the Gaussian copula. Several numerical examples show that the correlated random input variables significantly affect the RBDO results.
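The final linear step of the Nataf transformation, decorrelating standard normals with a Cholesky factor, can be sketched as follows. The correlation value 0.6 and the sample size are arbitrary illustrative choices:

```python
import numpy as np

# Once correlated inputs have been mapped to correlated standard normals Z
# with correlation matrix R, a Cholesky factor L (R = L L^T) turns them into
# independent standard normals U = L^{-1} Z.
rng = np.random.default_rng(0)
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])                    # target correlation matrix
L = np.linalg.cholesky(R)
Z = L @ rng.standard_normal((2, 50000))       # correlated standard normals
U = np.linalg.solve(L, Z)                     # decorrelated variables
corr_Z = np.corrcoef(Z)[0, 1]                 # should be near 0.6
corr_U = np.corrcoef(U)[0, 1]                 # should be near 0.0
```

The same linear map is what makes the transformed space suitable for the standard-normal reliability computations used in PMA.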

12.
A Nonlinear Gaussian Belief Network (NLGBN) based fault diagnosis technique is proposed for industrial processes. In this study, a three-layer NLGBN is constructed and trained to extract useful features from noisy process data. The nonlinear relationships between the process variables and the latent variables are modelled by a set of sigmoidal functions. To take into account the noisy nature of the data, model variances are also introduced for both the process variables and the latent variables. The three-layer NLGBN is first trained with normal process data using a variational expectation-maximization algorithm. During real-time monitoring, the online process data samples are used to update the posterior mean of the top-layer latent variable. The absolute gradient of this update, denoted the G-index, is monitored for fault detection. A multivariate contribution plot is also generated based on the G-index for fault diagnosis. The NLGBN-based technique is verified using two case studies. The results demonstrate that the proposed technique outperforms conventional nonlinear techniques such as KPCA, KICA, SPA, and Moving Window KPCA.

13.
To determine the number of mixtures in finite mixture models (FMMs), this work proposes a new method called the penalized histogram difference criterion (PHDC) and evaluates it against other criteria such as the Akaike information criterion (AIC), the minimum message length (MML), the information complexity (ICOMP), and the evidence of data criterion (EDC). The new method, which calculates the penalized histogram difference between data generated from the estimated FMMs and the data being modeled, turns out to be better than the others for data with complicated mixture patterns. It is demonstrated that the PHDC can determine the optimal number of clusters of the FMM and that the estimated FMMs asymptotically approximate the true model. The utility of the new method is demonstrated through the analysis of synthetic data sets and a batch-wise comparison of citric acid fermentation processes.

14.
In this paper we introduce and illustrate non-trivial upper and lower bounds on the learning curves for one-dimensional Gaussian processes. The analysis is carried out emphasising the effects induced on the bounds by the smoothness of the random process described by the Modified Bessel and the Squared Exponential covariance functions. We present an explanation of the early, linearly-decreasing behavior of the learning curves and the bounds, as well as a study of the asymptotic behavior of the curves. The effects of the noise level and the lengthscale on the tightness of the bounds are also discussed.

15.
1 Introduction
The knapsack problem is a well-known combinatorial optimization problem that finds applications to capital budgeting, loading problems, solution of large optimization problems, and computer systems. An extensive literature exists on approximation algorithms for various forms of knapsack problems. In this paper, we study the approximation for four kinds of knapsack problems with multiple constraints: the 0/1 Multiple Constraint Knapsack Problem (0/1 MCKP), the Integer Multiple Constraint Knapsack Problem (Integer MCKP), the 0/1 k-Constraint Knapsack…

16.
International Journal of Computer Mathematics, 2012, 89(9): 1069-1076
In this article, we present a stochastic simulation-based genetic algorithm for solving chance constrained programming problems in which the random variables involved in the parameters follow any continuous distribution. In general, deriving the deterministic equivalent of a chance constraint is very difficult due to complicated multivariate integration, and it is possible only if the random variables involved in the chance constraint follow certain specific distributions such as the normal, uniform, exponential, or lognormal distribution. In the proposed method, the stochastic model is used directly: the feasibility of the chance constraints is checked using stochastic simulation, and the genetic algorithm is used to obtain the optimal solution. A numerical example is presented to demonstrate the efficiency of the proposed method.
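The simulation-based feasibility check at the heart of such a method can be sketched independently of the genetic algorithm. The constraint, distributions, and confidence level below are invented for illustration and are not the article's example:

```python
import random

def chance_feasible(x, prob_level=0.9, n_sim=20000, seed=1):
    """Monte Carlo feasibility check for the (hypothetical) chance constraint
    P(a * x <= b) >= prob_level, where a ~ N(1, 0.1^2) and b ~ Exp(mean 2)
    are illustrative random coefficients."""
    rng = random.Random(seed)
    hits = sum(rng.gauss(1.0, 0.1) * x <= rng.expovariate(0.5)
               for _ in range(n_sim))
    return hits / n_sim >= prob_level

feasible_small = chance_feasible(0.1)   # constraint holds with prob ~0.95
feasible_large = chance_feasible(10.0)  # constraint holds with prob ~0.01
```

In the full method, a GA would call such a check on every candidate solution, discarding (or penalizing) those flagged infeasible.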

17.
Sufficient sampling is usually time-consuming and expensive, yet it is indispensable for supporting highly precise data-driven modeling of the wire-cut electrical discharge machining (WEDM) process. Since the behavior of a WEDM process is naturally described by IF-THEN rules drawn from field experts, engineering knowledge, and experimental work, the fuzzy logic model is chosen in this paper as prior knowledge to leverage predictive performance. Focusing on the fusion of a rough fuzzy system with very scarce, noisy samples, a simple but effective re-sampling algorithm based on piecewise relational transfer interpolation is presented and integrated with Gaussian process regression (GPR) for WEDM process modeling. First, using the re-sampling algorithm with encoded derivative regularization, the prior model is translated into a pseudo training dataset, and this dataset is then trained by the Gaussian processes. An empirical study on two benchmark datasets intuitively demonstrates the feasibility and effectiveness of this approach. Experiments on high-speed WEDM (DK7725B) are conducted to validate the nonlinear relationship between the design variables (workpiece thickness, peak current, on-time, and off-time) and the responses (material removal rate and surface roughness). The experimental results show that combining even a very rough fuzzy prior model with training examples significantly improves the predictive performance of WEDM process modeling, even with a very limited training dataset. That is, given the generalized prior model, the number of samples needed by the GPR model can be reduced greatly while maintaining precision.

18.
To simplify the process for identifying 12 types of symmetric variables in the canonical OR-coincidence (COC) algebra system, we propose a new symmetry detection algorithm based on OR-NXOR expansion. By analyzing the relationships between the coefficient matrices of sub-functions and the order coefficient subset matrices based on OR-NXOR expansion around two arbitrary logical variables, the constraint conditions of the order coefficient subset matrices are revealed for 12 types of symmetric variables. Based on the proposed constraints, the algorithm is realized by judging the order characteristic square value matrices. The proposed method avoids the transformation process from OR-NXOR expansion to AND-OR-NOT expansion, or to AND-XOR expansion, and solves the problem of completeness in the dj-map method. The application results show that, compared with traditional methods, the new algorithm is an optimal detection method in terms of applicability of the number of logical variables, detection type, and complexity of the identification process. The algorithm has been implemented in C language and tested on MCNC91 benchmarks. Experimental results show that the proposed algorithm is convenient and efficient.

19.
Gaussian process (GP) models form an emerging methodology for modelling nonlinear dynamic systems that tries to overcome certain limitations inherent to traditional methods such as neural networks (ANN) or local model networks (LMN). The GP model seems promising for three reasons. First, fewer training parameters are needed to parameterize the model. Second, the variance of the model's output, which depends on the positioning of the data, is obtained. Third, prior knowledge, e.g. in the form of linear local models, can be included in the model. In this paper the focus is on GPs with incorporated local models as an approach that could replace local model networks. Much of the effort up to now has been spent on developing the methodology of the GP model with included local models, while no application and practical validation has yet been carried out. The aim of this paper is therefore twofold. The first aim is to present the methodology of GP model identification with emphasis on the inclusion of prior knowledge in the form of linear local models. The second aim is to demonstrate the method practically on two higher-order dynamical systems, one based on simulation and one based on measurement data.
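The second advantage named above, that a GP reports the variance of its output depending on where the data lie, is easy to demonstrate. The snippet below is a generic zero-mean GP regression sketch with a squared-exponential covariance; the data and hyperparameters are illustrative, not from the paper:

```python
import numpy as np

def gp_posterior(X, y, x_star, lengthscale=1.0, noise=0.01):
    """Posterior mean and variance of a zero-mean GP with a
    squared-exponential covariance function."""
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    ks = k(X, x_star)
    mean = ks.T @ np.linalg.solve(K, y)
    var = k(x_star, x_star) - ks.T @ np.linalg.solve(K, ks)
    return mean, np.diag(var)

X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # training inputs
y = np.sin(X)                                 # training targets
m_near, v_near = gp_posterior(X, y, np.array([0.5]))  # inside the data
m_far, v_far = gp_posterior(X, y, np.array([6.0]))    # far from the data
```

Near the data the predictive variance is small and the mean tracks the underlying function; far from the data the variance reverts towards the prior variance of 1, signalling that the model does not know.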

20.
By characterizing loan returns as fuzzy variables, chance-criterion models for commercial bank loan portfolio optimization are proposed, namely a possibility-criterion model, a necessity-criterion model, and a credibility-criterion model. For the case where the loan returns are special triangular fuzzy variables, crisp equivalents of the models are derived; these equivalents can be solved by conventional methods. For cases where the membership functions of the loan returns are more complex, a hybrid optimization algorithm integrating fuzzy simulation, neural networks, genetic algorithms, and simultaneous perturbation stochastic approximation is applied to solve the models. Numerical examples verify the effectiveness of the models and the algorithm.
