Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
A new nonparametric estimate for nonlinear discrete-time dynamic systems is considered. The new algorithm is weakly consistent under a specific condition on the transition probability operator of a stationary Markov process. The estimate is applicable when a parametric state model of the system is difficult to choose.

3.
Statistical relational learning (SRL) is a subarea of machine learning which addresses the problem of performing statistical inference on data that is correlated and not independently and identically distributed (i.i.d.)—as is generally assumed. For the traditional i.i.d. setting, distribution-free bounds exist, such as the Hoeffding bound, which are used to provide confidence bounds on the generalization error of a classification algorithm given its hold-out error on a sample of size N. Bounds of this form are currently not available for the type of interactions that are considered in the data by relational classification algorithms. In this paper, we extend the Hoeffding bounds to the relational setting. In particular, we derive distribution-free bounds for certain classes of data generation models that do not produce i.i.d. data and are based on the type of interactions that are considered by relational classification algorithms that have been developed in SRL. We conduct empirical studies on synthetic and real data which show that these data generation models are indeed realistic and the derived bounds are tight enough for practical use.
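The classical i.i.d. Hoeffding bound the paper starts from has a simple closed form. A minimal sketch (function name illustrative) computing the two-sided confidence radius around a hold-out error estimate:

```python
import math

def hoeffding_radius(n: int, delta: float = 0.05) -> float:
    """Two-sided Hoeffding radius for the mean of n i.i.d. [0, 1]-valued
    samples: setting 2 exp(-2 n eps^2) = delta and solving for eps."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# With N = 1000 hold-out examples, the true generalization error lies
# within +/- eps of the observed hold-out error with 95% confidence.
eps = hoeffding_radius(1000, delta=0.05)
```

The relational bounds derived in the paper are not this formula; they account for correlated examples, which the closed form above (valid only for i.i.d. data) does not.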

4.
Statistical calibration of model parameters conditioned on observations is performed in a Bayesian framework by evaluating the joint posterior probability density function (pdf) of the parameters. The posterior pdf is very often inferred by sampling the parameters with Markov Chain Monte Carlo (MCMC) algorithms. Recently, an alternative technique for calculating the so-called Maximal Conditional Posterior Distribution (MCPD) has appeared. This technique infers the individual probability distribution of a given parameter under the condition that the other parameters of the model are optimal. Whereas the MCMC approach samples probable draws of the parameters, the MCPD samples the most probable draws when one of the parameters is set at various prescribed values. In this study, the results of a user-friendly MCMC sampler called DREAM(ZS) and those of the MCPD sampler are compared. The differences between the two approaches are highlighted before running a comparison inferring two analytical distributions with collinearity and multimodality. Then, the performances of both samplers are compared on an artificial multistep outflow experiment from which the soil hydraulic parameters are inferred. The results show that parameter and predictive uncertainties can be accurately assessed with both the MCMC and MCPD approaches.
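For readers unfamiliar with the MCMC side of the comparison, a minimal random-walk Metropolis sampler (a generic sketch, not the DREAM(ZS) or MCPD algorithms discussed in the paper) looks like:

```python
import math
import random

def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis: draw samples from a 1-D posterior given
    only its unnormalized log-density log_post."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)       # symmetric Gaussian proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Standard-normal target: after burn-in the chain mean should be near 0.
chain = metropolis(lambda t: -0.5 * t * t, x0=3.0)
mean = sum(chain[1000:]) / len(chain[1000:])
```

DREAM(ZS) differs mainly in running multiple chains with adaptive differential-evolution proposals, but the accept/reject core is the same.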

5.
The simultaneous perturbation stochastic approximation (SPSA) algorithm has attracted considerable attention for challenging optimization problems where it is difficult or impossible to obtain a direct gradient of the objective (say, loss) function. The approach relies on a highly efficient simultaneous perturbation approximation to the gradient, constructed from loss function measurements. SPSA is based on picking a simultaneous perturbation (random) vector in a Monte Carlo fashion as part of generating the approximation to the gradient. This paper derives the optimal distribution for the Monte Carlo process. The objective is to minimize the mean square error of the estimate. The authors also consider maximization of the likelihood that the estimate be confined within a bounded symmetric region of the true parameter. The optimal distribution for the components of the simultaneous perturbation vector is found to be a symmetric Bernoulli in both cases. The authors end the paper with a numerical study related to the area of experiment design.
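The core SPSA gradient estimate, with the symmetric Bernoulli perturbation the paper shows to be optimal, can be sketched as follows (function names illustrative):

```python
import random

def spsa_gradient(loss, theta, c=0.1, seed=None):
    """One SPSA gradient estimate: perturb all coordinates at once with a
    symmetric Bernoulli +/-1 vector and use only two loss evaluations,
    regardless of the dimension of theta."""
    rng = random.Random(seed)
    delta = [rng.choice((-1.0, 1.0)) for _ in theta]
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    diff = loss(plus) - loss(minus)
    return [diff / (2.0 * c * d) for d in delta]

# Quadratic toy loss: the estimate is unbiased for the true gradient.
loss = lambda th: sum(t * t for t in th)
g = spsa_gradient(loss, [1.0, -2.0], c=0.1, seed=1)
```

In one dimension the estimate for a quadratic loss is exact; in higher dimensions it is a random estimate whose expectation matches the true gradient, which is what makes the two-measurement cost so attractive.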

6.
Uncertainty quantification (UQ) refers to the quantitative characterization and reduction of uncertainties present in computer model simulations. It is widely used in engineering and geophysics to assess and predict the likelihood of various outcomes. This paper describes a UQ platform called UQ-PyL (Uncertainty Quantification Python Laboratory), a flexible software platform designed to quantify uncertainty of complex dynamical models. UQ-PyL integrates different kinds of UQ methods, including experimental design, statistical analysis, sensitivity analysis, surrogate modeling and parameter optimization. It is written in Python and runs on all common operating systems. UQ-PyL has a graphical user interface that allows users to enter commands via pull-down menus. It is equipped with a model driver generator that allows any computer model to be linked with the software. We illustrate the different functions of UQ-PyL by applying it to the uncertainty analysis of the Sacramento Soil Moisture Accounting Model. We also demonstrate that UQ-PyL can be applied to a wide range of applications.

7.
The iteratively reweighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery and for automatic radiometric normalization of multitemporal image sequences. Principal components analysis (PCA), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed to run on massively parallel CUDA-enabled graphics processors, when available, giving an order of magnitude enhancement in computational speed. The software is available from the authors' Web sites.

8.
The polynomial chaos (PC) method has been widely adopted as a computationally feasible approach for uncertainty quantification (UQ). Most studies to date have focused on non-stiff systems. When stiff systems are considered, implicit numerical integration requires the solution of a non-linear system of equations at every time step. Using the Galerkin approach the size of the system state increases from n to S × n, where S is the number of PC basis functions. Solving such systems with full linear algebra causes the computational cost to increase from O(n^3) to O(S^3 n^3). The S^3-fold increase can make the computation prohibitive. This paper explores computationally efficient UQ techniques for stiff systems using the PC Galerkin, collocation, and collocation least-squares (LS) formulations. In the Galerkin approach, we propose a modification in the implicit time stepping process using an approximation of the Jacobian matrix to reduce the computational cost. The numerical results show a run time reduction with no negative impact on accuracy. In the stochastic collocation formulation, we propose a least-squares approach based on collocation at a low-discrepancy set of points. Numerical experiments illustrate that the collocation least-squares approach for UQ has similar accuracy with the Galerkin approach, is more efficient, and does not require any modification of the original code.
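The collocation least-squares idea can be sketched on a toy (non-stiff, one-dimensional) model: evaluate the model at more collocation points than basis functions and fit the PC coefficients by least squares. The toy model y = xi^2 + xi below is hypothetical, not from the paper:

```python
import numpy as np

def hermite_basis(xi):
    """Probabilists' Hermite polynomials He_0, He_1, He_2 evaluated at xi,
    stacked as the columns of a collocation design matrix."""
    return np.column_stack([np.ones_like(xi), xi, xi ** 2 - 1.0])

xi = np.linspace(-2.0, 2.0, 9)   # 9 collocation points, 3 basis functions
y = xi ** 2 + xi                 # hypothetical model evaluations
coeffs, *_ = np.linalg.lstsq(hermite_basis(xi), y, rcond=None)
# Exact expansion: y = He_0 + He_1 + He_2, so coeffs should be [1, 1, 1].
```

The paper's contribution is to choose the collocation points from a low-discrepancy set; any such point set slots into the least-squares fit in the same way.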

9.
We consider a particular problem which arises when applying the method of gradient projection for solving constrained optimization and finite dimensional variational inequalities on the convex set formed by the convex hull of the standard basis unit vectors. The method is especially important for relaxation labeling techniques applied to problems in artificial intelligence. Zoutendijk's method for finding feasible directions, which is relatively complicated in general situations, yields a very simple finite algorithm for this problem. We present an extremely simple algorithm for performing the gradient projection and an independent verification of its correctness.
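The convex hull of the standard basis vectors is the probability simplex. One standard finite algorithm for Euclidean projection onto it (a sketch of the well-known sort-based procedure, not necessarily the paper's exact algorithm) is:

```python
def project_to_simplex(v):
    """Euclidean projection of v onto {x : x_i >= 0, sum_i x_i = 1} via
    the sort-based finite algorithm: find the threshold theta so that
    clipping v - theta at zero yields a point summing to one."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0.0:        # condition holds for a prefix of the sort
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

x = project_to_simplex([0.5, 1.2, -0.3])   # -> a point on the simplex
```

The algorithm is finite (one sort plus one pass), which is the property emphasized in the abstract.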

10.
The idea of hierarchical gradient methods for optimization is considered. It is shown that the proposed approach provides powerful means to cope with some global convergence problems characteristic of the classical gradient methods. Concerning global convergence problems, four topics are addressed: the detour effect, the problem of multiscale models, the problem of highly ill-conditioned objective functions, and the problem of local-minima traps related to ambiguous regions of attraction. The great potential of hierarchical gradient algorithms is revealed through a hierarchical Gauss-Newton algorithm for unconstrained nonlinear least-squares problems. The algorithm, while maintaining a superlinear convergence rate like the common conjugate gradient or quasi-Newton methods, requires the evaluation of partial derivatives with respect to only one variable on each iteration. This property enables economized consumption of CPU time when the computer codes for the derivatives are intensive CPU consumers, e.g., when the gradient evaluations of ODE or PDE models are produced by numerical differentiation. The hierarchical Gauss-Newton algorithm is extended to handle interval constraints on the variables, and its effectiveness is demonstrated by computational results.
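The classical (non-hierarchical) Gauss-Newton iteration the algorithm builds on can be sketched for a one-parameter least-squares fit; the toy exponential-decay problem and all names here are illustrative, not from the paper:

```python
import math

def gauss_newton_1d(residual, jac, a0, iters=20):
    """Classical Gauss-Newton for a scalar parameter: a <- a - (J'r)/(J'J).
    The paper's hierarchical variant instead evaluates partial derivatives
    with respect to only one variable per iteration."""
    a = a0
    for _ in range(iters):
        r, J = residual(a), jac(a)
        a -= sum(Ji * ri for Ji, ri in zip(J, r)) / sum(Ji * Ji for Ji in J)
    return a

# Fit exp(-a t) to synthetic data generated with a = 1.5.
ts = [0.0, 0.5, 1.0, 1.5]
ys = [math.exp(-1.5 * t) for t in ts]
res = lambda a: [math.exp(-a * t) - y for t, y in zip(ts, ys)]
jac = lambda a: [-t * math.exp(-a * t) for t in ts]
a_hat = gauss_newton_1d(res, jac, a0=1.0)
```

Because the residual vanishes at the solution, the iteration converges superlinearly here, matching the rate claimed for the hierarchical variant.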

11.
Artificial Intelligence (AI) use in automated Electrocardiogram (ECG) classification has continuously attracted the research community’s interest, motivated by promising results. Despite this promise, limited attention has been paid to the robustness of the results, which is a key element for implementation in clinical practice. Uncertainty Quantification (UQ) is critical for trustworthy and reliable AI, particularly in safety-critical domains such as medicine. Estimating uncertainty in Machine Learning (ML) model predictions has been extensively used for Out-of-Distribution (OOD) detection in single-label tasks; however, the use of UQ methods in multi-label classification remains underexplored. This study goes beyond developing highly accurate models by comparing five uncertainty quantification methods using the same Deep Neural Network (DNN) architecture across various validation scenarios, including internal and external validation as well as OOD detection, taking multi-label ECG classification as the example domain. We show the importance of external validation and its impact on classification performance, the quality of uncertainty estimates, and calibration. Ensemble-based methods yield more robust uncertainty estimates than single-network or stochastic methods. Although current methods still have limitations in accurately quantifying uncertainty, particularly under dataset shift, combining uncertainty estimates with a classification-with-rejection option improves the ability to detect such changes. Moreover, we show that using uncertainty estimates as a criterion for sample selection in an active learning setting yields greater improvements in classification performance than random sampling.

12.
The paper concerns novel first-order methods for monotone variational inequalities. They use a very simple linesearch procedure that takes into account local information about the operator. The methods do not require Lipschitz continuity of the operator, and the linesearch uses only operator values. Moreover, when the operator is affine the linesearch becomes very simple, requiring only vector–vector operations. For all our methods, we establish the ergodic convergence rate. In addition, we modify one of the proposed methods for the case of composite minimization. Preliminary results from numerical experiments are quite promising.
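As a point of reference, the classical extragradient method for a monotone variational inequality can be sketched with a fixed step standing in for the paper's linesearch (illustrative only; the example operator is a rotation, which is monotone but not a gradient):

```python
def extragradient(F, x0, step=0.2, iters=200):
    """Extragradient sketch for the unconstrained monotone VI F(x*) = 0:
    a predictor step y followed by a corrector step using F(y)."""
    x = list(x0)
    for _ in range(iters):
        y = [xi - step * fi for xi, fi in zip(x, F(x))]
        x = [xi - step * fi for xi, fi in zip(x, F(y))]
    return x

# Rotation operator: plain gradient-style steps spiral outward on it,
# while the extragradient iterates contract to the solution x* = 0.
F = lambda v: [v[1], -v[0]]
sol = extragradient(F, [1.0, 1.0])
```

The paper's methods replace the fixed `step` with a linesearch using only operator values, which is what removes the Lipschitz-continuity requirement.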

13.
Projected gradient methods for nonnegative matrix factorization
Lin CJ. Neural Computation, 2007, 19(10): 2756-2779.
Nonnegative matrix factorization (NMF) can be formulated as a minimization problem with bound constraints. Although bound-constrained optimization has been studied extensively in both theory and practice, so far no study has formally applied its techniques to NMF. In this letter, we propose two projected gradient methods for NMF, both of which exhibit strong optimization properties. We discuss efficient implementations and demonstrate that one of the proposed methods converges faster than the popular multiplicative update approach. Simple MATLAB code is also provided.
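A minimal projected-gradient NMF sketch in Python (a fixed step size stands in for the Armijo-type line search of the paper; the data here is random, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((8, 6))           # data matrix to factor as V ~ W H
W = rng.random((8, 2))
H = rng.random((2, 6))

err0 = np.linalg.norm(V - W @ H)
step = 1e-2
for _ in range(500):
    # Alternating gradient steps on 0.5 * ||V - W H||_F^2, projected
    # onto the nonnegative orthant by clipping at zero.
    W = np.maximum(W - step * (W @ H - V) @ H.T, 0.0)
    H = np.maximum(H - step * W.T @ (W @ H - V), 0.0)
err = np.linalg.norm(V - W @ H)
```

The multiplicative-update baseline the paper compares against keeps nonnegativity by construction; projected gradient enforces it with the clipping step above.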

14.
Clustering algorithms are a useful tool for exploring data structure and have been employed in many disciplines. The focus of this paper is the partitioning clustering problem, with special interest in two recent approaches: kernel and spectral methods. The aim is to present a survey of kernel and spectral clustering methods, two approaches able to produce nonlinear separating hypersurfaces between clusters. The kernel clustering methods presented are kernel versions of many classical clustering algorithms, e.g., K-means, SOM and neural gas. Spectral clustering arises from concepts in spectral graph theory, where the clustering problem is configured as a graph cut problem and an appropriate objective function has to be optimized. An explicit proof that these two seemingly different paradigms optimize the same objective is reported, showing that they share the same mathematical foundation. In addition, fuzzy kernel clustering methods are presented as extensions of the kernel K-means clustering algorithm.
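The graph-cut view of spectral clustering admits a very small sketch: build an affinity matrix, form the normalized graph Laplacian, and bipartition by the sign of its second eigenvector. The toy 1-D data below is hypothetical, not from the survey:

```python
import numpy as np

# Two well-separated groups of 1-D points.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
A = np.exp(-(pts[:, None] - pts[None, :]) ** 2)   # Gaussian affinity graph
d = A.sum(axis=1)
# Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
L = np.eye(len(pts)) - A / np.sqrt(d[:, None] * d[None, :])
vals, vecs = np.linalg.eigh(L)                    # ascending eigenvalues
labels = (vecs[:, 1] > 0).astype(int)             # split on Fiedler vector
```

The survey's point is that kernel K-means optimizes an equivalent objective on the same affinity matrix, so the two paradigms coincide.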

15.
The pre-image problem in kernel methods
In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations of kernel PCA and kernel clustering on the USPS data set show much improved performance.

16.
A novel fuzzy nonlinear classifier, called kernel fuzzy discriminant analysis (KFDA), is proposed to deal with linearly non-separable problems. Using kernel methods, KFDA performs efficient classification in a kernel feature space: through a nonlinear mapping, the input data are mapped implicitly into a high-dimensional feature space where nonlinear patterns appear linear. Unlike fuzzy discriminant analysis (FDA), which is based on Euclidean distance, KFDA uses a kernel-induced distance. Theoretical analysis and experimental results show that the proposed classifier compares favorably with FDA.

17.
Techniques for producing metamodels for the efficient Monte Carlo simulation of high-consequence systems are presented. The bias of finite-element mesh discretization errors is eliminated or minimized by extrapolation using rational functions, rather than the power-series representation of Richardson extrapolation. Examples, including estimation of the vibrational frequency of a one-dimensional bar, show that the rational function model gives more accurate estimates using fewer terms than Richardson extrapolation, an important consideration for computational reliability assessment of high-consequence systems, where small biases in solutions can significantly affect the accuracy of small-magnitude probability estimates. Rational function representation of the discretization error enables the user to extrapolate accurately to the continuum from numerical experiments performed outside the asymptotic region of the usual power series, allowing coarser meshes in the numerical experiments and resulting in significant savings.
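For contrast with the rational-function approach, classical Richardson extrapolation from two mesh sizes can be sketched as follows (a toy derivative example, not the paper's finite-element setting):

```python
import math

def richardson(f_h, f_h2, p=2):
    """Richardson extrapolation: combine estimates at mesh sizes h and h/2
    for a method of order p, cancelling the leading power-series error
    term (the baseline the paper's rational functions improve on)."""
    return f_h2 + (f_h2 - f_h) / (2 ** p - 1)

# Central-difference derivative of exp at 0; the true value is 1.0.
d = lambda h: (math.exp(h) - math.exp(-h)) / (2.0 * h)
approx = richardson(d(0.2), d(0.1), p=2)   # far closer to 1.0 than d(0.1)
```

The formula is only reliable inside the asymptotic region where the h^p term dominates; the paper's rational-function fit relaxes exactly that restriction.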

18.
In recent years, methods based on kernel principal component analysis and kernel partial least squares have often been applied to process monitoring and fault detection to overcome the nonlinearity of industrial processes. Research shows that the detection performance of such methods is strongly affected by the kernel parameter, yet little work has addressed how to optimize it. Taking the most commonly used Gaussian kernel as an example, this paper first summarizes three common kernel-parameter optimization methods: the bisection method, a reconstruction method based on BP neural networks, and a reconstruction method based on sample classification. It then analyzes the characteristics of and connections between these methods and evaluates their performance. Finally, the methods are integrated into a kernel-parameter optimization system and applied to fault detection in a hot strip rolling process. The application results show that the optimized kernel parameter significantly improves fault detection performance.

19.
To address the problem that similarity computation between neighborhoods in the traditional non-local means (NLM) filter is easily corrupted by noise, a dual-kernel NLM filtering algorithm based on gradient features is proposed. Similarity between neighborhoods is measured by the Euclidean distance between image patches together with gradient features, and a dual-kernel function replaces the traditional exponential kernel in computing the similarity weights. The pixel weights are then redistributed according to how similar the neighborhood blocks in the search region are to the neighborhood of the current pixel; on this basis, the denoised value of each pixel is re-estimated to obtain the filtered image. Experimental results show that, compared with the traditional NLM filter and with improved NLM filters using Gaussian and sine kernels, the proposed algorithm reflects inter-neighborhood similarity more accurately and better preserves image details and edges, thereby effectively improving denoising performance.
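For orientation, plain non-local means with the traditional exponential kernel (the baseline the proposed dual-kernel method replaces) can be sketched on a 1-D signal; all names and parameters here are illustrative:

```python
import numpy as np

def nlm_1d(x, patch=3, h=0.5):
    """Plain non-local means on a 1-D signal: each sample is replaced by a
    weighted average of all samples, with weights exp(-d^2 / h^2) from the
    mean squared patch distance d^2 (the exponential kernel)."""
    n = len(x)
    pad = patch // 2
    xp = np.pad(x, pad, mode="edge")
    patches = np.array([xp[i:i + patch] for i in range(n)])
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / h ** 2)
    return (w @ x) / w.sum(axis=1)

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0], 50)               # noiseless step signal
noisy = clean + 0.1 * rng.standard_normal(100)
den = nlm_1d(noisy)                             # denoised estimate
```

The proposed method keeps this patch-averaging structure but changes the similarity measure (adding gradient features) and the kernel shape.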

20.
Considerable concern has arisen regarding the quality of intelligence analysis. This has been, in large part, motivated by the task of determining whether Iraq had weapons of mass destruction. One problem that made this analysis difficult was the uncertainty in much of the information available to the intelligence analysts. In this work, we introduce some tools that can be of use to intelligence analysts for representing and processing uncertain information. We make considerable use of technologies based on fuzzy sets and related disciplines such as approximate reasoning. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 523–544, 2006.
