1.
2.
Although over a thousand scientific papers address the topic of load forecasting every year, only a few are dedicated to finding a general framework for load forecasting that improves performance without depending on the unique characteristics of a certain task, such as geographical location. Meta-learning, a powerful approach for algorithm selection, has so far been demonstrated only on univariate time-series forecasting. Multivariate time-series forecasting is known to perform better in load forecasting. In this paper, we propose a meta-learning system for multivariate time-series forecasting as a general framework for load forecasting model selection. We show that a meta-learning system built on 65 load forecasting tasks returns lower forecasting error than 10 well-known forecasting algorithms on 4 load forecasting tasks in a recurrent real-life simulation. We introduce the new metafeatures of fickleness, traversity, granularity and highest ACF. The meta-learning framework is parallelized, component-based and easily extendable.
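As a rough illustration of the model-selection idea (not the authors' system), the sketch below trains a meta-learner that maps task metafeatures to the index of the best forecasting algorithm. Only the "highest ACF" metafeature is implemented; the other features, the synthetic tasks and the labels `best_algo` are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def highest_acf(y, max_lag=24):
    """Largest lag-k autocorrelation for k = 1..max_lag (one of the paper's metafeatures)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    return max(np.corrcoef(y[:-k], y[k:])[0, 1] for k in range(1, max_lag + 1))

def metafeatures(y):
    # Hypothetical metafeature vector; the paper's fickleness/traversity/granularity are not reproduced.
    return np.array([highest_acf(y), np.std(np.diff(y)), float(len(y))])

rng = np.random.default_rng(0)
# Synthetic stand-ins for historical load-forecasting tasks and the best algorithm found on each.
tasks = [np.sin(np.arange(200) / p) + 0.1 * rng.normal(size=200) for p in rng.uniform(2, 20, 65)]
best_algo = rng.integers(0, 10, size=65)          # labels of the 10 candidate forecasters

selector = RandomForestClassifier(random_state=0).fit([metafeatures(t) for t in tasks], best_algo)
new_task = np.sin(np.arange(200) / 5.0)
print("recommended forecaster:", selector.predict([metafeatures(new_task)])[0])
```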
3.
In subspace identification methods, the system matrices are usually estimated by least squares, based on estimated Kalman filter state sequences and the observed inputs and outputs. For a finite number of data points, the estimated system matrix is not guaranteed to be stable, even when the true linear system is known to be stable. In this paper, stability is imposed by using regularization. The regularization term used here is the trace of a matrix that involves the dynamical system matrix and a positive (semi)definite weighting matrix. The amount of regularization can be determined from a generalized eigenvalue problem. The data augmentation method of Chui and Maciejowski (1996) is obtained by using specific choices for the weighting matrix in the regularization term.
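One plausible reading of the idea, as a hedged sketch: penalize a Frobenius-norm least-squares fit of the state sequence with a term c·trace(A W Aᵀ) and increase c until the estimated A is stable. The bisection search below stands in for the paper's generalized-eigenvalue computation of the regularization amount; W = I, the synthetic state sequences and the tolerance are illustrative assumptions.

```python
import numpy as np

def regularized_A(X0, X1, c, W=None):
    # Closed-form minimizer of ||X1 - A X0||_F^2 + c * trace(A W A^T), with W symmetric PSD.
    n = X0.shape[0]
    W = np.eye(n) if W is None else W
    return X1 @ X0.T @ np.linalg.inv(X0 @ X0.T + c * W)

def stable_A(X0, X1, tol=1e-6):
    spectral_radius = lambda c: max(abs(np.linalg.eigvals(regularized_A(X0, X1, c))))
    lo, hi = 0.0, 1.0
    while spectral_radius(hi) >= 1:          # grow c until the estimate is stable
        hi *= 2
    while hi - lo > tol:                     # smallest stabilizing c by bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if spectral_radius(mid) >= 1 else (lo, mid)
    return regularized_A(X0, X1, hi)

rng = np.random.default_rng(1)
X0 = rng.normal(size=(3, 100))                     # estimated state sequence x_k
X1 = 1.05 * X0 + 0.05 * rng.normal(size=(3, 100))  # shifted states x_{k+1}; the unregularized fit is unstable
A = stable_A(X0, X1)
print(np.max(np.abs(np.linalg.eigvals(A))))        # spectral radius just below 1
```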
4.
Recent studies show that principal component analysis (PCA) of heartbeats is a well-performing method to derive a respiratory signal from ECGs. In this study, an improved ECG-derived respiration (EDR) algorithm based on kernel PCA (kPCA) is presented. kPCA can be seen as a generalization of PCA in which nonlinearities in the data are taken into account by nonlinearly mapping the data, using a kernel function, into a higher dimensional space in which PCA is carried out. A comparison of several kernels suggests that a radial basis function (RBF) kernel performs best when deriving EDR signals. Further improvement is obtained by tuning the parameter σ² that represents the variance of the RBF kernel. The performance of kPCA is assessed by comparing the EDR signals to a reference respiratory signal, using the correlation and the magnitude squared coherence coefficients. When the coefficients of the tuned EDR signals obtained with kPCA are compared to those of EDR signals obtained with PCA and with the R peak amplitude algorithm, statistically significant differences are found in both the correlation and coherence coefficients (both p < 0.0001), showing that kPCA outperforms PCA and the R peak amplitude in extracting a respiratory signal from single-lead ECGs.
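A minimal sketch of the kPCA step with scikit-learn, assuming a matrix of aligned heartbeats and a beat-by-beat reference respiratory signal; the preprocessing, the σ² grid and the synthetic data are illustrative, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def edr_kpca(beats, sigma2):
    # First kernel principal component of the heartbeat matrix = EDR signal (sign is arbitrary).
    kpca = KernelPCA(n_components=1, kernel="rbf", gamma=1.0 / (2.0 * sigma2))
    return kpca.fit_transform(beats).ravel()

def tune_sigma2(beats, ref_resp, grid):
    # Pick the sigma^2 that maximizes |correlation| with the reference respiration.
    scores = [abs(np.corrcoef(edr_kpca(beats, s), ref_resp)[0, 1]) for s in grid]
    return grid[int(np.argmax(scores))]

# Synthetic stand-in data, only so the sketch runs end to end.
rng = np.random.default_rng(0)
ref_resp = np.sin(2 * np.pi * 0.05 * np.arange(300))                 # slow respiratory modulation
template = np.hanning(60)                                            # stand-in heartbeat shape
beats = np.outer(1 + 0.3 * ref_resp, template) + 0.05 * rng.normal(size=(300, 60))
best = tune_sigma2(beats, ref_resp, grid=np.logspace(-1, 3, 9))
edr = edr_kpca(beats, best)
print("sigma^2 =", best, "corr =", np.corrcoef(edr, ref_resp)[0, 1])  # sign of corr is arbitrary
```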
5.
Benchmarking Least Squares Support Vector Machine Classifiers   (total citations: 16; self-citations: 0; citations by others: 16)
In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LS-SVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public-domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function (RBF) kernels. Both the SVM and LS-SVM classifiers with an RBF kernel, in combination with standard cross-validation procedures for hyperparameter selection, achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described in the literature, including decision-tree-based algorithms, statistical algorithms and instance-based learning methods. We show on ten UCI datasets that the LS-SVM sparse approximation procedure can be successfully applied.
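For reference, a minimal NumPy sketch of an LS-SVM classifier with an RBF kernel: the dual solution is a single linear system rather than a QP. The hyperparameters γ and σ² are fixed here instead of being selected by cross-validation as in the benchmark, and the toy data are not one of the twenty datasets.

```python
import numpy as np

def rbf(X, Z, sigma2):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma2))

def lssvm_fit(X, y, gamma=10.0, sigma2=1.0):
    # Solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1] with Omega_kl = y_k y_l K(x_k, x_l).
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, sigma2)
    A = np.block([[np.zeros((1, 1)), y[None, :]],
                  [y[:, None], Omega + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    return sol[0], sol[1:]                                   # bias b, support values alpha

def lssvm_predict(X, y, b, alpha, Xnew, sigma2=1.0):
    return np.sign(rbf(Xnew, X, sigma2) @ (alpha * y) + b)

# Toy check on two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
b, alpha = lssvm_fit(X, y)
print("training accuracy:", (lssvm_predict(X, y, b, alpha, X) == y).mean())
```

The pruning described in the abstract would, in a second stage, zero out the smallest |α_k| and re-solve the (smaller) linear system.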
6.
A new formulation for multiway spectral clustering is proposed. This method corresponds to a weighted kernel principal component analysis (PCA) approach based on primal-dual least-squares support vector machine (LS-SVM) formulations. The formulation allows the extension to out-of-sample points. In this way, the proposed clustering model can be trained, validated, and tested. The clustering information is contained in the eigendecomposition of a modified similarity matrix derived from the data. This eigenvalue problem corresponds to the dual solution of a primal optimization problem formulated in a high-dimensional feature space. A model selection criterion called the Balanced Line Fit (BLF) is also proposed. This criterion is based on the out-of-sample extension and exploits the structure of the eigenvectors and the corresponding projections when the clusters are well formed. The BLF criterion can be used to obtain clustering parameters in a learning framework. Experimental results on difficult toy problems and image segmentation show improved performance in terms of generalization to new samples and computation time.
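For intuition only, a generic multiway spectral-clustering sketch with a Nyström-style out-of-sample projection; it clusters the top eigenvectors of a normalized RBF similarity matrix and is not the paper's weighted kernel PCA / BLF formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf(X, Z, sigma2=0.5):
    return np.exp(-((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1) / (2 * sigma2))

def fit_spectral(X, k, sigma2=0.5):
    K = rbf(X, X, sigma2)
    d = K.sum(1)
    M = K / np.sqrt(np.outer(d, d))                   # symmetrically normalized similarity
    vals, vecs = np.linalg.eigh(M)
    U, lam = vecs[:, -k:], vals[-k:]                  # top-k eigenvectors carry the cluster structure
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    km.fit(U / np.linalg.norm(U, axis=1, keepdims=True))
    return d, U, lam, km

def predict(Xtrain, Xnew, d, U, lam, km, sigma2=0.5):
    # Nystrom approximation of the eigenvector entries for new points, then assign to clusters.
    Knew = rbf(Xnew, Xtrain, sigma2)
    Mnew = Knew / np.sqrt(np.outer(Knew.sum(1), d))
    Unew = Mnew @ U / lam
    return km.predict(Unew / np.linalg.norm(Unew, axis=1, keepdims=True))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.2, (60, 2)) for c in ((0, 0), (2, 0), (1, 2))])
d, U, lam, km = fit_spectral(X, k=3)
print(predict(X, rng.normal((2, 0), 0.2, (5, 2)), d, U, lam, km))   # new points drawn near one blob
```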
7.
In multipass rendering, care has to be taken to include all light transport only once in the final solution. Therefore, the different methods in current multipass configurations each handle a perfectly disjoint part of the light transport. In this paper, a Monte Carlo variance reduction technique is presented that probabilistically weights overlapping transport between different methods. A good heuristic for the weights is derived so that the strengths of the respective methods are retained. The technique is applied to a combination of radiosity and bidirectional path tracing, and significant improvement is obtained over the unweighted combination. This method promises to be a very useful extension to other multipass algorithms as well.
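The weighting idea can be illustrated with a toy Monte Carlo integral: as long as the weights on transport that two methods both cover sum to one, each contribution is counted exactly once in expectation. The power-heuristic weights and the densities pA, pB below are illustrative choices, not the paper's radiosity/bidirectional-path-tracing heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2                                 # stand-in "overlapping transport" on [0, 1], true integral 1/3

def estimate(n=100_000, beta=2.0):
    xa = rng.random(n)                               # method A samples uniformly, p_A(x) = 1
    xb = np.sqrt(rng.random(n))                      # method B samples p_B(x) = 2x via inverse CDF
    pA = lambda x: np.ones_like(x)
    pB = lambda x: 2 * x
    def w(x, mine, other):                           # power-heuristic weight; w_A + w_B = 1 everywhere
        return mine(x) ** beta / (mine(x) ** beta + other(x) ** beta)
    est_a = w(xa, pA, pB) * f(xa) / pA(xa)
    est_b = w(xb, pB, pA) * f(xb) / pB(xb)
    return est_a.mean() + est_b.mean()

print(estimate())                                    # ~1/3: the overlap is counted exactly once in expectation
```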
8.
The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks such as the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for classification, as introduced by Vapnik, a nonlinear decision boundary is obtained by first mapping the input vector in a nonlinear way to a high-dimensional kernel-induced feature space in which a linear large margin classifier is constructed. Practical expressions are formulated in the dual space in terms of the related kernel function, and the solution follows from a (convex) quadratic programming (QP) problem. In least-squares SVMs (LS-SVMs), the SVM problem formulation is modified by introducing a least-squares cost function and equality instead of inequality constraints, and the solution follows from a linear system in the dual space. Implicitly, the least-squares formulation corresponds to a regression formulation and is also related to kernel Fisher discriminant analysis. The least-squares regression formulation has advantages for deriving analytic expressions in a Bayesian evidence framework, in contrast to the classification formulations used, for example, in Gaussian processes (GPs). The LS-SVM formulation has clear primal-dual interpretations, and without the bias term, one explicitly constructs a model that yields the same expressions as have been obtained with GPs for regression. In this article, the Bayesian evidence framework is combined with the LS-SVM classifier formulation. Starting from the feature space formulation, analytic expressions are obtained in the dual space on the different levels of Bayesian inference, while posterior class probabilities are obtained by marginalizing over the model parameters. Empirical results obtained on 10 public-domain data sets show that the LS-SVM classifier designed within the Bayesian evidence framework consistently yields good generalization performance.
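For orientation, the evidence framework applies Bayes' rule at three levels; the schematic below uses µ and ζ for the regularization and error hyperparameters and H for the kernel/model choice, which is common notation for this framework rather than necessarily the article's own symbols.

```latex
\begin{align*}
\text{Level 1 (weights):}\quad & p(w,b \mid D,\mu,\zeta,\mathcal{H}) \propto p(D \mid w,b,\zeta,\mathcal{H})\, p(w,b \mid \mu,\mathcal{H}),\\
\text{Level 2 (hyperparameters):}\quad & p(\mu,\zeta \mid D,\mathcal{H}) \propto p(D \mid \mu,\zeta,\mathcal{H})\, p(\mu,\zeta \mid \mathcal{H}),\\
\text{Level 3 (kernel/model):}\quad & p(\mathcal{H} \mid D) \propto p(D \mid \mathcal{H})\, p(\mathcal{H}).
\end{align*}
```

Maximizing the level-1 posterior corresponds to training the LS-SVM classifier (a linear system in the dual), the higher-level evidences are used for hyperparameter and kernel selection, and the posterior class probabilities mentioned in the abstract follow by marginalizing over w and b.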
9.
This paper considers the estimation of monotone nonlinear regression functions based on Support Vector Machines (SVMs), Least Squares SVMs (LS-SVMs) and other kernel machines. It illustrates how to employ the primal-dual optimization framework characterizing LS-SVMs in order to derive a globally optimal one-stage estimator for monotone regression. As a practical application, the paper considers the smooth estimation of cumulative distribution functions (cdfs), which leads to a kernel regressor that incorporates a Kolmogorov–Smirnov discrepancy measure, a Tikhonov-based regularization scheme and a monotonicity constraint.
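As a hedged, generic sketch of the application (not the paper's one-stage primal-dual estimator): fit a kernel ridge model to the empirical cdf with explicit monotonicity constraints at the sample points, solved as a small constrained least-squares problem. The kernel width, regularization weight and the SciPy SLSQP solver are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(x, z, sigma2=0.2):
    return np.exp(-(x[:, None] - z[None, :]) ** 2 / (2 * sigma2))

rng = np.random.default_rng(0)
n = 40
x = np.sort(rng.normal(size=n))
y = (np.arange(1, n + 1) - 0.5) / n                  # empirical cdf values at the sorted samples
K = rbf(x, x)
lam = 1e-2
D = np.diff(np.eye(n), axis=0)                       # first-difference operator

def objective(theta):
    beta, b = theta[:-1], theta[-1]
    r = K @ beta + b - y
    return r @ r + lam * beta @ K @ beta             # squared loss + RKHS-norm style penalty

cons = {"type": "ineq", "fun": lambda theta: D @ (K @ theta[:-1])}   # fitted cdf must be non-decreasing
res = minimize(objective, np.zeros(n + 1), constraints=[cons], method="SLSQP",
               options={"maxiter": 500})
beta, b = res.x[:-1], res.x[-1]
cdf_hat = K @ beta + b
print("monotone:", np.all(np.diff(cdf_hat) >= -1e-8), " max abs error:", np.abs(cdf_hat - y).max())
```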
10.
In several countries, the transfer of legal rights to rivers is being discussed as an approach to more effective water resources management. But what could this transfer mean in terms of a healthy river? We address this question by identifying the ecological requirements for naturally functioning rivers and then exploring the demands that these requirements impose on society, the current policy responses to these requirements, and whether the transfer of rights to the river could facilitate the preservation of healthy freshwater ecosystems.