Similar Documents
20 similar documents retrieved (search time: 265 ms)
1.
Nonlinear classifiers, e.g., support vector machines (SVMs) with radial basis function (RBF) kernels, have been widely used for automatic diagnosis of diseases because of their high accuracy. However, such classifiers are difficult to visualize, and thus it is difficult to give physicians an intuitive interpretation of their results. We developed a new nonlinear kernel, the localized radial basis function (LRBF) kernel, and a new visualization system, Visualization for Risk Factor Analysis (VRIFA), which applies a nomogram and the LRBF kernel to visualize the results of nonlinear SVMs and improve their interpretability while maintaining high prediction accuracy. Three representative medical datasets from the University of California, Irvine repository and the Statlog collection (breast cancer, diabetes, and heart disease) were used to evaluate the system. The results showed that the classification performance of the LRBF kernel is comparable with that of the RBF kernel, and the LRBF kernel is easy to visualize via a nomogram. Our study also showed that the LRBF kernel is less sensitive to noise features than the RBF kernel, whereas the LRBF kernel degrades the prediction accuracy more when important features are eliminated. We demonstrated the VRIFA system, which visualizes the results of linear and nonlinear SVMs with LRBF kernels, on the three datasets.
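The LRBF kernel and the nomogram visualization are specific to the paper, but the baseline it builds on is an ordinary RBF-kernel SVM. A minimal sketch of that baseline on the UCI breast cancer data (hyperparameters are illustrative assumptions, not the paper's settings):

```python
# Minimal sketch: RBF-kernel SVM on a UCI-style medical dataset (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
# Standardize features, then fit a nonlinear SVM with an RBF kernel.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```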

2.
In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.
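As one concrete special case mentioned in the abstract, manifold regularization admits a closed-form kernel expansion over labeled and unlabeled points. The sketch below follows the usual Laplacian-regularized least-squares solution on synthetic data; the kernel, graph weights, and penalty values are illustrative assumptions, not the MVPCR construction itself.

```python
# Minimal sketch of one MVPCR special case, manifold regularization (LapRLS),
# solved in closed form with numpy; kernel, graph, and penalties are illustrative.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(10, 2)); y_lab = np.sign(X_lab[:, 0])
X_unl = rng.normal(size=(40, 2))
X = np.vstack([X_lab, X_unl]); n, l = len(X), len(X_lab)

K = rbf_kernel(X, X)                      # Gram matrix over labeled + unlabeled points
W = rbf_kernel(X, X, gamma=0.5)           # graph affinities
L = np.diag(W.sum(1)) - W                 # graph Laplacian
J = np.zeros((n, n)); J[:l, :l] = np.eye(l)
Y = np.concatenate([y_lab, np.zeros(n - l)])

lam_A, lam_I = 1e-2, 1e-2                 # ambient and intrinsic regularization weights
alpha = np.linalg.solve(J @ K + lam_A * l * np.eye(n) + lam_I * l / n**2 * (L @ K), Y)
f_lab = K[:l] @ alpha                     # predictions on the labeled points
print("training sign accuracy:", (np.sign(f_lab) == y_lab).mean())
```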

3.
This paper concerns the use of support vector regression (SVR), which is based on the kernel method for learning from examples, in the identification of walking robots. To handle the complex dynamics of a humanoid robot and realize stable walking, this paper develops and implements two types of reference natural motions for a humanoid, namely, walking trajectories on a flat floor and on an ascending slope. Next, SVR is applied to model stable walking motions by considering these actual motions. Three kinds of kernels, namely, linear, polynomial, and radial basis function (RBF), are considered, and the results from these kernels are compared and evaluated. The results show that the SVR approach works well, and SVR with the RBF kernel function provides the best performance. Moreover, it can be effectively applied to model and control a practical biped walking robot.
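A minimal sketch of the kernel comparison the abstract describes, with a synthetic periodic trajectory standing in for the humanoid walking data (kernel settings are illustrative assumptions):

```python
# Minimal sketch: comparing SVR kernels on a synthetic trajectory (not the robot data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 400)[:, None]
q = np.sin(t).ravel() + 0.05 * rng.normal(size=t.shape[0])   # stand-in joint trajectory

for kernel in ("linear", "poly", "rbf"):
    model = SVR(kernel=kernel, C=10.0, epsilon=0.01)
    score = cross_val_score(model, t, q, cv=5, scoring="r2").mean()
    print(f"{kernel:>6}: mean R^2 = {score:.3f}")
```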

4.
Support vector machines (SVMs) have been widely used for creating fast and efficient performance macromodels for quickly predicting the performance parameters of analog circuits. These models have proved to be not only effective and fast but also accurate in predicting performance. A kernel function is an integral part of an SVM for obtaining an optimized and accurate model, yet there is no formal way to decide which kernel function suits a given class of regression problem. While the most commonly used kernels are the radial basis function, polynomial, spline, and multilayer perceptron kernels, we have explored many other unconventional kernel functions and report their efficacy and computational efficiency in this paper. These kernel functions are used with SVM regression models, and the resulting macromodels are tested on different analog circuits to check their robustness and performance. We used HSPICE to generate the set of learning data, and the Least Squares SVM (LS-SVM) toolbox with MATLAB for regression. The models containing modified compositions of kernels were found to be more accurate, and thus have lower root mean square error, than those containing standard kernels. We used different CMOS circuits varying in size and complexity as test vehicles: a two-stage op amp, a cascode op amp, a comparator, a differential op amp, and a voltage-controlled oscillator.
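The LS-SVM regression step can be reduced to solving one linear system in the dual variables. A minimal numpy sketch under that standard formulation, with synthetic samples standing in for the HSPICE-generated learning data (kernel and regularization values are illustrative assumptions):

```python
# Minimal sketch: least-squares SVM (LS-SVM) regression solved as a linear system
# with numpy, as a stand-in for the MATLAB LS-SVM toolbox flow described above.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X).ravel() + 0.05 * rng.normal(size=60)   # stand-in for HSPICE samples

gam, C = 2.0, 100.0
K = rbf_kernel(X, X, gam)
n = len(X)
# LS-SVM dual system: [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / C]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

X_test = np.linspace(-1, 1, 5)[:, None]
y_hat = rbf_kernel(X_test, X, gam) @ alpha + b
print(np.round(y_hat, 3))
```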

5.
Minimum class variance support vector machines.
In this paper, a modified class of support vector machines (SVMs) inspired by the optimization of Fisher's discriminant ratio is presented, the so-called minimum class variance SVMs (MCVSVMs). The MCVSVM optimization problem is solved in cases in which the training set contains fewer samples than the dimensionality of the training vectors by using dimensionality reduction through principal component analysis (PCA). Afterward, the MCVSVMs are extended in order to find nonlinear decision surfaces by solving the optimization problem in arbitrary Hilbert spaces defined by Mercer's kernels. In that case, it is shown that, under kernel PCA, the nonlinear optimization problem is transformed into an equivalent linear MCVSVM problem. The effectiveness of the proposed approach is demonstrated by comparing it with standard SVMs and other classifiers, such as kernel Fisher discriminant analysis, on facial image characterization problems such as gender determination, eyeglass detection, and neutral facial expression detection.
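For orientation, the MCVSVM idea can be summarized as replacing the margin norm of the soft-margin SVM with the within-class scatter suggested by Fisher's ratio; the sketch below is a hedged reconstruction of that primal form and its notation may differ from the paper.

```latex
% Hedged sketch of an MCVSVM-style primal: the usual soft-margin SVM with \|w\|^2
% replaced by the within-class scatter form w^T S_w w (notation may differ from the paper).
\begin{aligned}
\min_{\,w,\,b,\,\xi}\quad & \tfrac{1}{2}\, w^{\top} S_w\, w \;+\; C \sum_{i=1}^{N} \xi_i \\
\text{s.t.}\quad & y_i \left( w^{\top} x_i + b \right) \ \ge\ 1 - \xi_i, \qquad \xi_i \ \ge\ 0,
\end{aligned}
\qquad
S_w \;=\; \sum_{c} \sum_{x_i \in \mathcal{C}_c} (x_i - \mu_c)(x_i - \mu_c)^{\top}.
```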

6.
The paper considers a number of strategies for training radial basis function (RBF) classifiers. A benchmark problem is constructed using ten-dimensional input patterns which have to be classified into one of three classes. The RBF networks are trained using a two-phase approach (unsupervised clustering for the first layer followed by supervised learning for the second layer), error backpropagation (supervised learning for both layers), and a hybrid approach. It is shown that RBF classifiers trained with error backpropagation give results almost identical to those obtained with a multilayer perceptron. Although networks trained with the two-phase approach give slightly worse classification results, it is argued that the hidden-layer representation of such networks is much more powerful, especially if it is encoded in the form of a Gaussian mixture model. During training, the number of subclusters present within the training database can be estimated; during testing, the activities in the hidden layer of the classification network can be used to assess the novelty of input patterns and thereby help to validate network outputs.
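A minimal sketch of the two-phase approach described above (k-means-style clustering for the hidden layer, then supervised training of the output layer only), on synthetic ten-dimensional, three-class data; the number of centres and the width heuristic are illustrative assumptions:

```python
# Minimal sketch of two-phase RBF training: unsupervised k-means for the hidden
# layer, then a supervised linear read-out for the output layer.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)

# Phase 1: place Gaussian centres by clustering the inputs (width set heuristically).
km = KMeans(n_clusters=15, n_init=10, random_state=0).fit(X)
centres = km.cluster_centers_
sigma = np.mean(np.linalg.norm(X - centres[km.labels_], axis=1)) + 1e-8

def hidden(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Phase 2: supervised learning of the output layer only.
readout = LogisticRegression(max_iter=1000).fit(hidden(X), y)
print("training accuracy:", readout.score(hidden(X), y))
```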

7.
The Wiener-Lee-Schetzen scheme of using Gaussian white noise to test a nonlinear dynamical system is extended in two ways. 1) An arbitrary non-Gaussian white-noise stationary signal can be used as the test stimulus. 2) An arbitrary function of this stimulus can then be used as the analyzing function for cross-correlating with the response to obtain the kernels characterizing the system. Closed-form expressions are given for the generalized orthogonal basis functions. The generalized kernels are expanded in terms of Volterra kernels and Wiener kernels. The expansion coefficients are closely related to the cumulants of the stimulus probability distribution. These results are applied to the special case of a Gaussian stimulus and a three-level analyzing function. For this case, a detailed analysis is made of the magnitude of the deviation of the kernels obtained with the ternary truncation as compared to the Wiener kernels obtained by cross-correlating with the same Gaussian as was used for the stimulus. The deviations are found to be quite small.
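For reference, the classical Lee-Schetzen cross-correlation estimates that this scheme generalizes, written for a Gaussian white-noise stimulus of power level A (overbars denote time averages; the notation is a common textbook form rather than the paper's):

```latex
% Classical Lee--Schetzen cross-correlation estimates for Gaussian white noise
% of power level A (the baseline that the generalized kernels above extend).
k_0 = \overline{y(t)}, \qquad
k_1(\tau) = \frac{1}{A}\,\overline{y(t)\,x(t-\tau)}, \qquad
k_2(\tau_1,\tau_2) = \frac{1}{2A^2}\,\overline{\bigl[y(t)-k_0\bigr]\,x(t-\tau_1)\,x(t-\tau_2)}
\quad (\tau_1 \neq \tau_2).
```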

8.
Chen, S. Electronics Letters, 1995, 31(2): 117-118
An improved clustering and recursive least squares (RLS) learning algorithm for Gaussian radial basis function (RBF) networks is described for modelling and predicting nonlinear time series. A significant performance gain can be achieved with a much smaller network compared with the usual clustering-and-RLS method.
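A minimal sketch of the RLS half of such a scheme: updating the output weights of a Gaussian RBF predictor sample by sample on a synthetic series (centres, width, and forgetting factor are illustrative assumptions rather than the improved algorithm of the paper):

```python
# Minimal sketch: recursive least squares (RLS) update of Gaussian RBF output
# weights for one-step-ahead time-series prediction (synthetic data, illustrative sizes).
import numpy as np

rng = np.random.default_rng(3)
series = np.sin(0.3 * np.arange(500)) + 0.05 * rng.normal(size=500)

centres = np.linspace(-1.5, 1.5, 8)                # fixed centres (e.g., from clustering)
width = 0.5

def phi(x):                                        # hidden-layer response for scalar input x
    return np.exp(-((x - centres) ** 2) / (2 * width**2))

m = len(centres)
w = np.zeros(m)
P = np.eye(m) * 1e3                                # inverse correlation matrix
lam = 0.99                                         # forgetting factor

for t in range(1, len(series)):
    h = phi(series[t - 1])
    e = series[t] - w @ h                          # a-priori prediction error
    k = P @ h / (lam + h @ P @ h)                  # RLS gain
    w = w + k * e
    P = (P - np.outer(k, h @ P)) / lam

print("final weights:", np.round(w, 3))
```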

9.
Support vector machines (SVMs) are receiving increased attention in different application domains for which neural networks (NNs) have had a prominent role. However, in quality monitoring little attention has been given to this more recent development encompassing a technique with foundations in statistical learning theory. In this paper, we compare C-SVM and ν-SVM classifiers with radial basis function (RBF) NNs on data sets corresponding to product faults in an industrial environment concerning a plastics injection molding machine. The goal is to monitor in-process data as a means of indicating product quality and to be able to respond quickly to unexpected process disturbances. Our approach based on SVMs addresses the first part of this goal. Model selection, which amounts to a search in hyperparameter space, is performed as part of the study of suitable condition monitoring. In the multiclass problem formulation presented, classification accuracy is reported for both strategies. Experimental results obtained thus far indicate improved generalization with the large-margin classifier as well as better performance, enhancing the strength and efficacy of the chosen model for the practical case study.
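A minimal sketch of the hyperparameter search step, comparing C-SVM and ν-SVM classifiers with RBF kernels on synthetic multiclass fault data (grids and dataset are illustrative assumptions):

```python
# Minimal sketch of the hyperparameter search the abstract refers to, comparing
# C-SVM and nu-SVM with RBF kernels on synthetic fault data (all settings illustrative).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, NuSVC

X, y = make_classification(n_samples=400, n_features=12, n_classes=3,
                           n_informative=8, random_state=0)

searches = {
    "C-SVM":  (SVC(kernel="rbf"),   {"clf__C": [1, 10, 100], "clf__gamma": [0.01, 0.1]}),
    "nu-SVM": (NuSVC(kernel="rbf"), {"clf__nu": [0.1, 0.3, 0.5], "clf__gamma": [0.01, 0.1]}),
}
for name, (est, grid) in searches.items():
    pipe = Pipeline([("scale", StandardScaler()), ("clf", est)])
    gs = GridSearchCV(pipe, grid, cv=5).fit(X, y)
    print(f"{name}: best CV accuracy = {gs.best_score_:.3f}, params = {gs.best_params_}")
```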

10.
A new approach to the interpolation of sampled data

11.
We propose the use of support vector machines (SVMs) for automatic hyperspectral data classification and knowledge discovery. In the first stage of the study, we use SVMs for crop classification and analyze their performance in terms of efficiency and robustness, as compared to extensively used neural and fuzzy methods. Efficiency is assessed by evaluating accuracy and statistical differences in several scenes. Robustness is analyzed in terms of: (1) suitability to working conditions when a feature selection stage is not possible and (2) performance when different levels of Gaussian noise are introduced at their inputs. In the second stage of this work, we analyze the distribution of the support vectors (SVs) and perform sensitivity analysis on the best classifier in order to analyze the significance of the input spectral bands. For classification purposes, six hyperspectral images acquired with the 128-band HyMAP spectrometer during the DAISEX-1999 campaign are used. Six crop classes were labeled for each image. A reduced set of labeled samples is used to train the models, and the entire images are used to assess their performance. Several conclusions are drawn: (1) SVMs yield better outcomes than neural networks regarding accuracy, simplicity, and robustness; (2) training neural and neurofuzzy models is unfeasible when working with high-dimensional input spaces and great amounts of training data; (3) SVMs perform similarly for different training subsets with varying input dimension, which indicates that noisy bands are successfully detected; and (4) a valuable ranking of bands through sensitivity analysis is achieved.

12.
13.
The authors investigate the problem of nonlinear adaptive equalisation in the presence of intersymbol interference, additive white Gaussian noise, and co-channel interference. An extended radial basis function (RBF) network is proposed, in which regression weights are used in the output layer and each hidden unit is defined by a Gaussian function with the Mahalanobis distance. It is shown by simulation that the proposed structure gives reduced computational complexity without performance degradation, compared to that of the conventional RBF equaliser.
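A minimal sketch of a hidden unit of this form, i.e., a Gaussian response computed with the Mahalanobis distance; the centre and covariance are illustrative placeholders rather than values estimated from a channel:

```python
# Minimal sketch: a Gaussian RBF hidden unit using the Mahalanobis distance,
# as in the extended RBF equaliser described above (centre and covariance are illustrative).
import numpy as np

def mahalanobis_gaussian(x, centre, cov):
    """Hidden-unit response exp(-0.5 * (x-c)^T Sigma^{-1} (x-c))."""
    diff = x - centre
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)

centre = np.array([0.5, -0.5])
cov = np.array([[1.0, 0.3],
                [0.3, 0.5]])          # channel-dependent spread of the received cluster
x = np.array([0.4, -0.3])             # received signal vector
print(mahalanobis_gaussian(x, centre, cov))
```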

14.
This paper reviews the structure and learning algorithms of the conventional radial basis function (RBF) neural network and, on that basis, proposes the concept of a generalized radial basis function model, giving the network greater flexibility and extensibility in applications. Using numerical solutions of the Mackey-Glass haematopoiesis model equation, the generalized model is compared with the existing RBF model and the gradient radial basis function (GRBF) model on nonlinear time-series prediction, and the results are discussed, demonstrating the effectiveness of the generalized model.

15.
Previous research applying kernel methods such as support vector machines (SVMs) to hyperspectral image classification has achieved performance competitive with the best available algorithms. However, few efforts have been made to extend SVMs to cover the specific requirements of hyperspectral image classification, for example, by building tailor-made kernels. Observation of real-life spectral imagery from the AVIRIS hyperspectral sensor shows that the useful information for classification is not equally distributed across bands, which provides potential to enhance the SVM's performance by exploring different kernel functions. Spectrally weighted kernels are, therefore, proposed, and a set of particular weights is chosen by either optimizing an estimate of generalization error or evaluating each band's utility level. To assess the effectiveness of the proposed method, experiments are carried out on the publicly available 92AV3C dataset collected from the 220-dimensional AVIRIS hyperspectral sensor. Results indicate that the method is generally effective in improving performance: spectral weighting based on learning weights by gradient descent is found to be slightly better than an alternative method based on estimating "relevance" between band information and ground truth.
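A minimal sketch of the spectrally weighted kernel idea: a per-band weight enters the squared distance inside an RBF kernel, which scikit-learn can use as a callable kernel. The weights here are random placeholders, not weights learned by gradient descent or band-utility evaluation as in the paper:

```python
# Minimal sketch of a spectrally weighted RBF kernel: each band b gets a weight w_b
# inside the squared distance. The weights here are random placeholders, not learned.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=300, n_features=30, n_informative=10, random_state=0)

band_weights = rng.uniform(0.1, 1.0, size=X.shape[1])   # placeholder per-band weights
gamma = 0.01

def weighted_rbf(A, B):
    d2 = (band_weights * (A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

clf = SVC(kernel=weighted_rbf)        # scikit-learn accepts a callable Gram function
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```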

16.
The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and the threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are computed using error backpropagation. We consider three machines, namely, a classical RBF machine, an SV machine with a Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that, on the United States Postal Service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically well-founded but also superior in a practical application.

17.
A soft competitive learning algorithm for radial basis function neural networks
张志华, 郑南宁, 史罡. 《电子学报》, 2002, 30(1): 132-135
This paper constructs a class of soft competitive learning algorithms (SCLA) for radial basis function (RBF) neural networks. The main ideas are as follows. First, a membership function is introduced into the training of the Gaussian basis function center vectors: for each input sample, all center vectors are adjusted adaptively according to the degree to which the sample belongs to the class each center represents. Second, the reciprocal of the fuzziness factor of the membership function is identified with the temperature in simulated annealing and is increased gradually during the iterations. SCLA is a soft-competition counterpart of the k-means-based algorithm for training RBF network center vectors, and it overcomes the latter's sensitivity to initial values and its dead-node problem. Simulation experiments demonstrate that SCLA is effective.
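A minimal sketch of the soft-competition idea (every centre is pulled toward each sample in proportion to a membership that is sharpened over the iterations, in the spirit of the annealed fuzziness factor); this is an illustrative update rule, not the SCLA algorithm from the paper:

```python
# Minimal sketch of a soft competitive centre update: every centre moves toward each
# sample in proportion to a softmax-style membership whose sharpness is annealed.
# Illustrative only; not the SCLA update rule from the paper.
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (-2.0, 0.0, 2.0)])
centres = rng.normal(size=(3, 2))

lr = 0.05
for epoch in range(30):
    beta = 1.0 + 0.5 * epoch                    # "inverse temperature": sharper over time
    for x in rng.permutation(X):
        d2 = ((centres - x) ** 2).sum(1)
        u = np.exp(-beta * d2); u /= u.sum()    # soft memberships of x in each centre
        centres += lr * u[:, None] * (x - centres)

print(np.round(centres[np.argsort(centres[:, 0])], 2))
```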

18.
Flood forecasting using radial basis function neural networks
A radial basis function (RBF) neural network (NN) is proposed to develop a rainfall-runoff model for three-hour-ahead flood forecasting. For faster training speed, the RBF NN employs a hybrid two-stage learning scheme. In the first stage, unsupervised learning, fuzzy min-max clustering is introduced to determine the characteristics of the nonlinear RBFs. In the second stage, supervised learning, multivariate linear regression is used to determine the weights between the hidden and output layers. The rainfall-runoff relation can thus be considered a linear combination of nonlinear RBFs. Rainfall and runoff events of the Lanyoung River collected during typhoons are used to train, validate, and test the network. The results show that the RBF NN can be considered a suitable technique for predicting flood flow.

19.
The evolutionary structure optimisation (ESO) method for Gaussian radial basis function (RBF) networks has already been presented by the authors. Here, they improve the mutation operator of the ESO method and apply it to a mixture of experts (ME) for modelling and predicting nonlinear time series. The ME implementation provides much better generalisation performance with fewer network parameters, compared to the Gaussian RBF networks.

20.
This paper addresses the problem of the classification of hyperspectral remote sensing images by support vector machines (SVMs). First, we propose a theoretical discussion and experimental analysis aimed at understanding and assessing the potential of SVM classifiers in hyperdimensional feature spaces. Then, we assess the effectiveness of SVMs with respect to conventional feature-reduction-based approaches and their performance in hypersubspaces of various dimensionalities. To sustain such an analysis, the performances of SVMs are compared with those of two other nonparametric classifiers (i.e., radial basis function neural networks and the K-nearest neighbor classifier). Finally, we study the potentially critical issue of applying binary SVMs to multiclass problems in hyperspectral data. In particular, four different multiclass strategies are analyzed and compared: the one-against-all, the one-against-one, and two hierarchical tree-based strategies. Different performance indicators have been used to support our experimental studies in a detailed and accurate way, i.e., the classification accuracy, the computational time, the stability to parameter setting, and the complexity of the multiclass architecture. The results obtained on a real Airborne Visible/Infrared Imaging Spectroradiometer hyperspectral dataset lead to the conclusion that, whatever multiclass strategy is adopted, SVMs are a valid and effective alternative to conventional pattern recognition approaches (feature-reduction procedures combined with a classification method) for the classification of hyperspectral remote sensing data.
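A minimal sketch of two of the four multiclass strategies compared above, one-against-all and one-against-one, assembled from binary RBF-kernel SVMs on synthetic data (dataset and hyperparameters are illustrative assumptions):

```python
# Minimal sketch: one-against-all and one-against-one multiclass strategies
# built from binary SVMs (synthetic data stands in for the hyperspectral scenes).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)

for name, wrapper in (("one-against-all", OneVsRestClassifier),
                      ("one-against-one", OneVsOneClassifier)):
    clf = wrapper(SVC(kernel="rbf", C=10, gamma="scale"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.3f}")
```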

