8 results found
1.
Mika S., Rätsch G., Weston J., Schölkopf B., Smola A., Müller K.-R. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(5): 623–628
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions. Extensive simulations show the utility of our approach.
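For orientation, a standard sketch of the Rayleigh-coefficient formulation behind kernel Fisher discriminants; the matrices M and N below are shorthand built from kernel evaluations, and the paper's exact notation may differ:

```latex
% Fisher's discriminant maximizes the Rayleigh coefficient over
% directions w, with S_B / S_W the between-/within-class scatter:
\[
  J(w) \;=\; \frac{w^{\top} S_B\, w}{w^{\top} S_W\, w}
\]
% Kernelization: restrict w to the span of the mapped data,
% w = \sum_i \alpha_i \Phi(x_i). J then becomes a Rayleigh coefficient
% in the expansion coefficients, where M and N are computable purely
% from kernel values k(x_i, x_j), never from \Phi explicitly:
\[
  J(\alpha) \;=\; \frac{\alpha^{\top} M\, \alpha}{\alpha^{\top} N\, \alpha}
\]
```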
2.
New support vector algorithms
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
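A minimal sketch of the ν-parameterization in practice, assuming scikit-learn (whose NuSVC/NuSVR implement ν-SV classification and regression); the synthetic dataset and the ν values are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf").fit(X, y)
    # nu is a lower bound on the fraction of support vectors and an
    # upper bound on the fraction of margin errors.
    frac_sv = clf.support_.size / len(X)
    print(f"nu={nu:.1f}  support-vector fraction={frac_sv:.2f}")
```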
3.
Pfingsten T., Herrmann D.J.L., Schnitzler T., Feustel A., Schölkopf B. IEEE Transactions on Automation Science and Engineering, 2007, 4(3): 465–469
The final properties of sophisticated products can be affected by many unapparent dependencies within the manufacturing process, and the products' integrity can often only be checked in a final measurement. Troubleshooting can therefore be very tedious if not impossible in large assembly lines. In this paper, we show that feature selection is an efficient tool for serial-grouped lines to reveal causes for irregularities in product attributes.
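A hypothetical sketch of the idea, not the paper's method: rank logged process parameters by how much information they carry about the final measurement. The data, the relevant parameter indices, and the mutual-information scorer are all stand-ins:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))  # 10 logged process parameters
# Final product measurement, secretly driven by parameters 3 and 7:
y = 2.0 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.1, size=500)

scores = mutual_info_regression(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]
print("parameters ranked by relevance:", ranking)  # 3 and 7 should lead
```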
4.
Input space versus feature space in kernel-based methods
Schölkopf B., Mika S., Burges C.J.C., Knirsch P., Müller K.-R., Rätsch G., Smola A.J. IEEE Transactions on Neural Networks, 1999, 10(5): 1000–1017
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.
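A rough sketch of the kernel-PCA denoising application, assuming scikit-learn; note that its inverse_transform learns approximate preimages by ridge regression, in the spirit of (but not identical to) the preimage algorithms described here:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=300)
clean = np.c_[np.cos(t), np.sin(t)]  # points on the unit circle
noisy = clean + rng.normal(scale=0.15, size=clean.shape)

# Project onto leading kernel principal components, then map back to
# input space via learned approximate preimages.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
denoised = kpca.inverse_transform(kpca.fit_transform(noisy))

print("mean distance to the circle, before vs. after:",
      np.abs(np.linalg.norm(noisy, axis=1) - 1).mean(),
      np.abs(np.linalg.norm(denoised, axis=1) - 1).mean())
```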
5.
Rätsch G., Mika S., Schölkopf B., Müller K.-R. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(9): 1184–1199
We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm: one-class leveraging, starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, it returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.
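For context, a sketch of the 1-SVM starting point, assuming scikit-learn's OneClassSVM; the paper's boosting-style one-class leveraging algorithm itself is not reproduced here:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))  # samples from the "normal" distribution

# Fit a one-class SVM that scores whether a test point looks like it
# was drawn from the training distribution.
ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.5).fit(X_train)

X_test = np.array([[0.0, 0.0], [4.0, 4.0]])
print(ocsvm.predict(X_test))            # +1 = inlier, -1 = outlier
print(ocsvm.decision_function(X_test))  # signed distance to the boundary
```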
6.
Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
Williamson R.C., Smola A.J., Schölkopf B. IEEE Transactions on Information Theory, 2001, 47(6): 2516–2532
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.
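For reference, the standard definition of the entropy numbers that drive these bounds (notation is a sketch, not the paper's):

```latex
% Entropy numbers of an operator T : X -> Y, with B_X the unit ball of X:
\[
  e_n(T) \;=\; \inf\Bigl\{\, \varepsilon > 0 \;:\;
    T(B_X) \text{ can be covered by } 2^{\,n-1}
    \text{ balls of radius } \varepsilon \text{ in } Y \,\Bigr\}
\]
% Covering numbers of the hypothesis class are then controlled by the
% entropy numbers of the kernel-induced operator, whose decay is in
% turn governed by the kernel's eigenvalue sequence.
```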
7.
Comparing support vector machines with Gaussian kernels to radial basis function classifiers
Schölkopf B., Kah-Kay Sung, Burges C.J.C., Girosi F., Niyogi P., Poggio T., Vapnik V. IEEE Transactions on Signal Processing, 1997, 45(11): 2758–2765
The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering, and the weights are computed using error backpropagation. We consider three machines, namely, a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the United States Postal Service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically well-founded but also superior in a practical application.
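A rough sketch of such a comparison, assuming scikit-learn and using its digits dataset as a stand-in for USPS; a logistic-regression output layer replaces the paper's backpropagation-trained weights, and all hyperparameters are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM with Gaussian kernel: centers (support vectors) chosen automatically.
svm = SVC(kernel="rbf", gamma=0.001, C=10).fit(X_tr, y_tr)

# Classical RBF machine: centers from k-means, then a linear output layer.
centers = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def rbf_features(X, centers, gamma=0.001):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rbf_net = LogisticRegression(max_iter=1000).fit(rbf_features(X_tr, centers), y_tr)

print("SVM accuracy:        ", svm.score(X_te, y_te))
print("RBF-network accuracy:", rbf_net.score(rbf_features(X_te, centers), y_te))
```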
8.
An introduction to kernel-based learning algorithms
Müller K.-R., Mika S., Rätsch G., Tsuda K., Schölkopf B. IEEE Transactions on Neural Networks, 2001, 12(2): 181–201
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
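A tiny self-contained illustration of the kernel trick common to all of these methods: a polynomial kernel evaluates an inner product in an explicit feature space without ever constructing it (example values arbitrary):

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for 2-D input: (x1^2, x2^2, sqrt(2)*x1*x2)."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

explicit = phi(x) @ phi(z)  # inner product computed in feature space
kernel = (x @ z) ** 2       # same value, computed entirely in input space
print(explicit, kernel)     # both 16.0
```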