16 results found (16 subscription full-text, 0 free).
By subject: Automation Technology (13), Water Conservancy Engineering (1), Radio (1), Metallurgical Industry (1).
By year: 2019 (1), 2017 (1), 2008 (1), 2002 (3), 2000 (1), 1999 (4), 1998 (1), 1997 (1), 1996 (1), 1995 (1), 1981 (1).
1.
Knowledge transfer in SVM and neural networks   Cited by: 1 (self: 0, other: 1)
The paper considers general machine learning models in which knowledge transfer is positioned as the main method for improving convergence properties. Previous research focused on mechanisms of knowledge transfer within the SVM framework; this paper shows that the mechanism is applicable to the neural network framework as well. The paper describes several general approaches for knowledge transfer in both SVM and ANN frameworks and illustrates the algorithmic implementation and performance of one of these approaches on several synthetic examples.
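One generic way to realize such a transfer — a distillation-style sketch, not necessarily the paper's exact algorithm — is to train an SVM "teacher" and regress a small neural-network "student" on the teacher's real-valued decision function rather than on the hard 0/1 labels:

```python
# Sketch: transfer knowledge from an SVM teacher into a neural-network
# student by regressing the student on the teacher's signed margins.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC
from sklearn.neural_network import MLPRegressor

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

teacher = SVC(kernel="rbf", gamma=2.0).fit(X, y)
soft_targets = teacher.decision_function(X)  # margins carry more information
                                             # than the binary labels alone

student = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(X, soft_targets)

# The student classifies by the sign of its regressed margin.
pred = (student.predict(X) > 0).astype(int)
print("agreement with teacher:", np.mean(pred == teacher.predict(X)))
```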
2.
An overview of statistical learning theory   Cited by: 335 (self: 0, other: 335)
Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the mid-1990s, new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory, including both its theoretical and algorithmic aspects. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization that are more general than those discussed in classical statistical paradigms, and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems.
3.
4.
Model complexity control for regression using VC generalization bounds   Cited by: 8 (self: 0, other: 8)
It is well known that for a given sample size there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error. Hence, any method for learning from finite samples needs some provision for complexity control. Existing implementations of complexity control include penalization (or regularization), weight decay (in neural networks), and various greedy procedures (aka constructive, growing, or pruning methods). There are numerous proposals for determining optimal model complexity (aka model selection) based on various (asymptotic) analytic estimates of the prediction risk and on resampling approaches. Nonasymptotic bounds on the prediction risk based on Vapnik-Chervonenkis (VC) theory have been proposed by Vapnik. This paper describes the application of VC bounds to regression problems with the usual squared loss. An empirical study is performed for settings where the VC bounds can be rigorously applied, i.e., linear models and penalized linear models for which the VC dimension can be accurately estimated and the empirical risk can be reliably minimized. Empirical comparisons between model selection using VC bounds and classical methods are performed for various noise levels, sample sizes, target functions, and types of approximating functions. Our results demonstrate the advantages of VC-based complexity control with finite samples.
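For concreteness, a minimal sketch of VC-based model selection for polynomial regression is given below. It assumes the practical form of Vapnik's regression bound popularized by Cherkassky and Mulier, $R \le R_{emp} / \left(1 - \sqrt{p - p\ln p + \frac{\ln n}{2n}}\right)_+$ with $p = h/n$; the paper's exact constants may differ.

```python
# Sketch: pick a polynomial degree by minimizing a VC-penalized training
# risk instead of the raw training risk (assumed bound form, see above).
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(-1, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

def vc_penalized_risk(emp_risk, h, n):
    # Penalization factor applied to the empirical (training) risk.
    p = h / n
    denom = 1.0 - np.sqrt(p - p * np.log(p) + np.log(n) / (2 * n))
    return np.inf if denom <= 0 else emp_risk / denom

bounds = {}
for degree in range(1, 11):
    coeffs = np.polyfit(x, y, degree)
    emp_risk = np.mean((np.polyval(coeffs, x) - y) ** 2)
    h = degree + 1                      # VC dimension of the linear model
    bounds[degree] = vc_penalized_risk(emp_risk, h, n)

print("selected degree:", min(bounds, key=bounds.get))
```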
5.
Support vector machines for histogram-based image classification   Cited by: 39 (self: 0, other: 39)
Traditional classification approaches generalize poorly on image classification tasks because of the high dimensionality of the feature space. This paper shows that support vector machines (SVM) can generalize well on difficult image classification problems where the only features are high-dimensional histograms. Heavy-tailed RBF kernels of the form $K(\mathbf{x}, \mathbf{y}) = e^{-\rho \sum_i |x_i^a - y_i^a|^b}$ with $a \le 1$ and $b \le 2$ are evaluated on the classification of images extracted from the Corel stock photo collection and shown to far outperform traditional polynomial or Gaussian radial basis function (RBF) kernels. Moreover, we observed that a simple remapping of the input $x_i \rightarrow x_i^a$ improves the performance of linear SVMs to such an extent that it makes them, for this problem, a valid alternative to RBF kernels.
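A minimal sketch of this kernel as a precomputed Gram matrix for scikit-learn's SVC is shown below; synthetic Dirichlet histograms stand in for the Corel features, which are not bundled with any library.

```python
# Sketch of the heavy-tailed kernel
#   K(x, y) = exp(-rho * sum_i |x_i^a - y_i^a|^b),  a <= 1, b <= 2,
# applied to synthetic histogram features via a precomputed Gram matrix.
import numpy as np
from sklearn.svm import SVC

def heavy_tailed_kernel(X, Y, rho=1.0, a=0.5, b=1.0):
    # The remapping x -> x^a is applied first, then a Laplacian-style RBF.
    Xa, Ya = X ** a, Y ** a
    dists = np.abs(Xa[:, None, :] - Ya[None, :, :]) ** b
    return np.exp(-rho * dists.sum(axis=-1))

rng = np.random.default_rng(0)
# Two classes of 16-bin histograms (rows sum to 1), standing in for the
# Corel image histograms used in the paper.
h0 = rng.dirichlet(np.full(16, 0.5), size=100)
h1 = rng.dirichlet(np.full(16, 2.0), size=100)
X = np.vstack([h0, h1])
y = np.array([0] * 100 + [1] * 100)

K = heavy_tailed_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```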
6.
OBJECTIVE: The study examined the effectiveness of a partial hospital treatment program combining behavioral therapy, medication, and psychosocial intervention for severe and treatment-resistant obsessive-compulsive disorder. METHODS: A total of 58 patients with a primary diagnosis of obsessive-compulsive disorder who underwent treatment in a partial hospital program were assessed at baseline, at program discharge, and at six-, 12-, and 18-month follow-ups. Obsessive-compulsive symptoms, depression, anxiety symptoms, and global functioning were rated. RESULTS: The majority of patients (71 percent) met the criterion for a successful outcome, which was a 25 percent decrease in score on the Yale-Brown Obsessive Compulsive Scale (YBOCS). Fifty-five percent finished the program with YBOCS scores of 16 or less, indicating only mild symptoms. Most of these patients sustained their improvement at six, 12, and 18 months after discharge, and many showed further improvement with continued outpatient management. CONCLUSIONS: The partial hospital treatment program for obsessive-compulsive disorder appears to be an effective intervention that should be implemented and investigated further.
7.
Large margin vs. large volume in transductive learning   Cited by: 2 (self: 0, other: 2)
We consider a large volume principle for transductive learning that prioritizes the transductive equivalence classes according to the volume they occupy in hypothesis space. We approximate volume maximization using a geometric interpretation of the hypothesis space. The resulting algorithm is defined via a non-convex optimization problem that can still be solved exactly and efficiently. We provide a bound on the test error of the algorithm and compare it to transductive SVM (TSVM) using 31 datasets.
8.
Support-Vector Networks   Cited by: 722 (self: 0, other: 722)
Cortes, Corinna; Vapnik, Vladimir. Machine Learning, 1995, 20(3): 273-297
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space, in which a linear decision surface is constructed. Special properties of the decision surface ensure the high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. Here we extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
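A minimal sketch of the idea — a polynomial input transformation plus a soft margin for non-separable data — using scikit-learn's digits set as a stand-in for the paper's OCR benchmark:

```python
# Sketch: a support-vector network on a two-group OCR-style problem.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits(n_class=2)          # a two-group problem, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

# C controls the soft margin: smaller C tolerates more training errors,
# which is what makes the non-separable case tractable.
clf = SVC(kernel="poly", degree=3, C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
print("support vectors per class:", clf.n_support_)
```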
9.
Choosing Multiple Parameters for Support Vector Machines   Cited by: 158 (self: 0, other: 158)
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search, become intractable as soon as the number of parameters exceeds two. Experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement of generalization performance.
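The paper's error estimates (e.g., span or radius/margin bounds) admit exact gradients; as a simpler stand-in with the same flavor, the sketch below tunes one RBF scale per input feature by gradient-based optimization of kernel-target alignment — a differentiable surrogate criterion chosen for this sketch, not the paper's objective.

```python
# Sketch: gradient-based tuning of many kernel parameters at once
# (one log-scale per feature), maximizing kernel-target alignment.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)[:200]
y = np.where(data.target[:200] == 1, 1.0, -1.0)
Y = np.outer(y, y)                              # ideal Gram matrix y y^T

def neg_alignment(log_gammas):
    g = np.exp(log_gammas)                      # one scale per feature
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * g).sum(-1)
    K = np.exp(-d2)
    return -(K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

# L-BFGS with finite-difference gradients over all 30 parameters at once;
# exhaustive grid search over this many parameters would be hopeless.
x0 = np.full(X.shape[1], np.log(0.01))
res = minimize(neg_alignment, x0=x0, method="L-BFGS-B")
print("alignment: %.3f -> %.3f" % (-neg_alignment(x0), -res.fun))
```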
10.
Bounds on error expectation for support vector machines   Cited by: 23 (self: 0, other: 23)
We introduce the concept of the span of support vectors (SV) and show that the generalization ability of support vector machines (SVM) depends on this new geometrical concept. We prove that the value of the span is always smaller (and can be much smaller) than the diameter of the smallest sphere containing the support vectors, which was used in previous bounds (Vapnik, 1998). We also demonstrate experimentally that the prediction of the test error given by the span is very accurate and has direct application in model selection (choice of the optimal parameters of the SVM).
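A minimal sketch of the span computation is given below, under the simplifying assumption used in parts of the span literature that all support vectors lie strictly inside the box (0 < alpha < C); in that case $S_p^2 = 1/(M^{-1})_{pp}$, where M is the SV kernel matrix bordered by a row and column of ones. The cruder classical #SV/n leave-one-out bound, which the span bound sharpens, is printed for comparison.

```python
# Sketch: spans of the support vectors via the bordered kernel matrix,
# assuming no support vector sits at the box constraint.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, cluster_std=2.0, random_state=0)
clf = SVC(kernel="rbf", gamma=0.1, C=10.0).fit(X, y)

sv = clf.support_
K_sv = rbf_kernel(X[sv], X[sv], gamma=0.1)
m = len(sv)
M = np.block([[K_sv, np.ones((m, 1))],
              [np.ones((1, m)), np.zeros((1, 1))]])
spans_sq = 1.0 / np.diag(np.linalg.inv(M))[:m]  # S_p^2 for each SV

print("mean span^2 of support vectors:", spans_sq.mean())
print("#SV/n leave-one-out bound:", m / len(X))
```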