Sort by: 16 results found; search time: 15 ms
1.
Progress in supervised neural networks (total citations: 5; self-citations: 0; citations by others: 5)
Theoretical results concerning the capabilities and limitations of various neural network models are summarized, and some of their extensions are discussed. The network models considered are divided into two basic categories: static networks and dynamic networks. Unlike static networks, dynamic networks have memory. They fall into three groups: networks with feedforward dynamics, networks with output feedback, and networks with state feedback, which are emphasized in this work. Most of the networks discussed are trained using supervised learning.
2.
3.
4.
Day P, Hush NS, Clark RJ. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 2008, 366(1862): 5-14
Mixed-valence compounds were recognized by chemists more than a century ago for their unusual colours and stoichiometries, but it was just 40 years ago that two seminal articles brought together the then available evidence. These articles laid the foundations for understanding the physical properties of such compounds and how the latter correlate with molecular and crystal structures. This introduction to a discussion meeting briefly surveys the history of mixed valence and sets in context contributions to the discussion describing current work in the field.
5.
A dynamic programming algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low-dimensional data sets. This paper enhances and extends that algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes; replacing the dynamic programming algorithm with a memoized recursive algorithm whose run time is substantially smaller for most regularization parameter values on the grid; and incorporating new data structures and data pre-processing steps that provide significant run-time improvements in practice.
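The memoization idea the abstract mentions can be illustrated generically (this is a toy sketch, not the paper's algorithm or data): a recursive penalized cost over dyadic splits of an interval, cached so each dyadic subinterval is evaluated only once.

```python
from functools import lru_cache

# Toy data: one binary label per cell of a dyadic grid (illustrative values).
labels = [1, 1, 0, 1, 0, 0, 0, 1]
PENALTY = 0.4  # hypothetical regularization penalty per split

def leaf_cost(lo, hi):
    """Misclassification cost if [lo, hi) is a single leaf (majority vote)."""
    seg = labels[lo:hi]
    ones = sum(seg)
    return min(ones, len(seg) - ones)

@lru_cache(maxsize=None)
def best_cost(lo, hi):
    """Memoized recursion: best penalized cost over all dyadic trees on [lo, hi)."""
    if hi - lo == 1:
        return leaf_cost(lo, hi)
    mid = (lo + hi) // 2  # dyadic split point
    split = best_cost(lo, mid) + best_cost(mid, hi) + PENALTY
    return min(leaf_cost(lo, hi), split)
```

Because every dyadic subinterval appears in many candidate trees, caching collapses the exponential tree search into one evaluation per subinterval.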
6.
Stack Filters are a class of non-linear filters typically used for noise suppression. Advantages of Stack Filters are their generality and the existence of efficient optimization algorithms under mean absolute error (Wendt et al., IEEE Trans. Acoust. Speech Signal Process. 34:898–910, 1986). In this paper we describe our recent efforts to use the class of Stack Filters for classification problems. This leads to a novel class of continuous-domain classifiers which we call Ordered Hypothesis Machines (OHM). We develop convex-optimization-based learning algorithms for Ordered Hypothesis Machines and highlight their relationship to Support Vector Machines and Nearest Neighbor classifiers. We report on performance on synthetic and real-world datasets, including an application to change detection in remote sensing imagery. We conclude that OHM provides a novel way to reduce the number of exemplars used in Nearest Neighbor classifiers and achieves performance competitive with the more computationally expensive K-Nearest Neighbor method.
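The threshold-decomposition structure that defines stack filters can be sketched with the median filter, the best-known stack filter (an illustrative aside, not the OHM classifier itself): threshold the window at every level, apply a positive Boolean function (majority) to each binary slice, and sum the results.

```python
def stack_median(window, max_val):
    """Median of an odd-length integer window with values in [0, max_val],
    computed by threshold decomposition: at each level t, threshold the
    window to binary, apply the majority function (a positive Boolean
    function), and sum the binary outputs over all levels."""
    out = 0
    for t in range(1, max_val + 1):
        binary = [1 if v >= t else 0 for v in window]
        out += 1 if sum(binary) > len(binary) // 2 else 0
    return out
```

Swapping the majority function for other positive Boolean functions yields the rest of the stack filter class, which is what makes the family both large and tractable to optimize.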
7.
8.
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
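Kesler's construction, which the abstract extends, is the standard embedding of an M-class linear problem into a single binary one; a minimal sketch of the classical form (the function name and toy values are illustrative, and this is not the paper's extended map):

```python
def kesler_expand(x, y, num_classes):
    """Classical Kesler construction: embed a d-dimensional sample (x, y)
    from an M-class problem into M-1 vectors of dimension M*d.  Each
    expanded vector places +x in class y's block and -x in one rival
    class's block, so a single linear separator on the expanded data
    encodes every pairwise class-score constraint for this sample."""
    d = len(x)
    expanded = []
    for j in range(num_classes):
        if j == y:
            continue
        z = [0.0] * (num_classes * d)
        z[y * d:(y + 1) * d] = x
        z[j * d:(j + 1) * d] = [-v for v in x]
        expanded.append(z)
    return expanded
```

A weight vector that classifies all expanded vectors as positive then scores the true class above every rival for the original sample.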
9.
In this paper we prove a result that is fundamental to the generalization properties of Vapnik's support vector machines and other large margin classifiers. In particular, we prove that the minimum margin over all dichotomies of k ≤ n + 1 points inside a unit ball in R^n is maximized when the points form a regular simplex on the unit sphere. We also provide an alternative proof directly in the framework of level fat shattering.
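The simplex-extremality claim can be checked numerically in R^2 with three points, where every nontrivial dichotomy is a 1-vs-2 split and the max-margin width is half the distance between the two convex hulls, i.e. half the point-to-segment distance (an illustrative check under that assumption, not taken from the paper):

```python
import math

def margin_1_vs_2(p, a, b):
    """Margin of the max-margin separator for {p} vs {a, b}: half the
    distance from p to the segment ab (the two convex hulls)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy  # closest point on the segment
    return math.hypot(px - cx, py - cy) / 2.0

def min_margin(points):
    """Minimum margin over the nontrivial dichotomies of three points."""
    p0, p1, p2 = points
    return min(margin_1_vs_2(p0, p1, p2),
               margin_1_vs_2(p1, p0, p2),
               margin_1_vs_2(p2, p0, p1))

# Regular simplex (equilateral triangle) inscribed in the unit circle.
simplex = [(math.cos(a), math.sin(a))
           for a in (math.pi / 2,
                     math.pi / 2 + 2 * math.pi / 3,
                     math.pi / 2 + 4 * math.pi / 3)]
# An arbitrary non-simplex configuration inside the unit ball.
other = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
```

For the equilateral triangle every 1-vs-2 margin is 0.75 (half the triangle's height of 1.5), while the skewed configuration does worse, consistent with the theorem.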
10.