  Paid full text   16 articles
  Free   0 articles
  Radio & electronics   3 articles
  General industrial technology   4 articles
  Metallurgical industry   1 article
  Atomic energy technology   1 article
  Automation technology   7 articles
  2012   1 article
  2010   1 article
  2008   2 articles
  2007   1 article
  2006   2 articles
  2003   3 articles
  2001   1 article
  1998   2 articles
  1996   1 article
  1993   2 articles
Sorted by: 16 results found in total (search took 15 ms)
1.
Progress in supervised neural networks   (total citations: 5; self-citations: 0; citations by others: 5)
Theoretical results concerning the capabilities and limitations of various neural network models are summarized, and some of their extensions are discussed. The network models considered are divided into two basic categories: static networks and dynamic networks. Unlike static networks, dynamic networks have memory. They fall into three groups: networks with feedforward dynamics, networks with output feedback, and networks with state feedback, which are emphasized in this work. Most of the networks discussed are trained using supervised learning.
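As a concrete illustration of the static/dynamic distinction above, the following minimal NumPy sketch contrasts a static single-layer network, whose output depends only on the current input, with a dynamic network that feeds its previous output back as an extra input and therefore has memory. The architecture, sizes, and weights are made-up toy choices, not anything taken from the surveyed models.

```python
# Toy sketch (illustrative only): a static network vs. a network with
# output feedback, the simplest of the "dynamic" categories above.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))    # input -> hidden weights (toy sizes)
W_fb = rng.normal(size=(4, 1))    # fed-back output -> hidden weights
W_out = rng.normal(size=(1, 4))   # hidden -> output weights

def static_net(x):
    """Static network: output depends only on the current input x."""
    return W_out @ np.tanh(W_in @ x)

def output_feedback_net(xs):
    """Dynamic network with output feedback: the previous output is fed
    back as an extra input, so the response depends on input history."""
    y = np.zeros((1, 1))
    outputs = []
    for x in xs:
        y = W_out @ np.tanh(W_in @ x + W_fb @ y)
        outputs.append(y.item())
    return outputs

xs = [rng.normal(size=(3, 1)) for _ in range(5)]
print(static_net(xs[0]).item())    # same input always gives the same output
print(output_feedback_net(xs))     # outputs depend on the input sequence
```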
2.
3.
To support heavy-ion nuclear physics research at HIRFL, we built a heavy-ion time-of-flight (TOF) spectrometer end station that combines mass identification, charge identification, and energy measurement. The end station consists mainly of a φ600 mm target chamber, a flight tube whose length is adjustable from 0.5 m to 3 m in 0.5 m steps, and a detection system. Mass identification is performed with the TOF method; the time resolution of the detector combination is 200 ps for α particles from a ²⁵²Cf source. Charge identification of the reaction products is carried out with a ΔE-E detector telescope or a BCS detector.
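For readers unfamiliar with TOF mass identification, the sketch below shows the underlying arithmetic: with kinetic energy E = ½mv² and velocity v = L/t over a flight path L, the mass follows as m = 2E(t/L)². The numerical values in the example are invented for illustration and are not measurements from this end station.

```python
# Back-of-the-envelope sketch of TOF mass identification (illustrative only;
# the example values below are assumptions, not this spectrometer's specs).
# From E = 1/2 * m * v**2 and v = L / t, the mass is m = 2 * E * (t / L)**2.

MEV_TO_J = 1.602176634e-13   # joules per MeV
AMU = 1.66053906660e-27      # kilograms per atomic mass unit

def mass_from_tof(energy_mev, flight_time_ns, flight_path_m):
    """Non-relativistic mass estimate from measured energy and flight time."""
    energy_j = energy_mev * MEV_TO_J
    velocity = flight_path_m / (flight_time_ns * 1e-9)
    return 2.0 * energy_j / velocity**2 / AMU   # mass in atomic mass units

# Example: a hypothetical 60 MeV fragment over a 3 m flight path,
# arriving after about 100 ns (gives roughly 13 u).
print(f"m = {mass_from_tof(60.0, 100.0, 3.0):.1f} u")
```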
4.
Mixed-valence compounds were recognized by chemists more than a century ago for their unusual colours and stoichiometries, but it was just 40 years ago that two seminal articles brought together the then-available evidence. These articles laid the foundations for understanding the physical properties of such compounds and how those properties correlate with molecular and crystal structures. This introduction to a discussion meeting briefly surveys the history of mixed valence and sets in context the contributions to the discussion describing current work in the field.
5.
A dynamic programming algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low-dimensional data sets. This paper enhances and extends that algorithm by introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, replacing the dynamic programming algorithm with a memoized recursive algorithm whose run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run-time improvements in practice.
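A minimal sketch of the memoized recursion mentioned above, under several simplifying assumptions (data scaled to [0, 1)^d, objective = misclassification count + λ × number of leaves, a fixed maximum depth, and none of the paper's data structures, pre-processing, or adaptive grid search over λ):

```python
# Memoized recursion over dyadic cells: cost(cell) is the best regularized
# cost of any dyadic subtree rooted at that cell. Memoization pays off because
# the same cell is reached through different split orders.
from functools import lru_cache
import numpy as np

def optimal_dyadic_cost(X, y, lam=1.0, max_depth=4):
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=int)
    d = X.shape[1]

    def labels_in(cell):
        lo = np.array([l for l, _ in cell])
        hi = np.array([h for _, h in cell])
        return y[np.all((X >= lo) & (X < hi), axis=1)]

    def leaf_cost(labels):
        # misclassifications of the best constant label, plus the leaf penalty
        if labels.size == 0:
            return lam
        return labels.size - np.bincount(labels).max() + lam

    @lru_cache(maxsize=None)
    def cost(cell, depth):
        labels = labels_in(cell)
        best = leaf_cost(labels)
        if depth == max_depth or labels.size <= 1:
            return best
        for j in range(d):                     # try a midpoint split in each dim
            lo, hi = cell[j]
            mid = (lo + hi) / 2.0
            left = cell[:j] + ((lo, mid),) + cell[j + 1:]
            right = cell[:j] + ((mid, hi),) + cell[j + 1:]
            best = min(best, cost(left, depth + 1) + cost(right, depth + 1))
        return best

    root = tuple((0.0, 1.0) for _ in range(d))
    return cost(root, 0)

X = np.random.default_rng(1).random((200, 2))
y = (X[:, 0] > 0.5).astype(int)
print(optimal_dyadic_cost(X, y, lam=0.5, max_depth=3))  # -> 1.0 (two pure leaves)
```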
6.
Stack Filters are a class of non-linear filters typically used for noise suppression. Advantages of Stack Filters are their generality and the existence of efficient optimization algorithms under mean absolute error (Wendt et al. in IEEE Trans. Acoust. Speech Signal Process. 34:898–910, 1986). In this paper we describe our recent efforts to use the class of Stack Filters for classification problems. This leads to a novel class of continuous-domain classifiers which we call Ordered Hypothesis Machines (OHM). We develop convex-optimization-based learning algorithms for Ordered Hypothesis Machines and highlight their relationship to Support Vector Machines and Nearest Neighbor classifiers. We report on performance on synthetic and real-world datasets, including an application to change detection in remote sensing imagery. We conclude that OHM provides a novel way to reduce the number of exemplars used in Nearest Neighbor classifiers and achieves performance competitive with the more computationally expensive K-Nearest Neighbor method.
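To make the Stack Filter idea concrete, the sketch below runs a stack filter on an integer-valued 1-D signal via threshold decomposition, using 3-input majority as the positive Boolean function, which reproduces the length-3 median filter. The signal values are an invented example; this is not the OHM classifier itself.

```python
# Stack filter via threshold decomposition: threshold the signal at every
# level, apply the same positive Boolean function (PBF) to each binary
# signal in a sliding window, and sum ("stack") the binary outputs.
import numpy as np

def stack_filter(x, boolean_fn, window=3):
    x = np.asarray(x, dtype=int)
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for m in range(1, x.max() + 1):          # threshold decomposition
        level = (xp >= m).astype(int)        # binary signal at level m
        for i in range(len(x)):              # slide the window, apply the PBF
            out[i] += boolean_fn(level[i:i + window])
    return out                               # stacking: sum over levels

def majority(bits):
    # positive Boolean function: 3-input majority (yields the median filter)
    return int(bits.sum() >= 2)

signal = [3, 3, 9, 3, 4, 4, 0, 4, 4]         # impulse noise at positions 2 and 6
print(stack_filter(signal, majority))        # -> [3 3 3 4 4 4 4 4 4]
```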
7.
8.
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that, on average, the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization, for which we prove a distribution-independent finite-sample bound on the estimation error. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with a general loss function, and we prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
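For context, the classical Pocket algorithm that Ratchet is described as modifying runs the perceptron update while keeping ("pocketing") the best weight vector seen so far. The sketch below is a simple textbook-style variant that re-evaluates the full training error after each update; it is not the paper's Ratchet or GMR, and the data are synthetic.

```python
# Pocket algorithm sketch: perceptron updates, but the returned weights are
# the best-performing ones encountered, not the final ones.
import numpy as np

def pocket(X, y, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold the bias into the weights
    w = np.zeros(Xb.shape[1])
    best_w, best_err = w.copy(), np.inf
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if y[i] * (Xb[i] @ w) <= 0:          # misclassified: perceptron update
                w = w + y[i] * Xb[i]
                err = np.sum(np.sign(Xb @ w) != y)
                if err < best_err:               # keep the best weights "in the pocket"
                    best_err, best_w = err, w.copy()
    return best_w, best_err

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=100))   # noisy, not fully separable
w, err = pocket(X, y)
print("training errors kept in the pocket:", int(err))
```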
9.
Hush, Don; Scovel, Clint. Machine Learning, 2001, 45(1): 33-44
In this paper we prove a result that is fundamental to the generalization properties of Vapnik's support vector machines and other large-margin classifiers. In particular, we prove that the minimum margin over all dichotomies of k ≤ n + 1 points inside a unit ball in R^n is maximized when the points form a regular simplex on the unit sphere. We also provide an alternative proof directly in the framework of level fat-shattering.
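A quick numerical illustration of the statement (not a proof): approximate the margin of each non-trivial dichotomy with a hard-margin linear SVM (very large C) and compare the minimum over dichotomies for a regular simplex on the unit sphere against randomly placed points in the unit ball; the simplex value should come out at least as large. Assumes scikit-learn is available; the dimension and random points are arbitrary choices for illustration.

```python
# Compare min-over-dichotomies margins for a regular simplex vs. random points.
from itertools import product
import numpy as np
from sklearn.svm import SVC

def min_margin_over_dichotomies(X):
    worst = np.inf
    for labels in product([-1, 1], repeat=len(X)):
        if len(set(labels)) < 2:
            continue                      # skip the two trivial labelings
        clf = SVC(kernel="linear", C=1e9).fit(X, labels)   # ~hard margin
        worst = min(worst, 1.0 / np.linalg.norm(clf.coef_))
    return worst

def regular_simplex(n):
    k = n + 1
    V = np.eye(k) - np.ones((k, k)) / k            # centred basis vectors
    U, s, _ = np.linalg.svd(V, full_matrices=False)
    P = U[:, :n] * s[:n]                           # coordinates in R^n
    return P / np.linalg.norm(P, axis=1, keepdims=True)   # push onto unit sphere

n = 3
rng = np.random.default_rng(0)
rand_pts = rng.normal(size=(n + 1, n))
rand_pts /= np.maximum(1.0, np.linalg.norm(rand_pts, axis=1, keepdims=True))
print("simplex:", min_margin_over_dichotomies(regular_simplex(n)))
print("random :", min_margin_over_dichotomies(rand_pts))
```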
10.