1.
A three-layer neural network (NN) with a novel adaptive architecture has been developed. The hidden layer of the network consists of slabs of single-neuron models, where neurons within a slab, but not between slabs, share the same type of activation function. The activation functions in all three layers have adaptable parameters. The network was trained with a biologically inspired, guided-annealing learning rule on a variety of medical data sets, and good training/testing classification performance was obtained on all data sets tested, comparable to that of SVM classifiers. The results show that the adaptive network architecture, inspired by the modular organization often encountered in the mammalian cerebral cortex, can benefit classification performance.
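The slab-based hidden layer described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the activation forms, the single per-slab slope parameter `a`, and all names are assumptions standing in for the paper's adaptable activation parameters.

```python
import numpy as np

# Hypothetical per-slab activations; 'a' is an adaptable slope parameter
# (the exact parameterization in the paper is not specified here).
def tanh_slab(z, a):
    return np.tanh(a * z)

def logistic_slab(z, a):
    return 1.0 / (1.0 + np.exp(-a * z))

class SlabNetwork:
    """Three-layer net whose hidden layer is split into slabs, each slab
    using one activation type, as in the architecture described above."""

    def __init__(self, d_in, slab_sizes, activations, d_out, seed=0):
        rng = np.random.default_rng(seed)
        h = sum(slab_sizes)
        self.W1 = rng.normal(0.0, 0.1, (d_in, h))
        self.W2 = rng.normal(0.0, 0.1, (h, d_out))
        self.slab_sizes = slab_sizes
        self.activations = activations
        self.a = np.ones(len(slab_sizes))  # one adaptable parameter per slab

    def forward(self, x):
        z = x @ self.W1
        parts, start = [], 0
        # Apply each slab's own activation to its segment of hidden units.
        for size, act, a in zip(self.slab_sizes, self.activations, self.a):
            parts.append(act(z[:, start:start + size], a))
            start += size
        return np.concatenate(parts, axis=1) @ self.W2

net = SlabNetwork(d_in=4, slab_sizes=[3, 3],
                  activations=[tanh_slab, logistic_slab], d_out=2)
out = net.forward(np.zeros((5, 4)))
print(out.shape)  # (5, 2)
```

Training parameters here would be `W1`, `W2`, and the per-slab `a` values; the guided-annealing learning rule itself is not reproduced.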
2.
Poirazi P, Mel BW. Neural Computation, 2000, 12(5): 1189-1205
Biophysical modeling studies have suggested that neurons with active dendrites can be viewed as linear units augmented by product terms that arise from interactions between synaptic inputs within the same dendritic subregions. However, the degree to which local nonlinear synaptic interactions could augment the memory capacity of a neuron is not known in a quantitative sense. To approach this question, we have studied the family of subsampled quadratic (SQ) classifiers: linear classifiers augmented by the best k terms from the set of K = (d² + d)/2 second-order product terms available in d dimensions. We developed an expression for the total parameter entropy, whose form shows that the capacity of an SQ classifier does not reside solely in its conventional weight values, i.e., the explicit memory used to store constant, linear, and higher-order coefficients. Rather, we identify a second type of parameter flexibility that jointly contributes to an SQ classifier's capacity: the choice as to which product terms are included in the model and which are not. We validate the form of the entropy expression using empirical studies of relative capacity within families of geometrically isomorphic SQ classifiers. Our results have direct implications for neurobiological (and other hardware) learning systems, where, in the limit of high-dimensional input spaces and low-resolution synaptic weight values, this relatively little-explored form of choice flexibility could constitute a major source of trainable model capacity.
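The feature construction behind a subsampled quadratic classifier can be sketched directly: enumerate all K = (d² + d)/2 second-order product terms, then keep only the best k. The selection score below (absolute correlation with the label) is an assumed stand-in for whatever selection rule a learning procedure would use; it is not the paper's method.

```python
import numpy as np
from itertools import combinations_with_replacement

def product_terms(X):
    """All (d^2 + d)/2 second-order products x_i * x_j with i <= j."""
    d = X.shape[1]
    pairs = list(combinations_with_replacement(range(d), 2))
    return np.column_stack([X[:, i] * X[:, j] for i, j in pairs]), pairs

rng = np.random.default_rng(0)
d, n, k = 5, 200, 4
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] * X[:, 1] + 0.5 * X[:, 2])  # toy labels for illustration

P, pairs = product_terms(X)
assert P.shape[1] == (d**2 + d) // 2  # K = 15 for d = 5

# Heuristic subsampling: score each product term by |correlation with y|
# and retain the best k of the K candidates.
scores = np.abs((P - P.mean(0)).T @ (y - y.mean())) / (P.std(0) * y.std() * n)
best = np.argsort(scores)[-k:]

# Final SQ feature set: the d linear terms plus the k chosen quadratic terms.
features = np.column_stack([X, P[:, best]])
print(features.shape)  # (200, 9)
```

The "choice flexibility" the abstract highlights corresponds to which k of the K columns end up in `best`: with k = 4 of K = 15 there are C(15, 4) = 1365 possible subsets, and that combinatorial choice is itself a store of trainable capacity alongside the explicit weights.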