1.
This paper specifies the main features of connectionist and brain-like connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of networks exploiting such structures (e.g. local receptive fields, global convergence-divergence). The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g. houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation learning, i.e. the growth of new links and, possibly, nodes, subject to brain-like topological constraints. The information-processing transforms discovered through feedback-guided generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g. letters of the alphabet, cups, apples, bananas) through generation and reweighting of transforms. These show large improvements over networks that either lack brain-like structure and/or learn by reweighting of links alone. It is concluded that brain-like structures and generation learning can significantly increase the power of connectionist models.
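The "feedback-guided reweighting of links" mentioned above can be illustrated with a minimal delta-rule sketch. This is not code from the paper: the function name, learning rate, and network size are illustrative assumptions, showing only the general idea of adjusting link weights in proportion to an error feedback signal.

```python
# Minimal sketch (assumed, not from the paper): feedback-guided reweighting
# of links via a simple delta rule on a single unit's incoming connections.
import random

random.seed(0)

def reweight(weights, inputs, target, rate=0.1):
    """Nudge each link weight to reduce the output error (delta rule)."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output                       # feedback signal
    return [w + rate * error * x for w, x in zip(weights, inputs)]

weights = [random.uniform(-0.5, 0.5) for _ in range(4)]
inputs = [1.0, 0.0, 1.0, 1.0]
for _ in range(50):                               # repeated feedback passes
    weights = reweight(weights, inputs, target=1.0)

output = sum(w * x for w, x in zip(weights, inputs))
```

After repeated passes the unit's output converges toward the target; the paper's point is that such reweighting alone is weak, and becomes far more effective once the topology it operates over is brain-like.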
2.
LEONARD UHR, Connection Science, 1990, 2(3): 179-193
A crucial dilemma is how to increase the power of connectionist networks (CN), since simply increasing the size of today's relatively small CNs often slows down and worsens learning and performance. There are three possible ways: (1) use more powerful structures; (2) increase the amount of stored information, and the power and variety of the basic processes; (3) have the network modify itself (learn, evolve) in more powerful ways. Today's connectionist networks use only a few of the many possible topological structures, handle only numerical values using only very simple basic processes, and learn only by modifying weights associated with links. This paper examines the great variety of potentially much more powerful possibilities, focusing on what appear to be the most promising: appropriate brain-like structures (e.g. local connectivity, global convergence and divergence); matching, symbol-handling, and list-manipulating capabilities; and learning by extraction-generation-discovery.
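The "local connectivity, global convergence" structure named above can be sketched as a converging pyramid in which each parent node sees only a small receptive field of the layer below, and layer widths shrink toward an apex. The layer sizes, window width, and averaging rule here are illustrative assumptions, not the paper's actual Recognition Cone design.

```python
# Hypothetical sketch of a converging, locally connected pyramid:
# each parent averages a local window (receptive field) of the child layer,
# and successive layers halve in width until a single apex node remains.

def propagate(values, field=3):
    """One converging step: each parent pools a local window of children."""
    parents = []
    for i in range(max(1, len(values) // 2)):
        lo = max(0, 2 * i - field // 2)           # local receptive field
        window = values[lo:lo + field]
        parents.append(sum(window) / len(window))
    return parents

layer = [1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]  # base "retina" of 8 nodes
depth = 0
while len(layer) > 1:                             # global convergence to apex
    layer = propagate(layer)
    depth += 1
apex = layer[0]
```

Because each node touches only a constant-size window, the number of links grows linearly with network size rather than quadratically, which is one reason such structures scale better than fully connected nets.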