  Subscription full text   4
  Free   0
Chemical industry   1
Radio/electronics   1
General industrial technology   1
Metallurgical industry   1
  1994   1
  1988   1
  1983   1
  1972   1
Sort order: 4 results found, search time 0 ms
1.
2.
The significance of the mutual exclusivity assumption for early word learning has been questioned on two counts: (1) children learn second labels for objects, which violates the assumption, and (2) evidence documenting use of mutual exclusivity comes mostly from older children. This article addresses both concerns. Use of mutual exclusivity predicts not that learning second labels is impossible but that it is harder than learning first labels. To test this, very young children were taught novel labels for objects they either could or could not already name. In Study 1, contrary to predictions, 24-month-olds learned first and second labels equally well. But in Study 2, when children had an additional word to learn, they had trouble learning second (but not first) labels. Similarly, in Study 3, 16-month-olds who were taught only one new word had trouble learning second labels. Thus, from 16 months on, mutual exclusivity helps children interpret novel words. Yet, when their processing capacity is not overly taxed, 24-month-olds can override this default assumption. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
3.
Parameter estimates for censored life data obtained with ordinary censoring methods (noninformative censoring) can be biased when the cause of censoring for a unit is related to its final life (informative censoring). An algorithm, based on linguistic-variable concepts from fuzzy set theory, is presented for obtaining estimated lifetimes for informatively censored units. These estimates are combined with the lifetimes of actual failed field units to form a complete sample for parameter estimation.
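The abstract describes the algorithm only at a high level. As an illustration of the general idea, the sketch below imputes a lifetime for each informatively censored unit from its censoring time and a linguistic assessment of how near the unit was to failure, then fits a simple exponential life model to the completed sample. The linguistic labels, the multipliers, the exponential model, and all data here are assumptions for illustration, not the paper's actual membership functions or estimation procedure.

```python
import math

# Hypothetical linguistic variable: an assessment of how near a censored
# unit was to failure at removal, mapped to a multiplier on its censoring
# time. These labels and factors are illustrative assumptions only.
LINGUISTIC_FACTOR = {
    "very near failure": 1.05,
    "near failure": 1.25,
    "far from failure": 1.75,
}

def impute_lifetime(censor_time, label):
    """Estimate an informatively censored unit's lifetime from its
    censoring time and a linguistic assessment."""
    return censor_time * LINGUISTIC_FACTOR[label]

def exponential_mle(lifetimes):
    """MLE of the mean life for an exponential model, fitted to the
    completed (observed + imputed) sample."""
    return sum(lifetimes) / len(lifetimes)

# Observed field failures plus two informatively censored units.
failures = [120.0, 150.0, 200.0]
censored = [(100.0, "near failure"), (80.0, "very near failure")]

complete = failures + [impute_lifetime(t, lab) for t, lab in censored]
theta = exponential_mle(complete)  # mean life estimate on the full sample
```

In practice the imputed values would come from defuzzifying the linguistic variable's membership functions, and the life model would typically be Weibull rather than exponential; the structure of the calculation (complete the sample, then estimate as if uncensored) is the point being sketched.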
4.
Because the matrix elements change, many of the computational benefits embodied in sparse matrix theory, and implemented in commercial LP codes for maintaining sparse matrix inverse updates, are lost for NLP. This paper reports the results of investigating the use of structural decomposition in large, sparse NLP problems using the GRG algorithm. The approach is to partition the basis matrix into block lower triangular (BLT) form. At each step of the GRG algorithm, all operations are based upon the smaller diagonal subsets of variables.

This approach led to the development of an algorithm that dynamically reorders a square matrix into block lower triangular form after a column replacement. The method is fast, showing computational time reductions of up to a factor of 10 over performing the ordering on the complete occurrence matrix, while requiring a minimal amount of computer memory: only one subset of the occurrence matrix, followed by only the condensed occurrence matrix, is needed in memory to order the modified matrix. The algorithm is applicable to any numerical method that requires structural modification of a matrix by changing the structure of one column.

An experimental GRG computer code, called GRGLSS, was developed to test the technique. Examples demonstrate significantly faster computation in the feasibility phase of the GRG algorithm.
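The paper's dynamic single-column reordering is not spelled out in the abstract. As background, the sketch below shows the standard static BLT ordering that such a method improves on: treat the occurrence (structural nonzero) pattern of the square matrix as a directed graph, find its strongly connected components with Tarjan's algorithm, and emit the components in dependency-first order, which yields a block lower triangular permutation. A zero-free diagonal is assumed (normally arranged first via a maximum matching, which this sketch skips); the example matrix is invented for illustration.

```python
from itertools import count

def blt_blocks(pattern):
    """Order a structurally nonsingular square matrix into block lower
    triangular form.

    Builds a directed graph with an edge i -> j for each off-diagonal
    structural nonzero pattern[i][j], finds strongly connected components
    with Tarjan's algorithm, and returns them in an order that makes the
    symmetrically permuted matrix block lower triangular (Tarjan emits an
    SCC only after every SCC it depends on has been emitted).
    Assumes a zero-free diagonal."""
    n = len(pattern)
    adj = [[j for j in range(n) if j != i and pattern[i][j]] for i in range(n)]
    index, low = {}, {}          # discovery order and low-link per node
    stack, on_stack = [], set()
    blocks = []                  # SCCs in BLT (dependency-first) order
    counter = count()

    def strongconnect(v):
        index[v] = low[v] = next(counter)
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:   # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            blocks.append(sorted(comp))

    for v in range(n):
        if v not in index:
            strongconnect(v)
    return blocks

# Example occurrence pattern: rows {1,2} and {3,4} are mutually coupled,
# so they form 2x2 diagonal blocks; row 0 depends on nothing.
pattern = [
    [1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
blocks = blt_blocks(pattern)  # -> [[0], [1, 2], [3, 4]]
```

The paper's contribution is avoiding exactly this full recomputation: after one column of the basis changes, only the affected subset of the occurrence matrix and its condensation are reordered, rather than rerunning the ordering on the whole matrix.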

Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号