27 results found (search time: 15 ms)
1.
In this paper, a genetic algorithm is used to help improve the tolerance of feedforward neural networks against an open fault. The proposed method does not explicitly add any redundancy to the network, nor does it modify the training algorithm. Experiments show that it can benefit both the fault tolerance and the generalisation ability of neural networks.
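The abstract does not give implementation details, so the following is only a minimal sketch of the idea under an assumed setup: a genetic algorithm evolves the weights of a small one-hidden-layer network, and fitness averages the error over the fault-free network and every single-unit open fault (one hidden unit's output clamped to zero). Data, architecture, and all parameters are illustrative, not the paper's.

```python
# Hypothetical sketch: evolving feedforward-network weights with a GA so that the
# network stays accurate when any single hidden unit fails "open" (output forced to 0).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))                 # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)       # toy target
N_HID = 8

def forward(w, x, dead=None):
    """One-hidden-layer net; 'dead' marks a hidden unit whose output is forced to 0."""
    W1 = w[:2 * N_HID].reshape(2, N_HID)
    W2 = w[2 * N_HID:3 * N_HID]
    b = w[-1]
    h = np.tanh(x @ W1)
    if dead is not None:
        h[:, dead] = 0.0                        # simulate the open fault
    return 1 / (1 + np.exp(-(h @ W2 + b)))

def fitness(w):
    """Average error over the fault-free net and every single-unit open fault."""
    errs = [np.mean((forward(w, X, d) - y) ** 2) for d in [None] + list(range(N_HID))]
    return -np.mean(errs)

def ga(pop_size=40, dim=3 * N_HID + 1, gens=200):
    pop = rng.normal(0, 1, (pop_size, dim))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0, 0.1, kids.shape)            # Gaussian mutation
        pop = np.vstack([parents, kids])
    return max(pop, key=fitness)

best = ga()
print("fault-averaged fitness:", fitness(best))
```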
2.
MIQR Active Learning on a Continuous Function and a Discontinuous Function
Active learning balances the cost of data acquisition against its usefulness for training. We select only those data points which are the most informative about the system being modelled. The MIQR (Maximum Inter-Quartile Range) criterion is defined by computing the inter-quartile range of the outputs of an ensemble of networks, and finding the input parameter values for which this is maximal. This method ensures data selection is not unduly influenced by ‘outliers’, but is principally dependent upon the ‘mainstream’ state of the ensemble. MIQR is more effective and efficient than contending methods. The algorithm automatically regulates the training threshold and the network architecture as necessary. We compare active learning methods by applying them to a continuous function and a discontinuous function. Training is more difficult for a discontinuous function than for a continuous function, and the volume of data needed for active learning is substantially less than for passive learning.
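As an illustration of the selection step only, here is a small sketch of the MIQR criterion as described above: train an ensemble, evaluate all ensemble members on a pool of candidate inputs, and query the candidate with the largest inter-quartile range of predictions. The target function, ensemble construction (bootstrap resampling of MLPRegressor models), and all sizes are assumptions for the sketch, not the authors' setup.

```python
# Illustrative sketch of MIQR active learning (not the authors' code): query the
# candidate input on which an ensemble of networks disagrees most, measured by the
# inter-quartile range of their outputs so that outliers have limited influence.
import numpy as np
from sklearn.neural_network import MLPRegressor

def target(x):                                   # assumed discontinuous test function
    return np.where(x < 0.5, np.sin(6 * x), np.sin(6 * x) + 1.0)

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, (5, 1))              # small initial training set
y_train = target(X_train).ravel()
pool = np.linspace(0, 1, 200).reshape(-1, 1)     # candidate query points

for step in range(20):
    # Train an ensemble; bootstrap resampling gives the members some diversity.
    ensemble = []
    for seed in range(5):
        idx = rng.integers(0, len(X_train), len(X_train))
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
        net.fit(X_train[idx], y_train[idx])
        ensemble.append(net)

    preds = np.stack([net.predict(pool) for net in ensemble])   # (members, candidates)
    iqr = np.percentile(preds, 75, axis=0) - np.percentile(preds, 25, axis=0)
    query = pool[np.argmax(iqr)]                 # MIQR: maximal inter-quartile range

    X_train = np.vstack([X_train, query.reshape(1, 1)])
    y_train = np.append(y_train, target(query))

print("selected", len(X_train), "training points")
```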
3.
XCS [1, 2] represents a new form of learning classifier system [3] that uses accuracy as a means of guiding fitness for selection within a Genetic Algorithm. The combination of accuracy-based selection and a dynamic niche-based deletion mechanism achieves a long sought-after goal: the reliable production, maintenance, and proliferation of the sub-population of optimally general accurate classifiers that map the problem domain [4]. Wilson [2] and Lanzi [5, 6] have demonstrated the applicability of XCS to the identification of the optimal action-chain leading to the optimum trade-off between reward distance and magnitude. However, Lanzi [6] also demonstrated that XCS has difficulty in finding an optimal solution to the long action-chain environment Woods-14 [7]. Whilst these findings have shed some light on the ability of XCS to form long action-chains, they have not provided a systematic and, above all, controlled investigation of the limits of XCS learning within multiple-step environments. In this investigation a set of confounding variables in such problems is identified. These are controlled using carefully constructed FSW environments [8, 9] of increasing length. Whilst investigations demonstrate that XCS is able to establish the optimal sub-population [O] [4] when generalisation is not used, it is shown that the introduction of generalisation imposes low bounds on the length of action-chains that can be identified and chosen between to find the optimal pathway. Where these bounds are reached, a form of over-generalisation caused by the formation of dominant classifiers can occur. This form is further investigated and the Domination Hypothesis introduced to explain its formation and preservation.
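For readers unfamiliar with "accuracy as a means of guiding fitness", the sketch below shows the standard XCS-style fitness update (after Wilson's XCS, not code from this study): a classifier whose prediction error is below a threshold is treated as fully accurate, and fitness moves towards the classifier's accuracy relative to the other members of its action set. The parameter values are typical defaults and the data structure is an assumption for illustration.

```python
# Sketch of the standard XCS accuracy-based fitness update (after Wilson's XCS):
# classifiers with error below EPS0 count as fully accurate; fitness is updated
# towards each classifier's numerosity-weighted relative accuracy in the action set.
EPS0, ALPHA, NU, BETA = 10.0, 0.1, 5.0, 0.2      # typical parameter values

def accuracy(eps):
    return 1.0 if eps < EPS0 else ALPHA * (eps / EPS0) ** -NU

def update_fitness(action_set):
    """action_set: list of dicts with 'error', 'numerosity', 'fitness' (assumed layout)."""
    kappas = [accuracy(cl["error"]) * cl["numerosity"] for cl in action_set]
    total = sum(kappas)
    for cl, k in zip(action_set, kappas):
        cl["fitness"] += BETA * (k / total - cl["fitness"])   # MAM-style update omitted

action_set = [{"error": 2.0, "numerosity": 3, "fitness": 0.5},
              {"error": 40.0, "numerosity": 1, "fitness": 0.5}]
update_fitness(action_set)
print([round(cl["fitness"], 3) for cl in action_set])
```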
4.
We address the problem of training multilayer perceptrons to instantiate a target function. In particular, we explore the accuracy of the trained network on a test set of previously unseen patterns — the generalisation ability of the trained network. We systematically evaluate alternative strategies designed to improve the generalisation performance. The basic idea is to generate a diverse set of networks, each of which is designed to be an implementation of the target function. We then have a set of trained, alternative versions — a version set. The goal is to achieve useful diversity within this set, and thus generate potential for improved generalisation performance of the set as a whole when compared to the performance of any individual version. We define this notion of useful diversity, we define a metric for it, we explore a number of ways of generating it, and we present the results of an empirical study of a number of strategies for exploiting it to achieve maximum generalisation performance. The strategies encompass statistical measures as well as a selector-net approach which proves to be particularly promising. The selector net is a form of meta-net that operates in conjunction with a version set.
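The abstract does not specify the selector-net architecture, so the following is only one plausible reading, offered as a sketch: a version set of independently trained networks (diversified by different initialisations and bootstrap resamples) plus a small meta-net that takes the versions' outputs and produces the final decision. Data, architectures, and sizes are placeholders, not those used in the paper.

```python
# Hypothetical sketch of a version set plus selector net (meta-net over version outputs).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)     # toy target function
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

# Version set: same target function, diverse versions via different initialisations
# and bootstrap resamples of the training data.
versions = []
for seed in range(5):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=seed)
    net.fit(X_tr[idx], y_tr[idx])
    versions.append(net)

# Selector net (meta-net): learns from the versions' outputs which answer to give.
meta_tr = np.column_stack([v.predict_proba(X_tr)[:, 1] for v in versions])
selector = MLPClassifier(hidden_layer_sizes=(5,), max_iter=1000, random_state=0)
selector.fit(meta_tr, y_tr)

meta_te = np.column_stack([v.predict_proba(X_te)[:, 1] for v in versions])
print("selector-net test accuracy:", selector.score(meta_te, y_te))
```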
5.
6.
Normalized Gaussian Radial Basis Function networks
Guido Bugmann, Neurocomputing, 1998, 20(1-3): 97-110
The performance of normalised RBF (NRBF) nets and standard RBF nets is compared on simple classification and mapping problems. In normalised RBF networks, the traditional roles of weights and activities in the hidden layer are switched. Hidden nodes perform a function similar to a Voronoi tessellation of the input space, and the output weights become the network's output over the partition defined by the hidden nodes. Consequently, NRBF nets lose the localised characteristics of standard RBF nets and exhibit excellent generalisation properties, to the extent that hidden nodes need to be recruited only for training data at the boundaries of class domains. Reflecting this, a new learning rule is proposed that greatly reduces the number of hidden nodes needed in classification tasks. As for mapping applications, it is shown that NRBF nets may outperform standard RBF nets and exhibit more uniform errors. In both applications, the width of the basis functions is uncritical, which makes NRBF nets easy to use.
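A tiny sketch of the normalisation that distinguishes the two models may help: a standard RBF net outputs a weighted sum of the hidden activations, whereas an NRBF net divides that sum by the total activation, so the output becomes a weighted average and each output weight dominates the region where its basis function is largest (a soft Voronoi partition). Centres, widths, and weights below are illustrative only.

```python
# Minimal sketch: standard RBF output vs normalised RBF (NRBF) output.
import numpy as np

centres = np.array([0.0, 1.0, 2.5])              # hidden-node centres
weights = np.array([1.0, -0.5, 2.0])             # output weights
width = 0.4                                      # shared basis-function width

def rbf_activations(x):
    return np.exp(-((x[:, None] - centres) ** 2) / (2 * width ** 2))

def standard_rbf(x):
    return rbf_activations(x) @ weights          # weighted sum of activations

def normalised_rbf(x):
    phi = rbf_activations(x)
    return (phi @ weights) / phi.sum(axis=1)     # weighted average: the normalisation step

x = np.linspace(-1, 4, 6)
print("standard  :", np.round(standard_rbf(x), 3))
print("normalised:", np.round(normalised_rbf(x), 3))   # stays near the nearest node's weight
```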
7.
To address the problem that current data-publishing methods cannot effectively handle the differing privacy-protection requirements of individuals, and following the principle of individual privacy autonomy, a personalised anonymous publishing model for sensitive data and a heuristic algorithm based on generalisation techniques are proposed, working from the perspective of individuals and of sensitive attribute values. Experiments on the Adult data set verify the feasibility of the algorithm. Compared with Basic Incognito and Mondrian, it incurs less information loss and performs well.
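The abstract only names the ingredients, so the following is a hedged toy sketch of the personalised-anonymisation idea rather than the paper's algorithm: every record states its own privacy requirement k, and a simple heuristic keeps generalising the quasi-identifier (here, age into ever wider bands) until each record's equivalence class is at least as large as that record demands. The data, the generalisation hierarchy, and the stopping rule are all assumptions.

```python
# Hypothetical sketch: generalisation-based anonymisation with per-record requirements.
from collections import Counter

records = [                                       # (age, sensitive value, personal k)
    (23, "flu", 2), (25, "flu", 2), (31, "HIV", 4),
    (33, "flu", 2), (35, "cancer", 4), (38, "flu", 2),
]

def generalise(age, band):
    lo = (age // band) * band
    return f"{lo}-{lo + band - 1}"                # e.g. band 10 -> "30-39"

def satisfies_all(band):
    classes = Counter(generalise(a, band) for a, _, _ in records)
    return all(classes[generalise(a, band)] >= k for a, _, k in records)

band = 5
while not satisfies_all(band):                    # heuristic: widen bands until every
    band *= 2                                     # personal requirement is met
print("chosen band width:", band)
print([(generalise(a, band), s) for a, s, _ in records])
```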
8.
The Pseudo Fisher Linear Discriminant (PFLD), based on a pseudo-inverse technique, shows a peaking behaviour of the generalisation error for training sample sizes that are about the feature size: with an increase in the training sample size, the generalisation error first decreases, reaching a minimum, then increases, reaching a maximum at the point where the training sample size is equal to the data dimensionality, and afterwards begins to decrease again. A number of ways exist to solve this problem. In this paper, it is shown that noise injection by adding redundant features to the data is similar to other regularisation techniques, and helps to improve the generalisation error of this classifier for critical training sample sizes. Received: 10 November 1998; Received in revised form: 7 January 1999; Accepted: 7 January 1999
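To make the setup concrete, here is a small sketch, under assumed toy data, of a pseudo-inverse (least-squares) discriminant trained in the critical regime where the training sample size equals the dimensionality, evaluated with and without a few appended redundant noise features. It illustrates the experimental setting described above, not the paper's experiments or results.

```python
# Sketch: pseudo-inverse Fisher/least-squares discriminant at the critical sample size,
# with optional redundant noise features appended as a regularisation-like device.
import numpy as np

rng = np.random.default_rng(3)
DIM = 30

def make_data(n_per_class):
    mean = np.full(DIM, 0.6)
    X = np.vstack([rng.normal(-mean, 1, (n_per_class, DIM)),
                   rng.normal(+mean, 1, (n_per_class, DIM))])
    y = np.hstack([-np.ones(n_per_class), np.ones(n_per_class)])
    return X, y

def pfld_error(X_tr, y_tr, X_te, y_te, n_noise=0):
    if n_noise:                                   # append redundant noise features
        X_tr = np.hstack([X_tr, rng.normal(0, 1, (len(X_tr), n_noise))])
        X_te = np.hstack([X_te, rng.normal(0, 1, (len(X_te), n_noise))])
    A_tr = np.hstack([X_tr, np.ones((len(X_tr), 1))])     # bias column
    A_te = np.hstack([X_te, np.ones((len(X_te), 1))])
    w = np.linalg.pinv(A_tr) @ y_tr               # pseudo-inverse solution (PFLD)
    return np.mean(np.sign(A_te @ w) != y_te)

X_tr, y_tr = make_data(DIM // 2)                  # training size equal to DIM: the
X_te, y_te = make_data(500)                       # critical, peaking regime
print("PFLD error, no noise features :", pfld_error(X_tr, y_tr, X_te, y_te))
print("PFLD error, 10 noise features :", pfld_error(X_tr, y_tr, X_te, y_te, n_noise=10))
```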
9.
A Bayesian selective combination method is proposed for combining multiple neural networks in nonlinear dynamic process modelling. Instead of using fixed combination weights, the probability of a particular network being the true model is used as its combination weight. The prior probability is calculated using the sum of squared errors of the individual networks on a sliding window covering the most recent sampling times. A nearest neighbour method is used to estimate each network's error for a given input data point, which is then used in calculating the combination weights for the individual networks. Forward selection and backward elimination are used to select the individual networks to be combined. In forward selection, individual networks are gradually added into the aggregated network until the aggregated network error on the original training and testing data sets cannot be further reduced. In backward elimination, all the individual networks are initially aggregated and some of them are then gradually eliminated until the aggregated network error on the original training and testing data sets cannot be further reduced. Application results demonstrate that the proposed techniques can significantly improve model generalisation and perform better than aggregating all the individual networks.
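The exact equations are not given in the abstract, so the sketch below shows one assumed form of the combination-weight step only: each network's weight is a normalised likelihood-style term computed from its squared error on the window points nearest to the new input, and the combined prediction is the weighted sum of the individual predictions. The toy "networks", the Gaussian-likelihood form, and the parameter sigma are illustrative assumptions, and forward selection / backward elimination are omitted.

```python
# Assumed sketch of probability-style combination weights from windowed errors,
# with a nearest-neighbour estimate of each network's error at the new input.
import numpy as np

rng = np.random.default_rng(4)

def combine(window_X, window_y, new_x, nets, sigma=0.1, k_nearest=1):
    """nets: list of callables mapping an input array to predictions."""
    # Nearest-neighbour estimate of each network's recent error at new_x.
    dists = np.linalg.norm(window_X - new_x, axis=1)
    nearest = np.argsort(dists)[:k_nearest]
    sse = np.array([np.sum((net(window_X[nearest]) - window_y[nearest]) ** 2)
                    for net in nets])
    # Probability-style combination weights from the windowed errors.
    likelihood = np.exp(-sse / (2 * sigma ** 2))
    weights = likelihood / likelihood.sum()
    preds = np.array([net(new_x.reshape(1, -1))[0] for net in nets])
    return float(weights @ preds), weights

# Toy "networks": imperfect models of y = sin(x0).
nets = [lambda X: np.sin(X[:, 0]),
        lambda X: np.sin(X[:, 0]) + 0.2,
        lambda X: 0.8 * np.sin(X[:, 0])]
window_X = rng.uniform(0, np.pi, (20, 1))
window_y = np.sin(window_X[:, 0])
y_hat, w = combine(window_X, window_y, np.array([1.0]), nets)
print("combined prediction:", round(y_hat, 3), "weights:", np.round(w, 3))
```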
10.