Similar Documents
19 similar documents found (search time: 15 ms).
1.
2.
Different formal learning models address different aspects of human learning. Below we compare Gold-style learning—modelling learning as a limiting process in which the learner may change its mind arbitrarily often before converging to a correct hypothesis—to learning via queries—modelling learning as a one-shot process in which the learner is required to identify the target concept with just one hypothesis. In the Gold-style model considered below, the information presented to the learner consists of positive examples for the target concept, whereas in query learning, the learner may pose a certain kind of queries about the target concept, which will be answered correctly by an oracle (called teacher). Although these two approaches seem rather unrelated at first glance, we provide characterisations of different models of Gold-style learning (learning in the limit, conservative inference, and behaviourally correct learning) in terms of query learning. Thus we describe the circumstances which are necessary to replace limit learners by equally powerful one-shot learners. Our results are valid in the general context of learning indexable classes of recursive languages. This analysis leads to an important observation, namely that there is a natural query learning type hierarchically in between Gold-style learning in the limit and behaviourally correct learning. Astonishingly, this query learning type can then again be characterised in terms of Gold-style inference.

3.
We introduce a new fault-tolerant model of algorithmic learning using an equivalence oracle and an incomplete membership oracle, in which the answers to a random subset of the learner's membership queries may be missing. We demonstrate that, with high probability, it is still possible to learn monotone DNF formulas in polynomial time, provided that the fraction of missing answers is bounded by some constant less than one. Even when half the membership queries are expected to yield no information, our algorithm will exactly identify m-term, n-variable monotone DNF formulas with an expected O(mn²) queries. The same task has been shown to require exponential time using equivalence queries alone. We extend the algorithm to handle some one-sided errors, and discuss several other possible error models. It is hoped that this work may lead to a better understanding of the power of membership queries and the effects of faulty teachers on query models of concept learning.
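To make the query model concrete, the following is a minimal, noise-free Python sketch of exact learning of monotone DNF with equivalence and membership queries in the classical Angluin style; it does not model the paper's missing-answer (incomplete membership oracle) setting, and the target formula, variable count, and oracle implementations are illustrative only.

```python
# Noise-free baseline: learn a monotone DNF exactly from equivalence and
# membership queries; the paper's fault-tolerant variant is NOT modelled here.
from itertools import product

n = 5                                    # number of Boolean variables
TARGET = [{0, 2}, {1, 3, 4}]             # hidden m-term monotone DNF (index sets)

def f(x):                                # membership oracle: evaluate the target
    return any(all(x[i] for i in t) for t in TARGET)

def eval_hyp(hyp, x):                    # evaluate the learner's hypothesis
    return any(all(x[i] for i in t) for t in hyp)

def equivalence_query(hyp):              # returns a counterexample or None
    for x in product([0, 1], repeat=n):
        if f(x) != eval_hyp(hyp, x):
            return list(x)
    return None

hypothesis = []                          # start with the empty (always-false) DNF
while True:
    cex = equivalence_query(hypothesis)
    if cex is None:
        break                            # hypothesis is exactly correct
    # cex is a positive counterexample; shrink it to a minimal positive point
    # using membership queries, one bit at a time.
    for i in range(n):
        if cex[i] == 1:
            cex[i] = 0
            if not f(cex):               # flipping bit i off loses positivity: keep it
                cex[i] = 1
    hypothesis.append({i for i in range(n) if cex[i] == 1})

print(sorted(map(sorted, hypothesis)))   # recovers the terms of TARGET
```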

4.
We present a membership query (i.e. black box interpolation) algorithm for exactly identifying the class of read-once formulas over the basis of Boolean threshold functions. We also present a catalogue of generic transformations that can be used to convert an algorithm in one learning model into an algorithm in a different model.

5.
An efficient query learning algorithm for ordered binary decision diagrams
In this paper, we propose a new algorithm that exactly learns ordered binary decision diagrams (OBDDs) with a given variable ordering via equivalence and membership queries. Our algorithm uses at most n equivalence queries and at most 2n(log₂ m + 3n) membership queries, where n is the number of nodes in the reduced OBDD for the target and m is the number of variables. The upper bound on the number of membership queries is smaller by a factor of O(m) compared with that for the previous best known algorithm proposed by [R. Gavaldà, D. Guijarro, Learning Ordered Binary Decision Diagrams, Proceedings of the 6th International Workshop on Algorithmic Learning Theory, 1995, pp. 228–238].
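As a quick illustration of the stated bounds, the snippet below simply evaluates the two upper bounds (at most n equivalence queries and at most 2n(log₂ m + 3n) membership queries) for example values of n and m; the example numbers are arbitrary.

```python
# Evaluate the query bounds stated in the abstract for illustrative n and m.
import math

def query_bounds(n_nodes, m_vars):
    eq = n_nodes                                            # equivalence queries
    mq = 2 * n_nodes * (math.log2(m_vars) + 3 * n_nodes)    # membership queries
    return eq, mq

eq, mq = query_bounds(n_nodes=50, m_vars=20)
print(f"equivalence queries <= {eq}, membership queries <= {mq:.0f}")
```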

6.
Recently, the complexity control of dynamic neural models has gained significant attention. The performance of such a process depends highly on the applied definition of model complexity. On the other hand, learning theory provides a framework for assessing the learning properties of models, including the required size of the training samples as well as the statistical confidence in the model. In this Letter, we apply the learning properties of two families of Radial Basis Function Networks (RBFNs) to introduce new complexity measures that reflect the learning properties of such neural models. Based on these complexity terms, we then define cost functions that provide a balance between the training and testing performances of the model.
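A generic sketch of the kind of complexity-penalized cost function the abstract refers to is given below; the penalty term is a standard capacity-style proxy chosen for illustration and is not the specific complexity measure derived in the Letter, and all numbers are made up.

```python
# Generic structural-risk-style cost: training error plus a penalty that grows
# with model capacity (here, the number of RBF centres) and shrinks with data.
import numpy as np

def penalized_cost(train_mse, n_centres, n_samples, lam=1.0):
    # crude capacity proxy: more centres and fewer samples -> larger penalty
    complexity = np.sqrt(n_centres * np.log(n_samples) / n_samples)
    return train_mse + lam * complexity

for k in (5, 20, 80):                        # candidate RBF network sizes
    print(k, penalized_cost(train_mse=0.10, n_centres=k, n_samples=500))
```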

7.
8.
9.
On-line learning of linear functions
We present an algorithm for the on-line learning of linear functions which is optimal to within a constant factor with respect to bounds on the sum of squared errors for a worst case sequence of trials. The bounds are logarithmic in the number of variables. Furthermore, the algorithm is shown to be optimally robust with respect to noise in the data (again to within a constant factor).
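For illustration, here is a sketch of an online linear learner of the exponentiated-gradient family, a standard example of an algorithm whose loss bounds are logarithmic in the number of variables; it is not claimed to be the exact algorithm analysed in the paper, and the data, learning rate, and target are invented.

```python
# Online linear prediction with a multiplicative (exponentiated-gradient) update
# under squared loss; dimension d, horizon T and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, T, eta = 20, 500, 0.1
u = np.zeros(d); u[3] = 1.0                   # hidden target weights
w = np.full(d, 1.0 / d)                       # learner keeps a probability vector

total_loss = 0.0
for t in range(T):
    x = rng.uniform(-1, 1, size=d)            # instance
    y = u @ x + 0.01 * rng.standard_normal()  # noisy label
    y_hat = w @ x                             # predict
    total_loss += (y_hat - y) ** 2
    w *= np.exp(-eta * 2 * (y_hat - y) * x)   # multiplicative update on the gradient
    w /= w.sum()                              # renormalise

print("cumulative squared error:", round(total_loss, 3))
```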

10.
We study several complexity parameters for first order formulas and their suitability for first order learning models. We show that the standard notion of size is not captured by the sets of parameters used in the literature, and thus they cannot give a complete characterization in terms of learnability with polynomial resources. We then identify an alternative notion of size and a simple set of parameters that are useful for first order Horn expressions. These parameters are the number of clauses in the expression, the maximum number of distinct terms in a clause, and the maximum number of literals in a clause. Matching lower bounds derived using the Vapnik–Chervonenkis dimension complete the picture, showing that these parameters are indeed crucial.
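The three parameters are easy to compute for a concrete expression; the toy Python snippet below does so for a two-clause Horn expression, using an ad-hoc clause encoding invented purely for this illustration.

```python
# Each clause is a list of literals; each literal is (predicate, tuple_of_terms).
# The parameters counted below are those named in the abstract.
expression = [
    [("parent", ("x", "y")), ("parent", ("y", "z")), ("grandparent", ("x", "z"))],
    [("parent", ("x", "y")), ("ancestor", ("x", "y"))],
]

num_clauses = len(expression)
max_terms = max(len({t for _, args in clause for t in args}) for clause in expression)
max_literals = max(len(clause) for clause in expression)

print(num_clauses, max_terms, max_literals)   # 2 clauses, 3 terms, 3 literals
```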

11.
In this paper, we study the quantum PAC learning model and give an improved lower bound on its query complexity. For a concept class with VC dimension d, the lower bound holds for accuracy ε and confidence 1−δ and involves a parameter that can be taken to be an arbitrarily small positive number. This lower bound is very close to the best lower bound known on the query complexity of the classical PAC learning model.

12.
13.
14.
15.
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results were related to very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, and general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has been possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their effective applications are highlighted. Problems that were open and have since been solved are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
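For readers unfamiliar with the setting, the following sketch shows the (1+1)-EA on the OneMax toy problem, the archetypal case analysed in the early runtime results surveyed here; on OneMax the expected number of mutations is Θ(n log n), and the problem size below is illustrative.

```python
# (1+1)-EA on OneMax: maximise the number of ones in a bit string using
# standard bit mutation (flip each bit independently with probability 1/n)
# and elitist acceptance.
import random

def one_max(bits):
    return sum(bits)                     # fitness: number of ones

n = 50
x = [random.randint(0, 1) for _ in range(n)]
evaluations = 0
while one_max(x) < n:
    y = [b ^ (random.random() < 1.0 / n) for b in x]   # mutate each bit w.p. 1/n
    evaluations += 1
    if one_max(y) >= one_max(x):         # keep the offspring if not worse
        x = y

print("optimum reached after", evaluations, "mutations")
```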

16.
17.
Random forest learning is an effective approach to single-image super-resolution; however, its decision functions are deterministic binary functions, and this hard partitioning is not the optimal choice for some image patches. To improve single-image super-resolution performance, Gaussian membership functions are used to construct the decision function at each decision node of the random forest, converting the decision function's output from the deterministic values 0 and 1 into a probability value between 0 and 1. Prediction at a leaf node is then made according to the product of the probabilities of the decision nodes along the data's partition path, and the decision parameters are learned according to the minimum empirical risk criterion, so that the random forest can better learn from different sample data. Experimental results show that, compared with current mainstream single-image super-resolution methods such as random forest learning, the proposed method improves the peak signal-to-noise ratio of the super-resolved images while keeping computational efficiency comparable to the traditional random forest learning algorithm.
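A hedged sketch of the soft-routing idea is given below: each internal node produces a membership value in (0, 1] via a Gaussian function instead of a hard 0/1 split, and a sample's weight at a leaf is the product of the node values along its path; the thresholds, widths, and feature values are illustrative, not learned as in the paper.

```python
# Soft routing through one root-to-leaf path of a tree using Gaussian
# membership values instead of hard binary decisions.
import numpy as np

def gaussian_membership(feature_value, threshold, width):
    # membership value in (0, 1]; equals 1 when the feature hits the threshold
    return np.exp(-((feature_value - threshold) ** 2) / (2.0 * width ** 2))

def soft_route(x, nodes):
    """nodes: list of (feature_index, threshold, width) along a root-to-leaf path."""
    p = 1.0
    for idx, thr, wid in nodes:
        p *= gaussian_membership(x[idx], thr, wid)
    return p

x = np.array([0.8, 0.1, 0.5])
path = [(0, 0.7, 0.2), (2, 0.4, 0.3)]     # two decision nodes on this path
print("leaf weight for this path:", soft_route(x, path))
```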

18.
Complex application domains involve difficult pattern classification problems. The state space of these problems consists of regions that lie near class separation boundaries and require the construction of complex discriminants, while for the remaining regions the classification task is significantly simpler. The motivation for developing the Supervised Network Self-Organizing Map (SNet-SOM) model is to exploit this fact in order to design computationally effective solutions. Specifically, the SNet-SOM utilizes unsupervised learning for classification in the simple regions and supervised learning for the difficult ones, in a two-stage learning process. The unsupervised learning approach is based on the Self-Organizing Map (SOM) of Kohonen. The basic SOM is modified with a dynamic node insertion/deletion process controlled by an entropy-based criterion that allows an adaptive extension of the SOM. This extension proceeds until the total number of training patterns that are mapped to neurons with high entropy (and therefore with ambiguous classification) is reduced to a size that is numerically manageable for a capable supervised model. The second learning phase (the supervised training) has the objective of constructing better decision boundaries at the ambiguous regions. At this phase, a special supervised network is trained for the computationally reduced task of performing the classification at the ambiguous regions only. The performance of the SNet-SOM has been evaluated on both synthetic data and on an ischemia detection application with data extracted from the European ST-T database. In all cases, the utilization of SNet-SOM with supervised learning based on both Radial Basis Functions and Support Vector Machines has improved the results significantly relative to those obtained with the unsupervised SOM alone, and has enhanced the scalability of the supervised learning schemes. The highly disciplined control of the generalization performance of the Support Vector Machine makes it possible to design the proper model for the particular training set.
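The entropy-based routing criterion can be sketched as follows: after unsupervised SOM training, each neuron's class-label entropy over the patterns mapped to it is computed, and patterns landing on high-entropy (ambiguous) neurons are passed to the supervised stage; the assignments, labels, and threshold below are illustrative, not taken from the paper.

```python
# Flag SOM neurons whose mapped patterns have mixed class labels (high entropy)
# and route those patterns to the supervised second stage.
import numpy as np

def label_entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# neuron_assignments[i] = index of the SOM neuron that pattern i maps to
neuron_assignments = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
labels             = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1])   # class of each pattern

ENTROPY_THRESHOLD = 0.5
ambiguous_patterns = []
for neuron in np.unique(neuron_assignments):
    mask = neuron_assignments == neuron
    if label_entropy(labels[mask]) > ENTROPY_THRESHOLD:   # ambiguous neuron
        ambiguous_patterns.extend(np.flatnonzero(mask).tolist())

print("patterns handled by the supervised network:", ambiguous_patterns)
```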

19.
Adhesively bonded joints have been extensively employed in the aeronautical and automotive industries to join thin-layer materials for developing lightweight components. To strengthen the structural integrity of joints, it is critical to estimate and improve joint failure loads effectively. To accomplish this purpose, this paper presents a novel deep neural network (DNN) model-enabled approach, and a single lap joint (SLJ) design is used to support research development and validation. The approach is innovative in the following aspects: (i) the DNN model is reinforced with a transfer learning (TL) mechanism to realise adaptive prediction on a new SLJ design, alleviating the need to re-create training samples and re-train the DNN model from scratch for that design; (ii) a fruit fly optimisation (FFO) algorithm featuring parallel computing capability is incorporated into the approach to efficiently optimise joint parameters based on joint failure load predictions. Case studies were developed to validate the effectiveness of the approach. Experimental results demonstrate that, with this approach, the number of datasets and the computational time required to re-train the DNN model for a new SLJ design were significantly reduced by 92.00% and 99.57% respectively, and the joint failure load was substantially increased by 9.96%.
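As a rough sketch of the transfer-learning step (point i), the PyTorch snippet below freezes the early layers of a surrogate DNN trained on a source SLJ design and fine-tunes only the last layer on a small dataset from the new design; the architecture, layer sizes, and data are illustrative and are not the paper's model or FFO component.

```python
# Freeze early layers of a pre-trained surrogate and fine-tune the last layer
# on a small dataset representing a new joint design.
import torch
import torch.nn as nn

source_model = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),          # inputs: e.g. joint geometry parameters
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),                     # output: predicted failure load
)
# ... assume source_model has already been trained on the source design ...

for p in source_model[:-1].parameters():  # freeze all but the last layer
    p.requires_grad = False

x_new = torch.rand(30, 4)                 # small dataset from the new design
y_new = torch.rand(30, 1)

opt = torch.optim.Adam(source_model[-1].parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):                  # brief fine-tuning of the last layer only
    opt.zero_grad()
    loss = loss_fn(source_model(x_new), y_new)
    loss.backward()
    opt.step()

print("fine-tuned loss:", loss.item())
```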
