Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper was originally motivated by the aim of explaining why a simple iterative learning control scheme works well for complicated robot dynamics with strong non-linearities, in acquiring any given desired motion over a finite or infinite time duration, or any periodic motion. To gain physical insight into the problem, a class of linear dynamical systems with specified input and output of the same dimension is treated by defining two properties: output-dissipativity and learnability. It is then shown that the former implies the latter; furthermore, for a class of linear systems with a single input and single output, the two properties are equivalent to each other, and each is also equivalent to strict positive realness of the input-output transfer function. For a class of MIMO (multiple-input, multiple-output) systems, it is possible to prove that each of these properties is equivalent to strict positive realness of the input-output transfer function matrix if it is strictly proper, or otherwise if its direct term from input to output satisfies an extra condition.
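The convergence mechanism described above can be illustrated with a minimal P-type iterative learning control sketch on a toy first-order linear plant (the plant, gain, and horizon below are our own invented example, not the paper's system):

```python
import numpy as np

# Toy plant: x[t+1] = a*x[t] + b*u[t], y[t] = c*x[t]  (values invented)
a, b, c = 0.9, 1.0, 1.0
T = 20
t = np.arange(T)
y_d = np.sin(2 * np.pi * t / T)           # desired trajectory on a finite horizon

def run(u):
    """Simulate the plant from x0 = 0; return the output trajectory."""
    x, y = 0.0, np.zeros(T)
    for k in range(T):
        y[k] = c * x
        x = a * x + b * u[k]
    return y

gamma = 0.5                                # learning gain; need |1 - gamma*c*b| < 1
u = np.zeros(T)
errors = []
for trial in range(100):
    e = y_d - run(u)
    errors.append(np.max(np.abs(e)))
    u[:-1] += gamma * e[1:]                # P-type update, shifted by one step

print(errors[0], errors[-1])
```

With the gain chosen so that |1 − γcb| < 1, the tracking error contracts from trial to trial, which is the learnability behaviour the paper relates to output-dissipativity.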

2.
3.
In this paper the problem of optimal experimental design for parameter identification of static non-linear blocks is addressed. Non-linearities are assumed to be polynomial and represented in the Vandermonde basis. The optimality problem is formulated in a set-membership context, and the cost functions to be minimized are the worst-case parameter uncertainties. Closed-form optimal input sequences are derived when the input u is allowed to vary on a given interval [u_a, u_b]. Since optimal input sequences are, in general, not invariant to base changes, results and criteria for representing polynomials with different bases, while still preserving the optimal set of input levels derived from the Vandermonde parameterization, are introduced as well. Finally, numerical results are reported showing the effectiveness of using optimal input sequences, especially when identifying block-structured dynamic models that include static non-linearities in their structure (such as Hammerstein and LPV models). In such cases the improvement achieved in the confidence of the estimates can add up to a factor of several hundred with respect to the case of randomly generated inputs.
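As a rough illustration of the Vandermonde parameterization (not the paper's closed-form optimal design), the sketch below fits a static polynomial block from a handful of input levels in [u_a, u_b]; the Chebyshev-node placement is our own stand-in for a low-worst-case-uncertainty input set:

```python
import numpy as np

# Static polynomial block y = theta_0 + theta_1*u + theta_2*u^2 on [ua, ub].
ua, ub, degree = -1.0, 1.0, 2
theta_true = np.array([0.5, -1.0, 2.0])    # invented "true" parameters

# Heuristic input levels (our choice, not the paper's optimal sequence):
# Chebyshev nodes mapped onto [ua, ub], a common low-uncertainty placement.
k = np.arange(degree + 1)
u = 0.5 * (ua + ub) + 0.5 * (ub - ua) * np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))

V = np.vander(u, degree + 1, increasing=True)   # Vandermonde regression matrix
y = V @ theta_true                               # noise-free measurements
theta_hat = np.linalg.solve(V, y)                # exactly recovers the parameters

print(theta_hat)
```

In the noise-free case any non-degenerate set of n+1 levels recovers the parameters exactly; the paper's contribution is choosing the levels that minimize the worst-case uncertainty when measurements are bounded-error.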

4.
Query learning is to learn a concept (i.e., a representation of some language) through communication with a teacher (i.e., someone who knows the concept). The purpose of this paper is to prepare a formal framework for studying polynomial-time query learnability. We introduce the necessary notation and, by using several examples, clarify notions that are needed for discussing polynomial-time query learning. This is an extended version of a part of the paper "A Formal Study of Learning via Queries", which was presented at the 17th International Colloquium on Automata, Languages, and Programming. The preparation of this paper was done while the author was visiting the Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya, and was supported in part by the ESPRIT II Basic Research Actions program of the EC under Contract No. 3075 (project ALCOM) and by a Grant in Aid for Scientific Research of the Ministry of Education, Science, and Culture of Japan under Grant-in-Aid for Co-operative Research (A) 02302047 (1990).
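A textbook-style example of learning via queries (our illustration, not taken from the paper): a monotone conjunction over n Boolean variables can be identified exactly with n membership queries to the teacher:

```python
# Toy query learner: infer a hidden monotone conjunction over n Boolean
# variables using n membership queries (a standard example; names invented).
n = 6
TARGET = {1, 3, 4}                     # hidden relevant variables (the "teacher")

def member(x):
    """Membership oracle: does assignment x satisfy the hidden conjunction?"""
    return all(x[i] for i in TARGET)

def learn(n, member):
    relevant = set()
    for i in range(n):
        # Flip variable i to 0 in the all-ones assignment; if the example
        # falls out of the concept, variable i must appear in the conjunction.
        x = [1] * n
        x[i] = 0
        if not member(x):
            relevant.add(i)
    return relevant

print(learn(n, member))
```

Polynomial-time query learnability, the paper's subject, asks when such exact identification is possible with polynomially many queries and polynomial total work.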

5.
6.
Dmitry Saulov, Calphad, 2006, 30(4): 405-414
The present study addresses the problem of predicting the properties of multicomponent systems from those of the corresponding binary systems. Two types of multicomponent polynomial models have been analysed. A probabilistic interpretation of the parameters of the polynomial model, which explicitly relates them to the Gibbs free energies of the generalised quasichemical reactions, is proposed. The presented treatment provides a theoretical justification for such parameters. A methodology for estimating the ternary interaction parameter from the binary ones is presented. The methodology provides a way in which power-series multicomponent models, where no projection is required, can be incorporated into the Calphad approach.

7.
When sensing its environment, an agent often receives information that only partially describes the current state of affairs. The agent then attempts to predict what it has not sensed, by using other pieces of information available through its sensors. Machine learning techniques can naturally aid this task, by providing the agent with the rules to be used for making these predictions. For this to happen, however, learning algorithms need to be developed that can deal with missing information in the learning examples in a principled manner, and without the need for external supervision. We investigate this problem herein.

We show how the Probably Approximately Correct semantics can be extended to deal with missing information during both the learning and the evaluation phase. Learning examples are drawn from some underlying probability distribution, but parts of them are hidden before being passed to the learner. The goal is to learn rules that can accurately recover information hidden in these learning examples. We show that for this to be done, one should first dispense with the requirement that rules always make definite predictions; "don't know" is sometimes necessitated. On the other hand, such abstentions should not be made freely, but only when sufficient information is not present for definite predictions to be made. Under this premise, we show that to accurately recover missing information, it suffices to learn rules that are highly consistent, i.e., rules that simply do not contradict the agent's sensory inputs. It is established that high consistency implies a somewhat discounted accuracy, and that this discount is, in some defined sense, unavoidable, and depends on how adversarially information is hidden in the learning examples.

Within our proposed learning model we prove that any PAC-learnable class of monotone or read-once formulas is also learnable from incomplete learning examples. By contrast, we prove that parities and monotone-term 1-decision lists, which are properly PAC learnable, are not properly learnable under the new learning model. In the process of establishing our positive and negative results, we re-derive some basic PAC learnability machinery, such as Occam's Razor, and reductions between learning tasks. We finally consider a special case of learning from partial learning examples, where some prior bias exists on the manner in which information is hidden, and show how this provides a unified view of many previous learning models that deal with missing information.

We suggest that the proposed learning model goes beyond a simple extension of supervised learning to the case of incomplete learning examples. The principled and general treatment of missing information during learning, we argue, allows an agent to employ learning entirely autonomously, without relying on the presence of an external teacher, as is the case in supervised learning. We call our learning model autodidactic to emphasize the explicit disassociation of this model from any form of external supervision.
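The "don't know" behaviour argued for above can be sketched for a single learned conjunction evaluated on partially masked examples (the rule, the None-masking encoding, and the examples are all hypothetical):

```python
# Sketch (our illustration, not the paper's algorithm): a learned conjunction
# evaluated on partially masked examples. None marks a hidden attribute; the
# rule abstains ("don't know") exactly when the visible part underdetermines it.
RULE = [0, 2]          # hypothetical rule: predict 1 iff attributes 0 and 2 are 1

def predict(example):
    vals = [example[i] for i in RULE]
    if 0 in vals:
        return 0                      # a required attribute is visibly 0
    if None in vals:
        return "don't know"           # a hidden attribute could go either way
    return 1                          # all required attributes are visibly 1

print(predict([1, None, 1]))          # enough visible information: definite 1
print(predict([0, None, None]))       # enough visible information: definite 0
print(predict([None, 1, 1]))          # underdetermined: abstain
```

Note the abstention is not free: it occurs only when no definite prediction is consistent with the visible attributes, matching the premise under which high consistency suffices.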

8.
We study the problem of PAC-learning Boolean functions with random attribute noise under the uniform distribution. We define a noisy distance measure for function classes and show that if this measure is small for a function class and an attribute noise distribution D, then the class is not learnable with respect to the uniform distribution in the presence of noise generated according to D. The noisy distance measure is then characterized in terms of Fourier properties of the function class. We use this characterization to show that the class of all parity functions is not learnable for any but very concentrated noise distributions D. On the other hand, we show that if a class is learnable with respect to the uniform distribution using a standard Fourier-based learning technique, then it is learnable with time and sample complexity also determined by the noisy distance. In fact, we show that this style of algorithm is nearly the best possible for learning in the presence of attribute noise. As an application of our results, we show how to extend such an algorithm for learning AC0 so that it handles certain types of attribute noise with relatively little impact on the running time.
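The Fourier mechanism behind the parity result can be checked exactly on a tiny instance: independent attribute noise with flip rate p attenuates the parity character χ_S by (1 − 2p)^|S| (a standard fact; the concrete numbers below are our own choice):

```python
import itertools

# Exact enumeration (tiny case) showing that independent attribute noise with
# flip rate p attenuates the parity character chi_S by (1 - 2p)^|S| -- the
# Fourier fact underlying non-learnability of parities under attribute noise.
p = 0.1
S = [0, 1, 2]                         # parity over three attributes

def chi(x):                           # chi_S(x) = (-1)^(sum of x_i for i in S)
    return (-1) ** sum(x[i] for i in S)

x = (1, 0, 1)                         # any fixed input
expect = 0.0
for e in itertools.product([0, 1], repeat=3):
    prob = 1.0
    for bit in e:
        prob *= p if bit else (1 - p)
    noisy = tuple(xi ^ ei for xi, ei in zip(x, e))
    expect += prob * chi(noisy)

attenuation = expect / chi(x)
print(attenuation)                    # equals (1 - 2p)^3 = 0.512
```

Because the attenuation decays exponentially in |S|, the signal of a large parity is driven below any polynomial sampling resolution unless the noise distribution is very concentrated.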

9.
10.
Case-based reasoning is deemed an important technology for alleviating the bottleneck of knowledge acquisition in Artificial Intelligence (AI). In case-based reasoning, knowledge is represented in the form of particular cases with an appropriate similarity measure, rather than in any form of rules. The case-based reasoning paradigm adopts the view that an AI system changes dynamically during its life-cycle, which immediately leads to learning considerations. Within the present paper, we investigate the problem of case-based learning of indexable classes of formal languages. Prior to the learning considerations, we study the problem of case-based representability and show that every indexable class is case-based representable with respect to a fixed similarity measure. Next, we investigate several models of case-based learning and systematically analyze their strengths as well as their limitations. Finally, the general approach to case-based learnability of indexable classes of formal languages is prototypically applied to so-called containment decision lists, since they seem particularly tailored to case-based knowledge processing.
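A minimal sketch of the paradigm described above, with knowledge held as stored cases plus a fixed similarity measure (the cases and the Hamming-style measure are invented for illustration, not the paper's construction over formal languages):

```python
# Minimal case-based classifier: knowledge is a set of stored cases plus a
# fixed similarity measure; a query is labelled like its most similar case.
cases = [((0, 0, 1), "accept"), ((1, 1, 0), "reject"), ((0, 1, 1), "accept")]

def similarity(a, b):
    """Fraction of agreeing positions (a Hamming-style similarity measure)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def classify(query):
    return max(cases, key=lambda case: similarity(query, case[0]))[1]

print(classify((0, 0, 0)))     # most similar stored case is (0, 0, 1)
print(classify((1, 1, 1)))
```

Case-based representability, in the paper's sense, asks when a target language can be captured exactly by such a finite case base together with a fixed similarity measure.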

11.
Scoring rules and voting trees are two broad and concisely-representable classes of voting rules; scoring rules award points to alternatives according to their position in the preferences of the voters, while voting trees are iterative procedures that select an alternative based on pairwise comparisons. In this paper, we investigate the PAC-learnability of these classes of rules. We demonstrate that the class of scoring rules, as functions from preferences into alternatives, is efficiently learnable in the PAC model. With respect to voting trees, while in general a learning algorithm would require an exponential number of samples, we show that if the number of leaves is polynomial in the size of the set of alternatives, then a polynomial training set suffices. We apply these results in an emerging theory: automated design of voting rules by learning.
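A scoring rule as a function from preference profiles to alternatives can be made concrete with Borda scores (this merely evaluates one fixed rule on an invented profile; the paper's contribution is learning the score vector itself from labelled examples):

```python
# A scoring rule maps a preference profile to a winner by awarding each
# alternative points according to its position in every voter's ranking.
# Borda uses weights (m-1, m-2, ..., 0); profile and ties rule are our choice.
profile = [["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]]   # one ranking per voter

def scoring_rule(profile, weights):
    scores = {alt: 0 for alt in profile[0]}
    for ranking in profile:
        for position, alt in enumerate(ranking):
            scores[alt] += weights[position]
    return max(sorted(scores), key=lambda alt: scores[alt])     # ties broken lexicographically

winner = scoring_rule(profile, weights=[2, 1, 0])
print(winner)
```

Here b collects 1 + 2 + 2 = 5 points against a's 3 and c's 1, so the rule selects b.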

12.
13.
This paper proposes a kind of probably approximately correct (PAC) learning framework for inferring a set of functional dependencies (FDs) from example tuples. A simple algorithm is considered that outputs the set of all FDs which hold in a set of example tuples. Let r be a relation (a set of tuples). We define the error for a set of FDs FS as the minimum of Σ_{t∈ν} P(t), where ν (ν ⊂ r) is a set such that FS holds in r − ν, and P(t) denotes the probability that tuple t is picked from r. Our attention is focused on the sample complexity, and we bound the number of example tuples required to infer a set of FDs whose error does not exceed ω with probability at least 1 − δ under an arbitrary probability distribution. Tatsuya Akutsu, Ph.D.: He is an associate professor in the Department of Computer Science, Gunma University. He received the B.E. degree in 1984, the M.E. degree in 1986 and the Dr. Eng. degree in 1989 from The University of Tokyo. From 1989 to 1994, he was with the Mechanical Engineering Laboratory, MITI, Japan. His research interests are design and analysis of algorithms, computational learning theory and bioinformatics.
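The simple algorithm mentioned in the abstract, restricted here to singleton right-hand sides and left-hand sides of at most two attributes, can be sketched as follows (the toy relation is our own):

```python
from itertools import combinations

# Output every FD X -> A (singleton right-hand side, |X| <= 2 here) that
# holds in the example tuples -- the simple algorithm the abstract analyses.
attrs = ["A", "B", "C"]
r = [(1, 10, 100), (1, 10, 100), (2, 10, 200), (3, 30, 300)]

def holds(lhs, rhs):
    """X -> A holds iff tuples agreeing on X also agree on A."""
    seen = {}
    for t in r:
        key = tuple(t[i] for i in lhs)
        if key in seen and seen[key] != t[rhs]:
            return False
        seen[key] = t[rhs]
    return True

fds = [(lhs, rhs)
       for size in (1, 2)
       for lhs in combinations(range(len(attrs)), size)
       for rhs in range(len(attrs))
       if rhs not in lhs and holds(lhs, rhs)]
print(fds)
```

On this relation, for example, A → B holds (every A-value determines one B-value) while B → A does not (B = 10 occurs with A = 1 and A = 2); the paper's question is how many sampled tuples make the output's error small with high probability.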

14.
15.
In this paper we study the computational power of polynomial-time query learning systems for several query types, and the computational complexity of a learning problem for several representation classes. As corollaries of our results, we prove some polynomial-time non-learnability results, and relate the polynomial-time learnability of some representation classes to the complexity of representation-finding problems for P/poly oracles. For example, for CIR, a representation class by logical circuits, it is shown that P(NP₁(·)) is an upper bound on the power of query learning systems for CIR, and that P(NP₁(·)) is also a lower bound on the power of query learning systems for CIR when they are used to learn a certain subclass R of CIR. It is also shown that the problem of learning CIR is P(NP(NP₁(·)))-solvable. Then, using these results, the following relations are proved: (1) If, for some A ∈ P/poly, the representation-finding problem of A is not in P(NP₁(A)), then CIR is not polynomial-time query learnable even by using queries such as membership, equivalence, subset, superset, etc. (2) On the other hand, if the above-mentioned subclass R of CIR is not polynomial-time query learnable by using subset and superset queries, then some B ∈ P/poly exists such that its representation-finding problem is not in P(NP₁(B)). This is an extended version of a part of the paper "A Formal Study of Learning via Queries", which was presented at the 17th International Colloquium on Automata, Languages, and Programming. The first author was supported in part by a Grant in Aid for Scientific Research of the Ministry of Education, Science, and Culture of Japan under Grant-in-Aid for Co-operative Research (A) 02302047 (1991). The second author was supported in part by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (project ALCOM), and by the National Science Foundation under Grant CCR89-13584, while visiting the University of California, Santa Barbara.

16.
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
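The key numerical device above, treating (x·y)^d with fractional d as a kernel even though it need not yield a positive semidefinite Gram matrix, and keeping only eigenvectors with positive eigenvalues, can be sketched as follows (random toy data; not the FERET/PIE pipeline):

```python
import numpy as np

# Fractional power polynomial "kernel" k(x, y) = (x . y)^d with 0 < d < 1:
# the Gram matrix need not be PSD, so only eigenvectors associated with
# positive eigenvalues are kept as kernel PCA features (the paper's device).
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(8, 3)))      # positive toy data so (x . y)^d is real
d = 0.8

K = (X @ X.T) ** d                        # fractional power polynomial Gram matrix
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                            # centre the kernel in feature space

eigvals, eigvecs = np.linalg.eigh(Kc)
keep = eigvals > 1e-10                    # discard non-positive directions
features = eigvecs[:, keep] * np.sqrt(eigvals[keep])
print(features.shape)
```

Centering removes at least one degree of freedom, so at most n − 1 components survive; whatever negative spectrum the indefinite "kernel" produces is simply dropped, exactly as with sigmoid kernels in practice.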

17.
Developing robust and reliable control code for autonomous mobile robots is difficult, because the interaction between a physical robot and the environment is highly complex, subject to noise and variation, and therefore partly unpredictable. This means that to date it is not possible to predict robot behaviour based on theoretical models. Instead, current methods for developing robot control code still require a substantial trial-and-error component in the software design process. This paper proposes a method of dealing with these issues by (a) establishing task-achieving sensor-motor couplings through robot training, and (b) representing these couplings through transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour. We demonstrate the viability of this approach by teaching a mobile robot to track a moving football and subsequently modelling this task using the NARMAX system identification technique.

18.
We describe an alternative construction of an existing canonical representation for definite Horn theories, the Guigues-Duquenne basis (or GD basis), which minimizes a natural notion of implicational size. We extend the canonical representation to general Horn, by providing a reduction from definite to general Horn CNF. Using these tools, we provide a new, simpler validation of the classic Horn query learning algorithm of Angluin, Frazier, and Pitt, and we prove that this algorithm always outputs the GD basis regardless of the counterexamples it receives.
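The basic saturation step behind implicational bases such as the GD basis is forward-chaining closure of a definite Horn theory; a minimal sketch (our clauses, not the paper's construction of the GD basis itself):

```python
# Forward-chaining closure for a definite Horn theory. Clauses are
# (body, head) pairs over atoms; closure(S) adds implied heads to S until a
# fixpoint is reached -- the saturation step underlying implicational bases.
clauses = [({"a"}, "b"), ({"b", "c"}, "d"), ({"d"}, "e")]

def closure(atoms):
    closed = set(atoms)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= closed and head not in closed:
                closed.add(head)
                changed = True
    return closed

print(closure({"a"}))          # a alone only yields b
print(closure({"a", "c"}))     # adding c fires the whole chain
```

Two clause sets are equivalent exactly when they induce the same closure operator; the GD basis is the canonical, implicational-size-minimal set among them.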

19.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

20.
Classical prediction error approaches for the identification of non-linear polynomial NARX/NARMAX models often yield unsatisfactory results for long-range prediction or simulation purposes, mainly due to incorrect or redundant model structure selection. The paper discusses some limitations of the standard approach and suggests two modifications: namely, a new index, based on the simulation error, is employed as the regressor selection criterion, and a pruning mechanism is introduced in the model selection algorithm. The resulting algorithm is shown to be effective in the identification of compact and robust models, generally yielding model structures closer to the correct ones. Computational issues are also discussed. Finally, the identification algorithm is tested on a long-range prediction benchmark application.
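The distinction the paper builds on, one-step prediction error versus free-run simulation error for a polynomial NARX model, can be sketched on a toy noise-free system (the system and regressor set below are our own invention; with the correct structure both errors vanish, while a wrong structure typically inflates the simulation error far more than the prediction error):

```python
import numpy as np

# Toy NARX identification: fit y[t] = theta . [y[t-1], u[t-1], y[t-1]^2] by
# least squares, then compare one-step prediction error with free-run
# simulation error (the index the paper proposes for regressor selection).
rng = np.random.default_rng(1)
N = 200
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(1, N):                           # "true" system (invented)
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1] - 0.2 * y[t - 1] ** 2

Phi = np.column_stack([y[:-1], u[:-1], y[:-1] ** 2])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

pred_err = np.mean((y[1:] - Phi @ theta) ** 2)  # one-step prediction error

ys = np.zeros(N)                                # free-run simulation:
for t in range(1, N):                           # feed back the model's own output
    ys[t] = theta @ np.array([ys[t - 1], u[t - 1], ys[t - 1] ** 2])
sim_err = np.mean((y - ys) ** 2)

print(pred_err, sim_err)
```

A redundant or wrong regressor set can keep pred_err small while sim_err blows up under feedback, which is why a simulation-error index is a sharper structure selection criterion.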


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号