Similar Documents
20 similar documents found.
1.
2.
Machine learning deals with the issue of how to build programs that improve their performance at some task through experience. Machine learning algorithms have proven to be of great practical value in a variety of application domains. They are particularly useful for (a) poorly understood problem domains where little knowledge exists for humans to develop effective algorithms; (b) domains where there are large databases containing valuable implicit regularities to be discovered; or (c) domains where programs must adapt to changing conditions. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development and maintenance tasks could be formulated as learning problems and approached in terms of learning algorithms. This paper deals with the subject of applying machine learning in software engineering. In the paper, we first provide the characteristics and applicability of some frequently utilized machine learning algorithms. We then summarize and analyze the existing work and discuss some general issues in this niche area. Finally, we offer some guidelines on applying machine learning methods to software engineering tasks and use some software development and maintenance tasks as examples to show how they can be formulated as learning problems and approached in terms of learning algorithms.

3.
Computability theoretic learning theory (machine inductive inference) typically involves learning programs for languages or functions from a stream of complete data about them and, importantly, allows mind changes as to conjectured programs. This theory takes into account algorithmicity but typically does not take into account feasibility of computational resources. This paper provides some example results and problems for three ways this theory can be constrained by computational feasibility. Considered are: the learner has memory limitations, the learned programs are desired to be optimal, and there are feasibility constraints on learning each output program as well as other constraints to minimize postponement tricks. Work supported in part by NSF Grant Number CCR-0208616 at UD.

4.
Multi-agent reinforcement learning methods suffer from several deficiencies that are rooted in the large state space of multi-agent environments. This paper tackles two deficiencies of multi-agent reinforcement learning methods: their slow learning rate, and low-quality decision-making in early stages of learning. The proposed methods are applied in a grid-world soccer game. In the proposed approach, modular reinforcement learning is applied to reduce the state space of the learning agents from exponential to linear in terms of the number of agents. The modular model proposed here includes two new modules, a partial-module and a single-module. These two new modules are effective for increasing the speed of learning in a soccer game. We also apply instance-based learning concepts to choose proper actions in states that are not experienced adequately during learning. The key idea is to use neighbouring states that have been explored sufficiently during the learning phase. The results of experiments in a grid-soccer game environment show that our proposed methods produce a higher average reward compared to the situation where the proposed method is not applied to the modular structure.
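The modular idea above can be sketched in a few lines: each module keeps a Q-table over its own small state abstraction, and an arbiter sums module Q-values to pick a joint action ("greatest mass" arbitration). All class names, states, and parameters here are illustrative assumptions, not the paper's actual soccer modules.

```python
class QModule:
    """One learning module: a Q-table over its own small state abstraction,
    so total state grows linearly rather than exponentially with agents."""
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = {}  # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def update(self, s, a, reward, s_next):
        # Standard Q-learning update within the module's local state view
        best_next = max(self.value(s_next, b) for b in self.actions)
        old = self.value(s, a)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)


def select_action(modules, local_states, actions):
    """Greatest-mass arbitration: choose the action whose Q-value,
    summed across all modules, is highest."""
    return max(actions, key=lambda a: sum(m.value(s, a)
                                          for m, s in zip(modules, local_states)))
```

Because each module only sees its own abstracted state, adding an agent adds one module rather than multiplying the joint state space.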

5.
Cooperative Multi-Agent Learning: The State of the Art
Cooperative multi-agent systems (MAS) are ones in which several agents attempt, through their interaction, to jointly solve tasks or to maximize utility. Due to the interactions among the agents, multi-agent problem complexity can rise rapidly with the number of agents or their behavioral sophistication. The challenge this presents to the task of programming solutions to MAS problems has spawned increasing interest in machine learning techniques to automate the search and optimization process. We provide a broad survey of the cooperative multi-agent learning literature. Previous surveys of this area have largely focused on issues common to specific subareas (for example, reinforcement learning (RL) or robotics). In this survey we attempt to draw from multi-agent learning work in a spectrum of areas, including RL, evolutionary computation, game theory, complex systems, agent modeling, and robotics. We find that this broad view leads to a division of the work into two categories, each with its own special issues: applying a single learner to discover joint solutions to multi-agent problems (team learning), or using multiple simultaneous learners, often one per agent (concurrent learning). Additionally, we discuss direct and indirect communication in connection with learning, plus open issues in task decomposition, scalability, and adaptive dynamics. We conclude with a presentation of multi-agent learning problem domains, and a list of multi-agent learning resources.

6.
Although many studies have investigated the effects of digital game-based learning (DGBL) on learning and motivation, its benefits have never been systematically demonstrated. In our first experiment, we sought to identify the conditions under which DGBL is most effective, by analyzing the effects of two different types of instructions (learning instruction vs. entertainment instruction). Results showed that the learning instruction elicited deeper learning than the entertainment one, without impacting negatively on motivation. In our second experiment, we showed that if learners are given regular feedback about their performance, the entertainment instruction results in deep learning. These two experiments demonstrate that a serious game environment can promote learning and motivation, provided it includes features that prompt learners to actively process the educational content.

7.
This paper explores the ways three different theoretical perspectives of the social aspects of self-regulated learning [Hadwin, A. F. (2000). Building a case for self-regulating as a socially constructed phenomenon. Unpublished doctoral dissertation, Simon Fraser University, Burnaby, BC, Canada; Hadwin, A. F., & Oshige, M. (2006). Self-regulation, co-regulation, and socially-shared regulation: Examining many faces of social in models of SRL. In A. F. Hadwin, & S. Jarvela (Chairs), Socially constructed self-regulated learning: Where social and self meet in strategic regulation of learning. Symposium conducted at the Annual Meeting of the American Educational Research Association, San Francisco, CA] have been operationalized in a computer supported learning environment called gStudy. In addition to contrasting social aspects of SRL and drawing connections with specific collaborative tools and structures, this paper explores the potential of gStudy to advance theory, research, and practice. Specifically it discusses how the utilization of differing collaborative models provides new avenues for systematically researching social aspects of SRL and their roles in collaboration.

8.
Although both online learning and kernel learning have been studied extensively in machine learning, there is limited effort in addressing the intersecting research problems of these two important topics. As an attempt to fill the gap, we address a new research problem, termed Online Multiple Kernel Classification (OMKC), which learns a kernel-based prediction function by selecting a subset of predefined kernel functions in an online learning fashion. OMKC is in general more challenging than typical online learning because both the kernel classifiers and the subset of selected kernels are unknown, and more importantly the solutions to the kernel classifiers and their combination weights are correlated. The proposed algorithms are based on the fusion of two online learning algorithms, i.e., the Perceptron algorithm that learns a classifier for a given kernel, and the Hedge algorithm that combines classifiers by linear weights. We develop stochastic selection strategies that randomly select a subset of kernels for combination and model updating, thus improving the learning efficiency. Our empirical study with 15 data sets shows promising performance of the proposed algorithms for OMKC in both learning efficiency and prediction accuracy.
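The Perceptron-plus-Hedge fusion described in this abstract can be sketched as a simple deterministic variant: one kernel Perceptron per kernel, combined by multiplicative Hedge weights. The update details and the `beta` discount below are assumptions for illustration, not the authors' exact OMKC algorithms.

```python
def omkc_deterministic(data, kernels, beta=0.8):
    """Sketch of online multiple kernel classification.
    data: list of (x, y) with y in {-1, +1};
    kernels: list of kernel functions k(x1, x2)."""
    m = len(kernels)
    weights = [1.0] * m                  # Hedge weights over the kernels
    supports = [[] for _ in range(m)]    # per-kernel Perceptron support set
    mistakes = 0
    for x, y in data:
        # Each kernel Perceptron predicts via its stored support vectors
        preds = []
        for j, k in enumerate(kernels):
            f = sum(yi * k(xi, x) for xi, yi in supports[j])
            preds.append(1 if f >= 0 else -1)
        # Weighted-majority combination of the kernel classifiers
        score = sum(w * p for w, p in zip(weights, preds))
        y_hat = 1 if score >= 0 else -1
        if y_hat != y:
            mistakes += 1
        for j, p in enumerate(preds):
            if p != y:
                weights[j] *= beta             # Hedge: discount mistaken kernels
                supports[j].append((x, y))     # Perceptron: add a support vector
    return weights, mistakes
```

Useful kernels end up with larger Hedge weights, so the combination gradually ignores kernels that keep making mistakes.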

9.
As machine learning (ML) and artificial intelligence progress, more complex tasks can be addressed, quite often by cascading or combining existing models and technologies, known as the bottom-up design. Some of those tasks are addressed by agents, which attempt to simulate or emulate higher cognitive abilities that cover a broad range of functions; hence, those agents are named cognitive agents. We formulate, implement, and evaluate such a cognitive agent, which combines learning by example with ML. The mechanisms, algorithms, and theories to be merged when training a cognitive agent to read and learn how to represent knowledge have not, to the best of our knowledge, been defined by the current state-of-the-art research. The task of learning to represent knowledge is known as semantic parsing, and we demonstrate that it is an ability that may be attained by cognitive agents using ML, and the knowledge acquired can be represented by using conceptual graphs. By doing so, we create a cognitive agent that simulates properties of "learning by example," while performing semantic parsing with good accuracy. Due to the unique and unconventional design of this agent, we first present the model and then gauge its performance, showcasing its strengths and weaknesses.

10.
This work extends studies of Angluin, Lange and Zeugmann on the dependence of learning on the hypothesis space chosen for the language class in the case of learning uniformly recursive language classes. The concepts of class-comprising (where the learner can choose a uniformly recursively enumerable superclass as the hypothesis space) and class-preserving (where the learner has to choose a uniformly recursively enumerable hypothesis space of the same class) are formulated in their study. In subsequent investigations, uniformly recursively enumerable hypothesis spaces have been considered. In the present work, we extend the above works by considering the question of whether learners can be effectively synthesized from a given hypothesis space in the context of learning uniformly recursively enumerable language classes. In our study, we introduce the concepts of prescribed learning (where there must be a learner for every uniformly recursively enumerable hypothesis space of the same class) and uniform learning (like prescribed, but the learner has to be synthesized effectively from an index of the hypothesis space). It is shown that while for explanatory learning these four types of learnability coincide, some or all are different for other learning criteria. For example, for conservative learning, all four types are different. Several results are obtained for vacillatory and behaviourally correct learning; three of the four types can be separated; however, the relation between prescribed and uniform learning remains open. It is also shown that every (not necessarily uniformly recursively enumerable) behaviourally correct learnable class has a prudent learner, that is, a learner using a hypothesis space such that the learner learns every set in the hypothesis space. Moreover, the prudent learner can be effectively built from any learner for the class.

11.
The latest Deep Learning (DL) models for detection and classification have achieved an unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience such as end-users or domain experts. In contrast, symbolic AI systems that convert concepts into rules or symbols – such as knowledge graphs – are easier to explain. However, they present lower generalization and scaling capabilities. A very important challenge is to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is by leveraging the best of both streams without obviating domain expert knowledge. In this paper, we tackle this problem by assuming the symbolic knowledge is expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric to assess the level of alignment of machine and human expert explanations. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so it serves as a sound basis for explainability. In particular, the X-NeSyL methodology involves the concrete use of two notions of explanation, at inference and training time respectively: (1) EXPLANet: Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture, a compositional convolutional neural network that makes use of symbolic representations, and (2) SHAP-Backprop, an explainable AI-informed training procedure that corrects and guides the DL process to align with such symbolic representations in the form of knowledge graphs.
We showcase the X-NeSyL methodology using the MonuMAI dataset for monument facade image classification, and demonstrate that with our approach, it is possible to improve explainability at the same time as performance.

12.
This paper introduces adaptive reinforcement learning (ARL) as the basis for a fully automated trading system application. The system is designed to trade foreign exchange (FX) markets and relies on a layered structure consisting of a machine learning algorithm, a risk management overlay and a dynamic utility optimization layer. An existing machine-learning method called recurrent reinforcement learning (RRL) was chosen as the underlying algorithm for ARL. One of the strengths of our approach is that the dynamic optimization layer makes a fixed choice of model tuning parameters unnecessary. It also allows for a risk-return trade-off to be made by the user within the system. The trading system is able to make consistent gains out-of-sample while avoiding large draw-downs.

13.
In this paper, we propose two general multiple-instance active learning (MIAL) methods, multiple-instance active learning with a simple margin strategy (S-MIAL) and multiple-instance active learning with Fisher information (F-MIAL), and apply them to active learning in localized content based image retrieval (LCBIR). S-MIAL considers the most ambiguous picture as the most valuable one, while F-MIAL utilizes the Fisher information and analyzes the value of the unlabeled pictures by assigning different labels to them. Experiments show their superior performance in LCBIR tasks.
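The simple-margin idea behind S-MIAL can be sketched generically: query the unlabeled bag whose score lies closest to the decision boundary. The bag-level scoring below (maximum instance score, per the standard multiple-instance assumption) and the function names are illustrative assumptions, not the paper's exact formulation.

```python
def simple_margin_query(unlabeled_bags, decision_fn):
    """Pick the most ambiguous bag: the one whose bag-level score is
    nearest to zero (the decision boundary).
    A bag's score is the maximum instance score, following the standard
    multiple-instance assumption that one positive instance makes a
    positive bag."""
    def bag_score(bag):
        return max(decision_fn(inst) for inst in bag)
    return min(unlabeled_bags, key=lambda bag: abs(bag_score(bag)))
```

In an active-learning loop, the selected bag would be sent to the user for labeling and the classifier retrained before the next query.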

14.
A Critical Look at Experimental Evaluations of EBL
A number of experimental evaluations of explanation-based learning (EBL) have been reported in the literature on machine learning. A close examination of the design of these experiments reveals certain methodological problems that could affect the conclusions drawn from the experiments. This article analyzes some of the more common methodological difficulties, and illustrates them using selected previous studies.

15.
Instance-Based Learning Algorithms
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
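The storage-reduction idea can be sketched in the spirit of this framework: keep an incoming instance only when the instances stored so far misclassify it. The function names and the 1-NN prediction rule are illustrative assumptions, not the paper's exact algorithms.

```python
def reduced_storage_train(stream, distance):
    """Incrementally build a concept description that stores an instance
    only if the current description misclassifies it, cutting storage
    compared with plain nearest-neighbor retention of everything."""
    stored = []
    for x, y in stream:
        if stored:
            nearest = min(stored, key=lambda s: distance(s[0], x))
            if nearest[1] == y:
                continue        # correctly classified: no need to store
        stored.append((x, y))
    return stored


def nn_predict(stored, x, distance):
    """Classify by the label of the nearest stored instance (1-NN)."""
    return min(stored, key=lambda s: distance(s[0], x))[1]
```

On well-separated data the stored set stays near the class boundaries; as the abstract notes, noisy instances are exactly the ones such a rule tends to keep, which motivates the significance-test extension.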

16.
Object tracking is one of the most important processes for object recognition in the field of computer vision. The aim is to accurately locate the target object in every frame of a video sequence. In this paper we propose a combination technique of two algorithms well-known among machine learning practitioners. Firstly, we propose a deep learning approach to automatically extract the features that will be used to represent the original images. Deep learning has been successfully applied in different computer vision applications. Secondly, object tracking can be seen as a ranking problem, since the regions of an image can be ranked according to their level of overlapping with the target object (ground truth in each video frame). During object tracking, the target position and size can change, so the algorithms have to propose several candidate regions in which the target can be found. We propose to use a preference learning approach to build a ranking function which will be used to select the bounding box that ranks higher, i.e., that will likely enclose the target object. The experimental results obtained by our method, called DPL^2 (Deep and Preference Learning), are competitive with respect to other algorithms.
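The ranking-function idea can be sketched with a minimal pairwise preference learner: from ordered pairs (better region, worse region) of feature vectors, learn linear weights so the better region scores higher, then pick the top-scoring candidate box. The linear model and perceptron-style update are illustrative assumptions standing in for the paper's actual ranking function.

```python
def train_ranker(pairs, dim, epochs=20, lr=0.1):
    """Learn weights w so that w . better > w . worse for each ordered
    pair of feature vectors (pairwise preference learning)."""
    w = [0.0] * dim
    for _ in range(epochs):
        for better, worse in pairs:
            margin = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            if margin <= 0:
                # Perceptron-style update toward satisfying the preference
                for i in range(dim):
                    w[i] += lr * (better[i] - worse[i])
    return w


def select_box(w, candidates):
    """Return the candidate region whose feature vector ranks highest."""
    return max(candidates, key=lambda x: sum(wi * xi for wi, xi in zip(w, x)))
```

At tracking time, the candidate whose features score highest under `w` is taken as the box most likely to enclose the target.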

17.
A new and original trend in the learning classifier system (LCS) framework is focused on latent learning. These new LCSs call upon classifiers with a condition part, an action part and an effect part. In psychology, latent learning is defined as learning without getting any kind of reward. In the LCS framework, this process is in charge of discovering classifiers which are able to anticipate accurately the consequences of actions under some conditions. Accordingly, the latent learning process builds a model of the dynamics of the environment. This model can be used to improve the policy learning process. This paper describes YACS, a new LCS performing latent learning, and compares it with ACS.
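The (condition, action, effect) classifier structure can be sketched as follows. The dictionary state representation, the quality measure, and its update rate are illustrative assumptions rather than YACS's or ACS's actual mechanics; the key point is that quality tracks anticipation accuracy, with no reward involved.

```python
class AnticipatoryClassifier:
    """A latent-learning classifier: a (condition, action, effect) triple
    whose quality reflects how accurately the effect part anticipates
    the next state."""
    def __init__(self, condition, action, effect):
        self.condition, self.action, self.effect = condition, action, effect
        self.quality = 0.5  # initial anticipation-accuracy estimate

    def matches(self, state):
        return all(state.get(k) == v for k, v in self.condition.items())

    def anticipate(self, state):
        # Predict the next state by applying the effect part
        next_state = dict(state)
        next_state.update(self.effect)
        return next_state

    def reinforce(self, observed_next, predicted_next, rate=0.05):
        # Move quality toward 1 on a correct anticipation, toward 0 otherwise
        target = 1.0 if observed_next == predicted_next else 0.0
        self.quality += rate * (target - self.quality)
```

A population of such classifiers forms a model of the environment's dynamics, which a separate policy-learning process can then exploit.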

18.
One problem which frequently surfaces when applying explanation-based learning (EBL) to imperfect theories is the multiple inconsistent explanation problem. The multiple inconsistent explanation problem occurs when a domain theory produces multiple explanations for a training instance, only some of which are correct. Domain theories which suffer from the multiple inconsistent explanation problem can occur in many different contexts, such as when some information is missing and must be assumed: since such assumptions can be incorrect, incorrect explanations can be constructed. This paper proposes an extension of explanation-based learning, called abductive explanation-based learning (A-EBL), which solves the multiple inconsistent explanation problem by using set covering techniques and negative examples to choose among the possible explanations of a training example. It is shown by formal analysis that A-EBL has convergence properties that are only logarithmically worse than EBL/TS, a formalization of a certain type of knowledge-level EBL; A-EBL is also proven to be computationally efficient, assuming that the domain theory is tractable. Finally, experimental results are reported on an application of A-EBL to learning correct rules for opening bids in the game of contract bridge given examples and an imperfect domain theory.
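The set-covering selection among candidate explanations can be sketched greedily: discard explanations that cover negative examples, then repeatedly pick the remaining explanation covering the most still-uncovered positives. The representation (each explanation as a set of covered examples) and the greedy strategy are illustrative assumptions, not the paper's exact A-EBL procedure.

```python
def greedy_explanation_cover(explanations, positives, negatives):
    """Choose among candidate explanations via set covering.
    explanations: dict mapping explanation name -> set of covered examples.
    Returns the chosen explanation names."""
    # Negative examples filter out inconsistent explanations outright
    consistent = {name: cov for name, cov in explanations.items()
                  if not (cov & negatives)}
    chosen, uncovered = [], set(positives)
    while uncovered:
        # Greedy step: the explanation covering the most uncovered positives
        best = max(consistent, default=None,
                   key=lambda n: len(consistent[n] & uncovered))
        if best is None or not (consistent[best] & uncovered):
            break  # nothing left helps; remaining positives stay uncovered
        chosen.append(best)
        uncovered -= consistent[best]
    return chosen
```

The greedy heuristic is the classic logarithmic-approximation strategy for set cover, which fits the abstract's note that A-EBL's convergence is only logarithmically worse than EBL/TS.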

19.
In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner's goals. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning. This article examines the motivations for adopting a goal-driven model of learning, the relationship between task goals and learning goals, the influences goals can have on learning, and the pragmatic implications of the goal-driven learning model. It presents a new integrative framework for understanding the goal-driven learning process and applies this framework to characterizing research on goal-driven learning.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号