1.
Learning with partly labeled data (total citations: 2; self-citations: 0; citations by others: 2)
Learning with partly labeled data aims at combining labeled and unlabeled data in order to boost the accuracy of a classifier. This paper outlines the two main classes of learning methods for dealing with partly labeled data: pre-labeling-based learning and semi-supervised learning. Concretely, we introduce and discuss three methods from each class. The first three are two-stage methods that first select the data to be pre-labeled and then train the classifier on the pre-labeled and the originally labeled data. The last three show how labeled and unlabeled data can be combined in a symbiotic way during training. The empirical evaluation of these methods shows that: (1) pre-labeling methods tend to be better than semi-supervised learning methods, (2) both labeled and unlabeled data have a positive effect on the classification accuracy of each of the proposed methods, (3) combining all the methods improves the accuracy, and (4) the proposed methods compare very well with state-of-the-art methods.
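A minimal sketch of the two-stage pre-labeling idea, assuming a scikit-learn classifier and a hypothetical confidence threshold (neither is specified in the abstract): train on the labeled data, pre-label the most confident unlabeled points, and retrain on the union.

```python
# Minimal self-training sketch of two-stage pre-labeling (illustrative, not the paper's exact method).
import numpy as np
from sklearn.linear_model import LogisticRegression

def pre_label_and_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Iteratively pre-label confident unlabeled points, then retrain the classifier."""
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold               # stage 1: select points to pre-label
        if not keep.any():
            break
        y_new = clf.classes_[proba[keep].argmax(axis=1)]    # pseudo-labels for the confident points
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, y_new])
        X_unlab = X_unlab[~keep]
        clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)  # stage 2: retrain on the union
    return clf
```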
2.
Fuzzy classification systems (FCS) are traditionally built from observations (data points) in an off-line, one-shot experiment. Once the learning phase is over, the classifier is no longer able to learn further knowledge from new observations, nor can it update itself in the future. This paper investigates the problem of incremental learning in the context of FCS. It shows how, in contrast to off-line or batch learning, incremental learning infers knowledge in the form of fuzzy rules from data that evolves over time. To accommodate incremental learning, appropriate mechanisms are applied in all steps of FCS construction: (1) incremental supervised clustering to generate granules in a progressive manner, (2) systematic and automatic update of fuzzy partitions, and (3) incremental feature selection using an incremental version of Fisher's interclass separability criterion. The effect of incrementality on various aspects is demonstrated via a numerical evaluation.
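As an illustration of the third mechanism, per-class counts, means, and variances can be maintained incrementally (Welford-style) and used to re-score features with a Fisher-type interclass separability criterion as samples arrive. This is a generic sketch under that assumption, not the paper's exact formulation.

```python
# Incremental per-feature Fisher separability score (illustrative sketch).
import numpy as np
from collections import defaultdict

class IncrementalFisherScore:
    def __init__(self, n_features):
        self.n = defaultdict(int)                               # per-class sample counts
        self.mean = defaultdict(lambda: np.zeros(n_features))   # per-class running means
        self.m2 = defaultdict(lambda: np.zeros(n_features))     # per-class sums of squared deviations

    def update(self, x, y):
        """Welford-style update of the class-y statistics with one sample x."""
        self.n[y] += 1
        delta = x - self.mean[y]
        self.mean[y] += delta / self.n[y]
        self.m2[y] += delta * (x - self.mean[y])

    def scores(self):
        """Between-class over within-class variance per feature (higher = more separable)."""
        total = sum(self.n.values())
        grand = sum(self.n[c] * self.mean[c] for c in self.n) / total
        between = sum(self.n[c] * (self.mean[c] - grand) ** 2 for c in self.n)
        within = sum(self.m2[c] for c in self.n)
        return between / (within + 1e-12)
```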
3.
Mohamad, Saad; Alamri, Hamad; Bouchachia, Abdelhamid. Machine Learning, 2022, 111(11): 4039-4079

Stochastic gradient descent (SGD) is a widely adopted iterative method for optimizing differentiable objective functions. In this paper, we propose and discuss a novel approach to scale up SGD in applications involving non-convex functions and large datasets. We address the bottleneck problem arising when using both shared and distributed memory. Typically, the former is bounded by limited computation resources and bandwidth, whereas the latter suffers from communication overheads. We propose a unified distributed and parallel implementation of SGD (named DPSGD) that relies on both asynchronous distribution and lock-free parallelism. By combining the two strategies into a unified framework, DPSGD is able to strike a better trade-off between local computation and communication. The convergence properties of DPSGD are studied for non-convex problems such as those arising in statistical modelling and machine learning. Our theoretical analysis shows that DPSGD leads to speed-up with respect to the number of cores and the number of workers while guaranteeing an asymptotic convergence rate of \(O(1/\sqrt{T})\), given that the number of cores is bounded by \(T^{1/4}\) and the number of workers is bounded by \(T^{1/2}\), where \(T\) is the number of iterations. The potential gains that can be achieved by DPSGD are demonstrated empirically on a stochastic variational inference problem (Latent Dirichlet Allocation) and on a deep reinforcement learning (DRL) problem (advantage actor-critic, A2C), resulting in two algorithms: DPSVI and HSA2C. Empirical results validate our theoretical findings. Comparative studies are conducted to show the performance of the proposed DPSGD against state-of-the-art DRL algorithms.

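The lock-free parallelism side of DPSGD can be pictured with a toy Hogwild-style loop in which several threads update a shared weight vector without synchronization; the asynchronous distribution across workers is omitted here. This is a rough sketch with a caller-supplied gradient function, not the paper's implementation.

```python
# Toy lock-free (Hogwild-style) parallel SGD on shared weights (illustrative only,
# not the paper's DPSGD; grad_fn is a hypothetical per-sample gradient supplied by the caller).
import numpy as np
import threading

def parallel_sgd(grad_fn, data, w, lr=0.01, n_threads=4):
    """Threads update the shared weight vector w in place without any locking."""
    def worker(shard, w):
        for x, y in shard:
            w -= lr * grad_fn(w, x, y)   # unsynchronized in-place update on the shared array
    shards = [data[i::n_threads] for i in range(n_threads)]
    threads = [threading.Thread(target=worker, args=(s, w)) for s in shards]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w

# Example gradient for one least-squares sample: f(w) = (w.x - y)^2
def sq_grad(w, x, y):
    return 2.0 * (np.dot(w, x) - y) * x
```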
4.
5.
The persistence and evolution of systems essentially depend on their adaptivity to new situations. As an expression of intelligence, adaptivity is a distinguishing quality of any system that is able to learn and adjust itself in a flexible manner to new environmental conditions; such an ability ensures self-correction over time as new events happen, new input becomes available, or new operational conditions occur. This requires self-monitoring of performance in an ever-changing environment. The relevance of adaptivity is established in numerous domains and by various real-world applications. This paper presents an incremental fuzzy rule-based system for classification purposes. Relying on fuzzy min–max neural networks, the paper explains how fuzzy rules can be generated continuously online to meet the requirements of non-stationary dynamic environments, where data arrives over long periods of time. The proposed approach is applied to an ambient intelligence application. The simulation results show its effectiveness in dealing with dynamic situations and its performance when compared with existing approaches.
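In a fuzzy min–max network, each rule corresponds to a hyperbox defined by min and max corner points, and online rule generation amounts to expanding an existing same-class hyperbox (subject to a size bound) or creating a new one. The sketch below follows that generic Simpson-style scheme with assumed parameters theta and gamma and without the overlap-contraction step; it is not the paper's exact procedure.

```python
# Simplified online fuzzy min-max rule generation (hyperbox expansion only; illustrative sketch).
import numpy as np

class FuzzyMinMax:
    def __init__(self, theta=0.3, gamma=4.0):
        self.theta, self.gamma = theta, gamma   # max hyperbox size, membership steepness (assumed values)
        self.V, self.W, self.cls = [], [], []   # min corners, max corners, class labels

    def membership(self, x, v, w):
        """Degree to which x falls inside the hyperbox [v, w] (1 inside, decaying outside)."""
        outside = np.maximum(x - w, 0.0) + np.maximum(v - x, 0.0)
        return np.mean(np.maximum(0.0, 1.0 - self.gamma * outside))

    def learn(self, x, y):
        """Expand an existing same-class hyperbox if the size bound allows, else create a new one."""
        for i, (v, w) in enumerate(zip(self.V, self.W)):
            if self.cls[i] == y:
                nv, nw = np.minimum(v, x), np.maximum(w, x)
                if np.all(nw - nv <= self.theta):          # expansion respects the size bound
                    self.V[i], self.W[i] = nv, nw
                    return
        self.V.append(x.copy()); self.W.append(x.copy()); self.cls.append(y)

    def predict(self, x):
        scores = [self.membership(x, v, w) for v, w in zip(self.V, self.W)]
        return self.cls[int(np.argmax(scores))]
```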
6.
Self-adaptation is an inherent part of any natural and intelligent system. Specifically, it is the ability of a system to reconcile its requirements, or goal of existence, with the environment it interacts with by adopting an optimal behavior. Self-adaptation becomes crucial when the environment changes dynamically over time. In this paper, we investigate the self-adaptation of classification systems at three levels: (1) natural adaptation of the base learners to changes in the environment, (2) contributive adaptation when combining the base learners in an ensemble, and (3) structural adaptation of the combination as a form of dynamic ensemble. The present study focuses on neural network classification systems and handles a special facet of self-adaptation, namely incremental learning (IL). With IL, the system self-adjusts to accommodate new and possibly non-stationary data samples arriving over time. The paper discusses various IL algorithms and shows how the three adaptation levels are inherent in the proposed system architecture and how this architecture efficiently deals with dynamic change in the presence of various types of data drift when these IL algorithms are applied.
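The three levels can be pictured with a toy dynamic weighted ensemble: each base learner updates on every sample (natural adaptation), its voting weight tracks its own recent accuracy (contributive adaptation), and a new learner is added when windowed accuracy drops, hinting at drift (structural adaptation). The thresholds and window below are made up for illustration; this is not the architecture proposed in the paper.

```python
# Toy dynamic weighted ensemble illustrating the three adaptation levels (illustrative only).
import numpy as np
from sklearn.linear_model import SGDClassifier

class DynamicEnsemble:
    def __init__(self, classes, add_threshold=0.6, window=100):
        self.classes = np.asarray(classes)
        self.add_threshold, self.window = add_threshold, window
        self.members, self.weights, self.recent = [], [], []
        self._add_member()

    def _add_member(self):
        self.members.append(SGDClassifier())     # structural adaptation: grow the ensemble
        self.weights.append(1.0)

    def predict(self, x):
        votes = {}
        for clf, w in zip(self.members, self.weights):
            if hasattr(clf, "coef_"):            # only fitted members vote
                p = clf.predict([x])[0]
                votes[p] = votes.get(p, 0.0) + w
        return max(votes, key=votes.get) if votes else self.classes[0]

    def learn(self, x, y):
        self.recent = (self.recent + [self.predict(x) == y])[-self.window:]
        for i, clf in enumerate(self.members):
            if hasattr(clf, "coef_"):
                # contributive adaptation: weight tracks each member's own accuracy
                self.weights[i] = 0.95 * self.weights[i] + 0.05 * float(clf.predict([x])[0] == y)
            clf.partial_fit([x], [y], classes=self.classes)   # natural adaptation of the base learner
        # structural adaptation: add a member when windowed accuracy drops (possible drift)
        if len(self.recent) == self.window and np.mean(self.recent) < self.add_threshold:
            self._add_member()
            self.recent = []
```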
7.
8.