Foundation item: National Key Basic Research Development Program of China (973 Program) (2013CB329502); National Natural Science Foundation of China (61379101)
Received: 2015-06-03
Revised: 2015-11-23

Laplacian Multi Layer Extreme Learning Machine
DING Shi-Fei, ZHANG Nan, SHI Zhong-Zhi. Laplacian Multi Layer Extreme Learning Machine[J]. Journal of Software, 2017, 28(10): 2599-2610
Authors:DING Shi-Fei  ZHANG Nan  SHI Zhong-Zhi
Affiliation: School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Extreme learning machine (ELM) is not only an effective classifier for supervised learning but can also be applied to semi-supervised learning. However, semi-supervised ELM (SS-ELM), like Laplacian smooth twin support vector machine (Lap-STSVM), is a shallow learning algorithm. Deep learning can approximate complicated functions and alleviates the local-minima problem of earlier multi-layer neural network algorithms, and it has attracted wide attention in machine learning. Multi layer extreme learning machine (ML-ELM) combines the ideas of deep learning and ELM: it stacks ELM-based auto-encoders (ELM-AE) to build a multi-layer neural network, so it not only approximates complicated functions but also needs no iteration during training and therefore learns efficiently. We introduce the manifold regularization framework into ML-ELM and propose Laplacian ML-ELM (Lap-ML-ELM). However, ELM-AE does not handle the over-fitting problem well; to address this, we introduce weight uncertainty into ELM-AE and propose weight uncertainty ELM-AE (WU-ELM-AE), which learns more robust features. Finally, building on these two algorithms, we propose weight uncertainty Laplacian ML-ELM (WUL-ML-ELM), which stacks WU-ELM-AE to construct a deep network and uses the manifold regularization framework to obtain the output weights. Lap-ML-ELM and WUL-ML-ELM achieve clearly higher classification accuracy than SS-ELM without much additional time cost. Experimental results show that both Lap-ML-ELM and WUL-ML-ELM are effective semi-supervised learning algorithms.
Keywords:extreme learning machine  semi-supervised learning  multi layer extreme learning machine  manifold regularization  weight uncertainty
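The abstract describes solving the output weights of a semi-supervised ELM under a manifold (graph Laplacian) penalty. As a rough illustration of that output-layer solve, here is a minimal NumPy sketch on a toy two-blob task. This is not the paper's implementation: the ELM-AE stacking and weight-uncertainty components are omitted, and the function names, binary kNN graph weights, and regularization constants are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random-feature hidden layer with a sigmoid activation (weights never trained)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian L = D - A of a symmetrized kNN graph
    with binary edge weights (a common simplification of the heat kernel)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip position 0 (the point itself)
        A[i, nbrs] = 1.0
    A = np.maximum(A, A.T)                  # symmetrize the adjacency
    return np.diag(A.sum(axis=1)) - A

def lap_elm_weights(H, Y, labeled, lam=1e-2, gamma=1e-2):
    """Closed-form manifold-regularized output weights:
    beta = (H'JH + gamma*H'LH + lam*I)^(-1) H'JY,
    where the diagonal mask J zeroes out the unlabeled rows."""
    J = np.diag(labeled.astype(float))
    L = knn_laplacian(H)
    A = H.T @ J @ H + gamma * H.T @ L @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ J @ Y)

# Toy semi-supervised task: two Gaussian blobs, only 4 of 40 points labeled.
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.repeat([0, 1], 20)
Y = np.eye(2)[y]                        # one-hot targets
labeled = np.zeros(40, dtype=bool)
labeled[[0, 1, 20, 21]] = True          # tiny labeled subset

W = rng.normal(size=(2, 30))            # random input weights
b = rng.normal(size=30)
H = elm_hidden(X, W, b)
beta = lap_elm_weights(H, Y, labeled)
pred = (H @ beta).argmax(axis=1)
```

The Laplacian term `gamma * H.T @ L @ H` pushes nearby points (labeled or not) toward the same output, which is what lets the four labels propagate across each blob; with `gamma = 0` the solve degenerates to ridge regression on the labeled rows alone.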