1.
Facial expressions are one of the most important characteristics of human behaviour, and they are widely used in human-computer interaction applications. To classify facial emotions, different feature extraction methods are combined with machine learning techniques. In supervised learning, additional information about the distribution of the data can be supplied by data points that do not belong to any of the classes; such points are known as Universum data. In this work, we use Universum data to perform multiclass classification of facial emotions from human facial images. Because existing Universum-based models suffer from high training cost, we propose an iterative Universum twin support vector machine (IUTWSVM) using Newton's method. IUTWSVM gives good generalization performance at a lower computational cost, and solving its optimization problem requires no optimization toolbox. Further, improper selection of Universum points degrades the performance of the model, so we propose a novel scheme for generating better Universum based on the information entropy of the data. To check the effectiveness of the proposed IUTWSVM, numerical experiments are performed on benchmark real-world datasets. For multiclass classification of facial emotions, the performance of IUTWSVM is compared with existing algorithms using different feature extraction techniques. The proposed algorithm shows better generalization performance with lower training cost in both binary and multiclass classification problems.
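The abstract does not spell out the entropy-based Universum generation scheme. A common baseline is to average pairs of samples drawn from different classes and keep only the candidates whose class membership is most ambiguous; the sketch below (Python, assuming scikit-learn is available; the k-NN probability estimate and the function names are illustrative assumptions, not the authors' exact procedure) shows that idea.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def generate_universum(X, y, n_candidates=500, keep=200, k=10, seed=0):
    """Average pairs of samples from different classes, then keep the
    candidates whose estimated class distribution has the highest entropy."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    cands = []
    for _ in range(n_candidates):
        c1, c2 = rng.choice(classes, size=2, replace=False)
        x1 = X[rng.choice(np.flatnonzero(y == c1))]
        x2 = X[rng.choice(np.flatnonzero(y == c2))]
        cands.append(0.5 * (x1 + x2))          # midpoint of a cross-class pair
    cands = np.asarray(cands)

    # Estimate class probabilities of each candidate with a k-NN model
    # trained on the labelled data, then score candidates by entropy.
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    proba = knn.predict_proba(cands)
    entropy = -np.sum(proba * np.log(np.clip(proba, 1e-12, None)), axis=1)
    return cands[np.argsort(entropy)[::-1][:keep]]  # most ambiguous candidates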
2.
3.
Metric learning improves the accuracy of classification and clustering by characterizing the distances between samples more faithfully. When learning the metric matrix A, GMML (Geometric Mean Metric Learning) makes the distances between same-class points as small as possible and the distances between different-class points as large as possible under that metric. However, the training samples GMML learns from are all target-class data; the abundant non-target data from the same domain that exist in practice, i.e. Universum data, are left unused, which wastes information. To address this, a new metric learning algorithm is proposed: GMML with Universum learning (U-GMML). U-GMML seeks a new metric matrix A under which the distances between same-class points are as small as possible, the distances between different-class points are as large as possible, and the distances between Universum data and the target-class data are as large as possible, so that the learned metric matrix A is more favorable for classification. Experimental results on real-world datasets verify the effectiveness of the algorithm.
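GMML has a closed-form solution: the learned metric A is the geometric mean of the inverse similarity scatter and the dissimilarity scatter. One plausible reading of the U-GMML objective is to fold Universum-to-target difference vectors into the dissimilarity scatter so that those distances are also pushed apart. The sketch below (Python with NumPy/SciPy; the weight lam on the Universum term is an assumption, not the paper's exact formulation) illustrates this.

import numpy as np
from scipy.linalg import sqrtm, inv

def scatter(diffs):
    """Sum of outer products d d^T over a set of difference vectors."""
    return diffs.T @ diffs

def u_gmml(X, y, U, lam=1.0, reg=1e-6):
    # Similarity scatter S: differences between same-class pairs.
    # Dissimilarity scatter D: differences between different-class pairs,
    # plus lam-weighted differences between Universum and target points.
    n, d = X.shape
    same, diff = [], []
    for i in range(n):
        for j in range(i + 1, n):
            (same if y[i] == y[j] else diff).append(X[i] - X[j])
    S = scatter(np.asarray(same)) + reg * np.eye(d)
    D = scatter(np.asarray(diff))
    D += lam * scatter((U[:, None, :] - X[None, :, :]).reshape(-1, d))
    D += reg * np.eye(d)

    # GMML closed form: A is the geometric mean of S^{-1} and D,
    # A = S^{-1/2} (S^{1/2} D S^{1/2})^{1/2} S^{-1/2}.
    S_half = np.real(sqrtm(S))
    S_half_inv = inv(S_half)
    A = S_half_inv @ np.real(sqrtm(S_half @ D @ S_half)) @ S_half_inv
    return (A + A.T) / 2  # symmetrize against numerical drift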
4.
5.
Classification is an important branch of machine learning, and classification with few labeled samples and classification of high-dimensional data are hot topics in recent research. Traditional semi-supervised methods can make effective use of labeled or unlabeled samples, but they ignore related data that belong to no target class, i.e. the Universum. A semi-supervised classification algorithm that exploits the Universum, built on linear regression and subspace learning models, combines the advantages of traditional semi-supervised methods and Universum-based methods, and significantly improves classification of high-dimensional data without requiring additional labeled data. Classification results on both simulated and real data verify the effectiveness of the algorithm.
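The abstract names the ingredients (linear regression, subspace learning, Universum) without giving the objective. One common way such pieces fit together is a regularized least-squares classifier in which labeled points are regressed onto one-hot targets and Universum points onto a neutral target; the closed-form ridge solution below (Python/NumPy; the neutral-target treatment and the weight mu are assumptions for illustration, not the paper's model) sketches that combination.

import numpy as np

def fit_universum_ridge(X, Y, U, mu=0.1, lam=1e-2):
    """X: (n,d) labeled data, Y: (n,c) one-hot targets, U: (m,d) Universum.
    Universum rows are regressed onto the neutral target 1/c for every class,
    so the learned projection W keeps them away from any class corner."""
    n, d = X.shape
    c = Y.shape[1]
    T_u = np.full((len(U), c), 1.0 / c)           # neutral target for Universum
    A = X.T @ X + mu * U.T @ U + lam * np.eye(d)  # regularized normal equations
    b = X.T @ Y + mu * U.T @ T_u
    return np.linalg.solve(A, b)                  # (d, c) projection / regressor

def predict(W, X_test):
    return np.argmax(X_test @ W, axis=1)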
6.
A classifier's model parameters directly affect its classification results. For the model-parameter selection problem in the Universum SVM algorithm, which introduces samples belonging to no class, particle swarm optimization (PSO) is adopted to optimize the parameters. The method is conceptually simple, computationally efficient, relatively insensitive to changes in problem dimensionality, and can select several parameters simultaneously. In addition, the choice of the particle fitness function is a key issue in PSO; considering the unbiasedness of the k-fold cross-validation estimate, the cross-validation error is used as the fitness value for evaluating particles. Experiments on tongue-image samples compare the recognition accuracy on test samples before and after parameter selection, and the results verify the effectiveness of the algorithm.
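A minimal PSO over the hyperparameters, with k-fold cross-validation error as the particle fitness, can be sketched as follows (Python; since scikit-learn has no Universum SVM, a standard SVC stands in for the fitness evaluation, and the search ranges, inertia, and acceleration weights are illustrative assumptions; a real U-SVM trainer would also expose the Universum penalty as a searched parameter).

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def cv_error(params, X, y, k=5):
    """Fitness of one particle position: k-fold cross-validation error."""
    C, gamma = np.exp(params)                  # search in log space
    model = SVC(C=C, gamma=gamma)
    return 1.0 - cross_val_score(model, X, y, cv=k).mean()

def pso(fitness, dim=2, n_particles=20, iters=30, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Usage: best_params, best_err = pso(lambda p: cv_error(p, X, y))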
7.
8.
Recently, Universum data, which does not belong to any class of the training data, has been used to train better classifiers. In this paper, we propose a boosting algorithm called UUAdaBoost that improves the classification performance of AdaBoost with Universum data. UUAdaBoost chooses a function by minimizing the loss over labeled data and Universum data; the cost function is minimized by a greedy, stagewise, functional gradient procedure, so each training stage is fast and efficient. Standard AdaBoost weights labeled samples during training iterations, while UUAdaBoost gives an explicit weighting scheme for Universum samples as well. In addition, this paper describes practical conditions for the effectiveness of Universum learning, based on an analysis of the distribution of ensemble predictions over the training samples. Experiments on handwritten digit classification and gender classification are presented. As the experimental results show, the proposed method can outperform standard AdaBoost when proper Universum data is selected.
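The abstract describes the algorithm only at the level of a stagewise fit with an explicit weighting scheme for Universum samples. One plausible realization, sketched below in Python (scikit-learn decision stumps as weak learners; treating each Universum point as a pair of oppositely labeled pseudo-examples is an assumption, not the authors' exact scheme), penalizes confident ensemble predictions on the Universum.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def uadaboost(X, y, U, rounds=50, cu=0.5):
    """y in {-1,+1}. Each Universum point enters as two pseudo-examples with
    labels +1 and -1, so the combined exponential loss exp(-F) + exp(+F)
    penalizes the ensemble for being confident on Universum data."""
    Xa = np.vstack([X, U, U])
    ya = np.concatenate([y, np.ones(len(U)), -np.ones(len(U))])
    base = np.concatenate([np.ones(len(y)), cu * np.ones(2 * len(U))])
    F = np.zeros(len(ya))                        # current ensemble scores
    learners, alphas = [], []
    for _ in range(rounds):
        w = base * np.exp(-ya * F)               # exponential-loss weights
        w /= w.sum()
        stump = DecisionTreeClassifier(max_depth=1).fit(Xa, ya, sample_weight=w)
        h = stump.predict(Xa)
        err = np.clip(w[h != ya].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        F += alpha * h
        learners.append(stump)
        alphas.append(alpha)
    return learners, np.array(alphas)

def predict(learners, alphas, X_test):
    scores = sum(a * m.predict(X_test) for a, m in zip(alphas, learners))
    return np.sign(scores)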