Compressing arrays of classifiers using Volterra-neural network: application to face recognition
Authors: M. Rubiolo, G. Stegmayer, D. Milone
Affiliation: 1. CONICET, CIDISI-UTN-FRSF, Lavaise 610, 3000, Santa Fe, Argentina
2. CONICET, SINC(i)-FICH-UNL, Ciudad Universitaria, RN 168 Km. 472.4, 3000, Santa Fe, Argentina
Abstract: Model compression is required when large models are used, for example, for a classification task, but transmission, space, time, or computing constraints must be met. Multilayer perceptron (MLP) models have traditionally been used as classifiers. Depending on the problem, they may need a large number of parameters (neuron functions, weights, and biases) to achieve acceptable performance. This work proposes a technique for compressing an array of MLPs through the weights of a Volterra-neural network (Volterra-NN) while maintaining its classification performance. It is shown that several MLP topologies can be effectively compressed into the first-, second-, and third-order Volterra-NN outputs. The results show that these outputs can be used to build an array of Volterra-NNs that needs significantly fewer parameters than the original array of MLPs while retaining the same high accuracy. The Volterra-NN compression capabilities were tested on a face recognition problem. Experimental results are presented on two well-known face databases: ORL and FERET.
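
The abstract does not spell out how the Volterra kernels are obtained from the trained MLP, so the following is only a minimal sketch of one common construction: Taylor-expanding a tanh hidden layer around its biases to obtain zeroth-, first-, and second-order Volterra kernels. The function names (mlp_to_volterra, volterra_output) and the single-hidden-layer, single-output architecture are illustrative assumptions, not the authors' exact formulation.

    # Sketch: derive truncated Volterra kernels from a trained single-hidden-layer MLP
    # y = W2 @ tanh(W1 @ x + b1) + b2, by Taylor expansion around the hidden biases.
    import numpy as np

    def mlp_to_volterra(W1, b1, W2, b2):
        """Return (h0, h1, h2): zeroth-, first- and second-order Volterra kernels."""
        t  = np.tanh(b1)          # tanh(b)
        d1 = 1.0 - t ** 2         # tanh'(b)  = sech^2(b)
        d2 = -2.0 * t * d1        # tanh''(b) = -2 tanh(b) sech^2(b)

        h0 = float(W2 @ t + b2)                                   # constant kernel
        h1 = (W2 * d1) @ W1                                       # shape (n_inputs,)
        h2 = np.einsum('j,ji,jk->ik', W2 * 0.5 * d2, W1, W1)      # shape (n_inputs, n_inputs)
        return h0, h1, h2

    def volterra_output(x, h0, h1, h2):
        """Evaluate the truncated (second-order) Volterra model on input x."""
        return h0 + h1 @ x + x @ h2 @ x

    # Toy usage: compare the compressed model with the original MLP on a random input.
    rng = np.random.default_rng(0)
    n_in, n_hidden = 8, 16
    W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
    b1 = 0.1 * rng.standard_normal(n_hidden)
    W2 = 0.1 * rng.standard_normal(n_hidden)
    b2 = 0.1

    h0, h1, h2 = mlp_to_volterra(W1, b1, W2, b2)
    x = rng.standard_normal(n_in)
    mlp_y = W2 @ np.tanh(W1 @ x + b1) + b2
    print(mlp_y, volterra_output(x, h0, h1, h2))   # the two outputs should be close

In such a scheme, one Volterra model per class replaces the corresponding MLP, and only the kernel coefficients need to be stored or transmitted; whether this saves parameters depends on the input dimensionality and the order at which the series is truncated.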