Improvement of convolutional neural network model design method
ZHANG Tao, YANG Jian, SONG Wen-ai, GUO Yan-rong. Improvement of convolutional neural network model design method[J]. Computer Engineering and Design, 2019, 40(7): 1885-1890
Authors: ZHANG Tao  YANG Jian  SONG Wen-ai  GUO Yan-rong
Affiliation: School of Software, North University of China, Taiyuan 030051, Shanxi, China
Funding: Scientific Research Foundation for Returned Overseas Scholars of Shanxi Province
Abstract: To address the large parameter counts and time-consuming training of existing convolutional neural network models, a method that combines serial and parallel network connections is proposed: smaller convolution kernels and more nonlinear activations reduce the number of parameters while strengthening the network's feature-learning ability. A scale-normalized pooling layer is proposed to replace the fully connected layer, avoiding the overfitting that the fully connected layer's large parameter count tends to cause; the improved model can be trained on images of arbitrary size. Experimental results show that the proposed method greatly reduces both the number of parameters and the training time, effectively improving the efficiency of the algorithm.

Keywords: convolutional neural network  convolution kernel  nonlinear activation  scale-normalized pooling  image classification
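
The first idea in the abstract, replacing large kernels with stacks of smaller ones plus extra activations, follows the well-known observation popularized by VGG-style designs: two stacked 3x3 convolutions cover the same receptive field as one 5x5 convolution with fewer parameters and one additional nonlinearity. The paper's exact architecture is not given on this page; the PyTorch sketch below (an assumption, not the authors' code) only illustrates the parameter arithmetic behind that claim.

    # Illustrative sketch (PyTorch, assumed): two stacked 3x3 convolutions span
    # the same 5x5 receptive field as a single 5x5 convolution, with fewer
    # parameters and one extra nonlinear activation. Not the authors' code.
    import torch
    import torch.nn as nn

    channels = 64  # hypothetical channel width

    single_5x5 = nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=5, padding=2),
        nn.ReLU(inplace=True),
    )

    stacked_3x3 = nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),  # the extra nonlinearity gained by stacking
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

    def n_params(m: nn.Module) -> int:
        return sum(p.numel() for p in m.parameters())

    # 5x5: 64*64*25 + 64 biases = 102,464 parameters
    # two 3x3: 2 * (64*64*9 + 64)  =  73,856 parameters (about 28% fewer)
    print(n_params(single_5x5), n_params(stacked_3x3))

    # Both heads map a feature map to the same output shape:
    x = torch.randn(1, channels, 32, 32)
    assert single_5x5(x).shape == stacked_3x3(x).shape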

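The abstract describes the "scale-normalized pooling layer" only by its effect: it replaces the fully connected layer and lets the model train on images of arbitrary size. That behavior matches pooling to a fixed output grid regardless of input resolution, as in SPP-style or adaptive pooling. The sketch below uses PyTorch's nn.AdaptiveAvgPool2d as a stand-in for the paper's layer, whose actual definition may differ; the class name and grid size are hypothetical.

    # Hedged sketch: a classifier head whose pooling normalizes any input scale
    # to a fixed-size grid, so no fully connected layer over a flattened feature
    # map is needed. nn.AdaptiveAvgPool2d is a stand-in for the paper's
    # "scale-normalized pooling"; the authors' layer may differ.
    import torch
    import torch.nn as nn

    class ScaleNormalizedHead(nn.Module):
        def __init__(self, in_channels: int, num_classes: int, grid: int = 4):
            super().__init__()
            # Pools any HxW feature map down to a fixed grid x grid map.
            self.pool = nn.AdaptiveAvgPool2d(grid)
            # A 1x1 convolution acts as the classifier, so the parameter
            # count is independent of the input image size.
            self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.pool(x)           # (N, C, grid, grid) for any input size
            x = self.classifier(x)     # (N, num_classes, grid, grid)
            return x.mean(dim=(2, 3))  # average over the grid -> (N, num_classes)

    head = ScaleNormalizedHead(in_channels=128, num_classes=10)
    for size in (32, 48, 97):          # arbitrary input resolutions
        feats = torch.randn(2, 128, size, size)
        assert head(feats).shape == (2, 10)

Because every input resolution is reduced to the same fixed grid before classification, both the overfitting-prone dense layer and the fixed-input-size constraint disappear, which is exactly the pair of benefits the abstract attributes to the proposed layer.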
This article is indexed by VIP, Wanfang Data, and other databases.