Foundation:Basic Research Program of Jiangsu Province (Natural Science Foundation) (BK20200039); National Natural Science Foundation of China (U1836110, 61802058)
Received:2020-12-28
Revised:2021-02-22

Universal Steganalysis Based on Few-shot Learning
LI Da-Qiu,FU Zhang-Jie,CHENG Xu,SONG Chen,SUN Xing-Ming.Universal Steganalysis Based on Few-shot Learning[J].Journal of Software,2022,33(10):3874-3890.
Authors:LI Da-Qiu  FU Zhang-Jie  CHENG Xu  SONG Chen  SUN Xing-Ming
Affiliation:School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China; Peng Cheng Laboratory, Shenzhen 518055, China
Abstract:In recent years, deep learning has shown excellent performance in image steganalysis. However, most deep-learning-based image steganalysis models are dedicated models that apply only to one specific steganographic algorithm. To detect stego images produced by a different algorithm with such a dedicated model, the model must be retrained on a large set of stego images generated by that algorithm. In practical universal steganalysis tasks, however, such large stego-image datasets are difficult to obtain, so training a universal steganalysis model from very few stego samples is a major challenge. Inspired by research in the field of few-shot learning, this paper proposes a universal steganalysis method based on a transductive propagation network. First, the feature extraction part of an existing few-shot classification framework is improved: a multi-scale feature fusion network is designed so that the few-shot classification model can extract more steganalysis features and handle classification tasks based on weak signals such as the secret-noise residual. Second, to address the difficulty that few-shot steganalysis models converge slowly, a pre-training initialization scheme is proposed to obtain an initial model with prior knowledge. Few-shot universal steganalysis models are then trained separately in the frequency domain and the spatial domain; self-tests and cross-tests show that the average detection accuracy is above 80%. On this basis, both models are retrained with dataset augmentation, which raises the detection accuracy of the few-shot universal steganalysis models to above 87%. Finally, the proposed few-shot models are compared with existing steganalysis models in the frequency and spatial domains. Under the few-shot experimental setup, the detection accuracy of the proposed spatial-domain model is slightly below that of SRNet and ZhuNet, while the proposed frequency-domain model exceeds that of the best existing frequency-domain steganalysis model. The experimental results show that the proposed few-shot method is efficient and robust for detecting stego images produced by unknown steganographic algorithms.
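The transductive propagation step underlying the method can be illustrated in a few lines. The following is a minimal, self-contained NumPy sketch of transductive label propagation (in the spirit of a transductive propagation network), not the authors' implementation: the function name, the Gaussian-kernel affinity, and the hyper-parameters sigma and alpha are illustrative assumptions, and in the paper's setting the node embeddings would come from the multi-scale feature fusion network rather than raw coordinates.

```python
import numpy as np

def label_propagation(support_x, support_y, query_x, n_classes,
                      sigma=1.0, alpha=0.99):
    """Transductive label propagation over a Gaussian-kernel graph.

    Labeled support embeddings and unlabeled query embeddings form one
    graph; labels diffuse from support nodes to nearby query nodes via
    the closed form F* = (I - alpha*S)^{-1} Y.
    """
    X = np.vstack([support_x, query_x])            # all node embeddings
    n = X.shape[0]
    # Pairwise squared distances -> Gaussian affinity matrix W
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                       # no self-loops
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    # One-hot seed labels: support rows set, query rows all zero
    Y = np.zeros((n, n_classes))
    Y[np.arange(len(support_y)), support_y] = 1.0
    # Closed-form propagation, then classify the query nodes
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F[len(support_y):].argmax(axis=1)
```

For example, in a 2-way 1-shot episode with one support embedding per class, query embeddings near each support cluster inherit that cluster's label even though only the support nodes are labeled, which is what makes the scheme usable when stego samples are scarce.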
Keywords:steganography  steganalysis  few-shot learning  deep learning