Fund program: National Natural Science Foundation of China (2,3)
Received: 2022-01-17

Dynamic feature fusion forensics network for deep image inpainting
Citation: REN Honghao, ZHU Xinshan, LU Junyan. Dynamic feature fusion forensics network for deep image inpainting[J]. Journal of Harbin Institute of Technology, 2022, 54(11): 47-58
Authors: REN Honghao, ZHU Xinshan, LU Junyan
Affiliation: School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China; State Key Laboratory of Digital Publishing Technology, Beijing 100871, China
Abstract: Deep learning-based image inpainting methods leave little trace information in the tampered image, which makes forensics extremely difficult. Few studies have addressed forensics for deep inpainting, and existing methods localize tampered regions inaccurately. Therefore, a dynamic feature fusion forensics network (DF3Net) was proposed for locating regions tampered by deep image inpainting operations. First, the network expanded the single input into multiple inputs by exploiting different tamper-trace enhancement methods, including spatial rich model (SRM) filtering, spatial-domain high-pass filtering, and frequency-domain high-pass filtering, and a dynamic feature fusion module was proposed to extract effective inpainting trace features from the multiple inputs and fuse them dynamically. Second, the network adopted an encoder-decoder architecture as its basic framework, with a multi-scale feature extraction module added at the end of the encoder to capture contextual information at different scales. Finally, a spatially weighted channel attention module was designed for the skip connections between the encoder and decoder, so as to selectively restore lost boundary details. Experimental results show that DF3Net located tampered regions more accurately than existing inpainting forensics methods across different deep inpainting schemes and image databases, and was robust to JPEG compression and Gaussian noise.
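The input-expansion step described in the abstract can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the SRM kernel shown is one commonly used second-order residual filter, the Laplacian stands in for the spatial high-pass filter, and the circular FFT mask with an arbitrary cutoff radius stands in for the frequency-domain high-pass filter; the exact filters in DF3Net may differ.

```python
import numpy as np

def expand_inputs(img):
    """Expand one grayscale image into three trace-enhanced versions:
    SRM-style residual, spatial high-pass, and frequency-domain high-pass."""
    # One common SRM second-order residual kernel (illustrative choice)
    srm = np.array([[-1,  2, -1],
                    [ 2, -4,  2],
                    [-1,  2, -1]], dtype=np.float64) / 4.0
    # Spatial-domain high-pass (Laplacian) kernel
    lap = np.array([[ 0, -1,  0],
                    [-1,  4, -1],
                    [ 0, -1,  0]], dtype=np.float64)

    def conv2(x, k):
        # 'same'-size 2-D filtering with edge padding (kernels are symmetric,
        # so correlation equals convolution here)
        p = k.shape[0] // 2
        xp = np.pad(x, p, mode='edge')
        out = np.zeros_like(x, dtype=np.float64)
        for i in range(k.shape[0]):
            for j in range(k.shape[1]):
                out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
        return out

    # Frequency-domain high-pass: zero out low frequencies near the centre
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                      # cutoff radius (illustrative)
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 > r ** 2
    freq_hp = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    return conv2(img, srm), conv2(img, lap), freq_hp
```

All three outputs have the same spatial size as the input, so they can be stacked alongside the original image to form the multi-input the network consumes; a flat (inpainted-smooth) region yields near-zero responses in every branch, which is exactly the kind of trace suppression the forensics network must detect.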
Keywords: image inpainting forensics; deep neural network; deep inpainting; input expansion; dynamic feature fusion; trace features; multi-scale; attention mechanism
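The "dynamic" part of the fusion described in the abstract means the branch weights are computed from the input rather than fixed. A minimal numpy sketch of that idea, assuming a softmax gate over globally pooled branch descriptors (the actual DF3Net module likely uses learned convolutional layers; `w` and `b` here are hypothetical gate parameters):

```python
import numpy as np

def dynamic_fusion(features, w, b):
    """Fuse N same-shaped feature maps with input-dependent weights.

    features: list of N arrays, each of shape (C, H, W)
    w, b:     parameters of a linear gate mapping the C-dim pooled
              descriptor of a branch to a scalar score (w: (C,), b: float)
    """
    scores = []
    for f in features:
        desc = f.mean(axis=(1, 2))      # global average pooling -> (C,)
        scores.append(desc @ w + b)     # scalar gate score for this branch
    scores = np.array(scores)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax across the N branches
    fused = sum(wt * f for wt, f in zip(weights, features))
    return fused, weights
```

Because the weights are recomputed per input, a branch whose enhanced traces are informative for a particular image receives a larger share of the fused representation, whereas a static fusion (fixed weights or plain concatenation) would treat every image the same way.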
This article is indexed in Wanfang Data and other databases.