Similar Documents
20 similar documents found (search time: 31 ms)
1.
Zhang  Ru  Dong  Shiqi  Liu  Jianyi 《Multimedia Tools and Applications》2019,78(7):8559-8575
Multimedia Tools and Applications - Nowadays, there are plenty of works introducing convolutional neural networks (CNNs) to steganalysis and exceeding conventional steganalysis algorithms....

2.
Building on the RSA signature and RSA encryption schemes from cryptography, this paper proposes an adversarial attack method that allows a designated classifier to correctly classify the adversarial examples it produces. Using the idea of the one-pixel attack, a normal image can embed additional information while gaining the ability to cause misclassification in all other classifiers. The proposed method can be applied to classifier authorization management, online image anti-counterfeiting, and related areas. Experimental results show that the adversarial examples generated by the method are imperceptible to the human eye and can be recognized by the designated classifier.
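The RSA-signature building block this scheme relies on can be sketched in a few lines. The parameters below are toy values for illustration only (far too small for real security), and signing a hash of the raw image bytes is an assumption about how the scheme would bind a signature to an image, not the paper's actual construction:

```python
import hashlib

# Toy RSA parameters (illustration only; never use such small primes).
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

def sign(data: bytes) -> int:
    """Sign a digest of the data with the private key."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, sig: int) -> bool:
    """Verify the signature with the public key (e, n)."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == h

image_bytes = b"normal image pixels"
sig = sign(image_bytes)
```

Here the signature verifies only for the holder of the matching public key, which mirrors how a designated classifier could be the only one to "correctly" read the embedded information.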

3.
Objective: Image information hiding comprises two branches, image steganography and image watermarking. Steganography hides secret information in a cover to achieve covert communication; its main evaluation criterion is resistance to steganalysis. Watermarking works on a similar principle, but embeds watermark information into a cover to protect intellectual property, so it seeks to make the watermark as robust as possible against destruction. Researchers have tried to use generative adversarial networks (GANs) to automate the design of steganographic and robust watermarking algorithms, but existing designs fall short in extraction accuracy, embedding capacity, and steganographic security, or in watermark robustness and watermarked-image quality. Method: This paper proposes a new end-to-end GAN-based steganography model (image information hiding-GAN, IIH-GAN) and a robust blind watermarking model (image robust blind watermark-GAN, IRBW-GAN), for image steganography and robust blind image watermarking respectively. The networks use the more effective SE-ResNet (squeeze-and-excitation ResNet) encoder and decoder structure, whose module adaptively recalibrates channel-wise feature responses according to inter-channel dependencies. Results: Experiments show that IIH-GAN improves considerably on other methods. When the internal parameters of a trained steganalysis model are known, adding adversarial examples to IIH-GAN's training reduces the steganalysis model's detection accuracy from 97.43% to 49.29%. The steganography model also achieves a relative embedding capacity of up to 1 bit per pixel (bpp) on 256 × 256 images. IRBW-GAN increases watermark embedding capacity while significantly improving watermarked-image quality and watermark extraction accuracy; under JPEG compression attacks its extraction accuracy is about 20% higher than that of the compared methods. Conclusion: The proposed IIH-GAN and IRBW-GAN models outperform the compared models in image steganography and image watermarking, respectively.
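The squeeze-and-excitation recalibration used in the SE-ResNet blocks can be sketched as follows. This is a minimal NumPy version of the standard SE operation (global pooling, two fully-connected layers, sigmoid gating); the weight shapes and reduction ratio are illustrative assumptions, not the paper's trained parameters:

```python
import numpy as np

def se_recalibrate(x, w1, w2):
    """Squeeze-and-excitation: reweight the channels of a (C, H, W) feature map.

    x  : (C, H, W) feature map
    w1 : (C//r, C) reduction weights; w2 : (C, C//r) expansion weights
    """
    z = x.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)              # excitation: FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # FC + sigmoid -> (C,) channel gates in (0, 1)
    return x * s[:, None, None]              # scale each channel by its gate

rng = np.random.default_rng(0)
c, r = 8, 4
x = rng.normal(size=(c, 5, 5))
w1 = rng.normal(size=(c // r, c))
w2 = rng.normal(size=(c, c // r))
y = se_recalibrate(x, w1, w2)
```

Because every gate lies in (0, 1), each channel is attenuated according to the learned inter-channel dependencies rather than overwritten.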

4.
Due to the huge gap between the high dynamic range of natural scenes and the limited (low) range of consumer-grade cameras, a single-shot image can hardly record all the information of a scene. Multi-exposure image fusion (MEF) has been an effective way to solve this problem by integrating multiple shots with different exposures, which is in nature an enhancement problem. During fusion, two perceptual factors, informativeness and visual realism, should be considered simultaneously. To achieve this goal, this paper presents a deep perceptual enhancement network for MEF, termed DPE-MEF. Specifically, the proposed DPE-MEF contains two modules, one of which gathers content details from the inputs while the other takes care of color mapping/correction for the final results. Extensive experimental results and ablation studies show the efficacy of our design and demonstrate its superiority over other state-of-the-art alternatives both quantitatively and qualitatively. We also verify the flexibility of the proposed strategy for improving the exposure quality of single images. Moreover, our DPE-MEF can fuse 720p images at more than 60 pairs per second on an Nvidia 2080Ti GPU, making it attractive for practical use. Our code is available at https://github.com/dongdong4fei/DPE-MEF.

5.
To address the reduced stealthiness caused by mainstream adversarial attack algorithms perturbing global image features, an image-focused untargeted attack algorithm, PS-MIFGSM, is proposed. First, the Grad-CAM algorithm captures the regions of an image that a convolutional neural network (CNN) focuses on in a classification task. Then, MI-FGSM is used to attack the classification network and generate an adversarial perturbation, which is applied only to the focus regions while the non-focus regions of the image remain unchanged, producing a new adversarial example. In the experiments, with three image classification models (Inception_v1, Resnet_v1, and Vgg_16), PS-MIFGSM and MI-FGSM are compared in both single-model and ensemble-model attacks. The results show that PS-MIFGSM can effectively reduce the difference between adversarial and real examples while leaving the attack success rate unchanged.
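The core update, MI-FGSM restricted to a binary attention mask, can be sketched as below. The toy gradient function stands in for the real classification network's loss gradient, and the mask here is hand-made rather than produced by Grad-CAM; both are assumptions for illustration:

```python
import numpy as np

def masked_mifgsm(x, grad_fn, mask, eps=0.1, steps=10, mu=1.0):
    """Momentum iterative FGSM, perturbing only pixels where mask == 1.

    x       : input image with values in [0, 1]
    grad_fn : returns the loss gradient w.r.t. the image
    mask    : binary focus-region mask, same shape as x
    """
    alpha = eps / steps                     # per-step budget
    g = np.zeros_like(x)                    # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized momentum
        x_adv = x_adv + alpha * np.sign(g) * mask         # step only inside the mask
        x_adv = np.clip(np.clip(x_adv, x - eps, x + eps), 0.0, 1.0)
    return x_adv

# Toy example: the "loss" pushes all pixel values upward (gradient of sum(x) is ones).
x = np.full((4, 4), 0.5)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                        # hypothetical Grad-CAM focus region
x_adv = masked_mifgsm(x, lambda z: np.ones_like(z), mask)
```

Only the masked pixels move (here by the full budget eps), which is exactly how PS-MIFGSM keeps the perturbation confined to the network's focus region.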

6.
Generative steganography hides secret messages by generating sufficiently natural or realistic stego samples, and is a research hotspot in information hiding, but work on video steganography remains limited. Combining the idea of a digital Cardan grille, this paper proposes a semi-generative video steganography scheme based on deep convolutional generative adversarial networks (DCGAN). A DCGAN-based two-stream video generation network generates three components of a video: the dynamic foreground, the static background, and a spatiotemporal mask, with different videos driven by random noise. The sender can set a steganographic threshold to adaptively generate a digital Cardan grille in the mask, which serves as the key for embedding and extraction; the foreground is used as the cover to achieve optimal embedding. Experimental results show that the generated stego videos have good visual quality, with a Fréchet Inception Distance (FID) of 90, and an embedding capacity superior to existing generative steganography schemes, reaching up to 0.11 bpp, enabling more efficient transmission of secret messages.
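The digital-Cardan-grille idea can be sketched independently of the generation network: threshold a mask to obtain the grille, then write secret bits only at grille positions. Everything below (random scores standing in for the generated mask, LSB writing as the embedding rule) is an illustrative assumption, not the paper's embedding procedure:

```python
import numpy as np

def make_grille(mask_scores, threshold):
    """Digital Cardan grille: positions where the generated mask exceeds a threshold."""
    return mask_scores > threshold

def embed(frame, grille, bits):
    """Write one secret bit into the LSB of each grille position (row-major order)."""
    out = frame.copy()
    idx = np.argwhere(grille)[: len(bits)]
    for (r, c), b in zip(idx, bits):
        out[r, c] = (out[r, c] & 0xFE) | b
    return out

def extract(frame, grille, n):
    """Read n bits back from the same grille positions (the grille is the key)."""
    idx = np.argwhere(grille)[:n]
    return [int(frame[r, c] & 1) for r, c in idx]

rng = np.random.default_rng(1)
scores = rng.random((8, 8))                 # stands in for the generated mask
grille = make_grille(scores, 0.3)
frame = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stands in for a foreground frame
bits = [1, 0, 1, 1, 0]
stego = embed(frame, grille, bits)
```

Only a receiver holding the same grille (i.e. the same mask and threshold) can locate and read the bits, which is what makes the grille act as the shared key.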

7.
Recently, it has become progressively more evident that classic diagnostic labels are unable to accurately and reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses such as depression and anxiety disorders, or behavioural phenotypes such as aggression and antisocial personality. Patient heterogeneity can be better described and conceptualized by grouping individuals into novel categories, which are based on empirically-derived sections of intersecting continua that span both across and beyond traditional categorical borders. In this context, neuroimaging data (i.e. the set of images resulting from functional/metabolic acquisitions (e.g. functional magnetic resonance imaging, functional near-infrared spectroscopy, or positron emission tomography) and structural acquisitions (e.g. computed tomography; T1-, T2-, PD- or diffusion-weighted magnetic resonance imaging)) carry a wealth of spatiotemporally resolved information about each patient's brain. However, they are usually heavily collapsed a priori through procedures which are not learned as part of model training, and consequently not optimized for the downstream prediction task. This is because every individual participant usually comes with multiple whole-brain 3D imaging modalities, often accompanied by deep genotypic and phenotypic characterization, hence posing formidable computational challenges. In this paper we design and validate a deep learning architecture based on generative models, rooted in a modular approach and separable convolutional blocks (which yield a 20-fold decrease in parameter utilization), in order to a) fuse multiple 3D neuroimaging modalities on a voxel-wise level, b) efficiently convert them into informative latent embeddings through heavy dimensionality reduction, and c) maintain good generalizability with minimal information loss.
As proof of concept, we test our architecture on the well characterized Human Connectome Project database (n = 974 healthy subjects), demonstrating that our latent embeddings can be clustered into easily separable subject strata which, in turn, map to different phenotypical information (including organic, neuropsychological, and personality variables) that was not included in the embedding creation process. The ability to extract meaningful and separable phenotypic information from brain images alone can aid in creating multi-dimensional biomarkers able to chart spatio-temporal trajectories which may correspond to different pathophysiological mechanisms unidentifiable by traditional data analysis approaches. In turn, this may aid in predicting disease evolution as well as drug response, hence supporting mechanistic disease understanding and also empowering clinical trials.

8.
Steganography and steganalysis are popular research directions in information security and have seen extensive study and rapid progress in recent years. With the rise of deep learning, these new techniques have also been introduced into steganography and steganalysis, producing a series of breakthroughs in both methods and performance. To advance research on deep-learning-based steganography and steganalysis, this paper surveys and discusses the main methods and representative work. For image steganography and image steganalysis, it compares the similarities and differences between traditional methods and the corresponding deep-learning methods, details the basic principles of the main deep-learning-based approaches, and finally discusses the remaining open problems and future research trends.

9.
Cycle-consistent generative adversarial network (CycleGAN) has been widely used for cross-domain medical image synthesis tasks, particularly due to its ability to deal with unpaired data. However, most CycleGAN-based synthesis methods cannot achieve good alignment between the synthesized images and data from the source domain, even with additional image alignment losses. This is because the CycleGAN generator network can encode the relative deformations and noises associated with different domains. This can be detrimental for downstream applications that rely on the synthesized images, such as generating pseudo-CT for PET-MR attenuation correction. In this paper, we present a deformation-invariant cycle-consistency model that can filter out these domain-specific deformations. The deformation is globally parameterized by a thin-plate spline (TPS) and locally learned by modified deformable convolutional layers. Robustness to domain-specific deformations has been evaluated through experiments on multi-sequence brain MR data and multi-modality abdominal CT and MR data. Experimental results demonstrate that our method achieves better alignment between the source and target data while maintaining superior image quality compared to several state-of-the-art CycleGAN-based methods.

10.
Multimedia Tools and Applications - Counting-based secret sharing is becoming a vital efficient multimedia technique for raising the security of sensitive data especially when collective access to...

11.
This study proposes a unified gradient- and intensity-discriminator generative adversarial network for various image fusion tasks, including infrared and visible image fusion, medical image fusion, multi-focus image fusion, and multi-exposure image fusion. On the one hand, we unify all fusion tasks into discriminating a fused image’s gradient and intensity distributions based on a generative adversarial network. The generator adopts a dual-encoder–single-decoder framework to extract source image features by using different encoder paths. A dual-discriminator is employed to distinguish the gradient and intensity, ensuring that the generated image contains the desired geometric structure and conspicuous information. The dual adversarial game can tackle the generative adversarial network’s mode collapse problem. On the other hand, we define a loss function based on the gradient and intensity that can be adapted to various fusion tasks by using varying relevant parameters with the source images. Qualitative and quantitative experiments on publicly available datasets demonstrate our method’s superiority over state-of-the-art methods.

12.
Lyu  Qiongshuai  Guo  Min  Ma  Miao 《Neural computing & applications》2021,33(10):4833-4847
Neural Computing and Applications - Boosting has received considerable attention for improving the overall performance of a model in multiple tasks by cascading many steerable sub-modules. In this...

13.
Research on image classification with deep learning has found that adversarial attacks pose a serious challenge to the secure deployment of deep learning models, attracting broad attention from researchers. This paper first reviews in detail the important white-box adversarial attack algorithms for image classification, which generate adversarial perturbations, and analyzes the advantages and disadvantages of each. It then surveys the application of white-box attack techniques in three real-world scenarios: mobile devices, face recognition, and autonomous driving. In addition, comparative experiments with several typical white-box attack algorithms against different target models are conducted and their results analyzed. Finally, white-box adversarial attack techniques are summarized and promising research directions are outlined.

14.
Image steganography is an important research direction in cyberspace security, and many steganography and steganalysis methods have been proposed. To address the detection of image steganography, this paper experimentally analyzes and tests two of the steganography methods it introduces, splicing and LSB steganography. The results show that the experimental approach can effectively analyze and extract information hidden by image steganography, and it also provides a general workflow for detecting image steganography.
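Classic LSB embedding and extraction, the second method analyzed above, can be sketched in a few lines of NumPy. The bit layout (message bits written sequentially into the least-significant bits of a flattened pixel array) is the textbook variant, which may differ from the exact scheme tested in the paper:

```python
import numpy as np

def lsb_embed(pixels, message: bytes):
    """Hide message bytes in the least-significant bits of a uint8 pixel array."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if len(bits) > pixels.size:
        raise ValueError("cover too small for message")
    stego = pixels.copy().ravel()
    stego[: len(bits)] = (stego[: len(bits)] & 0xFE) | bits  # clear LSB, set message bit
    return stego.reshape(pixels.shape)

def lsb_extract(pixels, n_bytes: int) -> bytes:
    """Read n_bytes back out of the least-significant bits."""
    bits = (pixels.ravel()[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, size=(16, 16), dtype=np.uint8)
stego = lsb_embed(cover, b"hi")
```

Since each pixel changes by at most one intensity level, the stego image is visually indistinguishable from the cover, which is why detection has to rely on statistical analysis of the LSB plane rather than on inspection.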

15.
Yang Fan, Li Yang, Miao Zhuang, Zhang Rui, Wang Jiabao, Li Hang. Application Research of Computers (《计算机应用研究》), 2021, 38(12):3760-3764
Deep-learning-based image retrieval has made image privacy leakage an urgent problem. Adversarial examples generated by adversarial attacks can provide a degree of privacy protection, but existing targeted adversarial attacks on image retrieval systems are sensitive to the quality and number of the chosen target samples, which degrades their attack effectiveness. To address this problem, a targeted adversarial attack for image retrieval based on feature-weighted aggregation is proposed: the retrieval accuracy of each target image is used as a weight measuring sample quality, and the features of a small number of samples from the target class are aggregated with these weights to obtain a class feature that serves as the final attack target. Experimental results on the RParis and ROxford datasets show that adversarial examples generated by this method improve retrieval precision by 38% on average over the TMA method and by 7.5% on average over the DHTA method.

16.
Infrared images can distinguish targets from their backgrounds on the basis of differences in thermal radiation, which works well at all day/night times and under all weather conditions. By contrast, visible images can provide texture details with high spatial resolution and definition, in a manner consistent with the human visual system. This paper proposes a novel method to fuse these two types of information using a generative adversarial network, termed FusionGAN. Our method establishes an adversarial game between a generator and a discriminator, where the generator aims to generate a fused image with major infrared intensities together with additional visible gradients, and the discriminator aims to force the fused image to have more of the details present in visible images. This ensures that the final fused image simultaneously keeps the thermal radiation of an infrared image and the textures of a visible image. In addition, our FusionGAN is an end-to-end model, avoiding the manual design of complicated activity-level measurements and fusion rules required by traditional methods. Experiments on public datasets demonstrate the superiority of our strategy over state-of-the-art methods: our results look like sharpened infrared images with clearly highlighted targets and abundant details. Moreover, we also generalize our FusionGAN to fuse images with different resolutions, say a low-resolution infrared image and a high-resolution visible image. Extensive results demonstrate that our strategy can generate clear and clean fused images which do not suffer from noise caused by upsampling of infrared information.
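The generator's goal of keeping infrared intensities while matching visible gradients can be expressed as a simple content loss. The formulation below (mean-squared intensity term plus a weighted gradient term, with weight `xi`) is a simplified sketch in the spirit of FusionGAN's objective, not the paper's exact loss or code:

```python
import numpy as np

def fusion_content_loss(fused, ir, vis, xi=5.0):
    """Intensity term keeps the fused image close to the infrared input;
    the gradient term (weighted by xi) pulls its textures toward the visible input."""
    def grad(img):  # simple forward-difference gradient magnitude
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return np.hypot(gx, gy)
    intensity = np.mean((fused - ir) ** 2)               # stay close to IR intensities
    texture = np.mean((grad(fused) - grad(vis)) ** 2)    # match visible gradients
    return intensity + xi * texture

rng = np.random.default_rng(0)
ir = rng.random((8, 8))
vis = rng.random((8, 8))
# The IR image itself pays no intensity cost but still pays a texture cost:
loss_ir = fusion_content_loss(ir, ir, vis)
loss_vis = fusion_content_loss(vis, ir, vis)
```

Minimizing this trade-off alone already pushes the output toward "an infrared image with visible textures"; the discriminator then sharpens the visible details further.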

17.
Multimedia Tools and Applications - Due to the fast growth of image data on the web, it is necessary to ensure the content security of uploaded images. One of the fundamental problems behind this...

18.
This paper studies the deep clustering problem with heterogeneous features and an unknown cluster number. To address this issue, a novel deep Bayesian clustering framework is proposed. In particular, a heterogeneous feature metric is first constructed to measure the similarity between different types of features. Then, a feature-metric-restricted hierarchical sample generation process is established, in which a sample with heterogeneous features is clustered by generating it from a similarity-constrained hidden space. For estimating the model parameters and posterior probability, the corresponding variational inference algorithm is derived and implemented. To verify our model's capability, we demonstrate it on a synthetic dataset and show the superiority of the proposed method on several real datasets. Our source code is released at: Github.com/yexlwh/Heterogeneousclustering.

19.
Mao  Qingyu  Yang  Xiaomin  Zhang  Rongzhu  Jeon  Gwanggil  Hussain  Farhan  Liu  Kai 《Multimedia Tools and Applications》2022,81(9):12305-12323
Multimedia Tools and Applications - Recently, most existing learning-based fusion methods are not fully end-to-end, which still predict the decision map and recover the fused image by the refined...

20.
We consider image transformation problems whose objective is to translate images from a source domain to a target one. The problem is challenging since it is difficult to preserve the key properties of the source images while making the details of the target as distinguishable as possible. To solve this problem, we propose informative coupled generative adversarial networks (ICoGAN). For each domain, an adversarial generator-and-discriminator network is constructed. Basically, we make an approximately-shared latent space assumption via a mutual information mechanism, which enables the algorithm to learn representations of both domains in an unsupervised setting and to transfer the key properties of images from source to target. Moreover, to further enhance performance, a weight-sharing constraint between the two subnetworks and perceptual losses at different levels, extracted from the intermediate layers of the networks, are combined. With quantitative and visual results presented on the tasks of edge-to-photo transformation, face attribute transfer, and image inpainting, we demonstrate ICoGAN's effectiveness compared with other state-of-the-art algorithms.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号