Similar Literature
20 similar documents found (search time: 125 ms)
1.
This paper proposes a new approach for color transfer between two images. Our method is unique in its consideration of the scene illumination and the constraint that the mapped image must be within the color gamut of the target image. Specifically, our approach first performs a white‐balance step on both images to remove color casts caused by different illuminations in the source and target image. We then align each image to share the same ‘white axis’ and perform a gradient preserving histogram matching technique along this axis to match the tone distribution between the two images. We show that this illuminant‐aware strategy gives a better result than directly working with the original source and target image's luminance channel as done by many previous methods. Afterwards, our method performs a full gamut‐based mapping technique rather than processing each channel separately. This guarantees that the colors of our transferred image lie within the target gamut. Our experimental results show that this combined illuminant‐aware and gamut‐based strategy produces more compelling results than previous methods. We detail our approach and demonstrate its effectiveness on a number of examples.
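A minimal Python sketch of the two building blocks this abstract names: a gray-world white balance followed by 1D histogram matching along the achromatic axis (approximated here by luminance). The paper's gradient-preserving matching and gamut-based mapping are more involved, so this only illustrates the overall flow; all function names are our own.

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each RGB channel so channel means equalize (gray-world assumption).
    Assumes a float RGB image in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)

def match_histogram_1d(source, target):
    """Remap `source` values so their distribution follows `target`'s."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    t_vals, t_cnt = np.unique(target.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    t_cdf = np.cumsum(t_cnt) / target.size
    return np.interp(s_cdf, t_cdf, t_vals)[s_idx].reshape(source.shape)

def illuminant_aware_transfer(src, tgt):
    """White-balance both images, then match tone along the white axis
    (here: plain luminance matching, a simplification of the paper)."""
    src_wb = gray_world_white_balance(src)
    tgt_wb = gray_world_white_balance(tgt)
    lum_src, lum_tgt = src_wb.mean(axis=2), tgt_wb.mean(axis=2)
    lum_new = match_histogram_1d(lum_src, lum_tgt)
    gain = lum_new / np.maximum(lum_src, 1e-6)
    return np.clip(src_wb * gain[..., None], 0.0, 1.0)
```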

2.
In this paper, we propose a method to jointly transfer the color and detail of multiple source images to a target video or image. Our method is based on a probabilistic segmentation scheme using Gaussian mixture model (GMM) to divide each source image as well as the target video frames or image into soft regions and determine the relevant source regions for each target region. For detail transfer, we first decompose each image as well as the target video frames or image into base and detail components. Then histogram matching is performed for detail components to transfer the detail of matching regions from source images to the target. We propose a unified framework to perform both color and detail transforms in an integrated manner. We also propose a method to maintain consistency for video targets, by enforcing consistent region segmentations for consecutive video frames using GMM-based parameter propagation and adaptive scene change detection. Experimental results demonstrate that our method automatically produces consistent color and detail transferred videos and images from a set of source images.
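A small sketch, with assumed parameters, of two ingredients of this method: a GMM soft segmentation of pixel colors and a base/detail decomposition. The joint region matching and video propagation of the paper are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.mixture import GaussianMixture

def soft_regions(img, n_regions=4):
    """Per-pixel soft assignment to `n_regions` Gaussian color clusters.
    The region count is an assumption for illustration."""
    pixels = img.reshape(-1, 3)
    gmm = GaussianMixture(n_components=n_regions, covariance_type='full',
                          random_state=0).fit(pixels)
    return gmm.predict_proba(pixels).reshape(img.shape[0], img.shape[1], n_regions)

def base_detail(img, sigma=4.0):
    """Split an image into a smooth base layer and a residual detail layer;
    detail transfer then histogram-matches the detail layers per region."""
    base = gaussian_filter(img, sigma=(sigma, sigma, 0))
    return base, img - base
```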

3.
A high-dynamic-range (HDR) image is an image type that can represent scene luminance variation over a wide range, with pixel values proportional to the actual scene luminance. This paper studies color transfer between HDR images and ordinary images. The images are first transformed into an orthogonal color space; then, based on statistics of the images' color information, the colors of a reference image are transferred to a target image, so that the newly generated image carries the overall color characteristics of the reference. Experiments show satisfactory color-transfer results.
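Statistics-based transfer of this kind is commonly implemented as per-channel mean/standard-deviation matching in a decorrelated space. A sketch follows in which the orthogonal space is obtained by a PCA of the reference pixels; the paper's exact color space may differ.

```python
import numpy as np

def stats_color_transfer(target, reference):
    """Match per-channel mean/std of `target` to `reference` in a
    decorrelated (orthogonal) color space derived by PCA."""
    t = target.reshape(-1, 3).astype(np.float64)
    r = reference.reshape(-1, 3).astype(np.float64)
    r_mean = r.mean(axis=0)
    # Principal axes of the reference colors serve as the orthogonal space.
    _, _, axes = np.linalg.svd(r - r_mean, full_matrices=False)
    t_dec = (t - t.mean(axis=0)) @ axes.T
    r_dec = (r - r_mean) @ axes.T
    t_dec *= r_dec.std(axis=0) / np.maximum(t_dec.std(axis=0), 1e-12)
    out = t_dec @ axes + r_mean
    # No clipping, so HDR value ranges are preserved; clip afterwards
    # only for display-referred images.
    return out.reshape(target.shape)
```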

4.
Objective: In high-resolution remote-sensing scene recognition, classical supervised machine-learning algorithms mostly require abundant labeled training samples, while annotating remote-sensing images is time-consuming and labor-intensive. To address the lack of labeled samples in remote-sensing scene recognition and the inability to share labels across datasets, a transfer-learning network combining adversarial learning with variational auto-encoders is proposed. Method: A variational auto-encoder (VAE) is trained on the source dataset to obtain the encoder and classifier parameters, and the source encoder parameters are used to initialize the target-domain encoder. Following the idea of adversarial learning, a discriminator network is introduced, and the target encoder and discriminator are trained and updated alternately so that the features extracted by the target and source encoders become as similar as possible, realizing feature transfer from the source to the target remote-sensing domain. Result: Experiments on two remote-sensing scene-recognition datasets verify the effectiveness of the feature-transfer algorithm; transfer between the SUN397 natural-scene dataset and remote-sensing scenes is also attempted, with correlation alignment and balanced distribution adaptation as comparison methods. Between the two remote-sensing datasets, scene-recognition accuracy after transfer learning improves by about 10% over a network trained only on source samples, and more markedly once a few labeled target samples are used. Compared with the baselines, the proposed method improves accuracy by more than 3% when a few labeled target samples are available, and by 10%–40% when only source labels are used; with the natural-scene dataset, the method still improves scene-recognition accuracy to some extent. Conclusion: The proposed adversarial transfer-learning network can fully exploit sample information from other datasets when target-domain samples are scarce, achieving feature transfer and scene recognition across different scene-image datasets and effectively improving remote-sensing scene-recognition accuracy.
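The alternating update at the core of this scheme can be sketched in a few lines of PyTorch (an ADDA-style loop). The network shapes, losses and learning rates below are illustrative assumptions; in the paper the encoders come from a VAE trained on the source domain.

```python
import torch
import torch.nn as nn

FEAT = 128  # feature dimension: an assumption for illustration
# Target encoder; in the paper it is initialized with the source VAE encoder.
encoder_t = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, FEAT))
discriminator = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_e = torch.optim.Adam(encoder_t.parameters(), lr=1e-4)

def adversarial_step(src_feat, tgt_img):
    """One alternating update: train D to separate domains, then train the
    target encoder to fool D so both domains share a feature space."""
    tgt_feat = encoder_t(tgt_img)
    d_loss = (bce(discriminator(src_feat), torch.ones(len(src_feat), 1)) +
              bce(discriminator(tgt_feat.detach()), torch.zeros(len(tgt_img), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    e_loss = bce(discriminator(encoder_t(tgt_img)), torch.ones(len(tgt_img), 1))
    opt_e.zero_grad(); e_loss.backward(); opt_e.step()
    return d_loss.item(), e_loss.item()
```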

5.
Objective: Existing style-transfer algorithms such as GANILLA, Paint Transformer and StrokeNet suffer from lost brushstrokes, inflexible lines and long training times, so an image style-transfer algorithm based on curved-stroke rendering is proposed. Method: The image foreground is first segmented into small-region sub-images according to a user-defined superpixel count, retaining more image detail, while the background is segmented into larger regions; control points are then selected in each sub-region and multi-scale strokes are generated from them using Bezier equations; finally, a style-transfer algorithm transfers the style image onto the rendered image. Result: Compared with the AST (arbitrary style transfer) method, the proposed method improves the deception rate by 0.13 and the tester deception rate by 0.13. Compared with stroke-rendering algorithms such as Paint Transformer, it generates fine-grained strokes in texture-rich foreground regions and coarse-grained strokes in the background, preserving more image detail. Conclusion: Compared with style-transfer algorithms such as GANILLA and AdaIN (adaptive instance normalization), the proposed method selects points via image segmentation to generate stroke parameters and needs no training, which improves efficiency while the generated multi-style images retain visible brushstroke traces and vivid colors.
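The stroke-geometry step reduces to evaluating Bezier curves through region control points. A tiny sketch of cubic Bezier evaluation follows; the paper generates strokes at multiple scales, and stroke width and color handling are omitted here.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample `n` points of the cubic Bezier curve defined by four 2D
    control points (each an array of shape (2,))."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```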

6.
Application of Higher-Order Moments in Color Transfer
Color transfer between images makes effective use of basic image statistics and is a relatively new class of methods for changing image color. This paper implements such a method and introduces higher moments: skewness and kurtosis. Power and modulus transformations are used to adjust higher-order moments such as the skewness and kurtosis of the source image data so that their distribution more closely resembles that of the target image, improving the results of color transfer between images.
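A rough sketch of the skewness side of this idea: standardize the data, then search for a sign-preserving power transform whose output skewness best matches the target's. The grid search is our own simplification; the paper also adjusts kurtosis with a modulus transform.

```python
import numpy as np
from scipy.stats import skew

def match_skewness(source, target_skew):
    """Grid-search a power p so that sign(x)*|x|**p applied to the
    standardized source data approaches the target skewness.
    The search range and resolution are assumptions."""
    x = (source - source.mean()) / (source.std() + 1e-12)
    powers = np.linspace(0.25, 4.0, 151)
    best = min(powers,
               key=lambda p: abs(skew((np.sign(x) * np.abs(x) ** p).ravel())
                                 - target_skew))
    return np.sign(x) * np.abs(x) ** best
```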

7.
Objective: To keep colorization results consistent in color space, existing grayscale-image colorization methods usually adopt global optimization, which tends to over-smooth boundary regions. A locally adaptive grayscale-image colorization method is therefore proposed that considers local neighborhood pixel information during transfer and automatically adjusts neighborhood pixel weights, ensuring correct color transfer while keeping boundaries sharp. Method: First, the SVM (support vector machine) and ISLIC (improved simple linear iterative clustering) algorithms are combined to obtain classification maps of the color and grayscale images. On this basis, high-confidence pixels of the grayscale image are identified and, guided by image texture features, their matching pixels are found in the color image. Finally, adaptive weighted mean filtering transfers color to the high-confidence matched pixels, and the result is used to diffuse color to the low-confidence pixels, completing the colorization. Result: Experiments show that the colorization results of the proposed method all score above 3.5, and locally magnified regions score close to or above 4.0, higher than existing colorization methods. The method thus guarantees both the correctness of the color transfer and consistency in color space while recovering boundary details with high color distinctness; compared with typical existing grayscale-colorization methods, it performs better both in transferring colors correctly and in suppressing over-smoothing of colors at region boundaries. Conclusion: The algorithm offers new guidance for suppressing color bleeding across boundaries during grayscale-image colorization and can be applied effectively in remote sensing, black-and-white image/video processing, medical image colorization and related fields.
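A small sketch of the adaptive weighted mean idea in the transfer step: a grayscale pixel receives a chrominance that is the weighted mean of its matched color pixels' chrominance, with weights adapting to luminance similarity. The SVM/ISLIC matching that produces the candidates is outside this sketch, and the weighting kernel and its width are our own assumptions.

```python
import numpy as np

def adaptive_mean_chroma(gray_val, cand_lum, cand_chroma, sigma=0.1):
    """Weighted mean of candidate chrominances; candidates whose luminance
    is closer to the grayscale value dominate. `sigma` is an assumed
    adaptivity parameter."""
    w = np.exp(-((cand_lum - gray_val) ** 2) / (2.0 * sigma ** 2))
    w /= w.sum() + 1e-12
    return (w[:, None] * cand_chroma).sum(axis=0)
```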

8.
In vector graphics, gradient meshes represent an image object by one or more regularly connected grids. Every grid point has attributes specified such as position, colour and the gradients of these quantities. Editing the attributes of an existing gradient mesh (such as the colour gradients) is not only non‐intuitive but also time‐consuming. To facilitate user‐friendly colour editing, we develop an optimization‐based colour transfer method for gradient meshes. The key idea is built on the fact that we can approximate a colour transfer operation on gradient meshes with a linear transfer function. In this paper, we formulate the approximation as an optimization problem, which aims to minimize the difference between the colour distribution of the example image and that of the transferred gradient mesh. By adding proper constraints, i.e. image gradients, to the optimization problem, the details of the gradient meshes can be better preserved. With the linear transfer function, we are able to edit the colours and colour gradients of the mesh points automatically, while preserving the structure of the gradient mesh. The experimental results show that our method can generate pleasing recoloured gradient meshes.
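One classic way to realize such a linear transfer, sketched below under our own simplifications, is the affine map that carries the mesh's color mean and covariance onto the example's (a whitening/re-coloring transform); the paper instead solves an optimization with image-gradient constraints. Because the map is affine, the color gradients stored at mesh points transform consistently (the gradient of Ax + b is A times the gradient), which is why the mesh structure is preserved.

```python
import numpy as np

def linear_color_transfer(mesh_colors, example_colors):
    """Affine map (A, b) matching the mean and covariance of `mesh_colors`
    to those of `example_colors`. A simplification, not the paper's solver."""
    X = mesh_colors.reshape(-1, 3)
    Y = example_colors.reshape(-1, 3)
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    cx = np.cov((X - mx).T) + 1e-8 * np.eye(3)
    cy = np.cov((Y - my).T) + 1e-8 * np.eye(3)
    def sqrtm(c):  # symmetric PSD square root via eigendecomposition
        w, v = np.linalg.eigh(c)
        return (v * np.sqrt(np.maximum(w, 0.0))) @ v.T
    A = sqrtm(cy) @ np.linalg.inv(sqrtm(cx))
    b = my - A @ mx
    return (X @ A.T + b).reshape(mesh_colors.shape), (A, b)
```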

9.
Recovering Surface Layout from an Image
Humans have an amazing ability to instantly grasp the overall 3D structure of a scene—ground orientation, relative positions of major landmarks, etc.—even from a single image. This ability is completely missing in most popular recognition algorithms, which pretend that the world is flat and/or view it through a patch-sized peephole. Yet it seems very likely that having a grasp of this “surface layout” of a scene should be of great assistance for many tasks, including recognition, navigation, and novel view synthesis. In this paper, we take the first step towards constructing the surface layout, a labeling of the image into geometric classes. Our main insight is to learn appearance-based models of these geometric classes, which coarsely describe the 3D scene orientation of each image region. Our multiple segmentation framework provides robust spatial support, allowing a wide variety of cues (e.g., color, texture, and perspective) to contribute to the confidence in each geometric label. In experiments on a large set of outdoor images, we evaluate the impact of the individual cues and design choices in our algorithm. We further demonstrate the applicability of our method to indoor images, describe potential applications, and discuss extensions to a more complete notion of surface layout.

10.
We present an example‐based approach for radiometrically linearizing photographs that takes as input a radiometrically linear exemplar image and a regular, uncalibrated target image of the same scene, possibly from a different viewpoint and/or under different lighting. The output of our method is a radiometrically linearized version of the target image. Modeling the change in appearance of a small image patch seen from a different viewpoint and/or under different lighting as a linear 1D subspace allows us to recast radiometric transfer in a form similar to classic radiometric calibration from exposure stacks. The resulting radiometric transfer method is lightweight and easy to implement. We demonstrate the accuracy and validity of our method on a variety of scenes.

11.
Objective: Current large datasets such as ImageNet and mainstream network models such as ResNet can be applied directly and efficiently to classifying normal scenes, but suffer a considerable loss of accuracy in foggy scenes. Foggy scenes are complex and diverse, and annotating large amounts of foggy data is too costly, so under present conditions it is essential to exploit the abundant labeled data and network models of existing scenes to accomplish classification and recognition in foggy scenes. Method: We use a low-cost data-augmentation method that effectively reduces image differences in the pixel domain. Based on the ideas of feature diversity and feature-level adversarial learning, we propose a multi-scale feature multi-adversarial network: multi-scale features are extracted to make the feature-domain distribution more representative, and an adversarial mechanism reduces the distribution discrepancy over multiple features. Shrinking the distribution gap in both the pixel domain and the feature domain further reduces the domain shift and raises classification accuracy on foggy scenes. Result: On real, diverse foggy-scene data, ablation experiments show that after the pixel-domain augmentation the labeled clear images move stylistically closer to foggy images and overall classification accuracy rises by 8.2%, at least 6.3% above other augmentation methods; using the multi-scale feature multi-adversarial network in the feature domain raises accuracy by at least 8.0% over other networks. Conclusion: The foggy-image recognition method combining pixel-domain data augmentation with a multi-scale feature multi-adversarial network jointly accounts for domain discrepancies in the pixel and feature domains, integrates rich multi-scale feature information, and uses multiple adversarial objectives to shrink the domain shift of foggy data, obtaining better classification and recognition on a real, diverse foggy dataset.
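For the pixel-domain step, one common low-cost way to push labeled clear images toward a foggy style is the atmospheric scattering model I = J·t + A·(1 − t); whether this matches the paper's exact augmentation is an assumption on our part.

```python
import numpy as np

def add_synthetic_fog(img, transmission=0.6, airlight=0.9):
    """Blend a clear image J toward the airlight A: I = J*t + A*(1-t).
    Assumes a float image in [0, 1]; parameters are illustrative."""
    return np.clip(img * transmission + airlight * (1.0 - transmission), 0.0, 1.0)
```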

12.
Deep convolutional neural network (CNN) transfer has recently shown strong performance in scene classification of high-resolution remote-sensing images. However, the majority of transfer learning solutions are categorized as homogeneous transfer learning, which ignores differences between target and source domains. In this paper, we propose a heterogeneous model to transfer CNNs to remote-sensing scene classification to correct input feature differences between target and source datasets. First, we extract filters from source images using the principal component analysis (PCA) method. Next, we convolute the target images with the extracted PCA filters to obtain an adopted target dataset. Then, a pretrained CNN is transferred to the adopted target dataset as a feature extractor. Finally, a classifier is used to accomplish remote-sensing scene classification. We conducted extensive experiments on the UC Merced dataset, the Brazilian coffee scene dataset and the Aerial Images Dataset to verify the effectiveness of the proposed heterogeneous model. The experimental results show that the proposed heterogeneous model outperforms the homogeneous model that uses pretrained CNNs as feature extractors by a wide margin and gains similar accuracies by fine-tuning a homogeneous transfer learning model with few training iterations.
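A sketch of the PCA-filter idea, with assumed patch and filter counts: learn orthogonal filters from random patches of the source images, then convolve the target images with them to adapt their low-level statistics before feature extraction.

```python
import numpy as np
from scipy.signal import convolve2d

def pca_filters(images, patch=7, n_filters=8, n_patches=20000,
                rng=np.random.default_rng(0)):
    """Extract `n_filters` PCA filters from random grayscale patches.
    Patch size and counts are assumptions for illustration."""
    samples = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - patch)
        x = rng.integers(img.shape[1] - patch)
        samples.append(img[y:y + patch, x:x + patch].ravel())
    P = np.asarray(samples)
    _, _, vt = np.linalg.svd(P - P.mean(axis=0), full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)

def adapt_target(target_img, filters):
    """Convolve a target image with the source-derived PCA filters."""
    return np.stack([convolve2d(target_img, f, mode='same') for f in filters])
```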

13.
We present a novel approach for animating static images that contain objects that move in a subtle, stochastic fashion (e.g. rippling water, swaying trees, or flickering candles). To do this, our algorithm leverages example videos of similar objects, supplied by the user. Unlike previous approaches which estimate motion fields in the example video to transfer motion into the image, a process which is brittle and produces artefacts, we propose an Eulerian phase‐based approach which uses the phase information from the sample video to animate the static image. As is well known, phase variations in a signal relate naturally to the displacement of the signal via the Fourier Shift Theorem. To enable local and spatially varying motion analysis, we analyse phase changes in a complex steerable pyramid of the example video. These phase changes are then transferred to the corresponding spatial sub‐bands of the input image to animate it. We demonstrate that this simple, phase‐based approach for transferring small motion is more effective at animating still images than methods which rely on optical flow.
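The Fourier Shift Theorem the abstract invokes is easy to demonstrate globally: adding a linear ramp to the Fourier phase translates the image. The paper applies such phase shifts locally, per scale and orientation sub-band of a complex steerable pyramid; this toy version is global.

```python
import numpy as np

def phase_shift(img, dx, dy):
    """Translate a grayscale image by (dx, dy) pixels purely via Fourier phase."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    ramp = np.exp(-2j * np.pi * (fx * dx + fy * dy))  # linear phase ramp
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))
```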

14.
We present a novel algorithm to denoise deep Monte Carlo renderings, in which pixels contain multiple colour values, each for a different range of depths. Deep images are a more expressive representation of the scene than conventional flat images. However, since each depth bin receives only a fraction of the flat pixel's samples, denoising the bins is harder due to the less accurate mean and variance estimates. Furthermore, deep images lack a regular structure in depth—the number of depth bins and their depth ranges vary across pixels. This prevents a straightforward application of patch‐based distance metrics frequently used to improve the robustness of existing denoising filters. We address these constraints by combining a flat image‐space non‐local means filter operating on pixel colours with a deep cross‐bilateral filter operating on auxiliary features (albedo, normal, etc.). Our approach significantly reduces noise in deep images while preserving their structure. To our best knowledge, our algorithm is the first to enable efficient deep‐compositing workflows with denoised Monte Carlo renderings. We demonstrate the performance of our filter on a range of scenes highlighting the challenges and advantages of denoising deep images.

15.
Starting from measured scene luminances, the retinal images of high‐dynamic‐range (HDR) test targets were calculated. These test displays contain 40 gray squares with a 50% average surround. In order to approximate a natural scene, the surround area was made up of half‐white and half‐black squares of different sizes. In this display, the spatial‐frequency distribution approximates a 1/f function of energy vs. spatial frequency. Images with 2.7 and 5.4 optical density ranges were compared. Although the target luminances are very different, after computing the retinal image according to the CIE scatter glare formula, it was found that the retinal ranges are very similar. Intraocular glare strongly restricts the range of the retinal image. Furthermore, uniform, equiluminant target patches are spatially transformed to different gradients with unequal retinal luminances. The usable dynamic range of the display correlates with the range on the retina. Observers report that appearances of white and black squares are constant and uniform, despite the fact that the retinal stimuli are variable and non‐uniform. Human vision uses complex spatial processing to calculate appearance from retinal arrays. Spatial image processing increases apparent contrast with increased white area in the surround. Post‐retinal spatial vision counteracts glare.

16.
This paper studies the matching of two color images taken under different illumination conditions and proposes a new SIFT (scale-invariant feature transform) matching algorithm based on global color transfer. The new algorithm first applies global color transfer to the two color images of the same scene or object captured under different illumination, reducing the matching error caused by color differences; the SIFT algorithm then extracts feature information from the processed images for initial matching; finally, the RANSAC (random sample consensus) algorithm removes mismatched points. Experimental results show that the new algorithm performs well in color-image matching.
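The matching stages after the color transfer map directly onto standard OpenCV calls; a sketch follows. The global color-transfer preprocessing itself (e.g. a statistics-based method as in entry 3) is assumed done beforehand, and the ratio and RANSAC thresholds are conventional defaults, not the paper's.

```python
import cv2
import numpy as np

def sift_ransac_match(img1, img2):
    """SIFT matching with ratio test, then RANSAC outlier removal."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(g1, None)
    k2, d2 = sift.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]          # Lowe's ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC estimates a homography and flags outlier matches.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return H, inliers
```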

17.
We present an integrated, fully GPU‐based processing pipeline to interactively render new views of arbitrary scenes from calibrated but otherwise unstructured input views. In a two‐step procedure, our method first generates for each input view a dense proxy of the scene using a new multi‐view stereo formulation. Each scene proxy consists of a structured cloud of feature aware particles which automatically have their image space footprints aligned to depth discontinuities of the scene geometry and hence effectively handle sharp object boundaries and occlusions. We propose a particle optimization routine combined with a special parameterization of the view space that enables an efficient proxy generation as well as robust and intuitive filter operators for noise and outlier removal. Moreover, our generic proxy generation allows us to flexibly handle scene complexities ranging from small objects up to complete outdoor scenes. The second phase of the algorithm combines these particle clouds in real‐time into a view‐dependent proxy for the desired output view and performs a pixel‐accurate accumulation of the colour contributions from each available input view. This makes it possible to reconstruct even fine‐scale view‐dependent illumination effects. We demonstrate how all these processing stages of the pipeline can be implemented entirely on the GPU with memory efficient, scalable data structures for maximum performance. This allows us to generate new output renderings of high visual quality from input images in real‐time.

18.
Guo Yuanhao, Zhao Rongkai, Wu Song, Wang Chao. Multimedia Tools and Applications, 2018, 77(17): 22299-22318

Panoramic photography requires intensive image stitching. A large quantity of images makes stitching expensive, while sparse imaging can produce a poor-quality panorama because adjacent images are insufficiently correlated, so a good balance between image quantity and image correlation can improve both the efficiency and the quality of panoramic photography. In this work we present a novel approach to estimate optimal image-capture patterns for panoramic photography, minimizing the number of images while still preserving sufficient image correlation, which we represent as the overlap between the view ranges separately observed from adjacent images. Moreover, a time-consuming panoramic imaging process results in considerable illumination variation of the scene across images, which makes stitching more challenging; to address this, we design a series of imaging routines for our capture patterns that preserve content consistency and ensure the method generalizes to various cameras. Experimental results show that the proposed method obtains the optimal image-capture pattern very efficiently; with these patterns we achieve a balanced image quantity while still obtaining good panoramic results.

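The quantity/overlap trade-off has a simple back-of-the-envelope form: given a camera's horizontal field of view and a required overlap fraction between adjacent shots, the minimal shot count for a 360-degree sweep follows directly. The paper's estimation is more general; this only illustrates the balance being optimized.

```python
import math

def min_shots_360(fov_deg, overlap_frac):
    """Fewest shots covering 360 degrees with at least `overlap_frac`
    overlap between adjacent views (a simplified single-row model)."""
    step = fov_deg * (1.0 - overlap_frac)  # yaw increment between shots
    return math.ceil(360.0 / step)

# e.g. a 60-degree lens with 30% required overlap: ceil(360 / 42) = 9 shots
print(min_shots_360(60.0, 0.3))
```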

19.
Objective: Because some optical lenses have a limited focusing range, it is difficult to image every object in a scene sharply in a single photograph, whereas fusing multiple source images of the same scene yields a single image that is sharp throughout. To improve the quality of fused images, a new image-fusion algorithm based on the non-subsampled quaternion shearlet transform (NSQST) is proposed. Method: The source images are first decomposed by NSQST into low-frequency and high-frequency sub-band coefficients. For the low-frequency sub-bands, a fusion rule based on improved sparse representation (ISR) is proposed; for the high-frequency sub-bands, a fusion rule combining improved spatial frequency, edge energy and local-region similarity matching is proposed. The fused image is finally obtained by the inverse NSQST. Result: Compared with five other fusion methods, the proposed method achieves better objective metrics and visual quality; relative to the NSCT-SR algorithm, its four objective metrics improve by 3.6%, 2.9%, 1.5% and 5.2%; by 3.7%, 3.2%, 3.2% and 3.0%; and by 6.2%, 3.8%, 3.4% and 8.6%, respectively. Conclusion: Multi-focus image-fusion experiments show that the method can further be applied to target recognition, medical diagnosis and other fields.
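For context, the classic spatial-frequency measure that rules of this family build on can be sketched in a few lines; the paper's improved rule additionally weighs edge energy and local-region similarity, which are omitted here.

```python
import numpy as np

def spatial_frequency(block):
    """Row/column activity of a coefficient block; higher means sharper."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_highfreq(c1, c2):
    """Keep the high-frequency coefficients from the sharper source."""
    return c1 if spatial_frequency(c1) >= spatial_frequency(c2) else c2
```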

20.
Underexposed, low-light images are acquired when the scene illumination is insufficient for a given camera; the camera's limitation lies in the high chance of producing motion-blurred images due to shaky hands. In this paper we suggest actively using underexposure as a measure to prevent motion-blurred images and propose a novel color transfer as a method for low-light image amplification. The proposed solution envisages a dual acquisition, containing a normally exposed, possibly blurred image and an underexposed, low-light but sharp one. Good colors are learned from the normally exposed image and transferred to the low-light one using a framework-matching solution. To ensure that the transfer is spatially consistent, the images are divided into perceptually consistent luminance patches called frameworks, and the optimal mapping is approximated piece-wise. Since the two images may differ in color and subject, we add supplementary extreme channels to improve the robustness of the spatial matching. The proposed method shows robust results from both an objective and a subjective point of view.
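A rough sketch of piece-wise transfer over luminance bins standing in for the paper's "frameworks": within each bin, the well-exposed image's mean color is transferred to the dark image. The framework matching, optimal mapping and extreme channels of the paper are omitted, and the bin count is an assumption.

```python
import numpy as np

def framework_transfer(dark, bright, n_bins=8):
    """Shift each luminance bin of `dark` toward the mean color of the
    corresponding luminance bin of `bright`. Assumes float RGB in [0, 1];
    quantile binning stands in for the paper's framework segmentation."""
    lum_d, lum_b = dark.mean(axis=2), bright.mean(axis=2)
    edges_d = np.quantile(lum_d, np.linspace(0, 1, n_bins + 1))
    edges_b = np.quantile(lum_b, np.linspace(0, 1, n_bins + 1))
    bins_d = np.digitize(lum_d, edges_d[1:-1])  # values 0 .. n_bins-1
    bins_b = np.digitize(lum_b, edges_b[1:-1])
    out = dark.astype(np.float64)
    for i in range(n_bins):
        md, mb = bins_d == i, bins_b == i
        if md.any() and mb.any():
            out[md] += bright[mb].mean(axis=0) - dark[md].mean(axis=0)
    return np.clip(out, 0.0, 1.0)
```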
