Similar Documents
20 similar documents found (search time: 15 ms)
1.
A color night-vision fusion method based on YUV space   Cited by: 1 (self-citations: 0, others: 1)
钱小燕  韩磊  王帮峰 《计算机应用》2010,30(12):3222-3224
A color night-vision fusion method based on image enhancement and color-transfer theory is proposed. First, local enhancement and median filtering are used to adjust the contrast of the low-light visible image and the infrared image. The enhanced images are then combined by linear operations into three grayscale images, which are mapped to the R, G, and B channels to generate a pseudo-color fused image. Finally, the colors of a reference image are transferred to the fused image in YUV color space, producing a natural, realistic color fusion result. Experimental results show that the color-transferred image looks more natural and realistic than the pseudo-color fused image and better supports human recognition of targets and surroundings; the algorithm also runs faster than other fusion methods.
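The mean/std color-transfer step in YUV space can be sketched as below (a minimal Reinhard-style statistics match; the enhancement and pseudo-coloring stages are omitted, and all function names are illustrative, not the paper's):

```python
import numpy as np

# BT.601 RGB -> YUV matrix
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def transfer_colors_yuv(fused_rgb, reference_rgb):
    """Match per-channel mean and std of the fused image to the reference
    image in YUV space, then convert back to RGB."""
    src = fused_rgb.reshape(-1, 3) @ RGB2YUV.T
    ref = reference_rgb.reshape(-1, 3) @ RGB2YUV.T
    out = (src - src.mean(0)) / (src.std(0) + 1e-6) * ref.std(0) + ref.mean(0)
    return (out @ np.linalg.inv(RGB2YUV).T).reshape(fused_rgb.shape)
```

After the transfer, each YUV channel of the fused image has (approximately) the reference image's mean and standard deviation, which is what gives the result its natural daytime palette.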

2.
Fusion of visible and infrared imagery for night color vision   Cited by: 1 (self-citations: 0, others: 1)
A combined approach for fusing night-time infrared with visible imagery is presented in this paper. Night color vision is thus accomplished and the final scene has a natural day-time color appearance. Fusion is based either on non-negative matrix factorization or on a transformation that takes into consideration perceptual attributes. The final obtained color images possess a natural day-time color appearance due to the application of a color transfer technique. In this way inappropriate color mappings are avoided and the overall discrimination capabilities are enhanced. Two different data sets are employed and the experimental results establish the overall method as being efficient, compact and perceptually meaningful.

3.
A color image fusion algorithm based on IHS and wavelet transform   Cited by: 3 (self-citations: 1, others: 2)
For the fusion of daytime and nighttime color images, a new algorithm based on the IHS and wavelet transforms is proposed. The algorithm first applies the IHS transform to both color images to obtain three components: intensity I, hue H, and saturation S. The intensity components are then fused with the wavelet transform, and the fused intensity replaces that of the nighttime image; the inverse IHS transform yields a new nighttime image, which is finally fused with the original daytime image by weighted averaging in the spatial domain. Experimental analysis shows that the new method outperforms both plain spatial-domain weighted fusion and fusion based on IHS and wavelet transforms alone: it preserves the information of the day and night images while enhancing the spatial detail of the fused result. Objective evaluation criteria were also applied to the algorithm…
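The intensity-replacement step common to IHS-based fusion schemes can be sketched with the fast additive IHS formulation (a simplification assuming I = (R+G+B)/3; the wavelet fusion of the intensity planes themselves is not shown):

```python
import numpy as np

def replace_intensity(rgb, new_i):
    """Fast additive IHS: adding (new_i - old_i) to every channel swaps
    the intensity plane while leaving hue and saturation unchanged."""
    old_i = rgb.mean(axis=2, keepdims=True)
    return rgb + (new_i[..., None] - old_i)
```

This avoids an explicit forward/inverse IHS transform: the additive form is algebraically equivalent for the linear intensity definition above.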

4.
《Information Fusion》2008,9(2):186-199
A natural color mapping method has been previously proposed that matches the statistical properties (mean and standard deviation) of night-vision (NV) imagery to those of a daylight color image (manually selected as the “target” color distribution). Thus the rendered NV image appears to resemble the natural target image in terms of color appearance. However, in this prior method (termed “global-coloring”) the colored NV image may appear unnatural if the target image’s “global” color statistics are different from that of the night-vision scene (e.g., it would appear to have too much green if much more vegetation was contained in the target image). Consequently, a new “local-coloring” method is presented that functions to render the NV image segment-by-segment by taking advantage of image segmentation, pattern recognition, histogram matching and image fusion. Specifically, a false-color image (source image) is formed by assigning multi-band NV images to three RGB (red, green and blue) channels. A nonlinear diffusion filter is then applied to the false-colored image to reduce the number of colors. The final grayscale segments are obtained by using clustering and merging techniques. With a supervised nearest-neighbor paradigm, a segment can be automatically associated with a known “color scheme”. The statistic-matching procedure is merged with the histogram-matching procedure to enhance the color mapping effect. Instead of extracting the color set from a single target image, the mean, standard deviation and histogram distribution of the color planes from a set of natural scene images are used as the target color properties for each color scheme. The target color schemes are grouped by their scene contents and colors such as plants, mountain, roads, sky, water, etc. 
In our experiments, five pairs of night-vision images were initially analyzed, and the images that were colored (segment-by-segment) by the proposed “local-coloring” method are shown to possess much more natural and realistic coloration when compared with those produced by the previous “global-coloring” method.
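The histogram-matching procedure merged with statistic matching can be sketched as the standard CDF-mapping recipe (an illustrative helper, not the paper's exact per-segment procedure):

```python
import numpy as np

def match_histogram(source, target):
    """Remap source grayscale values so their CDF matches the target's."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    t_vals, t_counts = np.unique(target.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / target.size
    # for each source quantile, pick the target value at the same quantile
    matched = np.interp(s_cdf, t_cdf, t_vals)
    return matched[s_idx].reshape(source.shape)
```

In the local-coloring scheme this mapping would be applied per segment, against the histogram of the color scheme associated with that segment.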

5.
Psychophysical assessments of image-sensor fused imagery   Cited by: 4 (self-citations: 0, others: 4)
Krebs WK  Sinai MJ 《Human factors》2002,44(2):257-271
The goal of this study was to determine the perceptual advantages of multiband sensor-fused (achromatic and chromatic) imagery over conventional single-band nighttime (image-intensified and infrared) imagery for a wide range of visual tasks, including detection, orientation, and scene recognition. Participants were 151 active-duty military observers whose reaction time and accuracy scores were recorded during a visual search task. Data indicate that sensor fusion did not improve performance relative to that obtained with single-band imagery on a target detection task but did facilitate object recognition, judgments of spatial orientation, and scene recognition. Observers' recognition and orientation judgments were improved by the emergent information within the image-fused imagery (i.e., combining dominant information from two or more sensors into a single displayed image). Actual or potential applications of this research include the deployment of image-sensor fused systems for automobile, aviation, and maritime displays to increase operators' visual processing during low-light conditions.

6.
A color-preserving enhancement algorithm for low-illumination images   Cited by: 2 (self-citations: 0, others: 2)
Keeping color unchanged is an important issue in color image enhancement. For low-illumination images, a fast color-preserving enhancement method is proposed: a parabolic function is used to boost luminance, and once the luminance gain is obtained, the R, G, and B values are scaled by the same ratio, so luminance is enhanced while color is preserved. Experiments show that the algorithm is fast and produces good enhancement results.
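The same-ratio RGB scaling idea can be sketched as follows (the paper's exact parabolic curve is not given in the abstract, so y' = y(2 - y) is an assumed example curve):

```python
import numpy as np

def enhance_low_light(rgb):
    """Boost luminance with a parabolic curve y' = y*(2 - y), then scale
    R, G and B by the same gain so channel ratios (hue) are preserved."""
    y = rgb.mean(axis=2, keepdims=True)          # simple luminance proxy
    gain = (y * (2.0 - y) + 1e-6) / (y + 1e-6)   # ~ (2 - y) for y > 0
    return np.clip(rgb * gain, 0.0, 1.0)
```

Because all three channels share one gain, the R:G:B ratios of each pixel are unchanged wherever no clipping occurs.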

7.
This paper presents learning the uncorrelated color space (UCS), the independent color space (ICS), and the discriminating color space (DCS) for face recognition. The new color spaces are derived from the RGB color space that defines the tristimuli R, G, and B component images. While the UCS decorrelates its three component images using principal component analysis (PCA), the ICS derives three independent component images by means of blind source separation, such as independent component analysis (ICA). The DCS, which applies discriminant analysis, defines three new component images that are effective for face recognition. Effective color image representation is formed in these color spaces by concatenating their component images, and efficient color image classification is achieved using the effective color image representation and an enhanced Fisher model (EFM). Experiments on the face recognition grand challenge (FRGC) and the biometric experimentation environment (BEE) show that for the most challenging FRGC version 2 Experiment 4, which contains 12 776 training images, 16 028 controlled target images, and 8014 uncontrolled query images, the ICS, DCS, and UCS achieve the face verification rate (ROC III) of 73.69%, 71.42%, and 69.92%, respectively, at the false accept rate of 0.1%, compared to the RGB color space, the 2-D Karhunen-Loeve (KL) color space, and the FRGC baseline algorithm with the face verification rate of 67.13%, 59.16%, and 11.86%, respectively, with the same false accept rate.
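The UCS construction, PCA-decorrelating the three channel images, can be sketched as below (illustrative only; the downstream EFM classification pipeline is not shown):

```python
import numpy as np

def uncorrelated_color_space(rgb):
    """Derive UCS component images by decorrelating the R, G, B channels
    with PCA (eigenvectors of the 3x3 channel covariance matrix)."""
    X = rgb.reshape(-1, 3)
    Xc = X - X.mean(0)
    w, V = np.linalg.eigh(np.cov(Xc.T))   # columns of V = principal axes
    return (Xc @ V).reshape(rgb.shape)
```

The three output planes have a diagonal covariance matrix, i.e. they are mutually uncorrelated, which is the defining property of the UCS.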

8.
In remote sensing, satellite images acquired from sensors provide either high spectral or high spatial resolution. The pansharpening framework is applied to remote-sensing systems to enhance the spatial quality of coarse-resolution multispectral (MS) images using information from panchromatic imagery. A multidecomposition pansharpening approach combining MS and panchromatic (PAN) images is proposed in this paper in order to bring the resolution of the low-resolution MS imagery up to that of the panchromatic images. In particular, multilevel wavelet decomposition is applied to the luminance-chrominance (YUV) space transformation (taking into account the red, green, and blue (RGB) bands) or extended-YUV transformation (taking into account the near infrared (NIR) band in addition to RGB) of the original MS channels, where geometrical details from the panchromatic image are introduced into the MS ones. Our approach contains a preprocessing step that consists of homogenizing the luminance, Y, and the panchromatic image reflectance, which are, respectively, a value integrated over a wavelength spectrum and simply a linear combination of some values in the same spectrum. Hence, as the panchromatic image reflectance and luminance reflectance correspond to different measurements, they do not correspond to the same physical information, which results in a difference between their histograms. Therefore, simple histogram matching is traditionally applied to panchromatic data to fit it to the luminance to avoid colour distortion after fusion. However, as the transformation concerns just the details of the panchromatic and MS images, a new scheme for matching the images which ignores the divergence between their approximations and maximizes the resemblance between their details is proposed in this work.
After that, the fusion approach is applied, and in contrast to the original approach where the details of the fused MS luminance are set equal to the PAN luminance, we propose an adaptive approach in which just a part of the PAN details proportional to the similarity between the luminance and lowered PAN image is taken. Indeed, high-resolution geometrical details cannot be similar if the low-resolution details are not in good agreement. Besides, as the agreement between PAN and MS images depends on the occupation class, we have created a segmentation map and then computed separately the correlation in each region. Finally, the evaluation is done based on QuickBird and Pleiades-1A data sets showing rural and suburban areas. When compared to recent methods, our approach provides better results.
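The detail-injection core of such pansharpening schemes can be sketched with a box filter standing in for the wavelet approximation band (a simplification: the paper's adaptive, segment-wise injection weight is reduced here to a single scalar alpha):

```python
import numpy as np

def box1d(a, k, axis):
    """Running-mean low-pass along one axis with edge padding."""
    pad = [(0, 0)] * a.ndim
    pad[axis] = (k // 2, k // 2)
    p = np.pad(a, pad, mode='edge')
    kern = np.ones(k) / k
    return np.apply_along_axis(
        lambda v: np.convolve(v, kern, mode='valid'), axis, p)

def inject_details(ms_up, pan, alpha=1.0):
    """Add the PAN high-pass (PAN minus its low-pass) to every upsampled
    multispectral band, scaled by alpha."""
    pan_low = box1d(box1d(pan, 5, 0), 5, 1)
    return ms_up + alpha * (pan - pan_low)[..., None]
```

A flat (detail-free) PAN image leaves the MS bands untouched, which is the sanity check any detail-injection scheme should pass.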

9.
The method first extracts the luminance component of the source images and decomposes it with the nonsubsampled contourlet transform. A fusion decision map is obtained by applying a "choose the larger composite pixel value" rule to the high-frequency coefficients, and the map is then consistency-checked; finally, pixels are selected in RGB space according to the verified decision map to produce the fused image. Experimental results show that the method avoids the color distortion that RGB-space fusion methods tend to introduce, and since only the luminance component is fused, computational complexity is reduced. The fused image preserves the useful information of the source images while compensating for the weak detail rendition of traditional spatial-domain methods, and better matches the characteristics of human vision. The fusion method has also been applied to grayscale multi-focus image fusion, where practice shows it performs very well.

10.
A pixel‐level fusion technique for RGB representation of multispectral images is proposed. The technique results in highly correlated RGB components, a fact which occurs in natural colour images and is strictly related to the colour perception attributes of the human eye. Accordingly, specific properties for the covariance matrix of the final RGB image are demanded. Mutual information is employed as an objective criterion for quality refinement. The method provides dimensionality reduction, while the resulting RGB colour image is perceptually of high quality. Comparisons with existing techniques are carried out using both subjective and objective measures.

11.
To address the inability of existing color night-vision algorithms based on global statistics to preserve the color constancy of image targets, a simple and effective local natural-color night-vision algorithm is proposed. First, a pseudo-color fused image is generated following the center-surround opponent structure of the human visual system. The pseudo-color fused image is then clustered by color feature and segmented into color regions, and the corresponding region of a natural color reference image supplies the reference pixel set for each region of the fused image. Finally, local color mapping between corresponding regions is performed in a simplified Lab space, transferring the color characteristics of the reference image to the fused image. Experimental results show that the night-vision image acquires an appearance as natural and realistic as the daytime reference image, improving scene recognition; the method also runs fast and is suitable for real-time processing.

12.
Hierarchical image fusion   Cited by: 39 (self-citations: 0, others: 39)
A hierarchical image fusion scheme is presented that preserves those details from the input images that are most relevant to visual perception. Results show that fused images present a more detailed representation of the scene and provide information that cannot be obtained by viewing the input images separately. Detection, recognition, and search tasks may therefore benefit from this fused image representation.

13.
Illumination-adaptive skin-region detection based on the YCgCr color space   Cited by: 2 (self-citations: 0, others: 2)
To correct color casts caused by changes in illuminant color, a color-balance method based on the Gray World assumption is applied in RGB space to correct the input color image. To detect skin regions in color images effectively, the new YCgCr color space is adopted, and a skin model combining luminance information with a Cg-Cr chrominance lookup table is built in that space. Extensive experiments show that the model adapts well to illumination changes and can be applied to detecting human skin regions in color images under complex conditions.
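The RGB-to-YCgCr conversion can be sketched as below. The coefficients follow the YCbCr-style definition with a green-difference chroma channel; the threshold ranges in `skin_mask` are illustrative placeholders, not the paper's Cg-Cr lookup table:

```python
import numpy as np

def rgb_to_ycgcr(rgb):
    """RGB in [0,1] -> YCgCr; Cg is a green-difference chroma channel
    analogous to Cb/Cr in YCbCr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 16.0  +  65.481 * r + 128.553 * g +  24.966 * b
    cg = 128.0 -  81.085 * r + 112.0   * g -  30.915 * b
    cr = 128.0 + 112.0   * r -  93.786 * g -  18.214 * b
    return np.stack([y, cg, cr], axis=-1)

def skin_mask(rgb, cg_range=(85, 135), cr_range=(135, 180)):
    """Threshold the chroma planes; the ranges here are assumed examples."""
    ycgcr = rgb_to_ycgcr(rgb)
    cg, cr = ycgcr[..., 1], ycgcr[..., 2]
    return ((cg >= cg_range[0]) & (cg <= cg_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Achromatic pixels map to (Cg, Cr) = (128, 128), so any chroma box that excludes the neutral point automatically rejects grays.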

14.
李慧慧  秦品乐  梁军 《计算机应用》2016,36(5):1378-1382
Image dehazing aims to remove the effect of haze on video-surveillance imagery and improve the visual quality of hazy images. Existing methods typically only compare the image before and after dehazing, and their results often suffer from severe distortion or over-saturation: they cannot keep details sharp while preserving complete color information. To address this, an image-restoration method is proposed that builds an HSI luminance conversion model with scattering characteristics, grounded in the atmospheric scattering model and optical principles, and combines it with computation in RGB space. By analyzing the contrast relationship between clear-day and hazy images, the relative depth of the scene is computed from the luminance component of HSI space, to which the human eye is most sensitive, and the hazy video frames are then restored and evaluated using the atmospheric scattering model and the depth ratio. Experimental results show that, compared with dehazing computed only in RGB space, the proposed method produces clearer results with less color distortion and over-saturation.

15.
An IHS-transform fusion algorithm based on structural similarity   Cited by: 1 (self-citations: 0, others: 1)
Because colors represented in IHS space better match the perceptual behavior of the human eye, the IHS transform is widely used in remote-sensing image fusion. To address the significant color distortion produced by the traditional IHS fusion algorithm, an IHS fusion algorithm based on structural similarity (SSIM) is proposed. After the IHS transform, the SSIM matrix between the I component of the original multispectral image and the panchromatic image is computed, and a new intensity component I for each region is determined from this matrix. Experimental results show that the algorithm enhances the spatial resolution of the imagery while preserving its spectral characteristics well.
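The structural-similarity measure driving the region-wise intensity selection can be sketched in its global (whole-image) form; the paper computes a windowed SSIM matrix, of which this is a simplification:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Global SSIM index from means, variances and covariance
    (a whole-image simplification of the windowed SSIM map)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Running this over sliding windows instead of the whole image yields the SSIM matrix from which the new intensity component is assembled region by region.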

16.
李桂  李腾 《图学学报》2020,41(4):539
Person video generation learns feature representations of human structure and motion and realizes a generative spatial mapping from those representations to video frames of a person. To address the facts that existing person video generation algorithms do not account for background changes and that their pose-estimation accuracy is low, a pose-guided scene-preserving person video generation algorithm (PSPVG) is proposed. First, suitable source and target videos are selected, and frames with the person's appearance segmented out are used in place of the source frames as network input. Then a GAN-based motion-transfer model replaces the person in the source video with the target person while keeping the motion consistent. Finally, Poisson image editing is applied to blend the person's appearance into the source background and remove abnormal boundary pixels, so that the person merges naturally into the source scene without altering the background or overall style of the frames. By substituting segmented foreground person images for the persons in the source frames, the algorithm reduces background interference, improves pose-estimation accuracy, naturally preserves the source scene during motion transfer, and generates person videos in which artistry and realism coexist harmoniously.

17.
Traditional multi-source color-transfer algorithms often segment the target image using Euclidean color distance; because color sequences are fuzzy and uncertain, such segmentation is highly prone to color distortion. To address this problem, a multi-source color-transfer algorithm based on active-contour exploration is proposed. First, to separate the subject of the target image from the background, an actively evolving method generates a virtual contour, and an energy-function evaluation mechanism forces the virtual contour to gradually approach the actual contour. Second, representations, segmentations, and conversions of the source and target images in color spaces such as RGB, Gray, and LMS are combined to perform multi-source color transfer in lαβ space. Finally, the target image obtained in lαβ space is inverse-transformed back to RGB for display. Experiments comparing single-source against multi-source color transfer, the choice of grayscale color channels, and the interference between color channels of the various color spaces verify the rationality and effectiveness of the proposed algorithm.

18.
A novel ghost-free exposure fusion method for generating an HDR image of a dynamic scene is presented in this paper. Given a sequence of input images with gradually increased exposures, and relying on the fact that luminance depends linearly on exposure time (Mertens et al., Comput Graph Forum 28(1):161–171, 2009), each input image is normalized so that its luminance is consistent with a reference image. Moving objects in the dynamic scene are then detected using a modified difference method for the subsequent exposure fusion. Experiments and comparisons show that our method has an advantage in deghosting when the reference image contains saturated regions, and that it generates high-quality results with natural textures. Furthermore, our method has greatly improved timing performance compared with previous reference-guided methods.
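The luminance-normalization step, which exploits the linear relation between luminance and exposure time, can be sketched as below (illustrative only; saturation handling and the difference-based motion detection are omitted):

```python
import numpy as np

def normalize_exposure(img, ref):
    """Linearly rescale an input frame so its mean luminance matches the
    reference frame (luminance assumed proportional to exposure time)."""
    scale = ref.mean() / (img.mean() + 1e-8)
    return np.clip(img * scale, 0.0, 1.0)
```

After normalization, frame differencing against the reference highlights true motion rather than brightness differences caused by the varying exposures.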

19.
Infrared and color visible-light image fusion algorithm based on the Curvelet transform   Cited by: 1 (self-citations: 0, others: 1)
A fusion algorithm for infrared and color visible-light images based on the second-generation Curvelet transform is proposed. First, the color visible-light image is converted to the IV1V2 color space to extract its intensity component. The Curvelet transform is then applied to this intensity component and to the infrared image: low-frequency coefficients are fused by weighted averaging after an intensity-remapping step, while high-frequency coefficients are fused with a choose-max rule. The inverse Curvelet transform of the fused coefficients yields the intensity component of the fused image, and the inverse color-space transform produces the final fused image. Comparative experiments show that, whereas in traditional fusion algorithms the stronger source image tends to drown out the other's background texture and detail, the proposed algorithm fuses the infrared spectral information more fully and produces fused images with better visual quality and better overall metrics.

20.
The demand for the exploration of ocean resources is increasing exponentially, and underwater image data plays a significant role in many research areas. However, the visual quality of underwater images is degraded by two main factors: backscattering and attenuation. Visual enhancement has therefore become an essential step in recovering the required data from such images, and many algorithms have been proposed over the past decade for improving their quality. This paper proposes a single-image enhancement technique that requires no external datasets. The degraded images undergo two main processes: color correction and image fusion. First, veiling light and transmission light are estimated to determine the color correction required; veiling light refers to unwanted light, whereas transmission light refers to the light required for color correction. These estimates are applied in the scene-recovery equation. The color-corrected image is then subjected to a fusion process in which two versions of the image are produced by white-balance and contrast-enhancement techniques; from these, three weight maps (luminance, saliency, and chromaticity) are derived and fused using a Laplacian pyramid. The results obtained are compared graphically with the input data using RGB histogram plots. Finally, image quality is measured and tabulated using underwater image quality measures.
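The normalized weight-map blend at the heart of the fusion stage can be sketched single-scale (the paper fuses with a Laplacian pyramid over three weight maps; this naive two-version blend is only illustrative):

```python
import numpy as np

def fuse_two(v1, v2, w1, w2):
    """Blend two image versions with per-pixel weights normalized to sum
    to 1 (single-scale stand-in for a Laplacian-pyramid blend)."""
    wsum = w1 + w2 + 1e-8
    return v1 * (w1 / wsum)[..., None] + v2 * (w2 / wsum)[..., None]
```

In the full method, each weight map is itself the product of the luminance, saliency, and chromaticity cues, and the blend is carried out level by level inside the pyramid to avoid seams.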
