Similar Literature
18 similar documents found (search time: 203 ms)
1.
To address the real-time performance of traditional multi-exposure image fusion and the problem of ghost removal in dynamic scenes, a multi-exposure high-dynamic-range image reconstruction algorithm based on gray-level mapping function modeling is proposed. For a low dynamic range (LDR) image sequence of arbitrary size, only as many visually adapted S-shaped curves need to be fitted as there are gray levels, rather than as many as there are camera pixels; the images are then fused directly using a best-imaging-value criterion, which improves fusion efficiency and meets real-time requirements. For dynamic scenes, a gray-level mapping relation is designed to restore idealized multi-exposure images, moving-object regions are detected by frame differencing and processed for ghost removal, and fusion yields a high dynamic range image that reflects the true scene information and is free of ghosting artifacts.
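The ghost-removal step above rests on frame differencing between exposure-normalized images. A minimal sketch of that idea (the function name, threshold, and toy data are illustrative, not taken from the paper):

```python
import numpy as np

def detect_ghost_regions(ref, frame, threshold=0.1):
    """Flag pixels whose value differs from the reference frame.

    ref, frame: float arrays in [0, 1], assumed already mapped to a
    common exposure by the gray-level mapping upstream.  Returns a
    boolean mask of suspected moving-object (ghost) pixels.
    """
    diff = np.abs(ref.astype(np.float64) - frame.astype(np.float64))
    return diff > threshold

# Toy example: a static scene with one moved patch.
ref = np.zeros((8, 8))
frame = ref.copy()
frame[2:4, 2:4] = 0.8          # an object moved into this 2x2 region
mask = detect_ghost_regions(ref, frame, threshold=0.1)
```

Pixels flagged by the mask would then be excluded from (or specially handled in) the fusion.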

2.
The emergence of high dynamic range (HDR) imaging provides a practical way to overcome the limited dynamic range of existing digital images caused by the insufficient dynamic range of capture devices. Blur and ghosting artifacts introduced during HDR synthesis by camera shake or moving objects can be suppressed by deghosting fusion of the multi-exposure image sequence via block matching. For real scenes with complex motion, however, existing deghosting fusion methods still fall short in accuracy and efficiency. This paper therefore proposes an HDR deghosting fusion method that combines the camera response function with coherency sensitive hashing. Simulation results show that the method effectively reduces computational complexity, is robust, and improves HDR image quality while removing ghosting artifacts.

3.
A single-exposure HDR image generation algorithm based on detail-layer separation   Total citations: 1 (self: 1, other: 0)
张红英  朱恩弘  吴亚东 《自动化学报》2019,45(11):2159-2170
To address the lack of detail when generating a high dynamic range (HDR) image from a single low dynamic range (LDR) image, this paper proposes a single-exposure HDR generation algorithm based on detail-layer separation. Following a model of the human visual system, the algorithm first extracts the luminance and chrominance components of the LDR image, applies bilateral filtering to the gamma-corrected luminance to obtain its base layer, and then traverses the base layer and the luminance component to obtain the detail layer. Next, an inverse tone-mapping function is constructed and applied to the detail layer and the gamma-corrected luminance image to obtain their inverse tone-mapped versions. The inverse tone-mapped luminance is then fused with the compressed detail layer to produce a new luminance component. Finally, the chrominance component is fused with the new luminance component, and the fused image is denoised to yield the final HDR image. Experiments show that the algorithm recovers some hidden image detail, produces good results, runs efficiently, and is robust.
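The base/detail decomposition described above can be sketched as follows. The paper uses a bilateral filter; a box blur stands in here as a simple edge-unaware substitute, so the function names and the filter itself are illustrative assumptions:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur, used here as a stand-in for the bilateral
    filter in the paper (a bilateral filter would preserve edges)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def base_detail_split(lum):
    """Split a luminance channel into a smooth base layer and the
    residual detail layer, as in the decomposition described above."""
    base = box_blur(lum)
    detail = lum - base
    return base, detail

lum = np.random.default_rng(0).random((16, 16))
base, detail = base_detail_split(lum)
```

By construction the two layers reconstruct the input exactly, which is what lets the algorithm expand and recombine them independently.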

4.
Objective: Low-rank matrix recovery can extract aligned, linearly correlated low-rank images from a data matrix corrupted by sparse noise. Exploiting this property, a new multi-exposure high dynamic range (HDR) image fusion method based on low-rank matrix recovery is proposed to improve the noise resistance and deghosting performance of HDR fusion. Method: Using partial-sum singular values (PSSV) as the objective function, a general low-rank model for fusing a multi-exposure low dynamic range (LDR) image sequence into an HDR image is constructed. The exact augmented Lagrange multiplier method solves for the low-rank matrix of the input sequence, the solver is optimized with the alternating direction method of multipliers, and adaptive penalty factors are set for different singular values so that the optimal solution concentrates on the space of the largest singular values, yielding the aligned, noise-free complete illumination of the scene, i.e., the HDR image. Results: The solver converges well, outperforms robust principal component analysis (RPCA) and PSSV in noise resistance, and works even when few multi-exposure LDR images are available. Fusion experiments on the classic Memorial Church and Arch sequences show clear suppression of noise and ghosting and rich image detail; both peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) under perceptually uniform (PU) mapping exceed the compared methods. On the noise-free Memorial Church sequence, RPCA reaches 28.117 dB PSNR and 0.935 SSIM, PSSV reaches 30.557 dB and 0.959, and the proposed method reaches 32.550 dB and 0.968; with uniform noise added, RPCA reaches 28.115 dB and 0.935, PSSV reaches 30.579 dB and 0.959, and the proposed method reaches 32.562 dB and 0.967. Conclusion: By combining multi-exposure HDR fusion with low-rank optimization, the method obtains an HDR image with low reconstruction error from little data, effectively removes dynamic-scene ghosting and noise, improves fusion quality, is more robust, and suits applications that must record the true lighting variation of a scene.
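The augmented-Lagrangian solver sketched above repeatedly applies singular value thresholding, the proximal operator of the nuclear norm. A minimal illustration on toy data (plain uniform thresholding, not the paper's adaptive PSSV weighting):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink every singular value by tau
    and drop those that fall to zero, keeping only the dominant
    (low-rank) structure of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

rng = np.random.default_rng(1)
L = 2.0 * np.ones((20, 20))                      # rank-1 "aligned scene"
noisy = L + 0.01 * rng.standard_normal(L.shape)  # small dense noise
rec = svt(noisy, tau=1.0)                        # recovers the rank-1 part
```

The noise singular values fall below tau and are zeroed, so the recovered matrix is exactly low-rank and close to the clean scene.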

5.
Probabilistic-model-based tone mapping of high dynamic range images   Total citations: 3 (self: 0, other: 3)
A probabilistic model is proposed for tone reproduction of HDR (high dynamic range) images. Statistical models are built for the local tonal energy distribution and for the gradient-change constraint between the HDR and LDR (low dynamic range) images, and by solving for the maximum a posteriori (MAP) estimate, the whole tone-mapping process is cast as an energy-minimization problem. Experimental results show that the proposed probabilistic tone-mapping method produces LDR images with more visual information than previous methods and can be used in advanced image editing, display-device development, and related fields.

6.
Current methods for generating high dynamic range (HDR) images demand specialized capture equipment and are unsuited to dynamic scenes. To address this, a method for generating an HDR image from a single low dynamic range (LDR) image based on Retinex enhancement is proposed. First, the Retinex method maps the single LDR image into several images of different exposures; then the weight of each image is computed from four measures, including contrast and saturation; finally, pyramid decomposition fuses the images into an HDR image. Experiments on multiple images show that the HDR images generated by this method have clearer texture and better match human visual perception.
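The per-image weighting step can be illustrated with the classic well-exposedness measure, one common factor in exposure-fusion weights (the Gaussian form, sigma, and toy images are conventional choices for illustration, not taken from the paper):

```python
import numpy as np

def exposure_weights(imgs, sigma=0.2):
    """Well-exposedness weights: pixels near mid-gray (0.5) get high
    weight, so each fused pixel is drawn mostly from the exposure
    that renders it best."""
    w = np.stack([np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2))
                  for im in imgs])
    return w / w.sum(axis=0)            # normalize across exposures

dark = np.full((4, 4), 0.1)
mid = np.full((4, 4), 0.5)
bright = np.full((4, 4), 0.95)
w = exposure_weights([dark, mid, bright])
fused = (w * np.stack([dark, mid, bright])).sum(axis=0)
```

In the full method these weights would be combined with contrast and saturation measures and blended across pyramid levels rather than directly.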

7.
A luminance-controllable, detail-preserving tone-mapping method for high dynamic range images   Total citations: 2 (self: 0, other: 2)
High dynamic range (HDR) images usually need their dynamic range compressed for storage, transmission, and reproduction. This paper proposes a global tone-mapping method for HDR images with controllable luminance and detail preservation. The method clips and compensates the illumination histogram of the HDR image so that the tone-mapped low dynamic range (LDR) image retains the original detail; a probabilistic model estimates the brightness and standard deviation of the output LDR image, and the allocation of histogram luminance regions is adjusted so that the output brightness approaches a user-specified value; finally, tone mapping is performed by piecewise histogram equalization. Simulations show that the method compresses the dynamic range of HDR images sensibly, lets the output LDR brightness be user-controlled or adaptively selected, preserves image detail, and yields harmonious subjective visual contrast.
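Histogram clipping with equalization, the core mechanism above, can be sketched in a simplified global form (the bin count, clip ratio, and uniform redistribution are illustrative choices; the paper's piecewise, luminance-allocated version is more elaborate):

```python
import numpy as np

def clipped_equalize(lum, bins=256, clip=0.01):
    """Histogram equalization with clipping: counts above clip * N are
    truncated and the excess is redistributed uniformly, limiting
    contrast over-stretching; the resulting CDF remaps luminance."""
    hist, _ = np.histogram(lum, bins=bins, range=(0.0, 1.0))
    hist = np.minimum(hist, clip * lum.size)
    hist = hist + (lum.size - hist.sum()) / bins   # redistribute excess
    cdf = np.cumsum(hist) / hist.sum()
    idx = np.clip((lum * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]

lum = np.linspace(0.0, 1.0, 100).reshape(10, 10)
out = clipped_equalize(lum)
```

Because the redistributed histogram stays non-negative, the mapping is monotone, which preserves the ordering of luminances.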

8.
Objective: Multi-exposure fusion (MEF) synthesizes a set of low dynamic range (LDR) images taken at different exposures into an image with the visual appearance of a high dynamic range (HDR) image. Traditional MEF suffers, to some extent, from loss of detail, unclear boundaries, and partial color distortion. To make full use of the information in the input images, a dual-scale MEF method based on image decomposition and a color prior is proposed. Method: Fast guided filtering decomposes each image, and the separated detail layer is enhanced to retain more detail and reduce halo artifacts in the fused image. Following the color prior, the difference between brightness and saturation judges the exposure level of each image, and this difference is combined with image contrast to compute the fusion weights, preserving the brightness and contrast of the fused result. Guided filtering then refines the weight maps, suppressing noise and increasing inter-pixel correlation to improve visual quality. Results: In experiments on 24 multi-exposure sequences, the method subjectively improves overall contrast and color saturation while enhancing detail in both over- and under-exposed regions. Objectively, under two quality-assessment algorithms for fused multi-exposure sequences, fusion performance improves, with mean scores of 0.982 and 0.970; compared with the other algorithms, both structural-similarity-based measures rise, by 1.2% and 1.1% on average. Conclusion: Subjective and objective evaluation confirm that the method handles contrast, color saturation, and detail preservation markedly well and has good fusion performance.
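The color prior above, judging exposure from the gap between brightness and saturation, can be sketched with HSV-style definitions (the exact measure in the paper may differ; the toy pixels are illustrative):

```python
import numpy as np

def exposure_measure(rgb):
    """Gap between brightness (HSV value) and saturation.  Well-exposed
    pixels keep the two close, while blown-out highlights are bright
    but desaturated, giving a large gap."""
    v = rgb.max(axis=-1)
    s = np.where(v > 0, (v - rgb.min(axis=-1)) / np.maximum(v, 1e-8), 0.0)
    return np.abs(v - s)

well = np.array([[[0.6, 0.4, 0.2]]])   # bright-ish and saturated
blown = np.array([[[1.0, 1.0, 1.0]]])  # pure white: bright, zero saturation
```

A small measure marks a well-exposed pixel; in the full method this is combined with contrast to form the fusion weight.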

9.
To improve the quality of HDR (high dynamic range) images, a new denoising method for HDR input images is proposed. First, pixels at the same spatial position across the image set are grouped; then the camera's ITF (intensity transfer function) converts them back to luminance values; next, an L-t (luminance vs. exposure time) line is fitted by solving a least-squares problem and used to correct each luminance value; finally, the ITF converts the corrected luminances back to pixel values. The denoised image set is then used to synthesize the HDR image. Experiments show that the algorithm is computationally cheap, removes salt-and-pepper noise effectively, and clearly suppresses Gaussian noise, outperforming the A.&E. (Ahmet & Erit) algorithm, which is its main distinction.
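Since luminance is proportional to exposure time for a fixed scene radiance, the L-t fit above is a least-squares line through the origin. A sketch (function names and toy numbers are illustrative):

```python
import numpy as np

def fit_radiance(luminances, times):
    """Fit the L = E * t line (luminance proportional to exposure time)
    by least squares through the origin; return the estimated scene
    radiance E and the corrected, on-line luminances."""
    t = np.asarray(times, dtype=np.float64)
    L = np.asarray(luminances, dtype=np.float64)
    E = (t @ L) / (t @ t)          # least-squares slope through origin
    return E, E * t

times = [1/60, 1/30, 1/15, 1/8]
clean = [0.5 * t for t in times]   # true radiance E = 0.5
noisy = list(clean)
noisy[2] += 0.01                   # a salt-type outlier in one exposure
E, corrected = fit_radiance(noisy, times)
```

The corrected value for the outlier frame is pulled back toward the fitted line, which is how the method suppresses impulsive noise.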

10.
Color-appearance-matched reproduction of high dynamic range images   Total citations: 1 (self: 0, other: 1)
To address the mismatch in color appearance between a reproduced high dynamic range (HDR) image and its source caused by changes in viewing conditions, a color-appearance-matched HDR reproduction algorithm is proposed. First, the chrominance and luminance of the HDR image are separated; by estimating the viewing conditions of the source scene, a color appearance model (CAM) reproduces the HDR image for the display environment while preserving the chromatic appearance of the source scene. For the luminance image, adaptive partitioning based on histogram features builds a piecewise-linear tone-adjustment function that dynamically allocates the display luminance range to the partitions, enhancing perceived contrast, and bilateral filtering extracts image detail for detail compensation. Finally, the processed chromatic and achromatic information are recombined, and chromatic correction compensates for the appearance shift caused by luminance compression, yielding a low dynamic range (LDR) reproduction whose color appearance matches the source HDR image. Experiments show that the new algorithm outperforms traditional algorithms in color-appearance preservation, dynamic range compression, and detail rendering.

11.

An image captured by a low dynamic range (LDR) camera fails to capture the entire exposure level of a scene, and instead covers only a certain range of exposures. To cover the entire exposure level in a single image, bracketed-exposure LDR images are combined. The differing exposure ranges of the images cause information loss in certain regions. These regions need to be addressed, and with this motivation a novel layer-based fusion methodology is proposed to generate a high dynamic range image. High- and low-frequency layers are formed by dividing each image according to pixel intensity variations. The regions are identified from the information-loss sections created in the differently exposed images. High-frequency layers are combined using region-based fusion with Dense SIFT as the activity-level measure. Low-frequency layers are combined using a weighted sum. Finally, the combined high- and low-frequency layers are merged pixel by pixel to synthesize the fused image. Objective analysis compares the quality of the proposed method with the state of the art; the measures indicate the superiority of the proposed method.


12.
High dynamic range image fusion based on the camera response curve   Total citations: 1 (self: 0, other: 1)
Because it reproduces the imaged target and its surroundings more faithfully, high dynamic range imaging is of significant research value in fields such as military and aerospace applications. First, a Canon 1DC captures a sequence of images of the same scene at different exposures, and the camera response curve of each color channel is derived from them. A mapping between image gray values and irradiance then yields a high dynamic range irradiance image. The HDR image is next compressed with a bilateral-filtering tone-mapping algorithm, and finally color-corrected with a specular-reflection white-balance algorithm, producing the HDR image of the captured scene. The algorithm is validated on captured image sets with different numbers of frames; analysis of the results shows that at least 4 frames are required for this algorithm to achieve HDR image fusion.
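Once per-channel response curves are recovered, each exposure maps to a common radiance estimate that can be averaged. A Debevec-Malik-style merge sketch (the toy linear response `g` and the hat-shaped weighting are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def merge_radiance(images, times, g):
    """Merge 8-bit exposures into a log-radiance map given a recovered
    response curve g (g[z] = log exposure for pixel value z), weighting
    mid-range pixel values most heavily."""
    w = np.minimum(np.arange(256), 255 - np.arange(256)).astype(np.float64)
    num = np.zeros(images[0].shape)
    den = np.zeros(images[0].shape)
    for im, t in zip(images, times):
        num += w[im] * (g[im] - np.log(t))   # per-pixel log radiance
        den += w[im]
    return num / np.maximum(den, 1e-8)

# Toy linear sensor: pixel value proportional to exposure.
g = np.log(np.maximum(np.arange(256), 1) / 255.0)
imgs = [np.full((2, 2), 64, dtype=np.uint8),
        np.full((2, 2), 128, dtype=np.uint8)]
lnE = merge_radiance(imgs, [0.5, 1.0], g)    # consistent radiance ~0.5
```

Both exposures agree on the underlying radiance, so the weighted average is stable; saturated or dark pixels would simply receive near-zero weight.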

13.
This paper describes a cluster-based method for combining differently exposed images in order to increase their dynamic range. Initially an image is decomposed into a set of arbitrarily shaped regions. For each region we compute a utility function based on the amount of presented information and an entropy. This function is used to select the most appropriate exposure for each region. After the exposures are selected, bilateral filtering is applied to make the inter-region transitions smooth. As a result we obtain weighting coefficients for each exposure and pixel. An output image is combined from clusters of input images using these weights: each pixel of the output image is calculated as a weighted sum of exposures. The proposed method recovers details from overexposed and underexposed parts of the image without producing additional noise. Our experiments show the effectiveness of the algorithm for high dynamic range scenes. It requires no information about shutter speed or camera parameters, and shows robust results even when the exposure difference between input images is two stops or higher.
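The entropy term of the per-region utility function might look like the following (the bin count and toy regions are illustrative; the paper's full utility also weighs the amount of presented information):

```python
import numpy as np

def region_entropy(pixels, bins=32):
    """Shannon entropy of a region's intensity histogram: a score for
    how much detail an exposure shows in that region."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = np.full(100, 0.5)                        # saturated/flat region
textured = np.random.default_rng(0).random(100) # detailed region
```

A flat (clipped) region scores zero entropy, so the exposure that renders the region with texture would be selected.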

14.
One of the biggest problems in applying stereo vision techniques in field robotics is how to acquire 3D terrain maps under extreme light conditions. Through multiple exposures, the dynamic range of images can be increased. In this paper, instead of using existing lighting-enhancement methods such as exposure fusion to increase the texture of the 2D images, we propose that the matching costs of the images grabbed with multiple exposures be directly summed with weights. Compared with previous methods such as exposure fusion, the proposed method does not need to fuse the 2D images captured with multiple exposures, and for each pixel of the matching image, the local information in its local window is better retained. Since the camera may move between exposures, the images captured with multiple exposures are aligned to the image acquired with auto exposure. To evaluate the performance of the proposed method, two different stereo matching algorithms were used: a local window-based method and a semi-global method. Experiments in the laboratory and outdoors, with a stereo vision camera fixed on a tripod and held in the hand, verified that the proposed method consistently yields more valid points and allows the 3D terrain model to be built more accurately, especially when the local window-based method is used.
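The core idea above, summing per-exposure matching costs rather than fusing the images first, reduces to a weighted sum of cost volumes. A toy sketch with made-up cost values:

```python
import numpy as np

def fused_cost(costs, weights):
    """Combine per-exposure stereo matching cost volumes by weighted
    sum, so evidence from every exposure votes on each disparity."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    return np.tensordot(w, np.stack(costs), axes=1)

c1 = np.array([[3.0, 1.0, 2.0]])   # cost over 3 disparities, exposure 1
c2 = np.array([[2.0, 0.5, 4.0]])   # same pixel, exposure 2
f = fused_cost([c1, c2], [1.0, 1.0])
best_disparity = int(f.argmin())   # winner-take-all over the fused cost
```

Either exposure alone might be unreliable in an over- or under-exposed window; the summed cost keeps the disparity that both exposures support.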

15.
Registration of differently exposed images of the same scene and HDR image synthesis   Total citations: 1 (self: 0, other: 1)
An algorithm is proposed for registering multiple photographs of the same scene taken at different exposures, recovering the camera's illumination response curve, and synthesizing a high dynamic range image. First, the differently exposed images in the sequence are binarized at their medians, and a multi-scale pyramid of binary images is constructed and compared level by level to achieve translational and rotational registration across the images. Samples taken from the registered sequence are used to fit the camera's illumination response curve by least squares, giving the mapping between pixel values and exposure, after which the differently exposed photographs of the scene are fused into a single high dynamic range image. Experimental results show that the synthesized high dynamic range images are satisfactory.
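The median-binarization and pyramid comparison above is in the spirit of median-threshold bitmap alignment. A single-level sketch with exhaustive shift search (the real method works coarse-to-fine and also handles rotation; names and data here are illustrative):

```python
import numpy as np

def median_bitmap(img):
    """Median-threshold bitmap: robust to exposure differences, since a
    monotonic response curve moves the median with the pixel values."""
    return img > np.median(img)

def best_shift(ref, mov, radius=2):
    """Exhaustively search integer shifts of mov that maximise bitmap
    agreement with ref (one pyramid level of the registration)."""
    rb, best, best_err = median_bitmap(ref), (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            mb = median_bitmap(np.roll(np.roll(mov, dy, 0), dx, 1))
            err = np.count_nonzero(rb ^ mb)   # disagreeing pixels
            if best_err is None or err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
mov = np.roll(ref, (1, -1), axis=(0, 1)) * 0.5   # shifted, darker exposure
shift = best_shift(ref, mov)
```

Halving the exposure halves the median too, so the bitmaps match exactly once the shift is undone, which is why the search is insensitive to exposure.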

16.
We present an approach that significantly enhances the capabilities of traditional image mosaicking. The key observation is that as a camera moves, it senses each scene point multiple times. We rigidly attach to the camera an optical filter with spatially varying properties, so that multiple measurements are obtained for each scene point under different optical settings. Fusing the data captured in the multiple images yields an image mosaic that includes additional information about the scene. We refer to this approach as generalized mosaicing. In this paper we show that this approach can significantly extend the optical dynamic range of any given imaging system by exploiting vignetting effects. We derive the optimal vignetting configuration and implement it using an external filter with spatially varying transmittance. We also derive efficient scene sampling conditions as well as ways to self calibrate the vignetting effects. Maximum likelihood is used for image registration and fusion. In an experiment we mounted such a filter on a standard 8-bit video camera, to obtain an image panorama with dynamic range comparable to imaging with a 16-bit camera.

17.
To fuse LiDAR data with camera images when detecting the target vehicle that an unmanned ground vehicle (UGV) autonomously follows, a joint calibration method for the LiDAR and the camera based on a trapezoidal checkerboard is proposed. First, the LiDAR scan lines on the trapezoidal calibration board give the LiDAR's mounting pitch angle and height; then the black-and-white checkerboard on the board calibrates the camera's extrinsic parameters relative to the vehicle body; next, the correspondence between LiDAR data points and image pixel coordinates is used to jointly calibrate the two sensors; finally, combining both calibration results, the LiDAR data and the camera image are fused at the pixel level. Placing the trapezoidal board in front of the vehicle and collecting a single image and LiDAR scan suffices for the whole calibration, completing the calibration of both sensor types. Experimental results show a mean position deviation of 3.5691 pixels, equivalent to 13 μm, i.e., high calibration accuracy. The fused LiDAR and visual data show that the proposed method achieves accurate spatial alignment of the LiDAR and the camera, fuses well, and is strongly robust for moving objects.
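Pixel-level fusion requires projecting each LiDAR point through the calibrated extrinsics and intrinsics. A pinhole-model sketch (the K, R, t values and point coordinates are made up for illustration, not calibration results from the paper):

```python
import numpy as np

def project_lidar(points, K, R, t):
    """Project Nx3 LiDAR points into the image: transform into the
    camera frame with the extrinsics (R, t), then apply the intrinsic
    matrix K and divide by depth."""
    cam = (R @ points.T).T + t          # LiDAR frame -> camera frame
    uv_h = (K @ cam.T).T                # homogeneous pixel coordinates
    return uv_h[:, :2] / uv_h[:, 2:3]   # perspective division

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)           # identity extrinsics for the toy case
pts = np.array([[0.0, 0.0, 2.0],        # point on the optical axis
                [0.5, 0.0, 2.0]])       # point 0.5 m to the right
uv = project_lidar(pts, K, R, t)
```

Each projected (u, v) then indexes the camera image, attaching color (or detection labels) to the LiDAR point.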

18.
Current HDR acquisition techniques are based on either (i) fusing multi-bracketed, low dynamic range (LDR) images, (ii) modifying existing hardware to capture different exposures simultaneously with multiple sensors, or (iii) reconstructing a single image with spatially varying pixel exposures. In this paper, we propose a novel algorithm to recover high-quality HDR images from a single, coded exposure. The proposed reconstruction method builds on recently introduced ideas of convolutional sparse coding (CSC); this paper demonstrates how to make CSC practical for HDR imaging. We demonstrate that the proposed algorithm achieves higher-quality reconstructions than alternative methods, evaluate optical coding schemes, analyze algorithmic parameters, and build a prototype coded HDR camera that demonstrates the utility of convolutional sparse HDR coding on a custom hardware platform.
