Similar Documents (20 results)
1.
High dynamic range imaging (HDRI) is a high-quality image acquisition technique that can better reproduce the characteristics of real human vision and can be obtained from one (or several) captured low dynamic range (LDR) images. However, a single input LDR image provides only partial information about the scene, and traditional HDRI methods that require multiple captured images as input can introduce field-of-view errors, which makes them difficult to apply to emerging image acquisition systems. Here, we propose a novel HDRI method that reconstructs an HDR image from only a pair of short- and long-exposure images, based on artificial remapping and multi-scale exposure fusion. Firstly, we introduce a simulated exposure model called artificial remapping to synthesize a multi-exposure image sequence from the input LDR image pair. Then, weighting maps of the sequence are obtained from three evaluation factors: contrast, saturation, and an improved exposedness measure. Finally, we use a pyramid-based multi-scale exposure fusion framework to integrate the sequence into an enhanced HDR image. Comparative experiments on several source images demonstrate that the method achieves better performance than competing methods in both qualitative and quantitative evaluation. The proposed method is simple yet effective and easy to popularize, so it can potentially be applied to emerging image acquisition systems in which two images are captured simultaneously by two image sensors, or by one image sensor with a pair of short- and long-exposure settings.
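To make the weighting step concrete, the following is a minimal sketch of Mertens-style fusion weights (contrast from a Laplacian response, saturation from the per-pixel channel spread, well-exposedness from a Gaussian around mid-gray); the paper's improved exposedness term is not reproduced here, and the exponents, sigma, and the naive weighted average are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def exposure_weights(img, wc=1.0, ws=1.0, we=1.0, sigma=0.2):
    """Per-pixel fusion weight for one exposure.

    img: float image in [0, 1] with shape (H, W, 3).
    Uses the classic contrast/saturation/exposedness factors; the paper's
    improved exposedness measure is not reproduced here.
    """
    gray = img.mean(axis=2)
    contrast = np.abs(laplace(gray))                      # local contrast
    saturation = img.std(axis=2)                          # channel spread
    exposedness = np.prod(np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)), axis=2)
    return (contrast ** wc) * (saturation ** ws) * (exposedness ** we) + 1e-12

def fuse(sequence):
    """Naive weighted average of an exposure sequence; the multi-scale
    framework instead blends Laplacian pyramids of the images with
    Gaussian pyramids of these weights to avoid seams."""
    weights = np.stack([exposure_weights(im) for im in sequence])
    weights /= weights.sum(axis=0, keepdims=True)
    return np.sum(weights[..., None] * np.stack(sequence), axis=0)
```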

2.
This paper considers the problem of high-dynamic-range (HDR) image capture using low-dynamic-range (LDR) cameras. We present three different minimal-bracketing algorithms for computing minimum-sized exposure sets that bracket HDR scenes. Each algorithm is applicable to a different HDR-imaging scenario, depending on the amount of target-scene-irradiance information and real-time image processing available at the time of image acquisition. We prove the optimality of each algorithm with respect to its ability to obtain a theoretically minimum-size bracketing set of exposures. We also provide closed-form expressions for computing minimal-bracketing exposure sets for two common types of HDR-imaging systems, those with geometrically varying and arithmetically varying exposure settings. We experimentally demonstrate the advantages of the proposed methods by capturing and processing multiple HDR scenes using minimal-bracketing and 1-stop bracketing methods. The results show that minimal bracketing can produce high-quality HDR images while requiring only one third as many LDR images as 1-stop bracketing. We also perform a detailed SNR analysis that quantifies the tradeoff between signal-to-noise ratio and bracketing-set size.
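As a rough illustration of why minimal bracketing needs far fewer shots than 1-stop bracketing, the sketch below builds a geometric exposure ladder that just covers a given scene dynamic range; the coverage rule (spacing exposures by the usable per-shot dynamic range, with no required overlap) is a simplifying assumption and not the paper's optimality proof or closed-form expressions.

```python
import math

def minimal_geometric_bracket(scene_stops, sensor_stops, base_exposure=1.0):
    """Smallest geometric exposure set whose per-shot ranges tile the scene.

    scene_stops : dynamic range of the scene, in stops (log2 units).
    sensor_stops: usable dynamic range of one LDR capture, in stops.
    Returns exposure times spaced by sensor_stops (adjacent ranges abut).
    """
    n = max(1, math.ceil(scene_stops / sensor_stops))
    return [base_exposure * (2 ** (i * sensor_stops)) for i in range(n)]

# Example: a 20-stop scene and a 7-stop sensor need only 3 exposures,
# versus roughly 20 exposures with naive 1-stop bracketing.
print(minimal_geometric_bracket(20, 7))
```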

3.
High dynamic range (HDR) image generation and display technologies are becoming increasingly popular in various applications. A standard and commonly used approach to obtaining an HDR image is the multiple-exposure fusion technique, which combines multiple images of the same scene taken with varying exposure times. However, if the scene is not static during the sequence acquisition, moving objects manifest themselves as ghosting artefacts in the final HDR image. Detecting and removing ghosting artefacts is therefore an important issue for automatically generating HDR images of dynamic scenes. The aim of this paper is to provide an up-to-date review of recently proposed methods for ghost-free HDR image generation. Moreover, a classification and comparison of the reviewed methods is reported to serve as a useful guide for future research on this topic.

4.
To overcome the dynamic range limitations of images taken with regular consumer cameras, several methods exist for creating high dynamic range (HDR) content. Current low-budget solutions apply temporal exposure bracketing, which is not applicable to dynamic scenes or HDR video. In this article, a framework is presented that utilizes two cameras to realize spatial exposure bracketing, in which the different exposures are distributed among the cameras. Such a setup allows HDR images of dynamic scenes and HDR video due to its frame-by-frame operating principle, but faces challenges in the stereo matching and HDR generation steps. The modules in this framework are therefore selected to alleviate these challenges and to properly handle under- and over-saturated regions. In comparison to existing work, the camera response calculation is shifted to an offline process, and masking with a saturation map before the actual HDR generation is proposed. The first aspect enables the use of more complex camera setups with different sensors and provides robust camera responses. The second ensures that only necessary pixel values are taken from the additional camera view and thus reduces errors in the final HDR image. The resulting HDR images are evaluated with the quality metric HDR-VDP-2, and numerical results are given for the first time. For the Middlebury test images, an average gain of 52 points on a 0-100 mean opinion score is achieved in comparison to temporal exposure bracketing with camera motion. Finally, HDR video results are provided.
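The saturation-map masking step can be pictured as follows; this is a minimal sketch under assumed thresholds (intensities in [0, 1], cutoffs of 0.02 and 0.98), not the framework's actual module: radiance from the second, differently exposed view is used only where the reference view is under- or over-saturated.

```python
import numpy as np

def saturation_mask(ref, low=0.02, high=0.98):
    """True where the reference exposure is unusable (under-/over-saturated).
    ref: float image in [0, 1]; thresholds are illustrative assumptions."""
    return (ref <= low) | (ref >= high)

def masked_merge(ref_radiance, aux_radiance, mask):
    """Take radiance from the auxiliary (stereo-warped) view only where the
    reference view is saturated, so stereo-matching errors are confined to
    pixels that actually need the second exposure."""
    out = ref_radiance.copy()
    out[mask] = aux_radiance[mask]
    return out
```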

5.
贺理  陈果  郭宏  金伟其 《红外技术》2020,42(4):340-347
High dynamic range imaging can comprehensively and effectively capture scene information and helps produce high-quality images of high dynamic range scenes. However, the multi-exposure fusion methods commonly used with a single camera are prone to ghosting artifacts in dynamic scenes; systems that expose multiple sensors simultaneously are complex and expensive; and expansion methods based on a single low dynamic range image easily lose detail in under- or over-exposed regions and are mostly intended for well-lit conditions. Targeting the imaging of low-illumination dynamic scenes, this paper studies a high dynamic range image fusion method based on a dual-channel low-light CMOS camera. The dual-channel CMOS camera captures two differently exposed images of a low-illumination dynamic scene; the dynamic range of each image is expanded according to a cumulative-histogram expansion rule, and the expanded image sequence is then fused with a pixel-level fusion method. Experiments show that the dynamic-range expansion and fusion method meets the application requirements for acquiring high dynamic range images of low-illumination dynamic scenes and achieves better imaging quality.
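The cumulative-histogram expansion rule behaves much like histogram equalization remapped onto a wider output range; the sketch below is a minimal illustration under that reading (an 8-bit input stretched to an assumed 16-bit output), not the paper's exact rule.

```python
import numpy as np

def cumulative_histogram_expand(img8, out_max=65535):
    """Map an 8-bit image onto a wider range via its cumulative histogram.
    img8: uint8 array. out_max: top of the expanded range (assumption)."""
    hist = np.bincount(img8.ravel(), minlength=256).astype(np.float64)
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalized CDF
    lut = np.round(cdf * out_max).astype(np.uint16)     # 256-entry lookup table
    return lut[img8]
```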

6.
High dynamic range (HDR) images greatly improve visual content quality, but pose challenges in processing, acquisition, and display. Images captured in real-world scenarios with multiple nonlinear cameras, extremely short unknown exposure times, and a shared light source present the additional challenges of incremental baseline and angle deviation among the cameras. Disparity maps in such conditions are not reliable; therefore, we propose a method that relies on the accurate detection and matching of feature points across adjacent viewpoints. We determine the exposure gain from the matched feature points in the involved views and design an image restoration method to restore multiview low dynamic range (MVLDR) images for each viewpoint. Finally, the fusion of these restored MVLDR images produces high-quality images for each viewpoint without capturing a series of bracketed exposures. Extensive experiments are conducted in controlled and uncontrolled conditions, and the results show that the proposed method is competitive with the state of the art.
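One way to picture the exposure-gain step is to estimate a per-view intensity ratio from matched feature points; the sketch below uses ORB matching and a median intensity ratio, which is an illustrative simplification (the feature detector, matcher, and robust estimator are all assumptions, not the paper's procedure).

```python
import cv2
import numpy as np

def exposure_gain(gray_a, gray_b, max_matches=200):
    """Estimate the multiplicative exposure gain between two uint8 grayscale
    views of the same scene from intensities at matched ORB keypoints."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    ratios = []
    for m in matches[:max_matches]:
        xa, ya = map(int, kp_a[m.queryIdx].pt)
        xb, yb = map(int, kp_b[m.trainIdx].pt)
        ia, ib = float(gray_a[ya, xa]), float(gray_b[yb, xb])
        if ia > 5 and ib > 5:          # skip near-black samples
            ratios.append(ib / ia)
    return float(np.median(ratios))    # median is robust to mismatches
```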

7.
This paper proposes a new method for the statistical classification of multisource data. The method is suited to land-use classification based on the fusion of remotely sensed images of the same scene captured at different dates from multiple sources. It incorporates a priori information about the likelihood of changes between the acquisitions of the different images to be fused. A framework for the fusion of remotely sensed data based on a Bayesian formulation is presented. First, a simple fusion model is given; the basic model is then extended to take the temporal dimension into account when the different data sources are acquired at different dates. The performance of the model is evaluated by fusing Landsat TM images and ERS-1 SAR images for land-use classification. The fusion model gives significant improvements in classification error rate compared to conventional single-source classifiers.

8.
Unlike high dynamic range (HDR) imaging, exposure fusion generates a tone-mapped-like HDR image directly by fusing a series of bracketed images. Since it frees users from the tedious radiometric calibration and tone mapping steps, the technique has become increasingly popular and is now a basic tool in many graphics packages. The main drawback of exposure fusion is its limitation to static scenes: any object movement in the target scene incurs severe ghosting artifacts in the fused result. In this paper, we intend to overcome this limitation and make exposure fusion applicable to dynamic scenes. A new quality assessment system is developed in which both temporal consistency and spatial consistency are introduced to account for ghosting artifacts. Experimental results on various dynamic scenes demonstrate the effectiveness of the proposed method.

9.
This paper evaluates the suitability of High Dynamic Range (HDR) imaging techniques for Feature Point (FP) detection under demanding lighting conditions. The FPs are evaluated in HDR, tone-mapped HDR, and traditional Low Dynamic Range (LDR) images. Eleven global and local tone mapping operators are evaluated, and six widely used FP detectors are used in the experiments (Harris, Shi–Tomasi, DoG, Fast Hessian, FAST, and BRISK). The distribution and repeatability rate of FPs are studied under changes of camera viewpoint, camera distance, and scene lighting. The experiments show that current FP detectors cannot cope well with HDR images. The best contemporary solution is therefore to tone map HDR images with a local tone mapping operator as a pre-processing step.

10.
Multi-exposure image fusion combines images of the same scene captured at different exposure levels and is currently the mainstream approach to imaging high dynamic range scenes. To obtain more natural fusion results, this paper proposes a multi-exposure image fusion network based on deep guidance and self-learning (MEF-Net). The network fuses an arbitrary number of differently exposed images end to end and outputs the optimal fusion result without supervision. For the loss function, an intensity-fidelity constraint term and a weighted multi-exposure fusion structural similarity (MEF-SSIM) term are introduced to improve fusion quality. In addition, for the fusion of two extremely exposed images, a self-learning strategy fine-tunes and optimizes the parameters of a pre-trained model to suppress halo artifacts. Experiments on a large set of test data show that the proposed algorithm outperforms existing mainstream algorithms in both quantitative metrics and visual fusion quality.

11.
A general model for multisource classification of remotely sensed data based on Markov random fields (MRF) is proposed. A specific model for the fusion of optical images, synthetic aperture radar (SAR) images, and GIS (geographic information systems) ground cover data is presented in detail and tested. The MRF model exploits spatial class dependencies (spatial context) between neighboring pixels in an image, and temporal class dependencies between different images of the same scene. By including the temporal aspect of the data, the proposed model is suitable for detecting class changes between the acquisition dates of different images. The performance of the proposed model is investigated by fusing Landsat TM images, multitemporal ERS-1 SAR images, and GIS ground-cover maps for land-use classification, and for agricultural crop classification based on Landsat TM images, multipolarization SAR images, and GIS crop field border maps. The performance of the MRF model is compared to a simpler reference fusion model. On average, the MRF model results in slightly higher (2%) classification accuracy when the same data are used as input to the two models. When GIS field border data are included in the MRF model, its classification accuracy improves by 8%. For change detection in agricultural areas, 75% of the actual class changes are detected by the MRF model, compared to 62% for the reference model. Based on the well-founded theoretical basis of Markov random field models for classification tasks and the encouraging experimental results of this small-scale study, the authors conclude that the proposed MRF model is useful for the classification of multisource satellite imagery.
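As a sketch of what such a model optimizes, a generic multisource MRF energy with spatial and temporal pairwise terms can be written as below; this is a textbook-style form given only for orientation, with the potentials and the weights $\beta$, $\gamma$ as assumed placeholders rather than the paper's exact specification. Classification then amounts to minimizing $U$ over the label field $\mathbf{x}$.

```latex
U(\mathbf{x} \mid \mathbf{y}) \;=\;
\underbrace{\sum_{i} \sum_{s} -\log p\!\left(y_i^{(s)} \mid x_i\right)}_{\text{data terms, one per source } s}
\;+\; \beta \underbrace{\sum_{\langle i,j \rangle} \delta\!\left(x_i \neq x_j\right)}_{\text{spatial smoothness}}
\;+\; \gamma \underbrace{\sum_{i} \delta\!\left(x_i \neq x_i^{\,t-1}\right)}_{\text{temporal consistency}}
```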

12.
侯幸林  罗海波  周培培 《红外与激光工程》2017,46(7):726001-0726001(7)
When generating a high dynamic range image by fusing multiple low dynamic range images, conventional methods use simple strategies to choose the exposure times of the captured images, so the acquired images contain considerable redundant information, which severely degrades fusion efficiency. This paper proposes a multi-exposure control method based on a maximum-local-entropy criterion. The relationship between the image entropy of a low dynamic range scene and the exposure time is analyzed, showing that the entropy first increases and then decreases as the exposure time grows, reaching a maximum at some exposure time. For a high dynamic range scene, the exposure time range of the scene is first determined from the approximately linear relationship between the mean gray-level response and the exposure time; the scene is then divided into several low dynamic range regions according to the image histogram; finally, with maximum entropy as the optimization objective, a one-dimensional search algorithm finds the optimal exposure time of each low dynamic range region until every region has been assigned one. The method links the local entropy of the scene to the exposure time and optimizes the exposure time for each region individually, which effectively avoids the drawbacks of conventional exposure control. Experiments show that fusing images acquired with this method produces good results.
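The local-entropy criterion can be illustrated with the sketch below: it computes the gray-level entropy of one region and does a simple grid search over candidate exposure times. The `capture(t)` callable that returns a frame for exposure time t is a hypothetical stand-in for the camera interface, and the grid search is a simplification of the paper's one-dimensional search.

```python
import numpy as np

def gray_entropy(region):
    """Shannon entropy of the gray-level histogram of an 8-bit region."""
    hist = np.bincount(region.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_exposure(capture, region_mask, t_min, t_max, steps=16):
    """Pick the exposure time that maximizes entropy inside one region.
    capture(t): hypothetical camera call returning a uint8 frame.
    region_mask: boolean mask of one low-dynamic-range region."""
    candidates = np.linspace(t_min, t_max, steps)
    scores = [gray_entropy(capture(t)[region_mask]) for t in candidates]
    return float(candidates[int(np.argmax(scores))])
```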

13.
The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image from a set of images acquired from the same scene and denoted as 'low-resolution' images, to overcome the limitations and/or ill-posed conditions of the image acquisition process and to facilitate better content visualization and scene recognition. In this paper, we provide a comprehensive review of SR image and video reconstruction methods developed in the literature and highlight the future research challenges. SR image approaches reconstruct a single higher-resolution image from a set of given lower-resolution images, while SR video approaches reconstruct a higher-resolution image sequence from a group of adjacent lower-resolution frames. Furthermore, several SR applications are discussed to offer insight into future SR research directions. Specifically, SR computation for multi-view images and SR video computation in the temporal domain are discussed.

14.
范璐璐  安成斌 《激光与红外》2012,42(9):1076-1079
This paper describes an improved method for fusing images from different sensors. It covers the four main steps of registration, image decomposition, fusion, and image reconstruction, and reports simulations; the images obtained with different methods are finally evaluated with objective quality metrics. Several new processing schemes are proposed for the fusion rules. Experimental results show that the fused image indeed captures the distinct characteristics of the two source images and achieves good results. The algorithm is also optimized for engineering applications.

15.
The 16-day revisit cycle of Landsat has long limited its use for studying global biophysical processes, which evolve rapidly during the growing season. In cloudy areas of the Earth, the problem is compounded, and researchers are fortunate to get two to three clear images per year. At the same time, the coarse resolution of sensors such as the Advanced Very High Resolution Radiometer and Moderate Resolution Imaging Spectroradiometer (MODIS) limits the sensors' ability to quantify biophysical processes in heterogeneous landscapes. In this paper, the authors present a new spatial and temporal adaptive reflectance fusion model (STARFM) algorithm to blend Landsat and MODIS surface reflectance. Using this approach, high-frequency temporal information from MODIS and high-resolution spatial information from Landsat can be blended for applications that require high resolution in both time and space. The MODIS daily 500-m surface reflectance and the 16-day repeat cycle Landsat Enhanced Thematic Mapper Plus (ETM+) 30-m surface reflectance are used to produce a synthetic "daily" surface reflectance product at ETM+ spatial resolution. The authors present results both with simulated (model) data and actual Landsat/MODIS acquisitions. In general, the STARFM accurately predicts surface reflectance at an effective resolution close to that of the ETM+. However, the performance depends on the characteristic patch size of the landscape and degrades somewhat when used on extremely heterogeneous fine-grained landscapes.

16.
Research on infrared and visible image fusion based on the wavelet transform
王明泉  王玉 《红外》2013,34(3):12-14
With the development of sensor technology, a single image sensor often cannot extract enough information from a scene, so multi-source image fusion is needed. To resolve the inconsistency between the target features captured by different sensors, this paper fuses infrared and visible images using the wavelet transform. The images are first decomposed into multiple scales with the wavelet transform. For the high-frequency part, the fused wavelet coefficient matrix is built by taking, for each pair of corresponding elements of the two images' coefficient matrices, the one with the larger absolute value; for the low-frequency part, a strategy combining neighborhood pixel correlation and region variance is used. Experimental results show that the algorithm effectively merges the different features and details that the infrared and visible images present for the same target into a single image, increasing the information content of a single image, enriching the information levels of the target, and providing a basis for display, observation, and subsequent image processing.
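A minimal sketch of this kind of wavelet fusion, using PyWavelets, is given below: detail sub-bands take the coefficient with the larger absolute value, and the approximation sub-band is chosen per pixel by a local-variance measure. The wavelet ('db2'), decomposition level, and window size are illustrative assumptions, and the neighborhood-correlation part of the paper's low-frequency rule is not reproduced.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_variance(band, size=3):
    """Per-pixel variance over a small window."""
    mean = uniform_filter(band, size)
    return uniform_filter(band * band, size) - mean * mean

def wavelet_fuse(ir, vis, wavelet="db2", level=2):
    """Fuse two registered grayscale float images of equal size."""
    ca = pywt.wavedec2(ir, wavelet, level=level)
    cb = pywt.wavedec2(vis, wavelet, level=level)

    # Approximation band: pick the source with the larger local variance.
    mask = local_variance(ca[0]) >= local_variance(cb[0])
    fused = [np.where(mask, ca[0], cb[0])]

    # Detail bands: maximum absolute value, coefficient by coefficient.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```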

17.
The ridgelet transform is a new multi-scale geometric analysis tool for images proposed after the classical wavelet transform. For remote sensing images in different wavebands, a fusion method based on a new orthogonal ridgelet transform is proposed. The algorithm is built on a new invertible discrete ridgelet multi-scale transform. Objective evaluation of the fusion performance shows that the method outperforms three other typical fusion methods, and in particular various wavelet-based image fusion methods; simulation experiments confirm that it achieves good fusion results.

18.
A modified statistical approach for image fusion using wavelet transform
The fusion of images is an important technique in many disparate fields such as remote sensing, robotics, and medical applications. For image fusion, selecting the required region from the input images is a vital task. Recently, wavelet-based fusion techniques have been effectively used to integrate the perceptually important information generated by different imaging systems about the same scene. In this paper, a modified wavelet-based region-level fusion algorithm for multi-spectral and multi-focus images is discussed. Here, the low-frequency sub-bands are combined, not averaged, based on the edge information present in the high-frequency sub-bands, so that blur in the fused image can be eliminated. The absolute mean and standard deviation of each image patch over a 3 × 3 window in the high-frequency sub-bands are computed as an activity measure and are used to integrate the approximation band. The performance of the proposed algorithm is evaluated using entropy, fusion symmetry, and peak signal-to-noise ratio, and is compared with recently published results. The experimental results show that the proposed algorithm performs better in many applications.
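The activity measure described above (absolute mean plus standard deviation over a 3 × 3 window in the high-frequency sub-bands) can be sketched as follows; how the two statistics are combined and how the measure then drives the approximation-band selection are illustrative assumptions rather than the paper's exact rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def activity_measure(detail_band, size=3):
    """Local activity of one high-frequency sub-band: absolute mean plus
    standard deviation over a size x size window (combination is assumed)."""
    abs_mean = uniform_filter(np.abs(detail_band), size)
    mean = uniform_filter(detail_band, size)
    var = np.maximum(uniform_filter(detail_band ** 2, size) - mean ** 2, 0.0)
    return abs_mean + np.sqrt(var)

def select_approximation(approx_a, approx_b, details_a, details_b):
    """Pick approximation coefficients from whichever source shows more
    high-frequency activity at that location (same-level sub-bands assumed
    to have the same shape as the approximation band)."""
    act_a = sum(activity_measure(d) for d in details_a)
    act_b = sum(activity_measure(d) for d in details_b)
    return np.where(act_a >= act_b, approx_a, approx_b)
```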

19.
A spectral-residual-based salient target enhancement algorithm for color fusion of night-vision images
孟凡龙 《红外》2013,34(3):15-20
Based on the characteristics of the human visual system, a spectral-residual-based algorithm for enhancing salient targets in the color fusion of night-vision images is proposed. During color fusion, the spectral residual is used to determine the extent of the salient targets, and a 5 × 5 pixel window is used to localize them. When the mean of a pixel's neighborhood exceeds the global mean of the image, the pixel is taken to belong to a salient target; highlighting the target in red enhances the color-fused image. Experimental results show that the method is stable and practical and enhances salient targets effectively.
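The spectral-residual saliency step follows the classic Hou–Zhang recipe (log amplitude spectrum minus its local average, recombined with the original phase); the sketch below also includes the 5 × 5 neighborhood-mean-versus-global-mean test described above, with the Gaussian smoothing sigma as an assumed parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, sigma=2.5):
    """Saliency map of a grayscale float image via the spectral residual."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, 3)      # 3x3 average filter
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma)

def salient_target_mask(sal, size=5):
    """Mark pixels whose 5x5 neighborhood mean exceeds the global mean;
    these are the pixels the fusion step would highlight in red."""
    return uniform_filter(sal, size) > sal.mean()
```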

20.
倪钏  阮秀凯  周志立  崔桂华 《红外》2019,40(11):42-48
To address the low contrast and unclear scene detail of fused infrared and visible images, a total-variation-based infrared and visible image fusion method is proposed. At each iteration, the difference between the current fused image and the visible image serves as the regularization term of the total variation model, and the difference between the fused image and the infrared image serves as the fidelity term. In addition, a non-increasing diffusion function of the infrared image is introduced to guide the diffusion and suppress the smoothing of infrared edge information, turning the image fusion problem into an image denoising problem and, ultimately, into the minimization of a functional. Experimental results verify the effectiveness of the method in terms of both visual quality and objective evaluation.
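A plausible reading of that functional, written only for orientation, is the form below, where the edge-stopping function g is non-increasing in the infrared gradient magnitude and $\lambda$ balances the two terms; the exact discretization and diffusion guidance used in the paper may differ.

```latex
E(F) \;=\; \int_{\Omega} g\!\left(\lvert \nabla I_{\mathrm{ir}} \rvert\right)\,
\bigl\lvert \nabla \bigl(F - I_{\mathrm{vis}}\bigr) \bigr\rvert \, dx
\;+\; \frac{\lambda}{2} \int_{\Omega} \bigl(F - I_{\mathrm{ir}}\bigr)^{2} \, dx ,
\qquad g \ \text{non-increasing}.
```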
