Similar Articles
20 similar articles found (search time: 15 ms)
1.
In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D models where colors look inconsistent and notable boundaries exist. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm with both synthetic and real data which shows significant improvement compared to existing methods. We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics which are more representative of the scene than normal mosaics.
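A minimal sketch of the imaging model this kind of calibration inverts. Here the response f, exposure t, and vignetting V are assumed already known (recovering them is exactly what the paper does); the function names and the square-root response are illustrative only.

```python
import numpy as np

def scene_radiance(brightness, inv_response, exposure, vignette):
    """Invert the simplified imaging model B = f(L * t * V): given a known
    inverse response f^{-1}, exposure t, and vignetting factor V, recover
    the relative scene radiance L."""
    return inv_response(brightness) / (exposure * vignette)

# Synthetic check with a square-root response f(x) = sqrt(x).
L, t, V = 10.0, 0.5, 0.8
B = np.sqrt(L * t * V)                       # observed brightness
recovered = scene_radiance(B, lambda b: b**2, t, V)
```

Once brightness is linearized this way for every image, exposures and vignetting can be divided out so that overlapping mosaic images agree radiometrically.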

2.
周思羽, 包国琦, 刘凯. 《计算机应用》 (Journal of Computer Applications), 2020, 40(6): 1812-1817
Vignetting is the phenomenon of pixel intensity attenuating radially in an image. To mitigate its effect on computer vision tasks and image-processing accuracy, an image vignetting correction method is proposed that constrains the log-intensity entropy under low-pass filtering. First, the vignetting model is built with an even-term sixth-order polynomial function. Second, the minimum target log-intensity entropy of the image is computed via low-pass filtering, and under the constraint of this target value the optimal parameters of the vignetting model are solved, such that they both follow the variation law of the vignetting function and reduce the image's log-intensity entropy. Finally, the vignetted image is corrected by inverse compensation with the vignetting model. Correction quality is measured with the structural similarity index (SSIM) and root mean square error (RMSE). Results show that the proposed method not only effectively restores the brightness information of vignetted regions, yielding realistic, natural vignetting-free images, but also corrects vignetting of varying severity with good visual consistency.
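The two ingredients above, an even-order polynomial falloff model and the log-intensity entropy used to select its coefficients, can be sketched as follows. This is a toy version with known coefficients, not the paper's low-pass-filtered optimization; all names are illustrative.

```python
import numpy as np

def vignetting_gain(h, w, a, b, c):
    """Even sixth-order polynomial falloff V(r) = 1 + a r^2 + b r^4 + c r^6,
    with r the radius from the image center, normalized to [0, 1]."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx) / np.hypot(cy, cx)
    return 1.0 + a * r**2 + b * r**4 + c * r**6

def log_intensity_entropy(img, bins=256):
    """Entropy of the log-intensity histogram, the quantity the method
    drives down when choosing the polynomial coefficients."""
    hist, _ = np.histogram(np.log1p(img.ravel()), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def correct(img, a, b, c):
    """Inverse compensation: divide the observation by the modeled falloff."""
    return img / vignetting_gain(*img.shape, a, b, c)

# A flat scene darkened by synthetic vignetting is restored exactly.
flat = np.full((64, 64), 100.0)
vignetted = flat * vignetting_gain(64, 64, -0.3, -0.1, -0.05)
restored = correct(vignetted, -0.3, -0.1, -0.05)
```

Note how vignetting spreads a flat scene's intensities over many histogram bins, raising the log-intensity entropy; removing it brings the entropy back down, which is the intuition behind the entropy criterion.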

3.
We present an approach that significantly enhances the capabilities of traditional image mosaicking. The key observation is that as a camera moves, it senses each scene point multiple times. We rigidly attach to the camera an optical filter with spatially varying properties, so that multiple measurements are obtained for each scene point under different optical settings. Fusing the data captured in the multiple images yields an image mosaic that includes additional information about the scene. We refer to this approach as generalized mosaicing. In this paper we show that this approach can significantly extend the optical dynamic range of any given imaging system by exploiting vignetting effects. We derive the optimal vignetting configuration and implement it using an external filter with spatially varying transmittance. We also derive efficient scene sampling conditions as well as ways to self calibrate the vignetting effects. Maximum likelihood is used for image registration and fusion. In an experiment we mounted such a filter on a standard 8-bit video camera, to obtain an image panorama with dynamic range comparable to imaging with a 16-bit camera.
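The core fusion idea, combining several measurements of one scene point taken through different transmittances, can be sketched with a simple weighted estimator. This is a stand-in for the paper's maximum-likelihood fusion; the saturation handling and weighting scheme here are our simplifying assumptions.

```python
import numpy as np

def fuse_hdr(measurements, transmittances, saturation=255.0):
    """Fuse several readings of the same scene point made through different
    filter transmittances: each unsaturated reading is divided by its
    transmittance to estimate the radiance, and estimates are averaged with
    weights proportional to transmittance (brighter valid readings carry
    relatively less quantization noise)."""
    m = np.asarray(measurements, float)
    t = np.asarray(transmittances, float)
    valid = m < saturation             # discard clipped readings
    est = np.where(valid, m / t, 0.0)  # per-reading radiance estimates
    w = np.where(valid, t, 0.0)        # simple SNR-motivated weights
    return (w * est).sum() / w.sum()

# Radiance 1000 seen through 100%, 25% and 6.25% transmittance on an 8-bit
# sensor; the full-transmittance reading saturates at 255 and is excluded.
radiance = 1000.0
trans = [1.0, 0.25, 0.0625]
readings = [min(radiance * ti, 255.0) for ti in trans]
fused = fuse_hdr(readings, trans)
```

The darkest filter region lets the 8-bit sensor record radiances far beyond its native range, which is how the panorama attains an extended dynamic range.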

4.
由于组合式建筑立面图像进行全景拼接时,交界处光照度降低,产生渐晕现象,提出一种基于曲面拟合的渐晕校正方法,对组合式建筑立面全景图像渐晕校正.通过推算角点,分析图像坐标系及世界坐标系之间的对照关联,获取世界坐标系中角点的单位坐标数据,构建相机成像的组合式建筑三维交互模型;采用ORB特征匹配算法实施图像匹配,通过分解单应性...  相似文献   

5.

This paper discusses how the vignetting effect of paintings may be transferred to photographs, with attention to center-corner contrast. First, the lightness distributions of both are analyzed. The results show that the painter's vignette is more complex than that achieved using common digital post-processing methods, involving both the 2D and 3D geometry of the scene. An algorithm is then developed to transfer the vignetting effect from an example painting to a photograph. The example painting is selected as one whose contextual geometry is similar to that of the photograph. The lightness weighting pattern extracted from the selected example painting is adaptively blended with the input photograph to create the vignetting effect. To avoid over-brightened or over-darkened regions in the enhancement result, the extracted lightness weighting pattern is corrected using a nonlinear curve. A content-aware interpolation method is also proposed to warp the lightness weighting to fit the contextual structure of the photograph. Finally, the local contrast is restored. Experiments show that the proposed algorithm can successfully perform this function. The resulting vignetting effect is presented more naturally with regard to esthetic composition than vignetting achieved with popular software tools and camera models.


6.
Optical flow methods are used to estimate pixelwise motion information based on consecutive frames in image sequences. The image sequences traditionally contain frames that are similarly exposed. However, many real-world scenes contain high dynamic range content that cannot be captured well with a single exposure setting. Such scenes result in certain image regions being over- or underexposed, which can negatively impact the quality of motion estimates in those regions. Motivated by this, we propose to capture high dynamic range scenes using different exposure settings every other frame. A framework for OF estimation on such image sequences is presented that can straightforwardly integrate techniques from the state of the art in conventional OF methods. Different aspects of the robustness of OF methods are discussed, including estimation of large displacements and robustness to natural illumination changes between frames, and we demonstrate experimentally how to handle such challenging flow estimation scenarios. The flow estimation is formulated as an optimization problem whose solution is obtained using an efficient primal–dual method.
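Before any brightness-constancy-based flow method can be run on alternately exposed frames, they must be brought onto a common radiance scale. A minimal preprocessing sketch, assuming a linear camera response (the paper's full framework handles more general illumination changes):

```python
import numpy as np

def normalize_alternating_exposures(frames, exposure_times):
    """Divide each frame by its exposure time so that alternately exposed
    frames of the same scene become photometrically comparable, allowing a
    conventional brightness-constancy optical-flow method to be applied."""
    return [f.astype(float) / t for f, t in zip(frames, exposure_times)]

# A static scene captured with two alternating exposure settings.
scene = np.arange(16.0).reshape(4, 4)
frames = [scene * 0.01, scene * 0.04]        # linear sensor, two exposures
normalized = normalize_alternating_exposures(frames, [0.01, 0.04])
```

After normalization the two frames of the static scene are identical, so any residual difference between consecutive frames is attributable to motion rather than exposure.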

7.
Photometric camera calibration is often required in physics-based computer vision. There have been a number of studies to estimate camera response functions (gamma function) and vignetting effect from images. However, less attention has been paid to camera spectral sensitivities and white balance settings. This is unfortunate, since those two properties significantly affect image colors. Motivated by this, a method to estimate camera spectral sensitivities and white balance setting jointly from images with sky regions is introduced. The basic idea is to use the sky regions to infer the sky spectra. Given sky images as the input and assuming the sun direction with respect to the camera viewing direction can be extracted, the proposed method estimates the turbidity of the sky by fitting the image intensities to a sky model. Subsequently, it calculates the sky spectra from the estimated turbidity. Having the sky \(RGB\) values and their corresponding spectra, the method estimates the camera spectral sensitivities together with the white balance setting. Precomputed basis functions of camera spectral sensitivities are used in the method for robust estimation. The whole method is novel and practical since, unlike existing methods, it uses sky images without additional hardware, assuming the geolocation of the captured sky is known. Experimental results using various real images show the effectiveness of the method.

8.
Gaussian mixture model (GMM) is a flexible tool for image segmentation and image classification. However, one main limitation of GMM is that it does not consider spatial information. Some authors introduced global spatial information from neighbor pixels into GMM without taking the image content into account. The technique of saliency map, which is based on the human visual system, enhances the image regions with high perceptive information. In this paper, we propose a new model, which incorporates the image content-based spatial information extracted from saliency map into the conventional GMM. The proposed method has several advantages: it is easy to implement into the expectation–maximization algorithm for parameter estimation, and therefore it has only a small impact on computational cost. Experimental results performed on the public Berkeley database show that the proposed method outperforms the state-of-the-art methods in terms of accuracy and computational time.
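One simple way to fold saliency into an EM-fitted mixture is to bias each pixel's mixing prior by its saliency value. The sketch below does this for a two-component 1-D GMM; it is a deliberately small illustration of the idea, not the paper's formulation, and all initialization choices are ours.

```python
import numpy as np

def saliency_weighted_gmm(x, saliency, iters=50):
    """Two-component 1-D GMM fitted by EM, where each pixel's mixing prior
    is set from its saliency so that salient pixels lean toward the
    'object' component (index 1) and the rest toward 'background' (0)."""
    mu = np.array([x.min(), x.max()])                    # deterministic init
    var = np.full(2, x.var() + 1e-6)
    pi = np.stack([1.0 - saliency, saliency], axis=1)    # per-pixel priors
    pi = pi / pi.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # E-step: responsibilities = spatial prior x Gaussian likelihood
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * lik
        r = r / r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted mean and variance updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, r.argmax(axis=1)

# Dark background pixels with low saliency, bright object pixels with high.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.2, 0.05, 200), rng.normal(0.8, 0.05, 200)])
sal = np.concatenate([np.full(200, 0.1), np.full(200, 0.9)])
mu, labels = saliency_weighted_gmm(x, sal)
```

Because the spatial prior enters only the E-step, the M-step updates are unchanged from standard EM, which is why this kind of extension adds little computational cost.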

9.
Objective: A light-field camera can sample a single scene from multiple viewpoints in one exposure, which gives it unique advantages for depth estimation. Removing the influence of occlusion is one of the difficulties of light-field depth estimation. Existing methods detect the occlusion state of each view from a 2D scene model, but occlusion depends on the 3D structure of the sampled scene and cannot be detected accurately from a 2D model alone; inaccurate occlusion detection degrades the accuracy of subsequent depth estimation. To address this, a 3D-occlusion-model-guided depth acquisition method for light-field images is proposed. Method: Foreground/background relations and depth-difference information are added between the objects in the 2D model to obtain a 3D model of the scene; the occlusion status of all views is then inferred from the propagation paths of light rays through this model and recorded in an occlusion map. Guided by the occlusion map, different cost volumes are used for depth estimation in occluded and unoccluded regions. In occluded regions, occluded views are masked out via the occlusion map and depth is computed from the photoconsistency of the remaining views; in unoccluded regions, exploiting the depth continuity of such regions, a novel defocus grid-matching cost volume is designed that perceives color texture over a wider range than traditional cost volumes, yielding smoother depth maps. To further improve depth accuracy, a joint optimization framework based on the expectation-maximization (EM) algorithm is designed around the interdependence of occlusion detection and depth estimation; within this framework, the occlusion map and the depth map successively improve each other's accuracy through mutual guidance. Results: Experiments show that in most test scenes the method achieves the best results in both occlusion detection and depth estimation for single, multiple, and low-contrast occlusions, reducing mean square error (MSE) by about 19.75% on average relative to the second-best results. Conclusion: Theoretical analysis and experimental validation show that the 3D occlusion model has advantages over the traditional 2D occlusion model for occlusion detection, making the method better suited to depth estimation in scenes with complex occlusion.
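The occlusion-masked photoconsistency cost used in the occluded regions can be illustrated in a few lines: at the correct depth, the views that actually see the scene point agree, so their color variance is near zero once occluded views are removed. The per-pixel framing and the variance cost below are simplifying assumptions.

```python
import numpy as np

def photoconsistency_cost(view_colors, occluded):
    """Matching cost for one pixel at one candidate depth: the variance of
    the colors observed across the angular views, computed only over the
    views the occlusion map leaves unmasked."""
    return float(view_colors[~occluded].var())

# Three views see the scene point (color 5); a fourth view is blocked by a
# foreground object (color 50). Masking it removes the outlier entirely.
colors = np.array([5.0, 5.0, 5.0, 50.0])
mask = np.array([False, False, False, True])
masked_cost = photoconsistency_cost(colors, mask)
naive_cost = photoconsistency_cost(colors, np.zeros(4, bool))
```

Without the mask the occluding object's color inflates the cost at the true depth, which is exactly the failure mode that drives depth errors near occlusion boundaries.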

10.

The requirements of spectral and spatial quality differ from region to region in remote-sensing images. Employing saliency in pan-sharpening methods is an effective way to meet such demands. Common saliency feature analysis, which considers the mutual information between multiple images, can ensure consistency and accuracy when assigning saliency to regions in different images. Thus, we propose a pan-sharpening method based on common saliency feature analysis and multiscale spatial information extraction for multiple remote-sensing images. Firstly, we extract spatial information by the guided filter and accurate intensity-component estimation. Then, a common saliency feature analysis method based on global contrast calculation and intensity feature extraction is designed to obtain a preliminary pixel-wise saliency estimate, which is subsequently integrated with texture-feature-based compensation to generate adaptive injection gains. The introduction of common saliency feature analysis guarantees that the same pan-sharpening strategy is applied to regions with similar features in multiple images. Finally, the injection gains are used to implement the detail injection. Our proposal satisfies the diverse needs for spatial and spectral information of different regions in a single image and guarantees that regions with similar features in different images are treated consistently during pan-sharpening. Both visual and quantitative results demonstrate that our method performs better in guaranteeing consistency across multiple images, improving spatial quality, and preserving spectral fidelity.
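The final detail-injection step shared by this family of pan-sharpening methods is simple enough to state directly: the spatial detail is the panchromatic image minus the estimated intensity component, injected into each multispectral band with per-pixel gains. The constant gain below stands in for the saliency-derived adaptive gains.

```python
import numpy as np

def inject_details(ms_band, pan, intensity, gains):
    """Generic detail injection: add the panchromatic detail (PAN minus the
    estimated intensity component) to a multispectral band, scaled by the
    injection gains."""
    return ms_band + gains * (pan - intensity)

ms = np.full((4, 4), 10.0)          # one upsampled multispectral band
intensity = np.full((4, 4), 10.0)   # estimated intensity component
pan = np.full((4, 4), 12.0)         # higher-resolution panchromatic image
sharpened = inject_details(ms, pan, intensity, gains=0.5)
```

Making `gains` a per-pixel array is what lets a method trade spectral fidelity (small gain) against spatial detail (large gain) region by region.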

11.
Infrared images can distinguish targets from the background by differences in thermal radiation even in low-light conditions, while visible images offer texture detail at high spatial resolution; in addition, both infrared and visible images carry corresponding semantic information. Infrared-visible image fusion therefore needs to preserve the radiation information of the infrared image and the texture detail of the visible image while also reflecting the semantics of both. Semantic segmentation can convert an image into semantic masks and extract the semantic information of the source images. A semantic-segmentation-based infrared and visible image fusion method is proposed that overcomes the inability of existing fusion methods to extract the distinctive information of different regions in a targeted way. A generative adversarial network is used, and two different loss functions are designed for different regions of the source images to improve the quality of the fused image. First, semantic segmentation produces a mask carrying the semantic information of the infrared target regions, and the mask is used to split the infrared and visible images into infrared target, infrared background, visible target, and visible background regions. Different loss functions are then applied to the target and background regions to obtain fused images of the target and background regions, respectively. Finally, the two fused images are combined into the overall fused image. Experiments show that the fusion results have higher contrast in the target regions and richer texture detail in the background regions, and the proposed method achieves good fusion performance.
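The final recombination step, merging the separately fused target and background results through the semantic mask, amounts to a masked selection. A minimal sketch (the fused inputs here are placeholder constants, not GAN outputs):

```python
import numpy as np

def combine_region_fusions(fused_target, fused_background, target_mask):
    """Merge two region-wise fusion results with a semantic mask: target
    pixels are taken from the target-region fusion, background pixels from
    the background-region fusion."""
    return np.where(target_mask, fused_target, fused_background)

mask = np.array([[True, False], [False, True]])   # semantic target mask
tgt = np.full((2, 2), 200.0)   # high-contrast target-region fusion
bgd = np.full((2, 2), 80.0)    # texture-preserving background fusion
final = combine_region_fusions(tgt, bgd, mask)
```

Because each region is fused under its own loss, the combined image can favor infrared contrast in the target and visible texture in the background simultaneously.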

12.
The primary goal in motion vision is to extract information about the motion and shape of an object in a scene that is encoded in the optic flow. While many solutions to this problem, both iterative and in closed form, have been proposed, practitioners still view the problem as unsolved, since these methods, for the most part, cannot deal with some important aspects of realistic scenes. Among these are complex unsegmented scenes, nonsmooth objects, and general motion of the camera. In addition, the performance of many methods degrades ungracefully as the quality of the data deteriorates. Here, we derive a closed-form solution for motion estimation based on the first-order information from two image regions with distinct flow structures. A unique solution is guaranteed when these correspond to two surface patches with different normal vectors. Given an image sequence, we show how the image may be segmented into regions with the necessary properties, how optical flow is computed for these regions, and how the motion parameters are calculated. The method can be applied to arbitrary scenes and any camera motion. We show theoretically why the method is more robust than other proposed techniques that require knowledge of the full flow or of information up to its second-order terms. Experimental results are presented to support the theoretical derivations.

13.
Motion deblurring has long been an active research direction in quality-oriented image enhancement within computer vision, and blur-kernel estimation is its key problem. A new approach is proposed: the blurred image is first segmented according to the similarity of its blur kernels, and a spatially invariant deblurring algorithm is then applied to the segmented images. The method comprises the following steps: separate the illumination, color, and texture information of the input image; segment the image; estimate a blur kernel per region and compute the kernels of the overlapping areas, then deblur each region with its single estimated kernel; finally, use the overlapping areas to merge and stitch the deblurring results and restore the illumination and color information. Experiments show that the method outperforms single-kernel motion deblurring algorithms.

14.
Addressing the problem that the matching costs commonly used in current disparity-map generation algorithms are sensitive to complex optical distortions such as vignetting, a tensor-analysis-based matching cost is proposed on the basis of an analysis of the camera imaging model. Compared with currently common matching costs, this cost is not only more robust to vignetting effects but also reflects the local structure of the image more accurately, effectively reducing matching ambiguity. In this work, the tensors are constructed on integral images, and distances between tensors are measured on a Riemannian manifold. In addition, a simple and effective disparity-map post-processing method is proposed to further correct unreliable disparities in the initial disparity map. Experimental results on standard benchmarks and synthetic images verify the effectiveness of the method.

15.
In this paper, we present a four-step technique for simultaneously estimating a human's anthropometric measurements (up to a scale parameter) and pose from a single uncalibrated image. The user initially selects a set of image points that constitute the projection of selected landmarks. Using this information, along with a priori statistical information about the human body, a set of plausible segment length estimates is produced. In the third step, a set of plausible poses is inferred using a geometric method based on joint limit constraints. In the fourth step, pose and anthropometric measurements are obtained by minimizing an appropriate cost function subject to the associated constraints. The novelty of our approach is the use of anthropometric statistics to constrain the estimation process that allows the simultaneous estimation of both anthropometry and pose. We demonstrate the accuracy, advantages, and limitations of our method for various classes of both synthetic and real input data.

16.
This paper describes a novel method to estimate appropriate traversable regions from an outdoor scene image. The traversable regions output by the proposed method reflect people's common sense; for example, a candidate traversable region is "a paved road somewhat distant from the side ditch." The input to the traversable-region estimation is one color image. First, a category is assigned to each pixel in the image. The categorization result is then input to the region estimator. Finally, the traversable regions are estimated on the input image. An important aspect of this method is the application of two score functions in the region-estimation process. One score function places a high value on categories selected as traversable paths by subjects. The other places a high value on categories that are not selected as traversable regions but are adjacent to categories with traversable paths. A combination of these two functions produces feasible estimation results. The effectiveness of the combined score functions was evaluated by experiments and a questionnaire.

17.
Previous deep learning-based super-resolution (SR) methods rely on the assumption that the degradation process is predefined (e.g., bicubic downsampling). Thus, their performance would suffer from deterioration if the real degradation is not consistent with the assumption. To deal with real-world scenarios, existing blind SR methods are committed to estimating both the degradation and the super-resolved image with an extra loss or iterative scheme. However, degradation estimation that requires more computation would result in limited SR performance due to the accumulated estimation errors. In this paper, we propose a contrastive regularization built upon contrastive learning to exploit both the information of blurry images and clear images as negative and positive samples, respectively. Contrastive regularization ensures that the restored image is pulled closer to the clear image and pushed far away from the blurry image in the representation space. Furthermore, instead of estimating the degradation, we extract global statistical prior information to capture the character of the distortion. Considering the coupling between the degradation and the low-resolution image, we embed the global prior into the distortion-specific SR network to make our method adaptive to the changes of distortions. We term our distortion-specific network with contrastive regularization as CRDNet. The extensive experiments on synthetic and real-world scenes demonstrate that our lightweight CRDNet surpasses state-of-the-art blind super-resolution approaches.
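A common way to write such a contrastive regularizer is as a ratio of feature distances: distance to the positive (clear) sample over distance to the negative (blurry) sample. The sketch below uses plain arrays and an L1 distance as stand-ins for the learned representation; it illustrates the pull/push behavior, not the paper's exact loss.

```python
import numpy as np

def contrastive_regularization(feat_restored, feat_clear, feat_blurry, eps=1e-8):
    """Contrastive term: small when the restored image's features sit close
    to the clear (positive) sample and far from the blurry (negative) one;
    minimizing it pulls toward the clear image and pushes from the blurry."""
    d_pos = np.abs(feat_restored - feat_clear).mean()
    d_neg = np.abs(feat_restored - feat_blurry).mean()
    return float(d_pos / (d_neg + eps))

clear = np.array([1.0, 2.0, 3.0])
blurry = np.array([5.0, 6.0, 7.0])
good = contrastive_regularization(clear + 0.1, clear, blurry)   # near clear
bad = contrastive_regularization(blurry - 0.1, clear, blurry)   # near blurry
```

A restoration close to the clear sample scores well below 1, while one stuck near the blurry input scores far above it, which is what makes the term useful as a training regularizer.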

18.
Scene-image text localization combining edge and gray-level detection
Natural scene images contain a large amount of image and text information, and the text characters can provide important semantic cues. Using computers to automatically detect and recognize text in natural scenes is an important research topic in pattern recognition and text information processing. An effective method for localizing text in scene images is proposed. Its principle is as follows: coarse localization of text regions is first performed based on edge detection; gray-level detection is then applied to the located regions to determine the character positions within the text areas; finally, the detected regions are filtered to remove noise regions and obtain the target text regions. Experimental results show that the proposed method is robust to font size, style, color, and arrangement direction, and can accurately localize and extract text information in natural scenes.
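The coarse edge-based stage can be illustrated with a block-wise edge-density scan: text areas have dense strokes and hence dense gradients. This is a toy sketch of the idea; the block size, gradient threshold, and density threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def coarse_text_regions(gray, block=8, density_thresh=0.2):
    """Scan the image in blocks and flag those whose edge density exceeds a
    threshold as candidate text regions (coarse localization stage only;
    the gray-level refinement and noise filtering would follow)."""
    g = gray.astype(float)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))   # horizontal gradients
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))   # vertical gradients
    edges = (gx + gy) > 30.0
    hits = []
    for i in range(0, g.shape[0], block):
        for j in range(0, g.shape[1], block):
            if edges[i:i + block, j:j + block].mean() > density_thresh:
                hits.append((i, j))
    return hits

img = np.zeros((32, 32))
img[0:8, 0:8:2] = 255.0        # stripe pattern mimicking character strokes
regions = coarse_text_regions(img)
```

Only the striped top-left block has enough edge density to be flagged; smooth background blocks fall below the threshold.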

19.
Multi-frame estimation of planar motion
Traditional plane alignment techniques are typically performed between pairs of frames. We present a method for extending existing two-frame planar motion estimation techniques into a simultaneous multi-frame estimation, by exploiting multi-frame subspace constraints of planar surfaces. The paper has three main contributions: 1) we show that when the camera calibration does not change, the collection of all parametric image motions of a planar surface in the scene across multiple frames is embedded in a low dimensional linear subspace; 2) we show that the relative image motion of multiple planar surfaces across multiple frames is embedded in a yet lower dimensional linear subspace, even with varying camera calibration; and 3) we show how these multi-frame constraints can be incorporated into simultaneous multi-frame estimation of planar motion, without explicitly recovering any 3D information, or camera calibration. The resulting multi-frame estimation process is more constrained than the individual two-frame estimations, leading to more accurate alignment, even when applied to small image regions.

20.
We discuss calibration and removal of "vignetting" (radial falloff) and exposure (gain) variations from sequences of images. Even when the response curve is known, spatially varying ambiguities prevent us from recovering the vignetting, exposure, and scene radiances uniquely. However, the vignetting and exposure variations can nonetheless be removed from the images without resolving these ambiguities or the previously known scale and gamma ambiguities. Applications include panoramic image mosaics, photometry for material reconstruction, image-based rendering, and preprocessing for correlation-based vision algorithms.
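The exposure (gain) part of such a correction can be sketched with a toy estimator: the gain of each image relative to a reference is taken as the median intensity ratio over their overlap, then divided out. The linear-response assumption and full-frame overlap are our simplifications; a radial vignetting model would be fitted and removed analogously.

```python
import numpy as np

def remove_gain_variation(images, overlap_masks, ref=0):
    """Estimate each image's gain relative to a reference as the median
    intensity ratio over the overlap region, and divide it out. The median
    makes the estimate robust to a few mismatched pixels."""
    ref_img = images[ref].astype(float)
    out = []
    for img, mask in zip(images, overlap_masks):
        gain = np.median(img[mask] / ref_img[mask])
        out.append(img / gain)
    return out

scene = np.linspace(50.0, 150.0, 100).reshape(10, 10)
brighter = scene * 1.8                        # same view at a higher gain
full = np.ones_like(scene, dtype=bool)        # fully overlapping toy example
aligned = remove_gain_variation([scene, brighter], [full, full])
```

Note that only the relative gains are recovered, consistent with the abstract's point that a global scale ambiguity remains and need not be resolved to make the mosaic seamless.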
