Similar Documents
20 similar documents found.
1.
Current state-of-the-art image-based scene reconstruction techniques are capable of generating high-fidelity 3D models when used under controlled capture conditions. However, they are often inadequate when used in more challenging environments such as sports scenes with moving cameras. Algorithms must be able to cope with relatively large calibration and segmentation errors as well as input images separated by a wide baseline and possibly captured at different resolutions. In this paper, we propose a technique which, under these challenging conditions, is able to efficiently compute a high-quality scene representation via graph-cut optimisation of an energy function combining multiple image cues with strong priors. Robustness is achieved by jointly optimising scene segmentation and multiple view reconstruction in a view-dependent manner with respect to each input camera. Joint optimisation prevents propagation of errors from segmentation to reconstruction, as is often the case with sequential approaches. View-dependent processing increases tolerance to errors in through-the-lens calibration compared to global approaches. We evaluate our technique on challenging outdoor sports scenes captured with manually operated broadcast cameras as well as several indoor scenes with natural backgrounds. A comprehensive experimental evaluation, including qualitative and quantitative results, demonstrates the accuracy of the technique for high-quality segmentation and reconstruction and its suitability for free-viewpoint video under these difficult conditions.
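To illustrate the kind of energy this abstract describes, here is a hypothetical miniature in Python: unary data costs combined with a Potts smoothness prior over a handful of pixels. The costs, names, and the brute-force minimizer are all invented for the sketch; the paper itself minimizes such energies with graph cuts, which scale to real images.

```python
from itertools import product

def energy(labels, unary, pairs, lam):
    """Energy of a binary labelling: per-pixel data costs plus a
    Potts smoothness penalty on neighbouring pixels that disagree."""
    data = sum(unary[p][l] for p, l in enumerate(labels))
    smooth = lam * sum(labels[p] != labels[q] for p, q in pairs)
    return data + smooth

def minimize_energy(unary, pairs, lam):
    """Exhaustive minimization over all 2^n labellings.
    A toy stand-in for graph-cut optimisation; only feasible for tiny n."""
    n = len(unary)
    return min(product((0, 1), repeat=n),
               key=lambda L: energy(L, unary, pairs, lam))
```

With data costs favouring label 0 on the first two pixels and label 1 on the last two, the smoothness prior leaves a single label transition in the optimum.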

2.
宋洪军  陈阳舟  郜园园 《计算机应用》2012,32(12):3397-3403
To address the problems that traditional visibility meters are expensive and offer only limited sampling, and that existing video-based measurement methods require manually placed markers and lack stability, an algorithm is proposed that recognizes foggy weather and computes road visibility from a fixed camera, based on lane-line detection and the image inflection point. Unlike previous work, a homogeneous-fog factor is added to the traffic model. The algorithm has three main steps. First, the scene activity map is computed, and the candidate region is extracted with an area search algorithm (ASA) combined with texture features; if pixel intensities within the region vary from top to bottom in a hyperbolic form, the current weather is judged to be foggy, and the inflection point of the image luminance curve within the region is computed. Second, lane lines are detected with a scalable-window algorithm, lane-line endpoints are extracted, and the camera is calibrated. Finally, the atmospheric extinction coefficient is computed from the image inflection point and the camera parameters, and visibility is computed according to the definition given by the World Meteorological Organization. Visibility-detection experiments in three scenes show that the algorithm agrees with human observation, achieves an accuracy above 86% with detection errors within 20 m, and is robust.
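The final step above, converting an atmospheric extinction coefficient into a visibility distance, can be sketched directly from the meteorological definition of visibility (meteorological optical range): the distance at which the apparent contrast of a black target drops to a fixed threshold. The function name and the 0.05 threshold default are illustrative assumptions, not taken from the paper.

```python
import math

def visibility_from_extinction(k, contrast_threshold=0.05):
    """Meteorological optical range for extinction coefficient k (1/m):
    contrast decays as exp(-k * d), so V = -ln(threshold) / k."""
    return -math.log(contrast_threshold) / k
```

For example, an extinction coefficient of 0.03 per metre gives a visibility of roughly 100 m; denser fog (larger k) gives proportionally shorter visibility.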

3.
High Dynamic Range (HDR) imaging requires one to composite multiple, differently exposed images of a scene in the irradiance domain and perform tone mapping of the generated HDR image for displaying on Low Dynamic Range (LDR) devices. In the case of dynamic scenes, standard techniques may introduce artifacts called ghosts if the scene changes are not accounted for. In this paper, we consider the blind HDR problem for dynamic scenes. We develop a novel bottom-up segmentation algorithm through superpixel grouping which enables us to detect scene changes. We then employ a piecewise patch-based compositing methodology in the gradient domain to directly generate the ghost-free LDR image of the dynamic scene. Being a blind method, the primary advantage of our approach is that we do not assume any knowledge of camera response function and exposure settings while preserving the contrast even in the non-stationary regions of the scene. We compare the results of our approach for both static and dynamic scenes with that of the state-of-the-art techniques.

4.
Since indoor scenes change frequently in daily life, for example when furniture is re-arranged, their 3D reconstructions should be flexible and easy to update. We present an automatic 3D scene update algorithm for indoor scenes that captures scene variation with RGBD cameras. We assume an initial scene has been reconstructed in advance, manually or in a semi-automatic way, before the change, and we automatically update the reconstruction from newly captured RGBD images of the changed real scene. The method starts with an automatic segmentation process without manual interaction, which benefits from accurate label training on the initial 3D scene. After segmentation, objects captured by the RGBD camera are extracted to form a locally updated scene. We formulate an optimization problem that compares this local scene to the initial scene to locate moved objects. The moved objects are then integrated with the static objects of the initial scene to generate a new 3D scene. We demonstrate the efficiency and robustness of our approach by updating the 3D reconstructions of several real-world scenes.

5.
Computational photography relies on specialized image-processing techniques that combine multiple images captured by a camera into a desired image of the scene. We first consider the high dynamic range (HDR) imaging problem. We can change either the exposure time or the aperture while capturing multiple images of the scene to generate an HDR image. This paper addresses the HDR imaging problem for static and dynamic scenes captured by a stationary camera under various aperture and exposure settings, when we have no knowledge of the camera settings. We propose a novel framework based on sparse representation which enables us to process the images while removing artifacts due to moving objects and defocus blur. We show that the proposed approach produces good results through its dynamic-object rejection and deblurring capabilities. We compare the results with other competitive approaches and discuss the relative advantages of the proposed approach.
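The irradiance-domain compositing that HDR imaging builds on can be sketched for a single scene point. Note the assumptions: unlike the blind setting of this paper, the sketch assumes a linear camera response and known exposure times, which is the standard non-blind baseline; the hat-shaped weighting and all names are illustrative.

```python
def hat_weight(z, zmin=0.0, zmax=255.0):
    """Weight favouring mid-range pixel values and distrusting
    under- or over-exposed ones."""
    mid = 0.5 * (zmin + zmax)
    return (z - zmin) if z <= mid else (zmax - z)

def fuse_irradiance(pixels, exposures):
    """Weighted average of per-exposure irradiance estimates z / t
    for one scene point observed under several exposure times."""
    num = sum(hat_weight(z) * (z / t) for z, t in zip(pixels, exposures))
    den = sum(hat_weight(z) for z in pixels)
    return num / den
```

When the pixel values scale linearly with exposure time (no motion, no saturation), every exposure votes for the same irradiance and the fusion returns it exactly.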

6.
Extracting objects from range and radiance images
In this paper, we present a pipeline and several key techniques necessary for editing a real scene captured with both cameras and laser range scanners. We develop automatic algorithms to segment the geometry from range images into distinct surfaces, register texture from radiance images with the geometry, and synthesize compact high-quality texture maps. The result is an object-level representation of the scene which can be rendered with modifications to structure via traditional rendering methods. The segmentation algorithm for geometry operates directly on the point cloud from multiple registered 3D range images instead of a reconstructed mesh. It is a top-down algorithm which recursively partitions a point set into two subsets using a pairwise similarity measure. The result is a binary tree with individual surfaces as leaves. Our image registration technique performs a very efficient search to automatically find the camera poses for arbitrary position and orientation relative to the geometry. Thus, we can take photographs from any location without precalibration between the scanner and the camera. The algorithms have been applied to large-scale real data. We demonstrate our ability to edit a captured scene by moving, inserting, and deleting objects.

7.
This paper presents a novel algorithm for detecting average vehicle velocity through automatic, dynamic camera calibration based on the dark channel under homogeneous fog conditions. The camera, fixed above the middle of the road, is calibrated in homogeneous fog and can then be used in any weather condition. Unlike other work on velocity estimation, our traffic model includes only the road plane and vehicles in motion; painted lane markings are ignored, since traffic lanes are sometimes absent, especially in unstructured traffic scenes. Once the camera is calibrated, scene distances are obtained and used to compute average vehicle velocity. The algorithm comprises three major steps. First, the current video frame is classified to determine the weather condition using an area search method (ASM); under homogeneous fog, the average pixel value from top to bottom of the selected area varies in the form of an edge spread function (ESF). Second, the road surface plane is found from an activity map built by computing the expected absolute intensity difference between adjacent frames. Finally, the scene transmission image is obtained using the dark channel prior, and the camera's intrinsic and extrinsic parameters are computed from a calibration formula derived from a monocular model and the transmission image; several key points on the road surface with particular transmission values are selected to generate the necessary calibration equations. Vehicle pixel coordinates are transformed to camera coordinates, distances between vehicles and the camera are computed, and the average velocity of each vehicle is obtained. Calibration results and velocity data for nine vehicles under different weather conditions are reported, and comparison with other algorithms verifies the effectiveness of our approach.
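The last step, going from transmission values to distances and then to velocity, can be sketched under the standard haze model t = exp(-k * d), where k is the extinction coefficient. The function names and the single-axis simplification are assumptions for illustration; the paper works with full camera coordinates.

```python
import math

def depth_from_transmission(t, k):
    """Invert the haze attenuation model t = exp(-k * d) to get distance d."""
    return -math.log(t) / k

def average_velocity(t1, t2, k, dt):
    """Average speed of a vehicle whose transmission changed from t1 to t2
    over dt seconds, i.e. displacement along the viewing ray over time."""
    d1 = depth_from_transmission(t1, k)
    d2 = depth_from_transmission(t2, k)
    return abs(d2 - d1) / dt
```

For instance, with k = 0.1 per metre, a vehicle moving from 50 m to 30 m away within one second is travelling at 20 m/s.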

8.
Bayesian Defogging
Atmospheric conditions induced by suspended particles, such as fog and haze, severely alter the scene appearance. Restoring the true scene appearance from a single observation made in such bad weather conditions remains a challenging task due to the inherent ambiguity that arises in the image formation process. In this paper, we introduce a novel Bayesian probabilistic method that jointly estimates the scene albedo and depth from a single foggy image by fully leveraging their latent statistical structures. Our key idea is to model the image with a factorial Markov random field in which the scene albedo and depth are two statistically independent latent layers and to jointly estimate them. We show that we may exploit natural image and depth statistics as priors on these hidden layers and estimate the scene albedo and depth with a canonical expectation maximization algorithm with alternating minimization. We experimentally evaluate the effectiveness of our method on a number of synthetic and real foggy images. The results demonstrate that the method achieves accurate factorization even on challenging scenes for past methods that only constrain and estimate one of the latent variables.
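The image-formation ambiguity this abstract refers to comes from the standard fog model I = J * t + A * (1 - t), where J is the fog-free radiance, t the transmission, and A the airlight. Given estimates of t and A, inverting the model per pixel is straightforward; the sketch below (names and the t_min clamp are illustrative assumptions, not the paper's Bayesian machinery) shows that inversion.

```python
def dehaze_pixel(I, A, t, t_min=0.1):
    """Invert the fog formation model I = J*t + A*(1 - t) for one pixel.
    t is clamped from below to avoid amplifying noise where fog is dense."""
    t = max(t, t_min)
    return (I - A * (1.0 - t)) / t
```

The difficulty the paper tackles is that t and A are themselves unknown, which is why it places statistical priors on albedo and depth and estimates them jointly.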

9.
A multi-viewpoint long-scene image is a panoramic image stitched from a series of images captured continuously along a scene. This paper proposes a new graph-cut-based method for constructing a long-scene image from multiple viewpoints. The method projects the original camera images onto the principal plane of the long-scene image to be formed, then builds an energy function from several conditions the long-scene image must satisfy, thereby turning viewpoint selection into an energy-minimization problem. The energy is minimized with the fast approximate minimization method of Boykov et al., yielding a long-scene image that satisfies the conditions. Experimental results show that long-scene images stitched with this method look natural at the seams, achieving a truly seamless and smooth transition.

10.
Objective: To address the blurred detail and low contrast of images captured by outdoor vision systems in adverse conditions, an image defogging algorithm based on the variogram and morphological filtering (IDA_VAM) is proposed. Method: The algorithm first uses the variogram to obtain an accurate global atmospheric light value; it then applies multi-structuring-element morphological open-close filters to the minimum channel map to obtain a rough atmospheric veil, from which the atmospheric transmission is estimated and corrected; the transmission is smoothed with a bilateral filter; finally, the restored image is obtained from the physical model and its tone is adjusted, yielding a bright, clear, fog-free image. Results: Compared with several defogging algorithms on foggy near-view images, distant-view images, and images with bright regions, the algorithm removes fog well: information entropy improves by 38.0%, contrast by 34.1%, and sharpness by 134.5%, producing a natural, bright, fog-free restoration. Conclusion: Extensive simulation results confirm that IDA_VAM restores the color and sharpness of near-view images, distant-view images, and images containing bright regions in non-complex scenes, with high detail visibility; its time complexity is linear in the number of pixels, giving good real-time performance.
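The minimum channel map that IDA_VAM filters morphologically can be sketched as a per-pixel minimum over color channels followed by a local minimum over a window (the erosion half of an open-close filter). Pure-Python nested lists and the window radius are illustrative assumptions; a real implementation would use vectorized image operations.

```python
def min_channel_map(img, radius=1):
    """For an H x W image of (r, g, b) tuples, return the local minimum over
    both color channels and a (2*radius+1)^2 neighbourhood at each pixel."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            vals = [min(img[y][x])  # min over the three channels
                    for y in range(max(0, i - radius), min(h, i + radius + 1))
                    for x in range(max(0, j - radius), min(w, j + radius + 1))]
            row.append(min(vals))   # min over the spatial window
        out.append(row)
    return out
```

In fog-free regions this map is close to zero (the dark channel observation), so large values indicate the atmospheric veil that the algorithm estimates and removes.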

11.
We present a novel color multiplexing method for extracting depth edges in a scene. It has been shown that casting shadows from different light positions provides a simple yet robust cue for extracting depth edges. Instead of flashing a single light source at a time as in conventional methods, our method flashes all light sources simultaneously to reduce the number of captured images. We use a ring light source around a camera and arrange colors on the ring such that the colors form a hue circle. Since complementary colors are arranged at any position and its antipole on the ring, shadow regions where half of the hue circle is occluded are colorized according to the orientations of depth edges, while non-shadow regions where all the hues are mixed have a neutral color in the captured image. Thus the colored shadows in the single image directly provide depth edges and their orientations in an ideal situation. We present an algorithm that extracts depth edges from a single image by analyzing the colored shadows. We also present a more robust depth edge extraction algorithm using an additional image captured by rotating the hue circle by 180° to compensate for scene textures and ambient lights. We compare our approach with conventional methods for various scenes using a camera prototype consisting of a standard camera and 8 color LEDs. We also demonstrate a bin-picking system using the camera prototype mounted on a robot arm.
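Two properties the method relies on can be checked with a small sketch: complementary hues sit 180° apart on the hue circle, and mixing all hues of a full circle cancels to a neutral (zero-chroma) color, while a half circle does not. The functions below are illustrative assumptions, not the paper's algorithm.

```python
import math

def complementary_hue(h):
    """Hue (degrees) at the antipole of the hue circle."""
    return (h + 180.0) % 360.0

def ring_mix_chroma(hues):
    """Chroma of the mixture of equal-intensity lights at the given hues,
    computed as the length of the mean unit vector on the hue circle."""
    x = sum(math.cos(math.radians(h)) for h in hues)
    y = sum(math.sin(math.radians(h)) for h in hues)
    return math.hypot(x, y) / len(hues)
```

A full ring of evenly spaced hues mixes to neutral, which is why non-shadow regions look gray, whereas a shadow lit by only half the ring keeps a strong hue that encodes the depth-edge orientation.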

12.
Face recognition is one of the most widely used biometric technologies. Its liveness-detection module, which judges whether a face image comes from a real live face, is an important safeguard for the secure operation of the system. Considering both security and cost, the most common liveness-detection approach today is binocular (dual-camera) liveness detection. However, because lighting brightness and angle vary greatly across scenes, the quality of captured face images is uneven, which severely degrades liveness detection. To address this problem, a binocular liveness-detection algorithm is proposed that improves detection accuracy by optimizing the handling of scene illumination. The algorithm controls the camera's light sensitivity and the fill light with a cascade PID controller, and uses face detection to locate and refine the metering region, so that different strategies are applied for different light intensities and angles. Experiments verify that the method improves liveness-detection accuracy by about 30% in complex scenes, ensuring the algorithm's effectiveness under different indoor and outdoor lighting conditions.

13.
The scattering of light by particles present in the medium through which it is travelling gives clues allowing us to infer structural information about the observed scene. Based on a single-scattering model of light, we review related work and propose a novel method for computing the 3D structure of a scene from two images captured in different conditions of visibility. We also present two new methods for identifying occlusion edges from the same images. The originality of the proposed methods is based on the study of spatial variations of intensity, particularly how they are affected by a change in visibility conditions, while existing methods rely on intensities. We validate our work with experimental results produced from both synthetic and real scenes.

14.
15.
In this paper, we introduce a novel method for depth acquisition based on refraction of light. A scene is captured directly by a camera and by placing a transparent medium between the scene and the camera. A depth map of the scene is then recovered from the displacements of scene points in the images. Unlike other existing depth from refraction methods, our method does not require prior knowledge of the pose and refractive index of the transparent medium, but instead can recover them directly from the input images. By analyzing the displacements of corresponding scene points in the images, we derive closed form solutions for recovering the pose of the transparent medium and develop an iterative method for estimating the refractive index of the medium. Experimental results on both synthetic and real-world data are presented, which demonstrate the effectiveness of the proposed method.
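The displacement cue used here comes from standard refraction geometry: a ray crossing a flat transparent slab of thickness t and refractive index n at incidence angle θ is shifted laterally by d = t·sin θ·(1 − cos θ / √(n² − sin²θ)). The sketch below illustrates that textbook formula, not the paper's recovery algorithm.

```python
import math

def slab_displacement(theta, thickness, n):
    """Lateral shift of a light ray passing through a flat transparent slab.
    theta: incidence angle (radians), thickness: slab thickness, n: index."""
    s, c = math.sin(theta), math.cos(theta)
    return thickness * s * (1.0 - c / math.sqrt(n * n - s * s))
```

With n = 1 (no refraction) the shift vanishes, and it grows with the refractive index, which is what lets the method infer depth, pose, and index from observed displacements.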

16.
3D model reconstruction from two uncalibrated views based on scene geometric constraints
A method is proposed for reconstructing a 3D scene model from two uncalibrated images. It fully exploits the parallelism and orthogonality constraints that abound in man-made scenes: three mutually orthogonal sets of parallel lines in each view yield three vanishing points, from which each view is calibrated. From two uncalibrated images, the fundamental matrix gives only a projective reconstruction; once each image is calibrated, the fundamental matrix can be upgraded to the essential matrix. The 3D reconstruction then proceeds in two steps: first the camera position and motion are recovered, then the 3D coordinates of points are computed by triangulation. In reconstruction experiments on a scene composed of multiple planes, novel-view images generated from the reconstructed 3D model are consistent with the observed scene, and the angle between two reconstructed planes is close to its true value. The experimental results show that the algorithm is effective.
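The calibration-from-vanishing-points step has a closed form in the simplest setting: for square pixels and a known principal point p, two vanishing points v1, v2 of orthogonal scene directions satisfy (v1 − p)·(v2 − p) + f² = 0, giving the focal length directly. The sketch below assumes that simplified model (the paper uses all three vanishing points).

```python
import math

def focal_from_vanishing_points(v1, v2, pp=(0.0, 0.0)):
    """Focal length from two vanishing points of orthogonal directions,
    assuming square pixels and principal point pp:
    (v1 - pp) . (v2 - pp) = -f**2."""
    d = ((v1[0] - pp[0]) * (v2[0] - pp[0]) +
         (v1[1] - pp[1]) * (v2[1] - pp[1]))
    return math.sqrt(-d)
```

As a consistency check, a camera with f = 500 viewing two horizontal orthogonal directions at azimuth 30° and 120° produces vanishing points at x = f·tan 30° and x = −f/tan 30°, whose dot product is exactly −f².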

17.
For outdoor scene images captured at the same solar position under different weather conditions, an illumination-parameter estimation algorithm based on chromaticity consistency is proposed. Building on the theory of decomposing an image into sunlight and skylight basis images, the algorithm uses chromaticity consistency as a constraint to solve for the sunlight and skylight illumination coefficients, and applies an illumination chromaticity correction model to the basis images to obtain more accurate illumination parameters. Experimental results show that the proposed algorithm is effective and correct: the original image can be accurately reconstructed from the basis images and illumination coefficients, enabling seamless fusion of virtual objects with real scenes.

18.
Multimedia Tools and Applications - Rainy weather greatly affects the visibility of salient objects and scenes in the captured images and videos. The object/scene visibility varies with the type of...

19.
Creating Architectural Models from Images
We present methods for creating 3D graphical models of scenes from a limited number of images, i.e. one or two, in situations where no scene co-ordinate measurements are available. The methods employ constraints available from geometric relationships that are common in architectural scenes – such as parallelism and orthogonality – together with constraints available from the camera. In particular, by using the circular points of a plane, simple linear algorithms are given for computing plane rectification, plane orientation and camera calibration from a single image. Examples of image-based 3D modelling are given for both single images and image pairs.

20.
This paper presents a novel method for virtual view synthesis that allows viewers to virtually fly through real soccer scenes, which are captured by multiple cameras in a stadium. The proposed method generates images of arbitrary viewpoints by view interpolation of real camera images near the chosen viewpoints. In this method, cameras do not need to be strongly calibrated since projective geometry between cameras is employed for the interpolation. For avoiding the complex and unreliable process of 3-D recovery, object scenes are segmented into several regions according to the geometric property of the scene. Dense correspondence between real views, which is necessary for intermediate view generation, is automatically obtained by applying projective geometry to each region. By superimposing intermediate images for all regions, virtual views for the entire soccer scene are generated. The efforts for camera calibration are reduced and correspondence matching requires no manual operation; hence, the proposed method can be easily applied to dynamic events in a large space. An application for fly-through observations of soccer match replays is introduced along with the algorithm of view synthesis and experimental results. This is a new approach for providing arbitrary views of an entire dynamic event.
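The core of view interpolation can be sketched on point correspondences: an intermediate view is obtained by blending the positions of matched points from the two real views. The plain linear blend below is only physically valid for rectified (parallel) views; the paper instead applies projective geometry per region, so treat this as a minimal illustrative sketch with invented names.

```python
def interpolate_view(pts_a, pts_b, alpha):
    """Blend corresponding image points from view A and view B.
    alpha = 0 reproduces view A, alpha = 1 reproduces view B."""
    return [((1.0 - alpha) * xa + alpha * xb,
             (1.0 - alpha) * ya + alpha * yb)
            for (xa, ya), (xb, yb) in zip(pts_a, pts_b)]
```

Dense correspondences, as the paper obtains automatically per scene region, turn this point-wise blend into a full intermediate image.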
