Similar Literature
20 similar documents found
1.
Dust particle detection in video aims to automatically determine whether a video is degraded by dust particles. Dust particles are usually stuck on the camera lens and are typically temporally static in the images of a video sequence captured from a dynamic scene. Moving objects in the scene can be occluded by the dust; consequently, the motion information of moving objects tends to exhibit singularities. Motivated by this, a dust detection approach is proposed in this paper that exploits motion-singularity analysis in the video. First, the optical model of a dust particle is studied theoretically by simulating the optical density of artifacts produced by dust particles. Then, optical flow is exploited to perform motion-singularity analysis for blind dust detection in the video, without the need for a ground-truth dust-free video. More specifically, a singularity model of optical flow is proposed that uses the direction of the motion flow field instead of its amplitude. The proposed motion-singularity model is further incorporated into a temporal voting mechanism to develop automatic dust particle detection in video. Experiments on both artificially simulated and real-world dust-degraded videos demonstrate that the proposed approach outperforms conventional approaches, achieving more accurate dust detection.
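The temporal voting step can be sketched as follows — a minimal NumPy sketch that assumes per-frame boolean singularity masks are already available; the function name and voting threshold are illustrative, not from the paper:

```python
import numpy as np

def temporal_vote(singularity_masks, ratio=0.8):
    """Flag a pixel as dust if it is motion-singular in at least
    `ratio` of the frames (a simple temporal voting rule)."""
    stack = np.stack(singularity_masks, axis=0)  # (T, H, W) booleans
    votes = stack.mean(axis=0)                   # fraction of frames voting "dust"
    return votes >= ratio

# Toy example: one pixel is singular in every frame (dust candidate),
# another is singular only once (a passing moving object).
masks = [np.zeros((4, 4), dtype=bool) for _ in range(10)]
for m in masks:
    m[1, 1] = True
masks[0][2, 2] = True
dust = temporal_vote(masks)
```

Persistence over time is what separates lens dust from genuinely moving occluders: the transient singularity at (2, 2) is voted out, while the static one at (1, 1) survives.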

2.
A method of estimating range flow (the space displacement vector field) on nonrigid as well as rigid objects from a sequence of range images is described. This method can directly estimate the deformable motion parameters by solving a system of linear equations obtained by substituting a linear transformation of nonrigid objects, expressed by the Jacobian matrix, into motion constraints based on an extension of the conventional scheme used for intensity image sequences. The range flow is directly computed by substituting these estimated motion parameters into the linear transformation. The algorithm is supported by experimental estimations of range flow on a sheet of paper, a piece of cloth, human skin, and a rubber balloon being inflated, using real range image sequences acquired by a video-rate range camera.

3.
To improve the resolution of reconstructed images and videos, a novel optical-flow-based image registration algorithm is applied within the iterative back-projection (IBP) super-resolution algorithm. In the proposed method, optical-flow-based registration improves the accuracy of image alignment. First, to obtain pixel-level motion vectors, the optical-flow-based registration algorithm estimates the motion between images, yielding a more accurate motion-vector matrix. The obtained motion-vector matrix is then combined with the iterative back-projection algorithm to reconstruct a high-resolution image. Because optical-flow-based registration also estimates inter-frame motion in video well, the proposed method applies equally to video super-resolution. Experimental results show that the proposed method improves super-resolution quality for both images and video, in both subjective appearance and objective measures.

4.
The blur in target images caused by camera vibration, due to robot motion or hand shaking, and by objects moving in the background scene is difficult to deal with in computer vision systems. In this paper, the authors study the relation model between motion and blur for the case of object motion in a video image sequence, and develop a practical algorithm for both motion analysis and blurred-image restoration. Combining general optical flow with a stochastic process model, the paper presents an approach by which motion velocity can be calculated from blurred images. Conversely, the blurred image can be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. To restore the blurred image, an iterative algorithm together with the obtained motion velocity is used. Experiments show that the proposed approach works well for both motion velocity computation and blurred-image restoration.
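The paper's own iteration is not specified here; as a hedged stand-in, a Landweber iteration (a classic, stable iterative deblurring scheme) against a known horizontal motion-blur kernel illustrates the idea of restoring the image once the motion velocity — here reduced to the blur length — is known:

```python
import numpy as np

def blur_rows(img, kernel):
    """Horizontal motion blur: 1-D convolution along each row."""
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)

def landweber_restore(blurred, length, iters=200):
    """Landweber iteration f <- f + H^T (g - H f).  The uniform blur
    kernel is symmetric, so H^T equals H.  A stand-in for the paper's
    unspecified iterative restoration, not its exact algorithm."""
    h = np.ones(length) / length
    f = blurred.copy()
    for _ in range(iters):
        f = f + blur_rows(blurred - blur_rows(f, h), h)
    return f

# Toy example: a vertical step edge blurred by a 3-pixel horizontal motion.
sharp = np.tile(np.concatenate([np.zeros(8), np.ones(8)]), (6, 1))
blurred = blur_rows(sharp, np.ones(3) / 3)
restored = landweber_restore(blurred, 3)
```

The Landweber step size of 1 is safe here because the kernel's frequency response has magnitude at most 1, so every error component shrinks per iteration.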

5.
A qualitative classification method for camera motion based on motion vectors
In a video sequence, camera motion reflects, to some degree, the semantic information of the video; knowing the camera motion enables better video browsing and retrieval. To address the shortcomings of existing algorithms that derive camera motion parameters from the optical-flow distribution, a qualitative method for analysing camera motion is presented. The method can detect given types of camera motion even when the focus of expansion (FOE) is not at the centre of the image plane. Experimental results show that the method is effective at identifying given types of camera motion from video sequences.

6.
To address video shake in footage captured by handheld mobile devices, a video stabilization algorithm based on feature tracking and mesh-path motion is proposed. Feature points are extracted from video frames with SIFT and tracked with the KLT algorithm, and the affine transformation between adjacent frames is estimated with RANSAC. Each frame is divided into a uniform mesh, the motion trajectory of the video is computed, and multiple mesh paths are smoothed by minimizing an energy function. Finally, from the relation between the original and smoothed camera paths, a compensation matrix is computed for each pair of adjacent frames, and each frame is geometrically warped by its compensation matrix to produce a stabilized video. Experiments show that the algorithm performs well on shaky videos captured by handheld devices: the average PSNR of the stabilized video is about 11.2 dB higher than that of the original shaky video, and about 2.3 dB higher than the bundled-camera-paths method; the average inter-frame structural similarity (SSIM) improves by about 59%, and by about 3.3% over the bundled-camera-paths method.
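The path-smoothing idea — penalizing deviation from the original path plus a smoothness term — can be illustrated for a single 1-D path. This is a simplified sketch: the paper optimizes multiple mesh paths jointly, and the weight `lam` is an illustrative choice:

```python
import numpy as np

def smooth_path(c, lam=5.0):
    """Smooth a 1-D camera path c by minimising
    sum_i (p_i - c_i)^2 + lam * sum_i (p_{i+1} - p_i)^2,
    whose normal equations are the tridiagonal system (I + lam*L) p = c."""
    n = len(c)
    A = np.eye(n)
    for i in range(n - 1):
        A[i, i] += lam
        A[i + 1, i + 1] += lam
        A[i, i + 1] -= lam
        A[i + 1, i] -= lam
    return np.linalg.solve(A, np.asarray(c, float))

# Jittery path: steady linear motion plus alternating frame jitter.
t = np.arange(30, dtype=float)
jitter = np.where(t % 2 == 0, 1.0, -1.0)
shaky = t + jitter
smooth = smooth_path(shaky)
```

The quadratic energy has a closed-form solution, so no iterative optimizer is needed for this single-path case; the smoothed path keeps the intended linear motion while the frame-to-frame jitter is strongly attenuated.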

7.
Objective: Visual odometry (VO) achieves respectable self-localization accuracy with only an ordinary camera and has become a research focus in computer vision and robotics. However, most current research assumes a static scene, i.e., camera motion is the only motion model, and cannot handle multiple motion models. This paper therefore proposes a multi-motion visual odometry method based on split-and-merge motion segmentation, which recovers the motion states of multiple moving targets in the scene in addition to the camera motion. Method: Building on the conventional visual odometry framework, a multi-model fitting method is introduced to segment the multiple motion models in a dynamic scene, and RANSAC (random sample consensus) is used to estimate the motion parameters of each model. The camera motion and the motion of each moving target are then transformed into a common coordinate system, yielding the camera's odometry result as well as the pose of each moving target at each time step. Finally, local-window bundle adjustment directly refines the camera pose and the camera poses relative to each moving target; the trajectories of the multiple motion models are optimized using the inliers of the camera motion model and the relative motion parameters obtained at each time step. Result: The proposed frame-to-frame motion segmentation achieves good, robust results, with per-frame segmentation accuracy close to 100%, which ensures accurate estimation of each model's parameters. The method estimates not only the camera pose but also the poses of salient moving targets; over each path segment, the average position error of both camera self-localization and moving-target localization is below 6%. Conclusion: The method simultaneously segments the camera's own motion model and the motion models of differently moving dynamic objects, estimates the absolute trajectories of the camera and each dynamic object, and thus realizes a multi-motion visual odometry pipeline.

8.
Objective: Convolutional neural networks are widely used in object detection; video object detection aims to classify and localize moving objects in image sequences. Building on still-image detectors, most existing video object detection methods exploit the temporal correlation peculiar to video to reduce the missed and false detections caused by occlusion and motion blur. Method: This paper proposes a video object detection model guided by two optical-flow networks. Within a two-stage detection framework, flow fields to neighbouring frames at different temporal distances are estimated with two different optical-flow networks: a small-displacement flow network for frames close to the current frame, and a large-displacement flow network for frames farther away. Guided by the estimated flow, the features of multiple neighbouring frames are fused to compensate the features of the current frame. Result: Experiments show an mAP (mean average precision) of 76.4%, which is 28.9%, 8.0%, 0.6%, and 0.2% higher than the TCN (temporal convolutional networks), TPN+LSTM (tubelet proposal network and long short-term memory network), D(&T loss), and FGFA (flow-guided feature aggregation) models, respectively. Conclusion: By exploiting temporal correlation through two optical-flow networks, the model accurately compensates the current frame's features from neighbouring frames, improves detection accuracy, and alleviates missed and false detections in video object detection.

9.
To improve the accuracy of 3D building models, BIM-based 3D building reconstruction requires further study. Existing methods are time-consuming and produce large errors between the reconstructed model and the actual building, i.e., they suffer from low efficiency and low accuracy. This paper applies optical see-through augmented reality to BIM-based 3D building reconstruction and proposes a reconstruction method based on see-through augmented reality. An initial 3D building model is constructed with BIM, and the camera's intrinsic and extrinsic parameters are computed with the direct linear transformation algorithm to complete camera calibration. Based on the calibration result, the LK optical flow method computes the optical flow of pixels in the image; the flow is filtered by direction threshold and magnitude to extract image matching points. Using the initial model, a 3D point cloud is built in space from the matching points of the building images, triangulated with the Delaunay method, and textured to complete the BIM-based 3D reconstruction. Simulation results show that the proposed method achieves high efficiency and high accuracy.

10.
郭黎  廖宇  陈为龙  廖红华  李军  向军 《计算机应用》2014,34(12):3580-3584
Any video camera has a limited temporal resolution; insufficient temporal resolution causes motion blur and motion aliasing in the captured video. Common remedies are spatial deblurring and temporal interpolation, but these do not address the root cause. This paper proposes a single-video temporal super-resolution reconstruction method based on maximum a posteriori (MAP) estimation. The reconstruction constraint determines the conditional probability model, the video's own temporal self-similarity provides the prior model, and the MAP estimate is then computed: a high-temporal-resolution video is reconstructed from a single low-temporal-resolution video. This effectively alleviates the "motion blur" caused by over-long camera exposure times and the "motion aliasing" caused by insufficient frame rates. Theoretical analysis and experiments demonstrate the effectiveness of the proposed method.

11.
3-D translational motion and structure from binocular image flows
Image flow fields from parallel stereo cameras are analyzed to determine the relative 3-D translational motion of the camera platform with respect to objects in view and to establish stereo correspondence of features in the left and right images. A two-step procedure is suggested. In the first step, translational motion parameters are determined from linear equations whose coefficients consist of sums of measured quantities in the two images. Separate equations are developed for the cases when measurements of either the full optical flow or the normal flow are available. This computation does not require feature-to-feature correspondence. In addition, no assumption is made about the surfaces being viewed. In the second step of the calculation, with the knowledge of the estimated translational motion parameters, the binocular flow information is used to find features in one image that correspond to given features in the other image. Experimental results with synthetic and laboratory images indicate that the method provides accurate results even in the presence of noise.

12.
We propose a new method for qualitative estimation of camera motion from a video sequence. The proposed method uses the properties of vanishing-point perspective to complement the information obtained from apparent motion, that is, a cooperative estimation from several visual cues. Focal length and rotational parameters are first retrieved using perspective; apparent motion is then used to retrieve the remaining parameters. The proposed method can retrieve all seven camera motions, including combinations of motions. Experiments confirm the usefulness of the additional information gained from perspective.

13.
This paper addresses the problem of non-rigid video registration, or the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed in a compact way as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector valued images, based on the dualization of the data term. This allows us to extend our approach to deal with colour images which results in significant improvements on the registration results. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground truth optical flow for evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state of the art optical flow and dense non-rigid registration algorithms.
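The low-rank subspace constraint can be illustrated with a plain SVD truncation of the trajectory matrix. This is a sketch only: the paper enforces the constraint softly inside a variational energy, rather than by the hard projection shown here:

```python
import numpy as np

def project_low_rank(W, r):
    """Project a trajectory matrix W (2T x P: stacked x/y coordinates of
    P points over T frames) onto its best rank-r approximation via
    truncated SVD -- the subspace that regularises non-rigid trajectories."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Synthetic trajectories that truly lie in a rank-2 motion subspace,
# corrupted by small per-measurement noise.
rng = np.random.default_rng(0)
basis = rng.normal(size=(20, 2))   # 2T x r motion basis
coeff = rng.normal(size=(2, 50))   # per-point coefficients
W = basis @ coeff
noisy = W + 0.01 * rng.normal(size=W.shape)
clean = project_low_rank(noisy, 2)
```

Because the true trajectories span only two basis motions, the projection removes most of the off-subspace noise while preserving the underlying deformation.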

14.
In this paper, we present a novel video stabilization method with a pixel-wise motion model. In order to avoid distortion introduced by traditional feature-point-based motion models, we focus on constructing a more accurate model to capture the motion in videos. By taking advantage of dense optical flow, we can obtain the dense motion field between adjacent frames and set up a pixel-wise motion model that is accurate enough. Our method first estimates the dense motion field between adjacent frames. A PatchMatch-based dense motion field estimation algorithm is proposed; this algorithm is specially designed for similar video frames rather than arbitrary images, to reach higher speed and better performance. Then, a simple and fast smoothing algorithm is performed to stabilize the jittered motion. After that, we warp input frames using a weighted-average algorithm to construct the output frames. Some pixels in the output frames may still be empty after the warping step, so in the last step these empty pixels are filled using a patch-based image completion algorithm. We test our method on many challenging videos and demonstrate the accuracy of our model and the effectiveness of our method.

15.
To address the video shake that arises when shooting with mobile phones, a video stabilization algorithm based on optical flow and Kalman filtering is proposed. The shaky video is first pre-stabilized with an optical-flow method; Shi-Tomasi corners are detected in the pre-stabilized frames and tracked with the LK algorithm, and the affine transformation between adjacent frames is estimated with RANSAC to obtain the original camera path. A Kalman filter then smooths the original path to obtain a smoothed camera path. Finally, from the relation between the original and smoothed paths, a compensation matrix is computed for each pair of adjacent frames, and each frame is geometrically warped accordingly to produce a stable output video. Experiments show that the algorithm performs well on six major categories of shaky video: the PSNR of the stabilized video is about 6.631 dB higher than that of the original video, the inter-frame structural similarity (SSIM) improves by about 40%, and the mean curvature value improves by about 8.3%.
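The Kalman smoothing stage can be sketched with a scalar constant-position filter over one path coordinate — illustrative tuning only; the paper's state model and noise parameters are not given here:

```python
import numpy as np

def kalman_smooth(path, q=1e-3, r=1.0):
    """Causal scalar Kalman filter over a 1-D camera path.
    q: process-noise variance, r: measurement-noise variance.
    A minimal sketch of the path-smoothing stage."""
    x, p = float(path[0]), 1.0
    out = []
    for z in path:
        p = p + q              # predict: uncertainty grows
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update toward the measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Ramp motion with alternating +/-0.5 frame jitter.
noisy_path = np.linspace(0, 10, 60) + np.where(np.arange(60) % 2 == 0, 0.5, -0.5)
smoothed_path = kalman_smooth(noisy_path)
```

With a small `q` relative to `r`, the steady-state gain is low, so the filter trusts its prediction and the frame-to-frame jitter is heavily damped (at the cost of some lag behind the true path).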

16.
This study investigates a variational, active curve evolution method for dense three-dimensional (3D) segmentation and interpretation of optical flow in an image sequence of a scene containing moving rigid objects viewed by a possibly moving camera. This method jointly performs 3D motion segmentation, 3D interpretation (recovery of 3D structure and motion), and optical flow estimation. The objective functional contains two data terms for each segmentation region, one based on the motion-only equation, which relates the essential parameters of 3D rigid body motion to optical flow, and the other on the Horn and Schunck optical flow constraint. It also contains two regularization terms for each region, one for optical flow and the other for the region boundary. The necessary conditions for a minimum of the functional result in concurrent 3D-motion segmentation, by active curve evolution via level sets, and linear estimation of each region's essential parameters and optical flow. Subsequently, the screw of 3D motion and regularized relative depth are recovered analytically for each region from the estimated essential parameters and optical flow. Examples are provided which verify the method and its implementation.

17.
Interference from camera motion is a major source of error in video-based vibration detection. To address this problem, a mutual-suppression consistent sampling method is proposed to effectively separate the vibration signal from the camera-motion signal in video, improving the reliability of vibration detection. Candidate feature points are extracted with the SURF (speeded-up robust features) algorithm, and a mutual-suppression measure between vibration and camera motion is designed to separate the candidates and obtain the feature points belonging to camera motion. The video frames are registered according to these camera-motion feature points, yielding a video sequence free of camera-motion interference. For the stabilized sequence, the Eulerian video vibration detection method is used to obtain the vibration frequency. Videos under different camera motions were collected, and the parameters of the mutual-suppression measure were estimated. Validation on the test set shows that the vibration-frequency accuracy obtained is better than that of existing non-contact vibration detection methods.
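The final frequency-estimation step — reading the dominant vibration frequency from a temporal signal of the stabilized video — can be sketched with an FFT. This is a simplified stand-in for the Eulerian analysis; the signal, frame rate, and names are illustrative:

```python
import numpy as np

def dominant_frequency(signal, fps):
    """Dominant frequency of a 1-D temporal signal (e.g. a pixel or
    region intensity over time) via the real FFT, after removing the
    DC component."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# 5 seconds of a 7 Hz vibration sampled at 120 frames/s.
fps = 120.0
t = np.arange(600) / fps
sig = np.sin(2 * np.pi * 7.0 * t)
f_est = dominant_frequency(sig, fps)
```

Frequency resolution is fps / N (here 0.2 Hz over 600 samples), which is why removing camera motion first matters: a drifting baseline would leak energy into low-frequency bins and can mask the true vibration peak.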

18.
This study investigates the problem of estimating camera calibration parameters from image motion fields induced by a rigidly moving camera with unknown parameters, where the image formation is modeled with a linear pinhole-camera model. The equations obtained show the flow to be separated into a component due to the translation and the calibration parameters and a component due to the rotation and the calibration parameters. A set of parameters encoding the latter component is linearly related to the flow, and from these parameters the calibration can be determined. However, as for discrete motion, in general it is not possible to decouple image measurements obtained from only two frames into translational and rotational components. Geometrically, the ambiguity takes the form of a part of the rotational component being parallel to the translational component, and thus the scene can be reconstructed only up to a projective transformation. In general, for full calibration at least four successive image frames are necessary, with the 3D rotation changing between the measurements. The geometric analysis gives rise to a direct self-calibration method that avoids computation of optical flow or point correspondences and uses only normal flow measurements. New constraints on the smoothness of the surfaces in view are formulated to relate structure and motion directly to image derivatives, and on the basis of these constraints the transformation of the viewing geometry between consecutive images is estimated. The calibration parameters are then estimated from the rotational components of several flow fields. As the proposed technique neither requires a special set up nor needs exact correspondence it is potentially useful for the calibration of active vision systems which have to acquire knowledge about their intrinsic parameters while they perform other tasks, or as a tool for analyzing image sequences in large video databases.

19.
《Real》1999,5(4):231-241
In order to provide sophisticated access methods to the contents of video servers, it is necessary to automatically process and represent each video through a number of visual indexes. We focus on two tasks, namely the hierarchical representation of a video as a sequence of uniform segments (shots), and the characterization of each shot by a vector describing the camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analysing motion vectors. Adaptability to different compression qualities is achieved by learning different classification masks. For the second task, the optical flow is processed in order to distinguish between stationary and moving shots. A least-squares fitting procedure determines the pan/tilt/zoom camera parameters within shots that present regular motion. Each shot is then indexed by a vector representing the dominant motion components and the type of motion. In order to maximize processing speed, all techniques directly process and analyse MPEG-1 motion vectors, without the need for video decompression. An overall processing rate of 59 frames/s is achieved in software. The successful classification performance, evaluated on various news video clips for a total of 61 023 frames, attains 97.7% for the shot segmentation, 88.4% for the stationary vs. moving shot classification, and 94.7% for the detailed camera motion characterization.
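The least-squares fit of pan/tilt/zoom parameters to block motion vectors can be sketched with the simplified linear model u = pan + zoom·x, v = tilt + zoom·y, where (x, y) are block centres relative to the image centre. This is illustrative; the paper fits MPEG-1 motion vectors and may use a richer parameterization:

```python
import numpy as np

def fit_pan_tilt_zoom(xs, ys, us, vs):
    """Least-squares fit of u = pan + zoom*x, v = tilt + zoom*y to a
    set of motion vectors (u, v) at block centres (x, y)."""
    n = len(xs)
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    A[:n, 0] = 1.0          # pan affects u
    A[:n, 2] = xs           # zoom scales with x
    b[:n] = us
    A[n:, 1] = 1.0          # tilt affects v
    A[n:, 2] = ys           # zoom scales with y
    b[n:] = vs
    (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
    return pan, tilt, zoom

# Synthetic motion field generated with pan=2, tilt=-1, zoom=0.05.
xs = np.array([-8.0, -4.0, 0.0, 4.0, 8.0] * 2)
ys = np.array([-5.0] * 5 + [5.0] * 5)
us = 2.0 + 0.05 * xs
vs = -1.0 + 0.05 * ys
pan, tilt, zoom = fit_pan_tilt_zoom(xs, ys, us, vs)
```

Because zoom couples the u and v equations through a single shared parameter, stacking both into one system gives a better-conditioned fit than solving the axes independently.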

20.
丁丁  张小国 《测控技术》2020,39(8):76-81
In wide-area fixed-point surveillance, camera preset positions are often insufficient to cover the whole monitored area, while capturing images of monitoring points with the camera's own 3D positioning function is too slow and may suffer from positioning errors. To address these problems, a virtual-preset image extraction and registration method is proposed for wide-area fixed-point monitoring of illegal land use and unauthorized construction. Two consecutive frames are converted to grayscale and median-filtered to remove noise; the pyramidal Lucas-Kanade optical flow algorithm computes the flow of strong corners in the previous frame, and the inter-frame motion vector is obtained by fitting an affine transformation. Integrating the motion vectors between adjacent frames gives the total motion vector of each frame; the frame whose total motion vector is closest to that of the virtual preset is then extracted, yielding the required new-epoch virtual-preset image. Experimental results show that the algorithm extracts time-separated comparison images of the same monitoring point more quickly, with an image overlap rate above 95%; both the overlap rate and the image quality fully meet the needs of routine monitoring.
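The selection step — integrating inter-frame motion vectors and picking the frame whose accumulated displacement is closest to the virtual preset — can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def closest_frame(frame_motions, preset):
    """Accumulate per-frame 2-D motion vectors and return the index of
    the frame whose total displacement is closest (Euclidean distance)
    to the virtual-preset vector."""
    total = np.cumsum(np.asarray(frame_motions, float), axis=0)
    dists = np.linalg.norm(total - np.asarray(preset, float), axis=1)
    return int(np.argmin(dists))

# Per-frame motion vectors of a slow pan; the virtual preset is (6, 1).
motions = [(2, 0), (2, 0), (2, 1), (2, 1), (2, 1)]
idx = closest_frame(motions, preset=(6, 1))
```

Here the accumulated displacements are (2,0), (4,0), (6,1), (8,2), (10,3), so the third frame (index 2) matches the preset exactly.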
