Similar Documents
20 similar documents found.
1.
For detecting targets in infrared images taken from a moving platform, a detection method based on a MAX pyramid and an affine motion model is proposed. The data structure used to represent image information is key to successful image processing; according to the characteristics of infrared images and the demands of real-time processing, a new image pyramid structure, the MAX pyramid, is constructed. Its distinguishing features are speed and ease of implementation in logic circuits. Platform motion is described with an affine motion model, and MAX-pyramid-based affine motion estimation and alignment remove the platform's ego-motion from the images. The current target is obtained from the residual image, and the target position in the previous frame is compensated for the platform motion, so that the current target's position and displacement are detected.
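The abstract does not spell out the MAX pyramid's construction rule; a minimal Python sketch, assuming each level halves the resolution by taking the maximum over non-overlapping 2×2 blocks (consistent with the claimed speed and suitability for logic circuits):

```python
import numpy as np

def max_pyramid(image, levels):
    """Build a MAX pyramid: each level keeps the 2x2 block maximum.

    Only comparisons are needed (no multiplies), which is what makes
    this structure fast and easy to map onto logic circuits.
    """
    pyramid = [image.astype(np.uint8)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        prev = prev[:h, :w]
        # Maximum over each non-overlapping 2x2 block.
        block_max = np.maximum(
            np.maximum(prev[0::2, 0::2], prev[0::2, 1::2]),
            np.maximum(prev[1::2, 0::2], prev[1::2, 1::2]),
        )
        pyramid.append(block_max)
    return pyramid

if __name__ == "__main__":
    frame = (np.random.rand(240, 320) * 255).astype(np.uint8)
    for level, img in enumerate(max_pyramid(frame, 4)):
        print(level, img.shape)
```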

2.
To allow remotely sensed datasets to be used for data fusion, either to gain additional insight into the scene or for change detection, reliable spatial referencing is required. With modern remote sensing systems, reliable registration can be gained by applying an orbital model for spaceborne data or through the use of global positioning (GPS) and inertial navigation (INS) systems in the case of airborne data. Whilst, individually, these datasets appear well registered, when compared to a second dataset from another source (e.g., optical to LiDAR or optical to radar) the resulting images may still be several pixels out of alignment. Manual registration techniques are often slow and labour-intensive, and although an improvement in registration is gained, there can still be some misalignment of the datasets. This paper outlines an approach for automatic image-to-image registration in which a topologically regular grid of tie points was imposed within the overlapping region of the images. To ensure topological consistency, tie points were stored within a network structure inspired by Kohonen's self-organising networks [24]. The network was used to constrain the motion of the tie points in a manner similar to Kohonen's original method. Using multiple resolutions, through an image pyramid, the network structure was formed at each resolution level, with connections between the resolution levels allowing tie point movements to be propagated within and to all levels. Experiments were carried out using a range of manually registered multi-modal remotely sensed datasets into which known linear and non-linear transformations were introduced, against which our algorithm's performance was tested. For single-modality tests with no introduced transformation, a mean error of 0.011 pixels was identified, increasing to 3.46 pixels using multi-modal image data. Following the introduction of a series of translations, a mean error of 4.98 pixels was achieved across all image pairs, while a mean error of 7.12 pixels was identified for a series of non-linear transformations. Experiments using optical reflectance and height data were also conducted to compare the manually and automatically produced results, where it was found that the automatic results outperformed the manual results. Some limitations of the network data structure were identified when dealing with very large errors, but overall the algorithm produced results similar to, and in some cases an improvement over, those of a manual operator. We have also positively compared our method to methods from two other software packages: ITK and ITT ENVI.
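The abstract only outlines the network structure; the sketch below illustrates the general idea of constraining a regular tie-point grid through neighbourhood smoothing, in the spirit of Kohonen's self-organising networks. Function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def update_tie_points(grid, displacements, n_iters=10, neighbour_weight=0.25):
    """Relax a regular grid of tie points toward their measured matching
    offsets while keeping the grid topologically consistent.

    grid          : (rows, cols, 2) array of tie-point coordinates
    displacements : (rows, cols, 2) raw per-point matching offsets

    Each iteration pulls every interior node toward the mean of its four
    neighbours, suppressing topological folds much as the neighbourhood
    function does in Kohonen's self-organising networks.
    """
    pos = grid + displacements  # unconstrained target positions
    for _ in range(n_iters):
        smoothed = pos.copy()
        smoothed[1:-1, 1:-1] = (pos[:-2, 1:-1] + pos[2:, 1:-1] +
                                pos[1:-1, :-2] + pos[1:-1, 2:]) / 4.0
        pos = (1 - neighbour_weight) * pos + neighbour_weight * smoothed
    return pos

if __name__ == "__main__":
    g = np.dstack(np.meshgrid(np.arange(8.0), np.arange(8.0)))
    d = 0.3 * np.random.randn(8, 8, 2)
    print(update_tie_points(g, d).shape)  # (8, 8, 2)
```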

3.
In this paper a method for segmenting and tracking moving objects from the so-called permanency matrix is introduced. Our motion-based algorithms obtain the shapes of moving objects in video sequences starting from those image pixels where a change in grey level is detected between two consecutive frames, by means of the permanency values. In the segmentation phase, matching between objects along the image sequence is performed using fuzzy two-dimensional rectangular regions. The tracking phase performs the association between the various fuzzy regions in all the images through time. Finally, the analysis phase describes motion through a long video sequence. The segmentation, tracking and analysis phases are enhanced through the use of fuzzy logic techniques, which make it possible to handle the uncertainty in the permanency values caused by the image noise inherent in computer vision.
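As a rough sketch of the permanency idea (the charge/discharge constants are assumptions, not the paper's values): a per-pixel accumulator is charged wherever consecutive frames differ in grey level and decays elsewhere, so sustained motion leaves high permanency values.

```python
import numpy as np

def update_permanency(perm, prev_frame, frame, diff_thresh=15,
                      charge=8, discharge=1, max_val=255):
    """Charge pixels whose grey level changed between consecutive frames;
    let unchanged pixels decay. High permanency values trace the shapes
    of moving objects."""
    changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    perm = perm.astype(int)
    perm[changed] = np.minimum(perm[changed] + charge, max_val)
    perm[~changed] = np.maximum(perm[~changed] - discharge, 0)
    return perm.astype(np.uint8)
```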

4.
This paper proposes an effective approach to detect and segment moving objects from two time-consecutive stereo frames, which leverages the uncertainties in both camera motion estimation and disparity computation. First, the relative camera motion and its uncertainty are computed by tracking and matching sparse features across the four images. Then, the motion likelihood at each pixel is estimated by taking into account the uncertainties of both the ego-motion and the disparity computation. Finally, the motion likelihood, color and depth cues are combined in a graph-cut framework for moving object segmentation. The proposed method is evaluated on the KITTI benchmarking datasets, and our experiments show that it is robust against both global (camera motion) and local (optical flow) noise. Moreover, the approach is dense, as it applies to all pixels in an image, and even partially occluded moving objects can be detected successfully. Without a dedicated tracking strategy, our approach achieves high recall and comparable precision on the KITTI benchmarking sequences.
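A hedged sketch of the motion-likelihood step: compare each pixel's observed position against the position predicted by the ego-motion, and normalise the residual by the combined ego-motion and disparity uncertainties. The Gaussian form and all names are assumptions, not the paper's formulation.

```python
import numpy as np

def motion_likelihood(predicted_px, observed_px, sigma_ego, sigma_disp):
    """Per-pixel likelihood that a point moved independently of the camera.

    predicted_px : (H, W, 2) positions predicted by the ego-motion
    observed_px  : (H, W, 2) measured positions (e.g. from optical flow)
    sigma_ego    : std. dev. of the ego-motion prediction (pixels)
    sigma_disp   : std. dev. contributed by disparity error (pixels)
    """
    residual = np.linalg.norm(observed_px - predicted_px, axis=-1)
    sigma = np.hypot(sigma_ego, sigma_disp)  # combined uncertainty
    # High when the residual cannot be explained by measurement noise.
    return 1.0 - np.exp(-0.5 * (residual / sigma) ** 2)
```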

5.
This paper presents a novel method to accurately detect moving objects from a video sequence captured using a nonstationary camera. Although common methods provide effective motion detection for static backgrounds or through only a planar-perspective transformation, many detection errors occur when the background contains complex dynamic interferences or the camera undergoes unknown motions. To solve this problem, this study proposes a motion detection method that incorporates temporal motion and spatial structure. In the proposed method, first, spatial semantic planes are segmented, and image registration based on stable background planes is applied to overcome the interference of the foreground and dynamic background; the resulting dense temporal motion estimate ensures that small moving objects are not missed. Second, motion pixels are mapped onto the semantic planes, and the spatial distribution constraints of motion pixels, regional shapes and plane semantics, integrated into a planar structure, are used to minimise false positives. Finally, based on the dense temporal motion and the spatial structure, moving objects are accurately detected. Experimental results on the CDnet dataset, the Pbi dataset, the Aeroscapes dataset, and other challenging self-captured videos under difficult conditions, such as fast camera movement, large zoom variation, video jitter, and dynamic background, revealed that the proposed method can remove background movements, dynamic interferences, and marginal noises, and can effectively obtain complete moving objects.
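The plane segmentation itself is beyond a short example, but the background-plane registration step can be sketched as follows, assuming a binary mask of stable background-plane pixels is available (OpenCV calls; all parameters are illustrative):

```python
import cv2
import numpy as np

def register_on_background_plane(prev_gray, gray, plane_mask):
    """Estimate frame-to-frame registration from features restricted to a
    stable background plane; foreground and dynamic regions are excluded
    via the mask so they cannot corrupt the registration.

    plane_mask : uint8 mask, non-zero on stable background-plane pixels
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7,
                                  mask=plane_mask)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(pts[good], nxt[good], cv2.RANSAC, 3.0)
    return H  # 3x3 homography registering prev_gray onto gray
```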

6.
Video moving object segmentation in embedded systems
肖德贵, 王蕴泽. 《计算机应用》, 2006, 26(3): 598-600.
A video moving-object segmentation algorithm for embedded systems is proposed. Moving pixels are first extracted by frame differencing; moving objects are then distinguished from the dynamic background by the frequency with which each pixel's state changes, and a weighted state matrix adaptively merges sudden global illumination changes and dynamic-background pixels into the background, so that moving objects are segmented and tracked. Experimental results show that the algorithm achieves good real-time tracking of moving targets on an embedded system.
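A toy sketch of the state-change-frequency idea, with illustrative thresholds: pixels that flicker constantly across the window are treated as dynamic background, while pixels showing a sustained recent change are kept as moving-object candidates.

```python
import numpy as np

def segment_moving(frames, diff_thresh=20, freq_thresh=0.3):
    """Toy version of the scheme above: pixels whose state flips very
    frequently over the window are treated as dynamic background, while
    pixels with a sustained recent change are kept as moving objects.

    frames : list of consecutive greyscale frames (2-D uint8 arrays)
    """
    stack = np.stack([f.astype(int) for f in frames])        # (T, H, W)
    changed = np.abs(np.diff(stack, axis=0)) > diff_thresh   # (T-1, H, W)
    change_freq = changed.mean(axis=0)                       # flicker rate
    # Changed in the latest frame pair, but not flickering like background.
    return changed[-1] & (change_freq < freq_thresh)
```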

7.
Computing occluding and transparent motions
Computing the motions of several moving objects in image sequences involves simultaneous motion analysis and segmentation. This task can become complicated when image motion changes significantly between frames, as with camera vibrations. Such vibrations make tracking in longer sequences harder, as temporal motion constancy cannot be assumed. The problem becomes even more difficult in the case of transparent motions. A method is presented for detecting and tracking occluding and transparent moving objects, which uses temporal integration without assuming motion constancy. Each new frame in the sequence is compared to a dynamic internal representation image of the tracked object. The internal representation image is constructed by temporally integrating frames after registration based on the motion computation. The temporal integration maintains sharpness of the tracked object, while blurring objects that have other motions. Comparing new frames to the internal representation image causes the motion analysis algorithm to continue tracking the same object in subsequent frames, and to improve the segmentation.
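The temporal-integration step lends itself to a short sketch: warp each new frame into the tracked object's coordinate frame using the computed motion and blend it into the internal representation image. The blend factor is an assumption.

```python
import cv2
import numpy as np

def integrate_frame(internal, frame, affine_2x3, alpha=0.9):
    """Register a new frame to the tracked object's coordinate frame and
    blend it into the internal representation image: the tracked object
    stays sharp while objects with other motions are blurred out.

    internal   : float32 internal representation image
    frame      : new greyscale frame (uint8)
    affine_2x3 : computed motion mapping the frame into internal coords
    """
    h, w = internal.shape[:2]
    registered = cv2.warpAffine(frame, affine_2x3, (w, h)).astype(np.float32)
    return alpha * internal + (1.0 - alpha) * registered
```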

8.
In this paper, we propose a method for detecting humans and vehicles in imagery taken from a UAV. This is a challenging problem due to the limited number of pixels on target, which makes it more difficult to distinguish objects from background clutter and results in a much larger search space. We propose a method for constraining the search based on a number of geometric constraints obtained from the metadata. Specifically, we obtain the orientation of the ground-plane normal, the orientation of shadows cast by out-of-plane objects in the scene, and the relationship between object heights and the size of their corresponding shadows. We use this information in a geometry-based shadow and ground-plane-normal blob detector, which provides an initial estimate of the locations of shadow-casting out-of-plane (SCOOP) objects in the scene. These SCOOP candidate locations are then classified as either human or clutter using a combination of wavelet features and a Support Vector Machine. To detect vehicles, we similarly find potential vehicle candidates by combining SCOOP and inverted-SCOOP candidates and then classify them using wavelet features and an SVM. Our method works on a single frame and, unlike motion-detection-based methods, bypasses the entire pipeline of registration, motion detection, and tracking. This allows the detection of stationary and slowly moving humans and vehicles while avoiding a search across the entire image, enabling accurate and fast localization. We show impressive results on sequences from the VIVID and CLIF datasets and provide a comparative analysis.
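The classification stage might be sketched as follows, using multi-level 2-D wavelet coefficients as features for an SVM; the wavelet choice and decomposition level are assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(chip, wavelet="haar", level=2):
    """Flatten the multi-level 2-D wavelet coefficients of a fixed-size
    candidate image chip into a feature vector."""
    coeffs = pywt.wavedec2(chip.astype(float), wavelet, level=level)
    parts = [coeffs[0].ravel()]
    for cH, cV, cD in coeffs[1:]:
        parts.extend([cH.ravel(), cV.ravel(), cD.ravel()])
    return np.concatenate(parts)

# Hypothetical usage with labelled SCOOP candidate chips:
#   X = np.stack([wavelet_features(c) for c in train_chips])
#   clf = SVC(kernel="rbf").fit(X, train_labels)   # human vs. clutter
#   label = clf.predict([wavelet_features(candidate_chip)])
```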

9.
A fast and efficient registration method for generating panoramas from video images or video image sequences is described. To estimate the correction parameters for image registration, the method computes pseudo motion vectors, which are coarse estimates of the optical flow at each selected pixel. Using this method, software was implemented that can create and display panoramic images in real time on a low-cost PC.
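A sketch of the idea using OpenCV: compute coarse flow ("pseudo motion vectors") at a sparse grid of selected pixels and fit a global transform for panorama registration. The grid step and RANSAC fitting are illustrative choices, not the paper's.

```python
import cv2
import numpy as np

def pseudo_motion_registration(prev_gray, gray, grid_step=32):
    """Coarse optical flow at a sparse grid of selected pixels ('pseudo
    motion vectors'), then a global similarity transform for stitching
    the frame into the panorama."""
    h, w = prev_gray.shape
    ys, xs = np.mgrid[grid_step // 2:h:grid_step, grid_step // 2:w:grid_step]
    pts = np.stack([xs.ravel(), ys.ravel()],
                   axis=1).astype(np.float32).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                       method=cv2.RANSAC)
    return M  # 2x3 correction transform between consecutive frames
```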

10.
The detection of moving objects is a crucial step for many video surveillance applications, whether using a visible (VIS) camera or an infrared (IR) one. To profit from both types, several fusion methods have been proposed in the literature: low-level fusion, medium-level fusion and high-level fusion. Low-level fusion is the most widely used for moving-object detection in the IR and VIS spectra. In this paper, we present an overview of the different moving-object detection methods in the IR and VIS spectra and a state of the art of low-level fusion techniques. Moreover, we propose a new method for moving-object detection using low-level fusion of the IR and VIS spectra. To evaluate the proposed method quantitatively and qualitatively, three series of experiments were carried out using two well-known datasets, namely the "OSU Color-Thermal Database" and the "INO-Database"; the evaluations are promising and demonstrate the effectiveness of the proposed method.
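A minimal example of low-level (pixel-level) fusion, assuming the IR and visible frames are already registered; the equal weighting is an assumption:

```python
import cv2
import numpy as np

def low_level_fusion(vis_gray, ir, w_vis=0.5):
    """Pixel-level (low-level) fusion of registered visible and infrared
    frames; the fused image is then fed to a single moving-object
    detector, e.g. background subtraction."""
    ir = cv2.resize(ir, (vis_gray.shape[1], vis_gray.shape[0]))
    fused = (w_vis * vis_gray.astype(np.float32) +
             (1.0 - w_vis) * ir.astype(np.float32))
    return fused.astype(np.uint8)

# e.g. subtractor = cv2.createBackgroundSubtractorMOG2()
#      mask = subtractor.apply(low_level_fusion(vis_frame, ir_frame))
```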

11.
A method for spatio-temporally smooth and consistent estimation of cardiac motion from MR cine sequences is proposed. Myocardial motion is estimated within a four-dimensional (4D) registration framework, in which all three-dimensional (3D) images obtained at different cardiac phases are simultaneously registered. This facilitates spatio-temporally consistent estimation of motion as opposed to other registration-based algorithms which estimate the motion by sequentially registering one frame to another. To facilitate image matching, an attribute vector (AV) is constructed for each point in the image, and is intended to serve as a "morphological signature" of that point. The AV includes intensity, boundary, and geometric moment invariants (GMIs). Hierarchical registration of two image sequences is achieved by using the most distinctive points for initial registration of two sequences and gradually adding less-distinctive points to refine the registration. Experimental results on real data demonstrate good performance of the proposed method for cardiac image registration and motion estimation. The motion estimation is validated via comparisons with motion estimates obtained from MR images with myocardial tagging.
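A simplified version of the attribute vector, substituting Hu moment invariants for the paper's geometric moment invariants (GMIs); the patch size and feature choices are assumptions:

```python
import cv2
import numpy as np

def attribute_vector(image, x, y, radius=8):
    """A simplified 'morphological signature' for one pixel: intensity, a
    boundary cue (gradient magnitude), and Hu moment invariants of the
    surrounding patch."""
    img = image.astype(np.float32)
    patch = img[max(0, y - radius):y + radius + 1,
                max(0, x - radius):x + radius + 1]
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)[y, x]
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)[y, x]
    hu = cv2.HuMoments(cv2.moments(patch)).ravel()
    return np.concatenate([[img[y, x], np.hypot(gx, gy)], hu])
```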

12.
Extracting moving objects from dynamic scenes is a key problem in video analysis and a hot topic in computer vision and image processing. This paper proposes a new moving-object extraction algorithm for dynamic scenes. The algorithm first computes the global motion parameters according to a global camera-motion model, and then obtains a segmented foreground using three-frame differencing. Pixels segmented as background are mapped to neighbouring frames to estimate the mean and variance of each pixel's Gaussian model when it is background. Finally, a particle filter predicts the foreground region of the next frame, the probability of each pixel being foreground is computed, and the video segmentation of the moving object is obtained. Experiments show that the algorithm effectively overcomes the accumulated error caused by biased estimates of the global-motion-model parameters, and segments targets in diving-sports videos with higher accuracy.
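The core differencing step, sketched under the assumption that frames have already been compensated for global camera motion:

```python
import cv2
import numpy as np

def three_frame_diff(f0, f1, f2, thresh=25):
    """Three-frame differencing: a pixel is foreground only if it differs
    from BOTH the previous and the next frame, which suppresses the
    ghosting left by simple two-frame differencing."""
    d01 = cv2.absdiff(f1, f0)
    d12 = cv2.absdiff(f2, f1)
    both = (d01 > thresh) & (d12 > thresh)
    return (both * 255).astype(np.uint8)
```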

13.
The registration of images from multiple types of sensors (particularly infrared sensors and visible color sensors) is a step toward achieving multi-sensor fusion. This paper proposes a registration method using a novel error function. Registration of infrared and visible color images is performed using the trajectories of moving objects obtained through background subtraction and simple tracking. The trajectory points are matched using a RANSAC-based algorithm and a novel registration criterion based on the overlap of foreground pixels in composite foreground images. This criterion allows registration to be performed when there are few trajectories and gives more stable results. Our method was tested and its performance quantified using nine scenarios. It outperforms a related method based only on trajectory points in cases where there are few moving objects.
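The registration criterion can be sketched compactly: score each RANSAC hypothesis by the overlap of warped foreground pixels in the composite foreground image. Names are illustrative.

```python
import cv2
import numpy as np

def foreground_overlap(transform_2x3, fg_ir, fg_vis):
    """Score a candidate IR-to-visible transform by the fraction of warped
    IR foreground pixels that land on visible-spectrum foreground; used
    to rank RANSAC hypotheses fitted on trajectory points."""
    h, w = fg_vis.shape
    warped = cv2.warpAffine(fg_ir, transform_2x3, (w, h),
                            flags=cv2.INTER_NEAREST)
    inter = np.logical_and(warped > 0, fg_vis > 0).sum()
    return inter / max(1, (warped > 0).sum())
```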

14.
In this work, we propose a new integrated framework that addresses the problems of thermal-visible video registration, sensor fusion, and people tracking for far-range videos. The video registration is based on RANSAC trajectory-to-trajectory matching, which estimates an affine transformation matrix that maximizes the overlap of thermal and visible foreground pixels. Sensor fusion uses the aligned images to compute sum-rule silhouettes and then constructs thermal-visible object models. Finally, multiple-object tracking uses the blobs constructed in sensor fusion to output the trajectories. Results demonstrate that our proposed framework obtains better results for both image registration and tracking than separate image registration and tracking methods.
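A minimal sketch of the sum-rule silhouette step, assuming aligned per-pixel foreground probabilities from each sensor:

```python
import numpy as np

def sum_rule_silhouette(fg_prob_ir, fg_prob_vis, thresh=0.5):
    """Sum-rule fusion of aligned thermal and visible per-pixel foreground
    probabilities into a single silhouette mask."""
    fused = 0.5 * (fg_prob_ir + fg_prob_vis)
    return fused > thresh
```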

15.
For tracking small moving targets in aerial video with 3D scene occlusion, a detection and tracking algorithm based on multi-view aerial registration is proposed. The algorithm first samples the image sequence at intervals and extracts global feature points with a Harris detector; initial matching of the images to be registered is achieved through a Delaunay triangulation. Difference images are then computed using an integrated transformation model, and targets are detected from the accumulated energy. Finally, Kalman motion filtering removes jitter from the target tracks. Experimental results show that, on aerial video of urban and suburban scenes, the algorithm can detect slowly moving targets as small as 30 pixels.
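The initial-matching structure can be sketched with OpenCV and SciPy: Harris corners plus their Delaunay triangulation. Parameters are illustrative.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def harris_points_with_triangulation(gray, max_pts=300):
    """Harris corners plus their Delaunay triangulation, the structure the
    abstract uses for the initial matching between views."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_pts,
                                  qualityLevel=0.01, minDistance=10,
                                  useHarrisDetector=True, k=0.04)
    pts = pts.reshape(-1, 2)
    tri = Delaunay(pts)
    return pts, tri.simplices  # corner coordinates and triangle indices
```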

16.
Under the assumption that background pixels appear with high frequency in the image sequence, and exploiting the multi-resolution property of the DWT, an adaptive online-clustering method for moving-object extraction is proposed. First, the approximation component is extracted from the video image sequence via the DWT; then the background is reconstructed by per-pixel clustering, combined with an adaptive dynamic threshold and merging of similar clusters; finally, the accuracy of the reconstructed background is verified using an image-matching evaluation criterion. Experimental results show that the method extracts moving objects accurately and quickly, and is robust to environment changes, targets moving back and forth, and multiple moving targets.
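A hedged sketch of the pipeline's first two stages: take the DWT approximation component, then cluster each pixel's values online and keep the most frequently hit centre as background. The cluster count and tolerance are assumptions, and the paper's adaptive dynamic threshold is simplified to a fixed one.

```python
import numpy as np
import pywt

def approx_component(frame):
    """Low-frequency DWT approximation used in place of the raw frame."""
    cA, _ = pywt.dwt2(frame.astype(float), "haar")
    return cA

def online_cluster_background(approx_seq, k=3, tol=10.0):
    """Per-pixel online clustering of approximation values; the most
    frequently hit cluster centre is taken as background, following the
    assumption that background values occur most often."""
    h, w = approx_seq[0].shape
    centres = np.zeros((k, h, w))
    counts = np.zeros((k, h, w))
    for a in approx_seq:
        dist = np.abs(centres - a[None])                    # (k, H, W)
        best = np.argmin(dist, axis=0)                      # nearest centre
        hit = np.take_along_axis(dist, best[None], axis=0)[0] < tol
        for ki in range(k):
            sel = (best == ki) & hit
            counts[ki][sel] += 1
            # incremental mean update of the matched centre
            centres[ki][sel] += (a[sel] - centres[ki][sel]) / counts[ki][sel]
        # where no centre matched, re-seed the weakest cluster
        weakest = np.argmin(counts, axis=0)
        for ki in range(k):
            sel = ~hit & (weakest == ki)
            centres[ki][sel] = a[sel]
            counts[ki][sel] = 1
    idx = np.argmax(counts, axis=0)[None]                   # background centre
    return np.take_along_axis(centres, idx, axis=0)[0]
```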

17.
18.
19.
A computational framework for registration of image sequences
A computational framework for temporal and spatial registration of image sequences from multiple sensors is proposed. The framework applies to scenarios where the cameras are stationary, moving targets are present in the captured sequences, and the beginning of each sequence shows a static background. First, the static backgrounds are registered to obtain an initial estimate of the spatial transformation; then an initial estimate of the temporal transformation is obtained from the correspondences between the centroids of the moving targets; finally, the final result is computed by combining the common information. The framework achieves sub-pixel spatial registration accuracy and sub-frame temporal registration accuracy, and has been successfully applied to registration experiments on visible/infrared image sequences.
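The temporal-initialisation step can be sketched simply: choose the integer frame shift that best aligns the two sensors' target-centroid trajectories (spatial registration assumed already applied); sub-frame accuracy would come from refining around that shift. All names are illustrative.

```python
import numpy as np

def estimate_temporal_offset(traj_a, traj_b, max_shift=30):
    """Initial temporal registration: the integer frame shift minimising
    the mean distance between two sensors' target-centroid trajectories
    (spatial registration already applied)."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -s), min(len(a), len(b) - s)
        if hi - lo < 2:   # not enough overlap to compare
            continue
        err = np.mean(np.linalg.norm(a[lo:hi] - b[lo + s:hi + s], axis=1))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```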

20.
We present a new variational method for multi-view stereovision and non-rigid three-dimensional motion estimation from multiple video sequences. Our method minimizes the prediction error of the shape and motion estimates. Both problems then translate into a generic image registration task, which is entrusted to a global measure of image similarity, chosen depending on imaging conditions and scene properties. Rather than integrating a matching measure computed independently at each surface point, our approach computes a global image-based matching score between the input images and the predicted images. The matching process fully handles projective distortion and partial occlusions. Neighborhood as well as global intensity information can be exploited to improve robustness to appearance changes due to non-Lambertian materials and illumination changes, without any approximation of shape, motion or visibility. Moreover, our approach results in a simpler, more flexible, and more efficient implementation than existing methods. The computation time on large datasets does not exceed thirty minutes on a standard workstation. Finally, our method is compatible with a hardware implementation on graphics processing units. Our stereovision algorithm yields very good results on a variety of datasets, including ones with specularities and translucency. We have successfully tested our motion estimation algorithm on a very challenging multi-view video sequence of a non-rigid scene. Electronic supplementary material is available for this article and accessible for authorised users.
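As one concrete instance of the global image-similarity measure mentioned above, a whole-image normalised cross-correlation between a predicted image and the corresponding input image, restricted to pixels the prediction covers (a sketch; the paper's actual measure is chosen per dataset):

```python
import numpy as np

def global_ncc(predicted, observed, valid):
    """Whole-image normalised cross-correlation between a predicted image
    and the corresponding input image, restricted to the pixels the
    prediction actually covers (handles partial occlusion)."""
    p = predicted[valid].astype(float)
    o = observed[valid].astype(float)
    p -= p.mean()
    o -= o.mean()
    denom = np.sqrt((p * p).sum() * (o * o).sum())
    return float((p * o).sum() / denom) if denom > 0 else 0.0
```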
