Similar Documents
20 similar documents found (search time: 31 ms)
1.
In moving-object detection against a dynamic background, the target and the background move independently, so background changes induced by the mobile robot's own motion must be accounted for when extracting the foreground target. Affine transformation is widely used to estimate the background transformation between images. However, when an omnidirectional vision sensor (ODVS) is used on a mobile robot, the distortion of the omnidirectional image makes the background motion inconsistent across the image, so a single affine transformation cannot describe the background motion of the whole omnidirectional image. The image is therefore divided into a grid of windows, an affine transformation is applied to each window separately, and moving-object regions are obtained from the background-compensated frame difference. Finally, based on the imaging characteristics of the ODVS, the range and bearing of moving obstacles are resolved visually. Experimental results show that the proposed method accurately detects moving obstacles within the robot's full 360° surroundings and localizes them precisely, effectively improving the robot's real-time obstacle avoidance.
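The grid-wise compensation step can be sketched as follows; this is a minimal NumPy illustration assuming the per-window affine parameters have already been estimated (all names are illustrative, not from the paper):

```python
import numpy as np

def warp_affine(window, A, t):
    """Warp one window with affine params (A, t) via inverse nearest-neighbour sampling."""
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()])          # (2, h*w) destination coords
    src = np.linalg.inv(A) @ (coords - t[:, None])       # inverse map into the source
    sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
    return window[sy, sx].reshape(h, w)

def compensated_diff(prev, curr, grid, affines, thresh=10):
    """Per-window background compensation followed by frame differencing."""
    diff = np.zeros_like(curr, dtype=bool)
    gh, gw = grid
    H, W = curr.shape
    for gy in range(0, H, gh):
        for gx in range(0, W, gw):
            A, t = affines[(gy, gx)]                     # this window's affine estimate
            pw = prev[gy:gy + gh, gx:gx + gw]
            cw = curr[gy:gy + gh, gx:gx + gw]
            comp = warp_affine(pw, A, t)
            diff[gy:gy + gh, gx:gx + gw] = np.abs(cw.astype(int) - comp.astype(int)) > thresh
    return diff
```

Pixels surviving the compensated difference are candidate moving-obstacle regions; range and bearing would then follow from the ODVS imaging geometry.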

2.
A new genetic-algorithm method for recovering the 3D shape of objects is proposed. A fixed camera captures an image sequence under a spherical extended light source placed at different positions, and a genetic algorithm rapidly searches for the normal vector at every point of the object surface. Experimental results show that the method effectively recovers 3D shape under extended light sources; it not only relaxes the constraints on the light source and the object surface, but also markedly improves accuracy and robustness.
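The per-point normal search can be illustrated with a toy genetic algorithm under a Lambertian model; the light directions, population size, and selection scheme below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ga_normal(intensities, lights, pop=60, gens=120, seed=0):
    """Genetic search for the unit surface normal best explaining the observed intensities."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(pop, 3))
    P[:, 2] = np.abs(P[:, 2])                            # normals face the camera
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    for g in range(gens):
        # fitness: squared error between rendered and observed intensities
        err = ((np.clip(P @ lights.T, 0.0, None) - intensities) ** 2).sum(axis=1)
        elite = P[np.argsort(err)[:pop // 2]]            # elitist selection
        sigma = 0.3 * (1.0 - g / gens) + 0.01            # decaying mutation scale
        children = elite + rng.normal(scale=sigma, size=elite.shape)
        P = np.vstack([elite, children])
        P[:, 2] = np.abs(P[:, 2])
        P /= np.linalg.norm(P, axis=1, keepdims=True)
    err = ((np.clip(P @ lights.T, 0.0, None) - intensities) ** 2).sum(axis=1)
    return P[np.argmin(err)]
```

Running this per surface point yields a needle map from which shape can be integrated.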

3.
A New Method for Detecting Moving Objects in Image Sequences of Dynamic Scenes
When detecting moving objects in image sequences of dynamic scenes, eliminating the global inter-frame motion caused by camera movement, so that the static background and the moving objects can be separated, is a difficult problem that must be solved. Targeting the characteristics of dynamic-scene sequences with complex backgrounds, a new method is given for discriminating the background and detecting moving objects based on recovering the 3D positions of reference points in the scene. First, a layered motion model of the image sequence and the motion segmentation method based on it are introduced. Then, the estimated projection matrices are used to compute the 3D positions of the reference points of each motion layer; the variation of the recovered 3D positions of the same scene feature across frames is used to distinguish the motion layer of the static background from those of the moving objects, thereby segmenting the static background and the moving objects. Finally, a detailed algorithm for detecting moving objects in dynamic-scene image sequences is given. Experimental results show that the new algorithm handles sequences with multiple sets of global inter-frame motion parameters well and considerably improves the effectiveness and robustness of moving-object tracking.

4.
Moving Object Recognition and Tracking by a Binocular Wheeled Mobile Robot
Moving-object recognition and tracking against a relatively complex indoor background is studied. Hu invariant moments are used as the target feature; seed points are searched in a ring pattern to grow the target region, so that the target is detected even in the presence of interference; the relative pose is then estimated and the robot is controlled to track the target rapidly. Image preprocessing, image segmentation, hole filling, region growing, and feature extraction are implemented with Intel's computer vision library OpenCV. Experiments show that the system tracks a moving target stably, quickly, and in real time, and is widely applicable.
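The Hu-moment feature can be sketched directly from image moments; only the first two of the seven invariants are shown here (OpenCV's `cv2.HuMoments` computes all seven):

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale (or binary) region image."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00
    return (((xs - xc) ** p) * ((ys - yc) ** q) * img).sum()

def hu_first_two(img):
    """First two Hu invariants from scale-normalized central moments."""
    m00 = img.sum()
    def eta(p, q):
        return central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    return phi1, phi2
```

Because central moments are computed about the region centroid, these features are unchanged when the target translates across the image, which is what makes them usable for matching a moving target.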

5.
In this article, we present an algorithm for detecting moving objects from a given video sequence. Here, spatial and temporal segmentations are combined together to detect moving objects. In spatial segmentation, a multi-layer compound Markov Random Field (MRF) is used which models spatial, temporal, and edge attributes of image frames of a given video. Segmentation is viewed as a pixel labeling problem and is solved using the maximum a posteriori (MAP) probability estimation principle; i.e., segmentation is done by searching a labeled configuration that maximizes this probability. We have proposed using a Differential Evolution (DE) algorithm with neighborhood-based mutation (termed the Distributed Differential Evolution (DDE) algorithm) for estimating the MAP of the MRF model. A window is considered over the entire image lattice for mutation of each target vector of the DDE, thereby enhancing the speed of convergence. In the case of temporal segmentation, the Change Detection Mask (CDM) is obtained by thresholding the absolute differences of the two consecutive spatially segmented image frames. The intensity/color values of the original pixels of the considered current frame are superimposed in the changed regions of the modified CDM to extract the Video Object Planes (VOPs). To test the effectiveness of the proposed algorithm, five reference video sequences and one real-life video sequence are considered. Results of the proposed method are compared with four state-of-the-art techniques; the proposed method provides better spatial segmentation and better identification of the location of moving objects.
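The temporal step (CDM thresholding and VOP extraction) can be sketched as follows; the MRF/DDE spatial segmentation itself is not reproduced here, and the threshold is an illustrative parameter:

```python
import numpy as np

def change_detection_mask(seg_prev, seg_curr, tau=0):
    """CDM: pixels whose spatial-segmentation values differ by more than tau."""
    return np.abs(seg_curr.astype(int) - seg_prev.astype(int)) > tau

def video_object_plane(frame_curr, cdm):
    """VOP: superimpose original pixel values of the current frame on changed regions."""
    return np.where(cdm, frame_curr, 0)
```

Given two consecutive spatially segmented frames, the mask marks the regions that moved, and the VOP carries the original intensities of exactly those regions.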

6.
Motion segmentation using occlusions
We examine the key role of occlusions in finding independently moving objects instantaneously in a video obtained by a moving camera with a restricted field of view. In this problem, the image motion is caused by the combined effect of camera motion (egomotion), structure (depth), and the independent motion of scene entities. For a camera with a restricted field of view undergoing a small motion between frames, there exists, in general, a set of 3D camera motions compatible with the observed flow field even if only a small amount of noise is present, leading to ambiguous 3D motion estimates. If separable sets of solutions exist, motion-based clustering can detect one category of moving objects. Even if a single inseparable set of solutions is found, we show that occlusion information can be used to find ordinal depth, which is critical in identifying a new class of moving objects. In order to find ordinal depth, occlusions must not only be known, but they must also be filled (grouped) with optical flow from neighboring regions. We present a novel algorithm for filling occlusions and deducing ordinal depth under general circumstances. Finally, we describe another category of moving objects which is detected using cardinal comparisons between structure from motion and structure estimates from another source (e.g., stereo).

7.
A new method for real-time object segmentation and tracking in video is proposed. Using the grey-level step contours of a timed motion history image (tMHI), the method encloses and labels the constituent motion regions, achieving real-time segmentation of moving objects in the video; the motion regions of each tMHI frame are then continuously associated with the moving objects in the scene, yielding trajectory tracking of multiple moving objects. To improve segmentation quality, the tMHI computation is refined to remove most of the noise, with clearly improved results. Experiments show that the method effectively segments and tracks multiple moving objects in video, with good robustness, a high detection rate, and processing fast enough to meet real-time requirements; it also resolves the problem of partial adhesion between objects.
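A minimal sketch of a timed motion history image and its grey-level step quantization (timestamps, duration, and band count are illustrative parameters):

```python
import numpy as np

def update_tmhi(mhi, silhouette, timestamp, duration):
    """Stamp current silhouette pixels with the timestamp; expire entries older than duration."""
    mhi = np.where(silhouette, float(timestamp), mhi)
    mhi[mhi < timestamp - duration] = 0.0
    return mhi

def mhi_bands(mhi, timestamp, duration, n_bands=4):
    """Quantize the decaying history into grey-level steps (newest motion = highest band)."""
    age = np.clip(timestamp - mhi, 0, duration)
    bands = n_bands - (age / duration * n_bands).astype(int)
    bands[mhi == 0] = 0
    return bands
```

The step contours between adjacent bands are what the method uses to enclose and label the sub-motion regions frame by frame.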

8.
Extraction of a Complex Background Containing Moving Objects
To extract the background of complex scenes containing moving objects, an adaptive background modeling algorithm is adopted. A moving object is treated as a random perturbation of the background image; median filtering over a continuous image sequence removes the influence of moving objects, and the background image is updated promptly when dynamic changes occur in the scene.
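The median-filter background model and its adaptive update can be sketched as follows (the blending constant is an illustrative assumption):

```python
import numpy as np

def median_background(frames):
    """Temporal median: objects covering any pixel a minority of the time vanish."""
    return np.median(np.stack(frames), axis=0)

def refresh_background(bg, frame, object_mask, alpha=0.1):
    """Blend new observations into the background at non-object pixels only."""
    out = bg.copy()
    keep = ~object_mask
    out[keep] = (1 - alpha) * bg[keep] + alpha * frame[keep]
    return out
```

The median gives a clean initial background even while objects move through the scene; the blending step then tracks slow scene changes without absorbing the moving objects.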

9.
IR–visible camera registration is required for multi-sensor fusion and cooperative processing. Image sequences can provide motion information, which is useful for sequence registration. The existing methods mainly focus on registration using moving objects which are observed by both cameras. However, accurate motion feature extraction for a whole moving object is difficult, because of the complex environment and the different imaging mechanisms of the two sensors. To overcome this problem, we use motion features associated with single pixels in the two image sequences to carry out automatic registration. A normalized optical flow time sequence for each image pixel is constructed. The matching of pixels between the IR image and the visible light image is carried out using a fast similarity measurement and a three-stage correspondence selection method. Finally, cascaded random sample consensus is adopted to remove outlying matches, and the least-squares and Levenberg–Marquardt methods are used to estimate the transformation from the IR image to the visible image. The effectiveness of our method is demonstrated using several real datasets and simulated datasets.

10.
Level-Set-Based Spatio-temporal Segmentation and Tracking of Multiple Moving Objects
Addressing moving-object segmentation when the background itself is moving, a new method is proposed for segmenting and tracking multiple moving objects in video sequences. Focusing on moving and relatively complex backgrounds, the method first builds a spatio-temporal energy functional from the optical-flow constraint equation and a background motion model; this functional is then used to estimate the background motion velocity and to segment and track the moving objects. The optimal spatio-temporal segmentation of the moving objects is obtained by minimizing this functional, which drives the evolution of a spatio-temporal surface; the surface evolution is implemented with level-set PDEs (Partial Differential Equations). Experiments on real image sequences validate the algorithm and its numerical implementation, showing that the method simultaneously estimates the background velocity and segments and tracks the moving objects.

11.
Vision-Based Ground Target Detection for an Unmanned Airship
To address the loss of detail information in ground-target detection from an unmanned airship, a detection method for both static and moving targets is proposed. The Lucas-Kanade method tracks feature points within the target region, enabling continuous detection of static targets. The global motion between adjacent frames is estimated from the tracked image feature points, the images are motion-compensated accordingly, and moving targets are detected from the compensated frame-difference image. Experiments on real video data captured by Shanghai Jiao Tong University's "Zhiyuan-1" unmanned airship verify the effectiveness of the method.

12.
To improve the real-time performance and quality of video segmentation for low-bit-rate multimedia applications, a simple and fast moving-object segmentation method is proposed. First, a symmetric difference image is computed; the gradient image of the current frame is then obtained, and the two are ANDed to yield a continuous moving-object boundary. Morphological processing and a two-pass scan produce the moving-object mask, which is finally filled with the grey values of the original image. Experiments show that the method achieves good segmentation results while reducing processing time.
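The symmetric-difference-AND-gradient step can be sketched as follows (thresholds are illustrative; the morphological post-processing and two-pass scan are omitted):

```python
import numpy as np

def moving_edges(f_prev, f_curr, f_next, tau_d=15, tau_g=15):
    """AND of the symmetric frame difference with the current frame's gradient edges."""
    f_prev, f_curr, f_next = (f.astype(float) for f in (f_prev, f_curr, f_next))
    # symmetric difference: pixel changed both backward and forward in time
    moved = (np.abs(f_curr - f_prev) > tau_d) & (np.abs(f_next - f_curr) > tau_d)
    gy, gx = np.gradient(f_curr)                 # spatial gradient of the current frame
    edges = np.hypot(gx, gy) > tau_g
    return moved & edges
```

The AND keeps only changed pixels lying on spatial edges, which is why the resulting boundary of the moving object is thin and continuous.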

13.
Depth-map reconstruction guided by image colour information can recover depth discontinuities at object boundaries but cannot guarantee depth homogeneity inside objects. To solve this problem, an image-guided depth reconstruction model regularized by total generalized variation is proposed. The model injects the edge information provided by the image into a second-order total generalized variation regularizer through a diffusion tensor, so that the reconstructed depth approaches a piecewise affine surface while preserving object edges; the recovered depth thus retains the discontinuities at object boundaries while remaining homogeneous inside objects. The Legendre-Fenchel transform converts the model into an equivalent convex-concave saddle-point problem, yielding an efficient first-order primal-dual algorithm. Experimental results show that the method recovers sharp object edges while keeping the depth homogeneous inside objects, and achieves a higher peak signal-to-noise ratio, higher normalized cross-covariance, and lower mean absolute error than existing algorithms.

14.

Multi-object pose estimation is a fundamental problem in autonomous driving, human-computer interaction, and related fields, but owing to the limitations of capture devices, most data in this area are confined to fairly small spaces, which limits the practical value of rigid-body pose estimation. To address this, an integrated single-image object-pose annotation method based on a twin space is proposed, together with an annotation tool, LabelImg3D. First, a virtual camera with the same focal length is placed in the twin space, and a 3D model identical to the real object is constructed. The image captured in real space (the first projection) is then placed in the twin space so that it fills the virtual camera's field of view. Finally, the 3D model is translated and rotated until the object's second projection coincides with the first projection in the virtual camera, yielding the object pose in one integrated step. Based on this method, the annotation tool LabelImg3D has been open-sourced.


15.
For multiple moving targets, a nearest-neighbour method is applied according to their spatial distribution to control multi-resolution display of the targets. The aim of the research is to visualize the spatio-temporal data of moving targets, producing views with both a global overview and local magnification. First, given the characteristics of the discrete spatio-temporal data of moving targets, nearest-neighbour grouping criteria and an aggregation algorithm for the targets are proposed; then, combining the trajectory characteristics of the targets with the idea of formation tracking, grouping criteria and an aggregation algorithm for trajectory aggregation are proposed. Multi-resolution display control based on nearest-neighbour trajectory aggregation of moving targets is thereby realized. Multi-resolution display control was implemented in a prototype system which, at different resolutions, schematically depicts the distribution of dispersed multi-formation moving targets, achieving the intended global-overview and local-magnification visual effects.
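The nearest-neighbour grouping and centroid aggregation can be sketched as a single-linkage pass (the grouping radius is an illustrative parameter):

```python
import numpy as np

def nn_group(points, radius):
    """Single-linkage grouping: points closer than `radius` share a group (BFS)."""
    points = np.asarray(points, float)
    n = len(points)
    label = -np.ones(n, int)
    g = 0
    for i in range(n):
        if label[i] >= 0:
            continue
        stack = [i]
        label[i] = g
        while stack:
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((d < radius) & (label < 0))[0]:
                label[k] = g
                stack.append(k)
        g += 1
    return label

def group_centroids(points, label):
    """One display symbol per group at low resolution: the group centroid."""
    points = np.asarray(points, float)
    return [points[label == g].mean(axis=0) for g in range(label.max() + 1)]
```

At low display resolution each group is drawn as one centroid symbol (global overview); zooming in shrinks the radius until individual targets reappear (local magnification).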

16.
A moving-object detection algorithm is proposed that combines the ViBe algorithm with the idea of three-frame differencing. ViBe builds a model for each pixel; the difference image between the current frame and the model is ANDed with the difference image obtained from the previous frame, after which the model is updated in real time in the ViBe manner. In addition, wavelet denoising is applied to every frame to remove high-frequency image regions. The algorithm effectively handles the influence of illumination changes, eliminates the shadow problem, and removes flickering background points. Experimental results show that the algorithm extracts moving objects accurately in a variety of environments, achieving better robustness.
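The ViBe-style per-pixel sample model can be sketched as follows; the three-frame AND and wavelet denoising described above are omitted, and all parameters are illustrative:

```python
import numpy as np

class ViBeModel:
    """Minimal per-pixel sample model in the spirit of ViBe (grayscale frames)."""
    def __init__(self, frame, n=20, radius=20, min_matches=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.radius, self.min_matches = radius, min_matches
        noise = self.rng.integers(-10, 11, size=(n,) + frame.shape)
        self.samples = frame[None].astype(int) + noise   # n samples per pixel

    def segment(self, frame):
        """Foreground where too few stored samples lie within `radius` of the pixel."""
        close = np.abs(self.samples - frame[None].astype(int)) < self.radius
        return close.sum(axis=0) < self.min_matches

    def update(self, frame, fg, rate=16):
        """Conservative, randomly subsampled update at background pixels only."""
        lucky = (self.rng.integers(0, rate, size=frame.shape) == 0) & ~fg
        idx = self.rng.integers(0, self.samples.shape[0], size=frame.shape)
        ys, xs = np.where(lucky)
        self.samples[idx[ys, xs], ys, xs] = frame[ys, xs]
```

The random subsampled update is what lets the model absorb gradual illumination change without absorbing true moving objects.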

17.
The detection of moving objects under a free-moving camera is a difficult problem because the camera and object motions are mixed together and objects are often split into separate detected components. To tackle this problem, we propose a fast moving object detection method using optical flow clustering and Delaunay triangulation as follows. First, we extract corner feature points using the Harris corner detector and compute optical flow vectors at the extracted corner feature points. Second, we cluster the optical flow vectors using the K-means clustering method and reject the outlier feature points using the Random Sample Consensus algorithm. Third, we classify each cluster as camera or object motion using the scatteredness of its optical flow vectors. Fourth, we compensate the camera motion using the multi-resolution block-based motion propagation method and detect the objects using background subtraction between the previous frame and the motion-compensated current frame. Finally, we merge the separately detected objects using Delaunay triangulation. The experimental results using the Carnegie Mellon University database show that the proposed moving object detection method outperforms the existing methods in terms of detection accuracy and processing time.
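The flow-clustering and camera/object classification steps can be sketched with a two-cluster k-means over flow vectors; the seeding and the scatter measure below are simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np

def kmeans2(flows, iters=20):
    """Two-cluster k-means on 2D flow vectors, seeded by the extreme-norm points."""
    flows = np.asarray(flows, float)
    norms = np.linalg.norm(flows, axis=1)
    centers = flows[[norms.argmin(), norms.argmax()]].copy()
    for _ in range(iters):
        d = np.linalg.norm(flows[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(2):
            if np.any(labels == c):
                centers[c] = flows[labels == c].mean(axis=0)
    return labels, centers

def camera_cluster(flows, labels, centers):
    """Pick the camera-motion cluster as the one with smaller flow scatter."""
    flows = np.asarray(flows, float)
    scatter = [((flows[labels == c] - centers[c]) ** 2).sum(axis=1).mean()
               for c in (0, 1)]
    return int(np.argmin(scatter))
```

Flow induced by egomotion over the static background tends to be consistent, so its cluster has low scatter; the remaining cluster carries the independently moving objects.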

18.
Research and Implementation of a Vision-Based Localization Method for Mobile Robots
The local visual localization problem for mobile robots is studied. First, the position-time series of the target's centroid feature point is obtained through the mobile robot's visual localization and target tracking system. Then, after analysing the shortcomings of the two-step imaging method for obtaining target depth information, a method for obtaining the target's spatial position and motion information is proposed. The method uses image sequences and an extended Kalman filter, with the HSI colour model used for target acquisition. Provided the robot satisfies certain maneuvering conditions, the target's spatial position and motion information are obtained fairly accurately. Simulation results verify the effectiveness and feasibility of the method.
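As a simplified linear stand-in for the extended Kalman filter used here, a constant-velocity Kalman filter over one coordinate of the centroid track illustrates the predict/update cycle (noise parameters are illustrative):

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over a 1D centroid coordinate sequence."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition for (position, velocity)
    H = np.array([[1.0, 0.0]])                   # we observe position only
    Q = q * np.eye(2)                            # process noise
    R = np.array([[r]])                          # measurement noise
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                      # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

The filtered state supplies both the smoothed position and a velocity estimate, i.e. the "spatial position and motion information" the abstract refers to; the full EKF version additionally linearizes a nonlinear camera measurement model at each step.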

19.
Foreground segmentation of moving regions in image sequences is a fundamental step in many vision systems including automated video surveillance, human-machine interfaces, and optical motion capture. Many models have been introduced to deal with the problems of modeling the background and detecting the moving objects in the scene. One of the successful solutions to these problems is the use of the well-known adaptive Gaussian mixture model. However, this method suffers from some drawbacks. Modeling the background using the Gaussian mixture implies the assumption that the background and foreground distributions are Gaussian, which is not always the case for most environments. In addition, it is unable to distinguish between moving shadows and moving objects. In this paper, we try to overcome these problems using a mixture of asymmetric Gaussians to enhance the robustness and flexibility of mixture modeling, and a shadow detection scheme to remove unwanted shadows from the scene. Furthermore, we apply this method to real image sequences of both indoor and outdoor scenes. The results of comparing our method to different state-of-the-art background subtraction methods show the efficiency of our model for real-time segmentation.
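As a simplified stand-in for the asymmetric-Gaussian mixture, a single running Gaussian per pixel illustrates the model-and-update loop that all these background subtraction schemes share (parameters illustrative):

```python
import numpy as np

class RunningGaussianBG:
    """One running Gaussian per pixel: a minimal sketch of adaptive background modeling."""
    def __init__(self, frame, alpha=0.05, k=2.5):
        self.mu = frame.astype(float)            # per-pixel mean
        self.var = np.full(frame.shape, 25.0)    # per-pixel variance (initial guess)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return the foreground mask, then adapt the model at background pixels."""
        frame = frame.astype(float)
        d = frame - self.mu
        fg = d ** 2 > (self.k ** 2) * self.var   # outside k sigma -> foreground
        bg = ~fg                                  # update only where background
        self.mu[bg] += self.alpha * d[bg]
        self.var[bg] = (1 - self.alpha) * self.var[bg] + self.alpha * d[bg] ** 2
        return fg
```

The paper's contribution replaces the symmetric Gaussian here with an asymmetric one (separate left/right variances) and adds a shadow test; the surrounding predict/classify/update structure is the same.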

20.
In this study, the spatial local optimization method was improved to obtain high-precision optical flow for cases in which the object movement changes substantially, and a method to trace the loci of moving objects was considered. In the spatial local optimization method, the precision of the optical flow becomes a problem when the object movement changes substantially. Therefore, to make the object movement relatively small, we first obtained flow vectors from an image sequence whose resolution was dropped to half that of the original input sequence. Flow vectors smaller than the threshold value were then re-estimated from the original input image sequence. We show that the precision of the optical flow when the object movement changes substantially is improved by this method. A method to trace the loci of moving objects was also demonstrated: we obtained clusters from histograms of flow vectors and pursued each cluster, and we show that it is possible to trace moving objects by this method. This work was presented, in part, at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16-18, 2002.
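The coarse-to-fine idea (estimate at half resolution so motions look smaller, then refine on the original sequence) can be sketched with a brute-force translation search; this is an illustrative translation-only simplification, not the paper's local-optimization flow:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 averaging, so motions shrink by a factor of two."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def best_shift(a, b, search):
    """Brute-force translation (dy, dx) minimizing SSD, using wrap-around shifts."""
    best, arg = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd = ((np.roll(b, (-dy, -dx), axis=(0, 1)) - a) ** 2).sum()
            if ssd < best:
                best, arg = ssd, (dy, dx)
    return arg

def coarse_to_fine_shift(a, b, search=3):
    """Estimate at half resolution, double the result, then refine by +/-1 at full resolution."""
    cy, cx = best_shift(downsample(a), downsample(b), search)
    dy0, dx0 = 2 * cy, 2 * cx
    ry, rx = best_shift(a, np.roll(b, (-dy0, -dx0), axis=(0, 1)), 1)
    return dy0 + ry, dx0 + rx
```

The half-resolution pass halves the apparent displacement, so a small search window suffices even for large motions; the full-resolution pass restores the lost precision.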
