Similar Literature
20 similar documents found (search time: 62 ms)
1.
To address object detection in complex environments, a fusion detection method based on background modeling is proposed. First, building on the multi-mode mean model, a multi-mode mean spatiotemporal model is constructed that incorporates the distribution of pixels over both space and time, alleviating the original model's sensitivity to non-stationary scenes; model update and foreground detection procedures are given. The model is then applied separately to visible-light and infrared image sequences for background modeling and foreground detection, and a confidence-based fusion detection method combines the information from the two sensors to improve detection accuracy and reliability. Experimental results verify the effectiveness of the proposed method.
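
A minimal sketch of the confidence-based fusion step described above, assuming each modality's background model already yields a per-pixel foreground probability and a per-pixel confidence; the weighting scheme, the function name `fuse_foreground`, and the threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_foreground(p_visible, p_infrared, conf_visible, conf_infrared, thresh=0.5):
    """Fuse per-pixel foreground probabilities from two sensors.

    p_*    : HxW arrays of foreground probabilities from each background model.
    conf_* : HxW arrays of per-pixel confidence (e.g. how well each pixel fits
             its background model). All values in [0, 1].
    """
    w_vis = conf_visible / (conf_visible + conf_infrared + 1e-6)
    w_ir = 1.0 - w_vis
    fused = w_vis * p_visible + w_ir * p_infrared   # confidence-weighted average
    return fused > thresh                            # boolean foreground mask
```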

2.
Extracting changed regions from image sequences is the main purpose of motion detection, but interference from dynamic backgrounds severely degrades the results, making reliable motion detection difficult. Inspired by saliency detection in static images, a new moving-object detection method is proposed that combines bottom-up and top-down visual computation models to obtain the spatiotemporal saliency of an image: spatial saliency is first detected in the video sequence, the temporal dimension is then added on top of it, and an improved three-frame differencing algorithm extracts the temporal saliency of moving objects, shifting the focus of saliency detection from static images to objects that are salient in both space and time. Experiments and analysis show that the new method can accurately detect spatiotemporally salient moving objects against dynamic backgrounds such as camera shake, with high robustness.
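
The temporal-saliency step relies on three-frame differencing; the sketch below shows the standard form of that operation in OpenCV, with the combination into a spatiotemporal map reduced to a simple product (an assumption for illustration; the abstract does not give the exact fusion rule).

```python
import cv2
import numpy as np

def three_frame_difference(prev, curr, next_, thresh=25):
    """Classic three-frame differencing: a pixel is 'temporally salient' only if
    it differs from both the previous and the next frame, which suppresses the
    ghosting left by simple two-frame differencing."""
    g0, g1, g2 = (cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in (prev, curr, next_))
    d01 = cv2.absdiff(g1, g0)
    d12 = cv2.absdiff(g2, g1)
    _, m01 = cv2.threshold(d01, thresh, 255, cv2.THRESH_BINARY)
    _, m12 = cv2.threshold(d12, thresh, 255, cv2.THRESH_BINARY)
    motion = cv2.bitwise_and(m01, m12)          # moving pixels present in both diffs
    return cv2.medianBlur(motion, 5)            # light cleanup

def spatiotemporal_saliency(spatial_saliency, motion_mask):
    """Combine a spatial saliency map (float in [0,1]) with the temporal mask."""
    return spatial_saliency * (motion_mask.astype(np.float32) / 255.0)
```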

3.
To address the problem that visual SLAM (simultaneous localization and mapping) algorithms in dynamic scenes are easily disturbed by feature points on moving objects, which lowers pose-estimation accuracy and robustness, an RGB-D visual SLAM algorithm based on dynamic-region removal is proposed. First, semantic information is used to identify feature points belonging to movable objects, and multi-view geometry together with the camera's depth information checks whether those feature points are currently static. Feature points extracted from static objects and static feature points derived from movable objects are then used to refine the camera pose estimate, so that the system runs accurately and robustly in dynamic scenes. Finally, the algorithm is evaluated on dynamic indoor scenes from the TUM dataset. Experiments show that in indoor dynamic environments the proposed algorithm effectively improves camera pose-estimation accuracy and supports map updates in dynamic environments, improving both system robustness and mapping accuracy.

4.
In this paper, we describe a reconstruction method for multiple motion scenes, which are scenes containing multiple moving objects, from uncalibrated views. Assuming that the objects are moving with constant velocities, the method recovers the scene structure, the trajectories of the moving objects, the camera motion, and the camera intrinsic parameters (except skews) simultaneously. We focus on the case where the cameras have unknown and varying focal lengths while the other intrinsic parameters are known. The number of the moving objects is automatically detected without prior motion segmentation. The method is based on a unified geometrical representation of the static scene and the moving objects. It first performs a projective reconstruction using a bilinear factorization algorithm and, then, converts the projective solution to a Euclidean one by enforcing metric constraints. Experimental results on synthetic and real images are presented.

5.
Due to recent technological advances in position-aware devices, data about moving objects is becoming ubiquitous. Yet, it is a major challenge for spatial information systems to offer tools for the analysis of motion data, thereby evolving from static to dynamic frameworks. This paper aims to contribute to this area by introducing an implementation prototype for an information system based on the Qualitative Trajectory Calculus, a spatiotemporal calculus to represent and reason about moving point objects.
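
The abstract does not detail the prototype, so the following is only a sketch of the basic Qualitative Trajectory Calculus relation (QTC-B) between two moving point objects over one time step, using its usual definition; the function names are mine.

```python
import math

def _sign(delta, eps=1e-9):
    return "0" if abs(delta) < eps else ("-" if delta < 0 else "+")

def qtc_basic(k_t1, k_t2, l_t1, l_t2):
    """QTC-B relation between two moving points over one time step.
    First symbol: is k moving towards (-), away from (+), or keeping the same
    distance (0) to l's position at t1?  Second symbol: the same for l w.r.t. k."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    sym_k = _sign(dist(k_t2, l_t1) - dist(k_t1, l_t1))
    sym_l = _sign(dist(l_t2, k_t1) - dist(l_t1, k_t1))
    return sym_k + sym_l

# e.g. qtc_basic((0, 0), (1, 0), (5, 0), (5, 0)) -> "-0"  (k approaches a static l)
```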

6.
A new concept in passive ranging to moving objects is described which is based on the comparison of multiple image flows. It is well known that if a static scene is viewed by an observer undergoing a known relative translation through space, then the distance to objects in the scene can be easily obtained from the measured image velocities associated with features on the objects (i.e., motion stereo). But in general, individual objects are translating and rotating at unknown rates with respect to a moving observer whose own motion may not be accurately monitored. The net effect is a complicated image flow field in which absolute range information is lost. However, if a second image flow field is produced by a camera whose motion through space differs from that of the first camera by a known amount, the range information can be recovered by subtracting the first image flow from the second. This "difference flow" must then be corrected for the known relative rotation between the two cameras, resulting in a divergent relative flow from a known focus of expansion. This passive ranging process may be termed Dynamic Stereo, the known difference in camera motions playing the role of the stereo baseline. We present the basic theory of this ranging process, along with some examples for simulated scenes.
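
The geometry behind the corrected difference flow can be written out: for a purely translational motion difference the residual flow at pixel p is (Tz/Z)(p − FOE), so depth follows directly. The sketch below implements that relation; it assumes the flow has already been rotation-corrected and that the motion difference has a non-zero component Tz along the optical axis.

```python
import numpy as np

def range_from_difference_flow(points, diff_flow, foe, tz):
    """Recover depth from a purely translational 'difference flow'.

    After subtracting the two image flows and removing the known relative
    rotation, the residual flow at pixel p is (tz / Z) * (p - foe), so

        Z = tz * |p - foe| / |flow(p)|

    points    : Nx2 pixel coordinates
    diff_flow : Nx2 rotation-corrected difference-flow vectors (pixels/frame)
    foe       : 2-vector, focus of expansion of the known motion difference
    tz        : translation difference along the optical axis (scene units/frame)
    """
    radial = points - np.asarray(foe)                     # vectors from the FOE
    flow_mag = np.linalg.norm(diff_flow, axis=1) + 1e-9
    return tz * np.linalg.norm(radial, axis=1) / flow_mag
```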

7.
付豪, 徐和根, 张志明, 齐少华. 《计算机应用》, 2021, 41(11): 3337-3344
To address localization and static semantic map construction in dynamic scenes, a simultaneous localization and mapping (SLAM) algorithm based on semantic and optical-flow constraints is proposed to reduce the impact of moving objects on localization and mapping. First, for each input frame, object masks are obtained by semantic segmentation, and a geometric method filters out feature points that violate the epipolar constraint. Next, the object masks are combined with optical flow to compute a dynamic probability for each object; feature points are filtered according to this probability to obtain static feature points, which are then used for camera pose estimation. A static point cloud is then built from the RGB-D images and the object dynamic probabilities, and a semantic octree map is constructed with the help of semantic segmentation. Finally, a sparse semantic map is created from the static point cloud and the semantic segmentation. Results on the public TUM dataset show that, in highly dynamic scenes, the proposed algorithm improves on ORB-SLAM2 by more than 95% in both absolute trajectory error and relative pose error, and reduces absolute trajectory error by 41% and 11% compared with DS-SLAM and DynaSLAM respectively, verifying its localization accuracy and robustness in highly dynamic scenes. Mapping experiments show that the algorithm builds a static semantic map and that, compared with the point-cloud map, the sparse semantic map reduces storage requirements by 99%.
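
A hedged sketch of the two filters named in the abstract: an epipolar-constraint check on matched feature points and a per-object dynamic score combining the segmentation mask with optical flow. The RANSAC threshold, the exponential squashing, and the function names are illustrative assumptions, not the paper's formulas.

```python
import cv2
import numpy as np

def epipolar_inliers(pts1, pts2, thresh=1.0):
    """Keep matches consistent with a single rigid (static-scene) motion by
    fitting a fundamental matrix with RANSAC and checking the distance of each
    point to its epipolar line.  pts1, pts2: Nx2 float32 arrays (enough matches
    for the fit are assumed)."""
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, thresh, 0.99)
    return mask.ravel().astype(bool)

def object_dynamic_probability(flow, obj_mask, bg_mask):
    """Illustrative per-object dynamic score: how much the median optical flow
    inside the object's mask deviates from the median background flow.
    flow: HxWx2 array; obj_mask, bg_mask: HxW boolean masks."""
    obj_flow = np.median(flow[obj_mask], axis=0)
    bg_flow = np.median(flow[bg_mask], axis=0)
    residual = np.linalg.norm(obj_flow - bg_flow)
    return 1.0 - np.exp(-residual)        # squash to (0, 1); a modelling assumption
```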

8.
Omni-directional stereo
Omnidirectional views of an indoor environment at different locations are integrated into a global map. A single camera swiveling about the vertical axis takes consecutive images and arranges them into a panoramic representation, which provides rich information around the observation point: a precise omnidirectional view of the environment and coarse ranges to objects in it. Using the coarse map, the system autonomously plans consecutive observations at the intersections of lines connecting object points, where the directions of the imaging are estimated easily and precisely. From two panoramic views at the two planned locations, a modified binocular stereo method yields a more precise, but with direction-dependent uncertainties, local map. New observation points are selected to decrease the uncertainty, and another local map is yielded, which is then integrated into a more reliable global representation of the world with the adjacent local maps

9.
We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: ray-aligned occupancy map array (ROMA) that is generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast and low-divergence tracing method computing visibilities in constant time, without constructing and traversing the traditional intersection acceleration data structures such as BVH. To further improve accuracy and alleviate aliasing, we use a spatiotemporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance compared to existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and is about 2.5×–10× faster than generating and tracing DFs at the same resolution and equal storage.

10.
In this paper we consider the problem of estimating the range information of features on an affine plane in 3D space by observing its image with the aid of a CCD camera, wherein we assume that the camera is undergoing a known motion. The features considered are points, lines and planar curves located on planar surfaces of static objects. The dynamics of the moving projections of the features on the image plane have been described as a suitable differential equation on an appropriate feature space. This dynamics is used to estimate feature parameters from which the range information is readily available. In this paper the proposed identification has been carried out via a newly introduced identifier-based observer. Performance of the observer has been studied via simulation.

11.
Visual SLAM (Simultaneous Localization And Mapping) is a core technology for mobile robots, but traditional visual SLAM still struggles in highly dynamic scenes and its maps lack semantic information. A semantic SLAM method for dynamic environments is proposed: a deep learning network performs object detection on each image to locate the regions containing dynamic objects, feature points falling in those regions are removed after feature extraction, the remaining static feature points are used for pose computation, keyframes are semantically segmented, and map points belonging to dynamic objects are filtered out during semantic map construction, yielding a semantic map free of dynamic-object interference. Experiments on the TUM dataset show that the method improves pose-estimation accuracy by 88.3% in dynamic environments while simultaneously building a semantic map free of dynamic objects.
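
A minimal sketch of the feature-culling step described above: ORB keypoints that fall inside a detected dynamic-object box are discarded before pose computation. The box format and the function name are assumptions for illustration.

```python
import numpy as np

def filter_keypoints(keypoints, descriptors, dynamic_boxes):
    """Drop keypoints that fall inside any detected dynamic-object box.

    keypoints     : list of OpenCV KeyPoint-like objects with a .pt attribute
    descriptors   : NxD numpy array aligned with `keypoints`
    dynamic_boxes : list of (x, y, w, h) boxes from the detector
    """
    keep = []
    for i, kp in enumerate(keypoints):
        u, v = kp.pt
        inside = any(x <= u <= x + w and y <= v <= y + h
                     for (x, y, w, h) in dynamic_boxes)
        if not inside:
            keep.append(i)
    return [keypoints[i] for i in keep], descriptors[np.asarray(keep, dtype=int)]
```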

12.
Objective: Static noise such as stationary objects and background texture, and dynamic noise such as background motion and camera shake, easily cause false or missed detections of moving objects in dynamic scenes. To address this, a detection method based on a motion-saliency probability map is proposed. Method: The method first builds, along the temporal axis, a time-series group containing both short-term and long-term motion information, then computes saliency values with the temporal Fourier transform (TFT), yielding a conditional motion-saliency probability map. Guided by the law of total probability, a motion-saliency probability map is then obtained to determine candidate foreground pixels, highlighting the saliency of moving objects while suppressing the background. Finally, the spatial information of pixels is modeled on this basis to detect the moving objects. Results: The proposed method was compared with nine moving-object detection methods in three typical dynamic scenes: static-noise, dynamic-noise, and mixed static/dynamic-noise scenes. In the static-noise scene the F-score rises to 92.91%, precision to 96.47%, and the false positive rate is as low as 0.02%. In the dynamic-noise scene the F-score rises to 95.52%, precision to 95.15%, and the false positive rate to 0.002%. In these two scenes the recall is not the best among the compared methods because, while the proposed method envelops the object region well, it sometimes misclassifies part of the object region as background, especially when the object region is small; however, the misclassification rate stays low and recall remains high, which is sufficient for practical applications and does not offset the clear overall improvement. In the mixed static/dynamic-noise scene, all four metrics are the best. The method therefore effectively removes interference from static objects, suppresses dynamic noise such as background motion and camera shake, and accurately detects moving objects in video sequences. Conclusion: The proposed method better suppresses static background noise and the dynamic noise caused by background changes (rippling water, camera shake, etc.), accurately detects moving objects against complex noisy backgrounds, and improves the robustness and generality of moving-object detection.
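
The abstract names a temporal Fourier transform (TFT) saliency computation but not its exact form; the sketch below shows one common phase-spectrum realization of the idea over a temporal window, purely as an illustration and not the paper's formulation.

```python
import numpy as np

def temporal_phase_saliency(stack):
    """One way to realize a temporal-Fourier-transform saliency cue: keep only
    the phase spectrum of each pixel's temporal signal and transform back, so
    flat (background-like) temporal profiles are suppressed and abrupt changes
    are emphasised.  `stack` is a T x H x W array of grayscale frames; an
    H x W saliency map for the most recent frame is returned."""
    spectrum = np.fft.fft(stack.astype(np.float32), axis=0)
    phase_only = np.exp(1j * np.angle(spectrum))          # unit magnitude, keep phase
    recon = np.real(np.fft.ifft(phase_only, axis=0))
    saliency = recon[-1] ** 2                              # energy at the latest frame
    return saliency / (saliency.max() + 1e-9)
```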

13.
This paper presents a symbolic formalism for modeling and retrieving video data via the moving objects contained in the video images. The model integrates the representations of individual moving objects in a scene with the time-varying relationships between them by incorporating both the notions of object tracks and temporal sequences of PIRs (projection interval relationships). The model is supported by a set of operations which form the basis of a moving object algebra. This algebra allows one to retrieve scenes and information from scenes by specifying both spatial and temporal properties of the objects involved. It also provides operations to create new scenes from existing ones. A prototype implementation is described which allows queries to be specified either via an animation sketch or using the moving object algebra.
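
The abstract does not enumerate the PIR set, so the sketch below uses an Allen-style set of interval relations between the axis projections of two objects as an illustration of what a projection interval relationship can look like; names and the relation labels are assumptions.

```python
def projection_interval_relation(a, b):
    """Relation between the projection intervals of two objects on one axis,
    in the spirit of Allen's interval algebra.  `a` and `b` are (lo, hi) pairs,
    e.g. the x-extents of two bounding boxes in one frame."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:   return "before"
    if b2 < a1:   return "after"
    if a2 == b1:  return "meets"
    if b2 == a1:  return "met-by"
    if a1 == b1 and a2 == b2:      return "equals"
    if a1 >= b1 and a2 <= b2:      return "during"
    if b1 >= a1 and b2 <= a2:      return "contains"
    return "overlaps"

# A scene query can then be phrased over the per-frame sequence of relations,
# e.g. find frames where object A's x-projection is 'before' object B's.
```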

14.
This work addresses the estimation of the time-to-contact (TTC) between a moving object and a stationary observer. It first defines a generalized notion of time-to-contact and establishes the theoretical basis for estimating the TTC of an object moving at constant velocity from feature-point tracking, together with an approach that estimates TTC from feature line segments. A feature-point tracking scheme combined with a Kalman filter is then proposed for estimating the TTC between a constant-velocity moving object and a stationary observer, and the criteria for feature-point selection, the motion segmentation method, and the adopted feature-point tracking method are discussed. Finally, TTC estimation experiments on image sequences of moving objects with calibrated TTC give satisfactory results.
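
The core relation behind segment-based TTC estimation is that, for a constant approach velocity, the image size s of a feature segment scales as 1/Z, so TTC = Z/(−dZ/dt) = s/(ds/dt). The sketch below applies that relation to a tracked line segment; the paper's generalized TTC definition and the Kalman smoothing are omitted, and the function name is mine.

```python
def time_to_contact(seg_len_prev, seg_len_curr, dt):
    """First-order TTC estimate from the image length of a feature line segment
    on the object: since s is proportional to 1/Z for constant approach velocity,
    TTC = s / (ds/dt).

    seg_len_* : segment lengths (pixels) in two consecutive frames
    dt        : frame interval (seconds)
    """
    ds_dt = (seg_len_curr - seg_len_prev) / dt
    if ds_dt <= 0:
        return float("inf")            # not expanding -> not approaching
    return seg_len_curr / ds_dt
```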

15.
We report an autonomous surveillance system with multiple pan-tilt-zoom (PTZ) cameras assisted by a fixed wide-angle camera. The wide-angle camera provides large but low resolution coverage and detects and tracks all moving objects in the scene. Based on the output of the wide-angle camera, the system generates spatiotemporal observation requests for each moving object, which are candidates for close-up views using PTZ cameras. Due to the fact that there are usually many more objects than the number of PTZ cameras, the system first assigns a subset of the requests/objects to each PTZ camera. The PTZ cameras then select the parameter settings that best satisfy the assigned competing requests to provide high resolution views of the moving objects. We propose an approximation algorithm to solve the request assignment and the camera parameter selection problems in real time. The effectiveness of the proposed system is validated in both simulation and physical experiment. In comparison with an existing work using simulation, it shows that in heavy traffic scenarios, our algorithm increases the number of observed objects by over 210%.
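
The abstract does not spell out the approximation algorithm, so the following is only a greedy sketch of the request-to-camera assignment step under assumed inputs (a scoring function and per-camera capacities); it is not the paper's algorithm.

```python
def assign_requests(requests, cameras, score):
    """Greedy approximation for assigning observation requests to PTZ cameras:
    repeatedly give the currently best-scoring (request, camera) pair to that
    camera until capacities are filled or requests run out.

    requests : list of request ids
    cameras  : dict camera_id -> capacity (max requests per scheduling round)
    score    : function (request, camera_id) -> expected observation quality
    """
    assignment = {cam: [] for cam in cameras}
    remaining = set(requests)
    pairs = sorted(((score(r, c), r, c) for r in requests for c in cameras),
                   reverse=True)
    for s, r, c in pairs:
        if r in remaining and len(assignment[c]) < cameras[c]:
            assignment[c].append(r)
            remaining.discard(r)
    return assignment
```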

16.
In real-world scenes, traditional visual simultaneous localization and mapping (SLAM) algorithms are limited by their static-environment assumption. Moving objects introduce many false matches in traditional visual odometry, degrading the accuracy of the whole SLAM pipeline and preventing stable operation in real scenes. A visual SLAM algorithm for indoor dynamic environments is proposed based on deep learning and multi-view geometry. An object detection network pre-detects potentially moving objects; based on these pre-detections, multi-view geometry re-detects moving objects, confirms which objects are actually in motion, and classifies the objects in the scene as dynamic or static. On top of the tracking and local mapping threads, a semantic data-association method and a keyframe selection strategy are proposed to reduce the influence of moving objects on accuracy and improve system stability. Experiments on the public TUM dataset show that, in dynamic scenes, the algorithm reduces the average root-mean-square error by 40% compared with ORB-SLAM2, and runs more than 10 times faster than DynaSLAM (which also performs motion removal), with clear improvements in both speed and accuracy.
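
A hedged sketch of the multi-view-geometry re-detection step: the epipolar geometry is fitted on background matches only, and a pre-detected box is confirmed as dynamic when most of its matches lie far from their epipolar lines. The thresholds and names are assumptions, and enough static background matches for the RANSAC fit are assumed.

```python
import cv2
import numpy as np

def redetect_moving_objects(bg_pts1, bg_pts2, boxes, box_pts1, box_pts2,
                            epi_thresh=2.0, ratio_thresh=0.5):
    """Confirm which pre-detected (potentially moving) objects actually move.

    bg_pts*  : Nx2 float32 background correspondences between two frames
    boxes    : list of box ids from the detector
    box_pts* : dict box_id -> Mx2 float32 correspondences inside that box
    """
    F, _ = cv2.findFundamentalMat(bg_pts1, bg_pts2, cv2.FM_RANSAC, 1.0, 0.99)
    dynamic = {}
    for b in boxes:
        p1 = cv2.convertPointsToHomogeneous(box_pts1[b]).reshape(-1, 3)
        p2 = cv2.convertPointsToHomogeneous(box_pts2[b]).reshape(-1, 3)
        lines = (F @ p1.T).T                                  # epipolar lines in image 2
        num = np.abs(np.sum(lines * p2, axis=1))
        den = np.linalg.norm(lines[:, :2], axis=1) + 1e-9
        dist = num / den                                      # point-to-line distances
        dynamic[b] = np.mean(dist > epi_thresh) > ratio_thresh
    return dynamic
```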

17.
When robots perform the complex task of simultaneous localization and mapping (SLAM), they are easily disturbed by moving objects, which lowers localization accuracy, degrades map readability, and reduces system robustness. A SLAM algorithm based on deep learning and edge detection is therefore proposed. First, the YOLOv4 object detector extracts semantic information from the scene to obtain preliminary semantic dynamic/static regions; meanwhile ORB feature points are extracted and the optical-flow field is computed to screen dynamic feature points, which are further associated with the semantic results to identify dynamic objects; the Canny operator computes the contour edges of the dynamic objects; camera pose is estimated from the static feature points outside the dynamic objects; keyframes are selected, point clouds are accumulated, and a static environment map is built from the point-cloud data with dynamic objects removed. Compared with ORB_SLAM2 on public datasets, the proposed algorithm improves localization accuracy by more than 90% and clearly improves map readability; the experimental results show that it effectively reduces the impact of moving objects on localization and mapping and significantly improves robustness.

18.
Modelling of the background ("uninteresting parts of the scene") and of the foreground plays an important role in the tasks of visual detection and tracking of objects. This paper presents an effective and adaptive background modelling method for detecting foreground objects in both static and dynamic scenes. The proposed method computes SAmple CONsensus (SACON) of the background samples and estimates a statistical model of the background, per pixel. SACON exploits both color and motion information to detect foreground objects. SACON can deal with complex background scenarios including nonstationary scenes (such as moving trees, rain, and fountains), moved/inserted background objects, slowly moving foreground objects, illumination changes, etc. However, it is one thing to detect objects that are not likely to be part of the background; it is another task to track those objects. Sample consensus is again utilized to model the appearance of foreground objects to facilitate tracking. This appearance model is employed to segment and track people through occlusions. Experimental results from several video sequences validate the effectiveness of the proposed method.
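
A minimal grayscale sketch of the sample-consensus idea, assuming N samples per pixel and a simple replace-the-oldest update; SACON itself also uses color and motion cues and a more elaborate update policy, so this is only an illustration of the consensus test.

```python
import numpy as np

class SampleConsensusBackground:
    """Minimal sample-consensus background model: keep N recent samples per
    pixel and call a pixel background when enough samples agree (lie within a
    radius) with the current value."""

    def __init__(self, first_frame, n_samples=20, radius=20, min_consensus=3):
        self.samples = np.repeat(first_frame[None].astype(np.int16), n_samples, axis=0)
        self.radius, self.min_consensus = radius, min_consensus
        self.idx = 0

    def apply(self, frame):
        frame = frame.astype(np.int16)
        consensus = np.sum(np.abs(self.samples - frame[None]) <= self.radius, axis=0)
        foreground = consensus < self.min_consensus
        # conservative update: replace the oldest sample at background pixels only
        self.samples[self.idx][~foreground] = frame[~foreground]
        self.idx = (self.idx + 1) % self.samples.shape[0]
        return foreground
```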

19.
20.
This paper proposes a traffic surveillance system that can efficiently detect an interesting object and identify vehicles and pedestrians in real traffic situations. The proposed system consists of a moving object detection model and an object identification model. A dynamic saliency map is used for analyzing dynamics of the successive static saliency maps, and can localize an attention area in dynamic scenes to focus on a specific moving object for traffic surveillance purposes. The candidate local areas of a moving object are followed by a blob detection processing including binarization, morphological closing and labeling methods. For identifying a moving object class, the proposed system uses a hybrid of global and local information in each local area. Although the global feature analysis is a compact way to identify an object and provide a good accuracy for non-occluded objects, it is sensitive to image translation and occlusion. Therefore, a local feature analysis is also considered and combined with the global feature analysis. In order to construct an efficient classifier using the global and local features, this study proposes a novel classifier based on boosting of support vector machines. The proposed object identification model can identify a class of moving object and discard unexpected candidate areas which do not include an interesting object. As a result, the proposed road surveillance system is able to detect a moving object and identify the class of the moving object. Experimental results show that the proposed traffic surveillance system can successfully detect specific moving objects.
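
The blob-detection stage (binarization, morphological closing, labeling) maps directly onto standard OpenCV calls; the sketch below shows that stage only, with thresholds as assumptions, and omits the boosted-SVM classifier described in the abstract.

```python
import cv2
import numpy as np

def candidate_blobs(saliency_map, bin_thresh=0.5, min_area=100):
    """Blob-detection step: binarize the dynamic saliency map, close small gaps
    with a morphological closing, then label connected components and keep
    those large enough to be object candidates."""
    binary = (saliency_map >= bin_thresh).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    boxes = [stats[i, :4] for i in range(1, n)               # skip the background label
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return boxes                                              # each box: x, y, w, h
```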
