Similar Literature
 20 similar documents found (search time: 703 ms)
1.
The computer processing of forward-look sonar video imagery enables significant capabilities in a wide variety of underwater operations within turbid environments. Accurate automated registration of sonar video images to complement measurements from traditional positioning devices can be instrumental in the detection, localization, and tracking of distinct scene targets, building feature maps, change detection, as well as improving precision in the positioning of unmanned submarines. This work offers a novel solution for the registration of two-dimensional (2-D) forward-look sonar images recorded from a mobile platform, by optimization over the sonar 3-D motion parameters. It incorporates the detection of key features and landmarks, and effectively represents them with Gaussian maps. Improved performance is demonstrated with respect to the state-of-the-art approach utilizing 2-D similarity transformation, based on experiments with real data.

2.
This paper describes a method for ISAR image classification, based on a comparison of range-Doppler imagery to supplied 3D ship reference models. This comparison is performed in the image domain by first estimating ship motion for each frame in a sequence of ISAR images. ISAR images are then simulated using this estimated motion applied to some known 3D reference models, and the model image and real images are compared to produce a match score for use in classification. The effect of estimation error in each of the motion parameters is also investigated.

3.
Augmented reality camera tracking with homographies
To realistically integrate 3D graphics into an unprepared environment, camera position must be estimated by tracking natural image features. We apply our technique to cases where feature positions in adjacent frames of an image sequence are related by a homography, or projective transformation. We describe this transformation's computation and demonstrate several applications. First, we use an augmented notice board to explain how a homography, between two images of a planar scene, completely determines the relative camera positions. Second, we show that the homography can also recover pure camera rotations, and we use this to develop an outdoor AR tracking system. Third, we use the system to measure head rotation and form a simple low-cost virtual reality (VR) tracking solution.
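As a concrete illustration of the homography pipeline sketched in this abstract (not the authors' implementation), the fragment below estimates a homography from matched feature points in two views of a planar scene and decomposes it into candidate relative poses; it assumes OpenCV and NumPy, a known intrinsic matrix K, and illustrative function names.

```python
# Minimal sketch, assuming OpenCV + NumPy and a known intrinsic matrix K.
import numpy as np
import cv2

def relative_pose_from_plane(pts_prev, pts_curr, K):
    """pts_prev, pts_curr: Nx2 float arrays of matched feature positions (pixels)."""
    H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    # Decomposition yields up to four (R, t, n) hypotheses; visibility constraints
    # or additional views are needed to select the physically valid one.
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return H, list(zip(Rs, ts, normals)), inliers

def rotation_from_homography(H, K):
    """For pure camera rotation (the outdoor AR case), H ~ K R K^-1 up to scale."""
    U, _, Vt = np.linalg.svd(np.linalg.inv(K) @ H @ K)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt  # nearest rotation
```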

4.
Traditionally, inverse synthetic aperture radar (ISAR) image frames are classified individually in an automatic target recognition system. When information from different image frames is combined, it is usually in the context of time-averaging to remove statistically independent noise fluctuations between images. The sea state induced variability of the ship target projections between frames, however, also provides additional information about the target, which can be used to construct a 3D representation of the target scatterer positions. In this paper, a method for classifying a ship based on 3D scatterer information from a sequence of 2D ISAR images is described. A preliminary classification result for simulated ISAR images of nine types of ship is also provided.

5.
How to obtain three-dimensional (3-D) information about a space target from a sequence of two-dimensional (2-D) inverse synthetic aperture radar (ISAR) images is an important research topic in automatic target recognition (ATR). The bidirectional analytic ray tracing (BART) method is used to compute the electromagnetic scattering data of a space target observed continuously over multiple aspect angles, from which a sequence of 2-D ISAR images of the target is formed. The Kanade-Lucas-Tomasi (KLT) feature tracking algorithm is then applied to track and extract feature points (strong scattering centers) in the 2-D ISAR image sequence and obtain their 2-D coordinates. Finally, the orthographic factorization method (OFM) is used to compute the 3-D coordinates of the strong scattering centers and thereby recover the 3-D information of the space target. The accuracy of the reconstruction algorithm is verified on a simple hexagonal-prism model, and 3-D reconstruction results of the strong scattering centers are presented for an ENVISAT satellite model. The results show that the proposed approach can effectively reconstruct the 3-D information of a space target from a sequence of 2-D ISAR images.
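As an illustration of the orthographic factorization (OFM) step mentioned above, the sketch below performs a Tomasi-Kanade-style rank-3 factorization of tracked 2-D feature points with NumPy; it is an assumption-laden outline (the BART simulation, KLT tracking, ISAR imaging geometry, and the metric upgrade are omitted), not the paper's code.

```python
# Minimal sketch of the orthographic factorization idea only, assuming NumPy.
import numpy as np

def orthographic_factorization(tracks):
    """tracks: array of shape (F, P, 2) holding P feature points over F frames."""
    F, P, _ = tracks.shape
    # Stack u- and v-coordinates into the 2F x P measurement matrix and
    # register each row to its centroid (removes translation).
    W = np.concatenate([tracks[:, :, 0], tracks[:, :, 1]], axis=0)
    W = W - W.mean(axis=1, keepdims=True)
    # Rank-3 factorization: W ~ M S, with M (2F x 3) holding the projection rows
    # and S (3 x P) the 3-D shape, recovered up to an affine ambiguity.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return M, S   # metric upgrade (orthonormality constraints on M) omitted

# Example with synthetic data:
# tracks = np.random.rand(10, 25, 2); M, S = orthographic_factorization(tracks)
```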

6.
A method for reconstructing a 3D model of an external screw thread from features of a multi-angle image sequence is proposed. A multi-angle image sequence of the threaded part is first acquired on a rotary stage; feature points are then extracted from each frame, transformed into 3D and interpolated across the sequence, and a 3D model is finally generated. Experimental results show that the algorithm reconstructs the 3D model of an external thread with high accuracy.

7.
8.
Underwater target detection, recognition, and tracking are actively studied problems of great significance, with important applications in both military and civilian domains. This paper gives a comprehensive account of the principles, methods, representative algorithms, and research progress of sonar-image-based underwater target detection, recognition, and tracking. It first reviews the main advances, typical algorithms, and algorithm extensions in sonar-image-based underwater target detection, image denoising, and image segmentation; it then discusses feature extraction, feature classification, and the main technical difficulties in underwater target recognition from sonar images; finally, it describes underwater target tracking methods and algorithms based on underwater acoustic signal processing and sonar image information. Through in-depth discussion and comparative analysis of each stage of the underwater target processing pipeline, the key scientific problems that urgently need to be solved in sonar-image-based underwater target detection, recognition, and tracking are identified along with possible ways to address them, and future directions for the field are outlined.

9.
A method for estimating mobile robot ego-motion is presented, which relies on tracking contours in real-time images acquired with a calibrated monocular video system. After fitting an active contour to an object in the image, 3D motion is derived from the affine deformations undergone by the contour in an image sequence. More than one object can be tracked at the same time, yielding several different pose estimates. Improvements in pose determination are then achieved by fusing all of these estimates. Inertial information is used to obtain better estimates, as it introduces a measure of the true velocity into the tracking algorithm. Inertial information is also used to eliminate some ambiguities arising from the use of a monocular image sequence. As the algorithms developed are intended to be used in real-time control systems, computational cost is taken into account. © 2004 Wiley Periodicals, Inc.

10.
This article describes real-time gaze control using position-based visual servoing. The main control objective of the system is to enable a gaze point to track the target so that the target's image feature remains at the center of each image. The overall system consists of two parts: the vision process and the control system. The vision system extracts a predefined color feature from images. An adaptive look-up table method is proposed to obtain the 3-D position of the feature within the video frame rate under varying illumination. An uncalibrated camera raises the problem that the reconstructed 3-D positions are not exact. To solve this calibration problem in the position-based approach, we constructed an end-point closed-loop system using an active head-eye system. In the proposed control system, the reconstructed position error is used together with a Jacobian matrix of the kinematic relation. The system stability is locally guaranteed, as in image-based visual servoing, and the gaze position is shown to converge to the feature position. The proposed approach was successfully applied to a tracking task with a moving target in simulations and real experiments. The processing speed meets real-time requirements. This work was presented in part at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, January 15-17, 2001
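As a rough illustration of how "the reconstructed position error is used with a Jacobian matrix of the kinematic relation", the fragment below maps a 3-D gaze-point error through the pseudo-inverse of an assumed head-eye Jacobian to a joint-velocity command; the gain, the Jacobian, and all names are illustrative, not the authors' controller.

```python
# Illustrative position-based visual servoing step (not the paper's controller).
import numpy as np

def gaze_control_step(p_target, p_gaze, J, gain=1.0):
    """p_target, p_gaze: 3-vectors (reconstructed feature and current gaze point);
    J: 3 x n kinematic Jacobian mapping joint velocities to gaze-point velocity."""
    error = p_target - p_gaze                  # position error in task space
    dq = gain * np.linalg.pinv(J) @ error      # joint velocity command
    return dq
```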

11.
In this paper, we present a robust 3D human-head tracking method. 3D head positions are essential for robots interacting with people: natural interaction behaviors such as making eye contact require head positions. Past research with laser range finders (LRF) has been successful in tracking 2D human positions with high accuracy in real time. However, LRF trackers cannot track multiple 3D head positions. On the other hand, trackers with multi-viewpoint images can obtain 3D head positions, but vision-based trackers generally lack robustness and scalability, especially in open environments where lighting conditions vary over time. To achieve robust real-time 3D tracking, we propose a new method that combines an LRF tracker and a multi-camera tracker, using the LRF results as maintenance information for the multi-camera tracker. Through an experiment in a real environment, we show that our method outperforms existing methods in both robustness and scalability.

12.
Traditional ship visual tracking has concentrated mainly on single-target ship tracking, and multi-target ship tracking has received relatively little attention. To address this, a multi-target ship tracking framework based on a multi-dimensional feature fusion mechanism and scale-change estimation is proposed. The framework introduces a position filter that is learned from the input ship training samples and applied to the ship image sequence to be tracked, and the ship position in each image is determined by locating the maximum filter response. On this basis, a ship scale-estimation filter is constructed to determine the image size of the tracked ships. Compared with the median-flow tracker and the multiple-instance-learning tracker, experiments show that the ship tracking error in different maritime traffic scenes is below 10 pixels, verifying the effectiveness and reliability of the algorithm.
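As an illustration of the "position filter trained on samples, target located at the maximum response" idea described above, here is a minimal single-channel MOSSE-style correlation filter in NumPy; the paper's multi-dimensional feature fusion and separate scale-estimation filter are not reproduced, and all names and parameters are illustrative.

```python
# Minimal single-channel correlation filter sketch (MOSSE-style), assuming NumPy.
import numpy as np

def train_filter(patches, gaussian_label, lam=1e-2):
    """patches: list of grayscale training patches (H x W), already windowed;
    gaussian_label: desired response (H x W) peaked at the target centre."""
    G = np.fft.fft2(gaussian_label)
    A = np.zeros_like(G); B = np.zeros_like(G)
    for p in patches:
        F = np.fft.fft2(p)
        A += G * np.conj(F)
        B += F * np.conj(F)
    return A / (B + lam)          # conjugate filter in the Fourier domain

def locate(H_filter, patch):
    """Return the (dy, dx) location of the maximum correlation response."""
    response = np.real(np.fft.ifft2(H_filter * np.fft.fft2(patch)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dy, dx
```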

13.
To track an airport runway line target accurately, a method is proposed that, during the steady-state descent of a terminally sensitive submunition, uses the Hough transform to turn the tracking of the linear runway target into the tracking of a point target. Treating the parameters as decoupled, the straight-line parameters in the Hough plane are first estimated and predicted with a Kalman filter; then, when determining the relative position of the submunition with respect to the runway target, the influence of the rotation of the on-board detector is taken into account and the prediction equations are corrected with the data of the current image frame; finally, under small-disturbance conditions, a procedure for processing the distance D(t) from the intersection of the detector axis with the ground to the runway centerline is established from the dynamic analysis of the parachute-submunition system, yielding the curve of the distance D0 from the point below the submunition (the center of its swaying motion) to the runway centerline and thus the position of the submunition relative to the runway target. The analysis shows that the method runs in real time, reduces the amount of data involved in the computation, and yields small experimental errors, meeting the overall design requirements.
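A minimal sketch of the core idea above: detect the dominant runway line as a point (rho, theta) in Hough space with OpenCV and track that point with a constant-velocity Kalman filter. The detector-rotation compensation and the parachute-submunition dynamics (D(t), D0) are not modeled, and the thresholds and noise levels are illustrative assumptions.

```python
# Sketch of "line target -> point in Hough space -> Kalman tracking" only.
import numpy as np
import cv2

def detect_dominant_line(gray):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return None
    rho, theta = lines[0][0]          # strongest accumulator peak
    return np.array([rho, theta])

class HoughKalman:
    """Constant-velocity Kalman filter on the state [rho, theta, d_rho, d_theta]."""
    def __init__(self, z0, dt=1.0, q=1e-3, r=1e-1):
        self.x = np.array([z0[0], z0[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = dt; self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = 1; self.H[1, 1] = 1
        self.Q = q * np.eye(4); self.R = r * np.eye(2)

    def step(self, z=None):
        self.x = self.F @ self.x                       # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        if z is not None:                              # correct with measurement
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                              # filtered (rho, theta)
```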

14.
《Real》1997,3(6):415-432
Real-time motion capture plays a very important role in various applications, such as 3D interfaces for virtual reality systems, digital puppetry, and real-time character animation. In this paper we address the problem of estimating and recognizing the motion of articulated objects using the optical motion capture technique. In addition, we present an effective method to control the articulated human figure in real time. The heart of this problem is the estimation of the 3D motion and posture of an articulated, volumetric object using feature points from a sequence of multiple perspective views. Under some moderate assumptions such as smooth motion and known initial posture, we develop a model-based technique for the recovery of the 3D location and motion of a rigid object using a variation of the Kalman filter. The posture of the 3D volumetric model is updated by the 2D image flow of the feature points for all views. Two novel concepts - the hierarchical Kalman filter (HKF) and the adaptive hierarchical structure (AHS) incorporating the kinematic properties of the articulated object - are proposed to extend our formulation for the rigid object to the articulated one. Our formulation also allows us to avoid two classic problems in 3D tracking: the multi-view correspondence problem and the occlusion problem. By adding more cameras and placing them appropriately, our approach can deal with the motion of the object in a very wide area. Furthermore, multiple objects can be handled by managing multiple AHSs and processing multiple HKFs. We show the validity of our approach using synthetic data acquired simultaneously from multiple virtual cameras in a virtual environment (VE) and real data derived from a moving light display with walking motion. The results confirm that the model-based algorithm works well on the tracking of multiple rigid objects.

15.
In this paper, we introduce a method to estimate the object's pose from multiple cameras. We focus on direct estimation of the 3D object pose from 2D image sequences. Scale-Invariant Feature Transform (SIFT) is used to extract corresponding feature points from adjacent images in the video sequence. We first demonstrate that centralized pose estimation from the collection of corresponding feature points in the 2D images from all cameras can be obtained as a solution to a generalized Sylvester's equation. We subsequently derive a distributed solution to pose estimation from multiple cameras and show that it is equivalent to the solution of the centralized pose estimation based on Sylvester's equation. Specifically, we rely on collaboration among the multiple cameras to provide an iterative refinement of the independent solution to pose estimation obtained for each camera based on Sylvester's equation. The proposed approach to pose estimation from multiple cameras relies on all of the information available from all cameras to obtain an estimate at each camera even when the image features are not visible to some of the cameras. The resulting pose estimation technique is therefore robust to occlusion and sensor errors from specific camera views. Moreover, the proposed approach does not require matching feature points among images from different camera views nor does it demand reconstruction of 3D points. Furthermore, the computational complexity of the proposed solution grows linearly with the number of cameras. Finally, computer simulation experiments demonstrate the accuracy and speed of our approach to pose estimation from multiple cameras.
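The fragment below illustrates only the ingredients named in the abstract, assuming OpenCV and SciPy: SIFT correspondences between adjacent frames and a call to a standard Sylvester-equation solver. The paper's specific construction of the coefficient matrices from the correspondences is not given in the abstract, so that step is left as a placeholder comment.

```python
# Sketch of the named ingredients only; matrices A, B, Q are assumptions.
import numpy as np
import cv2
from scipy.linalg import solve_sylvester

def sift_correspondences(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    return pts1, pts2

# Once the correspondences are arranged into the coefficient matrices of the
# paper's generalized Sylvester formulation (A X + X B = Q), the pose unknowns
# X can be obtained with a standard solver:
# X = solve_sylvester(A, B, Q)
```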

16.
Vision-Based Odometry and SLAM for Medium and High Altitude Flying UAVs
This paper proposes vision-based techniques for localizing an unmanned aerial vehicle (UAV) by means of an on-board camera. Only natural landmarks provided by a feature tracking algorithm are considered, without the help of visual beacons or landmarks with known positions. First, a monocular visual odometer is described, which could be used as a backup system when the accuracy of GPS is reduced to critical levels. Homography-based techniques are used to compute the UAV's relative translation and rotation from the images gathered by the onboard camera. The analysis of the problem takes into account the stochastic nature of the estimation and practical implementation issues. The visual odometer is then integrated into a simultaneous localization and mapping (SLAM) scheme in order to reduce the impact of cumulative errors in odometry-based position estimation. Novel prediction and landmark initialization schemes for SLAM on UAVs are presented. The paper is supported by extensive experimental work in which the proposed algorithms have been tested and validated using real UAVs.

17.
To address the misalignment caused by feature-point matching in 3D dynamic data, interactive marking and motion tracking are used to improve the reliability and stability of feature-point matching. First, feature points are interactively marked on selected frames of the 3D dynamic data; the positions of the marked feature points in the remaining frames are then obtained by motion tracking with an optimal prediction window; finally, with the tracked and matched feature points as constraints, an isometric bipartite graph is constructed to obtain a tight alignment of the 3D dynamic data. Experimental results show that the alignment accuracy of the proposed algorithm is higher than that of existing algorithms.

18.
杨辉  李硕  曾俊宝 《测控技术》2012,31(9):16-19
A forward-looking sonar information extraction method suitable for small underwater vehicles is presented. The method obtains the bearing information of targets within the sonar field of view, which is of considerable value for autonomous target tracking and collision avoidance of small underwater vehicles. It consists of three parts: generating sonar images from the sonar data, preprocessing the sonar images, and extracting target feature information from the processed images. For the edge information of larger targets in the image, a piecewise curve-fitting method based on least squares is proposed, and fitting results on measured data acquired in a laboratory water tank confirm the effectiveness of the method.
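A minimal sketch of the piecewise least-squares curve fitting described for the target edges, using NumPy; the number of segments and the polynomial degree are illustrative choices rather than the paper's.

```python
# Piecewise least-squares polynomial fit of edge points, assuming NumPy.
import numpy as np

def piecewise_lsq_fit(x, y, n_segments=3, degree=2):
    """x, y: 1-D arrays of edge-point coordinates.
    Returns one set of polynomial coefficients per segment."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    fits = []
    for xs, ys in zip(np.array_split(x, n_segments), np.array_split(y, n_segments)):
        fits.append(np.polyfit(xs, ys, degree))   # least-squares fit per segment
    return fits
```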

19.
Objective: Virtual colonoscopy reconstructs the 3D structure of the colon from CT or MRI images and inspects colon tissue by navigating through the virtual colon; it is generally used for early colorectal cancer screening. Colon registration can effectively improve the efficiency and accuracy of polyp detection, but the deformation between colon images acquired in the supine and prone positions is large, and the feature-point extraction in existing registration schemes does not cover many special cases, so a new scheme is needed to achieve complete colon registration. Method: A new colon image registration method is proposed that can register virtual colon images acquired in different body positions. First, haustral fold features, which reflect the structural information of the colon, are extracted, and matching fold pairs between the two colons are found with template matching and feature matching. The center points of the matched pairs are then used as landmarks for landmark-based non-rigid coarse registration, and finally B-spline registration of the two images completes the fine registration. This approach first corrects the large internal deformation of the colon, reducing the deformation between the two images to a range in which conventional registration methods can complete the registration. Results: On five data sets, the fold regions that could be successfully matched account for about 62% of all segmented fold regions, with a mismatch rate of about 4.7%. After coarse fold registration the colon deformations clearly become consistent, the relative gray-value error decreases, and the colon registration is completed. Conclusion: Matching the folds first and then registering the colons based on the mapping between the matched folds makes it possible to register two colons with large deformation. Future work needs to quantify the influence of feature-point selection on the registration result; in addition, gray-value difference alone is not adequate for evaluating the registration, because gray-level features only partly reflect the overall difference and do not capture structural differences well, so additional evaluation criteria are needed to assist registration assessment.
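For the fold-matching stage described above, a rough illustration of template matching between a fold patch and a search region using OpenCV's normalized cross-correlation is given below; it assumes 2-D patches and an illustrative score threshold, and is not the paper's pipeline.

```python
# Illustrative fold-pairing step only (not the paper's pipeline), assuming OpenCV.
import cv2

def match_fold(fold_patch, search_region, min_score=0.6):
    """Return the best-match location and score, or None if below the threshold."""
    result = cv2.matchTemplate(search_region, fold_patch, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return (max_loc, max_val) if max_val >= min_score else None
```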

20.
Improving Feature Tracking with Robust Statistics
This paper addresses robust feature tracking. The aim is to track point features in a sequence of images and to identify unreliable features resulting from occlusions, perspective distortions and strong intensity changes. We extend the well-known Shi–Tomasi–Kanade tracker by introducing an automatic scheme for rejecting spurious features. We employ a simple and efficient outlier rejection rule, called X84, and prove that its theoretical assumptions are satisfied in the feature tracking scenario. Experiments with real and synthetic images confirm that our algorithm consistently discards unreliable features; we show a quantitative example of the benefits introduced by the algorithm for the case of fundamental matrix estimation. The complete code of the robust tracker is available via ftp. Received: 22 January 1999, Received in revised form: 3 May 1999, Accepted: 3 May 1999
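As an illustration of the X84 rejection rule referenced in the abstract, the sketch below discards features whose residuals lie more than k median absolute deviations from the median (k = 5.2, roughly 3.5 standard deviations for Gaussian noise); the residual definition and the surrounding usage are assumptions, not the authors' code.

```python
# X84 rule applied to per-feature residuals, assuming NumPy.
import numpy as np

def x84_inliers(residuals, k=5.2):
    """residuals: 1-D array of per-feature match residuals; returns boolean mask."""
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med))   # median absolute deviation
    return np.abs(residuals - med) <= k * mad
```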
