Similar Documents (20 results)
1.
In this work we consider the application context of planar passive navigation, in which the visual control of locomotion requires only the direction of translation, and not the full set of motion parameters. If the temporally changing optic array is represented as a vector field of optical velocities, the vectors form a radial pattern emanating from a centre point, called the Focus of Expansion (FOE), which represents the heading direction. The FOE position is independent of the distances of world surfaces, and does not require assumptions about surface shape and smoothness. We investigate the performance of an artificial neural network for the computation of the image position of the FOE of an Optical Flow (OF) field induced by an observer translation relative to a static environment. The network has a feed-forward architecture and is trained by a standard supervised back-propagation algorithm; its input is the pattern of points produced by projecting, via the Hough transform, the lines generated by the 2D flow vectors. We present results obtained on a test set of synthetic noisy optical flows and on optical flows computed from real image sequences.
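The abstract above recovers the FOE with a Hough-transform input and a trained network. As a rough illustration of the underlying geometry only (not the authors' network), the following sketch estimates the FOE as the least-squares intersection of the lines defined by the flow vectors; all names and the synthetic test data are illustrative.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares estimate of the Focus of Expansion.

    points: (N, 2) image positions; flows: (N, 2) optical flow vectors.
    Under pure observer translation every flow vector lies on a line
    through the FOE, so we solve for the point minimizing the summed
    squared distance to those lines.
    """
    # The line through p with direction (u, v) has normal (-v, u).
    normals = np.stack([-flows[:, 1], flows[:, 0]], axis=1)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    b = np.sum(normals * points, axis=1)            # n . p for each line
    foe, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return foe

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_foe = np.array([40.0, -25.0])
    pts = rng.uniform(-100, 100, size=(200, 2))
    flow = 0.05 * (pts - true_foe) + rng.normal(0, 0.02, size=(200, 2))
    print(estimate_foe(pts, flow))                  # close to [40, -25]
```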

2.
Reliable motion estimation is a key component for autonomous vehicles. We present a visual odometry method for ground vehicles using template matching. The method uses a downward-facing camera perpendicular to the ground and estimates the motion of the vehicle by analyzing the image shift from frame to frame. Specifically, an image region (template) is selected, and using correlation we find the corresponding image region in the next frame. We introduce the use of multitemplate correlation matching and suggest template quality measures for estimating the suitability of a template for the purpose of correlation. Several aspects of the template choice are also presented. Through an extensive analysis, we derive the expected theoretical error rate of our system and show its dependence on the template window size and image noise. We also show how a linear forward prediction filter can be used to limit the search area and significantly increase the computational performance. Using a single camera and assuming an Ackerman-steering model, the method has been implemented successfully on a large industrial forklift and a 4×4 vehicle. Over 6 km of field trials from our industrial test site, an off-road area, and an urban environment are presented, illustrating the applicability of the method as an independent sensor for large-vehicle motion estimation at practical velocities.
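As a minimal sketch of the correlation-matching step described above (a single template and brute-force normalized cross-correlation; the multi-template scheme, quality measures, and prediction filter are not reproduced), assuming grayscale frames larger than the template plus search margin:

```python
import numpy as np

def template_shift(prev, curr, tpl_size=32, search=8):
    """Estimate the inter-frame pixel shift by correlation matching.

    A template is cut from the centre of `prev` and compared, via
    normalized cross-correlation, with windows of `curr` inside a small
    search region.  Returns (dy, dx) and the peak score, which can act
    as a crude reliability measure for the chosen template.
    """
    h, w = prev.shape
    assert min(h, w) >= tpl_size + 2 * search, "frames too small for this sketch"
    ty, tx = (h - tpl_size) // 2, (w - tpl_size) // 2
    tpl = prev[ty:ty + tpl_size, tx:tx + tpl_size].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-9)

    best = (-np.inf, 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = curr[ty + dy:ty + dy + tpl_size,
                       tx + dx:tx + dx + tpl_size].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = float(np.mean(tpl * win))
            if score > best[0]:
                best = (score, dy, dx)
    return (best[1], best[2]), best[0]
```

With a downward-facing camera at height h and focal length f (in pixels), the pixel shift scales to ground displacement by roughly h/f per pixel; this is a simplification of the calibration needed on a real vehicle.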

3.
潘超  刘建国  李峻林 《自动化学报》2015,41(6):1102-1112
Insects can perform navigation tasks using visually perceived optical flow (OF) information. Inspired by insect visual navigation, this paper proposes a biologically inspired composite optical-flow navigation method consisting of two parts, optical-flow navigation and optical-flow-aided navigation, to achieve efficient and accurate visual localization. In this method, optical-flow navigation uses an insect-vision-inspired optical flow algorithm to measure the displacement of the system at each instant and then accumulates these displacements by path integration to obtain the position; optical-flow-aided navigation addresses the accumulated error inherent in path integration by using optical-flow matching to estimate and correct the position error during navigation. The aided navigation also draws on the insect-inspired optical flow method, performing iterative matching between measured and predicted optical flow through an optical-flow-based Kalman filter. Because the optical flow computations in both parts stem from the same insect-inspired optical flow method, the two parts of the composite scheme can share input signals and portions of the processing. Navigation experiments with a mobile robot demonstrate the efficiency of the proposed composite navigation method.
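A minimal sketch of the path-integration half described in this abstract; the Kalman-filter correction by matching measured against predicted optical flow is only hinted at by the toy blending function, and all displacements, headings, and the gain are illustrative.

```python
import numpy as np

def integrate_path(displacements, headings):
    """Dead-reckon 2-D positions by accumulating per-frame displacements
    (e.g. travel distances estimated from optical flow) along the measured
    headings.  Path integration like this drifts, which is why the paper
    adds an optical-flow-matching correction stage.
    """
    steps = np.stack([displacements * np.cos(headings),
                      displacements * np.sin(headings)], axis=1)
    return np.cumsum(steps, axis=0)

def correct_position(predicted, measured, gain=0.3):
    """Crude stand-in for the correction stage: blend the dead-reckoned
    position with an externally estimated fix."""
    return predicted + gain * (np.asarray(measured) - np.asarray(predicted))
```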

4.
A Fuzzy-Logic-Based Approach for Mobile Robot Path Tracking
One important problem in autonomous robot navigation is the effective following of an unknown path traced in the environment in compliance with the kinematic limits of the vehicle, i.e., bounded linear and angular velocities and accelerations. In this case, the motion planning must be implemented in real time and must be robust with respect to the geometric characteristics of the unknown path, namely curvature and sharpness. To achieve good tracking capability, this paper proposes a path-following approach based on a fuzzy-logic set of rules which emulates human driving behavior. The input to the fuzzy system is approximate information concerning the next bend ahead of the vehicle; the corresponding output is the cruise velocity that the vehicle needs to attain in order to drive safely on the path. To validate the proposed algorithm, two completely different experiments were run: in the first, the vehicle has to perform a lane-following task, acquiring lane information in real time using an onboard camera; in the second, the motion of the vehicle is obtained by assigning a given time law in real time. The obtained results show the effectiveness of the proposed method.
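To make the rule-based idea concrete, here is a toy fuzzy mapping from the curvature of the approaching bend to a cruise velocity; the membership functions, rule consequents, and numeric ranges are invented for illustration and are not the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def cruise_velocity(curvature):
    """Three illustrative rules: the straighter the next bend, the faster
    the cruise speed; defuzzification by weighted average."""
    w = np.array([tri(curvature, -0.05, 0.00, 0.10),   # gentle bend -> fast
                  tri(curvature,  0.05, 0.15, 0.30),   # medium bend -> slower
                  tri(curvature,  0.20, 0.50, 1.00)])  # sharp bend  -> crawl
    speeds = np.array([2.0, 1.0, 0.3])                 # consequent speeds (m/s)
    return float(w @ speeds / (w.sum() + 1e-9))
```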

5.
6.
This paper describes a strategy for feature point correspondence and motion recovery in vehicle navigation. A transformation of the image plane is proposed that keeps the motion of the vehicle on a plane parallel to the transformed image plane. This makes it possible to define linear tracking filters that estimate the real-world positions of the features, and to select, via a Hough transform, the matches that satisfy the rigidity of the scene. Candidate correspondences are selected by similarity, taking into account the smoothness of motion. Further processing yields the final matching. The methods have been tested in a real application.

7.
Sparse optic flow maps are general enough to obtain useful information about camera motion. Usually, correspondences among features over an image sequence are estimated by radiometric similarity. When the camera moves under known conditions, global geometrical constraints can be introduced in order to obtain a more robust estimation of the optic flow. In this paper, a method is proposed for the computation of a robust sparse optic flow (OF) which integrates the geometrical constraints induced by camera motion to verify the correspondences obtained by radiometric-similarity-based techniques. A raw OF map is estimated by matching features by correlation. The verification of the resulting correspondences is formulated as an optimization problem that is implemented on a Hopfield neural network (HNN). Additional constraints imposed in the energy function permit us to achieve subpixel accuracy in the image locations of matched features. Convergence of the HNN is reached in a small enough number of iterations to make the proposed method suitable for real-time processing. It is shown that the proposed method is also suitable for identifying independently moving objects in front of a moving vehicle.

8.
9.
The problem of steering a flying vehicle to a fixed point with a specified orientation of the terminal velocity is studied. The method is based on a modification of the well-known proportional navigation method, in which the navigation parameter relating the angular rotation velocities of the line of sight and the velocity vector is chosen based on the current characteristics of the trajectory. The adaptivity of the method is ensured by the periodic correction of this parameter. For three-dimensional (3D) trajectories, a combination of two motions is used: (a) guidance in the plane formed by the given direction of the terminal velocity and the current state of the vehicle (in this plane, the specified terminal velocity direction is ensured using the modified proportional navigation), and (b) control in the orthogonal plane based on classical proportional navigation with a fixed navigation parameter to minimize the rotation of the guidance plane. The proposed method does not require onboard trajectory prediction but forms the control using the current navigation data. Examples of various types of 3D trajectories of a gliding vehicle with a high lift-to-drag ratio are discussed.
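For reference, the classical planar proportional-navigation law that the abstract modifies can be written as a = N · Vc · λ̇, with λ̇ the line-of-sight rate and Vc the closing speed. The sketch below is that textbook form only; the paper's adaptive choice and periodic correction of the navigation parameter N are not reproduced.

```python
import numpy as np

def pn_lateral_accel(r_rel, v_rel, N=4.0):
    """Classical planar proportional navigation.

    r_rel: relative position of the aim point (2-D), v_rel: relative
    velocity.  Returns the commanded lateral acceleration
    a = N * Vc * lambda_dot.
    """
    los_rate = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / np.dot(r_rel, r_rel)
    closing_speed = -np.dot(r_rel, v_rel) / np.linalg.norm(r_rel)
    return N * closing_speed * los_rate
```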

10.
In this paper a novel integrated, single-chip solution for autonomous navigation inspired by the computations in the insect visuomotor system is proposed. A generalization of the theory of wide field integration (WFI) is presented which supports the use of sensors with a limited field of view, and the system concept is validated based on experiments using a prototype single-chip WFI sensor. The VLSI design implements (1) an array of Elementary Motion Detectors (EMDs) to derive local estimates of optic flow, (2) a novel mismatch compensation approach to handle dissimilarities in local motion detector units, and (3) on-chip programmable optic flow pattern weighting (Wide-Field Integration) to extract relative speed and proximity with respect to the surrounding environment. Computations are performed in the analog domain and in parallel, providing outputs at 1 kHz while consuming only 42.6 µW of power. The resulting sensor is integrated with a ground vehicle and navigation of corridor-like environments is demonstrated.
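The wide-field-integration step amounts to projecting the ring of local flow estimates onto a few stored weighting patterns. The sketch below shows that projection with made-up harmonic patterns; the chip's actual programmable weights and their calibrated meaning are not reproduced.

```python
import numpy as np

def wide_field_integrate(flow, patterns):
    """Inner products of the azimuthal optic-flow samples with stored
    weighting patterns; each output approximates a navigation quantity
    such as proximity to a wall or relative speed."""
    return patterns @ flow

# Illustrative patterns over N viewing directions (assumed, not the chip's):
N = 60
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
patterns = np.stack([np.sin(theta),        # ~ lateral offset in a corridor
                     np.cos(theta),        # ~ heading relative to the corridor
                     np.ones(N) / N])      # ~ mean flow ~ speed / mean depth
```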

11.
This paper presents the design and performance evaluation of a set of globally asymptotically stable time-varying kinematic filters with application to the estimation of linear motion quantities of mobile platforms (position, linear velocity, and acceleration) in three dimensions. The proposed techniques are based on the Kalman and H∞ optimal filters for linear time-varying systems, and the explicit optimal filtering solutions are obtained through the use of an appropriate coordinate transformation, whereas the design employs frequency weights to achieve adequate disturbance rejection and attenuation of the measurement noise on the state estimates. Two examples of application in the field of ocean robotics are presented that demonstrate the potential and usefulness of the proposed design methodology. In the first, the proposed filtering solutions allow for the design of a complementary navigation filter for the estimation of unknown constant ocean currents, while the second addresses the problem of estimation of the velocity of an underwater vehicle, as well as the acceleration of gravity. Simulation results are included that illustrate the achievable filtering performance in the presence of both extreme environmental disturbances and realistic measurement noise.
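As a much simpler stand-in for the filters described above (a plain time-invariant, single-axis, constant-acceleration Kalman filter rather than the paper's globally stable time-varying Kalman/H∞ designs), assuming position-only measurements:

```python
import numpy as np

def ca_model(dt, q, r):
    """Constant-acceleration model: state = [position, velocity, acceleration],
    with position measured."""
    F = np.array([[1.0, dt, 0.5 * dt ** 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])
    return F, H, q * np.eye(3), np.array([[r]])

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    x = x + K @ (np.atleast_1d(z) - H @ x)             # update state
    P = (np.eye(len(x)) - K @ H) @ P                   # update covariance
    return x, P
```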

12.
This paper describes the use of vision for navigation of mobile robots floating in 3D space. The problem addressed is that of automatic station keeping relative to some naturally textured environmental region. Due to motion disturbances in the environment (currents), this task is important for keeping the vehicle stabilized relative to an external reference frame. Assuming short-range regions in the environment, vision can be used for local navigation, so that no global positioning methods are required. A planar environmental region is selected as a visual landmark and tracked throughout a monocular video sequence. For a camera moving in 3D space, the observed deformations of the tracked image region follow planar projective transformations and reveal information about the robot's relative position and orientation w.r.t. the landmark. This information is then used in a visual feedback loop so as to realize station keeping. Both the tracking system and the control design are discussed. Two robotic platforms are used for experimental validation, namely an indoor aerial blimp and a remotely operated underwater vehicle. Results obtained from these experiments are described.
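A hedged sketch of the image-side computation: given matched points of the tracked planar landmark in the reference and current views, estimate the projective transform with OpenCV and read off a simple error signal. The full homography-to-pose decomposition and the actual control law of the paper are not attempted here.

```python
import numpy as np
import cv2

def station_keeping_error(ref_pts, cur_pts):
    """Estimate the planar projective transform of the tracked landmark
    and convert it into a crude feedback error.

    ref_pts, cur_pts: (N, 2) matched image points.  Returns the apparent
    image translation and scale change (approach/retreat); a proportional
    controller could drive these toward zero for station keeping.
    """
    H, _ = cv2.findHomography(ref_pts.astype(np.float32),
                              cur_pts.astype(np.float32), cv2.RANSAC, 3.0)
    tx, ty = H[0, 2], H[1, 2]                        # apparent image translation
    scale = np.sqrt(abs(np.linalg.det(H[:2, :2])))   # apparent approach/retreat
    return np.array([tx, ty, scale - 1.0])
```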

13.
For autonomous vehicles to achieve terrain navigation, obstacles must be discriminated from terrain before any path planning and obstacle avoidance activity is undertaken. In this paper, a novel approach to obstacle detection has been developed. The method finds obstacles in the 2D image space, as opposed to 3D reconstructed space, using optical flow. Our method assumes that both nonobstacle terrain regions and regions with obstacles will be visible in the imagery. Therefore, our goal is to discriminate between terrain regions with obstacles and terrain regions without obstacles. Our method uses new visual linear invariants based on optical flow. Employing the linear invariance property, obstacles can be directly detected by using reference flow lines obtained from measured optical flow. The main features of this approach are: (1) 2D visual information (i.e., optical flow) is directly used to detect obstacles; no range, 3D motion, or 3D scene geometry is recovered; (2) knowledge about the camera-to-ground coordinate transformation is not required; (3) knowledge about vehicle (or camera) motion is not required; (4) the method is valid for the vehicle (or camera) undergoing general six-degree-of-freedom motion; (5) the error sources involved are reduced to a minimum, because the only information required is one component of optical flow. Numerous experiments using both synthetic and real image data are presented. Our methods are demonstrated in both ground and air vehicle scenarios.

14.
《Advanced Robotics》2013,27(11):1529-1556
The problem of trajectory tracking control of an underactuated autonomous underwater robot (AUR) in three-dimensional (3-D) space is investigated in this paper. The control of an underactuated robot differs from that of fully actuated robots in many aspects. In particular, these robot systems do not satisfy Brockett's necessary condition for feedback stabilization, and no continuous time-invariant state feedback control law exists that makes a specified equilibrium of the closed-loop system asymptotically stable. The uncertainty of hydrodynamic parameters, along with the coupled, nonlinear dynamics of the underwater robot, also makes navigation and tracking control a difficult task. The proposed hybrid control law is developed by combining sliding mode control (SMC) and classical proportional-integral-derivative (PID) control methods to reduce the tracking errors arising from disturbances as well as variations in vehicle parameters such as buoyancy. Here, a trajectory planner computes the body-fixed linear and angular velocities, as well as the vehicle orientations corresponding to a given 3-D inertial trajectory, which yields a feasible 6-d.o.f. trajectory. This trajectory is used by the hybrid controller to compute the control signals for the three available controllable inputs. A supervisory controller is used to switch between SMC and PID control according to a predefined switching law. The switching function parameters are optimized using Taguchi design techniques. The effectiveness and performance of the proposed controller are investigated by comparing it numerically with classical SMC and traditional linear control systems in the presence of disturbances. Numerical simulations using the full set of nonlinear equations of motion show that the controller deals well with plant nonlinearity and parameter uncertainties in trajectory tracking. The proposed controller shows smaller tracking error without the control chattering that is usually present. Some practical features of this control law are also discussed.
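The structure of the hybrid law can be illustrated with a one-degree-of-freedom toy: sliding-mode control for large errors, PID near the trajectory, and a supervisory switch between them. Gains, the switching threshold, and the smoothed sign function are placeholders, not the paper's Taguchi-optimized values.

```python
import numpy as np

class HybridSmcPid:
    """Toy supervisory switch between sliding-mode and PID control (1 DOF)."""

    def __init__(self, kp=4.0, ki=0.5, kd=1.0, lam=2.0, eta=3.0, switch_err=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lam, self.eta = lam, eta      # sliding-surface slope and SMC gain
        self.switch_err = switch_err       # supervisory switching threshold
        self.int_e = 0.0

    def control(self, e, e_dot, dt):
        if abs(e) > self.switch_err:       # far from the trajectory: SMC
            s = e_dot + self.lam * e       # sliding surface
            return -self.eta * np.tanh(s)  # tanh instead of sign() to limit chatter
        self.int_e += e * dt               # close to the trajectory: PID
        return -(self.kp * e + self.ki * self.int_e + self.kd * e_dot)
```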

15.
Many types of existing vehicles contain an inertial navigation system (INS) that can be utilized to greatly improve the performance of motion analysis techniques and make them useful for practical military and civilian applications. This article presents the results obtained with a maximally passive system of obstacle detection for ground-based vehicles and rotorcraft. Automatic detection of these obstacles and the necessary guidance and control actions triggered by such detection will facilitate autonomous vehicle navigation. Our approach to obstacle detection employs motion analysis of imagery collected by a passive sensor during vehicle travel to generate range measurements to world points within the field of view of the sensor. The approach makes use of INS data and scene analysis results to improve interest point selection, the matching of the interest points, and the subsequent motion-based range computations, tracking, and obstacle detection. In this article, we concentrate on the results obtained using lab and outdoor imagery. The range measurements made by INS-integrated motion analysis are compared to the limited amount of ground truth that is available.

16.
This paper presents a hierarchical simultaneous localization and mapping (SLAM) system for a small unmanned aerial vehicle (UAV) using the output of an inertial measurement unit (IMU) and the bearing-only observations from an onboard monocular camera. A homography-based approach is used to calculate the motion of the vehicle in 6 degrees of freedom by image feature matching. This visual measurement is fused with the inertial outputs by an indirect extended Kalman filter (EKF) for attitude and velocity estimation. Then, another EKF is employed to estimate the position of the vehicle and the locations of the features in the map. Both simulations and experiments are carried out to test the performance of the proposed system. The result of the comparison with the referential global positioning system/inertial navigation system (GPS/INS) navigation indicates that the proposed SLAM can provide reliable and stable state estimation for small UAVs in GPS-denied environments.

17.
Obstacle avoidance using flow field divergence
The use of certain measures of flow field divergence is investigated as a qualitative cue for obstacle avoidance during visual navigation. It is shown that a quantity termed the directional divergence of the 2-D motion field can be used as a reliable indicator of the presence of obstacles in the visual field of an observer undergoing generalized rotational and translational motion. The necessary measurements can be robustly obtained from real image sequences. Experimental results are presented showing that the system responds as expected to divergence in real-world image sequences, and the use of the system to navigate between obstacles is demonstrated.
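The plain (non-directional) divergence of a dense flow field is easy to compute and already conveys the expansion cue the abstract exploits; the sketch below uses finite differences and a fixed threshold, both of which are illustrative, and does not implement the paper's directional-divergence measure.

```python
import numpy as np

def flow_divergence(u, v):
    """Divergence of a dense 2-D motion field sampled on a pixel grid
    (u, v are the horizontal and vertical flow components)."""
    return np.gradient(u, axis=1) + np.gradient(v, axis=0)

def obstacle_candidates(u, v, threshold=0.05):
    """Flag pixels whose local expansion exceeds a threshold; strongly
    expanding patches correspond to surfaces being approached."""
    return flow_divergence(u, v) > threshold
```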

18.
This research addresses the problem of noise sensitivity inherent in motion and structure algorithms. The motion and structure paradigm is a two-step process. First, image velocities and, perhaps, their spatial and temporal derivatives are measured from time-varying image intensity data; second, these data are used to compute the motion of a moving monocular observer in a stationary environment under perspective projection, relative to a single 3-D planar surface. The first contribution of this article is an algorithm that uses time-varying image velocity information to compute the observer's translation and rotation and the normalized surface gradient of the 3-D planar surface. The use of time-varying image velocity information is an important tool in obtaining a more robust motion and structure calculation. The second contribution of this article is an extensive error analysis of the motion and structure problem. Any motion and structure algorithm that uses image velocity information as its input should exhibit error sensitivity behavior compatible with the results reported here. We perform an average and worst-case error analysis for four types of image velocity information: full and normal image velocities, and full and normal sets of image velocity and its derivatives. (These derivatives are simply the coefficients of a truncated Taylor series expansion about some point in space and time.) The main issues we address here are: just how sensitive is a motion and structure computation in the presence of noisy input, or alternatively, how accurate must our image velocity information be; how much and what type of input data is needed; and under what circumstances is motion and structure feasible? That is, when can we be sure that a motion and structure computation will produce usable results? We base our answers on a numerical error analysis conducted for a large number of motions.

19.
《Real》1996,2(5):271-284
This paper describes a method of stabilizing image sequences obtained by a camera carried by a ground vehicle. The motion of the vehicle can usually be regarded as consisting of a desired smooth motion combined with an undesired non-smooth motion that includes impulsive or high-frequency components. The goal of the stabilization process is to correct the images so that they are approximately the same as the images that would have been obtained if the motion of the vehicle had been smooth. We analyse the smooth and non-smooth motions of a ground vehicle and show that only the rotational components of the non-smooth motion have significant perturbing effects on the images. We show how to identify image points at which rotational image flow is dominant, and how to use such points to estimate the vehicle's rotation. Finally, we describe an algorithm that fits smooth (ideally, piecewise constant) rotational motions to these estimates; the residual rotational motion can then be used to correct the images. We have obtained good results for several image sequences obtained from a camera carried by a ground vehicle moving across bumpy terrain.

20.
We use screw, or helical, motions to interpolate between poses. A screw is fully defined by the initial and final control poses. It combines a minimum-angle linear rotation around a fixed axis of direction S with a minimum-distance linear translation along S. Although a piecewise helical (polyscrew) motion is continuous, velocities are typically discontinuous at control poses when the motion switches between screws. To overcome this problem, we obtain a smooth motion through polyscrew 4-point, B-spline, or Jarek subdivision, which are trivial to implement and can be animated in real time.
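A minimal sketch of constant-screw interpolation between two poses using the SE(3) matrix logarithm and exponential (scipy); the 4-point, B-spline, and Jarek subdivision schemes that smooth a whole polyscrew are not reproduced here.

```python
import numpy as np
from scipy.linalg import expm, logm

def screw_interpolate(T0, T1, t):
    """Constant-screw (helical) interpolation between 4x4 rigid poses.

    The relative motion is mapped to the Lie algebra with a matrix log,
    scaled by t in [0, 1], and mapped back: the pose rotates at constant
    rate about a fixed axis while translating along it, i.e. a screw.
    """
    rel = np.real(logm(np.linalg.inv(T0) @ T1))
    return T0 @ expm(t * rel)
```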

