Similar Documents
20 similar documents found (search time: 31 ms)
1.
Rui Ni  Bobby Nguyen  Yan Zhuo 《Displays》2013,34(2):120-124
The current study investigated age-related differences in a steering control task under low-visibility conditions. Younger and older drivers were presented with displays simulating forward vehicle motion through a 3D scene of random dots on a ground plane. The lateral position of the vehicle was perturbed by a simulated side wind gust according to a sum of sinusoidal functions. The drivers’ task was to steer the vehicle to maintain a straight path. The visibility of the driving scene was reduced by reducing the quantity and the quality of the optical flow field. We found that performance decreased when visibility was reduced for both older and younger drivers, with better performance for younger drivers than for older drivers. An interaction between age and degraded optical flow information was also found. These results suggest that under reduced-visibility conditions, older drivers may face increased accident risk due to a decreased ability to steer the vehicle successfully.

2.
We propose an approach to navigation for an unmanned aerial vehicle (UAV) based on finding the elements of motion (EM) (linear and angular velocities) by processing the field of local velocities of the motion of an image taken by an onboard video camera. This velocity field of the image motion, the so-called optical flow (OF), is a linear function of the EM, which makes it possible to recover the EM from the OF and thus provides an additional means of navigation that can be used efficiently for certain specific problems solved by a UAV in autonomous flight. In this work, we use an algorithm for computing the OF for a given motion of the vehicle (the direct problem) and show how to reconstruct the motion of the vehicle from OF observations using methods of statistical estimation (the inverse problem).
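
Because the OF is linear in the EM, the inverse problem reduces to linear least squares once per-point depth is available (for example, from an altimeter over locally flat terrain). The sketch below illustrates that estimation under the standard instantaneous pinhole-camera flow model; the sign conventions, the known-depth assumption, and all names and parameters here are illustrative assumptions, not the paper's statistical estimator.

```python
import numpy as np

def estimate_motion_from_flow(pts, flow, depth, f):
    """Least-squares estimate of (Tx, Ty, Tz, wx, wy, wz) from optical flow samples.

    Assumes the instantaneous pinhole-camera flow model with known per-point
    depth Z (one common sign convention); an illustrative sketch only.
    pts:   (N, 2) image coordinates (x, y), centred on the principal point
    flow:  (N, 2) measured flow components (u, v) at those points
    depth: (N,)   scene depth Z for each point
    f:     focal length in pixels
    """
    x, y, Z = pts[:, 0], pts[:, 1], depth
    zeros = np.zeros_like(x)
    # u = (x*Tz - f*Tx)/Z + (x*y/f)*wx - (f + x**2/f)*wy + y*wz
    # v = (y*Tz - f*Ty)/Z + (f + y**2/f)*wx - (x*y/f)*wy - x*wz
    Au = np.stack([-f / Z, zeros, x / Z, x * y / f, -(f + x**2 / f), y], axis=1)
    Av = np.stack([zeros, -f / Z, y / Z, f + y**2 / f, -x * y / f, -x], axis=1)
    A = np.vstack([Au, Av])                       # (2N, 6) design matrix
    b = np.concatenate([flow[:, 0], flow[:, 1]])  # stacked flow measurements
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                                 # (Tx, Ty, Tz, wx, wy, wz)
```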

3.
An optical guidance system is modeled, and its signal processing and operation are studied. The model considers two moving objects: the moving robot and the sought target. The reported system is capable of detecting and measuring the relative motion of the target and, in response, generating an electrical signal that directs and guides the vehicle on the shop floor. In this way, a robot or vehicle using such a device can keep its aim on the target scene at all times. The proposed system offers a high degree of accuracy and reliability because it uses state-of-the-art electronic and optical components for signal processing. A software package (ORCAD 9) is used to simulate the signal processing of this optical tracking system, and the results are reported.

4.
For autonomous vehicles to achieve terrain navigation, obstacles must be discriminated from terrain before any path planning and obstacle avoidance activity is undertaken. In this paper, a novel approach to obstacle detection has been developed. The method finds obstacles in the 2D image space, as opposed to 3D reconstructed space, using optical flow. Our method assumes that both nonobstacle terrain regions, as well as regions with obstacles, will be visible in the imagery. Therefore, our goal is to discriminate between terrain regions with obstacles and terrain regions without obstacles. Our method uses new visual linear invariants based on optical flow. Employing the linear invariance property, obstacles can be directly detected by using reference flow lines obtained from measured optical flow. The main features of this approach are: (1) 2D visual information (i.e., optical flow) is directly used to detect obstacles; no range, 3D motion, or 3D scene geometry is recovered; (2) knowledge about the camera-to-ground coordinate transformation is not required; (3) knowledge about vehicle (or camera) motion is not required; (4) the method is valid for the vehicle (or camera) undergoing general six-degree-of-freedom motion; (5) the error sources involved are reduced to a minimum, because the only information required is one component of optical flow. Numerous experiments using both synthetic and real image data are presented. Our methods are demonstrated in both ground and air vehicle scenarios.

5.
This paper analyzes in detail the motion patterns of highway vehicles and the characteristics of video surveillance image sequences. For fast-moving targets, background subtraction is computationally complex and consecutive frame differencing tends to produce ghosting; an optical flow analysis method based on the spatio-temporal structure tensor is therefore proposed for highway vehicle detection in video. Experimental results show that the algorithm is simple and segments moving vehicles in highway video sequences fairly accurately, providing a reliable basis for subsequent motion tracking.
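
As a rough illustration of flow estimation from the spatio-temporal structure tensor, the sketch below solves the classical 2x2 tensor (Lucas-Kanade-style) system per pixel from smoothed products of spatio-temporal gradients; moving vehicles could then be segmented by thresholding the flow magnitude. The gradient and smoothing choices are assumptions, not the paper's highway-specific formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structure_tensor_flow(prev, curr, win=7, eps=1e-6):
    """Dense flow from the 2x2 spatio-temporal structure tensor (Lucas-Kanade style).

    prev, curr: consecutive float32 grayscale frames of equal size.
    """
    Ix = np.gradient(prev, axis=1)
    Iy = np.gradient(prev, axis=0)
    It = curr - prev
    # windowed structure-tensor entries and mixed spatio-temporal terms
    Jxx = uniform_filter(Ix * Ix, win); Jxy = uniform_filter(Ix * Iy, win)
    Jyy = uniform_filter(Iy * Iy, win)
    Jxt = uniform_filter(Ix * It, win); Jyt = uniform_filter(Iy * It, win)
    det = Jxx * Jyy - Jxy**2 + eps
    u = (-Jyy * Jxt + Jxy * Jyt) / det   # closed-form 2x2 solve per pixel
    v = ( Jxy * Jxt - Jxx * Jyt) / det
    return u, v
```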

6.
《Real》1996,2(5):271-284
This paper describes a method of stabilizing image sequences obtained by a camera carried by a ground vehicle. The motion of the vehicle can usually be regarded as consisting of a desired smooth motion combined with an undesired non-smooth motion that includes impulsive or high-frequency components. The goal of the stabilization process is to correct the images so that they are approximately the same as the images that would have been obtained if the motion of the vehicle had been smooth. We analyse the smooth and non-smooth motions of a ground vehicle and show that only the rotational components of the non-smooth motion have significant perturbing effects on the images. We show how to identify image points at which rotational image flow is dominant, and how to use such points to estimate the vehicle's rotation. Finally, we describe an algorithm that fits smooth (ideally, piecewise constant) rotational motions to these estimates; the residual rotational motion can then be used to correct the images. We have obtained good results for several image sequences obtained from a camera carried by a ground vehicle moving across bumpy terrain.
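
A much-simplified, image-plane version of the final step ("fit a smooth rotation, correct for the residual") is sketched below: given per-frame roll estimates (for example, from tracked points), a moving average stands in for the smooth fit and the residual rotation is removed by warping. The window size, the 2D-rotation simplification, and the sign convention are assumptions; the paper estimates and corrects full 3D rotations.

```python
import cv2
import numpy as np

def stabilize_roll(frames, angles_deg, win=9):
    """Remove the non-smooth part of the rotation about the optical axis.

    frames:     list of equally sized images
    angles_deg: per-frame roll estimates in degrees (array-like)
    """
    kernel = np.ones(win) / win
    smooth = np.convolve(angles_deg, kernel, mode="same")   # smooth rotational motion
    h, w = frames[0].shape[:2]
    out = []
    for img, a, s in zip(frames, angles_deg, smooth):
        residual = a - s                                     # undesired, non-smooth part
        M = cv2.getRotationMatrix2D((w / 2, h / 2), -residual, 1.0)  # sign per roll convention
        out.append(cv2.warpAffine(img, M, (w, h)))
    return out
```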

7.
To improve moving-vehicle detection and tracking in intelligent transportation, a new method is proposed that combines an improved inter-frame difference with optical flow. The inter-frame difference first detects the region of the moving object; optical flow is then computed only at the non-zero locations of the difference image, and the resulting flow field is used to track the moving target. To reduce the computational load, a flow-field computation method based on optimal-estimation point matching and a uniform flow-sampling strategy is proposed, and real-time detection and tracking of moving targets is achieved by applying adaptive threshold segmentation, morphological filtering, and related processing to the gray-scale flow field.
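
A minimal OpenCV sketch of the detection stage described above (frame difference, thresholding and morphology, then optical flow restricted to the changed region). The Otsu threshold and the Farnebäck flow are stand-ins for the paper's adaptive threshold and point-matching flow estimator.

```python
import cv2
import numpy as np

def detect_moving_region_flow(prev_gray, curr_gray):
    """Frame difference to localize motion, then optical flow only inside that region."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # suppress isolated noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow[mask == 0] = 0          # keep flow only where the frame difference fired
    return mask, flow
```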

8.
Scene flow provides the 3D motion field of point clouds corresponding to image pixels. Current algorithms usually need complex stereo calibration before estimating flow, which places strong restrictions on camera placement. This paper proposes a monocular-camera scene flow estimation algorithm. First, an energy functional is constructed in which three assumptions are used to derive the data terms: a brightness constancy assumption, a gradient constancy assumption, and a short-time object-velocity constancy assumption. Two smoothness operators are used as regularization terms. An occlusion-map computation algorithm then ensures that scene flow is estimated only at unoccluded points. After that, the energy functional is solved with a coarse-to-fine variational scheme on a Gaussian pyramid, which helps prevent the iteration from converging to a local minimum. Experimental results show that the algorithm can obtain scene flow in world coordinates from as few as three sequential frames, without requiring optical flow or disparity as input.
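
The data and smoothness terms described above typically combine into a variational energy of the following generic form, written here only as an illustration (the robust penalty Ψ, the weights, and the exact temporal term are assumptions; the authors' functional may differ):

```latex
E(\mathbf{u}) = \int_{\Omega} \Big[ \Psi\!\big(|I(\mathbf{x}+\mathbf{u}) - I(\mathbf{x})|^{2}\big)
  + \gamma\,\Psi\!\big(|\nabla I(\mathbf{x}+\mathbf{u}) - \nabla I(\mathbf{x})|^{2}\big)
  + \beta\,\Psi\!\big(|\partial_t \mathbf{u}|^{2}\big) \Big]\, d\mathbf{x}
  + \alpha \int_{\Omega} \Psi\!\big(|\nabla u|^{2} + |\nabla v|^{2} + |\nabla w|^{2}\big)\, d\mathbf{x},
  \qquad \mathbf{u} = (u, v, w)^{\top}
```

The three data terms encode brightness constancy, gradient constancy, and short-time velocity constancy, while the last integral regularizes the flow components; the minimization proceeds coarse-to-fine on a Gaussian pyramid.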

9.
A New Linear Method for Reconstructing 3D Motion and Structure from Optical Flow Fields (cited by 3)
A new linear method for computing 3D motion and structure from a sparse optical flow field is presented. The method combines the two classes of approaches in visual motion analysis: corner points in the image are selected as feature points, and the corners are detected and tracked through the image sequence. The displacements of the detected corners across the sequence are recorded, and it is shown theoretically that the optical flow field of a time-varying image can be approximated by the corner displacement field, yielding a sparse optical flow field. An optical flow motion model is then established, from which a linear method for reconstructing the 3D motion and structure of the object from the sparse flow field is derived. Validation on real image sequences shows that the algorithm achieves good results.
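
The corner-displacement approximation of the flow field can be reproduced with standard tools: detect corners, track them with pyramidal Lucas-Kanade, and treat the displacements as a sparse flow field. This is a generic sketch with assumed parameter values, not the paper's corner detector or its linear 3D reconstruction step.

```python
import cv2
import numpy as np

def sparse_corner_flow(prev_gray, curr_gray, max_corners=300):
    """Sparse optical flow as corner displacements between two frames."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=7)
    if corners is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    good = status.ravel() == 1
    p0 = corners.reshape(-1, 2)[good]
    p1 = nxt.reshape(-1, 2)[good]
    return p0, p1 - p0            # corner positions and their displacement vectors
```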

10.
11.
The inherent ambiguities in recovering 3-D motion information from a single optical flow field are studied using a statistical model. The ambiguities are quantified using the Cramer-Rao lower bound. As a special case, the performance bound for the motion of 3-D rigid planar surfaces is studied in detail. The dependence of the bound on factors such as the underlying motion, surface position, surface orientation, field of view, and density of available pixels is derived in the form of closed-form expressions. A subset of the results supports S. Adiv's (1989) analysis of the inherent ambiguities of motion parameters for the general motion of an arbitrary surface. It is shown that the aperture problem in computing the optical flow restricts the nontrivial information about the 3-D motion to a sparse set of pixels at which both components of the flow velocity are observable. Computer simulations are used to study the dependence of the inherent ambiguities on the underlying motion, the field of view, and the number of feature points for motion in front of a nonplanar environment.
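
For background, the bound used to quantify these ambiguities is the standard Cramer-Rao inequality: for any unbiased estimator of the motion/structure parameters θ obtained from the observed flow v,

```latex
\operatorname{Cov}(\hat{\theta}) \;\succeq\; J(\theta)^{-1},
\qquad
J(\theta) = \mathbb{E}\!\left[
  \frac{\partial \ln p(\mathbf{v};\theta)}{\partial \theta}
  \left(\frac{\partial \ln p(\mathbf{v};\theta)}{\partial \theta}\right)^{\!\top}
\right],
```

where J(θ) is the Fisher information matrix. This generic statement is given here only as context; the paper's contribution is evaluating the bound in closed form for planar-surface motion.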

12.
Motion detection can play an important role in many vision tasks. Yet image motion can arise from “uninteresting” events as well as interesting ones. In this paper, salient motion is defined as motion that is likely to result from a typical surveillance target (e.g., a person or vehicle traveling with a sense of direction through a scene) as opposed to other distracting motions (e.g., the scintillation of specularities on water, the oscillation of vegetation in the wind). We propose an algorithm for detecting this salient motion that is based on intermediate-stage vision integration of optical flow. Empirical results are presented that illustrate the applicability of the proposed methods to real-world video. Unlike many motion detection schemes, no knowledge about expected object size or shape is necessary for rejecting the distracting motion.
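
One simple way to operationalize "motion with a sense of direction" is to integrate optical flow over several frames and compare the net displacement with the accumulated path length: coherent translation scores near 1, while oscillating vegetation or specular flicker largely cancels out. The sketch below is such an illustrative consistency measure with assumed parameters, not the paper's intermediate-stage integration algorithm.

```python
import cv2
import numpy as np

def direction_consistency(frames, min_path=0.5):
    """Per-pixel ratio of net displacement to accumulated path length (0..1).

    frames: list of grayscale frames; high values indicate flow that keeps a
    consistent direction over the sequence (candidate salient motion).
    """
    net = np.zeros(frames[0].shape[:2] + (2,), np.float32)   # summed flow vectors
    path = np.zeros(frames[0].shape[:2], np.float32)         # summed flow magnitudes
    for f0, f1 in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(f0, f1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        net += flow
        path += np.linalg.norm(flow, axis=2)
    consistency = np.linalg.norm(net, axis=2) / np.maximum(path, 1e-6)
    consistency[path < min_path * (len(frames) - 1)] = 0     # ignore near-static pixels
    return consistency
```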

13.
To address the tracking of vehicles with uncertain motion flows at intersections, a multi-layer-graph, multi-view tracking method is proposed. A multi-layer graph is constructed to assign each motion flow to a different layer with its own neighborhood; the multi-layer graphs from all viewing angles are mapped into a selected main view; and vehicle trajectories are tracked by solving for the shortest paths of the motion flows in the mapped main view. Tracking experiments and analysis show that the method effectively predicts and tracks vehicle trajectories for uncertain motion flows at intersections, with results essentially consistent with ground truth and a false-judgment rate kept below 6%. The method has practical value and offers a new approach to vehicle tracking in intelligent transportation.

14.
A method has been developed for estimating pitch angle, roll angle, and aircraft body rates based on horizon detection and temporal tracking using a forward‐looking camera, without assistance from other sensors. Using an image processing front end, we select several lines in an image that may or may not correspond to the true horizon. The optical flow at each candidate line is calculated, which may be used to measure the body rates of the aircraft. Using an extended Kalman filter (EKF), the aircraft state is propagated using a motion model and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement to the EKF. To test the accuracy of the algorithm, two flights were conducted, one using a highly dynamic uninhabited airborne vehicle (UAV) in clear flight conditions and the other in a human‐piloted Cessna 172 in conditions in which the horizon was partially obscured by terrain, haze, and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42 and 0.71 deg, respectively, when compared with a truth attitude source. The Cessna flight resulted in pitch and roll error standard deviations of 1.79 and 1.75 deg, respectively. The benefits of selecting and tracking the horizon using a motion model and optical flow rather than naively relying on the image processing front end are demonstrated. © 2011 Wiley Periodicals, Inc.
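
Once a horizon line has been associated, a rough flat-earth geometric reading of attitude from it is straightforward, as sketched below; the sign conventions, the principal-point and focal-length inputs, and the small-angle pitch approximation are assumptions, and this sketch omits the paper's EKF and optical-flow body-rate measurements entirely.

```python
import numpy as np

def attitude_from_horizon(p1, p2, principal_point, focal_px):
    """Approximate roll and pitch (degrees) from two image points on the horizon.

    Flat-earth, camera-aligned-with-body assumptions; illustrative only.
    """
    (x1, y1), (x2, y2) = p1, p2
    roll = np.arctan2(y2 - y1, x2 - x1)        # sign depends on the image-axis convention
    cx, cy = principal_point
    nx, ny = -(y2 - y1), (x2 - x1)             # normal to the horizon line
    d = ((cx - x1) * nx + (cy - y1) * ny) / np.hypot(nx, ny)  # signed distance to the line
    pitch = np.arctan2(d, focal_px)            # small-angle: pitch ~ d / f
    return np.degrees(roll), np.degrees(pitch)
```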

15.
Region-Based Optical Flow Analysis Using Inter-Frame Differencing and Its Applications (cited by 5)
李超  熊璋  赫阳  刘玉恒 《计算机工程与应用》2005,41(31):195-197,222
Traditional video motion detection commonly uses inter-frame differencing, background subtraction, and similar methods to detect the presence of image changes. These methods respond sensitively to changed regions and make it easy to extract static properties of moving objects, such as position and contour, but they do not directly yield motion properties such as speed and direction. Optical flow methods can further provide the motion at every pixel of the motion field, but the computation involved is large, making them hard to apply directly where real-time performance is required. This paper therefore proposes a region-based optical flow analysis method built on a joint inter-frame difference: moving regions are first extracted by the joint inter-frame difference, and optical flow is computed only within those regions, preserving the speed needed for real-time processing while reducing the cost of the flow computation. Applications in video surveillance, traffic monitoring, and similar scenarios are also discussed.

16.
To address the problem that Horn-Schunck optical flow motion estimates may contain locally mis-estimated points, an optical flow motion-vector optimization algorithm based on Wiener linear prediction is proposed. First, the optical flow motion vectors are converted from Cartesian to polar coordinates. Next, according to a set of decision rules, suspicious points are examined further, and points judged to be mis-estimated are re-estimated using Wiener linear prediction. Finally, the optical flow vectors are converted from polar coordinates back to Cartesian coordinates, completing the optimization. Compared with the directly computed Horn-Schunck flow, the magnitude and angle errors at mis-estimated points are markedly reduced in the optimized flow, and the accuracy of the flow vectors is improved to a certain degree. Both the directly computed and the optimized Horn-Schunck flow were applied to motion compensation of images and video sequences; the results show that the Wiener-linear-prediction-based optimization achieves good performance.
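
A minimal sketch of the re-estimation step: linear-prediction (Wiener/Yule-Walker) coefficients are fitted to a 1D sequence of flow magnitudes or angles taken in polar form, and samples flagged as mis-estimated are replaced by their predicted values. The prediction order, the scan order of the flow field, and the outlier flags are assumptions; the paper's decision rules are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lp_coefficients(x, order):
    """Yule-Walker (autocorrelation-based) linear-prediction coefficients."""
    x = np.asarray(x, float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

def reestimate_outliers(values, is_outlier, order=4):
    """Replace flagged samples of a 1D flow sequence by their linear predictions."""
    y = np.asarray(values, float).copy()
    good = ~np.asarray(is_outlier, bool)
    mean = y[good].mean()
    a = lp_coefficients(y[good] - mean, order)
    for n in np.flatnonzero(~good):
        if n >= order:
            past = y[n - order:n][::-1] - mean    # most recent sample first
            y[n] = mean + np.dot(a, past)         # predicted (re-estimated) value
    return y
```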

17.
In this paper, we explore how a wide field-of-view imaging system, consisting of a number of cameras in a network arranged to approximate a spherical eye, can reduce the complexity of estimating camera motion. A depth map of the imaged scene can be reconstructed once the camera motion is known. We present a direct method to recover camera motion from video data that requires neither the establishment of feature correspondences nor the recovery of optical flow, relying instead on the normal flow, which is directly observable. With a wide visual field, the inherent ambiguities between translation and rotation disappear. Several subsets of normal flow pairs and triplets can be utilized to constrain the directions of translation and rotation separately. The intersection of the solution spaces arising from normal flow pairs or triplets yields the estimate of the direction of motion. In addition, the larger number of normal flow measurements thus obtained can be used to combat local flow-extraction error. The rotational magnitude is recovered in a subsequent stage. This article details how motion recovery can be improved with the use of such an approximate spherical imaging system. Experimental results on synthetic and real image data are provided. The results show that the accuracy of motion estimation is comparable to that of state-of-the-art methods that require explicit feature correspondences or full optical flow, and our method has a much faster computational speed.
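
Normal flow, the component of image motion along the local intensity gradient, follows directly from the spatio-temporal derivatives without solving the aperture problem. A minimal per-pixel sketch with assumed finite-difference derivatives (not the authors' camera-network pipeline):

```python
import numpy as np

def normal_flow(prev, curr, eps=1e-6):
    """Per-pixel normal flow n = -(It / |grad I|^2) * grad I for two grayscale frames."""
    Ix = np.gradient(prev.astype(float), axis=1)
    Iy = np.gradient(prev.astype(float), axis=0)
    It = curr.astype(float) - prev.astype(float)
    g2 = Ix**2 + Iy**2 + eps
    return -It * Ix / g2, -It * Iy / g2   # x and y components along the gradient
```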

18.
For intelligent/autonomous subsea vehicles, reliable short-range horizontal positioning is difficult to achieve, particularly over flat bottom topography. A potential solution proposed in this paper utilizes a passive optical sensing method to estimate the vehicle displacement using the bottom surface texture. The suggested optical flow method does not require any feature correspondences between images, and it is robust to brightness changes between image frames. Fundamentally, this method is similar to correlation methods that attempt to match images and compute the motion disparity. However, in correlation methods, blindly searching a neighborhood for the best match is time-consuming. The main contributions of this paper are an analysis showing that optical flow computation based on the general model cannot avoid errors except for null motion, although the sign of the optical flow remains correct, and the development of an iterative shifting method, based on these error characteristics, that determines the motion accurately. The advantages of the proposed method are verified by real-image experiments.
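
The iterative-shifting idea can be illustrated as follows: estimate the dominant displacement, shift the second frame back by that estimate, re-estimate on the residual, and accumulate until the update is negligible. In the sketch the mean of a Farnebäck flow field serves as the per-iteration displacement estimate, which is only a stand-in for the paper's optical-flow model and error analysis.

```python
import cv2
import numpy as np

def iterative_shift_displacement(prev_gray, curr_gray, iters=5, tol=0.05):
    """Accumulate a global displacement by repeatedly shifting and re-estimating."""
    h, w = prev_gray.shape[:2]
    total = np.zeros(2, np.float32)
    shifted = curr_gray
    for _ in range(iters):
        flow = cv2.calcOpticalFlowFarneback(prev_gray, shifted, None,
                                            0.5, 3, 21, 3, 5, 1.2, 0)
        step = flow.reshape(-1, 2).mean(axis=0)          # dominant residual motion
        total += step
        if np.linalg.norm(step) < tol:
            break
        M = np.float32([[1, 0, -total[0]], [0, 1, -total[1]]])
        shifted = cv2.warpAffine(curr_gray, M, (w, h))   # undo the estimate so far
    return total
```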

19.
Two novel systems computing dense three-dimensional (3-D) scene flow and structure from multiview image sequences are described in this paper. We do not assume rigidity of the scene motion, thus allowing for nonrigid motion in the scene. The first system, the integrated model-based system (IMS), assumes that each small local image region is undergoing 3-D affine motion. Non-linear motion-model fitting based on both optical flow constraints and stereo constraints is then carried out on each local region in order to simultaneously estimate 3-D motion correspondences and structure. The second system, the extended gradient-based system (EGS), is a natural extension of two-dimensional (2-D) optical flow computation. In this method, a new hierarchical rule-based stereo matching algorithm is first developed to estimate the initial disparity map. Different constraints available under a multiview camera setup are further investigated and utilized in the proposed motion estimation. We use image segmentation information to preserve the motion and depth discontinuities. Within the framework for EGS, we present two different formulations for 3-D scene flow and structure computation. One formulation assumes that the initial disparity map is accurate, while the other does not. Experimental results on both synthetic and real imagery demonstrate the effectiveness of our 3-D motion and structure recovery schemes. An empirical comparison between IMS and EGS is also reported.

20.
The automotive industry invests substantial amounts of money in driver-security and driver-assistance systems. We propose an overtaking detection system based on visual motion cues that combines feature extraction, optical flow, solid-objects segmentation and geometry filtering, working with a low-cost compact architecture based on one focal plane and an on-chip embedded processor. The processing is divided into two stages: firstly analog processing on the focal plane processor dedicated to image conditioning and relevant image-structure selection, and secondly, vehicle tracking and warning-signal generation by optical flow, using a simple digital microcontroller. Our model can detect an approaching vehicle (multiple-lane overtaking scenarios) and warn the driver about the risk of changing lanes. Thanks to the use of tightly coupled analog and digital processors, the system is able to perform this complex task in real time with very constrained computing resources. The proposed method has been validated with a sequence of more than 15,000 frames (90 overtaking maneuvers) and is effective under different traffic situations, as well as weather and illumination conditions.
