Similar Documents
20 similar documents found (search time: 15 ms)
1.
Described here is a method for estimating rolling and swaying motions of a mobile robot using optical flow. We have proposed an image sensor with a hyperboloidal mirror, called HyperOmni Vision, for the vision-based navigation of a mobile robot. The radial component of optical flow in HyperOmni Vision has a periodic characteristic, and the circumferential component has a symmetric characteristic. The proposed method exploits these characteristics to robustly estimate the rolling and swaying motions of the mobile robot. Correspondence to: Y. Yagi e-mail: y-yagi@sys.es.osaka-u.ac.jp
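The radial/circumferential flow decomposition this abstract relies on can be sketched as follows; the function name and the image-centre convention are illustrative choices, not taken from the paper:

```python
# Sketch: splitting an omnidirectional optical-flow vector into radial and
# circumferential components relative to the image centre. Illustrative only.
import math

def decompose_flow(point, flow, center=(0.0, 0.0)):
    """Return (radial, circumferential) components of `flow` at `point`."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    r = math.hypot(dx, dy)
    if r == 0.0:
        return 0.0, 0.0
    ur = (dx / r, dy / r)    # unit radial direction (outward)
    ut = (-dy / r, dx / r)   # unit tangential direction (counter-clockwise)
    radial = flow[0] * ur[0] + flow[1] * ur[1]
    circumferential = flow[0] * ut[0] + flow[1] * ut[1]
    return radial, circumferential
```

A purely outward flow at a point on the positive x-axis, for example, has a zero circumferential component.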

2.
Omnidirectional cameras that give a 360° panoramic view of the surroundings have recently been used in many applications such as robotics, navigation, and surveillance. This paper describes the application of parametric ego-motion estimation for vehicle detection to perform surround analysis using an automobile-mounted camera. For this purpose, the parametric planar motion model is integrated with the transformations to compensate distortion in omnidirectional images. The framework is used to detect objects with independent motion or height above the road. Camera calibration as well as the approximate vehicle speed obtained from a CAN bus are integrated with the motion information from spatial and temporal gradients using a Bayesian approach. The approach is tested for various configurations of an automobile-mounted omni camera as well as a rectilinear camera. Successful detection and tracking of moving vehicles and generation of a surround map are demonstrated for application to intelligent driver support. Received: 1 August 2003, Accepted: 8 July 2004, Published online: 3 February 2005

3.
To overcome the inability of conventional cameras to rapidly and continuously track moving objects over a panoramic field, this paper proposes a bio-inspired compound-eye panoramic detection scheme together with a matching tracking strategy. On the hardware side, multiple sub-eye cameras perform real-time panoramic detection, while a large-aperture main-eye camera at the centre is mounted on a pan-tilt platform and is precisely positioned via command codes received by the platform controller. On the software side, an automatic windowed-acquisition scheme is adopted: images of moving regions are captured with multiple window layers, while non-moving regions use fewer layers. The captured images are then processed with Gaussian moving-target detection, applying a compound-eye-inspired lateral-inhibition algorithm in the overlap regions to obtain relatively sharp contours of the moving targets. A physical experimental setup was built according to this hardware/software design, and repeatability experiments were conducted. Owing to the real-time detection and the combination of windowed acquisition with lateral inhibition, the field of view and sensitivity are greatly improved compared with tracking using conventional cameras.

4.
A biologically inspired visual system capable of motion detection and pursuit motion is implemented using a Discrete Leaky Integrate-and-Fire (DLIF) neuron model. The system consists of a visual world, a virtual retina, the neural network circuitry (DLIF) to process the information, and a set of virtual eye muscles that serve to move the input area (visual field) of the retina within the visual world. Temporal aspects of the DLIF model are heavily exploited, including spike propagation latency, relative spike timing, and leaky potential integration. A novel technique for motion detection is employed, utilizing the coincidence-detection aspects of the DLIF and relative spike timing. The system as a whole encodes information using the relative spike timing of individual action potentials as well as rate-coded spike trains. Experimental results are presented in which the motion of objects is detected and tracked in real and animated video. Pursuit is successful along both linear and sinusoidal paths, including paths with object velocity changes. The visual system exhibits dynamic overshoot correction, heavily exploiting neural network characteristics. System performance is within the bounds of real-time applications.

5.
This paper presents a recurrent-neural-network-based novelty filter in which a Scitos G5 mobile robot explores the environment and builds dynamic models of observed sensory–motor values; the acquired models of normality are then used to predict the expected future sensory–motor inputs during patrol. Novelties are detected whenever the prediction error between the model-predicted values and the actual observed values exceeds a local novelty threshold. The network is trained on-line: it grows by inserting new nodes when an abnormal observation is perceived from the environment, and it shrinks when learned information is no longer necessary. In addition, the network continuously learns region-specific novelty thresholds on-line. To evaluate the proposed algorithm, real-world robotic experiments were conducted by fusing sensory perceptions (vision and laser sensors) with the robot's motor control outputs (translational and rotational velocities). Experimental results showed that all novelty cases were highlighted by the proposed algorithm and that it produced reliable local novelty thresholds while the robot patrolled a noisy environment. Statistical analysis showed a strong correlation between the novelty filter responses and the actual novelty status. Furthermore, the filter was compared with another novelty filter, and the proposed system performed better novelty detection.
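The core test in this abstract, flagging an observation as novel when the prediction error exceeds a region-specific threshold adapted on-line, can be sketched minimally as below; the exponential threshold-update rule and all parameter values are assumptions for illustration, not the paper's exact scheme:

```python
# Minimal sketch of prediction-error novelty detection with per-region
# thresholds adapted on-line. The update rule here is an assumption.
class NoveltyFilter:
    def __init__(self, init_threshold=1.0, adapt_rate=0.1, margin=2.0):
        self.thresholds = {}            # region id -> local novelty threshold
        self.init_threshold = init_threshold
        self.adapt_rate = adapt_rate    # how fast thresholds track errors
        self.margin = margin            # threshold target = margin * error

    def update(self, region, predicted, observed):
        """Return True if the observation is novel; adapt the local threshold."""
        error = abs(observed - predicted)
        thr = self.thresholds.get(region, self.init_threshold)
        novel = error > thr
        if not novel:
            # Track the typical (normal) error level seen in this region.
            thr += self.adapt_rate * (self.margin * error - thr)
            self.thresholds[region] = thr
        return novel
```

With this scheme, regions with noisy but normal sensor readings settle on larger thresholds than quiet regions, which is the role the abstract assigns to region-specific thresholds.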

6.
Uncalibrated obstacle detection using normal flow
This paper addresses the problem of obstacle detection for mobile robots. The visual information provided by a single on-board camera is used as input. We assume that the robot is moving on a planar pavement, and any point lying outside this plane is treated as an obstacle. We address the problem of obstacle detection by exploiting the geometric arrangement between the robot, the camera, and the scene. During an initialization stage, we estimate an inverse perspective transformation that maps the image plane onto the horizontal plane. During normal operation, the normal flow is computed and inversely projected onto the horizontal plane. This simplifies the resultant flow pattern, and fast tests can be used to detect obstacles. A salient feature of our method is that only the normal flow information, i.e., first-order time-and-space image derivatives, is used, thus coping with the aperture problem. Another important point is that, in contrast with other methods, the vehicle motion and the intrinsic and extrinsic parameters of the camera need not be known or calibrated. Both translational and rotational motion can be dealt with. We present motion estimation results on synthetic and real-image data. A real-time version, implemented on a mobile robot, is described.
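The detection test described above can be sketched roughly as follows: image points are mapped to the ground plane through an inverse perspective homography estimated at initialization, and a point whose back-projected displacement disagrees with the expected ground-plane shift is declared an obstacle. The homography values and tolerance below are illustrative, not calibrated ones:

```python
# Hedged sketch: ground-plane back-projection plus a residual test.
import math

def apply_homography(H, x, y):
    """Map image point (x, y) to the ground plane with a 3x3 homography H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

def is_obstacle(H, p0, p1, expected_shift, tol=0.05):
    """Flag a point whose back-projected motion deviates from the shift the
    ground plane would produce (large residual => off the ground plane)."""
    g0 = apply_homography(H, *p0)
    g1 = apply_homography(H, *p1)
    residual = math.hypot(g1[0] - g0[0] - expected_shift[0],
                          g1[1] - g0[1] - expected_shift[1])
    return residual > tol
```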

7.
Dust particle detection in video aims to automatically determine whether a video is degraded by dust particles. Dust particles are usually stuck on the camera lens and are typically temporally static in the images of a video sequence captured from a dynamic scene. Moving objects in the scene can be occluded by the dust; consequently, the motion information of moving objects tends to exhibit singularities. Motivated by this, a dust detection approach is proposed in this paper that exploits motion singularity analysis in the video. First, the optical model of a dust particle is theoretically studied by simulating the optical density of artifacts produced by dust particles. Then, optical flow is exploited to perform motion singularity analysis for blind dust detection without the need for a ground-truth dust-free video. More specifically, a singularity model of optical flow is proposed using the direction of the motion flow field instead of its amplitude. The proposed motion singularity model is further incorporated into a temporal voting mechanism to develop automatic dust particle detection in video. Experiments conducted on both artificially simulated and real-world dust-degraded videos demonstrate that the proposed approach outperforms conventional approaches and achieves more accurate dust detection.
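The temporal-voting idea can be illustrated with a toy version: a per-frame "singularity" flag (here, simply a large angular deviation of a pixel's flow direction from its neighbourhood's dominant direction) is accumulated over frames, and a pixel is declared dust when enough frames vote for it. The angle test and vote ratio are illustrative choices, not the paper's model:

```python
# Toy sketch of direction-based singularity flags plus temporal voting.
import math

def direction_deviation(flow, neighbour_flow):
    """Angular difference in [0, pi] between two flow directions."""
    a = math.atan2(flow[1], flow[0])
    b = math.atan2(neighbour_flow[1], neighbour_flow[0])
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def is_dust_pixel(per_frame_flows, per_frame_neighbours,
                  angle_thresh=math.pi / 2, vote_ratio=0.6):
    """Vote over frames: a pixel whose flow direction persistently disagrees
    with its neighbourhood is flagged as dust."""
    votes = sum(
        1 for f, n in zip(per_frame_flows, per_frame_neighbours)
        if direction_deviation(f, n) > angle_thresh
    )
    return votes >= vote_ratio * len(per_frame_flows)
```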

8.
The implementation of a set of visually based behaviors for navigation is presented. The approach, inspired by insect behavior, aims at building a “library” of embedded visually guided behaviors that cope with the most common situations encountered during navigation in an indoor environment. Following this approach, the main goal is no longer how to characterize the environment, but how to embed in each behavior the perceptual processes necessary to understand the aspects of the environment required to generate a purposeful motor output.

The approach relies on the purposive definition of the task to be solved by each of the behaviors and it is based on the possibility of computing visual information during the action. All the implemented behaviors share the same input process (partial information of the image flow field) and the same control variables (heading direction and velocity) to demonstrate both the generality of the approach as well as its efficient use of the computational resources. The controlled mobile base is supposed to move on a flat surface but virtually no calibration is required of the intrinsic and extrinsic parameters of the two cameras and no attempt is made at building a 2D or 3D map of the environment: the only output of the perceptual processes is a motor command.

The first behavior, the centering reflex, allows a robot to be easily controlled to navigate along corridors or to follow walls of a given scene structure. The second behavior extends the system's capabilities to the detection of obstacles lying on the pavement in front of the mobile robot. Finally, docking behaviors that bring the robot to a given position in the environment, with controlled speed and orientation, are presented.

Besides the long-term goal of building a completely autonomous system, these behaviors can have very short-term applications in the area of semi-autonomous systems by taking care of the continuous, tedious control required during routine navigation.


9.
Robust user detection and tracking is one of the key issues for a personal robot that must follow a target person. In this paper, a novel tracking system using an omnidirectional camera and IR LED tags is proposed. The users wear the tags on their ankles, and each tag emits a light pattern as its ID. The camera on the robot is used to detect and track their positions individually. A novel approach based on a track-before-detect particle filter is proposed; it detects and tracks the tags simultaneously, even if the tags are not synchronized with the camera sampling or are not fully observable. The effectiveness of the proposed system is evaluated by experiments using a prototype personal robot.

10.
The main objective of this paper is to provide a tool for performing path planning at the servo-level of a mobile robot. The ability to perform, in a provably-correct manner, such a complex task at the servo-level can lead to a large increase in the speed of operation, low energy consumption and high quality of response. Planning has been traditionally limited to the high level controller of a robot. The guidance velocity signal from this stage is usually converted to a control signal using what is known as an electronic speed controller (ESC). This paper demonstrates the ability of the harmonic potential field (HPF) approach to generate a provably-correct, constrained, well-behaved trajectory and control signal for a rigid, nonholonomic robot in a stationary, cluttered environment. It is shown that the HPF-based, servo-level planner can address a large number of challenges facing planning in a realistic situation. The suggested approach migrates the rich and provably-correct properties of the solution trajectories from an HPF planner to those of the robot. This is achieved using a synchronizing control signal whose aim is to align the velocity of the robot in its local coordinates, with that of the gradient of the HPF. The link between the two is made possible by representing the robot using what the paper terms “separable form”. The context-sensitive and goal-oriented control signal used to steer the robot is demonstrated to be well-behaved and robust in the presence of actuator noise, saturation and uncertainty in the parameters. The approach is developed, proofs of correctness are provided and the capabilities of the scheme are demonstrated using simulation results.
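The HPF idea itself is concrete enough to sketch: the potential satisfies Laplace's equation over free space (computed here by relaxation, with the goal clamped low and obstacles/walls clamped high), and the robot's velocity is aligned with the negative gradient, here reduced to steepest-descent steps on a grid. Grid size, iteration count, and the discrete descent rule are illustrative choices, not the paper's servo-level scheme:

```python
# Sketch: harmonic potential field on a grid plus gradient following.
def harmonic_field(n, goal, obstacles, iters=500):
    """n x n potential grid: border/obstacles fixed at 1.0, goal at 0.0,
    interior relaxed toward the discrete harmonic solution."""
    u = [[1.0] * n for _ in range(n)]
    u[goal[0]][goal[1]] = 0.0
    fixed = {goal} | set(obstacles)
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if (i, j) in fixed:
                    continue
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return u

def descend(u, start, max_steps=100):
    """Follow the steepest-descent neighbour; a harmonic field has no local
    minima in free space, so the path ends at the goal."""
    path, cur = [start], start
    for _ in range(max_steps):
        i, j = cur
        nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        nxt = min(nbrs, key=lambda p: u[p[0]][p[1]])
        if u[nxt[0]][nxt[1]] >= u[i][j]:
            break
        cur = nxt
        path.append(cur)
    return path
```

The absence of spurious local minima is the property the abstract's "provably-correct" claims build on.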

11.
The head trajectory is an interesting source of information for behavior recognition and can be very useful for video surveillance applications, especially fall detection, which motivates tracking the head in real-time with a single camera. Consequently, much work has been done to track the head in the 2D image plane using a single camera or in 3D using multiple cameras. In this article, an original method to extract the 3D head trajectory of a person in a room is proposed using only one calibrated camera. The head is represented as a 3D ellipsoid, which is tracked with a hierarchical particle filter based on color histograms and shape information. Experiments demonstrated that this method can run in quasi-real-time, providing reasonable 3D errors for a monocular system. Results on fall detection using the head's 3D vertical velocity or height obtained from the 3D trajectory are also presented.
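The fall-detection cue mentioned at the end of this abstract reduces to a simple test on the recovered trajectory: flag a fall when the head's vertical velocity (a finite difference of height over time) exceeds a downward threshold, or when the head height drops below a minimum. The thresholds below are placeholder values, not the paper's tuned ones:

```python
# Toy sketch: fall detection from a 3D head-height trajectory.
def detect_fall(heights, timestamps, vel_thresh=-1.5, height_thresh=0.4):
    """heights: head height (m) per frame; timestamps: time (s) per frame.
    Returns True if a fast downward motion or a very low head is seen."""
    for k in range(1, len(heights)):
        dt = timestamps[k] - timestamps[k - 1]
        if dt <= 0:
            continue
        velocity = (heights[k] - heights[k - 1]) / dt  # m/s, negative = down
        if velocity < vel_thresh or heights[k] < height_thresh:
            return True
    return False
```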

12.
This paper studies the tracking control problem of uncertain nonholonomic mobile robots based on visual servoing. Using visual feedback and a state-input transformation, an uncertain model for a class of nonholonomic kinematic systems is proposed, and, by applying two new transformations, adaptive dynamic feedback controllers are designed for three different cases to track the desired trajectory of the uncertain system. Using the Lyapunov method and an extended Barbalat lemma, the convergence of the error system is rigorously proved. Simulation results verify the effectiveness of the proposed method.

13.
This paper studies the trajectory and force tracking control problem of mobile manipulators subject to holonomic and nonholonomic constraints with unknown inertia parameters. Adaptive controllers are proposed based on a suitable reduced dynamic model, the defined reference signals and the mixed tracking errors. The proposed controllers not only ensure the entire state of the system to asymptotically converge to the desired trajectory but also ensure the constraint force to asymptotically converge to the desired force. A detailed numerical example is presented to illustrate the developed methods.

14.
For the problem of tracking a moving target with a mobile robot in indoor environments, a robot localization and target motion estimation method based on fusing laser and monocular vision sensing information is proposed. First, the laser data are used to detect the target and to perform robot localization and environment mapping. Then, a target position estimation algorithm based on the monocular vision sensor is designed to obtain the range and bearing of the target. To fuse the two kinds of sensing information effectively, the laser scanner and the monocular camera are jointly calibrated to obtain their relative pose; on this basis, the target ranges and bearings extracted from the laser and from monocular vision are fused by a particle filter with an optimal importance function and weights, yielding an accurate estimate of the target's motion state. Experimental results show that the method has good tracking performance.

15.
We present a visual assistive system that features mobile face detection and recognition in an unconstrained environment from a mobile source using convolutional neural networks. The goal of the system is to effectively detect individuals approaching the person equipped with the system. We find that face detection and recognition become very difficult because the user's movement causes camera shake, resulting in motion blur and noise in the input to the visual assistive system. Due to the shortage of related datasets, we create a dataset of videos captured from a mobile source that feature motion blur and noise from camera shake, making this a very challenging setting for face detection and recognition in unconstrained environments. The performance of the convolutional neural network is further compared with that of a cascade classifier. The results show promising performance in daylight and artificial lighting conditions, while challenges remain for moonlight conditions and in reducing false positives to develop a robust system. We also provide a framework for implementing the system with smartphones and wearable devices, with video input and auditory notifications from the system to guide the visually impaired.

16.
This paper proposes a framework to aid video analysts in detecting suspicious activity within the tremendous amounts of video data that exist in today's world of omnipresent surveillance video. Ideas and techniques for closing the semantic gap between low-level machine-readable features of video data and high-level events seen by a human observer are discussed. An evaluation of the event classification and detection technique is presented, and a future experiment to refine this technique is proposed. These experiments lead into a discussion of the optimal machine learning algorithm for learning the event representation scheme proposed in this paper.

17.
This paper presents a novel method to accurately detect moving objects from a video sequence captured using a nonstationary camera. Although common methods provide effective motion detection for static backgrounds or through only planar-perspective transformation, many detection errors occur when the background contains complex dynamic interference or the camera undergoes unknown motions. To solve this problem, this study proposes a motion detection method that incorporates temporal motion and spatial structure. In the proposed method, first, spatial semantic planes are segmented, and image registration based on stable background planes is applied to overcome interference from the foreground and dynamic background; the estimated dense temporal motion thus ensures that small moving objects are not missed. Second, motion pixels are mapped onto semantic planes, and the spatial distribution constraints of motion pixels, regional shapes, and plane semantics, integrated into a planar structure, are used to minimise false positives. Finally, based on the dense temporal motion and spatial structure, moving objects are accurately detected. Experimental results on the CDnet dataset, Pbi dataset, Aeroscapes dataset, and other challenging self-captured videos under difficult conditions, such as fast camera movement, large zoom variation, video jitter, and dynamic backgrounds, revealed that the proposed method can remove background movements, dynamic interference, and marginal noise and can effectively obtain complete moving objects.

18.
Building facade detection is an important problem in computer vision, with applications in mobile robotics and semantic scene understanding. In particular, mobile platform localization and guidance in urban environments can be enabled with accurate models of the various building facades in a scene. Toward that end, we present a system for detection, segmentation, and parameter estimation of building facades in stereo imagery. The proposed method incorporates multilevel appearance and disparity features in a binary discriminative model, and generates a set of candidate planes by sampling and clustering points from the image with Random Sample Consensus (RANSAC), using local normal estimates derived from Principal Component Analysis (PCA) to inform the planar models. These two models are incorporated into a two-layer Markov Random Field (MRF): an appearance- and disparity-based discriminative classifier at the mid-level, and a geometric model to segment the building pixels into facades at the high-level. By using object-specific stereo features, our discriminative classifier is able to achieve substantially higher accuracy than standard boosting or modeling with only appearance-based features. Furthermore, the results of our MRF classification indicate a strong improvement in accuracy for the binary building detection problem and the labeled planar surface models provide a good approximation to the ground truth planes.
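The RANSAC plane-candidate step mentioned above can be sketched generically: hypothesise planes from random 3-point samples and keep the one with the most inliers. This is plain RANSAC plane fitting, not the paper's full pipeline (no PCA normals, no MRF stage), and all parameter values are illustrative:

```python
# Generic mini-RANSAC for plane fitting on a 3D point set.
import math
import random

def plane_from_points(p, q, r):
    """Return (unit_normal, d) with normal . x + d = 0, or None if collinear."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    if norm == 0:
        return None
    n = [c / norm for c in n]
    return n, -sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Sample 3-point planes and score each by inlier count."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers
```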

19.
Crowd riot behavior poses a serious threat to public safety and is a key target of intelligent video surveillance. To address the low computational efficiency and low detection accuracy of existing crowd riot detection algorithms, a detection algorithm based on analyzing changes in group motion patterns is proposed. The method extracts optical-flow features of foreground pixels as the basis for behavior analysis and uses K-means clustering and a Bayesian criterion to partition the people in the scene into groups. On this basis, the changes in the motion patterns of all groups in the scene are analyzed, a maximum change factor is constructed, and its variation is computed to detect riot behavior. Experimental results show that the proposed method has low false alarm and missed detection rates and a short average detection time.

20.
An adaptive focus-of-attention model for video surveillance and monitoring
In current video surveillance systems, commercial pan/tilt/zoom (PTZ) cameras typically provide naive (or no) automatic scanning functionality to move a camera across its complete viewable field. However, the lack of scene-specific information inherently handicaps these scanning algorithms. We address this issue by automatically building an adaptive, focus-of-attention, scene-specific model using standard PTZ camera hardware. The adaptive model is constructed by first detecting local human activity (i.e., any translating object with a specific temporal signature) at discrete locations across a PTZ camera’s entire viewable field. The temporal signature of translating objects is extracted using motion history images (MHIs) and an original, efficient algorithm based on an iterative candidacy-classification-reduction process to separate the target motion from noise. The target motion at each location is then quantified and employed in the construction of a global activity map for the camera. We additionally present four new camera scanning algorithms which exploit this activity map to maximize a PTZ camera’s opportunity of observing human activity within the camera’s overall field of view. We expect that these efficient and effective algorithms are implementable within current commercial camera systems.
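The motion history image used above has a standard update rule: each pixel that moved in the current frame is set to a maximum timestamp value tau, and every other pixel decays, so recent motion stays bright while older motion fades. The list-of-lists layout and tau value below are illustrative:

```python
# Sketch: per-frame motion history image (MHI) update.
def update_mhi(mhi, motion_mask, tau=255):
    """motion_mask[i][j] is True where the pixel changed this frame; moving
    pixels are reset to tau, static pixels decay by one (floored at 0)."""
    for i in range(len(mhi)):
        for j in range(len(mhi[i])):
            if motion_mask[i][j]:
                mhi[i][j] = tau
            else:
                mhi[i][j] = max(0, mhi[i][j] - 1)
    return mhi
```

Thresholding the MHI at different levels recovers the temporal signature of a translating object, which is what the activity-map construction above quantifies.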
