Similar Articles
20 similar articles found
1.
Motion-based motion deblurring
Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion-blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion-blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade-off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.
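A minimal sketch of the final deconvolution step, assuming the measured camera path has already been converted into a blur kernel: a synthetic linear PSF and a frequency-domain Wiener filter stand in for the paper's actual kernel and deblurring method, and the kernel shape and noise-to-signal ratio `k` below are illustrative assumptions.

```python
import numpy as np

def linear_motion_psf(length: int, size: int) -> np.ndarray:
    """Toy PSF: horizontal motion of `length` pixels in a size x size kernel."""
    psf = np.zeros((size, size))
    psf[size // 2, (size - length) // 2:(size + length) // 2] = 1.0
    return psf / psf.sum()

def wiener_deblur(blurred: np.ndarray, psf: np.ndarray, k: float = 0.01) -> np.ndarray:
    """Frequency-domain Wiener filter with noise-to-signal ratio k."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```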

2.
Virtual world explorations by using topological and semantic knowledge
This paper is dedicated to virtual world exploration techniques. Automatic camera control is important in many fields, such as computational geometry, visual servoing, robot motion, and graph drawing. The paper introduces a high-level camera-control approach for virtual environments. The proposed method addresses real-time 3D scene exploration and consists of two steps. In the first step, a set of good viewpoints is chosen to give the user maximum knowledge of the scene. The second step uses these viewpoints to compute a camera path between them. Finally, we define a notion of semantic distance between objects of the scene to improve the approach.
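A minimal sketch of the first step, recast as greedy set cover: each candidate viewpoint is assumed to come with the set of scene objects it sees, and viewpoints are added until most objects are covered. The paper's actual viewpoint-quality measure is richer than plain coverage; the `visibility` input and the coverage threshold are assumptions.

```python
def select_viewpoints(visibility: dict[int, set[str]], coverage: float = 0.95) -> list[int]:
    """Greedily pick viewpoints that reveal the most objects not yet seen."""
    all_objects = set().union(*visibility.values())
    seen: set[str] = set()
    chosen: list[int] = []
    while len(seen) < coverage * len(all_objects):
        best = max(visibility, key=lambda v: len(visibility[v] - seen))
        if not visibility[best] - seen:
            break  # no remaining viewpoint adds anything new
        chosen.append(best)
        seen |= visibility[best]
    return chosen
```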

3.
周方波, 赵怀林, 刘华平. 《智能系统学报》, 2022, 17(5): 1032-1038
When a mobile robot performs everyday household tasks, it must first be able to avoid obstacles in the environment and autonomously find objects in a room. To address the problem of how a mobile robot can efficiently search for a target object in an indoor environment, we propose a scene-graph-based indoor target search method for mobile robots, whose framework combines a navigation map, a semantic map, and a semantic relation graph. A semantic map containing the positions of landmark objects is built on top of the navigation map, so the robot can easily locate the landmark objects. For dynamic objects, the robot uses the co-occurrence relations between objects in the semantic relation graph and preferentially searches near the landmark objects with the strongest relation. Physical experiments show that, with the help of the semantic map and the semantic relation graph, the robot can efficiently find targets in indoor environments while significantly reducing the search path length, demonstrating the effectiveness of the method.
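A minimal sketch of the landmark-selection idea for dynamic objects: rank landmarks by the co-occurrence strength stored in the semantic relation graph and visit the strongest first. The graph weights and object names below are made up for illustration.

```python
def rank_landmarks(target: str, cooccurrence: dict[tuple[str, str], float]) -> list[str]:
    """Order landmark objects by their co-occurrence strength with `target`."""
    scores = {lm: w for (obj, lm), w in cooccurrence.items() if obj == target}
    return sorted(scores, key=scores.get, reverse=True)

# Example: a cup is most often found near the dining table.
graph = {("cup", "table"): 0.8, ("cup", "sofa"): 0.3, ("cup", "bed"): 0.1}
print(rank_landmarks("cup", graph))  # ['table', 'sofa', 'bed']
```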

4.
We present an autonomous mobile robot navigation system using stereo fish-eye lenses for navigation in an indoor structured environment and for generating a model of the imaged scene. The system estimates the three-dimensional (3D) position of significant features in the scene and, by estimating its position relative to the features, navigates through narrow passages and makes turns at corridor ends. Fish-eye lenses are used to provide a large field of view, which images objects close to the robot and helps in making smooth transitions in the direction of motion. Calibration is performed for the lens-camera setup and the distortion is corrected to obtain accurate quantitative measurements. A vision-based algorithm that uses the vanishing points of segments extracted from the scene in a few 3D orientations provides an accurate estimate of the robot orientation. This is used, in addition to 3D recovery via stereo correspondence, to maintain the robot motion on a purely translational path, as well as to remove the effects of any drift from this path in each acquired image. Horizontal segments are used as a qualitative estimate of change in the motion direction, and correspondence of vertical segments provides precise 3D information about objects close to the robot. Assuming that detected linear edges in the scene are boundaries of planar surfaces, the 3D model of the scene is generated. The robot system is implemented and tested in a structured environment at our research center. Results from robot navigation in real environments are presented and discussed. Received: 25 September 1996 / Accepted: 20 October 1996
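A minimal sketch of one ingredient, estimating a vanishing point as the least-squares intersection of a set of extracted 2D line segments; the paper combines such estimates across a few 3D orientations, and the fish-eye distortion must already be corrected. The segment endpoints are assumed inputs.

```python
import numpy as np

def vanishing_point(segments: np.ndarray) -> np.ndarray:
    """segments: (N, 4) array of x1, y1, x2, y2 endpoints. Returns the point
    minimizing the summed squared distance to all the (infinite) lines."""
    A, b = [], []
    for x1, y1, x2, y2 in segments:
        # Line through the two endpoints: n . p = c with unit normal n.
        n = np.array([y2 - y1, x1 - x2], dtype=float)
        n /= np.linalg.norm(n)
        A.append(n)
        b.append(n @ np.array([x1, y1], dtype=float))
    return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
```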

5.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction of a mobile robot, as well as of any independent motion present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera-robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects in the scene. In this paper, we introduce the analysis of the intrinsic features of omnidirectional motion fields, in combination with gyroscopic information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.
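A minimal sketch of locating the singular point of a purely translational flow field (the focus of expansion) in least squares, assuming flow vectors (u, v) sampled at pixel positions (x, y): each flow vector should point away from the singular point, so its cross product with the offset vector vanishes. Real omnidirectional flow additionally needs the mirror geometry the paper works with.

```python
import numpy as np

def focus_of_expansion(xs, ys, us, vs):
    """Solve v*(x - px) - u*(y - py) = 0 for (px, py) in least squares."""
    xs, ys, us, vs = (np.asarray(a, dtype=float) for a in (xs, ys, us, vs))
    A = np.column_stack([vs, -us])
    b = vs * xs - us * ys
    px, py = np.linalg.lstsq(A, b, rcond=None)[0]
    return px, py
```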

6.
7.
In this paper we describe an algorithm to simultaneously recover the scene structure, the trajectories of the moving objects, and the camera motion from a monocular image sequence. The number of moving objects is automatically detected without prior motion segmentation. Assuming that the objects are moving linearly with constant speeds, we propose a unified geometrical representation of the static scene and the moving objects. This representation enables the embedding of the motion constraints into the scene structure, which leads to a factorization-based algorithm. We also discuss solutions to the degenerate cases, which can be automatically detected by the algorithm. An extension of the algorithm to weak perspective projections is presented as well. Experimental results on synthetic and real images show that the algorithm is reliable under noise.
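A minimal sketch of the factorization core: stack centered 2D feature tracks into a measurement matrix and split it by SVD into motion and (affine) shape factors. The paper's algorithm embeds the moving objects' constant-velocity constraints into this representation; the classical rank-3 static-scene version below is shown only for orientation.

```python
import numpy as np

def factorize(W: np.ndarray):
    """W: (2F, P) matrix of P points tracked over F frames (x rows, y rows).
    Returns an affine motion matrix M and shape matrix S with W ~ M @ S."""
    W = W - W.mean(axis=1, keepdims=True)        # center each row
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                # (2F, 3) camera/motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]         # (3, P) shape factor
    return M, S                                  # defined up to an affine ambiguity
```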

8.
We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth-fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. For each grid cell, the VSF stores the score of the corresponding view, which measures how much it reduces the uncertainty (entropy) of both the geometric reconstruction and the semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.
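A minimal sketch of the entropy part of such a viewing score: a candidate view is scored by the total Shannon entropy of the per-voxel semantic label distributions it would observe, so views over uncertain regions rank first. The per-view probability arrays are assumed inputs; the paper's score also covers geometric uncertainty and is laid out as a field over (location, azimuth).

```python
import numpy as np

def view_score(label_probs: np.ndarray) -> float:
    """label_probs: (V, C) class probabilities for the V voxels a view sees."""
    p = np.clip(label_probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)   # per-voxel label uncertainty
    return float(entropy.sum())

def next_best_view(candidates: dict[int, np.ndarray]) -> int:
    """Pick the candidate view covering the most uncertain voxels."""
    return max(candidates, key=lambda v: view_score(candidates[v]))
```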

9.

In visual servoing tasks, maintaining the observability of feature points on objects, which are usually used to compute the pose between objects and robots, is an important problem. The problem is especially serious when the robot's vision has a limited field of view (FOV) and the points on the objects are widely separated. In this paper, based on FOV constraint-region analysis and path planning, we propose a novel method that allows a mobile robot equipped with a pan-tilt camera to keep all points on the objects in its view. According to the horizontal and vertical angular apertures of the camera's FOV, bounding boxes used to calculate the FOV-constrained regions are first acquired. The region from within which the robot cannot keep all points in its view is then obtained. Finally, the mobile robot plans a shortest path from its current position to the destination that avoids the FOV-constrained region. Simulation and experimental results show that our method enables a mobile robot to keep all feature points in its view while moving.
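A minimal sketch of the feasibility test underlying those constraint regions: from a candidate robot position, check whether all feature points fit within the camera's horizontal angular aperture when the pan-tilt is aimed near their centroid. The paper builds whole constraint regions from bounding boxes and plans around them; this tests only one position, and aiming at the centroid is a simplifying assumption.

```python
import numpy as np

def all_points_visible(robot_xy, points_xy, h_fov_rad: float) -> bool:
    """True if every point's bearing fits inside the horizontal FOV."""
    d = np.asarray(points_xy, dtype=float) - np.asarray(robot_xy, dtype=float)
    angles = np.arctan2(d[:, 1], d[:, 0])
    center = np.arctan2(d[:, 1].mean(), d[:, 0].mean())   # aim at centroid
    spread = np.angle(np.exp(1j * (angles - center)))     # wrap to [-pi, pi]
    return float(np.ptp(spread)) <= h_fov_rad
```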

10.
Simulation of cameras in robot applications
A method for simulating a camera for use in robot applications is described. The approach shows the physical modelling, based on a geometric model, of the camera, the light sources and the objects within the scene. Many parameters of the physical model can be adjusted to obtain simulation results close to reality. An example shows the simulation of a CCD (charge-coupled device) camera and a scene with workpieces taken from the 'European benchmark'.
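A minimal sketch of the geometric core any such simulator needs: projecting 3D points, given in camera coordinates, through a pinhole model with focal length and principal point as adjustable parameters. The paper's physical model goes much further (light sources, sensor behaviour); the parameter names here are assumptions.

```python
import numpy as np

def project(points_cam: np.ndarray, f: float, cx: float, cy: float) -> np.ndarray:
    """points_cam: (N, 3) points in camera coordinates with Z > 0.
    Returns (N, 2) pixel coordinates under a pinhole model."""
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide
```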

11.
Video understanding has attracted significant research attention in recent years, motivated by interest in video surveillance, rich media retrieval and vision-based gesture interfaces. Typical methods focus on analyzing both the appearance and motion of objects in video. However, the apparent motion induced by a moving camera can dominate the observed motion, requiring sophisticated methods for compensating for camera motion without a priori knowledge of scene characteristics. This paper introduces two new methods for global motion compensation that are both significantly faster and more accurate than state-of-the-art approaches. The first employs RANSAC to robustly estimate global scene motion even when the scene contains significant object motion. Unlike typical RANSAC-based motion estimation work, we apply RANSAC not to the motion of tracked features but rather to a number of segments of image projections. The key insight of the second method involves reliably classifying salient points into foreground and background, based upon the entropy of a motion-inconsistency measure. Extensive experiments on established datasets demonstrate that the second approach is able to remove camera-based observed motion almost completely while still preserving foreground motion.
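A minimal sketch of RANSAC-based global motion compensation between two frames, using tracked corners and a similarity model. Note the paper's first method applies RANSAC to segments of image projections rather than to tracked features; this classical feature-based variant is shown only for orientation.

```python
import cv2
import numpy as np

def compensate(prev_gray: np.ndarray, cur_gray: np.ndarray) -> np.ndarray:
    """Warp prev_gray into cur_gray's coordinates using a robust global fit."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    # Robustly fit one global similarity transform; moving objects become outliers.
    M, _ = cv2.estimateAffinePartial2D(pts[ok], nxt[ok], method=cv2.RANSAC)
    h, w = cur_gray.shape
    return cv2.warpAffine(prev_gray, M, (w, h))
```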

12.
This paper deals with the depth observability problem of a hand-eye robot system. In contrast to earlier works, this paper presents a complete study of this observability problem. The velocity of the active camera in the hand-eye robot system is considered as the input, and the observability of depth estimation is then related to the velocity of the camera. A necessary and sufficient condition on the types of camera velocities needed to ensure observability is found. This complements earlier works, in which the camera velocity was estimated. The theory is verified by both simulations and experiments. Furthermore, a modified LQ visual servo control law is proposed that varies the weighting matrices so that depth estimation is improved while the level of control performance is retained.

13.
Pose estimation for planar structures
We address the registration problem for interactive AR applications. Such applications require a real-time registration process. Although the registration problem has received a lot of attention in the computer vision community, it is far from solved. Ideally, an AR system should work in all environments without the need to prepare the scene ahead of time, and users should be able to walk anywhere they want. In the past, several AR systems have achieved accurate and fast tracking and registration by putting dots over objects and tracking the dots with a camera. Registration can also be achieved by identifying features in the scene whose real-world coordinates have been carefully measured. However, such methods restrict the system's flexibility. Hence, we need to investigate registration methods that work in unprepared environments and reduce the need to know the geometry of the objects in the scene. We propose an efficient solution to real-time camera tracking for scenes that contain planar structures. Many types of scene can be handled by our method. We show that our system is reliable and can be used for real-time applications. We also present results demonstrating real-time camera tracking on indoor and outdoor scenes.
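A minimal sketch of camera pose from a planar structure: estimate the homography between a reference view of the plane and the current frame, then decompose it into candidate rotations and translations given the camera intrinsics K. A real-time tracker like the paper's would add temporal consistency on top; the matched points are assumed inputs.

```python
import cv2
import numpy as np

def pose_from_plane(ref_pts, cur_pts, K):
    """ref_pts/cur_pts: (N, 2) matched points on the plane; K: 3x3 intrinsics."""
    H, mask = cv2.findHomography(np.float32(ref_pts), np.float32(cur_pts),
                                 cv2.RANSAC, 3.0)
    n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Up to four candidate poses; disambiguate with cheirality / plane normal.
    return Rs, ts, normals
```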

14.
A robust topological navigation strategy for an omnidirectional mobile robot using an omnidirectional camera is described. The navigation system is composed of an on-line and an off-line stage. During the off-line learning stage, the robot traverses paths based on a motion model of its omnidirectional motion structure and records a set of ordered key images from the omnidirectional camera. From this sequence a topological map is built using a probabilistic technique and a loop-closure detection algorithm, which can deal with the perceptual aliasing problem in the mapping process. Each topological node provides a set of omnidirectional images characterized by geometrical affine- and scale-invariant keypoints computed with a GPU implementation. Given a topological node as a target, the robot's navigation mission is a concatenation of topological node subsets. In the on-line navigation stage, the robot hierarchically localizes itself to the most likely node through a robust probability-distribution global localization algorithm, and estimates its relative pose within the topological node with an effective solution to the classical five-point relative pose estimation algorithm. The robot is then controlled by a vision-based control law adapted to omnidirectional cameras to follow the visual path. Experimental results carried out with a real robot in an indoor environment show the performance of the proposed method.
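A minimal sketch of the relative-pose step: recover the rotation and translation direction between the current view and a node's key image with the five-point algorithm, as implemented in OpenCV. Matched keypoints and intrinsics K are assumed inputs, and omnidirectional images would need unwarping to a perspective model first.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """pts1/pts2: (N, 2) matched keypoints; K: 3x3 camera matrix."""
    E, mask = cv2.findEssentialMat(np.float32(pts1), np.float32(pts2), K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, np.float32(pts1), np.float32(pts2), K, mask=mask)
    return R, t   # translation is known only up to scale
```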

15.
Mobile robotics has achieved notable progress; however, to increase the complexity of the tasks that mobile robots can perform in natural environments, we need to provide them with a greater semantic understanding of their surroundings. In particular, identifying indoor scenes, such as an Office or a Kitchen, is a highly valuable perceptual ability for an indoor mobile robot, and in this paper we propose a new technique to achieve this goal. As a distinguishing feature, we use common objects, such as Doors or furniture, as a key intermediate representation to recognize indoor scenes. We frame our method as a generative probabilistic hierarchical model, where we use object-category classifiers to associate low-level visual features to objects, and contextual relations to associate objects to scenes. The inherent semantic interpretation of common objects allows us to use rich sources of online data to populate the probabilistic terms of our model. In contrast to alternative computer-vision-based methods, we boost performance by exploiting the embedded and dynamic nature of a mobile robot. In particular, we increase detection accuracy and efficiency by using a 3D range sensor that allows us to implement a focus-of-attention mechanism based on geometric and structural information. Furthermore, we use concepts from information theory to propose an adaptive scheme that limits the computational load by selectively guiding the search for informative objects. The operation of this scheme is facilitated by the dynamic nature of a mobile robot, which is constantly changing its field of view. We test our approach using real data captured by a mobile robot navigating in Office and home environments. Our results indicate that the proposed approach outperforms several state-of-the-art techniques for scene recognition.
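A minimal sketch of the objects-to-scenes inference, simplified to naive Bayes: given the set of detected object categories, score each scene by its prior times the likelihood of observing those objects there. The paper's hierarchical model is richer; the probability tables below are made-up numbers for illustration.

```python
import math

PRIORS = {"office": 0.5, "kitchen": 0.5}
P_OBJ = {  # P(object visible | scene); illustrative assumptions
    "office":  {"monitor": 0.8, "fridge": 0.05, "desk": 0.7},
    "kitchen": {"monitor": 0.1, "fridge": 0.80, "desk": 0.2},
}

def classify(detected: set[str]) -> str:
    """Log-space naive Bayes over the fixed object vocabulary above."""
    def score(scene: str) -> float:
        s = math.log(PRIORS[scene])
        for obj, p in P_OBJ[scene].items():
            s += math.log(p if obj in detected else 1.0 - p)
        return s
    return max(PRIORS, key=score)

print(classify({"monitor", "desk"}))  # office
```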

16.
Detecting and tracking moving objects within a scene is an essential step for high-level machine vision applications such as video content analysis. In this paper, we propose a fast and accurate method for tracking an object of interest in a dynamic environment (active camera model). First, we manually select the region of the object of interest and extract three statistical features, namely the mean, the variance and the range of intensity values of the feature points lying inside the selected region. Then, using the motion information of the background's feature points and the k-means clustering algorithm, we calculate the camera-motion transformation matrix. Based on this matrix, the previous frame is transformed into the current frame's coordinate system to compensate for the impact of camera motion. Afterwards, we detect the regions of moving objects within the scene using our frame-difference algorithm. Subsequently, utilizing the DBSCAN clustering algorithm, we cluster the feature points of the extracted regions in order to find the distinct moving objects. Finally, we use the same statistical features (the mean, the variance and the range of intensity values) as a template to identify and track the moving object of interest among the detected moving objects. Our approach is simple and straightforward, yet robust, accurate and time-efficient. Experimental results on various videos show acceptable performance of our tracking method compared to more complex competitors.
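A minimal sketch of two stages of such a pipeline: compensating camera motion with one global translation (standing in for the paper's k-means step, here replaced by a robust median fit) and clustering the residually moving feature points with DBSCAN to separate distinct objects. Feature tracks and the pixel threshold are assumed inputs.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def moving_objects(prev_pts, cur_pts, eps: float = 20.0, min_samples: int = 5):
    """prev_pts/cur_pts: (N, 2) matched feature points across two frames."""
    prev_pts, cur_pts = np.asarray(prev_pts), np.asarray(cur_pts)
    # Dominant (camera) motion approximated as the median displacement.
    camera_motion = np.median(cur_pts - prev_pts, axis=0)
    residual = cur_pts - prev_pts - camera_motion      # object-induced motion
    moving = np.linalg.norm(residual, axis=1) > 2.0    # threshold in pixels
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(cur_pts[moving])
    return cur_pts[moving], labels                     # label -1 marks noise
```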

17.
This paper deals with a motion control system for a space robot with a manipulator. Many motion controllers require the positions of the robot body and the manipulator hand with respect to an inertial coordinate system. In order to measure them, a visual sensor using a camera is frequently used. However, there are two difficulties in measuring them by means of a camera. The first one is that a camera is mounted on the robot body, and hence it is difficult to directly measure the position of the robot body by means of it. The second one is that the sampling period of a vision system with a general-purpose camera is much longer than that of a general servo system. In this paper, we develop an adaptive state observer that overcomes the two difficulties. In order to investigate its performance, we design a motion control system that is constructed by combining the observer with a PD control input, and then conduct numerical simulations for the control system. Simulation results demonstrate the effectiveness of the proposed observer.

18.
Motion segmentation in moving-camera videos is a very challenging task because of the motion dependence between the camera and the moving objects. Camera-motion compensation is recognized as an effective approach. However, existing work depends on prior knowledge of the camera motion and scene structure for model selection, which is not always available in practice. Moreover, the image-plane motion suffers from depth variations, which leads to depth-dependent motion segmentation in 3D scenes. To solve these problems, this paper develops a prior-free dependent-motion segmentation algorithm by introducing a modified Helmholtz-Hodge decomposition (HHD) based object-motion-oriented map (OOM). By decomposing the image motion (optical flow) into a curl-free and a divergence-free component, all kinds of camera-induced image motion can be represented by these two components in an invariant way. HHD identifies the camera-induced image motion as one segment irrespective of depth variations with the help of the OOM. To segment object motions from the scene, we deploy a novel spatio-temporally constrained quadtree labeling. Extensive experimental results on benchmarks demonstrate that our method improves on the performance of the state of the art by 10%–20%, even on challenging scenes with complex backgrounds.
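A minimal sketch of a Helmholtz-Hodge decomposition of an optical-flow field (u, v), assuming a periodic grid so the FFT applies: the curl-free part is the projection of the field onto the wave-vector direction in frequency space, and the divergence-free part is the remainder. Real image boundaries need more care than this periodic assumption allows, and the paper's modified HHD and OOM are not reproduced here.

```python
import numpy as np

def hhd(u: np.ndarray, v: np.ndarray):
    """Split flow (u, v) into curl-free and divergence-free components."""
    h, w = u.shape
    ky = np.fft.fftfreq(h)[:, None]
    kx = np.fft.fftfreq(w)[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                     # avoid division by zero at the DC term
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    kdotF = kx * U + ky * V            # proportional to the field's divergence
    Uc, Vc = kx * kdotF / k2, ky * kdotF / k2   # projection onto k: curl-free part
    curl_free = (np.real(np.fft.ifft2(Uc)), np.real(np.fft.ifft2(Vc)))
    div_free = (u - curl_free[0], v - curl_free[1])
    return curl_free, div_free
```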

19.
Dong Xiaoming, Ai Liefu, Jiang Rong. Multimedia Tools and Applications, 2019, 78(21): 29747-29763

Robot motion estimation is fundamental in most robot applications, such as robot navigation, which is an indispensable part of the future internet of things. Indoor robot motion estimation is difficult to resolve because GPS (Global Positioning System) is unavailable indoors. Vision sensors can provide a much larger amount of image-sequence information than other traditional sensors, but they are sensitive to changes in lighting. In order to improve the robustness of indoor robot motion estimation, an enhanced particle-filter framework is constructed: first, motion estimation is performed based on distinctive indoor feature points; second, a particle-filter method is used, with least-squares curve fitting inserted into the particle resampling process to solve the problem of particle depletion. Various experiments on real robots show that the proposed method greatly reduces estimation errors and provides an effective solution for indoor robot localization and motion estimation.
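A minimal sketch of the particle-filter backbone such a framework builds on: propagate pose particles with the motion model, weight them by a measurement likelihood, then resample. The paper's contribution (least-squares curve fitting inside resampling to fight particle depletion) is only flagged in a comment, not reproduced; the noise level and the `likelihood` callback are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, control, measurement, likelihood):
    """One predict-update-resample cycle; particles: (N, D), weights: (N,)."""
    # 1. Predict: apply the odometry `control` plus process noise.
    particles = particles + control + rng.normal(0.0, 0.05, particles.shape)
    # 2. Update: reweight by the measurement likelihood of each particle.
    weights = weights * likelihood(particles, measurement)
    weights /= weights.sum()
    # 3. Systematic resampling; the paper additionally fits a least-squares
    #    curve here to regenerate depleted particles.
    n = len(weights)
    idx = np.searchsorted(np.cumsum(weights), (rng.random() + np.arange(n)) / n)
    return particles[idx], np.full(n, 1.0 / n)
```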

20.
In this paper, we describe a reconstruction method for multiple-motion scenes, i.e., scenes containing multiple moving objects, from uncalibrated views. Assuming that the objects are moving with constant velocities, the method recovers the scene structure, the trajectories of the moving objects, the camera motion, and the camera intrinsic parameters (except skew) simultaneously. We focus on the case where the cameras have unknown and varying focal lengths while the other intrinsic parameters are known. The number of moving objects is automatically detected without prior motion segmentation. The method is based on a unified geometrical representation of the static scene and the moving objects. It first performs a projective reconstruction using a bilinear factorization algorithm and then converts the projective solution to a Euclidean one by enforcing metric constraints. Experimental results on synthetic and real images are presented.
