Similar Articles
 20 similar articles were retrieved (search time: 31 ms).
1.
《Advanced Robotics》2013,27(10):1097-1113
This paper proposes a real-time, robust and efficient three-dimensional (3D) model-based tracking algorithm. A virtual visual servoing approach is used for monocular 3D tracking; this method is closely related to classical non-linear pose computation techniques. A concise method for deriving efficient distance-to-contour interaction matrices is described. An oriented edge detector provides real-time tracking of points along the normals to the object contours. Robustness is obtained by integrating an M-estimator into the virtual visual control law via an iteratively re-weighted least-squares implementation. The method has been validated in several visual servoing experiments with various objects. Results show the method to be robust to occlusion, changes in illumination and mis-tracking.
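
As an illustration of the iteratively re-weighted least-squares idea above, here is a minimal, hedged Python sketch of one robust weighting and update step (the Tukey biweight function and the generic 6-column interaction matrix are assumptions of this sketch, not details taken from the paper):

    import numpy as np

    def tukey_weights(r, c=4.6851):
        """Tukey biweight: residuals larger than c (in robust-scale units) get zero weight."""
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # MAD scale estimate
        u = r / (c * scale)
        w = (1.0 - u ** 2) ** 2
        w[np.abs(u) >= 1.0] = 0.0                                      # reject outliers
        return w

    def irls_step(J, r):
        """One weighted Gauss-Newton / IRLS update of the virtual camera velocity.
        J: (n, 6) stacked interaction matrix, r: (n,) distance-to-contour residuals."""
        sw = np.sqrt(tukey_weights(r))                                  # sqrt weights for least squares
        dx, *_ = np.linalg.lstsq(sw[:, None] * J, -sw * r, rcond=None)
        return dx                                                       # 6-vector pose/velocity update

In a virtual visual servoing loop this update would be applied repeatedly, re-computing the weights at each iteration so that occluded or mis-tracked edge points are progressively down-weighted.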

2.
《Advanced Robotics》2013,27(10):1041-1056
When considering real-world applications of robot control with visual servoing, both three-dimensional (3-D) information and a high feedback rate are required. We have developed a 3-D target-tracking system with a 1-ms feedback cycle using two high-speed vision systems called Column Parallel Vision (CPV) systems. To obtain 3-D information such as the position, orientation and shape parameters of the target object, a feature-based algorithm using moment features extracted by the vision systems has been introduced for a spheroidal object model. We also propose a new 3-D self-windowing method, an extension of the conventional 2-D self-windowing method, to extract the target in 3-D space using epipolar geometry.
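
The moment-based feature extraction can be illustrated with a small, hedged numpy sketch (a generic binary-mask computation, not the CPV hardware implementation) that recovers the centroid and principal-axis orientation of the target from its image moments:

    import numpy as np

    def moment_features(mask):
        """Centroid and principal-axis orientation of a binary target mask."""
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        cx, cy = xs.mean(), ys.mean()                        # m10/m00, m01/m00
        dx, dy = xs - cx, ys - cy
        mu20, mu02, mu11 = (dx * dx).sum(), (dy * dy).sum(), (dx * dy).sum()
        theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)    # orientation from central moments
        return cx, cy, theta

    # Example: a tilted elliptical blob standing in for the spheroid's image
    yy, xx = np.mgrid[0:64, 0:64]
    mask = ((xx - 32) + (yy - 32)) ** 2 / 900 + ((xx - 32) - (yy - 32)) ** 2 / 100 < 1
    print(moment_features(mask))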

3.
《Advanced Robotics》2013,27(3):205-220
In this paper, we describe a visual servoing system developed as a human-robot interface to drive a mobile robot toward any chosen target. An omni-directional camera is used to obtain a 360° field of view, and an efficient tracking technique is developed to track the target. The omni-directional geometry eliminates many of the problems common in visual tracking and makes visual servoing a practical alternative for human-robot interaction. The experiments demonstrate that it is an effective and robust way to guide a robot; in particular, they show the tracker's robustness to loss of template, vehicle motion, and changes in scale and orientation.

4.
《Advanced Robotics》2013,27(3):283-304
This paper presents a new three-dimensional (3-D) biomicromanipulation system for biological objects such as embryos, cells or oocytes. Because the cell is very small, kept in liquid and observed through a microscope, 2-D visual feedback makes accurate manipulation in the 3-D world difficult. To improve the manipulation work, we propose an intelligent human–machine interface. The 3-D visual information is provided to the operator through a 3-D reconstruction method based on vision-based tracking of the cell deformations. In order to perform stable microinjection tasks, the operator needs force feedback and haptic assistance during penetration of the cell envelope, the chorion. Realistic haptic rendering techniques have therefore been implemented to validate stable insertion of a micropipette into a living cell. The proposed human–machine interface provides real-time visual and haptic control strategies: constrained motion in image coordinates, virtual haptic rendering to constrain the insertion and removal path in the 3-D scene, and avoidance of cell destruction by adequately controlling position, velocity and force parameters. Experiments showed that the virtualized-reality interface acts as a tool for full guidance and assistance during microinjection tasks.

5.
《Advanced Robotics》2013,27(12):1401-1423
The area-based matching approach has been used extensively in many dynamic visual tracking systems to detect moving targets because it is computationally efficient and does not require an object model. Unfortunately, area-based matching is sensitive to occlusion and illumination variation. To improve the robustness of visual tracking, two image cues, the target template and the target contour, are used in the proposed visual tracking algorithm. In particular, the target contour is represented by the active contour model, used in combination with the fast greedy algorithm. However, the conventional active contour method requires the initial contour to be provided manually. To facilitate contour matching, a new approach combining adaptive background subtraction with border tracing was developed to generate the initial contour automatically. In addition, a g–h filter is added to the visual loop to deal with the latency of visual feedback, so that the performance of dynamic visual tracking is improved. Experimental results demonstrate the effectiveness of the proposed approach.
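
The g–h filter mentioned above can be sketched in a few lines; the gains and frame time below are assumptions for illustration, not the authors' tuning:

    import numpy as np

    def gh_track(measurements, dt=0.033, g=0.4, h=0.1):
        """Classic g-h (alpha-beta) filter: predict the target position one frame
        ahead and correct it with the late-arriving measurement."""
        x, v = measurements[0], 0.0            # position / velocity estimates
        predictions = []
        for z in measurements[1:]:
            x_pred = x + dt * v                # prediction step (bridges the latency)
            r = z - x_pred                     # innovation
            x = x_pred + g * r                 # position correction
            v = v + h * r / dt                 # velocity correction
            predictions.append(x_pred)
        return np.array(predictions)

    # Example: target moving at 50 px/s observed with noisy measurements
    t = np.arange(0.0, 2.0, 0.033)
    z = 50.0 * t + np.random.normal(0.0, 2.0, t.size)
    print(gh_track(z)[-5:])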

6.
《Advanced Robotics》2013,27(11):1203-1218
A new visual servoing technique based on two-dimensional (2-D) ultrasound (US) images is proposed to control the motion of a US probe held by a medical robot. In contrast to a standard camera, which provides a projection of the three-dimensional (3-D) scene onto a 2-D image, US information lies strictly in the observation plane of the probe, so visual servoing techniques have to be adapted. In this paper, the coupling between the US probe and a motionless crossed-string phantom used for probe calibration is modeled. A robotic task is then developed that consists of positioning the US image on the intersection point of the crossed-string phantom while moving the probe to different orientations. The goal of this task is to optimize the spatial calibration procedure of 3-D US systems.
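
Locating the intersection point of the crossed-string phantom in the 2-D US image reduces to a line–line intersection. A hedged sketch, assuming the two strings have already been segmented as pairs of endpoint coordinates:

    import numpy as np

    def line_intersection(p1, p2, p3, p4):
        """Intersection of the infinite lines through (p1, p2) and (p3, p4),
        computed with homogeneous coordinates (cross products)."""
        h = lambda p: np.array([p[0], p[1], 1.0])
        l1 = np.cross(h(p1), h(p2))            # line supporting the first string
        l2 = np.cross(h(p3), h(p4))            # line supporting the second string
        x = np.cross(l1, l2)                   # homogeneous intersection point
        if abs(x[2]) < 1e-9:
            return None                        # (nearly) parallel lines
        return x[:2] / x[2]

    print(line_intersection((0, 0), (10, 10), (0, 10), (10, 0)))   # -> [5. 5.]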

7.
《Advanced Robotics》2013,27(10):1057-1072
It is an easy task for the human visual system to gaze continuously at an object moving in three-dimensional (3-D) space. While tracking the object, human vision also seems able to comprehend its 3-D shape with binocular vision. We conjecture that, in the human visual system, this ability to comprehend 3-D shape is essential for robust tracking of a moving object. To examine this conjecture, we constructed an experimental binocular vision system for motion tracking. The system is composed of a pair of active pan-tilt cameras and a robot arm; the cameras simulate the two human eyes, while the robot arm simulates the motion of the body below the neck. The two active cameras are controlled so as to fixate on a particular point on the object surface. The shape of the object surface around that point is reconstructed in real time from the two camera images based on differences in image brightness. If the two cameras successfully gaze at a single point on the object surface, the local object shape can be reconstructed in real time; at the same time, the reconstructed shape is used to keep the fixation point on the object surface, which enables robust tracking. These two processes, 3-D shape reconstruction and fixation-point maintenance, are thus mutually connected and form a single closed loop. We demonstrate the effectiveness of this framework for visual tracking through several experiments.
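
For intuition only, the depth of a fixation point in the simplest rectified-stereo approximation follows from the disparity as Z = f·B/d; the numbers below are assumed values for illustration and do not come from the paper's verging-camera setup or its brightness-based reconstruction:

    # Depth from disparity for a rectified stereo pair: Z = f * B / d
    f = 800.0      # focal length in pixels (assumed)
    B = 0.12       # baseline between the two cameras in metres (assumed)
    d = 24.0       # horizontal disparity of the fixation point in pixels (assumed)
    Z = f * B / d
    print(f"fixation-point depth: {Z:.3f} m")   # -> 4.000 m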

8.
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. To address these issues, a real-time, robust and efficient 3D model-based tracking algorithm is proposed for a "video see-through" monocular vision system. Tracking objects in the scene amounts to computing the pose between the camera and the objects; virtual objects can then be projected into the scene using this pose. In this paper, non-linear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives, including straight lines, circles, cylinders and spheres. A local moving-edges tracker is used to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively re-weighted least-squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method has been validated on several complex image sequences, including outdoor environments. Results show the method to be robust to occlusion, changes in illumination and mis-tracking.
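
For comparison with the point-to-curve matrices derived in the paper, the classical interaction matrix of a normalized point feature and the resulting control law can be sketched as follows (a textbook point-feature formulation, not the paper's contour-based one):

    import numpy as np

    def point_interaction_matrix(x, y, Z):
        """Classical 2x6 interaction matrix of a normalized image point (x, y) at
        depth Z, mapping the camera velocity screw to the feature velocity."""
        return np.array([
            [-1.0 / Z, 0.0,       x / Z, x * y,      -(1 + x * x),  y],
            [0.0,      -1.0 / Z,  y / Z, 1 + y * y,  -x * y,       -x],
        ])

    def servo_velocity(features, desired, depths, lam=0.5):
        """Virtual-visual-servoing style update: v = -lambda * pinv(L) * e."""
        L = np.vstack([point_interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        e = (np.asarray(features) - np.asarray(desired)).ravel()
        return -lam * np.linalg.pinv(L) @ e

    v = servo_velocity([(0.1, 0.0), (-0.1, 0.05), (0.0, -0.1), (0.2, 0.2)],
                       [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0), (0.1, 0.1)],
                       depths=[1.0, 1.0, 1.2, 0.8])
    print(v)   # camera velocity screw (vx, vy, vz, wx, wy, wz)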

9.
In this work, several robust vision modules are developed and implemented for fully automated micromanipulation: autofocusing, object and end-effector detection, real-time tracking and optical-system calibration. An image-based visual servoing architecture and a path planning algorithm are also proposed based on the developed vision modules. Experimental results are provided to assess the performance of the proposed visual servoing approach in positioning and trajectory-tracking tasks. The proposed path planning algorithm, in conjunction with visual servoing, enables successful execution of micromanipulation tasks.
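
As an example of what an autofocusing module can look like, here is a hedged sketch using the variance-of-Laplacian sharpness score (the choice of focus measure and the capture_at(z) callback are assumptions of this sketch, not details from the paper):

    import numpy as np

    def focus_measure(img):
        """Variance of the discrete Laplacian: larger means sharper."""
        img = img.astype(float)
        lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
               + img[1:-1, :-2] + img[1:-1, 2:])
        return lap.var()

    def autofocus(capture_at, z_positions):
        """Sweep the focus axis and return the position giving the sharpest image.
        capture_at(z) is a hypothetical callback that grabs an image at focus z."""
        scores = [focus_measure(capture_at(z)) for z in z_positions]
        return z_positions[int(np.argmax(scores))]

    # Sanity check: a smoothed image scores lower than the original
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3.0
    print(focus_measure(sharp) > focus_measure(blurred))   # -> True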

10.
A robust visual servoing predictive control strategy is proposed for the visual servoing system of a mobile robot with uncertain payload, subject to visibility and actuator constraints. First, the mobile robot visual servoing system is modeled as an uncertain system in terms of the visual servoing error and the actuation. Second, for the constrained visual-servoing-error subsystem, a velocity-planning predictive control algorithm based on semidefinite programming is designed. The algorithm is divided into an offline computation part and an online scheduling part, which reduces the online computational load of the predictive control algorithm. As for the ...

11.
This article describes real-time gaze control using position-based visual servoing. The main control objective is to make the gaze point track the target so that the target's image feature is located at the center of each image. The overall system consists of two parts: the vision process and the control system. The vision system extracts a predefined color feature from the images. An adaptive look-up table method is proposed to obtain the 3-D position of the feature within the video frame rate under varying illumination. An uncalibrated camera raises the problem that the reconstructed 3-D positions are not accurate; to solve this calibration problem in the position-based approach, we constructed an end-point closed-loop system using an active head-eye system. In the proposed control system, the reconstructed position error is used together with a Jacobian matrix of the kinematic relation. System stability is guaranteed locally, as in image-based visual servoing, and the gaze position is shown to converge to the feature position. The proposed approach was successfully applied to a tracking task with a moving target in simulations and real experiments, and the processing speed satisfies the real-time requirement. This work was presented in part at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, January 15–17, 2001.
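
The position-based control idea can be reduced to a resolved-rate law that maps the reconstructed 3-D position error through a kinematic Jacobian; the 3x2 pan-tilt Jacobian below is an assumed stand-in, not the article's head-eye kinematics:

    import numpy as np

    def gaze_rate(J, p_feature, p_gaze, lam=1.0):
        """Resolved-rate gaze control: joint velocities that drive the gaze point
        toward the reconstructed 3-D feature position.
        J: (3, n) Jacobian of the gaze point w.r.t. the head-eye joints."""
        e = p_feature - p_gaze                  # 3-D position error
        return lam * np.linalg.pinv(J) @ e      # joint-velocity command

    J = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.2, 0.1]])                  # assumed pan-tilt Jacobian
    print(gaze_rate(J, p_feature=np.array([0.3, -0.1, 2.0]),
                    p_gaze=np.array([0.0, 0.0, 2.0])))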

12.
This article deals with the development of learning methods for an intelligent control system for an autonomous mobile robot. On the basis of visual servoing, an approach to learning the skill of tracking colored guidelines is proposed. This approach uses a robust and adaptive image processing method to acquire features of the colored guidelines and convert them into controller inputs. The supervised learning procedure and the neural network controller are discussed, and the methods for obtaining the learning data and training the neural network are described. Experimental results are presented at the end of the article. This work was presented in part at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, Japan, January 15–17, 2001.

13.
This paper investigates the finite-time tracking control problem for multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers interact only locally. First, the camera-objective visual kinematic model is introduced for each mobile robot by utilising the pinhole camera model. Second, a unified tracking-error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of the method.
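
The continuous finite-time laws rest on the fractional-power "sig" function. A hedged, generic sketch of a distributed finite-time tracking term over a directed graph follows (the camera-parameter adaptation and non-holonomic kinematics of the paper are omitted):

    import numpy as np

    def sig(x, alpha):
        """Fractional-power signum: sign(x) * |x|**alpha, element-wise."""
        return np.sign(x) * np.abs(x) ** alpha

    def finite_time_input(i, states, leader, A, b, k=1.0, alpha=0.6):
        """Control input of follower i. A[i, j] > 0 if follower i receives
        information from follower j; b[i] > 0 if follower i observes the leader."""
        e = sum(A[i, j] * (states[i] - states[j]) for j in range(len(states)))
        e = e + b[i] * (states[i] - leader)
        return -k * sig(e, alpha)               # 0 < alpha < 1 gives finite-time convergence

    # Example: a directed chain of 3 followers, only follower 0 observes the leader
    A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
    b = np.array([1.0, 0.0, 0.0])
    states = [np.array([1.0, 0.5]), np.array([-0.5, 0.2]), np.array([0.3, -0.4])]
    print(finite_time_input(0, states, leader=np.zeros(2), A=A, b=b))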

14.
There are two main trends in the development of unmanned aerial vehicle (UAV) technologies: miniaturization and increased intelligence, and realizing object tracking capabilities on a nano-scale UAV is one of the most challenging problems. In this paper, we present a visual object tracking and servoing control system built on a tailor-made 38 g nano-scale quadrotor. A lightweight visual module is integrated to enable object tracking capabilities, and a micro positioning deck is mounted to provide accurate pose estimation. In order to be robust against object appearance variations, a novel object tracking algorithm, denoted RMCTer, is proposed, which integrates a powerful short-term tracking module and an efficient long-term processing module. In particular, the long-term processing module can provide additional object information and modify the short-term tracking model in a timely manner. Furthermore, a position-based visual servoing control method is proposed for the quadrotor, where an adaptive tracking controller is designed by leveraging backstepping and adaptive techniques. Stable and accurate object tracking is achieved even under disturbances. Experimental results are presented to demonstrate the high accuracy and stability of the whole tracking system.

15.
Visual servoing is a control method that manipulates the motion of a robot using visual information, aiming to realize "working while watching." However, visual servoing toward a moving target with hand–eye cameras fixed at the hand is inevitably affected by the dynamical oscillation of the hand. To overcome this defect of the fixed hand–eye camera system, an eye-vergence system has been put forward, in which the pose of the cameras can be rotated to observe the target object. The visual servoing controllers of the hand and the eye-vergence are installed independently, so that the target object can be kept at the center of the camera images through the eye-vergence function. In this research, a genetic algorithm (GA) is used as the pose tracking method; it is called "Real-Time Multi-step GA (RM-GA)" and solves on-line optimization problems for 3D visual servoing. The performance of real-time object tracking using the eye-vergence system and the RM-GA method has been examined, and the pose tracking accuracy has been verified.
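
The flavour of running a few GA operations per video frame can be conveyed by a generic real-coded GA generation over candidate 6-DoF poses; the operators, parameters and the dummy distance-based fitness below are assumptions standing in for the image-matching score used by RM-GA:

    import numpy as np
    rng = np.random.default_rng(0)

    def ga_generation(pop, fitness, sigma=0.02, elite=2):
        """One real-coded GA generation over 6-DoF pose vectors (x, y, z, roll, pitch, yaw):
        elitism, uniform crossover and Gaussian mutation."""
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)[::-1]                      # best candidates first
        new_pop = [pop[i].copy() for i in order[:elite]]      # keep the elite unchanged
        while len(new_pop) < len(pop):
            a, b = pop[rng.choice(order[: len(pop) // 2], 2)] # parents from the better half
            mask = rng.random(6) < 0.5
            child = np.where(mask, a, b) + rng.normal(0.0, sigma, 6)
            new_pop.append(child)
        return np.array(new_pop)

    # Dummy fitness: negative distance to an assumed "true" pose
    true_pose = np.array([0.1, -0.05, 0.8, 0.0, 0.1, 0.2])
    fitness = lambda p: -np.linalg.norm(p - true_pose)

    pop = rng.normal(0.0, 0.3, size=(20, 6))
    for _ in range(30):                                       # a few generations per frame
        pop = ga_generation(pop, fitness)
    print(pop[0])                                             # best candidate, close to true_pose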

16.
Model-based 3-D object tracking has gained significant importance in areas such as augmented reality, surveillance, visual servoing, and robotic object manipulation and grasping. Key obstacles to robust and precise object tracking are outliers caused by occlusion, self-occlusion, cluttered background, reflections and complex appearance properties of the object. Two of the most common remedies are the use of robust estimators and the integration of visual cues. The tracking system presented in this paper achieves robustness by integrating model-based and model-free cues together with robust estimators. As a model-based cue, a wireframe edge model is used; as model-free cues, automatically generated surface texture features are used. The particular contribution of this work is an integration framework that is not limited to polyhedral objects: we also deal with spherical, cylindrical and conical objects, for which the complete pose cannot be estimated from wireframe models alone, and show how a full pose estimate can be obtained through integration with the model-free features. Experimental evaluation demonstrates robust system performance in realistic settings with highly textured objects and natural backgrounds.

17.
This paper describes a new method for performing automatic tasks with a robot in an unstructured environment. A task of replacing a blown light bulb in a streetlamp is described to show that the method works properly. To perform this task correctly, the robot is positioned by tracking previously defined safe paths. The robot, using an eye-in-hand configuration in a visual servoing scheme together with a force sensor, is able to interact with its environment because the path tracking is performed in a time-independent manner. The desired path is expressed in image space; however, the proposed method achieves correct tracking not only in the image but also in 3D space. The method solves problems of previously proposed time-independent tracking systems based on visual servoing: it allows the desired tracking velocity to be specified, exhibits less oscillatory behaviour, and maintains correct tracking in 3D space when high velocities are used. The experiments shown in this paper demonstrate the necessity of time-independent behaviour in tracking and the correct performance of the system.

18.
《Advanced Robotics》2013,27(10):1023-1039
The effects of camera calibration errors on the point-to-point task are investigated for static-eye and hand-eye visual servoing realized with position-based and image-based control laws. For these four configurations, the effect of uncertainty in the intrinsic and extrinsic parameters is analyzed. The results show local stability for all configurations under small calibration errors; however, a steady-state error is found in the hand-eye position-based configuration. Simulations have been carried out to confirm the theoretical results and to evaluate the effects of the uncertainty in terms of the stability region. Another contribution of the paper is a method for estimating a stability region that is robust to uncertainty directions for the static-eye position-based case with uncertainty in the camera centers.
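
The style of this analysis can be reproduced numerically: for an image-based loop, a standard sufficient condition for local stability is that the product of the pseudo-inverse of the estimated interaction matrix with the true one be positive definite. A hedged sketch with four assumed point features and a deliberate depth-calibration error (not one of the paper's four configurations):

    import numpy as np

    def point_L(x, y, Z):
        """Classical 2x6 interaction matrix of a normalized image point."""
        return np.array([[-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
                         [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x]])

    pts = [(0.2, 0.2), (-0.2, 0.2), (-0.2, -0.2), (0.2, -0.2)]
    L_true = np.vstack([point_L(x, y, Z=1.0) for x, y in pts])   # true depths
    L_hat = np.vstack([point_L(x, y, Z=1.6) for x, y in pts])    # 60% depth error

    # Local stability requires the eigenvalues of pinv(L_hat) @ L_true to have
    # positive real parts; a pure depth error yields diag(1.6, 1.6, 1.6, 1, 1, 1).
    M = np.linalg.pinv(L_hat) @ L_true
    print(np.sort(np.linalg.eigvals(M).real))   # all positive -> locally stable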

19.
《Advanced Robotics》2013,27(6):725-745
This research develops a control scheme for visual servoing that explicitly takes into account the delay introduced by image acquisition and processing. For this purpose, a predictor block, i.e., an estimator that predicts several samples ahead in time, is included in the scheme. The proposed approach is analyzed analytically in terms of dynamics and steady-state errors, and compared to previous approaches. Furthermore, several simulations are shown comparatively to illustrate the benefits and limitations of the proposed control scheme. Finally, experimental results using a turntable and a 3-d.o.f. Cartesian robot are provided to validate the analytical and simulation results.
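
The problem the predictor addresses is easy to reproduce: in a discrete visual loop e[k+1] = e[k] - lam * e[k-d], the correction acts on an image that is d samples old, and a gain that converges without delay can become unstable with it. A hedged toy simulation with an assumed gain (not the paper's model):

    import numpy as np

    def servo_error(lam=0.6, d=0, steps=60):
        """Toy visual loop e[k+1] = e[k] - lam * e[k-d]: the correction is based
        on a measurement that is d samples old."""
        e = [1.0] * (d + 1)                    # initial error history
        for _ in range(steps):
            e.append(e[-1] - lam * e[-1 - d])
        return np.array(e)

    print(abs(servo_error(d=0))[-1])   # no delay: the error decays geometrically
    print(abs(servo_error(d=5))[-1])   # 5-sample delay: the same gain now diverges

Feeding the controller a several-samples-ahead prediction of the features, as the paper proposes, aims to make the loop behave as if the delay were close to zero.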

20.
A visual servoing tracking controller is proposed based on sliding mode control theory in order to achieve strong robustness against parameter variations and external disturbances. A sliding surface with time-delay compensation is constructed using pre-estimated states. To reduce the chattering of the sliding mode controller, a modified exponential reaching law and a hyperbolic tangent function are applied in the design of the visual controller and the robot joint controller. Simulation results show that the visual servoing control scheme is robust and has good tracking performance.
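
The chattering-reduction idea can be sketched on a double-integrator error model by replacing the discontinuous sign term of the reaching law with a hyperbolic tangent; the gains and boundary-layer width below are assumed values, not the paper's design:

    import numpy as np

    # Double-integrator tracking-error model: e_ddot = u
    c, k, eps, phi, dt = 2.0, 3.0, 1.0, 0.1, 0.001
    e, e_dot = 1.0, 0.0
    for _ in range(5000):                       # 5 s of simulation
        s = c * e + e_dot                       # sliding variable
        # Reaching law s_dot = -k*s - eps*tanh(s/phi): tanh replaces sign()
        # to suppress chattering near s = 0
        u = -c * e_dot - k * s - eps * np.tanh(s / phi)
        e_dot += u * dt
        e += e_dot * dt
    print(round(e, 4), round(e_dot, 4))         # both end up near zero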
