20 similar documents found; search time: 15 ms
1.
《Advanced Robotics》2013,27(5):523-543
This paper examines the role of grasper compliance and kinematic configuration in environments where object size and location may not be well known. A grasper consisting of a pair of two-link planar fingers with compliant revolute joints was simulated as it passively deflected during contact with a target object. The kinematic configuration and joint stiffness values of the grasper were varied in order to maximize the successful grasp range and minimize contact forces across a wide range of target object sizes. Joint rest angles of around 25–45 degrees produced near-optimal results when the stiffness of the base joint was much smaller than that of the intermediate joint, as confirmed experimentally.
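The passive-deflection model described above can be sketched numerically. The link lengths, rest angles, stiffness values, and torques below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def fingertip_position(theta, link_lengths=(1.0, 1.0)):
    """Forward kinematics of a planar two-link finger."""
    l1, l2 = link_lengths
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def passive_deflection(rest_angles, stiffness, joint_torques):
    """Equilibrium angles of compliant revolute joints: each joint
    deflects from its rest angle by tau / k (linear torsion springs)."""
    return np.array(rest_angles) + np.array(joint_torques) / np.array(stiffness)
```

A soft base joint combined with a stiff intermediate joint, as the abstract suggests, concentrates passive deflection at the base, letting the finger wrap around objects of varying size.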
2.
Marcel Häselich Marc Arends Nicolai Wojke Frank Neuhaus Dietrich Paulus 《Robotics and Autonomous Systems》2013,61(10):1051-1059
Autonomous navigation in unstructured environments is a complex task and an active area of research in mobile robotics. Unlike urban areas with lanes, road signs, and maps, the environment around our robot is unknown and unstructured. Such an environment requires careful examination because it is random and continuous, and the number of perceptions and possible actions is infinite. We describe a terrain classification approach for our autonomous robot based on Markov Random Fields (MRFs) on fused 3D laser and camera image data. Our primary data structure is a 2D grid whose cells carry information extracted from sensor readings. All cells within the grid are classified, and their surfaces are analyzed with regard to negotiability for wheeled robots. Knowledge of our robot’s egomotion allows fusion of previous classification results with current sensor data in order to fill data gaps and regions outside the visibility of the sensors. We estimate egomotion by integrating information from an IMU, GPS measurements, and wheel odometry in an extended Kalman filter. In our experiments we achieve a recall ratio of about 90% for detecting streets and obstacles. We show that our approach is fast enough to be used on autonomous mobile robots in real time.
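The egomotion estimation step can be sketched with a minimal planar filter. Because the model here is linear, the EKF update reduces to the standard Kalman form; the noise magnitudes are illustrative assumptions, not values from the paper:

```python
import numpy as np

class EgomotionEKF:
    """Minimal planar position filter: predict with wheel-odometry
    increments, correct with GPS fixes (measurement model H = I)."""

    def __init__(self, x0, P0=1.0, q=0.05, r=2.0):
        self.x = np.array(x0, dtype=float)   # [x, y] position estimate
        self.P = np.eye(2) * P0              # state covariance
        self.Q = np.eye(2) * q               # process noise (odometry drift)
        self.R = np.eye(2) * r               # measurement noise (GPS)

    def predict(self, odom_delta):
        # Motion model: add the odometry increment; uncertainty grows.
        self.x = self.x + np.asarray(odom_delta, dtype=float)
        self.P = self.P + self.Q

    def update(self, gps_xy):
        y = np.asarray(gps_xy, dtype=float) - self.x  # innovation
        S = self.P + self.R                           # innovation covariance
        K = self.P @ np.linalg.inv(S)                 # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K) @ self.P
```

A full implementation would also carry heading in the state and fuse the IMU, which makes the motion model genuinely nonlinear and requires the EKF's Jacobians.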
3.
This article describes a telerobotics testbed for performing tasks such as assembly and repair of spacecraft in unstructured environments. This fully operational multiarm system can operate in teleoperated or supervisory control modes, as well as mixed shared-control modes, thus enabling operations in totally or partially unstructured environments. Various sources of uncertainty are identified and approaches to minimize their effects are presented. In the teleoperated mode, the system uses two force-reflecting hand controllers to operate two manipulator arms. A third arm is utilized to position four cameras to view the scene. In the supervisory mode, the system can be operated from three different levels: task, process, and servo levels, providing different degrees of autonomy and performance. Various tools are provided so that an operator can perform tasks even when objects are partially occluded or their positions are not known a priori. Facilities are available for building models of previously unknown objects on-line, as well as for generating collision-free motions. Architecturally, the system is designed so that an operator and much of the associated computing environment can be located remotely relative to the robots. As such, ground-controlled remote manipulation is possible.
4.
This paper presents the methodologies and technologies exploited to design and implement the mobile haptic grasper (MHG), an integrated system consisting of a mobile robot and two grounded haptic devices (HDs) fixed on it. The system features two-point contact kinaesthetic interaction while guaranteeing full user locomotion in large virtual environments. The workspace of haptic interaction is indefinitely extended, which is extremely relevant for applications such as virtual grasping, where the global workspace is typically reduced with respect to that of single-point contact devices. Regarding software architecture, we present the Haptik Library, an open-source library developed at the University of Siena that allows uniform access to haptic devices and that was used to implement the MHG software.
Corresponding author: Domenico Prattichizzo.
5.
Kamon I. Flash T. Edelman S. 《IEEE transactions on systems, man, and cybernetics. Part A, Systems and humans : a publication of the IEEE Systems, Man, and Cybernetics Society》1998,28(3):266-276
We present a general scheme for learning sensorimotor tasks, which allows rapid online learning and generalization of the learned knowledge to unfamiliar objects. The scheme consists of two modules, the first generating candidate actions and the second estimating their quality. Both modules work in an alternating fashion until an action that is expected to provide satisfactory performance is generated, at which point the system executes the action. We developed a method for off-line selection of heuristic strategies and quality-predicting features, based on statistical analysis. The usefulness of the scheme was demonstrated in the context of learning visually guided grasping. We consider a system that coordinates a parallel-jaw gripper and a fixed camera. The system learns to estimate grasp quality by learning a function from relevant visual features to the quality. An experimental setup using an AdeptOne manipulator was developed to test the scheme.
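The alternating generate-and-evaluate scheme can be sketched as follows; the candidate parameterization and the stand-in quality function are hypothetical, not the learned estimator from the paper:

```python
import random

def propose_grasp(rng):
    """Heuristic candidate generator: a grasp is (approach angle, offset)."""
    return (rng.uniform(0.0, 3.14159), rng.uniform(-1.0, 1.0))

def estimate_quality(grasp):
    """Stand-in for the learned quality estimator: here it simply
    prefers grasps near the object's centroid (offset close to 0)."""
    _, offset = grasp
    return 1.0 - abs(offset)

def select_action(threshold=0.9, max_tries=1000, seed=0):
    """Alternate between proposing and scoring candidates until one
    is expected to perform satisfactorily, then return it."""
    rng = random.Random(seed)
    best = None
    for _ in range(max_tries):
        g = propose_grasp(rng)
        q = estimate_quality(g)
        if best is None or q > best[1]:
            best = (g, q)
        if q >= threshold:
            return g, q
    return best  # fall back to the best candidate seen
```

In the paper the quality function is learned from visual features, so the same loop improves as more grasps are executed and scored.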
6.
Sjoerd van der Zwaan Alexandre Bernardino José Santos-Victor 《Robotics and Autonomous Systems》2002,39(3-4):145-155
This paper describes the use of vision for navigation of mobile robots floating in 3D space. The problem addressed is that of automatic station keeping relative to some naturally textured environmental region. Owing to motion disturbances in the environment (currents), these tasks are important for keeping the vehicle stabilized relative to an external reference frame. Assuming short-range regions in the environment, vision can be used for local navigation, so that no global positioning methods are required. A planar environmental region is selected as a visual landmark and tracked throughout a monocular video sequence. For a camera moving in 3D space, the observed deformations of the tracked image region follow planar projective transformations and reveal information about the robot's relative position and orientation w.r.t. the landmark. This information is then used in a visual feedback loop to realize station keeping. Both the tracking system and the control design are discussed. Two robotic platforms are used for experimental validation: an indoor aerial blimp and a remotely operated underwater vehicle. Results obtained from these experiments are described.
7.
8.
Reliable obstacle detection and classification in rough and unstructured terrain such as agricultural fields or orchards remains a challenging problem. These environments involve large variations in both geometry and appearance, challenging perception systems that rely on a single sensor modality. Geometrically, tall grass, fallen leaves, or terrain roughness can mistakenly be perceived as nontraversable or might even obscure actual obstacles. Likewise, traversable grass or dirt roads and obstacles such as trees and bushes might be visually ambiguous. In this paper, we combine appearance- and geometry-based detection methods by probabilistically fusing lidar and camera sensing with semantic segmentation using a conditional random field. We apply a state-of-the-art multimodal fusion algorithm from the scene analysis domain and adjust it for obstacle detection in agriculture with moving ground vehicles. This involves explicitly handling sparse point cloud data and exploiting spatial, temporal, and multimodal links between corresponding 2D and 3D regions. The proposed method was evaluated on a diverse data set, comprising a dairy paddock and different orchards, gathered with a perception research robot in Australia. Results showed that for a two-class classification problem (ground and nonground), only the camera benefited from information provided by the other modality, with an increase in the mean classification score of 0.5%. However, as more classes were introduced (ground, sky, vegetation, and object), both modalities complemented each other, with improvements of 1.4% in 2D and 7.9% in 3D. Finally, introducing temporal links between successive frames resulted in improvements of 0.2% in 2D and 1.5% in 3D.
9.
In this paper, we propose a learning algorithm for coordinating a robot system in which the movement of an arm is controlled through a stereo camera system. Instead of calibrating the usually complex nonlinear transformation between the arm and the cameras, the algorithm automatically decomposes the whole transformation into local linear transformations and records the resulting linearization map in the arm controller. The linearization is carried out by a learning process based on a Kohonen-style self-organizing network. To deal with unstructured environments containing obstacles, virtual forces are introduced to handle the high degree of complexity underlying the transformation.
This work was presented, in part, at the International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1996.
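A minimal sketch of a Kohonen-style self-organizing network of the kind the linearization learning builds on (a 1D map on scalar inputs; the unit count, decay schedules, and neighbourhood radius are illustrative assumptions):

```python
import random

def train_som(samples, n_units=10, epochs=30, lr0=0.5, radius0=3, seed=1):
    """Train a 1D Kohonen self-organizing map on scalar inputs.
    For each sample, the best-matching unit (BMU) and its grid
    neighbours move toward the input; learning rate and neighbourhood
    radius decay over epochs so the map first unfolds, then refines."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        radius = max(1, round(radius0 * (1 - epoch / epochs)))
        for x in samples:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                if abs(i - bmu) <= radius:
                    w[i] += lr * (x - w[i])
    return w
```

In the paper's setting the inputs are image-space features and each unit stores a local linear arm-to-camera transformation rather than a scalar, but the winner-plus-neighbours update is the same.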
10.
《Advanced Robotics》2013,27(2):233-254
We explore dynamic perception following the visually guided grasping of several objects by a human-like autonomous robot. This competency serves object categorization. Physical interaction with the hand-held object provides the robot's neural network with rich, coherent, and multi-modal sensory input. Multi-layered self-organizing maps are designed and examined under static and dynamic conditions. Tests in the static condition show the network's capability for robust categorization in the presence of noise; the network also outperforms a single-layered map. In the dynamic condition we focus on shaking behavior, moving only the forearm of the robot. For some combinations of grasping style and shaking radius, the network is capable of robustly categorizing two objects. The results show that the network's ability to achieve the task largely depends on how the objects are grasped and moved. Together with a preliminary simulation, these results are promising for the self-organization of highly autonomous dynamic object categorization.
11.
Junhao Xiao Jianhua Zhang Benjamin Adler Houxiang Zhang Jianwei Zhang 《Robotics and Autonomous Systems》2013,61(12):1641-1652
This paper focuses on three-dimensional (3D) point cloud plane segmentation. Two complementary strategies are proposed for different environments: a subwindow-based region growing (SBRG) algorithm for structured environments, and a hybrid region growing (HRG) algorithm for unstructured environments. The point cloud is first decomposed into subwindows, using the points' neighborhood information from when they are scanned by the laser range finder (LRF). Then, the subwindows are classified as planar or nonplanar based on their shape. Afterwards, only planar subwindows are employed in the former algorithm, whereas both kinds of subwindows are used in the latter. In the growing phase, planar subwindows are investigated directly (in both algorithms), while each point in nonplanar subwindows is investigated separately (only in HRG). During region growing, plane parameters are computed incrementally when a subwindow or a point is added to the growing region. This incremental methodology makes the plane segmentation fast. The algorithms have been evaluated using real-world datasets from both structured and unstructured environments. Furthermore, they have been benchmarked against a state-of-the-art point-based region growing (PBRG) algorithm with regard to segmentation speed. According to the results, SBRG is 4 and 9 times faster than PBRG when the subwindow size is set to 3×3 and 4×4, respectively; HRG is 4 times faster than PBRG when the subwindow size is set to 4×4. Open-source code for this paper is available at https://github.com/junhaoxiao/TAMS-Planar-Surface-Based-Perception.git.
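The incremental plane-parameter update described above can be sketched by maintaining running sums, so that adding a point costs O(1) and the plane can be re-fit at any time. This is a generic incremental least-squares plane fit, not the authors' exact formulation:

```python
import numpy as np

class IncrementalPlane:
    """Incrementally fitted plane: keep the point count, the sum of
    points, and the sum of outer products. The centroid and scatter
    matrix then follow in O(1), and the plane normal is the
    eigenvector of the scatter matrix with the smallest eigenvalue."""

    def __init__(self):
        self.n = 0
        self.s = np.zeros(3)        # sum of points
        self.ss = np.zeros((3, 3))  # sum of outer products p p^T

    def add(self, p):
        p = np.asarray(p, dtype=float)
        self.n += 1
        self.s += p
        self.ss += np.outer(p, p)

    def fit(self):
        centroid = self.s / self.n
        # sum (p - c)(p - c)^T = sum p p^T - n c c^T
        scatter = self.ss - np.outer(self.s, centroid)
        eigvals, eigvecs = np.linalg.eigh(scatter)  # ascending eigenvalues
        normal = eigvecs[:, 0]                      # smallest -> plane normal
        return centroid, normal
```

Because nothing is recomputed from scratch when a subwindow or point joins a region, this is the kind of update that makes region growing fast.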
12.
《Control Engineering Practice》2009,17(5):597-608
This paper describes a new method for performing automatic tasks with a robot in an unstructured environment. A task of replacing a blown light bulb in a streetlamp is described to show that the method works properly. To perform this task correctly, the robot is positioned by tracking secure, previously defined paths. The robot, using an eye-in-hand configuration in a visual servoing scheme together with a force sensor, is able to interact with its environment because the path tracking is performed with time-independent behaviour. The desired path is expressed in the image space; however, the proposed method obtains correct tracking not only in the image but also in 3D space. The method solves the problems of previously proposed time-independent tracking systems based on visual servoing, such as specifying the desired tracking velocity, reducing oscillatory behaviour, and achieving correct tracking in 3D space when high velocities are used. The experiments in this paper demonstrate the necessity of time-independent behaviour in tracking and the correct performance of the system.
13.
This paper addresses the problem of visual simultaneous localization and mapping (SLAM) in an unstructured seabed environment, applied to an unmanned underwater vehicle equipped with a single monocular camera as the main measurement sensor. Monocular vision is regarded as an efficient sensing option in the context of SLAM; however, it poses a variety of challenges when the relative motion is determined by matching pairs of images in in-water visual SLAM. Among these challenges, this research focuses on loop closure, one of the most important issues in SLAM. The study proposes a robust loop-closure algorithm that improves operational performance in terms of both navigation and mapping by efficiently reconstructing image-matching constraints. To demonstrate and evaluate the effectiveness of the proposed loop-closure method, experimental datasets obtained in underwater environments are used, and the validity of the algorithm is confirmed by a series of comparative results.
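A minimal sketch of loop-closure candidate detection by global descriptor similarity; the descriptor format and thresholds are hypothetical, and the paper's method reconstructs image-matching constraints rather than relying on this simple test:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_loop_closures(descriptors, sim_thresh=0.9, min_gap=5):
    """Flag keyframe pairs whose global image descriptors are highly
    similar yet far apart in time: candidate loop closures, which a
    real system would then verify with geometric matching."""
    candidates = []
    for i in range(len(descriptors)):
        for j in range(i + min_gap, len(descriptors)):
            if cosine_sim(descriptors[i], descriptors[j]) >= sim_thresh:
                candidates.append((i, j))
    return candidates
```

The temporal gap guards against trivially matching consecutive frames; underwater imagery would additionally need robustness to turbidity and lighting changes.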
14.
Daniela Craciun Nicolas Paparoditis Francis Schmitt 《Computer Vision and Image Understanding》2010,114(11):1248-1263
We are currently developing a vision-based system that aims to provide a fully automatic pipeline for in situ photorealistic three-dimensional (3D) modeling of previously unknown, complex, and unstructured underground environments. Since navigation sensors are not reliable in such environments, our system embeds only passive (camera) and active (laser) 3D vision sensors. Laser range finders are particularly well suited for generating dense 3D maps by aligning multiple scans acquired from different viewpoints. Nevertheless, current Iterative Closest Point (ICP)-based scan matching techniques rely on heavy human operator intervention during a post-processing step. Since a human operator cannot access the site, these techniques are not suitable for high-risk underground environments. This paper presents an automatic on-line scan matcher able to cope with the architecture of current 3D laser scanners and to process either intensity or depth data to align scans, providing robustness with respect to the capture device. The proposed implementation emphasizes the portability of our algorithm to either single- or multi-core embedded platforms for on-line mosaicing onboard 3D scanning devices. The proposed approach addresses key issues for in situ 3D modeling in difficult-to-access and unstructured environments and solves the 3D scan matching problem in an environment-independent manner. Several tests performed in two prehistoric caves illustrate the reliability of the proposed method.
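The core of ICP-style scan matching, closed-form rigid alignment of matched point sets via SVD plus nearest-neighbour re-matching, can be sketched as follows; this is a generic point-to-point ICP, not the authors' intensity/depth matcher:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form rigid alignment (Kabsch/Horn via SVD) between
    corresponding point sets: the inner step of each ICP iteration."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1.
    D = np.diag([1.0] * (H.shape[0] - 1) + [np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: match each source point to its
    nearest destination point, solve for the best rigid transform,
    apply it, and repeat."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    d = dst.shape[1]
    R_total, t_total = np.eye(d), np.zeros(d)
    for _ in range(iters):
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]        # nearest-neighbour matches
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

The brute-force nearest-neighbour search here is O(nm); practical scan matchers replace it with a k-d tree and add outlier rejection.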
15.
Rogerio Bonatti Wenshan Wang Cherie Ho Aayush Ahuja Mirko Gschwindt Efe Camci Erdal Kayacan Sanjiban Choudhury Sebastian Scherer 《Journal of Field Robotics》2020,37(4):606-641
Aerial cinematography is revolutionizing industries that require live and dynamic camera viewpoints, such as entertainment, sports, and security. However, safely piloting a drone while filming a moving target in the presence of obstacles is immensely taxing, often requiring multiple expert human operators. Hence, there is a demand for an autonomous cinematographer that can reason about both geometry and scene context in real time. Existing approaches do not address all aspects of this problem; they either require high-precision motion-capture systems or global positioning system tags to localize targets, rely on prior maps of the environment, plan for short time horizons, or only follow fixed artistic guidelines specified before the flight. In this study, we address the problem in its entirety and propose a complete system for real-time aerial cinematography that for the first time combines: (a) vision-based target estimation; (b) 3D signed-distance mapping for occlusion estimation; (c) efficient trajectory optimization for long time-horizon camera motion; and (d) learning-based artistic shot selection. We extensively evaluate our system both in simulation and in field experiments by filming dynamic targets moving through unstructured environments. Our results indicate that our system can operate reliably in the real world without restrictive assumptions. We also provide in-depth analysis and discussion of each module, with the hope that our design tradeoffs can generalize to other related applications. Videos of the complete system can be found at https://youtu.be/ookhHnqmlaU.
16.
The research presented in this paper addresses the issue of navigation using an automated guided vehicle (AGV) in industrial environments. The work describes the navigation system of a flexible AGV intended for operation in partially structured warehouses with frequent changes in the plant floor layout. This is achieved by incorporating a high degree of on-board autonomy and by decreasing the amount of manual work required of the operator when establishing the a priori knowledge of the environment. The AGV's autonomy consists of the set of automatic tasks, such as planning, perception, path planning, and path tracking, that the industrial vehicle must perform to accomplish the mission required by the operator. The integration of these techniques has been tested on a real AGV working in an industrial warehouse environment.
17.
Intelligent Service Robotics - This paper provides a stability analysis of a robust interaction control, nonlinear bang–bang impact control, for one-degree-of-freedom robot manipulators. The...
18.
To address the problems of obstacle avoidance and multi-vehicle path conflicts in path planning for unmanned ground vehicles in unstructured environments, a faster A*-based path planning algorithm under topological constraints is proposed, using homology and de Rham cohomology to precisely describe the topological information of obstacles in the environment. The algorithm achieves a topological classification of the global paths of multiple unmanned vehicles in unstructured environments, providing a new line of research for cooperative multi-vehicle planning. In addition, by exploiting the path-separating topological property of the dynamic generalized Voronoi diagram (GVD) in C-space, an efficient algorithm for the global path planning of multiple unmanned vehicles under topological constraints is proposed: the C-space-GVD-${h_S
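The topological machinery above (homology classes of paths, GVD separation) is beyond a short sketch, but the A* search the proposed planner builds on looks like this textbook grid version; the 4-connected grid and Manhattan heuristic are assumptions for illustration:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle) with the
    admissible Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]   # (f, g, node)
    parent = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    parent[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), ng, nb))
    return None                         # goal unreachable
```

A topology-constrained planner would additionally tag each partial path with its homology class relative to the obstacles and keep the best path per class, rather than a single global optimum.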
19.
In this paper we consider the problem of estimating the range information of features on an affine plane by observing its image with the aid of a CCD camera, wherein we assume that the camera undergoes a known motion. The features considered are points, lines, and planar curves located on planar surfaces of static objects. The dynamics of the moving projections of the features on the image plane are described by a suitable differential equation on an appropriate feature space. This dynamics is used to estimate feature parameters from which the range information is readily available. In this paper the proposed identification is carried out via a newly introduced identifier-based observer. The performance of the observer is studied via simulation.
20.
An optimal path enables efficient operation of unmanned ground vehicles (UGVs) for many kinds of tasks, such as transportation, exploration, surveillance, and search and rescue, in unstructured areas that include various unexpected obstacles. Various onboard sensors, such as LiDAR, radar, sonar, and cameras, are used to detect obstacles around the UGVs. However, their field of view is often limited by movable obstacles or barriers, resulting in inefficient path generation. Here, we present an aerial online mapping system that generates an efficient path for a UGV on a two-dimensional map. The map is updated by projecting obstacles detected in aerial images taken by an unmanned aerial vehicle through an object detector based on a conventional convolutional neural network. The proposed system is implemented in real time on a skid-steering ground vehicle and a quadcopter with relatively small, low-cost embedded systems. The framework and each module of the system are described in detail to evaluate the performance. The system is also demonstrated in unstructured outdoor environments, such as a football field and a park with unreliable communication links. The results show that aerial online mapping is effective for path generation for autonomous UGVs in real environments.
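The map-update step, projecting aerial detections into a 2D occupancy grid, can be sketched as follows. The detections are assumed to be already ground-projected to metric offsets from the drone (e.g., via a downward-facing camera with known altitude and intrinsics), and all parameters are illustrative:

```python
def world_to_cell(xy, origin, resolution):
    """Map a world coordinate (metres) to a grid cell index.
    Assumes coordinates at or beyond the grid origin."""
    return (int((xy[0] - origin[0]) / resolution),
            int((xy[1] - origin[1]) / resolution))

def update_map(grid, detections, drone_pos, resolution=0.5, origin=(0.0, 0.0)):
    """Mark a cell occupied for each obstacle detected in the aerial
    image; detections are (dx, dy) metric offsets from the drone."""
    for dx, dy in detections:
        r, c = world_to_cell((drone_pos[0] + dx, drone_pos[1] + dy),
                             origin, resolution)
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
            grid[r][c] = 1
    return grid
```

The UGV's planner then runs on the updated grid, which is how the aerial view fills in regions the ground sensors cannot see.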