Similar Articles (20 results)
1.
Using robots to harvest sweet peppers in protected cropping environments has remained an unsolved problem despite considerable effort by the research community over several decades. In this paper, we present Harvey, a robotic harvester designed for sweet peppers in protected cropping environments. It achieved a 76.5% success rate on 68 fruit (within a modified scenario), improving upon our prior work, which achieved 58% on 24 fruit, and related sweet pepper harvesting work, which achieved 33% on 39 fruit (for their best tool in a modified scenario). This improvement was achieved primarily through the introduction of a novel peduncle segmentation system using an efficient deep convolutional neural network, in conjunction with three-dimensional postfiltering to detect the critical cutting location. We benchmark the peduncle segmentation against prior art, demonstrating an improvement in performance with an F1 score of 0.564 compared to 0.302. The robotic harvester uses a perception pipeline to detect a target sweet pepper and an appropriate grasp and cutting pose, which determine the trajectory of a multimodal harvesting tool that grasps the sweet pepper and cuts it from the plant. A novel decoupling mechanism enables the gripping and cutting operations to be performed independently. We perform an in-depth analysis of the full robotic harvesting system to highlight bottlenecks and failure points that future work could address.
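The F1 benchmark reported above is the harmonic mean of precision and recall. As a minimal, hypothetical sketch of how such a score is computed from pixel-level segmentation counts (the paper's own evaluation pipeline is not shown in the abstract):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, 2 true positives with 1 false positive and 1 false negative give precision = recall = 2/3, hence F1 = 2/3.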

2.
Design of an autonomous agricultural robot (cited 5 times: 0 self-citations, 5 by others)
This paper presents a state-of-the-art review of the development of autonomous agricultural robots, including guidance systems, greenhouse autonomous systems, and fruit-harvesting robots. A general concept for a field-crop robotic machine to selectively harvest easily bruised fruit and vegetables is designed, and future trends that must be pursued to make robots a viable option for agricultural operations are discussed. A prototype machine embodying part of this design has been implemented for melon harvesting. The machine consists of a Cartesian manipulator mounted on a mobile chassis pulled by a tractor. Two vision sensors are used to locate the fruit and guide the robotic arm toward it. A gripper grasps the melon and detaches it from the vine. The real-time control hardware architecture consists of a blackboard system, with autonomous modules for sensing, planning, and control connected through a PC bus. Approximately 85% of the fruit are successfully located and harvested.

3.
This paper presents an autonomous robot capable of picking strawberries continuously in polytunnels. Robotic harvesting in cluttered and unstructured environments remains a challenge. A novel obstacle-separation algorithm is proposed to enable the harvesting system to pick strawberries located in clusters: the gripper pushes aside surrounding leaves, strawberries, and other obstacles, and we present the theoretical method for generating pushing paths from the surrounding obstacles. In addition to manipulation, an improved vision system, developed from a model of color against light intensity, is more resilient to lighting variations. Further, a low-cost dual-arm system was developed with an optimized harvesting sequence that increases efficiency and minimizes the risk of collision. Improvements were also made to the existing gripper to enable the robot to pick directly into a market punnet, eliminating the need for repacking. During tests on a strawberry farm, the robot's first-attempt success rate for picking partially surrounded or isolated strawberries ranged from 50% to 97.1%, depending on growth conditions. With an additional attempt, the success rate increased to 75% to 100%. In the field tests, the system could not pick a target that was entirely surrounded by obstacles; this failure was attributed to limitations of the vision system and insufficient dexterity of the grippers. However, picking speed improved upon previous systems, taking just 6.1 s per manipulation operation in one-arm mode and 4.6 s in two-arm mode.

4.
Citrus harvesting is a labor-intensive and time-consuming task, and as the global population ages, labor costs are rising dramatically. The citrus-harvesting robot has therefore attracted considerable attention from both industry and academia. However, robotic harvesting in unstructured, natural citrus orchards remains a challenge. This study addresses some of the challenges in commercializing citrus-harvesting robots. We present a fully integrated, autonomous, and innovative solution that overcomes the harvesting difficulties arising from the natural growth characteristics of citrus. The solution uses a fused simultaneous localization and mapping algorithm based on multiple sensors to perform high-precision localization and navigation for the robot in the orchard. In addition, a novel visual method for estimating fruit poses is proposed to cope with the random orientations of citrus growth. Further, a new end-effector is designed to improve the success and conformity rate of citrus stem cutting. Finally, a fully autonomous harvesting robot system was developed and integrated. Field evaluations showed that the robot could harvest citrus continuously with an overall success rate of 87.2% and an average picking time of 10.9 s/fruit. These efforts provide a solid foundation for the future commercialization of citrus-harvesting robots.

5.
An intelligent control system for an agricultural robot that performs in an uncertain and unstructured environment was modelled as distributed, autonomous computing modules communicating through globally accessible blackboard structures. The control architecture was implemented for a robotic melon harvester. A CAD workstation was used to plan, model, simulate, and evaluate the robot and gripper motions using 3-D, real-time animation. The intelligent control structure was verified by simulating the dynamic data-flow scenarios of melon harvesting, and control algorithms were evaluated on measured melon locations. Picking time was reduced by 49% by applying a traveling-salesman algorithm to define the picking sequence. Picking speed can be increased further by a continuous mode of operation; however, this decreases harvest efficiency. Therefore, an algorithm was developed to attain 100% harvest efficiency by varying the vehicle's forward speed. By comparing different motion control algorithms through animated visual simulation, the best one was selected and performance thereby improved.

Journal Paper No. 13043, Agricultural Experiment Station, Purdue University, W. Lafayette, IN 47907, U.S.A. This research was supported by Grants No. US-1254-87 and US-1682-89 from BARD, the United States-Israel Binational Agricultural Research and Development Fund.
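The 49% reduction in picking time above came from ordering the picks as a traveling-salesman tour over measured melon locations. As an illustrative sketch only, a greedy nearest-neighbour heuristic (a common way to approximate such a tour; the paper does not specify its exact solver) could look like:

```python
import math

def picking_sequence(start, melons):
    """Order melon locations by repeatedly visiting the nearest
    unvisited one (greedy nearest-neighbour TSP heuristic)."""
    remaining = list(melons)
    order, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order
```

Starting at (0, 0) with melons at x = 5, 1, and 3, the heuristic visits them in the order 1, 3, 5.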

6.
This paper surveys the main trends of the development of advanced general-purpose software for robot manipulators. It consists of three main sections: robot planning, robot programming, and robot vision.

7.
Human–robot collaboration will be an essential part of production processes in the factories of tomorrow. In this paper, a human–robot hand-over control strategy is presented in which human and robot can each act as giver or receiver. A path-planning algorithm drives the robotic manipulator toward the hand of the human and adapts the pose of the robot's tool center point to the pose of the human worker's hand. The movements of the operator are acquired with a multiple 3D-sensor setup to avoid occlusions caused by the presence of the robot or other dynamic obstacles. The predicted position of the hand is estimated to reduce the operator's waiting time during the hand-over task. The hardware setup is described, and the results of experimental tests conducted to verify the effectiveness of the control strategy are presented and discussed.
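The hand-position prediction mentioned above can be reduced to its simplest possible form: constant-velocity extrapolation of the tracked hand. The function below is a hypothetical sketch under that assumption (the paper's actual estimator is not specified in the abstract):

```python
def predict_hand(p_prev, p_curr, dt, horizon):
    """Constant-velocity extrapolation: estimate the hand position
    `horizon` seconds ahead from two samples taken `dt` seconds apart."""
    return tuple(c + (c - p) / dt * horizon for p, c in zip(p_prev, p_curr))
```

A hand that moved 1 m along x in 0.1 s is predicted 0.2 s ahead at x = 3 m.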

8.
This paper presents a new method for behavior-fusion control of a mobile robot in uncertain environments. Using behavior fusion by fuzzy logic, a mobile robot can execute its motion directly from range information about the environment, acquired by ultrasonic sensors, without the need for trajectory planning. On top of this low-level behavior control, an efficient strategy for integrating high-level global planning of robot motion can be formulated, since in most applications some information about the environment is known in advance. A global planner therefore needs only to generate subgoal positions rather than exact geometric paths. Because such subgoals can easily be removed from or added to the plans, this strategy reduces the computational time for global planning and is flexible for replanning in dynamic environments. Simulation results demonstrate that the proposed strategy can be applied to robot motion in complex and dynamic environments.
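A minimal sketch of the behavior-fusion idea, assuming just two behaviors (goal seeking and obstacle avoidance) and a simple ramp membership function over the nearest ultrasonic range; the paper's actual fuzzy rule base is not given in the abstract:

```python
def fuse_behaviors(goal_turn, avoid_turn, nearest_range,
                   near=0.5, far=2.0):
    """Fuse two steering commands with fuzzy weights derived from the
    closest ultrasonic range reading (ramp membership for 'near')."""
    # Membership of "obstacle is near": 1 below `near`, 0 above `far`.
    if nearest_range <= near:
        w_avoid = 1.0
    elif nearest_range >= far:
        w_avoid = 0.0
    else:
        w_avoid = (far - nearest_range) / (far - near)
    # Defuzzification here is just a weighted average of the commands.
    return w_avoid * avoid_turn + (1 - w_avoid) * goal_turn
```

With open space ahead the goal-seeking command dominates; as the nearest reading shrinks, the avoidance command smoothly takes over.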

9.
In recent decades, ego-motion estimation, or visual odometry (VO), has received considerable attention from the robotics research community, mainly due to its central importance in achieving robust localization and, as a consequence, autonomy. Different solutions have been explored, leading to a wide variety of approaches, mostly grounded in geometric methodologies and, more recently, in data-driven paradigms. To guide researchers and practitioners in choosing the best VO method, several benchmark studies have been published. However, most of them compare only a small subset of the most popular approaches, usually on specific data sets or configurations. In contrast, this work aims to provide a complete and thorough study of the most popular and best-performing geometric and data-driven solutions for VO. In our investigation, we considered several scenarios and environments, comparing the estimation accuracies and the role of the hyper-parameters of the selected approaches, and analyzing the computational resources they require. Experiments and tests were performed on different data sets (both publicly available and self-collected) and two different computational boards. The experimental results show the pros and cons of the tested approaches from different perspectives. The geometric simultaneous localization and mapping methods are confirmed to be the best performing, while data-driven approaches show robustness with respect to the nonideal conditions present in more challenging scenarios.

10.
Grasping of Static and Moving Objects Using a Vision-Based Control Approach (cited 1 time: 0 self-citations, 1 by others)
Robotic systems require sensing to enable flexible operation in uncalibrated or partially calibrated environments. Recent work combining robotics with vision has emphasized an active vision paradigm, in which the system changes the pose of the camera to improve environmental knowledge or to establish and preserve a desired relationship between the robot and objects in the environment. Much of this work has concentrated on the active observation of objects by the robotic agent. We address the problem of robotic visual grasping (eye-in-hand configuration) of static and moving rigid targets. The objective is to move the image projections of certain feature points of the target so as to effect a vision-guided reach and grasp. An adaptive control algorithm for repositioning the camera compensates for the servoing errors and the computational delays introduced by the vision algorithms. Stability issues are discussed, along with the minimum number of required feature points. Experimental results are presented to verify the validity and efficacy of the proposed control algorithms. We then address an adaptation of the control paradigm that focuses on the autonomous grasping of a static or moving object in the manipulator's workspace. Our work extends the capabilities of an eye-in-hand system beyond those of a pointer or camera orienter to provide the flexibility required to interact robustly with the environment in the presence of uncertainty. The proposed work is experimentally verified using the Minnesota Robotic Visual Tracker (MRVT) [7] to automatically select object features, derive estimates of unknown environmental parameters, and supply a control vector based upon these estimates to guide the manipulator in grasping a static or moving object.

11.
Zhang Heng (张恒), Computer and Digital Engineering (《计算机与数字工程》), 2011, 39(3): 138-140, 170
Combining human physiological characteristics with the properties of object motion, this paper proposes a method of representing human motion with a motion function, for use in motion detection in video images. Targeting the characteristics of intelligent surveillance systems, and taking video of human walking as an example, a human walking function equation is obtained by extracting experimental data and analyzing it. On this basis, experiments verify the validity of each parameter of the walking function, yielding the final expression of the human walking function.

12.
In this paper we describe an algorithm to recover the scene structure, the trajectories of the moving objects and the camera motion simultaneously given a monocular image sequence. The number of the moving objects is automatically detected without prior motion segmentation. Assuming that the objects are moving linearly with constant speeds, we propose a unified geometrical representation of the static scene and the moving objects. This representation enables the embedding of the motion constraints into the scene structure, which leads to a factorization-based algorithm. We also discuss solutions to the degenerate cases which can be automatically detected by the algorithm. Extension of the algorithm to weak perspective projections is presented as well. Experimental results on synthetic and real images show that the algorithm is reliable under noise.

13.
Despite the good performance of space exploration missions, open issues remain to be solved. In autonomous or composite semi-autonomous exploration of planetary surfaces, rover localization is one such issue. The rovers of these missions (e.g., the MER and MSL) navigate relative to their landing spot, ignoring their exact position in the coordinate system defined for the celestial body they explore. However, future advanced missions, such as Mars Sample Return, will require localizing rovers in a global frame rather than the arbitrarily defined landing frame. In this paper we attempt to retrieve the rover's absolute location by identifying matching Regions of Interest (ROIs) between orbital and ground images. In particular, we propose a system comprising two parts, one offline and one onboard, which functions as follows: in advance of the mission, a Global ROI Network (GN) is built offline by investigating the satellite images near the predicted touchdown ellipse, while during the mission a Local ROI Network (LN) is constructed from the images acquired by the rover's vision system along its traverse. The latter procedure relies on accurate VO-based relative rover localization. The LN is then matched against the GN using a modified 2D DARCES algorithm. The system has been assessed on real data collected by the ESA in the Atacama Desert. The results demonstrate the system's potential to perform absolute localization, on condition that the area includes discriminative ROIs. The main contribution of this work is enabling global localization on contemporary rovers without requiring any additional hardware, such as long-range LIDARs.

14.
Visual Odometry (VO) is a fundamental technique for enhancing the navigation capabilities of planetary exploration rovers. By processing the images acquired during motion, VO methods estimate the relative position and attitude between navigation steps by detecting and tracking two-dimensional (2D) image keypoints, mitigating the trajectory inconsistencies that dead-reckoning techniques suffer under slippage conditions. We present here an independent analysis of the high-resolution stereo images of the NASA Mars 2020 Perseverance rover to retrieve its accurate localization on sols 65, 66, 72, and 120. The stereo pairs are processed using a 3D-to-3D stereo-VO approach that is based on consolidated techniques and accounts for the main nonlinear optical effects characterizing real cameras. The algorithm is first validated through the analysis of rectified stereo images acquired by the NASA Mars Exploration Rover Opportunity, and then applied to the determination of Perseverance's path. The results suggest that our reconstructed path is consistent with the telemetered trajectory, which was retrieved directly onboard the rover's system. The estimated pose is in full agreement with the archived rover position and attitude after short navigation steps. Significant differences (~10-30 cm) between our reconstructed and telemetered trajectories are observed when Perseverance traveled distances larger than 1 m between the acquisition of stereo pairs.
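The core of a 3D-to-3D stereo-VO step is aligning two clouds of triangulated keypoints between frames. A standard way to do this is the Arun/Kabsch SVD method; that the authors use exactly this formulation is an assumption, but a minimal sketch looks like:

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    for matched 3xN point clouds P and Q (Arun/Kabsch SVD method)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Chaining the per-step (R, t) estimates over the image sequence yields the reconstructed rover path.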

15.
Research and commercial interest in 3D virtual worlds is growing because they probably represent the direction for the next generation of web applications. Although these environments present several features useful for informal collaboration, structured collaboration is required to use them effectively in a working or didactical setting. This paper presents a system supporting synchronous collaborative learning by naturally enriching Learning Management System services with meeting management and multimedia features. Monitoring and moderation of discussions are managed both at the single-group level and at the teaching level. The Second Life (SL) environment has been integrated with two ad hoc Moodle plug-ins, and SL objects have been designed, modeled, and programmed to support synchronous role-based collaborative activities. We also enriched SL with tools supporting the capture and display of textual information during collaborative sessions for later retrieval, and enhanced the multimedia support with functionality for navigating multimedia contents. We also report on an empirical study evaluating the proposed SL collaborative learning against face-to-face group collaboration. Results show that the two approaches are statistically indistinguishable in terms of performance, comfort with communication, and overall satisfaction. Copyright © 2009 John Wiley & Sons, Ltd.

16.
For cases in which the navigation target in the direction of travel has no distinctive visual features of its own, a new visual navigation method is proposed. The method uses salient feature points in the environment, whose positional relationship to the target is known, as reference points; the attitude adjustment of the moving system is determined from the imaged positions of the reference points, achieving heading navigation for the robot. To improve the algorithm's adaptability to the environment, it uses only a single reference point, giving it a simple structure and low computational complexity. Experimental results show that the method effectively eliminates heading errors during navigation, can work together with a position-navigation device to provide heading navigation for the robot, and offers good practicality and stability.

17.
We present a new approach to visual feedback control using image-based visual servoing with stereo vision. To control the position and orientation of a robot with respect to an object, a new technique using binocular stereo vision is proposed. Stereo vision enables us to calculate an exact image Jacobian not only around a desired location, but also at other locations. The suggested technique can guide a robot manipulator to the desired location without a priori knowledge such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. We describe a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results, and compared with the conventional method for an assembly robot. This work was presented in part at the Fourth International Symposium on Artificial Life and Robotics, Oita, Japan, January 19-22, 1999.
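Feedback commands in image-based visual servoing classically follow the law v = -lambda * J+ * (s - s*), where J is the image Jacobian, s the current feature vector, and s* the desired one. A generic sketch of that law (the paper's stereo Jacobian construction itself is specific to its camera model and is not reproduced here):

```python
import numpy as np

def ibvs_velocity(J, s, s_star, lam=0.5):
    """Camera velocity from the pseudo-inverse of the image Jacobian J
    and the feature error s - s_star (classical IBVS control law)."""
    error = np.asarray(s, float) - np.asarray(s_star, float)
    return -lam * np.linalg.pinv(np.asarray(J, float)) @ error
```

The pseudo-inverse handles the common case where J is non-square (more feature coordinates than velocity degrees of freedom).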

18.
To achieve fast motion detection and estimation of the camera's ego-motion in image processing, a Reichardt elementary motion detector model (EMD) based on biological vision mechanisms and a set of receptive-field templates are introduced. The basic properties and shortcomings of the Reichardt motion detector model are analyzed, and to overcome the model's main shortcomings, an optimized model is adopted that yields better motion detection results. In addition, six receptive-field templates based on the fly visual system are proposed to estimate simple ego-motion patterns, such as camera translation and rotation. Finally, the algorithms are implemented on an FPGA (Field-Programmable Gate Array) platform. Experimental results show that the optimized motion detector can quickly determine local motion direction, and the receptive-field templates can estimate simple motion patterns against particular backgrounds; for input images with a resolution of 256×256 pixels, the FPGA system achieves a processing rate of 350 frames per second with a latency of only 0.25 μs, meeting the requirement for fast processing. The model can be applied to real-time machine vision systems for tasks such as obstacle detection, global motion estimation, and the stabilization control of UAVs/MAVs.

19.
This article presents an original motion control strategy for robot manipulators based on coupling the inverse dynamics method with the so-called second-order sliding mode control approach. Using this method, in principle, all the coupling nonlinearities in the dynamical model of the manipulator are compensated, transforming the multi-input nonlinear system into a linear, decoupled one. In practice, since the inverse dynamics relies on an identified model, some residual uncertain terms remain and perturb the linear, decoupled system. This motivates the use of a robust control design approach to complete the control scheme; in this article, the sliding mode control methodology is adopted. Sliding mode control has many desirable features, such as design simplicity and robustness against a wide class of uncertainties and disturbances. Yet conventional sliding mode control seems inappropriate for robotics, since it can generate the so-called chattering effect, which can be destructive for the controlled robot. Here, this problem is circumvented by designing a second-order sliding mode controller capable of generating a continuous control law, making the proposed controller actually applicable to industrial robots. To build the inverse dynamics part of the proposed controller, a suitable dynamical model of the system was formulated, and its parameters were accurately identified using a practical MIMO identification procedure recently devised. The proposed inverse-dynamics-based second-order sliding mode controller has been experimentally tested on a COMAU SMART3-S2 industrial manipulator, demonstrating the tracking properties and good performance of the controlled system.
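The continuous, chattering-free control law can be illustrated with the super-twisting algorithm, a well-known second-order sliding mode controller; whether it is the authors' exact controller is an assumption, and this is a one-dimensional sketch only:

```python
import math

def super_twisting_step(s, v, k1, k2, dt):
    """One Euler step of the super-twisting second-order sliding mode law:
    u = -k1*sqrt(|s|)*sign(s) + v,  dv/dt = -k2*sign(s).
    Returns the continuous control u and the updated integral state v."""
    sgn = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sgn + v
    v = v - k2 * sgn * dt
    return u, v
```

Because the discontinuous sign term is pushed into the integrated state v, the applied control u itself stays continuous, which is the property that makes the scheme usable on industrial actuators.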

20.
Juggling, which uses both hands to keep several objects in the air at once, is admired by anyone who sees it. However, skillful real-world juggling requires long, hard practice. Therefore, we propose an interesting method to enable anyone to juggle skillfully in the virtual world. In the real world, the human motion has to follow the motion of the moving objects; in the virtual world, the objects' motion can be adjusted together with the human motion. By using this freedom, we have generated a juggling avatar that can follow the user's motion. The user simply makes juggling-like motions in front of a motion sensor. Our system then searches for juggling motions that closely match the user's motions and connects them smoothly. We then generate moving objects that both satisfy the laws of physics and are synchronized with the synthesized motion of the avatar. In this way, we can generate a variety of juggling animations by an avatar in real time. Copyright © 2016 John Wiley & Sons, Ltd.
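The physically consistent ball trajectories described above reduce, for a point mass, to choosing a launch velocity that satisfies the ballistic equation p1 = p0 + v0*T + g*T^2/2. A minimal 2D sketch under that simple point-mass assumption (not the paper's exact model):

```python
def throw_velocity(p0, p1, T, g=-9.81):
    """Initial (vx, vy) so a ballistic ball launched at p0 reaches p1
    after T seconds under vertical gravitational acceleration g."""
    vx = (p1[0] - p0[0]) / T
    vy = (p1[1] - p0[1] - 0.5 * g * T * T) / T
    return vx, vy
```

For a throw from one hand to another 0.4 m away at equal height in 0.5 s, this gives vx = 0.8 m/s and an upward vy of about 2.45 m/s.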


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号