Similar Documents
20 similar documents found.
1.
This article describes a framework that fuses vision and force feedback for the control of highly deformable objects. Deformable active contours, or snakes, are used to visually observe changes in object shape over time. Finite-element models of object deformations are combined with force feedback to predict the expected visually observed deformations. Our approach improves the performance of large, complex deformable contour tracking over traditional computer vision tracking techniques. The same combination of deformable active contours and finite-element material models is then modified so that a vision sensor, i.e., a charge-coupled device (CCD) camera, can be used as a force sensor: by visually tracking changes in contours on the object, material deflections can be transformed into applied stress estimates through finite-element modeling. Internal object stresses due to manipulation can therefore be visually observed and controlled, creating a framework for deformable object manipulation. A pinhole camera model is used to assimilate feedback from the two sensing modalities during manipulation. © 2001 John Wiley & Sons, Inc.
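As a rough illustration of the vision-as-force-sensor idea described above, the sketch below inverts a linear finite-element relation to turn visually tracked deflections into force estimates; the stiffness matrix, node count, and units are hypothetical, not taken from the article.

```python
import numpy as np

# Linear FEM relation f = K @ u: K is the assembled stiffness matrix,
# u the nodal displacement vector. With u estimated from tracked contour
# deflections, the applied nodal forces follow directly.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])        # hypothetical 3-node stiffness (N/mm)

u_observed = np.array([0.10, 0.25, 0.40])  # deflections tracked by vision (mm)

f_estimated = K @ u_observed               # nodal force estimate (N)
print("estimated nodal forces:", f_estimated)
```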

2.
Whereas vision and force feedback—either at the wrist or at the joint level—for robotic manipulation purposes has received considerable attention in the literature, the benefits that tactile sensors can provide when combined with vision and force have rarely been explored. In fact, there are some situations in which vision and force feedback cannot guarantee robust manipulation. Vision is frequently subject to calibration errors, occlusions, and outliers, whereas force feedback can only provide useful information along those directions that are constrained by the environment. In tasks where the visual feedback contains errors and the contact configuration does not constrain all the Cartesian degrees of freedom, vision and force sensors are not sufficient to guarantee a successful execution. Many of the tasks performed in our daily life that do not require a firm grasp belong to this category. Therefore, it is important to develop strategies for robustly dealing with these situations. In this article, a new framework for combining tactile information with vision and force feedback is proposed and validated with the task of opening a sliding door. Results show how the vision-tactile-force approach outperforms vision-force and force-alone approaches, in the sense that it corrects the vision errors while guaranteeing a suitable contact configuration.

3.
Sensor planning in computer vision: a survey
In computer vision applications such as photogrammetry and object recognition, sensor planning strategies, i.e., planning the camera and illumination parameters in advance, are needed to improve 3D localization accuracy, since they allow vision tasks to be accomplished more effectively. To give readers an overview of sensor planning in computer vision, this paper first analyzes the factors that influence sensor planning, including sensor and illumination parameters, feature detectability constraints, and sensor and object models. Building on a summary of recent research progress, it then classifies sensor planning problems according to the methods they employ and casts sensor planning as a combinatorial optimization process. Finally, several possible directions for future research on sensor planning are pointed out.

4.
This paper presents a robust bin-picking system that combines tactile sensors and a vision sensor. The object position and orientation are estimated with a fast template-matching method using the vision sensor. When the robot picks up an object, the tactile sensors detect the success or failure of the grasp, and a force sensor detects contact with the environment. A weight sensor is also used to judge whether the object has been lifted successfully. The robust and efficient bin-picking system presented herein is implemented through the integration of these different sensors. In particular, the tactile sensors enable the picking of rope-shaped objects, which conventional picking systems have not achieved. The effectiveness of the proposed method was confirmed through grasping experiments and in a competitive event at the World Robot Challenge 2018.
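The abstract does not specify the template-matching implementation; as a hedged sketch, OpenCV's normalized cross-correlation can stand in for the pose-estimation step (the file names and acceptance threshold are assumptions):

```python
import cv2

# Locate the object in the bin image by normalized cross-correlation.
scene = cv2.imread("bin_scene.png", cv2.IMREAD_GRAYSCALE)        # hypothetical path
template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                                  # hypothetical acceptance threshold
    h, w = template.shape
    center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
    print(f"object found at {center}, score {max_val:.2f}")
```

Orientation would typically be estimated by matching a bank of rotated templates and keeping the best-scoring angle.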

5.
A position/force hybrid control system based on an impedance control scheme is designed to align a small gripper with a ring-shaped object. Vision information provided by a microscope vision system is used as feedback to indicate the positional relationship between the gripper and the ring object. Multiple image features of the gripper and the ring object are extracted to estimate their relative positions. The end-effector of the gripper is tracked using the extracted features to keep the gripper moving within the field of view. Force information from the force sensor serves as feedback to ensure that the contact force between the gripper and the ring object remains within a small safe range. Experimental results verify the effectiveness of the proposed control strategy.
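As a minimal sketch of the impedance scheme (one degree of freedom with hypothetical gains; the article's actual controller is multi-axis and vision-guided):

```python
# One-DOF impedance control: the gripper is made to behave like the
# virtual system M x'' + B x' + K (x - x_d) = f_ext, so contact forces
# stay bounded while the position error is regulated.
M, B, K = 0.5, 8.0, 40.0   # hypothetical inertia, damping, stiffness
dt = 0.001                 # control period (s)

def impedance_step(x, v, x_des, f_ext):
    """Advance the impedance model by one control cycle."""
    a = (f_ext - B * v - K * (x - x_des)) / M
    v = v + a * dt
    x = x + v * dt
    return x, v
```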

6.
Object detection quality and network lifetime are two conflicting aspects of a sensor network, but both are critical to many sensor applications such as military surveillance. Partial coverage, where a sensing field is partially sensed by active sensors at any time, is an appropriate approach to balancing the two conflicting design requirements of monitoring applications. Under partial coverage, we develop an analytical framework for object detection in sensor networks, and mathematically analyze average-case object detection quality in random and synchronized sensing scheduling protocols. Our analytical framework facilitates performance evaluation of a sensing schedule, network deployment, and sensing scheduling protocol design. Furthermore, we propose three wave sensing scheduling protocols to achieve bounded worst-case object detection quality. We justify the correctness of our analyses through rigorous proof, and validate the effectiveness of the proposed protocols through extensive simulation experiments.
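A hedged Monte Carlo sketch of average-case detection quality under a random sensing schedule (the duty cycle, sensor count, and slot count are invented; the paper's framework is analytical, not simulation-based):

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_probability(k_sensors=3, duty_cycle=0.2, slots=5, trials=100_000):
    """Probability that an object staying `slots` time slots in a region
    covered by `k_sensors` is detected, when each sensor wakes
    independently with probability `duty_cycle` per slot."""
    awake = rng.random((trials, slots, k_sensors)) < duty_cycle
    return float(awake.any(axis=(1, 2)).mean())

# Closed form for this toy model: 1 - (1 - d)^(k * slots)
print(f"P(detect) ≈ {detection_probability():.3f}")
```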

7.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e., the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on Robotics and Automation, pp 1794–1799, 2007) are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach (Int Conf on Advanced Robotics, 2007), based on external control, is used first to guide the robot hand towards the grasp position and second to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with the object being manipulated, independently of the camera position. This allows the camera to be moved freely while the task is being executed, and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
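The abstract does not reproduce the servoing law; a conventional exponential-decay law of the kind commonly used for hand-object alignment might look like this (the gain and error parameterization are assumptions):

```python
import numpy as np

lam = 0.5  # hypothetical control gain

def pbvs_velocity(t_err, rotvec_err):
    """6-D velocity command from the translation error (m) and axis-angle
    rotation error (rad) between the hand frame and the grasp frame.
    v = -lambda * e drives the pose error to zero exponentially."""
    e = np.concatenate([t_err, rotvec_err])
    return -lam * e
```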

8.
Manual operation limits the mass production of MEMS sensors. To reduce production costs while improving sensor quality, a flexible automatic anodic bonding system was developed. The system consists of a series of functional modules, including a precision positioning system, a microscopic vision subsystem, a flexible micromanipulator, a heating system, a bonding fixture, a material-handling system, and other auxiliary systems. By reconfiguring and adjusting the modules, sensors of different sizes and specifications can be bonded. Based on the characteristics of microscopic vision and micro-assembly systems, a wavelet-transform-based sharpness evaluation function for autofocusing is proposed, together with a visual servo control structure based on a modified Smith predictor to overcome vision delay. The micromanipulator, which integrates a one-dimensional micro-force sensor, achieves high-precision, damage-free handling. To realize automatic operation, a control system with task planning and real-time control functions was developed. Experiments verified the automatic bonding capability of the system.
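A sketch of a wavelet-based autofocus sharpness function of the kind the abstract mentions, using PyWavelets (the wavelet family and energy metric are assumptions; the paper's exact evaluation function is not given):

```python
import numpy as np
import pywt

def wavelet_sharpness(image):
    """Focus measure: energy of the detail (high-frequency) subbands of a
    single-level 2-D discrete wavelet transform. A well-focused image
    carries more high-frequency content, hence scores higher."""
    _, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")  # wavelet choice assumed
    return float((cH**2).sum() + (cV**2).sum() + (cD**2).sum())
```

Autofocusing then amounts to stepping the stage through focus and keeping the position that maximizes this score.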

9.
Motivated by applications involving soft-tissue manipulation such as robotic surgery, the transparency objectives in bilateral teleoperation are redefined to include monotonic nonlinear and linear-time-invariant filter mappings between the master/slave position and force signals. To demonstrate the utility of the new performance measures, a stiffness discrimination telemanipulation task of soft environments is considered. A nonlinear force mapping can enhance stiffness discrimination thresholds as shown through a set of psychophysics experiments. Lyapunov-based adaptive motion/force controllers are presented that can achieve the new transparency objectives in the presence of dynamic uncertainty in the master, slave, user, and environment and in the absence of time delay. Given a priori known bounds on unknown dynamic parameters, a framework for robust stability analysis is proposed that uses an off-axis circle criterion and the Nyquist envelope of interval plant systems. Nonlinear- and linear-filtered mappings are achieved in experiments with a two-axis teleoperation system.
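As an illustrative (not the authors') monotonic nonlinear force mapping: a power law with exponent below one expands small force differences, which is the kind of reshaping that can make small stiffness differences easier to feel.

```python
import numpy as np

A, GAMMA = 2.0, 0.5   # hypothetical gain and exponent; GAMMA < 1 expands small forces

def master_force(slave_force):
    """Monotonic nonlinear mapping from the sensed environment force to
    the force displayed at the master."""
    return A * np.sign(slave_force) * np.abs(slave_force) ** GAMMA
```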

10.
This article introduces a sensor placement measure called vision resolvability. The measure provides a technique for estimating the relative ability of various visual sensors, including monocular systems, stereo pairs, multi-baseline stereo systems, and 3D rangefinders, to accurately control visually manipulated objects. The resolvability ellipsoid illustrates the directional nature of resolvability, and can be used to direct camera motion and adjust camera intrinsic parameters in real-time so that the servoing accuracy of the visual servoing system improves with camera-lens motion. The Jacobian mapping from task space to sensor space is derived for a monocular system, a stereo pair with parallel optical axes, and a stereo pair with perpendicular optical axes. Resolvability ellipsoids based on these mappings for various sensor configurations are presented. Visual servoing experiments demonstrate that vision resolvability can be used to direct camera-lens motion to increase the ability of a visually servoed manipulator to precisely servo objects. © 1996 John Wiley & Sons, Inc.
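Resolvability is read off the Jacobian that maps task-space motion to sensor-space motion; a sketch with a hypothetical 2x3 image Jacobian (the article derives these mappings analytically for each sensor configuration):

```python
import numpy as np

J = np.array([[120.0,   0.0,  -4.0],
              [  0.0, 120.0,  -6.0]])   # hypothetical image Jacobian (px per task unit)

# Singular vectors give the resolvability ellipsoid axes; singular values
# give how finely motion along each axis is resolved in the image.
U, s, Vt = np.linalg.svd(J)
for sigma, axis in zip(s, Vt[: len(s)]):
    print(f"resolvability {sigma:8.2f} along task direction {np.round(axis, 3)}")
```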

11.
Gesture-based systems allow users to interact with a virtual reality application in a natural way. Visual feedback in gesture-based interaction affects performance, and hand instability makes object manipulation less precise. This paper investigates two new interaction techniques in a virtual environment. It describes the influence of natural and non-natural virtual feedback in the selection process using the GITDVR-G interaction technique, which provides grasping visual feedback. The GITDVR-G was evaluated in a virtual knee-surgery training system. The results showed that it was effective in terms of task completion time, and that the participants preferred the natural grasping visual feedback. In addition, precise manipulation with a newly designed interaction technique (Precise GITDVR-G) was evaluated. The Precise GITDVR-G includes a normal manipulation mode and a precise manipulation mode that can be triggered by hand gestures. During the precise manipulation mode, an inset view appears and moves with the selected object to provide a better view to users, while the movements of the virtual hand are scaled down to improve precision. Four different configurations of the precise manipulation technique were evaluated, and the results showed that the unimanual control method with an inset view performed better in terms of task performance time and subjective feedback. The findings suggest that realistic virtual grasping visual feedback can be applied in a virtual hand interaction technique, and that the inset view feature is helpful for precise manipulation.
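The precise manipulation mode boils down to scaling the hand displacement; a minimal sketch (the scale factor is an assumption):

```python
PRECISE_SCALE = 0.25   # hypothetical scale factor for precise mode

def virtual_hand_delta(real_delta, precise_mode):
    """Scale down the tracked hand displacement while precise mode is active."""
    return real_delta * (PRECISE_SCALE if precise_mode else 1.0)
```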

12.
Advanced Robotics, 2013, 27(3): 245–261
This paper reviews the current state of the art and predicts the outlook in robotic tactile sensing for real-time control of dextrous manipulation. We begin with an overview of human touch sensing capabilities and draw lessons for robotic manipulation. Next, tactile sensor devices are described, including tactile array sensors, force-torque sensors, and dynamic tactile sensors. The information provided by these devices can be used in manipulation in many ways, such as finding contact locations and object shape, measuring contact forces, and determining contact conditions. Finally, recent progress in experimental use of tactile sensing in manipulation is discussed, and future directions for research in sensing and control are considered.

13.
This paper explores the combination of inertial sensor data with vision. Visual and inertial sensing are two sensory modalities that can be exploited to give robust solutions for image segmentation and recovery of 3D structure from images, increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In biological systems, the information provided by the vestibular system is fused with vision at a very early processing stage, playing a key role in the execution of visual movements such as gaze holding and tracking, while the visual cues aid spatial orientation and body equilibrium. In this paper, we present a framework for using inertial sensor data in vision systems and describe some results obtained. The unit-sphere projection camera model is used, providing a simple model for inertial data integration. Using the vertical reference provided by the inertial sensors, the image horizon line can be determined. Using just one vanishing point and the vertical, we can recover the camera's focal distance and provide an external bearing for the system's navigation frame of reference. Knowing the geometry of a stereo rig and its pose from the inertial sensors, the collineations of level planes can be recovered, providing enough restrictions to segment and reconstruct vertical features and leveled planar patches.
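Under a pinhole model, focal-length recovery from one vanishing point plus the vertical reduces to a single linear equation; the sketch below is my reconstruction of that step rather than the paper's exact formulation, and all numeric inputs are invented:

```python
def focal_from_vanishing_point(vp, principal_point, g_cam):
    """Focal length (px) from the vanishing point of level (horizontal)
    lines and the gravity direction g_cam in the camera frame.
    The back-projected ray ((u-cx)/f, (v-cy)/f, 1) of a level direction
    must be orthogonal to gravity, which is linear in 1/f."""
    (u, v), (cx, cy) = vp, principal_point
    gx, gy, gz = g_cam
    return -((u - cx) * gx + (v - cy) * gy) / gz

# Hypothetical vanishing point, principal point, and IMU gravity reading.
print(focal_from_vanishing_point((850.0, 490.0), (640.0, 480.0), (0.02, 0.98, -0.05)))
```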

14.
Advanced Robotics, 2013, 27(3): 275–291
In this paper, a visual and haptic human–machine interface is proposed for teleoperated nano-scale object interaction and manipulation. Design specifications for a bilateral scaled teleoperation system with slave and master robots, sensors, actuators, and control are discussed. The Phantom haptic device is utilized as the master manipulator, and a piezoresistive atomic force microscope probe is selected as the slave manipulator and as the topography and force sensor. Using the teleoperation control system, initial experiments on interacting with nano-scale surfaces are carried out. It is shown that fine structures can be felt successfully on the operator's finger, and that improved nano-scale interaction and manipulation using visual and haptic feedback can be achieved.
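The essence of bilateral scaled teleoperation is a pair of scale factors between the macro and nano worlds; a toy sketch (the factors are invented, not the paper's):

```python
POS_SCALE = 1e-6    # hypothetical: 1 mm at the master -> 1 nm at the AFM tip
FORCE_SCALE = 1e9   # hypothetical: 1 nN at the tip -> 1 N at the master handle

def slave_setpoint(master_position):
    """Scale master motion down to the nano world."""
    return master_position * POS_SCALE

def master_feedback(slave_force):
    """Scale nano-scale interaction forces up so the operator can feel them."""
    return slave_force * FORCE_SCALE
```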

15.
Advanced Robotics, 2013, 27(4): 461–482
In hand-eye systems for advanced robotic applications such as assembly, the degrees of freedom of the vision sensor should be increased and actively exploited to cope with unstable scene conditions. Particularly when a simple vision sensor is used, intelligent adaptation of the sensor is essential to compensate for its inability to adapt to a changing environment. This paper proposes a vision sensor setup planning system which operates on environmental models and generates plans for using the sensor and its illumination, assuming freedom of positioning for both. A typical vision task, in which the edges of an object are measured to determine its position and orientation, is assumed for the sensor setup planning. In this context, the system is able to generate plans for the camera and illumination positions, and to select the set of edges best suited for determining the object's position. The system operates for stationary or moving objects by evaluating scene conditions such as edge length, contrast, and relative angles based on a model of the object and the task environment. Automatic vision sensor setup planning functions, as shown in this paper, will play an important role not only in autonomous robotic systems, but also in teleoperation systems for assisting advanced tasks.
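Edge selection in such a planner reduces to scoring candidate edges on the evaluated scene conditions; a toy weighted-sum sketch (the weights and normalization are assumptions):

```python
W_LEN, W_CONTRAST, W_ANGLE = 0.4, 0.4, 0.2   # hypothetical weights

def edge_score(length_norm, contrast_norm, angle_norm):
    """Score a candidate edge for position measurement; all inputs are
    normalized to [0, 1], higher meaning better measurement conditions."""
    return W_LEN * length_norm + W_CONTRAST * contrast_norm + W_ANGLE * angle_norm
```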

16.
Like humans, robots that need semantic perception and accurate estimation of the environment can increase their knowledge through active interaction with objects. This paper proposes a novel method of 3D object modeling for a robot manipulator with an eye-in-hand laser range sensor. Since the robot can only perceive the environment from a limited viewpoint, it actively manipulates a target object and generates a complete model by accumulation and registration of partial views. Three registration algorithms are investigated and compared in experiments performed in cluttered environments with complex rigid objects made of multiple parts. A data structure based on a proximity graph, which encodes neighborhood relations in range scans, is also introduced to perform efficient range queries. The proposed method for 3D object modeling is applied to perform task-level manipulation: once a complete model is available, the object is segmented into its constituent parts and categorized. Object sub-parts that are relevant for the task and that afford a grasping action are identified and selected as candidate regions for grasp planning.
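As a hedged example of registering a new partial view against the accumulated model, here is Open3D's point-to-point ICP (not necessarily one of the three algorithms the paper compares; the file names are hypothetical):

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("partial_view.pcd")        # hypothetical files
target = o3d.io.read_point_cloud("accumulated_model.pcd")

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.01,   # 1 cm correspondence gate
    init=np.eye(4),                     # rough initial pose, e.g. from robot kinematics
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
source.transform(result.transformation)  # align and fuse into the model
```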

17.
Intelligent visual surveillance — A survey
Detection, tracking, and understanding of moving objects of interest in dynamic scenes have been active research areas in computer vision over the past decades. Intelligent visual surveillance (IVS) refers to an automated visual monitoring process that involves analysis and interpretation of object behaviors, as well as object detection and tracking, to understand the visual events of the scene. The main tasks of IVS include scene interpretation and wide-area surveillance control. Scene interpretation aims at detecting and tracking moving objects in an image sequence and understanding their behaviors. In the wide-area surveillance control task, multiple cameras or agents are controlled in a cooperative manner to monitor tagged objects in motion. This paper reviews recent advances and future research directions for these tasks. The article consists of two parts: the first part surveys image enhancement, moving object detection and tracking, and motion behavior understanding; the second part reviews wide-area surveillance techniques based on the fusion of multiple visual sensors, camera calibration, and cooperative camera systems.
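Scene interpretation starts with moving-object detection; a minimal background-subtraction sketch with OpenCV (the video path and area threshold are assumptions):

```python
import cv2

cap = cv2.VideoCapture("surveillance.mp4")            # hypothetical video
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                    # foreground mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    # `boxes` holds candidate moving objects for tracking and behavior analysis
cap.release()
```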

18.
Recent research in automated highway systems has ranged from low-level vision-based controllers to high-level route-guidance software. However, there is currently no system for tactical-level reasoning. Such a system should address tasks such as passing cars, making exits on time, and merging into a traffic stream. Many previous approaches have attempted to hand-construct large rule-based systems which capture the interactions between multiple input sensors, dynamic and potentially conflicting subgoals, and changing roadway conditions. However, these systems are extremely difficult to design due to the large number of rules, the manual tuning of parameters within the rules, and the complex interactions between the rules. Our approach to this intermediate-level planning is a system consisting of a collection of autonomous agents, each of which specializes in a particular aspect of tactical driving. Each agent examines a subset of the intelligent vehicle's sensors and independently recommends driving decisions based on its local assessment of the tactical situation. This distributed framework allows different reasoning agents to be implemented using different algorithms. When a collection of agents is used to solve a single task, it is vital to carefully consider the interactions between the agents. Since each reasoning agent contains several internal parameters, manually finding values for these parameters while accounting for the agents' possible interactions is a tedious and error-prone task. In our system, these parameters, and the system's overall dependence on each agent, are automatically tuned using a novel evolutionary optimization strategy, termed Population-Based Incremental Learning (PBIL). Our system, which employs multiple automatically trained agents, can competently drive a vehicle, both in terms of the user-defined evaluation metric and as measured by its behavior on several driving situations culled from real-life experience. In this article, we describe a method for multiple-agent integration which is applied to the automated highway system domain; it also generalizes to many complex robotics tasks where multiple interacting modules must be configured simultaneously without individual module feedback.
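PBIL itself is compact enough to sketch in full; this is the standard algorithm on a toy one-max fitness, not the driving system's actual parameterization:

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, iters=200, seed=0):
    """Population-Based Incremental Learning: keep a probability vector
    over bit values, sample a population from it, and pull the vector
    toward the best sample each generation."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                       # start unbiased
    for _ in range(iters):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        best = pop[np.argmax([fitness(x) for x in pop])]
        p = (1 - lr) * p + lr * best               # shift toward the winner
    return p

print(pbil(lambda x: x.sum(), n_bits=16).round(2))  # toy one-max problem
```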

19.
Generally, point-to-point control of a completely restrained (CR) parallel-wire-driven system requires a balancing internal force to prevent slackening of the wires, along with a feedback term based on some displacement sensor. This paper describes the internal force properties of CR systems and then shows that motion can converge to a desired position when the internal force that balances at that position is applied as a sensorless feedforward input. We then exploit this internal-force property for sensorless position control. The positioning method is applicable to low-cost manipulation that does not require high accuracy, and to emergency positioning when sensors malfunction.
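The balancing internal force lives in the null space of the wire structure matrix; a sketch for a toy three-wire planar system (the geometry and tension bound are invented):

```python
import numpy as np
from scipy.linalg import null_space

# Three wires pulling a planar point mass at 120-degree spacing (hypothetical).
# Wrench balance: A @ t = w, with every tension t_i kept positive.
A = np.array([[0.0, -0.866,  0.866],
              [1.0, -0.500, -0.500]])
w = np.array([0.3, 0.5])            # desired planar force at the end effector

t_p = np.linalg.pinv(A) @ w         # minimum-norm tensions (some may be slack)
n = null_space(A)[:, 0]             # internal-force direction; here ~ (1, 1, 1)
n = n if n[0] > 0 else -n           # orient it positively
t_min = 0.5                         # tension floor that prevents slack wires
alpha = max(np.max((t_min - t_p) / n), 0.0)
t = t_p + alpha * n                 # same wrench, all tensions >= t_min
print("tensions:", t.round(3), " wrench check:", (A @ t).round(3))
```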

20.
This article describes the design and control of a lightweight robot finger intended for tactile sensing research. The finger is a three-link planar chain with the joints actuated through cables by two motors. Kinematic coupling of the three joints provides two degrees of freedom for fingertip manipulation, and a curling action of the finger for enclosing an object. Hall-effect sensors in each joint provide position feedback, and strain-gage sensors on each cable provide tension information. To minimize weight and power consumption, a high-speed, low-torque motor together with a 172:1 speed reducer is used as the actuator. A force control loop around the motor/speed-reducer system reduces the effect of the friction inherent in the speed reducer. Flat mounting plates are provided on each link for special-purpose grasping surfaces and sensors.
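A sketch of the force loop around the motor/speed-reducer (a plain PI law with hypothetical gains; the article does not specify the control structure):

```python
KP, KI, DT = 2.0, 15.0, 0.001   # hypothetical gains and control period (s)

class TensionController:
    """PI loop closed on the strain-gage cable tension, so reducer friction
    becomes a disturbance the integrator rejects rather than a force error
    felt at the fingertip."""
    def __init__(self):
        self.integral = 0.0

    def update(self, tension_desired, tension_measured):
        error = tension_desired - tension_measured
        self.integral += error * DT
        return KP * error + KI * self.integral   # motor torque command
```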
