Similar Articles
20 similar articles found
1.
Active vision for sociable robots
Ballard (1991) described the implications of having a visual system that can actively position its camera coordinates in response to physical stimuli. In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and additional opportunities for processing economy. In this paper, we describe an integrated visual-motor system, implemented on a humanoid robot, that negotiates the robot's physical constraints, the perceptual needs of the robot's behavioral and motivational systems, and the social implications of its motor acts.

2.
《Pattern Recognition Letters》1999,20(11-13):1423-1430
A scene registration method based on a model of dynamical receptive field organization of biological vision is presented. In this model, the receptive fields of simple cells have differently oriented Gabor-type receptive field functions. Collectively, the simple cells in a hypercolumn extract an HC-vector from local sensory input serving as the place token for the scene registration. By incorporating a data driven dynamical tuning mechanism of simple cells, the place tokens are invariant to distortions. The initial experiments show that this biologically motivated method is accurate and robust.
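The hypercolumn response described above can be sketched with a small bank of oriented Gabor filters. This is a minimal illustration, not the paper's model: it assumes an isotropic Gaussian envelope and eight orientations, and omits the data-driven dynamical tuning entirely.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Oriented Gabor kernel: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the preferred orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def hc_vector(patch, n_orientations=8):
    """Respond to one local patch with the whole filter bank, mimicking
    the differently oriented simple cells of a single hypercolumn."""
    size = patch.shape[0]
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kern = gabor_kernel(size, wavelength=size / 2.0,
                            theta=theta, sigma=size / 4.0)
        responses.append(float(np.sum(patch * kern)))
    v = np.array(responses)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# A vertical step edge produces an orientation-selective response pattern.
patch = np.zeros((15, 15))
patch[:, 8:] = 1.0
v = hc_vector(patch)
```

The normalized 8-vector plays the role of the place token; a real system would compare HC-vectors across frames to register the scene.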

3.
Active vision
We investigate several basic problems in vision under the assumption that the observer is active. An observer is called active when engaged in some kind of activity whose purpose is to control the geometric parameters of the sensory apparatus. The purpose of the activity is to manipulate the constraints underlying the observed phenomena in order to improve the quality of the perceptual results. A monocular observer that moves with known or unknown motion and a binocular observer that can rotate its eyes and track environmental objects are two examples of what we call an active observer. We prove that an active observer can solve basic vision problems much more efficiently than a passive one. Problems that are ill-posed and nonlinear for a passive observer become well-posed and linear for an active observer. In particular, the problems of shape from shading and depth computation, shape from contour, shape from texture, and structure from motion are shown to be much easier for an active observer than for a passive one. It should be emphasized that correspondence is not used in our approach; that is, active vision is not correspondence of features from multiple viewpoints. Finally, active vision here does not mean active sensing; this paper introduces a general methodology, a general framework in which we believe low-level vision problems should be addressed.

4.
Mobile robotic devices hold great promise for a variety of applications in industry. A key step in the design of a mobile robot is to determine the navigation method for mobility control. The purpose of this paper is to describe a new algorithm for omnidirectional vision navigation. A prototype omnidirectional vision system and the implementation of the navigation techniques using this sensor and an advanced automatic image processor are described. The significance of this work lies in the development of a novel approach, dynamic omnidirectional vision, for mobile robots and autonomous guided vehicles.

5.
This contribution introduces MOBSY, a fully integrated, autonomous mobile service robot system. It acts as an automatic dialogue-based receptionist for visitors to our institute. MOBSY incorporates many techniques from different research areas into one working stand-alone system, ranging from computer vision through speech understanding to classical robotics. Along with the two main aspects of vision and speech, we also focus on the integration aspect, both on the methodological and on the technical level. We describe the task and the techniques involved. Finally, we discuss the experiences that we gained with MOBSY during a live performance at our institute.

6.
Active Markov localization for mobile robots
Localization is the problem of determining the position of a mobile robot from sensor data. Most existing localization approaches are passive, i.e., they do not exploit the opportunity to control the robot's effectors during localization. This paper proposes an active localization approach. The approach is based on Markov localization and provides rational criteria for (1) setting the robot's motion direction (exploration), and (2) determining the pointing direction of the sensors so as to most efficiently localize the robot. Furthermore, it is able to deal with noisy sensors and approximate world models. The appropriateness of our approach is demonstrated empirically using a mobile robot in a structured office environment.
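The Markov localization update underlying the approach can be sketched on a one-dimensional corridor. The active part (choosing motions and sensor pointing directions so as to localize most efficiently) is omitted; the door/wall world, the probabilities, and the cyclic wrap-around of the motion model are illustrative assumptions.

```python
import numpy as np

def motion_update(belief, shift, p_correct=0.8):
    """Motion model: the robot moves `shift` cells, but with some
    probability fails and stays put (corridor treated as cyclic)."""
    moved = np.roll(belief, shift)
    return p_correct * moved + (1.0 - p_correct) * belief

def sensor_update(belief, world, measurement, p_hit=0.9, p_miss=0.1):
    """Weight each cell by the likelihood of the observed reading,
    then renormalize to a proper probability distribution."""
    likelihood = np.where(world == measurement, p_hit, p_miss)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# A corridor with doors (1) and walls (0); the robot first senses a
# door, moves one cell, then senses a wall.
world = np.array([1, 0, 0, 1, 0])
belief = np.full(5, 0.2)                      # uniform prior
belief = sensor_update(belief, world, measurement=1)
belief = motion_update(belief, shift=1)
belief = sensor_update(belief, world, measurement=0)
```

After the two readings the belief concentrates on cells consistent with "door, then wall", which is exactly the state estimate an active strategy would try to sharpen fastest.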

7.
Adaptive evolutionary planner/navigator for mobile robots
Based on evolutionary computation (EC) concepts, we developed an adaptive evolutionary planner/navigator (EP/N) as a novel approach to path planning and navigation. The EP/N is characterized by generality, flexibility, and adaptability. It unifies off-line planning and online planning/navigation processes in the same evolutionary algorithm which 1) accommodates different optimization criteria and changes in these criteria, 2) incorporates various types of problem-specific domain knowledge, and 3) enables good tradeoffs among near-optimality of paths, high planning efficiency, and effective handling of unknown obstacles. More importantly, the EP/N can self-tune its performance for different task environments and changes in such environments, mostly through adapting probabilities of its operators and adjusting paths constantly, even during a robot's motion toward the goal.
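A heavily reduced sketch of evolutionary path planning in this spirit: the actual EP/N uses several problem-specific operators and adapts their probabilities online, whereas this toy version keeps only truncation selection and a single jitter mutation, with a made-up length-plus-penalty cost and one circular obstacle.

```python
import random

def path_cost(path, obstacle, radius):
    """Total length plus a heavy penalty for waypoints inside the obstacle."""
    length = sum(((x2 - x1)**2 + (y2 - y1)**2) ** 0.5
                 for (x1, y1), (x2, y2) in zip(path, path[1:]))
    penalty = sum(100.0 for (x, y) in path
                  if (x - obstacle[0])**2 + (y - obstacle[1])**2 < radius**2)
    return length + penalty

def mutate(path, step=0.5):
    """Jitter one interior waypoint; start and goal stay fixed."""
    new = list(path)
    i = random.randrange(1, len(path) - 1)
    x, y = new[i]
    new[i] = (x + random.uniform(-step, step), y + random.uniform(-step, step))
    return new

def evolve(start, goal, obstacle, radius, n_way=4, pop=30, gens=200, seed=1):
    random.seed(seed)
    def rand_path():
        return [start] + [(random.uniform(0, 10), random.uniform(0, 10))
                          for _ in range(n_way)] + [goal]
    population = [rand_path() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: path_cost(p, obstacle, radius))
        survivors = population[:pop // 2]          # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop - len(survivors))]
    return min(population, key=lambda p: path_cost(p, obstacle, radius))

best = evolve(start=(0.0, 0.0), goal=(10.0, 10.0),
              obstacle=(5.0, 5.0), radius=1.0)
cost = path_cost(best, (5.0, 5.0), 1.0)
```

Because paths are evaluated anew each generation, the same loop keeps working if the cost function or obstacle set changes mid-run, which is the property the EP/N exploits for online navigation.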

8.
Receptive fields (RF) in the visual cortex can change their size depending on the state of the individual. This reflects a changing visual resolution according to different demands on information processing during drowsiness. So far, however, the possible mechanisms that underlie these size changes have not been tested rigorously. Only qualitatively has it been suggested that state-dependent lateral geniculate nucleus (LGN) firing patterns (burst versus tonic firing) are mainly responsible for the observed cortical receptive field restructuring. Here, we employ a neural field approach to describe the changes of cortical RF properties analytically. Expressions to describe the spatiotemporal receptive fields are given for pure feedforward networks. The model predicts that visual latencies increase nonlinearly with the distance of the stimulus location from the RF center. RF restructuring effects are faithfully reproduced. Despite the changing RF sizes, the model demonstrates that the width of the spatial membrane potential profile (as measured by the variance sigma of a Gaussian) remains constant in cortex. In contrast, it is shown for recurrent networks that both the RF width and the width of the membrane potential profile generically depend on time and can even increase if lateral cortical excitatory connections extend further than fibers from LGN to cortex. In order to differentiate between a feedforward and a recurrent mechanism causing the experimental RF changes, we fitted the data to the analytically derived point-spread functions. Results of the fits provide estimates for model parameters consistent with the literature data and support the hypothesis that the observed RF sharpening is indeed mainly driven by input from LGN, not by recurrent intracortical connections.

9.
The optical guidance of robots spans the research topics of robotics, computer vision, communication and real-time control. The proposed method aims to improve the accuracy of guidance along a desired route in an environment that is unknown to the robot. The key idea is to indicate the numerical coordinates of target positions by means of projecting a laser light onto the ground. In contrast with other guidance methods, which communicate the target position numerically, using optical commands avoids the need to maintain the coordinate transformation between the robot’s system and that of the environmental model (“world” reference coordinates). The image processing and communication ensure that the robot accurately follows the route indicated by laser beacons, and self-localization becomes less relevant for guidance. The experimental results have proved the effectiveness of this method.

10.
《Advanced Robotics》2013,27(3):281-295
In the routine inspection of industrial or other areas, teams of robots with various sensors could operate together to great effect, but require reliable, accurate and flexible localization capabilities to be able to move around safely. We demonstrate accurate localization for an inspection team consisting of a robot with stereo active vision and its blind companion with an active lighting system, and show that in this case a single sensor can be used for measuring the position of known or unknown scene features, measuring the relative location of the two robots and actually carrying out an inspection task.

11.
This paper presents a vision-based approach for tracking people on a mobile robot using thermal images. The approach combines a particle filter with two alternative measurement models that are suitable for real-time tracking. With this approach a person can be detected independently from current light conditions and in situations where no skin colour is visible. In addition, the paper presents a comprehensive, quantitative evaluation of the different methods on a mobile robot in an office environment, for both single and multiple persons. The results show that the measurement model that was learned from local grey-scale features could improve on the performance of the elliptic contour model, and that both models could be combined to further improve performance with minimal extra computational cost.
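The bootstrap particle filter at the core of such a tracker can be sketched in one dimension. The measurement models learned from thermal images in the paper are replaced here by an assumed Gaussian likelihood around a scalar detection; corridor length, noise levels, and particle count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, measurement,
                         motion_std=0.5, meas_std=1.0):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: Gaussian measurement likelihood (stand-in for the paper's
    # thermal-image measurement models).
    weights = np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.uniform(0.0, 20.0, size=500)   # uniform prior over corridor
for z in [4.0, 4.5, 5.0, 5.5]:                 # noisy detections of a walker
    particles = particle_filter_step(particles, z)
estimate = float(particles.mean())
```

The posterior mean tracks the sequence of detections; combining two likelihoods, as the paper does, would simply multiply the weights from both models before normalizing.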

12.
In recent years, carbon fibre reinforced plastics (CFRP) have gained enormous popularity in aircraft applications. Since the material is very expensive, costs have to be saved through automated production. For the manufacturing of large structures it is often advisable to use cooperating robots. A major obstacle to the economic use of complex components, however, is the programming of the robot paths: manual teach-in is not a feasible solution, and the programming effort therefore often determines whether automated production is profitable. In this work, a system is presented which automatically calculates robot paths using evolutionary algorithms. The proposed system drastically reduces commissioning time, and changes to the process can be made with little effort by modifying the component data.

13.
To improve the obstacle detection capability of indoor mobile robots, a detection scheme based on monocular vision is proposed. The scheme first converts captured images into the hue-saturation-intensity (HSI) colour space. Then, for segmenting targets from the background in indoor images, a small-target threshold selection method is proposed, which improves the accuracy of image segmentation in this specific environment. Finally, target scene matching and target projection matching are combined to compute the changes in the segmented target's pixels and in its projection, thereby discriminating whether the target is an obstacle with height or merely a pattern on the ground. Experimental results demonstrate the effectiveness and feasibility of the scheme, which can provide good navigation information for small indoor mobile robots.
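A sketch of the first two steps of such a scheme, assuming the standard RGB-to-HSI conversion and a fixed intensity threshold; the paper's small-target threshold selection method and the matching-based height test are not reproduced here.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to H, S, I components."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    minimum = np.minimum(np.minimum(r, g), b)
    s = 1.0 - minimum / np.maximum(i, 1e-9)
    # Hue from the standard geometric formula, guarded against division
    # by zero and arccos domain errors on grey pixels.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)
    return h, s, i

def segment_dark_target(intensity, threshold):
    """Binary mask of pixels darker than the threshold (candidate obstacle)."""
    return intensity < threshold

# A bright floor with one small dark object (illustrative scene).
img = np.full((8, 8, 3), 0.9)
img[3:5, 3:5] = 0.1
h, s, i = rgb_to_hsi(img)
mask = segment_dark_target(i, threshold=0.5)
```

In the full scheme, the mask's pixel and projection changes across two viewpoints would then decide whether the segmented region has physical height.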

14.
Visual navigation is a challenging issue in automated robot control. In many robot applications, like object manipulation in hazardous environments or autonomous locomotion, it is necessary to automatically detect and avoid obstacles while planning a safe trajectory. In this context the detection of corridors of free space along the robot trajectory is a very important capability which requires nontrivial visual processing. In most cases it is possible to take advantage of the active control of the cameras. In this paper we propose a cooperative schema in which motion and stereo vision are used to infer scene structure and determine free space areas. Binocular disparity, computed on several stereo images over time, is combined with optical flow from the same sequence to obtain a relative-depth map of the scene. Both the time to impact and depth scaled by the distance of the camera from the fixation point in space are considered as good, relative measurements which are based on the viewer, but centered on the environment. The need for calibrated parameters is considerably reduced by using an active control strategy. The cameras track a point in space independently of the robot motion, and the full rotation of the head, which includes the unknown robot motion, is derived from binocular image data. The feasibility of the approach in real robotic applications is demonstrated by several experiments performed on real image data acquired from an autonomous vehicle and a prototype camera head.
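The time-to-impact measurement mentioned above can be illustrated in its simplest case: pure approach toward a frontal surface, where optical flow is radial and equals image position divided by the time to contact. The synthetic flow field below is an assumption made for the demonstration, not data from the paper.

```python
import numpy as np

def time_to_contact(flow_x, flow_y, xs, ys):
    """Estimate time to impact from a radial flow field: for approach
    toward a frontal surface, flow = position / tau, so 1/tau is the
    least-squares slope relating position to flow."""
    a = np.concatenate([xs.ravel(), ys.ravel()])
    b = np.concatenate([flow_x.ravel(), flow_y.ravel()])
    inv_tau = (a @ b) / (a @ a)
    return 1.0 / inv_tau

# Synthetic radial flow for an impact 2.5 time units away.
xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
tau_true = 2.5
ttc = time_to_contact(xs / tau_true, ys / tau_true, xs, ys)
```

Because the estimate uses only ratios of image measurements, it needs no metric calibration, which is the property the paper exploits to reduce the need for calibrated parameters.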

15.
Self-localization is the basis to realize autonomous ability such as motion planning and decision-making for mobile robots, and omnidirectional vision is one of the most important sensors for RoboCup Middle Size League (MSL) soccer robots. According to the characteristic that RoboCup competition is highly dynamic and the deficiency of the current self-localization methods, a robust and real-time self-localization algorithm based on omnidirectional vision is proposed for MSL soccer robots. Monte Carlo localization and matching optimization localization, two most popular approaches used in MSL, are combined in our algorithm. The advantages of these two approaches are maintained, while the disadvantages are avoided. A camera parameters auto-adjusting method based on image entropy is also integrated to adapt the output of omnidirectional vision to dynamic lighting conditions. The experimental results show that global localization can be realized effectively while highly accurate localization is achieved in real-time, and robot self-localization is robust to the highly dynamic environment with occlusions and changing lighting conditions.
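The image-entropy criterion for camera parameter auto-adjustment can be sketched as follows. A multiplicative gain stands in for the real camera parameters, and the under-exposed synthetic scene is an assumption for the demonstration.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy of the grey-level histogram: a well-exposed image
    spreads its histogram over many levels and scores high, while an
    under- or over-exposed one concentrates mass and scores low."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_gain(image, gains):
    """Pick the simulated gain whose image maximizes entropy."""
    return max(gains, key=lambda g: image_entropy(np.clip(image * g, 0.0, 1.0)))

# An under-exposed scene: grey values only reach 0.5 at unit gain.
scene = np.linspace(0.0, 0.5, 10000)
gain = best_gain(scene, gains=[0.5, 1.0, 2.0, 4.0])
```

Doubling the gain fills the full grey range and maximizes entropy, whereas quadrupling it saturates half the pixels and the entropy drops again, so the criterion self-limits against over-exposure.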

16.
To address the slow processing of visual images and the poor real-time performance and accuracy of feature point extraction and matching during mobile robot navigation, a new method combining the SIFT feature extraction algorithm with a KD-tree search-and-match algorithm is proposed. Candidate feature points are repeatedly blurred so that they lie along the grey-level contour edges of the difference-of-Gaussian images, and the SIFT extraction algorithm then finds the extremal points that satisfy the limit constraints. KD-tree nearest-neighbour search and matching subsequently matches the processed feature points against the original image, quickly identifying the correctly matched points. Experiments show that the method is robust in environments with frequently changing illumination and viewing angles, and can meet the real-time and accuracy requirements of video image processing during autonomous mobile robot navigation.
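The KD-tree matching stage can be sketched with SciPy. The SIFT extraction itself is not reproduced, so random 128-dimensional vectors stand in for descriptors, and Lowe's ratio test (an assumption, commonly paired with SIFT matching) rejects ambiguous correspondences.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching of feature descriptors via a KD-tree,
    keeping only matches whose best distance clearly beats the second
    best (the ratio test)."""
    tree = cKDTree(desc_b)
    dists, idx = tree.query(desc_a, k=2)      # two nearest neighbours each
    matches = []
    for i, (d, j) in enumerate(zip(dists, idx)):
        if d[0] < ratio * d[1]:               # unambiguous nearest neighbour
            matches.append((i, int(j[0])))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(50, 128))           # SIFT-like 128-D descriptors
# Query descriptors: three of the database vectors, slightly perturbed.
desc_a = desc_b[[3, 10, 20]] + rng.normal(scale=0.01, size=(3, 128))
matches = match_descriptors(desc_a, desc_b)
```

Querying the tree costs roughly logarithmic time per descriptor instead of the linear scan of brute-force matching, which is where the real-time gain comes from.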

17.
Self-localization of mobile robots based on artificial landmarks and stereo vision
To address the self-localization problem of indoor mobile robots, a method based on artificial landmarks and binocular vision is proposed. First, an extensible coloured artificial landmark is designed and an encoding scheme for it is given. Landmark detection and recognition are then achieved using colour-space transformation, the invariance of the cross-ratio of collinear points, and adaptive windowing. Finally, building on an analysis of the binocular stereo vision model, a landmark-based stereo localization model is established to localize the mobile robot accurately. Experimental results show that the landmarks are robust to illumination and to the acquisition position of the vision sensor, and that the localization accuracy meets the requirements of indoor mobile robots.

18.
Stanley GB 《Neural computation》2002,14(12):2925-2946
The encoding properties of the visual pathway are under constant control from mechanisms of adaptation and systems-level plasticity. In all but the most artificial experimental conditions, these mechanisms serve to continuously modulate the spatial and temporal receptive field (RF) dynamics. Conventional reverse-correlation techniques designed to capture spatiotemporal RF properties assume invariant stimulus-response relationships over experimental trials and are thus limited in their applicability to more natural experimental conditions. Presented here is an approach to tracking time-varying encoding dynamics in the early visual pathway based on adaptive estimation of the spatiotemporal RF in the time domain. Simulations and experimental data from the lateral geniculate nucleus reveal that subtle features of encoding properties can be captured by the adaptive approach that would otherwise be undetected. Capturing the role of dynamically varying encoding mechanisms is vital to our understanding of vision in the natural setting, where there is no true steady state.
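A sketch of time-domain adaptive RF estimation, using a plain least-mean-squares (LMS) update as a stand-in for the paper's adaptive estimator; the three-tap RF, the slow gain drift, and the noiseless responses are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_track(stimulus, response, n_taps, mu=0.05):
    """Track a possibly time-varying linear receptive field in the time
    domain: after each sample, nudge the weights along the gradient of
    the instantaneous squared prediction error."""
    w = np.zeros(n_taps)
    estimates = []
    for t in range(n_taps, len(stimulus)):
        x = stimulus[t - n_taps:t][::-1]      # most recent sample first
        err = response[t] - w @ x
        w = w + mu * err * x                  # adaptive gradient step
        estimates.append(w.copy())
    return np.array(estimates)

# A synthetic RF whose gain slowly doubles over the trial, violating the
# trial-invariance assumed by conventional reverse correlation.
true_rf = np.array([1.0, -0.5, 0.25])
stim = rng.normal(size=4000)
gain = np.linspace(1.0, 2.0, 4000)
resp = np.array([gain[t] * (true_rf @ stim[t - 3:t][::-1]) if t >= 3 else 0.0
                 for t in range(4000)])
w_hist = lms_track(stim, resp, n_taps=3)
```

A single reverse-correlation estimate over the whole trial would average the drifting gain away, while the adaptive trace in `w_hist` follows it sample by sample.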

19.
Liu Yongge, Yu Jianzhuang, Han Yahong 《Multimedia Tools and Applications》2018,77(17):22159-22171
Deep convolutional neural networks trained with strong pixel-level supervision have recently significantly boosted the performance in semantic image...

20.
Most robots are designed to operate in environments that are either highly constrained (as is the case in an assembly line) or extremely hazardous (such as the surface of Mars). Machine learning has been an effective tool in both of these environments by augmenting the flexibility and reliability of robotic systems, but this is often a very difficult problem because the complexity of learning in the real world introduces very high dimensional state spaces and applies severe penalties for mistakes. Human children are raised in environments that are just as complex (or even more so) than those typically studied in robot learning scenarios. However, the presence of parents and other caregivers radically changes the type of learning that is possible. Consciously and unconsciously, adults tailor their actions and the environment to the child. They draw attention to important aspects of a task, help in identifying the cause of errors, and generally tailor the task to the child's capabilities. Our research group builds robots that learn in the same type of supportive environment that human children have and develop skills incrementally through their interactions. Our robots interact socially with human adults using the same natural conventions that a human child would use. Our work sits at the intersection of the fields of social robotics (Fong et al., 2003; Breazeal and Scassellati, 2002) and autonomous mental development (Weng et al., 2000). Together, these two fields offer the vision of a machine that can learn incrementally, directly from humans, in the same ways that humans learn from each other. In this article, we introduce some of the challenges, goals, and applications of this research.

