  Subscription full text   7 papers
  Free   0 papers
Radio engineering   2 papers
Automation technology   5 papers
  2020   1 paper
  2015   1 paper
  2011   1 paper
  2010   2 papers
  2009   1 paper
  2004   1 paper
Sort order: 7 results found, search time 0 ms
1.
2.
In this work, we describe and evaluate a grasping mechanism that does not make use of any specific prior knowledge about objects. The mechanism makes use of second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information in terms of a co-planarity constraint as well as appearance-based information in terms of co-occurrence of colour properties. We show that our algorithm, although making use of such rather simple constraints, is able to grasp objects with a reasonable success rate in rather complex environments (i.e., cluttered scenes with multiple objects). Moreover, we have embedded the algorithm within a cognitive system that allows for autonomous exploration and learning in different contexts. First, the system is able to perform long action sequences in which, although the grasping attempts are not always successful, it can recover from mistakes and, more importantly, is able to evaluate the success of the grasps autonomously by haptic feedback (i.e., by a force-torque sensor at the wrist and proprioceptive information about the distance of the gripper after a grasping attempt). Such labelled data is then used to improve the initially hard-wired algorithm by learning. Moreover, the grasping behaviour has been used in a cognitive system to trigger higher-level processes such as object learning and learning of object-specific grasping.
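The two relations mentioned in the abstract — co-planarity of oriented 3D features and co-occurrence of colour — can be sketched as simple scoring functions. This is an illustrative parametrisation only (feature descriptors, distance metrics, and thresholds are assumptions, not the paper's actual definitions):

```python
import numpy as np

def coplanarity(p1, n1, p2, n2):
    """Score in [0, 1]: 1 when two oriented features (position, unit normal)
    lie in a common plane, i.e. both normals are perpendicular to the
    vector connecting the feature positions."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-9:
        return 1.0  # coincident positions are trivially coplanar
    d = d / dist
    return 1.0 - max(abs(np.dot(n1, d)), abs(np.dot(n2, d)))

def cocolority(c1, c2):
    """Score in [0, 1]: colour similarity of two RGB values in [0, 1]^3,
    1 meaning identical colours."""
    diff = np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float))
    return 1.0 - diff / np.sqrt(3.0)
```

A grasp hypothesis would then be generated only for feature pairs whose combined score exceeds some threshold.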
3.
In this work, we make use of 3D contours and relations between them (namely, coplanarity, cocolority, distance, and angle) for four different applications in the area of computer vision and vision-based robotics. Our multi-modal contour representation covers both geometric and appearance information. We show the potential of reasoning with global entities in the context of visual scene analysis for driver assistance, depth prediction, robotic grasping, and grasp learning. We argue that such 3D global reasoning processes complement widely used 2D local approaches such as bag-of-features, since 3D relations are invariant under camera transformations and 3D information can be directly linked to actions. We therefore stress the necessity of including both global and local features with different spatial dimensions within a representation. We also discuss the importance of an efficient use of the uncertainty associated with the features and relations, and of their applicability in a given context.
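The four pairwise contour relations named above can be sketched as follows, assuming a hypothetical contour descriptor (centroid, unit direction, mean RGB colour); the paper's actual representation is richer than this:

```python
import numpy as np

def contour_relations(c1, c2):
    """Second-order relations between two 3D contour descriptors.
    Each descriptor is a (centroid, unit direction, RGB colour) triple —
    an assumed parametrisation for illustration only."""
    (p1, t1, col1), (p2, t2, col2) = c1, c2
    d = p2 - p1
    dist = float(np.linalg.norm(d))
    # Angle between the (undirected) contour orientations, folded into [0, pi/2].
    angle = float(np.arccos(np.clip(abs(np.dot(t1, t2)), 0.0, 1.0)))
    # Cocolority: colour similarity, 1 = identical colours.
    coc = 1.0 - float(np.linalg.norm(np.asarray(col1) - np.asarray(col2))) / np.sqrt(3.0)
    # Coplanarity: the connecting vector lies in the plane spanned by the
    # two directions (parallel lines are always coplanar).
    n = np.cross(t1, t2)
    if np.linalg.norm(n) < 1e-9 or dist < 1e-9:
        cop = 1.0
    else:
        cop = 1.0 - abs(float(np.dot(n / np.linalg.norm(n), d / dist)))
    return {"distance": dist, "angle": angle, "cocolority": coc, "coplanarity": cop}
```

Because these quantities are defined on 3D entities, they are unchanged under a rigid camera transformation, which is the invariance argument made in the abstract.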
4.
5.
The goal of this review is to discuss different strategies employed by the visual system to limit data flow and to focus data processing. These strategies can be hard-wired, like the eccentricity-dependent visual resolution, or they can be dynamically changing, like mechanisms of visual attention. We ask to what degree such strategies are also useful in a computer vision context. Specifically, we discuss how to adapt them to technical systems whose computational substrate is vastly different from that of the brain. It will become clear that most algorithmic principles employed by natural visual systems need to be reformulated to better fit modern computer architectures. In addition, we try to show that it is possible to employ multiple strategies in parallel to arrive at a flexible and robust computer vision system based on recurrent feedback loops and using information derived from the statistics of natural images.
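Eccentricity-dependent resolution, the hard-wired strategy cited above, can be mimicked in software by processing an image at progressively coarser resolution away from a fixation point. The sketch below is a toy illustration (the ring count and the wrap-around box blur are arbitrary choices, not a method from the review):

```python
import numpy as np

def foveate(img, fovea, levels=4):
    """Eccentricity-dependent resolution: keep full detail at the fixation
    point and blur increasingly with distance from it, mimicking the
    acuity fall-off of the retina."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - fovea[0], xs - fovea[1])
    # Quantise eccentricity into ring indices 0 .. levels-1.
    ring = (ecc / ecc.max() * (levels - 1)).astype(int)
    out = np.empty_like(img, dtype=float)
    blurred = img.astype(float)
    for k in range(levels):
        out[ring == k] = blurred[ring == k]
        # Simple 5-point box blur (with wrap-around) before the next,
        # more peripheral, ring is filled in.
        blurred = (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
                   + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)
                   + blurred) / 5.0
    return out
```

The payoff of such a scheme is data reduction: peripheral regions can be stored or transmitted at lower fidelity while the fovea retains full resolution.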
6.
International Journal of Computer Vision - The use of human-level semantic information to aid robotic tasks has recently become an important area for both Computer Vision and Robotics. This has...
7.
In this paper we present a framework for accumulating on-line a model of a moving object (e.g., one manipulated by a robot). The proposed scheme is based on Bayesian filtering of local features, jointly filtering position, orientation, and appearance information. The work presented here is novel in two aspects: first, we use an estimation mechanism that iteratively updates not only geometric information but also appearance information. Second, we propose a probabilistic version of the classical n-scan criterion that allows us to select which features are preserved and which are discarded, while making use of the available uncertainty model. The accumulated representations have been used in three different contexts: pose estimation, robotic grasping, and a driver-assistance scenario.
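The keep/discard decision behind a probabilistic n-scan criterion can be sketched with a binary Bayes filter over feature existence. This is a simplified stand-in, not the paper's formulation: the detection and clutter probabilities, the log-odds form, and the thresholds are all assumptions:

```python
from math import exp, log

class FeatureTrack:
    """Binary Bayes filter on whether an accumulated local feature is real.
    p_detect: probability a real feature is re-observed in a scan;
    p_clutter: probability a spurious feature is observed (both assumed)."""

    def __init__(self, p_detect=0.8, p_clutter=0.2):
        self.logit = 0.0  # log-odds that the feature exists (prior 0.5)
        self.l_hit = log(p_detect / p_clutter)
        self.l_miss = log((1 - p_detect) / (1 - p_clutter))
        self.age = 0      # number of scans this feature has been tracked

    def update(self, observed):
        # Bayesian log-odds update after one scan.
        self.logit += self.l_hit if observed else self.l_miss
        self.age += 1

    def prob(self):
        return 1.0 / (1.0 + exp(-self.logit))

    def keep(self, n_scan=5, threshold=0.5):
        # Probabilistic n-scan: after n_scan updates, discard features
        # whose existence probability has fallen below the threshold.
        return self.age < n_scan or self.prob() >= threshold
```

In a full system the same per-feature state would also carry position, orientation, and appearance estimates that are updated alongside the existence probability.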

Copyright©北京勤云科技发展有限公司  京ICP备09084417号